While testing these out, I noticed this behavioral difference.
- With a traditional pod, if the workspace tries to allocate memory beyond its limit, the pod gets OOM killed, freeing the pod's resources.
- With an envbox pod, if not using CODER_MEMORY, processes running in the inner pod aren't restricted by the pod's limits; the pod is OOM killed if it tries to allocate more memory than is available on the node.
- If using CODER_MEMORY, excessive allocation kills the inner pod, but /envbox docker keeps running, so the pod keeps running and reserving resources even though the workload we actually cared about is dead.
The template I'm using is based on https://github.com/coder/coder/blob/main/examples/templates/kubernetes-envbox/main.tf ; the relevant portion of the pod definition looks like this:
container {
  name    = "dev"
  image   = local.envbox_image
  command = ["/envbox", "docker"]

  env {
    name = "CODER_MEMORY"
    value_from {
      resource_field_ref {
        resource = "limits.memory"
      }
    }
  }
}
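For context, CODER_MEMORY is populated from the container's memory limit via the downward API, so the same container block also carries a resources block along these lines (the values below are placeholders, not the exact ones from my template):

resources {
  requests = {
    "cpu"    = "250m"
    "memory" = "512Mi"
  }
  limits = {
    # limits.memory here is what the resource_field_ref above resolves to
    "cpu"    = "2"
    "memory" = "4Gi"
  }
}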
Possibly there is something I ought to tweak so that /envbox terminates when the inner pod does? I'm not sure if I've missed something there.
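One thing I've considered (untested, and the probe command is only a guess at how to check the inner workload) is adding a liveness probe to the envbox container so Kubernetes restarts the pod once the inner Docker side is no longer running anything, roughly:

# Hypothetical sketch: fail the probe when the inner dockerd reports
# no running containers, so the outer pod gets restarted instead of
# idling with the workload dead.
liveness_probe {
  exec {
    # Assumes the docker CLI inside the envbox container can reach the
    # inner daemon's socket; the command may need adjusting.
    command = ["sh", "-c", "docker ps -q | grep -q ."]
  }
  initial_delay_seconds = 120
  period_seconds        = 30
  failure_threshold     = 3
}

I'm not sure whether that's the intended way to handle this, or whether envbox itself should exit when the inner workload dies.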