Use Docker in Pods#
SkyPilot clusters running on Kubernetes are backed by one or more Pods.
Workflows that require container operations inside those Pods — such as
building and pushing images or launching nested containers — need an in-Pod
container runtime. SkyPilot provides a built-in enable_docker config that
automatically injects a sidecar container with the appropriate runtime.
This page describes two supported approaches and helps you choose the one that fits your security posture and cluster capabilities.
Approaches#
| | Docker-in-Docker (DinD) | Rootless BuildKit |
|---|---|---|
| Build & push images | Yes | Yes |
| Run containers (`docker run`) | Yes | No |
| Requires `privileged` | Yes | No |
| Requires Docker on K8s node | No (sidecar brings its own runtime) | No |
| Security risk | Higher (container escape surface) | Lower |
Tip

Use `enable_docker: BUILD` if you only need image build/push. Use `enable_docker: true` if you need full `docker run` capabilities.
Option 1: Full Docker access (privileged permission required)#
Set `enable_docker: true` to make the full `docker` CLI available inside
the Pod: you can build images, push them, and run containers
(`docker run`).
Cluster prerequisite: The cluster must allow Pods with `privileged: true`.
Note
GPU passthrough to nested containers (`docker run --gpus`) is not
currently supported. To test a GPU image, build and push it first,
then launch it directly with `sky launch`.
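For example, a pushed GPU image can be launched as its own task via a `docker:` image ID. A minimal sketch (the registry name, accelerator type, and tag are illustrative, reusing the `myregistry/myimage:latest` image from the build example below):

```yaml
# gpu-image-test.yaml (hypothetical): run the pushed image directly
resources:
  accelerators: A100:1
  image_id: docker:myregistry/myimage:latest
run: |
  nvidia-smi
```

Launch it with `sky launch -c gpu-test gpu-image-test.yaml`.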
Configuration#
Add the following to the task YAML's `config` field:

```yaml
config:
  kubernetes:
    enable_docker: true
```

Or apply it globally to all SkyPilot clusters in the SkyPilot config:

```yaml
kubernetes:
  enable_docker: true
```
To persist the Docker cache across cluster restarts, see Persist the cache.
Launch and verify#
```bash
sky launch -c dev examples/enable_docker/dind_cluster.yaml

# SSH into the cluster and confirm Docker is available
ssh dev
docker info

# Build and push an image using the docker CLI
docker build -t myregistry/myimage:latest .
docker push myregistry/myimage:latest
```
See dind_cluster.yaml for a complete example.
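The `docker build` command above assumes a Dockerfile in the working directory. If you want a self-contained smoke test, a minimal, purely illustrative Dockerfile can be created like this (base image and command are assumptions, not part of the example files):

```shell
# Create a minimal, illustrative Dockerfile for the build step above
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
CMD ["python", "-c", "print('hello from a nested container')"]
EOF
```

Then run the `docker build` and `docker push` commands shown above against it.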
Option 2: Build-only#
If your cluster does not allow `privileged: true` Pods, or you only need
to build and push images, set `enable_docker: BUILD`. This makes
`docker buildx build` available inside the Pod without
requiring privileged permissions.

Limitation: `docker run` / container execution is not supported.
Configuration#
Add the following to the task YAML's `config` field:

```yaml
config:
  kubernetes:
    enable_docker: BUILD
```

Or apply it globally in the SkyPilot config:

```yaml
kubernetes:
  enable_docker: BUILD
```
To persist the BuildKit cache across cluster restarts, see Persist the cache.
Launch and verify#
```bash
sky launch -c dev examples/enable_docker/buildkit_cluster.yaml

# SSH into the cluster and confirm buildx is configured
ssh dev
docker buildx ls

# Build and push an image using buildx
docker buildx build -t myregistry/myimage:latest --push .
```
See buildkit_cluster.yaml for a complete example.
Persist the cache#
By default, the Docker / BuildKit cache is lost when the cluster is stopped or
restarted. To persist it, create a SkyPilot volume and reference it in the
`enable_docker` config.
Define a volume YAML:
```yaml
# docker-cache-vol.yaml
name: my-builder-cache
type: k8s-pvc
infra: k8s
size: 50Gi
config:
  storage_class_name: standard-rwo
  access_mode: ReadWriteOnce
```
Note
The cache volume must be backed by a block-storage filesystem (e.g., ext4/xfs on EBS, Persistent Disk, etc.). NFS-based storage such as AWS EFS, Google Cloud Filestore, or CephFS cannot be used because:
- ALL mode (DinD): The overlay storage driver is not supported on NFS.
- BUILD mode (rootless BuildKit): NFS prevents unpacking image layers with correct file ownership.
See the Docker known limitations for details.
Create the volume:
```bash
sky volumes apply docker-cache-vol.yaml
```
Reference the volume in `enable_docker`:

```yaml
config:
  kubernetes:
    enable_docker:
      mode: ALL  # or BUILD
      cache_volume: my-builder-cache
```