Installation#
Install SkyPilot#
SkyPilot supports installation with uv or pip.
# Create a virtual environment with pip pre-installed (required for SkyPilot)
# SkyPilot requires 3.7 <= python <= 3.13.
uv venv --seed --python 3.10
source .venv/bin/activate # Use WSL on Windows
uv pip install skypilot
# install dependencies for the clouds you want to use
uv pip install "skypilot[kubernetes,aws,gcp]"
Note
The --seed flag is required as it ensures pip is installed in the virtual environment.
SkyPilot needs pip to build wheels for remote cluster setup.
# Install as a globally available tool with pip included
# SkyPilot requires 3.7 <= python <= 3.13.
uv tool install --with pip skypilot
# install dependencies for the clouds you want to use
uv tool install --with pip "skypilot[kubernetes,aws,gcp]"
Note
The --with pip flag is required when using uv tool install.
Without it, SkyPilot will fail when building wheels for remote clusters.
# Recommended: use a new conda env to avoid package conflicts.
# SkyPilot requires 3.7 <= python <= 3.13.
conda create -y -n sky python=3.10
conda activate sky
pip install skypilot
# install dependencies for the clouds you want to use
pip install "skypilot[kubernetes,aws,gcp]"
Install SkyPilot from nightly build or source
SkyPilot provides nightly builds and source code for the latest features and for development.
Install from nightly build:
# Create a virtual environment with pip pre-installed (required for SkyPilot)
# SkyPilot requires 3.7 <= python <= 3.13.
uv venv --seed --python 3.10
source .venv/bin/activate # On Windows: .venv\Scripts\activate
uv pip install skypilot-nightly
# Install as a globally available tool with pip included
# SkyPilot requires 3.7 <= python <= 3.13.
uv tool install --with pip skypilot-nightly
# Recommended: use a new conda env to avoid package conflicts.
# SkyPilot requires 3.7 <= python <= 3.13.
conda create -y -n sky python=3.10
conda activate sky
pip install skypilot-nightly
Install from source:
# Recommended: use a new conda env to avoid package conflicts.
# SkyPilot requires 3.7 <= python <= 3.13.
conda create -y -n sky python=3.10
conda activate sky
git clone https://github.com/skypilot-org/skypilot.git
cd skypilot
pip install -e .
# Build the dashboard (requires Node.js and npm; the sky/dashboard
# directory is only present in a source checkout)
npm --prefix sky/dashboard install
npm --prefix sky/dashboard run build
Alternatively, we also provide a Docker image as a quick way to try out SkyPilot.
Run locally or connect to a remote API server#
SkyPilot can be run as a standalone application, or connect to a remote API server for multi-user collaboration.
To run SkyPilot locally:
Refer to the cloud setup section to download the necessary dependencies for the clouds you want to use.
Tip
You can install dependencies for multiple clouds at once with the following commands:
# From stable release
uv pip install "skypilot[kubernetes,aws,gcp]"
# From nightly build
uv pip install "skypilot-nightly[kubernetes,aws,gcp]"
# From stable release
uv tool install --with pip "skypilot[kubernetes,aws,gcp]"
# From nightly build
uv tool install --with pip "skypilot-nightly[kubernetes,aws,gcp]"
# From stable release
pip install "skypilot[kubernetes,aws,gcp]"
# From nightly build
pip install "skypilot-nightly[kubernetes,aws,gcp]"
# From source
pip install -e ".[kubernetes,aws,gcp]"
Note
When using SkyPilot locally, run sky api stop after each upgrade or dependency installation
to enable the new version.
See Upgrading SkyPilot for more details.
To connect to a remote API server:
If your team has set up a SkyPilot remote API server, connect to it by running:
sky api login
There is no need to install any dependencies locally.
See Connecting to an API server for more details.
To deploy a remote API server:
See Deploying SkyPilot API Server for detailed instructions on how to deploy a remote API server.
Verify cloud access#
After installation, run sky check to verify that credentials are correctly set up:
sky check
This will produce a summary like:
Checking credentials to enable clouds for SkyPilot.
AWS: enabled
GCP: enabled
Azure: enabled
OCI: enabled
Lambda: enabled
Nebius: enabled
RunPod: enabled
Paperspace: enabled
Fluidstack: enabled
Cudo: enabled
Shadeform: enabled
IBM: enabled
SCP: enabled
Seeweb: enabled
vSphere: enabled
Cloudflare (for R2 object store): enabled
Kubernetes: enabled
Slurm: enabled
If any cloud’s credentials or dependencies are missing, sky check will
output hints on how to resolve them. You can also refer to the cloud setup
section below.
Tip
If your clouds show enabled — 🎉 🎉 Congratulations! 🎉 🎉 You can now head over to
Quickstart to get started with SkyPilot.
Tip
To check credentials only for specific clouds, pass the clouds as arguments: sky check aws gcp
Tip
If you are having trouble setting up credentials, it may be because the API server started before they were
configured. Try restarting the API server by running sky api stop and then sky api start.
Request quotas for first time users#
If your cloud account has not been used to launch instances before, the respective quotas are likely set to zero or a low limit. This is especially true for GPU instances.
Please follow Requesting Quota Increase to check quotas and request quota increases before proceeding.
Enable shell completion#
SkyPilot supports shell completion for Bash (Version 4.4 and up), Zsh and Fish. This is only available for click versions 8.0 and up (use pip install click==8.0.4 to install).
To enable shell completion after installing SkyPilot, you will need to modify your shell configuration.
SkyPilot automates this process using the --install-shell-completion option, which you should call using the appropriate shell name or auto:
sky --install-shell-completion auto
# sky --install-shell-completion zsh
# sky --install-shell-completion bash
# sky --install-shell-completion fish
Shell completion may perform poorly on certain shells and machines.
If you experience any issues after installation, you can use the --uninstall-shell-completion option to uninstall it, which you should similarly call using the appropriate shell name or auto:
sky --uninstall-shell-completion auto
# sky --uninstall-shell-completion zsh
# sky --uninstall-shell-completion bash
# sky --uninstall-shell-completion fish
Appendix: Cloud access for local SkyPilot#
SkyPilot can be run as a standalone application, or connect to a remote API server for multi-user collaboration.
When running SkyPilot locally, necessary dependencies and credentials need to be set up for the clouds you want to use. You can configure access to at least one cloud using the guides below.
To configure infra access for team deployment instead, see Optional: Configure cloud accounts.
SkyPilot supports most major cloud providers, Kubernetes, and Slurm.
Kubernetes#
Install the necessary dependencies for Kubernetes.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[kubernetes]"
# From nightly build
uv pip install "skypilot-nightly[kubernetes]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[kubernetes]"
# From nightly build
uv tool install --with pip "skypilot-nightly[kubernetes]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[kubernetes]"
# From nightly build
pip install "skypilot-nightly[kubernetes]"
# From source
pip install -e ".[kubernetes]"
SkyPilot can run workloads on on-prem or cloud-hosted Kubernetes clusters
(e.g., EKS, GKE, Nebius Managed Kubernetes, CoreWeave). The only requirement is a valid kubeconfig at
~/.kube/config.
# Place your kubeconfig at ~/.kube/config
mkdir -p ~/.kube
cp /path/to/kubeconfig ~/.kube/config
See SkyPilot on Kubernetes for more.
Tip
If you do not have access to a Kubernetes cluster, you can deploy a local Kubernetes cluster on your laptop with sky local up.
Slurm#
Note
Early Access: Slurm support is under active development. If you’re interested in trying it out, please fill out this form.
SkyPilot can run workloads on Slurm clusters. The only requirement is SSH access to a Slurm login node.
To configure Slurm support, create a ~/.slurm/config file with your Slurm cluster configuration and add the SSH credentials to connect to the Slurm login node.
# Create the Slurm config directory
mkdir -p ~/.slurm
# Add your Slurm cluster configuration
cat > ~/.slurm/config << EOF
Host mycluster
HostName login.mycluster.myorg.com
User myusername
IdentityFile ~/.ssh/id_rsa
EOF
See SkyPilot on Slurm for more.
AWS#
Install the necessary dependencies for AWS.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[aws]"
# From nightly build
uv pip install "skypilot-nightly[aws]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[aws]"
# From nightly build
uv tool install --with pip "skypilot-nightly[aws]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[aws]"
# From nightly build
pip install "skypilot-nightly[aws]"
# From source
pip install -e ".[aws]"
To set up AWS credentials, log into the AWS console and create an access key for yourself. If you don’t see the “Security credentials” link shown in the AWS instructions, you may be using SSO; see Using AWS SSO.
Now configure your credentials.
# Configure your AWS credentials
aws configure
For AWS Access Key ID, copy the “Access key” value from the console.
For AWS Secret Access Key, copy the “Secret access key” value from the console.
The Default region name [None]: and Default output format [None]: fields are optional and can be left blank to use the defaults.
To use AWS IAM Identity Center (AWS SSO), see here for instructions.
Optional: To create a new AWS user with minimal permissions for SkyPilot, see Dedicated SkyPilot IAM user.
GCP#
Install the necessary dependencies for GCP.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[gcp]"
# From nightly build
uv pip install "skypilot-nightly[gcp]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[gcp]"
# From nightly build
uv tool install --with pip "skypilot-nightly[gcp]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[gcp]"
# From nightly build
pip install "skypilot-nightly[gcp]"
# From source
pip install -e ".[gcp]"
# Install Google Cloud SDK via conda-forge
conda install -c conda-forge google-cloud-sdk
# Initialize gcloud
gcloud init
# Run this if you don't have a credentials file.
# This will generate ~/.config/gcloud/application_default_credentials.json.
gcloud auth application-default login
For macOS on Apple silicon:
curl -o gcloud.tar.gz https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-darwin-arm.tar.gz
tar -xf gcloud.tar.gz
./google-cloud-sdk/install.sh # Update your PATH with the newly installed gcloud
If you are using other architecture or OS, follow the Google Cloud SDK installation instructions to download the appropriate package.
Be sure to complete the optional step that adds gcloud to your PATH. This step is required for SkyPilot to recognize that your gcloud installation is configured correctly.
Tip
If you are using multiple GCP projects, list all the projects by gcloud projects list and activate one by gcloud config set project <PROJECT_ID> (see GCP docs).
Common GCP installation errors
Here are some commonly encountered errors and their fixes:
RemoveError: 'requests' is a dependency of conda and cannot be removed from conda's operating environment when running conda install -c conda-forge google-cloud-sdk: run conda update --force conda first, then rerun the command.
Authorization Error (Error 400: invalid_request) with the URL generated by gcloud auth login: install the latest version of the Google Cloud SDK (e.g., with conda install -c conda-forge google-cloud-sdk) on the local machine that opened the browser, then rerun the command.
Optional: To create and use a long-lived service account on your local machine, see here.
Optional: To create a new GCP user with minimal permissions for SkyPilot, see GCP User Creation.
Azure#
Install the necessary dependencies for Azure.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
# Azure CLI has an issue with uv, and requires '--prerelease allow'.
uv pip install --prerelease allow azure-cli
uv pip install "skypilot[azure]"
# From nightly build
# Azure CLI has an issue with uv, and requires '--prerelease allow'.
uv pip install --prerelease allow azure-cli
uv pip install "skypilot-nightly[azure]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[azure]"
# From nightly build
uv tool install --with pip "skypilot-nightly[azure]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[azure]"
# From nightly build
pip install "skypilot-nightly[azure]"
# From source
pip install -e ".[azure]"
# Login
az login
# Set the subscription to use
az account set -s <subscription_id>
Hint: run az account subscription list to get a list of subscription IDs under your account.
CoreWeave#
CoreWeave integrates with SkyPilot through the Kubernetes integration. To set up:
Install the necessary dependencies for CoreWeave.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[coreweave]"
# From nightly build
uv pip install "skypilot-nightly[coreweave]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[coreweave]"
# From nightly build
uv tool install --with pip "skypilot-nightly[coreweave]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[coreweave]"
# From nightly build
pip install "skypilot-nightly[coreweave]"
# From source
pip install -e ".[coreweave]"
Launch a CoreWeave CKS cluster from the CoreWeave console.
Get your kubeconfig from the CoreWeave console and place it at
~/.kube/config.
Tip
CoreWeave also offers InfiniBand networking for high-performance distributed training. You can enable InfiniBand support by adding network_tier: best to your SkyPilot task configuration.
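As a sketch of that tip, a minimal task YAML enabling InfiniBand might look like the following (the accelerator type, node count, and run command are illustrative, not prescriptive):

```yaml
# Illustrative task sketch: request CoreWeave InfiniBand via network_tier.
resources:
  accelerators: H100:8
  network_tier: best

num_nodes: 2

run: |
  nvidia-smi topo -m  # inspect the interconnect topology on each node
```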
CoreWeave Object Storage (CAIOS)#
You can optionally set up CoreWeave Object Storage (CAIOS) as an S3-compatible object storage that can be used with SkyPilot for storing and accessing data in your workloads.
To get CAIOS Access Key ID and Secret Access Key:
Log into your CoreWeave Cloud console.
Navigate to Object Storage → Keys in the left sidebar.
Generate a new key pair.
SkyPilot uses separate configuration files for CAIOS to avoid conflicts with your AWS credentials. Run the following command to configure your CAIOS access credentials:
AWS_SHARED_CREDENTIALS_FILE=~/.coreweave/cw.credentials aws configure --profile cw
When prompted, enter your CAIOS credentials:
AWS Access Key ID [None]: <your_access_key_id>
AWS Secret Access Key [None]: <your_secret_access_key>
Default region name [None]:
Default output format [None]: json
Next, configure the endpoint URL and addressing style for CoreWeave Object Storage. This tells AWS CLI how to connect to CoreWeave’s S3-compatible service:
# For external access (outside CoreWeave CKS clusters)
AWS_CONFIG_FILE=~/.coreweave/cw.config aws configure set endpoint_url https://cwobject.com --profile cw
AWS_CONFIG_FILE=~/.coreweave/cw.config aws configure set s3.addressing_style virtual --profile cw
Note
CAIOS offers two endpoints for different use cases. Choose the right endpoint:
External access (slow but accessible from anywhere): Use https://cwobject.com when launching SkyPilot clusters in non-CoreWeave CKS clusters. This endpoint is accessible from anywhere and uses secure HTTPS.
Internal access (fast but only accessible within CoreWeave’s network): Use http://cwlota.com only if you are launching SkyPilot clusters inside CoreWeave CKS clusters and do not need to upload local data to the bucket. The LOTA endpoint provides faster access within CoreWeave’s network but only supports HTTP and is not accessible externally. Refer to LOTA documentation for more details.
Nebius#
Nebius is the ultimate cloud for AI explorers. To configure Nebius access:
Install the necessary dependencies for Nebius.
Note
Nebius is only supported for Python >= 3.10
# Nebius requires 3.10 <= python <= 3.13.
# From stable release
uv pip install "skypilot[nebius]"
# From nightly build
uv pip install "skypilot-nightly[nebius]"
# Nebius requires 3.10 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[nebius]"
# From nightly build
uv tool install --with pip "skypilot-nightly[nebius]"
# Nebius requires 3.10 <= python <= 3.13.
# From stable release
pip install "skypilot[nebius]"
# From nightly build
pip install "skypilot-nightly[nebius]"
# From source
pip install -e ".[nebius]"
Install and configure Nebius CLI:
mkdir -p ~/.nebius
nebius iam get-access-token > ~/.nebius/NEBIUS_IAM_TOKEN.txt
nebius --format json iam whoami | jq -r '.user_profile.tenants[0].tenant_id' > ~/.nebius/NEBIUS_TENANT_ID.txt
Optional: You can specify a specific project ID and fabric in ~/.sky/config.yaml; see Configuration project_id and fabric for Nebius.
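As an illustration only, such a ~/.sky/config.yaml entry might look roughly like the sketch below; the key names here are assumptions, so verify the exact schema against the Nebius configuration reference linked above:

```yaml
# Hypothetical sketch: pin a Nebius project and fabric per region.
# Verify the exact keys against the linked configuration reference.
nebius:
  region_configs:
    eu-north1:
      project_id: <your_project_id>
      fabric: <your_fabric_id>
```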
Alternatively, you can also use a service account to access Nebius, see Using Service Account for Nebius.
To use Nebius Managed Kubernetes, see Kubernetes Installation. Retrieve the Kubernetes credential with:
nebius mk8s cluster get-credentials --id <cluster_id> --external --kubeconfig $HOME/.kube/config
Nebius also offers Object Storage, an S3-compatible object storage without any egress charges. SkyPilot can download/upload data to Nebius buckets and mount them as local filesystem on clusters launched by SkyPilot. To set up Nebius support, run:
# Install boto
pip install boto3
# Configure your Nebius Object Storage credentials
aws configure --profile nebius
In the prompt, enter your Nebius Access Key ID and Secret Access Key (see instructions to generate Nebius credentials). Select auto for the default region and json for the default output format.
aws configure set aws_access_key_id $NB_ACCESS_KEY_AWS_ID --profile nebius
aws configure set aws_secret_access_key $NB_SECRET_ACCESS_KEY --profile nebius
aws configure set region <REGION> --profile nebius
aws configure set endpoint_url <ENDPOINT> --profile nebius
RunPod#
RunPod is a specialized AI cloud provider that offers low-cost GPUs. To configure RunPod access:
Install the necessary dependencies for RunPod
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[runpod]"
# From nightly build
uv pip install "skypilot-nightly[runpod]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[runpod]"
# From nightly build
uv tool install --with pip "skypilot-nightly[runpod]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[runpod]"
# From nightly build
pip install "skypilot-nightly[runpod]"
# From source
pip install -e ".[runpod]"
Go to the Settings page on your RunPod console and generate an API key. Then, run:
pip install "runpod>=1.6.1"
runpod config
OCI#
To access Oracle Cloud Infrastructure (OCI):
Install the necessary dependencies for OCI.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[oci]"
# From nightly build
uv pip install "skypilot-nightly[oci]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[oci]"
# From nightly build
uv tool install --with pip "skypilot-nightly[oci]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[oci]"
# From nightly build
pip install "skypilot-nightly[oci]"
# From source
pip install -e ".[oci]"
Set up the credentials by following this guide. After completing the steps in the guide, the ~/.oci folder should contain the following files:
~/.oci/config
~/.oci/oci_api_key.pem
The ~/.oci/config file should contain the following fields:
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaa
fingerprint=aa:bb:cc:dd:ee:ff:gg:hh:ii:jj:kk:ll:mm:nn:oo:pp
tenancy=ocid1.tenancy.oc1..aaaaaaaa
region=us-sanjose-1
# Avoid using the full home path for key_file, e.g., use ~/.oci instead of /home/username/.oci
key_file=~/.oci/oci_api_key.pem
By default, provisioned nodes are placed in the root compartment. To use a compartment other than root, create or edit ~/.sky/config.yaml and put the compartment’s OCID there, as follows:
oci:
  region_configs:
    default:
      compartment_ocid: ocid1.compartment.oc1..aaaaaaaa......
Lambda Cloud#
Lambda Cloud is a cloud provider offering low-cost GPUs. To configure Lambda Cloud access:
Install the necessary dependencies for Lambda Cloud.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[lambda]"
# From nightly build
uv pip install "skypilot-nightly[lambda]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[lambda]"
# From nightly build
uv tool install --with pip "skypilot-nightly[lambda]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[lambda]"
# From nightly build
pip install "skypilot-nightly[lambda]"
# From source
pip install -e ".[lambda]"
Go to the API Keys page on your Lambda console to generate a key and then add it to ~/.lambda_cloud/lambda_keys:
mkdir -p ~/.lambda_cloud
echo "api_key = <your_api_key_here>" > ~/.lambda_cloud/lambda_keys
Together AI#
Together AI offers GPU instant clusters. Accessing them is similar to using Kubernetes:
Install the necessary dependencies for Kubernetes.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[kubernetes]"
# From nightly build
uv pip install "skypilot-nightly[kubernetes]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[kubernetes]"
# From nightly build
uv tool install --with pip "skypilot-nightly[kubernetes]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[kubernetes]"
# From nightly build
pip install "skypilot-nightly[kubernetes]"
# From source
pip install -e ".[kubernetes]"
Launch a Together Instant Cluster with the cluster type set to Kubernetes.
Get the Kubernetes config for the cluster.
Save the kubeconfig to a file, e.g., ./together-kubeconfig.
Copy the kubeconfig to ~/.kube/config, or merge it with your existing kubeconfig file by running:
KUBECONFIG=./together-kubeconfig:~/.kube/config kubectl config view --flatten > /tmp/merged_kubeconfig && mv /tmp/merged_kubeconfig ~/.kube/config
Paperspace#
Paperspace is a cloud provider offering GPU-accelerated VMs. To configure Paperspace access:
Install the necessary dependencies for Paperspace.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[paperspace]"
# From nightly build
uv pip install "skypilot-nightly[paperspace]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[paperspace]"
# From nightly build
uv tool install --with pip "skypilot-nightly[paperspace]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[paperspace]"
# From nightly build
pip install "skypilot-nightly[paperspace]"
# From source
pip install -e ".[paperspace]"
Follow these instructions to generate an API key. Then add the API key with:
mkdir -p ~/.paperspace
echo '{"api_key": "<your_api_key_here>"}' > ~/.paperspace/config.json
Vast#
Vast is a cloud provider that offers low-cost GPUs. To configure Vast access:
Install the necessary dependencies for Vast.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[vast]"
# From nightly build
uv pip install "skypilot-nightly[vast]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[vast]"
# From nightly build
uv tool install --with pip "skypilot-nightly[vast]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[vast]"
# From nightly build
pip install "skypilot-nightly[vast]"
# From source
pip install -e ".[vast]"
Go to the Account page on your Vast console to get your API key. Then, run:
pip install "vastai-sdk>=0.1.12"
mkdir -p ~/.config/vastai
echo "<your_api_key_here>" > ~/.config/vastai/vast_api_key
Fluidstack#
Fluidstack is a cloud provider offering low-cost GPUs. To configure Fluidstack access:
Install the necessary dependencies for Fluidstack.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[fluidstack]"
# From nightly build
uv pip install "skypilot-nightly[fluidstack]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[fluidstack]"
# From nightly build
uv tool install --with pip "skypilot-nightly[fluidstack]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[fluidstack]"
# From nightly build
pip install "skypilot-nightly[fluidstack]"
# From source
pip install -e ".[fluidstack]"
Go to the Home page on your Fluidstack console to generate an API key, then add it to ~/.fluidstack/api_key:
mkdir -p ~/.fluidstack
echo "your_api_key_here" > ~/.fluidstack/api_key
Cudo Compute#
Cudo Compute provides low-cost GPUs powered by green energy.
Install the necessary dependencies for Cudo Compute.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[cudo]"
# From nightly build
uv pip install "skypilot-nightly[cudo]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[cudo]"
# From nightly build
uv tool install --with pip "skypilot-nightly[cudo]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[cudo]"
# From nightly build
pip install "skypilot-nightly[cudo]"
# From source
pip install -e ".[cudo]"
Create a billing account.
Create a project.
Create an API Key.
Download and install the cudoctl command line tool.
Run cudoctl init:
cudoctl init
✔ api key: my-api-key
✔ project: my-project
✔ billing account: my-billing-account
✔ context: default
config file saved ~/.config/cudo/cudo.yml
Then install the Cudo Compute Python SDK:
pip install "cudo-compute>=0.1.10"
If you want to use SkyPilot with a different Cudo Compute account or project, run cudoctl init again.
Shadeform#
Shadeform is a cloud GPU marketplace that offers GPUs across a variety of vetted cloud providers. To configure Shadeform access:
Install the necessary dependencies for Shadeform.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[shadeform]"
# From nightly build
uv pip install "skypilot-nightly[shadeform]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[shadeform]"
# From nightly build
uv tool install --with pip "skypilot-nightly[shadeform]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[shadeform]"
# From nightly build
pip install "skypilot-nightly[shadeform]"
# From source
pip install -e ".[shadeform]"
Go to the API Key Management page within your Shadeform account to generate a key and then add it to ~/.shadeform/api_key:
mkdir -p ~/.shadeform
echo "<your_api_key_here>" > ~/.shadeform/api_key
IBM#
To access IBM’s VPC service:
Install the necessary dependencies for IBM.
Note
IBM is only supported for Python <= 3.11
# IBM requires 3.7 <= python <= 3.11.
# From stable release
uv pip install "skypilot[ibm]"
# From nightly build
uv pip install "skypilot-nightly[ibm]"
# IBM requires 3.7 <= python <= 3.11.
# From stable release
uv tool install --with pip "skypilot[ibm]"
# From nightly build
uv tool install --with pip "skypilot-nightly[ibm]"
# IBM requires 3.7 <= python <= 3.11.
# From stable release
pip install "skypilot[ibm]"
# From nightly build
pip install "skypilot-nightly[ibm]"
# From source
pip install -e ".[ibm]"
Store the following fields in ~/.ibm/credentials.yaml:
iam_api_key: <user_personal_api_key>
resource_group_id: <resource_group_user_is_a_member_of>
Create a new API key by following this guide.
Obtain a resource group’s ID from the web console.
Note
Stock images do not currently provide ML tools out of the box. Create private images with the necessary tools (e.g., CUDA) by following the IBM segment in this documentation.
To access IBM’s Cloud Object Storage (COS), append the following fields to the credentials file:
access_key_id: <access_key_id>
secret_access_key: <secret_key_id>
To get access_key_id and secret_access_key use the IBM web console:
Create/Select a COS instance from the web console.
From “Service Credentials” tab, click “New Credential” and toggle “Include HMAC Credential”.
Copy “secret_access_key” and “access_key_id” to file.
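Putting the pieces together, a ~/.ibm/credentials.yaml that enables both VPC and COS access contains all four fields described above (values are placeholders):

```yaml
# ~/.ibm/credentials.yaml with both VPC and COS fields (placeholder values)
iam_api_key: <user_personal_api_key>
resource_group_id: <resource_group_id>
access_key_id: <access_key_id>
secret_access_key: <secret_access_key>
```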
Finally, install rclone via: curl https://rclone.org/install.sh | sudo bash
Note
sky check does not reflect IBM COS’s enabled status. IBM: enabled only guarantees that IBM VM instances are enabled.
SCP (Samsung Cloud Platform)#
Samsung Cloud Platform, or SCP, provides cloud services optimized for enterprise customers. You can learn more about SCP here.
To configure SCP access:
Install the necessary dependencies for SCP.
Note
SCP is only supported for Python <= 3.11
# SCP requires 3.7 <= python <= 3.11.
# From stable release
uv pip install "skypilot[scp]"
# From nightly build
uv pip install "skypilot-nightly[scp]"
# SCP requires 3.7 <= python <= 3.11.
# From stable release
uv tool install --with pip "skypilot[scp]"
# From nightly build
uv tool install --with pip "skypilot-nightly[scp]"
# SCP requires 3.7 <= python <= 3.11.
# From stable release
pip install "skypilot[scp]"
# From nightly build
pip install "skypilot-nightly[scp]"
# From source
pip install -e ".[scp]"
You need access keys and the ID of the project your tasks will run in. Go to the Access Key Management page on your SCP console to generate the access keys, and the Project Overview page for the project ID. Then, add them to ~/.scp/scp_credential by running:
# Create directory if required
mkdir -p ~/.scp
# Add the lines for "access_key", "secret_key", and "project_id" to scp_credential file
echo "access_key = <your_access_key>" >> ~/.scp/scp_credential
echo "secret_key = <your_secret_key>" >> ~/.scp/scp_credential
echo "project_id = <your_project_id>" >> ~/.scp/scp_credential
Note
Multi-node clusters are currently not supported on SCP.
VMware vSphere#
To configure VMware vSphere access:
Install the necessary dependencies for VMware vSphere.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[vsphere]"
# From nightly build
uv pip install "skypilot-nightly[vsphere]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[vsphere]"
# From nightly build
uv tool install --with pip "skypilot-nightly[vsphere]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[vsphere]"
# From nightly build
pip install "skypilot-nightly[vsphere]"
# From source
pip install -e ".[vsphere]"
Store the vSphere credentials in ~/.vsphere/credential.yaml:
mkdir -p ~/.vsphere
touch ~/.vsphere/credential.yaml
Here is an example configuration for the credential file:
vcenters:
- name: <your_vsphere_server_ip_01>
  username: <your_vsphere_user_name>
  password: <your_vsphere_user_passwd>
  skip_verification: true # If your vCenter has a valid certificate, change this to 'false'
  # Clusters that can be used by SkyPilot:
  # [] means all the clusters in this vSphere instance can be used by SkyPilot.
  # Alternatively, specify the clusters in a list:
  # clusters:
  # - name: <your_vsphere_cluster_name1>
  # - name: <your_vsphere_cluster_name2>
  clusters: []
# If you are configuring only one vSphere instance, omit the following entry.
- name: <your_vsphere_server_ip_02>
  username: <your_vsphere_user_name>
  password: <your_vsphere_user_passwd>
  skip_verification: true
  clusters: []
After configuring the vSphere credentials, ensure that the necessary preparations for vSphere are completed. Please refer to this guide for more information: Cloud Preparation for vSphere
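For the common single-vCenter case, the credential file shown above can be generated in one step with a heredoc. This is a sketch; all values are placeholders to be replaced with your own:

```shell
# Sketch: create ~/.vsphere/credential.yaml for a single vCenter
# (overwrites any existing file). Placeholder values are examples only.
mkdir -p ~/.vsphere
cat > ~/.vsphere/credential.yaml <<'EOF'
vcenters:
  - name: <your_vsphere_server_ip_01>
    username: <your_vsphere_user_name>
    password: <your_vsphere_user_passwd>
    skip_verification: true
    clusters: []
EOF
chmod 600 ~/.vsphere/credential.yaml
```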
Cloudflare R2#
Cloudflare offers R2, an S3-compatible object storage service with no egress charges. SkyPilot can download/upload data to R2 buckets and mount them as a local filesystem on clusters launched by SkyPilot. To set up R2 support:
Install the necessary dependencies for Cloudflare R2.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[cloudflare]"
# From nightly build
uv pip install "skypilot-nightly[cloudflare]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[cloudflare]"
# From nightly build
uv tool install --with pip "skypilot-nightly[cloudflare]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[cloudflare]"
# From nightly build
pip install "skypilot-nightly[cloudflare]"
# From source
pip install -e ".[cloudflare]"
Run the following commands:
# Install boto
pip install boto3
# Configure your R2 credentials
AWS_SHARED_CREDENTIALS_FILE=~/.cloudflare/r2.credentials aws configure --profile r2
In the prompt, enter your R2 Access Key ID and Secret Access Key (see instructions to generate R2 credentials). Select auto for the default region and json for the default output format.
AWS Access Key ID [None]: <access_key_id>
AWS Secret Access Key [None]: <access_key_secret>
Default region name [None]: auto
Default output format [None]: json
Next, get your Account ID from your R2 dashboard and store it in ~/.cloudflare/accountid with:
mkdir -p ~/.cloudflare
echo <YOUR_ACCOUNT_ID_HERE> > ~/.cloudflare/accountid
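With both files in place, you can sanity-check the setup by constructing the account-scoped R2 endpoint URL (Cloudflare's documented `https://<account_id>.r2.cloudflarestorage.com` pattern). This is a sketch; the placeholder is used only if you have not stored a real account ID yet, and the optional bucket listing requires valid keys:

```shell
# Sketch: derive the R2 endpoint URL from the stored account ID.
mkdir -p ~/.cloudflare
# Skip this line if you already stored your real account ID:
[ -f ~/.cloudflare/accountid ] || echo "<YOUR_ACCOUNT_ID_HERE>" > ~/.cloudflare/accountid
ENDPOINT="https://$(cat ~/.cloudflare/accountid).r2.cloudflarestorage.com"
echo "R2 endpoint: $ENDPOINT"
# Optional check (requires valid keys):
# AWS_SHARED_CREDENTIALS_FILE=~/.cloudflare/r2.credentials aws s3 ls --profile r2 --endpoint-url "$ENDPOINT"
```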
Prime Intellect#
Prime Intellect makes it easy to find global compute resources and train state-of-the-art models through distributed training across clusters. To configure Prime Intellect access:
Install the necessary dependencies for Prime Intellect.
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv pip install "skypilot[primeintellect]"
# From nightly build
uv pip install "skypilot-nightly[primeintellect]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[primeintellect]"
# From nightly build
uv tool install --with pip "skypilot-nightly[primeintellect]"
# SkyPilot requires 3.7 <= python <= 3.13.
# From stable release
pip install "skypilot[primeintellect]"
# From nightly build
pip install "skypilot-nightly[primeintellect]"
# From source
pip install -e ".[primeintellect]"
Install and configure Prime Intellect CLI:
mkdir -p ~/.prime
prime login
# optional: set team id
prime config set-team-id <team_id>
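Since `prime login` requires the Prime Intellect CLI to already be on your PATH, a quick check before running it can save a confusing error. This is a sketch using only standard shell built-ins:

```shell
# Sketch: check whether the `prime` CLI is on PATH before running `prime login`.
if command -v prime >/dev/null 2>&1; then
  echo "prime CLI found at: $(command -v prime)"
else
  echo "prime CLI not found; install it before running 'prime login'"
fi
```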
Seeweb#
Seeweb is a European GPU cloud provider. To access Seeweb:
Install the necessary dependencies for Seeweb.
Note
Seeweb is only supported for Python >= 3.10
# Seeweb requires 3.10 <= python <= 3.13.
# From stable release
uv pip install "skypilot[seeweb]"
# From nightly build
uv pip install "skypilot-nightly[seeweb]"
# Seeweb requires 3.10 <= python <= 3.13.
# From stable release
uv tool install --with pip "skypilot[seeweb]"
# From nightly build
uv tool install --with pip "skypilot-nightly[seeweb]"
# Seeweb requires 3.10 <= python <= 3.13.
# From stable release
pip install "skypilot[seeweb]"
# From nightly build
pip install "skypilot-nightly[seeweb]"
# From source
pip install -e ".[seeweb]"
Log into your Seeweb dashboard.
Navigate to Compute → API Token in the control panel, and create a new token.
Create the file ~/.seeweb_cloud/seeweb_keys with the following contents:
[DEFAULT]
api_key = <your-api-token>
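The file above can be created in one step with a heredoc. This is a sketch; the token value is a placeholder:

```shell
# Sketch: create the Seeweb credentials file (overwrites any existing file).
# The token value is a placeholder; substitute your real API token.
mkdir -p ~/.seeweb_cloud
cat > ~/.seeweb_cloud/seeweb_keys <<'EOF'
[DEFAULT]
api_key = <your-api-token>
EOF
chmod 600 ~/.seeweb_cloud/seeweb_keys
```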
Appendix: Using SkyPilot in Docker#
As a quick alternative to installing SkyPilot on your laptop, we also provide a Docker image with the SkyPilot main branch automatically cloned. You can simply run:
# NOTE: '--platform linux/amd64' is needed for Apple silicon Macs
docker run --platform linux/amd64 \
-td --rm --name sky \
-v "$HOME/.sky:/root/.sky:rw" \
-v "$HOME/.aws:/root/.aws:rw" \
-v "$HOME/.config/gcloud:/root/.config/gcloud:rw" \
berkeleyskypilot/skypilot
docker exec -it sky /bin/bash
If your cloud CLIs are already set up, your credentials (AWS and GCP) will be mounted to the container and you can proceed to Quickstart. Otherwise, you can follow the instructions in Cloud account setup inside the container to set up your cloud accounts.
Once you are done experimenting with SkyPilot, remember to delete any clusters and storage resources you may have created, using the following commands:
# Run inside the container:
sky down -a -y
sky storage delete -a -y
Finally, you can stop the container with:
docker stop sky
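The launch, cleanup, and stop steps above can be collected into a small wrapper script. This is a sketch under the same assumptions as above (container named `sky`, image `berkeleyskypilot/skypilot`); the script path is arbitrary:

```shell
# Sketch: wrapper for the SkyPilot container lifecycle described above.
cat > /tmp/sky_docker.sh <<'EOF'
#!/bin/sh
set -e
case "$1" in
  start)
    # '--platform linux/amd64' is needed for Apple silicon Macs
    docker run --platform linux/amd64 -td --rm --name sky \
      -v "$HOME/.sky:/root/.sky:rw" \
      -v "$HOME/.aws:/root/.aws:rw" \
      -v "$HOME/.config/gcloud:/root/.config/gcloud:rw" \
      berkeleyskypilot/skypilot
    ;;
  cleanup)
    # Delete clusters and storage created inside the container.
    docker exec sky sky down -a -y
    docker exec sky sky storage delete -a -y
    ;;
  stop)
    docker stop sky
    ;;
  *)
    echo "usage: $0 {start|cleanup|stop}" >&2
    exit 1
    ;;
esac
EOF
chmod +x /tmp/sky_docker.sh
sh -n /tmp/sky_docker.sh && echo "script syntax OK"
```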
See more details about the dev container image
berkeleyskypilot/skypilot-nightly here.