SkyPilot
Agent skill for SkyPilot
Multi-cloud orchestration for ML workloads with automatic cost optimization. Use when you need to run training or batch jobs across multiple clouds, leverage spot instances with auto-recovery, or optimize GPU costs across providers.
Comprehensive guide to running ML workloads across clouds with automatic cost optimization using SkyPilot.
Use SkyPilot when:
- You need to run training or batch jobs across multiple clouds
- You want to leverage spot instances with automatic recovery from preemptions
- You want to optimize GPU costs across providers
Key features:
- One task YAML that runs on AWS, GCP, Azure, and Kubernetes
- Automatic selection of the cheapest available cloud, region, and GPU
- Managed jobs with spot preemption recovery and checkpoint resumption
- Autostop for idle clusters, multi-node distributed training, and autoscaled serving
Use alternatives instead:
```bash
pip install "skypilot[aws,gcp,azure,kubernetes]"

# Verify cloud credentials
sky check
```
Create `hello.yaml`:
```yaml
resources:
  accelerators: T4:1

run: |
  nvidia-smi
  echo "Hello from SkyPilot!"
```
Launch:
```bash
sky launch -c hello hello.yaml

# SSH to cluster
ssh hello

# Terminate
sky down hello
```
```yaml
# Task name (optional)
name: my-task

# Resource requirements
resources:
  cloud: aws              # Optional: auto-select if omitted
  region: us-west-2       # Optional: auto-select if omitted
  accelerators: A100:4    # GPU type and count
  cpus: 8+                # Minimum CPUs
  memory: 32+             # Minimum memory (GB)
  use_spot: true          # Use spot instances
  disk_size: 256          # Disk size (GB)

# Number of nodes for distributed training
num_nodes: 2

# Working directory (synced to ~/sky_workdir)
workdir: .

# Setup commands (run once)
setup: |
  pip install -r requirements.txt

# Run commands
run: |
  python train.py
```
| Command | Purpose |
|---|---|
| `sky launch` | Launch cluster and run task |
| `sky exec` | Run task on existing cluster |
| `sky status` | Show cluster status |
| `sky stop` | Stop cluster (preserve state) |
| `sky down` | Terminate cluster |
| `sky logs` | View task logs |
| `sky queue` | Show job queue |
| `sky jobs launch` | Launch managed job |
| `sky serve up` | Deploy serving endpoint |
```yaml
# NVIDIA GPUs
accelerators: T4:1
accelerators: L4:1
accelerators: A10G:1
accelerators: L40S:1
accelerators: A100:4
accelerators: A100-80GB:8
accelerators: H100:8

# Cloud-specific
accelerators: V100:4      # AWS/GCP
accelerators: TPU-v4-8    # GCP TPUs
```
```yaml
resources:
  accelerators:
    H100: 8
    A100-80GB: 8
    A100: 8
  any_of:
    - cloud: gcp
    - cloud: aws
    - cloud: azure
```
```yaml
resources:
  accelerators: A100:8
  use_spot: true
  spot_recovery: FAILOVER  # Auto-recover on preemption
```
```bash
# Launch new cluster
sky launch -c mycluster task.yaml

# Run on existing cluster (skip setup)
sky exec mycluster another_task.yaml

# Interactive SSH
ssh mycluster

# Stream logs
sky logs mycluster
```
```yaml
resources:
  accelerators: A100:4
  autostop:
    idle_minutes: 30
    down: true  # Terminate instead of stop
```
```bash
# Set autostop via CLI
sky autostop mycluster -i 30 --down
```
```bash
# All clusters
sky status

# Detailed view
sky status -a
```
```yaml
resources:
  accelerators: A100:8

num_nodes: 4  # 4 nodes × 8 GPUs = 32 GPUs total

setup: |
  pip install torch torchvision

run: |
  torchrun \
    --nnodes=$SKYPILOT_NUM_NODES \
    --nproc_per_node=$SKYPILOT_NUM_GPUS_PER_NODE \
    --node_rank=$SKYPILOT_NODE_RANK \
    --master_addr=$(echo "$SKYPILOT_NODE_IPS" | head -n1) \
    --master_port=12355 \
    train.py
```
| Variable | Description |
|---|---|
| `SKYPILOT_NODE_RANK` | Node index (0 to num_nodes-1) |
| `SKYPILOT_NODE_IPS` | Newline-separated IP addresses |
| `SKYPILOT_NUM_NODES` | Total number of nodes |
| `SKYPILOT_NUM_GPUS_PER_NODE` | GPUs per node |
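To confirm what SkyPilot injects on each node before wiring up a real launcher, a throwaway task like the following can help (a minimal sketch; the task name and T4 GPU choice are arbitrary):

```yaml
# Hypothetical smoke-test task: print the distributed-run variables on every node.
name: env-check
num_nodes: 2

resources:
  accelerators: T4:1

run: |
  echo "node ${SKYPILOT_NODE_RANK} of ${SKYPILOT_NUM_NODES}"
  echo "GPUs per node: ${SKYPILOT_NUM_GPUS_PER_NODE}"
  echo "node IPs:"
  echo "${SKYPILOT_NODE_IPS}"
```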
```yaml
run: |
  if [ "${SKYPILOT_NODE_RANK}" == "0" ]; then
    python orchestrate.py
  fi
```
```bash
# Launch managed job with spot recovery
sky jobs launch -n my-job train.yaml
```
```yaml
name: training-job

file_mounts:
  /checkpoints:
    name: my-checkpoints
    store: s3
    mode: MOUNT

resources:
  accelerators: A100:8
  use_spot: true

run: |
  python train.py \
    --checkpoint-dir /checkpoints \
    --resume-from-latest
```
```bash
# List jobs
sky jobs queue

# View logs
sky jobs logs my-job

# Cancel job
sky jobs cancel my-job
```
```yaml
workdir: ./my-project  # Synced to ~/sky_workdir

file_mounts:
  /data/config.yaml: ./config.yaml
  ~/.vimrc: ~/.vimrc
```
```yaml
file_mounts:
  # Mount S3 bucket
  /datasets:
    source: s3://my-bucket/datasets
    mode: MOUNT  # Stream from S3

  # Copy GCS bucket
  /models:
    source: gs://my-bucket/models
    mode: COPY  # Pre-fetch to disk

  # Cached mount (fast writes)
  /outputs:
    name: my-outputs
    store: s3
    mode: MOUNT_CACHED
```
| Mode | Description | Best For |
|---|---|---|
| `MOUNT` | Stream from cloud | Large datasets, read-heavy |
| `COPY` | Pre-fetch to disk | Small files, random access |
| `MOUNT_CACHED` | Cache with async upload | Checkpoints, outputs |
```yaml
# service.yaml
service:
  readiness_probe: /health
  replica_policy:
    min_replicas: 1
    max_replicas: 10
    target_qps_per_replica: 2.0

resources:
  accelerators: A100:1

run: |
  python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-2-7b-chat-hf \
    --port 8000
```
```bash
# Deploy
sky serve up -n my-service service.yaml

# Check status
sky serve status

# Get endpoint
sky serve status my-service
```
```yaml
service:
  replica_policy:
    min_replicas: 1
    max_replicas: 10
    target_qps_per_replica: 2.0
    upscale_delay_seconds: 60
    downscale_delay_seconds: 300
  load_balancing_policy: round_robin
```
```yaml
# SkyPilot finds cheapest option
resources:
  accelerators: A100:8
  # No cloud specified - auto-select cheapest
```
```bash
# Show optimizer decision
sky launch task.yaml --dryrun
```
```yaml
resources:
  accelerators: A100:8
  any_of:
    - cloud: gcp
      region: us-central1
    - cloud: aws
      region: us-east-1
    - cloud: azure
```
```yaml
envs:
  HF_TOKEN: $HF_TOKEN            # Inherited from local env
  WANDB_API_KEY: $WANDB_API_KEY

# Or use secrets
secrets:
  - HF_TOKEN
  - WANDB_API_KEY
```
```yaml
name: llm-finetune

file_mounts:
  /checkpoints:
    name: finetune-checkpoints
    store: s3
    mode: MOUNT_CACHED

resources:
  accelerators: A100:8
  use_spot: true

setup: |
  pip install transformers accelerate

run: |
  python train.py \
    --checkpoint-dir /checkpoints \
    --resume
```
```yaml
name: hp-sweep-${RUN_ID}

envs:
  RUN_ID: 0
  LEARNING_RATE: 1e-4
  BATCH_SIZE: 32

resources:
  accelerators: A100:1
  use_spot: true

run: |
  python train.py \
    --lr $LEARNING_RATE \
    --batch-size $BATCH_SIZE \
    --run-id $RUN_ID
```
```bash
# Launch multiple jobs
for i in {1..10}; do
  sky jobs launch sweep.yaml \
    --env RUN_ID=$i \
    --env LEARNING_RATE=$(python -c "import random; print(10**random.uniform(-5,-3))")
done
```
```bash
# SSH to cluster
ssh mycluster

# View logs
sky logs mycluster

# Check job queue
sky queue mycluster

# View managed job logs
sky jobs logs my-job
```
| Issue | Solution |
|---|---|
| Quota exceeded | Request quota increase, try different region |
| Spot preemption | Use managed jobs (`sky jobs launch`) for auto-recovery |
| Slow file sync | Use `MOUNT_CACHED` mode for outputs |
| GPU not available | Use `any_of` for fallback clouds |
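Several of these fixes combine naturally in one task. The sketch below (task and bucket names are hypothetical, and the `train.py` flags mirror the checkpointing example above) pairs spot instances with `any_of` cloud fallback and a `MOUNT_CACHED` output bucket; launching it with `sky jobs launch` lets preemptions recover automatically:

```yaml
# Hypothetical resilient training task combining the fixes above.
name: resilient-train

file_mounts:
  /outputs:
    name: my-training-outputs   # hypothetical bucket name
    store: s3
    mode: MOUNT_CACHED          # fast local writes, async upload

resources:
  accelerators: A100:8
  use_spot: true                # cheap; recovered automatically as a managed job
  any_of:                       # fall back across clouds when capacity is tight
    - cloud: gcp
    - cloud: aws
    - cloud: azure

run: |
  python train.py --output-dir /outputs --resume-from-latest
```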