# Caching with RunsOn

Accelerate builds with S3-based caching and Docker optimization.
RunsOn provides multiple caching strategies to significantly reduce build times. The S3-based Magic Cache delivers up to 5x faster cache operations compared to GitHub's built-in caching.
## Caching options overview
| Method | Best For | Speed | Storage |
|---|---|---|---|
| Magic Cache | Dependencies, build artifacts | 5x GitHub cache | Unlimited (S3) |
| Ephemeral Registry | Docker images | Fast | Temporary |
| EBS Snapshots | Docker layers | Fastest | Persistent |
| S3 Cache | Docker buildx exports | Fast | Unlimited |
| EFS | Large datasets | Good | Unlimited |
| tmpfs | Small, hot data | Fastest | RAM-limited |
## Magic Cache

Magic Cache is an S3-based caching layer that works transparently with the standard `actions/cache` action. No workflow changes are required.
### How it works
- RunsOn intercepts cache requests from `actions/cache`
- Caches are stored in an S3 bucket within your AWS account
- S3 VPC endpoints provide high-speed access without egress costs
- Cache retrieval is approximately 5x faster than GitHub's cache
### Enabling Magic Cache
Magic Cache is enabled by default in new RunsOn installations. Verify it's active:
```yaml
jobs:
  build:
    runs-on: runs-on=${{ github.run_id }}/runner=4cpu-linux-x64
    steps:
      - uses: actions/checkout@v4

      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ hashFiles('package-lock.json') }}

      - run: npm ci
```

The workflow uses standard `actions/cache` syntax; RunsOn automatically routes cache operations to S3.
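The `hashFiles('package-lock.json')` expression in the cache key produces a digest of the lockfile's contents, so the cache is invalidated exactly when dependencies change. The idea can be sketched in Python (a simplified model of the expression, not GitHub's exact implementation):

```python
import hashlib
from pathlib import Path

def hash_files(*paths: str) -> str:
    """Return one SHA-256 hex digest over the contents of the given files,
    in order. Simplified stand-in for the hashFiles() workflow expression."""
    digest = hashlib.sha256()
    for path in paths:
        digest.update(Path(path).read_bytes())
    return digest.hexdigest()

# A key like npm-<digest> changes only when the lockfile changes:
# key = f"npm-{hash_files('package-lock.json')}"
```

Because the digest is stable for identical content, re-runs with an unchanged lockfile hit the same cache entry.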
### Benefits
- **Unlimited storage** - no 10 GB cache limit as on GitHub-hosted runners
- **No egress costs** - S3 VPC endpoints keep traffic within AWS
- **Transparent operation** - works with existing workflows unchanged
- **Fallback support** - if RunsOn is unavailable, the standard GitHub cache is used
## Docker caching
RunsOn provides several options for caching Docker builds:
### Ephemeral Registry
A temporary ECR registry within your VPC for fast Docker layer caching:
```yaml
jobs:
  build:
    runs-on: runs-on=${{ github.run_id }}/runner=4cpu-linux-x64
    steps:
      - uses: actions/checkout@v4

      - name: Build with registry cache
        run: |
          docker buildx build \
            --cache-from type=registry,ref=$RUNS_ON_REGISTRY/myapp:cache \
            --cache-to type=registry,ref=$RUNS_ON_REGISTRY/myapp:cache,mode=max \
            -t myapp:latest .
```

The `$RUNS_ON_REGISTRY` environment variable points to your ephemeral registry.
### EBS Snapshots
Block-level snapshots provide the fastest Docker caching by eliminating layer export and compression:
```yaml
jobs:
  build:
    runs-on: runs-on=${{ github.run_id }}/runner=4cpu-linux-x64/snapshot=docker
    steps:
      - uses: actions/checkout@v4

      - name: Build with snapshot cache
        run: docker build -t myapp:latest .
```

The `/snapshot=docker` parameter restores Docker's data directory from a previous snapshot.
### S3 Cache for buildx
Export Docker layers directly to S3:
```yaml
jobs:
  build:
    runs-on: runs-on=${{ github.run_id }}/runner=4cpu-linux-x64
    steps:
      - uses: actions/checkout@v4

      - name: Build with S3 cache
        run: |
          docker buildx build \
            --cache-from type=s3,bucket=$RUNS_ON_CACHE_BUCKET,name=myapp \
            --cache-to type=s3,bucket=$RUNS_ON_CACHE_BUCKET,name=myapp,mode=max \
            -t myapp:latest .
```

## EFS for large data
Mount EFS volumes for large, frequently accessed datasets:
```yaml
jobs:
  ml-train:
    runs-on: runs-on=${{ github.run_id }}/runner=8cpu-linux-x64/efs=my-dataset:/data
    steps:
      - name: Train model
        run: python train.py --data-dir /data
```

EFS benefits:
- No compression overhead
- Unlimited storage scaling
- Persistent across jobs
- Shared access for parallel jobs
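Shared access is the property that distinguishes EFS from per-runner storage: parallel jobs can read the same dataset without copying it onto each runner. A hedged sketch of that pattern, reusing the `/efs=my-dataset:/data` spec from above (the matrix values and `--fold` flag are illustrative):

```yaml
jobs:
  ml-train:
    strategy:
      matrix:
        fold: [0, 1, 2, 3]
    runs-on: runs-on=${{ github.run_id }}/runner=8cpu-linux-x64/efs=my-dataset:/data
    steps:
      - name: Train one fold against the shared dataset
        run: python train.py --data-dir /data --fold ${{ matrix.fold }}
```

Each matrix job gets its own runner, but all four mount the same EFS volume at `/data`.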
## tmpfs for speed
Use RAM-based tmpfs for performance-critical workloads:
```yaml
jobs:
  test:
    runs-on: runs-on=${{ github.run_id }}/runner=8cpu-linux-x64
    steps:
      - uses: actions/checkout@v4

      - name: Run tests with tmpfs
        run: |
          mkdir -p /tmp/test-cache
          sudo mount -t tmpfs -o size=4G tmpfs /tmp/test-cache
          TEST_CACHE_DIR=/tmp/test-cache npm test
```

## Caching strategy recommendations
### For Node.js projects
```yaml
- uses: actions/cache@v4
  with:
    path: |
      ~/.npm
      node_modules
    key: node-${{ hashFiles('package-lock.json') }}
    restore-keys: node-
```

### For Python projects
```yaml
- uses: actions/cache@v4
  with:
    path: |
      ~/.cache/pip
      .venv
    key: python-${{ hashFiles('requirements.txt') }}
    restore-keys: python-
```

### For Go projects
```yaml
- uses: actions/cache@v4
  with:
    path: |
      ~/go/pkg/mod
      ~/.cache/go-build
    key: go-${{ hashFiles('go.sum') }}
    restore-keys: go-
```

### For Rust projects
```yaml
- uses: actions/cache@v4
  with:
    path: |
      ~/.cargo/registry
      ~/.cargo/git
      target
    key: rust-${{ hashFiles('Cargo.lock') }}
    restore-keys: rust-
```

## Cache management
### Viewing cache usage
Cache metrics are available in:
- CloudWatch dashboards
- Daily cost emails
- RunsOn entry point dashboard
### Clearing caches
Clear S3 caches via AWS CLI:
```shell
aws s3 rm s3://your-runs-on-cache-bucket/actions-cache/ --recursive
```

Clear specific cache keys:
```shell
aws s3 rm s3://your-runs-on-cache-bucket/actions-cache/node- --recursive
```

### Cache retention
Configure S3 lifecycle rules for automatic cleanup:
```yaml
# CloudFormation parameter
CacheRetentionDays: 30
```

## Comparison with GitHub cache
| Feature | GitHub Cache | RunsOn Magic Cache |
|---|---|---|
| Storage limit | 10 GB | Unlimited |
| Download speed | ~100 MB/s | ~500 MB/s |
| Upload speed | ~50 MB/s | ~300 MB/s |
| Retention | 7 days unused | Configurable |
| Cost | Included | S3 storage costs |
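Using the approximate throughput figures above, the per-job impact is easy to estimate: restoring a 2 GB cache takes roughly 20 s at 100 MB/s versus about 4 s at 500 MB/s. A small sketch of that arithmetic (the 2 GB cache size is illustrative):

```python
def restore_seconds(cache_mb: float, throughput_mb_s: float) -> float:
    """Time to download a cache of cache_mb megabytes at the given throughput."""
    return cache_mb / throughput_mb_s

cache_mb = 2048  # illustrative 2 GB cache
github_s = restore_seconds(cache_mb, 100)  # GitHub cache at ~100 MB/s
magic_s = restore_seconds(cache_mb, 500)   # Magic Cache at ~500 MB/s
print(f"GitHub cache: {github_s:.1f}s, Magic Cache: {magic_s:.1f}s")
```

Multiplied across every cache-restoring job in a busy repository, the difference adds up quickly.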
## Next steps
- Review Magic Cache documentation for advanced options
- Explore Docker caching strategies in detail
- Set up monitoring for cache metrics