# Caching with RunsOn

Accelerate builds with S3-based caching and Docker optimization


RunsOn provides multiple caching strategies to significantly reduce build times. The S3-based Magic Cache delivers up to 5x faster cache operations compared to GitHub's built-in caching.

## Caching options overview

| Method | Best for | Speed | Storage |
| --- | --- | --- | --- |
| Magic Cache | Dependencies, build artifacts | 5x GitHub cache | Unlimited (S3) |
| Ephemeral Registry | Docker images | Fast | Temporary |
| EBS Snapshots | Docker layers | Fastest | Persistent |
| S3 Cache | Docker buildx exports | Fast | Unlimited |
| EFS | Large datasets | Good | Unlimited |
| tmpfs | Small, hot data | Fastest | RAM-limited |

## Magic Cache

Magic Cache is an S3-based caching layer that works transparently with the standard actions/cache action. No workflow changes required.

### How it works

  1. RunsOn intercepts cache requests from actions/cache
  2. Caches are stored in an S3 bucket within your AWS account
  3. S3 VPC endpoints provide high-speed access without egress costs
  4. Cache retrieval is approximately 5x faster than GitHub's cache

### Enabling Magic Cache

Magic Cache is enabled by default in new RunsOn installations. Verify it's active:

```yaml
jobs:
  build:
    runs-on: runs-on=${{ github.run_id }}/runner=4cpu-linux-x64
    steps:
      - uses: actions/checkout@v4

      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ hashFiles('package-lock.json') }}

      - run: npm ci
```

The workflow uses standard actions/cache syntax. RunsOn automatically routes cache operations to S3.
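The cache key changes whenever the lockfile changes, because `hashFiles()` digests the file contents. A simplified sketch of that derivation in Python (GitHub's real `hashFiles` hashes each matched file and then hashes the concatenated digests; this single-file version is ours, for illustration only):

```python
import hashlib
from pathlib import Path

def cache_key(prefix: str, lockfile: str) -> str:
    """Derive a key like npm-${{ hashFiles('package-lock.json') }}:
    the key changes whenever the lockfile's contents change."""
    digest = hashlib.sha256(Path(lockfile).read_bytes()).hexdigest()
    return f"{prefix}-{digest}"

# Illustrative lockfile; on a runner this is the project's real lockfile.
Path("package-lock.json").write_text('{"lockfileVersion": 3}')
print(cache_key("npm", "package-lock.json"))
```

Because the key is content-addressed, a dependency bump produces a new key and a fresh cache entry, while unchanged lockfiles keep hitting the existing one.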

### Benefits

- **Unlimited storage** - no 10 GB cache limit like GitHub-hosted runners
- **No egress costs** - S3 VPC endpoints keep traffic within AWS
- **Transparent operation** - works with existing workflows unchanged
- **Fallback support** - if RunsOn is unavailable, the standard GitHub cache is used

## Docker caching

RunsOn provides several options for caching Docker builds:

### Ephemeral Registry

A temporary ECR registry within your VPC for fast Docker layer caching:

```yaml
jobs:
  build:
    runs-on: runs-on=${{ github.run_id }}/runner=4cpu-linux-x64
    steps:
      - uses: actions/checkout@v4

      - name: Build with registry cache
        run: |
          docker buildx build \
            --cache-from type=registry,ref=$RUNS_ON_REGISTRY/myapp:cache \
            --cache-to type=registry,ref=$RUNS_ON_REGISTRY/myapp:cache,mode=max \
            -t myapp:latest .
```

The `$RUNS_ON_REGISTRY` environment variable points to your ephemeral registry.
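Because the registry lives inside your VPC, it can also carry the built image itself to downstream jobs in the same workflow run. A sketch of such a step (the `${{ github.sha }}` tag scheme is illustrative, and it assumes the runner's existing registry credentials, as used by the cache step above):

```yaml
      - name: Push image for downstream jobs
        run: |
          # Illustrative tag scheme; any per-run identifier works.
          docker tag myapp:latest $RUNS_ON_REGISTRY/myapp:${{ github.sha }}
          docker push $RUNS_ON_REGISTRY/myapp:${{ github.sha }}
```

Later jobs in the run can then `docker pull` the image instead of rebuilding it.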

### EBS Snapshots

Block-level snapshots provide the fastest Docker caching by eliminating layer export and compression:

```yaml
jobs:
  build:
    runs-on: runs-on=${{ github.run_id }}/runner=4cpu-linux-x64/snapshot=docker
    steps:
      - uses: actions/checkout@v4

      - name: Build with snapshot cache
        run: docker build -t myapp:latest .
```

The `snapshot=docker` parameter restores Docker's data directory from a previous snapshot.

### S3 Cache for buildx

Export Docker layers directly to S3:

```yaml
jobs:
  build:
    runs-on: runs-on=${{ github.run_id }}/runner=4cpu-linux-x64
    steps:
      - uses: actions/checkout@v4

      - name: Build with S3 cache
        run: |
          docker buildx build \
            --cache-from type=s3,bucket=$RUNS_ON_CACHE_BUCKET,name=myapp \
            --cache-to type=s3,bucket=$RUNS_ON_CACHE_BUCKET,name=myapp,mode=max \
            -t myapp:latest .
```
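Note that buildx only supports the `type=s3` cache backend with a containerized builder; the default `docker` driver cannot export this cache type. Creating a builder first, for example with the standard setup action, avoids a build-time error:

```yaml
      # Required for --cache-to type=s3: the default docker driver
      # does not support the s3 cache exporter.
      - uses: docker/setup-buildx-action@v3
```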

## EFS for large data

Mount EFS volumes for large, frequently-accessed datasets:

```yaml
jobs:
  ml-train:
    runs-on: runs-on=${{ github.run_id }}/runner=8cpu-linux-x64/efs=my-dataset:/data
    steps:
      - name: Train model
        run: python train.py --data-dir /data
```

EFS benefits:

- No compression overhead
- Unlimited storage scaling
- Persistent across jobs
- Shared access for parallel jobs
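Shared access makes a fan-out pattern straightforward: one job populates the volume and parallel jobs consume it. A sketch using the same `efs=my-dataset:/data` mount (job names, scripts, and shard count are illustrative):

```yaml
jobs:
  prepare:
    runs-on: runs-on=${{ github.run_id }}/runner=8cpu-linux-x64/efs=my-dataset:/data
    steps:
      # Write preprocessed data once; it persists on the EFS volume.
      - run: python preprocess.py --out /data/preprocessed

  train:
    needs: prepare
    strategy:
      matrix:
        shard: [0, 1, 2, 3]
    runs-on: runs-on=${{ github.run_id }}/runner=8cpu-linux-x64/efs=my-dataset:/data
    steps:
      # Each parallel shard reads the same shared mount.
      - run: python train.py --data-dir /data/preprocessed --shard ${{ matrix.shard }}
```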

## tmpfs for speed

Use RAM-based tmpfs for performance-critical workloads:

```yaml
jobs:
  test:
    runs-on: runs-on=${{ github.run_id }}/runner=8cpu-linux-x64
    steps:
      - uses: actions/checkout@v4

      - name: Run tests with tmpfs
        run: |
          sudo mkdir -p /tmp/test-cache
          sudo mount -t tmpfs -o size=4G tmpfs /tmp/test-cache
          TEST_CACHE_DIR=/tmp/test-cache npm test
```
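Since tmpfs consumes RAM, a fixed `size=4G` can starve the job on smaller instances. One approach is to size the mount from the kernel's `MemAvailable` figure; a minimal Linux-oriented sketch (the 50% fraction is an arbitrary choice of ours, not a RunsOn recommendation):

```python
def tmpfs_size_kb(meminfo_text: str, fraction: float = 0.5) -> int:
    """Return a tmpfs size in kB as a fraction of MemAvailable,
    parsed from /proc/meminfo-style text."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(int(line.split()[1]) * fraction)
    raise ValueError("MemAvailable not found")

# On a runner you would read the real file and pass the result to mount:
#   size_kb = tmpfs_size_kb(open("/proc/meminfo").read())
#   sudo mount -t tmpfs -o size={size_kb}k tmpfs /tmp/test-cache
sample = "MemTotal: 16384000 kB\nMemAvailable: 8192000 kB\n"
print(tmpfs_size_kb(sample))  # 4096000
```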

## Caching strategy recommendations

### For Node.js projects

```yaml
- uses: actions/cache@v4
  with:
    path: |
      ~/.npm
      node_modules
    key: node-${{ hashFiles('package-lock.json') }}
    restore-keys: node-
```

### For Python projects

```yaml
- uses: actions/cache@v4
  with:
    path: |
      ~/.cache/pip
      .venv
    key: python-${{ hashFiles('requirements.txt') }}
    restore-keys: python-
```

### For Go projects

```yaml
- uses: actions/cache@v4
  with:
    path: |
      ~/go/pkg/mod
      ~/.cache/go-build
    key: go-${{ hashFiles('go.sum') }}
    restore-keys: go-
```

### For Rust projects

```yaml
- uses: actions/cache@v4
  with:
    path: |
      ~/.cargo/registry
      ~/.cargo/git
      target
    key: rust-${{ hashFiles('Cargo.lock') }}
    restore-keys: rust-
```

## Cache management

### Viewing cache usage

Cache metrics are available in:

- CloudWatch dashboards
- Daily cost emails
- The RunsOn entry point dashboard

### Clearing caches

Clear S3 caches via AWS CLI:

```bash
aws s3 rm s3://your-runs-on-cache-bucket/actions-cache/ --recursive
```

Clear caches under a specific key prefix:

```bash
aws s3 rm s3://your-runs-on-cache-bucket/actions-cache/node- --recursive
```

### Cache retention

Configure S3 lifecycle rules for automatic cleanup:

```yaml
# CloudFormation parameter
CacheRetentionDays: 30
```
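If you manage the bucket directly, the same retention can be expressed as an S3 lifecycle rule scoped to the cache prefix, applied with `aws s3api put-bucket-lifecycle-configuration`. A sketch (the rule ID is illustrative):

```json
{
  "Rules": [
    {
      "ID": "expire-actions-cache",
      "Status": "Enabled",
      "Filter": { "Prefix": "actions-cache/" },
      "Expiration": { "Days": 30 }
    }
  ]
}
```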

## Comparison with GitHub cache

| Feature | GitHub Cache | RunsOn Magic Cache |
| --- | --- | --- |
| Storage limit | 10 GB | Unlimited |
| Download speed | ~100 MB/s | ~500 MB/s |
| Upload speed | ~50 MB/s | ~300 MB/s |
| Retention | 7 days unused | Configurable |
| Cost | Included | S3 storage costs |
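For a concrete sense of scale, at the download speeds above a 2 GB cache restore drops from roughly 20 seconds to roughly 4 seconds. A quick back-of-the-envelope check (speeds from the table; the cache size is illustrative):

```python
def transfer_seconds(size_mb: float, speed_mb_s: float) -> float:
    """Idealized transfer time, ignoring latency and request overhead."""
    return size_mb / speed_mb_s

size_mb = 2048  # a 2 GB cache entry
github = transfer_seconds(size_mb, 100)  # ~100 MB/s download
magic = transfer_seconds(size_mb, 500)   # ~500 MB/s download
print(f"GitHub cache: {github:.1f}s, Magic Cache: {magic:.1f}s")  # 20.5s vs 4.1s
```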

## Next steps