RunsOn Features Overview

Advanced self-hosted runners with Magic Cache, EBS snapshots, and complete observability


RunsOn provides enterprise-grade self-hosted GitHub Actions runners on AWS with advanced caching, observability, and cost optimization features. This guide covers the capabilities that make RunsOn up to 5x faster than GitHub-hosted runners.

🚀 Performance Features

Magic Cache: 5x faster builds

RunsOn's transparent S3 caching backend works seamlessly with actions/cache without any code changes:

```yaml
# Enable with just a label change
runs-on: runs-on/runner=2cpu-linux-x64/extras=s3-cache
```

Benefits:

  • 5x faster than GitHub's cache
  • Unlimited storage in your VPC
  • Zero code changes - works with existing cache actions
  • Automatic cleanup of old cache entries
  • Cost-effective S3 storage pricing

Use cases:

  • Large monorepos with extensive dependencies
  • Build artifacts that persist across workflows
  • Package manager caches (npm, pip, cargo, etc.)
  • Docker layer caching for faster builds
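For instance, an existing actions/cache step keeps working unchanged once the s3-cache extra is enabled; a minimal sketch (the cache path and key below are illustrative, not prescribed by RunsOn):

```yaml
jobs:
  build:
    runs-on: runs-on/runner=2cpu-linux-x64/extras=s3-cache
    steps:
      - uses: actions/checkout@v4
      # Standard actions/cache usage - transparently served from S3 in your VPC
      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ runner.os }}-${{ hashFiles('package-lock.json') }}
      - run: npm ci
```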

Ephemeral ECR Registry

Stop pulling base images from Docker Hub on every build. RunsOn creates a shared ECR registry in your VPC:

```yaml
# Automatically enabled with Docker builds
runs-on: runs-on/runner=4cpu-linux-x64
```

Benefits:

  • Shared registry across all runners
  • Auto-cleanup of old images
  • No Docker Hub rate limits
  • Reduced bandwidth costs
  • Faster image pulls within your VPC

EBS Snapshots for instant restore

Snapshot your entire Docker state and restore it in seconds with block-level efficiency:

```yaml
# Use EBS snapshots for Docker layer caching
- uses: runs-on/snapshot@v1
  with:
    path: /var/lib/docker
```

Benefits:

  • Instant restore of entire Docker state
  • Block-level deduplication for efficient storage
  • Perfect for large monorepos
  • Seconds, not minutes for cache restoration
  • Automatic snapshot management

📊 Complete Observability

Built-in monitoring

RunsOn integrates with OpenTelemetry and provides comprehensive visibility:

Per-job metrics:

  • Execution time and performance data
  • Resource utilization (CPU, RAM, disk)
  • Success/failure rates
  • Cost attribution per workflow

CloudWatch dashboards:

  • Pre-built dashboards included
  • Automatic cost attribution per workflow
  • SNS alerts for failures and anomalies
  • Full audit trail in your account

100% self-hosted observability

All monitoring data stays in your AWS account:

```yaml
# Observability is automatically enabled
runs-on: runs-on/runner=8cpu-linux-x64
```

Benefits:

  • Your data, your control - no vendor lock-in
  • Full audit capabilities in CloudWatch
  • Custom alerts and notifications
  • Cost tracking by repository and workflow
  • Security compliance with data residency

⚡ Instance Management

Any AWS instance type

From 1 to 896 vCPUs with full flexibility:

| Instance Type | vCPUs | RAM | Use Case |
| --- | --- | --- | --- |
| m7i.large | 2 | 8 GB | General CI/CD |
| c7i.4xlarge | 16 | 32 GB | Compilation |
| r7i.8xlarge | 32 | 256 GB | Large builds |
| g5.xlarge | 4 | 16 GB + GPU | ML workloads |
| p4d.24xlarge | 96 | 1.1 TB + GPU | Training |

Supported architectures:

  • x64 (Intel/AMD)
  • ARM64 (Graviton) - 30% cost savings
  • GPU instances for ML/CUDA workloads
  • Windows and Linux support
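Assuming the ARM64 labels mirror the x64 pattern used throughout this guide (an assumption; check the label reference for your RunsOn version), switching to Graviton is a one-line change, provided your toolchain has ARM64 builds:

```yaml
# ARM64 (Graviton) runner - roughly 30% cheaper than the x64 equivalent
runs-on: runs-on/runner=2cpu-linux-arm64
```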

Smart spot instance handling

60-90% cost savings with intelligent fallback:

```yaml
# Spot instances with auto-fallback (default)
runs-on: runs-on/runner=4cpu-linux-x64

# Force on-demand for critical jobs
runs-on: runs-on/runner=4cpu-linux-x64/spot=false
```

Features:

  • Auto-retry on spot interruption
  • On-demand fallback for critical workflows
  • 5% average interruption rate
  • Cost optimization algorithms
  • Capacity optimization across AZs

🔒 Security & Compliance

Enterprise security

RunsOn runs entirely in your AWS account with full control:

Infrastructure security:

  • Runs in your VPC - complete network isolation
  • Fully ephemeral VMs - no state persistence
  • Static IPs available for whitelisting
  • IAM role integration for secure access
  • VPC endpoints for private connectivity

Data protection:

  • No data leaves your infrastructure
  • Full audit trail in CloudTrail
  • Encryption at rest and in transit
  • Compliance ready for regulated industries

Access control

Fine-grained permissions and access management:

```yaml
# Enable SSH for debugging
runs-on: runs-on/runner=4cpu-linux-x64/ssh=true

# Add resource tags for cost tracking
runs-on: runs-on/runner=4cpu-linux-x64/tag:team=platform/tag:project=api
```

🛠️ Advanced Configuration

Custom images and tools

Bring your own AMI or use pre-built images:

```yaml
# Custom AMI with pre-installed tools
runs-on: runs-on/${{ github.run_id }}/image=my-custom-image-*

# Pre-install scripts before every job
runs-on: runs-on/${{ github.run_id }}/preinstall=setup-tools.sh
```

Pre-built images available:

  • ubuntu22-full-x64 - Ubuntu 22.04 with full GitHub tools
  • ubuntu22-full-arm64 - Ubuntu 22.04 ARM64 optimized
  • ubuntu24-full-x64 - Ubuntu 24.04 with latest tools
  • ubuntu24-full-arm64 - Ubuntu 24.04 ARM64

Flexible configuration options

Configure every aspect via labels:

```yaml
# Complete configuration example
runs-on: runs-on/${{ github.run_id }}/cpu=8/ram=64/family=c7i+r7i/volume=200/ssh=true/tag:env=prod
```

Available parameters:

  • cpu - Number of vCPUs (1-896)
  • ram - RAM in GB (4-1024)
  • family - Instance families (m7i+c7i+r7i)
  • volume - Root volume size in GB
  • ssh - Enable SSH access for debugging
  • spot - Use spot instances (default: true)

💰 Cost Optimization

Transparent pricing

Pay only for what you use with no hidden fees:

Example costs (spot instances):

  • 2cpu-linux-x64: $0.0011/minute (~$66/month for full-time usage)
  • 8cpu-linux-x64: $0.0038/minute (~$228/month for full-time usage)
  • 32cpu-linux-x64: $0.0154/minute (~$924/month for full-time usage)

Cost savings vs GitHub-hosted:

  • 60-90% savings with spot instances
  • 30% savings with ARM64 instances
  • No data transfer costs within VPC
  • S3 cache cheaper than GitHub cache
  • No per-minute minimums

Smart resource management

Automatic optimization features:

  • Auto-scaling based on queue depth
  • Instance right-sizing recommendations
  • Scheduled scaling for predictable patterns
  • Cost alerts and budget controls
  • Resource tagging for allocation

🎯 Use Cases

When to choose RunsOn

Perfect for:

  • Long-running jobs - No 6-hour timeout limits
  • Resource-intensive builds - Up to 896 vCPUs available
  • Large monorepos - Magic Cache and EBS snapshots
  • Security-sensitive projects - Full data control
  • Cost-conscious teams - 60-90% savings potential
  • Compliance requirements - Data stays in your account

Ideal workloads:

  • Compilation - C++, Rust, Go compilation
  • Testing - Large test suites, integration tests
  • Docker builds - Multi-stage builds, large images
  • Machine Learning - Training, inference with GPU support
  • Mobile apps - iOS/Android builds with specific tools

Migration scenarios

From GitHub-hosted runners:

```yaml
# Before
runs-on: ubuntu-latest

# After - 5x faster, 80% cheaper
runs-on: runs-on/runner=4cpu-linux-x64/extras=s3-cache+tmpfs
```

From self-hosted VMs:

```yaml
# Before - Manual VM management
runs-on: self-hosted

# After - Auto-scaling, caching, observability
runs-on: runs-on/runner=8cpu-linux-x64/extras=s3-cache
```

🚀 Getting Started

10-minute setup

  1. Deploy CloudFormation stack - One-click installation
  2. Configure GitHub App - Automatic repository access
  3. Update workflow labels - Simple runs-on changes
  4. Enable caching - Add extras=s3-cache to labels

Example workflows

Basic web application:

```yaml
name: Build and Test
on: [push, pull_request]

jobs:
  build:
    runs-on: runs-on/runner=2cpu-linux-x64/extras=s3-cache
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
      - name: Build
        run: npm run build
```

Large monorepo with Docker:

```yaml
name: Monorepo Build
on: [push]

jobs:
  build:
    runs-on: runs-on/runner=8cpu-linux-x64/extras=s3-cache+tmpfs
    steps:
      - uses: actions/checkout@v4
      - name: Docker layer cache
        uses: runs-on/snapshot@v1
        with:
          path: /var/lib/docker
      - name: Build services
        run: docker-compose build
      - name: Run integration tests
        run: docker-compose up --abort-on-container-exit
```

📈 Performance Comparisons

Build time improvements

| Project | GitHub-hosted | RunsOn with Magic Cache | Improvement |
| --- | --- | --- | --- |
| Large React app | 8m 30s | 1m 45s | 4.8x faster |
| Go monorepo | 12m 15s | 2m 30s | 4.9x faster |
| Rust compilation | 15m 40s | 3m 10s | 5.0x faster |
| Docker build | 6m 20s | 1m 20s | 4.8x faster |

Cost comparisons

| Usage | GitHub-hosted | RunsOn (spot) | Savings |
| --- | --- | --- | --- |
| 2 CPU, 8h/day | $216/month | $43/month | 80% |
| 4 CPU, 8h/day | $432/month | $86/month | 80% |
| 8 CPU, 8h/day | $864/month | $173/month | 80% |

🔧 Advanced Features

tmpfs for maximum speed

Use RAM-speed caching for critical operations:

```yaml
# Enable tmpfs for ultra-fast caching
runs-on: runs-on/runner=4cpu-linux-x64/extras=s3-cache+tmpfs
```

Benefits:

  • RAM-speed cache operations
  • Perfect for compiler caches
  • Automatic cleanup on job completion
  • Configurable size based on instance RAM

EFS for persistent storage

Mount EFS for large persistent datasets:

```yaml
# Mount EFS for large datasets
runs-on: runs-on/runner=8cpu-linux-x64/efs=fs-12345678:/data
```

Use cases:

  • Large test datasets
  • Build artifacts sharing
  • Persistent package caches
  • Shared tool repositories
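A sketch of a job consuming the mounted filesystem, reusing the fs-12345678:/data mapping shown above; the test script and dataset path are hypothetical:

```yaml
jobs:
  test:
    runs-on: runs-on/runner=8cpu-linux-x64/efs=fs-12345678:/data
    steps:
      - uses: actions/checkout@v4
      # /data is the EFS mount point declared in the runner label
      - name: Run tests against the shared dataset
        run: ./run-tests.sh --dataset /data/fixtures
```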

GPU support

Full GPU instance support for ML workloads:

```yaml
# GPU instance for ML training
runs-on: runs-on/runner=g5.xlarge/extras=s3-cache
```

Supported GPU instances:

  • g5.xlarge - 4 vCPUs, 1× NVIDIA A10G (24 GB VRAM)
  • g5.4xlarge - 16 vCPUs, 1× NVIDIA A10G (24 GB VRAM)
  • g5.8xlarge - 32 vCPUs, 1× NVIDIA A10G (24 GB VRAM)
  • p4d.24xlarge - 96 vCPUs, 8× NVIDIA A100 (40 GB VRAM each)
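A minimal GPU job sketch: the nvidia-smi step assumes the runner image ships NVIDIA drivers, and the training command is hypothetical:

```yaml
jobs:
  train:
    runs-on: runs-on/runner=g5.xlarge/extras=s3-cache
    steps:
      - uses: actions/checkout@v4
      # Confirm the GPU is visible before spending time on training
      - name: Check GPU
        run: nvidia-smi
      - name: Train model
        run: python train.py --epochs 10
```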

🎉 Summary

RunsOn provides the most advanced self-hosted runner solution with:

  • 5x faster builds with Magic Cache and EBS snapshots
  • 80% cost savings with intelligent spot instance usage
  • Complete observability with built-in CloudWatch dashboards
  • Enterprise security with full data control in your AWS account
  • Zero maintenance with auto-healing infrastructure
  • 10-minute setup with one-click CloudFormation deployment

Transform your CI/CD pipeline with RunsOn's enterprise-grade features while maintaining full control over your infrastructure and data.