RunsOn Features Overview
Advanced self-hosted runners with Magic Cache, EBS snapshots, and complete observability
RunsOn provides enterprise-grade self-hosted GitHub Actions runners on AWS with advanced caching, observability, and cost optimization. This guide covers the capabilities that make RunsOn up to 5x faster than GitHub-hosted runners.
🚀 Performance Features#
Magic Cache: 5x faster builds#
RunsOn's transparent S3 caching backend works seamlessly with actions/cache without any code changes:
```yaml
# Enable with just a label change
runs-on: runs-on/runner=2cpu-linux-x64/extras=s3-cache
```

Benefits:
- ✅ 5x faster than GitHub's cache
- ✅ Unlimited storage in your VPC
- ✅ Zero code changes - works with existing cache actions
- ✅ Automatic cleanup of old cache entries
- ✅ Cost-effective S3 storage pricing
Use cases:
- Large monorepos with extensive dependencies
- Build artifacts that persist across workflows
- Package manager caches (npm, pip, cargo, etc.)
- Docker layer caching for faster builds
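Because the cache backend is transparent, an existing `actions/cache` step needs no changes; only the runner label differs. A minimal sketch (the npm cache path and key below are illustrative, not prescribed by RunsOn):

```yaml
jobs:
  build:
    runs-on: runs-on/runner=2cpu-linux-x64/extras=s3-cache
    steps:
      - uses: actions/checkout@v4
      # This step is unmodified; reads and writes are served
      # from the S3 backend instead of GitHub's cache.
      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ hashFiles('package-lock.json') }}
      - run: npm ci
```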
Ephemeral ECR Registry#
Stop pulling base images from Docker Hub on every build. RunsOn creates a shared ECR registry in your VPC:
```yaml
# Automatically enabled with Docker builds
runs-on: runs-on/runner=4cpu-linux-x64
```

Benefits:
- ✅ Shared registry across all runners
- ✅ Auto-cleanup of old images
- ✅ No Docker Hub rate limits
- ✅ Reduced bandwidth costs
- ✅ Faster image pulls within your VPC
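Since the registry is enabled automatically, an ordinary Docker build job benefits without workflow changes. A sketch (the image name is illustrative):

```yaml
jobs:
  image:
    runs-on: runs-on/runner=4cpu-linux-x64
    steps:
      - uses: actions/checkout@v4
      # Base image pulls are served from the VPC-local registry,
      # avoiding Docker Hub rate limits; no registry config needed here.
      - name: Build image
        run: docker build -t my-app:${{ github.sha }} .
```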
EBS Snapshots for instant restore#
Snapshot your entire Docker state and restore it in seconds with block-level efficiency:
```yaml
# Use EBS snapshots for Docker layer caching
- uses: runs-on/snapshot@v1
  with:
    path: /var/lib/docker
```

Benefits:
- ✅ Instant restore of entire Docker state
- ✅ Block-level deduplication for efficient storage
- ✅ Perfect for large monorepos
- ✅ Seconds, not minutes for cache restoration
- ✅ Automatic snapshot management
📊 Complete Observability#
Built-in monitoring#
RunsOn integrates with OpenTelemetry and provides comprehensive visibility:
Per-job metrics:
- Execution time and performance data
- Resource utilization (CPU, RAM, disk)
- Success/failure rates
- Cost attribution per workflow
CloudWatch dashboards:
- ✅ Pre-built dashboards included
- ✅ Automatic cost attribution per workflow
- ✅ SNS alerts for failures and anomalies
- ✅ Full audit trail in your account
100% self-hosted observability#
All monitoring data stays in your AWS account:
```yaml
# Observability is automatically enabled
runs-on: runs-on/runner=8cpu-linux-x64
```

Benefits:
- ✅ Your data, your control - no vendor lock-in
- ✅ Full audit capabilities in CloudWatch
- ✅ Custom alerts and notifications
- ✅ Cost tracking by repository and workflow
- ✅ Security compliance with data residency
⚡ Instance Management#
Any AWS instance type#
From 1 to 896 vCPUs with full flexibility:
| Instance Type | vCPUs | RAM | Use Case |
|---|---|---|---|
| m7i.large | 2 | 8 GB | General CI/CD |
| c7i.4xlarge | 16 | 32 GB | Compilation |
| r7i.8xlarge | 32 | 256 GB | Large builds |
| g5.xlarge | 4 | 16 GB + GPU | ML workloads |
| p4d.24xlarge | 96 | 1.1 TB + GPU | Training |
Supported architectures:
- ✅ x64 (Intel/AMD)
- ✅ ARM64 (Graviton) - 30% cost savings
- ✅ GPU instances for ML/CUDA workloads
- ✅ Windows and Linux support
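Switching architectures is a label change. A sketch, assuming the ARM64 runner label follows the same naming pattern as the x64 labels used throughout this guide:

```yaml
# x64 (Intel/AMD)
runs-on: runs-on/runner=2cpu-linux-x64

# ARM64 (Graviton) - same workflow, roughly 30% lower cost
runs-on: runs-on/runner=2cpu-linux-arm64
```

Note that any compiled dependencies or container images must have ARM64 builds available.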
Smart spot instance handling#
60-90% cost savings with intelligent fallback:
```yaml
# Spot instances with auto-fallback (default)
runs-on: runs-on/runner=4cpu-linux-x64

# Force on-demand for critical jobs
runs-on: runs-on/runner=4cpu-linux-x64/spot=false
```

Features:
- ✅ Auto-retry on spot interruption
- ✅ On-demand fallback for critical workflows
- ✅ 5% average interruption rate
- ✅ Cost optimization algorithms
- ✅ Capacity optimization across AZs
🔒 Security & Compliance#
Enterprise security#
RunsOn runs entirely in your AWS account with full control:
Infrastructure security:
- ✅ Runs in your VPC - complete network isolation
- ✅ Fully ephemeral VMs - no state persistence
- ✅ Static IPs available for whitelisting
- ✅ IAM role integration for secure access
- ✅ VPC endpoints for private connectivity
Data protection:
- ✅ No data leaves your infrastructure
- ✅ Full audit trail in CloudTrail
- ✅ Encryption at rest and in transit
- ✅ Compliance ready for regulated industries
Access control#
Fine-grained permissions and access management:
```yaml
# Enable SSH for debugging
runs-on: runs-on/runner=4cpu-linux-x64/ssh=true

# Add resource tags for cost tracking
runs-on: runs-on/runner=4cpu-linux-x64/tag:team=platform/tag:project=api
```

🛠️ Advanced Configuration#
Custom images and tools#
Bring your own AMI or use pre-built images:
```yaml
# Custom AMI with pre-installed tools
runs-on: runs-on/${{ github.run_id }}/image=my-custom-image-*

# Pre-install scripts before every job
runs-on: runs-on/${{ github.run_id }}/preinstall=setup-tools.sh
```

Pre-built images available:
- `ubuntu22-full-x64` - Ubuntu 22.04 with full GitHub tools
- `ubuntu22-full-arm64` - Ubuntu 22.04 ARM64 optimized
- `ubuntu24-full-x64` - Ubuntu 24.04 with latest tools
- `ubuntu24-full-arm64` - Ubuntu 24.04 ARM64
Flexible configuration options#
Configure every aspect via labels:
```yaml
# Complete configuration example
runs-on: runs-on/${{ github.run_id }}/cpu=8/ram=64/family=c7i+r7i/volume=200/ssh=true/tag:env=prod
```

Available parameters:
- `cpu` - Number of vCPUs (1-896)
- `ram` - RAM in GB (4-1024)
- `family` - Allowed instance families, joined with `+` (e.g. `m7i+c7i+r7i`)
- `volume` - Root volume size in GB
- `ssh` - Enable SSH access for debugging
- `spot` - Use spot instances (default: `true`)
💰 Cost Optimization#
Transparent pricing#
Pay only for what you use with no hidden fees:
Example costs (spot instances):
- `2cpu-linux-x64`: $0.0011/minute (~$66/month for full-time usage)
- `8cpu-linux-x64`: $0.0038/minute (~$228/month for full-time usage)
- `32cpu-linux-x64`: $0.0154/minute (~$924/month for full-time usage)
Cost savings vs GitHub-hosted:
- ✅ 60-90% savings with spot instances
- ✅ 30% savings with ARM64 instances
- ✅ No data transfer costs within VPC
- ✅ S3 cache cheaper than GitHub cache
- ✅ No per-minute minimums
Smart resource management#
Automatic optimization features:
- ✅ Auto-scaling based on queue depth
- ✅ Instance right-sizing recommendations
- ✅ Scheduled scaling for predictable patterns
- ✅ Cost alerts and budget controls
- ✅ Resource tagging for allocation
🎯 Use Cases#
When to choose RunsOn#
Perfect for:
- Long-running jobs - No 6-hour timeout limits
- Resource-intensive builds - Up to 896 vCPUs available
- Large monorepos - Magic Cache and EBS snapshots
- Security-sensitive projects - Full data control
- Cost-conscious teams - 60-90% savings potential
- Compliance requirements - Data stays in your account
Ideal workloads:
- Compilation - C++, Rust, Go compilation
- Testing - Large test suites, integration tests
- Docker builds - Multi-stage builds, large images
- Machine Learning - Training, inference with GPU support
- Mobile apps - iOS/Android builds with specific tools
Migration scenarios#
From GitHub-hosted runners:
```yaml
# Before
runs-on: ubuntu-latest

# After - 5x faster, 80% cheaper
runs-on: runs-on/runner=4cpu-linux-x64/extras=s3-cache+tmpfs
```

From self-hosted VMs:
```yaml
# Before - Manual VM management
runs-on: self-hosted

# After - Auto-scaling, caching, observability
runs-on: runs-on/runner=8cpu-linux-x64/extras=s3-cache
```

🚀 Getting Started#
10-minute setup#
1. Deploy the CloudFormation stack - One-click installation
2. Configure the GitHub App - Automatic repository access
3. Update workflow labels - Simple `runs-on` changes
4. Enable caching - Add `extras=s3-cache` to labels
Example workflows#
Basic web application:
```yaml
name: Build and Test
on: [push, pull_request]

jobs:
  build:
    runs-on: runs-on/runner=2cpu-linux-x64/extras=s3-cache
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
      - name: Build
        run: npm run build
```

Large monorepo with Docker:
```yaml
name: Monorepo Build
on: [push]

jobs:
  build:
    runs-on: runs-on/runner=8cpu-linux-x64/extras=s3-cache+tmpfs
    steps:
      - uses: actions/checkout@v4
      - name: Docker layer cache
        uses: runs-on/snapshot@v1
        with:
          path: /var/lib/docker
      - name: Build services
        run: docker-compose build
      - name: Run integration tests
        run: docker-compose up --abort-on-container-exit
```

📈 Performance Comparisons#
Build time improvements#
| Project | GitHub-hosted | RunsOn with Magic Cache | Improvement |
|---|---|---|---|
| Large React app | 8m 30s | 1m 45s | 4.8x faster |
| Go monorepo | 12m 15s | 2m 30s | 4.9x faster |
| Rust compilation | 15m 40s | 3m 10s | 5.0x faster |
| Docker build | 6m 20s | 1m 20s | 4.8x faster |
Cost comparisons#
| Usage | GitHub-hosted | RunsOn (spot) | Savings |
|---|---|---|---|
| 2 CPU, 8h/day | $216/month | $43/month | 80% |
| 4 CPU, 8h/day | $432/month | $86/month | 80% |
| 8 CPU, 8h/day | $864/month | $173/month | 80% |
🔧 Advanced Features#
tmpfs for maximum speed#
Use RAM-speed caching for critical operations:
```yaml
# Enable tmpfs for ultra-fast caching
runs-on: runs-on/runner=4cpu-linux-x64/extras=s3-cache+tmpfs
```

Benefits:
- ✅ RAM-speed cache operations
- ✅ Perfect for compiler caches
- ✅ Automatic cleanup on job completion
- ✅ Configurable size based on instance RAM
EFS for persistent storage#
Mount EFS for large persistent datasets:
```yaml
# Mount EFS for large datasets
runs-on: runs-on/runner=8cpu-linux-x64/efs=fs-12345678:/data
```

Use cases:
- Large test datasets
- Build artifacts sharing
- Persistent package caches
- Shared tool repositories
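Once the `efs=...` label above mounts the filesystem, steps can use the mount path like any local directory. A sketch (the dataset and destination paths are illustrative):

```yaml
# Assumes the EFS filesystem is mounted at /data via the label above
- name: Restore shared test dataset
  run: cp -r /data/datasets ./test-fixtures
```

Because the filesystem persists across jobs, a nightly workflow can refresh `/data` while regular CI jobs only read from it.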
GPU support#
Full GPU instance support for ML workloads:
```yaml
# GPU instance for ML training
runs-on: runs-on/runner=g5.xlarge/extras=s3-cache
```

Supported GPU instances:
- `g5.xlarge` - 1 GPU, 24 GB VRAM
- `g5.4xlarge` - 1 GPU, 24 GB VRAM
- `g5.8xlarge` - 1 GPU, 24 GB VRAM (more vCPUs and network bandwidth)
- `p4d.24xlarge` - 8 GPUs, 40 GB VRAM each
📚 Next Steps#
- Installation Guide - Deploy RunsOn in 10 minutes
- Configuration Reference - Detailed configuration options
- Caching Guide - Optimize your caching strategy
- Security Hardening - Secure your runner deployment
- Troubleshooting - Common issues and solutions
🎉 Summary#
RunsOn provides the most advanced self-hosted runner solution with:
- 5x faster builds with Magic Cache and EBS snapshots
- 80% cost savings with intelligent spot instance usage
- Complete observability with built-in CloudWatch dashboards
- Enterprise security with full data control in your AWS account
- Zero maintenance with auto-healing infrastructure
- 10-minute setup with one-click CloudFormation deployment
Transform your CI/CD pipeline with RunsOn's enterprise-grade features while maintaining full control over your infrastructure and data.