# Configuring RunsOn Runners

Customize CPU, RAM, instance types, and images for your workflows.

RunsOn provides flexible configuration through job labels, repository configuration files, and stack-level settings. This guide covers all configuration options.
## Job label syntax

The simplest way to configure runners is through the `runs-on` label in your workflow:

```yaml
runs-on: runs-on=${{ github.run_id }}/runner=<runner-type>
```

The `${{ github.run_id }}` segment ensures each workflow run gets unique runner instances.
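In context, a minimal workflow job using this label might look like the following (the job name and build step are illustrative):

```yaml
name: ci
on: push

jobs:
  build:
    # Each run gets its own runner; 2cpu-linux-x64 is one of the
    # pre-defined runner types described below
    runs-on: runs-on=${{ github.run_id }}/runner=2cpu-linux-x64
    steps:
      - uses: actions/checkout@v4
      - run: make test
```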
## Pre-defined runner types

RunsOn provides pre-configured runner types for common use cases:
### Linux x64

| Runner | vCPUs | RAM | Instance | Cost/min |
|---|---|---|---|---|
| 1cpu-linux-x64 | 1 | 4 GB | m7a.medium | $0.0008 |
| 2cpu-linux-x64 | 2 | 8 GB | m7i.large | $0.0011 |
| 4cpu-linux-x64 | 4 | 16 GB | m7i.xlarge | $0.0019 |
| 8cpu-linux-x64 | 8 | 32 GB | m7i.2xlarge | $0.0038 |
| 16cpu-linux-x64 | 16 | 64 GB | m7i.4xlarge | $0.0077 |
| 32cpu-linux-x64 | 32 | 128 GB | m7i.8xlarge | $0.0154 |
| 64cpu-linux-x64 | 64 | 256 GB | c7a.16xlarge | $0.0147 |
### Linux ARM64

| Runner | vCPUs | RAM | Instance | Cost/min |
|---|---|---|---|---|
| 1cpu-linux-arm64 | 1 | 4 GB | m7g.medium | $0.0006 |
| 2cpu-linux-arm64 | 2 | 8 GB | m7g.large | $0.0010 |
| 4cpu-linux-arm64 | 4 | 16 GB | m7g.xlarge | $0.0016 |
| 8cpu-linux-arm64 | 8 | 32 GB | m7g.2xlarge | $0.0032 |
| 16cpu-linux-arm64 | 16 | 64 GB | m7g.4xlarge | $0.0064 |
| 32cpu-linux-arm64 | 32 | 128 GB | m7g.8xlarge | $0.0128 |
| 64cpu-linux-arm64 | 64 | 256 GB | c7g.16xlarge | $0.0200 |
Prices shown are for spot instances. On-demand pricing is approximately 3x higher.
## Custom configuration

For fine-grained control, use individual parameters in the label:

```yaml
runs-on: runs-on=${{ github.run_id }}/cpu=4/ram=16/family=m7a+m7i-flex
```

### Available parameters

| Parameter | Description | Example |
|---|---|---|
| cpu | Number of vCPUs | cpu=4 |
| ram | RAM in GB | ram=16 |
| family | EC2 instance families (+ separated) | family=m7a+m7i |
| image | AMI name or ID | image=ubuntu24-full-x64 |
| spot | Use spot instances | spot=true (default) |
| volume | Root volume size in GB | volume=100 |
| hdd | Additional HDD volume | hdd=500 |
| ssh | Enable SSH access | ssh=true |
### Examples

High-memory build:

```yaml
runs-on: runs-on=${{ github.run_id }}/cpu=8/ram=64/family=r7i
```

Large disk for artifacts:

```yaml
runs-on: runs-on=${{ github.run_id }}/runner=4cpu-linux-x64/volume=200
```

On-demand for critical jobs:

```yaml
runs-on: runs-on=${{ github.run_id }}/runner=8cpu-linux-x64/spot=false
```

ARM64 with custom image:

```yaml
runs-on: runs-on=${{ github.run_id }}/cpu=4/family=m7g/image=ubuntu24-full-arm64
```

## Default images
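Labels can also be built dynamically from a matrix, for example to run the same job on both architectures (a sketch; the matrix values and build step are illustrative):

```yaml
jobs:
  test:
    strategy:
      matrix:
        # One job per runner type; the label below picks it up
        runner: [4cpu-linux-x64, 4cpu-linux-arm64]
    runs-on: runs-on=${{ github.run_id }}/runner=${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      - run: make test
```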
RunsOn provides four pre-built Ubuntu images that mirror GitHub-hosted runner tooling:
| Image | OS | Architecture | Tools |
|---|---|---|---|
| ubuntu22-full-x64 | Ubuntu 22.04 | x64 | Full GitHub runner tools |
| ubuntu22-full-arm64 | Ubuntu 22.04 | ARM64 | Full GitHub runner tools |
| ubuntu24-full-x64 | Ubuntu 24.04 | x64 | Full GitHub runner tools |
| ubuntu24-full-arm64 | Ubuntu 24.04 | ARM64 | Full GitHub runner tools |
Images are rebuilt every 15 days from the official GitHub runner-images repository to maintain compatibility.
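An image can be pinned explicitly in the label when a job needs a specific OS release, for example Ubuntu 22.04 on ARM64:

```yaml
runs-on: runs-on=${{ github.run_id }}/runner=2cpu-linux-arm64/image=ubuntu22-full-arm64
```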
## Custom images

Create custom AMIs with pre-installed dependencies for faster job startup:

1. Launch an instance with a base RunsOn image
2. Install your custom software and dependencies
3. Create an AMI from the instance
4. Reference the AMI in your workflow:

```yaml
runs-on: runs-on=${{ github.run_id }}/image=ami-0123456789abcdef0
```

For wildcard-based automatic updates, name your AMIs with a pattern:

```yaml
runs-on: runs-on=${{ github.run_id }}/image=my-custom-image-*
```

RunsOn automatically uses the latest AMI matching the pattern.
## Repository configuration

Create a `.github/runs-on.yml` file for repository-level defaults:

```yaml
# .github/runs-on.yml
defaults:
  runner: 4cpu-linux-x64
  spot: true

images:
  my-image:
    ami: ami-0123456789abcdef0

runners:
  large-build:
    cpu: 16
    ram: 64
    family: c7i
```

Reference custom runners in workflows:

```yaml
runs-on: runs-on=${{ github.run_id }}/runner=large-build
```

## Stack configuration
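A named image from the `images` section can plausibly be combined with a named runner in the same way; as a sketch, assuming the `my-image` entry defined above can be referenced through the `image` parameter:

```yaml
runs-on: runs-on=${{ github.run_id }}/runner=large-build/image=my-image
```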
Configure organization-wide defaults via CloudFormation parameters:

| Parameter | Description |
|---|---|
| DefaultRunnerType | Default runner when not specified |
| AllowedInstanceFamilies | Restrict available instance types |
| MaxConcurrentRunners | Limit concurrent EC2 instances |
| DefaultSpot | Enable/disable spot by default |
| DefaultVolumeSize | Default root volume size |
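As an illustrative sketch only (the values below are hypothetical, and the exact format depends on how you deploy the stack), these parameters might be supplied as overrides when updating the CloudFormation stack:

```yaml
# Hypothetical values for the stack parameters listed above
DefaultRunnerType: 2cpu-linux-x64
AllowedInstanceFamilies: "m7i,m7a,c7i"
MaxConcurrentRunners: 50
DefaultSpot: true
DefaultVolumeSize: 40
```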
## Instance families

Choose instance families based on workload characteristics:

| Family | Optimized For | Use Case |
|---|---|---|
| m7i, m7a | General purpose | Most CI/CD workloads |
| c7i, c7a | Compute | Compilation, testing |
| r7i, r7a | Memory | Large datasets, caching |
| m7g, c7g | ARM64 | Cost-efficient builds |
| g5, p4d | GPU | ML training, rendering |

Combine families with `+` for flexibility:

```yaml
runs-on: runs-on=${{ github.run_id }}/cpu=4/family=m7i+m7a+c7i
```

RunsOn selects the most cost-effective available instance.
## Spot instances

Spot instances provide 60-90% cost savings. RunsOn handles spot interruptions automatically:

- Jobs are retried on a new instance if interrupted
- On-demand fallback is available for critical jobs
- Interruption rates are typically under 5%

Disable spot for time-sensitive jobs:

```yaml
runs-on: runs-on=${{ github.run_id }}/runner=8cpu-linux-x64/spot=false
```

## Resource tagging
Add AWS tags for cost allocation:
```yaml
runs-on: runs-on=${{ github.run_id }}/runner=4cpu-linux-x64/tag:team=platform/tag:project=api
```

Tags appear on EC2 instances and related resources for cost tracking.
## Next steps
- Set up caching to accelerate builds
- Review Linux runner options for detailed specifications
- Explore GPU runners for ML workloads