How to Self-Host Gitness: Modern Open Source CI/CD in 2026
Self-host Gitness (Harness Open Source) in 2026: Git hosting, CI/CD pipelines, container registry, and secrets management in a single lightweight binary.
TL;DR
Gitness (Apache-2.0, 32K+ GitHub stars, written in Go) is Harness's open source unified Git + CI/CD platform. It combines Git repository hosting, pipeline execution, a container registry, secrets management, and a modern web UI in a single lightweight binary. At idle, Gitness uses roughly 100 MB of RAM, compared to GitLab's 4+ GB minimum. If you want GitHub/GitLab-style development workflows without the cloud dependency or the infrastructure overhead, Gitness is one of the best options in 2026.
Key Takeaways
- Gitness: Apache-2.0, 32K+ stars, Go — unified Git + CI/CD in one container
- Pipeline syntax: Compatible with Drone CI YAML format (stages, steps, services)
- Built-in container registry: Push/pull Docker images without a separate Harbor setup
- Secrets management: Encrypted secrets per repository, referenced in pipeline YAML
- Resource use: ~100 MB RAM — significantly lighter than GitLab (~4 GB+) or Forgejo+Woodpecker (~500 MB)
- Apache-2.0 license: Free for commercial self-hosting with no restrictions
Gitness vs Drone vs Woodpecker vs Forgejo+Runner
| Feature | Gitness | Drone CE | Woodpecker CI | Forgejo + Runner |
|---|---|---|---|---|
| License | Apache-2.0 | Apache-2.0 | Apache-2.0 | GPL-3.0 |
| Stars | 32K+ | 11K | 4K | 10K |
| Git hosting | ✅ Built-in | ❌ | ❌ | ✅ |
| Container registry | ✅ Built-in | ❌ | ❌ | ✅ |
| Secret management | ✅ | ✅ | ✅ | ✅ |
| UI quality | Excellent | Basic | Good | Good |
| Min RAM | ~100 MB | ~50 MB | ~50 MB | ~150 MB |
| Active development | ✅ Harness team | Slow | Community | ✅ Codeberg |
Choose Gitness if you want a single tool that covers Git hosting + CI + registry with the best UI and lightest footprint.
Choose Forgejo if you need the most Gitea/GitHub-compatible feature set with a strong community governance model.
Choose Woodpecker if you're already on Gitea/Forgejo and want a battle-tested CI system with a large plugin ecosystem.
Part 1: Docker Setup
# docker-compose.yml
services:
  gitness:
    image: harness/gitness:latest
    container_name: gitness
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - gitness_data:/data
      - /var/run/docker.sock:/var/run/docker.sock # Required for running pipelines
    environment:
      GITNESS_URL_BASE: "http://your-server:3000" # Switch to your HTTPS URL after Part 2
      GITNESS_TOKEN_EXPIRE_DURATION: "720h"
      # Optional: configure SMTP for email notifications
      # GITNESS_MAIL_HOST: "smtp.yourdomain.com"
      # GITNESS_MAIL_PORT: "587"
      # GITNESS_MAIL_USERNAME: "noreply@yourdomain.com"
      # GITNESS_MAIL_PASSWORD: "smtp-password"
      # GITNESS_MAIL_FROM: "noreply@yourdomain.com"

volumes:
  gitness_data:
docker compose up -d
Visit http://your-server:3000 and register your admin account on first launch.
Part 2: HTTPS with Caddy
git.yourdomain.com {
    reverse_proxy localhost:3000
}
After setting up HTTPS, update the environment variable:
environment:
  GITNESS_URL_BASE: "https://git.yourdomain.com" # Updated to HTTPS
docker compose up -d --force-recreate gitness
Part 3: Repository Management
Create a repository:
- Click + New Repository
- Set name, description, visibility (public/private)
- Initialize with README, .gitignore template, and license
Clone your repository:
# HTTPS clone
git clone https://git.yourdomain.com/your-username/repo-name.git
# SSH clone (configure SSH key in Settings → SSH Keys first)
git clone git@git.yourdomain.com:your-username/repo-name.git
Configure SSH keys:
- Go to Settings → SSH Keys → Add New SSH Key
- Paste your public key (~/.ssh/id_ed25519.pub)
- SSH clones work immediately
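If you don't have a key pair yet, a minimal sketch for generating one (the email is just a comment label, and the path is the standard default):

```shell
# Generate an Ed25519 keypair if you don't already have one
KEY="$HOME/.ssh/id_ed25519"
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
if [ ! -f "$KEY" ] && command -v ssh-keygen >/dev/null 2>&1; then
  ssh-keygen -t ed25519 -C "you@yourdomain.com" -f "$KEY" -N ""  # -N '' means no passphrase; consider setting one
fi
[ -f "$KEY.pub" ] && cat "$KEY.pub"  # paste this output into Gitness
```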
Branch protection rules:
- Repository → Settings → Branch Rules
- Add a rule for the main branch:
- Require pull request before merging
- Require N approvals
- Require status checks to pass (pipeline must be green)
Part 4: CI/CD Pipelines
Gitness pipelines are defined in .gitness/pipeline.yaml at the root of your repository. The syntax is based on Drone CI — familiar if you've used Drone, Woodpecker, or GitHub Actions.
Basic Pipeline (Node.js)
# .gitness/pipeline.yaml
kind: pipeline
type: docker
name: ci

steps:
  - name: install
    image: node:20-alpine
    commands:
      - npm ci

  - name: lint
    image: node:20-alpine
    commands:
      - npm run lint
    depends_on:
      - install

  - name: test
    image: node:20-alpine
    commands:
      - npm test
    depends_on:
      - install

  - name: build
    image: node:20-alpine
    commands:
      - npm run build
    when:
      branch:
        - main
    depends_on:
      - test
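Since the syntax follows Drone, you can also gate an entire pipeline (rather than individual steps) with a top-level trigger block. A sketch, assuming your Gitness version honors Drone's trigger semantics:

```yaml
# Run this pipeline only for pushes and pull requests targeting main
trigger:
  branch:
    - main
  event:
    - push
    - pull_request
```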
Pipeline with Database Services
# .gitness/pipeline.yaml — with PostgreSQL for integration tests
kind: pipeline
type: docker
name: integration-tests

services:
  - name: db
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: testdb
      POSTGRES_USER: test
      POSTGRES_PASSWORD: testpass
  - name: redis
    image: redis:7-alpine

steps:
  - name: wait-for-services
    image: alpine:3.19
    commands:
      - apk add postgresql-client redis
      - until pg_isready -h db -U test; do sleep 1; done
      - until redis-cli -h redis ping; do sleep 1; done

  - name: test
    image: golang:1.22-alpine
    environment:
      DATABASE_URL: "postgres://test:testpass@db:5432/testdb?sslmode=disable"
      REDIS_URL: "redis://redis:6379/0"
    commands:
      - go test -v -race ./...
    depends_on:
      - wait-for-services
Docker Build and Push Pipeline
# Build and push to Gitness built-in container registry
kind: pipeline
type: docker
name: docker-publish

steps:
  - name: build-and-push
    image: docker:dind
    privileged: true
    environment:
      REGISTRY_USERNAME:
        from_secret: registry_username
      REGISTRY_PASSWORD:
        from_secret: registry_password
    commands:
      - docker login git.yourdomain.com -u $REGISTRY_USERNAME -p $REGISTRY_PASSWORD
      - docker build -t git.yourdomain.com/$DRONE_REPO:${DRONE_COMMIT_SHA:0:8} .
      - docker push git.yourdomain.com/$DRONE_REPO:${DRONE_COMMIT_SHA:0:8}
      # Tag as 'latest' on main branch
      - |
        if [ "$DRONE_BRANCH" = "main" ]; then
          docker tag git.yourdomain.com/$DRONE_REPO:${DRONE_COMMIT_SHA:0:8} \
            git.yourdomain.com/$DRONE_REPO:latest
          docker push git.yourdomain.com/$DRONE_REPO:latest
        fi
    when:
      branch:
        - main
Multi-Stage Deploy Pipeline
kind: pipeline
type: docker
name: deploy

steps:
  - name: test
    image: python:3.12-slim
    commands:
      - pip install -r requirements.txt
      - pytest --tb=short

  - name: build
    image: docker:dind
    privileged: true
    commands:
      - docker build -t git.yourdomain.com/$DRONE_REPO:$DRONE_COMMIT_SHA .
      - docker push git.yourdomain.com/$DRONE_REPO:$DRONE_COMMIT_SHA
    depends_on:
      - test

  - name: deploy-staging
    image: alpine:3.19
    environment:
      SSH_KEY:
        from_secret: deploy_ssh_key
    commands:
      - apk add --no-cache openssh-client
      - echo "$SSH_KEY" > /tmp/key && chmod 600 /tmp/key
      # Remove any previous container by name before starting the new one
      - |
        ssh -i /tmp/key -o StrictHostKeyChecking=no deploy@staging.yourdomain.com \
          "docker pull git.yourdomain.com/$DRONE_REPO:$DRONE_COMMIT_SHA \
           && (docker rm -f api || true) \
           && docker run -d --name api \
                git.yourdomain.com/$DRONE_REPO:$DRONE_COMMIT_SHA"
    depends_on:
      - build
    when:
      branch:
        - main

  - name: deploy-production
    image: alpine:3.19
    environment:
      SSH_KEY:
        from_secret: deploy_ssh_key
    commands:
      - apk add --no-cache openssh-client
      - echo "$SSH_KEY" > /tmp/key && chmod 600 /tmp/key
      - ssh -i /tmp/key deploy@production.yourdomain.com "deploy-script.sh $DRONE_COMMIT_SHA"
    depends_on:
      - build
    when:
      event:
        - tag # Only deploy to production on git tags
Part 5: Secrets Management
Store sensitive values (API keys, deploy SSH keys, registry passwords) as encrypted secrets — never hardcode them in pipeline YAML.
Add a secret:
- Repository → Settings → Secrets → + New Secret
- Name: deploy_ssh_key (lowercase, no spaces)
- Value: paste the secret value (encrypted at rest)
Reference in pipeline:
environment:
  MY_API_KEY:
    from_secret: my_api_key # Name matches what you set in the UI
  DB_PASSWORD:
    from_secret: database_password
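Inside the step container, the injected value is just a normal environment variable. A sketch of a step consuming one (the endpoint URL is illustrative, not a real API):

```yaml
steps:
  - name: notify
    image: alpine:3.19
    environment:
      MY_API_KEY:
        from_secret: my_api_key
    commands:
      - apk add --no-cache curl
      # The secret is available as $MY_API_KEY; it never appears in the YAML itself
      - curl -H "Authorization: Bearer $MY_API_KEY" https://api.example.com/deployments
```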
Organization-level secrets (available across all repos):
- Organization Settings → Secrets → + New Secret
- Useful for shared registry credentials, Slack webhook URLs
Part 6: Built-in Container Registry
Gitness ships with a Docker-compatible container registry at git.yourdomain.com. You get private image hosting without deploying Harbor or Docker Registry separately.
# Login to the Gitness registry
docker login git.yourdomain.com
# Username: your-gitness-username
# Password: your-gitness-password (or API token from Settings → API Keys)
# Tag and push an image
docker tag myapp:latest git.yourdomain.com/your-org/myapp:v1.0.0
docker push git.yourdomain.com/your-org/myapp:v1.0.0
# Pull the image on your server
docker pull git.yourdomain.com/your-org/myapp:v1.0.0
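For scripts and CI jobs, prefer a non-interactive login that pipes an API token to --password-stdin, which keeps the credential out of shell history and `ps` output. A sketch; the registry host, username, and token variable are placeholders:

```shell
# Non-interactive registry login for scripts/CI
REGISTRY="git.yourdomain.com"                 # your Gitness host (assumption)
GITNESS_TOKEN="${GITNESS_TOKEN:-replace-me}"  # API token from Settings → API Keys
printf '%s' "$GITNESS_TOKEN" | docker login "$REGISTRY" -u your-username --password-stdin \
  || echo "login failed; check the token and registry host"
```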
In Kubernetes or Docker Compose:
# docker-compose.yml on your server
services:
  api:
    image: git.yourdomain.com/your-org/api:latest
    # Add registry credentials via imagePullSecrets (Kubernetes)
    # or run `docker login git.yourdomain.com` on the host (Docker Compose)
Part 7: Webhooks for External Integrations
Trigger external systems when code is pushed, PRs are created, or pipelines complete:
- Repository → Settings → Webhooks → + New Webhook
- URL: your external service endpoint
- Events: push, pull_request, tag_created
Example: Trigger n8n deployment workflow on push:
URL: https://n8n.yourdomain.com/webhook/gitness-push
Events: push
The payload includes:
- ref: the branch name
- commits: an array with author, message, and changed files
- repository: name and clone URL
- sender: who pushed
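A hedged sketch of what a push payload might look like, with field names following the list above. The exact schema varies by Gitness version, so inspect a real delivery in the webhook's execution log before relying on it:

```json
{
  "ref": "refs/heads/main",
  "repository": {
    "name": "repo-name",
    "git_url": "https://git.yourdomain.com/your-username/repo-name.git"
  },
  "commits": [
    { "author": "your-username", "message": "fix: handle empty input", "modified": ["main.go"] }
  ],
  "sender": "your-username"
}
```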
Part 8: External Runner Setup
For CI jobs that need more resources, specific hardware, or a different OS:
# On a dedicated runner machine:
services:
  gitness-runner:
    image: harness/gitness-runner:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      GITNESS_URL: "https://git.yourdomain.com"
      GITNESS_TOKEN: "${RUNNER_TOKEN}" # Create at Admin → Runners → New Runner
      GITNESS_RUNNER_NAME: "runner-01"
      GITNESS_RUNNER_MAX_PROCS: "4" # Concurrent pipelines
      GITNESS_RUNNER_OS: "linux"
      GITNESS_RUNNER_ARCH: "amd64"
Multiple runners can register with the same Gitness instance. Jobs distribute across available runners automatically.
Maintenance and Backup
# Update Gitness to latest version
docker compose pull
docker compose up -d
# Backup all data (repositories + pipeline history + registry)
docker compose stop gitness
tar -czf gitness-backup-$(date +%Y%m%d).tar.gz \
  $(docker volume inspect gitness_gitness_data --format '{{ .Mountpoint }}')
docker compose start gitness
# View logs
docker compose logs -f gitness
# Check resource usage
docker stats gitness
Recommended backup schedule: Daily backup of the gitness_data volume. The backup includes all Git repositories (as bare repos), pipeline history, secrets (encrypted), and the container registry contents.
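The backup steps above can be wrapped into a cron-able script with simple retention. A sketch, assuming the default Compose project prefix (so the volume is gitness_gitness_data) and a /var/backups/gitness destination:

```shell
#!/bin/sh
# Daily Gitness backup with 30-archive retention; volume name and paths are assumptions
BACKUP_DIR="${BACKUP_DIR:-/var/backups/gitness}"
VOLUME="gitness_gitness_data"   # verify with `docker volume ls`
ARCHIVE="$BACKUP_DIR/gitness-backup-$(date +%Y%m%d).tar.gz"

if command -v docker >/dev/null 2>&1; then
  mkdir -p "$BACKUP_DIR"
  docker compose stop gitness
  MOUNT=$(docker volume inspect "$VOLUME" --format '{{ .Mountpoint }}')
  tar -czf "$ARCHIVE" -C "$MOUNT" .
  docker compose start gitness
  # Keep only the newest 30 archives
  ls -1t "$BACKUP_DIR"/gitness-backup-*.tar.gz | tail -n +31 | xargs -r rm -f
else
  echo "docker not found; skipping backup"
fi
```

Drop it in /etc/cron.daily/ (executable) or schedule it with a crontab entry.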
Resource Requirements
| Users | Repositories | Concurrent Pipelines | Recommended RAM | Recommended CPU |
|---|---|---|---|---|
| 1–5 | 1–20 | 1–2 | 512 MB | 1 vCPU |
| 5–20 | 20–100 | 2–5 | 2 GB | 2 vCPU |
| 20–100 | 100–500 | 5–10 | 4 GB | 4 vCPU |
These are approximate. Because the Docker socket mount runs pipeline steps as sibling containers, CI jobs consume host resources directly rather than inflating the Gitness process itself, so size the host for your pipeline workload as well as for Gitness.
Why Self-Host Gitness?
The case for self-hosting Gitness comes down to three practical factors: data ownership, cost at scale, and operational control.
Data ownership is the fundamental argument. When you use a SaaS version of any tool, your data lives on someone else's infrastructure subject to their terms of service, their security practices, and their business continuity. If the vendor raises prices, gets acquired, changes API limits, or shuts down, you're left scrambling. Self-hosting Gitness means your data and configuration stay on infrastructure you control — whether that's a VPS, a bare metal server, or a home lab.
Cost at scale matters once you move beyond individual use. Most SaaS equivalents charge per user or per data volume. A self-hosted instance on a $10-20/month VPS typically costs less than per-user SaaS pricing for teams of five or more — and the cost doesn't scale linearly with usage. One well-configured server handles dozens of users for a flat monthly fee.
Operational control is the third factor. The Docker Compose configuration above exposes every setting that commercial equivalents often hide behind enterprise plans: custom networking, environment variables, storage backends, and authentication integrations. You decide when to update, how to configure backups, and what access controls to apply.
The honest tradeoff: you're responsible for updates, backups, and availability. For teams running any production workloads, this is familiar territory. For individuals, the learning curve is real but the tooling (Docker, Caddy, automated backups) is well-documented and widely supported.
Server Requirements and Sizing
Before deploying Gitness, assess your server capacity against expected workload.
Minimum viable setup: A 1 vCPU, 1GB RAM VPS with 20GB SSD is sufficient for personal use or small teams. Most consumer VPS providers — Hetzner, DigitalOcean, Linode, Vultr — offer machines in this range for $5-10/month. Hetzner offers excellent price-to-performance for European and US regions.
Recommended production setup: 2 vCPUs with 4GB RAM and 40GB SSD handles most medium deployments without resource contention. This gives Gitness headroom for background tasks, caching, and concurrent users while leaving capacity for other services on the same host.
Storage planning: The Docker volumes in this docker-compose.yml store all persistent Gitness data. Estimate your storage growth rate early — for data-intensive tools, budget for 3-5x your initial estimate. Hetzner Cloud and Vultr both support online volume resizing without stopping your instance.
Operating system: Any modern 64-bit Linux distribution works. Ubuntu 22.04 LTS and Debian 12 are the most commonly tested configurations. Ensure Docker Engine 24.0+ and Docker Compose v2 are installed — verify with docker --version and docker compose version. Avoid Docker Desktop on production Linux servers; it adds virtualization overhead and behaves differently from Docker Engine in ways that cause subtle networking issues.
Network: Only ports 80 and 443 need to be publicly accessible when running behind a reverse proxy. Internal service ports should be bound to localhost only. A minimal UFW firewall that blocks all inbound traffic except SSH, HTTP, and HTTPS is the single most effective security measure for a self-hosted server.
Backup and Disaster Recovery
Running Gitness without a tested backup strategy is an unacceptable availability risk. Docker volumes are not automatically backed up — if you delete a volume or the host fails, data is gone with no recovery path.
What to back up: The named Docker volumes containing Gitness's data (database files, user uploads, application state), your docker-compose.yml and any customized configuration files, and .env files containing secrets.
Backup approach: For simple setups, stop the container, archive the volume contents, then restart. For production environments where stopping causes disruption, use filesystem snapshots or database dump commands (PostgreSQL pg_dump, SQLite .backup, MySQL mysqldump) that produce consistent backups without downtime.
For a complete automated backup workflow that ships snapshots to S3-compatible object storage, see the Restic + Rclone backup guide. Restic handles deduplication and encryption; Rclone handles multi-destination uploads. The same setup works for any Docker volume.
Backup cadence: Daily backups to remote storage are a reasonable baseline for actively used tools. Use a 30-day retention window minimum — long enough to recover from mistakes discovered weeks later. For critical data, extend to 90 days and use a secondary destination.
Restore testing: A backup that has never been restored is a backup you cannot trust. Once a month, restore your Gitness backup to a separate Docker Compose stack on different ports and verify the data is intact. This catches silent backup failures, script errors, and volume permission issues before they matter in a real recovery.
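A monthly restore drill can be sketched as follows: unpack the newest archive into a scratch volume, then point a throwaway Compose stack at it. Archive location and volume names here are assumptions:

```shell
#!/bin/sh
# Restore drill: unpack the newest backup into a scratch volume
BACKUP_DIR="${BACKUP_DIR:-/var/backups/gitness}"
LATEST=$(ls -1t "$BACKUP_DIR"/gitness-backup-*.tar.gz 2>/dev/null | head -n 1)

if [ -n "$LATEST" ] && command -v docker >/dev/null 2>&1; then
  docker volume create gitness_restore_test
  docker run --rm -v gitness_restore_test:/data -v "$BACKUP_DIR":/backup:ro \
    alpine sh -c "tar -xzf /backup/$(basename "$LATEST") -C /data"
  echo "unpacked $LATEST into volume gitness_restore_test"
  # Next: run a second compose stack on port 3001 mounting gitness_restore_test:/data
  # and verify repos, pipeline history, and settings look intact
else
  echo "no backup found or docker unavailable"
fi
```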
Security Hardening
Self-hosting means you are responsible for Gitness's security posture. The Docker Compose setup provides a functional base; production deployments need additional hardening.
Always use a reverse proxy: Never expose Gitness's internal port directly to the internet. The docker-compose.yml binds to localhost; Caddy or Nginx provides HTTPS termination. Direct HTTP access transmits credentials in plaintext. A reverse proxy also centralizes TLS management, rate limiting, and access logging.
Strong credentials: Change default passwords immediately after first login. For secrets in docker-compose environment variables, generate random values with openssl rand -base64 32 rather than reusing existing passwords.
Firewall configuration:
ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
Internal service ports (databases, admin panels, internal APIs) should only be reachable from localhost or the Docker network, never directly from the internet.
Network isolation: Docker Compose named networks keep Gitness's services isolated from other containers on the same host. Database containers should not share networks with containers that don't need direct database access.
VPN access for sensitive services: For internal-only tools, restricting access to a VPN adds a strong second layer. Headscale is an open source Tailscale control server that puts your self-hosted stack behind a WireGuard mesh, eliminating public internet exposure for internal tools.
Update discipline: Subscribe to Gitness's GitHub releases page to receive security advisory notifications. Schedule a monthly maintenance window to pull updated images. Running outdated container images is the most common cause of self-hosted service compromises.
Troubleshooting Common Issues
Container exits immediately or won't start
Check logs first — they almost always explain the failure:
docker compose logs -f gitness
Common causes: a missing required environment variable, a port already in use, or a volume permission error. Port conflicts appear as bind: address already in use. Find the conflicting process with ss -tlpn | grep PORT and either stop it or change Gitness's port mapping in docker-compose.yml.
Cannot reach the web interface
Work through this checklist:
- Confirm the container is running: docker compose ps
- Test locally on the server: curl -I http://localhost:3000
- If local access works but external access doesn't, check your firewall: ufw status
- If using a reverse proxy, verify it's running and the config is valid: caddy validate --config /etc/caddy/Caddyfile
Permission errors on volume mounts
Some containers run as a non-root user. If the Docker volume is owned by root, the container process cannot write to it. Find the volume's host path with docker volume inspect VOLUME_NAME, check the tool's documentation for its expected UID, and apply correct ownership:
chown -R 1000:1000 /var/lib/docker/volumes/your_volume/_data
High resource usage over time
Memory or CPU growing continuously usually indicates unconfigured log rotation, an unbound cache, or accumulated data needing pruning. Check current usage with docker stats gitness. Add resource limits in docker-compose.yml to prevent one container from starving others. For ongoing visibility into resource trends, deploy Prometheus + Grafana or Netdata.
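Resource limits can be set directly in the service definition with Compose v2 (non-Swarm). The values below are illustrative, not recommendations:

```yaml
services:
  gitness:
    mem_limit: 2g   # hard memory cap; the container is OOM-killed above this
    cpus: 2         # limit to two CPU cores' worth of time
```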
Data disappears after container restart
Data stored in the container's writable layer — rather than a named volume — is lost when the container is removed or recreated. This happens when the volume mount path in docker-compose.yml doesn't match where the application writes data. Verify mount paths against the tool's documentation and correct the mapping. Named volumes persist across container removal; only docker compose down -v deletes them.
Keeping Gitness Updated
Gitness follows a regular release cadence. Staying current matters for security patches and compatibility. The update process with Docker Compose is straightforward:
docker compose pull # Download updated images
docker compose up -d # Restart with new images
docker image prune -f # Remove old image layers (optional)
Read the changelog before major version updates. Some releases include database migrations or breaking configuration changes. For major version bumps, test in a staging environment first — run a copy of the service on different ports with the same volume data to validate the migration before touching production.
Version pinning: For stability, pin to a specific image tag in docker-compose.yml instead of latest. Update deliberately after reviewing the changelog. This trades automatic patch delivery for predictable behavior — the right call for business-critical services.
Post-update verification: After updating, confirm Gitness is functioning correctly. Most services expose a /health endpoint that returns HTTP 200 — curl it from the server or monitor it with your uptime tool.
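A quick post-update smoke test can be sketched as a one-shot curl. The /health path is an assumption; substitute whatever endpoint your Gitness version exposes:

```shell
# Post-update health probe (URL and path are placeholders)
URL="${URL:-https://git.yourdomain.com}"
if curl -fsS --max-time 10 "$URL/health" >/dev/null 2>&1; then
  STATUS="ok"
else
  STATUS="unreachable"
fi
echo "gitness health: $STATUS"
```

Wire the same check into your uptime monitor so regressions are caught without a manual curl.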
Related: Best Open Source Developer Tools 2026 · Woodpecker CI vs Gitness · How to Self-Host Forgejo