How to Migrate from Sentry to GlitchTip 2026
GlitchTip is Sentry SDK-compatible — meaning the migration is as simple as changing a DSN URL. Same SDKs, same code, just a different backend. No application changes required beyond the connection string. Here's the full guide to setting up GlitchTip and making the switch.
TL;DR
Deploy GlitchTip with Docker Compose (four containers vs Sentry's 20+), create a project, get a DSN, and change one line in your application. Total migration time for an existing app: about 90 minutes. You keep all your Sentry SDK integrations and get unlimited errors for $5-10/month in VPS costs.
Key Takeaways
- GlitchTip is 100% Sentry SDK-compatible — no application code changes beyond the DSN
- Resource requirements: 1-2 GB RAM vs Sentry's 8-16 GB minimum
- Deployment: four containers (web, worker, PostgreSQL, Redis) vs Sentry's 20+
- MIT license vs Sentry's BSL (Business Source License, which restricts commercial self-hosting of Sentry)
- You lose: session replay, profiling, cron monitoring, AI-enhanced grouping, native integrations
- You gain: full self-hosted error tracking at $5-10/month total cost
Why Teams Switch from Sentry
Sentry Cloud costs $26/month for the Team plan and $80/month for Business, and both plans cap the number of errors you can send. Self-hosted Sentry exists but requires significant infrastructure: 8-16 GB of RAM, Docker, 20+ containers, and meaningful maintenance overhead. These resource requirements put self-hosted Sentry out of reach for small teams.
GlitchTip takes the opposite approach: minimal resource requirements, simple deployment, and Sentry SDK compatibility. Where Sentry's self-hosted stack needs a dedicated server, GlitchTip runs alongside other services on a small $5-10/month VPS. For teams tracking errors on hobby projects, small SaaS products, or internal tools, this cost difference is decisive.
See the GlitchTip vs Sentry community comparison for a detailed feature breakdown, or the GlitchTip vs Highlight.io comparison if you're considering alternatives that include session replay.
Step 1: Deploy GlitchTip
```yaml
# docker-compose.yml
version: "3.8"
services:
  web:
    image: glitchtip/glitchtip
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgresql://postgres:secret@db/glitchtip
      SECRET_KEY: your-secret-key-here
      GLITCHTIP_DOMAIN: https://errors.yourdomain.com
      DEFAULT_FROM_EMAIL: errors@yourdomain.com
      EMAIL_URL: smtp://user:pass@smtp.example.com:587
      CELERY_WORKER_CONCURRENCY: 2
    depends_on:
      - db
      - redis
  worker:
    image: glitchtip/glitchtip
    command: ./bin/run-celery-with-beat.sh
    environment:
      DATABASE_URL: postgresql://postgres:secret@db/glitchtip
      SECRET_KEY: your-secret-key-here
      CELERY_WORKER_CONCURRENCY: 2
    depends_on:
      - db
      - redis
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: glitchtip
      POSTGRES_PASSWORD: secret
    volumes:
      - pg-data:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data
volumes:
  pg-data:
  redis-data:
```

```sh
docker compose up -d
```
GlitchTip runs with just four services (web, worker, PostgreSQL, Redis). The web service handles the dashboard and ingestion API. The worker processes events asynchronously, groups errors, and sends alerts. This is the same architecture as Sentry but dramatically simplified.
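Before pointing DNS or your applications at the new instance, it's worth a quick smoke test that the web container answers. A minimal sketch using only the standard library — the `/_health/` path is an assumption, so substitute any route your instance serves (even `/`):

```python
# check_glitchtip.py — post-deploy smoke test.
# BASE_URL and the /_health/ path are assumptions; adjust for your setup.
import urllib.request

BASE_URL = "http://localhost:8000"

def is_up(base_url: str, path: str = "/_health/") -> bool:
    """Return True if the web container answers with an HTTP 2xx status."""
    try:
        with urllib.request.urlopen(base_url + path, timeout=5) as resp:
            return 200 <= resp.status < 300
    except OSError:  # DNS failure, refused connection, timeout, non-2xx
        return False

# Usage: run this from the VPS after `docker compose up -d`:
#   is_up(BASE_URL)
```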
For production, add nginx as a reverse proxy with Let's Encrypt SSL:
```nginx
server {
    server_name errors.yourdomain.com;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 20M;
    }
}
```
The client_max_body_size directive is important — GlitchTip accepts source map uploads, which can be large. nginx's default 1 MB limit rejects larger uploads with a 413 error.
Step 2: Initial Setup
Access GlitchTip at https://errors.yourdomain.com and create your admin account. Then:
- Create an Organization (equivalent to Sentry's organization)
- Create a Project (select your platform: JavaScript, Python, Django, Rails, etc.)
- Copy the DSN from the project settings
GlitchTip generates DSNs in the same format as Sentry: https://key@host/project-id. The key part is that GlitchTip's DSN is a drop-in replacement for Sentry's DSN — the Sentry SDK reads both formats identically.
Step 3: Update Your Application
The entire migration is changing one line — the DSN:
Python:

```python
import sentry_sdk

sentry_sdk.init(
    dsn="https://key@errors.yourdomain.com/1",  # ← Just change this
    traces_sample_rate=0.1,
)
```

JavaScript:

```javascript
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://key@errors.yourdomain.com/1", // ← Just change this
  tracesSampleRate: 0.1,
});
```

Node.js:

```javascript
const Sentry = require("@sentry/node");

Sentry.init({
  dsn: "https://key@errors.yourdomain.com/1", // ← Just change this
});
```

Ruby on Rails:

```ruby
# config/initializers/sentry.rb
Sentry.init do |config|
  config.dsn = 'https://key@errors.yourdomain.com/1' # ← Just change this
  config.traces_sample_rate = 0.1
end
```
That's it. Same Sentry SDK. Same initialization code. Same error capture and context collection. Same breadcrumbs, user context, release tracking, and environment tags. Only the DSN changes.
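Reading the DSN from an environment variable makes the cutover (and any rollback) a config change rather than a deploy. A sketch in Python — `SENTRY_DSN` and the other variable names here are conventions of this example, not anything GlitchTip requires:

```python
# Build sentry_sdk.init() kwargs from the environment. The variable names
# (SENTRY_DSN, APP_ENV, TRACES_SAMPLE_RATE) are this example's conventions.
import os

def sentry_config() -> dict:
    """An empty DSN disables reporting, which is handy in local dev."""
    return {
        "dsn": os.environ.get("SENTRY_DSN", ""),
        "environment": os.environ.get("APP_ENV", "production"),
        "traces_sample_rate": float(os.environ.get("TRACES_SAMPLE_RATE", "0.1")),
    }

# In your app's entrypoint:
#   import sentry_sdk
#   sentry_sdk.init(**sentry_config())
```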
Step 4: Verify Error Capture
Before cutting over production, verify GlitchTip is receiving errors correctly:
- Trigger a test error in your staging application:

```javascript
// In browser console or test script
Sentry.captureException(new Error("GlitchTip migration test"));
```

- Check the GlitchTip dashboard — the error should appear within seconds
- Verify stack traces render correctly with all context
- Check that breadcrumbs (click events, navigation, console logs) appear
- Verify user context is attached if you call `Sentry.setUser()` in your app
If errors don't appear, check:

- Network connectivity from your app server to GlitchTip
- CORS configuration in nginx if ingesting from a browser
- The `GLITCHTIP_DOMAIN` env var matches your actual domain exactly
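For the connectivity check, a quick way to rule out DNS and firewall problems from the app server before digging into SDK configuration — a minimal sketch, with a placeholder hostname:

```python
# Can the app server open a TCP connection to the GlitchTip host at all?
# This distinguishes network problems from SDK misconfiguration.
import socket

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # DNS failure, refused connection, or timeout
        return False

# Usage (placeholder hostname):
#   can_reach("errors.yourdomain.com")
```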
Step 5: Upload Source Maps (JavaScript)
```sh
# Install Sentry CLI (works with GlitchTip)
npm install -g @sentry/cli

# Upload source maps
sentry-cli --url https://errors.yourdomain.com \
  releases files YOUR_RELEASE \
  upload-sourcemaps ./dist \
  --auth-token YOUR_GLITCHTIP_TOKEN
```
Source map uploads use the same Sentry CLI tooling — GlitchTip implements the same release and artifact APIs. In your CI/CD pipeline, just update the --url flag to point to your GlitchTip instance.
In webpack or Vite:
```typescript
// vite.config.ts
import { sentryVitePlugin } from "@sentry/vite-plugin";

export default {
  plugins: [
    sentryVitePlugin({
      org: "your-org",
      project: "your-project",
      url: "https://errors.yourdomain.com", // ← Your GlitchTip URL
      authToken: process.env.GLITCHTIP_AUTH_TOKEN,
    }),
  ],
};
```
Step 6: Configure Alerts
GlitchTip sends alert notifications via email. For Slack or other integrations, use webhooks:
- Project Settings → Alert receivers → Add receiver
- Select Webhook
- Enter your Slack webhook URL or other HTTP endpoint
- GlitchTip sends a JSON payload on new issues or error spikes
For more complex alerting (PagerDuty, escalation policies), proxy GlitchTip's webhooks through a tool like n8n or send them to a custom endpoint that routes to your incident management system.
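If you route webhooks through a custom endpoint, the first step is formatting the JSON payload into a message. The field names below (`alias`, `attachments`, `title`, `title_link`) are assumptions about the shape GlitchTip sends — log one real payload from your instance and adjust to match:

```python
# Sketch of a webhook-to-message formatter for a custom routing endpoint.
# The payload field names are assumptions — inspect a real payload first.
def format_alert(payload: dict) -> str:
    project = payload.get("alias", "unknown-project")
    lines = [f"GlitchTip alert for {project}:"]
    for attachment in payload.get("attachments", []):
        title = attachment.get("title", "untitled issue")
        link = attachment.get("title_link", "")
        lines.append(f"- {title} {link}".rstrip())
    return "\n".join(lines)

example = {
    "alias": "my-app",
    "attachments": [
        {"title": "ValueError: bad input",
         "title_link": "https://errors.yourdomain.com/issues/42"},
    ],
}
print(format_alert(example))
```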
What Works in GlitchTip
- ✅ Error capture and grouping (by stack trace fingerprint)
- ✅ Stack traces with full context
- ✅ Source maps (JavaScript)
- ✅ Breadcrumbs (clicks, navigation, console logs)
- ✅ User context (`setUser()`)
- ✅ Tags and extra data
- ✅ Release tracking
- ✅ Environment filtering (production, staging, dev)
- ✅ Basic performance monitoring (transaction traces)
- ✅ Email alerts on new issues
- ✅ Webhook alerts
What You'll Lose vs Sentry Cloud
- ❌ Session replay (requires Highlight.io or Sentry)
- ❌ Profiling (continuous and transaction-based)
- ❌ Cron monitoring
- ❌ AI-enhanced error grouping and suggestions
- ❌ Native Jira/GitHub/Slack integrations (webhooks only)
- ❌ Advanced performance dashboards (Sentry's LCP, FCP, etc.)
- ❌ Sentry's mobile crash reporting depth
For most backend services, web applications, and internal tools, this feature gap is acceptable. The features GlitchTip doesn't cover — session replay, profiling, cron monitoring — are useful but rarely the primary reason teams adopt error tracking. Error grouping, stack traces, and alert routing are the core value, and GlitchTip delivers all three.
Cost Comparison
| Plan | Sentry | GlitchTip Self-Hosted |
|---|---|---|
| Developer | Free (5K errors) | Free (unlimited) |
| Team | $26/month | $5/month (VPS) |
| Business | $80/month | $10/month (VPS) |
| Enterprise | Custom | $20/month (VPS) |
The VPS cost includes all hosting — database, Redis, and the GlitchTip containers — on a shared server. For teams already running a VPS for other services (Coolify, Nextcloud, monitoring tools), GlitchTip can be added to the existing server with minimal additional cost.
Migration Timeline
| Time | Task |
|---|---|
| 30 min | Deploy GlitchTip, configure nginx and SSL |
| 5 min | Create org and project, get DSN |
| 5 min | Update DSN in application code |
| 15 min | Deploy app, verify errors appear |
| 30 min | Upload source maps, configure alerts |
| Total: ~90 min | Complete migration |
This is one of the easiest migrations in the open source ecosystem. The Sentry SDK compatibility means you don't need to retrain your team, update documentation, or change any error handling patterns in your codebase.
The Bottom Line
GlitchTip is the right choice for teams that want self-hosted error tracking without Sentry's infrastructure complexity or cloud costs. The SDK compatibility makes migration trivial. If your team tracks errors with the standard Sentry SDK patterns — capture exception, set user context, breadcrumbs, releases — GlitchTip handles everything you need at a fraction of the cost.
Error tracking is one piece of a broader observability stack that also includes uptime monitoring and log management — see the best open source monitoring alternatives to Better Stack for the rest of the picture.
Compare error tracking tools on OSSAlt — features, resource requirements, and pricing side by side.
See open source alternatives to Sentry on OSSAlt.
Why GlitchTip Over Self-Hosted Sentry?
The most compelling argument for GlitchTip over self-hosted Sentry is operational simplicity. Running Sentry on your own infrastructure is a significant undertaking. The official Sentry self-hosted installation requires more than 20 Docker containers, including services for the web application, worker queues, schedulers, relay, Kafka, Zookeeper, ClickHouse, Redis, and PostgreSQL. The minimum recommended hardware is 8GB of RAM, and production workloads with moderate error volumes need 16GB or more. This means that "self-hosting Sentry" is less like running a single application and more like operating a small data platform. Teams that attempt it often find themselves spending as much time maintaining the Sentry infrastructure as they spend on their actual application.
GlitchTip reduces this to three or four containers: the web application, a Celery worker for background processing, PostgreSQL, and Redis. The whole thing runs comfortably on a $10-12/month VPS with 2GB of RAM. For small to medium teams, this is the entire error tracking setup — there's no Kafka to tune, no ClickHouse to maintain, no Zookeeper quorum to worry about. The operational surface is small enough that it can be understood, backed up, and maintained by a single developer without specialized infrastructure knowledge.
Licensing is another meaningful difference. Sentry moved from its original BSD license to the Business Source License (BSL) in 2019, and later to the similar Functional Source License (FSL). These source-available licenses restrict commercial use of the self-hosted Sentry distribution by competing products. For most companies self-hosting Sentry for their own error tracking, the restrictions don't apply in practice. Still, the license changes reflect a company with an interest in limiting how its software is used. GlitchTip is licensed under MIT with no usage restrictions, and the governance risk of a surprise license change is lower for this smaller, community-maintained project.
The cost difference is substantial at any scale. A $26/month Sentry team plan covers five developers with a limited error quota. Running GlitchTip on a $10/month VPS gives you unlimited errors and unlimited users. Even factoring in the time cost of maintenance, for most teams the math strongly favors self-hosting.
What GlitchTip Does and Doesn't Do
GlitchTip implements the core error tracking workflow well. Errors are captured through the standard Sentry SDK, which means any language or framework with a Sentry SDK also works with GlitchTip — Python, JavaScript, Node, Ruby, PHP, Go, Rust, and dozens of others. Errors are grouped into issues using fingerprinting, so related errors from the same source location are combined rather than flooding your inbox with individual events. Stack traces render correctly with file names, line numbers, and surrounding code context. Breadcrumbs (the log of actions leading up to an error) are captured and displayed. User context, custom tags, and extra data attached to events all work as expected. Release tracking correlates error spikes with specific deployments.
The missing features are real, but their practical impact depends on your team's workflows. Session replay — recording video of user interactions leading up to an error — is not available in GlitchTip. This is a significant feature for frontend teams that use Sentry's replay to understand exactly what a user was doing when they hit a bug. If session replay is a regular part of your debugging workflow, GlitchTip is not a complete replacement. Profiling, which shows CPU and memory usage traces, is also absent.
AI-assisted issue grouping — where Sentry uses machine learning to better cluster related errors — is not in GlitchTip. The standard fingerprinting algorithm is what you get. In practice, the default fingerprinting covers most cases well, and teams that weren't actively using Sentry's AI grouping features won't notice the difference.
Production Infrastructure
For a production GlitchTip deployment, server sizing should be based on your error volume and team size. A 1 vCPU, 1GB RAM instance is appropriate for low-traffic applications generating fewer than 10,000 errors per day. A 2 vCPU, 4GB RAM instance comfortably handles most production workloads at mid-scale.
For the database, PostgreSQL is strongly recommended over SQLite in production. SQLite doesn't handle concurrent writes well and lacks the durability guarantees needed for production error data. Use a managed PostgreSQL service if your cloud provider offers one — it removes backup and failover management from your operational burden. If you're self-managing PostgreSQL, enable WAL archiving so you can perform point-in-time recovery.
For backups, the most important thing to protect is the PostgreSQL database. Daily automated backups with off-site storage (S3 or equivalent) cover most disaster scenarios. Test your restore process periodically — a backup you've never restored is an assumption, not a guarantee.
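A nightly backup can be a short script. This sketch shells out to `pg_dump` inside the db container from the compose file above; the output directory, retention, and off-site step are illustrative assumptions:

```python
# Nightly backup sketch for the compose stack above. backup_path() builds
# a dated filename; backup() dumps the database from the db container.
# The output directory and off-site copy step are assumptions to adapt.
import datetime
import gzip
import subprocess

def backup_path(outdir: str = "/var/backups/glitchtip") -> str:
    """Dated, compressed dump filename, e.g. .../glitchtip-2026-01-01.sql.gz"""
    stamp = datetime.date.today().isoformat()
    return f"{outdir}/glitchtip-{stamp}.sql.gz"

def backup(outdir: str = "/var/backups/glitchtip") -> str:
    outfile = backup_path(outdir)
    dump = subprocess.run(
        ["docker", "compose", "exec", "-T", "db",
         "pg_dump", "-U", "postgres", "glitchtip"],
        check=True, capture_output=True,
    )
    with gzip.open(outfile, "wb") as f:
        f.write(dump.stdout)
    # Copy outfile off-site (rclone, aws s3 cp, ...) and periodically
    # restore one into a scratch database to prove the backup works.
    return outfile
```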
Common Pitfalls
The Celery worker is the most common source of operational issues. If the worker runs out of memory, it will be killed by the OS, and errors that were accepted by the web process will not be processed — they'll sit in the Redis queue until the worker restarts. Monitor worker memory usage and set a reasonable memory limit in your Docker Compose configuration.
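In Docker Compose, a memory cap plus a restart policy turns a worker OOM from an outage into a blip. A hedged fragment extending the compose file above — the 512m figure is illustrative, not a GlitchTip recommendation:

```yaml
# Illustrative worker limits — tune the cap to your VPS size.
  worker:
    image: glitchtip/glitchtip
    command: ./bin/run-celery-with-beat.sh
    mem_limit: 512m          # the OS kills only the worker, not the host
    restart: unless-stopped  # bring it back automatically after an OOM kill
```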
Redis persistence settings affect what happens during a Redis restart. By default, Redis operates as an in-memory cache without durability guarantees. If Redis restarts before the Celery worker has processed queued events, those events are lost. For production deployments, enable Redis AOF persistence by setting appendonly yes in the Redis configuration. This adds modest write overhead but ensures events survive a Redis restart.
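In the compose file above, AOF can be enabled by overriding the container command — a sketch of the `redis` service with persistence turned on:

```yaml
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes  # AOF: queued events survive restarts
    volumes:
      - redis-data:/data
```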
Email alert configuration is frequently misconfigured and only discovered when a critical error goes unnoticed. After deployment, trigger a test error and verify that an email notification arrives. Check that the EMAIL_URL environment variable points to a working SMTP server, and that your alert rules are configured to notify on new issues.
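The `EMAIL_URL` variable packs the whole SMTP configuration into one URL, so a quick way to debug it is to split it back out and test the connection directly with `smtplib`. A sketch with placeholder credentials:

```python
# Split EMAIL_URL back into its parts so SMTP connectivity can be tested
# outside GlitchTip. The credentials below are placeholders.
from urllib.parse import urlparse

def parse_email_url(url: str) -> dict:
    p = urlparse(url)
    return {"host": p.hostname, "port": p.port or 25,
            "user": p.username, "password": p.password}

cfg = parse_email_url("smtp://user:pass@smtp.example.com:587")
print(cfg["host"], cfg["port"])

# To verify delivery end-to-end from the GlitchTip host:
#   import smtplib
#   with smtplib.SMTP(cfg["host"], cfg["port"], timeout=10) as s:
#       s.starttls()
#       s.login(cfg["user"], cfg["password"])
```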