
Self-Host Loki: Log Aggregation Splunk Alternative 2026

OSSAlt Team
Tags: loki, grafana, logging, splunk, self-hosting, docker, 2026

TL;DR

Grafana Loki (AGPL 3.0, ~24K GitHub stars, Go) is a horizontally scalable log aggregation system. Unlike Elasticsearch (which indexes all log content), Loki only indexes log labels — making it 10x cheaper on storage. Logs are queried with LogQL (similar to PromQL). Splunk charges $150+/GB/day. Loki self-hosted stores logs on local disk or S3 for pennies. Grafana has native Loki integration — you get logs alongside metrics in the same dashboards.
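
In practice this means only the label selector in a LogQL query touches the index; the text match is streamed over the chunks that selector returns. A one-line example, using the container label that Promtail attaches later in this guide:

{container="nginx"} |= "timeout"    # "nginx" is looked up in the index; "timeout" is grepped, not indexed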

Key Takeaways

  • Loki: AGPL 3.0, ~24K stars, Go — label-indexed logs (not full-text index), cheap storage
  • Promtail: Agent that tails log files and Docker logs, ships to Loki
  • LogQL: Log query language — filter by labels, extract fields, aggregate
  • Grafana integration: Native Loki datasource — correlate logs and metrics in one view
  • 10x cheaper than Elasticsearch: No full-text index means tiny storage footprint
  • vs Elasticsearch: Loki = cheap+simple; Elasticsearch = full-text search+complex

Loki vs Elasticsearch vs Splunk

Feature              | Loki              | Elasticsearch     | Splunk
License              | AGPL 3.0          | SSPL (not OSS)    | Proprietary
Index type           | Labels only       | Full-text         | Full-text
Storage cost         | Low (10x cheaper) | High              | Very high
Query language       | LogQL             | Elasticsearch DSL | SPL
Grafana integration  | Native            | Via plugin        | Via plugin
Ingestion rate       | High              | High              | High
Full-text search     | No (regex only)   | Yes               | Yes
Self-host complexity | Low               | Medium            | High

Part 1: Docker Compose Setup

# docker-compose.yml
services:
  loki:
    image: grafana/loki:latest
    container_name: loki
    restart: unless-stopped
    ports:
      - "3100:3100"
    volumes:
      - ./loki/loki-config.yml:/etc/loki/loki-config.yml:ro
      - loki_data:/loki
    command: -config.file=/etc/loki/loki-config.yml

  promtail:
    image: grafana/promtail:latest
    container_name: promtail
    restart: unless-stopped
    volumes:
      - ./promtail/promtail-config.yml:/etc/promtail/promtail-config.yml:ro
      - /var/log:/var/log:ro
      - /var/run/docker.sock:/var/run/docker.sock
    command: -config.file=/etc/promtail/promtail-config.yml
    depends_on:
      - loki

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning:ro
    environment:
      GF_SECURITY_ADMIN_PASSWORD: "${GRAFANA_PASSWORD}"
      GF_USERS_ALLOW_SIGN_UP: "false"

volumes:
  loki_data:
  grafana_data:
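
The compose file pulls the Grafana admin password from the environment. One way to supply it (an assumption: secrets live in a .env file next to docker-compose.yml, which Docker Compose reads automatically):

# .env (same directory as docker-compose.yml):
GRAFANA_PASSWORD=change-me

# Start the stack and confirm all three containers are running:
docker compose up -d
docker compose ps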

Part 2: Loki Configuration

# loki/loki-config.yml
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://alertmanager:9093

limits_config:
  allow_structured_metadata: true
  volume_enabled: true
  retention_period: 744h    # 31 days

compactor:
  working_directory: /loki/retention
  delete_request_store: filesystem
  retention_enabled: true
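
Before wiring up Promtail, you can sanity-check this config by pushing a test line straight to Loki's HTTP API and reading it back. A minimal sketch with curl (the push API expects nanosecond timestamps; job="smoke-test" is just a throwaway label):

# Push one log line:
curl -s -X POST http://localhost:3100/loki/api/v1/push \
  -H "Content-Type: application/json" \
  -d "{\"streams\": [{\"stream\": {\"job\": \"smoke-test\"}, \"values\": [[\"$(date +%s%N)\", \"hello from curl\"]]}]}"

# Read it back (defaults to the last hour):
curl -s -G http://localhost:3100/loki/api/v1/query_range --data-urlencode 'query={job="smoke-test"}'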

Part 3: Promtail Configuration

# promtail/promtail-config.yml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  # All Docker container logs:
  - job_name: containers
    # Discover running containers over the Docker socket and read their logs
    # through the Docker API (no static file targets needed for this job):
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
      - source_labels: ['__meta_docker_container_log_stream']
        target_label: 'logstream'
      - source_labels: ['__meta_docker_container_label_com_docker_compose_service']
        target_label: 'service'

  # Syslog:
  - job_name: syslog
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          host: my-server
          __path__: /var/log/syslog

  # Nginx access logs:
  - job_name: nginx
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx
          __path__: /var/log/nginx/access.log
    pipeline_stages:
      - regex:
          expression: '^(?P<remote_addr>[\w.]+) - (?P<remote_user>[^ ]*) \[(?P<time_local>.*)\] "(?P<method>\S+) (?P<request>[^ ]*) (?P<protocol>[^ ]*)" (?P<status>\d+) (?P<body_bytes>\d+)'
      - labels:
          method:
          status:
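
A quick way to confirm Promtail is shipping logs is to ask Loki which labels it has ingested. Two checks against the Loki API (port 3100 is already published in the compose file):

# Label names Loki has seen so far:
curl -s http://localhost:3100/loki/api/v1/labels

# Values of the container label created by the relabel_configs above:
curl -s http://localhost:3100/loki/api/v1/label/container/values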

Part 4: Grafana Datasource for Loki

# grafana/provisioning/datasources/loki.yml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    url: http://loki:3100
    isDefault: false
    access: proxy
    jsonData:
      maxLines: 1000
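
Once Grafana is up, the provisioned datasource should be visible through its HTTP API. A quick check using the admin credentials from the compose file:

# Expect a "Loki" entry in the JSON response:
curl -s -u admin:"${GRAFANA_PASSWORD}" http://localhost:3000/api/datasources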

Part 5: LogQL Queries

# All logs from a container:
{container="nginx"}

# Filter by log content:
{container="myapp"} |= "ERROR"

# Regex filter:
{container="myapp"} |~ "error|exception|panic"

# Exclude pattern:
{container="myapp"} != "health check"

# Parse JSON logs and filter by field:
{container="myapp"} | json | level="error"

# Count errors per minute:
sum(count_over_time({container="myapp"} |= "ERROR" [1m]))

# Error ratio (fraction of log lines containing ERROR, 0 to 1):
sum(rate({container="myapp"} |= "ERROR" [5m])) /
sum(rate({container="myapp"} [5m]))

# Slow nginx requests (needs a response_time field from | json or | logfmt;
# LogQL log queries have no sort/limit, so filter on the parsed field instead):
{job="nginx"} | logfmt | response_time > 1.0

# Logs from multiple services:
{service=~"api|worker|scheduler"} |= "ERROR"

# A specific user's activity (set the time range, e.g. last 24h, in Grafana or logcli):
{container="myapp"} | json | user_id="42"

Part 6: Grafana Dashboard — Logs Panel

  1. Grafana → + New Dashboard → + Add visualization
  2. Select Loki as data source
  3. Query: {container="myapp"} — all logs from container
  4. Visualization: Logs type
  5. Add a Time series panel with:
    • Query: sum(rate({container="myapp"} |= "ERROR" [5m]))
    • Shows error rate over time

Correlate logs with metrics

In a Grafana dashboard:

  1. Add Prometheus panel (e.g., request rate)
  2. Add Loki panel with {service="api"} |= "ERROR"
  3. Both panels share the same time range — click a spike in metrics, see the logs from that moment

Part 7: Loki Alert Rules

# loki/rules/alerts.yml
groups:
  - name: log_alerts
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate({container="myapp"} |= "ERROR" [5m])) > 0.1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High error rate in myapp logs"

      - alert: OOMKill
        expr: |
          count_over_time({job="syslog"} |= "Out of memory" [10m]) > 0
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: "OOM kill detected on {{ $labels.host }}"

Part 8: S3 Storage for Large Scale

For more than a few servers, use S3 instead of local filesystem:

# loki-config.yml (S3 section):
common:
  storage:
    s3:
      endpoint: s3.amazonaws.com
      region: us-east-1
      bucketnames: your-loki-bucket
      access_key_id: "${AWS_ACCESS_KEY}"
      secret_access_key: "${AWS_SECRET_KEY}"

# Or MinIO (self-hosted S3):
common:
  storage:
    s3:
      endpoint: minio:9000
      bucketnames: loki
      access_key_id: "${MINIO_USER}"
      secret_access_key: "${MINIO_PASSWORD}"
      insecure: true
      s3forcepathstyle: true
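
Two notes on the S3 variants. Loki only expands ${...} references in its config file when started with -config.expand-env=true, so extend the container command accordingly. And if you go the MinIO route, a minimal service sketch for the same compose file (image tag, ports and volume name are assumptions; create the "loki" bucket once via the console on :9001 or with mc):

# docker-compose.yml:
  loki:
    command: -config.file=/etc/loki/loki-config.yml -config.expand-env=true

  minio:
    image: minio/minio:latest
    container_name: minio
    restart: unless-stopped
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: "${MINIO_USER}"
      MINIO_ROOT_PASSWORD: "${MINIO_PASSWORD}"
    volumes:
      - minio_data:/data   # add minio_data to the top-level volumes: block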

Maintenance

# Update Loki stack:
docker compose pull
docker compose up -d

# Check Loki health:
curl http://localhost:3100/ready

# Check ingestion stats:
curl http://localhost:3100/metrics | grep loki_distributor

# Backup:
tar -czf loki-backup-$(date +%Y%m%d).tar.gz \
  $(docker volume inspect loki_loki_data --format '{{.Mountpoint}}')

# Logs:
docker compose logs -f loki
docker compose logs -f promtail
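
For ad-hoc queries from the terminal, Grafana Labs also ships logcli. A usage sketch, assuming logcli is installed locally and Loki is reachable on port 3100:

# Last hour of errors from a container:
logcli query --addr=http://localhost:3100 --since=1h '{container="myapp"} |= "ERROR"'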

See all open source monitoring and logging tools at OSSAlt.com/categories/devops.
