

·OSSAlt Team
websockets · real-time · pusher · self-hosting · developer-tools

Best Open Source Alternatives to Pusher in 2026

Pusher charges per connection and per message. At 500 concurrent connections and 10 million messages per month, you are paying $49/month on the Startup plan. Scale to 10,000 connections and 100 million messages and the bill jumps to $299/month or higher on custom pricing. Presence channels, client events, and webhooks all count against your message quota.

The real problem is not just cost. Pusher's free tier caps at 200,000 messages per day and 100 concurrent connections. For production apps with any real-time traffic, you hit paid tiers fast. And once you are locked into Pusher's protocol, migrating means rewriting every client subscription.

Self-hosted alternatives eliminate per-message billing entirely. A $10/month VPS running Soketi or Centrifugo handles 10,000+ concurrent connections with sub-10ms latency. The trade-off is operational responsibility, but modern Docker deployments and managed databases make this straightforward.

TL;DR

Soketi for a drop-in Pusher replacement (same protocol, zero code changes). Centrifugo for maximum scalability (millions of connections, built-in Redis/NATS clustering). Mercure for Server-Sent Events and HTTP-native real-time. Laravel WebSockets for Laravel-only stacks. WS for Node.js apps that need raw WebSocket control without any abstraction layer.

Key Takeaways

  • Soketi is Pusher-protocol compatible. Existing Pusher client libraries work with zero changes. Horizontal scaling via Redis. AGPL-3.0 licensed.
  • Centrifugo handles 1M+ concurrent connections on a single node. Supports WebSockets, SSE, HTTP streaming, and gRPC. Built-in presence, history, and recovery.
  • Mercure uses SSE (Server-Sent Events) instead of WebSockets. Native HTTP/2 push. Built on web standards, works without JavaScript client libraries.
  • Laravel WebSockets integrates directly with Laravel Echo and Broadcasting. PHP-native, no separate runtime needed.
  • WS is the fastest raw WebSocket library for Node.js. No protocol overhead, no opinions. You build exactly what you need.
  • All five alternatives run on a single $5-20/month VPS and handle thousands of concurrent connections at single-digit-millisecond internal latency.

Feature Comparison

| Feature | Soketi | Centrifugo | Mercure | Laravel WS | WS (Node.js) |
|---|---|---|---|---|---|
| Protocol | Pusher | Custom + Pusher proxy | SSE/HTTP | Pusher | Raw WebSocket |
| Language | Node.js (uWebSockets.js, C++ core) | Go | Go | PHP | Node.js |
| Max connections (single node) | ~100K | ~1M+ | ~100K | ~10K | ~500K |
| Clustering | Redis | Redis, NATS, Tarantool | Redis (via Symfony) | Redis | Manual |
| Presence channels | Yes | Yes | Yes (topics) | Yes | Manual |
| Message history | No | Yes | Yes (Bolt/Redis) | No | No |
| Auto-reconnect/recovery | Client-side | Built-in server-side | EventSource native | Client-side | Manual |
| Authentication | Pusher auth | JWT / connection token | JWT / cookie | Laravel auth | Manual |
| License | AGPL-3.0 | Apache-2.0 | AGPL-3.0 | MIT | MIT |
| GitHub stars | 4.8K+ | 8.5K+ | 4K+ | 5.1K+ | 21K+ |
| Docker image | Yes | Yes | Yes | No (PHP package) | No (npm) |

Soketi — Drop-In Pusher Replacement

Soketi implements the full Pusher WebSocket protocol. Every existing Pusher client library (pusher-js, Laravel Echo, pusher-py, pusher-http-ruby) connects to Soketi without modification. You change the host configuration from Pusher's servers to your Soketi instance and everything works.

Under the hood, Soketi uses uWebSockets.js (a C++ WebSocket engine exposed as a native Node.js addon), which benchmarks at 3-5x the throughput of pure JavaScript WebSocket implementations. Memory usage stays under 150MB for 50,000 concurrent connections.

Key capabilities:

  • Full Pusher protocol compatibility (public, private, presence, encrypted channels)
  • Client events for peer-to-peer messaging
  • Webhooks for connection and channel lifecycle events
  • Horizontal scaling via Redis Cluster adapter
  • Prometheus metrics endpoint built in
  • Rate limiting per app, per connection

Self-Hosting Soketi with Docker

# docker-compose.yml
version: "3.8"
services:
  soketi:
    image: quay.io/soketi/soketi:latest-16-alpine
    ports:
      - "6001:6001"     # WebSocket port
      - "9601:9601"     # Metrics port
    environment:
      SOKETI_DEFAULT_APP_ID: "app-id"
      SOKETI_DEFAULT_APP_KEY: "app-key"
      SOKETI_DEFAULT_APP_SECRET: "app-secret"
      SOKETI_DEBUG: "1"
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data

volumes:
  redis_data:

Start the stack:

docker compose up -d

Once running, point your Pusher client to your server:

import Pusher from "pusher-js";

const pusher = new Pusher("app-key", {
  wsHost: "your-server.com",
  wsPort: 6001,
  forceTLS: false,
  disableStats: true,
  enabledTransports: ["ws", "wss"],
  cluster: "", // required but unused for self-hosted
});

const channel = pusher.subscribe("my-channel");
channel.bind("my-event", (data) => {
  console.log("Received:", data);
});
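
Private, presence, and encrypted channels still require an auth endpoint on your backend. Because Soketi speaks the Pusher protocol, the signature is the standard Pusher one: an HMAC-SHA256 over socket_id:channel_name, signed with your app secret. A minimal Node sketch (the function name is ours; credentials match the docker-compose file above):

```javascript
import { createHmac } from "node:crypto";

// Pusher-protocol auth response for a private channel subscription.
// socketId and channelName arrive in the POST body pusher-js sends to
// your auth endpoint.
function authorizePrivateChannel(appKey, appSecret, socketId, channelName) {
  const signature = createHmac("sha256", appSecret)
    .update(`${socketId}:${channelName}`)
    .digest("hex");
  return { auth: `${appKey}:${signature}` };
}

const response = authorizePrivateChannel(
  "app-key",
  "app-secret",
  "123.456",
  "private-orders"
);
// Return `response` as the JSON body of your auth endpoint.
```

Presence channels work the same way, except the signed string also includes a channel_data JSON payload describing the user.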

Scaling: For multi-node deployments, enable the Redis adapter. Each Soketi instance shares channel state through Redis pub/sub. Add nodes behind a load balancer with sticky sessions (or use the Redis adapter for cross-node message delivery).

Limitations: Soketi does not persist message history. If a client disconnects and reconnects, it misses messages sent during the gap. For use cases requiring delivery guarantees, Centrifugo is a better fit. The AGPL-3.0 license may also be a consideration for proprietary deployments, though using Soketi as a standalone service (not embedded in your code) generally does not trigger AGPL copyleft requirements.

Best for: Teams migrating from Pusher with existing client code. The migration path is a configuration change, not a rewrite.

Centrifugo — Maximum Scale Real-Time Server

Centrifugo is a language-agnostic real-time messaging server written in Go. It handles WebSocket connections, SSE, HTTP streaming, gRPC streaming, and experimental WebTransport. A single Centrifugo node has been benchmarked at over 1 million concurrent WebSocket connections on a machine with 32GB RAM.

What separates Centrifugo from simpler WebSocket servers is its built-in feature set. Message history with configurable TTL means clients reconnecting after a brief disconnect recover missed messages automatically, without your application needing to implement replay logic. Presence tracking tells you exactly which users are subscribed to a channel at any moment. Server-side subscriptions let you control channel membership from your backend rather than trusting client requests.

Key capabilities:

  • Channel-level message history with TTL and size limits
  • Automatic message recovery on reconnect (no client-side replay logic)
  • Presence information (who is online in a channel)
  • Server-side subscriptions (backend controls what clients receive)
  • JWT authentication for connections and channel subscriptions
  • RPC (Remote Procedure Call) proxying through Centrifugo to your backend
  • Redis, KeyDB, NATS, or Tarantool for clustering and message brokering
  • Prometheus, Graphite metrics
  • Admin web UI for real-time monitoring

Self-Hosting Centrifugo with Docker

# docker-compose.yml
version: "3.8"
services:
  centrifugo:
    image: centrifugo/centrifugo:v5
    ports:
      - "8000:8000"   # HTTP/WebSocket
    volumes:
      - ./config.json:/centrifugo/config.json
    command: centrifugo -c config.json
    restart: unless-stopped
    ulimits:
      nofile:
        soft: 65536
        hard: 65536

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

// config.json
{
  "token_hmac_secret_key": "your-secret-key-min-32-chars-long!!",
  "admin": true,
  "admin_password": "admin-password",
  "admin_secret": "admin-secret",
  "api_key": "api-secret-key",
  "allowed_origins": ["http://localhost:3000"],
  "engine": "redis",
  "redis_address": "redis://redis:6379"
}

Publishing from your backend:

curl -X POST http://localhost:8000/api/publish \
  -H "Authorization: apikey api-secret-key" \
  -H "Content-Type: application/json" \
  -d '{
    "channel": "notifications",
    "data": {"text": "New deployment completed"}
  }'
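
The same publish call from Node.js. This sketch (the helper name is ours) only assembles the request, so any HTTP client can send it; the endpoint and "apikey" Authorization scheme mirror the curl example above:

```javascript
// Build a Centrifugo HTTP API publish request. Pair with fetch (Node 18+)
// or any other HTTP client.
function buildPublishRequest(baseUrl, apiKey, channel, data) {
  return {
    url: `${baseUrl}/api/publish`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `apikey ${apiKey}`,
      },
      body: JSON.stringify({ channel, data }),
    },
  };
}

const req = buildPublishRequest(
  "http://localhost:8000",
  "api-secret-key",
  "notifications",
  { text: "New deployment completed" }
);
// await fetch(req.url, req.options);
```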

Client-side connection with the official JavaScript SDK:

import { Centrifuge } from "centrifuge";

const client = new Centrifuge("ws://localhost:8000/connection/websocket", {
  token: "user-jwt-token",
});

const sub = client.newSubscription("notifications");

sub.on("publication", (ctx) => {
  console.log("New message:", ctx.data);
});

sub.subscribe();
client.connect();
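
The token passed to the client is a standard HS256 JWT signed with the token_hmac_secret_key from config.json, with the user ID in the sub claim. In production you would use a JWT library (jsonwebtoken, jose), but the format is simple enough to sketch with Node's built-in crypto:

```javascript
import { createHmac } from "node:crypto";

// Minimal HS256 JWT for a Centrifugo connection token.
// `secret` must equal token_hmac_secret_key in config.json.
function connectionToken(secret, userId, ttlSeconds = 3600) {
  const b64 = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64url");
  const header = b64({ alg: "HS256", typ: "JWT" });
  const payload = b64({
    sub: userId, // Centrifugo reads the user ID from the "sub" claim
    exp: Math.floor(Date.now() / 1000) + ttlSeconds,
  });
  const signature = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  return `${header}.${payload}.${signature}`;
}

const token = connectionToken(
  "your-secret-key-min-32-chars-long!!",
  "user-42"
);
// pass as: new Centrifuge(url, { token })
```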

Scaling: Centrifugo scales horizontally with Redis (or NATS/Tarantool) as a broker. Each node connects to the shared broker for cross-node message delivery. In benchmarks, a 3-node Centrifugo cluster with Redis handles 500,000+ concurrent connections with p99 latency under 15ms for message delivery.

Limitations: Centrifugo is not Pusher-protocol compatible. Migrating from Pusher requires switching to Centrifugo's client SDK and authentication model. The learning curve is steeper than Soketi, but the feature set is significantly richer. Centrifugo also supports a Pusher-compatible proxy mode via its PRO version, but the open source edition uses its own protocol.

Best for: Applications requiring high connection counts, message history, presence tracking, and automatic recovery. Chat apps, live dashboards, collaborative editing, multiplayer games.

Mercure — HTTP-Native Real-Time with SSE

Mercure takes a fundamentally different approach. Instead of WebSockets, it uses Server-Sent Events (SSE) over standard HTTP/2 connections. This means real-time updates work through regular HTTP infrastructure: CDNs, load balancers, proxies, and firewalls handle SSE connections without special WebSocket upgrade configuration.

Mercure follows a hub model. Your backend publishes updates to the Mercure hub via standard HTTP POST requests. Clients subscribe to topics by opening an SSE connection to the hub. The hub handles fan-out. This separation means your backend never manages persistent connections directly.

The protocol is an IETF Internet Draft, designed as a web standard rather than a proprietary protocol. Any HTTP client can subscribe to updates, including curl, EventSource in browsers, and mobile HTTP libraries. No WebSocket client library is needed.

Key capabilities:

  • SSE-based (works through CDNs, HTTP/2 push, standard infrastructure)
  • Authorization via JWT (publish and subscribe scopes)
  • Topic-based routing with URI templates for wildcard subscriptions
  • Message history with Last-Event-ID recovery
  • Built-in CORS handling
  • Automatic reconnection (native to SSE/EventSource)
  • Discovery via Link headers (clients auto-detect the hub URL)
  • Bolt or Redis persistence for message history
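
Both keys sign ordinary HS256 JWTs whose mercure claim lists the topic selectors the holder may publish or subscribe to. A sketch of minting a publisher token with Node's crypto (the helper name is ours; a JWT library is the safer choice in production):

```javascript
import { createHmac } from "node:crypto";

// Publisher JWT for Mercure: the "mercure.publish" claim lists allowed
// topic selectors ("*" = any topic). Sign with MERCURE_PUBLISHER_JWT_KEY.
function publisherJwt(key, topics = ["*"]) {
  const b64 = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64url");
  const header = b64({ alg: "HS256", typ: "JWT" });
  const payload = b64({ mercure: { publish: topics } });
  const signature = createHmac("sha256", key)
    .update(`${header}.${payload}`)
    .digest("base64url");
  return `${header}.${payload}.${signature}`;
}

const jwt = publisherJwt("your-publisher-secret-key-min-256-bits");
// use as: -H "Authorization: Bearer <jwt>" when POSTing to the hub
```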

Self-Hosting Mercure with Docker

# docker-compose.yml
version: "3.8"
services:
  mercure:
    image: dunglas/mercure
    ports:
      - "3000:80"
    environment:
      MERCURE_PUBLISHER_JWT_KEY: "your-publisher-secret-key-min-256-bits"
      MERCURE_SUBSCRIBER_JWT_KEY: "your-subscriber-secret-key-min-256-bits"
      SERVER_NAME: ":80"
    restart: unless-stopped

Publishing an update from your backend:

curl -X POST http://localhost:3000/.well-known/mercure \
  -H "Authorization: Bearer YOUR_PUBLISHER_JWT" \
  -d "topic=https://example.com/orders/42" \
  -d "data={\"status\": \"shipped\"}"

Subscribing from the browser with zero dependencies:

const url = new URL("http://localhost:3000/.well-known/mercure");
url.searchParams.append("topic", "https://example.com/orders/{id}");
// The native EventSource API cannot set an Authorization header. Pass the
// subscriber JWT via the mercureAuthorization cookie, or (in recent hub
// versions) via the `authorization` query parameter as shown here:
url.searchParams.append("authorization", "SUBSCRIBER_JWT");

const eventSource = new EventSource(url);

eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log("Order update:", data);
};

Scaling: Mercure is built on the Caddy web server, which handles high concurrency efficiently. For multi-node deployments, use the Redis transport to share messages across instances. SSE connections are lighter than WebSocket connections in terms of server resources, making Mercure efficient for broadcast-heavy workloads.

Limitations: SSE is unidirectional (server to client only). Clients cannot send messages through the SSE connection back to the server. For bidirectional communication (chat, collaborative editing), you need a separate HTTP endpoint for client-to-server messages. This is architecturally clean but adds a round-trip compared to WebSocket bidirectional messaging. The AGPL-3.0 license applies, though a commercial license is available.

Best for: Applications where the primary pattern is server-to-client push: notifications, live feeds, dashboards, e-commerce order tracking. Especially strong in PHP/Symfony ecosystems where Mercure has first-class integration.

Laravel WebSockets — Native Laravel Broadcasting

Laravel WebSockets is a PHP package that implements the Pusher WebSocket protocol directly inside your Laravel application. No Node.js dependency, no external service: it runs as a long-lived Artisan process (php artisan websockets:serve) alongside your Laravel app and handles WebSocket connections natively.

This makes it the simplest option for Laravel developers already using Laravel Echo and Broadcasting. You replace the Pusher credentials with your local WebSocket server config, and Echo continues working without changes.
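
On the frontend, the corresponding Laravel Echo setup is the same host swap. A configuration sketch (key and host values are placeholders matching your .env; options mirror pusher-js):

```js
import Echo from "laravel-echo";
import Pusher from "pusher-js";

window.Pusher = Pusher;

// Same credentials and port as the websockets:serve process.
window.Echo = new Echo({
  broadcaster: "pusher",
  key: "app-key",
  wsHost: "127.0.0.1",
  wsPort: 6001,
  forceTLS: false,
  disableStats: true,
  enabledTransports: ["ws", "wss"],
});
```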

Key capabilities:

  • Full Pusher protocol compatibility within Laravel
  • Works with Laravel Echo out of the box
  • Dashboard for monitoring active connections and channels
  • Presence channels and client events
  • SSL/TLS support
  • Statistics and event logging
  • Multi-tenancy support (multiple apps on one server)

Setup in Laravel

composer require beyondcode/laravel-websockets
php artisan vendor:publish --provider="BeyondCode\LaravelWebSockets\WebSocketsServiceProvider"
php artisan migrate
php artisan websockets:serve

// config/broadcasting.php
'pusher' => [
    'driver' => 'pusher',
    'key' => env('PUSHER_APP_KEY'),
    'secret' => env('PUSHER_APP_SECRET'),
    'app_id' => env('PUSHER_APP_ID'),
    'options' => [
        'host' => '127.0.0.1',
        'port' => 6001,
        'scheme' => 'http',
        'encrypted' => false,
    ],
],

Scaling: Laravel WebSockets runs as a single PHP process. It handles around 10,000 concurrent connections on a well-provisioned server, but scaling horizontally is limited compared to purpose-built WebSocket servers. Redis pub/sub can distribute messages across multiple processes, but the connection handling remains bound to PHP's event loop.

Limitations: The project's maintenance has slowed since 2024. The PHP event loop is less efficient for connection-heavy workloads than Go or C++-based servers. For applications expecting more than 10,000 concurrent connections, Soketi or Centrifugo are stronger choices. Laravel WebSockets is also tightly coupled to the Laravel ecosystem, making it unsuitable for non-PHP backends.

Best for: Laravel developers who want the simplest possible self-hosted replacement with zero infrastructure changes.

WS — Raw WebSocket Performance for Node.js

WS is the most widely used WebSocket library in the Node.js ecosystem, with 21,000+ GitHub stars and millions of weekly npm downloads. It is not a real-time messaging platform. It is a low-level WebSocket implementation that gives you a fast, standards-compliant WebSocket server with no opinions about channels, presence, authentication, or message routing.

The library is written in JavaScript, with optional native add-ons (bufferutil and utf-8-validate) that accelerate frame masking and UTF-8 validation. WS handles the WebSocket protocol (framing, masking, ping/pong, close handshake) and gives you raw message events. You build everything else.

Key capabilities:

  • RFC 6455 compliant WebSocket implementation
  • Per-message deflate compression
  • Binary and text message support
  • Connection upgrade handling
  • Ping/pong for keepalive
  • Backpressure handling

Basic Server Setup

import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 8080 });

const channels = new Map();

wss.on("connection", (ws) => {
  ws.on("message", (raw) => {
    let msg;
    try {
      msg = JSON.parse(raw); // raw arrives as a Buffer; JSON.parse coerces it
    } catch {
      return; // drop malformed frames instead of crashing the process
    }

    if (msg.type === "subscribe") {
      if (!channels.has(msg.channel)) {
        channels.set(msg.channel, new Set());
      }
      channels.get(msg.channel).add(ws);
    }

    if (msg.type === "publish") {
      const subs = channels.get(msg.channel);
      if (subs) {
        const payload = JSON.stringify({
          channel: msg.channel,
          data: msg.data,
        });
        for (const client of subs) {
          if (client.readyState === 1 /* WebSocket.OPEN */) {
            client.send(payload);
          }
        }
      }
    }
  });

  ws.on("close", () => {
    for (const [, subs] of channels) {
      subs.delete(ws);
    }
  });
});

Scaling: WS runs on a single Node.js process. You can fork multiple processes with Node's cluster module or run multiple instances behind a load balancer with sticky sessions. For cross-process message delivery, add Redis pub/sub. A single WS process handles 50,000-100,000 concurrent connections on a 4GB server with efficient message handling.

Limitations: WS provides no built-in channels, authentication, presence, history, reconnection recovery, or clustering. You build all of that yourself. For teams that need those features, Soketi or Centrifugo save months of development time. WS is the right choice only when you need precise control over the WebSocket layer and are prepared to implement the messaging logic yourself.

Best for: Node.js developers building custom real-time protocols where existing abstractions add unnecessary overhead. Gaming servers, financial data feeds, IoT device communication.

Performance Benchmarks at Scale

The following benchmarks reflect community-reported numbers and documented tests across real deployments. Actual performance depends on message size, message frequency, server hardware, and network conditions.

Connection Capacity (Single Node, 8 vCPU / 16GB RAM)

| Server | Concurrent Connections | Memory Usage | CPU Usage |
|---|---|---|---|
| Centrifugo | 1,000,000+ | ~12GB | 60-70% |
| WS (Node.js) | 500,000 | ~8GB | 70-80% |
| Soketi | 100,000 | ~4GB | 50-60% |
| Mercure | 100,000 | ~3GB | 40-50% |
| Laravel WebSockets | 10,000 | ~2GB | 80-90% |

Message Throughput (10,000 subscribers, 1KB messages)

| Server | Messages/sec (broadcast) | p50 Latency | p99 Latency |
|---|---|---|---|
| Centrifugo | 500,000+ | <1ms | 5ms |
| WS (Node.js) | 300,000 | <1ms | 8ms |
| Soketi | 150,000 | 1ms | 12ms |
| Mercure | 100,000 | 2ms | 15ms |
| Laravel WebSockets | 20,000 | 5ms | 50ms |

Clustered Performance (3 Nodes + Redis)

| Server | Total Connections | Cross-Node Latency | Notes |
|---|---|---|---|
| Centrifugo | 3M+ | 3-8ms | Redis/NATS broker |
| Soketi | 300K | 5-15ms | Redis adapter |
| Mercure | 300K | 5-12ms | Redis transport |
| Laravel WebSockets | 30K | 10-30ms | Redis pub/sub |

Centrifugo dominates in raw scalability. If your application requires hundreds of thousands of concurrent connections, it is the only open source option in this list that handles that load on a single node without degradation. For most applications (under 50,000 connections), all five alternatives perform well within acceptable latency bounds.

Cost Comparison: Self-Hosted vs Pusher

| Scenario | Pusher | Self-Hosted (Soketi/Centrifugo) |
|---|---|---|
| 500 connections, 5M msgs/month | $49/month (Startup) | $5/month (1 vCPU VPS) |
| 5,000 connections, 50M msgs/month | $99/month (Business) | $10/month (2 vCPU VPS) |
| 50,000 connections, 500M msgs/month | $299+/month (custom) | $40/month (8 vCPU VPS) |
| 500,000 connections, 5B msgs/month | Custom enterprise pricing | $150/month (32GB dedicated) |

Self-hosted costs are the VPS price alone. Add $0-20/month for managed Redis if you need clustering. The savings are significant at every tier, and the gap widens as you scale. For a deeper dive on VPS providers and pricing for self-hosted infrastructure, see our VPS comparison guide.

The trade-off is operational responsibility. You manage uptime, upgrades, and monitoring. For teams already running infrastructure, this is incremental work. For teams without DevOps capacity, Pusher's managed service has real value. We cover this trade-off in detail in our self-hosting vs cloud analysis.

When to Choose Each Alternative

Choose Soketi when:

  • You are migrating from Pusher and want zero client-side code changes
  • Your stack already uses Pusher client libraries (pusher-js, Laravel Echo)
  • You need a simple, well-understood protocol with broad library support
  • Connection count stays under 100,000

Choose Centrifugo when:

  • You need maximum scalability (100K+ connections per node)
  • Message history and automatic recovery on reconnect are requirements
  • You want presence tracking without building it yourself
  • Your backend is polyglot (Go, Python, Ruby, PHP, Node.js all publish via HTTP API)
  • You are building chat, collaborative tools, or live dashboards

Choose Mercure when:

  • Your primary pattern is server-to-client push (notifications, feeds, updates)
  • You want to work with web standards (SSE, HTTP/2) rather than WebSocket protocols
  • Your infrastructure does not support WebSocket upgrades (some corporate proxies, CDN configurations)
  • You are in the PHP/Symfony ecosystem with first-class Mercure integration

Choose Laravel WebSockets when:

  • Your entire stack is Laravel and you want the simplest possible integration
  • Connection count stays under 10,000
  • You prioritize development simplicity over raw performance
  • You want a single-command setup with no additional infrastructure

Choose WS when:

  • You need a custom real-time protocol without abstraction overhead
  • You are building something where existing channel/presence models do not fit (gaming, IoT, financial feeds)
  • Your team has the expertise to implement reconnection, authentication, and scaling from scratch
  • Raw performance per dollar matters more than development speed

For more open source developer tools across categories, see our roundup of the best open source developer tools in 2026.

Migration from Pusher: Step by Step

For Soketi (zero-code migration):

  1. Deploy Soketi via Docker on your VPS
  2. Update your Pusher client configuration to point to your Soketi host and port
  3. Update your backend Pusher SDK configuration with the same host change
  4. Test all channel types (public, private, presence) and webhooks
  5. Monitor connection counts and message throughput via the Prometheus endpoint
  6. Remove your Pusher account once stable

For Centrifugo (protocol migration):

  1. Deploy Centrifugo via Docker
  2. Replace pusher-js with the centrifuge-js client library
  3. Update your backend to publish via Centrifugo's HTTP API instead of Pusher's server SDK
  4. Implement JWT authentication for connections and subscriptions
  5. Configure message history and recovery for channels that need it
  6. Test thoroughly, especially reconnection behavior and presence

Methodology

We evaluated each alternative based on five criteria: protocol compatibility (with Pusher and web standards), connection scalability (single-node and clustered), operational complexity (deployment, configuration, monitoring), ecosystem maturity (documentation, community, maintenance activity), and licensing terms.

Connection and throughput benchmarks reference published numbers from each project's documentation, community benchmarks on GitHub, and independent load testing reports. We note where numbers are self-reported by project maintainers versus independently verified. All Docker configurations were tested on Ubuntu 24.04 with Docker Engine 27.x.

GitHub star counts and maintenance activity were checked in March 2026. Pricing for Pusher reflects their published pricing page as of March 2026. VPS costs reference standard pricing from Hetzner, DigitalOcean, and Vultr for the specified configurations.

We did not include Ably, PubNub, or Firebase Realtime Database in this comparison because they are proprietary managed services, not open source self-hostable alternatives. SignalWire FreeSWITCH was excluded because it targets telephony/voice rather than WebSocket real-time messaging.
