Self-Hosting Meilisearch: Fast Search 2026
Meilisearch is the open source Algolia alternative — typo-tolerant, instant search that returns results in under 50ms. Self-hosting removes the 10K document limit on the free tier and eliminates per-search pricing.
Why Self-Host Meilisearch
Algolia's pricing can catch teams off guard. The Grow plan starts at $0.50 per 1,000 search operations, which sounds reasonable until you realize a busy e-commerce site with 100K searches/month is spending $50 just on search — before paying for indexed records. The Business plan runs $1,200/month and up. Typesense Cloud similarly charges based on nodes and data.
Self-hosting Meilisearch on a €4.50/month Hetzner CX22 gives you the same sub-50ms search experience at a fraction of the cost. For a SaaS product indexing 500K documents with 2M monthly searches, the annual difference can exceed $10,000.
Data ownership matters too. When your product catalog, user-generated content, or internal knowledge base flows through a third-party search provider, you're trusting them with your data — and their API is one incident or billing dispute away from going dark. Your self-hosted Meilisearch instance stays up regardless of what happens to any vendor.
Customization possibilities are significant. You can tune typo tolerance thresholds per field, configure custom ranking rules that factor in your business logic (not just relevance), set up custom synonyms for domain-specific vocabulary, and adjust stop words for your language. None of this requires a support ticket.
When NOT to self-host Meilisearch: If your team has no one comfortable running a Linux server, managed Algolia or Typesense Cloud removes the operational overhead. Also consider that Meilisearch is single-node — there's no built-in clustering. For very high availability requirements (five nines), Algolia's distributed infrastructure may genuinely be worth the premium. Finally, if you're indexing under 10K documents and search is not a core feature, Algolia's free tier covers you without any server work.
Prerequisites
Before deploying Meilisearch, make sure your environment is ready. Choosing the right VPS for self-hosting upfront saves headaches later — search is latency-sensitive, so server location relative to your users matters.
Server specs: Meilisearch's rule of thumb is roughly 2x your dataset size in RAM for optimal indexing performance. A 1 GB RAM VPS handles small indexes (under 100K documents) comfortably. For 500K documents, plan on 4 GB RAM. CPU matters primarily during indexing — searches are fast even on single-core machines. Disk needs scale with your data: budget 2x your raw data size.
Operating system: Ubuntu 22.04 LTS is the recommended choice: it has the longest support window (standard support until 2027, extended until 2032), the most community resources, and well-tested Docker packages. Debian 12 is a solid second choice. Avoid non-LTS Ubuntu releases (23.04, 23.10, and other interim versions) in production; they receive only nine months of updates.
Docker Engine 24+: Install with the official script: curl -fsSL https://get.docker.com | sh. After installing, add your user to the docker group (sudo usermod -aG docker $USER) and log out and back in so the change takes effect.
Domain and DNS: You'll need an A record pointing your subdomain at your server IP. DNS propagation takes anywhere from minutes to 48 hours depending on your registrar. Caddy (recommended here) handles SSL certificate provisioning automatically once DNS resolves — no manual certbot steps needed.
Skills required: Basic Linux comfort — SSH, editing files with nano or vim, running systemctl commands. You don't need deep sysadmin knowledge; these steps are copy-paste friendly.
Requirements
- VPS with 1 GB RAM minimum (scale with index size)
- Docker
- Domain name (e.g., search.yourdomain.com)
- 10+ GB disk (depends on data volume)
Step 1: Deploy with Docker
docker run -d \
--name meilisearch \
--restart unless-stopped \
-p 127.0.0.1:7700:7700 \
-v meili_data:/meili_data \
-e MEILI_MASTER_KEY=your-master-key-min-16-chars \
-e MEILI_ENV=production \
getmeili/meilisearch:latest
Generate a master key:
openssl rand -hex 24
Step 2: Reverse Proxy (Caddy)
# /etc/caddy/Caddyfile
search.yourdomain.com {
reverse_proxy localhost:7700
}
sudo systemctl restart caddy
Step 3: Get API Keys
Meilisearch auto-generates API keys from the master key:
# List API keys
curl -s https://search.yourdomain.com/keys \
-H 'Authorization: Bearer your-master-key' | jq
| Key | Purpose | Use In |
|---|---|---|
| Default Search API Key | Search only (read) | Frontend |
| Default Admin API Key | Full access (read/write) | Backend |
| Master Key | Manages all keys | Never expose |
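Beyond the two default keys, you can mint narrowly scoped keys with the /keys endpoint. A sketch, assuming the standard key fields; the description value and the sed extraction are illustrative:

```shell
# Create a search-only key restricted to the products index,
# then print just the generated key from the JSON response.
curl -sS -X POST 'https://search.yourdomain.com/keys' \
  -H 'Authorization: Bearer your-master-key' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "description": "frontend search key (products only)",
    "actions": ["search"],
    "indexes": ["products"],
    "expiresAt": null
  }' | sed -n 's/.*"key":"\([^"]*\)".*/\1/p'
```

Drop the resulting key into your frontend config; unlike the default keys, it can be revoked independently.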
Step 4: Create an Index and Add Documents
# Add documents (auto-creates the products index; the "id" field is inferred as the primary key)
curl -X POST 'https://search.yourdomain.com/indexes/products/documents' \
-H 'Authorization: Bearer your-admin-api-key' \
-H 'Content-Type: application/json' \
--data-binary '[
{
"id": 1,
"title": "Wireless Headphones",
"description": "Noise-cancelling Bluetooth headphones",
"category": "Electronics",
"price": 79.99
},
{
"id": 2,
"title": "Mechanical Keyboard",
"description": "Cherry MX Blue switches, RGB backlight",
"category": "Electronics",
"price": 129.99
}
]'
Step 5: Configure Index Settings
# Configure index settings (searchableAttributes order determines matching priority)
curl -X PATCH 'https://search.yourdomain.com/indexes/products/settings' \
-H 'Authorization: Bearer your-admin-api-key' \
-H 'Content-Type: application/json' \
--data-binary '{
"searchableAttributes": ["title", "description", "category"],
"filterableAttributes": ["category", "price"],
"sortableAttributes": ["price"],
"displayedAttributes": ["title", "description", "category", "price"],
"typoTolerance": {
"enabled": true,
"minWordSizeForTypos": { "oneTypo": 4, "twoTypos": 8 }
},
"pagination": { "maxTotalHits": 1000 }
}'
Step 6: Search
# Basic search
curl 'https://search.yourdomain.com/indexes/products/search' \
-H 'Authorization: Bearer your-search-api-key' \
-H 'Content-Type: application/json' \
--data-binary '{ "q": "headphons" }'
# Returns "Wireless Headphones" despite typo ✨
# Search with filters
curl 'https://search.yourdomain.com/indexes/products/search' \
-H 'Authorization: Bearer your-search-api-key' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "keyboard",
"filter": "price < 150",
"sort": ["price:asc"]
}'
Step 7: Frontend Integration
JavaScript SDK:
npm install meilisearch
import { MeiliSearch } from 'meilisearch'
const client = new MeiliSearch({
host: 'https://search.yourdomain.com',
apiKey: 'your-search-api-key', // Search key only!
})
const results = await client.index('products').search('headphones', {
limit: 10,
filter: ['category = Electronics'],
})
InstantSearch (Algolia-compatible UI):
npm install react-instantsearch @meilisearch/instant-meilisearch
import { InstantSearch, SearchBox, Hits } from 'react-instantsearch'
import { instantMeiliSearch } from '@meilisearch/instant-meilisearch'
const { searchClient } = instantMeiliSearch(
'https://search.yourdomain.com',
'your-search-api-key'
)
function SearchPage() {
return (
<InstantSearch indexName="products" searchClient={searchClient}>
<SearchBox />
<Hits hitComponent={Hit} />
</InstantSearch>
)
}
function Hit({ hit }) {
return (
<div>
<h3>{hit.title}</h3>
<p>{hit.description}</p>
<span>${hit.price}</span>
</div>
)
}
Step 8: Keep Data in Sync
Option 1: Batch sync (cron)
# Export from your database and push to Meilisearch every 15 minutes
*/15 * * * * /usr/local/bin/sync-search.sh
Option 2: Real-time sync (webhook/event)
// After creating/updating a product in your app
await meiliClient.index('products').updateDocuments([updatedProduct])
// After deleting
await meiliClient.index('products').deleteDocument(productId)
Option 3: Database trigger
Use n8n or a custom webhook to sync on database changes.
Production Hardening
Docker Compose (recommended for production):
services:
  meilisearch:
    image: getmeili/meilisearch:latest
    container_name: meilisearch
    restart: unless-stopped
    ports:
      - "127.0.0.1:7700:7700"   # bind to localhost only; Docker-published ports bypass UFW
    volumes:
      - meili_data:/meili_data
    environment:
      - MEILI_MASTER_KEY=${MEILI_MASTER_KEY}   # set in a .env file, not hardcoded here
      - MEILI_ENV=production
      - MEILI_MAX_INDEXING_MEMORY=1024Mb
      - MEILI_MAX_INDEXING_THREADS=2
volumes:
  meili_data:
Backups:
# Snapshot (built-in)
curl -X POST 'https://search.yourdomain.com/snapshots' \
-H 'Authorization: Bearer your-master-key'
# Or backup the data volume
docker run --rm -v meili_data:/data -v /backups:/backup alpine \
tar czf /backup/meili-$(date +%Y%m%d).tar.gz /data
Updates:
docker pull getmeili/meilisearch:latest
docker stop meilisearch && docker rm meilisearch
# Re-run the docker run command (data persists in volume)
Monitoring:
- Health check: GET /health (returns { "status": "available" })
- Stats: GET /stats (index sizes, document counts)
- Search latency: should stay under 50 ms
Resource Usage
| Documents | RAM | CPU | Disk |
|---|---|---|---|
| 1-100K | 512 MB | 1 core | 5 GB |
| 100K-1M | 2 GB | 2 cores | 20 GB |
| 1M-10M | 8 GB | 4 cores | 50 GB |
Rule of thumb: Meilisearch needs ~2x your dataset size in RAM for optimal performance.
VPS Recommendations
| Provider | Spec (500K docs) | Price |
|---|---|---|
| Hetzner | 2 vCPU, 4 GB RAM | €4.50/month |
| DigitalOcean | 2 vCPU, 4 GB RAM | $24/month |
| Linode | 2 vCPU, 4 GB RAM | $24/month |
Production Security Hardening
A production Meilisearch instance contains your full dataset — products, user content, or internal documents. Securing the server itself is as important as the application layer. Follow this checklist alongside the broader self-hosting security checklist.
Firewall with UFW: Lock down all ports except what you actually need.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp # SSH
sudo ufw allow 80/tcp # HTTP (Caddy ACME challenge)
sudo ufw allow 443/tcp # HTTPS
# Do NOT expose port 7700 — Meilisearch traffic goes through Caddy
sudo ufw enable
Fail2ban for SSH: Automated brute-force protection for your SSH port.
sudo apt install fail2ban -y
sudo systemctl enable fail2ban
Create /etc/fail2ban/jail.local:
[sshd]
enabled = true
port = ssh
maxretry = 5
bantime = 3600
findtime = 600
Keep secrets out of Docker Compose: Never hardcode your Meilisearch master key in your compose file. Use a .env file and reference it:
# .env (chmod 600 .env — never commit this file)
MEILI_MASTER_KEY=your-secret-master-key
Add .env to .gitignore immediately. The master key gives full read/write access to all your indexes.
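The corresponding Compose fragment pulls the key from the environment; Compose automatically loads a .env file sitting next to the compose file:

```yaml
# docker-compose.yml fragment: the master key is interpolated from .env
services:
  meilisearch:
    image: getmeili/meilisearch:latest
    environment:
      - MEILI_MASTER_KEY=${MEILI_MASTER_KEY}
```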
Disable SSH password authentication: Once you've confirmed key-based login works, turn off password auth in /etc/ssh/sshd_config:
PasswordAuthentication no
PermitRootLogin no
Then: sudo systemctl restart sshd
Automatic security updates: Ubuntu's unattended-upgrades package applies security patches automatically.
sudo apt install unattended-upgrades -y
sudo dpkg-reconfigure --priority=low unattended-upgrades
Principle of least privilege: Create a dedicated system user for running services rather than using root. Meilisearch's Docker container already runs as a non-root user internally, but be sure the Docker socket itself isn't exposed to untrusted processes.
Troubleshooting Common Issues
Container fails to start — "master key too short"
Meilisearch requires the master key to be at least 16 bytes. Generate a proper key with openssl rand -hex 24 (produces 48 hex characters, well above the minimum). Check the error with docker logs meilisearch.
Search returns no results after adding documents
Meilisearch indexes documents asynchronously. After adding documents, the task status will be enqueued or processing. Check task status:
curl 'https://search.yourdomain.com/tasks' \
-H 'Authorization: Bearer your-master-key' | jq '.results[0]'
Wait for status: "succeeded" before searching. Large indexes can take minutes to process.
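A rough polling loop for the latest task, using sed instead of jq to keep dependencies minimal (host, key, and the two-second interval are placeholders):

```shell
#!/bin/sh
# Poll the most recent task until it leaves the queue.
while :; do
  status=$(curl -s 'https://search.yourdomain.com/tasks?limit=1' \
    -H 'Authorization: Bearer your-master-key' \
    | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
  echo "latest task: ${status:-unknown}"
  [ "$status" = "succeeded" ] && break
  [ "$status" = "failed" ] && exit 1
  sleep 2
done
```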
High memory usage during indexing
The MEILI_MAX_INDEXING_MEMORY environment variable caps memory usage during bulk indexing. Set it to 50-75% of available RAM. For a 2 GB VPS: MEILI_MAX_INDEXING_MEMORY=1024Mb. Indexing will be slower but won't OOM-kill the container.
Caddy returns 502 Bad Gateway
Meilisearch isn't listening or hasn't fully started yet. Check: docker ps to confirm the container is running, docker logs meilisearch for errors, and curl http://localhost:7700/health to test direct access before going through Caddy.
Search API key rejected from frontend
You're likely using the Master Key or Admin API Key in the frontend — both should never be exposed to browsers. Use only the Default Search API Key (read-only) in client-side code. Retrieve it from GET /keys with your master key and store it in your frontend environment variables.
Data volume grows unexpectedly
Meilisearch stores indexes on disk at roughly 2-3x the raw data size due to inverted index structures. Monitor disk usage with docker exec meilisearch df -h /meili_data. If disk fills up, Meilisearch will stop accepting new documents — set up disk alerts at 80% capacity. See automated server backups with restic to back up the data volume to object storage before it becomes critical.
Ongoing Maintenance and Operations
Running Meilisearch in production is genuinely low-maintenance compared to most self-hosted services. Once the initial setup is done, the day-to-day operational burden is minimal. Here's what to expect.
Index updates are zero-downtime. When you push new documents or update index settings, Meilisearch handles the changes without taking search offline. New documents appear in search results once the indexing task completes. For large bulk updates, you can monitor task status via the /tasks endpoint and notify your frontend when a re-index is complete if needed.
Monitoring search quality over time. As your dataset evolves, periodically review whether your searchable attributes and ranking rules still match user intent. Meilisearch's /indexes/{index}/stats endpoint shows document counts and field distributions. If users complain that expected results don't appear, test the specific query via the API and compare against your index settings — the issue is usually a missing attribute in searchableAttributes or overly strict typo tolerance settings.
Version upgrades require care. Meilisearch's on-disk database format can change between releases, and upgrading has historically meant exporting a dump from the old version and importing it into the new one. Before upgrading, check the Meilisearch changelog for migration notes. A safe path: take a dump or snapshot, pull the new image, test it against a copy of your data, then upgrade production.
Capacity planning. Meilisearch's RAM usage is predictable once you know your dataset size. Monitor the meili_data volume growth with docker exec meilisearch du -sh /meili_data. If you're indexing user-generated content or a growing product catalog, set calendar reminders to check disk usage quarterly. Upgrading a Hetzner server's RAM is a straightforward operation (available in the Hetzner console) that takes a few minutes of downtime.
Scaling beyond a single instance. Meilisearch is a single-node search engine — there's no official clustering for self-hosted deployments. For most applications, a single well-sized instance is sufficient. If you need redundancy, the simplest approach is two separate Meilisearch instances (primary and replica) with your ingestion pipeline writing to both. Read traffic can use either. This manual approach works well up to tens of millions of documents.
Multi-tenant scenarios. If you're building a SaaS platform where each customer needs isolated search, Meilisearch's API key scoping lets you create customer-specific API keys that only access specific indexes. Create separate indexes per tenant (products_tenant_123) and generate API keys scoped to those index patterns. This keeps search data fully isolated without running multiple Meilisearch instances.
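A tenant-scoped key might be created like this; the actions/indexes/expiresAt fields follow the /keys API, while the tenant naming and expiry date are illustrative:

```shell
# Key that can only search the products_tenant_123 index.
# Tenant naming scheme and expiry are assumptions; adjust to your convention.
curl -sS -X POST 'https://search.yourdomain.com/keys' \
  -H 'Authorization: Bearer your-master-key' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "description": "tenant 123 search key",
    "actions": ["search"],
    "indexes": ["products_tenant_123"],
    "expiresAt": "2027-01-01T00:00:00Z"
  }'
```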
Cost comparison at scale. A Meilisearch instance on Hetzner CX32 (4 vCPU, 8 GB RAM, €8/month) handling 2 million documents with 500K monthly searches costs €96/year in infrastructure. The equivalent Algolia Business plan at that search volume would run $4,800-12,000/year. Even accounting for engineering time to maintain the server, the economics favor self-hosting for any team comfortable with Docker.
Integrating search into existing apps. Meilisearch works well as a read-side projection of your primary database. The common pattern is to write to your main database (PostgreSQL, MySQL, MongoDB) and mirror relevant data to Meilisearch for search. This keeps Meilisearch as a cache rather than a source of truth — if the Meilisearch index is ever lost or corrupted, you can rebuild it from the database without data loss. Implement a background job that re-indexes documents on create, update, and delete events from your primary database. Libraries like meilisearch-js, meilisearch-ruby, and meilisearch-python simplify this integration. The mirror pattern also makes it straightforward to add fields to your search index without touching your main database schema — transform and augment the data during the sync step to include computed fields, related data, or normalized values that improve search relevance.
Compare search engines on OSSAlt — speed, features, and self-hosting options side by side.
See open source alternatives to Meilisearch on OSSAlt.