Open-source alternatives guide
Meilisearch vs Typesense vs Elasticsearch 2026
Meilisearch, Typesense, and Elasticsearch compared for self-hosted search in 2026. Performance benchmarks, setup complexity, features, and which search engine to choose for your use case.
TL;DR
For developer-friendly typo-tolerant search (e-commerce, docs search, app search): Meilisearch or Typesense — both are purpose-built for instant search UX, simple to deploy, and 10–100x faster to configure than Elasticsearch. For log analytics, full-text search at petabyte scale, complex aggregations: Elasticsearch (or its open source fork OpenSearch). Elasticsearch is powerful but operationally heavy — don't use it for simple search when Meilisearch delivers better user experience in 30 minutes.
Key Takeaways
- Meilisearch: MIT license (open source), ~48K stars, Rust, best developer UX, instant search out of the box
- Typesense: GPL 3.0 (community) / commercial, ~22K stars, C++, fastest raw performance, best for high QPS
- Elasticsearch: Elastic License 2.0 / SSPL (source-available), ~71K stars, Java/Lucene, most powerful but operationally complex
- OpenSearch: Apache 2.0 (true open source), ~9K stars, AWS fork of Elasticsearch 7.10
- Setup complexity: Meilisearch (easiest) < Typesense < OpenSearch < Elasticsearch
- Self-hosting RAM: Typesense ~256MB, Meilisearch ~512MB, Elasticsearch 1–8GB minimum
Quick Comparison
| Feature | Meilisearch | Typesense | Elasticsearch | OpenSearch |
|---|---|---|---|---|
| License | MIT | GPL 3.0 / commercial | Elastic License 2.0 | Apache 2.0 |
| Language | Rust | C++ | Java | Java |
| GitHub Stars | ~48K | ~22K | ~71K | ~9K |
| Min RAM | 512MB | 256MB | 1GB (2GB+ recommended) | 1GB |
| Typo tolerance | ✅ (built-in) | ✅ (built-in) | Partial (fuzzy queries) | Partial |
| Facets/filters | ✅ | ✅ | ✅ | ✅ |
| Geo search | ✅ | ✅ | ✅ | ✅ |
| Vector search | ✅ | ✅ | ✅ | ✅ |
| Full-text aggregations | Limited | Limited | ✅ Full | ✅ Full |
| Log analytics | ❌ | ❌ | ✅ (ELK stack) | ✅ |
| Multi-tenancy | ✅ (API key scoped) | ✅ (API key scoped) | ✅ | ✅ |
| Dashboard | ✅ (built-in) | ✅ (Typesense Dashboard) | Kibana (separate) | OpenSearch Dashboards |
| Cold start | ~1s | <1s | 10–60s | 10–60s |
| Scaling | Single-node (v1) | Multi-node native | Horizontal cluster | Horizontal cluster |
Meilisearch: Best Developer Experience
Meilisearch is a REST API-first search engine built for instant search experiences. Zero configuration required — index documents, search returns results with typo tolerance, ranking, facets, and highlighting automatically.
Why Meilisearch
Zero config instant search. No query DSL to learn. Send documents in, get results out:
# Index documents:
curl -X POST 'http://localhost:7700/indexes/movies/documents' \
-H 'Content-Type: application/json' \
-d '[
{"id": 1, "title": "Inception", "genre": ["sci-fi", "thriller"], "rating": 8.8},
{"id": 2, "title": "Interstellar", "genre": ["sci-fi"], "rating": 8.6}
]'
# Search — typos handled automatically:
curl 'http://localhost:7700/indexes/movies/search?q=incption'
# Returns Inception despite typo
Built-in typo tolerance: Returns results even with 1-2 character mistakes — no fuzzy query configuration needed.
Faceted search out of the box:
curl -X POST 'http://localhost:7700/indexes/movies/search' \
-H 'Content-Type: application/json' \
-d '{
"q": "space",
"facets": ["genre", "rating"],
"filter": "rating > 8"
}'
Docker Setup
services:
meilisearch:
image: getmeili/meilisearch:latest
restart: unless-stopped
ports:
- "7700:7700"
environment:
MEILI_ENV: production
MEILI_MASTER_KEY: "${MEILI_MASTER_KEY}" # Generate: openssl rand -hex 32
volumes:
- meili_data:/meili_data
volumes:
meili_data:
JavaScript SDK Integration
npm install meilisearch
import { MeiliSearch } from 'meilisearch';
const client = new MeiliSearch({
host: 'https://search.yourdomain.com',
apiKey: 'search-only-api-key', // Scoped key for frontend
});
const index = client.index('products');
// Add documents:
await index.addDocuments([
{ id: 1, name: 'MacBook Pro', category: 'laptops', price: 1999 }
]);
// Search with instant-search UI:
const results = await index.search('macbk', {
attributesToHighlight: ['name'],
facets: ['category'],
});
Meilisearch vs Algolia (the comparison users actually want)
Meilisearch is often described as "self-hosted Algolia." Algolia charges $0.50/1,000 search requests — 1M searches/month = $500/month. Meilisearch self-hosted costs $6–15/month regardless of search volume.
Typesense: Fastest Raw Performance
Typesense is a typo-tolerant search engine written in C++ — the fastest option for high query-per-second workloads. Used by HackerNews, ToolJet, and GitBook.
Why Typesense
Fastest QPS. C++ core handles 10,000+ QPS on a single node with low latency (<50ms p99).
Native multi-node clustering. Raft consensus replication — add nodes for read scaling without a separate cluster manager.
Built-in Typesense Cloud / self-hosted parity. The same binary runs locally and in production.
Docker Setup
services:
typesense:
image: typesense/typesense:27.1
restart: unless-stopped
ports:
- "8108:8108"
volumes:
- typesense_data:/data
command: >
--data-dir /data
--api-key=${TYPESENSE_API_KEY}
--enable-cors
volumes:
typesense_data:
Usage
# Create collection (define schema):
curl -X POST 'http://localhost:8108/collections' \
-H 'X-TYPESENSE-API-KEY: your-api-key' \
-H 'Content-Type: application/json' \
-d '{
"name": "products",
"fields": [
{"name": "name", "type": "string"},
{"name": "price", "type": "float"},
{"name": "category", "type": "string", "facet": true}
],
"default_sorting_field": "price"
}'
# Add documents:
curl -X POST 'http://localhost:8108/collections/products/documents' \
-H 'X-TYPESENSE-API-KEY: your-api-key' \
-H 'Content-Type: application/json' \
-d '{"name": "MacBook Pro", "price": 1999, "category": "laptops"}'
# Search:
curl 'http://localhost:8108/collections/products/search?q=macbk&query_by=name' \
-H 'X-TYPESENSE-API-KEY: search-only-key'
Typesense vs Meilisearch Trade-offs
- Schema required upfront in Typesense (vs Meilisearch's schemaless indexing)
- Faster at high QPS — Typesense C++ core outperforms Meilisearch Rust at extreme loads
- Meilisearch better DX — simpler API, better docs, easier first-time setup
- Typesense better for production scale — native clustering, lower memory per query
Elasticsearch: The Enterprise Standard
Elasticsearch is the most powerful full-text search and analytics engine. Built on Apache Lucene, it powers log analytics (ELK stack), APM, SIEM, and large-scale full-text search. Operationally heavier than Meilisearch/Typesense but unmatched in capability.
License Warning
Elasticsearch changed its license from Apache 2.0 to a dual SSPL / Elastic License 2.0 model (source-available) in 2021, then added AGPL 3.0 as a third option in 2024. Self-hosting is allowed, but the Elastic License forbids offering Elasticsearch as a managed service. If you want a permissive open source license: use OpenSearch (AWS fork of Elasticsearch 7.10, Apache 2.0, largely API-compatible).
Docker Setup (OpenSearch — Apache 2.0)
services:
opensearch:
image: opensearchproject/opensearch:2.17
restart: unless-stopped
environment:
cluster.name: opensearch-cluster
node.name: opensearch-node1
discovery.seed_hosts: opensearch-node1
cluster.initial_cluster_manager_nodes: opensearch-node1
bootstrap.memory_lock: "true"
OPENSEARCH_JAVA_OPTS: "-Xms1g -Xmx1g"
DISABLE_SECURITY_PLUGIN: "true" # For local dev only
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
volumes:
- opensearch_data:/usr/share/opensearch/data
ports:
- "9200:9200"
opensearch-dashboards:
image: opensearchproject/opensearch-dashboards:2.17
ports:
- "5601:5601"
environment:
OPENSEARCH_HOSTS: '["http://opensearch:9200"]'
DISABLE_SECURITY_DASHBOARDS_PLUGIN: "true"
volumes:
opensearch_data:
Elasticsearch Query DSL (also works for OpenSearch)
# Index a document:
curl -X POST 'http://localhost:9200/products/_doc/1' \
-H 'Content-Type: application/json' \
-d '{
"name": "MacBook Pro",
"description": "Apple M3 chip, 18-hour battery",
"price": 1999,
"category": "laptops",
"tags": ["apple", "laptop", "m3"]
}'
# Full-text search with aggregations:
curl -X POST 'http://localhost:9200/products/_search' \
-H 'Content-Type: application/json' \
-d '{
"query": {
"multi_match": {
"query": "apple laptop",
"fields": ["name^3", "description", "tags"]
}
},
"aggs": {
"by_category": {
"terms": {"field": "category.keyword"}
},
"price_stats": {
"stats": {"field": "price"}
}
}
}'
When Elasticsearch Makes Sense
- Log aggregation and analysis (ELK/OpenSearch stack)
- Complex analytics with aggregations across millions of documents
- Existing Elastic tooling investment (Kibana, Beats, Logstash)
- Need horizontal scaling to hundreds of nodes
- Full-text search across petabytes
Don't use Elasticsearch for simple app search. Setting up an Elasticsearch cluster for "search our 50K product catalog" is massive over-engineering. Use Meilisearch.
Performance Benchmarks
Approximate benchmarks on a 4GB RAM, 2 vCPU VPS (1M product documents):
| Metric | Meilisearch | Typesense | Elasticsearch |
|---|---|---|---|
| Index speed (docs/sec) | ~10K | ~50K | ~20K |
| Search latency (p50) | ~20ms | ~5ms | ~15ms |
| Search latency (p99) | ~80ms | ~20ms | ~50ms |
| Max QPS (single node) | ~1,000 | ~10,000 | ~5,000 |
| RAM usage (1M docs) | ~500MB | ~300MB | ~1.5GB |
| Startup time | ~1s | <1s | 30–60s |
Benchmarks are approximate and vary significantly by document size, query complexity, and hardware.
Vector Search (AI/Semantic Search)
All three support vector embeddings for semantic search:
Meilisearch Vector Search
# Add embeddings:
curl -X PATCH 'http://localhost:7700/indexes/products/settings' \
-H 'Content-Type: application/json' \
-d '{
"embedders": {
"openai": {
"source": "openAi",
"apiKey": "sk-your-key",
"model": "text-embedding-3-small",
"documentTemplate": "{{doc.name}} {{doc.description}}"
}
}
}'
# Hybrid search (vector + keyword):
curl -X POST 'http://localhost:7700/indexes/products/search' \
-H 'Content-Type: application/json' \
-d '{
"q": "comfortable chair for home office",
"hybrid": {"semanticRatio": 0.9, "embedder": "openai"}
}'
Typesense Vector Search
{
"fields": [
{"name": "embedding", "type": "float[]", "num_dim": 1536}
]
}
# Search using nearest neighbor:
curl 'http://localhost:8108/collections/products/documents/search' \
-H 'X-TYPESENSE-API-KEY: key' \
-d 'q=*&vector_query=embedding:([0.123, 0.456, ...], k:10)'
Decision Guide
Choose Meilisearch if:
→ Building app search (product catalog, docs, user search)
→ Want instant setup with zero config
→ Team doesn't want to learn a query DSL
→ Search volume < 1K QPS
→ You're replacing Algolia (same UX, self-hosted)
Choose Typesense if:
→ Need highest possible QPS (10K+)
→ Want native clustering for HA
→ C++ performance matters (resource-constrained server)
→ Building a multi-tenant SaaS search product
→ Okay defining schema upfront
Choose Elasticsearch/OpenSearch if:
→ Building log aggregation or APM (ELK stack)
→ Need complex aggregations and analytics
→ Horizontal scaling to 10+ nodes
→ Existing Elastic investment
→ Use OpenSearch for Apache 2.0 license
Avoid Elasticsearch for:
→ Simple app search < 10M documents
→ Teams without dedicated ops capacity
→ Memory-constrained servers (needs 2GB+ to run well)
Search Relevance Tuning: Getting Results Users Actually Want
Raw search capability — returning documents that contain the query terms — is just the starting point. The difference between a search experience users love and one they ignore is relevance: whether the first few results are the most useful ones for the query.
Meilisearch's default ranking is surprisingly good out of the box. The ranking rules are applied in order: words first (documents containing more query words rank higher), then typo (exact matches rank higher than typo-corrected ones), then proximity (documents where query words appear near each other), then attribute priority (matches in a title field rank higher than matches in a description), then sort, then exactness. This ordered rule system lets you customize relevance by adjusting attribute priority and the order of ranking rules without writing complex scoring formulas.
For product search, a common customization is boosting recently listed or highly-rated products. Meilisearch handles this via the ranking rules configuration — add a custom ranking attribute (like a product score combining sales velocity and rating) and configure it as a descending sort rule. Products with higher scores surface at the top within each relevance tier. This is the same approach Algolia uses, and Meilisearch's API is designed to be familiar to Algolia users.
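As a sketch, this kind of boost is a one-time settings change — the default ranking rules plus a custom descending rule appended at the end. The `score` field here is illustrative; it assumes you index such a numeric attribute on each document:

```shell
# Append a custom ranking rule after Meilisearch's built-in defaults.
# Higher-scored products surface first within each relevance tier.
curl -X PATCH 'http://localhost:7700/indexes/products/settings' \
  -H 'Content-Type: application/json' \
  -d '{
    "rankingRules": [
      "words", "typo", "proximity", "attribute", "sort", "exactness",
      "score:desc"
    ]
  }'
```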
Typesense's relevance model is similar in structure but with more explicit schema control. Because Typesense requires you to define a schema with field types and facet settings upfront, relevance tuning happens at collection creation time as well as at query time. The sort_by parameter at query time and default_sorting_field at collection creation work together to create a relevance model that combines text matching with numerical sort fields.
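A minimal sketch of that combination at query time — `_text_match` is Typesense's built-in relevance score, while `rating` is an illustrative float field assumed to be declared in the collection schema:

```shell
# Rank primarily by text relevance, breaking ties by rating.
curl 'http://localhost:8108/collections/products/search?q=laptop&query_by=name&sort_by=_text_match:desc,rating:desc' \
  -H 'X-TYPESENSE-API-KEY: search-only-key'
```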
Elasticsearch/OpenSearch gives the most powerful relevance tools but at the highest complexity cost. The query DSL supports function_score queries that combine multiple scoring factors — BM25 text relevance, field value factors (boost by rating), decay functions (decay relevance by recency), and custom script scoring. This power is necessary for applications like job search (relevance depends on location, skills match, recency, and many other factors) but is massive over-engineering for a documentation search or simple product catalog.
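A sketch of a function_score query layering two of those factors on top of BM25 — a rating boost and a recency decay. The `rating` and `listed_at` fields are illustrative:

```shell
# Multiply BM25 relevance by a rating factor and a 30-day recency decay.
curl -X POST 'http://localhost:9200/products/_search' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": {
      "function_score": {
        "query": {"match": {"name": "laptop"}},
        "functions": [
          {"field_value_factor": {"field": "rating", "factor": 1.2, "missing": 1}},
          {"gauss": {"listed_at": {"origin": "now", "scale": "30d", "decay": 0.5}}}
        ],
        "score_mode": "multiply",
        "boost_mode": "multiply"
      }
    }
  }'
```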
Operational Considerations: Indexing Pipelines and Incremental Updates
A production search implementation isn't just running a search server — it's building and maintaining the pipeline that keeps the search index in sync with your data.
For Meilisearch and Typesense, the indexing architecture is typically: a background job (cron job, queue worker, or event-driven trigger) fetches new and updated records from your database and sends them to the search index via the API. Both tools support partial document updates — you don't need to re-index an entire document to update one field. This is important for high-volume applications where updating search index entries in real-time adds meaningful write traffic.
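As a sketch, a single-field update looks like this in both engines (document IDs and field values are illustrative):

```shell
# Meilisearch: the PUT documents route merges the supplied fields into
# existing documents by primary key — no full re-index of the document.
curl -X PUT 'http://localhost:7700/indexes/products/documents' \
  -H 'Content-Type: application/json' \
  -d '[{"id": 1, "price": 1799}]'

# Typesense: PATCH a single document by ID with only the changed field.
curl -X PATCH 'http://localhost:8108/collections/products/documents/1' \
  -H 'X-TYPESENSE-API-KEY: your-api-key' \
  -H 'Content-Type: application/json' \
  -d '{"price": 1799}'
```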
The indexing pipeline must handle deletions. When a document is removed from your database, it needs to be removed from the search index. Meilisearch and Typesense both support document deletion by ID. A common pattern is to add a deleted_at soft-delete field to your database records and include it in the search index — documents with deleted_at set are filtered out of search results. When you need to physically remove documents from the index (for GDPR erasure requests), you delete by ID.
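A sketch of the soft-delete pattern in Meilisearch, assuming `deleted_at` is declared in the index's filterable attributes and live documents carry `deleted_at = null`:

```shell
# Exclude soft-deleted documents at query time:
curl -X POST 'http://localhost:7700/indexes/products/search' \
  -H 'Content-Type: application/json' \
  -d '{"q": "laptop", "filter": "deleted_at IS NULL"}'

# Physically remove a document by ID (e.g. for a GDPR erasure request):
curl -X DELETE 'http://localhost:7700/indexes/products/documents/1'
```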
For Elasticsearch and OpenSearch, the standard approach for syncing data from a relational database is a Change Data Capture (CDC) pipeline using tools like Debezium, which reads the database's write-ahead log and publishes events to a message queue. The queue consumer updates Elasticsearch. This is more complex but handles high-volume, real-time sync reliably. For most applications outside the enterprise scale, a simpler batch sync job on a short interval (1-5 minutes) is sufficient and far easier to operate.
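A minimal sketch of such a batch sync job, run from cron. It assumes a Postgres table named `products` with an `updated_at` column, `psql` installed, and `DATABASE_URL` set; a production version should also skip empty result sets and handle network errors:

```shell
#!/bin/sh
# Fetch rows changed since the last run and push them to Meilisearch.
SINCE=$(cat /var/lib/search-sync/last_run 2>/dev/null || echo '1970-01-01')
NOW=$(date -u +'%Y-%m-%dT%H:%M:%SZ')
psql "$DATABASE_URL" -At -c \
  "SELECT json_agg(p) FROM (SELECT id, name, price, category
     FROM products WHERE updated_at > '$SINCE') p" |
  curl -X POST 'http://localhost:7700/indexes/products/documents' \
    -H 'Content-Type: application/json' --data-binary @-
# Record the high-water mark for the next run.
echo "$NOW" > /var/lib/search-sync/last_run
```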
The Algolia Migration Path
Meilisearch is frequently evaluated as an Algolia replacement, and the migration is typically straightforward because the APIs share conceptual structure. If you're coming from Algolia, here's what maps directly and what differs.
Algolia uses "indices" that correspond to Meilisearch's "indexes" — same concept, same name. The document structure is identical: both accept JSON documents with an ID field. Algolia's search API parameters have close equivalents in Meilisearch: hitsPerPage becomes limit, facetFilters becomes filter, attributesToRetrieve becomes attributesToRetrieve (unchanged). The JavaScript InstantSearch library — which powers most Algolia UI integrations — has official adapters for both Meilisearch and Typesense, so you can migrate without rewriting your search UI components.
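As a sketch, an Algolia-style query expressed as a Meilisearch request using that parameter mapping (index and field names are illustrative):

```shell
# hitsPerPage -> limit, facetFilters -> filter; attributesToRetrieve unchanged.
curl -X POST 'http://localhost:7700/indexes/products/search' \
  -H 'Content-Type: application/json' \
  -d '{
    "q": "laptop",
    "limit": 20,
    "filter": "category = laptops",
    "attributesToRetrieve": ["name", "price"]
  }'
```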
The main difference is the Algolia ecosystem's depth. Algolia has years of edge cases handled, extensive documentation, and a large community of integrations. Meilisearch is catching up rapidly but may have gaps in specific scenarios — query rule customization, personalization based on user history, and A/B testing of search relevance are areas where Algolia still leads.
For teams evaluating a full-stack search and analytics setup that includes both application search and log analytics, running Meilisearch alongside the Grafana + Prometheus + Loki observability stack is a common pattern. Meilisearch handles your application's user-facing search while Loki handles log search, serving distinct purposes with minimal overlap.
The "How to Migrate from Algolia to Meilisearch" guide covers the migration process in detail — index configuration mapping, SDK migration, and performance validation. For teams evaluating the best open source Algolia replacement, the "Best Open Source Alternatives to Algolia" roundup provides a broader view of Meilisearch, Typesense, and other options.
See all open source search tools at OSSAlt.com/categories/search.
See open source alternatives to Meilisearch on OSSAlt.