Meilisearch vs Typesense vs Elasticsearch
TL;DR
For developer-friendly, typo-tolerant search (e-commerce, docs search, app search): Meilisearch or Typesense — both are purpose-built for instant-search UX, simple to deploy, and 10–100x faster to configure than Elasticsearch. For log analytics, petabyte-scale full-text search, or complex aggregations: Elasticsearch (or its open source fork, OpenSearch). Elasticsearch is powerful but operationally heavy — don't use it for simple search when Meilisearch delivers a better user experience in 30 minutes.
Key Takeaways
- Meilisearch: MIT license (open source), ~48K stars, Rust, best developer UX, instant search out of the box
- Typesense: GPL 3.0 (community) / commercial, ~22K stars, C++, fastest raw performance, best for high QPS
- Elasticsearch: Elastic License 2.0 / SSPL (source-available), ~71K stars, Java/Lucene, most powerful but operationally complex
- OpenSearch: Apache 2.0 (true open source), ~9K stars, AWS fork of Elasticsearch 7.10
- Setup complexity: Meilisearch (easiest) < Typesense < OpenSearch < Elasticsearch
- Self-hosting RAM: Typesense ~256MB, Meilisearch ~512MB, Elasticsearch 1–8GB minimum
Quick Comparison
| Feature | Meilisearch | Typesense | Elasticsearch | OpenSearch |
|---|---|---|---|---|
| License | MIT | GPL 3.0 / commercial | Elastic License 2.0 / SSPL | Apache 2.0 |
| Language | Rust | C++ | Java | Java |
| GitHub Stars | ~48K | ~22K | ~71K | ~9K |
| Min RAM | 512MB | 256MB | 1GB (2GB+ recommended) | 1GB |
| Typo tolerance | ✅ (built-in) | ✅ (built-in) | Partial (fuzzy queries) | Partial |
| Facets/filters | ✅ | ✅ | ✅ | ✅ |
| Geo search | ✅ | ✅ | ✅ | ✅ |
| Vector search | ✅ | ✅ | ✅ | ✅ |
| Aggregations / analytics | Limited | Limited | ✅ Full | ✅ Full |
| Log analytics | ❌ | ❌ | ✅ (ELK stack) | ✅ |
| Multi-tenancy | ✅ (API key scoped) | ✅ (API key scoped) | ✅ | ✅ |
| Dashboard | ✅ (built-in) | ✅ (Typesense Dashboard) | Kibana (separate) | OpenSearch Dashboards |
| Cold start | ~1s | <1s | 10–60s | 10–60s |
| Scaling | Single-node (v1) | Multi-node native | Horizontal cluster | Horizontal cluster |
Meilisearch: Best Developer Experience
Meilisearch is a REST API-first search engine built for instant search experiences. Zero configuration required — index documents, search returns results with typo tolerance, ranking, facets, and highlighting automatically.
Why Meilisearch
Zero config instant search. No query DSL to learn. Send documents in, get results out:
# Index documents:
curl -X POST 'http://localhost:7700/indexes/movies/documents' \
-H 'Content-Type: application/json' \
-d '[
{"id": 1, "title": "Inception", "genre": ["sci-fi", "thriller"], "rating": 8.8},
{"id": 2, "title": "Interstellar", "genre": ["sci-fi"], "rating": 8.6}
]'
# Search — typos handled automatically:
curl 'http://localhost:7700/indexes/movies/search?q=incption'
# Returns Inception despite typo
Built-in typo tolerance: Returns results even with one or two character mistakes — no fuzzy-query configuration needed.
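As a rough mental model (this sketch is illustrative, not Meilisearch's actual implementation, and the length thresholds shown are assumptions), typo tolerance can be thought of as edit-distance matching where longer words are allowed more mistakes:

```javascript
// Illustrative sketch: edit-distance (Levenshtein) based typo matching.
function editDistance(a, b) {
  // dp[i][j] = edits needed to turn a[0..i) into b[0..j)
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)   // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Longer words tolerate more typos: none for short words, 1 for 5+, 2 for 9+.
function matchesWithTypos(queryWord, indexedWord) {
  const allowed = queryWord.length >= 9 ? 2 : queryWord.length >= 5 ? 1 : 0;
  return editDistance(queryWord.toLowerCase(), indexedWord.toLowerCase()) <= allowed;
}

console.log(matchesWithTypos('incption', 'inception')); // true, one missing letter
```

The real engine precomputes this against its index rather than comparing word pairs at query time, which is why the tolerance feels free.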
Faceted search out of the box:
curl -X POST 'http://localhost:7700/indexes/movies/search' \
-H 'Content-Type: application/json' \
-d '{
"q": "space",
"facets": ["genre", "rating"],
"filter": "rating > 8"
}'
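Conceptually, the facet distribution in the response is just a value count over the documents that matched the query and filter. A minimal sketch (illustrative, not Meilisearch internals):

```javascript
// Count each facet value across the documents that matched the query/filter.
function facetDistribution(docs, field) {
  const counts = {};
  for (const doc of docs) {
    // Array-valued fields (like genre) contribute one count per value.
    const values = Array.isArray(doc[field]) ? doc[field] : [doc[field]];
    for (const value of values) counts[value] = (counts[value] || 0) + 1;
  }
  return counts;
}

const hits = [
  { title: 'Interstellar', genre: ['sci-fi'], rating: 8.6 },
  { title: 'Inception', genre: ['sci-fi', 'thriller'], rating: 8.8 },
];
console.log(facetDistribution(hits, 'genre')); // { 'sci-fi': 2, thriller: 1 }
```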
Docker Setup
services:
meilisearch:
image: getmeili/meilisearch:latest
restart: unless-stopped
ports:
- "7700:7700"
environment:
MEILI_ENV: production
MEILI_MASTER_KEY: "${MEILI_MASTER_KEY}" # Generate: openssl rand -hex 32
volumes:
- meili_data:/meili_data
volumes:
meili_data:
JavaScript SDK Integration
npm install meilisearch
import { MeiliSearch } from 'meilisearch';
const client = new MeiliSearch({
host: 'https://search.yourdomain.com',
apiKey: 'search-only-api-key', // Scoped key for frontend
});
const index = client.index('products');
// Add documents:
await index.addDocuments([
{ id: 1, name: 'MacBook Pro', category: 'laptops', price: 1999 }
]);
// Search with instant-search UI:
const results = await index.search('macbk', {
attributesToHighlight: ['name'],
facets: ['category'],
});
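One gotcha worth knowing: indexing is asynchronous. addDocuments enqueues a task and returns immediately, and documents only become searchable once that task succeeds. The official SDK ships its own task-waiting helper; this generic polling sketch (all names here are illustrative) shows the idea:

```javascript
// Poll a status-fetching function until the indexing task resolves.
// fetchStatus would wrap something like GET /tasks/:uid in practice.
async function waitForStatus(fetchStatus, { intervalMs = 50, maxTries = 100 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const status = await fetchStatus();
    if (status === 'succeeded') return status;
    if (status === 'failed') throw new Error('indexing task failed');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('timed out waiting for task');
}
```

In tests, forgetting this wait is the usual cause of "I added documents but search returns nothing."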
Meilisearch vs Algolia (the comparison users actually want)
Meilisearch is often described as "self-hosted Algolia." Algolia charges roughly $0.50 per 1,000 search requests, so 1M searches/month works out to $500/month. Self-hosted Meilisearch runs on a $6–15/month VPS regardless of search volume.
Typesense: Fastest Raw Performance
Typesense is a typo-tolerant search engine written in C++ — the fastest option for high query-per-second workloads. It's used in production by projects such as ToolJet and GitBook.
Why Typesense
Fastest QPS. C++ core handles 10,000+ QPS on a single node with low latency (<50ms p99).
Native multi-node clustering. Raft consensus replication — add nodes for read scaling without a separate cluster manager.
Cloud / self-hosted parity. The same binary runs locally, self-hosted, and on Typesense Cloud.
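A sketch of what a three-node setup involves (the file format and default peering port here are taken from Typesense's clustering docs; verify against your version): every node reads the same peer list and is started with an extra --nodes flag pointing at it.

```
# /etc/typesense/nodes: one line, comma-separated host:peering_port:api_port
typesense-1:8107:8108,typesense-2:8107:8108,typesense-3:8107:8108
```

Each node then adds --nodes=/etc/typesense/nodes to the flags shown in the Docker setup below. Raft elects a leader, writes replicate through it, and any node can serve reads.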
Docker Setup
services:
typesense:
image: typesense/typesense:27.1
restart: unless-stopped
ports:
- "8108:8108"
volumes:
- typesense_data:/data
command: >
--data-dir /data
--api-key=${TYPESENSE_API_KEY}
--enable-cors
volumes:
typesense_data:
Usage
# Create collection (define schema):
curl -X POST 'http://localhost:8108/collections' \
-H 'X-TYPESENSE-API-KEY: your-api-key' \
-H 'Content-Type: application/json' \
-d '{
"name": "products",
"fields": [
{"name": "name", "type": "string"},
{"name": "price", "type": "float"},
{"name": "category", "type": "string", "facet": true}
],
"default_sorting_field": "price"
}'
# Add documents:
curl -X POST 'http://localhost:8108/collections/products/documents' \
-H 'X-TYPESENSE-API-KEY: your-api-key' \
-H 'Content-Type: application/json' \
-d '{"name": "MacBook Pro", "price": 1999, "category": "laptops"}'
# Search:
curl 'http://localhost:8108/collections/products/search?q=macbk&query_by=name' \
-H 'X-TYPESENSE-API-KEY: search-only-key'
Typesense vs Meilisearch Trade-offs
- Schema required upfront in Typesense (vs Meilisearch's schemaless indexing)
- Faster at high QPS — Typesense C++ core outperforms Meilisearch Rust at extreme loads
- Meilisearch better DX — simpler API, better docs, easier first-time setup
- Typesense better for production scale — native clustering, lower memory per query
Elasticsearch: The Enterprise Standard
Elasticsearch is the most powerful full-text search and analytics engine. Built on Apache Lucene, it powers log analytics (ELK stack), APM, SIEM, and large-scale full-text search. Operationally heavier than Meilisearch/Typesense but unmatched in capability.
License Warning
Elasticsearch changed its license from Apache 2.0 to the source-available SSPL / Elastic License 2.0 in 2021 (an AGPLv3 option was added back in 2024). Self-hosting is allowed, but ELv2 forbids offering Elasticsearch itself as a managed service. If you want a permissive, truly open source license: use OpenSearch (the AWS fork of Elasticsearch 7.10, Apache 2.0, largely API-compatible, though the two projects have diverged since the fork).
Docker Setup (OpenSearch — Apache 2.0)
services:
opensearch:
image: opensearchproject/opensearch:2.17
restart: unless-stopped
environment:
cluster.name: opensearch-cluster
node.name: opensearch-node1
discovery.seed_hosts: opensearch-node1
cluster.initial_cluster_manager_nodes: opensearch-node1
bootstrap.memory_lock: "true"
OPENSEARCH_JAVA_OPTS: "-Xms1g -Xmx1g"
DISABLE_SECURITY_PLUGIN: "true" # For local dev only
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
volumes:
- opensearch_data:/usr/share/opensearch/data
ports:
- "9200:9200"
opensearch-dashboards:
image: opensearchproject/opensearch-dashboards:2.17
ports:
- "5601:5601"
environment:
OPENSEARCH_HOSTS: '["http://opensearch:9200"]'
DISABLE_SECURITY_DASHBOARDS_PLUGIN: "true"
volumes:
opensearch_data:
Elasticsearch Query DSL (also works for OpenSearch)
# Index a document:
curl -X POST 'http://localhost:9200/products/_doc/1' \
-H 'Content-Type: application/json' \
-d '{
"name": "MacBook Pro",
"description": "Apple M3 chip, 18-hour battery",
"price": 1999,
"category": "laptops",
"tags": ["apple", "laptop", "m3"]
}'
# Full-text search with aggregations:
curl -X POST 'http://localhost:9200/products/_search' \
-H 'Content-Type: application/json' \
-d '{
"query": {
"multi_match": {
"query": "apple laptop",
"fields": ["name^3", "description", "tags"]
}
},
"aggs": {
"by_category": {
"terms": {"field": "category.keyword"}
},
"price_stats": {
"stats": {"field": "price"}
}
}
}'
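To make the aggregation semantics concrete, here is what the two aggs above compute, sketched over an in-memory result set (illustrative JavaScript, not how Lucene actually executes them): a `terms` agg is a group-by count on a field, a `stats` agg is min/max/sum/avg over a numeric field.

```javascript
// A hypothetical set of documents matching the query:
const matched = [
  { name: 'MacBook Pro', category: 'laptops', price: 1999 },
  { name: 'MacBook Air', category: 'laptops', price: 1099 },
  { name: 'Magic Mouse', category: 'accessories', price: 79 },
];

// "by_category" terms agg: bucket count per category value.
const byCategory = matched.reduce((acc, d) => {
  acc[d.category] = (acc[d.category] || 0) + 1;
  return acc;
}, {});

// "price_stats" stats agg: summary statistics over the price field.
const prices = matched.map((d) => d.price);
const sum = prices.reduce((a, b) => a + b, 0);
const priceStats = {
  count: prices.length,
  min: Math.min(...prices),
  max: Math.max(...prices),
  sum,
  avg: sum / prices.length,
};

console.log(byCategory); // { laptops: 2, accessories: 1 }
console.log(priceStats.min, priceStats.max); // 79 1999
```

The difference at scale: Elasticsearch runs these over millions of documents, distributed across shards, in one request alongside the search itself.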
When Elasticsearch Makes Sense
- Log aggregation and analysis (ELK/OpenSearch stack)
- Complex analytics with aggregations across millions of documents
- Existing Elastic tooling investment (Kibana, Beats, Logstash)
- Need horizontal scaling to hundreds of nodes
- Full-text search across petabytes
Don't use Elasticsearch for simple app search. Setting up an Elasticsearch cluster for "search our 50K product catalog" is massive over-engineering. Use Meilisearch.
Performance Benchmarks
Approximate benchmarks on a 4GB RAM, 2 vCPU VPS (1M product documents):
| Metric | Meilisearch | Typesense | Elasticsearch |
|---|---|---|---|
| Index speed (docs/sec) | ~10K | ~50K | ~20K |
| Search latency (p50) | ~20ms | ~5ms | ~15ms |
| Search latency (p99) | ~80ms | ~20ms | ~50ms |
| Max QPS (single node) | ~1,000 | ~10,000 | ~5,000 |
| RAM usage (1M docs) | ~500MB | ~300MB | ~1.5GB |
| Startup time | ~1s | <1s | 30–60s |
Benchmarks are approximate and vary significantly by document size, query complexity, and hardware.
Vector Search (AI/Semantic Search)
All three support vector embeddings for semantic search:
Meilisearch Vector Search
# Add embeddings:
curl -X PATCH 'http://localhost:7700/indexes/products/settings' \
-H 'Content-Type: application/json' \
-d '{
"embedders": {
"openai": {
"source": "openAi",
"apiKey": "sk-your-key",
"model": "text-embedding-3-small",
"documentTemplate": "{{doc.name}} {{doc.description}}"
}
}
}'
# Hybrid search (vector + keyword):
curl -X POST 'http://localhost:7700/indexes/products/search' \
-H 'Content-Type: application/json' \
-d '{
"q": "comfortable chair for home office",
"hybrid": {"semanticRatio": 0.9, "embedder": "openai"}
}'
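What semanticRatio controls is the weighting between keyword relevance and vector similarity. A toy blend (illustrative only; Meilisearch's internal score fusion is not spelled out here) makes the knob concrete:

```javascript
// Blend a keyword relevance score and a semantic (vector) similarity score.
// semanticRatio = 0 means pure keyword search, 1 means pure vector search.
function hybridScore(keywordScore, semanticScore, semanticRatio) {
  return (1 - semanticRatio) * keywordScore + semanticRatio * semanticScore;
}

// With semanticRatio 0.9, a strong semantic match dominates a weak keyword match:
console.log(hybridScore(0.2, 0.95, 0.9)); // ≈ 0.875
```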
Typesense Vector Search
# Declare the embedding field in the collection schema:
{
  "fields": [
    {"name": "embedding", "type": "float[]", "num_dim": 1536}
  ]
}
# Search using nearest neighbor (the search endpoint is a GET; curl -G
# turns each --data-urlencode pair into a query parameter):
curl -G 'http://localhost:8108/collections/products/documents/search' \
-H 'X-TYPESENSE-API-KEY: key' \
--data-urlencode 'q=*' \
--data-urlencode 'vector_query=embedding:([0.123, 0.456, ...], k:10)'
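Under the hood, a vector_query is conceptually a k-nearest-neighbor ranking: score every candidate by similarity between its stored embedding and the query embedding, keep the top k. A from-scratch sketch (illustrative; a real engine uses an ANN index such as HNSW rather than this brute-force scan):

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Brute-force k-NN: score every document, sort, keep the top k.
function nearestNeighbors(docs, queryVec, k) {
  return docs
    .map((d) => ({ ...d, score: cosineSimilarity(d.embedding, queryVec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

// Toy 2-dimensional embeddings (real ones have hundreds of dimensions):
const docs = [
  { id: 1, embedding: [1, 0] },
  { id: 2, embedding: [0, 1] },
  { id: 3, embedding: [0.9, 0.1] },
];
console.log(nearestNeighbors(docs, [1, 0], 2).map((d) => d.id)); // [ 1, 3 ]
```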
Decision Guide
Choose Meilisearch if:
→ Building app search (product catalog, docs, user search)
→ Want instant setup with zero config
→ Team doesn't want to learn a query DSL
→ Search volume < 1K QPS
→ You're replacing Algolia (same UX, self-hosted)
Choose Typesense if:
→ Need highest possible QPS (10K+)
→ Want native clustering for HA
→ C++ performance matters (resource-constrained server)
→ Building a multi-tenant SaaS search product
→ Okay defining schema upfront
Choose Elasticsearch/OpenSearch if:
→ Building log aggregation or APM (ELK stack)
→ Need complex aggregations and analytics
→ Horizontal scaling to 10+ nodes
→ Existing Elastic investment
→ Use OpenSearch for Apache 2.0 license
Avoid Elasticsearch for:
→ Simple app search < 10M documents
→ Teams without dedicated ops capacity
→ Memory-constrained servers (needs 2GB+ to run well)
See all open source search tools at OSSAlt.com/categories/search.