How to Self-Host Dify in 2026: Complete Setup Guide
What Dify Is
Dify (80K+ GitHub stars, MIT license) is an open source platform for building production-ready AI applications. Think of it as an all-in-one workspace for:
- AI chatbots: Build and deploy conversational AI with custom knowledge bases
- Agentic workflows: Create multi-step AI pipelines that use tools, APIs, and code
- RAG applications: Chat with your documents, PDFs, wikis, and data sources
- LLM management: Connect and switch between OpenAI, Anthropic, Google, Ollama (local), and 100+ other providers
Self-hosting Dify means your prompts, documents, and conversations don't pass through Dify's servers. You control the data and can connect it to entirely local models (via Ollama) for full privacy.
Server Requirements
Minimum
- 2 CPU cores
- 4GB RAM (8GB recommended)
- 20GB storage for application data and model caches
Recommended for Production
- 4+ CPU cores
- 8-16GB RAM
- 50GB+ storage
- External database (PostgreSQL) for production reliability
Recommended Servers (Hetzner)
| Use Case | Server | Monthly |
|---|---|---|
| Personal/testing | CAX21 (4GB ARM) | $6 |
| Small team | CPX31 (8GB) | $10 |
| Production | CPX41 (16GB) | $19 |
If you want to run local models with Ollama alongside Dify (for zero-cost LLM inference), you need more RAM — budget 8GB per 7B model plus Dify's overhead.
Step 1: Prepare Your Server
# Update system
sudo apt update && sudo apt upgrade -y
# Install Docker
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker
# Verify
docker --version
docker compose version
Step 2: Clone Dify
git clone https://github.com/langgenius/dify.git
cd dify/docker
The docker directory contains everything needed for deployment.
Step 3: Configure Environment Variables
cp .env.example .env
nano .env
Critical settings to configure:
# Generate a strong secret key (run: openssl rand -base64 32)
SECRET_KEY=your-generated-secret-key-here
# Postgres settings
DB_USERNAME=dify
DB_PASSWORD=YourStrongPassword123
DB_HOST=db
DB_PORT=5432
DB_DATABASE=dify
# Redis settings
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_DB=0
# Storage settings (local by default)
STORAGE_TYPE=local
STORAGE_LOCAL_PATH=storage
# If using S3 for file storage:
# STORAGE_TYPE=s3
# S3_ENDPOINT=https://s3.amazonaws.com
# S3_BUCKET_NAME=your-bucket
# S3_ACCESS_KEY=...
# S3_SECRET_KEY=...
# Your server URL (important for OAuth callbacks and file URLs)
CONSOLE_WEB_URL=https://dify.yourdomain.com
APP_WEB_URL=https://dify.yourdomain.com
# SMTP for email notifications (optional)
MAIL_TYPE=smtp
SMTP_SERVER=smtp.example.com
SMTP_PORT=587
SMTP_USERNAME=your@email.com
SMTP_PASSWORD=yourpassword
MAIL_DEFAULT_SEND_FROM=noreply@yourdomain.com
Step 4: Start Dify
docker compose up -d
Dify starts multiple containers:
- api: Backend API server
- worker: Background job processor
- web: Frontend Next.js application
- db: PostgreSQL database
- redis: Cache and job queue
- nginx: Reverse proxy (routes to api/web)
- sandbox: Code execution sandbox (for Python/JS in workflows)
- ssrf_proxy: Security proxy for external requests
- weaviate: Vector database for embeddings (optional; pgvector can be used instead)
Initial startup downloads container images and takes 2-5 minutes. Monitor:
docker compose logs -f api
Wait until you see the API server report it's ready.
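If you script your deployments, a small readiness probe saves guesswork. This is a minimal sketch, not part of Dify itself — `DIFY_URL` is an assumption, so point it at your own server address:

```shell
#!/usr/bin/env bash
# Poll a URL until it answers, with a bounded retry budget (sketch).
wait_for_url() {
  local url="$1" tries="${2:-30}" delay="${3:-10}"
  local i
  for i in $(seq 1 "$tries"); do
    # -f: treat HTTP errors as failures; --max-time bounds each attempt
    if curl -fsS -o /dev/null --max-time 5 "$url" 2>/dev/null; then
      echo "up"
      return 0
    fi
    sleep "$delay"
  done
  echo "down"
  return 1
}

# Example: wait_for_url "http://your-server-ip" 30 10
```

Run it after `docker compose up -d`; "up" means nginx is serving and you can proceed to the setup wizard.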
Step 5: Initial Setup
Open http://your-server-ip in your browser; the first-run setup wizard is at http://your-server-ip/install
Create Admin Account
Enter your email and password to create the admin account. This is the workspace owner account.
Access the Dashboard
After account creation, you land on Dify's main dashboard. The key sections:
- Studio: Build and test AI applications
- Knowledge: Create knowledge bases from documents
- Models: Configure LLM providers
- Monitoring: Observe application usage and logs
Step 6: Connect LLM Providers
This is where you configure which AI models Dify can use.
All provider configuration lives under Settings → Model Provider.
Connect OpenAI (Cloud)
- Go to Settings → Model Provider
- Click OpenAI
- Enter your API key (sk-...)
- Save and test the connection
Available models after connecting: GPT-4o, GPT-4o-mini, o1, o3, text-embedding-ada-002, dall-e-3
Connect Anthropic (Cloud)
- Click Anthropic
- Enter your API key (sk-ant-...)
- Available models: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
Connect Local Models via Ollama
Use local models for free inference — no API costs, full privacy.
First, install Ollama on your server (or a separate machine):
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.1:8b
ollama pull nomic-embed-text # For embeddings
In Dify:
- Settings → Model Provider → Ollama
- Enter the base URL: http://host-ip:11434 (or http://host.docker.internal:11434 if Ollama runs on the same Docker host)
- Add models: enter each model name exactly as it appears in Ollama (llama3.1:8b, mistral, etc.)
Embedding model: also add nomic-embed-text as an embedding model — knowledge bases need one to convert documents into searchable vectors.
Now Dify can use your local Llama models for zero-cost AI inference.
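Before adding models in the UI, it's worth confirming that Dify's host can actually reach Ollama. A quick sketch using Ollama's `/api/tags` endpoint (which lists installed models); the default URL below is an assumption, so use the same base URL you gave Dify:

```shell
# Check whether Ollama answers at a given base URL (sketch).
check_ollama() {
  local url="${1:-http://localhost:11434}"
  # /api/tags lists the models Ollama has pulled locally
  if curl -fsS --max-time 5 "$url/api/tags" >/dev/null 2>&1; then
    echo "reachable"
  else
    echo "not reachable"
  fi
}

check_ollama "${OLLAMA_URL:-http://localhost:11434}"
```

If this prints "not reachable", fix networking (firewall, Ollama's bind address) before debugging anything inside Dify.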
Connect Other Providers
Dify supports many providers — each has a similar setup:
- Google AI Studio: Gemini models
- Azure OpenAI: Enterprise OpenAI deployments
- Mistral: Mistral models
- OpenRouter: Access 100+ models via one API
- Groq: Fast inference for open models
Step 7: Build Your First Application
Create a Chatbot
- Studio → Create App → Chatbot
- Name it (e.g., "Customer Support Bot")
- Choose an Orchestration mode:
- Basic: Simple chatbot with a system prompt
- Advanced (Workflow): Visual pipeline with conditional logic, tools, and multi-step processing
For a basic chatbot:
- Set the System Prompt: Define the bot's persona and behavior
- Select the Model: Choose from your connected providers
- Set Context (optional): Attach knowledge bases for RAG
- Debug and Preview: Test in the right panel
Create a Knowledge Base (RAG)
- Knowledge → Create Knowledge
- Upload documents: PDF, TXT, MD, DOCX supported
- Configure chunking: chunk size affects retrieval quality
- Choose embedding model: the model that converts text to searchable vectors
- After indexing, attach the knowledge base to any application
Test with: "What does [document] say about X?"
Build a Workflow Application
Workflows are Dify's most powerful feature — multi-step AI pipelines that can:
- Call external APIs
- Execute Python or JavaScript code
- Branch based on conditions
- Use multiple AI models in sequence
- Generate images with DALL-E or Stable Diffusion
Example: Document Summarization Workflow
- Input: Document file upload
- Extract text: Parse PDF/document
- Summarize: Send text to LLM with summarization prompt
- Format: Structure the summary
- Output: Return formatted summary
The visual canvas lets you connect these nodes by dragging between them.
Step 8: Configure HTTPS
For team access and security, set up HTTPS with a reverse proxy.
Using Caddy
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install caddy
/etc/caddy/Caddyfile:
dify.yourdomain.com {
reverse_proxy localhost:80
}
sudo systemctl restart caddy
Update your .env file with the HTTPS URL and restart Dify:
# Update in .env:
CONSOLE_WEB_URL=https://dify.yourdomain.com
APP_WEB_URL=https://dify.yourdomain.com
docker compose up -d
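A common mistake is restarting with only one of the two URLs updated. This sketch checks both lines before you restart; `ENV_FILE` is a placeholder for `dify/docker/.env`:

```shell
# Confirm both web URLs in .env use HTTPS (sketch).
check_https_urls() {
  local f="$1"
  if [ -f "$f" ] \
     && grep -q '^CONSOLE_WEB_URL=https://' "$f" \
     && grep -q '^APP_WEB_URL=https://' "$f"; then
    echo "ok"
  else
    echo "missing"
  fi
}

check_https_urls "${ENV_FILE:-.env}"
```

"missing" means at least one of CONSOLE_WEB_URL or APP_WEB_URL still points at HTTP (or the file path is wrong).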
Step 9: Invite Team Members
- Settings → Members → Invite Member
- Enter email address
- Select role: Owner, Admin, Editor, or Member
- Invited users receive an email with a setup link
Roles determine what members can create, modify, and access across the workspace.
Step 10: Deploy Applications as APIs or Chatbots
Every Dify application can be deployed in multiple ways:
Embedded Chatbot Widget
Add Dify as a chat widget to any website:
- Open your application
- Publish → Embed into Site
- Copy the JavaScript snippet and paste into your website HTML
<script>
window.difyChatbotConfig = {
token: 'your-app-token'
}
</script>
<script
src="https://dify.yourdomain.com/embed.min.js"
id="your-app-token"
defer>
</script>
REST API
Every Dify application exposes a REST API:
curl -X POST 'https://dify.yourdomain.com/v1/chat-messages' \
-H 'Authorization: Bearer app-token-here' \
-H 'Content-Type: application/json' \
-d '{
"inputs": {},
"query": "What is the return policy?",
"response_mode": "blocking",
"conversation_id": "",
"user": "user-123"
}'
Use this to integrate Dify applications into your own products and services.
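In blocking mode the reply text comes back in the response's "answer" field. A minimal extraction sketch using python3 instead of requiring jq — RESPONSE here is a hypothetical sample, not real Dify output (the real payload carries more fields):

```shell
# Extract the "answer" field from a blocking-mode response (sketch).
RESPONSE='{"answer":"Returns are accepted within 30 days.","conversation_id":"abc123"}'
ANSWER=$(printf '%s' "$RESPONSE" | python3 -c 'import json,sys; print(json.load(sys.stdin)["answer"])')
echo "$ANSWER"
```

In a real integration you would pipe the `curl` output into the extraction step and persist `conversation_id` to keep multi-turn context.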
Production Considerations
Use External PostgreSQL
For production, use a managed PostgreSQL instance instead of the Docker container:
DB_HOST=your-postgres-host
DB_PORT=5432
DB_USERNAME=dify
DB_PASSWORD=strong-password
DB_DATABASE=dify
This enables proper backups, monitoring, and reliability SLAs.
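Before pointing Dify at the managed instance, check that the database is reachable from the Dify host. A sketch using `pg_isready` (ships with the postgresql-client package); the host and user below are placeholders:

```shell
# Reachability check for the external PostgreSQL instance (sketch).
DB_HOST="${DB_HOST:-your-postgres-host}"
if command -v pg_isready >/dev/null 2>&1; then
  pg_isready -h "$DB_HOST" -p 5432 -U dify || echo "database not reachable at $DB_HOST"
else
  echo "install postgresql-client to run this check"
fi
```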
Configure File Storage
For production, use S3-compatible storage (Hetzner Object Storage, Backblaze B2, AWS S3) instead of local storage:
STORAGE_TYPE=s3
S3_ENDPOINT=https://s3.amazonaws.com
S3_BUCKET_NAME=dify-storage
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_REGION=us-east-1
Backup
Back up regularly:
# Database backup
docker exec dify-db-1 pg_dump -U dify dify | gzip > dify-backup-$(date +%Y%m%d).sql.gz
# Storage backup (if local)
rsync -av /path/to/dify/docker/volumes/app/storage/ /backup/dify-storage/
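The two commands above can be combined into a cron-friendly script with dated filenames and simple retention. A sketch under stated assumptions: the container name (dify-db-1) and paths are placeholders, so check `docker ps` for the names on your host:

```shell
#!/usr/bin/env bash
# Nightly Dify database backup with 14-day retention (sketch).
BACKUP_DIR="${BACKUP_DIR:-$HOME/backups/dify}"
mkdir -p "$BACKUP_DIR"
OUT="$BACKUP_DIR/dify-backup-$(date +%Y%m%d).sql.gz"

# Dump only if the database container is actually running
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q '^dify-db-1$'; then
  docker exec dify-db-1 pg_dump -U dify dify | gzip > "$OUT"
fi

# Prune dumps older than 14 days
find "$BACKUP_DIR" -name 'dify-backup-*.sql.gz' -mtime +14 -delete
echo "$OUT"
```

Schedule it with cron (e.g. `0 3 * * * /usr/local/bin/dify-backup.sh`) and copy the dumps off-host — a backup on the same disk as the database is not a backup.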
Cost Analysis
Dify Cloud vs Self-Hosted
| Dify Cloud | Monthly | Annual |
|---|---|---|
| Sandbox (free tier) | $0 | $0 |
| Professional | $59 | $708 |
| Team | $159 | $1,908 |
| Self-Hosted | Monthly | Annual |
|---|---|---|
| Hetzner CPX31 | $10 | $120 |
| + Ollama local models | $0 | $0 |
| + OpenAI API (usage) | Variable | Variable |
Self-hosted Dify with local Ollama models = $120/year for the server with no per-query AI costs. Add cloud API costs if you connect OpenAI/Anthropic for their specific capabilities.
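A back-of-envelope check using the figures from the tables above (Dify Cloud Professional vs a Hetzner CPX31, before any cloud API usage):

```shell
# Annual cost comparison from the tables above (sketch).
CLOUD_ANNUAL=708   # Professional tier, $59/mo
SELF_ANNUAL=120    # Hetzner CPX31, $10/mo
SAVINGS=$((CLOUD_ANNUAL - SELF_ANNUAL))
echo "Self-hosting saves \$${SAVINGS}/year before API usage"
```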
Find More AI Platform Alternatives
Browse all AI development platform alternatives on OSSAlt — compare Dify, Flowise, LangFlow, AnythingLLM, and every other open source AI application platform with deployment guides.