How to Self-Host Dify in 2026: Complete Setup Guide

OSSAlt Team
dify · self-hosted · ai platform · llm · rag · workflows · docker · setup guide · 2026

What Dify Is

Dify (80K+ GitHub stars, MIT license) is an open source platform for building production-ready AI applications. Think of it as an all-in-one workspace for:

  • AI chatbots: Build and deploy conversational AI with custom knowledge bases
  • Agentic workflows: Create multi-step AI pipelines that use tools, APIs, and code
  • RAG applications: Chat with your documents, PDFs, wikis, and data sources
  • LLM management: Connect and switch between OpenAI, Anthropic, Google, Ollama (local), and 100+ other providers

Self-hosting Dify means your prompts, documents, and conversations don't pass through Dify's servers. You control the data and can connect it to entirely local models (via Ollama) for full privacy.

Server Requirements

Minimum

  • 2 CPU cores
  • 4GB RAM (8GB recommended)
  • 20GB storage for application data and model caches

Recommended for Production

  • 4+ CPU cores
  • 8-16GB RAM
  • 50GB+ storage
  • External database (PostgreSQL) for production reliability

| Use Case | Server | Monthly |
|---|---|---|
| Personal/testing | CAX21 (4GB ARM) | $6 |
| Small team | CPX31 (8GB) | $10 |
| Production | CPX41 (16GB) | $19 |

If you want to run local models with Ollama alongside Dify (for zero-cost LLM inference), you need more RAM — budget 8GB per 7B model plus Dify's overhead.

Step 1: Prepare Your Server

# Update system
sudo apt update && sudo apt upgrade -y

# Install Docker
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker

# Verify
docker --version
docker compose version

Step 2: Clone Dify

git clone https://github.com/langgenius/dify.git
cd dify/docker

The docker directory contains everything needed for deployment.

Step 3: Configure Environment Variables

cp .env.example .env
nano .env

Critical settings to configure:

# Generate a strong secret key (run: openssl rand -base64 32)
SECRET_KEY=your-generated-secret-key-here

# Postgres settings
DB_USERNAME=dify
DB_PASSWORD=YourStrongPassword123
DB_HOST=db
DB_PORT=5432
DB_DATABASE=dify

# Redis settings
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_DB=0

# Storage settings (local by default)
STORAGE_TYPE=local
STORAGE_LOCAL_PATH=storage

# If using S3 for file storage:
# STORAGE_TYPE=s3
# S3_ENDPOINT=https://s3.amazonaws.com
# S3_BUCKET_NAME=your-bucket
# S3_ACCESS_KEY=...
# S3_SECRET_KEY=...

# Your server URL (important for OAuth callbacks and file URLs)
CONSOLE_WEB_URL=https://dify.yourdomain.com
APP_WEB_URL=https://dify.yourdomain.com

# SMTP for email notifications (optional)
MAIL_TYPE=smtp
SMTP_SERVER=smtp.example.com
SMTP_PORT=587
SMTP_USERNAME=your@email.com
SMTP_PASSWORD=yourpassword
MAIL_DEFAULT_SEND_FROM=noreply@yourdomain.com
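
Rather than generating the secret key and pasting it by hand, you can write it into .env in one step. A sketch (set_secret_key is a hypothetical helper; the sed pattern assumes the SECRET_KEY line already exists in .env, which it does after copying .env.example):

```shell
# set_secret_key FILE — write a fresh random SECRET_KEY into FILE in place.
set_secret_key() {
  # base64 output never contains sed's "|" delimiter, so this substitution is safe
  key=$(openssl rand -base64 32)
  sed -i "s|^SECRET_KEY=.*|SECRET_KEY=${key}|" "$1"
}

# Inside dify/docker:
#   set_secret_key .env && grep '^SECRET_KEY=' .env
```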

Step 4: Start Dify

docker compose up -d

Dify starts multiple containers:

  • api: Backend API server
  • worker: Background job processor
  • web: Frontend Next.js application
  • db: PostgreSQL database
  • redis: Cache and job queue
  • nginx: Reverse proxy (routes to api/web)
  • sandbox: Code execution sandbox (for Python/JS in workflows)
  • ssrf_proxy: Security proxy for external requests
  • weaviate: Vector database for embeddings (optional, can use pgvector instead)

Initial startup downloads container images and takes 2-5 minutes. Monitor:

docker compose logs -f api

Wait until you see the API server report it's ready.
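
If you script your deploys, a small polling helper can block until the stack answers HTTP. A sketch (wait_for_http is a hypothetical name; it only confirms that nginx responds, not that database migrations have finished):

```shell
# wait_for_http URL [MAX_TRIES] — poll until the URL answers or give up.
wait_for_http() {
  url=$1; tries=${2:-60}; i=0
  until curl -fso /dev/null "$url"; do
    i=$((i + 1))
    # Give up after MAX_TRIES failed attempts (default 60, ~2 minutes)
    [ "$i" -ge "$tries" ] && { echo "gave up waiting for $url" >&2; return 1; }
    sleep 2
  done
}

# Once `docker compose up -d` is running:
#   wait_for_http http://localhost/ && echo "Dify answered"
```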

Step 5: Initial Setup

Navigate to http://your-server-ip or http://your-server-ip/install

Create Admin Account

Enter your email and password to create the admin account. This is the workspace owner account.

Access the Dashboard

After account creation, you land on Dify's main dashboard. The key sections:

  • Studio: Build and test AI applications
  • Knowledge: Create knowledge bases from documents
  • Models: Configure LLM providers
  • Monitoring: Observe application usage and logs

Step 6: Connect LLM Providers

This is where you configure which AI models Dify can use.

Settings → Model Providers

Connect OpenAI (Cloud)

  1. Go to Settings → Model Provider
  2. Click OpenAI
  3. Enter your API key: sk-...
  4. Save and test connection

Available models after connecting: GPT-4o, GPT-4o-mini, o1, o3, text-embedding-ada-002, dall-e-3

Connect Anthropic (Cloud)

  1. Click Anthropic
  2. Enter API key: sk-ant-...
  3. Available: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku

Connect Local Models via Ollama

Use local models for free inference — no API costs, full privacy.

First, install Ollama on your server (or a separate machine):

curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.1:8b
ollama pull nomic-embed-text  # For embeddings

In Dify:

  1. Settings → Model Provider → Ollama
  2. Enter base URL: http://host-ip:11434 (or http://host.docker.internal:11434 if Ollama runs on the same Docker host; on Linux that name only resolves if the api/worker containers map it via extra_hosts: host.docker.internal:host-gateway)
  3. Add models: enter the model name exactly as in Ollama (llama3.1:8b, mistral, etc.)

Embedding model: Add nomic-embed-text as an embedding model for knowledge base search.

Now Dify can use your local Llama models for zero-cost AI inference.
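
Before typing a model name into Dify, it helps to confirm Ollama actually has it pulled; Ollama's /api/tags endpoint lists local models. A sketch (ollama_has_model is a hypothetical helper name):

```shell
# ollama_has_model BASE_URL MODEL — succeed if MODEL appears in /api/tags.
ollama_has_model() {
  base=$1; model=$2
  # /api/tags returns JSON like {"models":[{"name":"llama3.1:8b"}, ...]}
  curl -fsS "$base/api/tags" | grep -q "\"name\":\"$model\""
}

# Usage:
#   ollama_has_model http://localhost:11434 llama3.1:8b \
#     && echo "available" || echo "run: ollama pull llama3.1:8b"
```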

Connect Other Providers

Dify supports many providers — each has a similar setup:

  • Google AI Studio: Gemini models
  • Azure OpenAI: Enterprise OpenAI deployments
  • Mistral: Mistral models
  • OpenRouter: Access 100+ models via one API
  • Groq: Fast inference for open models

Step 7: Build Your First Application

Create a Chatbot

  1. Studio → Create App → Chatbot
  2. Name it (e.g., "Customer Support Bot")
  3. Choose an Orchestration mode:
    • Basic: Simple chatbot with a system prompt
    • Advanced (Workflow): Visual pipeline with conditional logic, tools, and multi-step processing

For a basic chatbot:

  1. Set the System Prompt: Define the bot's persona and behavior
  2. Select the Model: Choose from your connected providers
  3. Set Context (optional): Attach knowledge bases for RAG
  4. Debug and Preview: Test in the right panel

Create a Knowledge Base (RAG)

  1. Knowledge → Create Knowledge
  2. Upload documents: PDF, TXT, MD, DOCX supported
  3. Configure chunking: chunk size affects retrieval quality
  4. Choose embedding model: the model that converts text to searchable vectors
  5. After indexing, attach the knowledge base to any application

Test with: "What does [document] say about X?"

Build a Workflow Application

Workflows are Dify's most powerful feature — multi-step AI pipelines that can:

  • Call external APIs
  • Execute Python or JavaScript code
  • Branch based on conditions
  • Use multiple AI models in sequence
  • Generate images with DALL-E or Stable Diffusion

Example: Document Summarization Workflow

  1. Input: Document file upload
  2. Extract text: Parse PDF/document
  3. Summarize: Send text to LLM with summarization prompt
  4. Format: Structure the summary
  5. Output: Return formatted summary

The visual canvas lets you connect these nodes by dragging between them.

Step 8: Configure HTTPS

For team access and security, set up HTTPS with a reverse proxy.

Using Caddy

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install caddy

/etc/caddy/Caddyfile:

dify.yourdomain.com {
    reverse_proxy localhost:80
}

Then restart Caddy:

sudo systemctl restart caddy

Update your .env file with the HTTPS URL and restart Dify:

# Update in .env:
CONSOLE_WEB_URL=https://dify.yourdomain.com
APP_WEB_URL=https://dify.yourdomain.com

docker compose up -d

Step 9: Invite Team Members

  1. Settings → Members → Invite Member
  2. Enter email address
  3. Select role: Owner, Admin, Editor, or Member
  4. Invited users receive an email with a setup link

Roles determine what members can create, modify, and access across the workspace.

Step 10: Deploy Applications as APIs or Chatbots

Every Dify application can be deployed in multiple ways:

Embedded Chatbot Widget

Add Dify as a chat widget to any website:

  1. Open your application
  2. Publish → Embed into Site
  3. Copy the JavaScript snippet and paste into your website HTML
<script>
 window.difyChatbotConfig = {
  token: 'your-app-token'
 }
</script>
<script
 src="https://dify.yourdomain.com/embed.min.js"
 id="your-app-token"
 defer>
</script>

REST API

Every Dify application exposes a REST API:

curl -X POST 'https://dify.yourdomain.com/v1/chat-messages' \
  -H 'Authorization: Bearer app-token-here' \
  -H 'Content-Type: application/json' \
  -d '{
    "inputs": {},
    "query": "What is the return policy?",
    "response_mode": "blocking",
    "conversation_id": "",
    "user": "user-123"
  }'

Use this to integrate Dify applications into your own products and services.
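
The blocking call above returns one JSON body when generation finishes; for chat UIs you usually want response_mode "streaming", which returns Server-Sent Events. A sketch, reusing the same placeholder domain and token as above (curl -N disables buffering so events print as they arrive):

```shell
# Build the streaming payload, then pipe events as they arrive with curl -N.
payload='{"inputs": {}, "query": "What is the return policy?", "response_mode": "streaming", "conversation_id": "", "user": "user-123"}'
echo "$payload"

# curl -N -X POST 'https://dify.yourdomain.com/v1/chat-messages' \
#   -H 'Authorization: Bearer app-token-here' \
#   -H 'Content-Type: application/json' \
#   -d "$payload"
```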

Production Considerations

Use External PostgreSQL

For production, use a managed PostgreSQL instance instead of the Docker container:

DB_HOST=your-postgres-host
DB_PORT=5432
DB_USERNAME=dify
DB_PASSWORD=strong-password
DB_DATABASE=dify

This enables proper backups, monitoring, and reliability SLAs.
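
A quick sanity check before restarting Dify is to assemble the same connection string from your .env values and probe it with psql. A sketch (make_dsn is just an illustrative helper; the psql line requires the postgres client and a password without URL-special characters, which would otherwise need percent-encoding):

```shell
# make_dsn USER PASSWORD HOST PORT DBNAME — assemble a postgres:// URL.
make_dsn() {
  printf 'postgresql://%s:%s@%s:%s/%s' "$1" "$2" "$3" "$4" "$5"
}

dsn=$(make_dsn dify strong-password your-postgres-host 5432 dify)
echo "$dsn"
# psql "$dsn" -c 'SELECT 1;'   # prints one row if the database is reachable
```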

Configure File Storage

For production, use S3-compatible storage (Hetzner Object Storage, Backblaze B2, AWS S3) instead of local storage:

STORAGE_TYPE=s3
S3_ENDPOINT=https://s3.amazonaws.com
S3_BUCKET_NAME=dify-storage
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_REGION=us-east-1

Backup

Back up regularly:

# Database backup (container name depends on your Compose project name — check with docker ps)
docker exec dify-db-1 pg_dump -U dify dify | gzip > dify-backup-$(date +%Y%m%d).sql.gz

# Storage backup (if local)
rsync -av /path/to/dify/docker/volumes/app/storage/ /backup/dify-storage/
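
To keep dumps from growing without bound, a small retention helper can drop all but the newest N. A sketch (prune_dumps is a hypothetical name; the cron line below reuses the container name and backup path assumed above):

```shell
# prune_dumps DIR KEEP — delete all but the newest KEEP dumps in DIR.
prune_dumps() {
  dir=$1; keep=$2
  # ls -1t sorts newest first; tail skips the first KEEP entries
  ls -1t "$dir"/dify-backup-*.sql.gz 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm --
}

# Example nightly cron entry (crontab -e; % must be escaped in cron):
#   0 3 * * * docker exec dify-db-1 pg_dump -U dify dify | gzip > /backup/dify-backup-$(date +\%Y\%m\%d).sql.gz
# then call prune_dumps /backup 14 from the same backup script
```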

Cost Analysis

Dify Cloud vs Self-Hosted

| Dify Cloud | Monthly | Annual |
|---|---|---|
| Sandbox (free tier) | $0 | $0 |
| Professional | $59 | $708 |
| Team | $159 | $1,908 |

| Self-Hosted | Monthly | Annual |
|---|---|---|
| Hetzner CPX31 | $10 | $120 |
| + Ollama local models | $0 | $0 |
| + OpenAI API (usage) | Variable | Variable |

Self-hosted Dify with local Ollama models = $120/year for the server with no per-query AI costs. Add cloud API costs if you connect OpenAI/Anthropic for their specific capabilities.

Find More AI Platform Alternatives

Browse all AI development platform alternatives on OSSAlt — compare Dify, Flowise, LangFlow, AnythingLLM, and every other open source AI application platform with deployment guides.
