OpenWebUI (formerly Ollama WebUI) is one of the most complete open-source web interfaces for local LLMs. It gives you a polished ChatGPT-like experience — conversations, code highlighting, document upload for RAG, multi-user support, and a model switcher — all on your own hardware.
What You Get
- Clean, modern chat UI with conversation history
- Switch between any installed Ollama model instantly
- RAG: Upload PDFs, text files, URLs and query them
- Code execution: Run Python, JS, Bash inline in chats
- Web search integration for real-time answers
- Multi-user auth with role-based access
- Plugin ecosystem and mobile-responsive design
Installation (Docker)
```bash
docker run -d \
  --name openwebui \
  --network host \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  -e WEBUI_SECRET_KEY=change-this-random-string \
  --restart unless-stopped \
  ghcr.io/open-webui/open-webui:main
```
Access at http://localhost:8080. Create your admin account on first visit.
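If the page doesn't load, a scripted check of both endpoints narrows things down quickly. A minimal sketch using only the Python standard library — the `/health` path on OpenWebUI is an assumption here; adjust the URLs to match your deployment:

```python
import urllib.request
import urllib.error

def check(url: str, timeout: float = 3.0) -> str:
    """Return a one-line status for a URL; never raises."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"{url} -> HTTP {resp.status}"
    except (urllib.error.URLError, OSError) as exc:
        return f"{url} -> unreachable ({exc.__class__.__name__})"

if __name__ == "__main__":
    # OpenWebUI frontend and the Ollama backend it talks to
    print(check("http://localhost:8080/health"))
    print(check("http://127.0.0.1:11434"))
```

If Ollama responds but OpenWebUI doesn't, the problem is the container; if neither responds, start with Ollama.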
Installation (pip)
```bash
python3 -m venv openwebui-env
source openwebui-env/bin/activate
pip install open-webui
open-webui serve --host 0.0.0.0 --port 8080
```
Connect to Ollama
Go to Settings → Admin Panel → Connections and set the Ollama API URL to http://127.0.0.1:11434. Click Check Connection — your installed models should appear.
Install Models
```bash
ollama pull mistral
ollama pull codellama
ollama pull llama3:70b
ollama pull phi3
ollama pull gemma2
```
Note that llama3:70b needs roughly 40 GB of RAM/VRAM at 4-bit quantization — pull only models your hardware can actually serve.
Enable RAG — Chat With Documents
Upload documents and the model reads them before answering. Click the attachment icon in any chat. Supported: PDF, DOCX, TXT, MD, CSV. For best results, install the embedding model:
ollama pull nomic-embed-text
Set it in Settings → RAG Settings → Embedding Model: nomic-embed-text
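To sanity-check the embedding model outside the UI, you can call Ollama's /api/embeddings endpoint directly. A minimal sketch with the standard library — the endpoint and field names follow Ollama's REST API; OLLAMA_URL is the address used throughout this guide:

```python
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434"  # same address as the setup above

def build_embedding_request(text: str, model: str = "nomic-embed-text"):
    """Build the POST request for Ollama's /api/embeddings endpoint."""
    payload = {"model": model, "prompt": text}
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/embeddings",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def embed(text: str) -> list[float]:
    """Send the request and return the embedding vector."""
    with urllib.request.urlopen(build_embedding_request(text)) as resp:
        return json.load(resp)["embedding"]
```

If `embed("test sentence")` returns a vector, OpenWebUI's RAG pipeline will be able to use the same model.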
Code Execution
Enable it in Settings → Pipelines → Code Execution and select the allowed languages. Code blocks in any chat (triple backticks with a language tag) can then be run inline.
Multi-User Setup
Go to Admin Panel → User Management. Create users with roles: Admin, User, or Pending. Enable Enforce User Auth. For API access, go to Profile → API Keys.
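Those API keys work as Bearer tokens against OpenWebUI's OpenAI-compatible API. A hedged sketch, assuming the /api/chat/completions path — WEBUI_URL, API_KEY, and the model name are placeholders for your own deployment:

```python
import json
import urllib.request

WEBUI_URL = "http://localhost:8080"  # adjust for your deployment
API_KEY = "sk-placeholder"           # from Profile -> API Keys

def chat_request(prompt: str, model: str = "mistral"):
    """Build an OpenAI-style chat completion request for OpenWebUI."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{WEBUI_URL}/api/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen(chat_request("Hello"))` returns the usual OpenAI-style completion JSON.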
Reverse Proxy + SSL
```bash
sudo certbot --nginx -d openwebui.yourdomain.com
```
```nginx
server {
    listen 443 ssl;
    server_name openwebui.yourdomain.com;
    client_max_body_size 50M;

    location / {
        proxy_pass http://127.0.0.1:8080;
        # WebSocket support, needed for streaming responses
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_buffering off;
        proxy_cache off;
    }
}
```
Troubleshooting
- No models showing: verify Ollama is running: curl http://127.0.0.1:11434
- Slow RAG: install nomic-embed-text for faster, better embeddings
- File upload fails: nginx defaults to 1 MB request bodies — set client_max_body_size 50M;
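When these symptoms appear, a raw TCP check helps separate "service is down" from "HTTP-level problem". A small sketch assuming the default ports from this guide:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, port in [("OpenWebUI", 8080), ("Ollama", 11434)]:
        state = "open" if port_open("127.0.0.1", port) else "closed"
        print(f"{name} port {port}: {state}")
```

A closed port means the service isn't running at all; an open port with UI errors points at configuration (wrong OLLAMA_BASE_URL, proxy settings, or auth).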