LocalAI Hub
8 articles · 8 topics

Self-hosted AI, running on your hardware.

Run powerful AI models locally. No cloud subscriptions, no data leaving your server.

Local AI

Self-Host Ollama: Run Any LLM on Your Own Server

Complete guide to running Ollama on Ubuntu — deploy Llama 3, Mistral, Gemma, and hundreds of other language models locally.

2 min read
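Ollama exposes a simple local REST API once installed. As a rough sketch of calling it from Python — the default port 11434 is Ollama's standard, but the model name `llama3` is an assumption about what you have pulled:

```python
# Minimal sketch of querying a local Ollama server's /api/generate endpoint.
# Assumes Ollama is running on its default port 11434 and that the `llama3`
# model has already been pulled; swap in any model you have installed.
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> dict:
    # stream=False asks Ollama for one JSON object instead of chunked output
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama3") -> str:
    body = json.dumps(build_request(prompt, model)).encode()
    req = request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (with the server running):
#   print(generate("Why is the sky blue?"))
```

The same request shape works for any model Ollama hosts; only the `model` field changes.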
AI & Security

Securing Your AI Servers: Firewall, Auth, and Access Control

Harden your self-hosted AI infrastructure — UFW firewall rules, API authentication, rate limiting, network isolation, and AI-specific threat defense.

2 min read
AI Infrastructure

vLLM: High-Throughput LLM Inference on Your GPU

Deploy vLLM for 10-24x faster throughput than naive LLM serving — PagedAttention, continuous batching, OpenAI-compatible API.

2 min read
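Because vLLM serves an OpenAI-compatible API, any OpenAI-style client can talk to it. A stdlib-only sketch — the default port 8000 is vLLM's standard, but the model identifier is an assumption and must match whatever the server was launched with:

```python
# Sketch of hitting vLLM's OpenAI-compatible chat endpoint with the standard
# library. Assumes a server started with something like
# `vllm serve meta-llama/Meta-Llama-3-8B-Instruct` on its default port 8000.
import json
from urllib import request

VLLM_URL = "http://localhost:8000/v1/chat/completions"

def chat_payload(user_msg: str, model: str) -> dict:
    # The OpenAI chat schema: a model name plus a list of role/content messages
    return {"model": model, "messages": [{"role": "user", "content": user_msg}]}

def chat(user_msg: str, model: str) -> str:
    body = json.dumps(chat_payload(user_msg, model)).encode()
    req = request.Request(
        VLLM_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Usage (with the server running):
#   print(chat("Summarize PagedAttention in one sentence.",
#              "meta-llama/Meta-Llama-3-8B-Instruct"))
```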
Self-Hosted AI

AnythingLLM: Chat With Your Documents Using Local RAG

Build a private RAG chatbot — upload PDFs, take notes, query your knowledge base without any data leaving your server.

2 min read
AI Agents

CrewAI: Build Multi-Agent AI Systems That Work Together

Deploy CrewAI — orchestrate multiple AI agents that collaborate on complex tasks, from research to content creation to automated code review.

3 min read
AI Agents

Open Interpreter: Let AI Run Code Directly on Your Machine

Open Interpreter gives AI models the ability to execute code on your computer — Python, JavaScript, Bash, and more, with your approval before each step.

2 min read
AI Infrastructure

LocalAI: One API for All Your AI Models

LocalAI provides a unified OpenAI-compatible API that routes to Ollama, llama.cpp, and other backends — a drop-in replacement for the OpenAI API.

2 min read
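"Drop-in replacement" here means existing OpenAI client code only needs its base URL changed. A small sketch listing the models a LocalAI server exposes — port 8080 is LocalAI's default, and out of the box no API key is required:

```python
# Sketch: because LocalAI speaks the OpenAI wire format, pointing a client at
# its base URL is the whole migration. This lists the server's available models.
# Assumes LocalAI's default port 8080.
import json
from urllib import request

BASE_URL = "http://localhost:8080/v1"   # swap in for https://api.openai.com/v1

def models_endpoint(base_url: str = BASE_URL) -> str:
    # Same path the OpenAI API uses for model discovery
    return f"{base_url.rstrip('/')}/models"

def list_models(base_url: str = BASE_URL) -> list:
    with request.urlopen(models_endpoint(base_url)) as resp:
        return [m["id"] for m in json.loads(resp.read())["data"]]

# Usage (with the server running):
#   print(list_models())
```

Any model name returned here can then be used in `/v1/chat/completions` requests exactly as with the hosted API.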