Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma, and other models. https://ollama.com

Ollama

Start building with open models.

Download

macOS

curl -fsSL https://ollama.com/install.sh | sh

or download manually

Windows

irm https://ollama.com/install.ps1 | iex

or download manually

Linux

curl -fsSL https://ollama.com/install.sh | sh

Manual install instructions

Docker

The official Ollama Docker image ollama/ollama is available on Docker Hub.
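
A minimal sketch of running the server in a container and chatting with a model inside it (CPU-only; the ollama container and volume names here are arbitrary choices, and GPU use requires additional flags):

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run gemma3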

Libraries

Community

Get started

ollama

You'll be prompted to run a model or connect Ollama to your existing agents or applications such as Claude Code, OpenClaw, OpenCode, Codex, Copilot, and more.

Coding

To launch a specific integration:

ollama launch claude

Supported integrations include Claude Code, Codex, Copilot CLI, Droid, and OpenCode.

AI assistant

Use OpenClaw to turn Ollama into a personal AI assistant across WhatsApp, Telegram, Slack, Discord, and more:

ollama launch openclaw

Chat with a model

Run and chat with Gemma 3:

ollama run gemma3

See ollama.com/library for the full list of available models.

See the quickstart guide for more details.

REST API

Ollama has a REST API for running and managing models.

curl http://localhost:11434/api/chat -d '{
  "model": "gemma3",
  "messages": [{
    "role": "user",
    "content": "Why is the sky blue?"
  }],
  "stream": false
}'
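
Other endpoints follow the same shape. For example, a single completion without chat history can be requested from the generate endpoint (a sketch reusing the model above):

curl http://localhost:11434/api/generate -d '{
  "model": "gemma3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'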

See the API documentation for all endpoints.

Python

pip install ollama
from ollama import chat

response = chat(model='gemma3', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response.message.content)
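
To stream the reply as it is generated, the same call accepts a stream flag and returns an iterator of partial messages (a sketch; the chunk fields are assumed to mirror the response object above):

from ollama import chat

stream = chat(model='gemma3', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
], stream=True)

for chunk in stream:
  # each chunk carries the next piece of the reply
  print(chunk.message.content, end='', flush=True)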

JavaScript

npm i ollama
import ollama from "ollama";

const response = await ollama.chat({
  model: "gemma3",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
});
console.log(response.message.content);
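
The JavaScript library streams the same way; a sketch assuming stream: true returns an async iterable of partial messages:

import ollama from "ollama";

const stream = await ollama.chat({
  model: "gemma3",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
  stream: true,
});
for await (const part of stream) {
  // each part carries the next piece of the reply
  process.stdout.write(part.message.content);
}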

Supported backends

  • llama.cpp project founded by Georgi Gerganov.

Documentation

Community Integrations

Want to add your project? Open a pull request.

Chat Interfaces

Web

Desktop

  • Dify.AI - LLM app development platform
  • AnythingLLM - All-in-one AI app for Mac, Windows, and Linux
  • Maid - Cross-platform mobile and desktop client
  • Witsy - AI desktop app for Mac, Windows, and Linux
  • Cherry Studio - Multi-provider desktop client
  • Ollama App - Multi-platform client for desktop and mobile
  • PyGPT - AI desktop assistant for Linux, Windows, and Mac
  • Alpaca - GTK4 client for Linux and macOS
  • SwiftChat - Cross-platform including iOS, Android, and Apple Vision Pro
  • Enchanted - Native macOS and iOS client
  • RWKV-Runner - Multi-model desktop runner
  • Ollama Grid Search - Evaluate and compare models
  • macai - macOS client for Ollama and ChatGPT
  • AI Studio - Multi-provider desktop IDE
  • Reins - Parameter tuning and reasoning model support
  • ConfiChat - Privacy-focused with optional encryption
  • LLocal.in - Electron desktop client
  • MindMac - AI chat client for Mac
  • Msty - Multi-model desktop client
  • BoltAI for Mac - AI chat client for Mac
  • IntelliBar - AI-powered assistant for macOS
  • Kerlig AI - AI writing assistant for macOS
  • Hillnote - Markdown-first AI workspace
  • Perfect Memory AI - Productivity AI personalized by screen and meeting history

Mobile

Several of the clients listed above also support mobile platforms: SwiftChat, Enchanted, Maid, Ollama App, Reins, and ConfiChat.

Code Editors & Development

Libraries & SDKs

Frameworks & Agents

RAG & Knowledge Bases

  • RAGFlow - RAG engine based on deep document understanding
  • R2R - Open-source RAG engine
  • MaxKB - Ready-to-use RAG chatbot
  • Minima - On-premises or fully local RAG
  • Chipper - AI interface with Haystack RAG
  • ARGO - RAG and deep research on Mac/Windows/Linux
  • Archyve - RAG-enabling document library
  • Casibase - AI knowledge base with RAG and SSO
  • BrainSoup - Native client with RAG and multi-agent automation

Bots & Messaging

Terminal & CLI

Productivity & Apps

Observability & Monitoring

  • Opik - Debug, evaluate, and monitor LLM applications
  • OpenLIT - OpenTelemetry-native monitoring for Ollama and GPUs
  • Lunary - LLM observability with analytics and PII masking
  • Langfuse - Open source LLM observability
  • HoneyHive - AI observability and evaluation for agents
  • MLflow Tracing - Open source LLM observability

Database & Embeddings

Infrastructure & Deployment

Cloud

Package Managers