* fix: add docker system prune before image pull to prevent disk exhaustion

The 60GB droplet filled up after ~40 deploys because each docker compose pull leaves the previous image's layers dangling/unused. The gitnexus image is ~700MB, so ~40 stale copies ≈ 28GB of dead layers. Combined with indexes, the OS, and Docker's build cache, the disk hits 100% and the next pull fails with 'no space left on device'.

Add a docker system prune -af --volumes BEFORE pulling the new image on every deploy. This removes stopped containers, unused networks, all images not referenced by a running container, and the build cache; running containers are never touched. Typically frees 1-2GB per deploy (the previous image's layers).

Also add a hard 2GB free-space guard after the prune so the deploy fails with a clear error instead of letting docker pull attempt a 700MB extract onto a near-full disk.

* fix: cap PR indexes at 3 + delete-before-sync for 10GB disk

The 10GB droplet has ~2GB free. Each index is ~130MB, so 7 PR indexes (~900MB) plus main+dev (~260MB) plus the ~700MB Docker image leaves almost nothing for image pulls. The deploy failed with 'no space left on device' during docker compose pull. Three changes:

1. Cap PR indexes at MAX_PR_INDEXES=3. The resolve step now sorts PR artifacts by created_at descending and keeps only the 3 most recent. Older PR indexes are logged as evicted, and their droplet folders are cleaned by the prune step.
2. Prune BEFORE sync (was after). Freeing disk space from evicted indexes before rsyncing new data is critical on a tight disk. The old order (sync then prune) could briefly hold both old evicted indexes and newly uploaded ones simultaneously.
3. Delete-before-sync for every index, including main/dev. Instead of rsync --delete (which transfers new files, then removes extras), rm -rf the target folder before rsync so the disk never holds both old and new copies of the same index (~260MB saved per index).
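The eviction in change 1 above can be sketched as a small shell helper. The tab-separated created_at/name input format and the function name are assumptions for illustration; the real resolve step reads artifact metadata from the GitHub API.

```shell
#!/usr/bin/env bash
set -euo pipefail

MAX_PR_INDEXES=3

# Reads "created_at<TAB>artifact_name" lines on stdin and prints the names
# of the PR indexes to keep: the MAX_PR_INDEXES most recent by created_at.
# ISO-8601 timestamps sort correctly as plain strings, so sort -r suffices.
keep_recent() {
  sort -r | head -n "$MAX_PR_INDEXES" | cut -f2
}
```

Anything not printed by the helper would be logged as evicted and its droplet folder removed by the prune step.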
Main/dev are only deleted when a fresh artifact is about to replace them; they are never evicted between deploys.

Budget on the 10GB disk:
- OS + Docker engine: ~4.0 GB
- Docker image (running): ~0.7 GB
- main + dev indexes: ~0.26 GB
- 3 PR indexes: ~0.39 GB
- Docker prune headroom (for image pull): ~0.7 GB
- Free: ~3.9 GB

* refine: restrict automatic PR indexing to danny-avila authored PRs

With 200+ open PRs and a 10GB disk capped at 3 served PR indexes, auto-indexing every contributor PR burns CI minutes on artifacts that will mostly be evicted before anyone queries them. Narrow the pull_request auto-trigger to PRs authored by danny-avila only. Other contributors' PRs can still be indexed on demand via /gitnexus index (a contributor-gated comment command) or a manual workflow_dispatch; both arrive as workflow_dispatch events and bypass the pull_request filter entirely.

* fix: drop --volumes from docker system prune to preserve Caddy TLS state

The deploy workflow explicitly handles a caddy-not-running state later in the same step. If Caddy is stopped when the prune runs, --volumes deletes the caddy-data and caddy-config volumes (TLS certs + ACME account keys), forcing a Let's Encrypt re-issuance on the next start. LE rate-limits to 5 certs per domain per week, so repeated wipes could brick HTTPS for days. docker system prune -af (without --volumes) still removes stopped containers, unused networks, all dangling/unreferenced images, and the build cache, which is where the disk savings come from. Named volumes are left untouched.

* fix: rsync-then-swap instead of delete-before-sync

The delete-before-sync pattern removed the live index BEFORE rsync ran. If rsync failed (SSH timeout, disk pressure, network error), the index was already gone; production served nothing for that repo until a later deploy succeeded. Replace with rsync-then-swap: upload to a .new temp directory, and only rm + mv into place after rsync succeeds. On rsync failure, the .new temp is cleaned up and the old index stays live.
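Taken together, the pre-pull cleanup described above (prune without --volumes, then a hard free-space check) might look like the following sketch. The has_free_space helper and the 2GB constant are illustrative names, not the workflow's actual code.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Succeeds only if at least $1 kilobytes are free on the filesystem holding $2.
has_free_space() {
  local required_kb=$1 path=$2 free_kb
  free_kb=$(df --output=avail -k "$path" | tail -n 1 | tr -d ' ')
  [ "$free_kb" -ge "$required_kb" ]
}

# Deploy-time sequence (sketch):
#   docker system prune -af   # no --volumes: Caddy's TLS volumes survive
#   has_free_space $((2 * 1024 * 1024)) / \
#     || { echo "deploy aborted: need 2GB free before pull" >&2; exit 1; }
#   docker compose pull
```

Failing in the guard produces a one-line, actionable error instead of a partial 700MB image extract dying mid-way.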
The cost is ~130MB of extra disk while old and new coexist, but the prune step runs first and frees evicted PR indexes, so this fits comfortably on the 10GB disk.

* fix: fail deploy on main/dev rsync failure, soft-fail PRs only

The rsync-then-swap pattern downgraded ALL failures to a warning, so the deploy continued even when LibreChat or LibreChat-dev failed to sync. The job would pull the new image, restart the container, and report success while serving stale or missing core indexes. Split by criticality: main/dev rsync failures now exit 1, aborting the deploy before the container restart. PR index failures remain soft-fail with a warning; a missing PR index is inconvenient but shouldn't take the whole server down.
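The final shape of the sync logic, combining rsync-then-swap with the criticality split, might look like this sketch. The function name, paths, and the yes/no flag are illustrative, not the workflow's real names.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Upload into dest.new and only replace the live index after rsync succeeds.
# critical=yes (main/dev) aborts the deploy on failure; critical=no (PR
# indexes) logs a warning and keeps serving the previous index.
sync_index() {
  local src=$1 dest=$2 critical=$3
  if rsync -a --delete "$src/" "$dest.new/"; then
    rm -rf "$dest"              # old and new briefly coexist (~130MB)
    mv "$dest.new" "$dest"      # cheap rename swap on the same filesystem
  else
    rm -rf "$dest.new"          # failed upload: discard, old index stays live
    if [ "$critical" = "yes" ]; then
      echo "FATAL: $dest failed to sync, aborting deploy" >&2
      exit 1
    fi
    echo "WARN: $dest failed to sync, serving previous index" >&2
  fi
}

# sync_index /tmp/indexes/LibreChat /srv/indexes/LibreChat yes
# sync_index /tmp/indexes/pr-1234   /srv/indexes/pr-1234   no
```

Because the swap is an rm + mv on the same filesystem, the window where neither old nor new index exists is milliseconds, versus the full rsync duration under delete-before-sync.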
LibreChat
English · 中文
✨ Features
- 🖥️ UI & Experience inspired by ChatGPT with enhanced design and features
- 🤖 AI Model Selection:
  - Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, Google, Vertex AI, OpenAI Responses API (incl. Azure)
  - Custom Endpoints: Use any OpenAI-compatible API with LibreChat, no proxy required
  - Compatible with Local & Remote AI Providers:
    - Ollama, groq, Cohere, Mistral AI, Apple MLX, koboldcpp, together.ai, OpenRouter, Helicone, Perplexity, ShuttleAI, Deepseek, Qwen, and more
- 🧑‍💻 Code Interpreter API:
  - Secure, Sandboxed Execution in Python, Node.js (JS/TS), Go, C/C++, Java, PHP, Rust, and Fortran
  - Seamless File Handling: Upload, process, and download files directly
  - No Privacy Concerns: Fully isolated and secure execution
- 🔦 Agents & Tools Integration:
  - LibreChat Agents:
    - No-Code Custom Assistants: Build specialized, AI-driven helpers
    - Agent Marketplace: Discover and deploy community-built agents
    - Collaborative Sharing: Share agents with specific users and groups
    - Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
    - Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, Google, Vertex AI, Responses API, and more
  - Model Context Protocol (MCP) Support for Tools
- 🔍 Web Search:
  - Search the internet and retrieve relevant information to enhance your AI context
  - Combines search providers, content scrapers, and result rerankers for optimal results
  - Customizable Jina Reranking: Configure custom Jina API URLs for reranking services
  - Learn More →
- 🪄 Generative UI with Code Artifacts:
  - Code Artifacts allow creation of React, HTML, and Mermaid diagrams directly in chat
- 🎨 Image Generation & Editing:
  - Text-to-image and image-to-image with GPT-Image-1
  - Text-to-image with DALL-E (3/2), Stable Diffusion, Flux, or any MCP server
  - Produce stunning visuals from prompts or refine existing images with a single instruction
- 💾 Presets & Context Management:
  - Create, Save, & Share Custom Presets
  - Switch between AI Endpoints and Presets mid-chat
  - Edit, Resubmit, and Continue Messages with Conversation branching
  - Create and share prompts with specific users and groups
  - Fork Messages & Conversations for Advanced Context control
- 💬 Multimodal & File Interactions:
  - Upload and analyze images with Claude 3, GPT-4.5, GPT-4o, o1, Llama-Vision, and Gemini 📸
  - Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, & Google 🗃️
- 🌎 Multilingual UI:
  - English, 中文 (简体), 中文 (繁體), العربية, Deutsch, Español, Français, Italiano
  - Polski, Português (PT), Português (BR), Русский, 日本語, Svenska, 한국어, Tiếng Việt
  - Türkçe, Nederlands, עברית, Català, Čeština, Dansk, Eesti, فارسی
  - Suomi, Magyar, Հայերեն, Bahasa Indonesia, ქართული, Latviešu, ไทย, ئۇيغۇرچە
- 🧠 Reasoning UI:
  - Dynamic Reasoning UI for Chain-of-Thought/Reasoning AI models like DeepSeek-R1
- 🎨 Customizable Interface:
  - Customizable Dropdown & Interface that adapts to both power users and newcomers
- Never lose a response: AI responses automatically reconnect and resume if your connection drops
- Multi-Tab & Multi-Device Sync: Open the same chat in multiple tabs or pick up on another device
- Production-Ready: Works from single-server setups to horizontally scaled deployments with Redis
- 🗣️ Speech & Audio:
  - Chat hands-free with Speech-to-Text and Text-to-Speech
  - Automatically send and play Audio
  - Supports OpenAI, Azure OpenAI, and ElevenLabs
- 📥 Import & Export Conversations:
  - Import Conversations from LibreChat, ChatGPT, Chatbot UI
  - Export conversations as screenshots, markdown, text, json
- 🔍 Search & Discovery:
  - Search all messages/conversations
- 👥 Multi-User & Secure Access:
  - Multi-User, Secure Authentication with OAuth2, LDAP, & Email Login Support
  - Built-in Moderation and Token Spend tools
- ⚙️ Configuration & Deployment:
  - Configure Proxy, Reverse Proxy, Docker, & many Deployment options
  - Use completely local or deploy on the cloud
- 📖 Open-Source & Community:
  - Completely Open-Source & Built in Public
  - Community-driven development, support, and feedback
For a thorough review of our features, see our docs here 📚
🪶 All-In-One AI Conversations with LibreChat
LibreChat is a self-hosted AI chat platform that unifies all major AI providers in a single, privacy-focused interface.
Beyond chat, LibreChat provides AI Agents, Model Context Protocol (MCP) support, Artifacts, Code Interpreter, custom actions, conversation search, and enterprise-ready multi-user authentication.
Open source, actively developed, and built for anyone who values control over their AI infrastructure.
🌐 Resources
GitHub Repos:
- LibreChat: github.com/danny-avila/LibreChat
- RAG API: github.com/danny-avila/rag_api
- Website: github.com/LibreChat-AI/librechat.ai
Other:
- Website: librechat.ai
- Documentation: librechat.ai/docs
- Blog: librechat.ai/blog
📝 Changelog
Keep up with the latest updates by visiting the releases page and release notes.
⚠️ Please consult the changelog for breaking changes before updating.
⭐ Star History
✨ Contributions
Contributions, suggestions, bug reports and fixes are welcome!
For new features, components, or extensions, please open an issue and discuss before sending a PR.
If you'd like to help translate LibreChat into your language, we'd love your contribution! Improving our translations not only makes LibreChat more accessible to users around the world but also enhances the overall user experience. Please check out our Translation Guide.
💖 This project exists in its current state thanks to all the people who contribute
🎉 Special Thanks
We thank Locize for their translation management tools that support multiple languages in LibreChat.