Enhanced ChatGPT Clone: Features Agents, MCP, DeepSeek, Anthropic, AWS, OpenAI, Responses API, Azure, Groq, o1, GPT-5, Mistral, OpenRouter, Vertex AI, Gemini, Artifacts, AI model switching, message search, Code Interpreter, langchain, DALL-E-3, OpenAPI Actions, Functions, Secure Multi-User Auth, Presets, open-source for self-hosting. Active. https://librechat.ai/
Danny Avila ccf3a6c670
📐 fix: Align Summarization Trigger Schema with Documented and Runtime-Supported Types (#12735)
* 🐛 fix: accept documented `summarization.trigger.type` values

The Zod schema for `summarization.trigger.type` only accepted
`'token_count'`, but:

- the documentation lists `token_ratio`, `remaining_tokens`, and
  `messages_to_refine` as valid
- the `@librechat/agents` runtime only evaluates those three types and
  silently no-ops on anything else

The result was a double failure: any user following the docs hit a
startup Zod error, and anyone who matched the schema by using
`token_count` got a silent no-op at runtime where summarization never
fired.

Align the schema with the documented, runtime-supported trigger types.

Closes #12721
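For illustration, a config taken straight from the docs would have failed the old schema at startup (YAML fragment; field layout per the documented `summarization.trigger` shape):

```yaml
summarization:
  trigger:
    type: token_ratio  # documented and runtime-supported, but rejected by the old schema
    value: 0.8
```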

* 🧹 fix: bound `token_ratio` trigger value to (0, 1]

Per Codex review: the previous schema accepted `value: z.number().positive()`
for every trigger type. That meant `trigger: { type: 'token_ratio', value: 80 }`
(presumably meant as "80%") passed validation and then silently never fired —
because `usedRatio = 1 - remaining/max` is bounded at 1, so `>= 80` is always
false. That is exactly the silent-no-op pattern this PR is trying to eliminate.

Switch to a discriminated union so each trigger type has its own value
constraint:

- `token_ratio`: `(0, 1]` — documented as a fraction, so 80 is nonsense
- `remaining_tokens`: positive — token counts can be large
- `messages_to_refine`: positive — message counts can be > 1

Added tests for the upper-bound rejection and the inclusive upper bound
(`value: 1` still accepted as a valid "fire at 100%" extreme).
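As a plain-TypeScript sketch of the per-type constraints (illustrative only — the real implementation is a Zod discriminated union, and these names are assumptions):

```typescript
// Illustrative sketch, not the actual LibreChat schema code.
type SummarizationTrigger =
  | { type: 'token_ratio'; value: number } // fraction, (0, 1]
  | { type: 'remaining_tokens'; value: number } // positive token count
  | { type: 'messages_to_refine'; value: number }; // positive message count

function isValidTriggerValue(t: SummarizationTrigger): boolean {
  switch (t.type) {
    case 'token_ratio':
      // 80 (presumably meant as "80%") is rejected up front;
      // 1 ("fire at 100%") stays valid.
      return t.value > 0 && t.value <= 1;
    case 'remaining_tokens':
    case 'messages_to_refine':
      return t.value > 0;
  }
}
```

With this shape, `{ type: 'token_ratio', value: 80 }` fails validation at startup instead of silently never firing.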

* 🧹 fix: accept `token_ratio: 0` per documented 0.0–1.0 inclusive range

Per Codex review: `.positive()` rejected `value: 0`, but the docs
describe the `token_ratio` range as `0.0–1.0` (both inclusive). Admins
who copy the documented lower bound into their YAML would fail schema
validation at startup.

Switch `token_ratio` to `.min(0).max(1)`. `0` is a valid (if extreme)
setting — the agents SDK's `usedRatio >= 0` check will fire as soon as
there is anything to refine, which is a legitimate "always summarize
when pruning happens" configuration.

`remaining_tokens` and `messages_to_refine` keep `.positive()`: both
are counts, and `0` there produces no meaningful behavior (the SDK
has an early return for `messagesToRefineCount <= 0`).

* 🐛 fix: preserve `token_ratio` trigger when `value: 0`

Per Codex review: now that the schema accepts `token_ratio: 0`,
`shapeSummarizationConfig` would silently drop it because of a truthy
check on `config?.trigger?.value`. The trigger would disappear and the
runtime would fall back to "no trigger configured" — which fires on any
pruning rather than honoring the explicit ratio.

Switch to `typeof value === 'number'`, which preserves `0` while still
rejecting `undefined`/`null`.

Added a regression test that asserts `{ type: 'token_ratio', value: 0 }`
survives the shaping function untouched.
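A minimal sketch of the guard (the real `shapeSummarizationConfig` handles the full config; this shows only the trigger handling, with assumed shapes):

```typescript
// Illustrative only: preserves an explicit value of 0 that a truthy
// check (`trigger?.value ? ... : undefined`) would silently drop.
interface RawTrigger {
  type?: string;
  value?: number | null;
}

function shapeTrigger(trigger?: RawTrigger): { type: string; value: number } | undefined {
  // typeof null === 'object', so null is still rejected here, as is undefined.
  if (trigger?.type && typeof trigger.value === 'number') {
    return { type: trigger.type, value: trigger.value };
  }
  return undefined;
}
```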

* 🧹 fix: reject non-finite trigger values at schema level

Per Codex review: `z.number().positive()` still accepts `Infinity` and
`NaN` (via YAML `.inf`, `.nan`). Config validation would succeed, but
the agents SDK guards every trigger path with `Number.isFinite(...)` and
silently returns `false` — summarization never fires while the server
starts cleanly. That is the exact schema/runtime split this PR is trying
to eliminate.

Add `.finite()` to every trigger value. `token_ratio` already had an
implicit guard via `.max(1)`, but applying `.finite()` uniformly keeps
the intent obvious and catches `NaN` (which `.max(1)` does not).
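The runtime side of that silent no-op can be sketched like this (illustrative; the actual `Number.isFinite` guards live in the agents SDK):

```typescript
// Sketch of why non-finite values start cleanly but never fire:
// the guard short-circuits to false instead of throwing.
function triggerFires(configuredValue: number, usedRatio: number): boolean {
  if (!Number.isFinite(configuredValue)) {
    return false; // Infinity/NaN (YAML `.inf`, `.nan`): no error, no summarization
  }
  return usedRatio >= configuredValue;
}
```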

* 🧹 fix: integer counts + targeted token_count migration warning

Two findings from the comprehensive review:

1. `remaining_tokens` and `messages_to_refine` are token/message counts
   and are always integers in the runtime (`Number.isFinite(...)` guards
   already assume integer semantics). `z.number().positive()` accepted
   fractional values like `2.5`, which was semantically confusing and
   would round oddly against the runtime's `>=` / `<=` comparisons. Add
   `.int()` to both count-based branches; `token_ratio` stays fractional.

2. Anyone upgrading with `trigger.type: 'token_count'` in their YAML got
   the generic "Invalid summarization config" warning plus a flattened
   Zod error. Detect that specific case in `loadSummarizationConfig` and
   emit a migration-friendly message that names the three valid
   replacements. Export the function so the behavior is unit-testable.

Also added a parameterized passthrough test covering `remaining_tokens`
and `messages_to_refine` shaping, complementing the existing
`token_ratio` coverage.
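The migration guard amounts to something like this (sketch; the real `loadSummarizationConfig` does more, and the message wording here is illustrative):

```typescript
// Illustrative sketch of the legacy-value detection.
const VALID_TRIGGER_TYPES = ['token_ratio', 'remaining_tokens', 'messages_to_refine'] as const;

function legacyTriggerWarning(rawTrigger: unknown): string | undefined {
  if (
    typeof rawTrigger === 'object' &&
    rawTrigger !== null &&
    (rawTrigger as { type?: unknown }).type === 'token_count'
  ) {
    // Name the valid replacements instead of dumping a flattened Zod error.
    return `'token_count' is not a supported trigger type; use one of: ${VALID_TRIGGER_TYPES.join(', ')}`;
  }
  return undefined;
}
```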

* 🧹 fix: accurate fallback wording + bare-string trigger test

Two nits from the follow-up audit:

1. The legacy-`token_count` warning claimed "Summarization will be
   disabled," but `shapeSummarizationConfig` treats a missing
   summarization config as self-summarize mode (fires on every pruning
   event using the agent's own provider/model). "Disabled" would
   mislead an admin into stopping investigation. Reword to describe the
   actual fallback and assert the new wording in the spec.

2. Add a regression test for the `trigger: 'bare-string'` YAML case, so
   the `typeof raw.trigger === 'object'` guard is exercised rather than
   implied.

3. Swap the en-dash in `(0–1)` for an ASCII hyphen so the log message
   is safe in every terminal/aggregator regardless of UTF-8 handling.

* 🔇 fix: cast `raw.trigger.type` to inspect legacy value past narrowed union

CI TS check failed: after the schema tightening, `raw.trigger.type` is
narrowed to `"token_ratio" | "remaining_tokens" | "messages_to_refine"
| undefined`, so the runtime comparison to `"token_count"` is a
TS2367 ("no overlap") error even though that's exactly the comparison
we want for the migration guard.

Widen just that one access via `as { type?: unknown }` so the migration
check reads runtime-shaped YAML input without the type system folding
it back into the narrowed union.
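In miniature, the narrowing problem and the fix look like this (types assumed; not the actual code):

```typescript
type TriggerType = 'token_ratio' | 'remaining_tokens' | 'messages_to_refine';
interface ParsedTrigger { type?: TriggerType; value?: number }

function isLegacyTokenCount(trigger: ParsedTrigger): boolean {
  // A direct `trigger.type === 'token_count'` comparison is TS2367:
  // 'token_count' has no overlap with TriggerType | undefined. Widen only
  // this one access so we can still inspect runtime-shaped YAML input.
  return (trigger as { type?: unknown }).type === 'token_count';
}
```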
2026-04-19 19:33:52 -07:00

LibreChat

English · 中文

Deploy on Railway Deploy on Zeabur Deploy on Sealos

Translation Progress

Features

  • 🖥️ UI & Experience inspired by ChatGPT with enhanced design and features

  • 🤖 AI Model Selection:

    • Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, Google, Vertex AI, OpenAI Responses API (incl. Azure)
    • Custom Endpoints: Use any OpenAI-compatible API with LibreChat, no proxy required
    • Compatible with Local & Remote AI Providers:
      • Ollama, Groq, Cohere, Mistral AI, Apple MLX, koboldcpp, together.ai,
      • OpenRouter, Helicone, Perplexity, ShuttleAI, DeepSeek, Qwen, and more
  • 🔧 Code Interpreter API:

    • Secure, Sandboxed Execution in Python, Node.js (JS/TS), Go, C/C++, Java, PHP, Rust, and Fortran
    • Seamless File Handling: Upload, process, and download files directly
    • No Privacy Concerns: Fully isolated and secure execution
  • 🔦 Agents & Tools Integration:

    • LibreChat Agents:
      • No-Code Custom Assistants: Build specialized, AI-driven helpers
      • Agent Marketplace: Discover and deploy community-built agents
      • Collaborative Sharing: Share agents with specific users and groups
      • Flexible & Extensible: Use MCP Servers, tools, file search, code execution, and more
      • Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, Google, Vertex AI, Responses API, and more
      • Model Context Protocol (MCP) Support for Tools
  • 🔍 Web Search:

    • Search the internet and retrieve relevant information to enhance your AI context
    • Combines search providers, content scrapers, and result rerankers for optimal results
    • Customizable Jina Reranking: Configure custom Jina API URLs for reranking services
    • Learn More →
  • 🪄 Generative UI with Code Artifacts:

    • Code Artifacts let you create React components, HTML pages, and Mermaid diagrams directly in chat
  • 🎨 Image Generation & Editing

  • 💾 Presets & Context Management:

    • Create, Save, & Share Custom Presets
    • Switch between AI Endpoints and Presets mid-chat
    • Edit, Resubmit, and Continue Messages with Conversation branching
    • Create and share prompts with specific users and groups
    • Fork Messages & Conversations for Advanced Context control
  • 💬 Multimodal & File Interactions:

    • Upload and analyze images with Claude 3, GPT-4.5, GPT-4o, o1, Llama-Vision, and Gemini 📸
    • Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, & Google 🗃️
  • 🌎 Multilingual UI:

    • English, 中文 (简体), 中文 (繁體), العربية, Deutsch, Español, Français, Italiano
    • Polski, Português (PT), Português (BR), Русский, 日本語, Svenska, 한국어, Tiếng Việt
    • Türkçe, Nederlands, עברית, Català, Čeština, Dansk, Eesti, فارسی
    • Suomi, Magyar, Հայերեն, Bahasa Indonesia, ქართული, Latviešu, ไทย, ئۇيغۇرچە
  • 🧠 Reasoning UI:

    • Dynamic Reasoning UI for Chain-of-Thought/Reasoning AI models like DeepSeek-R1
  • 🎨 Customizable Interface:

    • Customizable Dropdown & Interface that adapts to both power users and newcomers
  • 🌊 Resumable Streams:

    • Never lose a response: AI responses automatically reconnect and resume if your connection drops
    • Multi-Tab & Multi-Device Sync: Open the same chat in multiple tabs or pick up on another device
    • Production-Ready: Works from single-server setups to horizontally scaled deployments with Redis
  • 🗣️ Speech & Audio:

    • Chat hands-free with Speech-to-Text and Text-to-Speech
    • Automatically send and play Audio
    • Supports OpenAI, Azure OpenAI, and ElevenLabs
  • 📥 Import & Export Conversations:

    • Import Conversations from LibreChat, ChatGPT, Chatbot UI
    • Export conversations as screenshots, markdown, text, or JSON
  • 🔍 Search & Discovery:

    • Search all messages/conversations
  • 👥 Multi-User & Secure Access:

    • Multi-User, Secure Authentication with OAuth2, LDAP, & Email Login Support
    • Built-in Moderation and Token Spend tools
  • ⚙️ Configuration & Deployment:

    • Configure Proxy, Reverse Proxy, Docker, & many Deployment options
    • Use completely local or deploy on the cloud
  • 📖 Open-Source & Community:

    • Completely Open-Source & Built in Public
    • Community-driven development, support, and feedback

For a thorough review of our features, see our docs here 📚

🪶 All-In-One AI Conversations with LibreChat

LibreChat is a self-hosted AI chat platform that unifies all major AI providers in a single, privacy-focused interface.

Beyond chat, LibreChat provides AI Agents, Model Context Protocol (MCP) support, Artifacts, Code Interpreter, custom actions, conversation search, and enterprise-ready multi-user authentication.

Open source, actively developed, and built for anyone who values control over their AI infrastructure.


🌐 Resources

GitHub Repo:

Other:


📝 Changelog

Keep up with the latest updates by visiting the releases page and notes:

⚠️ Please consult the changelog for breaking changes before updating.


Star History


danny-avila/LibreChat | Trendshift · ROSS Index - Fastest Growing Open-Source Startups in Q1 2024 | Runa Capital


Contributions

Contributions, suggestions, bug reports and fixes are welcome!

For new features, components, or extensions, please open an issue and discuss before sending a PR.

If you'd like to help translate LibreChat into your language, we'd love your contribution! Improving our translations not only makes LibreChat more accessible to users around the world but also enhances the overall user experience. Please check out our Translation Guide.


💖 This project exists in its current state thanks to all the people who contribute


🎉 Special Thanks

We thank Locize for their translation management tools that support multiple languages in LibreChat.
