Commit graph

412 commits

Danny Avila
68d80f3324
v0.8.6-rc1 (#13094) 2026-05-12 21:40:23 -04:00
Danny Avila
8eb9de011f
📦 chore: bump @librechat/agents to v3.1.86, npm audit, build fix (#13105)
* 📦 chore: Bump `@librechat/agents` to v3.1.86 in package-lock.json and package.json files

* 📦 chore: Update dependencies in package-lock.json to latest versions, including @protobufjs/codegen, @protobufjs/inquire, @protobufjs/utf8, and protobufjs

* 📦 chore: Add `librechat-data-provider` dependency in package.json and package-lock.json, and update build dependencies in turbo.json
2026-05-12 16:19:55 -04:00
Danny Avila
3e7262cfe0
📦 chore: Bump @librechat/agents to v3.1.85 and mermaid to v11.15.0 (#13079)
* 📦 chore: Update @librechat/agents to version 3.1.85 in package-lock.json and package.json files

* 📦 chore: Update mermaid to version 11.15.0 in package.json and package-lock.json
2026-05-11 19:14:18 -04:00
Danny Avila
c3ec23f9b8
🌐 feat: Support Vertex AI Multi-Region Endpoints (#13044)
* feat: support Vertex AI multi-region endpoints

* fix: sync Vertex endpoint with final location
2026-05-10 13:41:58 -04:00
Danny Avila
c67e2b54dc
🔐 feat: Mint Code API Auth Tokens (#13028)
* feat: Mint CodeAPI auth tokens

* style: Format CodeAPI download route

* fix: Prune CodeAPI token cache

* fix: Propagate CodeAPI managed auth

* test: Mock CodeAPI auth in traversal suite

* fix: Pass auth context to invoked skill cache

* feat: Mint CodeAPI plan context

* chore: Refresh CodeAPI auth guidance

* fix: Guard OpenID JWT fallback

* fix: Default CodeAPI JWT tenant in single-tenant mode

* chore: Update @librechat/agents to version 3.1.84 in package-lock.json and package.json files

* chore: Standardize references to Code API in comments and tests
2026-05-09 16:09:10 -04:00
Danny Avila
8a654dc8b1
🧭 feat: Add OpenRouter Prompt Cache Setting (#13029)
* feat: add OpenRouter prompt cache setting

* fix: type OpenRouter schema lookup

* fix: honor proxied OpenRouter prompt cache

* refactor: flatten endpoint schema fallback

* chore: Bump `@librechat/agents` to version 3.1.82

* fix: Default OpenRouter prompt cache params

* test: Align OpenRouter config expectations

* test: Update OpenRouter default cache expectation

* fix: Align OpenRouter Detection

* chore: Bump `@librechat/agents` to version 3.1.83

* docs: Remove OpenRouter prompt cache setup note

* refactor: Use provider enum for OpenRouter defaults

* style: Format OpenRouter defaults guard
2026-05-09 11:46:09 -04:00
Danny Avila
a107520109
📦 chore: Bump @librechat/agents to v3.1.81 & npm audit fix (#13027)
* 📦 chore: Bump `@librechat/agents` to v3.1.81

* chore: npm audit fix
2026-05-08 16:20:03 -04:00
Danny Avila
119ac9c944
📦 chore: bump @librechat/agents to v3.1.80 (#13021) 2026-05-08 12:29:44 -04:00
Danny Avila
93c4ef4ba8
🧱 refactor: typed CodeEnvRef + kind discriminator + principal-aware sandbox cache (#12960)
* 🧱 refactor: typed CodeEnvRef + kind discriminator + tenant-aware sandbox cache

Final cutover for the LibreChat ↔ codeapi sandbox file identity. Replaces
the magic string `${session_id}/${file_id}?entity_id=...` with a typed,
discriminated `CodeEnvRef`. Pre-release lockstep deploy with codeapi
#1455 and agents #148; no legacy aliases retained.

## Final shape

```ts
type CodeEnvRef =
  | { kind: 'skill'; id: string; storage_session_id: string; file_id: string; version: number }
  | { kind: 'agent'; id: string; storage_session_id: string; file_id: string }
  | { kind: 'user';  id: string; storage_session_id: string; file_id: string };
```

`kind` drives codeapi's sessionKey: `<tenant>:<kind>:<id>[✌️<version>]`
for shared kinds, `<tenant>:user:<userId>` for user-private (auth context
provides `userId`). `version` is statically required for `kind: 'skill'`
and forbidden otherwise via discriminated union — constraint holds at
compile time on every consumer, not just codeapi's runtime validator.

`id` is sessionKey-meaningful for `'skill'` / `'agent'`; informational
only for `'user'` (codeapi resolves user identity from auth context).

## What changed

- `packages/data-provider/src/codeEnvRef.ts` — discriminated union +
  `CODE_ENV_KINDS` const-tuple keeps the runtime list and TS union
  locked together.
- Schemas: `metadata.codeEnvRef` and `SkillFile.codeEnvRef` enums
  tightened to `['skill', 'agent', 'user']`.
- `primeSkillFiles` writes `kind: 'skill'`, `id: skill._id`,
  `version: skill.version`. Cache-hit path reads `codeEnvRef`
  directly. Bumping `skill.version` on edit naturally invalidates
  the prior cache entry under the new sessionKey.
- `processCodeOutput` writes `kind: 'user'`, `id: req.user.id`. Output
  bucket is always user-scoped, regardless of which skill the
  execution invoked. New regression test pins the asymmetry.
- `primeFiles` reupload preserves `kind`/`id`/`version?` from the
  existing ref so a skill-cache-miss reupload doesn't silently demote
  to user bucket.
- `crud.js` upload functions (`uploadCodeEnvFile` /
  `batchUploadCodeEnvFiles`) thread `kind`/`id`/`version?` to the
  multipart form (codeapi #1455 option α). Without these on the wire,
  codeapi falls back to user bucketing and skill-cache invalidation
  never fires. Client-side validation mirrors codeapi's validator.
- `Files/process.js` — chat attachments use `kind: 'user'`; agent
  setup files use `kind: 'agent'`.
- Drops `entity_id` everywhere (struct, schema sub-docs, write paths,
  upload form fields). Drops `'system'` from the kind enum (no emitter
  ever existed).
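
The const-tuple trick mentioned for `codeEnvRef.ts` can be sketched like this (the `isCodeEnvKind` guard is an assumed illustration, not necessarily what the module exports):

```ts
// One source of truth for both the runtime list and the TS union.
const CODE_ENV_KINDS = ['skill', 'agent', 'user'] as const;
type CodeEnvKind = (typeof CODE_ENV_KINDS)[number]; // 'skill' | 'agent' | 'user'

// Runtime guard derived from the same tuple; adding a kind to the array
// automatically widens both the union and this check.
function isCodeEnvKind(value: string): value is CodeEnvKind {
  return (CODE_ENV_KINDS as readonly string[]).includes(value);
}
```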

## Test plan

- [x] `cd packages/data-provider && npx jest src/codeEnvRef.spec` — 4 / 4
- [x] `cd packages/data-schemas && npx jest` — 1447 / 1447
- [x] `cd packages/api && npx jest src/agents` — 81 / 81 in skillFiles +
  handlers + resources
- [x] `cd api && npx jest server/services/Files server/controllers/agents` —
  436 / 436
- [x] `cd api && npx jest server/services/Files/Code` — 98 / 98 (incl.
  new "outputs are user-scoped regardless of which skill the execution
  invoked" regression and "reupload forwards kind/id/version from
  existing ref")
- [x] `npx tsc --noEmit -p packages/data-{provider,schemas}/tsconfig.json
  && npx tsc --noEmit -p packages/api/tsconfig.json` — clean (only
  pre-existing unrelated dev errors in storage/balance, untouched here)

## Deploy notes

- **24h cache-miss burst** on first deploy. Inputs (skill caches re-prime
  under new sessionKey shape) and outputs (any pre-Phase C skill-output
  cached files become unreadable). Bounded by codeapi's 24h TTL.
- **Lockstep with codeapi #1455 and agents #148.** Either repo can land
  first since no aliases to drain, but the three deploys must overlap
  within the same maintenance window.
- **`@librechat/agents` bump to `3.1.79-dev.0`** required after agents
  #148 lands and is published.

## What this enables

Auth bridge work (JWT-based tenant/user identity between LC and codeapi)
— codeapi now derives sessionKey purely from `req.codeApiAuthContext.{
tenantId, userId}`, so the next chapter is replacing the header-asserted
user identity with a verified-claim path.

* 🩹 fix: persist execute_code uploads under codeEnvRef metadata key

Codex review P1 (chatgpt-codex-connector). `Files/process.js` was
storing the upload result under `metadata.fileIdentifier` even though:
- `uploadCodeEnvFile` now returns `{ storage_session_id, file_id }`,
  not the legacy magic string.
- The post-cutover schema (`File.metadata.codeEnvRef`) only declares
  `codeEnvRef` — mongoose strict mode silently strips unknown keys.
- All readers (`primeFiles`, `getCodeFilesByIds`,
  `categorizeFileForToolResources`, controller filtering) check
  `metadata.codeEnvRef`.

Net effect of the bug: chat-attached and agent-setup execute_code files
would lose their sandbox reference on save, and primeFiles would skip
them on subsequent code-execution turns — the file blob would still be
available locally but never re-mounted in the sandbox.

Fix: construct the full `CodeEnvRef` (`{ kind, id, storage_session_id,
file_id }`) at the write site and persist under `metadata.codeEnvRef`.
`BaseClient`'s "is this a code-env file" presence check accepts the new
shape alongside the legacy `fileIdentifier` for back-compat with any
pre-cutover records still in the database. Mirrors the same change in
`processAttachments.spec.ts` (which re-implements the BaseClient logic
for testability).

New regression tests in `process.spec.js` cover three cases:
- chat attachments (`messageAttachment=true`) → `kind: 'user'`
- agent setup (`messageAttachment=false`) → `kind: 'agent'`
- legacy `fileIdentifier` key is NOT persisted (would be schema-stripped)

* 🩹 fix: read storage_session_id on primed file refs (Codex P1)

Codex review (chatgpt-codex-connector). After Phase B's per-file
`session_id` → `storage_session_id` rename, `primeFiles` emits the
new field — but `seedCodeFilesIntoSessions` was still reading
`files[0].session_id` for the representative session and `f.session_id`
for the dedupe key. In runs with only primed attachments (no skill
seed), `representativeSessionId` was `undefined`, the function
returned the unchanged map, and `seedCodeFilesIntoSessions` silently
dropped the entire batch. The first `execute_code` call then started
without `_injected_files` and the agent couldn't see prior-turn
artifacts.

Fix:
- `codeFilesSession.ts`: read `f.storage_session_id` for both the
  dedupe key and the representative session id. JSDoc updated to
  match the new field name.
- `callbacks.js`: the two output-file persistence paths read
  `file.session_id` to pass to `processCodeOutput` — switch to
  `file.storage_session_id`. The original comment explicitly says
  this should be the STORAGE session, which is exactly the field
  Phase B renamed.
- `codeFilesSession.spec.ts`: fixture builder uses `storage_session_id`
  and `kind: 'user'` to match the post-cutover `CodeEnvFile` shape.

Lockstep coordination: this matches the post-bump shape of
`@librechat/agents` 3.1.79+. CI tsc errors against the currently-pinned
3.1.78 are expected and resolve when the dep bumps in this PR before
merge.

* 📦 chore: Bump `@librechat/agents` to version 3.1.80-dev.0 in package-lock and package.json files

* 🪪 fix: thread kind/id/version through codeapi /download URLs (Phase C α)

Symmetric fix for the upload-side wire change in 537725a. Codeapi's
`sessionAuth` middleware now requires `kind`/`id`/`version?` on every
download/freshness URL — without them it 400s with "kind must be one
of: skill, agent, user" before serving the file.

Three sites construct codeapi-side URLs that go through `sessionAuth`:

- `processCodeOutput` (`Files/Code/process.js`): `/download/<sess>/<id>`
  for freshly-generated sandbox outputs. Always `kind: 'user'` +
  `id: req.user.id` — code-output files are always user-private,
  regardless of which skill the run invoked.
- `getSessionInfo` (`Files/Code/process.js`): `/sessions/<sess>/objects/<id>`
  for the 23h freshness check. Pulls kind/id/version straight off the
  `codeEnvRef` already in scope — skill files stay skill-bucketed,
  user files stay user-bucketed.
- `/code/download/:session_id/:fileId` LC route (`routes/files/files.js`):
  proxies to codeapi for manual downloads. Code-output files only on
  this route, so `kind: 'user'` + `id: req.user.id`.

The `getCodeOutputDownloadStream` helper in `crud.js` now takes an
`identity` param, validated by a `buildCodeEnvDownloadQuery` helper
that mirrors `appendCodeEnvFileIdentity`'s shape rules: kind required
from the closed `{skill, agent, user}` set, version required for
'skill' and forbidden otherwise. Bad callers fail fast on the client
instead of round-tripping a 400.

Also cleans up two log-noise sources reported alongside the 400:

- `logAxiosError` in `packages/api/src/utils/axios.ts` was dumping
  `error.response.data` raw. With `responseType: 'arraybuffer'` that's
  a `Buffer` (~4 chars per byte after JSON-serialization); with
  `responseType: 'stream'` it's a `Readable` whose internal state
  serializes the entire ring buffer + socket. New `renderResponseData`
  decodes small buffers as UTF-8 (truncated past 2KB) and stubs streams
  as `'[stream]'`. Diagnostics stay useful, log lines stop being
  megabytes.
- `/code/download` route's catch was bare `logger.error('...', error)`,
  bypassing the redactor. Switched to `logAxiosError` so it benefits
  from the same buffer/stream handling.
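
The `renderResponseData` idea might look roughly like this (name from the commit; the body here is an assumed sketch, not the shipped implementation):

```ts
import { Readable } from 'node:stream';

// Sketch: decode small buffers as UTF-8, truncate past 2KB, and stub
// streams so log lines stop being megabytes.
const MAX_PREVIEW = 2048;

function renderResponseData(data: unknown): string {
  if (data instanceof Readable) {
    return '[stream]'; // never serialize a live stream's internal state
  }
  if (Buffer.isBuffer(data)) {
    const text = data.subarray(0, MAX_PREVIEW).toString('utf8');
    return data.length > MAX_PREVIEW ? `${text}… [truncated]` : text;
  }
  return JSON.stringify(data);
}
```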

Tests updated to match the new contract:
- crud.spec: `getCodeOutputDownloadStream` fixtures pass `userIdentity`;
  new cases cover skill identity (with version), bad kind rejection,
  skill-without-version rejection.
- process.spec: `getSessionInfo` test passes a full `codeEnvRef` object.

* ♻️ refactor: extract codeEnv identity helpers into packages/api

Per the project convention that new backend code lives in TypeScript
under `packages/api`, moves `appendCodeEnvFileIdentity` and
`buildCodeEnvDownloadQuery` from `api/server/services/Files/Code/crud.js`
into a new `packages/api/src/files/code/identity.ts` module.

Both helpers are pure validators that mirror codeapi's
`parseUploadSessionKeyInput` server-side rules (closed kind set,
`version` required for `'skill'` and forbidden otherwise) — they
deserve TS support and a dedicated spec rather than living as
JSDoc-typed helpers in the legacy `/api` workspace. The new module:

- Exports a `CodeEnvIdentity` interface using the
  `librechat-data-provider` `CodeEnvKind` discriminated union.
- Adds 13 unit tests in `identity.spec.ts` covering the validation
  matrix (skill+version, agent, user, and every rejection path) plus
  URL encoding for the download query.
- Re-exported from `packages/api/src/files/code/index.ts` alongside
  `classify`, `extract`, and `form`.

Consumer updates:
- `api/server/services/Files/Code/crud.js`: drops the local helpers
  and imports them from `@librechat/api`. Net -64 lines.
- `api/server/services/Files/Code/process.js`: same.
- Test mocks for `@librechat/api` in three spec files now stub the
  helpers' validation behavior locally rather than pulling them
  through `requireActual` (which would drag in provider-config
  init-time side effects). The package's `exports` field only
  surfaces the root barrel, so leaf imports aren't reachable from
  legacy `/api` test setup.

No runtime behavior change. Identity validation rules and emitted
form/query shapes are byte-for-byte identical pre/post.

* 🪪 fix: emit resource_id alongside id on _injected_files (skill 403 fix)

Companion to codeapi #1455 fix and agents 3.1.80-dev.1 — the wire
shape for shared-kind files now requires `resource_id` distinct from
the storage `id`. Without this LC change, codeapi's sessionKey
re-derivation on every shared-kind /exec rejects with 403
session_key_mismatch:

    cached:  legacy:skill:69dcf561...✌️59  (signed at upload, skill _id)
    derived: legacy:skill:ysPwEURuPk-...✌️59  (storage nanoid)

Emit sites updated:

- `primeInvokedSkills` cache-hit path: `resource_id: ref.id` (the
  persisted skill `_id` from `codeEnvRef.id`); `id: ref.file_id`
  unchanged (storage uuid).
- `primeInvokedSkills` fresh-upload path: `resource_id: skill._id.toString()`
  on every primed file (the `allPrimedFiles` builder type now carries
  the field).
- `processCodeOutput`'s `pushFile` (Code/process.js): `resource_id: ref.id`
  — for `kind: 'user'` this is informational (codeapi derives
  sessionKey from auth context) but emitted for shape uniformity
  with shared kinds.

Bumps `@librechat/agents` to `^3.1.80-dev.1` (the version that
ships the matching `CodeEnvFile.resource_id` field).

## Test plan

- [x] `cd packages/api && npx jest src/agents` — 67 / 67 pass
  (skillFiles fixtures updated to assert `resource_id` on the
  emitted CodeSessionContext.files).
- [x] `cd api && npx jest server/services/Files server/controllers/agents` —
  445 / 445 pass (process.spec fixtures updated for the reupload
  + cache-hit emission).
- [x] `npx tsc --noEmit -p packages/api/tsconfig.json` — clean.

* fix(skill-tool-call): carry resource_id through primeSkillFiles → artifact

Codeapi was 400ing every /exec following a `handle_skill` tool call
with `resource_id is invalid` (`type: 'undefined'`). Both code paths
in `primeSkillFiles` (cache-hit + fresh-upload) returned files
without `resource_id`/`kind`/`version`, and the artifact in
`handlers.ts` forwarded the stripped shape into
`tc.codeSessionContext.files` → `_injected_files`.

`primeInvokedSkills` (the NL-detected loader) had already been fixed
end-to-end; this commit aligns the tool-invoked path with the same
contract: `resource_id` = `skill._id.toString()`, `kind: 'skill'`,
`version` = the skill's monotonic counter.

Tests added to `skillFiles.spec.ts` lock the contract on
`primeSkillFiles` directly so future refactors can't silently drop
the resource identity again.

* fix(handlers.spec): align session_id → storage_session_id rename + kind discriminator

Pre-existing TS errors against the post-rename `CodeEnvFile` shape:
the test file still used `session_id` on per-file objects (renamed to
`storage_session_id` in agents Phase B/C) and was missing the `kind`
discriminator the discriminated union requires. Both inputs and the
matching `expect.toEqual(...)` mirrors updated together so the
runtime equality check still holds.

Lines 723-732 stay as-is — they sit behind `as unknown as
ToolCallRequest` and TS already skipped them.

* chore: fix `@librechat/agents`, correct version to 3.1.80-dev.0 in package.json files

* chore: bump `@librechat/agents` to version 3.1.80-dev.1 in package.json and package-lock.json

* chore: bump `@librechat/agents` to version 3.1.80-dev.2

* feat(observability): trace file priming chain from primeCodeFiles to _injected_files

Diagnosing the user-upload "files=[] on first /exec" bug requires
seeing where in the LC chain a file ref disappears. Prior to this
patch the chain (primeCodeFiles → primedCodeFiles → initialSessions
→ CodeSessionContext → _injected_files) was opaque end-to-end:
  - primeCodeFiles silently dropped files without `metadata.codeEnvRef`
  - reuploadFile catches all errors and continues with no signal
  - the handlers.ts handoff to codeapi never logged what it was sending

After this patch, a single grep on `[primeCodeFiles]` plus
`[code-env:inject]` shows the full per-file path:

  [primeCodeFiles] in: file_ids=N resourceFiles=M
  [primeCodeFiles] file=<id> path=skip reason=no-codeenvref filename=...
  [primeCodeFiles] file=<id> path=cache-hit-by-session storage_session_id=...
  [primeCodeFiles] file=<id> path=reupload reason=no-uploadtime ...
  [primeCodeFiles] file=<id> path=reupload reason=stale ...
  [primeCodeFiles] file=<id> path=reupload-success oldSession=... newSession=... newFileId=...
  [primeCodeFiles] file=<id> path=reupload-failed session=...
  [primeCodeFiles] file=<id> path=fresh-active storage_session_id=...
  [primeCodeFiles] out: returned=N skippedNoRef=M reuploadFailures=K

  [code-env:inject] tool=<name> files=N missingResourceId=K     (debug)
  [code-env:inject] M/N files missing resource_id ...           (warn)
  [code-env:inject] tool=<name> _injected_files=0 ...           (warn)

The boundary log warns when LC sends zero injected files on a
code-execution tool call — that's the user's actual symptom showing
up at the LC side instead of having to correlate against codeapi's
`Request received { files: [] }`.

Tag chosen as `[code-env:inject]` rather than `[handoff:exec]` to
avoid collision with the app-level "handoff" semantic (subagent
handoff workflow).

Structural cleanup in primeFiles: replaced the `if (ref) { ... }`
nesting with an early `if (!ref) continue` so the per-path
instrumentation hooks land at top-level scope instead of indented
inside a conditional. Behavior unchanged; pushFile / reuploadFile
identical.

Spec fixtures (handlers.spec.ts, codeFilesSession.spec.ts) updated
to include `resource_id` on `CodeEnvFile` literals — required by
the post-3.1.80-dev.2 type now installed.

## Test plan

- [x] `cd packages/api && npx jest src/agents/handlers.spec.ts src/agents/codeFilesSession.spec.ts src/agents/skillFiles.spec.ts` — 69/69 pass
- [x] `cd api && npx jest server/services/Files/Code/process.spec.js` — 84/84 pass
- [x] `npx tsc --noEmit -p packages/api` — clean
- [x] `npx eslint` on all four touched files — clean

* chore: add CONSOLE_JSON_STRING_LENGTH to .env.example for JSON log string length configuration

* fix(files): align codeapi upload filename with LC's sanitized DB filename

User-attached files for code execution were uploading to codeapi
under `file.originalname` (raw upload filename, may contain spaces /
special chars) while LC's DB record stored the sanitized form
(`sanitizeFilename(file.originalname)`, underscores). Codeapi
preserves whatever filename the upload sent, so the sandbox saw
`/mnt/data/<originalname>` while LC's `primeFiles` toolContext text
+ `_injected_files.name` referenced `file.filename` (sanitized).

Visible failure: agent gets system prompt saying

    /mnt/data/librechat_code_api_-_active_customer_-_2025-11-05.xlsx

…tries that path, hits `FileNotFoundError`, then notices the
sandbox's actual `Available files` line says

    /mnt/data/librechat code api - active customer - 2025-11-05.xlsx

…retries with spaces, succeeds. Wastes a tool call per upload and
leaks raw filenames into model context.

Fix: sanitize once and use the sanitized form in both the codeapi
upload AND the LC DB record. Sandbox path = LC toolContext text =
in-memory ref name. No drift.

Reupload path (`Code/process.js` line 867 `filename: file.filename`)
already uses the sanitized DB name, so it stays consistent with the
fresh-upload path after this change.

## Test plan

- [x] `cd api && npx jest server/services/Files/process` — 32/32 pass
- [x] `npx eslint` on the touched file — clean

* chore: bump `@librechat/agents` to version 3.1.80-dev.3 in package.json and package-lock.json
2026-05-08 12:29:43 -04:00
Danny Avila
b39bf837a7
📦 chore: Update @librechat/agents to v3.1.79 (#13000) 2026-05-07 16:27:17 -04:00
Danny Avila
40a05bbf83
📦 chore: npm audit fixes and Mongoose 8.23 TypeScript follow-ups (#12996)
* chore: Update axios dependency to version 1.16.0 across multiple package files

* chore: Update express-rate-limit and ip-address dependencies to versions 8.5.1 and 10.2.0 in package-lock.json and package.json

* chore: Update mongoose and hono dependencies to versions 8.23.1 and 4.12.18 across multiple package files

* fix: Add type parameters to mongoose lean queries in accessRole and aclEntry methods

* fix: Add type parameters to mongoose lean queries in action, agent, and agentCategory methods

* chore: Update moduleResolution to 'bundler' in tsconfig.json for api and data-schemas packages

* fix: Update mongoose lean queries to include type parameters across various methods for improved type safety
2026-05-07 09:47:40 -04:00
Danny Avila
f839a447e1
🧬 fix: Subagent MCP requestBody Propagation (bump @librechat/agents to 3.1.78 + cleanup) (#12959)
* 📦 chore: bump `@librechat/agents` to v3.1.78

v3.1.78 ships [danny-avila/agents#147](https://github.com/danny-avila/agents/pull/147),
which makes `SubagentExecutor` inherit the parent invocation's
`configurable` (with `thread_id`/`run_id`/`parent_run_id` scrubbed)
into the child workflow. Subagent tool dispatches through the parent's
`ON_TOOL_EXECUTE` handler now arrive with parent's `requestBody`,
`user`, `userMCPAuthMap`, etc. — so `{{LIBRECHAT_BODY_*}}` placeholder
substitution and per-user MCP connection lookup work for subagent
tool calls the same way they do for the parent agent.

Note: `package-lock.json` will need an `npm install` refresh once
v3.1.78 lands on the registry. The user/user_id injection added in
PR #12950 stays as defense-in-depth.

* 🗑️ refactor: drop redundant user/user_id injection from `loadToolsForExecution`

`@librechat/agents@3.1.78` (via danny-avila/agents#147) makes
`SubagentExecutor` forward the parent's `configurable` verbatim into
the child workflow. Subagent `ON_TOOL_EXECUTE` dispatches now arrive
with parent's `user` / `user_id` already in `data.configurable` —
making the host-side injection added in #12950 a no-op.

Removes:
- The conditional `user: createSafeUser(req.user); user_id: req.user.id`
  block in `loadToolsForExecution` (req.user.id-guarded so the
  `'api-user'` fallback in Responses/OpenAI controllers is preserved).
- The unused `createSafeUser` import.
- The 4 unit tests covering the now-deleted behavior.

The merge in `handlers.ts` (`{ ...configurable, ...toolConfigurable }`)
still produces a `mergedConfigurable` with the right user identity for
both parent and subagent paths — the values just come from
`configurable` (forwarded by the SDK) rather than `toolConfigurable`.

Other fixes from #12950 stay (IUser.id narrowing, the env.ts /
google/initialize.ts / remoteAgentAuth.ts TS-warning fixes) — they
were independent of the subagent identity propagation issue.

* 📦 chore: update `@librechat/agents` to v3.1.78

Moves from the development version `3.1.78-dev.0` to the stable release `3.1.78`. The package-lock.json has been refreshed to match, including updated integrity checks and resolved URLs for the package.
2026-05-05 22:07:26 -04:00
Danny Avila
9efd61d57d
🔐 fix: Forward per-file entity_id through code-env priming (#12958)
* 🔐 fix: Forward per-file `entity_id` through code-env priming

Skill files and persisted code-env files now carry their `entity_id` on
the in-memory file refs that seed `Graph.sessions`. Without this, an
execute call that mixes a skill file (uploaded with `entity_id=skillId`)
and a user attachment (uploaded with no `entity_id`) collapses onto a
single request-level entity at the codeapi authorization step and one
side 403s. With per-file `entity_id`, codeapi resolves sessionKey per
file and both authorize.

- `primeSkillFiles` / `primeInvokedSkills`: thread `entity_id` through
  fresh-upload, cache-hit, and per-skill-batch paths in
  `packages/api/src/agents/skillFiles.ts`.
- `primeFiles` (Code/process.js): parse `entity_id` from the persisted
  `codeEnvIdentifier` query string once per iteration; forward through
  `pushFile`, including the reupload path which re-parses the fresh
  identifier returned by codeapi.
- Tests: extend `skillFiles.spec.ts` with two cases — fresh-upload
  propagation and cached-hot-path parsing.

Companion PRs in flight on `@librechat/agents` (forward `entity_id`
through `_injected_files`) and codeapi (per-file authorization). All
three are wire-back-compat: an absent `entity_id` falls back to the
existing request-level resolution.

* 🔧 chore: Update dependencies in package-lock.json and package.json

- Bump `@librechat/agents` to version `3.1.78-dev.0` across multiple package files.
- Upgrade `@langchain/langgraph-checkpoint` to version `1.0.2` and update its peer dependency for `@langchain/core` to `^1.1.44`.
- Update `axios` to version `1.16.0` and `follow-redirects` to version `1.16.0`.
- Add `@types/diff` as a new dependency at version `7.0.2` and include `diff` at version `9.0.0` in the `@librechat/agents` module.
- Introduce optional peer dependency `@anthropic-ai/sandbox-runtime` for `@librechat/agents` with metadata indicating it is optional.

* 🐛 fix: Make skill code-env cache persistence observable

Two changes to surface the skill-bundle re-upload issue without
behavioral changes to tenant scoping (root cause to be confirmed via
the new warn log):

1. `primeSkillFiles` now awaits `updateSkillFileCodeEnvIds` instead of
   firing-and-forgetting it. The prior shape could race with the next
   prime (read-before-write) even when the bulkWrite itself succeeds,
   producing a silent cache miss. Latency cost: ~10–50ms on first
   prime; in exchange every subsequent prime can rely on the
   identifier being persisted by the time it reads.

2. `updateSkillFileCodeEnvIds` now returns `{matchedCount, modifiedCount}`
   from the underlying bulkWrite. `primeSkillFiles` warn-logs when
   `modifiedCount < updates.length`, making any silent drop visible —
   whether the cause is tenant filtering, a `relativePath` mismatch,
   schema-plugin scoping, or something else. Prior shape returned
   `Promise<void>` so any zero-modification result was invisible.

Tests:
- `skill.spec.ts`: real-MongoDB happy path (counts match), no-match
  case (modifiedCount=0), and empty-input contract.
- `skillFiles.spec.ts`: deferred-promise harness proving the call
  site awaits the persist (prime stays pending until the persist
  resolves) and forwards partial-write counts.

Deliberately narrower than the original draft of this commit, which
also bypassed `tenantSafeBulkWrite` for the codeEnvIdentifier write
on the speculative diagnosis that tenant filtering was the cause.
That change was a behavior shift on tenant scoping without
confirmation; reverted pending real-world signal from the new warn
log.

* 🐛 fix: Justify await for skill code-env persistence under concurrency

The await on `updateSkillFileCodeEnvIds` isn't a defensive nicety —
it's load-bearing for cache effectiveness under concurrent priming.

Verified with an out-of-tree harness (`config/test-skill-cache.ts`,
not committed) that wires `primeSkillFiles` against a real codeapi
stack:

- With fire-and-forget (prior shape after this branch's revert):
  back-to-back primes for the same skill miss the cache. Call N+1
  reads SkillFile docs before Call N's write commits, sees no
  `codeEnvIdentifier`, re-uploads, and fires its own fire-and-forget
  write that Call N+2 races in turn. Steady-state stays in cache miss
  for the full burst.

- With await: the prime that does the upload commits its persist
  before resolving, so the next concurrent prime observes the cache
  pointer instead of racing the read. Latency cost ~10–50ms on the
  upload prime; subsequent concurrent primes save an entire batch
  upload.

In production with primes seconds apart this race is rare; at scale
with many users hitting the same skill in the same second it's the
difference between M and N×M uploads.
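
The race can be modeled with a toy harness (all names hypothetical, not the real code): a prime that resolves before its persist commits lets the next back-to-back prime miss the cache, while an awaited persist lets it hit:

```ts
// Toy model of the await-vs-fire-and-forget race described above.
type Store = { codeEnvId?: string };

// simulated bulkWrite: commits the cache pointer after a short delay
const write = (store: Store, id: string) =>
  new Promise<void>((resolve) => setTimeout(() => { store.codeEnvId = id; resolve(); }, 10));

async function prime(store: Store, counters: { uploads: number }, awaited: boolean) {
  if (store.codeEnvId) return store.codeEnvId; // cache hit
  counters.uploads += 1;                       // cache miss → batch upload
  const id = `env-${counters.uploads}`;
  const persist = write(store, id);
  if (awaited) await persist;                  // load-bearing await
  return id;
}

async function demo(awaited: boolean): Promise<number> {
  const store: Store = {};
  const counters = { uploads: 0 };
  await prime(store, counters, awaited);       // back-to-back primes
  await prime(store, counters, awaited);
  return counters.uploads;
}
```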

Updates the regression test to assert the await contract (deferred
persist promise → prime stays pending until persist resolves).
Comment in `skillFiles.ts` rewritten to document the concurrency
rationale rather than the weaker "race-with-next-prime" framing the
prior commit used.
2026-05-05 18:35:09 -04:00
Atef Bellaaj
187ab787da
🌩️ feat: CloudFront CDN File Strategy (#12193)
* 🌩️ feat: CloudFront CDN File Strategy + signed cookies

Squashed from PR #12193:
- feat(storage): add CloudFront CDN file strategy
- feat(auth): add CloudFront signed cookie support

Note: package.json/package-lock.json dependency additions are intentionally
omitted from this commit and will be re-added via `npm install` after rebase
to avoid lock-file merge conflicts. The two new peer deps that need to be
re-installed are:
  - @aws-sdk/client-cloudfront@^3.1032.0
  - @aws-sdk/cloudfront-signer@^3.1012.0

Also fixes 4 missing destructured names in AuthService.spec.js
(getUserById, generateToken, generateRefreshToken, createSession) that
were referenced in tests but not imported from the mocked '~/models'.

* 📦 chore: install CloudFront SDK deps for PR #12193

Adds the two AWS CloudFront packages required by the rebased
CloudFront CDN strategy:
  - @aws-sdk/client-cloudfront
  - @aws-sdk/cloudfront-signer

Following the @aws-sdk/client-s3 pattern:
  - api/package.json: regular dependency (runtime resolution)
  - packages/api/package.json: peerDependency

Generated by `npm install` against the freshly rebased lock file
to avoid the merge conflicts that came from the original PR's
lock-file edits being made against an older base of dev.

* 🐛 fix: CI failures + review findings on CloudFront PR #12193

CI fixes
- Rename packages/data-provider/src/__tests__/cloudfront-config.test.ts
  → src/cloudfront-config.spec.ts. Jest's default testMatch picks up
  __tests__/ directories even inside dist/, so the compiled .d.ts shell
  was being executed as an empty test suite. Moving to .spec.ts (matching
  the rest of the package) avoids the dist/ pickup.
- Add cookieExpiry: 1800 to CloudFront crud.test makeConfig: the schema
  applies a default so CloudFrontFullConfig requires it.

Review findings addressed
- #1 (Codex + comprehensive): Normalize CloudFront domain with /\/+$/
  regex (and key with /^\/+/ regex) in buildCloudFrontUrl, matching the
  cookie code so resource policy and file URLs stay aligned even when
  the configured domain has multiple trailing slashes. Added tests.
- #2: Move DEFAULT_BASE_PATH out of s3Config into shared
  packages/api/src/storage/constants.ts. ImageService no longer imports
  S3-specific config.
- #3: getCloudFrontConfig() returns Readonly<CloudFrontFullConfig> | null
  to discourage mutation of the cached signing config.
- #4: Add cross-field refinement tests for cloudfrontConfigSchema
  (invalidateOnDelete-without-distributionId,
  imageSigning="cookies"-without-cookieDomain).
- #6: Revert unrelated MCP comment re-indentation in
  librechat.example.yaml.
- #7: Add azure_blob to the strategy list comment.

Skipped
- #5 (extractKeyFromS3Url with CloudFront URLs): existing
  deleteFileFromCloudFront tests already cover the path-equivalence
  assumption; renaming the helper is real refactor work beyond this
  PR's scope.
- #8, #9 (NIT, low confidence): leaving for author judgement.
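The finding-#1 normalization amounts to something like the following (an illustrative reconstruction, not the PR's exact code): trailing slashes stripped from the domain, leading slashes from the key, so the signed-cookie resource policy and the file URL always agree.

```typescript
function buildCloudFrontUrl(domain: string, key: string): string {
  const base = domain.replace(/\/+$/, ''); // strip any trailing slashes
  const path = key.replace(/^\/+/, '');    // strip any leading slashes
  return `${base}/${path}`;
}
```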

* 🧹 chore: drop dead DEFAULT_BASE_PATH from s3Config test mock

After moving DEFAULT_BASE_PATH to ~/storage/constants, crud.ts no longer
reads it from s3Config — so the entry in the s3Config jest mock was
misleading dead config. The tests still pass because the unmocked real
constants module provides the value.

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
2026-05-05 13:21:05 -04:00
Danny Avila
f20419d0b7
📄 feat: Rich File Artifact Previews for DOCX, CSV, XLSX, PPTX (#12934)
* 📄 feat: Rich File Artifact Previews for DOCX, CSV, XLSX, PPTX

Render office files emitted by tools as interactive previews in the
artifact panel instead of raw extracted text. The backend produces a
sanitized HTML document via mammoth (DOCX), SheetJS (CSV/XLSX/XLS/ODS),
or yauzl-based slide extraction (PPTX) and ships it through the
existing SSE attachment payload; the client routes it through the
Sandpack `static` template's `index.html` slot — no new browser deps,
no client-side blob fetch, no React renderer components.

* 🔐 fix: Restrict data: URLs to <img> in office HTML sanitizer

Codex review on #12934 caught that `data:` lived in the global
`allowedSchemes`, which meant a smuggled `<a href="data:text/html,
<script>...</script>">` would survive sanitization. The Sandpack
iframe sandbox does not gate `target="_blank"` navigations, so a
click would open attacker-controlled HTML in a new tab.

Scope `data:` to `<img src>` only via `allowedSchemesByTag` (mammoth
inlines DOCX images as base64 `data:image/...` URIs — that path still
works). Add a regression suite (`sanitizeOfficeHtml security`) with
8 cases covering: <script> stripping, event-handler removal,
javascript:/data: rejection on anchors, data:image preservation in
<img>, http/https/mailto allowance, target=_blank rel=noopener
enforcement, and <iframe> stripping.
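In sanitize-html terms, the fix moves `data:` out of the global `allowedSchemes` list and scopes it via `allowedSchemesByTag` (a sketch of just the relevant options, not the full sanitizer config):

```typescript
const sanitizeOptions = {
  // `data:` no longer appears in the global scheme list, so a smuggled
  // <a href="data:text/html,..."> is rejected...
  allowedSchemes: ['http', 'https', 'mailto'],
  // ...while <img src> keeps it, so mammoth's inlined base64
  // `data:image/...` URIs still render.
  allowedSchemesByTag: {
    img: ['http', 'https', 'data'],
  },
};
```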

* 🔧 fix: Route extensionless office files by MIME alone

Codex review on #12934 caught that the office-render gate in
`extractCodeArtifactText` only fired when the extension was in
`OFFICE_HTML_EXTENSIONS` or the category was `document`/`pptx`. A tool
emitting `data` with `text/csv` (no extension) classified as
`utf8-text`, so the gate was skipped and raw CSV text shipped to the
client — but the client routes by MIME to the SPREADSHEET bucket
expecting a full HTML document, so the panel rendered broken text.

Extract a shared `officeHtmlBucket(name, mime)` predicate from
`html.ts` (returns the bucket name or null). Both `bufferToOfficeHtml`
(the dispatcher) and the upstream gate in `extract.ts` now go through
this single source of truth, so they can never drift apart again. The
predicate already mirrors the dispatcher's extension/MIME logic
(extension wins; MIME is the fallback for extensionless inputs).

Adds:
- 14 cases for the new `officeHtmlBucket` predicate covering the
  positive paths (each bucket via extension OR MIME) and the negative
  paths (txt, py, json, jpg, pdf, zip, odt, plain noext).
- A direct regression test in `extract.spec.ts` for the Codex catch:
  `data` with `text/csv` + utf8-text category routes through the
  office HTML producer.
- Parameterized cases for extensionless DOCX/XLSX/XLS/ODS/PPTX files
  identified by MIME alone.

* 🛡️ fix: Enforce extension-wins precedence in officeHtmlBucket

Codex review on #12934 caught that the predicate's if-chain interleaved
extension and MIME checks for each bucket — e.g. CSV's branch was
`ext === 'csv' || CSV_MIME_PATTERN.test(mimeType)`. A `deck.pptx`
shipped with `text/csv` (sandboxed tools sometimes ship generic MIMEs)
matched the CSV branch BEFORE the PPTX extension branch was reached,
so a binary PPTX would have been handed to `csvToHtml` to parse as
text — yielding garbage or a parse exception.

Restructure to a strict two-pass dispatch: an exhaustive extension
table first (one lookup, all known extensions), then MIME-only
fallback for extensionless / unknown-ext inputs. The doc comment's
"extension wins" claim is now actually enforced by the implementation.

Add 7 regression cases covering the conflicting-MIME footgun for each
bucket: deck.pptx + text/csv → pptx; workbook.xlsx + text/csv →
spreadsheet; legacy.xls + pptx-MIME → spreadsheet; report.docx +
text/csv → docx; data.csv + docx-MIME → csv; etc.
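The two-pass shape can be sketched as follows (simplified; the real predicate shares the extension/MIME patterns with the dispatcher and this later gained MIME-parameter normalization):

```typescript
type Bucket = 'docx' | 'spreadsheet' | 'csv' | 'pptx';

// Pass 1: exhaustive extension table; one lookup, extension always wins.
const EXT_TO_BUCKET: Record<string, Bucket> = {
  docx: 'docx',
  csv: 'csv',
  xlsx: 'spreadsheet',
  xls: 'spreadsheet',
  ods: 'spreadsheet',
  pptx: 'pptx',
};

function officeHtmlBucket(name: string, mimeType: string): Bucket | null {
  const dot = name.lastIndexOf('.');
  const ext = dot > 0 ? name.slice(dot + 1).toLowerCase() : '';
  if (ext in EXT_TO_BUCKET) {
    return EXT_TO_BUCKET[ext];
  }
  // Pass 2: MIME-only fallback for extensionless / unknown-ext inputs.
  if (/^(text|application)\/csv$/.test(mimeType)) return 'csv';
  if (mimeType.endsWith('wordprocessingml.document')) return 'docx';
  if (mimeType.endsWith('presentationml.presentation')) return 'pptx';
  if (mimeType.includes('spreadsheet') || mimeType.includes('excel')) {
    return 'spreadsheet';
  }
  return null;
}
```

With this structure, `deck.pptx` shipped with `text/csv` can never reach the CSV branch: pass 1 returns before any MIME check runs.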

* 🛡️ fix: Reject zip-bomb office files before in-process parsing (SEC)

Addresses pre-existing availability vulnerability validated by
SEC review (Codex finding 275344c5...) and made worse by this PR's
HTML rendering path. A sub-1MiB compressed XLSX/DOCX/PPTX (highly
compressed run-of-zeros) inflates to 200+ MiB of XML when handed
to mammoth/xlsx — blocking the Node event loop for 10+ seconds and
spiking RSS to ~1 GiB. The existing 8s `withTimeout` wrapper uses
`Promise.race`, which can only return early; it cannot interrupt
synchronous parser CPU/RAM consumption. PoC ran an authenticated
execute_code call to OOM the API process.

Add `assertSafeZipSize(buffer)` — a yauzl-based pre-flight that
streams every entry with mid-inflate byte counting and bails on
either a per-entry or total decompressed-size cap. Mid-inflate
counting cannot be bypassed by falsifying the central directory's
`uncompressedSize` field (the technique the PoC used). Defaults:
25 MiB per entry, 100 MiB total — generous headroom for legitimate
image-heavy office files, well below the attack profile.

Hook the check into every path that hands a buffer to mammoth/xlsx
/yauzl:
- New HTML producers (`wordDocToHtml`, `excelSheetToHtml`,
  `pptxToSlideListHtml`) — added by this PR
- Legacy RAG text extractors (`wordDocToText`, `excelSheetToText`
  in `crud.ts`) — pre-existing path, also vulnerable
Errors propagate as a tag-distinct `ZipBombError` so callers can
distinguish a refused bomb from generic parse failures. The outer
`extractCodeArtifactText` swallows the error and returns null,
falling back to the regular download UI.

`.xls` (BIFF/CFB binary, not ZIP) is detected by magic bytes and
skipped — yauzl would reject it as malformed anyway.

Adds 15 tests:
- `zipSafety.spec.ts` (9): benign passes, per-entry cap, total cap,
  ZipBombError type-tagging, malformed-zip distinction, directory-
  entry handling, named-error surfacing, and the SEC-PoC pattern
  (sub-1 MiB compressed → 50 MiB inflated rejected on default caps).
- `html.spec.ts` zip-bomb suite (5): each producer rejects a bomb;
  dispatcher propagates correctly; legitimate fixtures still render.
- `extract.spec.ts` (1): outer extractor swallows ZipBombError and
  returns null so the download UI fallback fires.
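The counting core of the pre-flight can be sketched without the yauzl plumbing (caps as described above; the real check feeds each entry's inflate output through a counter like this, which is why a forged central-directory `uncompressedSize` cannot talk its way past it):

```typescript
class ZipBombError extends Error {
  readonly isZipBomb = true;
}

const MAX_ENTRY_BYTES = 25 * 1024 * 1024;  // per-entry decompressed cap
const MAX_TOTAL_BYTES = 100 * 1024 * 1024; // whole-archive decompressed cap

// Returns a per-archive counter fed with each inflated chunk's length;
// it throws mid-inflate the moment either cap is crossed.
function makeInflateCounter() {
  let total = 0;
  return (entrySoFar: number, chunkLength: number): number => {
    const entryTotal = entrySoFar + chunkLength;
    total += chunkLength;
    if (entryTotal > MAX_ENTRY_BYTES || total > MAX_TOTAL_BYTES) {
      throw new ZipBombError('decompressed size cap exceeded');
    }
    return entryTotal;
  };
}
```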

* 🧹 fix: Normalize MIME parameters; add legacy CSV MIME variant

Two related Codex catches on PR #12934 — both about MIME-routing
inconsistencies between backend and client that would cause
extensionless CSV files to render as broken (raw text under an HTML
slot) or skip the artifact panel entirely.

P2 — backend MIME normalization:
`officeHtmlBucket` matched MIME strings exactly, so a real-world
`text/csv; charset=utf-8` Content-Type slipped through and the
backend returned raw CSV text. The client's `baseMime` helper
strips parameters before its own MIME lookup, so it routed the
same file to the SPREADSHEET bucket expecting an HTML body that
never arrived. Mirror the client's normalization on the backend
(strip everything from `;` onward, lowercase) before bucket
matching.

P3 — client legacy CSV MIME:
Backend's `CSV_MIME_PATTERN` accepts three variants (`text/csv`,
`application/csv`, `text/comma-separated-values`); the client's
`MIME_TO_TOOL_ARTIFACT_TYPE` only had the first two. An
extensionless file with `text/comma-separated-values` would have
backend HTML produced but the client would skip the artifact
panel entirely. Add the missing variant.

Tests:
- 9 new parameterized-MIME cases on backend covering charset/
  boundary/case variants for every bucket.
- 1 new client routing case for `text/comma-separated-values`.
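The normalization mirrored onto the backend is small enough to state in full (a sketch; the real helper lives in the PR):

```typescript
// Strip MIME parameters from ';' onward and lowercase, so
// 'text/csv; charset=utf-8' matches the same bucket as 'text/csv'.
function baseMime(mimeType: string): string {
  return mimeType.split(';')[0].trim().toLowerCase();
}
```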

* 🩹 fix: Try office HTML before short-circuiting on category=other

Codex review on #12934 caught that the early `category === 'other'`
return short-circuited before `hasOfficeHtmlPath` was checked. The
classifier returns 'other' for inputs the new dispatcher can still
route — extensionless `application/csv` (CSV MIMEs aren't in the
classifier's text-MIME set and don't start with `text/`), and
extensionless office MIMEs with parameters like `application/vnd...
spreadsheetml.sheet; charset=binary` (the classifier's `isDocumentMime`
exact-matches these MIMEs without parameter normalization). Both would
route correctly through `officeHtmlBucket` but never reached it.

Move the office-HTML attempt above the 'other' early return, and drop
the `|| category === 'document' || category === 'pptx'` shortcut now
that `hasOfficeHtmlPath` covers the same surface (with parameter
normalization) and a wider one. ODT still routes through `extractDocument`
unchanged — `hasOfficeHtmlPath` returns false for it and the
`category === 'document'` branch below handles it.

Adds 3 regression tests:
- extensionless `application/csv` + category='other' → office HTML
- extensionless parameterized office MIME + category='other' → office HTML
- defense check: actual binary 'other' (image/jpeg) still returns null
  without invoking the office producer

* 🛡️ fix: Office types are HTML-or-null (no text fallback → XSS)

Codex P1 review on #12934 caught that when `renderOfficeHtml` failed
(timeout, malformed file, zip-bomb rejection) for an office type, the
extractor fell through to `extractDocument` and returned plain text.
The client routes by extension/MIME to the office preview buckets and
feeds `attachment.text` straight into the Sandpack iframe's
`index.html`. A spreadsheet cell or document body containing the
literal string `<script>alert(1)</script>` would have been injected
as executable markup — direct XSS.

The contract for office types is now HTML-or-null with no text
fallback. Failed render returns null, the client's empty-text gate
keeps the artifact off the panel, and the file falls back to the
regular download UI (matching what PPTX already did). PDF and ODT
still go through `extractDocument` because the client routes them to
PLAIN_TEXT (which the markdown viewer escapes) or no artifact at all,
so plain text is safe there.

Test reshuffle:
- `document` describe block now uses ODT/PDF for the legacy
  parseDocument-path tests (DOCX/XLSX/XLS/ODS bypass that path).
- New "does NOT call parseDocument for office HTML types" test locks
  in the SEC contract for all four office HTML buckets.
- "falls back to ..." tests rewritten as "returns null when ..." with
  explicit `parseDocumentCalls.length === 0` assertions to prove no
  text leaks back to the client.
- New XSS regression test for the XLSX failure path.
- Mock parseDocument failure-name match relaxed to `includes()` so
  ODT-named tests can use the same trigger.

* 🧽 chore: Address follow-up review findings on PR #12934

Wraps up the 10-finding follow-up review. Two MAJOR + four MINOR + two
NIT addressed; one NIT skipped after verifying it was a misread of the
package.json structure.

MAJOR
- #1: Rewrite `renderOfficeHtml` JSDoc to document the HTML-or-null
  contract explicitly. The pre-fix doc described a text-fallback path
  that was the original XSS vector (commit b06f08a). A future
  maintainer trusting the stale doc could reintroduce the fallback.
- #2: Replace byte-truncation of office HTML with a small "preview too
  large" banner document. Cutting at a UTF-8 boundary lands mid-tag
  (`<table><tr><td>con\n…[truncated]`) and ships malformed markup to
  the iframe — unpredictable rendering, occasional broken layouts on
  DOCX with embedded images / wide spreadsheets.

MINOR
- #4: Wrap `readSlidesFromZip`'s `zipfile.close()` in try/catch so a
  close-time exception (mid-flight stream) doesn't replace the
  original error. Mirrors the defensive pattern in zipSafety.ts.
- #5: Refactor PPTX extraction to use `yauzl.fromBuffer` directly,
  eliminating the temp-file write/unlink the safety pre-flight already
  proved unnecessary. Removes 4 unused imports (os, path, fs/promises,
  randomUUID).
- #6: Extract `isPreviewOnlyArtifact(type)` to `client/src/utils/
  artifacts.ts` so the membership check is unit-testable without
  mounting the full Artifacts component (Recoil + Sandpack + media
  query). 15 new test cases covering positive types, negative types,
  null/undefined, and unknown strings.

NIT
- #3: Remove dead `stripColorStyles` / `COLOR_PROPERTY_PATTERN` —
  unused (sanitizer's `allowedStyles` config handles color implicitly).
- #7: Remove dead `!_lc_csv_label` worksheet property write.
- #9: Remove no-op `exclusiveFilter: () => false` sanitize-html config.
- #10: Type-narrow `PREVIEW_ONLY_ARTIFACT_TYPES` to
  `ReadonlySet<ToolArtifactType>` so the membership table is
  compile-time checked against the enum.

SKIPPED
- #8: Reviewer flagged `sanitize-html` as duplicated in devDeps and
  dependencies. The package has no `dependencies` section — only
  `devDependencies` and `peerDependencies`. Existing convention
  (mammoth, xlsx, yauzl, pdfjs-dist) is to appear in BOTH. Removing
  the devDep entry would break local test runs.

Tests: packages/api 4406/4406; client artifacts 128/128.

* 🪞 chore: Fix isPreviewOnlyArtifact test description parameter order

Follow-up review nit on PR #12934. Jest's `it.each` substitutes `%s`
positionally, and the table rows were `[type, expected]` while the
description template read `'returns %s for type %s'` — outputting
"returns application/vnd.librechat.docx-preview for type true"
instead of the intended "type ... returns true".

Reorder the template to match the column order. Test runner output
now reads naturally: "type application/vnd.librechat.docx-preview
returns true". Pure cosmetic — runtime behavior unchanged.

* feat: Improve DOCX rendering and surface filename in panel header

Two UX improvements based on hands-on use of the office preview pipeline.

DOCX rendering — mammoth strips the navy banners, cell shading, and
column layouts that direct-formatted docs apply (python-docx-style
output is a common case). The flat `<p><strong>X</strong></p>` and
bare `<table><tr><td>` it emits look washed out next to the source.
Three targeted compensations:

- Style map promotes `Title`, `Subtitle`, `Heading 1` thru `Heading 6`,
  and `Quote` paragraphs to their semantic HTML equivalents (mammoth's
  default only handles Heading 1-6, missing Title/Subtitle/Quote).
- Extra CSS scoped to `.lc-docx` gives the first table row sticky-
  looking header styling regardless of `<thead>` (mammoth never emits
  `<thead>`), adds zebra striping, and treats the python-docx
  `<p><strong>X</strong></p>` section-heading idiom as a pseudo-h2 with
  a thin accent left border so document structure survives the round
  trip. Headings get a left accent or underline so they read as
  headings instead of just bold paragraphs.
- Sanitizer's `allowedAttributes` opens `class` on the heading and
  block tags the styleMap and CSS heuristics rely on. `<script>`,
  event handlers, javascript: URLs, etc. are still stripped — the
  existing security regression suite catches any drift.

Panel header — `Artifacts.tsx` showed a generic "Preview" pill for
preview-only artifacts. Single-tab Radio is a no-op; surfacing the
document filename there gives the user something useful in the chrome
without taking real estate. `displayFilename` handles the sandbox
dotfile suffix the upload pipeline applies.

Tests: html.spec.ts +1 (new CSS-emission lock), 71/71. Backend files
suite 428/428. Client 308/308.

* feat: High-fidelity DOCX preview via docx-preview in iframe

Switch the default DOCX render path from server-side mammoth → flat
HTML to client-side `docx-preview` loaded inside the Sandpack iframe.
Mammoth becomes the fallback for files above the cap.

Why
---
The Sandpack iframe is a real browser DOM. The server-side rendering
ceiling for DOCX→HTML sits well below the source's visual fidelity —
mammoth strips cell shading, run colors, banners, and column layouts
because Word's layout model doesn't fit HTML's flow model. Pushing the
render into the iframe lifts that ceiling without paying the
server-side cost of jsdom or LibreOffice.

What
----
- New `wordDocToHtmlViaCdn(buffer)` builds a self-contained HTML doc
  that embeds the binary as base64 and lets `docx-preview@0.3.7`
  render it on load. CSS preserves dark/light mode handoff via
  `prefers-color-scheme`. Bootstrap script falls back to a "preview
  unavailable, please download" message if the CDN is unreachable or
  the parse throws.
- `docx-preview` and its `jszip` peer dep are pinned to specific
  versions on jsdelivr with SRI sha384 integrity hashes and
  `crossorigin="anonymous"`. Refresh: re-fetch the file, run
  `openssl dgst -sha384 -binary FILE | openssl base64 -A`.
- CSP locked down on the iframe: `default-src 'none'`, scripts only
  from jsdelivr (no eval), `connect-src 'none'` so a parser bug in
  docx-preview can't be turned into exfiltration of the embedded
  document, `base-uri 'none'`, `form-action 'none'`. Defense in depth
  on top of the Sandpack cross-origin sandbox.
- `wordDocToHtml` dispatches by size: ≤ 350 KB binary → CDN path
  (high fidelity), larger → mammoth fallback (preserves the size cap
  on `attachment.text`). 350 KB chosen so worst-case base64-inflated
  output (~478 KB) plus wrapper overhead (~5 KB) fits under
  MAX_TEXT_CACHE_BYTES (512 KB) with 40 KB headroom.
- Internal renderers exported as `_internal` for tests. Public API
  unchanged — callers still go through `wordDocToHtml`.
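The 350 KB threshold arithmetic checks out. A quick sketch of the budget, using the constant names given above (base64 emits 4 output characters per 3 input bytes, rounded up):

```typescript
const MAX_DOCX_CDN_BINARY_BYTES = 350 * 1024;
const MAX_TEXT_CACHE_BYTES = 512 * 1024;

function base64Length(binaryBytes: number): number {
  return Math.ceil(binaryBytes / 3) * 4;
}

// Worst-case inflated payload for a file right at the cap: ~478 KB.
const worstCaseBase64 = base64Length(MAX_DOCX_CDN_BINARY_BYTES);
const wrapperOverhead = 5 * 1024; // HTML wrapper, scripts, CSS
// Remaining headroom under the attachment.text cache cap: ~40 KB.
const headroom = MAX_TEXT_CACHE_BYTES - worstCaseBase64 - wrapperOverhead;
```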

PPTX intentionally NOT switched
-------------------------------
Surveyed the available client-side PPTX libraries:
- `pptx-preview@1.0.7` ships an ESM-only main entry plus a 1.36 MB
  UMD that references `require("stream"/"events"/"buffer"/"util")` —
  bundled for Node, not browser-clean. Could work but the runtime
  references to undefined Node globals are a fragility risk worth
  more validation than this PR can absorb.
- `pptxjs` is jQuery-era, requires four separate UMD scripts in a
  specific order, less actively maintained.
- The honest answer for PPTX is the LibreOffice sidecar (DOCX/XLSX/
  PPTX → PDF → PDF.js), which is the architecture every major
  product (Google Drive, Claude.ai, ChatGPT) effectively uses and
  the only path to ~5/5 fidelity for arbitrary user decks.

PPTX stays on the existing slide-list extraction for now. Open a
follow-up issue for the LibreOffice/Gotenberg sidecar.

Tests
-----
- 6 new in CDN-rendered describe block: wrapper structure, base64
  round-trip, SRI integrity + crossorigin, CSP locks
  (connect-src/eval/base-uri/form-action), fallback message wiring,
  size-threshold lock.
- Adjusted 2 existing tests that asserted on mammoth-path artifacts
  (literal document text in `<article class="lc-docx">`) — those
  assertions move to the mammoth-fallback test that calls
  `_internal.wordDocToHtmlViaMammoth` directly. Dispatcher tests now
  assert CDN-path signatures instead.

packages/api files: 434/434; full unit suite 4473/4473.

* 🧷 fix: Address Codex P1 (MIME aliases) + P2 (CDN dependency)

Two follow-up review findings on PR #12934, both real.

P1 — Spreadsheet MIME aliases on client
----------------------------------------
Backend's `officeHtmlBucket` uses the broad `excelMimeTypes` regex from
`librechat-data-provider` (covers `application/x-ms-excel`,
`application/x-msexcel`, `application/msexcel`, `application/x-excel`,
`application/x-dos_ms_excel`, `application/xls`, `application/x-xls`,
plus the canonical sheet MIMEs). The client's exact-match
`MIME_TO_TOOL_ARTIFACT_TYPE` only had three of those, so an
extensionless XLS upload with a legacy MIME would have backend HTML
produced but the client would fail to route the artifact at all —
preview chip never registers.

Fix: import the same regex on the client and add it as a fallback in
`detectArtifactTypeFromFile` after the exact-match map miss. Stays in
lock-step with the backend automatically.

7 new test cases — one per legacy alias.

P2 — Hard CDN dependency on jsdelivr
-------------------------------------
Air-gapped / corporate-filtered networks where jsdelivr is unreachable
would see DOCX previews permanently degrade to "Preview unavailable"
because the iframe could never load the renderer scripts. Mammoth was
sitting right there on the server but the dispatcher always preferred
the CDN path for files under 350 KB.

Fix: `OFFICE_PREVIEW_DISABLE_CDN` env var. When truthy (`1`, `true`,
`yes`, case-insensitive, whitespace-trimmed), `wordDocToHtml`
short-circuits to the mammoth path regardless of file size. Operators
on filtered networks set the env var; default behavior is unchanged.

Read at function-call time (not module load) so jest can flip it in
`beforeEach` without `jest.resetModules()`. The cost is one property
access per render.

12 new test cases: env-unset uses CDN (default), all five truthy
forms force mammoth, five non-truthy forms (`false`/`0`/`no`/empty/
arbitrary string) leave CDN active.
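The truthy-env parsing described above amounts to the following (helper name hypothetical; the key point is that it is called per render, not cached at module load):

```typescript
// Whitespace-trimmed, case-insensitive match on 1/true/yes;
// everything else, including undefined, leaves the CDN path active.
function isCdnDisabled(raw: string | undefined): boolean {
  if (raw == null) {
    return false;
  }
  return ['1', 'true', 'yes'].includes(raw.trim().toLowerCase());
}
```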

Tests
-----
packages/api/src/files: 446/446 (was 434, +12 from env-var matrix).
Client artifact suites: 235/235 (was 228, +7 from MIME aliases).

* feat: High-fidelity PPTX preview via pptx-preview in iframe

Mirrors the DOCX CDN architecture for PPTX: small files (≤350 KB
binary) embed as base64 and render via `pptx-preview` loaded from
jsdelivr inside the Sandpack iframe. Larger files and air-gapped
deployments fall back to the existing slide-list extraction.

Why
---
PPTX is the format where the gap between LibreChat's preview and
Claude.ai-style previews was most visible (slide-list of bullet
points vs. rendered slide layouts). LibreOffice → PDF → PDF.js is
still the eventual gold-standard answer for PPTX fidelity, but
client-side rendering inside the Sandpack iframe gets us a
meaningful intermediate step (~1.5/5 → ~3.5/5) without a sidecar.

What
----
- `pptx-preview@1.0.7` (ISC license, ~1.36 MB UMD bundle that
  includes its echarts/lodash/uuid/jszip/tslib deps inline). Pinned
  to a specific version on jsdelivr with SHA-384 SRI and
  `crossorigin="anonymous"`.
- `buildPptxCdnDocument` mirrors the DOCX wrapper: same CSP locks
  (`default-src 'none'`, `connect-src 'none'`, no eval, no base/form
  tampering), same `id="lc-doc-data"` base64 slot, same fallback
  message wiring (`typeof pptxPreview === 'undefined'` →
  "Preview unavailable").
- New public `pptxToHtml(buffer)` dispatcher; `bufferToOfficeHtml`
  switches its `'pptx'` case to call it. `pptxToSlideListHtml` stays
  exported as the slide-list-only path (still hit by tests directly
  and by the dispatcher fallback).
- `OFFICE_PREVIEW_DISABLE_CDN=true` env-var hatch applies to PPTX
  too — air-gapped operators get the slide-list path. Same env-var
  read at call time, same matrix of truthy values (`1` / `true` /
  `yes` / case-insensitive / whitespace-trimmed).
- `_internal` re-exports moved to after the PPTX section since the
  PPTX internals live further down in the file. Adds
  `pptxToHtmlViaCdn`, `MAX_PPTX_CDN_BINARY_BYTES`,
  `PPTX_PREVIEW_CDN`.

Honest caveats
--------------
- The 1.36 MB UMD bundle has `require("stream"/"events"/"buffer"/
  "util")` references in its outer wrapper. Those are bundled-dep
  artifacts (likely from `tslib` / Node-shim transforms) and don't
  appear to execute on the browser code paths, but I haven't done
  manual e2e on a wide range of decks. If a class of files turns up
  that breaks rendering, the iframe-side fallback message catches it
  and operators have `OFFICE_PREVIEW_DISABLE_CDN=true` as the bail.
- First-render CDN fetch is ~1.36 MB (browser-cached after).
- PPTX with embedded media easily exceeds the 350 KB binary cap;
  those files take the slide-list path. Lifting the cap is a
  follow-up (tied to the broader self-hosting work).

Tests
-----
11 new in two new describe blocks:
- `pptxToHtml dispatcher`: routing predicate (small → CDN, env-set
  → slide-list).
- `CDN-rendered path`: base64 round-trip, SRI integrity +
  crossorigin, CSP locks (connect/eval/base/form), fallback message,
  size-threshold lock at 350 KB.
- `OFFICE_PREVIEW_DISABLE_CDN escape hatch`: env-var matrix for
  truthy values.

packages/api/src/files: 457/457 (was 446, +11).

* 🪟 fix: DOCX preview fills the artifact panel width

docx-preview defaults to rendering at the document's native page
width (8.5in for letter, 21cm for A4). In a wide artifact panel
that left whitespace on either side; in a narrow one it forced
horizontal scroll.

Two changes:
- Pass `ignoreWidth: true` to `docx.renderAsync` so the library skips
  the document's pageSize width and uses its container's width.
- Defensive CSS overrides on `.docx-wrapper` and `.docx-wrapper > section.docx`
  in case a future library version regresses on the option, plus
  `padding: 0` on the wrapper to drop the page-edge whitespace
  docx-preview otherwise reserves.

`renderHeaders`/`renderFooters`/etc. stay enabled — those still
appear in the rendered output, just inside a container that fills
the panel instead of a fixed-width "page."

Tests unchanged (100/100); manual e2e ahead of merge.

* 🩹 fix: PPTX black screen — allow blob: workers + harden bootstrap

Manual e2e of the PPTX CDN renderer surfaced a black screen with
"Could not establish connection. Receiving end does not exist."
unhandled-rejection — characteristic of a Web Worker that couldn't
start.

Root cause: pptx-preview's bundled echarts dep spins up Web Workers
via blob: URLs for chart rendering. Our CSP had `default-src 'none'`
and no `worker-src`, so workers fell back to default → blocked. The
async failure deep inside echarts didn't surface through the outer
`previewer.preview()` promise, so my bootstrap's `.catch` never fired,
the loading state was removed, and the iframe sat with the body
background showing through (dark navy in dark mode = "black screen").

Three changes:
- Add `worker-src blob:` to the PPTX CSP. Allows blob:-only worker
  creation without permitting arbitrary worker URLs.
- Bootstrap: window-level `unhandledrejection` and `error` listeners
  so rejections from inside bundled-dep async pipelines surface as
  the user-facing "Preview unavailable" fallback instead of going
  silent.
- Bootstrap: 8-second timeout that checks `container.children.length`
  — if the renderer hasn't appended anything visible by then, assume
  silent failure and show the fallback.

Also wipe `container.innerHTML` when showing the fallback so a partial
render doesn't compete with the message.

DOCX wrapper unchanged: docx-preview doesn't use workers, so the
worker-src directive doesn't apply, and the existing fallback path
already covers its failure modes.

Tests
-----
- Existing PPTX CSP test now also asserts `worker-src blob:` is present.
- Existing fallback-message test extended to cover the new
  unhandledrejection/error/timeout listeners.

packages/api/src/files: 467/467.

* 🔒 fix: gate office HTML routing on backend trust flag (textFormat)

Codex P1 review on PR #12934: routing .docx/.csv/.xlsx/.xls/.ods/.pptx
into the office preview buckets assumed `attachment.text` was already
sanitized full-document HTML, but that guarantee only existed for the
new code-output extractor path. Existing stored attachments and other
non-code paths can still carry plain extracted text — `useArtifactProps`
would then inject that as `index.html` inside the Sandpack iframe.

Adds a `textFormat: 'html' | 'text' | null` trust flag persisted on
the file record by the code-output extractor, surfaced over the SSE
attachment payload and the TFile API type. The client's routing in
`detectArtifactTypeFromFile` requires `textFormat === 'html'` before
landing on an office HTML bucket; everything else (legacy attachments,
RAG-extracted plain text from `parseDocument`, explicitly-marked
'text' entries) falls back to the PLAIN_TEXT bucket where the
markdown viewer escapes content rather than executing it.

Tests: new `getExtractedTextFormat` helper has 14 cases covering all
office paths, legacy XLS MIME aliases, parseDocument fallthroughs,
and null-input. Client `artifacts.test.ts` adds three security-gate
tests proving downgrade behavior for missing/null/'text' textFormat,
plus a `fileToArtifact` test that legacy office attachments without
the flag end up in PLAIN_TEXT with their content escaped.

* 🌐 fix: air-gapped DOCX preview — embed mammoth fallback in CDN doc

Codex P2 review on PR #12934: the CDN-rendered DOCX path always pulled
docx-preview + jszip from cdn.jsdelivr.net. Air-gapped or corporate-
filtered networks where jsdelivr is blocked would degrade to a static
"Preview unavailable" message even though the server already had a
local mammoth renderer that could produce readable output.

Now the dispatcher renders mammoth first and embeds the sanitized
output inside the CDN document as a hidden `#lc-fallback` block. The
iframe's existing `typeof docx === 'undefined'` check (which fires
when the CDN scripts can't load) un-hides the fallback so the user
sees a real preview. CDN-success path is unchanged: high-fidelity
docx-preview output owns the viewport, mammoth fallback stays hidden.

Two new safeguards in the dispatcher:
- Size budget: if base64(binary) + mammoth body + wrapper > 512 KB
  (the `attachment.text` cache cap), drop to mammoth-only so a giant
  document still renders. The `OFFICE_HTML_OUTPUT_CAP` constant
  mirrors `MAX_TEXT_CACHE_BYTES` from extract.ts (separate constant
  to avoid a circular import; pinned by a unit test).
- `lc-render` is hidden when fallback shows so the empty padded slot
  doesn't sit above the mammoth content.

Tests: existing CDN-path tests updated for the new
`wordDocToHtmlViaCdn(buffer, mammothBody)` signature; new test for
the embedded fallback structure (`#lc-fallback`, mammoth body
content, "High-fidelity renderer unavailable" notice, render-slot
hide); new constant pin and per-fixture cap-respect assertion.

* 🧪 feat: LibreOffice → PDF preview path (POC, opt-in via env)

Per the plan-mode discussion: prove out a LibreOffice subprocess
pipeline as an alternative to the docx-preview / pptx-preview CDN
renderers. LibreOffice handles every office format Microsoft and
LibreOffice itself can open (DOCX, PPTX, XLSX, ODT, ODP, ODS, RTF,
many more), produces a PDF, and the host browser's built-in PDF
viewer renders it inside the Sandpack iframe via a `data:` URI.
No client-side JS dependency, no CDN dependency, true high
fidelity for any feature LibreOffice supports.

Off by default. Operators opt in by setting both:
  - `OFFICE_PREVIEW_LIBREOFFICE=true`
  - LibreOffice (`soffice` or `libreoffice`) on the server's `$PATH`

When either is missing, the dispatcher falls through to the
existing CDN/mammoth/slide-list pipeline so a misconfiguration
doesn't break previews.

Hardening (`packages/api/src/files/documents/libreoffice.ts`):
- Fresh subprocess per call with isolated temp dir, stripped env
  (PATH/HOME/TMPDIR only), and `-env:UserInstallation` so concurrent
  conversions can't collide on shared `~/.config/libreoffice` locks
- 30-second wall-time cap; SIGKILL on timeout
- 50 MB PDF output cap to bound disk pressure
- 512 KB output cap on the wrapped HTML so the SSE/cache contract
  stays intact (base64 inflates ~33%, effective PDF cap ~380 KB)
- Macros disabled by default flags (`--norestore --invisible
  --nodefault --nofirststartwizard --nolockcheck`)
- Tag-distinct `LibreOfficeUnavailableError` /
  `LibreOfficeConversionError` so callers can swallow appropriately
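
The hardening list above can be sketched as a spawn wrapper (function names are hypothetical; the flags, stripped env keys, and 30-second SIGKILL cap are from this commit):

```typescript
import { spawn } from 'node:child_process';

// Sketch of the hardened soffice invocation: per-call profile dir so
// concurrent conversions can't collide on shared config locks.
function buildSofficeArgs(inputPath: string, outDir: string, profileDir: string): string[] {
  return [
    '--headless', '--norestore', '--invisible', '--nodefault',
    '--nofirststartwizard', '--nolockcheck',
    `-env:UserInstallation=file://${profileDir}`,
    '--convert-to', 'pdf', '--outdir', outDir, inputPath,
  ];
}

// Only PATH/HOME/TMPDIR survive into the subprocess environment.
function strippedEnv(): Record<string, string | undefined> {
  const { PATH, HOME, TMPDIR } = process.env;
  return { PATH, HOME, TMPDIR };
}

function runSoffice(binary: string, args: string[], timeoutMs = 30_000) {
  const child = spawn(binary, args, { env: strippedEnv() });
  const timer = setTimeout(() => child.kill('SIGKILL'), timeoutMs); // wall-time cap
  child.on('exit', () => clearTimeout(timer));
  return child;
}
```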

Iframe wrapper (`buildPdfEmbedDocument`):
- Native browser PDF viewer via `<iframe src="data:application/pdf;
  base64,...">` — works in Chrome, Edge, Safari, Firefox
- CSP locks the iframe to `default-src 'none'; frame-src data:;
  connect-src 'none'; script-src 'unsafe-inline'` — no outbound
  network, no eval, no external scripts
- `#view=FitH` for first-paint sizing
- 4-second heuristic timer that swaps to a "Preview unavailable"
  fallback when the browser's PDF viewer is disabled (kiosk mode,
  Brave Shields, etc.)

Wired into `wordDocToHtml` and `pptxToHtml` as the first branch —
returns null when disabled / unavailable / oversized so the existing
pipeline takes over. XLSX intentionally NOT routed through this
path: SheetJS's HTML output is already excellent for spreadsheets
(sortable, sticky headers) and PDF rendering of sheets is awkward.

Tests (`libreoffice.spec.ts`, 30 cases — 25 always run, 5 conditional
on the binary): env-gating parser semantics matching
`OFFICE_PREVIEW_DISABLE_CDN`, fallthrough contract (never throws,
returns null on any failure), CSP lock-down, fallback structure,
binary probe caching + missing-binary path, error tagging, and
integration tests that engage when `soffice`/`libreoffice` is on
PATH (DOCX→PDF, PPTX→PDF, output-cap fallthrough). Integration
tests skip cleanly on bare CI.

* 🩹 fix: CI — preserve legacy download path for empty-text office attachments

Two regressions surfaced after the textFormat security gate landed.

1. **Client** (`LogContent.test.tsx` "falls back to the legacy download
   branch for an office file with no extracted text"):

   When the security gate downgraded an office type without
   `textFormat: 'html'` to PLAIN_TEXT, the lenient empty-text gate on
   PLAIN_TEXT then accepted a missing `text` field and rendered a
   half-empty panel card. The historical contract is "office type +
   no text → legacy download UI"; the downgrade should only fire when
   there's actual plain text that needs safe-escaping.

   Fix in `detectArtifactTypeFromFile`: short-circuit to null when the
   office type lands in the security-gate branch with no text. The
   PLAIN_TEXT downgrade still fires for legacy attachments that DO
   carry plain text.
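
A condensed sketch of the gate (function name and return values hypothetical; the real logic lives in `detectArtifactTypeFromFile` with more branches):

```typescript
// An office type with no extracted text must return null so the legacy
// download UI renders; the PLAIN_TEXT downgrade only fires when there is
// actual plain text that needs safe-escaping.
type AttachmentLike = { type: string; text?: string; textFormat?: string };

function gateOfficeType(att: AttachmentLike): string | null {
  const isOffice = ['docx', 'xlsx', 'pptx', 'csv'].includes(att.type);
  if (!isOffice) return att.type;
  if (att.textFormat === 'html') return 'office-html'; // server-produced HTML passes
  if (!att.text) return null;                          // no text: legacy download branch
  return 'plain-text';                                 // downgrade: escape as plain text
}
```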

2. **API** (`process.spec.js` + `process-traversal.spec.js`): the
   `@librechat/api` mocks didn't expose `getExtractedTextFormat`, so
   `processCodeOutput` called `undefined(...)` → TypeError → tests got
   undefined results. Added the helper to both mocks with a faithful
   default (returns 'text' for non-null extractor output, null
   otherwise).

Tests: new regression in `artifacts.test.ts` pinning the empty-text
+ no-textFormat → null contract for all four office types
(.docx/.csv/.xlsx/.pptx), so a future refactor can't silently
re-introduce the half-empty card.

* 🩹 fix: PPTX slides scale to fit panel width (no horizontal scroll)

Manual e2e on PR #12934: pptx-preview rendered slides at their native
init dimensions (960×540 default). The artifact panel is much narrower
than that, so the iframe got a horizontal scrollbar and only a corner
of each slide showed at any time — the user had to drag-scroll across
each slide to read it.

Fix: keep pptx-preview's init at 960×540 so its internal layout math
stays correct, then post-process each rendered slide:
- Cache the slide's native width/height on its dataset BEFORE
  applying any transform (so subsequent re-fits don't measure the
  already-transformed box).
- Wrap the slide in `.lc-slide-wrap` with explicit width/height set
  inline to the scaled dimensions; the wrap shrinks the layout space
  the slide occupies.
- Apply `transform: scale(panel_width / 960)` to the slide itself
  with `transform-origin: top left` so the rendered output shrinks
  from the top-left corner into the wrap.
- Cap the scale at 1.0 so small slides don't upscale and get blurry.
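
The per-slide pass, sketched as a duck-typed helper so the dataset caching is visible (names hypothetical; the 1.0 cap shown here is dropped again in a later commit in this PR):

```typescript
// Duck-typed so the sketch stays testable outside a browser.
interface SlideLike {
  offsetWidth: number;
  offsetHeight: number;
  dataset: Record<string, string | undefined>;
  style: Record<string, string>;
}

function fitSlide(slide: SlideLike, panelWidth: number): number {
  // Cache native size BEFORE any transform so re-fits don't measure
  // the already-scaled box.
  if (!slide.dataset.nativeW) {
    slide.dataset.nativeW = String(slide.offsetWidth || 960);
    slide.dataset.nativeH = String(slide.offsetHeight || 540);
  }
  const nativeW = Number(slide.dataset.nativeW);
  const scale = Math.min(1, panelWidth / nativeW); // no upscale past native size
  slide.style.transform = `scale(${scale})`;
  slide.style.transformOrigin = 'top left';        // shrink into the wrap
  return scale;
}
```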

Streaming + resize:
- `MutationObserver` watches the container for slide insertions so
  streaming renders get scaled on arrival rather than waiting for
  the entire `previewer.preview` promise to settle.
- `ResizeObserver` re-fits all wrapped slides when the iframe
  resizes (panel drag, window resize).

Tests: new "bootstrap wraps + scales each slide" lock in the wrap
class, scale computation, observer setup, and native-size caching
so a future refactor can't silently re-introduce the overflow.

* 🩹 fix: PPTX wrap+scale runs after preview, not during streaming

Manual e2e on PR #12934: regenerated PPTX showed "Preview unavailable"
in the iframe. Root cause: the MutationObserver I added in the
previous commit fired during pptx-preview's render and moved slides
out from under the library's references. pptx-preview's async
pipeline raised an unhandled rejection, the iframe's window-level
listener caught it, and the fallback message replaced the partial
render.

Fix: drop the MutationObserver. Apply the wrap+scale ONCE in a
`finalize` step that runs:
  - On `previewer.preview().then` (the happy path)
  - On the 8-second timeout safety net IF the container has children
    (silent-failure path — pptx-preview emitted slides but never
    resolved its outer promise)

To prevent the user from seeing an unscaled flash while pptx-preview
renders into the 960px-wide canvas, the container is set to
`visibility: hidden` at init and only revealed inside `finalize`
after wrap+scale completes.

Resize handling stays via `ResizeObserver` on `document.body`,
installed AFTER the wrap pass so it doesn't fire during the wrap
itself.

Tests: regression assertion now also locks in:
  - `container.style.visibility = 'hidden' / 'visible'` (the flash-
    prevention contract)
  - Absence of MutationObserver (the bug we just removed — must NOT
    creep back in via a future "let's scale during streaming" idea)

* 🩹 fix: PPTX slides fill panel width (drop upscale cap, per-slide scale)

Manual e2e on PR #12934: slides rendered correctly but didn't fill the
artifact panel — whitespace on either side. Two issues:

1. The scale was capped at `Math.min(1, available / SLIDE_W)`. On
   panels wider than 960px, the cap clamped the scale to 1.0 and
   slides rendered at native size with whitespace on the sides
   instead of stretching.

2. The scale was computed against the constant `SLIDE_W = 960`, but
   pptx-preview can emit slides whose `offsetWidth` differs from the
   init param if the source PPTX has a non-16:9 layout. Per-slide
   division of `available / nativeW` handles that case.

Fix: replace `computeScale()` with two helpers — `availableWidth()`
returns the panel content-box width and `scaleFor(nativeW)` returns
the per-slide scale. No upscale cap. The slide content is rendered
by pptx-preview against its 960×540 canvas using vector text /
canvas — scaling up to e.g. 1500px doesn't visibly degrade quality.
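
A sketch of the two helpers (signatures assumed; in the real bootstrap `availableWidth` reads the panel's content box rather than taking a parameter):

```typescript
const SLIDE_W = 960; // pptx-preview init width

function availableWidth(contentBoxWidth: number): number {
  return contentBoxWidth; // real bootstrap: reads the panel content box
}

// Per-slide scale, no upscale cap: wide panels stretch slides past native size.
function scaleFor(nativeW: number, panelWidth: number): number {
  return availableWidth(panelWidth) / (nativeW || SLIDE_W);
}
```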

Tests: regression now also asserts:
  - `availableWidth()` and `scaleFor()` exist by name
  - The exact scale formula `availableWidth() / (nativeW || SLIDE_W)`
  - Negative assertion that `Math.min(1, ...)` is NOT present, so a
    future "let's add an upscale cap" rewrite can't silently
    re-introduce the whitespace.

* 🩹 fix: PPTX preview fills panel height (no white gap below slides)

Manual e2e on PR #12934: PPTX preview filled the panel width but left
empty space below the last slide. DOCX didn't have this issue because
its content (mammoth-rendered HTML) flows naturally and either fits
exactly or overflows; PPTX slides are fixed-aspect 16:9 and don't
grow with the panel.

Two changes:

1. **Body fills the iframe viewport** — `html, body { min-height:
   100vh }` plus `body { display: flex; flex-direction: column }`
   and `#lc-render { flex: 1 0 auto }`. The dark theme bg now fills
   the iframe even when total slide content is shorter than the
   panel, so a single-slide deck never reveals a "white below" gap.

2. **Per-slide scale honors viewport height** — `scaleFor(nativeW,
   nativeH)` now returns `min(width-fit, height-fit)` (largest
   factor that fits without overflowing either dimension). On a
   tall artifact panel with a short deck, slides grow up to the
   full panel height instead of staying at the width-bound size.
   Height-fit was always the conceptually correct behavior, but the
   previous implementation used width-fit only, leaving half the
   viewport unused per slide.

Tests: regression now also asserts `availableHeight()`, the
`Math.min(sw, sh)` formula, and `min-height: 100vh` are in the
bootstrap. Negative assertion for the old `Math.min(1, ...)` upscale
cap remains.

* 🩹 fix: revert body flex on PPTX bootstrap (caused black-screen render)

Manual e2e regression on PR #12934: the previous commit added
`body { display: flex; flex-direction: column }` plus
`#lc-render { flex: 1 0 auto }` to fill the panel height. Side effect:
pptx-preview's internal layout assumes block flow on its ancestor
elements; making body a flex container caused slides to render as
solid-black rectangles (sized correctly, but with no visible content
inside).

Fix: keep just `html, body { min-height: 100vh }` for the bg-fill
effect — that alone gives empty space below short decks the dark
theme bg without changing flow. Drop the body-flex and the
`#lc-render { flex: 1 0 auto }` directives.

The height-aware `scaleFor(nativeW, nativeH)` from the same commit
stays — it doesn't interact with pptx-preview's layout, just chooses
a per-slide scale. Each slide still grows to fit the viewport
contain-style.

Negative-assertion added to the regression test: `body { display:
flex }` must NOT appear in the bootstrap, so a future "let's flex
the body to make height work" rewrite can't silently re-introduce
this.

(Note: the user also flagged DOCX theming as faint body text; I'm
leaving that for now per their note that it may be pre-existing.
Not addressed in this commit.)

* 🩹 fix: revert PPTX height-fill changes; lock DOCX CDN to light scheme

Two fixes for separate manual e2e regressions on PR #12934.

**1. PPTX black screen (single slide rendering as solid black).**

The previous fix removed `body { display: flex }` thinking that was
the sole cause, but the regression persisted. Bisecting against the
last known-good commit (4e2d538b0, width-fit only), the actual culprit
is the COMBINATION of:
- `min-height: 100vh` on html/body
- `availableHeight()` reading viewport-derived dimensions
- `Math.min(sw, sh)` height-aware scale

pptx-preview's CSS injection step interacts unpredictably with
these. Reverting to width-only `scaleFor(nativeW)` and dropping the
viewport min-height restores reliable rendering. Vertical empty
space below short decks now shows the body's bg color (`var(--bg)`)
which still matches the panel theme — that's an acceptable trade-off
vs. the black-screen regression.

Negative assertions added: `Math.min(sw, sh)`, `availableHeight`,
`min-height: 100vh`, `body { display: flex }` must NOT appear in
the bootstrap. So a future "let's fill height" rewrite has to
demonstrate it doesn't break pptx-preview before it can land.

**2. DOCX body text rendering as faint / translucent grey.**

docx-preview emits page-style rendering with white pages and the
document's native text colors. The CDN doc declared
`color-scheme: light dark`, so on OS dark mode the iframe's
inheritable `--fg` resolved to `#e5e7eb` (light grey). docx-preview
body text (no explicit color in the source DOCX) inherited that
light-grey on the white page bg → barely-visible "translucent"
rendering.

Fix: declare `color-scheme: light` only in `buildDocxCdnDocument`,
drop the dark-mode `@media` override. docx-preview is a light-mode-
only renderer; matching that produces correct contrast regardless
of OS theme. The mammoth-only `wrapAsDocument` path is unaffected
— it owns its own bg + text colors and continues to respect the
user's OS scheme.

New regression test pins the lock: CDN doc must contain
`color-scheme: light`, must NOT contain `color-scheme: light dark`,
must NOT contain `prefers-color-scheme: dark`.

* 🩹 fix: relax connect-src to allow sourcemap fetches (silence CSP noise)

Manual e2e on PR #12934: every time DevTools is open while viewing a
DOCX or PPTX preview, the console fills with CSP violations like:

  Connecting to 'https://cdn.jsdelivr.net/npm/docx-preview@0.3.7/
  dist/docx-preview.min.js.map' violates the following Content
  Security Policy directive: "connect-src 'none'". The request has
  been blocked.

The actual rendering isn't affected (sourcemap fetches happen AFTER
the script has already loaded and executed via `script-src`), but
the noise is enough to make people suspect a real problem and
distracts from useful console output.

Fix: relax `connect-src` from `'none'` to `'self' https://cdn.
jsdelivr.net` in both DOCX and PPTX CDN docs. This allows:
  - Same-origin fetches (sandpack-static-server) — covers any
    bundler-embedded sourcemaps + same-origin runtime fetches the
    renderer might make
  - jsdelivr fetches — covers sourcemaps from the CDN where we
    loaded the script

Exfiltration risk stays minimal: the iframe is cross-origin to
LibreChat so an attacker can't read application data anyway, and
neither 'self' (sandpack-static-server) nor jsdelivr is a useful
target for exfiltrating slide content to a host the attacker
controls.

Tests updated: assertions for `connect-src 'none'` swapped to
`connect-src 'self' https://cdn.jsdelivr.net` for both DOCX + PPTX
CDN docs. Added negative assertion for wildcard `*` in connect-src
so a future "let's allow everything" rewrite can't widen the
exfiltration surface.

* 🩹 fix: surface PPTX/DOCX fallback reason (inline + console)

Manual e2e on PR #12934: "Preview unavailable" appears in the iframe
with no way to know what actually failed. The reason was tucked into
the fallback element's `title` attribute (hover-only tooltip) — easy
to miss and impossible to copy/paste.

Now surfaces three ways:
  1. Visible inline via a `<details>` element with the reason in
     monospace, folded so the friendly message stays primary but the
     diagnostic is one click away in the iframe itself.
  2. `title` attribute (preserved) for hover tooltip.
  3. `console.error('[pptx-preview] fallback fired:', reason)` so
     DevTools shows it in red — also the only reliable way to see
     the reason if the iframe is detached / re-mounted.

DOCX gets the same console mirror (as `console.warn` since the
fallback there is "high-fidelity unavailable, showing simplified
preview" — informational, not error). The DOCX fallback already
displays the mammoth-rendered content visibly, so no `<details>`
needed there.
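
A sketch of the fallback markup builder (element structure assumed beyond the `<details>` disclosure, `title` attribute, and console mirror named above):

```typescript
// Surfaces the failure reason three ways: visible <details> disclosure,
// hover tooltip via title, and (in the real bootstrap) a
// console.error('[pptx-preview] fallback fired:', reason) mirror.
function buildFallbackMarkup(reason: string): string {
  const esc = (s: string) =>
    s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;').replace(/"/g, '&quot;');
  return [
    `<div id="lc-fallback" title="${esc(reason)}">`,
    '  <p>Preview unavailable</p>',
    '  <details><summary>Details</summary>',
    `    <code>${esc(reason)}</code>`, // monospace, copy/paste-able
    '  </details>',
    '</div>',
  ].join('\n');
}
```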

Tests: regression assertions pin the diagnostic surfacing — the
`<details>` element, the `title` write, and the `console.error`
call must all be present in the bootstrap.

* 🩹 fix: PPTX CDN embeds slide-list fallback + detects empty renders

Manual e2e + DOM inspection on PR #12934: pptx-preview silently
produces empty `.pptx-preview-wrapper` placeholders for pptxgenjs-
generated decks. The library parses the file enough to create the
960×540 host element with a black bg, then fails to populate it.
The outer Promise resolves "successfully" — no throw, no rejection,
the bootstrap thinks rendering succeeded — and the user sees a black
rectangle with no content and no fallback message.

Fix mirrors the DOCX mammoth-fallback pattern from commit 0c0b0ce88:

1. **Server side**: `pptxToHtml` now renders the slide-list body
   (`<ol class="lc-pptx-list">...`) via the new `renderPptxSlidesBody`
   helper, then embeds it inside the CDN doc via the new
   `buildPptxCdnDocument(base64, slideListFallbackBody)` signature.
   Combined-doc size budget mirrors the DOCX pattern: if the CDN doc
   would exceed `OFFICE_HTML_OUTPUT_CAP` (512 KB), drop to slide-list
   only.

2. **Iframe bootstrap**: new `hasRenderedContent()` check after
   `wrapSlides()` walks each `.lc-slide-wrap` looking for actual
   child content inside pptx-preview's emitted slide nodes. If every
   wrap is empty, fires `showFallback('renderer-produced-empty-
   wrappers ...')` which reveals the embedded slide-list view
   instead of the previous static "Preview unavailable" message.

3. **CSS**: slide-list rules extracted to `PPTX_SLIDE_LIST_CSS`
   constant so they can be inlined into both the standalone slide-
   list document AND the CDN doc's `<style>` block (CSP `style-src`
   is `'unsafe-inline'` only — no external sheets).

`renderPptxSlidesHtml` now delegates to `renderPptxSlidesBody`
wrapped in `wrapAsDocument` — single source of truth for the slide
markup.
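
A duck-typed sketch of the emptiness check (interface hypothetical so it stays testable outside a browser; the real `hasRenderedContent` walks live `.lc-slide-wrap` nodes):

```typescript
// Every wrap hollow means the renderer silently produced empty slides
// and the embedded slide-list fallback should be revealed instead.
interface WrapLike {
  textContent: string | null;
  childElementCount: number;
}

function hasRenderedContent(wraps: WrapLike[]): boolean {
  if (wraps.length === 0) return false;
  return wraps.some(
    (w) => w.childElementCount > 0 && (w.textContent ?? '').trim().length > 0,
  );
}
```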

Tests (506 passing, +1 vs before): existing `pptxToHtmlViaCdn`
call sites updated for the new fallback-body argument; new
regression test pins `hasRenderedContent`, the
`renderer-produced-empty-wrappers` reason string, the embedded
fallback structure, and the inlined slide-list CSS.

* fix: Detect Empty PPTX Preview Slides

* 🩹 fix: LibreOffice PDF embed uses blob: URL (Chrome blocks data: PDFs)

Manual e2e on PR #12934: enabling `OFFICE_PREVIEW_LIBREOFFICE=true`
on a host with `soffice` installed surfaced "This page has been
blocked by Chrome" inside the PDF preview iframe.

Root cause: Chrome blocks `data:application/pdf;base64,...`
navigations inside sandboxed iframes (anti-phishing measure since
Chrome 76, see crbug.com/863001). The Sandpack iframe IS sandboxed
(its `sandbox="..."` attribute lacks `allow-top-navigation` for
data: URLs specifically), so when our inner `<iframe src="data:
application/pdf;...">` tries to navigate, Chrome's interstitial
fires and renders the "blocked" message.

Fix: switch from `data:` URL to `blob:` URL. The bootstrap now:
  1. Reads the base64 payload from a `<script type="application/
     octet-stream;base64">` data block (same pattern as the DOCX
     and PPTX wrappers).
  2. Decodes via `atob` + `Uint8Array.from`.
  3. Creates a `Blob` with `type: 'application/pdf'`.
  4. `URL.createObjectURL(blob)` produces a same-origin blob: URL.
  5. Sets `pdfFrame.src = url + '#view=FitH'` — Chrome treats blob:
     URLs as legitimate navigation and serves the built-in PDF
     viewer.

CSP updated: `frame-src blob:` (was `frame-src data:`). `data:` is
now explicitly NOT allowed in `frame-src` since Chrome would block
it anyway in our context — keeping it would be misleading
documentation.

Bonus: failure paths now log to `console.error` with a
`[libreoffice-pdf]` prefix so DevTools surfaces blob-creation
failures and PDF-viewer load timeouts in red.

Tests updated:
- "emits a complete sandboxed HTML document" now asserts the
  data-block + blob URL construction (not the old data: URL).
- New CSP test "allows blob: in frame-src (NOT data:)" with both
  positive and negative assertions to lock in the change.
- Integration test for `tryLibreOfficePreview` updated to look for
  the data block + `URL.createObjectURL` instead of the data: URL.
- Large-payload test now verifies the data block round-trip rather
  than data: URL escaping (base64 alphabet has no characters that
  break out of `<script>` anyway).

* 🩹 fix: LibreOffice PDF embed renders via pdf.js (Chrome blocks blob: PDFs too)

Manual e2e on PR #12934 round 2: switching from `data:` to `blob:`
URLs (commit d90f26c11) didn't fix the "This page has been blocked
by Chrome" interstitial. Chrome blocks BOTH data: AND blob: PDF
navigations inside sandboxed iframes — the built-in PDF viewer
requires a top-level browsing context. The Sandpack host iframe is
sandboxed, so neither approach works.

Fix: switch from native browser PDF viewer to pdf.js (Mozilla's
pdfjs-dist) loaded from CDN. pdf.js renders to `<canvas>` which
works in any context — no plugin, no privileged viewer, no
top-level requirement. ~1 MB CDN load is acceptable for a path
that's already opt-in via `OFFICE_PREVIEW_LIBREOFFICE=true`.

Implementation:
- Pin pdf.js v3.11.174 (single-file UMD; v4+ uses ES modules which
  complicate the load + SRI flow)
- Worker URL pointed at the same jsdelivr origin; CSP `worker-src
  https://cdn.jsdelivr.net blob:` allows it
- DPR-aware canvas rendering: scale based on `panelWidth /
  page.viewport.width * devicePixelRatio` so retina displays get
  crisp pixels
- Sequential page rendering (Promise chain) so a many-slide PDF
  doesn't spawn N parallel render jobs
- 15 s timeout safety net (was 4 s for the native viewer; pdf.js
  with DPR=2 on a many-page PDF can take longer)
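
A hedged sketch of the render loop against the pdf.js v3 promise API (`getDocument`, `getPage`, `getViewport`, `render`); the DPR math is factored out as a pure helper, and `pdfjs`/`container` are left untyped since this runs inside the iframe:

```typescript
// Pure helper: CSS pixels fit the panel, the canvas backing store gets
// devicePixelRatio times more pixels for crisp retina output.
function canvasScale(panelWidth: number, pageWidth: number, dpr: number): number {
  return (panelWidth / pageWidth) * dpr;
}

// Sequential rendering: one page at a time so a many-page PDF doesn't
// spawn N parallel render jobs. `pdfjs` is the UMD global from the CDN.
async function renderPdfPages(pdfjs: any, data: Uint8Array, container: any, panelWidth: number) {
  const pdf = await pdfjs.getDocument({ data }).promise;
  const dpr = (globalThis as any).devicePixelRatio || 1;
  for (let n = 1; n <= pdf.numPages; n += 1) {
    const page = await pdf.getPage(n);
    const base = page.getViewport({ scale: 1 });
    const viewport = page.getViewport({ scale: canvasScale(panelWidth, base.width, dpr) });
    const canvas = (globalThis as any).document.createElement('canvas');
    canvas.width = viewport.width;
    canvas.height = viewport.height;
    canvas.style.width = `${panelWidth}px`; // CSS size stays at panel width
    await page.render({ canvasContext: canvas.getContext('2d'), viewport }).promise;
    container.appendChild(canvas);
  }
}
```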

CSP changes:
- Added `script-src https://cdn.jsdelivr.net 'unsafe-inline'` (was
  inline-only)
- Added `worker-src https://cdn.jsdelivr.net blob:`
- Removed `frame-src` entirely (no nested iframes)
- Removed `object-src` (no `<object>`/`<embed>` either)

Same diagnostic surfacing as the other CDN paths: failure reasons
shown via `<details>` disclosure inline + `console.error` to
DevTools.

Tests updated: PDF.js script presence, GlobalWorkerOptions setup,
canvas render path, all the new failure detection paths. Negative
assertions for both `data:application/pdf` and `blob:...application
/pdf` so a future "let's just try the native viewer again" rewrite
can't silently re-introduce the Chrome block.

SRI hashes intentionally omitted (unlike docx-preview / pptx-
preview) — operator opted in by setting the env flag and trusts
the LibreOffice render pipeline. Worth adding once the path is
proven in production.

* 🧹 cleanup: trim unused _internal exports + stale JSDoc references

After the LibreOffice + pdf.js path proved out, swept the office HTML
modules for dead code and stale documentation.

**Unused `_internal` exports removed (`html.ts`):**
  - `renderMammothBody` — only called within the file (by
    `wordDocToHtmlViaMammoth` and `wordDocToHtml`), never imported by
    tests.
  - `DOCX_PREVIEW_CDN` — internal config constant, never referenced.
  - `PPTX_PREVIEW_CDN` — same, never referenced.

The remaining `_internal` surface (`wordDocToHtmlViaCdn`,
`wordDocToHtmlViaMammoth`, `pptxToHtmlViaCdn`,
`MAX_DOCX_CDN_BINARY_BYTES`, `MAX_PPTX_CDN_BINARY_BYTES`,
`OFFICE_HTML_OUTPUT_CAP`) is all actively used by the spec file.

**Stale JSDoc fixed (`libreoffice.ts`):**

Module-level header still claimed we "embed the PDF as a base64
data:application/pdf URI" and "rely on the host browser's built-in
PDF viewer". Both untrue after the pdf.js switch in commit b2cc81ad8.
Updated to:
  - Describe the actual pipeline: PPTX → soffice → PDF → pdf.js → canvas
  - Document the dead-end iterations (data: blocked, blob: also blocked,
    pdf.js works) so future readers don't re-discover the same Chrome
    PDF-viewer-in-sandboxed-iframe limitation
  - Drop "(POC)" tag — the path is production-quality, just opt-in
  - Adjust disk footprint estimate (250-350 MB with
    `--no-install-recommends` is more accurate than the 500 MB original)

No production code changes; tests still 505 passing.

* feat: per-format LibreOffice opt-in (env value accepts format list)

Manual e2e on PR #12934: enabling `OFFICE_PREVIEW_LIBREOFFICE=true`
forces both DOCX and PPTX through the LibreOffice path. DOCX renders
~instantly via docx-preview and rarely needs the LibreOffice
treatment; paying the ~2-3 s cold-start there hurts UX without
adding much.

Solution: extend the env var to accept three forms:
  - Truthy (`true`/`1`/`yes`): all formats — backwards compatible
    with the previous behavior
  - Falsy (`false`/`0`/`no`/empty/unset): no formats — default
  - Comma-separated list (`pptx`, `pptx,docx`): just those formats

Practical guidance documented in the module header: most operators
will set `OFFICE_PREVIEW_LIBREOFFICE=pptx` — pptx-preview chokes on
pptxgenjs decks and the slide-list fallback loses formatting, so
LibreOffice is the only path that produces a faithful PPTX preview.
DOCX is well-served by docx-preview's existing CDN renderer.

API:
- New `isLibreOfficeEnabledFor(format)` is the per-format gate, used
  by `tryLibreOfficePreview` to short-circuit before doing work.
- Existing `isLibreOfficeEnabled()` retained for "any format
  enabled" diagnostic checks (returns true if at least one format
  is opted in).
- Internal `parseLibreOfficeEnablement` returns `'all' | Set | null`
  — keeps the gate future-proof: adding a new format to the
  LibreOffice route doesn't require operators to re-enumerate their
  env value.

Edge cases handled:
- Whitespace-tolerant: `  pptx  ,  docx  ` works
- Case-insensitive on both env value AND format name
- Empty list entries dropped: `pptx, ,docx` enables pptx + docx
- Empty string treated as unset (not as a valid empty list)
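
The parse semantics above, sketched (bodies assumed; `parseLibreOfficeEnablement` and `isLibreOfficeEnabledFor` are the names from this commit):

```typescript
// Truthy -> 'all', falsy/empty/unset -> null, otherwise a Set of formats.
function parseLibreOfficeEnablement(raw: string | undefined): 'all' | Set<string> | null {
  const value = (raw ?? '').trim().toLowerCase();
  if (value === '' || ['false', '0', 'no'].includes(value)) return null;
  if (['true', '1', 'yes'].includes(value)) return 'all';
  // Whitespace-tolerant, case-insensitive; empty list entries dropped.
  const formats = value.split(',').map((f) => f.trim()).filter(Boolean);
  return formats.length > 0 ? new Set(formats) : null;
}

function isLibreOfficeEnabledFor(format: string, raw?: string): boolean {
  const parsed = parseLibreOfficeEnablement(raw);
  if (parsed === null) return false;
  return parsed === 'all' || parsed.has(format.toLowerCase());
}
```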

Tests: 21 new cases pinning the parse semantics + per-format gate
(`pptx` env vs `docx` lookup → false, etc.). Existing
`isLibreOfficeEnabled` tests retained but renamed to clarify the
"any format" semantic.

Total file tests: 526 passing (+21 vs before).

* 🔒 fix: officeHtmlBucket only does MIME fallback when extension is empty

Codex P2 review on PR #12934: the server's `officeHtmlBucket` falls
back to MIME whenever the extension isn't an OFFICE extension. The
client's `detectArtifactTypeFromFile` is stricter — it routes by
extension first for ANY known extension (`.txt` → PLAIN_TEXT,
`.md` → MARKDOWN, `.py` → CODE, etc.), only falling back to MIME
when the extension is unknown.

Mismatch case: `notes.txt` shipped with `Content-Type: application/
vnd.openxmlformats-officedocument.wordprocessingml.document`. Server
runs `officeHtmlBucket` → extension `.txt` not office → MIME fallback
→ 'docx' → produces full HTML, sets `textFormat: 'html'`. Client
routes by extension to PLAIN_TEXT (extension wins), markdown viewer
escapes the HTML, user sees raw `<html>...` markup instead of the
rendered preview.

Fix: server only falls back to MIME when extension is genuinely empty
(extensionless filename). Symmetric with the client's "extension
wins for any known extension" semantic — neither will mis-route.
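
A condensed sketch of the corrected predicate (extension and MIME tables abbreviated; only the rule that MIME decides solely when the extension is empty comes from this change):

```typescript
const OFFICE_EXTS = new Set(['docx', 'xlsx', 'pptx', 'csv']);
const OFFICE_MIME_BUCKETS: Record<string, string> = {
  'application/vnd.openxmlformats-officedocument.wordprocessingml.document': 'docx',
  'application/vnd.openxmlformats-officedocument.presentationml.presentation': 'pptx',
};

function officeHtmlBucket(filename: string, mime: string): string | null {
  const ext = filename.includes('.') ? filename.split('.').pop()!.toLowerCase() : '';
  if (OFFICE_EXTS.has(ext)) return ext;     // office extension routes directly
  if (ext !== '') return null;              // any other extension wins: no MIME fallback
  return OFFICE_MIME_BUCKETS[mime] ?? null; // extensionless filename: MIME may decide
}
```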

Trade-off: a true DOCX renamed to `myfile.bin` with the canonical
DOCX MIME no longer routes through office HTML on the server. The
client would have routed to the office bucket via MIME, then the
security gate (`textFormat !== 'html'`) would have downgraded to
PLAIN_TEXT anyway. So the user-visible outcome is the same (raw
bytes via PLAIN_TEXT) — the new behavior just avoids producing HTML
that the client would never use.

Long-term fix: share the extension routing table in data-provider
so both server and client query the same source of truth. Out of
scope for this PR.

Tests: new 8-case `it.each` block in `officeHtmlBucket predicate`
locks in the contract — `.txt`/`.md`/`.json`/`.py`/`.html`/`.css`
+ office MIME → null, and `.bin`/`.dat` + office MIME → null too.
Existing extension-wins tests still pass unchanged.

Total file tests: 534 (+8 vs before).
2026-05-05 12:06:10 +09:00
Danny Avila
3170bd8b22
📦 chore: bump @librechat/agents to v3.1.77 2026-05-03 23:54:46 -04:00
Danny Avila
d6d70eeb26
📦 chore: bump @librechat/agents to v3.1.76 2026-05-03 22:25:12 -04:00
Danny Avila
1b79e0b785
🧬 chore: Align LibreChat With Agents LangChain Upgrade (#12922)
* 🔧 chore: Update dependencies in package-lock.json and package.json

- Bump version of @librechat/agents to 3.1.75-dev.0 in multiple package.json files.
- Upgrade various AWS SDK and Smithy dependencies to their latest versions in package-lock.json for improved stability and performance.

* 🔧 chore: Update AWS SDK and Smithy dependencies in package-lock.json

- Bump version of @aws-sdk/client-bedrock-runtime to 3.1041.0 and update related dependencies for improved performance and stability.
- Upgrade various AWS SDK and Smithy packages to their latest versions, ensuring compatibility and enhanced functionality.

* chore: Align LibreChat with agents LangChain upgrade

- Route LangChain imports through @librechat/agents facade exports
- Update @librechat/agents to 3.1.75-dev.1 and remove direct LangChain deps
- Normalize nullable agent model params and API key override typing
- Update Google thinking config typing for newer LangChain packages
- Refresh targeted audit-related dependency overrides

* chore: Add Jest types for API specs

* test: Fix LangChain upgrade CI specs

* test: Exercise agents env facade

* fix: Clean up TS preview diagnostics

* fix: Address Codex review feedback
2026-05-03 12:46:01 -04:00
Danny Avila
f3e1201ae7
📌 fix: Stabilize Agent Prompt Cache Prefix (#12907)
* fix: stabilize agent prompt cache prefix

* chore: refresh agents sdk lockfile integrity

* test: format agent memory assertion

* test: type agent context fixtures

* fix: preserve MCP instruction precedence

* fix: reuse resolved conversation anchor

* fix: keep resumable startup immediate
2026-05-02 09:55:31 +09:00
Danny Avila
915b30c60d
📦 chore: update @librechat/agents to v3.1.74 (#12869) 2026-04-29 10:20:52 +09:00
Danny Avila
24e29aa8cb
🌱 fix: Inject Code-Tool Files Into Graph Sessions on First Call (+ read_file Sandbox Fallback) (#12831)
* 🌱 fix: Seed Code Tool Files Into Graph Sessions on First Call

Files attached to an agent's `tool_resources.execute_code` (user uploads
or generated artifacts from a prior turn) were silently dropped on the
first `execute_code` invocation of a turn. The agents-side `ToolNode`
populates `_injected_files` only when its `sessions` map already has an
`EXECUTE_CODE` entry — but that entry is only written by a previous
successful execution, so call #1 had nothing to inject. CodeExecutor
then fell back to a `/files/{session_id}` fetch, but `session_id` was
also empty on call #1, leaving the sandbox without the primed files.

Mirror the existing skill-priming pattern (`primeInvokedSkills` →
`initialSessions`) for code-resource files: eagerly call `primeFiles`
before `createRun` and merge the result into `initialSessions` via a
new `seedCodeFilesIntoSessions` helper. Skill files and code-resource
files now share the same `EXECUTE_CODE` entry; the prior representative
`session_id` is preserved on merge.

* 🔬 chore: Add Diagnostic Logging for Code-Files Seeding

Temporary debug logs to diagnose why first-call file injection is not
firing in real agent runs. Logs `wantsCodeExec`, available tool-resource
keys, primed file count, and the seeded EXECUTE_CODE entry. Will revert
once the failure mode is identified.

* 🪛 refactor: Capture primedCodeFiles per-agent at init, merge across run

Replace the client.js eager `primeFiles` call with a per-agent capture at
initialization time so every agent in a multi-agent run (primary +
handoff + addedConvo) contributes its `tool_resources.execute_code`
files to the shared `Graph.sessions` seed.

- handleTools.js (eager loadTools): the `execute_code` factory closes
  over a `primedCodeFiles` slot and surfaces it in the return.
- ToolService.js loadToolDefinitionsWrapper (event-driven): captures
  `files` from the existing `primeCodeFiles` call (was dropping them
  while only keeping `toolContext`) and surfaces them.
- packages/api initialize.ts: the loadTools callback contract now
  includes `primedCodeFiles`, threaded onto `InitializedAgent`.
- client.js: iterate `[primary, ...agentConfigs.values()]` and merge
  each agent's `primedCodeFiles` into `initialSessions`. Drop the
  primary-only `primeCodeFiles` call and diagnostic logs from the prior
  attempt — wrong layer (single-agent), wrong gate (`agent.tools`
  contained Tool instances after init, so the `.includes("execute_code")`
  string check always failed).
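A hypothetical condensation of the capture pattern the bullets describe: the tool factory closes over a slot, fills it when the tool loads, and surfaces a getter in its return instead of dropping the files (names are illustrative, not the exact production signatures):

```javascript
function makeExecuteCodeFactory(primeCodeFiles) {
  let primedCodeFiles = [];
  return {
    async loadTool(agent) {
      const { files, toolContext } = await primeCodeFiles(agent);
      primedCodeFiles = files ?? []; // captured instead of dropped
      return toolContext;
    },
    // surfaced so the caller can merge each agent's files into initialSessions
    getPrimedCodeFiles: () => primedCodeFiles,
  };
}
```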

* 🔬 chore: Add per-agent diagnostic logs for code-files seeding

Logs `tool_resources` keys + file counts inside loadToolDefinitionsWrapper
and per-agent `primedCodeFiles` + final initialSessions inside
AgentClient. Will revert once the failure mode is confirmed.

* 🔬 chore: Add file-lookup diagnostics inside initializeAgent

Logs the inputs and intermediate counts of the conversation-file lookup
chain (convo file ids, thread message ids, code-generated and
user-code file counts) so we can pinpoint why `tool_resources.execute_code`
is arriving empty at `loadToolDefinitionsWrapper` despite the agent
having `execute_code` in its tools list.

* 🔬 chore: Probe execute_code files without messageId filter

Adds a relaxed `getFiles({conversationId, context: execute_code})` probe
that runs only when `getCodeGeneratedFiles` returns empty. Lists what's
actually in the DB for this conversation so we can confirm whether the
file is missing entirely or whether the messageId filter is rejecting it.

* 🔬 chore: Fix probe getFiles arg order (sort vs projection)

Probe was passing a projection object as the sort arg, which mongoose
rejected with `Invalid sort value`. Move it to the third arg
(selectFields) so the probe actually runs.

* 🪢 fix: Preserve Original messageId on Code-Output File Update

Each `processCodeOutput` call was overwriting the persisted file's
`messageId` with the *current* run's id. When a turn re-creates an
existing file (filename + conversationId match → `claimCodeFile`
returns the existing record, `isUpdate=true`), the file's link to
the assistant message that originally produced it gets clobbered.

`initializeAgent` later runs `getCodeGeneratedFiles({messageId: $in: <thread>})`
to seed `tool_resources.execute_code` from prior-turn artifacts. With a
stale `messageId` (e.g. from a failed read attempt that re-shelled the
same filename), the file no longer matches the parent-walk thread, so
`tool_resources` arrives empty at agent init, the new
`primedCodeFiles` channel has nothing to seed, and the LLM can't see
its own prior-turn artifacts on the next turn — defeating the
just-added Graph-sessions seeding fix.

Preserve the existing `claimed.messageId` on update; first-creation
behavior is unchanged. The runtime return value still includes the
current run's `messageId` (via `Object.assign(file, { messageId })`)
so the artifact is correctly attributed to the live tool_call.
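The preservation rule reduces to one ternary; a hedged sketch of the contract (the helper name and argument shape are illustrative, only the ternary mirrors the commit):

```javascript
function buildCodeOutputFile({ claimed, isUpdate, messageId, toolCallId, base }) {
  // persisted record: on update, keep the original producing-message link so
  // getCodeGeneratedFiles can still match the file on later turns
  const persistedMessageId = isUpdate ? (claimed?.messageId ?? messageId) : messageId;
  const record = { ...base, messageId: persistedMessageId };
  // runtime value: attribute the artifact to the live tool call
  const runtime = Object.assign({ ...record }, { messageId, toolCallId });
  return { record, runtime };
}
```

The `??` fallback also covers legacy records that predate the `messageId` field.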

* 🧹 chore: Remove diagnostic logs from code-files seeding path

Drops the temporary debug logs added to trace the empty-tool_resources
failure mode. Production code paths (loadToolDefinitionsWrapper,
client.js seed loop, initializeAgent file lookup) are left as the
permanent shape: capture primedCodeFiles, merge across agents, seed
initialSessions before run start.

* 🪛 feat: read_file Sandbox Fallback for /mnt/data + Non-Skill Paths

When the model called `read_file` with a code-execution path (e.g.
`/mnt/data/sentinel.txt`), the handler returned a misleading
`Use format: {skillName}/{path}` error. Adds a sandbox-aware fallback:

- Short-circuit `/mnt/data/...` (can never be a skill reference) →
  route to a sandbox `cat` via the new host-provided `readSandboxFile`
  callback, which POSTs to the codeapi `/exec` endpoint.
- Skip the skill resolver entirely when `accessibleSkillIds` is empty
  — the resolved output of `resolveAgentScopedSkillIds` already
  collapses the admin capability + ephemeral badge + persisted
  `skills_enabled` chain, so an empty value is the authoritative
  "skills aren't in scope for this agent" signal.
- For `{firstSegment}/...` paths, consult the catalog-derived
  `activeSkillNames` Set (no DB read) to detect non-skill names and
  fall through to the sandbox before the model has to retry with
  `bash_tool`.

`activeSkillNames` is captured from `injectSkillCatalog`, threaded onto
`InitializedAgent`, into `agentToolContexts`, then through
`enrichWithSkillConfigurable` into `mergedConfigurable` for the handler.

The host implementation of `readSandboxFile` lives in
`api/server/services/Files/Code/process.js` and shells `cat <path>`
through the seeded sandbox session — `tc.codeSessionContext`
(emitted by ToolNode for `read_file` calls in `@librechat/agents`
v3.1.72+) provides the `session_id` + `_injected_files` so the read
lands in the same sandbox that holds prior-turn artifacts. When the
seeded context isn't available (older agents version, no codeapi
configured), the handler returns a model-visible error pointing at
`bash_tool` instead of silently failing.
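The routing rules above can be condensed into a decision function; this is a synchronous sketch for illustration only (the real handler in `handlers.ts` does async skill resolution and returns model-visible text, not tags):

```javascript
function routeReadFile(path, { accessibleSkillIds, activeSkillNames, codeEnvAvailable }) {
  const fallback = codeEnvAvailable ? 'sandbox' : 'bash_tool_hint';
  if (path.startsWith('/mnt/data/')) {
    return fallback; // can never be a skill reference
  }
  if (accessibleSkillIds.length === 0) {
    return fallback; // skills aren't in scope for this agent
  }
  const firstSegment = path.split('/')[0];
  if (!activeSkillNames.has(firstSegment)) {
    return fallback; // non-skill name: don't force a bash_tool retry
  }
  return 'skill'; // resolve {skillName}/{path} via the skill resolver
}
```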

Tests: 8 new `handleReadFileCall` cases cover the new short-circuits,
the skills-not-enabled gate, the activeSkillNames lookup, the
sandbox-fallback success path, and the bash_tool retry hint on
fallback failure. Existing `read_file` tests now opt into "skills are
in scope" via a `skillsInScope()` fixture (production wouldn't reach
the skill lookup with empty `accessibleSkillIds`).

* 🔧 chore: Update @librechat/agents dependency to version 3.1.72

Bumps the version of the @librechat/agents package across package-lock.json and relevant package.json files to ensure compatibility with the latest features and fixes.

* 🪛 refactor: Centralize Tool-Session Seed in buildInitialToolSessions Helper

Addresses review feedback on the per-agent merge in client.js:

- **Run-wide semantics, named explicitly.** The merge into a single
  `Graph.sessions[EXECUTE_CODE]` was a deliberate match to the
  agents-library design (`Graph.sessions` is shared across every
  `ToolNode` in the run), but the inline `for (const a of agents)`
  loop in `AgentClient.chatCompletion` made it look per-agent. Move
  the logic to a TS helper `buildInitialToolSessions` that documents
  the run-wide-by-design contract in one place. The CJS controller
  now contains a single call site, no business logic.

- **Subagent walk (P2).** The previous loop only iterated
  `[primary, ...agentConfigs.values()]`. Pure subagents are pruned
  out of `agentConfigs` after init and retained on each parent's
  `subagentAgentConfigs`, so their primed code files were silently
  dropped from the seed. The helper now walks recursively, with a
  visited-Set keyed on object identity that terminates safely on a
  malformed agent graph (cycle).

- **`jest.setup.cjs` polyfill for undici `File`.** Reviewer hit
  `ReferenceError: File is not defined` running the targeted spec on
  WSL — a known Node 18 issue where `globalThis.File` from
  `node:buffer` isn't auto-exposed. Polyfill it inside a Jest setup
  file so the suite boots regardless of Node patch version.

Helper test coverage (8 new): skill-only / agent-only / both,
recursive subagent walk, cycle-safe walk, primary+subagent
deduplication, undefined/null entries in the agents iterable, and
representative session_id preservation across the merge.

16 tests pass total in `codeFilesSession.spec.ts` (8 prior + 8 new).
No behavior change vs. the previous commit for the existing
primary+agentConfigs case — subagent inclusion is the only new
behavior, and it matches what the existing seeding logic would have
done if subagents had been in `agentConfigs`.

* 🪛 fix: FIFO Walk Order in buildInitialToolSessions (P3 review)

The traversal used `Array.pop()` (LIFO), which visited the LAST
top-level agent first. The docstring says "primary first"; the code
contradicted it. When no skill seed exists the first-visited agent's
first file supplies the representative `session_id` written to
`Graph.sessions[EXECUTE_CODE]` — so a LIFO walk silently flipped which
agent it came from. `ToolNode` ultimately uses per-file `session_id`s
for runtime injection (so behavior was indistinguishable for current
callers), but the discrepancy was a footgun for any future consumer
that read the representative.

Switch to FIFO via `Array.shift()` to match both the docstring and the
existing `loadSubagentsFor` walk pattern in
`Endpoints/agents/initialize.js`. Add a regression test that asserts
the primary's `session_id` is the representative (and that all three
agents' files still contribute, with per-file `session_id`s preserved).
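A sketch of the FIFO, cycle-safe traversal the two commits above converge on (hypothetical shape; the real helper also merges skill seeds and dedupes):

```javascript
function collectPrimedCodeFiles(agents) {
  const queue = [...agents];
  const visited = new Set(); // keyed on object identity, per the commit
  const files = [];
  while (queue.length > 0) {
    const agent = queue.shift(); // FIFO: primary-first, matching the docstring
    if (!agent || visited.has(agent)) {
      continue; // tolerates null entries and malformed (cyclic) agent graphs
    }
    visited.add(agent);
    files.push(...(agent.primedCodeFiles ?? []));
    if (agent.subagentAgentConfigs) {
      queue.push(...agent.subagentAgentConfigs.values());
    }
  }
  return files;
}
```

With FIFO order, the first-visited (primary) agent's first file supplies the representative `session_id` when no skill seed exists.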

* 🔬 test: Lock In Code-Files Bug Fixes Per Comprehensive Review

Addresses MAJOR + MINOR + NIT findings from the multi-pass review:

**Finding #4 (MINOR) — empty relativePath misses sandbox fallback.**
A model calling `read_file("output/")` where "output" isn't a skill
name dead-ended with `Missing file path after skill name` instead of
being routed to the sandbox like every other malformed-path branch.
Add the same `codeEnvAvailable → handleSandboxFileFallback` pattern,
plus two regression tests.

**Finding #7 (NIT) — duplicate `skillsInScope()` helper.**
Hoist the identical helper out of two nested describe blocks to
module scope. Single source of truth.

**Finding #1 (MAJOR) — `persistedMessageId` had zero test coverage.**
The fix preserves a file's original `messageId` on update so
`getCodeGeneratedFiles` can still match it on subsequent turns. A
regression in the `isUpdate ? (claimed.messageId ?? messageId) : messageId`
ternary would silently re-introduce the original cross-turn priming
bug. Five new tests cover:
- UPDATE preserves `claimed.messageId` in the persisted record
- UPDATE falls back to current run id when `claimed.messageId` is
  absent (legacy records predating the field)
- CREATE uses current run id (no claimed record exists)
- The runtime return value uses the LIVE id (artifact attribution)
  even when the persisted record kept the original
- The image branch follows the same contract (would silently regress
  if the ternary diverged across the two file-build branches)

The tests use a `snapshotCreateFileArgs()` helper because
`processCodeOutput` mutates the file object after `createFile`
returns (`Object.assign(file, { messageId, toolCallId })`) and a
naive `createFile.mock.calls[0][0]` would reflect the post-mutation
state instead of what was actually persisted.

**Finding #2 (MAJOR) — `readSandboxFile` had no direct tests.**
The model-controlled `file_path` flows through a POSIX single-quote
escape into a shell `cat` command, making this a security boundary.
A quoting regression would let a malicious filename break out of the
quoted argument and inject arbitrary shell. 20 new tests across:
- Shell quoting (7): plain filenames, embedded `'`, `$()`, backticks,
  newlines, shell metachars, multiple consecutive single-quotes
- Payload shape (6): /exec URL, bash language, conditional
  session_id / files inclusion, dedicated keepAlive:false agents
- Response handling (6): `{content}` on success, null on missing
  base URL or absent stdout, throws on stderr-only, partial-success
  returns stdout, transport errors are logged then rethrown
- Timeout (1): matches processCodeOutput's 15s SLA
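The security boundary under test is standard POSIX single-quote escaping: close the quote, emit an escaped quote, reopen. A minimal sketch (an assumption about the real helper's shape, not its actual code):

```javascript
// Inside single quotes, the shell interprets nothing: no $(), no backticks,
// no variable expansion. The only character needing treatment is ' itself.
function shQuote(arg) {
  return `'${String(arg).replace(/'/g, `'\\''`)}'`;
}

function buildCatCommand(filePath) {
  return `cat ${shQuote(filePath)}`;
}
```

A filename like `a'b.txt` becomes `'a'\''b.txt'`: the embedded quote ends the first quoted run, is emitted as a backslash-escaped literal, and a new quoted run begins.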

Audited findings #5 (acknowledged tech debt — readSandboxFile in JS
workspace), #6 (pre-existing positional-args debt on
enrichWithSkillConfigurable), and #8 (cosmetic JSDoc style) — no
action taken per the reviewer's own assessment.

Audited finding #3 (walk order vs docstring) — already addressed in
commit 007f32341 which converted to FIFO via `queue.shift()` plus a
regression test. The audit was performed against an earlier PR head.

Tests: 152 packages/api + 195 api JS = 347 pass. Typecheck clean.

* 🪛 fix: Pure-Subagent codeEnv + Primed-Skill Routing + ToolService Early Returns

Three findings from the second-pass review:

**P2 — Pure subagents missed `codeEnvAvailable`** (initialize.js).
The pure-subagent init path didn't forward the endpoint-level
`codeEnvAvailable` flag to `initializeAgent`, unlike the primary,
handoff, and addedConvo paths. A code-enabled subagent loaded only
through `subagentAgentConfigs` initialized with
`codeEnvAvailable: false`, so even though the recursive seed walk
found its primed code files, the subagent's own `bash_tool` /
`read_file` sandbox fallback were silently gated off. Forward the
flag and add `codeEnvAvailable: config.codeEnvAvailable` to the
`agentToolContexts.set` for symmetry with the other paths.

**P2 — Primed skills outside the catalog cap were misrouted to
sandbox** (handlers.ts). Manual ($-popover) and always-apply primes
are intentionally resolved off the wider `accessibleSkillIds` ACL
set BEFORE catalog injection — see `resolveManualSkills` for why a
skill outside the `SKILL_CATALOG_LIMIT` cap can still be authorized
for direct manual invocation. The `activeSkillNames` shortcut ran
before reading `skillPrimedIdsByName`, so a primed skill not in the
catalog would fall through to the sandbox instead of resolving via
the pinned `_id`. Read the primed map first and bypass the shortcut
for primed names. New regression test asserts a primed-but-not-
cataloged skill resolves through the existing skill path with
`getSkillByName` invoked and `readSandboxFile` NOT called.

**P3 — `loadAgentTools` early returns dropped `primedCodeFiles`**
(ToolService.js). The non-`definitionsOnly` path captures the field
correctly, but two early-return branches (no-action-tools fast path,
no-action-sets fast path) omitted it. Any traditional
`loadAgentTools(..., definitionsOnly: false)` caller using
execute_code without action tools would have its first-call session
seed silently empty. Add `primedCodeFiles` to both early returns
for consistency with the final return shape.

Tests: 153 packages/api + 195 api JS = 348 pass.

* 🧹 chore: Document jest.mock arrow-indirection pattern in process.spec.js

Per the second-pass review's Finding #2 (NIT, "would help future
readers"): the mock setup mixes direct `jest.fn()` references with
arrow-function indirection (`(...args) => mockX(...args)`). The
indirection isn't stylistic — it's required because `jest.mock(...)`
is hoisted above the outer `const` declarations at parse time, so a
direct reference would capture `undefined`. Inline comment explains
the pattern so the next reader doesn't have to reverse-engineer it
or accidentally "simplify" the mocks and break per-test
`mockReturnValueOnce` / `mockImplementationOnce` overrides.
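The capture-vs-deferral distinction can be shown without Jest at all; a pure-JS illustration of why the arrow indirection matters (variable names are made up):

```javascript
// Simulates the hoisting hazard: jest.mock factories run before the outer
// `const mockGetFiles = jest.fn()` declarations are initialized.
let mockGetFiles; // still undefined at "factory" time

const directCapture = mockGetFiles; // snapshot taken now: undefined forever
const arrowIndirection = (...args) => mockGetFiles(...args); // lookup at call time

// Later (after "hoisted" code ran), the real mock is assigned:
mockGetFiles = (id) => [`file-for-${id}`];
```

The direct reference is permanently `undefined`, while the arrow resolves the current value on every call, which is what lets per-test `mockReturnValueOnce` overrides work.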

* 🪛 fix: Five Issues from Pass-N + Codex Review (incl. 404 root cause)

Five real bugs surfaced by another review pass + Codex PR comments
+ the codeapi-side logs we collected during manual testing:

**1) `processCodeOutput` 404 root cause (`callbacks.js`).**
The codeapi worker emits TWO distinct `session_id`s on a tool result:
  - `artifact.session_id` is the EXEC session — the sandbox VM that
    ran the bash command. Files don't live there; it's torn down
    post-execution.
  - `file.session_id` is the STORAGE session — the file-server
    bucket prefix where artifacts actually live.
`callbacks.js` was passing the EXEC id to `processCodeOutput`, which
builds `/download/{session_id}/{id}` and 404s because the file-server
doesn't know about that path. This explains every "Error
downloading/processing code environment file" we saw during testing.
Use `file.session_id ?? output.artifact.session_id` (per-file id with
artifact-level fallback for older worker payloads).

**2) `primeFiles` reupload pushed STALE sandbox ids (`process.js`).**
When `getSessionInfo` returns null (file expired/missing in sandbox),
`reuploadFile` re-uploads via `handleFileUpload`, gets a NEW
`fileIdentifier`, and persists it on the DB record. But `pushFile`
was a closure capturing the OLD `(session_id, id)` parsed at the top
of the loop, so the in-memory `files[]` array (now consumed by
`buildInitialToolSessions` to seed `Graph.sessions`) silently
referenced a sandbox object that no longer existed. The first tool
call would 404 trying to mount it; only the next turn's metadata
re-read would correct course. Parameterize `pushFile` with optional
`(session_id, id)` overrides; in `reuploadFile` parse the new
identifier and pass through. 2 regression tests.

**3) Codex P2 — Cap sandbox fallback output before line-numbering
(`handlers.ts`).** The new `handleSandboxFileFallback` returned
`addLineNumbers(result.content)` without a size guard, so reading a
multi-MB `/mnt/data/*` artifact materialized the file twice in
memory (raw + line-numbered) before downstream truncation. Match the
existing skill-file path's `MAX_READABLE_BYTES` (256KB): truncate
raw first, then number, surface the truncation to the model so it
can use `bash_tool` (`head` / `tail`) for the rest. 2 tests
(oversized truncates with hint, in-cap doesn't).

**4) Codex P2 — Dedupe seeded code files by `(session_id, id)`
(`codeFilesSession.ts`).** Multiple agents in a run commonly carry
the same primed execute-code resources (shared conversation files);
without dedupe, `_injected_files` grows proportionally to agent
count and bloats every `/exec` POST. Use a `(session_id, id)`
identity key so first-seen wins (preserves source ordering); name
alone isn't sufficient because two distinct primed uploads can
share a filename across different sessions. 4 tests covering dedup
across iterations, against pre-existing entries, name-collision
distinct-session preservation, and the multi-agent realistic case
in `buildInitialToolSessions`.
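A sketch of the dedupe contract (illustrative helper; the real logic sits inside the seed merge in `codeFilesSession.ts`):

```javascript
// First-seen wins, keyed on (session_id, id) identity. Name alone is not a
// safe key: two distinct primed uploads can share a filename across sessions.
function dedupeSeededFiles(existing, incoming) {
  const keyOf = (f) => `${f.session_id}\u0000${f.id}`;
  const seen = new Set(existing.map(keyOf));
  const out = [...existing];
  for (const f of incoming) {
    const key = keyOf(f);
    if (seen.has(key)) continue;
    seen.add(key);
    out.push(f);
  }
  return out;
}
```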

**5) Pass-N P2 — Polyfill `globalThis.File` in api Jest setup
(`api/test/jestSetup.js`).** `packages/api/jest.setup.cjs` had the
polyfill; the legacy api workspace's Jest config has its own
`setupFiles` that didn't, so on Node 18 / WSL the api focused tests
still failed at import time with `ReferenceError: File is not
defined` from undici. Mirror the polyfill.

Tests: 159 packages/api + 206 api JS = 365 pass. Typecheck clean.

* 🔧 chore: Update @librechat/agents dependency to version 3.1.73

Bumps the version of the @librechat/agents package across package-lock.json and relevant package.json files to ensure compatibility with the latest features and fixes.
2026-04-27 08:56:39 +09:00
Danny Avila
f7e47f6012
🪢 feat: Enable Tool-Output References for Bash Tool (#12830)
* chore: Update `@librechat/agents` to v3.1.71-dev.0 across package-lock and package.json files

This commit updates the version of the `@librechat/agents` package from `3.1.70` to `3.1.71-dev.0` in the `package-lock.json` and relevant `package.json` files. Additionally, it marks several dependencies as peer dependencies, ensuring better compatibility and integration across the project.

* 🔗 feat: Enable Tool-Output References for bash_tool when codeenv is on

Wires `@librechat/agents`' `RunConfig.toolOutputReferences` into
`createRun()` and the bash tool's LLM-facing description, gated by the
per-agent `effectiveCodeEnvAvailable` flag. The feature auto-activates
for any run where the bash tool is actually registered; SDK defaults
(~400 KB per output, 5 MB total) match the shell-safe budget. No new
env var or yaml capability — piggybacks on the existing `execute_code`
gate.

- `tools.ts`: replace the module-level `BASH_TOOL_DEF` constant with a
  per-call `buildBashToolDef` that wraps `buildBashExecutionToolDescription`.
  Description now includes the `{{tool<idx>turn<turn>}}` reference syntax
  guide iff the new `enableToolOutputReferences` param is true.
- `initialize.ts`: pass `enableToolOutputReferences: effectiveCodeEnvAvailable`
  into `registerCodeExecutionTools`.
- `run.ts`: add `codeEnvAvailable?: boolean` to `RunAgent`, compute the
  flag from `agents[*].codeEnvAvailable`, and conditionally spread
  `toolOutputReferences: { enabled: true }` into `Run.create`.

* 🧪 test: Cover tool-output references gating end-to-end

- `tools.spec.ts`: 3 new cases asserting `bash_tool.description`
  contains `{{tool<idx>turn<turn>}}` iff `enableToolOutputReferences` is
  true (and unset → false).
- `run-summarization.test.ts`: 4 new cases asserting `Run.create` is
  invoked with `toolOutputReferences: { enabled: true }` iff at least
  one `RunAgent.codeEnvAvailable === true`. Covers the present /
  absent / unset / multi-agent-OR cases.
- `initialize.test.ts` + `skills.test.ts`: extend the existing
  `@librechat/agents` jest mocks with a `buildBashExecutionToolDescription`
  stub so suites stay green when the on-disk SDK lags the published
  3.1.71-dev.0 export.

* chore: Update `@librechat/agents` version to `3.1.71-dev.1` in package-lock and package.json files

This commit updates the version of the `@librechat/agents` package from `3.1.71-dev.0` to `3.1.71-dev.1` across the relevant package files. This change ensures consistency and incorporates any updates or fixes from the new version.

* 🪢 fix: Walk Subagents in toolOutputReferences run-level gate

Codex P2 review on PR #12830: the run-level
`enableToolOutputReferences` flag only inspected the top-level
`agents` array. A parent agent without `execute_code` that spawns a
subagent that *does* have it left the SDK's tool-output reference
registry inactive for the run, so the subagent's `bash_tool` calls
saw `{{tool<idx>turn<turn>}}` placeholders pass through to the
shell unsubstituted.

Replace `agents.some(a => a.codeEnvAvailable === true)` with a
recursive `anyAgentHasCodeEnv` helper that walks
`subagentAgentConfigs` transitively. Cycle-safe via a `visited` set,
mirroring the existing `buildSubagentConfigs.ancestors` pattern in
the same module. The bash tool *description* stays per-agent in
`initializeAgent` (only agents with bash actually registered learn
the `{{…}}` syntax), so broadening the run-level gate doesn't
broaden the model-facing surface — it just lets the SDK's shared
registry serve every `ToolNode` the run compiles, which is exactly
the contract the SDK already implements.

Tests cover three new cases: parent-off / subagent-on, parent-off /
child-off / grandchild-on (transitive descent past one level), and
a cyclic A↔B tree with neither codeenv-enabled (asserts both
termination and absence of `toolOutputReferences`). Existing
single-agent and multi-agent tests stay valid since the new helper
returns `true` whenever the previous `.some(...)` did.
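A sketch of the recursive gate under the same assumptions as the commit (cycle safety via a `visited` set on object identity; field names per the text, surrounding code elided):

```javascript
function anyAgentHasCodeEnv(agents, visited = new Set()) {
  for (const agent of agents) {
    if (!agent || visited.has(agent)) continue; // cycle-safe termination
    visited.add(agent);
    if (agent.codeEnvAvailable === true) return true;
    const subs = agent.subagentAgentConfigs
      ? [...agent.subagentAgentConfigs.values()]
      : [];
    if (anyAgentHasCodeEnv(subs, visited)) return true; // transitive descent
  }
  return false;
}
```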

* chore: Update `@librechat/agents` version to `3.1.71` in package-lock and package.json files

This commit updates the version of the `@librechat/agents` package from `3.1.71-dev.1` to `3.1.71` across the relevant package files. This change ensures consistency and incorporates any updates or fixes from the stable release.

* review: address audit findings on tool-output references PR

Two findings from comprehensive PR review on #12830:

#1 (MINOR) — `injectSkillCatalog` omitted `enableToolOutputReferences`
when calling `registerCodeExecutionTools`, so its resulting
`bash_tool` description always lacked the `{{tool<idx>turn<turn>}}`
guide. Today this is a no-op because `initializeAgent` registers
first and the registry `.has()` check makes the skills-path call a
dedupe-only operation. But if call order ever flips (skills-first),
the missing flag would silently ship a `bash_tool` without the
syntax guide, and the `initializeAgent` pass would itself become
the no-op — the feature would silently break with no visible error.
Forward `enableToolOutputReferences: codeEnvAvailable === true` so
both call sites produce identical tool definitions regardless of
firing order. Defense-in-depth, not a current bug. Added a test in
`skills.test.ts` that asserts the bash description contains the
`{{tool<idx>turn<turn>}}` marker when `codeEnvAvailable` is on,
exercising the skills caller end-to-end.

#2 (NIT) — `buildBashToolDef` allocated + froze a fresh object on
every agent init. Replaced with two module-level frozen singletons
(`BASH_TOOL_DEF_WITH_OUTPUT_REFS`, `BASH_TOOL_DEF_WITHOUT_OUTPUT_REFS`)
built once at module load via a `createBashToolDef` helper. The
factory now picks the right cached reference instead of building.
Restores the no-allocation intent of the original `BASH_TOOL_DEF`
constant while keeping the per-agent gate behavior. Two new tests
in `tools.spec.ts` pin the contract: identical-flag calls return
reference-equal `bash_tool` defs across registries; opposite-flag
calls return distinct frozen objects with the expected description
content.
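The cached-singleton pattern in #2 reduces to building both variants once at module load and returning the matching frozen reference per call; a sketch with placeholder descriptions (the real ones come from `buildBashExecutionToolDescription`):

```javascript
function createBashToolDef(withOutputRefs) {
  return Object.freeze({
    name: 'bash_tool',
    description: withOutputRefs
      ? 'Run bash. Supports {{tool<idx>turn<turn>}} output references.'
      : 'Run bash.',
  });
}

// Built exactly once at module load: no per-agent-init allocation.
const BASH_TOOL_DEF_WITH_OUTPUT_REFS = createBashToolDef(true);
const BASH_TOOL_DEF_WITHOUT_OUTPUT_REFS = createBashToolDef(false);

function buildBashToolDef(enableToolOutputReferences) {
  return enableToolOutputReferences
    ? BASH_TOOL_DEF_WITH_OUTPUT_REFS
    : BASH_TOOL_DEF_WITHOUT_OUTPUT_REFS;
}
```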
2026-04-26 02:06:23 -07:00
Danny Avila
7f3d41024a 📦 chore: Update @librechat/agents to v3.1.70
This commit updates the version of the `@librechat/agents` package from `3.1.68-dev.1` to `3.1.70` in the `package-lock.json` and relevant `package.json` files. This change ensures consistency across the project and incorporates any updates or fixes from the new version.
2026-04-25 04:02:01 -04:00
Danny Avila
35bf04b26c 🧰 refactor: Unify code-execution tools (#12767)
* 🛠️ feat: Add registerCodeExecutionTools helper

Idempotently registers `bash_tool` + `read_file` in the run's tool
registry and tool-definition list via a registry `.has()` dedupe. Sets
up the single code-execution tool path shared by:

- `initializeAgent` (when an agent has `execute_code` in its tools and
  the capability is enabled for the run)
- `injectSkillCatalog` (when skills are active; unconditional read_file,
  bash_tool follows `codeEnvAvailable`)

Both callers reach the helper in the same initialization sequence, so
the second call becomes a no-op and exactly one copy of each tool
reaches the LLM — no more double registration for agents that combine
`execute_code` capability with active skills.

Unit-tested on a fresh run, idempotence (second call, overlap with
prior tooldefs, partial overlap), and the no-registry variant.
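The idempotence mechanism is a registry `.has()` gate; a minimal sketch of the contract (simplified signature, real helper takes richer params):

```javascript
function registerCodeExecutionTools({ toolRegistry, toolDefinitions }, defs) {
  for (const def of defs) {
    if (toolRegistry.has(def.name)) continue; // second caller becomes a no-op
    toolRegistry.set(def.name, def);
    toolDefinitions.push(def);
  }
}
```

Whichever caller reaches the helper first (initializeAgent or injectSkillCatalog) wins; the other's call dedupes away, so exactly one copy of each tool reaches the LLM.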

* 🔀 refactor: Route injectSkillCatalog bash_tool + read_file through registerCodeExecutionTools

The `skill` tool is still registered inline (it's skill-path-specific),
but `bash_tool` + `read_file` now flow through the shared idempotent
helper so a prior registration from the execute_code path doesn't
produce a duplicate copy later in the same run. Behavior preserved:

- `read_file` always registers when any active skill is in scope —
  manually-primed `disable-model-invocation: true` skills still need it
  to load `references/*` from storage.
- `bash_tool` follows `codeEnvAvailable` exactly as before.

Adds a test pinning the cross-call dedupe: when `injectSkillCatalog`
runs AFTER `registerCodeExecutionTools` has already seeded the registry
+ tool definitions with bash_tool/read_file, the resulting
`toolDefinitions` still contains exactly one copy of each.

* 🪄 feat: Expand `execute_code` tool name into bash_tool + read_file at initialize-time

When an agent's `tools` include `execute_code` and the `execute_code`
capability is enabled for the run, `initializeAgent` now registers
`bash_tool` + `read_file` via `registerCodeExecutionTools` before
`injectSkillCatalog`. The legacy `execute_code` tool definition is no
longer handed to the LLM — `execute_code` remains on the agent
document as a capability-trigger marker, but the runtime expands it
into the skill-flavored tool pair.

Call ordering matters: the `execute_code` registration runs BEFORE
`injectSkillCatalog`, so the skill path's own `registerCodeExecutionTools`
call inside `injectSkillCatalog` becomes a no-op via the registry's
`.has()` check. Exactly one copy of each tool reaches the LLM whether
the agent has:

- only `execute_code` (legacy path)
- only skills
- both

No data migration needed — `agent.tools: ['execute_code']` stays in
the DB unchanged; the expansion is a runtime operation.

Three tests cover the matrix: execute_code + capability on →
bash_tool + read_file registered; execute_code + capability off →
neither registered; no execute_code + capability on → neither
registered.

* 🗑️ refactor: Drop CodeExecutionToolDefinition from the builtin registry

Removes the legacy `execute_code` entry from `agentToolDefinitions` and
the corresponding import. With the initialize-time expansion in place,
nothing consults `getToolDefinition('execute_code')` for a tool schema
any more — the capability gate still filters on the string
`execute_code`, but the actual tool definitions the LLM sees come from
`registerCodeExecutionTools` (i.e. `bash_tool` + `read_file`).

`loadToolDefinitions` in `packages/api/src/tools/definitions.ts`
silently drops `execute_code` when it no longer resolves in the
registry — that's the expected path and is now covered by an updated
test. No caller of `getToolDefinition('execute_code')` expects a
non-undefined result after this change.

* 🔌 refactor: Read CODE_API_KEY from env for primeCodeFiles + PTC

Finishes the Phase 4 server-env-keyed rollout on the two remaining
`loadAuthValues({ authFields: [EnvVar.CODE_API_KEY] })` sites in
`ToolService.js`:

- `primeCodeFiles` (user-attached file priming on execute_code agents)
- Programmatic Tool Calling (`createProgrammaticToolCallingTool`)

Both now read `process.env[EnvVar.CODE_API_KEY]` directly, matching
`bash_tool`'s pattern. The per-user plugin-auth path is no longer
consulted for code-env credentials anywhere in the hot path — the
agents library owns the actual tool-call execution and also reads the
env var internally.

Priming still fires for existing user-file workflows so the legacy
`toolContextMap[execute_code]` hint ("files available at /mnt/data/...")
stays in the prompt; only the key lookup changed.

* 🔧 fix: Type the pre-seeded dedupe-test tools as LCTool

CI TypeScript type checks caught `{ parameters: {} }` in the new
cross-call dedupe test: `LCTool.parameters` is a `JsonSchemaType`,
not `{}`. Use `{ type: 'object', properties: {} }` and type the
local registry Map through the parameter-derived shape so the
pre-seeded values match what `toolRegistry.set` expects.

* 🛡️ fix: Run execute_code expansion before GOOGLE_TOOL_CONFLICT gate

Codex review caught a latent regression: the original Phase 8 placement
ran `registerCodeExecutionTools` after `hasAgentTools` was computed,
so an execute-code-only agent on Google/Vertex with provider-specific
`options.tools` populated would no longer trip `GOOGLE_TOOL_CONFLICT`
— the legacy `CodeExecutionToolDefinition` used to populate
`toolDefinitions` before the guard, but after dropping it from the
registry, `toolDefinitions` stayed empty until my expansion ran
downstream of the guard. Mixed provider + agent tools would silently
flow through to the LLM.

Fix moves the `execute_code` expansion to BEFORE `hasAgentTools`
computation. `bash_tool` + `read_file` now contribute to the check
the same way the legacy `execute_code` def did. Covered by a new
test that pins the Google+execute_code+provider-tools scenario —
the `rejects.toThrow(/google_tool_conflict/)` path would have
silently passed on the prior placement.

* 🔗 fix: Thread codeEnvAvailable through handoff sub-agents

Round-2 codex review caught the other half of the execute_code
expansion gap: `discoverConnectedAgents` omitted `codeEnvAvailable`
from its forwarded `initializeAgent` params, so handoff sub-agents
with `agent.tools: ['execute_code']` lost the `bash_tool` + `read_file`
registration (pre-Phase 8 the legacy `CodeExecutionToolDefinition`
would have landed in their `toolDefinitions` via the registry).

- Add `codeEnvAvailable?` to `DiscoverConnectedAgentsParams` and
  forward it verbatim on every sub-agent `initializeAgent` call.
- Update the three JS call sites that construct the primary's
  `codeEnvAvailable` (`services/Endpoints/agents/initialize.js`,
  `controllers/agents/openai.js`, `controllers/agents/responses.js`)
  to pass the same flag into `discoverConnectedAgents` — one
  authoritative source per request.
- Two regression tests in `discovery.spec.ts` pin the true/false
  passthrough so a future refactor that drops the param-forwarding
  surfaces immediately.

Left intentionally unchanged: `packages/api/src/agents/openai/service.ts`
(public API helper with no in-repo caller). External consumers of
`createAgentChatCompletion` who want code execution should pass a
`codeEnvAvailable`-aware `initializeAgent` via `deps` — documenting
the full public-API surface is out of scope for this Phase 8 PR.

* 🔗 fix: Thread codeEnvAvailable through addedConvo + memory-agent paths

Round-3 codex review caught the last two production `initializeAgent`
callers missing the Phase-8 capability flag:

- `api/server/services/Endpoints/agents/addedConvo.js` (multi-convo
  parallel agent execution). Added `codeEnvAvailable` to
  `processAddedConvo`'s destructured params and forwarded it into
  the per-added-agent `initializeAgent` call. Caller in
  `api/server/services/Endpoints/agents/initialize.js` passes the
  same `codeEnvAvailable` it computed for the primary.
- `api/server/controllers/agents/client.js` (`useMemory` — memory
  extraction agent). Computes its own `codeEnvAvailable` from
  `appConfig?.endpoints?.[EModelEndpoint.agents]?.capabilities` and
  forwards into `initializeAgent`. Memory agents rarely list
  `execute_code`, but if one does, pre-Phase 8 they got the legacy
  `execute_code` tool registered unconditionally — the passthrough
  restores parity.

With this, every production caller of `initializeAgent` explicitly
resolves the capability: main chat flow (primary + handoff), OpenAI
chat completions (primary + handoff), Responses API (primary + handoff),
added convo parallel agents, and memory agents. The one remaining
caller, `packages/api/src/agents/openai/service.ts::createAgentChatCompletion`,
is a public API helper with no in-repo consumer (external callers
must pass a capability-aware `initializeAgent` via `deps`).

* 🪤 fix: Remove duplicate appConfig declaration causing TDZ ReferenceError

The Responses API controller had TWO `const appConfig = req.config;`
bindings inside `createResponse`: one at the top of the function
(added by the Phase 4 `bash_tool` decouple) and one inside the try
block (added by the polish PR #12760). Because `const` is block-scoped
with a temporal dead zone, the inner redeclaration put `appConfig` in
TDZ for the entire try block, so any earlier reference inside the
try — notably `appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders`
at line 348 — threw `ReferenceError: Cannot access 'appConfig'
before initialization`. The error was silently swallowed by the
outer try/catch, leaving `recordCollectedUsage` unreached and the
six `responses.unit.spec.js` token-usage tests failing.

Removing the inner redeclaration fixes the six failing tests
(verified: 11/11 pass locally post-fix, 0 regressions elsewhere).
The outer function-scoped binding already provides `appConfig` to
every downstream reference.
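
The failure mode is plain JavaScript scoping and reproduces in isolation. A minimal sketch (function and property names only mirror this commit's description, not the actual controller):

```javascript
// Minimal reproduction of the duplicate-`const` TDZ bug described above.
function createResponse(req) {
  const appConfig = req.config; // outer, function-scoped binding
  try {
    // The inner `const appConfig` below makes the identifier block-scoped
    // to this try block, so the next line reads from the temporal dead
    // zone and throws: the outer binding is shadowed, never consulted.
    const allowed = appConfig?.endpoints;
    const appConfig = req.config; // duplicate declaration (the bug)
    return allowed;
  } catch (e) {
    return e.name; // the silently swallowed 'ReferenceError'
  }
}
```

Deleting the inner `const` line restores the outer binding for the whole try block, which is exactly what the fix does.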

* 🔗 fix: Thread codeEnvAvailable through the OpenAI chat-completion public API

Round-4 codex review (legitimate on the type-safety angle, even though the
runtime concern was already covered): the `createAgentChatCompletion`
helper defines its own narrower `InitializeAgentParams` interface locally,
and the type was missing `codeEnvAvailable`. External consumers who
supply a capability-aware `deps.initializeAgent` couldn't route
`codeEnvAvailable` through without a type-cast workaround.

- Widen the local `InitializeAgentParams` interface to include
  `codeEnvAvailable?: boolean` (matches the real
  `packages/api/src/agents/initialize.ts` type).
- Derive `codeEnvAvailable` inside `createAgentChatCompletion` from
  `deps.appConfig?.endpoints?.agents?.capabilities` (the same source
  the in-repo controllers use) and forward to `deps.initializeAgent`.
  Uses a string literal `'execute_code'` lookup so this file stays free
  of a `librechat-data-provider` import — keeping the dependency surface
  of the public helper minimal.

With this, external consumers of `createAgentChatCompletion` who pass
`appConfig` with the agents capabilities get `bash_tool` + `read_file`
registration automatically; consumers who don't pass `appConfig` retain
the existing "explicit opt-in" semantics (the flag stays `undefined`,
expansion is skipped).
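
The derivation described above can be sketched roughly as follows (the helper name and config types are illustrative; the real logic is inline in `createAgentChatCompletion` and the config shape is richer):

```typescript
// Illustrative sketch of the capability lookup, assuming a simplified
// appConfig shape.
interface AppConfigLike {
  endpoints?: { agents?: { capabilities?: string[] } };
}

function deriveCodeEnvAvailable(appConfig?: AppConfigLike): boolean | undefined {
  const capabilities = appConfig?.endpoints?.agents?.capabilities;
  // No appConfig (or no capabilities list) leaves the flag undefined,
  // preserving the explicit opt-in semantics: expansion stays skipped.
  if (capabilities === undefined) {
    return undefined;
  }
  return capabilities.includes('execute_code');
}
```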

* 🧹 chore: Review-driven polish — observability log, JSDoc DRY, test gaps, no-op allocation

Addresses the comprehensive review of PR #12767:

- **Finding #1** (MINOR, observability): `initializeAgent` now emits a
  debug log when an agent lists `execute_code` in its tools but the
  runtime gate is off (`params.codeEnvAvailable !== true`). The

  event-driven `loadToolDefinitionsWrapper` path doesn't log
  capability-disabled warnings, so without this the tool silently
  vanishes from the LLM's definitions with zero trace. Operators
  debugging "why isn't code interpreter working?" now get a signal at
  the initialize layer.

- **Finding #5** (NIT, allocation): `registerCodeExecutionTools` now
  returns the input `toolDefinitions` array by reference on the no-op
  path (both tools already registered by a prior caller in the same
  run) instead of allocating a fresh spread array every time. The
  common dual-call scenario — `initializeAgent` then
  `injectSkillCatalog` — saves one O(n) copy per request.
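
  Finding #5's no-op shape can be sketched like this (tool names from the
  commit; the registry's real definition objects carry full JSON schemas):

  ```typescript
  interface ToolDef { name: string }

  // Sketch of the by-reference fast path described above.
  function registerCodeExecutionTools(toolDefinitions: ToolDef[]): ToolDef[] {
    const registered = new Set(toolDefinitions.map((def) => def.name));
    if (registered.has('bash_tool') && registered.has('read_file')) {
      // Both tools already present (e.g. a prior caller in the same run):
      // return the input array by reference instead of spreading a copy.
      return toolDefinitions;
    }
    const next = [...toolDefinitions];
    if (!registered.has('bash_tool')) next.push({ name: 'bash_tool' });
    if (!registered.has('read_file')) next.push({ name: 'read_file' });
    return next;
  }
  ```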

- **Finding #4** (NIT, DRY): Collapsed the duplicated 6-line JSDoc
  comment in `openai.js`, `responses.js`, and `addedConvo.js` into
  either a one-line `@see DiscoverConnectedAgentsParams.codeEnvAvailable`
  pointer (the two JS call sites) or a compact 3-line block referring
  back to the canonical source (addedConvo's @param).

- **Finding #2** (MINOR, test gap): Added
  `api/server/services/Endpoints/agents/addedConvo.spec.js` with three
  cases covering `codeEnvAvailable=true`, `codeEnvAvailable=false`,
  and omitted (undefined) passthrough. A future refactor that drops
  the param from destructuring now surfaces here instead of silently
  regressing multi-convo parallel agents with `execute_code`.

- **Finding #3** (MINOR, test gap): Added
  `api/server/controllers/agents/__tests__/client.memory.spec.js`
  pinning the capability-flag derivation that `AgentClient::useMemory`
  uses — six cases covering present/absent/null/undefined config shapes
  plus an enum-literal pin (`'execute_code'` / `'agents'`). Catches
  enum renames or config-path shifts that would otherwise silently
  strip `bash_tool` + `read_file` from memory agents.

Finding #7 (jest.mock scoping, confidence 40) left as-is: the
reviewer's own risk assessment noted `buildToolSet` doesn't touch
the mocked exports, and restructuring a file-level `jest.mock` to
`jest.doMock` + dynamic `import()` introduces more complexity than
the speculative risk justifies. The existing mock is scoped to the
test file and contains the same stubs the adjacent
`skills.test.ts` already uses.

Finding #6 (PR description commit count) addressed out-of-band via
PR description update.

All existing tests pass, typecheck clean, lint clean across touched
files. New tests: 9 cases across 2 new spec files.

* 🧽 refactor: Replace hardcoded 'execute_code' string with AgentCapabilities enum in service.ts

Follow-up review (conf 55) caught that `openai/service.ts`'s Phase 8
`codeEnvAvailable` derivation used the literal `'execute_code'` while
every in-repo controller uses `AgentCapabilities.execute_code` from
`librechat-data-provider`. The file deliberately uses local type
interfaces to keep the public API helper's type surface small, but
that pattern was never a ban on single-value imports from the data
provider — `packages/api` already depends on it. Importing the enum
value means a future rename of `AgentCapabilities.execute_code`
propagates to this file automatically, matching the in-repo
controllers' behavior.

Other follow-up findings left as-is per the reviewer's own verdict:

- #2 (memory spec mirrors the production expression rather than
  calling `AgentClient::useMemory` directly): reviewer flagged as
  "not blocking" / "design-philosophy observation." The test file's
  JSDoc already explicitly documents the tradeoff and pins the enum
  literals to catch the most likely drift vector. Standing up
  `AgentClient` + all its mocks for a one-line regression guard is
  disproportionate.
- #3 (`addedConvo.spec.js` mock signature vs. underlying
  `loadAddedAgent` arity): reviewer's own confidence 25 noted the
  mock matches the wrapper's actual call pattern in the production
  file. Not a real gap.
- #4 was self-retracted as a false alarm.

* 🗑️ refactor: Fully deprecate CODE_API_KEY — remove all LibreChat-side references

The code-execution sandbox no longer authenticates via a per-run
`CODE_API_KEY` (frontend or backend). Auth moved server-side into the
agents library / sandbox service, so LibreChat drops every reference:

**Backend plumbing:**
- `api/server/services/Files/Code/crud.js`: `getCodeOutputDownloadStream`,
  `uploadCodeEnvFile`, `batchUploadCodeEnvFiles` no longer accept
  `apiKey` or send the `X-API-Key` header.
- `api/server/services/Files/Code/process.js`: `processCodeOutput`,
  `getSessionInfo`, `primeFiles` drop the `apiKey` param throughout.
- `api/server/services/ToolService.js`: stop reading
  `process.env[EnvVar.CODE_API_KEY]` for `primeCodeFiles` and PTC; the
  agents library handles auth internally. Remove the now-dead
  `loadAuthValues` + `EnvVar` imports. Drop the misleading
  "LIBRECHAT_CODE_API_KEY" hint from the bash_tool error log.
- `api/server/services/Files/process.js`: remove the `loadAuthValues`
  call around `uploadCodeEnvFile`.
- `api/server/routes/files/files.js`: code-env file download no longer
  fetches a per-user key.
- `api/server/controllers/tools.js`: `execute_code` is no longer a
  tool that needs verifyToolAuth with `[EnvVar.CODE_API_KEY]` — the
  endpoint always reports system-authenticated so the client skips
  the key-entry dialog. `processCodeOutput` called without `apiKey`.
- `api/server/controllers/agents/callbacks.js`: `processCodeOutput`
  invoked without the loadAuthValues round trip, for both LegacyHandler
  and Responses-API handlers.
- `api/app/clients/tools/util/handleTools.js`: `createCodeExecutionTool`
  called with just `user_id` + files.

**packages/api:**
- `packages/api/src/agents/skillFiles.ts`: `PrimeSkillFilesParams`,
  `PrimeInvokedSkillsDeps`, `primeSkillFiles`, `primeInvokedSkills` all
  drop the `apiKey` param; the gate is purely `codeEnvAvailable`.
- `packages/api/src/agents/handlers.ts`: `handleSkillToolCall` drops
  the `process.env[EnvVar.CODE_API_KEY]` read; skill-file priming is
  now gated solely on `codeEnvAvailable`. `ToolExecuteOptions`
  signatures drop apiKey from `batchUploadCodeEnvFiles` and
  `getSessionInfo`.
- `packages/api/src/agents/skillConfigurable.ts`: JSDoc no longer
  references the env var.
- `packages/api/src/tools/classification.ts`: PTC creation no longer
  gated on `loadAuthValues`; `buildToolClassification` drops the
  `loadAuthValues` dep entirely (no LibreChat-side callers need it for
  this path anymore).
- `packages/api/src/tools/definitions.ts`: `LoadToolDefinitionsDeps`
  drops the `loadAuthValues` field.

**Frontend:**
- Delete `client/src/hooks/Plugins/useAuthCodeTool.ts`,
  `useCodeApiKeyForm.ts`, and
  `client/src/components/SidePanel/Agents/Code/ApiKeyDialog.tsx` —
  the install/revoke dialogs for CODE_API_KEY are fully dead.
- `BadgeRowContext.tsx`: drop `codeApiKeyForm` from the context type and
  provider. `codeInterpreter` toggle treated as always authenticated
  (sandbox auth is server-side).
- `ToolsDropdown.tsx`, `ToolDialogs.tsx`, `CodeInterpreter.tsx`,
  `RunCode.tsx`, `SidePanel/Agents/Code/Action.tsx` + `Form.tsx`: all
  API-key dialog trigger refs, "Configure code interpreter" gear
  buttons, and auth-verification plumbing removed. The
  "Code Interpreter" toggle is now a plain `AgentCapabilities.execute_code`
  checkbox — no key-entry gate.
- `client/src/locales/en/translation.json`: drop the three
  `com_ui_librechat_code_api*` keys and `com_ui_add_code_interpreter_api_key`.
  Other locales are externally automated per CLAUDE.md.

**Config:**
- `.env.example`: remove the `# LIBRECHAT_CODE_API_KEY=your-key` section
  and its header.

**Tests:**
- `crud.spec.js`: assertions flipped to pin "no X-API-Key header" and
  "no apiKey param".
- `skillFiles.spec.ts`: removed env-var save/restore; tests now pin
  that the batch-upload path is gated solely on `codeEnvAvailable` and
  that no apiKey is threaded through.
- `handlers.spec.ts`: same — just the `codeEnvAvailable` gate pins
  remain.
- `classification.spec.ts`: remove the two tests that asserted
  `loadAuthValues` was (not) called for PTC.
- `definitions.spec.ts`: drop every `loadAuthValues: mockLoadAuthValues`
  entry from the deps shape.
- `process.spec.js`: strip the mock of `EnvVar.CODE_API_KEY`.

**Comment hygiene:**
- `tools.ts`, `initialize.ts`, `registry/definitions.ts`: shortened
  stale comment references to "legacy `execute_code` tool" without
  naming the retired env var.

Tests verified: 678 packages/api tests pass, 836 backend api tests
pass. Typecheck clean, lint clean. Only remaining CODE_API_KEY
mentions in the code are two regression-guard assertions:
- `crud.spec.js`: pins "no X-API-Key header" stays absent.
- `skillConfigurable.spec.ts`: pins `configurable` never grows a
  `codeApiKey` field.

* 🧹 chore: Remove the last two CODE_API_KEY name mentions in LibreChat

Follow-up to the prior full deprecation commit: two tests still named
the retired identifier in their regression-guard assertions.

- `packages/api/src/agents/skillConfigurable.spec.ts`: drop the
  "does not inject a codeApiKey key" test. The `codeApiKey` field is
  gone from the production configurable shape, so an absence-assertion
  naming it re-introduces the retired identifier in code.
- `api/server/services/Files/Code/crud.spec.js`: rename the
  "without an X-API-Key header" case back to "should request stream
  response from the correct URL" and drop the
  `expect(headers).not.toHaveProperty('X-API-Key')` assertion. The
  surrounding request-shape checks (URL, timeout, responseType) still
  pin the behavior; the explicit header-absence line was named after
  the deprecated contract.

Result: `grep -rn "CODE_API_KEY\|codeApiKey\|LIBRECHAT_CODE_API_KEY"`
against the LibreChat source tree returns zero hits. The only
remaining `X-API-Key` strings in this repo are in unrelated OpenAPI
Action + MCP server auth configurations, where the string is
user-facing config, not a LibreChat-owned identifier.

Tests: 677 packages/api pass (2 pre-existing summarization e2e failures
unrelated); 126 api-workspace controller/service tests pass.
Typecheck and lint clean.

* 🎯 fix: Narrow codeEnvAvailable to per-agent (admin cap AND agent.tools)

Before this commit, `codeEnvAvailable` was computed in the three JS
controllers as the admin-level capability flag only
(`enabledCapabilities.has(AgentCapabilities.execute_code)`) and passed
through `initializeAgent` → `injectSkillCatalog` / `primeInvokedSkills` /
`enrichWithSkillConfigurable` unchanged. A skills-only agent whose
`tools` array didn't include `execute_code` still got `bash_tool`
registered (via `injectSkillCatalog`) and skill files re-primed to the
sandbox on every turn — wrong, because the agent never opted in to
code execution.

**Fix:** `initializeAgent` now computes the per-agent effective value
once as `params.codeEnvAvailable === true && agent.tools.includes(Tools.execute_code)`,
reuses the same boolean for:

1. The `execute_code` → `bash_tool + read_file` expansion gate
   (previously already consulted `agent.tools`; now shares the single
   `effectiveCodeEnvAvailable` binding).
2. The `injectSkillCatalog` call (previously got the raw admin flag).
3. The returned `InitializedAgent.codeEnvAvailable` field (new, typed as
   required boolean).
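
The AND itself is small enough to sketch (names approximate; the real code binds this as `effectiveCodeEnvAvailable` inside `initializeAgent`):

```typescript
interface AgentLike { tools?: string[] }

// Per-agent narrowing: admin capability AND the agent's own opt-in.
// Both halves are required for the effective value to be true.
function narrowCodeEnvAvailable(
  adminFlag: boolean | undefined,
  agent: AgentLike,
): boolean {
  return adminFlag === true && (agent.tools ?? []).includes('execute_code');
}
```

The four-quadrant test matrix mentioned later in this commit pins exactly these cases.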

**Controllers (initialize.js, openai.js, responses.js):** store
`primaryConfig.codeEnvAvailable` in `agentToolContexts.set(primaryId, ...)`,
capture `config.codeEnvAvailable` in every handoff `onAgentInitialized`
callback, and read it from the per-agent ctx inside the
`toolExecuteOptions.loadTools` runtime closure. The hoisted
`const codeEnvAvailable = enabledCapabilities.has(...)` locals in the
two OpenAI-compat controllers are gone — they were shadowing the
narrowed per-agent value.

**primeInvokedSkills:** `handlePrimeInvokedSkills` in
`services/Endpoints/agents/initialize.js` now uses
`primaryConfig.codeEnvAvailable` (per-agent, narrowed) instead of the
raw admin flag. A skills-only primary agent won't re-prime historical
skill files to the sandbox even when the admin enabled the capability
globally.

**Efficiency:** one extra `&&` in `initializeAgent`. No runtime hot-path
cost — the `includes()` scan on `agent.tools` was already happening for
the `execute_code` expansion gate; it's now just bound to a local. Tool
execution closures read `ctx.codeEnvAvailable === true` (property
access + strict equality, O(1)).

**Ephemeral-agent note:** per-agent narrowing is authoritative for both
persisted and ephemeral flows. The ephemeral toggle
(`ephemeralAgent.execute_code`) is reconciled into `agent.tools`
upstream in `packages/api/src/agents/added.ts`, so
`agent.tools.includes('execute_code')` is the single source of truth
by the time `initializeAgent` runs.

**Tests:** two new regression tests pin the narrowing contract:

- `initialize.test.ts` — four-quadrant matrix on
  `InitializedAgent.codeEnvAvailable` (cap on × agent asks, cap on ×
  doesn't ask, cap off × asks, neither). Catches future refactors that
  drop either half of the AND.
- `skills.test.ts` — `injectSkillCatalog` with `codeEnvAvailable: false`
  against an active skill catalog must NOT register `bash_tool` even
  though it still registers `read_file` + `skill`. This is the state
  a skills-only agent gets post-narrowing.

All 191 affected packages/api tests pass + 836 backend api tests pass.
Typecheck clean, lint clean.

* 🧽 refactor: Comprehensive-review polish — hoist tool defs, pin verifyToolAuth contract, doc appConfig

Addresses the comprehensive review of Phase 8. Findings mapped:

**#1 (MINOR): `verifyToolAuth` unconditional auth for execute_code**
- Added doc comment explicitly stating the deployment contract
  (admin capability → reachable sandbox; no per-check health probe
  to keep UI-gate queries O(1)).
- New `api/server/controllers/__tests__/tools.verifyToolAuth.spec.js`
  with 4 regression tests pinning the contract:
  1. `authenticated: true` + `SYSTEM_DEFINED` for execute_code.
  2. 404 for unknown tool IDs.
  3. `loadAuthValues` is never consulted (catches a future revert
     that would resurface the per-user key-entry dialog).
  4. Response `message` is never `USER_PROVIDED`.

**#2 (MINOR): `openai/service.ts` undocumented `appConfig` dependency**
- Expanded the `ChatCompletionDependencies.appConfig` JSDoc to spell
  out that omitting it silently disables code execution for agents
  with `execute_code` in their tools. External consumers of
  `createAgentChatCompletion` now have the contract documented at
  the type boundary.

**#5 (NIT): `registerCodeExecutionTools` re-allocates tool defs**
- Hoisted `READ_FILE_DEF` and `BASH_TOOL_DEF` to module-level
  `Object.freeze`d constants. The shapes derive entirely from
  static `@librechat/agents` exports, so a single frozen object per
  tool is safe to share across every agent init. Eliminates the
  ~4-property allocations on every call (including the common
  second-call no-op path).

**#6 (NIT): Verbose history-priming comment in initialize.js**
- Trimmed the 16-line `handlePrimeInvokedSkills` block to a 5-line
  summary with `@see InitializedAgent.codeEnvAvailable` pointer.
  The canonical narrowing explanation lives on the type; the
  controller comment is just the ACL-vs-capability rationale.

**Skipped:**

- #3 (memory spec tests a mirror function): reviewer self-dismissed
  as a design tradeoff; the enum-literal pin already catches the
  highest-risk drift vector.
- #4 (cross-repo contract for `createCodeExecutionTool`): user will
  explicitly install the latest `@librechat/agents` dev version
  once the companion PR publishes, so the version pin will be
  authoritative.
- #7 (migration/deprecation note for self-hosters): out of scope
  per user direction — release notes handle this.

Tests verified: 679 packages/api + 840 backend api tests pass.
Typecheck + lint clean.

* 🔧 chore: Update @librechat/agents version to 3.1.68-dev.1 across package-lock and package.json files

Bumps `@librechat/agents` from `3.1.68-dev.0` to `3.1.68-dev.1` in `package-lock.json` and the relevant `package.json` files, keeping versions consistent across the project and picking up the latest dev fixes.
2026-04-25 04:02:01 -04:00
Danny Avila
9e013be293 📦 chore: bump @librechat/agents to v3.1.68-dev.0 2026-04-25 04:02:01 -04:00
Danny Avila
b88e3473e5 📦 chore: bump @librechat/agents to dev latest 2026-04-25 04:02:00 -04:00
Danny Avila
64ec5f18b8 ⚙️ feat: Skill runtime integration: catalog, tools, execution, file priming (#12649)
* feat: Skill runtime integration — catalog injection, tool registration, execute handler

Wires the @librechat/agents SkillTool primitive into LibreChat's agent runtime:

**Enums:**
- Add `skills` to AgentCapabilities + defaultAgentCapabilities

**Data layer:**
- Add `getSkillByName(name, accessibleIds)` — compound query that
  combines name lookup + ACL check in one findOne

**Agent initialization (packages/api/src/agents/initialize.ts):**
- Accept `accessibleSkillIds` param and `listSkillsByAccess` db method
- Query accessible skills, format catalog via `formatSkillCatalog()`,
  append to `additional_instructions` (appears in agent system prompt)
- Register `SkillToolDefinition` + `createSkillTool()` when catalog
  is non-empty (tool appears in model's tool list)
- Store `accessibleSkillIds` and `skillCount` on InitializedAgent

**Execute handler (packages/api/src/agents/handlers.ts):**
- Add `getSkillByName` to `ToolExecuteOptions`
- `handleSkillToolCall()` intercepts `Constants.SKILL_TOOL`:
  extracts skillName, loads body from DB with ACL check,
  substitutes $ARGUMENTS, returns ToolExecuteResult with
  injectedMessages (skill body as isMeta user message)
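
The `$ARGUMENTS` substitution step can be sketched in isolation (the helper name is illustrative, and the real handler's exact replacement semantics may differ):

```typescript
// Replace every `$ARGUMENTS` placeholder in a skill body with the
// caller-supplied arguments string. split/join sidesteps the
// regex-escaping concerns a pattern-based String.replace would raise.
function substituteArguments(skillBody: string, args: string): string {
  return skillBody.split('$ARGUMENTS').join(args);
}
```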

**Caller wiring:**
- initialize.js: query skill IDs via findAccessibleResources,
  pass to initializeAgent + store on agentToolContexts,
  add getSkillByName to toolExecuteOptions,
  pass accessibleSkillIds through loadTools configurable
- openai.js + responses.js: same pattern for their flows

Requires @librechat/agents >= 3.1.65 (PR #91 exports).

* feat: Skills toggle in tools menu + backend capability gating

Frontend:
- Add skills?: boolean to TEphemeralAgent type
- Add LAST_SKILLS_TOGGLE_ to LocalStorageKeys for persistence
- Add skillsEnabled to useAgentCapabilities hook
- Add skills useToolToggle to BadgeRowContext with localStorage init
- New Skills.tsx badge component (Scroll icon, cyan theme,
  permission-gated via PermissionTypes.SKILLS)
- Add skills entry to ToolsDropdown with toggle + pin
- Render Skills badge in BadgeRow ephemeral section

Backend:
- Extract injectSkillCatalog() into packages/api/src/agents/skills.ts
  (reduces initializeAgent module size, reusable helper)
- initializeAgent delegates to helper instead of inline block
- Capability-gate the findAccessibleResources query:
  - Agents endpoint: checks AgentCapabilities.skills in admin config
  - OpenAI/Responses controllers: checks ephemeralAgent.skills toggle
- ACL query runs once per run, result shared across all agents

* refactor: remove createSkillTool() instance from injectSkillCatalog

SkillTool is event-driven only. The tool definition in toolDefinitions
is sufficient for the LLM to see the tool schema. No tool instance is
needed since the host handler intercepts via ON_TOOL_EXECUTE before
tool.invoke() is ever called.

Removes tools from InjectSkillCatalogParams/Result, drops the
createSkillTool import.

* feat: skill file priming, bash tool, and invoked skills state

Multi-file skill support:
- New primeSkillFiles() helper (packages/api/src/agents/skillFiles.ts)
  uploads skill files + SKILL.md body to code execution environment
- handleSkillToolCall primes files on invocation when skill.fileCount > 0,
  returns session info as artifact so ToolNode stores the session
- Skill-primed files available to subsequent bash/code tool calls

Bash tool auto-registration:
- BashExecutionToolDefinition added alongside SkillToolDefinition when
  skills are enabled, giving the model a bash tool for running scripts

Conversation state:
- Add invokedSkillIds field to conversation schema (Mongoose + Zod)
- handleSkillToolCall updates conversation with $addToSet on success
- Enables re-priming skill files on subsequent runs (future)

Dependency wiring:
- Pass listSkillFiles, getStrategyFunctions, uploadCodeEnvFile,
  updateConversation through ToolExecuteOptions
- Pass req and codeApiKey through mergedConfigurable
- All three controller entry points wired (initialize.js, openai.js,
  responses.js)

* fix: load bash_tool instance in loadToolsForExecution, remove file listing

- Add createBashExecutionTool to loadToolsForExecution alongside PTC/ToolSearch
  pattern: loads CODE_API_KEY, creates bash tool instance on demand
- Add BASH_TOOL and SKILL_TOOL to specialToolNames set so they don't go
  through the generic loadTools path (bash is created here, skill is
  intercepted in handler before tool.invoke)
- Remove file name listing from skill content text — it's the skill
  author's responsibility to disclose files in SKILL.md, not the framework

* feat: batch upload for skill files, replace sequential uploads

- Add batchUploadCodeEnvFiles() to crud.js: single POST to /upload/batch
  with all files in one multipart request, returns shared session_id
- Rewrite primeSkillFiles to collect all streams (SKILL.md + bundled files)
  then do one batch upload instead of N sequential uploads
- Replace uploadCodeEnvFile with batchUploadCodeEnvFiles across all callers
  (handlers.ts, initialize.js, openai.js, responses.js)

* refactor: remove invokedSkillIds from conversation schema

Skills aren't re-loaded between runs, so conversation-level state for
invoked skills doesn't help. Skill state will live on messages instead
(like tool_search discoveredTools and summaries), enabling in-place
re-injection on follow-up runs.

Removes invokedSkillIds from: convo Mongoose schema, IConversation
interface, Zod schema, ToolExecuteOptions.updateConversation, and
all three caller wiring points.

* feat: smart skill file re-priming with session freshness checking

Schema:
- Add codeEnvIdentifier field to ISkillFile (type + Mongoose schema)
- Add updateSkillFileCodeEnvIds batch method (uses tenantSafeBulkWrite)
- Export checkIfActive from Code/process.js

Extraction:
- Add extractInvokedSkillsFromHistory() to run.ts — scans message
  history for AIMessage tool_calls where name === 'skill', extracts
  skillName args. Follows same pattern as extractDiscoveredToolsFromHistory.
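
The scan described above reduces to a sketch like this (message shapes simplified; the real function walks LangChain `BaseMessage` history):

```typescript
interface ToolCallLike { name: string; args: { skillName?: string } }
interface AIMessageLike { tool_calls?: ToolCallLike[] }

// Collect unique skill names invoked anywhere in the message history.
function extractInvokedSkills(history: AIMessageLike[]): string[] {
  const skillNames = new Set<string>();
  for (const message of history) {
    for (const call of message.tool_calls ?? []) {
      if (call.name === 'skill' && call.args.skillName) {
        skillNames.add(call.args.skillName);
      }
    }
  }
  return [...skillNames];
}
```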

Smart re-priming in primeSkillFiles:
- Before batch uploading, checks if existing codeEnvIdentifiers are
  still active via getSessionInfo + checkIfActive (23h threshold)
- If session is still active, returns cached references (zero uploads)
- If stale or missing, batch-uploads everything and persists new
  identifiers on SkillFile documents (fire-and-forget)
- Single session check covers all files (batch shares one session_id)
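
The freshness gate itself is a time comparison, sketched here under the commit's stated 23-hour threshold (the real `checkIfActive` reads the sandbox session metadata returned by `getSessionInfo`):

```typescript
// Sessions older than 23h are treated as stale and re-primed.
const SESSION_TTL_MS = 23 * 60 * 60 * 1000;

function isSessionFresh(lastActiveAtMs: number, nowMs: number = Date.now()): boolean {
  return nowMs - lastActiveAtMs < SESSION_TTL_MS;
}
```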

Wiring:
- Pass getSessionInfo, checkIfActive, updateSkillFileCodeEnvIds
  through ToolExecuteOptions and all three controller entry points

* feat: wire skill file re-priming at run start via initialSessions

Flow:
1. initialize.js creates primeInvokedSkills callback with all deps
2. client.js calls it with message history before createRun
3. extractInvokedSkillsFromHistory scans for skill tool calls
4. For each invoked skill with files, primeSkillFiles uploads/checks
5. Returns initialSessions map passed to createRun
6. createRun passes initialSessions to Run.create (via RunConfig)
7. Run constructor seeds Graph.sessions, making skill files available
   to subsequent bash/code tool calls via ToolNode session injection

Requires @librechat/agents with initialSessions on RunConfig (PR #94).

* refactor: use CODE_EXECUTION_TOOLS set for code tool checks

Import CODE_EXECUTION_TOOLS from @librechat/agents and replace inline
constant checks in handlers.ts and callbacks.js. Fixes missing bash
tool coverage in the session context injection (handlers.ts) and code
output processing (callbacks.js).

* refactor: move primeInvokedSkills to packages/api, add skill body re-injection

Moves primeInvokedSkills from an inline closure in initialize.js (with
dynamic requires) to a proper exported function in packages/api
skillFiles.ts with explicit typed dependencies.

Key changes:
- primeInvokedSkills now returns both initialSessions (for file priming)
  AND injectedMessages (skill bodies for context continuity)
- createRun accepts invokedSkillMessages and appends skill bodies to
  systemContent so the model retains skill instructions across runs
- initialize.js calls the packaged function with all deps passed explicitly
- client.js passes both initialSessions and injectedMessages to createRun

* fix: move dynamic requires to top-level module imports

Move primeInvokedSkills, getStrategyFunctions, batchUploadCodeEnvFiles,
getSessionInfo, and checkIfActive from inline requires to top-level
module requires where they belong.

* refactor: skill body reconstruction via formatAgentMessages, not systemContent

Replaces the lazy systemContent approach with proper message-level
reconstruction:

SDK (formatAgentMessages):
- New invokedSkillBodies param (Map<string, string>)
- Reconstructs HumanMessages after skill ToolMessages at the correct
  position in the message sequence, matching where ToolNode originally
  injected them

LibreChat:
- extractInvokedSkillsFromPayload replaces extractInvokedSkillsFromHistory
  (works with raw TPayload before formatAgentMessages, not BaseMessage[])
- primeInvokedSkills now takes payload instead of messages, returns
  skillBodies Map instead of injectedMessages
- client.js calls primeInvokedSkills BEFORE formatAgentMessages, passes
  skillBodies through as the 4th param
- Removed invokedSkillMessages from createRun (no more systemContent hack)
- Single-pass: skill detection happens inside formatAgentMessages' existing
  tool_call processing loop, zero extra message iterations

* refactor: rename skillBodies to skills for consistency with SDK param

* refactor: move auth loading into primeInvokedSkills, pass loadAuthValues as dep

The payload/accessibleSkillIds guard and CODE_API_KEY loading now live
inside primeInvokedSkills (packages/api) rather than in the CJS caller.
initialize.js passes loadAuthValues as a dependency and the callback
is only created when skillsCapabilityEnabled.

* feat: ReadFile tool + conditional bash registration + skill path namespacing

ReadFile tool (read_file):
- General-purpose file reader, event-driven (ON_TOOL_EXECUTE)
- Schema: { file_path: string } — "{skillName}/{path}" convention
- handleReadFileCall: resolves skill name from path, ACL check, reads
  from DB cache or storage, binary detection, size limits (256KB),
  lazy caching (512KB), line numbers in output
- SKILL.md special case: reads skill.body directly
- Dispatched alongside SKILL_TOOL in createToolExecuteHandler
- Added to specialToolNames in ToolService
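
Resolving the skill name from the `"{skillName}/{path}"` convention can be sketched as follows (helper name and return shape are illustrative, not the handler's actual code):

```typescript
// Split a read_file path into its skill-name prefix and relative path.
// Paths without a prefix (or with an empty half) are rejected.
function parseSkillPath(
  filePath: string,
): { skillName: string; relativePath: string } | null {
  const idx = filePath.indexOf('/');
  // Reject: no slash, leading slash (empty skill name), trailing slash
  // (empty relative path).
  if (idx <= 0 || idx === filePath.length - 1) {
    return null;
  }
  return {
    skillName: filePath.slice(0, idx),
    relativePath: filePath.slice(idx + 1),
  };
}
```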

Conditional tool registration:
- ReadFile + SkillTool: always registered when skills enabled
- BashTool: only registered when codeEnvAvailable === true
- codeEnvAvailable passed through InitializeAgentParams from caller

Skill file path namespacing:
- primeSkillFiles now uploads as "{skillName}/SKILL.md" and
  "{skillName}/{relativePath}" instead of flat names
- Prevents file collisions when multiple skills are invoked

Wiring:
- getSkillFileByPath + updateSkillFileContent passed through
  ToolExecuteOptions in all three callers

* feat: return images/PDFs as artifacts from read_file, tighten caching

Binary artifact support:
- Images (png, jpeg, gif, webp) returned as base64 in artifact.content
  with type: 'image_url', processed by existing callback attachment flow
- PDFs returned as base64 artifact similarly
- Binary size limit: 10MB (MAX_BINARY_BYTES)
- Other binary files still return metadata + bash fallback

Caching:
- Text cached only on first read (file.content == null check)
- Binary flag cached only on first detection (file.isBinary == null)
- Skill files are immutable; no redundant cache writes

Registration:
- ReadFileToolDefinition now includes responseFormat: 'content_and_artifact'
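
The `content_and_artifact` flow described above can be sketched as a `[text, artifact]` tuple builder. This is a hypothetical illustration, not LibreChat's actual implementation: `buildImageResult` and the artifact field layout are assumptions; only the `type: 'image_url'` convention, the base64 content, and the 10MB binary limit come from the commit text.

```typescript
// Hypothetical sketch of a responseFormat: 'content_and_artifact' result for
// read_file. Images come back as a [text, artifact] tuple whose artifact
// carries base64 image_url content for the callback attachment flow.
const MAX_BINARY_BYTES = 10 * 1024 * 1024; // 10MB limit at this point in the history

function buildImageResult(
  filePath: string,
  bytes: Uint8Array,
  mimeType: 'image/png' | 'image/jpeg' | 'image/gif' | 'image/webp',
): [string, { content: { type: string; image_url: { url: string } }[] }] {
  if (bytes.byteLength > MAX_BINARY_BYTES) {
    // Oversized binaries fall back to metadata-only text, no artifact content.
    return [`File ${filePath} exceeds the ${MAX_BINARY_BYTES}-byte binary limit`, { content: [] }];
  }
  const base64 = Buffer.from(bytes).toString('base64');
  return [
    `Returned ${filePath} as an image attachment`,
    { content: [{ type: 'image_url', image_url: { url: `data:${mimeType};base64,${base64}` } }] },
  ];
}
```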

* chore: update @librechat/agents to version 3.1.66-dev.0 and add peer dependencies in package-lock.json and package.json files

* fix: resolve review findings #1,#2,#4,#5,#6,#10,#13

Critical:
- #1: primeInvokedSkills now accumulates files across all skills into
  one session entry instead of overwriting. Parallel processing via
  Promise.allSettled.
- #2: codeEnvAvailable now computed and passed in openai.js and
  responses.js (was missing, bash tool never registered in those flows)

Major:
- #4: relativePath in updateSkillFileCodeEnvIds now strips the
  {skillName}/ prefix to match SkillFile documents. SKILL.md filter
  uses endsWith instead of exact match.
- #5: File priming guarded on apiKey being non-empty (skip when not
  configured instead of failing with auth error)
- #6: Skills processed in parallel via Promise.allSettled instead of
  sequential for-of loop

Minor:
- #10: Use top-level imports in initialize.js instead of inline requires
- #13: Log warning when skill catalog reaches the 100-skill limit

* fix: resolve followup review findings N1,N2,N4

N1 (CRITICAL): Wire skill deps into responses.js non-streaming path.
Was completely missing getSkillByName, file strategy functions, etc.

N2 (MAJOR): Single batch upload for ALL skills' files. Resolves skills
in parallel (Phase 1), then collects all file streams across skills
and does ONE batchUploadCodeEnvFiles call (Phase 2). All files share
one session_id, eliminating cross-session isolation issues.

N4 (MINOR): Move inline require() to top-level in openai.js and
responses.js, consistent with initialize.js.

* fix: add mocks for new file strategy imports in controller tests

* fix: restore session freshness check, parallelize file lookups, add warnings

R1: Re-add session freshness check before batch upload. Checks any
existing codeEnvIdentifier via getSessionInfo + checkIfActive. If the
session is still active (23h window), returns cached file references
with zero re-uploads.

R2: listSkillFiles calls parallelized via Promise.all (were sequential
in the for-of loop).

R3: Log warning when skill record lookup fails during identifier
persistence (was a silent empty-string fallback).

* fix: guard freshness cache on single-session consistency

* fix: multi-session freshness check (code env handles mixed sessions natively)

The code execution environment fetches each file by its own
{session_id, fileId} pair independently — no single-session
requirement. Removed the sessionIds.size === 1 guard.

Now checks ALL distinct sessions for freshness. If every session
is still active (23h window), returns cached references with per-file
session_ids preserved. If any session expired, falls through to
re-upload everything in a single batch.
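
The all-sessions-must-be-active rule described above can be sketched as follows. The `checkIfActive` name and the `"{session_id}/{fileId}"` identifier shape are taken from the commit text; the exact signatures are assumptions.

```typescript
// Sketch of the multi-session freshness gate: collect every distinct session
// across the cached files (not just the first file's), probe them in parallel,
// and treat the cache as fresh only if all are still inside the active window.
interface CachedFile {
  codeEnvIdentifier: string; // "{session_id}/{fileId}"
}

async function allSessionsFresh(
  files: CachedFile[],
  checkIfActive: (sessionId: string) => Promise<boolean>,
): Promise<boolean> {
  const sessionIds = new Set(files.map((f) => f.codeEnvIdentifier.split('/')[0]));
  // Probe all sessions in parallel; any expired session forces a full re-upload.
  const results = await Promise.all(Array.from(sessionIds).map((sid) => checkIfActive(sid)));
  return results.every(Boolean);
}
```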

* perf: parallelize session freshness checks via Promise.all

* fix: add optional chaining for session info retrieval in primeInvokedSkills

Updated the primeInvokedSkills function to use optional chaining for getSessionInfo and checkIfActive methods, ensuring safer access and preventing potential runtime errors when these methods are undefined.

* fix: address review findings #1-#9 + Codex P1/P2 + session probe

Critical:
- #1/Codex P1: Add codeApiKey loading to openai.js and responses.js
  loadTools configurable (was missing, file priming broken in 2/3 paths)
- Codex P1: Fix cached file name prefix in primeSkillFiles cache path
  (was sf.relativePath, now ${skill.name}/${sf.relativePath})

Major:
- Codex P2: Honor ephemeral skills toggle in agents endpoint
  (check ephemeralAgent?.skills !== false alongside admin capability)
- #4: Early size check using file.bytes from DB before streaming
  (prevents full-file buffer for oversized files)

Minor:
- #5: Replace Record<string, any> with Record<string, boolean | string>
- #6: Localize Pin/Unpin aria-labels with com_ui_pin/com_ui_unpin
- #8: Parallelize stream acquisition in primeSkillFiles via
  Promise.allSettled
- #9: Log warning for partial batch upload failures with filenames

Performance:
- Session probe optimization: getSessionInfo now hits per-object
  endpoint (GET /sessions/{sid}/objects/{fid}) instead of listing
  entire session (GET /files/{sid}?detail=summary). O(1) stat vs
  O(N) list + linear scan.

* refactor: extract shared skill wiring helper + add unit tests

DRY (#3):
- New skillDeps.js exports getSkillToolDeps() with all 9 skill-related
  deps (getSkillByName, listSkillFiles, getStrategyFunctions, etc.)
- Replaces 5 identical copy-paste blocks across initialize.js, openai.js,
  responses.js (streaming + non-streaming paths)
- One place to maintain when skill deps change

Tests (#2):
- 8 unit tests for extractInvokedSkillsFromPayload covering:
  string args, object args, missing skill tool_calls, non-assistant
  messages, malformed JSON, empty skillName, empty payload, dedup
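
The behavior those eight cases pin down can be sketched as below. This is a hypothetical reconstruction from the test list, not the actual `extractInvokedSkillsFromPayload` source; the payload shapes and the `skill` tool name are assumptions.

```typescript
// Sketch of skill extraction: only assistant messages with skill tool_calls
// count; string args are parsed (malformed JSON skipped), empty names are
// dropped, and a Set dedups repeated invocations.
type ToolCall = { name?: string; args?: unknown };
type PayloadMessage = { role?: string; tool_calls?: ToolCall[] };

const SKILL_TOOL_NAME = 'skill'; // assumed tool name

function extractInvokedSkillsFromPayload(payload: PayloadMessage[]): string[] {
  const invoked = new Set<string>();
  for (const message of payload) {
    if (message.role !== 'assistant' || !Array.isArray(message.tool_calls)) continue;
    for (const call of message.tool_calls) {
      if (call.name !== SKILL_TOOL_NAME) continue;
      let args = call.args;
      if (typeof args === 'string') {
        try {
          args = JSON.parse(args); // string args: parse, skip malformed JSON
        } catch {
          continue;
        }
      }
      const skillName = (args as { skillName?: string } | undefined)?.skillName;
      if (typeof skillName === 'string' && skillName.length > 0) {
        invoked.add(skillName);
      }
    }
  }
  return Array.from(invoked);
}
```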

* fix: remove @jest/globals import, use global jest env

* fix: resolve round 2 review findings R2-1 through R2-7

R2-1 (toggle semantics): openai.js + responses.js now check admin
  capability (AgentCapabilities.skills) alongside ephemeral toggle.
  Aligns with initialize.js.

R2-2 (swallowed error): primeInvokedSkills now logs
  updateSkillFileCodeEnvIds failures (was .catch(() => {}))

R2-4 (test cast): Record<string, string> → Record<string, unknown>

R2-5 (DRY regression): Extract enrichWithSkillConfigurable() into
  skillDeps.js. Replaces 4 identical loadAuthValues blocks.
  Each loadTools callback is now a one-liner. JSDoc added (R2-6).

R2-7 (sequential streams): primeInvokedSkills now uses
  Promise.allSettled for parallel stream acquisition.

* fix: require explicit skills toggle + treat partial cache as miss

- initialize.js: change ephemeralSkillsToggle !== false to === true
  (unset toggle no longer enables skills)
- primeSkillFiles cache: require ALL files to have codeEnvIdentifier
  before using cache (partial persistence = cache miss = re-upload)
- primeInvokedSkills cache: same check (allFilesWithIds.length must
  equal total file count)

* fix: pass entity_id=skillId on batch upload, eliminates per-user cache thrashing

primeSkillFiles now passes entity_id: skill._id.toString() to
batchUploadCodeEnvFiles. This scopes the code env session to the
skill, not the user. All users sharing a skill share the same
uploaded files — no more cache thrashing from overwriting each
other's codeEnvIdentifier.

The stored codeEnvIdentifier now includes ?entity_id= suffix so
freshness checks pass the entity_id through to the per-object
stat endpoint. Both primeSkillFiles and primeInvokedSkills
store consistent identifier formats.

* fix: pass entity_id on multi-skill batch upload, consistent identifier format

* Revert "fix: pass entity_id on multi-skill batch upload, consistent identifier format"

This reverts commit c85ce2161e.

* refactor: per-skill upload in primeInvokedSkills, eliminate multi-skill batch

Replace the monolithic multi-skill batch upload with per-skill
primeSkillFiles calls. Each skill gets its own session with
entity_id=skillId, ensuring:

- Correct session auth (entity_id matches on freshness checks)
- Per-skill freshness caching (only expired skills re-upload)
- Shared skill sessions work across users (same entity_id=skillId)
- Code env handles mixed session_ids natively

The big batch block (stream collection, single upload, identifier
mapping) is replaced by a simple loop over primeSkillFiles, which
already handles freshness caching, batch upload, and identifier
persistence per-skill.

* fix: resolve review findings #1,#3-5,#7,#9-11

Critical:
- #1: Strip ?entity_id= query string before splitting codeEnvIdentifier
  into session_id/fileId (was corrupting cached file IDs in 4 locations)

Major:
- #4: Parallelize per-skill primeSkillFiles via Promise.allSettled
- #5: Add logger.warn to all empty .catch(() => {}) on cache writes

Minor:
- #7: Add logger.debug to enrichWithSkillConfigurable catch block
- #9: Use error instanceof Error guard in batchUploadCodeEnvFiles
- #10: Move enrichWithSkillConfigurable to TypeScript in packages/api
  (skillConfigurable.ts), skillDeps.js wraps with loadAuthValues dep
- #11: Reduce MAX_BINARY_BYTES from 10MB to 5MB (~11.5MB peak with b64)
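
The #1 fix's parsing can be sketched as follows. The `"{session_id}/{fileId}?entity_id=..."` identifier shape follows the commit text; the helper name and return shape are assumptions for illustration.

```typescript
// Sketch of the identifier parse: strip the ?entity_id= query string before
// splitting into session_id/fileId, so the query never leaks into the fileId.
function parseCodeEnvIdentifier(identifier: string): {
  sessionId: string;
  fileId: string;
  entityId?: string;
} {
  const [path, query = ''] = identifier.split('?');
  const [sessionId, fileId] = path.split('/');
  const entityId = new URLSearchParams(query).get('entity_id') ?? undefined;
  return { sessionId, fileId, entityId };
}
```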

* fix: forward entity_id in session probe + always register bash tool

Codex P2 (entity_id in probe): getSessionInfo now preserves and
forwards query params (including entity_id) to the per-object stat
endpoint. Without this, identifiers stored as ...?entity_id=... would
fail auth checks because the entity_id scope was dropped.

Codex P2 (bash tool availability): Remove codeEnvAvailable gate from
injectSkillCatalog. Bash tool definition is now always registered when
skills are enabled. Actual tool instance creation still happens at
execution time in loadToolsForExecution (which loads per-user
credentials). This ensures users with per-user CODE_API_KEY get
bash without requiring a global env var at init time.

Removes codeEnvAvailable from InjectSkillCatalogParams,
InitializeAgentParams, and all three controller entry points.

* fix: add debug logging to primeInvokedSkills catch, rename export alias

* fix: stub bash tool when no key + remove PDF artifact path

Codex P1 (bash tool): When CODE_API_KEY is unavailable, create a stub
tool that returns "Code execution is not available. Use read_file
instead." This prevents "tool not found" errors from the model
repeatedly calling bash_tool in no-code-env deployments while still
registering the definition for per-user credential users.

Codex P2 (PDF artifacts): Remove PDF image_url artifact path. The
host artifact pipeline processes image_url via saveBase64Image which
fails for PDFs. PDFs now fall through to the generic binary handler
("Use bash to process"). TODO comment for future document artifact
support.

Also: isImageOrPdf → isImage in early size checks (PDFs are no
longer treated as artifact candidates).
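
The stub behavior can be sketched as below. The message text comes from the commit; the factory shape and `invoke` signature are assumptions, not LibreChat's actual tool API.

```typescript
// Sketch of the no-key bash stub: keep bash_tool registered so the model never
// hits "tool not found", and return a content_and_artifact tuple so
// result.content is populated (per the follow-up fix in this history).
type ContentAndArtifact = [string, unknown];

function createBashStub(codeApiKey?: string) {
  if (codeApiKey) {
    return null; // a real bash tool would be constructed here instead
  }
  return {
    name: 'bash_tool',
    invoke: async (): Promise<ContentAndArtifact> => [
      'Code execution is not available. Use read_file instead.',
      null,
    ],
  };
}
```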

* fix: remove dead PDF_MIME constant, hoist skillToolDeps, document session_id

- #7: Remove unused PDF_MIME constant (dead code after PDF artifact removal)
- #11: Hoist skillToolDeps to module-level constant (avoid per-call allocation)
- #6: Document that CodeSessionContext.session_id is a representative value;
  ToolNode uses per-file session_id from the files array

* fix: call toolEndCallback for skill/read_file artifacts + clear codeEnvIdentifier on re-upload

Codex P1 (toolEndCallback bypass): skill and read_file handler branches
returned early, bypassing the toolEndCallback that processes artifacts
(image attachments). Now calls toolEndCallback when the result has an
artifact, using the same metadata pattern as the normal tool.invoke path.

Codex P1 (stale identifiers): upsertSkillFile now $unset's
codeEnvIdentifier alongside content and isBinary when a file is
re-uploaded. Prevents the freshness cache from returning references
to old file content after a skill file is replaced.

* fix: add session_id comment at cached path, rename skillResult to handlerResult

* fix: return content_and_artifact from bash stub so result.content is populated

* fix: deterministic skill lookup, dedup warning, and multi-session freshness check

- getSkillByName: add sort({updatedAt:-1}) so name collisions resolve
  deterministically to the most recently updated skill
- injectSkillCatalog: warn when multiple accessible skills share a name
- primeSkillFiles: check ALL distinct sessions for freshness, not just
  the first file's session, preventing stale refs after partial bulkWrite
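
The deterministic lookup can be illustrated in memory as below. The real fix adds `.sort({ updatedAt: -1 })` to the Mongoose `getSkillByName` query; this in-memory equivalent is a sketch of the same tie-break rule.

```typescript
// Sketch of the collision rule: when multiple skills share a name, resolve
// deterministically to the most recently updated one.
interface Skill {
  name: string;
  updatedAt: Date;
}

function getSkillByName(skills: Skill[], name: string): Skill | undefined {
  return skills
    .filter((s) => s.name === name)
    .sort((a, b) => b.updatedAt.getTime() - a.updatedAt.getTime())[0];
}
```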

* refactor: update icon import in Skills component

- Replaced the Scroll icon with ScrollText in the Skills component for improved clarity and consistency in the UI.

* fix: SKILL.md cache parity, gate bash_tool on code env, fix read_file too-large message

- primeSkillFiles: filter SKILL.md from returned files array on fresh
  upload so cached and non-cached paths return identical file sets
  (SKILL.md is still on disk in the session for bash access)
- injectSkillCatalog: only register bash_tool when codeEnvAvailable is
  true; thread the flag from all three CJS callers via execute_code
  capability check
- handleReadFileCall: tell the model to invoke the skill first before
  suggesting /mnt/data paths for oversized files

* fix: use EnvVar constant, deduplicate auth lookup, validate batch upload, stream byte limit

- Replace hardcoded 'LIBRECHAT_CODE_API_KEY' with EnvVar.CODE_API_KEY
  in skillConfigurable.ts and skillFiles.ts
- Resolve code API key once at run start in initialize.js and pass to
  both primeInvokedSkills and enrichWithSkillConfigurable via optional
  preResolvedCodeApiKey param, eliminating redundant loadAuthValues calls
- Add response structure validation in batchUploadCodeEnvFiles before
  accessing session_id/files to surface unexpected responses early
- Add streaming byte counter in handleReadFileCall that aborts and
  destroys the stream when accumulated bytes exceed MAX_BINARY_BYTES,
  preventing full file buffering when DB metadata is inaccurate
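
The byte-counting guard can be sketched as follows. `MAX_BINARY_BYTES` matches the commit's constant name at its post-review 5MB value; the helper operates on pre-read chunks for illustration, where the real code counts as stream chunks arrive and destroys the stream on overflow.

```typescript
// Sketch of guarded accumulation: count bytes per chunk and abort once the cap
// is exceeded, instead of trusting possibly-stale DB size metadata.
const MAX_BINARY_BYTES = 5 * 1024 * 1024;

function accumulateWithLimit(chunks: Uint8Array[], maxBytes: number = MAX_BINARY_BYTES): Uint8Array {
  const collected: Uint8Array[] = [];
  let total = 0;
  for (const chunk of chunks) {
    total += chunk.byteLength;
    if (total > maxBytes) {
      // The real code would also destroy the underlying stream here.
      throw new Error(`File exceeds ${maxBytes} bytes; aborting read`);
    }
    collected.push(chunk);
  }
  const out = new Uint8Array(total);
  let offset = 0;
  for (const c of collected) {
    out.set(c, offset);
    offset += c.byteLength;
  }
  return out;
}
```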

* refactor: update icon import in ToolsDropdown component

- Replaced the Scroll icon with ScrollText in the ToolsDropdown component for improved clarity and consistency in the UI.

* fix: partial upload failure detection, EnvVar in initialize.js, declaration ordering

- primeSkillFiles: return null (failure) when batch upload partially
  succeeds — missing bundled files would cause runtime bash/read
  failures with missing paths in code env
- initialize.js: replace hardcoded 'LIBRECHAT_CODE_API_KEY' with
  EnvVar.CODE_API_KEY imported from @librechat/agents
- initialize.js: move enabledCapabilities, accessibleSkillIds, and
  codeApiKey declarations before the toolExecuteOptions closure that
  references them (eliminates reliance on temporal dead zone hoisting)
2026-04-25 04:02:00 -04:00
Danny Avila
9ccc8d9bef
v0.8.5 (#12727) 2026-04-22 13:10:19 -07:00
Danny Avila
f5c4deac28
🔐 fix: Prefer WWW-Authenticate resource_metadata Hint for MCP OAuth (#12763)
* 📦 chore: Bump @modelcontextprotocol/sdk to v1.29.0

* ♻️ refactor: Extract WWW-Authenticate Probe Helper for MCP OAuth

* 🔐 fix: Prefer WWW-Authenticate resource_metadata Hint for MCP OAuth

Per RFC 9728 §5.1, the `resource_metadata=<url>` parameter in a 401
`WWW-Authenticate: Bearer` challenge is the authoritative protected-resource
metadata source. Path-aware `.well-known` discovery was winning over the
hint, so split deployments that serve valid-but-wrong metadata at the
path-aware endpoint stranded OAuth at defunct authorization servers.

Threads the hint through `discoverOAuthProtectedResourceMetadata` via
`opts.resourceMetadataUrl` in both startup detection and the OAuth handler,
matching the behavior of Claude Desktop, the MCP Inspector, OpenAI tooling,
and Microsoft Copilot Studio.

Fixes #12761.

* 🧵 fix: Thread OAuth-Aware fetchFn Through Resource-Metadata Probe

Without this, admin-configured `oauthHeaders` (e.g. a gateway API key that
fronts the MCP endpoint) were stripped from the probe, causing the gateway
to 401 for the wrong reason and masking the real `WWW-Authenticate` hint.

The helper now accepts a FetchLike and defaults to global fetch, so the
startup detection path is unchanged while the handler passes its OAuth-
aware wrapper through.

* 🧹 refactor: Address MCP OAuth Probe Review Findings

- Thread `fetchFn` through `probeResourceMetadataHint` so admin-configured
  `oauthHeaders` reach the probe (a gateway API key that fronts the MCP
  endpoint would otherwise 401 us for the wrong reason and hide the real
  Bearer challenge).
- Skip the redundant HEAD request in `checkAuthErrorFallback` when the
  probe already observed a 401/403; fall back to a fresh HEAD only when
  every probe attempt threw (transient network error).
- Narrow the oauth barrel: drop `export * from './resourceHint'` so the
  helper stays an internal module.
- Add `scope` extraction coverage (`Bearer scope="read write"`) and a
  403-only observation path; isolate `MCP_OAUTH_ON_AUTH_ERROR=true` in a
  dedicated suite so precise-outcome tests aren't muddied by the safety net.

* fix: Use Zod Schema in MCP Reconnection-Storm Test Tool

MCP SDK 1.28 tightened `McpServer.tool()` to require Zod schemas instead
of plain JSON-Schema objects. Swap the `{ message: { type: 'string' } }`
shape for `z.string()` so the fixture server spins up under SDK 1.29.

* 🛡️ fix: Harden MCP OAuth resource_metadata Hint Against SSRF

The `resource_metadata` URL is echoed from an untrusted MCP server, so
handing it straight to the SDK lets a malicious server redirect discovery
at private IPs, the cloud metadata service, or any host the admin did not
intend to reach. Caught by the Copilot review on #12763.

- `handler.ts`: run the hint through the same `validateOAuthUrl` /
  `allowedDomains` gate that already guards the authorization-server URL;
  drop it and fall back to path-aware discovery on rejection.
- `detectOAuth.ts`: no admin-scoped allowedDomains here, so apply a
  strict `isSSRFTarget` + DNS resolution check and silently discard any
  hint pointing at a private/loopback/metadata address.
- Tests cover both the hostname-list and DNS-resolution rejection paths
  and assert the SDK falls back to path-aware discovery unharmed.
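
An `isSSRFTarget`-style literal check might look like the sketch below. This covers only literal private/loopback/metadata addresses; the real guard in `detectOAuth.ts` additionally performs DNS resolution, which a hostname-only check cannot replace.

```typescript
// Hedged sketch of a literal SSRF target check for the untrusted
// resource_metadata hint: reject loopback, RFC 1918 ranges, and the cloud
// metadata endpoints before handing the URL to discovery.
function isSSRFTarget(urlString: string): boolean {
  let url: URL;
  try {
    url = new URL(urlString);
  } catch {
    return true; // unparseable hints are rejected outright
  }
  const host = url.hostname;
  if (host === 'localhost' || host === '169.254.169.254' || host === 'metadata.google.internal') {
    return true;
  }
  // Literal private/loopback IPv4 ranges: 10/8, 127/8, 172.16/12, 192.168/16
  const m = host.match(/^(\d+)\.(\d+)\.\d+\.\d+$/);
  if (!m) return false;
  const [a, b] = [Number(m[1]), Number(m[2])];
  return a === 10 || a === 127 || (a === 172 && b >= 16 && b <= 31) || (a === 192 && b === 168);
}
```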

* 🧪 test: Mock ~/auth in fallback Suite for Consistency

Matches the main `detectOAuth.test.ts` mock so the SSRF guards added in the
previous commit don't touch the real `~/auth` module at test time.

* 🔍 fix: Scope OAuth Fallback to HEAD + Parse Multi-Scheme WWW-Authenticate

Two codex findings on #12763:

- **P1**: the merged `authChallenge` flag was letting POST-only 401/403 flip
  the `MCP_OAUTH_ON_AUTH_ERROR` fallback, misclassifying WAF/CSRF-hardened
  endpoints (HEAD 200 + POST 403) as OAuth-required. Rename to
  `headAuthChallenge` and derive it only from the HEAD probe, matching the
  legacy fallback's HEAD-only semantics. Add a regression test.
- **P2**: the SDK's `extractWWWAuthenticateParams` only inspects the first
  scheme token, so multi-scheme headers like
  `Basic realm="api", Bearer resource_metadata="..."` silently dropped the
  authoritative Bearer hint. Fall back to a regex across the full header
  when the SDK returns nothing but Bearer is present. Add a regression
  test covering the multi-scheme case.
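
The P2 fallback can be sketched as follows. The regex shape and helper name are assumptions; the behaviors (scan the full header, require a Bearer scheme, and return undefined for malformed URLs rather than throwing) come from the commit descriptions.

```typescript
// Sketch of the multi-scheme fallback: when the SDK parser (which only reads
// the first scheme token) finds nothing, scan the entire WWW-Authenticate
// header for a Bearer resource_metadata parameter.
function extractResourceMetadataHint(wwwAuthenticate: string): string | undefined {
  if (!/\bBearer\b/i.test(wwwAuthenticate)) return undefined;
  const match = wwwAuthenticate.match(/resource_metadata="([^"]+)"/i);
  if (!match) return undefined;
  try {
    return new URL(match[1]).toString(); // malformed hints yield undefined, not a throw
  } catch {
    return undefined;
  }
}
```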

* 🧽 refactor: Tighten MCP OAuth Probe Semantics

Addresses the second external review pass plus codex P2:

- Merge the two stacked JSDoc blocks on `probeResourceMetadataHint` into one
  with a proper `@returns` section.
- Only short-circuit HEAD when it delivered the `resource_metadata` hint
  itself — a Bearer-without-params HEAD now lets POST run, since some
  servers surface their hint only on POST and we were missing it.
- Drop the unused `scope` field from `ResourceHintProbeResult`; no caller
  read it, and YAGNI beats a reserved field.
- Remove the redundant `OAUTH_ON_AUTH_ERROR` guard inside
  `checkAuthErrorFallback` — the only call site already gates on it.
- Codex P2: signal "HEAD status unknown" via `null` when the HEAD probe
  threw and POST returned non-auth. Previously that combination leaked a
  `{headAuthChallenge: false}` result and silently skipped the fallback's
  retry HEAD, which could misclassify OAuth-required servers after a
  transient HEAD failure.

Regression tests cover every path: Bearer-no-hint-on-HEAD + hint-on-POST,
multi-scheme `Basic + Bearer` headers, HEAD-threw + POST-200 retry, and
the WAF/CSRF-only POST 403 case.

* 🪥 polish: Tighten Probe Null-Guard Ordering + Add Malformed-Hint Test

Two NITs from the follow-up review:

- Move `bearerChallenge` computation after the `!wwwAuth` guard so the
  variable is only derived when it can be meaningfully `true`. The
  early-return path is now a clean unconditional exit.
- Add a regression test that asserts `Bearer resource_metadata="not-a-url"`
  yields `resourceMetadataUrl: undefined` without throwing, locking in
  the try/catch safety net in `extractHintFromHeader` and the SDK
  parser alike.
2026-04-21 16:26:20 -07:00
Danny Avila
7cf8b84778
📦 chore: Update @librechat/agents to v3.1.68 (#12752)

- Bumped the version of @librechat/agents from 3.1.67 to 3.1.68 in multiple package.json files to ensure consistency and access to the latest features and fixes.
- Updated package-lock.json to reflect the new version and maintain dependency integrity.
2026-04-20 17:45:58 -07:00
Danny Avila
b579390287
📦 chore: npm audit & bump @librechat/agents to v3.1.67 (#12710)
* chore: Update package-lock.json with new dependencies and version upgrades

- Added new dependencies for @langchain/anthropic and @langchain/core, including @anthropic-ai/sdk and fast-xml-parser.
- Updated existing dependencies for @librechat/agents, @opentelemetry/api-logs, @opentelemetry/core, and related packages to their latest versions.
- Enhanced integrity checks and licensing information for new and updated packages.

* chore: Update @librechat/agents dependency to version 3.1.66 in package.json and package-lock.json

- Bumped the version of @librechat/agents from 3.1.65 to 3.1.66 across multiple package.json files to ensure consistency and access to the latest features and fixes.

* chore: Update dompurify and fast-xml-parser dependencies to version 3.4.0 and 5.6.0 respectively

- Bumped the version of dompurify across multiple package.json files to ensure consistency and access to the latest features and security fixes.
- Updated fast-xml-parser to the latest version in relevant package.json files for improved functionality.

* chore: Update @librechat/agents dependency to version 3.1.67 in package.json and package-lock.json

- Bumped the version of @librechat/agents from 3.1.66 to 3.1.67 across multiple package.json files to ensure consistency and access to the latest features and fixes.
2026-04-16 22:44:00 -04:00
Danny Avila
60cee6eec7
🔍 fix: Anthropic Web Search Multi-Turn Issue and Attachment Results (#12651)
* 🔍 fix: Improve WebSearch Progress Handling Based on Attachment Results

- Adjusted progress handling in the WebSearch component to treat searches as complete if attachments contain results, addressing issues with server tool calls not receiving completion signals.
- Introduced `effectiveProgress` to reflect the actual state of progress based on the presence of results, enhancing the accuracy of cancellation and completion states.
- Updated related logic to ensure proper handling of search completion and finalization states based on the new progress calculations.

* fix: only override progress when not streaming

During streaming (isSubmitting=true), use actual progress so the
searching/processing/reading states display correctly. Only override
to 1 after streaming completes to prevent the cancelled check from
hiding the component.

* chore: Update @librechat/agents and mathjs dependencies to latest versions

* chore: Upgrade mathjs dependency to version 15.2.0 across package-lock and package.json files
2026-04-13 15:44:41 -04:00
Danny Avila
4f133f8955
v0.8.5-rc1 (#12569) 2026-04-09 20:06:31 -04:00
Danny Avila
f128587bb7
📦 chore: bump axios, @librechat/agents (#12598)
* chore: bump @librechat/agents to v3.1.64

* chore: update axios to version 1.15.0 across multiple packages
2026-04-09 19:48:21 -04:00
Danny Avila
7ef03391b5
📦 chore: bump nodemailer to v8.0.5 (#12587) 2026-04-09 09:59:57 -04:00
Danny Avila
b44ce264a4
📦 chore: Bump mongodb-memory-server to v11.0.1, mermaid to v11.14.0, npm audit (#12543)
* 🔧 chore: Update `mongodb-memory-server` to v11.0.1

- Bump `mongodb-memory-server` version in `package-lock.json`, `api/package.json`, and `packages/data-schemas/package.json` from 10.1.4 to 11.0.1.
- Update related dependencies in `mongodb-memory-server` and `mongodb-memory-server-core` to ensure compatibility with the new version.
- Adjust `tslib` version in `mongodb-memory-server` to 2.8.1 and `debug` to 4.4.3 for consistency.

* chore: npm audit fix

* chore: Update `mermaid` dependency to version 11.14.0 in `package-lock.json` and `client/package.json`

* fix: use deterministic timestamps in convoStructure test

MongoDB 8.x (from mongodb-memory-server v11) no longer guarantees
insertion-order return for documents with identical timestamps.
Use sequential timestamps with overrideTimestamp to ensure buildTree
processes parents before children.
2026-04-03 17:01:11 -04:00
Danny Avila
aa7e5ba051
📦 chore: bump axios to exact v1.13.6, @librechat/agents to v3.1.63, @aws-sdk/client-bedrock-runtime to v3.1013.0 (#12488)
* 🔧 chore: bump @librechat/agents to v3.1.63

* 🔧 chore: update axios dependency to exact version 1.13.6

* 🔧 chore: update @aws-sdk/client-bedrock-runtime to version 3.1013.0 in package.json and package-lock.json

- Bump the version of @aws-sdk/client-bedrock-runtime across package.json files in api and packages/api to ensure compatibility with the latest features and fixes.
- Reflect the updated version in package-lock.json to maintain consistency in dependency resolution.

* 🔧 chore: update axios dependency to version 1.13.6 across multiple package.json and package-lock.json files

- Bump axios version from ^1.13.5 to 1.13.6 in package.json and package-lock.json for improved performance and security.
- Ensure consistency in dependency resolution across the project by updating all relevant files.
2026-03-31 14:49:31 -04:00
Danny Avila
d9f216c11a
📦 chore: bump dependabot packages (#12487)
* chore: Update Handlebars and package versions in package-lock.json and package.json

- Upgrade Handlebars from version 4.7.7 to 4.7.9 in both package-lock.json and package.json for improved performance and security.
- Update librechat-data-provider version from 0.8.401 to 0.8.406 in package-lock.json.
- Update @librechat/data-schemas version from 0.0.40 to 0.0.48 in package-lock.json.

* chore: Upgrade @happy-dom/jest-environment and happy-dom versions in package-lock.json and package.json

- Update @happy-dom/jest-environment from version 20.8.3 to 20.8.9 for improved compatibility.
- Upgrade happy-dom from version 20.8.3 to 20.8.9 to ensure consistency across dependencies.

* chore: Upgrade @rollup/plugin-terser to version 1.0.0 in package-lock.json and package.json

- Update @rollup/plugin-terser from version 0.4.4 to 1.0.0 in both package-lock.json and package.json for improved performance and compatibility.
- Reflect the new version in the dependencies of data-provider and data-schemas packages.

* chore: Upgrade rollup-plugin-typescript2 to version 0.37.0 in package-lock.json and package.json

- Update rollup-plugin-typescript2 from version 0.35.0 to 0.37.0 in package-lock.json and all relevant package.json files for improved compatibility and performance.
- Adjust dependencies for semver and tslib to their latest versions in line with the rollup-plugin-typescript2 upgrade.

* chore: Upgrade nodemailer to version 8.0.4 in package-lock.json and package.json

- Update nodemailer from version 7.0.11 to 8.0.4 in both package-lock.json and package.json to enhance functionality and security.

* chore: Upgrade picomatch, yaml, brace-expansion versions in package-lock.json

- Update picomatch from version 4.0.3 to 4.0.4 across multiple dependencies for improved functionality.
- Upgrade brace-expansion from version 2.0.2 to 2.0.3 and from 5.0.3 to 5.0.5 to enhance compatibility and performance.
- Update yaml from version 1.10.2 to 1.10.3 for better stability.
2026-03-31 13:36:20 -04:00
Danny Avila
b5c097e5c7
⚗️ feat: Agent Context Compaction/Summarization (#12287)
* chore: imports/types

Add summarization config and package-level summarize handler contracts

Register summarize handlers across server controller paths

Port cursor dual-read/dual-write summary support and UI status handling

Selectively merge cursor branch files for BaseClient summary content
block detection (last-summary-wins), dual-write persistence, summary
block unit tests, and on_summarize_status SSE event handling with
started/completed/failed branches.

Co-authored-by: Cursor <cursoragent@cursor.com>

refactor: type safety

feat: add localization for summarization status messages

refactor: optimize summary block detection in BaseClient

Updated the logic for identifying existing summary content blocks to use a reverse loop for improved efficiency. Added a new test case to ensure the last summary content block is updated correctly when multiple summary blocks exist.

chore: add runName to chainOptions in AgentClient

refactor: streamline summarization configuration and handler integration

Removed the deprecated summarizeNotConfigured function and replaced it with a more flexible createSummarizeFn. Updated the summarization handler setup across various controllers to utilize the new function, enhancing error handling and configuration resolution. Improved overall code clarity and maintainability by consolidating summarization logic.

feat(summarization): add staged chunk-and-merge fallback

feat(usage): track summarization usage separately from messages

feat(summarization): resolve prompt from config in runtime

fix(endpoints): use @librechat/api provider config loader

refactor(agents): import getProviderConfig from @librechat/api

chore: code order

feat(app-config): auto-enable summarization when configured

feat: summarization config

refactor(summarization): streamline persist summary handling and enhance configuration validation

Removed the deprecated createDeferredPersistSummary function and integrated a new createPersistSummary function for MongoDB persistence. Updated summarization handlers across various controllers to utilize the new persistence method. Enhanced validation for summarization configuration to ensure provider, model, and prompt are properly set, improving error handling and overall robustness.

refactor(summarization): update event handling and remove legacy summarize handlers

Replaced the deprecated summarization handlers with new event-driven handlers for summarization start and completion across multiple controllers. This change enhances the clarity of the summarization process and improves the integration of summarization events in the application. Additionally, removed unused summarization functions and streamlined the configuration loading process.

refactor(summarization): standardize event names in handlers

Updated event names in the summarization handlers to use constants from GraphEvents for consistency and clarity. This change improves maintainability and reduces the risk of errors related to string literals in event handling.

feat(summarization): enhance usage tracking for summarization events

Added logic to track summarization usage in multiple controllers by checking the current node type. If the node indicates a summarization task, the usage type is set accordingly. This change improves the granularity of usage data collected during summarization processes.

feat(summarization): integrate SummarizationConfig into AppSummarizationConfig type

Enhanced the AppSummarizationConfig type by extending it with the SummarizationConfig type from librechat-data-provider. This change improves type safety and consistency in the summarization configuration structure.

test: add end-to-end tests for summarization functionality

Introduced a comprehensive suite of end-to-end tests for the summarization feature, covering the full LibreChat pipeline from message creation to summarization. This includes a new setup file for environment configuration and a Jest configuration specifically for E2E tests. The tests utilize real API keys and ensure proper integration with the summarization process, enhancing overall test coverage and reliability.

refactor(summarization): include initial summary in formatAgentMessages output

Updated the formatAgentMessages function to return an initial summary alongside messages and index token count map. This change is reflected in multiple controllers and the corresponding tests, enhancing the summarization process by providing additional context for each agent's response.

refactor: move hydrateMissingIndexTokenCounts to tokenMap utility

Extracted the hydrateMissingIndexTokenCounts function from the AgentClient and related tests into a new tokenMap utility file. This change improves code organization and reusability, allowing for better management of token counting logic across the application.

refactor(summarization): standardize step event handling and improve summary rendering

Refactored the step event handling in the useStepHandler and related components to utilize constants for event names, enhancing consistency and maintainability. Additionally, improved the rendering logic in the Summary component to conditionally display the summary text based on its availability, providing a better user experience during the summarization process.

feat(summarization): introduce baseContextTokens and reserveTokensRatio for improved context management

Added baseContextTokens to the InitializedAgent type to calculate the context budget based on agentMaxContextNum and maxOutputTokensNum. Implemented reserveTokensRatio in the createRun function to allow configurable context token management. Updated related tests to validate these changes and ensure proper functionality.
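The budget arithmetic described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the function name, parameter names, and the default ratio are assumptions.

```typescript
// Hypothetical sketch of the context-budget math: subtract the output
// allotment from the context window, then hold back a configurable
// reserve fraction as a safety margin. Names and default are illustrative.
function computeContextBudget(
  maxContextTokens: number,
  maxOutputTokens: number,
  reserveTokensRatio = 0.1, // assumed default; configurable via createRun
): number {
  // Tokens left for input once the model's output allotment is set aside.
  const baseContextTokens = maxContextTokens - maxOutputTokens;
  // Reserve a fraction of the remainder against tokenizer drift.
  return Math.floor(baseContextTokens * (1 - reserveTokensRatio));
}
```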

feat(summarization): add minReserveTokens, context pruning, and overflow recovery configurations

Introduced new configuration options for summarization, including minReserveTokens, context pruning settings, and overflow recovery parameters. Updated the createRun function to accommodate these new options and added a comprehensive test suite to validate their functionality and integration within the summarization process.

feat(summarization): add updatePrompt and reserveTokensRatio to summarization configuration

Introduced an updatePrompt field for updating existing summaries with new messages, enhancing the flexibility of the summarization process. Additionally, added reserveTokensRatio to the configuration schema, allowing for improved management of token allocation during summarization. Updated related tests to validate these new features.

feat(logging): add on_agent_log event handler for structured logging

Implemented an on_agent_log event handler in both the agents' callbacks and responses to facilitate structured logging of agent activities. This enhancement allows for better tracking and debugging of agent interactions by logging messages with associated metadata. Updated the summarization process to ensure proper handling of log events.

fix: remove duplicate IBalanceUpdate interface declaration

perf(usage): single-pass partition of collectedUsage

Replace two Array.filter() passes with a single for-of loop that
partitions message vs. summarization usages in one iteration.
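The single-pass partition described above looks roughly like this (the entry shape is simplified for illustration):

```typescript
interface UsageEntry {
  usage_type?: 'summarization' | 'message';
  input_tokens: number;
  output_tokens: number;
}

// One for-of pass instead of two Array.filter() passes over collectedUsage.
function partitionUsage(collectedUsage: UsageEntry[]) {
  const messageUsages: UsageEntry[] = [];
  const summarizationUsages: UsageEntry[] = [];
  for (const usage of collectedUsage) {
    if (usage.usage_type === 'summarization') {
      summarizationUsages.push(usage);
    } else {
      messageUsages.push(usage);
    }
  }
  return { messageUsages, summarizationUsages };
}
```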

fix(BaseClient): shallow-copy message content before mutating and preserve string content

Avoid mutating the original message.content array in-place when
appending a summary block. Also convert string content to a text
content part instead of silently discarding it.
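The copy-on-write append described above can be sketched as follows; the content-part shape is simplified here and the function name is hypothetical:

```typescript
type ContentPart = { type: string; text?: string };

// Append a summary block without mutating the caller's array, and
// convert string content to a text part instead of discarding it.
function appendSummaryBlock(
  content: string | ContentPart[],
  summaryBlock: ContentPart,
): ContentPart[] {
  const parts: ContentPart[] =
    typeof content === 'string'
      ? [{ type: 'text', text: content }] // preserve, don't drop
      : [...content]; // shallow copy; the original array stays untouched
  parts.push(summaryBlock);
  return parts;
}
```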

fix(ui): fix Part.tsx indentation and useStepHandler summarize-complete handling

- Fix SUMMARY else-if branch indentation in Part.tsx to match chain level
- Guard ON_SUMMARIZE_COMPLETE with didFinalize flag to avoid unnecessary
  re-renders when no summarizing parts exist
- Protect against undefined completeData.summary instead of unsafe spread

fix(agents): use strict enabled check for summarization handlers

Change summarizationConfig?.enabled !== false to === true so handlers
are not registered when summarizationConfig is undefined.

chore: fix initializeClient JSDoc and move DEFAULT_RESERVE_RATIO to module scope

refactor(Summary): align collapse/expand behavior with Reasoning component

- Single render path instead of separate streaming vs completed branches
- Use useMessageContext for isSubmitting/isLatestMessage awareness so
  the "Summarizing..." label only shows during active streaming
- Default to collapsed (matching Reasoning), user toggles to expand
- Add proper aria attributes (aria-hidden, role, aria-controls, contentId)
- Hide copy button while actively streaming

feat(summarization): default to self-summarize using agent's own provider/model

When no summarization config is provided (neither in librechat.yaml nor
on the agent), automatically enable summarization using the agent's own
provider and model. The agents package already provides default prompts,
so no prompt configuration is needed.

Also removes the dead resolveSummarizationLLMConfig in summarize.ts
(and its spec) — run.ts buildAgentContext is the single source of truth
for summarization config resolution. Removes the duplicate
RuntimeSummarizationConfig local type in favor of the canonical
SummarizationConfig from data-provider.

chore: schema and type cleanup for summarization

- Add trigger field to summarizationAgentOverrideSchema so per-agent
  trigger overrides in librechat.yaml are not silently stripped by Zod
- Remove unused SummarizationStatus type from runs.ts
- Make AppSummarizationConfig.enabled non-optional to reflect the
  invariant that loadSummarizationConfig always sets it

refactor(responses): extract duplicated on_agent_log handler

refactor(run): use agents package types for summarization config

Import SummarizationConfig, ContextPruningConfig, and
OverflowRecoveryConfig from @librechat/agents and use them to
type-check the translation layer in buildAgentContext. This ensures
the config object passed to the agent graph matches what it expects.

- Use `satisfies AgentSummarizationConfig` on the config object
- Cast contextPruningConfig and overflowRecoveryConfig to agents types
- Properly narrow trigger fields from DeepPartial to required shape

feat(config): add maxToolResultChars to base endpoint schema

Add maxToolResultChars to baseEndpointSchema so it can be configured
on any endpoint in librechat.yaml. Resolved during agent initialization
using getProviderConfig's endpoint resolution: custom endpoint config
takes precedence, then the provider-specific endpoint config, then the
shared `all` config.

Passed through to the agents package ToolNode, which uses it to cap
tool result length before it enters the context window. When not
configured, the agents package computes a sensible default from
maxContextTokens.
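The precedence chain described above (custom endpoint, then provider endpoint, then shared `all`) reduces to a nullish-coalescing fallback; the names below are illustrative, the real resolution lives in getProviderConfig:

```typescript
interface EndpointConfig {
  maxToolResultChars?: number;
}

// Resolve maxToolResultChars with the documented precedence:
// custom endpoint config > provider endpoint config > shared `all` config.
function resolveMaxToolResultChars(
  custom?: EndpointConfig,
  provider?: EndpointConfig,
  all?: EndpointConfig,
): number | undefined {
  return (
    custom?.maxToolResultChars ??
    provider?.maxToolResultChars ??
    all?.maxToolResultChars
  );
}
```

When every level is unset, the function returns undefined and the agents package falls back to its maxContextTokens-derived default.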

fix(summarization): forward agent model_parameters in self-summarize default

When no explicit summarization config exists, the self-summarize
default now forwards the agent's model_parameters as the
summarization parameters. This ensures provider-specific settings
(e.g. Bedrock region, credentials, endpoint host) are available
when the agents package constructs the summarization LLM.

fix(agents): register summarization handlers by default

Change the enabled gate from === true to !== false so handlers
register when no explicit summarization config exists. This aligns
with the self-summarize default where summarization is always on
unless explicitly disabled via enabled: false.

refactor(summarization): let agents package inherit clientOptions for self-summarize

Remove model_parameters forwarding from the self-summarize default.
The agents package now reuses the agent's own clientOptions when the
summarization provider matches the agent's provider, inheriting all
provider-specific settings (region, credentials, proxy, etc.)
automatically.

refactor(summarization): use MessageContentComplex[] for summary content

Unify summary content to always use MessageContentComplex[] arrays,
matching the pattern used by on_message_delta. No more string | array
unions — content is always an array of typed blocks ({ type: 'text',
text: '...' } for text, { type: 'reasoning_content', ... } for
reasoning).

Agents package:
- SummaryContentBlock.content: MessageContentComplex[] (was string)
- tokenCount now optional (not sent on deltas)
- Removed reasoning field — reasoning is now a content block type
- streamAndCollect normalizes all chunks to content block arrays
- Delta events pass content blocks directly

LibreChat:
- SummaryContentPart.content: Agents.MessageContentComplex[]
- Updated Part.tsx, Summary.tsx, useStepHandler.ts, BaseClient.js
- Summary.tsx derives display text from content blocks via useMemo
- Aggregator uses simple array spread

refactor(summarization): enhance summary handling and text extraction

- Updated BaseClient.js to improve summary text extraction, accommodating both legacy and new content formats.
- Modified summarization logic to ensure consistent handling of summary content across different message formats.
- Adjusted test cases in summarization.e2e.spec.js to utilize the new summary text extraction method.
- Refined SSE useStepHandler to initialize summary content as an array.
- Updated configuration schema by removing unused minReserveTokens field.
- Cleaned up SummaryContentPart type by removing rangeHash property.

These changes streamline the summarization process and ensure compatibility with various content structures.

refactor(summarization): streamline usage tracking and logging

- Removed direct checks for summarization nodes in ModelEndHandler and replaced them with a dedicated markSummarizationUsage function for better readability and maintainability.
- Updated OpenAIChatCompletionController and responses handlers to utilize the new markSummarizationUsage function for setting usage types.
- Enhanced logging functionality by ensuring the logger correctly handles different log levels.
- Introduced a new useCopyToClipboard hook in the Summary component to encapsulate clipboard copy logic, improving code reusability and clarity.

These changes improve the overall structure and efficiency of the summarization handling and logging processes.

refactor(summarization): update summary content block documentation

- Removed outdated comment regarding the last summary content block in BaseClient.js.
- Added a new comment to clarify the purpose of the findSummaryContentBlock method, ensuring consistency in documentation.

These changes enhance code clarity and maintainability by providing accurate descriptions of the summarization logic.

refactor(summarization): update summary content structure in tests

- Modified the summarization content structure in e2e tests to use an array format for text, aligning with recent changes in summary handling.
- Updated test descriptions to clarify the behavior of context token calculations, ensuring consistency and clarity in the tests.

These changes enhance the accuracy and maintainability of the summarization tests by reflecting the updated content structure.

refactor(summarization): remove legacy E2E test setup and configuration

- Deleted the e2e-setup.js and jest.e2e.config.js files, which contained legacy configurations for E2E tests using real API keys.
- Introduced a new summarization.e2e.ts file that implements comprehensive E2E backend integration tests for the summarization process, utilizing real AI providers and tracking summaries throughout the run.

These changes streamline the testing framework by consolidating E2E tests into a single, more robust file while removing outdated configurations.

refactor(summarization): enhance E2E tests and error handling

- Added a cleanup step to force exit after all tests to manage Redis connections.
- Updated the summarization model to 'claude-haiku-4-5-20251001' for consistency across tests.
- Improved error handling in the processStream function to capture and return processing errors.
- Enhanced logging for cross-run tests and tight context scenarios to provide better insights into test execution.

These changes improve the reliability and clarity of the E2E tests for the summarization process.

refactor(summarization): enhance test coverage for maxContextTokens behavior

- Updated run-summarization.test.ts to include a new test case ensuring that maxContextTokens does not exceed user-defined limits, even when calculated ratios suggest otherwise.
- Modified summarization.e2e.ts to replace legacy UsageMetadata type with a more appropriate type for collectedUsage, improving type safety and clarity in the test setup.

These changes improve the robustness of the summarization tests by validating context token constraints and refining type definitions.

feat(summarization): add comprehensive E2E tests for summarization process

- Introduced a new summarization.e2e.test.ts file that implements extensive end-to-end integration tests for the summarization pipeline, covering the full flow from LibreChat to agents.
- The tests utilize real AI providers and include functionality to track summaries during and between runs.
- Added necessary cleanup steps to manage Redis connections post-tests and ensure proper exit.

These changes enhance the testing framework by providing robust coverage for the summarization process, ensuring reliability and performance under real-world conditions.

fix(service): import logger from winston configuration

- Removed the import statement for logger from '@librechat/data-schemas' and replaced it with an import from '~/config/winston'.
- This change ensures that the logger is correctly sourced from the updated configuration, improving consistency in logging practices across the application.

refactor(summary): simplify Summary component and enhance token display

- Removed the unused `meta` prop from the `SummaryButton` component to streamline its interface.
- Updated the token display logic to use a localized string for better internationalization support.
- Adjusted the rendering of the `meta` information to improve its visibility within the `Summary` component.

These changes enhance the clarity and usability of the Summary component while ensuring better localization practices.

feat(summarization): add maxInputTokens configuration for summarization

- Introduced a new `maxInputTokens` property in the summarization configuration schema to control the amount of conversation context sent to the summarizer, with a default value of 10000.
- Updated the `createRun` function to utilize the new `maxInputTokens` setting, allowing for more flexible summarization based on agent context.

These changes enhance the summarization capabilities by providing better control over input token limits, improving the overall summarization process.

refactor(summarization): simplify maxInputTokens logic in createRun function

- Updated the logic for the `maxInputTokens` property in the `createRun` function to directly use the agent's base context tokens when the resolved summarization configuration does not specify a value.
- This change streamlines the configuration process and enhances clarity in how input token limits are determined for summarization.

These modifications improve the maintainability of the summarization configuration by reducing complexity in the token calculation logic.

feat(summary): enhance Summary component to display meta information

- Updated the SummaryContent component to accept an optional `meta` prop, allowing for additional contextual information to be displayed above the main content.
- Adjusted the rendering logic in the Summary component to utilize the new `meta` prop, improving the visibility of supplementary details.

These changes enhance the user experience by providing more context within the Summary component, making it clearer and more informative.

refactor(summarization): standardize reserveRatio configuration in summarization logic

- Replaced instances of `reserveTokensRatio` with `reserveRatio` in the `createRun` function and related tests to unify the terminology across the codebase.
- Updated the summarization configuration schema to reflect this change, ensuring consistency in how the reserve ratio is defined and utilized.
- Removed the per-agent override logic for summarization configuration, simplifying the overall structure and enhancing clarity.

These modifications improve the maintainability and readability of the summarization logic by standardizing the configuration parameters.

* fix: circular dependency of `~/models`

* chore: update logging scope in agent log handlers

Changed log scope from `[agentus:${data.scope}]` to `[agents:${data.scope}]` in both the callbacks and responses controllers to ensure consistent logging format across the application.

* feat: calibration ratio

* refactor(tests): update summarizationConfig tests to reflect changes in enabled property

Modified tests to check for the new `summarizationEnabled` property instead of the deprecated `enabled` field in the summarization configuration. This change ensures that the tests accurately validate the current configuration structure and behavior of the agents.

* feat(tests): add markSummarizationUsage mock for improved test coverage

Introduced a mock for the markSummarizationUsage function in the responses unit tests to enhance the testing of summarization usage tracking. This addition supports better validation of summarization-related functionalities and ensures comprehensive test coverage for the agents' response handling.

* refactor(tests): simplify event handler setup in createResponse tests

Removed redundant mock implementations for event handlers in the createResponse unit tests, streamlining the setup process. This change enhances test clarity and maintainability while ensuring that the tests continue to validate the correct behavior of usage tracking during on_chat_model_end events.

* refactor(agents): move calibration ratio capture to finally block

Reorganized the logic for capturing the calibration ratio in the AgentClient class to ensure it is executed in the finally block. This change guarantees that the ratio is captured even if the run is aborted, enhancing the reliability of the response message persistence. Removed redundant code and improved clarity in the handling of context metadata.

* refactor(agents): streamline bulk write logic in recordCollectedUsage function

Removed redundant bulk write operations and consolidated document handling in the recordCollectedUsage function. The logic now combines all documents into a single bulk write operation, improving efficiency and reducing error handling complexity. Updated logging to provide consistent error messages for bulk write failures.

* refactor(agents): enhance summarization configuration resolution in createRun function

Streamlined the summarization configuration logic by introducing a base configuration and allowing for overrides from agent-specific settings. This change improves clarity and maintainability, ensuring that the summarization configuration is consistently applied while retaining flexibility for customization. Updated the handling of summarization parameters to ensure proper integration with the agent's model and provider settings.

* refactor(agents): remove unused tokenCountMap and streamline calibration ratio handling

Eliminated the unused tokenCountMap variable from the AgentClient class to enhance code clarity. Additionally, streamlined the logic for capturing the calibration ratio by using optional chaining and a fallback value, ensuring that context metadata is consistently defined. This change improves maintainability and reduces potential confusion in the codebase.

* refactor(agents): extract agent log handler for improved clarity and reusability

Refactored the agent log handling logic by extracting it into a dedicated function, `agentLogHandler`, enhancing code clarity and reusability across different modules. Updated the event handlers in both the OpenAI and responses controllers to utilize the new handler, ensuring consistent logging behavior throughout the application.

* test: add summarization event tests for useStepHandler

Implemented a series of tests for the summarization events in the useStepHandler hook. The tests cover scenarios for ON_SUMMARIZE_START, ON_SUMMARIZE_DELTA, and ON_SUMMARIZE_COMPLETE events, ensuring proper handling of summarization logic, including message accumulation and finalization. This addition enhances test coverage and validates the correct behavior of the summarization process within the application.

* refactor(config): update summarizationTriggerSchema to use enum for type validation

Changed the type of the `type` field in the summarizationTriggerSchema from a string to an enum with a single value 'token_count'. This modification enhances type safety and ensures that only valid types are accepted in the configuration, improving overall clarity and maintainability of the schema.

* test(usage): add bulk write tests for message and summarization usage

Implemented tests for the bulk write functionality in the recordCollectedUsage function, covering scenarios for combined message and summarization usage, summarization-only usage, and message-only usage. These tests ensure correct document handling and token rollup calculations, enhancing test coverage and validating the behavior of the usage tracking logic.

* refactor(Chat): enhance clipboard copy functionality and type definitions in Summary component

Updated the Summary component to improve the clipboard copy functionality by handling clipboard permission errors. Refactored type definitions for SummaryProps to use a more specific type, enhancing type safety. Adjusted the SummaryButton and FloatingSummaryBar components to accept isCopied and onCopy props, promoting better separation of concerns and reusability.

* chore(translations): remove unused "Expand Summary" key from English translations

Deleted the "Expand Summary" key from the English translation file to streamline the localization resources and improve clarity in the user interface. This change helps maintain an organized and efficient translation structure.

* refactor: adjust token counting for Claude model to account for API discrepancies

Implemented a correction factor for token counting when using the Claude model, addressing discrepancies between Anthropic's API and local tokenizer results. This change ensures accurate token counts by applying a scaling factor, improving the reliability of token-related functionalities.

* refactor(agents): implement token count adjustment for Claude model messages

Added a method to adjust token counts for messages processed by the Claude model, applying a correction factor to align with API expectations. This enhancement improves the accuracy of token counting, ensuring reliable functionality when interacting with the Claude model.

* refactor(agents): add token counting for media content in messages

Introduced a new method to estimate token costs for image and document blocks in messages, improving the accuracy of token counting. This enhancement ensures that media content is properly accounted for, particularly for the Claude model, by integrating additional token estimation logic for various content types. Updated the token counting function to utilize this new method, enhancing overall reliability and functionality.

* chore: fix missing import

* fix(agents): clamp baseContextTokens and document reserve ratio change

Prevent negative baseContextTokens when maxOutputTokens exceeds the
context window (misconfigured models). Document the 10%→5% default
reserve ratio reduction introduced alongside summarization.
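The clamp described above is a one-liner; sketched here with illustrative names:

```typescript
// Guard against misconfigured models where maxOutputTokens exceeds the
// context window: clamp at zero instead of producing a negative budget.
function clampedBaseContextTokens(
  maxContextTokens: number,
  maxOutputTokens: number,
): number {
  return Math.max(0, maxContextTokens - maxOutputTokens);
}
```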

* fix(agents): include media tokens in hydrated token counts

Add estimateMediaTokensForMessage to createTokenCounter so the hydration
path (used by hydrateMissingIndexTokenCounts) matches the precomputed
path in AgentClient.getTokenCountForMessage. Without this, messages
containing images or documents were systematically undercounted during
hydration, risking context window overflow.

Add 34 unit tests covering all block-type branches of
estimateMediaTokensForMessage.

* fix(agents): include summarization output tokens in usage return value

The returned output_tokens from recordCollectedUsage now reflects all
billed LLM calls (message + summarization). Previously, summarization
completions were billed but excluded from the returned metadata, causing
a discrepancy between what users were charged and what the response
message reported.

* fix(tests): replace process.exit with proper Redis cleanup in e2e test

The summarization E2E test used process.exit(0) to work around a Redis
connection opened at import time, which killed the Jest runner and
bypassed teardown. Use ioredisClient.quit() and keyvRedisClient.disconnect()
for graceful cleanup instead.

* fix(tests): update getConvo imports in OpenAI and response tests

Refactor test files to import getConvo from the main models module instead of the Conversation submodule. This change ensures consistency across tests and simplifies the import structure, enhancing maintainability.

* fix(clients): improve summary text validation in BaseClient

Refactor the summary extraction logic to ensure that only non-empty summary texts are considered valid. This change enhances the robustness of the message processing by utilizing a dedicated method for summary text retrieval, improving overall reliability.

* fix(config): replace z.any() with explicit union in summarization schema

Model parameters (temperature, top_p, etc.) are constrained to
primitive types rather than the policy-violating z.any().
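The narrowed schema constrains model-parameter values to primitives; a dependency-free mirror of that constraint (the real code uses a Zod union along the lines of z.union([z.string(), z.number(), z.boolean()])):

```typescript
// Only primitive values are valid model parameters (temperature,
// top_p, etc.) — objects, arrays, and null are rejected.
type ModelParamValue = string | number | boolean;

function isModelParamValue(value: unknown): value is ModelParamValue {
  const t = typeof value;
  return t === 'string' || t === 'number' || t === 'boolean';
}
```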

* refactor(agents): deduplicate CLAUDE_TOKEN_CORRECTION constant

Export from the TS source in packages/api and import in the JS client,
eliminating the static class property that could drift out of sync.

* refactor(agents): eliminate duplicate selfProvider in buildAgentContext

selfProvider and provider were derived from the same expression with
different type casts. Consolidated to a single provider variable.

* refactor(agents): extract shared SSE handlers and restrict log levels

- buildSummarizationHandlers() factory replaces triplicated handler
  blocks across responses.js and openai.js
- agentLogHandlerObj exported from callbacks.js for consistent reuse
- agentLogHandler restricted to an allowlist of safe log levels
  (debug, info, warn, error) instead of accepting arbitrary strings

* fix(SSE): batch summarize deltas, add exhaustiveness check, conditional error announcement

- ON_SUMMARIZE_DELTA coalesces rapid-fire renders via requestAnimationFrame
  instead of calling setMessages per chunk
- Exhaustive never-check on TStepEvent catches unhandled variants at
  compile time when new StepEvents are added
- ON_SUMMARIZE_COMPLETE error announcement only fires when a summary
  part was actually present and removed

* feat(agents): persist instruction overhead in contextMeta and seed across runs

Extend contextMeta with instructionOverhead and toolCount so the
provider-observed instruction overhead is persisted on the response message
and seeded into the pruner on subsequent runs. This enables the pruner to
use a calibrated budget from the first call instead of waiting for a
provider observation, preventing the ratio collapse caused by local
tokenizer overestimating tool schema tokens.

The seeded overhead is only used when encoding and tool count match
between runs, ensuring stale values from different configurations
are discarded.

* test(agents): enhance OpenAI test mocks for summarization handlers

Updated the OpenAI test suite to include additional mock implementations for summarization handlers, including buildSummarizationHandlers, markSummarizationUsage, and agentLogHandlerObj. This improves test coverage and ensures consistent behavior during testing.

* fix(agents): address review findings for summarization v2

Cancel rAF on unmount to prevent stale Recoil writes from dead
component context. Clear orphaned summarizing:true parts when
ON_SUMMARIZE_COMPLETE arrives without a summary payload. Add null
guard and safe spread to agentLogHandler. Handle Anthropic-format
base64 image/* documents in estimateMediaTokensForMessage. Use
role="region" for expandable summary content. Add .describe() to
contextMeta Zod fields. Extract duplicate usage loop into helper.

* refactor: simplify contextMeta to calibrationRatio + encoding only

Remove instructionOverhead and toolCount from cross-run persistence —
instruction tokens change too frequently between runs (prompt edits,
tool changes) for a persisted seed to be reliable. The intra-run
calibration in the pruner still self-corrects via provider observations.
contextMeta now stores only the tokenizer-bias ratio and encoding,
which are stable across instruction changes.

* test(SSE): enhance useStepHandler tests for ON_SUMMARIZE_COMPLETE behavior

Updated the test for ON_SUMMARIZE_COMPLETE to clarify that it finalizes the existing part with summarizing set to false when the summary is undefined. Added assertions to verify the correct behavior of message updates and the state of summary parts.

* refactor(BaseClient): remove handleContextStrategy and truncateToolCallOutputs functions

Eliminated the handleContextStrategy method from BaseClient to streamline message handling. Also removed the truncateToolCallOutputs function from the prompts module, simplifying the codebase and improving maintainability.

* refactor: add AGENT_DEBUG_LOGGING option and refactor token count handling in BaseClient

Introduced AGENT_DEBUG_LOGGING to .env.example for enhanced debugging capabilities. Refactored token count handling in BaseClient by removing the handleTokenCountMap method and simplifying token count updates. Updated AgentClient to log detailed token count recalculations and adjustments, improving traceability during message processing.

* chore: update dependencies in package-lock.json and package.json files

Bumped versions of several dependencies, including @librechat/agents to ^3.1.62 and various AWS SDK packages to their latest versions. This ensures compatibility and incorporates the latest features and fixes.

* chore: imports order

* refactor: extract summarization config resolution from buildAgentContext

* refactor: rename and simplify summarization configuration shaping function

* refactor: replace AgentClient token counting methods with single-pass pure utility

Extract getTokenCount() and getTokenCountForMessage() from AgentClient
into countFormattedMessageTokens(), a pure function in packages/api that
handles text, tool_call, image, and document content types in one loop.

- Decompose estimateMediaTokensForMessage into block-level helpers
  (estimateImageDataTokens, estimateImageBlockTokens, estimateDocumentBlockTokens)
  shared by both estimateMediaTokensForMessage and the new single-pass function
- Remove redundant per-call getEncoding() resolution (closure captures once)
- Remove deprecated gpt-3.5-turbo-0301 model branching
- Drop this.getTokenCount guard from BaseClient.sendMessage

* refactor: streamline token counting in createTokenCounter function

Simplified the createTokenCounter function by removing the media token estimation and directly calculating the token count. This change enhances clarity and performance by consolidating the token counting logic into a single pass, while maintaining compatibility with Claude's token correction.

* refactor: simplify summarization configuration types

Removed the AppSummarizationConfig type and directly used SummarizationConfig in the AppConfig interface. This change streamlines the type definitions and enhances consistency across the codebase.

* chore: import order

* fix: summarization event handling in useStepHandler

- Cancel pending summarizeDeltaRaf in clearStepMaps to prevent stale
  frames firing after map reset or component unmount
- Move announcePolite('summarize_completed') inside the didFinalize
  guard so screen readers only announce when finalization actually occurs
- Remove dead cleanup closure returned from stepHandler useCallback body
  that was never invoked by any caller

* fix: estimate tokens for non-PDF/non-image base64 document blocks

Previously estimateDocumentBlockTokens returned 0 for unrecognized MIME
types (e.g. text/plain, application/json), silently underestimating
context budget. Fall back to character-based heuristic or countTokens.
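The fallback described above might look like the following sketch. The 4-characters-per-token heuristic, the base64 length recovery, and the function name are assumptions, not the exact implementation:

```typescript
// Estimate tokens for document blocks whose MIME type has no dedicated
// estimator (text/plain, application/json, ...). Previously these
// returned 0 and silently underestimated the context budget.
function estimateDocumentTokens(mimeType: string, base64Data: string): number {
  if (mimeType === 'application/pdf' || mimeType.startsWith('image/')) {
    // Handled by dedicated PDF/image estimators elsewhere (not shown).
    return 0;
  }
  // base64 inflates size by ~4/3; recover the approximate byte count,
  // then apply the common ~4 characters-per-token heuristic.
  const approxChars = Math.floor((base64Data.length * 3) / 4);
  return Math.max(1, Math.ceil(approxChars / 4));
}
```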

* refactor: return cloned usage from markSummarizationUsage

Avoid mutating LangChain's internal usage_metadata object by returning
a shallow clone with the usage_type tag. Update all call sites in
callbacks, openai, and responses controllers to use the returned value.
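The clone-instead-of-mutate pattern is small enough to sketch directly; the function and field names mirror the commit, but the shape of the usage object is an illustrative assumption.

```javascript
// Sketch: return a tagged shallow clone rather than mutating the
// usage_metadata object owned by the upstream library (LangChain here).
function markSummarizationUsage(usage) {
  if (usage == null) {
    return undefined;
  }
  // Spread copies the top-level fields; the caller's object is left untouched.
  return { ...usage, usage_type: 'summarization' };
}
```

Call sites then use the returned value instead of relying on in-place mutation.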

* refactor: consolidate debug logging loops in buildMessages

Merge the two sequential O(n) debug-logging passes over orderedMessages
into a single pass inside the map callback where all data is available.

* refactor: narrow SummaryContentPart.content type

Replace broad Agents.MessageContentComplex[] with the specific
Array<{ type: ContentTypes.TEXT; text: string }> that all producers
and consumers already use, improving compile-time safety.

* refactor: use single output array in recordCollectedUsage

Have processUsageGroup append to a shared array instead of returning
separate arrays that are spread into a third, reducing allocations.

* refactor: use for...in in hydrateMissingIndexTokenCounts

Replace Object.entries with for...in to avoid allocating an
intermediate tuple array during token map hydration.
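The allocation argument above can be seen in a short sketch; the map shape and function signature are illustrative assumptions, not the actual LibreChat code.

```javascript
// Sketch: Object.entries(obj) builds an array of [key, value] tuples up front,
// while for...in walks the keys directly with no intermediate array.
function hydrateMissingIndexTokenCounts(tokenMap, fallbackCounts) {
  for (const index in fallbackCounts) {
    if (tokenMap[index] == null) {
      tokenMap[index] = fallbackCounts[index]; // fill only missing indices
    }
  }
  return tokenMap;
}
```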
2026-03-21 14:28:56 -04:00
Danny Avila
0736ff2668
v0.8.4 (#12339)
* 🔖 chore: Bump version to v0.8.4

- App version: v0.8.4-rc1 → v0.8.4
- @librechat/api: 1.7.26 → 1.7.27
- @librechat/client: 0.4.55 → 0.4.56
- librechat-data-provider: 0.8.400 → 0.8.401
- @librechat/data-schemas: 0.0.39 → 0.0.40

* chore: bun.lock file bumps
2026-03-20 18:01:00 -04:00
Danny Avila
e442984364
💣 fix: Harden against falsified ZIP metadata in ODT parsing (#12320)
* security: replace JSZip metadata guard with yauzl streaming decompression

The ODT decompressed-size guard was checking JSZip's private
_data.uncompressedSize fields, which are populated from the ZIP central
directory — attacker-controlled metadata. A crafted ODT with falsified
uncompressedSize values bypassed the 50MB cap entirely, allowing
content.xml decompression to exhaust Node.js heap memory (DoS).

Replace JSZip with yauzl for ODT extraction. The new extractOdtContentXml
function uses yauzl's streaming API: it lazily iterates ZIP entries,
opens a decompression stream for content.xml, and counts real bytes as
they arrive from the inflate stream. The stream is destroyed the moment
the byte count crosses ODT_MAX_DECOMPRESSED_SIZE, aborting the inflate
before the full payload is materialised in memory.

- Remove jszip from direct dependencies (still transitive via mammoth)
- Add yauzl + @types/yauzl
- Update zip-bomb test to verify streaming abort with DEFLATE payload
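The core of the defense described above is counting real bytes as they leave the inflate stream and aborting at the cap. A minimal sketch of that guard, with the yauzl wiring (`lazyEntries`, `openReadStream`) reduced to a comment and all names illustrative:

```javascript
// Sketch: count bytes as they arrive from the decompression stream and signal
// abort the moment the cap is crossed, so a zip bomb is never fully
// materialised in memory. The 50MB cap matches the commit message.
const ODT_MAX_DECOMPRESSED_SIZE = 50 * 1024 * 1024;

function createDecompressionGuard(cap = ODT_MAX_DECOMPRESSED_SIZE) {
  let total = 0;
  return {
    // Returns true while under the cap; the caller destroys the stream on false.
    admit(chunk) {
      total += chunk.length;
      return total <= cap;
    },
    bytesSeen() {
      return total;
    },
  };
}

// Illustrative wiring against a readable inflate stream (e.g. from yauzl's
// openReadStream callback):
//   const guard = createDecompressionGuard();
//   readStream.on('data', (chunk) => {
//     if (!guard.admit(chunk)) {
//       readStream.destroy(new Error('decompressed size cap exceeded'));
//     } else {
//       chunks.push(chunk);
//     }
//   });
```

The key difference from the JSZip approach is that the count comes from actual inflated bytes, not from attacker-controlled central-directory metadata.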

* fix: close file descriptor leaks and declare jszip test dependency

- Use a shared `finish()` helper in extractOdtContentXml that calls
  zipfile.close() on every exit path (success, size cap, missing entry,
  openReadStream errors, zipfile errors). Without this, any error path
  leaked one OS file descriptor permanently — uploading many malformed
  ODTs could exhaust the process FD limit (a distinct DoS vector).
- Add jszip to devDependencies so the zip-bomb test has an explicit
  dependency rather than relying on mammoth's transitive jszip.
- Update JSDoc to document that all exit paths close the zipfile.
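The shared `finish()` pattern above amounts to an idempotent settle-and-close helper. A sketch under stated assumptions: `zipfile` is anything with a `close()` method, and the promise executor names are illustrative.

```javascript
// Sketch: one helper that closes the zipfile and settles the promise exactly
// once, so every exit path (success, size cap, missing entry, stream error)
// releases the file descriptor, and a late error cannot double-close.
function makeFinisher(zipfile, resolve, reject) {
  let settled = false;
  return function finish(err, result) {
    if (settled) return; // later calls from other exit paths are no-ops
    settled = true;
    zipfile.close();
    if (err) reject(err);
    else resolve(result);
  };
}
```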

* fix: move yauzl from dependencies to peerDependencies

Matches the established pattern for runtime parser libraries in
packages/api: mammoth, pdfjs-dist, and xlsx are all peerDependencies
(provided by the consuming /api workspace) with devDependencies for
testing. yauzl was incorrectly placed in dependencies.

* fix: add yauzl to /api dependencies to satisfy peer dep

packages/api declares yauzl as a peerDependency; /api is the consuming
workspace that must provide it at runtime, matching the pattern used
for mammoth, pdfjs-dist, and xlsx.
2026-03-19 22:13:40 -04:00
Danny Avila
b5a55b23a4
📦 chore: NPM audit packages (#12286)
* 🔧 chore: Update dependencies in package-lock.json and package.json

- Bump @aws-sdk/client-bedrock-runtime from 3.980.0 to 3.1011.0 and update related dependencies.
- Update fast-xml-parser version from 5.3.8 to 5.5.6 in package.json.
- Adjust various @aws-sdk and @smithy packages to their latest versions for improved functionality and security.

* 🔧 chore: Update @librechat/agents dependency to version 3.1.57 in package.json and package-lock.json

- Bump @librechat/agents from 3.1.56 to 3.1.57 across multiple package files for consistency.
- Remove axios dependency from package.json as it is no longer needed.
2026-03-17 17:04:18 -04:00
Danny Avila
1e1a3a8f8d v0.8.4-rc1 (#12285)
- App version: v0.8.3 → v0.8.4-rc1
- @librechat/api: 1.7.25 → 1.7.26
- @librechat/client: 0.4.54 → 0.4.55
- librechat-data-provider: 0.8.302 → 0.8.400
- @librechat/data-schemas: 0.0.38 → 0.0.39
2026-03-17 16:08:48 -04:00
Danny Avila
8271055c2d
📦 chore: Bump @librechat/agents to v3.1.56 (#12258)
* 📦 chore: Bump `@librechat/agents` to v3.1.56

* chore: resolve type error in URL property check in isMCPDomainAllowed function
2026-03-15 23:51:41 -04:00
Danny Avila
cbdc6f6060
📦 chore: Bump NPM Audit Packages (#12227)
* 🔧 chore: Update file-type dependency to version 21.3.2 in package-lock.json and package.json

- Upgraded the "file-type" package from version 18.7.0 to 21.3.2 to ensure compatibility with the latest features and security updates.
- Added new dependencies related to the updated "file-type" package, enhancing functionality and performance.

* 🔧 chore: Upgrade undici dependency to version 7.24.1 in package-lock.json and package.json

- Updated the "undici" package from version 7.18.2 to 7.24.1 across multiple package files to ensure compatibility with the latest features and security updates.

* 🔧 chore: Upgrade yauzl dependency to version 3.2.1 in package-lock.json

- Updated the "yauzl" package from version 3.2.0 to 3.2.1 to incorporate the latest features and security updates.

* 🔧 chore: Upgrade hono dependency to version 4.12.7 in package-lock.json

- Updated the "hono" package from version 4.12.5 to 4.12.7 to incorporate the latest features and security updates.
2026-03-14 03:36:03 -04:00
Danny Avila
9a5d7eaa4e
refactor: Replace tiktoken with ai-tokenizer (#12175)
* chore: Update dependencies by adding ai-tokenizer and removing tiktoken

- Added ai-tokenizer version 1.0.6 to package.json and package-lock.json across multiple packages.
- Removed tiktoken version 1.0.15 from package.json and package-lock.json in the same locations, streamlining dependency management.

* refactor: replace js-tiktoken with ai-tokenizer

- Added support for 'claude' encoding in the AgentClient class to improve model compatibility.
- Updated Tokenizer class to utilize 'ai-tokenizer' for both 'o200k_base' and 'claude' encodings, replacing the previous 'tiktoken' dependency.
- Refactored tests to reflect changes in tokenizer behavior and ensure accurate token counting for both encoding types.
- Removed deprecated references to 'tiktoken' and adjusted related tests for improved clarity and functionality.

* chore: remove tiktoken mocks from DALLE3 tests

- Eliminated mock implementations of 'tiktoken' from DALLE3-related test files to streamline test setup and align with recent dependency updates.
- Adjusted related test structures to ensure compatibility with the new tokenizer implementation.

* chore: Add distinct encoding support for Anthropic Claude models

- Introduced a new method `getEncoding` in the AgentClient class to handle the specific BPE tokenizer for Claude models, ensuring compatibility with the distinct encoding requirements.
- Updated documentation to clarify the encoding logic for Claude and other models.

* docs: Update return type documentation for getEncoding method in AgentClient

- Clarified the return type of the getEncoding method to specify that it can return an EncodingName or undefined, enhancing code readability and type safety.

* refactor: Tokenizer class and error handling

- Exported the EncodingName type for broader usage.
- Renamed encodingMap to encodingData for clarity.
- Improved error handling in getTokenCount method to ensure recovery attempts are logged and return 0 on failure.
- Updated countTokens function documentation to specify the use of 'o200k_base' encoding.

* refactor: Simplify encoding documentation and export type

- Updated the getEncoding method documentation to clarify the default behavior for non-Anthropic Claude models.
- Exported the EncodingName type separately from the Tokenizer module for improved clarity and usage.

* test: Update text processing tests for token limits

- Adjusted test cases to handle smaller text sizes, changing scenarios from ~120k tokens to ~20k tokens for both the real tokenizer and countTokens functions.
- Updated token limits in tests to reflect new constraints, ensuring tests accurately assess performance and call reduction.
- Enhanced console log messages for clarity regarding token counts and reductions in the updated scenarios.

* refactor: Update Tokenizer imports and exports

- Moved Tokenizer and countTokens exports to the tokenizer module for better organization.
- Adjusted imports in memory.ts to reflect the new structure, ensuring consistent usage across the codebase.
- Updated memory.test.ts to mock the Tokenizer from the correct module path, enhancing test accuracy.

* refactor: Tokenizer initialization and error handling

- Introduced an async `initEncoding` method to preload tokenizers, improving performance and accuracy in token counting.
- Updated `getTokenCount` to handle uninitialized tokenizers more gracefully, ensuring proper recovery and logging on errors.
- Removed deprecated synchronous tokenizer retrieval, streamlining the overall tokenizer management process.

* test: Enhance tokenizer tests with initialization and encoding checks

- Added `beforeAll` hooks to initialize tokenizers for 'o200k_base' and 'claude' encodings before running tests, ensuring proper setup.
- Updated tests to validate the loading of encodings and the correctness of token counts for both 'o200k_base' and 'claude'.
- Improved test structure to deduplicate concurrent initialization calls, enhancing performance and reliability.
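The deduplicated-initialization behavior tested above can be sketched as a promise cache: concurrent `initEncoding` calls for the same encoding share one in-flight load. The injected `loader` is a stand-in; this is not the actual ai-tokenizer API, whose loading details differ.

```javascript
// Sketch: cache the in-flight promise itself, so a second concurrent caller
// joins the same load instead of triggering a duplicate initialization.
const inflight = new Map();

function initEncoding(name, loader) {
  if (!inflight.has(name)) {
    inflight.set(name, Promise.resolve().then(() => loader(name)));
  }
  return inflight.get(name);
}
```

Keeping the promise (rather than the resolved value) in the map is what deduplicates calls that arrive before the first load finishes.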
2026-03-10 23:14:52 -04:00
Danny Avila
cfbe812d63
v0.8.3 (#12161)
*  v0.8.3

* chore: Bump package versions and update configuration

- Updated package versions for @librechat/api (1.7.25), @librechat/client (0.4.54), librechat-data-provider (0.8.302), and @librechat/data-schemas (0.0.38).
- Incremented configuration version in librechat.example.yaml to 1.3.6.

* feat: Add OpenRouter headers to OpenAI configuration

- Introduced 'X-OpenRouter-Title' and 'X-OpenRouter-Categories' headers in the OpenAI configuration for enhanced compatibility with OpenRouter services.
- Updated related tests to ensure the new headers are correctly included in the configuration responses.

* chore: Update package versions and dependencies

- Bumped versions for several dependencies including @eslint/eslintrc to 3.3.4, axios to 1.13.5, express to 5.2.1, and lodash to 4.17.23.
- Updated @librechat/backend and @librechat/frontend versions to 0.8.3.
- Added new dependencies: turbo and mammoth.
- Adjusted various other dependencies to their latest versions for improved compatibility and performance.
2026-03-09 15:19:57 -04:00
Danny Avila
cfaa6337c1
📦 chore: Bump express-rate-limit to v8.3.0 (#12115) 2026-03-06 19:18:35 -05:00
Danny Avila
afb35103f1
📦 chore: Bump multer to v2.1.1
- Updated `multer` dependency from version 2.1.0 to 2.1.1 in both package.json and package-lock.json to incorporate the latest improvements and fixes.
2026-03-04 21:49:13 -05:00
Danny Avila
7e85cf71bd
v0.8.3-rc2 (#12027) 2026-03-04 09:28:20 -05:00