Mirror of https://github.com/danny-avila/LibreChat.git
Synced 2026-05-13 16:07:30 +00:00, 1830 commits
Commit log (SHA1 | Message):

278cec0fc7 | feat: Add granular access control to shared links via ACL system

68d80f3324 | ✨ v0.8.6-rc1 (#13094)

8eb9de011f | 📦 chore: bump @librechat/agents to v3.1.86, npm audit, build fix (#13105)
* 📦 chore: Bump `@librechat/agents` to v3.1.86 in package-lock.json and package.json files
* 📦 chore: Update dependencies in package-lock.json to latest versions, including @protobufjs/codegen, @protobufjs/inquire, @protobufjs/utf8, and protobufjs
* 📦 chore: Add `librechat-data-provider` dependency in package.json and package-lock.json, and update build dependencies in turbo.json

6b5596ec36 | 🍪 refactor: Refresh CloudFront Media Cookies (#13091)
* fix: refresh CloudFront media cookies
* fix: satisfy changed-file lint
* fix: centralize CloudFront image retry
* fix: honor base path for CloudFront refresh
* fix: bypass auth refresh for CloudFront cookie retry
* fix: pass app auth header to CloudFront retry
* test: cover CloudFront refresh with OpenID reuse
* fix: avoid duplicate CloudFront refresh retries
* fix: clear CloudFront scope cookie with matching flags

7b9a57a467 | 🛡️ fix: Harden OpenID Session Token Reuse (#13086)
* fix: Harden OpenID Session Token Reuse
* fix: Preserve OpenID Session Token On Forced Refresh
* fix: Gate Preserved OpenID Id Token By Expiry
* test: Cover OpenID Id Token Expiry Buffer

17a08224e1 | 🍪 fix: Refresh CloudFront Cookies On Auth Refresh (#13083)
* fix: Refresh CloudFront Cookies On Auth Refresh
* fix: Exclude Federated Tokens From Refresh Lookup

3e7262cfe0 | 📦 chore: Bump @librechat/agents to v3.1.85 and mermaid to v11.15.0 (#13079)
* 📦 chore: Update @librechat/agents to version 3.1.85 in package-lock.json and package.json files
* 📦 chore: Update mermaid to version 11.15.0 in package.json and package-lock.json

0a7255b234 | 🎭 feat: Support OpenID Audience On Refresh Grants (#13077)

c385f2ba88 | 📦 feat: Configure Skill Import Size Limit (#13073)
* fix: configure skill import size limit
* fix: validate skill import size in ui
* fix: align skill import size boundary
* fix: show exact skill import limit

8735c1763c | 🧵 fix: Preserve Upload Context Across Multipart Routes (#13072)
* fix skill multipart imports under strict isolation
* fix file upload context after multipart parsing
* fix skill upload tenant resolution
* fix rejected upload cleanup

0449c423a2 | 🗝️ fix: Enforce Skill Share Role Permission (#13062)
* fix: enforce skill share role permission
* fix: preserve share capability bypass
* refactor: move share policy middleware to api package
* style: order share middleware imports
* fix: satisfy share middleware type checks
* test: cover share policy resource types

7631366f52 | 🪵 chore: Log Subagent Limit Hits (#13068)

70b6bb69d3 | 🧬 fix: Bound Subagent Expansion (#13064)
* fix: Bound subagent expansion
* fix: Preserve subagent path depth

7129b1b1e4 | 📜 refactor: Improve Skill Handling Logs (#13057)
* refactor: Streamline batch upload error handling in `uploadCodeEnvFile`
* refactor: Enhance session info error logging in `getSessionInfo`
* refactor: Update error logging to use `logAxiosError` in various agent handlers and skill file processing functions
* refactor: Consolidate missing resource checks in `createToolExecuteHandler` for better clarity

763fab2e3e | 🛂 fix: Restrict OpenID JWT Bearer Reuse (#13052)
* fix: Restrict OpenID JWT bearer reuse
* fix: Require signed OpenID reuse cookie
* fix: Bind OpenID reuse to authenticated user

5bab22d236 | 🛡️ fix: Gate Bash PTC Capabilities (#13053)

030dc98a1d | ☁️ fix: Enable Azure Agent Provider Uploads (#13045)

c3ec23f9b8 | 🌐 feat: Support Vertex AI Multi-Region Endpoints (#13044)
* feat: support Vertex AI multi-region endpoints
* fix: sync Vertex endpoint with final location

715a4a5fc1 | 🧰 refactor: Use Bash PTC for Agent Tools (#13042)
* fix: Use Bash PTC for programmatic agent tools
* fix: Preserve legacy PTC event calls

c67e2b54dc | 🔐 feat: Mint Code API Auth Tokens (#13028)
* feat: Mint CodeAPI auth tokens
* style: Format CodeAPI download route
* fix: Prune CodeAPI token cache
* fix: Propagate CodeAPI managed auth
* test: Mock CodeAPI auth in traversal suite
* fix: Pass auth context to invoked skill cache
* feat: Mint CodeAPI plan context
* chore: Refresh CodeAPI auth guidance
* fix: Guard OpenID JWT fallback
* fix: Default CodeAPI JWT tenant in single-tenant mode
* chore: Update @librechat/agents to version 3.1.84 in package-lock.json and package.json files
* chore: Standardize references to Code API in comments and tests

8a654dc8b1 | 🧭 feat: Add OpenRouter Prompt Cache Setting (#13029)
* feat: add OpenRouter prompt cache setting
* fix: type OpenRouter schema lookup
* fix: honor proxied OpenRouter prompt cache
* refactor: flatten endpoint schema fallback
* chore: Bump `@librechat/agents` to version 3.1.82
* fix: Default OpenRouter prompt cache params
* test: Align OpenRouter config expectations
* test: Update OpenRouter default cache expectation
* fix: Align OpenRouter Detection
* chore: Bump `@librechat/agents` to version 3.1.83
* docs: Remove OpenRouter prompt cache setup note
* refactor: Use provider enum for OpenRouter defaults
* style: Format OpenRouter defaults guard

0d5c2b339a | 🛟 fix: Allow Empty modelSpecs.list to Unstick Admin-Panel Saves (#13036)
* 🛟 fix: Allow empty modelSpecs.list to unstick admin-panel saves
The unconditional `.min(1)` on `specsConfigSchema.list` rejected an empty list even when `enforce: false`, leaving admin panels (which save fields path-granularly) with no atomic way to clear the list once it had been populated. Once an admin reached `list: [entry]` and deleted the only entry, every subsequent save failed schema validation and the section became stuck.
Relax the schema to `.default([])`. The `.min(1)` was added in #5218 as part of bundled cleanup, not as a deliberate rule. Every consumer of `modelSpecs.list` already handles the empty/undefined case (`?.list`, `?? []`, length-checked), and `processModelSpecs` short-circuits to `undefined` when the list is empty, so the runtime treats it as "no specs configured." No call site is load-bearing on length >= 1.
Tighten the `buildEndpointOption.js` enforce guard from `?.list && ?.enforce` to `?.list?.length && ?.enforce`. Empty arrays are truthy in JS, so the existing guard would have entered the enforce branch on `list: []` and returned "No model spec selected" or "Invalid model spec" had `processModelSpecs` ever been bypassed.
Add a runtime warn in `processModelSpecs` when `enforce: true` is configured alongside an empty list, so operators see the resulting "enforcement disabled" state in logs rather than silently getting a permissive runtime.
Add coverage for the empty-list parse path in `config-schemas.spec.ts` and for the empty-list-with-enforce branch in `buildEndpointOption.spec.js`.
* chore: update import order in config-schemas.spec.ts
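
A minimal zod sketch of the schema relaxation and guard tightening described above; `specsConfigSchema` and the guard logic follow the commit's description, while the entry shape and helper name are illustrative only.

```ts
import { z } from 'zod';

// Illustrative entry shape only; the real modelSpec schema has many more fields.
const specSchema = z.object({ name: z.string(), label: z.string().optional() });

export const specsConfigSchema = z.object({
  enforce: z.boolean().default(false),
  // Before: z.array(specSchema).min(1), which rejected `list: []` even with `enforce: false`.
  // After: default to an empty array so admin panels can atomically clear the list.
  list: z.array(specSchema).default([]),
});

// Mirrors the buildEndpointOption.js guard tightening: empty arrays are truthy,
// so check `.length` before entering the enforce branch.
export function shouldEnforceSpecs(modelSpecs?: z.infer<typeof specsConfigSchema>): boolean {
  return Boolean(modelSpecs?.list?.length && modelSpecs?.enforce);
}
```
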

ac3600cdd7 | 🗂️ fix: Remove Generated Code Files From Prompt Context (#13037)

d90567204e | 🛟 fix: persist Vertex Gemini 3 thoughtSignatures across DB round-trips (#13026)
When a tool round-trip is interrupted between the tool result and the
model's text reply (user aborted, network drop, pod restart, ...) and
LibreChat persists the partial assistant message, the next conversation
turn reconstructs an `AIMessage` from `formatAgentMessages` that has
`tool_calls` populated but no `additional_kwargs.signatures`. Vertex
Gemini 3 rejects the resumed request with 400 because the most recent
historical functionCall has no `thought_signature`.
## Storage shape
Capture as `Record<tool_call_id, signature>` rather than a flat array.
This addresses the codex P1 review:
> When an assistant turn contains multiple sequential tool-call batches,
> this restoration path writes all persisted thoughtSignatures onto only
> the last tool-bearing AIMessage. Vertex/Gemini validates signatures
> for each step in the current tool-calling turn, so earlier
> functionCall steps reconstructed without their signature can still
> fail with 400.
A single agent run can fire multiple `chat_model_end` events when the
loop cycles the LLM with intervening tool results — each cycle owns a
distinct `tool_call_id`. Per-id storage maps each signature back onto
the right reconstructed `AIMessage`, not just the last one.
## Mapping
`additional_kwargs.signatures` is a flat array indexed by *response part*
(text + functionCall interleaved). `tool_calls` is just the function
calls in their original order. Non-empty signatures correspond 1:1 with
tool_calls in order — see `partsToSignatures` in
`@langchain/google-common`. Single-pass walk maps `signatures[i]` (when
non-empty) onto the i-th `tool_call.id`.
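
A hedged sketch of that single-pass walk, assuming (per the sentence above) that non-empty signatures pair 1:1 with `tool_calls` in order; the helper name and shapes here are illustrative, not the actual `ModelEndHandler` code.

```ts
interface ToolCallLike {
  id?: string;
}

// Walk signatures and tool_calls in tandem: empty entries belong to text parts
// and do not consume a tool_call slot; non-empty entries map onto the next id.
export function collectThoughtSignatures(
  signatures: Array<string | undefined>,
  toolCalls: ToolCallLike[],
  collected: Record<string, string> = {},
): Record<string, string> {
  let toolCallIndex = 0;
  for (const signature of signatures) {
    if (!signature) {
      continue;
    }
    const id = toolCalls[toolCallIndex++]?.id;
    if (id) {
      collected[id] = signature;
    }
  }
  return collected;
}
```
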
## Pipeline
| Stage | File | Change |
|---|---|---|
| Capture | callbacks.js | `ModelEndHandler` accepts `Record<string,string>` map; walks signatures + tool_calls in tandem to record per-id. Gated on the map being provided — non-Vertex flows are no-op (and also no-op even when provided, since they don't emit signatures). |
| Plumbing | initialize.js | Allocate `collectedThoughtSignatures = {}`, share with handler + client. Always allocated; the JSDoc explicitly documents that it stays empty for non-Vertex providers. |
| Surface | client.js | `sendCompletion` returns `metadata.thoughtSignatures` when the map has entries; falls through unchanged when empty. |
| Persist | (existing BaseClient.handleRespCompletion) | Writes `metadata` from `sendCompletion` onto `responseMessage.metadata`. Mongoose `Mixed` — no migration. |
| Restore | formatMessages.js | Track every tool-bearing AIMessage produced from a TMessage. For each, build a position-aligned `additional_kwargs.signatures` array (empty placeholders for tool_calls without a stored sig). Agents' `fixThoughtSignatures` dispatches non-empty entries to functionCall parts in order. |
## Live verification
- **Single-step:** real Vertex `gemini-3.1-flash-lite-preview` resume-after-tool case. With fix ✅ / without ❌ 400.
- **Multi-step (codex case):** real two-step agent loop (list /tmp → echo done). Each step's signature attaches to its own reconstructed AIMessage. With fix ✅ / without ❌ 400.
- **Cross-provider:** Anthropic Claude haiku-4.5 + OpenAI gpt-5-mini accept the persisted/restored shape unchanged.
## Tests
`modelEndHandler.spec.js` (new) — 6 tests:
- maps non-empty signatures onto tool_call_ids in order
- accumulates per-id across multiple `model_end` events (multi-step)
- no-op when `collectedThoughtSignatures` is null
- no-op when `signatures` field missing (non-Vertex)
- no-op when `tool_calls` missing
- preserves existing `collectedUsage` array contract
`formatAgentMessages.spec.js` — 6 new tests:
- restores onto the AIMessage that owns the tool_call
- per-step attachment for multi-step turns (codex review case)
- preserves tool_call ordering when signatures are partial
- no-op when metadata.thoughtSignatures absent
- no-op when assistant has no tool_calls
- no-op when stored ids don't match any current tool_call
37 passing across 3 suites; 15 existing formatAgentMessages tests unchanged.
## Compatibility
- Backward-compatible — restore gated on `metadata.thoughtSignatures` being a populated object; capture gated on the map being provided.
- No schema migration — uses `Message.metadata: Mixed` already in place.
- Cross-provider safe — non-Vertex providers tolerate the field (verified live against Anthropic + OpenAI converters).
- Pairs with [agents#159](https://github.com/danny-avila/agents/pull/159) for full coverage on histories that mix plain-text and toolcall AIMessages.

e262219c8f | 🔄 feat: Cross-Origin Admin OAuth Refresh (#13007)
* feat(admin-panel): add /api/admin/oauth/refresh endpoint for cross-origin BFF refresh
The cookie-based /api/auth/refresh controller can't be reached cross-origin
from a separately-hosted admin panel because the refresh-token cookie isn't
sent on cross-origin fetches. Add a dedicated POST /api/admin/oauth/refresh
endpoint that accepts the refresh token in the request body, exchanges it
at the IdP via openid-client refreshTokenGrant, and returns the same
response shape as /api/admin/oauth/exchange.
Implementation lives in packages/api/src/auth/refresh.ts as the
applyAdminRefresh helper. It validates the refreshed tokenset, looks up the
admin user by openidId (with optional user_id disambiguation when multiple
user docs share an openidId), mints the bearer via an injected mintToken
hook, and runs an optional onRefreshSuccess hook for downstream forks that
need to update server-side session state.
The default mintToken passed by the OSS route signs an HS256 LibreChat JWT
via generateToken so admin panel callers continue to use the existing local
JWT strategy. Forks that prefer to hand back an IdP-signed token (e.g. for
deployments where the JWT auth gate is JWKS-only) override mintToken
without changing the helper or the route.
Also threads expiresAt through AdminExchangeData and AdminExchangeResponse
so admin panel clients can drive proactive refresh before the bearer
expires. Defaults the OSS exchange flow to Date.now() + sessionExpiry.
* fix(admin-panel): address review feedback on /api/admin/oauth/refresh
mintToken now returns {token, expiresAt} so the minter is authoritative
for the bearer's lifetime instead of deriving it from the IdP `exp` claim.
The refresh response would otherwise lie to the admin panel and trigger
premature or late refresh cycles.
The helper now falls back to the inbound refresh_token when the IdP omits
one on rotation (Auth0 with rotation off, Microsoft personal accounts).
Without this the admin panel loses its refresh capability after one cycle.
Other hardening:
resolveAdminUser validates user_id with Types.ObjectId.isValid before
hitting Mongoose, avoiding a CastError that would surface as a generic
500 with no useful information for the client.
If user_id resolves to a user whose openidId does not match the refreshed
sub, throw USER_ID_MISMATCH (401) instead of silently swapping in a
different user matching the sub.
Wrap tokenset.claims() in readClaims so an IdP that returns a tokenset
without a usable id_token gets mapped to CLAIMS_INCOMPLETE (502) rather
than bubbling a raw exception.
findUsers now uses the same SAFE_USER_PROJECTION as getUserById so the
fallback path no longer pulls password/totpSecret/backupCodes into memory.
Removed dead fields (email on AdminRefreshClaims, id_token on
RefreshTokenset) and fixed import ordering per AGENTS.md.
Adds packages/api/src/auth/refresh.spec.ts: 18 tests covering the happy
path, userId disambiguation (match, invalid ObjectId, null, mismatch),
all error branches (IDP_INCOMPLETE, CLAIMS_INCOMPLETE for both throw and
missing sub, USER_NOT_FOUND, mintToken/onRefreshSuccess propagation), and
refresh-token preservation under rotation/no-rotation.
* chore(admin-panel): polish per re-review on /api/admin/oauth/refresh
readClaims now logs the original error name/message at warn before mapping
to CLAIMS_INCOMPLETE so a programming bug doesn't get silently rebadged
as an IdP problem in production logs.
The route handler's JSDoc now enumerates every error response (status +
error_code) so admin-panel implementors can plan for each branch without
reading the source.
Tightens the helper's surface: removed the now-dead `exp` field from
`AdminRefreshClaims` (only `sub` is read since the v2 mintToken refactor),
and tightened `AdminRefreshDeps.findUsers`'s projection parameter from
`string | null` to `string` so the contract matches actual usage.
Test polish: the userId-resolves-to-null fallthrough test now asserts the
exact `findUsers` and `getUserById` call arguments so a regression in the
fallthrough query shape is caught. The "skips onRefreshSuccess" test now
asserts a populated response shape rather than just `toBeDefined`.
Declined per prior triage and re-confirmed: a role guard inside
`applyAdminRefresh` (downstream `/api/admin/*` already enforces
ACCESS_ADMIN via requireCapability) and moving the IdP grant call out of
the JS route into TypeScript (matches existing oauth.js / openidStrategy
pattern; package-boundary refactor belongs in a separate PR).
* fix(admin-panel): reject /api/admin/oauth/refresh tokensets from foreign issuers
When the route handler can resolve the configured OpenID issuer, it now
threads it into applyAdminRefresh as expectedIssuer. The helper compares
that against the tokenset claims iss (after normalizeOpenIdIssuer on
both sides to absorb trailing-slash differences) and throws
ISSUER_MISMATCH (401) on mismatch.
The check is skipped when either side is unset so behavior is unchanged
for IdPs that don't return iss on a refresh-grant id_token, and for
older deployments where the OpenID config doesn't expose serverMetadata.
This is a defense-in-depth measure for the refresh path only. The
deeper OIDC posture fix (binding IUser lookup to (sub, iss) as a pair)
is pre-existing debt across openidStrategy.js and the regular exchange
flow as well, and belongs in a separate PR with the schema change and
backfill migration.
* fix(admin-panel): bind refresh user lookup to (sub, iss) and handle getOpenIdConfig throw
Two fixes raised on the PR thread that I previously misdescribed:
The user lookup in resolveAdminUser was keyed on openidId alone, so a
tokenset from a different issuer that happened to share the same sub
could resolve to a local user from a different IdP. Now exports
getIssuerBoundConditions and isUserIssuerAllowed from openid.ts (the
helpers findOpenIDUser already uses) and reuses them. The findUsers
filter becomes ($or of getIssuerBoundConditions for openidId) when an
expectedIssuer is provided, with the same legacy backward-compat
clause for users whose openidIssuer field was never populated. The
direct user_id path now also checks isUserIssuerAllowed and throws
USER_ID_MISMATCH if the stored openidIssuer disagrees with the
configured issuer.
The route's getOpenIdConfig() call was previously documented as
returning null when uninitialized; the actual implementation throws.
That made the if (!openIdConfig) guard unreachable, and an unconfigured
server would surface as 500 INTERNAL_ERROR rather than 503
OPENID_NOT_CONFIGURED. Wraps the call in try/catch so the documented
503 response is what callers actually receive.
Adds 4 tests covering the new lookup binding behavior.
* fix(admin-panel): re-check ACCESS_ADMIN on /api/admin/oauth/refresh
The IdP refresh token can outlive a capability/role change, so the
initial requireAdminAccess on the OAuth callback isn't sufficient.
Inject canAccessAdmin via the existing capability model
(hasCapability with SystemCapabilities.ACCESS_ADMIN, matching
requireAdminAccess so custom roles and user grants are honored)
and gate token minting on it. Capability backend errors are
warn-and-denied to keep the bearer-mint path fail-closed.
* fix(admin-panel): scope /api/admin/oauth/refresh to the request tenant
The same (openidId, openidIssuer) pair is allowed across tenants by
the user schema's unique index. The refresh helper was wrapping both
the direct getUserById and the fallback findUsers in runAsSystem,
bypassing tenant isolation, so an IdP identity that exists in two
tenants could resolve to the wrong tenant's user and mint a JWT
bound to that tenant.
Drop the runAsSystem wrappers, add a trusted tenantId option to
applyAdminRefresh, AND it into the fallback findUsers filter, and
assert it against the direct getUserById result. Mount
preAuthTenantMiddleware on the refresh route so the deployment's
X-Tenant-Id header drives the trusted tenant via ALS. Single-tenant
deploys (no header) keep the existing openidId-only behaviour.
Adds TENANT_MISMATCH (401) and a regression covering duplicate
(sub, iss) across tenants plus the direct-userId tenant assertion.
* fix(admin-panel): gate /api/admin/oauth/refresh on OPENID_REUSE_TOKENS
The OSS refreshController only refreshes OpenID tokensets when
OPENID_REUSE_TOKENS is enabled. The body-based admin variant was
unconditionally calling refreshTokenGrant, which made the flag
ineffective for the admin OAuth flow and let admin sessions keep
renewing in deployments that explicitly turned token reuse off.
Add the same isEnabled(process.env.OPENID_REUSE_TOKENS) check up
front and return 403 TOKEN_REUSE_DISABLED so the admin panel BFF
can surface the configuration mismatch instead of silently churning
through retries.
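
A hedged sketch of the refresh helper's core contract as described across the commits above; every type and dependency name here is an assumption, and the real implementation in `packages/api/src/auth/refresh.ts` also threads issuer binding, tenant scoping, and the ACCESS_ADMIN capability check.

```ts
// Assumed shapes only, for illustration of the flow described in this PR.
interface MintResult {
  token: string;
  expiresAt: number;
}

interface AdminRefreshDeps {
  refreshTokenGrant: (refreshToken: string) => Promise<{
    refresh_token?: string;
    claims: () => { sub?: string };
  }>;
  findUserByOpenidId: (openidId: string, userId?: string) => Promise<{ _id: string } | null>;
  mintToken: (user: { _id: string }) => Promise<MintResult>;
  onRefreshSuccess?: (user: { _id: string }) => Promise<void>;
}

export async function applyAdminRefresh(
  refreshToken: string,
  deps: AdminRefreshDeps,
  userId?: string,
) {
  const tokenset = await deps.refreshTokenGrant(refreshToken);
  const sub = tokenset.claims().sub;
  if (!sub) {
    throw new Error('CLAIMS_INCOMPLETE');
  }
  const user = await deps.findUserByOpenidId(sub, userId);
  if (!user) {
    throw new Error('USER_NOT_FOUND');
  }
  // The minter is authoritative for the bearer's lifetime (per the review fix above).
  const { token, expiresAt } = await deps.mintToken(user);
  await deps.onRefreshSuccess?.(user);
  return {
    token,
    expiresAt,
    // Preserve the inbound refresh token when the IdP does not rotate it.
    refresh_token: tokenset.refresh_token ?? refreshToken,
  };
}
```
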

22890771cf | 🧭 fix: Preserve Resend Files for Subagents (#13030)

a107520109 | 📦 chore: Bump @librechat/agents to v3.1.81 & npm audit fix (#13027)
* 📦 chore: Bump `@librechat/agents` to v3.1.81
* chore: npm audit fix

119ac9c944 | 📦 chore: bump @librechat/agents to v3.1.80 (#13021)

93c4ef4ba8 | 🧱 refactor: typed CodeEnvRef + kind discriminator + principal-aware sandbox cache (#12960)
* 🧱 refactor: typed CodeEnvRef + kind discriminator + tenant-aware sandbox cache Final cutover for the LibreChat ↔ codeapi sandbox file identity. Replaces the magic string `${session_id}/${file_id}?entity_id=...` with a typed, discriminated `CodeEnvRef`. Pre-release lockstep deploy with codeapi #1455 and agents #148; no legacy aliases retained. ## Final shape ```ts type CodeEnvRef = | { kind: 'skill'; id: string; storage_session_id: string; file_id: string; version: number } | { kind: 'agent'; id: string; storage_session_id: string; file_id: string } | { kind: 'user'; id: string; storage_session_id: string; file_id: string }; ``` `kind` drives codeapi's sessionKey: `<tenant>:<kind>:<id>[✌️<version>]` for shared kinds, `<tenant>:user:<userId>` for user-private (auth context provides `userId`). `version` is statically required for `kind: 'skill'` and forbidden otherwise via discriminated union — constraint holds at compile time on every consumer, not just codeapi's runtime validator. `id` is sessionKey-meaningful for `'skill'` / `'agent'`; informational only for `'user'` (codeapi resolves user identity from auth context). ## What changed - `packages/data-provider/src/codeEnvRef.ts` — discriminated union + `CODE_ENV_KINDS` const-tuple keeps the runtime list and TS union locked together. - Schemas: `metadata.codeEnvRef` and `SkillFile.codeEnvRef` enums tightened to `['skill', 'agent', 'user']`. - `primeSkillFiles` writes `kind: 'skill'`, `id: skill._id`, `version: skill.version`. Cache-hit path reads `codeEnvRef` directly. Bumping `skill.version` on edit naturally invalidates the prior cache entry under the new sessionKey. - `processCodeOutput` writes `kind: 'user'`, `id: req.user.id`. Output bucket is always user-scoped, regardless of which skill the execution invoked. New regression test pins the asymmetry. - `primeFiles` reupload preserves `kind`/`id`/`version?` from the existing ref so a skill-cache-miss reupload doesn't silently demote to user bucket. - `crud.js` upload functions (`uploadCodeEnvFile` / `batchUploadCodeEnvFiles`) thread `kind`/`id`/`version?` to the multipart form (codeapi #1455 option α). Without these on the wire, codeapi falls back to user bucketing and skill-cache invalidation never fires. Client-side validation mirrors codeapi's validator. - `Files/process.js` — chat attachments use `kind: 'user'`; agent setup files use `kind: 'agent'`. - Drops `entity_id` everywhere (struct, schema sub-docs, write paths, upload form fields). Drops `'system'` from the kind enum (no emitter ever existed). ## Test plan - [x] `cd packages/data-provider && npx jest src/codeEnvRef.spec` — 4 / 4 - [x] `cd packages/data-schemas && npx jest` — 1447 / 1447 - [x] `cd packages/api && npx jest src/agents` — 81 / 81 in skillFiles + handlers + resources - [x] `cd api && npx jest server/services/Files server/controllers/agents` — 436 / 436 - [x] `cd api && npx jest server/services/Files/Code` — 98 / 98 (incl. new "outputs are user-scoped regardless of which skill the execution invoked" regression and "reupload forwards kind/id/version from existing ref") - [x] `npx tsc --noEmit -p packages/data-{provider,schemas}/tsconfig.json && npx tsc --noEmit -p packages/api/tsconfig.json` — clean (only pre-existing unrelated dev errors in storage/balance, untouched here) ## Deploy notes - **24h cache-miss burst** on first deploy. Inputs (skill caches re-prime under new sessionKey shape) and outputs (any pre-Phase C skill-output cached files become unreadable). Bounded by codeapi's 24h TTL. 
- **Lockstep with codeapi #1455 and agents #148.** Either repo can land first since no aliases to drain, but the three deploys must overlap within the same maintenance window. - **`@librechat/agents` bump to `3.1.79-dev.0`** required after agents #148 lands and is published. ## What this enables Auth bridge work (JWT-based tenant/user identity between LC and codeapi) — codeapi now derives sessionKey purely from `req.codeApiAuthContext.{ tenantId, userId}`, so the next chapter is replacing the header-asserted user identity with a verified-claim path. * 🩹 fix: persist execute_code uploads under codeEnvRef metadata key Codex review P1 (chatgpt-codex-connector). `Files/process.js` was storing the upload result under `metadata.fileIdentifier` even though: - `uploadCodeEnvFile` now returns `{ storage_session_id, file_id }`, not the legacy magic string. - The post-cutover schema (`File.metadata.codeEnvRef`) only declares `codeEnvRef` — mongoose strict mode silently strips unknown keys. - All readers (`primeFiles`, `getCodeFilesByIds`, `categorizeFileForToolResources`, controller filtering) check `metadata.codeEnvRef`. Net effect of the bug: chat-attached and agent-setup execute_code files would lose their sandbox reference on save, and primeFiles would skip them on subsequent code-execution turns — the file blob would still be available locally but never re-mounted in the sandbox. Fix: construct the full `CodeEnvRef` (`{ kind, id, storage_session_id, file_id }`) at the write site and persist under `metadata.codeEnvRef`. `BaseClient`'s "is this a code-env file" presence check accepts the new shape alongside the legacy `fileIdentifier` for back-compat with any pre-cutover records still in the database. Mirrors the same change in `processAttachments.spec.ts` (which re-implements the BaseClient logic for testability). New regression tests in `process.spec.js` cover three cases: - chat attachments (`messageAttachment=true`) → `kind: 'user'` - agent setup (`messageAttachment=false`) → `kind: 'agent'` - legacy `fileIdentifier` key is NOT persisted (would be schema-stripped) * 🩹 fix: read storage_session_id on primed file refs (Codex P1) Codex review (chatgpt-codex-connector). After Phase B's per-file `session_id` → `storage_session_id` rename, `primeFiles` emits the new field — but `seedCodeFilesIntoSessions` was still reading `files[0].session_id` for the representative session and `f.session_id` for the dedupe key. In runs with only primed attachments (no skill seed), `representativeSessionId` was `undefined`, the function returned the unchanged map, and `seedCodeFilesIntoSessions` silently dropped the entire batch. The first `execute_code` call then started without `_injected_files` and the agent couldn't see prior-turn artifacts. Fix: - `codeFilesSession.ts`: read `f.storage_session_id` for both the dedupe key and the representative session id. JSDoc updated to match the new field name. - `callbacks.js`: the two output-file persistence paths read `file.session_id` to pass to `processCodeOutput` — switch to `file.storage_session_id`. The original comment explicitly says this should be the STORAGE session, which is exactly the field Phase B renamed. - `codeFilesSession.spec.ts`: fixture builder uses `storage_session_id` and `kind: 'user'` to match the post-cutover `CodeEnvFile` shape. Lockstep coordination: this matches the post-bump shape of `@librechat/agents` 3.1.79+. CI tsc errors against the currently-pinned 3.1.78 are expected and resolve when the dep bumps in this PR before merge. 
* 📦 chore: Bump `@librechat/agents` to version 3.1.80-dev.0 in package-lock and package.json files * 🪪 fix: thread kind/id/version through codeapi /download URLs (Phase C α) Symmetric fix for the upload-side wire change in 537725a. Codeapi's `sessionAuth` middleware now requires `kind`/`id`/`version?` on every download/freshness URL — without them it 400s with "kind must be one of: skill, agent, user" before serving the file. Three sites construct codeapi-side URLs that go through `sessionAuth`: - `processCodeOutput` (`Files/Code/process.js`): `/download/<sess>/<id>` for freshly-generated sandbox outputs. Always `kind: 'user'` + `id: req.user.id` — code-output files are always user-private, regardless of which skill the run invoked. - `getSessionInfo` (`Files/Code/process.js`): `/sessions/<sess>/objects/<id>` for the 23h freshness check. Pulls kind/id/version straight off the `codeEnvRef` already in scope — skill files stay skill-bucketed, user files stay user-bucketed. - `/code/download/:session_id/:fileId` LC route (`routes/files/files.js`): proxies to codeapi for manual downloads. Code-output files only on this route, so `kind: 'user'` + `id: req.user.id`. The `getCodeOutputDownloadStream` helper in `crud.js` now takes an `identity` param, validated by a `buildCodeEnvDownloadQuery` helper that mirrors `appendCodeEnvFileIdentity`'s shape rules: kind required from the closed `{skill, agent, user}` set, version required for 'skill' and forbidden otherwise. Bad callers fail fast on the client instead of round-tripping a 400. Also cleans up two log-noise sources reported alongside the 400: - `logAxiosError` in `packages/api/src/utils/axios.ts` was dumping `error.response.data` raw. With `responseType: 'arraybuffer'` that's a `Buffer` (~4 chars per byte after JSON-serialization); with `responseType: 'stream'` it's a `Readable` whose internal state serializes the entire ring buffer + socket. New `renderResponseData` decodes small buffers as UTF-8 (truncated past 2KB) and stubs streams as `'[stream]'`. Diagnostics stay useful, log lines stop being megabytes. - `/code/download` route's catch was bare `logger.error('...', error)`, bypassing the redactor. Switched to `logAxiosError` so it benefits from the same buffer/stream handling. Tests updated to match the new contract: - crud.spec: `getCodeOutputDownloadStream` fixtures pass `userIdentity`; new cases cover skill identity (with version), bad kind rejection, skill-without-version rejection. - process.spec: `getSessionInfo` test passes a full `codeEnvRef` object. * ♻️ refactor: extract codeEnv identity helpers into packages/api Per the project convention that new backend code lives in TypeScript under `packages/api`, moves `appendCodeEnvFileIdentity` and `buildCodeEnvDownloadQuery` from `api/server/services/Files/Code/crud.js` into a new `packages/api/src/files/code/identity.ts` module. Both helpers are pure validators that mirror codeapi's `parseUploadSessionKeyInput` server-side rules (closed kind set, `version` required for `'skill'` and forbidden otherwise) — they deserve TS support and a dedicated spec rather than living as JSDoc-typed helpers in the legacy `/api` workspace. The new module: - Exports a `CodeEnvIdentity` interface using the `librechat-data-provider` `CodeEnvKind` discriminated union. - Adds 13 unit tests in `identity.spec.ts` covering the validation matrix (skill+version, agent, user, and every rejection path) plus URL encoding for the download query. 
- Re-exported from `packages/api/src/files/code/index.ts` alongside `classify`, `extract`, and `form`. Consumer updates: - `api/server/services/Files/Code/crud.js`: drops the local helpers and imports them from `@librechat/api`. Net -64 lines. - `api/server/services/Files/Code/process.js`: same. - Test mocks for `@librechat/api` in three spec files now stub the helpers' validation behavior locally rather than pulling them through `requireActual` (which would drag in provider-config init-time side effects). The package's `exports` field only surfaces the root barrel, so leaf imports aren't reachable from legacy `/api` test setup. No runtime behavior change. Identity validation rules and emitted form/query shapes are byte-for-byte identical pre/post. * 🪪 fix: emit resource_id alongside id on _injected_files (skill 403 fix) Companion to codeapi #1455 fix and agents 3.1.80-dev.1 — the wire shape for shared-kind files now requires `resource_id` distinct from the storage `id`. Without this LC change, codeapi's sessionKey re-derivation on every shared-kind /exec rejects with 403 session_key_mismatch: cached: legacy:skill:69dcf561...✌️59 (signed at upload, skill _id) derived: legacy:skill:ysPwEURuPk-...✌️59 (storage nanoid) Emit sites updated: - `primeInvokedSkills` cache-hit path: `resource_id: ref.id` (the persisted skill `_id` from `codeEnvRef.id`); `id: ref.file_id` unchanged (storage uuid). - `primeInvokedSkills` fresh-upload path: `resource_id: skill._id.toString()` on every primed file (the `allPrimedFiles` builder type now carries the field). - `processCodeOutput`'s `pushFile` (Code/process.js): `resource_id: ref.id` — for `kind: 'user'` this is informational (codeapi derives sessionKey from auth context) but emitted for shape uniformity with shared kinds. Bumps `@librechat/agents` to `^3.1.80-dev.1` (the version that ships the matching `CodeEnvFile.resource_id` field). ## Test plan - [x] `cd packages/api && npx jest src/agents` — 67 / 67 pass (skillFiles fixtures updated to assert `resource_id` on the emitted CodeSessionContext.files). - [x] `cd api && npx jest server/services/Files server/controllers/agents` — 445 / 445 pass (process.spec fixtures updated for the reupload + cache-hit emission). - [x] `npx tsc --noEmit -p packages/api/tsconfig.json` — clean. * fix(skill-tool-call): carry resource_id through primeSkillFiles → artifact Codeapi was 400ing every /exec following a `handle_skill` tool call with `resource_id is invalid` (`type: 'undefined'`). Both code paths in `primeSkillFiles` (cache-hit + fresh-upload) returned files without `resource_id`/`kind`/`version`, and the artifact in `handlers.ts` forwarded the stripped shape into `tc.codeSessionContext.files` → `_injected_files`. `primeInvokedSkills` (the NL-detected loader) had already been fixed end-to-end; this commit aligns the tool-invoked path with the same contract: `resource_id` = `skill._id.toString()`, `kind: 'skill'`, `version` = the skill's monotonic counter. Tests added to `skillFiles.spec.ts` lock the contract on `primeSkillFiles` directly so future refactors can't silently drop the resource identity again. * fix(handlers.spec): align session_id → storage_session_id rename + kind discriminator Pre-existing TS errors against the post-rename `CodeEnvFile` shape: the test file still used `session_id` on per-file objects (renamed to `storage_session_id` in agents Phase B/C) and was missing the `kind` discriminator the discriminated union requires. 
Both inputs and the matching `expect.toEqual(...)` mirrors updated together so the runtime equality check still holds. Lines 723-732 stay as-is — they sit behind `as unknown as ToolCallRequest` and TS already skipped them. * chore: fix `@librechat/agents`, correct version to 3.1.80-dev.0 in package.json files * chore: bump `@librechat/agents` to version 3.1.80-dev.1 in package.json and package-lock.json * chore: bump `@librechat/agents` to version 3.1.80-dev.2 * feat(observability): trace file priming chain from primeCodeFiles to _injected_files Diagnosing the user-upload "files=[] on first /exec" bug requires seeing where in the LC chain a file ref disappears. Prior to this patch the chain (primeCodeFiles → primedCodeFiles → initialSessions → CodeSessionContext → _injected_files) was opaque end-to-end: - primeCodeFiles silently dropped files without `metadata.codeEnvRef` - reuploadFile catches all errors and continues with no signal - the handlers.ts handoff to codeapi never logged what it was sending After this patch, a single grep on `[primeCodeFiles]` plus `[code-env:inject]` shows the full per-file path: [primeCodeFiles] in: file_ids=N resourceFiles=M [primeCodeFiles] file=<id> path=skip reason=no-codeenvref filename=... [primeCodeFiles] file=<id> path=cache-hit-by-session storage_session_id=... [primeCodeFiles] file=<id> path=reupload reason=no-uploadtime ... [primeCodeFiles] file=<id> path=reupload reason=stale ... [primeCodeFiles] file=<id> path=reupload-success oldSession=... newSession=... newFileId=... [primeCodeFiles] file=<id> path=reupload-failed session=... [primeCodeFiles] file=<id> path=fresh-active storage_session_id=... [primeCodeFiles] out: returned=N skippedNoRef=M reuploadFailures=K [code-env:inject] tool=<name> files=N missingResourceId=K (debug) [code-env:inject] M/N files missing resource_id ... (warn) [code-env:inject] tool=<name> _injected_files=0 ... (warn) The boundary log warns when LC sends zero injected files on a code-execution tool call — that's the user's actual symptom showing up at the LC side instead of having to correlate against codeapi's `Request received { files: [] }`. Tag chosen as `[code-env:inject]` rather than `[handoff:exec]` to avoid collision with the app-level "handoff" semantic (subagent handoff workflow). Structural cleanup in primeFiles: replaced the `if (ref) { ... }` nesting with an early `if (!ref) continue` so the per-path instrumentation hooks land at top-level scope instead of indented inside a conditional. Behavior unchanged; pushFile / reuploadFile identical. Spec fixtures (handlers.spec.ts, codeFilesSession.spec.ts) updated to include `resource_id` on `CodeEnvFile` literals — required by the post-3.1.80-dev.2 type now installed. ## Test plan - [x] `cd packages/api && npx jest src/agents/handlers.spec.ts src/agents/codeFilesSession.spec.ts src/agents/skillFiles.spec.ts` — 69/69 pass - [x] `cd api && npx jest server/services/Files/Code/process.spec.js` — 84/84 pass - [x] `npx tsc --noEmit -p packages/api` — clean - [x] `npx eslint` on all four touched files — clean * chore: add CONSOLE_JSON_STRING_LENGTH to .env.example for JSON log string length configuration * fix(files): align codeapi upload filename with LC's sanitized DB filename User-attached files for code execution were uploading to codeapi under `file.originalname` (raw upload filename, may contain spaces / special chars) while LC's DB record stored the sanitized form (`sanitizeFilename(file.originalname)`, underscores). 
Codeapi preserves whatever filename the upload sent, so the sandbox saw `/mnt/data/<originalname>` while LC's `primeFiles` toolContext text + `_injected_files.name` referenced `file.filename` (sanitized). Visible failure: agent gets system prompt saying /mnt/data/librechat_code_api_-_active_customer_-_2025-11-05.xlsx …tries that path, hits `FileNotFoundError`, then notices the sandbox's actual `Available files` line says /mnt/data/librechat code api - active customer - 2025-11-05.xlsx …retries with spaces, succeeds. Wastes a tool call per upload and leaks raw filenames into model context. Fix: sanitize once and use the sanitized form in both the codeapi upload AND the LC DB record. Sandbox path = LC toolContext text = in-memory ref name. No drift. Reupload path (`Code/process.js` line 867 `filename: file.filename`) already uses the sanitized DB name, so it stays consistent with the fresh-upload path after this change. ## Test plan - [x] `cd api && npx jest server/services/Files/process` — 32/32 pass - [x] `npx eslint` on the touched file — clean * chore: bump `@librechat/agents` to version 3.1.80-dev.3 in package.json and package-lock.json |

9441563b95 | 🛡️ refactor: Scope allowedAddresses By Port (#13022)
* fix: Scope allowedAddresses by port
* test: Fix SSRF agent spec typing

b39bf837a7 | 📦 chore: Update @librechat/agents to v3.1.79 (#13000)

4238dd4471 | 🪪 fix: Preserve OIDC Logout ID Token Hint (#12999)

8f92ec012c | 🧭 fix: Navigate Signed CDN Downloads (#12998)
* fix(files): navigate signed CDN downloads
* fix(files): avoid popup target for signed downloads
* test(files): restore download URL mock

40a05bbf83 | 📦 chore: npm audit fixes and Mongoose 8.23 TypeScript follow-ups (#12996)
* chore: Update axios dependency to version 1.16.0 across multiple package files
* chore: Update express-rate-limit and ip-address dependencies to versions 8.5.1 and 10.2.0 in package-lock.json and package.json
* chore: Update mongoose and hono dependencies to versions 8.23.1 and 4.12.18 across multiple package files
* fix: Add type parameters to mongoose lean queries in accessRole and aclEntry methods
* fix: Add type parameters to mongoose lean queries in action, agent, and agentCategory methods
* chore: Update moduleResolution to 'bundler' in tsconfig.json for api and data-schemas packages
* fix: Update mongoose lean queries to include type parameters across various methods for improved type safety

1bc2692a15 | 🌥️ feat: Add Optional Region-aware S3/CloudFront Storage Keys (#12987)
* feat(files): add optional region-aware storage keys
* test(files): fix region storage CI fixtures
* feat(files): finalize inline CloudFront asset namespaces
* fix(files): allow wildcard region CloudFront cookies
* fix(files): preserve legacy storage key compatibility
* fix(files): align CloudFront clear cookie cleanup
* fix(files): clear legacy CloudFront cookie scopes
* chore(files): clean up storage review nits
* fix(files): keep inline namespaces CloudFront-only

5efbcb8b93 | 🌐 fix: Percent-encode X-File-Metadata header for Unicode filenames (#12983)
* 🌐 fix: Percent-encode X-File-Metadata header for Unicode filenames
After #12977 preserved Unicode in filenames, the download route crashes with ERR_INVALID_CHAR because JSON.stringify(file) now contains non-ASCII characters that Node.js rejects in HTTP headers per RFC 7230. Wrap the header value in encodeURIComponent on the server and decodeURIComponent on the client before JSON.parse.
* fix: Update file route tests after dev merge
---------
Co-authored-by: Danny Avila <danny@librechat.ai>
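
A minimal sketch of the fix described above, written as a helper pair rather than the actual route code; the header name comes from the commit, the function names do not.

```ts
// Percent-encode the JSON payload so Unicode filenames survive Node's
// HTTP header validation (ERR_INVALID_CHAR per RFC 7230).
export function encodeFileMetadataHeader(file: { filename: string }): string {
  return encodeURIComponent(JSON.stringify(file));
}

// Client side: decode before JSON.parse.
export function decodeFileMetadataHeader(headerValue: string): { filename: string } {
  return JSON.parse(decodeURIComponent(headerValue));
}
```
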

5c338a4642 | 🛂 fix: Harden Agent File Preview Access (#12981)
* fix: harden agent file access
* style: format agent file query
* fix: prune agent file refs on alternate writes
* test: fix agent pruning specs

9c81792d25 | 🔐 feat: Add Signed CloudFront File Downloads (#12970)
* feat: add signed CloudFront downloads
* fix: preserve local IdP avatar paths
* fix: address signed download review findings
* fix: harden CloudFront cookie scope validation
* fix: preserve URL save API compatibility
* fix: store CDN SSO avatars under shared prefix
* fix: Harden CloudFront tenant file access
* fix: Preserve CloudFront download compatibility
* fix: Address CloudFront review follow-ups
* fix: Preserve file URL fallback user paths
* fix: Address download review hardening
* fix: Use file owner for S3 RAG cleanup
* fix: Address final download review nits
* fix: Clear stale avatar CloudFront cookies
* fix: Align download filename helpers with dev
* fix: Address final CloudFront review follow-ups
* fix: Stream S3 URL uploads
* fix: Set S3 stream upload length
* fix: Preserve download metadata filepath
* fix: Avoid remote content length for stream uploads
* fix: Use bounded multipart URL uploads
* fix: Harden S3 filename boundaries

f2de3a219c | 🌐 fix: Preserve Unicode Filenames (#12977)
* fix: Preserve unicode filenames
* fix: Cap unicode filenames by bytes
* fix: Preserve clean artifact directories
* fix: Disambiguate normalized artifact names

56b87f70bd | 🧮 feat: Add GPT-5.5 Token Definitions (#12973)
* fix: add gpt-5.5 token definitions
* fix: align gpt-5.5 context limit

69395d0667 | 🛰️ fix: Honor Anthropic Vertex Configuration (#12972)
* fix: honor Anthropic Vertex config
* chore: format Anthropic Vertex config fix

6c6c72def7 | 🚀 feat: Decouple File Attachment Persistence from Preview Rendering (#12957)
* 🗂️ feat: add `status` lifecycle to file records for two-phase previews
Schema and model foundation for decoupling the agent's final response
from CPU-heavy office-format HTML extraction.
- `MongoFile.status: 'pending' | 'ready' | 'failed'` (indexed) and
`previewError?: string` mirror the lifecycle: phase-1 emits the file
record at `pending` so the response is unblocked; phase-2 transitions
to `ready` (with text/textFormat) or `failed` (with previewError) in
the background. Absent for legacy records — clients treat that as
`ready` for back-compat.
- Mirror types added to `TFile` in data-provider so frontend cache
consumers see the new fields.
- New `sweepOrphanedPreviews(maxAgeMs)` method on the file model
recovers stale `pending` records left behind by a process restart
mid-extraction; transitions them to `failed` with
`previewError: 'orphaned'`. Cheap because `status` is indexed.
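
A minimal mongoose sketch of the lifecycle fields this commit describes; only the new fields are shown, and the surrounding schema shape is assumed rather than taken from the actual `MongoFile` model.

```ts
import { Schema } from 'mongoose';

// New lifecycle fields only; the rest of the file schema is omitted here.
export const previewLifecycleFields = {
  status: {
    type: String,
    enum: ['pending', 'ready', 'failed'],
    index: true,
    // Absent on legacy records; clients treat a missing status as 'ready'.
  },
  previewError: {
    type: String,
    maxlength: 200, // bounded so a future codepath can't persist a stack trace
  },
};

export const previewLifecycleSchema = new Schema(previewLifecycleFields, { _id: false });
```
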
* ⚡ feat: two-phase code-execution preview flow (unblocks final response)
The agent's final response no longer waits on CPU-heavy office HTML
extraction. Phase-1 (download + storage save + DB record at
`status: 'pending'`) is awaited as before; phase-2 (extract +
`updateFile`) runs in the background with a hard 60s ceiling.
Three flows, all funneling through `processCodeOutput` and updated to
the new `{ file, finalize? }` return shape:
- `callbacks.js` (chat-completions + Open Responses streaming): emit
the phase-1 attachment immediately (carries `status: 'pending'` for
office buckets so the UI shows "preparing preview…"), then
fire-and-forget `finalize()`. If the SSE stream is still open when
phase-2 lands, push an `attachment` update event with the same
`file_id` so the client merges over the placeholder in place.
- `tools.js` direct endpoint: same split — return the phase-1
metadata immediately, run extraction in the background. Client
polls for the resolved record.
`finalize()` wraps the existing 12s per-render timeout in a 60s outer
`withTimeout`. The HTML-or-null contract from #12934 is preserved:
office types that fail extraction transition to `status: 'failed'`
with `previewError: 'parser-error' | 'timeout'` rather than falling
back to plain text (would be an XSS vector).
Promises continue running after the HTTP response closes (Node
doesn't kill them). The boot-time orphan sweep covers the only case
that loses progress — actual process restart mid-extraction.
`primeFiles` annotates the agent's `toolContext` line for prior-turn
files: `(preview not yet generated)` for pending, `(preview
unavailable: <reason>)` for failed. The model can volunteer "you can
still download it" instead of pretending the preview is fine.
`hasOfficeHtmlPath` exported from `@librechat/api` so `processCodeOutput`
can decide whether a file expects a preview at all.
* 🔍 feat: `GET /api/files/:file_id/preview` endpoint and boot orphan sweep
- New `GET /api/files/:file_id/preview` route returns
`{ status, text?, textFormat?, previewError? }`. The frontend's
`useFilePreview` React Query hook polls this while phase-2 is in
flight, then auto-stops on terminal status. ACL identical to the
download route (reuses `fileAccess` middleware). Defaults `status`
to `'ready'` for legacy records so back-compat is implicit.
`text` only included when `status === 'ready'` and non-null —
preserves the HTML-or-null security contract from #12934.
- `sweepOrphanedPreviews()` invoked on boot in both `server/index.js`
and `server/experimental.js`. Recovers any `pending` records left
behind by a process restart mid-extraction (the only case the
in-process two-phase flow can't handle on its own). Fire-and-forget
so a transient sweep failure doesn't block startup.
* 🖥️ feat: frontend two-phase preview consumer (polling + UI states)
Wires the React side to the new lifecycle so the user sees what's
happening with their file while phase-2 extraction runs in the
background and after the response stream closes.
- `useAttachmentHandler` upserts by `file_id` (was append-only) so
the phase-2 SSE update event merges over the pending placeholder
in place. Lightweight attachments without a `file_id`
(web_search / file_search citations) keep the legacy append path.
- `useFilePreview(file_id)` React Query hook with
`refetchInterval: (data) => data?.status === 'pending' ? 2500 : false`
so polling auto-stops on the first terminal response without the
caller having to flip `enabled`.
- `useAttachmentPreviewSync(attachment)` bridges polled data into
`messageAttachmentsMap`. Polling enabled iff
`status === 'pending' && isAnySubmitting` — per the design ask:
active polling while the LLM is still generating, then quiet.
Process-restart and post-stream cases are covered by polling on
the next interaction.
- `Attachment.tsx` renders a small `PreviewStatusIndicator` (spinner +
"Preparing preview…" for pending, alert icon + "Preview unavailable"
for failed) inside `FileAttachment`. Download button stays fully
functional in both states. Two new English locale keys.
- Data-provider scaffolding: `TFilePreview` type, `endpoints.filePreview`,
`dataService.getFilePreview`, `QueryKeys.filePreview`.
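
A hedged sketch of the polling hook described above, using the function-form `refetchInterval` the commit quotes (the exact callback signature depends on the React Query version in use); the endpoint path is from the commit, while the fetch wrapper and query-key literal are assumptions.

```ts
import { useQuery } from '@tanstack/react-query';

type TFilePreview = {
  status?: 'pending' | 'ready' | 'failed';
  text?: string;
  textFormat?: string;
  previewError?: string;
};

export function useFilePreview(fileId?: string) {
  return useQuery<TFilePreview>({
    queryKey: ['filePreview', fileId],
    queryFn: async () => {
      const res = await fetch(`/api/files/${fileId}/preview`);
      return (await res.json()) as TFilePreview;
    },
    enabled: Boolean(fileId),
    // Poll while the deferred render is pending; stop on the first terminal status.
    refetchInterval: (data) => (data?.status === 'pending' ? 2500 : false),
  });
}
```
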
* 🧪 fix: stub `useAttachmentPreviewSync` in pre-existing Attachment test mocks
The new `useAttachmentPreviewSync` hook is called unconditionally inside
`FileAttachment` (added in the prior commit). Two pre-existing test
files mock `~/hooks` to provide `useLocalize` only — the un-mocked
preview hook reference resolved to undefined and crashed render with
`(0 , _hooks.useAttachmentPreviewSync) is not a function` on the
Ubuntu/Windows CI runners.
Fix is local to the test mocks: add a no-op stub that returns
`{ status: 'ready' }` so the component renders the legacy chip path.
The two-phase preview behavior itself has its own dedicated suites
(`useAttachmentHandler.spec.tsx`, `useAttachmentPreviewSync.spec.tsx`).
* 🐛 fix: route phase-2 attachment update to current-run messageId
Codex P1 review on PR #12957. `processCodeOutput` intentionally
preserves the original DB `messageId` across cross-turn filename reuse
so `getCodeGeneratedFiles` can still trace a file back to the
assistant message that originally produced it. The phase-1 SSE emit
already routes by the current run's messageId — `processCodeOutput`
runtime-overlays it via `Object.assign(file, { messageId, toolCallId })`
and the callback writes `result.file` directly.
Phase-2 was passing the raw `updateFile` return through
`attachmentFromFileMetadata`, which read `messageId` straight off the
DB record. On a turn-N run that re-emitted a filename from turn-1
(e.g. agent writes `output.csv` again), the phase-2 SSE update
routed to `turn-1-msg` instead of `turn-N-msg`. Frontend's
`useAttachmentHandler` upserts under the wrong messageAttachmentsMap
slot — turn-N's pending chip stays stuck at "preparing preview…"
while turn-1's already-resolved attachment gets re-merged.
Fix: thread `runtimeMessageId` through `attachmentFromFileMetadata`
and pass `metadata.run_id` from the phase-2 emit site. Mirrors how
phase-1 sources its messageId. Tests cover the cross-turn reuse case
plus the writableEnded / null-finalize / no-finalize paths to lock
in the broader phase-2 emit contract.
* 🛠️ refactor: address codex audit findings (wire-shape parity, DRY, defensive catch)
Comprehensive audit on PR #12957. Resolves all valid findings:
- **MAJOR #1 — Wire-shape parity**: phase-1 ships the full `fileMetadata`
record over SSE; phase-2 was using a tight `attachmentFromFileMetadata`
projection. Drop the projection and have phase-2 spread `{...updated,
messageId, toolCallId}` so both events match the long-standing
legacy phase-1 shape clients depend on.
- **MAJOR #2 — DRY**: extract `runPhase2Finalize({ finalize, fileId,
onResolved })` into `process.js` (alongside `processCodeOutput` whose
contract it pairs with). Both `callbacks.js` paths and `tools.js`
now flow through it. Single catch path eliminates divergence
surface: the bug fixed in 01704d4f0 (cross-turn messageId routing)
was a symptom of this duplication risk.
- **MINOR #3 — JSDoc accuracy**: `finalizePreview`'s buffer is bounded
by `fileSizeLimit`, not the 1MB extractor cap. Updated and added a
note about peak heap from queued buffers.
- **MINOR #4 — Defensive catch**: `runPhase2Finalize`'s catch attempts
a best-effort `updateFile({ status: 'failed', previewError:
'unexpected' })` for the file_id, so a programming bug in
`finalizePreview` doesn't leave the record stuck `'pending'` until
the next boot-time orphan sweep.
- **NIT #6 — Stale PR refs**: 12952 → 12957 in 3 places.
- **NIT #7 — Schema bound**: `previewError` capped at `maxlength: 200`
to prevent a future codepath from accidentally persisting a stack
trace.
Skipped per audit verdict (non-blocking):
- #5 (memory pressure): documented in JSDoc; impl change was reviewer's
"consider", not actionable.
- #8 (double DB query per poll): low cost, indexed by_id, polling is
gated narrow.
- #9 (TAttachment cast): the union type is intentional; the casts are
safe widening, refactoring TAttachment is invasive and out of scope.
Tests: 11 new (7 `runPhase2Finalize` unit tests covering happy path,
null-finalize, throws, double-fail, no-fileId, no-onResolved; +4
wire-shape parity assertions in the existing cross-turn test). 328
backend tests pass; 528 frontend tests pass; lint and typecheck clean.
* 🛡️ refactor: address codex P1+P2 + rename to drop phase-1/2 jargon
Codex round 2 review on PR #12957 caught two race conditions and one
recovery gap, all triggered by cross-turn filename reuse (`claimCodeFile`
intentionally returns the same `file_id` for the same
`(filename, conversationId)` across turns). Plus naming cleanup the
user requested — internal "phase 1 / phase 2" vocabulary leaks across
sprints, replace it everywhere with terms describing what's actually
happening.
P1 — stale render overwrites newer revision (process.js)
Two turns reusing `output.csv` share a `file_id`. If turn-1's
background render resolves AFTER turn-2's persist step, the
unconditional `updateFile` writes turn-1's stale text/status over
turn-2's pending placeholder. Fix: stamp a fresh `previewRevision`
UUID on every emit, thread it through `finalizePreview`, and make
the commit conditional via a new optional `extraFilter` argument
on `updateFile` (`{ previewRevision: <expected> }`). The defensive
`updateFile` in `runPreviewFinalize`'s catch uses the same guard
so a programming error from an older render also can't override a
newer turn.
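
A sketch of that conditional commit, with the `previewRevision` stamp and the `extraFilter` argument named as in the prose; the surrounding function shapes are assumptions.

```ts
import { randomUUID } from 'node:crypto';

type UpdateFile = (
  data: { file_id: string; [key: string]: unknown },
  extraFilter?: Record<string, unknown>,
) => Promise<unknown>;

// Stamp a fresh revision on every emit of the pending placeholder.
export function stampPreviewRevision<T extends { file_id: string }>(file: T) {
  return { ...file, previewRevision: randomUUID() };
}

// When the deferred render resolves, only commit if the record still carries the
// revision this render started from; a newer turn's re-stamp wins the race otherwise.
export async function commitPreview(
  updateFile: UpdateFile,
  fileId: string,
  expectedRevision: string,
  text: string,
): Promise<void> {
  await updateFile(
    { file_id: fileId, status: 'ready', text },
    { previewRevision: expectedRevision },
  );
}
```
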
P1 — stale React Query cache on pending remount (queries.ts)
Same root cause from the frontend side. Cache key
`[QueryKeys.filePreview, file_id]` may hold a prior turn's `'ready'`
payload; with `refetchOnMount: false` and the polling gate on
`pending`, polling never starts for the new placeholder. Fix:
`useAttachmentHandler` invalidates that query whenever an attachment
with a `file_id` arrives. Both initial-emit and update events
trigger invalidation — uniform gate.
P2 — quick-restart orphans skipped by boot sweep (files.js)
Boot `sweepOrphanedPreviews` uses a 5-min cutoff for multi-instance
safety. A crash + restart inside the cutoff leaves `pending` records
that never get touched again. Fix: lazy sweep inside the preview
endpoint — if a polled record is `pending` and `updatedAt` is older
than 5 min, mark it `failed:orphaned` on the spot before responding.
Conditional on the same `updatedAt` we observed so a concurrent
legitimate update wins. Cheap, bounded by user activity.
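
A hedged sketch of that lazy sweep inside the preview endpoint; the cutoff and field names follow the prose, and the update-call shape is an assumption.

```ts
const ORPHAN_CUTOFF_MS = 5 * 60 * 1000;

type PreviewRecord = {
  file_id: string;
  status?: 'pending' | 'ready' | 'failed';
  updatedAt: Date;
};

type UpdateFile = (
  data: Record<string, unknown>,
  extraFilter?: Record<string, unknown>,
) => Promise<unknown>;

// Mark a long-pending record failed:orphaned on the spot, conditioned on the
// updatedAt we observed so a concurrent legitimate update wins the race.
export async function lazySweepPending(file: PreviewRecord, updateFile: UpdateFile): Promise<PreviewRecord> {
  const stale = file.status === 'pending' && Date.now() - file.updatedAt.getTime() > ORPHAN_CUTOFF_MS;
  if (!stale) {
    return file;
  }
  await updateFile(
    { file_id: file.file_id, status: 'failed', previewError: 'orphaned' },
    { updatedAt: file.updatedAt },
  );
  return { ...file, status: 'failed' };
}
```
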
Naming cleanup
- `runPhase2Finalize` → `runPreviewFinalize`
- `PHASE_TWO_TIMEOUT_MS` → `PREVIEW_FINALIZE_TIMEOUT_MS`
- All `phase-1` / `phase-2` / `two-phase` prose replaced with
"the immediate emit", "the deferred render", "the persist step",
"the deferred preview", etc. Skill-feature `phase 1/2` references
(different feature) left alone.
Tests: 10 new (4 lazy-sweep × preview endpoint, 3 cache-invalidation ×
useAttachmentHandler, 3 extraFilter × updateFile data-schemas).
Backend 332/332, frontend 531/531, data-schemas 37/37, lint clean.
* 🛠️ refactor: address comprehensive review (round 3) — stale-cache MAJOR + 3 minors
Comprehensive review on PR #12957 caught a P1 follow-on bug from the
prior `invalidateQueries` fix, plus 3 maintainability findings.
MAJOR: stale React Query cache not actually fixed by `invalidateQueries`
The previous fix called `invalidateQueries` to flush stale cached
preview data on cross-turn filename reuse. But `useFilePreview` had
`refetchOnMount: false`, which made the new observer read the
stale-marked 'ready' data without refetching. The polling
`refetchInterval` then evaluated against stale 'ready' → returned
`false` → polling never started → user stuck on stale content.
Fix (belt-and-suspenders):
a) `useAttachmentHandler` switched to `removeQueries` — drops the
cache entry entirely so the next mount has nothing to read and
must fetch.
b) `useFilePreview` no longer sets `refetchOnMount: false`, so the
React Query default (`true`) kicks in — second line of defense
if any future codepath observes stale data before the handler
has a chance to evict.
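A sketch of both halves, assuming TanStack Query; the endpoint URL and the key's import location are illustrative:

```typescript
import { useQuery, useQueryClient } from '@tanstack/react-query';
import { QueryKeys } from 'librechat-data-provider'; // assumption: where the key enum lives

/** a) Handler side: evict the entry so the next mount has nothing stale to read. */
export function useEvictPreviewCache() {
  const queryClient = useQueryClient();
  return (fileId: string) =>
    queryClient.removeQueries({ queryKey: [QueryKeys.filePreview, fileId] });
}

/** b) Query side: no refetchOnMount override, so the React Query default
 *  (refetch stale data on mount) acts as the second line of defense. */
export function useFilePreview(fileId: string) {
  return useQuery({
    queryKey: [QueryKeys.filePreview, fileId],
    queryFn: () => fetch(`/api/files/${fileId}/preview`).then((res) => res.json()),
    // note: `refetchOnMount: false` is intentionally gone
  });
}
```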
MINOR: `finalizePreview` JSDoc missing `previewRevision` param
Added with explanation of the conditional update guard.
MINOR: asymmetric stream-writable guard between SSE protocols
Chat-completions delegated the gate to `writeAttachmentUpdate`;
Open Responses inlined `!res.writableEnded && res.headersSent`.
Extracted `isStreamWritable(res, streamId)` predicate; both paths
+ `writeAttachmentUpdate` now share the single source of truth.
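A minimal sketch of the shared predicate; the streamId short-circuit is an assumption based on the branch list covered by the tests added later in this PR:

```typescript
import type { Response } from 'express';

/** Single source of truth for "is it safe to write an SSE frame right now?" */
export function isStreamWritable(
  res: Response | null | undefined,
  streamId?: string,
): boolean {
  if (streamId) {
    // Assumption: a truthy streamId means delivery is delegated to a stream
    // broker, so the direct res.write checks below do not apply.
    return true;
  }
  return Boolean(res && res.headersSent && !res.writableEnded);
}
```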
NIT: `(data as Partial<TFile>).file_id` cast repeated 4 times
Extracted to a `fileId` local at the top of the handler.
Tests: existing 9 invalidate-tests rewritten as remove-tests; +1 new
lock-in test asserts removeQueries is called and invalidateQueries
is NOT (regression guard against round-3 finding). 332 backend pass,
532 frontend pass, lint clean.
Skipped findings (deferred / acceptable):
- MINOR: post-submission pending state has no auto-recovery — the
`isAnySubmitting` polling gate was the user's explicit design;
LLM context surfaces failed/pending so the model can volunteer.
Worth a follow-up if real users hit it.
- NIT: double DB query per preview poll — reviewer marked acceptable;
changing `fileAccess` middleware is out of scope.
* 🛡️ test: address comprehensive review NITs (initial-emit guard + isStreamWritable coverage)
NIT — chat-completions initial emit skips writableEnded check
The Open Responses initial emit was switched to use the new
`isStreamWritable` predicate in the round-3 commit, but the
chat-completions initial emit kept the older narrower check
(`streamId || res.headersSent`). On a client disconnect mid-stream
(`writableEnded === true`) it would still hit `res.write` and
raise `ERR_STREAM_WRITE_AFTER_END` — caught by the outer IIFE
catch but logged as noise. Switch this site to `isStreamWritable`
too so both initial-emit paths share the same gate as the
deferred update emits.
NIT — `isStreamWritable` not directly unit-tested
The predicate was only covered indirectly via the deferred-preview
SSE tests (writableEnded skip, headersSent check). Export from
`callbacks.js` and add 5 parametric tests pinning down each branch
(streamId truthy, res null, !headersSent, writableEnded, happy
path) so a future condition addition can't silently regress.
* 🐛 fix: stuck "Preparing preview…" + inline the chip subtitle
Two related fixes for a stuck-spinner bug a user reported in manual
testing of PR #12957.
**Stuck spinner (the bug)**
The deferred preview render can complete a few seconds AFTER the SSE
stream closes (typical case: PPTX render finishes ~3s after the LLM
emits FINAL). When that happens, the SSE update is silently dropped
(`isStreamWritable` returns false on a closed stream) and polling is
the only recovery path.
The earlier polling gate was `status === 'pending' && isAnySubmitting`,
which mirrored the original design intent ("only query while the LLM
is still generating"). But `isAnySubmitting` flips false the moment
the model emits FINAL — milliseconds before the deferred render
commits. Polling never runs, the chip stays "Preparing preview…"
forever even though the DB has `status: 'ready'` with valid HTML.
Drop the `isAnySubmitting` part of the gate. `useFilePreview`'s
`refetchInterval` is already a function-form that returns `false` on
the first terminal response, so polling auto-stops within one tick of
resolution. The server-side render ceiling (60s) plus the lazy sweep
in the preview endpoint cap the worst case to ~24 polls per pending
attachment. Polling itself never blocks UX — the gate's purpose was
"don't waste cycles", and capping by terminal status is the correct
expression of that.
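The gate reduces to a function-form `refetchInterval` keyed on terminal status; a sketch assuming TanStack Query v4, with an illustrative interval and endpoint:

```typescript
import { useQuery } from '@tanstack/react-query';

type PreviewPayload = { status: 'pending' | 'ready' | 'failed'; text?: string };

const POLL_INTERVAL_MS = 2500; // illustrative; the real interval may differ

/** Poll only while the preview is non-terminal; stop within one tick of resolution. */
export function useFilePreviewPolling(fileId: string) {
  return useQuery<PreviewPayload>({
    queryKey: ['filePreview', fileId],
    queryFn: () => fetch(`/api/files/${fileId}/preview`).then((res) => res.json()),
    refetchInterval: (data) => (data?.status === 'pending' ? POLL_INTERVAL_MS : false),
  });
}
```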
**Inline the chip subtitle (the visual)**
The previous design rendered "Preparing preview…" as a loose-feeling
spinner+text BELOW the file chip. The chip itself looked done while a
floating annotation said it wasn't.
`FileContainer` gains an optional `subtitle?: ReactNode` prop that
overrides the default file-type label. `Attachment.tsx` passes a
`PreviewStatusSubtitle` (spinner + "Preparing preview…" / alert +
"Preview unavailable") into that slot when the file's preview is
pending or failed. The chip footprint stays identical to its `'ready'`
form — just the second row swaps from "PowerPoint Presentation" to
the status indicator. No floating element, no layout shift.
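A sketch of the prop contract; the component internals and class names are illustrative:

```tsx
import type { ReactNode } from 'react';

type FileContainerProps = {
  filename: string;
  fileTypeLabel: string;
  /** Optional override for the chip's second row; defaults to the file-type label. */
  subtitle?: ReactNode;
};

/** Same chip footprint; only the second row swaps content. */
export function FileContainer({ filename, fileTypeLabel, subtitle }: FileContainerProps) {
  return (
    <div className="file-chip">
      <div className="file-chip__name">{filename}</div>
      <div className="file-chip__subtitle">{subtitle ?? fileTypeLabel}</div>
    </div>
  );
}
```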
Tests: regression test pinning down "polling stays enabled after the
LLM finishes" so a future revert can't reintroduce the stuck-spinner
bug. Existing FileContainer tests pass unchanged (subtitle override
is opt-in). 522 frontend tests pass; lint clean.
* 🐛 fix: deferred-preview survives reload + matches artifact card chrome
Fixes the remaining stuck-pending case after the polling gate fix: on
a reloaded conversation, message.attachments come from the DB frozen at
the immediate-persist `status: 'pending'`, but `messageAttachmentsMap`
is empty because no SSE handler ever fired for that messageId. Polling
now INSERTS a new live entry when no record matches the file_id, and
`useAttachments` merges live entries onto DB entries by file_id so the
resolved text/textFormat reach `artifactTypeForAttachment` and the
chip routes through the proper PanelArtifact card.
Also replaces the small file chip used during the pending state with
a PreviewPlaceholderCard that mirrors ToolArtifactCard chrome, so the
transition to the resolved PanelArtifact no longer reshapes the UI.
* ✨ feat: auto-open panel when deferred preview resolves pending→ready
The legacy auto-open path is gated only on `isSubmitting`, so an
office-file preview that resolves *after* the SSE stream closes would
render in place but never auto-open the panel — even though that's
exactly the moment the result becomes meaningful to the user. Adds a
per-file_id one-shot signal that `useAttachmentPreviewSync` flips on
the pending→ready edge; `ToolArtifactCard` consumes it on mount and
auto-opens regardless of submission state. The signal is *only* set on
the actual transition (history loads of pre-resolved files don't
trigger it) and is consumed once (panel close + reopen on the same
card stays user-controlled).
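A sketch of the one-shot signal, assuming a simple module-level set as the store (the actual mechanism may differ):

```typescript
import { useEffect, useRef } from 'react';

/** Per-file_id one-shot "just resolved" flags, set only on the pending→ready edge. */
const previewJustResolved = new Set<string>();

export function useAttachmentPreviewSync(fileId: string, status?: string) {
  const prevStatus = useRef<string | undefined>(status);
  useEffect(() => {
    if (prevStatus.current === 'pending' && status === 'ready') {
      previewJustResolved.add(fileId); // only the actual transition arms the signal
    }
    prevStatus.current = status;
  }, [fileId, status]);
}

/** Consumed once; closing and reopening the panel on the same card stays user-controlled. */
export function consumePreviewJustResolved(fileId: string): boolean {
  const fired = previewJustResolved.has(fileId);
  previewJustResolved.delete(fileId);
  return fired;
}
```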
* 🐛 fix: drop placeholder Terminal overlay + scope auto-open to fresh resolutions
Two fixes for issues spotted in manual testing of the deferred-preview
auto-open feature:
1. PreviewPlaceholderCard was passing `file={attachment}` to FilePreview,
which triggered SourceIcon's Terminal overlay (`metadata.fileIdentifier`
is set on every code-execution file). The artifact card itself doesn't
show that overlay; the placeholder shouldn't either, so the
pending→resolved transition is visually seamless.
2. The `previewJustResolved` flag flipped on every pending→ready
transition observed by the polling hook — including stale-pending
DB records that resolve via the first poll on a *history load*.
Conversations whose immediate-persist snapshot left attachments at
`status: 'pending'` would yank the panel open every revisit.
Adds `mountedDuringStreamRef` to the hook (mirroring ToolArtifactCard)
so the flag fires only when the hook itself was mounted during an
active turn — preserving the pre-PR contract that the panel only
auto-opens for results the user is actively waiting on, never for
history.
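Roughly how the mount-time guard composes with the pending→ready edge; the hook name and shape are illustrative:

```typescript
import { useEffect, useRef } from 'react';

/** Arm the auto-open signal only when the hook itself mounted during an active turn. */
export function usePreviewAutoOpenSignal(isSubmitting: boolean, status?: string) {
  const mountedDuringStreamRef = useRef(isSubmitting); // captured once, at mount
  const prevStatus = useRef(status);
  const shouldAutoOpen = useRef(false);

  useEffect(() => {
    const resolvedNow = prevStatus.current === 'pending' && status === 'ready';
    if (resolvedNow && mountedDuringStreamRef.current) {
      shouldAutoOpen.current = true; // a fresh resolution the user is actively waiting on
    }
    prevStatus.current = status;
  }, [status]);

  return shouldAutoOpen;
}
```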
* 🐛 fix: don't downgrade preview to failed when only the SSE emit throws
Codex P2 finding on PR #12957: the original chain placed `.catch` after
`.then(onResolved)`, so a throw inside `onResolved` (transport-side
errors — SSE write race after stream close, an emitter listener
throwing) would propagate into the finalize catch and persist
`status: 'failed'` / `previewError: 'unexpected'`. That surfaced
"preview unavailable" in the UI for a perfectly valid file, and
degraded next-turn LLM context to reflect a non-existent failure.
Wraps `onResolved` in its own try/catch so emit errors are logged but
do not affect the file's persisted status. Extraction success and
emit success are now independent: if extraction succeeds and
`finalizePreview` writes the terminal status, the polling layer / next
page load surfaces the resolved preview even if this turn's SSE emit
didn't land.
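The decoupling, sketched with hypothetical dependency names:

```typescript
type FinalizeDeps = {
  extract: () => Promise<string>;            // produces the rendered preview HTML
  persist: (html: string) => Promise<void>;  // writes the terminal status to the DB
  onResolved: (html: string) => void;        // emits the SSE update (transport side)
  logger: { warn: (msg: string, err?: unknown) => void };
};

/** Extraction/persist success is independent of emit success: a throw inside
 *  the SSE emit is logged, never persisted as a preview failure. */
export async function finalizeAndEmit({ extract, persist, onResolved, logger }: FinalizeDeps) {
  const html = await extract();
  await persist(html); // terminal status is committed regardless of what the emit does
  try {
    onResolved(html);
  } catch (err) {
    logger.warn('Preview emit failed after stream close; polling will surface the result', err);
  }
}
```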
* 🛡️ fix: run boot-time orphan sweep under system tenant context
Codex P2 finding on PR #12957: `File` is tenant-isolated, so under
`TENANT_ISOLATION_STRICT=true` the boot-time `sweepOrphanedPreviews`
threw `[TenantIsolation] Query attempted without tenant context in
strict mode` and the recovery path silently failed every restart.
Stale `status: 'pending'` records would be stuck until a user happened
to poll the preview endpoint and trigger the lazy sweep — which only
covers the file the user is currently looking at, not the bulk
candidate set the boot sweep is designed to recover.
Wraps the sweep in `runAsSystem(...)` in both boot paths
(`api/server/index.js` and `api/server/experimental.js`) and pins the
contract with regression tests in `file.spec.ts` — one test asserts
the bare call throws under strict mode, the other asserts the
`runAsSystem`-wrapped call succeeds.
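A sketch of the wrapper, assuming `runAsSystem` supplies the system tenant context and returns its callback's value (its actual export location and signature may differ):

```typescript
import { runAsSystem } from '@librechat/api'; // assumption: export location

/** The boot-time sweep must run under a system tenant context; under
 *  TENANT_ISOLATION_STRICT=true a bare call throws and recovery silently fails. */
export async function bootSweepOrphanedPreviews(
  sweepOrphanedPreviews: () => Promise<number>,
): Promise<number> {
  return runAsSystem(() => sweepOrphanedPreviews());
}
```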
* 🧹 chore: trim verbose comments from previous commit
* 🧹 chore: address review findings (dead branch, lazy-sweep cutoff, stale JSDoc)
- finalizePreview: drop unreachable !isOfficeBucket branch (caller
already gates on hasOfficeHtmlPath, so this path is always office)
- preview endpoint: lower the lazy-sweep cutoff from 5 min to 2 min — anything
past the 60s render ceiling is definitively orphaned, and the per-request
sweep can be tighter than the per-instance boot sweep
- strip stale `isSubmitting` references from JSDoc in 3 spots (the
client-side gate was removed in
|
||
|
|
f839a447e1
|
🧬 fix: Subagent MCP requestBody Propagation (bump @librechat/agents to 3.1.78 + cleanup) (#12959)
* 📦 chore: bump `@librechat/agents` to v3.1.78 v3.1.78 ships [danny-avila/agents#147](https://github.com/danny-avila/agents/pull/147), which makes `SubagentExecutor` inherit the parent invocation's `configurable` (with `thread_id`/`run_id`/`parent_run_id` scrubbed) into the child workflow. Subagent tool dispatches through the parent's `ON_TOOL_EXECUTE` handler now arrive with parent's `requestBody`, `user`, `userMCPAuthMap`, etc. — so `{{LIBRECHAT_BODY_*}}` placeholder substitution and per-user MCP connection lookup work for subagent tool calls the same way they do for the parent agent. Note: `package-lock.json` will need an `npm install` refresh once v3.1.78 lands on the registry. The user/user_id injection added in PR #12950 stays as defense-in-depth. * 🗑️ refactor: drop redundant user/user_id injection from `loadToolsForExecution` `@librechat/agents@3.1.78` (via danny-avila/agents#147) makes `SubagentExecutor` forward the parent's `configurable` verbatim into the child workflow. Subagent `ON_TOOL_EXECUTE` dispatches now arrive with parent's `user` / `user_id` already in `data.configurable` — making the host-side injection added in #12950 a no-op. Removes: - The conditional `user: createSafeUser(req.user); user_id: req.user.id` block in `loadToolsForExecution` (req.user.id-guarded so the `'api-user'` fallback in Responses/OpenAI controllers is preserved). - The unused `createSafeUser` import. - The 4 unit tests covering the now-deleted behavior. The merge in `handlers.ts` (`{ ...configurable, ...toolConfigurable }`) still produces a `mergedConfigurable` with the right user identity for both parent and subagent paths — the values just come from `configurable` (forwarded by the SDK) rather than `toolConfigurable`. Other fixes from #12950 stay (IUser.id narrowing, the env.ts / google/initialize.ts / remoteAgentAuth.ts TS-warning fixes) — they were independent of the subagent identity propagation issue. * 📦 chore: update `@librechat/agents` to v3.1.78 This update reflects the transition from the development version `3.1.78-dev.0` to the stable release `3.1.78`. The package-lock.json has been refreshed to ensure consistency with the new version, including updated integrity checks and resolved URLs for the package. This change is part of ongoing improvements to enhance the functionality and stability of the agents module. |
||
|
|
9efd61d57d
|
🔐 fix: Forward per-file entity_id through code-env priming (#12958)
* 🔐 fix: Forward per-file `entity_id` through code-env priming Skill files and persisted code-env files now carry their `entity_id` on the in-memory file refs that seed `Graph.sessions`. Without this, an execute call that mixes a skill file (uploaded with `entity_id=skillId`) and a user attachment (uploaded with no `entity_id`) collapses onto a single request-level entity at the codeapi authorization step and one side 403s. With per-file `entity_id`, codeapi resolves sessionKey per file and both authorize. - `primeSkillFiles` / `primeInvokedSkills`: thread `entity_id` through fresh-upload, cache-hit, and per-skill-batch paths in `packages/api/src/agents/skillFiles.ts`. - `primeFiles` (Code/process.js): parse `entity_id` from the persisted `codeEnvIdentifier` query string once per iteration; forward through `pushFile`, including the reupload path which re-parses the fresh identifier returned by codeapi. - Tests: extend `skillFiles.spec.ts` with two cases — fresh-upload propagation and cached-hot-path parsing. Companion PRs in flight on `@librechat/agents` (forward `entity_id` through `_injected_files`) and codeapi (per-file authorization). All three are wire-back-compat: an absent `entity_id` falls back to the existing request-level resolution. * 🔧 chore: Update dependencies in package-lock.json and package.json - Bump `@librechat/agents` to version `3.1.78-dev.0` across multiple package files. - Upgrade `@langchain/langgraph-checkpoint` to version `1.0.2` and update its peer dependency for `@langchain/core` to `^1.1.44`. - Update `axios` to version `1.16.0` and `follow-redirects` to version `1.16.0`. - Add `@types/diff` as a new dependency at version `7.0.2` and include `diff` at version `9.0.0` in the `@librechat/agents` module. - Introduce optional peer dependency `@anthropic-ai/sandbox-runtime` for `@librechat/agents` with metadata indicating it is optional. * 🐛 fix: Make skill code-env cache persistence observable Two changes to surface the skill-bundle re-upload issue without behavioral changes to tenant scoping (root cause to be confirmed via the new warn log): 1. `primeSkillFiles` now awaits `updateSkillFileCodeEnvIds` instead of firing-and-forgetting it. The prior shape could race with the next prime (read-before-write) even when the bulkWrite itself succeeds, producing a silent cache miss. Latency cost: ~10–50ms on first prime; in exchange every subsequent prime can rely on the identifier being persisted by the time it reads. 2. `updateSkillFileCodeEnvIds` now returns `{matchedCount, modifiedCount}` from the underlying bulkWrite. `primeSkillFiles` warn-logs when `modifiedCount < updates.length`, making any silent drop visible — whether the cause is tenant filtering, a `relativePath` mismatch, schema-plugin scoping, or something else. Prior shape returned `Promise<void>` so any zero-modification result was invisible. Tests: - `skill.spec.ts`: real-MongoDB happy path (counts match), no-match case (modifiedCount=0), and empty-input contract. - `skillFiles.spec.ts`: deferred-promise harness proving the call site awaits the persist (prime stays pending until the persist resolves) and forwards partial-write counts. Deliberately narrower than the original draft of this commit, which also bypassed `tenantSafeBulkWrite` for the codeEnvIdentifier write on the speculative diagnosis that tenant filtering was the cause. That change was a behavior shift on tenant scoping without confirmation; reverted pending real-world signal from the new warn log. 
* 🐛 fix: Justify await for skill code-env persistence under concurrency The await on `updateSkillFileCodeEnvIds` isn't a defensive nicety — it's load-bearing for cache effectiveness under concurrent priming. Verified with an out-of-tree harness (`config/test-skill-cache.ts`, not committed) that wires `primeSkillFiles` against a real codeapi stack: - With fire-and-forget (prior shape after this branch's revert): back-to-back primes for the same skill miss the cache. Call N+1 reads SkillFile docs before Call N's write commits, sees no `codeEnvIdentifier`, re-uploads, fires its own forget that Call N+2 also races. Steady-state stays in cache miss for the full burst. - With await: the prime that does the upload commits its persist before resolving, so the next concurrent prime observes the cache pointer instead of racing the read. Latency cost ~10–50ms on the upload prime; subsequent concurrent primes save an entire batch upload. In production with primes seconds apart this race is rare; at scale with many users hitting the same skill in the same second it's the difference between M and N×M uploads. Updates the regression test to assert the await contract (deferred persist promise → prime stays pending until persist resolves). Comment in `skillFiles.ts` rewritten to document the concurrency rationale rather than the weaker "race-with-next-prime" framing the prior commit used. |
||
|
|
187ab787da
|
🌩️ feat: CloudFront CDN File Strategy (#12193)
* 🌩️ feat: CloudFront CDN File Strategy + signed cookies Squashed from PR #12193: - feat(storage): add CloudFront CDN file strategy - feat(auth): add CloudFront signed cookie support Note: package.json/package-lock.json dependency additions are intentionally omitted from this commit and will be re-added via `npm install` after rebase to avoid lock-file merge conflicts. The two new peer deps that need to be re-installed are: - @aws-sdk/client-cloudfront@^3.1032.0 - @aws-sdk/cloudfront-signer@^3.1012.0 Also fixes 4 missing destructured names in AuthService.spec.js (getUserById, generateToken, generateRefreshToken, createSession) that were referenced in tests but not imported from the mocked '~/models'. * 📦 chore: install CloudFront SDK deps for PR #12193 Adds the two AWS CloudFront packages required by the rebased CloudFront CDN strategy: - @aws-sdk/client-cloudfront - @aws-sdk/cloudfront-signer Following the @aws-sdk/client-s3 pattern: - api/package.json: regular dependency (runtime resolution) - packages/api/package.json: peerDependency Generated by `npm install` against the freshly rebased lock file to avoid the merge conflicts that came from the original PR's lock-file edits being made against an older base of dev. * 🐛 fix: CI failures + review findings on CloudFront PR #12193 CI fixes - Rename packages/data-provider/src/__tests__/cloudfront-config.test.ts → src/cloudfront-config.spec.ts. Jest's default testMatch picks up __tests__/ directories even inside dist/, so the compiled .d.ts shell was being executed as an empty test suite. Moving to .spec.ts (matching the rest of the package) avoids the dist/ pickup. - Add cookieExpiry: 1800 to CloudFront crud.test makeConfig: the schema applies a default so CloudFrontFullConfig requires it. Review findings addressed - #1 (Codex + comprehensive): Normalize CloudFront domain with /\/+$/ regex (and key with /^\/+/ regex) in buildCloudFrontUrl, matching the cookie code so resource policy and file URLs stay aligned even when the configured domain has multiple trailing slashes. Added tests. - #2: Move DEFAULT_BASE_PATH out of s3Config into shared packages/api/src/storage/constants.ts. ImageService no longer imports S3-specific config. - #3: getCloudFrontConfig() returns Readonly<CloudFrontFullConfig> | null to discourage mutation of the cached signing config. - #4: Add cross-field refinement tests for cloudfrontConfigSchema (invalidateOnDelete-without-distributionId, imageSigning="cookies"-without-cookieDomain). - #6: Revert unrelated MCP comment re-indentation in librechat.example.yaml. - #7: Add azure_blob to the strategy list comment. Skipped - #5 (extractKeyFromS3Url with CloudFront URLs): existing deleteFileFromCloudFront tests already cover the path-equivalence assumption; renaming the helper is real refactor work beyond this PR's scope. - #8, #9 (NIT, low confidence): leaving for author judgement. * 🧹 chore: drop dead DEFAULT_BASE_PATH from s3Config test mock After moving DEFAULT_BASE_PATH to ~/storage/constants, crud.ts no longer reads it from s3Config — so the entry in the s3Config jest mock was misleading dead config. The tests still pass because the unmocked real constants module provides the value. --------- Co-authored-by: Danny Avila <danny@librechat.ai> |
||
|
|
bff9bfea87
|
🐛 fix: Propagate User Identity to Subagent MCP Tool Calls (#12950)
* 🐛 fix: Propagate User Identity to Subagent MCP Tool Calls The `@librechat/agents` SDK's `SubagentExecutor` invokes the child workflow with a fresh configurable of `{ thread_id }` only — the parent's `user` / `user_id` are dropped on the way into the child graph. The child's `ToolNode` then dispatches `ON_TOOL_EXECUTE` to the parent's handler, which merges `{ ...configurable, ...toolConfigurable }`, but neither side carries user identity for subagents. Downstream MCP tools read `config.configurable.user?.id || user_id` and got `undefined`, so `MCPManager.getConnection` fell through to the "No connection found for server X" error path — it can't reach the user-connection lookup without a userId. Re-inject `user` (via `createSafeUser`) and `user_id` from `req.user` into the configurable returned by `loadToolsForExecution`. This is the single point all controllers (chat, Responses API, OpenAI-compat) flow through. For the parent agent it's a no-op (outer config already carries the same values); for subagents it fills the gap so MCP connection lookup, user-placeholder substitution, and tools that read configurable.user all work correctly. * 🐛 fix: Preserve `api-user` Fallback When Injecting Subagent Identity Codex review pointed out that the prior commit unconditionally wrote `user_id: req.user?.id` (and `user`) into `toolConfigurable`. The handler merges via `{ ...configurable, ...toolConfigurable }` — `toolConfigurable` wins — so when `req.user` is absent, this overwrote the outer config's `'api-user'` fallback (set by `responses.js` / `openai.js` for the unauthenticated API-key path) with `undefined`, breaking MCP connection lookup for that path. Only inject the keys when `req.user.id` is truthy. Omitting them lets the merge preserve whatever the outer configurable already had. Tests updated to assert key omission for `req.user` undefined / null / present without `id`. * 🩹 fix: Narrow `IUser.id` to required string `IUser` extends mongoose `Document`, which types `id?: any` (the optional virtual). At runtime `id` is always `_id.toString()` for a hydrated doc, so narrow the type to a required string. Closes two `@rollup/plugin-typescript` TS2322 warnings introduced by PR #12450 (OIDC Bearer Token Authentication for Remote Agent API) where `req.user = userResolution.user` and the `(req: Request, res: Response, next: NextFunction)` signature both failed against the project's local `Express.User` augmentation (`{ [key: string]: any; id: string; }`) because `IUser.id` was `any`/optional. Narrowing here fixes both at the source rather than casting at every assignment site. * 🩹 fix: Resolve TS Build Warnings Surfaced by `IUser.id` Narrowing Three rollup TS plugin warnings surfaced after narrowing `IUser.id` from `any` to `string`: - `utils/env.ts:95` — `safeUser[field] = user[field]` failed strict checking because indexed write through a union-typed key collapses the LHS to the intersection of all field write types (i.e., `undefined` when fields have mixed types). The previous `id?: any` on IUser had been masking this. Switch to `Object.assign(safeUser, { [field]: user[field] })` which widens the assignment. - `endpoints/google/initialize.ts:35` — `getUserKey({ userId: req.user?.id, ... })` failed because `req.user?.id` is now `string | undefined` (no longer `any`). Match the pattern already used in `endpoints/openAI/initialize.ts:49`: `req.user?.id ?? ''`. - `middleware/remoteAgentAuth.ts:465` — pre-existing, unrelated to the IUser change. 
The local (gitignored) `express.d.ts` augments `express.Request` but not `express-serve-static-core.Request`, so the explicit `(req: Request, ...)` annotation imported from `'express'` resolves to a Request whose `req.user` differs from the one `RequestHandler` expects internally. Type the closure as `RequestHandler` directly so TS infers params from the augmented type. * 🩹 fix: Cast `RemoteAgentAuth` Closure to `RequestHandler` My previous attempt removed the explicit `req: Request` annotation on the closure to side-step the outer `RequestHandler` mismatch. That shifted the error to every helper call site inside the closure (`getConfigOptions(req)`, `runApiKeyAuth(req, ...)` at 467/474/493/ 512/531), because the helpers annotate their params with `express.Request` (which has the local `Request.user` augmentation), while the unannotated closure inferred `req` as `express-serve-static-core.Request` (no augmentation). Reproduced locally by stubbing the gitignored `src/types/express.d.ts`. Right approach: keep the explicit `req: Request` annotation so the closure body matches the helpers' types, then cast at the return — `RequestHandler`'s internal `Request` resolves through `express-serve-static-core` and lacks the augmentation, so the cast is the boundary that bridges the two views of `req.user`. Verified against a build with the local express.d.ts stub: zero warnings on `remoteAgentAuth.ts`, `env.ts`, and `google/initialize.ts`. |
||
|
|
f20419d0b7
|
📄 feat: Rich File Artifact Previews for DOCX, CSV, XLSX, PPTX (#12934)
* 📄 feat: Rich File Artifact Previews for DOCX, CSV, XLSX, PPTX Render office files emitted by tools as interactive previews in the artifact panel instead of raw extracted text. The backend produces a sanitized HTML document via mammoth (DOCX), SheetJS (CSV/XLSX/XLS/ODS), or yauzl-based slide extraction (PPTX) and ships it through the existing SSE attachment payload; the client routes it through the Sandpack `static` template's `index.html` slot — no new browser deps, no client-side blob fetch, no React renderer components. * 🔐 fix: Restrict data: URLs to <img> in office HTML sanitizer Codex review on #12934 caught that `data:` lived in the global `allowedSchemes`, which meant a smuggled `<a href="data:text/html, <script>...</script>">` would survive sanitization. The Sandpack iframe sandbox does not gate `target="_blank"` navigations, so a click would open attacker-controlled HTML in a new tab. Scope `data:` to `<img src>` only via `allowedSchemesByTag` (mammoth inlines DOCX images as base64 `data:image/...` URIs — that path still works). Add a regression suite (`sanitizeOfficeHtml security`) with 8 cases covering: <script> stripping, event-handler removal, javascript:/data: rejection on anchors, data:image preservation in <img>, http/https/mailto allowance, target=_blank rel=noopener enforcement, and <iframe> stripping. * 🔧 fix: Route extensionless office files by MIME alone Codex review on #12934 caught that the office-render gate in `extractCodeArtifactText` only fired when the extension was in `OFFICE_HTML_EXTENSIONS` or the category was `document`/`pptx`. A tool emitting `data` with `text/csv` (no extension) classifies as `utf8-text`, so the gate was skipped and raw CSV text shipped to the client — but the client routes by MIME to the SPREADSHEET bucket expecting a full HTML document, so the panel rendered broken text. Extract a shared `officeHtmlBucket(name, mime)` predicate from `html.ts` (returns the bucket name or null). Both `bufferToOfficeHtml` (the dispatcher) and the upstream gate in `extract.ts` now go through this single source of truth, so they can never drift apart again. The predicate already mirrors the dispatcher's extension/MIME logic (extension wins; MIME is the fallback for extensionless inputs). Adds: - 14 cases for the new `officeHtmlBucket` predicate covering the positive paths (each bucket via extension OR MIME) and the negative paths (txt, py, json, jpg, pdf, zip, odt, plain noext). - A direct regression test in `extract.spec.ts` for the Codex catch: `data` with `text/csv` + utf8-text category routes through the office HTML producer. - Parameterized cases for extensionless DOCX/XLSX/XLS/ODS/PPTX files identified by MIME alone. * 🛡️ fix: Enforce extension-wins precedence in officeHtmlBucket Codex review on #12934 caught that the predicate's if-chain interleaved extension and MIME checks for each bucket — e.g. CSV's branch was `ext === 'csv' || CSV_MIME_PATTERN.test(mimeType)`. A `deck.pptx` shipped with `text/csv` (sandboxed tools sometimes ship generic MIMEs) matched the CSV branch BEFORE the PPTX extension branch was reached, so a binary PPTX would have been handed to `csvToHtml` to parse as text — yielding garbage or a parse exception. Restructure to a strict two-pass dispatch: an exhaustive extension table first (one lookup, all known extensions), then MIME-only fallback for extensionless / unknown-ext inputs. The doc comment's "extension wins" claim is now actually enforced by the implementation. 
Add 7 regression cases covering the conflicting-MIME footgun for each bucket: deck.pptx + text/csv → pptx; workbook.xlsx + text/csv → spreadsheet; legacy.xls + pptx-MIME → spreadsheet; report.docx + text/csv → docx; data.csv + docx-MIME → csv; etc. * 🛡️ fix: Reject zip-bomb office files before in-process parsing (SEC) Addresses pre-existing availability vulnerability validated by SEC review (Codex finding 275344c5...) and made worse by this PR's HTML rendering path. A sub-1MiB compressed XLSX/DOCX/PPTX (highly compressed run-of-zeros) inflates to 200+ MiB of XML when handed to mammoth/xlsx — blocking the Node event loop for 10+ seconds and spiking RSS to ~1 GiB. The existing 8s `withTimeout` wrapper uses `Promise.race`, which can only return early; it cannot interrupt synchronous parser CPU/RAM consumption. PoC ran an authenticated execute_code call to OOM the API process. Add `assertSafeZipSize(buffer)` — a yauzl-based pre-flight that streams every entry with mid-inflate byte counting and bails on either a per-entry or total decompressed-size cap. Mid-inflate counting cannot be bypassed by falsifying the central directory's `uncompressedSize` field (the technique the PoC used). Defaults: 25 MiB per entry, 100 MiB total — generous headroom for legitimate image-heavy office files, well below the attack profile. Hook the check into every path that hands a buffer to mammoth/xlsx /yauzl: - New HTML producers (`wordDocToHtml`, `excelSheetToHtml`, `pptxToSlideListHtml`) — added by this PR - Legacy RAG text extractors (`wordDocToText`, `excelSheetToText` in `crud.ts`) — pre-existing path, also vulnerable Errors propagate as a tag-distinct `ZipBombError` so callers can distinguish a refused bomb from generic parse failures. The outer `extractCodeArtifactText` swallows the error and returns null, falling back to the regular download UI. `.xls` (BIFF/CFB binary, not ZIP) is detected by magic bytes and skipped — yauzl would reject it as malformed anyway. Adds 15 tests: - `zipSafety.spec.ts` (9): benign passes, per-entry cap, total cap, ZipBombError type-tagging, malformed-zip distinction, directory- entry handling, named-error surfacing, and the SEC-PoC pattern (sub-1 MiB compressed → 50 MiB inflated rejected on default caps). - `html.spec.ts` zip-bomb suite (5): each producer rejects a bomb; dispatcher propagates correctly; legitimate fixtures still render. - `extract.spec.ts` (1): outer extractor swallows ZipBombError and returns null so the download UI fallback fires. * 🧹 fix: Normalize MIME parameters; add legacy CSV MIME variant Two related Codex catches on PR #12934 — both about MIME-routing inconsistencies between backend and client that would cause extensionless CSV files to render as broken (raw text under an HTML slot) or skip the artifact panel entirely. P2 — backend MIME normalization: `officeHtmlBucket` matched MIME strings exactly, so a real-world `text/csv; charset=utf-8` Content-Type slipped through and the backend returned raw CSV text. The client's `baseMime` helper strips parameters before its own MIME lookup, so it routed the same file to the SPREADSHEET bucket expecting an HTML body that never arrived. Mirror the client's normalization on the backend (strip everything from `;` onward, lowercase) before bucket matching. P3 — client legacy CSV MIME: Backend's `CSV_MIME_PATTERN` accepts three variants (`text/csv`, `application/csv`, `text/comma-separated-values`); the client's `MIME_TO_TOOL_ARTIFACT_TYPE` only had the first two. 
An extensionless file with `text/comma-separated-values` would have backend HTML produced but the client would skip the artifact panel entirely. Add the missing variant. Tests: - 9 new parameterized-MIME cases on backend covering charset/ boundary/case variants for every bucket. - 1 new client routing case for `text/comma-separated-values`. * 🩹 fix: Try office HTML before short-circuiting on category=other Codex review on #12934 caught that the early `category === 'other'` return short-circuited before `hasOfficeHtmlPath` was checked. The classifier returns 'other' for inputs the new dispatcher can still route — extensionless `application/csv` (CSV MIMEs aren't in the classifier's text-MIME set and don't start with `text/`), and extensionless office MIMEs with parameters like `application/vnd... spreadsheetml.sheet; charset=binary` (the classifier's `isDocumentMime` exact-matches these MIMEs without parameter normalization). Both would route correctly through `officeHtmlBucket` but never reached it. Move the office-HTML attempt above the 'other' early return, and drop the `|| category === 'document' || category === 'pptx'` shortcut now that `hasOfficeHtmlPath` covers the same surface (with parameter normalization) and a wider one. ODT still routes through `extractDocument` unchanged — `hasOfficeHtmlPath` returns false for it and the `category === 'document'` branch below handles it. Adds 3 regression tests: - extensionless `application/csv` + category='other' → office HTML - extensionless parameterized office MIME + category='other' → office HTML - defense check: actual binary 'other' (image/jpeg) still returns null without invoking the office producer * 🛡️ fix: Office types are HTML-or-null (no text fallback → XSS) Codex P1 review on #12934 caught that when `renderOfficeHtml` failed (timeout, malformed file, zip-bomb rejection) for an office type, the extractor fell through to `extractDocument` and returned plain text. The client routes by extension/MIME to the office preview buckets and feeds `attachment.text` straight into the Sandpack iframe's `index.html`. A spreadsheet cell or document body containing the literal string `<script>alert(1)</script>` would have been injected as executable markup — direct XSS. The contract for office types is now HTML-or-null with no text fallback. Failed render returns null, the client's empty-text gate keeps the artifact off the panel, and the file falls back to the regular download UI (matching what PPTX already did). PDF and ODT still go through `extractDocument` because the client routes them to PLAIN_TEXT (which the markdown viewer escapes) or no artifact at all, so plain text is safe there. Test reshuffle: - `document` describe block now uses ODT/PDF for the legacy parseDocument-path tests (DOCX/XLSX/XLS/ODS bypass that path). - New "does NOT call parseDocument for office HTML types" test locks in the SEC contract for all four office HTML buckets. - "falls back to ..." tests rewritten as "returns null when ..." with explicit `parseDocumentCalls.length === 0` assertions to prove no text leaks back to the client. - New XSS regression test for the XLSX failure path. - Mock parseDocument failure-name match relaxed to `includes()` so ODT-named tests can use the same trigger. * 🧽 chore: Address follow-up review findings on PR #12934 Wraps up the 10-finding follow-up review. Two MAJOR + four MINOR + two NIT addressed; one NIT skipped after verifying it was a misread of the package.json structure. 
MAJOR - #1: Rewrite `renderOfficeHtml` JSDoc to document the HTML-or-null contract explicitly. The pre-fix doc described a text-fallback path that was the original XSS vector (commit b06f08a). A future maintainer trusting the stale doc could reintroduce the fallback. - #2: Replace byte-truncation of office HTML with a small "preview too large" banner document. Cutting at a UTF-8 boundary lands mid-tag (`<table><tr><td>con\n…[truncated]`) and ships malformed markup to the iframe — unpredictable rendering, occasional broken layouts on DOCX with embedded images / wide spreadsheets. MINOR - #4: Wrap `readSlidesFromZip`'s `zipfile.close()` in try/catch so a close-time exception (mid-flight stream) doesn't replace the original error. Mirrors the defensive pattern in zipSafety.ts. - #5: Refactor PPTX extraction to use `yauzl.fromBuffer` directly, eliminating the temp-file write/unlink the safety pre-flight already proved unnecessary. Removes 4 unused imports (os, path, fs/promises, randomUUID). - #6: Extract `isPreviewOnlyArtifact(type)` to `client/src/utils/ artifacts.ts` so the membership check is unit-testable without mounting the full Artifacts component (Recoil + Sandpack + media query). 15 new test cases covering positive types, negative types, null/undefined, and unknown strings. NIT - #3: Remove dead `stripColorStyles` / `COLOR_PROPERTY_PATTERN` — unused (sanitizer's `allowedStyles` config handles color implicitly). - #7: Remove dead `!_lc_csv_label` worksheet property write. - #9: Remove no-op `exclusiveFilter: () => false` sanitize-html config. - #10: Type-narrow `PREVIEW_ONLY_ARTIFACT_TYPES` to `ReadonlySet<ToolArtifactType>` so the membership table is compile-time checked against the enum. SKIPPED - #8: Reviewer flagged `sanitize-html` as duplicated in devDeps and dependencies. The package has no `dependencies` section — only `devDependencies` and `peerDependencies`. Existing convention (mammoth, xlsx, yauzl, pdfjs-dist) is to appear in BOTH. Removing the devDep entry would break local test runs. Tests: packages/api 4406/4406, client artifacts 128/128. * 🪞 chore: Fix isPreviewOnlyArtifact test description parameter order Follow-up review nit on PR #12934. Jest's `it.each` substitutes `%s` positionally, and the table rows were `[type, expected]` while the description template read `'returns %s for type %s'` — outputting "returns application/vnd.librechat.docx-preview for type true" instead of the intended "type ... returns true". Reorder the template to match the column order. Test runner output now reads naturally: "type application/vnd.librechat.docx-preview returns true". Pure cosmetic — runtime behavior unchanged. * ✨ feat: Improve DOCX rendering and surface filename in panel header Two UX improvements based on hands-on use of the office preview pipeline. DOCX rendering — mammoth strips the navy banners, cell shading, and column layouts that direct-formatted docs apply (python-docx-style output is a common case). The flat `<p><strong>X</strong></p>` and bare `<table><tr><td>` it emits looks washed out next to the source. Three targeted compensations: - Style map promotes `Title`, `Subtitle`, `Heading 1` thru `Heading 6`, and `Quote` paragraphs to their semantic HTML equivalents (mammoth's default only handles Heading 1-6, missing Title/Subtitle/Quote). 
- Extra CSS scoped to `.lc-docx` gives the first table row sticky- looking header styling regardless of `<thead>` (mammoth never emits `<thead>`), adds zebra striping, and treats the python-docx `<p><strong>X</strong></p>` section-heading idiom as a pseudo-h2 with a thin accent left border so document structure survives the round trip. Headings get a left accent or underline so they read as headings instead of just bold paragraphs. - Sanitizer's `allowedAttributes` opens `class` on the heading and block tags the styleMap and CSS heuristics rely on. `<script>`, event handlers, javascript: URLs, etc. are still stripped — the existing security regression suite catches any drift. Panel header — `Artifacts.tsx` showed a generic "Preview" pill for preview-only artifacts. Single-tab Radio is a no-op; surfacing the document filename there gives the user something useful in the chrome without taking real estate. `displayFilename` handles the sandbox dotfile suffix the upload pipeline applies. Tests: html.spec.ts +1 (new CSS-emission lock), 71/71. Backend files suite 428/428. Client 308/308. * ✨ feat: High-fidelity DOCX preview via docx-preview in iframe Switch the default DOCX render path from server-side mammoth → flat HTML to client-side `docx-preview` loaded inside the Sandpack iframe. Mammoth becomes the fallback for files above the cap. Why --- The Sandpack iframe is a real browser DOM. Server-side rendering ceiling for DOCX→HTML is well below the source's visual fidelity — mammoth strips cell shading, run colors, banners, and column layouts because Word's layout model doesn't fit HTML's flow model. Pushing the render into the iframe lifts that ceiling without paying the server-side cost of jsdom or LibreOffice. What ---- - New `wordDocToHtmlViaCdn(buffer)` builds a self-contained HTML doc that embeds the binary as base64 and lets `docx-preview@0.3.7` render it on load. CSS preserves dark/light mode handoff via `prefers-color-scheme`. Bootstrap script falls back to a "preview unavailable, please download" message if the CDN is unreachable or the parse throws. - `docx-preview` and its `jszip` peer dep are pinned to specific versions on jsdelivr with SRI sha384 integrity hashes and `crossorigin="anonymous"`. Refresh: re-fetch the file, run `openssl dgst -sha384 -binary FILE | openssl base64 -A`. - CSP locked down on the iframe: `default-src 'none'`, scripts only from jsdelivr (no eval), `connect-src 'none'` so a parser bug in docx-preview can't be turned into exfiltration of the embedded document, `base-uri 'none'`, `form-action 'none'`. Defense in depth on top of the Sandpack cross-origin sandbox. - `wordDocToHtml` dispatches by size: ≤ 350 KB binary → CDN path (high fidelity), larger → mammoth fallback (preserves the size cap on `attachment.text`). 350 KB chosen so worst-case base64-inflated output (~478 KB) plus wrapper overhead (~5 KB) fits under MAX_TEXT_CACHE_BYTES (512 KB) with 40 KB headroom. - Internal renderers exported as `_internal` for tests. Public API unchanged — callers still go through `wordDocToHtml`. PPTX intentionally NOT switched ------------------------------- Surveyed the available client-side PPTX libraries: - `pptx-preview@1.0.7` ships an ESM-only main entry plus a 1.36 MB UMD that references `require("stream"/"events"/"buffer"/"util")` — bundled for Node, not browser-clean. Could work but the runtime references to undefined Node globals are a fragility risk worth more validation than this PR can absorb. 
- `pptxjs` is jQuery-era, requires four separate UMD scripts in a specific order, less actively maintained. - The honest answer for PPTX is the LibreOffice sidecar (DOCX/XLSX/ PPTX → PDF → PDF.js), which is the architecture every major product (Google Drive, Claude.ai, ChatGPT) effectively uses and the only path to ~5/5 fidelity for arbitrary user decks. PPTX stays on the existing slide-list extraction for now. Open a follow-up issue for the LibreOffice/Gotenberg sidecar. Tests ----- - 6 new in CDN-rendered describe block: wrapper structure, base64 round-trip, SRI integrity + crossorigin, CSP locks (connect-src/eval/base-uri/form-action), fallback message wiring, size-threshold lock. - Adjusted 2 existing tests that asserted on mammoth-path artifacts (literal document text in `<article class="lc-docx">`) — those assertions move to the mammoth-fallback test that calls `_internal.wordDocToHtmlViaMammoth` directly. Dispatcher tests now assert CDN-path signatures instead. packages/api files: 434/434 ✅, full unit suite 4473/4473 ✅. * 🧷 fix: Address Codex P1 (MIME aliases) + P2 (CDN dependency) Two follow-up review findings on PR #12934, both real. P1 — Spreadsheet MIME aliases on client ---------------------------------------- Backend's `officeHtmlBucket` uses the broad `excelMimeTypes` regex from `librechat-data-provider` (covers `application/x-ms-excel`, `application/x-msexcel`, `application/msexcel`, `application/x-excel`, `application/x-dos_ms_excel`, `application/xls`, `application/x-xls`, plus the canonical sheet MIMEs). The client's exact-match `MIME_TO_TOOL_ARTIFACT_TYPE` only had three of those, so an extensionless XLS upload with a legacy MIME would have backend HTML produced but the client would fail to route the artifact at all — preview chip never registers. Fix: import the same regex on the client and add it as a fallback in `detectArtifactTypeFromFile` after the exact-match map miss. Stays in lock-step with the backend automatically. 7 new test cases — one per legacy alias. P2 — Hard CDN dependency on jsdelivr ------------------------------------- Air-gapped / corporate-filtered networks where jsdelivr is unreachable would see DOCX previews permanently degrade to "Preview unavailable" because the iframe could never load the renderer scripts. Mammoth was sitting right there on the server but the dispatcher always preferred the CDN path for files under 350 KB. Fix: `OFFICE_PREVIEW_DISABLE_CDN` env var. When truthy (`1`, `true`, `yes`, case-insensitive, whitespace-trimmed), `wordDocToHtml` short-circuits to the mammoth path regardless of file size. Operators on filtered networks set the env var; default behavior is unchanged. Read at function-call time (not module load) so jest can flip it in `beforeEach` without `jest.resetModules()`. The cost is one property access per render. 12 new test cases: env-unset uses CDN (default), all five truthy forms force mammoth, five non-truthy forms (`false`/`0`/`no`/empty/ arbitrary string) leave CDN active. Tests ----- packages/api/src/files: 446/446 ✅ (was 434, +12 from env-var matrix). client artifact suites: 235/235 ✅ (was 228, +7 from MIME aliases). * ✨ feat: High-fidelity PPTX preview via pptx-preview in iframe Mirrors the DOCX CDN architecture for PPTX: small files (≤350 KB binary) embed as base64 and render via `pptx-preview` loaded from jsdelivr inside the Sandpack iframe. Larger files and air-gapped deployments fall back to the existing slide-list extraction. 
Why --- PPTX is the format where the gap between LibreChat's preview and Claude.ai-style previews was most visible (slide-list of bullet points vs. rendered slide layouts). LibreOffice → PDF → PDF.js is still the eventual gold-standard answer for PPTX fidelity, but client-side rendering inside the Sandpack iframe gets us a meaningful intermediate step (~1.5/5 → ~3.5/5) without a sidecar. What ---- - `pptx-preview@1.0.7` (ISC license, ~1.36 MB UMD bundle that includes its echarts/lodash/uuid/jszip/tslib deps inline). Pinned to a specific version on jsdelivr with SHA-384 SRI and `crossorigin="anonymous"`. - `buildPptxCdnDocument` mirrors the DOCX wrapper: same CSP locks (`default-src 'none'`, `connect-src 'none'`, no eval, no base/form tampering), same `id="lc-doc-data"` base64 slot, same fallback message wiring (`typeof pptxPreview === 'undefined'` → "Preview unavailable"). - New public `pptxToHtml(buffer)` dispatcher; `bufferToOfficeHtml` switches its `'pptx'` case to call it. `pptxToSlideListHtml` stays exported as the slide-list-only path (still hit by tests directly and by the dispatcher fallback). - `OFFICE_PREVIEW_DISABLE_CDN=true` env-var hatch applies to PPTX too — air-gapped operators get the slide-list path. Same env-var read at call time, same matrix of truthy values (`1` / `true` / `yes` / case-insensitive / whitespace-trimmed). - `_internal` re-exports moved to after the PPTX section since the PPTX internals live further down in the file. Adds `pptxToHtmlViaCdn`, `MAX_PPTX_CDN_BINARY_BYTES`, `PPTX_PREVIEW_CDN`. Honest caveats -------------- - The 1.36 MB UMD bundle has `require("stream"/"events"/"buffer"/ "util")` references in its outer wrapper. Those are bundled-dep artifacts (likely from `tslib` / Node-shim transforms) and don't appear to execute on the browser code paths, but I haven't done manual e2e on a wide range of decks. If a class of files turns up that breaks rendering, the iframe-side fallback message catches it and operators have `OFFICE_PREVIEW_DISABLE_CDN=true` as the bail. - First-render CDN fetch is ~1.36 MB (browser-cached after). - PPTX with embedded media easily exceeds the 350 KB binary cap; those files take the slide-list path. Lifting the cap is a follow-up (tied to the broader self-hosting work). Tests ----- 11 new in two new describe blocks: - `pptxToHtml dispatcher`: routing predicate (small → CDN, env-set → slide-list). - `CDN-rendered path`: base64 round-trip, SRI integrity + crossorigin, CSP locks (connect/eval/base/form), fallback message, size-threshold lock at 350 KB. - `OFFICE_PREVIEW_DISABLE_CDN escape hatch`: env-var matrix for truthy values. packages/api/src/files: 457/457 ✅ (was 446, +11). * 🪟 fix: DOCX preview fills the artifact panel width docx-preview defaults to rendering at the document's native page width (8.5in for letter, 21cm for A4). In a wide artifact panel that left whitespace on either side; in a narrow one it forced horizontal scroll. Two changes: - Pass `ignoreWidth: true` to `docx.renderAsync` so the library skips the document's pageSize width and uses its container's width. - Defensive CSS overrides on `.docx-wrapper` and `.docx-wrapper > section.docx` in case a future library version regresses on the option, plus `padding: 0` on the wrapper to drop the page-edge whitespace docx-preview otherwise reserves. `renderHeaders`/`renderFooters`/etc. stay enabled — those still appear in the rendered output, just inside a container that fills the panel instead of a fixed-width "page." 
Tests unchanged (100/100); manual e2e ahead of merge. * 🩹 fix: PPTX black screen — allow blob: workers + harden bootstrap Manual e2e of the PPTX CDN renderer surfaced a black screen with "Could not establish connection. Receiving end does not exist." unhandled-rejection — characteristic of a Web Worker that couldn't start. Root cause: pptx-preview's bundled echarts dep spins up Web Workers via blob: URLs for chart rendering. Our CSP had `default-src 'none'` and no `worker-src`, so workers fell back to default → blocked. The async failure deep inside echarts didn't surface through the outer `previewer.preview()` promise, so my bootstrap's `.catch` never fired, the loading state was removed, and the iframe sat with the body background showing through (dark navy in dark mode = "black screen"). Three changes: - Add `worker-src blob:` to the PPTX CSP. Allows blob:-only worker creation without permitting arbitrary worker URLs. - Bootstrap: window-level `unhandledrejection` and `error` listeners so rejections from inside bundled-dep async pipelines surface as the user-facing "Preview unavailable" fallback instead of going silent. - Bootstrap: 8-second timeout that checks `container.children.length` — if the renderer hasn't appended anything visible by then, assume silent failure and show the fallback. Also wipe `container.innerHTML` when showing the fallback so a partial render doesn't compete with the message. DOCX wrapper unchanged: docx-preview doesn't use workers, so the worker-src directive doesn't apply, and the existing fallback path already covers its failure modes. Tests ----- - Existing PPTX CSP test now also asserts `worker-src blob:` is present. - Existing fallback-message test extended to cover the new unhandledrejection/error/timeout listeners. packages/api/src/files: 467/467 ✅. * 🔒 fix: gate office HTML routing on backend trust flag (textFormat) Codex P1 review on PR #12934: routing .docx/.csv/.xlsx/.xls/.ods/.pptx into the office preview buckets assumed `attachment.text` was already sanitized full-document HTML, but that guarantee only existed for the new code-output extractor path. Existing stored attachments and other non-code paths can still carry plain extracted text — `useArtifactProps` would then inject that as `index.html` inside the Sandpack iframe. Adds a `textFormat: 'html' | 'text' | null` trust flag persisted on the file record by the code-output extractor, surfaced over the SSE attachment payload and the TFile API type. The client's routing in `detectArtifactTypeFromFile` requires `textFormat === 'html'` before landing on an office HTML bucket; everything else (legacy attachments, RAG-extracted plain text from `parseDocument`, explicitly-marked 'text' entries) falls back to the PLAIN_TEXT bucket where the markdown viewer escapes content rather than executing it. Tests: new `getExtractedTextFormat` helper has 14 cases covering all office paths, legacy XLS MIME aliases, parseDocument fallthroughs, and null-input. Client `artifacts.test.ts` adds three security-gate tests proving downgrade behavior for missing/null/'text' textFormat, plus a `fileToArtifact` test that legacy office attachments without the flag end up in PLAIN_TEXT with their content escaped. * 🌐 fix: air-gapped DOCX preview — embed mammoth fallback in CDN doc Codex P2 review on PR #12934: the CDN-rendered DOCX path always pulled docx-preview + jszip from cdn.jsdelivr.net. 
Air-gapped or corporate- filtered networks where jsdelivr is blocked would degrade to a static "Preview unavailable" message even though the server already had a local mammoth renderer that could produce readable output. Now the dispatcher renders mammoth first and embeds the sanitized output inside the CDN document as a hidden `#lc-fallback` block. The iframe's existing `typeof docx === 'undefined'` check (which fires when the CDN scripts can't load) un-hides the fallback so the user sees a real preview. CDN-success path is unchanged: high-fidelity docx-preview output owns the viewport, mammoth fallback stays hidden. Two new safeguards in the dispatcher: - Size budget: if base64(binary) + mammoth body + wrapper > 512 KB (the `attachment.text` cache cap), drop to mammoth-only so a giant document still renders. The `OFFICE_HTML_OUTPUT_CAP` constant mirrors `MAX_TEXT_CACHE_BYTES` from extract.ts (separate constant to avoid a circular import; pinned by a unit test). - `lc-render` is hidden when fallback shows so the empty padded slot doesn't sit above the mammoth content. Tests: existing CDN-path tests updated for the new `wordDocToHtmlViaCdn(buffer, mammothBody)` signature; new test for the embedded fallback structure (`#lc-fallback`, mammoth body content, "High-fidelity renderer unavailable" notice, render-slot hide); new constant pin and per-fixture cap-respect assertion. * 🧪 feat: LibreOffice → PDF preview path (POC, opt-in via env) Per the plan-mode discussion: prove out a LibreOffice subprocess pipeline as an alternative to the docx-preview / pptx-preview CDN renderers. LibreOffice handles every office format Microsoft and LibreOffice itself can open (DOCX, PPTX, XLSX, ODT, ODP, ODS, RTF, many more), produces a PDF, and the host browser's built-in PDF viewer renders it inside the Sandpack iframe via a `data:` URI. No client-side JS dependency, no CDN dependency, true high fidelity for any feature LibreOffice supports. Off by default. Operators opt in by setting both: - `OFFICE_PREVIEW_LIBREOFFICE=true` - LibreOffice (`soffice` or `libreoffice`) on the server's `$PATH` When either is missing, the dispatcher falls through to the existing CDN/mammoth/slide-list pipeline so a misconfiguration doesn't break previews. Hardening (`packages/api/src/files/documents/libreoffice.ts`): - Fresh subprocess per call with isolated temp dir, stripped env (PATH/HOME/TMPDIR only), and `-env:UserInstallation` so concurrent conversions can't collide on shared `~/.config/libreoffice` locks - 30-second wall-time cap; SIGKILL on timeout - 50 MB PDF output cap to bound disk pressure - 512 KB output cap on the wrapped HTML so the SSE/cache contract stays intact (base64 inflates ~33%, effective PDF cap ~380 KB) - Macros disabled by default flags (`--norestore --invisible --nodefault --nofirststartwizard --nolockcheck`) - Tag-distinct `LibreOfficeUnavailableError` / `LibreOfficeConversionError` so callers can swallow appropriately Iframe wrapper (`buildPdfEmbedDocument`): - Native browser PDF viewer via `<iframe src="data:application/pdf; base64,...">` — works in Chrome, Edge, Safari, Firefox - CSP locks the iframe to `default-src 'none'; frame-src data:; connect-src 'none'; script-src 'unsafe-inline'` — no outbound network, no eval, no external scripts - `#view=FitH` for first-paint sizing - 4-second heuristic timer that swaps to a "Preview unavailable" fallback when the browser's PDF viewer is disabled (kiosk mode, Brave Shields, etc.) 
Wired into `wordDocToHtml` and `pptxToHtml` as the first branch — returns null when disabled / unavailable / oversized so the existing pipeline takes over. XLSX intentionally NOT routed through this path: SheetJS's HTML output is already excellent for spreadsheets (sortable, sticky headers) and PDF rendering of sheets is awkward. Tests (`libreoffice.spec.ts`, 30 cases — 25 always run, 5 conditional on the binary): env-gating parser semantics matching `OFFICE_PREVIEW_DISABLE_CDN`, fallthrough contract (never throws, returns null on any failure), CSP lock-down, fallback structure, binary probe caching + missing-binary path, error tagging, and integration tests that engage when `soffice`/`libreoffice` is on PATH (DOCX→PDF, PPTX→PDF, output-cap fallthrough). Integration tests skip cleanly on bare CI. * 🩹 fix: CI — preserve legacy download path for empty-text office attachments Two regressions surfaced after the textFormat security gate landed. 1. **Client** (`LogContent.test.tsx` "falls back to the legacy download branch for an office file with no extracted text"): When the security gate downgraded an office type without `textFormat: 'html'` to PLAIN_TEXT, the lenient empty-text gate on PLAIN_TEXT then accepted a missing `text` field and rendered a half-empty panel card. The historical contract is "office type + no text → legacy download UI"; the downgrade should only fire when there's actual plain text that needs safe-escaping. Fix in `detectArtifactTypeFromFile`: short-circuit to null when the office type lands in the security-gate branch with no text. The PLAIN_TEXT downgrade still fires for legacy attachments that DO carry plain text. 2. **API** (`process.spec.js` + `process-traversal.spec.js`): the `@librechat/api` mocks didn't expose `getExtractedTextFormat`, so `processCodeOutput` called `undefined(...)` → TypeError → tests got undefined results. Added the helper to both mocks with a faithful default (returns 'text' for non-null extractor output, null otherwise). Tests: new regression in `artifacts.test.ts` pinning the empty-text + no-textFormat → null contract for all four office types (.docx/.csv/.xlsx/.pptx), so a future refactor can't silently re-introduce the half-empty card. * 🩹 fix: PPTX slides scale to fit panel width (no horizontal scroll) Manual e2e on PR #12934: pptx-preview rendered slides at their native init dimensions (960×540 default). The artifact panel is much narrower than that, so the iframe got a horizontal scrollbar and only a corner of each slide showed at any time — the user had to drag-scroll across each slide to read it. Fix: keep pptx-preview's init at 960×540 so its internal layout math stays correct, then post-process each rendered slide: - Cache the slide's native width/height on its dataset BEFORE applying any transform (so subsequent re-fits don't measure the already-transformed box). - Wrap the slide in `.lc-slide-wrap` with explicit width/height set inline to the scaled dimensions; the wrap shrinks the layout space the slide occupies. - Apply `transform: scale(panel_width / 960)` to the slide itself with `transform-origin: top left` so the rendered output shrinks from the top-left corner into the wrap. - Cap the scale at 1.0 so small slides don't upscale and get blurry. Streaming + resize: - `MutationObserver` watches the container for slide insertions so streaming renders get scaled on arrival rather than waiting for the entire `previewer.preview` promise to settle. 
Streaming + resize:
- `MutationObserver` watches the container for slide insertions so streaming renders get scaled on arrival rather than waiting for the entire `previewer.preview` promise to settle.
- `ResizeObserver` re-fits all wrapped slides when the iframe resizes (panel drag, window resize).

Tests: a new "bootstrap wraps + scales each slide" test locks in the wrap class, scale computation, observer setup, and native-size caching so a future refactor can't silently re-introduce the overflow.

* 🩹 fix: PPTX wrap+scale runs after preview, not during streaming

Manual e2e on PR #12934: a regenerated PPTX showed "Preview unavailable" in the iframe. Root cause: the MutationObserver I added in the previous commit fired during pptx-preview's render and moved slides out from under the library's references. pptx-preview's async pipeline raised an unhandled rejection, the iframe's window-level listener caught it, and the fallback message replaced the partial render.

Fix: drop the MutationObserver. Apply the wrap+scale ONCE in a `finalize` step that runs:
- On `previewer.preview().then` (the happy path)
- On the 8-second timeout safety net IF the container has children (the silent-failure path: pptx-preview emitted slides but never resolved its outer promise)

To prevent the user from seeing an unscaled flash while pptx-preview renders into the 960px-wide canvas, the container is set to `visibility: hidden` at init and only revealed inside `finalize` after wrap+scale completes. Resize handling stays via `ResizeObserver` on `document.body`, installed AFTER the wrap pass so it doesn't fire during the wrap itself.

Tests: the regression assertion now also locks in:
- `container.style.visibility = 'hidden' / 'visible'` (the flash-prevention contract)
- Absence of MutationObserver (the bug we just removed must NOT creep back in via a future "let's scale during streaming" idea)

* 🩹 fix: PPTX slides fill panel width (drop upscale cap, per-slide scale)

Manual e2e on PR #12934: slides rendered correctly but didn't fill the artifact panel, leaving whitespace on either side. Two issues:

1. The scale was capped at `Math.min(1, available / SLIDE_W)`. On panels wider than 960px, the cap clamped the scale to 1.0 and slides rendered at native size with whitespace on the sides instead of stretching.
2. The scale was computed against the constant `SLIDE_W = 960`, but pptx-preview can emit slides whose `offsetWidth` differs from the init param if the source PPTX has a non-16:9 layout. Per-slide division of `available / nativeW` handles that case.

Fix: replace `computeScale()` with two helpers: `availableWidth()` returns the panel content-box width and `scaleFor(nativeW)` returns the per-slide scale. No upscale cap. The slide content is rendered by pptx-preview against its 960×540 canvas using vector text / canvas, so scaling up to e.g. 1500px doesn't visibly degrade quality.

Tests: the regression now also asserts:
- `availableWidth()` and `scaleFor()` exist by name
- The exact scale formula `availableWidth() / (nativeW || SLIDE_W)`
- A negative assertion that `Math.min(1, ...)` is NOT present, so a future "let's add an upscale cap" rewrite can't silently re-introduce the whitespace.
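Illustrative versions of the two helpers, matching the formula the tests pin; the panel measurement here is an assumption, not the actual bootstrap code:

```ts
const SLIDE_W = 960; // pptx-preview init width, used only as a fallback

// Panel content-box width: the space a slide may occupy after scaling.
// Inside the iframe this is approximated by the viewport width.
function availableWidth(): number {
  return document.documentElement.clientWidth;
}

// Per-slide scale: divide by the slide's own native width so non-16:9 decks
// (where offsetWidth differs from the init param) still fill the panel.
// Deliberately no Math.min(1, ...) upscale cap.
function scaleFor(nativeW: number): number {
  return availableWidth() / (nativeW || SLIDE_W);
}
```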
* 🩹 fix: PPTX preview fills panel height (no white gap below slides)

Manual e2e on PR #12934: the PPTX preview filled the panel width but left empty space below the last slide. DOCX didn't have this issue because its content (mammoth-rendered HTML) flows naturally and either fits exactly or overflows; PPTX slides are fixed-aspect 16:9 and don't grow with the panel. Two changes:

1. **Body fills the iframe viewport**: `html, body { min-height: 100vh }` plus `body { display: flex; flex-direction: column }` and `#lc-render { flex: 1 0 auto }`. The dark theme bg now fills the iframe even when total slide content is shorter than the panel, so a single-slide deck never reveals a "white below" gap.

2. **Per-slide scale honors viewport height**: `scaleFor(nativeW, nativeH)` now returns `min(width-fit, height-fit)` (the largest factor that fits without overflowing either dimension). On a tall artifact panel with a short deck, slides grow up to the full panel height instead of staying at the width-bound size. Height-fit was always the conceptually correct behavior, but the previous implementation only used width-fit, leaving half the viewport unused per slide.

Tests: the regression now also asserts `availableHeight()`, the `Math.min(sw, sh)` formula, and `min-height: 100vh` are in the bootstrap. The negative assertion for the old `Math.min(1, ...)` upscale cap remains.

* 🩹 fix: revert body flex on PPTX bootstrap (caused black-screen render)

Manual e2e regression on PR #12934: the previous commit added `body { display: flex; flex-direction: column }` plus `#lc-render { flex: 1 0 auto }` to fill the panel height. Side effect: pptx-preview's internal layout assumes block flow on its ancestor elements; making body a flex container caused slides to render as solid-black rectangles (sized correctly, but with no visible content inside).

Fix: keep just `html, body { min-height: 100vh }` for the bg-fill effect; that alone gives empty space below short decks the dark theme bg without changing flow. Drop the body-flex and the `#lc-render { flex: 1 0 auto }` directives. The height-aware `scaleFor(nativeW, nativeH)` from the same commit stays: it doesn't interact with pptx-preview's layout, it just chooses a per-slide scale. Each slide still grows to fit the viewport contain-style.

A negative assertion was added to the regression test: `body { display: flex }` must NOT appear in the bootstrap, so a future "let's flex the body to make height work" rewrite can't silently re-introduce this.

(Note: the user also flagged DOCX theming as faint body text; I'm leaving that for now per their note that it may be pre-existing. Not addressed in this commit.)

* 🩹 fix: revert PPTX height-fill changes; lock DOCX CDN to light scheme

Two fixes for separate manual e2e regressions on PR #12934.

**1. PPTX black screen (single slide rendering as solid black).** The previous fix removed `body { display: flex }` thinking that was the sole cause, but the regression persisted. Bisecting against the last known-good commit (
|
||
|
|
5683706af5
|
🔐 feat: OIDC Bearer Token Authentication for Remote Agent API (#12450)
* Remote Agent Auth middleware * consider migration and update user * fix eslint errors * add scope validation * fix codex review errors * add filter for use: sig * add jwks-rsa deps * Fix remote agent OIDC auth review findings * Polish remote agent OIDC timeout coverage * Reject remote OIDC tokens without subject * Use tenant context for remote agent auth config * Harden remote agent OIDC scope handling * Polish remote agent OIDC cache and scope tests * Resolve remote agent auth review comments * Reuse OpenID email claim resolver for remote auth * Skip empty OpenID email fallback claims * Use pre-auth tenant context for remote auth config * Downgrade expected OIDC fallback logging * Require secure remote OIDC endpoints * Polish remote agent auth edge cases * Enforce unique balance records * Bind remote OpenID users to issuer * Fix issuer-scoped OpenID indexes * Avoid unique balance index requirement * Fix remote OpenID issuer normalization boundaries * Require issuer-bound OpenID lookups * Enforce tenant API key policy after auth * Fix remote auth tenant policy types * Normalize remote OIDC discovery issuer * Allow normalized remote OIDC issuer validation * Enforce resolved tenant OIDC policy * Polish OpenID issuer and scope validation --------- Co-authored-by: Danny Avila <danny@librechat.ai> |
||
|
|
3170bd8b22
|
📦 chore: bump @librechat/agents to v3.1.77
|
||
|
|
d6d70eeb26
|
📦 chore: bump @librechat/agents to v3.1.76
|