Mirror of https://github.com/danny-avila/LibreChat.git
Synced 2026-05-13 16:07:30 +00:00 · 1194 commits

---

**d90567204e** 🛟 fix: persist Vertex Gemini 3 thoughtSignatures across DB round-trips (#13026)

When a tool round-trip is interrupted between the tool result and the
model's text reply (user aborted, network drop, pod restart, ...) and
LibreChat persists the partial assistant message, the next conversation
turn reconstructs an `AIMessage` from `formatAgentMessages` that has
`tool_calls` populated but no `additional_kwargs.signatures`. Vertex
Gemini 3 rejects the resumed request with 400 because the most recent
historical functionCall has no `thought_signature`.
## Storage shape
Capture as `Record<tool_call_id, signature>` rather than a flat array.
This addresses the codex P1 review:
> When an assistant turn contains multiple sequential tool-call batches,
> this restoration path writes all persisted thoughtSignatures onto only
> the last tool-bearing AIMessage. Vertex/Gemini validates signatures
> for each step in the current tool-calling turn, so earlier
> functionCall steps reconstructed without their signature can still
> fail with 400.
A single agent run can fire multiple `chat_model_end` events when the
loop cycles the LLM with intervening tool results — each cycle owns a
distinct `tool_call_id`. Per-id storage maps each signature back onto
the right reconstructed `AIMessage`, not just the last one.
## Mapping
`additional_kwargs.signatures` is a flat array indexed by *response part*
(text + functionCall interleaved). `tool_calls` is just the function
calls in their original order. Non-empty signatures correspond 1:1 with
tool_calls in order — see `partsToSignatures` in
`@langchain/google-common`. A single-pass walk maps each non-empty `signatures[i]` onto the next `tool_call.id` in order.
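
The mapping just described can be sketched as follows (a minimal illustration of the walk, not the actual `ModelEndHandler` code — the function name is hypothetical):

```typescript
// Map per-part signatures onto tool_call ids. `signatures` is indexed by
// response part (text parts yield empty strings); non-empty entries line up
// 1:1 with `toolCalls` in their original order.
function mapSignaturesToToolCallIds(
  signatures: string[],
  toolCalls: { id: string }[],
): Record<string, string> {
  const result: Record<string, string> = {};
  let callIndex = 0;
  for (const signature of signatures) {
    if (!signature) {
      continue; // text part — owns no tool call, consumes no tool_call slot
    }
    const call = toolCalls[callIndex++];
    if (call == null) {
      break; // more signatures than tool calls; stop rather than misattribute
    }
    result[call.id] = signature;
  }
  return result;
}
```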
## Pipeline
| Stage | File | Change |
|---|---|---|
| Capture | callbacks.js | `ModelEndHandler` accepts `Record<string,string>` map; walks signatures + tool_calls in tandem to record per-id. Gated on the map being provided — non-Vertex flows are no-op (and also no-op even when provided, since they don't emit signatures). |
| Plumbing | initialize.js | Allocate `collectedThoughtSignatures = {}`, share with handler + client. Always allocated; the JSDoc explicitly documents that it stays empty for non-Vertex providers. |
| Surface | client.js | `sendCompletion` returns `metadata.thoughtSignatures` when the map has entries; falls through unchanged when empty. |
| Persist | (existing BaseClient.handleRespCompletion) | Writes `metadata` from `sendCompletion` onto `responseMessage.metadata`. Mongoose `Mixed` — no migration. |
| Restore | formatMessages.js | Track every tool-bearing AIMessage produced from a TMessage. For each, build a position-aligned `additional_kwargs.signatures` array (empty placeholders for tool_calls without a stored sig). Agents' `fixThoughtSignatures` dispatches non-empty entries to functionCall parts in order. |
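
The Restore stage in the table above can be sketched in the other direction (hypothetical helper name; the real logic lives in `formatMessages.js`):

```typescript
// Rebuild a position-aligned `additional_kwargs.signatures` array from the
// persisted Record<tool_call_id, signature>, with empty-string placeholders
// for tool calls that have no stored signature.
function restoreSignaturesArray(
  stored: Record<string, string>,
  toolCalls: { id: string }[],
): string[] {
  return toolCalls.map((call) => stored[call.id] ?? '');
}
```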
## Live verification
- **Single-step:** real Vertex `gemini-3.1-flash-lite-preview` resume-after-tool case. With fix ✅ / without ❌ 400.
- **Multi-step (codex case):** real two-step agent loop (list /tmp → echo done). Each step's signature attaches to its own reconstructed AIMessage. With fix ✅ / without ❌ 400.
- **Cross-provider:** Anthropic Claude haiku-4.5 + OpenAI gpt-5-mini accept the persisted/restored shape unchanged.
## Tests
`modelEndHandler.spec.js` (new) — 6 tests:
- maps non-empty signatures onto tool_call_ids in order
- accumulates per-id across multiple `model_end` events (multi-step)
- no-op when `collectedThoughtSignatures` is null
- no-op when `signatures` field missing (non-Vertex)
- no-op when `tool_calls` missing
- preserves existing `collectedUsage` array contract
`formatAgentMessages.spec.js` — 6 new tests:
- restores onto the AIMessage that owns the tool_call
- per-step attachment for multi-step turns (codex review case)
- preserves tool_call ordering when signatures are partial
- no-op when metadata.thoughtSignatures absent
- no-op when assistant has no tool_calls
- no-op when stored ids don't match any current tool_call
37 passing across 3 suites; 15 existing formatAgentMessages tests unchanged.
## Compatibility
- Backward-compatible — restore gated on `metadata.thoughtSignatures` being a populated object; capture gated on the map being provided.
- No schema migration — uses `Message.metadata: Mixed` already in place.
- Cross-provider safe — non-Vertex providers tolerate the field (verified live against Anthropic + OpenAI converters).
- Pairs with [agents#159](https://github.com/danny-avila/agents/pull/159) for full coverage on histories that mix plain-text and tool-call AIMessages.

---

**e262219c8f** 🔄 feat: Cross-Origin Admin OAuth Refresh (#13007)

* feat(admin-panel): add /api/admin/oauth/refresh endpoint for cross-origin BFF refresh
The cookie-based /api/auth/refresh controller can't be reached cross-origin
from a separately-hosted admin panel because the refresh-token cookie isn't
sent on cross-origin fetches. Add a dedicated POST /api/admin/oauth/refresh
endpoint that accepts the refresh token in the request body, exchanges it
at the IdP via openid-client refreshTokenGrant, and returns the same
response shape as /api/admin/oauth/exchange.
Implementation lives in packages/api/src/auth/refresh.ts as the
applyAdminRefresh helper. It validates the refreshed tokenset, looks up the
admin user by openidId (with optional user_id disambiguation when multiple
user docs share an openidId), mints the bearer via an injected mintToken
hook, and runs an optional onRefreshSuccess hook for downstream forks that
need to update server-side session state.
The default mintToken passed by the OSS route signs an HS256 LibreChat JWT
via generateToken so admin panel callers continue to use the existing local
JWT strategy. Forks that prefer to hand back an IdP-signed token (e.g. for
deployments where the JWT auth gate is JWKS-only) override mintToken
without changing the helper or the route.
Also threads expiresAt through AdminExchangeData and AdminExchangeResponse
so admin panel clients can drive proactive refresh before the bearer
expires. Defaults the OSS exchange flow to Date.now() + sessionExpiry.
* fix(admin-panel): address review feedback on /api/admin/oauth/refresh
mintToken now returns {token, expiresAt} so the minter is authoritative
for the bearer's lifetime instead of deriving it from the IdP `exp` claim.
The refresh response would otherwise lie to the admin panel and trigger
premature or late refresh cycles.
The helper now falls back to the inbound refresh_token when the IdP omits
one on rotation (Auth0 with rotation off, Microsoft personal accounts).
Without this the admin panel loses its refresh capability after one cycle.
Other hardening:
- resolveAdminUser validates user_id with Types.ObjectId.isValid before hitting Mongoose, avoiding a CastError that would surface as a generic 500 with no useful information for the client.
- If user_id resolves to a user whose openidId does not match the refreshed sub, throw USER_ID_MISMATCH (401) instead of silently swapping in a different user matching the sub.
- Wrap tokenset.claims() in readClaims so an IdP that returns a tokenset without a usable id_token gets mapped to CLAIMS_INCOMPLETE (502) rather than bubbling a raw exception.
- findUsers now uses the same SAFE_USER_PROJECTION as getUserById so the fallback path no longer pulls password/totpSecret/backupCodes into memory.
- Removed dead fields (email on AdminRefreshClaims, id_token on RefreshTokenset) and fixed import ordering per AGENTS.md.
Adds packages/api/src/auth/refresh.spec.ts: 18 tests covering the happy
path, userId disambiguation (match, invalid ObjectId, null, mismatch),
all error branches (IDP_INCOMPLETE, CLAIMS_INCOMPLETE for both throw and
missing sub, USER_NOT_FOUND, mintToken/onRefreshSuccess propagation), and
refresh-token preservation under rotation/no-rotation.
* chore(admin-panel): polish per re-review on /api/admin/oauth/refresh
readClaims now logs the original error name/message at warn before mapping
to CLAIMS_INCOMPLETE so a programming bug doesn't get silently rebadged
as an IdP problem in production logs.
The route handler's JSDoc now enumerates every error response (status +
error_code) so admin-panel implementors can plan for each branch without
reading the source.
Tightens the helper's surface: removed the now-dead `exp` field from
`AdminRefreshClaims` (only `sub` is read since the v2 mintToken refactor),
and tightened `AdminRefreshDeps.findUsers`'s projection parameter from
`string | null` to `string` so the contract matches actual usage.
Test polish: the userId-resolves-to-null fallthrough test now asserts the
exact `findUsers` and `getUserById` call arguments so a regression in the
fallthrough query shape is caught. The "skips onRefreshSuccess" test now
asserts a populated response shape rather than just `toBeDefined`.
Declined per prior triage and re-confirmed: a role guard inside
`applyAdminRefresh` (downstream `/api/admin/*` already enforces
ACCESS_ADMIN via requireCapability) and moving the IdP grant call out of
the JS route into TypeScript (matches existing oauth.js / openidStrategy
pattern; package-boundary refactor belongs in a separate PR).
* fix(admin-panel): reject /api/admin/oauth/refresh tokensets from foreign issuers
When the route handler can resolve the configured OpenID issuer, it now
threads it into applyAdminRefresh as expectedIssuer. The helper compares
that against the tokenset claims iss (after normalizeOpenIdIssuer on
both sides to absorb trailing-slash differences) and throws
ISSUER_MISMATCH (401) on mismatch.
The check is skipped when either side is unset so behavior is unchanged
for IdPs that don't return iss on a refresh-grant id_token, and for
older deployments where the OpenID config doesn't expose serverMetadata.
This is a defense-in-depth measure for the refresh path only. The
deeper OIDC posture fix (binding IUser lookup to (sub, iss) as a pair)
is pre-existing debt across openidStrategy.js and the regular exchange
flow as well, and belongs in a separate PR with the schema change and
backfill migration.
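
The issuer comparison described above can be sketched roughly as follows (function name hypothetical; the real code uses `normalizeOpenIdIssuer`, shown here as simple trailing-slash stripping):

```typescript
// Compare expected vs. tokenset issuer. The check is skipped when either
// side is unset, matching the behavior for IdPs that omit `iss` on a
// refresh-grant id_token and deployments without serverMetadata.
function issuerMatches(expected?: string, actual?: string): boolean {
  if (!expected || !actual) {
    return true; // check skipped — behavior unchanged
  }
  // Absorb trailing-slash differences on both sides before comparing.
  const normalize = (iss: string) => iss.replace(/\/+$/, '');
  return normalize(expected) === normalize(actual);
}
```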
* fix(admin-panel): bind refresh user lookup to (sub, iss) and handle getOpenIdConfig throw
Two fixes raised on the PR thread that I previously misdescribed:
The user lookup in resolveAdminUser was keyed on openidId alone, so a
tokenset from a different issuer that happened to share the same sub
could resolve to a local user from a different IdP. Now exports
getIssuerBoundConditions and isUserIssuerAllowed from openid.ts (the
helpers findOpenIDUser already uses) and reuses them. The findUsers
filter becomes ($or of getIssuerBoundConditions for openidId) when an
expectedIssuer is provided, with the same legacy backward-compat
clause for users whose openidIssuer field was never populated. The
direct user_id path now also checks isUserIssuerAllowed and throws
USER_ID_MISMATCH if the stored openidIssuer disagrees with the
configured issuer.
The route's getOpenIdConfig() call was previously documented as
returning null when uninitialized; the actual implementation throws.
That made the if (!openIdConfig) guard unreachable, and an unconfigured
server would surface as 500 INTERNAL_ERROR rather than 503
OPENID_NOT_CONFIGURED. Wraps the call in try/catch so the documented
503 response is what callers actually receive.
Adds 4 tests covering the new lookup binding behavior.
* fix(admin-panel): re-check ACCESS_ADMIN on /api/admin/oauth/refresh
The IdP refresh token can outlive a capability/role change, so the
initial requireAdminAccess on the OAuth callback isn't sufficient.
Inject canAccessAdmin via the existing capability model
(hasCapability with SystemCapabilities.ACCESS_ADMIN, matching
requireAdminAccess so custom roles and user grants are honored)
and gate token minting on it. Capability backend errors are
warn-and-denied to keep the bearer-mint path fail-closed.
* fix(admin-panel): scope /api/admin/oauth/refresh to the request tenant
The same (openidId, openidIssuer) pair is allowed across tenants by
the user schema's unique index. The refresh helper was wrapping both
the direct getUserById and the fallback findUsers in runAsSystem,
bypassing tenant isolation, so an IdP identity that exists in two
tenants could resolve to the wrong tenant's user and mint a JWT
bound to that tenant.
Drop the runAsSystem wrappers, add a trusted tenantId option to
applyAdminRefresh, AND it into the fallback findUsers filter, and
assert it against the direct getUserById result. Mount
preAuthTenantMiddleware on the refresh route so the deployment's
X-Tenant-Id header drives the trusted tenant via ALS. Single-tenant
deploys (no header) keep the existing openidId-only behaviour.
Adds TENANT_MISMATCH (401) and regression tests covering duplicate
(sub, iss) across tenants plus the direct-userId tenant assertion.
* fix(admin-panel): gate /api/admin/oauth/refresh on OPENID_REUSE_TOKENS
The OSS refreshController only refreshes OpenID tokensets when
OPENID_REUSE_TOKENS is enabled. The body-based admin variant was
unconditionally calling refreshTokenGrant, which made the flag
ineffective for the admin OAuth flow and let admin sessions keep
renewing in deployments that explicitly turned token reuse off.
Add the same isEnabled(process.env.OPENID_REUSE_TOKENS) check up
front and return 403 TOKEN_REUSE_DISABLED so the admin panel BFF
can surface the configuration mismatch instead of silently churning
through retries.
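
Taken together, the commits in this series converge on an error contract for the endpoint. A hypothetical summary, with codes and statuses taken from the messages above (the mapping object is illustrative, not the real implementation):

```typescript
// Bearer minting contract after the review round: the minter is
// authoritative for the bearer's lifetime, not the IdP `exp` claim.
interface MintResult {
  token: string;
  expiresAt: number;
}

// Error codes → HTTP statuses for POST /api/admin/oauth/refresh.
const refreshErrorStatus: Record<string, number> = {
  TOKEN_REUSE_DISABLED: 403, // OPENID_REUSE_TOKENS disabled
  OPENID_NOT_CONFIGURED: 503, // getOpenIdConfig() threw (unconfigured server)
  ISSUER_MISMATCH: 401, // tokenset iss disagrees with configured issuer
  USER_ID_MISMATCH: 401, // user_id's stored openidId/issuer disagrees
  TENANT_MISMATCH: 401, // resolved user belongs to a different tenant
  CLAIMS_INCOMPLETE: 502, // IdP tokenset without usable id_token claims
};
```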

---

**22890771cf** 🧭 fix: Preserve Resend Files for Subagents (#13030)


---

**93c4ef4ba8** 🧱 refactor: typed CodeEnvRef + kind discriminator + principal-aware sandbox cache (#12960)

* 🧱 refactor: typed CodeEnvRef + kind discriminator + tenant-aware sandbox cache

Final cutover for the LibreChat ↔ codeapi sandbox file identity. Replaces the magic string `${session_id}/${file_id}?entity_id=...` with a typed, discriminated `CodeEnvRef`. Pre-release lockstep deploy with codeapi #1455 and agents #148; no legacy aliases retained.

## Final shape

```ts
type CodeEnvRef =
  | { kind: 'skill'; id: string; storage_session_id: string; file_id: string; version: number }
  | { kind: 'agent'; id: string; storage_session_id: string; file_id: string }
  | { kind: 'user'; id: string; storage_session_id: string; file_id: string };
```

`kind` drives codeapi's sessionKey: `<tenant>:<kind>:<id>[✌️<version>]` for shared kinds, `<tenant>:user:<userId>` for user-private (auth context provides `userId`). `version` is statically required for `kind: 'skill'` and forbidden otherwise via the discriminated union — the constraint holds at compile time on every consumer, not just in codeapi's runtime validator. `id` is sessionKey-meaningful for `'skill'` / `'agent'`; informational only for `'user'` (codeapi resolves user identity from auth context).

## What changed

- `packages/data-provider/src/codeEnvRef.ts` — discriminated union + `CODE_ENV_KINDS` const-tuple keeps the runtime list and the TS union locked together.
- Schemas: `metadata.codeEnvRef` and `SkillFile.codeEnvRef` enums tightened to `['skill', 'agent', 'user']`.
- `primeSkillFiles` writes `kind: 'skill'`, `id: skill._id`, `version: skill.version`. The cache-hit path reads `codeEnvRef` directly. Bumping `skill.version` on edit naturally invalidates the prior cache entry under the new sessionKey.
- `processCodeOutput` writes `kind: 'user'`, `id: req.user.id`. The output bucket is always user-scoped, regardless of which skill the execution invoked. A new regression test pins the asymmetry.
- `primeFiles` reupload preserves `kind`/`id`/`version?` from the existing ref so a skill-cache-miss reupload doesn't silently demote to the user bucket.
- `crud.js` upload functions (`uploadCodeEnvFile` / `batchUploadCodeEnvFiles`) thread `kind`/`id`/`version?` to the multipart form (codeapi #1455 option α). Without these on the wire, codeapi falls back to user bucketing and skill-cache invalidation never fires. Client-side validation mirrors codeapi's validator.
- `Files/process.js` — chat attachments use `kind: 'user'`; agent setup files use `kind: 'agent'`.
- Drops `entity_id` everywhere (struct, schema sub-docs, write paths, upload form fields). Drops `'system'` from the kind enum (no emitter ever existed).

## Test plan

- [x] `cd packages/data-provider && npx jest src/codeEnvRef.spec` — 4 / 4
- [x] `cd packages/data-schemas && npx jest` — 1447 / 1447
- [x] `cd packages/api && npx jest src/agents` — 81 / 81 in skillFiles + handlers + resources
- [x] `cd api && npx jest server/services/Files server/controllers/agents` — 436 / 436
- [x] `cd api && npx jest server/services/Files/Code` — 98 / 98 (incl. new "outputs are user-scoped regardless of which skill the execution invoked" regression and "reupload forwards kind/id/version from existing ref")
- [x] `npx tsc --noEmit -p packages/data-{provider,schemas}/tsconfig.json && npx tsc --noEmit -p packages/api/tsconfig.json` — clean (only pre-existing unrelated dev errors in storage/balance, untouched here)

## Deploy notes

- **24h cache-miss burst** on first deploy. Inputs (skill caches re-prime under the new sessionKey shape) and outputs (any pre-Phase C skill-output cached files become unreadable). Bounded by codeapi's 24h TTL.
- **Lockstep with codeapi #1455 and agents #148.** Either repo can land first since there are no aliases to drain, but the three deploys must overlap within the same maintenance window.
- **`@librechat/agents` bump to `3.1.79-dev.0`** required after agents #148 lands and is published.

## What this enables

Auth bridge work (JWT-based tenant/user identity between LC and codeapi) — codeapi now derives the sessionKey purely from `req.codeApiAuthContext.{ tenantId, userId }`, so the next chapter is replacing the header-asserted user identity with a verified-claim path.

* 🩹 fix: persist execute_code uploads under codeEnvRef metadata key

Codex review P1 (chatgpt-codex-connector). `Files/process.js` was storing the upload result under `metadata.fileIdentifier` even though:

- `uploadCodeEnvFile` now returns `{ storage_session_id, file_id }`, not the legacy magic string.
- The post-cutover schema (`File.metadata.codeEnvRef`) only declares `codeEnvRef` — mongoose strict mode silently strips unknown keys.
- All readers (`primeFiles`, `getCodeFilesByIds`, `categorizeFileForToolResources`, controller filtering) check `metadata.codeEnvRef`.

Net effect of the bug: chat-attached and agent-setup execute_code files would lose their sandbox reference on save, and primeFiles would skip them on subsequent code-execution turns — the file blob would still be available locally but never re-mounted in the sandbox.

Fix: construct the full `CodeEnvRef` (`{ kind, id, storage_session_id, file_id }`) at the write site and persist it under `metadata.codeEnvRef`. `BaseClient`'s "is this a code-env file" presence check accepts the new shape alongside the legacy `fileIdentifier` for back-compat with any pre-cutover records still in the database. Mirrors the same change in `processAttachments.spec.ts` (which re-implements the BaseClient logic for testability).

New regression tests in `process.spec.js` cover three cases:

- chat attachments (`messageAttachment=true`) → `kind: 'user'`
- agent setup (`messageAttachment=false`) → `kind: 'agent'`
- legacy `fileIdentifier` key is NOT persisted (would be schema-stripped)

* 🩹 fix: read storage_session_id on primed file refs (Codex P1)

Codex review (chatgpt-codex-connector). After Phase B's per-file `session_id` → `storage_session_id` rename, `primeFiles` emits the new field — but `seedCodeFilesIntoSessions` was still reading `files[0].session_id` for the representative session and `f.session_id` for the dedupe key. In runs with only primed attachments (no skill seed), `representativeSessionId` was `undefined`, the function returned the unchanged map, and `seedCodeFilesIntoSessions` silently dropped the entire batch. The first `execute_code` call then started without `_injected_files` and the agent couldn't see prior-turn artifacts.

Fix:

- `codeFilesSession.ts`: read `f.storage_session_id` for both the dedupe key and the representative session id. JSDoc updated to match the new field name.
- `callbacks.js`: the two output-file persistence paths read `file.session_id` to pass to `processCodeOutput` — switched to `file.storage_session_id`. The original comment explicitly says this should be the STORAGE session, which is exactly the field Phase B renamed.
- `codeFilesSession.spec.ts`: the fixture builder uses `storage_session_id` and `kind: 'user'` to match the post-cutover `CodeEnvFile` shape.

Lockstep coordination: this matches the post-bump shape of `@librechat/agents` 3.1.79+. CI tsc errors against the currently-pinned 3.1.78 are expected and resolve when the dep bumps in this PR before merge.

* 📦 chore: Bump `@librechat/agents` to version 3.1.80-dev.0 in package-lock and package.json files

* 🪪 fix: thread kind/id/version through codeapi /download URLs (Phase C α)

Symmetric fix for the upload-side wire change in 537725a. Codeapi's `sessionAuth` middleware now requires `kind`/`id`/`version?` on every download/freshness URL — without them it 400s with "kind must be one of: skill, agent, user" before serving the file.

Three sites construct codeapi-side URLs that go through `sessionAuth`:

- `processCodeOutput` (`Files/Code/process.js`): `/download/<sess>/<id>` for freshly-generated sandbox outputs. Always `kind: 'user'` + `id: req.user.id` — code-output files are always user-private, regardless of which skill the run invoked.
- `getSessionInfo` (`Files/Code/process.js`): `/sessions/<sess>/objects/<id>` for the 23h freshness check. Pulls kind/id/version straight off the `codeEnvRef` already in scope — skill files stay skill-bucketed, user files stay user-bucketed.
- the `/code/download/:session_id/:fileId` LC route (`routes/files/files.js`): proxies to codeapi for manual downloads. Only code-output files travel this route, so `kind: 'user'` + `id: req.user.id`.

The `getCodeOutputDownloadStream` helper in `crud.js` now takes an `identity` param, validated by a `buildCodeEnvDownloadQuery` helper that mirrors `appendCodeEnvFileIdentity`'s shape rules: kind required from the closed `{skill, agent, user}` set, version required for 'skill' and forbidden otherwise. Bad callers fail fast on the client instead of round-tripping a 400.

Also cleans up two log-noise sources reported alongside the 400:

- `logAxiosError` in `packages/api/src/utils/axios.ts` was dumping `error.response.data` raw. With `responseType: 'arraybuffer'` that's a `Buffer` (~4 chars per byte after JSON-serialization); with `responseType: 'stream'` it's a `Readable` whose internal state serializes the entire ring buffer + socket. The new `renderResponseData` decodes small buffers as UTF-8 (truncated past 2KB) and stubs streams as `'[stream]'`. Diagnostics stay useful; log lines stop being megabytes.
- the `/code/download` route's catch was a bare `logger.error('...', error)`, bypassing the redactor. Switched to `logAxiosError` so it benefits from the same buffer/stream handling.

Tests updated to match the new contract:

- crud.spec: `getCodeOutputDownloadStream` fixtures pass `userIdentity`; new cases cover skill identity (with version), bad-kind rejection, and skill-without-version rejection.
- process.spec: the `getSessionInfo` test passes a full `codeEnvRef` object.

* ♻️ refactor: extract codeEnv identity helpers into packages/api

Per the project convention that new backend code lives in TypeScript under `packages/api`, moves `appendCodeEnvFileIdentity` and `buildCodeEnvDownloadQuery` from `api/server/services/Files/Code/crud.js` into a new `packages/api/src/files/code/identity.ts` module. Both helpers are pure validators that mirror codeapi's `parseUploadSessionKeyInput` server-side rules (closed kind set, `version` required for `'skill'` and forbidden otherwise) — they deserve TS support and a dedicated spec rather than living as JSDoc-typed helpers in the legacy `/api` workspace.

The new module:

- Exports a `CodeEnvIdentity` interface using the `librechat-data-provider` `CodeEnvKind` discriminated union.
- Adds 13 unit tests in `identity.spec.ts` covering the validation matrix (skill+version, agent, user, and every rejection path) plus URL encoding for the download query.
- Is re-exported from `packages/api/src/files/code/index.ts` alongside `classify`, `extract`, and `form`.

Consumer updates:

- `api/server/services/Files/Code/crud.js`: drops the local helpers and imports them from `@librechat/api`. Net -64 lines.
- `api/server/services/Files/Code/process.js`: same.
- Test mocks for `@librechat/api` in three spec files now stub the helpers' validation behavior locally rather than pulling them through `requireActual` (which would drag in provider-config init-time side effects). The package's `exports` field only surfaces the root barrel, so leaf imports aren't reachable from the legacy `/api` test setup.

No runtime behavior change. Identity validation rules and emitted form/query shapes are byte-for-byte identical pre/post.

* 🪪 fix: emit resource_id alongside id on _injected_files (skill 403 fix)

Companion to the codeapi #1455 fix and agents 3.1.80-dev.1 — the wire shape for shared-kind files now requires a `resource_id` distinct from the storage `id`. Without this LC change, codeapi's sessionKey re-derivation on every shared-kind /exec rejects with 403 session_key_mismatch:

```
cached:  legacy:skill:69dcf561...✌️59 (signed at upload, skill _id)
derived: legacy:skill:ysPwEURuPk-...✌️59 (storage nanoid)
```

Emit sites updated:

- `primeInvokedSkills` cache-hit path: `resource_id: ref.id` (the persisted skill `_id` from `codeEnvRef.id`); `id: ref.file_id` unchanged (storage uuid).
- `primeInvokedSkills` fresh-upload path: `resource_id: skill._id.toString()` on every primed file (the `allPrimedFiles` builder type now carries the field).
- `processCodeOutput`'s `pushFile` (Code/process.js): `resource_id: ref.id` — for `kind: 'user'` this is informational (codeapi derives the sessionKey from auth context) but emitted for shape uniformity with shared kinds.

Bumps `@librechat/agents` to `^3.1.80-dev.1` (the version that ships the matching `CodeEnvFile.resource_id` field).

## Test plan

- [x] `cd packages/api && npx jest src/agents` — 67 / 67 pass (skillFiles fixtures updated to assert `resource_id` on the emitted CodeSessionContext.files).
- [x] `cd api && npx jest server/services/Files server/controllers/agents` — 445 / 445 pass (process.spec fixtures updated for the reupload + cache-hit emission).
- [x] `npx tsc --noEmit -p packages/api/tsconfig.json` — clean.

* fix(skill-tool-call): carry resource_id through primeSkillFiles → artifact

Codeapi was 400ing every /exec following a `handle_skill` tool call with `resource_id is invalid` (`type: 'undefined'`). Both code paths in `primeSkillFiles` (cache-hit + fresh-upload) returned files without `resource_id`/`kind`/`version`, and the artifact in `handlers.ts` forwarded the stripped shape into `tc.codeSessionContext.files` → `_injected_files`. `primeInvokedSkills` (the NL-detected loader) had already been fixed end-to-end; this commit aligns the tool-invoked path with the same contract: `resource_id` = `skill._id.toString()`, `kind: 'skill'`, `version` = the skill's monotonic counter.

Tests added to `skillFiles.spec.ts` lock the contract on `primeSkillFiles` directly so future refactors can't silently drop the resource identity again.

* fix(handlers.spec): align session_id → storage_session_id rename + kind discriminator

Pre-existing TS errors against the post-rename `CodeEnvFile` shape: the test file still used `session_id` on per-file objects (renamed to `storage_session_id` in agents Phase B/C) and was missing the `kind` discriminator the discriminated union requires. Both the inputs and the matching `expect.toEqual(...)` mirrors are updated together so the runtime equality check still holds. Lines 723-732 stay as-is — they sit behind `as unknown as ToolCallRequest` and TS already skipped them.

* chore: fix `@librechat/agents`, correct version to 3.1.80-dev.0 in package.json files

* chore: bump `@librechat/agents` to version 3.1.80-dev.1 in package.json and package-lock.json

* chore: bump `@librechat/agents` to version 3.1.80-dev.2

* feat(observability): trace file priming chain from primeCodeFiles to _injected_files

Diagnosing the user-upload "files=[] on first /exec" bug requires seeing where in the LC chain a file ref disappears. Prior to this patch the chain (primeCodeFiles → primedCodeFiles → initialSessions → CodeSessionContext → _injected_files) was opaque end-to-end:

- primeCodeFiles silently dropped files without `metadata.codeEnvRef`
- reuploadFile catches all errors and continues with no signal
- the handlers.ts handoff to codeapi never logged what it was sending

After this patch, a single grep on `[primeCodeFiles]` plus `[code-env:inject]` shows the full per-file path:

```
[primeCodeFiles] in: file_ids=N resourceFiles=M
[primeCodeFiles] file=<id> path=skip reason=no-codeenvref filename=...
[primeCodeFiles] file=<id> path=cache-hit-by-session storage_session_id=...
[primeCodeFiles] file=<id> path=reupload reason=no-uploadtime ...
[primeCodeFiles] file=<id> path=reupload reason=stale ...
[primeCodeFiles] file=<id> path=reupload-success oldSession=... newSession=... newFileId=...
[primeCodeFiles] file=<id> path=reupload-failed session=...
[primeCodeFiles] file=<id> path=fresh-active storage_session_id=...
[primeCodeFiles] out: returned=N skippedNoRef=M reuploadFailures=K
[code-env:inject] tool=<name> files=N missingResourceId=K (debug)
[code-env:inject] M/N files missing resource_id ... (warn)
[code-env:inject] tool=<name> _injected_files=0 ... (warn)
```

The boundary log warns when LC sends zero injected files on a code-execution tool call — that's the user's actual symptom showing up on the LC side instead of having to correlate against codeapi's `Request received { files: [] }`. The tag was chosen as `[code-env:inject]` rather than `[handoff:exec]` to avoid collision with the app-level "handoff" semantic (subagent handoff workflow).

Structural cleanup in primeFiles: replaced the `if (ref) { ... }` nesting with an early `if (!ref) continue` so the per-path instrumentation hooks land at top-level scope instead of indented inside a conditional. Behavior unchanged; pushFile / reuploadFile identical.

Spec fixtures (handlers.spec.ts, codeFilesSession.spec.ts) updated to include `resource_id` on `CodeEnvFile` literals — required by the post-3.1.80-dev.2 type now installed.

## Test plan

- [x] `cd packages/api && npx jest src/agents/handlers.spec.ts src/agents/codeFilesSession.spec.ts src/agents/skillFiles.spec.ts` — 69/69 pass
- [x] `cd api && npx jest server/services/Files/Code/process.spec.js` — 84/84 pass
- [x] `npx tsc --noEmit -p packages/api` — clean
- [x] `npx eslint` on all four touched files — clean

* chore: add CONSOLE_JSON_STRING_LENGTH to .env.example for JSON log string length configuration

* fix(files): align codeapi upload filename with LC's sanitized DB filename

User-attached files for code execution were uploading to codeapi under `file.originalname` (the raw upload filename, which may contain spaces / special chars) while LC's DB record stored the sanitized form (`sanitizeFilename(file.originalname)`, underscores). Codeapi preserves whatever filename the upload sent, so the sandbox saw `/mnt/data/<originalname>` while LC's `primeFiles` toolContext text + `_injected_files.name` referenced `file.filename` (sanitized).

Visible failure: the agent gets a system prompt saying

```
/mnt/data/librechat_code_api_-_active_customer_-_2025-11-05.xlsx
```

…tries that path, hits `FileNotFoundError`, then notices the sandbox's actual `Available files` line says

```
/mnt/data/librechat code api - active customer - 2025-11-05.xlsx
```

…retries with spaces, and succeeds. Wastes a tool call per upload and leaks raw filenames into model context.

Fix: sanitize once and use the sanitized form in both the codeapi upload AND the LC DB record. Sandbox path = LC toolContext text = in-memory ref name. No drift. The reupload path (`Code/process.js` line 867, `filename: file.filename`) already uses the sanitized DB name, so it stays consistent with the fresh-upload path after this change.

## Test plan

- [x] `cd api && npx jest server/services/Files/process` — 32/32 pass
- [x] `npx eslint` on the touched file — clean

* chore: bump `@librechat/agents` to version 3.1.80-dev.3 in package.json and package-lock.json
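
The "version required for `'skill'` and forbidden otherwise" rule enforced at compile time by the discriminated union can also be mirrored at runtime, roughly as the validators described above do (function name illustrative — the real helpers are `appendCodeEnvFileIdentity` / `buildCodeEnvDownloadQuery`):

```typescript
type CodeEnvKind = 'skill' | 'agent' | 'user';

// Runtime mirror of the CodeEnvRef shape rules: closed kind set,
// `version` required for 'skill' and forbidden for 'agent'/'user'.
function validateCodeEnvIdentity(input: {
  kind: string;
  id: string;
  version?: number;
}): boolean {
  const kinds: CodeEnvKind[] = ['skill', 'agent', 'user'];
  if (!kinds.includes(input.kind as CodeEnvKind)) {
    return false; // closed kind set — 'system' et al. rejected
  }
  if (input.kind === 'skill') {
    return typeof input.version === 'number'; // version required for skills
  }
  return input.version === undefined; // …and forbidden otherwise
}
```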
||
|
|
9441563b95
|
🛡️ refactor: Scope allowedAddresses By Port (#13022)
* fix: Scope allowedAddresses by port * test: Fix SSRF agent spec typing |
||
|
|
4238dd4471
|
🪪 fix: Preserve OIDC Logout ID Token Hint (#12999) | ||
|
|
8f92ec012c
|
🧭 fix: Navigate Signed CDN Downloads (#12998)
* fix(files): navigate signed CDN downloads * fix(files): avoid popup target for signed downloads * test(files): restore download URL mock |
||
|
|
1bc2692a15
|
🌥️ feat: Add Optional Region-aware S3/CloudFront Storage Keys (#12987)
* feat(files): add optional region-aware storage keys * test(files): fix region storage CI fixtures * feat(files): finalize inline CloudFront asset namespaces * fix(files): allow wildcard region CloudFront cookies * fix(files): preserve legacy storage key compatibility * fix(files): align CloudFront clear cookie cleanup * fix(files): clear legacy CloudFront cookie scopes * chore(files): clean up storage review nits * fix(files): keep inline namespaces CloudFront-only |
||
|
|
5efbcb8b93
|
🌐 fix: Percent-encode X-File-Metadata header for Unicode filenames (#12983)
* 🌐 fix: Percent-encode X-File-Metadata header for Unicode filenames After #12977 preserved Unicode in filenames, the download route crashes with ERR_INVALID_CHAR because JSON.stringify(file) now contains non-ASCII characters that Node.js rejects in HTTP headers per RFC 7230. Wrap the header value in encodeURIComponent on the server and decodeURIComponent on the client before JSON.parse. * fix: Update file route tests after dev merge --------- Co-authored-by: Danny Avila <danny@librechat.ai> |
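A minimal sketch of the header round-trip described above; the serialized `file` shape is illustrative, but the encode/decode pairing follows the commit message directly:

```typescript
// HTTP header values must be ASCII (Node rejects non-ASCII chars with
// ERR_INVALID_CHAR per RFC 7230), so the JSON payload is percent-encoded
// on the server and decoded on the client before JSON.parse.
function encodeFileMetadataHeader(file: { filename: string }): string {
  return encodeURIComponent(JSON.stringify(file));
}

function decodeFileMetadataHeader(header: string): { filename: string } {
  return JSON.parse(decodeURIComponent(header));
}
```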
||
|
|
5c338a4642
|
🛂 fix: Harden Agent File Preview Access (#12981)
* fix: harden agent file access * style: format agent file query * fix: prune agent file refs on alternate writes * test: fix agent pruning specs |
||
|
|
9c81792d25
|
🔐 feat: Add Signed CloudFront File Downloads (#12970)
* feat: add signed CloudFront downloads * fix: preserve local IdP avatar paths * fix: address signed download review findings * fix: harden CloudFront cookie scope validation * fix: preserve URL save API compatibility * fix: store CDN SSO avatars under shared prefix * fix: Harden CloudFront tenant file access * fix: Preserve CloudFront download compatibility * fix: Address CloudFront review follow-ups * fix: Preserve file URL fallback user paths * fix: Address download review hardening * fix: Use file owner for S3 RAG cleanup * fix: Address final download review nits * fix: Clear stale avatar CloudFront cookies * fix: Align download filename helpers with dev * fix: Address final CloudFront review follow-ups * fix: Stream S3 URL uploads * fix: Set S3 stream upload length * fix: Preserve download metadata filepath * fix: Avoid remote content length for stream uploads * fix: Use bounded multipart URL uploads * fix: Harden S3 filename boundaries |
||
|
|
f2de3a219c
|
🌐 fix: Preserve Unicode Filenames (#12977)
* fix: Preserve unicode filenames * fix: Cap unicode filenames by bytes * fix: Preserve clean artifact directories * fix: Disambiguate normalized artifact names |
||
|
|
69395d0667
|
🛰️ fix: Honor Anthropic Vertex Configuration (#12972)
* fix: honor Anthropic Vertex config * chore: format Anthropic Vertex config fix |
||
|
|
6c6c72def7
|
🚀 feat: Decouple File Attachment Persistence from Preview Rendering (#12957)
* 🗂️ feat: add `status` lifecycle to file records for two-phase previews
Schema and model foundation for decoupling the agent's final response
from CPU-heavy office-format HTML extraction.
- `MongoFile.status: 'pending' | 'ready' | 'failed'` (indexed) and
`previewError?: string` mirror the lifecycle: phase-1 emits the file
record at `pending` so the response is unblocked; phase-2 transitions
to `ready` (with text/textFormat) or `failed` (with previewError) in
the background. Absent for legacy records — clients treat that as
`ready` for back-compat.
- Mirror types added to `TFile` in data-provider so frontend cache
consumers see the new fields.
- New `sweepOrphanedPreviews(maxAgeMs)` method on the file model
recovers stale `pending` records left behind by a process restart
mid-extraction; transitions them to `failed` with
`previewError: 'orphaned'`. Cheap because `status` is indexed.
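A sketch of the lifecycle fields and the legacy default, with illustrative TypeScript types rather than the actual mongoose schema:

```typescript
// The preview lifecycle: records written before this change carry no
// `status`, and clients treat that absence as 'ready' for back-compat.
type PreviewStatus = 'pending' | 'ready' | 'failed';

interface FilePreviewFields {
  status?: PreviewStatus; // absent on legacy records
  previewError?: string; // set only when status === 'failed'
}

function effectiveStatus(file: FilePreviewFields): PreviewStatus {
  return file.status ?? 'ready';
}
```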
* ⚡ feat: two-phase code-execution preview flow (unblocks final response)
The agent's final response no longer waits on CPU-heavy office HTML
extraction. Phase-1 (download + storage save + DB record at
`status: 'pending'`) is awaited as before; phase-2 (extract +
`updateFile`) runs in the background with a hard 60s ceiling.
Three flows, all funneling through `processCodeOutput` and updated to
the new `{ file, finalize? }` return shape:
- `callbacks.js` (chat-completions + Open Responses streaming): emit
the phase-1 attachment immediately (carries `status: 'pending'` for
office buckets so the UI shows "preparing preview…"), then
fire-and-forget `finalize()`. If the SSE stream is still open when
phase-2 lands, push an `attachment` update event with the same
`file_id` so the client merges over the placeholder in place.
- `tools.js` direct endpoint: same split — return the phase-1
metadata immediately, run extraction in the background. Client
polls for the resolved record.
`finalize()` wraps the existing 12s per-render timeout in a 60s outer
`withTimeout`. The HTML-or-null contract from #12934 is preserved:
office types that fail extraction transition to `status: 'failed'`
with `previewError: 'parser-error' | 'timeout'` rather than falling
back to plain text (would be an XSS vector).
Promises continue running after the HTTP response closes (Node
doesn't kill them). The boot-time orphan sweep covers the only case
that loses progress — actual process restart mid-extraction.
`primeFiles` annotates the agent's `toolContext` line for prior-turn
files: `(preview not yet generated)` for pending, `(preview
unavailable: <reason>)` for failed. The model can volunteer "you can
still download it" instead of pretending the preview is fine.
`hasOfficeHtmlPath` exported from `@librechat/api` so `processCodeOutput`
can decide whether a file expects a preview at all.
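The split return shape and the outer ceiling can be sketched as follows; `withTimeout`, `emitThenFinalize`, and the parameterized ceiling are illustrative names, with the commit's hard ceiling being 60s:

```typescript
// Two-phase split: the caller awaits only the cheap step (download +
// storage save + DB record at 'pending'), then fire-and-forgets
// `finalize()` under an outer timeout ceiling.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<T>((_, reject) => {
    timer = setTimeout(() => reject(new Error('finalize timeout')), ms);
  });
  const clear = () => clearTimeout(timer);
  return Promise.race([p, timeout]).then(
    (v) => { clear(); return v; },
    (e) => { clear(); throw e; },
  );
}

interface CodeOutputResult {
  file: { file_id: string; status: 'pending' | 'ready' };
  finalize?: () => Promise<{ status: 'ready' | 'failed' }>;
}

async function emitThenFinalize(result: CodeOutputResult, ceilingMs = 60_000): Promise<string> {
  // Immediate emit: the attachment ships now, carrying status 'pending'.
  const emittedFileId = result.file.file_id;
  if (result.finalize) {
    // Deferred render: background extraction; a failure here must never
    // block the final response (errors are logged in the real code).
    void withTimeout(result.finalize(), ceilingMs).catch(() => {});
  }
  return emittedFileId;
}
```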
* 🔍 feat: `GET /api/files/:file_id/preview` endpoint and boot orphan sweep
- New `GET /api/files/:file_id/preview` route returns
`{ status, text?, textFormat?, previewError? }`. The frontend's
`useFilePreview` React Query hook polls this while phase-2 is in
flight, then auto-stops on terminal status. ACL identical to the
download route (reuses `fileAccess` middleware). Defaults `status`
to `'ready'` for legacy records so back-compat is implicit.
`text` only included when `status === 'ready'` and non-null —
preserves the HTML-or-null security contract from #12934.
- `sweepOrphanedPreviews()` invoked on boot in both `server/index.js`
and `server/experimental.js`. Recovers any `pending` records left
behind by a process restart mid-extraction (the only case the
in-process two-phase flow can't handle on its own). Fire-and-forget
so a transient sweep failure doesn't block startup.
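The endpoint's response shaping can be sketched as a pure function; field names follow the commit message, and the HTML-or-null gate is the `status === 'ready' && text != null` check:

```typescript
// Response shaping for GET /api/files/:file_id/preview (illustrative):
// `text` is only included when the record is 'ready' and non-null,
// and a missing status defaults to 'ready' for legacy records.
interface PreviewRecord {
  status?: 'pending' | 'ready' | 'failed';
  text?: string | null;
  textFormat?: string;
  previewError?: string;
}

function previewResponse(record: PreviewRecord): PreviewRecord {
  const status = record.status ?? 'ready';
  return {
    status,
    ...(status === 'ready' && record.text != null
      ? { text: record.text, textFormat: record.textFormat }
      : {}),
    ...(status === 'failed' && record.previewError
      ? { previewError: record.previewError }
      : {}),
  };
}
```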
* 🖥️ feat: frontend two-phase preview consumer (polling + UI states)
Wires the React side to the new lifecycle so the user sees what's
happening with their file while phase-2 extraction runs in the
background and after the response stream closes.
- `useAttachmentHandler` upserts by `file_id` (was append-only) so
the phase-2 SSE update event merges over the pending placeholder
in place. Lightweight attachments without a `file_id`
(web_search / file_search citations) keep the legacy append path.
- `useFilePreview(file_id)` React Query hook with
`refetchInterval: (data) => data?.status === 'pending' ? 2500 : false`
so polling auto-stops on the first terminal response without the
caller having to flip `enabled`.
- `useAttachmentPreviewSync(attachment)` bridges polled data into
`messageAttachmentsMap`. Polling enabled iff
`status === 'pending' && isAnySubmitting` — per the design ask:
active polling while the LLM is still generating, then quiet.
Process-restart and post-stream cases are covered by polling on
the next interaction.
- `Attachment.tsx` renders a small `PreviewStatusIndicator` (spinner +
"Preparing preview…" for pending, alert icon + "Preview unavailable"
for failed) inside `FileAttachment`. Download button stays fully
functional in both states. Two new English locale keys.
- Data-provider scaffolding: `TFilePreview` type, `endpoints.filePreview`,
`dataService.getFilePreview`, `QueryKeys.filePreview`.
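The polling gate can be isolated as a plain function (this is the function-form `refetchInterval` quoted above, which React Query calls with the latest data):

```typescript
// Function-form refetchInterval: return an interval in ms to keep
// polling, or `false` to stop. Polling therefore auto-stops on the
// first terminal ('ready' | 'failed' | absent) response.
type PreviewData = { status?: 'pending' | 'ready' | 'failed' } | undefined;

function previewRefetchInterval(data: PreviewData): number | false {
  return data?.status === 'pending' ? 2500 : false;
}
```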
* 🧪 fix: stub `useAttachmentPreviewSync` in pre-existing Attachment test mocks
The new `useAttachmentPreviewSync` hook is called unconditionally inside
`FileAttachment` (added in the prior commit). Two pre-existing test
files mock `~/hooks` to provide `useLocalize` only — the un-mocked
preview hook reference resolved to undefined and crashed render with
`(0 , _hooks.useAttachmentPreviewSync) is not a function` on the
Ubuntu/Windows CI runners.
Fix is local to the test mocks: add a no-op stub that returns
`{ status: 'ready' }` so the component renders the legacy chip path.
The two-phase preview behavior itself has its own dedicated suites
(`useAttachmentHandler.spec.tsx`, `useAttachmentPreviewSync.spec.tsx`).
* 🐛 fix: route phase-2 attachment update to current-run messageId
Codex P1 review on PR #12957. `processCodeOutput` intentionally
preserves the original DB `messageId` across cross-turn filename reuse
so `getCodeGeneratedFiles` can still trace a file back to the
assistant message that originally produced it. The phase-1 SSE emit
already routes by the current run's messageId — `processCodeOutput`
runtime-overlays it via `Object.assign(file, { messageId, toolCallId })`
and the callback writes `result.file` directly.
Phase-2 was passing the raw `updateFile` return through
`attachmentFromFileMetadata`, which read `messageId` straight off the
DB record. On a turn-N run that re-emitted a filename from turn-1
(e.g. agent writes `output.csv` again), the phase-2 SSE update
routed to `turn-1-msg` instead of `turn-N-msg`. Frontend's
`useAttachmentHandler` upserts under the wrong messageAttachmentsMap
slot — turn-N's pending chip stays stuck at "preparing preview…"
while turn-1's already-resolved attachment gets re-merged.
Fix: thread `runtimeMessageId` through `attachmentFromFileMetadata`
and pass `metadata.run_id` from the phase-2 emit site. Mirrors how
phase-1 sources its messageId. Tests cover the cross-turn reuse case
plus the writableEnded / null-finalize / no-finalize paths to lock
in the broader phase-2 emit contract.
* 🛠️ refactor: address codex audit findings (wire-shape parity, DRY, defensive catch)
Comprehensive audit on PR #12957. Resolves all valid findings:
- **MAJOR #1 — Wire-shape parity**: phase-1 ships the full `fileMetadata`
record over SSE; phase-2 was using a tight `attachmentFromFileMetadata`
projection. Drop the projection and have phase-2 spread `{...updated,
messageId, toolCallId}` so both events match the long-standing
legacy phase-1 shape clients depend on.
- **MAJOR #2 — DRY**: extract `runPhase2Finalize({ finalize, fileId,
onResolved })` into `process.js` (alongside `processCodeOutput` whose
contract it pairs with). Both `callbacks.js` paths and `tools.js`
now flow through it. Single catch path eliminates divergence
surface — the bug fixed in 01704d4f0 (cross-turn messageId routing)
was a symptom of this duplication risk.
- **MINOR #3 — JSDoc accuracy**: `finalizePreview`'s buffer is bounded
by `fileSizeLimit`, not the 1MB extractor cap. Updated and added a
note about peak heap from queued buffers.
- **MINOR #4 — Defensive catch**: `runPhase2Finalize`'s catch attempts
a best-effort `updateFile({ status: 'failed', previewError:
'unexpected' })` for the file_id, so a programming bug in
`finalizePreview` doesn't leave the record stuck `'pending'` until
the next boot-time orphan sweep.
- **NIT #6 — Stale PR refs**: 12952 → 12957 in 3 places.
- **NIT #7 — Schema bound**: `previewError` capped at `maxlength: 200`
to prevent a future codepath from accidentally persisting a stack
trace.
Skipped per audit verdict (non-blocking):
- #5 (memory pressure): documented in JSDoc; impl change was reviewer's
"consider", not actionable.
- #8 (double DB query per poll): low cost, indexed by_id, polling is
gated narrow.
- #9 (TAttachment cast): the union type is intentional; the casts are
safe widening, refactoring TAttachment is invasive and out of scope.
Tests: 11 new (7 `runPhase2Finalize` unit tests covering happy path,
null-finalize, throws, double-fail, no-fileId, no-onResolved; +4
wire-shape parity assertions in the existing cross-turn test). 328
backend tests pass; 528 frontend tests pass; lint and typecheck clean.
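A sketch of the shared runner and its defensive catch, with a stubbed `updateFile` signature standing in for the real model method:

```typescript
// runPhase2Finalize (illustrative): if `finalize` throws for any
// reason, a best-effort status write marks the record 'failed' so it
// is not stuck at 'pending' until the next boot-time orphan sweep.
interface FinalizeDeps {
  finalize?: (() => Promise<Record<string, unknown> | null>) | null;
  fileId: string;
  updateFile: (q: Record<string, unknown>) => Promise<void>;
  onResolved?: (updated: Record<string, unknown>) => void;
}

async function runPhase2Finalize({ finalize, fileId, updateFile, onResolved }: FinalizeDeps): Promise<void> {
  if (typeof finalize !== 'function') return;
  try {
    const updated = await finalize();
    if (updated && onResolved) onResolved(updated);
  } catch {
    // Defensive catch: best-effort downgrade; swallow secondary failures.
    await updateFile({ file_id: fileId, status: 'failed', previewError: 'unexpected' }).catch(() => {});
  }
}
```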
* 🛡️ refactor: address codex P1+P2 + rename to drop phase-1/2 jargon
Codex round 2 review on PR #12957 caught two race conditions and one
recovery gap, all triggered by cross-turn filename reuse (`claimCodeFile`
intentionally returns the same `file_id` for the same
`(filename, conversationId)` across turns). Plus naming cleanup the
user requested — internal "phase 1 / phase 2" vocabulary leaks across
sprints, replace it everywhere with terms describing what's actually
happening.
P1 — stale render overwrites newer revision (process.js)
Two turns reusing `output.csv` share a `file_id`. If turn-1's
background render resolves AFTER turn-2's persist step, the
unconditional `updateFile` writes turn-1's stale text/status over
turn-2's pending placeholder. Fix: stamp a fresh `previewRevision`
UUID on every emit, thread it through `finalizePreview`, and make
the commit conditional via a new optional `extraFilter` argument
on `updateFile` (`{ previewRevision: <expected> }`). The defensive
`updateFile` in `runPreviewFinalize`'s catch uses the same guard
so a programming error from an older render also can't override a
newer turn.
P1 — stale React Query cache on pending remount (queries.ts)
Same root cause from the frontend side. Cache key
`[QueryKeys.filePreview, file_id]` may hold a prior turn's `'ready'`
payload; with `refetchOnMount: false` and the polling gate on
`pending`, polling never starts for the new placeholder. Fix:
`useAttachmentHandler` invalidates that query whenever an attachment
with a `file_id` arrives. Both initial-emit and update events
trigger invalidation — uniform gate.
P2 — quick-restart orphans skipped by boot sweep (files.js)
Boot `sweepOrphanedPreviews` uses a 5-min cutoff for multi-instance
safety. A crash + restart inside the cutoff leaves `pending` records
that never get touched again. Fix: lazy sweep inside the preview
endpoint — if a polled record is `pending` and `updatedAt` is older
than 5 min, mark it `failed:orphaned` on the spot before responding.
Conditional on the same `updatedAt` we observed so a concurrent
legitimate update wins. Cheap, bounded by user activity.
Naming cleanup
- `runPhase2Finalize` → `runPreviewFinalize`
- `PHASE_TWO_TIMEOUT_MS` → `PREVIEW_FINALIZE_TIMEOUT_MS`
- All `phase-1` / `phase-2` / `two-phase` prose replaced with
"the immediate emit", "the deferred render", "the persist step",
"the deferred preview", etc. Skill-feature `phase 1/2` references
(different feature) left alone.
Tests: 10 new (4 lazy-sweep × preview endpoint, 3 cache-invalidation ×
useAttachmentHandler, 3 extraFilter × updateFile data-schemas).
Backend 332/332, frontend 531/531, data-schemas 37/37, lint clean.
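The revision-guarded commit can be sketched against an in-memory store standing in for `updateFile` with its new `extraFilter` argument:

```typescript
// Conditional update: every emit stamps a fresh `previewRevision`, and
// the deferred render's write only lands if the stored record still
// carries the revision the render was started with. A stale render
// from an earlier turn therefore cannot overwrite a newer placeholder.
interface StoredFile {
  previewRevision: string;
  status: 'pending' | 'ready' | 'failed';
  text?: string;
}

function commitRender(
  store: Map<string, StoredFile>,
  fileId: string,
  expectedRevision: string,
  patch: Partial<StoredFile>,
): boolean {
  const current = store.get(fileId);
  // Mirrors `extraFilter: { previewRevision: <expected> }` on updateFile.
  if (!current || current.previewRevision !== expectedRevision) return false;
  store.set(fileId, { ...current, ...patch });
  return true;
}
```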
* 🛠️ refactor: address comprehensive review (round 3) — stale-cache MAJOR + 3 minors
Comprehensive review on PR #12957 caught a P1 follow-on bug from the
prior `invalidateQueries` fix, plus 3 maintainability findings.
MAJOR: stale React Query cache not actually fixed by `invalidateQueries`
The previous fix called `invalidateQueries` to flush stale cached
preview data on cross-turn filename reuse. But `useFilePreview` had
`refetchOnMount: false`, which made the new observer read the
stale-marked 'ready' data without refetching. The polling
`refetchInterval` then evaluated against stale 'ready' → returned
`false` → polling never started → user stuck on stale content.
Fix (belt-and-suspenders):
a) `useAttachmentHandler` switched to `removeQueries` — drops the
cache entry entirely so the next mount has nothing to read and
must fetch.
b) `useFilePreview` no longer sets `refetchOnMount: false`, so the
React Query default (`true`) kicks in — second line of defense
if any future codepath observes stale data before the handler
has a chance to evict.
MINOR: `finalizePreview` JSDoc missing `previewRevision` param
Added with explanation of the conditional update guard.
MINOR: asymmetric stream-writable guard between SSE protocols
Chat-completions delegated the gate to `writeAttachmentUpdate`;
Open Responses inlined `!res.writableEnded && res.headersSent`.
Extracted `isStreamWritable(res, streamId)` predicate; both paths
+ `writeAttachmentUpdate` now share the single source of truth.
NIT: `(data as Partial<TFile>).file_id` cast repeated 4 times
Extracted to a `fileId` local at the top of the handler.
Tests: existing 9 invalidate-tests rewritten as remove-tests; +1 new
lock-in test asserts removeQueries is called and invalidateQueries
is NOT (regression guard against round-3 finding). 332 backend pass,
532 frontend pass, lint clean.
Skipped findings (deferred / acceptable):
- MINOR: post-submission pending state has no auto-recovery — the
`isAnySubmitting` polling gate was the user's explicit design;
LLM context surfaces failed/pending so the model can volunteer.
Worth a follow-up if real users hit it.
- NIT: double DB query per preview poll — reviewer marked acceptable;
changing `fileAccess` middleware is out of scope.
* 🛡️ test: address comprehensive review NITs (initial-emit guard + isStreamWritable coverage)
NIT — chat-completions initial emit skips writableEnded check
The Open Responses initial emit was switched to use the new
`isStreamWritable` predicate in the round-3 commit, but the
chat-completions initial emit kept the older narrower check
(`streamId || res.headersSent`). On a client disconnect mid-stream
(`writableEnded === true`) it would still hit `res.write` and
raise `ERR_STREAM_WRITE_AFTER_END` — caught by the outer IIFE
catch but logged as noise. Switch this site to `isStreamWritable`
too so both initial-emit paths share the same gate as the
deferred update emits.
NIT — `isStreamWritable` not directly unit-tested
The predicate was only covered indirectly via the deferred-preview
SSE tests (writableEnded skip, headersSent check). Export from
`callbacks.js` and add 5 parametric tests pinning down each branch
(streamId truthy, res null, !headersSent, writableEnded, happy
path) so a future condition addition can't silently regress.
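A sketch of the predicate matching the five branches listed above; the real condition may differ in detail, so treat the streamId short-circuit as an assumption (the Open Responses stream manager owns the write in that case):

```typescript
// isStreamWritable (illustrative): a truthy streamId is considered
// writable; otherwise the raw response must exist, have sent headers,
// and not be ended.
interface ResLike {
  headersSent: boolean;
  writableEnded: boolean;
}

function isStreamWritable(res: ResLike | null, streamId?: string): boolean {
  if (streamId) return true;
  return res != null && res.headersSent && !res.writableEnded;
}
```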
* 🐛 fix: stuck "Preparing preview…" + inline the chip subtitle
Two related fixes for a stuck-spinner bug a user reported in manual
testing of PR #12957.
**Stuck spinner (the bug)**
The deferred preview render can complete a few seconds AFTER the SSE
stream closes (typical case: PPTX render finishes ~3s after the LLM
emits FINAL). When that happens, the SSE update is silently dropped
(`isStreamWritable` returns false on a closed stream) and polling is
the only recovery path.
The earlier polling gate was `status === 'pending' && isAnySubmitting`,
which mirrored the original design intent ("only query while the LLM
is still generating"). But `isAnySubmitting` flips false the moment
the model emits FINAL — milliseconds before the deferred render
commits. Polling never runs, the chip stays "Preparing preview…"
forever even though the DB has `status: 'ready'` with valid HTML.
Drop the `isAnySubmitting` part of the gate. `useFilePreview`'s
`refetchInterval` is already a function-form that returns `false` on
the first terminal response, so polling auto-stops within one tick of
resolution. The server-side render ceiling (60s) plus the lazy sweep
in the preview endpoint cap the worst case to ~24 polls per pending
attachment. Polling itself never blocks UX — the gate's purpose was
"don't waste cycles", and capping by terminal status is the correct
expression of that.
**Inline the chip subtitle (the visual)**
The previous design rendered "Preparing preview…" as a loose-feeling
spinner+text BELOW the file chip. The chip itself looked done while a
floating annotation said it wasn't.
`FileContainer` gains an optional `subtitle?: ReactNode` prop that
overrides the default file-type label. `Attachment.tsx` passes a
`PreviewStatusSubtitle` (spinner + "Preparing preview…" / alert +
"Preview unavailable") into that slot when the file's preview is
pending or failed. The chip footprint stays identical to its `'ready'`
form — just the second row swaps from "PowerPoint Presentation" to
the status indicator. No floating element, no layout shift.
Tests: regression test pinning down "polling stays enabled after the
LLM finishes" so a future revert can't reintroduce the stuck-spinner
bug. Existing FileContainer tests pass unchanged (subtitle override
is opt-in). 522 frontend tests pass; lint clean.
* 🐛 fix: deferred-preview survives reload + matches artifact card chrome
Fixes the remaining stuck-pending case after the polling gate fix: on
a reloaded conversation, message.attachments come from the DB frozen at
the immediate-persist `status: 'pending'`, but `messageAttachmentsMap`
is empty because no SSE handler ever fired for that messageId. Polling
now INSERTS a new live entry when no record matches the file_id, and
`useAttachments` merges live entries onto DB entries by file_id so the
resolved text/textFormat reach `artifactTypeForAttachment` and the
chip routes through the proper PanelArtifact card.
Also replaces the small file chip used during the pending state with
a PreviewPlaceholderCard that mirrors ToolArtifactCard chrome, so the
transition to the resolved PanelArtifact no longer reshapes the UI.
* ✨ feat: auto-open panel when deferred preview resolves pending→ready
The legacy auto-open path is gated only on `isSubmitting`, so an
office-file preview that resolves *after* the SSE stream closes would
render in place but never auto-open the panel — even though that's
exactly the moment the result becomes meaningful to the user. Adds a
per-file_id one-shot signal that `useAttachmentPreviewSync` flips on
the pending→ready edge; `ToolArtifactCard` consumes it on mount and
auto-opens regardless of submission state. The signal is *only* set on
the actual transition (history loads of pre-resolved files don't
trigger it) and is consumed once (panel close + reopen on the same
card stays user-controlled).
* 🐛 fix: drop placeholder Terminal overlay + scope auto-open to fresh resolutions
Two fixes for issues spotted in manual testing of the deferred-preview
auto-open feature:
1. PreviewPlaceholderCard was passing `file={attachment}` to FilePreview,
which triggered SourceIcon's Terminal overlay (`metadata.fileIdentifier`
is set on every code-execution file). The artifact card itself doesn't
show that overlay; the placeholder shouldn't either, so the
pending→resolved transition is visually seamless.
2. The `previewJustResolved` flag flipped on every pending→ready
transition observed by the polling hook — including stale-pending
DB records that resolve via the first poll on a *history load*.
Conversations whose immediate-persist snapshot left attachments at
`status: 'pending'` would yank the panel open every revisit.
Adds `mountedDuringStreamRef` to the hook (mirroring ToolArtifactCard)
so the flag fires only when the hook itself was mounted during an
active turn — preserving the pre-PR contract that the panel only
auto-opens for results the user is actively waiting on, never for
history.
* 🐛 fix: don't downgrade preview to failed when only the SSE emit throws
Codex P2 finding on PR #12957: the original chain placed `.catch` after
`.then(onResolved)`, so a throw inside `onResolved` (transport-side
errors — SSE write race after stream close, an emitter listener
throwing) would propagate into the finalize catch and persist
`status: 'failed'` / `previewError: 'unexpected'`. That surfaced
"preview unavailable" in the UI for a perfectly valid file, and
degraded next-turn LLM context to reflect a non-existent failure.
Wraps `onResolved` in its own try/catch so emit errors are logged but
do not affect the file's persisted status. Extraction success and
emit success are now independent: if extraction succeeds and
`finalizePreview` writes the terminal status, the polling layer / next
page load surfaces the resolved preview even if this turn's SSE emit
didn't land.
* 🛡️ fix: run boot-time orphan sweep under system tenant context
Codex P2 finding on PR #12957: `File` is tenant-isolated, so under
`TENANT_ISOLATION_STRICT=true` the boot-time `sweepOrphanedPreviews`
threw `[TenantIsolation] Query attempted without tenant context in
strict mode` and the recovery path silently failed every restart.
Stale `status: 'pending'` records would be stuck until a user happened
to poll the preview endpoint and trigger the lazy sweep — which only
covers the file the user is currently looking at, not the bulk
candidate set the boot sweep is designed to recover.
Wraps the sweep in `runAsSystem(...)` in both boot paths
(`api/server/index.js` and `api/server/experimental.js`) and pins the
contract with regression tests in `file.spec.ts` — one test asserts
the bare call throws under strict mode, the other asserts the
`runAsSystem`-wrapped call succeeds.
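A minimal model of the strict-mode guard and `runAsSystem`, using `AsyncLocalStorage` as an illustrative stand-in for LibreChat's tenant-isolation plugin (the real implementation may differ):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// In strict mode, a query without tenant context throws; runAsSystem
// seeds a synthetic system context for the duration of the callback so
// boot-time maintenance work (like the orphan sweep) can run.
const tenantContext = new AsyncLocalStorage<{ tenantId: string }>();

function assertTenantContext(): string {
  const ctx = tenantContext.getStore();
  if (!ctx) {
    throw new Error('[TenantIsolation] Query attempted without tenant context in strict mode');
  }
  return ctx.tenantId;
}

function runAsSystem<T>(fn: () => T): T {
  return tenantContext.run({ tenantId: 'system' }, fn);
}
```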
* 🧹 chore: trim verbose comments from previous commit
* 🧹 chore: address review findings (dead branch, lazy-sweep cutoff, stale JSDoc)
- finalizePreview: drop unreachable !isOfficeBucket branch (caller
already gates on hasOfficeHtmlPath, so this path is always office)
- preview endpoint: drop lazy-sweep cutoff from 5min to 2min — anything
past the 60s render ceiling is definitively orphaned, and per-request
sweep can be tighter than the per-instance boot sweep
- strip stale `isSubmitting` references from JSDoc in 3 spots (the
client-side gate was removed in
|
||
|
|
f839a447e1
|
🧬 fix: Subagent MCP requestBody Propagation (bump @librechat/agents to 3.1.78 + cleanup) (#12959)
* 📦 chore: bump `@librechat/agents` to v3.1.78 v3.1.78 ships [danny-avila/agents#147](https://github.com/danny-avila/agents/pull/147), which makes `SubagentExecutor` inherit the parent invocation's `configurable` (with `thread_id`/`run_id`/`parent_run_id` scrubbed) into the child workflow. Subagent tool dispatches through the parent's `ON_TOOL_EXECUTE` handler now arrive with parent's `requestBody`, `user`, `userMCPAuthMap`, etc. — so `{{LIBRECHAT_BODY_*}}` placeholder substitution and per-user MCP connection lookup work for subagent tool calls the same way they do for the parent agent. Note: `package-lock.json` will need an `npm install` refresh once v3.1.78 lands on the registry. The user/user_id injection added in PR #12950 stays as defense-in-depth. * 🗑️ refactor: drop redundant user/user_id injection from `loadToolsForExecution` `@librechat/agents@3.1.78` (via danny-avila/agents#147) makes `SubagentExecutor` forward the parent's `configurable` verbatim into the child workflow. Subagent `ON_TOOL_EXECUTE` dispatches now arrive with parent's `user` / `user_id` already in `data.configurable` — making the host-side injection added in #12950 a no-op. Removes: - The conditional `user: createSafeUser(req.user); user_id: req.user.id` block in `loadToolsForExecution` (req.user.id-guarded so the `'api-user'` fallback in Responses/OpenAI controllers is preserved). - The unused `createSafeUser` import. - The 4 unit tests covering the now-deleted behavior. The merge in `handlers.ts` (`{ ...configurable, ...toolConfigurable }`) still produces a `mergedConfigurable` with the right user identity for both parent and subagent paths — the values just come from `configurable` (forwarded by the SDK) rather than `toolConfigurable`. Other fixes from #12950 stay (IUser.id narrowing, the env.ts / google/initialize.ts / remoteAgentAuth.ts TS-warning fixes) — they were independent of the subagent identity propagation issue. 
* 📦 chore: update `@librechat/agents` to v3.1.78 This update reflects the transition from the development version `3.1.78-dev.0` to the stable release `3.1.78`. The package-lock.json has been refreshed to ensure consistency with the new version, including updated integrity checks and resolved URLs for the package. This change is part of ongoing improvements to enhance the functionality and stability of the agents module. |
||
|
|
9efd61d57d
|
🔐 fix: Forward per-file entity_id through code-env priming (#12958)
* 🔐 fix: Forward per-file `entity_id` through code-env priming Skill files and persisted code-env files now carry their `entity_id` on the in-memory file refs that seed `Graph.sessions`. Without this, an execute call that mixes a skill file (uploaded with `entity_id=skillId`) and a user attachment (uploaded with no `entity_id`) collapses onto a single request-level entity at the codeapi authorization step and one side 403s. With per-file `entity_id`, codeapi resolves sessionKey per file and both authorize. - `primeSkillFiles` / `primeInvokedSkills`: thread `entity_id` through fresh-upload, cache-hit, and per-skill-batch paths in `packages/api/src/agents/skillFiles.ts`. - `primeFiles` (Code/process.js): parse `entity_id` from the persisted `codeEnvIdentifier` query string once per iteration; forward through `pushFile`, including the reupload path which re-parses the fresh identifier returned by codeapi. - Tests: extend `skillFiles.spec.ts` with two cases — fresh-upload propagation and cached-hot-path parsing. Companion PRs in flight on `@librechat/agents` (forward `entity_id` through `_injected_files`) and codeapi (per-file authorization). All three are wire-back-compat: an absent `entity_id` falls back to the existing request-level resolution. * 🔧 chore: Update dependencies in package-lock.json and package.json - Bump `@librechat/agents` to version `3.1.78-dev.0` across multiple package files. - Upgrade `@langchain/langgraph-checkpoint` to version `1.0.2` and update its peer dependency for `@langchain/core` to `^1.1.44`. - Update `axios` to version `1.16.0` and `follow-redirects` to version `1.16.0`. - Add `@types/diff` as a new dependency at version `7.0.2` and include `diff` at version `9.0.0` in the `@librechat/agents` module. - Introduce optional peer dependency `@anthropic-ai/sandbox-runtime` for `@librechat/agents` with metadata indicating it is optional. 
* 🐛 fix: Make skill code-env cache persistence observable Two changes to surface the skill-bundle re-upload issue without behavioral changes to tenant scoping (root cause to be confirmed via the new warn log): 1. `primeSkillFiles` now awaits `updateSkillFileCodeEnvIds` instead of firing-and-forgetting it. The prior shape could race with the next prime (read-before-write) even when the bulkWrite itself succeeds, producing a silent cache miss. Latency cost: ~10–50ms on first prime; in exchange every subsequent prime can rely on the identifier being persisted by the time it reads. 2. `updateSkillFileCodeEnvIds` now returns `{matchedCount, modifiedCount}` from the underlying bulkWrite. `primeSkillFiles` warn-logs when `modifiedCount < updates.length`, making any silent drop visible — whether the cause is tenant filtering, a `relativePath` mismatch, schema-plugin scoping, or something else. Prior shape returned `Promise<void>` so any zero-modification result was invisible. Tests: - `skill.spec.ts`: real-MongoDB happy path (counts match), no-match case (modifiedCount=0), and empty-input contract. - `skillFiles.spec.ts`: deferred-promise harness proving the call site awaits the persist (prime stays pending until the persist resolves) and forwards partial-write counts. Deliberately narrower than the original draft of this commit, which also bypassed `tenantSafeBulkWrite` for the codeEnvIdentifier write on the speculative diagnosis that tenant filtering was the cause. That change was a behavior shift on tenant scoping without confirmation; reverted pending real-world signal from the new warn log. * 🐛 fix: Justify await for skill code-env persistence under concurrency The await on `updateSkillFileCodeEnvIds` isn't a defensive nicety — it's load-bearing for cache effectiveness under concurrent priming. 
Verified with an out-of-tree harness (`config/test-skill-cache.ts`, not committed) that wires `primeSkillFiles` against a real codeapi stack:

- With fire-and-forget (the prior shape, after this branch's revert): back-to-back primes for the same skill miss the cache. Call N+1 reads SkillFile docs before Call N's write commits, sees no `codeEnvIdentifier`, re-uploads, and fires its own forget that Call N+2 also races. Steady state stays in cache miss for the full burst.
- With await: the prime that does the upload commits its persist before resolving, so the next concurrent prime observes the cache pointer instead of racing the read.

Latency cost is ~10–50ms on the upload prime; subsequent concurrent primes save an entire batch upload. In production, with primes seconds apart, this race is rare; at scale, with many users hitting the same skill in the same second, it's the difference between M and N×M uploads.

Updates the regression test to assert the await contract (deferred persist promise → prime stays pending until the persist resolves). The comment in `skillFiles.ts` is rewritten to document the concurrency rationale rather than the weaker "race-with-next-prime" framing the prior commit used.
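The race can be reproduced with an in-memory model of the two shapes. The `Map` stands in for the SkillFile collection and `upload()` for the codeapi batch upload; all names here are illustrative, not the real API:

```javascript
// Minimal model of the prime/persist race described above.
function makeHarness() {
  const cache = new Map(); // skillId -> codeEnvIdentifier
  let uploads = 0;
  const upload = async (skillId) => {
    uploads += 1;
    return `env-${skillId}-${uploads}`;
  };
  const persist = async (skillId, id) => {
    await new Promise((r) => setTimeout(r, 20)); // simulated bulkWrite latency
    cache.set(skillId, id);
  };

  // Fire-and-forget shape: the prime resolves before the persist commits,
  // so a back-to-back prime reads an empty cache and re-uploads.
  const primeFireAndForget = async (skillId) => {
    if (cache.has(skillId)) return cache.get(skillId);
    const id = await upload(skillId);
    void persist(skillId, id); // not awaited
    return id;
  };

  // Awaited shape: the persist commits before the prime resolves, so the
  // next prime observes the cache pointer instead of racing the read.
  const primeAwaited = async (skillId) => {
    if (cache.has(skillId)) return cache.get(skillId);
    const id = await upload(skillId);
    await persist(skillId, id); // load-bearing await
    return id;
  };

  return { primeFireAndForget, primeAwaited, uploadCount: () => uploads };
}

async function demo() {
  const a = makeHarness();
  await a.primeFireAndForget('skill');
  await a.primeFireAndForget('skill'); // races the unfinished persist → re-upload
  const b = makeHarness();
  await b.primeAwaited('skill');
  await b.primeAwaited('skill'); // cache hit
  return [a.uploadCount(), b.uploadCount()]; // [2, 1]
}
```

Running `demo()` shows two uploads for the fire-and-forget shape versus one for the awaited shape on identical back-to-back primes.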
187ab787da
🌩️ feat: CloudFront CDN File Strategy (#12193)
* 🌩️ feat: CloudFront CDN File Strategy + signed cookies

Squashed from PR #12193:
- feat(storage): add CloudFront CDN file strategy
- feat(auth): add CloudFront signed cookie support

Note: package.json/package-lock.json dependency additions are intentionally omitted from this commit and will be re-added via `npm install` after rebase to avoid lock-file merge conflicts. The two new peer deps that need to be re-installed are:
- @aws-sdk/client-cloudfront@^3.1032.0
- @aws-sdk/cloudfront-signer@^3.1012.0

Also fixes 4 missing destructured names in AuthService.spec.js (getUserById, generateToken, generateRefreshToken, createSession) that were referenced in tests but not imported from the mocked '~/models'.

* 📦 chore: install CloudFront SDK deps for PR #12193

Adds the two AWS CloudFront packages required by the rebased CloudFront CDN strategy:
- @aws-sdk/client-cloudfront
- @aws-sdk/cloudfront-signer

Following the @aws-sdk/client-s3 pattern:
- api/package.json: regular dependency (runtime resolution)
- packages/api/package.json: peerDependency

Generated by `npm install` against the freshly rebased lock file to avoid the merge conflicts that came from the original PR's lock-file edits being made against an older base of dev.

* 🐛 fix: CI failures + review findings on CloudFront PR #12193

CI fixes
- Rename packages/data-provider/src/__tests__/cloudfront-config.test.ts → src/cloudfront-config.spec.ts. Jest's default testMatch picks up __tests__/ directories even inside dist/, so the compiled .d.ts shell was being executed as an empty test suite. Moving to .spec.ts (matching the rest of the package) avoids the dist/ pickup.
- Add cookieExpiry: 1800 to the CloudFront crud.test makeConfig: the schema applies a default, so CloudFrontFullConfig requires it.
Review findings addressed
- #1 (Codex + comprehensive): Normalize the CloudFront domain with a /\/+$/ regex (and the key with a /^\/+/ regex) in buildCloudFrontUrl, matching the cookie code so the resource policy and file URLs stay aligned even when the configured domain has multiple trailing slashes. Added tests.
- #2: Move DEFAULT_BASE_PATH out of s3Config into shared packages/api/src/storage/constants.ts. ImageService no longer imports S3-specific config.
- #3: getCloudFrontConfig() returns Readonly<CloudFrontFullConfig> | null to discourage mutation of the cached signing config.
- #4: Add cross-field refinement tests for cloudfrontConfigSchema (invalidateOnDelete without distributionId, imageSigning="cookies" without cookieDomain).
- #6: Revert unrelated MCP comment re-indentation in librechat.example.yaml.
- #7: Add azure_blob to the strategy-list comment.

Skipped
- #5 (extractKeyFromS3Url with CloudFront URLs): existing deleteFileFromCloudFront tests already cover the path-equivalence assumption; renaming the helper is real refactor work beyond this PR's scope.
- #8, #9 (NIT, low confidence): left for author judgement.

* 🧹 chore: drop dead DEFAULT_BASE_PATH from s3Config test mock

After moving DEFAULT_BASE_PATH to ~/storage/constants, crud.ts no longer reads it from s3Config — so the entry in the s3Config jest mock was misleading dead config. The tests still pass because the unmocked real constants module provides the value.

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
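The normalization in finding #1 can be sketched as follows. The regexes are the ones named in the review fix; the function body is an illustrative stand-in for the real `buildCloudFrontUrl` in the CloudFront strategy:

```javascript
// Illustrative: domain loses trailing slashes, key loses leading slashes,
// so the joined URL always has exactly one separator — keeping file URLs
// aligned with the signed-cookie resource policy.
function buildCloudFrontUrl(domain, key) {
  const normalizedDomain = domain.replace(/\/+$/, '');
  const normalizedKey = key.replace(/^\/+/, '');
  return `${normalizedDomain}/${normalizedKey}`;
}
```

A domain configured with multiple trailing slashes and a key with a leading slash both collapse to the same canonical URL.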
bff9bfea87
🐛 fix: Propagate User Identity to Subagent MCP Tool Calls (#12950)
* 🐛 fix: Propagate User Identity to Subagent MCP Tool Calls

The `@librechat/agents` SDK's `SubagentExecutor` invokes the child workflow with a fresh configurable of `{ thread_id }` only — the parent's `user` / `user_id` are dropped on the way into the child graph. The child's `ToolNode` then dispatches `ON_TOOL_EXECUTE` to the parent's handler, which merges `{ ...configurable, ...toolConfigurable }`, but neither side carries user identity for subagents.

Downstream MCP tools read `config.configurable.user?.id || user_id` and got `undefined`, so `MCPManager.getConnection` fell through to the "No connection found for server X" error path — it can't reach the user-connection lookup without a userId.

Re-inject `user` (via `createSafeUser`) and `user_id` from `req.user` into the configurable returned by `loadToolsForExecution`. This is the single point all controllers (chat, Responses API, OpenAI-compat) flow through. For the parent agent it's a no-op (the outer config already carries the same values); for subagents it fills the gap so MCP connection lookup, user-placeholder substitution, and tools that read configurable.user all work correctly.

* 🐛 fix: Preserve `api-user` Fallback When Injecting Subagent Identity

Codex review pointed out that the prior commit unconditionally wrote `user_id: req.user?.id` (and `user`) into `toolConfigurable`. The handler merges via `{ ...configurable, ...toolConfigurable }` — `toolConfigurable` wins — so when `req.user` is absent, this overwrote the outer config's `'api-user'` fallback (set by `responses.js` / `openai.js` for the unauthenticated API-key path) with `undefined`, breaking MCP connection lookup for that path.

Only inject the keys when `req.user.id` is truthy. Omitting them lets the merge preserve whatever the outer configurable already had. Tests updated to assert key omission for `req.user` undefined / null / present without `id`.
* 🩹 fix: Narrow `IUser.id` to required string

`IUser` extends mongoose `Document`, which types `id?: any` (the optional virtual). At runtime, `id` is always `_id.toString()` for a hydrated doc, so narrow the type to a required string.

Closes two `@rollup/plugin-typescript` TS2322 warnings introduced by PR #12450 (OIDC Bearer Token Authentication for Remote Agent API), where `req.user = userResolution.user` and the `(req: Request, res: Response, next: NextFunction)` signature both failed against the project's local `Express.User` augmentation (`{ [key: string]: any; id: string; }`) because `IUser.id` was `any`/optional. Narrowing here fixes both at the source rather than casting at every assignment site.

* 🩹 fix: Resolve TS Build Warnings Surfaced by `IUser.id` Narrowing

Three rollup TS plugin warnings surfaced after narrowing `IUser.id` from `any` to `string`:

- `utils/env.ts:95` — `safeUser[field] = user[field]` failed strict checking because an indexed write through a union-typed key collapses the LHS to the intersection of all field write types (i.e., `undefined` when fields have mixed types). The previous `id?: any` on IUser had been masking this. Switch to `Object.assign(safeUser, { [field]: user[field] })`, which widens the assignment.
- `endpoints/google/initialize.ts:35` — `getUserKey({ userId: req.user?.id, ... })` failed because `req.user?.id` is now `string | undefined` (no longer `any`). Match the pattern already used in `endpoints/openAI/initialize.ts:49`: `req.user?.id ?? ''`.
- `middleware/remoteAgentAuth.ts:465` — pre-existing, unrelated to the IUser change. The local (gitignored) `express.d.ts` augments `express.Request` but not `express-serve-static-core.Request`, so the explicit `(req: Request, ...)` annotation imported from `'express'` resolves to a Request whose `req.user` differs from the one `RequestHandler` expects internally. Type the closure as `RequestHandler` directly so TS infers params from the augmented type.
* 🩹 fix: Cast `RemoteAgentAuth` Closure to `RequestHandler`

My previous attempt removed the explicit `req: Request` annotation on the closure to side-step the outer `RequestHandler` mismatch. That shifted the error to every helper call site inside the closure (`getConfigOptions(req)`, `runApiKeyAuth(req, ...)` at 467/474/493/512/531), because the helpers annotate their params with `express.Request` (which has the local `Request.user` augmentation), while the unannotated closure inferred `req` as `express-serve-static-core.Request` (no augmentation). Reproduced locally by stubbing the gitignored `src/types/express.d.ts`.

Right approach: keep the explicit `req: Request` annotation so the closure body matches the helpers' types, then cast at the return — `RequestHandler`'s internal `Request` resolves through `express-serve-static-core` and lacks the augmentation, so the cast is the boundary that bridges the two views of `req.user`.

Verified against a build with the local express.d.ts stub: zero warnings on `remoteAgentAuth.ts`, `env.ts`, and `google/initialize.ts`.
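The conditional-injection fix from the `api-user` commit can be sketched as follows. `createSafeUser` is a stand-in for the real helper (here it just copies the id), and the merge mirrors the handler's `{ ...configurable, ...toolConfigurable }` spread:

```javascript
// Stand-in for the real createSafeUser helper.
const createSafeUser = (user) => ({ id: user.id });

// Keys are injected only when req.user.id is truthy. Omitting them (rather
// than setting undefined) lets the spread merge preserve whatever the outer
// configurable already carried — e.g. the 'api-user' fallback.
function buildToolConfigurable(req, base = {}) {
  const toolConfigurable = { ...base };
  if (req.user?.id) {
    toolConfigurable.user = createSafeUser(req.user);
    toolConfigurable.user_id = req.user.id;
  }
  return toolConfigurable;
}

const outer = { user_id: 'api-user' }; // unauthenticated API-key path
const anonymous = { ...outer, ...buildToolConfigurable({}) };
const authenticated = { ...outer, ...buildToolConfigurable({ user: { id: 'u1' } }) };
```

With no `req.user`, the merge keeps `user_id: 'api-user'`; with an authenticated user, the injected `user_id` wins.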
f20419d0b7
📄 feat: Rich File Artifact Previews for DOCX, CSV, XLSX, PPTX (#12934)
* 📄 feat: Rich File Artifact Previews for DOCX, CSV, XLSX, PPTX

Render office files emitted by tools as interactive previews in the artifact panel instead of raw extracted text. The backend produces a sanitized HTML document via mammoth (DOCX), SheetJS (CSV/XLSX/XLS/ODS), or yauzl-based slide extraction (PPTX) and ships it through the existing SSE attachment payload; the client routes it through the Sandpack `static` template's `index.html` slot — no new browser deps, no client-side blob fetch, no React renderer components.

* 🔐 fix: Restrict data: URLs to <img> in office HTML sanitizer

Codex review on #12934 caught that `data:` lived in the global `allowedSchemes`, which meant a smuggled `<a href="data:text/html, <script>...</script>">` would survive sanitization. The Sandpack iframe sandbox does not gate `target="_blank"` navigations, so a click would open attacker-controlled HTML in a new tab.

Scope `data:` to `<img src>` only via `allowedSchemesByTag` (mammoth inlines DOCX images as base64 `data:image/...` URIs — that path still works). Add a regression suite (`sanitizeOfficeHtml security`) with 8 cases covering: <script> stripping, event-handler removal, javascript:/data: rejection on anchors, data:image preservation in <img>, http/https/mailto allowance, target=_blank rel=noopener enforcement, and <iframe> stripping.

* 🔧 fix: Route extensionless office files by MIME alone

Codex review on #12934 caught that the office-render gate in `extractCodeArtifactText` only fired when the extension was in `OFFICE_HTML_EXTENSIONS` or the category was `document`/`pptx`. A tool emitting `data` with `text/csv` (no extension) classifies as `utf8-text`, so the gate was skipped and raw CSV text shipped to the client — but the client routes by MIME to the SPREADSHEET bucket expecting a full HTML document, so the panel rendered broken text.

Extract a shared `officeHtmlBucket(name, mime)` predicate from `html.ts` (returns the bucket name or null).
Both `bufferToOfficeHtml` (the dispatcher) and the upstream gate in `extract.ts` now go through this single source of truth, so they can never drift apart again. The predicate already mirrors the dispatcher's extension/MIME logic (extension wins; MIME is the fallback for extensionless inputs).

Adds:
- 14 cases for the new `officeHtmlBucket` predicate covering the positive paths (each bucket via extension OR MIME) and the negative paths (txt, py, json, jpg, pdf, zip, odt, plain noext).
- A direct regression test in `extract.spec.ts` for the Codex catch: `data` with `text/csv` + utf8-text category routes through the office HTML producer.
- Parameterized cases for extensionless DOCX/XLSX/XLS/ODS/PPTX files identified by MIME alone.

* 🛡️ fix: Enforce extension-wins precedence in officeHtmlBucket

Codex review on #12934 caught that the predicate's if-chain interleaved extension and MIME checks for each bucket — e.g. CSV's branch was `ext === 'csv' || CSV_MIME_PATTERN.test(mimeType)`. A `deck.pptx` shipped with `text/csv` (sandboxed tools sometimes ship generic MIMEs) matched the CSV branch BEFORE the PPTX extension branch was reached, so a binary PPTX would have been handed to `csvToHtml` to parse as text — yielding garbage or a parse exception.

Restructure to a strict two-pass dispatch: an exhaustive extension table first (one lookup, all known extensions), then a MIME-only fallback for extensionless / unknown-ext inputs. The doc comment's "extension wins" claim is now actually enforced by the implementation.

Add 7 regression cases covering the conflicting-MIME footgun for each bucket: deck.pptx + text/csv → pptx; workbook.xlsx + text/csv → spreadsheet; legacy.xls + pptx-MIME → spreadsheet; report.docx + text/csv → docx; data.csv + docx-MIME → csv; etc.

* 🛡️ fix: Reject zip-bomb office files before in-process parsing (SEC)

Addresses a pre-existing availability vulnerability validated by SEC review (Codex finding 275344c5...) and made worse by this PR's HTML rendering path.
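The two-pass, extension-wins dispatch described above can be sketched as follows. The extension table and MIME patterns here are simplified stand-ins for the real tables in `html.ts`:

```javascript
// Illustrative two-pass dispatch: extension table first, MIME fallback
// second, so a generic or conflicting MIME can never override a known
// extension (deck.pptx shipped with text/csv still routes to pptx).
const EXTENSION_TO_BUCKET = {
  docx: 'docx',
  csv: 'csv',
  xlsx: 'spreadsheet',
  xls: 'spreadsheet',
  ods: 'spreadsheet',
  pptx: 'pptx',
};

const MIME_TO_BUCKET = [
  [/^(text\/csv|application\/csv|text\/comma-separated-values)$/, 'csv'],
  [/wordprocessingml\.document$/, 'docx'],
  [/(spreadsheetml\.sheet|ms-excel)$/, 'spreadsheet'],
  [/presentationml\.presentation$/, 'pptx'],
];

function officeHtmlBucket(name, mimeType) {
  const ext = name.includes('.') ? name.split('.').pop().toLowerCase() : '';
  if (ext in EXTENSION_TO_BUCKET) {
    return EXTENSION_TO_BUCKET[ext]; // pass 1: extension wins
  }
  // pass 2: MIME-only fallback; parameters stripped, lowercased
  const base = mimeType.split(';')[0].trim().toLowerCase();
  for (const [pattern, bucket] of MIME_TO_BUCKET) {
    if (pattern.test(base)) return bucket;
  }
  return null;
}
```

The comment's "extension wins" claim holds by construction: the MIME pass is only reachable when the extension lookup misses.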
A sub-1MiB compressed XLSX/DOCX/PPTX (a highly compressed run of zeros) inflates to 200+ MiB of XML when handed to mammoth/xlsx — blocking the Node event loop for 10+ seconds and spiking RSS to ~1 GiB. The existing 8s `withTimeout` wrapper uses `Promise.race`, which can only return early; it cannot interrupt synchronous parser CPU/RAM consumption. A PoC ran an authenticated execute_code call to OOM the API process.

Add `assertSafeZipSize(buffer)` — a yauzl-based pre-flight that streams every entry with mid-inflate byte counting and bails on either a per-entry or a total decompressed-size cap. Mid-inflate counting cannot be bypassed by falsifying the central directory's `uncompressedSize` field (the technique the PoC used). Defaults: 25 MiB per entry, 100 MiB total — generous headroom for legitimate image-heavy office files, well below the attack profile.

Hook the check into every path that hands a buffer to mammoth/xlsx/yauzl:
- New HTML producers (`wordDocToHtml`, `excelSheetToHtml`, `pptxToSlideListHtml`) — added by this PR
- Legacy RAG text extractors (`wordDocToText`, `excelSheetToText` in `crud.ts`) — pre-existing path, also vulnerable

Errors propagate as a tag-distinct `ZipBombError` so callers can distinguish a refused bomb from generic parse failures. The outer `extractCodeArtifactText` swallows the error and returns null, falling back to the regular download UI. `.xls` (BIFF/CFB binary, not ZIP) is detected by magic bytes and skipped — yauzl would reject it as malformed anyway.

Adds 15 tests:
- `zipSafety.spec.ts` (9): benign passes, per-entry cap, total cap, ZipBombError type-tagging, malformed-zip distinction, directory-entry handling, named-error surfacing, and the SEC-PoC pattern (sub-1 MiB compressed → 50 MiB inflated, rejected on default caps).
- `html.spec.ts` zip-bomb suite (5): each producer rejects a bomb; the dispatcher propagates correctly; legitimate fixtures still render.
- `extract.spec.ts` (1): the outer extractor swallows ZipBombError and returns null so the download-UI fallback fires.

* 🧹 fix: Normalize MIME parameters; add legacy CSV MIME variant

Two related Codex catches on PR #12934 — both about MIME-routing inconsistencies between backend and client that would cause extensionless CSV files to render as broken (raw text under an HTML slot) or skip the artifact panel entirely.

P2 — backend MIME normalization: `officeHtmlBucket` matched MIME strings exactly, so a real-world `text/csv; charset=utf-8` Content-Type slipped through and the backend returned raw CSV text. The client's `baseMime` helper strips parameters before its own MIME lookup, so it routed the same file to the SPREADSHEET bucket expecting an HTML body that never arrived. Mirror the client's normalization on the backend (strip everything from `;` onward, lowercase) before bucket matching.

P3 — client legacy CSV MIME: The backend's `CSV_MIME_PATTERN` accepts three variants (`text/csv`, `application/csv`, `text/comma-separated-values`); the client's `MIME_TO_TOOL_ARTIFACT_TYPE` only had the first two. An extensionless file with `text/comma-separated-values` would have backend HTML produced, but the client would skip the artifact panel entirely. Add the missing variant.

Tests:
- 9 new parameterized-MIME cases on the backend covering charset/boundary/case variants for every bucket.
- 1 new client routing case for `text/comma-separated-values`.

* 🩹 fix: Try office HTML before short-circuiting on category=other

Codex review on #12934 caught that the early `category === 'other'` return short-circuited before `hasOfficeHtmlPath` was checked. The classifier returns 'other' for inputs the new dispatcher can still route — extensionless `application/csv` (CSV MIMEs aren't in the classifier's text-MIME set and don't start with `text/`), and extensionless office MIMEs with parameters like `application/vnd...spreadsheetml.sheet; charset=binary` (the classifier's `isDocumentMime` exact-matches these MIMEs without parameter normalization). Both would route correctly through `officeHtmlBucket` but never reached it.

Move the office-HTML attempt above the 'other' early return, and drop the `|| category === 'document' || category === 'pptx'` shortcut now that `hasOfficeHtmlPath` covers the same surface (with parameter normalization) and a wider one. ODT still routes through `extractDocument` unchanged — `hasOfficeHtmlPath` returns false for it and the `category === 'document'` branch below handles it.

Adds 3 regression tests:
- extensionless `application/csv` + category='other' → office HTML
- extensionless parameterized office MIME + category='other' → office HTML
- defense check: an actual binary 'other' (image/jpeg) still returns null without invoking the office producer

* 🛡️ fix: Office types are HTML-or-null (no text fallback → XSS)

Codex P1 review on #12934 caught that when `renderOfficeHtml` failed (timeout, malformed file, zip-bomb rejection) for an office type, the extractor fell through to `extractDocument` and returned plain text. The client routes by extension/MIME to the office preview buckets and feeds `attachment.text` straight into the Sandpack iframe's `index.html`. A spreadsheet cell or document body containing the literal string `<script>alert(1)</script>` would have been injected as executable markup — direct XSS.

The contract for office types is now HTML-or-null with no text fallback. A failed render returns null, the client's empty-text gate keeps the artifact off the panel, and the file falls back to the regular download UI (matching what PPTX already did). PDF and ODT still go through `extractDocument` because the client routes them to PLAIN_TEXT (which the markdown viewer escapes) or no artifact at all, so plain text is safe there.
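The HTML-or-null contract can be sketched under stated assumptions: `officeHtmlBucket` and `renderOfficeHtml` below are simplified stand-ins for the real helpers, and the stubbed renderer always throws to exercise the failure path:

```javascript
// Illustrative shape of the contract: office buckets either produce a full
// sanitized HTML document or null — never extracted plain text, because
// the client injects attachment.text into the iframe's index.html.
async function extractOfficeArtifact(buffer, name, mime, deps) {
  const bucket = deps.officeHtmlBucket(name, mime);
  if (!bucket) {
    return null; // non-office types take their own (escaped) paths
  }
  try {
    return await deps.renderOfficeHtml(buffer, bucket);
  } catch {
    return null; // timeout, malformed file, zip bomb → download-UI fallback
  }
}

// Stub dependencies for demonstration only.
const deps = {
  officeHtmlBucket: (name) => (name.endsWith('.xlsx') ? 'spreadsheet' : null),
  renderOfficeHtml: async () => {
    throw new Error('malformed file'); // simulated render failure
  },
};
```

The key property: on failure the caller gets `null`, so nothing resembling raw cell contents can ever reach the iframe slot.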
Test reshuffle:
- The `document` describe block now uses ODT/PDF for the legacy parseDocument-path tests (DOCX/XLSX/XLS/ODS bypass that path).
- New "does NOT call parseDocument for office HTML types" test locks in the SEC contract for all four office HTML buckets.
- The "falls back to ..." tests are rewritten as "returns null when ..." with explicit `parseDocumentCalls.length === 0` assertions to prove no text leaks back to the client.
- New XSS regression test for the XLSX failure path.
- The mock parseDocument failure-name match is relaxed to `includes()` so ODT-named tests can use the same trigger.

* 🧽 chore: Address follow-up review findings on PR #12934

Wraps up the 10-finding follow-up review. Two MAJOR + four MINOR + two NIT addressed; one NIT skipped after verifying it was a misread of the package.json structure.

MAJOR
- #1: Rewrite the `renderOfficeHtml` JSDoc to document the HTML-or-null contract explicitly. The pre-fix doc described a text-fallback path that was the original XSS vector (commit b06f08a). A future maintainer trusting the stale doc could reintroduce the fallback.
- #2: Replace byte-truncation of office HTML with a small "preview too large" banner document. Cutting at a UTF-8 boundary lands mid-tag (`<table><tr><td>con\n…[truncated]`) and ships malformed markup to the iframe — unpredictable rendering, occasional broken layouts on DOCX with embedded images / wide spreadsheets.

MINOR
- #4: Wrap `readSlidesFromZip`'s `zipfile.close()` in try/catch so a close-time exception (mid-flight stream) doesn't replace the original error. Mirrors the defensive pattern in zipSafety.ts.
- #5: Refactor PPTX extraction to use `yauzl.fromBuffer` directly, eliminating the temp-file write/unlink the safety pre-flight already proved unnecessary. Removes 4 unused imports (os, path, fs/promises, randomUUID).
- #6: Extract `isPreviewOnlyArtifact(type)` to `client/src/utils/artifacts.ts` so the membership check is unit-testable without mounting the full Artifacts component (Recoil + Sandpack + media query). 15 new test cases covering positive types, negative types, null/undefined, and unknown strings.

NIT
- #3: Remove dead `stripColorStyles` / `COLOR_PROPERTY_PATTERN` — unused (the sanitizer's `allowedStyles` config handles color implicitly).
- #7: Remove the dead `!_lc_csv_label` worksheet property write.
- #9: Remove the no-op `exclusiveFilter: () => false` sanitize-html config.
- #10: Type-narrow `PREVIEW_ONLY_ARTIFACT_TYPES` to `ReadonlySet<ToolArtifactType>` so the membership table is compile-time checked against the enum.

SKIPPED
- #8: The reviewer flagged `sanitize-html` as duplicated in devDeps and dependencies. The package has no `dependencies` section — only `devDependencies` and `peerDependencies`. The existing convention (mammoth, xlsx, yauzl, pdfjs-dist) is to appear in BOTH. Removing the devDep entry would break local test runs.

Tests: packages/api 4406/4406, client artifacts 128/128.

* 🪞 chore: Fix isPreviewOnlyArtifact test description parameter order

Follow-up review nit on PR #12934. Jest's `it.each` substitutes `%s` positionally, and the table rows were `[type, expected]` while the description template read `'returns %s for type %s'` — outputting "returns application/vnd.librechat.docx-preview for type true" instead of the intended "type ... returns true". Reorder the template to match the column order.

Test runner output now reads naturally: "type application/vnd.librechat.docx-preview returns true". Purely cosmetic — runtime behavior unchanged.

* ✨ feat: Improve DOCX rendering and surface filename in panel header

Two UX improvements based on hands-on use of the office preview pipeline.

DOCX rendering — mammoth strips the navy banners, cell shading, and column layouts that direct-formatted docs apply (python-docx-style output is a common case).
The flat `<p><strong>X</strong></p>` and bare `<table><tr><td>` it emits look washed out next to the source. Three targeted compensations:
- The style map promotes `Title`, `Subtitle`, `Heading 1` through `Heading 6`, and `Quote` paragraphs to their semantic HTML equivalents (mammoth's default only handles Heading 1–6, missing Title/Subtitle/Quote).
- Extra CSS scoped to `.lc-docx` gives the first table row sticky-looking header styling regardless of `<thead>` (mammoth never emits `<thead>`), adds zebra striping, and treats the python-docx `<p><strong>X</strong></p>` section-heading idiom as a pseudo-h2 with a thin accent left border so document structure survives the round trip. Headings get a left accent or underline so they read as headings instead of just bold paragraphs.
- The sanitizer's `allowedAttributes` opens `class` on the heading and block tags the styleMap and CSS heuristics rely on. `<script>`, event handlers, javascript: URLs, etc. are still stripped — the existing security regression suite catches any drift.

Panel header — `Artifacts.tsx` showed a generic "Preview" pill for preview-only artifacts. A single-tab Radio is a no-op; surfacing the document filename there gives the user something useful in the chrome without taking real estate. `displayFilename` handles the sandbox dotfile suffix the upload pipeline applies.

Tests: html.spec.ts +1 (new CSS-emission lock), 71/71. Backend files suite 428/428. Client 308/308.

* ✨ feat: High-fidelity DOCX preview via docx-preview in iframe

Switch the default DOCX render path from server-side mammoth → flat HTML to client-side `docx-preview` loaded inside the Sandpack iframe. Mammoth becomes the fallback for files above the cap.

Why
---
The Sandpack iframe is a real browser DOM. The server-side rendering ceiling for DOCX→HTML is well below the source's visual fidelity — mammoth strips cell shading, run colors, banners, and column layouts because Word's layout model doesn't fit HTML's flow model.
Pushing the render into the iframe lifts that ceiling without paying the server-side cost of jsdom or LibreOffice.

What
----
- New `wordDocToHtmlViaCdn(buffer)` builds a self-contained HTML doc that embeds the binary as base64 and lets `docx-preview@0.3.7` render it on load. CSS preserves the dark/light mode handoff via `prefers-color-scheme`. The bootstrap script falls back to a "preview unavailable, please download" message if the CDN is unreachable or the parse throws.
- `docx-preview` and its `jszip` peer dep are pinned to specific versions on jsdelivr with SRI sha384 integrity hashes and `crossorigin="anonymous"`. To refresh: re-fetch the file, run `openssl dgst -sha384 -binary FILE | openssl base64 -A`.
- CSP locked down on the iframe: `default-src 'none'`, scripts only from jsdelivr (no eval), `connect-src 'none'` so a parser bug in docx-preview can't be turned into exfiltration of the embedded document, `base-uri 'none'`, `form-action 'none'`. Defense in depth on top of the Sandpack cross-origin sandbox.
- `wordDocToHtml` dispatches by size: ≤ 350 KB binary → CDN path (high fidelity); larger → mammoth fallback (preserves the size cap on `attachment.text`). 350 KB was chosen so the worst-case base64-inflated output (~478 KB) plus wrapper overhead (~5 KB) fits under MAX_TEXT_CACHE_BYTES (512 KB) with 40 KB headroom.
- Internal renderers are exported as `_internal` for tests. The public API is unchanged — callers still go through `wordDocToHtml`.

PPTX intentionally NOT switched
-------------------------------
Surveyed the available client-side PPTX libraries:
- `pptx-preview@1.0.7` ships an ESM-only main entry plus a 1.36 MB UMD that references `require("stream"/"events"/"buffer"/"util")` — bundled for Node, not browser-clean. It could work, but the runtime references to undefined Node globals are a fragility risk worth more validation than this PR can absorb.
- `pptxjs` is jQuery-era, requires four separate UMD scripts in a specific order, and is less actively maintained.
- The honest answer for PPTX is the LibreOffice sidecar (DOCX/XLSX/PPTX → PDF → PDF.js), which is the architecture every major product (Google Drive, Claude.ai, ChatGPT) effectively uses and the only path to ~5/5 fidelity for arbitrary user decks.

PPTX stays on the existing slide-list extraction for now. Open a follow-up issue for the LibreOffice/Gotenberg sidecar.

Tests
-----
- 6 new in the CDN-rendered describe block: wrapper structure, base64 round-trip, SRI integrity + crossorigin, CSP locks (connect-src/eval/base-uri/form-action), fallback message wiring, size-threshold lock.
- Adjusted 2 existing tests that asserted on mammoth-path artifacts (literal document text in `<article class="lc-docx">`) — those assertions move to the mammoth-fallback test that calls `_internal.wordDocToHtmlViaMammoth` directly. Dispatcher tests now assert CDN-path signatures instead.

packages/api files: 434/434 ✅, full unit suite 4473/4473 ✅.

* 🧷 fix: Address Codex P1 (MIME aliases) + P2 (CDN dependency)

Two follow-up review findings on PR #12934, both real.

P1 — Spreadsheet MIME aliases on client
----------------------------------------
Backend's `officeHtmlBucket` uses the broad `excelMimeTypes` regex from `librechat-data-provider` (covers `application/x-ms-excel`, `application/x-msexcel`, `application/msexcel`, `application/x-excel`, `application/x-dos_ms_excel`, `application/xls`, `application/x-xls`, plus the canonical sheet MIMEs). The client's exact-match `MIME_TO_TOOL_ARTIFACT_TYPE` only had three of those, so an extensionless XLS upload with a legacy MIME would have backend HTML produced but the client would fail to route the artifact at all — the preview chip never registers.

Fix: import the same regex on the client and add it as a fallback in `detectArtifactTypeFromFile` after the exact-match map miss. Stays in lock-step with the backend automatically. 7 new test cases — one per legacy alias.
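The fallback shape can be sketched as follows. The regex below is rebuilt from the aliases listed above and is only illustrative (the real pattern is `excelMimeTypes` from `librechat-data-provider`), and the artifact-type strings are placeholders, not the real enum values:

```javascript
// Illustrative reconstruction: exact-match map first, shared regex second,
// so legacy Excel MIME aliases route to the spreadsheet bucket too.
const excelMimeTypes =
  /^application\/(vnd\.ms-excel|msexcel|x-ms-excel|x-msexcel|x-excel|x-dos_ms_excel|xls|x-xls|vnd\.openxmlformats-officedocument\.spreadsheetml\.sheet)$/;

const MIME_TO_TOOL_ARTIFACT_TYPE = {
  'text/csv': 'csv-preview', // placeholder type strings
  'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet':
    'spreadsheet-preview',
};

function detectArtifactTypeFromMime(mime) {
  const base = mime.split(';')[0].trim().toLowerCase();
  const exact = MIME_TO_TOOL_ARTIFACT_TYPE[base];
  if (exact) return exact;
  if (excelMimeTypes.test(base)) return 'spreadsheet-preview'; // legacy aliases
  return null;
}
```

Because the same pattern backs both sides, a MIME the backend accepts can no longer be one the client fails to route.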
P2 — Hard CDN dependency on jsdelivr
-------------------------------------
Air-gapped / corporate-filtered networks where jsdelivr is unreachable would see DOCX previews permanently degrade to "Preview unavailable," because the iframe could never load the renderer scripts. Mammoth was sitting right there on the server, but the dispatcher always preferred the CDN path for files under 350 KB.

Fix: an `OFFICE_PREVIEW_DISABLE_CDN` env var. When truthy (`1`, `true`, `yes`, case-insensitive, whitespace-trimmed), `wordDocToHtml` short-circuits to the mammoth path regardless of file size. Operators on filtered networks set the env var; default behavior is unchanged. Read at function-call time (not module load) so jest can flip it in `beforeEach` without `jest.resetModules()`. The cost is one property access per render.

12 new test cases: env-unset uses CDN (default), all five truthy forms force mammoth, five non-truthy forms (`false`/`0`/`no`/empty/arbitrary string) leave the CDN active.

Tests
-----
packages/api/src/files: 446/446 ✅ (was 434, +12 from the env-var matrix).
client artifact suites: 235/235 ✅ (was 228, +7 from the MIME aliases).

* ✨ feat: High-fidelity PPTX preview via pptx-preview in iframe

Mirrors the DOCX CDN architecture for PPTX: small files (≤350 KB binary) embed as base64 and render via `pptx-preview` loaded from jsdelivr inside the Sandpack iframe. Larger files and air-gapped deployments fall back to the existing slide-list extraction.

Why
---
PPTX is the format where the gap between LibreChat's preview and Claude.ai-style previews was most visible (a slide-list of bullet points vs. rendered slide layouts). LibreOffice → PDF → PDF.js is still the eventual gold-standard answer for PPTX fidelity, but client-side rendering inside the Sandpack iframe gets us a meaningful intermediate step (~1.5/5 → ~3.5/5) without a sidecar.

What
----
- `pptx-preview@1.0.7` (ISC license, ~1.36 MB UMD bundle that includes its echarts/lodash/uuid/jszip/tslib deps inline).
Pinned to a specific version on jsdelivr with SHA-384 SRI and `crossorigin="anonymous"`.
- `buildPptxCdnDocument` mirrors the DOCX wrapper: same CSP locks (`default-src 'none'`, `connect-src 'none'`, no eval, no base/form tampering), same `id="lc-doc-data"` base64 slot, same fallback message wiring (`typeof pptxPreview === 'undefined'` → "Preview unavailable").
- New public `pptxToHtml(buffer)` dispatcher; `bufferToOfficeHtml` switches its `'pptx'` case to call it. `pptxToSlideListHtml` stays exported as the slide-list-only path (still hit by tests directly and by the dispatcher fallback).
- The `OFFICE_PREVIEW_DISABLE_CDN=true` env-var hatch applies to PPTX too — air-gapped operators get the slide-list path. Same env-var read at call time, same matrix of truthy values (`1` / `true` / `yes` / case-insensitive / whitespace-trimmed).
- `_internal` re-exports moved to after the PPTX section, since the PPTX internals live further down in the file. Adds `pptxToHtmlViaCdn`, `MAX_PPTX_CDN_BINARY_BYTES`, `PPTX_PREVIEW_CDN`.

Honest caveats
--------------
- The 1.36 MB UMD bundle has `require("stream"/"events"/"buffer"/"util")` references in its outer wrapper. Those are bundled-dep artifacts (likely from `tslib` / Node-shim transforms) and don't appear to execute on the browser code paths, but I haven't done manual e2e on a wide range of decks. If a class of files turns up that breaks rendering, the iframe-side fallback message catches it, and operators have `OFFICE_PREVIEW_DISABLE_CDN=true` as the bail.
- The first-render CDN fetch is ~1.36 MB (browser-cached after).
- PPTX with embedded media easily exceeds the 350 KB binary cap; those files take the slide-list path. Lifting the cap is a follow-up (tied to the broader self-hosting work).

Tests
-----
11 new in two new describe blocks:
- `pptxToHtml dispatcher`: routing predicate (small → CDN, env-set → slide-list).
- `CDN-rendered path`: base64 round-trip, SRI integrity + crossorigin, CSP locks (connect/eval/base/form), fallback message, size-threshold lock at 350 KB.
- `OFFICE_PREVIEW_DISABLE_CDN escape hatch`: env-var matrix for truthy values.

packages/api/src/files: 457/457 ✅ (was 446, +11).

* 🪟 fix: DOCX preview fills the artifact panel width

docx-preview defaults to rendering at the document's native page width (8.5in for letter, 21cm for A4). In a wide artifact panel that left whitespace on either side; in a narrow one it forced horizontal scroll.

Two changes:
- Pass `ignoreWidth: true` to `docx.renderAsync` so the library skips the document's pageSize width and uses its container's width.
- Defensive CSS overrides on `.docx-wrapper` and `.docx-wrapper > section.docx` in case a future library version regresses on the option, plus `padding: 0` on the wrapper to drop the page-edge whitespace docx-preview otherwise reserves.

`renderHeaders`/`renderFooters`/etc. stay enabled — those still appear in the rendered output, just inside a container that fills the panel instead of a fixed-width "page."

Tests unchanged (100/100); manual e2e ahead of merge.

* 🩹 fix: PPTX black screen — allow blob: workers + harden bootstrap

Manual e2e of the PPTX CDN renderer surfaced a black screen with a "Could not establish connection. Receiving end does not exist." unhandled rejection — characteristic of a Web Worker that couldn't start.

Root cause: pptx-preview's bundled echarts dep spins up Web Workers via blob: URLs for chart rendering. Our CSP had `default-src 'none'` and no `worker-src`, so workers fell back to default → blocked. The async failure deep inside echarts didn't surface through the outer `previewer.preview()` promise, so my bootstrap's `.catch` never fired, the loading state was removed, and the iframe sat with the body background showing through (dark navy in dark mode = "black screen").

Three changes:
- Add `worker-src blob:` to the PPTX CSP.
  Allows blob:-only worker creation without permitting arbitrary worker URLs.
- Bootstrap: window-level `unhandledrejection` and `error` listeners so rejections from inside bundled-dep async pipelines surface as the user-facing "Preview unavailable" fallback instead of going silent.
- Bootstrap: 8-second timeout that checks `container.children.length` — if the renderer hasn't appended anything visible by then, assume silent failure and show the fallback. Also wipe `container.innerHTML` when showing the fallback so a partial render doesn't compete with the message.

DOCX wrapper unchanged: docx-preview doesn't use workers, so the worker-src directive doesn't apply, and the existing fallback path already covers its failure modes.

Tests
-----
- Existing PPTX CSP test now also asserts `worker-src blob:` is present.
- Existing fallback-message test extended to cover the new unhandledrejection/error/timeout listeners.

packages/api/src/files: 467/467 ✅.

* 🔒 fix: gate office HTML routing on backend trust flag (textFormat)

Codex P1 review on PR #12934: routing .docx/.csv/.xlsx/.xls/.ods/.pptx into the office preview buckets assumed `attachment.text` was already sanitized full-document HTML, but that guarantee only existed for the new code-output extractor path. Existing stored attachments and other non-code paths can still carry plain extracted text — `useArtifactProps` would then inject that as `index.html` inside the Sandpack iframe.

Adds a `textFormat: 'html' | 'text' | null` trust flag persisted on the file record by the code-output extractor, surfaced over the SSE attachment payload and the TFile API type. The client's routing in `detectArtifactTypeFromFile` requires `textFormat === 'html'` before landing on an office HTML bucket; everything else (legacy attachments, RAG-extracted plain text from `parseDocument`, explicitly-marked 'text' entries) falls back to the PLAIN_TEXT bucket where the markdown viewer escapes content rather than executing it.
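The trust-gate routing above can be sketched as a small pure function. This is an illustrative assumption, not the actual `detectArtifactTypeFromFile` code — the type names, bucket names, and `routeOfficeAttachment` helper are all hypothetical stand-ins; it also folds in the later empty-text → legacy-download (null) contract described further down:

```typescript
// Hedged sketch: names and shapes are illustrative, not LibreChat's real API.
type TextFormat = 'html' | 'text' | null;

interface AttachmentLike {
  type: string; // e.g. 'docx', 'xlsx', 'pptx'
  text?: string; // extracted content, possibly full-document HTML
  textFormat?: TextFormat; // backend trust flag set by the code-output extractor
}

type Bucket = 'OFFICE_HTML' | 'PLAIN_TEXT';

const OFFICE_TYPES = new Set(['docx', 'csv', 'xlsx', 'xls', 'ods', 'pptx']);

/** Only backend-marked HTML may enter the office HTML bucket. */
function routeOfficeAttachment(file: AttachmentLike): Bucket | null {
  if (!OFFICE_TYPES.has(file.type)) {
    return null;
  }
  if (file.textFormat === 'html') {
    return 'OFFICE_HTML'; // trusted, sanitized full-document HTML
  }
  if (file.text) {
    return 'PLAIN_TEXT'; // legacy/plain text: escaped by the markdown viewer
  }
  return null; // no text at all: legacy download UI
}
```

The key property is that the downgrade never executes untrusted content: anything without the explicit `'html'` marker either renders escaped or falls back to a download card.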
Tests: new `getExtractedTextFormat` helper has 14 cases covering all office paths, legacy XLS MIME aliases, parseDocument fallthroughs, and null-input. Client `artifacts.test.ts` adds three security-gate tests proving downgrade behavior for missing/null/'text' textFormat, plus a `fileToArtifact` test that legacy office attachments without the flag end up in PLAIN_TEXT with their content escaped.

* 🌐 fix: air-gapped DOCX preview — embed mammoth fallback in CDN doc

Codex P2 review on PR #12934: the CDN-rendered DOCX path always pulled docx-preview + jszip from cdn.jsdelivr.net. Air-gapped or corporate-filtered networks where jsdelivr is blocked would degrade to a static "Preview unavailable" message even though the server already had a local mammoth renderer that could produce readable output.

Now the dispatcher renders mammoth first and embeds the sanitized output inside the CDN document as a hidden `#lc-fallback` block. The iframe's existing `typeof docx === 'undefined'` check (which fires when the CDN scripts can't load) un-hides the fallback so the user sees a real preview. CDN-success path is unchanged: high-fidelity docx-preview output owns the viewport, mammoth fallback stays hidden.

Two new safeguards in the dispatcher:
- Size budget: if base64(binary) + mammoth body + wrapper > 512 KB (the `attachment.text` cache cap), drop to mammoth-only so a giant document still renders. The `OFFICE_HTML_OUTPUT_CAP` constant mirrors `MAX_TEXT_CACHE_BYTES` from extract.ts (separate constant to avoid a circular import; pinned by a unit test).
- `lc-render` is hidden when fallback shows so the empty padded slot doesn't sit above the mammoth content.

Tests: existing CDN-path tests updated for the new `wordDocToHtmlViaCdn(buffer, mammothBody)` signature; new test for the embedded fallback structure (`#lc-fallback`, mammoth body content, "High-fidelity renderer unavailable" notice, render-slot hide); new constant pin and per-fixture cap-respect assertion.
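The size-budget decision above is simple arithmetic, sketched here under stated assumptions: `projectedCdnDocumentBytes` and `fitsCdnBudget` are hypothetical helper names, and the ~4/3 base64 inflation factor is the standard encoding overhead, not a measured constant from the codebase:

```typescript
// Hedged sketch of the dispatcher's size budget; only OFFICE_HTML_OUTPUT_CAP
// is named in the commit — the helpers are illustrative.
const OFFICE_HTML_OUTPUT_CAP = 512 * 1024; // mirrors MAX_TEXT_CACHE_BYTES

/** base64 inflates binary by ~4/3; project the final document size. */
function projectedCdnDocumentBytes(
  binaryBytes: number,
  mammothBodyBytes: number,
  wrapperBytes: number,
): number {
  return Math.ceil((binaryBytes * 4) / 3) + mammothBodyBytes + wrapperBytes;
}

/** true → emit the CDN doc with hidden #lc-fallback; false → mammoth-only. */
function fitsCdnBudget(
  binaryBytes: number,
  mammothBodyBytes: number,
  wrapperBytes: number,
): boolean {
  return (
    projectedCdnDocumentBytes(binaryBytes, mammothBodyBytes, wrapperBytes) <=
    OFFICE_HTML_OUTPUT_CAP
  );
}
```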
* 🧪 feat: LibreOffice → PDF preview path (POC, opt-in via env)

Per the plan-mode discussion: prove out a LibreOffice subprocess pipeline as an alternative to the docx-preview / pptx-preview CDN renderers. LibreOffice handles every office format Microsoft and LibreOffice itself can open (DOCX, PPTX, XLSX, ODT, ODP, ODS, RTF, many more), produces a PDF, and the host browser's built-in PDF viewer renders it inside the Sandpack iframe via a `data:` URI. No client-side JS dependency, no CDN dependency, true high fidelity for any feature LibreOffice supports.

Off by default. Operators opt in by setting both:
- `OFFICE_PREVIEW_LIBREOFFICE=true`
- LibreOffice (`soffice` or `libreoffice`) on the server's `$PATH`

When either is missing, the dispatcher falls through to the existing CDN/mammoth/slide-list pipeline so a misconfiguration doesn't break previews.

Hardening (`packages/api/src/files/documents/libreoffice.ts`):
- Fresh subprocess per call with isolated temp dir, stripped env (PATH/HOME/TMPDIR only), and `-env:UserInstallation` so concurrent conversions can't collide on shared `~/.config/libreoffice` locks
- 30-second wall-time cap; SIGKILL on timeout
- 50 MB PDF output cap to bound disk pressure
- 512 KB output cap on the wrapped HTML so the SSE/cache contract stays intact (base64 inflates ~33%, effective PDF cap ~380 KB)
- Macros disabled via the default flags (`--norestore --invisible --nodefault --nofirststartwizard --nolockcheck`)
- Tag-distinct `LibreOfficeUnavailableError` / `LibreOfficeConversionError` so callers can swallow appropriately

Iframe wrapper (`buildPdfEmbedDocument`):
- Native browser PDF viewer via `<iframe src="data:application/pdf;base64,...">` — works in Chrome, Edge, Safari, Firefox
- CSP locks the iframe to `default-src 'none'; frame-src data:; connect-src 'none'; script-src 'unsafe-inline'` — no outbound network, no eval, no external scripts
- `#view=FitH` for first-paint sizing
- 4-second heuristic timer that swaps to a "Preview unavailable"
  fallback when the browser's PDF viewer is disabled (kiosk mode, Brave Shields, etc.)

Wired into `wordDocToHtml` and `pptxToHtml` as the first branch — returns null when disabled / unavailable / oversized so the existing pipeline takes over. XLSX intentionally NOT routed through this path: SheetJS's HTML output is already excellent for spreadsheets (sortable, sticky headers) and PDF rendering of sheets is awkward.

Tests (`libreoffice.spec.ts`, 30 cases — 25 always run, 5 conditional on the binary): env-gating parser semantics matching `OFFICE_PREVIEW_DISABLE_CDN`, fallthrough contract (never throws, returns null on any failure), CSP lock-down, fallback structure, binary probe caching + missing-binary path, error tagging, and integration tests that engage when `soffice`/`libreoffice` is on PATH (DOCX→PDF, PPTX→PDF, output-cap fallthrough). Integration tests skip cleanly on bare CI.

* 🩹 fix: CI — preserve legacy download path for empty-text office attachments

Two regressions surfaced after the textFormat security gate landed.

1. **Client** (`LogContent.test.tsx` "falls back to the legacy download branch for an office file with no extracted text"): When the security gate downgraded an office type without `textFormat: 'html'` to PLAIN_TEXT, the lenient empty-text gate on PLAIN_TEXT then accepted a missing `text` field and rendered a half-empty panel card. The historical contract is "office type + no text → legacy download UI"; the downgrade should only fire when there's actual plain text that needs safe-escaping.

   Fix in `detectArtifactTypeFromFile`: short-circuit to null when the office type lands in the security-gate branch with no text. The PLAIN_TEXT downgrade still fires for legacy attachments that DO carry plain text.

2. **API** (`process.spec.js` + `process-traversal.spec.js`): the `@librechat/api` mocks didn't expose `getExtractedTextFormat`, so `processCodeOutput` called `undefined(...)` → TypeError → tests got undefined results.
   Added the helper to both mocks with a faithful default (returns 'text' for non-null extractor output, null otherwise).

Tests: new regression in `artifacts.test.ts` pinning the empty-text + no-textFormat → null contract for all four office types (.docx/.csv/.xlsx/.pptx), so a future refactor can't silently re-introduce the half-empty card.

* 🩹 fix: PPTX slides scale to fit panel width (no horizontal scroll)

Manual e2e on PR #12934: pptx-preview rendered slides at their native init dimensions (960×540 default). The artifact panel is much narrower than that, so the iframe got a horizontal scrollbar and only a corner of each slide showed at any time — the user had to drag-scroll across each slide to read it.

Fix: keep pptx-preview's init at 960×540 so its internal layout math stays correct, then post-process each rendered slide:
- Cache the slide's native width/height on its dataset BEFORE applying any transform (so subsequent re-fits don't measure the already-transformed box).
- Wrap the slide in `.lc-slide-wrap` with explicit width/height set inline to the scaled dimensions; the wrap shrinks the layout space the slide occupies.
- Apply `transform: scale(panel_width / 960)` to the slide itself with `transform-origin: top left` so the rendered output shrinks from the top-left corner into the wrap.
- Cap the scale at 1.0 so small slides don't upscale and get blurry.

Streaming + resize:
- `MutationObserver` watches the container for slide insertions so streaming renders get scaled on arrival rather than waiting for the entire `previewer.preview` promise to settle.
- `ResizeObserver` re-fits all wrapped slides when the iframe resizes (panel drag, window resize).

Tests: new "bootstrap wraps + scales each slide" lock in the wrap class, scale computation, observer setup, and native-size caching so a future refactor can't silently re-introduce the overflow.
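The fit math described in this commit can be sketched as a pure function. This is an assumption-laden illustration — `SlideFit` and `fitSlide` are hypothetical names, and it reflects the behavior as of this commit (width-fit with the 1.0 upscale cap, which a later commit in this log removes):

```typescript
// Hedged sketch of the per-slide width-fit with upscale cap.
// The real bootstrap applies this via .lc-slide-wrap + transform.
interface SlideFit {
  scale: number;
  wrapWidth: number; // px the wrapper reserves in layout flow
  wrapHeight: number;
  transform: string; // applied with transform-origin: top left
}

function fitSlide(nativeW: number, nativeH: number, panelW: number): SlideFit {
  // Cap at 1.0 so small slides don't upscale and blur (as of this commit).
  const scale = Math.min(1, panelW / nativeW);
  return {
    scale,
    wrapWidth: Math.round(nativeW * scale),
    wrapHeight: Math.round(nativeH * scale),
    transform: `scale(${scale})`,
  };
}
```

Caching the native `nativeW`/`nativeH` on the element's dataset before the first transform is what keeps this function idempotent across re-fits: the inputs never drift even after the slide has been scaled.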
* 🩹 fix: PPTX wrap+scale runs after preview, not during streaming

Manual e2e on PR #12934: regenerated PPTX showed "Preview unavailable" in the iframe. Root cause: the MutationObserver I added in the previous commit fired during pptx-preview's render and moved slides out from under the library's references. pptx-preview's async pipeline raised an unhandled rejection, the iframe's window-level listener caught it, and the fallback message replaced the partial render.

Fix: drop the MutationObserver. Apply the wrap+scale ONCE in a `finalize` step that runs:
- On `previewer.preview().then` (the happy path)
- On the 8-second timeout safety net IF the container has children (silent-failure path — pptx-preview emitted slides but never resolved its outer promise)

To prevent the user from seeing an unscaled flash while pptx-preview renders into the 960px-wide canvas, the container is set to `visibility: hidden` at init and only revealed inside `finalize` after wrap+scale completes.

Resize handling stays via `ResizeObserver` on `document.body`, installed AFTER the wrap pass so it doesn't fire during the wrap itself.

Tests: regression assertion now also locks in:
- `container.style.visibility = 'hidden' / 'visible'` (the flash-prevention contract)
- Absence of MutationObserver (the bug we just removed — must NOT creep back in via a future "let's scale during streaming" idea)

* 🩹 fix: PPTX slides fill panel width (drop upscale cap, per-slide scale)

Manual e2e on PR #12934: slides rendered correctly but didn't fill the artifact panel — whitespace on either side. Two issues:

1. The scale was capped at `Math.min(1, available / SLIDE_W)`. On panels wider than 960px, the cap clamped the scale to 1.0 and slides rendered at native size with whitespace on the sides instead of stretching.
2. The scale was computed against the constant `SLIDE_W = 960`, but pptx-preview can emit slides whose `offsetWidth` differs from the init param if the source PPTX has a non-16:9 layout.
   Per-slide division of `available / nativeW` handles that case.

Fix: replace `computeScale()` with two helpers — `availableWidth()` returns the panel content-box width and `scaleFor(nativeW)` returns the per-slide scale. No upscale cap. The slide content is rendered by pptx-preview against its 960×540 canvas using vector text / canvas — scaling up to e.g. 1500px doesn't visibly degrade quality.

Tests: regression now also asserts:
- `availableWidth()` and `scaleFor()` exist by name
- The exact scale formula `availableWidth() / (nativeW || SLIDE_W)`
- Negative assertion that `Math.min(1, ...)` is NOT present, so a future "let's add an upscale cap" rewrite can't silently re-introduce the whitespace.

* 🩹 fix: PPTX preview fills panel height (no white gap below slides)

Manual e2e on PR #12934: PPTX preview filled the panel width but left empty space below the last slide. DOCX didn't have this issue because its content (mammoth-rendered HTML) flows naturally and either fits exactly or overflows; PPTX slides are fixed-aspect 16:9 and don't grow with the panel.

Two changes:

1. **Body fills the iframe viewport** — `html, body { min-height: 100vh }` plus `body { display: flex; flex-direction: column }` and `#lc-render { flex: 1 0 auto }`. The dark theme bg now fills the iframe even when total slide content is shorter than the panel, so a single-slide deck never reveals a "white below" gap.
2. **Per-slide scale honors viewport height** — `scaleFor(nativeW, nativeH)` now returns `min(width-fit, height-fit)` (largest factor that fits without overflowing either dimension). On a tall artifact panel with a short deck, slides grow up to the full panel height instead of staying at the width-bound size. Existing height-fit was always considered correct conceptually but the previous implementation only used width-fit, leaving half the viewport unused per slide.
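The contain-fit contract described in change 2 reduces to one line of math. A minimal sketch, with `availableWidth`/`availableHeight` passed as plain numbers standing in for the real panel measurements (in the actual bootstrap they are functions reading the DOM):

```typescript
// Hedged sketch of scaleFor: contain-fit, no upscale cap.
function scaleFor(
  nativeW: number,
  nativeH: number,
  availableWidth: number,
  availableHeight: number,
): number {
  const sw = availableWidth / nativeW; // width-fit factor
  const sh = availableHeight / nativeH; // height-fit factor
  return Math.min(sw, sh); // largest scale that fits both axes
}
```

This is the same `min(sw, sh)` the regression test pins; deliberately no `Math.min(1, ...)` clamp, so slides may grow past native size on large panels.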
Tests: regression now also asserts `availableHeight()`, the `Math.min(sw, sh)` formula, and `min-height: 100vh` are in the bootstrap. Negative assertion for the old `Math.min(1, ...)` upscale cap remains.

* 🩹 fix: revert body flex on PPTX bootstrap (caused black-screen render)

Manual e2e regression on PR #12934: the previous commit added `body { display: flex; flex-direction: column }` plus `#lc-render { flex: 1 0 auto }` to fill the panel height. Side effect: pptx-preview's internal layout assumes block flow on its ancestor elements; making body a flex container caused slides to render as solid-black rectangles (sized correctly, but with no visible content inside).

Fix: keep just `html, body { min-height: 100vh }` for the bg-fill effect — that alone gives empty space below short decks the dark theme bg without changing flow. Drop the body-flex and the `#lc-render { flex: 1 0 auto }` directives.

The height-aware `scaleFor(nativeW, nativeH)` from the same commit stays — it doesn't interact with pptx-preview's layout, just chooses a per-slide scale. Each slide still grows to fit the viewport contain-style.

Negative-assertion added to the regression test: `body { display: flex }` must NOT appear in the bootstrap, so a future "let's flex the body to make height work" rewrite can't silently re-introduce this.

(Note: the user also flagged DOCX theming as faint body text; I'm leaving that for now per their note that it may be pre-existing. Not addressed in this commit.)

* 🩹 fix: revert PPTX height-fill changes; lock DOCX CDN to light scheme

Two fixes for separate manual e2e regressions on PR #12934.

**1. PPTX black screen (single slide rendering as solid black).** The previous fix removed `body { display: flex }` thinking that was the sole cause, but the regression persisted. Bisecting against the last known-good commit (
||
|
|
5683706af5
|
🔐 feat: OIDC Bearer Token Authentication for Remote Agent API (#12450)
* Remote Agent Auth middleware
* consider migration and update user
* fix eslint errors
* add scope validation
* fix codex review errors
* add filter for use: sig
* add jwks-rsa deps
* Fix remote agent OIDC auth review findings
* Polish remote agent OIDC timeout coverage
* Reject remote OIDC tokens without subject
* Use tenant context for remote agent auth config
* Harden remote agent OIDC scope handling
* Polish remote agent OIDC cache and scope tests
* Resolve remote agent auth review comments
* Reuse OpenID email claim resolver for remote auth
* Skip empty OpenID email fallback claims
* Use pre-auth tenant context for remote auth config
* Downgrade expected OIDC fallback logging
* Require secure remote OIDC endpoints
* Polish remote agent auth edge cases
* Enforce unique balance records
* Bind remote OpenID users to issuer
* Fix issuer-scoped OpenID indexes
* Avoid unique balance index requirement
* Fix remote OpenID issuer normalization boundaries
* Require issuer-bound OpenID lookups
* Enforce tenant API key policy after auth
* Fix remote auth tenant policy types
* Normalize remote OIDC discovery issuer
* Allow normalized remote OIDC issuer validation
* Enforce resolved tenant OIDC policy
* Polish OpenID issuer and scope validation

---------

Co-authored-by: Danny Avila <danny@librechat.ai> |
||
|
|
c7f38d9621
|
🛡️ fix: Validate Avatar URL Before Fetch (#12928)
`resizeAvatar` previously called `node-fetch` on any string input with no validation. When OIDC providers surface a user-controllable `picture` claim, this could be used to make blind SSRF requests to internal services on every social login.

Wrap the URL fetch with:
- An allowlist on the URL protocol (http/https only).
- The shared `createSSRFSafeAgents` utility, which blocks resolution to private, loopback, and link-local IPs at TCP connect time (TOCTOU-safe; works equally for hostname targets that DNS-resolve privately and for IP-literal targets, since Node's `net.Socket` always dispatches through the agent's `lookup` hook).
- `redirect: 'error'` so a public-IP redirect target cannot be used to bypass the agent check on a subsequent hop.
- A 5-second total request budget (node-fetch v2's `timeout` covers request initiation through full body receipt, bounding slow-loris exposure rather than just the TCP connect).
- A 10 MB response cap (`size` option + `Content-Length` pre-check + post-read length assertion) so a hostile payload cannot exhaust memory before `sharp()` rejects it.

Fetch the canonicalized `parsed.href` rather than the raw input string to eliminate any future parser-differential between `new URL()` and the underlying fetch implementation.

Per-call agent construction is intentional: the avatar path runs once per social login per user, so pooling adds complexity without a measurable benefit. Documented inline.

Comprehensive test coverage in `avatar.spec.js`:
- Rejects malformed URLs, non-http(s) schemes (file://, data:, javascript:).
- Asserts the happy-path canonicalization (`fetch` is called with `parsed.href`) and the SSRF-safe agent factory routing (https→httpsAgent, http→httpAgent).
- Rejects non-2xx HTTP status.
- Rejects an oversized Content-Length before reading the body, and asserts `.buffer()` is never invoked in that case.
- Rejects an oversized body even when the server lies about / omits Content-Length.
- Surfaces ESSRF, redirect, and `size` overflow errors thrown by the fetch layer.
- Confirms Buffer inputs bypass the fetcher entirely. |
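The pure preflight parts of the checks above can be sketched as follows. This is a minimal illustration, not the actual `resizeAvatar` code: `validateAvatarUrl` and `contentLengthWithinCap` are hypothetical helpers, and the real path additionally wires `createSSRFSafeAgents`, `redirect: 'error'`, the `timeout`/`size` fetch options, and the post-read length assertion:

```typescript
// Hedged sketch of the URL allowlist + Content-Length pre-check.
const MAX_AVATAR_BYTES = 10 * 1024 * 1024; // 10 MB response cap

/** Returns the canonicalized href for http(s) URLs; throws otherwise. */
function validateAvatarUrl(input: string): string {
  let parsed: URL;
  try {
    parsed = new URL(input);
  } catch {
    throw new Error('Invalid avatar URL');
  }
  if (parsed.protocol !== 'http:' && parsed.protocol !== 'https:') {
    throw new Error(`Disallowed protocol: ${parsed.protocol}`);
  }
  // Fetch the canonical form, not the raw input, to kill parser differentials.
  return parsed.href;
}

/** Reject before reading the body when the server declares an oversized payload. */
function contentLengthWithinCap(header: string | null): boolean {
  if (header == null) {
    return true; // absent header: the size option + post-read check still enforce the cap
  }
  const declared = Number(header);
  return Number.isFinite(declared) && declared <= MAX_AVATAR_BYTES;
}
```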
||
|
|
4cce88be42
|
🪟 feat: Add allowedAddresses Exemption List For SSRF-Guarded Targets (#12933)
* 🪟 feat: Add allowedAddresses Exemption List For SSRF-Guarded Targets

LibreChat already blocks SSRF-prone targets (private IPs, loopback, link-local, .internal/.local TLDs) at every server-side fetch site that consumes user-controllable URLs — custom-endpoint baseURLs, MCP servers, OpenAPI Actions, and OAuth endpoints. The only existing escape hatch is `allowedDomains`, but that flips the field into a strict whitelist: adding `127.0.0.1` to permit a self-hosted Ollama also blocks every public destination that isn't in the list.

Introduce `allowedAddresses` as the orthogonal primitive: a private-IP-space exemption list. When a hostname or its resolved IP appears in the list, the SSRF block is bypassed for that target. Public destinations remain reachable. Operators can now run self-hosted LLMs / MCP servers / Action endpoints on private addresses without weakening the default-deny posture for everything else.

Schema additions in `packages/data-provider/src/config.ts`:
- `endpoints.allowedAddresses` (new — gates `validateEndpointURL`)
- `mcpSettings.allowedAddresses` (parallel to `allowedDomains`)
- `actions.allowedAddresses` (parallel to `allowedDomains`)

Core changes in `packages/api/src/auth/`:
- New `isAddressAllowed(hostnameOrIP, allowedAddresses)` — pure, case-insensitive, bracket-stripped literal match.
- Threaded the list through `isSSRFTarget`, `resolveHostnameSSRF`, `isDomainAllowedCore`, `isActionDomainAllowed`, `isMCPDomainAllowed`, `isOAuthUrlAllowed`, and `validateEndpointURL`.
- Extended `createSSRFSafeAgents` and `createSSRFSafeUndiciConnect` to accept the list, building an SSRF-safe DNS lookup that exempts matching hostnames/IPs at TCP connect time (TOCTOU-safe).

Wiring:
- Custom and OpenAI endpoint initialize sites pass `endpoints.allowedAddresses` to `validateEndpointURL`.
- `MCPServersRegistry` stores `allowedAddresses` and exposes it via `getAllowedAddresses()`.
  The factory, connection class, manager, `UserConnectionManager`, and `ConnectionsRepository` all thread it through to the SSRF utilities.
- `MCPOAuthHandler.initiateOAuthFlow`, `refreshOAuthTokens`, and `validateOAuthUrl` accept the list and consult it on every URL validation along the OAuth chain.
- `ToolService`, `ActionService`, and the assistants/agents action routes pass `actions.allowedAddresses` to `isActionDomainAllowed` and to `createSSRFSafeAgents` for runtime action calls.
- `initializeMCPs.js` reads `mcpSettings.allowedAddresses` from the app config and forwards it to the registry constructor.

Documentation:
- `librechat.example.yaml` shows the new field next to each existing `allowedDomains` block, with a note clarifying that `allowedAddresses` is an exemption list (not a whitelist).

Tests:
- Unit tests for `isAddressAllowed` covering literal IPs, hostnames, IPv6 brackets, case insensitivity, and partial-match rejection.
- Exemption tests for every entry point: `isSSRFTarget`, `resolveHostnameSSRF`, `validateEndpointURL`, `isActionDomainAllowed`, `isMCPDomainAllowed`, `isOAuthUrlAllowed`.
- Existing tests updated to reflect the new optional parameter.

Default behavior is unchanged: omitted = empty list = no exemptions.

* 🩹 fix: Plumb allowedAddresses Through AppConfig endpoints Type

The initial PR added `endpoints.allowedAddresses` to the data-provider config schema and consumed it in the endpoint initialize sites, but the runtime `AppConfig.endpoints` shape in `@librechat/data-schemas` was a hand-maintained subset that didn't include the new field — so `tsc` rejected `appConfig.endpoints.allowedAddresses`.

Add the field to `AppConfig['endpoints']` in `packages/data-schemas/src/types/app.ts` and forward it from the loaded config in `packages/data-schemas/src/app/endpoints.ts` so the runtime config carries the value. Update `initializeMCPs.spec.js` to expect the third positional argument (`allowedAddresses`) on the `createMCPServersRegistry` call.
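The matching semantics described for `isAddressAllowed` — pure, case-insensitive, bracket-stripped, literal (no substring) match — can be sketched as below. The function name carries a `Sketch` suffix to flag it as an illustrative assumption rather than the real export; note the private-IP scoping added by a later commit in this log is deliberately omitted here:

```typescript
// Hedged sketch of the literal-match contract, pre-scoping.
function stripBrackets(host: string): string {
  // '[::1]' → '::1'; anything without both brackets passes through unchanged.
  return host.startsWith('[') && host.endsWith(']') ? host.slice(1, -1) : host;
}

function isAddressAllowedSketch(
  hostnameOrIP: string,
  allowedAddresses: string[] = [],
): boolean {
  const candidate = stripBrackets(hostnameOrIP.trim()).toLowerCase();
  if (candidate.length === 0) {
    return false;
  }
  // Literal equality only — no prefix, suffix, or wildcard matching.
  return allowedAddresses.some(
    (entry) => stripBrackets(entry.trim()).toLowerCase() === candidate,
  );
}
```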
* 🩹 fix: Enforce allowedDomains Before allowedAddresses In isOAuthUrlAllowed

The initial implementation checked the address exemption first, so a URL whose hostname appeared in `allowedAddresses` would return true even when the admin had configured `allowedDomains` as a strict bound on OAuth endpoints. A malicious MCP server could advertise OAuth metadata, token, or revocation URLs at any address the admin had permitted for an unrelated reason (a self-hosted LLM at `127.0.0.1`, for example) and pass validation, expanding SSRF reach beyond the configured domain whitelist.

Reorder: when `allowedDomains` is set, treat it as authoritative — return true only if the URL matches a domain entry, otherwise fall through to false. The address exemption only applies when no `allowedDomains` is configured (mirrors how the downstream SSRF check in `validateOAuthUrl` consults `allowedAddresses`).

Add a regression test asserting that an `allowedAddresses` entry does not broaden a configured `allowedDomains` list.

Reported by chatgpt-codex-connector on PR #12933.

* 🩹 fix: Forward allowedAddresses To Remaining OAuth Callers

Two `MCPOAuthHandler` callers still used the pre-feature signatures and were silently dropping the new `allowedAddresses` argument:
- `api/server/routes/mcp.js` invoked `initiateOAuthFlow` with the old 5-argument shape, so OAuth flows initiated through the route handler ignored the registry's `getAllowedAddresses()` and would reject any metadata/authorization/token URL on a permitted private host.
- `api/server/controllers/UserController.js#maybeUninstallOAuthMCP` invoked `revokeOAuthToken` without the address exemption, so uninstalling an OAuth-backed MCP server on a permitted private host would fail at the revocation step even though the rest of the MCP connection path now permits it.

Both sites now read `allowedAddresses` from the registry alongside `allowedDomains` and forward it.

Reported by Copilot on PR #12933.
* 🩹 fix: Update Test Mocks And Assertions For OAuth allowedAddresses

The previous commit started passing `allowedAddresses` to `MCPOAuthHandler.initiateOAuthFlow` from `api/server/routes/mcp.js` and to `MCPOAuthHandler.revokeOAuthToken` from `api/server/controllers/UserController.js`, but the corresponding test files mocked the registry without `getAllowedAddresses` (causing `TypeError`s) and asserted the old positional shape on `toHaveBeenCalledWith`.

Update the mocks and assertions to match the new arity:
- `api/server/routes/__tests__/mcp.spec.js`: add `getAllowedDomains`/`getAllowedAddresses` to the registry mock and expect the additional positional args on `initiateOAuthFlow`.
- `api/server/controllers/__tests__/maybeUninstallOAuthMCP.spec.js`: add a `getAllowedAddresses` mock alongside the existing `getAllowedDomains` and seed it in `setupOAuthServerFound`.
- `api/server/controllers/__tests__/UserController.mcpOAuth.spec.js`: add `getAllowedAddresses` to the registry mock and expect the trailing `null` arg on the three `revokeOAuthToken` assertions.

* 🛡️ fix: Address Comprehensive Review — Scope allowedAddresses To Private IP Space

Major findings from the comprehensive PR review (severity → fix):

**CRITICAL — `validateOAuthUrl` SSRF fallback bypass.** When `allowedDomains` is configured and a URL fails the whitelist, the SSRF fallback in `validateOAuthUrl` was still passing `allowedAddresses` to `isSSRFTarget` / `resolveHostnameSSRF`, letting a malicious MCP server advertise OAuth endpoints at any address the admin had permitted for an unrelated reason. Suppress `allowedAddresses` in the fallback when `allowedDomains` is active — the address exemption is opt-in for the no-whitelist mode only.
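The ordering contract — a configured `allowedDomains` is authoritative and the address exemption only applies in the no-whitelist mode — can be sketched as a decision function. Everything here is an assumption: `oauthTargetAllowed` and its injected matcher callbacks are hypothetical stand-ins, and the real `isOAuthUrlAllowed` / `validateOAuthUrl` split is more involved:

```typescript
// Hedged sketch of the precedence rules after the CRITICAL fix.
function oauthTargetAllowed(
  hostname: string,
  isPrivateTarget: boolean, // result of the SSRF resolution for this host
  allowedDomains: string[],
  allowedAddresses: string[],
  domainMatch: (h: string, list: string[]) => boolean,
  addressMatch: (h: string, list: string[]) => boolean,
): boolean {
  if (allowedDomains.length > 0) {
    // Strict bound: allowedAddresses must NOT widen a configured whitelist,
    // and the SSRF fallback suppresses the exemption in this mode too.
    return domainMatch(hostname, allowedDomains);
  }
  if (isPrivateTarget) {
    // Private target: only an explicit address exemption permits it.
    return addressMatch(hostname, allowedAddresses);
  }
  return true; // public target, no whitelist configured
}
```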
**MAJOR — WebSocket transport SSRF check ignored exemptions.** The `constructTransport` WebSocket branch called `resolveHostnameSSRF(wsHostname)` without `this.allowedAddresses`, so a permitted private MCP server would pass `isMCPDomainAllowed` but be blocked at transport creation. Forward the exemption.

**Scope `allowedAddresses` to private IP space only (operator directive).** The exemption list is for permitting private/internal targets; it must not be a back-door to broaden trust to public destinations.
- Schema (`packages/data-provider/src/config.ts`): new `allowedAddressesSchema` rejects URLs (`://`), paths/CIDR (`/`), whitespace, and public IPv4/IPv6 literals at config-load time. Wired into `endpoints`, `mcpSettings`, and `actions`.
- Runtime (`packages/api/src/auth/domain.ts`): `isAddressAllowed` now drops public-IP candidates and public-IP entries on the match path — defense in depth so a misconfigured runtime list never grants exemption.
- Hot path (`packages/api/src/auth/agent.ts`): `buildSSRFSafeLookup` pre-normalizes the list into a `Set<string>` once at construction and applies the same scoping filter, so the connect-time DNS lookup is an O(1) Set membership check instead of a full re-iterate-and-normalize on every outbound request.

**Test coverage for the connect-time and OAuth-fallback paths.**
- `agent.spec.ts`: new describe block exercising `buildSSRFSafeLookup` and `createSSRFSafe*` with `allowedAddresses` — hostname-literal exemption, resolved-IP exemption, public-IP scoping, URL/CIDR/whitespace rejection, and the default no-list block.
- `handler.allowedAddresses.test.ts` (new): integration tests for `validateOAuthUrl` — covers both the no-domains-set "permit private" path and the strict-bound regression where `allowedAddresses` must NOT bypass `allowedDomains`.
**Documentation & cleanup.**
- `connection.ts` redirect SSRF check: explicit comment that `allowedAddresses` is intentionally NOT consulted for redirect targets (server-controlled, must not inherit the admin's exemption).
- `MCPConnectionFactory.test.ts`: replaced an `eslint-disable` with a proper `import { getTenantId } from '@librechat/data-schemas'`. The disable was added to make a pre-existing `require()` quiet — the cleaner fix is to use the existing top-level import.

Updated `MCPConnectionSSRF.test.ts` WebSocket SSRF assertions to match the new two-argument call shape (`hostname, allowedAddresses`).

* 🩹 fix: Require Absolute URL Before allowedAddresses Trust Bypass In isOAuthUrlAllowed

`parseDomainSpec` is lenient — it silently prepends `https://` to schemeless inputs so it can match patterns like bare `example.com`. That leniency leaked into `isOAuthUrlAllowed`'s new `allowedAddresses` short-circuit: a value like `10.0.0.5/oauth` (no scheme) would parse successfully via the prepended default, hit the address-exemption path, return `true`, and skip `validateOAuthUrl`'s strict `new URL(url)` parse-or-throw — only to fail later in OAuth discovery with a less clear runtime error.

Add a strict `new URL(url)` gate at the top of `isOAuthUrlAllowed`. Schemeless inputs now fall through to `validateOAuthUrl`'s explicit "Invalid OAuth <field>" rejection.

Tests added in both `auth/domain.spec.ts` (unit) and the OAuth handler integration spec (end-to-end).

Reported by chatgpt-codex-connector (P2) on PR #12933.

* 🛡️ fix: Address Follow-Up Comprehensive Review — Schema Tests, Shared Normalization, host:port

Auditing the second comprehensive review:

**F1 MAJOR — schema validation untested.** `allowedAddressesSchema` had zero coverage, so a regression in the three refinement stages or the three wiring locations (`endpoints` / `mcpSettings` / `actions`) would silently let invalid entries reach the runtime.
Added a dedicated `describe('allowedAddressesSchema')` block in `config.spec.ts` covering: valid private IPs (v4 + v6, including the previously-missed 192.0.0.0/24 range), accepted hostnames, all rejection categories (URLs, CIDR, paths, whitespace tabs/newlines, host:port, public IP literals), and full `configSchema.parse()` integration at each of the three nesting points. **F2 MINOR — `isPrivateIPv4Literal` divergence.** The schema reimpl in `packages/data-provider` was discarding the `c` octet, so the `192.0.0.0/24` (RFC 5736 IETF protocol assignments) range that the authoritative `isPrivateIPv4` accepts was being rejected with a misleading "public IP" error. Destructure `c` and add the missing range check; covered by the new schema tests. **F3 MINOR — DRY violation across `domain.ts` and `agent.ts`.** Both files had independent normalization implementations with a subtle whitespace-check divergence (`/\s/` vs `.includes(' ')`). Extracted the shared logic into a new `packages/api/src/auth/allowedAddresses.ts` module that both consumers import: - `normalizeAddressEntry(entry)` — single-entry shape check - `looksLikeHostPort(entry)` — host:port detector (used by F4) - `normalizeAllowedAddressesSet(list)` — pre-normalized Set for the connect-time hot path - `isAddressInAllowedSet(candidate, set)` — membership check that enforces private-IP scoping on the candidate Both `isAddressAllowed` (preflight) and `buildSSRFSafeLookup` (connect) now go through the same primitives; the whitespace divergence is gone. To break the import cycle (`allowedAddresses` needs `isPrivateIP`, `domain` previously owned it), extracted IP private-range detection into a leaf `auth/ip.ts` module. `domain.ts` re-exports `isPrivateIP` for backward compatibility with existing call sites. 
**F4 MINOR — `host:port` silently misclassified.** Entries like `localhost:8080` previously slipped through the URL/path guard, were mis-detected as IPv6, failed `isPrivateIP`, and were silently dropped with a misleading "public IP" schema error. Added an explicit `looksLikeHostPort` check with a clear error: "allowedAddresses entries must not include a port — list the bare hostname or IP only." Bare `::1`, `[::1]`, and other valid IPv6 literals are intentionally not matched (regex distinguishes by colon count and the bracketed `[ipv6]:port` form). **F5 MINOR — hostname-trust documentation gap.** Hostname entries short-circuit `resolveHostnameSSRF` before any DNS lookup — that's a deliberate design (admin trusts the name) but it means the exemption follows whatever the name resolves to at runtime. Added an explicit note in `librechat.example.yaml` for both `mcpSettings.allowedAddresses` and `endpoints.allowedAddresses`: "a hostname entry trusts whatever IP that name resolves to. Only list hostnames whose DNS you control. Prefer literal IPs when you can." **F6** (8 positional params) is flagged for follow-up; refactor to an options object is a breaking-API change deferred to a separate PR. **F7** (redirect/WebSocket asymmetry, NIT, conf 40) — skipping; the existing inline comment is sufficient. * 🧹 chore: Address Follow-Up NITs — Import Order And Mirror-Function Naming Three NITs from the latest comprehensive review: **NIT #1 (conf 85) — local import order.** AGENTS.md requires local imports sorted longest-to-shortest. Both `domain.ts` and `agent.ts` had `./ip` (shorter) before `./allowedAddresses` (longer). Swapped. **NIT #2 (conf 60) — missing cross-reference.** The schema-side `isHostPortShape` in `packages/data-provider/src/config.ts` had no note pointing at the canonical runtime mirror. 
Added a JSDoc paragraph explaining the mirror relationship and why a local copy exists (the data-provider package can't import from `@librechat/api` without creating a circular dependency). **NIT #3 (conf 50) — naming inconsistency.** Renamed `isHostPortShape` → `looksLikeHostPort` so the schema mirror matches the runtime helper exactly. Kept as a separate function (not a shared import) for the same circular-dependency reason; the matching name makes it obvious they should stay in lockstep. |
1b79e0b785
🧬 chore: Align LibreChat With Agents LangChain Upgrade (#12922)
* 🔧 chore: Update dependencies in package-lock.json and package.json
  - Bump version of @librechat/agents to 3.1.75-dev.0 in multiple package.json files.
  - Upgrade various AWS SDK and Smithy dependencies to their latest versions in package-lock.json for improved stability and performance.
* 🔧 chore: Update AWS SDK and Smithy dependencies in package-lock.json
  - Bump version of @aws-sdk/client-bedrock-runtime to 3.1041.0 and update related dependencies for improved performance and stability.
  - Upgrade various AWS SDK and Smithy packages to their latest versions, ensuring compatibility and enhanced functionality.
* chore: Align LibreChat with agents LangChain upgrade
  - Route LangChain imports through @librechat/agents facade exports
  - Update @librechat/agents to 3.1.75-dev.1 and remove direct LangChain deps
  - Normalize nullable agent model params and API key override typing
  - Update Google thinking config typing for newer LangChain packages
  - Refresh targeted audit-related dependency overrides
* chore: Add Jest types for API specs
* test: Fix LangChain upgrade CI specs
* test: Exercise agents env facade
* fix: Clean up TS preview diagnostics
* fix: Address Codex review feedback
4e45e8e17c
🧹 fix: Clear MCP OAuth Tokens On Revoke
Fixes #12912.

- Clear stored MCP OAuth tokens and flow state on revoke cleanup-only paths.
- Keep provider revocation best-effort when token and client metadata are available.
- Add controller and function coverage for stale metadata, missing config, and cleanup failure paths.
f3e1201ae7
📌 fix: Stabilize Agent Prompt Cache Prefix (#12907)
* fix: stabilize agent prompt cache prefix
* chore: refresh agents sdk lockfile integrity
* test: format agent memory assertion
* test: type agent context fixtures
* fix: preserve MCP instruction precedence
* fix: reuse resolved conversation anchor
* fix: keep resumable startup immediate
5b5e2b0286
🛡️ fix: Handle MCP Tool Cache Lookup Failures (#12910)
* Handle MCP tool cache lookup failures
* Harden MCP cached tool lookup
* Cover full MCP tool cache outage
* Guard MCP tool cache store lookup
74307e6dcc
💭 feat: Require Explicit Auto-agent Enablement for Memories (#12886)
65990a33e9
📥 fix: Resolve Imported-Conversation Default Model From Runtime modelsConfig (#12885)
* 📥 fix: Use Endpoint-Aware Default Model on Imported Conversations

Claude conversations imported from claude.ai's data export display "gpt-4o-mini" in the chat UI until the page is refreshed, and any attempt to send a message before refreshing fails with "The model 'gpt-4o-mini' is not available for Anthropic."

Root cause: ImportBatchBuilder.finishConversation() unconditionally defaulted the saved conversation's `model` field to openAISettings.model.default, regardless of `this.endpoint`. Claude exports don't carry a model name, so every imported Claude conversation landed with endpoint=anthropic but model=gpt-4o-mini.

Fix: pick the default based on `this.endpoint` via a small lookup (openAI -> gpt-4o-mini, anthropic -> claude-3-5-sonnet-latest), keeping the existing OpenAI default as the fallback for unknown endpoints.

Fixes #12844

* 🪄 refactor: Resolve Import Default Model From `modelsConfig`

Replace the hardcoded per-endpoint default lookup added in the previous commit with a runtime resolver that consults the same models config the chat UI uses (`getModelsConfig` in ModelController -> `loadDefaultModels` + `loadConfigModels`). This way an imported conversation defaults to a model the LibreChat instance has actually configured / discovered for the endpoint, instead of a hardcoded constant that may not exist on this deployment.

Resolution order:
1. First non-empty model in `modelsConfig[endpoint]`.
2. Per-endpoint hardcoded fallback (anthropic/openAI settings) if the runtime config is empty for the endpoint or `getModelsConfig` throws.
3. `openAISettings.model.default` if even the per-endpoint fallback is missing (unknown endpoint).

`importBatchBuilder.finishConversation` now accepts an optional `defaultModel` argument; each importer resolves it once at the top via `resolveImportDefaultModel({ endpoint, requestUserId, userRole })` and threads it through. ChatGPT message-level model selection also falls back to the resolved default before the hardcoded gpt-4o-mini.
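The three-step resolution order above can be sketched as a small pure function. The fallback constants and the function name below are illustrative (the real resolver also takes user/role context and handles `getModelsConfig` throwing):

```javascript
// Hypothetical per-endpoint fallbacks mirroring the ones named in the commit.
const FALLBACKS = {
  openAI: 'gpt-4o-mini',
  anthropic: 'claude-3-5-sonnet-latest',
};

function resolveDefaultModel({ endpoint, modelsConfig }) {
  // 1. First non-empty model the runtime config actually exposes.
  const configured = (modelsConfig?.[endpoint] ?? []).find(
    (m) => typeof m === 'string' && m.trim() !== '',
  );
  if (configured) return configured;
  // 2. Per-endpoint hardcoded fallback.
  if (FALLBACKS[endpoint]) return FALLBACKS[endpoint];
  // 3. Historical OpenAI default for unknown endpoints.
  return FALLBACKS.openAI;
}

console.log(resolveDefaultModel({
  endpoint: 'anthropic',
  modelsConfig: { anthropic: ['claude-3-7-sonnet'] },
})); // 'claude-3-7-sonnet' (step 1 wins)
console.log(resolveDefaultModel({ endpoint: 'anthropic', modelsConfig: {} }));
// 'claude-3-5-sonnet-latest' (step 2)
console.log(resolveDefaultModel({ endpoint: 'mystery', modelsConfig: {} }));
// 'gpt-4o-mini' (step 3)
```

The key property is that step 1 consults the same runtime config the chat UI renders, so an imported conversation can never default to a model the deployment does not actually expose.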
4a5fc701d2
📂 fix: Preserve Nested Skill Paths in Code-Env Uploads (#12877)
* fix(code): preserve code env upload filepaths * chore: Reorder import statements in crud.js |
61b9b1daa7
🩹 fix(SSE): Treat responseCode === 0 as Transport Failure, Not Server Error (#12834)
* fix(sse): treat responseCode===0 as transport failure, not server error
When a long-running model response (e.g. gpt-5.4 with web_search:true)
takes longer than the browser's idle connection timeout, the SSE transport
drops and sse.js fires an error event with responseCode=0 and e.data set
to the raw response buffer (non-JSON SSE text).
The previous guard `!responseCode` is truthy for both 0 (transport drop)
and undefined (genuine server-sent error event), so the client incorrectly
entered the server-error branch, tried to JSON.parse raw SSE text, logged
"Failed to parse server error", and showed the user a red error banner --
even though the backend continued processing and delivered the final answer
seconds later.
Fix 1 (client): change guard from `!responseCode` to `responseCode == null`
so that only undefined/null (no HTTP status at all) triggers the server-error
parse path. responseCode===0 now correctly falls through to the reconnect path.
Fix 2 (backend): after res.flushHeaders() the response is already committed
as SSE. The fallback branch that wrote res.status(404).json() was an HTTP/SSE
protocol violation. Replace with an SSE-conformant event:error frame + res.end().
* fix(sse): use onError helper on subscribe failure + add regression tests
Replace silent res.end() with onError('Failed to subscribe to stream')
so the client receives a parseable SSE error event instead of a stream
that closes with no signal. The previous res.end() left the UI stuck
in "submitting" state because no error/abort/final event ever fired.
Also adds two missing test cases for the responseCode guard change:
- responseCode === 0 with raw SSE buffer data must NOT call errorHandler
(transport failure should reconnect, not display garbage)
- responseCode == null with JSON error data MUST call errorHandler
(server-sent error events should still surface to the user)
---------
Co-authored-by: Danny Avila <danny@librechat.ai>
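The guard change above comes down to JavaScript truthiness: `!responseCode` is true for both `0` and `undefined`, while `responseCode == null` matches only `null`/`undefined`. A minimal sketch (the branch labels are illustrative, not the client's actual function names):

```javascript
// Sketch of the fixed guard: only a missing responseCode (a genuine
// server-sent error event with no HTTP status) enters the server-error
// parse path; responseCode === 0 is a transport drop and reconnects.
function classifySSEError(responseCode) {
  if (responseCode == null) {
    // undefined or null: no HTTP status at all -> parse as server error
    return 'server-error';
  }
  // 0 (transport drop) and real HTTP statuses fall through to reconnect/HTTP handling
  return 'reconnect-or-http';
}

// The old guard `!responseCode` conflated both cases:
console.log(!0, !undefined);              // true true (both hit the error branch)
console.log(classifySSEError(0));         // 'reconnect-or-http'
console.log(classifySSEError(undefined)); // 'server-error'
console.log(classifySSEError(502));       // 'reconnect-or-http'
```

This is why the buggy version tried to `JSON.parse` the raw SSE buffer on an idle-timeout drop: `responseCode` was `0`, `!0` is `true`, and the transport failure was routed into the server-error parse path.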
84043432a5
🧹 fix: Graceful MCP OAuth Revoke Cleanup When Tokens Are Missing (#12825)
* fix: graceful MCP OAuth revoke cleanup when tokens are missing (#12754)

`maybeUninstallOAuthMCP` in `api/server/controllers/UserController.js` aborts before the DB-delete and flow-state cleanup steps whenever `MCPTokenStorage.getTokens` throws `ReauthenticationRequiredError` — which is exactly what happens when a user clicks "Revoke" on an MCP server whose backend is already dead and whose refresh token is gone. The resulting error is both surfaced to the log as a red line and, more importantly, leaks the DB token row and OAuth flow state.

Wrap the token retrieval in try/catch following the same best-effort pattern already used for the two `revokeOAuthToken` calls. On `ReauthenticationRequiredError`, skip revocation silently (info log) and continue to the cleanup steps. On any other unexpected error, log a warning and continue — cleanup must always run.

Exported `maybeUninstallOAuthMCP` for direct unit testing and added `api/server/controllers/__tests__/maybeUninstallOAuthMCP.spec.js` with 8 cases: early-return guards (non-MCP key, non-OAuth server, missing client info), happy path (both tokens revoked + cleanup), both failure-to-retrieve paths (ReauthenticationRequiredError and arbitrary error — cleanup still runs in both), single-token path, and revocation-call failures (cleanup still runs).

Fixes #12754.

* test: use instanceof check against real ReauthenticationRequiredError

Follow-up to the previous commit on this branch. Two changes:

1. `UserController.maybeUninstallOAuthMCP` now checks `error instanceof ReauthenticationRequiredError` using the real class imported from `@librechat/api`, instead of comparing `error?.name === 'ReauthenticationRequiredError'`. The name-string check matched any unrelated error that happened to have the same `.name`; the `instanceof` check is a proper identity test.
2. The accompanying spec's jest mock for `@librechat/api` now exposes a `ReauthenticationRequiredError` class, and the test imports it from that mock so the `instanceof` comparison in the production code holds during the test. Without this, the two "skips revocation ... still runs cleanup" tests threw `TypeError: Right-hand side of 'instanceof' is not an object` because the mock left the class undefined.

All 8 tests in the spec pass.
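The difference between the name-string check and `instanceof` can be shown in a few lines. The class below is a local stand-in for the real `ReauthenticationRequiredError` exported by `@librechat/api`:

```javascript
// Stand-in for the real class from @librechat/api.
class ReauthenticationRequiredError extends Error {}

const real = new ReauthenticationRequiredError('refresh token gone');

// Any unrelated error can carry the same .name string...
const impostor = new Error('unrelated failure');
impostor.name = 'ReauthenticationRequiredError';

// ...so the name comparison yields a false positive, while instanceof
// checks prototype identity and rejects it.
console.log(impostor.name === 'ReauthenticationRequiredError'); // true (false positive)
console.log(impostor instanceof ReauthenticationRequiredError); // false
console.log(real instanceof ReauthenticationRequiredError);     // true
```

This is also why the jest mock must export the same class object the production code imports: `instanceof` compares against a specific constructor, and if the mock leaves it `undefined`, the check itself throws.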
f2df0ea62b
🛡️ fix: Filter user_provided Sentinel in Tool Credential Loading (#12840)
When GOOGLE_KEY=user_provided is set as an endpoint config, the loadAuthValues() function in credentials.js would pass the literal string 'user_provided' to tools via the || fallback chain. This caused Gemini Image Tools to fail at runtime with an invalid API key error, as initializeGeminiClient() received the sentinel value instead of a real key.

The fix aligns loadAuthValues() with checkPluginAuth() in format.ts, which already correctly excludes user_provided and empty/whitespace values. Now loadAuthValues() skips these values and continues to the next field in the fallback chain or falls through to user DB values.

Added regression tests covering:
- user_provided sentinel is skipped, DB value used instead
- Fallback chain continues past user_provided to next field
- Empty and whitespace env values are skipped
- Real env values are returned correctly
- Optional fields with sentinel values handled gracefully
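The filtering rule can be sketched as a small helper. The function name is illustrative (the real logic lives inside `loadAuthValues`); what it shows is why a plain `||` chain fails — `'user_provided'` is truthy, so it short-circuits the chain:

```javascript
// Sketch: treat the 'user_provided' sentinel and empty/whitespace values
// as absent, so the chain continues to the next field or the DB value.
function firstUsableValue(...candidates) {
  for (const value of candidates) {
    if (typeof value !== 'string') continue;
    if (value.trim() === '' || value === 'user_provided') continue;
    return value;
  }
  return undefined;
}

// A plain || chain would stop at the truthy sentinel:
console.log('user_provided' || 'real-key-from-db'); // 'user_provided' (the bug)

// The filtered chain skips it:
console.log(firstUsableValue('user_provided', '   ', 'real-key-from-db')); // 'real-key-from-db'
console.log(firstUsableValue('user_provided')); // undefined (caller falls back gracefully)
```

The same exclusion already existed in `checkPluginAuth()` in format.ts; the fix simply applies it at the other call site.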
89bf2ab7b4
💎 fix: Stop Double-Counting Cache Tokens for Gemini/OpenAI in Usage Spend (#12868)
* 💎 fix: Stop Double-Counting Cache Tokens for Gemini/OpenAI in Usage Spend (#12855)

Different providers report `usage_metadata.input_tokens` with different semantics:
- Anthropic / Bedrock: `input_tokens` EXCLUDES cache; cache reads/writes arrive separately and must be added to get the total prompt size.
- Gemini / OpenAI: `input_tokens` ALREADY INCLUDES cached tokens (Google's `promptTokenCount`, OpenAI's `prompt_tokens`). Their `input_token_details.cache_*` are subsets of `input_tokens`.

`recordCollectedUsage` treated both schemes as additive, so for cache-hit requests on Gemini/OpenAI it added cache tokens on top of an `input_tokens` value that already contained them — overcharging users by the cache_hit_rate (e.g., ~67% cache hit ≈ 1.67x overcharge). This matches the issue reporter's GCP billing comparison.

Adds a small `splitUsage` helper that classifies the provider by model name and computes `inputOnly` (the non-cached portion) plus the all-inclusive `totalInput` for both the spend math and the returned `input_tokens` summary. The helper defaults to additive semantics (the historical behavior) so unknown providers are unaffected.

Updates existing OpenAI-shaped tests that previously asserted the buggy additive math, and adds Gemini regression tests using the exact numbers from the issue report (input=11125, cache_read=7441 → input=3684). Anthropic / Bedrock paths remain bit-identical to before.

* 🔧 refactor: Classify Cache-Token Semantics by Provider, Not Model Name

Follows up the previous commit. Replaces a model-name regex (`gemini|gpt|o[1-9]|chatgpt`) with an explicit `Providers` enum lookup keyed off the `usage.provider` field — `UsageMetadata.provider` already exists in `IJobStore.ts` but was never being populated.
- `callbacks.js#ModelEndHandler` now attaches `usage.provider` from `agentContext.provider` alongside `usage.model`.
- `usage.ts` uses a `SUBSET_PROVIDERS` set (`openAI`, `azureOpenAI`, `google`, `vertexai`, `xai`, `deepseek`, `openrouter`, `moonshot`) backed by the canonical `Providers` enum from `librechat-data-provider`.
- `xai`, `deepseek`, `openrouter`, `moonshot` extend `ChatOpenAI`, so they inherit subset semantics (verified in node_modules).
- Defaults to additive when `usage.provider` is missing, so the title flow (which doesn't propagate provider) and any pre-this-PR usage entries keep their existing behavior.

Tests: switch fixtures from model-name signaling to explicit `provider` field, plus a Vertex AI case and a "missing provider" fallback case.
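The two accounting schemes can be sketched with the exact numbers from the issue report. This is a simplified model of the `splitUsage` helper (field names and the provider set are trimmed down; the real helper reads `usage_metadata` shapes):

```javascript
// Providers whose input_tokens already include cached tokens (subset
// semantics); everything else defaults to the historical additive math.
const SUBSET_PROVIDERS = new Set(['openAI', 'azureOpenAI', 'google', 'vertexai']);

function splitUsage({ provider, input_tokens = 0, cache_read = 0 }) {
  if (SUBSET_PROVIDERS.has(provider)) {
    // Cache tokens are a subset of input_tokens: subtract to get the
    // non-cached portion; the total is input_tokens as reported.
    return { inputOnly: input_tokens - cache_read, totalInput: input_tokens };
  }
  // Additive semantics (Anthropic/Bedrock and unknown providers):
  // cache arrives separately and must be added to get the total.
  return { inputOnly: input_tokens, totalInput: input_tokens + cache_read };
}

// Issue-report numbers: Gemini reports input=11125 including 7441 cached.
console.log(splitUsage({ provider: 'google', input_tokens: 11125, cache_read: 7441 }));
// { inputOnly: 3684, totalInput: 11125 }

// Anthropic reports the same request as input=3684 plus cache_read=7441.
console.log(splitUsage({ provider: 'anthropic', input_tokens: 3684, cache_read: 7441 }));
// { inputOnly: 3684, totalInput: 11125 }
```

Both providers describe the same prompt; only the reporting convention differs. Treating the subset scheme as additive is exactly the double-count: 11125 + 7441 instead of 11125.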
46a86d849f
🛂 fix: Skip Inherited / Mark Skill Files Read-Only in Code-Env Pipeline (#12866)
* 🛂 fix: Skip Re-Download of Inherited Code-Env Files (No More 403 Storms)

When a bash/code-interpreter call lists or operates on inputs the user already owns (skill files primed via primeInvokedSkills, files inherited from a prior session), codeapi echoes those files back in the tool result with `inherited: true`. We were treating every entry as a generated artifact and calling processCodeOutput on each, which:
1. Hit `/api/files/code/download/<session_id>/<file_id>` with the user's session key. Skill files are uploaded under the skill's entity_id, so every download 403'd — producing dozens of "Unauthorized download" log lines per turn.
2. Surfaced those inputs as ghost file chips in the UI even though they were never generated by the run.
3. Wasted a download round-trip even when no auth boundary was crossed — the file is already persisted at its origin.

Fix: skip files where `file.inherited === true` in all three artifact-files loops (`tools.js`, `createToolEndCallback`, and `createResponsesToolEndCallback`). Skill files remain available to subsequent calls via primeInvokedSkills / session inheritance — we just don't redundantly re-download them.

Pairs with the codeapi-side change that adds the `inherited` flag.

* 🔒 feat: Mark Skill Files as `read_only` During Code-Env Priming

Pairs with the codeapi `read_only` upload flag (ClickHouse/ai#1345). When LibreChat primes a skill into the code-env, every file in the batch (SKILL.md plus all bundled scripts/schemas/docs) is now uploaded with `read_only: true`. Codeapi seals these inputs at the filesystem layer (chmod 444) and the walker echoes the original refs as `inherited: true` regardless of whether sandboxed code modified the bytes on disk.

Without this, the previous PR's `inherited` skip handled only the unchanged case. A modified skill file (pip writing pyc near a .py, a script accidentally truncating LICENSE.txt, etc.) still flowed through the modified-input branch on codeapi, got a fresh user-owned file_id, uploaded as a "generated" artifact, and surfaced in the UI as a chip the user couldn't actually authorize a download for.

Changes:
- `api/server/services/Files/Code/crud.js`: `batchUploadCodeEnvFiles({ ..., read_only })` forwards the flag as a multipart form field. Default `false` preserves existing behavior for user-attached files and prior-session inheritance.
- `packages/api/src/agents/skillFiles.ts`: type signature gains `read_only?: boolean`; `primeSkillFiles` passes `true`.
- `packages/api/src/agents/skillFiles.spec.ts`: assert the upload call carries `read_only: true`.

The flag is intentionally not skill-specific. Any future infrastructure-input flow (system fixtures, cached datasets, etc.) can opt in the same way.
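The skip itself is a one-line filter applied in each artifact-files loop. A minimal sketch (the function name and file shapes are illustrative; the real loops live in `tools.js` and the two tool-end callbacks):

```javascript
// Sketch: only non-inherited entries are generated artifacts worth
// downloading; `inherited: true` marks inputs echoed back by codeapi.
function selectGeneratedArtifacts(files) {
  return files.filter((file) => file.inherited !== true);
}

const toolResult = [
  { id: 'skill-md', inherited: true }, // primed skill file, echoed back
  { id: 'plot-png' },                  // actually generated this run
];
console.log(selectGeneratedArtifacts(toolResult).map((f) => f.id)); // ['plot-png']
```

The strict `!== true` comparison keeps behavior unchanged for older codeapi responses that omit the flag entirely.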
c9dee962e7
📂 fix: Preserve Nested Folder Paths for Code-Execution Artifacts (#12848)
* 📂 fix: Preserve Nested Folder Paths for Code-Execution Artifacts When codeapi reports a generated file at a nested path (`a/b/file.txt`), `processCodeOutput` was running it through `sanitizeFilename` — which calls `path.basename()` and then collapses `/` to `_`. The DB row ended up with `filename: "file.txt"`, `primeFiles` shipped that flat name back to the next sandbox session, and `cat /mnt/data/a/b/file.txt` 404'd. Fix: split the sanitizer into two helpers in `packages/api/src/utils/files.ts`: - `sanitizeArtifactPath` — segment-wise sanitize while preserving `/`. Falls back to basename on `..` traversal, absolute paths, and other malformed inputs. The DB record uses this so the next prime() can recreate the nested path in the sandbox. - `flattenArtifactPath` — encode `/` as `__` for the local `saveBuffer` strategies, which key by single-component filename and would otherwise create unintended subdirectories under uploads/. `process.js` is updated to use both: DB filename keeps the path, storage key flattens. `claimCodeFile` is also keyed on `safeName` so the (filename, conversationId) compound key stays consistent with the record `createFile` writes. Tests: +13 unit tests in `files.spec.ts` (sanitizeArtifactPath table, flattenArtifactPath round-trip). +1 integration test in `process.spec.js` asserting the DB-row vs storage-key split for a nested path. Updated `process-traversal.spec.js` to mock the new helpers. 64 pass / 0 fail across `Files/Code/`; 36 pass / 0 fail in `packages/api/src/utils/files.spec.ts`. Companion: ClickHouse/ai#1327 — the codeapi-side counterpart that stops phantom file IDs from reaching this code path in the first place. These two are independent but the matplotlib bug is most cleanly resolved when both ship. * 🛡️ fix: Re-add 255-char per-segment cap in sanitizeArtifactPath (codex review P2) `sanitizeArtifactPath` dropped the 255-char basename cap that `sanitizeFilename` enforces. 
Long artifact names then flowed unbounded into `processCodeOutput`'s storage key (`${file_id}__${flatName}`) and tripped `ENAMETOOLONG` on filesystems that enforce `NAME_MAX` — saveBuffer fails, and the file falls back to a download URL instead of persisting / priming. This was a regression specifically for flat filenames that the original `sanitizeFilename` would have truncated safely. Re-add the cap as a per-path-component limit so it applies cleanly to both flat and nested paths: - Leaf segment: extension-preserving truncation, matching `sanitizeFilename`'s shape (`<truncated-stem>-<6 hex>.<ext>`). - Non-leaf (directory) segments: plain truncate-and-disambiguate (`<truncated-name>-<6 hex>`); directory names don't carry semantic extensions worth preserving. - Defensive fallback when `path.extname` returns a pathologically long "extension" (e.g. `_.aaaa…aaa` after the dotfile underscore prefix rewrite turns a long hidden file into a non-dotfile with a 300-char "extension"): collapse to whole-segment truncation rather than leaving the cap unmet. +6 unit tests covering: long leaf (regression case), long leaf under a preserved directory, long non-leaf segment, deeply nested mixed-length, exact-255 boundary (no truncation), and the dotfile + truncation interaction. * 🛡️ fix: Cap flattened storage key against NAME_MAX in processCodeOutput (codex review P1) Per-segment caps on the path-preserving form aren't enough. Once segments are joined with `__` for the storage key, deeply-nested or moderately long paths can still produce a flat form that overflows once `${file_id}__` is prepended — `${file_id}__a__b__c.csv` for a 3-level 100-char-each path is ~344 chars, well past filesystem NAME_MAX (255). saveBuffer then trips ENAMETOOLONG and falls back to a download URL, and the artifact never persists / primes. `flattenArtifactPath` gets an optional `maxLength` parameter. 
When set, the function truncates the flat form to fit, preserving the leaf extension with the same disambiguating-hex-suffix shape sanitizeFilename uses. Default (`undefined`) keeps existing call sites uncapped — the cap is opt-in for callers that are actually building a filesystem key. Pathologically long "extensions" from `path.extname` (e.g. `.aaaa…aaa`) fall back to whole-key truncation rather than leaving the cap unmet. processCodeOutput composes the storage key after `file_id` is known and passes `255 - file_id.length - 2` as the budget so the full `${file_id}__${flatName}` string fits in one filesystem path component. +7 unit tests in files.spec.ts: - Pass-through when no maxLength supplied (cap is opt-in). - Pass-through when flat form fits within maxLength. - Truncation with leaf extension preserved (the regression case). - Leaf-only overflow with extension preservation. - Pathological long-extension fallback (whole-key truncation). - No-extension stem truncation. - Boundary equality (off-by-one guard). +1 integration test in process.spec.js: processCodeOutput passes the file_id-aware budget (`255 - file_id.length - 2`) to flattenArtifactPath. 114/114 across files.spec.ts + Files/Code (49 + 65). * 🛡️ fix: Determinize + clamp artifact-path truncation (codex review P2 ×2) Two follow-ups to Codex review on the path/flat-key cap: 1. **Deterministic truncation suffixes**. The previous helpers used `crypto.randomBytes(3)` for the disambiguator, mirroring `sanitizeFilename`'s shape. That made the truncated form non- deterministic: a re-upload of the same long filename would compute a *different* storage key, orphaning the previous on-disk file under the reused `file_id` returned by `claimCodeFile`. New `deterministicHexSuffix(input)` helper hashes the input with SHA-256 and takes the first 6 hex chars. Same input → same suffix (storage key stable across re-uploads); different inputs sharing a truncation prefix still get different suffixes (collision avoidance). 
24 bits ≈ 16M values is collision-safe for our scale (single-digit artifacts per turn per (filename, conversationId) bucket). Applied to `truncateLeafSegment`, `truncateDirSegment`, and `flattenArtifactPath` — every truncation site in the new helpers. `sanitizeFilename` (pre-existing) is intentionally left alone; its tests rely on the random-bytes mock and it's outside this PR's scope. 2. **Final clamp on flattenArtifactPath result**. The old `Math.max(1, maxLength - ext.length - 7)` floor could let the result slip past `maxLength` when the extension was nearly as large as the budget (e.g. `maxLength=5`, `ext=".txt"`: budget computed as 0, but result was `-<6 hex>.txt` = 11 chars). Drop the `Math.max(1, …)` floor and add a final `truncated.slice(0, maxLength)` so the contract holds for any input. Also short-circuit `maxLength <= 0` to `''` for pathological budgets. Tests updated to compute the expected hash inline (the existing `randomBytes` mock doesn't apply to the new code path), plus 4 new regression tests: - sanitizeArtifactPath: same input → same output, different inputs → different outputs (determinism + collision avoidance). - flattenArtifactPath: same input → same output, different inputs sharing a truncation prefix → different outputs. - flattenArtifactPath: clamp holds when ext.length > maxLength - 7. - flattenArtifactPath: returns '' for maxLength <= 0. 53 unit tests pass. 65 integration tests pass. * 🛡️ fix: Total-path cap + basename for classifier (codex P2 + comprehensive review) Four follow-ups from the latest reviews on this PR: 1. **Codex P2: total-path cap in sanitizeArtifactPath**. Per-segment caps weren't enough — a deeply nested path (3+ at-cap segments) can still produce a joined form past Mongo's 1024-byte indexed-key limit (4.0 and earlier reject; later versions configurable). Added `ARTIFACT_PATH_TOTAL_MAX = 512` and a leaf-only fallback when the joined form exceeds it. 
Same shape as the absolute-path / `..`-traversal fallbacks above; the leaf is already segment-capped to ≤255, so the final result stays within bounds. 2. **Codex P2: pass basename to classifier/extractor in process.js**. With the path-preserving sanitizer, `safeName` can now be a nested string like `reports.v1/Makefile`. The classifier's `extensionOf` reads that as `v1/Makefile` (the slice after the dot in the directory name) and the bare-name branch rejects because it sees a `.` anywhere. Result: extensionless artifacts under dotted folders (Makefile, Dockerfile, etc.) get misclassified as `other` and skip text extraction. Pass `path.basename(safeName)` to both `classifyCodeArtifact` and `extractCodeArtifactText` so classification matches what the old flat-name flow produced. 3. **Review nit: drop dead `sanitizeFilename` mock in process.spec.js**. process.js no longer imports `sanitizeFilename`; the mock was misleading dead code. 4. **Review nit: rename misleading `'embedded parent traversal'` test**. `path.posix.normalize('a/../escape.txt')` resolves to `escape.txt` which goes through the normal segment-split path, not the `sanitizeFilename` fallback. Test name now says "resolves embedded parent traversal via path normalization" to match the actual code path. +3 regression tests: - sanitizeArtifactPath falls back to leaf-only when joined > 512. - sanitizeArtifactPath keeps nested path within the 512 budget. - process.spec: passes basename (`Makefile` from `reports.v1/Makefile`) to classifyCodeArtifact + extractCodeArtifactText. Existing "caps every segment in a deeply-nested path" test now uses 2 segments (not 3) so the joined form stays under the new total cap; the 3-segment scenario is covered by the new fallback test instead. 55 unit + 66 integration = 121/121 pass. * 📝 docs: Correct sanitizeArtifactPath JSDoc to match actual schema index Two doc-only fixes from the latest comprehensive review (both NIT): 1. **Index field list was wrong**. 
JSDoc claimed the compound unique index was `{ file_id, filename, conversationId, context }`. The actual index in `packages/data-schemas/src/schema/file.ts:92-95` is `{ filename, conversationId, context, tenantId }` with a partial filter for `context: FileContext.execute_code`. The cap rationale (Mongo 4.0 indexed-key limit) is correct and unchanged; just the field list was wrong. Added the schema file path so future readers can find the source of truth. 2. **Trade-off acknowledgement**. The reviewer noted that the leaf-only fallback loses directory structure, which means the model's `cat /mnt/data/<deep>/<path>/file.txt` would 404 on the pathological-depth case — partially re-introducing the original flat-name bug for >512-char paths. This is intentional (DB write failure is strictly worse than losing structure), but the trade-off wasn't called out explicitly in the JSDoc. Added a paragraph acknowledging it and noting that the cap is monotonically better than the pre-PR behavior, where ALL artifacts were treated this way regardless of depth. No code or test changes — pure JSDoc correction. Tests still 55/0. * 🛡️ fix: Disambiguate sanitized artifact names to keep claimCodeFile keys unique (codex P2) `sanitizeArtifactPath` is not injective — multiple raw inputs can collapse onto the same regex-and-normalize output. Codex's example: `reports 2026/out.csv` and `reports_2026/out.csv` both sanitize to `reports_2026/out.csv`. `claimCodeFile` is keyed on the schema's compound unique `(filename, conversationId, context, tenantId)` index, so the later upload silently matches the earlier record and overwrites the first artifact's bytes via the reused `file_id` — a single conversation can drop files when both names are valid in the sandbox. This collision space isn't strictly new — pre-PR `sanitizeFilename` (basename-only) had the same property — but the path-preserving form gives us enough information to fix it for the first time. 
**Fix.** When character-level sanitization changed something (regex replacement, path normalization, dotfile prefix, empty-segment collapse), embed a deterministic SHA-256 prefix of the **raw input** in the leaf segment via the new `embedDisambiguatorInLeaf` helper. Same raw input → same safe form (idempotent for re-uploads); different raw inputs that would have collided → different safe forms. **Why "character-level"** specifically: - The disambiguator fires when `preCapJoined !== inputName` (post-regex + dotfile + empty-segment, BUT pre-truncation). - Truncation alone is already disambiguated by `truncateLeafSegment`'s own seg-hash; firing the input-hash branch on truncation would just stack a second hash for no collision-avoidance benefit and clutter human-readable filenames. **Three known collision shapes covered:** 1. `out 1.csv` vs `out_1.csv` (and `out@1.csv` vs `out#1.csv`, etc.) 2. `dir//file.txt` vs `dir/file.txt` (empty-segment collapse) 3. `.x` vs `_.x` (dotfile-prefix step) **Disambiguator + truncation interaction:** for very long mutated leaves, `truncateLeafSegment` caps at 255 first, then `embedDisambiguatorInLeaf` re-trims to insert the input hash. The seg-hash from the first pass is replaced by the input-hash from the second pass — that's intentional (input-hash is the load-bearing collision-avoidance suffix; seg-hash was only ever decorative once the input-hash exists). Final clamp ensures the result never exceeds `ARTIFACT_PATH_SEGMENT_MAX` regardless of input. **Disambiguator + total-cap fallback:** when joined > 512, we fall back to the leaf-only form. The leaf has already had the disambiguator embedded, so collision avoidance survives the pathological-depth case. **`embedDisambiguatorInLeaf`** uses `dot <= 1` to detect "no real extension" (covers extensionless names AND dotfile-prefixed leaves like `_.hidden` — without this, `_.hidden` would split as stem `_` + ext `.hidden` and produce the awkward `_-<hash>.hidden`). 
**Updated 5 existing tests** that asserted the old collision-prone outputs
— they now verify the disambiguator-included form. The
character-level-only firing rule was load-bearing here: tests for "clean
inputs (no mutation)" and "long inputs (truncation only)" still pass
without any disambiguator clutter.
**+7 regression tests** in a new `collision avoidance (Codex review P2)`
describe block:
1. Different raw inputs sanitizing to the same form get distinct safes
2. Whitespace-vs-underscore in a directory segment
3. Dotfile-prefix collision
4. Idempotency: same raw → same safe across calls
5. Clean inputs skip the disambiguator (cosmetic guarantee)
6. Disambiguator survives leaf truncation (long mutated leaf)
7. Disambiguator survives total-cap fallback (pathological depth)
62 unit + 66 integration = 128/128 pass.
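The embedding idea above can be sketched as follows. This is a hypothetical reconstruction, not the PR's code: the helper name comes from the commit, but `SEGMENT_MAX` and `hash8` (a tiny FNV-1a standing in for the SHA-256 prefix) are assumptions for illustration.

```typescript
const SEGMENT_MAX = 255; // assumed stand-in for ARTIFACT_PATH_SEGMENT_MAX

/** Tiny FNV-1a hash, a stand-in for the PR's SHA-256 prefix of the raw input. */
function hash8(input: string): string {
  let h = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16).padStart(8, '0');
}

/**
 * Sketch: when sanitization mutated the raw name, embed a short hash of
 * the RAW input in the leaf so distinct raw names cannot collapse onto
 * one safe form. `dot <= 1` treats extensionless names and
 * dotfile-prefixed leaves (e.g. "_.hidden") as having no real extension.
 */
function embedDisambiguatorInLeaf(leaf: string, rawInput: string): string {
  const hash = hash8(rawInput);
  const dot = leaf.lastIndexOf('.');
  const stem = dot <= 1 ? leaf : leaf.slice(0, dot);
  const ext = dot <= 1 ? '' : leaf.slice(dot);
  const out = `${stem}-${hash}${ext}`;
  // Final clamp: never exceed the per-segment cap.
  return out.length > SEGMENT_MAX
    ? `${stem.slice(0, SEGMENT_MAX - hash.length - 1 - ext.length)}-${hash}${ext}`
    : out;
}

// Two raw inputs that sanitize identically now diverge:
const a = embedDisambiguatorInLeaf('out_1.csv', 'out 1.csv');
const b = embedDisambiguatorInLeaf('out_1.csv', 'out_1.csv');
```

Because the hash is computed over the raw input, re-uploading the same raw name is idempotent, while previously colliding names now get distinct safe forms.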
24e29aa8cb
🌱 fix: Inject Code-Tool Files Into Graph Sessions on First Call (+ read_file Sandbox Fallback) (#12831)
* 🌱 fix: Seed Code Tool Files Into Graph Sessions on First Call
Files attached to an agent's `tool_resources.execute_code` (user uploads
or generated artifacts from a prior turn) were silently dropped on the
first `execute_code` invocation of a turn. The agents-side `ToolNode`
populates `_injected_files` only when its `sessions` map already has an
`EXECUTE_CODE` entry — but that entry is only written by a previous
successful execution, so call #1 had nothing to inject. CodeExecutor
then fell back to a `/files/{session_id}` fetch, but `session_id` was
also empty on call #1, leaving the sandbox without the primed files.
Mirror the existing skill-priming pattern (`primeInvokedSkills` →
`initialSessions`) for code-resource files: eagerly call `primeFiles`
before `createRun` and merge the result into `initialSessions` via a
new `seedCodeFilesIntoSessions` helper. Skill files and code-resource
files now share the same `EXECUTE_CODE` entry; the prior representative
`session_id` is preserved on merge.
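The merge semantics described above (skill files and code-resource files sharing one `EXECUTE_CODE` entry, prior representative `session_id` preserved) can be sketched like this. The types, the helper name `seedCodeFiles`, and the string constant are illustrative assumptions, not the PR's exact shapes.

```typescript
type PrimedFile = { id: string; session_id: string; name: string };
type SessionEntry = { session_id?: string; files: PrimedFile[] };

const EXECUTE_CODE = 'execute_code'; // stand-in for the library constant

/**
 * Sketch: merge eagerly-primed code files into the sessions seed.
 * A representative session_id written earlier (e.g. by skill priming)
 * is preserved; files accumulate.
 */
function seedCodeFiles(
  sessions: Record<string, SessionEntry>,
  primed: PrimedFile[],
): Record<string, SessionEntry> {
  if (primed.length === 0) return sessions;
  const existing = sessions[EXECUTE_CODE];
  sessions[EXECUTE_CODE] = {
    session_id: existing?.session_id ?? primed[0].session_id,
    files: [...(existing?.files ?? []), ...primed],
  };
  return sessions;
}

// Skill priming wrote the entry first; code files merge in afterwards.
const sessions: Record<string, SessionEntry> = {
  [EXECUTE_CODE]: { session_id: 'skill-abc', files: [] },
};
seedCodeFiles(sessions, [{ id: 'f1', session_id: 'code-123', name: 'data.csv' }]);
```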
* 🔬 chore: Add Diagnostic Logging for Code-Files Seeding
Temporary debug logs to diagnose why first-call file injection is not
firing in real agent runs. Logs `wantsCodeExec`, available tool-resource
keys, primed file count, and the seeded EXECUTE_CODE entry. Will revert
once the failure mode is identified.
* 🪛 refactor: Capture primedCodeFiles per-agent at init, merge across run
Replace the client.js eager `primeFiles` call with a per-agent capture at
initialization time so every agent in a multi-agent run (primary +
handoff + addedConvo) contributes its `tool_resources.execute_code`
files to the shared `Graph.sessions` seed.
- handleTools.js (eager loadTools): the `execute_code` factory closes
over a `primedCodeFiles` slot and surfaces it in the return.
- ToolService.js loadToolDefinitionsWrapper (event-driven): captures
`files` from the existing `primeCodeFiles` call (was dropping them
while only keeping `toolContext`) and surfaces them.
- packages/api initialize.ts: the loadTools callback contract now
includes `primedCodeFiles`, threaded onto `InitializedAgent`.
- client.js: iterate `[primary, ...agentConfigs.values()]` and merge
each agent's `primedCodeFiles` into `initialSessions`. Drop the
primary-only `primeCodeFiles` call and diagnostic logs from the prior
attempt — wrong layer (single-agent), wrong gate (`agent.tools`
contained Tool instances after init, so the `.includes("execute_code")`
string check always failed).
* 🔬 chore: Add per-agent diagnostic logs for code-files seeding
Logs `tool_resources` keys + file counts inside loadToolDefinitionsWrapper
and per-agent `primedCodeFiles` + final initialSessions inside
AgentClient. Will revert once the failure mode is confirmed.
* 🔬 chore: Add file-lookup diagnostics inside initializeAgent
Logs the inputs and intermediate counts of the conversation-file lookup
chain (convo file ids, thread message ids, code-generated and
user-code file counts) so we can pinpoint why `tool_resources.execute_code`
is arriving empty at `loadToolDefinitionsWrapper` despite the agent
having `execute_code` in its tools list.
* 🔬 chore: Probe execute_code files without messageId filter
Adds a relaxed `getFiles({conversationId, context: execute_code})` probe
that runs only when `getCodeGeneratedFiles` returns empty. Lists what's
actually in the DB for this conversation so we can confirm whether the
file is missing entirely or whether the messageId filter is rejecting it.
* 🔬 chore: Fix probe getFiles arg order (sort vs projection)
Probe was passing a projection object as the sort arg, which mongoose
rejected with `Invalid sort value`. Move it to the third arg
(selectFields) so the probe actually runs.
* 🪢 fix: Preserve Original messageId on Code-Output File Update
Each `processCodeOutput` call was overwriting the persisted file's
`messageId` with the *current* run's id. When a turn re-creates an
existing file (filename + conversationId match → `claimCodeFile`
returns the existing record, `isUpdate=true`), the file's link to
the assistant message that originally produced it gets clobbered.
`initializeAgent` later runs `getCodeGeneratedFiles({ messageId: { $in: <thread> } })`
to seed `tool_resources.execute_code` from prior-turn artifacts. With a
stale `messageId` (e.g. from a failed read attempt that re-shelled the
same filename), the file no longer matches the parent-walk thread, so
`tool_resources` arrives empty at agent init, the new
`primedCodeFiles` channel has nothing to seed, and the LLM can't see
its own prior-turn artifacts on the next turn — defeating the
just-added Graph-sessions seeding fix.
Preserve the existing `claimed.messageId` on update; first-creation
behavior is unchanged. The runtime return value still includes the
current run's `messageId` (via `Object.assign(file, { messageId })`)
so the artifact is correctly attributed to the live tool_call.
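The persistence rule above reduces to one ternary; a minimal sketch (the helper name and `ClaimedFile` type are hypothetical, the ternary mirrors the one quoted later in this log):

```typescript
type ClaimedFile = { file_id: string; messageId?: string };

/**
 * Sketch: on update, keep the messageId that originally produced the
 * artifact; on create (or for legacy records missing the field), stamp
 * the current run's id.
 */
function persistedMessageId(
  isUpdate: boolean,
  claimed: ClaimedFile | undefined,
  currentMessageId: string,
): string {
  return isUpdate ? (claimed?.messageId ?? currentMessageId) : currentMessageId;
}
```

The runtime return value is a separate concern: the live object is still stamped with the current run's id for tool_call attribution, only the persisted record keeps the original.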
* 🧹 chore: Remove diagnostic logs from code-files seeding path
Drops the temporary debug logs added to trace the empty-tool_resources
failure mode. Production code paths (loadToolDefinitionsWrapper,
client.js seed loop, initializeAgent file lookup) are left as the
permanent shape: capture primedCodeFiles, merge across agents, seed
initialSessions before run start.
* 🪛 feat: read_file Sandbox Fallback for /mnt/data + Non-Skill Paths
When the model called `read_file` with a code-execution path (e.g.
`/mnt/data/sentinel.txt`), the handler returned a misleading
`Use format: {skillName}/{path}` error. Adds a sandbox-aware fallback:
- Short-circuit `/mnt/data/...` (can never be a skill reference) →
route to a sandbox `cat` via the new host-provided `readSandboxFile`
callback, which POSTs to the codeapi `/exec` endpoint.
- Skip the skill resolver entirely when `accessibleSkillIds` is empty
— the resolved output of `resolveAgentScopedSkillIds` already
collapses the admin capability + ephemeral badge + persisted
`skills_enabled` chain, so an empty value is the authoritative
"skills aren't in scope for this agent" signal.
- For `{firstSegment}/...` paths, consult the catalog-derived
`activeSkillNames` Set (no DB read) to detect non-skill names and
fall through to the sandbox before the model has to retry with
`bash_tool`.
`activeSkillNames` is captured from `injectSkillCatalog`, threaded onto
`InitializedAgent`, into `agentToolContexts`, then through
`enrichWithSkillConfigurable` into `mergedConfigurable` for the handler.
The host implementation of `readSandboxFile` lives in
`api/server/services/Files/Code/process.js` and shells `cat <path>`
through the seeded sandbox session — `tc.codeSessionContext`
(emitted by ToolNode for `read_file` calls in `@librechat/agents`
v3.1.72+) provides the `session_id` + `_injected_files` so the read
lands in the same sandbox that holds prior-turn artifacts. When the
seeded context isn't available (older agents version, no codeapi
configured), the handler returns a model-visible error pointing at
`bash_tool` instead of silently failing.
Tests: 8 new `handleReadFileCall` cases cover the new short-circuits,
the skills-not-enabled gate, the activeSkillNames lookup, the
sandbox-fallback success path, and the bash_tool retry hint on
fallback failure. Existing `read_file` tests now opt into "skills are
in scope" via a `skillsInScope()` fixture (production wouldn't reach
the skill lookup with empty `accessibleSkillIds`).
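The fallback order described above can be sketched as a pure routing function. This is a hypothetical reconstruction for illustration: the real handler performs the sandbox `cat` and error construction inline, and the `ReadRoute` type is an assumption.

```typescript
type ReadRoute = 'sandbox' | 'skill' | 'error';

/**
 * Sketch of the read_file routing order: /mnt/data short-circuits to the
 * sandbox, an empty skill scope skips the resolver, and unknown first
 * segments fall through to the sandbox before the model has to retry.
 * When no code environment is available, the caller returns a
 * model-visible error pointing at bash_tool instead.
 */
function routeReadFile(
  filePath: string,
  accessibleSkillIds: string[],
  activeSkillNames: Set<string>,
  codeEnvAvailable: boolean,
): ReadRoute {
  // /mnt/data/... can never be a skill reference.
  if (filePath.startsWith('/mnt/data/')) {
    return codeEnvAvailable ? 'sandbox' : 'error';
  }
  // Empty scope is the authoritative "skills not in scope" signal.
  if (accessibleSkillIds.length === 0) {
    return codeEnvAvailable ? 'sandbox' : 'error';
  }
  // Catalog-derived name check, no DB read.
  const firstSegment = filePath.split('/')[0];
  if (!activeSkillNames.has(firstSegment)) {
    return codeEnvAvailable ? 'sandbox' : 'error';
  }
  return 'skill';
}
```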
* 🔧 chore: Update @librechat/agents dependency to version 3.1.72
Bumps the version of the @librechat/agents package across package-lock.json and relevant package.json files to ensure compatibility with the latest features and fixes.
* 🪛 refactor: Centralize Tool-Session Seed in buildInitialToolSessions Helper
Addresses review feedback on the per-agent merge in client.js:
- **Run-wide semantics, named explicitly.** The merge into a single
`Graph.sessions[EXECUTE_CODE]` was a deliberate match to the
agents-library design (`Graph.sessions` is shared across every
`ToolNode` in the run), but the inline `for (const a of agents)`
loop in `AgentClient.chatCompletion` made it look per-agent. Move
the logic to a TS helper `buildInitialToolSessions` that documents
the run-wide-by-design contract in one place. The CJS controller
now contains a single call site, no business logic.
- **Subagent walk (P2).** The previous loop only iterated
`[primary, ...agentConfigs.values()]`. Pure subagents are pruned
out of `agentConfigs` after init and retained on each parent's
`subagentAgentConfigs`, so their primed code files were silently
dropped from the seed. The helper now walks recursively, with a
visited-Set keyed on object identity that terminates safely on a
malformed agent graph (cycle).
- **`jest.setup.cjs` polyfill for undici `File`.** Reviewer hit
`ReferenceError: File is not defined` running the targeted spec on
WSL — a known Node 18 issue where `globalThis.File` from
`node:buffer` isn't auto-exposed. Polyfill it inside a Jest setup
file so the suite boots regardless of Node patch version.
Helper test coverage (8 new): skill-only / agent-only / both,
recursive subagent walk, cycle-safe walk, primary+subagent
deduplication, undefined/null entries in the agents iterable, and
representative session_id preservation across the merge.
16 tests pass total in `codeFilesSession.spec.ts` (8 prior + 8 new).
No behavior change vs. the previous commit for the existing
primary+agentConfigs case — subagent inclusion is the only new
behavior, and it matches what the existing seeding logic would have
done if subagents had been in `agentConfigs`.
* 🪛 fix: FIFO Walk Order in buildInitialToolSessions (P3 review)
The traversal used `Array.pop()` (LIFO), which visited the LAST
top-level agent first. The docstring says "primary first"; the code
contradicted it. When no skill seed exists the first-visited agent's
first file supplies the representative `session_id` written to
`Graph.sessions[EXECUTE_CODE]` — so a LIFO walk silently flipped which
agent that came from. `ToolNode` ultimately uses per-file `session_id`s
for runtime injection (so behavior was indistinguishable for current
callers), but the discrepancy was a footgun for any future consumer
that read the representative.
Switch to FIFO via `Array.shift()` to match both the docstring and the
existing `loadSubagentsFor` walk pattern in
`Endpoints/agents/initialize.js`. Add a regression test that asserts
the primary's `session_id` is the representative (and that all three
agents' files still contribute, with per-file `session_id`s preserved).
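The FIFO, cycle-safe traversal described across the last two commits can be sketched as below. The `AgentNode` shape and `collectPrimedFiles` name are assumptions; the real helper is `buildInitialToolSessions`, which also builds the sessions record rather than a flat file list.

```typescript
type AgentNode = {
  id: string;
  primedCodeFiles?: { session_id: string }[];
  subagentAgentConfigs?: AgentNode[];
};

/**
 * Sketch: Array.shift() gives FIFO order (primary visited first, so its
 * first file supplies the representative session_id); a visited Set keyed
 * on object identity terminates safely on a malformed (cyclic) graph.
 */
function collectPrimedFiles(roots: AgentNode[]): { session_id: string }[] {
  const queue = [...roots];
  const visited = new Set<AgentNode>();
  const files: { session_id: string }[] = [];
  while (queue.length > 0) {
    const agent = queue.shift()!; // FIFO, matching the docstring
    if (visited.has(agent)) continue;
    visited.add(agent);
    files.push(...(agent.primedCodeFiles ?? []));
    queue.push(...(agent.subagentAgentConfigs ?? []));
  }
  return files;
}

const sub: AgentNode = { id: 'sub', primedCodeFiles: [{ session_id: 'sub-sess' }] };
const primary: AgentNode = {
  id: 'primary',
  primedCodeFiles: [{ session_id: 'primary-sess' }],
  subagentAgentConfigs: [sub],
};
const second: AgentNode = { id: 'second', primedCodeFiles: [{ session_id: 'second-sess' }] };
const files = collectPrimedFiles([primary, second]);
```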
* 🔬 test: Lock In Code-Files Bug Fixes Per Comprehensive Review
Addresses MAJOR + MINOR + NIT findings from the multi-pass review:
**Finding #4 (MINOR) — empty relativePath misses sandbox fallback.**
A model calling `read_file("output/")` where "output" isn't a skill
name dead-ended with `Missing file path after skill name` instead of
being routed to the sandbox like every other malformed-path branch.
Add the same `codeEnvAvailable → handleSandboxFileFallback` pattern,
plus two regression tests.
**Finding #7 (NIT) — duplicate `skillsInScope()` helper.**
Hoist the identical helper out of two nested describe blocks to
module scope. Single source of truth.
**Finding #1 (MAJOR) — `persistedMessageId` had zero test coverage.**
The fix preserves a file's original `messageId` on update so
`getCodeGeneratedFiles` can still match it on subsequent turns. A
regression in the `isUpdate ? (claimed.messageId ?? messageId) : messageId`
ternary would silently re-introduce the original cross-turn priming
bug. Five new tests cover:
- UPDATE preserves `claimed.messageId` in the persisted record
- UPDATE falls back to current run id when `claimed.messageId` is
absent (legacy records predating the field)
- CREATE uses current run id (no claimed record exists)
- The runtime return value uses the LIVE id (artifact attribution)
even when the persisted record kept the original
- The image branch follows the same contract (would silently regress
if the ternary diverged across the two file-build branches)
The tests use a `snapshotCreateFileArgs()` helper because
`processCodeOutput` mutates the file object after `createFile`
returns (`Object.assign(file, { messageId, toolCallId })`) and a
naive `createFile.mock.calls[0][0]` would reflect the post-mutation
state instead of what was actually persisted.
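The snapshot trick is generic enough to show in isolation; a minimal sketch under assumed names (`makeRecordingStub` is hypothetical, standing in for the test's `snapshotCreateFileArgs()` wrapper around the `createFile` mock):

```typescript
/**
 * Sketch: clone the argument at call time so later in-place mutation of
 * the same object (e.g. Object.assign after persistence) cannot
 * retroactively change what the test observes.
 */
function makeRecordingStub<T extends object>() {
  const snapshots: T[] = [];
  const stub = (arg: T): T => {
    snapshots.push(structuredClone(arg)); // deep copy at call time
    return arg;
  };
  return { stub, snapshots };
}

const { stub, snapshots } = makeRecordingStub<{ messageId: string }>();
const file = { messageId: 'original' };
stub(file); // "persist" the record
Object.assign(file, { messageId: 'live-run' }); // post-persist mutation
```

A naive `mock.calls[0][0]` would see `'live-run'` here, because mock frameworks typically store argument references, not copies.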
**Finding #2 (MAJOR) — `readSandboxFile` had no direct tests.**
The model-controlled `file_path` flows through a POSIX single-quote
escape into a shell `cat` command, making this a security boundary.
A quoting regression would let a malicious filename break out of the
quoted argument and inject arbitrary shell. 20 new tests across:
- Shell quoting (7): plain filenames, embedded `'`, `$()`, backticks,
newlines, shell metachars, multiple consecutive single-quotes
- Payload shape (6): /exec URL, bash language, conditional
session_id / files inclusion, dedicated keepAlive:false agents
- Response handling (6): `{content}` on success, null on missing
base URL or absent stdout, throws on stderr-only, partial-success
returns stdout, transport errors are logged then rethrown
- Timeout (1): matches processCodeOutput's 15s SLA
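The quoting boundary those tests guard is the classic POSIX single-quote escape: inside single quotes nothing is special, so the only character that needs handling is the single quote itself (close the string, emit an escaped quote, reopen). A minimal sketch, not LibreChat's actual implementation:

```typescript
/**
 * POSIX single-quote escaping: abc'def -> 'abc'\''def'
 * Everything between single quotes is literal, so metacharacters,
 * $(), backticks, and newlines stay inert data.
 */
function shellQuote(arg: string): string {
  return "'" + arg.split("'").join("'\\''") + "'";
}

// A malicious filename stays data instead of becoming shell:
const cmd = `cat ${shellQuote("pwn'; rm -rf / #.txt")}`;
```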
Audited findings #5 (acknowledged tech debt — readSandboxFile in JS
workspace), #6 (pre-existing positional-args debt on
enrichWithSkillConfigurable), and #8 (cosmetic JSDoc style) — no
action taken per the reviewer's own assessment.
Audited finding #3 (walk order vs docstring) — already addressed in
commit
8c073b4400
📄 feat: Auto-render Text-Based Code Execution Artifacts Inline (#12829)
* 📄 feat: Auto-render Text-Based Code Execution Artifacts Inline
Eagerly extract text content from non-image artifacts produced by code
execution tools and render it inline in the message instead of behind a
click-to-download file card. Reuses the SkillFiles binary-detection helper
and the existing parseDocument dispatcher so docx, xlsx, csv, html, code,
and other text-renderable formats land directly under the tool call. PPTX
is intentionally classified but not yet extracted — follow-up.
* 🌐 chore: Remove unused com_download_expires locale key
Removed in en/translation.json so the detect-unused-i18n-keys CI check
passes. The only reference was a commented-out localize() call in
LogContent.tsx that was deleted in the previous commit.
* 🩹 fix: Address PR review on code artifact text extraction
- extract.ts: build the temp document path from a randomUUID and pass
  path.basename(name) as originalname so a malicious artifact name cannot
  escape os.tmpdir() (P1 traversal flagged by codex/Copilot).
- process.js: classify and extract using safeName, not the raw name —
  defense in depth alongside the temp-path fix.
- classify.ts: add a bare-name lookup so extensionless text artifacts
  (Makefile, Dockerfile, …) classify as utf8-text instead of falling
  through to other.
- Attachment.tsx: wire aria-expanded / aria-controls on the show-all
  toggle for screen reader support.
- LogContent.tsx: restore a download chip (LogLink) on inline-text
  attachments so users can still pull down the underlying file.
- Tests: cover extensionless filenames and the temp-path traversal
  invariant.
* 🩹 fix: Address comprehensive PR review on code artifact extraction
- extract.ts: walk back to a UTF-8 code-point boundary before truncating
  so cuts cannot land mid-multibyte and emit U+FFFD (CJK/emoji concern).
  truncate() now accepts the original buffer to skip a redundant encode.
- extract.ts: add an 8s timeout around parseDocument via Promise.race so a
  pathological docx/xlsx cannot stall the response path.
- process.js: always set `text` (string or null) on the file payload —
  createFile uses findOneAndUpdate with $set semantics, so omitting the
  field leaves a stale value behind when an artifact's content changes.
- Attachment.tsx: switch the show-all toggle from a char-count threshold
  to a useLayoutEffect ref measurement on scrollHeight, and use
  overflow-hidden when collapsed (overflow-auto when expanded) so the
  collapsed box has a single clear interaction model.
- Attachment.tsx + LogContent.tsx: lift `isImageAttachment` /
  `isTextAttachment` into a shared attachmentTypes module. LogContent
  keeps its looser image check (no width/height required) because the
  legacy log surface receives attachments without dimensions.
- Tests: cover the multi-byte boundary, the always-set-text contract on
  updates, and the new shared predicates.
* 🧪 test: Component test for TextAttachment + direct withTimeout coverage
- Attachment.tsx: re-order local imports longest-to-shortest per AGENTS.md
  (attachmentTypes ahead of FileContainer/Image).
- extract.ts: export withTimeout so it can be unit-tested directly (it's
  also used internally — exporting carries no runtime cost).
- extract.spec.ts: three small unit tests on withTimeout that cover the
  resolve, propagated-rejection, and timeout-rejection paths with real
  timers.
- TextAttachment.test.tsx: ten cases for the new React component — text
  rendering in <pre>, download chip presence/absence, ref-based collapse
  measurement (with scrollHeight stubbed via prototype), aria-expanded
  toggle, fall-through to FileAttachment for missing and empty text, and
  AttachmentGroup routing.
* 🩹 fix: Canonicalize document MIME by extension before parseDocument
When the classifier puts a file on the document path via its extension
(.docx, .xlsx, …) but the buffer sniffer returned a generic value like
application/zip or application/octet-stream, we previously forwarded that
generic MIME to parseDocument, which dispatches strictly by MIME and
silently rejected it — exactly defeating the extension-first
classification this PR added. extractDocument now remaps the MIME from the
extension (falling back to the original sniffed MIME if the extension is
unrecognized, so files that reached the document branch via MIME detection
still work). Adds a parameterized test across docx/xlsx/xls/ods/odt
against zip/octet sniffs to guard the regression.
* 🩹 fix: Reuse existing withTimeout from utils/promise
The previous commit's local withTimeout export collided with the
already-exported `withTimeout` from `~/utils/promise`, breaking the
@librechat/api tsc job (TS2308 ambiguous re-export). Drops the duplicate,
imports from `~/utils/promise`, and removes the now-redundant unit tests
(the helper has its own coverage in utils/promise.spec.ts). The third
argument shifts from a label to the fully-formed timeout error message
that the existing helper expects.
* 🧹 chore: TextAttachment test polish (NITs)
- Use the conventional `import Attachment, { AttachmentGroup }` form
  rather than `default as Attachment`.
- Save the original `scrollHeight` property descriptor and restore it in
  afterAll, so the prototype patch never leaks past this suite.
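The UTF-8 boundary walk-back mentioned above is mechanical once you recall the byte layout: continuation bytes are `0b10xxxxxx`, so a byte-budget cut just steps backwards past them to the previous lead byte. A minimal sketch, assuming the real code operates on a Node Buffer rather than a `Uint8Array`:

```typescript
/**
 * Sketch: truncate a UTF-8 byte sequence to at most maxBytes without
 * cutting mid-codepoint (which would decode as U+FFFD).
 */
function truncateUtf8(bytes: Uint8Array, maxBytes: number): string {
  const decoder = new TextDecoder('utf-8');
  if (bytes.length <= maxBytes) return decoder.decode(bytes);
  let end = maxBytes;
  // Continuation bytes look like 10xxxxxx; back up to a lead byte,
  // then exclude that (now-incomplete) codepoint entirely.
  while (end > 0 && (bytes[end] & 0b11000000) === 0b10000000) end--;
  return decoder.decode(bytes.subarray(0, end));
}

const cjk = new TextEncoder().encode('日本語'); // 9 bytes, 3 per character
```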
596f806f60
🛡️ fix: Strict Opt-In Skills Activation per Agent (#12823)
* 🛡️ fix: Strict opt-in skills activation per agent
Skills were activating on every agent run that had the capability +
RBAC enabled, regardless of whether the user (ephemeral) or author
(persisted) had opted in. `scopeSkillIds(undefined)` fell through to
"full accessible catalog" whenever `agent.skills` was unset, which is
the default state for any agent created before skills existed and for
every ephemeral agent.
Activation now requires an explicit signal:
- Ephemeral agent → per-conversation skills badge toggle.
- Persisted agent → new `skills_enabled` master switch on the agent
doc, surfaced as a toggle in the Agent Builder skills section.
Enabled + empty/undefined allowlist = full accessible catalog;
enabled + non-empty allowlist = narrow to those ids; disabled (or
undefined) = no skills available, even if an allowlist is set.
Centralised the predicate in `resolveAgentScopedSkillIds` so the
primary-agent path, handoff/discovery, the subagent loop, and both
OpenAI controllers all share one source of truth. Frontend `$`
popover scope mirrors the same logic so the UI never offers skills
the backend would refuse to activate.
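The activation contract above (enabled + empty allowlist = full catalog; enabled + allowlist = narrow; disabled/unset = nothing) fits in one small predicate. A sketch under assumed names — the real `resolveAgentScopedSkillIds` also handles the ephemeral-badge vs persisted-agent distinction, which is elided here:

```typescript
/**
 * Sketch of the strict opt-in predicate:
 * - no explicit enable signal  -> no skills, even with an allowlist set
 * - enabled, no/empty allowlist -> full accessible catalog
 * - enabled, non-empty allowlist -> narrow to those ids
 */
function resolveScopedSkillIds(
  accessible: string[],
  enabled: boolean | undefined,
  allowlist: string[] | undefined,
): string[] {
  if (enabled !== true) return []; // disabled or unset: fail closed
  if (!allowlist || allowlist.length === 0) return accessible;
  const allowed = new Set(allowlist);
  return accessible.filter((id) => allowed.has(id));
}
```

Failing closed on `undefined` is what fixes the pre-existing bug: agents created before skills existed (and every ephemeral agent) default to unset, which previously fell through to the full catalog.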
* test: mock resolveAgentScopedSkillIds in agent controller specs
* refactor: address review findings on skills opt-in PR
- AgentConfig: associate skills label with toggle via htmlFor for
click/keyboard affordance; simplify Switch handler to Boolean(value).
- skills: mark scopeSkillIds as @internal so runtime callers continue
to route through resolveAgentScopedSkillIds and inherit the activation
predicate (ephemeral toggle, persisted skills_enabled).
* fix(agents): include skills_enabled in agent list projection
Without this field, agents loaded via the list endpoint hydrate into the
client agentsMap with skills_enabled === undefined, causing the `$`
skill popover to hide every skill on a fresh page load even when the
agent was saved with skills_enabled: true.
* fix(skills): fail closed for persisted agents during agentsMap hydration
Returning undefined while the agents map loads let the popover render the
full catalog for a persisted agent before we could read its
skills_enabled flag, so the user could pick a skill the backend would
then refuse for the turn. Match the strict opt-in contract by returning
[] until the map is authoritative.
* refactor(skills): extract skillsHintKey for readability
Replaces the nested ternary in the skills section JSX with a
pre-computed constant so the activation -> hint key mapping reads
top-down.
* refactor(skills): unflatten skillsHintKey to remove nested ternary
d83cb84f59
🪆 feat: Subagent configuration in Agent Builder (#12725)
* 🪆 feat: Subagents configuration (isolated-context child agents)
Surfaces the new @librechat/agents `SubagentConfig` primitive in the Agent
Builder. Subagents let a supervisor delegate a focused subtask to a child
graph running in an isolated context window: verbose tool output stays in
the child, only a filtered summary returns to the parent.
Data model: new `subagents: { enabled, allowSelf, agent_ids }` on Agent,
wired through the Zod, Mongoose, and form schemas plus a new
`AgentCapabilities.subagents` capability (enabled by default).
Backend: `initialize.js` loads explicit subagent configs alongside handoff
agents, and drops subagent-only references from the parallel/handoff maps
so they don't leak into the supervisor's graph. `run.ts` emits
`SubagentConfig[]` on the primary `AgentInputs` — a self-spawn entry when
`allowSelf` is enabled plus one entry per configured agent.
UI: an "Advanced" panel section with an enable toggle, a self-spawn
toggle, and an agent picker (capped at 10). Enabling without adding agents
still yields self-spawn; disabling self-spawn with no agents shows a
warning. A capability flag gates the whole section.
* 🪆 feat: Stream subagent progress to UI (dialog + inline ticker)
Pairs with the @librechat/agents SDK change that forwards child-graph
events through the parent's handler registry (danny-avila/agents#107):
- Self-spawn and explicit subagents can now use event-driven tools,
  because child `ON_TOOL_EXECUTE` dispatches reach our ToolService via the
  parent's registered handler.
- The same forwarding path wraps the child's run_step / run_step_delta /
  run_step_completed / message_delta / reasoning_delta dispatches in a new
  `ON_SUBAGENT_UPDATE` envelope, with start/stop/error bookends.
Backend: `callbacks.js` registers an `ON_SUBAGENT_UPDATE` handler that
forwards each envelope straight to the SSE stream.
Frontend:
- `useStepHandler` consumes `ON_SUBAGENT_UPDATE` events and merges them
  into a per-tool_call Recoil atom (`subagentProgressByToolCallId`).
  First-seen `subagentRunId` claims the most-recent unclaimed `subagent`
  tool call in the active response message — a temporal mapping, no SDK
  wire-format change needed to correlate child runs with parent tool
  calls.
- New `SubagentCall` part component replaces the default `ToolCall`
  rendering when `toolCall.name === Constants.SUBAGENT`: a compact status
  ticker showing the 3 most recent update labels, clickable to open a
  dialog with the full activity log + the final markdown-rendered result.
- Adds `Constants.SUBAGENT`, `StepEvents.ON_SUBAGENT_UPDATE`, and the
  `SubagentUpdateEvent` type in data-provider.
Tests:
- `packages/api npx jest run-summarization` — 23 pass
- `api npx jest initialize` — 16 pass
- `npm run build` — clean
Dependency note: bumps `@librechat/agents` to `^3.1.67-dev.1` — requires
the SDK PR (danny-avila/agents#107) to be merged to dev and published
before this PR merges. `ON_SUBAGENT_UPDATE` is absent from dev.0, so the
handler registration would be a no-op with the older SDK but would not
crash.
* 🪆 fix: address Codex review and review audit on subagents
Stacks on top of the SDK change in danny-avila/agents#107 (bumped to
`^3.1.67-dev.2`).
- **P1 (`initialize.js`)**: subagent-only agents were being deleted from
  both `agentConfigs` AND `agentToolContexts`. The tool-execute handler
  resolves execution context (agent, tool_resources, skill ACLs) from
  `agentToolContexts`, so explicit subagents would run without their
  configured resources and skip action tools. Now only `agentConfigs` is
  pruned — tool context stays intact.
- **P2 (`AgentSubagents.tsx`)**: toggling subagents off set the form
  field to `undefined`; `removeNullishValues` stripped it from the PATCH,
  leaving the server copy enabled. Now it persists an explicit
  `{ enabled: false, ... }` so the update actually clears state.
- **Finding 1 (MAJOR)** — `agent_ids` Zod schema gains `.max()` via a new
  `MAX_SUBAGENTS` export from `data-provider` (shared with the UI cap).
  Crafted payloads can't trigger hundreds of `processAgent` calls.
- **Finding 2 (MAJOR)** — `subagentProgressByToolCallId` atomFamily atoms
  are now tracked in a ref and reset from `clearStepMaps` via a
  `useRecoilCallback({ reset })`. No monotonic growth across a session.
- **Finding 3 (MAJOR)** — early-arriving `ON_SUBAGENT_UPDATE` events
  whose parent `tool_call_id` is not yet mapped are now buffered in
  `pendingSubagentBuffer` (keyed by `subagentRunId`) and replayed in
  arrival order once correlation completes. Mirrors the existing
  `pendingDeltaBuffer` pattern.
- **Finding 4 (MAJOR)** — switched to deterministic correlation via the
  new `parentToolCallId` that SDK `3.1.67-dev.2` threads through from
  `ToolRunnableConfig.toolCall.id`. The temporal fallback now iterates
  oldest-unclaimed-first (forward), matching tool-call creation order, so
  concurrent spawns map correctly.
- **Finding 6 (MINOR)** — `agent_ids` are deduped on the backend via
  `new Set(...)` before the load loop. Duplicates no longer produce
  duplicate `SubagentConfig` entries visible to the LLM.
- **Finding 7 (MINOR)** — the events array inside each Recoil atom is
  capped at 200 entries. Long-running subagents no longer replay O(n)
  spreads on every update; the dialog log still shows the cap window.
- **Finding 8 (MINOR)** — documented: subagents are loaded only for the
  primary agent this release (handoff children get self-spawn but not
  explicit sub-subagents). An in-code comment was added so the next
  maintainer doesn't wonder.
- **Finding 9 (NIT)** — removed `{!isSubmitting && null}` dead code and
  the misleading announce-polite comment in `SubagentCall`.
- New `validation.spec.ts` — 9 tests covering the cap on
  `agent_ids.length` at the subagent schema, agent-create, and
  agent-update layers.
- `run-summarization` — 23 pass, `initialize` — 16 pass; total backend
  package: 103 pass across touched areas.
Findings 5 (component tests) and 10 (micro-allocation) are tracked but
deferred; the former needs a Recoil-RenderHook harness that isn't in this
PR's scope, and the latter has negligible impact (one `Array.from` per
subagent run).
* 🧪 test: integration coverage for subagent correlation + backend loading
Addresses the follow-up audit on #12725 with real-code tests (no mock
handlers, only the existing setMessages/getMessages spies and the standard
mongodb-memory-server harness).
Six new tests under a dedicated `describe('subagent loading')`:
- loads a configured subagent, populates `subagentAgentConfigs`, keeps it
  out of `agentConfigs`
- **P1 regression guard**: drives the real `toolExecuteOptions.loadTools`
  closure with the subagent id and asserts `loadToolsForExecution` is
  called with `agent: <subagent>`, `tool_resources`, `actionsEnabled`. If
  anyone deletes `agentToolContexts` again, this fails.
- dedup: three copies of the same id load the agent once
- overlap: an agent referenced both as handoff target and subagent stays
  in `agentConfigs`
- capability gate: an admin disabling `subagents` suppresses loading even
  when the agent has a config
- per-agent disable: `subagents.enabled: false` skips loading entirely
Five new tests under `describe('on_subagent_update event')` using a real
`RecoilRoot` and a companion `useRecoilCallback` reader so writes from the
hook are observable:
- deterministic correlation via `parentToolCallId` (happy path with SDK
  dev.2+)
- fallback: the oldest-unclaimed tool call wins for concurrent spawns
  without `parentToolCallId`
- early-arrival buffer: updates with no mapping get buffered and replayed
  once the tool call appears
- event cap: 205 updates collapse to 200 retained, oldest dropped
- `clearStepMaps` resets tracked atoms back to their null default
- F2 — added an explicit `// TODO` marker for the
  handoff-subagent-loading extension (matches the comment that referenced
  it).
- F3 — dropped the unnecessary `MAX_SUBAGENTS as MAX_SUBAGENTS_CAP`
  alias; just import the constant directly.
- Bumped `@librechat/agents` to `^3.1.67-dev.3` to pick up the SDK's
  paired test additions.
- `api/server/services/Endpoints/agents/initialize.spec.js` — 22 pass
  (6 new + 16 existing)
- `packages/api/src/agents/validation.spec.ts` +
  `run-summarization.test.ts` — 103 pass
- `client/src/hooks/SSE/__tests__/useStepHandler.spec.ts` — 48 pass
  (5 new + 43 existing)
* 🪆 fix: strip parent run summary + discovered tools from subagent inputs
Codex P1 on #12725: `buildSubagentConfigs` reused the shared
`buildAgentInput` factory for each explicit child, and that factory always
stamps the parent run's `initialSummary` (cross-run conversation summary)
and `discoveredTools` (tool names the parent's LLM searched earlier) onto
every `AgentInputs` it returns.
When subagents were enabled on a conversation that had already been
summarized, every child inherited that summary — silently defeating the
isolated-context contract and burning extra tokens on unrelated prior
chat.
Fix in `run.ts`: after `buildAgentInput(child)`, explicitly clear
`childInputs.initialSummary` and `childInputs.discoveredTools` before
attaching to the `SubagentConfig`. The parent keeps both — that's how the
supervisor receives cross-turn context — but the child starts fresh.
Paired with danny-avila/agents#107 (bumped to `^3.1.67-dev.4`), which adds
the equivalent strip inside `buildChildInputs` to cover the self-spawn
path, where the SDK clones parent `_sourceInputs` directly and LibreChat
never sees the intermediate shape. Belt and suspenders.
Regression test (new):
- `does NOT leak the parent run initialSummary into an explicit child
  (Codex P1 regression)` — sets `initialSummary` on the run, enables
  subagents with an explicit child, asserts the parent still has the
  summary but `childConfig.agentInputs.initialSummary` is `undefined`.
  Same for `discoveredTools`. 24 pass.
* 🪆 fix: capability gate applies to handoff agents + parallel subagent test
### Codex P2 — handoff agents kept `subagents` after capability disabled
The endpoint-level `AgentCapabilities.subagents` gate only cleared
`subagents` on `primaryConfig`. Handoff agents loaded into `agentConfigs`
retained their persisted `subagents.enabled: true`, and because `run.ts`
calls `buildSubagentConfigs` for every agent input, self-spawn would still
fire on a handoff target even when the admin had disabled the capability
globally.
Fix in `initialize.js`: after the subagent loading block, when the
capability is off, iterate `agentConfigs.values()` and clear `subagents` +
`subagentAgentConfigs` on every loaded config.
Regression test: `clears subagents on handoff agents too when capability is disabled (Codex P2 regression)` — seeds a handoff target with its own `subagents.enabled: true`, disables the capability at the endpoint, asserts both primary AND handoff have `subagents` undefined in the client args. 23 init tests pass.

### Parallel subagent correlation — user-requested verification

Added `keeps parallel subagent streams independent when events interleave` to `useStepHandler.spec.ts`. Two `subagent` tool calls seeded side by side, 6 interleaved `ON_SUBAGENT_UPDATE` envelopes dispatched (a-start, b-start, a-step, b-step, a-stop, b-step), each carrying its own `parentToolCallId`. Asserts each `tool_call_id`'s Recoil bucket accumulates only its own run's events, statuses reflect each run independently (`call_a` → stop, `call_b` → run_step), no cross-contamination. 49 step-handler tests pass.

* 🪆 fix: SubagentCall detects cancelled / errored states (Codex P2)

Codex P2 on #12725: the old `running` check only consulted `initialProgress` and the subagent's phase. A user stop, dropped stream, or backend crash before a terminal `stop`/`error` envelope arrived would leave the ticker permanently stuck on "working…". Other `*Call` components (`ToolCall.tsx`) already model this via `!isSubmitting && !finished` → cancelled. Mirror that pattern.

Re-introduce `isSubmitting` on `SubagentCallProps` (the prop was dropped earlier as 'unused' — that was a bug) and resolve status as a tri-state:

- `finished` — initialProgress >= 1, or subagent `stop`/`error`
- `cancelled` — `!isSubmitting && !finished`
- `running` — neither

New locale keys `com_ui_subagent_cancelled` + `com_ui_subagent_errored` swap in the right header text per state. Tests: new `SubagentCall.test.tsx` covers all four states with a real `RecoilRoot` and a `useRecoilCallback` seeder — no mocked store — 5/5 pass.
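The tri-state resolution above can be sketched as a pure function (prop and phase names follow the description, not necessarily the exact component code):

```typescript
// Hedged sketch of the SubagentCall status resolution described above.
type SubagentStatus = 'finished' | 'cancelled' | 'running';

function resolveSubagentStatus(opts: {
  isSubmitting: boolean;
  initialProgress: number;
  phase?: 'start' | 'run_step' | 'stop' | 'error';
}): SubagentStatus {
  const finished =
    opts.initialProgress >= 1 || opts.phase === 'stop' || opts.phase === 'error';
  if (finished) {
    return 'finished';
  }
  // Stream ended (user stop, drop, crash) without a terminal phase.
  if (!opts.isSubmitting) {
    return 'cancelled';
  }
  return 'running';
}
```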
Includes an explicit P2 regression test that simulates the `isSubmitting=false, progress.status='run_step', initialProgress<1` scenario and asserts the cancelled label renders.

* 🪆 feat: semantic ticker + aggregated content-part dialog for subagents

Two rounds of feedback on #12725:

### Ticker — user-readable lines, not raw event names

The old ticker showed `on_run_step`, `on_message_delta`, etc. — not meaningful to users. Replaced with `buildSubagentTickerLines`, a pure helper that walks the `SubagentUpdateEvent` stream and emits:

- message/reasoning deltas → a single live "Writing: <last 60 chars>" (or "Reasoning: …") line that updates in place as chunks arrive
- run_step with tool_calls → "Using calculator(expression=42*58)" for a single call, "Using tool: a, b" for parallel (args dropped when multiple so the line stays short)
- run_step_completed → "calculator → 42*58 = 2436" (output truncated to 48 chars; falls back to "Tool X complete" when output is empty)
- error → "Error: <message>"
- start / stop / run_step_delta → suppressed (too granular / lifecycle-only)

Args and output pass through `summarizeArgs` / `summarizeOutput`, which flatten JSON to `key=value` pairs and head-truncate long strings so a 200-line tool output never bloats the ticker.

### Dialog — aggregated content parts via leaf renderers

`aggregateSubagentContent` folds the raw event stream into `TMessageContentParts[]` — text/reasoning delta streaks collapse into single `TEXT` / `THINK` parts, tool calls become `TOOL_CALL` parts, and `run_step` boundaries correctly break text runs around tool calls. The dialog iterates those parts through a `SubagentDialogPart` renderer that delegates to the existing `Text`, `Reasoning`, and `ToolCall` leaf components — the same sub-components `<Part />` uses — wrapped in a minimal `MessageContext` so reasoning expand state and cursor animation work.
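The flatten-and-truncate behavior of the summarizers can be sketched roughly like this — not the project's actual `summarizeArgs`, just an illustration of the idea:

```typescript
// Rough reimplementation of the summarizer behavior described above
// (illustrative; the real helper lives alongside buildSubagentTickerLines).
function summarizeArgsSketch(rawArgs: string, maxLen = 48): string {
  try {
    const parsed = JSON.parse(rawArgs) as Record<string, unknown>;
    // Flatten JSON to comma-separated key=value pairs.
    const flat = Object.entries(parsed)
      .map(([k, v]) => `${k}=${typeof v === 'string' ? v : JSON.stringify(v)}`)
      .join(', ');
    // Head-truncate so long values never bloat the ticker line.
    return flat.length > maxLen ? flat.slice(0, maxLen) : flat;
  } catch {
    return rawArgs.slice(0, maxLen); // non-JSON args pass through, truncated
  }
}
```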
Leaf components are used directly rather than importing `<Part />` itself, both to avoid a module cycle (Part → Parts/index → SubagentCall → Part) and to sidestep hypothetical nested-subagent rendering issues.

### Tests

- `subagentContent.test.ts` — 19 pure-function tests covering the aggregator (text concat, reasoning concat, tool call lifecycle, interleaving, phase suppression, late-arriving completions) and the ticker builder (live preview truncation, args/output snippets, parallel-call handling, output truncation, i18n formatter override).
- `SubagentCall.test.tsx` — 9 component tests: 5 status-resolution (existing) + 2 ticker (semantic text, delta collapse) + 2 dialog (aggregated parts routed to leaf renderers, raw-output fallback).

### Locale keys

New: `com_ui_subagent_ticker_writing`, `…_reasoning`, `…_error`, `…_using`, `…_using_with_args`, `…_tool_complete`, `…_tool_output`. Preserves i18n at the display layer while the helper stays pure.

* chore: drop unused com_ui_subagent_activity_log locale key

The dialog no longer renders an "Activity log" section — the new content-parts renderer replaced it. Also tweaks the dialog description copy to match.

* 🪆 fix: subagent dialog order, persistence, auto-scroll, width

Follow-up pass addressing the four issues observed in real runs against a live subagent-using parent.

### Aggregator ordering (reasoning appearing after text it preceded)

Reproducible pattern: the LLM emits reasoning → text → tool call in that order, but the dialog rendered text BEFORE reasoning in the content array. Root cause: `aggregateSubagentContent` maintained `currentText` and `currentThink` buffers in parallel and only flushed them at a `run_step` boundary in a fixed (text, think) order, losing the actual arrival order. Fix: when a text chunk arrives, close any open think buffer first (pushes it into the content array right then); symmetric for think → text.
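The arrival-order rule above can be sketched as a small fold (illustrative types; the real reducer produces `TMessageContentParts`):

```typescript
// Sketch of the ordering fix: when a delta of one kind arrives, any open
// buffer of the OTHER kind is flushed first, preserving arrival order.
type Part = { type: 'text' | 'think'; text: string };

function foldDeltas(deltas: Array<{ kind: 'text' | 'think'; chunk: string }>): Part[] {
  const parts: Part[] = [];
  let open: Part | null = null;
  for (const d of deltas) {
    if (open && open.type !== d.kind) {
      parts.push(open); // close the opposite buffer right away
      open = null;
    }
    if (!open) {
      open = { type: d.kind, text: '' };
    }
    open.text += d.chunk;
  }
  if (open) {
    parts.push(open);
  }
  return parts;
}
```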
Two new regression tests cover the exact reasoning → text → tool_call sequence from the screenshot and the repeated reasoning ↔ text flow across a turn.

### Content persists after completion (markdown not rendering when done)

`clearStepMaps` was calling `resetSubagentAtoms()` at stream end, which wiped every `subagentProgressByToolCallId` entry. Once reset, `contentParts.length === 0` and the dialog fell back to rendering the raw `output` string with plain text — hence the literal `##`/`**` in the completed-state screenshot. Stopped resetting; the atoms are bounded per-call (200-event cap) and per-conversation (one per subagent spawn) so growth matches the rest of the conversation state. `resetSubagentAtoms` is kept for a future conversation-switch caller. Also: routed the raw-`output` fallback (older subagent runs recorded before the event forwarder existed) through the same `SubagentDialogPart` → `Text` leaf that content parts use, so its markdown renders the same way.

### Auto-scroll to bottom while running

Added a `scrollRef` on the dialog body and a `useEffect` that pins `scrollTop = scrollHeight` while the dialog is open AND the subagent is running. Triggers on `contentParts.length` (new tool calls / part boundaries) and `events.length` (intra-part deltas) so the cursor tracks text streaming. Disabled post-completion so re-opening a finished run doesn't yank to the bottom.

### Wider dialog

Went from `max-w-2xl` (42rem / 672px — too cramped on maximized laptop windows) to `w-[min(95vw,64rem)] max-w-[min(95vw,64rem)]`. Narrow on phones, scales up to 64rem on desktop, always leaves a bit of margin from the viewport edge. Bumped `max-h-[65vh]` on the scroll area to give the extra width room to breathe vertically too.

### Tests

- `subagentContent.test.ts` — 21 pass (2 new ordering regressions).
- `useStepHandler.spec.ts` — 49 pass (1 updated to assert atoms are *preserved* on clearStepMaps).
- `SubagentCall.test.tsx` — 9 pass (unchanged; aggregator-level tests cover the ordering). * 🪆 feat: persist subagent_content via SDK createContentAggregator Per-request map of createContentAggregator instances keyed by the parent's tool_call_id. ON_SUBAGENT_UPDATE handler feeds each event into the matching aggregator (phase → GraphEvent mapping); AgentClient harvests contentParts onto the subagent tool_call at message save so the child's reasoning / tool calls / final text survive a page refresh. Reusing the SDK's battle-tested aggregator instead of a bespoke one keeps the persisted shape identical to the parent graph's output and drops ~100 lines of custom aggregation code. * 🪆 fix: incremental subagent aggregation + dialog render parity **Disappearing tool_calls**: the Recoil atom trimmed events to a 200-long rolling window, so verbose subagents could shed the `run_step` that originally created a tool_call part — rebuilding content from the trimmed window then produced only the surviving text/reasoning. Fix: fold each envelope into `contentParts` incrementally in the atom as it arrives (new `foldSubagentEvent` + cursor state). Event trim window now affects only the ticker, never the dialog. **Render parity**: dialog now applies `groupSequentialToolCalls` and renders single parts through `Container` + grouped batches through `ToolCallGroup` — same spacing and "Used N tools" collapsing the main message view uses. **Width**: `min(96vw, 80rem)` — wider on big screens, still responsive. **Labels**: "Subagent: X" is jargon. Named subagents render as `Running "{name}" agent` / `Ran "{name}" agent` (past tense on completion); self-spawns use `Running subtask` / `Ran subtask` since `Running "self" agent` reads badly. * 🪆 polish: subagent dialog parity + agent avatar in header **Labels**: drop "subtask" framing. Self-spawn shows `Running agent` / `Ran agent` (past tense on completion); named subagents stay `Running "X" agent` / `Ran "X" agent`. 
**Dialog render parity**: stop wrapping every part in `Container`. TEXT keeps its `Container` (gap-3 + `mt-5` sibling margin), THINK and TOOL_CALL render bare so their own wrappers set the full-column width the regular message view gives them — matches the main `<Part>` dispatch. Outer scroll region now uses `px-4 py-3` padding and a `max-w-full flex-grow flex-col gap-0` inner wrapper, mirroring the `MessageParts` container the main conversation uses.

**Avatar**: header icon now renders the subagent's configured avatar via `MessageIcon` when `useAgentsMapContext()` has the child agent, falling back to the `Users` SVG (which keeps its running-state pulse). Same icon-left-of-label pattern the tool UI uses.

* 🪆 polish: subagent group label, ticker throttle + tail-ellipsis, scroll button

**Grouped label**: ToolCallGroup now detects all-subagent batches and labels them "Running N agents" / "Ran N agents" instead of "Used N tools". Mixed batches keep the existing label. The tool-name summary is suppressed for all-subagent groups (every entry dedupes to "subagent", which adds nothing).

**Ticker width + tail-ellipsis**: raise the preview cap to 300 chars so wide containers aren't half-empty, and flip the ticker `<li>` to `dir="rtl"` so `text-overflow: ellipsis` clips the *oldest* characters (visually the left edge) — the newest tokens stay pinned to the right regardless of container width. Bidi lays out the Latin text LTR internally; the `rtl` only affects which side gets the ellipsis.

**Throttle**: `useThrottledValue` hook (trailing-edge, 1.2s) smooths the live `Writing: …` preview so tokens no longer strobe past the eye faster than they can be read. Ref-based internals (not `useState`) avoid infinite-update loops when the upstream value is a new reference on each render; a `NEGATIVE_INFINITY` sentinel ensures the very first value passes through synchronously so tests and first paint aren't delayed.
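Stripped of React, the throttle idea might look like the following simplified sketch (the real `useThrottledValue` is a trailing-edge hook built on `setTimeout`; this leading-edge version with an injectable clock only illustrates the rate-limit and first-value-sentinel behavior):

```typescript
// Framework-free sketch of the throttled-value idea described above.
// `now` is injectable so the logic is testable without real timers.
class ThrottledValue<T> {
  private emitted: T | undefined;
  // Sentinel: the very first push always passes through synchronously.
  private lastEmit = Number.NEGATIVE_INFINITY;

  constructor(
    private intervalMs: number,
    private now: () => number = Date.now,
  ) {}

  /** Push a new upstream value; returns the value the consumer should see. */
  push(value: T): T {
    const t = this.now();
    if (t - this.lastEmit >= this.intervalMs) {
      this.emitted = value;
      this.lastEmit = t;
    }
    return this.emitted as T;
  }
}
```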
**Scroll-to-bottom**: dialog tracks `isAtBottom` with a 120px threshold; auto-scroll only engages when the user is already following along, and a persistent jump-to-latest button appears whenever they scroll up — no more fighting the auto-scroll to read back. * 🪆 polish: snappier ticker, prefix-safe labels, agents icon, readable lines **Ticker lines are now incrementally aggregated in the atom** — same pattern as contentParts. The raw-events rolling window is gone; event volume no longer caps what the ticker can display. Verbose subagents that used to drop early tool_call lines out of the window now keep the full 3-line history (using_tool, tool_complete, writing). **Discriminated-union ticker lines** split a constant prefix (e.g. "Writing:") from a tail-truncatable body. The prefix lives in a `shrink-0` span so it never gets clipped when the body overflows; the body uses `dir="rtl"` only on itself — scoped so non-streaming lines (e.g. "Waiting for first update…") can't get their trailing ellipsis flipped by bidi. **Content-aware throttle**: 800ms interval (down from 1200ms), skipped entirely while the live buffer is below 120 chars. Early tokens now appear immediately — no more "Reasoning: I" sitting blank for a full second before the next heartbeat. Once the preview is long enough to fill the container, throttling kicks in at the tighter interval. **Header label** is now a constant verb + optional muted sub-label. Base reads "Running agent" / "Ran agent" / "Cancelled agent" / "Agent errored" for every subagent; named subagents get the configured agent name rendered to the right in secondary text (self-spawns and unresolved names omit it — "Running self agent" is nonsense). **ToolCallGroup** now detects `allSubagents` and swaps `StackedToolIcons` for a single `Users` glyph — otherwise the group header shows a wrench ("tool") icon next to "Ran 5 agents", which reads wrong. 
* 🪆 feat: delimiter-aware tool labels in ticker + full-width tool lines

New shared `parseToolName` helper in `client/src/utils/toolLabels.ts` — single source of truth for splitting `<tool>_mcp_<server>` ids and mapping native tool names (web_search, execute_code, …) to their friendly translation keys. `ToolCallGroup` drops its inline copy and pulls from this helper. Ticker tool lines now use the shared parser + a new `ToolIdentifier` sub-renderer so the live log reads like the main tool UI:

- MCP tool → `<server> · <code-badge:tool>` (e.g. "github · `search_code`")
- Native → friendly name from `TOOL_FRIENDLY_NAME_KEYS`
- Unknown → bare `<code>` badge of the raw id

The `using_tool` / `tool_complete` rows now render with a `flex w-full items-baseline gap-1 overflow-hidden` layout matching the writing/reasoning rows — they take the full container width instead of collapsing to content size. Output snippets on `tool_complete` get the same tail-side `dir="rtl"` ellipsis so the newest characters stay flush-right when the container is narrow. Dropped the now-unused template i18n keys (`com_ui_subagent_ticker_using_with_args`, `com_ui_subagent_ticker_tool_complete`, `com_ui_subagent_ticker_tool_output`) in favor of tokens the JSX composes structurally. Only English is touched per the project rule; other locales follow externally.

* 🪆 fix: dialog scroll button + auto-scroll during streaming deltas

Two race/trigger bugs in the dialog's scroll behavior:

**Button never showed**: `addEventListener('scroll', …)` in a `useEffect` ran before Radix's portal had actually committed the scroll container, so `scrollRef.current` was still null — the listener never attached, `isAtBottom` stayed stuck at its initial `true`, and the jump-to-latest button was never rendered. Swap to React's `onScroll` prop on the element itself so the handler wires up as part of DOM commit, not a post-commit effect.
**Auto-scroll stalled during text streaming**: the pin-to-bottom effect only re-fired on `contentParts.length` changes. Message/reasoning deltas extend the last TEXT/THINK part's `.text` without changing the array length — so the view would drift up as tokens piled in and never catch back up. Replace the length-dep effect with a `ResizeObserver` on the inner content div; every height change (new part or in-place growth) triggers a scroll-pin when the user is still at the bottom. * 🪆 fix: drop leading ellipsis from ticker body truncatePreview was prepending ... to the tail when the buffer exceeded 300 chars. The component's CSS already produces a left-side ellipsis for overflow via dir=rtl + text-overflow: ellipsis — stacking a data-level ellipsis on top renders a stray dot character right after the Writing: / Reasoning: label (Writing: .Sure!), which looks like a typo to the reader. Data now returns just the last 300 chars when truncating; CSS handles the visual cue whenever the body actually overflows its container. * 🪆 fix: Codex review — subagent isolation + concurrent-safe throttle Three findings from the @codex review pass, all valid: **P1 — buildAgentInput leaks parent discovered-tool state into subagent children.** `buildAgentInput` mutates `agent.toolRegistry` (`overrideDeferLoadingForDiscoveredTools` flips `defer_loading:true→false` on tools the parent previously searched for) and appends those tools' definitions to the returned `toolDefinitions` before the function returns. `buildSubagentConfigs` was clearing the reported `initialSummary` / `discoveredTools` fields on the returned AgentInputs, but that happened post-return — the registry writes and extra tool definitions persisted on the child, silently defeating context isolation and inflating the child's prompt. Fix: `buildAgentInput` now takes an `isSubagent` flag that gates the registry-mutation block and omits `initialSummary` / `discoveredTools` at the source. 
`buildSubagentConfigs` passes `{ isSubagent: true }` for every explicit child; no post-hoc cleanup needed. **P2 — ToolCallGroup labels a finished subagent group as still running when the child returned no output.** `getToolMeta` computed `hasOutput` as `!!tc.output`, which is `false` for a completed subagent that returned empty text (the UI already has an "empty result" fallback for that case). `allCompleted` would stay `false` and the group header stuck on "Running N agents" forever. Fix: treat `tc.progress === 1` as completion too — progress is the authoritative lifecycle signal, output is just content. **P2 — useThrottledValue schedules `setTimeout` during render.** Discarded renders under Strict Mode / Concurrent rendering would leave orphan timers firing against stale trees. Fix: move `setTimeout` into a `useEffect` keyed on `[value, intervalMs, enabled]`. Render-time still mutates refs (idempotent), but timer scheduling lives post-commit. Cleanup on unmount and on passthrough transitions is preserved. * 🪆 fix: Codex P2 — wipe subagent atoms on conversation switch `clearStepMaps()` intentionally doesn't reset `subagentProgressByToolCallId` so a user can reopen a completed subagent's dialog mid-conversation, but `resetSubagentAtoms` was defined and never exposed / called — so each completed run's aggregated `contentParts` + `tickerState` stayed resident in the `atomFamily` for the whole app session. Unbounded growth across multi-conversation sessions. Expose `resetSubagentAtoms` from `useStepHandler` and fire it from `useEventHandlers` whenever the URL's `conversationId` changes. That's the correct cleanup boundary: historical subagent dialogs rehydrate from persisted `subagent_content` on each `tool_call` at message-save time, so wiping live atoms on switch doesn't lose any viewable history — it just releases per-tool-call state that the old conversation's components no longer subscribe to. 
* 🪆 fix: Codex round 3 — subagent registry isolation + post-run label Two more valid findings. **P1 — parent-order registry mutation leaks into subagent inputs.** `overrideDeferLoadingForDiscoveredTools` mutates `agent.toolRegistry` in place (the Map *and* the LCTool objects inside it). When an agent appears both as a handoff target (normal graph node) AND an explicit subagent child, a subagent build that ran before the parent's build captures a reference to the same registry — the parent's later mutation leaks through to the child. Fix: for subagent children (`isSubagent`), clone the `toolRegistry` Map and shallow-clone each LCTool inside before returning the inputs. `defer_loading` flips on parent-graph registry mutations can't propagate across the clone boundary. `toolDefinitions` also gets a shallow-copy pass so the same isolation holds for definitions the child carries directly. **P2 — "Running N agents" label stuck after cancel/error.** ToolCallGroup's all-subagent label was gated only on `allCompleted`, which requires every child to have `hasOutput || progress === 1`. A subagent that gets cancelled (stream ends, no `stop` phase, no output) never satisfies that — so even after `isSubmitting` flips false, the header stays on "Running N agents" while each individual card correctly shows "Cancelled agent". Fix: derive a `subagentsDone` flag as `allCompleted || !isSubmitting` and gate the past-tense label on that. Matches the tri-state each SubagentCall card already resolves (finished / cancelled / running). * 🪆 fix: Codex P2 — ACL-check subagents.agent_ids on create/update Codex flagged that `subagents.agent_ids` was accepted as arbitrary strings on the create/update routes while `edges` got a `validateEdgeAgentAccess` pass — so users could save subagent references to agents they can't VIEW. 
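The clone-boundary fix for the registry (the round-3 P1 above) can be sketched like so (`LCToolSketch` is an illustrative stand-in for the SDK's tool shape):

```typescript
// Sketch of the clone boundary: copy the Map AND shallow-clone each entry so
// later in-place mutations on the parent's registry can't reach the child.
type LCToolSketch = { name: string; defer_loading: boolean };

function cloneRegistryForSubagent(
  registry: Map<string, LCToolSketch>,
): Map<string, LCToolSketch> {
  const clone = new Map<string, LCToolSketch>();
  for (const [id, tool] of registry) {
    clone.set(id, { ...tool }); // shallow-clone each tool object too
  }
  return clone;
}
```

Cloning only the Map would not be enough — both maps would still reference the same tool objects, so a `defer_loading` flip on the parent side would leak through.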
At runtime `initializeClient`'s `processAgent` ACL gate silently drops those, so the persisted configuration and the actual behavior diverged in a way that is difficult to diagnose. Refactor: extract the id-set → unauthorized-ids check into a shared `collectUnauthorizedAgentIds`, wrap it with a dedicated `validateSubagentAccess`, and plumb the same 403-on-failure response the edge path already returns. Applied on both POST /agents and PATCH /agents/:id. * 🪆 fix: Codex round 5 — ACL-disable escape hatch + ticker order Two valid findings. **P1 — can't disable subagents after losing access to a child.** The subagent ACL check ran on every create/update that echoed back the `agent_ids` list, even when the user was explicitly disabling the feature. The UI keeps the list intact when toggling `enabled: false`, so a user who subsequently lost VIEW on any child would be locked in a 403 loop — every edit (including the one that turns subagents off) bounces. Fix: gate the ACL check on `subagents.enabled !== false` at both the POST /agents and PATCH /agents/:id handlers. Empty list stays a no-op. Disabling the feature is always permitted. **P2 — ticker fold merges out-of-order previews across delta switches.** `foldSubagentEventIntoTicker` carried `textLineIdx` / `thinkLineIdx` across a reasoning → text → reasoning transition, so the second reasoning chunk appended to the original reasoning line instead of starting a new chronological one. Fix: close the opposite buffer + cursor when a delta-type switch is detected (same rule the content-parts reducer already applies). Added a regression test. * 🪆 fix: Codex round 6 — preserve mid-stream atoms + honor sequential suppression Two valid findings. **P2 — atom reset fires on initial chat URL assignment.** `useEventHandlers` initialized `lastConversationIdRef` from the URL's current `paramId`, then reset subagent atoms whenever the ref and `paramId` disagreed. 
For a brand-new conversation the URL stamp goes from `undefined → "abc123"` while the first response is still streaming, which used to drop subagent ticker/content state mid-run and leave dialogs missing earlier updates. Fix: only reset when *both* the old and new IDs are non-null and differ — i.e. a user-initiated switch between two established conversations. The initial assignment passes through without clearing. **P2 — ON_SUBAGENT_UPDATE bypassed `hide_sequential_outputs`.** Every other streaming handler in `callbacks.js` (`ON_RUN_STEP`, `ON_MESSAGE_DELTA`, etc.) gates emission on `checkIfLastAgent` + `metadata?.hide_sequential_outputs`, but the subagent forwarder did an unconditional `emitEvent` — so intermediate agents in a sequential chain were leaking their children's activity to the client even when the chain was configured to suppress intermediates. Fix: accept `metadata` and apply the same `isLastAgent || !hide_sequential_outputs` gate. Aggregation still runs regardless of visibility (persistence + dialog depend on it); only the SSE forward is suppressed. * 🪆 fix: Codex P2 — gate subagent ACL check on endpoint capability `validateSubagentAccess` ran on every create/update where `subagents.enabled !== false`, regardless of the endpoint-level `subagents` capability. When the capability is off at the appConfig level, `initializeClient` already strips the `subagents` block at runtime — so persisted `agent_ids` are inert — but the validation could still 403 on a legacy record whose referenced child is no longer viewable, blocking unrelated edits. Fix: add `isSubagentsCapabilityEnabled(req)` that reads the agents endpoint's capabilities from `req.config` and gate both the create and update ACL checks on it. Capability-off environments can update agents with stale `subagents` data freely; capability-on keeps the full ACL protection. 
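The shared visibility gate the streaming handlers apply is essentially a one-line predicate; a sketch matching the description above (names follow the prose, not necessarily the exact code):

```typescript
// Sketch of the ON_SUBAGENT_UPDATE / ON_RUN_STEP emission gate:
// intermediate agents in a sequential chain are suppressed when the chain
// is configured to hide intermediate outputs.
function shouldEmitToClient(
  isLastAgent: boolean,
  metadata?: { hide_sequential_outputs?: boolean },
): boolean {
  return isLastAgent || !metadata?.hide_sequential_outputs;
}
```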
* 🪆 fix: Codex P2 — reset subagent atoms on id→null navigation too Previous guard (both-established) skipped the reset whenever `paramId` became null/undefined, so navigating from an existing chat to a "new chat" route left stale subagent progress resident in the `atomFamily` until the user picked a specific different chat. Swap the both-established check for a one-time flag: skip only the very first `undefined → id` transition (the brand-new-chat URL stamp that happens mid-stream), then reset on any subsequent change — id→id, id→null, null→id-after-reset. If the user started on an established chat the flag is true at mount, so the guard is a no-op and every navigation resets normally. * 🪆 fix: Codex round 9 — subagent persistence gate + handoff children Two valid findings. **P1 — hide_sequential_outputs also gates persistence.** The previous fix gated the SSE forward on `isLastAgent || !hide_sequential_outputs` but still ran the per-tool-call `createContentAggregator` aggregation unconditionally. `finalizeSubagentContent` would then attach the hidden intermediate agent's child reasoning / tool output to the saved message, so a page refresh could reveal activity that was intentionally suppressed live. Move the visibility gate to the top of the handler — hidden agents now skip both aggregation and emission, so "hide_sequential_outputs" is a consistent "don't record" rule for subagent traces. **P2 — handoff agents' explicit subagents were silently dropped.** `initializeClient` only resolved `subagentAgentConfigs` for the primary config, so an agent used via handoff that had its own `subagents.agent_ids` saved in the builder would get self-spawn only; every explicit child was quietly ignored, creating a saved-config / runtime mismatch the user couldn't diagnose. Extract the resolution into a shared `loadSubagentsFor(config)` helper and invoke it for the primary and every handoff agent in `agentConfigs`. 
The `edgeAgentIds` precomputation stays outside the helper (it's loop-invariant). Capability-off shortcuts return empty early so the existing strip-on-capability-off path still holds. * 🪆 fix: Codex P2 — recursive subagent build for multi-level delegation Previously only the outer `agents[]` loop attached `subagentConfigs` to its inputs, so a child used as a subagent (invoked via the `subagent` tool) lost every explicit spawn target of its own. A user-valid configuration like A → B → C would only run the top layer; B could never actually delegate to C from inside A's run. Recursively build `subagentConfigs` for each child inside `buildSubagentConfigs`, passing the child's freshly-constructed `childInputs` down so its own `subagents.enabled` children get resolved too. Added cycle protection via an `ancestors` Set — a configuration like A → B → A is safely cut off at the second encounter of A rather than recursing forever (the existing `child.id === agent.id` guard already prevents the direct self-loop). * 🪆 fix: Codex P2 — reset subagent atoms on useEventHandlers unmount The effect that resets subagent atoms only fired on `paramId` change, so unmounting the chat container (route change away from /c) never flushed the atoms. `knownSubagentAtomKeys` lives in a ref inside `useStepHandler` — once the hook unmounts the ref is gone, so a subsequent remount can't clean atoms it never registered. Added a second `useEffect` that only runs cleanup on unmount (empty deps aside from the stable `resetSubagentAtoms` callback). Keeps `atomFamily` bounded across full route teardowns too. * 🪆 fix: Codex round 13 — cyclic subagent guard + prefer persisted Two valid findings. **P1 — cyclic subagent ref reloads the primary.** A configuration like `A ↔ B` (B lists A as its own subagent) would send `loadSubagentsFor` down a path that couldn't find A in `agentConfigs` (the primary isn't stored there), so it called `processAgent(A)` a second time. 
That inserts a fresh config for the primary id, which downstream duplicates in `[primary, ...agentConfigs.values()]` and can replace the primary's tool context with the reloaded copy. Fix: short-circuit when a subagent ref points back at `primaryConfig.id` — reuse the already-loaded primary config. Primary is always an edge id so no pruning bookkeeping needed. **P2 — live atom preferred over canonical persisted trace.** The dialog picked `progress.contentParts` ahead of `persistedContent`, but the Recoil bucket is best-effort — after a disconnect/reconnect it can be stale or partial. The server's `subagent_content` on the `tool_call` is the canonical record refreshed on sync. Preferring live could hide completed tool/reasoning history that was actually persisted. Fix: flip the preference order. Persisted wins when it's non-empty; live covers the mid-stream window (before the parent message saves, persisted is empty) and the older-runs fallback. Updated the test that enforced the old order to lock the new semantics in (separate mid-stream live-fallback assertion kept). * 🪆 fix: Codex P2 — subagent atom reset rule simplified to 'leaving established id' The `hasEstablishedConversationRef` + check for initial undefined→id covered the first navigation but missed the equivalent mid-stream URL stamp when a user goes from an existing chat to a new chat and sends a message there (`id → null → newId`). The null → newId transition was still hitting the reset branch and wiping the in-flight subagent ticker/content for that first turn. Simpler rule: only reset when the PREVIOUS paramId is an established id. Every transition AWAY from an established chat clears (id→id2, id→null, id→undefined); every transition FROM null/undefined passes through (initial mount, new-chat URL stamp mid-stream). Drop the `hasEstablishedConversationRef` machinery in favor of that single condition. 
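The simplified reset rule reduces to a single predicate, sketched here with illustrative names:

```typescript
// Sketch of the "leaving an established id" rule: reset subagent atoms only
// when navigating AWAY from an established conversation id. Transitions FROM
// null/undefined (initial mount, mid-stream new-chat URL stamp) pass through.
function shouldResetSubagentAtoms(
  prevId: string | null | undefined,
  nextId: string | null | undefined,
): boolean {
  return prevId != null && prevId !== nextId;
}
```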
* 🪆 fix: Codex P2 — match runtime's strict subagent enable check in ACL Runtime (`initializeClient` + `run.ts`) treats `subagents?.enabled` as a truthy predicate — `undefined`, `null`, missing, and `false` all short-circuit. The ACL gate was using `!== false` which accepted `undefined` / missing as "enabled" and could 403 a payload whose subagent tool would be inert at runtime. Swap both create and update to `enabled === true`. Only a strictly- enabled payload triggers the ACL check; the disable path (`false`) still passes through so a user who lost VIEW on a child can still save the disable edit. * 🪆 fix: Codex P2 — reject missing subagent references with 400 `validateSubagentAccess` collapsed through `collectUnauthorizedAgentIds`, which returns an empty list for ids with no DB record — so typos and references to deleted agents passed validation silently, and `initializeClient` later dropped them at runtime. Saved config would then list spawn targets that the backend never honored, a hard-to- diagnose drift. Refactor the helper into `classifyAgentReferences(ids, …)` which returns `{ missing, unauthorized }` separately. `validateEdgeAgentAccess` keeps its old semantics (missing is intentional — a self-referential `from` names the agent being created). `validateSubagentReferences` surfaces both buckets so the create/update handlers can 400 on missing and 403 on unauthorized with distinct error messages and `agent_ids` lists. * 🪆 polish: tighten subagent dialog grid gap to gap-2 OGDialogContent's grid default is `gap-4`, which renders the title, description, and scroll area as three visually separated panels. Drop to `gap-2` so they read as one block. * 🪆 polish: swap Subagents above Handoffs in Advanced panel Subagents is the more common knob users reach for, so show it first. Handoffs keep the same Controller wiring, just move below. |
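The missing/unauthorized classification behind the 400/403 split above can be sketched as follows — the existence set and ACL predicate are stand-ins for the real DB lookup and permission check:

```typescript
// Sketch of classifyAgentReferences: separate ids with no DB record (→ 400)
// from ids that exist but the user can't VIEW (→ 403).
function classifyAgentReferencesSketch(
  ids: string[],
  existing: Set<string>,
  canView: (id: string) => boolean,
): { missing: string[]; unauthorized: string[] } {
  const missing: string[] = [];
  const unauthorized: string[] = [];
  for (const id of ids) {
    if (!existing.has(id)) {
      missing.push(id); // typo or deleted agent
    } else if (!canView(id)) {
      unauthorized.push(id); // exists but no VIEW permission
    }
  }
  return { missing, unauthorized };
}
```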
||
|
|
35bf04b26c |
🧰 refactor: Unify code-execution tools (#12767)
* 🛠️ feat: Add registerCodeExecutionTools helper

Idempotently registers `bash_tool` + `read_file` in the run's tool registry and tool-definition list via a registry `.has()` dedupe. Sets up the single code-execution tool path shared by:

- `initializeAgent` (when an agent has `execute_code` in its tools and the capability is enabled for the run)
- `injectSkillCatalog` (when skills are active; unconditional read_file, bash_tool follows `codeEnvAvailable`)

Both callers reach the helper in the same initialization sequence, so the second call becomes a no-op and exactly one copy of each tool reaches the LLM — no more double registration for agents that combine `execute_code` capability with active skills. Unit-tested on a fresh run, idempotence (second call, overlap with prior tooldefs, partial overlap), and the no-registry variant.

* 🔀 refactor: Route injectSkillCatalog bash_tool + read_file through registerCodeExecutionTools

The `skill` tool is still registered inline (it's skill-path-specific), but `bash_tool` + `read_file` now flow through the shared idempotent helper so a prior registration from the execute_code path doesn't produce a duplicate copy later in the same run. Behavior preserved:

- `read_file` always registers when any active skill is in scope — manually-primed `disable-model-invocation: true` skills still need it to load `references/*` from storage.
- `bash_tool` follows `codeEnvAvailable` exactly as before.

Adds a test pinning the cross-call dedupe: when `injectSkillCatalog` runs AFTER `registerCodeExecutionTools` has already seeded the registry + tool definitions with bash_tool/read_file, the resulting `toolDefinitions` still contains exactly one copy of each.
* 🪄 feat: Expand `execute_code` tool name into bash_tool + read_file at initialize-time

When an agent's `tools` include `execute_code` and the `execute_code` capability is enabled for the run, `initializeAgent` now registers `bash_tool` + `read_file` via `registerCodeExecutionTools` before `injectSkillCatalog`. The legacy `execute_code` tool definition is no longer handed to the LLM — `execute_code` remains on the agent document as a capability-trigger marker, but the runtime expands it into the skill-flavored tool pair.

Call ordering matters: the `execute_code` registration runs BEFORE `injectSkillCatalog`, so the skill path's own `registerCodeExecutionTools` call inside `injectSkillCatalog` becomes a no-op via the registry's `.has()` check. Exactly one copy of each tool reaches the LLM whether the agent has:

- only `execute_code` (legacy path)
- only skills
- both

No data migration needed — `agent.tools: ['execute_code']` stays in the DB unchanged; the expansion is a runtime operation. Three tests cover the matrix: execute_code + capability on → bash_tool + read_file registered; execute_code + capability off → neither registered; no execute_code + capability on → neither registered.

* 🗑️ refactor: Drop CodeExecutionToolDefinition from the builtin registry

Removes the legacy `execute_code` entry from `agentToolDefinitions` and the corresponding import. With the initialize-time expansion in place, nothing consults `getToolDefinition('execute_code')` for a tool schema any more — the capability gate still filters on the string `execute_code`, but the actual tool definitions the LLM sees come from `registerCodeExecutionTools` (i.e. `bash_tool` + `read_file`).

`loadToolDefinitions` in `packages/api/src/tools/definitions.ts` silently drops `execute_code` when it no longer resolves in the registry — that's the expected path and is now covered by an updated test. No caller of `getToolDefinition('execute_code')` expects a non-undefined result after this change.
* 🔌 refactor: Read CODE_API_KEY from env for primeCodeFiles + PTC

Finishes the Phase 4 server-env-keyed rollout on the two remaining `loadAuthValues({ authFields: [EnvVar.CODE_API_KEY] })` sites in `ToolService.js`:

- `primeCodeFiles` (user-attached file priming on execute_code agents)
- Programmatic Tool Calling (`createProgrammaticToolCallingTool`)

Both now read `process.env[EnvVar.CODE_API_KEY]` directly, matching `bash_tool`'s pattern. The per-user plugin-auth path is no longer consulted for code-env credentials anywhere in the hot path — the agents library owns the actual tool-call execution and also reads the env var internally. Priming still fires for existing user-file workflows so the legacy `toolContextMap[execute_code]` hint ("files available at /mnt/data/...") stays in the prompt; only the key lookup changed.

* 🔧 fix: Type the pre-seeded dedupe-test tools as LCTool

CI TypeScript type checks caught `{ parameters: {} }` in the new cross-call dedupe test: `LCTool.parameters` is a `JsonSchemaType`, not `{}`. Use `{ type: 'object', properties: {} }` and type the local registry Map through the parameter-derived shape so the pre-seeded values match what `toolRegistry.set` expects.

* 🛡️ fix: Run execute_code expansion before GOOGLE_TOOL_CONFLICT gate

Codex review caught a latent regression: the original Phase 8 placement ran `registerCodeExecutionTools` after `hasAgentTools` was computed, so an execute-code-only agent on Google/Vertex with provider-specific `options.tools` populated would no longer trip `GOOGLE_TOOL_CONFLICT` — the legacy `CodeExecutionToolDefinition` used to populate `toolDefinitions` before the guard, but after dropping it from the registry, `toolDefinitions` stayed empty until my expansion ran downstream of the guard. Mixed provider + agent tools would silently flow through to the LLM. Fix moves the `execute_code` expansion to BEFORE `hasAgentTools` computation.
`bash_tool` + `read_file` now contribute to the check the same way the legacy `execute_code` def did. Covered by a new test that pins the Google+execute_code+provider-tools scenario — the `rejects.toThrow(/google_tool_conflict/)` path would have silently passed on the prior placement.

* 🔗 fix: Thread codeEnvAvailable through handoff sub-agents

Round-2 codex review caught the other half of the execute_code expansion gap: `discoverConnectedAgents` omitted `codeEnvAvailable` from its forwarded `initializeAgent` params, so handoff sub-agents with `agent.tools: ['execute_code']` lost the `bash_tool` + `read_file` registration (pre-Phase 8 the legacy `CodeExecutionToolDefinition` would have landed in their `toolDefinitions` via the registry).

- Add `codeEnvAvailable?` to `DiscoverConnectedAgentsParams` and forward it verbatim on every sub-agent `initializeAgent` call.
- Update the three JS call sites that construct the primary's `codeEnvAvailable` (`services/Endpoints/agents/initialize.js`, `controllers/agents/openai.js`, `controllers/agents/responses.js`) to pass the same flag into `discoverConnectedAgents` — one authoritative source per request.
- Two regression tests in `discovery.spec.ts` pin the true/false passthrough so a future refactor that drops the param-forwarding surfaces immediately.

Left intentionally unchanged: `packages/api/src/agents/openai/service.ts` (public API helper with no in-repo caller). External consumers of `createAgentChatCompletion` who want code execution should pass a `codeEnvAvailable`-aware `initializeAgent` via `deps` — documenting the full public-API surface is out of scope for this Phase 8 PR.

* 🔗 fix: Thread codeEnvAvailable through addedConvo + memory-agent paths

Round-3 codex review caught the last two production `initializeAgent` callers missing the Phase-8 capability flag:

- `api/server/services/Endpoints/agents/addedConvo.js` (multi-convo parallel agent execution).
  Added `codeEnvAvailable` to `processAddedConvo`'s destructured params and forwarded it into the per-added-agent `initializeAgent` call. Caller in `api/server/services/Endpoints/agents/initialize.js` passes the same `codeEnvAvailable` it computed for the primary.
- `api/server/controllers/agents/client.js` (`useMemory` — memory extraction agent). Computes its own `codeEnvAvailable` from `appConfig?.endpoints?.[EModelEndpoint.agents]?.capabilities` and forwards into `initializeAgent`. Memory agents rarely list `execute_code`, but if one does, pre-Phase 8 they got the legacy `execute_code` tool registered unconditionally — the passthrough restores parity.

With this, every production caller of `initializeAgent` explicitly resolves the capability: main chat flow (primary + handoff), OpenAI chat completions (primary + handoff), Responses API (primary + handoff), added convo parallel agents, and memory agents. The one remaining caller, `packages/api/src/agents/openai/service.ts::createAgentChatCompletion`, is a public API helper with no in-repo consumer (external callers must pass a capability-aware `initializeAgent` via `deps`).

* 🪤 fix: Remove duplicate appConfig declaration causing TDZ ReferenceError

The Responses API controller had TWO `const appConfig = req.config;` bindings inside `createResponse`: one at the top of the function (added by the Phase 4 `bash_tool` decouple) and one inside the try block (added by the polish PR #12760). Because `const` is block-scoped with a temporal dead zone, the inner redeclaration put `appConfig` in TDZ for the entire try block, so any earlier reference inside the try — notably `appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders` at line 348 — threw `ReferenceError: Cannot access 'appConfig' before initialization`. The error was silently swallowed by the outer try/catch, leaving `recordCollectedUsage` unreached and the six `responses.unit.spec.js` token-usage tests failing.
Removing the inner redeclaration fixes the six failing tests (verified: 11/11 pass locally post-fix, 0 regressions elsewhere). The outer function-scoped binding already provides `appConfig` to every downstream reference.

* 🔗 fix: Thread codeEnvAvailable through the OpenAI chat-completion public API

Round-4 codex review (legitimate on the type-safety angle, even though the runtime concern was already covered): the `createAgentChatCompletion` helper defines its own narrower `InitializeAgentParams` interface locally, and the type was missing `codeEnvAvailable`. External consumers who supply a capability-aware `deps.initializeAgent` couldn't route `codeEnvAvailable` through without a type-cast workaround.

- Widen the local `InitializeAgentParams` interface to include `codeEnvAvailable?: boolean` (matches the real `packages/api/src/agents/initialize.ts` type).
- Derive `codeEnvAvailable` inside `createAgentChatCompletion` from `deps.appConfig?.endpoints?.agents?.capabilities` (the same source the in-repo controllers use) and forward to `deps.initializeAgent`. Uses a string literal `'execute_code'` lookup so this file stays free of a `librechat-data-provider` import — keeping the dependency surface of the public helper minimal.

With this, external consumers of `createAgentChatCompletion` who pass `appConfig` with the agents capabilities get `bash_tool` + `read_file` registration automatically; consumers who don't pass `appConfig` retain the existing "explicit opt-in" semantics (the flag stays `undefined`, expansion is skipped).

* 🧹 chore: Review-driven polish — observability log, JSDoc DRY, test gaps, no-op allocation

Addresses the comprehensive review of PR #12767:

- **Finding #1** (MINOR, observability): `initializeAgent` now emits a debug log when an agent lists `execute_code` in its tools but the runtime gate is off (`params.codeEnvAvailable` !== true).
  The event-driven `loadToolDefinitionsWrapper` path doesn't log capability-disabled warnings, so without this the tool silently vanishes from the LLM's definitions with zero trace. Operators debugging "why isn't code interpreter working?" now get a signal at the initialize layer.
- **Finding #5** (NIT, allocation): `registerCodeExecutionTools` now returns the input `toolDefinitions` array by reference on the no-op path (both tools already registered by a prior caller in the same run) instead of allocating a fresh spread array every time. The common dual-call scenario — `initializeAgent` then `injectSkillCatalog` — saves one O(n) copy per request.
- **Finding #4** (NIT, DRY): Collapsed the duplicated 6-line JSDoc comment in `openai.js`, `responses.js`, and `addedConvo.js` into either a one-line `@see DiscoverConnectedAgentsParams.codeEnvAvailable` pointer (the two JS call sites) or a compact 3-line block referring back to the canonical source (addedConvo's @param).
- **Finding #2** (MINOR, test gap): Added `api/server/services/Endpoints/agents/addedConvo.spec.js` with three cases covering `codeEnvAvailable=true`, `codeEnvAvailable=false`, and omitted (undefined) passthrough. A future refactor that drops the param from destructuring now surfaces here instead of silently regressing multi-convo parallel agents with `execute_code`.
- **Finding #3** (MINOR, test gap): Added `api/server/controllers/agents/__tests__/client.memory.spec.js` pinning the capability-flag derivation that `AgentClient::useMemory` uses — six cases covering present/absent/null/undefined config shapes plus an enum-literal pin (`'execute_code'` / `'agents'`). Catches enum renames or config-path shifts that would otherwise silently strip `bash_tool` + `read_file` from memory agents.
Finding #7 (jest.mock scoping, confidence 40) left as-is: the reviewer's own risk assessment noted `buildToolSet` doesn't touch the mocked exports, and restructuring a file-level `jest.mock` to `jest.doMock` + dynamic `import()` introduces more complexity than the speculative risk justifies. The existing mock is scoped to the test file and contains the same stubs the adjacent `skills.test.ts` already uses.

Finding #6 (PR description commit count) addressed out-of-band via PR description update.

All existing tests pass, typecheck clean, lint clean across touched files. New tests: 9 cases across 2 new spec files.

* 🧽 refactor: Replace hardcoded 'execute_code' string with AgentCapabilities enum in service.ts

Follow-up review (conf 55) caught that `openai/service.ts`'s Phase 8 `codeEnvAvailable` derivation used the literal `'execute_code'` while every in-repo controller uses `AgentCapabilities.execute_code` from `librechat-data-provider`. The file deliberately uses local type interfaces to keep the public API helper's type surface small, but that pattern was never a ban on single-value imports from the data provider — `packages/api` already depends on it. Importing the enum value means a future rename of `AgentCapabilities.execute_code` propagates to this file automatically, matching the in-repo controllers' behavior.

Other follow-up findings left as-is per the reviewer's own verdict:

- #2 (memory spec mirrors the production expression rather than calling `AgentClient::useMemory` directly): reviewer flagged as "not blocking" / "design-philosophy observation." The test file's JSDoc already explicitly documents the tradeoff and pins the enum literals to catch the most likely drift vector. Standing up `AgentClient` + all its mocks for a one-line regression guard is disproportionate.
- #3 (`addedConvo.spec.js` mock signature vs. underlying `loadAddedAgent` arity): reviewer's own confidence 25 noted the mock matches the wrapper's actual call pattern in the production file.
  Not a real gap.
- #4 was self-retracted as a false alarm.

* 🗑️ refactor: Fully deprecate CODE_API_KEY — remove all LibreChat-side references

The code-execution sandbox no longer authenticates via a per-run `CODE_API_KEY` (frontend or backend). Auth moved server-side into the agents library / sandbox service, so LibreChat drops every reference:

**Backend plumbing:**

- `api/server/services/Files/Code/crud.js`: `getCodeOutputDownloadStream`, `uploadCodeEnvFile`, `batchUploadCodeEnvFiles` no longer accept `apiKey` or send the `X-API-Key` header.
- `api/server/services/Files/Code/process.js`: `processCodeOutput`, `getSessionInfo`, `primeFiles` drop the `apiKey` param throughout.
- `api/server/services/ToolService.js`: stop reading `process.env[EnvVar.CODE_API_KEY]` for `primeCodeFiles` and PTC; the agents library handles auth internally. Remove the now-dead `loadAuthValues` + `EnvVar` imports. Drop the misleading "LIBRECHAT_CODE_API_KEY" hint from the bash_tool error log.
- `api/server/services/Files/process.js`: remove the `loadAuthValues` call around `uploadCodeEnvFile`.
- `api/server/routes/files/files.js`: code-env file download no longer fetches a per-user key.
- `api/server/controllers/tools.js`: `execute_code` is no longer a tool that needs verifyToolAuth with `[EnvVar.CODE_API_KEY]` — the endpoint always reports system-authenticated so the client skips the key-entry dialog. `processCodeOutput` called without `apiKey`.
- `api/server/controllers/agents/callbacks.js`: `processCodeOutput` invoked without the loadAuthValues round trip, for both LegacyHandler and Responses-API handlers.
- `api/app/clients/tools/util/handleTools.js`: `createCodeExecutionTool` called with just `user_id` + files.

**packages/api:**

- `packages/api/src/agents/skillFiles.ts`: `PrimeSkillFilesParams`, `PrimeInvokedSkillsDeps`, `primeSkillFiles`, `primeInvokedSkills` all drop the `apiKey` param; the gate is purely `codeEnvAvailable`.
- `packages/api/src/agents/handlers.ts`: `handleSkillToolCall` drops the `process.env[EnvVar.CODE_API_KEY]` read; skill-file priming is now gated solely on `codeEnvAvailable`. `ToolExecuteOptions` signatures drop apiKey from `batchUploadCodeEnvFiles` and `getSessionInfo`.
- `packages/api/src/agents/skillConfigurable.ts`: JSDoc no longer references the env var.
- `packages/api/src/tools/classification.ts`: PTC creation no longer gated on `loadAuthValues`; `buildToolClassification` drops the `loadAuthValues` dep entirely (no LibreChat-side callers need it for this path anymore).
- `packages/api/src/tools/definitions.ts`: `LoadToolDefinitionsDeps` drops the `loadAuthValues` field.

**Frontend:**

- Delete `client/src/hooks/Plugins/useAuthCodeTool.ts`, `useCodeApiKeyForm.ts`, and `client/src/components/SidePanel/Agents/Code/ApiKeyDialog.tsx` — the install/revoke dialogs for CODE_API_KEY are fully dead.
- `BadgeRowContext.tsx`: drop `codeApiKeyForm` from the context type and provider. `codeInterpreter` toggle treated as always authenticated (sandbox auth is server-side).
- `ToolsDropdown.tsx`, `ToolDialogs.tsx`, `CodeInterpreter.tsx`, `RunCode.tsx`, `SidePanel/Agents/Code/Action.tsx` + `Form.tsx`: all API-key dialog trigger refs, "Configure code interpreter" gear buttons, and auth-verification plumbing removed. The "Code Interpreter" toggle is now a plain `AgentCapabilities.execute_code` checkbox — no key-entry gate.
- `client/src/locales/en/translation.json`: drop the three `com_ui_librechat_code_api*` keys and `com_ui_add_code_interpreter_api_key`. Other locales are externally automated per CLAUDE.md.

**Config:**

- `.env.example`: remove the `# LIBRECHAT_CODE_API_KEY=your-key` section and its header.

**Tests:**

- `crud.spec.js`: assertions flipped to pin "no X-API-Key header" and "no apiKey param".
- `skillFiles.spec.ts`: removed env-var save/restore; tests now pin that the batch-upload path is gated solely on `codeEnvAvailable` and that no apiKey is threaded through.
- `handlers.spec.ts`: same — just the `codeEnvAvailable` gate pins remain.
- `classification.spec.ts`: remove the two tests that asserted `loadAuthValues` was (not) called for PTC.
- `definitions.spec.ts`: drop every `loadAuthValues: mockLoadAuthValues` entry from the deps shape.
- `process.spec.js`: strip the mock of `EnvVar.CODE_API_KEY`.

**Comment hygiene:**

- `tools.ts`, `initialize.ts`, `registry/definitions.ts`: shortened stale comment references to "legacy `execute_code` tool" without naming the retired env var.

Tests verified: 678 packages/api tests pass, 836 backend api tests pass. Typecheck clean, lint clean.

Only remaining CODE_API_KEY mentions in the code are two regression-guard assertions:

- `crud.spec.js`: pins that the `X-API-Key` header stays absent.
- `skillConfigurable.spec.ts`: pins that `configurable` never grows a `codeApiKey` field.

* 🧹 chore: Remove the last two CODE_API_KEY name mentions in LibreChat

Follow-up to the prior full deprecation commit: two tests still named the retired identifier in their regression-guard assertions.

- `packages/api/src/agents/skillConfigurable.spec.ts`: drop the "does not inject a codeApiKey key" test. The `codeApiKey` field is gone from the production configurable shape, so an absence-assertion naming it re-introduces the retired identifier in code.
- `api/server/services/Files/Code/crud.spec.js`: rename the "without an X-API-Key header" case back to "should request stream response from the correct URL" and drop the `expect(headers).not.toHaveProperty('X-API-Key')` assertion. The surrounding request-shape checks (URL, timeout, responseType) still pin the behavior; the explicit header-absence line was named after the deprecated contract.

Result: `grep -rn "CODE_API_KEY\|codeApiKey\|LIBRECHAT_CODE_API_KEY"` against the LibreChat source tree returns zero hits.
The only remaining `X-API-Key` strings in this repo are on unrelated OpenAPI Action + MCP server auth configurations, where the string is user-facing config, not a LibreChat-owned identifier.

Tests: 677 packages/api pass (2 pre-existing summarization e2e failures unrelated); 126 api-workspace controller/service tests pass. Typecheck and lint clean.

* 🎯 fix: Narrow codeEnvAvailable to per-agent (admin cap AND agent.tools)

Before this commit, `codeEnvAvailable` was computed in the three JS controllers as the admin-level capability flag only (`enabledCapabilities.has(AgentCapabilities.execute_code)`) and passed through `initializeAgent` → `injectSkillCatalog` / `primeInvokedSkills` / `enrichWithSkillConfigurable` unchanged. A skills-only agent whose `tools` array didn't include `execute_code` still got `bash_tool` registered (via `injectSkillCatalog`) and skill files re-primed to the sandbox on every turn — wrong, because the agent never opted in to code execution.

**Fix:** `initializeAgent` now computes the per-agent effective value once as `params.codeEnvAvailable === true && agent.tools.includes(Tools.execute_code)`, reuses the same boolean for:

1. The `execute_code` → `bash_tool + read_file` expansion gate (previously already consulted `agent.tools`; now shares the single `effectiveCodeEnvAvailable` binding).
2. The `injectSkillCatalog` call (previously got the raw admin flag).
3. The returned `InitializedAgent.codeEnvAvailable` field (new, typed as required boolean).

**Controllers (initialize.js, openai.js, responses.js):** store `primaryConfig.codeEnvAvailable` in `agentToolContexts.set(primaryId, ...)`, capture `config.codeEnvAvailable` in every handoff `onAgentInitialized` callback, and read it from the per-agent ctx inside the `toolExecuteOptions.loadTools` runtime closure. The hoisted `const codeEnvAvailable = enabledCapabilities.has(...)` locals in the two OpenAI-compat controllers are gone — they were shadowing the narrowed per-agent value.
**primeInvokedSkills:** `handlePrimeInvokedSkills` in `services/Endpoints/agents/initialize.js` now uses `primaryConfig.codeEnvAvailable` (per-agent, narrowed) instead of the raw admin flag. A skills-only primary agent won't re-prime historical skill files to the sandbox even when the admin enabled the capability globally.

**Efficiency:** one extra `&&` in `initializeAgent`. No runtime hot-path cost — the `includes()` scan on `agent.tools` was already happening for the `execute_code` expansion gate; it's now just bound to a local. Tool execution closures read `ctx.codeEnvAvailable === true` (property access + strict equality, O(1)).

**Ephemeral-agent note:** per-agent narrowing is authoritative for both persisted and ephemeral flows. The ephemeral toggle (`ephemeralAgent.execute_code`) is reconciled into `agent.tools` upstream in `packages/api/src/agents/added.ts`, so `agent.tools.includes('execute_code')` is the single source of truth by the time `initializeAgent` runs.

**Tests:** two new regression tests pin the narrowing contract:

- `initialize.test.ts` — four-quadrant matrix on `InitializedAgent.codeEnvAvailable` (cap on × agent asks, cap on × doesn't ask, cap off × asks, neither). Catches future refactors that drop either half of the AND.
- `skills.test.ts` — `injectSkillCatalog` with `codeEnvAvailable: false` against an active skill catalog must NOT register `bash_tool` even though it still registers `read_file` + `skill`. This is the state a skills-only agent gets post-narrowing.

All 191 affected packages/api tests pass + 836 backend api tests pass. Typecheck clean, lint clean.

* 🧽 refactor: Comprehensive-review polish — hoist tool defs, pin verifyToolAuth contract, doc appConfig

Addresses the comprehensive review of Phase 8.
Findings mapped:

**#1 (MINOR): `verifyToolAuth` unconditional auth for execute_code**

- Added doc comment explicitly stating the deployment contract (admin capability → reachable sandbox; no per-check health probe to keep UI-gate queries O(1)).
- New `api/server/controllers/__tests__/tools.verifyToolAuth.spec.js` with 4 regression tests pinning the contract:
  1. `authenticated: true` + `SYSTEM_DEFINED` for execute_code.
  2. 404 for unknown tool IDs.
  3. `loadAuthValues` is never consulted (catches a future revert that would resurface the per-user key-entry dialog).
  4. Response `message` is never `USER_PROVIDED`.

**#2 (MINOR): `openai/service.ts` undocumented `appConfig` dependency**

- Expanded the `ChatCompletionDependencies.appConfig` JSDoc to spell out that omitting it silently disables code execution for agents with `execute_code` in their tools. External consumers of `createAgentChatCompletion` now have the contract documented at the type boundary.

**#5 (NIT): `registerCodeExecutionTools` re-allocates tool defs**

- Hoisted `READ_FILE_DEF` and `BASH_TOOL_DEF` to module-level `Object.freeze`d constants. The shapes derive entirely from static `@librechat/agents` exports, so a single frozen object per tool is safe to share across every agent init. Eliminates the ~4-property allocations on every call (including the common second-call no-op path).

**#6 (NIT): Verbose history-priming comment in initialize.js**

- Trimmed the 16-line `handlePrimeInvokedSkills` block to a 5-line summary with `@see InitializedAgent.codeEnvAvailable` pointer. The canonical narrowing explanation lives on the type; the controller comment is just the ACL-vs-capability rationale.

**Skipped:**

- #3 (memory spec tests a mirror function): reviewer self-dismissed as a design tradeoff; the enum-literal pin already catches the highest-risk drift vector.
- #4 (cross-repo contract for `createCodeExecutionTool`): user will explicitly install the latest `@librechat/agents` dev version once the companion PR publishes, so the version pin will be authoritative.
- #7 (migration/deprecation note for self-hosters): out of scope per user direction — release notes handle this.

Tests verified: 679 packages/api + 840 backend api tests pass. Typecheck + lint clean.

* 🔧 chore: Update @librechat/agents version to 3.1.68-dev.1 across package-lock and package.json files

This commit updates the version of the `@librechat/agents` package from `3.1.68-dev.0` to `3.1.68-dev.1` in the `package-lock.json` and relevant `package.json` files. This change ensures consistency across the project and incorporates any updates or fixes from the new version. |
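The idempotent registration pattern the commits above describe — `.has()` dedupe, frozen module-level tool defs, and the no-op fast path that returns the input by reference — can be sketched as below. The definition shapes and Map-based registry are simplified assumptions, not the actual `@librechat/agents` types:

```javascript
// Sketch of the idempotent code-execution tool registration — shapes assumed.
const BASH_TOOL_DEF = Object.freeze({ name: 'bash_tool' });
const READ_FILE_DEF = Object.freeze({ name: 'read_file' });

function registerCodeExecutionTools(toolRegistry, toolDefinitions) {
  // No-op fast path: both tools were registered by a prior caller in the
  // same run — return the input array by reference (no fresh allocation).
  if (toolRegistry.has(BASH_TOOL_DEF.name) && toolRegistry.has(READ_FILE_DEF.name)) {
    return toolDefinitions;
  }
  const result = [...toolDefinitions];
  for (const def of [BASH_TOOL_DEF, READ_FILE_DEF]) {
    if (!toolRegistry.has(def.name)) {
      toolRegistry.set(def.name, def); // .has() dedupe keeps one copy per run
      result.push(def);
    }
  }
  return result;
}
```

Calling this from both `initializeAgent` and `injectSkillCatalog` in the same initialization sequence makes the second call a no-op, which is how exactly one copy of each tool reaches the LLM.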
||
|
|
ac913aa886 |
🔐 chore: Skills Permissions Housekeeping, Reachable Admin Dialog + Defaults Tests (#12766)
* 🔐 chore: Skills permissions housekeeping — reachable admin dialog + defaults tests
Phase 9 housekeeping pass. Skills was already gated on `PermissionTypes.SKILLS`
(seeded from `interface.skills`) and `AgentCapabilities.skills` everywhere it
matters, but two smaller parity gaps with Prompts/Memory/MCP remained:
- The skills admin settings dialog had no UI entry point. The only mount was
inside an unused `FilterSkills` component, so admins had no way to reach the
role-permissions editor for skills. Mounted it in `SkillsAccordion` gated on
`SystemRoles.ADMIN`, matching the `PromptsAccordion` pattern.
- No regression lock on skill permission defaults. `roles.spec.ts` asserted
structural completeness but not the specific shape — a future refactor
could silently flip ADMIN's `USE/CREATE/SHARE/SHARE_PUBLIC` to false or
drop SKILLS from USER defaults without failing. Added explicit Skills
assertions for both roles.
- No lock on `AgentCapabilities.skills` being in `defaultAgentCapabilities`.
Added an assertion in `endpoints.spec.ts`.
* 🩹 fix: Remove duplicate `const appConfig` in Responses createResponse
The Skills polish commit (#12760) added `const appConfig = req.config;` at
line 381 inside the try block of `createResponse`, without noticing that
the earlier drive-by fix (
|
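The TDZ failure mode referenced in the two commits above (a duplicate `const appConfig` inside the try block of `createResponse`) can be reproduced in miniature. This is an illustrative repro, not the actual controller code:

```javascript
// Illustrative repro of the temporal-dead-zone bug described above.
function createResponse(req) {
  const appConfig = req.config; // outer, function-scoped binding
  try {
    // This reference resolves to the INNER `const appConfig` declared below,
    // which is still in its temporal dead zone → ReferenceError at runtime.
    const providers = appConfig?.endpoints;
    const appConfig = req.config; // duplicate declaration shadowing the outer one
    return String(providers);
  } catch (err) {
    // As in the bug report, the error is swallowed by the surrounding catch.
    return err.name;
  }
}
```

Removing the inner declaration makes every reference in the try block resolve to the already-initialized outer binding, which is the shipped fix.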
||
|
|
91cd3f7b7c |
🧽 refactor: Skills polish: precedence-aware body validation, controller drop logs, SkillPills rename (#12760)
Post-merge sanity-review cleanup on top of #12746:

- `createSkill` / `updateSkill` now parse SKILL.md body's always-apply status once and reuse it for both validation and derivation (was parsing the same YAML block twice per call).
- Body-inline `always-apply:` validation becomes precedence-aware: a caller sending an explicit top-level `alwaysApply` or a structured `frontmatter['always-apply']` no longer gets rejected for a typo in the body — the body value is never consulted at derivation time when a higher-precedence source wins. New tests cover the three relevant interactions (explicit+body-typo, frontmatter+body-typo, body-only typo still rejects).
- OpenAI and Responses controllers now emit a `logger.warn` when `injectSkillPrimes` drops always-apply primes to stay under `MAX_PRIMED_SKILLS_PER_TURN`. `injectSkillPrimes` already logs internally; the controller-level warn adds endpoint context so operators can identify which path hit the cap at a glance. Mirrors AgentClient's existing log.
- Rename `ManualSkillPills` → `SkillPills` (component + type + file + test + all JSDoc references). The component handles both manual and always-apply pills now; the original name was carried over from the manual-only Phase 3 and misleads new readers.
- Drive-by fix: declare `appConfig = req.config` at the top of `createResponse` in `responses.js` — it was used unqualified on lines 381/396, which silently evaluated to `undefined` (via optional chaining) and disabled the skills-capability check on the Responses endpoint. Pre-existing, surfaced by lint on the touched file. |
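The precedence-aware derivation described above can be sketched roughly as follows. The function and parameter names are illustrative, not the actual `createSkill` internals — the point is that the body value is only validated when no higher-precedence source decides the outcome:

```javascript
// Illustrative precedence-aware derivation of the always-apply flag.
function deriveAlwaysApply(explicitAlwaysApply, frontmatter, bodyValue) {
  if (typeof explicitAlwaysApply === 'boolean') {
    return { value: explicitAlwaysApply }; // explicit top-level field wins
  }
  if (frontmatter && typeof frontmatter['always-apply'] === 'boolean') {
    return { value: frontmatter['always-apply'] }; // structured frontmatter next
  }
  if (bodyValue === 'true') return { value: true };
  if (bodyValue === 'false' || bodyValue === undefined) return { value: false };
  // Body-only typo: the body actually decides here, so reject it.
  return { value: false, error: `invalid always-apply value: ${bodyValue}` };
}
```

A body typo like `always-apply: tru` is therefore ignored when an explicit or frontmatter value wins, but still rejected when the body is the only source.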
||
|
|
7581540ab6 |
🔌 refactor: Decouple bash_tool from Per-User CODE_API_KEY (#12712)
* 🔌 refactor: Decouple bash_tool from Per-User CODE_API_KEY
Phase 4 of Agent Skills umbrella (#12625): gate bash_tool and skill
file priming on the `execute_code` capability only. Thread a boolean
`codeEnvAvailable` through `enrichWithSkillConfigurable` and
`primeInvokedSkills` in place of the old per-user `codeApiKey` +
`loadAuthValues` plumbing. The sandbox API key is the LibreChat-
hosted service key — system-level, not a user secret — so the
per-user lookup was legacy; when needed, it's read directly from
`process.env[EnvVar.CODE_API_KEY]` inside the capability gate.
`handleSkillToolCall` and `primeInvokedSkills` gate sandbox uploads
on `codeEnvAvailable` first, preventing skill-file uploads to the
sandbox when an agent has `execute_code` disabled even if the env
var happens to be set. The agents library resolves the env key
itself for `bash_tool`, so `ToolService.js` drops the
`loadAuthValues` lookup and the "Code execution is not available"
placeholder tool in favor of a plain `createBashExecutionTool({})`
with a loud error log if the env var is missing.
Also fixes a pre-existing `appConfig`-undefined lint error in
`responses.js`/`createResponse` that surfaced when this file was
touched (declares `const appConfig = req.config` at function top,
matching the existing pattern in other controllers).
Preserves the `skillPrimedIdsByName` threading added by Phase 3/5/6
and all Phase 3/5/6 call-site signatures. Adds
`skillConfigurable.spec.ts` (5 cases pinning the new surface) and
`skillFiles.spec.ts` (4-way matrix of capability × env key for
`primeInvokedSkills`).
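The capability gate this commit describes can be sketched as follows. This is an illustrative TypeScript sketch under assumed names: the env-var string and both helper signatures are stand-ins, not the actual LibreChat API.

```typescript
// Stand-in for EnvVar.CODE_API_KEY (assumed name, for illustration only).
const CODE_API_KEY_ENV = 'LIBRECHAT_CODE_API_KEY';

/**
 * System-level check: the `execute_code` capability must be enabled
 * AND the hosted sandbox key must be present in the environment.
 */
function isCodeEnvAvailable(
  capabilities: Set<string>,
  env: Record<string, string | undefined>,
): boolean {
  return capabilities.has('execute_code') && Boolean(env[CODE_API_KEY_ENV]);
}

/**
 * Skill-file sandbox uploads gate on the boolean first, so an agent
 * with execute_code disabled never uploads even when the env var is set.
 */
function shouldUploadSkillFiles(codeEnvAvailable: boolean, fileCount: number): boolean {
  return codeEnvAvailable && fileCount > 0;
}
```

The point of threading the boolean instead of the key itself is that callers like `primeInvokedSkills` never see the secret; only the gate reads the environment.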
* 🧪 refactor: Address Codex Review Feedback
Resolves findings from the second codex review on #12712:
- MAJOR: `handlers.spec.ts` now covers the `codeEnvAvailable` gate in
`handleSkillToolCall` across three cases (gate off, gate on + env
set, gate on + env unset). The gate is the critical regression
prevention — a future edit that drops it would silently re-enable
sandbox uploads for agents with `execute_code` disabled.
- MINOR: Hoist `codeEnvAvailable` and `skillPrimedIdsByName` out of
`loadTools` closures in `openai.js` and `responses.js`. Both values
are fixed once `initializeAgent` resolves, so recomputing them on
every tool execution was wasted work. `responses.js` shares a single
pair between its streaming and non-streaming branches.
- MINOR: `skillFiles.spec.ts` now has a test that exercises the full
upload path end-to-end with real file records, asserting
`batchUploadCodeEnvFiles` is called with the env-sourced apiKey and
the correct file set (including the synthetic `SKILL.md`).
- NIT: Finish the `appConfig` extraction in `responses.js/createResponse`
— replaces the remaining `req.config` references with `appConfig` for
consistency with the pattern in other controllers.
No behavioral changes beyond what was already in place; this is
coverage and readability polish.
* 🧷 test: Tighten Spec Hygiene Per Codex Nit Feedback
Round-3 codex review flagged two NITs on the test code added in the
previous commit:
- Replace `_id: 'skill-id' as unknown as never` in the new
`makeSkillHandlerWithFiles` helper with a real `Types.ObjectId`,
matching the pattern used by the primed-skill tests further up in
the same file (and by `skillFiles.spec.ts`). The `never` cast
hides the fact that `_id` really is a string / ObjectId at runtime.
- Replace the ad-hoc `{ on, pipe, read }` stub with a real
`Readable.from(Buffer.from(''))` in the upload-path test. The stub
worked only because `batchUploadCodeEnvFiles` is mocked and never
iterates the stream; `Readable.from` satisfies the same contract
and is robust to any future partial-real replacement of the upload
function.
Pure test-hygiene improvements; no runtime code touched.
* 🧹 chore: Remove Duplicate appConfig Declaration After Rebase
The upstream `
|
||
|
|
5c69d1f7fa |
🩹 fix: define appConfig in Responses API createResponse
responses.js referenced `appConfig` on lines 381 / 396 without ever declaring it, so `createResponse` threw `ReferenceError: appConfig is not defined` the moment it entered the skills-capability block. The existing `recordCollectedUsage` unit tests silently stopped running (try/catch swallowed the error into `logger.error`), so CI showed 6 assertions failing with "Expected calls: 1, Received: 0" — the function never reached the recorder.

Mirror initialize.js: seed `appConfig = req.config` at the top of the try block, before the `enabledCapabilities` Set it feeds into. The two later `appConfig: req.config` call-sites keep the direct reference — only the lexical reads needed a binding.

This failure already exists on origin/feat/agent-skills (the same 6 tests fail there with the same stack) but blocks our branch too since we're rebased on top, so fix it here and cherry-pick back if needed. |
||
|
|
89b6bffc46 | 🧼 fix: Missing Enum imports | ||
|
|
dfc3dfa57f |
📍 feat: always-apply frontmatter: auto-prime skills every turn (#12746)
* 🔁 refactor: Rebase always-apply work onto merged structured-frontmatter columns
Phase 6 (disable-model-invocation / user-invocable / allowed-tools) landed first on feat/agent-skills. Reconcile this branch with the new mainline:
- Thread alwaysApplySkillPrimes through unionPrimeAllowedTools alongside manualSkillPrimes, applying the combined MAX_PRIMED_SKILLS_PER_TURN ceiling before loading tools.
- Add `_id` to ResolvedAlwaysApplySkill to match Phase 6's ResolvedManualSkill shape (read_file name-collision protection).
- Register 'always-apply' in ALLOWED_FRONTMATTER_KEYS / FRONTMATTER_KIND so Phase 6's validator recognizes it.
- Drop frontmatter from the listSkillsByAccess projection; the backfill helper remains as defensive code but its read path is no longer exercised on summary rows (no legacy rows exist — the branch never shipped), saving ~200KB per page.
- Retire the corresponding "backfills legacy on summaries" test.
- Plumb listAlwaysApplySkills through the JS controllers + endpoint initializer so the always-apply resolver sees a real DB method.
* 🧹 fix: Dedupe manual/always-apply overlap, share YAML util, tidy comments
Addresses review findings:
- Cross-list dedup: when a user $-invokes a skill that is also marked always-apply, the always-apply copy is now dropped so the same SKILL.md body never primes twice in one turn. Manual wins (explicit intent, closer to the user message). Dedup runs in both initializeAgent (so persisted user-bubble pills stay in sync) and injectSkillPrimes (defense-in-depth at splice time). New test cases cover single-overlap, partial-overlap, and dedup-before-cap.
- DRY: extract stripYamlTrailingComment to packages/data-schemas/src/utils/yaml.ts; packages/api/src/skills/import.ts now imports the shared helper. Also drop the redundant inner stripYamlTrailingComment call inside parseBooleanScalar — the call site already strips.
- Mark injectManualSkillPrimes as @deprecated in favor of injectSkillPrimes (kept for external consumers of @librechat/api).
- Document SKILL_TRIGGER_MODEL as forward-looking plumbing for the model-invoked path rather than leaving it as a bare unused export.
- Replace the stale "frontmatter is included" comment on listSkillsByAccess with an accurate explanation of why it was intentionally excluded.
* 🔒 fix: Include always-apply primes in skillPrimedIdsByName + clear alwaysApply on body opt-out
Two bugs flagged by Codex review:
P1 (read_file): `manualSkillPrimedIdsByName` only carried manual-invocation primes, so an always-apply skill with `disable-model-invocation: true` was blocked from reading its own bundled files, and same-name collisions could resolve to a different doc than the one whose body got primed.
- Rename `buildManualSkillPrimedIdsByName` → `buildSkillPrimedIdsByName` (accepts both manual + always-apply prime arrays).
- Rename the configurable field `manualSkillPrimedIdsByName` → `skillPrimedIdsByName` throughout the plumbing (skillConfigurable.ts, handlers.ts, CJS callers, tests).
- Overlap resolution: manual wins on the rare edge case where the same name appears in both arrays (upstream dedup should prevent this, but defensive merging treats manual as authoritative).
- New tests: (1) gate-relaxation fires for always-apply primes, (2) `_id` pinning works for always-apply same-name collisions.
P2 (updateSkill): when a body update had no `always-apply:` key, `extractAlwaysApplyFromBody` returned `absent` and the column was left untouched. A skill that was once `alwaysApply: true` would keep auto-priming even after its SKILL.md no longer declared the flag.
- Treat `absent` as a positive "not always-apply" declaration when the body is explicitly submitted; flip the column to `false`.
- Explicit top-level `alwaysApply` still wins (three-source precedence unchanged).
- New tests: body removes key → false, body has no frontmatter at all → false, explicit + body-without-key → explicit wins.
* 🧵 refactor: Collapse duplicate prime types + tighten parse + test hygiene
Sanity-check review follow-ups:
- Collapse `ResolvedManualSkill` / `ResolvedAlwaysApplySkill` into a single `ResolvedSkillPrime` canonical interface with two backward-compatible type aliases. Both resolvers feed the same pipeline stages (injectSkillPrimes, unionPrimeAllowedTools, buildSkillPrimedIdsByName); the per-source distinction lives on `additional_kwargs.trigger`, not on the resolver output.
- Move the `always-apply` branch in `parseFrontmatter` to operate on the raw post-colon text. The outer `unquoteYaml` was fine today because it's idempotent on non-quoted strings, but running it twice (once per line, once after stripping the inline comment) would be fragile if the unquoter ever grows richer YAML-escape handling.
- Add the missing `alwaysApplyDedupedFromManual: 0` field to the `injectSkillPrimes` mocks in `openai.spec.js` and `responses.unit.spec.js` so they match the full `InjectSkillPrimesResult` contract.
- Insert the blank line between the `unionPrimeAllowedTools` and `resolveAlwaysApplySkills` describe blocks.
* 🔧 fix(tsc): Cast mock.calls via `unknown` for strict tuple destructure
`getSkillByName.mock.calls[0]` is typed as `[]` by jest's generic default; a direct cast to `[string, ..., ...]` fails TS2352 under `--noEmit` even though the runtime shape matches. Go through `as unknown as [...]` like the earlier test in the same file so CI's type-check step stays green.
* 🪢 fix: Propagate skillPrimedIdsByName into handoff agent tool context
Handoff agents go through the same `initializeAgent` flow as the primary (with `listAlwaysApplySkills` now plumbed), so they resolve their own `manualSkillPrimes` and `alwaysApplySkillPrimes` — but the `agentToolContexts.set(...)` for handoff agents didn't carry `skillPrimedIdsByName` into the per-agent context. That meant `handleReadFileCall` fell back to the full ACL set + a `prefer*` flag for handoff agents: same-name collisions could resolve to a different doc than the one whose body got primed, and a `disable-model-invocation: true` skill primed via manual `$` or always-apply inside the handoff flow would be blocked from reading its own bundled files. Build the map via `buildSkillPrimedIdsByName(config.manualSkillPrimes, config.alwaysApplySkillPrimes)` for every handoff tool context so `read_file` behaves identically across primary and handoff agents. |
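The manual-wins cross-list dedup described in this commit can be sketched in a few lines. The prime shape and cap value below are assumptions for illustration; only the ordering rule (dedup first, then cap) mirrors the commit text.

```typescript
// Sketch of the cross-list dedup: manual wins, and dedup runs before
// the combined MAX_PRIMED_SKILLS_PER_TURN cap is applied.
interface SkillPrime {
  name: string;
  body: string;
}

const MAX_PRIMED_SKILLS_PER_TURN = 10; // illustrative value

function combinePrimes(manual: SkillPrime[], alwaysApply: SkillPrime[]): SkillPrime[] {
  const manualNames = new Set(manual.map((p) => p.name));
  // drop the always-apply copy of any $-invoked skill so the same
  // SKILL.md body never primes twice in one turn
  const deduped = alwaysApply.filter((p) => !manualNames.has(p.name));
  // cap AFTER dedup, so a dropped duplicate frees a slot
  return [...manual, ...deduped].slice(0, MAX_PRIMED_SKILLS_PER_TURN);
}
```

Running the dedup before the cap is the load-bearing choice: a duplicate that would be dropped anyway should not consume one of the limited prime slots.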
||
|
|
82173f7b91 |
🛡️ feat: Persist & enforce disable-model-invocation / user-invocable / allowed-tools (#12745)
* 🧬 feat: Persist `disable-model-invocation` / `user-invocable` / `allowed-tools`
Adds first-class columns mirroring the three runtime-enforced frontmatter fields, with a `deriveStructuredFrontmatterFields` helper that maps from frontmatter at create/update time and re-syncs (via `$unset`) when fields are removed. The `listSkillsByAccess` projection includes them so the Phase 6 catalog filter and popover filter can both read off the summary row. Marks `invocationMode` as @deprecated on `TSkill` and the `InvocationMode` enum — the runtime now reads the persisted pair instead.
* 🛡️ feat: Enforce frontmatter at runtime (catalog, skill tool, manual resolver, tool union)
Wires the persisted columns into actual runtime behavior across all four invocation paths:
- `injectSkillCatalog` excludes `disableModelInvocation: true` skills before catalog formatting — they cost zero context tokens and stay invisible to the model.
- `handleSkillToolCall` rejects with a clear error when the model names a skill marked `disable-model-invocation: true` (defends against a stale-cache or hallucinated invocation getting past the catalog filter).
- `resolveManualSkills` skips `userInvocable: false` skills with a warn log so an API-direct caller can't bypass the popover-side filter.
- `unionPrimeAllowedTools` collects skill-declared `allowed-tools` minus what's already on the agent; `initialize.ts` re-runs `loadTools` for the extras and merges the resulting `toolDefinitions` into the agent's effective set for the turn.
Tool-name resolution is tolerant — unknown names silently drop with a debug log so cross-ecosystem skills referencing yet-to-be-implemented tools (Claude Code's `edit_file`, etc.) import without breaking. The agent document is never modified; the union is turn-scoped. Helper exports (`unionPrimeAllowedTools`) are structured so Phase 5's always-apply primes flow through the same union (combined `[...manualPrimes, ...alwaysApplyPrimes]`) once the resolver lands. The skill handler wire format gains the three fields so clients can render them on detail / list views.
* 🎛️ feat: `$` popover reads `userInvocable` instead of UI-only `invocationMode`
Replaces the phase-1 UI-only `invocationMode` check with the persisted `userInvocable` field (mirrors the `user-invocable` frontmatter). Skills authored with `user-invocable: false` no longer surface in the popover; the backend resolver enforces the same rule for defense-in-depth. Default-visible behavior is preserved: skills without an explicit `userInvocable` value (older rows, freshly imported skills that don't declare the field) stay visible — only an explicit `false` hides them. Test fixture updated to reflect the new field.
* 🔧 fix: Address Phase 6 review findings
Codex P2 + reviewer #1: Single `loadTools` call with the union of `agent.tools + allowed-tools`. The earlier two-call approach dropped `userMCPAuthMap` / `toolContextMap` / `actionsEnabled` from the skill-added pass — an MCP tool gained via `allowed-tools` would be visible to the model but fail at execution without per-user auth context. Resolution of `manualSkillPrimes` is hoisted before `loadTools` so the union can be computed up-front; the dropped-tools debug log now compares loaded vs. requested across the single call.
Codex P3 + reviewer #2: `injectSkillCatalog.activeSkillIds` now includes `disable-model-invocation: true` skills. The runtime ACL check in `handleSkillToolCall` previously couldn't reach the explicit "cannot be invoked by the model" rejection because the broader access set excluded those skills. Catalog text and tool registration still gate on the visible subset (zero-context-token guarantee preserved); only the per-user `isActive` filter is a hard exclusion now.
Reviewer #1 (try/catch around loadTools, MAJOR): A single bad `allowed-tools` entry from a shared skill could crash the entire turn. Now wrapped — on failure with extras, retry with just `agent.tools` and continue (the dropped-tools debug log surfaces what vanished). If the retry-without-extras still throws, propagate; the agent's own tools are the load-bearing surface.
Reviewer #3 (integration tests, MAJOR): Added six tests in `initialize.test.ts` covering the full `allowed-tools` loading path: union pass-through, no-extras short-circuit, agent-baseline dedup, loadTools throw + retry, propagated throw without extras, and the empty-tools edge case.
Smaller cleanups bundled in:
- Reviewer #4: Moved `logger` import to the package-imports section (was wedged among local imports).
- Reviewer #5: Removed the unused index on `disableModelInvocation` (filtering happens application-side in `injectSkillCatalog`; the index cost write overhead for zero query benefit).
- Reviewer #6: Swapped the order of the `userInvocable` and body checks in `resolveManualSkills` so the more authoritative author-decision reason surfaces first when both apply.
- Reviewer #8: Documented the `allowedTools` enforcement gap on the schema + type — model-invoked skills (mid-turn `skill` tool calls) do NOT trigger tool union, since adding tools after the graph starts would require a rebuild. Manual / always-apply (Phase 5) primes are the supported paths.
- Reviewer #9: Renamed the `dmi` / `ui` / `at` locals to `disableModelInvocationRaw` / `userInvocableRaw` / `allowedToolsRaw` in `deriveStructuredFrontmatterFields`.
Reviewer #7 (DRY shared `getSkillByName` return type) deferred — field sets diverge meaningfully across the three call sites (handler needs `body + fileCount`; resolver needs `author + allowedTools + userInvocable`; the InitializeAgentDbMethods contract needs the superset). A `Pick<>`-based consolidation is a follow-up cleanup.
* 🔧 fix: Address codex iter 2 — catalog quota + duplicate-name dedup
P1: The `injectSkillCatalog` cap now counts only model-visible skills, not the merged active set. The previous behavior let a tenant with many `disable-model-invocation: true` rows near the top of the cursor exhaust the 100-slot quota before any invocable skill got scanned — the catalog could end up empty even though invocable skills existed further down the paginated results. `MAX_CATALOG_PAGES` stays the ceiling on scan budget; only `visibleCount` drives the early exit on quota fill.
P2: When an invocable and a `disable-model-invocation: true` skill share a name, drop the disabled doc(s) from `activeSkillIds`. Without this dedup, `getSkillByName` (which sorts by `updatedAt` desc) could pick the disabled doc, and every model call to the cataloged name would fail with "cannot be invoked by the model" instead of executing the visible skill. When ONLY a disabled doc exists for a name, it stays in `activeSkillIds` so the explicit-rejection error path still fires for hallucinated invocations.
Tests: 3 new cases in `injectSkillCatalog` covering (a) cap counted on visible skills only, (b) same-name collision drops the disabled doc, (c) sole-disabled-name case keeps the disabled doc.
* 🔒 fix: Apply `disable-model-invocation` gate to read_file too (codex iter 3 P1)
`activeSkillIds` is shared between the `skill` and `read_file` handlers. The skill-tool gate was applied last iteration, but `handleReadFileCall` authorized purely on `getSkillByName(..., accessibleIds)` — so a model that learned a hidden skill's name (stale catalog or hallucination) could still read its `SKILL.md` body or bundled files via `read_file`, defeating the contract. The same explicit rejection now fires from both handlers; no change needed to the ACL set itself (disabled docs stay in `activeSkillIds` so the explicit error path keeps firing). Two new tests in `handlers.spec.ts` cover the read_file gate and regression-protect the happy path.
* 🔧 fix: Address codex iter 4 — manual-prime exception + legacy frontmatter backfill
P1: Scope the `read_file` `disableModelInvocation` gate to AUTONOMOUS model probes only. A user-invoked `$` skill that is also marked `disable-model-invocation: true` had its bundled `references/*` / `scripts/*` files unreadable, leaving the manually-primed skill body referencing files the model couldn't load. Now the handler bypasses the gate when the skill name appears in `manualSkillNames` (the per-turn allowlist threaded from `manualSkillPrimes` → `agentToolContexts` → `enrichWithSkillConfigurable` → `mergedConfigurable`). Defense-in-depth: the bypass is scoped to the specific names in the allowlist; a different disabled skill name is still rejected.
P2: Read-time fallback for legacy skills authored before Phase 6 landed the structured columns. `user-invocable: false` / `disable-model-invocation: true` set in `frontmatter` (the validator already accepted those keys) but with no derived column would incorrectly evaluate as "user-invocable / model-allowed" until a save backfilled the columns. The new `backfillDerivedFromFrontmatter` helper fills undefined columns from frontmatter at read time in both `getSkillByName` and `listSkillsByAccess` — the column wins when both are set; frontmatter fills the gap when only it is set. No DB writes; the next `updateSkill` naturally persists. The `listSkillsByAccess` projection is expanded to include `frontmatter` (bounded by the validator, payload impact small) so summaries can also be backfilled.
Sticky-primed disabled skills (ones invoked in prior turns of the same conversation) are not yet in the manual-prime allowlist — same-turn manual invocation is the load-bearing path codex flagged; the sticky-turn case is a known limitation tracked for a follow-up.
Tests: 2 new in handlers.spec.ts (manual-prime allows + name-scoped block holds), 3 new in skill.spec.ts (legacy backfill via getSkillByName + listSkillsByAccess + column-wins precedence).
* 🔧 fix: Address codex iter 5 — propagate manualSkillNames + keep read_file
P1: `enrichWithSkillConfigurable` is also called from `openai.js` and `responses.js` (the OpenAI Responses + completions endpoints). Both were ignoring the new `manualSkillNames` parameter, which meant the manual-prime exception in the `read_file` gate (iter 4) only worked on the agents endpoint. Now all three call sites pass `primaryConfig.manualSkillPrimes?.map(p => p.name)` so manual `$` invocations of disabled skills work consistently across endpoints.
P2: When every accessible skill is `disable-model-invocation: true`, the catalog text and `skill` tool are correctly omitted (no model-reachable targets) — but `read_file` and `bash_tool` MUST still be registered. A user manually invoking such a skill gets its SKILL.md body primed into context; if the body references `references/foo.md` or `scripts/run.sh`, those reads need a registered tool. Restructured `injectSkillCatalog` so `skill` registration is gated on `catalogVisibleSkills.length > 0` while `read_file` (always) and `bash_tool` (when codeEnvAvailable) register whenever any active skill is in scope.
Tests: the existing all-disabled test is rewritten to assert read_file IS registered + skill is NOT; a new test confirms bash_tool joins it when codeEnvAvailable.
* 🔧 fix: Address codex iter 6 — name-collision consistency via preferInvocable
P2a (resolveManualSkills): a name collision between an older user-invocable doc and a newer non-user-invocable doc made manual `$` invocation silently no-op. The popover surfaced the older invocable doc; the resolver looked it up by name; `getSkillByName` returned the newer non-invocable doc; the resolver skipped on `userInvocable: false`.
P2b (handler / runtime ACL): with same-name duplicates (e.g. older invocable + newer disabled), the manual prime resolved to one doc while later `read_file` / `skill` execution resolved a different doc through `activeSkillIds`. The model could follow one SKILL.md body while reading files from a different skill.
Both share a root cause: `getSkillByName` always returned the newest match and let the caller filter, but with collisions the newest can be something the caller didn't want.
Fix: extend `getSkillByName` with `options.preferInvocable`. When true, prefer the newest doc satisfying BOTH `userInvocable !== false` AND `disableModelInvocation !== true` (with frontmatter backfill); fall back to the newest match otherwise. The fast path is preserved when the caller doesn't opt in.
Callers passing `preferInvocable: true`:
- `resolveManualSkills` — picks the popover-visible invocable doc even when a newer disabled / non-user-invocable duplicate exists.
- `handleSkillToolCall` — keeps execution aligned with the catalog; falls back to the disabled doc only when no invocable variant exists (so the explicit "cannot be invoked by the model" gate still fires for the hallucinated-disabled-name case).
- `handleReadFileCall` — same alignment, plus the manual-prime exception added in iter 4 still applies.
Tests:
- 2 new in skill.spec.ts (preferInvocable picks the invocable doc when a collision exists; falls back to newest when no clean-invocable doc exists).
- 1 new in skills.test.ts (resolver passes preferInvocable through).
- 2 new in handlers.spec.ts (skill tool + read_file pass it).
- Existing initialize.test.ts assertion updated for the new option.
* 🔧 fix: Address codex iter 7 — split preferInvocable into per-axis flags
The previous unified `preferInvocable` filter required BOTH `userInvocable !== false` AND `disableModelInvocation !== true`. That was wrong for the model paths: `userInvocable: false` skills are model-only and remain valid `skill` / `read_file` invocation targets. A duplicate-name scenario where the newer cataloged doc was model-only would let the older user-invocable variant shadow it on every model call.
Split the option into two independent axes:
- `preferUserInvocable` — for manual paths (`$` popover). Skips docs with `userInvocable: false`. Disable-model-invocation status is irrelevant; iter 4 explicitly supports manual prime of disabled skills.
- `preferModelInvocable` — for model paths (`skill` / `read_file` handlers). Skips docs with `disableModelInvocation: true`. User-invocable status is irrelevant; model-only skills are valid here.
Both flags fall back to the newest match when no preferred doc exists, so the explicit-rejection error paths still fire correctly in the sole-disabled-name case.
Callers updated:
- `resolveManualSkills` → `preferUserInvocable: true`
- `handleSkillToolCall` / `handleReadFileCall` → `preferModelInvocable: true`
Tests:
- New spec test for preferModelInvocable not filtering on userInvocable.
- Existing preferInvocable test renamed/split to cover the new axes.
- New test asserts preferUserInvocable still returns disabled docs (preserves iter 4 manual-disabled support).
- Caller tests assert each path passes the right single flag and does NOT pass the wrong one.
* 🔧 fix: TypeScript type-check failure in handlers.spec.ts (CI green)
`jest.fn(async () => ...)` without explicit args infers an empty tuple for the call signature, so `mock.calls[0][2]` flagged as "Tuple type '[]' has no element at index '2'." Cast to `unknown[]` then narrow to the expected option shape. Behavior unchanged. Caught by the `Type check @librechat/api` CI step (.github/workflows/backend-review.yml).
* 🔧 fix: Address codex iter 8 — undefined-result fallback + read_file alignment
P1 (loadTools returning undefined): Production loaders (`createToolLoader` in `initialize.js` / `openai.js` / `responses.js`) wrap `loadAgentTools` in try/catch and return `undefined` on failure rather than throwing. Without explicit handling, my iter-1 try/catch only fired for thrown errors — a silent failure on a skill-added tool would fall through to the empty fallback and silently DROP the agent's baseline tools for the turn (much worse than just losing the extras). Added an `undefined`-result branch that retries with just `agent.tools`, mirroring the throw branch. A test pins both behaviors.
P2 (read_file alignment with manual prime): When a skill is in this turn's `manualSkillNames`, the `read_file` handler now uses `preferUserInvocable` instead of `preferModelInvocable`. Same name-collision rule as `resolveManualSkills`, so the doc whose files get read is the same doc whose body got primed. For autonomous probes (skill not in `manualSkillNames`), the handler keeps `preferModelInvocable` to align with the catalog the model saw. Two new tests cover both branches and regression-protect that the wrong flag isn't passed.
* 🔧 fix: Address codex iter 9 — pin read_file lookup to primed skill _id
P1 (manually-primed disabled IDs were dropped from activeSkillIds): The `executableSkills` dedup in `injectSkillCatalog` correctly drops `disable-model-invocation: true` duplicates when an invocable doc shares the name — but `resolveManualSkills` legitimately primes disabled docs (iter 4 supports manual `$` invocation of disabled skills). When the resolver primed a disabled doc, the read_file handler couldn't find it in the (deduped) `activeSkillIds` and either resolved a different same-name skill or returned not-found.
Fix: `ResolvedManualSkill` now carries `_id`; the legacy `initialize.js` / `openai.js` / `responses.js` controllers build a `manualSkillPrimedIdsByName` map and `enrichWithSkillConfigurable` passes it into `mergedConfigurable`. `handleReadFileCall` now pins its lookup's `accessibleIds` to `[primedId]` whenever the requested skill is in the map. The constrained set guarantees the lookup returns the EXACT doc the resolver primed — body/files come from the same source even when same-name duplicates exist or the dedup removed the prime's id from `activeSkillIds`. Autonomous read_file probes (skill not in the manual-primed map) keep the full ACL set + `preferModelInvocable` so they align with the catalog the model saw and the disabled-only case still fires the explicit-rejection gate.
Test fixture changes flow from `_id` becoming required on `ResolvedManualSkill`. `buildSkillPrimeContentParts` / `injectManualSkillPrimes` widen their param types to `Pick<...>` because they only read `name` / `body` and shouldn't force test literals to invent placeholder ids.
* 🧹 fix: Address independent reviewer findings (DRY + types + tests + docs)
Sanity-pass review surfaced 7 findings; addressed 6 (the 7th — DRY on inline `getSkillByName` return types — is acknowledged tech debt deferred to a follow-up).
#1 [MAJOR, DRY]: The 4-line `manualSkillPrimedIdsByName` map construction was duplicated across 4 CJS call sites (openai.js, responses.js x2, initialize.js). Extracted a `buildManualSkillPrimedIdsByName` helper in `skillDeps.js`; all four sites now call the helper. If `ResolvedManualSkill` ever renames `_id` or gains identifying fields, only the helper changes.
#2 [MINOR, type safety]: `handleReadFileCall` was casting a hex string to `Types.ObjectId[]` via `as unknown as`, relying on mongoose's auto-cast in `$in` queries. Replaced with `new Types.ObjectId(...)` so any future consumer comparing with `.equals()` / `===` gets the correct value type. Imported `Types` as a value (was type-only).
#5 [MINOR, test gap]: Added a test for the worst-case silent-failure path — both the union and base-only `loadTools` calls return undefined. The agent gets no tools but the turn doesn't crash hard; pinning that contract.
#4 [MINOR, performance]: Added a TODO on the `listSkillsByAccess` projection noting the `frontmatter` field can be dropped once a write migration backfills all pre-Phase-6 skills' columns. ~2KB/skill × 100/page is wasted bandwidth post-backfill.
#6 [NIT, docs]: `backfillDerivedFromFrontmatter` JSDoc said "Pure" right before "mutates its undefined fields in place". Replaced with "Side-effect-free w.r.t. the DB (no writes), but mutates its argument in place", which describes both halves accurately.
#7 [NIT, test determinism]: Replaced `await new Promise(r => setTimeout(r, 5))` in two same-name collision tests with an explicit `updateOne` setting `updatedAt: new Date(Date.now() - 1000)` on the older doc. Removes the wall-clock race on fast CI runners. The pagination test (line 480) still uses setTimeout — that test is pre-existing and order is incidental, not load-bearing. Existing test fixtures updated to use valid 24-char hex ObjectIds (required by the iter-9 test that constructs a real `ObjectId`).
#3 [MINOR, deferred]: The inline `getSkillByName` return type is duplicated across `handlers.ts`, `initialize.ts`, `skills.ts`. The reviewer acknowledged this as deferred; field sets diverge across call sites (handler needs `fileCount`, resolver needs `author`/`allowedTools`). A `Pick<>`-based consolidation is a clean follow-up. |
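The per-axis preference logic from iter 7 can be sketched as a standalone selection function. The doc shape, option names, and fallback rule below follow the commit text, but the concrete types are assumptions for illustration; the real `getSkillByName` is a DB query, not an in-memory filter.

```typescript
// Illustrative sketch of the per-axis preferred-doc selection.
interface SkillDoc {
  name: string;
  updatedAt: number; // epoch ms; the real code sorts on a Date field
  userInvocable?: boolean;
  disableModelInvocation?: boolean;
}

interface PreferOptions {
  preferUserInvocable?: boolean;  // manual `$` paths
  preferModelInvocable?: boolean; // model `skill` / `read_file` paths
}

function pickSkillByName(
  docs: SkillDoc[],
  name: string,
  opts: PreferOptions = {},
): SkillDoc | undefined {
  const matches = docs
    .filter((d) => d.name === name)
    .sort((a, b) => b.updatedAt - a.updatedAt); // newest first
  if (matches.length === 0) return undefined;
  let preferred: SkillDoc | undefined;
  if (opts.preferUserInvocable) {
    preferred = matches.find((d) => d.userInvocable !== false);
  } else if (opts.preferModelInvocable) {
    preferred = matches.find((d) => d.disableModelInvocation !== true);
  }
  // fall back to the newest match so the explicit-rejection paths
  // still fire in the sole-disabled-name case
  return preferred ?? matches[0];
}
```

Note how each axis ignores the other flag entirely: a model-only (`userInvocable: false`) doc is a perfectly valid pick under `preferModelInvocable`, which is exactly the shadowing bug iter 7 fixed.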
||
|
|
539c4c7e4d |
🎬 feat: Prime Manually-Invoked Skills via $ Popover (#12709)
* 🎬 feat: Prime Manually-Invoked Skills via $ Popover Lands the backend for manual skill invocation, making the $ popover deterministically prime SKILL.md before the LLM turn instead of leaving the model to discover the skill via the catalog. Flow: popover drains pendingManualSkillsByConvoId on submit, attaches names to the ask payload, controllers forward to initializeAgent, and initialize resolves each name to its body (ACL + active-state filtered, reusing the same rules as catalog injection). AgentClient splices the primes as meta HumanMessages before the user's current message. - Extract primeManualSkill / resolveManualSkills in packages/api/src/agents/skills.ts and reuse primeManualSkill inside handleSkillToolCall for a single shape source. - Thread manualSkills + getSkillByName through InitializeAgentParams / DbMethods and all three initializeAgent call sites (initialize.js, responses.js, openai.js). - Splice HumanMessage primes in client.js chatCompletion after formatAgentMessages, shifting indexTokenCountMap so hydrate still fills fresh positions correctly. - Carry isMeta / source / skillName in additional_kwargs for downstream filtering. * 🛡️ fix: Scope manual skill primes to single-agent + cap resolver input Two follow-ups to the Phase 3 priming path flagged in Codex review. Multi-agent runs: skipping the splice when agentConfigs is non-empty. `initialMessages` is shared across every agent in `createRun`, so splicing a skill body there would bypass Phase 1's per-agent `scopeSkillIds` contract — a handoff / added-convo agent with a different skill scope would see content its configuration excludes. Warn + skip is the minimal correct behavior; lifting this to per-agent initial state is a follow-up. Input bounding: `resolveManualSkills` now truncates to `MAX_MANUAL_SKILLS` (10) after dedup, with a warn listing the dropped tail. 
Controllers only validate `Array.isArray(req.body.manualSkills)`, so a crafted payload could otherwise fan out into an unbounded `Promise.all` of concurrent `getSkillByName` DB lookups. Cap lives in the resolver so every caller (including future `always-apply` in Phase 5) inherits it.

* 🧪 refactor: Testable Helpers + Payload Validation for Manual Skill Primes

Follow-ups from the comprehensive review. No behavior change for the happy path — these are architectural and defensive improvements that shrink the JS surface in /api, tighten the request-body contract, and cover the delicate splice logic with proper unit tests.

- Extract `injectManualSkillPrimes` into packages/api/src/agents/skills.ts so the message-array splice and `indexTokenCountMap` shift are unit-testable in TS. client.js now calls the helper. Tests pin the `>=` vs `>` boundary condition — a regression here would silently corrupt token accounting for every message after the insertion point.
- Extract `extractManualSkills(body)` and use in all three controllers (initialize.js, responses.js, openai.js). Replaces copy-pasted `Array.isArray(...) ? ... : undefined` with a helper that also filters non-string / empty elements — closes a type-safety gap where a crafted payload like `{"manualSkills": [123, {"$gt":""}]}` would otherwise reach `getSkillByName` and waste DB round-trips.
- Rename `primeManualSkill` → `buildSkillPrimeMessage`. The helper serves three invocation modes (`$` popover, `always-apply`, model-invoked); the old name misled readers coming from `handleSkillToolCall`.
- Add `loadable.state === 'hasValue'` guard in `drainPendingManualSkills` — defensive, since the atom has a synchronous `[]` default, but the previous `.contents` cast would have been unsound under loading/error.
- Document why `resolveManualSkills` honors the active-state filter even for explicit `$` selections (Phase 2 popover filter + API-direct hardening).
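A sketch of the two input-hardening layers described here, the request-body filter and the resolver cap. Names mirror the commit text; the exact signatures in packages/api may differ, and the warn format is illustrative:

```typescript
const MAX_MANUAL_SKILLS = 10;

// Request-body guard: only a real array of non-empty strings survives, so a
// crafted payload like {"manualSkills": [123, {"$gt": ""}]} never reaches
// getSkillByName.
function extractManualSkills(body: Record<string, unknown>): string[] | undefined {
  const raw = body.manualSkills;
  if (!Array.isArray(raw)) {
    return undefined;
  }
  return raw.filter((v): v is string => typeof v === 'string' && v.length > 0);
}

// Resolver cap: dedupe first, then truncate, warning about the dropped tail,
// so every caller inherits the bound on concurrent DB lookups.
function boundManualSkills(
  names: string[],
  warn: (msg: string) => void = console.warn,
): string[] {
  const deduped = [...new Set(names)];
  if (deduped.length > MAX_MANUAL_SKILLS) {
    const dropped = deduped.slice(MAX_MANUAL_SKILLS);
    warn(`[resolveManualSkills] Truncating to ${MAX_MANUAL_SKILLS}; dropping ${dropped.length}`);
  }
  return deduped.slice(0, MAX_MANUAL_SKILLS);
}
```

Keeping the cap inside the resolver rather than the controllers is what lets future callers (e.g. `always-apply`) inherit it for free.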
- Remove stray `void Types;` in initialize.test.ts — `Types` is already consumed elsewhere in that test. * 🔖 refactor: Single source for the skill-message source marker Export `SKILL_MESSAGE_SOURCE = 'skill'` and use it in both construction paths that stamp skill-primed messages — `buildSkillPrimeMessage` (for the model-invoked tool path) and `injectManualSkillPrimes` (for the user-invoked splice path). Downstream filtering and telemetry read this marker, so the two paths must agree; keeping the literal in one place removes the risk of them drifting when Phase 5's `always-apply` adds a third caller. * ♻️ refactor: Drop Multi-Agent Guard + Review Polish - Remove the multi-agent skip in `AgentClient.chatCompletion`. Leaking primes to handoff / added-convo agents via shared `initialMessages` is the agents SDK's concern to scope; this layer should just inject and let the graph handle agent-scoped state. The guard was well-intended but produced a silent-drop UX where `$skill` in a multi-agent run did nothing. - Bound the `[resolveManualSkills] Truncating ...` warn output to the first 5 dropped names plus a count suffix. A malicious payload of 1000 names was previously spilling all ~990 names into the log line. - Remove dead `?? []` from the `hasValue`-guarded loadable read in `drainPendingManualSkills` — the atom always yields a string[] when resolved, so the nullish fallback was unreachable. - Reorder skills.ts imports to follow the style guide: value imports shortest-to-longest (`data-schemas` → `langchain/core/messages` → multi-line `@librechat/agents`), type imports longest-to-shortest. * 🧠 fix: Strip Skill Primes from Memory Window + Unbreak CI Mocks Two fixes after the last push. CI unbreak: `responses.unit.spec.js` and `openai.spec.js` mock `@librechat/api` and the mock didn't expose the new `extractManualSkills` symbol, so every test in those files crashed before reaching the `recordCollectedUsage` assertion. 
Added `extractManualSkills: jest.fn()` returning `undefined` to both mocks; the controllers now no-op on manualSkills as the tests expect. Codex P2: `runMemory` passes `messages` straight through to the memory processor, so after the splice in `injectManualSkillPrimes`, SKILL.md bodies ride along as if they were real user chat. That pollutes memory extraction with synthetic instruction content and crowds out real turns from the window. - Export `isSkillPrimeMessage(msg)` from `packages/api/src/agents/skills.ts` — a predicate keyed on the shared `SKILL_MESSAGE_SOURCE` marker. - Filter `chatMessages = messages.filter(m => !isSkillPrimeMessage(m))` at the top of `runMemory` before the window-sizing logic. Keeps the primes visible to the LLM (they still ride in `initialMessages`) but invisible to the memory layer. - 5 new tests for the predicate covering marker-present, plain messages, different source, non-object inputs, and array filter integration. * 📜 feat: Show Skill-Loaded Cards for Manually-Invoked Skills The $ popover was priming SKILL.md bodies into the turn but leaving no visible trace on the assistant response — from the user's view it looked like the `$name ` cosmetic text did nothing. Now each manually-invoked skill renders the same "Skill X loaded" tool-call card that model-invoked skills already produce via PR #12684's SkillCall renderer. Approach: post-run prepend to `this.contentParts`. The aggregator owns per-step indices during the run, so pre-seeding collides; waiting until `await runAgents(...)` returns lets the graph settle before synthetic parts slot in at the front. - Export `buildSkillPrimeContentParts(primes, { runId })` from `packages/api/src/agents/skills.ts`. Returns completed tool_call parts (`progress: 1`, args JSON-encoded with `{skillName}`, output matching the model-invoked path's wording) that the existing `SkillCall.tsx` renderer draws identically. 
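Assuming a simplified content-part schema (the real one comes from librechat-data-provider), the completed-card parts described above might be built like this:

```typescript
// Simplified stand-in for the tool_call content-part shape; illustrative only.
interface SkillToolCallPart {
  type: 'tool_call';
  tool_call: {
    id: string;
    name: string;
    args: string;     // JSON-encoded { skillName }, matching the args contract
    output: string;   // wording matches the model-invoked path
    progress: number; // 1 = completed, so the renderer draws the finished card
  };
}

function buildSkillPrimeParts(skillNames: string[], runId: string): SkillToolCallPart[] {
  return skillNames.map((skillName, i) => ({
    type: 'tool_call',
    tool_call: {
      id: `${runId}_skill_prime_${i}`, // unique per prime within the run
      name: 'skill',
      args: JSON.stringify({ skillName }),
      output: `Skill ${skillName} loaded`,
      progress: 1,
    },
  }));
}
```

Because the parts arrive pre-completed (`progress: 1`), the existing renderer needs no new states: it draws them exactly like a finished model-invoked skill call.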
- In `AgentClient.chatCompletion`, prepend the built parts to `this.contentParts` immediately after `await runAgents`. Persistence and the final-event reconcile come for free — `sendCompletion` already reads `this.contentParts` verbatim. - Card ordering: skills appear first in the assistant message, reflecting that priming ran before the LLM's turn. Live-during-streaming cards are a separate follow-up — the graph's index-based aggregator makes that a bigger lift and this change delivers the core UX win without fighting the stream ordering. 6 new unit tests covering part shape, args JSON contract, output text, unique IDs, empty input, and startOffset ID differentiation. * ⚡ feat: Emit Optimistic Skill Cards + Wire Primes in OpenAI/Responses Two follow-ups from testing. Optimistic card emit: the main chat path was only showing "Skill X loaded" cards at final-reconcile time, so the user saw nothing happen until the stream finished. Now emit synthetic ON_RUN_STEP + ON_RUN_STEP_COMPLETED events right before `runAgents` starts — same pattern the MCP OAuth flow uses in `ToolService` — so the cards appear immediately. The graph's content at index 0 may overwrite them during streaming, but the post-run `contentParts` prepend (unchanged) restores them on final reconcile. OpenAI + Responses parity: both controllers were resolving `manualSkillPrimes` via `initializeAgent` but never injecting them into `formattedMessages` before the run. Manual invocation silently did nothing on `/v1/chat/completions` and the Responses API path. Now both call `injectManualSkillPrimes` on the formatted messages so the model sees SKILL.md bodies on every path. LibreChat-style card SSE events don't apply to these OpenAI-shaped responses, so the live-emit is chat-path-only. - Export `buildSkillPrimeStepEvents(primes, { runId })` from `packages/api/src/agents/skills.ts`. 
Uses `Constants.USE_PRELIM_RESPONSE_MESSAGE_ID` by default so the frontend maps events to the in-flight preliminary response message, matching the OAuth emitter. - In `AgentClient.chatCompletion`, emit via `sendEvent` (or `GenerationJobManager.emitChunk` in resumable mode) after `injectManualSkillPrimes` runs, before the LLM turn begins. - Wire `injectManualSkillPrimes` into `openai.js` + `responses.js` after `formatAgentMessages`. Refactored the destructure to `let` on `indexTokenCountMap` so the injector's returned map is usable. - 8 new unit tests covering the step-event builder: pair cardinality, default/custom runId, TOOL_CALLS shape + JSON args, progress:1 on completion, index ordering, stepId/toolCallId pairing, empty input. * 🎯 fix: Route Skill Prime Events to the Real Response + Sparse-Array Offset Two bugs in the optimistic-card emit from the last pass. 1. Wrong runId. The events used `USE_PRELIM_RESPONSE_MESSAGE_ID` (the MCP OAuth pattern), but OAuth emits DURING tool loading — before the real response messageId exists. By the time skill priming fires, the graph is about to emit with `this.responseMessageId`, so the PRELIM runId orphaned every card onto the client's placeholder response entry in `messageMap`, separate from the one the LLM's events were building. Net effect: cards never rendered mid-stream. Now passing `this.responseMessageId` — the same ID `createRun` receives — so synthetic and real steps land on the same `messageMap` entry. 2. Index 0 collision. With the runId fixed, card-at-0 would have hit `updateContent`'s type-mismatch guard when the LLM's text delta arrived at the same index, suppressing the whole text stream. New `SKILL_PRIME_INDEX_OFFSET` = 100 placed on both the live SSE emit and the server-side `contentParts` assignment. Sparse array during streaming renders as `[llm_text, ..., card]` (skip-holes via `Array#filter` / `Array#map`). 
`filterMalformedContentParts` from `sendCompletion` compacts to dense `[text, card]` before persistence, so streaming UI and saved message agree on order — no finalize reorder jank. Post-run switches from `contentParts.unshift` to `contentParts[OFFSET + i] = part` to mirror the live placement.

- Add `startIndex` option to `buildSkillPrimeStepEvents` with `SKILL_PRIME_INDEX_OFFSET` default. Export the constant from `@librechat/api` so `client.js` can reuse it for the post-run splice.
- Update the existing index-ordering test to the new default and add a new test for the explicit `startIndex` override.

* 🎗️ feat: Replace $skill-name Text with Pills on the User Message

The `$skill-name ` cosmetic text the popover was inserting into the textarea had two problems: it lingered in the user message forever (the card is a more meaningful marker), and it implied that free-form text invocation like "$foo help me" should work — which it doesn't, and supporting it would mean another parsing layer nobody asked for.

Dropped the textarea insertion. Visual confirmation after submit now comes from a compact `ManualSkillPills` row on the user bubble that self-extinguishes once the backend's live skill-card stream (`buildSkillPrimeStepEvents` from the last commit) populates the sibling assistant response. Multiple skills render as multiple pills — the atom was already a string array, so multi-select works for free.

- `SkillsCommand.tsx`: select handler no longer writes to the textarea. Still drops the trigger `$` via `removeCharIfLast`, still pushes to `pendingManualSkillsByConvoId`, still flips `ephemeralAgent.skills`.
- `families.ts`: new `attachedSkillsByMessageId` atomFamily keyed by user messageId. `useChatFunctions.ask` writes the drained skill list here on every fresh submit (regenerate/continue/edit still skip).
- `ManualSkillPills.tsx` renders pills conditionally: hidden when the message isn't a user message, when no skills are attached, or when the sibling assistant response already carries a `skill` tool_call content part (the live card took over). Reads messages via React Query so we don't re-render on every message-state keystroke.
- `Container.tsx` mounts the pills above the user message text, parallel to the existing `Files` slot.
- Updated the SkillsCommand select-flow spec to assert the textarea is cleared of `$` instead of populated with `$name `. 5 new tests for `ManualSkillPills` covering empty state, non-user message guard, multi-skill rendering, the skill-card hide condition, and the text-only-content-doesn't-hide case.

* 🎛️ feat: Manual Skills as Persisted Message Field + Compose-Time Chips

Three problems with the previous pass:

1. Cards rendered BELOW the LLM text on the assistant message (and stayed there on reload) because the sparse index-100 offset put them after the model's content. Now back to `unshift` — cards at the top, same as before the live-emit detour.
2. Pills on the user message disappeared the moment the live card arrived, so users barely saw them. The live-emit channel also added meaningful complexity and relied on a per-message Recoil atom that had no clean cleanup story.
3. No visual cue at all during new-chat compose — the `$name ` text was removed, the submitted-message pills weren't there yet, and the popover closes after selection. User had no way to see what they'd queued up before sending.

New architecture: `manualSkills` is a first-class field on `TMessage`, persisted by the backend on the user message. `ManualSkillPills` reads straight from `message.manualSkills` — no atom, no sibling-lookup — so pills survive reload, show in history, and stay for the lifetime of the message. Compose-time chips above the textarea read the existing `pendingManualSkillsByConvoId` atom and let users × skills out before submitting.
Backend reverts: - `client.js`: dropped the `ON_RUN_STEP` live-emit loop, restored `this.contentParts.unshift(...primeParts)` so cards sit at the top of the persisted assistant response. - `skills.ts`: removed `buildSkillPrimeStepEvents` and `SKILL_PRIME_INDEX_OFFSET` (both unused now). `GraphEvents`, `StepTypes`, and `Constants` imports went with them. Removed 8 tests. Field persistence: - `tMessageSchema` gains `manualSkills: z.array(z.string()).optional()`. - Mongoose message schema gains `manualSkills: { type: [String] }` with matching `IMessage` TS field. - `BaseClient.js` reads `req.body.manualSkills` on user-message save, filters to non-empty strings, pins onto `userMessage` before `saveMessageToDatabase`. Mirrors the existing `files` pattern right above it. Runtime resolution still reads top-level `req.body.manualSkills` — persistence and resolution are separate concerns. Frontend: - `useChatFunctions.ask` sets `currentMsg.manualSkills` directly; the drained atom value goes onto the message, not a separate atom. Removed the `attachSkillsToMessage` Recoil callback. - `ManualSkillPills`: pure render of `message.manualSkills`. No more `useQueryClient`, no sibling scan, no atom read. Loses the auto-hide-when-card-arrives behavior — pills stay on the user bubble, cards live on the assistant bubble, both are informative. - Dropped the `attachedSkillsByMessageId` atomFamily and its export. - New `PendingManualSkillsChips` above the textarea reads the compose-time atom and renders chips with × to remove. Mounted in `ChatForm` right after `TextareaHeader`. Naturally hides on submit when the atom drains. Tests: updated `ManualSkillPills` suite to the new field-based reads (5 passing). New `PendingManualSkillsChips` suite covering empty state, multi-chip render, single × removal, and full-clear (4 passing). Backend suite trimmed to 89 (was 97) from the step-events test removal — no regressions on the remaining helpers. 
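A minimal sketch of the save-time pinning described above, assuming a simplified message shape; the real logic lives in BaseClient.js next to the existing `files` handling:

```typescript
interface UserMessageLike {
  messageId: string;
  text: string;
  manualSkills?: string[];
}

// Defense-in-depth on top of schema validation: keep only non-empty strings
// before pinning the field onto the user message for persistence.
function pinManualSkills(
  userMessage: UserMessageLike,
  body: Record<string, unknown>,
): UserMessageLike {
  const raw = body.manualSkills;
  if (Array.isArray(raw)) {
    const skills = raw.filter((v): v is string => typeof v === 'string' && v.length > 0);
    if (skills.length > 0) {
      userMessage.manualSkills = skills;
    }
  }
  return userMessage;
}
```

Persistence stays separate from runtime resolution: this only records what the user picked, while the resolver independently reads the same `req.body.manualSkills` for priming.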
* 🧪 feat: Assistant-Side Skill-Loading Chips + Pill Padding Two small UX fixes on top of the field-on-message architecture. Pill padding: bumped the user-side `ManualSkillPills` from `py-0.5` to `py-1` on each chip and added `py-0.5` to the wrapper so the row breathes a little without feeling tall. Mid-stream indicator: new `InvokingSkillsIndicator` mirrors the parent user message's `manualSkills` onto the assistant bubble as transient "Running X" chips while the real card is in flight. Renders above `ContentParts` in `MessageParts`. Hides itself when the assistant's own `content` grows a `skill` tool_call — the authoritative card from `buildSkillPrimeContentParts.unshift` is showing, so the placeholder steps aside. No SSE emit, no aggregator injection, no index collision with the LLM's streaming content: just a render slot keyed off the parent's field. Why not stream the cards live: whichever content index we'd choose either blocks the LLM's text stream (`updateContent` type-mismatch at index 0) or lands below the response after sparse compaction (index 100+). Mirroring the parent field sidesteps the aggregator entirely and gives the user an immediate "skill is loading" signal that naturally gives way to the real card at finalize. Covers the gap the user flagged: pills on the user message said "I asked for these" but nothing on the assistant side said "we're working on it" until the stream finished. 5 new tests for the indicator: user-msg guard, missing parent-field guard, multi-chip render, hides-on-card-landing, orphan-parent guard. * 🔁 fix: Indicator Visibility + Carry Manual Skills Through Regenerate/Edit Two bugs. 
Indicator never rendered: `InvokingSkillsIndicator` looked up the parent user message via `queryClient.getQueryData([QueryKeys.messages, convoId])`, but on a new chat the React Query cache is keyed by `"new"` (the URL `paramId`) until the server assigns a real conversation ID — while `message.conversationId` on the assistant message is already the server ID. Lookup missed, `skills.length === 0`, nothing rendered. Switched to `useChatContext().getMessages()`, which reads from the same `paramId` the rest of the UI uses, so new-chat and existing-chat cases both resolve to the correct message list. Regenerate / save-and-submit dropped manual skills: the compose-time `pendingManualSkillsByConvoId` atom is drained on the first submit, so replaying that turn later found an empty atom and sent `manualSkills: []`. The pills were still on the user bubble, so from the user's point of view the model was running primed — but the backend saw nothing and produced an unprimed response. - Added `overrideManualSkills?: string[]` to `TOptions`. Callers with a reference message pass its persisted `manualSkills`; `useChatFunctions.ask` uses the override verbatim when present, otherwise falls back to the existing drain-or-empty logic. - `regenerate` in `useChatFunctions` passes `parentMessage.manualSkills` — the user message being regenerated has the field persisted by the backend, so the second turn primes the same skills as the first. - `EditMessage.resubmitMessage` covers both edit branches: - User-message save-and-submit: forwards the edited message's own `manualSkills` so the new sibling turn primes identically. - Assistant-response edit: forwards the parent user message's `manualSkills` for the same reason. Indicator test suite converted from `@tanstack/react-query` harness to a jest-mocked `useChatContext().getMessages()`. 6 tests (was 5), added a cache-miss case. 
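The override-versus-drain resolution described above can be sketched as follows; the function and parameter names are illustrative stand-ins for the logic inside `useChatFunctions.ask`:

```typescript
// Replays (regenerate / save-and-submit) pass the persisted picks from the
// reference message; fresh submits drain the compose-time atom instead.
function resolveSkillsForTurn(
  overrideManualSkills: string[] | undefined,
  drainPendingSkills: () => string[],
): string[] {
  if (overrideManualSkills !== undefined) {
    // Used verbatim when a reference message exists, even if empty.
    return overrideManualSkills;
  }
  // First submit: atom value, after which the atom is empty.
  return drainPendingSkills();
}
```

The key property is that an explicitly empty override still counts as an override, so a replay never accidentally falls through to a drained (empty) atom and silently changes meaning.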
* 🧭 fix: Drive Mid-Stream Skill Chips from Submission Atom, Not Message Lookup

Message-ID-keyed lookups kept racing the stream: the user message flips from its client-side intermediate UUID to the server-assigned ID mid-run, conversation IDs flip from the URL `paramId="new"` to the real convo ID on brand-new chats, and the React Query cache splits briefly between the two. Previous attempts — direct `queryClient.getQueryData` and then `useChatContext().getMessages()` — each missed a different window.

`TSubmission.manualSkills` is already populated at `ask()` time and the submission atom (`store.submissionByIndex(index)`) is the single stable anchor across the whole lifecycle: set once at submit, lives through every SSE event, cleared when the stream ends. No ID lookups, no cache timing.

- `InvokingSkillsIndicator` now reads `submissionByIndex(index)` via Recoil. Shows chips when:
  • the message is assistant-side,
  • a submission is in flight with non-empty `manualSkills`,
  • the assistant's `parentMessageId` matches `submission.userMessage.messageId` (so chips appear only on the bubble for the current turn, never on siblings),
  • the assistant's own content doesn't yet carry a `skill` tool_call (real card takes over from the server's post-run `contentParts.unshift`).
- Drops the `useChatContext().getMessages()` dependency and the `useQueryClient` dependency before that. No more lookups by conversationId or messageId.

Test suite now mocks `useChatContext` to supply `index: 0` and seeds the `submissionByIndex(0)` atom via Recoil initializer. 6 cases cover user-side, no-submission history, empty `manualSkills`, multi-chip render, hides-on-card-landing, and wrong-turn guard.
* 🌱 fix: Seed Response manualSkills in createdHandler, Indicator Becomes Pure The mid-stream indicator kept getting wired off state I don't own: first `queryClient.getQueryData` (raced the new-chat paramId flip), then `useChatContext().getMessages()` (same cache, same race), then `useRecoilValue(submissionByIndex)` (pulled every message into the submission subscription — re-renders all indicators on any submission change, exactly the "limit hooks in rendering" concern). Cleanest path is the one the user pointed at: the submission owns the data, `useSSE` / `useEventHandlers` owns the save points, so seed the field ONTO the response message at the save site and let the indicator be a pure prop-read. - `createdHandler` now writes `manualSkills` onto the initial response from `submission.manualSkills` at the moment the placeholder enters the messages array. The field rides through the normal mutation pipeline via spreads (`useStepHandler` response creation, `updateContent` result returns) — no special handling needed. - `InvokingSkillsIndicator` drops the Recoil / context / queryClient reads. Pure function of `message`: if assistant, has `manualSkills`, and `content` hasn't grown a `skill` tool_call yet, render chips. Only `useLocalize` left, which was already unavoidable for the i18n string. - Renders decouple: no single state change (`submissionByIndex` flip, React Query cache update) forces every indicator in the message list to re-render anymore. Only the message whose prop changed re-runs. Finalize story unchanged: server's `responseMessage` doesn't carry the frontend-only `manualSkills` field, so `finalHandler`'s replacement drops it — but by then the real `skill` tool_call is in `content` and the indicator's content-scan hides itself anyway. Test suite back to pure prop mocks: 7 cases covering user-guard, no-seed, multi-chip render, skill-card-hide, non-skill-tool-call-keeps, text-only-keeps, and missing message. 
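The pure prop-read visibility rule this commit lands amounts to a small predicate. Part and message shapes are simplified stand-ins for the real `TMessage` types:

```typescript
interface PartLike {
  type: string;
  tool_call?: { name: string };
}

interface MessageLike {
  isCreatedByUser: boolean;
  manualSkills?: string[];
  content?: PartLike[];
}

// Render chips only on assistant messages that were primed and whose content
// has not yet grown the real skill card.
function shouldShowSkillIndicator(msg: MessageLike): boolean {
  if (msg.isCreatedByUser || !msg.manualSkills?.length) {
    return false;
  }
  const hasSkillCard = (msg.content ?? []).some(
    (p) => p.type === 'tool_call' && p.tool_call?.name === 'skill',
  );
  return !hasSkillCard;
}
```

Being a pure function of the message prop is what decouples the re-renders: no shared atom or cache flip can force every indicator in the list to recompute.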
* 🪞 fix: Render Skill Indicator Inside ContentParts, Adjacent to Parts The indicator still wasn't showing because even though MessageParts mounted it as a sibling of ContentParts, ContentParts is a `memo`'d component that owns the only rendering path that refreshes in lockstep with content deltas. Mounting above it put the indicator one layer further out — reachable, but not exercised on the same render cycle that processes the streaming `message` prop. Moved the indicator into ContentParts itself, rendered at the top of both the sequential and parallel branches. Reads the `message` prop (newly threaded through as an optional prop alongside `content`), so: - Same render cycle as Parts — updates from the SSE pipeline flow through the same pathway. - Lives outside the `content.map`, so delta-driven content reshuffles never wipe it. - Still a pure prop-read inside the indicator itself (no Recoil, queryClient, context hooks). The only dep is `useLocalize`. Thread: - `ContentPartsProps` gains `message?: TMessage`. - `MessageParts` passes `message={message}` through, drops its own indicator mount + import. - `ContentParts` renders `<InvokingSkillsIndicator message={message} />` in both the parallel-content and sequential-content branches, right under `MemoryArtifacts` and before the empty-cursor / parts map. Companion data flow (unchanged): `createdHandler` seeds `initialResponse.manualSkills` from `submission.manualSkills`; the field rides through `useStepHandler` via spreads; indicator hides on `skill` tool_call landing in `content`. * 🔎 refactor: Narrow Skill Components to Scalar skills Prop, Kill Memo Churn Passing the full `message` object into presentational components busts `React.memo` shallow comparisons every time the message reference changes for unrelated reasons. 
Swap to scalar `skills?: string[]` throughout: - `InvokingSkillsIndicator`: props-only (`skills?: string[]`); visibility logic (user-vs-assistant, skill tool_call arrival) now lives in the caller so this stays pure presentational. - `ManualSkillPills`: props-only (`skills?: string[]`). - `ContentParts`: takes `manualSkills?: string[]` scalar, computes `showInvokingSkills` once per render from `manualSkills` + content scan for the `skill` tool_call, then mounts the indicator with `skills=` prop in both parallel and sequential branches. - `MessageParts`: passes `manualSkills={message.manualSkills}` through to `ContentParts`. - `Container`: passes `skills={message.manualSkills}` to `ManualSkillPills`. - Tests updated to exercise the narrowed prop surface. * 📜 feat: Mid-Stream Skill Cards via SkillCall, Drop Custom Indicator Instead of a separate `InvokingSkillsIndicator` chip component, render pending skill placeholders through the existing `SkillCall` renderer — same component the backend's finalized prime part uses. The loading visual (`progress < 1` + empty output → pulsing "Running X") and the completed visual ("Ran X") now come from one source of truth. `ContentParts` computes `pendingSkillNames` from `manualSkills` minus any `skill` tool_call already in `content` (dedupe by `args.skillName` since the synthetic's id differs from the real one). Those names render through a separate slot ABOVE the Parts iteration — not prepended to the content array, which would shift React keys on every downstream streaming text / tool part and force unmount/remount mid-stream. When the real prime `tool_call` lands at finalize (backend unshifts to content[0..]), `collectExistingSkillNames` picks it up, the pending set empties, and the real part takes over rendering in the Parts iteration. Layout is identical either way because primes are always at the top of content. 
- `InvokingSkillsIndicator.tsx` + test deleted (no longer referenced) - `ContentParts.tsx` renders `<SkillCall .../>` directly for pending names, mirrors `Part.tsx`'s usage of the same component - `createdHandler` doc comment updated to reflect the new flow * ✂️ fix: Render Interim Skill Cards From manualSkills Only, Leave Content Untouched Previous revision read `content` to de-dupe pending cards against real `skill` tool_calls, so any optimistic skill part streamed from the backend would race our placeholder off the screen mid-turn — exactly the "getting overridden" symptom. Now: interim `SkillCall` cards are driven purely by the response message's `manualSkills` field. `content` is never inspected here, so no backend delta can pull the cards down. The field is now seeded directly onto the assistant placeholder in `useChatFunctions` (not only in `createdHandler`) so the cards appear from the first render, before the `created` SSE event round-trip. Lifecycle: - `useChatFunctions` puts `manualSkills` on the freshly-minted `initialResponse` — cards render the instant the placeholder lands. - `createdHandler` keeps its own re-seed (idempotent; safe) so a regenerate / save-and-submit flow that hits that path still works. - `useStepHandler` spread operations preserve the field through every content update. - `finalHandler` replaces the message with the server-backed `responseMessage` (no `manualSkills`) — cards disappear, and the real `skill` tool_call part in `content` takes over. ContentParts changes: - Drop `collectExistingSkillNames` / `parseJsonField` dedupe path. - `renderPendingSkills` reads only `manualSkills` + `isCreatedByUser`. - Simpler control flow — one boolean (`hasPendingSkills`) gates the early return, one function renders. 
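After this revision the interim cards reduce to a content-free gate, a deliberate contrast with the earlier content-scanning dedupe. A sketch, with names illustrative:

```typescript
// Pending cards become a pure function of the response message's manualSkills;
// content is never inspected, so no backend delta can race the placeholders
// off screen mid-turn.
function pendingSkills(
  manualSkills: string[] | undefined,
  isCreatedByUser: boolean,
): string[] {
  if (isCreatedByUser || !manualSkills?.length) {
    return [];
  }
  return manualSkills;
}
```

The single `hasPendingSkills` boolean mentioned above is then just `pendingSkills(...).length > 0`, one source of truth for the early return.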
* 🩹 fix: Codex Review Resolutions — Localization, Guards, Tests, Docs Addresses seven findings from comprehensive code review: Finding 1 (MAJOR) — Document sticky re-priming as intentional - `buildSkillPrimeContentParts`: expanded doc comment explaining synthetic `skill` tool_calls persist and get re-primed on every subsequent turn via `extractInvokedSkillsFromPayload` (shape parity with model-invoked skills). This matches the UX: the assistant skill card is a visible, persistent signal that the skill is active for the conversation. Not a bug — called out explicitly so future maintainers don't mistake it for one. Finding 2 (MAJOR) — Add ContentParts render tests - New `ContentParts.test.tsx` with 7 cases covering the interim skill card logic: assistant-only rendering, user-message suppression, undefined-content safety, parallel+sequential branch integration, progress<1 (pending) state. Child components mocked so the test exercises only the branching and prop wiring ContentParts owns. Finding 3 (MINOR) — Localize hardcoded aria-labels - Added `com_ui_skills_manual_invoked` + `com_ui_skills_queued` keys. - Reused existing `com_ui_remove_skill_var` for the remove-button aria-label. - `PendingManualSkillsChips` and `ManualSkillPills` now call `useLocalize()`. Test mocks updated to the label-echo pattern. Finding 4 (MINOR) — Max-length guard in `extractManualSkills` - New `MAX_SKILL_NAME_LENGTH = 200` constant and filter. Blocks a crafted payload like `{ manualSkills: ['a'.repeat(100000)] }` from reaching `getSkillByName` / Mongo's query planner. Finding 5 (NIT) — `BaseClient.js` comment contradicted itself - Rewrote to call the filter what it is: defense-in-depth on top of Mongoose schema validation, not a redundant second layer. Finding 6 (NIT) — `ManualSkillPills` now wrapped in `React.memo` - Consistent with peer components (`PendingManualSkillsChips`, `ContentParts`). 
Rendered inside `Container`, which re-renders on every content update, so the memo is real cycle savings.

Finding 7 (NIT) — Redundant guard in `ContentParts.renderPendingSkills`
- Collapsed the duplicate null-check by computing `pendingSkills` as a `useMemo`'d array (`[]` when not applicable) and mapping directly. `hasPendingSkills` now derives from the array length — one source of truth, no redundant gate inside the render function.

* 🔧 fix: Update ParallelContent to Handle Optional Content Prop

Modified `ParallelContentRendererProps` to make the `content` prop optional, ensuring safer access within the component. Adjusted the calculation of `lastContentIdx` to handle cases where `content` may be undefined, preventing potential runtime errors. This change makes the component more robust when dealing with varying message structures.

* 🎯 fix: Thread manualSkills Through ContentRender — The Real Renderer

This is why the interim skill cards never appeared across many rounds of iteration: `ContentRender.tsx` (the memo'd renderer used by most paths, including the agents endpoint) was calling `ContentParts` without the `manualSkills` prop. Only `MessageParts.tsx` had it wired up — and that's not the component that actually renders the assistant response in production.

Two fixes:
1. Pass `manualSkills={msg.manualSkills}` to the `ContentParts` call.
2. Extend the `areContentRenderPropsEqual` memo comparator to include `manualSkills.length`; otherwise a message update that adds the field (seeded by `useChatFunctions` on the initialResponse) would be bailed out by the memo and never re-render.

Verified the two ContentParts call sites are now consistent; Container usages for `ManualSkillPills` on the user side were already correct.

* 🧹 polish: Address Audit Follow-Up (F1/F3/F6)

F1 — Clarify the sticky re-priming opt-out path. The previous comment said "regenerate without the pick" as one opt-out, but `useChatFunctions.regenerate` forwards the original picks via `overrideManualSkills`, so regeneration alone keeps the skill sticky. Updated to: edit the originating message to remove the pills and resubmit, or start a new conversation.

F3 — Add DOM-order assertions to the parallel + sequential tests. The two "alongside" tests verified both elements existed but didn't pin the ordering contract. Both now use `compareDocumentPosition` to assert the pending SkillCall precedes the real content, matching the backend semantic (`contentParts.unshift(...primeParts)` puts primes at the top).

F6 — Fix package import order in PendingManualSkillsChips. The `recoil` import line (58 chars) was listed before the `lucide-react` line (45 chars), which violates the "shortest to longest after react" rule in AGENTS.md. Swapped the order; no behavior change.

F2 / F4 / F5 from the audit were confirmed as non-issues (React-safe empty map, cosmetic test-mock artifact, accepted memo tradeoff) and require no change.

* ✨ feat: Dedicated PendingSkillCall + Running→Ran Transition on Real Content

UX polish on the interim skill card now that it's actually rendering:
1. New `PendingSkillCall` component (mirrors `SkillCall` visually but drops the expand affordance). `SkillCall`'s underlying `ProgressText` always renders a chevron + clickable button when any input is present, which on a card with empty output points at nothing — a misleading cursor:pointer and a no-op toggle. The pending variant has only the icon + label, no button wrapper, no chevron.
2. "Running X" → "Ran X" transition when real content lands. `ContentParts` computes `hasRealContent` (any non-text part, or a text part with non-empty content — placeholder empty-text parts don't count) and passes `loaded={hasRealContent}` to `PendingSkillCall`. Matches what users see for model-invoked skills as they finish priming: pulsing shimmer → static icon.
3. Cleanup:
   - Dropped the direct `SkillCall` import from `ContentParts` (replaced by `PendingSkillCall`). `SkillCall` is still used by `Part` for real `skill` tool_call content parts — no behavior change there.
   - Removed the now-redundant explicit `manualSkills` assignment in `createdHandler`. `useChatFunctions` seeds the field on `initialResponse` at construction, so the `...submission.initialResponse` spread already carries it through — the re-assignment was defensive belt-and-suspenders doing the same work twice. Comment rewritten to describe the actual lifecycle.

Tests updated to the new component (12/12 pass): two new cases pin the loaded-state transition (unloaded when content has no real parts, flips to loaded once a non-empty text part lands). |
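The loaded-state predicate described in point 2 can be sketched as a pure function. This is an illustrative reconstruction from the commit description, assuming a simplified content-part shape — the names and types are not the actual LibreChat source.

```typescript
// Simplified content-part union for illustration; the real ContentParts
// component works against LibreChat's richer message part types.
type ContentPart =
  | { type: 'text'; text: string }
  | { type: 'tool_call'; [key: string]: unknown }
  | { type: 'image'; [key: string]: unknown };

/**
 * A message has "real" content when any part is non-text, or a text part
 * carries a non-empty string. Placeholder empty-text parts do not count,
 * so the pending skill card keeps its "Running" shimmer until genuine
 * output lands, then flips to the static "Ran" state.
 */
export function hasRealContent(parts: ContentPart[] | undefined): boolean {
  if (!parts || parts.length === 0) {
    return false;
  }
  return parts.some(
    (part) => part.type !== 'text' || part.text.trim().length > 0,
  );
}
```

In the real component this value would feed a memoized `loaded` prop, so the card only re-renders when the predicate actually flips.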
||
|
|
9225a279eb |
🎚️ feat: Per-User Skill Active/Inactive Toggle with Ownership-Aware Defaults (#12692)
* feat: per-user skill active/inactive toggle with ownership-aware defaults

- Add a `skillStates` map (Record<string, boolean>) to the user schema for per-user active/inactive overrides on skills
- Add `defaultActiveOnShare` to the interface.skills config (default: false) so admins can control whether shared skills auto-activate
- Add GET/POST /api/user/settings/skills/active endpoints with validation
- Add React Query hooks with optimistic mutations for skill states
- Add a useSkillActiveState hook with ownership-aware resolution: owned skills default active, shared skills default inactive
- Add toggle switch UI to the SkillListItem and SkillDetail components
- Filter inactive skills in injectSkillCatalog before agent injection
- Add localization keys for the active/inactive labels

* fix: use Record instead of Map for IUser.skillStates

Mongoose .lean() flattens Map to a plain object, causing type incompatibility with IUser in methods that return lean documents.

* fix: address review findings for skill active states

- Fail closed when userId is absent: the filter rejects all shared skills instead of passing them through unfiltered (Codex P1)
- Validate Mongoose Map key characters (reject `.` and `$`) in the controller to return 400 instead of a 500 from schema validation (Codex P2)
- Block the toggle while the initial skill-states query is loading, to prevent overwriting server-side overrides with an empty snapshot (Codex P2)
- Extract a shared SkillToggle component, eliminating duplicate toggle markup in SkillListItem and SkillDetail (Finding #3)
- Move the skill-state query/mutation hooks from Favorites.ts to Skills/queries.ts per the feature-directory convention (Finding #4)
- Fix the hardcoded English aria-label in SkillListItem by passing the localized string from the parent SkillList (Finding #5)
- Fix the inline arrow in the SkillList render loop: pass a stable callback reference so the SkillListItem memo() is not invalidated (Finding #1)
- Extract a toRecord() helper in the controller to DRY the Map-to-Object conversion (Finding #6)
- Remove the Promise.resolve wrapping of a synchronous config read (Finding #8)
- Remove the unused TUpdateSkillStatesRequest type (Finding #12)

* fix: forward tabIndex on SkillToggle to preserve list keyboard nav

The original inline toggle had tabIndex={-1} so the row itself remained the sole tab target. The extraction into SkillToggle dropped this prop, making every list toggle a tab stop. Add an optional tabIndex prop and pass -1 from SkillListItem.

* fix: plumb skillStates to all agent entry points, isolate toggle keydown

- Add skillStates/defaultActiveOnShare loading to the openai.js and responses.js controllers so shared-skill activation is respected across all agent entry points, not just initialize.js (Codex P1)
- Stop keydown propagation on SkillToggle so Enter/Space does not bubble to the parent row's navigation handler (Codex P2)

* fix: paginate catalog fetch and serialize toggle writes

- Paginate listSkillsByAccess (up to 10 pages of 100) until the active catalog quota is filled, so inactive shared skills in recent positions do not starve active owned skills past the first page (Codex P1)
- Extend the listSkillsByAccess interface with cursor/has_more/after for catalog pagination
- Serialize skill-state writes via a ref queue: one in-flight request at a time, with the latest desired state sent when the previous one settles. Prevents last-response-wins races where an older request overwrites newer toggles (Codex P2)

* fix: share write queue across hook instances, block toggle on fetch error

- Move the write queue from a per-instance useRef to a module-scoped object so every mount of useSkillActiveState (SkillList, SkillDetail, etc.) serializes against the same in-flight slot. The prior per-instance queues allowed two components to race full-map POSTs (Codex P1)
- Extend the toggle guard beyond isLoading: also block when isError is true or data is undefined. Prevents a failed GET from seeding a toggle with an empty baseline that would wipe server-side overrides on the next successful POST (Codex P1)

* fix: stale closure, orphan cleanup, and cap-error UX

- Read the toggle baseline from the React Query cache via queryClient.getQueryData instead of the captured skillStates closure. The closure can be stale between onMutate's setQueryData and the next render, so rapid successive toggles would build on old state and drop earlier changes (Codex P1)
- Surface the MAX_SKILL_STATES_EXCEEDED error code with a specific toast key (com_ui_skill_states_limit) so users understand the 200-entry cap rather than seeing a generic error
- Prune orphaned entries (skillIds whose Skill doc no longer exists) on both GET and POST in SkillStatesController. Self-heals over time without needing cascade-delete hooks or a migration job. Uses one indexed Skill._id query per request

* test: pin skill active-state precedence with unit tests

Extract the active-state resolution logic from a closure inside injectSkillCatalog into an exported resolveSkillActive helper, then cover every branch of the precedence matrix:
- Fails closed when userId is absent (even with defaultActiveOnShare=true)
- An explicit override wins over ownership and config (both true and false)
- Owned skills default to active when no override is set
- Shared skills default to the defaultActiveOnShare value
- Undefined skillStates behaves identically to an empty object
- defaultActiveOnShare defaults to false when omitted
- Owned skills ignore defaultActiveOnShare entirely

Closes Finding #2 from the pre-rebase comprehensive review. Mirrors the existing scopeSkillIds test style; injectSkillCatalog now calls resolveSkillActive instead of inlining the closure.

* refactor: limit skill active toggle to detail header, drop label

- Remove the per-row toggle from SkillListItem and the active-state plumbing (hook call, isSkillEnabled/onToggleEnabled/toggleAriaLabel props) from SkillList. The detail view is now the single place to change a skill's active state
- Drop the dim/muted styling for inactive skills in the sidebar: without a control there, the visual indication has nowhere to land
- Resize SkillToggle to match neighbor buttons: outer h-9 container, h-6 w-11 track with a size-5 knob, no label span. The 'Active' / 'Inactive' text that accompanied the detail-view toggle is removed
- Remove the now-unused label prop and tabIndex prop (the tabIndex existed only for the list-row context) from SkillToggle. Drop the onKeyDown stopPropagation for the same reason
- Remove the now-orphaned com_ui_skill_active / com_ui_skill_inactive translation keys

* style: shrink SkillToggle track to h-5 w-9 with size-4 knob

The container stays at h-9 to match neighbor button heights. The toggle track itself drops from h-6 w-11 to h-5 w-9, with a size-4 knob travelling 1.125rem on activation. Visually lighter inside the row.

* fix: remove redundant skillStates entries that match the resolved default

When a toggle lands on the ownership/config default, delete the key from the map instead of persisting `{id: defaultValue}`. Without this, a user toggling a skill off and back on would leave `{id: true}` for an owned skill (whose default is already true), silently consuming a slot against the 200-entry cap. Repeated round-trip toggles could exhaust the quota with zero meaningful overrides (Codex P2). Preserves the exceptions-list invariant that the runtime-resolution design depends on.

* fix: prune before enforcing skill-state cap; reject non-ObjectId keys

Reorder the update controller so pruneOrphans runs before the 200-cap check. Without this, a user near the cap with some orphaned entries (skills deleted since their last GET) could send a payload that would pass after pruning but gets rejected by the raw-size check first. Add a sanity cap on raw payload size (2 * MAX_SKILL_STATES) so abusive inputs do not reach the DB query, and enforce the real cap on the pruned result instead.

Harden pruneOrphans: the earlier early-return path could pass non-ObjectId keys through unchanged. Now only valid ObjectIds are returned, and the Skill-model-unavailable fallback filters by format. Also add isValidObjectIdString validation at the input boundary so malformed (but otherwise non-Mongo-unsafe) keys never reach persistence (Codex P2 x2).

* fix: enforce active filter at execute time, prune revoked shares, scope queue per user

P1: injectSkillCatalog now returns activeSkillIds (the filtered set that appears in the catalog). initializeAgent uses that set as the stored accessibleSkillIds on the initialized agent, so getSkillByName at runtime cannot resolve a deactivated skill — even if the LLM hallucinates a name or the user invokes by direct-invocation shorthand. Previously the executor authorized against the full ACL set, bypassing the active-state guarantee (Codex P1).

P2: pruneOrphans now checks user access via findAccessibleResources in addition to skill existence. When a share is revoked, the user's skillStates entry for that skill had no cleanup path and silently consumed part of the 200-cap. Self-heals on both GET and POST. One extra ACL query per settings read/write; scoped to a single user, so no N-user amplification (Codex P2).

P2: the write queue moves from a single module-scoped object to a Map keyed by userId. Logout/login in the same tab can no longer flush the previous user's pending snapshot under the new session's auth. Each userId gets its own pending/inFlight slot; the in-flight request retains its original auth via the cookie already attached when sent, so the race window closes (Codex P2).

* refactor: extract skillStates helpers to packages/api; add tests; polish

Address the remaining valid findings from the comprehensive review:
- Extract toRecord, loadSkillStates, validateSkillStatesPayload, and pruneOrphanSkillStates into packages/api/src/skills/skillStates.ts as TypeScript. The controller in /api shrinks to a ~90-line thin wrapper that builds live dependency adapters for Mongoose + the permission service (Review #2 DRY, #3 workspace boundary)
- Replace the triplicated 12-line skillStates loading block in initialize.js, openai.js, and responses.js with a single call to loadSkillStates from @librechat/api. One helper, three sites
- Swap console.error for the project logger in the controller (Review #7)
- Remove the redundant INVALID_KEY_PATTERN regex: a valid ObjectId cannot contain . or $, so isValidObjectIdString already covers it (Review #11)
- Parameterize the 200-cap error toast with {{0}} interpolation driven by the error response's `limit` field, so future changes to MAX_SKILL_STATES update the UI message automatically (Review #12)
- Add 24 unit tests for the new skillStates helpers (toRecord, resolveDefaultActiveOnShare, loadSkillStates, validateSkillStatesPayload, pruneOrphanSkillStates) covering success paths, malformed input, cap boundaries, and parallel-query behavior (Review #4)
- Add 10 tests for injectSkillCatalog pagination covering the empty accessible set, missing listSkillsByAccess, single-page filter, owned-vs-shared defaults, explicit-override precedence, multi-page collection, the MAX_CATALOG_PAGES safety cap, early termination on has_more=false, additional_instructions injection, and fail-closed behavior without userId (Review #5)

Total test count: 60 (was 26 on this surface).

* fix: rename skillStates ValidationError to avoid barrel-export collision

packages/api/src/types/error.ts already exports a ValidationError (a MongooseError extension). Re-exporting a different shape from skills/skillStates.ts through the skills barrel caused TS2308 in CI because the root index re-exports both. Rename to SkillStatesValidationError to keep the exports disjoint.

* refactor: tighten tests and absorb caller guard into loadSkillStates

Address the follow-up review findings:
- Add an optional `accessibleSkillIds` param to loadSkillStates so the helper short-circuits to defaults when no skills are accessible. All three controllers drop the residual 7-line conditional wrapper in favor of a single destructured call (Review #2)
- Remove the unreachable `typeof key !== 'string'` check from validateSkillStatesPayload: Object.entries always yields string keys per the JS spec (Review #3)
- Replace the two `as unknown as` agent casts in the injectSkillCatalog tests with a `makeAgent()` factory typed directly as the function's parameter shape (Review #4)
- Tighten the MAX_CATALOG_PAGES assertion from `toBeLessThanOrEqual(11)` to `toHaveBeenCalledTimes(10)` — the loop deterministically makes exactly 10 page fetches before hitting the cap (Review #1)
- Rewrite the parallel-execution test for pruneOrphanSkillStates using deferred promises instead of microtask-order assertions. The test now inspects `toHaveBeenCalledTimes(1)` on both mocks after a single Promise.resolve() yield, pinning Promise.all usage without relying on push-order into a shared array (Review #5)
- Evict stale writeQueue entries on user change via a module-scoped `lastSeenUserId` sentinel. When a different user's toggle is the first one after a logout/login, the previous user's queue entry is deleted. Keeps the Map bounded without adding hook-instance effect cleanup (Review #6)

* fix(test): mock loadSkillStates in openai and responses controller specs

The prior refactor replaced the inline 12-line skillStates loading block with a call to loadSkillStates from @librechat/api. Both controller spec files mock @librechat/api as a flat object, so any new named import from that package is undefined in the test env. Calling `await loadSkillStates(...)` threw before recordCollectedUsage ran, surfacing as "undefined is not iterable" on the test's array destructure of `mockRecordCollectedUsage.mock.calls[0]`. Add the missing mock to both spec files alongside the existing scopeSkillIds stub.

* fix: abandon stale skillStates write queues on user switch

Close the cross-session leak window where an in-flight flush loop still holds a reference to a previous user's queue: it could fire its next mutateAsync under the new session's auth cookies and persist the stale snapshot to the new user's document (Codex P1).

Add an `abandoned` flag on `WriteQueue`. Three mechanisms cooperate:
- `getWriteQueue` marks every non-active queue abandoned when the user differs from the last-seen identity (the pre-existing eviction site, now more aggressive).
- A `useEffect` on `userId` calls the same abandonment pass on every render with a new active identity, covering the window between logout/login and the new user's first toggle (when `getWriteQueue` would otherwise not fire).
- The flush loop checks `!queue.abandoned` in its while condition, so the second and later iterations exit without firing another `mutateAsync` after the session changes. The first iteration's in-flight request (already dispatched under the original user's cookies) still runs to completion or failure on its own — only the subsequent iterations, which are the dangerous ones, are blocked. |
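The precedence matrix pinned by the unit tests above can be sketched as a pure function. This is an illustrative reconstruction from the commit descriptions — the parameter shape and names are assumptions, not the actual `resolveSkillActive` export from the codebase.

```typescript
// Hypothetical input shape for illustration; the real helper's signature
// may differ.
interface ResolveSkillActiveArgs {
  userId?: string;            // absent => fail closed
  isOwned: boolean;           // user owns the skill vs. shared via ACL
  override?: boolean;         // skillStates[skillId] entry, when present
  defaultActiveOnShare?: boolean; // interface.skills config, defaults to false
}

export function resolveSkillActive(args: ResolveSkillActiveArgs): boolean {
  const { userId, isOwned, override, defaultActiveOnShare = false } = args;
  // Fail closed: without a user identity there is no ownership or override
  // context, so nothing activates (even with defaultActiveOnShare=true).
  if (userId == null) {
    return false;
  }
  // An explicit per-user override always wins over ownership and config.
  if (override !== undefined) {
    return override;
  }
  // No override: owned skills default active; shared skills follow the
  // admin-configured default (owned skills ignore it entirely).
  return isOwned ? true : defaultActiveOnShare;
}
```

Keeping overrides as an exceptions list (deleting entries that match the resolved default, as a later commit does) depends on this resolution order staying stable: the stored map only ever encodes deviations from it.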
||
|
|
3e064c2f2b |
🎯 feat: Per-Agent Skill Selection in Builder and Runtime Scoping (#12689)
* feat: per-agent skill selection in builder and runtime scoping
Wire skills persistence on the Agent model and enable the skills
section in the agents builder panel. At runtime, scope the skill
catalog to only the skills configured on each agent (intersected
with user ACL). When no skills are configured, the full user catalog
is used as the default. The ephemeral chat toggle overrides per-agent
scoping to provide the full catalog.
* fix: add scopeSkillIds to @librechat/api mock in responses unit test
The test mocks @librechat/api but was missing the newly imported
scopeSkillIds, causing createResponse to throw before reaching the
assertions. Added a passthrough mock that returns the input array.
* fix: scope primeInvokedSkills by agent's configured skills
primeInvokedSkills was receiving the full unscoped accessibleSkillIds,
bypassing the per-agent skill scoping applied to initializeAgent. This
allowed previously invoked skills from message history to be resolved
and primed even when excluded from the agent's configured skill set.
Apply the same scopeSkillIds filtering to match the initializeAgent
calls, so skill resolution is consistent across catalog injection
and history priming.
* fix: preserve agent skills through form reset and union prime scope
Two related bugs in the per-agent skill selection flow:
1. resetAgentForm dropped the persisted skills array because the generic
fall-through at the end of the loop excludes object/array values.
Combined with composeAgentUpdatePayload always emitting skills, this
caused any save of a previously-configured agent to silently overwrite
skills with an empty array. Add an explicit case for skills mirroring
the agent_ids handling.
2. primeInvokedSkills processes the full conversation payload, including
prior handoff-agent invocations. Scoping it to only primaryAgent.skills
meant a skill invoked by a handoff agent in a prior turn could not be
resolved when the current primary agent had a different scope, leaving
message history reconstruction incomplete. Union the per-agent scoped
accessibleSkillIds across primary plus all loaded handoff agents so
any skill any active agent could invoke is resolvable from history.
* fix: mark inline skill removals as dirty
The inline X button on the skills list called setValue without
shouldDirty: true, so removing a skill via this control did not
mark the skills field as dirty in react-hook-form state. When a
user removed a skill with the X button and also staged an avatar
upload in the same save, isAvatarUploadOnlyDirty returned true and
onSubmit short-circuited to avatar-only upload, silently dropping
the PATCH that would persist the skill removal.
The dialog path (SkillSelectDialog) already passes shouldDirty: true
on add/remove; this aligns the inline control with that behavior.
* fix: restore full ACL scope for primeInvokedSkills history reconstruction
Reverting the earlier scoping of primeInvokedSkills to the active-agent
union. That change conflated runtime invocation scoping (which correctly
gates what the model can call now) with history reconstruction (which
restores bodies the model already saw in prior turns).
Per-agent scoping still applies at:
- Catalog injection (injectSkillCatalog via initializeAgent)
- Runtime invocation (handleSkillToolCall via enrichWithSkillConfigurable,
using each agent's scoped accessibleSkillIds in agentToolContexts)
History priming is a read of past context, not a grant of new capability.
Scoping it causes historical skill bodies to vanish from formatAgentMessages
when an agent's skills list is edited mid-conversation or when the ephemeral
toggle flips, which breaks message reconstruction and drops code-env file
continuity for /mnt/data/{skillName}/ references. The user's ACL-accessible
set is the correct and sufficient gate for history reconstruction.
* fix: close openai.js skill gap and pin undefined vs [] semantics
Three related gaps surfaced in review:
1. api/server/controllers/agents/openai.js was a third skill resolution
site alongside responses.js and initialize.js, but still used the old
activation gate (required ephemeralAgent.skills === true) and never
passed accessibleSkillIds through scopeSkillIds. Per-agent scoping
silently did not apply on this route. Mirror the same pattern used
in responses.js so all three routes behave identically.
2. scopeSkillIds previously collapsed undefined and [] into the same
"full catalog" fallback, making it impossible for a user to express
"this agent has no skills." Tighten the semantics before any data
is written under the old behavior:
- undefined / null = not configured, full catalog
- [] = explicitly none, returns []
- non-empty = intersection with ACL-accessible set
Update defaultAgentFormValues.skills from [] to undefined so a brand
new agent whose skills UI was never touched does not accidentally
persist "explicit none" on first save (removeNullishValues strips
undefined from the payload server side).
3. Add direct unit tests for scopeSkillIds covering every case
(undefined, null, empty, disjoint, overlap, exact match, empty
accessible set). 16 tests total in skills.test.ts pass.
* fix: add scopeSkillIds to @librechat/api mock in openai unit test
Same pattern as the earlier responses.unit.spec.js fix: the test mocks
@librechat/api with an explicit object, so each newly imported symbol
must be added to the mock. Without scopeSkillIds, OpenAIChatCompletion
controller throws on destructuring before reaching recordCollectedUsage,
causing the token usage assertions to fail.
|
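The undefined-vs-[] semantics pinned in the commit above can be sketched as follows. This is an illustrative reconstruction of the described contract, not the actual `scopeSkillIds` export from `@librechat/api`.

```typescript
/**
 * Scope an agent's configured skills against the user's ACL-accessible set.
 * Semantics (per the commit above):
 *   - undefined / null => not configured: fall back to the full catalog
 *   - []               => explicitly none: the agent has no skills
 *   - non-empty        => intersection with the ACL-accessible set, so an
 *                         agent cannot grant skills its user cannot access
 */
export function scopeSkillIds(
  configured: string[] | null | undefined,
  accessible: string[],
): string[] {
  if (configured == null) {
    return accessible;
  }
  if (configured.length === 0) {
    return [];
  }
  const allowed = new Set(accessible);
  return configured.filter((id) => allowed.has(id));
}
```

The `== null` check is what keeps undefined and null in the "not configured" branch while letting `[]` fall through to "explicit none" — exactly the distinction the form-default change (`defaultAgentFormValues.skills` from `[]` to `undefined`) relies on.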