Mirror of https://github.com/danny-avila/LibreChat.git
Synced 2026-05-14 16:38:40 +00:00
25 commits

**d90567204e**
🛟 fix: persist Vertex Gemini 3 thoughtSignatures across DB round-trips (#13026)
When a tool round-trip is interrupted between the tool result and the
model's text reply (user aborted, network drop, pod restart, ...) and
LibreChat persists the partial assistant message, the next conversation
turn reconstructs an `AIMessage` from `formatAgentMessages` that has
`tool_calls` populated but no `additional_kwargs.signatures`. Vertex
Gemini 3 rejects the resumed request with 400 because the most recent
historical functionCall has no `thought_signature`.
## Storage shape
Capture as `Record<tool_call_id, signature>` rather than a flat array.
This addresses the codex P1 review:
> When an assistant turn contains multiple sequential tool-call batches,
> this restoration path writes all persisted thoughtSignatures onto only
> the last tool-bearing AIMessage. Vertex/Gemini validates signatures
> for each step in the current tool-calling turn, so earlier
> functionCall steps reconstructed without their signature can still
> fail with 400.
A single agent run can fire multiple `chat_model_end` events when the
loop cycles the LLM with intervening tool results — each cycle owns a
distinct `tool_call_id`. Per-id storage maps each signature back onto
the right reconstructed `AIMessage`, not just the last one.
## Mapping
`additional_kwargs.signatures` is a flat array indexed by *response part*
(text + functionCall interleaved). `tool_calls` is just the function
calls in their original order. Non-empty signatures correspond 1:1 with
tool_calls in order — see `partsToSignatures` in
`@langchain/google-common`. Single-pass walk maps `signatures[i]` (when
non-empty) onto the i-th `tool_call.id`.
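A minimal sketch of that walk, assuming illustrative shapes (the real handler lives in `callbacks.js`; the cursor handling for empty text-part signatures is an interpretation of the ordering invariant above):

```typescript
// Hypothetical shapes; only the ordering invariant mirrors the description above.
type ToolCall = { id: string; name?: string };

function collectThoughtSignatures(
  signatures: string[] = [],
  toolCalls: ToolCall[] = [],
  collected: Record<string, string> = {},
): Record<string, string> {
  let callIndex = 0;
  for (const signature of signatures) {
    if (!signature) {
      continue; // text parts carry empty signatures and do not consume a tool call
    }
    const toolCall = toolCalls[callIndex];
    callIndex += 1;
    if (toolCall?.id) {
      collected[toolCall.id] = signature; // per-id storage survives multi-step turns
    }
  }
  return collected;
}
```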
## Pipeline
| Stage | File | Change |
|---|---|---|
| Capture | callbacks.js | `ModelEndHandler` accepts `Record<string,string>` map; walks signatures + tool_calls in tandem to record per-id. Gated on the map being provided — non-Vertex flows are no-op (and also no-op even when provided, since they don't emit signatures). |
| Plumbing | initialize.js | Allocate `collectedThoughtSignatures = {}`, share with handler + client. Always allocated; the JSDoc explicitly documents that it stays empty for non-Vertex providers. |
| Surface | client.js | `sendCompletion` returns `metadata.thoughtSignatures` when the map has entries; falls through unchanged when empty. |
| Persist | (existing BaseClient.handleRespCompletion) | Writes `metadata` from `sendCompletion` onto `responseMessage.metadata`. Mongoose `Mixed` — no migration. |
| Restore | formatMessages.js | Track every tool-bearing AIMessage produced from a TMessage. For each, build a position-aligned `additional_kwargs.signatures` array (empty placeholders for tool_calls without a stored sig). Agents' `fixThoughtSignatures` dispatches non-empty entries to functionCall parts in order. |
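A rough sketch of the Restore row, with assumed shapes (the real logic sits in `formatMessages.js`, and the agents' `fixThoughtSignatures` then dispatches the non-empty entries onto functionCall parts):

```typescript
type StoredSignatures = Record<string, string>; // tool_call_id -> thought_signature

function buildRestoredSignatures(
  toolCalls: Array<{ id: string }>,
  stored?: StoredSignatures,
): string[] {
  if (!stored || Object.keys(stored).length === 0) {
    return []; // back-compat: nothing persisted, leave additional_kwargs untouched
  }
  // Position-aligned array: empty placeholders for tool_calls without a stored signature.
  return toolCalls.map((call) => stored[call.id] ?? '');
}
```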
## Live verification
- **Single-step:** real Vertex `gemini-3.1-flash-lite-preview` resume-after-tool case. With fix ✅ / without ❌ 400.
- **Multi-step (codex case):** real two-step agent loop (list /tmp → echo done). Each step's signature attaches to its own reconstructed AIMessage. With fix ✅ / without ❌ 400.
- **Cross-provider:** Anthropic Claude haiku-4.5 + OpenAI gpt-5-mini accept the persisted/restored shape unchanged.
## Tests
`modelEndHandler.spec.js` (new) — 6 tests:
- maps non-empty signatures onto tool_call_ids in order
- accumulates per-id across multiple `model_end` events (multi-step)
- no-op when `collectedThoughtSignatures` is null
- no-op when `signatures` field missing (non-Vertex)
- no-op when `tool_calls` missing
- preserves existing `collectedUsage` array contract
`formatAgentMessages.spec.js` — 6 new tests:
- restores onto the AIMessage that owns the tool_call
- per-step attachment for multi-step turns (codex review case)
- preserves tool_call ordering when signatures are partial
- no-op when metadata.thoughtSignatures absent
- no-op when assistant has no tool_calls
- no-op when stored ids don't match any current tool_call
37 passing across 3 suites; 15 existing formatAgentMessages tests unchanged.
## Compatibility
- Backward-compatible — restore gated on `metadata.thoughtSignatures` being a populated object; capture gated on the map being provided.
- No schema migration — uses `Message.metadata: Mixed` already in place.
- Cross-provider safe — non-Vertex providers tolerate the field (verified live against Anthropic + OpenAI converters).
- Pairs with [agents#159](https://github.com/danny-avila/agents/pull/159) for full coverage on histories that mix plain-text and toolcall AIMessages.

**6c6c72def7**
🚀 feat: Decouple File Attachment Persistence from Preview Rendering (#12957)
* 🗂️ feat: add `status` lifecycle to file records for two-phase previews
Schema and model foundation for decoupling the agent's final response
from CPU-heavy office-format HTML extraction.
- `MongoFile.status: 'pending' | 'ready' | 'failed'` (indexed) and
`previewError?: string` mirror the lifecycle: phase-1 emits the file
record at `pending` so the response is unblocked; phase-2 transitions
to `ready` (with text/textFormat) or `failed` (with previewError) in
the background. Absent for legacy records — clients treat that as
`ready` for back-compat.
- Mirror types added to `TFile` in data-provider so frontend cache
consumers see the new fields.
- New `sweepOrphanedPreviews(maxAgeMs)` method on the file model
recovers stale `pending` records left behind by a process restart
mid-extraction; transitions them to `failed` with
`previewError: 'orphaned'`. Cheap because `status` is indexed.
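A minimal sketch of the lifecycle fields and the sweep query, assuming Mongoose shapes that only approximate the real schema:

```typescript
import type { Model } from 'mongoose';

// Illustrative schema additions; exact field options are assumptions.
const previewLifecycleFields = {
  status: { type: String, enum: ['pending', 'ready', 'failed'], index: true },
  previewError: { type: String, maxlength: 200 },
};

// Mark stale `pending` records as failed:orphaned; cheap because `status` is indexed.
async function sweepOrphanedPreviews(File: Model<Record<string, unknown>>, maxAgeMs: number) {
  const cutoff = new Date(Date.now() - maxAgeMs);
  return File.updateMany(
    { status: 'pending', updatedAt: { $lt: cutoff } },
    { $set: { status: 'failed', previewError: 'orphaned' } },
  );
}
```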
* ⚡ feat: two-phase code-execution preview flow (unblocks final response)
The agent's final response no longer waits on CPU-heavy office HTML
extraction. Phase-1 (download + storage save + DB record at
`status: 'pending'`) is awaited as before; phase-2 (extract +
`updateFile`) runs in the background with a hard 60s ceiling.
Three flows, all funneling through `processCodeOutput` and updated to
the new `{ file, finalize? }` return shape:
- `callbacks.js` (chat-completions + Open Responses streaming): emit
the phase-1 attachment immediately (carries `status: 'pending'` for
office buckets so the UI shows "preparing preview…"), then
fire-and-forget `finalize()`. If the SSE stream is still open when
phase-2 lands, push an `attachment` update event with the same
`file_id` so the client merges over the placeholder in place.
- `tools.js` direct endpoint: same split — return the phase-1
metadata immediately, run extraction in the background. Client
polls for the resolved record.
`finalize()` wraps the existing 12s per-render timeout in a 60s outer
`withTimeout`. The HTML-or-null contract from #12934 is preserved:
office types that fail extraction transition to `status: 'failed'`
with `previewError: 'parser-error' | 'timeout'` rather than falling
back to plain text (would be an XSS vector).
Promises continue running after the HTTP response closes (Node
doesn't kill them). The boot-time orphan sweep covers the only case
that loses progress — actual process restart mid-extraction.
`primeFiles` annotates the agent's `toolContext` line for prior-turn
files: `(preview not yet generated)` for pending, `(preview
unavailable: <reason>)` for failed. The model can volunteer "you can
still download it" instead of pretending the preview is fine.
`hasOfficeHtmlPath` exported from `@librechat/api` so `processCodeOutput`
can decide whether a file expects a preview at all.
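One way to picture the `{ file, finalize? }` split and the fire-and-forget consumer; helper names here are assumptions, not the actual code in `processCodeOutput` or `callbacks.js`:

```typescript
// Illustrative consumer of the { file, finalize? } return shape.
interface CodeOutputResult {
  file: { file_id: string; status?: 'pending' | 'ready' | 'failed' };
  finalize?: () => Promise<void>; // present only for office buckets that need HTML extraction
}

const PREVIEW_RENDER_CEILING_MS = 60_000;

function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) => setTimeout(() => reject(new Error('timeout')), ms)),
  ]);
}

async function emitCodeOutput(
  result: CodeOutputResult,
  emitAttachment: (file: CodeOutputResult['file']) => void,
) {
  emitAttachment(result.file); // phase-1: unblock the response immediately
  if (!result.finalize) {
    return;
  }
  // phase-2: fire-and-forget, bounded by a hard outer ceiling
  void withTimeout(result.finalize(), PREVIEW_RENDER_CEILING_MS).catch((err) => {
    console.warn('deferred preview render failed', err);
  });
}
```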
* 🔍 feat: `GET /api/files/:file_id/preview` endpoint and boot orphan sweep
- New `GET /api/files/:file_id/preview` route returns
`{ status, text?, textFormat?, previewError? }`. The frontend's
`useFilePreview` React Query hook polls this while phase-2 is in
flight, then auto-stops on terminal status. ACL identical to the
download route (reuses `fileAccess` middleware). Defaults `status`
to `'ready'` for legacy records so back-compat is implicit.
`text` only included when `status === 'ready'` and non-null —
preserves the HTML-or-null security contract from #12934.
- `sweepOrphanedPreviews()` invoked on boot in both `server/index.js`
and `server/experimental.js`. Recovers any `pending` records left
behind by a process restart mid-extraction (the only case the
in-process two-phase flow can't handle on its own). Fire-and-forget
so a transient sweep failure doesn't block startup.
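A sketch of what the preview route's response shaping could look like, assuming a hypothetical `findFileById` lookup in place of the real model call and ACL wiring:

```typescript
import { Router, type Request, type Response } from 'express';

// Hypothetical lookup helper standing in for the real model call.
declare function findFileById(fileId: string): Promise<{
  status?: 'pending' | 'ready' | 'failed';
  text?: string | null;
  textFormat?: string;
  previewError?: string;
} | null>;

const router = Router();

// Sketch of GET /api/files/:file_id/preview (the real route sits behind the fileAccess ACL middleware).
router.get('/:file_id/preview', async (req: Request, res: Response) => {
  const file = await findFileById(req.params.file_id);
  if (!file) {
    res.status(404).json({ message: 'File not found' });
    return;
  }
  const status = file.status ?? 'ready'; // legacy records have no status field
  const body: Record<string, unknown> = { status };
  if (status === 'ready' && file.text != null) {
    body.text = file.text; // HTML-or-null contract: text only ships when ready and non-null
    body.textFormat = file.textFormat;
  }
  if (status === 'failed') {
    body.previewError = file.previewError;
  }
  res.json(body);
});
```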
* 🖥️ feat: frontend two-phase preview consumer (polling + UI states)
Wires the React side to the new lifecycle so the user sees what's
happening with their file while phase-2 extraction runs in the
background and after the response stream closes.
- `useAttachmentHandler` upserts by `file_id` (was append-only) so
the phase-2 SSE update event merges over the pending placeholder
in place. Lightweight attachments without a `file_id`
(web_search / file_search citations) keep the legacy append path.
- `useFilePreview(file_id)` React Query hook with
`refetchInterval: (data) => data?.status === 'pending' ? 2500 : false`
so polling auto-stops on the first terminal response without the
caller having to flip `enabled`.
- `useAttachmentPreviewSync(attachment)` bridges polled data into
`messageAttachmentsMap`. Polling enabled iff
`status === 'pending' && isAnySubmitting` — per the design ask:
active polling while the LLM is still generating, then quiet.
Process-restart and post-stream cases are covered by polling on
the next interaction.
- `Attachment.tsx` renders a small `PreviewStatusIndicator` (spinner +
"Preparing preview…" for pending, alert icon + "Preview unavailable"
for failed) inside `FileAttachment`. Download button stays fully
functional in both states. Two new English locale keys.
- Data-provider scaffolding: `TFilePreview` type, `endpoints.filePreview`,
`dataService.getFilePreview`, `QueryKeys.filePreview`.
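A minimal sketch of the `useFilePreview` polling shape from the list above; the real hook goes through `dataService.getFilePreview` and `QueryKeys.filePreview` rather than a bare fetch, so everything but the `refetchInterval` function form is an assumption:

```typescript
import { useQuery } from '@tanstack/react-query';

type TFilePreview = {
  status: 'pending' | 'ready' | 'failed';
  text?: string;
  textFormat?: string;
  previewError?: string;
};

export function useFilePreview(fileId: string, enabled = true) {
  return useQuery<TFilePreview>(
    ['filePreview', fileId],
    () => fetch(`/api/files/${fileId}/preview`).then((r) => r.json() as Promise<TFilePreview>),
    {
      enabled,
      // Function form: poll every 2.5s while pending, stop on the first terminal response.
      refetchInterval: (data) => (data?.status === 'pending' ? 2500 : false),
    },
  );
}
```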
* 🧪 fix: stub `useAttachmentPreviewSync` in pre-existing Attachment test mocks
The new `useAttachmentPreviewSync` hook is called unconditionally inside
`FileAttachment` (added in the prior commit). Two pre-existing test
files mock `~/hooks` to provide `useLocalize` only — the un-mocked
preview hook reference resolved to undefined and crashed render with
`(0 , _hooks.useAttachmentPreviewSync) is not a function` on the
Ubuntu/Windows CI runners.
Fix is local to the test mocks: add a no-op stub that returns
`{ status: 'ready' }` so the component renders the legacy chip path.
The two-phase preview behavior itself has its own dedicated suites
(`useAttachmentHandler.spec.tsx`, `useAttachmentPreviewSync.spec.tsx`).
* 🐛 fix: route phase-2 attachment update to current-run messageId
Codex P1 review on PR #12957. `processCodeOutput` intentionally
preserves the original DB `messageId` across cross-turn filename reuse
so `getCodeGeneratedFiles` can still trace a file back to the
assistant message that originally produced it. The phase-1 SSE emit
already routes by the current run's messageId — `processCodeOutput`
runtime-overlays it via `Object.assign(file, { messageId, toolCallId })`
and the callback writes `result.file` directly.
Phase-2 was passing the raw `updateFile` return through
`attachmentFromFileMetadata`, which read `messageId` straight off the
DB record. On a turn-N run that re-emitted a filename from turn-1
(e.g. agent writes `output.csv` again), the phase-2 SSE update
routed to `turn-1-msg` instead of `turn-N-msg`. Frontend's
`useAttachmentHandler` upserts under the wrong messageAttachmentsMap
slot — turn-N's pending chip stays stuck at "preparing preview…"
while turn-1's already-resolved attachment gets re-merged.
Fix: thread `runtimeMessageId` through `attachmentFromFileMetadata`
and pass `metadata.run_id` from the phase-2 emit site. Mirrors how
phase-1 sources its messageId. Tests cover the cross-turn reuse case
plus the writableEnded / null-finalize / no-finalize paths to lock
in the broader phase-2 emit contract.
* 🛠️ refactor: address codex audit findings (wire-shape parity, DRY, defensive catch)
Comprehensive audit on PR #12957. Resolves all valid findings:
- **MAJOR #1 — Wire-shape parity**: phase-1 ships the full `fileMetadata`
record over SSE; phase-2 was using a tight `attachmentFromFileMetadata`
projection. Drop the projection and have phase-2 spread `{...updated,
messageId, toolCallId}` so both events match the long-standing
legacy phase-1 shape clients depend on.
- **MAJOR #2 — DRY**: extract `runPhase2Finalize({ finalize, fileId,
onResolved })` into `process.js` (alongside `processCodeOutput` whose
contract it pairs with). Both `callbacks.js` paths and `tools.js`
now flow through it. A single catch path eliminates the divergence
surface — the bug fixed in 01704d4f0 (cross-turn messageId routing)
was a symptom of this duplication risk.
- **MINOR #3 — JSDoc accuracy**: `finalizePreview`'s buffer is bounded
by `fileSizeLimit`, not the 1MB extractor cap. Updated and added a
note about peak heap from queued buffers.
- **MINOR #4 — Defensive catch**: `runPhase2Finalize`'s catch attempts
a best-effort `updateFile({ status: 'failed', previewError:
'unexpected' })` for the file_id, so a programming bug in
`finalizePreview` doesn't leave the record stuck `'pending'` until
the next boot-time orphan sweep.
- **NIT #6 — Stale PR refs**: 12952 → 12957 in 3 places.
- **NIT #7 — Schema bound**: `previewError` capped at `maxlength: 200`
to prevent a future codepath from accidentally persisting a stack
trace.
Skipped per audit verdict (non-blocking):
- #5 (memory pressure): documented in JSDoc; impl change was reviewer's
"consider", not actionable.
- #8 (double DB query per poll): low cost, indexed by_id, polling is
gated narrow.
- #9 (TAttachment cast): the union type is intentional; the casts are
safe widening, refactoring TAttachment is invasive and out of scope.
Tests: 11 new (7 `runPhase2Finalize` unit tests covering happy path,
null-finalize, throws, double-fail, no-fileId, no-onResolved; +4
wire-shape parity assertions in the existing cross-turn test). 328
backend tests pass; 528 frontend tests pass; lint and typecheck clean.
* 🛡️ refactor: address codex P1+P2 + rename to drop phase-1/2 jargon
Codex round 2 review on PR #12957 caught two race conditions and one
recovery gap, all triggered by cross-turn filename reuse (`claimCodeFile`
intentionally returns the same `file_id` for the same
`(filename, conversationId)` across turns). Plus naming cleanup the
user requested — internal "phase 1 / phase 2" vocabulary leaks across
sprints, replace it everywhere with terms describing what's actually
happening.
P1 — stale render overwrites newer revision (process.js)
Two turns reusing `output.csv` share a `file_id`. If turn-1's
background render resolves AFTER turn-2's persist step, the
unconditional `updateFile` writes turn-1's stale text/status over
turn-2's pending placeholder. Fix: stamp a fresh `previewRevision`
UUID on every emit, thread it through `finalizePreview`, and make
the commit conditional via a new optional `extraFilter` argument
on `updateFile` (`{ previewRevision: <expected> }`). The defensive
`updateFile` in `runPreviewFinalize`'s catch uses the same guard
so a programming error from an older render also can't override a
newer turn.
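A sketch of that conditional commit, assuming a widened `updateFile` signature along the lines described above (the exact parameter shapes are not taken from the code):

```typescript
import { randomUUID } from 'node:crypto';

// Assumed signature: `extraFilter` is merged into the update's query so a stale
// render matches nothing and writes nothing.
declare function updateFile(
  data: { file_id: string; [key: string]: unknown },
  extraFilter?: Record<string, unknown>,
): Promise<unknown>;

// Stamped on every emit; each new turn invalidates renders still in flight from older turns.
const previewRevision = randomUUID();

async function commitRender(fileId: string, expectedRevision: string, html: string) {
  await updateFile(
    { file_id: fileId, status: 'ready', text: html, textFormat: 'html' },
    { previewRevision: expectedRevision }, // conditional commit: only the matching revision lands
  );
}

// Usage sketch: await commitRender(file.file_id, previewRevision, renderedHtml);
```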
P1 — stale React Query cache on pending remount (queries.ts)
Same root cause from the frontend side. Cache key
`[QueryKeys.filePreview, file_id]` may hold a prior turn's `'ready'`
payload; with `refetchOnMount: false` and the polling gate on
`pending`, polling never starts for the new placeholder. Fix:
`useAttachmentHandler` invalidates that query whenever an attachment
with a `file_id` arrives. Both initial-emit and update events
trigger invalidation — uniform gate.
P2 — quick-restart orphans skipped by boot sweep (files.js)
Boot `sweepOrphanedPreviews` uses a 5-min cutoff for multi-instance
safety. A crash + restart inside the cutoff leaves `pending` records
that never get touched again. Fix: lazy sweep inside the preview
endpoint — if a polled record is `pending` and `updatedAt` is older
than 5 min, mark it `failed:orphaned` on the spot before responding.
Conditional on the same `updatedAt` we observed so a concurrent
legitimate update wins. Cheap, bounded by user activity.
Naming cleanup
- `runPhase2Finalize` → `runPreviewFinalize`
- `PHASE_TWO_TIMEOUT_MS` → `PREVIEW_FINALIZE_TIMEOUT_MS`
- All `phase-1` / `phase-2` / `two-phase` prose replaced with
"the immediate emit", "the deferred render", "the persist step",
"the deferred preview", etc. Skill-feature `phase 1/2` references
(different feature) left alone.
Tests: 10 new (4 lazy-sweep × preview endpoint, 3 cache-invalidation ×
useAttachmentHandler, 3 extraFilter × updateFile data-schemas).
Backend 332/332, frontend 531/531, data-schemas 37/37, lint clean.
* 🛠️ refactor: address comprehensive review (round 3) — stale-cache MAJOR + 3 minors
Comprehensive review on PR #12957 caught a P1 follow-on bug from the
prior `invalidateQueries` fix, plus 3 maintainability findings.
MAJOR: stale React Query cache not actually fixed by `invalidateQueries`
The previous fix called `invalidateQueries` to flush stale cached
preview data on cross-turn filename reuse. But `useFilePreview` had
`refetchOnMount: false`, which made the new observer read the
stale-marked 'ready' data without refetching. The polling
`refetchInterval` then evaluated against stale 'ready' → returned
`false` → polling never started → user stuck on stale content.
Fix (belt-and-suspenders):
a) `useAttachmentHandler` switched to `removeQueries` — drops the
cache entry entirely so the next mount has nothing to read and
must fetch.
b) `useFilePreview` no longer sets `refetchOnMount: false`, so the
React Query default (`true`) kicks in — second line of defense
if any future codepath observes stale data before the handler
has a chance to evict.
MINOR: `finalizePreview` JSDoc missing `previewRevision` param
Added with explanation of the conditional update guard.
MINOR: asymmetric stream-writable guard between SSE protocols
Chat-completions delegated the gate to `writeAttachmentUpdate`;
Open Responses inlined `!res.writableEnded && res.headersSent`.
Extracted `isStreamWritable(res, streamId)` predicate; both paths
+ `writeAttachmentUpdate` now share the single source of truth.
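A minimal sketch of the shared predicate; the branch semantics (streamId short-circuit, headersSent, writableEnded) follow the test list later in this PR, and the exact signature is an assumption:

```typescript
import type { Response } from 'express';

export function isStreamWritable(res: Response | null | undefined, streamId?: string): boolean {
  if (streamId) {
    return true; // id-routed streams are not written through `res` directly
  }
  if (!res) {
    return false;
  }
  return res.headersSent && !res.writableEnded;
}
```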
NIT: `(data as Partial<TFile>).file_id` cast repeated 4 times
Extracted to a `fileId` local at the top of the handler.
Tests: existing 9 invalidate-tests rewritten as remove-tests; +1 new
lock-in test asserts removeQueries is called and invalidateQueries
is NOT (regression guard against round-3 finding). 332 backend pass,
532 frontend pass, lint clean.
Skipped findings (deferred / acceptable):
- MINOR: post-submission pending state has no auto-recovery — the
`isAnySubmitting` polling gate was the user's explicit design;
LLM context surfaces failed/pending so the model can volunteer.
Worth a follow-up if real users hit it.
- NIT: double DB query per preview poll — reviewer marked acceptable;
changing `fileAccess` middleware is out of scope.
* 🛡️ test: address comprehensive review NITs (initial-emit guard + isStreamWritable coverage)
NIT — chat-completions initial emit skips writableEnded check
The Open Responses initial emit was switched to use the new
`isStreamWritable` predicate in the round-3 commit, but the
chat-completions initial emit kept the older narrower check
(`streamId || res.headersSent`). On a client disconnect mid-stream
(`writableEnded === true`) it would still hit `res.write` and
raise `ERR_STREAM_WRITE_AFTER_END` — caught by the outer IIFE
catch but logged as noise. Switch this site to `isStreamWritable`
too so both initial-emit paths share the same gate as the
deferred update emits.
NIT — `isStreamWritable` not directly unit-tested
The predicate was only covered indirectly via the deferred-preview
SSE tests (writableEnded skip, headersSent check). Export from
`callbacks.js` and add 5 parametric tests pinning down each branch
(streamId truthy, res null, !headersSent, writableEnded, happy
path) so a future condition addition can't silently regress.
* 🐛 fix: stuck "Preparing preview…" + inline the chip subtitle
Two related fixes for a stuck-spinner bug a user reported in manual
testing of PR #12957.
**Stuck spinner (the bug)**
The deferred preview render can complete a few seconds AFTER the SSE
stream closes (typical case: PPTX render finishes ~3s after the LLM
emits FINAL). When that happens, the SSE update is silently dropped
(`isStreamWritable` returns false on a closed stream) and polling is
the only recovery path.
The earlier polling gate was `status === 'pending' && isAnySubmitting`,
which mirrored the original design intent ("only query while the LLM
is still generating"). But `isAnySubmitting` flips false the moment
the model emits FINAL — milliseconds before the deferred render
commits. Polling never runs, the chip stays "Preparing preview…"
forever even though the DB has `status: 'ready'` with valid HTML.
Drop the `isAnySubmitting` part of the gate. `useFilePreview`'s
`refetchInterval` is already a function-form that returns `false` on
the first terminal response, so polling auto-stops within one tick of
resolution. The server-side render ceiling (60s) plus the lazy sweep
in the preview endpoint cap the worst case to ~24 polls per pending
attachment. Polling itself never blocks UX — the gate's purpose was
"don't waste cycles", and capping by terminal status is the correct
expression of that.
**Inline the chip subtitle (the visual)**
The previous design rendered "Preparing preview…" as a loose-feeling
spinner+text BELOW the file chip. The chip itself looked done while a
floating annotation said it wasn't.
`FileContainer` gains an optional `subtitle?: ReactNode` prop that
overrides the default file-type label. `Attachment.tsx` passes a
`PreviewStatusSubtitle` (spinner + "Preparing preview…" / alert +
"Preview unavailable") into that slot when the file's preview is
pending or failed. The chip footprint stays identical to its `'ready'`
form — just the second row swaps from "PowerPoint Presentation" to
the status indicator. No floating element, no layout shift.
Tests: regression test pinning down "polling stays enabled after the
LLM finishes" so a future revert can't reintroduce the stuck-spinner
bug. Existing FileContainer tests pass unchanged (subtitle override
is opt-in). 522 frontend tests pass; lint clean.
* 🐛 fix: deferred-preview survives reload + matches artifact card chrome
Fixes the remaining stuck-pending case after the polling gate fix: on
a reloaded conversation, message.attachments come from the DB frozen at
the immediate-persist `status: 'pending'`, but `messageAttachmentsMap`
is empty because no SSE handler ever fired for that messageId. Polling
now INSERTS a new live entry when no record matches the file_id, and
`useAttachments` merges live entries onto DB entries by file_id so the
resolved text/textFormat reach `artifactTypeForAttachment` and the
chip routes through the proper PanelArtifact card.
Also replaces the small file chip used during the pending state with
a PreviewPlaceholderCard that mirrors ToolArtifactCard chrome, so the
transition to the resolved PanelArtifact no longer reshapes the UI.
* ✨ feat: auto-open panel when deferred preview resolves pending→ready
The legacy auto-open path is gated only on `isSubmitting`, so an
office-file preview that resolves *after* the SSE stream closes would
render in place but never auto-open the panel — even though that's
exactly the moment the result becomes meaningful to the user. Adds a
per-file_id one-shot signal that `useAttachmentPreviewSync` flips on
the pending→ready edge; `ToolArtifactCard` consumes it on mount and
auto-opens regardless of submission state. The signal is *only* set on
the actual transition (history loads of pre-resolved files don't
trigger it) and is consumed once (panel close + reopen on the same
card stays user-controlled).
* 🐛 fix: drop placeholder Terminal overlay + scope auto-open to fresh resolutions
Two fixes for issues spotted in manual testing of the deferred-preview
auto-open feature:
1. PreviewPlaceholderCard was passing `file={attachment}` to FilePreview,
which triggered SourceIcon's Terminal overlay (`metadata.fileIdentifier`
is set on every code-execution file). The artifact card itself doesn't
show that overlay; the placeholder shouldn't either, so the
pending→resolved transition is visually seamless.
2. The `previewJustResolved` flag flipped on every pending→ready
transition observed by the polling hook — including stale-pending
DB records that resolve via the first poll on a *history load*.
Conversations whose immediate-persist snapshot left attachments at
`status: 'pending'` would yank the panel open every revisit.
Adds `mountedDuringStreamRef` to the hook (mirroring ToolArtifactCard)
so the flag fires only when the hook itself was mounted during an
active turn — preserving the pre-PR contract that the panel only
auto-opens for results the user is actively waiting on, never for
history.
* 🐛 fix: don't downgrade preview to failed when only the SSE emit throws
Codex P2 finding on PR #12957: the original chain placed `.catch` after
`.then(onResolved)`, so a throw inside `onResolved` (transport-side
errors — SSE write race after stream close, an emitter listener
throwing) would propagate into the finalize catch and persist
`status: 'failed'` / `previewError: 'unexpected'`. That surfaced
"preview unavailable" in the UI for a perfectly valid file, and
degraded next-turn LLM context to reflect a non-existent failure.
Wraps `onResolved` in its own try/catch so emit errors are logged but
do not affect the file's persisted status. Extraction success and
emit success are now independent: if extraction succeeds and
`finalizePreview` writes the terminal status, the polling layer / next
page load surfaces the resolved preview even if this turn's SSE emit
didn't land.
* 🛡️ fix: run boot-time orphan sweep under system tenant context
Codex P2 finding on PR #12957: `File` is tenant-isolated, so under
`TENANT_ISOLATION_STRICT=true` the boot-time `sweepOrphanedPreviews`
threw `[TenantIsolation] Query attempted without tenant context in
strict mode` and the recovery path silently failed every restart.
Stale `status: 'pending'` records would be stuck until a user happened
to poll the preview endpoint and trigger the lazy sweep — which only
covers the file the user is currently looking at, not the bulk
candidate set the boot sweep is designed to recover.
Wraps the sweep in `runAsSystem(...)` in both boot paths
(`api/server/index.js` and `api/server/experimental.js`) and pins the
contract with regression tests in `file.spec.ts` — one test asserts
the bare call throws under strict mode, the other asserts the
`runAsSystem`-wrapped call succeeds.
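A sketch of the wrapped boot-time call under the assumption of a `runAsSystem` helper that supplies system tenant context for the duration of the callback:

```typescript
declare function runAsSystem<T>(fn: () => Promise<T>): Promise<T>;
declare function sweepOrphanedPreviews(maxAgeMs: number): Promise<unknown>;
declare const logger: { warn: (message: string, error?: unknown) => void };

const ORPHAN_SWEEP_CUTOFF_MS = 5 * 60 * 1000;

export async function bootOrphanSweep(): Promise<void> {
  try {
    await runAsSystem(() => sweepOrphanedPreviews(ORPHAN_SWEEP_CUTOFF_MS));
  } catch (error) {
    // Fire-and-forget: a transient sweep failure must not block startup.
    logger.warn('Orphaned-preview sweep failed on boot; the lazy sweep remains as fallback', error);
  }
}
```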
* 🧹 chore: trim verbose comments from previous commit
* 🧹 chore: address review findings (dead branch, lazy-sweep cutoff, stale JSDoc)
- finalizePreview: drop unreachable !isOfficeBucket branch (caller
already gates on hasOfficeHtmlPath, so this path is always office)
- preview endpoint: drop lazy-sweep cutoff from 5min to 2min — anything
past the 60s render ceiling is definitively orphaned, and per-request
sweep can be tighter than the per-instance boot sweep
- strip stale `isSubmitting` references from JSDoc in 3 spots (the
client-side gate was removed in the earlier stuck-spinner fix)

**596f806f60**
🛡️ fix: Strict Opt-In Skills Activation per Agent (#12823)
* 🛡️ fix: Strict opt-in skills activation per agent
Skills were activating on every agent run that had the capability +
RBAC enabled, regardless of whether the user (ephemeral) or author
(persisted) had opted in. `scopeSkillIds(undefined)` fell through to
"full accessible catalog" whenever `agent.skills` was unset, which is
the default state for any agent created before skills existed and for
every ephemeral agent.
Activation now requires an explicit signal:
- Ephemeral agent → per-conversation skills badge toggle.
- Persisted agent → new `skills_enabled` master switch on the agent
doc, surfaced as a toggle in the Agent Builder skills section.
Enabled + empty/undefined allowlist = full accessible catalog;
enabled + non-empty allowlist = narrow to those ids; disabled (or
undefined) = no skills available, even if an allowlist is set.
Centralised the predicate in `resolveAgentScopedSkillIds` so the
primary-agent path, handoff/discovery, the subagent loop, and both
OpenAI controllers all share one source of truth. Frontend `$`
popover scope mirrors the same logic so the UI never offers skills
the backend would refuse to activate.
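A sketch of the opt-in predicate; the parameter shape is an assumption, and only the decision table mirrors the activation rules above:

```typescript
interface SkillScope {
  isEphemeral: boolean;
  ephemeralToggle?: boolean;    // per-conversation skills badge
  skillsEnabled?: boolean;      // persisted agent's master switch
  allowlist?: string[];         // agent.skills
  accessibleSkillIds: string[]; // RBAC-filtered catalog
}

function resolveAgentScopedSkillIds(scope: SkillScope): string[] {
  const optedIn = scope.isEphemeral ? scope.ephemeralToggle === true : scope.skillsEnabled === true;
  if (!optedIn) {
    return []; // disabled or undefined: no skills, even with an allowlist set
  }
  if (!scope.allowlist || scope.allowlist.length === 0) {
    return scope.accessibleSkillIds; // enabled + empty/undefined allowlist: full accessible catalog
  }
  const allowed = new Set(scope.allowlist);
  return scope.accessibleSkillIds.filter((id) => allowed.has(id));
}
```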
* test: mock resolveAgentScopedSkillIds in agent controller specs
* refactor: address review findings on skills opt-in PR
- AgentConfig: associate skills label with toggle via htmlFor for
click/keyboard affordance; simplify Switch handler to Boolean(value).
- skills: mark scopeSkillIds as @internal so runtime callers continue
to route through resolveAgentScopedSkillIds and inherit the activation
predicate (ephemeral toggle, persisted skills_enabled).
* fix(agents): include skills_enabled in agent list projection
Without this field, agents loaded via the list endpoint hydrate into the
client agentsMap with skills_enabled === undefined, causing the `$`
skill popover to hide every skill on a fresh page load even when the
agent was saved with skills_enabled: true.
* fix(skills): fail closed for persisted agents during agentsMap hydration
Returning undefined while the agents map loads let the popover render the
full catalog for a persisted agent before we could read its
skills_enabled flag, so the user could pick a skill the backend would
then refuse for the turn. Match the strict opt-in contract by returning
[] until the map is authoritative.
* refactor(skills): extract skillsHintKey for readability
Replaces the nested ternary in the skills section JSX with a
pre-computed constant so the activation -> hint key mapping reads
top-down.
* refactor(skills): unflatten skillsHintKey to remove nested ternary

**35bf04b26c**
🧰 refactor: Unify code-execution tools (#12767)
* 🛠️ feat: Add registerCodeExecutionTools helper
Idempotently registers `bash_tool` + `read_file` in the run's tool registry and tool-definition list via a registry `.has()` dedupe. Sets up the single code-execution tool path shared by:
- `initializeAgent` (when an agent has `execute_code` in its tools and the capability is enabled for the run)
- `injectSkillCatalog` (when skills are active; unconditional read_file, bash_tool follows `codeEnvAvailable`)
Both callers reach the helper in the same initialization sequence, so the second call becomes a no-op and exactly one copy of each tool reaches the LLM — no more double registration for agents that combine `execute_code` capability with active skills. Unit-tested on a fresh run, idempotence (second call, overlap with prior tooldefs, partial overlap), and the no-registry variant.
* 🔀 refactor: Route injectSkillCatalog bash_tool + read_file through registerCodeExecutionTools
The `skill` tool is still registered inline (it's skill-path-specific), but `bash_tool` + `read_file` now flow through the shared idempotent helper so a prior registration from the execute_code path doesn't produce a duplicate copy later in the same run. Behavior preserved:
- `read_file` always registers when any active skill is in scope — manually-primed `disable-model-invocation: true` skills still need it to load `references/*` from storage.
- `bash_tool` follows `codeEnvAvailable` exactly as before.
Adds a test pinning the cross-call dedupe: when `injectSkillCatalog` runs AFTER `registerCodeExecutionTools` has already seeded the registry + tool definitions with bash_tool/read_file, the resulting `toolDefinitions` still contains exactly one copy of each.
* 🪄 feat: Expand `execute_code` tool name into bash_tool + read_file at initialize-time
When an agent's `tools` include `execute_code` and the `execute_code` capability is enabled for the run, `initializeAgent` now registers `bash_tool` + `read_file` via `registerCodeExecutionTools` before `injectSkillCatalog`. The legacy `execute_code` tool definition is no longer handed to the LLM — `execute_code` remains on the agent document as a capability-trigger marker, but the runtime expands it into the skill-flavored tool pair.
Call ordering matters: the `execute_code` registration runs BEFORE `injectSkillCatalog`, so the skill path's own `registerCodeExecutionTools` call inside `injectSkillCatalog` becomes a no-op via the registry's `.has()` check. Exactly one copy of each tool reaches the LLM whether the agent has:
- only `execute_code` (legacy path)
- only skills
- both
No data migration needed — `agent.tools: ['execute_code']` stays in the DB unchanged; the expansion is a runtime operation. Three tests cover the matrix: execute_code + capability on → bash_tool + read_file registered; execute_code + capability off → neither registered; no execute_code + capability on → neither registered.
* 🗑️ refactor: Drop CodeExecutionToolDefinition from the builtin registry
Removes the legacy `execute_code` entry from `agentToolDefinitions` and the corresponding import. With the initialize-time expansion in place, nothing consults `getToolDefinition('execute_code')` for a tool schema any more — the capability gate still filters on the string `execute_code`, but the actual tool definitions the LLM sees come from `registerCodeExecutionTools` (i.e. `bash_tool` + `read_file`).
`loadToolDefinitions` in `packages/api/src/tools/definitions.ts` silently drops `execute_code` when it no longer resolves in the registry — that's the expected path and is now covered by an updated test. No caller of `getToolDefinition('execute_code')` expects a non-undefined result after this change.
* 🔌 refactor: Read CODE_API_KEY from env for primeCodeFiles + PTC
Finishes the Phase 4 server-env-keyed rollout on the two remaining `loadAuthValues({ authFields: [EnvVar.CODE_API_KEY] })` sites in `ToolService.js`:
- `primeCodeFiles` (user-attached file priming on execute_code agents)
- Programmatic Tool Calling (`createProgrammaticToolCallingTool`)
Both now read `process.env[EnvVar.CODE_API_KEY]` directly, matching `bash_tool`'s pattern. The per-user plugin-auth path is no longer consulted for code-env credentials anywhere in the hot path — the agents library owns the actual tool-call execution and also reads the env var internally. Priming still fires for existing user-file workflows so the legacy `toolContextMap[execute_code]` hint ("files available at /mnt/data/...") stays in the prompt; only the key lookup changed.
* 🔧 fix: Type the pre-seeded dedupe-test tools as LCTool
CI TypeScript type checks caught `{ parameters: {} }` in the new cross-call dedupe test: `LCTool.parameters` is a `JsonSchemaType`, not `{}`. Use `{ type: 'object', properties: {} }` and type the local registry Map through the parameter-derived shape so the pre-seeded values match what `toolRegistry.set` expects.
* 🛡️ fix: Run execute_code expansion before GOOGLE_TOOL_CONFLICT gate
Codex review caught a latent regression: the original Phase 8 placement ran `registerCodeExecutionTools` after `hasAgentTools` was computed, so an execute-code-only agent on Google/Vertex with provider-specific `options.tools` populated would no longer trip `GOOGLE_TOOL_CONFLICT` — the legacy `CodeExecutionToolDefinition` used to populate `toolDefinitions` before the guard, but after dropping it from the registry, `toolDefinitions` stayed empty until my expansion ran downstream of the guard. Mixed provider + agent tools would silently flow through to the LLM.
Fix moves the `execute_code` expansion to BEFORE `hasAgentTools` computation. `bash_tool` + `read_file` now contribute to the check the same way the legacy `execute_code` def did. Covered by a new test that pins the Google+execute_code+provider-tools scenario — the `rejects.toThrow(/google_tool_conflict/)` path would have silently passed on the prior placement.
* 🔗 fix: Thread codeEnvAvailable through handoff sub-agents
Round-2 codex review caught the other half of the execute_code expansion gap: `discoverConnectedAgents` omitted `codeEnvAvailable` from its forwarded `initializeAgent` params, so handoff sub-agents with `agent.tools: ['execute_code']` lost the `bash_tool` + `read_file` registration (pre-Phase 8 the legacy `CodeExecutionToolDefinition` would have landed in their `toolDefinitions` via the registry).
- Add `codeEnvAvailable?` to `DiscoverConnectedAgentsParams` and forward it verbatim on every sub-agent `initializeAgent` call.
- Update the three JS call sites that construct the primary's `codeEnvAvailable` (`services/Endpoints/agents/initialize.js`, `controllers/agents/openai.js`, `controllers/agents/responses.js`) to pass the same flag into `discoverConnectedAgents` — one authoritative source per request.
- Two regression tests in `discovery.spec.ts` pin the true/false passthrough so a future refactor that drops the param-forwarding surfaces immediately.
Left intentionally unchanged: `packages/api/src/agents/openai/service.ts` (public API helper with no in-repo caller). External consumers of `createAgentChatCompletion` who want code execution should pass a `codeEnvAvailable`-aware `initializeAgent` via `deps` — documenting the full public-API surface is out of scope for this Phase 8 PR.
* 🔗 fix: Thread codeEnvAvailable through addedConvo + memory-agent paths
Round-3 codex review caught the last two production `initializeAgent` callers missing the Phase-8 capability flag:
- `api/server/services/Endpoints/agents/addedConvo.js` (multi-convo parallel agent execution). Added `codeEnvAvailable` to `processAddedConvo`'s destructured params and forwarded it into the per-added-agent `initializeAgent` call. Caller in `api/server/services/Endpoints/agents/initialize.js` passes the same `codeEnvAvailable` it computed for the primary.
- `api/server/controllers/agents/client.js` (`useMemory` — memory extraction agent). Computes its own `codeEnvAvailable` from `appConfig?.endpoints?.[EModelEndpoint.agents]?.capabilities` and forwards into `initializeAgent`. Memory agents rarely list `execute_code`, but if one does, pre-Phase 8 they got the legacy `execute_code` tool registered unconditionally — the passthrough restores parity.
With this, every production caller of `initializeAgent` explicitly resolves the capability: main chat flow (primary + handoff), OpenAI chat completions (primary + handoff), Responses API (primary + handoff), added convo parallel agents, and memory agents. The one remaining caller, `packages/api/src/agents/openai/service.ts::createAgentChatCompletion`, is a public API helper with no in-repo consumer (external callers must pass a capability-aware `initializeAgent` via `deps`).
* 🪤 fix: Remove duplicate appConfig declaration causing TDZ ReferenceError
The Responses API controller had TWO `const appConfig = req.config;` bindings inside `createResponse`: one at the top of the function (added by the Phase 4 `bash_tool` decouple) and one inside the try block (added by the polish PR #12760). Because `const` is block-scoped with a temporal dead zone, the inner redeclaration put `appConfig` in TDZ for the entire try block, so any earlier reference inside the try — notably `appConfig?.endpoints?.[EModelEndpoint.agents]?.allowedProviders` at line 348 — threw `ReferenceError: Cannot access 'appConfig' before initialization`. The error was silently swallowed by the outer try/catch, leaving `recordCollectedUsage` unreached and the six `responses.unit.spec.js` token-usage tests failing.
Removing the inner redeclaration fixes the six failing tests (verified: 11/11 pass locally post-fix, 0 regressions elsewhere). The outer function-scoped binding already provides `appConfig` to every downstream reference.
* 🔗 fix: Thread codeEnvAvailable through the OpenAI chat-completion public API
Round-4 codex review (legitimate on the type-safety angle, even though the runtime concern was already covered): the `createAgentChatCompletion` helper defines its own narrower `InitializeAgentParams` interface locally, and the type was missing `codeEnvAvailable`. External consumers who supply a capability-aware `deps.initializeAgent` couldn't route `codeEnvAvailable` through without a type-cast workaround.
- Widen the local `InitializeAgentParams` interface to include `codeEnvAvailable?: boolean` (matches the real `packages/api/src/agents/initialize.ts` type).
- Derive `codeEnvAvailable` inside `createAgentChatCompletion` from `deps.appConfig?.endpoints?.agents?.capabilities` (the same source the in-repo controllers use) and forward to `deps.initializeAgent`. Uses a string literal `'execute_code'` lookup so this file stays free of a `librechat-data-provider` import — keeping the dependency surface of the public helper minimal.
With this, external consumers of `createAgentChatCompletion` who pass `appConfig` with the agents capabilities get `bash_tool` + `read_file` registration automatically; consumers who don't pass `appConfig` retain the existing "explicit opt-in" semantics (the flag stays `undefined`, expansion is skipped).
* 🧹 chore: Review-driven polish — observability log, JSDoc DRY, test gaps, no-op allocation
Addresses the comprehensive review of PR #12767:
- **Finding #1** (MINOR, observability): `initializeAgent` now emits a debug log when an agent lists `execute_code` in its tools but the runtime gate is off (`params.codeEnvAvailable` !== true). The event-driven `loadToolDefinitionsWrapper` path doesn't log capability-disabled warnings, so without this the tool silently vanishes from the LLM's definitions with zero trace. Operators debugging "why isn't code interpreter working?" now get a signal at the initialize layer.
- **Finding #5** (NIT, allocation): `registerCodeExecutionTools` now returns the input `toolDefinitions` array by reference on the no-op path (both tools already registered by a prior caller in the same run) instead of allocating a fresh spread array every time. The common dual-call scenario — `initializeAgent` then `injectSkillCatalog` — saves one O(n) copy per request.
- **Finding #4** (NIT, DRY): Collapsed the duplicated 6-line JSDoc comment in `openai.js`, `responses.js`, and `addedConvo.js` into either a one-line `@see DiscoverConnectedAgentsParams.codeEnvAvailable` pointer (the two JS call sites) or a compact 3-line block referring back to the canonical source (addedConvo's @param).
- **Finding #2** (MINOR, test gap): Added `api/server/services/Endpoints/agents/addedConvo.spec.js` with three cases covering `codeEnvAvailable=true`, `codeEnvAvailable=false`, and omitted (undefined) passthrough. A future refactor that drops the param from destructuring now surfaces here instead of silently regressing multi-convo parallel agents with `execute_code`.
- **Finding #3** (MINOR, test gap): Added `api/server/controllers/agents/__tests__/client.memory.spec.js` pinning the capability-flag derivation that `AgentClient::useMemory` uses — six cases covering present/absent/null/undefined config shapes plus an enum-literal pin (`'execute_code'` / `'agents'`). Catches enum renames or config-path shifts that would otherwise silently strip `bash_tool` + `read_file` from memory agents.
Finding #7 (jest.mock scoping, confidence 40) left as-is: the reviewer's own risk assessment noted `buildToolSet` doesn't touch the mocked exports, and restructuring a file-level `jest.mock` to `jest.doMock` + dynamic `import()` introduces more complexity than the speculative risk justifies. The existing mock is scoped to the test file and contains the same stubs the adjacent `skills.test.ts` already uses.
Finding #6 (PR description commit count) addressed out-of-band via PR description update.
All existing tests pass, typecheck clean, lint clean across touched files. New tests: 9 cases across 2 new spec files.
* 🧽 refactor: Replace hardcoded 'execute_code' string with AgentCapabilities enum in service.ts
Follow-up review (conf 55) caught that `openai/service.ts`'s Phase 8 `codeEnvAvailable` derivation used the literal `'execute_code'` while every in-repo controller uses `AgentCapabilities.execute_code` from `librechat-data-provider`. The file deliberately uses local type interfaces to keep the public API helper's type surface small, but that pattern was never a ban on single-value imports from the data provider — `packages/api` already depends on it. Importing the enum value means a future rename of `AgentCapabilities.execute_code` propagates to this file automatically, matching the in-repo controllers' behavior.
Other follow-up findings left as-is per the reviewer's own verdict:
- #2 (memory spec mirrors the production expression rather than calling `AgentClient::useMemory` directly): reviewer flagged as "not blocking" / "design-philosophy observation." The test file's JSDoc already explicitly documents the tradeoff and pins the enum literals to catch the most likely drift vector. Standing up `AgentClient` + all its mocks for a one-line regression guard is disproportionate.
- #3 (`addedConvo.spec.js` mock signature vs. underlying `loadAddedAgent` arity): reviewer's own confidence 25 noted the mock matches the wrapper's actual call pattern in the production file. Not a real gap.
- #4 was self-retracted as a false alarm.
* 🗑️ refactor: Fully deprecate CODE_API_KEY — remove all LibreChat-side references
The code-execution sandbox no longer authenticates via a per-run `CODE_API_KEY` (frontend or backend). Auth moved server-side into the agents library / sandbox service, so LibreChat drops every reference:
**Backend plumbing:**
- `api/server/services/Files/Code/crud.js`: `getCodeOutputDownloadStream`, `uploadCodeEnvFile`, `batchUploadCodeEnvFiles` no longer accept `apiKey` or send the `X-API-Key` header.
- `api/server/services/Files/Code/process.js`: `processCodeOutput`, `getSessionInfo`, `primeFiles` drop the `apiKey` param throughout.
- `api/server/services/ToolService.js`: stop reading `process.env[EnvVar.CODE_API_KEY]` for `primeCodeFiles` and PTC; the agents library handles auth internally. Remove the now-dead `loadAuthValues` + `EnvVar` imports. Drop the misleading "LIBRECHAT_CODE_API_KEY" hint from the bash_tool error log.
- `api/server/services/Files/process.js`: remove the `loadAuthValues` call around `uploadCodeEnvFile`.
- `api/server/routes/files/files.js`: code-env file download no longer fetches a per-user key.
- `api/server/controllers/tools.js`: `execute_code` is no longer a tool that needs verifyToolAuth with `[EnvVar.CODE_API_KEY]` — the endpoint always reports system-authenticated so the client skips the key-entry dialog. `processCodeOutput` called without `apiKey`.
- `api/server/controllers/agents/callbacks.js`: `processCodeOutput` invoked without the loadAuthValues round trip, for both LegacyHandler and Responses-API handlers.
- `api/app/clients/tools/util/handleTools.js`: `createCodeExecutionTool` called with just `user_id` + files.
**packages/api:**
- `packages/api/src/agents/skillFiles.ts`: `PrimeSkillFilesParams`, `PrimeInvokedSkillsDeps`, `primeSkillFiles`, `primeInvokedSkills` all drop the `apiKey` param; the gate is purely `codeEnvAvailable`.
- `packages/api/src/agents/handlers.ts`: `handleSkillToolCall` drops the `process.env[EnvVar.CODE_API_KEY]` read; skill-file priming is now gated solely on `codeEnvAvailable`.
`ToolExecuteOptions` signatures drop apiKey from `batchUploadCodeEnvFiles` and `getSessionInfo`.
- `packages/api/src/agents/skillConfigurable.ts`: JSDoc no longer references the env var.
- `packages/api/src/tools/classification.ts`: PTC creation no longer gated on `loadAuthValues`; `buildToolClassification` drops the `loadAuthValues` dep entirely (no LibreChat-side callers need it for this path anymore).
- `packages/api/src/tools/definitions.ts`: `LoadToolDefinitionsDeps` drops the `loadAuthValues` field.
**Frontend:**
- Delete `client/src/hooks/Plugins/useAuthCodeTool.ts`, `useCodeApiKeyForm.ts`, and `client/src/components/SidePanel/Agents/Code/ApiKeyDialog.tsx` — the install/revoke dialogs for CODE_API_KEY are fully dead.
- `BadgeRowContext.tsx`: drop `codeApiKeyForm` from the context type and provider. `codeInterpreter` toggle treated as always authenticated (sandbox auth is server-side).
- `ToolsDropdown.tsx`, `ToolDialogs.tsx`, `CodeInterpreter.tsx`, `RunCode.tsx`, `SidePanel/Agents/Code/Action.tsx` + `Form.tsx`: all API-key dialog trigger refs, "Configure code interpreter" gear buttons, and auth-verification plumbing removed. The "Code Interpreter" toggle is now a plain `AgentCapabilities.execute_code` checkbox — no key-entry gate.
- `client/src/locales/en/translation.json`: drop the three `com_ui_librechat_code_api*` keys and `com_ui_add_code_interpreter_api_key`. Other locales are externally automated per CLAUDE.md.
**Config:**
- `.env.example`: remove the `# LIBRECHAT_CODE_API_KEY=your-key` section and its header.
**Tests:**
- `crud.spec.js`: assertions flipped to pin "no X-API-Key header" and "no apiKey param".
- `skillFiles.spec.ts`: removed env-var save/restore; tests now pin that the batch-upload path is gated solely on `codeEnvAvailable` and that no apiKey is threaded through.
- `handlers.spec.ts`: same — just the `codeEnvAvailable` gate pins remain.
- `classification.spec.ts`: remove the two tests that asserted `loadAuthValues` was (not) called for PTC.
- `definitions.spec.ts`: drop every `loadAuthValues: mockLoadAuthValues` entry from the deps shape.
- `process.spec.js`: strip the mock of `EnvVar.CODE_API_KEY`.
**Comment hygiene:**
- `tools.ts`, `initialize.ts`, `registry/definitions.ts`: shortened stale comment references to "legacy `execute_code` tool" without naming the retired env var.
Tests verified: 678 packages/api tests pass, 836 backend api tests pass. Typecheck clean, lint clean.
Only remaining CODE_API_KEY mentions in the code are two regression-guard assertions:
- `crud.spec.js`: pins "no X-API-Key header" stays absent.
- `skillConfigurable.spec.ts`: pins `configurable` never grows a `codeApiKey` field.
* 🧹 chore: Remove the last two CODE_API_KEY name mentions in LibreChat
Follow-up to the prior full deprecation commit: two tests still named the retired identifier in their regression-guard assertions.
- `packages/api/src/agents/skillConfigurable.spec.ts`: drop the "does not inject a codeApiKey key" test. The `codeApiKey` field is gone from the production configurable shape, so an absence-assertion naming it re-introduces the retired identifier in code.
- `api/server/services/Files/Code/crud.spec.js`: rename the "without an X-API-Key header" case back to "should request stream response from the correct URL" and drop the `expect(headers).not.toHaveProperty('X-API-Key')` assertion. The surrounding request-shape checks (URL, timeout, responseType) still pin the behavior; the explicit header-absence line was named after the deprecated contract.
Result: `grep -rn "CODE_API_KEY\|codeApiKey\|LIBRECHAT_CODE_API_KEY"` against the LibreChat source tree returns zero hits. The only remaining `X-API-Key` strings in this repo are on unrelated OpenAPI Action + MCP server auth configurations, where the string is user-facing config, not a LibreChat-owned identifier.
Tests: 677 packages/api pass (2 pre-existing summarization e2e failures unrelated); 126 api-workspace controller/service tests pass. Typecheck and lint clean.
* 🎯 fix: Narrow codeEnvAvailable to per-agent (admin cap AND agent.tools)
Before this commit, `codeEnvAvailable` was computed in the three JS controllers as the admin-level capability flag only (`enabledCapabilities.has(AgentCapabilities.execute_code)`) and passed through `initializeAgent` → `injectSkillCatalog` / `primeInvokedSkills` / `enrichWithSkillConfigurable` unchanged. A skills-only agent whose `tools` array didn't include `execute_code` still got `bash_tool` registered (via `injectSkillCatalog`) and skill files re-primed to the sandbox on every turn — wrong, because the agent never opted in to code execution.
**Fix:** `initializeAgent` now computes the per-agent effective value once as `params.codeEnvAvailable === true && agent.tools.includes(Tools.execute_code)`, reuses the same boolean for:
1. The `execute_code` → `bash_tool + read_file` expansion gate (previously already consulted `agent.tools`; now shares the single `effectiveCodeEnvAvailable` binding).
2. The `injectSkillCatalog` call (previously got the raw admin flag).
3. The returned `InitializedAgent.codeEnvAvailable` field (new, typed as required boolean).
**Controllers (initialize.js, openai.js, responses.js):** store `primaryConfig.codeEnvAvailable` in `agentToolContexts.set(primaryId, ...)`, capture `config.codeEnvAvailable` in every handoff `onAgentInitialized` callback, and read it from the per-agent ctx inside the `toolExecuteOptions.loadTools` runtime closure. The hoisted `const codeEnvAvailable = enabledCapabilities.has(...)` locals in the two OpenAI-compat controllers are gone — they were shadowing the narrowed per-agent value.
**primeInvokedSkills:** `handlePrimeInvokedSkills` in `services/Endpoints/agents/initialize.js` now uses `primaryConfig.codeEnvAvailable` (per-agent, narrowed) instead of the raw admin flag. A skills-only primary agent won't re-prime historical skill files to the sandbox even when the admin enabled the capability globally.
**Efficiency:** one extra `&&` in `initializeAgent`. No runtime hot-path cost — the `includes()` scan on `agent.tools` was already happening for the `execute_code` expansion gate; it's now just bound to a local. Tool execution closures read `ctx.codeEnvAvailable === true` (property access + strict equality, O(1)).
**Ephemeral-agent note:** per-agent narrowing is authoritative for both persisted and ephemeral flows. The ephemeral toggle (`ephemeralAgent.execute_code`) is reconciled into `agent.tools` upstream in `packages/api/src/agents/added.ts`, so `agent.tools.includes('execute_code')` is the single source of truth by the time `initializeAgent` runs.
**Tests:** two new regression tests pin the narrowing contract:
- `initialize.test.ts` — four-quadrant matrix on `InitializedAgent.codeEnvAvailable` (cap on × agent asks, cap on × doesn't ask, cap off × asks, neither). Catches future refactors that drop either half of the AND.
- `skills.test.ts` — `injectSkillCatalog` with `codeEnvAvailable: false` against an active skill catalog must NOT register `bash_tool` even though it still registers `read_file` + `skill`. This is the state a skills-only agent gets post-narrowing.
All 191 affected packages/api tests pass + 836 backend api tests pass. Typecheck clean, lint clean.
* 🧽 refactor: Comprehensive-review polish — hoist tool defs, pin verifyToolAuth contract, doc appConfig
Addresses the comprehensive review of Phase 8. Findings mapped:
**#1 (MINOR): `verifyToolAuth` unconditional auth for execute_code**
- Added doc comment explicitly stating the deployment contract (admin capability → reachable sandbox; no per-check health probe to keep UI-gate queries O(1)).
- New `api/server/controllers/__tests__/tools.verifyToolAuth.spec.js` with 4 regression tests pinning the contract: 1. `authenticated: true` + `SYSTEM_DEFINED` for execute_code. 2. 404 for unknown tool IDs. 3. `loadAuthValues` is never consulted (catches a future revert that would resurface the per-user key-entry dialog). 4. Response `message` is never `USER_PROVIDED`.
**#2 (MINOR): `openai/service.ts` undocumented `appConfig` dependency**
- Expanded the `ChatCompletionDependencies.appConfig` JSDoc to spell out that omitting it silently disables code execution for agents with `execute_code` in their tools. External consumers of `createAgentChatCompletion` now have the contract documented at the type boundary.
**#5 (NIT): `registerCodeExecutionTools` re-allocates tool defs**
- Hoisted `READ_FILE_DEF` and `BASH_TOOL_DEF` to module-level `Object.freeze`d constants. The shapes derive entirely from static `@librechat/agents` exports, so a single frozen object per tool is safe to share across every agent init. Eliminates the ~4-property allocations on every call (including the common second-call no-op path).
**#6 (NIT): Verbose history-priming comment in initialize.js**
- Trimmed the 16-line `handlePrimeInvokedSkills` block to a 5-line summary with `@see InitializedAgent.codeEnvAvailable` pointer. The canonical narrowing explanation lives on the type; the controller comment is just the ACL-vs-capability rationale.
**Skipped:**
- #3 (memory spec tests a mirror function): reviewer self-dismissed as a design tradeoff; the enum-literal pin already catches the highest-risk drift vector.
- #4 (cross-repo contract for `createCodeExecutionTool`): user will explicitly install the latest `@librechat/agents` dev version once the companion PR publishes, so the version pin will be authoritative.
- #7 (migration/deprecation note for self-hosters): out of scope per user direction — release notes handle this.
Tests verified: 679 packages/api + 840 backend api tests pass. Typecheck + lint clean.
* 🔧 chore: Update @librechat/agents version to 3.1.68-dev.1 across package-lock and package.json files
This commit updates the version of the `@librechat/agents` package from `3.1.68-dev.0` to `3.1.68-dev.1` in the `package-lock.json` and relevant `package.json` files. This change ensures consistency across the project and incorporates any updates or fixes from the new version.
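A minimal sketch of the per-agent narrowing from the "Narrow codeEnvAvailable" commit above; shapes are assumed, only the AND of admin capability and agent opt-in follows the description:

```typescript
interface AgentLike {
  tools?: string[];
}

function effectiveCodeEnvAvailable(adminCapabilityOn: boolean | undefined, agent: AgentLike): boolean {
  // Both halves are required: the admin-level capability AND the agent's own opt-in.
  return adminCapabilityOn === true && (agent.tools ?? []).includes('execute_code');
}

// The same boolean then gates the execute_code -> bash_tool + read_file expansion,
// the injectSkillCatalog call, and the returned InitializedAgent.codeEnvAvailable field.
```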
||
|
|
dfc3dfa57f |
📍 feat: always-apply frontmatter: auto-prime skills every turn (#12746)
* 🔁 refactor: Rebase always-apply work onto merged structured-frontmatter columns Phase 6 (disable-model-invocation / user-invocable / allowed-tools) landed first on feat/agent-skills. Reconcile this branch with the new mainline: - Thread alwaysApplySkillPrimes through unionPrimeAllowedTools alongside manualSkillPrimes, applying the combined MAX_PRIMED_SKILLS_PER_TURN ceiling before loading tools. - Add `_id` to ResolvedAlwaysApplySkill to match Phase 6's ResolvedManualSkill shape (read_file name-collision protection). - Register 'always-apply' in ALLOWED_FRONTMATTER_KEYS / FRONTMATTER_KIND so Phase 6's validator recognizes it. - Drop frontmatter from the listSkillsByAccess projection; the backfill helper remains as defensive code but its read path is no longer exercised on summary rows (no legacy rows exist — the branch never shipped), saving ~200KB per page. - Retire the corresponding "backfills legacy on summaries" test. - Plumb listAlwaysApplySkills through the JS controllers + endpoint initializer so the always-apply resolver sees a real DB method. * 🧹 fix: Dedupe manual/always-apply overlap, share YAML util, tidy comments Addresses review findings: - Cross-list dedup: when a user $-invokes a skill that is also marked always-apply, the always-apply copy is now dropped so the same SKILL.md body never primes twice in one turn. Manual wins (explicit intent, closer to the user message). Dedup runs in both initializeAgent (so persisted user-bubble pills stay in sync) and injectSkillPrimes (defense-in-depth at splice time). New test cases cover single-overlap, partial-overlap, and dedup-before-cap. - DRY: extract stripYamlTrailingComment to packages/data-schemas/src/utils/yaml.ts; packages/api/src/skills/import.ts now imports the shared helper. Also drop the redundant inner stripYamlTrailingComment call inside parseBooleanScalar — the call site already strips. - Mark injectManualSkillPrimes as @deprecated in favor of injectSkillPrimes (kept for external consumers of @librechat/api). - Document SKILL_TRIGGER_MODEL as forward-looking plumbing for the model-invoked path rather than leaving it as a bare unused export. - Replace the stale "frontmatter is included" comment on listSkillsByAccess with an accurate explanation of why it was intentionally excluded. * 🔒 fix: Include always-apply primes in skillPrimedIdsByName + clear alwaysApply on body opt-out Two bugs flagged by Codex review: P1 (read_file): `manualSkillPrimedIdsByName` only carried manual-invocation primes, so an always-apply skill with `disable-model-invocation: true` was blocked from reading its own bundled files, and same-name collisions could resolve to a different doc than the one whose body got primed. - Rename `buildManualSkillPrimedIdsByName` → `buildSkillPrimedIdsByName` (accepts both manual + always-apply prime arrays). - Rename the configurable field `manualSkillPrimedIdsByName` → `skillPrimedIdsByName` throughout the plumbing (skillConfigurable.ts, handlers.ts, CJS callers, tests). - Overlap resolution: manual wins on the rare edge case where the same name appears in both arrays (upstream dedup should prevent this, but defensive merging treats manual as authoritative). - New tests: (1) gate-relaxation fires for always-apply primes, (2) `_id` pinning works for always-apply same-name collisions. P2 (updateSkill): when a body update had no `always-apply:` key, `extractAlwaysApplyFromBody` returned `absent` and the column was left untouched. 
A skill that was once `alwaysApply: true` would keep auto-priming even after its SKILL.md no longer declared the flag. - Treat `absent` as a positive "not always-apply" declaration when the body is explicitly submitted; flip the column to `false`. - Explicit top-level `alwaysApply` still wins (three-source precedence unchanged). - New tests: body removes key → false, body has no frontmatter at all → false, explicit + body-without-key → explicit wins. * 🧵 refactor: Collapse duplicate prime types + tighten parse + test hygiene Sanity-check review follow-ups: - Collapse `ResolvedManualSkill` / `ResolvedAlwaysApplySkill` into a single `ResolvedSkillPrime` canonical interface with two backward- compatible type aliases. Both resolvers feed the same pipeline stages (injectSkillPrimes, unionPrimeAllowedTools, buildSkillPrimedIdsByName); the per-source distinction lives on `additional_kwargs.trigger`, not on the resolver output. - Move the `always-apply` branch in `parseFrontmatter` to operate on the raw post-colon text. The outer `unquoteYaml` was fine today because it's idempotent on non-quoted strings, but running it twice (once per line, once after stripping the inline comment) would be fragile if the unquoter ever grows richer YAML-escape handling. - Add the missing `alwaysApplyDedupedFromManual: 0` field to the `injectSkillPrimes` mocks in `openai.spec.js` and `responses.unit.spec.js` so they match the full `InjectSkillPrimesResult` contract. - Insert the blank line between the `unionPrimeAllowedTools` and `resolveAlwaysApplySkills` describe blocks. * 🔧 fix(tsc): Cast mock.calls via `unknown` for strict tuple destructure `getSkillByName.mock.calls[0]` is typed as `[]` by jest's generic default; a direct cast to `[string, ..., ...]` fails TS2352 under `--noEmit` even though the runtime shape matches. Go through `as unknown as [...]` like the earlier test in the same file so CI's type-check step stays green. * 🪢 fix: Propagate skillPrimedIdsByName into handoff agent tool context Handoff agents go through the same `initializeAgent` flow as the primary (with `listAlwaysApplySkills` now plumbed), so they resolve their own `manualSkillPrimes` and `alwaysApplySkillPrimes` — but the `agentToolContexts.set(...)` for handoff agents didn't carry `skillPrimedIdsByName` into the per-agent context. That meant `handleReadFileCall` fell back to the full ACL set + a `prefer*` flag for handoff agents: same-name collisions could resolve to a different doc than the one whose body got primed, and a `disable-model-invocation: true` skill primed via manual `$` or always-apply inside the handoff flow would be blocked from reading its own bundled files. Build the map via `buildSkillPrimedIdsByName(config.manualSkillPrimes, config.alwaysApplySkillPrimes)` for every handoff tool context so `read_file` behaves identically across primary and handoff agents. |
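A rough sketch of the manual-wins cross-list dedup described above, assuming a simplified `ResolvedSkillPrime` shape keyed by `_id`; in the real pipeline the combined list then goes through the `MAX_PRIMED_SKILLS_PER_TURN` ceiling.

```ts
/** Reduced stand-in for the ResolvedSkillPrime shape named above. */
interface ResolvedSkillPrime {
  _id: string;
  name: string;
  body: string;
}

/**
 * Drop always-apply primes whose skill was also manually $-invoked this turn,
 * so the same SKILL.md body never primes twice. Manual wins: it is explicit
 * intent, closer to the user message.
 */
function dedupeAlwaysApplyPrimes(
  manual: ResolvedSkillPrime[],
  alwaysApply: ResolvedSkillPrime[],
): { kept: ResolvedSkillPrime[]; dedupedCount: number } {
  const manualIds = new Set(manual.map((prime) => prime._id));
  const kept = alwaysApply.filter((prime) => !manualIds.has(prime._id));
  return { kept, dedupedCount: alwaysApply.length - kept.length };
}
```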
||
|
|
539c4c7e4d |
🎬 feat: Prime Manually-Invoked Skills via $ Popover (#12709)
* 🎬 feat: Prime Manually-Invoked Skills via $ Popover Lands the backend for manual skill invocation, making the $ popover deterministically prime SKILL.md before the LLM turn instead of leaving the model to discover the skill via the catalog. Flow: popover drains pendingManualSkillsByConvoId on submit, attaches names to the ask payload, controllers forward to initializeAgent, and initialize resolves each name to its body (ACL + active-state filtered, reusing the same rules as catalog injection). AgentClient splices the primes as meta HumanMessages before the user's current message. - Extract primeManualSkill / resolveManualSkills in packages/api/src/agents/skills.ts and reuse primeManualSkill inside handleSkillToolCall for a single shape source. - Thread manualSkills + getSkillByName through InitializeAgentParams / DbMethods and all three initializeAgent call sites (initialize.js, responses.js, openai.js). - Splice HumanMessage primes in client.js chatCompletion after formatAgentMessages, shifting indexTokenCountMap so hydrate still fills fresh positions correctly. - Carry isMeta / source / skillName in additional_kwargs for downstream filtering. * 🛡️ fix: Scope manual skill primes to single-agent + cap resolver input Two follow-ups to the Phase 3 priming path flagged in Codex review. Multi-agent runs: skipping the splice when agentConfigs is non-empty. `initialMessages` is shared across every agent in `createRun`, so splicing a skill body there would bypass Phase 1's per-agent `scopeSkillIds` contract — a handoff / added-convo agent with a different skill scope would see content its configuration excludes. Warn + skip is the minimal correct behavior; lifting this to per-agent initial state is a follow-up. Input bounding: `resolveManualSkills` now truncates to `MAX_MANUAL_SKILLS` (10) after dedup, with a warn listing the dropped tail. Controllers only validate `Array.isArray(req.body.manualSkills)`, so a crafted payload could otherwise fan out into an unbounded `Promise.all` of concurrent `getSkillByName` DB lookups. Cap lives in the resolver so every caller (including future `always-apply` in Phase 5) inherits it. * 🧪 refactor: Testable Helpers + Payload Validation for Manual Skill Primes Follow-ups from the comprehensive review. No behavior change for the happy path — these are architectural and defensive improvements that shrink the JS surface in /api, tighten the request-body contract, and cover the delicate splice logic with proper unit tests. - Extract `injectManualSkillPrimes` into packages/api/src/agents/skills.ts so the message-array splice and `indexTokenCountMap` shift are unit- testable in TS. client.js now calls the helper. Tests pin the `>=` vs `>` boundary condition — a regression here would silently corrupt token accounting for every message after the insertion point. - Extract `extractManualSkills(body)` and use in all three controllers (initialize.js, responses.js, openai.js). Replaces copy-pasted `Array.isArray(...) ? ... : undefined` with a helper that also filters non-string / empty elements — closes a type-safety gap where a crafted payload like `{"manualSkills": [123, {"$gt":""}]}` would otherwise reach `getSkillByName` and waste DB round-trips. - Rename `primeManualSkill` → `buildSkillPrimeMessage`. The helper serves three invocation modes (`$` popover, `always-apply`, model-invoked); the old name misled readers coming from `handleSkillToolCall`. 
- Add `loadable.state === 'hasValue'` guard in `drainPendingManualSkills` — defensive, since the atom has a synchronous `[]` default, but the previous `.contents` cast would have been unsound under loading/error. - Document why `resolveManualSkills` honors the active-state filter even for explicit `$` selections (Phase 2 popover filter + API-direct hardening). - Remove stray `void Types;` in initialize.test.ts — `Types` is already consumed elsewhere in that test. * 🔖 refactor: Single source for the skill-message source marker Export `SKILL_MESSAGE_SOURCE = 'skill'` and use it in both construction paths that stamp skill-primed messages — `buildSkillPrimeMessage` (for the model-invoked tool path) and `injectManualSkillPrimes` (for the user-invoked splice path). Downstream filtering and telemetry read this marker, so the two paths must agree; keeping the literal in one place removes the risk of them drifting when Phase 5's `always-apply` adds a third caller. * ♻️ refactor: Drop Multi-Agent Guard + Review Polish - Remove the multi-agent skip in `AgentClient.chatCompletion`. Leaking primes to handoff / added-convo agents via shared `initialMessages` is the agents SDK's concern to scope; this layer should just inject and let the graph handle agent-scoped state. The guard was well-intended but produced a silent-drop UX where `$skill` in a multi-agent run did nothing. - Bound the `[resolveManualSkills] Truncating ...` warn output to the first 5 dropped names plus a count suffix. A malicious payload of 1000 names was previously spilling all ~990 names into the log line. - Remove dead `?? []` from the `hasValue`-guarded loadable read in `drainPendingManualSkills` — the atom always yields a string[] when resolved, so the nullish fallback was unreachable. - Reorder skills.ts imports to follow the style guide: value imports shortest-to-longest (`data-schemas` → `langchain/core/messages` → multi-line `@librechat/agents`), type imports longest-to-shortest. * 🧠 fix: Strip Skill Primes from Memory Window + Unbreak CI Mocks Two fixes after the last push. CI unbreak: `responses.unit.spec.js` and `openai.spec.js` mock `@librechat/api` and the mock didn't expose the new `extractManualSkills` symbol, so every test in those files crashed before reaching the `recordCollectedUsage` assertion. Added `extractManualSkills: jest.fn()` returning `undefined` to both mocks; the controllers now no-op on manualSkills as the tests expect. Codex P2: `runMemory` passes `messages` straight through to the memory processor, so after the splice in `injectManualSkillPrimes`, SKILL.md bodies ride along as if they were real user chat. That pollutes memory extraction with synthetic instruction content and crowds out real turns from the window. - Export `isSkillPrimeMessage(msg)` from `packages/api/src/agents/skills.ts` — a predicate keyed on the shared `SKILL_MESSAGE_SOURCE` marker. - Filter `chatMessages = messages.filter(m => !isSkillPrimeMessage(m))` at the top of `runMemory` before the window-sizing logic. Keeps the primes visible to the LLM (they still ride in `initialMessages`) but invisible to the memory layer. - 5 new tests for the predicate covering marker-present, plain messages, different source, non-object inputs, and array filter integration. * 📜 feat: Show Skill-Loaded Cards for Manually-Invoked Skills The $ popover was priming SKILL.md bodies into the turn but leaving no visible trace on the assistant response — from the user's view it looked like the `$name ` cosmetic text did nothing. 
Now each manually-invoked skill renders the same "Skill X loaded" tool-call card that model-invoked skills already produce via PR #12684's SkillCall renderer. Approach: post-run prepend to `this.contentParts`. The aggregator owns per-step indices during the run, so pre-seeding collides; waiting until `await runAgents(...)` returns lets the graph settle before synthetic parts slot in at the front. - Export `buildSkillPrimeContentParts(primes, { runId })` from `packages/api/src/agents/skills.ts`. Returns completed tool_call parts (`progress: 1`, args JSON-encoded with `{skillName}`, output matching the model-invoked path's wording) that the existing `SkillCall.tsx` renderer draws identically. - In `AgentClient.chatCompletion`, prepend the built parts to `this.contentParts` immediately after `await runAgents`. Persistence and the final-event reconcile come for free — `sendCompletion` already reads `this.contentParts` verbatim. - Card ordering: skills appear first in the assistant message, reflecting that priming ran before the LLM's turn. Live-during-streaming cards are a separate follow-up — the graph's index-based aggregator makes that a bigger lift and this change delivers the core UX win without fighting the stream ordering. 6 new unit tests covering part shape, args JSON contract, output text, unique IDs, empty input, and startOffset ID differentiation. * ⚡ feat: Emit Optimistic Skill Cards + Wire Primes in OpenAI/Responses Two follow-ups from testing. Optimistic card emit: the main chat path was only showing "Skill X loaded" cards at final-reconcile time, so the user saw nothing happen until the stream finished. Now emit synthetic ON_RUN_STEP + ON_RUN_STEP_COMPLETED events right before `runAgents` starts — same pattern the MCP OAuth flow uses in `ToolService` — so the cards appear immediately. The graph's content at index 0 may overwrite them during streaming, but the post-run `contentParts` prepend (unchanged) restores them on final reconcile. OpenAI + Responses parity: both controllers were resolving `manualSkillPrimes` via `initializeAgent` but never injecting them into `formattedMessages` before the run. Manual invocation silently did nothing on `/v1/chat/completions` and the Responses API path. Now both call `injectManualSkillPrimes` on the formatted messages so the model sees SKILL.md bodies on every path. LibreChat-style card SSE events don't apply to these OpenAI-shaped responses, so the live-emit is chat-path-only. - Export `buildSkillPrimeStepEvents(primes, { runId })` from `packages/api/src/agents/skills.ts`. Uses `Constants.USE_PRELIM_RESPONSE_MESSAGE_ID` by default so the frontend maps events to the in-flight preliminary response message, matching the OAuth emitter. - In `AgentClient.chatCompletion`, emit via `sendEvent` (or `GenerationJobManager.emitChunk` in resumable mode) after `injectManualSkillPrimes` runs, before the LLM turn begins. - Wire `injectManualSkillPrimes` into `openai.js` + `responses.js` after `formatAgentMessages`. Refactored the destructure to `let` on `indexTokenCountMap` so the injector's returned map is usable. - 8 new unit tests covering the step-event builder: pair cardinality, default/custom runId, TOOL_CALLS shape + JSON args, progress:1 on completion, index ordering, stepId/toolCallId pairing, empty input. * 🎯 fix: Route Skill Prime Events to the Real Response + Sparse-Array Offset Two bugs in the optimistic-card emit from the last pass. 1. Wrong runId. 
The events used `USE_PRELIM_RESPONSE_MESSAGE_ID` (the MCP OAuth pattern), but OAuth emits DURING tool loading — before the real response messageId exists. By the time skill priming fires, the graph is about to emit with `this.responseMessageId`, so the PRELIM runId orphaned every card onto the client's placeholder response entry in `messageMap`, separate from the one the LLM's events were building. Net effect: cards never rendered mid-stream. Now passing `this.responseMessageId` — the same ID `createRun` receives — so synthetic and real steps land on the same `messageMap` entry. 2. Index 0 collision. With the runId fixed, card-at-0 would have hit `updateContent`'s type-mismatch guard when the LLM's text delta arrived at the same index, suppressing the whole text stream. New `SKILL_PRIME_INDEX_OFFSET` = 100 placed on both the live SSE emit and the server-side `contentParts` assignment. Sparse array during streaming renders as `[llm_text, ..., card]` (skip-holes via `Array#filter` / `Array#map`). `filterMalformedContentParts` from `sendCompletion` compacts to dense `[text, card]` before persistence, so streaming UI and saved message agree on order — no finalize reorder jank. Post-run switches from `contentParts.unshift` to `contentParts[OFFSET + i] = part` to mirror the live placement. - Add `startIndex` option to `buildSkillPrimeStepEvents` with `SKILL_PRIME_INDEX_OFFSET` default. Export the constant from `@librechat/api` so `client.js` can reuse it for the post-run splice. - Update the existing index-ordering test to the new default and add a new test for the explicit `startIndex` override. * 🎗️ feat: Replace $skill-name Text with Pills on the User Message The `$skill-name ` cosmetic text the popover was inserting into the textarea had two problems: it lingered in the user message forever (the card is a more meaningful marker), and it implied that free-form text invocation like "$foo help me" should work — which it doesn't, and supporting it would mean another parsing layer nobody asked for. Dropped the textarea insertion. Visual confirmation after submit now comes from a compact `ManualSkillPills` row on the user bubble that self-extinguishes once the backend's live skill-card stream (`buildSkillPrimeStepEvents` from the last commit) populates the sibling assistant response. Multiple skills render as multiple pills — the atom was already a string array, so multi-select works for free. - `SkillsCommand.tsx`: select handler no longer writes to the textarea. Still drops the trigger `$` via `removeCharIfLast`, still pushes to `pendingManualSkillsByConvoId`, still flips `ephemeralAgent.skills`. - `families.ts`: new `attachedSkillsByMessageId` atomFamily keyed by user messageId. `useChatFunctions.ask` writes the drained skill list here on every fresh submit (regenerate/continue/edit still skip). - `ManualSkillPills.tsx` renders pills conditionally: hidden when the message isn't a user message, when no skills are attached, or when the sibling assistant response already carries a `skill` tool_call content part (the live card took over). Reads messages via React Query so we don't re-render on every message-state keystroke. - `Container.tsx` mounts the pills above the user message text, parallel to the existing `Files` slot. - Updated the SkillsCommand select-flow spec to assert the textarea is cleared of `$` instead of populated with `$name `.
5 new tests for `ManualSkillPills` covering empty state, non-user message guard, multi-skill rendering, the skill-card hide condition, and the text-only-content-doesn't-hide case. * 🎛️ feat: Manual Skills as Persisted Message Field + Compose-Time Chips Three problems with the previous pass: 1. Cards rendered BELOW the LLM text on the assistant message (and stayed there on reload) because the sparse index-100 offset put them after the model's content. Now back to `unshift` — cards at the top, same as before the live-emit detour. 2. Pills on the user message disappeared the moment the live card arrived, so users barely saw them. The live-emit channel also added meaningful complexity and relied on a per-message Recoil atom that had no clean cleanup story. 3. No visual cue at all during new-chat compose — the `$name ` text was removed, the submitted-message pills weren't there yet, and the popover closes after selection. User had no way to see what they'd queued up before sending. New architecture: `manualSkills` is a first-class field on `TMessage`, persisted by the backend on the user message. `ManualSkillPills` reads straight from `message.manualSkills` — no atom, no sibling-lookup — so pills survive reload, show in history, and stay for the lifetime of the message. Compose-time chips above the textarea read the existing `pendingManualSkillsByConvoId` atom and let users × skills out before submitting. Backend reverts: - `client.js`: dropped the `ON_RUN_STEP` live-emit loop, restored `this.contentParts.unshift(...primeParts)` so cards sit at the top of the persisted assistant response. - `skills.ts`: removed `buildSkillPrimeStepEvents` and `SKILL_PRIME_INDEX_OFFSET` (both unused now). `GraphEvents`, `StepTypes`, and `Constants` imports went with them. Removed 8 tests. Field persistence: - `tMessageSchema` gains `manualSkills: z.array(z.string()).optional()`. - Mongoose message schema gains `manualSkills: { type: [String] }` with matching `IMessage` TS field. - `BaseClient.js` reads `req.body.manualSkills` on user-message save, filters to non-empty strings, pins onto `userMessage` before `saveMessageToDatabase`. Mirrors the existing `files` pattern right above it. Runtime resolution still reads top-level `req.body.manualSkills` — persistence and resolution are separate concerns. Frontend: - `useChatFunctions.ask` sets `currentMsg.manualSkills` directly; the drained atom value goes onto the message, not a separate atom. Removed the `attachSkillsToMessage` Recoil callback. - `ManualSkillPills`: pure render of `message.manualSkills`. No more `useQueryClient`, no sibling scan, no atom read. Loses the auto-hide-when-card-arrives behavior — pills stay on the user bubble, cards live on the assistant bubble, both are informative. - Dropped the `attachedSkillsByMessageId` atomFamily and its export. - New `PendingManualSkillsChips` above the textarea reads the compose-time atom and renders chips with × to remove. Mounted in `ChatForm` right after `TextareaHeader`. Naturally hides on submit when the atom drains. Tests: updated `ManualSkillPills` suite to the new field-based reads (5 passing). New `PendingManualSkillsChips` suite covering empty state, multi-chip render, single × removal, and full-clear (4 passing). Backend suite trimmed to 89 (was 97) from the step-events test removal — no regressions on the remaining helpers. * 🧪 feat: Assistant-Side Skill-Loading Chips + Pill Padding Two small UX fixes on top of the field-on-message architecture. 
Pill padding: bumped the user-side `ManualSkillPills` from `py-0.5` to `py-1` on each chip and added `py-0.5` to the wrapper so the row breathes a little without feeling tall. Mid-stream indicator: new `InvokingSkillsIndicator` mirrors the parent user message's `manualSkills` onto the assistant bubble as transient "Running X" chips while the real card is in flight. Renders above `ContentParts` in `MessageParts`. Hides itself when the assistant's own `content` grows a `skill` tool_call — the authoritative card from `buildSkillPrimeContentParts.unshift` is showing, so the placeholder steps aside. No SSE emit, no aggregator injection, no index collision with the LLM's streaming content: just a render slot keyed off the parent's field. Why not stream the cards live: whichever content index we'd choose either blocks the LLM's text stream (`updateContent` type-mismatch at index 0) or lands below the response after sparse compaction (index 100+). Mirroring the parent field sidesteps the aggregator entirely and gives the user an immediate "skill is loading" signal that naturally gives way to the real card at finalize. Covers the gap the user flagged: pills on the user message said "I asked for these" but nothing on the assistant side said "we're working on it" until the stream finished. 5 new tests for the indicator: user-msg guard, missing parent-field guard, multi-chip render, hides-on-card-landing, orphan-parent guard. * 🔁 fix: Indicator Visibility + Carry Manual Skills Through Regenerate/Edit Two bugs. Indicator never rendered: `InvokingSkillsIndicator` looked up the parent user message via `queryClient.getQueryData([QueryKeys.messages, convoId])`, but on a new chat the React Query cache is keyed by `"new"` (the URL `paramId`) until the server assigns a real conversation ID — while `message.conversationId` on the assistant message is already the server ID. Lookup missed, `skills.length === 0`, nothing rendered. Switched to `useChatContext().getMessages()`, which reads from the same `paramId` the rest of the UI uses, so new-chat and existing-chat cases both resolve to the correct message list. Regenerate / save-and-submit dropped manual skills: the compose-time `pendingManualSkillsByConvoId` atom is drained on the first submit, so replaying that turn later found an empty atom and sent `manualSkills: []`. The pills were still on the user bubble, so from the user's point of view the model was running primed — but the backend saw nothing and produced an unprimed response. - Added `overrideManualSkills?: string[]` to `TOptions`. Callers with a reference message pass its persisted `manualSkills`; `useChatFunctions.ask` uses the override verbatim when present, otherwise falls back to the existing drain-or-empty logic. - `regenerate` in `useChatFunctions` passes `parentMessage.manualSkills` — the user message being regenerated has the field persisted by the backend, so the second turn primes the same skills as the first. - `EditMessage.resubmitMessage` covers both edit branches: - User-message save-and-submit: forwards the edited message's own `manualSkills` so the new sibling turn primes identically. - Assistant-response edit: forwards the parent user message's `manualSkills` for the same reason. Indicator test suite converted from `@tanstack/react-query` harness to a jest-mocked `useChatContext().getMessages()`. 6 tests (was 5), added a cache-miss case. 
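A sketch of the override-or-drain decision described above for `useChatFunctions.ask`; the option and helper names follow the commit text, but the surrounding hook wiring is omitted.

```ts
interface AskOptions {
  /** Persisted manualSkills from a reference message (regenerate, edit-and-resubmit). */
  overrideManualSkills?: string[];
}

/**
 * Pick the manual skills for a submission: honor the override verbatim when a
 * caller supplies one, otherwise drain the compose-time pending atom (which is
 * already empty on replays, hence the override).
 */
function resolveSubmissionSkills(
  options: AskOptions,
  drainPendingManualSkills: () => string[],
): string[] {
  if (Array.isArray(options.overrideManualSkills)) {
    return options.overrideManualSkills;
  }
  return drainPendingManualSkills();
}
```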
* 🧭 fix: Drive Mid-Stream Skill Chips from Submission Atom, Not Message Lookup Message-ID-keyed lookups kept racing the stream: the user message flips from its client-side intermediate UUID to the server-assigned ID mid-run, conversation IDs flip from the URL `paramId="new"` to the real convo ID on brand-new chats, and the React Query cache splits briefly between the two. Previous attempts — direct `queryClient.getQueryData` and then `useChatContext().getMessages()` — each missed a different window. `TSubmission.manualSkills` is already populated at `ask()` time and the submission atom (`store.submissionByIndex(index)`) is the single stable anchor across the whole lifecycle: set once at submit, lives through every SSE event, cleared when the stream ends. No ID lookups, no cache timing. - `InvokingSkillsIndicator` now reads `submissionByIndex(index)` via Recoil. Shows chips when: • the message is assistant-side, • a submission is in flight with non-empty `manualSkills`, • the assistant's `parentMessageId` matches `submission.userMessage.messageId` (so chips appear only on the bubble for the current turn, never on siblings), • the assistant's own content doesn't yet carry a `skill` tool_call (real card takes over from the server's post-run `contentParts.unshift`). - Drops the `useChatContext().getMessages()` dependency and the `useQueryClient` dependency before that. No more lookups by conversationId or messageId. Test suite now mocks `useChatContext` to supply `index: 0` and seeds the `submissionByIndex(0)` atom via Recoil initializer. 6 cases cover user-side, no-submission history, empty `manualSkills`, multi-chip render, hides-on-card-landing, and wrong-turn guard. * 🌱 fix: Seed Response manualSkills in createdHandler, Indicator Becomes Pure The mid-stream indicator kept getting wired off state I don't own: first `queryClient.getQueryData` (raced the new-chat paramId flip), then `useChatContext().getMessages()` (same cache, same race), then `useRecoilValue(submissionByIndex)` (pulled every message into the submission subscription — re-renders all indicators on any submission change, exactly the "limit hooks in rendering" concern). Cleanest path is the one the user pointed at: the submission owns the data, `useSSE` / `useEventHandlers` owns the save points, so seed the field ONTO the response message at the save site and let the indicator be a pure prop-read. - `createdHandler` now writes `manualSkills` onto the initial response from `submission.manualSkills` at the moment the placeholder enters the messages array. The field rides through the normal mutation pipeline via spreads (`useStepHandler` response creation, `updateContent` result returns) — no special handling needed. - `InvokingSkillsIndicator` drops the Recoil / context / queryClient reads. Pure function of `message`: if assistant, has `manualSkills`, and `content` hasn't grown a `skill` tool_call yet, render chips. Only `useLocalize` left, which was already unavoidable for the i18n string. - Renders decouple: no single state change (`submissionByIndex` flip, React Query cache update) forces every indicator in the message list to re-render anymore. Only the message whose prop changed re-runs. Finalize story unchanged: server's `responseMessage` doesn't carry the frontend-only `manualSkills` field, so `finalHandler`'s replacement drops it — but by then the real `skill` tool_call is in `content` and the indicator's content-scan hides itself anyway. 
Test suite back to pure prop mocks: 7 cases covering user-guard, no-seed, multi-chip render, skill-card-hide, non-skill-tool-call-keeps, text-only-keeps, and missing message. * 🪞 fix: Render Skill Indicator Inside ContentParts, Adjacent to Parts The indicator still wasn't showing because even though MessageParts mounted it as a sibling of ContentParts, ContentParts is a `memo`'d component that owns the only rendering path that refreshes in lockstep with content deltas. Mounting above it put the indicator one layer further out — reachable, but not exercised on the same render cycle that processes the streaming `message` prop. Moved the indicator into ContentParts itself, rendered at the top of both the sequential and parallel branches. Reads the `message` prop (newly threaded through as an optional prop alongside `content`), so: - Same render cycle as Parts — updates from the SSE pipeline flow through the same pathway. - Lives outside the `content.map`, so delta-driven content reshuffles never wipe it. - Still a pure prop-read inside the indicator itself (no Recoil, queryClient, context hooks). The only dep is `useLocalize`. Thread: - `ContentPartsProps` gains `message?: TMessage`. - `MessageParts` passes `message={message}` through, drops its own indicator mount + import. - `ContentParts` renders `<InvokingSkillsIndicator message={message} />` in both the parallel-content and sequential-content branches, right under `MemoryArtifacts` and before the empty-cursor / parts map. Companion data flow (unchanged): `createdHandler` seeds `initialResponse.manualSkills` from `submission.manualSkills`; the field rides through `useStepHandler` via spreads; indicator hides on `skill` tool_call landing in `content`. * 🔎 refactor: Narrow Skill Components to Scalar skills Prop, Kill Memo Churn Passing the full `message` object into presentational components busts `React.memo` shallow comparisons every time the message reference changes for unrelated reasons. Swap to scalar `skills?: string[]` throughout: - `InvokingSkillsIndicator`: props-only (`skills?: string[]`); visibility logic (user-vs-assistant, skill tool_call arrival) now lives in the caller so this stays pure presentational. - `ManualSkillPills`: props-only (`skills?: string[]`). - `ContentParts`: takes `manualSkills?: string[]` scalar, computes `showInvokingSkills` once per render from `manualSkills` + content scan for the `skill` tool_call, then mounts the indicator with `skills=` prop in both parallel and sequential branches. - `MessageParts`: passes `manualSkills={message.manualSkills}` through to `ContentParts`. - `Container`: passes `skills={message.manualSkills}` to `ManualSkillPills`. - Tests updated to exercise the narrowed prop surface. * 📜 feat: Mid-Stream Skill Cards via SkillCall, Drop Custom Indicator Instead of a separate `InvokingSkillsIndicator` chip component, render pending skill placeholders through the existing `SkillCall` renderer — same component the backend's finalized prime part uses. The loading visual (`progress < 1` + empty output → pulsing "Running X") and the completed visual ("Ran X") now come from one source of truth. `ContentParts` computes `pendingSkillNames` from `manualSkills` minus any `skill` tool_call already in `content` (dedupe by `args.skillName` since the synthetic's id differs from the real one). 
Those names render through a separate slot ABOVE the Parts iteration — not prepended to the content array, which would shift React keys on every downstream streaming text / tool part and force unmount/remount mid-stream. When the real prime `tool_call` lands at finalize (backend unshifts to content[0..]), `collectExistingSkillNames` picks it up, the pending set empties, and the real part takes over rendering in the Parts iteration. Layout is identical either way because primes are always at the top of content. - `InvokingSkillsIndicator.tsx` + test deleted (no longer referenced) - `ContentParts.tsx` renders `<SkillCall .../>` directly for pending names, mirrors `Part.tsx`'s usage of the same component - `createdHandler` doc comment updated to reflect the new flow * ✂️ fix: Render Interim Skill Cards From manualSkills Only, Leave Content Untouched Previous revision read `content` to de-dupe pending cards against real `skill` tool_calls, so any optimistic skill part streamed from the backend would race our placeholder off the screen mid-turn — exactly the "getting overridden" symptom. Now: interim `SkillCall` cards are driven purely by the response message's `manualSkills` field. `content` is never inspected here, so no backend delta can pull the cards down. The field is now seeded directly onto the assistant placeholder in `useChatFunctions` (not only in `createdHandler`) so the cards appear from the first render, before the `created` SSE event round-trip. Lifecycle: - `useChatFunctions` puts `manualSkills` on the freshly-minted `initialResponse` — cards render the instant the placeholder lands. - `createdHandler` keeps its own re-seed (idempotent; safe) so a regenerate / save-and-submit flow that hits that path still works. - `useStepHandler` spread operations preserve the field through every content update. - `finalHandler` replaces the message with the server-backed `responseMessage` (no `manualSkills`) — cards disappear, and the real `skill` tool_call part in `content` takes over. ContentParts changes: - Drop `collectExistingSkillNames` / `parseJsonField` dedupe path. - `renderPendingSkills` reads only `manualSkills` + `isCreatedByUser`. - Simpler control flow — one boolean (`hasPendingSkills`) gates the early return, one function renders. * 🩹 fix: Codex Review Resolutions — Localization, Guards, Tests, Docs Addresses seven findings from comprehensive code review: Finding 1 (MAJOR) — Document sticky re-priming as intentional - `buildSkillPrimeContentParts`: expanded doc comment explaining synthetic `skill` tool_calls persist and get re-primed on every subsequent turn via `extractInvokedSkillsFromPayload` (shape parity with model-invoked skills). This matches the UX: the assistant skill card is a visible, persistent signal that the skill is active for the conversation. Not a bug — called out explicitly so future maintainers don't mistake it for one. Finding 2 (MAJOR) — Add ContentParts render tests - New `ContentParts.test.tsx` with 7 cases covering the interim skill card logic: assistant-only rendering, user-message suppression, undefined-content safety, parallel+sequential branch integration, progress<1 (pending) state. Child components mocked so the test exercises only the branching and prop wiring ContentParts owns. Finding 3 (MINOR) — Localize hardcoded aria-labels - Added `com_ui_skills_manual_invoked` + `com_ui_skills_queued` keys. - Reused existing `com_ui_remove_skill_var` for the remove-button aria-label. 
- `PendingManualSkillsChips` and `ManualSkillPills` now call `useLocalize()`. Test mocks updated to the label-echo pattern. Finding 4 (MINOR) — Max-length guard in `extractManualSkills` - New `MAX_SKILL_NAME_LENGTH = 200` constant and filter. Blocks a crafted payload like `{ manualSkills: ['a'.repeat(100000)] }` from reaching `getSkillByName` / Mongo's query planner. Finding 5 (NIT) — `BaseClient.js` comment contradicted itself - Rewrote to call the filter what it is: defense-in-depth on top of Mongoose schema validation, not a redundant second layer. Finding 6 (NIT) — `ManualSkillPills` now wrapped in `React.memo` - Consistent with peer components (`PendingManualSkillsChips`, `ContentParts`). Rendered inside `Container`, which re-renders on every content update, so the memo is a real cycle savings. Finding 7 (NIT) — Redundant guard in `ContentParts.renderPendingSkills` - Collapsed the duplicate null-check by computing `pendingSkills` as a `useMemo`'d array (`[]` when not applicable), and mapping directly. `hasPendingSkills` now derives from the array length — one source of truth, no redundant gate inside the render function. * 🔧 fix: Update ParallelContent to Handle Optional Content Prop Modified the `ParallelContentRendererProps` to make the `content` prop optional, ensuring safer access within the component. Adjusted the calculation of `lastContentIdx` to handle cases where `content` may be undefined, preventing potential runtime errors. This change enhances the robustness of the component when dealing with varying message structures. * 🎯 fix: Thread manualSkills Through ContentRender — The Real Renderer This is why the interim skill cards never appeared across many rounds of iteration: `ContentRender.tsx` (the memo'd renderer used by most paths, including the agents endpoint) was calling `ContentParts` without the `manualSkills` prop. Only `MessageParts.tsx` had it wired up — and that's not the component that actually renders the assistant response in production. Two fixes: 1. Pass `manualSkills={msg.manualSkills}` to the `ContentParts` call. 2. Extend the `areContentRenderPropsEqual` memo comparator to include `manualSkills.length`, otherwise a message update that adds the field (seeded by `useChatFunctions` on the initialResponse) would be bailed out by the memo and never re-render. Verified the two ContentParts call sites are now consistent; Container usages for `ManualSkillPills` on the user side were already correct. * 🧹 polish: Address Audit Follow-Up (F1/F3/F6) F1 — Clarify sticky re-priming opt-out path. The previous comment said "regenerate without the pick" as one opt-out, but `useChatFunctions.regenerate` forwards the original picks via `overrideManualSkills`, so regeneration alone keeps the skill sticky. Updated to: edit the originating message to remove the pills and resubmit, or start a new conversation. F3 — Add DOM-order assertions to the parallel + sequential tests. The two "alongside" tests verified both elements existed but didn't pin the ordering contract. Both now use `compareDocumentPosition` to assert the pending SkillCall precedes the real content, matching the backend semantic (`contentParts.unshift(...primeParts)` puts primes at the top). F6 — Fix package import order in PendingManualSkillsChips. `recoil` (58 chars) was listed before `lucide-react` (45 chars) which violates the "shortest to longest after react" rule in AGENTS.md. Swapped order; no behavior change. 
F2 / F4 / F5 from the audit were confirmed as non-issues (React-safe empty map, cosmetic test-mock artifact, accepted memo tradeoff) and require no change. * ✨ feat: Dedicated PendingSkillCall + Running→Ran Transition on Real Content UX polish on the interim skill card now that it's actually rendering: 1. New `PendingSkillCall` component (mirrors `SkillCall` visually but drops the expand affordance). `SkillCall`'s underlying `ProgressText` always renders a chevron + clickable button when any input is present, which on a card with empty output points at nothing — misleading cursor:pointer and a no-op toggle. The pending variant has only the icon + label, no button wrapper, no chevron. 2. "Running X" → "Ran X" transition when real content lands. `ContentParts` computes `hasRealContent` (any non-text part, or a text part with non-empty content — placeholder empty-text parts don't count) and passes `loaded={hasRealContent}` to `PendingSkillCall`. Matches what users see for model-invoked skills as they finish priming: pulsing shimmer → static icon. 3. Cleanup: - Dropped direct `SkillCall` import from `ContentParts` (replaced by `PendingSkillCall`). `SkillCall` is still used by `Part` for real `skill` tool_call content parts — no behavior change there. - Removed the now-redundant explicit `manualSkills` assignment in `createdHandler`. `useChatFunctions` seeds the field on `initialResponse` at construction, so the `...submission.initialResponse` spread already carries it through — the re-assignment was defensive belt-and-suspenders doing the same work twice. Comment rewritten to describe the actual lifecycle. Tests updated to the new component (12/12 pass): two new cases pin the loaded-state transition (unloaded when content has no real parts, flips to loaded once a non-empty text part lands). |
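A sketch of the `hasRealContent` check that drives the Running to Ran transition described above, assuming a reduced content-part shape; the real `ContentParts` computes this over the streamed content array.

```ts
/** Reduced content-part shape: a text part or any other tool/tool_call part. */
interface ContentPart {
  type: string;
  text?: string;
}

/**
 * True once the assistant response carries any non-text part, or a text part
 * with non-empty content. Placeholder empty-text parts do not count, so the
 * pending card keeps its "Running" shimmer until real output arrives.
 */
function hasRealContent(content: ContentPart[] = []): boolean {
  return content.some((part) => {
    if (part.type !== 'text') {
      return true; // any tool call / non-text part counts as real content
    }
    return (part.text ?? '').trim().length > 0; // empty placeholder text does not
  });
}

// Usage in the render branch (sketch): <PendingSkillCall loaded={hasRealContent(content)} />
```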
||
|
|
9225a279eb |
🎚️ feat: Per-User Skill Active/Inactive Toggle with Ownership-Aware Defaults (#12692)
* feat: per-user skill active/inactive toggle with ownership-aware defaults - Add `skillStates` map (Record<string, boolean>) to user schema for per-user active/inactive overrides on skills - Add `defaultActiveOnShare` to interface.skills config (default: false) so admins can control whether shared skills auto-activate - Add GET/POST /api/user/settings/skills/active endpoints with validation - Add React Query hooks with optimistic mutations for skill states - Add useSkillActiveState hook with ownership-aware resolution: owned skills default active, shared skills default inactive - Add toggle switch UI to SkillListItem and SkillDetail components - Filter inactive skills in injectSkillCatalog before agent injection - Add localization keys for active/inactive labels * fix: use Record instead of Map for IUser.skillStates Mongoose .lean() flattens Map to a plain object, causing type incompatibility with IUser in methods that return lean documents. * fix: address review findings for skill active states - Fail-closed when userId is absent: filter rejects all shared skills instead of passing them through unfiltered (Codex P1) - Validate Mongoose Map key characters (reject . and $) in controller to return 400 instead of a 500 from schema validation (Codex P2) - Block toggle while initial skill states query is loading to prevent overwriting server-side overrides with an empty snapshot (Codex P2) - Extract shared SkillToggle component, eliminating duplicate toggle markup in SkillListItem and SkillDetail (Finding #3) - Move skill state query/mutation hooks from Favorites.ts to Skills/queries.ts per feature-directory convention (Finding #4) - Fix hardcoded English aria-label in SkillListItem by passing the localized string from the parent SkillList (Finding #5) - Fix inline arrow in SkillList render loop: pass stable callback reference so SkillListItem memo() is not invalidated (Finding #1) - Extract toRecord() helper in controller to DRY the Map-to-Object conversion (Finding #6) - Remove Promise.resolve wrapping synchronous config read (Finding #8) - Remove unused TUpdateSkillStatesRequest type (Finding #12) * fix: forward tabIndex on SkillToggle to preserve list keyboard nav The original inline toggle had tabIndex={-1} so the row itself remained the sole tab target. The extraction into SkillToggle dropped this prop, making every list toggle a tab stop. Add an optional tabIndex prop and pass -1 from SkillListItem. * fix: plumb skillStates to all agent entry points, isolate toggle keydown - Add skillStates/defaultActiveOnShare loading to openai.js and responses.js controllers so shared-skill activation is respected across all agent entry points, not just initialize.js (Codex P1) - Stop keydown propagation on SkillToggle so Enter/Space does not bubble to the parent row's navigation handler (Codex P2) * fix: paginate catalog fetch and serialize toggle writes - Paginate listSkillsByAccess (up to 10 pages of 100) until the active catalog quota is filled, so inactive shared skills in recent positions do not starve active owned skills past the first page (Codex P1) - Extend listSkillsByAccess interface with cursor/has_more/after for catalog pagination - Serialize skill-state writes via a ref queue: one in-flight request at a time, with the latest desired state sent when the previous one settles. 
Prevents last-response-wins races where an older request overwrites newer toggles (Codex P2) * fix: share write queue across hook instances, block toggle on fetch error - Move the write queue from a per-instance useRef to a module-scoped object so every mount of useSkillActiveState (SkillList, SkillDetail, etc.) serializes against the same in-flight slot. Prior per-instance queues allowed two components to race full-map POSTs (Codex P1) - Extend the toggle guard beyond isLoading: also block when isError is true or data is undefined. Prevents a failed GET from seeding a toggle with an empty baseline that would wipe server-side overrides on the next successful POST (Codex P1) * fix: stale closure, orphan cleanup, and cap-error UX - Read toggle baseline from React Query cache via queryClient.getQueryData instead of the captured skillStates closure. The closure can be stale between onMutate's setQueryData and the next render, so rapid successive toggles would build on old state and drop earlier changes (Codex P1) - Surface the MAX_SKILL_STATES_EXCEEDED error code with a specific toast key (com_ui_skill_states_limit) so users understand the 200-cap rather than seeing a generic error - Prune orphaned entries (skillIds whose Skill doc no longer exists) on both GET and POST in SkillStatesController. Self-heals over time without needing cascade-delete hooks or a migration job. Uses one indexed Skill._id query per request * test: pin skill active-state precedence with unit tests Extract the active-state resolution logic from a closure inside injectSkillCatalog into an exported resolveSkillActive helper, then cover every branch of the precedence matrix: - Fails closed when userId is absent (even with defaultActiveOnShare=true) - Explicit override wins over ownership and config (both true and false) - Owned skills default to active when no override is set - Shared skills default to defaultActiveOnShare value - Undefined skillStates behaves identically to an empty object - defaultActiveOnShare defaults to false when omitted - Owned skills ignore defaultActiveOnShare entirely Closes Finding #2 from the pre-rebase comprehensive review. Mirrors the existing scopeSkillIds test style; injectSkillCatalog now calls resolveSkillActive instead of inlining the closure. * refactor: limit skill active toggle to detail header, drop label - Remove the per-row toggle from SkillListItem and the active-state plumbing (hook call, isSkillEnabled/onToggleEnabled/toggleAriaLabel props) from SkillList. The detail view is now the single place to change a skill's active state - Drop dim/muted styling for inactive skills in the sidebar: without a control there, the visual indication has nowhere to land - Resize SkillToggle to match neighbor buttons: outer h-9 container, h-6 w-11 track with size-5 knob, no label span. The 'Active' / 'Inactive' text that accompanied the detail-view toggle is removed - Remove the now-unused label prop and tabIndex prop (the tabIndex existed only for the list-row context) from SkillToggle. Drop the onKeyDown stopPropagation for the same reason - Remove now-orphaned com_ui_skill_active / com_ui_skill_inactive translation keys * style: shrink SkillToggle track to h-5 w-9 with size-4 knob Container stays at h-9 to match neighbor button heights. The toggle track itself drops from h-6 w-11 to h-5 w-9, with a size-4 knob travelling 1.125rem on activation. Visually lighter inside the row. 
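A sketch of the serialized, latest-wins write queue described in the fixes above. The real hook keys queues per user and posts via the React Query mutation; the generic `post` callback here is a stand-in.

```ts
type SkillStates = Record<string, boolean>;

/** Module-scoped slot shared by every hook instance: one request in flight, latest snapshot queued. */
const queue: { inFlight: boolean; pending: SkillStates | null } = {
  inFlight: false,
  pending: null,
};

/**
 * Serialize full-map POSTs. If a request is already in flight, only replace
 * the pending snapshot; when the current request settles, the loop sends the
 * latest desired state, so older intermediate snapshots are dropped on purpose.
 */
async function enqueueSkillStatesWrite(
  next: SkillStates,
  post: (states: SkillStates) => Promise<void>,
): Promise<void> {
  queue.pending = next;
  if (queue.inFlight) {
    return;
  }
  queue.inFlight = true;
  try {
    while (queue.pending !== null) {
      const snapshot = queue.pending;
      queue.pending = null;
      await post(snapshot);
    }
  } finally {
    queue.inFlight = false;
  }
}
```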
* fix: remove redundant skillStates entries that match the resolved default When a toggle lands on the ownership/config default, delete the key from the map instead of persisting `{id: defaultValue}`. Without this, a user toggling a skill off and back on would leave `{id: true}` for an owned skill (whose default is already true), silently consuming a slot against the 200-entry cap. Repeated round-trip toggles could exhaust the quota with zero meaningful overrides (Codex P2). Preserves the exceptions-list invariant that the runtime-resolution design depends on. * fix: prune before enforcing skill-state cap; reject non-ObjectId keys Reorder the update controller so pruneOrphans runs before the 200-cap check. Without this, a user near the cap with some orphaned entries (skills deleted since their last GET) could send a payload that would pass after pruning but gets rejected by the raw-size check first. Add a sanity cap on raw payload size (2 * MAX_SKILL_STATES) so abusive inputs do not reach the DB query, and enforce the real cap on the pruned result instead. Harden pruneOrphans: the earlier early-return path could pass non-ObjectId keys through unchanged. Now only valid ObjectIds are returned, and the Skill-model-unavailable fallback filters by format. Also add isValidObjectIdString validation at the input boundary so malformed (but otherwise non-Mongo-unsafe) keys never reach persistence (Codex P2 x2). * fix: enforce active filter at execute time, prune revoked shares, scope queue per user P1: injectSkillCatalog now returns activeSkillIds (the filtered set that appears in the catalog). initializeAgent uses that set as the stored accessibleSkillIds on the initialized agent, so getSkillByName at runtime cannot resolve a deactivated skill — even if the LLM hallucinates a name or the user invokes by direct-invocation shorthand. Previously the executor authorized against the full ACL set, bypassing the active-state guarantee (Codex P1). P2: pruneOrphans now checks user access via findAccessibleResources in addition to skill existence. When a share is revoked, the user's skillStates entry for that skill had no cleanup path and silently consumed the 200-cap. Self-heals on both GET and POST. One extra ACL query per settings read/write; scoped to a single user so no N-user amplification (Codex P2). P2: the write queue moves from a single module-scoped object to a Map keyed by userId. Logout/login in the same tab can no longer flush the previous user's pending snapshot under the new session's auth. Each userId gets its own pending/inFlight slot; the in-flight request retains its original auth via the cookie already attached when sent, so the race window closes (Codex P2). * refactor: extract skillStates helpers to packages/api; add tests; polish Address the remaining valid findings from the comprehensive review: - Extract toRecord, loadSkillStates, validateSkillStatesPayload, and pruneOrphanSkillStates into packages/api/src/skills/skillStates.ts as TypeScript. The controller in /api shrinks to a ~90-line thin wrapper that builds live dependency adapters for Mongoose + the permission service (Review #2 DRY, #3 workspace boundary) - Replace the triplicated 12-line skillStates loading block in initialize.js, openai.js, and responses.js with a single call to loadSkillStates from @librechat/api. One helper, three sites - Swap console.error for the project logger in the controller (Review #7) - Remove the redundant INVALID_KEY_PATTERN regex: a valid ObjectId cannot contain . 
or $, so isValidObjectIdString already covers it (Review #11) - Parameterize the 200-cap error toast with {{0}} interpolation driven by the error response's `limit` field, so future changes to MAX_SKILL_STATES update the UI message automatically (Review #12) - Add 24 unit tests for the new skillStates helpers (toRecord, resolveDefaultActiveOnShare, loadSkillStates, validateSkillStates- Payload, pruneOrphanSkillStates) covering success paths, malformed input, cap boundaries, and parallel-query behavior (Review #4) - Add 10 tests for injectSkillCatalog pagination covering empty accessible set, missing listSkillsByAccess, single-page filter, owned-vs-shared defaults, explicit-override precedence, multi-page collection, MAX_CATALOG_PAGES safety cap, early termination on has_more=false, additional_instructions injection, and fail-closed without userId (Review #5) Total test count: 60 (was 26 on this surface). * fix: rename skillStates ValidationError to avoid barrel-export collision packages/api/src/types/error.ts already exports a ValidationError (MongooseError extension). Re-exporting a different shape from skills/skillStates.ts through the skills barrel caused TS2308 in CI because the root index re-exports both. Rename to SkillStatesValidationError to keep the exports disjoint. * refactor: tighten tests and absorb caller guard into loadSkillStates Address the followup review findings: - Add optional `accessibleSkillIds` param to loadSkillStates so the helper short-circuits to defaults when no skills are accessible. All three controllers drop the residual 7-line conditional wrapper in favor of a single destructured call (Review #2) - Remove the unreachable `typeof key !== 'string'` check from validateSkillStatesPayload: Object.entries always yields string keys per the JS spec (Review #3) - Replace the two `as unknown as` agent casts in the injectSkillCatalog tests with a `makeAgent()` factory typed directly as the function's parameter shape (Review #4) - Tighten the MAX_CATALOG_PAGES assertion from `toBeLessThanOrEqual(11)` to `toHaveBeenCalledTimes(10)` — the loop deterministically makes exactly 10 page fetches before hitting the cap (Review #1) - Rewrite the parallel-execution test for pruneOrphanSkillStates using deferred promises instead of microtask-order assertions. The test now inspects `toHaveBeenCalledTimes(1)` on both mocks after a single Promise.resolve() yield, pinning Promise.all usage without relying on push-order into a shared array (Review #5) - Evict stale writeQueue entries on user change via a module-scoped `lastSeenUserId` sentinel. When a different user's toggle is the first one after a logout/login, the previous user's queue entry is deleted. Keeps the Map bounded without adding hook-instance effect cleanup (Review #6) * fix(test): mock loadSkillStates in openai and responses controller specs The prior refactor replaced the inline 12-line skillStates loading block with a call to loadSkillStates from @librechat/api. Both controller spec files mock @librechat/api as a flat object, so any new named import from that package is undefined in the test env. Calling `await loadSkillStates(...)` threw before recordCollectedUsage ran, surfacing as "undefined is not iterable" on the test's array destructure of `mockRecordCollectedUsage.mock.calls[0]`. Add the missing mock to both spec files alongside the existing scopeSkillIds stub. 
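A sketch of the precedence matrix the `resolveSkillActive` tests pin above; the parameter shape is illustrative, and the real helper may receive the skill document rather than bare ids.

```ts
interface ResolveSkillActiveInput {
  /** Absent userId fails closed: nothing resolves as active. */
  userId?: string;
  ownerId: string;
  skillId: string;
  /** Per-user overrides from user.skillStates. */
  skillStates?: Record<string, boolean>;
  /** interface.skills.defaultActiveOnShare; defaults to false. */
  defaultActiveOnShare?: boolean;
}

/** Precedence: explicit override > ownership (owned => active) > defaultActiveOnShare. */
function resolveSkillActive({
  userId,
  ownerId,
  skillId,
  skillStates,
  defaultActiveOnShare = false,
}: ResolveSkillActiveInput): boolean {
  if (!userId) {
    return false;
  }
  const override = skillStates?.[skillId];
  if (typeof override === 'boolean') {
    return override;
  }
  return ownerId === userId ? true : defaultActiveOnShare;
}
```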
* fix: abandon stale skillStates write queues on user switch Close the cross-session leak window where an in-flight flush loop still holds a reference to a previous user's queue: it could fire its next mutateAsync under the new session's auth cookies and persist the stale snapshot to the new user's document (Codex P1). Add an `abandoned` flag on `WriteQueue`. Three mechanisms cooperate: - `getWriteQueue` marks every non-active queue abandoned when the user differs from the last-seen identity (pre-existing eviction site, now more aggressive). - A `useEffect` on `userId` calls the same abandonment pass on every render with a new active identity, covering the window between logout/login and the new user's first toggle (when `getWriteQueue` would otherwise not fire). - The flush loop checks `!queue.abandoned` in its while condition so the second and later iterations exit without firing another `mutateAsync` after the session changes. The first iteration's in-flight request (already dispatched under the original user's cookies) still runs to completion or failure on its own — only the subsequent iterations, which are the dangerous ones, are blocked. |
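A sketch of the abandonment pass described above, assuming the per-user queue map from the earlier fix; the request already in flight settles on its own, and only later flush iterations are blocked.

```ts
interface UserWriteQueue {
  pending: Record<string, boolean> | null;
  inFlight: boolean;
  abandoned: boolean;
}

/** One queue slot per userId, as in the earlier per-user Map fix. */
const queuesByUser = new Map<string, UserWriteQueue>();

/**
 * When the active identity changes, mark every other user's queue abandoned.
 * The flush loop's condition (pending !== null && !abandoned) then exits before
 * firing another request under the new session's cookies.
 */
function abandonStaleQueues(activeUserId: string): void {
  for (const [userId, userQueue] of queuesByUser) {
    if (userId !== activeUserId) {
      userQueue.abandoned = true;
    }
  }
}
```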
||
|
|
3e064c2f2b |
🎯 feat: Per-Agent Skill Selection in Builder and Runtime Scoping (#12689)
* feat: per-agent skill selection in builder and runtime scoping
Wire skills persistence on the Agent model and enable the skills
section in the agents builder panel. At runtime, scope the skill
catalog to only the skills configured on each agent (intersected
with user ACL). When no skills are configured, the full user catalog
is used as the default. The ephemeral chat toggle overrides per-agent
scoping to provide the full catalog.
* fix: add scopeSkillIds to @librechat/api mock in responses unit test
The test mocks @librechat/api but was missing the newly imported
scopeSkillIds, causing createResponse to throw before reaching the
assertions. Added a passthrough mock that returns the input array.
* fix: scope primeInvokedSkills by agent's configured skills
primeInvokedSkills was receiving the full unscoped accessibleSkillIds,
bypassing the per-agent skill scoping applied to initializeAgent. This
allowed previously invoked skills from message history to be resolved
and primed even when excluded from the agent's configured skill set.
Apply the same scopeSkillIds filtering to match the initializeAgent
calls, so skill resolution is consistent across catalog injection
and history priming.
* fix: preserve agent skills through form reset and union prime scope
Two related bugs in the per-agent skill selection flow:
1. resetAgentForm dropped the persisted skills array because the generic
fall-through at the end of the loop excludes object/array values.
Combined with composeAgentUpdatePayload always emitting skills, this
caused any save of a previously-configured agent to silently overwrite
skills with an empty array. Add an explicit case for skills mirroring
the agent_ids handling.
2. primeInvokedSkills processes the full conversation payload, including
prior handoff-agent invocations. Scoping it to only primaryAgent.skills
meant a skill invoked by a handoff agent in a prior turn could not be
resolved when the current primary agent had a different scope, leaving
message history reconstruction incomplete. Union the per-agent scoped
accessibleSkillIds across primary plus all loaded handoff agents so
any skill any active agent could invoke is resolvable from history.
* fix: mark inline skill removals as dirty
The inline X button on the skills list called setValue without
shouldDirty: true, so removing a skill via this control did not
mark the skills field as dirty in react-hook-form state. When a
user removed a skill with the X button and also staged an avatar
upload in the same save, isAvatarUploadOnlyDirty returned true and
onSubmit short-circuited to avatar-only upload, silently dropping
the PATCH that would persist the skill removal.
The dialog path (SkillSelectDialog) already passes shouldDirty: true
on add/remove; this aligns the inline control with that behavior.
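A minimal react-hook-form sketch of the aligned behavior; the form shape and hook wiring are illustrative, only the `shouldDirty` option is the point:

```ts
import { useFormContext } from 'react-hook-form';

type AgentForm = { skills?: string[] };

// Illustrative inline-removal handler: without shouldDirty, formState treats
// the field as pristine and an avatar-only save path can short-circuit past
// the skills PATCH.
function useRemoveSkillInline() {
  const { getValues, setValue } = useFormContext<AgentForm>();
  return (removedId: string) => {
    const next = (getValues('skills') ?? []).filter((id) => id !== removedId);
    setValue('skills', next, { shouldDirty: true });
  };
}
```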
* fix: restore full ACL scope for primeInvokedSkills history reconstruction
Reverting the earlier scoping of primeInvokedSkills to the active-agent
union. That change conflated runtime invocation scoping (which correctly
gates what the model can call now) with history reconstruction (which
restores bodies the model already saw in prior turns).
Per-agent scoping still applies at:
- Catalog injection (injectSkillCatalog via initializeAgent)
- Runtime invocation (handleSkillToolCall via enrichWithSkillConfigurable,
using each agent's scoped accessibleSkillIds in agentToolContexts)
History priming is a read of past context, not a grant of new capability.
Scoping it causes historical skill bodies to vanish from formatAgentMessages
when an agent's skills list is edited mid-conversation or when the ephemeral
toggle flips, which breaks message reconstruction and drops code-env file
continuity for /mnt/data/{skillName}/ references. The user's ACL-accessible
set is the correct and sufficient gate for history reconstruction.
* fix: close openai.js skill gap and pin undefined vs [] semantics
Three related gaps surfaced in review:
1. api/server/controllers/agents/openai.js was a third skill resolution
site alongside responses.js and initialize.js, but still used the old
activation gate (required ephemeralAgent.skills === true) and never
passed accessibleSkillIds through scopeSkillIds. Per-agent scoping
silently did not apply on this route. Mirror the same pattern used
in responses.js so all three routes behave identically.
2. scopeSkillIds previously collapsed undefined and [] into the same
"full catalog" fallback, making it impossible for a user to express
"this agent has no skills." Tighten the semantics before any data
is written under the old behavior:
- undefined / null = not configured, full catalog
- [] = explicitly none, returns []
- non-empty = intersection with ACL-accessible set
Update defaultAgentFormValues.skills from [] to undefined so a brand
new agent whose skills UI was never touched does not accidentally
persist "explicit none" on first save (removeNullishValues strips
undefined from the payload server side).
3. Add direct unit tests for scopeSkillIds covering all cases (undefined,
null, empty, disjoint, overlap, exact match, empty accessible set).
16 tests total in skills.test.ts pass.
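A sketch of the semantics pinned in point 2, assuming the helper takes the agent's configured list plus the ACL-accessible list (the actual signature may differ):

```ts
// Tightened semantics: undefined/null, [], and non-empty are three distinct cases.
function scopeSkillIds(
  configured: string[] | null | undefined,
  accessible: string[],
): string[] {
  if (configured == null) {
    return accessible; // not configured: full ACL-accessible catalog
  }
  if (configured.length === 0) {
    return []; // explicitly none
  }
  const allowed = new Set(accessible);
  return configured.filter((id) => allowed.has(id)); // intersection with ACL
}
```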
* fix: add scopeSkillIds to @librechat/api mock in openai unit test
Same pattern as the earlier responses.unit.spec.js fix: the test mocks
@librechat/api with an explicit object, so each newly imported symbol
must be added to the mock. Without scopeSkillIds, OpenAIChatCompletion
controller throws on destructuring before reaching recordCollectedUsage,
causing the token usage assertions to fail.
|
||
|
|
64ec5f18b8 |
⚙️ feat: Skill runtime integration: catalog, tools, execution, file priming (#12649)
* feat: Skill runtime integration — catalog injection, tool registration, execute handler
Wires the @librechat/agents SkillTool primitive into LibreChat's agent runtime:
**Enums:**
- Add `skills` to AgentCapabilities + defaultAgentCapabilities
**Data layer:**
- Add `getSkillByName(name, accessibleIds)` — compound query that
combines name lookup + ACL check in one findOne
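A hedged sketch of what such a compound Mongoose query could look like; the document shape and model injection are illustrative, not the real data layer:

```ts
import type { Model, Types } from 'mongoose';

// Illustrative document shape; LibreChat's actual skill schema differs.
interface SkillDoc {
  name: string;
  body: string;
}

// Name lookup + ACL check combined in a single findOne: a skill outside the
// accessible id set simply returns null, same as a missing skill.
async function getSkillByName(
  Skill: Model<SkillDoc>,
  name: string,
  accessibleIds: Types.ObjectId[],
) {
  return Skill.findOne({ name, _id: { $in: accessibleIds } }).lean();
}
```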
**Agent initialization (packages/api/src/agents/initialize.ts):**
- Accept `accessibleSkillIds` param and `listSkillsByAccess` db method
- Query accessible skills, format catalog via `formatSkillCatalog()`,
append to `additional_instructions` (appears in agent system prompt)
- Register `SkillToolDefinition` + `createSkillTool()` when catalog
is non-empty (tool appears in model's tool list)
- Store `accessibleSkillIds` and `skillCount` on InitializedAgent
**Execute handler (packages/api/src/agents/handlers.ts):**
- Add `getSkillByName` to `ToolExecuteOptions`
- `handleSkillToolCall()` intercepts `Constants.SKILL_TOOL`:
extracts skillName, loads body from DB with ACL check,
substitutes $ARGUMENTS, returns ToolExecuteResult with
injectedMessages (skill body as isMeta user message)
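A simplified sketch of that interception path; type names echo the description above but are not the exact LibreChat definitions:

```ts
// Assumed result shape: the skill body travels back as an isMeta user message.
interface ToolExecuteResult {
  result: string;
  injectedMessages?: Array<{ role: 'user'; content: string; isMeta: true }>;
}

async function handleSkillToolCall(
  args: { skillName: string; arguments?: string },
  getSkillByName: (name: string) => Promise<{ body: string } | null>,
): Promise<ToolExecuteResult> {
  // ACL-checked load (see getSkillByName above).
  const skill = await getSkillByName(args.skillName);
  if (!skill) {
    return { result: `Skill "${args.skillName}" not found or not accessible.` };
  }
  // Substitute $ARGUMENTS, then inject the body for the next model turn.
  const body = skill.body.replaceAll('$ARGUMENTS', args.arguments ?? '');
  return {
    result: `Loaded skill: ${args.skillName}`,
    injectedMessages: [{ role: 'user', content: body, isMeta: true }],
  };
}
```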
**Caller wiring:**
- initialize.js: query skill IDs via findAccessibleResources,
pass to initializeAgent + store on agentToolContexts,
add getSkillByName to toolExecuteOptions,
pass accessibleSkillIds through loadTools configurable
- openai.js + responses.js: same pattern for their flows
Requires @librechat/agents >= 3.1.65 (PR #91 exports).
* feat: Skills toggle in tools menu + backend capability gating
Frontend:
- Add skills?: boolean to TEphemeralAgent type
- Add LAST_SKILLS_TOGGLE_ to LocalStorageKeys for persistence
- Add skillsEnabled to useAgentCapabilities hook
- Add skills useToolToggle to BadgeRowContext with localStorage init
- New Skills.tsx badge component (Scroll icon, cyan theme,
permission-gated via PermissionTypes.SKILLS)
- Add skills entry to ToolsDropdown with toggle + pin
- Render Skills badge in BadgeRow ephemeral section
Backend:
- Extract injectSkillCatalog() into packages/api/src/agents/skills.ts
(reduces initializeAgent module size, reusable helper)
- initializeAgent delegates to helper instead of inline block
- Capability-gate the findAccessibleResources query:
- Agents endpoint: checks AgentCapabilities.skills in admin config
- OpenAI/Responses controllers: checks ephemeralAgent.skills toggle
- ACL query runs once per run, result shared across all agents
* refactor: remove createSkillTool() instance from injectSkillCatalog
SkillTool is event-driven only. The tool definition in toolDefinitions
is sufficient for the LLM to see the tool schema. No tool instance is
needed since the host handler intercepts via ON_TOOL_EXECUTE before
tool.invoke() is ever called.
Removes tools from InjectSkillCatalogParams/Result, drops the
createSkillTool import.
* feat: skill file priming, bash tool, and invoked skills state
Multi-file skill support:
- New primeSkillFiles() helper (packages/api/src/agents/skillFiles.ts)
uploads skill files + SKILL.md body to code execution environment
- handleSkillToolCall primes files on invocation when skill.fileCount > 0,
returns session info as artifact so ToolNode stores the session
- Skill-primed files available to subsequent bash/code tool calls
Bash tool auto-registration:
- BashExecutionToolDefinition added alongside SkillToolDefinition when
skills are enabled, giving the model a bash tool for running scripts
Conversation state:
- Add invokedSkillIds field to conversation schema (Mongoose + Zod)
- handleSkillToolCall updates conversation with $addToSet on success
- Enables re-priming skill files on subsequent runs (future)
Dependency wiring:
- Pass listSkillFiles, getStrategyFunctions, uploadCodeEnvFile,
updateConversation through ToolExecuteOptions
- Pass req and codeApiKey through mergedConfigurable
- All three controller entry points wired (initialize.js, openai.js,
responses.js)
* fix: load bash_tool instance in loadToolsForExecution, remove file listing
- Add createBashExecutionTool to loadToolsForExecution alongside PTC/ToolSearch
pattern: loads CODE_API_KEY, creates bash tool instance on demand
- Add BASH_TOOL and SKILL_TOOL to specialToolNames set so they don't go
through the generic loadTools path (bash is created here, skill is
intercepted in handler before tool.invoke)
- Remove file name listing from skill content text — it's the skill
author's responsibility to disclose files in SKILL.md, not the framework
* feat: batch upload for skill files, replace sequential uploads
- Add batchUploadCodeEnvFiles() to crud.js: single POST to /upload/batch
with all files in one multipart request, returns shared session_id
- Rewrite primeSkillFiles to collect all streams (SKILL.md + bundled files)
then do one batch upload instead of N sequential uploads
- Replace uploadCodeEnvFile with batchUploadCodeEnvFiles across all callers
(handlers.ts, initialize.js, openai.js, responses.js)
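A sketch of the batch call under stated assumptions: only the `/upload/batch` path and the shared session_id come from this entry; the auth header, form field name, and response fields are guesses:

```ts
// One multipart POST for all files; every entry shares the returned session_id.
async function batchUploadCodeEnvFiles(
  baseUrl: string,
  apiKey: string,
  files: Array<{ name: string; blob: Blob }>,
): Promise<{ session_id: string; files: Array<{ fileId: string; name: string }> }> {
  const form = new FormData();
  for (const file of files) {
    form.append('files', file.blob, file.name); // all files, one request body
  }
  const res = await fetch(`${baseUrl}/upload/batch`, {
    method: 'POST',
    headers: { 'x-api-key': apiKey }, // assumed auth scheme
    body: form,
  });
  if (!res.ok) {
    throw new Error(`Batch upload failed: ${res.status}`);
  }
  return res.json(); // assumed response shape
}
```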
* refactor: remove invokedSkillIds from conversation schema
Skills aren't re-loaded between runs, so conversation-level state for
invoked skills doesn't help. Skill state will live on messages instead
(like tool_search discoveredTools and summaries), enabling in-place
re-injection on follow-up runs.
Removes invokedSkillIds from: convo Mongoose schema, IConversation
interface, Zod schema, ToolExecuteOptions.updateConversation, and
all three caller wiring points.
* feat: smart skill file re-priming with session freshness checking
Schema:
- Add codeEnvIdentifier field to ISkillFile (type + Mongoose schema)
- Add updateSkillFileCodeEnvIds batch method (uses tenantSafeBulkWrite)
- Export checkIfActive from Code/process.js
Extraction:
- Add extractInvokedSkillsFromHistory() to run.ts — scans message
history for AIMessage tool_calls where name === 'skill', extracts
skillName args. Follows same pattern as extractDiscoveredToolsFromHistory.
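A sketch of that scan over LangChain messages, assuming the skill tool constant resolves to the literal name 'skill':

```ts
import { AIMessage, type BaseMessage } from '@langchain/core/messages';

// Walk the history, collecting skillName args from skill tool_calls.
function extractInvokedSkillsFromHistory(history: BaseMessage[]): Set<string> {
  const invoked = new Set<string>();
  for (const message of history) {
    if (!(message instanceof AIMessage) || !message.tool_calls) {
      continue;
    }
    for (const call of message.tool_calls) {
      if (call.name === 'skill' && typeof call.args?.skillName === 'string') {
        invoked.add(call.args.skillName); // Set dedupes repeat invocations
      }
    }
  }
  return invoked;
}
```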
Smart re-priming in primeSkillFiles:
- Before batch uploading, checks if existing codeEnvIdentifiers are
still active via getSessionInfo + checkIfActive (23h threshold)
- If session is still active, returns cached references (zero uploads)
- If stale or missing, batch-uploads everything and persists new
identifiers on SkillFile documents (fire-and-forget)
- Single session check covers all files (batch shares one session_id)
Wiring:
- Pass getSessionInfo, checkIfActive, updateSkillFileCodeEnvIds
through ToolExecuteOptions and all three controller entry points
* feat: wire skill file re-priming at run start via initialSessions
Flow:
1. initialize.js creates primeInvokedSkills callback with all deps
2. client.js calls it with message history before createRun
3. extractInvokedSkillsFromHistory scans for skill tool calls
4. For each invoked skill with files, primeSkillFiles uploads/checks
5. Returns initialSessions map passed to createRun
6. createRun passes initialSessions to Run.create (via RunConfig)
7. Run constructor seeds Graph.sessions, making skill files available
to subsequent bash/code tool calls via ToolNode session injection
Requires @librechat/agents with initialSessions on RunConfig (PR #94).
* refactor: use CODE_EXECUTION_TOOLS set for code tool checks
Import CODE_EXECUTION_TOOLS from @librechat/agents and replace inline
constant checks in handlers.ts and callbacks.js. Fixes missing bash
tool coverage in the session context injection (handlers.ts) and code
output processing (callbacks.js).
* refactor: move primeInvokedSkills to packages/api, add skill body re-injection
Moves primeInvokedSkills from an inline closure in initialize.js (with
dynamic requires) to a proper exported function in packages/api
skillFiles.ts with explicit typed dependencies.
Key changes:
- primeInvokedSkills now returns both initialSessions (for file priming)
AND injectedMessages (skill bodies for context continuity)
- createRun accepts invokedSkillMessages and appends skill bodies to
systemContent so the model retains skill instructions across runs
- initialize.js calls the packaged function with all deps passed explicitly
- client.js passes both initialSessions and injectedMessages to createRun
* fix: move dynamic requires to top-level module imports
Move primeInvokedSkills, getStrategyFunctions, batchUploadCodeEnvFiles,
getSessionInfo, and checkIfActive from inline requires to top-level
module requires where they belong.
* refactor: skill body reconstruction via formatAgentMessages, not systemContent
Replaces the lazy systemContent approach with proper message-level
reconstruction:
SDK (formatAgentMessages):
- New invokedSkillBodies param (Map<string, string>)
- Reconstructs HumanMessages after skill ToolMessages at the correct
position in the message sequence, matching where ToolNode originally
injected them
LibreChat:
- extractInvokedSkillsFromPayload replaces extractInvokedSkillsFromHistory
(works with raw TPayload before formatAgentMessages, not BaseMessage[])
- primeInvokedSkills now takes payload instead of messages, returns
skillBodies Map instead of injectedMessages
- client.js calls primeInvokedSkills BEFORE formatAgentMessages, passes
skillBodies through as the 4th param
- Removed invokedSkillMessages from createRun (no more systemContent hack)
- Single-pass: skill detection happens inside formatAgentMessages' existing
tool_call processing loop, zero extra message iterations
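Shown below as a standalone pass for clarity (the SDK folds this into formatAgentMessages' existing tool_call loop, so no extra iteration happens in practice); message classes are from @langchain/core, everything else is an assumed shape:

```ts
import {
  AIMessage,
  HumanMessage,
  ToolMessage,
  type BaseMessage,
} from '@langchain/core/messages';

// skills maps skillName -> SKILL.md body, as extracted before formatting.
function reinjectSkillBodies(
  messages: BaseMessage[],
  skills: Map<string, string>,
): BaseMessage[] {
  const callIdToSkill = new Map<string, string>();
  const out: BaseMessage[] = [];
  for (const message of messages) {
    if (message instanceof AIMessage) {
      for (const call of message.tool_calls ?? []) {
        if (call.name === 'skill' && call.id && typeof call.args?.skillName === 'string') {
          callIdToSkill.set(call.id, call.args.skillName);
        }
      }
    }
    out.push(message);
    if (message instanceof ToolMessage) {
      // Restore the body right after the skill ToolMessage, matching where
      // ToolNode originally injected it during the live run.
      const skillName = callIdToSkill.get(message.tool_call_id);
      const body = skillName !== undefined ? skills.get(skillName) : undefined;
      if (body !== undefined) {
        out.push(new HumanMessage({ content: body }));
      }
    }
  }
  return out;
}
```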
* refactor: rename skillBodies to skills for consistency with SDK param
* refactor: move auth loading into primeInvokedSkills, pass loadAuthValues as dep
The payload/accessibleSkillIds guard and CODE_API_KEY loading now live
inside primeInvokedSkills (packages/api) rather than in the CJS caller.
initialize.js passes loadAuthValues as a dependency and the callback
is only created when skillsCapabilityEnabled.
* feat: ReadFile tool + conditional bash registration + skill path namespacing
ReadFile tool (read_file):
- General-purpose file reader, event-driven (ON_TOOL_EXECUTE)
- Schema: { file_path: string } — "{skillName}/{path}" convention
- handleReadFileCall: resolves skill name from path, ACL check, reads
from DB cache or storage, binary detection, size limits (256KB),
lazy caching (512KB), line numbers in output
- SKILL.md special case: reads skill.body directly
- Dispatched alongside SKILL_TOOL in createToolExecuteHandler
- Added to specialToolNames in ToolService
Conditional tool registration:
- ReadFile + SkillTool: always registered when skills enabled
- BashTool: only registered when codeEnvAvailable === true
- codeEnvAvailable passed through InitializeAgentParams from caller
Skill file path namespacing:
- primeSkillFiles now uploads as "{skillName}/SKILL.md" and
"{skillName}/{relativePath}" instead of flat names
- Prevents file collisions when multiple skills are invoked
Wiring:
- getSkillFileByPath + updateSkillFileContent passed through
ToolExecuteOptions in all three callers
* feat: return images/PDFs as artifacts from read_file, tighten caching
Binary artifact support:
- Images (png, jpeg, gif, webp) returned as base64 in artifact.content
with type: 'image_url', processed by existing callback attachment flow
- PDFs returned as base64 artifact similarly
- Binary size limit: 10MB (MAX_BINARY_BYTES)
- Other binary files still return metadata + bash fallback
Caching:
- Text cached only on first read (file.content == null check)
- Binary flag cached only on first detection (file.isBinary == null)
- Skill files are immutable; no redundant cache writes
Registration:
- ReadFileToolDefinition now includes responseFormat: 'content_and_artifact'
* chore: update @librechat/agents to version 3.1.66-dev.0 and add peer dependencies in package-lock.json and package.json files
* fix: resolve review findings #1,#2,#4,#5,#6,#10,#13
Critical:
- #1: primeInvokedSkills now accumulates files across all skills into
one session entry instead of overwriting. Parallel processing via
Promise.allSettled.
- #2: codeEnvAvailable now computed and passed in openai.js and
responses.js (was missing, bash tool never registered in those flows)
Major:
- #4: relativePath in updateSkillFileCodeEnvIds now strips the
{skillName}/ prefix to match SkillFile documents. SKILL.md filter
uses endsWith instead of exact match.
- #5: File priming guarded on apiKey being non-empty (skip when not
configured instead of failing with auth error)
- #6: Skills processed in parallel via Promise.allSettled instead of
sequential for-of loop
Minor:
- #10: Use top-level imports in initialize.js instead of inline requires
- #13: Log warning when skill catalog reaches the 100-skill limit
* fix: resolve followup review findings N1,N2,N4
N1 (CRITICAL): Wire skill deps into responses.js non-streaming path.
Was completely missing getSkillByName, file strategy functions, etc.
N2 (MAJOR): Single batch upload for ALL skills' files. Resolves skills
in parallel (Phase 1), then collects all file streams across skills
and does ONE batchUploadCodeEnvFiles call (Phase 2). All files share
one session_id, eliminating cross-session isolation issues.
N4 (MINOR): Move inline require() to top-level in openai.js and
responses.js, consistent with initialize.js.
* fix: add mocks for new file strategy imports in controller tests
* fix: restore session freshness check, parallelize file lookups, add warnings
R1: Re-add session freshness check before batch upload. Checks any
existing codeEnvIdentifier via getSessionInfo + checkIfActive. If the
session is still active (23h window), returns cached file references
with zero re-uploads.
R2: listSkillFiles calls parallelized via Promise.all (were sequential
in the for-of loop).
R3: Log warning when skill record lookup fails during identifier
persistence (was a silent empty-string fallback).
* fix: guard freshness cache on single-session consistency
* fix: multi-session freshness check (code env handles mixed sessions natively)
The code execution environment fetches each file by its own
{session_id, fileId} pair independently — no single-session
requirement. Removed the sessionIds.size === 1 guard.
Now checks ALL distinct sessions for freshness. If every session
is still active (23h window), returns cached references with per-file
session_ids preserved. If any session expired, falls through to
re-upload everything in a single batch.
* perf: parallelize session freshness checks via Promise.all
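Roughly, under the assumption that each stored identifier embeds its session id (the real identifier format and probe helpers live in the code-env client):

```ts
// All distinct sessions are probed concurrently; any expired session forces
// a full batch re-upload, otherwise cached references are reused as-is.
async function allSessionsFresh(
  identifiers: string[],
  getSessionInfo: (sessionId: string) => Promise<unknown>,
  checkIfActive: (info: unknown) => boolean,
): Promise<boolean> {
  const sessionIds = [...new Set(identifiers.map((id) => id.split(':')[0]))];
  const infos = await Promise.all(sessionIds.map((sid) => getSessionInfo(sid)));
  return infos.every((info) => checkIfActive(info));
}
```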
* fix: add optional chaining for session info retrieval in primeInvokedSkills
Updated the primeInvokedSkills function to use optional chaining for getSessionInfo and checkIfActive methods, ensuring safer access and preventing potential runtime errors when these methods are undefined.
* fix: address review findings #1-#9 + Codex P1/P2 + session probe
Critical:
- #1/Codex P1: Add codeApiKey loading to openai.js and responses.js
loadTools configurable (was missing, file priming broken in 2/3 paths)
- Codex P1: Fix cached file name prefix in primeSkillFiles cache path
(was sf.relativePath, now ${skill.name}/${sf.relativePath})
Major:
- Codex P2: Honor ephemeral skills toggle in agents endpoint
(check ephemeralAgent?.skills !== false alongside admin capability)
- #4: Early size check using file.bytes from DB before streaming
(prevents full-file buffer for oversized files)
Minor:
- #5: Replace Record<string, any> with Record<string, boolean | string>
- #6: Localize Pin/Unpin aria-labels with com_ui_pin/com_ui_unpin
- #8: Parallelize stream acquisition in primeSkillFiles via
Promise.allSettled
- #9: Log warning for partial batch upload failures with filenames
Performance:
- Session probe optimization: getSessionInfo now hits per-object
endpoint (GET /sessions/{sid}/objects/{fid}) instead of listing
entire session (GET /files/{sid}?detail=summary). O(1) stat vs
O(N) list + linear scan.
* refactor: extract shared skill wiring helper + add unit tests
DRY (#3):
- New skillDeps.js exports getSkillToolDeps() with all 9 skill-related
deps (getSkillByName, listSkillFiles, getStrategyFunctions, etc.)
- Replaces 5 identical copy-paste blocks across initialize.js, openai.js,
responses.js (streaming + non-streaming paths)
- One place to maintain when skill deps change
Tests (#2):
- 8 unit tests for extractInvokedSkillsFromPayload covering:
string args, object args, missing skill tool_calls, non-assistant
messages, malformed JSON, empty skillName, empty payload, dedup
* fix: remove @jest/globals import, use global jest env
* fix: resolve round 2 review findings R2-1 through R2-7
R2-1 (toggle semantics): openai.js + responses.js now check admin
capability (AgentCapabilities.skills) alongside ephemeral toggle.
Aligns with initialize.js.
R2-2 (swallowed error): primeInvokedSkills now logs
updateSkillFileCodeEnvIds failures (was .catch(() => {}))
R2-4 (test cast): Record<string, string> → Record<string, unknown>
R2-5 (DRY regression): Extract enrichWithSkillConfigurable() into
skillDeps.js. Replaces 4 identical loadAuthValues blocks.
Each loadTools callback is now a one-liner. JSDoc added (R2-6).
R2-7 (sequential streams): primeInvokedSkills now uses
Promise.allSettled for parallel stream acquisition.
* fix: require explicit skills toggle + treat partial cache as miss
- initialize.js: change ephemeralSkillsToggle !== false to === true
(unset toggle no longer enables skills)
- primeSkillFiles cache: require ALL files to have codeEnvIdentifier
before using cache (partial persistence = cache miss = re-upload)
- primeInvokedSkills cache: same check (allFilesWithIds.length must
equal total file count)
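The cache gate reduces to an all-or-nothing check, sketched here with an assumed file shape:

```ts
// Partial persistence counts as a miss: only trust the cache when EVERY
// file carries a persisted codeEnvIdentifier.
interface SkillFileRef {
  relativePath: string;
  codeEnvIdentifier?: string;
}

function canUseCachedSession(files: SkillFileRef[]): boolean {
  return (
    files.length > 0 && files.every((f) => f.codeEnvIdentifier != null)
  );
}
```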
* fix: pass entity_id=skillId on batch upload, eliminates per-user cache thrashing
primeSkillFiles now passes entity_id: skill._id.toString() to
batchUploadCodeEnvFiles. This scopes the code env session to the
skill, not the user. All users sharing a skill share the same
uploaded files — no more cache thrashing from overwriting each
other's codeEnvIdentifier.
The stored codeEnvIdentifier now includes ?entity_id= suffix so
freshness checks pass the entity_id through to the per-object
stat endpoint. Both primeSkillFiles and primeInvokedSkills
store consistent identifier formats.
* fix: pass entity_id on multi-skill batch upload, consistent identifier format
* Revert "fix: pass entity_id on multi-skill batch upload, consistent identifier format"
This reverts commit
|
||
|
|
d2cbd551b7
|
🤝 fix: Load Handoff Agents for Agents API (#12740)
* 🤝 fix: load handoff sub-agents on OpenAI-compat endpoints (#12726)
Extracts the BFS discovery + ACL-gated initialization of handoff sub-agents
into a shared `discoverConnectedAgents` helper in `@librechat/api` and
wires it into the OpenAI-compatible `/v1/chat/completions` and Open
Responses `/v1/responses` controllers. These endpoints previously only
passed the primary agent config to `createRun` while keeping
`primaryConfig.edges` intact, which forced `MultiAgentGraph` into
multi-agent mode without loading the referenced sub-agents and caused
StateGraph to throw "Found edge ending at unknown node <id>".
The discovery helper also filters orphaned edges (deleted sub-agents or
those the caller lacks VIEW permission on), so API users see the same
graceful fallback the chat UI already had.
* 🧪 fix: use ServerRequest in discovery spec helpers
CI `tsc --noEmit -p packages/api/tsconfig.json` caught that the test
helpers typed `req` as `express.Request`, which is not assignable to
`DiscoverConnectedAgentsParams.req` (typed as `ServerRequest` whose
`user` is `IUser`). Local jest passed because ts-jest is transpile-only,
but the CI typecheck uses the full compiler.
* 🪲 fix: drop orphan edges on both endpoints, not just `to`
Addresses the P1 codex finding on #12740: `filterOrphanedEdges`
previously only removed edges whose `to` referenced a skipped agent.
Edges whose `from` was a skipped agent — the symmetric case in a
bidirectional graph like `A <-> B` where `B` is deleted or the user
lacks VIEW on it — leaked through to `createRun` and re-triggered
`Found edge ending at unknown node <id>` at StateGraph compile time.
The filter now drops an edge if either endpoint references a skipped
id, and the existing `to`-only test cases were updated to reflect the
stricter behavior. Adds a bidirectional-graph regression test in
`discovery.spec.ts`.
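A sketch of the stricter filter; the edge shape (with a possible fan-in `from` array, per later findings in this PR) is assumed:

```ts
interface AgentEdge {
  from: string | string[];
  to: string;
}

// An edge dies if EITHER endpoint references a skipped agent: `to` (the
// original check) or any `from` source (the symmetric case fixed here).
function filterOrphanedEdges(edges: AgentEdge[], skipped: Set<string>): AgentEdge[] {
  return edges.filter((edge) => {
    const sources = Array.isArray(edge.from) ? edge.from : [edge.from];
    return !skipped.has(edge.to) && sources.every((from) => !skipped.has(from));
  });
}
```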
* 🔒 fix: enforce REMOTE_AGENT ACL on handoff sub-agents for API routes
Addresses the second P1 codex finding on #12740: the OpenAI-compat
`/v1/chat/completions` and Open Responses `/v1/responses` routes gate
the primary agent on `REMOTE_AGENT` (via `createCheckRemoteAgentAccess`),
but `discoverConnectedAgents` was checking handoff sub-agents against
the looser in-app `AGENT` resource type. That allowed a remote caller
who could reach the orchestrator but had only in-app visibility on a
sub-agent to invoke it via the API — bypassing the remote-sharing
boundary.
Adds an optional `resourceType` param to `discoverConnectedAgents`
(defaulting to `AGENT` for the chat UI path) and passes
`ResourceType.REMOTE_AGENT` from both API controllers so every
discovered sub-agent clears the same sharing boundary enforced at
route entry.
* 🧯 fix: enforce allowedProviders for discovered sub-agents
Addresses the third P1 codex finding on #12740: `discoverConnectedAgents`
forwarded the caller's `endpointOption` verbatim into `initializeAgent`,
but on the OpenAI-compat routes that option's `endpoint` is the primary
agent's provider (e.g. `openai`), not `agents`. `initializeAgent` only
enforces `allowedProviders` when `isAgentsEndpoint(endpointOption.endpoint)`
is true, so handoff sub-agents silently bypassed the provider allowlist
configured under `endpoints.agents.allowedProviders`.
Override `endpointOption.endpoint` to `EModelEndpoint.agents` for every
per-sub-agent init call. The primary agent still uses the caller's
endpointOption as before — this only affects the BFS-loaded handoff
targets. Regression test asserts the override.
* ✂️ fix: prune unreachable sub-agents after orphan-edge filtering
Addresses the fourth P1 codex finding on #12740: BFS eagerly initializes
every sub-agent referenced in the primary's edge scan, but once
`filterOrphanedEdges` drops edges whose endpoints were skipped, some of
those sub-agents end up disconnected from the primary. In an `A -> B ->
C` graph (edges stored directly on A) where B is skipped (missing or
no VIEW), both edges are filtered, but C was already loaded and would
still be passed to `createRun` — which flips into multi-agent mode on
`agents.length > 1` and turns C into an unintended parallel start node.
After filtering edges, compute the set of agent ids reachable from the
primary through the surviving edge set and prune `agentConfigs` to that
set. Two regression tests added: one for the pruning case, one that
confirms agents connected via surviving edges are still kept.
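A sketch of the prune, using single-source edges for brevity (later commits in this PR refine the multi-source reachability semantics):

```ts
// BFS from the primary over the surviving edge set, then drop any sub-agent
// that BFS eagerly loaded but the filtered graph no longer connects.
function pruneUnreachable(
  primaryId: string,
  edges: Array<{ from: string; to: string }>,
  agentConfigs: Map<string, unknown>,
): void {
  const reachable = new Set([primaryId]);
  const frontier = [primaryId];
  while (frontier.length > 0) {
    const current = frontier.pop()!;
    for (const edge of edges) {
      if (edge.from === current && !reachable.has(edge.to)) {
        reachable.add(edge.to);
        frontier.push(edge.to);
      }
    }
  }
  for (const id of [...agentConfigs.keys()]) {
    if (!reachable.has(id)) {
      agentConfigs.delete(id); // loaded during BFS, now disconnected
    }
  }
}
```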
* 🔁 fix: don't seed initialize.js agentConfigs from the pre-pruning callback
Addresses the fifth P1 codex finding on #12740: `onAgentInitialized`
fires during BFS, BEFORE the helper prunes agents that become
disconnected once `filterOrphanedEdges` runs. Writing the sub-agent
straight into the outer `agentConfigs` there and then only additively
merging the pruned `discoveredConfigs` left stranded entries in the
outer map, and `AgentClient` would still hand them to `createRun` as
extra parallel start nodes (the exact failure mode the pass-4 prune
was meant to eliminate for the API controllers).
Drop the `agentConfigs.set` from the callback and replace the additive
merge with a direct copy from `discoveredConfigs`, which is now the
single authoritative source of what the run should see. The
per-agent tool context map is still populated during BFS — stale
entries there are harmless because they're only read by closure inside
`ON_TOOL_EXECUTE` and are unreachable once the agent is not in
`agentConfigs`.
* 🔬 fix: address audit findings on discovery helper
Resolves findings from a comprehensive external audit of #12740.
**Finding 1 (CRITICAL) — stale edges survive the reachability prune.**
The pass-4 prune removed unreachable agents from `agentConfigs` but left
matching edges in the return value. In an `A -> B -> C -> D` graph (all
edges stored on A) where B is skipped, `filterOrphanedEdges` drops A->B
and B->C but keeps C->D (neither endpoint is skipped). The caller then
sees `agentConfigs` without C/D but `edges` still references them,
flipping `createRun` into multi-agent mode with mismatched agents/edges
— the exact crash this PR is supposed to fix. Now filter the edge list
to the reachable set in the same pass, so the returned shape is
self-consistent: every edge endpoint is either the primary id or a key
of `agentConfigs`. New regression test covers A->B->C->D with B skipped.
**Finding 2 (MAJOR) — unconditional `getModelsConfig` on every API
request.** The OpenAI-compat and Responses controllers called
`getModelsConfig(req)` and `discoverConnectedAgents` even when the
primary agent had no edges (the common single-agent API case). Gate
both behind `primaryConfig.edges?.length > 0` so single-agent runs
don't pay that cost.
**Finding 5 (MINOR) — silent mutation of caller's
`primaryConfig.userMCPAuthMap`.** The helper aliased that object and
then `Object.assign`'d sub-agent entries into it, changing the caller's
config in-place. Shallow-clone up front so the returned merged map is
the only destination.
**Finding 7 (NIT) — dead `?? []` coalescing.**
`filterOrphanedEdges` always returns a concrete array, so the
`discoveredEdges ?? []` fallback was never reached. Simplified the
`primaryConfig.edges = …` assignment.
Also adds a test that verifies `primaryConfig.userMCPAuthMap` is not
mutated in-place.
* 🧹 chore: address audit NITs on discovery helper
Addresses two NIT findings from the post-fix audit:
**F1** — the shallow clone on `primaryConfig.userMCPAuthMap` was only
applied on the primary side; the `else` branch (hit when the primary
had no MCP auth and the first sub-agent seeds the map) assigned the
sub-agent's `config.userMCPAuthMap` directly, so a later sub-agent's
`Object.assign` mutated the first one's map in place. Harmless in
practice (per-request ephemeral objects) but asymmetric. Clone in the
else branch too. Test added.
**F2** — `initialize.js` had a defensive `if (agentConfigs.size > 0 &&
!edges) edges = []` normalizer. Pre-existing dead code: the helper now
always returns a concrete array from `filteredEdges.filter(...)`.
Removed for clarity.
* 🕸 fix: require all sources reachable when traversing fan-in edges
Addresses the seventh P1 codex finding on #12740: the reachability BFS
advanced through an edge as soon as any of its `from` endpoints matched
the current frontier node (`sources.includes(current)`), but the
subsequent edge filter required ALL sources to be reachable (`every`).
The two-semantics mismatch let a fan-in edge like `{from: ['A','B'],
to: 'C'}` mark C reachable purely via A even when B had no path from
the primary, then drop the edge itself at filter time. Result: C
survived in `agentConfigs` with no surviving edge connecting it to A,
so `createRun` flipped into multi-agent mode on `agents.length > 1`
and C ran as an unintended parallel root.
Replace the BFS with a fixed-point iteration keyed on the same
all-sources-reachable predicate used by the filter, so traversal and
filtering stay aligned and multi-source edges only fire once every
source is in the reachable set.
Two regression tests added:
- `{from: ['A','B'], to: 'C'}` with B having no incoming path — asserts
neither B nor C leak into the result.
- `A -> B`, `A -> C`, `['B','C'] -> D` — asserts the fan-in edge fires
and D becomes reachable once both B and C are.
* 🔀 fix: match SDK OR semantics for multi-source edge reachability
Reverts the all-sources-required reachability gate from
|
||
|
|
cb41ba14b2
|
🔁 fix: Pass recursionLimit to OpenAI-Compatible Agents API Endpoint (#12510)
* fix: pass recursionLimit to processStream in OpenAI-compatible agents API
The OpenAI-compatible endpoint never passed recursionLimit to LangGraph's processStream(), silently capping all API-based agent calls at the default 25 steps. Mirror the 3-step cascade already used by the UI path (client.js): yaml config default → per-agent DB override → max cap.
* refactor: extract resolveRecursionLimit into shared utility
Extract the 3-step recursion limit cascade into a shared resolveRecursionLimit() function in @librechat/api. Both openai.js and client.js now call this single source of truth. Also fixes falsy-guard edge cases where recursion_limit=0 or maxRecursionLimit=0 would silently misbehave, by using explicit typeof + positive checks. Includes unit tests covering all cascade branches and edge cases. A sketch of the cascade follows after this entry.
* refactor: use resolveRecursionLimit in openai.js and client.js
Replace duplicated cascade logic in both controllers with the shared resolveRecursionLimit() utility from @librechat/api. In openai.js: hoist agentsEConfig to avoid double property walk, remove displaced comment, add integration test assertions. In client.js: remove inline cascade that was overriding config after initial assignment.
* fix: hoist processStream mock for test accessibility
The processStream mock was created inline inside mockResolvedValue, making it inaccessible via createRun.mock.results (which returns the Promise, not the resolved value). Hoist it to a module-level variable so tests can assert on it directly.
* test: improve test isolation and boundary coverage
Use mockReturnValueOnce instead of mockReturnValue to prevent mock leaking across test boundaries. Add boundary tests for downward agent override and exact-match maxRecursionLimit.
|
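A sketch of the cascade with the zero-safe guards described above; parameter names are illustrative, not the utility's real signature:

```ts
// yaml config default -> per-agent DB override -> max cap, with explicit
// typeof + positive checks so a configured 0 never slips through as falsy.
function resolveRecursionLimit(
  configDefault?: number, // yaml config default
  agentOverride?: number, // per-agent DB override
  maxCap?: number, // hard ceiling from config
  fallback = 25, // LangGraph's built-in default
): number {
  let limit =
    typeof configDefault === 'number' && configDefault > 0
      ? configDefault
      : fallback;
  if (typeof agentOverride === 'number' && agentOverride > 0) {
    limit = agentOverride;
  }
  if (typeof maxCap === 'number' && maxCap > 0 && limit > maxCap) {
    limit = maxCap;
  }
  return limit;
}
```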
||
|
|
b5c097e5c7
|
⚗️ feat: Agent Context Compaction/Summarization (#12287)
* chore: imports/types
Add summarization config and package-level summarize handler contracts
Register summarize handlers across server controller paths
Port cursor dual-read/dual-write summary support and UI status handling
Selectively merge cursor branch files for BaseClient summary content
block detection (last-summary-wins), dual-write persistence, summary
block unit tests, and on_summarize_status SSE event handling with
started/completed/failed branches.
Co-authored-by: Cursor <cursoragent@cursor.com>
refactor: type safety
feat: add localization for summarization status messages
refactor: optimize summary block detection in BaseClient
Updated the logic for identifying existing summary content blocks to use a reverse loop for improved efficiency. Added a new test case to ensure the last summary content block is updated correctly when multiple summary blocks exist.
chore: add runName to chainOptions in AgentClient
refactor: streamline summarization configuration and handler integration
Removed the deprecated summarizeNotConfigured function and replaced it with a more flexible createSummarizeFn. Updated the summarization handler setup across various controllers to utilize the new function, enhancing error handling and configuration resolution. Improved overall code clarity and maintainability by consolidating summarization logic.
feat(summarization): add staged chunk-and-merge fallback
feat(usage): track summarization usage separately from messages
feat(summarization): resolve prompt from config in runtime
fix(endpoints): use @librechat/api provider config loader
refactor(agents): import getProviderConfig from @librechat/api
chore: code order
feat(app-config): auto-enable summarization when configured
feat: summarization config
refactor(summarization): streamline persist summary handling and enhance configuration validation
Removed the deprecated createDeferredPersistSummary function and integrated a new createPersistSummary function for MongoDB persistence. Updated summarization handlers across various controllers to utilize the new persistence method. Enhanced validation for summarization configuration to ensure provider, model, and prompt are properly set, improving error handling and overall robustness.
refactor(summarization): update event handling and remove legacy summarize handlers
Replaced the deprecated summarization handlers with new event-driven handlers for summarization start and completion across multiple controllers. This change enhances the clarity of the summarization process and improves the integration of summarization events in the application. Additionally, removed unused summarization functions and streamlined the configuration loading process.
refactor(summarization): standardize event names in handlers
Updated event names in the summarization handlers to use constants from GraphEvents for consistency and clarity. This change improves maintainability and reduces the risk of errors related to string literals in event handling.
feat(summarization): enhance usage tracking for summarization events
Added logic to track summarization usage in multiple controllers by checking the current node type. If the node indicates a summarization task, the usage type is set accordingly. This change improves the granularity of usage data collected during summarization processes.
feat(summarization): integrate SummarizationConfig into AppSummarizationConfig type
Enhanced the AppSummarizationConfig type by extending it with the SummarizationConfig type from librechat-data-provider. This change improves type safety and consistency in the summarization configuration structure.
test: add end-to-end tests for summarization functionality
Introduced a comprehensive suite of end-to-end tests for the summarization feature, covering the full LibreChat pipeline from message creation to summarization. This includes a new setup file for environment configuration and a Jest configuration specifically for E2E tests. The tests utilize real API keys and ensure proper integration with the summarization process, enhancing overall test coverage and reliability.
refactor(summarization): include initial summary in formatAgentMessages output
Updated the formatAgentMessages function to return an initial summary alongside messages and index token count map. This change is reflected in multiple controllers and the corresponding tests, enhancing the summarization process by providing additional context for each agent's response.
refactor: move hydrateMissingIndexTokenCounts to tokenMap utility
Extracted the hydrateMissingIndexTokenCounts function from the AgentClient and related tests into a new tokenMap utility file. This change improves code organization and reusability, allowing for better management of token counting logic across the application.
refactor(summarization): standardize step event handling and improve summary rendering
Refactored the step event handling in the useStepHandler and related components to utilize constants for event names, enhancing consistency and maintainability. Additionally, improved the rendering logic in the Summary component to conditionally display the summary text based on its availability, providing a better user experience during the summarization process.
feat(summarization): introduce baseContextTokens and reserveTokensRatio for improved context management
Added baseContextTokens to the InitializedAgent type to calculate the context budget based on agentMaxContextNum and maxOutputTokensNum. Implemented reserveTokensRatio in the createRun function to allow configurable context token management. Updated related tests to validate these changes and ensure proper functionality.
feat(summarization): add minReserveTokens, context pruning, and overflow recovery configurations
Introduced new configuration options for summarization, including minReserveTokens, context pruning settings, and overflow recovery parameters. Updated the createRun function to accommodate these new options and added a comprehensive test suite to validate their functionality and integration within the summarization process.
feat(summarization): add updatePrompt and reserveTokensRatio to summarization configuration
Introduced an updatePrompt field for updating existing summaries with new messages, enhancing the flexibility of the summarization process. Additionally, added reserveTokensRatio to the configuration schema, allowing for improved management of token allocation during summarization. Updated related tests to validate these new features.
feat(logging): add on_agent_log event handler for structured logging
Implemented an on_agent_log event handler in both the agents' callbacks and responses to facilitate structured logging of agent activities. This enhancement allows for better tracking and debugging of agent interactions by logging messages with associated metadata. Updated the summarization process to ensure proper handling of log events.
fix: remove duplicate IBalanceUpdate interface declaration
perf(usage): single-pass partition of collectedUsage
Replace two Array.filter() passes with a single for-of loop that
partitions message vs. summarization usages in one iteration.
fix(BaseClient): shallow-copy message content before mutating and preserve string content
Avoid mutating the original message.content array in-place when
appending a summary block. Also convert string content to a text
content part instead of silently discarding it.
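A minimal sketch of that copy-then-append behavior, with simplified types:

```ts
// Simplified content-part shape; the real union is richer.
type ContentPart = { type: string; text?: string };

// Preserve string content as a text part instead of discarding it, and
// shallow-copy arrays so the original message.content is never mutated.
function appendSummaryBlock(
  content: string | ContentPart[] | undefined,
  summaryBlock: ContentPart,
): ContentPart[] {
  const parts: ContentPart[] =
    typeof content === 'string'
      ? [{ type: 'text', text: content }]
      : [...(content ?? [])];
  parts.push(summaryBlock);
  return parts;
}
```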
fix(ui): fix Part.tsx indentation and useStepHandler summarize-complete handling
- Fix SUMMARY else-if branch indentation in Part.tsx to match chain level
- Guard ON_SUMMARIZE_COMPLETE with didFinalize flag to avoid unnecessary
re-renders when no summarizing parts exist
- Protect against undefined completeData.summary instead of unsafe spread
fix(agents): use strict enabled check for summarization handlers
Change summarizationConfig?.enabled !== false to === true so handlers
are not registered when summarizationConfig is undefined.
chore: fix initializeClient JSDoc and move DEFAULT_RESERVE_RATIO to module scope
refactor(Summary): align collapse/expand behavior with Reasoning component
- Single render path instead of separate streaming vs completed branches
- Use useMessageContext for isSubmitting/isLatestMessage awareness so
the "Summarizing..." label only shows during active streaming
- Default to collapsed (matching Reasoning), user toggles to expand
- Add proper aria attributes (aria-hidden, role, aria-controls, contentId)
- Hide copy button while actively streaming
feat(summarization): default to self-summarize using agent's own provider/model
When no summarization config is provided (neither in librechat.yaml nor
on the agent), automatically enable summarization using the agent's own
provider and model. The agents package already provides default prompts,
so no prompt configuration is needed.
Also removes the dead resolveSummarizationLLMConfig in summarize.ts
(and its spec) — run.ts buildAgentContext is the single source of truth
for summarization config resolution. Removes the duplicate
RuntimeSummarizationConfig local type in favor of the canonical
SummarizationConfig from data-provider.
chore: schema and type cleanup for summarization
- Add trigger field to summarizationAgentOverrideSchema so per-agent
trigger overrides in librechat.yaml are not silently stripped by Zod
- Remove unused SummarizationStatus type from runs.ts
- Make AppSummarizationConfig.enabled non-optional to reflect the
invariant that loadSummarizationConfig always sets it
refactor(responses): extract duplicated on_agent_log handler
refactor(run): use agents package types for summarization config
Import SummarizationConfig, ContextPruningConfig, and
OverflowRecoveryConfig from @librechat/agents and use them to
type-check the translation layer in buildAgentContext. This ensures
the config object passed to the agent graph matches what it expects.
- Use `satisfies AgentSummarizationConfig` on the config object
- Cast contextPruningConfig and overflowRecoveryConfig to agents types
- Properly narrow trigger fields from DeepPartial to required shape
feat(config): add maxToolResultChars to base endpoint schema
Add maxToolResultChars to baseEndpointSchema so it can be configured
on any endpoint in librechat.yaml. Resolved during agent initialization
using getProviderConfig's endpoint resolution: custom endpoint config
takes precedence, then the provider-specific endpoint config, then the
shared `all` config.
Passed through to the agents package ToolNode, which uses it to cap
tool result length before it enters the context window. When not
configured, the agents package computes a sensible default from
maxContextTokens.
fix(summarization): forward agent model_parameters in self-summarize default
When no explicit summarization config exists, the self-summarize
default now forwards the agent's model_parameters as the
summarization parameters. This ensures provider-specific settings
(e.g. Bedrock region, credentials, endpoint host) are available
when the agents package constructs the summarization LLM.
fix(agents): register summarization handlers by default
Change the enabled gate from === true to !== false so handlers
register when no explicit summarization config exists. This aligns
with the self-summarize default where summarization is always on
unless explicitly disabled via enabled: false.
refactor(summarization): let agents package inherit clientOptions for self-summarize
Remove model_parameters forwarding from the self-summarize default.
The agents package now reuses the agent's own clientOptions when the
summarization provider matches the agent's provider, inheriting all
provider-specific settings (region, credentials, proxy, etc.)
automatically.
refactor(summarization): use MessageContentComplex[] for summary content
Unify summary content to always use MessageContentComplex[] arrays,
matching the pattern used by on_message_delta. No more string | array
unions — content is always an array of typed blocks ({ type: 'text',
text: '...' } for text, { type: 'reasoning_content', ... } for
reasoning).
Agents package:
- SummaryContentBlock.content: MessageContentComplex[] (was string)
- tokenCount now optional (not sent on deltas)
- Removed reasoning field — reasoning is now a content block type
- streamAndCollect normalizes all chunks to content block arrays
- Delta events pass content blocks directly
LibreChat:
- SummaryContentPart.content: Agents.MessageContentComplex[]
- Updated Part.tsx, Summary.tsx, useStepHandler.ts, BaseClient.js
- Summary.tsx derives display text from content blocks via useMemo
- Aggregator uses simple array spread
refactor(summarization): enhance summary handling and text extraction
- Updated BaseClient.js to improve summary text extraction, accommodating both legacy and new content formats.
- Modified summarization logic to ensure consistent handling of summary content across different message formats.
- Adjusted test cases in summarization.e2e.spec.js to utilize the new summary text extraction method.
- Refined SSE useStepHandler to initialize summary content as an array.
- Updated configuration schema by removing unused minReserveTokens field.
- Cleaned up SummaryContentPart type by removing rangeHash property.
These changes streamline the summarization process and ensure compatibility with various content structures.
refactor(summarization): streamline usage tracking and logging
- Removed direct checks for summarization nodes in ModelEndHandler and replaced them with a dedicated markSummarizationUsage function for better readability and maintainability.
- Updated OpenAIChatCompletionController and responses handlers to utilize the new markSummarizationUsage function for setting usage types.
- Enhanced logging functionality by ensuring the logger correctly handles different log levels.
- Introduced a new useCopyToClipboard hook in the Summary component to encapsulate clipboard copy logic, improving code reusability and clarity.
These changes improve the overall structure and efficiency of the summarization handling and logging processes.
refactor(summarization): update summary content block documentation
- Removed outdated comment regarding the last summary content block in BaseClient.js.
- Added a new comment to clarify the purpose of the findSummaryContentBlock method, ensuring consistency in documentation.
These changes enhance code clarity and maintainability by providing accurate descriptions of the summarization logic.
refactor(summarization): update summary content structure in tests
- Modified the summarization content structure in e2e tests to use an array format for text, aligning with recent changes in summary handling.
- Updated test descriptions to clarify the behavior of context token calculations, ensuring consistency and clarity in the tests.
These changes enhance the accuracy and maintainability of the summarization tests by reflecting the updated content structure.
refactor(summarization): remove legacy E2E test setup and configuration
- Deleted the e2e-setup.js and jest.e2e.config.js files, which contained legacy configurations for E2E tests using real API keys.
- Introduced a new summarization.e2e.ts file that implements comprehensive E2E backend integration tests for the summarization process, utilizing real AI providers and tracking summaries throughout the run.
These changes streamline the testing framework by consolidating E2E tests into a single, more robust file while removing outdated configurations.
refactor(summarization): enhance E2E tests and error handling
- Added a cleanup step to force exit after all tests to manage Redis connections.
- Updated the summarization model to 'claude-haiku-4-5-20251001' for consistency across tests.
- Improved error handling in the processStream function to capture and return processing errors.
- Enhanced logging for cross-run tests and tight context scenarios to provide better insights into test execution.
These changes improve the reliability and clarity of the E2E tests for the summarization process.
refactor(summarization): enhance test coverage for maxContextTokens behavior
- Updated run-summarization.test.ts to include a new test case ensuring that maxContextTokens does not exceed user-defined limits, even when calculated ratios suggest otherwise.
- Modified summarization.e2e.ts to replace legacy UsageMetadata type with a more appropriate type for collectedUsage, improving type safety and clarity in the test setup.
These changes improve the robustness of the summarization tests by validating context token constraints and refining type definitions.
feat(summarization): add comprehensive E2E tests for summarization process
- Introduced a new summarization.e2e.test.ts file that implements extensive end-to-end integration tests for the summarization pipeline, covering the full flow from LibreChat to agents.
- The tests utilize real AI providers and include functionality to track summaries during and between runs.
- Added necessary cleanup steps to manage Redis connections post-tests and ensure proper exit.
These changes enhance the testing framework by providing robust coverage for the summarization process, ensuring reliability and performance under real-world conditions.
fix(service): import logger from winston configuration
- Removed the import statement for logger from '@librechat/data-schemas' and replaced it with an import from '~/config/winston'.
- This change ensures that the logger is correctly sourced from the updated configuration, improving consistency in logging practices across the application.
refactor(summary): simplify Summary component and enhance token display
- Removed the unused `meta` prop from the `SummaryButton` component to streamline its interface.
- Updated the token display logic to use a localized string for better internationalization support.
- Adjusted the rendering of the `meta` information to improve its visibility within the `Summary` component.
These changes enhance the clarity and usability of the Summary component while ensuring better localization practices.
feat(summarization): add maxInputTokens configuration for summarization
- Introduced a new `maxInputTokens` property in the summarization configuration schema to control the amount of conversation context sent to the summarizer, with a default value of 10000.
- Updated the `createRun` function to utilize the new `maxInputTokens` setting, allowing for more flexible summarization based on agent context.
These changes enhance the summarization capabilities by providing better control over input token limits, improving the overall summarization process.
refactor(summarization): simplify maxInputTokens logic in createRun function
- Updated the logic for the `maxInputTokens` property in the `createRun` function to directly use the agent's base context tokens when the resolved summarization configuration does not specify a value.
- This change streamlines the configuration process and enhances clarity in how input token limits are determined for summarization.
These modifications improve the maintainability of the summarization configuration by reducing complexity in the token calculation logic.
feat(summary): enhance Summary component to display meta information
- Updated the SummaryContent component to accept an optional `meta` prop, allowing for additional contextual information to be displayed above the main content.
- Adjusted the rendering logic in the Summary component to utilize the new `meta` prop, improving the visibility of supplementary details.
These changes enhance the user experience by providing more context within the Summary component, making it clearer and more informative.
refactor(summarization): standardize reserveRatio configuration in summarization logic
- Replaced instances of `reserveTokensRatio` with `reserveRatio` in the `createRun` function and related tests to unify the terminology across the codebase.
- Updated the summarization configuration schema to reflect this change, ensuring consistency in how the reserve ratio is defined and utilized.
- Removed the per-agent override logic for summarization configuration, simplifying the overall structure and enhancing clarity.
These modifications improve the maintainability and readability of the summarization logic by standardizing the configuration parameters.
* fix: circular dependency of `~/models`
* chore: update logging scope in agent log handlers
Changed log scope from `[agentus:${data.scope}]` to `[agents:${data.scope}]` in both the callbacks and responses controllers to ensure consistent logging format across the application.
* feat: calibration ratio
* refactor(tests): update summarizationConfig tests to reflect changes in enabled property
Modified tests to check for the new `summarizationEnabled` property instead of the deprecated `enabled` field in the summarization configuration. This change ensures that the tests accurately validate the current configuration structure and behavior of the agents.
* feat(tests): add markSummarizationUsage mock for improved test coverage
Introduced a mock for the markSummarizationUsage function in the responses unit tests to enhance the testing of summarization usage tracking. This addition supports better validation of summarization-related functionalities and ensures comprehensive test coverage for the agents' response handling.
* refactor(tests): simplify event handler setup in createResponse tests
Removed redundant mock implementations for event handlers in the createResponse unit tests, streamlining the setup process. This change enhances test clarity and maintainability while ensuring that the tests continue to validate the correct behavior of usage tracking during on_chat_model_end events.
* refactor(agents): move calibration ratio capture to finally block
Reorganized the logic for capturing the calibration ratio in the AgentClient class to ensure it is executed in the finally block. This change guarantees that the ratio is captured even if the run is aborted, enhancing the reliability of the response message persistence. Removed redundant code and improved clarity in the handling of context metadata.
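A sketch of the try/finally shape this describes; the function and field names here are illustrative, not the actual AgentClient internals:

```ts
async function runWithCalibrationCapture(
  runAgent: () => Promise<void>,
  getObservedRatio: () => number | undefined,
  persistMeta: (meta: { calibrationRatio: number }) => void,
): Promise<void> {
  try {
    await runAgent();
  } finally {
    // Runs even when the agent run aborts, so the persisted response
    // message always carries the observed tokenizer-bias ratio.
    persistMeta({ calibrationRatio: getObservedRatio() ?? 1 });
  }
}
```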
* refactor(agents): streamline bulk write logic in recordCollectedUsage function
Removed redundant bulk write operations and consolidated document handling in the recordCollectedUsage function. The logic now combines all documents into a single bulk write operation, improving efficiency and reducing error handling complexity. Updated logging to provide consistent error messages for bulk write failures.
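A minimal sketch of the consolidation, with illustrative names and a simplified op shape:

```ts
type BulkOp = { insertOne: { document: Record<string, unknown> } };

async function flushCollectedUsage(
  messageOps: BulkOp[],
  summarizationOps: BulkOp[],
  bulkWrite: (ops: BulkOp[]) => Promise<unknown>,
): Promise<void> {
  // All documents go out in a single bulk write instead of one call
  // per usage group.
  const ops = [...messageOps, ...summarizationOps];
  if (ops.length === 0) {
    return;
  }
  try {
    await bulkWrite(ops);
  } catch (error) {
    console.error('[recordCollectedUsage] bulk write failed', error);
  }
}
```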
* refactor(agents): enhance summarization configuration resolution in createRun function
Streamlined the summarization configuration logic by introducing a base configuration and allowing for overrides from agent-specific settings. This change improves clarity and maintainability, ensuring that the summarization configuration is consistently applied while retaining flexibility for customization. Updated the handling of summarization parameters to ensure proper integration with the agent's model and provider settings.
* refactor(agents): remove unused tokenCountMap and streamline calibration ratio handling
Eliminated the unused tokenCountMap variable from the AgentClient class to enhance code clarity. Additionally, streamlined the logic for capturing the calibration ratio by using optional chaining and a fallback value, ensuring that context metadata is consistently defined. This change improves maintainability and reduces potential confusion in the codebase.
* refactor(agents): extract agent log handler for improved clarity and reusability
Refactored the agent log handling logic by extracting it into a dedicated function, `agentLogHandler`, enhancing code clarity and reusability across different modules. Updated the event handlers in both the OpenAI and responses controllers to utilize the new handler, ensuring consistent logging behavior throughout the application.
* test: add summarization event tests for useStepHandler
Implemented a series of tests for the summarization events in the useStepHandler hook. The tests cover scenarios for ON_SUMMARIZE_START, ON_SUMMARIZE_DELTA, and ON_SUMMARIZE_COMPLETE events, ensuring proper handling of summarization logic, including message accumulation and finalization. This addition enhances test coverage and validates the correct behavior of the summarization process within the application.
* refactor(config): update summarizationTriggerSchema to use enum for type validation
Changed the type of the `type` field in the summarizationTriggerSchema from a string to an enum with a single value 'token_count'. This modification enhances type safety and ensures that only valid types are accepted in the configuration, improving overall clarity and maintainability of the schema.
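The change in Zod terms, assuming the surrounding fields:

```ts
import { z } from 'zod';

// Sketch of the schema change; fields other than `type` are assumptions.
const summarizationTriggerSchema = z.object({
  // z.enum instead of z.string(): only 'token_count' is accepted, and any
  // new trigger kind must be added to the enum explicitly.
  type: z.enum(['token_count']),
});
```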
* test(usage): add bulk write tests for message and summarization usage
Implemented tests for the bulk write functionality in the recordCollectedUsage function, covering scenarios for combined message and summarization usage, summarization-only usage, and message-only usage. These tests ensure correct document handling and token rollup calculations, enhancing test coverage and validating the behavior of the usage tracking logic.
* refactor(Chat): enhance clipboard copy functionality and type definitions in Summary component
Updated the Summary component to improve the clipboard copy functionality by handling clipboard permission errors. Refactored type definitions for SummaryProps to use a more specific type, enhancing type safety. Adjusted the SummaryButton and FloatingSummaryBar components to accept isCopied and onCopy props, promoting better separation of concerns and reusability.
* chore(translations): remove unused "Expand Summary" key from English translations
Deleted the "Expand Summary" key from the English translation file to streamline the localization resources and improve clarity in the user interface. This change helps maintain an organized and efficient translation structure.
* refactor: adjust token counting for Claude model to account for API discrepancies
Implemented a correction factor for token counting when using the Claude model, addressing discrepancies between Anthropic's API and local tokenizer results. This change ensures accurate token counts by applying a scaling factor, improving the reliability of token-related functionalities.
* refactor(agents): implement token count adjustment for Claude model messages
Added a method to adjust token counts for messages processed by the Claude model, applying a correction factor to align with API expectations. This enhancement improves the accuracy of token counting, ensuring reliable functionality when interacting with the Claude model.
* refactor(agents): add token counting for media content in messages
Introduced a new method to estimate token costs for image and document blocks in messages, improving the accuracy of token counting. This enhancement ensures that media content is properly accounted for, particularly for the Claude model, by integrating additional token estimation logic for various content types. Updated the token counting function to utilize this new method, enhancing overall reliability and functionality.
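A sketch of the scaling these commits describe; the 1.15 factor is an assumed placeholder, and the real constant (`CLAUDE_TOKEN_CORRECTION`, deduplicated in a later commit below) lives in packages/api:

```ts
// Placeholder value; the shipped constant may differ.
const CLAUDE_TOKEN_CORRECTION = 1.15;

function adjustTokenCount(localCount: number, isClaudeModel: boolean): number {
  // Local tokenizers undercount relative to Anthropic's API, so scale
  // the local estimate up for Claude models.
  return isClaudeModel
    ? Math.ceil(localCount * CLAUDE_TOKEN_CORRECTION)
    : localCount;
}
```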
* chore: fix missing import
* fix(agents): clamp baseContextTokens and document reserve ratio change
Prevent negative baseContextTokens when maxOutputTokens exceeds the
context window (misconfigured models). Document the 10%→5% default
reserve ratio reduction introduced alongside summarization.
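The clamp itself is a one-liner; a sketch with illustrative names:

```ts
function getBaseContextTokens(contextWindow: number, maxOutputTokens: number): number {
  // Clamp at zero so a misconfigured model whose maxOutputTokens exceeds
  // its context window cannot produce a negative context budget.
  return Math.max(0, contextWindow - maxOutputTokens);
}
```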
* fix(agents): include media tokens in hydrated token counts
Add estimateMediaTokensForMessage to createTokenCounter so the hydration
path (used by hydrateMissingIndexTokenCounts) matches the precomputed
path in AgentClient.getTokenCountForMessage. Without this, messages
containing images or documents were systematically undercounted during
hydration, risking context window overflow.
Add 34 unit tests covering all block-type branches of
estimateMediaTokensForMessage.
* fix(agents): include summarization output tokens in usage return value
The returned output_tokens from recordCollectedUsage now reflects all
billed LLM calls (message + summarization). Previously, summarization
completions were billed but excluded from the returned metadata, causing
a discrepancy between what users were charged and what the response
message reported.
* fix(tests): replace process.exit with proper Redis cleanup in e2e test
The summarization E2E test used process.exit(0) to work around a Redis
connection opened at import time, which killed the Jest runner and
bypassed teardown. Use ioredisClient.quit() and keyvRedisClient.disconnect()
for graceful cleanup instead.
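A sketch of the teardown swap; client construction is elided, and the keyv client is typed structurally from what the commit states:

```ts
import Redis from 'ioredis';

async function teardownRedis(
  ioredisClient: Redis,
  keyvRedisClient: { disconnect: () => Promise<void> },
): Promise<void> {
  // quit() waits for pending replies before closing, unlike process.exit(0),
  // which killed the Jest runner and skipped teardown entirely.
  await ioredisClient.quit();
  await keyvRedisClient.disconnect();
}
```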
* fix(tests): update getConvo imports in OpenAI and response tests
Refactor test files to import getConvo from the main models module instead of the Conversation submodule. This change ensures consistency across tests and simplifies the import structure, enhancing maintainability.
* fix(clients): improve summary text validation in BaseClient
Refactor the summary extraction logic to ensure that only non-empty summary texts are considered valid. This change enhances the robustness of the message processing by utilizing a dedicated method for summary text retrieval, improving overall reliability.
* fix(config): replace z.any() with explicit union in summarization schema
Model parameters (temperature, top_p, etc.) are constrained to
primitive types rather than the policy-violating z.any().
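A sketch of the replacement union; the exact schema composition is an assumption:

```ts
import { z } from 'zod';

// Model parameters accept only primitives instead of z.any().
const modelParamValue = z.union([z.string(), z.number(), z.boolean()]);
const modelParamsSchema = z.record(modelParamValue).optional();
```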
* refactor(agents): deduplicate CLAUDE_TOKEN_CORRECTION constant
Export from the TS source in packages/api and import in the JS client,
eliminating the static class property that could drift out of sync.
* refactor(agents): eliminate duplicate selfProvider in buildAgentContext
selfProvider and provider were derived from the same expression with
different type casts. Consolidated to a single provider variable.
* refactor(agents): extract shared SSE handlers and restrict log levels
- buildSummarizationHandlers() factory replaces triplicated handler
blocks across responses.js and openai.js
- agentLogHandlerObj exported from callbacks.js for consistent reuse
- agentLogHandler restricted to an allowlist of safe log levels
(debug, info, warn, error) instead of accepting arbitrary strings
* fix(SSE): batch summarize deltas, add exhaustiveness check, conditional error announcement
- ON_SUMMARIZE_DELTA coalesces rapid-fire renders via requestAnimationFrame
instead of calling setMessages per chunk
- Exhaustive never-check on TStepEvent catches unhandled variants at
compile time when new StepEvents are added
- ON_SUMMARIZE_COMPLETE error announcement only fires when a summary
part was actually present and removed
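A sketch of the first two patterns; the event names come from the commit, while the surrounding wiring is illustrative:

```ts
type TStepEvent =
  | { type: 'ON_SUMMARIZE_START' }
  | { type: 'ON_SUMMARIZE_DELTA'; delta: string }
  | { type: 'ON_SUMMARIZE_COMPLETE'; summary?: string };

let buffered = '';
let rafId: number | null = null;

function onSummarizeDelta(delta: string, render: (text: string) => void): void {
  buffered += delta;
  // Coalesce rapid-fire deltas into one render per animation frame
  // instead of triggering a state update per chunk.
  if (rafId === null) {
    rafId = requestAnimationFrame(() => {
      rafId = null;
      render(buffered);
    });
  }
}

function assertNever(event: never): never {
  throw new Error(`Unhandled step event: ${JSON.stringify(event)}`);
}

function handleStepEvent(event: TStepEvent, render: (text: string) => void): void {
  switch (event.type) {
    case 'ON_SUMMARIZE_START':
      return;
    case 'ON_SUMMARIZE_DELTA':
      return onSummarizeDelta(event.delta, render);
    case 'ON_SUMMARIZE_COMPLETE':
      return;
    default:
      // Compile-time exhaustiveness: a new TStepEvent variant makes this
      // branch fail to type-check until it is handled above.
      return assertNever(event);
  }
}
```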
* feat(agents): persist instruction overhead in contextMeta and seed across runs
Extend contextMeta with instructionOverhead and toolCount so the
provider-observed instruction overhead is persisted on the response message
and seeded into the pruner on subsequent runs. This enables the pruner to
use a calibrated budget from the first call instead of waiting for a
provider observation, preventing the ratio collapse caused by local
tokenizer overestimating tool schema tokens.
The seeded overhead is only used when encoding and tool count match
between runs, ensuring stale values from different configurations
are discarded.
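A sketch of the seeding gate; the `contextMeta` fields match the commit, the function shape is illustrative. (A later commit below removes instructionOverhead and toolCount from cross-run persistence.)

```ts
interface ContextMeta {
  encoding: string;
  toolCount: number;
  instructionOverhead: number;
}

function seedInstructionOverhead(
  persisted: ContextMeta | undefined,
  current: { encoding: string; toolCount: number },
): number | undefined {
  // Only reuse the persisted observation when encoding and tool count
  // match; stale values from a different configuration are discarded.
  if (
    persisted === undefined ||
    persisted.encoding !== current.encoding ||
    persisted.toolCount !== current.toolCount
  ) {
    return undefined;
  }
  return persisted.instructionOverhead;
}
```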
* test(agents): enhance OpenAI test mocks for summarization handlers
Updated the OpenAI test suite to include additional mock implementations for summarization handlers, including buildSummarizationHandlers, markSummarizationUsage, and agentLogHandlerObj. This improves test coverage and ensures consistent behavior during testing.
* fix(agents): address review findings for summarization v2
Cancel rAF on unmount to prevent stale Recoil writes from dead
component context. Clear orphaned summarizing:true parts when
ON_SUMMARIZE_COMPLETE arrives without a summary payload. Add null
guard and safe spread to agentLogHandler. Handle Anthropic-format
base64 image/* documents in estimateMediaTokensForMessage. Use
role="region" for expandable summary content. Add .describe() to
contextMeta Zod fields. Extract duplicate usage loop into helper.
* refactor: simplify contextMeta to calibrationRatio + encoding only
Remove instructionOverhead and toolCount from cross-run persistence —
instruction tokens change too frequently between runs (prompt edits,
tool changes) for a persisted seed to be reliable. The intra-run
calibration in the pruner still self-corrects via provider observations.
contextMeta now stores only the tokenizer-bias ratio and encoding,
which are stable across instruction changes.
* test(SSE): enhance useStepHandler tests for ON_SUMMARIZE_COMPLETE behavior
Updated the test for ON_SUMMARIZE_COMPLETE to clarify that it finalizes the existing part with summarizing set to false when the summary is undefined. Added assertions to verify the correct behavior of message updates and the state of summary parts.
* refactor(BaseClient): remove handleContextStrategy and truncateToolCallOutputs functions
Eliminated the handleContextStrategy method from BaseClient to streamline message handling. Also removed the truncateToolCallOutputs function from the prompts module, simplifying the codebase and improving maintainability.
* refactor: add AGENT_DEBUG_LOGGING option and refactor token count handling in BaseClient
Introduced AGENT_DEBUG_LOGGING to .env.example for enhanced debugging capabilities. Refactored token count handling in BaseClient by removing the handleTokenCountMap method and simplifying token count updates. Updated AgentClient to log detailed token count recalculations and adjustments, improving traceability during message processing.
* chore: update dependencies in package-lock.json and package.json files
Bumped versions of several dependencies, including @librechat/agents to ^3.1.62 and various AWS SDK packages to their latest versions. This ensures compatibility and incorporates the latest features and fixes.
* chore: imports order
* refactor: extract summarization config resolution from buildAgentContext
* refactor: rename and simplify summarization configuration shaping function
* refactor: replace AgentClient token counting methods with single-pass pure utility
Extract getTokenCount() and getTokenCountForMessage() from AgentClient
into countFormattedMessageTokens(), a pure function in packages/api that
handles text, tool_call, image, and document content types in one loop.
- Decompose estimateMediaTokensForMessage into block-level helpers
(estimateImageDataTokens, estimateImageBlockTokens, estimateDocumentBlockTokens)
shared by both estimateMediaTokensForMessage and the new single-pass function
- Remove redundant per-call getEncoding() resolution (closure captures once)
- Remove deprecated gpt-3.5-turbo-0301 model branching
- Drop this.getTokenCount guard from BaseClient.sendMessage
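A sketch of the single-pass shape, with simplified block types standing in for the real formatted-message content:

```ts
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'tool_call'; name: string; args: string }
  | { type: 'image'; estimatedTokens: number }
  | { type: 'document'; estimatedTokens: number };

function countFormattedMessageTokens(
  blocks: ContentBlock[],
  countText: (text: string) => number, // encoding resolved once in the caller's closure
): number {
  let total = 0;
  for (const block of blocks) {
    switch (block.type) {
      case 'text':
        total += countText(block.text);
        break;
      case 'tool_call':
        total += countText(block.name) + countText(block.args);
        break;
      case 'image':
      case 'document':
        total += block.estimatedTokens;
        break;
    }
  }
  return total;
}
```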
* refactor: streamline token counting in createTokenCounter function
Simplified the createTokenCounter function by removing the media token estimation and directly calculating the token count. This change enhances clarity and performance by consolidating the token counting logic into a single pass, while maintaining compatibility with Claude's token correction.
* refactor: simplify summarization configuration types
Removed the AppSummarizationConfig type and directly used SummarizationConfig in the AppConfig interface. This change streamlines the type definitions and enhances consistency across the codebase.
* chore: import order
* fix: summarization event handling in useStepHandler
- Cancel pending summarizeDeltaRaf in clearStepMaps to prevent stale
frames firing after map reset or component unmount
- Move announcePolite('summarize_completed') inside the didFinalize
guard so screen readers only announce when finalization actually occurs
- Remove dead cleanup closure returned from stepHandler useCallback body
that was never invoked by any caller
* fix: estimate tokens for non-PDF/non-image base64 document blocks
Previously estimateDocumentBlockTokens returned 0 for unrecognized MIME
types (e.g. text/plain, application/json), silently underestimating
context budget. Fall back to character-based heuristic or countTokens.
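A sketch of that fallback; the roughly-4-characters-per-token heuristic is an assumption, not necessarily the shipped ratio:

```ts
function estimateUnknownDocumentTokens(
  base64Data: string,
  countTokens?: (text: string) => number,
): number {
  const text = Buffer.from(base64Data, 'base64').toString('utf8');
  // Prefer the real tokenizer when available; otherwise approximate
  // from character length instead of returning 0.
  return countTokens ? countTokens(text) : Math.ceil(text.length / 4);
}
```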
* refactor: return cloned usage from markSummarizationUsage
Avoid mutating LangChain's internal usage_metadata object by returning
a shallow clone with the usage_type tag. Update all call sites in
callbacks, openai, and responses controllers to use the returned value.
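The clone is a one-liner; a sketch with an illustrative usage shape:

```ts
interface UsageMetadata {
  input_tokens: number;
  output_tokens: number;
  [key: string]: unknown;
}

// Return a tagged shallow clone instead of mutating LangChain's
// internal usage_metadata object in place.
function markSummarizationUsage(usage: UsageMetadata): UsageMetadata {
  return { ...usage, usage_type: 'summarization' };
}
```

Call sites must use the returned value; the original object is left untouched.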
* refactor: consolidate debug logging loops in buildMessages
Merge the two sequential O(n) debug-logging passes over orderedMessages
into a single pass inside the map callback where all data is available.
* refactor: narrow SummaryContentPart.content type
Replace broad Agents.MessageContentComplex[] with the specific
Array<{ type: ContentTypes.TEXT; text: string }> that all producers
and consumers already use, improving compile-time safety.
* refactor: use single output array in recordCollectedUsage
Have processUsageGroup append to a shared array instead of returning
separate arrays that are spread into a third, reducing allocations.
* refactor: use for...in in hydrateMissingIndexTokenCounts
Replace Object.entries with for...in to avoid allocating an
intermediate tuple array during token map hydration.
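A sketch of the hydration loop after the change; the map shape is illustrative:

```ts
function hydrateMissingCounts(
  tokenMap: Record<string, number | undefined>,
  countFor: (messageId: string) => number,
): void {
  // for...in iterates keys directly, avoiding the intermediate
  // [key, value] tuple array that Object.entries() allocates.
  for (const messageId in tokenMap) {
    if (tokenMap[messageId] == null) {
      tokenMap[messageId] = countFor(messageId);
    }
  }
}
```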
0412f05daf
🪢 chore: Consolidate Pricing and Tx Imports After tx.js Module Removal (#12086)
* 🧹 chore: resolve imports due to rebase
* chore: Update model mocks in unit tests for consistency
- Consolidated model mock implementations across various test files to streamline setup and reduce redundancy.
- Removed duplicate mock definitions for `getMultiplier` and `getCacheMultiplier`, ensuring a unified approach in `recordCollectedUsage.spec.js`, `openai.spec.js`, `responses.unit.spec.js`, and `abortMiddleware.spec.js`.
- Enhanced clarity and maintainability of test files by aligning mock structures with the latest model updates.
* fix: Safeguard token credit checks in transaction tests
- Updated assertions in `transaction.spec.ts` to handle potential null values for `updatedBalance` by using optional chaining.
- Enhanced robustness of tests related to token credit calculations, ensuring they correctly account for scenarios where the balance may not be found.
* chore: transaction methods with bulk insert functionality
- Introduced `bulkInsertTransactions` method in `transaction.ts` to facilitate batch insertion of transaction documents.
- Updated test file `transactions.bulk-parity.spec.ts` to utilize new pricing function assignments and handle potential null values in calculations, improving test robustness.
- Refactored pricing function initialization for clarity and consistency.
* refactor: Enhance type definitions and introduce new utility functions for model matching
- Added `findMatchingPattern` and `matchModelName` utility functions to improve model name matching logic in transaction methods.
- Updated type definitions for `findMatchingPattern` to accept a more specific tokensMap structure, enhancing type safety.
- Refactored `dbMethods` initialization in `transactions.bulk-parity.spec.ts` to include the new utility functions, improving test clarity and functionality.
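A sketch of what such helpers could look like; the longest-match strategy here is a guess, labeled as such, not the confirmed implementation:

```ts
type TokensMap = Record<string, { prompt: number; completion: number }>;

// Hypothetical matching strategy: prefer the most specific key that the
// model name contains.
function findMatchingPattern(model: string, tokensMap: TokensMap): string | undefined {
  const keys = Object.keys(tokensMap).sort((a, b) => b.length - a.length);
  return keys.find((key) => model.includes(key));
}

function matchModelName(model: string, tokensMap: TokensMap): string {
  return findMatchingPattern(model, tokensMap) ?? model;
}
```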
* refactor: Update database method imports and enhance transaction handling
- Refactored `abortMiddleware.js` to utilize centralized database methods for message handling and conversation retrieval, improving code consistency.
- Enhanced `bulkInsertTransactions` in `transaction.ts` to handle empty document arrays gracefully and added error logging for better debugging.
- Updated type definitions in `transactions.ts` to enforce stricter typing for token types, enhancing type safety across transaction methods.
- Improved test setup in `transactions.bulk-parity.spec.ts` by refining pricing function assignments and ensuring robust handling of potential null values.
* refactor: Update database method references and improve transaction multiplier handling
- Refactored `client.js` to update database method references for `bulkInsertTransactions` and `updateBalance`, ensuring consistency in method usage.
- Enhanced transaction multiplier calculations in `transaction.spec.ts` to provide fallback values for write and read multipliers, improving robustness in cost calculations across structured token spending tests.
8ba2bde5c1
📦 refactor: Consolidate DB models, encapsulating Mongoose usage in data-schemas (#11830)
* chore: move database model methods to /packages/data-schemas
* chore: add TypeScript ESLint rule to warn on unused variables
* refactor: model imports to streamline access
- Consolidated model imports across various files to improve code organization and reduce redundancy.
- Updated imports for models such as Assistant, Message, Conversation, and others to a unified import path.
- Adjusted middleware and service files to reflect the new import structure, ensuring functionality remains intact.
- Enhanced test files to align with the new import paths, maintaining test coverage and integrity.
* chore: migrate database models to packages/data-schemas and refactor all direct Mongoose Model usage outside of data-schemas
* test: update agent model mocks in unit tests
- Added `getAgent` mock to `client.test.js` to enhance test coverage for agent-related functionality.
- Removed redundant `getAgent` and `getAgents` mocks from `openai.spec.js` and `responses.unit.spec.js` to streamline test setup and reduce duplication.
- Ensured consistency in agent mock implementations across test files.
* fix: update types in data-schemas
* refactor: enhance type definitions in transaction and spending methods
- Updated type definitions in `checkBalance.ts` to use specific request and response types.
- Refined `spendTokens.ts` to utilize a new `SpendTxData` interface for better clarity and type safety.
- Improved transaction handling in `transaction.ts` by introducing `TransactionResult` and `TxData` interfaces, ensuring consistent data structures across methods.
- Adjusted unit tests in `transaction.spec.ts` to accommodate new type definitions and enhance robustness.
* refactor: streamline model imports and enhance code organization
- Consolidated model imports across various controllers and services to a unified import path, improving code clarity and reducing redundancy.
- Updated multiple files to reflect the new import structure, ensuring all functionalities remain intact.
- Enhanced overall code organization by removing duplicate import statements and optimizing the usage of model methods.
* feat: implement loadAddedAgent and refactor agent loading logic
- Introduced `loadAddedAgent` function to handle loading agents from added conversations, supporting multi-convo parallel execution.
- Created a new `load.ts` file to encapsulate agent loading functionalities, including `loadEphemeralAgent` and `loadAgent`.
- Updated the `index.ts` file to export the new `load` module instead of the deprecated `loadAgent`.
- Enhanced type definitions and improved error handling in the agent loading process.
- Adjusted unit tests to reflect changes in the agent loading structure and ensure comprehensive coverage.
* refactor: enhance balance handling with new update interface
- Introduced `IBalanceUpdate` interface to streamline balance update operations across the codebase.
- Updated `upsertBalanceFields` method signatures in `balance.ts`, `transaction.ts`, and related tests to utilize the new interface for improved type safety.
- Adjusted type imports in `balance.spec.ts` to include `IBalanceUpdate`, ensuring consistency in balance management functionalities.
- Enhanced overall code clarity and maintainability by refining type definitions related to balance operations.
* feat: add unit tests for loadAgent functionality and enhance agent loading logic
- Introduced comprehensive unit tests for the `loadAgent` function, covering various scenarios including null and empty agent IDs, loading of ephemeral agents, and permission checks.
- Enhanced the `initializeClient` function by moving `getConvoFiles` to the correct position in the database method exports, ensuring proper functionality.
- Improved test coverage for agent loading, including handling of non-existent agents and user permissions.
* chore: reorder memory method exports for consistency
- Moved `deleteAllUserMemories` to the correct position in the exported memory methods, ensuring a consistent and logical order of method exports in `memory.ts`.
381ed8539b
🪪 fix: Enforce Conversation Ownership Checks in Remote Agent Controllers (#12263)
* 🔒 fix: Validate conversation ownership in remote agent API endpoints
Add user-scoped ownership checks for client-supplied conversation IDs in OpenAI-compatible and Open Responses controllers to prevent cross-tenant file/message loading via IDOR.
* 🔒 fix: Harden ownership checks against type confusion and unhandled errors
- Add typeof string validation before getConvo to block NoSQL operator injection (e.g. { "$gt": "" }) bypassing the ownership check
- Move ownership checks inside try/catch so DB errors produce structured JSON error responses instead of unhandled promise rejections
- Add string type validation for conversation_id and previous_response_id in the upstream TS request validators (defense-in-depth)
* 🧪 test: Add coverage for conversation ownership validation in remote agent APIs
- Fix broken getConvo mock in openai.spec.js (was missing entirely)
- Add tests for: owned conversation, unowned (404), non-string type (400), absent conversation_id (skipped), and DB error (500) — both controllers
0c27ad2d55
🛡️ refactor: Scope Action Mutations by Parent Resource Ownership (#12237)
* 🛡️ fix: Scope action mutations by parent resource ownership
Prevent cross-tenant action overwrites by validating that an existing action's agent_id/assistant_id matches the URL parameter before allowing updates or deletes. Without this, a user with EDIT access on their own agent could reference a foreign action_id to hijack another agent's action record.
* 🛡️ fix: Harden action ownership checks and scope write filters
- Remove && short-circuit that bypassed the guard when agent_id or assistant_id was falsy (e.g. assistant-owned actions have no agent_id, so the check was skipped entirely on the agents route).
- Include agent_id / assistant_id in the updateAction and deleteAction query filters so the DB write itself enforces ownership atomically.
- Log a warning when deleteAction returns null (silent no-op from data-integrity mismatch).
* 📝 docs: Update Action model JSDoc to reflect scoped query params
* ✅ test: Add Action ownership scoping tests
Cover update, delete, and cross-type protection scenarios using MongoMemoryServer to verify that scoped query filters (agent_id, assistant_id) prevent cross-tenant overwrites and deletions at the database level.
* 🛡️ fix: Scope updateAction filter in agent duplication handler
* 🐛 fix: Use action metadata domain instead of action_id when duplicating agent actions
The duplicate handler was splitting `action.action_id` by `actionDelimiter` to extract the domain, but `action_id` is a bare nanoid that doesn't contain the delimiter. This produced malformed entries in the duplicated agent's actions array (nanoid_action_newNanoid instead of domain_action_newNanoid). The domain is available on `action.metadata.domain`.
* ✅ test: Add integration tests for agent duplication action handling
Uses MongoMemoryServer with real Agent and Action models to verify:
- Duplicated actions use metadata.domain (not action_id) for the agent actions array entries
- Sensitive metadata fields are stripped from duplicated actions
- Original action documents are not modified
e1e204d6cf
🧮 refactor: Bulk Transactions & Balance Updates for Token Spending (#11996)
* refactor: transaction handling by integrating pricing and bulk write operations
- Updated `recordCollectedUsage` to accept pricing functions and bulk write operations, improving transaction management.
- Refactored `AgentClient` and related controllers to utilize the new transaction handling capabilities, ensuring better performance and accuracy in token spending.
- Added tests to validate the new functionality, ensuring correct behavior for both standard and bulk transaction paths.
- Introduced a new `transactions.ts` file to encapsulate transaction-related logic and types, enhancing code organization and maintainability.
* chore: reorganize imports in agents client controller
- Moved `getMultiplier` and `getCacheMultiplier` imports to maintain consistency and clarity in the import structure.
- Removed duplicate import of `updateBalance` and `bulkInsertTransactions`, streamlining the code for better readability.
* refactor: add TransactionData type and CANCEL_RATE constant to data-schemas
Establishes a single source of truth for the transaction document shape
and the incomplete-context billing rate constant, both consumed by
packages/api and api/.
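A sketch of the shared definitions; the field names and the 1.15 value are assumptions about the actual data-schemas exports:

```ts
// Placeholder value for the incomplete-context billing rate.
export const CANCEL_RATE = 1.15;

export interface TransactionData {
  user: string;
  conversationId?: string;
  model?: string;
  tokenType: 'prompt' | 'completion';
  rawAmount: number;
  tokenValue?: number;
}
```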
* refactor: use proper types in data-schemas transaction methods
- Replace `as unknown as { tokenCredits }` with `lean<IBalance>()`
- Use `TransactionData[]` instead of `Record<string, unknown>[]`
for bulkInsertTransactions parameter
- Add JSDoc noting insertMany bypasses document middleware
- Remove orphan section comment in methods/index.ts
* refactor: use shared types in transactions.ts, fix bulk write logic
- Import CANCEL_RATE from data-schemas instead of local duplicate
- Import TransactionData from data-schemas for PreparedEntry/BulkWriteDeps
- Use tilde alias for EndpointTokenConfig import
- Pass valueKey through to getMultiplier
- Only sum tokenValue for balance-enabled docs in bulkWriteTransactions
- Consolidate two loops into single-pass map
* refactor: remove duplicate updateBalance from Transaction.js
Import updateBalance from ~/models (sourced from data-schemas) instead
of maintaining a second copy. Also import CANCEL_RATE from data-schemas
and remove the Balance model import (no longer needed directly).
* fix: test real spendCollectedUsage instead of IIFE replica
Export spendCollectedUsage from abortMiddleware.js and rewrite the test
file to import and test the actual function. Previously the tests ran
against a hand-written replica that could silently diverge from the real
implementation.
* test: add transactions.spec.ts and restore regression comments
Add 22 direct unit tests for transactions.ts financial logic covering
prepareTokenSpend, prepareStructuredTokenSpend, bulkWriteTransactions,
CANCEL_RATE paths, NaN guards, disabled transactions, zero tokens,
cache multipliers, and balance-enabled filtering.
Restore critical regression documentation comments in
recordCollectedUsage.spec.js explaining which production bugs the
tests guard against.
* fix: widen setValues type to include lastRefill
The UpdateBalanceParams.setValues type was Partial<Pick<IBalance,
'tokenCredits'>> which excluded lastRefill — used by
createAutoRefillTransaction. Widen to also pick 'lastRefill'.
* test: use real MongoDB for bulkWriteTransactions tests
Replace mock-based bulkWriteTransactions tests with real DB tests using
MongoMemoryServer. Pure function tests (prepareTokenSpend,
prepareStructuredTokenSpend) remain mock-based since they don't touch
DB. Add end-to-end integration tests that verify the full prepare →
bulk write → DB state pipeline with real Transaction and Balance models.
* chore: update @librechat/agents dependency to version 3.1.54 in package-lock.json and related package.json files
* test: add bulk path parity tests proving identical DB outcomes
Three test suites proving the bulk path (prepareTokenSpend/
prepareStructuredTokenSpend + bulkWriteTransactions) produces
numerically identical results to the legacy path for all scenarios:
- usage.bulk-parity.spec.ts: mirrors all legacy recordCollectedUsage
tests; asserts same return values and verifies metadata fields on
the insertMany docs match what spendTokens args would carry
- transactions.bulk-parity.spec.ts: real-DB tests using actual
getMultiplier/getCacheMultiplier pricing functions; asserts exact
tokenValue, rate, rawAmount and balance deductions for standard
tokens, structured/cache tokens, CANCEL_RATE, premium pricing,
multi-entry batches, and edge cases (NaN, zero, disabled)
- Transaction.spec.js: adds describe('Bulk path parity') that mirrors
7 key legacy tests via recordCollectedUsage + bulk deps against
real MongoDB, asserting same balance deductions and doc counts
* refactor: update llmConfig structure to use modelKwargs for reasoning effort
Refactor the llmConfig in getOpenAILLMConfig to store reasoning effort within modelKwargs instead of directly on llmConfig. This change ensures consistency in the configuration structure and improves clarity in the handling of reasoning properties in the tests.
* test: update performance checks in processAssistantMessage tests
Revise the performance assertions in the processAssistantMessage tests to ensure that each message processing time remains under 100ms, addressing potential ReDoS vulnerabilities. This change enhances the reliability of the tests by focusing on maximum processing time rather than relative ratios.
* test: fill parity test gaps — model fallback, abort context, structured edge cases
- usage.bulk-parity: add undefined model fallback test
- transactions.bulk-parity: add abort context test (txns inserted,
balance unchanged when balance not passed), fix readTokens type cast
- Transaction.spec: add 3 missing mirrors — balance disabled with
transactions enabled, structured transactions disabled, structured
balance disabled
* fix: deduct balance before inserting transactions to prevent orphaned docs
Swap the order in bulkWriteTransactions: updateBalance runs before
insertMany. If updateBalance fails (after exhausting retries), no
transaction documents are written — avoiding the inconsistent state
where transactions exist in MongoDB with no corresponding balance
deduction.
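A sketch of the reordering; the function names are illustrative:

```ts
async function bulkWriteTransactions(
  docs: Array<Record<string, unknown>>,
  balanceDelta: number,
  updateBalance: (delta: number) => Promise<void>,
  insertMany: (batch: Array<Record<string, unknown>>) => Promise<void>,
): Promise<void> {
  // Deduct the balance first: if updateBalance fails after exhausting its
  // retries, no transaction documents are written, so MongoDB never holds
  // transactions without a matching balance deduction.
  await updateBalance(balanceDelta);
  await insertMany(docs);
}
```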
* chore: import order
* test: update config.spec.ts for OpenRouter reasoning in modelKwargs
Same fix as llm.spec.ts — OpenRouter reasoning is now passed via
modelKwargs instead of llmConfig.reasoning directly.
5ea59ecb2b
🐛 fix: Normalize output_text blocks in Responses API input conversion (#11835)
* 🐛 fix: Normalize `output_text` blocks in Responses API input conversion
Treat `output_text` content blocks the same as `input_text` when
converting Responses API input to internal message format. Previously,
assistant messages containing `output_text` blocks fell through to the
default handler, producing `{ type: 'output_text' }` without a `text`
field, which caused downstream provider adapters (e.g. Bedrock) to fail
with "Unsupported content block type: output_text".
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
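A sketch of the normalization the fix above describes; the block shapes follow the Responses API, and the helper name is illustrative:

```ts
type ResponsesContentBlock =
  | { type: 'input_text'; text: string }
  | { type: 'output_text'; text: string }
  | { type: string; [key: string]: unknown };

function toInternalTextBlock(block: ResponsesContentBlock) {
  if (block.type === 'input_text' || block.type === 'output_text') {
    // output_text is treated exactly like input_text, so downstream
    // provider adapters always receive a block with a `text` field.
    return { type: 'text', text: (block as { text: string }).text };
  }
  return block;
}
```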
* refactor: Remove ChatModelStreamHandler from OpenAI and Responses controllers
Eliminated the ChatModelStreamHandler from both OpenAIChatCompletionController and createResponse functions to streamline event handling. This change simplifies the code by relying on existing handlers for message deltas and reasoning deltas, enhancing maintainability and reducing complexity in the agent's event processing logic.
* feat: Enhance input conversion in Responses API
Updated the `convertInputToMessages` function to handle additional content types, including `input_file` and `refusal` blocks, ensuring they are converted to appropriate message formats. Implemented null filtering for content arrays and default values for missing fields, improving robustness. Added comprehensive unit tests to validate these changes and ensure correct behavior across various input scenarios.
* fix: Forward upstream provider status codes in error responses
Updated error handling in OpenAIChatCompletionController and createResponse functions to forward upstream provider status codes (e.g., Anthropic 400s) instead of masking them as 500. This change improves error reporting by providing more accurate status codes and error types, enhancing the clarity of error responses for clients.
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
9a38af5875
📉 feat: Add Token Usage Tracking for Agents API Routes (#11600)
* feat: Implement token usage tracking for OpenAI and Responses controllers
- Added functionality to record token usage against user balances in OpenAIChatCompletionController and createResponse functions.
- Introduced new utility functions for managing token spending and structured token usage.
- Enhanced error handling for token recording to improve logging and debugging capabilities.
- Updated imports to include new usage tracking methods and configurations.
* test: Add unit tests for recordCollectedUsage function in usage.spec.ts
- Introduced comprehensive tests for the recordCollectedUsage function, covering various scenarios including handling empty and null collectedUsage, single and multiple usage entries, and sequential and parallel execution cases.
- Enhanced token handling tests to ensure correct calculations for both OpenAI and Anthropic formats, including cache token management.
- Improved overall test coverage for usage tracking functionality, ensuring robust validation of expected behaviors and outcomes.
* test: Add unit tests for OpenAI and Responses API controllers
- Introduced comprehensive unit tests for the OpenAIChatCompletionController and createResponse functions, focusing on the correct invocation of recordCollectedUsage for token spending.
- Enhanced tests to validate the passing of balance and transactions configuration to the recordCollectedUsage function.
- Ensured proper dependency injection of spendTokens and spendStructuredTokens in the usage recording process.
- Improved overall test coverage for token usage tracking, ensuring robust validation of expected behaviors and outcomes.
7c9c7e530b
⏲️ feat: Defer Loading MCP Tools (#11270)
* WIP: code ptc
* refactor: tool classification and calling logic
* 🔧 fix: Update @librechat/agents dependency to version 3.0.68
* chore: import order and correct renamed tool name for tool search
* refactor: streamline tool classification logic for local and programmatic tools
* feat: add per-tool configuration options for agents, including deferred loading and allowed callers
- Introduced `tool_options` in agent forms to manage tool behavior.
- Updated tool classification logic to prioritize agent-level configurations.
- Enhanced UI components to support tool deferral functionality.
- Added localization strings for new tool options and actions.
* feat: enhance agent schema with per-tool options for configuration
- Added `tool_options` schema to support per-tool configurations, including `defer_loading` and `allowed_callers`.
- Updated agent data model to incorporate new tool options, ensuring flexibility in tool behavior management.
- Modified type definitions to reflect the new `tool_options` structure for agents.
* feat: add tool_options parameter to loadTools and initializeAgent for enhanced agent configuration
* chore: update @librechat/agents dependency to version 3.0.71 and enhance agent tool loading logic
- Updated the @librechat/agents package to version 3.0.71 across multiple files.
- Added support for handling deferred loading of tools in agent initialization and execution processes.
- Improved the extraction of discovered tools from message history to optimize tool loading behavior.
* chore: update @librechat/agents dependency to version 3.0.72
* chore: update @librechat/agents dependency to version 3.0.75
* refactor: simplify tool defer loading logic in MCPTool component
- Removed local state management for deferred tools, relying on form state instead.
- Updated related functions to directly use form values for checking and toggling defer loading.
- Cleaned up code by eliminating unnecessary optimistic updates and local state dependencies.
* chore: remove deprecated localization strings for tool deferral in translation.json
- Eliminated unused strings related to deferred loading descriptions in the English translation file.
- Streamlined localization to reflect recent changes in tool loading logic.
* refactor: improve tool defer loading handling in MCPTool component
- Enhanced the logic for managing deferred loading of tools by simplifying the update process for tool options.
- Ensured that the state reflects the correct loading behavior based on the new deferred loading conditions.
- Cleaned up the code to remove unnecessary complexity in handling tool options.
* refactor: update agent mocks in callbacks test to use actual implementations
- Modified the agent mocks in the callbacks test to include actual implementations from the @librechat/agents module.
- This change enhances the accuracy of the tests by ensuring they reflect the real behavior of the agent functions.
11210d8b98
🏁 fix: Message Race Condition if Cancelled Early (#11462)
* 🔧 fix: Prevent race conditions in message saving during abort scenarios
* Added logic to save partial responses before returning from the abort endpoint to ensure parentMessageId exists in the database.
* Updated the ResumableAgentController to save response messages before emitting final events, preventing orphaned parentMessageIds.
* Enhanced handling of unfinished responses to improve stability and data integrity in agent interactions.
* 🔧 fix: logging and job replacement handling in ResumableAgentController
* Added detailed logging for job creation and final event emissions to improve traceability.
* Implemented logic to check for job replacement before emitting events, preventing stale requests from affecting newer jobs.
* Updated abort handling to log additional context about the abort result, enhancing debugging capabilities.
* refactor: abort handling and token spending logic in AgentStream
* Added authorization check for abort attempts to prevent unauthorized access.
* Improved response message saving logic to ensure valid message IDs are stored.
* Implemented token spending for aborted requests to prevent double-spending across parallel agents.
* Enhanced logging for better traceability of token spending operations during abort scenarios.
* refactor: remove TODO comments for token spending in abort handling
* Removed outdated TODO comments regarding token spending for aborted requests in the abort endpoint.
* This change streamlines the code and clarifies the current implementation status.
* ✅ test: Add comprehensive tests for job replacement and abort handling
* Introduced unit tests for job replacement detection in ResumableAgentController, covering job creation timestamp tracking, stale job detection, and response message saving order.
* Added tests for the agent abort endpoint, ensuring proper authorization checks, early abort handling, and partial response saving.
* Enhanced logging and error handling in tests to improve traceability and robustness of the abort functionality.
304bba853c
💻 feat: Deeper MCP UI integration in the Chat UI (#9669)
* 💻 feat: deeper MCP UI integration in the chat UI using plugins
---------
Co-authored-by: Samuel Path <samuel.path@shopify.com>
Co-authored-by: Pierre-Luc Godin <pierreluc.godin@shopify.com>
* 💻 refactor: Migrate MCP UI resources from index-based to ID-based referencing
- Replace index-based resource markers with stable resource IDs
- Update plugin to parse \ui{resourceId} format instead of \ui0
- Refactor components to use useMessagesOperations instead of useSubmitMessage
- Add ShareMessagesProvider for UI resources in share view
- Add useConversationUIResources hook for cross-turn resource lookups
- Update parsers to generate resource IDs from content hashes
- Update all tests to use resource IDs instead of indices
- Add sandbox permissions for iframe popups
- Remove deprecated MCP tool context instructions
---------
Co-authored-by: Pierre-Luc Godin <pierreluc.godin@shopify.com>
81139046e5
🔄 refactor: Convert OCR Tool Resource to Context (#9699)
* WIP: conversion of `ocr` to `context`
* refactor: make `primeResources` backwards-compatible for `ocr` tool_resources
* refactor: Convert legacy `ocr` tool resource to `context` in agent updates
- Implemented conversion logic to replace `ocr` with `context` in both incoming updates and existing agent data.
- Merged file IDs and files from `ocr` into `context` while ensuring deduplication.
- Updated tools array to reflect the change from `ocr` to `context`.
* refactor: Enhance context file handling in agent processing
- Updated the logic for managing context files by consolidating file IDs from both `ocr` and `context` resources.
- Improved backwards compatibility by ensuring that context files are correctly populated and handled.
- Simplified the iteration over context files for better readability and maintainability.
* refactor: Enhance tool_resources handling in primeResources
- Added tests to verify the deletion behavior of tool_resources fields, ensuring original objects remain unchanged.
- Implemented logic to delete `ocr` and `context` fields after fetching and re-categorizing files.
- Preserved context field when the context capability is disabled, ensuring correct behavior in various scenarios.
* refactor: Replace `ocrEnabled` with `contextEnabled` in AgentConfig
* refactor: Adjust legacy tool handling order for improved clarity
* refactor: Implement OCR to context conversion functions and remove original conversion logic in update agent handling
* refactor: Move contextEnabled declaration to maintain consistent order in capabilities
* refactor: Update localization keys for file context to improve clarity and accuracy
* chore: Update localization key for file context information to improve clarity
180046a3c5
✂️ refactor: Artifacts and Tool Callbacks to Pass UI Resources (#9581)
* ✂️ refactor: use artifacts and callbacks to pass UI resources
* chore: imports
* refactor: Update UIResource type imports and definitions across components and tests
* refactor: Update ToolCallInfo test data structure and enhance TAttachment type definition
---------
Co-authored-by: Samuel Path <samuel.path@shopify.com>
3e1591d404
🤖 fix: Remove versions and __v when Duplicating an Agent (#8115)
Revert "Add tests for agent duplication controller" This reverts commit 3e7beb1cc336bcfe1c57411e9c151f5e6aa927e4. |