mirror of
https://github.com/danny-avila/LibreChat.git
synced 2026-05-13 16:07:30 +00:00
* 🌱 fix: Seed Code Tool Files Into Graph Sessions on First Call
Files attached to an agent's `tool_resources.execute_code` (user uploads
or generated artifacts from a prior turn) were silently dropped on the
first `execute_code` invocation of a turn. The agents-side `ToolNode`
populates `_injected_files` only when its `sessions` map already has an
`EXECUTE_CODE` entry — but that entry is only written by a previous
successful execution, so call #1 had nothing to inject. CodeExecutor
then fell back to a `/files/{session_id}` fetch, but `session_id` was
also empty on call #1, leaving the sandbox without the primed files.
Mirror the existing skill-priming pattern (`primeInvokedSkills` →
`initialSessions`) for code-resource files: eagerly call `primeFiles`
before `createRun` and merge the result into `initialSessions` via a
new `seedCodeFilesIntoSessions` helper. Skill files and code-resource
files now share the same `EXECUTE_CODE` entry; the prior representative
`session_id` is preserved on merge.
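The merge contract described above can be sketched as follows. This is a minimal sketch, not the actual LibreChat implementation: the `EXECUTE_CODE` value, the helper's real signature, and the session-entry shape are assumptions.

```javascript
// Hypothetical sketch of seedCodeFilesIntoSessions: primed code-resource
// files are folded into the same EXECUTE_CODE entry that skill priming
// writes, and a pre-existing representative session_id is never clobbered.
const EXECUTE_CODE = 'execute_code'; // assumed constant value

function seedCodeFilesIntoSessions(initialSessions, primedFiles) {
  if (!primedFiles || primedFiles.length === 0) {
    return initialSessions;
  }
  const existing = initialSessions[EXECUTE_CODE] ?? { session_id: undefined, files: [] };
  return {
    ...initialSessions,
    [EXECUTE_CODE]: {
      // Preserve the prior representative session_id when one exists.
      session_id: existing.session_id ?? primedFiles[0].session_id,
      files: [...existing.files, ...primedFiles],
    },
  };
}
```

With this shape, a skill-primed entry and code-resource files share one `EXECUTE_CODE` slot, so the first `execute_code` call already has files to inject.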
* 🔬 chore: Add Diagnostic Logging for Code-Files Seeding
Temporary debug logs to diagnose why first-call file injection is not
firing in real agent runs. Logs `wantsCodeExec`, available tool-resource
keys, primed file count, and the seeded EXECUTE_CODE entry. Will revert
once the failure mode is identified.
* 🪛 refactor: Capture primedCodeFiles per-agent at init, merge across run
Replace the client.js eager `primeFiles` call with a per-agent capture at
initialization time so every agent in a multi-agent run (primary +
handoff + addedConvo) contributes its `tool_resources.execute_code`
files to the shared `Graph.sessions` seed.
- handleTools.js (eager loadTools): the `execute_code` factory closes
over a `primedCodeFiles` slot and surfaces it in the return.
- ToolService.js loadToolDefinitionsWrapper (event-driven): captures
`files` from the existing `primeCodeFiles` call (was dropping them
while only keeping `toolContext`) and surfaces them.
- packages/api initialize.ts: the loadTools callback contract now
includes `primedCodeFiles`, threaded onto `InitializedAgent`.
- client.js: iterate `[primary, ...agentConfigs.values()]` and merge
each agent's `primedCodeFiles` into `initialSessions`. Drop the
primary-only `primeCodeFiles` call and diagnostic logs from the prior
attempt — wrong layer (single-agent), wrong gate (`agent.tools`
contained Tool instances after init, so the `.includes("execute_code")`
string check always failed).
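The failed gate can be shown in miniature: after initialization the tools array holds Tool instances, not name strings, so an `includes()` identity check never matches. The Tool shape below is a stand-in for illustration.

```javascript
// Why `.includes("execute_code")` always failed post-init: Array.includes
// compares by identity, and the array contains objects, not strings.
const tools = [{ name: 'execute_code' }, { name: 'web_search' }];

const brokenGate = tools.includes('execute_code'); // identity check against instances: never true
const workingGate = tools.some((t) => t.name === 'execute_code'); // name-based check: matches
```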
* 🔬 chore: Add per-agent diagnostic logs for code-files seeding
Logs `tool_resources` keys + file counts inside loadToolDefinitionsWrapper
and per-agent `primedCodeFiles` + final initialSessions inside
AgentClient. Will revert once the failure mode is confirmed.
* 🔬 chore: Add file-lookup diagnostics inside initializeAgent
Logs the inputs and intermediate counts of the conversation-file lookup
chain (convo file ids, thread message ids, code-generated and
user-code file counts) so we can pinpoint why `tool_resources.execute_code`
is arriving empty at `loadToolDefinitionsWrapper` despite the agent
having `execute_code` in its tools list.
* 🔬 chore: Probe execute_code files without messageId filter
Adds a relaxed `getFiles({conversationId, context: execute_code})` probe
that runs only when `getCodeGeneratedFiles` returns empty. Lists what's
actually in the DB for this conversation so we can confirm whether the
file is missing entirely or whether the messageId filter is rejecting it.
* 🔬 chore: Fix probe getFiles arg order (sort vs projection)
Probe was passing a projection object as the sort arg, which mongoose
rejected with `Invalid sort value`. Move it to the third arg
(selectFields) so the probe actually runs.
* 🪢 fix: Preserve Original messageId on Code-Output File Update
Each `processCodeOutput` call was overwriting the persisted file's
`messageId` with the *current* run's id. When a turn re-creates an
existing file (filename + conversationId match → `claimCodeFile`
returns the existing record, `isUpdate=true`), the file's link to
the assistant message that originally produced it gets clobbered.
`initializeAgent` later runs `getCodeGeneratedFiles({messageId: $in: <thread>})`
to seed `tool_resources.execute_code` from prior-turn artifacts. With a
stale `messageId` (e.g. from a failed read attempt that re-shelled the
same filename), the file no longer matches the parent-walk thread, so
`tool_resources` arrives empty at agent init, the new
`primedCodeFiles` channel has nothing to seed, and the LLM can't see
its own prior-turn artifacts on the next turn — defeating the
just-added Graph-sessions seeding fix.
Preserve the existing `claimed.messageId` on update; first-creation
behavior is unchanged. The runtime return value still includes the
current run's `messageId` (via `Object.assign(file, { messageId })`)
so the artifact is correctly attributed to the live tool_call.
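The preserve-on-update contract can be sketched as a small pure function. Names here are illustrative, not the real `processCodeOutput` internals; only the ternary mirrors the commit's description.

```javascript
// Sketch of the messageId contract: the persisted record keeps the
// original messageId on update (falling back to the live id for legacy
// records), while the runtime return value always carries the live id.
function buildPersistedFile({ claimed, isUpdate, messageId, file }) {
  const persistedMessageId = isUpdate ? (claimed?.messageId ?? messageId) : messageId;
  const persisted = { ...file, messageId: persistedMessageId };
  // Runtime attribution still points at the live tool_call's message.
  const runtime = Object.assign({ ...persisted }, { messageId });
  return { persisted, runtime };
}
```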
* 🧹 chore: Remove diagnostic logs from code-files seeding path
Drops the temporary debug logs added to trace the empty-tool_resources
failure mode. Production code paths (loadToolDefinitionsWrapper,
client.js seed loop, initializeAgent file lookup) are left as the
permanent shape: capture primedCodeFiles, merge across agents, seed
initialSessions before run start.
* 🪛 feat: read_file Sandbox Fallback for /mnt/data + Non-Skill Paths
When the model called `read_file` with a code-execution path (e.g.
`/mnt/data/sentinel.txt`), the handler returned a misleading
`Use format: {skillName}/{path}` error. Adds a sandbox-aware fallback:
- Short-circuit `/mnt/data/...` (can never be a skill reference) →
route to a sandbox `cat` via the new host-provided `readSandboxFile`
callback, which POSTs to the codeapi `/exec` endpoint.
- Skip the skill resolver entirely when `accessibleSkillIds` is empty
— the resolved output of `resolveAgentScopedSkillIds` already

collapses the admin capability + ephemeral badge + persisted
`skills_enabled` chain, so an empty value is the authoritative
"skills aren't in scope for this agent" signal.
- For `{firstSegment}/...` paths, consult the catalog-derived
`activeSkillNames` Set (no DB read) to detect non-skill names and
fall through to the sandbox before the model has to retry with
`bash_tool`.
`activeSkillNames` is captured from `injectSkillCatalog`, threaded onto
`InitializedAgent`, into `agentToolContexts`, then through
`enrichWithSkillConfigurable` into `mergedConfigurable` for the handler.
The host implementation of `readSandboxFile` lives in
`api/server/services/Files/Code/process.js` and shells `cat <path>`
through the seeded sandbox session — `tc.codeSessionContext`
(emitted by ToolNode for `read_file` calls in `@librechat/agents`
v3.1.72+) provides the `session_id` + `_injected_files` so the read
lands in the same sandbox that holds prior-turn artifacts. When the
seeded context isn't available (older agents version, no codeapi
configured), the handler returns a model-visible error pointing at
`bash_tool` instead of silently failing.
Tests: 8 new `handleReadFileCall` cases cover the new short-circuits,
the skills-not-enabled gate, the activeSkillNames lookup, the
sandbox-fallback success path, and the bash_tool retry hint on
fallback failure. Existing `read_file` tests now opt into "skills are
in scope" via a `skillsInScope()` fixture (production wouldn't reach
the skill lookup with empty `accessibleSkillIds`).
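The routing order described above can be condensed into a decision function. This is a sketch of the documented behavior only: the helper name and context shape are assumptions, and the real handler returns tool results rather than route labels.

```javascript
// Hypothetical condensation of the read_file routing: /mnt/data paths can
// never be skill references, empty skill scope is authoritative, and the
// catalog-derived activeSkillNames Set decides the rest without a DB read.
function routeReadFile(filePath, { skillsInScope, activeSkillNames }) {
  if (filePath.startsWith('/mnt/data/')) {
    return 'sandbox'; // short-circuit: never a skill reference
  }
  if (!skillsInScope) {
    return 'sandbox'; // empty accessibleSkillIds: skills not in scope
  }
  const firstSegment = filePath.split('/')[0];
  return activeSkillNames.has(firstSegment) ? 'skill' : 'sandbox';
}
```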
* 🔧 chore: Update @librechat/agents dependency to version 3.1.72
Bumps the version of the @librechat/agents package across package-lock.json and relevant package.json files to ensure compatibility with the latest features and fixes.
* 🪛 refactor: Centralize Tool-Session Seed in buildInitialToolSessions Helper
Addresses review feedback on the per-agent merge in client.js:
- **Run-wide semantics, named explicitly.** The merge into a single
`Graph.sessions[EXECUTE_CODE]` was a deliberate match to the
agents-library design (`Graph.sessions` is shared across every
`ToolNode` in the run), but the inline `for (const a of agents)`
loop in `AgentClient.chatCompletion` made it look per-agent. Move
the logic to a TS helper `buildInitialToolSessions` that documents
the run-wide-by-design contract in one place. The CJS controller
now contains a single call site, no business logic.
- **Subagent walk (P2).** The previous loop only iterated
`[primary, ...agentConfigs.values()]`. Pure subagents are pruned
out of `agentConfigs` after init and retained on each parent's
`subagentAgentConfigs`, so their primed code files were silently
dropped from the seed. The helper now walks recursively, with a
visited-Set keyed on object identity that terminates safely on a
malformed agent graph (cycle).
- **`jest.setup.cjs` polyfill for undici `File`.** Reviewer hit
`ReferenceError: File is not defined` running the targeted spec on
WSL — a known Node 18 issue where `globalThis.File` from
`node:buffer` isn't auto-exposed. Polyfill it inside a Jest setup
file so the suite boots regardless of Node patch version.
Helper test coverage (8 new): skill-only / agent-only / both,
recursive subagent walk, cycle-safe walk, primary+subagent
deduplication, undefined/null entries in the agents iterable, and
representative session_id preservation across the merge.
16 tests pass total in `codeFilesSession.spec.ts` (8 prior + 8 new).
No behavior change vs. the previous commit for the existing
primary+agentConfigs case — subagent inclusion is the only new
behavior, and it matches what the existing seeding logic would have
done if subagents had been in `agentConfigs`.
* 🪛 fix: FIFO Walk Order in buildInitialToolSessions (P3 review)
The traversal used `Array.pop()` (LIFO), which visited the LAST
top-level agent first. The docstring says "primary first"; the code
contradicted it. When no skill seed exists the first-visited agent's
first file supplies the representative `session_id` written to
`Graph.sessions[EXECUTE_CODE]` — so a LIFO walk silently flipped which
agent it came from. `ToolNode` ultimately uses per-file `session_id`s
for runtime injection (so behavior was indistinguishable for current
callers), but the discrepancy was a footgun for any future consumer
that read the representative.
Switch to FIFO via `Array.shift()` to match both the docstring and the
existing `loadSubagentsFor` walk pattern in
`Endpoints/agents/initialize.js`. Add a regression test that asserts
the primary's `session_id` is the representative (and that all three
agents' files still contribute, with per-file `session_id`s preserved).
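The FIFO, cycle-safe walk described above can be sketched as follows. The agent shape is illustrative (only `subagentAgentConfigs` follows the commit's naming), and the real helper seeds sessions rather than returning visit order.

```javascript
// Sketch of the buildInitialToolSessions traversal: shift() gives FIFO,
// primary-first order, and a visited Set keyed on object identity
// terminates safely on a malformed (cyclic) agent graph.
function walkAgents(primary, agentConfigs = []) {
  const order = [];
  const visited = new Set();
  const queue = [primary, ...agentConfigs];
  while (queue.length > 0) {
    const agent = queue.shift(); // FIFO: first-visited agent supplies the representative session_id
    if (!agent || visited.has(agent)) {
      continue; // tolerate null entries and cycles
    }
    visited.add(agent);
    order.push(agent.id);
    for (const sub of agent.subagentAgentConfigs?.values?.() ?? []) {
      queue.push(sub); // pure subagents contribute too
    }
  }
  return order;
}
```

A `pop()`-based loop over the same queue would visit the last top-level agent first, which is exactly the docstring contradiction the commit fixes.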
* 🔬 test: Lock In Code-Files Bug Fixes Per Comprehensive Review
Addresses MAJOR + MINOR + NIT findings from the multi-pass review:
**Finding #4 (MINOR) — empty relativePath misses sandbox fallback.**
A model calling `read_file("output/")` where "output" isn't a skill
name dead-ended with `Missing file path after skill name` instead of
being routed to the sandbox like every other malformed-path branch.
Add the same `codeEnvAvailable → handleSandboxFileFallback` pattern,
plus two regression tests.
**Finding #7 (NIT) — duplicate `skillsInScope()` helper.**
Hoist the identical helper out of two nested describe blocks to
module scope. Single source of truth.
**Finding #1 (MAJOR) — `persistedMessageId` had zero test coverage.**
The fix preserves a file's original `messageId` on update so
`getCodeGeneratedFiles` can still match it on subsequent turns. A
regression in the `isUpdate ? (claimed.messageId ?? messageId) : messageId`
ternary would silently re-introduce the original cross-turn priming
bug. Five new tests cover:
- UPDATE preserves `claimed.messageId` in the persisted record
- UPDATE falls back to current run id when `claimed.messageId` is
absent (legacy records predating the field)
- CREATE uses current run id (no claimed record exists)
- The runtime return value uses the LIVE id (artifact attribution)
even when the persisted record kept the original
- The image branch follows the same contract (would silently regress
if the ternary diverged across the two file-build branches)
The tests use a `snapshotCreateFileArgs()` helper because
`processCodeOutput` mutates the file object after `createFile`
returns (`Object.assign(file, { messageId, toolCallId })`) and a
naive `createFile.mock.calls[0][0]` would reflect the post-mutation
state instead of what was actually persisted.
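The aliasing problem behind that helper is easy to reproduce without Jest: a mock's recorded calls hold references, so post-call mutation of the argument rewrites history. A clone taken at call time captures what was actually passed. This plain-function sketch stands in for the spec's `snapshotCreateFileArgs()`.

```javascript
// Snapshot-at-call-time sketch: deep-copy each argument when the spy is
// invoked, so later mutations of the same object don't leak into the
// recorded call history.
function makeSnapshottingSpy() {
  const snapshots = [];
  const spy = (arg) => {
    snapshots.push(structuredClone(arg)); // deep copy at call time
    return arg;
  };
  return { spy, snapshots };
}
```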
**Finding #2 (MAJOR) — `readSandboxFile` had no direct tests.**
The model-controlled `file_path` flows through a POSIX single-quote
escape into a shell `cat` command, making this a security boundary.
A quoting regression would let a malicious filename break out of the
quoted argument and inject arbitrary shell. 20 new tests across:
- Shell quoting (7): plain filenames, embedded `'`, `$()`, backticks,
newlines, shell metachars, multiple consecutive single-quotes
- Payload shape (6): /exec URL, bash language, conditional
session_id / files inclusion, dedicated keepAlive:false agents
- Response handling (6): `{content}` on success, null on missing
base URL or absent stdout, throws on stderr-only, partial-success
returns stdout, transport errors are logged then rethrown
- Timeout (1): matches processCodeOutput's 15s SLA
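The quoting boundary those tests exercise follows the standard POSIX single-quote escape: inside single quotes the shell interprets nothing, so the only character that needs handling is the quote itself, spliced as `'\''` (close, literal quote, reopen). A sketch of that escape, with an illustrative `cat` command:

```javascript
// POSIX single-quote escape sketch: wrap in single quotes and splice any
// embedded single quote as '\'' so model-controlled filenames cannot
// break out of the quoted argument.
function shellQuote(value) {
  return `'${String(value).replace(/'/g, `'\\''`)}'`;
}

// e.g. building the sandbox read command for a model-controlled path:
const cmd = `cat ${shellQuote("it's a $(trap).txt")}`;
```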
Audited findings #5 (acknowledged tech debt — readSandboxFile in JS
workspace), #6 (pre-existing positional-args debt on
enrichWithSkillConfigurable), and #8 (cosmetic JSDoc style) — no
action taken per the reviewer's own assessment.
Audited finding #3 (walk order vs docstring) — already addressed in
commit 007f32341 which converted to FIFO via `queue.shift()` plus a
regression test. The audit was performed against an earlier PR head.
Tests: 152 packages/api + 195 api JS = 347 pass. Typecheck clean.
* 🪛 fix: Pure-Subagent codeEnv + Primed-Skill Routing + ToolService Early Returns
Three findings from the second-pass review:
**P2 — Pure subagents missed `codeEnvAvailable`** (initialize.js).
The pure-subagent init path didn't forward the endpoint-level
`codeEnvAvailable` flag to `initializeAgent`, unlike the primary,
handoff, and addedConvo paths. A code-enabled subagent loaded only
through `subagentAgentConfigs` initialized with
`codeEnvAvailable: false`, so even though the recursive seed walk
found its primed code files, the subagent's own `bash_tool` /
`read_file` sandbox fallback were silently gated off. Forward the
flag and add `codeEnvAvailable: config.codeEnvAvailable` to the
`agentToolContexts.set` for symmetry with the other paths.
**P2 — Primed skills outside the catalog cap were misrouted to
sandbox** (handlers.ts). Manual ($-popover) and always-apply primes
are intentionally resolved off the wider `accessibleSkillIds` ACL
set BEFORE catalog injection — see `resolveManualSkills` for why a
skill outside the `SKILL_CATALOG_LIMIT` cap can still be authorized
for direct manual invocation. The `activeSkillNames` shortcut ran
before reading `skillPrimedIdsByName`, so a primed skill not in the
catalog would fall through to the sandbox instead of resolving via
the pinned `_id`. Read the primed map first and bypass the shortcut
for primed names. New regression test asserts a
primed-but-not-cataloged skill resolves through the existing skill
path with `getSkillByName` invoked and `readSandboxFile` NOT called.
**P3 — `loadAgentTools` early returns dropped `primedCodeFiles`**
(ToolService.js). The non-`definitionsOnly` path captures the field
correctly, but two early-return branches (no-action-tools fast path,
no-action-sets fast path) omitted it. Any traditional
`loadAgentTools(..., definitionsOnly: false)` caller using
execute_code without action tools would have its first-call session
seed silently empty. Add `primedCodeFiles` to both early returns
for consistency with the final return shape.
Tests: 153 packages/api + 195 api JS = 348 pass.
* 🧹 chore: Document jest.mock arrow-indirection pattern in process.spec.js
Per the second-pass review's Finding #2 (NIT, "would help future
readers"): the mock setup mixes direct `jest.fn()` references with
arrow-function indirection (`(...args) => mockX(...args)`). The
indirection isn't stylistic — it's required because `jest.mock(...)`
is hoisted above the outer `const` declarations at parse time, so a
direct reference would capture `undefined`. Inline comment explains
the pattern so the next reader doesn't have to reverse-engineer it
or accidentally "simplify" the mocks and break per-test
`mockReturnValueOnce` / `mockImplementationOnce` overrides.
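The hoisting hazard can be shown in miniature without Jest: a factory that dereferences a variable eagerly captures whatever value it has at parse time (`undefined`, since `jest.mock(...)` is hoisted above the `const` declarations), while an arrow wrapper defers the lookup to call time. The `let`/assignment below simulates that ordering.

```javascript
// Hoisting hazard sketch: eager capture vs call-time indirection.
let mockX; // stands in for `const mockX = jest.fn()` declared below a hoisted jest.mock(...)

const brokenModule = { fn: mockX };                        // captured undefined at "factory" time
const workingModule = { fn: (...args) => mockX(...args) }; // resolved when actually called

mockX = (value) => value * 2; // the assignment that "happens later"
```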
* 🪛 fix: Five Issues from Pass-N + Codex Review (incl. 404 root cause)
Five real bugs surfaced by another review pass + Codex PR comments
+ the codeapi-side logs we collected during manual testing:
**1) `processCodeOutput` 404 root cause (`callbacks.js`).**
The codeapi worker emits TWO distinct `session_id`s on a tool result:
- `artifact.session_id` is the EXEC session — the sandbox VM that
ran the bash command. Files don't live there; it's torn down
post-execution.
- `file.session_id` is the STORAGE session — the file-server
bucket prefix where artifacts actually live.
`callbacks.js` was passing the EXEC id to `processCodeOutput`, which
builds `/download/{session_id}/{id}` and 404s because the file-server
doesn't know about that path. This explains every "Error
downloading/processing code environment file" we saw during testing.
Use `file.session_id ?? output.artifact.session_id` (per-file id with
artifact-level fallback for older worker payloads).
**2) `primeFiles` reupload pushed STALE sandbox ids (`process.js`).**
When `getSessionInfo` returns null (file expired/missing in sandbox),
`reuploadFile` re-uploads via `handleFileUpload`, gets a NEW
`fileIdentifier`, and persists it on the DB record. But `pushFile`
was a closure capturing the OLD `(session_id, id)` parsed at the top
of the loop, so the in-memory `files[]` array (now consumed by
`buildInitialToolSessions` to seed `Graph.sessions`) silently
referenced a sandbox object that no longer existed. The first tool
call would 404 trying to mount it; only the next turn's metadata
re-read would correct course. Parameterize `pushFile` with optional
`(session_id, id)` overrides; in `reuploadFile` parse the new
identifier and pass through. 2 regression tests.
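The stale-closure bug and its fix can be reduced to a few lines. Names follow the commit's description; the surrounding loop and upload machinery are elided, so `collectFiles` and `reupload` here are illustrative stand-ins.

```javascript
// Stale-closure sketch: pushFile closes over the (session_id, id) parsed
// at the top of the loop, so a reupload's NEW identifiers must be passed
// in as overrides or the in-memory array references a dead sandbox object.
function collectFiles(parsed, reupload) {
  const files = [];
  const { session_id, id } = parsed; // parsed once, before any reupload
  const pushFile = (overrides = {}) => {
    files.push({ session_id, id, ...overrides }); // overrides win when provided
  };
  if (reupload) {
    const fresh = reupload(); // returns the NEW { session_id, id }
    pushFile(fresh);
  } else {
    pushFile();
  }
  return files;
}
```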
**3) Codex P2 — Cap sandbox fallback output before line-numbering
(`handlers.ts`).** The new `handleSandboxFileFallback` returned
`addLineNumbers(result.content)` without a size guard, so reading a
multi-MB `/mnt/data/*` artifact materialized the file twice in
memory (raw + line-numbered) before downstream truncation. Match the
existing skill-file path's `MAX_READABLE_BYTES` (256KB): truncate
raw first, then number, surface the truncation to the model so it
can use `bash_tool` (`head` / `tail`) for the rest. 2 tests
(oversized truncates with hint, in-cap doesn't).
**4) Codex P2 — Dedupe seeded code files by `(session_id, id)`
(`codeFilesSession.ts`).** Multiple agents in a run commonly carry
the same primed execute-code resources (shared conversation files);
without dedupe, `_injected_files` grows proportionally to agent
count and bloats every `/exec` POST. Use a `(session_id, id)`
identity key so first-seen wins (preserves source ordering); name
alone isn't sufficient because two distinct primed uploads can
share a filename across different sessions. 4 tests covering dedup
across iterations, against pre-existing entries, name-collision
distinct-session preservation, and the multi-agent realistic case
in `buildInitialToolSessions`.
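The first-seen-wins dedupe on a `(session_id, id)` composite key can be sketched directly; the function name is illustrative.

```javascript
// Dedupe sketch: key on (session_id, id) rather than name, because two
// distinct primed uploads can share a filename across different sessions.
// Map insertion order preserves the source ordering; first-seen wins.
function dedupeSeededFiles(files) {
  const seen = new Map();
  for (const file of files) {
    const key = `${file.session_id}\u0000${file.id}`; // NUL-joined composite key
    if (!seen.has(key)) {
      seen.set(key, file);
    }
  }
  return [...seen.values()];
}
```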
**5) Pass-N P2 — Polyfill `globalThis.File` in api Jest setup
(`api/test/jestSetup.js`).** `packages/api/jest.setup.cjs` had the
polyfill; the legacy api workspace's Jest config has its own
`setupFiles` that didn't, so on Node 18 / WSL the api focused tests
still failed at import time with `ReferenceError: File is not
defined` from undici. Mirror the polyfill.
Tests: 159 packages/api + 206 api JS = 365 pass. Typecheck clean.
* 🔧 chore: Update @librechat/agents dependency to version 3.1.73
Bumps the version of the @librechat/agents package across package-lock.json and relevant package.json files to ensure compatibility with the latest features and fixes.
1500 lines
48 KiB
JavaScript
const { logger } = require('@librechat/data-schemas');
const { tool: toolFn, DynamicStructuredTool } = require('@langchain/core/tools');
const {
  sleep,
  StepTypes,
  GraphEvents,
  createToolSearch,
  Constants: AgentConstants,
  createBashExecutionTool,
  createProgrammaticToolCallingTool,
} = require('@librechat/agents');
const {
  sendEvent,
  getToolkitKey,
  getUserMCPAuthMap,
  loadToolDefinitions,
  GenerationJobManager,
  isActionDomainAllowed,
  buildWebSearchContext,
  buildImageToolContext,
  buildToolClassification,
  buildOAuthToolCallName,
} = require('@librechat/api');
const {
  Time,
  Tools,
  Constants,
  CacheKeys,
  ErrorTypes,
  ContentTypes,
  imageGenTools,
  EModelEndpoint,
  EToolResources,
  isActionTool,
  actionDelimiter,
  ImageVisionTool,
  openapiToFunction,
  AgentCapabilities,
  isEphemeralAgentId,
  validateActionDomain,
  actionDomainSeparator,
  defaultAgentCapabilities,
  validateAndParseOpenAPISpec,
} = require('librechat-data-provider');
const {
  createActionTool,
  legacyDomainEncode,
  decryptMetadata,
  loadActionSets,
  domainParser,
} = require('./ActionService');
const {
  getEndpointsConfig,
  getMCPServerTools,
  getCachedTools,
} = require('~/server/services/Config');
const { processFileURL, uploadImageBuffer } = require('~/server/services/Files/process');
const { primeFiles: primeSearchFiles } = require('~/app/clients/tools/util/fileSearch');
const { primeFiles: primeCodeFiles } = require('~/server/services/Files/Code/process');
const { manifestToolMap, toolkits } = require('~/app/clients/tools/manifest');
const { createOnSearchResults } = require('~/server/services/Tools/search');
const { reinitMCPServer } = require('~/server/services/Tools/mcp');
const { resolveConfigServers } = require('~/server/services/MCP');
const { recordUsage } = require('~/server/services/Threads');
const { loadTools } = require('~/app/clients/tools/util');
const { redactMessage } = require('~/config/parsers');
const { findPluginAuthsByKeys } = require('~/models');
const { getFlowStateManager } = require('~/config');
const { getLogStores } = require('~/cache');

const domainSeparatorRegex = new RegExp(actionDomainSeparator, 'g');

/**
 * Collapse every `actionDomainSeparator` sequence in the encoded-domain
 * suffix of a fully-qualified action tool name to an underscore. Agents
 * can store tool names in the raw `domainParser(..., true)` output,
 * which for short hostnames is a `---`-separated string (e.g.
 * `medium---com`). The lookup maps below are always keyed with the
 * `_`-collapsed domain, so every read must normalize that suffix or
 * short-hostname tools silently fail to resolve.
 *
 * The operationId portion (everything before the last `actionDelimiter`)
 * is deliberately left untouched: `openapiToFunction` preserves hyphens
 * in generated operationIds, so two specs can legitimately produce
 * operationIds that differ only in hyphens-vs-underscores (e.g.
 * `get_foo---bar` vs `get_foo_bar`). Collapsing the operationId would
 * merge those into a single map slot and silently drop one tool.
 */
const normalizeActionToolName = (toolName) => {
  const delimiterIndex = toolName.lastIndexOf(actionDelimiter);
  if (delimiterIndex === -1) {
    return toolName;
  }
  const prefixEnd = delimiterIndex + actionDelimiter.length;
  const encodedDomain = toolName.slice(prefixEnd);
  return toolName.slice(0, prefixEnd) + encodedDomain.replace(domainSeparatorRegex, '_');
};

/**
 * Populate a `toolToAction` map with one slot per fully-qualified tool
 * name (`<operationId><actionDelimiter><encoded-domain>`). Both the new
 * and the legacy encodings of the domain are registered for every
 * function so agents whose stored tool names predate the current
 * encoding still resolve correctly.
 *
 * Indexing on the full tool name instead of the encoded domain alone is
 * what makes multi-action agents work when two actions share a hostname:
 * the operationId disambiguates them, so neither overwrites the other.
 *
 * Two actions that additionally share the same operationId still
 * collide (nothing in the key distinguishes them). That case is
 * pathological — `sanitizeOperationId` plus OpenAPI's own uniqueness
 * requirement make it very unlikely — but when it does happen we log
 * a warning so the silent-overwrite mode from the original bug cannot
 * reappear under a different disguise.
 */
const registerActionTools = ({
  toolToAction,
  functionSignatures,
  normalizedDomain,
  legacyNormalized,
  makeEntry,
}) => {
  const setKey = (key, entry) => {
    if (toolToAction.has(key)) {
      logger.warn(
        `[Actions] operationId collision: "${key}" already registered; ` +
          `action "${entry.action?.action_id}" overwrites the previous entry. ` +
          `Two actions share both the operationId and the encoded hostname.`,
      );
    }
    toolToAction.set(key, entry);
  };

  for (const sig of functionSignatures) {
    const entry = makeEntry(sig);
    // Use `sig.name` verbatim: `openapiToFunction` keeps hyphens in
    // generated operationIds, so `get_foo---bar` and `get_foo_bar` are
    // distinct operations on the same spec. `normalizeActionToolName`
    // only touches the encoded-domain suffix at lookup time, so map
    // keys and lookups stay consistent without merging distinct
    // operationIds into the same slot.
    setKey(`${sig.name}${actionDelimiter}${normalizedDomain}`, entry);
    if (legacyNormalized !== normalizedDomain) {
      setKey(`${sig.name}${actionDelimiter}${legacyNormalized}`, entry);
    }
  }
};

/**
 * Resolves the set of enabled agent capabilities from endpoints config,
 * falling back to app-level or default capabilities for ephemeral agents.
 * @param {ServerRequest} req
 * @param {Object} appConfig
 * @param {string} agentId
 * @returns {Promise<Set<string>>}
 */
async function resolveAgentCapabilities(req, appConfig, agentId) {
  const endpointsConfig = await getEndpointsConfig(req);
  let capabilities = new Set(endpointsConfig?.[EModelEndpoint.agents]?.capabilities ?? []);
  if (capabilities.size === 0 && isEphemeralAgentId(agentId)) {
    capabilities = new Set(
      appConfig.endpoints?.[EModelEndpoint.agents]?.capabilities ?? defaultAgentCapabilities,
    );
  }
  return capabilities;
}

/**
 * Processes a vision request by awaiting the client's pending vision
 * completion and returning its output.
 * @param {OpenAIClient} client - OpenAI or StreamRunManager Client.
 * @param {RequiredAction} currentAction - The current required action.
 * @returns {Promise<ToolOutput>} The output of the tool.
 */
const processVisionRequest = async (client, currentAction) => {
  if (!client.visionPromise) {
    return {
      tool_call_id: currentAction.toolCallId,
      output: 'No image details found.',
    };
  }

  /** @type {ChatCompletion | undefined} */
  const completion = await client.visionPromise;
  if (completion && completion.usage) {
    recordUsage({
      user: client.req.user.id,
      model: client.req.body.model,
      conversationId: (client.responseMessage ?? client.finalMessage).conversationId,
      ...completion.usage,
    });
  }
  const output = completion?.choices?.[0]?.message?.content ?? 'No image details found.';
  return {
    tool_call_id: currentAction.toolCallId,
    output,
  };
};

/**
 * Processes return required actions from run.
 * @param {OpenAIClient | StreamRunManager} client - OpenAI (legacy) or StreamRunManager Client.
 * @param {RequiredAction[]} requiredActions - The required actions to submit outputs for.
 * @returns {Promise<ToolOutputs>} The outputs of the tools.
 */
async function processRequiredActions(client, requiredActions) {
  logger.debug(
    `[required actions] user: ${client.req.user.id} | thread_id: ${requiredActions[0].thread_id} | run_id: ${requiredActions[0].run_id}`,
    requiredActions,
  );
  const appConfig = client.req.config;
  const toolDefinitions = (await getCachedTools()) ?? {};
  const seenToolkits = new Set();
  const tools = requiredActions
    .map((action) => {
      const toolName = action.tool;
      const toolDef = toolDefinitions[toolName];
      if (toolDef && !manifestToolMap[toolName]) {
        for (const toolkit of toolkits) {
          if (seenToolkits.has(toolkit.pluginKey)) {
            return;
          } else if (toolName.startsWith(`${toolkit.pluginKey}_`)) {
            seenToolkits.add(toolkit.pluginKey);
            return toolkit.pluginKey;
          }
        }
      }
      return toolName;
    })
    .filter((toolName) => !!toolName);

  const { loadedTools } = await loadTools({
    user: client.req.user.id,
    model: client.req.body.model ?? 'gpt-4o-mini',
    tools,
    functions: true,
    endpoint: client.req.body.endpoint,
    options: {
      processFileURL,
      req: client.req,
      uploadImageBuffer,
      openAIApiKey: client.apiKey,
      returnMetadata: true,
    },
    webSearch: appConfig.webSearch,
    fileStrategy: appConfig.fileStrategy,
    imageOutputType: appConfig.imageOutputType,
  });

  const ToolMap = loadedTools.reduce((map, tool) => {
    map[tool.name] = tool;
    return map;
  }, {});

  const promises = [];

  let actionSetsData = null;
  let isActionTool = false;
  const ActionToolMap = {};
  const ActionBuildersMap = {};

  for (let i = 0; i < requiredActions.length; i++) {
    const currentAction = requiredActions[i];
    if (currentAction.tool === ImageVisionTool.function.name) {
      promises.push(processVisionRequest(client, currentAction));
      continue;
    }
    let tool = ToolMap[currentAction.tool] ?? ActionToolMap[currentAction.tool];

    const handleToolOutput = async (output) => {
      requiredActions[i].output = output;

      /** @type {FunctionToolCall & PartMetadata} */
      const toolCall = {
        function: {
          name: currentAction.tool,
          arguments: JSON.stringify(currentAction.toolInput),
          output,
        },
        id: currentAction.toolCallId,
        type: 'function',
        progress: 1,
        action: isActionTool,
      };

      const toolCallIndex = client.mappedOrder.get(toolCall.id);

      if (imageGenTools.has(currentAction.tool)) {
        const imageOutput = output;
        toolCall.function.output = `${currentAction.tool} displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.`;

        // Streams the "Finished" state of the tool call in the UI
        client.addContentData({
          [ContentTypes.TOOL_CALL]: toolCall,
          index: toolCallIndex,
          type: ContentTypes.TOOL_CALL,
        });

        await sleep(500);

        /** @type {ImageFile} */
        const imageDetails = {
          ...imageOutput,
          ...currentAction.toolInput,
        };

        const image_file = {
          [ContentTypes.IMAGE_FILE]: imageDetails,
          type: ContentTypes.IMAGE_FILE,
          // Replace the tool call output with Image file
          index: toolCallIndex,
        };

        client.addContentData(image_file);

        // Update the stored tool call
        client.seenToolCalls && client.seenToolCalls.set(toolCall.id, toolCall);

        return {
          tool_call_id: currentAction.toolCallId,
          output: toolCall.function.output,
        };
      }

      client.seenToolCalls && client.seenToolCalls.set(toolCall.id, toolCall);
      client.addContentData({
        [ContentTypes.TOOL_CALL]: toolCall,
        index: toolCallIndex,
        type: ContentTypes.TOOL_CALL,
        // TODO: to append tool properties to stream, pass metadata rest to addContentData
        // result: tool.result,
      });

      return {
        tool_call_id: currentAction.toolCallId,
        output,
      };
    };

    if (!tool) {
      // throw new Error(`Tool ${currentAction.tool} not found.`);

      if (!actionSetsData) {
        /** @type {Action[]} */
        const actionSets =
          (await loadActionSets({
            assistant_id: client.req.body.assistant_id,
          })) ?? [];

        // See registerActionTools for the key-shape rationale.
        const toolToAction = new Map();

        for (const action of actionSets) {
          const domain = await domainParser(action.metadata.domain, true);
          const normalizedDomain = domain.replace(domainSeparatorRegex, '_');
          const legacyDomain = legacyDomainEncode(action.metadata.domain);
          const legacyNormalized = legacyDomain.replace(domainSeparatorRegex, '_');

          const isDomainAllowed = await isActionDomainAllowed(
            action.metadata.domain,
            appConfig?.actions?.allowedDomains,
          );
          if (!isDomainAllowed) {
            continue;
          }

          // Validate and parse OpenAPI spec
          const validationResult = validateAndParseOpenAPISpec(action.metadata.raw_spec);
          if (!validationResult.spec || !validationResult.serverUrl) {
            throw new Error(
              `Invalid spec: user: ${client.req.user.id} | thread_id: ${requiredActions[0].thread_id} | run_id: ${requiredActions[0].run_id}`,
            );
          }

          // SECURITY: Validate the domain from the spec matches the stored domain
          // This is defense-in-depth to prevent any stored malicious actions
          const domainValidation = validateActionDomain(
            action.metadata.domain,
            validationResult.serverUrl,
          );
          if (!domainValidation.isValid) {
            logger.error(`Domain mismatch in stored action: ${domainValidation.message}`, {
              userId: client.req.user.id,
              action_id: action.action_id,
|
});
|
|
continue; // Skip this action rather than failing the entire request
|
|
}
|
|
|
|
// Process the OpenAPI spec
|
|
const { requestBuilders, functionSignatures } = openapiToFunction(validationResult.spec);
|
|
|
|
// Store encrypted values for OAuth flow
|
|
const encrypted = {
|
|
oauth_client_id: action.metadata.oauth_client_id,
|
|
oauth_client_secret: action.metadata.oauth_client_secret,
|
|
};
|
|
|
|
// Decrypt metadata
|
|
const decryptedAction = { ...action };
|
|
decryptedAction.metadata = await decryptMetadata(action.metadata);
|
|
|
|
registerActionTools({
|
|
toolToAction,
|
|
functionSignatures,
|
|
normalizedDomain,
|
|
legacyNormalized,
|
|
makeEntry: (sig) => ({
|
|
action: decryptedAction,
|
|
requestBuilder: requestBuilders[sig.name],
|
|
encrypted,
|
|
}),
|
|
});
|
|
|
|
// Store builders for reuse
|
|
ActionBuildersMap[action.metadata.domain] = requestBuilders;
|
|
}
|
|
|
|
actionSetsData = toolToAction;
|
|
}
|
|
|
|
const entry = actionSetsData.get(normalizeActionToolName(currentAction.tool));
|
|
if (!entry) {
|
|
continue;
|
|
}
|
|
|
|
const { action, requestBuilder, encrypted } = entry;
|
|
|
|
// We've already decrypted the metadata, so we can pass it directly
|
|
const _allowedDomains = appConfig?.actions?.allowedDomains;
|
|
tool = await createActionTool({
|
|
userId: client.req.user.id,
|
|
res: client.res,
|
|
action,
|
|
requestBuilder,
|
|
// Note: intentionally not passing zodSchema, name, and description for assistants API
|
|
encrypted, // Pass the encrypted values for OAuth flow
|
|
useSSRFProtection: !Array.isArray(_allowedDomains) || _allowedDomains.length === 0,
|
|
});
|
|
if (!tool) {
|
|
logger.warn(
|
|
`Invalid action: user: ${client.req.user.id} | thread_id: ${requiredActions[0].thread_id} | run_id: ${requiredActions[0].run_id} | toolName: ${currentAction.tool}`,
|
|
);
|
|
throw new Error(`{"type":"${ErrorTypes.INVALID_ACTION}"}`);
|
|
}
|
|
isActionTool = !!tool;
|
|
ActionToolMap[currentAction.tool] = tool;
|
|
}
|
|
|
|
if (currentAction.tool === 'calculator') {
|
|
currentAction.toolInput = currentAction.toolInput.input;
|
|
}
|
|
|
|
const handleToolError = (error) => {
|
|
logger.error(
|
|
`tool_call_id: ${currentAction.toolCallId} | Error processing tool ${currentAction.tool}`,
|
|
error,
|
|
);
|
|
return {
|
|
tool_call_id: currentAction.toolCallId,
|
|
output: `Error processing tool ${currentAction.tool}: ${redactMessage(error.message, 256)}`,
|
|
};
|
|
};
|
|
|
|
try {
|
|
const promise = tool
|
|
._call(currentAction.toolInput)
|
|
.then(handleToolOutput)
|
|
.catch(handleToolError);
|
|
promises.push(promise);
|
|
} catch (error) {
|
|
const toolOutputError = handleToolError(error);
|
|
promises.push(Promise.resolve(toolOutputError));
|
|
}
|
|
}
|
|
|
|
return {
|
|
tool_outputs: await Promise.all(promises),
|
|
};
|
|
}
|
|
|
|
/**
 * Processes the runtime tool calls and returns the tool classes.
 * @param {Object} params - Run params containing user and request information.
 * @param {ServerRequest} params.req - The request object.
 * @param {ServerResponse} params.res - The response object.
 * @param {AbortSignal} params.signal
 * @param {Pick<Agent, 'id' | 'provider' | 'model' | 'tools'>} params.agent - The agent to load tools for.
 * @param {string | undefined} [params.openAIApiKey] - The OpenAI API key.
 * @returns {Promise<{
 * tools?: StructuredTool[];
 * toolContextMap?: Record<string, unknown>;
 * userMCPAuthMap?: Record<string, Record<string, string>>;
 * toolRegistry?: Map<string, import('~/utils/toolClassification').LCTool>;
 * hasDeferredTools?: boolean;
 * }>} The agent tools and registry.
 */

/** Native LibreChat tools that are not in the manifest */
const nativeTools = new Set([Tools.execute_code, Tools.file_search, Tools.web_search]);

/** Checks if a tool name is a known built-in tool */
const isBuiltInTool = (toolName) =>
  Boolean(
    manifestToolMap[toolName] ||
      toolkits.some((t) => t.pluginKey === toolName) ||
      nativeTools.has(toolName),
  );

/**
 * Loads only tool definitions without creating tool instances.
 * This is the efficient path for event-driven mode, where tools are loaded on demand.
 *
 * @param {Object} params
 * @param {ServerRequest} params.req - The request object
 * @param {ServerResponse} [params.res] - The response object for SSE events
 * @param {Object} params.agent - The agent configuration
 * @param {string|null} [params.streamId] - Stream ID for resumable mode
 * @param {Object} [params.tool_resources] - Tool resources for priming code and search files
 * @returns {Promise<{
 * toolDefinitions?: import('@librechat/api').LCTool[];
 * toolRegistry?: Map<string, import('@librechat/api').LCTool>;
 * userMCPAuthMap?: Record<string, Record<string, string>>;
 * hasDeferredTools?: boolean;
 * }>}
 */
async function loadToolDefinitionsWrapper({ req, res, agent, streamId = null, tool_resources }) {
  if (!agent.tools || agent.tools.length === 0) {
    return { toolDefinitions: [] };
  }

  if (
    agent.tools.length === 1 &&
    (agent.tools[0] === AgentCapabilities.context || agent.tools[0] === AgentCapabilities.ocr)
  ) {
    return { toolDefinitions: [] };
  }

  const appConfig = req.config;
  const enabledCapabilities = await resolveAgentCapabilities(req, appConfig, agent.id);

  const checkCapability = (capability) => enabledCapabilities.has(capability);
  const areToolsEnabled = checkCapability(AgentCapabilities.tools);
  const actionsEnabled = checkCapability(AgentCapabilities.actions);
  const deferredToolsEnabled = checkCapability(AgentCapabilities.deferred_tools);

  const filteredTools = agent.tools?.filter((tool) => {
    if (tool === Tools.file_search) {
      return checkCapability(AgentCapabilities.file_search);
    }
    if (tool === Tools.execute_code) {
      return checkCapability(AgentCapabilities.execute_code);
    }
    if (tool === Tools.web_search) {
      return checkCapability(AgentCapabilities.web_search);
    }
    if (isActionTool(tool)) {
      return actionsEnabled;
    }
    if (!areToolsEnabled) {
      return false;
    }
    return true;
  });

  if (!filteredTools || filteredTools.length === 0) {
    return { toolDefinitions: [] };
  }

  /** @type {Record<string, Record<string, string>>} */
  let userMCPAuthMap;
  if (agent.tools?.some((t) => t.includes(Constants.mcp_delimiter))) {
    userMCPAuthMap = await getUserMCPAuthMap({
      tools: agent.tools,
      userId: req.user.id,
      findPluginAuthsByKeys,
    });
  }

  const flowsCache = getLogStores(CacheKeys.FLOWS);
  const flowManager = getFlowStateManager(flowsCache);
  const configServers = await resolveConfigServers(req);
  const pendingOAuthServers = new Set();

  const createOAuthEmitter = (serverName) => {
    return async (authURL) => {
      const flowId = `${req.user.id}:${serverName}:${Date.now()}`;
      const stepId = 'step_oauth_login_' + serverName;
      const toolCall = {
        id: flowId,
        name: buildOAuthToolCallName(serverName),
        type: 'tool_call_chunk',
      };

      const runStepData = {
        runId: Constants.USE_PRELIM_RESPONSE_MESSAGE_ID,
        id: stepId,
        type: StepTypes.TOOL_CALLS,
        index: 0,
        stepDetails: {
          type: StepTypes.TOOL_CALLS,
          tool_calls: [toolCall],
        },
      };

      const runStepDeltaData = {
        id: stepId,
        delta: {
          type: StepTypes.TOOL_CALLS,
          tool_calls: [{ ...toolCall, args: '' }],
          auth: authURL,
          expires_at: Date.now() + Time.TWO_MINUTES,
        },
      };

      const runStepEvent = { event: GraphEvents.ON_RUN_STEP, data: runStepData };
      const runStepDeltaEvent = { event: GraphEvents.ON_RUN_STEP_DELTA, data: runStepDeltaData };

      if (streamId) {
        await GenerationJobManager.emitChunk(streamId, runStepEvent);
        await GenerationJobManager.emitChunk(streamId, runStepDeltaEvent);
      } else if (res && !res.writableEnded) {
        sendEvent(res, runStepEvent);
        sendEvent(res, runStepDeltaEvent);
      } else {
        logger.warn(
          `[Tool Definitions] Cannot emit OAuth event for ${serverName}: no streamId and res not available`,
        );
      }
    };
  };

  const getOrFetchMCPServerTools = async (userId, serverName) => {
    const cached = await getMCPServerTools(userId, serverName);
    if (cached) {
      return cached;
    }

    const oauthStart = async () => {
      pendingOAuthServers.add(serverName);
    };

    const result = await reinitMCPServer({
      user: req.user,
      oauthStart,
      flowManager,
      serverName,
      configServers,
      userMCPAuthMap,
    });

    return result?.availableTools || null;
  };

  const getActionToolDefinitions = async (agentId, actionToolNames) => {
    const actionSets = (await loadActionSets({ agent_id: agentId })) ?? [];
    if (actionSets.length === 0) {
      return [];
    }

    const definitions = [];
    const allowedDomains = appConfig?.actions?.allowedDomains;
    const normalizedToolNames = new Set(
      actionToolNames.map((n) => n.replace(domainSeparatorRegex, '_')),
    );

    for (const action of actionSets) {
      const domain = await domainParser(action.metadata.domain, true);
      const normalizedDomain = domain.replace(domainSeparatorRegex, '_');

      const legacyDomain = legacyDomainEncode(action.metadata.domain);
      const legacyNormalized = legacyDomain.replace(domainSeparatorRegex, '_');

      const isDomainAllowed = await isActionDomainAllowed(action.metadata.domain, allowedDomains);
      if (!isDomainAllowed) {
        logger.warn(
          `[Actions] Domain "${action.metadata.domain}" not in allowedDomains. ` +
            `Add it to librechat.yaml actions.allowedDomains to enable this action.`,
        );
        continue;
      }

      const validationResult = validateAndParseOpenAPISpec(action.metadata.raw_spec);
      if (!validationResult.spec || !validationResult.serverUrl) {
        logger.warn(`[Actions] Invalid OpenAPI spec for domain: ${domain}`);
        continue;
      }

      const { functionSignatures } = openapiToFunction(validationResult.spec, true);

      for (const sig of functionSignatures) {
        const toolName = `${sig.name}${actionDelimiter}${normalizedDomain}`;
        const legacyToolName = `${sig.name}${actionDelimiter}${legacyNormalized}`;
        if (!normalizedToolNames.has(toolName) && !normalizedToolNames.has(legacyToolName)) {
          continue;
        }

        definitions.push({
          name: toolName,
          description: sig.description,
          parameters: sig.parameters,
        });
      }
    }

    return definitions;
  };

  let { toolDefinitions, toolRegistry, hasDeferredTools } = await loadToolDefinitions(
    {
      userId: req.user.id,
      agentId: agent.id,
      tools: filteredTools,
      toolOptions: agent.tool_options,
      deferredToolsEnabled,
    },
    {
      isBuiltInTool,
      getOrFetchMCPServerTools,
      getActionToolDefinitions,
    },
  );

  if (pendingOAuthServers.size > 0 && (res || streamId)) {
    const serverNames = Array.from(pendingOAuthServers);
    logger.info(
      `[Tool Definitions] OAuth required for ${serverNames.length} server(s): ${serverNames.join(', ')}. Emitting events and waiting.`,
    );

    const oauthWaitPromises = serverNames.map(async (serverName) => {
      try {
        const result = await reinitMCPServer({
          user: req.user,
          serverName,
          configServers,
          userMCPAuthMap,
          flowManager,
          returnOnOAuth: false,
          oauthStart: createOAuthEmitter(serverName),
          connectionTimeout: Time.TWO_MINUTES,
        });

        if (result?.availableTools) {
          logger.info(`[Tool Definitions] OAuth completed for ${serverName}, tools available`);
          return { serverName, success: true };
        }
        return { serverName, success: false };
      } catch (error) {
        logger.debug(`[Tool Definitions] OAuth wait failed for ${serverName}:`, error?.message);
        return { serverName, success: false };
      }
    });

    const results = await Promise.allSettled(oauthWaitPromises);
    const successfulServers = results
      .filter((r) => r.status === 'fulfilled' && r.value.success)
      .map((r) => r.value.serverName);

    if (successfulServers.length > 0) {
      logger.info(
        `[Tool Definitions] Reloading tools after OAuth for: ${successfulServers.join(', ')}`,
      );
      const reloadResult = await loadToolDefinitions(
        {
          userId: req.user.id,
          agentId: agent.id,
          tools: filteredTools,
          toolOptions: agent.tool_options,
          deferredToolsEnabled,
        },
        {
          isBuiltInTool,
          getOrFetchMCPServerTools,
          getActionToolDefinitions,
        },
      );
      toolDefinitions = reloadResult.toolDefinitions;
      toolRegistry = reloadResult.toolRegistry;
      hasDeferredTools = reloadResult.hasDeferredTools;
    }
  }

  /** @type {Record<string, string>} */
  const toolContextMap = {};
  const hasWebSearch = filteredTools.includes(Tools.web_search);
  const hasFileSearch = filteredTools.includes(Tools.file_search);
  const hasExecuteCode = filteredTools.includes(Tools.execute_code);

  if (hasWebSearch) {
    toolContextMap[Tools.web_search] = buildWebSearchContext();
  }

  /**
   * `files` carry the upload session_ids; we surface them so client.js can
   * seed `Graph.sessions[EXECUTE_CODE]` before run start. Without that seed,
   * the agents-side `ToolNode.getCodeSessionContext` returns undefined on
   * call #1, `_injected_files` is never set on the tool call, and the
   * sandbox can't see the prior turn's generated artifacts on first read.
   */
  let primedCodeFiles;
  if (hasExecuteCode && tool_resources) {
    try {
      const { toolContext, files } = await primeCodeFiles({
        req,
        tool_resources,
        agentId: agent.id,
      });
      if (toolContext) {
        toolContextMap[Tools.execute_code] = toolContext;
      }
      if (files?.length) {
        primedCodeFiles = files;
      }
    } catch (error) {
      logger.error('[loadToolDefinitionsWrapper] Error priming code files:', error);
    }
  }

  if (hasFileSearch && tool_resources) {
    try {
      const { toolContext } = await primeSearchFiles({
        req,
        tool_resources,
        agentId: agent.id,
      });
      if (toolContext) {
        toolContextMap[Tools.file_search] = toolContext;
      }
    } catch (error) {
      logger.error('[loadToolDefinitionsWrapper] Error priming search files:', error);
    }
  }

  const imageFiles = tool_resources?.[EToolResources.image_edit]?.files ?? [];
  if (imageFiles.length > 0) {
    const hasOaiImageGen = filteredTools.includes('image_gen_oai');
    const hasGeminiImageGen = filteredTools.includes('gemini_image_gen');

    if (hasOaiImageGen) {
      const toolContext = buildImageToolContext({
        imageFiles,
        toolName: `${EToolResources.image_edit}_oai`,
        contextDescription: 'image editing',
      });
      if (toolContext) {
        toolContextMap.image_edit_oai = toolContext;
      }
    }

    if (hasGeminiImageGen) {
      const toolContext = buildImageToolContext({
        imageFiles,
        toolName: 'gemini_image_gen',
        contextDescription: 'image context',
      });
      if (toolContext) {
        toolContextMap.gemini_image_gen = toolContext;
      }
    }
  }

  return {
    toolRegistry,
    userMCPAuthMap,
    toolContextMap,
    toolDefinitions,
    hasDeferredTools,
    actionsEnabled,
    primedCodeFiles,
  };
}

/**
 * Loads agent tools for initialization or execution.
 * @param {Object} params
 * @param {ServerRequest} params.req - The request object
 * @param {ServerResponse} params.res - The response object
 * @param {Object} params.agent - The agent configuration
 * @param {AbortSignal} [params.signal] - Abort signal
 * @param {Object} [params.tool_resources] - Tool resources
 * @param {string} [params.openAIApiKey] - OpenAI API key
 * @param {string|null} [params.streamId] - Stream ID for resumable mode
 * @param {boolean} [params.definitionsOnly=true] - When true, returns only serializable
 * tool definitions without creating full tool instances. Use for event-driven mode
 * where tools are loaded on-demand during execution.
 */
async function loadAgentTools({
  req,
  res,
  agent,
  signal,
  tool_resources,
  openAIApiKey,
  streamId = null,
  definitionsOnly = true,
}) {
  if (definitionsOnly) {
    return loadToolDefinitionsWrapper({ req, res, agent, streamId, tool_resources });
  }

  if (!agent.tools || agent.tools.length === 0) {
    return { toolDefinitions: [] };
  } else if (
    agent.tools &&
    agent.tools.length === 1 &&
    /** Legacy handling for `ocr`, which may still exist in existing Agents */
    (agent.tools[0] === AgentCapabilities.context || agent.tools[0] === AgentCapabilities.ocr)
  ) {
    return { toolDefinitions: [] };
  }

  const appConfig = req.config;
  const enabledCapabilities = await resolveAgentCapabilities(req, appConfig, agent.id);
  const checkCapability = (capability) => {
    const enabled = enabledCapabilities.has(capability);
    if (!enabled) {
      const isToolCapability = [
        AgentCapabilities.file_search,
        AgentCapabilities.execute_code,
        AgentCapabilities.web_search,
      ].includes(capability);
      const suffix = isToolCapability ? ' despite configured tool.' : '.';
      logger.warn(
        `Capability "${capability}" disabled${suffix} User: ${req.user.id} | Agent: ${agent.id}`,
      );
    }
    return enabled;
  };
  const areToolsEnabled = checkCapability(AgentCapabilities.tools);
  const actionsEnabled = checkCapability(AgentCapabilities.actions);

  let includesWebSearch = false;
  const _agentTools = agent.tools?.filter((tool) => {
    if (tool === Tools.file_search) {
      return checkCapability(AgentCapabilities.file_search);
    } else if (tool === Tools.execute_code) {
      return checkCapability(AgentCapabilities.execute_code);
    } else if (tool === Tools.web_search) {
      includesWebSearch = checkCapability(AgentCapabilities.web_search);
      return includesWebSearch;
    } else if (isActionTool(tool)) {
      return actionsEnabled;
    } else if (!areToolsEnabled) {
      return false;
    }
    return true;
  });

  if (!_agentTools || _agentTools.length === 0) {
    return {};
  }
  /** @type {ReturnType<typeof createOnSearchResults>} */
  let webSearchCallbacks;
  if (includesWebSearch) {
    webSearchCallbacks = createOnSearchResults(res, streamId);
  }

  /** @type {Record<string, Record<string, string>>} */
  let userMCPAuthMap;
  if (agent.tools?.some((t) => t.includes(Constants.mcp_delimiter))) {
    userMCPAuthMap = await getUserMCPAuthMap({
      tools: agent.tools,
      userId: req.user.id,
      findPluginAuthsByKeys,
    });
  }

  const { loadedTools, toolContextMap, primedCodeFiles } = await loadTools({
    agent,
    signal,
    userMCPAuthMap,
    functions: true,
    user: req.user.id,
    tools: _agentTools,
    options: {
      req,
      res,
      openAIApiKey,
      tool_resources,
      processFileURL,
      uploadImageBuffer,
      returnMetadata: true,
      [Tools.web_search]: webSearchCallbacks,
    },
    webSearch: appConfig.webSearch,
    fileStrategy: appConfig.fileStrategy,
    imageOutputType: appConfig.imageOutputType,
  });

  /** Build tool registry from MCP tools and create PTC/tool search tools if configured */
  const deferredToolsEnabled = checkCapability(AgentCapabilities.deferred_tools);
  const { toolRegistry, toolDefinitions, additionalTools, hasDeferredTools } =
    await buildToolClassification({
      loadedTools,
      userId: req.user.id,
      agentId: agent.id,
      agentToolOptions: agent.tool_options,
      deferredToolsEnabled,
    });

  const agentTools = [];
  for (let i = 0; i < loadedTools.length; i++) {
    const tool = loadedTools[i];
    if (tool.name && (tool.name === Tools.execute_code || tool.name === Tools.file_search)) {
      agentTools.push(tool);
      continue;
    }

    if (!areToolsEnabled) {
      continue;
    }

    if (tool.mcp === true) {
      agentTools.push(tool);
      continue;
    }

    if (tool instanceof DynamicStructuredTool) {
      agentTools.push(tool);
      continue;
    }

    const toolDefinition = {
      name: tool.name,
      schema: tool.schema,
      description: tool.description,
    };

    if (imageGenTools.has(tool.name)) {
      toolDefinition.responseFormat = 'content_and_artifact';
    }

    const toolInstance = toolFn(async (...args) => {
      return tool['_call'](...args);
    }, toolDefinition);

    agentTools.push(toolInstance);
  }

  const ToolMap = loadedTools.reduce((map, tool) => {
    map[tool.name] = tool;
    return map;
  }, {});

  agentTools.push(...additionalTools);

  const hasActionTools = _agentTools.some((t) => isActionTool(t));
  if (!hasActionTools) {
    return {
      toolRegistry,
      userMCPAuthMap,
      toolContextMap,
      toolDefinitions,
      hasDeferredTools,
      actionsEnabled,
      tools: agentTools,
      primedCodeFiles,
    };
  }

  const actionSets = (await loadActionSets({ agent_id: agent.id })) ?? [];
  if (actionSets.length === 0) {
    if (_agentTools.length > 0 && agentTools.length === 0) {
      logger.warn(`No tools found for the specified tool calls: ${_agentTools.join(', ')}`);
    }
    return {
      toolRegistry,
      userMCPAuthMap,
      toolContextMap,
      toolDefinitions,
      hasDeferredTools,
      actionsEnabled,
      tools: agentTools,
      primedCodeFiles,
    };
  }

  // See registerActionTools for the key-shape rationale.
  const toolToAction = new Map();

  for (const action of actionSets) {
    const domain = await domainParser(action.metadata.domain, true);
    const normalizedDomain = domain.replace(domainSeparatorRegex, '_');
    const legacyDomain = legacyDomainEncode(action.metadata.domain);
    const legacyNormalized = legacyDomain.replace(domainSeparatorRegex, '_');

    const isDomainAllowed = await isActionDomainAllowed(
      action.metadata.domain,
      appConfig?.actions?.allowedDomains,
    );
    if (!isDomainAllowed) {
      continue;
    }

    // Validate and parse OpenAPI spec once per action set
    const validationResult = validateAndParseOpenAPISpec(action.metadata.raw_spec);
    if (!validationResult.spec || !validationResult.serverUrl) {
      continue;
    }

    // SECURITY: Validate the domain from the spec matches the stored domain
    // This is defense-in-depth to prevent any stored malicious actions
    const domainValidation = validateActionDomain(
      action.metadata.domain,
      validationResult.serverUrl,
    );
    if (!domainValidation.isValid) {
      logger.error(`Domain mismatch in stored action: ${domainValidation.message}`, {
        userId: req.user.id,
        agent_id: agent.id,
        action_id: action.action_id,
      });
      continue; // Skip this action rather than failing the entire request
    }

    const encrypted = {
      oauth_client_id: action.metadata.oauth_client_id,
      oauth_client_secret: action.metadata.oauth_client_secret,
    };

    // Decrypt metadata once per action set
    const decryptedAction = { ...action };
    decryptedAction.metadata = await decryptMetadata(action.metadata);

    // Process the OpenAPI spec once per action set
    const { requestBuilders, functionSignatures, zodSchemas } = openapiToFunction(
      validationResult.spec,
      true,
    );

    registerActionTools({
      toolToAction,
      functionSignatures,
      normalizedDomain,
      legacyNormalized,
      makeEntry: (sig) => ({
        action: decryptedAction,
        requestBuilder: requestBuilders[sig.name],
        zodSchema: zodSchemas[sig.name],
        functionSignature: sig,
        encrypted,
      }),
    });
  }

  // Now map tools to the processed action sets
  const ActionToolMap = {};

  for (const toolName of _agentTools) {
    if (ToolMap[toolName]) {
      continue;
    }

    const entry = toolToAction.get(normalizeActionToolName(toolName));
    if (!entry) {
      continue;
    }

    const { action, encrypted, zodSchema, requestBuilder, functionSignature } = entry;
    const _allowedDomains = appConfig?.actions?.allowedDomains;
    const tool = await createActionTool({
      userId: req.user.id,
      res,
      action,
      requestBuilder,
      zodSchema,
      encrypted,
      name: toolName,
      description: functionSignature.description,
      streamId,
      useSSRFProtection: !Array.isArray(_allowedDomains) || _allowedDomains.length === 0,
    });

    if (!tool) {
      logger.warn(
        `Invalid action: user: ${req.user.id} | agent_id: ${agent.id} | toolName: ${toolName}`,
      );
      throw new Error(`{"type":"${ErrorTypes.INVALID_ACTION}"}`);
    }

    agentTools.push(tool);
    ActionToolMap[toolName] = tool;
  }

  if (_agentTools.length > 0 && agentTools.length === 0) {
    logger.warn(`No tools found for the specified tool calls: ${_agentTools.join(', ')}`);
    return {};
  }

  return {
    toolRegistry,
    toolContextMap,
    userMCPAuthMap,
    toolDefinitions,
    hasDeferredTools,
    actionsEnabled,
    tools: agentTools,
    primedCodeFiles,
  };
}

/**
 * Loads tools for event-driven execution (ON_TOOL_EXECUTE handler).
 * This function encapsulates all dependencies needed for tool loading,
 * so callers don't need to import processFileURL, uploadImageBuffer, etc.
 *
 * Handles both regular tools (MCP, built-in) and action tools.
 *
 * @param {Object} params
 * @param {ServerRequest} params.req - The request object
 * @param {ServerResponse} params.res - The response object
 * @param {AbortSignal} [params.signal] - Abort signal
 * @param {Object} params.agent - The agent object
 * @param {string[]} params.toolNames - Names of tools to load
 * @param {Map} [params.toolRegistry] - Tool registry
 * @param {Record<string, Record<string, string>>} [params.userMCPAuthMap] - User MCP auth map
 * @param {Object} [params.tool_resources] - Tool resources
 * @param {string|null} [params.streamId] - Stream ID for web search callbacks
 * @param {boolean} [params.actionsEnabled] - Whether the actions capability is enabled
 * @returns {Promise<{ loadedTools: Array, configurable: Object }>}
 */
async function loadToolsForExecution({
  req,
  res,
  signal,
  agent,
  toolNames,
  toolRegistry,
  userMCPAuthMap,
  tool_resources,
  streamId = null,
  actionsEnabled,
}) {
  const appConfig = req.config;
  const allLoadedTools = [];
  const configurable = { userMCPAuthMap };

  if (actionsEnabled === undefined) {
    const enabledCapabilities = await resolveAgentCapabilities(req, appConfig, agent?.id);
    actionsEnabled = enabledCapabilities.has(AgentCapabilities.actions);
  }

  const isToolSearch = toolNames.includes(AgentConstants.TOOL_SEARCH);
  const isPTC = toolNames.includes(AgentConstants.PROGRAMMATIC_TOOL_CALLING);

  logger.debug(
    `[loadToolsForExecution] isToolSearch: ${isToolSearch}, toolRegistry: ${toolRegistry?.size ?? 'undefined'}`,
  );

  if (isToolSearch && toolRegistry) {
    const toolSearchTool = createToolSearch({
      mode: 'local',
      toolRegistry,
    });
    allLoadedTools.push(toolSearchTool);
    configurable.toolRegistry = toolRegistry;
  }

  if (isPTC && toolRegistry) {
    configurable.toolRegistry = toolRegistry;
    try {
      /**
       * PTC auth is handled by the agents library / sandbox service
       * directly; LibreChat no longer threads a per-run credential.
       */
      const ptcTool = createProgrammaticToolCallingTool({});
      allLoadedTools.push(ptcTool);
    } catch (error) {
      logger.error('[loadToolsForExecution] Error creating PTC tool:', error);
    }
  }

  const isBashTool = toolNames.includes(AgentConstants.BASH_TOOL);
  if (isBashTool) {
    try {
      const bashTool = createBashExecutionTool({});
      allLoadedTools.push(bashTool);
    } catch (error) {
      logger.error('[loadToolsForExecution] Failed to create bash_tool', error);
    }
  }

  const specialToolNames = new Set([
    AgentConstants.TOOL_SEARCH,
    AgentConstants.PROGRAMMATIC_TOOL_CALLING,
    AgentConstants.BASH_TOOL,
    AgentConstants.SKILL_TOOL,
    AgentConstants.READ_FILE,
  ]);

  let ptcOrchestratedToolNames = [];
  if (isPTC && toolRegistry) {
    ptcOrchestratedToolNames = Array.from(toolRegistry.keys()).filter(
      (name) => !specialToolNames.has(name),
    );
  }

  const requestedNonSpecialToolNames = toolNames.filter((name) => !specialToolNames.has(name));
  const allToolNamesToLoad = isPTC
    ? [...new Set([...requestedNonSpecialToolNames, ...ptcOrchestratedToolNames])]
    : requestedNonSpecialToolNames;

  const actionToolNames = [];
  const regularToolNames = [];
  for (const name of allToolNamesToLoad) {
    (isActionTool(name) ? actionToolNames : regularToolNames).push(name);
  }

  if (regularToolNames.length > 0) {
    const includesWebSearch = regularToolNames.includes(Tools.web_search);
    const webSearchCallbacks = includesWebSearch ? createOnSearchResults(res, streamId) : undefined;

    const { loadedTools } = await loadTools({
      agent,
      signal,
      userMCPAuthMap,
      functions: true,
      tools: regularToolNames,
      user: req.user.id,
      options: {
        req,
        res,
        tool_resources,
        processFileURL,
        uploadImageBuffer,
        returnMetadata: true,
        [Tools.web_search]: webSearchCallbacks,
      },
      webSearch: appConfig?.webSearch,
      fileStrategy: appConfig?.fileStrategy,
      imageOutputType: appConfig?.imageOutputType,
    });

    if (loadedTools) {
      allLoadedTools.push(...loadedTools);
    }
  }

  if (actionToolNames.length > 0 && agent && actionsEnabled) {
    const actionTools = await loadActionToolsForExecution({
      req,
      res,
      agent,
      appConfig,
      streamId,
      actionToolNames,
    });
    allLoadedTools.push(...actionTools);
  } else if (actionToolNames.length > 0 && agent && !actionsEnabled) {
    logger.warn(
      `[loadToolsForExecution] Capability "${AgentCapabilities.actions}" disabled. ` +
        `Skipping action tool execution. User: ${req.user.id} | Agent: ${agent.id} | Tools: ${actionToolNames.join(', ')}`,
    );
  }

  if (isPTC && allLoadedTools.length > 0) {
    const ptcToolMap = new Map();
    for (const tool of allLoadedTools) {
      if (tool.name && tool.name !== AgentConstants.PROGRAMMATIC_TOOL_CALLING) {
        ptcToolMap.set(tool.name, tool);
      }
    }
    configurable.ptcToolMap = ptcToolMap;
  }

  return {
    configurable,
    loadedTools: allLoadedTools,
  };
}

/**
|
|
* Loads action tools for event-driven execution.
|
|
* @param {Object} params
|
|
* @param {ServerRequest} params.req - The request object
|
|
* @param {ServerResponse} params.res - The response object
|
|
* @param {Object} params.agent - The agent object
|
|
* @param {Object} params.appConfig - App configuration
|
|
* @param {string|null} params.streamId - Stream ID
|
|
* @param {string[]} params.actionToolNames - Action tool names to load
|
|
* @returns {Promise<Array>} Loaded action tools
|
|
*/
|
|
async function loadActionToolsForExecution({
|
|
req,
|
|
res,
|
|
agent,
|
|
appConfig,
|
|
streamId,
|
|
actionToolNames,
|
|
}) {
|
|
const loadedActionTools = [];
|
|
|
|
const actionSets = (await loadActionSets({ agent_id: agent.id })) ?? [];
|
|
if (actionSets.length === 0) {
|
|
return loadedActionTools;
|
|
}
|
|
|
|
// See registerActionTools for the key-shape rationale.
|
|
const toolToAction = new Map();
|
|
const allowedDomains = appConfig?.actions?.allowedDomains;
|
|
|
|
for (const action of actionSets) {
|
|
const domain = await domainParser(action.metadata.domain, true);
|
|
const normalizedDomain = domain.replace(domainSeparatorRegex, '_');
|
|
const legacyDomain = legacyDomainEncode(action.metadata.domain);
|
|
const legacyNormalized = legacyDomain.replace(domainSeparatorRegex, '_');
|
|
|
|
const isDomainAllowed = await isActionDomainAllowed(action.metadata.domain, allowedDomains);
|
|
if (!isDomainAllowed) {
|
|
logger.warn(
|
|
`[Actions] Domain "${action.metadata.domain}" not in allowedDomains. ` +
|
|
`Add it to librechat.yaml actions.allowedDomains to enable this action.`,
|
|
);
|
|
continue;
|
|
}
|
|
|
|
const validationResult = validateAndParseOpenAPISpec(action.metadata.raw_spec);
|
|
if (!validationResult.spec || !validationResult.serverUrl) {
|
|
logger.warn(`[Actions] Invalid OpenAPI spec for domain: ${domain}`);
|
|
continue;
|
|
}
|
|
|
|
const domainValidation = validateActionDomain(
|
|
action.metadata.domain,
|
|
validationResult.serverUrl,
|
|
);
|
|
if (!domainValidation.isValid) {
|
|
logger.error(`Domain mismatch in stored action: ${domainValidation.message}`, {
|
|
userId: req.user.id,
|
|
agent_id: agent.id,
|
|
action_id: action.action_id,
|
|
});
|
|
continue;
|
|
}
|
|
|
|
const encrypted = {
|
|
oauth_client_id: action.metadata.oauth_client_id,
|
|
oauth_client_secret: action.metadata.oauth_client_secret,
|
|
};
|
|
|
|
const decryptedAction = { ...action };
|
|
decryptedAction.metadata = await decryptMetadata(action.metadata);
|
|
|
|
const { requestBuilders, functionSignatures, zodSchemas } = openapiToFunction(
|
|
validationResult.spec,
|
|
true,
|
|
);
|
|
|
|
registerActionTools({
|
|
toolToAction,
|
|
functionSignatures,
|
|
normalizedDomain,
|
|
legacyNormalized,
|
|
makeEntry: (sig) => ({
|
|
action: decryptedAction,
|
|
requestBuilder: requestBuilders[sig.name],
|
|
zodSchema: zodSchemas[sig.name],
|
|
functionSignature: sig,
|
|
encrypted,
|
|
}),
|
|
});
|
|
}
|
|
|
|
for (const toolName of actionToolNames) {
|
|
const entry = toolToAction.get(normalizeActionToolName(toolName));
|
|
if (!entry) {
|
|
continue;
|
|
}
|
|
|
|
const { action, encrypted, zodSchema, requestBuilder, functionSignature } = entry;
|
|
const tool = await createActionTool({
|
|
userId: req.user.id,
|
|
res,
|
|
action,
|
|
streamId,
|
|
zodSchema,
|
|
encrypted,
|
|
requestBuilder,
|
|
name: toolName,
|
|
description: functionSignature.description,
|
|
useSSRFProtection: !Array.isArray(allowedDomains) || allowedDomains.length === 0,
|
|
});
|
|
|
|
if (!tool) {
|
|
logger.warn(`[Actions] Failed to create action tool: ${toolName}`);
|
|
continue;
|
|
}
|
|
|
|
loadedActionTools.push(tool);
|
|
}
|
|
|
|
return loadedActionTools;
|
|
}
|
|
|
|
module.exports = {
|
|
loadTools,
|
|
isBuiltInTool,
|
|
getToolkitKey,
|
|
loadAgentTools,
|
|
loadToolsForExecution,
|
|
processRequiredActions,
|
|
resolveAgentCapabilities,
|
|
};
|