mirror of
https://github.com/danny-avila/LibreChat.git
synced 2026-05-14 08:27:49 +00:00
* 🪟 feat: Add allowedAddresses Exemption List For SSRF-Guarded Targets

  LibreChat already blocks SSRF-prone targets (private IPs, loopback,
  link-local, .internal/.local TLDs) at every server-side fetch site that
  consumes user-controllable URLs — custom-endpoint baseURLs, MCP servers,
  OpenAPI Actions, and OAuth endpoints. The only existing escape hatch is
  `allowedDomains`, but that flips the field into a strict whitelist: adding
  `127.0.0.1` to permit a self-hosted Ollama also blocks every public
  destination that isn't in the list.

  Introduce `allowedAddresses` as the orthogonal primitive: a private-IP-space
  exemption list. When a hostname or its resolved IP appears in the list, the
  SSRF block is bypassed for that target. Public destinations remain reachable.
  Operators can now run self-hosted LLMs / MCP servers / Action endpoints on
  private addresses without weakening the default-deny posture for everything
  else.

  Schema additions in `packages/data-provider/src/config.ts`:
  - `endpoints.allowedAddresses` (new — gates `validateEndpointURL`)
  - `mcpSettings.allowedAddresses` (parallel to `allowedDomains`)
  - `actions.allowedAddresses` (parallel to `allowedDomains`)

  Core changes in `packages/api/src/auth/`:
  - New `isAddressAllowed(hostnameOrIP, allowedAddresses)` — pure,
    case-insensitive, bracket-stripped literal match.
  - Threaded the list through `isSSRFTarget`, `resolveHostnameSSRF`,
    `isDomainAllowedCore`, `isActionDomainAllowed`, `isMCPDomainAllowed`,
    `isOAuthUrlAllowed`, and `validateEndpointURL`.
  - Extended `createSSRFSafeAgents` and `createSSRFSafeUndiciConnect` to accept
    the list, building an SSRF-safe DNS lookup that exempts matching
    hostnames/IPs at TCP connect time (TOCTOU-safe).

  Wiring:
  - Custom and OpenAI endpoint initialize sites pass
    `endpoints.allowedAddresses` to `validateEndpointURL`.
  - `MCPServersRegistry` stores `allowedAddresses` and exposes it via
    `getAllowedAddresses()`. The factory, connection class, manager,
    `UserConnectionManager`, and `ConnectionsRepository` all thread it through
    to the SSRF utilities.
  - `MCPOAuthHandler.initiateOAuthFlow`, `refreshOAuthTokens`, and
    `validateOAuthUrl` accept the list and consult it on every URL validation
    along the OAuth chain.
  - `ToolService`, `ActionService`, and the assistants/agents action routes
    pass `actions.allowedAddresses` to `isActionDomainAllowed` and to
    `createSSRFSafeAgents` for runtime action calls.
  - `initializeMCPs.js` reads `mcpSettings.allowedAddresses` from the app
    config and forwards it to the registry constructor.

  Documentation:
  - `librechat.example.yaml` shows the new field next to each existing
    `allowedDomains` block, with a note clarifying that `allowedAddresses` is
    an exemption list (not a whitelist).

  Tests:
  - Unit tests for `isAddressAllowed` covering literal IPs, hostnames, IPv6
    brackets, case insensitivity, and partial-match rejection.
  - Exemption tests for every entry point: `isSSRFTarget`,
    `resolveHostnameSSRF`, `validateEndpointURL`, `isActionDomainAllowed`,
    `isMCPDomainAllowed`, `isOAuthUrlAllowed`.
  - Existing tests updated to reflect the new optional parameter.

  Default behavior is unchanged: omitted = empty list = no exemptions.

* 🩹 fix: Plumb allowedAddresses Through AppConfig endpoints Type

  The initial PR added `endpoints.allowedAddresses` to the data-provider config
  schema and consumed it in the endpoint initialize sites, but the runtime
  `AppConfig.endpoints` shape in `@librechat/data-schemas` was a
  hand-maintained subset that didn't include the new field — so `tsc` rejected
  `appConfig.endpoints.allowedAddresses`.

  Add the field to `AppConfig['endpoints']` in
  `packages/data-schemas/src/types/app.ts` and forward it from the loaded
  config in `packages/data-schemas/src/app/endpoints.ts` so the runtime config
  carries the value.

  Update `initializeMCPs.spec.js` to expect the third positional argument
  (`allowedAddresses`) on the `createMCPServersRegistry` call.
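The literal-match semantics the feature commit describes for `isAddressAllowed` can be sketched as follows. This is an illustrative reimplementation, not the LibreChat source: the signature mirrors the commit message, but the internals are assumptions.

```javascript
// Sketch of the exemption check described above: a pure, case-insensitive,
// bracket-stripped literal match against the configured list.
// Illustrative only; the real helper lives in packages/api/src/auth/.
function isAddressAllowed(hostnameOrIP, allowedAddresses) {
  if (!Array.isArray(allowedAddresses) || allowedAddresses.length === 0) {
    return false; // omitted = empty list = no exemptions (default-deny)
  }
  // Strip IPv6 brackets ("[::1]" -> "::1") and lowercase for comparison.
  const normalize = (value) =>
    String(value).trim().toLowerCase().replace(/^\[/, '').replace(/\]$/, '');
  const candidate = normalize(hostnameOrIP);
  // Literal match only: "10.0.0.5" must not match "10.0.0.50".
  return allowedAddresses.some((entry) => normalize(entry) === candidate);
}
```

With this shape, `isAddressAllowed('[::1]', ['::1'])` and `isAddressAllowed('LOCALHOST', ['localhost'])` match, while partial matches and the empty list are rejected.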
* 🩹 fix: Enforce allowedDomains Before allowedAddresses In isOAuthUrlAllowed

  The initial implementation checked the address exemption first, so a URL
  whose hostname appeared in `allowedAddresses` would return true even when the
  admin had configured `allowedDomains` as a strict bound on OAuth endpoints. A
  malicious MCP server could advertise OAuth metadata, token, or revocation
  URLs at any address the admin had permitted for an unrelated reason (a
  self-hosted LLM at `127.0.0.1`, for example) and pass validation, expanding
  SSRF reach beyond the configured domain whitelist.

  Reorder: when `allowedDomains` is set, treat it as authoritative — return
  true only if the URL matches a domain entry, otherwise fall through to false.
  The address exemption only applies when no `allowedDomains` is configured
  (mirrors how the downstream SSRF check in `validateOAuthUrl` consults
  `allowedAddresses`).

  Add a regression test asserting that an `allowedAddresses` entry does not
  broaden a configured `allowedDomains` list.

  Reported by chatgpt-codex-connector on PR #12933.

* 🩹 fix: Forward allowedAddresses To Remaining OAuth Callers

  Two `MCPOAuthHandler` callers still used the pre-feature signatures and were
  silently dropping the new `allowedAddresses` argument:

  - `api/server/routes/mcp.js` invoked `initiateOAuthFlow` with the old
    5-argument shape, so OAuth flows initiated through the route handler
    ignored the registry's `getAllowedAddresses()` and would reject any
    metadata/authorization/token URL on a permitted private host.
  - `api/server/controllers/UserController.js#maybeUninstallOAuthMCP` invoked
    `revokeOAuthToken` without the address exemption, so uninstalling an
    OAuth-backed MCP server on a permitted private host would fail at the
    revocation step even though the rest of the MCP connection path now
    permits it.

  Both sites now read `allowedAddresses` from the registry alongside
  `allowedDomains` and forward it.

  Reported by Copilot on PR #12933.
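The corrected precedence can be sketched as a small classifier. This is illustrative only: the real `isOAuthUrlAllowed` returns a boolean and uses richer domain matching, so the function name, return values, and the simple `includes` check here are all assumptions.

```javascript
// Sketch of the ordering fixed above. Returns which check decides the URL:
// the domain whitelist when configured (authoritative), otherwise the
// private-address exemption, otherwise the downstream SSRF validation.
function classifyOAuthUrl(url, allowedDomains, allowedAddresses) {
  let parsed;
  try {
    parsed = new URL(url); // strict parse: schemeless "10.0.0.5/oauth" is rejected
  } catch {
    return 'invalid';
  }
  const hostname = parsed.hostname.toLowerCase();
  if (Array.isArray(allowedDomains) && allowedDomains.length > 0) {
    // allowedDomains is a strict bound: allowedAddresses must NOT broaden it.
    return allowedDomains.includes(hostname) ? 'domain-allowed' : 'domain-rejected';
  }
  if ((allowedAddresses ?? []).map((a) => a.toLowerCase()).includes(hostname)) {
    return 'address-exempt'; // exemption applies in no-whitelist mode only
  }
  return 'ssrf-check'; // fall through to the standard SSRF validation
}
```

The key regression case: with `allowedDomains` configured, a hostname listed in `allowedAddresses` is still rejected rather than exempted.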
* 🩹 fix: Update Test Mocks And Assertions For OAuth allowedAddresses

  The previous commit started passing `allowedAddresses` to
  `MCPOAuthHandler.initiateOAuthFlow` from `api/server/routes/mcp.js` and to
  `MCPOAuthHandler.revokeOAuthToken` from
  `api/server/controllers/UserController.js`, but the corresponding test files
  mocked the registry without `getAllowedAddresses` (causing `TypeError`s) and
  asserted the old positional shape on `toHaveBeenCalledWith`.

  Update the mocks and assertions to match the new arity:
  - `api/server/routes/__tests__/mcp.spec.js`: add
    `getAllowedDomains`/`getAllowedAddresses` to the registry mock and expect
    the additional positional args on `initiateOAuthFlow`.
  - `api/server/controllers/__tests__/maybeUninstallOAuthMCP.spec.js`: add a
    `getAllowedAddresses` mock alongside the existing `getAllowedDomains` and
    seed it in `setupOAuthServerFound`.
  - `api/server/controllers/__tests__/UserController.mcpOAuth.spec.js`: add
    `getAllowedAddresses` to the registry mock and expect the trailing `null`
    arg on the three `revokeOAuthToken` assertions.

* 🛡️ fix: Address Comprehensive Review — Scope allowedAddresses To Private IP Space

  Major findings from the comprehensive PR review (severity → fix):

  **CRITICAL — `validateOAuthUrl` SSRF fallback bypass.** When `allowedDomains`
  is configured and a URL fails the whitelist, the SSRF fallback in
  `validateOAuthUrl` was still passing `allowedAddresses` to `isSSRFTarget` /
  `resolveHostnameSSRF`, letting a malicious MCP server advertise OAuth
  endpoints at any address the admin had permitted for an unrelated reason.
  Suppress `allowedAddresses` in the fallback when `allowedDomains` is active —
  the address exemption is opt-in for the no-whitelist mode only.

  **MAJOR — WebSocket transport SSRF check ignored exemptions.** The
  `constructTransport` WebSocket branch called `resolveHostnameSSRF(wsHostname)`
  without `this.allowedAddresses`, so a permitted private MCP server would pass
  `isMCPDomainAllowed` but be blocked at transport creation. Forward the
  exemption.

  **Scope `allowedAddresses` to private IP space only (operator directive).**
  The exemption list is for permitting private/internal targets; it must not be
  a back-door to broaden trust to public destinations.
  - Schema (`packages/data-provider/src/config.ts`): new
    `allowedAddressesSchema` rejects URLs (`://`), paths/CIDR (`/`),
    whitespace, and public IPv4/IPv6 literals at config-load time. Wired into
    `endpoints`, `mcpSettings`, and `actions`.
  - Runtime (`packages/api/src/auth/domain.ts`): `isAddressAllowed` now drops
    public-IP candidates and public-IP entries on the match path — defense in
    depth so a misconfigured runtime list never grants exemption.
  - Hot path (`packages/api/src/auth/agent.ts`): `buildSSRFSafeLookup`
    pre-normalizes the list into a `Set<string>` once at construction and
    applies the same scoping filter, so the connect-time DNS lookup is an O(1)
    Set membership check instead of a full re-iterate-and-normalize on every
    outbound request.

  **Test coverage for the connect-time and OAuth-fallback paths.**
  - `agent.spec.ts`: new describe block exercising `buildSSRFSafeLookup` and
    `createSSRFSafe*` with `allowedAddresses` — hostname-literal exemption,
    resolved-IP exemption, public-IP scoping, URL/CIDR/whitespace rejection,
    and the default no-list block.
  - `handler.allowedAddresses.test.ts` (new): integration tests for
    `validateOAuthUrl` — covers both the no-domains-set "permit private" path
    and the strict-bound regression where `allowedAddresses` must NOT bypass
    `allowedDomains`.

  **Documentation & cleanup.**
  - `connection.ts` redirect SSRF check: explicit comment that
    `allowedAddresses` is intentionally NOT consulted for redirect targets
    (server-controlled, must not inherit the admin's exemption).
  - `MCPConnectionFactory.test.ts`: replaced an `eslint-disable` with a proper
    `import { getTenantId } from '@librechat/data-schemas'`. The disable was
    added to make a pre-existing `require()` quiet — the cleaner fix is to use
    the existing top-level import.

  Updated `MCPConnectionSSRF.test.ts` WebSocket SSRF assertions to match the
  new two-argument call shape (`hostname, allowedAddresses`).

* 🩹 fix: Require Absolute URL Before allowedAddresses Trust Bypass In isOAuthUrlAllowed

  `parseDomainSpec` is lenient — it silently prepends `https://` to schemeless
  inputs so it can match patterns like bare `example.com`. That leniency leaked
  into `isOAuthUrlAllowed`'s new `allowedAddresses` short-circuit: a value like
  `10.0.0.5/oauth` (no scheme) would parse successfully via the prepended
  default, hit the address-exemption path, return `true`, and skip
  `validateOAuthUrl`'s strict `new URL(url)` parse-or-throw — only to fail
  later in OAuth discovery with a less clear runtime error.

  Add a strict `new URL(url)` gate at the top of `isOAuthUrlAllowed`.
  Schemeless inputs now fall through to `validateOAuthUrl`'s explicit
  "Invalid OAuth <field>" rejection.

  Tests added in both `auth/domain.spec.ts` (unit) and the OAuth handler
  integration spec (end-to-end).

  Reported by chatgpt-codex-connector (P2) on PR #12933.

* 🛡️ fix: Address Follow-Up Comprehensive Review — Schema Tests, Shared Normalization, host:port

  Auditing the second comprehensive review:

  **F1 MAJOR — schema validation untested.** `allowedAddressesSchema` had zero
  coverage, so a regression in the three refinement stages or the three wiring
  locations (`endpoints` / `mcpSettings` / `actions`) would silently let
  invalid entries reach the runtime.
  Added a dedicated `describe('allowedAddressesSchema')` block in
  `config.spec.ts` covering: valid private IPs (v4 + v6, including the
  previously-missed 192.0.0.0/24 range), accepted hostnames, all rejection
  categories (URLs, CIDR, paths, whitespace tabs/newlines, host:port, public
  IP literals), and full `configSchema.parse()` integration at each of the
  three nesting points.

  **F2 MINOR — `isPrivateIPv4Literal` divergence.** The schema reimpl in
  `packages/data-provider` was discarding the `c` octet, so the `192.0.0.0/24`
  (RFC 5736 IETF protocol assignments) range that the authoritative
  `isPrivateIPv4` accepts was being rejected with a misleading "public IP"
  error. Destructure `c` and add the missing range check; covered by the new
  schema tests.

  **F3 MINOR — DRY violation across `domain.ts` and `agent.ts`.** Both files
  had independent normalization implementations with a subtle whitespace-check
  divergence (`/\s/` vs `.includes(' ')`). Extracted the shared logic into a
  new `packages/api/src/auth/allowedAddresses.ts` module that both consumers
  import:
  - `normalizeAddressEntry(entry)` — single-entry shape check
  - `looksLikeHostPort(entry)` — host:port detector (used by F4)
  - `normalizeAllowedAddressesSet(list)` — pre-normalized Set for the
    connect-time hot path
  - `isAddressInAllowedSet(candidate, set)` — membership check that enforces
    private-IP scoping on the candidate

  Both `isAddressAllowed` (preflight) and `buildSSRFSafeLookup` (connect) now
  go through the same primitives; the whitespace divergence is gone.

  To break the import cycle (`allowedAddresses` needs `isPrivateIP`, `domain`
  previously owned it), extracted IP private-range detection into a leaf
  `auth/ip.ts` module. `domain.ts` re-exports `isPrivateIP` for backward
  compatibility with existing call sites.
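The shared-normalization shape described for F2/F3 can be sketched like this. The helper names follow the commit message, but the internals, including the exact private-range list, are simplified assumptions rather than the actual implementation.

```javascript
// Sketch of the shared helpers described above: entries are pre-normalized
// into a Set once, and the connect-time check is an O(1) membership test that
// also enforces private-IP scoping on the candidate. Illustrative only.
const normalizeAddressEntry = (entry) =>
  String(entry).trim().toLowerCase().replace(/^\[/, '').replace(/\]$/, '');

// Simplified private/loopback/link-local IPv4 ranges, including the
// 192.0.0.0/24 block (RFC 5736) that the F2 fix restored by destructuring
// the third octet. Not an exhaustive reproduction of the real range list.
function isPrivateIPv4(ip) {
  const m = ip.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/);
  if (!m) return false;
  const [a, b, c] = [Number(m[1]), Number(m[2]), Number(m[3])];
  return (
    a === 10 ||
    a === 127 ||
    (a === 172 && b >= 16 && b <= 31) ||
    (a === 192 && b === 168) ||
    (a === 192 && b === 0 && c === 0) || // 192.0.0.0/24, previously missed
    (a === 169 && b === 254)
  );
}

const normalizeAllowedAddressesSet = (list) =>
  new Set((list ?? []).map(normalizeAddressEntry));

function isAddressInAllowedSet(candidate, set) {
  const normalized = normalizeAddressEntry(candidate);
  // Scope to private IP space: a public-IP candidate never gets an exemption,
  // even if a misconfigured runtime list contains it (defense in depth).
  const looksLikeIPv4 = /^\d{1,3}(\.\d{1,3}){3}$/.test(normalized);
  if (looksLikeIPv4 && !isPrivateIPv4(normalized)) return false;
  return set.has(normalized);
}
```

Building the Set once at construction is what makes the connect-time DNS lookup cheap: every outbound request pays only a hash lookup, not a list re-scan.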
  **F4 MINOR — `host:port` silently misclassified.** Entries like
  `localhost:8080` previously slipped through the URL/path guard, were
  mis-detected as IPv6, failed `isPrivateIP`, and were silently dropped with a
  misleading "public IP" schema error. Added an explicit `looksLikeHostPort`
  check with a clear error: "allowedAddresses entries must not include a port —
  list the bare hostname or IP only." Bare `::1`, `[::1]`, and other valid
  IPv6 literals are intentionally not matched (regex distinguishes by colon
  count and the bracketed `[ipv6]:port` form).

  **F5 MINOR — hostname-trust documentation gap.** Hostname entries
  short-circuit `resolveHostnameSSRF` before any DNS lookup — that's a
  deliberate design (admin trusts the name) but it means the exemption follows
  whatever the name resolves to at runtime. Added an explicit note in
  `librechat.example.yaml` for both `mcpSettings.allowedAddresses` and
  `endpoints.allowedAddresses`: "a hostname entry trusts whatever IP that name
  resolves to. Only list hostnames whose DNS you control. Prefer literal IPs
  when you can."

  **F6** (8 positional params) is flagged for follow-up; refactor to an
  options object is a breaking-API change deferred to a separate PR.

  **F7** (redirect/WebSocket asymmetry, NIT, conf 40) — skipping; the existing
  inline comment is sufficient.

* 🧹 chore: Address Follow-Up NITs — Import Order And Mirror-Function Naming

  Three NITs from the latest comprehensive review:

  **NIT #1 (conf 85) — local import order.** AGENTS.md requires local imports
  sorted longest-to-shortest. Both `domain.ts` and `agent.ts` had `./ip`
  (shorter) before `./allowedAddresses` (longer). Swapped.

  **NIT #2 (conf 60) — missing cross-reference.** The schema-side
  `isHostPortShape` in `packages/data-provider/src/config.ts` had no note
  pointing at the canonical runtime mirror.
  Added a JSDoc paragraph explaining the mirror relationship and why a local
  copy exists (the data-provider package can't import from `@librechat/api`
  without creating a circular dependency).

  **NIT #3 (conf 50) — naming inconsistency.** Renamed `isHostPortShape` →
  `looksLikeHostPort` so the schema mirror matches the runtime helper exactly.
  Kept as a separate function (not a shared import) for the same
  circular-dependency reason; the matching name makes it obvious they should
  stay in lockstep.
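The colon-count/bracket distinction described for `looksLikeHostPort` can be sketched as follows. The regexes here are an assumed reconstruction of the behavior the commit describes, not the actual source.

```javascript
// Sketch of the host:port detector described above (F4 / NIT #3).
// It must flag "localhost:8080" and "[::1]:443" while leaving bare IPv6
// literals like "::1" alone: a bracketed [ipv6]:port form is matched
// explicitly, and the unbracketed form requires exactly one colon.
function looksLikeHostPort(entry) {
  // Bracketed IPv6 with a port: "[::1]:443"
  if (/^\[[^\]]+\]:\d+$/.test(entry)) return true;
  // Exactly one colon followed by digits: "localhost:8080", "10.0.0.5:80".
  // Bare IPv6 literals contain two or more colons and are not matched.
  return /^[^:]+:\d+$/.test(entry);
}
```

This keeps the schema error specific ("must not include a port") instead of the misleading "public IP" rejection that the mis-detected IPv6 path produced.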
1528 lines
49 KiB
JavaScript
const { logger } = require('@librechat/data-schemas');
const { tool: toolFn, DynamicStructuredTool } = require('@librechat/agents/langchain/tools');
const {
  sleep,
  StepTypes,
  GraphEvents,
  createToolSearch,
  createBashExecutionTool,
  Constants: AgentConstants,
  createProgrammaticToolCallingTool,
} = require('@librechat/agents');
const {
  sendEvent,
  getToolkitKey,
  getUserMCPAuthMap,
  loadToolDefinitions,
  GenerationJobManager,
  isActionDomainAllowed,
  buildWebSearchContext,
  buildImageToolContext,
  buildOAuthToolCallName,
  buildToolClassification,
  buildWebSearchDynamicContext,
} = require('@librechat/api');
const {
  Time,
  Tools,
  Constants,
  CacheKeys,
  ErrorTypes,
  ContentTypes,
  imageGenTools,
  EModelEndpoint,
  EToolResources,
  isActionTool,
  actionDelimiter,
  ImageVisionTool,
  openapiToFunction,
  AgentCapabilities,
  isEphemeralAgentId,
  validateActionDomain,
  actionDomainSeparator,
  defaultAgentCapabilities,
  validateAndParseOpenAPISpec,
} = require('librechat-data-provider');
const {
  createActionTool,
  legacyDomainEncode,
  decryptMetadata,
  loadActionSets,
  domainParser,
} = require('./ActionService');
const {
  getEndpointsConfig,
  getMCPServerTools,
  getCachedTools,
} = require('~/server/services/Config');
const { processFileURL, uploadImageBuffer } = require('~/server/services/Files/process');
const { primeFiles: primeSearchFiles } = require('~/app/clients/tools/util/fileSearch');
const { primeFiles: primeCodeFiles } = require('~/server/services/Files/Code/process');
const { manifestToolMap, toolkits } = require('~/app/clients/tools/manifest');
const { createOnSearchResults } = require('~/server/services/Tools/search');
const { reinitMCPServer } = require('~/server/services/Tools/mcp');
const { resolveConfigServers } = require('~/server/services/MCP');
const { recordUsage } = require('~/server/services/Threads');
const { loadTools } = require('~/app/clients/tools/util');
const { redactMessage } = require('~/config/parsers');
const { findPluginAuthsByKeys } = require('~/models');
const { getFlowStateManager } = require('~/config');
const { getLogStores } = require('~/cache');

const domainSeparatorRegex = new RegExp(actionDomainSeparator, 'g');

/**
 * Collapse every `actionDomainSeparator` sequence in the encoded-domain
 * suffix of a fully-qualified action tool name to an underscore. Agents
 * can store tool names in the raw `domainParser(..., true)` output,
 * which for short hostnames is a `---`-separated string (e.g.
 * `medium---com`). The lookup maps below are always keyed with the
 * `_`-collapsed domain, so every read must normalize that suffix or
 * short-hostname tools silently fail to resolve.
 *
 * The operationId portion (everything before the last `actionDelimiter`)
 * is deliberately left untouched: `openapiToFunction` preserves hyphens
 * in generated operationIds, so two specs can legitimately produce
 * operationIds that differ only in hyphens-vs-underscores (e.g.
 * `get_foo---bar` vs `get_foo_bar`). Collapsing the operationId would
 * merge those into a single map slot and silently drop one tool.
 */
const normalizeActionToolName = (toolName) => {
  const delimiterIndex = toolName.lastIndexOf(actionDelimiter);
  if (delimiterIndex === -1) {
    return toolName;
  }
  const prefixEnd = delimiterIndex + actionDelimiter.length;
  const encodedDomain = toolName.slice(prefixEnd);
  return toolName.slice(0, prefixEnd) + encodedDomain.replace(domainSeparatorRegex, '_');
};

/**
 * Populate a `toolToAction` map with one slot per fully-qualified tool
 * name (`<operationId><actionDelimiter><encoded-domain>`). Both the new
 * and the legacy encodings of the domain are registered for every
 * function so agents whose stored tool names predate the current
 * encoding still resolve correctly.
 *
 * Indexing on the full tool name instead of the encoded domain alone is
 * what makes multi-action agents work when two actions share a hostname:
 * the operationId disambiguates them, so neither overwrites the other.
 *
 * Two actions that additionally share the same operationId still
 * collide (nothing in the key distinguishes them). That case is
 * pathological — `sanitizeOperationId` plus OpenAPI's own uniqueness
 * requirement make it very unlikely — but when it does happen we log
 * a warning so the silent-overwrite mode from the original bug cannot
 * reappear under a different disguise.
 */
const registerActionTools = ({
  toolToAction,
  functionSignatures,
  normalizedDomain,
  legacyNormalized,
  makeEntry,
}) => {
  const setKey = (key, entry) => {
    if (toolToAction.has(key)) {
      logger.warn(
        `[Actions] operationId collision: "${key}" already registered; ` +
          `action "${entry.action?.action_id}" overwrites the previous entry. ` +
          `Two actions share both the operationId and the encoded hostname.`,
      );
    }
    toolToAction.set(key, entry);
  };

  for (const sig of functionSignatures) {
    const entry = makeEntry(sig);
    // Use `sig.name` verbatim: `openapiToFunction` keeps hyphens in
    // generated operationIds, so `get_foo---bar` and `get_foo_bar` are
    // distinct operations on the same spec. `normalizeActionToolName`
    // only touches the encoded-domain suffix at lookup time, so map
    // keys and lookups stay consistent without merging distinct
    // operationIds into the same slot.
    setKey(`${sig.name}${actionDelimiter}${normalizedDomain}`, entry);
    if (legacyNormalized !== normalizedDomain) {
      setKey(`${sig.name}${actionDelimiter}${legacyNormalized}`, entry);
    }
  }
};

/**
 * Resolves the set of enabled agent capabilities from endpoints config,
 * falling back to app-level or default capabilities for ephemeral agents.
 * @param {ServerRequest} req
 * @param {Object} appConfig
 * @param {string} agentId
 * @returns {Promise<Set<string>>}
 */
async function resolveAgentCapabilities(req, appConfig, agentId) {
  const endpointsConfig = await getEndpointsConfig(req);
  let capabilities = new Set(endpointsConfig?.[EModelEndpoint.agents]?.capabilities ?? []);
  if (capabilities.size === 0 && isEphemeralAgentId(agentId)) {
    capabilities = new Set(
      appConfig.endpoints?.[EModelEndpoint.agents]?.capabilities ?? defaultAgentCapabilities,
    );
  }
  return capabilities;
}

/**
 * Resolves a vision tool call by awaiting the client's pending vision request.
 * @param {OpenAIClient | StreamRunManager} client - OpenAI or StreamRunManager Client.
 * @param {RequiredAction} currentAction - The current required action.
 * @returns {Promise<ToolOutput>} The output of the vision tool call.
 */
const processVisionRequest = async (client, currentAction) => {
  if (!client.visionPromise) {
    return {
      tool_call_id: currentAction.toolCallId,
      output: 'No image details found.',
    };
  }

  /** @type {ChatCompletion | undefined} */
  const completion = await client.visionPromise;
  if (completion && completion.usage) {
    recordUsage({
      user: client.req.user.id,
      model: client.req.body.model,
      conversationId: (client.responseMessage ?? client.finalMessage).conversationId,
      ...completion.usage,
    });
  }
  const output = completion?.choices?.[0]?.message?.content ?? 'No image details found.';
  return {
    tool_call_id: currentAction.toolCallId,
    output,
  };
};

/**
 * Processes the required actions returned from a run by calling the
 * appropriate tools and submitting their outputs.
 * @param {OpenAIClient | StreamRunManager} client - OpenAI (legacy) or StreamRunManager Client.
 * @param {RequiredAction[]} requiredActions - The required actions to submit outputs for.
 * @returns {Promise<ToolOutputs>} The outputs of the tools.
 */
async function processRequiredActions(client, requiredActions) {
  logger.debug(
    `[required actions] user: ${client.req.user.id} | thread_id: ${requiredActions[0].thread_id} | run_id: ${requiredActions[0].run_id}`,
    requiredActions,
  );
  const appConfig = client.req.config;
  const toolDefinitions = (await getCachedTools()) ?? {};
  const seenToolkits = new Set();
  const tools = requiredActions
    .map((action) => {
      const toolName = action.tool;
      const toolDef = toolDefinitions[toolName];
      if (toolDef && !manifestToolMap[toolName]) {
        for (const toolkit of toolkits) {
          if (seenToolkits.has(toolkit.pluginKey)) {
            return;
          } else if (toolName.startsWith(`${toolkit.pluginKey}_`)) {
            seenToolkits.add(toolkit.pluginKey);
            return toolkit.pluginKey;
          }
        }
      }
      return toolName;
    })
    .filter((toolName) => !!toolName);

  const { loadedTools } = await loadTools({
    user: client.req.user.id,
    model: client.req.body.model ?? 'gpt-4o-mini',
    tools,
    functions: true,
    endpoint: client.req.body.endpoint,
    options: {
      processFileURL,
      req: client.req,
      uploadImageBuffer,
      openAIApiKey: client.apiKey,
      returnMetadata: true,
    },
    webSearch: appConfig.webSearch,
    fileStrategy: appConfig.fileStrategy,
    imageOutputType: appConfig.imageOutputType,
  });

  const ToolMap = loadedTools.reduce((map, tool) => {
    map[tool.name] = tool;
    return map;
  }, {});

  const promises = [];

  let actionSetsData = null;
  let isActionTool = false;
  const ActionToolMap = {};
  const ActionBuildersMap = {};

  for (let i = 0; i < requiredActions.length; i++) {
    const currentAction = requiredActions[i];
    if (currentAction.tool === ImageVisionTool.function.name) {
      promises.push(processVisionRequest(client, currentAction));
      continue;
    }
    let tool = ToolMap[currentAction.tool] ?? ActionToolMap[currentAction.tool];

    const handleToolOutput = async (output) => {
      requiredActions[i].output = output;

      /** @type {FunctionToolCall & PartMetadata} */
      const toolCall = {
        function: {
          name: currentAction.tool,
          arguments: JSON.stringify(currentAction.toolInput),
          output,
        },
        id: currentAction.toolCallId,
        type: 'function',
        progress: 1,
        action: isActionTool,
      };

      const toolCallIndex = client.mappedOrder.get(toolCall.id);

      if (imageGenTools.has(currentAction.tool)) {
        const imageOutput = output;
        toolCall.function.output = `${currentAction.tool} displayed an image. All generated images are already plainly visible, so don't repeat the descriptions in detail. Do not list download links as they are available in the UI already. The user may download the images by clicking on them, but do not mention anything about downloading to the user.`;

        // Streams the "Finished" state of the tool call in the UI
        client.addContentData({
          [ContentTypes.TOOL_CALL]: toolCall,
          index: toolCallIndex,
          type: ContentTypes.TOOL_CALL,
        });

        await sleep(500);

        /** @type {ImageFile} */
        const imageDetails = {
          ...imageOutput,
          ...currentAction.toolInput,
        };

        const image_file = {
          [ContentTypes.IMAGE_FILE]: imageDetails,
          type: ContentTypes.IMAGE_FILE,
          // Replace the tool call output with Image file
          index: toolCallIndex,
        };

        client.addContentData(image_file);

        // Update the stored tool call
        client.seenToolCalls && client.seenToolCalls.set(toolCall.id, toolCall);

        return {
          tool_call_id: currentAction.toolCallId,
          output: toolCall.function.output,
        };
      }

      client.seenToolCalls && client.seenToolCalls.set(toolCall.id, toolCall);
      client.addContentData({
        [ContentTypes.TOOL_CALL]: toolCall,
        index: toolCallIndex,
        type: ContentTypes.TOOL_CALL,
        // TODO: to append tool properties to stream, pass metadata rest to addContentData
        // result: tool.result,
      });

      return {
        tool_call_id: currentAction.toolCallId,
        output,
      };
    };

    if (!tool) {
      // throw new Error(`Tool ${currentAction.tool} not found.`);

      if (!actionSetsData) {
        /** @type {Action[]} */
        const actionSets =
          (await loadActionSets({
            assistant_id: client.req.body.assistant_id,
          })) ?? [];

        // See registerActionTools for the key-shape rationale.
        const toolToAction = new Map();

        for (const action of actionSets) {
          const domain = await domainParser(action.metadata.domain, true);
          const normalizedDomain = domain.replace(domainSeparatorRegex, '_');
          const legacyDomain = legacyDomainEncode(action.metadata.domain);
          const legacyNormalized = legacyDomain.replace(domainSeparatorRegex, '_');

          const isDomainAllowed = await isActionDomainAllowed(
            action.metadata.domain,
            appConfig?.actions?.allowedDomains,
            appConfig?.actions?.allowedAddresses,
          );
          if (!isDomainAllowed) {
            continue;
          }

          // Validate and parse OpenAPI spec
          const validationResult = validateAndParseOpenAPISpec(action.metadata.raw_spec);
          if (!validationResult.spec || !validationResult.serverUrl) {
            throw new Error(
              `Invalid spec: user: ${client.req.user.id} | thread_id: ${requiredActions[0].thread_id} | run_id: ${requiredActions[0].run_id}`,
            );
          }

          // SECURITY: Validate the domain from the spec matches the stored domain
          // This is defense-in-depth to prevent any stored malicious actions
          const domainValidation = validateActionDomain(
            action.metadata.domain,
            validationResult.serverUrl,
          );
          if (!domainValidation.isValid) {
            logger.error(`Domain mismatch in stored action: ${domainValidation.message}`, {
              userId: client.req.user.id,
              action_id: action.action_id,
            });
            continue; // Skip this action rather than failing the entire request
          }

          // Process the OpenAPI spec
          const { requestBuilders, functionSignatures } = openapiToFunction(validationResult.spec);

          // Store encrypted values for OAuth flow
          const encrypted = {
            oauth_client_id: action.metadata.oauth_client_id,
            oauth_client_secret: action.metadata.oauth_client_secret,
          };

          // Decrypt metadata
          const decryptedAction = { ...action };
          decryptedAction.metadata = await decryptMetadata(action.metadata);

          registerActionTools({
            toolToAction,
            functionSignatures,
            normalizedDomain,
            legacyNormalized,
            makeEntry: (sig) => ({
              action: decryptedAction,
              requestBuilder: requestBuilders[sig.name],
              encrypted,
            }),
          });

          // Store builders for reuse
          ActionBuildersMap[action.metadata.domain] = requestBuilders;
        }

        actionSetsData = toolToAction;
      }

      const entry = actionSetsData.get(normalizeActionToolName(currentAction.tool));
      if (!entry) {
        continue;
      }

      const { action, requestBuilder, encrypted } = entry;

      // We've already decrypted the metadata, so we can pass it directly
      const _allowedDomains = appConfig?.actions?.allowedDomains;
      const _allowedAddresses = appConfig?.actions?.allowedAddresses;
      tool = await createActionTool({
        userId: client.req.user.id,
        res: client.res,
        action,
        requestBuilder,
        // Note: intentionally not passing zodSchema, name, and description for assistants API
        encrypted, // Pass the encrypted values for OAuth flow
        useSSRFProtection: !Array.isArray(_allowedDomains) || _allowedDomains.length === 0,
        allowedAddresses: _allowedAddresses,
      });
      if (!tool) {
        logger.warn(
          `Invalid action: user: ${client.req.user.id} | thread_id: ${requiredActions[0].thread_id} | run_id: ${requiredActions[0].run_id} | toolName: ${currentAction.tool}`,
        );
        throw new Error(`{"type":"${ErrorTypes.INVALID_ACTION}"}`);
      }
      isActionTool = !!tool;
      ActionToolMap[currentAction.tool] = tool;
    }

    if (currentAction.tool === 'calculator') {
      currentAction.toolInput = currentAction.toolInput.input;
    }

    const handleToolError = (error) => {
      logger.error(
        `tool_call_id: ${currentAction.toolCallId} | Error processing tool ${currentAction.tool}`,
        error,
      );
      return {
        tool_call_id: currentAction.toolCallId,
        output: `Error processing tool ${currentAction.tool}: ${redactMessage(error.message, 256)}`,
      };
    };

    try {
      const promise = tool
        ._call(currentAction.toolInput)
        .then(handleToolOutput)
        .catch(handleToolError);
      promises.push(promise);
    } catch (error) {
      const toolOutputError = handleToolError(error);
      promises.push(Promise.resolve(toolOutputError));
    }
  }

  return {
    tool_outputs: await Promise.all(promises),
  };
}

/**
 * Processes the runtime tool calls and returns the tool classes.
 * @param {Object} params - Run params containing user and request information.
 * @param {ServerRequest} params.req - The request object.
 * @param {ServerResponse} params.res - The response object.
 * @param {AbortSignal} params.signal
 * @param {Pick<Agent, 'id' | 'provider' | 'model' | 'tools'>} params.agent - The agent to load tools for.
 * @param {string | undefined} [params.openAIApiKey] - The OpenAI API key.
 * @returns {Promise<{
 * tools?: StructuredTool[];
 * toolContextMap?: Record<string, unknown>;
 * dynamicToolContextMap?: Record<string, unknown>;
 * userMCPAuthMap?: Record<string, Record<string, string>>;
 * toolRegistry?: Map<string, import('~/utils/toolClassification').LCTool>;
 * hasDeferredTools?: boolean;
 * }>} The agent tools and registry.
 */
/** Native LibreChat tools that are not in the manifest */
const nativeTools = new Set([Tools.execute_code, Tools.file_search, Tools.web_search]);

/** Checks if a tool name is a known built-in tool */
const isBuiltInTool = (toolName) =>
  Boolean(
    manifestToolMap[toolName] ||
      toolkits.some((t) => t.pluginKey === toolName) ||
      nativeTools.has(toolName),
  );

/**
 * Loads only tool definitions without creating tool instances.
 * This is the efficient path for event-driven mode where tools are loaded on-demand.
 *
 * @param {Object} params
 * @param {ServerRequest} params.req - The request object
 * @param {ServerResponse} [params.res] - The response object for SSE events
 * @param {Object} params.agent - The agent configuration
 * @param {string|null} [params.streamId] - Stream ID for resumable mode
 * @param {Object} [params.tool_resources] - Tool resources
 * @returns {Promise<{
 * toolDefinitions?: import('@librechat/api').LCTool[];
 * toolRegistry?: Map<string, import('@librechat/api').LCTool>;
 * userMCPAuthMap?: Record<string, Record<string, string>>;
 * hasDeferredTools?: boolean;
 * }>}
 */
async function loadToolDefinitionsWrapper({ req, res, agent, streamId = null, tool_resources }) {
  if (!agent.tools || agent.tools.length === 0) {
    return { toolDefinitions: [] };
  }

  if (
    agent.tools.length === 1 &&
    (agent.tools[0] === AgentCapabilities.context || agent.tools[0] === AgentCapabilities.ocr)
  ) {
    return { toolDefinitions: [] };
  }

  const appConfig = req.config;
  const enabledCapabilities = await resolveAgentCapabilities(req, appConfig, agent.id);

  const checkCapability = (capability) => enabledCapabilities.has(capability);
  const areToolsEnabled = checkCapability(AgentCapabilities.tools);
  const actionsEnabled = checkCapability(AgentCapabilities.actions);
  const deferredToolsEnabled = checkCapability(AgentCapabilities.deferred_tools);

  const filteredTools = agent.tools?.filter((tool) => {
    if (tool === Tools.file_search) {
      return checkCapability(AgentCapabilities.file_search);
    }
    if (tool === Tools.execute_code) {
      return checkCapability(AgentCapabilities.execute_code);
    }
    if (tool === Tools.web_search) {
      return checkCapability(AgentCapabilities.web_search);
    }
    if (isActionTool(tool)) {
      return actionsEnabled;
    }
    if (!areToolsEnabled) {
      return false;
    }
    return true;
  });

  if (!filteredTools || filteredTools.length === 0) {
    return { toolDefinitions: [] };
  }

  /** @type {Record<string, Record<string, string>>} */
  let userMCPAuthMap;
  if (agent.tools?.some((t) => t.includes(Constants.mcp_delimiter))) {
    userMCPAuthMap = await getUserMCPAuthMap({
      tools: agent.tools,
      userId: req.user.id,
      findPluginAuthsByKeys,
    });
  }

  const flowsCache = getLogStores(CacheKeys.FLOWS);
  const flowManager = getFlowStateManager(flowsCache);
  const configServers = await resolveConfigServers(req);
  const pendingOAuthServers = new Set();

  const createOAuthEmitter = (serverName) => {
    return async (authURL) => {
      const flowId = `${req.user.id}:${serverName}:${Date.now()}`;
      const stepId = 'step_oauth_login_' + serverName;
      const toolCall = {
        id: flowId,
        name: buildOAuthToolCallName(serverName),
        type: 'tool_call_chunk',
      };

      const runStepData = {
        runId: Constants.USE_PRELIM_RESPONSE_MESSAGE_ID,
        id: stepId,
        type: StepTypes.TOOL_CALLS,
        index: 0,
        stepDetails: {
          type: StepTypes.TOOL_CALLS,
          tool_calls: [toolCall],
        },
      };

      const runStepDeltaData = {
        id: stepId,
        delta: {
          type: StepTypes.TOOL_CALLS,
          tool_calls: [{ ...toolCall, args: '' }],
          auth: authURL,
          expires_at: Date.now() + Time.TWO_MINUTES,
        },
      };

      const runStepEvent = { event: GraphEvents.ON_RUN_STEP, data: runStepData };
      const runStepDeltaEvent = { event: GraphEvents.ON_RUN_STEP_DELTA, data: runStepDeltaData };

      if (streamId) {
        await GenerationJobManager.emitChunk(streamId, runStepEvent);
        await GenerationJobManager.emitChunk(streamId, runStepDeltaEvent);
      } else if (res && !res.writableEnded) {
        sendEvent(res, runStepEvent);
        sendEvent(res, runStepDeltaEvent);
      } else {
        logger.warn(
          `[Tool Definitions] Cannot emit OAuth event for ${serverName}: no streamId and res not available`,
        );
      }
    };
  };

  const getOrFetchMCPServerTools = async (userId, serverName) => {
    const cached = await getMCPServerTools(userId, serverName);
    if (cached) {
      return cached;
    }

    const oauthStart = async () => {
      pendingOAuthServers.add(serverName);
    };

    const result = await reinitMCPServer({
      user: req.user,
      oauthStart,
      flowManager,
      serverName,
      configServers,
      userMCPAuthMap,
    });

    return result?.availableTools || null;
  };

  const getActionToolDefinitions = async (agentId, actionToolNames) => {
    const actionSets = (await loadActionSets({ agent_id: agentId })) ?? [];
    if (actionSets.length === 0) {
      return [];
    }

    const definitions = [];
    const allowedDomains = appConfig?.actions?.allowedDomains;
    const allowedAddresses = appConfig?.actions?.allowedAddresses;
    const normalizedToolNames = new Set(
      actionToolNames.map((n) => n.replace(domainSeparatorRegex, '_')),
    );

    for (const action of actionSets) {
      const domain = await domainParser(action.metadata.domain, true);
      const normalizedDomain = domain.replace(domainSeparatorRegex, '_');

      const legacyDomain = legacyDomainEncode(action.metadata.domain);
      const legacyNormalized = legacyDomain.replace(domainSeparatorRegex, '_');

      const isDomainAllowed = await isActionDomainAllowed(
        action.metadata.domain,
        allowedDomains,
        allowedAddresses,
      );
      if (!isDomainAllowed) {
        logger.warn(
          `[Actions] Domain "${action.metadata.domain}" not in allowedDomains. ` +
            `Add it to librechat.yaml actions.allowedDomains to enable this action.`,
        );
        continue;
      }

      const validationResult = validateAndParseOpenAPISpec(action.metadata.raw_spec);
      if (!validationResult.spec || !validationResult.serverUrl) {
        logger.warn(`[Actions] Invalid OpenAPI spec for domain: ${domain}`);
        continue;
      }

      const { functionSignatures } = openapiToFunction(validationResult.spec, true);

      for (const sig of functionSignatures) {
        const toolName = `${sig.name}${actionDelimiter}${normalizedDomain}`;
        const legacyToolName = `${sig.name}${actionDelimiter}${legacyNormalized}`;
        if (!normalizedToolNames.has(toolName) && !normalizedToolNames.has(legacyToolName)) {
          continue;
        }

        definitions.push({
          name: toolName,
          description: sig.description,
          parameters: sig.parameters,
        });
      }
    }

    return definitions;
  };

  let { toolDefinitions, toolRegistry, hasDeferredTools } = await loadToolDefinitions(
    {
      userId: req.user.id,
      agentId: agent.id,
      tools: filteredTools,
      toolOptions: agent.tool_options,
      deferredToolsEnabled,
    },
    {
      isBuiltInTool,
      getOrFetchMCPServerTools,
      getActionToolDefinitions,
    },
  );

  if (pendingOAuthServers.size > 0 && (res || streamId)) {
    const serverNames = Array.from(pendingOAuthServers);
    logger.info(
      `[Tool Definitions] OAuth required for ${serverNames.length} server(s): ${serverNames.join(', ')}. Emitting events and waiting.`,
    );

    const oauthWaitPromises = serverNames.map(async (serverName) => {
      try {
        const result = await reinitMCPServer({
          user: req.user,
          serverName,
          configServers,
          userMCPAuthMap,
          flowManager,
          returnOnOAuth: false,
          oauthStart: createOAuthEmitter(serverName),
          connectionTimeout: Time.TWO_MINUTES,
        });

        if (result?.availableTools) {
          logger.info(`[Tool Definitions] OAuth completed for ${serverName}, tools available`);
          return { serverName, success: true };
        }
        return { serverName, success: false };
      } catch (error) {
        logger.debug(`[Tool Definitions] OAuth wait failed for ${serverName}:`, error?.message);
        return { serverName, success: false };
      }
    });

    const results = await Promise.allSettled(oauthWaitPromises);
    const successfulServers = results
      .filter((r) => r.status === 'fulfilled' && r.value.success)
      .map((r) => r.value.serverName);

    if (successfulServers.length > 0) {
      logger.info(
        `[Tool Definitions] Reloading tools after OAuth for: ${successfulServers.join(', ')}`,
      );
      const reloadResult = await loadToolDefinitions(
        {
          userId: req.user.id,
          agentId: agent.id,
          tools: filteredTools,
          toolOptions: agent.tool_options,
          deferredToolsEnabled,
        },
        {
          isBuiltInTool,
          getOrFetchMCPServerTools,
          getActionToolDefinitions,
        },
      );
      toolDefinitions = reloadResult.toolDefinitions;
      toolRegistry = reloadResult.toolRegistry;
      hasDeferredTools = reloadResult.hasDeferredTools;
    }
  }

  /** @type {Record<string, string>} */
  const toolContextMap = {};
  /** @type {Record<string, string>} */
  const dynamicToolContextMap = {};
  const hasWebSearch = filteredTools.includes(Tools.web_search);
  const hasFileSearch = filteredTools.includes(Tools.file_search);
  const hasExecuteCode = filteredTools.includes(Tools.execute_code);

  if (hasWebSearch) {
    toolContextMap[Tools.web_search] = buildWebSearchContext();
    dynamicToolContextMap[Tools.web_search] = buildWebSearchDynamicContext(
      req.conversationCreatedAt,
    );
  }

  /**
   * `files` carry the upload session_ids; we surface them so client.js can
   * seed `Graph.sessions[EXECUTE_CODE]` before run start. Without that seed,
   * the agents-side `ToolNode.getCodeSessionContext` returns undefined on
   * call #1, `_injected_files` is never set on the tool call, and the
   * sandbox can't see the prior turn's generated artifacts on first read.
   */
  let primedCodeFiles;
  if (hasExecuteCode && tool_resources) {
    try {
      const { toolContext, files } = await primeCodeFiles({
        req,
        tool_resources,
        agentId: agent.id,
      });
      if (toolContext) {
        dynamicToolContextMap[Tools.execute_code] = toolContext;
      }
      if (files?.length) {
        primedCodeFiles = files;
      }
    } catch (error) {
      logger.error('[loadToolDefinitionsWrapper] Error priming code files:', error);
    }
  }

  if (hasFileSearch && tool_resources) {
    try {
      const { toolContext } = await primeSearchFiles({
        req,
        tool_resources,
        agentId: agent.id,
      });
      if (toolContext) {
        dynamicToolContextMap[Tools.file_search] = toolContext;
      }
    } catch (error) {
      logger.error('[loadToolDefinitionsWrapper] Error priming search files:', error);
    }
  }

  const imageFiles = tool_resources?.[EToolResources.image_edit]?.files ?? [];
  if (imageFiles.length > 0) {
    const hasOaiImageGen = filteredTools.includes('image_gen_oai');
    const hasGeminiImageGen = filteredTools.includes('gemini_image_gen');

    if (hasOaiImageGen) {
      const toolContext = buildImageToolContext({
        imageFiles,
        toolName: `${EToolResources.image_edit}_oai`,
        contextDescription: 'image editing',
      });
      if (toolContext) {
        dynamicToolContextMap.image_edit_oai = toolContext;
      }
    }

    if (hasGeminiImageGen) {
      const toolContext = buildImageToolContext({
        imageFiles,
        toolName: 'gemini_image_gen',
        contextDescription: 'image context',
      });
      if (toolContext) {
        dynamicToolContextMap.gemini_image_gen = toolContext;
      }
    }
  }

  return {
    toolRegistry,
    userMCPAuthMap,
    toolContextMap,
    dynamicToolContextMap,
    toolDefinitions,
    hasDeferredTools,
    actionsEnabled,
    primedCodeFiles,
  };
}

/**
 * Loads agent tools for initialization or execution.
 * @param {Object} params
 * @param {ServerRequest} params.req - The request object
 * @param {ServerResponse} params.res - The response object
 * @param {Object} params.agent - The agent configuration
 * @param {AbortSignal} [params.signal] - Abort signal
 * @param {Object} [params.tool_resources] - Tool resources
 * @param {string} [params.openAIApiKey] - OpenAI API key
 * @param {string|null} [params.streamId] - Stream ID for resumable mode
 * @param {boolean} [params.definitionsOnly=true] - When true, returns only serializable
 *   tool definitions without creating full tool instances. Use for event-driven mode
 *   where tools are loaded on-demand during execution.
 */
async function loadAgentTools({
  req,
  res,
  agent,
  signal,
  tool_resources,
  openAIApiKey,
  streamId = null,
  definitionsOnly = true,
}) {
  if (definitionsOnly) {
    return loadToolDefinitionsWrapper({ req, res, agent, streamId, tool_resources });
  }

  if (!agent.tools || agent.tools.length === 0) {
    return { toolDefinitions: [] };
  } else if (
    agent.tools &&
    agent.tools.length === 1 &&
    /** Legacy handling for `ocr`, which may still exist in existing Agents */
    (agent.tools[0] === AgentCapabilities.context || agent.tools[0] === AgentCapabilities.ocr)
  ) {
    return { toolDefinitions: [] };
  }

  const appConfig = req.config;
  const enabledCapabilities = await resolveAgentCapabilities(req, appConfig, agent.id);
  const checkCapability = (capability) => {
    const enabled = enabledCapabilities.has(capability);
    if (!enabled) {
      const isToolCapability = [
        AgentCapabilities.file_search,
        AgentCapabilities.execute_code,
        AgentCapabilities.web_search,
      ].includes(capability);
      const suffix = isToolCapability ? ' despite configured tool.' : '.';
      logger.warn(
        `Capability "${capability}" disabled${suffix} User: ${req.user.id} | Agent: ${agent.id}`,
      );
    }
    return enabled;
  };
  const areToolsEnabled = checkCapability(AgentCapabilities.tools);
  const actionsEnabled = checkCapability(AgentCapabilities.actions);

  let includesWebSearch = false;
  const _agentTools = agent.tools?.filter((tool) => {
    if (tool === Tools.file_search) {
      return checkCapability(AgentCapabilities.file_search);
    } else if (tool === Tools.execute_code) {
      return checkCapability(AgentCapabilities.execute_code);
    } else if (tool === Tools.web_search) {
      includesWebSearch = checkCapability(AgentCapabilities.web_search);
      return includesWebSearch;
    } else if (isActionTool(tool)) {
      return actionsEnabled;
    } else if (!areToolsEnabled) {
      return false;
    }
    return true;
  });

  if (!_agentTools || _agentTools.length === 0) {
    return {};
  }
  /** @type {ReturnType<typeof createOnSearchResults>} */
  let webSearchCallbacks;
  if (includesWebSearch) {
    webSearchCallbacks = createOnSearchResults(res, streamId);
  }

  /** @type {Record<string, Record<string, string>>} */
  let userMCPAuthMap;
  if (agent.tools?.some((t) => t.includes(Constants.mcp_delimiter))) {
    userMCPAuthMap = await getUserMCPAuthMap({
      tools: agent.tools,
      userId: req.user.id,
      findPluginAuthsByKeys,
    });
  }

  const { loadedTools, toolContextMap, dynamicToolContextMap, primedCodeFiles } = await loadTools({
    agent,
    signal,
    userMCPAuthMap,
    functions: true,
    user: req.user.id,
    tools: _agentTools,
    options: {
      req,
      res,
      openAIApiKey,
      tool_resources,
      processFileURL,
      uploadImageBuffer,
      returnMetadata: true,
      [Tools.web_search]: webSearchCallbacks,
    },
    webSearch: appConfig.webSearch,
    fileStrategy: appConfig.fileStrategy,
    imageOutputType: appConfig.imageOutputType,
  });

  /** Build tool registry from MCP tools and create PTC/tool search tools if configured */
  const deferredToolsEnabled = checkCapability(AgentCapabilities.deferred_tools);
  const { toolRegistry, toolDefinitions, additionalTools, hasDeferredTools } =
    await buildToolClassification({
      loadedTools,
      userId: req.user.id,
      agentId: agent.id,
      agentToolOptions: agent.tool_options,
      deferredToolsEnabled,
    });

  const agentTools = [];
  for (let i = 0; i < loadedTools.length; i++) {
    const tool = loadedTools[i];
    if (tool.name && (tool.name === Tools.execute_code || tool.name === Tools.file_search)) {
      agentTools.push(tool);
      continue;
    }

    if (!areToolsEnabled) {
      continue;
    }

    if (tool.mcp === true) {
      agentTools.push(tool);
      continue;
    }

    if (tool instanceof DynamicStructuredTool) {
      agentTools.push(tool);
      continue;
    }

    const toolDefinition = {
      name: tool.name,
      schema: tool.schema,
      description: tool.description,
    };

    if (imageGenTools.has(tool.name)) {
      toolDefinition.responseFormat = 'content_and_artifact';
    }

    const toolInstance = toolFn(async (...args) => {
      return tool['_call'](...args);
    }, toolDefinition);

    agentTools.push(toolInstance);
  }

  const ToolMap = loadedTools.reduce((map, tool) => {
    map[tool.name] = tool;
    return map;
  }, {});

  agentTools.push(...additionalTools);

  const hasActionTools = _agentTools.some((t) => isActionTool(t));
  if (!hasActionTools) {
    return {
      toolRegistry,
      userMCPAuthMap,
      toolContextMap,
      dynamicToolContextMap,
      toolDefinitions,
      hasDeferredTools,
      actionsEnabled,
      tools: agentTools,
      primedCodeFiles,
    };
  }

  const actionSets = (await loadActionSets({ agent_id: agent.id })) ?? [];
  if (actionSets.length === 0) {
    if (_agentTools.length > 0 && agentTools.length === 0) {
      logger.warn(`No tools found for the specified tool calls: ${_agentTools.join(', ')}`);
    }
    return {
      toolRegistry,
      userMCPAuthMap,
      toolContextMap,
      dynamicToolContextMap,
      toolDefinitions,
      hasDeferredTools,
      actionsEnabled,
      tools: agentTools,
      primedCodeFiles,
    };
  }

  // See registerActionTools for the key-shape rationale.
  const toolToAction = new Map();

  for (const action of actionSets) {
    const domain = await domainParser(action.metadata.domain, true);
    const normalizedDomain = domain.replace(domainSeparatorRegex, '_');
    const legacyDomain = legacyDomainEncode(action.metadata.domain);
    const legacyNormalized = legacyDomain.replace(domainSeparatorRegex, '_');

    const isDomainAllowed = await isActionDomainAllowed(
      action.metadata.domain,
      appConfig?.actions?.allowedDomains,
      appConfig?.actions?.allowedAddresses,
    );
    if (!isDomainAllowed) {
      continue;
    }

    // Validate and parse OpenAPI spec once per action set
    const validationResult = validateAndParseOpenAPISpec(action.metadata.raw_spec);
    if (!validationResult.spec || !validationResult.serverUrl) {
      continue;
    }

    // SECURITY: Validate the domain from the spec matches the stored domain
    // This is defense-in-depth to prevent any stored malicious actions
    const domainValidation = validateActionDomain(
      action.metadata.domain,
      validationResult.serverUrl,
    );
    if (!domainValidation.isValid) {
      logger.error(`Domain mismatch in stored action: ${domainValidation.message}`, {
        userId: req.user.id,
        agent_id: agent.id,
        action_id: action.action_id,
      });
      continue; // Skip this action rather than failing the entire request
    }

    const encrypted = {
      oauth_client_id: action.metadata.oauth_client_id,
      oauth_client_secret: action.metadata.oauth_client_secret,
    };

    // Decrypt metadata once per action set
    const decryptedAction = { ...action };
    decryptedAction.metadata = await decryptMetadata(action.metadata);

    // Process the OpenAPI spec once per action set
    const { requestBuilders, functionSignatures, zodSchemas } = openapiToFunction(
      validationResult.spec,
      true,
    );

    registerActionTools({
      toolToAction,
      functionSignatures,
      normalizedDomain,
      legacyNormalized,
      makeEntry: (sig) => ({
        action: decryptedAction,
        requestBuilder: requestBuilders[sig.name],
        zodSchema: zodSchemas[sig.name],
        functionSignature: sig,
        encrypted,
      }),
    });
  }

  // Now map tools to the processed action sets
  const ActionToolMap = {};

  for (const toolName of _agentTools) {
    if (ToolMap[toolName]) {
      continue;
    }

    const entry = toolToAction.get(normalizeActionToolName(toolName));
    if (!entry) {
      continue;
    }

    const { action, encrypted, zodSchema, requestBuilder, functionSignature } = entry;
    const _allowedDomains = appConfig?.actions?.allowedDomains;
    const _allowedAddresses = appConfig?.actions?.allowedAddresses;
    const tool = await createActionTool({
      userId: req.user.id,
      res,
      action,
      requestBuilder,
      zodSchema,
      encrypted,
      name: toolName,
      description: functionSignature.description,
      streamId,
      useSSRFProtection: !Array.isArray(_allowedDomains) || _allowedDomains.length === 0,
      allowedAddresses: _allowedAddresses,
    });

    if (!tool) {
      logger.warn(
        `Invalid action: user: ${req.user.id} | agent_id: ${agent.id} | toolName: ${toolName}`,
      );
      throw new Error(`{"type":"${ErrorTypes.INVALID_ACTION}"}`);
    }

    agentTools.push(tool);
    ActionToolMap[toolName] = tool;
  }

  if (_agentTools.length > 0 && agentTools.length === 0) {
    logger.warn(`No tools found for the specified tool calls: ${_agentTools.join(', ')}`);
    return {};
  }

  return {
    toolRegistry,
    toolContextMap,
    dynamicToolContextMap,
    userMCPAuthMap,
    toolDefinitions,
    hasDeferredTools,
    actionsEnabled,
    tools: agentTools,
    primedCodeFiles,
  };
}

/**
 * Loads tools for event-driven execution (ON_TOOL_EXECUTE handler).
 * This function encapsulates all dependencies needed for tool loading,
 * so callers don't need to import processFileURL, uploadImageBuffer, etc.
 *
 * Handles both regular tools (MCP, built-in) and action tools.
 *
 * @param {Object} params
 * @param {ServerRequest} params.req - The request object
 * @param {ServerResponse} params.res - The response object
 * @param {AbortSignal} [params.signal] - Abort signal
 * @param {Object} params.agent - The agent object
 * @param {string[]} params.toolNames - Names of tools to load
 * @param {Map} [params.toolRegistry] - Tool registry
 * @param {Record<string, Record<string, string>>} [params.userMCPAuthMap] - User MCP auth map
 * @param {Object} [params.tool_resources] - Tool resources
 * @param {string|null} [params.streamId] - Stream ID for web search callbacks
 * @param {boolean} [params.actionsEnabled] - Whether the actions capability is enabled
 * @returns {Promise<{ loadedTools: Array, configurable: Object }>}
 */
async function loadToolsForExecution({
  req,
  res,
  signal,
  agent,
  toolNames,
  toolRegistry,
  userMCPAuthMap,
  tool_resources,
  streamId = null,
  actionsEnabled,
}) {
  const appConfig = req.config;
  const allLoadedTools = [];
  const configurable = { userMCPAuthMap };

  if (actionsEnabled === undefined) {
    const enabledCapabilities = await resolveAgentCapabilities(req, appConfig, agent?.id);
    actionsEnabled = enabledCapabilities.has(AgentCapabilities.actions);
  }

  const isToolSearch = toolNames.includes(AgentConstants.TOOL_SEARCH);
  const isPTC = toolNames.includes(AgentConstants.PROGRAMMATIC_TOOL_CALLING);

  logger.debug(
    `[loadToolsForExecution] isToolSearch: ${isToolSearch}, toolRegistry: ${toolRegistry?.size ?? 'undefined'}`,
  );

  if (isToolSearch && toolRegistry) {
    const toolSearchTool = createToolSearch({
      mode: 'local',
      toolRegistry,
    });
    allLoadedTools.push(toolSearchTool);
    configurable.toolRegistry = toolRegistry;
  }

  if (isPTC && toolRegistry) {
    configurable.toolRegistry = toolRegistry;
    try {
      /**
       * PTC auth is handled by the agents library / sandbox service
       * directly; LibreChat no longer threads a per-run credential.
       */
      const ptcTool = createProgrammaticToolCallingTool({});
      allLoadedTools.push(ptcTool);
    } catch (error) {
      logger.error('[loadToolsForExecution] Error creating PTC tool:', error);
    }
  }

  const isBashTool = toolNames.includes(AgentConstants.BASH_TOOL);
  if (isBashTool) {
    try {
      const bashTool = createBashExecutionTool({});
      allLoadedTools.push(bashTool);
    } catch (error) {
      logger.error('[loadToolsForExecution] Failed to create bash_tool', error);
    }
  }

  const specialToolNames = new Set([
    AgentConstants.TOOL_SEARCH,
    AgentConstants.PROGRAMMATIC_TOOL_CALLING,
    AgentConstants.BASH_TOOL,
    AgentConstants.SKILL_TOOL,
    AgentConstants.READ_FILE,
  ]);

  let ptcOrchestratedToolNames = [];
  if (isPTC && toolRegistry) {
    ptcOrchestratedToolNames = Array.from(toolRegistry.keys()).filter(
      (name) => !specialToolNames.has(name),
    );
  }

  const requestedNonSpecialToolNames = toolNames.filter((name) => !specialToolNames.has(name));
  const allToolNamesToLoad = isPTC
    ? [...new Set([...requestedNonSpecialToolNames, ...ptcOrchestratedToolNames])]
    : requestedNonSpecialToolNames;

  const actionToolNames = [];
  const regularToolNames = [];
  for (const name of allToolNamesToLoad) {
    (isActionTool(name) ? actionToolNames : regularToolNames).push(name);
  }

  if (regularToolNames.length > 0) {
    const includesWebSearch = regularToolNames.includes(Tools.web_search);
    const webSearchCallbacks = includesWebSearch ? createOnSearchResults(res, streamId) : undefined;

    const { loadedTools } = await loadTools({
      agent,
      signal,
      userMCPAuthMap,
      functions: true,
      tools: regularToolNames,
      user: req.user.id,
      options: {
        req,
        res,
        tool_resources,
        processFileURL,
        uploadImageBuffer,
        returnMetadata: true,
        [Tools.web_search]: webSearchCallbacks,
      },
      webSearch: appConfig?.webSearch,
      fileStrategy: appConfig?.fileStrategy,
      imageOutputType: appConfig?.imageOutputType,
    });

    if (loadedTools) {
      allLoadedTools.push(...loadedTools);
    }
  }

  if (actionToolNames.length > 0 && agent && actionsEnabled) {
    const actionTools = await loadActionToolsForExecution({
      req,
      res,
      agent,
      appConfig,
      streamId,
      actionToolNames,
    });
    allLoadedTools.push(...actionTools);
  } else if (actionToolNames.length > 0 && agent && !actionsEnabled) {
    logger.warn(
      `[loadToolsForExecution] Capability "${AgentCapabilities.actions}" disabled. ` +
        `Skipping action tool execution. User: ${req.user.id} | Agent: ${agent.id} | Tools: ${actionToolNames.join(', ')}`,
    );
  }

  if (isPTC && allLoadedTools.length > 0) {
    const ptcToolMap = new Map();
    for (const tool of allLoadedTools) {
      if (tool.name && tool.name !== AgentConstants.PROGRAMMATIC_TOOL_CALLING) {
        ptcToolMap.set(tool.name, tool);
      }
    }
    configurable.ptcToolMap = ptcToolMap;
  }

  return {
    configurable,
    loadedTools: allLoadedTools,
  };
}

/**
 * Loads action tools for event-driven execution.
 * @param {Object} params
 * @param {ServerRequest} params.req - The request object
 * @param {ServerResponse} params.res - The response object
 * @param {Object} params.agent - The agent object
 * @param {Object} params.appConfig - App configuration
 * @param {string|null} params.streamId - Stream ID
 * @param {string[]} params.actionToolNames - Action tool names to load
 * @returns {Promise<Array>} Loaded action tools
 */
async function loadActionToolsForExecution({
  req,
  res,
  agent,
  appConfig,
  streamId,
  actionToolNames,
}) {
  const loadedActionTools = [];

  const actionSets = (await loadActionSets({ agent_id: agent.id })) ?? [];
  if (actionSets.length === 0) {
    return loadedActionTools;
  }

  // See registerActionTools for the key-shape rationale.
  const toolToAction = new Map();
  const allowedDomains = appConfig?.actions?.allowedDomains;
  const allowedAddresses = appConfig?.actions?.allowedAddresses;

  for (const action of actionSets) {
    const domain = await domainParser(action.metadata.domain, true);
    const normalizedDomain = domain.replace(domainSeparatorRegex, '_');
    const legacyDomain = legacyDomainEncode(action.metadata.domain);
    const legacyNormalized = legacyDomain.replace(domainSeparatorRegex, '_');

    const isDomainAllowed = await isActionDomainAllowed(
      action.metadata.domain,
      allowedDomains,
      allowedAddresses,
    );
    if (!isDomainAllowed) {
      logger.warn(
        `[Actions] Domain "${action.metadata.domain}" not in allowedDomains. ` +
          `Add it to librechat.yaml actions.allowedDomains to enable this action.`,
      );
      continue;
    }

    const validationResult = validateAndParseOpenAPISpec(action.metadata.raw_spec);
    if (!validationResult.spec || !validationResult.serverUrl) {
      logger.warn(`[Actions] Invalid OpenAPI spec for domain: ${domain}`);
      continue;
    }

    const domainValidation = validateActionDomain(
      action.metadata.domain,
      validationResult.serverUrl,
    );
    if (!domainValidation.isValid) {
      logger.error(`Domain mismatch in stored action: ${domainValidation.message}`, {
        userId: req.user.id,
        agent_id: agent.id,
        action_id: action.action_id,
      });
      continue;
    }

    const encrypted = {
      oauth_client_id: action.metadata.oauth_client_id,
      oauth_client_secret: action.metadata.oauth_client_secret,
    };

    const decryptedAction = { ...action };
    decryptedAction.metadata = await decryptMetadata(action.metadata);

    const { requestBuilders, functionSignatures, zodSchemas } = openapiToFunction(
      validationResult.spec,
      true,
    );

    registerActionTools({
      toolToAction,
      functionSignatures,
      normalizedDomain,
      legacyNormalized,
      makeEntry: (sig) => ({
        action: decryptedAction,
        requestBuilder: requestBuilders[sig.name],
        zodSchema: zodSchemas[sig.name],
        functionSignature: sig,
        encrypted,
      }),
    });
  }

  for (const toolName of actionToolNames) {
    const entry = toolToAction.get(normalizeActionToolName(toolName));
    if (!entry) {
      continue;
    }

    const { action, encrypted, zodSchema, requestBuilder, functionSignature } = entry;
    const tool = await createActionTool({
      userId: req.user.id,
      res,
      action,
      streamId,
      zodSchema,
      encrypted,
      requestBuilder,
      name: toolName,
      description: functionSignature.description,
      useSSRFProtection: !Array.isArray(allowedDomains) || allowedDomains.length === 0,
      allowedAddresses,
    });

    if (!tool) {
      logger.warn(`[Actions] Failed to create action tool: ${toolName}`);
      continue;
    }

    loadedActionTools.push(tool);
  }

  return loadedActionTools;
}

module.exports = {
  loadTools,
  isBuiltInTool,
  getToolkitKey,
  loadAgentTools,
  loadToolsForExecution,
  processRequiredActions,
  resolveAgentCapabilities,
};