mirror of https://github.com/danny-avila/LibreChat.git (synced 2026-05-13 16:07:30 +00:00)
* 🪟 feat: Add allowedAddresses Exemption List For SSRF-Guarded Targets

  LibreChat already blocks SSRF-prone targets (private IPs, loopback, link-local, `.internal`/`.local` TLDs) at every server-side fetch site that consumes user-controllable URLs — custom-endpoint baseURLs, MCP servers, OpenAPI Actions, and OAuth endpoints. The only existing escape hatch is `allowedDomains`, but that flips the field into a strict whitelist: adding `127.0.0.1` to permit a self-hosted Ollama also blocks every public destination that isn't in the list.

  Introduce `allowedAddresses` as the orthogonal primitive: a private-IP-space exemption list. When a hostname or its resolved IP appears in the list, the SSRF block is bypassed for that target. Public destinations remain reachable. Operators can now run self-hosted LLMs / MCP servers / Action endpoints on private addresses without weakening the default-deny posture for everything else.

  Schema additions in `packages/data-provider/src/config.ts`:
  - `endpoints.allowedAddresses` (new — gates `validateEndpointURL`)
  - `mcpSettings.allowedAddresses` (parallel to `allowedDomains`)
  - `actions.allowedAddresses` (parallel to `allowedDomains`)

  Core changes in `packages/api/src/auth/`:
  - New `isAddressAllowed(hostnameOrIP, allowedAddresses)` — pure, case-insensitive, bracket-stripped literal match.
  - Threaded the list through `isSSRFTarget`, `resolveHostnameSSRF`, `isDomainAllowedCore`, `isActionDomainAllowed`, `isMCPDomainAllowed`, `isOAuthUrlAllowed`, and `validateEndpointURL`.
  - Extended `createSSRFSafeAgents` and `createSSRFSafeUndiciConnect` to accept the list, building an SSRF-safe DNS lookup that exempts matching hostnames/IPs at TCP connect time (TOCTOU-safe).

  Wiring:
  - Custom and OpenAI endpoint initialize sites pass `endpoints.allowedAddresses` to `validateEndpointURL`.
  - `MCPServersRegistry` stores `allowedAddresses` and exposes it via `getAllowedAddresses()`.
  - The factory, connection class, manager, `UserConnectionManager`, and `ConnectionsRepository` all thread it through to the SSRF utilities.
  - `MCPOAuthHandler.initiateOAuthFlow`, `refreshOAuthTokens`, and `validateOAuthUrl` accept the list and consult it on every URL validation along the OAuth chain.
  - `ToolService`, `ActionService`, and the assistants/agents action routes pass `actions.allowedAddresses` to `isActionDomainAllowed` and to `createSSRFSafeAgents` for runtime action calls.
  - `initializeMCPs.js` reads `mcpSettings.allowedAddresses` from the app config and forwards it to the registry constructor.

  Documentation:
  - `librechat.example.yaml` shows the new field next to each existing `allowedDomains` block, with a note clarifying that `allowedAddresses` is an exemption list (not a whitelist).

  Tests:
  - Unit tests for `isAddressAllowed` covering literal IPs, hostnames, IPv6 brackets, case insensitivity, and partial-match rejection.
  - Exemption tests for every entry point: `isSSRFTarget`, `resolveHostnameSSRF`, `validateEndpointURL`, `isActionDomainAllowed`, `isMCPDomainAllowed`, `isOAuthUrlAllowed`.
  - Existing tests updated to reflect the new optional parameter.

  Default behavior is unchanged: omitted = empty list = no exemptions.

* 🩹 fix: Plumb allowedAddresses Through AppConfig endpoints Type

  The initial PR added `endpoints.allowedAddresses` to the data-provider config schema and consumed it in the endpoint initialize sites, but the runtime `AppConfig.endpoints` shape in `@librechat/data-schemas` was a hand-maintained subset that didn't include the new field — so `tsc` rejected `appConfig.endpoints.allowedAddresses`.

  Add the field to `AppConfig['endpoints']` in `packages/data-schemas/src/types/app.ts` and forward it from the loaded config in `packages/data-schemas/src/app/endpoints.ts` so the runtime config carries the value.

  Update `initializeMCPs.spec.js` to expect the third positional argument (`allowedAddresses`) on the `createMCPServersRegistry` call.
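The commit describes `isAddressAllowed(hostnameOrIP, allowedAddresses)` as a pure, case-insensitive, bracket-stripped literal match. A minimal sketch of that contract — the body is illustrative, not the actual LibreChat implementation, and omits the private-IP scoping added in a later commit:

```javascript
/**
 * Sketch: true when `hostnameOrIP` appears literally in `allowedAddresses`.
 * Matching is case-insensitive and strips IPv6 brackets; partial matches
 * (substrings, subdomains, prefixes) never qualify.
 */
function isAddressAllowed(hostnameOrIP, allowedAddresses) {
  if (!hostnameOrIP || !Array.isArray(allowedAddresses) || allowedAddresses.length === 0) {
    return false;
  }
  // Lowercase and remove surrounding IPv6 brackets so `[::1]` matches `::1`
  const normalize = (value) => value.trim().toLowerCase().replace(/^\[/, '').replace(/\]$/, '');
  const candidate = normalize(hostnameOrIP);
  return allowedAddresses.some((entry) => normalize(entry) === candidate);
}
```

An omitted or empty list yields no exemptions, matching the stated default behavior.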
* 🩹 fix: Enforce allowedDomains Before allowedAddresses In isOAuthUrlAllowed

  The initial implementation checked the address exemption first, so a URL whose hostname appeared in `allowedAddresses` would return true even when the admin had configured `allowedDomains` as a strict bound on OAuth endpoints. A malicious MCP server could advertise OAuth metadata, token, or revocation URLs at any address the admin had permitted for an unrelated reason (a self-hosted LLM at `127.0.0.1`, for example) and pass validation, expanding SSRF reach beyond the configured domain whitelist.

  Reorder: when `allowedDomains` is set, treat it as authoritative — return true only if the URL matches a domain entry, otherwise fall through to false. The address exemption only applies when no `allowedDomains` is configured (mirrors how the downstream SSRF check in `validateOAuthUrl` consults `allowedAddresses`).

  Add a regression test asserting that an `allowedAddresses` entry does not broaden a configured `allowedDomains` list.

  Reported by chatgpt-codex-connector on PR #12933.

* 🩹 fix: Forward allowedAddresses To Remaining OAuth Callers

  Two `MCPOAuthHandler` callers still used the pre-feature signatures and were silently dropping the new `allowedAddresses` argument:
  - `api/server/routes/mcp.js` invoked `initiateOAuthFlow` with the old 5-argument shape, so OAuth flows initiated through the route handler ignored the registry's `getAllowedAddresses()` and would reject any metadata/authorization/token URL on a permitted private host.
  - `api/server/controllers/UserController.js#maybeUninstallOAuthMCP` invoked `revokeOAuthToken` without the address exemption, so uninstalling an OAuth-backed MCP server on a permitted private host would fail at the revocation step even though the rest of the MCP connection path now permits it.

  Both sites now read `allowedAddresses` from the registry alongside `allowedDomains` and forward it.

  Reported by Copilot on PR #12933.
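The precedence rule from the reordering fix can be sketched as below. The helper parameters (`matchesDomain`, `matchesAddress`, `passesSSRFCheck`) and the function name are hypothetical stand-ins for the real validation calls, shown only to make the ordering testable:

```javascript
// Sketch of the fixed precedence: a configured allowedDomains list is
// authoritative, and the allowedAddresses exemption only applies when no
// domain whitelist exists. Not the actual LibreChat source.
function isOAuthUrlAllowedSketch(url, { allowedDomains, allowedAddresses }, helpers) {
  const { matchesDomain, matchesAddress, passesSSRFCheck } = helpers;
  if (allowedDomains?.length) {
    // Strict bound: an allowedAddresses entry must NOT broaden this list.
    return matchesDomain(url, allowedDomains);
  }
  if (allowedAddresses?.length && matchesAddress(url, allowedAddresses)) {
    return true; // private-host exemption, no-whitelist mode only
  }
  return passesSSRFCheck(url); // default path: plain SSRF validation
}
```

The key property is that swapping the first two branches back would reintroduce the reported bypass.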
* 🩹 fix: Update Test Mocks And Assertions For OAuth allowedAddresses

  The previous commit started passing `allowedAddresses` to `MCPOAuthHandler.initiateOAuthFlow` from `api/server/routes/mcp.js` and to `MCPOAuthHandler.revokeOAuthToken` from `api/server/controllers/UserController.js`, but the corresponding test files mocked the registry without `getAllowedAddresses` (causing `TypeError`s) and asserted the old positional shape on `toHaveBeenCalledWith`. Update the mocks and assertions to match the new arity:
  - `api/server/routes/__tests__/mcp.spec.js`: add `getAllowedDomains`/`getAllowedAddresses` to the registry mock and expect the additional positional args on `initiateOAuthFlow`.
  - `api/server/controllers/__tests__/maybeUninstallOAuthMCP.spec.js`: add a `getAllowedAddresses` mock alongside the existing `getAllowedDomains` and seed it in `setupOAuthServerFound`.
  - `api/server/controllers/__tests__/UserController.mcpOAuth.spec.js`: add `getAllowedAddresses` to the registry mock and expect the trailing `null` arg on the three `revokeOAuthToken` assertions.

* 🛡️ fix: Address Comprehensive Review — Scope allowedAddresses To Private IP Space

  Major findings from the comprehensive PR review (severity → fix):

  **CRITICAL — `validateOAuthUrl` SSRF fallback bypass.** When `allowedDomains` is configured and a URL fails the whitelist, the SSRF fallback in `validateOAuthUrl` was still passing `allowedAddresses` to `isSSRFTarget` / `resolveHostnameSSRF`, letting a malicious MCP server advertise OAuth endpoints at any address the admin had permitted for an unrelated reason. Suppress `allowedAddresses` in the fallback when `allowedDomains` is active — the address exemption is opt-in for the no-whitelist mode only.
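The fallback-suppression rule for the CRITICAL finding reduces to one decision: the exemption list only reaches the SSRF check when no domain whitelist is active. A hypothetical helper (the name is illustrative, not from the codebase) capturing just that decision:

```javascript
// Sketch: which exemption list the SSRF fallback should receive.
// When allowedDomains is configured it is authoritative, so the fallback
// must run with NO address exemptions (null) to avoid the bypass.
function ssrfFallbackExemptions(allowedDomains, allowedAddresses) {
  const whitelistActive = Array.isArray(allowedDomains) && allowedDomains.length > 0;
  return whitelistActive ? null : (allowedAddresses ?? null);
}
```

Callers would pass the returned value straight into the SSRF checks in place of the raw list.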
  **MAJOR — WebSocket transport SSRF check ignored exemptions.** The `constructTransport` WebSocket branch called `resolveHostnameSSRF(wsHostname)` without `this.allowedAddresses`, so a permitted private MCP server would pass `isMCPDomainAllowed` but be blocked at transport creation. Forward the exemption.

  **Scope `allowedAddresses` to private IP space only (operator directive).** The exemption list is for permitting private/internal targets; it must not be a back-door to broaden trust to public destinations.
  - Schema (`packages/data-provider/src/config.ts`): new `allowedAddressesSchema` rejects URLs (`://`), paths/CIDR (`/`), whitespace, and public IPv4/IPv6 literals at config-load time. Wired into `endpoints`, `mcpSettings`, and `actions`.
  - Runtime (`packages/api/src/auth/domain.ts`): `isAddressAllowed` now drops public-IP candidates and public-IP entries on the match path — defense in depth so a misconfigured runtime list never grants exemption.
  - Hot path (`packages/api/src/auth/agent.ts`): `buildSSRFSafeLookup` pre-normalizes the list into a `Set<string>` once at construction and applies the same scoping filter, so the connect-time DNS lookup is an O(1) Set membership check instead of a full re-iterate-and-normalize on every outbound request.

  **Test coverage for the connect-time and OAuth-fallback paths.**
  - `agent.spec.ts`: new describe block exercising `buildSSRFSafeLookup` and `createSSRFSafe*` with `allowedAddresses` — hostname-literal exemption, resolved-IP exemption, public-IP scoping, URL/CIDR/whitespace rejection, and the default no-list block.
  - `handler.allowedAddresses.test.ts` (new): integration tests for `validateOAuthUrl` — covers both the no-domains-set "permit private" path and the strict-bound regression where `allowedAddresses` must NOT bypass `allowedDomains`.
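The hot-path optimization described above — normalize once at construction, then do an O(1) membership check per connect — can be sketched as follows. The function names mirror the commit message, but the bodies are illustrative and omit the private-IP scoping filter the real code applies:

```javascript
// Sketch of the pre-normalized Set for the connect-time hot path.
// Normalization (trim, lowercase, strip IPv6 brackets) happens once here,
// not on every outbound request.
function normalizeAllowedAddressesSet(list) {
  const set = new Set();
  for (const entry of list ?? []) {
    if (typeof entry !== 'string') {
      continue;
    }
    const normalized = entry.trim().toLowerCase().replace(/^\[/, '').replace(/\]$/, '');
    if (normalized) {
      set.add(normalized);
    }
  }
  return set;
}

// Connect-time check: a single Set lookup against the normalized candidate.
function isAddressInAllowedSet(candidate, set) {
  return set.has(candidate.trim().toLowerCase().replace(/^\[/, '').replace(/\]$/, ''));
}
```

The Set would be built once inside `buildSSRFSafeLookup`'s closure and consulted by the DNS-lookup hook at TCP connect time.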
  **Documentation & cleanup.**
  - `connection.ts` redirect SSRF check: explicit comment that `allowedAddresses` is intentionally NOT consulted for redirect targets (server-controlled, must not inherit the admin's exemption).
  - `MCPConnectionFactory.test.ts`: replaced an `eslint-disable` with a proper `import { getTenantId } from '@librechat/data-schemas'`. The disable was added to make a pre-existing `require()` quiet — the cleaner fix is to use the existing top-level import.
  - Updated `MCPConnectionSSRF.test.ts` WebSocket SSRF assertions to match the new two-argument call shape (`hostname, allowedAddresses`).

* 🩹 fix: Require Absolute URL Before allowedAddresses Trust Bypass In isOAuthUrlAllowed

  `parseDomainSpec` is lenient — it silently prepends `https://` to schemeless inputs so it can match patterns like bare `example.com`. That leniency leaked into `isOAuthUrlAllowed`'s new `allowedAddresses` short-circuit: a value like `10.0.0.5/oauth` (no scheme) would parse successfully via the prepended default, hit the address-exemption path, return `true`, and skip `validateOAuthUrl`'s strict `new URL(url)` parse-or-throw — only to fail later in OAuth discovery with a less clear runtime error.

  Add a strict `new URL(url)` gate at the top of `isOAuthUrlAllowed`. Schemeless inputs now fall through to `validateOAuthUrl`'s explicit "Invalid OAuth <field>" rejection.

  Tests added in both `auth/domain.spec.ts` (unit) and the OAuth handler integration spec (end-to-end).

  Reported by chatgpt-codex-connector (P2) on PR #12933.

* 🛡️ fix: Address Follow-Up Comprehensive Review — Schema Tests, Shared Normalization, host:port

  Auditing the second comprehensive review:

  **F1 MAJOR — schema validation untested.** `allowedAddressesSchema` had zero coverage, so a regression in the three refinement stages or the three wiring locations (`endpoints` / `mcpSettings` / `actions`) would silently let invalid entries reach the runtime.
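The strict `new URL(url)` gate from the absolute-URL fix reduces to a parse-or-reject helper; a sketch (the helper name is hypothetical) relying on the WHATWG `URL` constructor throwing on schemeless input:

```javascript
// Sketch of the strict absolute-URL gate: schemeless inputs like
// "10.0.0.5/oauth" must not reach the address-exemption short-circuit.
function parseAbsoluteUrl(url) {
  try {
    return new URL(url); // throws TypeError on schemeless/relative input
  } catch {
    return null; // caller falls through to the explicit rejection path
  }
}
```

A `null` return routes the input to `validateOAuthUrl`'s explicit "Invalid OAuth <field>" rejection instead of the exemption path.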
  Added a dedicated `describe('allowedAddressesSchema')` block in `config.spec.ts` covering: valid private IPs (v4 + v6, including the previously-missed 192.0.0.0/24 range), accepted hostnames, all rejection categories (URLs, CIDR, paths, whitespace tabs/newlines, host:port, public IP literals), and full `configSchema.parse()` integration at each of the three nesting points.

  **F2 MINOR — `isPrivateIPv4Literal` divergence.** The schema reimplementation in `packages/data-provider` was discarding the `c` octet, so the `192.0.0.0/24` (RFC 5736 IETF protocol assignments) range that the authoritative `isPrivateIPv4` accepts was being rejected with a misleading "public IP" error. Destructure `c` and add the missing range check; covered by the new schema tests.

  **F3 MINOR — DRY violation across `domain.ts` and `agent.ts`.** Both files had independent normalization implementations with a subtle whitespace-check divergence (`/\s/` vs `.includes(' ')`). Extracted the shared logic into a new `packages/api/src/auth/allowedAddresses.ts` module that both consumers import:
  - `normalizeAddressEntry(entry)` — single-entry shape check
  - `looksLikeHostPort(entry)` — host:port detector (used by F4)
  - `normalizeAllowedAddressesSet(list)` — pre-normalized Set for the connect-time hot path
  - `isAddressInAllowedSet(candidate, set)` — membership check that enforces private-IP scoping on the candidate

  Both `isAddressAllowed` (preflight) and `buildSSRFSafeLookup` (connect) now go through the same primitives; the whitespace divergence is gone. To break the import cycle (`allowedAddresses` needs `isPrivateIP`, which `domain` previously owned), IP private-range detection was extracted into a leaf `auth/ip.ts` module. `domain.ts` re-exports `isPrivateIP` for backward compatibility with existing call sites.
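The F2 fix is concrete enough to sketch: a private-IPv4 check must inspect the third octet to accept the `192.0.0.0/24` block. The body below is an illustrative approximation, not the actual `isPrivateIPv4` implementation, and covers only the ranges named in this changelog:

```javascript
// Illustrative private-IPv4 range check including 192.0.0.0/24 — the
// range the F2 fix restored by destructuring the `c` octet.
function isPrivateIPv4Literal(value) {
  const m = /^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/.exec(value);
  if (!m) {
    return false;
  }
  const [a, b, c, d] = m.slice(1).map(Number);
  if ([a, b, c, d].some((octet) => octet > 255)) {
    return false;
  }
  return (
    a === 10 || // 10.0.0.0/8
    a === 127 || // 127.0.0.0/8 loopback
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) || // 192.168.0.0/16
    (a === 192 && b === 0 && c === 0) || // 192.0.0.0/24 (the missed range)
    (a === 169 && b === 254) // 169.254.0.0/16 link-local
  );
}
```

Dropping the `c` check would collapse the fifth clause into accepting all of `192.0.0.0/16`, or — as the original bug did — reject `192.0.0.x` outright.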
  **F4 MINOR — `host:port` silently misclassified.** Entries like `localhost:8080` previously slipped through the URL/path guard, were mis-detected as IPv6, failed `isPrivateIP`, and were silently dropped with a misleading "public IP" schema error. Added an explicit `looksLikeHostPort` check with a clear error: "allowedAddresses entries must not include a port — list the bare hostname or IP only." Bare `::1`, `[::1]`, and other valid IPv6 literals are intentionally not matched (the regex distinguishes by colon count and the bracketed `[ipv6]:port` form).

  **F5 MINOR — hostname-trust documentation gap.** Hostname entries short-circuit `resolveHostnameSSRF` before any DNS lookup — that's a deliberate design (the admin trusts the name) but it means the exemption follows whatever the name resolves to at runtime. Added an explicit note in `librechat.example.yaml` for both `mcpSettings.allowedAddresses` and `endpoints.allowedAddresses`: "a hostname entry trusts whatever IP that name resolves to. Only list hostnames whose DNS you control. Prefer literal IPs when you can."

  **F6** (8 positional params) is flagged for follow-up; refactoring to an options object is a breaking API change deferred to a separate PR.

  **F7** (redirect/WebSocket asymmetry, NIT, conf 40) — skipping; the existing inline comment is sufficient.

* 🧹 chore: Address Follow-Up NITs — Import Order And Mirror-Function Naming

  Three NITs from the latest comprehensive review:

  **NIT #1 (conf 85) — local import order.** AGENTS.md requires local imports sorted longest-to-shortest. Both `domain.ts` and `agent.ts` had `./ip` (shorter) before `./allowedAddresses` (longer). Swapped.

  **NIT #2 (conf 60) — missing cross-reference.** The schema-side `isHostPortShape` in `packages/data-provider/src/config.ts` had no note pointing at the canonical runtime mirror.
  Added a JSDoc paragraph explaining the mirror relationship and why a local copy exists (the data-provider package can't import from `@librechat/api` without creating a circular dependency).

  **NIT #3 (conf 50) — naming inconsistency.** Renamed `isHostPortShape` → `looksLikeHostPort` so the schema mirror matches the runtime helper exactly. Kept as a separate function (not a shared import) for the same circular-dependency reason; the matching name makes it obvious the two should stay in lockstep.
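The `looksLikeHostPort` detector described under F4 — distinguishing `host:port` from bare IPv6 literals by colon count and the bracketed `[ipv6]:port` form — can be sketched as below. The body is an illustrative approximation of the behavior the changelog describes, not the actual shared helper:

```javascript
// Sketch: detect host:port entries while leaving bare IPv6 literals alone.
// One colon means host:port; multiple colons mean an IPv6 literal — unless
// the entry uses the bracketed [ipv6]:port form, which always has a port.
function looksLikeHostPort(entry) {
  if (/^\[.*\]:\d+$/.test(entry)) {
    return true; // e.g. [::1]:8080 — bracketed IPv6 with a port
  }
  const colons = (entry.match(/:/g) ?? []).length;
  return colons === 1 && /:\d+$/.test(entry); // e.g. localhost:8080, 10.0.0.5:443
}
```

A positive match would surface the clear schema error quoted above instead of the misleading "public IP" rejection.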
507 lines
18 KiB
JavaScript
const jwt = require('jsonwebtoken');
const { nanoid } = require('nanoid');
const { GraphEvents, sleep } = require('@librechat/agents');
const { tool } = require('@librechat/agents/langchain/tools');
const { logger, encryptV2, decryptV2 } = require('@librechat/data-schemas');
const {
  sendEvent,
  logAxiosError,
  refreshAccessToken,
  GenerationJobManager,
  createSSRFSafeAgents,
} = require('@librechat/api');
const {
  Time,
  CacheKeys,
  StepTypes,
  Constants,
  AuthTypeEnum,
  actionDelimiter,
  isImageVisionTool,
  actionDomainSeparator,
} = require('librechat-data-provider');
const {
  findToken,
  updateToken,
  createToken,
  getActions,
  deleteActions,
  deleteAssistant,
} = require('~/models');
const { getFlowStateManager } = require('~/config');
const { getLogStores } = require('~/cache');

const JWT_SECRET = process.env.JWT_SECRET;
const toolNameRegex = /^[a-zA-Z0-9_-]+$/;
const protocolRegex = /^https?:\/\//;
const replaceSeparatorRegex = new RegExp(actionDomainSeparator, 'g');

/**
 * Validates tool name against regex pattern and updates if necessary.
 * @param {object} params - The parameters for the function.
 * @param {object} params.req - Express Request.
 * @param {FunctionTool} params.tool - The tool object.
 * @param {string} params.assistant_id - The assistant ID.
 * @returns {object|null} - Updated tool object or null if invalid and not an action.
 */
const validateAndUpdateTool = async ({ req, tool, assistant_id }) => {
  let actions;
  if (isImageVisionTool(tool)) {
    return null;
  }
  if (!toolNameRegex.test(tool.function.name)) {
    const [functionName, domain] = tool.function.name.split(actionDelimiter);
    actions = await getActions({ assistant_id, user: req.user.id }, true);
    const matchingActions = actions.filter((action) => {
      const metadata = action.metadata;
      if (!metadata) {
        return false;
      }
      const strippedMetaDomain = stripProtocol(metadata.domain);
      return strippedMetaDomain === domain || metadata.domain === domain;
    });
    const action = matchingActions[0];
    if (!action) {
      return null;
    }

    const parsedDomain = await domainParser(domain, true);

    if (!parsedDomain) {
      return null;
    }

    tool.function.name = `${functionName}${actionDelimiter}${parsedDomain}`;
  }
  return tool;
};

/** @param {string} domain */
function stripProtocol(domain) {
  const stripped = domain.replace(protocolRegex, '');
  const pathIdx = stripped.indexOf('/');
  return pathIdx === -1 ? stripped : stripped.substring(0, pathIdx);
}

/**
 * Encodes a domain using the legacy scheme (full URL including protocol).
 * Used for backward-compatible matching against agents saved before the collision fix.
 * @param {string} domain
 * @returns {string}
 */
function legacyDomainEncode(domain) {
  if (!domain) {
    return '';
  }
  if (domain.length <= Constants.ENCODED_DOMAIN_LENGTH) {
    return domain.replace(/\./g, actionDomainSeparator);
  }
  const modifiedDomain = Buffer.from(domain).toString('base64');
  return modifiedDomain.substring(0, Constants.ENCODED_DOMAIN_LENGTH);
}

/**
 * Encodes or decodes a domain name to/from base64, or by replacing periods with a custom separator.
 *
 * Necessary due to `[a-zA-Z0-9_-]*` regex validation, limited to a 64-character maximum.
 * Strips the protocol prefix before encoding to prevent base64 collisions
 * (all `https://` URLs share the same 10-char base64 prefix).
 *
 * @param {string} domain - The domain name to encode/decode.
 * @param {boolean} inverse - False to decode from base64, true to encode to base64.
 * @returns {Promise<string>} Encoded or decoded domain string.
 */
async function domainParser(domain, inverse = false) {
  if (!domain) {
    return;
  }

  const domainsCache = getLogStores(CacheKeys.ENCODED_DOMAINS);

  if (inverse) {
    const hostname = stripProtocol(domain);
    const cachedDomain = await domainsCache.get(hostname);
    if (cachedDomain) {
      return hostname;
    }

    if (hostname.length <= Constants.ENCODED_DOMAIN_LENGTH) {
      return hostname.replace(/\./g, actionDomainSeparator);
    }

    const modifiedDomain = Buffer.from(hostname).toString('base64');
    const key = modifiedDomain.substring(0, Constants.ENCODED_DOMAIN_LENGTH);
    await domainsCache.set(key, modifiedDomain);
    return key;
  }

  const cachedDomain = await domainsCache.get(domain);
  if (!cachedDomain) {
    return domain.replace(replaceSeparatorRegex, '.');
  }

  try {
    return Buffer.from(cachedDomain, 'base64').toString('utf-8');
  } catch (error) {
    logger.error(`Failed to parse domain (possibly not base64): ${domain}`, error);
    return domain;
  }
}

/**
 * Loads action sets based on the user and assistant ID.
 *
 * @param {Object} searchParams - The parameters for loading action sets.
 * @param {string} searchParams.user - The user identifier.
 * @param {string} [searchParams.agent_id] - The agent identifier.
 * @param {string} [searchParams.assistant_id] - The assistant identifier.
 * @returns {Promise<Action[] | null>} A promise that resolves to an array of actions or `null` if no match.
 */
async function loadActionSets(searchParams) {
  return await getActions(searchParams, true);
}

/**
 * Creates a general tool for an entire action set.
 *
 * @param {Object} params - The parameters for loading action sets.
 * @param {string} params.userId
 * @param {ServerResponse} params.res
 * @param {Action} params.action - The action set. Necessary for decrypting authentication values.
 * @param {ActionRequest} params.requestBuilder - The ActionRequest builder class to execute the API call.
 * @param {string | undefined} [params.name] - The name of the tool.
 * @param {string | undefined} [params.description] - The description for the tool.
 * @param {import('zod').ZodTypeAny | undefined} [params.zodSchema] - The Zod schema for tool input validation/definition.
 * @param {{ oauth_client_id?: string; oauth_client_secret?: string; }} params.encrypted - The encrypted values for the action.
 * @param {string | null} [params.streamId] - The stream ID for resumable streams.
 * @param {boolean} [params.useSSRFProtection] - When true, uses SSRF-safe HTTP agents that validate resolved IPs at connect time.
 * @param {string[] | null} [params.allowedAddresses] - Optional admin exemption list of hostnames/IPs that bypass the SSRF private-IP block.
 * @returns {Promise<typeof tool | { _call: (toolInput: Object | string) => unknown }>} An object with a `_call` method to execute the tool input.
 */
async function createActionTool({
  userId,
  res,
  action,
  requestBuilder,
  zodSchema,
  name,
  description,
  encrypted,
  streamId = null,
  useSSRFProtection = false,
  allowedAddresses,
}) {
  const ssrfAgents = useSSRFProtection ? createSSRFSafeAgents(allowedAddresses) : undefined;
  /** @type {(toolInput: Object | string, config: GraphRunnableConfig) => Promise<unknown>} */
  const _call = async (toolInput, config) => {
    try {
      /** @type {import('librechat-data-provider').ActionMetadataRuntime} */
      const metadata = action.metadata;
      const executor = requestBuilder.createExecutor();
      const preparedExecutor = executor.setParams(toolInput ?? {});

      if (metadata.auth && metadata.auth.type !== AuthTypeEnum.None) {
        try {
          if (metadata.auth.type === AuthTypeEnum.OAuth && metadata.auth.authorization_url) {
            const action_id = action.action_id;
            const identifier = `${userId}:${action.action_id}`;
            const requestLogin = async () => {
              const { args: _args, stepId, ...toolCall } = config.toolCall ?? {};
              if (!stepId) {
                throw new Error('Tool call is missing stepId');
              }
              const statePayload = {
                nonce: nanoid(),
                user: userId,
                action_id,
              };

              const stateToken = jwt.sign(statePayload, JWT_SECRET, { expiresIn: '10m' });
              try {
                const redirectUri = `${process.env.DOMAIN_CLIENT}/api/actions/${action_id}/oauth/callback`;
                const params = new URLSearchParams({
                  client_id: metadata.oauth_client_id,
                  scope: metadata.auth.scope,
                  redirect_uri: redirectUri,
                  access_type: 'offline',
                  response_type: 'code',
                  state: stateToken,
                });

                const authURL = `${metadata.auth.authorization_url}?${params.toString()}`;
                /** @type {{ id: string; delta: AgentToolCallDelta }} */
                const data = {
                  id: stepId,
                  delta: {
                    type: StepTypes.TOOL_CALLS,
                    tool_calls: [{ ...toolCall, args: '' }],
                    auth: authURL,
                    expires_at: Date.now() + Time.TWO_MINUTES,
                  },
                };
                const flowsCache = getLogStores(CacheKeys.FLOWS);
                const flowManager = getFlowStateManager(flowsCache);
                await flowManager.createFlowWithHandler(
                  `${identifier}:oauth_login:${config.metadata.thread_id}:${config.metadata.run_id}`,
                  'oauth_login',
                  async () => {
                    const eventData = { event: GraphEvents.ON_RUN_STEP_DELTA, data };
                    if (streamId) {
                      await GenerationJobManager.emitChunk(streamId, eventData);
                    } else {
                      sendEvent(res, eventData);
                    }
                    logger.debug('Sent OAuth login request to client', { action_id, identifier });
                    return true;
                  },
                  config?.signal,
                );
                logger.debug('Waiting for OAuth Authorization response', { action_id, identifier });
                const result = await flowManager.createFlow(
                  identifier,
                  'oauth',
                  {
                    state: stateToken,
                    userId: userId,
                    client_url: metadata.auth.client_url,
                    redirect_uri: `${process.env.DOMAIN_SERVER}/api/actions/${action_id}/oauth/callback`,
                    token_exchange_method: metadata.auth.token_exchange_method,
                    /** Encrypted values */
                    encrypted_oauth_client_id: encrypted.oauth_client_id,
                    encrypted_oauth_client_secret: encrypted.oauth_client_secret,
                  },
                  config?.signal,
                );
                logger.debug('Received OAuth Authorization response', { action_id, identifier });
                data.delta.auth = undefined;
                data.delta.expires_at = undefined;
                const successEventData = { event: GraphEvents.ON_RUN_STEP_DELTA, data };
                if (streamId) {
                  await GenerationJobManager.emitChunk(streamId, successEventData);
                } else {
                  sendEvent(res, successEventData);
                }
                await sleep(3000);
                metadata.oauth_access_token = result.access_token;
                metadata.oauth_refresh_token = result.refresh_token;
                const expiresAt = new Date(Date.now() + result.expires_in * 1000);
                metadata.oauth_token_expires_at = expiresAt.toISOString();
              } catch (error) {
                const errorMessage = 'Failed to authenticate OAuth tool';
                logger.error(errorMessage, error);
                throw new Error(errorMessage);
              }
            };

            const tokenPromises = [];
            tokenPromises.push(findToken({ userId, type: 'oauth', identifier }));
            tokenPromises.push(
              findToken({
                userId,
                type: 'oauth_refresh',
                identifier: `${identifier}:refresh`,
              }),
            );
            const [tokenData, refreshTokenData] = await Promise.all(tokenPromises);

            if (tokenData) {
              // Valid token exists, add it to metadata for setAuth
              metadata.oauth_access_token = await decryptV2(tokenData.token);
              if (refreshTokenData) {
                metadata.oauth_refresh_token = await decryptV2(refreshTokenData.token);
              }
              metadata.oauth_token_expires_at = tokenData.expiresAt.toISOString();
            } else if (!refreshTokenData) {
              // No tokens exist, need to authenticate
              await requestLogin();
            } else if (refreshTokenData) {
              // Refresh token is still valid, use it to get a new access token
              try {
                const refresh_token = await decryptV2(refreshTokenData.token);
                const refreshTokens = async () =>
                  await refreshAccessToken(
                    {
                      userId,
                      identifier,
                      refresh_token,
                      client_url: metadata.auth.client_url,
                      encrypted_oauth_client_id: encrypted.oauth_client_id,
                      token_exchange_method: metadata.auth.token_exchange_method,
                      encrypted_oauth_client_secret: encrypted.oauth_client_secret,
                    },
                    {
                      findToken,
                      updateToken,
                      createToken,
                    },
                  );
                const flowsCache = getLogStores(CacheKeys.FLOWS);
                const flowManager = getFlowStateManager(flowsCache);
                const refreshData = await flowManager.createFlowWithHandler(
                  `${identifier}:refresh`,
                  'oauth_refresh',
                  refreshTokens,
                  config?.signal,
                );
                metadata.oauth_access_token = refreshData.access_token;
                if (refreshData.refresh_token) {
                  metadata.oauth_refresh_token = refreshData.refresh_token;
                }
                const expiresAt = new Date(Date.now() + refreshData.expires_in * 1000);
                metadata.oauth_token_expires_at = expiresAt.toISOString();
              } catch (error) {
                logger.error('Failed to refresh token, requesting new login:', error);
                await requestLogin();
              }
            } else {
              await requestLogin();
            }
          }

          await preparedExecutor.setAuth(metadata);
        } catch (error) {
          if (
            error.message.includes('No access token found') ||
            error.message.includes('Access token is expired')
          ) {
            throw error;
          }
          throw new Error(`Authentication failed: ${error.message}`);
        }
      }

      const response = await preparedExecutor.execute(ssrfAgents);

      if (typeof response.data === 'object') {
        return JSON.stringify(response.data);
      }
      return response.data;
    } catch (error) {
      const message = `API call to ${action.metadata.domain} failed:`;
      return logAxiosError({ message, error });
    }
  };

  if (name) {
    return tool(_call, {
      name: name.replace(replaceSeparatorRegex, '_'),
      description: description || '',
      schema: zodSchema,
    });
  }

  return {
    _call,
  };
}

/**
 * Encrypts a sensitive value.
 * @param {string} value
 * @returns {Promise<string>}
 */
async function encryptSensitiveValue(value) {
  // Encode API key to handle special characters like ":"
  const encodedValue = encodeURIComponent(value);
  return await encryptV2(encodedValue);
}

/**
 * Decrypts a sensitive value.
 * @param {string} value
 * @returns {Promise<string>}
 */
async function decryptSensitiveValue(value) {
  const decryptedValue = await decryptV2(value);
  return decodeURIComponent(decryptedValue);
}

/**
 * Encrypts sensitive metadata values for an action.
 *
 * @param {ActionMetadata} metadata - The action metadata to encrypt.
 * @returns {Promise<ActionMetadata>} The updated action metadata with encrypted values.
 */
async function encryptMetadata(metadata) {
  const encryptedMetadata = { ...metadata };

  // ServiceHttp
  if (metadata.auth && metadata.auth.type === AuthTypeEnum.ServiceHttp) {
    if (metadata.api_key) {
      encryptedMetadata.api_key = await encryptSensitiveValue(metadata.api_key);
    }
  }

  // OAuth
  else if (metadata.auth && metadata.auth.type === AuthTypeEnum.OAuth) {
    if (metadata.oauth_client_id) {
      encryptedMetadata.oauth_client_id = await encryptSensitiveValue(metadata.oauth_client_id);
    }
    if (metadata.oauth_client_secret) {
      encryptedMetadata.oauth_client_secret = await encryptSensitiveValue(
        metadata.oauth_client_secret,
      );
    }
  }

  return encryptedMetadata;
}

/**
 * Decrypts sensitive metadata values for an action.
 *
 * @param {ActionMetadata} metadata - The action metadata to decrypt.
 * @returns {Promise<ActionMetadata>} The updated action metadata with decrypted values.
 */
async function decryptMetadata(metadata) {
  const decryptedMetadata = { ...metadata };

  // ServiceHttp
  if (metadata.auth && metadata.auth.type === AuthTypeEnum.ServiceHttp) {
    if (metadata.api_key) {
      decryptedMetadata.api_key = await decryptSensitiveValue(metadata.api_key);
    }
  }

  // OAuth
  else if (metadata.auth && metadata.auth.type === AuthTypeEnum.OAuth) {
    if (metadata.oauth_client_id) {
      decryptedMetadata.oauth_client_id = await decryptSensitiveValue(metadata.oauth_client_id);
    }
    if (metadata.oauth_client_secret) {
      decryptedMetadata.oauth_client_secret = await decryptSensitiveValue(
        metadata.oauth_client_secret,
      );
    }
  }

  return decryptedMetadata;
}

/**
 * Deletes an action and its corresponding assistant.
 * @param {Object} params - The parameters for the function.
 * @param {object} params.req - The Express Request object.
 * @param {string} params.assistant_id - The ID of the assistant.
 */
const deleteAssistantActions = async ({ req, assistant_id }) => {
  try {
    await deleteActions({ assistant_id, user: req.user.id });
    await deleteAssistant({ assistant_id, user: req.user.id });
  } catch (error) {
    const message = 'Trouble deleting Assistant Actions for Assistant ID: ' + assistant_id;
    logger.error(message, error);
    throw new Error(message);
  }
};

module.exports = {
  deleteAssistantActions,
  validateAndUpdateTool,
  legacyDomainEncode,
  createActionTool,
  encryptMetadata,
  decryptMetadata,
  loadActionSets,
  domainParser,
};