* 📥 fix: Use Endpoint-Aware Default Model on Imported Conversations
Claude conversations imported from claude.ai's data export display
"gpt-4o-mini" in the chat UI until the page is refreshed, and any
attempt to send a message before refreshing fails with "The model
'gpt-4o-mini' is not available for Anthropic."
Root cause: ImportBatchBuilder.finishConversation() unconditionally
defaulted the saved conversation's `model` field to
`openAISettings.model.default`, regardless of `this.endpoint`. Claude
exports don't carry a model name, so every imported Claude conversation
landed with endpoint=anthropic but model=gpt-4o-mini.
Fix: pick the default based on `this.endpoint` via a small lookup
(openAI -> gpt-4o-mini, anthropic -> claude-3-5-sonnet-latest), keeping
the existing OpenAI default as the fallback for unknown endpoints.
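
A minimal sketch of that lookup, not the exact patch; the `anthropicSettings` and `EModelEndpoint` names are assumed exports of `librechat-data-provider` (only `openAISettings.model.default` appears in the original code):

```js
// Sketch only. `anthropicSettings` and `EModelEndpoint` are assumed
// exports of librechat-data-provider.
const { EModelEndpoint, openAISettings, anthropicSettings } = require('librechat-data-provider');

const defaultModelByEndpoint = {
  [EModelEndpoint.openAI]: openAISettings.model.default, // gpt-4o-mini
  [EModelEndpoint.anthropic]: anthropicSettings.model.default, // claude-3-5-sonnet-latest
};

// Endpoint-aware replacement for the unconditional OpenAI default in
// finishConversation(); unknown endpoints keep the old OpenAI fallback.
function pickDefaultModel(endpoint) {
  return defaultModelByEndpoint[endpoint] ?? openAISettings.model.default;
}
```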
Fixes #12844
* 🪄 refactor: Resolve Import Default Model From `modelsConfig`
Replace the hardcoded per-endpoint default lookup added in the previous
commit with a runtime resolver that consults the same models config the
chat UI uses (`getModelsConfig` in ModelController -> `loadDefaultModels`
+ `loadConfigModels`). An imported conversation now defaults to a model
the LibreChat instance has actually configured or discovered for the
endpoint, instead of a hardcoded constant that may not exist on the
deployment.
Resolution order (see the sketch after this list):
1. First non-empty model in `modelsConfig[endpoint]`.
2. Per-endpoint hardcoded fallback (anthropic/openAI settings) if the
runtime config is empty for the endpoint or `getModelsConfig` throws.
3. `openAISettings.model.default` if even the per-endpoint fallback is
missing (unknown endpoint).
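
A hedged sketch of a resolver implementing that order; the request-shaped argument passed to `getModelsConfig`, the import path, and the reuse of the `defaultModelByEndpoint` table from the sketch above are all assumptions:

```js
// Sketch of resolveImportDefaultModel; getModelsConfig's exact signature
// (here a request-like object) is an assumption.
const { getModelsConfig } = require('~/server/controllers/ModelController');

async function resolveImportDefaultModel({ endpoint, requestUserId, userRole }) {
  try {
    // 1. First non-empty model the instance actually exposes for the endpoint.
    const modelsConfig = await getModelsConfig({ user: { id: requestUserId, role: userRole } });
    const firstModel = (modelsConfig?.[endpoint] ?? []).find(
      (model) => typeof model === 'string' && model.length > 0,
    );
    if (firstModel) {
      return firstModel;
    }
  } catch {
    // Config resolution threw; fall through to the static fallbacks.
  }

  // 2. Per-endpoint hardcoded fallback, then 3. the OpenAI default for
  // unknown endpoints.
  return defaultModelByEndpoint[endpoint] ?? openAISettings.model.default;
}
```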
`importBatchBuilder.finishConversation` now accepts an optional
`defaultModel` argument; each importer resolves it once at the top via
`resolveImportDefaultModel({ endpoint, requestUserId, userRole })` and
threads it through. ChatGPT message-level model selection also falls
back to the resolved default before the hardcoded gpt-4o-mini.
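
For illustration, how an importer might thread the resolved default through; the importer name, the builder factory, and the `finishConversation` argument order are hypothetical:

```js
// Hypothetical Claude importer showing the threading pattern; real
// importer names and the builder API surface may differ.
async function importClaudeConversations({ data, requestUserId, userRole, builderFactory }) {
  const importBatchBuilder = builderFactory(requestUserId);
  // Resolve once per import run, not once per conversation.
  const defaultModel = await resolveImportDefaultModel({
    endpoint: EModelEndpoint.anthropic,
    requestUserId,
    userRole,
  });

  for (const convo of data) {
    importBatchBuilder.startConversation(EModelEndpoint.anthropic);
    // ...append the conversation's messages to the builder here...
    importBatchBuilder.finishConversation(convo.name, new Date(convo.created_at), defaultModel);
  }
  await importBatchBuilder.saveBatch();
}
```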