Commit graph

5377 commits

Daniel Hiltgen
421faa0263
mlx: fix macOS 26 target leakage in v3 metallib (#16053)
MLX compiles the AIR objects with the requested -mmacosx-version-min, but its final metallib step invokes metal instead of metallib. With the macOS 26 SDK, that can stamp the Metal v3 library with a macOS 26 deployment target.

Relink the generated AIR files with metallib before install until this is fixed upstream.
2026-05-11 16:37:57 -07:00
Daniel Hiltgen
206b049508
mlx: avoid status timeout during inference (#16086)
The MLX runner now routes model work through a locked worker thread. Status requests also went through that worker just to sample memory, so a scheduler health ping could sit behind a long prefill or generation until its 10s context expired, causing /v1/status to return 500 and the server to treat the runner as unhealthy.

VRAM reporting doesn't change during inference on Metal, but it does on CUDA. Cache the last memory sample and make status perform only a short best-effort refresh. If the worker is busy, status returns the cached value while a single background refresh continues and updates the cache when the worker becomes available. The in-flight guard and lifecycle context keep this from spawning unbounded refreshes while preserving live VRAM refresh behavior for CUDA.
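A minimal sketch of that cached-status shape, with illustrative names rather than the runner's actual API:

    import (
        "context"
        "sync"
    )

    // statusCache returns the last sample immediately; at most one
    // background refresh runs at a time, bounded by the runner lifecycle
    // rather than the request context.
    type statusCache struct {
        mu         sync.Mutex
        lastBytes  uint64
        refreshing bool
        lifecycle  context.Context
        refresh    func(context.Context) (uint64, error)
    }

    func (s *statusCache) Sample() uint64 {
        s.mu.Lock()
        cached := s.lastBytes
        if s.refreshing {
            s.mu.Unlock()
            return cached // in-flight guard: never stack refreshes
        }
        s.refreshing = true
        s.mu.Unlock()
        go func() {
            // Waits for the worker if it's busy; gives up if the runner exits.
            v, err := s.refresh(s.lifecycle)
            s.mu.Lock()
            if err == nil {
                s.lastBytes = v
            }
            s.refreshing = false
            s.mu.Unlock()
        }()
        return cached
    }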

Fixes #16081
2026-05-11 16:03:38 -07:00
Patrick Devine
d819ef0f97
mlx: update the imagegen runner for mlx thread affinity (#16096) 2026-05-11 13:05:06 -07:00
Daniel Hiltgen
3d5a011a2e
app: harden update flows (#16100)
* app: harden update flows

This hardens the Windows update flows and adds a new opt-in, CI-triggered unit test to verify Mac/Windows updates.

* test: harden unit tests for OLLAMA_MODELS being set

* app: harden updater
2026-05-11 12:24:01 -07:00
Daniel Hiltgen
c2f2d90a67
test: integration test hardening (#13532)
* test: integration test hardening

Improve reliability on slower systems and fix some flakes. Fix
a few logic flaws in the newer tests, plus general hardening.

* tighten up vision logging

* add new models

* remove some older models - still covered by library scenarios
2026-05-08 15:54:17 -07:00
Daniel Hiltgen
1e1b34dada
mlx: refined model push behavior (#15431)
* mlx: refined model push behavior

Refine the algorithm for parallel push of safetensors-based models for
better reliability and throughput; a sketch of the general shape appears after this list.

* review comments, hardening, and performance tuning for slow links

* review comments
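A sketch of bounded parallel push with per-layer backoff, assuming errgroup for concurrency control; limits and retry counts are illustrative, not the tuned values:

    import (
        "context"
        "time"

        "golang.org/x/sync/errgroup"
    )

    // pushLayers uploads layers with capped parallelism and exponential
    // backoff per layer; the real algorithm also tunes for slow links.
    func pushLayers(ctx context.Context, layers []string, upload func(context.Context, string) error) error {
        g, ctx := errgroup.WithContext(ctx)
        g.SetLimit(4) // bounded parallelism for reliability
        for _, layer := range layers {
            layer := layer
            g.Go(func() error {
                var err error
                for delay := time.Second; delay <= 4*time.Second; delay *= 2 {
                    if err = upload(ctx, layer); err == nil {
                        return nil
                    }
                    select {
                    case <-time.After(delay): // back off and retry
                    case <-ctx.Done():
                        return ctx.Err()
                    }
                }
                return err
            })
        }
        return g.Wait()
    }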
2026-05-08 14:25:30 -07:00
Parth Sareen
f866e7608f
launch: disable Claude Desktop launch (#16028) 2026-05-07 10:46:18 -07:00
Parth Sareen
bab59072fb
launch: add plan-aware model gating (#16027) 2026-05-06 14:34:26 -07:00
Eva H
7c2c36bda2
cmd/launch: improve integration backup UX (#15907) 2026-05-06 11:32:54 -04:00
Parth Sareen
d319227df0
server: cache show responses (#15967) 2026-05-05 14:40:18 -07:00
Daniel Hiltgen
2d84ec939c
mlx: partial cleanup of imagegen layout (#15435)
* mlx: partial cleanup of imagegen layout

This moves part of the imagegen safetensors code to the new package.

* test: remove flaky timing test
2026-05-05 14:15:30 -07:00
Patrick Devine
15e6076d79
mlx: Gemma4 MTP speculative decoding (#15980)
This change adds support for MTP (multi-token prediction) speculative decoding for the
gemma4 model family; a sketch of the draft-and-verify loop appears after the feature list.

It includes:
  * support for importing safetensors based gemma4 draft models with `ollama create`
  * a new DRAFT command in the Modelfile for specifying draft models
  * a --quantize-draft flag for the ollama create command to quantize the draft model
  * cache support for speculation
  * changes to the rotating cache to be able to handle MTP correctly
  * sampling support for draft model token prediction
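The general draft-and-verify loop looks roughly like this; types and method names are illustrative stand-ins, not the runner's actual code:

    // Model is a stand-in for the draft/target runners.
    type Model interface {
        Generate(ctx []int32, n int) []int32  // sample n tokens cheaply
        Verify(ctx, proposed []int32) []int32 // longest accepted prefix
        Sample(ctx []int32) int32             // one token from the target
    }

    func speculate(draft, target Model, prompt []int32, k, maxLen int) []int32 {
        tokens := prompt
        for len(tokens) < maxLen {
            proposed := draft.Generate(tokens, k)       // cheap proposals
            accepted := target.Verify(tokens, proposed) // one target pass
            tokens = append(tokens, accepted...)
            if len(accepted) < len(proposed) {
                // First rejection: the target's own token replaces it.
                tokens = append(tokens, target.Sample(tokens))
            }
        }
        return tokens
    }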

---------

Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
2026-05-05 08:55:04 -07:00
Parth Sareen
4017af96cd
go: bump to 1.26 (#15904) 2026-05-03 23:24:35 -07:00
Daniel Hiltgen
534342e7e2
Update MLX and MLX-C with threading fixes (#15845)
* Update MLX and MLX-C

* Run MLX CGO work on a locked OS thread

MLX now relies on OS-thread-local execution state for streams, encoders, and caches. Add an mlxthread executor backed by runtime.LockOSThread and route runner initialization, model load, inference, status memory reads, and cleanup through the worker so Go goroutine migration cannot split MLX state across native threads.

Also stop caching default MLX streams before the runner owns the thread, and add worker/threaded MLX regression tests. A sketch of the executor shape follows this list.

* mlx: use common status writer

* mlx: bundle missing libjaccl on arm64

Inspired by #15793

* review comments
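The locked-thread executor has roughly this shape (illustrative, not the mlxthread package's exact API):

    import "runtime"

    // worker funnels every MLX call onto one locked OS thread so
    // thread-local streams, encoders, and caches never migrate.
    type worker struct {
        jobs chan func()
    }

    func newWorker() *worker {
        w := &worker{jobs: make(chan func())}
        go func() {
            runtime.LockOSThread() // pin this goroutine to one OS thread
            defer runtime.UnlockOSThread()
            for job := range w.jobs {
                job()
            }
        }()
        return w
    }

    // Do runs fn on the worker thread and blocks until it completes.
    func (w *worker) Do(fn func()) {
        done := make(chan struct{})
        w.jobs <- func() { fn(); close(done) }
        <-done
    }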
2026-05-03 10:03:14 -07:00
Parth Sareen
9ba5a04914
launch: claude app (#15937) 2026-05-02 19:19:57 -07:00
Bruce MacDonald
938ca6e274
app: source featured models from experimental recommendations endpoint (#15909)
Replace the hardcoded FEATURED_MODELS list with the
/api/experimental/model-recommendations endpoint so the picker stays in
sync with server-driven recommendations. Inline the merge into useModels
(recommendations first, then the rest of /api/tags) and drop the
standalone mergeModels util.
2026-05-01 11:10:20 -07:00
Pratham Agarwal
8f39fff70b
fix: resolve OpenClaw gateway launch timeout on Windows by enforcing IPv4 loopback (#15726) 2026-04-30 22:20:08 -04:00
Daniel Hiltgen
4fe5609563
metal: harden for ggml initialization failures (#15755)
* metal: harden for ggml initialization failures

ggml_metal_device_init performs a probe to verify the tensor API compiles. On
some systems this passes even though kernel coverage isn't complete, which
results in a later crash when compiling the real kernels. This change adds a
single retry that disables the tensor API if any of the error strings match
this failure mode (sketched after this list). It also hardens an error case in
the Go initDevices to detect device initialization failures and panic instead
of crashing later on a nil array entry.

Fixes #15734

* review comments

* review comments
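The retry and the nil-entry check, sketched with hypothetical names:

    import "fmt"

    // initMetalDevice retries once with the tensor API disabled when the
    // error string matches the known bad-probe failure mode.
    func initMetalDevice(init func(disableTensorAPI bool) error, matchesProbeFailure func(error) bool) error {
        err := init(false)
        if err != nil && matchesProbeFailure(err) {
            err = init(true) // single retry without the tensor API
        }
        return err
    }

    // checkDevices panics on a nil entry up front instead of crashing later.
    func checkDevices[T any](devices []*T) {
        for i, d := range devices {
            if d == nil {
                panic(fmt.Sprintf("ggml device %d failed to initialize", i))
            }
        }
    }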
2026-04-30 16:28:03 -07:00
Bruce MacDonald
917324bb4d
app: remove ollama update url env var used for testing (#15905) 2026-04-30 13:14:08 -07:00
Parth Sareen
c7c2837c96
renderers: update gemma4 renderer (#15886) 2026-04-29 18:40:23 -07:00
Parth Sareen
b6447caebc
launch: use vram bytes for model recommendations (#15885) 2026-04-29 18:40:14 -07:00
Eva H
bad32c7244
launch/docs: fix title for pool (#15883) 2026-04-29 17:18:44 -04:00
Eva H
ab2e005bf7
app: align the app launch page with ollama launch (#15753) 2026-04-29 14:45:19 -04:00
Parth Sareen
321cc8a2ba
server/launch: add model recommendations cache endpoint (#15868) 2026-04-28 17:09:04 -07:00
Daniel Hiltgen
87288ced4f
New models (#15861)
* mlx: add laguna model support

* convert: support fp8 safetensors import

Decode HF F8_E4M3 safetensors with block scale companions into GGUF-supported tensor types, and record which output tensors came from FP8 source weights.

Use that source-precision metadata during create quantization: default FP8-sourced GGUFs to Q8_0, keep non-FP8 tensors at their original precision for Q8_0, and promote non-FP8 quantizable tensors to Q8_0 for Q4_K requests. (An E4M3 decode sketch follows the list below.)

* ggml: add laguna model support

* server: preserve generate logprobs with builtin parsers

Generate requests were dropping logprob-only chunks whenever a builtin parser buffered visible content. Chat already handled this case, but generate only forwarded chunks with visible response, thinking, or tool-call output.

Keep generate chunks that carry logprobs even when the builtin parser has not flushed visible content yet, and add a regression test that exercises the behavior with a generic thinking parser.

* review comments - perf improvements

* ggml: implement nemotron 3 nano omni

* add poolside integration

* update poolside doc

* adapt to new cache setup

* fix test

* fix test
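Back to the FP8 import: decoding a single HF F8_E4M3 (E4M3FN) value works like this; the block-scale layout is assumed and the plumbing simplified:

    import "math"

    // f8e4m3ToFloat32 decodes one E4M3FN byte: 1 sign bit, 4 exponent bits
    // (bias 7), 3 mantissa bits; no infinities, and only S.1111.111 is NaN.
    func f8e4m3ToFloat32(b byte) float32 {
        sign := float32(1)
        if b&0x80 != 0 {
            sign = -1
        }
        exp := int(b>>3) & 0xF
        man := float32(b & 0x7)
        switch {
        case exp == 0xF && man == 7:
            return float32(math.NaN())
        case exp == 0: // subnormal: no implicit leading 1
            return sign * (man / 8) * float32(math.Exp2(-6))
        default:
            return sign * (1 + man/8) * float32(math.Exp2(float64(exp-7)))
        }
    }

    // dequantize applies a block-scale companion (per-block layout assumed).
    func dequantize(data []byte, scale []float32, blockSize int) []float32 {
        out := make([]float32, len(data))
        for i, b := range data {
            out[i] = f8e4m3ToFloat32(b) * scale[i/blockSize]
        }
        return out
    }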

---------

Co-authored-by: Eva Ho <hoyyeva@gmail.com>
2026-04-28 11:50:12 -07:00
Jesse Gross
2bbe2405fe mlxrunner: decouple models from attention cache storage layout
Models build their own attention masks and read K/V directly from
the cache's buffers, which ties them to the cache's storage layout.
That blocks multi-sequence batching — right-padded rows need a
query-padding mask composed onto every model — and rules out
variants like paged attention where K/V isn't one contiguous tensor.

Caches now hand back a per-layer KVHistory holding post-update K, V,
and a MaskApplier that merges the cache's storage restrictions into
the model's logical mask. Models describe their mask in logical
terms; SDPA composes model, padding, and applier contributions and
dispatches to the kernel's causal or no-mask fast path when it can.
KVHistory still exposes K, V, and the composed mask for manual
attention paths (e.g. CUDA prefill at head_dim > 128).

Performance for single-sequence inference is unchanged.
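The shape being described, roughly (field and type names approximate):

    // Array stands in for the MLX array type.
    type Array struct{}

    // MaskApplier merges the cache's storage restrictions (padding, layout)
    // into the model's logical attention mask.
    type MaskApplier func(logical *Array) *Array

    // KVHistory is what a cache hands back per layer: post-update K/V plus
    // the applier; models no longer read the cache's buffers directly.
    type KVHistory struct {
        K, V  *Array
        Apply MaskApplier
    }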
2026-04-27 20:04:46 -07:00
Jesse Gross
bd21678b16 mlxrunner: apply RoPE at per-row positions
Switch RoPE from the scalar-offset kernel (mlx_fast_rope) to the
array-offset one (mlx_fast_rope_dynamic) so each batch row can start
at its own position. The pipeline tracks the current position locally
and passes it to the model through Batch.SeqOffsets; each model
materializes that slice into an int32 array for the RoPE call.

Single-sequence behavior is unchanged; this is the wiring needed
before the runner can batch independent sequences.
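The materialization step is just a widening copy; illustrative:

    // ropeOffsets turns per-row start positions into the int32 array that
    // the array-offset kernel (mlx_fast_rope_dynamic) consumes.
    func ropeOffsets(seqOffsets []int) []int32 {
        out := make([]int32, len(seqOffsets))
        for i, p := range seqOffsets {
            out[i] = int32(p)
        }
        return out
    }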
2026-04-27 20:04:46 -07:00
Jesse Gross
088dfd89a8 mlxrunner: wrap model forward inputs in a Batch struct
Gives a single extension point for per-call context (positions,
sequence IDs, masks) as multi-sequence batching grows, without having
to churn every model's Forward signature again.
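Roughly (fields illustrative of the direction, not the exact definition):

    // Batch carries per-call context so Forward signatures stay stable:
    //   func (m *Model) Forward(b Batch) *Array
    type Batch struct {
        Tokens     []int32 // flattened input tokens
        SeqOffsets []int   // per-row start positions (used by RoPE)
        SeqIDs     []int   // owning sequence per row
    }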
2026-04-27 20:04:46 -07:00
Eva H
3cab8a7b02
app/server: fix desktop app startup killing active ollama launch sessions (#15657) 2026-04-27 22:52:53 -04:00
Daniel Hiltgen
03aee88186
mlx: Support NVIDIA TensorRT Model Optimizer import (#15566)
* mlx: Support NVIDIA TensorRT Model Optimizer import

* x/create: support FP8 safetensors import

Decode HF F8_E4M3 safetensors with block scale companions into MLX-importable tensor blobs, including compressed-tensors weight_scale metadata, packed NVFP4 layouts, and mixed-precision tensor headers.

Use that source-precision metadata during create quantization: default FP8-sourced imports to mxfp8, allow source FP8 to target MLX low-bit formats, preserve source-quantized NVFP4 layouts, selectively keep or promote tensors based on their source precision, and detect quantized dtype from mixed-precision safetensors manifests.

* review comments
2026-04-27 18:28:10 -07:00
Daniel Hiltgen
ec9b4e9e47
tokenizer: fix multi-regex BPE offset handling (#15844)
Use the current fragment offset when emitting unmatched spans during multi-regex BPE splitting. This avoids duplicating earlier prompt text and inflating token counts for multi-stage BPE tokenizers.
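A sketch of the corrected offset bookkeeping (illustrative, not the tokenizer's actual code):

    import "regexp"

    // splitSpans emits matched and unmatched spans of frag with indices
    // relative to the fragment's own start (base), not the whole prompt.
    // Using a stale base here is what duplicated earlier prompt text.
    func splitSpans(re *regexp.Regexp, frag string, base int) [][2]int {
        var spans [][2]int
        prev := 0
        for _, m := range re.FindAllStringIndex(frag, -1) {
            if m[0] > prev {
                spans = append(spans, [2]int{base + prev, base + m[0]}) // unmatched gap
            }
            spans = append(spans, [2]int{base + m[0], base + m[1]})
            prev = m[1]
        }
        if prev < len(frag) {
            spans = append(spans, [2]int{base + prev, base + len(frag)})
        }
        return spans
    }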
2026-04-27 14:14:27 -07:00
Jesse Gross
4656a07e56 mlxrunner: batch the sampler across multiple sequences
Register sequences with Add/Remove; each Sample call takes any subset of
registered slots and samples one token per row, appending to each slot's
ring-buffer history. When all slots share Options and penalty rings are
full, one fused transform pass runs over the whole batch via a persistent
pooled history tensor; otherwise calls fall back to per-slot serial
processing indexed against the same pool.

Performance is unchanged for a single sequence, which is all that is
exposed for now.
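The dispatch decision, sketched with illustrative types:

    type Options struct {
        Temp        float64
        RepeatLastN int
    }

    type slot struct {
        opts     Options
        ringFull bool // penalty history ring at capacity
    }

    type Sampler struct{ slots map[int]*slot }

    // Sample fuses into one batch pass only when every requested slot
    // shares Options and its penalty ring is full.
    func (s *Sampler) Sample(ids []int, fused func([]int) []int32, one func(*slot) int32) []int32 {
        canFuse := true
        first := s.slots[ids[0]]
        for _, id := range ids {
            if sl := s.slots[id]; sl.opts != first.opts || !sl.ringFull {
                canFuse = false
                break
            }
        }
        if canFuse {
            return fused(ids) // single transform pass over the whole batch
        }
        out := make([]int32, 0, len(ids))
        for _, id := range ids {
            out = append(out, one(s.slots[id])) // per-slot serial fallback
        }
        return out
    }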
2026-04-25 09:53:53 -07:00
Jesse Gross
30f86cb9dd mlxrunner: track sampler history in a fixed-size ring buffer
AppendToken used to concatenate the new token onto the history tensor
and slice it back to RepeatLastN every decode step, churning the graph
shape and reallocating a fresh tensor each call. The stateful penalties
don't care about order within the window, so a fixed-capacity ring with
one SliceUpdate per append keeps the tensor shape constant across
steps.
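A plain-Go analogue of the ring (the tensor version does one SliceUpdate per append):

    // tokenRing keeps the last cap tokens with a constant shape. Penalties
    // don't care about order within the window, so wrap-around is fine.
    type tokenRing struct {
        buf  []int32
        next int
        full bool
    }

    func (r *tokenRing) Append(tok int32) {
        r.buf[r.next] = tok // one in-place write instead of concat+slice
        r.next = (r.next + 1) % len(r.buf)
        if r.next == 0 {
            r.full = true
        }
    }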
2026-04-25 09:53:53 -07:00
Parth Sareen
ea01af6f76
openai: map responses reasoning effort to think (#15789) 2026-04-24 02:49:36 -07:00
Parth Sareen
c2ebb4d57c
api: accept "max" as a think value (#15787) 2026-04-24 01:49:39 -07:00
Parth Sareen
590109c835
launch: harden OpenClaw onboarding flow (#15777) 2026-04-23 16:47:20 -07:00
Eva H
b4442c6d17
launch: resave managed integration config when live config drifts (#15776) 2026-04-23 19:32:36 -04:00
Eva H
85ff8e4a21
launch: keep launch recommended models in a fixed canonical order (#15750) 2026-04-23 16:33:00 -04:00
Parth Sareen
160660e572
launch: use bundled OpenClaw ollama web search (#15757) 2026-04-22 16:34:19 -07:00
madflow
3b43b9bc4b
docs: update structured outputs doc for cloud (#15733)
---------

Co-authored-by: Parth Sareen <parth.sareen@ollama.com>
2026-04-22 00:42:39 -07:00
Parth Sareen
21883571b7
launch: replace kimi-k2.5 with k2.6 as top recommended model (#15737) 2026-04-21 15:13:20 -07:00
Jesse Gross
ce99f24731 mlxrunner: tokenize prompts in request handler goroutines
Move tokenization out of the single GPU processing goroutine and
into each request's HTTP handler goroutine. This allows the next
request's prompt to be tokenized on the CPU while the current
request is executing on the GPU.
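The shape of the change, illustrative:

    // Before: the GPU goroutine tokenized each prompt itself, serializing
    // CPU and GPU work. After: each handler tokenizes, and the GPU
    // goroutine only consumes token IDs.
    func handlePrompt(prompt string, tokenize func(string) []int32, gpuQueue chan<- []int32) {
        tokens := tokenize(prompt) // CPU work in the handler goroutine
        gpuQueue <- tokens         // overlaps with the current GPU request
    }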
2026-04-21 14:38:49 -07:00
Jesse Gross
04f5f0cdb4 mlx: improve thread safety of array management
Use atomic.Int32 for Array.pinned and a sync.Mutex for the global
arrays slice so MLX arrays can be created and pinned from multiple
goroutines without racing on those structures. Convert Array value
receivers to pointer receivers and struct fields from Array to
*Array to avoid copying the atomic.

This does not fully achieve thread safety even when building
completely independent graphs. The tracing flag and traceScratch
slice in compile.go are unprotected, so concurrent Compile calls
will race. MLX itself is not fully thread-safe either, although
upstream work is improving that.
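Approximately:

    import (
        "sync"
        "sync/atomic"
    )

    // Array uses an atomic pin count; methods take pointer receivers
    // because atomic.Int32 must not be copied.
    type Array struct {
        pinned atomic.Int32
    }

    func (a *Array) Pin()   { a.pinned.Add(1) }
    func (a *Array) Unpin() { a.pinned.Add(-1) }

    var (
        arraysMu sync.Mutex
        arrays   []*Array // global slice, now guarded by arraysMu
    )

    func track(a *Array) {
        arraysMu.Lock()
        defer arraysMu.Unlock()
        arrays = append(arrays, a)
    }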
2026-04-21 14:38:49 -07:00
Matteo Celani
fb36a01ffe
app/ui: fix model picker showing stale model after switching chats (#15280)
* app/ui: fix model picker showing stale model after switching chats

Optimistic messages created during streaming were storing the full
Model object instead of the model name string. When switching back
to a chat with cached streaming data, the restore effect read an
object where it expected a string, causing the model picker to fail
matching and remain stuck on the previous chat's model.

* app/ui: fix two more instances of Model object passed as model name

Fix the same bug at lines 523 and 536 in the assistant_with_tools
event handler, where selectedModel (object) was used instead of
selectedModel.model (string).
2026-04-21 15:08:06 -04:00
Michael Verrilli
0c65ed33bc
cmd: populate model capabilities in launchInteractiveModel (#15712)
launchInteractiveModel was introduced in PR #14609 without the
client.Show() capability-detection block that RunHandler uses.
This left opts.MultiModal always false in the TUI path, causing
image/audio file paths to be treated as unknown commands
instead of being loaded as multimodal attachments.

Mirror the Show() call, pull-on-404 fallback, cloud auth handling,
and MultiModal/Think population from RunHandler into
launchInteractiveModel.

Fixes #15711
2026-04-21 14:37:36 -04:00
Jesse Gross
22d6c817f8 mlxrunner: fuse top-P and top-K into a single sort pass
When both filters are active, avoid paying for a full sort in top-P
and a partial sort in top-K. Single-filter paths are unchanged.
Improves generation throughput on gemma4:e4b by 1.5%.
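A plain-Go analogue of the fused pass:

    import "sort"

    // topKTopP sorts once (descending), cuts to the k largest, then keeps
    // the smallest prefix whose cumulative probability reaches p.
    func topKTopP(probs []float64, k int, p float64) []float64 {
        sorted := append([]float64(nil), probs...)
        sort.Sort(sort.Reverse(sort.Float64Slice(sorted)))
        if k < len(sorted) {
            sorted = sorted[:k]
        }
        cum := 0.0
        for i, v := range sorted {
            cum += v
            if cum >= p {
                return sorted[:i+1]
            }
        }
        return sorted
    }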
2026-04-20 17:43:00 -07:00
Jesse Gross
ca01373b28 mlxrunner: use MaxAxis in the min-P sampler
One reduction op instead of Argmax + TakeAlongAxis.
2026-04-20 17:43:00 -07:00
Jesse Gross
24e038d56a mlxrunner: add logprobs support
Match the ollamarunner and OpenAI semantics: raw, full-vocab log-softmax
with the top-K ranked by probability. Skipped on the GPU when the request
doesn't ask for logprobs so decode doesn't pay for it otherwise.
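The semantics, in plain Go:

    import (
        "math"
        "sort"
    )

    // topLogprobs computes log-softmax over the full vocabulary, then
    // returns the k largest entries (ranking by probability and by
    // logprob is the same ordering).
    func topLogprobs(logits []float64, k int) []float64 {
        max := logits[0]
        for _, v := range logits {
            if v > max {
                max = v
            }
        }
        var sum float64
        for _, v := range logits {
            sum += math.Exp(v - max)
        }
        lse := max + math.Log(sum)
        lp := make([]float64, len(logits))
        for i, v := range logits {
            lp[i] = v - lse // raw log-softmax over the whole vocab
        }
        sort.Sort(sort.Reverse(sort.Float64Slice(lp)))
        return lp[:k]
    }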
2026-04-20 17:43:00 -07:00
Parth Sareen
5d1021603a
server: apply format when think=false for gemma4 (#15678) 2026-04-20 17:42:29 -07:00
Parth Sareen
8e05d734b9
launch: add kimi cli integration with installer flow (#15723) 2026-04-20 15:33:32 -07:00