* test: integration test hardening
Improve reliability on slower systems and fix some flaky tests. Fix
a few logic flaws in the newer tests, plus general hardening.
* tighten up vision logging
* add new models
* remove some older models; they are still covered by library scenarios
* mlx: refined model push behavior
Refine the algorithm for parallel pushes of safetensors-based models to
improve reliability and throughput.
* review comments, hardening, and performance tuning for slow links
* review comments
Remove the vendored GGML and llama.cpp backend, CGO runner, Go model
implementations, and sample. llama-server (built from upstream llama.cpp via
FetchContent) is now the sole inference engine for GGUF-based models.
(Safetensors-based models continue to run on the new MLX engine.) This allows
us to more rapidly pick up new capabilities and fixes from llama.cpp as they
come out.
On Windows this now requires recent AMD driver versions to support ROCm v7, as
llama.cpp currently does not support building against v6.
This change adds support for MTP (multi-token prediction) speculative decoding for the
gemma4 model family.
It includes:
* support for importing safetensors-based gemma4 draft models with `ollama create`
* a new DRAFT command in the Modelfile for specifying draft models
* a `--quantize-draft` flag for `ollama create` to quantize the draft model
* cache support for speculation
* changes to the rotating cache to be able to handle MTP correctly
* sampling support for draft model token prediction
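A hypothetical Modelfile using the new DRAFT command might look like the sketch below. The commit only names the command, so the argument form (local path vs. model name) is an assumption here, as is the exact `ollama create` invocation:

```
# Hypothetical sketch; DRAFT's argument form is an assumption.
FROM ./gemma4
DRAFT ./gemma4-draft
```

Importing could then look like `ollama create my-gemma4 -f Modelfile --quantize-draft`, though the flag's exact arguments are not specified above.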
---------
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
* Update MLX and MLX-C
* Run MLX CGO work on a locked OS thread
MLX now relies on OS-thread-local execution state for streams, encoders, and caches. Add an mlxthread executor backed by runtime.LockOSThread and route runner initialization, model load, inference, status memory reads, and cleanup through the worker so Go goroutine migration cannot split MLX state across native threads.
Also stop caching default MLX streams before the runner owns the thread and add worker/threaded MLX regression tests.
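The locked-thread executor can be sketched as below, assuming an mlxthread-style worker; all names here are illustrative, not the actual package API. The point is that every MLX call funnels through one goroutine pinned to one OS thread, so thread-local native state can never be split by goroutine migration:

```go
package main

// Minimal sketch of a LockOSThread-backed executor. Illustrative only;
// the real mlxthread package may differ.

import (
	"fmt"
	"runtime"
)

type worker struct {
	jobs chan func()
}

func newWorker() *worker {
	w := &worker{jobs: make(chan func())}
	go func() {
		// Pin this goroutine to its OS thread so thread-local state
		// (streams, encoders, caches) always lives on the same thread.
		runtime.LockOSThread()
		defer runtime.UnlockOSThread()
		for job := range w.jobs {
			job()
		}
	}()
	return w
}

// Do runs fn on the locked thread and waits for it to finish.
func (w *worker) Do(fn func()) {
	done := make(chan struct{})
	w.jobs <- func() {
		fn()
		close(done)
	}
	<-done
}

func main() {
	w := newWorker()
	sum := 0
	for i := 1; i <= 3; i++ {
		n := i
		w.Do(func() { sum += n }) // serialized on the worker thread
	}
	fmt.Println(sum)
}
```

Because `Do` blocks until the job completes, callers keep ordinary synchronous call semantics while the work itself executes on the pinned thread.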
* mlx: use common status writer
* mlx: bundle missing libjaccl on arm64
Inspired by #15793
* review comments
Replace the hardcoded FEATURED_MODELS list with the
/api/experimental/model-recommendations endpoint so the picker stays in
sync with server-driven recommendations. Inline the merge into useModels
(recommendations first, then the rest of /api/tags) and drop the
standalone mergeModels util.
* metal: harden for ggml initialization failures
ggml_metal_device_init performs a probe to verify the tensor API compiles. On
some systems this passes, even though kernel coverage isn't complete, which
results in a later crash when compiling the real kernels. This change adds a
single retry if any of the error strings match this failure mode to disable the
tensor API. It also hardens an error case in the Go initDevices to detect
device initialization failures and panic instead of crashing later on a nil
array entry.
Fixes #15734
* review comments
* review comments
* mlx: add laguna model support
* convert: support fp8 safetensors import
Decode HF F8_E4M3 safetensors with block scale companions into GGUF-supported tensor types, and record which output tensors came from FP8 source weights.
Use that source-precision metadata during create quantization: default FP8-sourced GGUFs to Q8_0, keep non-FP8 tensors at their original precision for Q8_0, and promote non-FP8 quantizable tensors to Q8_0 for Q4_K requests.
* ggml: add laguna model support
* server: preserve generate logprobs with builtin parsers
Generate requests were dropping logprob-only chunks whenever a builtin parser buffered visible content. Chat already handled this case, but generate only forwarded chunks with visible response, thinking, or tool-call output.
Keep generate chunks that carry logprobs even when the builtin parser has not flushed visible content yet, and add a regression test that exercises the behavior with a generic thinking parser.
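The keep/drop rule can be sketched as a predicate; this is illustrative, not the actual server code, and the field names are assumptions:

```go
package main

import "fmt"

// chunk mirrors the fields the streaming path cares about; names are
// hypothetical stand-ins for the real response types.
type chunk struct {
	response, thinking string
	toolCall           bool
	logprobs           []float64
}

// keep reports whether a generate chunk should be forwarded. Previously
// only chunks with visible output passed; now chunks carrying logprobs
// survive even while the builtin parser is still buffering content.
func keep(c chunk) bool {
	return c.response != "" || c.thinking != "" || c.toolCall || len(c.logprobs) > 0
}

func main() {
	// The parser has not flushed visible text yet, but the chunk still
	// carries logprobs, so it must reach the client.
	buffered := chunk{logprobs: []float64{-0.12}}
	fmt.Println(keep(buffered))
}
```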
* review comments - perf improvements
* ggml: implement nemotron 3 nano omni
* add poolside integration
* update poolside doc
* adapt to new cache setup
* fix test
* fix test
---------
Co-authored-by: Eva Ho <hoyyeva@gmail.com>
Models build their own attention masks and read K/V directly from
the cache's buffers, which ties them to the cache's storage layout.
That blocks multi-sequence batching — right-padded rows need a
query-padding mask composed onto every model — and rules out
variants like paged attention where K/V isn't one contiguous tensor.
Caches now hand back a per-layer KVHistory holding post-update K, V,
and a MaskApplier that merges the cache's storage restrictions into
the model's logical mask. Models describe their mask in logical
terms; SDPA composes model, padding, and applier contributions and
dispatches to the kernel's causal or no-mask fast path when it can.
KVHistory still exposes K, V, and the composed mask for manual
attention paths (e.g. CUDA prefill at head_dim > 128).
Performance for single-sequence inference is unchanged.
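The KVHistory/MaskApplier split described above can be sketched as follows. The types are stand-ins (the real engine works on tensors, not strings), but the shape of the composition is the same: the cache folds its storage restrictions into the model's logical mask, and SDPA dispatches to a fast path only when the composed result allows it:

```go
package main

import "fmt"

// Mask is a stand-in for a logical attention mask.
type Mask struct{ kind string } // "none", "causal", or "explicit"

// MaskApplier merges the cache's storage restrictions (padding, layout)
// into the model's logical mask.
type MaskApplier func(logical Mask) Mask

// KVHistory is the per-layer view a cache hands back after an update.
type KVHistory struct {
	K, V  []float32
	Apply MaskApplier
}

// sdpa composes the model's logical mask with the cache applier and picks
// a kernel fast path when the composed mask is trivially causal or absent.
func sdpa(h KVHistory, model Mask) string {
	m := h.Apply(model)
	switch m.kind {
	case "none":
		return "no-mask fast path"
	case "causal":
		return "causal fast path"
	default:
		return "explicit mask path"
	}
}

func main() {
	// A contiguous single-sequence cache imposes no extra restrictions,
	// so a causal model mask still hits the causal fast path.
	contiguous := KVHistory{Apply: func(m Mask) Mask { return m }}
	fmt.Println(sdpa(contiguous, Mask{kind: "causal"}))

	// A right-padded multi-sequence cache must add a padding mask,
	// forcing the explicit path.
	padded := KVHistory{Apply: func(Mask) Mask { return Mask{kind: "explicit"} }}
	fmt.Println(sdpa(padded, Mask{kind: "causal"}))
}
```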
Switch RoPE from the scalar-offset kernel (mlx_fast_rope) to the
array-offset one (mlx_fast_rope_dynamic) so each batch row can start
at its own position. The pipeline tracks the current position locally
and passes it to the model through Batch.SeqOffsets; each model
materializes that slice into an int32 array for the RoPE call.
Single-sequence behavior is unchanged; this is the wiring needed
before the runner can batch independent sequences.
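The per-row offset wiring can be sketched as below, assuming a Batch carrying SeqOffsets as described; `materialize` stands in for the step that feeds the array-offset RoPE kernel (mlx_fast_rope_dynamic) and is not the actual runner code:

```go
package main

import "fmt"

// Batch carries per-call context; SeqOffsets holds the starting position
// of each batch row. Illustrative stand-in for the real type.
type Batch struct {
	SeqOffsets []int
}

// materialize converts the per-row offsets into the int32 slice the
// array-offset RoPE kernel consumes, so each row starts at its own position.
func materialize(b Batch) []int32 {
	out := make([]int32, len(b.SeqOffsets))
	for i, p := range b.SeqOffsets {
		out[i] = int32(p)
	}
	return out
}

func main() {
	// Two independent sequences at different positions in one batch.
	b := Batch{SeqOffsets: []int{17, 0}}
	fmt.Println(materialize(b))
}
```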
Gives a single extension point for per-call context (positions,
sequence IDs, masks) as multi-sequence batching grows, without having
to churn every model's Forward signature again.
* mlx: Support NVIDIA TensorRT Model Optimizer import
* x/create: support FP8 safetensors import
Decode HF F8_E4M3 safetensors with block scale companions into MLX-importable tensor blobs, including compressed-tensors weight_scale metadata, packed NVFP4 layouts, and mixed-precision tensor headers.
Use that source-precision metadata during create quantization: default FP8-sourced imports to mxfp8, allow source FP8 to target MLX low-bit formats, preserve source-quantized NVFP4 layouts, selectively keep or promote tensors based on their source precision, and detect quantized dtype from mixed-precision safetensors manifests.
* review comments
Use the current fragment offset when emitting unmatched spans during multi-regex BPE splitting. This avoids duplicating earlier prompt text and inflating token counts for multi-stage BPE tokenizers.
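The offset fix can be illustrated with a small helper; this is a sketch of the idea, not the tokenizer's actual code. When a later regex stage scans a fragment that begins at offset `base` within the full prompt, unmatched spans must be emitted relative to `base`, not to the prompt start:

```go
package main

import "fmt"

// unmatched returns the prompt-absolute spans of frag not covered by any
// match, where matches are fragment-relative [start, end) pairs and base
// is the fragment's offset within the full prompt. The bug was emitting
// these spans without adding base, duplicating earlier prompt text.
func unmatched(frag string, matches [][2]int, base int) [][2]int {
	var out [][2]int
	prev := 0
	for _, m := range matches {
		if m[0] > prev {
			out = append(out, [2]int{base + prev, base + m[0]})
		}
		prev = m[1]
	}
	if prev < len(frag) {
		out = append(out, [2]int{base + prev, base + len(frag)})
	}
	return out
}

func main() {
	// Fragment "world" starts at offset 6 of "hello world"; a match at
	// fragment-relative [1,3) leaves unmatched spans [6,7) and [9,11).
	fmt.Println(unmatched("world", [][2]int{{1, 3}}, 6))
}
```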
Register sequences with Add/Remove; each Sample call takes any subset of
registered slots and samples one token per row, appending to each slot's
ring-buffer history. When all slots share Options and penalty rings are
full, one fused transform pass runs over the whole batch via a persistent
pooled history tensor; otherwise calls fall back to per-slot serial
processing indexed against the same pool.
Performance is unchanged for a single sequence, which is all that is
exposed for now.
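The slot registry described above can be sketched as follows. The sampling step is reduced to an argmax stand-in and the names are illustrative; the real sampler runs tensor transforms and the fused batch path, which this sketch omits:

```go
package main

import "fmt"

// Slot holds one registered sequence and its sampled-token history.
type Slot struct {
	history []int32
}

type Sampler struct {
	slots map[int]*Slot
}

func NewSampler() *Sampler { return &Sampler{slots: map[int]*Slot{}} }

// Add and Remove register and unregister sequence slots.
func (s *Sampler) Add(id int)    { s.slots[id] = &Slot{} }
func (s *Sampler) Remove(id int) { delete(s.slots, id) }

// Sample takes any subset of registered slots, samples one token per row
// (argmax here, standing in for the real transforms), and appends the
// result to each slot's history.
func (s *Sampler) Sample(ids []int, logits [][]float32) []int32 {
	out := make([]int32, len(ids))
	for i, id := range ids {
		best := 0
		for j, v := range logits[i] {
			if v > logits[i][best] {
				best = j
			}
		}
		out[i] = int32(best)
		s.slots[id].history = append(s.slots[id].history, out[i])
	}
	return out
}

func main() {
	s := NewSampler()
	s.Add(0)
	s.Add(1)
	toks := s.Sample([]int{0, 1}, [][]float32{{0.1, 0.9}, {0.8, 0.2}})
	fmt.Println(toks) // one token per row, appended to each slot's history
}
```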
AppendToken used to concatenate the new token onto the history tensor
and slice it back to RepeatLastN every decode step, churning the graph
shape and reallocating a fresh tensor each call. The stateful penalties
don't care about order within the window, so a fixed-capacity ring with
one SliceUpdate per append keeps the tensor shape constant across
steps.
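The fixed-capacity ring can be sketched with a plain slice standing in for the tensor; the real implementation does one SliceUpdate per append, but the shape-stability argument is the same:

```go
package main

import "fmt"

// ring is a fixed-capacity token history. Appends overwrite one slot
// instead of concatenating and re-slicing, so the buffer (a tensor in the
// real code) never changes shape across decode steps.
type ring struct {
	buf  []int32
	next int
	n    int
}

func newRing(capacity int) *ring {
	return &ring{buf: make([]int32, capacity)}
}

// AppendToken overwrites the oldest slot once the ring is full. Order
// inside the window is lost, which the stateful penalties do not need.
func (r *ring) AppendToken(tok int32) {
	r.buf[r.next] = tok
	r.next = (r.next + 1) % len(r.buf)
	if r.n < len(r.buf) {
		r.n++
	}
}

func main() {
	r := newRing(3)
	for _, t := range []int32{1, 2, 3, 4} {
		r.AppendToken(t)
	}
	fmt.Println(r.buf) // oldest token overwritten once capacity is reached
}
```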