ollama/template
Latest commit: 31e336791a by Daniel Hiltgen (2026-05-06 17:26:05 -07:00)
runner: Remove CGO engines, use llama-server exclusively for GGML models

Remove the vendored GGML and llama.cpp backend, the CGO runner, the Go model
implementations, and the sample package. llama-server (built from upstream
llama.cpp via FetchContent) is now the sole inference engine for GGUF-based
models. (Safetensors-based models continue to run on the new MLX engine.) This
allows us to pick up new capabilities and fixes from llama.cpp more rapidly as
they are released.

On Windows, recent AMD driver versions are now required to support ROCm v7, as
llama.cpp currently does not support building against v6.
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| testdata | templates: add autotemplate for gemma3 (#9880) | 2025-03-20 00:15:30 -07:00 |
| alfred.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| alfred.json | | |
| alpaca.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| alpaca.json | | |
| chatml.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| chatml.json | | |
| chatqa.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| chatqa.json | | |
| codellama-70b-instruct.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| codellama-70b-instruct.json | | |
| command-r.gotmpl | convert: import support for command-r models from safetensors (#6063) | 2025-01-15 16:31:22 -08:00 |
| command-r.json | convert: import support for command-r models from safetensors (#6063) | 2025-01-15 16:31:22 -08:00 |
| falcon-instruct.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| falcon-instruct.json | | |
| gemma-instruct.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| gemma-instruct.json | | |
| gemma3-instruct.gotmpl | templates: add autotemplate for gemma3 (#9880) | 2025-03-20 00:15:30 -07:00 |
| gemma3-instruct.json | templates: add autotemplate for gemma3 (#9880) | 2025-03-20 00:15:30 -07:00 |
| granite-instruct.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| granite-instruct.json | | |
| index.json | templates: add autotemplate for gemma3 (#9880) | 2025-03-20 00:15:30 -07:00 |
| llama2-chat.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| llama2-chat.json | | |
| llama3-instruct.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| llama3-instruct.json | | |
| magicoder.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| magicoder.json | | |
| mistral-instruct.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| mistral-instruct.json | | |
| openchat.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| openchat.json | | |
| phi-3.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| phi-3.json | | |
| solar-instruct.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| solar-instruct.json | | |
| starcoder2-instruct.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| starcoder2-instruct.json | | |
| template.go | runner: Remove CGO engines, use llama-server exclusively for GGML models | 2026-05-06 17:26:05 -07:00 |
| template_test.go | template: fix args-as-json rendering (#13636) | 2026-01-06 18:33:57 -08:00 |
| vicuna.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| vicuna.json | | |
| zephyr.gotmpl | update templates to use messages | 2024-08-27 15:44:04 -07:00 |
| zephyr.json | | |