ollama/ml (last updated 2026-05-06 17:26:05 -07:00)
| Name | Last commit message | Date |
| --- | --- | --- |
| backend/ggml/ggml/src/ggml-cuda/template-instances | runner: Remove CGO engines, use llama-server exclusively for GGML models | 2026-05-06 17:26:05 -07:00 |
| nn | runner: Remove CGO engines, use llama-server exclusively for GGML models | 2026-05-06 17:26:05 -07:00 |
| backend.go | Add support for gemma4 (#15214) | 2026-04-02 11:33:33 -07:00 |
| device.go | refine implementation | 2026-05-06 17:26:05 -07:00 |
| device_test.go | refine implementation | 2026-05-06 17:26:05 -07:00 |
| path.go | cpu: always ensure LibOllamaPath included (#12890) | 2025-10-31 14:37:29 -07:00 |