
# Integration Tests

This directory contains integration tests that exercise Ollama end-to-end to verify its behavior.

By default, these tests are disabled, so `go test ./...` exercises only unit tests. To run the integration tests you must pass the `integration` build tag: `go test -tags=integration ./...`. Some tests require additional tags so that runs can be scoped and the total duration kept reasonable. For example, testing a broad set of models requires `-tags=integration,models` and a longer timeout (roughly 60m or more, depending on the speed of your GPU). To view the current set of tag combinations, use `find integration -type f | xargs grep "go:build"`.
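Collected as runnable commands, the invocations above look like this (the timeout value is the example given in the text; adjust for your hardware):

```shell
# Default: integration tag only
go test -tags=integration ./...

# Broader model coverage needs an extra tag and a longer timeout
go test -tags=integration,models -timeout 60m ./...

# List the build-tag combinations currently used by the tests
find integration -type f | xargs grep "go:build"
```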

The integration tests have two modes of operation:

  1. By default, on Unix systems, they will start the server on a random port, run the tests, and then shut down the server. On Windows, you must ALWAYS run the server yourself on `OLLAMA_HOST` for the tests to work.
  2. If `OLLAMA_TEST_EXISTING` is set to a non-empty string, the tests will run against an already running server, which can be remote depending on your `OLLAMA_HOST` environment variable.
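For example, mode 2 can be invoked as follows (the host and port shown are illustrative; any non-empty value enables `OLLAMA_TEST_EXISTING`):

```shell
# Point the tests at a server that is already running, possibly remote
OLLAMA_HOST=127.0.0.1:11434 OLLAMA_TEST_EXISTING=1 \
  go test -tags=integration ./integration/
```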

## Important

Before running the tests locally without the "test existing" setting, compile ollama from the top of the source tree with `go build .`, building GPU support with cmake first if applicable on your platform. The integration tests expect to find an `ollama` binary at the top of the tree.
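A typical build sequence might look like the following; the exact cmake flags for GPU support vary by platform, so treat this as a sketch rather than a definitive recipe:

```shell
# GPU support first, if applicable on your platform (flags vary)
cmake -B build
cmake --build build

# Then produce the ollama binary at the top of the tree,
# where the integration tests expect to find it
go build .
```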

## Testing a New Model

When implementing a new model architecture, use `OLLAMA_TEST_MODEL` to run the integration suite against your model.

```shell
# Build the binary first
go build .

# Run integration tests against it
OLLAMA_TEST_MODEL=mymodel go test -tags integration -v -count 1 -timeout 15m ./integration/
```