
Ollama Proxy LLM Configuration

Configuration reference for the Ollama proxy LLM provider.

Details of the available models can be found at https://ollama.com/library

Parameters

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| `name` | string |  | The name of the model. |
| `backend` | string | `None` | The real model name to pass to the provider. If `backend` is `None`, `name` is used as the real model name. |
| `provider` | string | `proxy/ollama` | The provider of the model. If the model is deployed locally, this is the inference type; if it is deployed on a third-party service, this is the platform name (`proxy/<platform>`). |
| `verbose` | boolean | `False` | Show verbose output. |
| `concurrency` | integer | `5` | Model concurrency limit. |
| `prompt_template` | string |  | Prompt template. If `None`, the prompt template is determined automatically from the model. Only used for local deployment. |
| `context_length` | integer |  | The context length of the model. If `None`, it is determined automatically from the model. |
| `reasoning_model` | boolean |  | Whether the model is a reasoning model. If `None`, it is determined automatically from the model. |
| `api_base` | string | `${env:OLLAMA_API_BASE:-http://localhost:11434}` | The base URL of the Ollama API. |
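For orientation, below is a minimal sketch of how these parameters might appear in a TOML model configuration file. The `[[models.llms]]` table path and overall file layout are assumptions, not taken from this page; only the parameter names, defaults, and the `${env:...}` interpolation syntax come from the table above.

```toml
# Minimal sketch of an Ollama proxy model entry.
# The [[models.llms]] table path is an assumed layout; parameter names and
# defaults follow the table above. Adjust values to your deployment.
[[models.llms]]
name = "llama3"                # model name as listed at https://ollama.com/library
provider = "proxy/ollama"      # default provider for the Ollama proxy
api_base = "${env:OLLAMA_API_BASE:-http://localhost:11434}"  # falls back to the local Ollama server
concurrency = 5                # default concurrency limit
verbose = false                # set to true for verbose output
```

With a layout like this, `OLLAMA_API_BASE` can be exported in the environment to point at a remote Ollama server without editing the file, and the other optional parameters (for example `backend` or `context_length`) can be added as extra keys when automatic detection is not sufficient.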