OllamaEmbeddingDeployModelParameters Configuration
Deployment parameters for Ollama embedding models.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| name | string | ✅ | The name of the model. |
| provider | string | ❌ | The provider of the model. If the model is deployed locally, this is the inference type; if it is deployed on a third-party service, this is the platform name (`proxy/<platform>`). Defaults to `proxy/ollama`. |
| verbose | boolean | ❌ | Show verbose output. Defaults to `False`. |
| concurrency | integer | ❌ | Model concurrency limit. Defaults to `100`. |
| api_url | string | ❌ | The URL of the embeddings API. Defaults to `http://localhost:11434`. |
| backend | string | ❌ | The real model name to pass to the provider. If `backend` is `None` (the default), `name` is used as the real model name. |
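
The following is a minimal, illustrative Python sketch of a dataclass mirroring the parameters above. It is not the library's actual implementation: the field defaults come from the table, but the class layout and the `real_model_name` helper are assumptions added only to show how `backend` falls back to `name`.

```python
# Illustrative sketch only: a dataclass mirroring the parameters documented
# above. The real OllamaEmbeddingDeployModelParameters class may differ in
# structure and validation.
from dataclasses import dataclass
from typing import Optional


@dataclass
class OllamaEmbeddingDeployModelParameters:
    name: str                                # required: the model name
    provider: str = "proxy/ollama"           # inference type or 'proxy/<platform>'
    verbose: bool = False                    # show verbose output
    concurrency: int = 100                   # model concurrency limit
    api_url: str = "http://localhost:11434"  # URL of the embeddings API
    backend: Optional[str] = None            # real model name passed to the provider

    @property
    def real_model_name(self) -> str:
        # Hypothetical helper: if backend is unset, the deployment
        # name doubles as the real model name.
        return self.backend or self.name


# Example usage (values are illustrative):
params = OllamaEmbeddingDeployModelParameters(
    name="nomic-embed-text",
    api_url="http://localhost:11434",
)
print(params.real_model_name)  # -> "nomic-embed-text"
```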