# SiliconFlow Proxy LLM Configuration
Details can be found in the SiliconFlow API reference:
https://docs.siliconflow.cn/en/api-reference/chat-completions/chat-completions
## Parameters
| Name | Type | Required | Description | 
|---|---|---|---|
| name | string | ✅ | The name of the model. | 
| backend | string | ❌ | The real model name to pass to the provider. Defaults to None; if not set, `name` is used as the real model name. |
| provider | string | ❌ | The provider of the model. If the model is deployed locally, this is the inference type; if it is hosted by a third-party service, this is the platform name in the form `proxy/<platform>`. Defaults: proxy/siliconflow |
| verbose | boolean | ❌ | Show verbose output. Defaults: False | 
| concurrency | integer | ❌ | Model concurrency limit. Defaults: 100 |
| prompt_template | string | ❌ | Prompt template. If None, the prompt template is determined automatically from the model. Only used for local deployment. |
| context_length | integer | ❌ | The context length of the model. If None, it is determined automatically from the model. |
| reasoning_model | boolean | ❌ | Whether the model is a reasoning model. If None, it is determined automatically from the model. |
| api_base | string | ❌ | The base URL of the SiliconFlow API. Defaults: https://api.siliconflow.cn/v1 |
| api_key | string | ❌ | The API key of the SiliconFlow API. Defaults: ${env:SILICONFLOW_API_KEY} | 
| api_type | string | ❌ | The type of the OpenAI API. If you use Azure, it can be: azure |
| api_version | string | ❌ | The version of the OpenAI API. | 
| http_proxy | string | ❌ | The HTTP or HTTPS proxy to use when calling the API. |
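
For a concrete sense of how these parameters map onto an actual request, the sketch below calls SiliconFlow's OpenAI-compatible chat-completions endpoint directly. It is a minimal sketch, assuming the `openai` Python SDK (v1+) is installed; the model name `Qwen/Qwen2.5-72B-Instruct` is only an example of a model hosted on SiliconFlow, not a value mandated by this configuration.

```python
import os

from openai import OpenAI

# api_base: SiliconFlow's OpenAI-compatible endpoint.
# api_key: read from the SILICONFLOW_API_KEY environment variable,
# matching the default `${env:SILICONFLOW_API_KEY}` in the table above.
client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",
    api_key=os.environ["SILICONFLOW_API_KEY"],
)

# `model` corresponds to `name` (or `backend`, when set) in the table.
# "Qwen/Qwen2.5-72B-Instruct" is just an example; substitute any model
# available on SiliconFlow.
response = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Note that parameters such as `verbose` and `concurrency` configure the proxy layer itself rather than the remote API, so they have no counterpart in the request above.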