# Tongyi Proxy LLM Configuration
Configuration for the Tongyi proxy LLM. For details, see the [DashScope quick-start guide](https://help.aliyun.com/zh/model-studio/getting-started/first-api-call-to-qwen).
## Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| name | string | ✅ | The name of the model. |
| backend | string | ❌ | The real model name to pass to the provider. Defaults to None; if None, `name` is used as the real model name. |
| provider | string | ❌ | The provider of the model. If the model is deployed locally, this is the inference backend type; if it is served by a third-party platform, this is the platform name in the form `proxy/<platform>`. Defaults: `proxy/tongyi` |
| verbose | boolean | ❌ | Show verbose output. Defaults: `False` |
| concurrency | integer | ❌ | Model concurrency limit. Defaults: `100` |
| prompt_template | string | ❌ | Prompt template. If None, the prompt template is determined automatically from the model. Only used for local deployment. |
| context_length | integer | ❌ | The context length of the model. If None, it is determined by the model. |
| reasoning_model | boolean | ❌ | Whether the model is a reasoning model. If None, it is determined automatically from the model. |
| api_base | string | ❌ | The base URL of the Tongyi API. Defaults: `https://dashscope.aliyuncs.com/compatible-mode/v1` |
| api_key | string | ❌ | The API key of the Tongyi API. Defaults: `${env:DASHSCOPE_API_KEY}` |
| api_type | string | ❌ | The type of the OpenAI-compatible API. For Azure, set it to `azure`. |
| api_version | string | ❌ | The version of the OpenAI-compatible API. |
| http_proxy | string | ❌ | The HTTP or HTTPS proxy to use for API requests. |
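
Since the default `api_base` points at DashScope's OpenAI-compatible endpoint, the parameters above map directly onto an OpenAI-style client call. The following is a minimal sketch assuming the official `openai` Python SDK is installed; the `qwen-plus` model name is illustrative only and not a default of this configuration.

```python
import os

from openai import OpenAI  # assumes the official `openai` Python SDK

# Mirror the documented defaults: DashScope's OpenAI-compatible endpoint
# and an API key read from the DASHSCOPE_API_KEY environment variable.
client = OpenAI(
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    api_key=os.environ["DASHSCOPE_API_KEY"],
)

# `model` plays the role of `backend` (which falls back to `name` when
# unset); "qwen-plus" is an illustrative model name, not a default.
response = client.chat.completions.create(
    model="qwen-plus",
    messages=[{"role": "user", "content": "Hello, Qwen!"}],
)
print(response.choices[0].message.content)
```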