# Xunfei Spark Proxy LLM Configuration
Configuration for using Xunfei Spark as a proxy LLM. Details can be found in the Spark HTTP API documentation:
https://www.xfyun.cn/doc/spark/HTTP%E8%B0%83%E7%94%A8%E6%96%87%E6%A1%A3.html#_1-%E6%8E%A5%E5%8F%A3%E8%AF%B4%E6%98%8E
## Parameters
| Name | Type | Required | Description |
|------|------|----------|-------------|
| name | string | ✅ | The name of the model. |
| backend | string | ❌ | The real model name to pass to the provider; defaults to None. If backend is None, name is used as the real model name. |
| provider | string | ❌ | The provider of the model. If the model is deployed locally, this is the inference type; if it is deployed on a third-party service, this is the platform name (`proxy/<platform>`). Defaults: `proxy/spark` |
| verbose | boolean | ❌ | Show verbose output. Defaults: `False` |
| concurrency | integer | ❌ | Model concurrency limit. Defaults: `100` |
| prompt_template | string | ❌ | Prompt template. If None, the prompt template is determined automatically from the model. Only used for local deployment. |
| context_length | integer | ❌ | The context length of the OpenAI-compatible API. If None, it is determined by the model. |
| api_base | string | ❌ | The base URL of the Spark API. |
| api_key | string | ❌ | The API key of the Spark API. Defaults: `${env:XUNFEI_SPARK_API_KEY}` |
| api_type | string | ❌ | The type of the OpenAI-compatible API; if you use Azure, set it to `azure`. |
| api_version | string | ❌ | The version of the OpenAI-compatible API. |
| http_proxy | string | ❌ | The HTTP or HTTPS proxy to use when calling the API. |
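The parameters above form a flat set of key–value settings. As an illustration only, the following Python sketch collects them into a plain dictionary; the dictionary shape, the `spark_config` name, and the `"spark-chat"` model name are assumptions for demonstration, not the API of any particular library. Only `name` is required.

```python
import os

# A minimal sketch (not a specific library API): the Spark proxy parameters
# documented above, gathered into a plain dict. Only "name" is required; the
# other keys show the documented defaults.
spark_config = {
    "name": "spark-chat",       # required model name ("spark-chat" is an example value)
    "backend": None,            # None -> "name" is used as the real model name
    "provider": "proxy/spark",  # documented default provider
    "verbose": False,           # documented default
    "concurrency": 100,         # documented default concurrency limit
    "api_key": os.environ.get("XUNFEI_SPARK_API_KEY"),  # resolved from the environment
}

print(spark_config)
```

Reading `api_key` from the `XUNFEI_SPARK_API_KEY` environment variable mirrors the documented default `${env:XUNFEI_SPARK_API_KEY}` and keeps credentials out of the configuration itself.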