Version: dev

Zhipu Proxy LLM Configuration

Configuration for accessing Zhipu AI models through the proxy provider.

Details can be found in the GLM-4 API overview:
https://open.bigmodel.cn/dev/api/normal-model/glm-4#overview
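As an illustration of the fallback rules documented in the parameter table on this page (`backend` defaulting to `name`, and the `ZHIPUAI_*` environment variables backing `api_base` and `api_key`), the effective request settings could be assembled like this. `zhipu_settings` is a hypothetical helper written for this sketch, not part of the framework's API, and it performs no network calls:

```python
import os

# Hypothetical helper (not part of the framework): assemble the effective
# request settings implied by this proxy configuration. Parameter names
# mirror the table on this page.
def zhipu_settings(name, backend=None, api_base=None, api_key=None):
    return {
        # `backend` is the real model name passed to the provider; when it
        # is None, `name` is used instead.
        "model": backend or name,
        # `api_base` falls back to ZHIPUAI_BASE_URL, then to the documented
        # default endpoint.
        "base_url": api_base
        or os.environ.get(
            "ZHIPUAI_BASE_URL", "https://open.bigmodel.cn/api/paas/v4"
        ),
        # `api_key` falls back to the ZHIPUAI_API_KEY environment variable.
        "api_key": api_key or os.environ.get("ZHIPUAI_API_KEY"),
    }
```

For example, `zhipu_settings("glm-4", backend="glm-4-plus")` sends `glm-4-plus` to the provider while the model is registered locally as `glm-4`.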

Parameters

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `name` | string | The name of the model. | |
| `backend` | string | The real model name to pass to the provider. If `None`, `name` is used as the real model name. | `None` |
| `provider` | string | The provider of the model. For a locally deployed model this is the inference type; for a third-party service it is the platform name in the form `proxy/<platform>`. | `proxy/zhipu` |
| `verbose` | boolean | Show verbose output. | `False` |
| `concurrency` | integer | Model concurrency limit. | `100` |
| `prompt_template` | string | Prompt template. If `None`, the prompt template is determined automatically from the model. Only used for local deployment. | |
| `context_length` | integer | The context length of the model. If `None`, it is determined by the model. | |
| `api_base` | string | The base URL of the Zhipu API. | `${env:ZHIPUAI_BASE_URL:-https://open.bigmodel.cn/api/paas/v4}` |
| `api_key` | string | The API key of the Zhipu API. | `${env:ZHIPUAI_API_KEY}` |
| `api_type` | string | The type of the OpenAI-compatible API. If you use Azure, set it to `azure`. | |
| `api_version` | string | The version of the OpenAI-compatible API. | |
| `http_proxy` | string | The HTTP or HTTPS proxy to use for API requests. | |