
ModelsDeployParameters Configuration

ModelsDeployParameters(default_llm: Optional[str] = None, default_embedding: Optional[str] = None, default_reranker: Optional[str] = None, llms: List[dbgpt.core.interface.parameter.LLMDeployModelParameters] = <factory>, embeddings: List[dbgpt.core.interface.parameter.EmbeddingDeployModelParameters] = <factory>, rerankers: List[dbgpt.core.interface.parameter.RerankerDeployModelParameters] = <factory>)

Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| default_embedding | string | No | Default embedding model name, used to specify which model to use when multiple embedding models are deployed. |
| default_reranker | string | No | Default reranker model name, used to specify which model to use when multiple reranker models are deployed. |
| llms | LLMDeployModelParameters (hf configuration, vllm configuration, llama.cpp.server configuration, llama.cpp configuration, proxy/openai configuration, proxy/siliconflow configuration, proxy/zhipu configuration, proxy/moonshot configuration, proxy/gitee configuration, proxy/deepseek configuration, proxy/ollama configuration, proxy/yi configuration, proxy/spark configuration, proxy/baichuan configuration, proxy/gemini configuration, proxy/tongyi configuration, proxy/volcengine configuration, proxy/wenxin configuration, proxy/claude configuration) | No | LLM model deployment configuration. In cluster mode, you only need to deploy one model. Defaults: `[]` |
| embeddings | EmbeddingDeployModelParameters (hf configuration, proxy/openai configuration, proxy/jina configuration, proxy/ollama configuration, proxy/qianfan configuration, proxy/tongyi configuration) | No | Embedding model deployment configuration. In cluster mode, you only need to deploy one model. Defaults: `[]` |
| rerankers | RerankerDeployModelParameters (hf configuration, proxy/openapi configuration, proxy/siliconflow configuration) | No | Reranker model deployment configuration. In cluster mode, you only need to deploy one model. Defaults: `[]` |
| default_llm | string | No | Default LLM model name, used to specify which model to use when multiple LLMs are deployed. |