Proxy LLMs
DB-GPT can be deployed on servers with modest hardware by delegating inference to proxy LLMs. Many providers are supported, including OpenAI, Azure, DeepSeek, Ollama, and more.
Installation and Configuration
Installing DB-GPT with proxy LLM support uses the uv package manager, which provides faster and more reliable dependency management.
- OpenAI
- Azure
- DeepSeek
- Ollama
- Qwen
- ChatGLM
- WenXin
Install Dependencies
# Use uv to install dependencies needed for OpenAI proxy
uv sync --all-packages \
--extra "base" \
--extra "proxy_openai" \
--extra "rag" \
--extra "storage_chromadb" \
--extra "dbgpts"
Configure OpenAI
Edit the configs/dbgpt-proxy-openai.toml configuration file to specify your OpenAI API key:
# Model Configurations
[models]
[[models.llms]]
name = "gpt-3.5-turbo"
provider = "proxy/openai"
api_key = "your-openai-api-key"
# Optional: To use GPT-4, change the name to "gpt-4" or "gpt-4-turbo"
[[models.embeddings]]
name = "text-embedding-ada-002"
provider = "proxy/openai"
api_key = "your-openai-api-key"
Run Webserver
uv run dbgpt start webserver --config configs/dbgpt-proxy-openai.toml
Install Dependencies
# Use uv to install dependencies needed for Azure OpenAI proxy
uv sync --all-packages \
--extra "base" \
--extra "proxy_openai" \
--extra "rag" \
--extra "storage_chromadb" \
--extra "dbgpts"
Configure Azure OpenAI
Edit the configs/dbgpt-proxy-azure.toml configuration file to specify your Azure OpenAI settings:
# Model Configurations
[models]
[[models.llms]]
name = "gpt-35-turbo" # or your deployment model name
provider = "proxy/openai"
api_base = "https://your-resource-name.openai.azure.com/"
api_key = "your-azure-openai-api-key"
api_version = "2023-05-15" # or your specific API version
api_type = "azure"
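Note that with Azure OpenAI, name refers to your *deployment* name, not necessarily the underlying model name. As a rough illustration of how the fields above are used, Azure routes chat requests to a per-deployment endpoint built from api_base, the deployment name, and api_version (this sketch only constructs the URL; it does not call DB-GPT or Azure):

```python
def azure_chat_url(api_base: str, deployment: str, api_version: str) -> str:
    """Build the Azure OpenAI chat-completions endpoint for a deployment."""
    return (
        f"{api_base.rstrip('/')}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

print(azure_chat_url(
    "https://your-resource-name.openai.azure.com/",
    "gpt-35-turbo",
    "2023-05-15",
))
# https://your-resource-name.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-05-15
```

If requests fail with a 404, the deployment name or api_version in the config is the first thing to double-check.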
Run Webserver
uv run dbgpt start webserver --config configs/dbgpt-proxy-azure.toml
Install Dependencies
# Use uv to install dependencies needed for DeepSeek proxy
uv sync --all-packages \
--extra "base" \
--extra "proxy_openai" \
--extra "rag" \
--extra "storage_chromadb" \
--extra "dbgpts"
Configure DeepSeek
Edit the configs/dbgpt-proxy-deepseek.toml configuration file to specify your DeepSeek API key:
# Model Configurations
[models]
[[models.llms]]
name = "deepseek-reasoner" # or "deepseek-chat" for the non-reasoning model
provider = "proxy/deepseek"
api_key = "your-deepseek-api-key"
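DeepSeek exposes an OpenAI-compatible chat API, which is why only the model name and API key need to change. As a sketch of what the proxy sends on your behalf (the request body below follows the OpenAI chat format; the model names match the config above):

```python
import json

def deepseek_payload(prompt: str, reasoning: bool = False) -> dict:
    """Build an OpenAI-compatible chat request body for DeepSeek."""
    return {
        "model": "deepseek-reasoner" if reasoning else "deepseek-chat",
        "messages": [{"role": "user", "content": prompt}],
    }

# A client would POST this as JSON to DeepSeek's chat-completions endpoint
# with an "Authorization: Bearer <api_key>" header.
print(json.dumps(deepseek_payload("Hello", reasoning=True)))
```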
Run Webserver
uv run dbgpt start webserver --config configs/dbgpt-proxy-deepseek.toml
Install Dependencies
# Use uv to install dependencies needed for Ollama proxy
uv sync --all-packages \
--extra "base" \
--extra "proxy_ollama" \
--extra "rag" \
--extra "storage_chromadb" \
--extra "dbgpts"
Configure Ollama
Edit the configs/dbgpt-proxy-ollama.toml configuration file to specify your Ollama API base:
# Model Configurations
[models]
[[models.llms]]
name = "llama3" # or any other model available in your Ollama instance
provider = "proxy/ollama"
api_base = "http://localhost:11434" # your-ollama-api-base
[[models.embeddings]]
name = "nomic-embed-text" # or any other embedding model in Ollama
provider = "proxy/ollama"
api_base = "http://localhost:11434" # your-ollama-api-base
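The models named in the config must already be pulled into your Ollama instance (e.g. ollama pull llama3). Ollama lists available models via GET /api/tags on the api_base above; the sketch below (a hypothetical check, with a sample response inlined instead of a live HTTP call) shows how you might verify the config's models are present:

```python
import json

# Sample shape of Ollama's GET /api/tags response; in practice you would
# fetch this from http://localhost:11434/api/tags.
SAMPLE_TAGS = json.dumps({
    "models": [
        {"name": "llama3:latest"},
        {"name": "nomic-embed-text:latest"},
    ]
})

def missing_models(tags_json: str, required: list[str]) -> list[str]:
    """Return required model names absent from Ollama's tag list."""
    # Ollama tags carry a ":<tag>" suffix (e.g. "llama3:latest"); strip it
    # before comparing against the bare names used in the config.
    available = {m["name"].split(":")[0] for m in json.loads(tags_json)["models"]}
    return [name for name in required if name not in available]

print(missing_models(SAMPLE_TAGS, ["llama3", "nomic-embed-text", "mistral"]))
# ['mistral']
```

Any name this reports as missing needs an ollama pull before DB-GPT can use it.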
Run Webserver
uv run dbgpt start webserver --config configs/dbgpt-proxy-ollama.toml