# DeepSeek
Configure DB-GPT to use DeepSeek's language models for chat and reasoning.
## Prerequisites
- A DeepSeek API key
- DB-GPT installed with the `proxy_openai` extra
## Install dependencies
```shell
uv sync --all-packages \
  --extra "base" \
  --extra "proxy_openai" \
  --extra "rag" \
  --extra "storage_chromadb" \
  --extra "dbgpts"
```
### Embedding model

DeepSeek does not provide embedding models. The default config uses a HuggingFace embedding model (`BAAI/bge-large-zh-v1.5`). If you use it, also install the `hf` and `cpu` extras:
```shell
uv sync --all-packages \
  --extra "base" \
  --extra "proxy_openai" \
  --extra "rag" \
  --extra "storage_chromadb" \
  --extra "dbgpts" \
  --extra "hf" \
  --extra "cpu"
```
## Configuration
Edit `configs/dbgpt-proxy-deepseek.toml`:
```toml
[models]
[[models.llms]]
name = "deepseek-reasoner"
provider = "proxy/deepseek"
api_key = "your-deepseek-api-key"

[[models.embeddings]]
name = "BAAI/bge-large-zh-v1.5"
provider = "hf"
# Uncomment to use a local model path:
# path = "models/bge-large-zh-v1.5"
```
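If you prefer the general-purpose chat model, the same block can point at `deepseek-chat` instead. A sketch of that variant, assuming your DB-GPT version supports `${env:...}` substitution for reading the key from an environment variable:

```toml
# Variant: general-purpose chat model instead of the reasoner
[[models.llms]]
name = "deepseek-chat"
provider = "proxy/deepseek"
# Assumes ${env:...} substitution is available; otherwise paste the key directly
api_key = "${env:DEEPSEEK_API_KEY}"
```

Keeping the key in an environment variable avoids committing it to version control along with the config file.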
## Available models
| Model | Config name | Notes |
|---|---|---|
| DeepSeek-R1 | `deepseek-reasoner` | Strong reasoning with chain-of-thought |
| DeepSeek-V3 | `deepseek-chat` | General-purpose chat |
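Since the DeepSeek API is OpenAI-compatible, a quick way to sanity-check a model name and API key outside DB-GPT is a plain chat-completions request. A minimal stdlib sketch; the endpoint URL and payload shape follow the OpenAI chat format, and `DEEPSEEK_API_KEY` is assumed to be set in your environment:

```python
import json
import os
import urllib.request

# OpenAI-compatible chat-completions endpoint
API_URL = "https://api.deepseek.com/chat/completions"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build a chat-completions request for a DeepSeek model."""
    payload = {
        "model": model,  # "deepseek-chat" or "deepseek-reasoner"
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example use (requires a real key and network access):
# req = build_request("deepseek-reasoner", "Say hello.", os.environ["DEEPSEEK_API_KEY"])
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

If this request fails with a 401, the same key will fail inside DB-GPT, which narrows the problem to the key rather than the config.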
## Start the server
```shell
uv run dbgpt start webserver --config configs/dbgpt-proxy-deepseek.toml
```
## Troubleshooting
| Issue | Solution |
|---|---|
| `AuthenticationError` | Verify your DeepSeek API key at platform.deepseek.com |
| Slow embedding download | Pre-download the model or use a mirror (`UV_INDEX_URL`) |
| Out of memory during embedding | Install with `--extra "cpu"` to run embeddings on CPU |
## What's next
- Getting Started – Full setup walkthrough
- Model Providers – Try other providers