Version: dev

DeepSeek

Configure DB-GPT to use DeepSeek's language models for chat and reasoning.

Prerequisites

Install dependencies

uv sync --all-packages \
--extra "base" \
--extra "proxy_openai" \
--extra "rag" \
--extra "storage_chromadb" \
--extra "dbgpts"

Embedding model

DeepSeek does not provide embedding models. The default config uses a HuggingFace embedding model (BAAI/bge-large-zh-v1.5). If you keep that default, also install the hf and cpu extras:

uv sync --all-packages \
--extra "base" \
--extra "proxy_openai" \
--extra "rag" \
--extra "storage_chromadb" \
--extra "dbgpts" \
--extra "hf" \
--extra "cpu"

Configuration

Edit configs/dbgpt-proxy-deepseek.toml:

[models]
[[models.llms]]
name = "deepseek-reasoner"
provider = "proxy/deepseek"
api_key = "your-deepseek-api-key"

[[models.embeddings]]
name = "BAAI/bge-large-zh-v1.5"
provider = "hf"
# Uncomment to use a local model path:
# path = "models/bge-large-zh-v1.5"
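Rather than committing a real key to a tracked config file, you can reference an environment variable. A sketch, assuming your DB-GPT version supports `${env:...}` interpolation in TOML configs (recent releases do):

```toml
[models]
[[models.llms]]
name = "deepseek-reasoner"
provider = "proxy/deepseek"
# Resolved from the DEEPSEEK_API_KEY environment variable at startup.
api_key = "${env:DEEPSEEK_API_KEY}"
```

Then run `export DEEPSEEK_API_KEY=your-deepseek-api-key` in the shell before starting the server.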

Available models

| Model | Config name | Notes |
| --- | --- | --- |
| DeepSeek-R1 | deepseek-reasoner | Strong reasoning, chain-of-thought |
| DeepSeek-V3 | deepseek-chat | General-purpose chat |
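To run the general-purpose model instead of the reasoner, point the same `[[models.llms]]` block at `deepseek-chat` (a sketch; keep your own key and other settings):

```toml
[[models.llms]]
name = "deepseek-chat"
provider = "proxy/deepseek"
api_key = "your-deepseek-api-key"
```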

Start the server

uv run dbgpt start webserver --config configs/dbgpt-proxy-deepseek.toml
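Once the server is up, you can sanity-check the proxy with a one-off chat request. A minimal sketch, assuming the default web port (5670) and DB-GPT's OpenAI-compatible v2 API; adjust the port and add an Authorization header if your deployment configures API keys:

```shell
# Send a single chat request to the local DB-GPT server (assumed port 5670).
curl http://localhost:5670/api/v2/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-reasoner",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

A JSON response with a `choices` array indicates the DeepSeek proxy is configured correctly.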

Troubleshooting

| Issue | Solution |
| --- | --- |
| AuthenticationError | Verify your DeepSeek API key at platform.deepseek.com |
| Embedding model downloads slowly | Pre-download the model or set a HuggingFace mirror via HF_ENDPOINT |
| Out of memory for embedding | Use --extra "cpu" to run embeddings on CPU |

What's next