Prerequisites
Everything you need before installing DB-GPT.
Quick check
Already have Python 3.10+ and uv? Skip to Getting Started.
Required
| Requirement | Version | Check command |
|---|---|---|
| Python | 3.10 or newer | python --version |
| uv | Latest | uv --version |
| Git | Any recent | git --version |
Python
DB-GPT requires Python 3.10+. We recommend Python 3.11 for the best compatibility.
python --version
# Python 3.11.x
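If several interpreters are installed, a small helper can confirm the minimum version. `version_ge` below is a hypothetical helper (it relies on GNU `sort -V`), not part of DB-GPT:

```shell
# version_ge: true if version $1 >= $2 (hypothetical helper; needs GNU sort -V)
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if version_ge "$(python3 --version 2>&1 | awk '{print $2}')" 3.10; then
  echo "Python OK"
else
  echo "Python too old: DB-GPT needs 3.10+"
fi
```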
uv (package manager)
Starting from v0.7.0, DB-GPT uses uv for environment and package management, providing faster and more stable dependency resolution.
macOS / Linux:

curl -LsSf https://astral.sh/uv/install.sh | sh

pipx:

python -m pip install --upgrade pip
python -m pip install --upgrade pipx
python -m pipx ensurepath
pipx install uv --global

Other: see the full uv installation guide for Homebrew, Scoop, and other methods.
Verify the installation:
uv --version
Choose the right setup first
- Fastest setup: an API proxy model (OpenAI, DeepSeek, Qwen, SiliconFlow); no GPU required
- Privacy-first local setup: Ollama, a local model runtime; GPU optional
- High-performance local inference: vLLM or the HuggingFace GPU stack; NVIDIA GPU required
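As a sketch, the choice above can be automated. `choose_setup` is a hypothetical helper, not something DB-GPT ships: it assumes an `OPENAI_API_KEY` environment variable signals an API proxy setup and that `nvidia-smi` on the PATH signals a usable GPU.

```shell
# choose_setup: suggest a deployment based on the environment (hypothetical
# helper; the env-var and command checks are assumptions, not DB-GPT settings).
choose_setup() {
  if [ -n "${OPENAI_API_KEY:-}" ]; then
    echo "proxy"    # fastest: API proxy model, no GPU needed
  elif command -v nvidia-smi >/dev/null 2>&1; then
    echo "gpu"      # high performance: vLLM / HuggingFace GPU stack
  else
    echo "ollama"   # privacy-first local runtime, CPU-friendly
  fi
}
choose_setup
```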
Optional (based on deployment)
For Web UI development
| Requirement | Version | Check command |
|---|---|---|
| Node.js | 18 or newer | node --version |
| npm | 8 or newer | npm --version |
For local model deployment
| Requirement | Details |
|---|---|
| NVIDIA GPU | CUDA 12.1+ for GPU-accelerated inference |
| CUDA Toolkit | Required by vLLM and GPU-backed HuggingFace Transformers |
| Sufficient VRAM | 8 GB+ for 7B models, 24 GB+ for 13B+ models |
Info: If you only use API proxy models (OpenAI, DeepSeek, etc.), no GPU is required; a CPU-only machine is sufficient.
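The VRAM figures follow from a common rule of thumb (an approximation, not a DB-GPT formula): weight memory is roughly parameters × bytes per parameter, about 2 bytes in fp16 and 0.5 bytes with 4-bit quantization, plus roughly 20% overhead for activations and KV cache.

```shell
# vram_gb: rough VRAM estimate in GB (rule-of-thumb sketch, not exact).
# usage: vram_gb <params_in_billions> <bytes_per_param>
vram_gb() {
  awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f\n", p * b * 1.2 }'
}
vram_gb 7 2     # 7B model, fp16
vram_gb 7 0.5   # 7B model, 4-bit quantized
vram_gb 13 2    # 13B model, fp16
```

By this estimate a 7B model in fp16 needs around 16–17 GB, which suggests the 8 GB+ figure in the table assumes a quantized model.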
For Docker deployment
| Requirement | Version | Check command |
|---|---|---|
| Docker | 20.10+ | docker --version |
| Docker Compose | 2.0+ | docker compose version |
| NVIDIA Container Toolkit | Latest (GPU only) | nvidia-ctk --version |
System resources
| Deployment Type | CPU | RAM | Disk |
|---|---|---|---|
| API proxy only | 2 cores | 4 GB | 10 GB |
| Local 7B model | 4 cores | 16 GB | 30 GB |
| Local 13B+ model | 8 cores | 32 GB | 60 GB |
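To compare a machine against the table, a quick check on Linux might look like the following (the `free` and `df` output fields assume typical procps/coreutils versions; the commands differ on macOS):

```shell
# Quick Linux resource check against the table above.
nproc                                            # CPU cores
free -g | awk '/^Mem:/ {print $2 " GB RAM"}'     # total RAM
df -BG . | awk 'NR==2 {print $4 " free disk"}'   # free disk in current dir
```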
Network considerations (China)
If you are in the China region, configure a PyPI mirror for faster package downloads:
# Set the mirror as environment variable
echo "export UV_INDEX_URL=https://pypi.tuna.tsinghua.edu.cn/simple" >> ~/.bashrc
source ~/.bashrc
Or append --index-url to each uv sync command:
uv sync --all-packages \
--extra "base" \
--extra "proxy_openai" \
--index-url=https://pypi.tuna.tsinghua.edu.cn/simple
Next step
Ready to go? Head to Getting Started for a 5-minute setup.