
Docker Deployment

Run DB-GPT in a single Docker container — no Python setup required.

Prerequisites

- Docker installed and running
- For the local-model path: an NVIDIA GPU with the NVIDIA Container Toolkit installed (required for --gpus all)

Deploy with API proxy (no GPU)

The fastest way to get started. Uses a cloud LLM provider — no GPU needed.

Step 1 — Pull the image

docker pull eosphorosai/dbgpt-openai:latest

Step 2 — Run the container

docker run -it --rm \
-e SILICONFLOW_API_KEY=${SILICONFLOW_API_KEY} \
-p 5670:5670 \
--name dbgpt \
eosphorosai/dbgpt-openai

Replace ${SILICONFLOW_API_KEY} with your actual key from SiliconFlow.
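If the variable is unset, the container starts but every model call fails. A minimal pre-flight guard (a sketch, using the variable name from the run command above) fails fast instead:

```shell
# Fail fast when the API key is missing from the environment.
require_key() {
  if [ -z "${SILICONFLOW_API_KEY:-}" ]; then
    echo "SILICONFLOW_API_KEY is not set" >&2
    return 1
  fi
  echo "API key detected"
}
```

Run `require_key && docker run ...` so the container only starts once the key is present.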

Step 3 — Open the Web UI

Visit http://localhost:5670 in your browser.
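The webserver can take a little while to come up after `docker run` returns control. A small polling helper (a sketch, assuming the default port mapping above and that `curl` is installed) waits until the UI answers:

```shell
# Poll the Web UI until it responds or the retry budget runs out.
wait_for_dbgpt() {
  url="${1:-http://localhost:5670}"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "DB-GPT is up at $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}
```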


Deploy with GPU (local model)

Run models locally on your NVIDIA GPU.

Step 1 — Download models

mkdir -p ./models && cd ./models
git lfs install
git clone https://www.modelscope.cn/Qwen/Qwen2.5-Coder-0.5B-Instruct.git
git clone https://www.modelscope.cn/BAAI/bge-large-zh-v1.5.git
cd ..
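If `git lfs install` was skipped or failed, the clones can silently contain only small pointer files instead of model weights. A quick sanity check (a sketch, using the directory names from the clone commands above) confirms both model folders are present before you mount them:

```shell
# Verify that the expected model directories exist under a base path.
check_models() {
  base="${1:-./models}"
  for d in Qwen2.5-Coder-0.5B-Instruct bge-large-zh-v1.5; do
    if [ ! -d "$base/$d" ]; then
      echo "missing: $base/$d" >&2
      return 1
    fi
  done
  echo "all models present"
}
```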

Step 2 — Create a config file

Create dbgpt-local-gpu.toml:

[models]
[[models.llms]]
name = "Qwen2.5-Coder-0.5B-Instruct"
provider = "hf"
path = "/app/models/Qwen2.5-Coder-0.5B-Instruct"

[[models.embeddings]]
name = "BAAI/bge-large-zh-v1.5"
provider = "hf"
path = "/app/models/bge-large-zh-v1.5"
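`[[models.llms]]` is a TOML array of tables, so additional models can be registered by repeating the block — a sketch, assuming DB-GPT reads the full array; the second entry's name and path are placeholders, not part of the official config:

```toml
[[models.llms]]
name = "Qwen2.5-Coder-0.5B-Instruct"
provider = "hf"
path = "/app/models/Qwen2.5-Coder-0.5B-Instruct"

[[models.llms]]
# Hypothetical second model; point name/path at a model you have downloaded
name = "my-second-model"
provider = "hf"
path = "/app/models/my-second-model"
```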

Step 3 — Run the container

docker run --ipc host --gpus all \
-it --rm \
-p 5670:5670 \
-v ./dbgpt-local-gpu.toml:/app/configs/dbgpt-local-gpu.toml \
-v ./models:/app/models \
--name dbgpt \
eosphorosai/dbgpt \
dbgpt start webserver --config /app/configs/dbgpt-local-gpu.toml

| Flag | Purpose |
| --- | --- |
| `--ipc host` | Enables host IPC mode for better performance |
| `--gpus all` | Allows the container to use all available GPUs |
| `-v ./models:/app/models` | Mounts local models into the container |
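One gotcha with the config mount: if `dbgpt-local-gpu.toml` does not exist on the host, Docker creates an empty directory at that path instead of mounting a file, and the webserver cannot read its config. A small pre-flight check (a sketch) catches this before launch:

```shell
# Verify the config file exists before bind-mounting it; Docker silently
# creates a host directory for a missing bind-mount source.
check_config() {
  cfg="${1:-./dbgpt-local-gpu.toml}"
  if [ -f "$cfg" ]; then
    echo "config ok: $cfg"
  else
    echo "config missing: $cfg" >&2
    return 1
  fi
}
```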

Step 4 — Open the Web UI

Visit http://localhost:5670 in your browser.


Persist data (optional)

By default, data is lost when the container stops. To persist it:

mkdir -p ./pilot/data ./pilot/message ./pilot/alembic_versions

Add these volume mounts to your docker run command:

-v ./pilot/data:/app/pilot/data \
-v ./pilot/message:/app/pilot/message \
-v ./pilot/alembic_versions:/app/pilot/meta_data/alembic/versions

And configure the database path in your TOML file:

[service.web.database]
type = "sqlite"
path = "/app/pilot/message/dbgpt.db"
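To confirm persistence is working, stop and restart the container, then check that the SQLite file configured above survived on the host. A small helper (a sketch, assuming the default path from the TOML snippet):

```shell
# Report whether the SQLite database file exists and is non-empty.
check_persisted() {
  db="${1:-./pilot/message/dbgpt.db}"
  if [ -s "$db" ]; then
    echo "persisted: $db"
  else
    echo "not found or empty: $db" >&2
    return 1
  fi
}
```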

Build your own image

To build a custom Docker image from source:

# Proxy image (no GPU required)
bash docker/base/build_proxy_image.sh

# Full image (with GPU support)
bash docker/base/build_image.sh
Info

For detailed build options, run bash docker/base/build_image.sh --help.

Directory structure

After setup, your working directory looks like:

.
├── dbgpt-local-gpu.toml        # Your config file
├── models/
│   ├── Qwen2.5-Coder-0.5B-Instruct/
│   └── bge-large-zh-v1.5/
└── pilot/                      # (optional) persistent data
    ├── data/
    └── message/

Next steps

| Topic | Link |
| --- | --- |
| Docker Compose (multi-service) | Docker Compose |
| Cluster deployment | Cluster |
| Model providers | Providers |