Version: v0.7.4

DB-GPT Docker Build Guide

This guide provides comprehensive instructions for building DB-GPT Docker images with various configurations using the docker/base/build_image.sh script.

Overview

The DB-GPT build script allows you to create Docker images tailored to your specific requirements. You can choose from predefined installation modes or customize the build with specific extras, environment variables, and other settings.

Available Installation Modes

The default mode builds a CUDA-based image with standard features:

bash docker/base/build_image.sh

Includes: CUDA support, proxy integrations (OpenAI, Ollama, Zhipuai, Anthropic, Qianfan, Tongyi), RAG capabilities, graph RAG, Hugging Face integration, and quantization support.

Other modes (such as openai, vllm, llama-cpp, and full) are selected with the --install-mode option; use --list-modes (below) to see the complete list.
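Once the default build finishes, the image can be started with docker run. The sketch below only composes and prints the command; the port (5670) and the --gpus flag are illustrative assumptions, not settings taken from the build script:

```shell
# Compose a run command for the freshly built default image.
# Port 5670 and --gpus all are assumptions for illustration.
IMAGE="eosphorosai/dbgpt"
RUN_CMD="docker run -it --rm --gpus all -p 5670:5670 ${IMAGE}"
echo "${RUN_CMD}"
# ${RUN_CMD}   # uncomment to actually start the container
```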

Basic Usage​

View Available Modes

To see all available installation modes and their configurations:

bash docker/base/build_image.sh --list-modes

Get Help

Display all available options:

bash docker/base/build_image.sh --help

Customization Options

Python Version

DB-GPT requires Python 3.10 or higher. The default is Python 3.11, but you can specify a different version:

bash docker/base/build_image.sh --python-version 3.10

Custom Image Name

Set a custom name for the built image:

bash docker/base/build_image.sh --image-name mycompany/dbgpt

Image Name Suffix

Add a suffix to the image name for versioning or environment identification:

bash docker/base/build_image.sh --image-name-suffix v1.0

This will generate eosphorosai/dbgpt-v1.0 for the default mode or eosphorosai/dbgpt-MODE-v1.0 for specific modes.
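The naming rule can be sketched in shell. This is a reconstruction from the documented examples above, not the script's actual code:

```shell
# Reconstructed naming rule (assumption based on the documented examples).
IMAGE_NAME="eosphorosai/dbgpt"
MODE="vllm"        # the selected install mode
SUFFIX="v1.0"      # value passed to --image-name-suffix
if [ "$MODE" = "default" ]; then
  FINAL_NAME="${IMAGE_NAME}-${SUFFIX}"
else
  FINAL_NAME="${IMAGE_NAME}-${MODE}-${SUFFIX}"
fi
echo "$FINAL_NAME"   # eosphorosai/dbgpt-vllm-v1.0
```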

PIP Mirror

Choose a different PIP index URL:

bash docker/base/build_image.sh --pip-index-url https://pypi.org/simple

Ubuntu Mirror

Control whether to use the Tsinghua Ubuntu mirror (enabled by default):

bash docker/base/build_image.sh --use-tsinghua-ubuntu false

Language Preference

Set your preferred language (default is English):

bash docker/base/build_image.sh --language zh

Advanced Customization

Custom Extras

You can customize the Python package extras installed in the image. Use --extras to replace the default set outright, or --add-extras to append packages to a mode's defaults.

Completely replace the default extras with your own selection:

bash docker/base/build_image.sh --extras "base,proxy_openai,rag,storage_chromadb"

Available Extra Options

Here are some useful extras you can add:

storage_milvus: Vector store integration with Milvus
storage_elasticsearch: Vector store integration with Elasticsearch
datasource_postgres: Database connector for PostgreSQL
vllm: vLLM integration for optimized inference
llama_cpp: Llama-cpp Python bindings
llama_cpp_server: Llama-cpp HTTP server

You can run uv run install_help.py list in your local DB-GPT repository to see all available extras.
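The difference between --extras and --add-extras can be sketched as follows. The default list here is an illustrative subset, not the full default set for any particular mode:

```shell
# --extras replaces the mode's list outright; --add-extras appends to it.
DEFAULT_EXTRAS="base,proxy_openai,rag,hf"   # illustrative subset of defaults
REPLACED="base,proxy_openai,rag,storage_chromadb"                # with --extras
APPENDED="${DEFAULT_EXTRAS},storage_milvus,datasource_postgres"  # with --add-extras
echo "$REPLACED"
echo "$APPENDED"
```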

Environment Variables

The DB-GPT build supports environment variables for specialized builds. The main variable is CMAKE_ARGS, which is particularly important for Llama-cpp compilation.

Replace the default environment variables:

bash docker/base/build_image.sh --env-vars "CMAKE_ARGS=\"-DGGML_CUDA=ON -DLLAMA_CUBLAS=ON\""
Note: For Llama-cpp mode, CMAKE_ARGS="-DGGML_CUDA=ON" is set automatically to enable CUDA acceleration.
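The escaped inner quotes matter: the whole CMAKE_ARGS assignment must reach the script as a single argument. A quick way to check the quoting locally, without running the build, is to simulate the argument list:

```shell
# Simulate the argument list the script receives; "$2" should be the
# complete CMAKE_ARGS assignment with the inner quotes preserved.
set -- --env-vars "CMAKE_ARGS=\"-DGGML_CUDA=ON -DLLAMA_CUBLAS=ON\""
echo "$2"
```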

Docker Network

Specify a Docker network for building:

bash docker/base/build_image.sh --network host

Custom Dockerfile

Use a custom Dockerfile:

bash docker/base/build_image.sh --dockerfile Dockerfile.custom

Example Scenarios

Enterprise DB-GPT with PostgreSQL and Elasticsearch

Build a full-featured enterprise version with PostgreSQL and Elasticsearch support:

bash docker/base/build_image.sh --install-mode full \
--add-extras "storage_elasticsearch,datasource_postgres" \
--image-name-suffix enterprise \
--python-version 3.10 \
--load-examples false

Optimized Llama-cpp for Specific Hardware

Build with custom Llama-cpp optimization flags:

bash docker/base/build_image.sh --install-mode llama-cpp \
--env-vars "CMAKE_ARGS=\"-DGGML_CUDA=ON -DGGML_AVX2=OFF -DGGML_AVX512=ON\"" \
--python-version 3.11

Lightweight OpenAI Proxy

Build a minimal OpenAI proxy image:

bash docker/base/build_image.sh --install-mode openai \
--use-tsinghua-ubuntu false \
--pip-index-url https://pypi.org/simple \
--load-examples false
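A proxy-mode image needs an upstream API key at runtime. The sketch below only composes and prints a hypothetical run command; the OPENAI_API_KEY variable name, the sk-xxx placeholder, and port 5670 are assumptions for illustration, and the image name follows the MODE-suffix naming rule described earlier:

```shell
# Compose a run command for the openai proxy image.
# Env var name and port are illustrative assumptions.
IMAGE="eosphorosai/dbgpt-openai"
RUN_CMD="docker run -it --rm -e OPENAI_API_KEY=sk-xxx -p 5670:5670 ${IMAGE}"
echo "${RUN_CMD}"
# ${RUN_CMD}   # uncomment to actually start the container
```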

Development Build with Milvus

Build a development version with Milvus support:

bash docker/base/build_image.sh --install-mode vllm \
--add-extras "storage_milvus" \
--image-name-suffix dev

Troubleshooting

Common Build Issues

CUDA Not Found

If you encounter CUDA-related errors:

# Try building with a different CUDA base image
bash docker/base/build_image.sh --base-image nvidia/cuda:12.1.0-devel-ubuntu22.04

Package Installation Failures

If extras fail to install:

# Try building with fewer extras to isolate the problem
bash docker/base/build_image.sh --extras "base,proxy_openai,rag"

Network Issues

If you encounter network problems:

# Use a specific network
bash docker/base/build_image.sh --network host

API Reference

Script Options

--install-mode: Installation mode (default: default)
--base-image: Base Docker image (default: nvidia/cuda:12.4.0-devel-ubuntu22.04)
--image-name: Docker image name (default: eosphorosai/dbgpt)
--image-name-suffix: Suffix for image name (default: none)
--pip-index-url: PIP mirror URL (default: https://pypi.tuna.tsinghua.edu.cn/simple)
--language: Interface language (default: en)
--load-examples: Load example data (default: true)
--python-version: Python version (default: 3.11)
--use-tsinghua-ubuntu: Use Tsinghua Ubuntu mirror (default: true)
--extras: Extra packages to install (default: mode dependent)
--add-extras: Additional extra packages (default: none)
--env-vars: Build environment variables (default: mode dependent)
--add-env-vars: Additional environment variables (default: none)
--dockerfile: Dockerfile to use (default: Dockerfile)
--network: Docker network to use (default: none)

Additional Resources