# DB-GPT Docker Build Guide

This guide provides comprehensive instructions for building DB-GPT Docker images with various configurations using the `docker/base/build_image.sh` script.

## Overview
The DB-GPT build script allows you to create Docker images tailored to your specific requirements. You can choose from predefined installation modes or customize the build with specific extras, environment variables, and other settings.
## Available Installation Modes

### Default

CUDA-based image with standard features.

```bash
bash docker/base/build_image.sh
```

Includes: CUDA support, proxy integrations (OpenAI, Ollama, Zhipuai, Anthropic, Qianfan, Tongyi), RAG capabilities, graph RAG, Hugging Face integration, and quantization support.

### OpenAI

CPU-based image optimized for OpenAI API usage.

```bash
bash docker/base/build_image.sh --install-mode openai
```

Includes: Basic functionality, all proxy integrations, and RAG capabilities without GPU acceleration.

### VLLM

CUDA-based image with VLLM for optimized inference.

```bash
bash docker/base/build_image.sh --install-mode vllm
```

Includes: All default features plus VLLM support for high-performance inference.

### Llama-cpp

CUDA-based image with Llama-cpp support.

```bash
bash docker/base/build_image.sh --install-mode llama-cpp
```

Includes: All default features plus Llama-cpp and the Llama-cpp server, with CUDA acceleration enabled via `CMAKE_ARGS="-DGGML_CUDA=ON"`.

### Full

CUDA-based image with all available features.

```bash
bash docker/base/build_image.sh --install-mode full
```

Includes: All features from the other modes plus embedding capabilities.
## Basic Usage

### View Available Modes

To see all available installation modes and their configurations:

```bash
bash docker/base/build_image.sh --list-modes
```

### Get Help

Display all available options:

```bash
bash docker/base/build_image.sh --help
```
## Customization Options

### Python Version

DB-GPT requires Python 3.10 or higher. The default is Python 3.11, but you can specify a different version:

```bash
bash docker/base/build_image.sh --python-version 3.10
```

### Custom Image Name

Set a custom name for the built image:

```bash
bash docker/base/build_image.sh --image-name mycompany/dbgpt
```
### Image Name Suffix

Add a suffix to the image name for versioning or environment identification:

```bash
bash docker/base/build_image.sh --image-name-suffix v1.0
```

This produces `eosphorosai/dbgpt-v1.0` for the default mode, or `eosphorosai/dbgpt-MODE-v1.0` for other modes.
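The naming rule can be sketched in a few lines of shell (illustrative only; the variable names and logic here are for explanation, the actual logic lives in `build_image.sh`):

```shell
# Sketch of how the final tag is assembled from name, mode, and suffix.
IMAGE_NAME="eosphorosai/dbgpt"
MODE="vllm"      # the default mode adds no mode segment
SUFFIX="v1.0"

TAG="$IMAGE_NAME"
if [ "$MODE" != "default" ]; then TAG="$TAG-$MODE"; fi
if [ -n "$SUFFIX" ]; then TAG="$TAG-$SUFFIX"; fi

echo "$TAG"   # eosphorosai/dbgpt-vllm-v1.0
```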
### PIP Mirror

Choose a different PIP index URL:

```bash
bash docker/base/build_image.sh --pip-index-url https://pypi.org/simple
```

### Ubuntu Mirror

Control whether to use the Tsinghua Ubuntu mirror (enabled by default):

```bash
bash docker/base/build_image.sh --use-tsinghua-ubuntu false
```

### Language Preference

Set your preferred interface language (default is English):

```bash
bash docker/base/build_image.sh --language zh
```
## Advanced Customization

### Custom Extras

You can customize the Python package extras installed in the image.

Completely replace the default extras with your own selection:

```bash
bash docker/base/build_image.sh --extras "base,proxy_openai,rag,storage_chromadb"
```

Keep the default extras and add more:

```bash
bash docker/base/build_image.sh --add-extras "storage_milvus,storage_elasticsearch,datasource_postgres"
```

Add specific extras to a particular installation mode:

```bash
bash docker/base/build_image.sh --install-mode vllm --add-extras "storage_milvus,datasource_postgres"
```
### Available Extra Options

Here are some useful extras you can add:

| Extra Package | Description |
|---|---|
| `storage_milvus` | Vector store integration with Milvus |
| `storage_elasticsearch` | Vector store integration with Elasticsearch |
| `datasource_postgres` | Database connector for PostgreSQL |
| `vllm` | VLLM integration for optimized inference |
| `llama_cpp` | Llama-cpp Python bindings |
| `llama_cpp_server` | Llama-cpp HTTP server |

Run `uv run install_help.py list` in your local DB-GPT repository to see all available extras.
### Environment Variables

The build script supports passing environment variables for specialized builds. The main one is `CMAKE_ARGS`, which is particularly important for Llama-cpp compilation.

Replace the default environment variables:

```bash
bash docker/base/build_image.sh --env-vars "CMAKE_ARGS=\"-DGGML_CUDA=ON -DLLAMA_CUBLAS=ON\""
```

Add additional environment variables on top of the defaults:

```bash
bash docker/base/build_image.sh --install-mode llama-cpp --add-env-vars "FORCE_CMAKE=1"
```

For Llama-cpp mode, `CMAKE_ARGS="-DGGML_CUDA=ON"` is set automatically to enable CUDA acceleration.
### Docker Network

Specify a Docker network for building:

```bash
bash docker/base/build_image.sh --network host
```

### Custom Dockerfile

Use a custom Dockerfile:

```bash
bash docker/base/build_image.sh --dockerfile Dockerfile.custom
```
## Example Scenarios

### Enterprise DB-GPT with PostgreSQL and Elasticsearch

Build a full-featured enterprise version with PostgreSQL and Elasticsearch support:

```bash
bash docker/base/build_image.sh --install-mode full \
  --add-extras "storage_elasticsearch,datasource_postgres" \
  --image-name-suffix enterprise \
  --python-version 3.10 \
  --load-examples false
```

### Optimized Llama-cpp for Specific Hardware

Build with custom Llama-cpp optimization flags:

```bash
bash docker/base/build_image.sh --install-mode llama-cpp \
  --env-vars "CMAKE_ARGS=\"-DGGML_CUDA=ON -DGGML_AVX2=OFF -DGGML_AVX512=ON\"" \
  --python-version 3.11
```

### Lightweight OpenAI Proxy

Build a minimal OpenAI proxy image:

```bash
bash docker/base/build_image.sh --install-mode openai \
  --use-tsinghua-ubuntu false \
  --pip-index-url https://pypi.org/simple \
  --load-examples false
```

### Development Build with Milvus

Build a development version with Milvus support:

```bash
bash docker/base/build_image.sh --install-mode vllm \
  --add-extras "storage_milvus" \
  --image-name-suffix dev
```
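Once an image is built, it can be started with a standard `docker run` invocation. The tag and port below are illustrative (the tag follows the naming convention described earlier; CUDA-based images need the NVIDIA Container Toolkit for `--gpus` to work):

```shell
# Run a locally built image; adjust the tag to match your build.
docker run --rm -it --gpus all \
  -p 5670:5670 \
  eosphorosai/dbgpt-vllm-dev
```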
## Troubleshooting

### Common Build Issues

#### CUDA Not Found

If you encounter CUDA-related errors:

```bash
# Try building with a different CUDA base image
bash docker/base/build_image.sh --base-image nvidia/cuda:12.1.0-devel-ubuntu22.04
```

#### Package Installation Failures

If extras fail to install:

```bash
# Try building with fewer extras to isolate the problem
bash docker/base/build_image.sh --extras "base,proxy_openai,rag"
```

#### Network Issues

If you encounter network problems:

```bash
# Use a specific network
bash docker/base/build_image.sh --network host
```
## API Reference

### Script Options

| Option | Description | Default Value |
|---|---|---|
| `--install-mode` | Installation mode | `default` |
| `--base-image` | Base Docker image | `nvidia/cuda:12.4.0-devel-ubuntu22.04` |
| `--image-name` | Docker image name | `eosphorosai/dbgpt` |
| `--image-name-suffix` | Suffix for the image name | (none) |
| `--pip-index-url` | PIP mirror URL | `https://pypi.tuna.tsinghua.edu.cn/simple` |
| `--language` | Interface language | `en` |
| `--load-examples` | Load example data | `true` |
| `--python-version` | Python version | `3.11` |
| `--use-tsinghua-ubuntu` | Use the Tsinghua Ubuntu mirror | `true` |
| `--extras` | Extra packages to install | Mode-dependent |
| `--add-extras` | Additional extra packages | (none) |
| `--env-vars` | Build environment variables | Mode-dependent |
| `--add-env-vars` | Additional environment variables | (none) |
| `--dockerfile` | Dockerfile to use | `Dockerfile` |
| `--network` | Docker network to use | (none) |