Version: v0.7.0

Quickstart

DB-GPT supports the installation and use of various open-source and closed-source models. Different models have different requirements for environment and resources. If local model deployment is required, GPU resources are necessary. The API proxy model requires relatively few resources and can be deployed and started on a CPU machine.

note
  • Detailed installation and deployment tutorials can be found in Installation.
  • This page only introduces deployment based on ChatGPT proxy and local GLM model.

Environment Preparation

Download Source Code

tip

Download DB-GPT

git clone https://github.com/eosphoros-ai/DB-GPT.git

Environment Setup

  • The default database is SQLite, so there is no need to install a database in the default startup mode. If you need to use another database, please refer to the advanced tutorials below.
  • Starting from version 0.7.0, DB-GPT uses uv for environment and package management, providing faster and more stable dependency management.
note

There are several ways to install uv; the simplest is the official installer script:

curl -LsSf https://astral.sh/uv/install.sh | sh

Then run uv --version to check that uv was installed successfully.

uv --version

Deploy DB-GPT

tip

If you are in the China region, you can append --index-url=https://pypi.tuna.tsinghua.edu.cn/simple to the command, like this:

uv sync --all-packages \
--extra "base" \
--extra "proxy_openai" \
--extra "rag" \
--extra "storage_chromadb" \
--extra "dbgpts" \
--index-url=https://pypi.tuna.tsinghua.edu.cn/simple

This tutorial assumes that you can establish network communication with the dependency download sources.

Install Dependencies

# Use uv to install dependencies needed for OpenAI proxy
uv sync --all-packages \
--extra "base" \
--extra "proxy_openai" \
--extra "rag" \
--extra "storage_chromadb" \
--extra "dbgpts"

Run Webserver

To run DB-GPT with the OpenAI proxy, you must provide an OpenAI API key, either in the configs/dbgpt-proxy-openai.toml configuration file or via the environment variable OPENAI_API_KEY.

# Model Configurations
[models]
[[models.llms]]
...
api_key = "your-openai-api-key"
[[models.embeddings]]
...
api_key = "your-openai-api-key"
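If you prefer not to store the key in the configuration file, you can export it as an environment variable before starting the webserver (replace the placeholder with your real key):

```shell
# Provide the OpenAI API key via the environment instead of the config file
export OPENAI_API_KEY="your-openai-api-key"
```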

Then run the following command to start the webserver:

uv run dbgpt start webserver --config configs/dbgpt-proxy-openai.toml

In the above command, --config specifies the configuration file. configs/dbgpt-proxy-openai.toml is the configuration file for the OpenAI proxy model; you can also use other configuration files or create your own according to your needs.

Optionally, you can also use the following command to start the webserver:

uv run python packages/dbgpt-app/src/dbgpt_app/dbgpt_server.py --config configs/dbgpt-proxy-openai.toml

(Optional) More Configuration

You can view the configuration in Configuration to learn more about the configuration options.

For example, if you want to configure the LLM model, you can see all available options in the LLM Configuration.

As another example, if you want to configure the vLLM model, you can see all available options in the VLLM Configuration.

DB-GPT Install Help Tool

If you need help with the installation, you can use the uv script to get help.

uv run install_help.py --help

Generate Install Command

You can use the uv script to generate the install command in interactive mode.

uv run install_help.py install-cmd --interactive

You can also generate an install command that includes all available dependencies.

uv run install_help.py install-cmd --all

You can list all the dependencies and extras.

uv run install_help.py list

Visit Website

Open your browser and visit http://localhost:5670

(Optional) Run Web Front-end Separately

You can also run the web front-end separately:

cd web && npm install
cp .env.template .env
# Set API_BASE_URL in .env to your DB-GPT server address, usually http://localhost:5670
npm run dev
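The .env file only needs to point the front-end at your running DB-GPT server. A minimal sketch, assuming the server runs on the default port:

```shell
# web/.env
API_BASE_URL=http://localhost:5670
```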

Open your browser and visit http://localhost:3000