Version: dev

Getting Started

Goal: go from zero to a first working chat with minimal setup.

Fastest path

Use an API proxy (OpenAI or DeepSeek) — no GPU required. You will have a working DB-GPT chat in under 5 minutes.

What you need

  • Python 3.10 or newer
  • uv package manager

Tip: check your installed versions with python --version and uv --version. Full requirements: Prerequisites.

Quick setup

Step 1 — Clone the repository

git clone https://github.com/eosphoros-ai/DB-GPT.git
cd DB-GPT

Step 2 — Install dependencies

uv sync --all-packages \
  --extra "base" \
  --extra "proxy_openai" \
  --extra "rag" \
  --extra "storage_chromadb" \
  --extra "dbgpts"

Step 3 — Configure your model

Edit configs/dbgpt-proxy-openai.toml and set your API key:

[models]
[[models.llms]]
name = "chatgpt_proxyllm"
provider = "proxy/openai"
api_key = "your-openai-api-key" # <-- replace this

[[models.embeddings]]
name = "text-embedding-3-small"
provider = "proxy/openai"
api_key = "your-openai-api-key" # <-- replace this
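
If you would rather not store the key in the TOML file, you can export it as OPENAI_API_KEY in your shell before starting the server instead (see the environment variables list further down). The key value below is a placeholder:

```shell
# Alternative to editing the TOML file: export the key in your shell.
# "sk-your-real-key" is a placeholder; substitute your actual OpenAI key.
export OPENAI_API_KEY="sk-your-real-key"
echo "OPENAI_API_KEY is set (${#OPENAI_API_KEY} characters)"
```

The exported variable is inherited by any process you start from that shell, including the DB-GPT webserver in the next step.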

Step 4 — Start the server

uv run dbgpt start webserver --config configs/dbgpt-proxy-openai.toml

Step 5 — Open the Web UI

Open your browser and visit http://localhost:5670.

Verify it works

If the Web UI loads and you can start a chat conversation, your DB-GPT instance is working. Confirm that:

  • The webserver is running
  • Your model config loads without errors
  • The Web UI opens at http://localhost:5670
  • SQLite is available as the default metadata store
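
A quick way to check the first and third items from a terminal is to probe the port with curl. This sketch assumes the default host and port; adjust them if you changed the server configuration:

```shell
# Probe the default DB-GPT port; prints "up" if anything answers HTTP there.
if curl -fsS --max-time 2 http://localhost:5670/ > /dev/null 2>&1; then
  WEBSERVER_STATUS=up
else
  WEBSERVER_STATUS=down
fi
echo "webserver: $WEBSERVER_STATUS"
```

If this reports "down", check the terminal where you started DB-GPT for startup errors before trying the browser again.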

Common first-run issues

  • uv: command not found
    • Install uv (see Prerequisites) and make sure it is on your PATH
  • Model key/auth errors
    • Re-check the api_key values in your TOML config (or the OPENAI_API_KEY environment variable)
  • Web UI does not load
    • Confirm the server is listening on port 5670
    • Check the server logs in the terminal where you started DB-GPT
  • Local model does not respond
    • Confirm Ollama or your local inference backend is already running
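
If your local backend is Ollama, its HTTP API (on port 11434 by default) gives a quick liveness check. The probe below assumes the default address:

```shell
# Ollama's HTTP API answers on port 11434 by default; /api/tags lists models.
if curl -fsS --max-time 2 http://localhost:11434/api/tags > /dev/null 2>&1; then
  OLLAMA_STATUS=reachable
else
  OLLAMA_STATUS=unreachable
fi
echo "ollama: $OLLAMA_STATUS"
```

If it is unreachable, start Ollama (for example with ollama serve) before retrying the chat.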

If you need more​

  • Run the web front-end separately

    cd web && npm install
    cp .env.template .env
    # Edit .env — set API_BASE_URL=http://localhost:5670
    npm run dev

    Then open http://localhost:3000.

  • Use the install helper

    uv run install_help.py install-cmd --interactive
    uv run install_help.py list
  • Use a different database

    • Default is SQLite
    • For MySQL, PostgreSQL, and others, see Data Sources
  • Useful environment variables

    • UV_INDEX_URL — PyPI mirror URL
    • OPENAI_API_KEY — alternative to storing the key in TOML
    • CUDA_VISIBLE_DEVICES — GPU device selection
    • Full reference: Config Reference
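
For example, to install through a PyPI mirror and pin a single GPU for a later local-model run, export the variables before running uv or starting the server. The mirror URL and device index below are illustrative:

```shell
# Illustrative values; substitute your own mirror URL and GPU index.
export UV_INDEX_URL="https://pypi.tuna.tsinghua.edu.cn/simple"  # read by uv when resolving packages
export CUDA_VISIBLE_DEVICES=0                                   # restrict local inference to GPU 0
echo "index: $UV_INDEX_URL  gpu: $CUDA_VISIBLE_DEVICES"
```

Commands run afterwards in the same shell (such as uv sync or the webserver start command) inherit these settings.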

Go deeper

  • Full architecture overview: Architecture
  • Connect more model providers: Model Providers
  • Docker deployment: Docker
  • Knowledge base setup: Knowledge Base

Next steps