Getting Started
Goal: go from zero to a first working chat with minimal setup.
Use an API proxy (OpenAI or DeepSeek), so no GPU is required. You will have a working DB-GPT chat in under 5 minutes.
What you need
- Python 3.10 or newer
- uv package manager
Check your versions with python --version and uv --version. Full requirements: Prerequisites.
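If you prefer to check both prerequisites programmatically, here is a minimal Python sketch (standard library only):

```python
import shutil
import sys

# DB-GPT requires Python 3.10 or newer.
print("Python version OK:", sys.version_info >= (3, 10))

# `uv` must be on PATH; shutil.which returns None when it is not installed.
print("uv found at:", shutil.which("uv") or "NOT FOUND (install uv first)")
```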
Quick setup
Step 1: Clone the repository
```bash
git clone https://github.com/eosphoros-ai/DB-GPT.git
cd DB-GPT
```
Step 2: Install dependencies

Pick the command that matches your provider.

OpenAI (proxy):

```bash
uv sync --all-packages \
  --extra "base" \
  --extra "proxy_openai" \
  --extra "rag" \
  --extra "storage_chromadb" \
  --extra "dbgpts"
```

DeepSeek (proxy):

```bash
uv sync --all-packages \
  --extra "base" \
  --extra "proxy_openai" \
  --extra "rag" \
  --extra "storage_chromadb" \
  --extra "dbgpts"
```

Ollama (local):

```bash
uv sync --all-packages \
  --extra "base" \
  --extra "proxy_ollama" \
  --extra "rag" \
  --extra "storage_chromadb" \
  --extra "dbgpts"
```
Step 3: Configure your model
Edit configs/dbgpt-proxy-openai.toml and set your API key:
```toml
[models]
[[models.llms]]
name = "chatgpt_proxyllm"
provider = "proxy/openai"
api_key = "your-openai-api-key" # <-- replace this

[[models.embeddings]]
name = "text-embedding-3-small"
provider = "proxy/openai"
api_key = "your-openai-api-key" # <-- replace this
```
Edit configs/dbgpt-proxy-deepseek.toml and set your API key:
```toml
[models]
[[models.llms]]
name = "deepseek-reasoner"
provider = "proxy/deepseek"
api_key = "your-deepseek-api-key" # <-- replace this

[[models.embeddings]]
name = "BAAI/bge-large-zh-v1.5"
provider = "hf"
```
The default embedding model is BAAI/bge-large-zh-v1.5, served locally via HuggingFace. If you use a HuggingFace embedding, also add --extra "hf" and --extra "cpu" to the install command in Step 2.
Make sure Ollama is running, then edit configs/dbgpt-proxy-ollama.toml:
```toml
[models]
[[models.llms]]
name = "qwen2.5:latest"
provider = "proxy/ollama"
api_base = "http://localhost:11434"

[[models.embeddings]]
name = "nomic-embed-text:latest"
provider = "proxy/ollama"
api_base = "http://localhost:11434"
```
Step 4: Start the server
OpenAI:

```bash
uv run dbgpt start webserver --config configs/dbgpt-proxy-openai.toml
```

DeepSeek:

```bash
uv run dbgpt start webserver --config configs/dbgpt-proxy-deepseek.toml
```

Ollama:

```bash
uv run dbgpt start webserver --config configs/dbgpt-proxy-ollama.toml
```
Step 5: Open the Web UI
Open your browser and visit http://localhost:5670.
If the Web UI loads and you can start a chat conversation, your DB-GPT is ready for use.
Verify

- The webserver is running
- Your model config loads without errors
- The Web UI opens at http://localhost:5670
- SQLite is available as the default metadata store
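If you want to script this check, here is a minimal sketch that probes the Web UI over HTTP (assumes the default port 5670; standard library only):

```python
import urllib.error
import urllib.request

def webui_up(url: str = "http://localhost:5670", timeout: float = 3.0) -> bool:
    """Return True if an HTTP server answers at `url` without a server error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("Web UI reachable:", webui_up())
```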
Common first-run issues

- `uv: command not found`: install uv first; see Prerequisites
- Model key/auth errors: re-check the provider config under `configs/`; start here: Model Providers
- Web UI does not load: confirm the server is listening on port 5670, and check the server logs in the terminal where you started DB-GPT
- Local model does not respond: confirm Ollama (or your local inference backend) is already running
If you need more

- Run the web front-end separately:

  ```bash
  cd web && npm install
  cp .env.template .env
  # Edit .env and set API_BASE_URL=http://localhost:5670
  npm run dev
  ```

  Then open http://localhost:3000.

- Use the install helper:

  ```bash
  uv run install_help.py install-cmd --interactive
  uv run install_help.py list
  ```

- Use a different database: SQLite is the default; for MySQL, PostgreSQL, and others, see Data Sources.

- Useful environment variables:
  - UV_INDEX_URL: PyPI mirror URL
  - OPENAI_API_KEY: alternative to storing the key in TOML
  - CUDA_VISIBLE_DEVICES: GPU device selection
  - Full reference: Config Reference
Go deeper
| Topic | Link |
|---|---|
| Full architecture overview | Architecture |
| Connect more model providers | Model Providers |
| Docker deployment | Docker |
| Knowledge base setup | Knowledge Base |
Next steps
- Configure model providers: Model Providers
- Deploy with Docker: Docker Deployment
- Explore the Web UI: Web UI Guide
- Build your first AWEL workflow: AWEL Quickstart