Docker Deployment
Docker image preparation
There are two ways to prepare a Docker image.
- Pull from the official image
- Build locally, see Build Docker Image
Choose whichever suits your setup.
Deploy With Proxy Model
In this deployment, you don't need a GPU environment.
- Pull from the official image repository, Eosphoros AI Docker Hub
docker pull eosphorosai/dbgpt-openai:latest
- Run the Docker container
This example requires you to provide a valid API key for the SiliconFlow API. You can obtain one by signing up at SiliconFlow and creating an API key at API Key. Alternatively, set AIMLAPI_API_KEY to use the AI/ML API service.
docker run -it --rm -e SILICONFLOW_API_KEY=${SILICONFLOW_API_KEY} \
-p 5670:5670 --name dbgpt eosphorosai/dbgpt-openai
Or with AI/ML API:
docker run -it --rm -e AIMLAPI_API_KEY=${AIMLAPI_API_KEY} \
-p 5670:5670 --name dbgpt eosphorosai/dbgpt-openai
Please replace ${SILICONFLOW_API_KEY} or ${AIMLAPI_API_KEY} with your own API key.
Then you can visit http://localhost:5670 in the browser.
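If you want to confirm the container is serving before opening the browser, you can poll the web port from the shell. This is a minimal sketch; it only checks that the root URL on port 5670 responds, and the 60-second timeout is an arbitrary choice:

```shell
# Poll the DB-GPT web port until it responds (up to ~60 seconds).
for i in $(seq 1 30); do
  if curl -fsS -o /dev/null http://localhost:5670; then
    echo "DB-GPT is up"
    break
  fi
  sleep 2
done
```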
Deploy With GPU (Local Model)
In this deployment, you need a GPU environment.
Before running the Docker container, you need to install the NVIDIA Container Toolkit. For more information, please refer to the official documentation NVIDIA Container Toolkit.
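For reference, the installation on Ubuntu/Debian roughly follows the steps below. This is a sketch based on the NVIDIA Container Toolkit install guide; package names and repository URLs may change over time, so treat the official documentation as authoritative for your distribution:

```shell
# Add the NVIDIA Container Toolkit package repository and its signing key.
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -sL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit.
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```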
In this deployment, you will use a local model instead of downloading it from the Hugging Face or ModelScope model hub. This is useful if you have already downloaded the model to your local machine or if you want to use a model from a different source.
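As a sketch of this setup, the local model directory can be bind-mounted into the container and the GPU exposed via the NVIDIA runtime. The host path /data/models, the container path /app/models, and the image tag eosphorosai/dbgpt:latest are assumptions for illustration; adjust them to where you actually stored the model and to the image you built or pulled:

```shell
# Hypothetical example: run DB-GPT with GPU access and a locally stored model.
# /data/models (host) and /app/models (container) are assumed paths.
docker run -it --rm --gpus all \
  -v /data/models:/app/models \
  -p 5670:5670 --name dbgpt eosphorosai/dbgpt:latest
```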