Graph RAG User Manual
In this example, we will show how to use the Graph RAG framework in DB-GPT. Using a graph database to implement RAG can, to some extent, alleviate the uncertainty and poor interpretability of vector-database retrieval.
You can refer to the Python example file DB-GPT/examples/rag/graph_rag_example.py in the source code. This example demonstrates how to load knowledge from a document and store it in a graph store. Subsequently, it recalls knowledge relevant to your question by searching for triplets in the graph store.
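To make the retrieval idea concrete, here is a minimal, framework-free sketch of the Graph RAG flow: extract (subject, predicate, object) triplets from text, index them in a graph store, then retrieve triplets whose entities appear in the question. The extractor and in-memory store below are simplified stand-ins, not DB-GPT APIs; in the real example an LLM performs the extraction and TuGraph holds the graph.

```python
# Toy sketch of the Graph RAG flow (not DB-GPT's actual API):
# 1) extract triplets, 2) index by entity, 3) match question keywords.
from collections import defaultdict

def extract_triplets(text):
    """Toy extractor: treat each 'subject predicate object' line as a
    triplet. DB-GPT would ask an LLM to extract triplets instead."""
    triplets = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3:
            triplets.append(tuple(parts))
    return triplets

class InMemoryGraphStore:
    """Stand-in for a real graph database such as TuGraph."""
    def __init__(self):
        self._by_entity = defaultdict(list)

    def insert(self, triplets):
        # Index each triplet under both its subject and object entity.
        for s, p, o in triplets:
            self._by_entity[s.lower()].append((s, p, o))
            self._by_entity[o.lower()].append((s, p, o))

    def search(self, question):
        # Recall triplets whose entities match words in the question.
        hits = []
        for word in question.lower().split():
            hits.extend(self._by_entity.get(word, []))
        return list(dict.fromkeys(hits))  # dedupe, keep order

store = InMemoryGraphStore()
store.insert(extract_triplets("TuGraph supports Cypher\nDB-GPT uses TuGraph"))
print(store.search("What does TuGraph support"))
```

The recalled triplets would then be passed to the LLM as context, which is what makes the retrieval step more interpretable than opaque vector similarity.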
Install Dependencies
First, you need to install the dbgpt library.
pip install "dbgpt[rag]>=0.6.0"
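After installing, you can optionally confirm that dbgpt is present and meets the >= 0.6.0 requirement. This check uses only the Python standard library:

```python
# Optional sanity check: is dbgpt installed, and which version?
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string, or None if not installed."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

v = installed_version("dbgpt")
if v is None:
    print("dbgpt is not installed; run: pip install 'dbgpt[rag]>=0.6.0'")
else:
    print(f"dbgpt {v} detected")
```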
Prepare Graph Database
To store the knowledge in a graph, we need a graph database. TuGraph is the first graph database supported by DB-GPT.
Visit the TuGraph GitHub repository to view its Quick Start document, then follow the instructions to pull the TuGraph database docker image (latest, version >= 4.3.2) and launch it.
docker pull tugraph/tugraph-runtime-centos7:latest
docker run -d -p 7070:7070 -p 7687:7687 -p 9090:9090 --name tugraph_demo tugraph/tugraph-runtime-centos7:latest lgraph_server -d run --enable_plugin true
The default port for the bolt protocol is 7687.
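Before moving on, you can verify that the container is listening on the bolt port. The host and port below assume a local docker deployment as launched above:

```python
# Quick TCP connectivity check for the TuGraph bolt port.
# "localhost"/7687 are assumptions for a local docker deployment.
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("bolt reachable:", port_open("localhost", 7687))
```

If this prints False, check the container status with `docker ps` and the port mappings in the `docker run` command.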
Prepare LLM
To build a Graph RAG program, we need an LLM. Here are some of the LLMs that DB-GPT supports:
- OpenAI (API)
- Yi (API)
- API Server (cluster)
First, you should install the openai library.
pip install openai
Then set your API key in the environment variable OPENAI_API_KEY.
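If you prefer, the key can also be set from Python before the client is created. The key string below is a placeholder, not a real credential:

```python
# Set the API key from Python; "sk-your-key-here" is a placeholder.
import os

os.environ.setdefault("OPENAI_API_KEY", "sk-your-key-here")
print("OPENAI_API_KEY set:", "OPENAI_API_KEY" in os.environ)
```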
from dbgpt.model.proxy import OpenAILLMClient
llm_client = OpenAILLMClient()
You should have a Yi account and get the API key from the Yi official website.
First, you should install the openai library.
pip install openai
Then set your API key in the environment variable YI_API_KEY.
from dbgpt.model.proxy import YiLLMClient
llm_client = YiLLMClient()
If you have deployed a DB-GPT cluster and API server, you can connect to the API server to access the LLM.
The API is compatible with the OpenAI API, so you can use the OpenAILLMClient to connect to the API server.
First, you should install the openai library.
pip install openai
from dbgpt.model.proxy import OpenAILLMClient
llm_client = OpenAILLMClient(api_base="http://localhost:8100/api/v1/", api_key="{your_api_key}")