
DB-GPT V0.6.0, Defining new standards for AI-native data applications.

· 4 min read

Introduction

DB-GPT is an open-source AI-native data application development framework built on AWEL and agents. In V0.6.0, we further provide flexible and scalable AI-native data application management and development capabilities around large models. These help enterprises quickly build and deploy intelligent data applications, and achieve digital transformation and business growth through intelligent data analysis, insight, and decision-making.

V0.6.0 mainly adds and enhances the following core features:

  • AWEL protocol upgraded to 2.0, supporting more complex orchestration

  • Supports the creation and lifecycle management of data applications, with multiple application construction modes: multi-agent automatic planning, task-flow orchestration, single-agent, and native application modes

  • GraphRAG supports graph community summary and hybrid retrieval, and the graph index cost is reduced by 50% compared to Microsoft GraphRAG.

  • Supports multiple agent memory types, such as perceptual memory, short-term memory, long-term memory, and hybrid memory

  • Supports intent recognition and prompt management, and adds support for Text2NLU and Text2GQL fine-tuning

  • Upgraded GPT-Vis front-end visualization to support richer visualization charts

Features

AWEL protocol upgraded to 2.0, supporting more complex orchestration and optimizing front-end visualization and interaction capabilities.

AWEL (Agentic Workflow Expression Language) is an agent-based workflow expression language designed specifically for large model application development, providing powerful functionality and flexibility. Through the AWEL API, developers can focus on the application logic around large models without having to deal with cumbersome details of models, environments, and so on. In AWEL 2.0, we support more complex orchestration and visualization.
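The core idea behind AWEL, composing small operators into a workflow and letting the runtime drive execution, can be sketched in a few lines of plain Python. This is a conceptual illustration only, not the actual `dbgpt.core.awel` API:

```python
# Conceptual sketch of AWEL-style operator chaining (illustration only,
# not the real dbgpt.core.awel API).
class Operator:
    def __init__(self, fn):
        self.fn = fn
        self.next = None

    def __rshift__(self, other):
        # `a >> b` wires b as the downstream of a and returns b,
        # so chains like `a >> b >> c` read left to right.
        self.next = other
        return other

    def call(self, value):
        result = self.fn(value)
        return self.next.call(result) if self.next else result


# Build a tiny three-step "workflow": parse -> transform -> format.
parse = Operator(lambda s: int(s))
double = Operator(lambda n: n * 2)
render = Operator(lambda n: f"result={n}")
parse >> double >> render

print(parse.call("21"))  # result=42
```

In the real framework, operators are declared inside a DAG context and executed asynchronously; the sketch only shows the chaining idea.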

Supports the creation and lifecycle management of data applications, with multiple application construction modes: multi-agent automatic planning, task-flow orchestration, single-agent, and native application modes.

GraphRAG supports graph community summarization and hybrid retrieval.

Graph construction and retrieval performance have clear advantages over community solutions, with support for appealing visualization. GraphRAG is a retrieval-augmented generation system based on knowledge graphs. Through the construction and retrieval of knowledge graphs, it further improves retrieval accuracy and recall stability, while reducing large-model hallucination and enhancing the effectiveness of domain applications. DB-GPT works with TuGraph to build efficient retrieval-augmented generation capabilities.

Building on the universal RAG framework introduced in DB-GPT v0.5.6, which integrates vector, graph, and full-text indexes, DB-GPT v0.6.0 enhances the graph index (GraphRAG) to support graph community summarization and hybrid retrieval. In the new version, we introduced TuGraph's built-in Leiden community detection algorithm, use large models to extract summaries of community subgraphs, and finally apply similarity-based recall over community summaries to handle generalized questions, i.e., QFS (Query-Focused Summarization). In addition, in the knowledge-extraction stage, we upgraded the original triple extraction to graph extraction with vertex and edge summaries, and optimized the extraction of information associated across text chunks by using chunk history, further increasing the information density of the knowledge graph.
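The pipeline described above, detect communities, summarize each one, then recall summaries by similarity, can be sketched as follows. This is a simplified toy illustration: community detection, LLM summarization, and embedding-based recall are all stubbed out, whereas the real pipeline uses TuGraph's Leiden algorithm, a large model, and a vector index:

```python
# Simplified sketch of community-summary retrieval for QFS-style questions.
def detect_communities(edges):
    """Toy stand-in for Leiden: group nodes by connected components."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)
    groups = {}
    for node in list(parent):
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())


def summarize(community):
    """Stand-in for an LLM-generated community summary."""
    return "Entities: " + ", ".join(sorted(community))


def retrieve(question_terms, summaries):
    """Rank summaries by naive keyword overlap (stand-in for vector recall)."""
    return max(summaries, key=lambda s: sum(t in s for t in question_terms))


edges = [("DB-GPT", "AWEL"), ("AWEL", "agents"), ("TuGraph", "Leiden")]
summaries = [summarize(c) for c in detect_communities(edges)]
print(retrieve({"TuGraph"}, summaries))  # Entities: Leiden, TuGraph
```

The key design point survives the simplification: generalized questions are answered against pre-computed community summaries rather than against raw graph fragments.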

Based on this design, we used the open-source knowledge-graph corpus (OSGraph) provided by the TuGraph community, together with product introduction materials for DB-GPT and TuGraph (about 43k tokens in total), and ran comparative tests against Microsoft's GraphRAG system. DB-GPT consumed only 50% of the token overhead while generating a knowledge graph of the same scale, and, with equivalent question-answering quality, global search performance improved significantly.

For the final generated knowledge graph, we upgraded the front-end rendering logic with AntV's G6 engine, so that the knowledge-graph data and community partitioning results can be previewed intuitively.

GPT-Vis: an interactive visualization solution for LLMs and data, supporting rich chart display and intelligent recommendations.

Text2GQL and Text2NLU fine-tuning: newly supports fine-tuning from natural language to graph query language (Text2GQL), as well as fine-tuning for semantic classification (Text2NLU).

Acknowledgements

This iteration would not have been possible without the participation of developers and users in the community, and it also deepens our cooperation with the TuGraph and AntV communities. Thanks to all the contributors who made this release possible!

@Aries-ckt, @Dreammy23, @Hec-gitHub, @JxQg, @KingSkyLi, @M1n9X, @bigcash, @chaplinthink, @csunny, @dusens, @fangyinc, @huangjh131, @hustcc, @lhwan, @whyuds and @yhjun1026

Reference

DB-GPT Now Supports Meta Llama 3.1 Series Models

· 2 min read
Fangyin Cheng
DB-GPT Core Team

We are thrilled to announce that DB-GPT now supports inference with the Meta Llama 3.1 series models!

Introducing Meta Llama 3.1

Meta Llama 3.1 is a state-of-the-art series of language models developed by Meta AI. Designed with cutting-edge techniques, the Llama 3.1 models offer unparalleled performance and versatility. Here are some of the key highlights:

  • Variety of Models: Meta Llama 3.1 is available in 8B, 70B, and 405B versions, each with both instruction-tuned and base models, supporting contexts up to 128k tokens.
  • Multilingual Support: Supports 8 languages, including English, German, and French.
  • Extensive Training: Trained on over 15 trillion tokens, with over 25 million human-annotated and synthetic samples used for fine-tuning.
  • Flexible Licensing: Permissive model output usage allows for adaptation into other large language models (LLMs).
  • Quantization Support: Available in FP8, AWQ, and GPTQ quantized versions for efficient inference.
  • Performance: The Llama 3.1 405B version has outperformed GPT-4 in several benchmarks.
  • Enhanced Efficiency: The 8B and 70B models have seen a 12% improvement in coding and instruction-following capabilities.
  • Tool and Function Call Support: Supports tool usage and function calling.

How to Access Meta Llama 3.1

You can access the Meta Llama 3.1 models by following Access to Hugging Face.

For comprehensive documentation and additional details, please refer to the model card.

Using Meta Llama 3.1 in DB-GPT

Please read the Source Code Deployment to learn how to install DB-GPT from source code.

Llama 3.1 requires transformers >= 4.43.0, so please upgrade your transformers package:

pip install --upgrade "transformers>=4.43.0"

Please cd to the DB-GPT root directory:

cd DB-GPT

We assume that your models are stored in the models directory, e.g., models/Meta-Llama-3.1-8B-Instruct.
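If you have not downloaded the weights yet, one way to fetch them is with the Hugging Face CLI. This assumes you have run `huggingface-cli login` and have been granted access to the gated Llama repository:

```shell
# Download the 8B instruct weights into the local models directory.
# Requires a Hugging Face account with access to the gated repo.
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct \
    --local-dir models/Meta-Llama-3.1-8B-Instruct
```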

Then modify your .env file:

LLM_MODEL=meta-llama-3.1-8b-instruct
# LLM_MODEL=meta-llama-3.1-70b-instruct
# LLM_MODEL=meta-llama-3.1-405b-instruct
## you can also specify the model path
# LLM_MODEL_PATH=models/Meta-Llama-3.1-8B-Instruct
## Quantization settings
# QUANTIZE_8bit=False
# QUANTIZE_4bit=True
## You can configure the maximum memory used by each GPU.
# MAX_GPU_MEMORY=16GiB

Then you can run the following command to start the server:

dbgpt start webserver

Open your browser and visit http://localhost:5670 to use the Meta Llama 3.1 models in DB-GPT.

Enjoy the power of Meta Llama 3.1 in DB-GPT!