Debugging
DB-GPT provides a series of tools to help developers troubleshoot and resolve problems they may encounter.
View Trace Logs With Command
DB-GPT writes key system runtime information to trace logs. By default, they are located in logs/dbgpt*.jsonl.
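Since the trace files are plain JSONL, you can also inspect them directly. Below is a minimal sketch that loads every record from the default log location; the field names it reads (trace_id, operation_name) are assumptions inferred from the dbgpt trace chat output shown later, not a documented schema.

```python
import glob
import json


def load_spans(pattern="logs/dbgpt*.jsonl"):
    """Load every span record from the JSONL trace logs matching `pattern`."""
    spans = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line:
                    # Each non-empty line is assumed to be one JSON span record.
                    spans.append(json.loads(line))
    return spans


if __name__ == "__main__":
    for span in load_spans():
        print(span.get("trace_id"), span.get("operation_name"))
```

This is only a starting point for ad-hoc analysis; the dbgpt trace command below does the same parsing with richer presentation.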
DB-GPT also provides a command line tool, dbgpt trace, to help analyze these trace logs. You can check its usage with the following command:
dbgpt trace --help
View chat details
You can view chat details through the dbgpt trace chat
command. By default, the latest conversation information is displayed.
View service runtime information
You can hide the conversation details and view only the service runtime configuration with the --hide_conv option:
dbgpt trace chat --hide_conv
The output is as follows:
+------------------------+--------------------------+-----------------------------+------------------------------------+
| Config Key (Webserver) | Config Value (Webserver) | Config Key (EmbeddingModel) | Config Value (EmbeddingModel) |
+------------------------+--------------------------+-----------------------------+------------------------------------+
| host | 0.0.0.0 | model_name | text2vec |
| port | 5000 | model_path | /app/models/text2vec-large-chinese |
| daemon | False | device | cuda |
| share | False | normalize_embeddings | None |
| remote_embedding | False | | |
| log_level | None | | |
| light | False | | |
+------------------------+--------------------------+-----------------------------+------------------------------------+
+--------------------------+-----------------------------+----------------------------+------------------------------+
| Config Key (ModelWorker) | Config Value (ModelWorker) | Config Key (WorkerManager) | Config Value (WorkerManager) |
+--------------------------+-----------------------------+----------------------------+------------------------------+
| model_name | vicuna-13b-v1.5 | model_name | vicuna-13b-v1.5 |
| model_path | /app/models/vicuna-13b-v1.5 | model_path | /app/models/vicuna-13b-v1.5 |
| device | cuda | worker_type | None |
| model_type | huggingface | worker_class | None |
| prompt_template | None | model_type | huggingface |
| max_context_size | 4096 | host | 0.0.0.0 |
| num_gpus | None | port | 5000 |
| max_gpu_memory | None | daemon | False |
| cpu_offloading | False | limit_model_concurrency | 5 |
| load_8bit | False | standalone | True |
| load_4bit | False | register | True |
| quant_type | nf4 | worker_register_host | None |
| use_double_quant | True | controller_addr | http://127.0.0.1:5000 |
| compute_dtype | None | send_heartbeat | True |
| trust_remote_code | True | heartbeat_interval | 20 |
| verbose | False | log_level | None |
+--------------------------+-----------------------------+----------------------------+------------------------------+
View latest conversation information
You can hide the runtime parameters and view only the latest conversation with the --hide_run_params option:
dbgpt trace chat --hide_run_params
The output is as follows:
+-------------------------------------------------------------------------------------------------------------------------------------------+
| Chat Trace Details |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
| Key            | Value                                                                                                                    |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
| trace_id | 5d1900c3-5aad-4159-9946-fbb600666530 |
| span_id | 5d1900c3-5aad-4159-9946-fbb600666530:14772034-bed4-4b4e-b43f-fcf3a8aad6a7 |
| conv_uid | 5e456272-68ac-11ee-9fba-0242ac150003 |
| user_input | Who are you? |
| chat_mode | chat_normal |
| select_param | None |
| model_name | vicuna-13b-v1.5 |
| temperature | 0.6 |
| max_new_tokens | 1024 |
| echo | False |
| llm_adapter | FastChatLLMModelAdaperWrapper(fastchat.model.model_adapter.VicunaAdapter) |
| User prompt | A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polit |
| | e answers to the user's questions. USER: Who are you? ASSISTANT: |
| Model output | You can call me Vicuna, and I was trained by Large Model Systems Organization (LMSYS) researchers as a language model. |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
View chat details and call chain
Pass the --tree option to also print the invocation call tree:
dbgpt trace chat --hide_run_params --tree
The output is as follows:
Invoke Trace Tree:
Operation: DB-GPT-Web-Entry (Start: 2023-10-12 03:06:43.180, End: None)
    Operation: get_chat_instance (Start: 2023-10-12 03:06:43.258, End: None)
    Operation: get_chat_instance (Start: 2023-10-12 03:06:43.258, End: 2023-10-12 03:06:43.424)
    Operation: stream_generator (Start: 2023-10-12 03:06:43.425, End: None)
        Operation: BaseChat.stream_call (Start: 2023-10-12 03:06:43.426, End: None)
            Operation: WorkerManager.generate_stream (Start: 2023-10-12 03:06:43.426, End: None)
                Operation: DefaultModelWorker.generate_stream (Start: 2023-10-12 03:06:43.428, End: None)
                    Operation: DefaultModelWorker_call.generate_stream_func (Start: 2023-10-12 03:06:43.430, End: None)
                    Operation: DefaultModelWorker_call.generate_stream_func (Start: 2023-10-12 03:06:43.430, End: 2023-10-12 03:06:48.518)
                Operation: DefaultModelWorker.generate_stream (Start: 2023-10-12 03:06:43.428, End: 2023-10-12 03:06:48.518)
            Operation: WorkerManager.generate_stream (Start: 2023-10-12 03:06:43.426, End: 2023-10-12 03:06:48.518)
        Operation: BaseChat.stream_call (Start: 2023-10-12 03:06:43.426, End: 2023-10-12 03:06:48.519)
    Operation: stream_generator (Start: 2023-10-12 03:06:43.425, End: 2023-10-12 03:06:48.519)
Operation: DB-GPT-Web-Entry (Start: 2023-10-12 03:06:43.180, End: 2023-10-12 03:06:43.257)
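The tree is built by nesting each span's start and end records: a record with End: None opens a nesting level, and the matching completed record closes it. As a rough, hypothetical sketch of the idea (the event shape here is invented for illustration, not the real trace record format), a flat start/end sequence in log order can be rendered like this:

```python
def render_tree(events):
    """Render a flat sequence of (operation, is_start) events as a nested tree.

    Events must appear in log order: each start increases the depth for the
    spans that follow it, and the matching end is printed at the same depth
    as its start before restoring the parent's depth.
    """
    lines, depth = [], 0
    for op, is_start in events:
        if is_start:
            lines.append("    " * depth + f"Operation: {op} (start)")
            depth += 1
        else:
            depth -= 1
            lines.append("    " * depth + f"Operation: {op} (end)")
    return "\n".join(lines)


if __name__ == "__main__":
    events = [
        ("DB-GPT-Web-Entry", True),
        ("stream_generator", True),
        ("BaseChat.stream_call", True),
        ("BaseChat.stream_call", False),
        ("stream_generator", False),
        ("DB-GPT-Web-Entry", False),
    ]
    print(render_tree(events))
```

This mirrors why the DB-GPT-Web-Entry end record appears last in the output above even though its End timestamp is earlier: the tree follows log order, not timestamps.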
+-------------------------------------------------------------------------------------------------------------------------------------------+
| Chat Trace Details |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
| Key            | Value                                                                                                                    |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
| trace_id | 5d1900c3-5aad-4159-9946-fbb600666530 |
| span_id | 5d1900c3-5aad-4159-9946-fbb600666530:14772034-bed4-4b4e-b43f-fcf3a8aad6a7 |
| conv_uid | 5e456272-68ac-11ee-9fba-0242ac150003 |
| user_input | Who are you? |
| chat_mode | chat_normal |
| select_param | None |
| model_name | vicuna-13b-v1.5 |
| temperature | 0.6 |
| max_new_tokens | 1024 |
| echo | False |
| llm_adapter | FastChatLLMModelAdaperWrapper(fastchat.model.model_adapter.VicunaAdapter) |
| User prompt | A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polit |
| | e answers to the user's questions. USER: Who are you? ASSISTANT: |
| Model output | You can call me Vicuna, and I was trained by Large Model Systems Organization (LMSYS) researchers as a language model. |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
View chat details based on trace_id
Specify a trace_id with the --trace_id option to view the details of a particular conversation:
dbgpt trace chat --hide_run_params --trace_id ec30d733-7b35-4d61-b02e-2832fd2e29ff
The output is as follows:
+-------------------------------------------------------------------------------------------------------------------------------------------+
| Chat Trace Details |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
| Key            | Value                                                                                                                    |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
| trace_id | ec30d733-7b35-4d61-b02e-2832fd2e29ff |
| span_id | ec30d733-7b35-4d61-b02e-2832fd2e29ff:0482a0c5-38b3-4b38-8101-e42489f90ccd |
| conv_uid | 87a722de-68ae-11ee-9fba-0242ac150003 |
| user_input | Hello |
| chat_mode | chat_normal |
| select_param | None |
| model_name | vicuna-13b-v1.5 |
| temperature | 0.6 |
| max_new_tokens | 1024 |
| echo | False |
| llm_adapter | FastChatLLMModelAdaperWrapper(fastchat.model.model_adapter.VicunaAdapter) |
| User prompt | A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polit |
| | e answers to the user's questions. USER: Hello ASSISTANT: |
| Model output | Hello! How can I help you today? Is there something specific you want to know or talk about? I'm here to answer any ques |
| | tions you might have, to the best of my ability. |
+----------------+--------------------------------------------------------------------------------------------------------------------------+
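If you are processing the logs yourself, selecting one conversation by trace_id amounts to a simple filter, and "latest conversation" (the default when no --trace_id is given) can be approximated by the most recent start time. The sketch below assumes each record carries trace_id and start_time fields, which is an inference from the output above rather than a documented schema:

```python
def spans_for_trace(spans, trace_id):
    """Return only the span records belonging to one trace (conversation)."""
    return [s for s in spans if s.get("trace_id") == trace_id]


def latest_trace_id(spans):
    """Pick the trace whose span started most recently.

    Assumes start_time values are strings like '2023-10-12 03:06:43.180',
    which sort chronologically as plain strings.
    """
    started = [s for s in spans if s.get("start_time")]
    if not started:
        return None
    return max(started, key=lambda s: s["start_time"])["trace_id"]
```

For example, latest_trace_id(load_spans()) followed by spans_for_trace would give you the same set of records that dbgpt trace chat summarizes by default.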