Version: dev

ChatWithDBQAConfig Configuration


Parameters

| Name | Type | Required | Description | Default |
|------|------|----------|-------------|---------|
| `top_k` | integer | | The top-k sampling parameter for LLM generation | |
| `top_p` | number | | The top-p (nucleus) sampling parameter for LLM generation | |
| `temperature` | number | | The sampling temperature for LLM generation | |
| `max_new_tokens` | integer | | The maximum number of new tokens for LLM generation | |
| `name` | string | | The name of your app | |
| `memory` | BaseGPTsAppMemoryConfig | | Memory configuration | `BufferWindowGPTsAppMemoryConfig` |
| `schema_retrieve_top_k` | integer | | The number of tables to retrieve from the database | `10` |
| `schema_max_tokens` | integer | | The maximum number of schema tokens to pass to the model (100 * 1024 by default). Only used as a fallback when schema retrieval fails and all table schemas are loaded. | `102400` |
| `max_num_results` | integer | | The maximum number of results to return from the query | `50` |