# ChatWithDBExecuteConfig Configuration

Chat With DB Execute Configuration

## Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| top_k | integer | ❌ | The top-k sampling value for LLM generation |
| top_p | number | ❌ | The top-p (nucleus) sampling value for LLM generation |
| temperature | number | ❌ | The sampling temperature for LLM generation |
| max_new_tokens | integer | ❌ | The maximum number of new tokens for LLM generation |
| name | string | ❌ | The name of your app |
| memory | BaseGPTsAppMemoryConfig | ❌ | Memory configuration. Defaults: BufferWindowGPTsAppMemoryConfig |
| schema_retrieve_top_k | integer | ❌ | The number of tables to retrieve from the database. Defaults: 10 |
| schema_max_tokens | integer | ❌ | The maximum number of schema tokens to pass to the model (100 * 1024). Only applies when schema retrieval fails and all table schemas are loaded instead. Defaults: 102400 |
| max_num_results | integer | ❌ | The maximum number of results to return from the query. Defaults: 50 |
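To make the defaults concrete, here is a minimal sketch of the configuration as a Python dataclass. The field names and defaults mirror the table above, but the class definition itself is illustrative, not the actual `ChatWithDBExecuteConfig` implementation (the real class also accepts a `memory` object, omitted here for brevity).

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: field names and defaults mirror the parameter
# table above; this is not the actual ChatWithDBExecuteConfig source.
@dataclass
class ChatWithDBExecuteConfig:
    top_k: Optional[int] = None            # top-k sampling for LLM generation
    top_p: Optional[float] = None          # top-p (nucleus) sampling
    temperature: Optional[float] = None    # sampling temperature
    max_new_tokens: Optional[int] = None   # cap on newly generated tokens
    name: Optional[str] = None             # the name of your app
    schema_retrieve_top_k: int = 10        # tables to retrieve from the database
    schema_max_tokens: int = 100 * 1024    # token budget when all schemas are loaded
    max_num_results: int = 50              # max rows returned from the query

# Example: lower the temperature and return fewer rows per query.
config = ChatWithDBExecuteConfig(temperature=0.1, max_num_results=20)
print(config.schema_max_tokens)  # 102400
```

All parameters are optional, so a bare `ChatWithDBExecuteConfig()` uses the defaults listed in the table.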