
Tool Resource

In the previous Tool Use section, we introduced how to use tools in an agent. In this section, we will look at tools in more detail.

Function Tools

Create Tool With @tool Decorator

You can create a tool by simply decorating a function with the @tool decorator.

from dbgpt.agent.resource import tool

@tool
def simple_calculator(first_number: int, second_number: int, operator: str) -> float:
"""Simple calculator tool. Just support +, -, *, /."""
if isinstance(first_number, str):
first_number = int(first_number)
if isinstance(second_number, str):
second_number = int(second_number)
if operator == "+":
return first_number + second_number
elif operator == "-":
return first_number - second_number
elif operator == "*":
return first_number * second_number
elif operator == "/":
return first_number / second_number
else:
raise ValueError(f"Invalid operator: {operator}")

Let's take a closer look at simple_calculator:

print("Type: ", type(simple_calculator._tool))
print("")
print(simple_calculator._tool)

Run the code above, and you will see output like this:

Type:  <class 'dbgpt.agent.resource.tool.base.FunctionTool'>

Tool: simple_calculator (Simple calculator tool. Just support +, -, *, /.)

As you can see, the _tool attribute of simple_calculator is a FunctionTool object, and the tool's description is "Simple calculator tool. Just support +, -, *, /.".
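Because _tool is a BaseTool subclass, you can also read its name and description properties directly. This is a minimal sketch; the property names follow the BaseTool interface shown later in this section:

ftool = simple_calculator._tool
print("Name: ", ftool.name)                # simple_calculator
print("Description: ", ftool.description)  # Simple calculator tool. Just support +, -, *, /.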

How Does Agent Use Tools?

Generate Tool Prompts

You can call the tool's get_prompt method to generate its prompt.

async def show_prompts():
    from dbgpt.agent.resource import BaseTool

    tool: BaseTool = simple_calculator._tool

    tool_prompt = await tool.get_prompt()
    openai_tool_prompt = await tool.get_prompt(prompt_type="openai")
    print(f"Tool Prompt: \n{tool_prompt}\n")
    print(f"OpenAI Tool Prompt: \n{openai_tool_prompt}\n")


if __name__ == "__main__":
    import asyncio

    asyncio.run(show_prompts())

Run the code above, and you will see output like this:

Tool Prompt: 
simple_calculator: Call this tool to interact with the simple_calculator API. What is the simple_calculator API useful for? Simple calculator tool. Just support +, -, *, /. Parameters: [{"name": "first_number", "type": "integer", "description": "First Number", "required": true}, {"name": "second_number", "type": "integer", "description": "Second Number", "required": true}, {"name": "operator", "type": "string", "description": "Operator", "required": true}]

OpenAI Tool Prompt:
simple_calculator: Call this tool to interact with the simple_calculator API. What is the simple_calculator API useful for? Simple calculator tool. Just support +, -, *, /. Parameters: {"type": "object", "properties": {"first_number": {"type": "integer", "description": "First Number"}, "second_number": {"type": "integer", "description": "Second Number"}, "operator": {"type": "string", "description": "Operator"}}, "required": ["first_number", "second_number", "operator"]}

In the code above, you can see two types of prompts generated by the tool: one is the default prompt, and the other is the OpenAI-style prompt (prompt_type="openai", whose format is similar to the OpenAI function calling API).

So, as you may have guessed, the LLM only needs to read the tool descriptions in the prompt and then indicate in its response which tool to use and what parameters to pass to it.
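For example, with the simple_calculator tool, the model might answer with a small JSON payload naming the tool and its arguments; this is the same format you will see in the agent run later in this section:

{
    "thought": "To calculate the product of 10 and 99, we need to use a tool that can perform multiplication operation.",
    "tool_name": "simple_calculator",
    "args": {
        "first_number": 10,
        "second_number": 99,
        "operator": "*"
    }
}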

Why is get_prompt an async function? Because it is an abstract method of the Resource class (implemented in the BaseTool class) and must work for all resource types. If the resource is a knowledge base, building the prompt may require retrieving content from that knowledge base, which can take some time, so the method is asynchronous for better performance.
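One practical benefit of the async design is that prompts for several resources can be awaited concurrently. The snippet below is only an illustrative sketch (not the library's internal implementation), assuming each resource exposes the async get_prompt method described above:

import asyncio


async def gather_prompts(resources):
    # Await get_prompt for all resources concurrently, so a slow resource
    # (e.g. a knowledge base retrieval) does not block the others.
    return await asyncio.gather(*(resource.get_prompt() for resource in resources))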

Wrap Tools To ToolPack

To test multiple tools, let's write another tool that helps LLMs count the number of files in a directory, and then wrap the two tools into a ToolPack.

import os
from typing_extensions import Annotated, Doc

from dbgpt.agent.resource import ToolPack

@tool
def count_directory_files(path: Annotated[str, Doc("The directory path")]) -> int:
"""Count the number of files in a directory."""
if not os.path.isdir(path):
raise ValueError(f"Invalid directory path: {path}")
return len(os.listdir(path))

tools = ToolPack([simple_calculator, count_directory_files])

Let's look at the prompt of the ToolPack:

async def show_tool_pack_prompts():
    # Just show the default prompt type
    tool_pack_prompt = await tools.get_prompt()
    print(tool_pack_prompt)


if __name__ == "__main__":
    import asyncio

    asyncio.run(show_tool_pack_prompts())

Run the code above, and you will see output like this:

You have access to the following APIs:
simple_calculator: Call this tool to interact with the simple_calculator API. What is the simple_calculator API useful for? Simple calculator tool. Just support +, -, *, /. Parameters: [{"name": "first_number", "type": "integer", "description": "First Number", "required": true}, {"name": "second_number", "type": "integer", "description": "Second Number", "required": true}, {"name": "operator", "type": "string", "description": "Operator", "required": true}]
count_directory_files: Call this tool to interact with the count_directory_files API. What is the count_directory_files API useful for? Count the number of files in a directory. Parameters: [{"name": "path", "type": "string", "description": "The directory path", "required": true}]

Use Tools In Your Agent

In the previous Tool Use section, we introduced how to use tools in an agent.

Just like the example below:


import asyncio
import os
from dbgpt.agent import AgentContext, AgentMemory, LLMConfig, UserProxyAgent
from dbgpt.agent.expand.tool_assistant_agent import ToolAssistantAgent
from dbgpt.model.proxy import OpenAILLMClient

async def main():
    llm_client = OpenAILLMClient(
        model_alias="gpt-3.5-turbo",  # or other models, e.g. "gpt-4o"
        api_base=os.getenv("OPENAI_API_BASE"),
        api_key=os.getenv("OPENAI_API_KEY"),
    )
    context: AgentContext = AgentContext(
        conv_id="test123", language="en", temperature=0.5, max_new_tokens=2048
    )
    agent_memory = AgentMemory()

    user_proxy = await UserProxyAgent().bind(agent_memory).bind(context).build()

    tool_man = (
        await ToolAssistantAgent()
        .bind(context)
        .bind(LLMConfig(llm_client=llm_client))
        .bind(agent_memory)
        .bind(tools)
        .build()
    )

    await user_proxy.initiate_chat(
        recipient=tool_man,
        reviewer=user_proxy,
        message="Calculate the product of 10 and 99",
    )

    await user_proxy.initiate_chat(
        recipient=tool_man,
        reviewer=user_proxy,
        message="Count the number of files in /tmp",
    )


if __name__ == "__main__":
    asyncio.run(main())

Run the code above, and you will see output like this:

--------------------------------------------------------------------------------
User (to LuBan)-[]:

"Calculate the product of 10 and 99"

--------------------------------------------------------------------------------
un_stream ai response: {
"thought": "To calculate the product of 10 and 99, we need to use a tool that can perform multiplication operation.",
"tool_name": "simple_calculator",
"args": {
"first_number": 10,
"second_number": 99,
"operator": "*"
}
}

--------------------------------------------------------------------------------
LuBan (to User)-[gpt-3.5-turbo]:

"{\n \"thought\": \"To calculate the product of 10 and 99, we need to use a tool that can perform multiplication operation.\",\n \"tool_name\": \"simple_calculator\",\n \"args\": {\n \"first_number\": 10,\n \"second_number\": 99,\n \"operator\": \"*\"\n }\n}"
>>>>>>>>LuBan Review info:
Pass(None)
>>>>>>>>LuBan Action report:
execution succeeded,
990

--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
User (to LuBan)-[]:

"Count the number of files in /tmp"

--------------------------------------------------------------------------------
un_stream ai response: {
"thought": "To count the number of files in the directory /tmp, we can use the count_directory_files tool.",
"tool_name": "count_directory_files",
"args": {
"path": "/tmp"
}
}

--------------------------------------------------------------------------------
LuBan (to User)-[gpt-3.5-turbo]:

"{\n \"thought\": \"To count the number of files in the directory /tmp, we can use the count_directory_files tool.\",\n \"tool_name\": \"count_directory_files\",\n \"args\": {\n \"path\": \"/tmp\"\n }\n}"
>>>>>>>>LuBan Review info:
Pass(None)
>>>>>>>>LuBan Action report:
execution succeeded,
16

More Details About @tool Decorator

The @tool decorator creates a tool from a function. By default, it parses the function signature and docstring to generate the tool description, but you can also customize these to suit your needs.

Customize Tool Name And Description

You can customize the tool name and description with the name and description parameters of the @tool decorator.

from dbgpt.agent.resource import tool

@tool("my_two_sum", description="Add two numbers and return the sum.")
def two_sum(a: int, b: int) -> int:
    return a + b

The prompt of the two_sum tool will look like this:

my_two_sum: Call this tool to interact with the my_two_sum API. What is the my_two_sum API useful for? Add two numbers and return the sum. Parameters: [{"name": "a", "type": "integer", "description": "A", "required": true}, {"name": "b", "type": "integer", "description": "B", "required": true}]

Customize Tool Parameters

You can customize the tool parameters with the args parameter of the @tool decorator.

from dbgpt.agent.resource import tool

@tool(
    args={
        "a": {
            "type": "integer",
            "description": "First number to add",
            "required": True,
        },
        "b": {
            "type": "integer",
            "description": "Second number to add",
            "required": True,
        },
    }
)
def two_sum(a: int, b: int) -> int:
    """Add two numbers and return the sum."""
    return a + b

The prompt of the two_sum tool will look like this:

two_sum: Call this tool to interact with the two_sum API. What is the two_sum API useful for? Add two numbers and return the sum. Parameters: [{"name": "a", "type": "integer", "description": "First number to add", "required": true}, {"name": "b", "type": "integer", "description": "Second number to add", "required": true}]

You can also use a Pydantic model to define the tool parameters via the args_schema parameter of the @tool decorator.

from pydantic import BaseModel, Field
from dbgpt.agent.resource import tool

class ArgsSchema(BaseModel):
    a: int = Field(description="The first number.")
    b: int = Field(description="The second number.")


@tool(args_schema=ArgsSchema)
def two_sum(a: int, b: int) -> int:
    """Add two numbers and return the sum."""
    return a + b

The prompt of the two_sum tool will look like this:

two_sum: Call this tool to interact with the two_sum API. What is the two_sum API useful for? Add two numbers and return the sum. Parameters: [{"name": "a", "type": "integer", "description": "The first number.", "required": true}, {"name": "b", "type": "integer", "description": "The second number.", "required": true}]

You can also use Annotated and Doc to define the tool parameters.

from typing_extensions import Annotated, Doc

from dbgpt.agent.resource import tool

@tool
def two_sum(
    a: Annotated[int, Doc("The first number.")],
    b: Annotated[int, Doc("The second number.")],
) -> int:
    """Add two numbers and return the sum."""
    return a + b

Run Function Tool Directly

Can I run the function tool directly? Yes, you can. Just call the function directly.

from typing_extensions import Annotated, Doc

from dbgpt.agent.resource import tool

@tool
def two_sum(
    a: Annotated[int, Doc("The first number.")],
    b: Annotated[int, Doc("The second number.")],
) -> int:
    """Add two numbers and return the sum."""
    return a + b


print("Result: ", two_sum(1, 2))

Run the code above, and you will see output like this:

Result:  3
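If you want the tool object rather than the plain function, the wrapped FunctionTool can also be executed. The sketch below assumes FunctionTool.execute simply forwards its arguments to the wrapped function, matching the BaseTool.execute signature shown later in this section:

# Assumption: FunctionTool.execute forwards *args/**kwargs to the wrapped function.
print("Result: ", two_sum._tool.execute(1, 2))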

Create Tool With FunctionTool Class

You can also create a tool with the FunctionTool class by passing a tool name and a function.

from dbgpt.agent.resource import FunctionTool


def two_sum(a: int, b: int) -> int:
    """Add two numbers and return the sum."""
    return a + b


tool = FunctionTool("two_sum", two_sum)


async def show_prompts():
    print(await tool.get_prompt())


if __name__ == "__main__":
    import asyncio

    asyncio.run(show_prompts())

Run the code above, and you will see output like this:

two_sum: Call this tool to interact with the two_sum API. What is the two_sum API useful for? Add two numbers and return the sum. Parameters: [{"name": "a", "type": "integer", "description": "A", "required": true}, {"name": "b", "type": "integer", "description": "B", "required": true}]

You can also pass the args, args_schema, name, and description parameters to FunctionTool to customize the tool, just like with the @tool decorator.
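For example, here is a sketch of customizing the name and description when constructing the tool; it assumes FunctionTool accepts the same keyword arguments as the @tool decorator, so check the signature in your version if it differs:

from dbgpt.agent.resource import FunctionTool


def add(a: int, b: int) -> int:
    return a + b


# Assumption: FunctionTool supports the same customization keywords as @tool.
tool = FunctionTool(
    "my_two_sum",
    add,
    description="Add two numbers and return the sum.",
)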

Writing Your Own Tools

FunctionTool is a subclass of BaseTool, and you can also create your own tool class by inheriting from BaseTool.

In the following example, we create a MyTwoSumTool class that adds two numbers and returns the sum.

First, prepare an internal function _two_sum that adds two numbers and returns the sum, and define the tool parameters _TWO_SUM_ARGS with the ToolParameter class.

from dbgpt.agent.resource import ToolParameter

def _two_sum(a: int, b: int) -> int:
    return a + b


_TWO_SUM_ARGS = {
    "a": ToolParameter(
        name="a",
        type="integer",
        required=True,
        description="First number to add",
    ),
    "b": ToolParameter(
        name="b",
        type="integer",
        required=True,
        description="Second number to add",
    ),
}

Then create the MyTwoSumTool class, inheriting from BaseTool, and implement the execute method.

from typing import Any, Dict, Optional
from dbgpt.agent.resource import BaseTool, ToolParameter

class MyTwoSumTool(BaseTool):
    def __init__(self, name: Optional[str] = None) -> None:
        self._name = name or "two_sum"
        self._args = _TWO_SUM_ARGS

    @property
    def name(self) -> str:
        """Return the name of the tool."""
        return self._name

    @property
    def description(self) -> str:
        """Return the description of the tool."""
        return "Add two numbers and return the sum."

    @property
    def args(self) -> Dict[str, ToolParameter]:
        """Return the tool parameters."""
        return self._args

    @property
    def is_async(self) -> bool:
        """Return whether the tool is asynchronous."""
        return False

    def execute(
        self,
        *args,
        resource_name: Optional[str] = None,
        **kwargs,
    ) -> Any:
        """Execute the tool synchronously."""
        return _two_sum(*args, **kwargs)

    async def async_execute(
        self,
        *args,
        resource_name: Optional[str] = None,
        **kwargs,
    ) -> Any:
        """Execute the tool asynchronously (not supported for this tool)."""
        raise ValueError("The function is not asynchronous")

Now you can show the prompt of MyTwoSumTool like this:

# Just create an instance of MyTwoSumTool
tool = MyTwoSumTool()

async def show_prompts():
    print(await tool.get_prompt())


if __name__ == "__main__":
    import asyncio

    asyncio.run(show_prompts())

Run the code above, and you will see output like this:

two_sum: Call this tool to interact with the two_sum API. What is the two_sum API useful for? Add two numbers and return the sum. Parameters: [{"name": "a", "type": "integer", "description": "First number to add", "required": true}, {"name": "b", "type": "integer", "description": "Second number to add", "required": true}]
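Since MyTwoSumTool implements a synchronous execute method, you can also call the tool directly:

tool = MyTwoSumTool()
print("Result: ", tool.execute(1, 2))
# Output: Result:  3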

Summary

In this section, we have introduced how to create tools with the @tool decorator and the FunctionTool class, how to customize the tool name, description, and parameters, and how to create your own tool class by inheriting from BaseTool.