Tool / Function Calling

Tool calls (also known as function calls) give an LLM access to external tools. The LLM does not call the tools directly. Instead, it suggests which tool to call. The user then calls the tool separately and provides the results back to the LLM. Finally, the LLM formats the response into an answer to the user's original question.

OpenRouter standardizes the tool calling interface across models and providers.

For a primer on how tool calling works in the OpenAI SDK, please see this article, or if you prefer to learn from a full end-to-end example, keep reading.

Tool Calling Example

Here is Python code that gives an LLM the ability to call an external API (in this case Project Gutenberg) to search for books.

import json, requests
from openai import OpenAI

OPENROUTER_API_KEY = "<OPENROUTER_API_KEY>"

# You can use any model that supports tool calling
MODEL = "google/gemini-2.0-flash-001"

openai_client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=OPENROUTER_API_KEY,
)

task = "What are the titles of some James Joyce books?"

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant."
    },
    {
        "role": "user",
        "content": task,
    }
]

Define the Tool

Next, we define the tool that we want to call. Remember, the tool is going to get requested by the LLM, but the code we are writing here is ultimately responsible for executing the call and returning the results to the LLM.

def search_gutenberg_books(search_terms):
    search_query = " ".join(search_terms)
    url = "https://gutendex.com/books"
    response = requests.get(url, params={"search": search_query})

    simplified_results = []
    for book in response.json().get("results", []):
        simplified_results.append({
            "id": book.get("id"),
            "title": book.get("title"),
            "authors": book.get("authors")
        })

    return simplified_results

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_gutenberg_books",
            "description": "Search for books in the Project Gutenberg library based on specified search terms",
            "parameters": {
                "type": "object",
                "properties": {
                    "search_terms": {
                        "type": "array",
                        "items": {
                            "type": "string"
                        },
                        "description": "List of search terms to find books in the Gutenberg library (e.g. ['dickens', 'great'] to search for books by Dickens with 'great' in the title)"
                    }
                },
                "required": ["search_terms"]
            }
        }
    }
]

TOOL_MAPPING = {
    "search_gutenberg_books": search_gutenberg_books
}

Note that the "tool" is just a normal function. We then write a JSON "spec" compatible with the OpenAI function calling parameter. We'll pass that spec to the LLM so that it knows this tool is available and how to use it. It will request the tool when needed, along with any arguments. We'll then marshal the tool call locally, make the function call, and return the results to the LLM.
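Concretely, the marshalling step is just JSON decoding followed by keyword unpacking. A minimal, self-contained sketch (using a hypothetical `echo_terms` stand-in rather than the real Gutenberg lookup):

```python
import json

# The model sends tool arguments as a JSON string, for example:
raw_arguments = '{"search_terms": ["james", "joyce"]}'

def echo_terms(search_terms):
    # Hypothetical stand-in for search_gutenberg_books: just joins the terms
    return " ".join(search_terms)

# Decode the JSON string, then unpack it as keyword arguments
kwargs = json.loads(raw_arguments)
result = echo_terms(**kwargs)  # "james joyce"
```

This is why the spec's parameter names must match the local function's signature exactly: the decoded dict is passed straight through as keyword arguments.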

Tool use and tool results

Let's make the first OpenRouter API call to the model:

request_1 = {
    "model": MODEL,
    "tools": tools,
    "messages": messages
}

response_1 = openai_client.chat.completions.create(**request_1).choices[0].message

The LLM responds with a finish reason of tool_calls, and a tool_calls array. In a generic LLM response handler, you would want to check the finish reason before processing tool calls, but here we will assume that's the case. Let's keep going, by processing the tool call:

# Append the response to the messages array so the LLM has the full context
# It's easy to forget this step!
messages.append(response_1)

# Now we process the requested tool calls, and use our book lookup tool
for tool_call in response_1.tool_calls:
    '''
    In this case we only provided one tool, so we know what function to call.
    When providing multiple tools, you can inspect `tool_call.function.name`
    to figure out what function you need to call locally.
    '''
    tool_name = tool_call.function.name
    tool_args = json.loads(tool_call.function.arguments)
    tool_response = TOOL_MAPPING[tool_name](**tool_args)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "name": tool_name,
        "content": json.dumps(tool_response),
    })

The messages array now has:

  1. Our original request
  2. The LLM's response (containing a tool call request)
  3. The result of the tool call (a JSON object returned from the Project Gutenberg API)
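Schematically, the conversation now has this shape (contents abridged; only the roles and keys matter here):

```python
messages_shape = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What are the titles of some James Joyce books?"},
    {"role": "assistant", "tool_calls": ["<the tool call request>"]},
    {"role": "tool", "tool_call_id": "<id>", "name": "search_gutenberg_books",
     "content": "<JSON-encoded results>"},
]
```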

Now, we can make a second OpenRouter API call, and hopefully get our result!

request_2 = {
    "model": MODEL,
    "messages": messages,
    "tools": tools
}

response_2 = openai_client.chat.completions.create(**request_2)

print(response_2.choices[0].message.content)

The output will look something like:

Here are some books by James Joyce:

* *Ulysses*
* *Dubliners*
* *A Portrait of the Artist as a Young Man*
* *Chamber Music*
* *Exiles: A Play in Three Acts*

We did it! We've successfully used a tool in a prompt.

A Simple Agentic Loop

In the example above, the calls are made explicitly and sequentially. To handle a wide variety of user inputs and tool calls, you can use an agentic loop.

Here's an example of a simple agentic loop (using the same `tools` and initial `messages` as above):


def call_llm(msgs):
    resp = openai_client.chat.completions.create(
        model=MODEL,
        tools=tools,
        messages=msgs
    )
    msgs.append(resp.choices[0].message.dict())
    return resp

def get_tool_response(response):
    tool_call = response.choices[0].message.tool_calls[0]
    tool_name = tool_call.function.name
    tool_args = json.loads(tool_call.function.arguments)

    # Look up the correct tool locally, and call it with the provided arguments
    # Other tools can be added without changing the agentic loop
    tool_result = TOOL_MAPPING[tool_name](**tool_args)

    return {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "name": tool_name,
        "content": json.dumps(tool_result),
    }

while True:
    resp = call_llm(messages)

    if resp.choices[0].message.tool_calls is not None:
        messages.append(get_tool_response(resp))
    else:
        break

print(messages[-1]['content'])