Reasoning Tokens
For models that support them, the OpenRouter API can return reasoning tokens, also known as thinking tokens. OpenRouter normalizes the different ways that models customize how many reasoning tokens they use, providing a unified interface across providers.
Reasoning tokens provide a transparent look into the reasoning steps taken by a model. They are counted as output tokens and billed accordingly.
If the model decides to output reasoning tokens, they are included in the response by default. Reasoning tokens will appear in the reasoning field of each message, unless you choose to exclude them.
While most models and providers return reasoning tokens in the response, some (such as the OpenAI o-series and Gemini Flash Thinking) do not.
Controlling Reasoning Tokens
You can control reasoning tokens in your requests using the reasoning parameter:
{
"model": "your-model",
"messages": [],
"reasoning": {
// One of the following (not both):
"effort": "high", // Can be "high", "medium", or "low" (OpenAI-style)
"max_tokens": 2000, // Specific token limit (Anthropic-style)
// Optional: Default is false. All models support this.
"exclude": false // Set to true to exclude reasoning tokens from response
}
}
The reasoning config object consolidates the settings used to control reasoning strength across different models. See the notes on each option below for which models are supported and how other models behave.
Max Tokens for Reasoning
Currently supported by: Anthropic and Gemini thinking models
For models that support a reasoning token allocation, you can control it like this:
- "max_tokens": 2000 - Directly specifies the maximum number of tokens to use for reasoning.
For models that only support reasoning.effort (see below), the max_tokens value will be used to determine the effort level.
Reasoning Effort Level
Currently supported by: the OpenAI o-series
- "effort": "high" - Allocates a large portion of tokens for reasoning (approximately 80% of max_tokens)
- "effort": "medium" - Allocates a moderate portion of tokens (approximately 50% of max_tokens)
- "effort": "low" - Allocates a smaller portion of tokens (approximately 20% of max_tokens)
For models that only support reasoning.max_tokens, an effort setting is converted to a token budget using these same percentages, as sketched below.
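To make the mapping concrete, here is a minimal Python sketch of the approximate effort-to-budget conversion. The EFFORT_RATIOS table and helper name are illustrative assumptions based on the percentages above, not OpenRouter's actual implementation:
# Illustrative only: approximate share of max_tokens reserved per effort level.
EFFORT_RATIOS = {"high": 0.8, "medium": 0.5, "low": 0.2}

def approx_reasoning_budget(max_tokens: int, effort: str) -> int:
    """Estimate how many tokens a given effort level reserves for reasoning."""
    return int(max_tokens * EFFORT_RATIOS[effort])

print(approx_reasoning_budget(10000, "high"))  # ~8000 tokens for reasoning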
Excluding Reasoning Tokens
If you want the model to use reasoning internally but not include it in the response:
- "exclude": true - The model will still perform reasoning, but the reasoning content won't appear in the response.
Reasoning tokens are still billed as output tokens even when they are excluded from the response.
Legacy Parameters
For backward compatibility, OpenRouter still supports the following legacy parameters:
- include_reasoning: true - Equivalent to reasoning: {}
- include_reasoning: false - Equivalent to reasoning: { exclude: true }
However, we recommend using the new unified reasoning parameter instead, for finer-grained control and better forward compatibility.
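As a quick illustration of the equivalence, the legacy and unified payloads below request the same behavior ("your-model" is a placeholder):
# Legacy form: suppress reasoning in the response.
payload_legacy = {"model": "your-model", "messages": [], "include_reasoning": False}

# Equivalent unified form.
payload_unified = {"model": "your-model", "messages": [], "reasoning": {"exclude": True}}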
Examples
Basic Usage with Reasoning Tokens
Python:
import requests
import json
url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer <OPENROUTER_API_KEY>",
"Content-Type": "application/json"
}
payload = {
"model": "openai/o3-mini",
"messages": [
{"role": "user", "content": "How would you build the world's tallest skyscraper?"}
],
"reasoning": {
"effort": "high" # Use high reasoning effort
}
}
response = requests.post(url, headers=headers, data=json.dumps(payload))
print(response.json()['choices'][0]['message']['reasoning'])
TypeScript:
import OpenAI from 'openai';
const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
apiKey: '<OPENROUTER_API_KEY>',
});
async function getResponseWithReasoning() {
const response = await openai.chat.completions.create({
model: 'openai/o3-mini',
messages: [
{
role: 'user',
content: "How would you build the world's tallest skyscraper?",
},
],
reasoning: {
effort: 'high', // Use high reasoning effort
},
});
console.log('REASONING:', response.choices[0].message.reasoning);
console.log('CONTENT:', response.choices[0].message.content);
}
getResponseWithReasoning();
Using Max Tokens for Reasoning
For models that support a direct token allocation (like Anthropic models), you can specify the exact number of tokens to use for reasoning:
Python:
import requests
import json
url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer <OPENROUTER_API_KEY>",
"Content-Type": "application/json"
}
payload = {
"model": "anthropic/claude-3.7-sonnet",
"messages": [
{"role": "user", "content": "What's the most efficient algorithm for sorting a large dataset?"}
],
"reasoning": {
"max_tokens": 2000 # Allocate 2000 tokens (or approximate effort) for reasoning
}
}
response = requests.post(url, headers=headers, data=json.dumps(payload))
print(response.json()['choices'][0]['message']['reasoning'])
print(response.json()['choices'][0]['message']['content'])
TypeScript:
import OpenAI from 'openai';
const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
apiKey: '<OPENROUTER_API_KEY>',
});
async function getResponseWithReasoning() {
const response = await openai.chat.completions.create({
model: 'anthropic/claude-3.7-sonnet',
messages: [
{
role: 'user',
content: "How would you build the world's tallest skyscraper?",
},
],
reasoning: {
max_tokens: 2000, // Allocate 2000 tokens (or approximate effort) for reasoning
},
});
console.log('REASONING:', response.choices[0].message.reasoning);
console.log('CONTENT:', response.choices[0].message.content);
}
getResponseWithReasoning();
Excluding Reasoning Tokens from the Response
If you want the model to reason internally without including the reasoning in the response:
Python:
import requests
import json
url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer <OPENROUTER_API_KEY>",
"Content-Type": "application/json"
}
payload = {
"model": "deepseek/deepseek-r1",
"messages": [
{"role": "user", "content": "Explain quantum computing in simple terms."}
],
"reasoning": {
"effort": "high",
"exclude": true # Use reasoning but don't include it in the response
}
}
response = requests.post(url, headers=headers, data=json.dumps(payload))
# No reasoning field in the response
print(response.json()['choices'][0]['message']['content'])
TypeScript:
import OpenAI from 'openai';
const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
apiKey: '<OPENROUTER_API_KEY>',
});
async function getResponseWithReasoning() {
const response = await openai.chat.completions.create({
model: 'deepseek/deepseek-r1',
messages: [
{
role: 'user',
content: "How would you build the world's tallest skyscraper?",
},
],
reasoning: {
effort: 'high',
exclude: true, // Use reasoning but don't include it in the response
},
});
  // The reasoning field is omitted because exclude is set
  console.log('CONTENT:', response.choices[0].message.content);
}
getResponseWithReasoning();
Advanced Usage: Reasoning Chain-of-Thought
This example shows how to use reasoning tokens in a more complex workflow, injecting one model's reasoning into another model to improve its response quality:
Python:
import requests
import json
question = "Which is bigger: 9.11 or 9.9?"
url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
"Authorization": f"Bearer <OPENROUTER_API_KEY>",
"Content-Type": "application/json"
}
def do_req(model, content, reasoning_config=None):
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": content}
        ],
        "stop": "</think>",
        **(reasoning_config or {})  # Merge in any reasoning settings
    }
    return requests.post(url, headers=headers, data=json.dumps(payload))
# Get reasoning from a capable model
content = f"{question} Please think this through, but don't output an answer"
reasoning_response = do_req("deepseek/deepseek-r1", content)
reasoning = reasoning_response.json()['choices'][0]['message']['reasoning']
# Let's test! Here's the naive response:
simple_response = do_req("openai/gpt-4o-mini", question)
print(simple_response.json()['choices'][0]['message']['content'])
# Here's the response with the reasoning token injected:
content = f"{question}. Here is some context to help you: {reasoning}"
smart_response = do_req("openai/gpt-4o-mini", content)
print(smart_response.json()['choices'][0]['message']['content'])
TypeScript:
import OpenAI from 'openai';
const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: '<OPENROUTER_API_KEY>',
});
async function doReq(model, content, reasoningConfig) {
const payload = {
model,
messages: [{ role: 'user', content }],
stop: '</think>',
...reasoningConfig,
};
return openai.chat.completions.create(payload);
}
async function getResponseWithReasoning() {
const question = 'Which is bigger: 9.11 or 9.9?';
const reasoningResponse = await doReq(
'deepseek/deepseek-r1',
`${question} Please think this through, but don't output an answer`,
);
const reasoning = reasoningResponse.choices[0].message.reasoning;
// Let's test! Here's the naive response:
const simpleResponse = await doReq('openai/gpt-4o-mini', question);
console.log(simpleResponse.choices[0].message.content);
// Here's the response with the reasoning token injected:
const content = `${question}. Here is some context to help you: ${reasoning}`;
const smartResponse = await doReq('openai/gpt-4o-mini', content);
console.log(smartResponse.choices[0].message.content);
}
getResponseWithReasoning();
Provider-Specific Reasoning Implementation
Reasoning Tokens for Anthropic Models
The latest Claude models, such as anthropic/claude-3.7-sonnet, support working with and returning reasoning tokens.
You can enable reasoning on Anthropic models in two ways:
- Using the :thinking variant suffix (e.g., anthropic/claude-3.7-sonnet:thinking). This variant defaults to high-effort reasoning ("effort": "high"); a request sketch follows this list.
- Using the unified reasoning parameter, with either effort (a percentage of max_tokens) or max_tokens (a direct token allocation).
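For instance, a request using the :thinking variant needs no reasoning parameter at all, since the variant defaults to high effort. The sketch below assumes the same requests/url/headers setup as the earlier Python examples:
payload = {
    "model": "anthropic/claude-3.7-sonnet:thinking",  # Variant defaults to "effort": "high"
    "messages": [
        {"role": "user", "content": "What's the most efficient algorithm for sorting a large dataset?"}
    ]
}
response = requests.post(url, headers=headers, data=json.dumps(payload))
print(response.json()['choices'][0]['message']['reasoning'])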
Reasoning Max Tokens for Anthropic Models
When using reasoning with Anthropic models, note the following:
- The reasoning.max_tokens parameter: specifies the token budget directly, with a minimum of 1024.
- The :thinking variant or the reasoning.effort parameter: computes budget_tokens dynamically from max_tokens.
Details:
- Token allocation range: the reasoning budget is clamped between 1024 (minimum) and 32,000 (maximum) tokens.
- budget_tokens formula: budget_tokens = max(min(max_tokens * effort_ratio, 32000), 1024)
- effort_ratio values: high = 0.8, medium = 0.5, low = 0.2
Key constraint: max_tokens must be strictly greater than budget_tokens, so that tokens remain for the final response after reasoning. A sketch of this calculation follows below.
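The following sketch expresses the budget rule above in Python. The ratios and the 1024/32,000 clamp come from the rules in this section; the helper name is illustrative:
EFFORT_RATIOS = {"high": 0.8, "medium": 0.5, "low": 0.2}

def anthropic_budget_tokens(max_tokens: int, effort: str) -> int:
    """Clamp the computed reasoning budget to Anthropic's allowed range."""
    budget = int(max_tokens * EFFORT_RATIOS[effort])
    return max(min(budget, 32000), 1024)

# max_tokens must remain strictly greater than the returned budget,
# leaving room for the final (non-reasoning) response.
assert anthropic_budget_tokens(10000, "medium") == 5000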
Reasoning tokens are counted and billed as output tokens. Using reasoning increases token consumption, but it can significantly improve response quality.
Examples with Anthropic Models
Example 1: Streaming with Reasoning
Python:
from openai import OpenAI
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
api_key="<OPENROUTER_API_KEY>",
)
def chat_completion_with_reasoning(messages):
response = client.chat.completions.create(
model="anthropic/claude-3.7-sonnet",
messages=messages,
max_tokens=10000,
reasoning={
"max_tokens": 8000 # Directly specify reasoning token budget
},
stream=True
)
return response
for chunk in chat_completion_with_reasoning([
{"role": "user", "content": "What's bigger, 9.9 or 9.11?"}
]):
if hasattr(chunk.choices[0].delta, 'reasoning') and chunk.choices[0].delta.reasoning:
print(f"REASONING: {chunk.choices[0].delta.reasoning}")
elif chunk.choices[0].delta.content:
print(f"CONTENT: {chunk.choices[0].delta.content}")
TypeScript:
import OpenAI from 'openai';
const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: '<OPENROUTER_API_KEY>',
});
async function chatCompletionWithReasoning(messages) {
const response = await openai.chat.completions.create({
    model: 'anthropic/claude-3.7-sonnet',
messages,
    max_tokens: 10000,
    reasoning: {
      max_tokens: 8000, // Directly specify reasoning token budget
},
stream: true,
});
return response;
}
(async () => {
for await (const chunk of chatCompletionWithReasoning([
{ role: 'user', content: "What's bigger, 9.9 or 9.11?" },
])) {
if (chunk.choices[0].delta.reasoning) {
console.log(`REASONING: ${chunk.choices[0].delta.reasoning}`);
} else if (chunk.choices[0].delta.content) {
console.log(`CONTENT: ${chunk.choices[0].delta.content}`);
}
}
})();