Press Ctrl+Shift+J (Windows) / Cmd+Shift+J (macOS) to open Cursor Settings.
ap-xxxxxxxxxxxx (your API Key)
https://api.apolloinn.site/v1
Kiro-Sonnet-4-6, then click the ADD button to confirm.
| Scenario | Setup |
|---|---|
| ✅ Have your own Cursor Pro account | Follow the steps above to enter the API Key and add the model; no extra steps required |
| ❌ No Cursor Pro account | Download Apollo Agent, double-click to run it, and follow the prompts for one-click setup |
https://api.apolloinn.site/v1
ap-xxxxxxxxxxxx (your API Key)
claude-sonnet-4.6
| Setting | Value |
|---|---|
| API Provider | OpenAI Compatible |
| Base URL | https://api.apolloinn.site/v1 |
| API Key | ap-xxx (your Key) |
| Recommended model | claude-sonnet-4.6 |
~/.continue/config.yaml
name: Apollo Gateway
version: 1.0.0
schema: v1
models:
  - name: Claude Sonnet 4.6
    provider: openai
    model: claude-sonnet-4.6
    apiKey: ap-your-api-key
    apiBase: https://api.apolloinn.site/standard/v1
  - name: Claude Opus 4.6
    provider: openai
    model: claude-opus-4.6
    apiKey: ap-your-api-key
    apiBase: https://api.apolloinn.site/standard/v1
  - name: Claude Haiku 4.5
    provider: openai
    model: claude-haiku-4.5
    apiKey: ap-your-api-key
    apiBase: https://api.apolloinn.site/standard/v1
The apiBase here uses the /standard/v1 endpoint (standard OpenAI-compatible), not the Cursor-specific /v1 endpoint.
npm install -g @openai/codex
Write the following configuration to ~/.codex/config.toml:
model = "apollo/claude-sonnet-4.6"
[model_providers.apollo]
name = "Apollo Gateway"
base_url = "https://api.apolloinn.site/standard/v1"
env_key = "APOLLO_API_KEY"
wire_api = "chat"
mkdir -p ~/.codex
cat > ~/.codex/config.toml << 'EOF'
model = "apollo/claude-sonnet-4.6"
[model_providers.apollo]
name = "Apollo Gateway"
base_url = "https://api.apolloinn.site/standard/v1"
env_key = "APOLLO_API_KEY"
wire_api = "chat"
EOF
env_key specifies which environment variable the Key is read from, so set that variable:
echo 'export APOLLO_API_KEY="ap-your-api-key"' >> ~/.zshrc
source ~/.zshrc
$env:APOLLO_API_KEY="ap-your-api-key"
On bash, replace ~/.zshrc with ~/.bashrc. Windows users who need persistence can set it through the system environment variables instead.
cd ~/your-project
codex
To switch models, run codex --model apollo/claude-opus-4.6; the model name format is provider/model.
| Setting | Value |
|---|---|
| Config file | ~/.codex/config.toml |
| base_url | https://api.apolloinn.site/standard/v1 |
| env_key environment variable | APOLLO_API_KEY=ap-xxx |
| wire_api | chat (OpenAI Chat protocol) |
| Model format | apollo/<model> (e.g. apollo/claude-sonnet-4.6) |
npm install -g opencode-ai
Write the following to ~/.config/opencode/config.json:
{
  "provider": {
    "openai": {
      "apiKey": "ap-your-api-key",
      "baseURL": "https://api.apolloinn.site/standard/v1"
    }
  },
  "model": "openai/claude-sonnet-4.6"
}
export OPENAI_API_KEY="ap-your-api-key"
export OPENAI_BASE_URL="https://api.apolloinn.site/standard/v1"
opencode
| Setting | Value |
|---|---|
| baseURL / OPENAI_BASE_URL | https://api.apolloinn.site/standard/v1 |
| apiKey / OPENAI_API_KEY | ap-xxx (your Key) |
| Recommended model | openai/claude-sonnet-4.6 |
npm install -g @anthropic-ai/claude-code
Create ~/.claude/settings.json and write the following:
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "ap-your-api-key",
    "ANTHROPIC_BASE_URL": "https://api.apolloinn.site",
    "ANTHROPIC_MODEL": "claude-opus-4.6",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-opus-4.6",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "claude-opus-4.6",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "claude-opus-4.6"
  }
}
mkdir -p ~/.claude
cat > ~/.claude/settings.json << 'EOF'
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "ap-your-api-key",
    "ANTHROPIC_BASE_URL": "https://api.apolloinn.site",
    "ANTHROPIC_MODEL": "claude-opus-4.6",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-opus-4.6",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "claude-opus-4.6",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "claude-opus-4.6"
  }
}
EOF
claude-sonnet-4.6, claude-haiku-4.5, etc.
cat > ~/.claude/.credentials.json << 'EOF'
{"apiKeyHelper":{"apiKey":"sk-placeholder"}}
EOF
echo 'export ANTHROPIC_API_KEY=""' >> ~/.zshrc
source ~/.zshrc
On bash, replace ~/.zshrc with ~/.bashrc.
ANTHROPIC_API_KEY="" claude -p "hello"
cd ~/your-project
ANTHROPIC_API_KEY="" claude
| Setting | Value |
|---|---|
| ANTHROPIC_BASE_URL | https://api.apolloinn.site |
| ANTHROPIC_AUTH_TOKEN | ap-xxx (your Key) |
| Config file | ~/.claude/settings.json |
| Credentials file | ~/.claude/.credentials.json |
| Recommended model | claude-opus-4.6 (change as needed) |
https://api.apolloinn.site (without /v1). It can also be replaced with the US-West node api2.apolloinn.site or the Japan node api3.apolloinn.site.
| Node | Address |
|---|---|
| US-East (default) | https://api.apolloinn.site |
| US-West | https://api2.apolloinn.site |
| Japan | https://api3.apolloinn.site |
| Path | Base URL | Protocol | Use case |
|---|---|---|---|
| Cursor-specific | /v1 | OpenAI | Cursor IDE; automatically handles thinking tags, retries, and keep-alive |
| Standard OpenAI | /standard/v1 | OpenAI | Generic OpenAI-compatible clients (Cline, ChatBox, LobeChat, etc.) |
| Anthropic native | no path suffix | Anthropic | Anthropic SDK clients (Claude Code CLI, etc.) |
The OpenAI protocol uses Authorization: Bearer <key>; the Anthropic protocol uses x-api-key: <key>.
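The difference can be sketched as a small helper that builds the right auth header for each protocol family (a hypothetical convenience function, not part of any SDK):

```python
def auth_headers(api_key: str, protocol: str) -> dict:
    """Build the auth header for the given protocol family."""
    if protocol == "anthropic":
        # Anthropic-native endpoints read the key from x-api-key
        return {"x-api-key": api_key}
    # OpenAI-style endpoints (/v1 and /standard/v1) use a Bearer token
    return {"Authorization": f"Bearer {api_key}"}

print(auth_headers("ap-xxx", "openai"))
# {'Authorization': 'Bearer ap-xxx'}
```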
Returns a list of all currently available models.
curl https://api.apolloinn.site/v1/models \
-H "Authorization: Bearer your-api-key"
from openai import OpenAI
client = OpenAI(api_key="your-api-key", base_url="https://api.apolloinn.site/v1")
for m in client.models.list().data:
    print(m.id)
reasoning_content is converted into the <think> tag format.
from openai import OpenAI
client = OpenAI(api_key="your-api-key", base_url="https://api.apolloinn.site/v1")
response = client.chat.completions.create(
    model="claude-sonnet-4.6",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True
)
for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
from openai import OpenAI
client = OpenAI(api_key="your-api-key", base_url="https://api.apolloinn.site/v1")
tools = [{"type": "function", "function": {"name": "get_weather",
    "description": "Get the weather for a city",
    "parameters": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]}}}]
resp = client.chat.completions.create(
    model="claude-sonnet-4.6",
    messages=[{"role": "user", "content": "What's the weather in Beijing?"}],
    tools=tools
)
if resp.choices[0].message.tool_calls:
    tc = resp.choices[0].message.tool_calls[0]
    print(tc.function.name, tc.function.arguments)
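To complete the round trip, the tool result is appended to the history as a `tool` message and the conversation is sent again. The `get_weather` implementation below is a stand-in; only the message shapes follow the standard OpenAI tool-calling convention (tool calls shown as plain dicts for clarity):

```python
import json

def get_weather(city: str) -> str:
    # Stand-in for a real weather lookup
    return json.dumps({"city": city, "weather": "sunny"})

def append_tool_result(messages, tool_call):
    """Append the assistant's tool_call turn and the matching tool result turn."""
    args = json.loads(tool_call["function"]["arguments"])
    result = get_weather(**args)
    messages.append({"role": "assistant", "tool_calls": [tool_call]})
    messages.append({"role": "tool",
                     "tool_call_id": tool_call["id"],
                     "content": result})
    return messages

history = [{"role": "user", "content": "What's the weather in Beijing?"}]
call = {"id": "call_1", "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Beijing"}'}}
append_tool_result(history, call)
print(history[-1]["role"])  # tool
```

The extended `history` is then passed back to `client.chat.completions.create(...)` so the model can produce its final answer.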
The reasoning_mode parameter controls how thinking content is returned.
| reasoning_mode | Behavior |
|---|---|
| "drop" | Default; the thinking process is discarded and only the final answer is returned |
| "reasoning_content" | Thinking content is placed in the reasoning_content field, matching the OpenAI o1 format |
| "content" | Thinking content is wrapped in <think>...</think> and concatenated into content |
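With `"content"` mode, the client has to split the `<think>` block out of the returned text itself. A minimal regex-based split might look like this (the response text here is a made-up sample):

```python
import re

def split_think(text: str):
    """Separate a leading <think>...</think> block from the final answer."""
    m = re.match(r"\s*<think>(.*?)</think>\s*(.*)", text, re.DOTALL)
    if m:
        return m.group(1).strip(), m.group(2).strip()
    return "", text.strip()

thinking, answer = split_think("<think>25*37 = 925</think>The answer is 925.")
print(thinking)  # 25*37 = 925
print(answer)    # The answer is 925.
```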
Controls whether the message context is automatically compressed:
| context_compression | Behavior |
|---|---|
| true | Default. When the total message token count approaches the model's context window, earlier messages are compressed automatically to avoid overflow |
| false | Compression off; messages are passed through unchanged. Suited to clients that manage context length themselves |
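Since `context_compression` is a gateway-specific field rather than a standard OpenAI parameter, with the Python SDK it would be sent via `extra_body`, analogous to `reasoning_mode`. A sketch of the resulting request body:

```python
# Request body as it would reach /standard/v1/chat/completions;
# context_compression rides alongside the standard OpenAI fields.
payload = {
    "model": "claude-sonnet-4.6",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,
    "context_compression": False,  # pass messages through untouched
}

# With the OpenAI SDK, the same effect is achieved via
# extra_body={"context_compression": False}.
print(payload["context_compression"])  # False
```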
curl -X POST https://api.apolloinn.site/standard/v1/chat/completions \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{"model":"claude-sonnet-4.6","messages":[{"role":"user","content":"Hello!"}],"stream":false}'
from openai import OpenAI
client = OpenAI(api_key="your-api-key", base_url="https://api.apolloinn.site/standard/v1")
response = client.chat.completions.create(
    model="claude-sonnet-4.6",
    messages=[{"role": "user", "content": "What is 25 * 37?"}],
    stream=False,
    extra_body={"reasoning_mode": "reasoning_content"}
)
msg = response.choices[0].message
print("Thinking:", getattr(msg, "reasoning_content", ""))
print("Answer:", msg.content)
from openai import OpenAI
client = OpenAI(api_key="your-api-key", base_url="https://api.apolloinn.site/standard/v1")
response = client.chat.completions.create(
    model="claude-sonnet-4.6",
    messages=[{"role": "user", "content": "Explain quantum entanglement"}],
    stream=True,
    extra_body={"reasoning_mode": "reasoning_content"}
)
for chunk in response:
    delta = chunk.choices[0].delta
    if hasattr(delta, "reasoning_content") and delta.reasoning_content:
        print(delta.reasoning_content, end="", flush=True)
    if delta.content:
        print(delta.content, end="", flush=True)
import OpenAI from "openai";
const client = new OpenAI({ apiKey:"your-api-key", baseURL:"https://api.apolloinn.site/standard/v1" });
const response = await client.chat.completions.create({
  model: "claude-sonnet-4.6",
  messages: [{ role: "user", content: "Hello!" }],
  stream: true,
});
for await (const chunk of response) {
  const c = chunk.choices[0]?.delta?.content;
  if (c) process.stdout.write(c);
}
from openai import OpenAI
client = OpenAI(api_key="your-api-key", base_url="https://api.apolloinn.site/standard/v1")
response = client.chat.completions.create(
    model="claude-sonnet-4.6",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
from openai import OpenAI
client = OpenAI(api_key="your-api-key", base_url="https://api.apolloinn.site/standard/v1")
response = client.chat.completions.create(
    model="claude-sonnet-4.6",
    messages=[{"role": "user", "content": "Write a poem about spring"}],
    stream=True
)
for chunk in response:
    c = chunk.choices[0].delta.content
    if c: print(c, end="", flush=True)
import OpenAI from "openai";
const client = new OpenAI({ apiKey:"your-api-key", baseURL:"https://api.apolloinn.site/standard/v1" });
const response = await client.chat.completions.create({
  model: "claude-sonnet-4.6",
  messages: [{ role: "user", content: "Hello!" }]
});
console.log(response.choices[0].message.content);
Either x-api-key or Authorization: Bearer is accepted for authentication.
| Feature | Notes |
|---|---|
| Thinking | The model's reasoning is returned as native thinking content blocks; clients can collapse or display it as they see fit |
| Images | base64-format image content blocks are supported and fully preserved through the compression path |
| Tool calls | Supports the Anthropic-native tool_use / tool_result format |
| Context compression | Always on; long conversations are compressed automatically |
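Because thinking arrives as its own content block, a client can separate reasoning from the answer simply by filtering on block type. Sketched below with plain dicts shaped like Messages API content blocks (an SDK response would expose the same fields as attributes):

```python
def split_blocks(content):
    """Split Messages-API-style content blocks into thinking and text parts."""
    thinking = [b["thinking"] for b in content if b["type"] == "thinking"]
    text = [b["text"] for b in content if b["type"] == "text"]
    return "".join(thinking), "".join(text)

blocks = [
    {"type": "thinking", "thinking": "Let me think..."},
    {"type": "text", "text": "Hello!"},
]
thinking, answer = split_blocks(blocks)
print(answer)  # Hello!
```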
curl -X POST https://api.apolloinn.site/v1/messages \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{"model":"claude-sonnet-4.6","max_tokens":1024,"messages":[{"role":"user","content":"Hello!"}]}'
import anthropic
client = anthropic.Anthropic(api_key="your-api-key", base_url="https://api.apolloinn.site")
msg = client.messages.create(
    model="claude-sonnet-4.6", max_tokens=1024,
    system="You are a helpful assistant.",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(msg.content[0].text)
import anthropic
client = anthropic.Anthropic(api_key="your-api-key", base_url="https://api.apolloinn.site")
with client.messages.stream(
    model="claude-sonnet-4.6", max_tokens=1024,
    messages=[{"role": "user", "content": "Write a poem about spring"}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic({ apiKey:"your-api-key", baseURL:"https://api.apolloinn.site" });
const msg = await client.messages.create({
  model: "claude-sonnet-4.6", max_tokens: 1024,
  messages: [{ role: "user", content: "Hello!" }]
});
console.log(msg.content[0].text);
event: message_start
data: {"type":"message_start","message":{"id":"msg_xxx","type":"message","role":"assistant","model":"claude-sonnet-4-20250514",...}}
event: content_block_start
data: {"type":"content_block_start","index":0,"content_block":{"type":"thinking","thinking":""}}
event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"thinking_delta","thinking":"Let me think..."}}
event: content_block_stop
data: {"type":"content_block_stop","index":0}
event: content_block_start
data: {"type":"content_block_start","index":1,"content_block":{"type":"text","text":""}}
event: content_block_delta
data: {"type":"content_block_delta","index":1,"delta":{"type":"text_delta","text":"Hello!"}}
event: content_block_stop
data: {"type":"content_block_stop","index":1}
event: message_delta
data: {"type":"message_delta","delta":{"stop_reason":"end_turn"},"usage":{"output_tokens":50}}
event: message_stop
data: {"type":"message_stop"}
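A raw SSE consumer only needs to pair each `event:` line with the `data:` line that follows it. A minimal standalone parser for a stream like the one above (sketch, no SDK):

```python
import json

def parse_sse(raw: str):
    """Yield (event, payload) pairs from an SSE stream body."""
    event = None
    for line in raw.splitlines():
        if line.startswith("event: "):
            event = line[len("event: "):]
        elif line.startswith("data: ") and event:
            yield event, json.loads(line[len("data: "):])
            event = None

stream = (
    "event: content_block_delta\n"
    'data: {"type":"content_block_delta","index":1,'
    '"delta":{"type":"text_delta","text":"Hello!"}}\n'
)
for event, data in parse_sse(stream):
    if data["delta"]["type"] == "text_delta":
        print(data["delta"]["text"], end="")  # Hello!
```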
| Standard name (usable in other tools) | Kiro alias (required for Cursor) | Tier |
|---|---|---|
| Claude-Opus-4.6 | Kiro-Opus-4-6 | Flagship |
| Claude-Opus-4.5 | Kiro-Opus-4-5 | Flagship |
| Claude-Sonnet-4.6 | Kiro-Sonnet-4-6 | Balanced |
| Claude-Sonnet-4.5 | Kiro-Sonnet-4-5 | Balanced |
| Claude-Sonnet-4 | Kiro-Sonnet-4 | Balanced |
| Claude-Haiku-4.5 | Kiro-Haiku-4-5 | Fast |
| — | Kiro-Haiku | Fast |
| — | Kiro-Auto | Auto |
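The alias pattern in the table is regular: drop the Claude- prefix, swap dots for dashes, and prefix Kiro-. A small helper capturing that convention (it covers only the standard-named rows above; Kiro-Haiku and Kiro-Auto have no standard counterpart):

```python
def kiro_alias(standard_name: str) -> str:
    """Map a standard model name to its Kiro alias per the table above."""
    suffix = standard_name.removeprefix("Claude-").replace(".", "-")
    return f"Kiro-{suffix}"

print(kiro_alias("Claude-Sonnet-4.6"))  # Kiro-Sonnet-4-6
print(kiro_alias("Claude-Haiku-4.5"))   # Kiro-Haiku-4-5
```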
| Use case | Recommended endpoint |
|---|---|
| Cursor IDE integration | POST /v1/chat/completions |
| Your own app (OpenAI SDK) | POST /standard/v1/chat/completions |
| Need the thinking process | Standard endpoint + reasoning_mode: "reasoning_content" |
| Existing Anthropic SDK project | POST /v1/messages |
| List available models | GET /v1/models |
| Status code | Meaning | Suggested handling |
|---|---|---|
| 401 | API Key invalid or missing | Check that the Authorization header is correct |
| 403 | Account disabled or out of quota | Contact the administrator |
| 422 | Malformed request parameters | Check that the request body matches the spec |
| 429 | Rate limit exceeded | Reduce request frequency and add retry logic |
| 503 | Service temporarily unavailable (no tokens available) | Retry later |
import httpx, time

def chat_with_retry(messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            resp = httpx.post(
                "https://api.apolloinn.site/standard/v1/chat/completions",
                headers={"Authorization": "Bearer your-api-key"},
                json={"model": "claude-sonnet-4.6", "messages": messages, "stream": False},
                timeout=60
            )
            if resp.status_code == 429:
                wait = 2 ** attempt  # exponential backoff
                print(f"Rate limited, retrying in {wait}s...")
                time.sleep(wait)
                continue
            if resp.status_code == 503:
                print("Service temporarily unavailable, retrying in 5s...")
                time.sleep(5)
                continue
            resp.raise_for_status()
            return resp.json()
        except httpx.HTTPStatusError as e:
            print(f"HTTP error: {e.response.status_code}")
            raise
    raise Exception("Exceeded maximum retries")