One endpoint. Any agent.
ChiefLab is a hosted JSON-RPC endpoint at chieflab.io/api/mcp. Anything that can fetch() can call it.
Install first: pick your runtime below. Install completes anonymously — your agent mints a key on first call — or grab a key upfront if you'd rather skip the agent flow.
⚡ One-click install
Install completes anonymously. Your agent mints a key on first call via chieflab_signup_workspace. CLI: npm i -g @chieflab/cli && chieflab login.
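If you'd rather script the anonymous first call yourself, it's a plain JSON-RPC tools/call to chieflab_signup_workspace with no Authorization header. A minimal sketch — the arguments object (a workspace name) is an assumption, not a documented schema; check the tool's actual parameters via chieflab tools:

```javascript
// Build the JSON-RPC envelope for the anonymous first call.
// NOTE: the arguments shape is an assumption — verify the real
// schema with `chieflab tools` before relying on it.
function signupEnvelope(workspaceName) {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",
    params: {
      name: "chieflab_signup_workspace",
      arguments: { name: workspaceName } // assumed parameter
    }
  };
}

// No Authorization header yet — this call mints the key.
const body = JSON.stringify(signupEnvelope("my-workspace"));
// await fetch("https://chieflab.io/api/mcp", { method: "POST",
//   headers: { "Content-Type": "application/json" }, body });
```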
Pick your runtime
CLI — Friendliest path. One npm install, then drive the closed-loop launch flow from your shell. Zero deps, ~5s install. · npm i -g @chieflab/cli
# Install once
npm install -g @chieflab/cli
# Save your key
chieflab login
# Drive the closed loop end-to-end
chieflab launch yoursite.com --channels linkedin,x,email
# → returns launchId + signed reviewUrl
# user approves on the reviewUrl, then:
chieflab publish <actionId> --content "..." \
  --platform linkedin --account zer_acc_xxx
chieflab send-email <actionId> --from "Brand <[email protected]>" \
  --to [email protected] --subject "Launch day" --html "<h1>...</h1>"
chieflab review <runId> # 24h post-launch metrics + next-move brief
# Or just verify your setup
chieflab whoami
chieflab connections
chieflab tools       # all 72 MCP tools available on this key
Cursor — Native MCP support + one-click install deeplink. 60-second setup via Settings → MCP, ~/.cursor/mcp.json, or the cursor:// install button. · MCP (hosted url or stdio)
// ~/.cursor/mcp.json
{
  "mcpServers": {
    "chieflab": {
      "url": "https://chieflab.io/api/mcp",
      "headers": {
        "Authorization": "Bearer clp_dev_..."
      }
    }
  }
}
// Restart Cursor. In any chat:
// "Use ChiefMO to launch this product."
// Cursor's agent picks chiefmo_diagnose_marketing automatically,
// reads the brief, and renders the final copy in chat.
Claude Desktop — Native MCP support. Add to claude_desktop_config.json, restart, done. · MCP (hosted url or stdio)
// macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
// Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "chieflab": {
      "url": "https://chieflab.io/api/mcp",
      "headers": {
        "Authorization": "Bearer clp_dev_..."
      }
    }
  }
}
// Quit Claude Desktop fully and reopen. ChiefLab tools appear in the
// input area. Ask: "Use ChiefMO to launch this product."
Node script / web app — Plain fetch. No MCP client required. Drop into any Node service, Next.js route, edge function. · HTTPS POST
// Node 20+ (built-in fetch)
const r = await fetch("https://chieflab.io/api/mcp", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.CHIEFLAB_API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    jsonrpc: "2.0", id: 1, method: "tools/call",
    params: {
      name: "chiefmo_diagnose_marketing",
      arguments: {
        goal: "Launch this product and get its first 100 users",
        tenantId: "acme-co"
      }
    }
  })
});
const { result } = await r.json();
const payload = JSON.parse(result.content[0].text);
// payload.assets[] are drafting briefs — pass to your LLM to render
// payload.reviewUrl — open in browser for approval flow
Python — httpx / requests. Drop into any FastAPI, Flask, LangChain, LlamaIndex, Pydantic AI agent. · HTTPS POST
import os, json, httpx
r = httpx.post(
    "https://chieflab.io/api/mcp",
    headers={"Authorization": f"Bearer {os.environ['CHIEFLAB_API_KEY']}"},
    json={
        "jsonrpc": "2.0", "id": 1, "method": "tools/call",
        "params": {
            "name": "chiefmo_diagnose_marketing",
            "arguments": {
                "goal": "Launch this product",
                "tenantId": "acme-co"
            }
        }
    },
    timeout=60.0
)
payload = json.loads(r.json()["result"]["content"][0]["text"])
# payload["assets"] are briefs — pass to your LLM to render
# payload["reviewUrl"] — surface to your end-user for approval
Telegram bot — Inside your webhook handler, fetch the endpoint. Use chat ID as tenantId for per-user context. · HTTPS POST
// inside your Telegram bot webhook handler
import { Telegraf } from "telegraf";
const bot = new Telegraf(process.env.TELEGRAM_TOKEN);
bot.command("launch", async (ctx) => {
  const r = await fetch("https://chieflab.io/api/mcp", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.CHIEFLAB_API_KEY}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      jsonrpc: "2.0", id: 1, method: "tools/call",
      params: {
        name: "chiefmo_diagnose_marketing",
        arguments: {
          goal: ctx.message.text,
          // Per-user context: each Telegram user is a tenant
          tenantId: `tg-${ctx.from.id}`
        }
      }
    })
  });
  const { result } = await r.json();
  const payload = JSON.parse(result.content[0].text);
  // Your LLM (or a simple message) renders the brief
  await ctx.reply(`Brief ready. Review here: ${payload.reviewUrl}`);
  // For richer rendering, pass payload.assets[] to OpenAI/Anthropic
  // and post the rendered text back to Telegram.
});
Vapi / voice agents — Register ChiefLab as a function call. Vapi runtime hits the endpoint; your voice agent's LLM speaks the rendered output back. · HTTPS POST (function call)
// Vapi function definition (in your assistant config)
{
  "name": "chiefmo_diagnose_marketing",
  "description": "Get a marketing brief for the caller's product. Returns positioning, brand context, drafting prompts the assistant can speak back.",
  "parameters": {
    "type": "object",
    "properties": {
      "goal": { "type": "string" },
      "tenantId": { "type": "string", "description": "Use the caller's phone number" }
    }
  },
  "server": {
    "url": "https://chieflab.io/api/mcp",
    "headers": {
      "Authorization": "Bearer YOUR_CHIEFLAB_KEY"
    },
    "method": "POST"
  }
}
// The Vapi runtime translates a function call into a JSON-RPC call to
// chieflab.io/api/mcp. The voice agent's LLM (Vapi → OpenAI/Anthropic)
// reads the brief and speaks the rendered marketing back to the caller.
// No glue code required.
LangChain / LangGraph — Wrap as a Tool. The agent's LLM gets a brief, renders the final output. · HTTPS POST
from langchain.tools import Tool
import httpx, json, os
def chiefmo_brief(query: str) -> str:
    """Get a marketing brief from ChiefMO. Returns brand context + drafting prompts the agent's LLM should render."""
    r = httpx.post(
        "https://chieflab.io/api/mcp",
        headers={"Authorization": f"Bearer {os.environ['CHIEFLAB_API_KEY']}"},
        json={
            "jsonrpc": "2.0", "id": 1, "method": "tools/call",
            "params": {
                "name": "chiefmo_diagnose_marketing",
                "arguments": {"goal": query, "tenantId": "default"}
            }
        }
    )
    payload = json.loads(r.json()["result"]["content"][0]["text"])
    # Return assets as a single text block for the LLM to consume
    briefs = "\n\n---\n\n".join(a["body"] for a in payload["assets"])
    return f"REVIEW URL: {payload['reviewUrl']}\n\n{briefs}"

chiefmo_tool = Tool(
    name="chiefmo_diagnose_marketing",
    description="Get marketing brief: brand context + drafting prompts. Then render the briefs into the user's final marketing copy.",
    func=chiefmo_brief
)
# Add chiefmo_tool to any LangChain / LangGraph agent. The agent's LLM
# reads the brief and renders the final copy itself — no extra ChiefLab
# server-side LLM cost.
Custom agent / SDK — OpenClaw, Hermes, Manus, Devin, Replit Agent, your own SDK — anything with HTTPS works. · HTTPS POST
Same shape as the others — POST JSON-RPC with a Bearer token. See the Node example as a starting point.
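That shape can be captured in two tiny helpers — one to build the JSON-RPC envelope, one to unwrap the double-encoded tool output (JSON text inside result.content[0].text, as in the Node example). A sketch; buildCall and unwrap are illustrative names, not a ChiefLab SDK:

```javascript
// Generic ChiefLab caller for any custom agent / SDK.
function buildCall(toolName, args, id = 1) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name: toolName, arguments: args }
  };
}

// Tool output arrives as JSON text inside result.content[0].text —
// parse twice: once for the JSON-RPC envelope, once for the payload.
function unwrap(rpcResponse) {
  return JSON.parse(rpcResponse.result.content[0].text);
}

// Usage (network call elided):
// const res = await fetch("https://chieflab.io/api/mcp", {
//   method: "POST",
//   headers: {
//     "Authorization": `Bearer ${process.env.CHIEFLAB_API_KEY}`,
//     "Content-Type": "application/json"
//   },
//   body: JSON.stringify(buildCall("chiefmo_diagnose_marketing",
//     { goal: "Launch this product", tenantId: "acme-co" }))
// });
// const payload = unwrap(await res.json());
```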
What you get back
Every chiefmo_diagnose_marketing call returns:
- assets[] — drafting briefs (brand context + system prompt + skill-specific drafting prompt). Pass each one to your LLM to render the final post / email / landing copy. Default outputMode: "context" = ~$0 in tokens for ChiefLab.
- reviewUrl — signed, no-login URL your end-user opens to review + approve drafts.
- actions[] — proposed external actions (publish, schedule, send) with approval requirements.
- data.cost — providerCostUsd, modelCalls, outputMode — full cost telemetry per run.
Want ChiefLab to write server-side instead of returning a brief? Pass outputMode: "full" — premium credits, slower, but you don't need a calling LLM.
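Pulling those fields together, a consuming agent typically needs just a handful of them per run. A sketch of reading the parsed payload — the top-level field names come from the list above; the exact nested shapes are assumptions:

```javascript
// payload = JSON.parse(result.content[0].text), as in the examples above.
// Field names follow the documented response; nested shapes are assumed.
function summarizeRun(payload) {
  return {
    briefs: payload.assets.length,           // drafting briefs to render
    reviewUrl: payload.reviewUrl,            // signed approval link
    pendingActions: payload.actions.length,  // publish / schedule / send
    costUsd: payload.data.cost.providerCostUsd,
    mode: payload.data.cost.outputMode       // "context" or "full"
  };
}
```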
Multi-tenant: serving your own customers
If your agent serves multiple end-users, register each as a tenant:
chieflab_create_tenant({
  tenantId: "acme-co",
  name: "Acme Co",
  domain: "acme.co"
})
chieflab_set_tenant_context({
  tenantId: "acme-co",
  brand: "Acme Co",
  audience: "B2B SaaS founders",
  voice: "Direct, founder-led, no jargon"
})
Then call any *_diagnose tool with tenantId: "acme-co" and ChiefLab grounds every output against that tenant's context. One workspace can serve thousands of tenants.
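Serving many customers from one workspace then reduces to swapping the tenantId per request under the same key. A sketch — the helper name and the tenant IDs other than "acme-co" are illustrative:

```javascript
// One workspace, many tenants: same API key, different tenantId per call.
function diagnoseEnvelope(tenantId, goal, id = 1) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: {
      name: "chiefmo_diagnose_marketing",
      arguments: { goal, tenantId }
    }
  };
}

// Each customer's run is grounded in their own tenant context.
const calls = ["acme-co", "globex", "initech"].map((t, i) =>
  diagnoseEnvelope(t, "Launch this product", i + 1)
);
```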