ChiefMO
The marketing brain + visual engine + execution layer your agent calls right after it builds a product.
ChiefMO is the killer flow. An agent ships a product → calls chiefmo_launch_product → gets back a complete launch pack: positioning, per-channel briefs (LinkedIn, X, Product Hunt, email, landing hero), on-brand generated graphics, a signed reviewUrl, and approval-gated publish actions. After human approval, ChiefLab actually schedules the posts via Zernio. 24 hours later, chiefmo_post_launch_review pulls metrics and recommends the next iteration. Build → launch → measure → iterate, all callable from any agent. The cost model is asymmetric: text is written by your agent's LLM (cheap), images are generated by ChiefLab (consistent, brand-grounded), and execution runs through ChiefLab (you get approved hands inside business tools).
From "I built it" to "people are using it"
The killer flow ChiefMO is designed for: agent builds → agent asks for users → agent calls ChiefMO → user reviews → memory adapts the next run.
- 01
Agent builds the product
Cursor / Claude Code / Bolt / Base44 / Lovable / OpenClaw ships an MVP. The user has a working product but no users.
- 02
User asks: 'Now help me get users'
The same agent that built it gets the next instruction. Most agents return generic advice (do SEO, post on LinkedIn, run ads). That ships products to nowhere.
- 03
Agent calls chiefmo_diagnose_marketing
Plain HTTPS POST to chieflab.io/api/mcp with goal + tenantId. Works from any runtime — Cursor, Claude Desktop, Telegram bot, voice agent, custom SDK.
- 04
ChiefMO returns a brief, not a wall of copy
Positioning diagnosis + brand context + connector evidence + drafting briefs + generated images (when requested) + signed reviewUrl. Per-tenant memory updated. ChiefLab token cost: ~$0 by default.
- 05
Agent's LLM renders the brief into final copy
The brief is markdown — drop it into the agent's context. Cursor's Sonnet, Claude Desktop's Claude, Vapi's GPT-4o, your custom SDK's Gemini — whichever LLM your agent runs writes the final post / email / landing copy.
- 06
User opens reviewUrl, approves or rejects
No login. Renders the brief, the agent-rendered copy, and any generated images. Approve = action queued; reject with feedback = stored as a voice sample. Next run adapts.
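From the agent's side, the six steps above reduce to one HTTPS call. A minimal TypeScript sketch of step 03: only the endpoint, tool name, and argument keys come from this page; the helper names and response handling are illustrative assumptions.

```typescript
// Sketch: call chiefmo_diagnose_marketing over plain HTTPS (step 03).
// Helper names and response shape are assumptions, not ChiefLab's SDK.

type DiagnoseArgs = {
  goal: string;
  tenantId?: string;       // real per-tenant memory
  tenantUrl?: string;      // or light, non-persisted context
  idempotencyKey?: string;
  webhookUrl?: string;
};

// Wrap the arguments in the JSON-RPC 2.0 tools/call envelope.
function buildDiagnoseRequest(args: DiagnoseArgs) {
  return {
    jsonrpc: "2.0" as const,
    id: 1,
    method: "tools/call",
    params: { name: "chiefmo_diagnose_marketing", arguments: args },
  };
}

// POST it to the MCP endpoint; expect reviewUrl, assets[], actions[] back.
async function diagnose(apiKey: string, args: DiagnoseArgs) {
  const res = await fetch("https://chieflab.io/api/mcp", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildDiagnoseRequest(args)),
  });
  return res.json();
}

const req = buildDiagnoseRequest({
  goal: "Help this product launch and get its first 100 users",
  tenantId: "acme-co",
  idempotencyKey: "launch-2026-04",
});
console.log(req.params.name); // chiefmo_diagnose_marketing
```

Because the brief comes back as markdown, the agent drops the result straight into its own LLM context for step 05.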
One MCP tool. One reviewable run.
POST https://chieflab.io/api/mcp
Authorization: Bearer clp_live_<your-key>
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "chiefmo_diagnose_marketing",
    "arguments": {
      "goal": "Help this product launch and get its first 100 users",
      "tenantId": "acme-co",
      "idempotencyKey": "launch-2026-04",
      "webhookUrl": "https://yourapp/hook"
    }
  }
}
// → reviewUrl: chieflab.io/runs/<id>?token=...
// click → see assets, images, approve/reject
// → assets[]: positioning, posts, ads, images, ...
// → actions[]: 2 approval-gated drafts
// → data: { intent, skillsRun, action, cost, ... }
// → diagnosis: { tenantContextLoaded, approvalRequired, ... }
No tenantId yet? Pass tenantUrl instead — ChiefMO uses the URL as light context (nothing is persisted). For real per-tenant memory, call chieflab_create_tenant + chieflab_set_tenant_context first.
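The bootstrap order above can be sketched as three tools/call payloads. The tool names come from this page; the argument shapes for the two chieflab_* calls are hypothetical examples, not a documented schema.

```typescript
// Sketch of the tenant-bootstrap order: create tenant, seed context, diagnose.
// Only the tool names are documented; chieflab_* argument shapes are guesses.

function toolCall(id: number, name: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// 1. Create the tenant (hypothetical argument shape).
const create = toolCall(1, "chieflab_create_tenant", { tenantId: "acme-co" });

// 2. Seed its brand context so later runs are grounded (hypothetical shape).
const seed = toolCall(2, "chieflab_set_tenant_context", {
  tenantId: "acme-co",
  context: { url: "https://acme.co", audience: "indie founders" },
});

// 3. Diagnose with real per-tenant memory instead of a throwaway tenantUrl.
const run = toolCall(3, "chiefmo_diagnose_marketing", {
  goal: "Help this product launch and get its first 100 users",
  tenantId: "acme-co",
});
```

Each payload POSTs to the same chieflab.io/api/mcp endpoint with the same Bearer key as above.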
Eight callable skills, one orchestrated agent loop
Each skill is an MCP tool. The primary chiefmo_diagnose_marketing picks the right ones and runs them in sequence with shared per-tenant context. Or call any skill directly.
Diagnose marketing
Primary entry point. Detects intent, pulls GA4/Search Console evidence, runs the right skills with per-tenant brand context, returns drafting briefs your LLM renders + signed reviewUrl. Default outputMode: "context" — ~$0 in tokens. Pass outputMode: "full" if you want ChiefLab to write server-side (premium credits).
chiefmo_diagnose_marketing
Brand DNA discovery
Discover a brand from URL: audience, voice, pillars, competitors, visual cues. Stored per-tenant; cached after first run. ChiefLab pays this cost once per tenant URL (~$0.005), then it's free forever.
chiefmo_extract_brand_dna
Social post brief
Returns platform-native drafting brief (Instagram/LinkedIn/X) — hook patterns, caption structure, hashtag rules, posting cadence. Your LLM renders the actual posts.
chiefmo_generate_social_posts
Ad variant brief
Returns angles, headline frameworks, primary text patterns, CTAs, test plan for Meta + Google Ads. Your LLM writes the variants. Budget changes require human approval.
chiefmo_generate_ad_variants
Landing copy brief
Returns hero + value-prop + FAQ + CTA structure. Your LLM writes the actual copy. Page build/publish requires approval.
chiefmo_generate_landing_copy
Email sequence brief
Returns subject ladder + 4-email skeleton + send timing. Your LLM writes each email. Sending requires approval and a connected ESP.
chiefmo_generate_email_sequence
Weekly report brief
Returns connector snapshots + KPI deltas + structured 'what changed and why' + recommended next 3 actions framework. Your LLM writes the narrative report.
chiefmo_prepare_weekly_report
Anomaly diagnosis
Returns ranked-cause framework for a metric drop/spike + corrective-action template. Your LLM writes the customer-facing explanation.
chiefmo_diagnose_anomaly
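Since every skill is an ordinary MCP tool, calling one directly is the same tools/call envelope with a different name. A sketch, using the eight names listed above; the argument keys in the example call are illustrative.

```typescript
// The eight skill names from the list above, callable directly or via the
// chiefmo_diagnose_marketing orchestrator.
const SKILLS = [
  "chiefmo_diagnose_marketing",
  "chiefmo_extract_brand_dna",
  "chiefmo_generate_social_posts",
  "chiefmo_generate_ad_variants",
  "chiefmo_generate_landing_copy",
  "chiefmo_generate_email_sequence",
  "chiefmo_prepare_weekly_report",
  "chiefmo_diagnose_anomaly",
] as const;

// Build a direct tools/call payload for any single skill.
function directCall(name: (typeof SKILLS)[number], args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// e.g. discover brand DNA from a URL (cached per tenant after the first run):
const dna = directCall("chiefmo_extract_brand_dna", { tenantUrl: "https://acme.co" });
console.log(SKILLS.length); // 8
```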
The first bottleneck after product creation is always users
- Universal pain. Every newly built product needs distribution. Marketing is the most universal post-build problem.
- Demoable. A marketing operator returns visible work — copy, images, posts, plans. The reject-with-feedback → next-run-improves loop is observable in seconds.
- Crowded but undifferentiated. AI marketing tools exist (Sintra, Okara, Jasper). MCP marketing infra exists (Markifact, AdKit). But none combine a developer-first MCP wedge + per-tenant memory + approval-gated execution + multi-operator runtime under one roof.
- Honest framing. ChiefSales / ChiefSupport / ChiefFI / ChiefOps prove the runtime generalizes — they're real, they're callable, they're shipped. But ChiefMO is where the polish, the connectors, the depth, and the design-partner energy go first.
Try ChiefMO in your Cursor chat
60-second install. Bootstrap API key included. Ask your Cursor agent: "Help my product at <url> launch. Use ChiefMO." Click the reviewUrl. See it work end-to-end.
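For Cursor specifically, remote MCP servers are typically registered in a .cursor/mcp.json file. The snippet below is a sketch to verify against Cursor's current MCP documentation: the endpoint and Bearer key format come from this page, while the "url" and "headers" keys and the "chieflab" server label are assumptions.

```json
{
  "mcpServers": {
    "chieflab": {
      "url": "https://chieflab.io/api/mcp",
      "headers": { "Authorization": "Bearer clp_live_<your-key>" }
    }
  }
}
```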