Every Cafecito API product ships an MCP (Model Context Protocol) server — same auth, same key, zero extra setup. Point your AI agent or MCP client at the endpoint and go.
Authentication
Get API Key
Your existing API key works for MCP calls — they share the same rate limit and quota meter as regular API calls.
Connecting to MCP Servers
Server URL: https://api.cafecito.tech/<product>/mcp
Authorization: Bearer YOUR-API-KEY
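The endpoint and header pattern above can be captured in a couple of small helpers. This is an illustrative sketch only; the function names `cafecito_mcp_endpoint` and `auth_header` are not part of the API.

```python
def cafecito_mcp_endpoint(product: str) -> str:
    # Every Cafecito product exposes its MCP server at the same path shape.
    return f"https://api.cafecito.tech/{product}/mcp"

def auth_header(api_key: str) -> dict:
    # Same API key as regular REST calls, sent as a Bearer token.
    return {"Authorization": f"Bearer {api_key}"}

print(cafecito_mcp_endpoint("beans"))  # → https://api.cafecito.tech/beans/mcp
```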
Beans MCP server
Server URL: https://api.cafecito.tech/beans/mcp
Using Beans MCP with your Agent
# openai-agents HostedMCPTool (https://github.com/openai/openai-agents-python)
import os
from datetime import datetime

from agents import Agent, HostedMCPTool, Runner

mcp_tool_config = {
    "type": "mcp",
    "server_label": "cafecito_beans",
    "server_url": "https://api.cafecito.tech/beans/mcp",
    "authorization": os.getenv("CAFECITO_API_KEY"),
    "require_approval": "never",
}

# SETUP: model client using AsyncOpenAI
# SETUP: model using OpenAIChatCompletionsModel

agent = Agent(
    name="News Analyst",
    instructions="Extract news headlines and associated briefings",
    tools=[
        HostedMCPTool(tool_config=mcp_tool_config),
    ],
)

PROMPT_TEMPLATE = """
Today: {date}
Question: What is trending in the Middle East today?
"""

result = Runner.run_sync(
    agent,
    PROMPT_TEMPLATE.format(date=datetime.now().strftime("%Y-%m-%d")),
)
print("=== Final output ===")
print(result.final_output)
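Since every Cafecito product follows the same endpoint and auth pattern, the tool config above generalizes. A minimal sketch, assuming the Beans pattern holds for other products; the helper name `cafecito_mcp_tool_config` is illustrative, not part of the library:

```python
import os

def cafecito_mcp_tool_config(product: str, api_key: str = None) -> dict:
    # Hosted-MCP tool config following the Beans example above;
    # swap `product` for any other Cafecito API product.
    return {
        "type": "mcp",
        "server_label": f"cafecito_{product}",
        "server_url": f"https://api.cafecito.tech/{product}/mcp",
        "authorization": api_key or os.getenv("CAFECITO_API_KEY"),
        "require_approval": "never",
    }

config = cafecito_mcp_tool_config("beans", "YOUR-API-KEY")
```

Pass the resulting dict to `HostedMCPTool(tool_config=...)` exactly as in the example above.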
Use cases
AI assistants and RAG workflows that need fresh news context
AI agents that monitor trends, entities, or topics in real time
LLM pipelines that require enrichment-ready JSON, not raw HTML
Last modified on March 16, 2026