npx skills add https://github.com/XSpoonAi/spoon-core --skill web3-research
Install this skill with the CLI and start using the SKILL.md workflow in your workspace.
Core developer framework of SpoonOS: an agentic OS for the sentient economy. Next-generation AI agent framework | Powerful interactive CLI | Optimized support for Web3 infrastructure
This README is your guide to getting started with the SpoonOS Core Developer Framework (SCDF). It walks you through everything you need—from understanding core capabilities to actually running your own agents.
Here's how to navigate it:
✨ Features: Start here to understand what SpoonOS can do. This section gives you a high-level overview of its agentic, composable, and interoperable architecture.
🔧 Installation: As of June 2025, SpoonOS supports Python only. This section tells you which Python version to use and how to set up a virtual environment.
🔐 Environment & API Key Config: Learn how to configure API keys for various LLMs (e.g., OpenAI, Claude, DeepSeek). We also cover configuration for Web3 infrastructure such as chains, RPC endpoints, databases, and blockchain explorers.
🚀 Quick Start: Once your environment is ready, start calling our MCP server, which bundles a wide range of tools. Other servers are also available.
🛠️ CLI Tools: This section shows how to use the CLI to run LLM-powered tasks with ease.
🧩 Agent Framework: Learn how to create your own agents, register custom tools, and extend SpoonOS with minimal setup.
📊 Enhanced Graph System: Discover the powerful graph-based workflow orchestration system for complex AI agent workflows.
🔌 API Integration: Plug in external APIs to enhance your agent workflows.
🤝 Contributing: Want to get involved? Check here for contribution guidelines.
📄 License: Standard license information.
By the end of this README, you'll not only understand what SCDF is, but you'll also be ready to build and run your own AI agents, and you'll come away with ideas for scenarios SCDF could empower. Have fun!
SpoonOS is a living, evolving agentic operating system. Its SCDF is purpose-built to meet the growing demands of Web3 developers — offering a complete toolkit for building sentient, composable, and interoperable AI agents.
Agents can attach tool servers over stdio, HTTP, or WebSocket transports, without hardcoding or restarts.

# Clone the repo
$ git clone https://github.com/XSpoonAi/spoon-core.git
$ cd spoon-core
# Create a virtual environment
$ python -m venv spoon-env
$ source spoon-env/bin/activate # For macOS/Linux
# Install dependencies
$ pip install -r requirements.txt
Prefer uv for a faster, reproducible install?
# Install dependencies with uv (recommended)
$ uv pip install -r requirements.txt
# Editable install for local development
$ uv pip install -e .
Note (Nov 2025): When you import `spoon_ai` directly in Python, configuration is read from environment variables (including `.env`). The interactive CLI / `spoon-cli` tooling is what reads `config.json` and exports those values into the environment for you.
SpoonOS uses a unified configuration system that supports multiple setup methods. Choose the one that works best for your workflow:
Create a .env file in the root directory:
cp .env.example .env
Fill in your API keys:
# LLM Provider Keys
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-your-claude-key
DEEPSEEK_API_KEY=your-deepseek-key
GEMINI_API_KEY=your-gemini-api-key
# Web3 Configuration
PRIVATE_KEY=your-wallet-private-key
RPC_URL=https://mainnet.rpc
CHAIN_ID=12345
# Turnkey SDK Configuration
TURNKEY_BASE_URL=https://api.turnkey.com
TURNKEY_API_PUBLIC_KEY=your-turnkey-public-key
TURNKEY_API_PRIVATE_KEY=your-turnkey-private-key-hex
TURNKEY_ORG_ID=your-turnkey-organization-id
# Tool-specific Keys
TAVILY_API_KEY=your-tavily-api-key
OKX_API_KEY=your-okx-api-key
OKX_SECRET_KEY=your-okx-secret-key
OKX_API_PASSPHRASE=your-okx-passphrase
Then load it in your Python entry file:
from dotenv import load_dotenv
load_dotenv(override=True)
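To sanity-check that the keys are now visible to the process, you can assert on the variable names from the template above (a minimal check, not part of the framework):

```python
import os

# After load_dotenv(), provider keys should be visible as environment
# variables; spoon_ai reads its configuration from these.
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"
```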
SpoonOS now ships with a first-class x402 integration, letting agents pay for external services and expose their own paywalled endpoints.
Add the following entries to .env (or export them in your shell):
X402_AGENT_PRIVATE_KEY=0xyour-agent-wallet-private-key
X402_RECEIVER_ADDRESS=0xwallet-that-receives-fees
X402_FACILITATOR_URL=https://x402.org/facilitator
X402_DEFAULT_ASSET=0xa063B8d5ada3bE64A24Df594F96aB75F0fb78160 # USDC on Base Sepolia
X402_DEFAULT_NETWORK=base-sepolia
X402_DEFAULT_AMOUNT_USDC=0.10
You can override additional values in config.json under the new x402 section (branding, session tokens, per-resource metadata, etc.).
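For illustration only, such an x402 section might look like the sketch below. The key names are hypothetical, derived from the categories listed above (branding, session tokens, per-resource metadata); consult the configuration guide for the authoritative schema:

```json
{
  "x402": {
    "branding": { "name": "My Agent Gateway" },
    "session_tokens": { "enabled": true },
    "resources": {
      "/x402/invoke/trading_agent": { "amount_usdc": 0.10 }
    }
  }
}
```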
SpoonReactAI automatically registers two tools when x402 is configured:
- `x402_create_payment` – generate signed X-PAYMENT headers for any resource.
- `x402_paywalled_request` – negotiate a 402 challenge, sign the payment, and retry the HTTP call automatically.

Use the bundled CLI to inspect requirements, sign headers, or verify incoming requests:
uv run python -m spoon_ai.payments.cli requirements
uv run python -m spoon_ai.payments.cli sign --amount-usdc 0.05 --resource https://api.example.com/data
uv run python -m spoon_ai.payments.cli verify <base64-header>
Run the FastAPI gateway to protect agent invocations with x402:
uv run python -m spoon_ai.payments.app
This exposes:
- `GET /x402/requirements` – discover supported payment requirements.
- `POST /x402/invoke/{agent_name}` – pay-to-invoke endpoint that verifies and settles headers, then forwards prompts to your agent. Successful responses include an `X-PAYMENT-RESPONSE` header containing the settlement receipt.

Check `examples/x402_agent_demo.py` for an end-to-end walkthrough.
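As a rough client-side sketch (the gateway address, agent name, and request-body shape below are assumptions for illustration; see the demo script for the actual contract), a paying caller first signs a header with the payments CLI or the `x402_create_payment` tool, then sends it along with the prompt:

```python
import requests

# Hypothetical local gateway; adjust host/port to your deployment.
BASE = "http://localhost:8000"

# Discover what the gateway expects (amount, asset, network).
requirements = requests.get(f"{BASE}/x402/requirements").json()
print(requirements)

# X-PAYMENT carries the signed header produced by the sign step above.
signed_header = "..."  # base64 payload from `spoon_ai.payments.cli sign`

# Body shape is illustrative; see examples/x402_agent_demo.py.
resp = requests.post(
    f"{BASE}/x402/invoke/trading_agent",
    json={"prompt": "What is the BTC funding rate?"},
    headers={"X-PAYMENT": signed_header},
)
print(resp.headers.get("X-PAYMENT-RESPONSE"))  # settlement receipt
print(resp.json())
```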
Start the CLI and configure interactively:
python main.py
# Configure API keys
> config api_key openai sk-your-openai-key
> config api_key anthropic sk-your-claude-key
# View current configuration
> config
config.json (optional)

For CLI workflows (including `python main.py` and `spoon-cli`), you can create or edit a `config.json` file that the CLI layer reads and then exports into environment variables. Core Python code still uses environment variables only.
{
  "api_keys": {
    "openai": "sk-your-openai-key",
    "anthropic": "sk-your-claude-key"
  },
  "default_agent": "trading_agent",
  "agents": {
    "trading_agent": {
      "class": "SpoonReactMCP",
      "tools": [
        {
          "name": "tavily-search",
          "type": "mcp",
          "enabled": true,
          "mcp_server": {
            "command": "npx",
            "args": ["--yes", "tavily-mcp"],
            "env": {"TAVILY_API_KEY": "your-tavily-key"}
          }
        },
        {
          "name": "crypto_powerdata_cex",
          "type": "builtin",
          "enabled": true,
          "env": {
            "OKX_API_KEY": "your-okx-key",
            "OKX_SECRET_KEY": "your-okx-secret",
            "OKX_API_PASSPHRASE": "your-okx-passphrase",
            "OKX_PROJECT_ID": "your-okx-project-id"
          }
        }
      ]
    }
  }
}
📖 Complete Configuration Guide
SpoonOS uses a split configuration model:
- Core SDK (`spoon_ai`): reads only environment variables (including `.env`).
- CLI (`python main.py` / `spoon-cli`): reads `config.json`, then materializes values into environment variables before invoking the SDK.

SpoonOS supports two main tool types: MCP tools (`"type": "mcp"`) and built-in tools (`"type": "builtin"`).
Example agent with both tool types:
{
  "agents": {
    "my_agent": {
      "class": "SpoonReactMCP",
      "tools": [
        {
          "name": "tavily-search",
          "type": "mcp",
          "mcp_server": {
            "command": "npx",
            "args": ["--yes", "tavily-mcp"],
            "env": {"TAVILY_API_KEY": "your-key"}
          }
        },
        {
          "name": "crypto_powerdata_cex",
          "type": "builtin",
          "enabled": true,
          "env": {
            "OKX_API_KEY": "your_okx_api_key",
            "OKX_SECRET_KEY": "your_okx_secret_key",
            "OKX_API_PASSPHRASE": "your_okx_api_passphrase",
            "OKX_PROJECT_ID": "your_okx_project_id"
          }
        }
      ]
    }
  }
}
SpoonOS features a unified LLM infrastructure that provides seamless integration with multiple providers, automatic fallback mechanisms, and comprehensive monitoring.
import asyncio
from spoon_ai.llm import LLMManager, ConfigurationManager

async def main():
    # Initialize the LLM manager
    config_manager = ConfigurationManager()
    llm_manager = LLMManager(config_manager)

    # Simple chat request (uses default provider)
    response = await llm_manager.chat(
        [{"role": "user", "content": "Hello, world!"}]
    )
    print(response.content)

    # Use a specific provider
    response = await llm_manager.chat(
        messages=[{"role": "user", "content": "Hello!"}],
        provider="anthropic",
    )

    # Chat with tools
    tools = [{"name": "get_weather", "description": "Get weather info"}]
    response = await llm_manager.chat_with_tools(
        messages=[{"role": "user", "content": "What's the weather?"}],
        tools=tools,
        provider="openai",
    )
    print(response.content)

if __name__ == "__main__":
    asyncio.run(main())
For blockchain key management and secure transaction signing:
from spoon_ai.turnkey import Turnkey

# Initialize Turnkey client (requires TURNKEY_* env vars)
client = Turnkey()

# Sign an EVM transaction
result = client.sign_evm_transaction(
    sign_with="0x_your_wallet_address",
    unsigned_tx="0x_unsigned_transaction_hex"
)

# Sign a message
result = client.sign_message(
    sign_with="0x_your_wallet_address",
    message="Hello Turnkey!"
)
See examples/turnkey/ for complete usage examples.
In CLI workflows you can configure providers in the CLI config.json (the CLI will export these values into environment variables before invoking the SDK). For pure SDK usage, set the corresponding environment variables instead of relying on config.json:
{
  "llm_providers": {
    "openai": {
      "api_key": "sk-your-openai-key",
      "model": "gpt-4.1",
      "max_tokens": 4096,
      "temperature": 0.3
    },
    "anthropic": {
      "api_key": "sk-ant-your-key",
      "model": "claude-sonnet-4-20250514",
      "max_tokens": 4096,
      "temperature": 0.3
    },
    "gemini": {
      "api_key": "your-gemini-key",
      "model": "gemini-2.5-pro",
      "max_tokens": 4096
    }
  },
  "llm_settings": {
    "default_provider": "openai",
    "fallback_chain": ["openai", "anthropic", "gemini"],
    "enable_monitoring": true,
    "enable_caching": true
  }
}
# Set up fallback chain
llm_manager.set_fallback_chain(["openai", "anthropic", "gemini"])

# The manager will automatically try providers in order if one fails
response = await llm_manager.chat([
    {"role": "user", "content": "Hello!"}
])
# If OpenAI fails, it will try Anthropic, then Gemini
from spoon_ai.llm import LLMProviderInterface, LLMResponse, register_provider

@register_provider("custom", capabilities=["chat", "completion"])
class CustomProvider(LLMProviderInterface):
    async def initialize(self, config):
        self.api_key = config["api_key"]
        # Initialize your provider

    async def chat(self, messages, **kwargs):
        # Implement chat functionality
        return LLMResponse(
            content="Custom response",
            provider="custom",
            model="custom-model",
            finish_reason="stop"
        )

    # Implement other required methods...
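Once registered, the provider can be selected by name just like the built-ins, reusing the `llm_manager` from the earlier example:

```python
# Inside an async context, as in the examples above.
response = await llm_manager.chat(
    messages=[{"role": "user", "content": "Hello!"}],
    provider="custom",
)
print(response.content)  # -> "Custom response"
```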
from spoon_ai.llm import get_debug_logger, get_metrics_collector
# Get monitoring instances
debug_logger = get_debug_logger()
metrics = get_metrics_collector()
# View provider statistics
stats = metrics.get_provider_stats("openai")
print(f"Success rate: {stats['success_rate']:.1f}%")
print(f"Average response time: {stats['avg_response_time']:.2f}s")
# Get recent logs
logs = debug_logger.get_recent_logs(limit=10)
for log in logs:
    print(f"{log.timestamp}: {log.provider} - {log.method}")
from spoon_ai.chat import ChatBot
from spoon_ai.agents import SpoonReactAI

# Using OpenAI's GPT-4.1
openai_agent = SpoonReactAI(
    llm=ChatBot(model_name="gpt-4.1", llm_provider="openai")
)

# Using Anthropic's Claude
claude_agent = SpoonReactAI(
    llm=ChatBot(model_name="claude-sonnet-4-20250514", llm_provider="anthropic")
)

# Using OpenRouter (OpenAI-compatible API)
# Uses OPENAI_API_KEY environment variable with your OpenRouter API key
openrouter_agent = SpoonReactAI(
    llm=ChatBot(
        model_name="anthropic/claude-sonnet-4",  # Model name from OpenRouter
        llm_provider="openai",                   # MUST be "openai"
        base_url="https://openrouter.ai/api/v1"  # OpenRouter API endpoint
    )
)
SpoonOS includes a powerful graph-based workflow orchestration system, designed for building complex AI agent workflows with state management, multi-agent coordination, and human-in-the-loop patterns.
from spoon_ai.graph import StateGraph
from typing import TypedDict

class WorkflowState(TypedDict):
    counter: int
    completed: bool

def increment(state: WorkflowState):
    return {"counter": state["counter"] + 1}

def complete(state: WorkflowState):
    return {"completed": True}

# Build and execute workflow
graph = StateGraph(WorkflowState)
graph.add_node("increment", increment)
graph.add_node("complete", complete)
graph.add_edge("increment", "complete")
graph.set_entry_point("increment")

compiled = graph.compile()
result = await compiled.invoke({"counter": 0, "completed": False})
# Result: {"counter": 1, "completed": True}
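The `await` above assumes an async context; from a plain script, wrap the invocation with `asyncio.run`:

```python
import asyncio

# compiled.invoke() is a coroutine, so drive it with the event loop.
result = asyncio.run(compiled.invoke({"counter": 0, "completed": False}))
print(result)  # {"counter": 1, "completed": True}
```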
from spoon_ai.tools.base import BaseTool

class MyCustomTool(BaseTool):
    name: str = "my_tool"
    description: str = "Description of what this tool does"
    parameters: dict = {
        "type": "object",
        "properties": {
            "param1": {"type": "string", "description": "Parameter description"}
        },
        "required": ["param1"]
    }

    async def execute(self, param1: str) -> str:
        # Tool implementation
        return f"Result: {param1}"
from pydantic import Field

from spoon_ai.agents import ToolCallAgent
from spoon_ai.tools import ToolManager

class MyAgent(ToolCallAgent):
    name: str = "my_agent"
    description: str = "Agent description"
    system_prompt: str = "You are a helpful assistant..."
    max_steps: int = 5

    available_tools: ToolManager = Field(
        default_factory=lambda: ToolManager([MyCustomTool()])
    )
import asyncio

from spoon_ai.chat import ChatBot

async def main():
    agent = MyAgent(llm=ChatBot())
    result = await agent.run("Say hello to Scarlett")
    print("Result:", result)

if __name__ == "__main__":
    asyncio.run(main())
Register your own tools, override run(), or extend with MCP integrations. See doc/agent.md or doc/mcp_mode_usage.md.
SpoonOS supports runtime pluggable agents using the MCP (Model Context Protocol) — allowing your agent to connect to a live tool server (via SSE/WebSocket/HTTP) and call tools like get_contract_events or get_wallet_activity with no extra code.
Two ways to build MCP-powered agents:
Built-in Agent Mode: Build and run your own MCP server (e.g., mcp_thirdweb_collection.py) and connect to it using an MCPClientMixin agent.
Community Agent Mode: Use mcp-proxy to connect to open-source agents hosted on GitHub.
SpoonOS supports prompt caching for Anthropic models to reduce costs and improve performance. Enable/disable globally:
from spoon_ai.chat import ChatBot

# Enable prompt caching (default: True)
chatbot = ChatBot(
    llm_provider="anthropic",
    enable_prompt_cache=True
)
spoon-core/
├── 📄 README.md # This file
├── 🔧 main.py # CLI entry point
├── ⚙️ config.json # Runtime configuration
├── 🔐 .env.example # Environment template
├── 📦 requirements.txt # Python dependencies
│
├── 📁 spoon_ai/ # Core framework
│ ├── 🤖 agents/ # Agent implementations
│ ├── 🛠️ tools/ # Built-in tools
│ ├── 🧠 llm/ # LLM providers & management
│ ├── 📊 graph.py # Graph workflow system
│ └── 💬 chat.py # Chat interface
│
├── 📁 examples/ # Usage examples
│ ├── 🤖 agent/ # Custom agent demos
│ ├── 🔌 mcp/ # MCP tool examples
│ └── 📊 graph_demo.py # Graph system demo
│
├── 📁 doc/ # Documentation
│ ├── 📖 configuration.md # Setup & config guide
│ ├── 🤖 agent.md # Agent development
│ ├── 📊 graph_agent.md # Graph workflows
│ ├── 🔌 mcp_mode_usage.md # MCP integration
│ └── 💻 cli.md # CLI reference
│
└── 📁 tests/ # Test suite
├── 🧪 test_agents.py
├── 🧪 test_tools.py
└── 🧪 test_graph.py
- main.py - Start here! CLI entry point
- config.json - Main configuration file (auto-generated)
- doc/configuration.md - Complete setup guide
- examples/ - Ready-to-run examples