Install this skill with the CLI and start using the SKILL.md workflow in your workspace:

npx skills add https://github.com/Ed1s0nZ/CyberStrikeAI --skill security-awareness-training
Community: Join us on Discord
If CyberStrikeAI helps you, you can support the project via WeChat Pay or Alipay:
CyberStrikeAI is an AI-native security testing platform built in Go. It integrates 100+ security tools, an intelligent orchestration engine, role-based testing with predefined security roles, a skills system with specialized testing skills, and comprehensive lifecycle management capabilities. Through the native MCP protocol and AI agents, it enables end-to-end automation from conversational commands to vulnerability discovery, attack-chain analysis, knowledge retrieval, and result visualization, delivering an auditable, traceable, and collaborative testing environment for security teams.
The dashboard provides a comprehensive overview of system runtime status, security vulnerabilities, tool usage, and knowledge base, helping users quickly understand the platform's core features and current state.
Web Console | Task Management | Vulnerability Management | WebShell Management | MCP Management | Knowledge Base | Skills Management | Agent Management | Role Management | System Settings | MCP stdio Mode | Burp Suite Plugin
- Single mode runs the classic agent loop (/api/agent-loop); multi mode (/api/multi-agent/stream) offers deep (coordinator + task sub-agents), plan_execute (planner / executor / replanner), and supervisor (orchestrator + transfer / exit), chosen per request via the orchestration field.
- Markdown definitions live under agents/: orchestrator.md (Deep), orchestrator-plan-execute.md, orchestrator-supervisor.md, plus sub-agent *.md where applicable (see the multi-agent doc).
- Packs under skills_dir follow the Agent Skills layout (SKILL.md + optional files); multi-agent sessions use the official Eino ADK skill tool for progressive disclosure (load by name), with optional host filesystem / shell via multi_agent.eino_skills. Optional eino_middleware adds patch_tool_calls, tool_search, plantask, reduction, checkpoints, and Deep tuning. 20+ sample domains (SQLi, XSS, API security, ...) ship under skills/.
- CyberStrikeAI includes optional integrations under plugins/: the Burp Suite extension source lives at plugins/burp-suite/cyberstrikeai-burp-extension/, the built JAR at plugins/burp-suite/cyberstrikeai-burp-extension/dist/cyberstrikeai-burp-extension.jar, and usage notes in plugins/burp-suite/cyberstrikeai-burp-extension/README.md.

CyberStrikeAI ships with 100+ curated tools covering the whole kill chain:
Prerequisites:
One-Command Deployment:
git clone https://github.com/Ed1s0nZ/CyberStrikeAI.git
cd CyberStrikeAI
chmod +x run.sh && ./run.sh
The run.sh script will automatically:
First-Time Configuration:
Settings → Fill in your API credentials:

openai:
  api_key: "sk-your-key"
  base_url: "https://api.openai.com/v1"   # or https://api.deepseek.com/v1
  model: "gpt-4o"                         # or deepseek-chat, claude-3-opus, etc.
Alternatively, edit config.yaml directly before launching (set a login password via auth.password in config.yaml).

# macOS
brew install nmap sqlmap nuclei httpx gobuster feroxbuster subfinder amass
# Ubuntu/Debian
sudo apt-get install nmap sqlmap nuclei httpx gobuster feroxbuster
AI automatically falls back to alternatives when a tool is missing.

Alternative Launch Methods:
# Direct Go run (requires manual setup)
go run cmd/server/main.go
# Manual build
go build -o cyberstrike-ai cmd/server/main.go
./cyberstrike-ai
Note: The Python virtual environment (venv/) is automatically created and managed by run.sh. Tools that require Python (like api-fuzzer, http-framework-test, etc.) will automatically use this environment.
CyberStrikeAI one-click upgrade (recommended):
1. chmod +x upgrade.sh
2. ./upgrade.sh (optional flags: --tag vX.Y.Z, --no-venv, --preserve-custom, --yes)

The script backs up config.yaml and data/, upgrades the code from the GitHub Release, updates config.yaml's version, then restarts the server.

Recommended one-liner:
chmod +x upgrade.sh && ./upgrade.sh --yes
If something goes wrong, you can restore from .upgrade-backup/ (or manually copy data/ and config.yaml back) and run ./run.sh again.
Requirements / tips:
- curl or wget is needed for downloading Release packages.
- rsync is recommended/required for the safe code sync.
- If needed, export GITHUB_TOKEN="..." before running ./upgrade.sh.

⚠️ Note: This procedure only applies to version updates without compatibility or breaking changes. If a release includes compatibility changes, this method may not apply.
Examples: no breaking changes, e.g. v1.3.1 → v1.3.2; with breaking changes, e.g. v1.3.1 → v1.4.0. The project follows Semantic Versioning (SemVer): when only the patch version (third number) changes, this upgrade path is usually safe; when the minor or major version changes, config, data, or APIs may have changed, so check the release notes before using this method.
- With multi_agent.enabled: true, the chat UI can switch between single (classic ReAct loop, /api/agent-loop/stream) and multi (/api/multi-agent/stream). Multi mode keeps deep as the baseline coordinator + task sub-agents, and adds plan_execute and supervisor orchestrations via the request body orchestration field. MCP tools are bridged the same way as single-agent.
- Authentication is disabled when auth.password is empty.
- Roles are loaded from the roles/ directory. Each role defines a user_prompt that prepends to user messages, guiding the AI to adopt specialized testing methodologies and focus areas, plus an optional tools list to limit available tools and keep testing workflows focused (e.g., the CTF role restricts to CTF-specific utilities).
- Skills live under skills_dir and are loaded in multi-agent / Eino sessions via the ADK skill tool (progressive disclosure). Configure multi_agent.eino_skills for middleware, tool name override, and optional host read_file / glob / grep / write / edit / execute (Deep / Supervisor when enabled; plan_execute differs, see docs). Single-agent ReAct does not mount this Eino skill stack today.
- Each role YAML in the roles/ directory defines name, description, user_prompt, icon, tools, and enabled fields.

Creating a custom role (example):
Create a YAML file under roles/ (e.g., roles/custom-role.yaml):

name: Custom Role
description: Specialized testing scenario
user_prompt: You are a specialized security tester focusing on API security...
icon: "\U0001F4E1"
tools:
- api-fuzzer
- arjun
- graphql-scanner
enabled: true
Multi-agent orchestrations build on adk/prebuilt: deep → coordinator + task sub-agents; plan_execute → planner / executor / replanner loop (no YAML/Markdown sub-agent list); supervisor → orchestrator with transfer and exit over Markdown-defined specialists. The client sends orchestration: deep | plan_execute | supervisor (default deep).

Markdown definitions live under agents_dir (default agents/):
- Deep: orchestrator.md or a single .md with kind: orchestrator; the instruction comes from the file body, falling back to multi_agent.orchestrator_instruction, then Eino defaults.
- Plan/Execute: orchestrator-plan-execute.md (plus optional orchestrator_instruction_plan_execute in YAML).
- Supervisor: orchestrator-supervisor.md (plus optional orchestrator_instruction_supervisor); requires at least one sub-agent.
- Sub-agents: *.md files (YAML front matter + body); not used as task targets if marked orchestrator-only.
- Markdown agents are managed via /api/multi-agent/markdown-agents.

Configure multi_agent in config.yaml: enabled, default_mode, robot_use_multi_agent, batch_use_multi_agent, max_iteration, plan_execute_loop_max_iterations, per-mode orchestrator instruction fields, optional YAML sub_agents merged with those on disk (on an id clash, Markdown wins), eino_skills, and eino_middleware (optional ADK middleware and Deep/Supervisor tuning).

Skills use SKILL.md only (Agent Skills): YAML front matter with just name and description, plus a Markdown body. Optional sibling files are supported (FORMS.md, REFERENCE.md, scripts/*, ...). There is no SKILL.yaml (not part of the Claude or Eino specs); sections, scripts, and progressive behavior are derived at runtime from Markdown and the filesystem.

skills_dir is the single root for packs. Multi-agent loads them through Eino's official skill middleware (progressive disclosure: the model calls skill with a pack name instead of receiving the full SKILL text up front). Configure via multi_agent.eino_skills: disable, filesystem_tools (host read/glob/grep/write/edit/execute), skill_tool_name. Skills can also be served as schema.Document chunks through FilesystemSkillsRetriever (skills.AsEinoRetriever()) in compose graphs (e.g. knowledge/indexing pipelines). The /api/skills listing with depth (summary | full), section, and resource_path remains for the web UI and ops; model-side skill loading in multi-agent uses the skill tool, not MCP.

Optional eino_middleware adds, for example, tool_search (dynamic MCP tool list), patch_tool_calls, plantask (structured tasks; persistence defaults under a subdirectory of skills_dir), reduction, checkpoint_dir, and Deep output key / model retries / task-tool description prefix; see config.yaml and internal/config/config.go. A demo pack ships at skills/cyberstrike-eino-demo/; see skills/README.md.

Creating a skill:
1. mkdir skills/<skill-id> and add a standard SKILL.md (+ any optional files), or drop in an open-source skill folder as-is.
2. Keep multi_agent.eino_skills enabled so the model can call the skill tool with that pack name.

Tool definitions in tools/*.yaml describe commands, arguments, prompts, and metadata. Pointing security.tools_dir to a folder is usually enough; inline definitions in config.yaml remain supported for quick experiments. Results can be explored with the query_execution_result tool, which supports paging, filters, and regex search.

Creating a custom tool (typical flow):
1. Create a YAML file under tools/ (for example tools/sample.yaml).
2. Set name, command, args, and short_description.
3. Declare parameters[] so the agent knows how to build CLI arguments.
4. Add a description/notes block if the agent needs extra context or post-processing tips.

WebShell connections are stored with their settings (including cmd) and an optional remark; all records persist in SQLite and are compatible with common clients such as IceSword and AntSword (connectivity is verified with a simple echo 1 check).

go run cmd/mcp-stdio/main.go exposes the agent to Cursor/CLI. The mcp-servers/ directory provides standalone MCPs (e.g. reverse shell). They speak standard MCP over stdio and work with CyberStrikeAI (Settings → External MCP), Cursor, VS Code, and other MCP clients.

Build the stdio binary:

go build -o cyberstrike-ai-mcp cmd/mcp-stdio/main.go
Settings → Tools & MCP → Add Custom MCP, pick Command, then point to the compiled binary and your config:

{
"mcpServers": {
"cyberstrike-ai": {
"command": "/absolute/path/to/cyberstrike-ai-mcp",
"args": [
"--config",
"/absolute/path/to/config.yaml"
]
}
}
}
Replace the paths with your local locations; Cursor will launch the stdio server automatically.

The HTTP MCP server runs on a separate port (default 8081) and supports header-based authentication so only clients that send the correct header can call tools.
In config.yaml, set mcp.enabled: true and optionally mcp.host / mcp.port. For auth (recommended if the port is reachable from the network), set:
- mcp.auth_header: the header name (e.g. X-MCP-Token);
- mcp.auth_header_value: the secret value. Leave it empty if you want the server to auto-generate a random token on first start and write it back to the config.

Start the server with ./run.sh or go run cmd/server/main.go. The MCP endpoint is http://<host>:<port>/mcp (e.g. http://localhost:8081/mcp). If auth_header_value was empty, it will have been generated and saved; the printed JSON includes the URL and headers. Add that JSON to ~/.cursor/mcp.json (or your project's .cursor/mcp.json) under mcpServers, or merge it into your existing mcpServers. For Claude, use .mcp.json or ~/.claude.json under mcpServers.

Example of what the terminal prints (with auth enabled):
{
"mcpServers": {
"cyberstrike-ai": {
"url": "http://localhost:8081/mcp",
"headers": {
"X-MCP-Token": "<auto-generated-or-your-value>"
},
"type": "http"
}
}
}
If you do not set auth_header / auth_header_value, the endpoint accepts requests without authentication (suitable only for localhost or trusted networks).
CyberStrikeAI supports connecting to external MCP servers via three transport modes:
To add an external MCP server:
Open the Web UI and navigate to Settings → External MCP.
Click Add External MCP and provide the configuration in JSON format:
HTTP mode example:
{
"my-http-mcp": {
"transport": "http",
"url": "http://127.0.0.1:8081/mcp",
"description": "HTTP MCP server",
"timeout": 30
}
}
stdio mode example:
{
"my-stdio-mcp": {
"command": "python3",
"args": ["/path/to/mcp-server.py"],
"description": "stdio MCP server",
"timeout": 30
}
}
SSE mode example:
{
"my-sse-mcp": {
"transport": "sse",
"url": "http://127.0.0.1:8082/sse",
"description": "SSE MCP server",
"timeout": 30
}
}
Click Save and then Start to connect to the server.
Monitor the connection status, tool count, and health in real time.
SSE mode benefits:
A test SSE MCP server is available at cmd/test-sse-mcp-server/ for validation purposes.
- The AI queries security knowledge through the search_knowledge_base tool.
- Retrieval is exposed via standard retriever.Retriever usage.
- The platform scans the knowledge_base/ directory for Markdown files and automatically indexes them with embeddings.

Quick Start (Using Pre-built Knowledge Base):
Download the pre-built knowledge base (knowledge.db) and place it in the project's data/ directory.

Setting up the knowledge base:
Set knowledge.enabled: true in config.yaml:

knowledge:
  enabled: true
  base_path: knowledge_base
  embedding:
    provider: openai
    model: text-embedding-v4
    base_url: "https://api.openai.com/v1"   # or your embedding API
    api_key: "sk-xxx"
  retrieval:
    top_k: 5
    similarity_threshold: 0.7
Add Markdown files to the knowledge_base/ directory, organized by category (e.g., knowledge_base/SQL Injection/README.md). The AI calls search_knowledge_base when it needs security knowledge. You can also explicitly ask: "Search the knowledge base for SQL injection techniques".

Knowledge base structure:
- Multi-agent: POST /api/multi-agent/stream (SSE, when enabled), POST /api/multi-agent (non-streaming), Markdown agents under /api/multi-agent/markdown-agents (list/get/create/update/delete).
- Roles: GET /api/roles (list all roles), GET /api/roles/:name (get role), POST /api/roles (create role), PUT /api/roles/:name (update role), DELETE /api/roles/:name (delete role). Roles are stored as YAML files in the roles/ directory and support hot-reload.
- Vulnerabilities: GET /api/vulnerabilities (list with filters), POST /api/vulnerabilities (create), GET /api/vulnerabilities/:id (get), PUT /api/vulnerabilities/:id (update), DELETE /api/vulnerabilities/:id (delete), GET /api/vulnerabilities/stats (statistics).
- Batch tasks: POST /api/batch-tasks (create queue), GET /api/batch-tasks (list queues), GET /api/batch-tasks/:queueId (get queue), POST /api/batch-tasks/:queueId/start (start execution), POST /api/batch-tasks/:queueId/cancel (cancel), DELETE /api/batch-tasks/:queueId (delete), POST /api/batch-tasks/:queueId/tasks (add task), PUT /api/batch-tasks/:queueId/tasks/:taskId (update task), DELETE /api/batch-tasks/:queueId/tasks/:taskId (delete task). Tasks execute sequentially, each creating a separate conversation with full status tracking.
- WebShell: /api/webshell/connections (GET list, POST create, PUT update, DELETE delete), /api/webshell/exec (command execution), /api/webshell/fileop (list/read/write/delete files).
- Security: change the password via /api/auth/change-password, enforce short-lived sessions, and restrict MCP ports at the network layer when exposing the service.

Example config.yaml:

auth:
  password: "change-me"
  session_duration_hours: 12

server:
  host: "0.0.0.0"
  port: 8080

log:
  level: "info"
  output: "stdout"

mcp:
  enabled: true
  host: "0.0.0.0"
  port: 8081
  auth_header: "X-MCP-Token"   # optional; leave empty for no auth
  auth_header_value: ""        # optional; leave empty to auto-generate on first start

openai:
  api_key: "sk-xxx"
  base_url: "https://api.deepseek.com/v1"
  model: "deepseek-chat"

database:
  path: "data/conversations.db"
  knowledge_db_path: "data/knowledge.db"   # Optional: separate DB for knowledge base

security:
  tools_dir: "tools"

knowledge:
  enabled: false                   # Enable knowledge base feature
  base_path: "knowledge_base"      # Path to knowledge base directory
  embedding:
    provider: "openai"             # Embedding provider (currently only "openai")
    model: "text-embedding-v4"     # Embedding model name
    base_url: ""                   # Leave empty to use OpenAI base_url
    api_key: ""                    # Leave empty to use OpenAI api_key
  retrieval:
    top_k: 5                       # Number of top results to return
    similarity_threshold: 0.7      # Minimum cosine similarity (0-1)

roles_dir: "roles"     # Role configuration directory (relative to config file)
skills_dir: "skills"   # Skills directory (relative to config file)
agents_dir: "agents"   # Multi-agent Markdown definitions (orchestrator + sub-agents)

multi_agent:
  enabled: false
  default_mode: "single"           # single | multi (UI default when multi-agent is enabled)
  robot_use_multi_agent: false
  batch_use_multi_agent: false
  orchestrator_instruction: ""     # Deep; used when orchestrator.md body is empty
  # orchestrator_instruction_plan_execute / orchestrator_instruction_supervisor optional
  # eino_skills: { disable: false, filesystem_tools: true, skill_tool_name: skill }
  # eino_middleware: optional patch_tool_calls, tool_search, plantask, reduction, checkpoint_dir, ...
Example tool definition (tools/nmap.yaml):

name: "nmap"
command: "nmap"
args: ["-sT", "-sV", "-sC"]
enabled: true
short_description: "Network mapping & service fingerprinting"
parameters:
  - name: "target"
    type: "string"
    description: "IP or domain"
    required: true
    position: 0
  - name: "ports"
    type: "string"
    flag: "-p"
    description: "Range, e.g. 1-1000"
Example role definition (roles/penetration-testing.yaml):

name: Penetration Testing
description: Professional penetration testing expert for comprehensive security testing
user_prompt: You are a professional cybersecurity penetration testing expert. Please use professional penetration testing methods and tools to conduct comprehensive security testing on targets, including but not limited to SQL injection, XSS, CSRF, file inclusion, command execution and other common vulnerabilities.
icon: "\U0001F3AF"
tools:
- nmap
- sqlmap
- nuclei
- burpsuite
- metasploit
- httpx
- record_vulnerability
- list_knowledge_risk_types
- search_knowledge_base
enabled: true
See docs/MULTI_AGENT_EINO.md for details on agents/*.md, eino_skills / eino_middleware, APIs, and chat/stream behavior.

CyberStrikeAI/
├── cmd/           # Server, MCP stdio entrypoints, tooling
├── internal/      # Agent, MCP core, handlers, security executor
├── web/           # Static SPA + templates
├── tools/         # YAML tool recipes (100+ examples provided)
├── roles/         # Role configurations (12+ predefined security testing roles)
├── skills/        # Agent Skills dirs (SKILL.md + optional files; demo: cyberstrike-eino-demo)
├── agents/        # Multi-agent Markdown (orchestrator.md + sub-agent *.md)
├── docs/          # Documentation (e.g. robot/chatbot guide, MULTI_AGENT_EINO.md)
├── images/        # Docs screenshots & diagrams
├── config.yaml    # Runtime configuration
├── run.sh         # Convenience launcher
└── README*.md
Scan open ports on 192.168.1.1
Perform a comprehensive port scan on 192.168.1.1 focusing on 80,443,22
Check if https://example.com/page?id=1 is vulnerable to SQL injection
Scan https://example.com for hidden directories and outdated software
Enumerate subdomains for example.com, then run nuclei against the results
Load the recon-engagement template, run amass/subfinder, then brute-force dirs on every live host.
Use external Burp-based MCP server for authenticated traffic replay, then pass findings back for graphing.
Compress the 5 MB nuclei report, summarize critical CVEs, and attach the artifact to the conversation.
Build an attack chain for the latest engagement and export the node list with severity >= high.
CyberStrikeAI has joined 404Starlink
CyberStrikeAI is licensed under the Apache License 2.0.
See the LICENSE file for details.
This tool is for educational and authorized testing purposes only!
CyberStrikeAI is a professional security testing platform designed to assist security researchers, penetration testers, and IT professionals in conducting security assessments and vulnerability research with explicit authorization.
By using this tool, you agree to:
The developers are not responsible for any misuse! Please ensure your usage complies with local laws and regulations, and that you have obtained explicit authorization from the target system owner.
Need help or want to contribute? Open an issue or PRβcommunity tooling additions are welcome!