Build your own harness. Understand and research any codebase. Plan complex features. Ship them autonomously.
npx skills add https://github.com/flora131/atomic --skill prompt-engineer

Install this skill with the CLI and start using the SKILL.md workflow in your workspace.
An open-source TypeScript SDK for building harnesses around your coding agent — Claude Code, OpenCode, or GitHub Copilot CLI. Chain agent sessions into deterministic pipelines, add human-in-the-loop approval gates, dispatch 12 specialized sub-agents, and tap 57 built-in skills — then ship it as TypeScript your whole team runs.
Define how your agent works. Start for yourself, scale to your team — across GitHub, Azure DevOps (ADO), or Sapling.
Install, generate context, try Ralph, then write your own workflow — four steps, a few minutes.
Prerequisite: install your agent CLI (claude, opencode, or copilot) and authenticate. The bootstrap script installs Bun, Atomic, and shell completions in one step:
# macOS / Linux
curl -fsSL https://raw.githubusercontent.com/flora131/atomic/main/install.sh | bash
# Windows (PowerShell 7+)
irm https://raw.githubusercontent.com/flora131/atomic/main/install.ps1 | iex
Upgrade later with bun update -g @bastani/atomic.
bun install -g @bastani/atomic
This skips the Bun install step but doesn't set up shell completions — run atomic completions <shell> separately if you want them (see Commands Reference).
Prerelease builds: bun install -g @bastani/atomic@next (may contain breaking changes).
Set GITHUB_TOKEN to avoid GitHub API rate limits when running the bootstrap script in CI:
# macOS / Linux
GITHUB_TOKEN=ghp_... curl -fsSL https://raw.githubusercontent.com/flora131/atomic/main/install.sh | bash
# Windows PowerShell
$env:GITHUB_TOKEN='ghp_...'; irm https://raw.githubusercontent.com/flora131/atomic/main/install.ps1 | iex
Devcontainers isolate the agent from your host, limiting the blast radius of destructive actions. This is the safest way to run workflows.
Add one feature to .devcontainer/devcontainer.json:
| Feature | Agent |
|---|---|
| ghcr.io/flora131/atomic/claude:1 | Atomic + Claude Code |
| ghcr.io/flora131/atomic/opencode:1 | Atomic + OpenCode |
| ghcr.io/flora131/atomic/copilot:1 | Atomic + Copilot CLI |
Full .devcontainer.json templates per agent live in .devcontainer/. Each feature installs Atomic, bun, playwright-cli, agent configs, and the agent CLI itself. First run takes ~1 minute to warm up.
Minimal example (Claude + Rust):
{
"image": "mcr.microsoft.com/devcontainers/rust:latest",
"features": {
"ghcr.io/devcontainers/features/common-utils": {},
"ghcr.io/flora131/atomic/claude:1": {},
"ghcr.io/devcontainers/features/github-cli:1": {}
},
"remoteEnv": {
"ANTHROPIC_API_KEY": "${localEnv:ANTHROPIC_API_KEY}"
}
}
Use the Dev Containers VS Code extension or the Dev Container CLI to start the container.
Atomic used to ship as a standalone binary. It's now an npm package. One-time migration:
atomic uninstall
bun uninstall -g @bastani/atomic-workflows
rm -rf ~/.atomic ~/.copilot/skills ~/.opencode/skills
bun install -g @bastani/atomic
atomic chat -a <claude|opencode|copilot>
Then type /init. Atomic explores your codebase with sub-agents and writes CLAUDE.md / AGENTS.md so every future session starts with the right context.
Ralph plans, implements, reviews, and debugs a task on its own — up to 10 iterations, exiting after 2 consecutive clean reviews.
atomic workflow -n ralph -a claude "Build a REST API for user management"
⚠️ Workflows run with agent permission checks disabled so pipelines don't block on prompts. Run them in a devcontainer or git worktree, not on your host. See Security.
Every team has a process — code review, CI checks, PR creation, approval, merge. Encode it as TypeScript once; everyone runs the same pipeline.
bun init && bun add @bastani/atomic
mkdir -p .atomic/workflows/review-to-merge/claude
Create .atomic/workflows/review-to-merge/claude/index.ts:
import { defineWorkflow } from "@bastani/atomic/workflows";
export default defineWorkflow({
name: "review-to-merge",
description: "Review → CI → PR → Notify → Approve → Merge",
}).for<"claude">()
.run(async (ctx) => {
// 1. Review
const review = await ctx.stage({ name: "review" }, {}, {}, async (s) => {
await s.session.query("Review uncommitted changes for correctness, security, style.");
s.save(s.sessionId);
});
// 2. Run security + CI in parallel
await Promise.all([
ctx.stage({ name: "security-scan" }, {}, {}, async (s) => {
await s.session.query("Run `bun audit` and scan for leaked secrets.");
s.save(s.sessionId);
}),
ctx.stage({ name: "ci-checks" }, {}, {}, async (s) => {
await s.session.query("Run `bun lint` and `bun test`. Report failures.");
s.save(s.sessionId);
}),
]);
// 3. Open PR, then notify Slack + wait for human approval
await ctx.stage({ name: "notify-and-merge" }, {}, {}, async (s) => {
const t = await s.transcript(review);
await s.session.query(`Read ${t.path}. Open a PR summarizing the changes.`);
await fetch("https://slack.com/api/chat.postMessage", {
method: "POST",
headers: { Authorization: `Bearer ${process.env.SLACK_TOKEN}` },
body: JSON.stringify({ channel: "#code-review", text: "PR ready — please approve." }),
});
// Human-in-the-loop: pauses until the user responds
await s.session.query(
"Ask the user to confirm approval, then merge with `gh pr merge --squash`.",
{ allowedTools: ["Bash", "Read", "AskUserQuestion"] },
);
s.save(s.sessionId);
});
})
.compile();
Run it:
atomic workflow -n review-to-merge -a claude
Swap -a claude for -a opencode or -a copilot — same harness, different agent. See Workflow SDK for parallel stages, input schemas, headless stages, and the full API reference.
Every chat and workflow runs inside an isolated tmux session on a dedicated socket (your personal tmux is untouched). If your terminal disconnects, your session keeps running — reconnect anytime.
atomic session list # all sessions
atomic session connect # interactive fuzzy picker
atomic session connect <name> # by name
atomic session kill <name> # kill one (or all, with confirmation)
Session names follow atomic-chat-<id> or atomic-wf-<workflow>-<id>. Scope with atomic chat session … or atomic workflow session ….
Need a workflow to run in the background while you do something else? Pass -d / --detach:
atomic workflow -n ralph -a claude -d "build the auth module" # returns immediately
atomic workflow session connect atomic-wf-claude-ralph-<id> # attach later
Detached mode is what you want for scripted / CI automation and long-running tasks — the orchestrator keeps running on the atomic tmux socket regardless of your terminal.
Better models make harnesses more important, not less. The more you trust an agent to execute complex tasks, the more value you get from defining exactly what it should execute, in what order, with what checks along the way. The harness is the durable layer — models keep improving underneath it, but your process stays the same.
Add production monitoring. Research observability gaps, implement missing metrics and health checks, review the changes.
atomic workflow -n add-monitoring -a claude "add Prometheus metrics and health checks to all API endpoints"
Parallel UX testing with 50 personas. Spin up 50 agents, each with a distinct persona (power user, accessibility-dependent, non-technical stakeholder), each using Playwright to test your app.
atomic workflow -n ux-personas -a claude
Review-to-merge pipeline. The workflow from step 4 above — reviews code, runs CI in parallel, opens a PR, notifies Slack, waits for approval, merges.
[!CAUTION]
Atomic workflows run coding agents with all permission checks disabled. The agent can read, write, and delete files, execute arbitrary shell commands, and make network requests without prompting. This is required for unattended pipelines. Run workflows in a devcontainer, not on your host machine.
| Agent | How permissions are bypassed | Key flags / settings |
|---|---|---|
| Claude Code | CLI flag disables the interactive permission prompt entirely | --dangerously-skip-permissions |
| GitHub Copilot CLI | CLI flag enables auto-execution; SDK auto-approves all tool requests | --yolo, COPILOT_ALLOW_ALL=true, onPermissionRequest: approveAll |
| OpenCode | Permissions handled programmatically through the event stream | Permission requests auto-replied via SSE events |
Defaults live in src/services/config/definitions.ts and src/sdk/runtime/executor.ts. Override per-project via ProviderOverrides in .atomic/settings.json — chatFlags replaces defaults entirely; envVars are merged.
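As an illustration, a per-project override might look like the sketch below. The exact key names and nesting are assumptions inferred from the description above (ProviderOverrides with chatFlags and envVars, plus the scm key covered later), not a verified schema — check the definitions files for the authoritative shape.

```json
{
  "scm": "github",
  "providerOverrides": {
    "claude": {
      "chatFlags": ["--verbose"],
      "envVars": { "MY_PROJECT_FLAG": "1" }
    }
  }
}
```

Remember that chatFlags replaces the default flag set entirely, so include any defaults you still want.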
Atomic works across three production coding agents — switch with a flag and your workflows, skills, and sub-agents carry over.
| Agent | Command |
|---|---|
| Claude Code | atomic chat -a claude |
| OpenCode | atomic chat -a opencode |
| GitHub Copilot CLI | atomic chat -a copilot |
Each agent gets its own configuration directory (.claude/, .opencode/, .github/), skills, and context files — all managed by Atomic.
The Workflow SDK (@bastani/atomic/workflows) lets you encode your team's process as TypeScript — spawn agent sessions dynamically with native control flow (for, if, Promise.all()), and watch them appear in a live graph as they execute.
Set up a workflow project (bun init && bun add @bastani/atomic), create .atomic/workflows/<name>/<agent>/index.ts, and run it:
atomic workflow -n my-workflow -a claude "describe this project"
See step 4 of Quick Start for a complete review-to-merge example. More examples and the full API reference below.
import { defineWorkflow } from "@bastani/atomic/workflows";
export default defineWorkflow({
name: "my-workflow",
description: "Two-session pipeline: describe -> summarize",
inputs: [{ name: "prompt", type: "text", required: true, description: "task prompt" }],
}).for<"claude">()
.run(async (ctx) => {
const prompt = ctx.inputs.prompt ?? "";
const describe = await ctx.stage(
{ name: "describe", description: "Ask Claude to describe the project" },
{}, {},
async (s) => {
await s.session.query(prompt);
s.save(s.sessionId);
},
);
await ctx.stage(
{ name: "summarize", description: "Summarize the previous session's output" },
{}, {},
async (s) => {
const research = await s.transcript(describe);
await s.session.query(`Read ${research.path} and summarize in 2-3 bullets.`);
s.save(s.sessionId);
},
);
})
.compile();
import { defineWorkflow } from "@bastani/atomic/workflows";
export default defineWorkflow({
name: "parallel-demo",
description: "describe -> [summarize-a, summarize-b] -> merge",
inputs: [{ name: "prompt", type: "text", required: true, description: "task prompt" }],
}).for<"claude">()
.run(async (ctx) => {
const prompt = ctx.inputs.prompt ?? "";
const describe = await ctx.stage({ name: "describe" }, {}, {}, async (s) => {
await s.session.query(prompt);
s.save(s.sessionId);
});
const [summarizeA, summarizeB] = await Promise.all([
ctx.stage({ name: "summarize-a" }, {}, {}, async (s) => {
const research = await s.transcript(describe);
await s.session.query(`Read ${research.path} and summarize in 2-3 bullets.`);
s.save(s.sessionId);
}),
ctx.stage({ name: "summarize-b" }, {}, {}, async (s) => {
const research = await s.transcript(describe);
await s.session.query(`Read ${research.path} and summarize in one sentence.`);
s.save(s.sessionId);
}),
]);
await ctx.stage({ name: "merge" }, {}, {}, async (s) => {
const bullets = await s.transcript(summarizeA);
const oneliner = await s.transcript(summarizeB);
await s.session.query(
`Combine:\n\n## Bullets\n${bullets.content}\n\n## One-liner\n${oneliner.content}`,
);
s.save(s.sessionId);
});
})
.compile();
Declare inputs on defineWorkflow and the CLI materialises one --<field>=<value> flag per entry. Required fields, enum membership, and unknown-flag rejection are validated before any tmux session spawns. The interactive picker renders the same schema as a form.
import { defineWorkflow } from "@bastani/atomic/workflows";
export default defineWorkflow({
name: "gen-spec",
description: "Convert a research doc into an execution spec",
inputs: [
{
name: "research_doc",
type: "string",
required: true,
description: "path to the research doc",
placeholder: "research/docs/2026-04-11-auth.md",
},
{
name: "focus",
type: "enum",
required: true,
description: "how aggressively to scope the spec",
values: ["minimal", "standard", "exhaustive"],
default: "standard",
},
{
name: "notes",
type: "text",
description: "extra guidance for the spec writer (optional)",
},
],
}).for<"claude">()
.run(async (ctx) => {
const { research_doc, focus } = ctx.inputs;
const notes = ctx.inputs.notes ?? "";
await ctx.stage({ name: "write-spec" }, {}, {}, async (s) => {
await s.session.query(
`Read ${research_doc} and produce a ${focus} spec.` +
(notes ? `\n\nExtra guidance:\n${notes}` : ""),
);
s.save(s.sessionId);
});
})
.compile();
Run it:
# Named + flags (scriptable; CI-friendly)
atomic workflow -n gen-spec -a claude \
--research_doc=research/docs/2026-04-11-auth.md \
--focus=standard
# Picker (fuzzy-search workflows, fill the form)
atomic workflow -a claude
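Conceptually, the flag materialisation and validation described above behaves like this toy sketch — not Atomic's actual implementation, just the same three rules (required fields, enum membership, unknown-flag rejection) applied before anything spawns:

```typescript
type InputSpec = {
  name: string;
  type: "string" | "text" | "enum";
  required?: boolean;
  values?: string[]; // enum membership, when type is "enum"
  default?: string;
};

// Parse --name=value flags against a declared schema, the way the CLI might.
function parseInputs(specs: InputSpec[], argv: string[]): Record<string, string> {
  const byName = new Map<string, InputSpec>(specs.map((s) => [s.name, s]));
  const out: Record<string, string> = {};
  for (const arg of argv) {
    const m = /^--([^=]+)=(.*)$/.exec(arg);
    if (!m) throw new Error(`malformed flag: ${arg}`);
    const spec = byName.get(m[1]);
    if (!spec) throw new Error(`unknown flag: --${m[1]}`); // unknown-flag rejection
    if (spec.values && !spec.values.includes(m[2]))
      throw new Error(`--${m[1]} must be one of: ${spec.values.join(", ")}`);
    out[m[1]] = m[2];
  }
  for (const spec of specs) {
    if (out[spec.name] === undefined && spec.default !== undefined)
      out[spec.name] = spec.default; // apply declared defaults
    if (spec.required && out[spec.name] === undefined)
      throw new Error(`missing required flag: --${spec.name}`); // required-field check
  }
  return out;
}
```

The interactive picker applies the same schema; only the surface differs (form fields instead of flags).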
Stages can run headlessly (headless: true) — they execute the provider SDK in-process instead of spawning a tmux window. Headless stages are invisible in the graph but tracked via a background counter in the statusline.
import { defineWorkflow, extractAssistantText } from "@bastani/atomic/workflows";
export default defineWorkflow({
name: "headless-demo",
description: "seed -> [3 headless background] -> merge",
inputs: [{ name: "prompt", type: "text", required: true, description: "task prompt" }],
}).for<"claude">()
.run(async (ctx) => {
const prompt = ctx.inputs.prompt ?? "";
const seed = await ctx.stage(
{ name: "seed", description: "Generate overview" }, {}, {},
async (s) => {
const result = await s.session.query(prompt);
s.save(s.sessionId);
return extractAssistantText(result, 0);
},
);
const [pros, cons, uses] = await Promise.all([
ctx.stage({ name: "pros", headless: true }, {}, {}, async (s) => {
const r = await s.session.query(`List 3 pros:\n\n${seed.result}`);
s.save(s.sessionId);
return extractAssistantText(r, 0);
}),
ctx.stage({ name: "cons", headless: true }, {}, {}, async (s) => {
const r = await s.session.query(`List 3 cons:\n\n${seed.result}`);
s.save(s.sessionId);
return extractAssistantText(r, 0);
}),
ctx.stage({ name: "uses", headless: true }, {}, {}, async (s) => {
const r = await s.session.query(`List 3 use cases:\n\n${seed.result}`);
s.save(s.sessionId);
return extractAssistantText(r, 0);
}),
]);
await ctx.stage(
{ name: "merge", description: "Combine results" }, {}, {},
async (s) => {
await s.session.query(
`Combine:\n\n## Pros\n${pros.result}\n\n## Cons\n${cons.result}\n\n## Uses\n${uses.result}`,
);
s.save(s.sessionId);
},
);
})
.compile();
The graph shows seed → merge — headless stages are transparent to the topology. The callback API (s.client, s.session, s.save(), s.transcript(), return values) is identical to interactive stages.
Key capabilities:
| Capability | Description |
|---|---|
| Dynamic session spawning | ctx.stage() spawns sessions at runtime — each gets its own tmux window and graph node |
| Native TypeScript control flow | Use for, if/else, Promise.all(), try/catch — no framework DSL |
| Session return values | Session callbacks can return data: const h = await ctx.stage(...); h.result |
| Transcript passing | Access prior output via handle (s.transcript(handle)) or name (s.transcript("name")) |
| Declared input schemas | Add an inputs: [...] array and the CLI materialises --<field>=<value> flags with built-in validation |
| Interactive picker | atomic workflow -a <agent> renders input schemas as forms — no flag memorisation |
| Nested sub-sessions | s.stage() inside a callback spawns child sessions — visible as nested graph nodes |
| Auto-inferred graph | Topology derived from await / Promise.all patterns — no annotations |
| Provider-agnostic | Write raw SDK code for Claude, Copilot, or OpenCode inside each callback |
| Live graph visualization | Sessions appear in the TUI graph as they spawn — loops and conditionals visible in real time |
| Background (headless) stages | headless: true runs in-process without a tmux window — invisible in graph, tracked by statusline counter, identical callback API |
Deterministic execution guarantees:
Workflows are deterministic by design — the same definition produces the same execution order with the same data flow, anywhere.
- .compile() freezes the workflow. Once compiled, step order, session names, and the execution graph are immutable.
- Data flows between sessions only through explicit ctx.transcript() / ctx.getMessages() calls.
- Variance comes only from the LLM's responses, not from the harness.
Ask Atomic to build workflows for you:
Use your workflow-creator skill to create a workflow that plans, implements, and reviews a feature.
| Method | Purpose |
|---|---|
| defineWorkflow({ name, description }) | Entry point — returns a WorkflowBuilder |
| .run(async (ctx) => { ... }) | Set the workflow's entry point — ctx is a WorkflowContext |
| .compile() | Required — terminal method that seals the workflow definition |
WorkflowContext (ctx) — top-level orchestrator

| Property | Type | Description |
|---|---|---|
| ctx.inputs | { [K in N]?: string } | Typed inputs for this run — only declared field names are valid keys. Accessing an undeclared field is a compile-time error. Workflows that need a prompt must declare it in their inputs schema |
| ctx.agent | AgentType | Which agent is running ("claude", "copilot", "opencode") |
| ctx.stage(opts, clientOpts, sessionOpts, fn) | Promise<SessionHandle<T>> | Spawn a session — returns handle with name, id, result |
| ctx.transcript(ref) | Promise<Transcript> | Get a completed session's transcript ({ path, content }) |
| ctx.getMessages(ref) | Promise<SavedMessage[]> | Get a completed session's raw native messages |
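The { [K in N]?: string } typing above can be sketched in isolation. This is a hedged illustration of the underlying TypeScript technique (declaring input names as a string-literal union so undeclared keys are a compile-time error) — the names here are hypothetical, not Atomic's internals:

```typescript
// N is the union of declared input names, e.g. "prompt" | "focus".
type Inputs<N extends string> = { [K in N]?: string };

// Build a typed inputs object from declared names plus raw CLI values.
function makeInputs<N extends string>(
  names: readonly N[],
  values: Record<string, string>,
): Inputs<N> {
  const out = {} as Inputs<N>;
  for (const n of names) {
    if (values[n] !== undefined) out[n] = values[n];
  }
  return out;
}

// `as const` makes N infer as the literal union "prompt" | "focus".
const inputs = makeInputs(["prompt", "focus"] as const, {
  prompt: "describe this project",
});

inputs.prompt; // ok: string | undefined
// inputs.typo; // compile-time error: "typo" is not a declared input name
```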
Session context (s) — inside each session callback

| Property | Type | Description |
|---|---|---|
| s.client | ProviderClient<A> | Pre-created SDK client (auto-managed by runtime) |
| s.session | ProviderSession<A> | Pre-created provider session (auto-managed by runtime) |
| s.inputs | { [K in N]?: string } | Same typed inputs as ctx.inputs, forwarded into every stage so callbacks can read values without closing over the outer ctx |
| s.agent | AgentType | Which agent is running |
| s.paneId | string | tmux pane ID for this session |
| s.sessionId | string | Session UUID |
| s.sessionDir | string | Path to this session's storage directory on disk |
| s.save(messages) | SaveTranscript | Save this session's output for subsequent sessions |
| s.transcript(ref) | Promise<Transcript> | Get a completed session's transcript |
| s.getMessages(ref) | Promise<SavedMessage[]> | Get a completed session's raw native messages |
| s.stage(opts, clientOpts, sessionOpts, fn) | Promise<SessionHandle<T>> | Spawn a nested sub-session (child in the graph) |
Stage options (SessionRunOptions)

| Property | Type | Description |
|---|---|---|
| name | string | Unique session name within the workflow run |
| description | string? | Human-readable description shown in the graph |
| headless | boolean? | When true, run in-process without a tmux window — invisible in graph, tracked by background counter |
The runtime auto-infers parent-child edges from execution order: sequential await creates a chain, Promise.all creates parallel fan-out/fan-in — no annotations needed.
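A toy model makes the inference concrete. This is not Atomic's real runtime — just a sketch of the same idea: record each stage's start against the set of stages already completed, so sequential awaits produce chains and Promise.all produces fan-out from a shared parent:

```typescript
type Edge = [from: string, to: string];

// Toy tracker: each new stage gets an edge from every stage completed before it started.
class GraphTracker {
  private completed: string[] = [];
  edges: Edge[] = [];

  async stage(name: string, work: () => Promise<void> = async () => {}): Promise<string> {
    const parents = [...this.completed]; // snapshot at the moment the stage starts
    for (const p of parents) this.edges.push([p, name]);
    await work();
    this.completed.push(name);
    return name;
  }
}

async function demo(): Promise<Edge[]> {
  const g = new GraphTracker();
  await g.stage("describe"); // root
  // Both stages start while only "describe" is completed -> fan-out, no edge between them.
  await Promise.all([g.stage("summarize-a"), g.stage("summarize-b")]);
  // Starts after all three completed -> fan-in.
  // (A real runtime would transitively reduce: describe -> merge is implied.)
  await g.stage("merge");
  return g.edges;
}
```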
Each provider saves transcripts differently:
| Provider | How to Save |
|---|---|
| Claude | s.save(s.sessionId) — auto-reads via getSessionMessages() |
| Copilot | s.save(await session.getMessages()) — pass SessionEvent[] |
| OpenCode | s.save(result.data!) — pass the full { info, parts } response |
The runtime auto-creates s.client and s.session — use them directly inside the callback:
| Agent | How to send a prompt |
|---|---|
| Claude | await s.session.query(prompt) |
| Copilot | await s.session.send({ prompt }) |
| OpenCode | await s.client.session.prompt({ sessionID: s.session.id, parts: [{ type: "text", text: prompt }] }) |
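Inside a callback you can wrap the three call shapes from the table behind one helper. The sketch below uses mocked session objects purely for illustration — in a real workflow, s.session and s.client come pre-created from the runtime:

```typescript
type AgentType = "claude" | "copilot" | "opencode";

// Minimal shape mirroring the stage context fields this helper needs (illustrative only).
interface StageLike {
  agent: AgentType;
  session: any;
  client?: any;
}

// Dispatch a prompt using the call shape each provider expects.
async function sendPrompt(s: StageLike, prompt: string): Promise<unknown> {
  switch (s.agent) {
    case "claude":
      return s.session.query(prompt);
    case "copilot":
      return s.session.send({ prompt });
    case "opencode":
      return s.client.session.prompt({
        sessionID: s.session.id,
        parts: [{ type: "text", text: prompt }],
      });
    default:
      throw new Error(`unknown agent: ${s.agent}`);
  }
}
```

With a helper like this, a single callback body can stay provider-agnostic while the branch points stay explicit.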
Authoring rules:

- export default a builder with .run() and .compile()
- transcript() / getMessages() only access completed sessions (callback returned + saves flushed)
- Workflows live at .atomic/workflows/<name>/<agent>/index.ts
- Workflow projects need bun init && bun add @bastani/atomic
- In headless stages, s.client, s.session, s.save(), and return values all work identically

For the authoring walkthrough, ask Atomic to use the workflow-creator skill or read .agents/skills/workflow-creator/.
[!TIP]
When the Workflow SDK is updated, ask the workflow-creator skill to migrate your workflows to the latest patterns: "Update this workflow to use the latest SDK patterns."
The /research-codebase command dispatches specialized sub-agents in parallel to analyze your codebase — understand auth flows, trace root causes, query docs, and hit external sources via DeepWiki MCP. Get up to speed on a new project in minutes instead of hours.
| Sub-Agent | Model | Purpose |
|---|---|---|
| codebase-locator | Haiku | Locate files, directories, and components relevant to the research topic |
| codebase-analyzer | Sonnet | Analyze implementation details, trace data flow, explain technical workings |
| codebase-pattern-finder | Haiku | Find similar implementations, usage examples, and existing patterns to model after |
| codebase-online-researcher | Sonnet | Fetch up-to-date information from the web and repository knowledge from DeepWiki |
| codebase-research-locator | Haiku | Discover relevant documents in research/ and specs/ directories |
| codebase-research-analyzer | Sonnet | Extract high-value insights, decisions, and technical details from research documents |
Run parallel research sessions to compare approaches:
# Terminal 1: LangChain approach
atomic chat -a claude "/research-codebase Research GraphRAG using LangChain's graph retrieval."
# Terminal 2: Microsoft GraphRAG
atomic chat -a claude "/research-codebase Research GraphRAG using microsoft/graphrag."
# Terminal 3: LlamaIndex approach
atomic chat -a claude "/research-codebase Research GraphRAG using LlamaIndex's property graph."
Then run /create-spec on each output, spin up git worktrees, and run atomic workflow -n ralph in each — wake up to three complete implementations on separate branches. Research persists in research/ and specs in specs/, so every investigation compounds into future context.
A single agent asked to "research the auth system" tries to search, read, analyze, and summarize within one context window. As that window fills with file contents, search results, and intermediate reasoning, synthesis degrades — this is a fundamental constraint of transformer attention, not a prompt-engineering problem.
Atomic dispatches purpose-built sub-agents: a codebase-locator only finds relevant files, a codebase-analyzer only reads and analyzes implementations, a codebase-online-researcher only queries external docs. Each operates in its own context with only the tools it needs; the parent receives distilled findings. The result: faster research, higher-quality findings, less hallucination.
The Ralph Method enables multi-hour autonomous coding sessions. Approve your spec, let Ralph work in the background, focus on other things.
How Ralph works:
1. A planner sub-agent breaks your spec into a task list with dependency tracking, stored in SQLite (WAL mode for parallel access).
2. An orchestrator retrieves the task list, validates the dependency graph, and dispatches worker sub-agents for ready tasks.
3. A reviewer audits the implementation with structured JSON output; if P0–P2 findings exist, a debugger investigates root causes and feeds back to the planner on the next iteration.

Loop config: up to 10 iterations. Exits early after 2 consecutive clean reviews (zero actionable findings). P3 (minor) findings are non-actionable.
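The exit condition can be sketched as a loop. This is a toy model of the iteration logic described above, not Ralph's actual code — iterate() stands in for one full plan/implement/review/debug pass:

```typescript
type Review = { actionableFindings: number }; // P0–P2 count; P3 is non-actionable

// Run up to maxIterations, exiting early after `cleanStreakTarget` consecutive clean reviews.
async function ralphLoop(
  iterate: () => Promise<Review>,
  maxIterations = 10,
  cleanStreakTarget = 2,
): Promise<{ iterations: number; converged: boolean }> {
  let streak = 0;
  for (let i = 1; i <= maxIterations; i++) {
    const review = await iterate(); // plan -> implement -> review -> debug
    streak = review.actionableFindings === 0 ? streak + 1 : 0; // any finding resets the streak
    if (streak >= cleanStreakTarget) return { iterations: i, converged: true };
  }
  return { iterations: maxIterations, converged: false };
}
```

Note that a single clean review is not enough: one dirty review resets the streak, so the loop only exits once two clean reviews arrive back to back.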
# From a prompt
atomic workflow -n ralph -a <claude|opencode|copilot> "Build a REST API for user management"
# From a spec file
atomic workflow -n ralph -a claude "specs/YYYY-MM-DD-my-feature.md"
Best practice: run Ralph in a git worktree so autonomous changes stay isolated from your working tree:
git worktree add ../my-project-ralph feature-branch
cd ../my-project-ralph
atomic workflow -n ralph -a claude "Build the auth module"
Atomic ships a deep-research-codebase workflow that performs multi-agent parallel research across your codebase — a full pipeline, not a single-shot command.
Findings are saved under research/docs/ as research/docs/YYYY-MM-DD-<slug>.md.

atomic workflow -n deep-research-codebase -a claude "How does the authentication system work?"
The output is a permanent research artifact that future runs, specs, and workflows can reference.
Atomic ships as devcontainer features that bundle the CLI, agent, and all dependencies into isolated containers — the recommended way to run autonomous agents safely.
Why containerize?
- Workflows run with permission checks disabled, so agents can execute rm, git reset --hard, and arbitrary shell commands — containers limit the blast radius

| Feature | Installs |
|---|---|
| ghcr.io/flora131/atomic/claude:1 | Atomic + Claude Code |
| ghcr.io/flora131/atomic/opencode:1 | Atomic + OpenCode |
| ghcr.io/flora131/atomic/copilot:1 | Atomic + Copilot CLI |
See Quick Start → Devcontainer for a working .devcontainer.json and the .devcontainer/ directory for per-agent templates.
Atomic dispatches purpose-built sub-agents, each with scoped context, tools, and termination conditions:
| Sub-Agent | Purpose |
|---|---|
| planner | Decompose specs into structured task lists with dependency tracking |
| worker | Implement single focused tasks (multiple workers run in parallel) |
| reviewer | Audit implementations against specs and best practices |
| code-simplifier | Simplify and refine code for clarity, consistency, maintainability |
| orchestrator | Coordinate complex multi-step workflows |
| codebase-analyzer | Analyze implementation details of specific components |
| codebase-locator | Locate files, directories, and components |
| codebase-pattern-finder | Find similar implementations and usage examples |
| codebase-online-researcher | Research using web sources and DeepWiki |
| codebase-research-analyzer | Deep dive on research topics |
| codebase-research-locator | Find documents in research/ directory |
| debugger | Debug errors, test failures, and unexpected behavior |
LLMs have an architectural limitation: the more context they hold, the harder it becomes to attend to the right information. A single agent juggling a spec, dozens of files, tool outputs, and its own reasoning will lose details, repeat work, or hallucinate connections. This isn't solvable via prompt engineering — it's how attention mechanisms work.
Specialized sub-agents turn the limitation into an advantage:
- A codebase-locator doesn't carry file contents; a worker doesn't carry the full spec.
- A reviewer has read-only tools and can't edit files; a worker has edit tools but can't spawn other workers.
- The reviewer used by Ralph is the one invoked when you ask for a code review in chat.
Use /agents in any chat session to see all available sub-agents.
Skills are structured capability modules that give agents best practices and reusable workflows. Atomic ships 57 skills across eight categories; each lives at .agents/skills/<name>/SKILL.md and is auto-invoked when the agent detects a relevant trigger.
| Skill | Description |
|---|---|
| init | Generate CLAUDE.md and AGENTS.md by exploring the codebase |
| research-codebase | Analyze codebase with parallel sub-agents and document findings |
| create-spec | Create detailed execution plans from research documents |
| workflow-creator | Create multi-agent workflows using the session-based defineWorkflow() API |
| explain-code | Explain code functionality in detail using DeepWiki |
| find-skills | Discover and install agent skills from the community |
| test-driven-development | Write tests first; includes a testing anti-patterns guide |
| prompt-engineer | Create, improve, and optimize prompts using best practices |
| Skill | Description |
|---|---|
| context-fundamentals | How context windows work; attention mechanics; progressive disclosure |
| context-degradation | Diagnose lost-in-middle, poisoning, distraction failures in long runs |
| context-compression | Summarize transcripts at session boundaries; preserve actionable info |
| context-optimization | KV-cache optimization, observation masking, context budgeting |
| filesystem-context | Offload context to files; file-based agent coordination |
| memory-systems | Cross-session knowledge retention; Mem0 / Zep / Letta comparisons |
| multi-agent-patterns | Supervisor, swarm, handoff patterns for multi-agent systems |
| tool-design | Design clear tool contracts; reduce agent-tool friction |
| hosted-agents | Background agents in sandboxed VMs; warm pools; Modal sandboxes |
| project-development | Validate task-model fit before building; cost estimation |
| bdi-mental-states | Belief-desire-intention models for explainable agent reasoning |
| Skill | Description |
|---|---|
| typescript-expert | Type-level programming, perf optimization, migrations |
| typescript-advanced-types | Generics, conditional types, mapped types, template literals |
| typescript-react-reviewer | Expert review for TypeScript + React 19 applications |
| bun | Build, test, deploy with Bun (runtime, package manager, bundler, tests) |
| opentui | Build terminal UIs with OpenTUI (core, React, Solid reconcilers) |
| Skill | Description |
|---|---|
| impeccable | Create distinctive, production-grade frontend interfaces |
| polish | Final quality pass on alignment, spacing, consistency |
| critique | UX evaluation with quantitative scoring and persona testing |
| audit | Accessibility, performance, theming, responsive, anti-pattern audit |
| layout / typeset / colorize | Layout, typography, and color refinement |
| adapt | Responsive design: breakpoints, fluid layouts, touch targets |
| animate / delight | Add motion, micro-interactions, and personality |
| clarify | Improve UX copy, error messages, microcopy, labels |
| distill / quieter / bolder / overdrive | Simplify, tone down, amplify, or push designs to their limit |
| harden | Error handling, onboarding, empty states, i18n, overflow, edge-case resilience |
| optimize | Diagnose and fix loading, rendering, animation, bundle-size issues |
Evaluation:
| Skill | Description |
|---|---|
| evaluation | Multi-dimensional evaluation, LLM-as-judge, quality gates |
| advanced-evaluation | Pairwise comparison, position-bias mitigation, evaluation pipelines |
Documents & parsing:
| Skill | Description |
|---|---|
| pdf | Read, create, edit, split, merge, and OCR PDF files |
| xlsx | Create, read, edit, and fix spreadsheet files (.xlsx, .csv, .tsv) |
| docx | Create, read, edit, and manipulate Word (.docx) documents |
| pptx | Create, read, edit, and manipulate PowerPoint (.pptx) slide decks |
| liteparse | Parse and convert unstructured files (PDF, DOCX, PPTX, images) locally |
Git / Azure DevOps / Sapling / automation:
| Skill | Description |
|---|---|
| gh-commit | Conventional-commit Git commits |
| gh-create-pr | Commit unstaged changes, push, and submit a GitHub PR |
| ado-commit | Conventional-commit Git commits for Azure DevOps (adds AB#<id> trailers) |
| ado-create-pr | Commit, push, and open an Azure DevOps PR via the azure-devops MCP server |
| sl-commit | Conventional-commit Sapling commits |
| sl-submit-diff | Submit Sapling commits as Phabricator diffs |
| playwright-cli | Automate browser interactions, tests, screenshots |
Note on source control providers: the GitHub and Azure DevOps MCP servers are disabled by default to avoid consuming tokens on projects that don't need them. Set
scm in .atomic/settings.json (or run atomic config set scm <provider>) to github, azure-devops, or sapling — on every atomic chat / atomic workflow startup Atomic reconciles .claude/settings.json (disabledMcpjsonServers), .opencode/opencode.json (mcp.<server>.enabled), and appends --disable-mcp-server <name> to the Copilot CLI invocation (Copilot has no on-disk MCP toggle). sapling disables both servers everywhere.
Meta:
| Skill | Description |
|---|---|
| skill-creator | Create, modify, evaluate, and benchmark your own skills |
Skills are auto-invoked when relevant. Run ls .agents/skills/ for the complete, current list on disk.
During atomic workflow execution, Atomic renders a live orchestrator panel built on OpenTUI over the workflow's tmux session graph. It shows:
- each `.stage()` with status (pending / running / completed / failed) and edges for sequential / parallel dependencies
- `s.save()` / `s.transcript()` handoffs as they happen

During `atomic chat`, there is no Atomic-owned TUI — `atomic chat -a <agent>` spawns the native agent CLI inside a tmux session, so chat features (streaming, @ mentions, /slash-commands, model selection, theme, keyboard shortcuts) come from the agent CLI itself. Atomic handles config sync, tmux session management, and argument passthrough.
| Context | UI provider |
|---|---|
| `atomic workflow -n <name> -a <agent>` | Atomic (orchestrator panel + tmux session graph) |
| `atomic chat -a <agent>` | The native agent CLI (Claude Code / OpenCode / Copilot CLI) |
| Command | Description |
|---|---|
| `atomic chat` | Spawn the native agent CLI inside a tmux session |
| `atomic workflow` | Run a multi-session workflow with the Atomic orchestrator panel |
| `atomic workflow list` | List available workflows, grouped by source |
| `atomic session list` | List all running sessions on the atomic tmux socket |
| `atomic session connect [name]` | Attach to a session (interactive picker when no name given) |
| `atomic session kill [name]` | Kill a session by name, or all sessions when no name is given |
| `atomic completions <shell>` | Output shell completion script (bash, zsh, fish, powershell) |
| `atomic config set <k> <v>` | Set configuration values (supports telemetry and scm) |
| Flag | Description |
|---|---|
| `-y, --yes` | Auto-confirm all prompts (non-interactive) |
| `--no-banner` | Skip ASCII banner display |
| `-v, --version` | Show version number |
`atomic session` Subcommands
Available at three levels — scoped or global:
| Command | Description |
|---|---|
| `atomic session list` | List all running sessions |
| `atomic session connect [name]` | Attach to a session (interactive picker when no name) |
| `atomic session kill [name]` | Kill a session, or all sessions when no name is given |
| `atomic chat session list` | List running chat sessions only |
| `atomic chat session connect [name]` | Attach to a chat session |
| `atomic chat session kill [name]` | Kill a chat session, or all chat sessions |
| `atomic workflow session list` | List running workflow sessions only |
| `atomic workflow session connect [name]` | Attach to a workflow session |
| `atomic workflow session kill [name]` | Kill a workflow session, or all workflow sessions |
`list`, `connect`, and `kill` accept `-a <agent>` (repeatable) to filter by agent. `kill` prompts for confirmation.
atomic session list # all sessions
atomic session list -a claude # only Claude sessions
atomic session connect my-session # attach by name
atomic session connect # interactive picker
atomic chat session list -a copilot # chat sessions for Copilot only
atomic session kill my-session # kill one session by name
atomic session kill # kill all sessions (with confirmation)
atomic workflow session kill -a claude # kill all Claude workflow sessions
`atomic chat` Flags
| Flag | Description |
|---|---|
| `-a, --agent <name>` | Agent: claude, opencode, copilot |
All other arguments are forwarded directly to the native agent CLI:
atomic chat -a claude "fix the bug" # initial prompt
atomic chat -a copilot --model gpt-5.4 # custom model
atomic chat -a claude --verbose # forward --verbose to claude
`atomic workflow` Flags
| Flag | Description |
|---|---|
| `-n, --name <name>` | Workflow name (matches directory under .atomic/workflows/<name>/) |
| `-a, --agent <name>` | Agent: claude, opencode, copilot |
| `-d, --detach` | Start the workflow in the background without attaching — ideal for scripted / CI runs; attach later with `atomic workflow session connect <name>` |
| `--<field>=<value>` | Structured input for workflows that declare an inputs schema (also accepts `--<field> <value>`) |
| `[prompt...]` | Positional prompt — requires the workflow to declare a prompt input |
Five invocation shapes:
# 1. List every workflow available, grouped by source
atomic workflow list
atomic workflow list -a claude # filter by agent
# 2. Launch the interactive picker (no -n) — fuzzy-search, fill the form, confirm with y/n
atomic workflow -a claude
# 3. Run with a positional prompt (workflow must declare a "prompt" input)
atomic workflow -n ralph -a claude "build a REST API for user management"
# 4. Run a structured-input workflow with one --<field> flag per declared input
atomic workflow -n gen-spec -a claude \
--research_doc=research/docs/2026-04-11-auth.md \
--focus=standard
# 5. Run detached — orchestrator runs in the background; prints the session name
# and returns immediately. Attach any time with `atomic workflow session connect`.
atomic workflow -n ralph -a claude -d "build a REST API for user management"
Workflows that declare `inputs: WorkflowInput[]` get CLI flag validation for free. Built-in workflows (e.g. `ralph`) are reserved — a local or global workflow with the same name will not shadow a built-in.
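To make the inputs-to-flags contract concrete, here is a hedged sketch: the local `WorkflowInput` type below is a stand-in for the SDK's shape (its field names are assumptions, and it is not imported from `@bastani/atomic`), and the validation function illustrates the behavior rather than the SDK's actual implementation.

```typescript
// Illustrative stand-in for the SDK's WorkflowInput shape — field names
// beyond `name` are assumptions, not the real API.
type WorkflowInput = {
  name: string;
  description?: string;
  required?: boolean;
};

// A structured-input workflow like gen-spec might declare:
const inputs: WorkflowInput[] = [
  { name: "research_doc", description: "Path to a research doc", required: true },
  { name: "focus", description: "Spec focus level" },
];

// Conceptually, the CLI derives one --<field> flag per declared input
// and rejects runs that omit a required field:
function missingRequired(provided: Record<string, string>): string[] {
  return inputs
    .filter((i) => i.required && !(i.name in provided))
    .map((i) => i.name);
}

console.log(missingRequired({ focus: "standard" })); // → ["research_doc"]
```

A run supplying `--research_doc=...` would pass validation; one supplying only `--focus` would be rejected before any agent session starts.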
`atomic completions` — Shell Completions
Atomic ships tab-completion for bash, zsh, fish, and PowerShell. Cache the script once so new shells don't re-spawn the atomic binary on startup.
Bash
mkdir -p ~/.atomic/completions
atomic completions bash > ~/.atomic/completions/atomic.bash
echo '[ -f "$HOME/.atomic/completions/atomic.bash" ] && source "$HOME/.atomic/completions/atomic.bash"' >> ~/.bashrc
Zsh
mkdir -p ~/.atomic/completions
atomic completions zsh > ~/.atomic/completions/atomic.zsh
echo '[ -f "$HOME/.atomic/completions/atomic.zsh" ] && source "$HOME/.atomic/completions/atomic.zsh"' >> ~/.zshrc
Fish
atomic completions fish > ~/.config/fish/completions/atomic.fish
PowerShell
$cache = Join-Path $HOME '.atomic\completions\atomic.ps1'
New-Item -ItemType Directory -Force -Path (Split-Path $cache) | Out-Null
atomic completions powershell | Out-File -FilePath $cache -Encoding utf8
Add-Content $PROFILE "`nif (Test-Path `"$cache`") { . `"$cache`" }"
The bootstrap installer (`install.sh` / `install.ps1`) sets this up automatically and migrates older `eval "$(atomic completions …)"` snippets to the cached form.
Atomic ships skills — not slash commands. Skills are auto-discovered by Claude Code, OpenCode, and Copilot CLI, invoked by typing /<skill-name> (Claude Code) or by natural-language reference (OpenCode / Copilot CLI).
| Skill | Typical invocation | Purpose |
|---|---|---|
| `init` | `/init` | Generate CLAUDE.md and AGENTS.md by exploring the codebase |
| `research-codebase` | `/research-codebase "<question>"` | Dispatch parallel sub-agents to analyze the codebase and write a research doc |
| `create-spec` | `/create-spec "<research-path>"` | Produce a technical spec grounded in a research document |
| `explain-code` | `/explain-code "<path>"` | Deep-dive explanation of specific code using DeepWiki |
| `gh-commit` | `/gh-commit` | Create a conventional-commit Git commit |
| `gh-create-pr` | `/gh-create-pr` | Commit, push, and open a GitHub pull request |
| `ado-commit` | `/ado-commit` | Create a conventional-commit Git commit on an Azure DevOps-hosted repo |
| `ado-create-pr` | `/ado-create-pr` | Commit, push, and open an Azure DevOps PR through the azure-devops MCP server |
| `sl-commit` | `/sl-commit` | Create a Sapling commit |
| `sl-submit-diff` | `/sl-submit-diff` | Submit a Sapling commit as a Phabricator diff |
| `workflow-creator` | natural language | Generate a multi-agent workflow file in .atomic/workflows/ |
Native slash commands (/help, /clear, /compact, /model, /theme, /agents, /mcp, /exit) come from the underlying agent CLI, not Atomic.
.atomic/settings.json
Resolution order: `.atomic/settings.json` → `~/.atomic/settings.json`
{
"$schema": "https://raw.githubusercontent.com/flora131/atomic/main/assets/settings.schema.json",
"version": 1,
"scm": "github",
"lastUpdated": "2026-04-09T12:00:00.000Z"
}
| Field | Type | Description |
|---|---|---|
| `$schema` | string | JSON Schema URL for editor autocomplete |
| `version` | number | Config schema version (currently 1) |
| `scm` | string | Source control provider — github, azure-devops, or sapling. Reconciles the GitHub / Azure DevOps MCP servers in agent configs on startup. |
| `lastUpdated` | string | ISO 8601 timestamp of the last update |
Model selection and reasoning effort are managed by each underlying agent CLI (e.g. Claude Code's `/model`), not Atomic. Atomic's chat command spawns the agent's native TUI — use the agent's own controls.
| Agent | Folder | Skills | Context File |
|---|---|---|---|
| Claude Code | `.claude/` | `.claude/skills/` (symlink → `.agents/skills/`) | CLAUDE.md |
| OpenCode | `.opencode/` | `.agents/skills/` | AGENTS.md |
| GitHub Copilot | `.github/` | `.agents/skills/` | AGENTS.md |
All three agents share the same skill set via .agents/skills/. Claude Code accesses them through a .claude/skills/ symlink.
bun update -g @bastani/atomic # latest stable
bun install -g @bastani/atomic@next # prerelease
The first atomic run after upgrading auto-syncs tooling deps and global skills — no separate command needed.
bun remove -g @bastani/atomic
# macOS / Linux
rm -rf ~/.atomic/
# Windows PowerShell
Remove-Item -Path "$env:USERPROFILE\.atomic" -Recurse -Force
git config --global user.name "Your Name"
git config --global user.email "[email protected]"
Ensure the agent CLI is in your PATH. Atomic uses Bun.which(), which handles .cmd, .exe, and .bat extensions automatically.
Spec Kit is GitHub's toolkit for "Spec-Driven Development." Both improve AI-assisted development, but solve different problems:
In short: Spec-Kit works well for greenfield projects where you start from a spec and use a single Copilot session to generate code. Atomic is built for the harder case — large existing codebases where you need to research what's already there before changing anything. Atomic gives you multi-session pipelines with isolated context windows, deterministic execution, and support for Claude Code, OpenCode, and Copilot CLI instead of just one agent.
| Aspect | Spec-Kit | Atomic |
|---|---|---|
| Focus | Greenfield projects with spec-first workflow | Large existing codebases + greenfield — research-first or spec-first |
| First Step | Define project principles and specs | Analyze existing architecture with parallel research sub-agents |
| Workflow Definition | Shell scripts and markdown templates | TypeScript Workflow SDK (defineWorkflow() → .run() → .compile()) with deterministic execution |
| Session Management | Single agent session | Multi-session pipelines — sequential and parallel — each in isolated context windows |
| Data Flow | Manual — copy output between steps | Controlled transcript passing via ctx.transcript() and ctx.getMessages() |
| Agent Support | GitHub Copilot CLI | Claude Code + OpenCode + Copilot CLI — switch with a flag |
| Sub-Agents | Single general-purpose agent | 12 specialized sub-agents with scoped tools and isolated contexts |
| Skills | Not available | 57 built-in skills (development, design, docs, agent architecture) |
| Autonomous Execution | Not available | Ralph — multi-hour autonomous sessions with plan/implement/review/debug loop |
| Execution Guarantees | Non-deterministic | Deterministic — strict step ordering, frozen definitions, controlled transcript access |
| Isolation | Not addressed | Devcontainer features for containerized execution |
DeerFlow is ByteDance's agent harness built on LangGraph/LangChain. Both are multi-agent orchestrators, but take different approaches:
In short: DeerFlow is a general-purpose agent orchestrator with a web UI. Atomic is narrowly focused on coding workflows. The key difference is that Atomic runs on top of production coding agents (Claude Code, OpenCode, Copilot CLI) rather than reimplementing coding tools through a generic API — you get each agent's native file editing, permissions, MCP integrations, and hooks out of the box. Atomic also gives you deterministic execution, which matters when encoding a team's dev process.
| Aspect | DeerFlow | Atomic |
|---|---|---|
| Runtime | Python (LangGraph) | TypeScript (Bun) |
| Agent SDKs | OpenAI-compatible API | Claude Code + OpenCode + Copilot CLI native SDKs — write raw SDK code in each session |
| Focus | General-purpose agent tasks (research, reports) | Coding-specific: research, spec, implement, review, debug |
| Workflow Definition | LangGraph state machines with graph nodes | TypeScript Workflow SDK — defineWorkflow() → .run() → .compile() |
| Execution Model | DAG-based with conditional edges | Deterministic — strict step ordering, frozen definitions, controlled transcript passing |
| Parallelism | Via LangGraph branch nodes | Native parallel sessions via Promise.all() with ctx.session() in isolated context windows |
| Sub-Agents | Researcher, coder, reporter nodes | 12 specialized sub-agents with scoped tools (planner, worker, reviewer, debugger, etc.) |
| Skills | Not available | 57 built-in skills auto-invoked by context |
| Isolation | Sandbox containers | Devcontainer features + git worktrees |
| Interface | Web UI (Streamlit) | Terminal chat with tmux-based session management |
| Autonomous | Not available | Ralph — bounded iteration with plan/implement/review/debug loop |
| Distribution | `pip install` + local server | `bun install -g` or devcontainer features |
Hermes Agent is Nous Research's general-purpose AI agent with a self-improving learning loop. Both are open-source agent frameworks, but serve different use cases:
In short: Hermes is a broad AI assistant that learns across sessions and connects to messaging platforms. Atomic is a coding-specific harness for engineering teams. It lets you encode your development process as deterministic TypeScript workflows that run identically across team members, machines, and CI. Atomic inherits production-hardened tools from Claude Code, OpenCode, and Copilot CLI — including their permission systems, MCP integrations, and hooks — giving you two independent security boundaries (devcontainer isolation + agent permissions). Fresh context per session keeps output sharp over multi-hour tasks. Developer-authored skills don't drift the way auto-generated ones can.
| Aspect | Hermes Agent | Atomic |
|---|---|---|
| Focus | General-purpose AI assistant (coding, messaging, smart home, research) | Coding-specific: multi-session workflows on coding agents |
| Runtime | Python 3.11+ (uv) | TypeScript (Bun) |
| Agent SDKs | OpenAI-compatible API as universal adapter (200+ models via OpenRouter) | Claude Code + OpenCode + Copilot CLI native SDKs — write raw SDK code in each session |
| Workflow Definition | Cron scheduler + subagent delegation | TypeScript Workflow SDK — defineWorkflow() → .run() → .compile() |
| Session Management | Single conversation loop with context compression | Multi-session pipelines — sequential and parallel — each in isolated context windows |
| Data Flow | In-context within a single conversation | Controlled transcript passing via ctx.transcript() and ctx.getMessages() |
| Self-Improvement | Closed learning loop — auto-creates skills from experience, persistent user model via Honcho | Skills authored by developers; memory via CLAUDE.md / AGENTS.md context files |
| Sub-Agents | `delegate_task` spawns isolated subagents | 12 specialized sub-agents with scoped tools and model tiers (Opus, Sonnet, Haiku) |
| Skills | 40+ tools + community Skills Hub (agentskills.io) | 57 built-in skills (development, design, docs, agent architecture) |
| Interface | Terminal TUI + multi-platform messaging gateway (Telegram, Discord, Slack, WhatsApp, etc.) | Terminal chat with tmux-based session management |
| Isolation | Six terminal backends (local, Docker, SSH, Daytona, Singularity, Modal) | Devcontainer features + git worktrees |
| Autonomous Execution | Cron scheduler with inactivity-based timeouts | Ralph — bounded iteration with plan/implement/review/debug loop |
| Execution Guarantees | Non-deterministic conversation loop | Deterministic — strict step ordering, frozen definitions, controlled transcript access |
| Team Process Encoding | Personal assistant — no concept of team-shared workflows | Encode your team's dev process as TypeScript — repeatable across members, projects, and CI |
| Coding Agent Tooling | Reimplements file/terminal tools from scratch via `model_tools.py` | Inherits production-hardened tool ecosystems from Claude Code, OpenCode, and Copilot CLI (file editing, permissions, MCP, hooks) |
| Reproducibility | Conversation loop produces different execution paths each run | Frozen workflow definitions run identically across machines, team members, and CI pipelines |
| Context Quality | Lossy compression within a single conversation — degrades on long coding tasks | Fresh context window per session with only distilled transcripts passed forward — stays sharp over multi-hour tasks |
| Skill Authoring | Auto-created skills may drift, accumulate errors, or encode bad patterns over time | Developer-authored, version-controlled skills — intentional and auditable |
| Security Model | Command approval + container backends (single boundary) | Devcontainer isolation + coding agent permission systems (Claude Code permissions, Copilot safeguards) — two independent security boundaries |
| Distribution | `uv` / `pip` | `bun install -g` or devcontainer features |
See DEV_SETUP.md for development setup, testing guidelines, and contribution workflow.
MIT License — see LICENSE for details.