Installable skill + working example for building persistent, LLM-maintained Markdown wikis, based on Karpathy’s original LLM Wiki idea.
Install the skill with the `skills` CLI and reuse the bundled SKILL.md workflow directly in your workspace:

```shell
npx skills add https://github.com/nanzhipro/karpathy-llm-wiki-bootstrap-skill --skill llm-wiki-bootstrap
```
An installable skill and a working reference implementation for building persistent, LLM-maintained Markdown wikis.

The most important thing here is not just the bootstrap skill, but the worked example. Starting from `karpathy-llm-wiki-original.md`, the LLM incrementally compiles source material into `llm-wiki/`, growing an index, a log, concept pages, comparison pages, and synthesis pages. The point is not a one-off summary, but a maintainable knowledge artifact.
That same pattern works for articles, papers, research reports, and books:
Put the material into `raw/`, then compile it into `wiki/`. Full walkthrough: *Reading Is Not Enough: How to Compile an Article into an LLM Wiki* (also available in Chinese).
This repository has two closely related parts:

- `skill/`: the installable skill package
- `llm-wiki/`: a worked example wiki built with it

That split is intentional: `skill/` is the reusable product, while `llm-wiki/` shows the pattern in practice.
Recommended:

```shell
npx skills add nanzhipro/Karpathy-llm-wiki-bootstrap-skill@llm-wiki-bootstrap
```

Non-interactive user-level install:

```shell
npx skills add nanzhipro/Karpathy-llm-wiki-bootstrap-skill@llm-wiki-bootstrap -g -y
```
Notes:

- Use `npx skills ...` (plural), not `npx skill ...`; `skill` is a different CLI and will not handle this repository specifier correctly.
- To update later, run `npx skills update` or `npx skills update llm-wiki-bootstrap`.

Here is the simplest first-time workflow, using `karpathy-llm-wiki-original.md` as the seed source.
The single source of truth is always `SCHEMA.md`. `AGENTS.md` (Codex) and `CLAUDE.md` (Claude Code) are generated as thin pointers that redirect to it. The example below uses OpenAI Codex, so an `AGENTS.md` pointer is present; if you select Claude Code, a `CLAUDE.md` pointer is generated instead. Multiple runtimes can coexist, since all pointers read from the same `SCHEMA.md`.
In your agent, trigger the skill:
```
bootstrap a wiki
```
When the skill asks its setup questions, choose values like these:
- Wiki type: Research topic
- Name: `llm-wiki-demo`
- Agent: OpenAI Codex
- Editor: Obsidian
- Source type: Web articles
- Location: Current directory

After the wiki scaffold is created, copy the seed source into the new `raw/` folder:

```shell
cp karpathy-llm-wiki-original.md llm-wiki-demo/raw/
```
Then tell the agent:
```
Read llm-wiki-demo/SCHEMA.md, then ingest llm-wiki-demo/raw/karpathy-llm-wiki-original.md
```
(Runtimes that auto-discover AGENTS.md / CLAUDE.md will be redirected to SCHEMA.md automatically.)
After the first ingest, inspect these files:
- `llm-wiki-demo/wiki/index.md`
- `llm-wiki-demo/wiki/log.md`
- `llm-wiki-demo/wiki/overview.md`

What you should expect after that first run:

- a new source page under `wiki/sources/`

If you want to see what a completed run looks like before trying it yourself, open `llm-wiki/`.
Tip for a Chinese-language wiki:
If you want the wiki to be compiled in Chinese from the start, you can simply tell the agent:
```
使用中文编译 karpathy-llm-wiki-original.md
```

(That is: "Compile karpathy-llm-wiki-original.md in Chinese.")
Most LLM document workflows stop at RAG: upload files, retrieve a few chunks at question time, and synthesize an answer from scratch. That works—but it does not build lasting structure.
This project packages a different model:

- raw sources are kept as an immutable evidence layer under `raw/`
- each ingest compiles a source into maintained wiki pages, updating the index and the log
- later questions are answered from the maintained pages, with citations back to sources

The result: a knowledge base that compounds instead of resetting on every query.
The full system has four layers:
| Layer | Location | Role |
|---|---|---|
| Skill package | `skill/` | Bootstrap logic, templates, and workflow rules |
| Raw sources | `raw/` | Immutable evidence layer |
| Schema | `SCHEMA.md` | Single source of truth: the operating contract for every agent |
| Pointers | `AGENTS.md` / `CLAUDE.md` / `.github/copilot-instructions.md` | Optional thin redirects to `SCHEMA.md`, one per runtime you want to support |
| Wiki pages | `wiki/` | Maintained knowledge layer |
The skill creates the bottom three layers inside a new wiki.
The example in llm-wiki/ shows what that looks like after the system has already been used.
llm-wiki/ is not placeholder content. It is a working example generated from the skill and then maintained as a living wiki.
Current structure:
```
llm-wiki/
├── SCHEMA.md            # Single source of truth for operating rules
├── AGENTS.md            # Thin pointer for OpenAI Codex → SCHEMA.md
├── raw/
│   ├── Karpathy x.md
│   └── llm-wiki-pattern.md
└── wiki/
    ├── index.md
    ├── log.md
    ├── overview.md
    ├── concepts/
    ├── entities/
    ├── comparisons/
    ├── sources/
    └── synthesis/
```
Useful entry points: `llm-wiki/wiki/index.md`, `llm-wiki/wiki/log.md`, and `llm-wiki/wiki/overview.md`. If you want to understand the pattern quickly, `llm-wiki/` is the best place to inspect it in action.
The underlying idea comes from Karpathy's original LLM Wiki note:
The example wiki is grounded in that note. Source lineage in this repo:
| Source | Role |
|---|---|
| `karpathy-llm-wiki-original.md` | Reference copy of the original idea |
| `llm-wiki/raw/llm-wiki-pattern.md` | Example-local raw source derived from it |
| `llm-wiki/raw/Karpathy x.md` | Shows how additional sources get absorbed |
Clean framing for public-facing explanation:
Use .agent/skills/ as the canonical installation location.
If Claude, Codex, or another runtime expects a separate discovery directory, link that runtime back to the same installed copy instead of duplicating files.
```
.agent/
└── skills/
    └── llm-wiki-bootstrap/
        ├── SKILL.md
        └── references/
```
Example symlinks:
```shell
ln -s /absolute/path/to/.agent/skills/llm-wiki-bootstrap ~/.claude/skills/llm-wiki-bootstrap
ln -s /absolute/path/to/.agent/skills/llm-wiki-bootstrap ~/.codex/skills/llm-wiki-bootstrap
```
Principle: keep one real installed copy and point every runtime back to it.
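The one-copy principle can also be scripted. A minimal sketch, assuming the canonical copy lives under `~/.agent/skills/` and that Claude Code and Codex are the runtimes you want to serve (all paths here are illustrative):

```shell
# Link one real installed copy into each runtime's skills directory.
# SKILL_HOME and the runtime paths below are illustrative assumptions.
SKILL_HOME="$HOME/.agent/skills/llm-wiki-bootstrap"

for runtime_dir in "$HOME/.claude/skills" "$HOME/.codex/skills"; do
  mkdir -p "$runtime_dir"
  # -n: don't descend into an existing link; -f: replace a stale one
  ln -sfn "$SKILL_HOME" "$runtime_dir/llm-wiki-bootstrap"
done
```

Re-running the loop is harmless, so it can double as a repair step after moving the canonical copy.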
When you bootstrap a new wiki, the generated structure looks like this:
```
{wiki-name}/
├── raw/
├── wiki/
│   ├── index.md
│   ├── log.md
│   └── overview.md
├── SCHEMA.md          # Always generated: single source of truth
├── {pointer-files}    # Optional, one per selected runtime
└── .gitignore
```
SCHEMA.md is always generated. For each runtime you select during setup, the skill adds a thin pointer file that redirects to SCHEMA.md:
| Agent | Pointer file | Points to |
|---|---|---|
| Claude Code | `CLAUDE.md` | `./SCHEMA.md` |
| OpenAI Codex | `AGENTS.md` | `./SCHEMA.md` |
| Copilot (VS Code) | `.github/copilot-instructions.md` | `../SCHEMA.md` |
| Other / generic | (no pointer) | agent reads `SCHEMA.md` directly |
All rules live in `SCHEMA.md`. Pointers never duplicate rule content, so you can safely add a second runtime at any time by dropping in another pointer file.
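As a sketch of what such a thin pointer can contain (the exact wording the skill generates may differ), a Codex `AGENTS.md` only needs to redirect:

```shell
# Write a minimal AGENTS.md pointer; every real rule stays in SCHEMA.md.
cat > AGENTS.md <<'EOF'
# AGENTS.md

This file is a pointer only. All operating rules for this wiki live in
./SCHEMA.md. Read that file and follow it.
EOF
```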
| Operation | Trigger | Result |
|---|---|---|
| Ingest | `ingest raw/{file}` | Turns a source into summaries, entities, concepts, links, index updates, and a log entry |
| Query | Ask a domain question | Reads the index, opens relevant pages, and answers with citations |
| Lint | `lint` or `health check` | Audits contradictions, stale claims, orphan pages, and missing links |
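The lint operation's orphan-page audit can be approximated outside the agent. A rough sketch using a tiny demo wiki built inline (the skill's real lint workflow checks more than this):

```shell
# Demo wiki: index.md links overview.md but not concepts/stray.md.
mkdir -p demo/wiki/concepts
printf '# Index\n\n- [overview](overview.md)\n' > demo/wiki/index.md
printf '# Overview\n' > demo/wiki/overview.md
printf '# Stray concept\n' > demo/wiki/concepts/stray.md

# Report every page whose filename never appears in index.md.
cd demo/wiki
find . -name '*.md' ! -name 'index.md' | while read -r page; do
  grep -q "$(basename "$page")" index.md || echo "orphan: $page"
done
# prints: orphan: ./concepts/stray.md
```

A filename match is deliberately crude; it flags pages the index never mentions, which is usually the first thing a lint pass surfaces.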
| Path | Purpose |
|---|---|
| skill/SKILL.md | Installable skill definition |
| skill/references/templates | Templates used during bootstrap |
| skill/references/workflows | Detailed ingest, query, and lint workflow references |
| karpathy-llm-wiki-original.md | Repository copy of the original idea note |
| llm-wiki/SCHEMA.md | Single source of truth for agent instructions |
| llm-wiki/AGENTS.md | Thin Codex pointer that redirects to SCHEMA.md |
| llm-wiki/raw | Example source corpus |
| llm-wiki/wiki | Example compiled wiki output |
Karpathy LLM Wiki Bootstrap is an installable skill for creating persistent, LLM-maintained Markdown wikis, bundled with a real llm-wiki/ reference implementation grounded in Karpathy's original LLM Wiki idea.