llm-wiki-bootstrap

Installable skill + working example for building persistent, LLM-maintained Markdown wikis, based on Karpathy’s original LLM Wiki idea.

Installation
CLI
npx skills add https://github.com/nanzhipro/karpathy-llm-wiki-bootstrap-skill --skill llm-wiki-bootstrap

Install this skill via the CLI and start using the SKILL.md workflow in your workspace.

Last updated 4/23/2026

Karpathy LLM Wiki Bootstrap


An installable skill and a working reference implementation for building persistent, LLM-maintained Markdown wikis.

Cover image for Karpathy LLM Wiki Bootstrap

The most important thing here is not just the bootstrap skill, but the worked example. Starting from karpathy-llm-wiki-original.md, the LLM incrementally compiles source material into llm-wiki/—growing an index, a log, concept pages, comparison pages, and synthesis pages. The point is not a one-off summary, but a maintainable knowledge artifact.

That same pattern works for articles, papers, research reports, and books:

  • keep raw evidence in raw/
  • keep evolving understanding in wiki/
  • use ingest, query, and lint to keep the structure alive over time

Full walkthrough → Reading Is Not Enough: How to Compile an Article into an LLM Wiki · Chinese version

What This Repo Contains

This repository has two closely related parts:

  • skill/ is the installable skill package. Use it to bootstrap your own wiki.
  • llm-wiki/ is a real example wiki created with that skill and then maintained through the ingest, query, and lint loop.

That split is intentional: skill/ is the reusable product, while llm-wiki/ shows the pattern in practice.

Quick Install

Recommended:

```shell
npx skills add nanzhipro/Karpathy-llm-wiki-bootstrap-skill@llm-wiki-bootstrap
```

Non-interactive user-level install:

```shell
npx skills add nanzhipro/Karpathy-llm-wiki-bootstrap-skill@llm-wiki-bootstrap -g -y
```

Note:

  • Use npx skills ... (plural), not npx skill .... skill is a different CLI and will not handle this repository specifier correctly.
  • If the skill is already installed and you just want the latest version, run npx skills update or npx skills update llm-wiki-bootstrap.

First Run Example

Here is the simplest first-time workflow, using karpathy-llm-wiki-original.md as the seed source.

The single source of truth is always SCHEMA.md. AGENTS.md (Codex) and CLAUDE.md (Claude Code) are generated as thin pointers that redirect to it. The example below uses OpenAI Codex, so an AGENTS.md pointer is present; if you select Claude Code, a CLAUDE.md pointer is generated instead. Multiple runtimes can coexist — all pointers read from the same SCHEMA.md.

  1. In your agent, trigger the skill:

    bootstrap a wiki

  2. When the skill asks its setup questions, choose values like these:

    • Domain: Research topic
    • Wiki name: llm-wiki-demo
    • Agent: OpenAI Codex
    • Editor: Obsidian
    • Source types: Web articles
    • Output location: Current directory
  3. After the wiki scaffold is created, copy the seed source into the new raw folder:

    cp karpathy-llm-wiki-original.md llm-wiki-demo/raw/
    
  4. Then tell the agent:

    Read llm-wiki-demo/SCHEMA.md, then ingest llm-wiki-demo/raw/karpathy-llm-wiki-original.md

    (Runtimes that auto-discover AGENTS.md / CLAUDE.md will be redirected to SCHEMA.md automatically.)

  5. After the first ingest, inspect these files:

    • llm-wiki-demo/wiki/index.md
    • llm-wiki-demo/wiki/log.md
    • llm-wiki-demo/wiki/overview.md

What you should expect after that first run:

  • a source summary page under wiki/sources/
  • new concept or entity pages if the agent identifies them
  • an updated index, log, and overview

If you want to see what a completed run looks like before trying it yourself, open llm-wiki/.
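If you want to check that first run from a script rather than by eye, something like the following works. This is an illustrative sketch (not part of the skill) that assumes the llm-wiki-demo paths from the example above:

```python
from pathlib import Path

# Files the first ingest is expected to produce or update,
# following the llm-wiki-demo example above.
EXPECTED = [
    "wiki/index.md",
    "wiki/log.md",
    "wiki/overview.md",
    "wiki/sources",  # directory holding per-source summary pages
]

def missing_artifacts(wiki_root):
    """Return the expected paths that do not exist under wiki_root."""
    root = Path(wiki_root)
    return [p for p in EXPECTED if not (root / p).exists()]

if __name__ == "__main__":
    missing = missing_artifacts("llm-wiki-demo")
    if missing:
        print("Missing after first ingest:", ", ".join(missing))
    else:
        print("First ingest looks complete.")
```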

Tip for a Chinese-language wiki:

If you want the wiki to be compiled in Chinese from the start, you can simply tell the agent:

使用中文编译 karpathy-llm-wiki-original.md

("Compile karpathy-llm-wiki-original.md in Chinese.")

Why This Pattern Exists

Most LLM document workflows stop at RAG: upload files, retrieve a few chunks at question time, and synthesize an answer from scratch. That works—but it does not build lasting structure.

This project packages a different model:

  • Raw sources stay immutable
  • The agent incrementally compiles knowledge into a wiki
  • The wiki becomes a persistent artifact that grows over time
  • Useful answers get filed back into the wiki instead of disappearing into chat history

The result: a knowledge base that compounds instead of resetting on every query.
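The filing discipline behind that model can be sketched in a few lines. This is an illustration only, not the skill's implementation: the summary argument stands in for the LLM's actual output, and the paths follow the scaffold the skill generates.

```python
from datetime import date
from pathlib import Path

def ingest(wiki_root, raw_name, summary):
    """File one raw source into the wiki: write a summary page under
    wiki/sources/ and append a log entry. The raw file is only read,
    never modified, so the evidence layer stays immutable."""
    root = Path(wiki_root)
    raw = root / "raw" / raw_name
    text = raw.read_text()  # evidence layer: read-only
    page = root / "wiki" / "sources" / f"{raw.stem}.md"
    page.parent.mkdir(parents=True, exist_ok=True)
    page.write_text(
        f"# Source: {raw_name}\n\n{summary}\n\n"
        f"_Raw evidence: raw/{raw_name} ({len(text)} chars)_\n"
    )
    with (root / "wiki" / "log.md").open("a") as log:
        log.write(f"- {date.today().isoformat()}: ingested raw/{raw_name}\n")
    return page
```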

System Model

The full system has five layers:

| Layer | Location | Role |
| --- | --- | --- |
| Skill package | skill/ | Bootstrap logic, templates, and workflow rules |
| Raw sources | raw/ | Immutable evidence layer |
| Schema | SCHEMA.md | Single source of truth: the operating contract for every agent |
| Pointers | AGENTS.md / CLAUDE.md / .github/copilot-instructions.md | Optional thin redirects to SCHEMA.md, one per runtime you want to support |
| Wiki pages | wiki/ | Maintained knowledge layer |

The skill creates the bottom four layers inside a new wiki.

The example in llm-wiki/ shows what that looks like after the system has already been used.

The Reference Wiki

llm-wiki/ is not placeholder content. It is a working example generated from the skill and then maintained as a living wiki.

Current structure:

```
llm-wiki/
├── SCHEMA.md            # Single source of truth for operating rules
├── AGENTS.md            # Thin pointer for OpenAI Codex → SCHEMA.md
├── raw/
│   ├── Karpathy x.md
│   └── llm-wiki-pattern.md
└── wiki/
    ├── index.md
    ├── log.md
    ├── overview.md
    ├── concepts/
    ├── entities/
    ├── comparisons/
    ├── sources/
    └── synthesis/
```

Useful entry points: wiki/index.md, wiki/log.md, and wiki/overview.md.

If you want to understand the pattern quickly, llm-wiki/ is the best place to inspect it in action.

Origin And Source Lineage

The underlying idea comes from Karpathy's original LLM Wiki note, preserved in this repo as karpathy-llm-wiki-original.md.

The example wiki is grounded in that note. Source lineage in this repo:

| Source | Role |
| --- | --- |
| karpathy-llm-wiki-original.md | Reference copy of the original idea |
| llm-wiki/raw/llm-wiki-pattern.md | Example-local raw source derived from it |
| llm-wiki/raw/Karpathy x.md | Shows how additional sources get absorbed |

Clean framing for public-facing explanation:

  1. The skill packages the method
  2. The example wiki demonstrates the method in use
  3. The corpus starts from Karpathy's idea, then grows through new sources

Canonical Install Location

Use .agent/skills/ as the canonical installation location.

If Claude, Codex, or another runtime expects a separate discovery directory, link that runtime back to the same installed copy instead of duplicating files.

```
.agent/
└── skills/
    └── llm-wiki-bootstrap/
        ├── SKILL.md
        └── references/
```

Example symlinks:

```shell
ln -s /absolute/path/to/.agent/skills/llm-wiki-bootstrap ~/.claude/skills/llm-wiki-bootstrap
ln -s /absolute/path/to/.agent/skills/llm-wiki-bootstrap ~/.codex/skills/llm-wiki-bootstrap
```

Principle: keep one real installed copy and point every runtime back to it.


What The Skill Generates

When you bootstrap a new wiki, the generated structure looks like this:

```
{wiki-name}/
├── raw/
├── wiki/
│   ├── index.md
│   ├── log.md
│   └── overview.md
├── SCHEMA.md            # Always generated — single source of truth
├── {pointer-files}      # Optional, one per selected runtime
└── .gitignore
```

SCHEMA.md is always generated. For each runtime you select during setup, the skill adds a thin pointer file that redirects to SCHEMA.md:

| Agent | Pointer File | Points to |
| --- | --- | --- |
| Claude Code | CLAUDE.md | ./SCHEMA.md |
| OpenAI Codex | AGENTS.md | ./SCHEMA.md |
| Copilot (VS Code) | .github/copilot-instructions.md | ../SCHEMA.md |
| Other / generic | (no pointer) | agent reads SCHEMA.md directly |

All rules live in SCHEMA.md. Pointers never duplicate rule content — so you can safely add a second runtime at any time by dropping in another pointer file.
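A pointer file is deliberately tiny. The exact generated wording may differ; a plausible AGENTS.md looks like:

```markdown
<!-- AGENTS.md: thin pointer only, no rules live here -->
All operating rules for this wiki are defined in [SCHEMA.md](./SCHEMA.md).
Read SCHEMA.md first and follow it for ingest, query, and lint.
```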


Core Operations

| Operation | Trigger | Result |
| --- | --- | --- |
| Ingest | "ingest raw/{file}" | Turns a source into summaries, entities, concepts, links, index updates, and a log entry |
| Query | Ask a domain question | Reads the index, opens relevant pages, and answers with citations |
| Lint | "lint" or "health check" | Audits contradictions, stale claims, orphan pages, and missing links |
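As an illustration of one thing a lint pass looks for, here is a minimal orphan-page detector (pages under wiki/ that no other page links to). The real lint workflow lives in skill/references/workflows; this sketch covers only that single check:

```python
import re
from pathlib import Path

LINK = re.compile(r"\[[^\]]*\]\(([^)#]+)")  # markdown link targets

def orphan_pages(wiki_dir):
    """Return wiki pages that no other page links to (index.md excluded)."""
    wiki = Path(wiki_dir)
    pages = {p.relative_to(wiki).as_posix() for p in wiki.rglob("*.md")}
    linked = set()
    for p in wiki.rglob("*.md"):
        for target in LINK.findall(p.read_text()):
            # resolve the link relative to the page that contains it
            resolved = (p.parent / target).resolve()
            try:
                linked.add(resolved.relative_to(wiki.resolve()).as_posix())
            except ValueError:
                pass  # link points outside the wiki
    return sorted(pages - linked - {"index.md"})
```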

Repository Layout

| Path | Purpose |
| --- | --- |
| skill/SKILL.md | Installable skill definition |
| skill/references/templates | Templates used during bootstrap |
| skill/references/workflows | Detailed ingest, query, and lint workflow references |
| karpathy-llm-wiki-original.md | Repository copy of the original idea note |
| llm-wiki/SCHEMA.md | Single source of truth for agent instructions |
| llm-wiki/AGENTS.md | Thin Codex pointer that redirects to SCHEMA.md |
| llm-wiki/raw | Example source corpus |
| llm-wiki/wiki | Example compiled wiki output |

One-Line Positioning

Karpathy LLM Wiki Bootstrap is an installable skill for creating persistent, LLM-maintained Markdown wikis, bundled with a real llm-wiki/ reference implementation grounded in Karpathy's original LLM Wiki idea.