Skills for Claude Code — deep-research: systematic academic literature review
npx skills add https://github.com/lingzhi227/agent-research-skills --skill citation-management

Install a skill with the CLI and start using its SKILL.md workflow in your workspace.
31 skills for Claude Code covering the full academic research paper lifecycle — from literature search to slide generation — plus GitHub repository analysis for research topics.
Extracted from 17 GitHub repos studying LLM-agent-driven research automation. See SKILLS_DESIGN.md for the original design specifications.
npx skills add lingzhi227/agent-research-skills -g -a claude-code
Use the -g (global) flag: the scripts reference ~/.claude/skills/ paths, which require a global installation.
git clone https://github.com/lingzhi227/agent-research-skills.git /tmp/agent-research-skills
/tmp/agent-research-skills/install.sh
rm -rf /tmp/agent-research-skills
This installs slash commands, checks Python dependencies, and verifies script syntax.
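The syntax-verification step amounts to byte-compiling each script. A minimal sketch of such a check (install.sh's actual implementation may differ; the SKILLS_DIR default of the current directory is an assumption — point it at ~/.claude/skills after a global install):

```shell
# Byte-compile every .py under the skills directory; py_compile exits
# non-zero on the first syntax error, so && gates the success message.
SKILLS_DIR="${SKILLS_DIR:-.}"
find "$SKILLS_DIR" -name '*.py' -exec python3 -m py_compile {} + && echo "all scripts compile"
```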
| Package | Required by | Install |
|---|---|---|
| Python 3 | All scripts | brew install python3 |
| PyMuPDF | self-review, deep-research (PDF parsing) | pip install PyMuPDF |
| numpy + scipy | data-analysis (statistical tests) | pip install numpy scipy |
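A quick shell check can confirm the optional Python dependencies are importable (a minimal sketch; fitz is PyMuPDF's import name):

```shell
# Try importing each optional dependency; prints OK or missing per package
python3 -c "import fitz" 2>/dev/null && echo "PyMuPDF: OK" || echo "PyMuPDF: missing"
python3 -c "import numpy, scipy" 2>/dev/null && echo "numpy/scipy: OK" || echo "numpy/scipy: missing"
```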
API key (optional): for higher Semantic Scholar rate limits during literature search, add a line S2_API_Key: your-key-here to ~/keys.md.

Output directory: deep research outputs go to ~/deep-research-output/ by default.
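Both conventions can be set up in one step. A sketch, assuming the ~/keys.md filename and S2_API_Key key name noted above (replace the placeholder value with your key):

```shell
# Create ~/keys.md only if it doesn't exist yet, to avoid clobbering existing keys
[ -f ~/keys.md ] || cat > ~/keys.md <<'EOF'
S2_API_Key: your-key-here
EOF
# Pre-create the default deep-research output directory
mkdir -p ~/deep-research-output
```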
| Skill | Description | Scripts |
|---|---|---|
| github-research | 6-phase GitHub repo discovery, analysis, and integration planning for research topics | 13 scripts: search, clone, analyze structure/deps/impls, compare, compile report |
| deep-research | 6-phase systematic literature survey (frontier → survey → deep dive → code → synthesis → report) | 7 scripts: search APIs, PDF extraction, paper DB, BibTeX, report compilation |
| literature-search | Multi-source academic search (Semantic Scholar, arXiv, OpenAlex, CrossRef) with ranking | 4 scripts: search_crossref.py, download_arxiv_source.py, search_openalex.py + shared |
| literature-review | Multi-perspective dialogue simulation with expert personas for grounded literature review | Shares search scripts |
| idea-generation | Generate and score research ideas (Interestingness/Feasibility/Novelty) with iterative refinement | 1 script: novelty_check.py |
| novelty-assessment | Harsh-critic novelty evaluation with up to 10 rounds of literature search | Shares search scripts |
| research-planning | 4-stage research plan design with task dependency graphs | Prompt-only |
| Skill | Description | Scripts |
|---|---|---|
| atomic-decomposition | Decompose ideas into atomic concepts with bidirectional math ↔ code mapping | Prompt-only |
| algorithm-design | Algorithm pseudocode (LaTeX) + UML diagrams (Mermaid) with consistency verification | Prompt-only |
| math-reasoning | Derivations, proofs, formalization, statistical test selection, notation tables | Prompt-only |
| symbolic-equation | LLM-guided evolutionary search for scientific equation discovery | Prompt-only |
| Skill | Description | Scripts |
|---|---|---|
| experiment-design | 4-stage progressive experiment planning (implement → tune → research → ablate) | 1 script: design_experiments.py |
| experiment-code | ML training/evaluation pipeline generation with iterative improvement | Prompt-only |
| code-debugging | Structured error analysis with categorization and 4-retry fix loop | Prompt-only |
| data-analysis | Statistical analysis with 4-round code review and appropriate test selection | 2 scripts: stat_summary.py, format_pvalue.py |
| Skill | Description | Scripts |
|---|---|---|
| paper-writing-section | Section-by-section writing with section-specific guidance and two-pass refinement | Prompt-only |
| related-work-writing | Related Work section with thematic organization and compare-and-contrast style | Prompt-only |
| survey-generation | Complete survey paper via RAG-based subsection writing with citation validation | Shares search scripts |
| paper-to-code | Convert paper PDF to runnable code repo (Planning → Analysis → Coding pipeline) | Prompt-only |
| Skill | Description | Scripts |
|---|---|---|
| figure-generation | Publication-quality matplotlib figures with VLM feedback loop (10 figure types) | 1 script: figure_template.py |
| table-generation | JSON/CSV → LaTeX booktabs tables with bold-best, significance stars, multi-dataset | 1 script: results_to_table.py |
| citation-management | BibTeX harvesting, validation, deduplication, and auto-fix | 2 scripts: harvest_citations.py, validate_citations.py |
| backward-traceability | Every PDF number hyperlinks to the code line that produced it | 1 script: ref_numeric_values.py |
| Skill | Description | Scripts |
|---|---|---|
| latex-formatting | Conference templates (ICML/ICLR/NeurIPS/AAAI/ACL), formatting fixes, pre-submission checks | 2 scripts: latex_checker.py, clean_latex.py |
| paper-compilation | Full pdflatex+bibtex pipeline with auto-fix error correction loop | 2 scripts: compile_paper.py, fix_latex_errors.py |
| excalidraw-skill | Programmatic Excalidraw diagramming via MCP tools with quality verification | MCP server (7 CJS files) |
| Skill | Description | Scripts |
|---|---|---|
| self-review | 3-persona automated review (NeurIPS form) with reflection and meta-review | 2 scripts: extract_pdf_text.py, parse_pdf_sections.py |
| paper-revision | Map reviewer concerns to sections, apply targeted edits, verify improvements | Prompt-only |
| rebuttal-writing | Point-by-point rebuttal with evidence-based responses | Prompt-only |
| slide-generation | Paper → Beamer slides (extract elements, generate skeleton, simplify) | 1 script: extract_paper_elements.py |
| paper-assembly | End-to-end pipeline orchestrator with 9-phase checkpointing | 1 script: assembly_checker.py |
/research transformer architectures for long-context reasoning
"Analyze GitHub repos for multi-agent coordination research"
"Do a literature review on protein folding with LLMs"
"Write the Methods section of my paper"
"Generate a comparison table from results.json"
"Review my paper draft before submission"
"Make slides from my paper"
"Check if my idea is novel"
"Design experiments for my contrastive learning method"
# Search GitHub repos for a research topic
python ~/.claude/skills/github-research/scripts/search_github.py --query "multi-agent LLM coordination" --max-results 50 --output repos.jsonl
# Analyze a cloned repo's structure
python ~/.claude/skills/github-research/scripts/analyze_repo_structure.py --repo-dir ./my-repo --output analysis.json
# Search literature
python ~/.claude/skills/literature-search/scripts/search_crossref.py --query "attention mechanism" --rows 10
# Validate citations
python ~/.claude/skills/citation-management/scripts/validate_citations.py --tex paper/main.tex --bib paper/references.bib
# Generate experiment design
python ~/.claude/skills/experiment-design/scripts/design_experiments.py --method "contrastive learning" --task classification --format markdown
# Format p-values
python ~/.claude/skills/data-analysis/scripts/format_pvalue.py --values "0.001 0.05 0.23" --format latex
# Extract paper elements for slides
python ~/.claude/skills/slide-generation/scripts/extract_paper_elements.py --tex main.tex --output slides.tex
# Check paper pipeline completeness
python ~/.claude/skills/paper-assembly/scripts/assembly_checker.py --dir paper/ --verbose
All skills follow the same structure:
skills/<skill-name>/
├── SKILL.md # Skill definition (prompt, workflow, rules)
├── scripts/ # Executable tools (optional)
│ └── *.py # CLI scripts with argparse, docstring headers
└── references/ # Reference docs (optional)
└── *.md # Templates, API docs, patterns
Design principles:
- Every script has an argparse CLI with --help and a docstring header with usage examples
- Every SKILL.md includes Related Skills sections (upstream/downstream/see-also)
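The layout above can be scaffolded for a new skill like so (a sketch; the skill name my-skill and the placeholder SKILL.md contents are hypothetical):

```shell
# Create the standard skill directory layout under skills/
mkdir -p skills/my-skill/scripts skills/my-skill/references
# SKILL.md holds the skill definition: prompt, workflow, and rules
cat > skills/my-skill/SKILL.md <<'EOF'
# my-skill
Prompt, workflow, and rules go here.
EOF
```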