Enterprise-grade deep research skill for Claude Code with 8-phase pipeline, source credibility scoring, and automated validation. Outperforms OpenAI, Gemini, and Claude Desktop in quality and verification.
Install this skill via the CLI and start using the SKILL.md workflow in your workspace:

```bash
npx skills add https://github.com/199-biotechnologies/claude-deep-research-skill --skill deep-research
```
Enterprise-grade research engine for Claude Code. Produces citation-backed reports with source credibility scoring, multi-provider search, and automated validation.
```bash
# Clone into Claude Code skills directory
git clone https://github.com/199-biotechnologies/claude-deep-research-skill.git ~/.claude/skills/deep-research
```
No additional dependencies required for basic usage.
For aggregated search across Brave, Serper, Exa, Jina, and Firecrawl:
```bash
brew tap 199-biotechnologies/tap && brew install search-cli
search config set keys.brave YOUR_KEY  # configure at least one provider
```
Example prompts:

```
deep research on the current state of quantum computing
deep research in ultradeep mode: compare PostgreSQL vs Supabase for our stack
```
| Mode | Phases | Duration | Best For |
|---|---|---|---|
| Quick | 3 | 2-5 min | Initial exploration |
| Standard | 6 | 5-10 min | Most research questions |
| Deep | 8 | 10-20 min | Complex topics, critical decisions |
| UltraDeep | 8+ | 20-45 min | Comprehensive reports, maximum rigor |
Scope → Plan → Retrieve (parallel search + agents) → Triangulate → Outline Refinement → Synthesize → Critique (with loop-back) → Refine → Package
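The critique loop-back in the pipeline above can be sketched as a small controller. The phase names come from the pipeline line; `run_phase` and `max_loops` are illustrative stand-ins, not the skill's actual API:

```python
# Hypothetical sketch of phase sequencing with a critique loop-back.
PHASES = ["scope", "plan", "retrieve", "triangulate", "outline",
          "synthesize", "critique", "refine", "package"]

def run_pipeline(run_phase, max_loops=2):
    """Run phases in order; if critique fails, loop back to synthesize.

    run_phase(name) -> bool is a caller-supplied executor.
    Returns the number of loop-backs taken.
    """
    i, loops = 0, 0
    while i < len(PHASES):
        phase = PHASES[i]
        ok = run_phase(phase)
        if phase == "critique" and not ok and loops < max_loops:
            loops += 1
            i = PHASES.index("synthesize")  # re-synthesize, then re-critique
            continue
        i += 1
    return loops
```

Capping loop-backs with `max_loops` keeps a persistently failing critique from cycling forever.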
Key features:

- `sources.json` survives context compaction and continuation agents
- Reports are saved to `~/Documents/[Topic]_Research_[Date]/`
Reports >18K words auto-continue via recursive agent spawning with context preservation.
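A minimal sketch of how disk-persisted citations could survive compaction and hand-off to continuation agents, assuming `sources.json` holds a plain JSON list (the record shape and helper names here are illustrative, not the skill's actual implementation):

```python
import json
from pathlib import Path

def save_sources(sources, path="sources.json"):
    # Persist citation records to disk so they outlive context compaction
    Path(path).write_text(json.dumps(sources, indent=2))

def load_sources(path="sources.json"):
    # A continuation agent reloads the same file; empty list if none yet
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else []
```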
validate_report.py (9 checks) + verify_citations.py (DOI/URL/hallucination detection)

| Tool | Priority | Setup |
|---|---|---|
| search-cli | Primary — all searches go here first | brew install search-cli + API keys |
| WebSearch | Fallback — if search-cli fails or rate-limited | None (built-in) |
| Exa MCP | Optional — semantic/neural search alongside search-cli | MCP config |
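The priority order in the table reduces to a simple try/fallback chain. `search_cli` and `web_search` below are hypothetical callables standing in for the real tools:

```python
def search(query, search_cli, web_search):
    """Try search-cli first; fall back to built-in WebSearch on failure."""
    try:
        return search_cli(query)
    except Exception:  # e.g. rate-limited, no API keys, not installed
        return web_search(query)
```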
```
deep-research/
├── SKILL.md                          # Skill entry point (lean, ~100 lines)
├── reference/
│   ├── methodology.md                # 8-phase pipeline details
│   ├── report-assembly.md            # Progressive generation strategy
│   ├── quality-gates.md              # Validation standards
│   ├── html-generation.md            # McKinsey HTML conversion
│   ├── continuation.md               # Auto-continuation protocol
│   └── weasyprint_guidelines.md      # PDF generation
├── templates/
│   ├── report_template.md            # Report structure template
│   └── mckinsey_report_template.html # HTML report template
├── scripts/
│   ├── validate_report.py            # 9-check structure validator
│   ├── verify_citations.py           # DOI/URL/hallucination checker
│   ├── source_evaluator.py           # Source credibility scoring
│   ├── citation_manager.py           # Citation tracking
│   ├── md_to_html.py                 # Markdown to HTML converter
│   ├── verify_html.py                # HTML verification
│   └── research_engine.py            # Core orchestration engine
└── tests/
    └── fixtures/                     # Test report fixtures
```
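As a toy illustration of what `source_evaluator.py`-style credibility scoring could look like, the sketch below blends a domain-class weight with a recency factor. The weights and decay rate are invented for the example, not the script's actual heuristics:

```python
# Illustrative weights only; the real evaluator's heuristics may differ.
DOMAIN_WEIGHTS = {".gov": 0.9, ".edu": 0.85, ".org": 0.6, ".com": 0.5}

def credibility_score(domain, year, current_year=2026):
    """Toy heuristic: 70% domain class, 30% recency (10% decay per year)."""
    base = next((w for suf, w in DOMAIN_WEIGHTS.items()
                 if domain.endswith(suf)), 0.4)
    recency = max(0.0, 1.0 - 0.1 * (current_year - year))
    return round(0.7 * base + 0.3 * recency, 2)
```

A score like this lets triangulation weight agreeing sources by trustworthiness rather than treating all hits equally.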
| Version | Date | Changes |
|---|---|---|
| 2.3.1 | 2026-03-19 | Template/validator harmonization, structured evidence, critique loop-back, multi-persona red teaming |
| 2.3 | 2026-03-19 | Contract harmonization, search-cli integration, dynamic year detection, disk-persisted citations, validation loops |
| 2.2 | 2025-11-05 | Auto-continuation system for unlimited length |
| 2.1 | 2025-11-05 | Progressive file assembly |
| 1.0 | 2025-11-04 | Initial release |
MIT - modify as needed for your workflow.