Collection of Claude Code skills for enhanced AI workflows
Install a skill with the CLI and start using its SKILL.md workflow in your workspace:
npx skills add https://github.com/glebis/claude-skills --skill google-image-search
A collection of skills for Claude Code that extend AI capabilities with specialized workflows, tools, and domain expertise.
Multi-agent TDD orchestration with architecturally enforced context isolation. Uses Claude Code's Task tool to spawn separate subagents for test writing and implementation -- the Test Writer never sees implementation code, and the Implementer never sees the specification.
Features:
- `--auto` mode: run all slices without pausing, stop only on unrecoverable errors
- `layer_map` path validation: rejects Implementer writes to wrong-layer directories
- `run_tests.sh`: universal test runner wrapping 7 frameworks into structured JSON with timeout support
- `extract_api.sh`: public API surface extractor (signatures only, no bodies) for 7 languages
- `tdd-state.json` with `--resume` support
Architecture:
ORCHESTRATOR (main Claude context)
├─ Phase 0: Setup (detect framework, extract API, create state)
├─ Phase 1: Decompose into vertical slices -> user approves
├─ FOR EACH SLICE:
│ ├─ Phase 2 (RED): Task(Test Writer) <- spec + API only
│ ├─ Phase 3 (GREEN): Task(Implementer) <- failing test + error only
│ └─ Phase 4 (REFACTOR): Task(Refactorer) <- all code + green results
└─ Summary
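The `layer_map` path validation can be sketched as follows; this is a minimal illustration under assumed names -- the map contents and the helper function are hypothetical, not the skill's actual code:

```python
# Sketch of layer_map path validation: the orchestrator rejects Implementer
# writes whose paths fall outside the slice's declared layer.
layer_map = {
    "domain": ["src/domain/"],
    "api": ["src/api/"],
    "tests": ["tests/"],
}

def write_allowed(path: str, layer: str) -> bool:
    """Return True if `path` sits under one of the layer's allowed prefixes."""
    return any(path.startswith(prefix) for prefix in layer_map.get(layer, []))
```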
Quick Start:
# Copy to skills directory
cp -r tdd ~/.claude/skills/
# Interactive mode (pauses at each RED checkpoint)
/tdd "add user authentication with JWT tokens"
# Autonomous mode (runs all slices, stops only on errors)
/tdd --auto "add user authentication with JWT tokens"
# Resume a paused session
/tdd --resume
# Bug fix
/tdd "fix: cart total doesn't include tax"
Design informed by:
Use when: Implementing features or fixing bugs where you want disciplined test-first development. Use --auto for maximum autonomy. The multi-agent architecture is especially valuable when single-context TDD produces tests that mirror implementation details.
Comprehensive reference skill for the gws CLI — Gmail, Calendar, Drive, Sheets, Docs, Tasks, Chat, People, Meet, and cross-service workflows from Claude Code.
Features:
- Command shortcuts (`+triage`, `+send`, `+agenda`, `+insert`, `+upload`, `+read`, `+append`, `+write`, `+send` for Chat, workflow helpers)
- Service coverage: Calendar (`conferenceDataVersion`, recurring events, attendees with `sendUpdates`, freebusy), Drive (upload, download, search, share, folders), Sheets (values read/update/clear, `batchUpdate`), Docs (`batchUpdate`), Tasks, Chat (spaces, threaded/card messages), People, Meet (spaces)
- Schema introspection (`gws schema`) for discovering any API method's parameters
- Common flags: `--params`, `--json`, `--upload`, `--output`, `--format`, `--page-all`, `--page-limit`, `--dry-run`
- `gmail.settings.basic` manual-OAuth workaround
Quick Start:
# Copy to skills directory
cp -r gws ~/.claude/skills/
# Then use naturally:
# "check my unread email"
# "search Gmail for Amazon S3"
# "show today's calendar"
# "create a calendar event today 18:00 AGENCY Meetup"
# "create a meeting with Google Meet link tomorrow 14:00"
# "read Sheet1!A1:D10 from spreadsheet SID"
# "upload report.pdf to Drive"
# "post to Chat space spaces/XYZ: ship today"
# "send an email to [email protected]"
Depends on: gws — npm install -g @googleworkspace/cli
Use when: Interacting with any Google Workspace service from Claude Code — email triage, sending email, calendar management (including Meet links), spreadsheet read/write, file uploads, Chat posts, contact lookup, or cross-service workflows.
Full CLI and Python API wrapper for Google NotebookLM. Lets you manage notebooks, sources, chat, artifacts (podcasts, videos, slides, quizzes, flashcards), notes, sharing, and research entirely from the terminal via natural language.
Features:
- `notebooklm-py` async library for programmatic use
Quick Start:
# Copy to skills directory
cp -r notebooklm ~/.claude/skills/
# Then just talk naturally:
# "create a notebook called My Research"
# "upload all markdown files from ./notes/"
# "ask the notebook about key themes"
# "generate a podcast about the findings"
# "download the podcast"
Depends on: notebooklm-py (v0.3.4+) — pip install notebooklm-py
Use when: Interacting with Google NotebookLM from Claude Code. Covers all CLI commands and the underlying Python API for advanced automation.
Generate a 3D interactive knowledge map (Inner Temple) from any Obsidian vault. Maps vault structure into a spatial mythology with concentric entity rings, synthesized audio, discovery mechanics, and multi-scale semantic zoom.

Features:
Architecture:
Generation Pipeline (Claude) Runtime Renderer (Template)
├─ extract_entities.py → vault-scan ├─ Three.js scene from JSON
├─ Claude classifies entities ├─ Concentric ring layout
├─ Builds abstraction levels ├─ Audio per entity type
├─ Writes temple-data.json ├─ Discovery mechanics
└─ Inlines into template └─ Semantic zoom transitions
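The final "inlines into template" step can be sketched as embedding the generated JSON into the renderer's HTML so the output is one self-contained file. The placeholder token below is hypothetical -- the real template may use a different marker:

```python
import json

def inline_data(template_html: str, data: dict) -> str:
    """Replace a placeholder comment in the template with the temple data,
    exposed as a JS constant the Three.js renderer can read."""
    payload = json.dumps(data)
    return template_html.replace("/*__TEMPLE_DATA__*/", f"const TEMPLE_DATA = {payload};")
```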
Quick Start:
cp -r temple-generator ~/.claude/skills/
/temple-generate ~/my-vault --inline
Use when: Visualizing any Obsidian vault as a 3D spatial mythology, comparing two knowledge graphs, or creating an art installation from structured knowledge.
Query Granola's local cache and API to list meetings, view transcripts, and export to Obsidian vault in Fathom-compatible format. Includes auto-sync via macOS LaunchAgent.
Features:
- Fathom-compatible transcript export (`**Speaker**: text` format)
- Auto-sync (`sync.sh`) — checks for new meetings every 15 min via LaunchAgent, exports only unseen ones, logs to `~/Library/Logs/granola-sync.log`
Quick Start:
# Copy to skills directory
cp -r granola ~/.claude/skills/
# List meetings
python3 ~/.claude/skills/granola/scripts/granola.py list
# Export a meeting to Obsidian
python3 ~/.claude/skills/granola/scripts/granola.py export "meeting title"
# Get transcript
python3 ~/.claude/skills/granola/scripts/granola.py transcript abc123
# Set up auto-sync (see SKILL.md for LaunchAgent setup)
chmod +x ~/.claude/skills/granola/scripts/sync.sh
bash ~/.claude/skills/granola/scripts/sync.sh # test run
Use when: Importing Granola meeting recordings and transcripts into an Obsidian vault, querying meeting history from the command line, or setting up automated transcript sync on a schedule.
Parse Claude Code's built-in /insights report and extract actionable items into structured, trackable markdown files. Designed for Obsidian vaults but works with any markdown-based knowledge system.
Features:
- Interactive mode (`--interactive`) to cherry-pick items via AskUserQuestion
- Configuration (`--configure`) to set folders, date format, and preferences
Quick Start:
# Run /insights first, then extract
/insight-extractor
# Interactive -- review and filter each category
/insight-extractor --interactive
# Configure output paths, date format, etc.
/insight-extractor --configure
Use when: After running /insights to persist analysis into your vault, during weekly reviews, or to discover automation candidates from session patterns.
Multi-agent system that mines your Obsidian vault for non-obvious connections between notes, mimicking the brain's default mode network. Samples random note pairs, synthesizes connections via Sonnet, filters with Haiku critic. Inspired by Gwern's LLM Daydreaming.
Features:
Architecture:
Skill (orchestrator)
|-- Glob/Read: scan vault, extract excerpts
|-- Generate 50 random pairs (recency-weighted)
|-- Task(model: sonnet) x 10: synthesize connections <-- parallel
|-- Task(model: haiku) x 10: critique/score insights <-- parallel
|-- Filter (avg >= 7.0)
+-- Write: save insight notes + daily digest
No external dependencies -- pure Claude Code tools (Glob, Read, Write, Bash, Task).
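The sampling and filtering stages above can be sketched as follows. This is a simplified stand-in: recency weighting here just uses modification time as the weight, and the skill's actual scheme may differ:

```python
import random

def sample_pairs(notes, n_pairs=50):
    """notes: list of (path, mtime) tuples; returns n_pairs distinct,
    recency-weighted note pairs."""
    weights = [mtime for _, mtime in notes]  # newer notes get more weight
    pairs = set()
    while len(pairs) < n_pairs:
        a, b = random.choices(notes, weights=weights, k=2)
        if a[0] != b[0]:
            pairs.add(tuple(sorted((a[0], b[0]))))
    return sorted(pairs)

def keep_insights(insights, threshold=7.0):
    """Keep insights whose average critic score clears the bar (avg >= 7.0)."""
    return [i for i in insights if sum(i["scores"]) / len(i["scores"]) >= threshold]
```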
Quick Start:
# Copy to skills directory
cp -r daydream ~/.claude/skills/
# Edit instructions.md to set your VAULT_ROOT path
# Then invoke
/daydream
Output:
- `Daydreams/YYYYMMDD-slug.md` -- individual insight notes with scores and wikilinks
- `Daydreams/digests/YYYYMMDD-digest.md` -- daily digest with stats and ranked insights
- `## Daydream` section -- summary with top connections
Cost: ~$0.40-0.50 per run (~50 pairs) via Claude Code usage.
Inspired by: Gwern's "LLM Daydreaming" -- the idea that LLMs can productively "daydream" by finding unexpected connections between disparate pieces of knowledge, similar to how the brain's default mode network generates creative insights during idle periods.
Use when: You want to discover surprising connections across your knowledge base -- run daily or weekly to surface insights you wouldn't find through deliberate search.
Longitudinal cognitive pattern analysis across months of recorded conversations. Extracts 12 evidence-based dimensions from Fathom transcripts, synthesizes cross-session patterns, and detects blind spots via multi-agent parallel processing.
Scientific Foundation:
12 Extraction Dimensions:
10 Output Sections + Blind Spot Summary:
Architecture:
Stage 0: Corpus Discovery (orchestrator)
|-- Find transcripts, classify by type, extract speaker lines
Stage 1: Per-Transcript Extraction (~13 parallel sonnet agents)
|-- 12 dimensions extracted per transcript
Stage 2: Aggregation (orchestrator)
|-- De-duplicate, cluster, package into synthesis bundles
Stage 3: Cross-Session Synthesis (4 parallel + 1 sequential sonnet agents)
|-- Pattern detection, blind spot analysis, contradiction mapping
Stage 4: Output (orchestrator)
+-- Compile analysis document, link to daily note
Features:
Quick Start:
# Copy to skills directory
cp -r thinking-patterns ~/.claude/skills/
# Dry run -- see corpus stats and batch plan
/thinking-patterns --dry-run
# Full analysis (default: last 3 months)
/thinking-patterns
# Custom date range
/thinking-patterns --period 2026-01 2026-02
Output:
- `ai-research/YYYYMMDD-thinking-patterns-analysis.md` -- full analysis with evidence
- `## Research` section link in the daily note
Cost: ~$3.50 per full run, ~6-8 minutes runtime.
Use when: Quarterly self-reflection, coaching preparation, or whenever you want evidence-based insight into your own cognitive patterns across recorded conversations.
Evidence-based health research using tiered trusted sources with GRADE-inspired evidence ratings. Integrates Apple Health data for personalized context.
Features:
Quick Start:
# Quick answer (~30s)
/doctorg Is creatine safe for daily use?
# Deep research (~90s)
/doctorg --deep Huberman vs Attia on fasted training
# Full investigation (~3min)
/doctorg --full Safety profile of long-term melatonin supplementation
# Without personal health data
/doctorg --no-personal Best stretching protocol for lower back pain
Use when: Asking any health, nutrition, exercise, sleep, or wellness question and wanting evidence-based answers with explicit strength ratings rather than opinion.
End-to-end pipeline for publishing Claude Code lab meetings. Single /agency-docs-updater invocation replaces 5+ manual steps: finds Fathom transcript, downloads video, uploads to YouTube, generates fact-checked Russian summary, creates MDX, and deploys to Vercel.
Features:
Quick Start:
# Run full pipeline (invoke as Claude Code skill)
/agency-docs-updater
# Or use the script directly
python3 scripts/update_meeting_doc.py \
transcript.md youtube_url summary.md [-n 08] [--update]
Use when: Publishing Claude Code lab sessions — automates the entire flow from Fathom recording to live documentation site.
Transform AI-sounding text into human, authentic writing while preserving meaning and facts. Research-backed approach focusing on quality over detection evasion.
Features:
Quick Start:
# Interactive mode (asks questions)
/de-ai --file article.md
# Quick mode (no questions)
/de-ai --file article.md --interactive false
# Specify language and register
/de-ai --file text.md --language ru --register essay
# Show what AI tells were removed
/de-ai --file content.md --explain true
Use when: You need to improve AI-generated text quality, remove bureaucratic language (канцелярит), humanize drafts while preserving facts, or refine professional writing across languages.
Quantified ROI analysis for automation decisions, with a voice-enabled web interface in an analytical-precision visual style.
Features:
Quick Start:
# Install dependencies
pip install flask groq python-dotenv
# Add Groq API key (optional, for voice)
export GROQ_API_KEY="your-key"
# Start web server
python3 server_web.py
# Open browser
open http://localhost:8080
Use when: Deciding whether to automate repetitive tasks - transforms "this feels tedious" into quantified recommendations with clear next steps.
Generate structured decision-making tools — step-by-step guides, bias checkers, scenario matrices, and interactive dashboards.
Features:
Frameworks Included:
Quick Start:
# Copy to skills directory
cp -r decision-toolkit ~/.claude/skills/
# Invoke for a decision
/decision-toolkit "Should I switch to a new tech stack?"
Use when: Facing significant choices requiring systematic analysis — career moves, technology decisions, major purchases, strategic pivots.
Fetch meetings, transcripts, summaries, action items, and download video recordings from Fathom API.
Features:
Quick Start:
# Install dependencies
pip install requests python-dotenv
# Requires ffmpeg for video downloads
brew install ffmpeg # macOS
# or: apt-get install ffmpeg # Linux
# Add API key
echo "FATHOM_API_KEY=your-key" > ~/.claude/skills/fathom/scripts/.env
# List recent meetings
python3 scripts/fetch.py --list
# Fetch today's meetings
python3 scripts/fetch.py --today
# Download video recording
python3 scripts/fetch.py --id abc123 --download-video
# Fetch and analyze
python3 scripts/fetch.py --today --analyze
Use when: You need to fetch Fathom meeting recordings, download video files, sync transcripts to your vault, or extract meeting data via API.
Demo/recording mode that redacts personally identifiable and sensitive information from Claude Code's outputs in real time.
Features:
- `/recording` — single command flips state
- Obviously fictional placeholders (`Alex Doe`, `Acme Co`) — never plausible fakes
Quick Start:
cp -r recording ~/.claude/skills/
# Before your demo
/recording
# When done
/recording
Use when: Screen-sharing, recording videos, or live-demoing Claude Code and you don't want personal vault content leaking on stream.
Session retrospective for continual learning. Reviews conversations, extracts learnings, updates skills.
Features:
Quick Start:
# Copy to skills directory
cp -r retrospective ~/.claude/skills/
# Invoke at end of session
/retrospective
Use when: End of coding sessions to capture learnings before context is lost. Based on Continual Learning in Claude Code concepts.
Publish files and notes as GitHub Gists for easy sharing.
Features:
- Uses the `gh` CLI (recommended) or falls back to the GitHub API
Quick Start:
# Publish file as secret gist
python3 scripts/publish_gist.py ~/notes/idea.md
# Public gist with description
python3 scripts/publish_gist.py code.py --public -d "My utility script"
# Quick snippet from stdin
echo "Hello world" | python3 scripts/publish_gist.py - -f "hello.txt"
# Publish and open in browser
python3 scripts/publish_gist.py doc.md --open
Setup:
# Option 1: gh CLI (recommended)
gh auth login
# Option 2: Environment variable
# Get token at https://github.com/settings/tokens (select 'gist' scope)
export GITHUB_GIST_TOKEN="ghp_your_token_here"
Use when: You want to share code snippets, notes, or files via a quick shareable URL.
Search and download images via Google Custom Search API with LLM-powered selection and Obsidian integration.
Features:
Quick Start:
# Simple query
python3 scripts/google_image_search.py --query "neural interface demo" --output-dir ./images
# Enrich Obsidian note with images
python3 scripts/google_image_search.py --enrich-note ~/vault/research.md
# Generate config from terms
python3 scripts/google_image_search.py --generate-config --terms "AI therapy" "VR mental health"
Use when: Finding images for articles, presentations, research docs, or enriching Obsidian notes with visuals.
Create and manage Zoom meetings and access cloud recordings via the Zoom API.
Features:
Quick Start:
# Check setup status
python3 scripts/zoom_meetings.py setup
# List upcoming meetings
python3 scripts/zoom_meetings.py list
# Create a meeting
python3 scripts/zoom_meetings.py create "Team Standup" --start "2025-01-15T10:00:00" --duration 30
# List recordings (last 30 days)
python3 scripts/zoom_meetings.py recordings --show-downloads
Use when: You need to create Zoom meetings, list scheduled calls, or access cloud recordings with transcripts.
Interactive HTML presentations with neobrutalism style and Anime.js animations.
Features:
Quick Start:
# Generate HTML from JSON
node scripts/generate-presentation.js --input slides.json --output presentation.html
# Export to PNG/PDF/video
node scripts/export-slides.js presentation.html --format png
node scripts/export-slides.js presentation.html --format pdf
node scripts/export-slides.js presentation.html --format video --duration 5
Use when: You need animated presentations, video slide decks, or interactive HTML slideshows.
Neobrutalism brand styling with social media template rendering.
Features:
Quick Start:
# Install Playwright
npm install playwright
# Render all templates
node scripts/render-templates.js
# Render specific template
node scripts/render-templates.js -t instagram/story-announcement
# List templates
node scripts/render-templates.js --list
Use when: You need branded graphics, social media images, presentations with consistent neobrutalism styling.
Search and fetch emails via Gmail API with flexible query options and output formats.
Features:
Quick Start:
# Install dependencies
pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
# Authenticate (opens browser)
python scripts/gmail_search.py auth
# Search emails
python scripts/gmail_search.py search "meeting notes"
python scripts/gmail_search.py search --from "[email protected]" --unread
Use when: You need to search, read, or download emails from Gmail.
Fetch, search, download, and send Telegram messages with flexible filtering and output options.
Features:
- Scheduled sends (`--schedule`)
- Markdown formatting (`--markdown`)
Quick Start:
# Install dependency
pip install telethon
# List chats
python scripts/telegram_fetch.py list
# Get recent messages
python scripts/telegram_fetch.py recent --limit 20
# Send message
python scripts/telegram_fetch.py send --chat "@username" --text "Hello!"
# Send with markdown formatting
python scripts/telegram_fetch.py send --chat "@channel" --markdown --text "**Bold** and [links](https://example.com)"
# Schedule for tomorrow
python scripts/telegram_fetch.py send --chat "@channel" --markdown --schedule "tomorrow 10:00" --text "Scheduled post"
# Schedule with relative time or ISO format
python scripts/telegram_fetch.py send --chat "@username" --schedule "+2h" --text "In 2 hours"
python scripts/telegram_fetch.py send --chat "@username" --schedule "2026-04-10T14:00" --text "At specific time"
Use when: You need to read, search, or send Telegram messages from Claude Code.
Create, preview, and publish formatted Telegram posts from draft markdown files with HTML formatting and media. Built for @klodkot and Gleb Kalinin's other Telegram channels -- channel configs (footers, tags, language) are hardcoded but the pattern is easy to adapt.
Features:
Configured channels: @klodkot, @mentalhealthtech, @toolbuildingape, @opytnymputem
Quick Start:
# Create a draft
python3 scripts/post.py create "my-post-slug" --topic "Topic" --source "https://..."
# Preview (always do this first)
python3 scripts/post.py send "Channels/klodkot/drafts/20260211-my-post.md" --dry-run
# Send to saved messages for review
python3 scripts/post.py send "Channels/klodkot/drafts/20260211-my-post.md"
# Publish to channel (triggers post-publish: move, frontmatter update, index)
python3 scripts/post.py send "Channels/klodkot/drafts/20260211-my-post.md" -c "@klodkot"
Use when: Creating, previewing, or publishing Telegram channel posts from Obsidian draft files. Note: channel configs are specific to Gleb's channels -- fork and edit CHANNEL_CONFIG in post.py for your own.
Full Telethon API wrapper with daemon mode and Claude Code integration. Monitor chats, auto-respond with Claude, and manage sessions.
Features:
Quick Start:
# Install dependencies
pip install telethon rich questionary
# Interactive setup
python3 scripts/tg.py setup
# Check status
python3 scripts/tg.py status
# List chats
python3 scripts/tg.py list
# Start daemon (monitors for triggers)
python3 scripts/tgd.py start --foreground
Daemon Configuration (~/.config/telegram-telethon/daemon.yaml):
triggers:
- chat: "@yourusername"
pattern: "^/claude (.+)$"
action: claude
reply_mode: inline
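A trigger like the one above can be matched against incoming messages roughly as follows; the dispatch step is omitted and the function name is illustrative, not the daemon's actual code:

```python
import re

# Mirrors the daemon.yaml example: chat filter plus a regex with one
# capture group holding the prompt to forward to Claude.
TRIGGER = {"chat": "@yourusername", "pattern": r"^/claude (.+)$", "action": "claude"}

def match_trigger(chat: str, text: str, trigger: dict = TRIGGER):
    """Return the captured prompt if the message matches the trigger, else None."""
    if chat != trigger["chat"]:
        return None
    m = re.match(trigger["pattern"], text)
    return m.group(1) if m else None
```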
Use when: You need advanced Telegram automation, background monitoring, or Claude-powered chat responses.
Unified interface for processing text with multiple LLM providers from a single CLI.
Features:
Quick Start:
# Install llm CLI
pip install llm
# Set Groq API key (free, no credit card)
export GROQ_API_KEY='gsk_...'
# Use it
llm -m groq-llama-3.3-70b "Your prompt"
Documentation:
Use when: You want to process text with LLMs, compare models, or build AI-powered workflows.
Comprehensive research automation using OpenAI's Deep Research API (o4-mini-deep-research model).
Features:
Use when: You need in-depth research with web sources, analysis, or topic exploration.
Professional PDF generation from markdown with mobile-optimized and desktop layouts.
Features:
Quick Start:
# Mobile-optimized PDF (default for Telegram)
python scripts/generate_pdf.py doc.md --mobile
# Desktop/print PDF
python scripts/generate_pdf.py doc.md -t research
# Russian document
python scripts/generate_pdf.py doc.md --russian --mobile
Use when: You need to create professional PDF documents from markdown - mobile layout for sharing via messaging apps, desktop for printing and archival.
Extract YouTube video transcripts with metadata and save as Markdown to Obsidian vault.
Features:
Quick Start:
python scripts/extract_transcript.py <youtube_url>
Use when: You need to extract YouTube video transcripts, convert videos to text, or save video content to your knowledge base.
Query browsing history from all synced Chrome devices (iPhone, iPad, Mac, desktop) with natural language.
Features:
Quick Start:
# Initialize database
python3 scripts/init_db.py
# Sync local Chrome history
python3 scripts/sync_chrome_history.py
# Query history
python3 browsing_query.py "yesterday" --device iPhone
python3 browsing_query.py "AI articles" --days 7 --categorize
python3 browsing_query.py "last week" --output ~/vault/history.md
Use when: You need to search browsing history across all your devices, find articles by topic, or export history to your notes.
Query local Chrome browsing history with natural language search and filtering.
Features:
Use when: You need quick access to local desktop Chrome history only.
Query and analyze Apple Health data from SQLite database with multiple output formats.
Features:
Quick Start:
# Daily summary
python scripts/health_query.py daily --date 2025-11-29
# Weekly trends in JSON
python scripts/health_query.py --format json weekly --weeks 4
# Sleep analysis in FHIR format
python scripts/health_query.py --format fhir sleep --days 7
# ASCII charts
python scripts/health_query.py --format ascii activity --days 30
# Custom SQL
python scripts/health_query.py query "SELECT * FROM workouts LIMIT 5"
Use when: You need to analyze Apple Health metrics, generate health reports, export data in FHIR format, or visualize fitness/sleep patterns.
Convert text to high-quality audio files using ElevenLabs API with customizable voice parameters.
Features:
Quick Start:
cd ~/.claude/skills/elevenlabs-tts
pip install -r requirements.txt
# Add your API key to .env
python scripts/elevenlabs_tts.py "Welcome to Claude Code"
Use when: You need text-to-speech generation, audio narration, voice synthesis, or want to speak generated content aloud.
Research automation using FireCrawl API with academic writing templates and bibliography generation.
Features:
- Topics marked with `[research]` tags
Quick Start:
# Install dependencies
pip install python-dotenv requests
# Add API key to .env
echo "FIRECRAWL_API_KEY=fc-your-key" > ~/.claude/skills/firecrawl-research/.env
# Research topics from markdown
python scripts/firecrawl_research.py topics.md ./output 5
# Generate bibliography
python scripts/generate_bibliography.py output/*.md -o refs.bib
# Convert to PDF with citations
python scripts/convert_academic.py paper.md pdf
Use when: You need to research topics from the web, write academic papers with citations, or build bibliographies from scraped sources.
Analyze meeting transcripts using Cerebras AI to extract decisions, action items, and terminology.
Features:
Quick Start:
# Install dependencies
cd ~/.claude/skills/transcript-analyzer/scripts && npm install
# Add API key
echo "CEREBRAS_API_KEY=your-key" > scripts/.env
# Analyze transcript
npm run cli -- /path/to/meeting.md -o analysis.md
# Include original transcript
npm run cli -- meeting.md -o analysis.md --include-transcript
# Skip glossary
npm run cli -- meeting.md -o analysis.md --no-glossary
Use when: You need to extract action items from meetings, find decisions in conversations, or build glossaries from recorded discussions.
Extract and analyze Wispr Flow voice dictation history from the local SQLite database. Combines quantitative metrics with LLM-powered qualitative analysis for self-reflection, work pattern recognition, and mental health awareness.
Features:
- Periods: `today`, `yesterday`, `week`, `month`, specific dates, date ranges
- Modes: `all`, `technical` (coding/work), `soft` (communication patterns), `trends` (volume/frequency), `mental` (sentiment/energy/rumination)
- Reports saved to the vault (`meta/wispr-analytics/`)
Quick Start:
# Copy to skills directory
cp -r wispr-analytics ~/.claude/skills/
# Today's full analysis
/wispr-analytics today
# Last 7 days, communication patterns
/wispr-analytics week soft
# Monthly mental health reflection
/wispr-analytics month mental
# Specific date range, productivity focus
/wispr-analytics 2026-02-01:2026-02-14 technical
Use when: Self-reflection on work patterns, reviewing dictation habits, tracking energy/sentiment over time, understanding how you communicate across contexts, or generating periodic self-awareness reports.
Generate interactive AI transformation context-builder prompts for consulting clients. Creates structured discovery session prompts that guide a company through context gathering about their business, pain points, tech stack, and AI opportunities -- producing a resumable, multi-section questionnaire with Express and Deep Dive modes.
Features:
Quick Start:
# Copy to skills directory
cp -r context-builder ~/.claude/skills/
# Run the skill
/context-builder
Use when: Preparing for a consulting engagement, onboarding a new client, running a structured discovery session, or doing a self-assessment of your own business's AI transformation readiness.
Collaborative SVG canvas MCP server with a Fabric.js browser editor. Claude writes and reads SVG via MCP tools while the user edits interactively in the browser. Real-time sync via WebSocket.
Features:
MCP Tools:
- `sketch_open_canvas` -- create/open canvas, launches browser
- `sketch_get_svg` / `sketch_set_svg` -- read/replace SVG
- `sketch_add_element` -- add SVG fragment without clearing
- `sketch_add_textbox` -- add fixed-width text area with word wrapping
- `sketch_lock_objects` / `sketch_unlock_objects` -- freeze/unfreeze objects
- `sketch_save_template` / `sketch_load_template` / `sketch_list_templates` -- JSON template persistence
- `sketch_clear_canvas` / `sketch_focus_canvas` / `sketch_close_canvas` -- canvas management
Quick Start:
# Clone and build
git clone https://github.com/glebis/sketch-mcp-server.git
cd sketch-mcp-server && npm install && npm run build
# Add to Claude Code MCP config
# mcpServers: { "sketch-mcp-server": { "command": "node", "args": ["path/to/dist/index.js", "--stdio"] } }
Use when: Visual prototyping, creating diagrams, building reusable canvas templates, before/after comparisons, or any task where Claude and the user need a shared visual workspace.
Intelligent meeting transcript processor that auto-detects meeting type (leadgen, partnership, coaching, internal) and applies type-specific structured extraction with optional interactive clarification.
Features:
- Appends a `## Meeting Analysis` section to transcript files
- Frontmatter fields: `meeting_type`, `processed_date`, `processing_mode`
Quick Start:
# Copy to skills directory
cp -r meeting-processor ~/.claude/skills/
# Install dependencies
pip install openai pyyaml
# Process a transcript interactively
python3 ~/.claude/skills/meeting-processor/scripts/process.py <transcript-file> --mode interactive
# Batch mode (no interaction)
python3 ~/.claude/skills/meeting-processor/scripts/process.py <transcript-file> --mode batch
# Force meeting type
python3 ~/.claude/skills/meeting-processor/scripts/process.py <transcript-file> --type leadgen
Use when: Processing meeting transcripts after Fathom/Granola sync, or when asked to analyze/summarize a meeting. Requires CEREBRAS_API_KEY environment variable.
Semantic search across Claude Code session transcripts. Combines keyword pre-filtering with LLM-powered relevance evaluation to find previous sessions about specific topics, debugging conversations, research tasks, or past work.
Features:
Quick Start:
# Copy to skills directory
cp -r session-search ~/.claude/skills/
# Search for sessions about a topic
/session-search "debugging auth flow"
# With custom parameters (20 results, 180 days lookback)
/session-search "obsidian vault" 20 180
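The keyword pre-filtering stage can be sketched as ranking transcripts by query-term frequency, keeping only the top candidates for the separate LLM relevance pass. A simplified stand-in, not the skill's actual implementation:

```python
def prefilter(sessions, query, top_k=20):
    """sessions: list of (path, text) pairs; returns up to top_k paths
    containing at least one query term, best-scoring first."""
    terms = [t.lower() for t in query.split()]
    scored = []
    for path, text in sessions:
        haystack = text.lower()
        score = sum(haystack.count(term) for term in terms)
        if score:
            scored.append((score, path))
    scored.sort(reverse=True)
    return [path for _, path in scored[:top_k]]
```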
Use when: Finding previous Claude Code sessions about specific topics, locating past debugging conversations, or searching for research/planning sessions.
Evidence-based dialogue mode that replaces sycophantic AI responses with structured, critical analysis. Five modes for different contexts — from quick gut-checks to deep Socratic dialogue.
Modes:
Output Modifiers:
- `--table` — ASCII pro/contra table
- `--refs` — full academic citations with DOI validation
Meta-Rules:
Quick Start:
# Install via npx
npx skills add glebis/claude-skills -s balanced
# Or copy manually
cp -r balanced ~/.claude/skills/
# Quick analysis
/balanced "AI agents will replace most knowledge work within 5 years"
# Steelman mode for debate prep
/balanced steelman "remote work is more productive than office work"
# TLDR with table
/balanced tldr --table "should I migrate from REST to GraphQL?"
# Interactive Socratic dialogue
/balanced i "consciousness is an illusion"
# Onboarding — pick your default mode
/balanced onboard
Use when: You need honest, structured feedback instead of agreement — testing assumptions, evaluating claims, preparing arguments, making decisions.
Standalone CLI for the Linear issue tracker with browser-based OAuth via Linear's MCP server. Zero dependencies beyond Python 3 — authenticates through Dynamic Client Registration + PKCE, communicates via MCP JSON-RPC over Streamable HTTP.
Features:
- `tools` command to discover all available MCP operations
- Tokens and config stored in `~/.config/linear/` (XDG compliant)
Quick Start:
# Copy to skills directory
cp -r linear ~/.claude/skills/
# Authenticate (opens browser)
~/.claude/skills/linear/scripts/linear auth
# List teams
~/.claude/skills/linear/scripts/linear teams
# Create an issue
~/.claude/skills/linear/scripts/linear create "Fix login bug" --team GLE --priority high --assignee me --due today
# List your issues
~/.claude/skills/linear/scripts/linear list --mine
# Update an issue
~/.claude/skills/linear/scripts/linear update GLE-123 --state "In Progress"
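The PKCE half of the auth flow the CLI performs can be sketched as the standard RFC 7636 S256 derivation; only the verifier/challenge pair is shown here, with the browser redirect and token exchange omitted:

```python
import base64
import hashlib
import secrets

def pkce_pair():
    """Generate a code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-char base64url verifier, padding stripped.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # Challenge is base64url(SHA-256(verifier)), also without padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```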
Use when: Managing Linear issues from the terminal or Claude Code — creating tasks, querying backlogs, updating statuses, or integrating Linear into automated workflows.
Interview-driven automation design tool. Runs a coverage-driven JTBD interview (text or voice) to capture what to build, for whom, and why — then exports a one-page design.md spec plus an SVG design map. Sits between "should I automate this?" (automation-advisor) and "how do I package this as a skill?" (skill-creator).
Features:
- Proposes designs from past sessions (`propose-from-session`)
- Exports `design.md` + `design.svg` — ready to hand off to skill-creator
Quick Start:
# Install CLI
pip install -e ~/.claude/skills/skill-studio
# Text mode (default)
/skill-studio
# Voice mode (requires Daily, Groq, Deepgram keys)
skill-studio new --voice --preset ai-agent --depth standard
Use when: Designing a new skill, agent, automation, or workflow — transforms "I want a bot that..." into a structured spec with concrete scenarios, triggers, inputs/outputs, and guardrails.
Generate and edit images using Google's Gemini image generation models. Supports style presets, platform-specific sizing, variants, image editing, and reference images for style transfer.
Quick Start:
cp -r nano-banana ~/.claude/skills/
# Requires Google AI API key (auto-decrypted via SOPS)
Use when: Generating images for presentations, social media, blog posts, or editing existing images with AI.
Final retrospective and self-assessment for Claude Code Lab graduates. Four interactive exercises: progress audit, best prompt showcase, monthly plan, and structured feedback.
Quick Start:
cp -r lab-retro ~/.claude/skills/
/lab-retro
Use when: Completing a Claude Code Lab cohort — consolidates learning, captures best work, and generates a forward plan.
Register the repo as a skill source, then install individual skills:
# One-time: add the marketplace
claude plugin marketplace add glebis/claude-skills
# Install any skill
claude plugin install tdd@glebis-skills
claude plugin install doctorg@glebis-skills
claude plugin install deep-research@glebis-skills
Or with the skills CLI:
npx skills add glebis/claude-skills --skill tdd
npx skills add glebis/claude-skills --skill doctorg
npx skills add glebis/claude-skills --skill deep-research
# Clone the repository
git clone https://github.com/glebis/claude-skills.git
# Copy desired skill to Claude Code skills directory
cp -r claude-skills/<skill-name> ~/.claude/skills/
Some skills require additional setup after installation:
# For llm-cli: Install Python dependencies
cd ~/.claude/skills/llm-cli
pip install -r requirements.txt
# For deep-research: Set up environment
cd ~/.claude/skills/deep-research
cp .env.example .env
# Edit .env and add your OPENAI_API_KEY
# For youtube-transcript: Install yt-dlp
pip install yt-dlp
llm CLI tool: pip install llm

⚠️ Important: OpenAI requires organization verification to access certain models via API, including o4-mini-deep-research.
To verify your organization, use the verification flow in your OpenAI organization settings. Without verification, you'll receive a model_not_found error when trying to use the Deep Research API.
- yt-dlp: pip install yt-dlp
- telethon: pip install telethon
- ~/.telegram_dl/ (run telegram_dl.py to authenticate)
- telethon, rich, questionary: pip install telethon rich questionary
- GROQ_API_KEY env var or pip install openai-whisper
- ~/.config/telegram-telethon/
- pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
- npm install playwright
- ~/data/health.db (imported from Apple Health export)
- python-dotenv, requests: pip install python-dotenv requests
- --deep mode (optional)
- --full mode (optional)
- GitHub CLI (gh) - install from https://cli.github.com, then run gh auth login
- token with gist scope from https://github.com/settings/tokens

Skills are automatically triggered by Claude Code based on your requests. For example:
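With dependencies spread across many skills, a quick preflight loop shows which external CLIs are already on your PATH (the command list here is illustrative, drawn from the dependencies above):

```shell
# Report which external CLI dependencies are installed.
for cmd in yt-dlp gh npm python3; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "ok: $cmd"
  else
    echo "missing: $cmd"
  fi
done
```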
User: "Research the most effective open-source RAG solutions"
Claude: [Triggers deep-research skill]
- Asks clarifying questions
- Enhances prompt with parameters
- Runs comprehensive research
- Saves markdown report with sources
Create a .env file in the skill directory:
OPENAI_API_KEY=your-key-here
Or export as environment variable:
export OPENAI_API_KEY="your-key-here"
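The two approaches are equivalent: sourcing the file with allexport turns every KEY=value line in .env into an exported environment variable (a generic POSIX shell pattern, not specific to any one skill):

```shell
# Write an example .env, then export everything it defines.
printf 'OPENAI_API_KEY=your-key-here\n' > .env
set -a        # allexport: subsequent assignments are exported
. ./.env
set +a
echo "$OPENAI_API_KEY"
```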
Each skill includes comprehensive documentation:
- SKILL.md - Complete skill overview and usage guide
- CHANGELOG.md - Version history and updates
- references/ - Detailed workflow documentation

Contributions are welcome! To add a new skill:
skill-name/
├── SKILL.md # Skill metadata and documentation
├── CHANGELOG.md # Version history
├── .env.example # Example environment configuration
├── scripts/ # Executable orchestration scripts
├── assets/ # Core scripts and resources
└── references/ # Detailed documentation
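The layout above can be scaffolded in one step before filling in the files (`my-skill` is a placeholder name):

```shell
# Create the directory skeleton for a new skill.
mkdir -p my-skill/scripts my-skill/assets my-skill/references
touch my-skill/SKILL.md my-skill/CHANGELOG.md my-skill/.env.example
ls my-skill
```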
For guidance on creating your own skills, see the skill-creator guide.
MIT License - see individual skill directories for specific licenses.
Note: Skills require Claude Code to function. These are not standalone tools.