A collection of skills for AI coding agents. Skills are packaged instructions and scripts that extend agent capabilities.
```shell
npx skills add https://github.com/gohypergiant/agent-skills --skill accelint-ts-performance
```

Install this skill using the CLI and start using the SKILL.md workflow in your workspace.
A collection of skills and commands that transform AI agents into specialized problem solvers. Skills provide domain expertise, coding standards, and reusable workflows that help agents write better code with less context.
To have a greater guarantee of a skill being utilized, we recommend appending the following to any prompt you use:
```
Before relying on your training data you MUST evaluate and apply ALL APPLICABLE SKILLS to your problem space. IF AND ONLY IF you do not find a skill that applies are you allowed to fall back to your training data. This is not negotiable. This is not optional. You cannot rationalize your way out of this.
```
Skills follow the Agent Skills format and can be installed using the skills CLI.
npm:

```shell
npx skills add gohypergiant/agent-skills
```

pnpm:

```shell
pnpm dlx skills add gohypergiant/agent-skills
```
Once installed, skills activate automatically when relevant tasks are detected. No configuration needed.
```shell
# Agents will use accelint-ts-security when you ask:
"Add input validation to this function"

# Agents will use accelint-react-best-practices when you ask:
"Optimize this component's re-renders"

# Agents will use accelint-ts-testing when you ask:
"Write tests for this utility function"

# Skills can be explicitly requested in a prompt:
"Use the accelint-react-testing skill to write tests for this interactive modal"

# Skills can be invoked directly with a slash command:
claude /accelint-react-best-practices <dir>
```
For more in-depth examples, see Prompt Patterns.
To scaffold and establish a new skill you can invoke the accelint-skill-manager skill like so:
```
/accelint-skill-manager <description of skill>. Can you help me refine and complete it?
```
After creating or significantly modifying a skill, run this 4-step audit loop before considering the work done:

1. Run the skill-judge skill against the completed skill. Apply all suggested improvements before proceeding.
2. Run /clear to reset context, then run the accelint-skill-manager skill against the skill. Apply all structural and content suggestions before proceeding.
3. Run /clear, then run skill-judge again. Apply remaining suggestions. Target grade A or higher (>=108/120).
4. Verify the skill's frontmatter:
   - `name` is lowercase, contains no consecutive hyphens, is ≤64 chars, and matches the directory name
   - `description` answers WHAT + WHEN + KEYWORDS, is non-empty, and is ≤1024 chars
   - `metadata.version` is bumped (major for substantial changes, minor for small fixes)

Skills are modular packages that extend an agent's capabilities with specialized knowledge. They work like onboarding guides for specific domains.
Skills use a progressive disclosure structure. The agent reads a compact overview first (AGENTS.md), then loads detailed reference files only when needed. This minimizes context usage while keeping deep knowledge available.
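As a sketch of that progressive disclosure layout (file names below are illustrative, not necessarily what any specific skill in this repo uses):

```
my-skill/
├── AGENTS.md            # compact overview the agent always reads first
└── references/
    ├── patterns.md      # detailed guidance, loaded only when relevant
    └── examples.md      # worked examples, loaded only when relevant
```

The agent pays the context cost of AGENTS.md on every activation, but the reference files cost nothing until a task actually needs them.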
LLMs have broad knowledge but lack project-specific conventions and current best practices. Skills bridge this gap.
Skills are designed for agents, not humans. They're structured for efficient context usage and include trigger conditions that help Claude know when to activate them.
TypeScript and JavaScript coding standards covering:
- Strict typing rules (no `any`, prefer `type` over `interface`, use `as const` instead of `enum`)

Activates when: Writing JS/TS functions, fixing type errors, adding validation, reviewing code quality.
Systematic JavaScript/TypeScript performance optimization using V8 profiling.
Activates when: Code is measurably slow, optimizing hot paths, profiling shows bottlenecks, fixing excessive allocations, improving execution speed.
React performance optimization and modern patterns for React 19+:
- Modern React 19 APIs (`useEffectEvent`, `<Activity>`, ref as prop)

Activates when: Writing React components, debugging re-renders, fixing hydration errors, optimizing list rendering.
Testing patterns for Vitest:
- Parameterized tests with `it.each()`

Activates when: Writing `*.test.ts` files, adding test coverage, debugging flaky tests, reviewing test code.
Documentation standards for JavaScript and TypeScript.
Activates when: Adding JSDoc comments, documenting functions or types, auditing documentation completeness, adding TODO/FIXME markers, improving code comments.
Next.js performance optimization and best practices.
Activates when: Writing Server Components/Actions, implementing data fetching in RSC, optimizing API routes, debugging waterfall issues, reviewing Next.js code for performance, fixing authentication in Server Actions, reducing HTML payload size, or deciding between Server/Client Components.
TanStack Query best practices for React applications with Next.js App Router.
Activates when: Configuring QueryClient, implementing mutations, debugging performance issues, adding optimistic updates, working with query keys, handling cache invalidation, integrating with Next.js Server Components, debugging hydration issues, or using TanStack Query hooks (useQuery, useSuspenseQuery, useMutation).
Comprehensive security audit and vulnerability detection following the OWASP Top 10.
Activates when: Auditing security, checking for vulnerabilities, implementing authentication, adding API endpoints, handling user input, working with secrets or sensitive data, implementing payment features, or conducting pre-deployment security checks.
A meta-skill for creating and managing other skills.
Activates when: Creating new skills or updating existing ones.
Guide for creating Claude Code commands.
Activates when: Creating new Claude Code commands.
README documentation generator and updater.
Activates when: Creating or updating README.md files, documenting packages, or auditing documentation completeness.
Converts acceptance criteria into JSON test plans, and then into Playwright `*.spec.ts` files.
Activates when: Turning acceptance criteria into Playwright *.spec.ts tests.
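The exact plan schema lives in the skill's reference files; purely as an illustration, a generated plan might carry fields along these lines (all names and selectors here are hypothetical):

```json
{
  "acId": "AC-12",
  "title": "Modal closes on Escape",
  "steps": [
    { "action": "open", "selector": "[data-testid=modal]" },
    { "action": "press", "key": "Escape" }
  ],
  "assertions": [
    { "expect": "hidden", "selector": "[data-testid=modal]" }
  ]
}
```

The intermediate JSON stage lets the skill validate a plan against a schema before any Playwright code is written, which keeps the generated specs traceable back to a specific acceptance criterion.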
| Command | Description |
|---|---|
| `/feature-planning:acceptance` | Define acceptance criteria for a feature. |
| `/feature-planning:implementation` | Research codebase patterns and create implementation tasks. |
| `/feature-planning:testing` | Generate test plans based on implementation. |
You can search and audit third-party skills at skills.sh.
These third-party skills have been vetted for use alongside the Accelint skills:
Install ecosystem skills the same way you do Accelint skills:
```shell
npx skills add https://github.com/intellectronica/agent-skills --skill context7
```
This repository uses symlinks to make locally developed skills available during skill creation:
```shell
# Symlinks are already configured in this repo
ls -la .claude/skills/  # Shows symlinks to skills/
```
Why symlinks? When creating new skills, Claude can reference and learn from existing skills in `skills/`. The symlinks in `.claude/skills/` make these skills available to skill-creation workflows (e.g. `accelint-skill-manager`).

Structure:

- `skills/` - Source of truth for all locally developed skills
- `.claude/skills/` - Symlinks to `skills/` for Claude Code skill loading

Skills in `.claude/skills/` take precedence over globally installed skills.
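For illustration, wiring up one such symlink by hand from the repository root looks like this (the skill name `my-skill` is hypothetical):

```shell
# Expose a locally developed skill to Claude Code.
# The link target is relative, so it keeps working if the repo is moved or cloned.
mkdir -p .claude/skills
ln -sfn ../../skills/my-skill .claude/skills/my-skill
```

`-n` avoids descending into an existing link, and `-f` replaces it, so re-running the command is safe.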
To test a skill from this repository in another project:

- Symlink or copy the skill from `skills/` to `.claude/skills/` in your target project
- Or install it to `~/.claude/skills/` for cross-project testing

Reusable prompt patterns for common development tasks. These patterns work well with the skills and commands in this repository.
```
# Start with a plan
I want to [your task]. Research the codebase and create a plan. Do NOT write any code yet.

# Get a second opinion on the plan
Review this plan as a skeptical staff engineer. What's missing? What could go wrong? What would you push back on?

# Surface unknowns before proceeding
Make the plan extremely concise. At the end, give me a list of unresolved questions to answer before we start.

# Re-plan when stuck
This approach isn't working. Stop. Let's go back to plan mode. What went wrong and what's a better approach?

# Demand elegance
Knowing everything you know now, scrap this and implement the elegant solution.

# Make Agent your reviewer
Grill me on these changes and don't make a PR until I pass your test.

# Prove it works
Prove to me this works. Diff the behavior between main and this branch.

# After Agent makes a mistake and you correct it
Update AGENTS.md so you don't make that mistake again.
```
Persona:
You are an expert in the field of [CONCEPT] and a professional science communicator.
Objective:
Explain [CONCEPT] as if I'm 5 years old.
Requirements:
- Use simple everyday analogies
- Be specific, not grandiose. Say what it actually does
- Avoid technical jargon
- Avoid puffery: pivotal, crucial, vital, testament, enduring legacy
- Avoid empty "-ing" phrases: ensuring reliability, showcasing features, highlighting capabilities
- Avoid promotional adjectives: groundbreaking, seamless, robust, cutting-edge
- Avoid overused AI vocabulary: delve, leverage, multifaceted, foster, realm, tapestry
- Avoid formatting overuse: excessive bullets, emoji decorations, bold on every other word
Invokes: No specific skills (general-purpose explanation pattern)
Persona:
You are a lead software engineer and technical writer with 15+ years of experience.
Objective:
1. Check for bugs, edge cases, and error handling
2. Suggest performance improvements
3. Evaluate code structure and organization and recommend better patterns
4. Assess naming conventions and readability
5. Identify potential security issues
6. Provide thorough testing including edge cases
7. Explain your reasoning clearly with specific examples
Requirements:
Always prioritize readability and maintainability over cleverness.
Invokes: accelint-ts-best-practices, accelint-react-best-practices, accelint-ts-testing, accelint-nextjs-best-practices (depending on code type)
Persona:
You are a lead software engineer and technical writer with 15+ years of experience.
Objective:
1. **Problem Identification**: What exactly is failing?
2. **Root Cause**: Why is it failing?
3. **Fix**: Provide corrected code
4. **Prevention**: How to prevent similar bugs
Requirements:
Show your debugging thought process step by step.
Invokes: accelint-ts-best-practices, accelint-react-best-practices, accelint-ts-testing, accelint-nextjs-best-practices (depending on code type)
Persona:
You are a lead software engineer and technical writer with 15+ years of experience.
Objective:
Analyze this code for performance issues.
Requirements:
1. **Time Complexity**: Big O analysis
2. **Space Complexity**: Memory usage patterns
3. **I/O Bottlenecks**: Database, network, disk
4. **Algorithmic Issues**: Inefficient patterns
5. **Quick Wins**: Easy optimizations
Invokes: accelint-ts-best-practices, accelint-ts-performance, accelint-react-best-practices, accelint-nextjs-best-practices
Persona:
You are a lead security analyst and technical writer with 15+ years of experience.
Objective:
Perform a security review of this code.
Requirements:
1. **Input Validation**: Check all inputs
2. **Authentication/Authorization**: Access control
3. **Data Protection**: Sensitive data handling
4. **Injection Vulnerabilities**: SQL, XSS, etc.
5. **Dependencies**: Known vulnerabilities
Invokes: accelint-ts-best-practices, accelint-ts-performance, accelint-react-best-practices, accelint-nextjs-best-practices, accelint-security-best-practices
Persona:
You are an expert agent skill architect.
Objective:
1. Use the accelint-skill-manager skill to audit ./skills/example-skill
2. Identify any best practice optimizations that can be made
3. Optimize towards deterministic output and correctness when auditing
4. Explain your reasoning clearly with specific examples
5. Re-run this loop but with the skill-judge skill instead
Invokes: accelint-skill-manager, skill-judge
Persona:
You are a senior QA automation engineer with deep Playwright experience.
Objective:
1. Convert the acceptance criteria at [AC_PATH] into JSON test plans.
2. Validate each plan against the schema.
3. Translate validated plans into Playwright spec files.
Requirements:
- Follow the AC rules and mappings in the skill references.
- Process one AC file at a time.
- Ask for any missing required fields before generating plans.
- Do not invent assertions. If the correct assertion to use isn't clear from the AC, ask.
- Require explicit output directories for plans, tests, and summaries before writing files.
Output:
Validated JSON plans and Playwright spec files, plus a brief summary of what was generated.
This repository uses the following third-party agent skills internally:
See CONTRIBUTING.md for setup instructions.
Quick contribution guide:
Apache 2.0 - see LICENSE for details.