ElevenLabs Skills

Agent skills for ElevenLabs developer products. These skills follow the Agent Skills specification and can be used with any compatible AI coding assistant.

Installation

npx skills add elevenlabs/skills

Available Skills

Skill            Description
text-to-speech   Convert text to lifelike speech using ElevenLabs' AI voices
speech-to-text   Transcribe audio files to text with timestamps
agents           Build conversational voice AI agents
sound-effects    Generate sound effects from text descriptions
music            Generate music tracks using AI composition
setup-api-key    Guide through obtaining and configuring an ElevenLabs API key

Configuration

All skills require an ElevenLabs API key. Set it as an environment variable:

export ELEVENLABS_API_KEY="your-api-key"

Get your API key from the setup-api-key skill or use the ElevenLabs dashboard.
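Because every skill depends on this variable, it can help to fail fast with a clear message before constructing any client. A minimal sketch (the helper name require_api_key is illustrative, not part of the skills):

```python
import os

def require_api_key() -> str:
    """Return the ElevenLabs API key, raising a clear error if it is unset."""
    key = os.environ.get("ELEVENLABS_API_KEY")
    if not key:
        raise RuntimeError(
            "ELEVENLABS_API_KEY is not set; run the setup-api-key skill "
            "or export it in your shell."
        )
    return key
```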

SDK Support

Most skills include examples for:

  • Python - pip install elevenlabs
  • JavaScript/TypeScript - npm install @elevenlabs/elevenlabs-js
  • cURL - Direct REST API calls

JavaScript SDK Warning: Always use @elevenlabs/elevenlabs-js. Do not use npm install elevenlabs (that's an outdated v1.x package).

See the installation guide in any skill's references/ folder for complete setup instructions including migration from deprecated packages.

Evaluations

The evals/ directory contains trigger and functional evaluations for all skills.

# Run all evaluations (trigger + functional)
python3 evals/run_all.py -v

# Trigger evals only — tests if skills fire for the right queries (~3 min)
python3 evals/run_all.py --trigger-only -v

# Functional evals only — tests if skills produce correct output (~15 min)
python3 evals/run_all.py --functional-only -v

# Specific skills
python3 evals/run_all.py --skills text-to-speech agents -v

# Custom model (see `cursor-agent --list-models`)
python3 evals/run_all.py --model gpt-5.4-high -v

Results are saved to evals/results/<timestamp>/ with a report.md summary and results.json for programmatic access.
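To inspect the newest run programmatically, you can pick the most recent timestamp directory and load its results.json. A sketch that assumes only the layout described above (one subdirectory of evals/results/ per run, with timestamp names that sort chronologically):

```python
import json
from pathlib import Path

def latest_results(root: str = "evals/results") -> dict:
    """Load results.json from the most recent run directory under root."""
    runs = sorted(p for p in Path(root).iterdir() if p.is_dir())
    if not runs:
        raise FileNotFoundError(f"no eval runs found under {root}")
    # Assumes timestamped directory names sort chronologically,
    # so the lexicographically last entry is the newest run.
    return json.loads((runs[-1] / "results.json").read_text())
```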

Functional evals run each test case in an isolated cursor-agent workspace under that results tree; they never modify the skill sources themselves.

Requires the Cursor Agent CLI (cursor-agent on your PATH; override binary with CURSOR_AGENT) and Cursor authentication (cursor-agent login or CURSOR_API_KEY).
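The binary lookup described above can be mirrored in a few lines: honor CURSOR_AGENT if set, otherwise fall back to cursor-agent on PATH. This is a sketch of the documented override behavior, not the runner's actual code:

```python
import os
import shutil

def resolve_cursor_agent() -> str:
    """Resolve the Cursor Agent binary, honoring the CURSOR_AGENT override."""
    name = os.environ.get("CURSOR_AGENT", "cursor-agent")
    path = shutil.which(name)
    if path is None:
        raise FileNotFoundError(
            f"{name!r} not found on PATH; install the Cursor Agent CLI "
            "or point CURSOR_AGENT at the binary."
        )
    return path
```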

License

MIT