A collection of skills to help coding agents onboard to LiveAvatar.
```bash
npx skills add https://github.com/heygen-com/liveavatar-agent-skills --skill liveavatar-debug
```
Install a skill via the CLI and start using its SKILL.md workflow in your workspace.
Reusable skills for AI coding agents integrating with LiveAvatar. These skills provide procedural knowledge that helps agents build LiveAvatar integrations correctly on the first attempt.
| Skill | Description |
|---|---|
| liveavatar-integrate | End-to-end integration builder — assesses your existing stack, recommends the optimal path (Embed / FULL / LITE), and guides implementation step by step. |
| liveavatar-debug | Symptom-based troubleshooting for silent avatars, garbled audio, auth errors, and more. |
| liveavatar-feedback | Collects user feedback on their LiveAvatar integration experience and sends it to the LiveAvatar team. Triggers after implementation, on frustration, or on explicit request. |
```bash
# Install all skills globally
npx skills add heygen-com/liveavatar-agent-skills -a claude-code -g

# Or install to the current project only
npx skills add heygen-com/liveavatar-agent-skills -a claude-code

# Install a specific skill only
npx skills add heygen-com/liveavatar-agent-skills --skill liveavatar-integrate
```
This works with Claude Code, Cursor, Codex, and other agents.
```bash
git clone https://github.com/heygen-com/liveavatar-agent-skills.git

# Symlink all skills into your personal skills directory (available in all projects)
for skill in liveavatar-agent-skills/skills/*/; do
  ln -s "$(pwd)/$skill" ~/.claude/skills/"$(basename "$skill")"
done
```
Skills activate automatically when agents detect relevant tasks (e.g., "add a LiveAvatar avatar", "build a conversational avatar", "integrate LiveAvatar").
Each skill follows the Agent Skills format:
```
skill-name/
├── SKILL.md       # Agent instructions with behavioral guidance
└── references/    # Supporting documentation (API details, code examples)
```
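For orientation, a SKILL.md in this format typically opens with YAML frontmatter that names the skill and tells the agent when to load it. The snippet below is an illustrative sketch, not the actual contents of any skill in this repo:

```markdown
---
name: liveavatar-integrate
description: Use when the user wants to build or extend a LiveAvatar integration.
---

# LiveAvatar Integration

Behavioral guidance for the agent, with pointers into references/
for API details and code examples.
```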
| Task | Skill |
|---|---|
| Build a new LiveAvatar integration | liveavatar-integrate |
| Put an avatar on a page (no code) | liveavatar-integrate → Embed pathway |
| Build a conversational avatar | liveavatar-integrate → FULL Mode pathway |
| Use your own LLM / TTS / full pipeline | liveavatar-integrate → FULL or LITE pathway |
| Connect ElevenLabs Agents to an avatar | liveavatar-integrate → LITE + ElevenLabs pathway |
| Debug a silent or broken avatar | liveavatar-debug |
| Share feedback about the integration experience | liveavatar-feedback |
"Add a LiveAvatar avatar to my landing page"
"Build a conversational AI avatar for customer support"
"I have my own LLM — integrate it with LiveAvatar"
"Set up LiveAvatar LITE Mode with my Pipecat pipeline"
"My LiveAvatar avatar is silent, help me debug"
"Create a sandbox session to test LiveAvatar"
"I want to give feedback about the LiveAvatar integration"
"This integration is frustrating, let me report this"
All skills use the LiveAvatar API:
- Base URL: `https://api.liveavatar.com`
- Auth: `X-API-KEY` header (backend), `Bearer <session_token>` (session operations)

Contributions welcome! See CONTRIBUTING.md for guidelines. When adding or modifying skills, keep supporting documentation (API details, code examples) in `references/`.

MIT