
# phoenix-evals

Build and run evaluators for AI/LLM applications using Phoenix.

Arize-ai · 8.7k stars · 736 forks


## SKILL.md
### Metadata

| Field | Value |
| --- | --- |
| name | phoenix-evals |
| description | Build and run evaluators for AI/LLM applications using Phoenix. |
| license | Apache-2.0 |
| version | "1.0.0" |
| languages | Python, TypeScript |

## Phoenix Evals

Build evaluators for AI/LLM applications: write deterministic code checks first, reach for an LLM judge only where nuance demands it, and validate every judge against human labels.

## Quick Reference

| Task | Files |
| --- | --- |
| Setup | setup-python, setup-typescript |
| Decide what to evaluate | evaluators-overview |
| Choose a judge model | fundamentals-model-selection |
| Use pre-built evaluators | evaluators-pre-built |
| Build code evaluator | evaluators-code-{python\|typescript} |
| Build LLM evaluator | evaluators-llm-{python\|typescript}, evaluators-custom-templates |
| Batch evaluate DataFrame | evaluate-dataframe-python |
| Run experiment | experiments-running-{python\|typescript} |
| Create dataset | experiments-datasets-{python\|typescript} |
| Generate synthetic data | experiments-synthetic-{python\|typescript} |
| Validate evaluator accuracy | validation, validation-evaluators-{python\|typescript} |
| Sample traces for review | observe-sampling-{python\|typescript} |
| Analyze errors | error-analysis, error-analysis-multi-turn, axial-coding |
| RAG evals | evaluators-rag |
| Avoid common mistakes | common-mistakes-python, fundamentals-anti-patterns |
| Production | production-overview, production-guardrails, production-continuous |
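As a framework-agnostic sketch of the "build code evaluator" task: a deterministic check that returns a binary label, score, and explanation. All names here are illustrative, not the Phoenix API; see the evaluators-code-* files for the real interface.

```python
import re

def contains_citation(output: str) -> dict:
    """Deterministic code evaluator: pass if the answer cites a source.

    Returns a binary pass/fail label with a score and explanation,
    matching the "Binary > Likert" convention. Illustrative only.
    """
    passed = bool(re.search(r"\[\d+\]|\bhttps?://", output))
    return {
        "label": "pass" if passed else "fail",
        "score": 1 if passed else 0,
        "explanation": "found a citation" if passed else "no citation found",
    }

print(contains_citation("See the docs [1] for details."))
# {'label': 'pass', 'score': 1, 'explanation': 'found a citation'}
```

Checks like this are cheap and reproducible, which is why they come before any LLM judge.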

## Workflows

**Starting Fresh**: observe-tracing-setup → error-analysis → axial-coding → evaluators-overview

**Building Evaluator**: fundamentals → common-mistakes-python → evaluators-{code|llm}-{python|typescript} → validation-evaluators-{python|typescript}

**RAG Systems**: evaluators-rag → evaluators-code-* (retrieval) → evaluators-llm-* (faithfulness)

**Production**: production-overview → production-guardrails → production-continuous
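The LLM-judge step in the Building Evaluator workflow can be sketched as a binary prompt template plus a strict parser that snaps the judge's reply onto an allowed label set. Template text, rails, and function names are hypothetical; the real templates live in evaluators-llm-* and evaluators-custom-templates.

```python
# Hypothetical binary judge template -- not a Phoenix-shipped template.
FAITHFULNESS_TEMPLATE = """\
You are judging whether an answer is faithful to the provided context.

Context: {context}
Answer: {answer}

Respond with exactly one word: "faithful" or "unfaithful".
"""

RAILS = ("faithful", "unfaithful")  # the only labels we accept

def parse_label(raw: str) -> str:
    """Normalize the judge's raw completion onto the rails;
    anything unparseable fails closed to "unfaithful"."""
    label = raw.strip().strip('"').lower()
    return label if label in RAILS else "unfaithful"

prompt = FAITHFULNESS_TEMPLATE.format(
    context="Paris is in France.", answer="Paris is in Germany."
)
# The prompt would go to the judge model chosen via
# fundamentals-model-selection; here we just parse a stub reply.
print(parse_label('  "Faithful" '))  # faithful
```

Failing closed on unparseable output keeps a flaky judge from silently inflating pass rates.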

## Rule Categories

| Prefix | Description |
| --- | --- |
| fundamentals-* | Types, scores, anti-patterns |
| observe-* | Tracing, sampling |
| error-analysis-* | Finding failures |
| axial-coding-* | Categorizing failures |
| evaluators-* | Code, LLM, RAG evaluators |
| experiments-* | Datasets, running experiments |
| validation-* | Validating evaluator accuracy against human labels |
| production-* | CI/CD, monitoring |

## Key Principles

| Principle | Action |
| --- | --- |
| Error analysis first | Can't automate what you haven't observed |
| Custom > generic | Build from your failures |
| Code first | Deterministic before LLM |
| Validate judges | >80% TPR/TNR |
| Binary > Likert | Pass/fail, not 1-5 |
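The "Validate judges" principle reduces to two numbers computed against human labels. A generic sketch (not the code in the validation-* files), using 1 = pass and 0 = fail:

```python
def tpr_tnr(human: list[int], judge: list[int]) -> tuple[float, float]:
    """True-positive rate and true-negative rate of a judge
    versus human labels (1 = pass, 0 = fail). Both should clear
    0.80 before the judge replaces human review."""
    tp = sum(1 for h, j in zip(human, judge) if h == 1 and j == 1)
    tn = sum(1 for h, j in zip(human, judge) if h == 0 and j == 0)
    pos = sum(human)
    neg = len(human) - pos
    return tp / pos, tn / neg

human = [1, 1, 1, 0, 0, 1, 0, 1]
judge = [1, 1, 0, 0, 0, 1, 1, 1]
tpr, tnr = tpr_tnr(human, judge)
print(f"TPR={tpr:.2f} TNR={tnr:.2f} usable={tpr > 0.8 and tnr > 0.8}")
# TPR=0.80 TNR=0.67 usable=False
```

Tracking both rates matters: a judge that passes everything has perfect TPR and useless TNR, which a single accuracy number would hide.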