Install this skill with the CLI and start using the SKILL.md workflow in your workspace:

```bash
npx skills add https://github.com/xcrawl-api/xcrawl-skills --skill xcrawl
```
Reusable XCrawl skill definitions for multi-agent runtimes, focused on API-first web data workflows.
Canonical repository: https://github.com/xcrawl-api/xcrawl-skills
XCrawl is a web data infrastructure product for search, scraping, URL mapping, and site crawling.
This repository provides production-oriented skill definitions that help agents call XCrawl APIs consistently.
Skills in this repo:

- `xcrawl`: Default XCrawl entry skill for direct lookup and single-URL extraction
- `xcrawl-scrape`: Single-URL extraction and structured data workflows
- `xcrawl-map`: Site URL discovery and scope planning workflows
- `xcrawl-crawl`: Bulk site crawling and async result handling workflows
- `xcrawl-search`: Query-based discovery with location/language controls

To get started:

1. Open https://dash.xcrawl.com/ and activate the free 1000 credits plan.
2. Make sure `curl` and `node` are available.
3. Create the local config file:
Path: `~/.xcrawl/config.json`

```json
{
  "XCRAWL_API_KEY": "<your_api_key>"
}
```
Skills in this repo are designed to read XCRAWL_API_KEY from this local file.
Open one of:
- `skills/xcrawl/SKILL.md`
- `skills/xcrawl-scrape/SKILL.md`
- `skills/xcrawl-map/SKILL.md`
- `skills/xcrawl-crawl/SKILL.md`
- `skills/xcrawl-search/SKILL.md`

Each SKILL.md includes worked examples: use them directly, then adapt the request payloads for your business scenario.
Example prompt (truncated): "… `/docs/` URLs under this domain with a limit of 2000."

Each skill can be executed through a runtime adapter layer.
- Adapter input: `goal`, `inputs`, `constraints`, `credentials_ref`, `runtime_context`
- Adapter output: `status`, `request_payload`, `raw_response` or async pair, `task_ids`, `error`

Default behavior is raw passthrough: return upstream API response bodies as-is.
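One way an adapter could shape that envelope is sketched below. The field names follow the input/output contract above; the `runAdapter` function, the payload merge, and the `id`-based task extraction are illustrative assumptions, not this repo's implementation:

```javascript
// Illustrative adapter envelope: accepts the input fields listed above,
// returns the output fields, and passes the upstream response body
// through unmodified (raw passthrough).
async function runAdapter(task, callApi) {
  // task: { goal, inputs, constraints, credentials_ref, runtime_context }
  // How inputs and constraints combine is an assumption for this sketch.
  const request_payload = { ...task.inputs, ...task.constraints };
  try {
    // callApi is injected so the adapter stays transport-agnostic.
    const raw_response = await callApi(request_payload, task.credentials_ref);
    return {
      status: "ok",
      request_payload,
      raw_response, // raw passthrough: upstream body, as-is
      task_ids: raw_response.id ? [raw_response.id] : [],
      error: null,
    };
  } catch (err) {
    return { status: "error", request_payload, raw_response: null, task_ids: [], error: String(err) };
  }
}
```

Injecting `callApi` lets the same envelope wrap any of the endpoints below, and makes the adapter trivial to exercise with a stubbed transport.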
- Base URL: `https://run.xcrawl.com`
- Credentials: `~/.xcrawl/config.json`, key `XCRAWL_API_KEY`
- Auth header: `Authorization: Bearer <XCRAWL_API_KEY>`
- Scrape: `POST /v1/scrape` and `GET /v1/scrape/{scrape_id}`
- Map: `POST /v1/map`
- Crawl: `POST /v1/crawl` and `GET /v1/crawl/{crawl_id}`
- Search: `POST /v1/search`
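A minimal sketch of how these endpoints compose into requests. Only the base URL, methods, paths, and auth header come from the reference above; the payload fields (`url`, `query`) and the `buildRequest` helper are assumptions — check each SKILL.md for the actual schemas:

```javascript
// Build plain request descriptions for the XCrawl endpoints listed above.
// Each result can be sent with fetch(req.url, { method, headers, body }).
const BASE_URL = "https://run.xcrawl.com";

function buildRequest(method, endpoint, apiKey, body) {
  return {
    method,
    url: `${BASE_URL}${endpoint}`,
    headers: {
      Authorization: `Bearer ${apiKey}`, // auth header from the table above
      "Content-Type": "application/json",
    },
    body: body ? JSON.stringify(body) : undefined,
  };
}

// Illustrative calls (payload fields are assumed, not documented here):
const scrape = buildRequest("POST", "/v1/scrape", "key", { url: "https://example.com" });
const poll   = buildRequest("GET", "/v1/scrape/abc123", "key"); // {scrape_id} filled in
const search = buildRequest("POST", "/v1/search", "key", { query: "xcrawl" });
```

Separating request construction from sending keeps the auth and routing logic testable without touching the network.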