The fastest way to get an AI coding agent productive on a Pipes SDK project is to install the official Pipes SDK Agent Skill:
```bash
npx skills add subsquid-labs/skills/pipes-sdk
```
The skill activates automatically on tasks like “create an indexer for Uniswap V3 swaps” or “my indexer is syncing slowly, help me optimize it”. It covers scaffolding, runtime error diagnosis, sync tuning, and data-quality checks.

Pair the skill with one or both MCP servers so the agent can read live data and look things up:
- **Portal MCP server** — 29 tools for querying blocks, transactions, logs, instructions, and analytics across 225+ datasets. No API key required.
If you’d rather feed docs into a model directly, the static llms.txt (index) and llms-full.txt (full content) files are kept in sync with the site. See the AI Development overview for the full menu.
The CLI prompts for the project folder name, package manager (please stick to pnpm for now), sink (please use ClickHouse or Postgres), network type, network, and template; then it installs dependencies and writes a runnable project.

You can supply a JSON config instead of answering the prompts manually. Here’s the configuration for the Orca Whirlpool swap instructions example mentioned above:
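As a rough sketch only, assuming the config keys mirror the prompts listed above (the key names and values here are guesses, not the CLI’s actual schema):

```json
{
  "name": "orca-whirlpool-swaps",
  "packageManager": "pnpm",
  "sink": "clickhouse",
  "networkType": "solana",
  "network": "solana-mainnet",
  "template": "swaps"
}
```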
The decoder asks the Portal for swap instructions on the Orca Whirlpool program. enrichEvents (from src/utils/) reshapes each decoded instruction into a row matching the Drizzle table. See the Pipe anatomy and Handling contract events guides for more info on evmDecoder().

The main() function wires the decoder to a drizzleTarget:
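The real wiring depends on the SDK; as a self-contained sketch of the decoder-to-target pattern (every name and type below is a stand-in, not the Pipes SDK API):

```typescript
// Stand-in row shape: the real decoded Orca Whirlpool instruction differs.
type SwapRow = { block: number; amountIn: bigint; amountOut: bigint };

// Stand-in for the stream of decoded swap instructions from the Portal.
function* decodedSwaps(): Generator<SwapRow> {
  yield { block: 1, amountIn: 10n, amountOut: 9n };
  yield { block: 2, amountIn: 5n, amountOut: 4n };
}

// Stand-in target: collects rows and advances a cursor, loosely modeling
// what drizzleTarget does with its per-pipeline cursor.
class InMemoryTarget {
  rows: SwapRow[] = [];
  cursor = 0;
  write(row: SwapRow): void {
    this.rows.push(row);
    this.cursor = row.block; // cursor tracks the last written block
  }
}

// main() pulls decoded rows and pushes them into the target.
function main(target: InMemoryTarget): InMemoryTarget {
  for (const row of decodedSwaps()) {
    target.write(row);
  }
  return target;
}
```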
The id is a per-pipeline identifier — keep it stable so the target’s cursor survives restarts. See solanaPortalSource for the full source API and Pipe anatomy for how the pieces fit together.
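Why stability matters can be shown with a tiny sketch: the cursor is stored keyed by the pipeline id, so renaming the id orphans the stored cursor and forces a full resync (the in-memory store below is a stand-in for the real sink-side storage):

```typescript
// Hypothetical cursor store, standing in for sink-side cursor persistence.
const cursors = new Map<string, number>();

function saveCursor(pipelineId: string, block: number): void {
  cursors.set(pipelineId, block);
}

function resumeFrom(pipelineId: string): number {
  // Unknown id -> no saved cursor -> sync starts from scratch
  return cursors.get(pipelineId) ?? 0;
}

// First run under id "orca-swaps" syncs to block 5000
saveCursor("orca-swaps", 5000);

// Restart with the same id resumes where it left off
const resumed = resumeFrom("orca-swaps");

// Renaming the pipeline id discards the stored progress: full resync
const renamed = resumeFrom("orca-swaps-v2");
```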
The tokenBalances template indexes pre/post token balances directly from blocks — no program ID needed. The generated pipe uses solanaQuery() instead of an instruction decoder.
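A common follow-up step with this template is diffing pre/post balances into per-account deltas. A self-contained sketch of that computation (the TokenBalance and Tx shapes are hypothetical, not the fields solanaQuery() actually returns):

```typescript
// Hypothetical shapes modeling pre/post token balances on a transaction.
type TokenBalance = { account: string; mint: string; amount: bigint };
type Tx = { pre: TokenBalance[]; post: TokenBalance[] };

// Compute per-account balance change: post amount minus pre amount.
function balanceDeltas(tx: Tx): Map<string, bigint> {
  const deltas = new Map<string, bigint>();
  for (const b of tx.pre) {
    deltas.set(b.account, -b.amount);
  }
  for (const b of tx.post) {
    deltas.set(b.account, (deltas.get(b.account) ?? 0n) + b.amount);
  }
  return deltas;
}
```

Accounts that appear only in `post` (freshly created token accounts) get their full post amount as the delta, since they have no pre entry to subtract.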