Feel free to also use the template-specific sqd scripts defined in commands.json to simplify your workflow. See the sqd CLI cheatsheet for a short intro.

Prepare the environment
- Node v20.x or newer
- Git
- Squid CLI
- Docker (if your squid will store its data in PostgreSQL)
Understand your technical requirements
Consider your business requirements and find out:
- How the data should be delivered. Options:
  - PostgreSQL with an optional GraphQL API - can be real-time
  - file-based dataset - local or on S3
  - Google BigQuery
- What data should be delivered
- What are the technologies powering the blockchain(s) in question. Supported options:
  - Ethereum Virtual Machine (EVM) chains like Ethereum - supported networks
- What exact data should be retrieved from blockchain(s)
- Whether you need to mix in any off-chain data
Example requirements
Start from a template {#templates}
Although it is possible to compose a squid from individual packages, in practice it is usually easier to start from a template.

Templates for the PostgreSQL+GraphQL data destination
- A minimal template intended for developing EVM squids. Indexes ETH burns.
- A starter squid for indexing ERC20 transfers.
- Classic example Subgraph after a migration to SQD.
- A template showing how to combine data from multiple chains. Indexes USDC transfers on Ethereum and Binance.
Templates for storing data in files
- USDC transfers -> local CSV
- USDC transfers -> local Parquet
- USDC transfers -> CSV on S3
Templates for the Google BigQuery data destination
- USDC transfers -> BigQuery dataset
Start the GraphQL server
Run the GraphQL server in a separate terminal (templates define an sqd serve script for this in commands.json), then visit the GraphiQL console to verify that the GraphQL API is up.
When you are done, shut down the database container with docker compose down.

The bottom-up development cycle {#bottom-up-development}
The advantage of this approach is that the code remains buildable at all times, making it easier to catch issues early.

I. Regenerate the task-specific utilities {#typegen}
Retrieve JSON ABIs for all contracts of interest (e.g. from Etherscan), taking care to get ABIs for the implementation contracts and not the proxies where appropriate. Assuming that you saved the ABI files to ./abi, you can then regenerate the utilities with the typegen tool (sqd typegen in templates); the generated code will be placed at src/abi.
See also EVM typegen code generation.
II. Configure the data requests {#processor-config}
Data requests are customarily defined at src/processor.ts. Edit the definition of const processor to do the following (a configuration sketch follows the list):
- Use a data source appropriate for your chain and task.
  - It is possible to use RPC as the only data source, but adding a SQD Network data source will make your squid sync much faster.
  - RPC is a hard requirement if you’re building a real-time API.
  - If you’re using RPC as one of your data sources, make sure to set the number of finality confirmations so that hot blocks ingestion works properly.
  - On low block time, high data rate networks (e.g. Arbitrum), use WSS endpoints if latency is critical.
- Request all event logs, transactions, execution traces and state diffs that your task requires, with any necessary related data (e.g. parent transactions for event logs).
- Select all data fields necessary for your task (e.g. gasUsed for transactions).
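A minimal configuration sketch along these lines is shown below. It assumes an ERC20 Transfer indexing task on Ethereum mainnet; the gateway URL, RPC environment variable, contract address and the ./abi/erc20 module are placeholders for illustration, not values prescribed by any particular template.

```ts
// src/processor.ts - a configuration sketch, not any template's exact setup
import {assertNotNull} from '@subsquid/util-internal'
import {EvmBatchProcessor} from '@subsquid/evm-processor'
import * as erc20 from './abi/erc20'

// example contract address (USDC on Ethereum mainnet)
const USDC = '0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48'

export const processor = new EvmBatchProcessor()
  // SQD Network data source: makes syncing much faster than RPC alone
  .setGateway('https://v2.archive.subsquid.net/network/ethereum-mainnet')
  // RPC data source: a hard requirement for real-time indexing
  .setRpcEndpoint(assertNotNull(process.env.RPC_ETH_HTTP))
  // finality confirmations so that hot blocks ingestion works properly
  .setFinalityConfirmation(75)
  // request the event logs the task needs, plus their parent transactions
  .addLog({
    address: [USDC],
    topic0: [erc20.events.Transfer.topic],
    transaction: true,
  })
  // select the extra data fields the task needs
  .setFields({
    transaction: {gasUsed: true},
  })
```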
III. Decode and normalize the data {#batch-handler-decoding}
Next, change the batch handler to decode and normalize your data. In templates, the batch handler is defined at the processor.run() call in src/main.ts as an inline function. Its sole argument ctx contains:
- at ctx.blocks: all the requested data for a batch of blocks
- at ctx.store: the means to save the processed data
- at ctx.log: a Logger
- at ctx.isHead: a boolean indicating whether the batch is at the current chain head
- at ctx._chain: the means to access RPC for state calls

Each item in ctx.blocks contains the data for the requested logs, transactions, traces and state diffs of a particular block, plus some info on the block itself. See the EVM batch context reference.
Use the .decode methods from the contract ABI utilities to decode events and transactions, e.g. as in the sketch below.
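Here is a decoding sketch that continues the assumed ERC20 Transfer example above; the erc20 module is the one hypothetically generated by typegen:

```ts
// src/main.ts - a decoding sketch, assuming the ERC20 Transfer setup above
import {TypeormDatabase} from '@subsquid/typeorm-store'
import * as erc20 from './abi/erc20'
import {processor} from './processor'

processor.run(new TypeormDatabase(), async ctx => {
  for (const block of ctx.blocks) {
    for (const log of block.logs) {
      if (log.topics[0] === erc20.events.Transfer.topic) {
        // decode and normalize the event data
        const {from, to, value} = erc20.events.Transfer.decode(log)
        ctx.log.debug(`Transfer of ${value} from ${from} to ${to}`)
      }
    }
  }
})
```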
(Optional) IV. Mix in external data and chain state calls output {#external-data}
If you need external (i.e. non-blockchain) data in your transformation, take a look at the External APIs and IPFS page. If any of the on-chain data you need is unavailable from the processor or inconvenient to retrieve with it, you have the option to get it via direct chain queries, e.g. as in the sketch below.
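For illustration, a direct chain query can be made through the Contract class that typegen generates for contracts with view functions. The sketch below assumes the same erc20 module and example contract address as the earlier examples:

```ts
// a sketch of a direct chain state query from within the batch handler
import {TypeormDatabase} from '@subsquid/typeorm-store'
import {Contract as Erc20Contract} from './abi/erc20'
import {processor} from './processor'

const USDC = '0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48' // example contract

processor.run(new TypeormDatabase(), async ctx => {
  for (const block of ctx.blocks) {
    // bind the contract wrapper to a specific block to query state at that height
    const contract = new Erc20Contract(ctx, block.header, USDC)
    const totalSupply = await contract.totalSupply()
    ctx.log.info(`Total supply at block ${block.header.height}: ${totalSupply}`)
  }
})
```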
V. Prepare the store {#store}
At src/main.ts, change the Database object definition to accept your output data. The methods for saving data will be exposed by ctx.store within the batch handler.
For the PostgreSQL+GraphQL destination:
Define the database schema
Define the schema of the database (and the core schema of the OpenReader GraphQL API if it is used) at schema.graphql.

Ensure access to a blank database
The easiest way to do so is to start PostgreSQL in a Docker container with docker compose up -d. If the container is already running, stop it and erase the database with docker compose down before issuing a new docker compose up -d. The alternative is to connect to an external database. See this section to learn how to specify the connection parameters.
Within the batch handler, use ctx.store.upsert() and ctx.store.insert(), as well as various TypeORM lookup methods, to access the database. See the typeorm-store guide and reference for more info; a basic wiring sketch is shown below.
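A minimal wiring sketch for this destination, assuming the typeorm-store package; supportHotBlocks is shown only as an example option, check the typeorm-store reference for the full set:

```ts
// src/main.ts - a sketch of wiring a Database object into the processor
import {TypeormDatabase} from '@subsquid/typeorm-store'
import {processor} from './processor'

// supportHotBlocks enables indexing of unfinalized (hot) blocks via RPC
processor.run(new TypeormDatabase({supportHotBlocks: true}), async ctx => {
  // decode, transform and persist the batch data here via ctx.store
})
```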
VI. Persist the transformed data to your data sink {#batch-handler-persistence}
Once your data is decoded, optionally enriched with external data and transformed the way you need it to be, it is time to save it. For the PostgreSQL+GraphQL destination:
For each batch, create all the instances of all TypeORM model classes at once, then save them with the minimal number of calls to upsert() or insert(), e.g. as in the sketch below. It will often make sense to keep the entity instances in maps rather than arrays to make it easier to reuse them when defining instances of other entities with relations to the previous ones. The process is described in more detail in step 2 of the BAYC tutorial. If you perform any database lookups, try to do so in batches and make sure that the entity fields you’re searching over are indexed. See also the patterns and anti-patterns sections of the Batch processing guide.
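Below is a batched persistence sketch continuing the assumed ERC20 Transfer example; the Transfer entity is a hypothetical model defined in schema.graphql and generated into src/model:

```ts
// src/main.ts - a batched persistence sketch
import {TypeormDatabase} from '@subsquid/typeorm-store'
import * as erc20 from './abi/erc20'
import {Transfer} from './model'
import {processor} from './processor'

processor.run(new TypeormDatabase({supportHotBlocks: true}), async ctx => {
  // keep the new entity instances in a map keyed by id for easy reuse
  const transfers = new Map<string, Transfer>()

  for (const block of ctx.blocks) {
    for (const log of block.logs) {
      if (log.topics[0] === erc20.events.Transfer.topic) {
        const {from, to, value} = erc20.events.Transfer.decode(log)
        transfers.set(log.id, new Transfer({id: log.id, from, to, value}))
      }
    }
  }

  // persist the whole batch with a single upsert call
  await ctx.store.upsert([...transfers.values()])
})
```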
The top-down development cycle
The bottom-up development cycle described above is convenient for initial squid development and for trying out new things, but it has the disadvantage that the means of saving the data are not yet in place when you first write the data decoding/transformation code. That makes it necessary to come back to that code later, which is somewhat inconvenient, e.g. when adding new squid features incrementally. The alternative is to do the same steps in a different order:
- Update the store
- If necessary, regenerate the utility classes
- Update the processor configuration
- Decode and normalize the added data
- Retrieve any external data if necessary
- Add the persistence code for the transformed data
GraphQL options
Store your data in PostgreSQL, then consult Serving GraphQL for options.

Scaling up
If you’re developing a large squid, make sure to use batch processing throughout your code. A common mistake is to write handlers for individual event logs or transactions; for updates that require data retrieval, this results in lots of small database lookups and ultimately in poor syncing performance. Instead, collect all the relevant data and process it at once. A simple architecture of that type is discussed in the BAYC tutorial. You should also check the Cloud best practices page even if you’re not planning to deploy to SQD Cloud - it contains valuable performance-related tips. Many issues commonly arising when developing larger squids are addressed by the third-party @belopash/typeorm-store package. Consider using it.
For complete examples of complex squids take a look at the Giant Squid Explorer and Thena Squid repos.
Next steps
- Learn about batch processing.
- Learn how squids deal with unfinalized blocks.
- Use external APIs and IPFS in your squid.
- See how squids should be set up for the multichain setting.
- Deploy your squid on your own infrastructure or to SQD Cloud.

