
FAQ & Troubleshooting

Common questions and solutions for developing with Squid SDK.

Frequently Asked Questions

Real-World Applications

What are some real-world applications for which Squid SDK is a good fit?

Squid SDK is well-suited for a wide range of blockchain indexing applications:
  • DEX Analytics - Track swaps and liquidity pools across Solana DEXs
  • NFT Marketplaces - Index NFT trades and collections on Solana
  • Program Analysis - Monitor program interactions and account state changes
  • Real-time Bots - Build bots triggered by on-chain activity with sub-second delays
Squid SDK excels at applications requiring high-performance indexing, complex data transformations, and real-time processing.

Technical Questions

How does Squid SDK handle unfinalized blocks?

The SQD Network serves finalized blocks and is typically ~1000 blocks behind the tip. Recent and unfinalized blocks are seamlessly handled by the SDK from a complementary RPC data source configured in your processor.
Potential chain reorganizations are automatically handled under the hood, ensuring data consistency.
For detailed information, see Indexing unfinalized blocks.
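As a sketch, an EVM processor might combine the two data sources like this (the gateway URL, RPC endpoint, and confirmation depth below are illustrative; adjust them for your network and provider):

```typescript
import {EvmBatchProcessor} from '@subsquid/evm-processor'

// Historical (finalized) blocks come from the SQD Network gateway;
// recent and unfinalized blocks come from the RPC endpoint.
const processor = new EvmBatchProcessor()
  .setGateway('https://v2.archive.subsquid.io/network/ethereum-mainnet')
  .setRpcEndpoint({
    url: process.env.RPC_ETH_HTTP ?? 'https://rpc.ankr.com/eth',
    rateLimit: 10 // requests per second; tune for your provider
  })
  // blocks at least this many confirmations deep are treated as final
  .setFinalityConfirmation(75)
```

With this configuration the processor switches to the RPC source automatically once it reaches the vicinity of the chain tip.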

What is the latency for the data served by the squid?

Since the ArrowSquid release, Squid SDK can ingest unfinalized blocks directly from an RPC endpoint, making indexing real-time with minimal latency.
Configure your RPC endpoint in the processor to enable real-time indexing of the latest blocks.

How do I enable GraphQL subscriptions for local runs?

Add the --subscription flag to the serve command in your commands.json:
{
  "commands": {
    "serve:dev": {
      "cmd": ["npx", "squid-graphql-server", "--subscription"]
    }
  }
}
See Subscriptions for detailed configuration options.
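With the flag enabled, the server exposes subscription counterparts of the entity list queries. An illustrative example, assuming a hypothetical Transfer entity with an amount field in your schema:

```graphql
subscription {
  transfers(limit: 5, orderBy: id_DESC) {
    id
    amount
  }
}
```

The server re-runs the query at a fixed poll interval and pushes the result to the client over a WebSocket connection.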

How do squids keep track of their sync progress?

Sync progress tracking depends on the data sink used.

TypeORM database: Processors using TypeormDatabase store their state in a PostgreSQL schema (not a table). By default the schema is called squid_processor; its name must be overridden in multiprocessor squids.
View sync status:
SELECT * FROM squid_processor.status;
Reset processor status:
DROP SCHEMA squid_processor CASCADE;
File-based datasets: Squids using file-based storage store their status in status.txt by default. This can be customized using database hooks.
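For the multiprocessor case, the schema override is a TypeormDatabase constructor option. A minimal sketch (the schema name is illustrative):

```typescript
import {TypeormDatabase} from '@subsquid/typeorm-store'

// Give each processor its own state schema so their sync
// records don't clash in the shared database
const db = new TypeormDatabase({stateSchema: 'erc20_processor'})
```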

Is there a healthcheck endpoint for the indexer?

Yes! The processor exposes Prometheus metrics at the /metrics endpoint on the port set by the PROMETHEUS_PORT environment variable.
For squids deployed to SQD Cloud, metrics are publicly exposed. See Monitoring in the Cloud for details.

Do squids have a debug mode?

Yes. Enable debug mode by setting the SQD_DEBUG environment variable:
# Enable all debug messages
SQD_DEBUG=*

# Enable specific namespace (e.g., SQD Network queries)
SQD_DEBUG=sqd:processor:archive
Use specific namespaces to focus on particular components and reduce log noise during debugging.

Troubleshooting

Many issues can be resolved by following the SQD Cloud best practices guide.

Processor Issues

Instruction decoding errors on Solana

  • Ensure that the instruction data matches the expected program IDL format. Verify that you’re using the correct program ID and instruction discriminator.
  • Check that the instruction accounts array matches the expected account layout for the instruction being decoded.
  • If decoding fails, verify that the program IDL is up to date and matches the on-chain program version.

Data Sink Issues

QueryFailedError: relation "..." does not exist

It is likely that the generated migrations in the db/migrations folder are outdated and do not match the schema file. Recreate the migrations from scratch as detailed on this page.

Query runner already released. Cannot run queries anymore, or too late to perform db updates, make sure you haven't forgot to await on db query

If your squid saves its data to a database, all operations with the store are asynchronous. Make sure you await all store operations such as upsert, update, find and save. The require-await ESLint rule can help catch missing awaits.
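The failure mode can be sketched with a toy store (FakeStore is illustrative, not the real TypeORM store API): the framework releases the query runner when the batch handler returns, so an unawaited write lands on an already-released connection.

```typescript
class FakeStore {
  saved: string[] = []
  errors: string[] = []
  private released = false

  async upsert(entity: string): Promise<void> {
    // simulate an asynchronous database round-trip
    await new Promise(resolve => setTimeout(resolve, 0))
    if (this.released) {
      this.errors.push(`Query runner already released: ${entity}`)
    } else {
      this.saved.push(entity)
    }
  }

  release(): void {
    this.released = true
  }
}

async function buggyHandler(store: FakeStore): Promise<void> {
  store.upsert('Transfer#1') // BUG: the write is still pending when the batch ends
}

async function correctHandler(store: FakeStore): Promise<void> {
  await store.upsert('Transfer#2') // OK: the write completes before release
}

// Mimics the framework: run the handler, then release the query runner
async function runBatch(handler: (s: FakeStore) => Promise<void>): Promise<FakeStore> {
  const store = new FakeStore()
  await handler(store)
  store.release()
  // give any stray (unawaited) promises time to settle
  await new Promise(resolve => setTimeout(resolve, 10))
  return store
}
```

Running buggyHandler through runBatch records a "Query runner already released" error instead of a saved entity; correctHandler saves as expected.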

QueryFailedError: invalid byte sequence for encoding "UTF8": 0x00

PostgreSQL doesn’t support storing NULL (0x00) characters in text fields. The error usually occurs when a raw byte string (such as a Uint8Array or Bytes value) is inserted into a String field. If this is the case, use hex encoding, e.g. via the @subsquid/util-internal-hex library. For addresses, use the encoding appropriate for your chain type.
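A minimal sketch of the fix, using only Node's built-in Buffer (the toHex helper below is illustrative; @subsquid/util-internal-hex provides an equivalent):

```typescript
// Encode raw bytes as a 0x-prefixed hex string before assigning them
// to a String entity field, so no \0 byte ever reaches Postgres
function toHex(bytes: Uint8Array): string {
  return '0x' + Buffer.from(bytes).toString('hex')
}
```

For example, toHex(new Uint8Array([0, 255, 16])) yields '0x00ff10', which is safe to store in a text column even though the raw bytes contain 0x00.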

GraphQL Issues

API queries are too slow, or fail with "response might exceed the size limit"

Make sure the input query has limits set or the entities are decorated with @cardinality. We recommend using XXXConnection queries for pagination. For configuring limits and max response sizes, see DoS protection.
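As an illustrative example (assuming a hypothetical Transfer entity with an amount field in your schema), a paginated Connection query looks like this:

```graphql
query {
  transfersConnection(orderBy: id_ASC, first: 100) {
    edges {
      node {
        id
        amount
      }
    }
    pageInfo {
      endCursor
      hasNextPage
    }
  }
}
```

To fetch the next page, pass the returned pageInfo.endCursor as the after argument of the following query, repeating until hasNextPage is false.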