Documentation Index
Fetch the complete documentation index at: https://beta.docs.sqd.dev/llms.txt
Use this file to discover all available pages before exploring further.

A cursor records the last successfully processed block. Built-in targets (ClickHouse, Drizzle) handle persistence automatically. When using createTarget directly you own the full lifecycle.
The cursor object
hash is the fork detection tripwire: the SDK sends parentBlockHash = cursor.hash in each portal request. An absent hash silently skips fork detection for that request. See cursor semantics for the full picture.
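The behavior above can be sketched as a pure function. This is an illustration only: the cursor shape ({ number, hash }) is inferred from the surrounding text, and portalRequestFields is a hypothetical name, not an SDK export.

```typescript
// Sketch: the cursor shape is inferred from the text (number + hash);
// the SDK's actual type may carry more fields.
type BlockCursor = { number: number; hash?: string };

// What a portal request effectively carries, per the description above.
function portalRequestFields(cursor?: BlockCursor) {
  return {
    // the stream resumes from the block after the cursor
    fromBlock: cursor ? cursor.number + 1 : undefined,
    // no hash => no parentBlockHash => fork detection silently skipped
    parentBlockHash: cursor?.hash,
  };
}
```

A cursor without a hash still resumes from the right block number, but the request it produces carries no parentBlockHash, which is exactly the silent fork-detection gap described above.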
Startup: range.from and stored cursors
range.from in the decoder sets where the stream begins on a first run — before any cursor exists:
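A minimal first-run sketch, assuming a Fuel pipeline; only id and range.from are taken from this page, the remaining option names are placeholders:

```typescript
// First-run sketch: with no stored cursor yet, the stream begins at range.from.
const stream = fuelPortalStream({
  id: 'fuel-transfers',
  range: { from: 1_000_000 }, // honored only until a cursor is stored
  // ...decoder-specific options...
});
```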
Once a cursor is stored, range.from is ignored — the stream resumes from cursor.number + 1.
The stream id
The id on fuelPortalStream is the primary key for all stored state:
Both built-in targets use it to isolate state records, so multiple streams can share one physical table. Never rename an active stream’s id — the stored cursor is keyed on it, and renaming causes the pipeline to restart from range.from.
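The id-keyed isolation can be illustrated with an in-memory stand-in for the state table (a sketch, not the targets' actual storage code):

```typescript
// Several streams sharing one store, each isolated by its id.
// The Map stands in for the real shared state table.
type BlockCursor = { number: number; hash: string };
const stateTable = new Map<string, BlockCursor>();

const saveCursor = (id: string, cursor: BlockCursor) => stateTable.set(id, cursor);
const loadCursor = (id: string) => stateTable.get(id);

saveCursor('fuel-transfers', { number: 42, hash: '0xaa' });
// Renaming the stream id is equivalent to starting with no cursor at all:
loadCursor('fuel-transfers-v2'); // undefined => restart from range.from
```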
ClickHouse target
clickhouseTarget saves the cursor after every successful onData call and resolves fork and crash-recovery callbacks automatically.
onRollback is called in two situations:
- type: 'offset_check' — on every startup when a cursor exists. ClickHouse is non-transactional: a crash between onData and the cursor save leaves rows newer than the saved cursor. Delete them here. See non-transactional databases.
- type: 'blockchain_fork' — when the portal signals a reorg. The rollback cursor is resolved automatically from stored history; your callback only needs to delete rows after safeCursor.number.
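Both cases reduce to the same cleanup. A hedged sketch, assuming a @clickhouse/client instance and a transfers table with a block_number column (names are illustrative, not from this page):

```typescript
// Sketch of an onRollback handler. The payload fields (type, safeCursor)
// follow the text above; client and table/column names are assumptions.
const target = clickhouseTarget({
  client, // @clickhouse/client instance, assumed in scope
  onRollback: async ({ type, safeCursor }) => {
    // 'offset_check': crash recovery at startup; 'blockchain_fork': reorg.
    // Either way, rows above the safe cursor must go.
    await client.command({
      query: `ALTER TABLE transfers DELETE WHERE block_number > ${safeCursor.number}`,
    });
  },
  // ...onData and other options...
});
```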
Stored cursor history beyond maxRows is pruned every 25 saves. Set maxRows to cover your network's worst-case reorg depth — see rollback depth.
Drizzle target
drizzleTarget saves the cursor inside the same PostgreSQL transaction as the data write — fully atomic, no crash-recovery pass needed.
tables is required for every table written in onData. At startup the target installs a PostgreSQL trigger on each listed table; the trigger copies the pre-change row into a <name>__snapshots table (keyed by number and primary key). On a fork the target replays these snapshots in reverse, restoring pre-fork state automatically. Writing to a table not in tables raises a runtime error.
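A hedged configuration sketch, assuming a drizzle PostgreSQL db and a transfers table defined elsewhere; option shapes beyond tables and onData are assumptions:

```typescript
// Every table written in onData must be listed in `tables` so the target
// can install its snapshot trigger on it at startup.
const target = drizzleTarget({
  db,                  // drizzle PostgreSQL database, assumed in scope
  tables: [transfers], // writing to an unlisted table raises a runtime error
  onData: async ({ tx, blocks }) => {
    // writing through tx keeps data + cursor in one atomic transaction
    await tx.insert(transfers).values(decode(blocks)); // decode(): your mapping
  },
});
```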
Snapshotting only fires for blocks at or above the current finalized head — historical blocks can never be reorged.
Advisory lock. Every batch acquires pg_try_advisory_xact_lock(hashtext(id)) inside the transaction, preventing concurrent writers on the same stream. Two drizzleTarget instances sharing the same id will serialize correctly; two with different ids run independently.
Retention. Snapshot rows below min(current, finalizedHead) - unfinalizedBlocksRetention are deleted every 25 batches. Set this to cover your network’s worst-case reorg depth.
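The retention rule restated as a pure function (the function name is an illustration, not an SDK export):

```typescript
// Snapshot rows strictly below this block number are eligible for deletion.
function snapshotPruneThreshold(
  current: number,
  finalizedHead: number,
  unfinalizedBlocksRetention: number,
): number {
  return Math.min(current, finalizedHead) - unfinalizedBlocksRetention;
}
```

Note the min(): if the stream is behind the finalized head, retention is measured from the stream's own position, so snapshots it may still need are not pruned early.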
Rollback hooks. onBeforeRollback and onAfterRollback receive { tx, cursor } and run inside the fork transaction. Use them to perform additional cleanup that the snapshot mechanism cannot cover (e.g., rows in tables not tracked by tables).
Async iterator
When consuming a pipeline with for await...of instead of pipeTo, the native [Symbol.asyncIterator]() always calls read() with no cursor — it has no way to accept one. The stream therefore starts from range.from on every run.
Finalized streams. If the stream only consumes already-finalized (no forks possible), rebuilding the stream with range.from set to the stored cursor is sufficient:
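A hedged sketch of the rebuild, assuming loadCursor/saveCursor are your own persistence helpers and a Fuel stream; option names beyond range.from and id are placeholders:

```typescript
// Finalized-only: resume by rebuilding the stream from the stored cursor number.
const stored = await loadCursor('fuel-transfers');
const stream = fuelPortalStream({
  id: 'fuel-transfers',
  range: { from: stored ? stored.number + 1 : 0 }, // 0 = your genesis start
  // ...decoder options...
});

for await (const ctx of stream) {
  // ...process the batch...
  await saveCursor('fuel-transfers', ctx.stream.state.current); // full BlockCursor
}
```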
Save ctx.stream.state.current — the full BlockCursor of the batch's last block — not just the number. The hash is needed if you later switch to real-time or need the cursor as a fork anchor.
Real-time streams. Setting range.from to a stored number loses the hash. On restart the first request carries no parentBlockHash, so fork detection is silently disabled for that request. For real-time streams, use the pipeToIterator helper from the async iteration tab of the fork handling guide, which accepts an initialCursor and passes it directly to read() inside pipeTo:
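A usage sketch, assuming pipeToIterator is imported per the fork handling guide and loadCursor/saveCursor are your own persistence; the batch shape here is an assumption:

```typescript
// pipeToIterator passes initialCursor straight into read(), so the first
// request carries parentBlockHash and fork detection stays active.
const stored = await loadCursor('fuel-transfers'); // undefined on a fresh first run
for await (const batch of pipeToIterator(stream, { initialCursor: stored })) {
  // ...process batch...
  await saveCursor('fuel-transfers', batch.state.current); // keep the hash!
}
```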
pipeToIterator preserves parentBlockHash across fork rounds because it uses pipeTo internally. On a fresh first run, pass undefined as initialCursor and the stream begins from range.from as normal.
Custom cursor management
When using createTarget directly, you own the full cursor lifecycle.
At the start of write, fetch the stored cursor and pass it to read:
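The pattern can be sketched self-containedly. The ctx/read shapes below are simplified stand-ins for the SDK's, and the Map stands in for your cursor table:

```typescript
type BlockCursor = { number: number; hash: string };

const cursorStore = new Map<string, BlockCursor>();

// Manual pattern: fetch the stored cursor at the start of write(), hand it
// to read(), persist the new cursor after each successful batch write.
async function write(
  streamId: string,
  read: (from?: BlockCursor) => AsyncIterable<{ blocks: unknown[]; cursor: BlockCursor }>,
) {
  const stored = cursorStore.get(streamId);   // 1. fetch the stored cursor
  for await (const batch of read(stored)) {   // 2. pass it to read()
    // ...write batch.blocks to your database...
    cursorStore.set(streamId, batch.cursor);  // 3. persist the new cursor
  }
}
```

Persist the cursor only after the data write succeeds; otherwise a crash between the two leaves a gap instead of a harmless overlap.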
The fork callback and the algorithm for resolving rollback cursors from stored history are covered in detail in the fork handling guide.
Example: cursor management with createTarget
A minimal example showing manual cursor passing in createTarget
Example: ClickHouse target
Full pipeline with onRollback and onData
Example: Drizzle / PostgreSQL target
Full pipeline including GraphQL API
