Cursors track the last successfully processed slot, allowing pipelines to resume after restarts without reprocessing historical data.
Resume indexing from a specific slot using cursors:
import { createTarget } from '@subsquid/pipes'
import { solanaPortalSource, solanaInstructionDecoder } from '@subsquid/pipes/solana'
import * as orcaWhirlpool from './abi/orca_whirlpool/index.js'

const source = solanaPortalSource({
  portal: 'https://portal.sqd.dev/datasets/solana-mainnet',
})

const decoder = solanaInstructionDecoder({
  programId: orcaWhirlpool.programId,
  instructions: {
    swap: orcaWhirlpool.instructions.swap,
  },
  range: { from: 200_000_000, to: 200_000_500 },
})

async function secondRun() {
  console.log(`Starting from slots following 200_000_300...`)
  await source
    .pipe(decoder)
    .pipeTo(createTarget({
      write: async ({ logger, read }) => {
        // Resume from slot 200_000_300
        for await (const { data } of read({ number: 200_000_300 })) {
          console.log('data:', data)
        }
      },
    }))
}
Pass a cursor to read() to resume processing from a specific slot; the cursor format is { number: slotNumber }. This lets the indexer restart without reprocessing slots it has already handled.
Store the cursor after successfully processing each batch to enable resumption. Use a database or file-based storage for persistence across restarts.
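A minimal file-based sketch of such persistence (the loadCursor/saveCursor helpers and the .cursor.json path are illustrative conventions, not part of the @subsquid/pipes API):

```typescript
import { readFileSync, writeFileSync, existsSync } from 'node:fs'

// Hypothetical location for the persisted cursor.
const CURSOR_FILE = './.cursor.json'

interface Cursor {
  number: number // last successfully processed slot
}

// Load the last persisted cursor, or undefined on a fresh start.
function loadCursor(): Cursor | undefined {
  if (!existsSync(CURSOR_FILE)) return undefined
  return JSON.parse(readFileSync(CURSOR_FILE, 'utf8')) as Cursor
}

// Persist the cursor once a batch has been fully processed.
function saveCursor(cursor: Cursor): void {
  writeFileSync(CURSOR_FILE, JSON.stringify(cursor))
}
```

Inside the target's write callback you would call saveCursor after each batch completes, and pass loadCursor()'s result to read() on the next start.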
Cursor updates are atomic: either the entire slot batch succeeds and the cursor advances, or the cursor stays put. Never modify the cursor position manually unless you are certain you won't create gaps in the range of processed slots.