Deploy a production-ready Pipe to Railway with persistent cursor management. This guide demonstrates deploying a USDC transfer indexer using Railway’s platform.

Overview

Railway provides a simple platform for deploying Node.js applications with built-in support for persistent storage, environment variables, and monitoring. This guide shows a complete production setup including:
  • Cursor persistence for resumable indexing
  • Proper error handling and logging
  • Rollback handlers for blockchain reorganizations
  • File-based data storage
  • Automated Railway deployment
While SST can target Railway through the Railway Terraform provider, this guide uses Railway’s native deployment methods, which are simpler and better documented for Node.js applications.

Prerequisites

Before starting, ensure you have:
  • A Railway account
  • Node.js 22+ installed
  • Git for version control
  • GitHub account (for deployment option 1)

Complete Example Pipe

Here’s a production-ready Pipe that indexes USDC transfers with cursor persistence:
import { createTarget } from "@subsquid/pipes";
import {
  evmPortalSource,
  evmDecoder,
  commonAbis,
} from "@subsquid/pipes/evm";
import fs from "fs/promises";
import path from "path";

const CURSOR_FILE = process.env.CURSOR_FILE || "cursor.json";
const DATA_DIR = process.env.DATA_DIR || "data";

async function loadCursor(): Promise<number | null> {
  try {
    const data = await fs.readFile(CURSOR_FILE, "utf-8");
    const { blockNumber } = JSON.parse(data);
    console.log(`Resuming from block ${blockNumber}`);
    return blockNumber;
  } catch {
    console.log("No cursor found, starting from configured block");
    return null;
  }
}

async function saveCursor(blockNumber: number): Promise<void> {
  await fs.writeFile(
    CURSOR_FILE,
    JSON.stringify({ blockNumber, timestamp: new Date().toISOString() }, null, 2)
  );
}

async function saveData(filename: string, data: object[]): Promise<void> {
  const filepath = path.join(DATA_DIR, filename);
  await fs.writeFile(filepath, JSON.stringify(data, null, 2));
}

async function main() {
  // Ensure data directory exists
  await fs.mkdir(DATA_DIR, { recursive: true });
  
  const savedCursor = await loadCursor();
  
  console.log("Starting USDC transfer indexer...");
  console.log(`Portal: ${process.env.PORTAL_URL || "default"}`);

  const source = evmPortalSource({
    portal: process.env.PORTAL_URL || "https://portal.sqd.dev/datasets/ethereum-mainnet",
  });

  const decoder = evmDecoder({
    range: {
      from: savedCursor ?? 17000000,
    },
    contracts: ["0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"], // USDC
    events: {
      transfer: commonAbis.erc20.events.Transfer,
    },
  });

  const target = createTarget({
    write: async ({ logger, read }) => {
      for await (const { data } of read(
        savedCursor ? { number: savedCursor } : undefined
      )) {
        try {
          // Extract and format transfer data
          const transfers = data.transfer.map((t) => ({
            blockNumber: t.block.number,
            blockHash: t.block.hash,
            timestamp: t.block.timestamp,
            transactionHash: t.rawEvent.transactionHash,
            logIndex: t.rawEvent.logIndex,
            from: t.event.from,
            to: t.event.to,
            value: t.event.value.toString(),
          }));

          // Skip batches with no matching events; otherwise Math.max
          // over an empty array would yield -Infinity and corrupt the cursor
          if (transfers.length === 0) continue;

          // Save batch to file
          const filename = `transfers-${Date.now()}.json`;
          await saveData(filename, transfers);

          // Update cursor
          const lastBlock = Math.max(
            ...data.transfer.map((t) => t.block.number)
          );
          await saveCursor(lastBlock);

          logger.info(
            `Processed ${transfers.length} transfers up to block ${lastBlock}`
          );
        } catch (error) {
          logger.error({ error }, "Error processing batch");
          throw error;
        }
      }
    },
  });

  await source.pipe(decoder).pipeTo(target);
}

main().catch((error) => {
  console.error("Fatal error:", error);
  process.exit(1);
});
Cursor File Persistence: Railway services use ephemeral storage by default. Use Railway’s persistent volumes to ensure cursor data survives restarts. Without persistent storage, your indexer will restart from the beginning after each deployment.
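The Dockerfile in the next section runs npm run build and node dist/index.js, so it assumes a package.json roughly like the following sketch (the package name, tsx dev runner, and version ranges are placeholders, not prescribed by this guide):
{
  "name": "usdc-transfer-indexer",
  "scripts": {
    "build": "tsc",
    "dev": "tsx src/index.ts",
    "start": "node dist/index.js"
  },
  "dependencies": {
    "@subsquid/pipes": "latest"
  },
  "devDependencies": {
    "tsx": "^4",
    "typescript": "^5"
  }
}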

Dockerfile

Create a Dockerfile for Railway deployment:
Dockerfile
FROM node:22-alpine AS builder

WORKDIR /app

# Copy package files
COPY package*.json ./
COPY tsconfig.json ./

# Install dependencies
RUN npm ci

# Copy source code
COPY src ./src

# Build TypeScript
RUN npm run build

# Production stage
FROM node:22-alpine

WORKDIR /app

# Copy package files
COPY package*.json ./

# Install production dependencies only (npm 9+ replaces --production with --omit=dev)
RUN npm ci --omit=dev

# Copy built application
COPY --from=builder /app/dist ./dist

# Create data directory
RUN mkdir -p /app/data

# Set environment variables; the cursor lives in /app/data so it can
# sit on the same persistent volume as the data files
ENV NODE_ENV=production
ENV CURSOR_FILE=/app/data/cursor.json
ENV DATA_DIR=/app/data

# Run the application
CMD ["node", "dist/index.js"]

Deployment Options

Choose your preferred deployment method. The two standard options are deploying from a GitHub repository connected in the Railway dashboard (option 1, referenced in the prerequisites; Railway detects the Dockerfile and builds from it automatically) or deploying directly from your machine with the Railway CLI.
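A minimal CLI flow looks like the following sketch (the project and service names are whatever you choose at railway init):
npm install -g @railway/cli
railway login
railway init   # create a new Railway project or link an existing one
railway up     # build from the Dockerfile and deploy the current directory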

Verification

Verify your deployment is working correctly:
Step 1: Check deployment status

In the Railway dashboard:
  1. Navigate to your project
  2. Check the Deployments tab
  3. Verify the latest deployment shows “Active”
  4. Click on the deployment to view build logs
Step 2: View live logs

  1. Go to your service
  2. Click the Logs tab
  3. Watch for processing messages:
Starting USDC transfer indexer...
Resuming from block 17500000
Processed 42 transfers up to block 17500100
Step 3: Verify cursor progression

Check that the cursor is advancing:
  1. Use Railway’s shell feature to access your container:
railway shell
  2. View the cursor file:
cat /app/data/cursor.json
You should see:
{
  "blockNumber": 17500100,
  "timestamp": "2024-01-15T10:30:00.000Z"
}
Step 4: Check data output

Verify data files are being created:
ls -la data/
cat data/transfers-*.json | head -n 20
Step 5: Monitor resource usage

In the Railway dashboard:
  1. Go to Metrics tab
  2. Monitor CPU and memory usage
  3. Ensure your service stays within allocated resources
  4. Adjust service size if needed in Settings → Resources

Production Best Practices

Use Railway Volumes for cursor persistence:
# In Railway dashboard:
# Settings → Volumes → Add Volume
# Mount path: /app/data
Point the cursor file at the mounted volume, either via the CURSOR_FILE environment variable (as the Dockerfile above does) or directly in code:
const CURSOR_FILE = "/app/data/cursor.json";
Implement robust error handling:
const target = createTarget({
  write: async ({ logger, read }) => {
    for await (const { data } of read()) {
      try {
        await processData(data);
      } catch (error) {
        logger.error({ error, blockNumber: data.transfer[0]?.block.number }, "Processing failed");
        // Decide: throw to stop, or continue to next batch
        throw error; // Stops indexer for investigation
      }
    }
  },
});
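If transient failures (network hiccups, brief portal outages) should not stop the indexer, one option is to retry each batch with exponential backoff before giving up. This is a sketch; the withRetries helper and the processData callback are illustrative names, not part of @subsquid/pipes:
// Retry an async operation a few times with exponential backoff,
// rethrowing the last error once attempts are exhausted.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts) {
        // Backoff: 1s, 2s, 4s, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}

// Usage inside the write handler:
// await withRetries(() => processData(data));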
Set up health checks and alerts:
  1. Use Railway’s built-in monitoring
  2. Add a health check endpoint (optional):
import express from "express";

const app = express();

// Update this from the write handler after each batch,
// e.g. right after saveCursor(lastBlock)
let lastProcessedBlock = 0;

app.get("/health", (req, res) => {
  res.json({
    status: "ok",
    lastBlock: lastProcessedBlock,
    uptime: process.uptime(),
  });
});

app.listen(3000);
  3. Configure the Railway health check in Settings → Health Check
Use environment-specific settings:
const config = {
  development: {
    portal: "https://portal.sqd.dev/datasets/ethereum-mainnet",
    startBlock: 20000000,
  },
  production: {
    portal: "https://portal.sqd.dev/datasets/ethereum-mainnet",
    startBlock: 17000000,
  },
};

const env = (process.env.NODE_ENV || "development") as keyof typeof config;
const settings = config[env] ?? config.development;
Handle shutdown signals properly:
let isShuttingDown = false;

process.on("SIGTERM", () => {
  console.log("SIGTERM received, shutting down gracefully...");
  isShuttingDown = true;
});

const target = createTarget({
  write: async ({ logger, read }) => {
    for await (const { data } of read()) {
      if (isShuttingDown) {
        logger.info("Shutdown requested, stopping after current batch");
        break;
      }
      await processData(data);
    }
  },
});
Testing Before Deployment: Always test your indexer locally with npm run dev before deploying to Railway. Use a small block range to verify cursor persistence and data output.
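For example, a local smoke test can point the cursor and data paths at a throwaway directory (the tmp paths here are arbitrary) and confirm that a second run resumes where the first stopped:
mkdir -p tmp
CURSOR_FILE=./tmp/cursor.json DATA_DIR=./tmp/data npm run dev
cat tmp/cursor.json   # the saved block number should advance between runs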

Troubleshooting

Common issues and solutions:
Issue                      Solution
Cursor resets on restart   Add a persistent volume in Railway settings mounted to /app/data
Out of memory errors       Increase memory in Settings → Resources, or reduce batch size
Build fails                Check Dockerfile syntax and ensure all dependencies are in package.json
No data output             Verify PORTAL_URL is set correctly and check logs for errors
Slow indexing              Check network latency, consider caching, optimize batch size
