Portal is currently in closed beta. Report bugs or suggestions in the SQD Portal chat or to Squid Devs.
If you plan to use your portal to index Solana, read the Using your Portal - On Solana section before committing tokens.
Running your own Portal instance enables you to access the permissionless SQD Network data without relying on centralized services. You can operate a private Portal instance for personal use or deploy a high-throughput public Portal.

Prerequisites

Before you begin, ensure you have:

Software Requirements

  • Working Docker installation

Financial Requirements

  • Minimum 10,000 SQD tokens
    • Tokens remain locked while your Portal instance is active
    • More tokens = more bandwidth
    • Tokens can be in your wallet or SQD-issued vesting contract
  • Arbitrum ETH for gas fees
For the minimum 10,000 SQD, you’ll get enough bandwidth to run a few squids. For heavier workloads, see Token requirements and compute units to estimate requirements.

Hardware Requirements

  • Minimum 25GB unused RAM (we’re working to reduce this requirement)
  • Additional hardware depends on your use case:
    • Private single-user Portal instances can run on a laptop with sufficient RAM
    • Public or high-throughput Portal instances require more robust infrastructure
Hardware requirements scale with your expected usage patterns.

Portal Setup

Lock SQD tokens

  1. Go to network.subsquid.io and connect your wallet
    • Use MetaMask (recommended) or another supported wallet
    • Ensure your wallet holds the tokens or is the beneficiary of your vesting contract
  2. Navigate to the portals page and click “Lock”
  3. Specify:
    • Amount of SQD to lock
    • Duration of the lockup period
  4. Click “Confirm” and approve the transaction in your wallet
The page will update to show your locked SQD amount and lockup duration.

Enable Auto-Extension (Optional)

By default, your Portal instance stops working when the lock period ends. Enable auto-extension to keep your Portal instance running continuously.
Click the “Auto Extension” switch and confirm the transaction. This automatically relocks your SQD when the current period ends.
With auto-extension enabled, you must unlock first, then wait for the current period to end before withdrawing tokens.

Generate a peer ID

SQD Network operates as a decentralized peer-to-peer system. Your Portal instance needs a private key and a public peer ID to participate.
Generate your peer ID:
docker run --rm subsquid/rpc-node:0.2.5 keygen > <KEY_FILE_PATH>
The command creates a key file at <KEY_FILE_PATH> and outputs:
Your peer ID: THIS IS WHAT YOU NEED TO COPY
Copy your peer ID—you’ll need it for registration.
Protect your key file! Ensure it won’t be deleted accidentally and cannot be accessed by unauthorized parties. See consequences of losing your key.

Register your portal

  1. Go to the portals page
  2. Click the “Add portal” button
  3. Fill the portal registration form:
    • Enter your peer ID from the previous step
    • If making your Portal instance public, enable “Publicly available” and complete additional fields
  4. Click “Confirm” and approve the transaction in your wallet
Your Portal instance is now registered on-chain.

Run your portal

Clone the Repository

git clone https://github.com/subsquid/sqd-portal
cd sqd-portal

Configure Environment

Copy the mainnet configuration:
cp mainnet.env .env
Set your key file path:
echo KEY_PATH=<KEY_FILE_PATH> >> .env
Double-check the key path! An incorrect path will cause the system to generate a new random key, resulting in your Portal instance attempting to operate with an unregistered peer ID.

Start the Portal

Choose your preferred method of starting the services (see the sqd-portal repository for the available options).
Your Portal instance is now running!

Wait for activation

Your stake will become active at the beginning of the next epoch, within approximately 20 minutes. The portals page will update to show that your lockup is active.
Portal activated! You can now use it to access SQD Network data.

Using your portal

Portal serves a new version of the API that is not compatible with the @latest SDK releases or the old gateways. Update your packages with
npx --yes npm-check-updates --filter "@subsquid/*" --target "@portal-api" --upgrade
then freeze the versions of the @portal-api packages at the new ones, either by removing any version range specifiers (^, ~, <, >, >=, <=) preceding the package versions or by running
sed -i -e 's/[\^~=<>]*\([0-9\.]*-portal-api\.[0-9a-f]\{6\}\)/\1/g' package.json
to do the same thing automatically. Finally, reinstall all packages from scratch:
rm -r node_modules package-lock.json
npm i
A working Portal instance exposes its dataset-specific APIs via URLs such as this one:

http://<host>:8000/datasets/<dataset-slug>

Here, <dataset-slug> is the last path segment of the network-specific gateway URL found on this page. For example, a local Portal instance will expose its Ethereum dataset API at

http://127.0.0.1:8000/datasets/ethereum-mainnet

On EVM

You should be able to simply replace the setGateway call with setPortal and get exactly the same behavior as before:
+  .setPortal('http://127.0.0.1:8000/datasets/ethereum-mainnet')
-  .setGateway('https://v2.archive.subsquid.io/network/ethereum-mainnet')
If your squid uses RPC to ingest unfinalized blocks, @subsquid/evm-processor@portal-api will smoothly transition to that regime as it catches up to the network.
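For reference, here’s what a migrated processor might look like, as a minimal sketch: the log filter address is a hypothetical placeholder, and the surrounding setup follows the usual @subsquid/evm-processor pattern rather than any specific squid.

import {EvmBatchProcessor} from '@subsquid/evm-processor'
import {TypeormDatabase} from '@subsquid/typeorm-store'

const processor = new EvmBatchProcessor()
  // a local Portal instance replaces the gateway as the data source
  .setPortal('http://127.0.0.1:8000/datasets/ethereum-mainnet')
  // request the block header fields the handler uses
  .setFields({block: {timestamp: true}})
  // hypothetical placeholder address; filter for the logs your squid needs
  .addLog({address: ['0x0000000000000000000000000000000000000000']})

processor.run(new TypeormDatabase(), async ctx => {
  for (const block of ctx.blocks) {
    // handle block.logs here
  }
})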

On Solana

The Solana SDK in its @portal-api version can ingest real-time data from Portal, but it can no longer ingest it from RPC. Since real-time Solana data is not yet available to private Portal instances, it is currently impossible to get real-time Solana data when a private Portal instance is the main data source. This will be fixed soon, but as of 2025-03-04 it is a known limitation. If you just want to try ingesting historical Solana data, take a look at the portal-api branch of our Solana template:
git clone https://github.com/subsquid-labs/solana-example/
cd solana-example
git checkout portal-api
npm i
npm run build
docker compose up -d
npx squid-typeorm-migration apply
node -r dotenv/config lib/main.js
If the last command fails with an HTTP 400 response, try raising the lowest block as described in this comment.

Migrating an Existing Squid

A. Replace all existing data sources with the Portal instance (a combined sketch follows step D below):
+  .setPortal('https://portal.tethys.sqd.dev/datasets/solana-beta')
-  .setGateway('https://v2.archive.subsquid.io/network/solana-mainnet')
-  .setRpc({
-    client: new SolanaRpcClient({
-      url: process.env.SOLANA_NODE
-    })
-  })
Also, please remove any mentions of SolanaRpcClient, for example:
-import {DataSourceBuilder, SolanaRpcClient} from '@subsquid/solana-stream'
+import {DataSourceBuilder} from '@subsquid/solana-stream'
B. Replace any block height literals with slot number literals.
+  .setBlockRange({from: 325000000})
-  .setBlockRange({from: 303262650})
A convenient converter from block heights to slot numbers is TBA; for now, bisecting the block range is your best bet. To get the block height for a given slot number, use this script:
SLOT_NUMBER=320276063
curl --request POST \
  --url https://api.mainnet-beta.solana.com \
  --header 'accept: application/json' \
  --header 'content-type: application/json' \
  --data "{\"id\": 1, \"jsonrpc\": \"2.0\", \"method\": \"getBlock\", \"params\": [${SLOT_NUMBER}, {\"encoding\": \"jsonParsed\", \"maxSupportedTransactionVersion\": 0}]}" \
  | jq | grep blockHeight
C. If you used the slot field of block headers anywhere in your code, replace it with .number:
-  slot: block.header.slot,
+  slot: block.header.number,
D. If you need the block height (for example to stay compatible with your old code) request it in the .setFields call:
  .setFields({
    block: { // block header fields
      timestamp: true,
+      height: true
    },
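Putting steps A-D together, the migrated data source might look like this minimal sketch (the builder calls mirror the diffs above; the field selection is illustrative):

import {DataSourceBuilder} from '@subsquid/solana-stream'

const dataSource = new DataSourceBuilder()
  // step A: the Portal instance is the only data source; no SolanaRpcClient
  .setPortal('https://portal.tethys.sqd.dev/datasets/solana-beta')
  // step B: the range is given in slot numbers, not block heights
  .setBlockRange({from: 325000000})
  // step D: request the block height explicitly if your old code needs it
  .setFields({
    block: {
      timestamp: true,
      height: true
    }
  })
  .build()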

Token Requirements and Compute Units

The minimum token locking requirement for a Portal instance is 10000 SQD. This should be enough for simple use cases like running 2-3 squids or grabbing some data for analytics. If you expect heavier workloads, read on.
The rate limiting mechanism of SQD Network relies on the concept of a compute unit, or CU for short. CUs do not map directly to the amount of data fetched by a Portal instance; instead, they (roughly) represent the amount of work that the network does internally while serving the Portal instance’s requests.
SQD Network datasets are partitioned by block number. Dataset chunks are randomly distributed among worker nodes. When a Portal instance receives a data request, the following happens:
  1. The Portal instance forwards the request to several workers that hold the chunks of the relevant dataset
  2. The workers execute the request separately on each chunk
  3. Workers send the results back to the Portal instance. For lightweight queries, they send one response for each dataset chunk; however, if any response exceeds 100 Mbytes, it’s split into several parts
  4. The Portal instance concatenates the workers’ replies and serves a continuous stream of data to the user
Each response made by any of the workers in step 3 spends exactly 1 CU. For example, a lightweight query whose results arrive as 30 single-part responses (one per chunk) costs 30 CUs.
The more SQD you lock and the longer the lockup period, the more CUs you get. Currently, each locked SQD generates 1-3 CUs at the beginning of each epoch. Here’s how the number of CUs per SQD changes with the lockup period:

[Graph: CUs generated per locked SQD vs lockup duration]

In principle, any valid amount of locked SQD generates an unbounded number of CUs over time. However, if the rate at which your queries consume CUs exceeds the rate at which they are produced, your app will be throttled. To avoid that, you may want to understand how many CUs your queries spend. Currently there is no tool for estimating the number of CUs a query needs before actually running it. However, on EVM you can use the following formula to get an order-of-magnitude estimate:

10^-5 * range_length_blocks * transactions_per_block

Notes:
  • This assumes lightweight queries - that is, queries that fetch much less data than the total dataset size. For heavyweight queries multiply the estimate by a factor of 2-5.
  • This is a rough estimate. Multiply it by ten to get a somewhat safe figure. If you want to minimize your SQD lockup, start at that safe figure, then measure the actual amount of CU you spend and reduce the lockup accordingly.
  • If your network has an Etherscan-style explorer, you can estimate the transactions_per_block by visiting its front page, reading the “Transactions” stat and dividing it by the “Last finalized block” height.
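If you prefer code, the formula and the margins above can be wrapped into a few lines. This is an informal sketch, not an official tool; the function name is made up for illustration:

// Order-of-magnitude CU estimate for a lightweight EVM query,
// per the formula above: 1e-5 * range_length_blocks * transactions_per_block
function estimateQueryCU(rangeLengthBlocks: number, txsPerBlock: number): number {
  return 1e-5 * rangeLengthBlocks * txsPerBlock
}

const estimate = estimateQueryCU(1_000_000, 150) // ≈ 1500 CUs
console.log(estimate * 10) // 10x margin: the "somewhat safe" lightweight figure
console.log(estimate * 50) // 10x margin times the 2-5x heavyweight factor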
For a lightweight query, the amount of CU spent is determined by how many dataset chunks the network needs to examine to process it. The ingester creates chunks of roughly the same size within each dataset. Since the amount of data per block is roughly proportional to the number of transactions in that block, the number of chunks in any given block range is roughly proportional to the range length times the transactions per block. Extrapolating from the Ethereum dataset:

num_chunks_in_range = range_length_blocks * (eth_chunks / eth_height) * (chain_txs_per_block / eth_txs_per_block)

Where:
  • eth_chunks = 3.1e4
  • eth_height = 2.1e7
  • eth_txs_per_block = 1.2e2
Multiplying all the known values together and rounding to one significant digit, we get the 1e-5 coefficient of the final formula.
Important assumption: this estimate assumes that all EVM datasets have the same chunk size as Ethereum. In reality, chunk sizes vary between 50 and 1000 Mbytes. Ethereum’s chunk size is roughly 500 Mbytes, so expect the estimate to be off by a factor of 0.5-10, which is within the “order of magnitude” definition.
Heavyweight queries scale the same way but may spend more than one CU per chunk. The heaviest possible queries (fetching the whole dataset) on Ethereum consume roughly 5 CUs per chunk.
Now, if your queries consume X CUs each and you run them once per Y epochs, you need to lock up at least this much SQD:

X / (Y * boost_factor)

Here, boost_factor is a multiplier ranging from 1 to 3 depending on the lockup length (see the graph above).
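As a quick self-contained calculation (the numbers below come from the example that follows):

// Minimum SQD lockup for queries costing cuPerQuery CUs,
// run once every epochsBetweenQueries epochs (one epoch ≈ 20 minutes).
// boostFactor ranges from 1 (short lockups) to 3 (two-year lockups).
function minLockup(cuPerQuery: number, epochsBetweenQueries: number, boostFactor: number): number {
  return cuPerQuery / (epochsBetweenQueries * boostFactor)
}

// BSC example: ~1500 CUs per query, one query per hour (3 epochs), short lockup
console.log(minLockup(1500, 3, 1)) // 500 SQD, before safety margins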

Example

Suppose your query traverses the last 1M blocks of the Binance Smart Chain and you want to run it once every hour.
  • Visiting bscscan.com we find that there’s a total of 6.4B txs made over 44M blocks. We can roughly estimate the chain density at 150 txs/block.
  • We can now estimate the cost of each query to be on the order of 10^-5 * 10^6 * 150 = 1500 CU.
  • When run once every 60 minutes (that is, every 3 epochs at 20 minutes per epoch), such queries would require roughly 1500 / 3 = 500 SQD locked up. This assumes a short lockup (under 60 days).
  • Finally we apply the recommended margins:
    • For lightweight queries it is 10x and the final sum amounts to 5000 SQD - less than the global minimum lockup of 10000.
    • If you’re running extremely heavy queries, the recommended margin is 50x and the final recommended lockup is 25000 SQD. Use this value as a starting point and iterate if you’d like to minimize your lockup.
      • If you’d like to reduce the amount of SQD even further, one option is to lock the tokens up for longer: with a two-year lockup you’ll get three times the CUs, and the final recommended lockup will be 10000 SQD even for heavy queries.

High throughput portals

The recommended way to serve a large number of requests is to deploy multiple Portal instances associated with a single wallet:
  1. Use a single wallet to register as many peer IDs as you need Portal instances.
  2. Make one SQD lockup for all your Portal instances. See Token requirements and compute units to get an idea of how large your stake should be.
  3. Run Portal instances in parallel, balancing traffic between the instances.
If you plan to automate running your Portal instances, you may find this helm chart useful.
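The load balancing setup itself is up to you; any HTTP reverse proxy will do. Purely for illustration, here is a minimal round-robin proxy sketch in Node.js (the upstream addresses are hypothetical):

import http from 'node:http'

// hypothetical addresses of two Portal instances registered under the same wallet
const upstreams = ['http://10.0.0.1:8000', 'http://10.0.0.2:8000']
let next = 0

http.createServer((req, res) => {
  // pick the next Portal instance in round-robin order
  const target = new URL(upstreams[next++ % upstreams.length])
  const proxied = http.request(
    {
      host: target.hostname,
      port: target.port,
      path: req.url,
      method: req.method,
      headers: req.headers
    },
    upstream => {
      res.writeHead(upstream.statusCode ?? 502, upstream.headers)
      upstream.pipe(res)
    }
  )
  req.pipe(proxied)
}).listen(8080)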

Troubleshooting

If you lose your key file:
  • You won’t be able to run your Portal instance until you generate and register a new one
If your key file is stolen:
  • The perpetrator can cause connectivity issues, effectively creating downtime for your Portal instance
Recovery steps:
  1. Unregister your Portal instance on the “Portals” tab
  2. Generate a new key file
  3. Register the new portal peer ID
Always protect your key file! Store it securely and restrict access.
Portal performance metrics are exposed at the /metrics endpoint. For a local Portal instance, check throttling statistics:
curl --compressed http://127.0.0.1:8000/metrics | grep throttled
Interpreting results: lower values of portal_stream_throttled_ratio_sum indicate better performance.
If throttling is high, consider locking more SQD tokens to increase your bandwidth allocation.
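If you’d rather poll this from code than from curl, a plain HTTP request to the same endpoint works. A minimal sketch (Node.js 18+ for the built-in fetch):

// print the throttling-related lines of the Portal's Prometheus metrics
async function main(): Promise<void> {
  const res = await fetch('http://127.0.0.1:8000/metrics')
  const body = await res.text()
  for (const line of body.split('\n')) {
    if (line.includes('throttled')) console.log(line)
  }
}

main()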