HyperIndex Complete Documentation
This document contains all HyperIndex documentation consolidated into a single file for LLM consumption.
| Topic | Details |
|---|---|
| What it is | A blazing-fast, developer-friendly multichain blockchain indexer that transforms on-chain events into structured, queryable databases with GraphQL APIs |
| Data engine | Powered by HyperSync - up to 2000x faster than traditional RPC endpoints |
| Performance | Ranked #1 fastest indexer in independent Sentio benchmarks (April 2025) - up to 6x faster than the nearest competitor, 63x faster than TheGraph |
| Supported chains | 70+ EVM chains and Fuel, with new networks added regularly; all EVM-compatible chains supported via RPC |
| Languages | TypeScript, JavaScript, ReScript |
| Key files | config.yaml (indexer settings), schema.graphql (data schema), src/EventHandlers.* (event logic) |
| Prerequisites | Node.js v22+, pnpm v8+, Docker Desktop (local dev only) |
| Deployment | Hosted service (managed, no API token needed) or self-hosted |
| API token | Required for local dev and self-hosted deployments from 3 November 2025 via ENVIO_API_TOKEN env variable |
| Query interface | GraphQL API auto-generated from your schema |
| Multichain | Native multichain indexing with unordered_multichain_mode support |
| Wildcard indexing | Index by event signature rather than contract address |
| Migration | Straightforward migration path from TheGraph subgraphs |
| Get started | pnpx envio init |
| Support | Discord · GitHub |
Overview
File: overview.md
HyperIndex is a blazing-fast, developer-friendly multichain indexer, optimized for both local development and reliable hosted deployment. It empowers developers to effortlessly build robust backends for blockchain applications.
HyperIndex is Envio's full-featured blockchain indexing framework that transforms on-chain events into structured, queryable databases with GraphQL APIs.
HyperSync is the high-performance data engine that powers HyperIndex. It provides the raw blockchain data access layer, delivering up to 2000x faster performance than traditional RPC endpoints.
While HyperIndex gives you a complete indexing solution with schema management and event handling, HyperSync can be used directly for custom data pipelines and specialized applications.
Feature Roadmap
Upcoming features on our development roadmap:
- Isolated Multichain Mode
- Polished Solana Support
- Indexing 1,000,000+ events per second
Contract Import
File: contract-import.md
The Quickstart enables you to instantly autogenerate a powerful blockchain indexer and start querying blockchain data in minutes. This is the fastest and easiest way to begin using HyperIndex.
Example: Autogenerate an indexer for the Eigenlayer contract and index its entire history in less than 5 minutes by simply running pnpx envio@3.0.0-rc.0 init and providing the contract address from Etherscan.
Getting Started
Run the following command to initialize your blockchain indexer:
pnpx envio@3.0.0-rc.0 init
You'll then follow interactive prompts to customize your indexer.
Indexer Initialization Options
During initialization, you'll be presented with two options:
- Contract Import (recommended for existing smart contracts)
- Template
Choose the Contract Import option to auto-generate indexers directly from smart contracts.
? Choose an initialization option
Template
> Contract Import
[↑↓ to move, enter to select]
2. Local ABI Import
Choose this method if the contract ABI is unavailable from a block explorer or you're using an unverified contract.
Steps:
a. Select Local ABI
? Would you like to import from a block explorer or a local abi?
Block Explorer
> Local ABI
[↑↓ to move, enter to select]
b. Specify ABI JSON file
Provide the path to your local ABI file (JSON format):
? What is the path to your json abi file?
c. Select events to index
? Which events would you like to index?
> [x] ClaimRewards(address indexed from, address indexed reward, uint256 amount)
[x] Deposit(address indexed from, uint256 indexed tokenId, uint256 amount)
[space to select, → to select all, ← to deselect all]
d. Choose blockchain
Specify the blockchain your contract is deployed on:
? Choose network:
> ethereum-mainnet
goerli
optimism
base
bsc
gnosis
[Custom Network ID]
[↑↓ to move, enter to select]
e. Enter contract details
- Contract name
? What is the name of this contract?
- Contract address
? What is the address of the contract?
[Use proxy address if ABI is for a proxy implementation]
f. Finish or add more contracts
Complete the import process or continue adding contracts:
? Would you like to add another contract?
> I'm finished
Add a new address for same contract on same network
Add a new network for same contract
Add a new contract (with a different ABI)
Congratulations! Your HyperIndex indexer is now ready to run and query data!
Next step: Running your Indexer locally or Deploying to Envio Cloud.
Quickstart with AI
File: quickstart-with-ai.md
Build an Envio HyperIndex indexer end-to-end with an AI coding assistant.
Most developers now reach for an AI coding assistant before they open a file. This guide walks through an AI-centric flow for creating, developing, and deploying a HyperIndex indexer. It is semi-generic, so any capable AI coding assistant (Cursor, Windsurf, Copilot Agent, Continue, etc.) will work. That said, we've seen the best results with Claude Code and recommend starting there.
If you'd rather drive the CLI yourself, see the Quickstart.
Step 1. Give the Assistant Access to the Envio Docs (MCP)
Envio ships a Model Context Protocol server so your AI assistant can search and read Envio documentation directly instead of guessing from stale training data.
Claude Code:
claude mcp add --transport http envio-docs https://docs.envio.dev/mcp
For Cursor, VS Code, and other MCP clients, add the endpoint to your MCP config:
{
"mcpServers": {
"envio-docs": {
"url": "https://docs.envio.dev/mcp"
}
}
}
Full setup details in the MCP Server guide. If your assistant doesn't support MCP, you can still point it at the LLM-friendly docs bundle.
Step 3. Develop with the Built-in Claude Skills
HyperIndex v3 ships with Claude skills that teach AI assistants how HyperIndex works: config, schema, handlers, loaders, dynamic contracts, testing, and migration checklists. When an assistant is attached to a v3 project, it can read these skills directly instead of inventing patterns.
A productive loop with skills + the docs MCP looks like:
- Describe the behavior you want in plain English.
- Let the assistant edit config.yaml, schema.graphql, and src/handlers.
- Ask it to run pnpm envio codegen and pnpm dev to validate.
- Iterate on failures together.
The three files you'll spend most of your time in:
- config.yaml: networks, contracts, events
- schema.graphql: entities and relationships
- src/handlers: per-event logic
Step 5. Deploy Programmatically with envio-cloud
Once your indexer runs locally, the envio-cloud CLI lets an assistant (or a CI job) deploy and manage the hosted indexer without opening the dashboard.
npm install -g envio-cloud
envio-cloud login --token $ENVIO_GITHUB_TOKEN
envio-cloud indexer add --name my-indexer --repo my-repo
envio-cloud deployment status my-indexer --watch-till-synced
envio-cloud deployment logs my-indexer --follow
Every command supports -o json, which makes it easy for assistants and scripts to parse results. Full reference: Envio Cloud CLI.
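For instance, a small helper can shell out to the CLI and parse that JSON. This is only a sketch: the cloudJson helper, and any field names read from the result, are assumptions to verify against real -o json output.

```typescript
import { execFileSync } from "node:child_process";

// Parse the JSON printed by a CLI command.
// The exact output shape is an assumption - inspect real `-o json` output first.
function parseCloudOutput(stdout: string): Record<string, unknown> {
  return JSON.parse(stdout) as Record<string, unknown>;
}

// Hypothetical helper: run envio-cloud with `-o json` appended
// (requires the envio-cloud CLI on PATH).
function cloudJson(args: string[]): Record<string, unknown> {
  const stdout = execFileSync("envio-cloud", [...args, "-o", "json"], {
    encoding: "utf8",
  });
  return parseCloudOutput(stdout);
}

// Example usage (field name "synced" is illustrative):
// const status = cloudJson(["deployment", "status", "my-indexer"]);
// if (status.synced) { /* proceed with downstream steps */ }
```

The same pattern works from a CI job: parse the status output in a loop instead of scraping human-readable logs.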
What's New in HyperIndex V3
File: whats-new-in-v3.md
Fifteen months have passed since the official HyperIndex v2.0.0 release. Since then, we have shipped 32 minor releases and multiple patches with zero breaking changes to the documented API. We also received PRs from 6 external contributors, grew from 1 GitHub star to over 470, and saw many big projects come to rely on HyperIndex.
HyperIndex V3 focuses on modernizing the codebase and laying the foundation for many more months of development. This page describes everything that's new. To upgrade an existing project from V2, follow the Migrate to V3 guide.
New Features
Unified Handlers API
In V3 all handler registrations now happen through a single indexer value. Contract-specific exports (ERC20.Transfer.handler, UniV3.PoolFactory.contractRegister, etc.) have been removed in favor of indexer.onEvent, indexer.contractRegister, and indexer.onBlock.
Event handlers with indexer.onEvent:
indexer.onEvent(
{
contract: "ERC20",
event: "Transfer",
wildcard: true,
where: ({ chain }) => ({
params: [
{ from: chain.Safe.addresses },
{ to: chain.Safe.addresses },
],
}),
},
async ({ event, context }) => {
// Handler logic
},
);
Dynamic contracts with indexer.contractRegister:
indexer.contractRegister(
{
contract: "UniV3",
event: "PoolFactory",
},
async ({ event, context }) => {
context.chain.Pool.add(event.params.poolAddress);
},
);
Block handlers with indexer.onBlock consolidate across chains in a single call:
indexer.onBlock(
{ name: "EveryBlock" },
async ({ block, context }) => {
// Handler logic
},
);
For chain-specific or interval-based block handlers, use the where callback:
indexer.onBlock(
{
name: "Ranges",
where: ({ chain }) => {
if (chain.id !== 1) return false;
return {
block: {
number: {
_gte: 20_000_000,
_lte: 22_000_000,
_every: 100,
},
},
};
},
},
async ({ block, context }) => {
// Handler logic
},
);
Per-Event Start Block
Handlers can specify custom start blocks per chain via where.block.number._gte, overriding contract and chain configuration:
indexer.onEvent(
{
contract: "UniV4",
event: "Pool",
where: ({ chain }) => {
let startBlock: number;
switch (chain.id) {
case 1:
startBlock = 18_000_000;
break;
case 8453:
startBlock = 2_000_000;
break;
default: {
const _exhaustive: never = chain.id;
return false;
}
}
return {
block: { number: { _gte: startBlock } },
};
},
},
async ({ event, context }) => {
// Handler logic
},
);
CommonJS → ESM
We migrated HyperIndex from CommonJS-only to ESM-only. This enables:
- Using the latest versions of libraries that have long since abandoned CommonJS support
- Top-level await in handler files
Top-Level Await
Thanks to the migration to ESM, you can now use await directly in handler and other files:
const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";
// Load data before registering handlers
const addressesFromServer = await loadWhitelistedAddresses();
indexer.onEvent(
{
contract: "ERC20",
event: "Transfer",
wildcard: true,
where: {
params: [
{ from: ZERO_ADDRESS, to: addressesFromServer },
{ from: addressesFromServer, to: ZERO_ADDRESS },
],
},
},
async ({ event, context }) => {
// ... your handler logic
},
);
3x Historical Backfill Performance
Achieved by adding chunking logic that requests events across multiple block ranges at once. This also fixed overfetching for contracts with a much later start_block in the config and sped up dynamic contract registration. If data fetching was your bottleneck, 25k events per second is now standard.
Automatic Handler Registration (src/handlers)
We introduced automatic registration of handler files located in src/handlers.
Previously, you needed to specify an explicit path to a handler file for every contract in config.yaml. Now you can remove all of the paths from config.yaml and simply move the files to src/handlers. You can name the files however you want, but we suggest using contract names and having a file per contract.
If you don't like src/handlers, use the handlers option in config.yaml to customize it.
The explicit handler field in config.yaml still works, so you don't need to change anything immediately.
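As a sketch, overriding the default directory might look like this in config.yaml (the path value is illustrative):

```yaml
# config.yaml
# Custom location for auto-registered handler files
# (default is src/handlers; this path is just an example)
handlers: src/my-handlers
```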
RPC for Realtime Indexing
Built by external contributor @cairoeth, this allows specifying realtime mode for an RPC data source for low-latency head tracking:
rpc:
- url: https://eth-mainnet.your-rpc-provider.com
for: realtime
In this case, the RPC won't be used for historical sync but will be used as the primary source once the indexer enters realtime mode.
Chain State on Context
The Handler Context object provides chain state via the chain property:
indexer.onEvent(
{ contract: "ERC20", event: "Approval" },
async ({ context }) => {
console.log(context.chain.id); // 1 - The chain id of the event
console.log(context.chain.isRealtime); // true - Whether the indexer entered realtime mode
},
);
Indexer State & Config
As a replacement for the deprecated and removed getGeneratedByChainId, we introduced the indexer value. It provides nicely typed chain and contract data from your config, as well as the current indexing state, such as isRealtime and addresses. Use indexer either at the top level of a file or directly inside handlers; it always returns the latest indexer state.
With this change, we also introduce new official types: Indexer, EvmChainId, FuelChainId, and SvmChainId.
indexer.name; // "uniswap-v4-indexer"
indexer.description; // "Uniswap v4 indexer"
indexer.chainIds; // [1, 42161, 10, 8453, 137, 56]
indexer.chains[1].id; // 1
indexer.chains[1].startBlock; // 0
indexer.chains[1].endBlock; // undefined
indexer.chains[1].isRealtime; // false
indexer.chains[1].PoolManager.name; // "PoolManager"
indexer.chains[1].PoolManager.abi; // unknown[]
indexer.chains[1].PoolManager.addresses; // ["0x000000000004444c5dc75cB358380D2e3dE08A90"]
On indexer restart, reading indexer at the top level of a handler file returns values restored from the database, including dynamically registered contract addresses, rather than only what's declared in config.yaml:
// Includes initial + dynamically registered addresses persisted in the DB
console.log(indexer.chains.eth.Pool.addresses);
Conditional Event Handlers
It's now possible to return a boolean value from the where function to conditionally enable or disable a handler.
indexer.onEvent(
{
contract: "ERC20",
event: "Transfer",
wildcard: true,
where: ({ chain }) => {
// Skip all ERC20 on Polygon
if (chain.id === 137) {
return false;
}
// Track all ERC20 on Ethereum Mainnet
if (chain.id === 1) {
return true;
}
// Track only whitelisted addresses on other chains
return {
params: [
{ from: ZERO_ADDRESS, to: WHITELISTED_ADDRESSES[chain.id] },
{ from: WHITELISTED_ADDRESSES[chain.id], to: ZERO_ADDRESS },
],
};
},
},
async ({ event, context }) => {
// ... your handler logic
},
);
Automatic Contract Configuration
HyperIndex now automatically configures all globally defined contracts. This fixes an issue where addContract crashed because a contract was defined globally but not linked to a specific chain. The linking now happens automatically:
contracts:
- name: UniswapV3Factory
events: # ...
- name: UniswapV3Pool
events: # ...
chains:
- id: 1
start_block: 0
contracts:
- name: UniswapV3Factory
address: 0x1F98431c8aD98523631AE4a59f267346ea31F984
# UniswapV3Pool no longer needed here - auto-configured from global contracts
- id: 10
start_block: 0
contracts:
- name: UniswapV3Factory
address: 0x1F98431c8aD98523631AE4a59f267346ea31F984
# UniswapV3Pool no longer needed here - auto-configured from global contracts
ClickHouse Storage (Experimental)
HyperIndex can now run with multiple storage backends at the same time. Postgres remains the primary database, and entities can additionally be written to a ClickHouse database that is restart- and reorg-resistant. Prometheus metrics carry a storage-name label so you can distinguish backends.
Enable both backends in config.yaml:
storage:
postgres: true
clickhouse: true
envio dev automatically spins up a ClickHouse Docker container for local development. For envio start, provide your own connection via the environment variables ENVIO_CLICKHOUSE_HOST, ENVIO_CLICKHOUSE_DATABASE, ENVIO_CLICKHOUSE_USERNAME, and ENVIO_CLICKHOUSE_PASSWORD. Currently supported only on Dedicated Plan.
Do not run multiple indexers writing to the same ClickHouse database at the same time.
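For envio start, wiring up the connection could look like the following shell sketch; every value here is a placeholder for your own ClickHouse deployment:

```shell
# Placeholder credentials - substitute your own ClickHouse details
export ENVIO_CLICKHOUSE_HOST="https://your-clickhouse.example.com:8443"
export ENVIO_CLICKHOUSE_DATABASE="envio"
export ENVIO_CLICKHOUSE_USERNAME="default"
export ENVIO_CLICKHOUSE_PASSWORD="change-me"
# then run the indexer: envio start
```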
HyperSync Source Improvements
Multiple updates on the HyperSync side to achieve smaller latency and less traffic:
- Server-Sent Events instead of polling to get updates about new blocks
- Cap'n Proto instead of JSON for query serialization
- Cache for queries with repetitive filters, a huge egress saving when indexing thousands of addresses
- Improved connection establishment behind a proxy
- Configurable log level support via the ENVIO_HYPERSYNC_LOG_LEVEL environment variable
- Automatic rate-limiting handling on the client side
- Better reconnection logic, logging, and fallbacks for HyperSync SSE and RPC WebSocket height streaming for more stable indexing at the chain head
Fuel Block Handler Support
Block handlers are now supported for Fuel indexing.
Solana Support (Experimental)
HyperIndex now supports Solana with RPC as a source. This feature is experimental and may undergo minor breaking changes. Solana exposes its block-stream handler as indexer.onSlot (rather than onBlock) to match Solana's slot-based model.
To initialize a Solana project:
pnpx envio@3.0.0-rc.0 init svm
See the Solana documentation for more details.
pnpx envio@3.0.0-rc.0 init Improvements
- Removed language selection to prefer TypeScript by default
- Cleaned up templates to follow the latest good practices
- Added new templates to highlight HyperIndex features: Feature: Factory Contract, Feature: External Calls
- Pre-configured GitHub Actions workflow for running tests and an initialized git repository
- Generated projects include Cursor/Claude skills to support agent-driven development
Block Handler Only Indexers
It's now possible to create indexers with only block handlers; previously, at least one event handler was required. The contracts field is now optional in config.yaml.
Flexible Entity Fields
We no longer restrict entity field names, such as type and others. Shape your entities any way you want. Database columns are also now generated in the same order as the fields are defined in schema.graphql.
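For example, a field name like type, which used to be reserved, can now be used directly (illustrative schema; the entity and its fields are made up):

```graphql
type Order {
  id: ID!
  type: String! # previously a reserved field name
  status: String!
}
```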
Unordered Multichain Mode by Default
Unordered multichain mode is now the default behavior, providing better performance for most use cases. If you need ordered multichain behavior, explicitly set multichain: ordered in your config.
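If you do need the old behavior, the opt-out is a one-line config change (a sketch based on the option named above):

```yaml
# config.yaml
multichain: ordered # restore ordered cross-chain event processing
```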
Preload Optimization by Default
Preload optimization is now enabled by default, replacing the previous loaders and preload_handlers options. This improves historical sync performance automatically.
TUI Improvements
We gave our TUI some love, making it look more beautiful and compact. It also consumes fewer resources, shares a link to the Hasura playground, and dynamically adjusts to the terminal width.
The TUI is now auto-disabled in CI environments and when running under AI agents, so logs stay clean without manual configuration. The legacy TUI_OFF=true environment variable was renamed to ENVIO_TUI=false.
New Testing Framework
HyperIndex ships a purpose-built testing framework powered by createTestIndexer(). Write tests against the same indexer that runs in production: no database, no Docker, no manual mock wiring.
The framework integrates with Vitest, replacing the previous mocha/chai setup with a single package that doesn't require configuration by default and includes snapshot testing out-of-the-box. It also provides typed test assertions and utilities to read/write entities in-between processing runs.
Three ways to feed events
1. Auto-exit: processes the first block with matching events, then exits. Each subsequent call continues where the last one stopped. Zero config needed.
describe("ERC20 indexer", () => {
it("processes the first block with events", async (t) => {
const indexer = createTestIndexer();
const result = await indexer.process({ chains: { 1: {} } });
// Auto-filled by Vitest on first run; just review and commit
t.expect(result).toMatchInlineSnapshot(`
{
"changes": [
{
"Transfer": {
"sets": [
{
"blockNumber": 10861674,
"from": "0x0000000000000000000000000000000000000000",
"id": "1-10861674-23",
"to": "0x41653c7d61609D856f29355E404F310Ec4142Cfb",
"transactionHash": "0x4b37d2f343608457ca...",
"value": 1000000000000000000000000000n,
},
],
},
"block": 10861674,
"chainId": 1,
"eventsProcessed": 1,
},
],
}
`);
});
});
2. Explicit block range: pin to specific blocks for deterministic CI snapshots.
const result = await indexer.process({
chains: {
1: {
startBlock: 10_861_674,
endBlock: 10_861_674,
},
},
});
3. Simulate: feed typed synthetic events for pure unit tests. No network, no block ranges.
await indexer.process({
chains: {
137: {
simulate: [
{
contract: "Greeter",
event: "NewGreeting",
params: { greeting: "Hello", user: "0x123..." },
},
],
},
},
});
Key capabilities
- Snapshot-driven assertions: result.changes captures every entity set/delete per block. Pair with toMatchInlineSnapshot for auto-generated, reviewable snapshots.
- Direct entity access: indexer.Entity.get(), .getOrThrow(), .getAll(), and .set() for reading and presetting state.
- Real pipeline, real confidence: tests exercise the full indexer pipeline, including dynamic contract registration, multi-chain support, and handler context.
- Parallel test execution via worker thread isolation.
The test indexer also exposes chain information:
const indexer = createTestIndexer();
indexer.chainIds; // [1, 42161]
indexer.chains[1].id; // 1
indexer.chains[1].startBlock; // 0
indexer.chains[1].ERC20.addresses; // ["0x..."]
// Read/write entities between processing runs
await indexer.Account.set({ id: "0x123...", balance: 100n });
const account = await indexer.Account.get("0x123...");
See the Testing documentation for more details.
Podman Support
Beyond Docker, HyperIndex now supports Podman for local development environments. This provides an alternative container runtime for developers who prefer Podman or have it available in their environment.
Nested Tuples for Contract Import
The envio init command now supports contracts with nested tuples in event signatures, which was previously a limitation when importing contracts.
PostgreSQL Update for Local Docker Compose
The local development Docker Compose setup now uses PostgreSQL 18.1 (upgraded from 17.5).
contractName and eventName on Event
Events now include contractName and eventName fields, making it easier to identify which contract and event you're working with in handlers:
indexer.onEvent(
{ contract: "ERC20", event: "Transfer" },
async ({ event }) => {
console.log(event.contractName); // "ERC20"
console.log(event.eventName); // "Transfer"
},
);
New Official Exported Types
Generated code now exports official generic types for entities, enums, and events. These replace the previous contract-specific type exports:
import type {
  MyEntity, // Still exported, but the generic Entity type is preferred
  Entity, // Generic entity type
  Enum, // Generic enum type (replaces the direct MyEnum export)
  EvmEvent, // Generic event type
  // Access specific fields: EvmEvent["block"]
} from "envio";
Support for DESC Indices
A nice way to improve your query performance as well:
type PoolDayData
@index(fields: ["poolId", ["date", "DESC"]]) {
id: ID!
poolId: String!
date: Timestamp!
}
RPC Source Improvements
Added a polling_interval option for RPC source configuration. Also added missing support for receipt-only fields (gasUsed, cumulativeGasUsed, effectiveGasPrice) that are not available via eth_getTransactionByHash; HyperIndex will additionally perform an eth_getTransactionReceipt request when one of these fields is added to field_selection.
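A sketch of how these options might fit together in config.yaml; the polling_interval placement and value, and the field_selection layout, are assumptions to check against the config reference:

```yaml
chains:
  - id: 1
    rpc:
      url: ${ENVIO_RPC_ENDPOINT}
      polling_interval: 1000 # assumption: polling interval in milliseconds
field_selection:
  transaction_fields:
    - gasUsed # receipt-only field; triggers an extra eth_getTransactionReceipt call
```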
WebSocket Support (Experimental)
Experimental WebSocket support for RPC source to improve head latency. Please create a GitHub issue if you come across any problems.
chains:
- id: 1
rpc:
url: ${ENVIO_RPC_ENDPOINT}
ws: ${ENVIO_WS_ENDPOINT}
for: realtime
Prometheus Metrics for Data Providers
Added a Prometheus metric to track requests to data providers, providing better observability into your indexer's data fetching patterns.
GraphQL-Style getWhere API
The getWhere query API has been redesigned using GraphQL-style syntax:
// Before
const transfers = await context.Transfer.getWhere.from.eq("0x123...");
// After
const transfers = await context.Transfer.getWhere({ from: { _eq: "0x123..." } });
Additionally, three new filter operators are available following Hasura-style conventions:
context.Entity.getWhere({ amount: { _gte: 100n } })
context.Entity.getWhere({ amount: { _lte: 500n } })
context.Entity.getWhere({ status: { _in: ["active", "pending"] } })
Direct RPC Client
Replaced Ethers.js with a direct RPC client implementation, reducing dependencies and improving performance.
Block Lag Configuration
A per-chain block_lag option to index behind the chain head by a specified number of blocks. Replaces the global ENVIO_INDEXING_BLOCK_LAG environment variable. Defaults to 0. This is for advanced use cases β only use it if you know what you're doing.
chains:
- id: 1
block_lag: 5
Official /metrics Endpoint
Prometheus metrics are now official. We cleaned up metric names, switched time units to seconds instead of milliseconds, and followed Prometheus naming conventions more closely. Metrics also cover data points previously available only via the --bench feature. A separate /metrics/runtime endpoint with a dedicated Prometheus registry is available for runtime metrics, isolated from the default /metrics endpoint.
Starting from the v3.0.0 release, Prometheus metrics will follow semver and be documented.
Breaking changes:
- Cleaned up metric names and switched time units from milliseconds to seconds
- Removed --bench support; use the /metrics endpoint instead
Use the new envio metrics CLI command to fetch the Prometheus metrics of a locally running indexer without curling the endpoint manually.
Continue on Config Change
HyperIndex can now keep indexing through some config.yaml changes (rpc configuration is the first to land) instead of erroring out on every restart. Where a change is incompatible, the CLI prints exactly which fields were touched and offers two clear options: revert, or envio dev -r to wipe and re-index. More flexibility will be unlocked over time; open a GitHub issue if you need a specific field supported.
Double Handler Registration
It's now possible to register multiple handlers for the same event with similar filters:
indexer.onEvent(
{ contract: "ERC20", event: "Transfer" },
async ({ event, context }) => {
// Your logic here
},
);
indexer.onEvent(
{ contract: "ERC20", event: "Transfer" },
async ({ event, context }) => {
// And here
},
);
Improved Multiple Data-Sources Support
After switching to a fallback source, HyperIndex now attempts to recover to the primary source 60 seconds later. Previously, it would stay on the fallback until the fallback was down or the indexer was restarted. The source selection logic has also been improved for better indexing resilience and stricter enforcement of the realtime mode configuration.
Updated Dev Docker Flow
envio dev no longer uses a generated Docker Compose file; it now manages containers, networks, and volumes directly for greater flexibility. For example, disabling Hasura with ENVIO_HASURA now prevents envio dev from pulling the Hasura image. Use envio dev --restart (or -r) to forcefully clear the database even when no config changes are detected.
Envio Dev Update
envio dev no longer automatically resets the database on incompatible config or schema changes. Use envio dev -r to explicitly allow this.
Envio Start Update
envio start now has a clear role: to run HyperIndex in the production environment. Use envio dev for local development to enable debugging with Dev Console.
Optimized envio codegen
envio codegen is now near-instant. We no longer run pnpm i for the generated package, and we no longer recompile ReScript every time you change config.yaml or schema.graphql. The output is also a lot quieter.
Smaller envio Package (-88MB)
By eliminating dynamically generated ReScript code, we no longer need to ship or run a ReScript compiler at runtime. The published npm package shrank from 141MB to 53MB.
No Hard pnpm Requirement
Internal use of pnpm is gone. The generated package no longer has its own dependency tree, so HyperIndex works with whichever package manager you prefer.
Bun Support
Run HyperIndex on Bun:
bun --bun envio dev
Choose Your Package Manager on envio init
envio init now accepts --package-manager=pnpm|npm|bun|yarn so you can scaffold projects without committing to pnpm.
Better Tuples Developer Experience
Solidity struct components used to be generated as positional tuples in handler params, which made handler code awkward. They are now generated as objects with named fields:
struct CreateEventCommon {
address funder;
address sender;
address recipient;
Lockup.CreateAmounts amounts;
IERC20 token;
bool cancelable;
bool transferable;
Lockup.Timestamps timestamps;
string shape;
address broker;
}
event CreateLockupTranchedStream(
uint256 indexed streamId,
Lockup.CreateEventCommon commonParams,
LockupTranched.Tranche[] tranches
);
// Before
event.params.commonParams[5];
event.params.commonParams[3][0];
// After
event.params.commonParams.cancelable;
event.params.commonParams.amounts.deposit;
Improved Multichain Backfill
For large multichain indexers, HyperIndex now throttles chains that have already reached the head so they don't compete for resources while the rest finish backfilling. Once every chain has caught up, throttling is lifted and all chains continue indexing equally.
Toolchain Upgrades
- ReScript upgraded from v11 to v12 (internally and in envio init templates)
- TypeScript upgraded from v5 to v6 (internally and in envio init templates)
Fixes
- Fixed an issue where the indexer stops progressing without any error (PostgreSQL client update)
- Fixed checksum for addresses returned by RPC in lowercase
- Fixed incorrect validation of the transaction to field returned by RPC
- Fixed OOM error on RPC request crashing loop
- Fixed an edge case where a multichain indexer could freeze during a rollback on reorg (also backported to v2.32.10)
- Fixed external Postgres database support via ENVIO_PG_HOST
- Fixed the S.nullable schema type to be T | null instead of T | undefined
Release Notes
For detailed release notes, see:
- v3.0.0-rc.0
- v3.0.0-alpha.24
- v3.0.0-alpha.23
- v3.0.0-alpha.22
- v3.0.0-alpha.21
- v3.0.0-alpha.20
- v3.0.0-alpha.19
- v3.0.0-alpha.18
- v3.0.0-alpha.17
- v3.0.0-alpha.16
- v3.0.0-alpha.15
- v3.0.0-alpha.14
- v3.0.0-alpha.13
- v3.0.0-alpha.12
- v3.0.0-alpha.11
- v3.0.0-alpha.10
- v3.0.0-alpha.9
- v3.0.0-alpha.8
- v3.0.0-alpha.7
- v3.0.0-alpha.6
- v3.0.0-alpha.5
- v3.0.0-alpha.4
- v3.0.0-alpha.3
- v3.0.0-alpha.2
- v3.0.0-alpha.1
- v3.0.0-alpha.0
HyperIndex Performance Benchmarks
File: benchmarks.md
Overview
HyperIndex delivers industry-leading performance for blockchain data indexing. Independent benchmarks have consistently shown Envio's HyperIndex to be the fastest blockchain indexing solution available, with dramatic performance advantages over competitive offerings.
Recent Independent Benchmarks
The most comprehensive and up-to-date benchmarks were conducted by Sentio in April 2025 and are available in the sentio-benchmark repository. These benchmarks compare Envio's HyperIndex against other popular blockchain indexers across multiple real-world scenarios:
Key Performance Highlights
| Case | Description | Envio | Nearest Competitor | The Graph | Ponder |
|---|---|---|---|---|---|
| LBTC Token Transfers | Event handling, No RPC calls, Write-only | 3m | 8m - 2.6x slower (Sentio) | 3h9m - 63x slower | 1h40m - 33x slower |
| LBTC Token with RPC calls | Event handling, RPC calls, Read-after-write | 1m | 6m - 6x slower (Sentio) | 1h3m - 63x slower | 45m - 45x slower |
| Ethereum Block Processing | 100K blocks with Metadata extraction | 7.9s | 1m - 7.5x slower (Subsquid) | 10m - 75x slower | 33m - 250x slower |
| Ethereum Transaction Gas Usage | Transaction handling, Gas calculations | 1m 26s | 7m - 4.8x slower (Subsquid) | N/A | 33m - 23x slower |
| Uniswap V2 Swap Trace Analysis | Transaction trace handling, Swap decoding | 41s | 2m - 3x slower (Subsquid) | 8m - 11x slower | N/A |
| Uniswap V2 Factory | Event handling, Pair and swap analysis | 8s | 2m - 15x slower (Subsquid) | 19m - 142x slower | 21m - 157x slower |
The independent benchmark results demonstrate that HyperIndex consistently outperforms all competitors across every tested scenario, including the most realistic real-world indexing scenario (LBTC Token with RPC calls), where HyperIndex was up to 6x faster than the nearest competitor and over 63x faster than The Graph.
Historical Benchmarking Results
Our internal benchmarking from October 2023 showed similar performance advantages. When indexing the Uniswap V3 ETH-USDC pool contract on Ethereum Mainnet, HyperIndex achieved:
- 2.1x faster indexing than the nearest competitor
- Over 100x faster indexing than some popular alternatives
You can read the full details in our Indexer Benchmarking Results blog post.
Verify For Yourself
We encourage developers to run their own benchmarks. You can also use the templates provided in the Open Indexer Benchmark repository.
How to Migrate Using AI
File: migrate-with-ai.md
HyperIndex v3 includes built-in Claude skills that guide AI programming assistants through the full subgraph migration process, from understanding your existing logic to converting handlers and running quality checks. This is the recommended way to migrate complex subgraphs.
Prerequisites
- An AI programming assistant (Cursor or Claude Code)
- pnpm installed
- HyperIndex v3 (Claude skills are available in v3)
Step 1: Initialize a Boilerplate HyperIndex Indexer
Create a new HyperIndex indexer that indexes the same contracts and events as the subgraph you are migrating. Run the following in a new directory:
pnpx envio@3.0.0-rc.0 init
Follow the CLI prompts to set up the boilerplate indexer with the same contracts and events as your existing subgraph.
The Claude skills are only available in HyperIndex v3. See the v3 migration guide for current install guidance.
Step 2: Set Up a Monorepo Structure
Create a parent directory that contains both your new HyperIndex boilerplate indexer and the existing subgraph repo you want to migrate:
my-migration/
├── my-subgraph/            # Your existing subgraph repo
└── my-hyperindex-indexer/  # The boilerplate HyperIndex indexer from Step 1
This structure gives your assistant visibility into both projects so it can read and understand your subgraph logic while writing the HyperIndex implementation.
Step 3: Run Your AI Programming Assistant
Open the monorepo root with your AI programming assistant running there (for example, run Claude Code in the monorepo root or open the monorepo in Cursor). Put your assistant in plan mode first, then provide a prompt like the following (replace the repo names with your own):
This monorepo contains two indexers:
- `my-subgraph/` - an existing Graph Protocol subgraph indexer (source of truth)
- `my-hyperindex-indexer/` - a HyperIndex boilerplate scaffolded from the same
contracts (migration target)
Migrate the subgraph indexer to a fully working HyperIndex indexer.
Follow these phases in order:
Phase 1 - Plan
- Produce a migration plan mapping each subgraph component to its HyperIndex
equivalent.
- Flag anything that has no direct equivalent and propose a workaround.
- Do NOT write code yet.
Phase 2 - Implement
- Migrate the entire subgraph following the plan and skill guides.
- Process one handler file at a time.
- After each file, run `pnpm envio codegen` to validate, and verify it against
the migration checklist before moving on.
Phase 3 - Verify
- Walk through every checklist item from the migration skill and confirm it
passes.
- Run any available build or type check commands.
- List any items you could not complete and why.
- Only modify files in `my-hyperindex-indexer/`. Do not change the subgraph repo.
- Preserve all entity fields and event mappings from the subgraph.
- Do not skip or summarize checklist items - execute every one.
- If you are uncertain about a migration decision, pause and ask me.
- After migration, run `pnpm dev` to verify the indexer runs correctly.
- Use the Indexer Migration Validator to compare outputs between your subgraph and the new HyperIndex indexer.
Manual Migration
For a detailed manual migration guide covering the step-by-step conversion of subgraph.yaml, the schema, and event handlers, see Migrate from The Graph.
Migrate from The Graph to Envio
File: migration-guide.md
Please reach out to our team on Discord for personalized migration assistance.
Introduction
Migrating your existing subgraph to Envio's HyperIndex is designed to be a developer-friendly process. HyperIndex draws strong inspiration from The Graph's subgraph architecture, which makes the migration simple, especially with the help of coding assistants like Cursor and AI tools (don't forget to use our AI-friendly docs).
The process is simple but requires a good understanding of the underlying concepts. If you are new to HyperIndex, we recommend starting with the Quickstart guide.
If you want an assistant-led workflow, see How to Migrate Using AI for a guided process that works in both Cursor and Claude Code.
Why Migrate to HyperIndex?
- Superior Performance: Up to 100x faster indexing speeds
- Lower Costs: Reduced infrastructure requirements and operational expenses
- Better Developer Experience: Simplified configuration and deployment
- Advanced Features: Access to capabilities not available in other indexing solutions
- Seamless Integration: Easy integration with existing GraphQL APIs and applications
Subgraph to HyperIndex Migration Overview
Migration consists of three major steps:
- Subgraph.yaml migration
- Schema migration - near copy paste
- Event handler migration
At any point in the migration, run `pnpm envio codegen` to verify that the config.yaml and schema.graphql files are valid, or run `pnpm dev` to verify that the indexer runs and indexes correctly.
0.5 Use pnpx envio@3.0.0-rc.0 init to generate a boilerplate
As a first step, we recommend using pnpx envio@3.0.0-rc.0 init to generate a boilerplate for your project. This will handle the creation of the config.yaml file and a basic schema.graphql file with generic handler functions.
1. subgraph.yaml → config.yaml
pnpx envio@3.0.0-rc.0 init will generate this for you. It's a straightforward configuration conversion: you specify which contracts to index, which networks to index (multiple networks can be specified with Envio), and which events from those contracts to index.
Take the following conversion as an example, where a subgraph.yaml file is converted to config.yaml; the comparison below is for the Uniswap v4 position manager subgraph.
The Graph - subgraph.yaml
specVersion: 0.0.4
description: Uniswap is a decentralized protocol for automated token exchange on Ethereum.
repository: https://github.com/Uniswap/v4-subgraph
schema:
file: ./schema.graphql
features:
- nonFatalErrors
- grafting
- kind: ethereum/contract
name: PositionManager
network: mainnet
source:
abi: PositionManager
address: "0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e"
startBlock: 21689089
mapping:
kind: ethereum/events
apiVersion: 0.0.7
language: wasm/assemblyscript
file: ./src/mappings/index.ts
entities:
- Position
abis:
- name: PositionManager
file: ./abis/PositionManager.json
eventHandlers:
- event: Subscription(indexed uint256,indexed address)
handler: handleSubscription
- event: Unsubscription(indexed uint256,indexed address)
handler: handleUnsubscription
- event: Transfer(indexed address,indexed address,indexed uint256)
handler: handleTransfer
HyperIndex - config.yaml
# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
name: uni-v4-indexer
networks:
- id: 1
start_block: 21689089
contracts:
- name: PositionManager
address: 0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e
events:
- event: Subscription(uint256 indexed tokenId, address indexed subscriber)
- event: Unsubscription(uint256 indexed tokenId, address indexed subscriber)
- event: Transfer(address indexed from, address indexed to, uint256 indexed id)
For any potential hurdles, please refer to the Configuration File documentation.
2. Schema migration
Copy and paste the schema from your subgraph into the HyperIndex schema.graphql file.
Small differences to note:
- You can remove the `@entity` directive
- Enums are supported
- BigDecimals are supported
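As a sketch of the schema change (entity and field names here are illustrative), the move is usually just deleting the directive:

```graphql
# The Graph - schema.graphql
type Position @entity {
  id: ID!
  owner: String!
  liquidity: BigDecimal!
}

# HyperIndex - schema.graphql: same shape, @entity removed
type Position {
  id: ID!
  owner: String!
  liquidity: BigDecimal!
}
```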
3. Event handler migration
This consists of two parts:
- Converting AssemblyScript to TypeScript
- Converting the subgraph syntax to HyperIndex syntax
3.1 Converting AssemblyScript to TypeScript
The subgraph uses AssemblyScript for its event handlers, while HyperIndex handlers are usually written in TypeScript. Since AssemblyScript is a subset of TypeScript, copying the code over is usually simple, especially for pure functions.
3.2 Converting the subgraph syntax to HyperIndex syntax
There are some subtle differences in syntax between subgraphs and HyperIndex, including but not limited to the following:
- Replace `Entity.save()` with `context.Entity.set()`
- Convert handlers to async functions
- Use `await` when loading entities: `const x = await context.Entity.get(id)`
- Use dynamic contract registration to register contracts
The below code snippets can give you a basic idea of what this difference might look like.
The Graph - eventHandler.ts
export function handleSubscription(event: SubscriptionEvent): void {
const subscription = new Subscribe(event.transaction.hash + event.logIndex);
subscription.tokenId = event.params.tokenId;
subscription.address = event.params.subscriber.toHexString();
subscription.logIndex = event.logIndex;
subscription.blockNumber = event.block.number;
subscription.position = event.params.tokenId;
subscription.save();
}
HyperIndex - eventHandler.ts
PoolManager.Subscription.handler(async ({ event, context }) => {
const entity = {
id: event.transaction.hash + event.logIndex,
tokenId: event.params.tokenId,
address: event.params.subscriber,
blockNumber: event.block.number,
logIndex: event.logIndex,
position: event.params.tokenId
}
context.Subscription.set(entity);
})
Extra tips
HyperIndex is a powerful tool that can index any contract. Some of its features go beyond what subgraphs offer, so in some cases you may want to optimize your migration further to take advantage of them. Here are some useful tips:
- Use `field_selection` to opt into optional transaction and block fields (e.g. `hash`, `status`, `gasUsed`) that are not included by default. See Transaction receipts for a migration-focused example and the field selection docs for the full list.
- Use the `unordered_multichain_mode` option to enable unordered multichain mode; this is the most common need for multichain indexing, but it comes with tradeoffs worth understanding. Doc here: unordered multichain mode
- Use wildcard indexing to index by event signatures rather than by contract address.
- HyperIndex uses the standard GraphQL query language, whereas TheGraph uses a custom GraphQL syntax. You can read about the differences and how to convert queries in our Query Conversion Guide. We also provide a query converter tool for backwards compatibility with existing TheGraph queries.
- Loaders are a powerful feature to optimize historical sync performance. You can read more about them here.
- HyperIndex is flexible enough to index offchain data too, or to send messages to a queue for fetching external data; you can further optimize such fetching with the effects API.
Transaction receipts
In The Graph, you opt into receipt data per-handler with receipt: true in subgraph.yaml:
eventHandlers:
- event: Transfer(indexed address,indexed address,indexed uint256)
handler: handleTransfer
receipt: true
This makes event.receipt available inside the handler with fields like status, gasUsed, and logs.
In HyperIndex, receipt-level fields are part of transaction_fields and must be requested via field_selection in config.yaml. There is no separate receipt object; the fields are accessed directly on event.transaction:
field_selection:
transaction_fields:
- hash
- status # 1 = success, 0 = reverted
- gasUsed
- cumulativeGasUsed
- contractAddress # non-null for contract-creation transactions
- logsBloom
MyContract.Transfer.handler(async ({ event, context }) => {
const { status, gasUsed } = event.transaction;
// ...
});
See the full list of available transaction_fields in the Configuration File docs.
Validating Your Migration
After completing your migration, it's important to verify that your HyperIndex indexer produces the same data as your original subgraph. Use the Indexer Migration Validator CLI tool to compare results between both endpoints and identify any discrepancies. The tool automatically generates entity configs from your GraphQL schema and provides detailed field-level analysis of differences.
Share Your Learnings
If you discover helpful tips during your migration, we'd love contributions! Open a PR to this guide and help future developers.
Getting Help
Join Our Discord: The fastest way to get personalized help is through our Discord community.
Migrate from Ponder to HyperIndex
File: migrate-from-ponder.md
Need help? Reach out on Discord for personalized migration assistance.
Migrating from Ponder to HyperIndex is straightforward: both frameworks use TypeScript, index EVM events, and expose a GraphQL API. The key differences are the config format, schema syntax, and entity operation API.
If you are new to HyperIndex, start with the Quickstart guide first.
For an assistant-led workflow, see How to Migrate Using AI, which includes a shared process for Cursor and Claude Code.
Why Migrate to HyperIndex?
- Up to 158x faster historical sync via HyperSync
- Multichain by default: index any number of chains in one config
- Same language: your TypeScript logic transfers directly
Migration Overview
Migration has three steps:
- `ponder.config.ts` → `config.yaml`
- `ponder.schema.ts` → `schema.graphql`
- Event handlers: adapt syntax and entity operations
At any point, run:
pnpm envio codegen # validate config + schema, regenerate types
pnpm dev # run the indexer locally
Step 1: ponder.config.ts → config.yaml
Ponder
export default createConfig({
chains: {
mainnet: { id: 1, rpc: process.env.PONDER_RPC_URL_1 },
},
contracts: {
MyToken: {
abi: myTokenAbi,
chain: "mainnet",
address: "0xabc...",
startBlock: 18000000,
},
},
});
HyperIndex (v3)
# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
name: my-indexer
contracts:
- name: MyToken
abi_file_path: ./abis/MyToken.json
events:
- event: Transfer
- event: Approval
chains:
- id: 1
start_block: 0
contracts:
- name: MyToken
address:
- 0xabc...
start_block: 18000000
v2 note: HyperIndex v2 uses `networks` instead of `chains`. See the v2 → v3 migration guide.
Key differences:
| Concept | Ponder | HyperIndex |
|---|---|---|
| Config format | ponder.config.ts (TypeScript) | config.yaml (YAML) |
| Chain reference | Named + viem object | Numeric chain ID |
| RPC URL | In config | RPC_URL_ env var |
| ABI source | TypeScript import | JSON file (abi_file_path) |
| Events to index | Inferred from handlers | Explicit events: list |
| Handler file | Inferred | Explicit handler: per contract |
Convert your ABI: Ponder uses TypeScript ABI exports (`as const`). HyperIndex needs a plain JSON file in `abis/`. Strip the `export const ... =` wrapper and the `as const` assertion, and save the array as `.json`.
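As a sketch of that conversion (the file path and the inline stand-in ABI are illustrative; in a real migration you would import your actual Ponder ABI export), a one-off script can serialize the `as const` array to JSON:

```typescript
// One-off conversion sketch (run with e.g. `npx tsx export-abi.ts`).
// A tiny stand-in ABI is inlined here so the script is self-contained;
// normally you would import it from your Ponder project instead.
import { writeFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

const myTokenAbi = [
  {
    type: "event",
    name: "Transfer",
    inputs: [
      { name: "from", type: "address", indexed: true },
      { name: "to", type: "address", indexed: true },
      { name: "value", type: "uint256", indexed: false },
    ],
  },
] as const;

// Serialize the `as const` array into the plain JSON file HyperIndex expects.
// In your indexer this would be ./abis/MyToken.json; a temp path is used here.
const outPath = join(tmpdir(), "MyToken.json");
writeFileSync(outPath, JSON.stringify(myTokenAbi, null, 2));
console.log(`wrote ${outPath}`);
```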
Field selection: accessing transaction and block fields
By default, only a minimal set of fields is available on event.transaction and event.block. Fields like event.transaction.hash are undefined unless explicitly requested.
events:
- event: Transfer
field_selection:
transaction_fields:
- hash
Or declare once at the top level to apply to all events:
name: my-indexer
field_selection:
transaction_fields:
- hash
contracts:
# ...
See the full list of available fields in the Configuration File docs.
Migrate From Alchemy
File: migrate-from-alchemy.md
Note: Alchemy subgraphs sunset on Dec 8th, 2025. Envio is offering affected Alchemy users 2 months of free hosting on Envio, along with full white-glove migration support to help projects move over smoothly.
For more info on how you can start your free trial or book migration support, visit this page to learn more.
Migrating Alchemy subgraphs to Envio's HyperIndex is a simple and developer-friendly process. Alchemy subgraphs follow The Graph's model and HyperIndex uses a very similar structure, so most of your existing setup can carry over cleanly.
If you're familiar with The Graph's libraries, the migration process should be straightforward. You can also use tools like Cursor to speed things up. If you are new to HyperIndex, we strongly recommend starting with our Quickstart guide before you begin your migration from Alchemy.
Why Migrate to Envio's HyperIndex?
- High Speed Performance: 143x faster than subgraphs
- Lower Costs: Reduced infrastructure requirements and operational expenses
- Better Developer Experience: Simplified configuration and deployment
- Multichain Native: Index data across multiple EVM chains through a single HyperIndex project
- Local Development: Run your indexers locally for fast iteration and easier debugging
- White Glove Migration Support: Get direct support from the Envio team for a smoother migration.
- GitOps Ready Deployments: Link your GitHub repo and manage multiple deployments in a clean unified workflow
- Advanced Features: Access to features like external calls and block handlers
- Seamless Integration: Easily integrate existing GraphQL APIs and applications
How to Migrate from Alchemy to Envio in 4 easy steps
This migration consists of 4 major steps:
- Create a HyperIndex Project
- subgraph.yaml Migration to config.yaml
- schema.graphql Migration
- Event Handler Migration
Create a HyperIndex Project
Start by spinning up a basic HyperIndex project with this command:
pnpx envio@3.0.0-rc.0 init template --name alchemy-migration --directory alchemy-migration --template greeter --api-token "YOUR_ENVIO_API_KEY"
Once the project is created, drop your API key into the .env file and you're good to go.
subgraph.yaml Migration to config.yaml
In HyperIndex, all project configuration lives in config.yaml. This is where you define contract addresses, the networks you want to index, and the specific events you want to track from those contracts.
Below is an example showing how a Uniswap V4 subgraph.yaml maps to a HyperIndex config.yaml in a real migration.
The Graph - subgraph.yaml
specVersion: 0.0.4
description: Uniswap is a decentralized protocol for automated token exchange on Ethereum.
repository: https://github.com/Uniswap/v4-subgraph
schema:
file: ./schema.graphql
features:
- nonFatalErrors
- grafting
- kind: ethereum/contract
name: PositionManager
network: mainnet
source:
abi: PositionManager
address: "0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e"
startBlock: 21689089
mapping:
kind: ethereum/events
apiVersion: 0.0.7
language: wasm/assemblyscript
file: ./src/mappings/index.ts
entities:
- Position
abis:
- name: PositionManager
file: ./abis/PositionManager.json
eventHandlers:
- event: Subscription(indexed uint256,indexed address)
handler: handleSubscription
- event: Unsubscription(indexed uint256,indexed address)
handler: handleUnsubscription
- event: Transfer(indexed address,indexed address,indexed uint256)
handler: handleTransfer
HyperIndex - config.yaml
# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
name: uni-v4-indexer
networks:
- id: 1
start_block: 21689089
contracts:
- name: PositionManager
address: 0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e
events:
- event: Subscription(uint256 indexed tokenId, address indexed subscriber)
- event: Unsubscription(uint256 indexed tokenId, address indexed subscriber)
- event: Transfer(address indexed from, address indexed to, uint256 indexed id)
If you hit any issues, check the Configuration File docs or reach out to our team in Discord.
schema.graphql Migration
This step is simple. You keep the entire file as is, with one small change: remove all @entity directives from your entities. Everything else stays the same.
Event Handler Migration
This is the final step of the migration which consists of two parts:
- Moving from AssemblyScript to TypeScript
- Updating Subgraph syntax to HyperIndex syntax
AssemblyScript to TypeScript
HyperIndex uses TypeScript instead of AssemblyScript. Since AssemblyScript is a subset of TypeScript, you can simply copy most of your code over without worrying about major syntax changes.
Subgraph to HyperIndex
The HyperIndex workflow is very similar to Subgraphs, but there are a few important differences to keep in mind:
- Replace `ENTITY.save()` with `context.ENTITY.set(VALUES)`
- Handlers need to be async
- Use `await` when loading entities
As you start using HyperIndex, you'll pick up the differences quickly.
Here is a code snippet to give you a sense of what these changes look like in practice.
The Graph - eventHandler.ts
export function handleSubscription(event: SubscriptionEvent): void {
const subscription = new Subscribe(event.transaction.hash + event.logIndex);
subscription.tokenId = event.params.tokenId;
subscription.address = event.params.subscriber.toHexString();
subscription.logIndex = event.logIndex;
subscription.blockNumber = event.block.number;
subscription.position = event.params.tokenId;
subscription.save();
}
HyperIndex - eventHandler.ts
PoolManager.Subscription.handler(async ({ event, context }) => {
const entity = {
id: event.transaction.hash + event.logIndex,
tokenId: event.params.tokenId,
address: event.params.subscriber,
blockNumber: event.block.number,
logIndex: event.logIndex,
position: event.params.tokenId
}
context.Subscription.set(entity);
})
For a few extra tips on migrating from Alchemy to Envio, check out our other migration guide in our docs.
Share Your Learnings
If you come across anything useful during your migration, please feel free to contribute. Simply open a PR to this guide and help future developers.
Getting Help
Join our Discord if you need support. It is the fastest way to get direct help from the team and the community.
Migrate to HyperIndex V3
File: migrate-to-v3.md
This guide is a plain, step-by-step checklist of every change required to upgrade an existing HyperIndex V2 project to V3. For an overview of new V3 capabilities, see What's New in V3.
Follow the steps in order. Each step is independent enough to skim, but Step 0 (preparation on V2) is strongly recommended before you start touching V3 code.
Step 0: Prepare on V2 (Recommended)
Before upgrading to V3, prepare your project while still on V2:
- Upgrade to `envio@2.32.6`.
- Enable Preload Optimization in `config.yaml`: `preload_handlers: true`
- If you were using loaders, migrate them to Preload Optimization following the Migrating from Loaders guide.
- Verify your indexer still works with `pnpm dev`.
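That preparation boils down to a single top-level line in config.yaml (sketch; the rest of your V2 config stays as-is):

```yaml
# config.yaml (still on V2): opt into the handler execution model V3 always uses
preload_handlers: true
```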
Step 1: Update Node.js
Update Node.js to 22 or higher (24 is recommended). Earlier versions are no longer supported.
Step 2: Update package.json
- Add `"type": "module"` (required; without it the project will fail to start with ESM import errors).
- Set `engines.node` to `>=22.0.0`.
- Update the `envio` dependency to the latest v3 release.
- Remove the `optionalDependencies.generated` entry; the local `generated` package no longer exists. Types are emitted to `.envio/types.d.ts` (git-ignored) and wired up via a small `envio-env.d.ts` file at the project root. Everything previously imported from `generated` is now exported from `envio`:
  - "optionalDependencies": {
  -   "generated": "./generated"
  - },
Update dev tooling:
{
"type": "module",
"engines": {
"node": ">=22.0.0"
},
"dependencies": {
"envio": "3.0.0-rc.0"
},
"devDependencies": {
"@types/node": "24.12.2",
"typescript": "6.0.3",
"vitest": "4.1.0"
}
} -
- If you used `ts-node` for the start script, replace it with `envio start`:
{
"scripts": {
"start": "envio start"
}
}
Test runner
Option A: Migrate to Vitest (recommended).
pnpm remove ts-mocha ts-node mocha chai @types/mocha @types/chai
pnpm add -D vitest@4.0.16
{
"scripts": {
"test": "vitest run"
},
"devDependencies": {
"vitest": "4.0.16"
}
}
Move tests from test/Test.ts to src/indexer.test.ts and update imports:
// Before (mocha/chai)
import { expect } from "chai";
// After (vitest)
import { it } from "vitest";
Option B: Keep Mocha. Replace ts-mocha/ts-node with tsx:
pnpm remove ts-mocha ts-node
pnpm add -D tsx@4.21.0
{
"scripts": {
"mocha": "tsc --noEmit && NODE_OPTIONS='--no-warnings --import tsx' mocha --exit test/**/*.ts"
}
}
Step 3: Update tsconfig.json
Update for ESM:
{
"compilerOptions": {
"esModuleInterop": true,
"skipLibCheck": true,
"target": "es2022",
"allowJs": true,
"resolveJsonModule": true,
"moduleDetection": "force",
"isolatedModules": true,
"verbatimModuleSyntax": true,
"strict": true,
"noUncheckedIndexedAccess": true,
"noImplicitOverride": true,
"module": "ESNext",
"moduleResolution": "bundler",
"noEmit": true,
"lib": ["es2022"],
"types": ["node"]
}
}
verbatimModuleSyntax and noUncheckedIndexedAccess add extra strictness; you can disable them to simplify the migration.
Step 4: Update config.yaml
Renames
- `networks` → `chains`
- `confirmed_block_threshold` → `max_reorg_depth`
- `rpc_config` → `rpc` (now supports multiple URLs, `for: sync | realtime | fallback`, and WebSocket configuration)
# Before
networks:
- id: 1
contracts:
- name: MyContract
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
# After
chains:
- id: 1
contracts:
- name: MyContract
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
Removals
Remove these options if present:
- `unordered_multichain_mode`: unordered is now the default. If you need ordered behavior, set `multichain: ordered`.
- `loaders`: Preload Optimization is now always enabled.
- `preload_handlers`: now always enabled.
- `preRegisterDynamicContracts`: no longer needed.
- `event_decoder`: the Rust-based decoder is now the only implementation.
- `output`: generated types are always emitted to `.envio/`.
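If you do need deterministic cross-chain ordering after removing `unordered_multichain_mode`, the replacement is a single top-level option (sketch):

```yaml
# config.yaml (V3): restore ordered multichain processing
multichain: ordered
```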
Replacements for environment variables
If you were using the MAX_BATCH_SIZE environment variable, switch to the config option:
full_batch_size: 5000
Optional: Automatic handler registration
Move handler files to src/handlers/ and remove the explicit handler paths from config.yaml. The explicit handler field still works if you'd rather not move files immediately.
Optional: ClickHouse storage
If using ClickHouse, add:
storage:
postgres: true
clickhouse: true
The connection environment variables (ENVIO_CLICKHOUSE_HOST, ENVIO_CLICKHOUSE_DATABASE, ENVIO_CLICKHOUSE_USERNAME, ENVIO_CLICKHOUSE_PASSWORD) are still required for envio start.
Step 5: Update Environment Variables
Add
If your indexer uses HyperSync (the default), set an API token:
- Get a free API token at envio.dev/app/api-tokens.
- Set it in your environment:
  export ENVIO_API_TOKEN=your_token_here
  Or in a local .env file:
  ENVIO_API_TOKEN=your_token_here
Remove
- `UNSTABLE__TEMP_UNORDERED_HEAD_MODE`
- `UNORDERED_MULTICHAIN_MODE`
- `MAX_BATCH_SIZE` (use `full_batch_size` in `config.yaml` instead)
- `ENVIO_INDEXING_BLOCK_LAG` (use the per-chain `block_lag` config option instead)
Rename
- `TUI_OFF=true` → `ENVIO_TUI=false` (TUI is also auto-disabled in CI and under AI agents)
- `ENVIO_PG_PUBLIC_SCHEMA` → `ENVIO_PG_SCHEMA` (the old name is still supported until v4)
Step 6: Update Handler Code
All contract-specific handler exports have been removed. Register every handler through the unified indexer value imported from envio.
Migrate event handlers
// Before
ERC20.Transfer.handler(
async ({ event, context }) => {
// ...
},
{
wildcard: true,
eventFilters: ({ chainId }) => [
{ from: ZERO_ADDRESS, to: WHITELIST[chainId] },
],
}
);
// After
indexer.onEvent(
{
contract: "ERC20",
event: "Transfer",
wildcard: true,
where: ({ chain }) => ({
params: [{ from: ZERO_ADDRESS, to: WHITELIST[chain.id] }],
}),
},
async ({ event, context }) => {
// ...
},
);
Notes:
- `eventFilters` is renamed to `where`.
- The `where` callback receives `{ chain }` (not `{ chainId }`) and must return `false`, `true`, or `{ params: [...], block?: { number: { _gte, _lte, _every } } }`.
- The previous array shorthand at the top level is no longer accepted; wrap it in `{ params: [...] }`.
Migrate dynamic contract registration
// Before
UniV3.PoolFactory.contractRegister(async ({ event, context }) => {
context.addPool(event.params.poolAddress);
});
// After
indexer.contractRegister(
{ contract: "UniV3", event: "PoolFactory" },
async ({ event, context }) => {
context.chain.Pool.add(event.params.poolAddress);
},
);
`context.add<ContractName>(address)` becomes `context.chain.<ContractName>.add(address)`.
Migrate block handlers
// Before
indexer.chainIds.forEach((chainId) => {
onBlock(
{ name: "EveryBlock", chain: chainId },
async ({ block, context }) => {
// ...
},
);
});
// After
indexer.onBlock(
{ name: "EveryBlock" },
async ({ block, context }) => {
// ...
},
);
For chain-specific or interval handlers, return { block: { number: { _gte, _lte, _every } } } from where, or false to skip a chain. Inside a block handler, replace block.chainId with context.chain.id.
Update the getWhere API
Switch to the GraphQL-style filter syntax:
// Before
const transfers = await context.Transfer.getWhere.from.eq("0x123...");
const bigTransfers = await context.Transfer.getWhere.value.gt(1000n);
// After
const transfers = await context.Transfer.getWhere({ from: { _eq: "0x123..." } });
const bigTransfers = await context.Transfer.getWhere({ value: { _gt: 1000n } });
New operators are also available: _gte, _lte, _in.
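To make the new filter shape concrete, here is a tiny stand-alone evaluator of those operators (illustrative only; this is not Envio's internal implementation, just the semantics the filter objects express):

```typescript
// GraphQL-style filter object, mirroring the shape passed to getWhere.
type Filter<T> = {
  _eq?: T;
  _gt?: T;
  _gte?: T;
  _lt?: T;
  _lte?: T;
  _in?: readonly T[];
};

// True when `value` satisfies every operator present in the filter.
function matches<T extends string | number | bigint>(value: T, f: Filter<T>): boolean {
  if (f._eq !== undefined && value !== f._eq) return false;
  if (f._gt !== undefined && !(value > f._gt)) return false;
  if (f._gte !== undefined && !(value >= f._gte)) return false;
  if (f._lt !== undefined && !(value < f._lt)) return false;
  if (f._lte !== undefined && !(value <= f._lte)) return false;
  if (f._in !== undefined && !f._in.includes(value)) return false;
  return true;
}

const transfers = [
  { from: "0x123", value: 500n },
  { from: "0xabc", value: 2000n },
];

// Same spirit as: await context.Transfer.getWhere({ value: { _gt: 1000n } })
const bigTransfers = transfers.filter((t) => matches(t.value, { _gt: 1000n }));
```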
Rename and removal cheat sheet
| V2 (removed) | V3 |
|---|---|
Contract.Event.handler(...) | indexer.onEvent({ contract, event, ...options }, handler) |
Contract.Event.contractRegister(...) | indexer.contractRegister({ contract, event }, handler) |
onBlock({ chain, ... }, handler) | indexer.onBlock({ name, where? }, handler) |
context.add<ContractName>(addr) | context.chain.<ContractName>.add(addr) |
eventFilters option | where callback returning { params: [...] } |
experimental_createEffect | createEffect |
block.chainId (in block handlers) | context.chain.id |
transaction.kind | transaction.type |
transaction.chainId | context.chain.id or event.chainId |
chain type | ChainId (now a union type) |
getGeneratedByChainId(...) | indexer.chains[chainId] |
Entity.getWhere.field.eq(value) | Entity.getWhere({ field: { _eq: value } }) |
Entity.getWhere.field.gt(value) | Entity.getWhere({ field: { _gt: value } }) |
Entity.getWhere.field.lt(value) | Entity.getWhere({ field: { _lt: value } }) |
Lowercased entity types (e.g. transfer) | Capitalized (Transfer) |
ERC20_Transfer_eventLog | EvmEvent |
ERC20_Transfer_block | EvmEvent["block"] |
MyEnum (direct export) | Enum |
MyEntity (direct export) | Entity (preferred; direct still exported) |
Other type changes:
- `Address` is now `0x${string}` instead of `string`.
- Entity array fields are typed as `readonly`; update any code that mutates them.
- The `S.nullable` schema type now returns `T | null` instead of `T | undefined`.
- The internal `ContractType` enum was removed.
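A short sketch of what the narrower `Address` type means in practice (the type alias below is a stand-in matching the V3 shape, and `toAddress` is a hypothetical helper, not an Envio export):

```typescript
// Stand-in matching the V3 Address shape: a template literal type.
type Address = `0x${string}`;

// Assigns fine: the literal starts with "0x".
const ok: Address = "0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e";

// A plain `string` no longer assigns directly; narrow (and ideally validate) first.
function toAddress(raw: string): Address {
  if (!raw.startsWith("0x")) throw new Error(`not an address: ${raw}`);
  return raw as Address;
}

console.log(ok, toAddress("0x1234"));
```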
Step 7: Update Tests
The MockDb testing API has been removed. Migrate to createTestIndexer() with simulate.
-import { TestHelpers, type User } from "generated";
-const { MockDb, Greeter, Addresses } = TestHelpers;
+import { createTestIndexer, type User, TestHelpers } from "envio";
+const { Addresses } = TestHelpers;
it("A NewGreeting event creates a User entity", async (t) => {
- const mockDbInitial = MockDb.createMockDb();
+ const indexer = createTestIndexer();
const userAddress = Addresses.defaultAddress;
const greeting = "Hi there";
- const mockNewGreetingEvent = Greeter.NewGreeting.createMockEvent({
- greeting: greeting,
- user: userAddress,
- });
-
- const updatedMockDb = await Greeter.NewGreeting.processEvent({
- event: mockNewGreetingEvent,
- mockDb: mockDbInitial,
- });
+ await indexer.process({
+ chains: {
+ 137: {
+ simulate: [
+ {
+ contract: "Greeter",
+ event: "NewGreeting",
+ params: { greeting, user: userAddress },
+ },
+ ],
+ },
+ },
+ });
const expectedUserEntity: User = {
id: userAddress,
latestGreeting: greeting,
numberOfGreetings: 1,
greetings: [greeting],
};
- const actualUserEntity = updatedMockDb.entities.User.get(userAddress);
+ const actualUserEntity = await indexer.User.getOrThrow(userAddress);
t.expect(actualUserEntity).toEqual(expectedUserEntity);
});
MockDb migration cheat sheet
Old (MockDb) | New (createTestIndexer) |
|---|---|
MockDb.createMockDb() | createTestIndexer() |
Contract.Event.createMockEvent({...}) | Inline in simulate: [{ contract, event, params }] |
Contract.Event.processEvent({event,mockDb}) | indexer.process({ chains: { id: { simulate } } }) |
mockDb.entities.Entity.get(id) | await indexer.Entity.getOrThrow(id) |
mockDb.entities.Entity.set({...}) | indexer.Entity.set({...}) |
| Manual handler threading & event chaining | Automatic β pass multiple events in the simulate array |
Step 8: Update CLI Usage

- `envio dev` no longer auto-resets the database. If you relied on this, run `envio dev -r` (or `--restart`) explicitly.
- `envio start` is now production-only. Continue using `envio dev` for local development.
- Changes in handler files no longer trigger codegen on `pnpm dev`.
Step 9: Run Codegen and Verify
pnpm envio codegen
pnpm dev
Postgres column type changes (`raw_events.event_id`: NUMERIC → BIGINT, `raw_events.serial`: SERIAL → BIGSERIAL, `envio_chains.events_processed`: INTEGER → BIGINT, `envio_checkpoints.id`: INTEGER → BIGINT) are applied automatically; no action required. The deprecated `envio_chains._num_batches_fetched` column always returns 0.
Quick Migration Checklist

Prepare (on V2):

- Upgrade to `envio@2.32.6`
- Enable `preload_handlers: true` in `config.yaml`
- Migrate from loaders if applicable (guide)
- Verify the indexer works with `pnpm dev`
Dependencies:

- Update Node.js to `>=22`
- Add `"type": "module"` to `package.json` (required for V3)
- Update the `envio` dependency to the latest v3 release
- Remove `optionalDependencies.generated` from `package.json`
- Update `engines.node` to `>=22.0.0`
- Update `tsconfig.json` for ESM support
- Migrate from mocha/chai to vitest (recommended), or replace `ts-mocha`/`ts-node` with `tsx`
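Taken together, the Dependencies items above imply a `package.json` shaped roughly like this (a sketch only; the version numbers are illustrative, not pinned by this guide):

```json
{
  "type": "module",
  "engines": {
    "node": ">=22.0.0"
  },
  "dependencies": {
    "envio": "^3.0.0"
  },
  "devDependencies": {
    "vitest": "^2.0.0",
    "typescript": "^5.0.0"
  }
}
```

Note that `optionalDependencies.generated` is gone entirely, per the checklist item above.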
config.yaml:

- Rename `networks` → `chains`
- Rename `confirmed_block_threshold` → `max_reorg_depth`
- Replace `rpc_config` with `rpc`
- Remove `unordered_multichain_mode` (now the default)
- Remove `loaders` and `preload_handlers`
- Remove `preRegisterDynamicContracts`
- Remove `event_decoder`
- Remove `output` (types are always written to `.envio/`)
- If using ClickHouse, add `storage: { postgres: true, clickhouse: true }`
Environment variables:

- Set `ENVIO_API_TOKEN` if using HyperSync (get token)
- Remove `UNSTABLE__TEMP_UNORDERED_HEAD_MODE`
- Remove `UNORDERED_MULTICHAIN_MODE`
- Remove `MAX_BATCH_SIZE` (use `full_batch_size`)
- Remove `ENVIO_INDEXING_BLOCK_LAG` (use per-chain `block_lag`)
- Rename `TUI_OFF=true` → `ENVIO_TUI=false`
- Rename `ENVIO_PG_PUBLIC_SCHEMA` → `ENVIO_PG_SCHEMA`
Handler code:

- Migrate event handlers from `Contract.Event.handler(...)` to `indexer.onEvent({ contract, event, ...options }, handler)`
- Migrate dynamic contract registration to `indexer.contractRegister({ contract, event }, handler)`
- Replace `context.add(addr)` with `context.chain.<Contract>.add(addr)`
- Convert `eventFilters` to `where` returning `{ params: [...] }`
- Migrate block handlers to a single `indexer.onBlock` call (use `where` for chain-specific or interval filters)
- Use `where.block.number._gte` to override per-event start blocks if needed
- Replace `experimental_createEffect` with `createEffect`
- Replace `block.chainId` with `context.chain.id`
- Replace `transaction.kind` with `transaction.type`
- Replace `transaction.chainId` with `context.chain.id` or `event.chainId`
- Update the `chain` type to `ChainId`
- Replace `getGeneratedByChainId` with `indexer.chains[chainId]`
- Update `Address` consumers: the type is now `0x${string}`
- Replace lowercased entity imports with capitalized versions (e.g. `transfer` → `Transfer`)
- Update `getWhere` calls to the GraphQL-style filter syntax
- Update any `S.nullable` usage: it now returns `null` instead of `undefined`
- Replace contract-specific type exports with generics (`EvmEvent`)
Tests:

- Migrate from `MockDb` to `createTestIndexer()`

CLI:

- Use `envio dev -r` if you relied on `envio dev` resetting the DB automatically
- Use `envio dev` for local development (`envio start` is production-only)

Verify:

- Run `pnpm envio codegen` and `pnpm dev`
Getting Help
If you encounter any issues during migration, join our Discord community for support.
Configuration File
File: Guides/configuration-file.mdx
The config.yaml file defines your indexer's behavior: which blockchain events to index, contract addresses, which networks to index, and various advanced indexing options. Configuring it correctly is a crucial step in setting up HyperIndex.
After any changes to your config.yaml and the schema, run:
pnpm codegen
This command generates necessary types and code for your event handlers.
Key Configuration Options
Contract Addresses
Set the address of the smart contract you're indexing.
Addresses can be provided in checksum format or in lowercase. Envio accepts both and normalizes them internally.
Single address:
address: 0xContractAddress
Multiple addresses for the same contract:
contracts:
- name: MyContract
address:
- 0xAddress1
- 0xAddress2
If using a proxy contract, always use the proxy address, not the implementation address.
Global definitions:
You can also avoid repeating addresses by using global contract definitions:
contracts:
- name: Greeter
abi: greeter.json
networks:
- id: ethereum-mainnet
contracts:
- name: Greeter
address: 0xProxyAddressHere
Events Selection
Define specific events to index in a human-readable format:
events:
- event: "NewGreeting(address user, string greeting)"
- event: "ClearGreeting(address user)"
By default, all events defined in the contract are indexed, but you can selectively disable them by removing them from this list.
Custom Event Names
You can assign custom names to events in config.yaml. This is handy when
two events share the same name but have different signatures, or when you want
a more descriptive name in your Envio project.
events:
- event: Assigned(address indexed recipientId, uint256 amount, address token)
- event: Assigned(address indexed recipientId, uint256 amount, address token, address sender)
name: AssignedWithSender
Field Selection
To improve indexing performance and reduce credits usage, the block and transaction fields on events contain only a subset of the fields available on the blockchain.
To access fields that are not provided by default, specify them using the field_selection option for your event:
events:
- event: "Assigned(address indexed user, uint256 amount)"
field_selection:
transaction_fields:
- transactionIndex
block_fields:
- timestamp
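Inside the handler, the selected fields then appear on `event.transaction` and `event.block`. Here is a self-contained sketch of reading them; the event type below is a hypothetical mirror of the selection above, not the generated types:

```typescript
// Hypothetical shape mirroring the field_selection example above
type AssignedEvent = {
  params: { user: string; amount: bigint };
  transaction: { transactionIndex: number }; // from transaction_fields
  block: { timestamp: number };              // from block_fields
};

// A pure helper a handler could call once the extra fields are available
const describeAssigned = (event: AssignedEvent): string =>
  `user=${event.params.user} amount=${event.params.amount} ` +
  `txIndex=${event.transaction.transactionIndex} ts=${event.block.timestamp}`;

const line = describeAssigned({
  params: { user: "0xabc", amount: 5n },
  transaction: { transactionIndex: 2 },
  block: { timestamp: 1700000000 },
});
// line === "user=0xabc amount=5 txIndex=2 ts=1700000000"
```

In a real handler you would pass `event` straight through, since the generated event object already has this shape once the fields are selected.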
See all possible options in the Config File Reference, or rely on your IDE's autocomplete.
Global Field Selection
You can also specify fields globally for all events in the root of the config file:
field_selection:
transaction_fields:
- hash
- gasUsed
block_fields:
- parentHash
Try to use this option sparingly as it can cause redundant Data Source calls and increased credits usage.
Field Selection per Event is available from envio@2.11.0 and above. Please upgrade your indexer to access this feature.
Rollback on Reorg
HyperIndex automatically handles blockchain reorganizations by default. To disable or customize this behavior, set the rollback_on_reorg flag in your config.yaml:
rollback_on_reorg: true # default is true
See detailed configuration options here.
Environment Variables
Since envio@2.9.0, environment variable interpolation is supported for flexibility and security:
networks:
- id: ${ENVIO_CHAIN_ID:-ethereum-mainnet}
contracts:
- name: Greeter
address: ${ENVIO_GREETER_ADDRESS}
Run your indexer with custom environment variables:
ENVIO_CHAIN_ID=optimism ENVIO_GREETER_ADDRESS=0xYourContractAddress pnpm dev
Interpolation syntax:
- `${ENVIO_VAR}`: use the value of `ENVIO_VAR`
- `${ENVIO_VAR:-default}`: use `ENVIO_VAR` if set, otherwise use `default`
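The interpolation follows POSIX shell defaulting semantics, so you can preview how a value will resolve in any shell:

```shell
unset ENVIO_CHAIN_ID
echo "${ENVIO_CHAIN_ID:-ethereum-mainnet}"   # prints: ethereum-mainnet

ENVIO_CHAIN_ID=optimism
echo "${ENVIO_CHAIN_ID:-ethereum-mainnet}"   # prints: optimism
```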
For more detailed information about environment variables, see our Environment Variables Guide.
Output Directory Path
You can customize the path where the generated directory will be placed using the output option:
output: ./custom/generated/path
By default, the generated directory is placed in generated relative to the current working directory. If set, it will be a path relative to the config file location.
This is an advanced configuration option. When using a custom output directory, you'll need to manually adjust your .gitignore file and project structure to match the new configuration.
Full config file example
This example indexes events from multiple contracts across multiple networks.
name: envio-indexer
unordered_multichain_mode: true
preload_handlers: true
contracts:
- name: PoolManager
events:
- event: Swap(bytes32 indexed id, address indexed sender, int128 amount0, int128 amount1, uint160 sqrtPriceX96, uint128 liquidity, int24 tick, uint24 fee)
- name: PositionManager
events:
- event: Transfer(address indexed from, address indexed to, uint256 indexed id)
networks:
- id: 1
# Keep it 0 and HyperSync will automatically find the first block for your contracts
start_block: 0
contracts:
- name: PositionManager
address:
- 0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e
start_block: 18500000 # OPTIONAL: Override for contract deployed later
- name: PoolManager
address:
- "0x000000000004444c5dc75cB358380D2e3dE08A90"
- id: 10
start_block: 0
contracts:
- name: PositionManager
address:
- 0x3C3Ea4B57a46241e54610e5f022e5c45859A1017
- name: PoolManager
address:
- 0x9a13F98Cb987694C9F086b1F5eB990EeA8264Ec3
- id: 42161
start_block: 0
contracts:
- name: PositionManager
address:
- 0xd88f38f930b7952f2db2432cb002e7abbf3dd869
- name: PoolManager
address:
- 0x360e68faccca8ca495c1b759fd9eee466db9fb32
Now that your configuration file is set, you're ready to start indexing with HyperIndex!
Schema File
File: Guides/schema-file.md
The schema.graphql file defines the data model for your HyperIndex indexer. Each entity type defined in this schema corresponds directly to a database table, with your event handlers responsible for creating and updating the records. HyperIndex automatically generates a GraphQL API based on these entity types, allowing easy access to the indexed data.
Scalar Types
Scalar types represent basic data types and map directly to JavaScript, TypeScript, or ReScript types.
| GraphQL Scalar | Description | JavaScript/TypeScript | ReScript |
|---|---|---|---|
| `ID` | Unique identifier | `string` | `string` |
| `String` | UTF-8 character sequence | `string` | `string` |
| `Int` | Signed 32-bit integer | `number` | `int` |
| `Float` | Signed floating-point number | `number` | `float` |
| `Boolean` | `true` or `false` | `boolean` | `bool` |
| `Bytes` | UTF-8 character sequence (hex prefixed with `0x`) | `string` | `string` |
| `BigInt` | Signed integer (`int256` in Solidity) | `bigint` | `bigint` |
| `BigDecimal` | Arbitrary-size floating-point | `BigDecimal` (imported) | `BigDecimal.t` |
| `Timestamp` | Timestamp with timezone | `Date` | `Js.Date.t` |
| `Json` | JSON object (from `envio@2.20`) | `Json` | `Js.Json.t` |
Learn more about GraphQL scalars here.
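As a quick illustration, here is a small hypothetical entity that combines several scalars from the table (the entity and field names are made up for this example):

```graphql
type Swap {
  id: ID!                 # unique identifier, e.g. a tx-hash/log-index combination
  sender: Bytes!          # hex-prefixed address
  amountIn: BigInt!       # raw uint256 value from the event
  priceUsd: BigDecimal!   # arbitrary-precision price
  executedAt: Timestamp!  # block time as a timezone-aware timestamp
  metadata: Json          # optional free-form JSON payload
}
```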
Enum Types
Enums allow fields to accept only a predefined set of values.
Example:
enum AccountType {
ADMIN
USER
}
type User {
id: ID!
balance: Int!
accountType: AccountType!
}
Enums translate to string unions (TypeScript/JavaScript) or polymorphic variants (ReScript):
TypeScript Example:
let user = {
id: event.params.id,
balance: event.params.balance,
accountType: "USER", // enum as string
};
ReScript Example:
let user: Types.userEntity = {
id: event.params.id,
balance: event.params.balance,
accountType: #USER, // polymorphic variant
};
Field Indexing (@index)
Add an index to a field for optimized queries and loader performance:
type Token {
id: ID!
tokenId: BigInt!
collection: NftCollection!
owner: User! @index
}
- All `id` fields and fields referenced via `@derivedFrom` are indexed automatically.
Generating Types

Once you've defined your schema, run this command to generate the entity types used in your event handlers:
pnpm envio codegen
You're now ready to define powerful schemas and efficiently query your indexed data with HyperIndex!
Event Handlers
File: Guides/event-handlers.mdx
Registration

A handler is a function that receives blockchain data, processes it, and inserts it into the database. You can register handlers in the file defined by the handler field in your config.yaml file. By default this is the src/EventHandlers.* file.

Replace `ContractName` and `EventName` below with your own contract and event names:

TypeScript:
import { ContractName } from "generated";

ContractName.EventName.handler(async ({ event, context }) => {
  // Your logic here
});

JavaScript:
const { ContractName } = require("generated");

ContractName.EventName.handler(async ({ event, context }) => {
  // Your logic here
});

ReScript:
Handlers.ContractName.EventName.handler(async ({event, context}) => {
  // Your logic here
})
The generated module contains code and types based on config.yaml and schema.graphql files. Update it by running pnpm codegen command whenever you change these files.
Basic Example
Here's a handler example for the NewGreeting event. It belongs to the Greeter contract from our beginner-friendly Greeter Tutorial:
import { Greeter, type User } from "generated";

// Handler for the NewGreeting event
Greeter.NewGreeting.handler(async ({ event, context }) => {
const userId = event.params.user; // The id for the User entity
const latestGreeting = event.params.greeting; // The greeting string that was added
const currentUserEntity = await context.User.get(userId); // Optional user entity that may already exist
// Update or create a new User entity
const userEntity: User = currentUserEntity
? {
id: userId,
latestGreeting,
numberOfGreetings: currentUserEntity.numberOfGreetings + 1,
greetings: [...currentUserEntity.greetings, latestGreeting],
}
: {
id: userId,
latestGreeting,
numberOfGreetings: 1,
greetings: [latestGreeting],
};
context.User.set(userEntity); // Set the User entity in the DB
});
const { Greeter } = require("generated");
// Handler for the NewGreeting event
Greeter.NewGreeting.handler(async ({ event, context }) => {
const userId = event.params.user; // The id for the User entity
const latestGreeting = event.params.greeting; // The greeting string that was added
const currentUserEntity = await context.User.get(userId); // Optional user entity that may already exist
// Update or create a new User entity
const userEntity = currentUserEntity
? {
id: userId,
latestGreeting,
numberOfGreetings: currentUserEntity.numberOfGreetings + 1,
greetings: [...currentUserEntity.greetings, latestGreeting],
}
: {
id: userId,
latestGreeting,
numberOfGreetings: 1,
greetings: [latestGreeting],
};
context.User.set(userEntity); // Set the User entity in the DB
});
open Types
// Handler for the NewGreeting event
Handlers.Greeter.NewGreeting.handler(async ({event, context}) => {
let userId = event.params.user->Address.toString // The id for the User entity
let latestGreeting = event.params.greeting // The greeting string that was added
let maybeCurrentUserEntity = await context.user.get(userId) // Optional User entity that may already exist
// Update or create a new User entity
let userEntity: Entities.User.t = switch maybeCurrentUserEntity {
| Some(existingUserEntity) => {
id: userId,
latestGreeting,
numberOfGreetings: existingUserEntity.numberOfGreetings + 1,
greetings: existingUserEntity.greetings->Belt.Array.concat([latestGreeting]),
}
| None => {
id: userId,
latestGreeting,
numberOfGreetings: 1,
greetings: [latestGreeting],
}
}
context.user.set(userEntity) // Set the User entity in the DB
})
Preload Optimization
Important! Preload optimization makes your handlers run twice.
Starting from envio@2.27 all new indexers are created with preload optimization pre-configured by default.
This optimization enables HyperIndex to efficiently preload entities used by handlers through batched database queries, while ensuring events are processed synchronously in their original order. When combined with the Effect API for external calls, this feature delivers performance improvements of multiple orders of magnitude compared to other indexing solutions.
Read more in the dedicated guides:
- How Preload Optimization Works
- Double-Run Footgun
- Effect API
- Migrating from Loaders (recommended)
Advanced Use Cases

HyperIndex provides many features to help you build more powerful and efficient indexers. Find the one that fits your use case:
- Handle Factory Contracts with Dynamic Contract Registration (with nested factories support)
- Perform external calls to decide which contract address to register using Async Contract Register
- Index all ERC20 token transfers with Wildcard Indexing
- Use Topic Filtering to ignore irrelevant events
- With multiple filters for single event
- With different filters per network
- With filter by dynamically registered contract addresses (e.g. index all ERC20 transfers to/from your contract)
- Access Contract State directly from handlers
- Perform external calls from handlers by following the IPFS Integration guide
Context Object
The handler context provides methods to interact with entities stored in the database.
Retrieving Entities
Retrieve entities from the database using context.Entity.get where Entity is the name of the entity you want to retrieve, which is defined in your schema.graphql file.
await context.Entity.get(entityId);
It returns the entity object, or undefined if the entity doesn't exist.
Starting from envio@2.22.0 you can use context.Entity.getOrThrow to conveniently throw an error if the entity doesn't exist:
const pool = await context.Pool.getOrThrow(poolId);
// Will throw: Entity 'Pool' with ID '...' is expected to exist.
// Or you can pass a custom message as a second argument:
const pool = await context.Pool.getOrThrow(
poolId,
`Pool with ID ${poolId} is expected.`
);
Or use context.Entity.getOrCreate to automatically create an entity with default values if it doesn't exist:
const pool = await context.Pool.getOrCreate({
id: poolId,
totalValueLockedETH: 0n,
});
// Which is equivalent to:
let pool = await context.Pool.get(poolId);
if (!pool) {
pool = {
id: poolId,
totalValueLockedETH: 0n,
};
context.Pool.set(pool);
}
Retrieving Entities by Field
ERC20.Approval.handler(async ({ event, context }) => {
// Find all approvals for this specific owner
const currentOwnerApprovals = await context.Approval.getWhere.owner_id.eq(
event.params.owner
);
// Process all the owner's approvals efficiently
for (const approval of currentOwnerApprovals) {
// Process each approval
}
});
You can also use context.Entity.getWhere.field.gt to get all entities where the field value is greater than the given value.
Important:

- This feature requires Preload Optimization to be enabled:
  - Either by `preload_handlers: true` in your `config.yaml` file
  - Or by using Loaders (Deprecated)
- Works with any field that:
  - Is used in a relationship with the `@derivedFrom` directive
  - Has an `@index` directive
- Potential Memory Issues: very large `getWhere` queries might cause memory overflows.
- Tip: put the `getWhere` query at the top of the handler to make sure it's preloaded. Read more about how Preload Optimization works.
Modifying Entities
Use context.Entity.set to create or update an entity:
context.Entity.set({
id: entityId,
...otherEntityFields,
});
Both context.Entity.set and context.Entity.deleteUnsafe methods use the In-Memory Storage under the hood and don't require await in front of them.
Referencing Linked Entities
When your schema defines a field that links to another entity type, set the relationship using _id with the referenced entity's id. You are storing the ID, not the full entity object.
type A {
id: ID!
b: B!
}
type B {
id: ID!
}
context.A.set({
id: aId,
b_id: bId, // ID of the linked B entity
});
HyperIndex automatically resolves A.b based on the stored b_id when querying the API.
Deleting Entities (Unsafe)
To delete an entity:
context.Entity.deleteUnsafe(entityId);
The deleteUnsafe method is experimental and unsafe. You need to manually handle all entity references after deletion to maintain database consistency.
Updating Specific Entity Fields
Use the following approach to update specific fields in an existing entity:
const pool = await context.Pool.get(poolId);
if (pool) {
context.Pool.set({
...pool,
totalValueLockedETH: pool.totalValueLockedETH.plus(newDeposit),
});
}
let pool = await context.pool.get(poolId);
pool->Option.forEach(pool => {
context.pool.set({
...pool,
totalValueLockedETH: pool.totalValueLockedETH.plus(newDeposit),
});
});
context.log

The context object also provides a logger that you can use to log messages. Unlike console.log calls, these logs are displayed on the Envio Cloud runtime logs page.
Read more in the Logging Guide.
context.isPreload
If you need to skip the preload phase for CPU-intensive operations or to perform certain actions only once per event, you can use context.isPreload.
ERC20.Transfer.handler(async ({ event, context }) => {
// Load existing data efficiently
const [sender, receiver] = await Promise.all([
context.Account.getOrThrow(event.params.from),
context.Account.getOrThrow(event.params.to),
]);
// Skip expensive operations during preload
if (context.isPreload) {
return;
}
// CPU-intensive calculations only happen once
const complexCalculation = performExpensiveOperation(event.params.value); // Placeholder function for demonstration
// Create or update sender account
context.Account.set({
id: event.params.from,
balance: sender.balance - event.params.value,
computedValue: complexCalculation,
});
// Create or update receiver account
context.Account.set({
id: event.params.to,
balance: receiver.balance + event.params.value,
});
});
Note: While context.isPreload can be useful for bypassing double execution, it's recommended to use the Effect API for external calls instead, as it provides automatic batching and memoization benefits.
External Calls

The Envio indexer runs on the Node.js runtime, so you can use fetch or any other library such as viem to perform external calls from your handlers.
Note that with Preload Optimization all handlers run twice. Combined with the Effect API, this behavior makes your external calls run in parallel while keeping the processed data consistent.
Check out our IPFS Integration, Accessing Contract State and Effect API guides for more information.
context.effect

Define an effect and use it in your handler with context.effect:
import { S, createEffect } from "envio";

// Define an effect that will be called from the handler.
const getMetadata = createEffect(
{
name: "getMetadata",
input: S.string,
output: {
description: S.string,
value: S.bigint,
},
rateLimit: {
calls: 5,
per: "second",
},
cache: true, // Optionally persist the results in the database
},
  async ({ input }) => {
const response = await fetch(`https://api.example.com/metadata/${input}`);
const data = await response.json();
return {
description: data.description,
value: data.value,
};
}
);
ERC20.Transfer.handler(async ({ event, context }) => {
// Load metadata for the token.
// This will be executed in parallel for all events in the batch.
// The call is automatically memoized, so you don't need to worry about duplicate requests.
const sender = await context.effect(getMetadata, event.params.from);
// Process the transfer with the pre-loaded data
});
Performance Considerations
For performance optimization and best practices, refer to:
- Benchmarking
- Preload Optimization
These guides offer detailed recommendations on optimizing entity loading and indexing performance.
Block Handlers (new in v2.29)
File: Guides/block-handlers.md
Run logic on every block, or at a fixed block interval.
Understanding Multichain Indexing
File: Advanced/multichain-indexing.mdx
Multichain indexing allows you to monitor and process events from contracts deployed across multiple blockchain networks within a single indexer instance. This capability is essential for applications that:
- Track the same contract deployed across multiple networks
- Need to aggregate data from different chains into a unified view
- Monitor cross-chain interactions or state
How It Works
With multichain indexing, events from contracts deployed on multiple chains can be used to create and update entities defined in your schema file. Your blockchain indexer will process events from all configured networks, maintaining proper synchronization across chains.
Configuration Requirements

To implement multichain indexing, you need to:

- Populate the `networks` section in your `config.yaml` file for each chain
- Specify contracts to index from each network
- Create event handlers for the specified contracts
Real-World Example: Uniswap V4 Multichain Indexer
For a comprehensive, production-ready example of multichain indexing, we recommend exploring our Uniswap V4 Multichain Indexer. This official reference implementation:
- Indexes Uniswap V4 deployments across 10 different blockchain networks
- Powers the official v4.xyz interface with real-time data
- Demonstrates best practices for high-performance multichain indexing
- Provides a complete, production-grade implementation you can study and adapt
The Uniswap V4 indexer showcases how to effectively structure a multichain indexer for a complex DeFi protocol, handling high volumes of data across multiple networks while maintaining performance and reliability.
Config File Structure for Multichain Indexing
The config.yaml file for multichain indexing contains three key sections:
- Global contract definitions - Define contracts, ABIs, and events once
- Network-specific configurations - Specify chain IDs and starting blocks
- Contract instances - Reference global contracts with network-specific addresses
# Example structure (simplified)
contracts:
- name: ExampleContract
abi_file_path: ./abis/example-abi.json
events:
- event: ExampleEvent
networks:
- id: 1 # Ethereum Mainnet
start_block: 0
contracts:
- name: ExampleContract
address: "0x1234..."
- id: 137 # Polygon
start_block: 0
contracts:
- name: ExampleContract
address: "0x5678..."
Key Configuration Concepts

- The global `contracts` section defines the contract interface, ABI, handlers, and events once
- The `networks` section lists each blockchain network you want to index
- Each network entry references the global contract and provides the network-specific address
- This structure allows you to reuse the same handler functions and event definitions across networks
Best Practice: When developing multichain indexers, append the chain ID to entity IDs to avoid collisions. For example: `user-1` for Ethereum and `user-137` for Polygon.
Multichain Event Ordering
When indexing multiple chains, you have two approaches for handling event ordering:
Unordered Multichain Mode
Unordered mode is recommended for most applications.
The indexer processes events as soon as they're available from each chain, without waiting for other chains. This "Unordered Multichain Mode" provides better performance and lower latency.
- Events will still be processed in order within each individual chain
- Events across different chains may be processed out of order
- Processing happens as soon as events are emitted, reducing latency
- You avoid waiting for the slowest chain's block time
This mode is ideal for most applications, especially when:
- Operations on your entities are commutative (order doesn't matter)
- Entities from different networks never interact with each other
- Processing speed is more important than guaranteed cross-chain ordering
How to Enable Unordered Mode
In your config.yaml:
unordered_multichain_mode: true
networks: ...
Ordered Mode

Ordered mode is currently the default, but unordered mode will become the default in a future release. If you don't need strict deterministic ordering of events across all chains, use unordered mode.
If your application requires strict deterministic ordering of events across all chains, you can enable "Ordered Mode". In this mode, the indexer synchronizes event processing across all chains, ensuring that events are processed in the exact same order in every indexer run, regardless of which chain they came from.
When to Use Ordered Mode
Use ordered mode only when:
- The exact ordering of operations across different chains is critical to your application logic
- You need guaranteed deterministic results across all indexer runs
- You're willing to accept higher latency for cross-chain consistency
Cross-chain ordering is particularly important for applications like:
- Bridge applications: Where messages or assets must be processed on one chain before being processed on another chain
- Cross-chain governance: Where decisions made on one chain affect operations on another chain
- Multi-chain financial applications: Where the sequence of transactions across chains affects accounting or risk calculations
- Data consistency systems: Where the state must be consistent across multiple chains in a specific order
Technical Details
With ordered mode enabled:
- The indexer must wait for a new block from each network before advancing
- There is increased latency between when an event is emitted and when it's processed
- Processing speed is limited by the block interval of the slowest network
- Events are guaranteed to be processed in the same order in every indexer run
Cross-Chain Ordering Preservation
Ordered mode ensures that the temporal relationship between events on different chains is preserved. This is achieved by:
- Global timestamp ordering: Events are ordered based on their block timestamps across all chains
- Deterministic processing: The same sequence of events will be processed in the same order every time
The primary trade-off is increased latency at the head of the chain. Since the indexer must wait for blocks from all chains to determine the correct ordering, the processing of recent events is delayed by the slowest chain's block time. For example, if Chain A has 2-second blocks and Chain B has 15-second blocks, the indexer will process events at the slower 15-second rate to maintain proper ordering.
This latency is acceptable for applications where correct cross-chain ordering is more important than real-time updates. For bridge applications in particular, this ordering preservation can be critical for security and correctness, as it ensures that deposit events on one chain are always processed before the corresponding withdrawal events on another chain.
Best Practices for Multichain Indexing
1. Entity ID Namespacing
Always namespace your entity IDs with the chain ID to prevent collisions between networks. This ensures that entities from different networks remain distinct.
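A minimal sketch of such namespacing (the `user-<chainId>` convention follows the earlier best-practice example; `event.chainId` is available on events in handlers):

```typescript
// Build a chain-scoped entity id so the same address on two chains
// maps to two distinct rows (e.g. "user-1-0xabc" vs "user-137-0xabc").
const makeUserId = (chainId: number, address: string): string =>
  `user-${chainId}-${address.toLowerCase()}`;

// Inside a handler you would then write something like:
// context.User.set({ id: makeUserId(event.chainId, event.params.user), ... });

const id = makeUserId(137, "0xAbC123");
// id === "user-137-0xabc123"
```

Lowercasing the address also guards against the same address appearing in both checksum and lowercase forms.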
2. Error Handling
Implement robust error handling for network-specific issues. A failure on one chain shouldn't prevent indexing from continuing on other chains.
3. Testing
- Test your indexer with realistic scenarios across all networks
- Use testnet deployments for initial validation
- Verify entity updates work correctly across chains
4. Performance Considerations
- Use unordered mode when appropriate for better performance
- Consider your indexing frequency based on the block times of each chain
- Monitor resource usage, as indexing multiple chains increases load
- Adding more chains does not linearly degrade performance, since chains are indexed in parallel. However, handler logic that writes to shared entities across chains may introduce contention when using ordered mode.
5. Adding a New Chain to an Existing Indexer

To add a new chain to a running indexer:

- Add the new network entry to your `config.yaml` with the appropriate `start_block` and contract addresses
- Push the updated code to your deployment branch (for Envio Cloud) or restart locally with `pnpm envio start -r`
On Envio Cloud, this creates a new deployment that re-indexes all chains (including the new one). Your previous deployment continues serving queries with zero downtime until the new deployment is fully synced. See the deployment guide for details.
Locally, adding a new chain requires a restart and will re-index all chains from their respective start blocks.
Troubleshooting Common Issues

- Different Network Speeds: If one network is significantly slower than others, consider using unordered mode to prevent bottlenecks.
- Entity Conflicts: If you see unexpected entity updates, verify that your entity IDs are properly namespaced with chain IDs.
- Memory Usage: If your indexer uses excessive memory, consider optimizing your entity structure and implementing pagination in your queries.
Next Steps
- Explore our Uniswap V4 Multichain Indexer for a complete implementation
- Review performance optimization techniques for your indexer
Testing
File: Guides/testing.mdx
Introduction
Envio comes with a built-in testing library that enables developers to thoroughly validate their indexer behavior without requiring deployment or interaction with actual blockchains. This library is specifically crafted to:
- Mock database states: Create and manipulate in-memory representations of your database
- Simulate blockchain events: Generate test events that mimic real blockchain activity
- Assert event handler logic: Verify that your handlers correctly process events and update entities
- Test complete workflows: Validate the entire process from event creation to database updates
The testing library provides helper functions that integrate with any JavaScript-based testing framework (like Mocha, Jest, or others), giving you flexibility in how you structure and run your tests.
Learn by doing
If you prefer to explore by example, the Greeter template includes complete tests that demonstrate best practices:
- Generate the `greeter` template in TypeScript using the Envio CLI:

```bash
pnpx envio@3.0.0-rc.0 init template -l typescript -d greeter -t greeter -n greeter
```

- Run the tests:

```bash
pnpm test
```

- See the `test/test.ts` file to understand how the tests are written.
Writing tests
Test Library Design
The testing library follows key design principles that make it effective for testing HyperIndex indexers:
- Immutable database: The mock database is immutable, with each operation returning a new instance. This makes it robust and easy to test against previous states.
- Chainable operations: Operations can be chained together to build complex test scenarios.
- Realistic simulations: Mock events closely mirror real blockchain events, allowing you to test your handlers in conditions similar to production.
Typical Test Flow
Most tests will follow this general pattern:
- Initialize the mock database (empty or with predefined entities)
- Create a mock event with test parameters
- Process the mock event through your handler(s)
- Assert that the resulting database state matches your expectations
This flow allows you to verify that your event handlers correctly create, update, or modify entities in response to blockchain events.
Assertions
The testing library works with any JavaScript assertion library. In the examples, we use Node.js's built-in assert module, but you can also use popular alternatives like chai or expect.
Common assertion patterns include:
- `assert.deepEqual(expectedEntity, actualEntity)` - Check that entire entities match
- `assert.equal(expectedValue, actualEntity.property)` - Verify specific property values
- `assert.ok(updatedMockDb.entities.Entity.get(id))` - Ensure an entity exists
Troubleshooting
If you encounter issues with your tests, check the following:
Environment and Setup
- Verify your Envio version: The testing library is available in versions `v0.0.26` and above.

```bash
pnpm envio -v
```

- Ensure you've generated testing code: Always run codegen after updating your schema or config.

```bash
pnpm codegen
```

- Check your imports: Make sure you're importing the correct files.

TypeScript/JavaScript:

```javascript
const assert = require("assert");
const { UserEntity, TestHelpers } = require("generated");
const { MockDb, Greeter, Addresses } = TestHelpers;
```

ReScript:

```rescript
open RescriptMocha
open Mocha
open Belt
```
Common Issues and Solutions
- "Cannot read properties of undefined": This usually means an entity wasn't found in the database. Verify your IDs match exactly and that the entity exists before accessing it.
- "Type mismatch": Ensure that your entity structure matches what's defined in your schema. Type issues are common when working with numeric types (like `BigInt` vs `number`).
- ReScript-specific setup: If using ReScript, remember to update your `rescript.json` file:

```json
{
  "sources": [
    { "dir": "src", "subdirs": true },
    { "dir": "test", "subdirs": true }
  ],
  "bs-dependencies": ["rescript-mocha"]
}
```

- Debug database state: If you're having trouble with assertions, add a debug log to see the exact state of your entities:

```javascript
console.log(
  JSON.stringify(updatedMockDb.entities.User.get(userAddress), null, 2)
);
```
If you encounter any issues or have questions, please reach out to us on Discord.
Navigating Hasura
File: Guides/navigating-hasura.md
This page is only relevant when testing on a local machine or using a self-hosted version of Envio that uses Hasura.
Introduction
Hasura is a GraphQL engine that provides a web interface for interacting with your indexed blockchain data. When running HyperIndex locally, Hasura serves as your primary tool for:
- Querying indexed data via GraphQL
- Visualizing database tables and relationships
- Testing API endpoints before integration with your frontend
- Monitoring the indexing process
This guide explains how to navigate the Hasura dashboard to effectively work with your indexed data.
Accessing Hasura Console
When running HyperIndex locally, Hasura Console is automatically available at:
http://localhost:8080
You can access this URL in any web browser to open the Hasura console.
When prompted for authentication, use the password: testing
Key Dashboard Areas
The Hasura dashboard has several tabs, but we'll focus on the two most important ones for HyperIndex developers:
API Tab
The API tab lets you execute GraphQL queries and mutations on indexed data. It serves as a GraphQL playground for testing your API calls.
Features
- Explorer Panel: The left panel shows all available entities defined in your `schema.graphql` file
- Query Builder: The center area is where you write and execute GraphQL queries
- Results Panel: The right panel displays query results in JSON format
Available Entities
By default, you'll see:
- All entities defined in your `schema.graphql` file
- `dynamic_contracts` (for dynamically added contracts)
- The `raw_events` table (Note: this table is no longer populated by default, to improve performance. To enable storage of raw events, add `raw_events: true` to your `config.yaml` file as described in the Raw Events Storage section)
Example Query
Try a simple query to test your blockchain indexer:
```graphql
query MyQuery {
  User(limit: 5) {
    id
    latestGreeting
    numberOfGreetings
  }
}
```
Click the "Play" button to execute the query and see the results.
For more advanced GraphQL query options, see Hasura's quickstart guide.
Data Tab
The Data tab provides direct access to your database tables and relationships, allowing you to view the actual indexed data.
Features
- Schema Browser: View all tables in the database (left panel)
- Table Data: Examine and browse data within each table
- Relationship Viewer: See how different entities are connected
Working with Tables
- Select any table from the "public" schema to view its contents
- Use the "Browse Rows" tab to see all data in that table
- Check the "Insert Row" tab to manually add data (useful for testing)
- View the "Modify" tab to see the table structure
Verifying Indexed Data
To confirm your blockchain indexer is working correctly:
- Check entity tables to ensure they contain the expected data
- Look at the `db_write_timestamp` column values to confirm when data was last updated
- Newer timestamps indicate fresh data; older timestamps might indicate stale data from previous runs
Common Tasks
Checking Indexing Status
To verify your blockchain indexer is actively processing new blocks:
- Go to the Data tab
- Select any entity table
- Check the latest `db_write_timestamp` values
- Monitor these values over time to ensure they're updating
(Note the TUI is also an easy way to monitor this)
Troubleshooting Missing Data
If expected data isn't appearing:
- Check if you've enabled raw events storage (`raw_events: true` in `config.yaml`) and then examine the `raw_events` table to confirm events were captured
- Verify your event handlers are correctly processing these events
- Examine your GraphQL queries to ensure they match your schema structure
- Check console logs for any processing errors
Resetting Indexed Data
When testing, you may need to reset your database:
- Stop your indexer
- Reset your database (refer to the development guide for commands)
- Restart your indexer to begin processing from the configured start block
Best Practices
- Regular Verification: Periodically check both the API and Data tabs to ensure your blockchain indexer is functioning correctly
- Query Testing: Test complex queries in the API tab before implementing them in your application
- Schema Validation: Use the Data tab to verify that relationships between entities are correctly established
- Performance Monitoring: Watch for tables that grow unusually large, which might indicate inefficient indexing
Aggregations: local vs hosted (avoid the foot-gun)
When developing locally with Hasura, you may notice that GraphQL aggregate helpers (for example, count/sum-style aggregations) are available. On Envio Cloud, these aggregate endpoints are intentionally not exposed. Aggregations over large datasets can be very slow and unpredictable in production.
The recommended approach is to compute and store aggregates at indexing time, not at query time. In practice this means maintaining counters, sums, and other rollups in entities as part of your event handlers, and then querying those precomputed values.
Example: indexing-time aggregation
schema.graphql:

```graphql
# singleton; you hardcode the id and load it in and out
type GlobalState {
  id: ID! # "global-state"
  count: Int!
}

type Token {
  id: ID! # incremental number
  description: String!
}
```

EventHandler.ts:

```typescript
const globalStateId = "global-state";

NftContract.Mint.handler(async ({ event, context }) => {
  const globalState = await context.GlobalState.get(globalStateId);
  if (!globalState) {
    context.log.error("global state doesn't exist");
    return;
  }
  const incrementedTokenId = globalState.count + 1;
  context.Token.set({
    // entity IDs are strings, so convert the counter when using it as an ID
    id: incrementedTokenId.toString(),
    description: event.params.description,
  });
  context.GlobalState.set({
    ...globalState,
    count: incrementedTokenId,
  });
});
```
This pattern scales: you can keep per-entity counters, rolling windows (daily/hourly entities keyed by date), and top-N caches by updating entities as events arrive. Your queries then read these precomputed values directly, avoiding expensive runtime aggregations.
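For instance, a daily rollup entity can be keyed by a day bucket derived from the event's block timestamp. The helper and entity names below are illustrative, not part of the generated API:

```typescript
// Illustrative helper: derive a stable per-day entity ID from a unix
// timestamp in seconds. All events in the same UTC day share one rollup entity.
function dailyId(prefix: string, timestampSeconds: number): string {
  const day = Math.floor(timestampSeconds / 86400); // whole days since epoch
  return `${prefix}-${day}`;
}

// In a handler you might then upsert a hypothetical DailyStats entity:
// const id = dailyId("volume", event.block.timestamp);
// const prev = await context.DailyStats.get(id);
// context.DailyStats.set({ id, total: (prev?.total ?? 0n) + event.params.amount });

console.log(dailyId("volume", 86400 * 100 + 5)); // "volume-100"
```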
Exceptional cases
If runtime aggregate queries are a hard requirement for your use case, please reach out and we can evaluate options for your project on Envio Cloud. Contact us on Discord.
Disable Hasura for Self-Hosted Blockchain Indexers
Starting from `envio@2.26.0` it is possible to disable Hasura integration for self-hosted blockchain indexers. To do so, set the `ENVIO_HASURA` environment variable to `false`.
Environment Variables
File: Guides/environment-variables.md
Environment variables are a crucial part of configuring your Envio blockchain indexer. They allow you to manage sensitive information and configuration settings without hardcoding them in your codebase.
Naming Convention
All environment variables used by Envio must be prefixed with ENVIO_. This naming convention:
- Prevents conflicts with other environment variables
- Makes it clear which variables are used by the Envio indexer
- Ensures consistency across different environments
Envio API Token (required for HyperSync)
To ensure continued access to HyperSync, set an Envio API token in your environment.
- Use `ENVIO_API_TOKEN` to provide your token at runtime
- See the API Tokens guide for how to generate a token: API Tokens
Envio-specific environment variables
The following variables are used by HyperIndex:
- `ENVIO_API_TOKEN`: API token for HyperSync access (required for continued access in self-hosted deployments)
- `ENVIO_HASURA`: Set to `false` to disable Hasura integration for self-hosted blockchain indexers
- `MAX_BATCH_SIZE`: Size of the in-memory batch before writing to the database. Default: `5000`. Set to `1` to help isolate which event or data save is causing Postgres write errors.
- `ENVIO_PG_PORT`: Port for the Postgres service used by HyperIndex during local development
- `ENVIO_PG_PASSWORD`: Postgres password (self-hosted)
- `ENVIO_PG_USER`: Postgres username (self-hosted)
- `ENVIO_PG_DATABASE`: Postgres database name (self-hosted)
- `ENVIO_PG_PUBLIC_SCHEMA`: Postgres schema name override for the generated/public schema
Example Environment Variables
Here are some commonly used environment variables:

```bash
# Envio API Token (required for continued HyperSync access)
ENVIO_API_TOKEN=your-secret-token

# Blockchain RPC URL
ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key

# Starting block number for indexing
ENVIO_START_BLOCK=12345678

# Coingecko API key
ENVIO_COINGECKO_API_KEY=api-key

# In-memory batch size (default 5000)
MAX_BATCH_SIZE=1
```
Setting Environment Variables
Local Development
For local development, you can set environment variables in several ways:
- Using a `.env` file in your project root:

```bash
# .env
ENVIO_API_TOKEN=your-secret-token
ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key
ENVIO_START_BLOCK=12345678
```

- Directly in your terminal:

```bash
export ENVIO_API_TOKEN=your-secret-token
export ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key
```
Envio Cloud
When using Envio Cloud, you can configure environment variables through the Envio platform's dashboard. Remember that all variables must still be prefixed with ENVIO_.
For more information about environment variables in Envio Cloud, see the Envio Cloud documentation.
Configuration File
For use of environment variables in your configuration file, read the docs here: Configuration File.
Best Practices
- Never commit sensitive values: Always use environment variables for sensitive information like API keys and database credentials
- Never use private keys: Do not commit private keys or reference them anywhere in your codebase
- Use descriptive names: Make your environment variable names clear and descriptive
- Document your variables: Keep a list of required environment variables in your project's README
- Use different values: Use different environment variables for development, staging, and production environments
- Validate required variables: Check that all required environment variables are set before starting your blockchain indexer
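A small startup guard along these lines fails fast when a required variable is missing (the function and variable names are examples, not part of HyperIndex):

```typescript
// Example startup guard: verify required ENVIO_-prefixed variables are set
// before the indexer boots. Variable names below are illustrative.
function assertEnvVars(
  required: string[],
  env: Record<string, string | undefined>
): void {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}

// Usage at startup:
// assertEnvVars(["ENVIO_API_TOKEN", "ENVIO_RPC_URL"], process.env);
```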
Troubleshooting
If you encounter issues with environment variables:
- Verify that all required variables are set
- Check that variables are prefixed with `ENVIO_`
- Ensure there are no typos in variable names
- Confirm that the values are correctly formatted
For more help, see our Troubleshooting Guide.
MCP Server
File: Guides/mcp-server.md
Envio provides a Model Context Protocol (MCP) server that lets AI coding assistants search and retrieve documentation directly. This means tools like Claude Code, Cursor, and other MCP-compatible clients can access Envio docs without you needing to copy-paste context manually.
Endpoint
https://docs.envio.dev/mcp
Available Tools
The MCP server exposes two tools:

| Tool | Description |
|---|---|
| `docs_search` | Full-text search across all documentation. Returns matching pages with titles, URLs, and content snippets. |
| `docs_fetch` | Retrieves the full content of a documentation page as markdown. |

Setup
Claude Code

```bash
claude mcp add --transport http envio-docs https://docs.envio.dev/mcp
```

Cursor / VS Code
Add the following to your MCP configuration (`.cursor/mcp.json` or VS Code MCP settings):

```json
{
  "mcpServers": {
    "envio-docs": {
      "url": "https://docs.envio.dev/mcp"
    }
  }
}
```

Other MCP Clients
Point any MCP-compatible client to the endpoint URL above using the Streamable HTTP transport.
Uniswap V4 Multichain Indexer
File: Examples/example-uniswap-v4.md
The following blockchain indexer example is a reference implementation and can serve as a starting point for applications with similar logic.
This official Uniswap V4 indexer is a comprehensive implementation for the Uniswap V4 protocol using Envio HyperIndex. This is the same indexer that powers the v4.xyz website, providing real-time data for the Uniswap V4 interface.
Key Features
- Multichain Support: Indexes Uniswap V4 deployments across 10 different blockchain networks in real-time
- Complete Pool Metrics: Tracks pool statistics including volume, TVL, fees, and other critical metrics
- Swap Analysis: Monitors swap events and liquidity changes with high precision
- Hook Integration: In-progress support for Uniswap V4 hooks and their events
- Production Ready: Powers the official v4.xyz interface with production-grade reliability
- Ultra-Fast Syncing: Processes massive amounts of blockchain data significantly faster than alternative blockchain indexing solutions, reducing sync times from days to minutes
Technical Overview
This indexer is built using TypeScript and provides a unified GraphQL API for accessing Uniswap V4 data across all supported networks. The architecture is designed to handle high throughput and maintain consistency across different blockchain networks.
Performance Advantages
The Envio-powered Uniswap V4 indexer offers extraordinary performance benefits:
- 10-100x Faster Sync Times: Leveraging Envio's HyperSync technology, this indexer can process historical blockchain data orders of magnitude faster than traditional solutions
- Real-time Updates: Maintains low latency for new blocks while efficiently managing historical data
Use Cases
- Power analytics dashboards and trading interfaces
- Monitor DeFi positions and protocol health
- Track historical performance of Uniswap V4 pools
- Build custom notifications and alerts
- Analyze hook interactions and their impact
Getting Started
To use this indexer, you can:
- Clone the repository
- Follow the installation instructions in the README
- Run the indexer locally or deploy it to a production environment
- Access indexed data through the GraphQL API
Contribution
The Uniswap V4 indexer is actively maintained and welcomes contributions from the community. If you'd like to contribute or report issues, please visit the GitHub repository.
This is an official reference implementation that powers the v4.xyz website. While extensively tested in production, remember to validate the data for your specific use case. The indexer is continuously updated to support the latest Uniswap V4 features and optimizations.
Sablier Protocol Indexers
File: Examples/example-sablier.md
The following blockchain indexers serve as exceptional reference implementations for the Sablier protocol, showcasing professional development practices and efficient multichain data processing.
Overview
Sablier is a token streaming protocol that enables real-time finance on the blockchain, allowing tokens to be streamed continuously over time. These official Sablier indexers track streaming activity across 18 different EVM-compatible chains, providing comprehensive data through a unified GraphQL API.