diff --git a/docs/cloud/reference/manifest.mdx b/docs/cloud/reference/manifest.mdx index 92793e05..5b00e776 100644 --- a/docs/cloud/reference/manifest.mdx +++ b/docs/cloud/reference/manifest.mdx @@ -50,7 +50,7 @@ For a successful build the following files and folders **must** be present in th The `db` and `assets` folders are added to the build context if present in the squid folder. See [Project structure](/sdk/how-to-start/layout) for more info. Under the hood, Cloud builds a Docker image and runs a Docker container for each service (`api`, `processor`, `migrate`) using the same image. -See [Self-hosting](/sdk/resources/basics/self-hosting) for instructions on how to build and run the Docker image locally. +See [Self-hosting](/sdk/resources/self-hosting) for instructions on how to build and run the Docker image locally. Even though the squid services (`api`, `processor`, `migrate`) use the same single container image, the exec command is different and is defined by the `deploy:` section as explained below. ### `cmd:` diff --git a/docs/cloud/resources/best-practices.md b/docs/cloud/resources/best-practices.md index 65b769d7..b903f4f2 100644 --- a/docs/cloud/resources/best-practices.md +++ b/docs/cloud/resources/best-practices.md @@ -8,7 +8,7 @@ description: Checklist for going to production Here is a list of items to check out before you deploy your squid for use in production: -* Make sure that you use [batch processing](/sdk/resources/basics/batch-processing) throughout your code. Consider using [`@belopash/typeorm-store`](/external-tools/#belopashtypeorm-store) for large projects with extensive [entity relations](/sdk/reference/schema-file/entity-relations) and frequent [database reads](/sdk/reference/store/typeorm/#typeorm-methods). +* Make sure that you use [batch processing](/sdk/resources/batch-processing) throughout your code. 
Consider using [`@belopash/typeorm-store`](/external-tools/#belopashtypeorm-store) for large projects with extensive [entity relations](/sdk/reference/schema-file/entity-relations) and frequent [database reads](/sdk/reference/store/typeorm/#typeorm-methods). * Filter your data in the batch handler. E.g. if you [request event logs](/sdk/reference/processors/evm-batch/logs) from a particular contract, do check that the `address` field of the returned data items matches the contract address before processing the data. This will make sure that any future changes in your processor configuration will not cause the newly added data to be routed to your old processing code by mistake. @@ -18,9 +18,12 @@ Batch handler data filtering used to be compulsory before the release of `@subsq * If your squid [saves its data to a database](/sdk/resources/persisting-data/typeorm), make sure your [schema](/sdk/reference/schema-file) has [`@index` decorators](/sdk/reference/schema-file/indexes-and-constraints) for all entities that will be looked up frequently. -* If your squid serves a [GraphQL API](/sdk/resources/graphql-server), consider: - 1. configuring the built-in [DoS protection](/sdk/resources/graphql-server/dos-protection) against heavy queries; - 2. configuring [caching](/sdk/resources/graphql-server/caching). +* If your squid serves a [GraphQL API](/sdk/resources/serving-graphql): + 1. Do not use [OpenReader](/sdk/resources/serving-graphql/#openreader) if your application uses subscriptions. Instead, use [PostGraphile](/sdk/resources/serving-graphql/#postgraphile) or [Hasura](/sdk/resources/serving-graphql/#hasura). + 2. If you do use OpenReader: + - configure the built-in [DoS protection](/sdk/reference/openreader-server/configuration/dos-protection) against heavy queries; + - configure [caching](/sdk/reference/openreader-server/configuration/caching). + 3. If you use PostGraphile or Hasura, follow their docs to harden your service in a similar way. 
* If you deploy your squid to Subsquid Cloud: 1. Deploy your squid to a [Professional organization](/cloud/resources/organizations/#professional-organizations). diff --git a/docs/cloud/resources/monitoring.md b/docs/cloud/resources/monitoring.md index fc86326f..e84c6660 100644 --- a/docs/cloud/resources/monitoring.md +++ b/docs/cloud/resources/monitoring.md @@ -16,7 +16,7 @@ The processor metrics are available at `https://${org}.subsquid.io/${squid_name} The metrics are documented inline. They include some values reflecting the squid health: - `sqd_processor_last_block`. The last processed block. -- `sqd_processor_chain_height`. Current chain height as reported by the RPC endpoint (when [RPC ingestion](/sdk/resources/basics/unfinalized-blocks) is enabled) or by [Subsquid Network](/subsquid-network) (when it is disabled). +- `sqd_processor_chain_height`. Current chain height as reported by the RPC endpoint (when [RPC ingestion](/sdk/resources/unfinalized-blocks) is enabled) or by [Subsquid Network](/subsquid-network) (when it is disabled). Inspect the metrics endpoint for a full list. 
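The two health metrics above can be combined into a simple sync-lag check. Below is an illustrative sketch only: the metric names are the ones documented in `monitoring.md`, but the parsing helper is not part of the SDK, and real metric output may carry labels and additional series.

```typescript
// Extract a numeric value from Prometheus text-format metrics output.
// Hypothetical helper for illustration; assumes the un-labeled form
// `metric_name <value>` shown in the docs.
function parseMetric(metricsText: string, name: string): number | undefined {
  for (const line of metricsText.split('\n')) {
    if (line.startsWith(name + ' ') || line.startsWith(name + '{')) {
      const value = Number(line.trim().split(/\s+/).pop())
      if (!Number.isNaN(value)) return value
    }
  }
  return undefined
}

// Sync lag = chain height minus last processed block
function syncLag(metricsText: string): number | undefined {
  const last = parseMetric(metricsText, 'sqd_processor_last_block')
  const height = parseMetric(metricsText, 'sqd_processor_chain_height')
  if (last === undefined || height === undefined) return undefined
  return height - last
}

// Abbreviated example payload in the Prometheus exposition format
const sample = [
  '# HELP sqd_processor_last_block Last processed block',
  'sqd_processor_last_block 19999900',
  '# HELP sqd_processor_chain_height Current chain height',
  'sqd_processor_chain_height 20000000',
].join('\n')

console.log(syncLag(sample)) // → 100
```

A lag that keeps growing over time is usually the first sign that the processor has stalled or cannot keep up with the chain.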
diff --git a/docs/cloud/troubleshooting.md b/docs/cloud/troubleshooting.md index ee9b5e7c..7a2e3a22 100644 --- a/docs/cloud/troubleshooting.md +++ b/docs/cloud/troubleshooting.md @@ -22,7 +22,7 @@ npm update -g @subsquid/cli npm run update ``` - Check that the squid adheres to the expected [structure](/sdk/how-to-start/layout) -- Make sure you can [build and run Docker images locally](/sdk/resources/basics/self-hosting) +- Make sure you can [build and run Docker images locally](/sdk/resources/self-hosting) ### `Validation error` when releasing a squid diff --git a/docs/external-tools.md b/docs/external-tools.md index fc89a0cb..b5a414dc 100644 --- a/docs/external-tools.md +++ b/docs/external-tools.md @@ -9,7 +9,7 @@ sidebar_position: 90 ## `@belopash/typeorm-store` -[`@belopash/typeorm-store`](https://github.com/belopash/squid-typeorm-store) is a [fork](/sdk/resources/persisting-data/overview/#custom-database) of [`@subsquid/typeorm-store`](/sdk/reference/store/typeorm) that automates collecting read and write database requests into [batches](/sdk/resources/basics/batch-processing) and caches the available entity records in RAM. Unlike the [standard `typeorm-store`](/sdk/resources/persisting-data/typeorm), @belopash's store is intended to be used with declarative code: it makes it easy to write mapping functions (e.g. event handlers) that explicitly define +[`@belopash/typeorm-store`](https://github.com/belopash/squid-typeorm-store) is a [fork](/sdk/resources/persisting-data/overview/#custom-database) of [`@subsquid/typeorm-store`](/sdk/reference/store/typeorm) that automates collecting read and write database requests into [batches](/sdk/resources/batch-processing) and caches the available entity records in RAM. Unlike the [standard `typeorm-store`](/sdk/resources/persisting-data/typeorm), @belopash's store is intended to be used with declarative code: it makes it easy to write mapping functions (e.g. 
event handlers) that explicitly define - what data you're going to need from the database - what code has to be executed once the data is available diff --git a/docs/glossary.md b/docs/glossary.md index 675544c4..8c170dec 100644 --- a/docs/glossary.md +++ b/docs/glossary.md @@ -70,11 +70,9 @@ An SDK (software development kit) and a smart-contract language for developing W ### OpenReader -An open-source GraphQL server that automatically generates an expressive API from an input schema file. -* [GitHub repo](https://github.com/subsquid/squid-sdk/tree/master/graphql/openreader) -* [Server documentation](/sdk/resources/graphql-server) -* [Schema dialect reference](/sdk/reference/schema-file) -* [GraphQL API reference](/sdk/reference/openreader) +1. SQD's own open source [GraphQL server](/sdk/reference/openreader-server/overview), built in-house. No longer recommended for new projects running PostgreSQL due to its [longstanding issues](/sdk/reference/openreader-server/overview/#known-issues). See [Serving GraphQL](/sdk/resources/serving-graphql) to learn more. + +2. The GraphQL [schema generation library](https://github.com/subsquid/squid-sdk/tree/master/graphql/openreader) at the heart of the server above. Implements [OpenCRUD](https://www.opencrud.org/). 
### Pallet diff --git a/docs/overview.mdx b/docs/overview.mdx index a275b9f7..a24d862b 100644 --- a/docs/overview.mdx +++ b/docs/overview.mdx @@ -51,8 +51,8 @@ For real-time use cases such as app-specific APIs use [Squid SDK](/sdk): it'll u - [High-level libraries](/sdk/reference/processors) for extracting and filtering the Subsquid Network data in what can be thought of as Extract-Transform-Load (ETL) pipelines - [Ergonomic tools](/sdk/resources/tools/typegen) for decoding and normalizing raw data and efficiently accessing [network state](/sdk/resources/tools/typegen/state-queries) - Pluggable [data sinks](/sdk/reference/store) to save data into Postgres, files (local or s3) or BigQuery -- Expressive [GraphQL server](/sdk/resources/graphql-server) with a schema-based [config](/sdk/reference/schema-file) -- Seamless handling of [unfinalized blocks and chain reorganizations](/sdk/resources/basics/unfinalized-blocks) for real-time data ingestion +- An expressive [GraphQL server](/sdk/resources/serving-graphql#openreader) with a schema-based [config](/sdk/reference/schema-file) +- Seamless handling of [unfinalized blocks and chain reorganizations](/sdk/resources/unfinalized-blocks) for real-time data ingestion - Rapid data extraction and decoding [for local analytics](/sdk/tutorials/file-csv) The SDK is a go-to choice for production solutions and prototypes of @@ -81,7 +81,7 @@ A Platform-as-a-Service for deploying Squid SDK indexers, featuring - Learn about [squid components](/sdk/overview), [combining them](/sdk/how-to-start/squid-from-scratch) or follow the [end-to-end development guide](/sdk/how-to-start/squid-development) - Explore [tutorials](/sdk/tutorials) or [examples](/sdk/examples) - Learn how to [migrate from The Graph](/sdk/resources/migrate/migrate-subgraph) -Explore the [GraphQL server options](/sdk/resources/graphql-server) including custom extensions, caching and DoS protection in production +- Explore the [GraphQL server 
options](/sdk/resources/serving-graphql) ```mdx-code-block diff --git a/docs/sdk/faq.md b/docs/sdk/faq.md index bddf2137..342cf347 100644 --- a/docs/sdk/faq.md +++ b/docs/sdk/faq.md @@ -16,7 +16,7 @@ Here is an incomplete list: ### How does Squid SDK handle unfinalized blocks? -The Subsquid Network only serves finalized blocks and is typically ~1000 blocks behind the tip. The most recent blocks, as well as the unfinalized blocks are seamlessly handled by the SDK from a complementary RPC data source, set by the `chain` config. Potential chain reorgs are automatically handled under the hood. See [Indexing unfinalized blocks](/sdk/resources/basics/unfinalized-blocks) for details. +The Subsquid Network only serves finalized blocks and is typically ~1000 blocks behind the tip. The most recent blocks, as well as the unfinalized blocks are seamlessly handled by the SDK from a complementary RPC data source, set by the `chain` config. Potential chain reorgs are automatically handled under the hood. See [Indexing unfinalized blocks](/sdk/resources/unfinalized-blocks) for details. ### What is the latency for the data served by the squid? @@ -24,11 +24,11 @@ Since the ArrowSquid release, the Squid SDK has the option to ingest unfinalized ### How do I enable GraphQL subscriptions for local runs? -Add `--subscription` flag to the `serve` command defined in `commands.json`. See [Subscriptions](/sdk/resources/graphql-server/subscriptions) for details. +Add `--subscription` flag to the `serve` command defined in `commands.json`. See [Subscriptions](/sdk/reference/openreader-server/configuration/subscriptions) for details. ### How do squids keep track of their sync progress? -Depends on the data sink used. Squid processors that use [`TypeormDatabase`](/sdk/resources/persisting-data/typeorm) keep their state in a [schema](https://www.postgresql.org/docs/current/sql-createschema.html), not in a table. 
By default the schema is called `squid_processor` (name must be overridden in [multiprocessor squids](/sdk/resources/basics/multichain)). You can view it with +Depends on the data sink used. Squid processors that use [`TypeormDatabase`](/sdk/resources/persisting-data/typeorm) keep their state in a [schema](https://www.postgresql.org/docs/current/sql-createschema.html), not in a table. By default the schema is called `squid_processor` (name must be overridden in [multiprocessor squids](/sdk/resources/multichain)). You can view it with ```sql select * from squid_processor.status; ``` diff --git a/docs/sdk/how-to-start/layout.md b/docs/sdk/how-to-start/layout.md index a8608965..48ba3327 100644 --- a/docs/sdk/how-to-start/layout.md +++ b/docs/sdk/how-to-start/layout.md @@ -13,13 +13,13 @@ All files and folders except `package.json` are optional. - `tsconfig.json` -- Configuration of `tsc`. Required for most squids. - [Deployment manifest](/cloud/reference/manifest) (`squid.yaml` by default) -- Definitions of squid services used for running it locally with [`sqd run`](/squid-cli/run) and deploying to [Subsquid Cloud](/cloud). - `.squidignore` -- Files and patterns to be excluded when sending the squid code to the [Cloud](/cloud). When not supplied, some files will still be omitted: see the [reference page](/cloud/reference/squidignore) for details. -- `schema.graphql` -- [The schema definition file](/sdk/reference/schema-file). Required if your squid uses the [built-in GraphQL server](/sdk/resources/graphql-server). +- `schema.graphql` -- [The schema definition file](/sdk/reference/schema-file). Required if your squid [stores its data in PostgreSQL](/sdk/resources/persisting-data/typeorm). - `/src` -- The TypeScript source code folder for the squid processor. + `/src/main.ts` -- The entry point of the squid processor process. Typically, contains a `processor.run()` call. 
+ `/src/processor.ts` -- Processor object ([EVM](/sdk/reference/processors/evm-batch) or [Substrate](/sdk/reference/processors/substrate-batch)) definition and configuration. + `/src/model/generated` -- The folder for the TypeORM entities generated from `schema.graphql`. + `/src/model` -- The module exporting the entity classes. - + `/src/server-extension/resolvers` -- A folder for [user-defined GraphQL resolvers](/sdk/resources/graphql-server/custom-resolvers). + + `/src/server-extension/resolvers` -- A folder for [user-defined GraphQL resolvers](/sdk/reference/openreader-server/configuration/custom-resolvers) used by [OpenReader](/sdk/reference/openreader-server). + `/src/types` -- A folder for types generated by the Substrate [typegen](/sdk/resources/tools/typegen/) tool for use in data decoding. + `/src/abi` -- A folder for modules generated by the EVM [typegen](/sdk/resources/tools/typegen/) tool containing type definitions and data decoding boilerplate code. - `/db` -- The designated folder with the [database migrations](/sdk/resources/persisting-data/typeorm). diff --git a/docs/sdk/how-to-start/squid-development.mdx b/docs/sdk/how-to-start/squid-development.mdx index 5f1189d6..658651f5 100644 --- a/docs/sdk/how-to-start/squid-development.mdx +++ b/docs/sdk/how-to-start/squid-development.mdx @@ -27,7 +27,7 @@ See also the [Environment set up](/sdk/how-to-start/development-environment-set- Consider your business requirements and find out 1. How the data should be delivered. Options: - - [PostgreSQL](/sdk/resources/persisting-data/typeorm) with an optional [GraphQL API](/sdk/resources/graphql-server) - can be real-time + - [PostgreSQL](/sdk/resources/persisting-data/typeorm) with an optional [GraphQL API](/sdk/resources/serving-graphql) - can be real-time - [file-based dataset](/sdk/resources/persisting-data/file) - local or on S3 - [Google BigQuery](/sdk/resources/persisting-data/bigquery/) 2. 
What data should be delivered @@ -35,9 +35,9 @@ Consider your business requirements and find out - Ethereum Virtual Machine (EVM) chains like [Ethereum](https://ethereum.org) - [supported networks](/subsquid-network/reference/evm-networks) - [Substrate](https://substrate.io)-powered chains like [Polkadot](https://polkadot.network) and [Kusama](https://kusama.network) - [supported networks](/subsquid-network/reference/substrate-networks) - Note that you can use Subsquid via [RPC ingestion](/sdk/resources/basics/unfinalized-blocks) even if your network is not listed. + Note that you can use Subsquid via [RPC ingestion](/sdk/resources/unfinalized-blocks) even if your network is not listed. 4. What exact data should be retrieved from blockchain(s) -5. Whether you need to mix in any [off-chain data](/sdk/resources/basics/external-api) +5. Whether you need to mix in any [off-chain data](/sdk/resources/external-api) #### Example requirements @@ -57,10 +57,10 @@ Suppose you want to train a prototype ML model on all trades done on Uniswap Pol
NFT ownership on Ethereum -Suppose you want to make a website that shows the image and ownership history for ERC721 NFTs from a certain Polygon contract. +Suppose you want to make a website that shows the image and ownership history for ERC721 NFTs from a certain Ethereum contract. 1. For this application it makes sense to deliver a GraphQL API. -2. Output data might have `Token`, `Owner` and `Transfer` [entities](/sdk/reference/openreader/queries), with e.g. `Token` supplying all the fields necessary to show ownership history and the image. +2. Output data might have `Token`, `Owner` and `Transfer` database tables / [entities](/sdk/reference/schema-file/entities), with e.g. `Token` supplying all the fields necessary to show ownership history and the image. 3. Ethereum is an EVM chain. 4. Data on token mints and ownership history can be derived from `Transfer(address,address,uint256)` EVM event logs emitted by the contract. To render images, you will also need token metadata URLs that are only available by [querying the contract state](/sdk/resources/tools/typegen/state-queries) with the `tokenURI(uint256)` function. 5. You'll need to retrieve the off-chain token metadata (usually from IPFS). @@ -103,7 +103,7 @@ Although it is possible to [compose a squid from individual packages](/sdk/how-t ```bash sqd init my-squid-name -t gravatar ``` -- A template showing how to [combine data from multiple chains](/sdk/resources/basics/multichain). Indexes USDC transfers on Ethereum and Binance. +- A template showing how to [combine data from multiple chains](/sdk/resources/multichain). Indexes USDC transfers on Ethereum and Binance. ```bash sqd init my-squid-name -t multichain ``` @@ -323,7 +323,7 @@ Edit the definition of `const processor` to 1. Use a data source appropriate for your chain and task. 
- It is possible to [use RPC](/sdk/reference/processors/evm-batch/general/#set-rpc-endpoint) as the only data source, but [adding](/sdk/reference/processors/evm-batch/general/#set-gateway) a [Subsquid Network](/subsquid-network/reference/evm-networks) data source will make your squid sync much faster. - RPC is a hard requirement if you're building a real-time API. - - If you're using RPC as one of your data sources, make sure to [set the number of finality confirmations](/sdk/reference/processors/evm-batch/general/#set-finality-confirmation) so that [hot blocks ingestion](/sdk/resources/basics/unfinalized-blocks) works properly. + - If you're using RPC as one of your data sources, make sure to [set the number of finality confirmations](/sdk/reference/processors/evm-batch/general/#set-finality-confirmation) so that [hot blocks ingestion](/sdk/resources/unfinalized-blocks) works properly. 2. Request all [event logs](/sdk/reference/processors/evm-batch/logs/), [transactions](/sdk/reference/processors/evm-batch/transactions/), [execution traces](/sdk/reference/processors/evm-batch/traces) and [state diffs](/sdk/reference/processors/evm-batch/state-diffs/) that your task requires, with any necessary related data (e.g. parent transactions for event logs). 3. [Select all data fields](/sdk/reference/processors/evm-batch/field-selection) necessary for your task (e.g. `gasUsed` for transactions). @@ -451,7 +451,7 @@ You can also decode the data of certain pallet-specific events and transactions ### (Optional) IV. Mix in external data and chain state calls output {#external-data} -If you need external (i.e. non-blockchain) data in your transformation, take a look at the [External APIs and IPFS](/sdk/resources/basics/external-api) page. +If you need external (i.e. non-blockchain) data in your transformation, take a look at the [External APIs and IPFS](/sdk/resources/external-api) page. 
If any of the on-chain data you need is unavailable from the processor or inconvenient to retrieve with it, you have an option to get it via [direct chain queries](/sdk/resources/tools/typegen/state-queries). @@ -464,7 +464,7 @@ At `src/main.ts`, change the [`Database`](/sdk/resources/persisting-data/overvie ``` -1. Define the schema of the database (and the [core schema of the GraphQL API](/sdk/reference/openreader) if it is used) at [`schema.graphql`](/sdk/reference/schema-file). +1. Define the schema of the database (and the [core schema of the OpenReader GraphQL API](/sdk/reference/openreader-server/api) if it is used) at [`schema.graphql`](/sdk/reference/schema-file). 2. Regenerate the TypeORM model classes with ```bash @@ -587,7 +587,7 @@ It will often make sense to keep the entity instances in maps rather than arrays If you perform any [database lookups](/sdk/reference/store/typeorm/#typeorm-methods), try to do so in batches and make sure that the entity fields that you're searching over are [indexed](/sdk/reference/schema-file/indexes-and-constraints). -See also the [patterns](/sdk/resources/basics/batch-processing/#patterns) and [anti-pattens](/sdk/resources/basics/batch-processing/#anti-patterns) sections of the Batch processing guide. +See also the [patterns](/sdk/resources/batch-processing/#patterns) and [anti-patterns](/sdk/resources/batch-processing/#anti-patterns) sections of the Batch processing guide. ```mdx-code-block @@ -626,9 +626,13 @@ The alternative is to do the same steps in a different order: 5. [Retrieve any external data](#external-data) if necessary 6. [Add the persistence code for the transformed data](#batch-handler-persistence) +## GraphQL options + +[Store your data in PostgreSQL](/sdk/resources/persisting-data/typeorm), then consult [Serving GraphQL](/sdk/resources/serving-graphql) for options. 
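The batched-lookup advice above can be sketched as follows. This is an illustrative pattern only: the `Store` interface and `findByIds` helper are hypothetical stand-ins, not SDK APIs; with the real TypeORM store you would issue a single `findBy` call with an `In(ids)` filter instead.

```typescript
interface Token { id: string; owner: string }

// Hypothetical stand-in for the real store, used for illustration only
interface Store {
  findByIds(ids: string[]): Promise<Map<string, Token>>
}

async function processTransfers(
  store: Store,
  transfers: {tokenId: string; to: string}[]
): Promise<Token[]> {
  // Collect all ids first, then do ONE batched read
  // instead of transfers.length individual reads
  const ids = [...new Set(transfers.map(t => t.tokenId))]
  const tokens = await store.findByIds(ids)

  for (const t of transfers) {
    let token = tokens.get(t.tokenId)
    if (token == null) {
      // Token not yet in the database: create it in memory
      token = {id: t.tokenId, owner: t.to}
      tokens.set(t.tokenId, token)
    }
    token.owner = t.to // the last transfer in the batch wins
  }
  // The caller persists all tokens with a single batched write
  return [...tokens.values()]
}
```

With `@subsquid/typeorm-store` the read would roughly correspond to one `ctx.store.findBy(Token, {id: In(ids)})` call, followed by one batched `upsert` of the returned entities at the end of the batch handler.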
+ ## Scaling up -If you're developing a large squid, make sure to use [batch processing](/sdk/resources/basics/batch-processing) throughout your code. +If you're developing a large squid, make sure to use [batch processing](/sdk/resources/batch-processing) throughout your code. A common mistake is to make handlers for individual event logs or transactions; for updates that require data retrieval, this results in lots of small database lookups and ultimately in poor syncing performance. Collect all the relevant data and process it at once. A simple architecture of that type is discussed in the [BAYC tutorial](/sdk/tutorials/bayc). @@ -640,5 +644,8 @@ For complete examples of complex squids take a look at the [Giant Squid Explorer ## Next steps -* Deploy your squid [on own infrastructure](/sdk/resources/basics/self-hosting) or to [Subsquid Cloud](/cloud) -* If your squid serves a GraphQL API, consult the [Core GraphQL API reference](/sdk/reference/openreader) while writing your frontend +* Learn about [batch processing](/sdk/resources/batch-processing). +* Learn how squids deal with [unfinalized blocks](/sdk/resources/unfinalized-blocks). +* [Use external APIs and IPFS](/sdk/resources/external-api) in your squid. +* See how squids should be set up for the [multichain setting](/sdk/resources/multichain). +* Deploy your squid [on your own infrastructure](/sdk/resources/self-hosting) or to [Subsquid Cloud](/cloud). diff --git a/docs/sdk/how-to-start/squid-from-scratch.mdx b/docs/sdk/how-to-start/squid-from-scratch.mdx index 7986a0cb..eee4bddb 100644 --- a/docs/sdk/how-to-start/squid-from-scratch.mdx +++ b/docs/sdk/how-to-start/squid-from-scratch.mdx @@ -13,18 +13,18 @@ This page goes through all the technical details to make the squid architecture ## USDT transfers API -**Pre-requisites**: NodeJS 16.x or newer, Docker. +**Pre-requisites**: NodeJS 20.x or newer, Docker. 
Suppose the task is to track transfers of USDT on Ethereum, then save the resulting data to PostgreSQL and serve it as a GraphQL API. From this description we can immediately put together a list of [packages](/sdk/overview): * `@subsquid/evm-processor` - for retrieving Ethereum data * the triad of `@subsquid/typeorm-store`, `@subsquid/typeorm-codegen` and `@subsquid/typeorm-migration` - for saving data to PostgreSQL -* `@subsquid/graphql-server` We also assume the following choice of _optional_ packages: * `@subsquid/evm-typegen` - for decoding Ethereum data and useful constants such as event topic0 values * `@subsquid/evm-abi` - as a peer dependency for the code generated by `@subsquid/evm-typegen` +* `@subsquid/graphql-server` / [OpenReader](/sdk/reference/openreader-server) To make the indexer, follow these steps: diff --git a/docs/sdk/overview.mdx b/docs/sdk/overview.mdx index 9ace965a..eafc6543 100644 --- a/docs/sdk/overview.mdx +++ b/docs/sdk/overview.mdx @@ -10,7 +10,7 @@ import TabItem from '@theme/TabItem'; # Overview A _squid_ is an indexing project built with [Squid SDK](https://github.com/subsquid/squid-sdk) to retrieve and process blockchain data from the [Subsquid Network](/subsquid-network/overview) -(either permissioned or decentralized instance). The Squid SDK is a set of open source Typescript libraries that retrieve, decode, transform and persist the data. It can also make the transformed data available via an API. All stages of the indexing pipeline, from the data extraction to transformation to persistence are performed on [batches of blocks](/sdk/resources/basics/batch-processing) to maximize the indexing speed. Modular architecture of the SDK makes it possible to extend indexing projects (squids) with custom plugins and data targets. +(either permissioned or decentralized instance). The Squid SDK is a set of open source Typescript libraries that retrieve, decode, transform and persist the data. 
It can also make the transformed data available via an API. All stages of the indexing pipeline, from the data extraction to transformation to persistence, are performed on [batches of blocks](/sdk/resources/batch-processing) to maximize the indexing speed. The modular architecture of the SDK makes it possible to extend indexing projects (squids) with custom plugins and data targets. ## Required squid components @@ -75,9 +75,9 @@ Install these with `--save-dev`. ### GraphQL server -Squids that store their data in PostgreSQL can subsequently make it available as a GraphQL API. To use this functionality, install [`@subsquid/graphql-server`](https://www.npmjs.com/package/@subsquid/graphql-server). +Squids that store their data in PostgreSQL can subsequently make it available as a GraphQL API via a variety of supported servers. See [Serving GraphQL](/sdk/resources/serving-graphql). -The [server](/sdk/resources/graphql-server) runs as a separate process. [Core API](/sdk/reference/openreader) is automatically derived from the database schema; it is possible to extend it with [custom queries](/sdk/resources/graphql-server/custom-resolvers) and [basic access control](/sdk/resources/graphql-server/authorization). +Among other alternatives, SQD provides its own server called [OpenReader](/sdk/reference/openreader-server) via the [`@subsquid/graphql-server`](https://www.npmjs.com/package/@subsquid/graphql-server) package. The server runs as a separate process. [Core API](/sdk/reference/openreader-server/api) is automatically derived from the schema file; it is possible to extend it with [custom queries](/sdk/reference/openreader-server/configuration/custom-resolvers) and [basic access control](/sdk/reference/openreader-server/configuration/authorization). 
### Misc utilities diff --git a/docs/sdk/reference/openreader-server/_category_.json b/docs/sdk/reference/openreader-server/_category_.json new file mode 100644 index 00000000..a5839121 --- /dev/null +++ b/docs/sdk/reference/openreader-server/_category_.json @@ -0,0 +1,12 @@ +{ + "position": 80, + "label": "OpenReader", + "collapsible": true, + "collapsed": true, + "className": "red", + "link": { + "type": "generated-index", + "slug": "/sdk/reference/openreader-server", + "title": "An open source GraphQL server built by SQD" + } +} diff --git a/docs/sdk/reference/openreader-server/api/_category_.json b/docs/sdk/reference/openreader-server/api/_category_.json new file mode 100644 index 00000000..f2f2d685 --- /dev/null +++ b/docs/sdk/reference/openreader-server/api/_category_.json @@ -0,0 +1,12 @@ +{ + "position": 30, + "label": "Core API", + "collapsible": true, + "collapsed": true, + "className": "red", + "link": { + "type": "generated-index", + "slug": "/sdk/reference/openreader-server/api", + "title": "Core queries exposed by the OpenReader API" + } +} diff --git a/docs/sdk/reference/openreader/and-or-filters.md b/docs/sdk/reference/openreader-server/api/and-or-filters.md similarity index 75% rename from docs/sdk/reference/openreader/and-or-filters.md rename to docs/sdk/reference/openreader-server/api/and-or-filters.md index 25405b1b..9e45d022 100644 --- a/docs/sdk/reference/openreader/and-or-filters.md +++ b/docs/sdk/reference/openreader-server/api/and-or-filters.md @@ -9,11 +9,11 @@ description: >- ## Overview -Our GraphQL implementation offers a vast selection of tools to filter and section results. One of these is the `where` clause, very common in most database query languages and [explained here](/sdk/reference/openreader/queries/#filter-query-results--search-queries) in detail. +Our GraphQL implementation offers a vast selection of tools to filter and section results. 
One of these is the `where` clause, very common in most database query languages and [explained here](/sdk/reference/openreader-server/api/queries/#filter-query-results--search-queries) in detail. In our GraphQL server implementation, we included logical operators to be used in the `where` clause, allowing you to group multiple parameters in the same `where` argument using the `AND` and `OR` operators to filter results based on more than one criterion. -Note that the [newer](/sdk/resources/graphql-server/overview/#supported-queries) and [more advanced](/sdk/reference/openreader/paginate-query-results) `{entityName}sConnection` queries support exactly the same format of the `where` argument as the older `{entityName}s` queries used in the examples provided here. +Note that the [newer](/sdk/reference/openreader-server/overview/#supported-queries) and [more advanced](/sdk/reference/openreader-server/api/paginate-query-results) `{entityName}sConnection` queries support exactly the same format of the `where` argument as the older `{entityName}s` queries used in the examples provided here. ### Example of an `OR` clause: diff --git a/docs/sdk/reference/openreader/cross-relation-field-queries.md b/docs/sdk/reference/openreader-server/api/cross-relation-field-queries.md similarity index 91% rename from docs/sdk/reference/openreader/cross-relation-field-queries.md rename to docs/sdk/reference/openreader-server/api/cross-relation-field-queries.md index 1142d451..6b29861f 100644 --- a/docs/sdk/reference/openreader/cross-relation-field-queries.md +++ b/docs/sdk/reference/openreader-server/api/cross-relation-field-queries.md @@ -9,7 +9,7 @@ description: >- ## Introduction -The [previous section](/sdk/reference/openreader/nested-field-queries) has already demonstrated that queries can return not just scalars such as a String, but also fields that refer to object or entity types. What's even more interesting is that queries can leverage fields of related objects to filter results. 
+The [previous section](/sdk/reference/openreader-server/api/nested-field-queries) has already demonstrated that queries can return not just scalars such as a String, but also fields that refer to object or entity types. What's even more interesting is that queries can leverage fields of related objects to filter results. Let's take this sample schema with two entity types and a one-to-many relationship between them: diff --git a/docs/sdk/reference/openreader/intro.md b/docs/sdk/reference/openreader-server/api/intro.md similarity index 66% rename from docs/sdk/reference/openreader/intro.md rename to docs/sdk/reference/openreader-server/api/intro.md index 585ed764..30abdaa2 100644 --- a/docs/sdk/reference/openreader/intro.md +++ b/docs/sdk/reference/openreader-server/api/intro.md @@ -6,14 +6,14 @@ description: >- --- :::info -At the moment, [Squid SDK GraphQL server](/sdk/resources/graphql-server) can only be used with squids that use Postgres as their target database. +At the moment, [Squid SDK GraphQL server](/sdk/reference/openreader-server) can only be used with squids that use Postgres as their target database. ::: GraphQL is an API query language, and a server-side runtime for executing queries using a custom type system. Head over to the [official documentation website](https://graphql.org/learn/) for more info. -A GraphQL API served by the [GraphQL server](/sdk/resources/graphql-server) has two components: +A GraphQL API served by the [GraphQL server](/sdk/reference/openreader-server) has two components: 1. Core API is defined by the [schema file](/sdk/reference/schema-file). -2. Extensions added via [custom resolvers](/sdk/resources/graphql-server/custom-resolvers). +2. Extensions added via [custom resolvers](/sdk/reference/openreader-server/configuration/custom-resolvers). In this section we cover the core GraphQL API, with short explanations on how to perform GraphQL queries, how to paginate and sort results. 
This functionality is supported via [OpenReader](https://github.com/subsquid/squid-sdk/tree/master/graphql/openreader), Subsquid's own implementation of [OpenCRUD](https://www.opencrud.org). diff --git a/docs/sdk/reference/openreader/json-queries.md b/docs/sdk/reference/openreader-server/api/json-queries.md similarity index 100% rename from docs/sdk/reference/openreader/json-queries.md rename to docs/sdk/reference/openreader-server/api/json-queries.md diff --git a/docs/sdk/reference/openreader/nested-field-queries.md b/docs/sdk/reference/openreader-server/api/nested-field-queries.md similarity index 81% rename from docs/sdk/reference/openreader/nested-field-queries.md rename to docs/sdk/reference/openreader-server/api/nested-field-queries.md index 79ae0156..834c0b07 100644 --- a/docs/sdk/reference/openreader/nested-field-queries.md +++ b/docs/sdk/reference/openreader-server/api/nested-field-queries.md @@ -44,4 +44,4 @@ query { } ``` -Note that the [newer](/sdk/resources/graphql-server/overview/#supported-queries) and [more advanced](/sdk/reference/openreader/paginate-query-results) `{entityName}sConnection` queries support exactly the same format of the `where` argument as the older `{entityName}s` queries used in the examples provided here. +Note that the [newer](/sdk/reference/openreader-server/overview/#supported-queries) and [more advanced](/sdk/reference/openreader-server/api/paginate-query-results) `{entityName}sConnection` queries support exactly the same format of the `where` argument as the older `{entityName}s` queries used in the examples provided here. 
diff --git a/docs/sdk/reference/openreader/paginate-query-results.md b/docs/sdk/reference/openreader-server/api/paginate-query-results.md similarity index 92% rename from docs/sdk/reference/openreader/paginate-query-results.md rename to docs/sdk/reference/openreader-server/api/paginate-query-results.md index 954b097a..0ecd6216 100644 --- a/docs/sdk/reference/openreader/paginate-query-results.md +++ b/docs/sdk/reference/openreader-server/api/paginate-query-results.md @@ -15,7 +15,7 @@ Cursors are used to traverse across entities of an entity set. They work by retu Currently, only forward pagination is supported. If your use case requires bidirectional pagination please let us know at our [Telegram channel](https://t.me/HydraDevs). -In Subsquid GraphQL server, cursor based pagination is implemented with `{entityName}sConnection` queries available for every entity in the input schema. These queries require an explicitly supplied [`orderBy` argument](/sdk/reference/openreader/sorting), and *the field that is used for ordering must also be requested by the query itself*. Check out [this section](/sdk/reference/openreader/paginate-query-results/#important-note-on-orderby) for a valid query template. +In Subsquid GraphQL server, cursor based pagination is implemented with `{entityName}sConnection` queries available for every entity in the input schema. These queries require an explicitly supplied [`orderBy` argument](/sdk/reference/openreader-server/api/sorting), and *the field that is used for ordering must also be requested by the query itself*. Check out [this section](/sdk/reference/openreader-server/api/paginate-query-results/#important-note-on-orderby) for a valid query template. Example: this query fetches a list of videos where `isExplicit` is true and gets their count. 
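For context, the `videosConnection` query referenced in that hunk might look like the following sketch (the `Video` entity with `isExplicit` and `title` fields is assumed for illustration; this block is not part of the diff). Note how the `id` field used for ordering is also requested, as the surrounding text requires:

```graphql
query {
  videosConnection(orderBy: id_ASC, where: { isExplicit_eq: true }) {
    # total number of matching entities
    totalCount
    edges {
      node {
        id
        title
      }
    }
  }
}
```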
diff --git a/docs/sdk/reference/openreader/queries.md b/docs/sdk/reference/openreader-server/api/queries.md similarity index 94% rename from docs/sdk/reference/openreader/queries.md rename to docs/sdk/reference/openreader-server/api/queries.md index 69cd321f..b4668668 100644 --- a/docs/sdk/reference/openreader/queries.md +++ b/docs/sdk/reference/openreader-server/api/queries.md @@ -29,7 +29,7 @@ query { } } ``` -or, using a [newer](/sdk/resources/graphql-server/overview/#supported-queries) and [more advanced](/sdk/reference/openreader/paginate-query-results) `{entityName}sConnection` query +or, using a [newer](/sdk/reference/openreader-server/overview/#supported-queries) and [more advanced](/sdk/reference/openreader-server/api/paginate-query-results) `{entityName}sConnection` query ```graphql query { diff --git a/docs/sdk/reference/openreader/resolve-union-types-interfaces.md b/docs/sdk/reference/openreader-server/api/resolve-union-types-interfaces.md similarity index 100% rename from docs/sdk/reference/openreader/resolve-union-types-interfaces.md rename to docs/sdk/reference/openreader-server/api/resolve-union-types-interfaces.md diff --git a/docs/sdk/reference/openreader/sorting.md b/docs/sdk/reference/openreader-server/api/sorting.md similarity index 100% rename from docs/sdk/reference/openreader/sorting.md rename to docs/sdk/reference/openreader-server/api/sorting.md diff --git a/docs/sdk/reference/openreader-server/configuration/_category_.json b/docs/sdk/reference/openreader-server/configuration/_category_.json new file mode 100644 index 00000000..245fa956 --- /dev/null +++ b/docs/sdk/reference/openreader-server/configuration/_category_.json @@ -0,0 +1,12 @@ +{ + "position": 20, + "label": "Configuration", + "collapsible": true, + "collapsed": true, + "className": "red", + "link": { + "type": "generated-index", + "slug": "/sdk/reference/openreader-server/configuration", + "title": "Configuring and extending the server" + } +} diff --git 
a/docs/sdk/resources/graphql-server/authorization.md b/docs/sdk/reference/openreader-server/configuration/authorization.md similarity index 95% rename from docs/sdk/resources/graphql-server/authorization.md rename to docs/sdk/reference/openreader-server/configuration/authorization.md index 033dd8fd..250aa20d 100644 --- a/docs/sdk/resources/graphql-server/authorization.md +++ b/docs/sdk/reference/openreader-server/configuration/authorization.md @@ -48,7 +48,7 @@ Here, ## Sending user data to resolvers -Authentication data such as user name can be passed from `requestCheck()` to a [custom resolver](/sdk/resources/graphql-server/custom-resolvers/) through Openreader context: +Authentication data such as user name can be passed from `requestCheck()` to a [custom resolver](/sdk/reference/openreader-server/configuration/custom-resolvers/) through Openreader context: ```typescript export async function requestCheck(req: RequestCheckContext): Promise { ... @@ -100,7 +100,7 @@ export class UserCommentResolver { ``` See full code in [this branch](https://github.com/subsquid-labs/access-control-example/tree/interacting-with-resolver). -This approach does not work with [subscriptions](/sdk/resources/graphql-server/subscriptions/). +This approach does not work with [subscriptions](/sdk/reference/openreader-server/configuration/subscriptions/). 
## Examples diff --git a/docs/sdk/resources/graphql-server/caching.md b/docs/sdk/reference/openreader-server/configuration/caching.md similarity index 100% rename from docs/sdk/resources/graphql-server/caching.md rename to docs/sdk/reference/openreader-server/configuration/caching.md diff --git a/docs/sdk/resources/graphql-server/custom-resolvers.md b/docs/sdk/reference/openreader-server/configuration/custom-resolvers.md similarity index 100% rename from docs/sdk/resources/graphql-server/custom-resolvers.md rename to docs/sdk/reference/openreader-server/configuration/custom-resolvers.md diff --git a/docs/sdk/resources/graphql-server/dos-protection.md b/docs/sdk/reference/openreader-server/configuration/dos-protection.md similarity index 98% rename from docs/sdk/resources/graphql-server/dos-protection.md rename to docs/sdk/reference/openreader-server/configuration/dos-protection.md index 63586be7..4513a971 100644 --- a/docs/sdk/resources/graphql-server/dos-protection.md +++ b/docs/sdk/reference/openreader-server/configuration/dos-protection.md @@ -65,7 +65,7 @@ In a nutshell, assuming that the schema file is properly decorated with `@cardin **`--subscription-max-response-size `** -Same as `--max-response-size` but for live query [subscriptions](/sdk/resources/graphql-server/subscriptions). +Same as `--max-response-size` but for live query [subscriptions](/sdk/reference/openreader-server/configuration/subscriptions). 
#### Example diff --git a/docs/sdk/resources/graphql-server/subscriptions.md b/docs/sdk/reference/openreader-server/configuration/subscriptions.md similarity index 89% rename from docs/sdk/resources/graphql-server/subscriptions.md rename to docs/sdk/reference/openreader-server/configuration/subscriptions.md index e38a40fb..70532872 100644 --- a/docs/sdk/resources/graphql-server/subscriptions.md +++ b/docs/sdk/reference/openreader-server/configuration/subscriptions.md @@ -4,7 +4,11 @@ title: Subscriptions description: Subscribe to updates over a websocket --- -# Query subscriptions +# Subscriptions + +:::danger +RAM usage of subscriptions scales poorly under high load, making the feature unsuitable for most production uses. There are currently no plans to fix this issue. +::: OpenReader supports [GraphQL subscriptions](https://www.apollographql.com/docs/react/data/subscriptions/) via live queries. To use these, a client opens a websocket connection to the server and sends a `subscription` query there. The query body is then repeatedly executed (every 5 seconds by default) and the results are sent to the client whenever they change. @@ -16,7 +20,7 @@ npx squid-graphql-server --help For each entity types, the following queries are supported for subscriptions: - `${EntityName}ById` -- query a single entity - `${EntityName}s` -- query multiple entities with a `where` filter -Note that despite being [deprecated](/sdk/resources/graphql-server/overview/#supported-queries) from the regular query set, `${EntityName}s` queries will continue to be available for subscriptions going forward. +Note that despite being [deprecated](/sdk/reference/openreader-server/overview/#supported-queries) from the regular query set, `${EntityName}s` queries will continue to be available for subscriptions going forward. 
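To illustrate the supported subscription forms, a live query might look like this sketch (a hypothetical `Transfer` entity and its fields are assumed, not taken from the diff):

```graphql
subscription {
  # re-executed every poll interval; results pushed when they change
  transfers(limit: 5, orderBy: timestamp_DESC) {
    id
    amount
    timestamp
  }
}
```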
## Local runs diff --git a/docs/sdk/reference/openreader-server/overview.md b/docs/sdk/reference/openreader-server/overview.md new file mode 100644 index 00000000..1263fd25 --- /dev/null +++ b/docs/sdk/reference/openreader-server/overview.md @@ -0,0 +1,49 @@ +--- +sidebar_position: 10 +description: An open source GraphQL server built by SQD +--- + +# Overview + +:::info +OpenReader is no longer recommended for use in new squid projects [relying on PostgreSQL](/sdk/resources/persisting-data/typeorm). See [Serving GraphQL](/sdk/resources/serving-graphql) to learn about the new options and the [Known issues](#known-issues) section to understand our motivation. +::: + +OpenReader is a server that presents the data of PostgreSQL-powered squids as a GraphQL API. It relies on the [eponymous library](https://github.com/subsquid/squid-sdk/tree/master/graphql/openreader) of the Squid SDK for schema generation. The [schema file](/sdk/reference/schema-file) is used as input; the resulting API supports [OpenCRUD](https://www.opencrud.org/) queries for the entities defined in the schema. + +To start the API server based on `schema.graphql`, install `@subsquid/graphql-server` and run the following in the squid project root: +```bash +npx squid-graphql-server +``` +The `squid-graphql-server` executable supports multiple optional flags that enable [caching](/sdk/reference/openreader-server/configuration/caching), [subscriptions](/sdk/reference/openreader-server/configuration/subscriptions), [DoS protection](/sdk/reference/openreader-server/configuration/dos-protection) and more. Its features are covered in the next sections. + +The API server listens on the port defined by the `GQL_PORT` environment variable (defaults to `4350`). The database connection is configured with the env variables `DB_NAME`, `DB_USER`, `DB_PASS`, `DB_HOST`, `DB_PORT`.
+ +In [SQD Cloud](/cloud), OpenReader is usually run as the `api:` service in the `deploy:` section of the [Deployment manifest](/cloud/reference/manifest). + +## Supported queries + +The details of the supported OpenReader queries can be found in the separate [Core API](/sdk/reference/openreader-server/api) section. Here is a brief overview of the queries generated by OpenReader for each entity defined in the schema file: + +- the squid's last processed block is available via the `squidStatus { height }` query +- a "get one by ID" query with the name `{entityName}ById` for each [entity](/sdk/reference/schema-file/entities) defined in the schema file +- a "get one" query for [`@unique` fields](/sdk/reference/schema-file/indexes-and-constraints), with the name `{entityName}ByUniqueInput` +- entity queries named `{entityName}sConnection`. Each query offers rich filtering, including [field-level filters](/sdk/reference/openreader-server/api/queries), composite [`AND` and `OR` filters](/sdk/reference/openreader-server/api/and-or-filters), [nested queries](/sdk/reference/openreader-server/api/nested-field-queries), [cross-relation queries](/sdk/reference/openreader-server/api/cross-relation-field-queries) and [Relay-compatible](https://relay.dev/graphql/connections.htm) cursor-based [pagination](/sdk/reference/openreader-server/api/paginate-query-results). +- [subscriptions](/sdk/reference/openreader-server/configuration/subscriptions) via live queries +- lookup queries with the name `{entityName}s` (deprecated in favor of Relay connections) + +[Union and typed JSON types](/sdk/reference/schema-file/unions-and-typed-json) are mapped into [GraphQL Union Types](https://graphql.org/learn/schema/#union-types) with [proper type resolution](/sdk/reference/openreader-server/api/resolve-union-types-interfaces) via `__typename`.
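The query forms listed in that overview can be sketched for a hypothetical `Order` entity (entity and field names are illustrative, not from the diff):

```graphql
query {
  # last processed block
  squidStatus { height }
  # "get one by ID" query
  orderById(id: "1") { id }
  # Relay-style connection query; orderBy is mandatory
  ordersConnection(orderBy: id_ASC, first: 10) {
    totalCount
    edges { node { id } }
  }
}
```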
+ +## Built-in custom scalar types + +The OpenReader GraphQL API defines the following custom scalar types: + +- `DateTime` entity field values are presented in the ISO format +- `Bytes` entity field values are presented as hex-encoded strings prefixed with `0x` +- `BigInt` entity field values are presented as strings + +## Known issues + +- RAM usage of [subscriptions](/sdk/reference/openreader-server/configuration/subscriptions) scales poorly under high load, making the feature unsuitable for most production uses. There are currently no plans to fix this issue. +- Setting up custom resolvers for subscriptions is unreasonably hard. +- `@subsquid/graphql-server` depends on the deprecated Apollo Server v3. diff --git a/docs/sdk/reference/openreader/_category_.json b/docs/sdk/reference/openreader/_category_.json deleted file mode 100644 index 45d1c4d1..00000000 --- a/docs/sdk/reference/openreader/_category_.json +++ /dev/null @@ -1,12 +0,0 @@ -{ - "position": 80, - "label": "Core GraphQL API", - "collapsible": true, - "collapsed": true, - "className": "red", - "link": { - "type": "generated-index", - "slug": "/sdk/reference/openreader", - "title": "Core queries exposed by the squid GraphQL API" - } -} diff --git a/docs/sdk/reference/processors/evm-batch/context-interfaces.md b/docs/sdk/reference/processors/evm-batch/context-interfaces.md index 609c7604..bea6b1d9 100644 --- a/docs/sdk/reference/processors/evm-batch/context-interfaces.md +++ b/docs/sdk/reference/processors/evm-batch/context-interfaces.md @@ -34,7 +34,7 @@ The exact fields available in each data item type are inferred from the `setFiel ## Example -The handler below simply outputs all the log items emitted by the contract `0x2E645469f354BB4F5c8a05B3b30A929361cf77eC` in [real time](/sdk/resources/basics/unfinalized-blocks): +The handler below simply outputs all the log items emitted by the contract `0x2E645469f354BB4F5c8a05B3b30A929361cf77eC` in [real time](/sdk/resources/unfinalized-blocks): ```ts import 
{ TypeormDatabase } from '@subsquid/typeorm-store' diff --git a/docs/sdk/reference/processors/evm-batch/general.md b/docs/sdk/reference/processors/evm-batch/general.md index d8642068..04f60d22 100644 --- a/docs/sdk/reference/processors/evm-batch/general.md +++ b/docs/sdk/reference/processors/evm-batch/general.md @@ -16,7 +16,7 @@ The following setters configure the global settings of `EvmBatchProcessor`. They Certain configuration methods are required: * one or both of [`setGateway()`](#set-gateway) and [`setRpcEndpoint()`](#set-rpc-endpoint) - * [`setFinalityConfirmation()`](#set-finality-confirmation) whenever [RPC ingestion](/sdk/resources/basics/unfinalized-blocks) is enabled, namely when + * [`setFinalityConfirmation()`](#set-finality-confirmation) whenever [RPC ingestion](/sdk/resources/unfinalized-blocks) is enabled, namely when - a RPC endpoint was configured with [`setRpcEndpoint()`](#set-rpc-endpoint) - RPC ingestion has **NOT** been explicitly disabled by calling [`setRpcDataIngestionSettings({ disabled: true })`](#set-rpc-data-ingestion-settings) @@ -41,7 +41,7 @@ See [EVM gateways](/subsquid-network/reference/evm-networks). ### `setRpcEndpoint(rpc: ChainRpc)` {#set-rpc-endpoint} Adds a RPC data source. 
If added, it will be used for - - [RPC ingestion](/sdk/resources/basics/unfinalized-blocks) (unless explicitly disabled with [`setRpcDataIngestionSettings()`](#set-rpc-data-ingestion-settings)) + - [RPC ingestion](/sdk/resources/unfinalized-blocks) (unless explicitly disabled with [`setRpcDataIngestionSettings()`](#set-rpc-data-ingestion-settings)) - any [direct RPC queries](/sdk/resources/tools/typegen/state-queries/?typegen=evm) you make in your squid code A node RPC endpoint can be specified as a string URL or as an object: @@ -67,7 +67,7 @@ Replaced by [`setGateway()`](#set-gateway) and [`setRpcEndpoint()`](#set-rpc-end ### `setRpcDataIngestionSetting(settings: RpcDataIngestionSettings)` {#set-rpc-data-ingestion-settings} -Specify the [RPC ingestion](/sdk/resources/basics/unfinalized-blocks) settings. +Specify the [RPC ingestion](/sdk/resources/unfinalized-blocks) settings. ```ts type RpcDataIngestionSettings = { disabled?: boolean diff --git a/docs/sdk/reference/processors/evm-batch/state-diffs.md b/docs/sdk/reference/processors/evm-batch/state-diffs.md index 98eb1577..c05af8d8 100644 --- a/docs/sdk/reference/processors/evm-batch/state-diffs.md +++ b/docs/sdk/reference/processors/evm-batch/state-diffs.md @@ -7,7 +7,7 @@ description: >- # Storage state diffs :::tip -State diffs for historical blocks are [currently available](/subsquid-network/reference/evm-networks) from [Subsquid Network](/subsquid-network) on the same basis as all other data stored there: for free. If you deploy a squid that indexes traces [in real-time](/sdk/resources/basics/unfinalized-blocks) to Subsquid Cloud and use our [RPC addon](/cloud/resources/rpc-proxy), the necessary `trace_` or `debug_` RPC calls made will be counted alongside all other calls and [the price](/cloud/pricing/#rpc-requests) will be computed for the total count. There are no surcharges for traces or state diffs. 
+State diffs for historical blocks are [currently available](/subsquid-network/reference/evm-networks) from [Subsquid Network](/subsquid-network) on the same basis as all other data stored there: for free. If you deploy a squid that indexes traces [in real-time](/sdk/resources/unfinalized-blocks) to Subsquid Cloud and use our [RPC addon](/cloud/resources/rpc-proxy), the necessary `trace_` or `debug_` RPC calls made will be counted alongside all other calls and [the price](/cloud/pricing/#rpc-requests) will be computed for the total count. There are no surcharges for traces or state diffs. ::: #### `addStateDiff(options)` {#add-state-diff} diff --git a/docs/sdk/reference/processors/evm-batch/traces.md b/docs/sdk/reference/processors/evm-batch/traces.md index 16d6ce53..eb27c26b 100644 --- a/docs/sdk/reference/processors/evm-batch/traces.md +++ b/docs/sdk/reference/processors/evm-batch/traces.md @@ -7,7 +7,7 @@ description: >- # Traces :::tip -Traces for historical blocks are [currently available](/subsquid-network/reference/evm-networks) from [Subsquid Network](/subsquid-network) on the same basis as all other data stored there: for free. If you deploy a squid that indexes traces [in real-time](/sdk/resources/basics/unfinalized-blocks) to Subsquid Cloud and use our [RPC addon](/cloud/resources/rpc-proxy), the necessary `trace_` or `debug_` RPC calls made will be counted alongside all other calls and [the price](/cloud/pricing/#rpc-requests) will be computed for the total count. There are no surcharges for traces or state diffs. +Traces for historical blocks are [currently available](/subsquid-network/reference/evm-networks) from [Subsquid Network](/subsquid-network) on the same basis as all other data stored there: for free. 
If you deploy a squid that indexes traces [in real-time](/sdk/resources/unfinalized-blocks) to Subsquid Cloud and use our [RPC addon](/cloud/resources/rpc-proxy), the necessary `trace_` or `debug_` RPC calls made will be counted alongside all other calls and [the price](/cloud/pricing/#rpc-requests) will be computed for the total count. There are no surcharges for traces or state diffs. ::: #### `addTrace(options)` {#add-trace} diff --git a/docs/sdk/reference/processors/substrate-batch/context-interfaces.md b/docs/sdk/reference/processors/substrate-batch/context-interfaces.md index d1d9deb9..fb84cffa 100644 --- a/docs/sdk/reference/processors/substrate-batch/context-interfaces.md +++ b/docs/sdk/reference/processors/substrate-batch/context-interfaces.md @@ -29,7 +29,7 @@ The exact fields available in each data item type are inferred from the `setFiel ## Example -The handler below simply outputs all the `Balances.transfer_all` calls on Kusama in [real time](/sdk/resources/basics/unfinalized-blocks): +The handler below simply outputs all the `Balances.transfer_all` calls on Kusama in [real time](/sdk/resources/unfinalized-blocks): ```ts import {SubstrateBatchProcessor} from '@subsquid/substrate-processor' diff --git a/docs/sdk/reference/processors/substrate-batch/general.md b/docs/sdk/reference/processors/substrate-batch/general.md index e41b349d..cb83e37d 100644 --- a/docs/sdk/reference/processors/substrate-batch/general.md +++ b/docs/sdk/reference/processors/substrate-batch/general.md @@ -15,7 +15,7 @@ The following setters configure the global settings of `SubstrateBatchProcessor` Calling [`setRpcEndpoint()`](#set-rpc-endpoint) is a hard requirement on Substrate, as chain RPC is used to retrieve chain metadata. Adding a [Subsquid Network gateway](/subsquid-network/reference/substrate-networks) with [`setGateway()`](#set-gateway) is optional but highly recommended, as it greatly reduces RPC usage. 
-To reduce it further, you can explicitly disable [RPC ingestion](/sdk/resources/basics/unfinalized-blocks) by calling [`setRpcDataIngestionSettings({ disabled: true })`](#set-rpc-data-ingestion-settings): in this scenario the RPC will only be used for metadata retrieval and to perform any [direct RPC queries](/sdk/resources/tools/typegen/state-queries/?typegen=substrate) you might be doing in your squid code. This will, however, introduce a delay of a few thousands of blocks between the chain head and the highest block available to your squid. +To reduce it further, you can explicitly disable [RPC ingestion](/sdk/resources/unfinalized-blocks) by calling [`setRpcDataIngestionSettings({ disabled: true })`](#set-rpc-data-ingestion-settings): in this scenario the RPC will only be used for metadata retrieval and to perform any [direct RPC queries](/sdk/resources/tools/typegen/state-queries/?typegen=substrate) you might be doing in your squid code. This will, however, introduce a delay of a few thousands of blocks between the chain head and the highest block available to your squid. ### `setGateway(url: string | GatewaySettings)` {#set-gateway} @@ -31,7 +31,7 @@ See [Substrate gateways](/subsquid-network/reference/substrate-networks). ### `setRpcEndpoint(rpc: ChainRpc)` {#set-rpc-endpoint} Adds a RPC data source. 
If added, it will be used for - - [RPC ingestion](/sdk/resources/basics/unfinalized-blocks) (unless explicitly disabled with [`setRpcDataIngestionSettings()`](#set-rpc-data-ingestion-settings)) + - [RPC ingestion](/sdk/resources/unfinalized-blocks) (unless explicitly disabled with [`setRpcDataIngestionSettings()`](#set-rpc-data-ingestion-settings)) - any [direct RPC queries](/sdk/resources/tools/typegen/state-queries/?typegen=substrate) you make in your squid code A node RPC endpoint can be specified as a string URL or as an object: @@ -57,7 +57,7 @@ Replaced by [`setGateway()`](#set-gateway) and [`setRpcEndpoint()`](#set-rpc-end ### `setRpcDataIngestionSetting(settings: RpcDataIngestionSettings)` {#set-rpc-data-ingestion-settings} -Specify the [RPC ingestion](/sdk/resources/basics/unfinalized-blocks) settings. +Specify the [RPC ingestion](/sdk/resources/unfinalized-blocks) settings. ```ts type RpcDataIngestionSettings = { disabled?: boolean diff --git a/docs/sdk/reference/schema-file/entity-relations.md b/docs/sdk/reference/schema-file/entity-relations.md index 6cacae57..09ff8b62 100644 --- a/docs/sdk/reference/schema-file/entity-relations.md +++ b/docs/sdk/reference/schema-file/entity-relations.md @@ -10,7 +10,7 @@ The term "entity relation" refers to the situation when an entity instance conta [One-to-one](https://github.com/typeorm/typeorm/blob/master/docs/one-to-one-relations.md) and [one-to-many](https://github.com/typeorm/typeorm/blob/master/docs/many-to-one-one-to-many-relations.md) relations are supported by Typeorm. The "many" side of the one-to-many relations is always the owning side. Many-to-many relations are modeled as [two one-to-many relations with an explicit join table](#many-to-many-relations). -An entity relation is always unidirectional, but it is possible to request the data on the owning entity from the non-owning one. To do so, define a field decorated `@derivedFrom` in the schema. 
Doing so will cause the Typeorm code generated by [`squid-typeorm-codegen`](/sdk/resources/persisting-data/typeorm) and the GraphQL API served by [`squid-graphql-server`](/sdk/resources/graphql-server/overview/) to show a virtual (that is, **not mapping to a database column**) field populated via inverse lookup queries. +An entity relation is always unidirectional, but it is possible to request the data on the owning entity from the non-owning one. To do so, define a field decorated with `@derivedFrom` in the schema. Doing so will cause the Typeorm code generated by [`squid-typeorm-codegen`](/sdk/resources/persisting-data/typeorm) and the GraphQL API served by [`squid-graphql-server`](/sdk/reference/openreader-server/overview) to show a virtual (that is, **not mapping to a database column**) field populated via inverse lookup queries. The following examples illustrate the concepts. diff --git a/docs/sdk/reference/schema-file/interfaces.md b/docs/sdk/reference/schema-file/interfaces.md index 26aadd9f..d246dc2c 100644 --- a/docs/sdk/reference/schema-file/interfaces.md +++ b/docs/sdk/reference/schema-file/interfaces.md @@ -6,7 +6,9 @@ description: Queriable interfaces # Interfaces -The schema file supports [GraphQL Interfaces](https://graphql.org/learn/schema/#interfaces) for modelling complex types sharing common traits. Interfaces are annotated with `@query` at the type level and do not affect the backing database schema, only enriching the [GraphQL API queries](/sdk/resources/graphql-server) with [inline fragments](https://graphql.org/learn/queries/#inline-fragments). +The schema file supports [GraphQL Interfaces](https://graphql.org/learn/schema/#interfaces) for modelling complex types sharing common traits. Interfaces are annotated with `@query` at the type level and do not affect the database schema, only enriching the GraphQL API queries with [inline fragments](https://graphql.org/learn/queries/#inline-fragments).
+ +Currently, only [OpenReader](/sdk/reference/openreader-server) supports GraphQL interfaces defined in the schema file. ### Examples @@ -47,7 +49,7 @@ type Baz implements MyEntity @entity { } ``` -The `MyEntity` interface above enables `myEntities` and `myEntitiesConnection` [GraphQL API queries](/sdk/resources/graphql-server) with inline fragments and the `_type`, `__typename` [meta fields](https://graphql.org/learn/queries/#meta-fields): +The `MyEntity` interface above enables `myEntities` and `myEntitiesConnection` [GraphQL API queries](/sdk/reference/openreader-server/api) with inline fragments and the `_type`, `__typename` [meta fields](https://graphql.org/learn/queries/#meta-fields): ```graphql query { @@ -64,4 +66,4 @@ query { ... on Baz { baz } } } -``` \ No newline at end of file +``` diff --git a/docs/sdk/reference/schema-file/intro.md b/docs/sdk/reference/schema-file/intro.md index c75046a2..429598c8 100644 --- a/docs/sdk/reference/schema-file/intro.md +++ b/docs/sdk/reference/schema-file/intro.md @@ -10,7 +10,7 @@ description: >- The schema file `schema.graphql` uses a GraphQL dialect to model the target entities and entity relations. The tooling around the schema file is then used to: - Generate TypeORM entities (with `squid-typeorm-codegen(1)`, see below) - Generate the database schema from the TypeORM entities (see [db migrations](/sdk/resources/persisting-data/typeorm)) -- Present the target data with a rich API served by a built-in [GraphQL Server](/sdk/resources/graphql-server). A full API reference is covered in the [Query a Squid](/sdk/reference/openreader) section. +- Optionally, present the target data with a [GraphQL API](/sdk/resources/serving-graphql). The schema file format is loosely compatible with the [subgraph schema](https://thegraph.com/docs/en/developing/creating-a-subgraph/) file, see [Migrate from subgraph](/sdk/resources/migrate/migrate-subgraph) section for details.
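As a minimal illustration of the schema dialect described in that hunk, a sketch of a `schema.graphql` fragment (entity and field names are hypothetical, not from the docs being changed):

```graphql
type Account @entity {
  id: ID!
  balance: BigInt!
  # virtual field populated via inverse lookup; no database column
  transfers: [Transfer!] @derivedFrom(field: "from")
}

type Transfer @entity {
  id: ID!
  from: Account!
  amount: BigInt!
  timestamp: DateTime!
}
```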
diff --git a/docs/sdk/resources/basics/_category_.json b/docs/sdk/resources/basics/_category_.json deleted file mode 100644 index 0fe18207..00000000 --- a/docs/sdk/resources/basics/_category_.json +++ /dev/null @@ -1,12 +0,0 @@ -{ - "position": 9, - "label": "Basics", - "collapsible": true, - "collapsed": true, - "className": "red", - "link": { - "type": "generated-index", - "slug": "/sdk/resources/basics", - "title": "Basics of Squid SDK" - } -} diff --git a/docs/sdk/resources/basics/batch-processing.md b/docs/sdk/resources/batch-processing.md similarity index 100% rename from docs/sdk/resources/basics/batch-processing.md rename to docs/sdk/resources/batch-processing.md diff --git a/docs/sdk/resources/evm/_category_.json b/docs/sdk/resources/evm/_category_.json index 54c3174f..bad29ad4 100644 --- a/docs/sdk/resources/evm/_category_.json +++ b/docs/sdk/resources/evm/_category_.json @@ -1,5 +1,5 @@ { - "position": 30, + "position": 110, "label": "EVM-specific", "collapsible": true, "collapsed": true, diff --git a/docs/sdk/resources/basics/external-api.md b/docs/sdk/resources/external-api.md similarity index 100% rename from docs/sdk/resources/basics/external-api.md rename to docs/sdk/resources/external-api.md diff --git a/docs/sdk/resources/graphql-server/_category_.json b/docs/sdk/resources/graphql-server/_category_.json deleted file mode 100644 index 282d8113..00000000 --- a/docs/sdk/resources/graphql-server/_category_.json +++ /dev/null @@ -1,12 +0,0 @@ -{ - "position": 11, - "label": "GraphQL server", - "collapsible": true, - "collapsed": true, - "className": "red", - "link": { - "type": "generated-index", - "slug": "/sdk/resources/graphql-server", - "title": "GraphQL API server for squid data" - } -} diff --git a/docs/sdk/resources/graphql-server/alternatives.md b/docs/sdk/resources/graphql-server/alternatives.md deleted file mode 100644 index 1e07b476..00000000 --- a/docs/sdk/resources/graphql-server/alternatives.md +++ /dev/null @@ -1,25 +0,0 @@ ---- 
-sidebar_position: 90 -title: Alternatives -description: Using Squid SDK with PostGraphile or Hasura ---- - -# Alternatives to the SDK GraphQL server - -We encourage using squids with third-party GraphQL tools like [PostGraphile](https://www.graphile.org/postgraphile/) and [Hasura](https://hasura.io). No special configuration is required and there aren't any constraints on running them in [Subsquid Cloud](/cloud). - -## PostGraphile - -Here we cover one possible way of integrating PostGraphile into a squid project ([full example](https://github.com/subsquid-labs/squid-postgraphile-example/)). Note the following: - -* There is a dedicated entry point for PostGraphile (`src/api.ts`). It is complemented by an [`sqd` command](https://github.com/subsquid-labs/squid-postgraphile-example/blob/f1fd1691eb59da2c9d57c475a71d0ed44cfed891/commands.json#L58) and a [manifest entry](https://github.com/subsquid-labs/squid-postgraphile-example/blob/f1fd1691eb59da2c9d57c475a71d0ed44cfed891/squid.yaml#L15). This makes it easier to run the squid both locally (with [`sqd run`](/squid-cli/run)) and in [Cloud](/cloud). - -* As per usual with PostGraphile installations, you can freely extend it with plugins, including your own. Here is an [example plugin for serving the `_squidStatus` queries](https://github.com/subsquid-labs/squid-postgraphile-example/blob/f1fd1691eb59da2c9d57c475a71d0ed44cfed891/src/api.ts#L11) from the standard Squid SDK GraphQL server schema. - -## Hasura - -If you want to run Hasura in [Subsquid Cloud](/cloud), visit the [`hasura` addon page](/cloud/reference/hasura). - -When running it elsewhere, simply supply database credentials in your Hasura configuration. For squids running in Subsquid Cloud you can find the credentials in [the Cloud app](https://app.subsquid.io/squids). 
- -![database creds](database-creds.png) diff --git a/docs/sdk/resources/graphql-server/overview.md b/docs/sdk/resources/graphql-server/overview.md deleted file mode 100644 index 596f8d1e..00000000 --- a/docs/sdk/resources/graphql-server/overview.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -sidebar_position: 10 -description: Zero-config GraphQL server ---- - -# Overview - -The data indexed by a squid into a Postgres database can be automatically presented with a GraphQL API service powered by the [OpenReader](https://github.com/subsquid/squid-sdk/tree/master/graphql/openreader) lib of the Squid SDK. The OpenReader GraphQL server takes [the schema file](/sdk/reference/schema-file) as an input and serves a GraphQL API supporting [OpenCRUD](https://www.opencrud.org/) queries for the entities defined in the schema. - -To start the API server based on the `schema.graphql` run in the squid project root: -```bash -npx squid-graphql-server -``` -or, for more options, -```bash -npx squid-graphql-server -``` -The `squid-graphql-server` binary supports multiple optional flags to enable caching, subscriptions, DoS protection etc. Its features are covered in the next sections. - -The API server listens at port defined by `GQL_PORT` (defaults to `4350`). The database connection is configured with the env variables `DB_NAME`, `DB_USER`, `DB_PASS`, `DB_HOST`, `DB_PORT`. - -The GraphQL API is enabled by the `api:` service in the `deploy` section of [squid.yaml](/cloud/reference/manifest) for Subsquid Cloud deployments. - -## Supported Queries - -The details of the supported OpenReader queries can be found in a separate section [Query a Squid](/sdk/reference/openreader). 
Here is a brief overview of the queries generated by OpenReader for each entity defined in the schema file: - -- the squid last processed block is available with `squidStatus { height }` query -- a "get one by ID" query with the name `{entityName}ById` for each [entity](/sdk/reference/schema-file/entities) defined in the schema file -- a "get one" query for [`@unique` fields](/sdk/reference/schema-file/indexes-and-constraints), with the name `{entityName}ByUniqueInput` -- Entity queries named `{entityName}sConnection`. Each query supports rich filtering support, including [field-level filters](/sdk/reference/openreader/queries), composite [`AND` and `OR` filters](/sdk/reference/openreader/and-or-filters), [nested queries](/sdk/reference/openreader/nested-field-queries), [cross-relation queries](/sdk/reference/openreader/cross-relation-field-queries) and [Relay-compatible](https://relay.dev/graphql/connections.htm) cursor-based [pagination](/sdk/reference/openreader/paginate-query-results). -- [Subsriptions](/sdk/resources/graphql-server/subscriptions) via live queries -- (Deprecated in favor of Relay connections) Lookup queries with the name `{entityName}s`. - -[Union and typed JSON types](/sdk/reference/schema-file/unions-and-typed-json) are mapped into [GraphQL Union Types](https://graphql.org/learn/schema/#union-types) with a [proper type resolution](/sdk/reference/openreader/resolve-union-types-interfaces) with `__typename`. 
- -## Built-in custom scalar types - -The OpenReader GraphQL API defines the following custom scalar types: - -- `DateTime` entity field values are presented in the ISO format -- `Bytes` entity field values are presented as hex-encoded strings prefixed with `0x` -- `BigInt` entity field values are presented as strings diff --git a/docs/sdk/resources/migrate/_category_.json b/docs/sdk/resources/migrate/_category_.json index f0706d59..a18b92c5 100644 --- a/docs/sdk/resources/migrate/_category_.json +++ b/docs/sdk/resources/migrate/_category_.json @@ -1,5 +1,5 @@ { - "position": 50, + "position": 140, "label": "Migration guides", "collapsible": true, "collapsed": true, diff --git a/docs/sdk/resources/migrate/migrate-subgraph.md b/docs/sdk/resources/migrate/migrate-subgraph.md index fb1b818b..1a79154b 100644 --- a/docs/sdk/resources/migrate/migrate-subgraph.md +++ b/docs/sdk/resources/migrate/migrate-subgraph.md @@ -14,7 +14,7 @@ git clone https://github.com/subsquid-labs/gravatar-squid.git `EvmBatchProcessor` provided by the Squid SDK defines a single handler that indexes EVM logs and transaction data in batches. It differs from the programming model of subgraph mappings that defines a separate data handler for each EVM log topic to be indexed. Due to significantly less frequent database hits (once per batch compared to once per log) the batch-based handling model shows up to a 10x increase in the indexing speed. -At the same time, concepts of the [schema file](/sdk/reference/schema-file), [code generation from the schema file](/sdk/reference/schema-file/intro/#typeorm-codegen) and [auto-generated GraphQL API](/sdk/resources/graphql-server) should be familiar to subgraph developers. In most cases the schema file of a subgraph can be imported into a squid as is. 
+At the same time, concepts of the [schema file](/sdk/reference/schema-file), [code generation from the schema file](/sdk/reference/schema-file/intro/#typeorm-codegen) and [auto-generated GraphQL API](/sdk/resources/serving-graphql) should be familiar to subgraph developers. In most cases the schema file of a subgraph can be imported into a squid as is. There are some known limitations: - Many-to-Many entity relations should be [modeled explicitly](/sdk/reference/schema-file/entity-relations/#many-to-many-relations) as two many-to-one relations @@ -24,10 +24,10 @@ On top of the features provided by subgraphs, Squid SDK and Subsquid Cloud offer - Full control over the target database (Postgres), including custom migrations and ad-hoc queries in the handler - Custom target databases and data formats (e.g. CSV) - Arbitrary code execution in the data handler -- [Extension of the GraphQL API](/sdk/resources/graphql-server/custom-resolvers) with arbitrary SQL +- [Extension of the GraphQL API](/sdk/reference/openreader-server/configuration/custom-resolvers) with arbitrary SQL - [Secret environment variables](/cloud/resources/env-variables), allowing to seamlessly use private third-party JSON-RPC endpoints and integrate with external APIs - [API versioning and aliasing](/cloud/resources/production-alias) -- [API caching](/sdk/resources/graphql-server/caching) +- [API caching](/sdk/reference/openreader-server/configuration/caching) For a full feature set comparison, see [Subsquid vs The Graph](/sdk/subsquid-vs-thegraph). 
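As an aside on the first limitation above: a subgraph-style many-to-many relation can be remodeled with an explicit join entity. A hypothetical sketch in squid schema syntax (the entity names are illustrative, not taken from any real project):

```graphql
type Token @entity {
  id: ID!
  holders: [TokenHolder!]! @derivedFrom(field: "token")
}

type Holder @entity {
  id: ID!
  tokens: [TokenHolder!]! @derivedFrom(field: "holder")
}

# Join entity replacing the many-to-many relation:
# one row per (token, holder) pair
type TokenHolder @entity {
  id: ID!
  token: Token!
  holder: Holder!
}
```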
diff --git a/docs/sdk/resources/basics/multichain.md b/docs/sdk/resources/multichain.md
similarity index 94%
rename from docs/sdk/resources/basics/multichain.md
rename to docs/sdk/resources/multichain.md
index bf93a065..6a68bff6 100644
--- a/docs/sdk/resources/basics/multichain.md
+++ b/docs/sdk/resources/multichain.md
@@ -6,7 +6,7 @@ description: Combine data from multiple chains

 # Multichain indexing

-Squids can extract data from multiple chains into a shared data sink. If the data is [stored to Postgres](/sdk/resources/persisting-data/typeorm) it can then be served as a unified multichain [GraphQL API](/sdk/resources/graphql-server).
+Squids can extract data from multiple chains into a shared data sink. If the data is [stored to Postgres](/sdk/resources/persisting-data/typeorm) it can then be served as a unified multichain [GraphQL API](/sdk/resources/serving-graphql).

 To do this, run one [processor](/sdk/overview) per source network:

@@ -69,7 +69,7 @@ Also ensure that
   async (ctx) => { // ...
   ```

-2. [Schema](/sdk/reference/schema-file) and [GraphQL API](/sdk/resources/graphql-server) are shared among the processors.
+2. [Schema](/sdk/reference/schema-file) and [GraphQL API](/sdk/resources/serving-graphql) are shared among the processors.

 ### Handling concurrency

@@ -79,7 +79,7 @@ Also ensure that
 - To avoid cross-chain data dependencies, use per-chain records for volatile data. E.g. if you track account balances across multiple chains you can avoid overlaps by storing the balance for each chain in a different table row.

-  When you need to combine the records (e.g. get a total of all balaces across chains) use a [custom resolver](/sdk/resources/graphql-server/custom-resolvers) to do it on the GraphQL server side.
+  When you need to combine the records (e.g. get a total of all balances across chains) use a [custom resolver](/sdk/reference/openreader-server/configuration/custom-resolvers) to do it on the GraphQL server side.
- It is OK to use cross-chain [entities](/sdk/reference/schema-file/entities) to simplify aggregation. Just don't store any data in them:

   ```graphql
diff --git a/docs/sdk/resources/persisting-data/_category_.json b/docs/sdk/resources/persisting-data/_category_.json
index ce7b1389..87714615 100644
--- a/docs/sdk/resources/persisting-data/_category_.json
+++ b/docs/sdk/resources/persisting-data/_category_.json
@@ -1,5 +1,5 @@
 {
-  "position": 10,
+  "position": 100,
   "label": "Persisting data",
   "collapsible": true,
   "collapsed": true,
diff --git a/docs/sdk/resources/basics/self-hosting.md b/docs/sdk/resources/self-hosting.md
similarity index 100%
rename from docs/sdk/resources/basics/self-hosting.md
rename to docs/sdk/resources/self-hosting.md
diff --git a/docs/sdk/resources/graphql-server/database-creds.png b/docs/sdk/resources/serving-graphql-database-creds.png
similarity index 100%
rename from docs/sdk/resources/graphql-server/database-creds.png
rename to docs/sdk/resources/serving-graphql-database-creds.png
diff --git a/docs/sdk/resources/serving-graphql.md b/docs/sdk/resources/serving-graphql.md
new file mode 100644
index 00000000..9608d20c
--- /dev/null
+++ b/docs/sdk/resources/serving-graphql.md
@@ -0,0 +1,46 @@
+---
+sidebar_position: 80
+title: Serving GraphQL
+description: GraphQL servers commonly used in squids
+---
+
+# Serving GraphQL
+
+It is common (although not required) for squids to serve GraphQL APIs. Historically, the most common way to do that was to [persist the squid data to PostgreSQL](/sdk/resources/persisting-data/typeorm), then attach [OpenReader](#openreader) to it. Although this is still supported, we encourage using [PostGraphile](#postgraphile) or [Hasura](#hasura) in new PostgreSQL-based projects. See [OpenReader's known issues](/sdk/reference/openreader-server/overview/#known-issues) if you're curious about our motivation.
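Whichever server is chosen, in Cloud deployments the GraphQL service is typically wired up through the `deploy.api` section of [the manifest](/cloud/reference/manifest). A hypothetical fragment (the exact `cmd` depends on the server you run and on your project layout):

```yaml
deploy:
  api:
    # illustrative entry point; OpenReader setups would run squid-graphql-server instead
    cmd: ["npx", "ts-node", "src/api.ts"]
```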
+ +## PostGraphile + +[PostGraphile](https://www.graphile.org/postgraphile/) is an open-source tool that builds powerful, extensible and performant GraphQL APIs from PostgreSQL schemas. + +The recommended way of integrating PostGraphile into squid projects is by making a dedicated entry point at `src/api.ts`. A complete example squid implementing this approach is available in [this repository](https://github.com/subsquid-labs/squid-postgraphile-example/). + +With this entry point in place, we [create a `sqd` command](https://github.com/subsquid-labs/squid-postgraphile-example/blob/f1fd1691eb59da2c9d57c475a71d0ed44cfed891/commands.json#L58) for running PostGraphile with [`commands.json`](/squid-cli/commands-json), then use it in the [`deploy.api` entry](https://github.com/subsquid-labs/squid-postgraphile-example/blob/f1fd1691eb59da2c9d57c475a71d0ed44cfed891/squid.yaml#L15) of [Squid manifest](/cloud/reference/manifest). Although none of this is required, this makes it easier to run the squid both locally (with [`sqd run`](/squid-cli/run)) and in the [Cloud](/cloud). + +As per usual with PostGraphile installations, you can freely extend it with plugins, including your own. Here is an [example plugin for serving the `_squidStatus` queries](https://github.com/subsquid-labs/squid-postgraphile-example/blob/f1fd1691eb59da2c9d57c475a71d0ed44cfed891/src/api.ts#L11) from the standard Squid SDK GraphQL server schema. A plugin for making PostGraphile API fully compatible with [old APIs](/sdk/reference/openreader-server/api) served by OpenReader will be made available soon. + +## Hasura + +[Hasura](https://hasura.io) is a powerful open-source GraphQL engine geared towards exposing multiple data sources via a single API. You can integrate it with your squid in two ways: +1. 
**Use Hasura to gather data from multiple sources, including your squid.**
+
+   For this scenario we recommend separating your Hasura instance from your squid, which should consist of just one service, [the processor](/sdk/reference/processors/architecture), plus the database. Supply your database credentials to Hasura, then configure it to produce the desired API.
+
+   If you run your squid in our [Cloud](/cloud) you can find database credentials in [the app](https://app.subsquid.io/squids):
+
+   ![database creds](serving-graphql-database-creds.png)
+
+2. **Run a dedicated Hasura instance for serving the data just from your squid.**
+
+   A complete example implementing this approach is available in [this repository](https://github.com/subsquid-labs/squid-hasura-example). Here's how it works:
+
+   * Locally, Hasura runs in a [Docker container](https://github.com/subsquid-labs/squid-hasura-example/blob/70bb6d703dc90c1bb00b47f3fef7f388ab54e565/docker-compose.yml#L14C1-L28C20). In the Cloud it is managed via the [Hasura addon](/cloud/reference/hasura).
+   * Hasura metadata is shared among all squid instances by means of the [Hasura configuration tool](/sdk/resources/tools/hasura-configuration). The tool can automatically create an initial configuration based on your [TypeORM models](/sdk/reference/schema-file/intro/#typeorm-codegen), then persist any changes you might make with the web GUI and metadata exports.
+   * The admin authentication secret is set via the `HASURA_GRAPHQL_ADMIN_SECRET` environment variable. The variable is set in `.env` locally and from a [secret](/cloud/resources/env-variables/#secrets) in Cloud deployments.
+
+   See the [configuration tool page](/sdk/resources/tools/hasura-configuration) and the [repo readme](https://github.com/subsquid-labs/squid-hasura-example#readme) for more details.
+
+## OpenReader
+
+[OpenReader](/sdk/reference/openreader-server) is a GraphQL server developed by the SQD team.
Although still supported, it's not recommended for new PostgreSQL-powered projects due to its [known issues](/sdk/reference/openreader-server/overview/#known-issues), especially for APIs implementing GraphQL subscriptions.
+
+The server uses the [schema file](/sdk/reference/schema-file) to produce its [core API](/sdk/reference/openreader-server/api) that can be extended with [custom resolvers](/sdk/reference/openreader-server/configuration/custom-resolvers). Extra features include [DoS protection](/sdk/reference/openreader-server/configuration/dos-protection) and [caching](/sdk/reference/openreader-server/configuration/caching).
diff --git a/docs/sdk/resources/substrate/_category_.json b/docs/sdk/resources/substrate/_category_.json
index bdd18e8a..cd56972e 100644
--- a/docs/sdk/resources/substrate/_category_.json
+++ b/docs/sdk/resources/substrate/_category_.json
@@ -1,5 +1,5 @@
 {
-  "position": 31,
+  "position": 120,
   "label": "Substrate-specific",
   "collapsible": true,
   "collapsed": true,
diff --git a/docs/sdk/resources/tools/_category_.json b/docs/sdk/resources/tools/_category_.json
index e0774d18..4e4387dc 100644
--- a/docs/sdk/resources/tools/_category_.json
+++ b/docs/sdk/resources/tools/_category_.json
@@ -1,5 +1,5 @@
 {
-  "position": 40,
+  "position": 130,
   "label": "Tools",
   "collapsible": true,
   "collapsed": true,
diff --git a/docs/sdk/resources/tools/hasura-configuration-web-ui-import-export.png b/docs/sdk/resources/tools/hasura-configuration-web-ui-import-export.png
new file mode 100644
index 00000000..1a151b45
Binary files /dev/null and b/docs/sdk/resources/tools/hasura-configuration-web-ui-import-export.png differ
diff --git a/docs/sdk/resources/tools/hasura-configuration.md b/docs/sdk/resources/tools/hasura-configuration.md
new file mode 100644
index 00000000..476eefce
--- /dev/null
+++ b/docs/sdk/resources/tools/hasura-configuration.md
@@ -0,0 +1,89 @@
+---
+sidebar_position: 110
+title: Hasura configuration tool
+description: Configure Hasura for your squid
+--- + +# Hasura configuration tool + +[`@subsquid/hasura-configuration`](https://www.npmjs.com/package/@subsquid/hasura-configuration) is a tool for managing Hasura configuration in [PostgreSQL-powered squids](/sdk/resources/persisting-data/typeorm). Install it with +```bash +npm i @subsquid/hasura-configuration +``` +Make sure that the following environment variables are set: + + * `HASURA_GRAPHQL_ENDPOINT` for Hasura URL (defaults to `http://localhost:8080`). + * `HASURA_GRAPHQL_ADMIN_SECRET` for admin access (only required to use `squid-hasura-configuration apply`). + * If your Hasura instance(s) use a role other than `public` to serve the anonymous part of your API, also set `HASURA_GRAPHQL_UNAUTHORIZED_ROLE`. + +## Generating the initial configuration + +The tool uses your squid's [TypeORM models](/sdk/reference/schema-file/intro/#typeorm-codegen) as input when generating the initial configuration. Make sure they are up to date. + +
+Here's how:
+
+1. Finalize any edits to [`schema.graphql`](/sdk/reference/schema-file)
+
+2. Update the TypeScript code of your models with
+   ```bash
+   npx squid-typeorm-codegen
+   ```
+
+3. Compile your models with
+   ```bash
+   npm run build
+   ```
+
+4. Regenerate your [DB migrations](/sdk/resources/persisting-data/typeorm/#database-migrations) with a clean database to make sure they match your updated schema.
+   ```bash
+   docker compose down
+   docker compose up -d
+   rm -r db
+   npx squid-typeorm-migration generate
+   ```
+
+You can turn off your database after doing that - the Hasura configuration tool does not need it to generate its initial configuration.
+ +When done, run +```bash +npx squid-hasura-configuration regenerate +``` +The generated configuration will be available at `hasura_metadata.json`. It enables: +- tracking all tables that correspond to [schema entities](/sdk/reference/schema-file/entities); +- `SELECT` permissions for the `public` (or `$HASURA_GRAPHQL_UNAUTHORIZED_ROLE` if it is defined) role for all columns in these tables; +- tracking all [entity relationships](/sdk/reference/schema-file/entity-relations). + +## Applying the configuration + +Make sure your database is up, your Hasura instance is connected to it and the schema is up to date. If necessary, apply the migrations: +```bash +npx squid-typeorm-migration apply +``` + +When done, you can apply the generated config with +```bash +npx squid-hasura-configuration apply +``` +or import it using the _Settings > Metadata Actions > Import metadata_ function of the web GUI. + +## Persisting configuration changes + +:::warning +Regenerating `hasura_metadata.json` removes any modifications you might have made via metadata exporting. So, it is advisable that you finalize your schema before you begin any manual API fine-tuning. +::: + +When running a squid with a dedicated Hasura instance you will notice that squid resetting operations (`docker compose down; docker compose up -d` and `sqd deploy -r`) restore your Hasura API to its non-configured state. As you develop your API further you may want to persist your changes. `squid-hasura-configuration` helps with that by being compatible with the _Settings > Metadata Actions > Import/Export metadata_ functions of the web GUI. 
+ +![Web UI import and export](hasura-configuration-web-ui-import-export.png) + +Any extra configuration you may make via the web GUI or [Hasura metadata API](https://hasura.io/docs/2.0/api-reference/metadata-api/index) can be persisted by exporting the metadata to `hasura_metadata.json` via the _Export metadata_ function, then applying it to blank Hasura instances with +```bash +npx squid-hasura-configuration apply +``` + +## Example + +See [this repo](https://github.com/subsquid-labs/squid-hasura-example). diff --git a/docs/sdk/resources/basics/unfinalized-blocks.mdx b/docs/sdk/resources/unfinalized-blocks.mdx similarity index 100% rename from docs/sdk/resources/basics/unfinalized-blocks.mdx rename to docs/sdk/resources/unfinalized-blocks.mdx diff --git a/docs/sdk/troubleshooting.mdx b/docs/sdk/troubleshooting.mdx index c79aa4dc..347af5c9 100644 --- a/docs/sdk/troubleshooting.mdx +++ b/docs/sdk/troubleshooting.mdx @@ -69,13 +69,13 @@ PostgreSQL doesn't support storing `NULL (\0x00)` characters in text fields. Usu API queries are too slow - Make sure all the necessary fields are [indexed](/sdk/reference/schema-file/indexes-and-constraints/) -- Annotate the schema and [set reasonable limits](/sdk/resources/graphql-server/dos-protection/) for the incoming queries to protect against DoS attacks +- Annotate the schema and [set reasonable limits](/sdk/reference/openreader-server/configuration/dos-protection/) for the incoming queries to protect against DoS attacks
`response might exceed the size limit` -Make sure the input query has limits set or the entities are decorated with `@cardinality`. We recommend using `XXXConnection` queries for pagination. For configuring limits and max response sizes, see [DoS protection](/sdk/resources/graphql-server/dos-protection/). +Make sure the input query has limits set or the entities are decorated with `@cardinality`. We recommend using `XXXConnection` queries for pagination. For configuring limits and max response sizes, see [DoS protection](/sdk/reference/openreader-server/configuration/dos-protection/).
diff --git a/docs/sdk/tutorials/bayc/step-four-optimizations.md b/docs/sdk/tutorials/bayc/step-four-optimizations.md index 7a1d1f4b..64b102e7 100644 --- a/docs/sdk/tutorials/bayc/step-four-optimizations.md +++ b/docs/sdk/tutorials/bayc/step-four-optimizations.md @@ -15,7 +15,7 @@ Pre-requisites: Node.js, [Subsquid CLI](/squid-cli/installation), Docker, a proj ## Using Multicall for aggregating state queries -We begin by introducing [batch processing](/sdk/resources/basics/batch-processing/) wherever possible, and our first step is to replace individual contract state queries with [batch calls](/sdk/resources/tools/typegen/state-queries/#batch-state-queries) to a [MakerDAO multicall contract](https://github.com/mds1/multicall). Retrieve the multicall contract ABI by re-running `squid-evm-typegen` with `--multicall` option: +We begin by introducing [batch processing](/sdk/resources/batch-processing/) wherever possible, and our first step is to replace individual contract state queries with [batch calls](/sdk/resources/tools/typegen/state-queries/#batch-state-queries) to a [MakerDAO multicall contract](https://github.com/mds1/multicall). Retrieve the multicall contract ABI by re-running `squid-evm-typegen` with `--multicall` option: ```bash npx squid-evm-typegen --multicall src/abi 0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#bayc ``` diff --git a/docs/sdk/tutorials/bayc/step-one-indexing-transfers.md b/docs/sdk/tutorials/bayc/step-one-indexing-transfers.md index aedd2c17..dc60e470 100644 --- a/docs/sdk/tutorials/bayc/step-one-indexing-transfers.md +++ b/docs/sdk/tutorials/bayc/step-one-indexing-transfers.md @@ -195,7 +195,7 @@ Note a few things here: * A unique event log ID is available at `log.id` - no need to generate your own! * `tokenId` returned from the decoder is an `ethers.BigNumber`, so it has to be explicitly converted to `number`. 
The conversion is valid only because we know that BAYC NFT IDs run from 0 to 9999; in most cases we would use `BigInt` for the entity field type and convert with `tokenId.toBigInt()`. * `block.header` contains block metadata that we use to fill the extra fields. -* Accumulating the `Transfer` entity instances before using `ctx.store.insert()` on the whole array of them in the end allows us to get away with just one database transaction per batch. This is [crucial for achieving a good syncing performance](/sdk/resources/basics/batch-processing/). +* Accumulating the `Transfer` entity instances before using `ctx.store.insert()` on the whole array of them in the end allows us to get away with just one database transaction per batch. This is [crucial for achieving a good syncing performance](/sdk/resources/batch-processing/). At this point we have a squid that indexes the data on BAYC token transfers and is capable of serving it over a GraphQL API. Full code is available at [this commit](https://github.com/subsquid-labs/bayc-squid-1/tree/aeb6268168385cc605ce04fe09d0159f708efe47). Test it by running ```bash diff --git a/docs/sdk/tutorials/bayc/step-three-adding-external-data.md b/docs/sdk/tutorials/bayc/step-three-adding-external-data.md index 4bfca3eb..cb4a2513 100644 --- a/docs/sdk/tutorials/bayc/step-three-adding-external-data.md +++ b/docs/sdk/tutorials/bayc/step-three-adding-external-data.md @@ -19,7 +19,7 @@ Now that we have a record for each BAYC NFT, let's explore how we can retrieve m [EIP-721](https://eips.ethereum.org/EIPS/eip-721) suggests that token metadata contracts may make token data available in a JSON referred to by the output of the `tokenURI()` contract function. Upon examining `src/abi/bayc.ts`, we find that the BAYC token contract implements this function. Also, the public ABI has no obvious contract methods that may set token URI or events that may be emitted on its change. 
In other words, it appears that the only way to retrieve this data is by [querying the contract state](/sdk/resources/tools/typegen/state-queries/).
-This requires a RPC endpoint of an archival Ethereum node, but we do not need to add one here: processor will reuse the endpoint we [supplied in part one](../step-one-indexing-transfers/#configuring-the-data-filters) of the tutorial for use in [RPC ingestion](/sdk/resources/basics/unfinalized-blocks).
+This requires an RPC endpoint of an archival Ethereum node, but we do not need to add one here: the processor will reuse the endpoint we [supplied in part one](../step-one-indexing-transfers/#configuring-the-data-filters) of the tutorial for use in [RPC ingestion](/sdk/resources/unfinalized-blocks).

 The next step is to prepare for retrieving and parsing the metadata proper. For this, we need to understand the protocols used in the URIs and the structure of metadata JSONs. To learn that, we retrieve and inspect some URIs ahead of the main squid sync. The most straightforward way to achieve this is by adding the following to the batch handler:
 ```diff title="src/main.ts"
@@ -151,7 +151,7 @@ interface PartialToken {
 }
 ```

-Here, `PartialToken` stores the incomplete `Token` information obtained purely from blockchain events and function calls, before any [state queries](/sdk/resources/tools/typegen/state-queries/) or enrichment with [external data](/sdk/resources/basics/external-api/).
+Here, `PartialToken` stores the incomplete `Token` information obtained purely from blockchain events and function calls, before any [state queries](/sdk/resources/tools/typegen/state-queries/) or enrichment with [external data](/sdk/resources/external-api/).

 The function `completeTokens()` is responsible for filling `Token` fields that are missing in `PartialToken`s. This involves IO operations, so both the function and its caller `createTokens()` have to be asynchronous. The functions also require a batch context for state queries and logging.
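The shape of such a function can be sketched outside the SDK. Everything below (the `fetchMetadata` stub, the field names) is an illustrative stand-in, not the tutorial's actual code:

```typescript
interface PartialToken {
  id: string
  tokenId: number
}

interface Token extends PartialToken {
  image: string
}

// Stand-in for the real metadata retrieval (an HTTP/IPFS fetch in the tutorial)
async function fetchMetadata(tokenId: number): Promise<{image: string}> {
  return {image: `ipfs://hypothetical-cid/${tokenId}`}
}

// Fill the Token fields missing from PartialTokens;
// async because each completion involves IO
async function completeTokens(partials: PartialToken[]): Promise<Token[]> {
  return Promise.all(
    partials.map(async p => ({...p, ...(await fetchMetadata(p.tokenId))}))
  )
}
```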
We modify the `createTokens()` call in the batch handler to accommodate these changes: ```diff processor.run(new TypeormDatabase(), async (ctx) => { diff --git a/docs/sdk/tutorials/bayc/step-two-deriving-owners-and-tokens.md b/docs/sdk/tutorials/bayc/step-two-deriving-owners-and-tokens.md index 94448804..f114b593 100644 --- a/docs/sdk/tutorials/bayc/step-two-deriving-owners-and-tokens.md +++ b/docs/sdk/tutorials/bayc/step-two-deriving-owners-and-tokens.md @@ -99,7 +99,7 @@ Note how the entities we define form an acyclic dependency graph: As a consequence, the creation of entity instances must proceed in a [particular order](https://en.wikipedia.org/wiki/Topological_sorting). Squids usually use small graphs like this one, and in these the order can be easily found manually (e.g. `Owner`s then `Token`s then `Transfer`s in this case). We will assume that it can be hardcoded by the programmer. -Further, at each step we will [process the data for the whole batch](/sdk/resources/basics/batch-processing/) instead of handling the items individually. This is crucial for achieving a good syncing performance. +Further, at each step we will [process the data for the whole batch](/sdk/resources/batch-processing/) instead of handling the items individually. This is crucial for achieving a good syncing performance. With all that in mind, let's create a batch processor that generates and persists all of our entities: diff --git a/docs/sdk/tutorials/frontier-evm.md b/docs/sdk/tutorials/frontier-evm.md index cf089727..9bce1eb4 100644 --- a/docs/sdk/tutorials/frontier-evm.md +++ b/docs/sdk/tutorials/frontier-evm.md @@ -100,7 +100,7 @@ The results will be stored at `src/abi`. One module will be generated for each A Subsquid SDK provides users with the [`SubstrateBatchProcessor` class](/sdk). Its instances connect to [Subsquid Network](/subsquid-network/overview) gateways at chain-specific URLs, to get chain data and apply custom transformations. 
The indexing begins at the starting block and keeps up with new blocks after reaching the tip. -`SubstrateBatchProcessor`s [expose methods](/sdk/reference/processors/substrate-batch) that "subscribe" them to specific data such as Substrate events and calls. There are also [specialized methods](/sdk/resources/substrate/frontier-evm) for subscribing to EVM logs and transactions by address. The actual data processing is then started by calling the `.run()` function. This will start generating requests to the Subsquid Network gateway for [*batches*](/sdk/resources/basics/batch-processing) of data specified in the configuration, and will trigger the callback function, or *batch handler* (passed to `.run()` as second argument) every time a batch is returned by the gateway. +`SubstrateBatchProcessor`s [expose methods](/sdk/reference/processors/substrate-batch) that "subscribe" them to specific data such as Substrate events and calls. There are also [specialized methods](/sdk/resources/substrate/frontier-evm) for subscribing to EVM logs and transactions by address. The actual data processing is then started by calling the `.run()` function. This will start generating requests to the Subsquid Network gateway for [*batches*](/sdk/resources/batch-processing) of data specified in the configuration, and will trigger the callback function, or *batch handler* (passed to `.run()` as second argument) every time a batch is returned by the gateway. It is in this callback function that all the mapping logic is expressed. This is where chain data decoding should be implemented, and where the code to save processed data on the database should be defined. 
diff --git a/docs/sdk/tutorials/ink.md b/docs/sdk/tutorials/ink.md
index 1b5d8494..23a971ee 100644
--- a/docs/sdk/tutorials/ink.md
+++ b/docs/sdk/tutorials/ink.md
@@ -190,7 +190,7 @@ export type ProcessorContext = DataHandlerContext

## Define the batch handler

-Once requested, the events can be processed by calling the `.run()` function that starts generating requests to Subsquid Network for [*batches*](/sdk/resources/basics/batch-processing) of data.
+Once requested, the events can be processed by calling the `.run()` function that starts generating requests to Subsquid Network for [*batches*](/sdk/resources/batch-processing) of data.

Every time a batch is returned by the Network, it will trigger the callback function, or *batch handler* (passed to `.run()` as second argument). It is in this callback function that all the mapping logic is expressed. This is where chain data decoding should be implemented, and where the code to save processed data on the database should be defined.
diff --git a/docs/sdk/tutorials/substrate.md b/docs/sdk/tutorials/substrate.md
index 2af9cd99..26051e83 100644
--- a/docs/sdk/tutorials/substrate.md
+++ b/docs/sdk/tutorials/substrate.md
@@ -191,7 +191,7 @@ type Fields = SubstrateBatchProcessorFields
export type ProcessorContext = DataHandlerContext
```
This creates a processor that
-  - Uses Subsquid Network as its main data source and a chain RPC for [real-time updates](/sdk/resources/basics/unfinalized-blocks). URLs of the Subsquid Network gateways are available on [this page](/subsquid-network/reference/substrate-networks) and via [`sqd gateways`](/squid-cli/gateways). See [this page](/sdk/reference/processors/substrate-batch/general) for the reference on data sources configuration;
+  - Uses Subsquid Network as its main data source and a chain RPC for [real-time updates](/sdk/resources/unfinalized-blocks). URLs of the Subsquid Network gateways are available on [this page](/subsquid-network/reference/substrate-networks) and via [`sqd gateways`](/squid-cli/gateways). See [this page](/sdk/reference/processors/substrate-batch/general) for the reference on data sources configuration;
  - [Subscribes](/sdk/reference/processors/substrate-batch/data-requests) to `Market.FileSuccess`, `Swork.JoinGroupSuccess` and `Swork.WorksReportSuccess` events emitted at heights starting at 583000;
  - Additionally subscribes to calls that emitted the events and the corresponding extrinsics;
  - [Requests](/sdk/reference/processors/substrate-batch/field-selection) the `hash` data field for all retrieved extrinsics and the `timestamp` field for all block headers.
@@ -200,7 +200,7 @@ We also export the `ProcessorContext` type to be able to pass the sole argument

## Define the batch handler

-Squids [batch process](/sdk/resources/basics/batch-processing) chain data from multiple blocks. Compared to the [handlers](/sdk/resources/basics/batch-processing/#migrate-from-handlers) approach this results in a much lower database load. Batch processing is fully defined by processor's [batch handler](/sdk/reference/processors/architecture/#processorrun), the callback supplied to the `processor.run()` call at the entry point of each processor (`src/main.ts` by convention).
+Squids [batch process](/sdk/resources/batch-processing) chain data from multiple blocks. Compared to the [handlers](/sdk/resources/batch-processing/#migrate-from-handlers) approach this results in a much lower database load. Batch processing is fully defined by processor's [batch handler](/sdk/reference/processors/architecture/#processorrun), the callback supplied to the `processor.run()` call at the entry point of each processor (`src/main.ts` by convention).

We begin defining our batch handler by importing the entity model classes and Crust event types that we generated in previous sections.
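The database-load argument in the hunk above can be illustrated with a small, SDK-free TypeScript sketch: a per-item handler issues one write per data item, while a batch handler collects entities for the whole batch in memory and persists them with a single call. `FakeStore` and the transfer data are made up for illustration and are not SDK APIs:

```typescript
interface Transfer { from: string; to: string; amount: bigint }

// Counts write round trips instead of talking to a real database.
class FakeStore {
  writes = 0
  save(rows: string[]): void {
    this.writes++ // one round trip regardless of how many rows are passed
  }
}

const transfers: Transfer[] = [
  {from: 'alice', to: 'bob', amount: 10n},
  {from: 'bob', to: 'carol', amount: 5n},
  {from: 'alice', to: 'carol', amount: 7n},
]

// Handler style: one save() per item, so one round trip each.
function handlerStyle(store: FakeStore): void {
  for (const t of transfers) store.save([t.from, t.to])
}

// Batch style: collect the unique owners for the whole batch first,
// then persist everything with a single save().
function batchStyle(store: FakeStore): void {
  const owners = new Set<string>()
  for (const t of transfers) {
    owners.add(t.from)
    owners.add(t.to)
  }
  store.save([...owners])
}

const perItem = new FakeStore()
handlerStyle(perItem) // 3 writes
const batched = new FakeStore()
batchStyle(batched) // 1 write for the same data
```

The same pattern scales to real squids: the larger the batch, the more round trips (and redundant upserts of the same entity) the batch handler saves compared to per-item handlers.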
We also import the processor and its types:
diff --git a/docs/solana-indexing/sdk/solana-batch/context-interfaces.md b/docs/solana-indexing/sdk/solana-batch/context-interfaces.md
index 2288b643..a4f66605 100644
--- a/docs/solana-indexing/sdk/solana-batch/context-interfaces.md
+++ b/docs/solana-indexing/sdk/solana-batch/context-interfaces.md
@@ -43,7 +43,7 @@ The exact fields available in each data item type are inferred from the `setFiel