docker build . -t data-pipelines:latest
Clone your Anchor repo (if capturing events for Anchor); otherwise, follow similar steps for your Solana setup.
Run
anchor localnet
Then upload your IDL(s) with
anchor idl init <program-id> --filepath <idl.json> --provider.cluster localnet
First, update ACCOUNTS and ANCHOR_IDLS in docker-compose.yml to the programs you would like to capture and the Anchor programs you would like to parse.
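As a hedged illustration only (the service name, program ID, and the exact value format of these variables are assumptions; check docker-compose.yml itself for the real shape), the overrides might look like:

```yaml
# Hypothetical sketch; substitute your own program IDs and IDL settings.
services:
  event-transformer:
    environment:
      ACCOUNTS: "<program-id>"
      ANCHOR_IDLS: "<program-id>"
```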
Run
docker-compose up
in this repo. You can also run a subset of services; for example, to run only up to the event transformer:
docker-compose up event-transformer
If you're doing local dev for strata, you'll want our leaderboards.
First, clone strata-api and build it:
cd strata-api && docker build . -t strata-api:latest
cd strata-compose && docker-compose up
See (and render) architecture.puml for a bird's-eye view of the system.
Identifies contiguous Solana slots and pushes them to a heavily partitioned Kafka topic.
This utility pulls the block for each contiguous Solana slot (as identified by the slot identifier) and inserts it into S3. It then sends an event pointing to that S3 location to Kafka; we avoid sending the full block to Kafka because it may be too large a message.
Note that because the slot identifier's topic is partitioned, we can horizontally scale this uploader to as many instances as there are partitions. We found we needed 3-4 to keep up with mainnet.
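For illustration only (the field names and layout below are assumptions, not the actual message schema), the Kafka event pointing at an uploaded block might look something like:

```json
{
  "slot": 123456789,
  "bucket": "solana-blocks",
  "key": "blocks/123456789.json"
}
```

Keeping only a pointer in Kafka keeps messages small while the full block lives in S3.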
Reads the events from the S3 Block Uploader's Kafka topic, pulls the blocks from S3, and transforms the transaction data into usable JSON events. Each event has common fields like type, blockTime, and slot.
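A minimal sketch of one such event (only type, blockTime, and slot are named above; the type value shown here is hypothetical):

```json
{
  "type": "TokenTransfer",
  "blockTime": 1650000000,
  "slot": 123456789
}
```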
This gives us a fat topic of all events occurring on the blockchain.
Looking at ksql/, you can see all of our ksqlDB queries. These queries turn the firehose of the json.solana.events topic into useful tables and streams.
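As a rough sketch of the kind of query involved (stream names, columns, and filter values below are assumptions; see ksql/ for the real queries), a ksqlDB statement over the firehose topic looks like:

```sql
-- Hypothetical sketch: declare a stream over the raw events topic,
-- then derive a narrower stream from it. See ksql/ for the actual queries.
CREATE STREAM solana_events (type VARCHAR, blockTime BIGINT, slot BIGINT)
  WITH (KAFKA_TOPIC = 'json.solana.events', VALUE_FORMAT = 'JSON');

CREATE STREAM token_events AS
  SELECT * FROM solana_events
  WHERE type = 'TokenTransfer';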
The main use case for these streams right now is building leaderboards: both holders of individual accounts and top-tokens leaderboards.
These read from the streams generated by ksqlDB and insert the results into Redis sorted sets, so that we can power a fast GraphQL API.
You should use the strata-terraform repo to deploy the full pipeline. We use app.terraform.io to provision and launch Terraform objects on AWS.
Boot up Docker Compose, excluding the services you don't need, by passing the services you want as arguments:
docker-compose up minio kafka redis kowl
Now you can launch whatever utility you want using the VS Code tasks that exist for this purpose.
You can use Kowl at localhost:8080 to see what's going into the topics.
To test trophy sending, you can run:
jq -rc . tests/resources/trophy.json | kafka-console-producer.sh --topic json.solana.trophies --bootstrap-server localhost:29092
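The `jq -rc .` in the command above compacts the JSON file to a single line so the console producer sends it as one record. A quick way to see the effect, using an inline document rather than the test fixture (the field values here are made up):

```shell
# Compact pretty-printed JSON to one line per document, as jq -rc does above.
echo '{
  "type": "Trophy",
  "slot": 1234
}' | jq -rc .
# → {"type":"Trophy","slot":1234}
```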