Maintains a full copy of the LTO public blockchain, plus derived data such as address balances, in an off-chain database.
- Node.js v8+
- A database supported by knex.js (only MySQL has been tested)
- RabbitMQ 3.6+ with Erlang 20+
- RabbitMQ Delayed Message Exchange Plugin
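The Delayed Message Exchange plugin requirement suggests the collector delays (re)processing of some queued messages. Below is a minimal sketch of how such an exchange is declared and used with amqplib; the exchange name, routing key, and delay value are illustrative assumptions, not the collector's actual configuration.

```js
const amqp = require('amqplib');

async function setupDelayedExchange() {
  const conn = await amqp.connect(
    `amqp://${process.env.RABBITMQ_USER}:${process.env.RABBITMQ_PASS}@${process.env.RABBITMQ_HOST || 'localhost'}`
  );
  const ch = await conn.createChannel();

  // The plugin adds the 'x-delayed-message' exchange type;
  // 'x-delayed-type' decides how messages are routed once the delay expires.
  await ch.assertExchange('lto.delayed', 'x-delayed-message', {
    durable: true,
    arguments: { 'x-delayed-type': 'direct' },
  });

  // Messages carrying an 'x-delay' header (in milliseconds) are held back by the plugin.
  ch.publish('lto.delayed', 'blocks', Buffer.from(JSON.stringify({ height: 123 })), {
    headers: { 'x-delay': 5000 },
  });
}

setupDelayedExchange().catch(console.error);
```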
- The block collector scrapes blocks from the public blockchain at a defined interval, in batches of up to 99 blocks (see the sketch after this list).
- The collected blocks are queued and processed at a 1:1 ratio in chronological order.
- Block `n` is processed by the key block consumer, which stores the block in the database.
- Block `n-1` is processed simultaneously by the micro block consumer, which stores all transactions belonging to the block in the database.
- Block `n-100` is processed by the verify block consumer, which validates the block's signature.
- In case of a signature mismatch, the block collector rewinds (think of a database rollback) to the last healthy block and continues.
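A minimal sketch of the collector/queue side of this pipeline is shown below. It assumes Waves-style node endpoints (`/blocks/height` and `/blocks/seq/{from}/{to}`), the `axios` and `amqplib` packages, and an illustrative queue name `blocks`; the actual collector may differ on all of these.

```js
const axios = require('axios');
const amqp = require('amqplib');

// Assumed HTTPS scheme and Waves-style REST paths on the public node.
const NODE = `https://${process.env.NODE_ADDRESS || 'nodes.lto.network'}`;

async function collect(channel, lastHeight) {
  const timeout = Number(process.env.TIMEOUT || 10000);
  const { data } = await axios.get(`${NODE}/blocks/height`, { timeout });

  // Scrape at most 99 blocks per pass.
  const to = Math.min(lastHeight + 99, data.height);
  if (to <= lastHeight) return lastHeight; // nothing new yet

  const { data: blocks } = await axios.get(`${NODE}/blocks/seq/${lastHeight + 1}/${to}`, { timeout });

  // Queue blocks in chronological order; downstream consumers handle them 1:1
  // (key block at n, transactions at n-1, signature verification at n-100).
  for (const block of blocks) {
    channel.sendToQueue('blocks', Buffer.from(JSON.stringify(block)), { persistent: true });
  }
  return to;
}

async function main() {
  const conn = await amqp.connect(
    `amqp://${process.env.RABBITMQ_USER}:${process.env.RABBITMQ_PASS}@${process.env.RABBITMQ_HOST || 'localhost'}`
  );
  const channel = await conn.createChannel();
  await channel.assertQueue('blocks', { durable: true });

  let height = 0; // in practice, resumed from the last block stored in the database
  setInterval(async () => {
    try {
      height = await collect(channel, height);
    } catch (err) {
      console.error('collector pass failed:', err.message);
    }
  }, Number(process.env.COLLECTOR_INTERVAL || 10000));
}

main().catch(console.error);
```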
- Set up a database
- Set up `.env` (see the example below)
- Run `npm run start`
- Set `VERIFY_CACHE=0`
- Set `UPDATE_ADDRESS=0` if you are not interested in address balances and creation dates
- `COLLECTOR_INTERVAL` can be lowered at the start, but this is not recommended after block 50,000 (see the sketch below)
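A rough idea of how these flags could be consumed, assuming `dotenv` and the variable names from the `.env` example below; the defaults shown are guesses, not necessarily the collector's actual ones.

```js
require('dotenv').config();

const options = {
  verifyCache: process.env.VERIFY_CACHE !== '0',     // disabled by VERIFY_CACHE=0
  updateAddress: process.env.UPDATE_ADDRESS !== '0', // skips balances/creation dates when 0
  collectorInterval: Number(process.env.COLLECTOR_INTERVAL || 10000), // ms between scrape passes
};

console.log(options);
```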
Consider using pm2 to keep the collector running, and enable the RabbitMQ management dashboard for an overview of the queues.
DB_TYPE=mysql
DB_HOST=localhost
DB_PORT=3306
DB_USER=
DB_PASS=
DB_NAME=
RABBITMQ_HOST=localhost
RABBITMQ_USER=
RABBITMQ_PASS=
NODE_ADDRESS=nodes.lto.network
COLLECTOR_INTERVAL=10000
VERIFY_CACHE=1
VERIFY_INTERVAL=3000000
UPDATE_ADDRESS=1
TIMEOUT=10000
ATOMIC_NUMBER=100000000
*If `nodes.lto.network` is unavailable, try `node.lto.cloud` instead.
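For reference, a minimal sketch of wiring the values above into knex and amqplib; pool sizing, migrations, and error handling are omitted, so treat it as illustrative only.

```js
require('dotenv').config();
const knex = require('knex');
const amqp = require('amqplib');

// Database handle built from the .env values; 'mysql' is the only tested client.
const db = knex({
  client: process.env.DB_TYPE,
  connection: {
    host: process.env.DB_HOST,
    port: Number(process.env.DB_PORT),
    user: process.env.DB_USER,
    password: process.env.DB_PASS,
    database: process.env.DB_NAME,
  },
});

// RabbitMQ connection built from the same file.
async function connectQueue() {
  return amqp.connect(
    `amqp://${process.env.RABBITMQ_USER}:${process.env.RABBITMQ_PASS}@${process.env.RABBITMQ_HOST}`
  );
}

module.exports = { db, connectQueue };
```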