Token information from the Antelope blockchains, powered by Substreams
| Method | Path | Query parameters (* = Required) | Description |
|---|---|---|---|
| GET `text/html` | `/` | - | Swagger API playground |
| GET `application/json` | `/balance` | `account*` `contract` `symcode` `limit` `page` | Balances of an account |
| GET `application/json` | `/balance/historical` | `account*` `block_num` `contract` `symcode` `limit` `page` | Historical token balances |
| GET `application/json` | `/head` | `limit` `page` | Head block information |
| GET `application/json` | `/holders` | `contract*` `symcode*` `limit` `page` | List of holders of a token |
| GET `application/json` | `/supply` | `block_num` `issuer` `contract*` `symcode*` `limit` `page` | Total supply for a token |
| GET `application/json` | `/tokens` | `limit` `page` | List of available tokens |
| GET `application/json` | `/transfers` | `block_range` `contract*` `symcode*` `limit` `page` | All transfers related to a token |
| GET `application/json` | `/transfers/account` | `account*` `block_range` `from` `to` `contract` `symcode` `limit` `page` | All transfers related to an account |
| GET `application/json` | `/transfers/id` | `trx_id*` `limit` `page` | Specific transfer related to a token |
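As a quick sketch of the request shape (assuming the server's default `localhost:8080` and an illustrative account name):

```console
# Fetch the current token balances of an account (account name is illustrative).
curl "http://localhost:8080/balance?account=eosio&limit=10&page=1"
```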
| Method | Path | Description |
|---|---|---|
| GET `application/json` | `/openapi` | OpenAPI specification |
| GET `application/json` | `/version` | API version and Git short commit hash |
| Method | Path | Description |
|---|---|---|
| GET `text/plain` | `/health` | Checks database connection |
| GET `text/plain` | `/metrics` | Prometheus metrics |
Use the **Variables** tab at the bottom to add your API key:

```json
{
  "X-Api-Key": "changeme"
}
```
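Outside the playground, the same key can be sent as a request header (a sketch; `changeme` is a placeholder for your actual key):

```console
# Pass the API key as the X-Api-Key request header (placeholder value).
curl -H "X-Api-Key: changeme" "http://localhost:8080/head"
```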
- For the `block_range` parameter in `transfers`, you can pass a single integer value (low bound) or an array of two values (inclusive range), as shown in the example below.
- Use the `from` and `to` fields for transfers of an account to further filter the results (i.e. incoming or outgoing transactions from/to another account).
- Don't forget to request the `meta` fields in the response to get access to pagination and statistics!
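For example, fetching transfers with the single-value (low bound) form of `block_range` looks like the sketch below; parameter values are illustrative, and the exact encoding of the two-value array form is documented in the OpenAPI spec at `/openapi`:

```console
# EOS transfers from block 380000000 onward (single value = inclusive low bound).
# The response's "meta" fields carry pagination and statistics.
curl "http://localhost:8080/transfers?block_range=380000000&contract=eosio.token&symcode=EOS&limit=10"
```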
- ClickHouse: databases should follow a `{chain}_tokens_{version}` naming scheme. Database tables can be set up using the `schema.sql` definitions created by the `create_schema.sh` script.
- A Substreams sink for loading data into ClickHouse. We recommend Substreams Sink ClickHouse or Substreams Sink SQL. This Token API makes use of the `substreams-antelope-tokens` substream.
Example of how to set up the ClickHouse backend for sinking EOS data.
1. Start the ClickHouse server

```console
clickhouse server
```

2. Create the token database

```console
echo "CREATE DATABASE eos_tokens_v1" | clickhouse client -h <host> --port 9000 -d <database> -u <user> --password <password>
```

3. Run the `create_schema.sh` script

```console
./create_schema.sh -o /tmp/schema.sql
```

4. Execute the schema

```console
cat /tmp/schema.sql | clickhouse client -h <host> --port 9000 -d <database> -u <user> --password <password>
```

5. Run the sink

```console
substreams-sink-sql run clickhouse://<username>:<password>@<host>:9000/eos_tokens_v1 \
https://github.com/pinax-network/substreams-antelope-tokens/releases/download/v0.4.0/antelope-tokens-v0.4.0.spkg `#Substreams package` \
-e eos.substreams.pinax.network:443 `#Substreams endpoint` \
1: `#Block range <start>:<end>` \
--final-blocks-only --undo-buffer-size 1 --on-module-hash-mistmatch=warn --batch-block-flush-interval 100 --development-mode `#Additional flags`
```

6. Start the API

```console
# Will be available on localhost:8080 by default
antelope-token-api --host <host> --database eos_tokens_v1 --username <username> --password <password> --verbose
```
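Once the API is running, you can verify the database connection through the `/health` endpoint (default port shown):

```console
curl http://localhost:8080/health
```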
If you run ClickHouse in a cluster, change steps 2 and 3:

2. Create the token database

```console
echo "CREATE DATABASE eos_tokens_v1 ON CLUSTER <cluster>" | clickhouse client -h <host> --port 9000 -d <database> -u <user> --password <password>
```

3. Run the `create_schema.sh` script

```console
./create_schema.sh -o /tmp/schema.sql -c <cluster>
```
> **Warning**: Linux x86 only

```console
$ wget https://github.com/pinax-network/antelope-token-api/releases/download/v4.0.0/antelope-token-api
$ chmod +x ./antelope-token-api
$ ./antelope-token-api --help
Usage: antelope-token-api [options]

Token balances, supply and transfers from the Antelope blockchains

Options:
  -V, --version            output the version number
  -p, --port <number>      HTTP port on which to attach the API (default: "8080", env: PORT)
  --hostname <string>      Server listen on HTTP hostname (default: "localhost", env: HOSTNAME)
  --host <string>          Database HTTP hostname (default: "http://localhost:8123", env: HOST)
  --database <string>      The database to use inside ClickHouse (default: "default", env: DATABASE)
  --username <string>      Database user (default: "default", env: USERNAME)
  --password <string>      Password associated with the specified username (default: "", env: PASSWORD)
  --max-limit <number>     Maximum LIMIT queries (default: 10000, env: MAX_LIMIT)
  -v, --verbose <boolean>  Enable verbose logging (choices: "true", "false", default: false, env: VERBOSE)
  -h, --help               display help for command
```
Each option can also be set through the environment variable listed in the help output above, for example via an `.env` file:

```env
# API Server
PORT=8080
HOSTNAME=localhost

# ClickHouse Database
HOST=http://127.0.0.1:8123
DATABASE=default
USERNAME=default
PASSWORD=
MAX_LIMIT=500

# Logging
VERBOSE=true
```
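One way to feed these to the binary without passing any flags is to auto-export them while sourcing the file (a shell sketch; any standard method of exporting the variables works):

```console
# Auto-export every variable defined in .env, then start the API with no flags.
set -a; source .env; set +a
antelope-token-api
```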
- Pull from GitHub Container registry

For latest tagged release:

```console
docker pull ghcr.io/pinax-network/antelope-token-api:latest
```

For head of `main` branch:

```console
docker pull ghcr.io/pinax-network/antelope-token-api:develop
```

- Build from source

```console
docker build -t antelope-token-api .
```

- Run with `.env` file

```console
docker run -it --rm --env-file .env ghcr.io/pinax-network/antelope-token-api
```
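To reach the API from the host, you will typically also want to publish the listening port (a sketch assuming the default `8080`; adjust if you changed `PORT`):

```console
# Publish the default port; the server must listen on a non-loopback address
# inside the container to be reachable (e.g. HOSTNAME=0.0.0.0 in .env).
docker run -it --rm --env-file .env -p 8080:8080 ghcr.io/pinax-network/antelope-token-api
```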
See `CONTRIBUTING.md`.

Install Bun, then:

```console
bun install
bun dev
```

Tests:

```console
bun lint
bun test
```