Merge develop to mainnet-backend (20240927) #219

Merged — 43 commits from develop into mainnet-backend, Sep 27, 2024.
Commits
bbc3172
fix(frontend): btc transaction query address field (#197)
Vibes-INS Sep 25, 2024
a7674b2
feat: use cluster and add redis for bullmq queue
ahonn Sep 25, 2024
3fe6e2f
fix: remove scheduler registry
ahonn Sep 25, 2024
01830d0
ci: update backend test
ahonn Sep 25, 2024
143021d
Merge pull request #198 from utxostack/feat/cluster-n-redis
ahonn Sep 25, 2024
989c017
ci: update docker-compose.yaml
ahonn Sep 25, 2024
08dd3c9
Merge pull request #199 from utxostack/feat/cluster-n-redis
ahonn Sep 25, 2024
4b23843
fix: improve caching and add dataloader maxBatchSize
ahonn Sep 26, 2024
81f13b0
feat: improve complexity handling and Sentry logging
ahonn Sep 26, 2024
fb58c39
Merge pull request #200 from utxostack/fix/cache-n-dataloader
ahonn Sep 26, 2024
0ea5268
feat: add resolver field complexity
ahonn Sep 26, 2024
9808361
Merge pull request #201 from utxostack/fix/cache-n-dataloader
ahonn Sep 26, 2024
1972bad
feat: improve caching for get_transaction/get_transactions
ahonn Sep 26, 2024
b0aa67b
Merge pull request #202 from utxostack/fix/cache-n-dataloader
ahonn Sep 26, 2024
3ece2ef
perf(frontend): cache and query (#175)
Vibes-INS Sep 26, 2024
7b0d851
refactor(indexer): improve asset indexing and health check
ahonn Sep 26, 2024
f42dc5b
Merge pull request #204 from utxostack/fix/cache-n-dataloader
ahonn Sep 26, 2024
de708d8
feat: add cluster service and add p-limit for bitcoin api
ahonn Sep 26, 2024
a5eaccb
Merge pull request #205 from utxostack/feat/cluster
ahonn Sep 26, 2024
35f7b5f
feat(cluster): add configurable number of worker processes
ahonn Sep 26, 2024
ad255c7
Merge pull request #206 from utxostack/feat/cluster
ahonn Sep 26, 2024
d90433e
feat(frontend): disabled prefetch (#203)
Vibes-INS Sep 26, 2024
928e9fb
feat: implement Apollo Server caching and add logging plugin
ahonn Sep 26, 2024
8a59cf6
feat: add graphql cache redis backend
ahonn Sep 26, 2024
eae0c44
chore: remove console.log
ahonn Sep 26, 2024
e057ef0
Merge pull request #208 from utxostack/feat/cache-control
ahonn Sep 26, 2024
40a7850
fix: fix health check loggin plugin error
ahonn Sep 26, 2024
64aa6ab
Merge pull request #209 from utxostack/feat/cache-control
ahonn Sep 26, 2024
4c20743
feat(backend): integrate Redis-based throttler storage
ahonn Sep 26, 2024
ca9b1da
Merge pull request #210 from utxostack/fix/rate-limit-storage
ahonn Sep 26, 2024
72a32c7
refactor: refactor cacheable decorator and implement cron jobs for in…
ahonn Sep 27, 2024
7ba2c77
chore: remove nestjs-cacheable
ahonn Sep 27, 2024
0230378
Merge pull request #213 from utxostack/fix/cacheable
ahonn Sep 27, 2024
a339ecc
fix: remove nestjs-cacheable
ahonn Sep 27, 2024
83d5610
Merge pull request #214 from utxostack/fix/cacheable
ahonn Sep 27, 2024
8c5add2
fix(frontend): fix background (#215)
Vibes-INS Sep 27, 2024
ed1acea
perf(graphql): improve complexity calculation for list fields
ahonn Sep 27, 2024
b85595e
Merge pull request #217 from utxostack/pref/complexity
ahonn Sep 27, 2024
f374e46
perf(frontend): pref query (#216)
Vibes-INS Sep 27, 2024
eba194c
chore: update default graphql complexity limit
ahonn Sep 27, 2024
06378e3
Merge pull request #218 from utxostack/pref/complexity
ahonn Sep 27, 2024
7b7168c
bump(backend): v0.2.0
ahonn Sep 27, 2024
ca2ca10
Merge pull request #221 from utxostack/bump/0.2.0
ahonn Sep 27, 2024
14 changes: 12 additions & 2 deletions .github/workflows/backend-test.yml
@@ -14,7 +14,7 @@ jobs:
runs-on: ubuntu-latest

services:
redis:
redis-cache:
image: redis
options: >-
--health-cmd "redis-cli ping"
@@ -23,6 +23,15 @@
--health-retries 5
ports:
- 6379:6379
redis-queue:
image: redis
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 6380:6379

steps:
- name: Checkout code
@@ -57,7 +66,8 @@ jobs:
echo BITCOIN_ELECTRS_API_URL="${{ secrets.BITCOIN_ELECTRS_API_URL }}" >> .env
echo CKB_EXPLORER_API_URL="${{ secrets.CKB_EXPLORER_API_URL }}" >> .env
echo CKB_RPC_WEBSOCKET_URL="${{ secrets.CKB_RPC_WEBSOCKET_URL }}" >> .env
echo REDIS_URL="redis://localhost:6379" >> .env
echo REDIS_CACHE_URL="redis://localhost:6379" >> .env
echo REDIS_QUEUE_URL="redis://localhost:6380" >> .env
echo DATABASE_URL="postgres://postgres:postgres@postgres:5432/explorer?sslmode=disable" >> .env
cat .env
pnpm run test
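The workflow above now provisions two Redis services and writes two distinct URLs (`REDIS_CACHE_URL`, `REDIS_QUEUE_URL`) into `.env`. The backend's actual env validation lives in `src/env.ts` (a zod schema not included in this diff); as a hedged sketch, reading the two URLs with defaults matching the workflow could look like this — names and fallbacks here are assumptions, not the repo's real schema:

```typescript
// Hypothetical sketch of reading the two Redis URLs introduced by this PR.
// The real project validates process.env with a zod schema in src/env.ts.
interface RedisEnv {
  cacheUrl: string; // backs CacheModule (port 6379 in CI)
  queueUrl: string; // backs BullMQ (port 6380 in CI)
}

function readRedisEnv(env: NodeJS.ProcessEnv): RedisEnv {
  return {
    cacheUrl: env.REDIS_CACHE_URL ?? 'redis://localhost:6379',
    queueUrl: env.REDIS_QUEUE_URL ?? 'redis://localhost:6380',
  };
}

const redisEnv = readRedisEnv(process.env);
console.log(redisEnv.cacheUrl, redisEnv.queueUrl);
```

Splitting cache and queue onto separate Redis instances lets each use a different persistence/eviction profile, which is exactly what `backend/redis-queue.conf` below configures.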
7 changes: 5 additions & 2 deletions backend/package.json
@@ -1,6 +1,6 @@
{
"name": "@utxo-stack-explorer/backend",
"version": "0.1.1",
"version": "0.2.0",
"description": "",
"author": "",
"private": true,
@@ -30,11 +30,13 @@
},
"dependencies": {
"@apollo/server": "^4.11.0",
"@apollo/server-plugin-response-cache": "^4.1.3",
"@as-integrations/fastify": "^2.1.1",
"@cell-studio/mempool.js": "^2.5.3",
"@ckb-lumos/bi": "^0.23.0",
"@ckb-lumos/lumos": "^0.23.0",
"@ckb-lumos/rpc": "^0.23.0",
"@nest-lab/throttler-storage-redis": "^1.0.0",
"@nestjs/apollo": "^12.2.0",
"@nestjs/axios": "^3.0.2",
"@nestjs/bullmq": "^10.2.0",
@@ -64,14 +66,15 @@
"fastify": "^4.28.1",
"graphql": "^16.9.0",
"graphql-query-complexity": "^1.0.0",
"ioredis": "^5.4.1",
"lodash": "^4.17.21",
"nestjs-cacheable": "^1.0.0",
"p-limit": "^3.1.0",
"prisma": "^5.16.2",
"redis": "^4.6.7",
"reflect-metadata": "^0.2.0",
"rpc-websockets": "^7.11.2",
"rxjs": "^7.8.1",
"serialize-javascript": "^6.0.2",
"ws": "^8.18.0",
"zod": "^3.23.8"
},
80 changes: 80 additions & 0 deletions backend/redis-queue.conf
@@ -0,0 +1,80 @@
# Redis configuration
#
# Example: https://raw.githubusercontent.com/redis/redis/7.4/redis.conf

################################## NETWORK #####################################
bind 0.0.0.0

################################ SNAPSHOTTING ################################

# Save the DB to disk.
#
# save <seconds> <changes> [<seconds> <changes> ...]
#
# Redis will save the DB if the given number of seconds elapsed and it
# surpassed the given number of write operations against the DB.
#
# Snapshotting can be completely disabled with a single empty string argument
# as in following example:
#
# save ""
#
# Unless specified otherwise, by default Redis will save the DB:
# * After 3600 seconds (an hour) if at least 1 change was performed
# * After 300 seconds (5 minutes) if at least 100 changes were performed
# * After 60 seconds if at least 10000 changes were performed
#
# You can set these explicitly by uncommenting the following line.
#
save 3600 1 300 100 60 10000

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check https://redis.io/topics/persistence for more information.
appendonly yes

# Redis can create append-only base files in either RDB or AOF formats. Using
# the RDB format is always faster and more efficient, and disabling it is only
# supported for backward compatibility purposes.
aof-use-rdb-preamble yes

# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
maxmemory 2gb
maxmemory-policy noeviction
6 changes: 2 additions & 4 deletions backend/src/app.module.ts
@@ -7,7 +7,6 @@ import { redisStore } from 'cache-manager-redis-yet';
import { Env } from './env';
import { CoreModule } from './core/core.module';
import { ApiModule } from './modules/api.module';
import { CacheableModule } from 'nestjs-cacheable';
import { ScheduleModule } from '@nestjs/schedule';
import { BullModule } from '@nestjs/bullmq';
import configModule from './config';
@@ -18,13 +17,12 @@ import { BootstrapService } from './bootstrap.service';
imports: [
configModule,
SentryModule.forRoot(),
CacheableModule.register(),
CacheModule.registerAsync<RedisClientOptions>({
isGlobal: true,
imports: [ConfigModule],
useFactory: async (configService: ConfigService<Env>) => {
const store = (await redisStore({
url: configService.get('REDIS_URL'),
url: configService.get('REDIS_CACHE_URL'),
isCacheable: (value) => value !== undefined,
})) as unknown as CacheStore;
return {
@@ -36,7 +34,7 @@ import { BootstrapService } from './bootstrap.service';
BullModule.forRootAsync({
imports: [ConfigModule],
useFactory: async (configService: ConfigService<Env>) => {
const url = new URL(configService.get('REDIS_URL')!);
const url = new URL(configService.get('REDIS_QUEUE_URL')!);
return {
connection: {
host: url.hostname,
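The module change above swaps the single `REDIS_URL` for two URLs: the cache store keeps taking a URL directly, while BullMQ needs the URL decomposed into `host`/`port`. A minimal sketch of that mapping, using only the standard `URL` class (the helper name is hypothetical; the diff performs this inline in the factory):

```typescript
// Map a redis:// URL to the { host, port } shape BullMQ's connection expects,
// mirroring the `new URL(configService.get('REDIS_QUEUE_URL'))` logic above.
function toBullConnection(redisUrl: string): { host: string; port: number } {
  const url = new URL(redisUrl);
  // URL.port is '' when the URL omits a port; fall back to Redis's default.
  return { host: url.hostname, port: Number(url.port) || 6379 };
}

console.log(toBullConnection('redis://localhost:6380'));
```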
9 changes: 8 additions & 1 deletion backend/src/bootstrap.service.ts
@@ -1,6 +1,7 @@
import { Injectable, Logger } from '@nestjs/common';
import { PrismaService } from './core/database/prisma/prisma.service';
import { IndexerServiceFactory } from './core/indexer/indexer.factory';
import cluster from 'node:cluster';

@Injectable()
export class BootstrapService {
@@ -9,7 +10,13 @@ export class BootstrapService {
constructor(
private prismaService: PrismaService,
private IndexerServiceFactory: IndexerServiceFactory,
) {}
) { }

public async bootstrap() {
if (cluster.isPrimary) {
await this.bootstrapAssetsIndex();
}
}

public async bootstrapAssetsIndex() {
const chains = await this.prismaService.chain.findMany();
30 changes: 30 additions & 0 deletions backend/src/cluster.service.ts
@@ -0,0 +1,30 @@
import { Injectable, Logger } from '@nestjs/common';
import cluster from 'node:cluster';
import * as process from 'node:process';
import * as os from 'node:os';
import { envSchema } from './env';

const numCPUs = os.cpus().length;
const env = envSchema.parse(process.env);

@Injectable()
export class ClusterService {
private static logger = new Logger(ClusterService.name);

public static clusterize(callback: Function): void {
if (cluster.isPrimary) {
this.logger.log(`PRIMARY PROCESS (${process.pid}) IS RUNNING`);
const workersNum = Math.min(env.CLUSTER_WORKERS_NUM, numCPUs);
for (let i = 0; i < workersNum; i++) {
cluster.fork();
}
cluster.on('exit', (worker) => {
this.logger.log(`WORKER ${worker.process.pid} DIED, FORKING NEW ONE`);
cluster.fork();
});
} else {
this.logger.log(`WORKER PROCESS (${process.pid}) IS RUNNING`);
callback();
}
}
}
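`ClusterService.clusterize` forks `Math.min(CLUSTER_WORKERS_NUM, numCPUs)` workers and re-forks on exit. As a hedged sketch of the sizing rule in isolation (the function name is hypothetical; the entry-point wiring in `main.ts` is not shown in this diff):

```typescript
import os from 'node:os';

// Sizing rule from ClusterService: honor the configured worker count, but
// never fork more workers than there are CPU cores.
function resolveWorkerCount(requested: number, cpuCount: number = os.cpus().length): number {
  return Math.min(requested, cpuCount);
}

// Presumably used from main.ts roughly as:
//   ClusterService.clusterize(bootstrap);
// where `bootstrap` creates and listens with the Nest application, so the
// callback runs once per worker while the primary only supervises.
console.log(`would fork ${resolveWorkerCount(4)} workers`);
```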
1 change: 1 addition & 0 deletions backend/src/constants.ts
@@ -19,6 +19,7 @@ export const BtcTestnetTypeMap: Record<NetworkType.testnet | NetworkType.signet,
};

export const CKB_MIN_SAFE_CONFIRMATIONS = 24;
export const CKB_ONE_DAY_BLOCKS = 8640;
export const CKB_CHAIN_ID = 1;

export const XUDT_TYPESCRIPTS = {
3 changes: 2 additions & 1 deletion backend/src/core/bitcoin-api/bitcoin-api.service.ts
@@ -9,8 +9,8 @@ import { MempoolService } from './provider/mempool.service';
import { ChainInfo, Transaction } from './bitcoin-api.schema';
import { ONE_HOUR_MS, ONE_MONTH_MS, TEN_MINUTES_MS } from 'src/common/date';
import { Cacheable } from 'src/decorators/cacheable.decorator';
import { PLimit } from 'src/decorators/plimit.decorator';
import * as Sentry from '@sentry/nestjs';
import { PLimit } from 'src/decorators/plimit.decorator';

type MethodParameters<T, K extends keyof T> = T[K] extends (...args: infer P) => any ? P : never;
type MethodReturnType<T, K extends keyof T> = T[K] extends (...args: any[]) => infer R ? R : never;
@@ -88,6 +88,7 @@ export class BitcoinApiService {
}
}

@PLimit({ concurrency: 200 })
private async call<K extends keyof IBitcoinDataProvider>(
method: K,
...args: MethodParameters<IBitcoinDataProvider, K>
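`@PLimit({ concurrency: 200 })` caps how many upstream Bitcoin API calls are in flight at once; the backend depends on the `p-limit` package for this. The decorator's implementation is not part of this diff, so the following is a hypothetical sketch of the underlying idea with a hand-rolled limiter (no `p-limit`, no decorator syntax):

```typescript
// Minimal concurrency limiter: at most `concurrency` tasks run at once;
// excess callers wait in a FIFO queue until a slot frees up.
function createLimiter(concurrency: number) {
  let active = 0;
  const queue: Array<() => void> = [];
  return async function limit<T>(task: () => Promise<T>): Promise<T> {
    if (active >= concurrency) {
      // Park the caller until a running task finishes and releases a slot.
      await new Promise<void>((resolve) => queue.push(resolve));
    }
    active++;
    try {
      return await task();
    } finally {
      active--;
      queue.shift()?.(); // wake the next waiter, if any
    }
  };
}

// Demo: five tasks through a limit of 2; `peak` records observed concurrency.
const limit = createLimiter(2);
let running = 0;
let peak = 0;
const results = await Promise.all(
  [1, 2, 3, 4, 5].map((n) =>
    limit(async () => {
      running++;
      peak = Math.max(peak, running);
      await new Promise((r) => setTimeout(r, 10));
      running--;
      return n;
    }),
  ),
);
console.log(results, peak);
```

In the service, `call` is the single funnel for every provider method, so decorating it alone is enough to throttle the whole Bitcoin API surface.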
33 changes: 26 additions & 7 deletions backend/src/core/blockchain/blockchain.service.ts
@@ -14,6 +14,7 @@ import { ONE_MONTH_MS } from 'src/common/date';
import { CKB_MIN_SAFE_CONFIRMATIONS } from 'src/constants';
import * as Sentry from '@sentry/nestjs';
import { Chain } from '@prisma/client';
import { PLimit } from 'src/decorators/plimit.decorator';

class WebsocketError extends Error {
constructor(message: string) {
@@ -70,6 +71,12 @@ export class BlockchainService {
});
}

@PLimit({ concurrency: 200 })
private async call(method: string, params: any[]): Promise<any> {
await this.websocketReady;
return this.websocket.call(method, params);
}

private handleReconnection() {
if (this.reconnectAttempts < this.maxReconnectAttempts) {
this.reconnectAttempts++;
@@ -122,7 +129,7 @@
): Promise<TransactionWithStatusResponse> {
await this.websocketReady;
this.logger.debug(`get_transaction - txHash: ${txHash}`);
const response = await this.websocket.call('get_transaction', [txHash]);
const response = await this.call('get_transaction', [txHash]);
const tx = response as TransactionWithStatusResponse;
// XXX: we don't need these fields by default, remove them to save cache/memory space
if (!withData) {
@@ -162,7 +169,7 @@
): Promise<Block> {
await this.websocketReady;
this.logger.debug(`get_block - blockHash: ${blockHash}`);
const response = await this.websocket.call('get_block', [blockHash]);
const response = await this.call('get_block', [blockHash]);
const block = response as Block;
if (!withTxData) {
block.transactions = block.transactions.map((tx) => {
@@ -204,7 +211,7 @@
): Promise<Block> {
await this.websocketReady;
this.logger.debug(`get_block_by_number - blockNumber: ${blockNumber}`);
const response = await this.websocket.call('get_block_by_number', [
const response = await this.call('get_block_by_number', [
BI.from(blockNumber).toHexString(),
]);
const block = response as Block;
@@ -231,7 +238,7 @@
public async getBlockEconomicState(blockHash: string): Promise<BlockEconomicState> {
await this.websocketReady;
this.logger.debug(`get_block_economic_state - blockHash: ${blockHash}`);
const blockEconomicState = await this.websocket.call('get_block_economic_state', [blockHash]);
const blockEconomicState = await this.call('get_block_economic_state', [blockHash]);
return blockEconomicState as BlockEconomicState;
}

@@ -244,10 +251,16 @@
public async getTipBlockNumber(): Promise<number> {
await this.websocketReady;
this.logger.debug('get_tip_block_number');
const tipBlockNumber = await this.websocket.call('get_tip_block_number', []);
const tipBlockNumber = await this.call('get_tip_block_number', []);
return BI.from(tipBlockNumber).toNumber();
}

@Cacheable({
namespace: 'BlockchainService',
key: (searchKey: SearchKey, order: 'asc' | 'desc', limit: string, after?: string) =>
`getTransactions:${JSON.stringify(searchKey)}:${order}:${limit}:${after}`,
ttl: 10_000,
})
public async getTransactions(
searchKey: SearchKey,
order: 'asc' | 'desc',
@@ -258,11 +271,17 @@
this.logger.debug(
`get_transactions - searchKey: ${JSON.stringify(searchKey)}, order: ${order}, limit: ${limit}, after: ${after}`,
);
const result = await this.websocket.call('get_transactions', [searchKey, order, limit, after]);
const result = await this.call('get_transactions', [searchKey, order, limit, after]);
const transactions = result as GetTransactionsResult;
return transactions;
}

@Cacheable({
namespace: 'BlockchainService',
key: (searchKey: SearchKey, order: 'asc' | 'desc', limit: string, after?: string) =>
`getCells:${JSON.stringify(searchKey)}:${order}:${limit}:${after}`,
ttl: 10_000,
})
public async getCells(
searchKey: SearchKey,
order: 'asc' | 'desc',
@@ -274,7 +293,7 @@
this.logger.debug(
`get_cells - searchKey: ${JSON.stringify(searchKey)}, order: ${order}, limit: ${limit}, after: ${after}`,
);
const result = await this.websocket.call('get_cells', [searchKey, order, limit, after]);
const result = await this.call('get_cells', [searchKey, order, limit, after]);
const cells = result as GetCellsResult;
cells.objects = cells.objects.map((cell) => {
if (!withData) {
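The `@Cacheable` annotations above key each result on the serialized arguments and keep it for a short TTL (10 seconds here). The real decorator lives in `src/decorators/cacheable.decorator.ts` and is backed by Redis, neither of which is shown in this diff; this is a simplified in-memory sketch of the key shape and expiry behavior:

```typescript
// In-memory stand-in for the Redis-backed cache behind @Cacheable.
type Entry<V> = { value: V; expiresAt: number };

class TtlCache<V> {
  private store = new Map<string, Entry<V>>();
  constructor(private ttlMs: number) {}

  get(key: string, now: number = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry || entry.expiresAt <= now) return undefined; // miss or expired
    return entry.value;
  }

  set(key: string, value: V, now: number = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }
}

// Key shape mirroring the diff: getCells:<searchKey JSON>:<order>:<limit>:<after>
function cellsCacheKey(searchKey: object, order: 'asc' | 'desc', limit: string, after?: string) {
  return `getCells:${JSON.stringify(searchKey)}:${order}:${limit}:${after}`;
}

const cache = new TtlCache<string>(10_000);
const key = cellsCacheKey({ scriptType: 'lock' }, 'asc', '0x64');
cache.set(key, 'cells-page-1', 0);
console.log(cache.get(key, 5_000)); // within TTL: hit
console.log(cache.get(key, 20_000)); // past TTL: miss
```

A 10-second TTL is short enough that indexer-facing queries stay near-fresh while repeated identical `get_cells`/`get_transactions` calls within a burst hit the cache instead of the node.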
9 changes: 9 additions & 0 deletions backend/src/core/core.service.ts
@@ -14,6 +14,8 @@ import { Env } from 'src/env';
import { Transaction } from './blockchain/blockchain.interface';
import { BlockchainServiceFactory } from './blockchain/blockchain.factory';
import { LeapDirection } from '@prisma/client';
import { ONE_MONTH_MS } from 'src/common/date';
import { Cacheable } from 'src/decorators/cacheable.decorator';

export const CELLBASE_TX_HASH =
'0x0000000000000000000000000000000000000000000000000000000000000000';
@@ -72,6 +74,13 @@ export class CoreService {
);
}

@Cacheable({
namespace: 'CoreService',
key: (chainId: number, ckbTx: Transaction) => {
return `getLeapDirectionByCkbTx:${chainId}:${ckbTx.hash}`;
},
ttl: ONE_MONTH_MS,
})
public async getLeapDirectionByCkbTx(chainId: number, ckbTx: Transaction) {
const blockchainService = this.blockchainServiceFactory.getService(chainId);
const inputCells = await Promise.all(