No flush for lower ranges. #39

Open

gusinacio opened this issue Sep 13, 2023 · 0 comments

gusinacio (Contributor):
I'm trying to execute the app with the following block range: 17893158:17893162. The substreams run finishes with the following logs and no flush:

2023-09-13T12:13:46.122-0300 INFO (sink-postgres) sinker from CLI {"endpoint": "mainnet.eth.streamingfast.io:443", "manifest_path": "substreams.spkg", "params": [], "output_module_name": "db_out", "expected_module_type": "sf.substreams.sink.database.v1.DatabaseChanges,sf.substreams.database.v1.DatabaseChanges", "block_range": "17893158:17893162", "development_mode": false, "infinite_retry": false, "final_blocks_only": false, "skip_package_validation": false, "live_block_time_delta": "5m0s", "undo_buffer_size": 12, "extra_headers": []}
2023-09-13T12:13:46.122-0300 INFO (sink-postgres) reading substreams manifest {"manifest_path": "substreams.spkg"}
2023-09-13T12:13:46.124-0300 INFO (sink-postgres) finding output module {"module_name": "db_out"}
2023-09-13T12:13:46.124-0300 INFO (sink-postgres) validating output module type {"module_name": "db_out", "module_type": "proto:sf.substreams.sink.database.v1.DatabaseChanges"}
2023-09-13T12:13:46.124-0300 INFO (sink-postgres) sinker configured {"mode": "Production", "module_count": 1, "output_module_name": "db_out", "output_module_type": "proto:sf.substreams.sink.database.v1.DatabaseChanges", "output_module_hash": "cb430a7f31a05e6057d087f6e62410865cd0a1b9", "client_config": "mainnet.eth.streamingfast.io:443 (insecure: false, plaintext: false, JWT present: true)", "buffer": "Buffering (12 blocks)", "block_range": "[17893158, 17893162)", "infinite_retry": false, "final_blocks_only": false, "liveness_checker": true}
2023-09-13T12:13:46.126-0300 INFO (sink-postgres) starting postgres sink {"stats_refresh_each": "15s", "restarting_at": "None", "database": "uniswap_v2_ethereum", "schema": "uniswap_v2_ethereum"}
2023-09-13T12:13:46.126-0300 INFO (sink-postgres) starting sinker {"stats_refresh_each": "15s", "end_at": "#17893173"}
2023-09-13T12:13:46.530-0300 INFO (sink-postgres) session initialized with remote endpoint {"max_parallel_workers": 10, "linear_handoff_block": 17893174, "resolved_start_block": 17893158, "trace_id": "0a483664a47a455929a13b21420c891f"}
2023-09-13T12:13:47.097-0300 INFO (sink-postgres) substreams ended correctly, reached your stop block {"last_block_seen": "#17893173 (8d6a8f90dce5f10e27b6763fc2c3bf267a0f5549e09dee0f9024b0833bdbf12a)"}
2023-09-13T12:13:47.098-0300 INFO (sink-postgres) postgres sink stats {"db_flush_rate": "NaN flush/s (0 total)", "flushed_entries": 0, "last_block": "<Unset>"}
2023-09-13T12:13:47.098-0300 INFO (sink-postgres) postgres sinker terminating {"last_block_written": "<Unset>"}
2023-09-13T12:13:47.098-0300 INFO (sink-postgres) sinker terminating
2023-09-13T12:13:47.098-0300 INFO (sink-postgres) run terminating {"from_signal": false, "with_error": false}
2023-09-13T12:13:47.098-0300 INFO (sink-postgres) waiting for run termination
2023-09-13T12:13:47.098-0300 INFO (sink-postgres) run terminated gracefully
2023-09-13T12:13:47.098-0300 INFO (sink-postgres) substreams stream stats {"data_msg_rate": "16.000 msg/s (16 total)", "progress_last_block": {}, "progress_running_jobs": {}, "progress_total_processed_blocks": 0, "progress_last_contiguous_block": {}, "undo_msg_rate": "0.000 msg/s (0 total)", "last_block": "#17893173 (8d6a8f90dce5f10e27b6763fc2c3bf267a0f5549e09dee0f9024b0833bdbf12a)"}

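For reference, the run was started with a command roughly equivalent to the following. The DSN is a placeholder and the exact positional arguments and flags may differ between sink versions; the other values mirror the settings echoed in the first log line above:

```bash
substreams-sink-postgres run \
  "psql://user:password@localhost:5432/uniswap_v2_ethereum?sslmode=disable" \
  "mainnet.eth.streamingfast.io:443" \
  "substreams.spkg" \
  "db_out" \
  "17893158:17893162"
```
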
The same happens for both ClickHouse and Postgres.

I thought it could be related to the undo signal, but trying ranges larger than the 12-block undo buffer, while still staying within 1000 blocks, gave the same result.
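To make the suspicion concrete, here is a minimal Go sketch (not the actual sink code; names and numbers are made up for illustration) of the batching pattern that would explain this: rows are only flushed on a batch boundary or at the stop block, so if the effective stop block is moved past the requested range (for example by the undo buffer or linear handoff), a short range never hits a flush point and nothing is written.

```go
package main

import "fmt"

// sinker is a toy model of a batch-flushing sink: it buffers rows and only
// flushes every batchSize blocks or when it sees the stop block.
type sinker struct {
	batchSize uint64
	pending   int
	flushed   int
}

func (s *sinker) handleBlock(blockNum, stopBlock uint64, rows int) {
	s.pending += rows

	boundary := blockNum%s.batchSize == 0
	atStop := blockNum == stopBlock // the flush that seems to be missing for short ranges

	if boundary || atStop {
		s.flushed += s.pending
		fmt.Printf("flush at block %d (%d rows)\n", blockNum, s.pending)
		s.pending = 0
	}
}

func main() {
	s := &sinker{batchSize: 1000}

	// Requested range 17893158:17893162 covers only 4 blocks, none of which
	// fall on a 1000-block boundary.
	start, stop := uint64(17893158), uint64(17893162)
	for b := start; b < stop; b++ {
		// Pretend the effective stop block was moved past the requested range,
		// so the atStop flush never fires for these blocks.
		s.handleBlock(b, stop+12, 3)
	}

	fmt.Printf("flushed entries: %d, still pending: %d\n", s.flushed, s.pending)
}
```

With the 4-block range above, this sketch ends with flushed entries: 0, which matches the flushed_entries: 0 and last_block: <Unset> in the sink stats.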
