Fixed changelog link and removed some leftover println
maoueh committed Mar 5, 2024
1 parent f68bd03 commit 685f663
Showing 2 changed files with 2 additions and 7 deletions.
4 changes: 2 additions & 2 deletions CHANGELOG.md
@@ -109,7 +109,7 @@ tools fix-bloated-merged-blocks <merged-blocks-store> <output-store> <start>:<st

* More tolerant retry/timeouts on filesource (prevent "Context Deadline Exceeded")

-## [1.1.5-rc1](https://github.com/streamingfast/firehose-near/releases/tag/v1.1.5)
+## [1.1.5-rc1](https://github.com/streamingfast/firehose-near/releases/tag/v1.1.5-rc1)

This release candidate is a hotfix for an issue introduced in v1.1.3 and affecting `production-mode`, where the stream hangs and some `map_outputs` are not produced over some specific ranges of the chain.

@@ -134,7 +134,7 @@ This release bumps substreams to v1.1.9 and firehose-core to v0.1.3

#### Substreams Scheduler Improvements for Parallel Processing

-The `substreams` scheduler has been improved to reduce the number of required jobs for parallel processing. This affects `backprocessing` (preparing the states of modules up to a "start-block") and `forward processing` (preparing the states and the outputs to speed up streaming in production-mode).
+The `substreams` scheduler has been improved to reduce the number of required jobs for parallel processing. This affects `backprocessing` (preparing the states of modules up to a "start-block") and `forward processing` (preparing the states and the outputs to speed up streaming in production-mode).

Jobs on `tier2` workers are now divided into "stages", each stage generating the partial states for all the modules that have the same dependencies. A `substreams` that has a single store won't be affected, but one that has 3 top-level stores, which used to run 3 jobs for every segment, now runs only a single job per segment to get all the states ready.
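
To make the reduction concrete: the old scheduler ran one job per store per segment, while the staged scheduler groups stores that share dependencies into a single stage and runs one job per segment. A minimal arithmetic sketch (the store and segment counts below are hypothetical, not taken from this commit):

```go
package main

import "fmt"

func main() {
	const (
		topLevelStores = 3   // stores sharing the same dependencies
		segments       = 100 // block segments to process in parallel
	)

	// Old scheduler: one tier2 job per store per segment.
	oldJobs := topLevelStores * segments // 300 jobs

	// Staged scheduler: stores with identical dependencies collapse
	// into one stage, so a single job per segment yields all states.
	const stages = 1
	newJobs := stages * segments // 100 jobs

	fmt.Printf("old: %d jobs, staged: %d jobs\n", oldJobs, newJobs)
}
```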

5 changes: 0 additions & 5 deletions cmd/firenear/reader_node_bootstraper.go
@@ -22,16 +22,11 @@ func newReaderNodeBootstrapper(ctx context.Context, logger *zap.Logger, cmd *cob

hostname, _ := os.Hostname()

fmt.Println("Hostname", hostname)
fmt.Println("Config file", viper.GetString("reader-node-config-file"))

configFile := replaceNodeRole(viper.GetString("reader-node-config-file"), hostname)
genesisFile := replaceNodeRole(viper.GetString("reader-node-genesis-file"), hostname)
nodeKeyFile := replaceHostname(viper.GetString("reader-node-key-file"), hostname)
overwriteNodeFiles := viper.GetBool("reader-node-overwrite-node-files")

fmt.Println("Config final", configFile)

return &bootstrapper{
configFile: configFile,
genesisFile: genesisFile,
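
For context on the code above: the bootstrapper resolves per-node file paths by substituting hostname-derived tokens into the configured paths. The helpers `replaceNodeRole` and `replaceHostname` are defined elsewhere in this repository; the sketch below is a hypothetical stand-in that assumes a simple `{hostname}` placeholder convention, not the repository's actual implementation:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// replaceHostname is a hypothetical stand-in for the repository's helper:
// it swaps a "{hostname}" placeholder in a configured path for the
// machine's actual hostname, so each node resolves files of its own.
func replaceHostname(path, hostname string) string {
	return strings.ReplaceAll(path, "{hostname}", hostname)
}

func main() {
	hostname, _ := os.Hostname()

	// Hypothetical configured path; in the real code the value comes
	// from viper's "reader-node-key-file" setting.
	nodeKeyFile := replaceHostname("/data/near/{hostname}/node_key.json", hostname)
	fmt.Println("resolved node key file:", nodeKeyFile)
}
```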
