
feat: display throughput for read session #25

Merged: 3 commits merged into main from the throughput branch on Nov 16, 2024

Conversation

@infiniteregrets (Member) commented Nov 15, 2024:

  • also bumps the rev and fixes breaking changes from the sdk

@infiniteregrets requested a review from a team as a code owner on November 15, 2024, 19:47
src/main.rs (outdated):

        / start.unwrap().elapsed().as_secs_f64())
        / 1024.0
        / 1024.0)
        as u64;

Member: maybe worth reporting as a float (with a couple of digits, {:.2} or similar) to avoid the more significant effect of taking a floor at lower rates (e.g. 0.9 MiB/s would appear as 0)
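
A minimal sketch of that suggestion (the helper and its signature are illustrative, not from the PR): keep the rate as an f64 and format it with two decimals instead of casting to u64.

    use std::time::Instant;

    // Illustrative helper: keeping the rate as an f64 and printing with {:.2}
    // means e.g. 0.9 MiB/s shows as "0.90" rather than being floored to 0.
    fn report_batch_throughput(start: Instant, batch_len: usize) {
        let throughput_mibps = (batch_len as f64 / start.elapsed().as_secs_f64()) / 1024.0 / 1024.0;
        eprintln!("{throughput_mibps:.2} MiB/s");
    }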

src/main.rs (outdated):

        / 1024.0)
        as u64;

    eprintln!("{seq_range:?} // {batch_len} bytes in {num_records} records // {throughput_mibps:?}");

Member: probably useful to also report a cumulative MiB/s and total bytes transferred at the end?
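
A sketch of what that could look like (names are illustrative, not from the PR): accumulate the per-batch byte count and derive the cumulative rate from the overall elapsed time once the session ends.

    // Illustrative helper: given the running byte total and the overall elapsed
    // time, print a cumulative summary at the end of the read session.
    fn print_read_summary(total_bytes: u64, elapsed_secs: f64) {
        let total_throughput_mibps = (total_bytes as f64 / elapsed_secs) / 1024.0 / 1024.0;
        eprintln!("✓ [READ] {total_bytes} bytes in {elapsed_secs:.1}s at {total_throughput_mibps:.2} MiB/s");
    }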

@infiniteregrets (Member, Author) left a comment:


there's probably a better way to handle the errors; resorting to just the String repr of tonic codes for now
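
A small sketch of that stopgap (the helper name is made up for illustration): reduce a tonic error to the string form of its code and message.

    // Illustrative helper: fall back to the String representation of the tonic
    // status code and message when surfacing stream errors.
    fn describe_status(status: &tonic::Status) -> String {
        format!("{:?}: {}", status.code(), status.message())
    }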

src/main.rs (outdated), comment on lines 599 to 602:

    format!(
        "✓ [READ] Total throughput: {:.2} MiB/s",
        total_throughput_mibps
    )

Member: maybe also with total bytes and duration?

    format!("✓ [READ] {total_data_len} metered bytes in {seconds} seconds at {total_throughput_mibps:.2} MiB/s")

@sgbalogh (Member) left a comment:

lg!

src/main.rs (outdated):

    if let Some(ref mut writer) = writer {
        writer.write_all(&sequenced_record.body).await.unwrap();
        writer.write_all(&data).await.unwrap();

Member: maybe map to an S2CliError instead of unwrap?
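
A sketch of that suggestion against the quoted lines; the S2CliError variant used here is assumed for illustration and may not match the project's actual error type.

    // Sketch: propagate write failures instead of unwrapping. The RecordWrite
    // variant is assumed; the real S2CliError may shape this differently.
    if let Some(ref mut writer) = writer {
        writer
            .write_all(&sequenced_record.body)
            .await
            .map_err(|e| S2CliError::RecordWrite(e.to_string()))?;
        writer
            .write_all(&data)
            .await
            .map_err(|e| S2CliError::RecordWrite(e.to_string()))?;
    }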

    let mut read_output_stream = StreamService::new(stream_client)
        .read_session(start_seq_num)
        .await?;
    let mut writer = match output {
        Some(output) => Some(output.into_writer().await.unwrap()),
        None => None,
    };

    let mut start = None;
@sgbalogh (Member) commented Nov 15, 2024:


maybe better to just set the time here instead of waiting for the first batch to arrive (and it could be simplified by removing the start.is_none() handling in the loop) -- feels somehow more correct to also include this period in the throughput calc, wdyt?
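
A sketch of the suggested simplification against the quoted snippet (it reuses the project-specific types from the diff, so it is not standalone): start the clock as soon as the session is opened and drop the Option<Instant> bookkeeping.

    use std::time::Instant;

    // Sketch: time from session open, so the wait for the first batch is
    // included and the start.is_none() checks in the loop go away.
    let mut read_output_stream = StreamService::new(stream_client)
        .read_session(start_seq_num)
        .await?;
    let mut writer = match output {
        Some(output) => Some(output.into_writer().await.unwrap()),
        None => None,
    };

    let start = Instant::now();
    // per batch: start.elapsed().as_secs_f64(), with no Option handling needed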

Member: ah, I can speak to why we do that in the consumer bin: it's for when we are tailing from the end of the stream while concurrently producing, and we don't want to measure the time spent waiting for the first record

@infiniteregrets merged commit 794b6ad into main on Nov 16, 2024
2 checks passed
@infiniteregrets deleted the throughput branch on November 16, 2024, 00:36