feat: display throughput for read session #25
Conversation
infiniteregrets commented Nov 15, 2024 (edited)
- also bumps the rev and fixes breaking changes from the sdk
src/main.rs (Outdated)

    / start.unwrap().elapsed().as_secs_f64())
    / 1024.0
    / 1024.0)
    as u64;
maybe worth reporting as a float (with a couple digits, `{:.2}` or so), to avoid the more significant effects of taking a floor at lower rates (e.g. 0.9 MiB/s would appear as 0)
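A minimal sketch of that suggestion; the helper and names below are illustrative, not the actual patch:

```rust
use std::time::Instant;

// Keep the rate as an f64 so sub-1 MiB/s values are not floored to 0.
fn throughput_mibps(bytes: u64, start: Instant) -> f64 {
    bytes as f64 / start.elapsed().as_secs_f64() / 1024.0 / 1024.0
}

// Reporting with two decimal places means a 0.9 MiB/s rate shows as "0.90"
// rather than the "0" an `as u64` cast would produce.
fn report(batch_len: u64, start: Instant) {
    eprintln!("{:.2} MiB/s", throughput_mibps(batch_len, start));
}
```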
src/main.rs (Outdated)

    / 1024.0)
    as u64;

    eprintln!("{seq_range:?} // {batch_len} bytes in {num_records} records // {throughput_mibps:?}");
probably useful to also report a cumulative mibps and total bytes transferred at the end?
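A rough sketch of what that could look like; the `ReadStats` type and its field and method names are assumptions for illustration, not code from this PR:

```rust
use std::time::Instant;

// Accumulates per-batch sizes so a summary can be printed once at the end.
struct ReadStats {
    started: Instant,
    total_bytes: u64,
}

impl ReadStats {
    fn new() -> Self {
        Self { started: Instant::now(), total_bytes: 0 }
    }

    // Call once per received batch.
    fn record_batch(&mut self, batch_len: usize) {
        self.total_bytes += batch_len as u64;
    }

    // Call after the read session ends.
    fn report(&self) {
        let seconds = self.started.elapsed().as_secs_f64();
        let mibps = self.total_bytes as f64 / seconds / 1024.0 / 1024.0;
        eprintln!(
            "✓ [READ] {} total bytes in {seconds:.1}s at {mibps:.2} MiB/s",
            self.total_bytes
        );
    }
}
```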
there's probably a better way to handle the errors; resorting to just a `String` repr of the tonic codes for now
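The interim approach presumably boils down to something like the following (exact call sites are not shown in the diff, so this is only a sketch):

```rust
use tonic::Status;

// Interim error handling: flatten the tonic status into a plain String
// instead of modelling each code as a structured error variant.
fn status_to_string(status: &Status) -> String {
    format!("{:?}: {}", status.code(), status.message())
}
```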
src/main.rs
Outdated
format!( | ||
"✓ [READ] Total throughput: {:.2} MiB/s", | ||
total_throughput_mibps | ||
) |
maybe also with total bytes and duration?

    format!("✓ [READ] {total_data_len} metered bytes in {seconds} seconds at {total_throughput_mibps:.2} MiB/s")
lg!
src/main.rs (Outdated)

    if let Some(ref mut writer) = writer {
        writer.write_all(&sequenced_record.body).await.unwrap();
        writer.write_all(&data).await.unwrap();
maybe map to a `S2CliError` instead of unwrap?
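A hedged sketch of that suggestion; the real `S2CliError` isn't shown in this diff, so the `RecordWrite` variant below is a hypothetical stand-in (using `thiserror` purely for brevity):

```rust
use tokio::io::{AsyncWrite, AsyncWriteExt};

// Hypothetical error variant standing in for whatever S2CliError provides.
#[derive(Debug, thiserror::Error)]
enum S2CliError {
    #[error("failed to write record body: {0}")]
    RecordWrite(#[from] std::io::Error),
}

// Propagate the IO error instead of panicking with unwrap().
async fn write_record<W: AsyncWrite + Unpin>(
    writer: &mut W,
    data: &[u8],
) -> Result<(), S2CliError> {
    writer.write_all(data).await?;
    Ok(())
}
```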
    let mut read_output_stream = StreamService::new(stream_client)
        .read_session(start_seq_num)
        .await?;
    let mut writer = match output {
        Some(output) => Some(output.into_writer().await.unwrap()),
        None => None,
    };

    let mut start = None;
maybe better to just set the time here instead of waiting for the first batch to arrive (and could simplify by rm'ing the `start.is_none()` stuff in the loop) -- feels somehow more correct to also include this period in the throughput calc, wdyt?
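A sketch of the suggested eager variant; the loop body is only a stand-in for the real read session:

```rust
use std::time::{Duration, Instant};

#[tokio::main]
async fn main() {
    // Start the clock before awaiting the first batch, so the wait for that
    // batch is counted and the loop needs no `start.is_none()` branch.
    let start = Instant::now();
    let mut total_bytes: u64 = 0;

    // Stand-in for awaiting batches from the read session stream.
    for batch_len in [4096u64, 8192, 2048] {
        tokio::time::sleep(Duration::from_millis(10)).await;
        total_bytes += batch_len;
    }

    let mibps = total_bytes as f64 / start.elapsed().as_secs_f64() / 1024.0 / 1024.0;
    eprintln!("{total_bytes} bytes at {mibps:.2} MiB/s");
}
```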
ah, i can speak to why we do that in the consumer bin: it's when we are tailing from the end of the stream and concurrently producing, and we don't wanna measure the time spent waiting for the first record
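For contrast with the eager variant above, the lazy-start pattern being described (clock only begins at the first record) is roughly the following; names are illustrative:

```rust
use std::time::Instant;

// Start timing only when the first batch arrives, so time spent tailing the
// stream while waiting on a concurrent producer is not metered.
fn on_batch(start: &mut Option<Instant>, batch_len: u64, total_bytes: &mut u64) {
    if start.is_none() {
        *start = Some(Instant::now());
    }
    *total_bytes += batch_len;
    if let Some(started) = start {
        let elapsed = started.elapsed().as_secs_f64().max(f64::EPSILON);
        let mibps = *total_bytes as f64 / elapsed / 1024.0 / 1024.0;
        eprintln!("{batch_len} bytes, cumulative {mibps:.2} MiB/s");
    }
}
```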