
Extra benches #1651

Merged: 5 commits merged into master on Jun 11, 2024

Conversation

fabianmurariu (Collaborator)

What changes were proposed in this pull request?

To catch potential slowdowns sooner

Why are the changes needed?

We previously failed to catch slowdowns, and they made it into master.

Does this PR introduce any user-facing change? If yes is this documented?

No

How was this patch tested?

Issues

Are there any further changes required?

@ljeub-pometry (Collaborator) left a comment


  • Construct the layered SuperUser graph in the benchmark instead of adding strange arguments to the public function
  • Minor suggestions for the benchmark to reduce variance and test higher-level APIs

Comment on lines 330 to 334
graph.edge_window_layers(
    edge.edge,
    active_t.saturating_sub(5)..active_t + 5,
    &LayerIds::All,
)
Collaborator

Can we test the high-level API instead? i.e. graph.window(active_t.saturating_sub(5)..active_t + 5).explode_layers().iter().for_each(|e| black_box(e))

Also, you are not actually consuming the iterator?
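For context on the reviewer's second point, a minimal std-only sketch (not raphtory code; names here are illustrative) of why an unconsumed iterator benchmarks nothing: Rust adaptors like map are lazy, so the closure only runs once a consuming call such as for_each drives the chain.

```rust
use std::cell::Cell;
use std::hint::black_box;

/// Returns the closure-call count (before, after) consuming a lazy iterator.
fn lazy_then_consumed() -> (u32, u32) {
    let calls = Cell::new(0u32);
    // Building the chain performs no work: `map` is lazy.
    let iter = (0..5).map(|x| {
        calls.set(calls.get() + 1);
        x * 2
    });
    let before = calls.get(); // still 0: nothing has been consumed yet
    // A consuming adaptor such as `for_each` finally runs the closure;
    // `black_box` keeps the optimizer from discarding the result.
    iter.for_each(|v| {
        black_box(v);
    });
    (before, calls.get())
}

fn main() {
    assert_eq!(lazy_then_consumed(), (0, 5));
    println!("before consume: 0 calls; after for_each: 5 calls");
}
```

This is why the suggested benchmark body ends in for_each with black_box: without the consuming call, the timed code would measure only iterator construction.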

Collaborator Author

completely missed that being an iterator 😄

@@ -28,6 +31,72 @@ pub fn graph(c: &mut Criterion) {
        None,
    );
    graph_window_group_10.finish();

    // subgraph
    let mut rng = rand::thread_rng();
Collaborator

Should give this a seed so we don't get variability due to choosing different nodes
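In the benchmark itself the likely fix is something like the rand crate's StdRng::seed_from_u64 (an assumption, not shown in this PR). As a self-contained, std-only sketch of the same idea, a small SplitMix64 generator with a fixed seed yields the same node sample on every run, so benchmark results stay comparable:

```rust
/// Minimal SplitMix64 PRNG (standard constants); stands in here for a
/// seeded generator such as `rand::rngs::StdRng::seed_from_u64`.
struct SplitMix64 {
    state: u64,
}

impl SplitMix64 {
    fn new(seed: u64) -> Self {
        Self { state: seed }
    }
    fn next_u64(&mut self) -> u64 {
        self.state = self.state.wrapping_add(0x9E37_79B9_7F4A_7C15);
        let mut z = self.state;
        z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
        z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
        z ^ (z >> 31)
    }
}

/// Pick `k` node indices out of `n` deterministically from `seed`.
fn sample_nodes(seed: u64, n: u64, k: usize) -> Vec<u64> {
    let mut rng = SplitMix64::new(seed);
    (0..k).map(|_| rng.next_u64() % n).collect()
}

fn main() {
    // Same seed, same sample: no run-to-run variance from node choice.
    assert_eq!(sample_nodes(42, 1_000, 5), sample_nodes(42, 1_000, 5));
}
```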

Collaborator Author

fair

Comment on lines 78 to 102
/// # Arguments
///
/// * `shards` - The number of shards to use for the graph
///
/// Returns:
///
/// - A Result containing the graph or an error
-pub fn sx_superuser_graph() -> Result<Graph, Box<dyn std::error::Error>> {
+pub fn sx_superuser_graph(num_layers: Option<usize>) -> Result<Graph, Box<dyn std::error::Error>> {
     let graph = Graph::new();
     CsvLoader::new(sx_superuser_file()?)
         .set_delimiter(" ")
         .load_into_graph(&graph, |edge: TEdge, g: &Graph| {
-            g.add_edge(edge.time, edge.src_id, edge.dst_id, NO_PROPS, None)
+            if let Some(layer) = num_layers
+                .map(|num_layers| calculate_hash(&(edge.src_id, edge.dst_id)) % num_layers as u64)
+                .map(|id| id.to_string())
+            {
+                g.add_edge(
+                    edge.time,
+                    edge.src_id,
+                    edge.dst_id,
+                    NO_PROPS,
+                    Some(layer.as_str()),
+                )
+                .expect("Error: Unable to add edge");
+            } else {
+                g.add_edge(edge.time, edge.src_id, edge.dst_id, NO_PROPS, None)
+                    .expect("Error: Unable to add edge");
+            }
Collaborator

Can we make this a separate function in the benchmark instead? It is a really strange parameter to have for this dataset
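What the reviewer asks for might look like the following std-only sketch: a standalone helper on the benchmark side that computes the layer name for an edge, leaving sx_superuser_graph's signature unchanged. The function name is hypothetical, and DefaultHasher stands in for the PR's calculate_hash helper.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Map an edge to one of `num_layers` layer names by hashing its endpoints,
/// mirroring the PR's `calculate_hash(&(edge.src_id, edge.dst_id)) % num_layers`.
/// `DefaultHasher` stands in for the PR's `calculate_hash` helper.
fn layer_for_edge(src_id: u64, dst_id: u64, num_layers: u64) -> String {
    let mut hasher = DefaultHasher::new();
    (src_id, dst_id).hash(&mut hasher);
    (hasher.finish() % num_layers).to_string()
}

fn main() {
    // Deterministic within a run: the same edge always lands in the same layer.
    assert_eq!(layer_for_edge(1, 2, 16), layer_for_edge(1, 2, 16));
    let layer: u64 = layer_for_edge(1, 2, 16).parse().unwrap();
    assert!(layer < 16);
}
```

The benchmark could then build the layered SuperUser graph itself by calling such a helper per edge, so the dataset loader keeps its plain signature.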

@miratepuffin dismissed ljeub-pometry’s stale review on Jun 11, 2024 09:39

merging so that we can check your PR

@miratepuffin merged commit 771d45c into master on Jun 11, 2024
13 checks passed
@miratepuffin deleted the extra_benches branch on Jun 11, 2024 09:39