Improve efficiency of ETW metrics exporter #134

Open — wants to merge 23 commits into base: main
Changes from 15 commits (23 commits total)
255952c
Initial criterion support
mattbodd Nov 26, 2024
64255ab
Add doc comment to etw bench
mattbodd Nov 26, 2024
7d2890f
Add useful exporter benchmark
mattbodd Dec 3, 2024
c327437
Add useful etw benchmark
mattbodd Dec 3, 2024
391d151
Add system info and exporter and etw times
mattbodd Dec 3, 2024
fe5e7d0
Run cargo fmt
mattbodd Dec 3, 2024
bada294
Update benchmark
mattbodd Dec 3, 2024
18f550a
Try to be more efficient in exporter
mattbodd Dec 3, 2024
8dd5df6
Apply clippy lints
mattbodd Dec 3, 2024
7b18b01
Create ExportMetricsServiceRequest for each metric and replace the da…
mattbodd Dec 4, 2024
d167101
Reorder imports
mattbodd Dec 4, 2024
6f3e6f1
Merge branch 'main' into dev/mboddewy/add_criterion_benchmarks
mattbodd Dec 4, 2024
afac194
Attempt to bench mark export function more explicitly
mattbodd Dec 17, 2024
4bb3bf1
Use tokio async with criterion
mattbodd Dec 17, 2024
2044d73
Merge main
mattbodd Dec 17, 2024
3bbe76a
Do no create a new exporter for each exporter benchmark
mattbodd Dec 17, 2024
c817fa2
Remove etw benchmark and revert etw module to be private
mattbodd Dec 17, 2024
4b995d1
Remove etw benchmark from Cargo.toml
mattbodd Dec 17, 2024
36102a8
Use 10 metrics in the ResourceMetrics that is exported
mattbodd Dec 17, 2024
8b39545
Reuse the same ResourceMetric for every benchmark loop iteration in e…
mattbodd Dec 17, 2024
651913e
Avoid unecessary let (clippy lint)
mattbodd Dec 17, 2024
b29b530
Manage own async runtime instead of relying on criterion's
mattbodd Dec 18, 2024
72a6bc9
Move criterion to dev-dependencies in etw-metrics
mattbodd Dec 19, 2024
5 changes: 4 additions & 1 deletion Cargo.toml
@@ -19,4 +19,7 @@ opentelemetry-http = "0.27"
opentelemetry-proto = { version = "0.27", default-features = false }
opentelemetry_sdk = { version = "0.27", default-features = false }
opentelemetry-stdout = "0.27"
opentelemetry-semantic-conventions = { version = "0.27", features = ["semconv_experimental"] }
opentelemetry-semantic-conventions = { version = "0.27", features = [
"semconv_experimental",
] }
criterion = "0.5"
11 changes: 10 additions & 1 deletion opentelemetry-etw-metrics/Cargo.toml
@@ -17,7 +17,8 @@ opentelemetry-proto = { workspace = true, features = ["gen-tonic", "metrics"] }
async-trait = "0.1"
prost = "0.13"
tracelogging = "1.2.1"
tracing = {version = "0.1", optional = true}
tracing = { version = "0.1", optional = true }
criterion = { workspace = true, features = ["html_reports", "async_tokio"] }

[dev-dependencies]
tokio = { version = "1.0", features = ["full"] }
@@ -28,3 +29,11 @@ default = ["internal-logs"]

[package.metadata.cargo-machete]
ignored = ["tracing"]

[[bench]]
name = "etw"
harness = false

[[bench]]
name = "exporter"
harness = false
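The two `harness = false` entries tell Cargo not to link libtest's default bench harness, so each bench target must provide its own `fn main` (normally generated by `criterion_main!`). A stdlib-only sketch of what such a self-driven bench target boils down to — the helper name, workload, and iteration count are illustrative, not from the PR:

```rust
// With `harness = false`, the bench target supplies fn main() itself.
// Criterion's criterion_main! macro generates this; below is a plain
// stdlib stand-in that times a closure in the same basic way.
use std::hint::black_box;
use std::time::Instant;

fn time_it<F: FnMut()>(label: &str, iterations: u32, mut f: F) -> u128 {
    let start = Instant::now();
    for _ in 0..iterations {
        f();
    }
    let total_ns = start.elapsed().as_nanos();
    println!("{label}: {total_ns} ns over {iterations} iterations");
    total_ns
}

fn main() {
    // Stand-in workload; the real bench calls etw::write(buffer) here.
    let buffer = b"This is a test buffer";
    time_it("write_event (stand-in)", 1_000_000, || {
        black_box(buffer);
    });
}
```

Criterion adds warm-up, statistical sampling, and outlier detection on top of this basic loop, which is why the PR pulls it in rather than hand-rolling timing.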
29 changes: 29 additions & 0 deletions opentelemetry-etw-metrics/benches/etw.rs
@@ -0,0 +1,29 @@
//! Run with `$ cargo bench --bench etw -- --exact <test_name>` to run a specific benchmark.
//! For example, to run the benchmark named "write_event": `$ cargo bench --bench etw -- --exact write_event`
//! To run all benchmarks in this file: `$ cargo bench --bench etw`
//!
/*
The benchmark results:
criterion = "0.5.1"
OS: Windows 11 Enterprise N, 23H2, Build 22631.4460
Hardware: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz 2.79 GHz, 16vCPUs
RAM: 64.0 GB
| Test | Average time|
|--------------------------------|-------------|
| write_event | 2.0649ns |
*/

use criterion::{criterion_group, criterion_main, Criterion};
use opentelemetry_etw_metrics::etw::write;

fn write_event() {
let buffer = "This is a test buffer".as_bytes();
write(buffer);
}

fn criterion_benchmark(c: &mut Criterion) {
c.bench_function("write_event", |b| b.iter(|| write_event()));
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
80 changes: 80 additions & 0 deletions opentelemetry-etw-metrics/benches/exporter.rs
@@ -0,0 +1,80 @@
//! Run with `$ cargo bench --bench exporter -- --exact <test_name>` to run a specific benchmark.
//! For example, to run the benchmark named "export": `$ cargo bench --bench exporter -- --exact export`
//! To run all benchmarks in this file: `$ cargo bench --bench exporter`
//!
/*
The benchmark results:
criterion = "0.5.1"
OS: Windows 11 Enterprise N, 23H2, Build 22631.4460
Hardware: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz 2.79 GHz, 16vCPUs
RAM: 64.0 GB
| Test | Average time|
|--------------------------------|-------------|
| exporter | 847.38µs |
*/

use opentelemetry::{InstrumentationScope, KeyValue};
use opentelemetry_etw_metrics::MetricsExporter;

use opentelemetry_sdk::{
metrics::{
data::{DataPoint, Metric, ResourceMetrics, ScopeMetrics, Sum},
exporter::PushMetricExporter,
Temporality,
},
Resource,
};

use criterion::{criterion_group, criterion_main, Criterion};

async fn export(mut resource_metrics: ResourceMetrics) {
let exporter = MetricsExporter::new();
exporter.export(&mut resource_metrics).await.unwrap();
}

fn create_resource_metrics() -> ResourceMetrics {
let data_point = DataPoint {
attributes: vec![KeyValue::new("datapoint key", "datapoint value")],
start_time: Some(std::time::SystemTime::now()),
time: Some(std::time::SystemTime::now()),
value: 1.0_f64,
exemplars: vec![],
};

let sum: Sum<f64> = Sum {
data_points: vec![data_point.clone(), data_point.clone(), data_point],
temporality: Temporality::Delta,
is_monotonic: true,
};

let resource_metrics = ResourceMetrics {
resource: Resource::new(vec![KeyValue::new("service.name", "my-service")]),
scope_metrics: vec![ScopeMetrics {
scope: InstrumentationScope::default(),
metrics: vec![Metric {
name: "metric_name".into(),
description: "metric description".into(),
unit: "metric unit".into(),
data: Box::new(sum),
}],
}],
};

    resource_metrics
    // GitHub Actions / lint — check failure on line 63:
    // "returning the result of a `let` binding from a block" (clippy::let_and_return)
}
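The clippy failure flagged above (`let_and_return`) is resolved by making the struct literal the function's tail expression instead of binding it to `resource_metrics` first. A minimal sketch using a hypothetical simplified type, not the real `ResourceMetrics`:

```rust
// Hypothetical stand-in type, for illustration only.
struct Metrics {
    count: usize,
}

// Before (triggers clippy::let_and_return):
//     fn build() -> Metrics {
//         let m = Metrics { count: 3 };
//         m
//     }

// After: return the expression directly as the tail of the block.
fn build() -> Metrics {
    Metrics { count: 3 }
}

fn main() {
    assert_eq!(build().count, 3);
}
```

The same one-line change applies to `create_resource_metrics`: drop the `let resource_metrics = …;` binding and end the function with the `ResourceMetrics { … }` literal.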

fn criterion_benchmark(c: &mut Criterion) {
let runtime = tokio::runtime::Builder::new_multi_thread()
.worker_threads(1)
.enable_all()
.build()
.unwrap();

c.bench_function("export", |b| {
b.to_async(&runtime)
.iter(|| export(create_resource_metrics()))
});
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
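Per the commit log, later commits grow the payload to ten metrics (36102a8) and reuse one `ResourceMetrics` across bench iterations instead of rebuilding it each loop (8b39545). A sketch of that payload-generation pattern with a hypothetical stand-in type — the real bench would build `Metric` values from `opentelemetry_sdk` the same way:

```rust
// Hypothetical simplified stand-in for the SDK's Metric type.
#[derive(Clone)]
struct FakeMetric {
    name: String,
    value: f64,
}

// Build `n` metrics once up front; the collection can then be reused
// across benchmark iterations rather than reconstructed per loop.
fn create_metrics(n: usize) -> Vec<FakeMetric> {
    (0..n)
        .map(|i| FakeMetric {
            name: format!("metric_{i}"),
            value: 1.0,
        })
        .collect()
}

fn main() {
    let metrics = create_metrics(10);
    assert_eq!(metrics.len(), 10);
    assert_eq!(metrics[9].name, "metric_9");
    assert_eq!(metrics[0].value, 1.0);
}
```

Moving construction out of the timed loop keeps the measurement focused on the export path itself rather than on allocating the test payload.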