First, thanks for making this. I'm in the early stages of evaluating a custom TSDB against a Prometheus-based TSDB for a platform that currently does ~9M metrics/sec, and load testing my ingestion path with something like this is going to be critical, so, thanks.
I'm checking out the remote_write feature, and I wanted to see if I've got things correct:
With the /metrics endpoint, avalanche keeps running, refreshing the metrics/labels/samples at the defined interval, continuing to make metrics available for scraping and simulating some metric churn.
However, when using the remote_write feature, it's more of a "generate and run once"? It seems avalanche will spin up, flush the generated metrics to the remote_write URL, rotate metrics/labels/samples if it runs long enough, and then, once everything has been sent (or the request limit is reached?), stop and shut down, as opposed to running continuously like the /metrics endpoint does?
Do I have that correct?
I'm looking to simulate a significant amount of remote_write volume, so I was hoping to spin up a few hundred instances of avalanche that keep sending metrics, as opposed to scraping them all and remote_writing from there.
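For context, this is roughly the shape of what I had in mind — a handful of parallel avalanche processes each pushing to the same remote_write endpoint. The flag names here (`--remote-url`, `--remote-requests-count`, etc.) and the endpoint URL are my reading of the README, so treat them as assumptions rather than a verified recipe:

```shell
#!/bin/sh
# Sketch: launch several avalanche instances that each push generated series
# to a remote_write endpoint. Flag names and the target URL are assumptions
# based on my reading of the avalanche docs, not a tested configuration.
for i in 1 2 3 4; do
  ./avalanche \
    --metric-count=500 \
    --series-count=10 \
    --remote-url=http://my-tsdb:9090/api/v1/write \
    --remote-requests-count=100 \
    --port=$((9000 + i)) &   # distinct /metrics port per instance
done
wait
```

If my read above is right and each instance exits once `--remote-requests-count` is hit, this would need an outer restart loop (or a much larger request count) to sustain load, which is what prompted the question.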
Any clarity on the expected use of the feature would be helpful, thanks!