feat: adjustable rule_group interval #515
Open
wbollock wants to merge 10 commits into slok:main from wbollock:feat/rule_group_interval
+364 −33
Conversation
This allows for more expensive SLO recording rules by letting users create a custom Prometheus `rule_group.interval` (https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#rule_group). Now, instead of all rule groups using the default global evaluation interval, a custom interval can be set on all sets of recording rules for an SLO. If no `interval` is set then the global default is assumed, matching current Sloth behavior. Resolves: slok#367
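For context, this is roughly the shape of a generated Prometheus rules file with a group-level interval set; the group name, recording rule, and expression below are illustrative placeholders, not the exact output of this PR:

```yaml
groups:
  # Group, rule, and label names here are illustrative placeholders.
  - name: sloth-slo-sli-recordings-myservice-requests-availability
    interval: 5m  # group-level override of the global evaluation_interval
    rules:
      - record: slo:sli_error:ratio_rate5m
        expr: |
          sum(rate(http_requests_total{job="myservice",code=~"5.."}[5m]))
          /
          sum(rate(http_requests_total{job="myservice"}[5m]))
        labels:
          sloth_service: myservice
          sloth_slo: requests-availability
```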
Related to how the unit test coverage works, this makes sure RuleGroupInterval is not appended to RuleGroups unless it is defined and not empty. Not requiring the `yaml` file does an okay job of this, but explicitly writing the rules without RuleGroupInterval is safer. It's a bit ugly and repetitive, but I think that's just how Go works.
This updates the tests to include `interval`s and also adds a test to make sure the rules render correctly when no interval is included.
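Concretely, the expectation being tested is something like the following: when an interval is set it is rendered on the group, and when it is not set the `interval` key is omitted entirely, so Prometheus falls back to the global `evaluation_interval` (group names are again illustrative):

```yaml
# Interval defined on the SLO: the key is rendered on the group.
- name: sloth-slo-meta-recordings-myservice-requests-availability
  interval: 3m
  rules: []  # rules elided for brevity

# No interval defined: the key is omitted and the global evaluation_interval applies.
- name: sloth-slo-alerts-myservice-requests-availability
  rules: []  # rules elided for brevity
```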
Instead of a singular global default, a rule_group interval can now be set for every individual type of rule_group Sloth generates. The generic `interval: all` setting also stays and can "fill in" any missing per-rule-group defaults, along with the default behavior of doing nothing if no `interval` is specified; a sketch of that layering follows below.
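As a sketch of that layering, and purely hypothetical since the exact field names and their placement in the Sloth spec are defined by this PR's schema changes and are not reproduced here:

```yaml
# Hypothetical sketch: key names and placement are illustrative, not the exact schema.
intervals:
  all: 3m         # generic fallback for any rule group type without its own interval
  sli_error: 5m   # SLI error recording rules (the expensive ones)
  metadata: 3m    # metadata recording rules
  alert: 1m       # alert rules
# Omitting the block entirely keeps today's behavior: every group uses the
# global evaluation_interval.
```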
Specific ones and general/specific combined
Co-authored-by: Will Hegedus <[email protected]>
@slok when you have a chance could you take a look and let me know what you think?
looks like an amazing feature, cc @slok
This allows for more expensive SLO recording rules by letting users create a custom Prometheus `rule_group.interval`, as a rule group default or per type of rule (https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#rule_group). Now, instead of all rule groups using the default global evaluation interval, a custom interval can be set on all three types of recording rules for an SLO. If no `interval` is set then the global default is assumed, matching current Sloth behavior.
Resolves: #367
Edit: I didn't touch the Kubernetes operator side, which probably needs adjustments.