ES|QL categorize with multiple groupings #118173
base: main
Conversation
Pinging @elastic/ml-core (Team:ML)
Hi @jan-elastic, I've created a changelog YAML for you.
...in/java/org/elasticsearch/compute/aggregation/blockhash/CategorizePackedValuesBlockHash.java
Very nice!
x-pack/plugin/esql/compute/src/main/java/org/elasticsearch/compute/data/Page.java
...in/java/org/elasticsearch/compute/aggregation/blockhash/CategorizePackedValuesBlockHash.java
Nice work 👀
Some nits and new tests in comments; the impl looks quite solid to me
...in/esql/compute/src/main/java/org/elasticsearch/compute/aggregation/blockhash/BlockHash.java
x-pack/plugin/esql/qa/testFixtures/src/main/resources/categorize.csv-spec
...in/java/org/elasticsearch/compute/aggregation/blockhash/CategorizePackedValuesBlockHash.java
if (id == 0) {
    builder.appendNull();
} else {
    builder.appendBytesRef(regexes.getBytesRef(id + idsOffset, scratch));
Not for now: we're potentially repeating a lot of BytesRef values here. I wonder if there is, or whether it would make sense to have, a BytesRefBlock that stores every distinct value just once, plus a reference per position, instead of all the bytesrefs:
AAAAAA
BBBBBBB
AAAAAA
AAAAAA
->
// 1: AAAAAA
// 2: BBBBBBB
1
2
1
1
@nik9000 Something to consider for later? Maybe it's too specific for this. And anyway, the next EVAL or whatever will duplicate the value again.
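The dictionary idea above could be sketched like this (hypothetical names, purely illustrative; not the real BytesRefBlock API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of a dictionary-encoded block: each distinct value is stored once
 *  in a dictionary, and each position holds only an index into it. */
final class DictEncodedBlock {
    final List<String> dictionary = new ArrayList<>();
    final List<Integer> positions = new ArrayList<>();
    private final Map<String, Integer> index = new HashMap<>();

    void append(String value) {
        // Reuse the dictionary entry if we've already seen this value.
        Integer id = index.get(value);
        if (id == null) {
            id = dictionary.size();
            dictionary.add(value);
            index.put(value, id);
        }
        positions.add(id);
    }

    String get(int position) {
        return dictionary.get(positions.get(position));
    }
}
```

Appending `AAAAAA, BBBBBBB, AAAAAA, AAAAAA` then stores two dictionary entries and four small ids, instead of four full byte strings.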
That sounds like a nice thing to have, but definitely out of scope for this PR.
However, the next EVAL should not duplicate the value again.
If you have:
// 1: AAAAAA
// 2: BBBBBBB
1
2
1
1
then an efficient EVAL x=SUBSTRING(x, 1, 2)
should give
// 1: AA
// 2: BB
1
2
1
1
without ever duplicating.
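As a sketch of that point (illustrative only, not the real evaluator API): an EVAL over a dictionary-encoded block only needs to transform the dictionary entries, never the per-position ids.

```java
import java.util.List;
import java.util.stream.Collectors;

/** Sketch: applying a 1-based SUBSTRING(value, from, length) to a
 *  dictionary-encoded block by rewriting only the dictionary. */
final class DictEval {
    static List<String> substringDictionary(List<String> dictionary, int from, int length) {
        // Transform each distinct value once; the position->id mapping is unchanged.
        return dictionary.stream()
            .map(v -> v.substring(from - 1, Math.min(v.length(), from - 1 + length)))
            .collect(Collectors.toList());
    }
}
```

With the dictionary `["AAAAAA", "BBBBBBB"]` and ids `1, 2, 1, 1`, `SUBSTRING(x, 1, 2)` only touches the two dictionary entries, yielding `["AA", "BB"]` with the ids untouched.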
For that SUBSTRING to not duplicate, we would need to add that "hashtable" strategy in the BytesRefBlockBuilder. It looks good (?), but I wonder if using that by default could perform negatively in some scenarios. Something to try eventually, probably.
Sounds worth trying in the future. Are you making a note (issue) of this, so that the idea doesn't get lost?
Sure! I'll raise it with Nik, just in case it was already considered and discarded, and then I'll document it in an issue somewhere.
Thanks for the feedback, @ivancea. I've processed all your comments. PTAL
Thanks!
Heya, this generally looks very good, thank you @jan-elastic !
I only have one observation about potential untracked memory that I think we should ponder at least a bit to ensure we're safe; see below.
assert groups.get(0).isCategorize();
assert groups.subList(1, groups.size()).stream().noneMatch(GroupSpec::isCategorize);
nit/bikeshed: throwing IllegalArgumentException would be friendlier towards tests; when assertions trigger, they bring down a whole node, because an AssertionError is an error, not an exception. It's probably fine, though.
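The suggested alternative could look roughly like this (GroupSpec here is a minimal stand-in for the real class, for illustration only):

```java
import java.util.List;

/** Sketch: validating the group specs with IllegalArgumentException instead of
 *  assert, so a violation fails the request rather than killing the node. */
final class GroupValidation {
    record GroupSpec(boolean isCategorize) {}

    static void checkGroups(List<GroupSpec> groups) {
        if (groups.isEmpty() || groups.get(0).isCategorize() == false) {
            throw new IllegalArgumentException("first group must be a CATEGORIZE");
        }
        if (groups.subList(1, groups.size()).stream().anyMatch(GroupSpec::isCategorize)) {
            throw new IllegalArgumentException("only the first group may be a CATEGORIZE");
        }
    }
}
```

Unlike an `assert`, this check also runs in production, where assertions are disabled.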
This shouldn't happen, right? If this assertion fails, other code is broken (the verifier). I'll leave it as is unless you object. BTW, do you know if we run assertions in production?
Assertions are disabled in prod, and this indeed shouldn't happen. Occasionally a bug slips through, though, and when it triggers an assertion, it kills the whole IT suite run because it kills a node. It's fine to leave as-is, though!
@@ -1897,28 +1897,21 @@ public void testIntervalAsString() {
public void testCategorizeSingleGrouping() {
    assumeTrue("requires Categorize capability", EsqlCapabilities.Cap.CATEGORIZE_V5.isEnabled());
out of scope, but this assumeTrue could be removed now that CATEGORIZE is in tech preview.
Removed them all from the unit tests. I still need them in the csv tests, right (for bwc tests)?
The capability checks remain required, otherwise we'll run new tests against old nodes, indeed!
x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/VerifierTests.java
x-pack/plugin/esql/src/test/java/org/elasticsearch/xpack/esql/analysis/VerifierTests.java
x-pack/plugin/esql/qa/testFixtures/src/main/resources/categorize.csv-spec
...in/java/org/elasticsearch/compute/aggregation/blockhash/CategorizePackedValuesBlockHash.java
...in/java/org/elasticsearch/compute/aggregation/blockhash/CategorizePackedValuesBlockHash.java
try (BytesStreamOutput out = new BytesStreamOutput()) {
    out.writeBytesRef(categorizeBlockHash.serializeCategorizer());
    IntVector idsVector = (IntVector) keys[0].asVector();
    int[] idsArray = new int[idsVector.getPositionCount()];
I'm a little afraid that we allocate potentially quite a bit of memory without asking the breaker first. I believe this will lead to tricky-to-debug situations when the memory pressure is already high and this leads to an OOM. Not sure how likely, but still.
The blockFactory has convenience methods preAdjustBreakerForInt and adjustBreaker that we'd better use here. That needs to be done carefully re. try/catching, so as not to leak a circuit breaker reservation.
@nik9000 wdyt? Should we play it safe here?
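The reserve-then-allocate pattern being discussed might look roughly like this (a generic sketch with a stand-in breaker class; the real BlockFactory/CircuitBreaker APIs differ):

```java
/** Sketch: ask the breaker for the bytes before allocating, and release the
 *  reservation if the allocation itself fails, so nothing leaks. */
final class BreakerSketch {
    static final class Breaker {
        long used;
        final long limit;
        Breaker(long limit) { this.limit = limit; }
        void addEstimate(long bytes) {
            if (used + bytes > limit) {
                throw new IllegalStateException("circuit breaker tripped");
            }
            used += bytes;
        }
        void release(long bytes) { used -= bytes; }
    }

    static int[] allocateIds(Breaker breaker, int positionCount) {
        long bytes = (long) positionCount * Integer.BYTES;
        breaker.addEstimate(bytes); // reserve first; throws instead of OOMing
        boolean success = false;
        try {
            int[] ids = new int[positionCount];
            success = true;
            return ids;
        } finally {
            if (success == false) {
                breaker.release(bytes); // undo the reservation on failure
            }
        }
    }
}
```

If `addEstimate` throws, nothing was reserved; if the allocation fails afterwards, the `finally` block gives the reservation back, which is the leak-avoidance the comment is worried about.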
Instead of manually handling the memory here, maybe we should just do an idsVector.writeTo(...), so we remove a chunk of code from here and avoid allocating anything else?
It would be nice to track these. I'm not sure it has to be a blocker, though. Until a few months ago, aggs didn't track a few things similar to this. OTOH, it could cause problems...
Ah, it looks like we just writeIntArray with this. In that case, yeah, I'd write the ids manually.
Ok, I checked, and we also don't really track memory in CategorizeBlockHash.getKeys; neither do we track the memory for the categorizer itself. Update: actually, we probably do - so that should be covered.
The problem here is that, due to combinatorial explosion, the untracked memory when writing the idsVector can be a lot larger than the actual categorizer state.
E.g. STATS ... BY CATEGORIZE(message), field1, field2.
If there are n categories of messages, m distinct field1 values and o distinct field2 values, then the number of rows - and thus ids - will be n*m*o. And we're copying this twice: once into an int[] and another time when writing into out.
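A back-of-envelope sketch of that blow-up (pure arithmetic, illustrative only):

```java
/** Sketch: estimate of the untracked bytes when serializing the ids for
 *  STATS ... BY CATEGORIZE(message), field1, field2. */
final class IdMemoryEstimate {
    static long untrackedBytes(long n, long m, long o) {
        long rows = n * m * o;           // one id per (category, field1, field2) combination
        return 2 * rows * Integer.BYTES; // copied twice: into int[] and into the output
    }
}
```

For example, 1000 categories with 100 distinct values in each of two extra grouping fields already means 10 million ids, so roughly 80 MB of untracked allocations, far more than the categorizer state itself.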
I'll leave this up to you all to decide...
...in/java/org/elasticsearch/compute/aggregation/blockhash/CategorizePackedValuesBlockHash.java
...in/java/org/elasticsearch/compute/aggregation/blockhash/CategorizePackedValuesBlockHash.java
Let's wait with merging until we've had a chance to look at this subtle point.
Heya, the test here is a good start - but we could maybe also add some randomization, nulls and multivalues to gain more confidence.
@alex-spies Thanks for the review! Processed all your comments, except the OOM one, which needs more discussion. PTAL
Thanks @jan-elastic !
Let's take care of the memory accounting in a follow-up.
Maybe we want to add a couple more block hash test cases, but otherwise this LGTM and should be unblocked :)
    return newIds.build();
}
int[] ids = in.readIntArray();
ids = categorizeBlockHash.recategorize(categorizerState, ids);
++ definitely easier to grasp, thanks!
This could still use randomization, nulls and multi-values.
No description provided.