[Feature Request] Use of Binary DocValue for high cardinality fields to improve aggregations performance #16837

Open · rishabhmaurya opened this issue on Dec 12, 2024 · 5 comments
Labels: enhancement, Search:Performance, untriaged

rishabhmaurya (Contributor) commented on Dec 12, 2024

Is your feature request related to a problem? Please describe

The doc-value type for the keyword field is always set to SORTED_SET. This works well for low/medium-cardinality fields; for high-cardinality fields, however, it is an overhead, as it unnecessarily iterates over ordinals and resolves them against the term dictionary.
Lucene 9 also started always compressing the term dictionaries for sorted doc values (https://issues.apache.org/jira/browse/LUCENE-9843), disregarding the compression mode associated with the codec. This makes ordinal lookups even slower when sorted doc values are used, making high-cardinality aggregation queries slower still.
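
To make the ordinal overhead concrete, here is a minimal Lucene-level sketch (assuming Lucene 9.x; the reader and field name are illustrative) contrasting the two read paths: the sorted-set path resolves per-document ordinals against the compressed term dictionary, while the binary path reads the stored bytes directly.

import java.io.IOException;
import org.apache.lucene.index.BinaryDocValues;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.SortedSetDocValues;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BytesRef;

class DocValuesReadPaths {

  // Current path: every value is an ordinal that must be resolved against
  // the term dictionary, which Lucene 9 always compresses (LUCENE-9843).
  static void readViaOrdinals(LeafReader reader) throws IOException {
    SortedSetDocValues dv = DocValues.getSortedSet(reader, "event.id.keyword");
    for (int doc = dv.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = dv.nextDoc()) {
      for (int i = 0; i < dv.docValueCount(); i++) {
        BytesRef term = dv.lookupOrd(dv.nextOrd()); // extra indirection + block decompression
      }
    }
  }

  // Proposed path: the bytes are read straight off the doc-values file,
  // with no ordinal indirection and no term-dictionary lookup.
  static void readDirect(LeafReader reader) throws IOException {
    BinaryDocValues dv = DocValues.getBinary(reader, "event.id.keyword");
    for (int doc = dv.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = dv.nextDoc()) {
      BytesRef value = dv.binaryValue();
    }
  }
}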

Describe the solution you'd like

Using binary doc values for high-cardinality fields can significantly improve the performance of the cardinality aggregation, and of other aggregations too. The catch is that the doc-value type is an index-time setting, and we can't set both types, as that would significantly increase the index size of indices with keyword fields.

We can do one of the following (feel free to add any other solution; a sketch of the index-time difference follows the list):

  1. Introduce a new field type for such high-cardinality fields and use the binary doc-value type for them.
  2. Introduce a configuration within the keyword field (this is what I did for the PoC, as a hack); I'm against this solution due to the complexity it adds to the keyword field type.
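
At the Lucene level, either option boils down to which doc-values field the mapper emits at index time. A minimal sketch of that choice (the field name and highCardinality flag are illustrative; note that BinaryDocValuesField is single-valued per document, whereas SORTED_SET supports multi-valued keyword fields):

import org.apache.lucene.document.BinaryDocValuesField;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.SortedSetDocValuesField;
import org.apache.lucene.util.BytesRef;

class KeywordDocValuesChoice {
  static void addDocValues(Document doc, String field, BytesRef value, boolean highCardinality) {
    if (highCardinality) {
      // Proposed: raw bytes, no ordinals, no term dictionary; single-valued only.
      doc.add(new BinaryDocValuesField(field, value));
    } else {
      // Today's keyword mapper: ordinal-based, deduplicated, multi-value capable.
      doc.add(new SortedSetDocValuesField(field, value));
    }
  }
}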

Shortcomings of having just binary doc values for a given field type, compared to sorted set DV:

  1. Larger index size, depending on the amount of duplication present. Also, since Lucene 9 always compresses the term dictionary for sorted DV, which is not the case for binary DV, that adds to the index size as well when the default best_speed compression mode is used.
  2. Aggregations or any other code path involving ordinals, like the OrdinalsCollector in CardinalityAggregator, can never be used. I believe that for high-cardinality fields this would be the case anyway, since the ordinals overhead is very high there and they shouldn't be used (see the ordinal-free sketch after this list).
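
For reference, a cardinality collector over binary doc values would hash each document's bytes directly instead of marking ordinal bits, roughly like the sketch below (the HashSet stands in for the HyperLogLog++ sketch the real aggregator uses; the field name is illustrative):

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.lucene.index.BinaryDocValues;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.StringHelper;

class OrdinalFreeCardinality {
  static long approximateCardinality(LeafReader reader) throws IOException {
    BinaryDocValues dv = DocValues.getBinary(reader, "event.id.keyword");
    Set<Integer> hashes = new HashSet<>(); // stand-in for an HLL++ sketch
    for (int doc = dv.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = dv.nextDoc()) {
      // Hash the raw bytes; no ordinal resolution, no term-dictionary access.
      hashes.add(StringHelper.murmurhash3_x86_32(dv.binaryValue(), 0));
    }
    return hashes.size();
  }
}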

Related component

Search:Performance

Describe alternatives you've considered

No response

Additional context

I tweaked the code to add both sorted set and binary doc values for the keyword field type, and added a way to configure which one FieldData, used for aggregations, reads.
On running OSB against the Big5 workload for a high-cardinality field, the improvement was significant, almost 9x, from 28.8 sec to 3.3 sec; the query and responses follow the sketch below.
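
A rough sketch of what the PoC indexing looks like under these assumptions (Lucene allows only one doc-values type per field name, so the binary copy presumably lives under a sibling name; the ._binary suffix here is hypothetical):

import org.apache.lucene.document.BinaryDocValuesField;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.SortedSetDocValuesField;
import org.apache.lucene.util.BytesRef;

class DualDocValuesPoc {
  static void addBothDocValues(Document doc, BytesRef value) {
    // Keep the existing ordinal-based doc values for sorting/terms aggs...
    doc.add(new SortedSetDocValuesField("event.id.keyword", value));
    // ...and additionally index the raw bytes under a sibling field (name
    // hypothetical) that FieldData can be configured to read for aggregations.
    doc.add(new BinaryDocValuesField("event.id.keyword._binary", value));
  }
}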

Query:

{ 
  "size": 0, 
  "aggs": {
    "agent": {
      "cardinality": {
        "field": "event.id.keyword"
      }
    }
  }
}

Using sorted set doc values:

{
  "took" : 28851,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 10000,
      "relation" : "gte"
    },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "agent" : {
      "value" : 180250
    }
  }
}

Using binary doc values:

{
  "took" : 3266,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 10000,
      "relation" : "gte"
    },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "agent" : {
      "value" : 180250
    }
  }
}
rishabhmaurya added the enhancement and untriaged labels on Dec 12, 2024
rishabhmaurya self-assigned this on Dec 12, 2024
kkewwei (Contributor) commented on Dec 12, 2024

@rishabhmaurya Good idea; that is to say, we sacrifice compression ratio to speed up reads. If we don't use term dictionaries, the storage will be somewhat bloated; can you run a benchmark to compare?

rishabhmaurya (Contributor, Author) commented on Dec 12, 2024

Good idea; that is to say, we sacrifice compression ratio to speed up reads.

@kkewwei There might be some impact: even high-cardinality fields contain duplicates, so it all depends on how much duplication is present. It would also be interesting to see the size difference if the binary DVs were compressed as well. I'm currently analyzing the sizes for the event.id field used in the big5 workload and will post an update soon.

rishabhmaurya (Contributor, Author) commented on Dec 12, 2024

sorted set:

  "dvd" : {
    "size_in_bytes" : 331618470,
    "description" : "DocValues"
  },

binary:

  "dvd" : {
    "size_in_bytes" : 982887189,
    "description" : "DocValues"
  },

An increase of ~650 MB (982,887,189 vs. 331,618,470 bytes), i.e. the doc-values size nearly tripled on replacing sorted set with binary. But this compares a compressed term dictionary in sorted set against uncompressed binary.

index size -

curl localhost:9200/_cat/indices
yellow open big5cardssdv O4SqYFCrRNSSUlJMS-pkiA 1 1 69223950 0  2.5gb  2.5gb
yellow open big5cardbdv  SlzNDVjwQ_KirkVG0DywWQ 1 1 69223950 0    3gb    3gb

An increase of ~500 MB in total index size.

Next step: I will check how size and speed are impacted when using best_compression as the codec with binary DV.

rishabhmaurya (Contributor, Author) commented on Dec 13, 2024

Surprisingly, on using best_compression, the size of the dvd file remained the same -

    "dvd" : {
      "size_in_bytes" : 983490211,
      "description" : "DocValues"
    },

So we are definitely trading off storage size by using binary doc values. How much depends entirely on the amount of duplication in the high-cardinality field.
It might still be worth it for some users to use this format for the speed gains it provides. I need input from other folks as well on what they think about introducing a new field type in the keyword field family for such high-cardinality cases.
cc @msfroh @andrross @reta @jainankitk

bugmakerrrrrr (Contributor) commented

Surprisingly, on using best_compression, the size of the dvd file remained the same -

AFAIK, the compression is not applied to binary doc values. Lucene 8 added compression for binary doc-value fields, but it was removed in Lucene 9. Maybe we could consider adding it back as a custom codec (see the sketch below). In addition, we could also consider using ZSTD to compress the binary/sorted set/sorted doc values.
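
If we went the custom-codec route, the wiring side would look roughly like this (a minimal sketch assuming Lucene 9.12's Lucene912Codec; CompressedBinaryDocValuesFormat is hypothetical and is where the actual LZ4/ZSTD compression work would live):

import org.apache.lucene.codecs.DocValuesFormat;
import org.apache.lucene.codecs.FilterCodec;
import org.apache.lucene.codecs.lucene912.Lucene912Codec;

// Hypothetical codec that delegates everything to the default codec except
// the DocValuesFormat, which would compress binary doc values (LZ4 or ZSTD).
public class CompressedBinaryDvCodec extends FilterCodec {
  private final DocValuesFormat dvFormat = new CompressedBinaryDocValuesFormat(); // hypothetical

  public CompressedBinaryDvCodec() {
    super("CompressedBinaryDvCodec", new Lucene912Codec());
  }

  @Override
  public DocValuesFormat docValuesFormat() {
    return dvFormat;
  }
}
// Note: the codec name must also be registered via SPI
// (META-INF/services/org.apache.lucene.codecs.Codec) so segments remain readable.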
