
[fix](catalog) opt the count pushdown rule for iceberg/paimon/hive scan node (#44038) #45224

Merged
merged 1 commit into from
Dec 15, 2024

Conversation

morningman
Contributor

bp #44038

@doris-robot

Thank you for your contribution to Apache Doris.
Don't know what should be done next? See How to process your PR.

Please clearly describe your PR:

  1. What problem was fixed (ideally with the specific error message)? How was it fixed?
  2. Which behaviors were modified: what was the previous behavior, what is it now, why was it changed, and what are the possible impacts?
  3. What features were added, and why?
  4. Which code was refactored, and why?
  5. Which functions were optimized, and what is the difference before and after the optimization?

@morningman
Contributor Author

run buildall

Contributor

clang-tidy review says "All clean, LGTM! 👍"

…an node (apache#44038)

### What problem does this PR solve?

1. Optimize the parallelism of the count push-down optimization

Count push-down optimization speeds up queries such as `select count(*) from table`.
In this scenario, we can obtain the number of rows directly from the row count statistics
of the external table, or from the metadata of the Parquet/ORC files, without reading the
actual file content.

Currently, we support count push-down optimization for Hive, Iceberg, and Paimon tables.
There are two ways to obtain the number of rows:

1. Obtain it directly from statistics

   For Iceberg tables, the number of rows can be obtained directly from table statistics.
   However, due to historical issues in Iceberg, if the table contains position/equality
   deletes, this method cannot be used, because it could report an incorrect row count.
   In that case, the optimization falls back to reading the file metadata.

2. Obtain it from file metadata

   For Hive, Paimon, and some Iceberg tables, the number of rows can be obtained directly
   from the metadata of the Parquet/ORC files.
   For text-format tables, efficiency can still be improved by performing only row
   separation, without column separation.
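The decision between the two paths above can be sketched as follows. This is a hypothetical illustration, not the actual Doris code; the class, method, and parameter names are invented for clarity.

```java
// Hypothetical sketch: decide how a count(*) query obtains its row count,
// mirroring the two paths described above. Names are illustrative only.
public class CountSource {
    public enum Source { TABLE_STATISTICS, FILE_METADATA, FULL_SCAN }

    // For Iceberg, statistics are only safe when the snapshot has no
    // position/equality delete files; otherwise fall back to file metadata.
    public static Source choose(String tableType, boolean hasDeleteFiles,
                                boolean isParquetOrOrc) {
        if ("iceberg".equals(tableType) && !hasDeleteFiles) {
            return Source.TABLE_STATISTICS;
        }
        if (isParquetOrOrc) {
            // Parquet/ORC footers record per-file row counts.
            return Source.FILE_METADATA;
        }
        // e.g. text format: must still scan, but only rows are separated,
        // not columns.
        return Source.FULL_SCAN;
    }
}
```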

In the task-splitting logic for count push-down optimization, the number of split tasks
should take into account the file format, the number of files, the parallelism, the number
of BE nodes, and Local Shuffle:

1. Count push-down optimization should avoid Local Shuffle, so the number of split tasks
   should be greater than or equal to `parallelism * number of BE nodes`.
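The rule above reduces to a small calculation. The following is a minimal sketch under the stated rule only (the PR also weighs file format and file count in ways not detailed here); the class and method names are invented, not from the Doris codebase.

```java
// Hypothetical sketch: pick a target split count so that every pipeline
// instance on every BE gets at least one split, avoiding Local Shuffle.
// Assumes one split per file at most, which holds for metadata-only counts.
public class SplitCount {
    public static int targetSplitCount(int numFiles, int parallelism, int numBackends) {
        // Avoid Local Shuffle: want at least parallelism * numBackends splits,
        // but we cannot produce more splits than there are files to read.
        int minSplits = parallelism * numBackends;
        return Math.min(numFiles, minSplits);
    }
}
```

For example, with 8-way parallelism on 3 BE nodes, at least 24 splits are wanted, so 100 files yield 24 split tasks, while 10 files can only yield 10.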

2. Fix incorrect count push-down optimization logic

In the previous code, count push-down optimization did not take effect for Iceberg and
Paimon tables, because the CountPushDown information was not pushed to the FileFormatReader
inside the TableFormatReader. This PR fixes that problem.

3. Store the SessionVariable reference in FileQueryScanNode

SessionVariable is held by ConnectContext, and ConnectContext is a ThreadLocal variable.
In some cases, FileQueryScanNode accesses the SessionVariable from other threads, where
the ThreadLocal value is not available. Therefore, the SessionVariable reference is now
stored in FileQueryScanNode to prevent this illegal access.
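The underlying thread-safety issue can be demonstrated with a minimal sketch. The class and field names below are illustrative stand-ins, not the actual Doris classes.

```java
// Hypothetical sketch: a ThreadLocal value is only visible on the thread
// that set it, so a scan node that may run work on other threads must
// capture the reference at construction time instead of re-reading the
// ThreadLocal later.
public class ScanNodeSketch {
    // Stand-in for the ThreadLocal connection context.
    static final ThreadLocal<String> CONNECT_CONTEXT = new ThreadLocal<>();

    private final String sessionVariable; // captured once, safe on any thread

    public ScanNodeSketch() {
        // Constructor runs on the query thread, where the context is set.
        this.sessionVariable = CONNECT_CONTEXT.get();
    }

    public String fromThreadLocal() { return CONNECT_CONTEXT.get(); } // may be null
    public String fromCapturedRef() { return sessionVariable; }       // always valid
}
```

Reading through the captured reference works even on a thread where the ThreadLocal was never set; reading the ThreadLocal there returns null.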

4. Extract an independent FileSplitter class

FileSplitter is a utility class that splits files into `Split`s according to different
strategies. This PR does not modify the splitting strategy; it only extracts this logic
into a separate class so that it can be optimized later.
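To show the shape of such a utility, here is a minimal sketch of one splitting strategy (fixed maximum split size). The real FileSplitter supports multiple strategies; this class, its record, and its method are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a FileSplitter-style utility: cut a file of a
// given byte length into fixed-size Split ranges.
public class FileSplitterSketch {
    public record Split(long offset, long length) {}

    public static List<Split> splitFile(long fileLength, long maxSplitBytes) {
        List<Split> splits = new ArrayList<>();
        for (long offset = 0; offset < fileLength; offset += maxSplitBytes) {
            // The last split may be shorter than maxSplitBytes.
            splits.add(new Split(offset, Math.min(maxSplitBytes, fileLength - offset)));
        }
        return splits;
    }
}
```

Keeping this as a standalone tool class means a later change of strategy (e.g. aligning splits to row-group boundaries) touches only one place.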
@morningman
Contributor Author

run buildall

@morningman morningman merged commit af0c1ac into apache:branch-3.0 Dec 15, 2024
20 of 23 checks passed