SNOW-1636678 add server side param for complexity bounds #2273
Conversation
)
# The complexity score lower bound is set to match COMPILATION_MEMORY_LIMIT
# in Snowflake. This is the limit where we start seeing compilation errors.
DEFAULT_COMPLEXITY_SCORE_LOWER_BOUND = 10_000_000
I'm assuming the compilation memory limit is measured in bytes? Can we also assume that our complexity score is measured in bytes? Wasn't it more based on the number of plan nodes?
If they don't use the same units, then is there a mapping between such units that we use?
"PYTHON_SNOWPARK_LARGE_QUERY_BREAKDOWN_COMPLEXITY_LOWER_BOUND" | ||
) | ||
# The complexity score lower bound is set to match COMPILATION_MEMORY_LIMIT | ||
# in Snowflake. This is the limit where we start seeing compilation errors. |
Do we know if this is a soft limit or a hard limit? In other words, does the compiler intentionally error out as soon as the limit is exceeded, or does it keep going anyway with undefined behavior?
)
# The complexity score lower bound is set to match COMPILATION_MEMORY_LIMIT
# in Snowflake. This is the limit where we start seeing compilation errors.
DEFAULT_COMPLEXITY_SCORE_LOWER_BOUND = 10_000_000
I'm also assuming that this limit is configurable on the compiler side. Is there a way to pull the currently configured value instead of hard coding it here - to make sure the two configurations are in sync?
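For illustration only, a hypothetical sketch of what pulling the server-side value could look like, assuming COMPILATION_MEMORY_LIMIT were queryable as a session parameter — the parameter name, its client visibility, and its units are all assumptions here, not confirmed Snowflake behavior:

# Hypothetical sketch only: NOT confirmed Snowflake behavior. Assumes the
# compiler limit is exposed via SHOW PARAMETERS under this name.
rows = session.sql("SHOW PARAMETERS LIKE 'COMPILATION_MEMORY_LIMIT'").collect()
lower_bound = int(rows[0]["value"]) if rows else DEFAULT_COMPLEXITY_SCORE_LOWER_BOUND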
@@ -784,6 +810,24 @@ def large_query_breakdown_enabled(self, value: bool) -> None:
                "value for large_query_breakdown_enabled must be True or False!"
            )

    @large_query_breakdown_complexity_bounds.setter
    def large_query_breakdown_complexity_bounds(self, value: Tuple[int, int]) -> None:
might be easier to take a lower and upper bound here; then you can reconstruct the tuple internally, and we can also make lower or upper optional
@sfc-gh-yzou this is a @setter method. I'm afraid we have to abide by this format.
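To make the exchange above concrete, a minimal standalone sketch (not the actual Snowpark code) of why a @property setter has to receive a single value, so the two bounds arrive packed in a tuple and are unpacked internally:

from typing import Tuple

class _BoundsConfig:
    # Minimal illustration: a property setter accepts exactly one value
    # argument, so lower/upper are passed as a tuple and reconstructed inside.
    def __init__(self) -> None:
        self._bounds: Tuple[int, int] = (10_000_000, 12_000_000)  # upper value is illustrative

    @property
    def complexity_bounds(self) -> Tuple[int, int]:
        return self._bounds

    @complexity_bounds.setter
    def complexity_bounds(self, value: Tuple[int, int]) -> None:
        lower, upper = value  # reconstruct the two bounds internally
        if lower >= upper:
            raise ValueError("lower bound must be smaller than upper bound")
        self._bounds = (lower, upper)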
Which Jira issue is this PR addressing? Make sure that there is an accompanying issue to your PR.
Fixes SNOW-1636678
Fill out the following pre-review checklist:
Please describe how your code solves the related issue.
This PR makes the following changes:
Exposes a large_query_breakdown_complexity_bounds setting on the session object.
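A hypothetical usage sketch, assuming the property name shown in the diff above; the numbers are placeholders, not recommended values:

# Illustrative only: adjust the breakdown bounds on an existing session.
session.large_query_breakdown_complexity_bounds = (10_000_000, 12_000_000)
print(session.large_query_breakdown_complexity_bounds)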