Ignore DID loop ids in unroll #2912

Merged · 2 commits merged into main from cowanmeg-patch-1 on Sep 6, 2024
Conversation

cowanmeg (Collaborator) commented Sep 5, 2024

Fixes false warning from #2888

cowanmeg requested a review from naoyam on September 5, 2024 at 20:57
@@ -1214,7 +1214,7 @@ void ensureStaticIndexing(
       continue;
     }
     IterDomain* loop_id = loop->iter_domain();
-    if (loop->vectorize() || loop_id->isThread()) {
+    if (loop->vectorize() || loop_id->isThread() || loop_id->isDeviceDim()) {
naoyam (Collaborator) commented Sep 5, 2024

Can you please change this line to:

Suggested change
-    if (loop->vectorize() || loop_id->isThread() || loop_id->isDeviceDim()) {
+    if (loop->vectorize() || isMemoryPartitionedAcross(tv->getMemoryType(), loop_id->getParallelType())) {

Basically, what we are doing here is figuring out whether each loop contributes to the index of the given tensor. If so, the loop needs to be fully unrolled to make the index static. That's important since a dynamic index would prevent the tensor from being placed in registers.
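
As a minimal CUDA illustration of that last point (added here for context, not part of the PR; the kernel and array names are made up): a local array can stay in registers only while every index into it is a compile-time constant, which is exactly what full unrolling provides, whereas a runtime index typically forces the array into local memory.

// Illustrative sketch only: contrast a statically indexed local array
// (register candidate) with a dynamically indexed one (usually spilled
// to local memory).
__global__ void static_vs_dynamic_index(float* out, int k) {
  float a[4];
#pragma unroll
  for (int i = 0; i < 4; ++i) {
    a[i] = i * 2.0f; // after unrolling, every index is a constant -> registers
  }

  float b[4];
  for (int i = 0; i < 4; ++i) {
    b[i] = i * 3.0f;
  }
  // b is read with a runtime-dependent index, so the compiler generally
  // places b in (slow) local memory rather than registers.
  out[threadIdx.x] = a[3] + b[k & 3];
}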

This particular line just filters out loops that don't matter. For register tensors, TID/BID-parallelized loops don't matter because registers are thread-local. The same obviously holds for DID.

isMemoryPartitionedAcross is a utility function we recently added to make this logic a little easier to change and extend. If we ever add another memory type, this would help reduce code changes.
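
For readers outside the codebase, here is a rough, self-contained C++ sketch of how a predicate like isMemoryPartitionedAcross could be organized; the enums and the exact partitioning rules below are assumptions made for this example, not the actual nvFuser definitions.

#include <iostream>

// Stand-in enums for the example only; the real MemoryType/ParallelType
// live in the nvFuser codebase and have more values.
enum class MemoryType { Local, Shared, Global };
enum class ParallelType { TIDx, TIDy, TIDz, BIDx, BIDy, BIDz, DIDx, Serial };

bool isThreadDim(ParallelType pt) {
  return pt == ParallelType::TIDx || pt == ParallelType::TIDy ||
      pt == ParallelType::TIDz;
}
bool isBlockDim(ParallelType pt) {
  return pt == ParallelType::BIDx || pt == ParallelType::BIDy ||
      pt == ParallelType::BIDz;
}
bool isDeviceDim(ParallelType pt) {
  return pt == ParallelType::DIDx;
}

// Returns true when memory of type `mt` is partitioned across the parallel
// dimension `pt`, i.e. a loop parallelized on `pt` does not contribute to
// the index of a tensor stored in that memory and can therefore be ignored
// by an ensureStaticIndexing-style check.
bool isMemoryPartitionedAcross(MemoryType mt, ParallelType pt) {
  switch (mt) {
    case MemoryType::Local: // registers: private to a thread
      return isThreadDim(pt) || isBlockDim(pt) || isDeviceDim(pt);
    case MemoryType::Shared: // shared memory: private to a block
      return isBlockDim(pt) || isDeviceDim(pt);
    case MemoryType::Global: // global memory: sharded only across devices
      return isDeviceDim(pt);
  }
  return false;
}

int main() {
  std::cout << std::boolalpha
            // A DID-parallelized loop is irrelevant to a register tensor.
            << isMemoryPartitionedAcross(MemoryType::Local, ParallelType::DIDx)
            << "\n" // true
            // A TID-parallelized loop still matters for a global tensor.
            << isMemoryPartitionedAcross(MemoryType::Global, ParallelType::TIDx)
            << "\n"; // false
}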

naoyam (Collaborator) left a comment

LGTM

cowanmeg (Collaborator, Author) commented Sep 5, 2024

!build

cowanmeg merged commit 82a2d52 into main on Sep 6, 2024
36 checks passed
cowanmeg deleted the cowanmeg-patch-1 branch on September 6, 2024 at 03:53