Always enable IdModel-based indexing when resize is used #3567

Merged 5 commits on Dec 11, 2024
csrc/device_lower/lower2device.cpp (+6, -0)
```diff
@@ -348,6 +348,12 @@ IdModelOptions getIdModelOptions(Fusion* fusion) {
     } else if (expr->isA<MmaOp>()) {
       options.setBuildTensorIndexer(true);
       continue;
+    } else if (expr->isOneOf<SliceOp, PadOp>()) {
+      options.setProducerIndex(true);
+      options.setConsumerIndex(true);
+      options.setInlinePredicate(true);
+      options.setUnswitchPredicate(true);
+      continue;
     } else if (auto reshape = dynamic_cast<ViewOp*>(expr)) {
       // The legacy indexer has an issue when an expand broadcast is
       // involved in reshape transformations. Enable both tensor and
```

Review comments on the new `SliceOp`/`PadOp` branch:

Collaborator: Should we have `options.setLoop(true)` also?

Collaborator: Nevermind, I see your comment in test_resize.cpp now.
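For context, here is a minimal sketch of a fusion that would hit the new branch. It assumes nvFuser's C++ op API and the `makeConcreteTensor` helper from the C++ test utilities; the snippet is illustrative, not taken from this PR's tests.

```cpp
#include <fusion.h>
#include <ops/all_ops.h>

using namespace nvfuser;

void resizeFusionSketch() {
  Fusion fusion;
  FusionGuard fg(&fusion);

  // makeConcreteTensor is assumed from nvFuser's C++ test utilities.
  TensorView* tv0 = makeConcreteTensor({16});
  fusion.addInput(tv0);

  // slice() and pad() create SliceOp/PadOp exprs, so the new
  // expr->isOneOf<SliceOp, PadOp>() branch fires for this fusion and
  // the IdModel-based indexing and predication options are enabled.
  TensorView* tv1 = slice(tv0, /*starts=*/{1}, /*stops=*/{15});
  TensorView* tv2 =
      pad(tv1, {IrBuilder::create<Val>(2L), IrBuilder::create<Val>(2L)});
  fusion.addOutput(tv2);
}
```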
csrc/id_model/indexing.cpp (+24, -10)
```diff
@@ -413,19 +413,29 @@ class AllocationDomainSetup : private kir::IrVisitor {
   }
 
   // Reorder non-logical allocation domains to follow the ordering of
-  // the logical domain. This is necessary when an allocation domain
-  // includes a vectorized loop iter domain since it must be at the
+  // the set allocation domain. This is necessary when an allocation
+  // domain includes a vectorized loop iter domain since it must be at the
   // innermost position but that may not be the case in the loop
-  // domain. Not strictly necessary otherwise, but this should also
+  // domain. It is also necessary when the tensor is a producer of a
+  // vectorized store. Not strictly necessary otherwise, but this should also
   // minimize the deviation from the old indexing scheme which always
   // uses the logical domain to index.
   //
   // Returns reordered allocation domains if reordering is done.
   std::optional<std::vector<IterDomain*>> reorderAllocationDomains(
       const TensorView* tv,
       const std::vector<IterDomain*>& allocation_domains) const {
+    // Use getMaybeAllocationDomain instead of getLogicalDomain. When
+    // this tv is a producer of a vectorized store, the consumer
+    // tensor should be a global memory tensor and this is likely a
+    // cache tensor created by cacheBefore. The consumer tensor may
+    // have a reordered allocation domain and that dictates the actual
+    // allocation ordering of this producer local tensor as well. If
+    // getLogicalDomain is used, DistributedTransformerTest.Backward
+    // fails at the result validation.
     auto exprs = DependencyCheck::getAllExprsBetween(
-        {tv->getLogicalDomain().begin(), tv->getLogicalDomain().end()},
+        {tv->getMaybeAllocationDomain().begin(),
+         tv->getMaybeAllocationDomain().end()},
         {allocation_domains.begin(), allocation_domains.end()});
 
     if (exprs.empty()) {
```

Author, on the `getMaybeAllocationDomain` change: One issue found here.
```diff
@@ -434,7 +444,7 @@ class AllocationDomainSetup : private kir::IrVisitor {
 
     // Replay exprs from the logical domain to get the non-reordered
     // domains
-    auto ordered_domains = tv->getLogicalDomain();
+    auto ordered_domains = tv->getMaybeAllocationDomain();
     for (auto expr : exprs) {
       // Find the position to insert the outputs.
       int64_t insertion_pos = -1;
```
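A hypothetical schedule illustrating the scenario the new comment describes, using names from nvFuser's public scheduling API (`setAllocationDomain`, `cacheBefore`); the schedule itself is invented for illustration, not code from this PR.

```cpp
#include <fusion.h>

using namespace nvfuser;

// Sketch only: a global output with a reordered allocation domain,
// fed by a cacheBefore() producer through a vectorized store.
void scheduleSketch(TensorView* out) {
  // Reorder the output's allocation domain relative to its logical
  // domain, e.g. store a 2D tensor column-major.
  out->setAllocationDomain({out->axis(1), out->axis(0)}, /*contiguity=*/true);

  // The Local cache tensor becomes the producer of the vectorized
  // store into `out`. Its allocation must follow out's reordered
  // allocation domain, which is why reorderAllocationDomains() now
  // starts from getMaybeAllocationDomain() instead of
  // getLogicalDomain().
  TensorView* cache = out->cacheBefore();
  (void)cache;
  out->axis(-1)->parallelize(ParallelType::Vectorize);
}
```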
```diff
@@ -845,14 +855,18 @@ std::vector<Val*> TensorIndexer::getIndexFor(
   const auto& replacement_map = getIndexReplacementMap(
       expr, as_consumer, info.loop_domains, for_loops, info.index_map);
 
-  const auto index_groups = traversalGraph().toGroups(index_ids);
+  // Note that IDs of index_ids may be mapped as the traversal graph
+  // is the AlmostExact graph.
 
   std::vector<Val*> result;
-  result.reserve(index_groups.size());
-  for (const auto& g : index_groups) {
-    auto it = info.index_map.find(g);
+  result.reserve(index_ids.size());
+  for (IterDomain* index_id : index_ids) {
+    const auto& index_group = traversalGraph().toGroup(index_id);
+    auto it = info.index_map.find(index_group);
     NVF_ERROR(
-        it != info.index_map.end(), "Index not found for ", g->toString());
+        it != info.index_map.end(),
+        "Index not found for ",
+        index_id->toString());
     result.push_back(
         ir_utils::replaceValRecursively(it->second, replacement_map));
   }
```

Author: Another issue found here. Previously, this function only returned a list of indices for a unique vector of ValGroups. This resulted in an error when the given index vector (index_ids) had mapped IDs.
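To see why returning indices only for the unique ValGroups broke down, here is a self-contained sketch in plain C++ with toy stand-ins for IterDomain and ValGroup (not nvFuser types): when two requested IDs map to the same almost-exact group, deduplicating first yields fewer results than requested IDs, while a per-ID lookup keeps the output aligned one-to-one with index_ids.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

int main() {
  // Toy stand-ins: "a" and "b" are distinct iter domains that the
  // AlmostExact graph maps to the same group "g0".
  std::map<std::string, std::string> id_to_group{
      {"a", "g0"}, {"b", "g0"}, {"c", "g1"}};
  std::map<std::string, int> index_map{{"g0", 100}, {"g1", 200}};
  std::vector<std::string> index_ids{"a", "b", "c"};

  // Old scheme: collect the unique groups first. Three IDs were
  // requested but only two groups survive, so the returned indices
  // can no longer be matched up with index_ids.
  std::set<std::string> unique_groups;
  for (const auto& id : index_ids) {
    unique_groups.insert(id_to_group.at(id));
  }
  assert(unique_groups.size() == 2); // != index_ids.size()

  // Fixed scheme: look up the group of each requested ID separately,
  // so the result stays aligned one-to-one with index_ids.
  std::vector<int> result;
  result.reserve(index_ids.size());
  for (const auto& id : index_ids) {
    result.push_back(index_map.at(id_to_group.at(id)));
  }
  assert(result.size() == index_ids.size());
  return 0;
}
```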