
Support ops.sum(data, dim=None, keepdims=False) #490

Merged — 5 commits merged from parametrized-op into master on Mar 18, 2021
Conversation

fritzo (Member) commented Mar 13, 2021

Addresses #489. Pair-coded with @eb8680, @ordabayevy, and @fehiepsi.

This PR demonstrates the new parametrized op syntax from #491. The recipe is:

  1. add new *args, **kwargs to your op in funsor.ops.array
  2. add backend support in funsor.torch.ops and funsor.jax.ops
  3. implement find_domain(op, ...) in funsor.domains
  4. work around batch dimensions in funsor.tensor
  5. add some tests
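As a hedged illustration of steps 1 and 3 of the recipe, here is a minimal standalone sketch (hypothetical names, not funsor's actual classes): a parametrized reduction op stores its `dim`/`keepdims` arguments, and a `find_domain`-style method computes the output shape from the input shape.

```python
# Minimal sketch of a parametrized reduction op. The class and method
# names here are hypothetical illustrations, not funsor's real API.

class ParametrizedSumOp:
    def __init__(self, dim=None, keepdims=False):
        # Step 1 of the recipe: the op carries its parameters.
        self.dim = dim
        self.keepdims = keepdims

    def find_domain(self, shape):
        # Step 3 of the recipe: compute the output shape from the
        # input shape, mirroring numpy/torch sum semantics.
        if self.dim is None:
            dims = tuple(range(len(shape)))
        else:
            dims = (self.dim,) if isinstance(self.dim, int) else tuple(self.dim)
        dims = tuple(d % len(shape) for d in dims)  # normalize negative dims
        if self.keepdims:
            return tuple(1 if i in dims else s for i, s in enumerate(shape))
        return tuple(s for i, s in enumerate(shape) if i not in dims)

print(ParametrizedSumOp(dim=1).find_domain((2, 3, 4)))               # (2, 4)
print(ParametrizedSumOp(dim=-1, keepdims=True).find_domain((2, 3)))  # (2, 1)
print(ParametrizedSumOp().find_domain((2, 3)))                       # ()
```

The key design point is that shape reasoning lives on the op itself, so eager evaluation can compute the result domain without touching backend data.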

Tested

  • unit tests for the various op arg/kwarg call syntaxes
  • unit test for batched tensors

@fritzo changed the title from “WIP sketch ParametrizedOp and a richer ops.sum” to “Support ops.sum(data, dim=None, keepdims=False)” on Mar 18, 2021
@fritzo added the awaiting review and enhancement (New feature or request) labels and removed the WIP and refactor labels on Mar 18, 2021
@fritzo fritzo requested a review from eb8680 March 18, 2021 02:00
fritzo (Member, Author) commented Mar 18, 2021

@ordabayevy I hope this can serve as a template for your #482. Feel free to refactor after this PR merges, in case you'd like to reuse the logic, e.g. in find_domain or eager.register. We may even be able to create a subclass ReductionOp(UnaryOp), replace

- @UnaryOp.make
+ @ReductionOp.make
  def sum(...):
      ...

and register ops.sum, ops.mean, etc. in a single pattern.
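The ReductionOp suggestion could be mocked up as follows. This is a hedged sketch using small stand-in classes (the registry and `make` classmethod here are hypothetical simplifications, not funsor's real Op machinery); it only shows how a marker subclass lets one pattern match every reduction.

```python
import builtins

class UnaryOp:
    registry = {}

    @classmethod
    def make(cls, fn):
        # Decorator: wrap fn in an op of the decorating class and
        # record it, so `sum` below becomes a ReductionOp instance.
        op = cls(fn)
        cls.registry[fn.__name__] = op
        return op

    def __init__(self, fn):
        self.fn = fn
        self.name = fn.__name__

    def __call__(self, data, *args, **kwargs):
        return self.fn(data, *args, **kwargs)

class ReductionOp(UnaryOp):
    """Marker subclass: one pattern can match all reductions."""

@ReductionOp.make
def sum(data, dim=None, keepdims=False):
    return builtins.sum(data)

@ReductionOp.make
def mean(data, dim=None, keepdims=False):
    return builtins.sum(data) / len(data)

# A single isinstance check now covers ops.sum, ops.mean, etc.
reductions = [op.name for op in UnaryOp.registry.values()
              if isinstance(op, ReductionOp)]
print(reductions)                                  # ['sum', 'mean']
print(UnaryOp.registry['mean']([1.0, 2.0, 3.0]))   # 2.0
```

Registering via `isinstance(op, ReductionOp)` is what lets one eager pattern serve many ops, instead of repeating near-identical registrations per op.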

From funsor/tensor.py:

  if not arg.inputs:
      return Tensor(op(arg.data), arg.inputs, arg.dtype)

  # Work around batch inputs.
A reviewer (Member) commented:

This logic converting dimensions into batch-aware dimensions seems useful and general-purpose enough that some version of it should maybe live in an Op method or something - we don't want to have to write this from scratch in each new op with nontrivial shape semantics.

fritzo (Member, Author) replied:

I agree this is useful, and I'm hoping @ordabayevy will generalize this in #482 once we have more than one use case. This file seems like the right place for that general logic since funsor.ops should be agnostic to inputs and domains etc.
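The batch-aware dimension conversion discussed in this thread could be sketched roughly like this. The helper below is a hypothetical standalone illustration (not funsor's actual code): user-facing dims index only the event shape, so they are normalized against the event shape and then shifted past the leading named batch dims before calling a backend reduction.

```python
# Hypothetical helper illustrating the batch-dimension workaround:
# a funsor-style tensor lays out named batch inputs before the event
# shape, so user-facing reduction dims must be offset accordingly.

def convert_reduction_dims(num_batch_dims, event_ndim, dim=None):
    """Map event-shape dims to raw-array dims, skipping batch dims."""
    if dim is None:
        dims = tuple(range(event_ndim))
    else:
        dims = (dim,) if isinstance(dim, int) else tuple(dim)
    # Normalize negative dims against the event shape only, so the
    # named batch dims can never be reduced by accident.
    dims = tuple(d % event_ndim for d in dims)
    return tuple(d + num_batch_dims for d in dims)

# A tensor with 2 named batch inputs and event shape (3, 4):
print(convert_reduction_dims(2, 2, dim=0))     # (2,)
print(convert_reduction_dims(2, 2, dim=-1))    # (3,)
print(convert_reduction_dims(2, 2, dim=None))  # (2, 3)
```

Factoring this out (e.g. onto an Op method, as suggested above) would avoid rewriting the same offset logic in every op with nontrivial shape semantics.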

funsor/tensor.py (outdated; resolved)
@eb8680 eb8680 merged commit 079ab86 into master Mar 18, 2021
@eb8680 eb8680 deleted the parametrized-op branch March 18, 2021 02:39
Labels: awaiting review, enhancement (New feature or request)
Projects: None yet
Development: Successfully merging this pull request may close these issues.

2 participants