
Fix top-down inference on Apple Silicon #1547

Open · wants to merge 1 commit into develop
Conversation

@talmo (Collaborator) commented Oct 12, 2023

Description

This PR changes the logic for how we handle empty outputs from the centroid model in top-down inference models.

We now check for this condition and explicitly create empty tensors (this was done automatically in older versions of TF on other platforms).
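A minimal sketch of the idea described above, using NumPy for illustration (SLEAP itself works with TensorFlow tensors, and the helper name and node count here are hypothetical, not SLEAP's actual API): when the centroid model returns no detections, build the empty outputs with explicit shapes instead of relying on the framework to infer them.

```python
import numpy as np


def make_empty_instance_outputs(n_nodes: int) -> dict:
    """Build explicitly empty per-frame outputs when no centroids were found.

    Hypothetical helper illustrating the fix: downstream ops receive arrays
    with well-defined (zero-length) shapes rather than implicitly-shaped
    empty tensors, which older TF versions created automatically on other
    platforms but not on Apple Silicon.
    """
    return {
        "n_peaks": 0,
        # Shape: (0 instances, n_nodes, 2 coordinates)
        "instance_peaks": np.zeros((0, n_nodes, 2), dtype=np.float32),
        # Shape: (0 instances, n_nodes)
        "instance_peak_vals": np.zeros((0, n_nodes), dtype=np.float32),
    }


outputs = make_empty_instance_outputs(n_nodes=15)
print(outputs["instance_peaks"].shape)  # (0, 15, 2)
```

Because the shapes are explicit, any later concatenation or reshaping over the batch sees consistent ranks whether or not a frame had detections.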

Types of changes

  • Bugfix
  • New feature
  • Refactor / Code style update (no logical changes)
  • Build / CI changes
  • Documentation Update
  • Other (explain)

Does this address any currently open issues?

#1100

Outside contributors checklist

  • Review the guidelines for contributing to this repository
  • Read and sign the CLA and add yourself to the authors list
  • Make sure you are making a pull request against the develop branch (not main), and start your branch from develop
  • Add tests that prove your fix is effective or that your feature works
  • Add necessary documentation (if appropriate)

Thank you for contributing to SLEAP!

❤️

Summary by CodeRabbit

Release Notes

  • New Feature: Enhanced the process_batch function to adjust instance_peaks based on offset_x and offset_y values, improving the accuracy of instance peak detection.
  • Refactor: Updated the call methods in several classes to include n_peaks, instance_peaks, and instance_peak_vals in the outputs dictionary, providing more detailed output information.
  • New Feature: Added handling for cases when n_peaks is 0 or instance_peaks is an instance of FindInstancePeaksGroundTruth, increasing the robustness of the code.

@coderabbitai bot commented Oct 12, 2023

Walkthrough

The changes primarily revolve around the enhancement of the call method in multiple classes and the process_batch function. The modifications include the addition of new parameters, creation of new output dictionaries, and handling of specific cases. The changes aim to improve the handling of n_peaks, instance_peaks, and instance_peak_vals values.

Changes

File: sleap/nn/inference.py
Summary: The process_batch function now adjusts the instance_peaks array based on offset_x and offset_y values, then divides it by the scale array. The call method in several classes gains a new input parameter n_peaks, and the outputs dictionary now includes the n_peaks, instance_peaks, and instance_peak_vals values. New output dictionaries have been created to handle specific cases.

🐇
"In the land of code, where logic is king,
Changes are made, improvements they bring.
With peaks and instances, handled with care,
The code now runs smoother, like a hare.
So let's celebrate these changes, oh so neat,
With a carrot cake, that's a treat!" 🥕🍰



@talmo changed the title from "Fix top-down inference of Apple Silicon" to "Fix top-down inference on Apple Silicon" on Oct 12, 2023
@coderabbitai bot left a comment
Review Status

Actionable comments generated: 0

Commits: files that changed from the base of the PR, between 1e0627a and 524300a.
Files selected for processing (1)
  • sleap/nn/inference.py (6 hunks)
Additional comments (Suppressed): 6
sleap/nn/inference.py (6)
  • 408-419: The new hunk introduces a check for the size of ex["instance_peaks"] before performing operations on it. This is a good practice as it prevents potential errors when the array is empty. However, ensure that this change does not affect the expected behavior of the function in cases where ex["instance_peaks"] is expected to be non-empty.

  • 796-800: The new hunk introduces a new parameter n_peaks to the function call. Ensure that the function being called has been updated to accept this new parameter and that all calls to this function throughout the codebase have been updated to include this new argument.

  • 1909-1914: The new hunk adds n_peaks to the outputs dictionary. This is a good practice as it makes the function's output more informative. However, ensure that this change does not affect the expected behavior of the function in cases where n_peaks is expected to be a certain value.

  • 2085-2096: The new hunk introduces checks for the presence of centroids, centroid_vals, and centroid_confmaps in inputs before adding them to outputs. This is a good practice as it prevents potential errors when these keys are not present in inputs. However, ensure that this change does not affect the expected behavior of the function in cases where these keys are expected to be present in inputs.

  • 2153-2158: The new hunk modifies the way outputs is built. Instead of creating a new dictionary, it now adds instance_peaks and instance_peak_vals to the existing outputs dictionary. This is a good practice as it makes the function's output more informative. However, ensure that this change does not affect the expected behavior of the function in cases where outputs is expected to be a certain structure.

  • 2261-2298: The new hunk introduces a check for n_peaks being 0 and handles this case separately. This is a good practice as it prevents potential errors when n_peaks is 0. However, ensure that this change does not affect the expected behavior of the function in cases where n_peaks is expected to be non-zero.
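One of the reviewed hunks (2085–2096) conditionally copies centroid keys from inputs to outputs only when they are present. A minimal sketch of that guard pattern, with hypothetical key names taken from the review comment (the real dictionaries are built inside sleap/nn/inference.py):

```python
def add_centroid_outputs(inputs: dict, outputs: dict) -> dict:
    """Copy optional centroid keys into outputs only when present in inputs.

    Hypothetical sketch of the guard described in the review: checking
    membership first avoids KeyError when a model variant does not produce
    these intermediate outputs.
    """
    for key in ("centroids", "centroid_vals", "centroid_confmaps"):
        if key in inputs:
            outputs[key] = inputs[key]
    return outputs
```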
