
Add tests #239

Open
dpascualhe opened this issue Dec 10, 2024 · 3 comments

@dpascualhe
Collaborator

Add automatic tests using pytest + github actions. A good starting point: metrics.py.
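A minimal GitHub Actions workflow for running pytest on every push could look like the sketch below; the file location (`.github/workflows/tests.yml`), Python version, and `requirements.txt` path are assumptions about this repository, not confirmed details:

```yaml
# Hypothetical workflow sketch; adjust paths and versions to the repo.
name: tests
on: [push, pull_request]
jobs:
  pytest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt pytest
      - run: pytest
```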

@mansipatil12345

mansipatil12345 commented Dec 17, 2024

Hello sir,

I noticed this issue regarding eval bugs and the accuracy metric. I’ve reviewed the problem and have a clear approach to address it:

- improving shape checks,
- handling division by zero safely, and
- refining per-class metric calculations.

Before I proceed, I'd like to get your guidance:

- Does this approach align with your expectations?
- Are there any specific considerations or requirements I should keep in mind while implementing the fixes?

I can also add tests to validate the changes and ensure everything works as expected. Please let me know your thoughts or suggestions before I move forward!

@dpascualhe
Collaborator Author

Hello @mansipatil12345 👋

Thanks for your interest in the project! The metrics currently implemented are working as intended, but adding safeguards like the ones you propose seems like a good idea. My only doubt: what do you mean by "refining per-class metric calculations"?

If you plan to work on those safeguards, please open another issue, as this one is specific to adding tests.

@mansipatil12345

Thank you for reviewing my request and providing guidance. Based on your feedback, I will proceed with the following steps to address the issue:

**Shape checks**

Add robust checks to ensure the prediction and ground-truth tensors match in dimensions. This will prevent shape mismatches during metric computation.
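A minimal sketch of such a check, assuming a hypothetical helper name (`check_shapes`) rather than anything in the repository's `metrics.py`:

```python
import numpy as np

def check_shapes(pred, gt):
    """Validate that predictions and ground truth share the same shape.

    Raises early with a descriptive message instead of failing deep
    inside the metric math.
    """
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    if pred.shape != gt.shape:
        raise ValueError(
            f"Shape mismatch: predictions {pred.shape} vs ground truth {gt.shape}"
        )
    return pred, gt
```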

**Handling division by zero**

Introduce safeguards (e.g., conditional checks or `np.where`) to handle cases where a class has no ground-truth samples (i.e., a zero denominator). Metrics for such classes will return NaN to avoid errors and maintain clarity.
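One way to sketch that safeguard (the function name `safe_divide` is hypothetical; note that `np.divide` with a `where` mask avoids even evaluating the division for zero denominators, unlike a plain `np.where` over an already-computed quotient):

```python
import numpy as np

def safe_divide(num, den):
    """Elementwise num/den, returning NaN wherever den == 0."""
    num = np.asarray(num, dtype=float)
    den = np.asarray(den, dtype=float)
    out = np.full_like(num, np.nan)      # NaN marks classes with no samples
    np.divide(num, den, out=out, where=den != 0)
    return out
```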

**Refining per-class metric calculations**

Ensure that metrics like IoU and accuracy are computed correctly on a class-by-class basis, including edge cases like rare or missing classes, where no pixels exist in the ground truth for a given class.
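A per-class IoU sketch under those assumptions (hypothetical name `per_class_iou`, not taken from the repository), where classes absent from both prediction and ground truth are reported as NaN:

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Per-class intersection-over-union; NaN for classes with empty union."""
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        p = pred == c
        g = gt == c
        union = np.logical_or(p, g).sum()
        if union > 0:                      # skip classes absent everywhere
            ious[c] = np.logical_and(p, g).sum() / union
    return ious
```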

**Validation tests**

Write unit tests to validate the fixes, ensuring all edge cases (e.g., missing classes, shape mismatches) are covered.
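Such tests could start out like the sketch below; the metric under test here (`pixel_accuracy`) is a minimal stand-in defined inline for illustration, not the repository's actual `metrics.py` API:

```python
import numpy as np
import pytest

def pixel_accuracy(pred, gt):
    """Minimal reference metric, used only to illustrate the tests below."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    if pred.shape != gt.shape:
        raise ValueError("shape mismatch")
    return (pred == gt).mean()

def test_perfect_prediction():
    assert pixel_accuracy([0, 1, 2], [0, 1, 2]) == 1.0

def test_partial_prediction():
    assert pixel_accuracy([0, 0], [0, 1]) == 0.5

def test_shape_mismatch_raises():
    with pytest.raises(ValueError):
        pixel_accuracy([0, 1], [0, 1, 2])
```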
I believe this approach aligns with the issue's requirements, but I'd appreciate your confirmation or any additional suggestions before I proceed with the implementation.

Looking forward to your thoughts!
