Add 3D support and confusion matrix output to PanopticQualityMetric #8684
base: dev
Conversation
…roject-MONAI#5702)
- Add support for 5D tensors (B2HWD) in addition to existing 4D (B2HW)
- Add `return_confusion_matrix` parameter to return raw tp, fp, fn, iou_sum values
- Add `compute_mean_iou` helper function for computing mean IoU from confusion matrix
- Update docstrings to reflect 2D/3D support
- Add comprehensive tests for new functionality

Signed-off-by: Soumya Snigdha Kundu <[email protected]>
📝 Walkthrough
This PR extends PanopticQualityMetric with 3D volumetric support, adds a `return_confusion_matrix` option for raw confusion matrix output, and introduces a `compute_mean_iou` helper.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 4 passed
Actionable comments posted: 0
🧹 Nitpick comments (1)
monai/metrics/panoptic_quality.py (1)
24-24: Consider sorting `__all__` alphabetically. Static analysis suggests alphabetical ordering for consistency.
♻️ Proposed fix
-__all__ = ["PanopticQualityMetric", "compute_panoptic_quality", "compute_mean_iou"]
+__all__ = ["PanopticQualityMetric", "compute_mean_iou", "compute_panoptic_quality"]
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (2)
monai/metrics/panoptic_quality.py
tests/metrics/test_compute_panoptic_quality.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit configuration file
Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.
Files:
monai/metrics/panoptic_quality.py
tests/metrics/test_compute_panoptic_quality.py
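As a generic illustration of the Google-style docstring layout the configuration above asks for (this example is illustrative and not taken from the PR):

```python
def compute_ratio(numerator: float, denominator: float) -> float:
    """Compute a simple ratio, showing the Google-style docstring sections.

    Args:
        numerator: Value to divide.
        denominator: Value to divide by; must be non-zero.

    Returns:
        The ratio ``numerator / denominator``.

    Raises:
        ValueError: If ``denominator`` is zero.
    """
    if denominator == 0:
        raise ValueError("denominator must be non-zero.")
    return numerator / denominator
```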
🧬 Code graph analysis (2)
monai/metrics/panoptic_quality.py (1)
monai/metrics/confusion_matrix.py (1)
_compute_tensor(80-99)
tests/metrics/test_compute_panoptic_quality.py (1)
monai/metrics/panoptic_quality.py (3)
PanopticQualityMetric (27-168), aggregate (132-168), compute_mean_iou (315-333)
🪛 Ruff (0.14.10)
monai/metrics/panoptic_quality.py
24-24: __all__ is not sorted
Apply an isort-style sorting to __all__
(RUF022)
107-109: Avoid specifying long messages outside the exception class
(TRY003)
148-148: Prefer TypeError exception for invalid type
(TRY004)
148-148: Avoid specifying long messages outside the exception class
(TRY003)
328-331: Avoid specifying long messages outside the exception class
(TRY003)
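For context, the TRY003 findings above suggest keeping long messages inside a dedicated exception class rather than at the raise site. A minimal sketch of that pattern (the class name is hypothetical and not part of the PR):

```python
class InvalidPanopticInputError(ValueError):
    """Hypothetical exception illustrating Ruff's TRY003 suggestion."""

    def __init__(self, ndim: int) -> None:
        # Keeping the formatted message here avoids repeating a long literal
        # at every raise site.
        super().__init__(f"expected a 4D (B2HW) or 5D (B2HWD) tensor, got {ndim} dimensions.")


# usage: raise InvalidPanopticInputError(y_pred.ndim)
```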
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: flake8-py3 (mypy)
- GitHub Check: packaging
- GitHub Check: quick-py3 (macOS-latest)
- GitHub Check: min-dep-py3 (3.9)
- GitHub Check: flake8-py3 (codeformat)
- GitHub Check: flake8-py3 (pytype)
- GitHub Check: quick-py3 (windows-latest)
- GitHub Check: quick-py3 (ubuntu-latest)
- GitHub Check: build-docs
- GitHub Check: min-dep-pytorch (2.8.0)
- GitHub Check: min-dep-pytorch (2.7.1)
- GitHub Check: min-dep-pytorch (2.5.1)
- GitHub Check: min-dep-py3 (3.11)
- GitHub Check: min-dep-pytorch (2.6.0)
- GitHub Check: min-dep-os (ubuntu-latest)
- GitHub Check: min-dep-py3 (3.10)
- GitHub Check: min-dep-os (windows-latest)
- GitHub Check: min-dep-os (macOS-latest)
- GitHub Check: min-dep-py3 (3.12)
🔇 Additional comments (12)
monai/metrics/panoptic_quality.py (5)
58-60: LGTM! The `return_confusion_matrix` parameter is well-documented and maintains backward compatibility with its default value. Also applies to: 70-70, 78-78.
83-94: LGTM! Docstrings clearly document 2D and 3D input formats.
106-109: LGTM! Validation correctly accepts both 4D (2D images) and 5D (3D volumes) tensors. Error message is clear and helpful.
141-156: LGTM! Early return pattern cleanly handles confusion matrix output. Logic and documentation are correct.
315-333: LGTM! Function correctly computes mean IoU from confusion matrix. Formula matches Segmentation Quality calculation (line 164), which is appropriate. Docstring is complete and validation is robust.
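To illustrate the two comments above, a minimal sketch of how an `aggregate()`-style early return and the SQ/PQ formulas fit together, assuming confusion-matrix columns ordered as [tp, fp, fn, iou_sum] (a sketch under those assumptions, not the PR's actual implementation):

```python
import torch


def aggregate_sketch(per_class_counts: torch.Tensor, return_confusion_matrix: bool, eps: float = 1e-6) -> torch.Tensor:
    # per_class_counts: [num_classes, 4] rows of [tp, fp, fn, iou_sum]
    if return_confusion_matrix:
        # early return: hand back raw counts so callers can derive custom metrics
        return per_class_counts
    tp, fp, fn, iou_sum = per_class_counts.unbind(-1)
    sq = iou_sum / torch.clamp(tp, min=eps)                   # segmentation quality (mean IoU over matches)
    rq = tp / torch.clamp(tp + 0.5 * fp + 0.5 * fn, min=eps)  # recognition quality
    return sq * rq                                            # panoptic quality
```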
tests/metrics/test_compute_panoptic_quality.py (7)
92-120: LGTM! 3D test data is properly shaped (B=1, C=2, H=2, W=2, D=2) and test cases are well-defined.
142-152: LGTM! Test validates 3D input acceptance and correct output shape. Good coverage of the new feature.
154-170: LGTM! Comprehensive test of confusion matrix return. Validates both shape and value constraints (non-negativity).
172-184: LGTM! Test validates the `compute_mean_iou` helper with appropriate shape and value checks.
186-203: LGTM! Test confirms metric filtering works correctly and different metrics produce distinct outputs.
205-218: LGTM! Test validates proper rejection of invalid tensor dimensions (3D and 6D). Good edge case coverage.
220-232: LGTM! Test validates proper error handling for invalid confusion matrix shapes. Good coverage of error cases.
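As a rough sketch of how the dimension-validation coverage described above can be exercised (illustrative only; the tensor shapes and the expected exception type are assumptions, not the PR's actual test code):

```python
import pytest
import torch

from monai.metrics import PanopticQualityMetric


def test_invalid_dims_sketch():
    metric = PanopticQualityMetric(num_classes=1)
    bad_3d = torch.zeros(1, 2, 4, dtype=torch.long)           # too few dims
    bad_6d = torch.zeros(1, 2, 4, 4, 4, 4, dtype=torch.long)  # too many dims
    with pytest.raises(ValueError):
        metric(bad_3d, bad_3d)
    with pytest.raises(ValueError):
        metric(bad_6d, bad_6d)
```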
Fixes #5702
Description
This PR adds the two features requested in issue #5702, plus a supporting helper:
- 3D Support: `PanopticQualityMetric` now accepts both 4D tensors (B2HW for 2D data) and 5D tensors (B2HWD for 3D data). Previously, only 2D inputs were supported.
- Confusion Matrix Output: Added a `return_confusion_matrix` parameter to `PanopticQualityMetric`. When set to `True`, the `aggregate()` method returns raw confusion matrix values (tp, fp, fn, iou_sum) instead of computed metrics, enabling custom metric calculations.
- Helper Function: Added a `compute_mean_iou()` function to compute mean IoU from confusion matrix values.

Note: While panoptica exists as a standalone library, I feel this would still be a nice addition to MONAI.
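A hedged usage sketch of the additions as described above; tensor values are placeholders, and anything beyond the names stated in this description (`return_confusion_matrix`, `compute_mean_iou`) is assumed rather than confirmed:

```python
import torch

from monai.metrics import PanopticQualityMetric
from monai.metrics.panoptic_quality import compute_mean_iou

# 3D input in B2HWD layout: channel 0 holds instance ids, channel 1 class labels
y_pred = torch.randint(0, 3, (1, 2, 16, 16, 16))
y = torch.randint(0, 3, (1, 2, 16, 16, 16))

metric = PanopticQualityMetric(num_classes=2, return_confusion_matrix=True)
metric(y_pred, y)

cm = metric.aggregate()        # raw tp, fp, fn, iou_sum values
miou = compute_mean_iou(cm)    # mean IoU derived from the confusion matrix
print(miou)
```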
Types of changes
- Integration tests passed locally by running `./runtests.sh -f -u --net --coverage`.
- Quick tests passed locally by running `./runtests.sh --quick --unittests --disttests`.
- Documentation built with the `make html` command in the `docs/` folder.