Conversation


@aymuos15 aymuos15 commented Jan 7, 2026

Fixes #5702

Description

This PR adds three features requested in issue #5702:

  1. 3D Support: PanopticQualityMetric now accepts both 4D tensors (B2HW for 2D data) and 5D tensors (B2HWD for 3D data). Previously, only 2D inputs were supported.

  2. Confusion Matrix Output: Added return_confusion_matrix parameter to PanopticQualityMetric. When set to True, the aggregate() method returns raw confusion matrix values (tp, fp, fn, iou_sum) instead of computed metrics, enabling custom metric calculations.

  3. Helper Function: Added compute_mean_iou() function to compute mean IoU from confusion matrix values.

Note: While panoptica exists as a standalone library, I feel this would still be a nice addition to MONAI.
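For context, the raw quantities this PR exposes (tp, fp, fn, iou_sum) are exactly the terms from which panoptic quality is derived. The following is a plain-Python sketch of that arithmetic with hypothetical values, not MONAI code; the function name is invented for illustration.

```python
def panoptic_quality_from_counts(tp: float, fp: float, fn: float, iou_sum: float) -> dict:
    """Derive SQ, RQ, and PQ from matched-segment counts and summed IoU."""
    if tp == 0:
        return {"sq": 0.0, "rq": 0.0, "pq": 0.0}
    sq = iou_sum / tp                     # segmentation quality: mean IoU of matches
    rq = tp / (tp + 0.5 * fp + 0.5 * fn)  # recognition quality: F1 over segments
    return {"sq": sq, "rq": rq, "pq": sq * rq}

scores = panoptic_quality_from_counts(tp=8, fp=2, fn=2, iou_sum=6.0)
print(scores)  # sq = 0.75, rq = 0.8, pq = 0.6
```

Returning the raw counts instead of the final scores, as this PR does, lets users aggregate across shards or compute variants like mean IoU without re-matching segments.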

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

…roject-MONAI#5702)

- Add support for 5D tensors (B2HWD) in addition to existing 4D (B2HW)
- Add `return_confusion_matrix` parameter to return raw tp, fp, fn, iou_sum values
- Add `compute_mean_iou` helper function for computing mean IoU from confusion matrix
- Update docstrings to reflect 2D/3D support
- Add comprehensive tests for new functionality

Signed-off-by: Soumya Snigdha Kundu <[email protected]>

coderabbitai bot commented Jan 7, 2026

📝 Walkthrough

This PR extends PanopticQualityMetric with 3D volumetric support, adds a return_confusion_matrix parameter to optionally return raw confusion matrices instead of computed metrics, and introduces a new compute_mean_iou function for downstream IoU calculations. Input validation now accepts 5D tensors (batch, 2, height, width, depth) alongside existing 4D support. Comprehensive tests validate 3D functionality, confusion matrix shapes, mean IoU computation, and metric filtering behavior.
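The rank check described above can be sketched as follows. This is a standalone illustration, assuming the B2HW/B2HWD layouts named in the walkthrough; `check_panoptic_input` is a hypothetical name, and the actual validation lives inside MONAI's PanopticQualityMetric.

```python
import numpy as np

def check_panoptic_input(y_pred: np.ndarray) -> int:
    """Return the number of spatial dims (2 or 3), or raise on a bad shape."""
    if y_pred.ndim not in (4, 5):
        raise ValueError(
            f"expected 4D (B2HW) or 5D (B2HWD) input, got {y_pred.ndim}D"
        )
    if y_pred.shape[1] != 2:
        raise ValueError("channel dim must be 2: [instance_id, class_id]")
    return y_pred.ndim - 2

print(check_panoptic_input(np.zeros((1, 2, 4, 4))))     # 2 (2D image)
print(check_panoptic_input(np.zeros((1, 2, 4, 4, 4))))  # 3 (3D volume)
```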

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: 4 passed, 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 76.92%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4)
  • Title check — ✅ Passed: the title accurately and concisely summarizes the two main changes: 3D support and confusion matrix output for PanopticQualityMetric.
  • Description check — ✅ Passed: the description covers all key changes with a clear issue reference, feature explanations, and appropriate checkboxes marked. Required sections are present and complete.
  • Linked Issues check — ✅ Passed: all coding objectives from issue #5702 are met: 3D tensor support (B2HWD), confusion matrix output via the return_confusion_matrix parameter, and the compute_mean_iou helper function.
  • Out of Scope Changes check — ✅ Passed: all changes are directly scoped to issue #5702 objectives: 3D support, confusion matrix return option, and mean IoU computation. No unrelated modifications detected.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
monai/metrics/panoptic_quality.py (1)

24-24: Consider sorting __all__ alphabetically.

Static analysis suggests alphabetical ordering for consistency.

♻️ Proposed fix
-__all__ = ["PanopticQualityMetric", "compute_panoptic_quality", "compute_mean_iou"]
+__all__ = ["PanopticQualityMetric", "compute_mean_iou", "compute_panoptic_quality"]
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 57fdd59 and 23d33cf.

📒 Files selected for processing (2)
  • monai/metrics/panoptic_quality.py
  • tests/metrics/test_compute_panoptic_quality.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides and are sensible and informative with regard to their function, though simple names are permitted for loop and comprehension variables. Ensure routine names are meaningful with regard to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions and should describe each parameter, return value, and raised exception in the appropriate section of Google-style docstrings. Examine the code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements that improve efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions are covered by existing or new unit tests.

Files:

  • monai/metrics/panoptic_quality.py
  • tests/metrics/test_compute_panoptic_quality.py
🧬 Code graph analysis (2)
monai/metrics/panoptic_quality.py (1)
monai/metrics/confusion_matrix.py (1)
  • _compute_tensor (80-99)
tests/metrics/test_compute_panoptic_quality.py (1)
monai/metrics/panoptic_quality.py (3)
  • PanopticQualityMetric (27-168)
  • aggregate (132-168)
  • compute_mean_iou (315-333)
🪛 Ruff (0.14.10)
monai/metrics/panoptic_quality.py

24-24: __all__ is not sorted

Apply an isort-style sorting to __all__

(RUF022)


107-109: Avoid specifying long messages outside the exception class

(TRY003)


148-148: Prefer TypeError exception for invalid type

(TRY004)


148-148: Avoid specifying long messages outside the exception class

(TRY003)


328-331: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: flake8-py3 (mypy)
  • GitHub Check: packaging
  • GitHub Check: quick-py3 (macOS-latest)
  • GitHub Check: min-dep-py3 (3.9)
  • GitHub Check: flake8-py3 (codeformat)
  • GitHub Check: flake8-py3 (pytype)
  • GitHub Check: quick-py3 (windows-latest)
  • GitHub Check: quick-py3 (ubuntu-latest)
  • GitHub Check: build-docs
  • GitHub Check: min-dep-pytorch (2.8.0)
  • GitHub Check: min-dep-pytorch (2.7.1)
  • GitHub Check: min-dep-pytorch (2.5.1)
  • GitHub Check: min-dep-py3 (3.11)
  • GitHub Check: min-dep-pytorch (2.6.0)
  • GitHub Check: min-dep-os (ubuntu-latest)
  • GitHub Check: min-dep-py3 (3.10)
  • GitHub Check: min-dep-os (windows-latest)
  • GitHub Check: min-dep-os (macOS-latest)
  • GitHub Check: min-dep-py3 (3.12)
🔇 Additional comments (12)
monai/metrics/panoptic_quality.py (5)

58-60: LGTM!

The return_confusion_matrix parameter is well-documented and maintains backward compatibility with its default value.

Also applies to: 70-70, 78-78


83-94: LGTM!

Docstrings clearly document 2D and 3D input formats.


106-109: LGTM!

Validation correctly accepts both 4D (2D images) and 5D (3D volumes) tensors. Error message is clear and helpful.


141-156: LGTM!

Early return pattern cleanly handles confusion matrix output. Logic and documentation are correct.
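The early-return shape praised here can be illustrated with a toy buffered metric: aggregate() short-circuits with the raw totals when the caller asked for the confusion matrix, and only otherwise computes a derived score. All names are invented for the sketch and do not mirror MONAI's implementation.

```python
class ToyMetric:
    def __init__(self, return_confusion_matrix: bool = False):
        self.return_confusion_matrix = return_confusion_matrix
        self._buffer = []  # rows of (tp, fp, fn, iou_sum)

    def add(self, tp, fp, fn, iou_sum):
        self._buffer.append((tp, fp, fn, iou_sum))

    def aggregate(self):
        totals = [sum(col) for col in zip(*self._buffer)]
        if self.return_confusion_matrix:
            return totals  # early return: raw values, no metric computation
        tp, fp, fn, iou_sum = totals
        return iou_sum / (tp + 0.5 * fp + 0.5 * fn)  # panoptic quality

m = ToyMetric(return_confusion_matrix=True)
m.add(3, 1, 1, 2.5)
m.add(5, 1, 1, 3.5)
print(m.aggregate())  # [8, 2, 2, 6.0]
```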


315-333: LGTM!

Function correctly computes mean IoU from confusion matrix. Formula matches Segmentation Quality calculation (line 164), which is appropriate. Docstring is complete and validation is robust.
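Per this comment, mean IoU here is iou_sum / tp — the segmentation-quality term. A defensive one-liner with the tp == 0 edge case handled; `mean_iou` is a stand-in name and does not claim to match MONAI's actual compute_mean_iou signature.

```python
def mean_iou(tp: float, iou_sum: float) -> float:
    """Mean IoU over true-positive matches; 0.0 when there are no matches."""
    return iou_sum / tp if tp > 0 else 0.0

print(mean_iou(4, 3.0))  # 0.75
print(mean_iou(0, 0.0))  # 0.0 (no matched segments)
```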

tests/metrics/test_compute_panoptic_quality.py (7)

92-120: LGTM!

3D test data is properly shaped (B=1, C=2, H=2, W=2, D=2) and test cases are well-defined.


142-152: LGTM!

Test validates 3D input acceptance and correct output shape. Good coverage of the new feature.


154-170: LGTM!

Comprehensive test of confusion matrix return. Validates both shape and value constraints (non-negativity).


172-184: LGTM!

Test validates compute_mean_iou helper with appropriate shape and value checks.


186-203: LGTM!

Test confirms metric filtering works correctly and different metrics produce distinct outputs.


205-218: LGTM!

Test validates proper rejection of invalid tensor dimensions (3D and 6D). Good edge case coverage.


220-232: LGTM!

Test validates proper error handling for invalid confusion matrix shapes. Good coverage of error cases.


Development

Successfully merging this pull request may close these issues.

[Feature Request] 3D support for PanopticQualityMetric + provide option to return confusion matrix