Pytorch 2.8 Support #8530
Conversation
for more information, see https://pre-commit.ci
I, Eric Kerfoot <[email protected]>, hereby add my Signed-off-by to this commit: 21f1f39
I, Eric Kerfoot <[email protected]>, hereby add my Signed-off-by to this commit: a8f972d
Signed-off-by: Eric Kerfoot <[email protected]>
Actionable comments posted: 0
🧹 Nitpick comments (1)
tests/utils/test_set_determinism.py (1)
45-45: Remove trailing whitespace.

```diff
- b = torch.randint(seed, (1,))
- 
- # test when global flag support is disabled
+ b = torch.randint(seed, (1,))
+
+ # test when global flag support is disabled
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Knowledge Base: Disabled due to Reviews > Disable Knowledge Base setting
📒 Files selected for processing (2)
- monai/utils/misc.py (2 hunks)
- tests/utils/test_set_determinism.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit Configuration File
Review the Python code for quality and correctness. Ensure variable names adhere to the PEP8 style guide and are sensible and informative with regard to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful with regard to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions and should describe each variable, return value, and raised exception in the appropriate section of Google-style docstrings. Examine code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements that improve code efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.
Files:
- tests/utils/test_set_determinism.py
- monai/utils/misc.py
🪛 Ruff (0.12.2)
tests/utils/test_set_determinism.py
45-45: Blank line contains whitespace
Remove whitespace from blank line
(W293)
🔇 Additional comments (6)
monai/utils/misc.py (4)
105-107: LGTM! Proper initialization of CUBLAS workspace config. Setting the environment variable at module load ensures deterministic behavior for cuBLAS operations.
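For illustration, a minimal sketch of the module-load initialization being described; the exact MONAI code may differ, and ":4096:8" is simply the workspace value referenced elsewhere in this PR:

```python
# Sketch only: make cuBLAS eligible for deterministic behavior by ensuring a
# workspace configuration is set before any CUDA matmul runs, while respecting
# a value the user may already have exported.
import os

os.environ["CUBLAS_WORKSPACE_CONFIG"] = os.environ.get("CUBLAS_WORKSPACE_CONFIG", ":4096:8")
```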
382-387: Clean implementation of backend flag management. The context manager approach properly handles flag mutation when global flags are disabled.
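A rough sketch of what such a context-manager-based flag update can look like; it assumes the private torch.backends.__allow_nonbracketed_mutation() helper discussed later in this review, so treat it as illustrative rather than MONAI's exact implementation:

```python
import torch


def _set_cudnn_determinism(deterministic: bool) -> None:
    # After torch.backends.disable_global_flags(), direct assignment to cudnn
    # flags raises; the private context manager temporarily re-enables
    # non-bracketed mutation so the flags can still be toggled.
    with torch.backends.__allow_nonbracketed_mutation():
        torch.backends.cudnn.deterministic = deterministic
        torch.backends.cudnn.benchmark = not deterministic
```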
390-390: Simplified deterministic algorithms setting looks good. Removing backward compatibility checks is appropriate for PyTorch 2.8 support.
381-381: No public API replacement in PyTorch 2.8
MONAI is intentionally using an internal context manager (torch.backends.__allow_nonbracketed_mutation()) because PyTorch does not expose a public equivalent for non-bracketed backend mutation. Continue with this workaround for now and monitor the PyTorch backends docs or issue tracker for any future official API (e.g., additions to torch.backends.*.flags).
• File: monai/utils/misc.py, line 381
• Action: keep the workaround, track upstream for a public alternative
tests/utils/test_set_determinism.py (2)
64-72: Good test clarification with rename and docstring. The rename from test_algo to test_algo_not_deterministic better describes the test's purpose.
73-79: Excellent test coverage for the CUBLAS environment variable. Validates that torch.mm works correctly with the CUBLAS_WORKSPACE_CONFIG setting.
Signed-off-by: Eric Kerfoot <[email protected]>
Signed-off-by: Eric Kerfoot <[email protected]>
Signed-off-by: Eric Kerfoot <[email protected]>
Signed-off-by: Eric Kerfoot <[email protected]>
Walkthrough
This change updates the codebase to support PyTorch 2.8. Workflow files are updated to test and install PyTorch 2.8 and related torchvision versions. Several internal code and test files are modified to ensure compatibility with PyTorch 2.8, including type annotations, explicit Python integer conversions, and deterministic settings. No changes to public APIs or logic are introduced; the updates are focused on compatibility, type safety, and deterministic behavior adjustments.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Assessment against linked issues
Assessment against linked issues: Out-of-scope changes
No out-of-scope changes found.
📜 Recent review details
Configuration used: .coderabbit.yaml
📒 Files selected for processing (1)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit Configuration File
Files:
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
🔇 Additional comments (2)
Signed-off-by: Eric Kerfoot <[email protected]>
Actionable comments posted: 2
🔭 Outside diff range comments (2)
tests/utils/test_set_determinism.py (1)
46-56: Guard disable_global_flags for compatibility

Wrap the test of torch.backends.disable_global_flags() in a version-safe guard and skip on PyTorch versions that lack it. No extra teardown is needed since set_determinism(seed=None) resets backend flags.

- File: tests/utils/test_set_determinism.py

```diff
-        # test when global flag support is disabled
-        torch.backends.disable_global_flags()
+        # test when global flag support is disabled
+        if not hasattr(torch.backends, "disable_global_flags"):
+            self.skipTest("torch.backends.disable_global_flags not available")
+        torch.backends.disable_global_flags()
         set_determinism(seed=seed)
         c = np.random.randint(seed)
         d = torch.randint(seed, (1,))
         self.assertEqual(a, c)
         self.assertEqual(b, d)
         self.assertTrue(torch.backends.cudnn.deterministic)
         self.assertTrue(not torch.backends.cudnn.benchmark)
-        set_determinism(seed=None)
+        set_determinism(seed=None)
```

monai/transforms/intensity/array.py (1)
2787-2788: Preserve input device/type/meta on return (avoid CPU fallback). Currently, hv_maps is created via convert_to_tensor, which will land on CPU and may drop MetaTensor metadata. Convert to the dst type/device instead, consistent with other transforms.
```diff
-        hv_maps = convert_to_tensor(np.concatenate([h_map, v_map]), track_meta=get_track_meta())
-        return hv_maps
+        hv_maps_np = np.concatenate([h_map, v_map])
+        hv_maps, *_ = convert_to_dst_type(hv_maps_np, dst=mask, dtype=self.dtype)
+        return hv_maps
```

If you adopt this, consider updating the ComputeHoVerMaps docstring return type to NdarrayOrTensor for consistency.
🧹 Nitpick comments (26)
monai/transforms/utility/array.py (1)
944-944: Use keepdim + explicit Tensor cast instead of type: ignore

Avoid the suppression by leveraging keepdim and a local cast for type checkers.

```diff
-        return data.any(0)[None]  # type: ignore
+        data_t = cast(torch.Tensor, data)
+        return data_t.any(dim=0, keepdim=True)
```

Add once at the top of the file: from typing import cast

monai/visualize/utils.py (1)
119-119: Drop type: ignore in favor of reshape with ints and an explicit cast

Make shapes explicit and cast the sliced view to ndarray to satisfy static checkers.

```diff
-    vol = vol.reshape((-1, vol.shape[-3], vol.shape[-2], vol.shape[-1]))  # type: ignore[assignment]
+    vol = np.reshape(vol, (-1, int(vol.shape[-3]), int(vol.shape[-2]), int(vol.shape[-1])))
-    vol = vol.reshape((-1, vol.shape[-2], vol.shape[-1]))  # type: ignore[assignment]
+    vol = np.reshape(vol, (-1, int(vol.shape[-2]), int(vol.shape[-1])))
-    vol = vol[:: max(every_n, 1)]  # type: ignore[assignment]
+    vol = cast(np.ndarray, vol[:: max(every_n, 1)])
```

Add once near the other imports: from typing import cast

Also applies to: 122-122, 127-127
monai/visualize/img2tensorboard.py (1)
203-205: Replace shape access type: ignore with an explicit int()

Avoid the suppression; cast the dimension to int and keep the rest unchanged.

```diff
-    d_chans = d.shape[0]  # type: ignore
+    d_chans = int(d.shape[0])
```

Subsequent lines using d_chans (lines 205 and 211) remain valid.
Also applies to: 211-211
monai/transforms/utils_pytorch_numpy_unification.py (2)
219-219: Explicit ndarray coercion: confirm intent (subclass drop) or use asanyarray.

np.asarray(...) will coerce MaskedArray and other ndarray subclasses to plain ndarray, dropping subclass semantics. If that is intended for unification, LGTM. If not, prefer np.asanyarray(...). Possible tweak:

```diff
-    return np.asarray(np.floor_divide(a, b))
+    return np.asanyarray(np.floor_divide(a, b))
```
206-208: Docstring vs implementation mismatch.

The docstring says to use torch.div(..., rounding_mode="floor") for pt>=1.8, but the code uses torch.floor_divide. Align the docstring or switch to torch.div for consistency. Docstring-only fix:

```diff
-        As of pt1.8, use `torch.div(..., rounding_mode="floor")`, and
-        before that, use `torch.floor_divide`.
+        Uses `torch.floor_divide` for torch tensors for broad version compatibility.
```

Alternatively, change the implementation:

```diff
-    if isinstance(a, torch.Tensor):
-        return torch.floor_divide(a, b)
+    if isinstance(a, torch.Tensor):
+        return torch.div(a, b, rounding_mode="floor")
```

monai/networks/nets/spade_diffusion_model_unet.py (1)
964-966: Minor: drop type ignore and avoid extra list copy.

len(ModuleList) is int; the # type: ignore is likely unnecessary. Also, using del avoids copying the head of the list.

```diff
-        idx: int = -len(upsample_block.resnets)  # type: ignore
-        res_samples = down_block_res_samples[idx:]
-        down_block_res_samples = down_block_res_samples[:idx]
+        n = len(upsample_block.resnets)
+        res_samples = down_block_res_samples[-n:]
+        del down_block_res_samples[-n:]
```

Optional guard (if you want an early sanity check):

```python
assert len(down_block_res_samples) >= n, "Not enough residuals to match upsample block."
```

monai/data/image_reader.py (3)
583-596: Type the list element to avoid future typing friction.

The element can be np.ndarray or cp.ndarray. Using list[Any] here avoids import coupling and mypy noise.

```diff
-        stack_array_list: list = [first_array]
+        stack_array_list: list[Any] = [first_array]
```
601-606: Prefer clear() and ensure early release of slice arrays.

Use list.clear() (more idiomatic) and clear in both code paths to drop references promptly, which can help RAM/VRAM pressure with large series.

```diff
-            if self.to_gpu:
-                stack_array = cp.stack(stack_array_list, axis=-1)
-            else:
-                stack_array = np.stack(stack_array_list, axis=-1)
-
-            del stack_array_list[:]
+            if self.to_gpu:
+                stack_array = cp.stack(stack_array_list, axis=-1)
+            else:
+                stack_array = np.stack(stack_array_list, axis=-1)
+            stack_array_list.clear()
```
612-612: Single-slice path: avoid or clear the temp list.

Either reuse first_array directly or clear the list after use for symmetry.

Option A (minimal; clear after use):

```diff
-            stack_array = stack_array_list[0]
+            stack_array = stack_array_list[0]
+            del stack_array_list  # drop list ref promptly
```

Option B (cleanest; no list for single-slice):

```diff
-        stack_array_list: list[Any] = [first_array]
-        for idx in range(1, len(slices)):
+        for idx in range(1, len(slices)):
             ...
-        if len(slices) > 1:
+        if len(slices) > 1:
+            stack_array_list: list[Any] = [first_array]
+            # (populate stack_array_list inside the loop above)
             ...
-        else:
-            stack_array = stack_array_list[0]
+        else:
+            stack_array = first_array
```

monai/networks/nets/efficientnet.py (1)
419-423: Drop type: ignore by iterating robustly (or use apply).

Avoid suppressions; either use named_children + isinstance checks or Module.apply for brevity.

Option A (apply):

```diff
-        sub_stack: nn.Sequential
-        block: MBConvBlock
-        for sub_stack in self._blocks:  # type: ignore[assignment]
-            for block in sub_stack:  # type: ignore[assignment]
-                block.set_swish(memory_efficient)
+        def _set(m: nn.Module) -> None:
+            if isinstance(m, MBConvBlock):
+                m.set_swish(memory_efficient)
+
+        self._blocks.apply(_set)
```

Option B (typed, guard with isinstance):

```diff
-        sub_stack: nn.Sequential
-        block: MBConvBlock
-        for sub_stack in self._blocks:  # type: ignore[assignment]
-            for block in sub_stack:  # type: ignore[assignment]
-                block.set_swish(memory_efficient)
+        for _, sub_stack in self._blocks.named_children():
+            if not isinstance(sub_stack, nn.Sequential):
+                continue
+            for _, block in sub_stack.named_children():
+                if isinstance(block, MBConvBlock):
+                    block.set_swish(memory_efficient)
```

monai/utils/misc.py (2)
105-108: Prefer setdefault; reconsider global env mutation at import time

Use setdefault for brevity. Also, setting CUBLAS_WORKSPACE_CONFIG globally at import time can have surprising, system-wide effects and potential perf impact even when determinism isn't requested.

```diff
-os.environ["CUBLAS_WORKSPACE_CONFIG"] = os.environ.get("CUBLAS_WORKSPACE_CONFIG", ":4096:8")
+os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")
```

If feasible, consider gating this behind an opt-in env (e.g., MONAI_SET_CUBLAS_WORKSPACE_CONFIG=1) or set it only when determinism is enabled early in process startup. Do you want me to sketch that variant?
390-391: Guard use_deterministic_algorithms across supported ranges

Assuming MONAI's minimum torch guarantees this API, fine. If not, guard it to avoid AttributeError on older installations.

```diff
-    if use_deterministic_algorithms is not None:
-        torch.use_deterministic_algorithms(use_deterministic_algorithms)
+    if use_deterministic_algorithms is not None:
+        if hasattr(torch, "use_deterministic_algorithms"):
+            torch.use_deterministic_algorithms(use_deterministic_algorithms)
+        else:
+            warnings.warn(
+                "torch.use_deterministic_algorithms not available on this torch version; ignoring.",
+                stacklevel=2,
+            )
```

Can you confirm the minimum supported PyTorch version in this release train includes this API so we can skip the guard?
monai/networks/nets/diffusion_model_unet.py (1)
1798-1801: Slicing refactor reads clearer; tighten the type ignore

Change looks good and is type-checker-friendly. Minor: narrow the ignore to attr-defined to avoid masking other issues.

```diff
-        idx: int = -len(upsample_block.resnets)  # type: ignore
+        idx: int = -len(upsample_block.resnets)  # type: ignore[attr-defined]
```

monai/apps/generation/maisi/networks/diffusion_model_unet_maisi.py (1)
361-363: Clarify typing and add a sanity check to prevent silent mismatches. The refactor is good. Two small tweaks:
- Avoid the blanket type ignore by casting.
- Guard against insufficient residuals at runtime.
Possible diff within this block:
```diff
-        idx: int = -len(upsample_block.resnets)  # type: ignore
-        res_samples = down_block_res_samples[idx:]
-        down_block_res_samples = down_block_res_samples[:idx]
+        # number of resnet residuals expected by this up block
+        idx: int = -len(upsample_block.resnets)  # mypy: up_block type lacks `resnets`
+        # sanity check to surface wiring mistakes early
+        if len(down_block_res_samples) < -idx:
+            raise ValueError(
+                f"Insufficient residuals: have {len(down_block_res_samples)}, need {-idx}."
+            )
+        res_samples = down_block_res_samples[idx:]
+        down_block_res_samples = down_block_res_samples[:idx]
```

If you want to remove the ignore, use a cast:

```diff
+from typing import Sized, cast  # at top of file
 ...
-        idx: int = -len(upsample_block.resnets)  # type: ignore
+        idx: int = -len(cast(Sized, upsample_block.resnets))
```

monai/apps/detection/utils/box_coder.py (1)
213-216: Prefer typing.cast over type: ignore for assignment. convert_box_mode returns a Tensor here (Tensor in, Tensor out). Casting avoids suppressing other type errors.
Apply:
```diff
-        boxes_cccwhd: torch.Tensor = convert_box_mode(
-            reference_boxes, src_mode=StandardMode, dst_mode=CenterSizeMode
-        )  # type: ignore[assignment]
+        boxes_cccwhd: torch.Tensor = cast(
+            torch.Tensor, convert_box_mode(reference_boxes, src_mode=StandardMode, dst_mode=CenterSizeMode)
+        )
```

And add the import:

```diff
+from typing import cast  # top-level imports
```

monai/data/image_dataset.py (1)
99-101: Nit: add a short docstring to randomize(). Keeps the API self-documented and aligned with guidelines.

```diff
     def randomize(self, data: Any | None = None) -> None:
+        """
+        Randomize the internal seed used to synchronize Randomizable transforms.
+
+        Args:
+            data: Unused; present for API compatibility.
+        """
         self._seed = int(self.R.randint(MAX_SEED, dtype="uint32"))
```

monai/networks/nets/segresnet_ds.py (1)
410-412: Unify mypy ignore codes for clarity. Use the specific code
# type: ignore[assignment] (as above) instead of a bare ignore.

```diff
-        for level in self.up_layers:  # type: ignore
+        for level in self.up_layers:  # type: ignore[assignment]
-        for level in self.up_layers:  # type: ignore
+        for level in self.up_layers:  # type: ignore[assignment]
-        for level in self.up_layers_auto:  # type: ignore
+        for level in self.up_layers_auto:  # type: ignore[assignment]
```

Optional (cleaner than ignores): cast the iterated items.
Add to imports:
from typing import castThen e.g.:
for m in self.up_layers: level = cast(nn.ModuleDict, m) ...Also applies to: 519-520, 532-533
tests/integration/test_seg_loss_integration.py (1)
53-53: Seed setup via set_determinism(0) — OK. Optional: pass
use_deterministic_algorithms=Trueto catch nondeterministic CUDA paths early.- set_determinism(0) + set_determinism(0, use_deterministic_algorithms=True)tests/utils/test_set_determinism.py (1)
73-80: Strengthen the cublas test by asserting and synchronizing. Ensure kernel completion and sanity-check the result.
- x = torch.rand(5, 5, device="cuda:0") - _ = torch.mm(x, x) + x = torch.rand(5, 5, device="cuda:0") + y = torch.mm(x, x) + torch.cuda.synchronize() + self.assertTrue(torch.isfinite(y).all())tests/integration/test_reg_loss_integration.py (1)
37-37: Seed setup — consider enforcing deterministic algorithms. Optional: add
use_deterministic_algorithms=Trueto surface nondeterministic kernels during CI.- set_determinism(0) + set_determinism(0, use_deterministic_algorithms=True)monai/data/dataset.py (2)
1355-1357: Add a brief docstring for randomize. Docstrings are expected per guidelines. Clarify that data is unused and the method sets a Python int seed.
Apply this diff:
def randomize(self, data: Any | None = None) -> None: - self._seed = int(self.R.randint(MAX_SEED, dtype="uint32")) + """Generate and store a Python int seed to synchronize Randomizable transforms. + + Args: + data: Unused. Present for API compatibility with Randomizable.randomize. + """ + self._seed = int(self.R.randint(MAX_SEED, dtype="uint32"))
1356-1356: Optional: prefer dtype=np.uint32 over the string form. Using the dtype object improves readability and helps static tooling.
- self._seed = int(self.R.randint(MAX_SEED, dtype="uint32")) + self._seed = int(self.R.randint(MAX_SEED, dtype=np.uint32))monai/transforms/spatial/array.py (2)
1743-1758: Tighten device handling; drop the redundant cast and type ignores. No functional issue, but you can simplify and avoid the type ignores:
- grid_.device is already a torch.device; no need to wrap with torch.device(...)
- Assignment in the lazy branch doesn’t require an ignore with the union type.
- _device: torch.device | None + _device: torch.device | None @@ - grid_: torch.Tensor = convert_to_tensor(grid_, dtype=_dtype, track_meta=get_track_meta()) # type: ignore - _device = torch.device(grid_.device) # type: ignore + grid_: torch.Tensor = convert_to_tensor(grid_, dtype=_dtype, track_meta=get_track_meta()) # type: ignore + _device = grid_.device @@ - _device = self.device # type: ignore[assignment] + _device = self.device
3522-3523: Simplify and harden the target_shape computation. Current logic is correct and yields Python ints. Two minor tweaks:
- Simpler construction without np.int_ and intermediate list.
- Optional: guard against zero-sized dims when zooming tiny inputs (Resize/grid_sample can’t handle 0).
- target_shape = tuple(np.round(np.array(input_shape) * self.zoom_factor).astype(np.int_).tolist()) + # Ensures Python ints; consider guarding with max(1, ...) if very small inputs are possible. + target_shape = tuple(int(v) for v in np.round(np.array(input_shape) * self.zoom_factor)) + # Optional safety: + # target_shape = tuple(max(1, int(v)) for v in np.round(np.array(input_shape) * self.zoom_factor))Please confirm whether inputs like (1, 1[, 1]) are possible with zoom_factor < 1.0; if yes, the max(1, ...) guard prevents zero-sized shapes.
monai/transforms/intensity/array.py (2)
912-914: Prefer torch.is_floating_point for robustness. Use the API that inspects tensors directly rather than dtype properties to avoid version quirks.
```diff
-        if not img_t.dtype.is_floating_point:
+        if not torch.is_floating_point(img_t):
             img_t, *_ = convert_data_type(img_t, dtype=torch.float32)
```
915-920: Avoid in-place per-channel assignment (safer for views/meta). Building a list and stacking avoids potential surprises with views and aligns with patterns elsewhere.
```diff
-        for i, d in enumerate(img_t):
-            img_t[i] = self._normalize(  # type: ignore
-                d,
-                sub=self.subtrahend[i] if self.subtrahend is not None else None,
-                div=self.divisor[i] if self.divisor is not None else None,
-            )
+        out_channels = []
+        for i, d in enumerate(img_t):
+            out_channels.append(
+                self._normalize(
+                    d,
+                    sub=self.subtrahend[i] if self.subtrahend is not None else None,
+                    div=self.divisor[i] if self.divisor is not None else None,
+                )
+            )
+        img_t = torch.stack(out_channels)
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Knowledge Base: Disabled due to Reviews > Disable Knowledge Base setting
📒 Files selected for processing (27)
- .github/workflows/pythonapp-min.yml (1 hunks)
- .github/workflows/pythonapp.yml (3 hunks)
- monai/apps/detection/metrics/coco.py (2 hunks)
- monai/apps/detection/utils/box_coder.py (1 hunks)
- monai/apps/generation/maisi/networks/diffusion_model_unet_maisi.py (1 hunks)
- monai/data/box_utils.py (2 hunks)
- monai/data/dataset.py (1 hunks)
- monai/data/image_dataset.py (1 hunks)
- monai/data/image_reader.py (2 hunks)
- monai/networks/nets/diffusion_model_unet.py (1 hunks)
- monai/networks/nets/dints.py (4 hunks)
- monai/networks/nets/efficientnet.py (1 hunks)
- monai/networks/nets/segresnet_ds.py (4 hunks)
- monai/networks/nets/spade_diffusion_model_unet.py (1 hunks)
- monai/transforms/compose.py (1 hunks)
- monai/transforms/croppad/dictionary.py (1 hunks)
- monai/transforms/intensity/array.py (2 hunks)
- monai/transforms/spatial/array.py (4 hunks)
- monai/transforms/utility/array.py (1 hunks)
- monai/transforms/utils.py (1 hunks)
- monai/transforms/utils_pytorch_numpy_unification.py (1 hunks)
- monai/utils/misc.py (2 hunks)
- monai/visualize/img2tensorboard.py (1 hunks)
- monai/visualize/utils.py (1 hunks)
- tests/integration/test_reg_loss_integration.py (2 hunks)
- tests/integration/test_seg_loss_integration.py (2 hunks)
- tests/utils/test_set_determinism.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit Configuration File
Review the Python code for quality and correctness. Ensure variable names adhere to the PEP8 style guide and are sensible and informative with regard to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful with regard to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions and should describe each variable, return value, and raised exception in the appropriate section of Google-style docstrings. Examine code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements that improve code efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.
Files:
- monai/data/image_dataset.py
- monai/apps/detection/utils/box_coder.py
- monai/transforms/utility/array.py
- monai/visualize/img2tensorboard.py
- monai/transforms/utils_pytorch_numpy_unification.py
- tests/integration/test_seg_loss_integration.py
- monai/visualize/utils.py
- monai/data/dataset.py
- monai/networks/nets/segresnet_ds.py
- tests/integration/test_reg_loss_integration.py
- monai/apps/generation/maisi/networks/diffusion_model_unet_maisi.py
- monai/networks/nets/efficientnet.py
- monai/transforms/compose.py
- monai/apps/detection/metrics/coco.py
- tests/utils/test_set_determinism.py
- monai/transforms/croppad/dictionary.py
- monai/networks/nets/diffusion_model_unet.py
- monai/networks/nets/dints.py
- monai/networks/nets/spade_diffusion_model_unet.py
- monai/data/image_reader.py
- monai/transforms/intensity/array.py
- monai/transforms/spatial/array.py
- monai/transforms/utils.py
- monai/data/box_utils.py
- monai/utils/misc.py
🧬 Code Graph Analysis (8)
monai/apps/detection/utils/box_coder.py (1)
monai/data/box_utils.py (2)
convert_box_mode (546-603), CenterSizeMode (289-356)
tests/integration/test_seg_loss_integration.py (1)
monai/utils/misc.py (1)
set_determinism(339-390)
tests/integration/test_reg_loss_integration.py (1)
monai/utils/misc.py (1)
set_determinism(339-390)
monai/transforms/compose.py (1)
monai/transforms/transform.py (1)
set_random_state(197-227)
tests/utils/test_set_determinism.py (1)
tests/test_utils.py (1)
skip_if_no_cuda(245-249)
monai/transforms/intensity/array.py (2)
monai/utils/type_conversion.py (3)
convert_to_tensor (113-181), convert_data_type (274-340), convert_to_dst_type (343-393)
monai/data/meta_obj.py (1)
get_track_meta(47-60)
monai/transforms/spatial/array.py (1)
monai/data/meta_tensor.py (3)
array (395-404), array (407-409), astype (434-461)
monai/transforms/utils.py (1)
monai/transforms/utils_pytorch_numpy_unification.py (1)
ravel(255-268)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: flake8-py3 (mypy)
- GitHub Check: build-docs
- GitHub Check: quick-py3 (ubuntu-latest)
- GitHub Check: flake8-py3 (codeformat)
- GitHub Check: packaging
- GitHub Check: flake8-py3 (pytype)
- GitHub Check: quick-py3 (windows-latest)
- GitHub Check: quick-py3 (macOS-latest)
- GitHub Check: min-dep-os (windows-latest)
- GitHub Check: min-dep-pytorch (2.8.0)
- GitHub Check: min-dep-os (ubuntu-latest)
- GitHub Check: min-dep-pytorch (2.7.1)
- GitHub Check: min-dep-os (macOS-latest)
- GitHub Check: min-dep-py3 (3.12)
- GitHub Check: min-dep-py3 (3.11)
- GitHub Check: min-dep-pytorch (2.5.1)
- GitHub Check: min-dep-py3 (3.9)
- GitHub Check: min-dep-py3 (3.10)
- GitHub Check: min-dep-pytorch (2.6.0)
🔇 Additional comments (27)
monai/apps/detection/metrics/coco.py (2)
460-460: Good: convert NumPy scalar to native int. This avoids surprises in consumers expecting a Python int and aligns with stricter typing.
526-526: LGTM: list init for precision simplifies typing and is efficient. Behavior preserved; the later conversion to ndarray keeps output shape/dtype unchanged.
monai/transforms/croppad/dictionary.py (1)
695-699: Casting the seed to a Python int is correct and future-proof. Prevents NumPy scalar seeds from leaking into RNG APIs that expect a native int. Deterministic behavior across keys remains unchanged.
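For illustration, the seed-casting pattern referred to here, mirroring the int(self.R.randint(MAX_SEED, dtype="uint32")) usage shown elsewhere in this review; MAX_SEED and the RandomState instance below are stand-ins, not the actual MONAI objects:

```python
import numpy as np

MAX_SEED = np.iinfo(np.uint32).max + 1  # exclusive upper bound for uint32 seeds
rng = np.random.RandomState()

seed = int(rng.randint(MAX_SEED, dtype="uint32"))  # native Python int, not np.uint32
assert isinstance(seed, int)
```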
.github/workflows/pythonapp-min.yml (1)
127-127: Good: the matrix now includes PyTorch 2.8.0. This gives explicit CI coverage for 2.8 while keeping prior minors.
monai/data/image_dataset.py (1)
99-101: Explicit int() on randint is the right fix. Avoids NumPy scalar types in downstream set_random_state calls; sync seeding remains intact.
.github/workflows/pythonapp.yml (3)
97-97: Windows CPU pin updated to torch 2.5.1 / torchvision 0.20.1 — OK. Matches the pairings and uses the CPU index URL.
106-106: Linux/macOS deps pinned to torch 2.5.1 / torchvision 0.20.1 — OK. Consistent with the Windows step; a good baseline for quick jobs.
158-158: Packaging: torch>=2.5.1 with torchvision — OK. Allows testing against the latest stable while keeping a sane minimum.
monai/networks/nets/segresnet_ds.py (1)
221-223: Type-hint addition is fine; no runtime impact. Annotating level and suppressing the mypy assignment warning is reasonable given ModuleList's typing.
tests/integration/test_seg_loss_integration.py (2)
24-24: Centralize determinism via utility — good change. Importing set_determinism reduces duplication and aligns tests with utils.
57-57: Teardown restores flags via set_determinism(None) — OK.
tests/utils/test_set_determinism.py (1)
64-67: Renamed test and clarified expectation — good. Confirms that the nondeterministic op raises under deterministic algorithms.
monai/data/box_utils.py (2)
814-817: Type annotations for inter/union improve static checks — OK. No runtime change; annotations and ignore pragmas are appropriate here.
984-986: Typed giou_t assignment — OK. Consistent with earlier changes; safe and improves readability.
tests/integration/test_reg_loss_integration.py (2)
22-22: Using set_determinism — good consolidation.
41-41: Teardown restore call — OK.
monai/data/dataset.py (2)
1356-1356: Casting to native int is correct and desirable. Converting the NumPy uint32 to a Python int avoids numpy-scalar pitfalls and aligns with PyTorch 2.8 expectations for seeds. Good change.
1356-1356: Wrap remaining NumPy scalar seed assignment with int()

One assignment still returns a NumPy scalar; wrap it in int():

• monai/transforms/croppad/dictionary.py:681

```diff
-        self.sub_seed = self.R.randint(MAX_SEED, dtype="uint32")
+        self.sub_seed = int(self.R.randint(MAX_SEED, dtype="uint32"))
```

After this change, all randint(MAX_SEED, dtype="uint32") calls are converted to native Python ints.

Likely an incorrect or invalid review comment.
monai/transforms/compose.py (1)
274-274: LGTM - Type safety improvement for PyTorch 2.8. Explicit int() conversion ensures a native Python integer rather than a NumPy scalar type, aligning with set_random_state signature expectations.
monai/networks/nets/dints.py (4)
495-495: Appropriate type ignore for dictionary lookup. Dictionary access returns a generic type, but runtime guarantees a StemInterface implementation.
508-508: Appropriate type ignore for dictionary lookup. Dictionary access returns a generic type, but runtime guarantees a StemInterface implementation.
683-683: Appropriate type ignore for dictionary lookup. Dictionary access returns a generic type, but runtime guarantees a CellInterface implementation.
785-788: Good refactoring with type safety. Introducing the cell_inter variable improves readability and avoids repeated dictionary lookups. The type ignore comment appropriately suppresses static analysis warnings for known-safe operations.
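A purely hypothetical illustration of the pattern being praised here (binding a module-dict lookup to a local variable so the lookup happens once and any typing suppression stays localized); the names below are invented and are not the actual dints.py code:

```python
import torch
import torch.nn as nn


class TinyCellGrid(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.cell_tree = nn.ModuleDict({"0_0": nn.Linear(8, 8)})

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        cell_inter = self.cell_tree["0_0"]  # single lookup, reused below
        return cell_inter(x)
```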
monai/transforms/spatial/array.py (1)
3314-3316: LGTM: explicit type hint improves clarity. The local type annotation for patched_image is clear and non-invasive.
monai/transforms/intensity/array.py (3)
903-906: Type-narrowing and channel-length check look good. Using a dedicated img_t and len(img_t) clarifies intent and avoids ambiguity with MetaTensor.
922-925: Final conversion back to dst type (and dtype selection) LGTM. This preserves the original type/device/meta as expected.
2768-2769: Explicit ndarray conversion is fine. The typed instance_mask is clear and compatible with skimage.regionprops.
Signed-off-by: Eric Kerfoot <[email protected]>
The failing Windows tests appear to be related to this.
Signed-off-by: Eric Kerfoot <[email protected]>
Signed-off-by: Eric Kerfoot <[email protected]>
KumoLiu
left a comment
Thanks for the update, lgtm.
/build
Fixes #8529.
Description
This adds support for PyTorch 2.8. It required a few minor changes to how determinism is handled, and some tests that mysteriously started failing under Windows, seemingly related to GLOO under 2.8, are now skipped. Mypy also appears to have been updated and now reports new findings along with a number of spurious typing issues, so many more files had to be adjusted to satisfy it.
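As a quick illustration of the behavior this PR preserves, here is a minimal determinism check in the style of the updated tests (sketch only; it assumes monai.utils.set_determinism keeps its current signature):

```python
import torch
from monai.utils import set_determinism

set_determinism(seed=0)            # seeds random/numpy/torch and sets cudnn flags
a = torch.randint(100, (1,))
set_determinism(seed=0)
b = torch.randint(100, (1,))
assert torch.equal(a, b)           # same seed, same draw
set_determinism(seed=None)         # restore non-deterministic defaults
```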
Types of changes
- ./runtests.sh -f -u --net --coverage
- ./runtests.sh --quick --unittests --disttests
- make html command in the docs/ folder