-
Hi @bintadkr, could you please paste your transform chain?
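In the meantime, a guess at the cause: if some of your volumes are smaller than spatial_size along an axis, the returned patch can end up clipped to the image shape, which would explain [73, 73, 34] instead of (96, 96, 96). Padding before the crop guards against that. A minimal sketch, assuming the BTCV tutorial's "image"/"label" keys:

```python
from monai.transforms import Compose, RandCropByPosNegLabeld, SpatialPadd

# Pad small volumes up to at least 96^3 before cropping, so the random
# crop can always return a full (96, 96, 96) patch -- 96 is divisible
# by 2**5 = 32, which is what Swin UNETR's encoder requires.
train_transforms = Compose([
    # ... the tutorial's loading / intensity transforms go here ...
    SpatialPadd(keys=["image", "label"], spatial_size=(96, 96, 96)),
    RandCropByPosNegLabeld(
        keys=["image", "label"],
        label_key="label",
        spatial_size=(96, 96, 96),
        pos=1,
        neg=1,
        num_samples=4,
    ),
])
```

Hope it helps, thanks.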
-
Hello,
I've been trying to implement a Swin UNETR model using the pretrained weights for BTCV segmentation and the tutorial available in this repository.
However, at the training step, I always run into this error: ValueError: spatial dimensions [2, 3, 4] of input image (spatial shape: torch.Size([73, 73, 34])) must be divisible by 2**5.
The thing is, I don't understand why [73, 73, 34] is the shape of my data, since I used RandCropByPosNegLabeld with spatial_size=(96, 96, 96); 96 is divisible by 2**5 = 32, but neither 73 nor 34 is.
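For reference, the crop is set up roughly like this (the other arguments follow the tutorial defaults, so treat the exact values as approximate):

```python
from monai.transforms import RandCropByPosNegLabeld

# Patches should come out as (96, 96, 96) -- divisible by 2**5 = 32.
crop = RandCropByPosNegLabeld(
    keys=["image", "label"],
    label_key="label",
    spatial_size=(96, 96, 96),
    pos=1,
    neg=1,
    num_samples=4,
)
```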
I tried to resize my data by using Resized in the transforms, but it makes my environment crash...
I then tried to resize later on, in the training loop, with the line x = resize_transform(x), where resize_transform = Resized(spatial_dim=(64, 64, 32)). This gets me an error saying that the dimensions of spatial_dim and those of my image do not match.
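From re-reading the docs, I think the call should have been something like this (my guess, not something confirmed: the argument seems to be spatial_size, and the transform runs on unbatched, channel-first data):

```python
import torch
from monai.transforms import Resized

# Resized is a dictionary transform: it takes keys plus spatial_size
# (not spatial_dim), and it expects channel-first, unbatched arrays --
# so applying it to the batched tensor x of shape [B, C, H, W, D]
# would explain the dimension mismatch.
resize = Resized(
    keys=["image", "label"],
    spatial_size=(64, 64, 32),   # 64 and 32 are both divisible by 2**5
    mode=("area", "nearest"),    # nearest-neighbour for the label
)

sample = {
    "image": torch.rand(1, 73, 73, 34),                     # [C, H, W, D]
    "label": torch.randint(0, 2, (1, 73, 73, 34)).float(),
}
out = resize(sample)
print(out["image"].shape)  # torch.Size([1, 64, 64, 32])
```

I'm not certain this is the intended usage inside a training loop, though.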
Is there a way to fix this?