UNet trains, UNETR does not on same data and augmentations #1731
Unanswered
theskywalker1
asked this question in Q&A
Replies: 1 comment
-
Hi @theskywalker1, a more complex network will not always yield better results; in your example, the UNet only reaches an accuracy of roughly 0.4. When switching to the more complex UNETR, you should adjust the hyperparameters accordingly. Even so, it is common practice to first overfit a basic network before deliberately scaling it up. Thanks.
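For anyone landing here, a minimal sketch of the knobs that typically need retuning when moving from a UNet to UNETR. The values below are illustrative (roughly the defaults used in MONAI's BTCV UNETR tutorial), not the asker's actual config; check your installed MONAI version, since argument names have changed across releases:

```python
import torch
from monai.networks.nets import UNETR

# Illustrative settings, not the OP's configuration; adjust to your data.
model = UNETR(
    in_channels=1,
    out_channels=2,            # background + vertebrae in the binary case
    img_size=(96, 96, 96),     # must match your training patch/ROI size
    feature_size=16,
    hidden_size=768,
    mlp_dim=3072,
    num_heads=12,
    dropout_rate=0.0,
)

# Transformer backbones are sensitive to the optimizer: AdamW with a
# small learning rate is the usual starting point, and UNETR generally
# needs more iterations than a comparable UNet to start converging.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-5)
```

The main point is that reusing a learning rate and schedule tuned for a small UNet is a common reason a transformer-based model appears not to train at all.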
-
Hello MONAI community.


I am attempting to train a UNETR model for vertebral segmentation, following this tutorial.
With the same pipeline, I was able to successfully train a regular UNet on the VerSe dataset. I chose 25 high-quality, full-body volumes with a 70/30 split for training and validation.
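For context, the 70/30 split of the 25 volumes can be sketched as follows (the volume IDs below are placeholders, not my actual file names):

```python
import random

# Hypothetical IDs standing in for the 25 VerSe volumes.
volume_ids = [f"verse_{i:03d}" for i in range(25)]

random.seed(0)                        # reproducible split
n_train = int(0.7 * len(volume_ids))  # 17 train / 8 val for 25 volumes
shuffled = random.sample(volume_ids, len(volume_ids))
train_ids, val_ids = shuffled[:n_train], shuffled[n_train:]

print(len(train_ids), len(val_ids))   # 17 8
```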
Here is an example sagittal slice of this dataset:
Here is my config for this project:
In the code snippets below, model1 is the UNETR model and model2 is the regular UNet.
Below these, I provide a graph of the regular UNet's performance and a screenshot showing the UNETR's lack of progress.
Could anyone explain why a more complex and supposedly better-performing model behaves much worse on the same dataset?
Thanks for any help.
Here is the performance over 6k iterations on the regular UNet:


Unfortunately, I do not have a graph of the UNETR's training, but I do have this screenshot of the console showing no change in the loss or mean Dice:
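As a side note on interpreting a flat mean Dice: it usually means the network has collapsed to predicting all background. A minimal sketch of the Dice coefficient in plain NumPy (not MONAI's DiceMetric, just the underlying formula) shows why a collapsed model scores near zero and stays there:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy ground truth: a small foreground patch in an 8x8 slice.
target = np.zeros((8, 8), dtype=bool)
target[2:4, 2:4] = True                  # 4 foreground voxels

all_background = np.zeros_like(target)   # what a collapsed model outputs

print(dice_score(all_background, target))  # ~0.0: "stuck" Dice
print(dice_score(target, target))          # ~1.0: perfect prediction
```

If the UNETR's Dice sits at the same near-zero value every epoch, it is worth checking that its output actually contains any foreground at all before tuning anything else.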