Hi!
My question is about your paper No New Net, which cites the previous paper corresponding to this repository.
Are you using concatenation to pass feature maps from the encoder pathway to the decoder pathway of your UNet (as in this repository), or simple addition, which might be more memory-efficient?
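To make sure I'm asking about the right distinction, here is a minimal PyTorch sketch of the two skip-connection variants (shapes and channel counts are hypothetical, not taken from either codebase):

```python
import torch

# Hypothetical encoder/decoder feature maps for one 3D U-Net stage:
# (batch, channels, depth, height, width)
enc_features = torch.randn(1, 32, 16, 16, 16)  # from the encoder pathway
dec_features = torch.randn(1, 32, 16, 16, 16)  # upsampled decoder features

# Concatenation: channel count doubles, so the concatenated tensor and
# the following convolution's weights both cost extra memory.
skip_concat = torch.cat([enc_features, dec_features], dim=1)  # -> (1, 64, 16, 16, 16)

# Addition: channel count stays the same, so the skip costs no extra
# activation memory beyond the sum itself.
skip_add = enc_features + dec_features  # -> (1, 32, 16, 16, 16)

print(skip_concat.shape, skip_add.shape)
```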
Thank you so much for the prompt response. I just have a few more questions to clear up:
It seems that there are no skip connections/residual blocks or pre-activation blocks in No New Net, and that max-pooling is used instead of the strided convolutions in this repository. Am I on track?
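For clarity, a minimal PyTorch sketch of the two downsampling options I mean (channel counts are hypothetical):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 32, 16, 16, 16)  # hypothetical 3D feature map

# Max-pooling: parameter-free downsampling.
pooled = nn.MaxPool3d(kernel_size=2, stride=2)(x)  # -> (1, 32, 8, 8, 8)

# Strided convolution: learned downsampling.
strided = nn.Conv3d(32, 32, kernel_size=3, stride=2, padding=1)(x)  # -> (1, 32, 8, 8, 8)

print(pooled.shape, strided.shape)
```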
In No New Net, the term "trilinear up-sampling" is used. Does this refer to up-sampling by repetition (as in this repository) or to up-sampling by linear interpolation?
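A minimal PyTorch sketch of the two up-sampling modes I am asking about (shapes are hypothetical):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 32, 8, 8, 8)  # hypothetical 3D feature map

# Up-sampling by repetition: each voxel is copied into a 2x2x2 block.
repeated = F.interpolate(x, scale_factor=2, mode='nearest')

# Trilinear up-sampling: values are linearly interpolated along all
# three spatial axes, giving a smoother result.
trilinear = F.interpolate(x, scale_factor=2, mode='trilinear', align_corners=False)

print(repeated.shape, trilinear.shape)  # both -> (1, 32, 16, 16, 16)
```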
If you recorded it during implementation, what was the GPU memory footprint of each of the two architectures?