
Question: No New Net (2019) #33

Open
stalhabukhari opened this issue Dec 30, 2019 · 2 comments

Comments

@stalhabukhari

Hi!
My question is about your paper No New Net, which cites your previous paper corresponding to this repository.
Are you using concatenation to supply feature maps from the encoder pathway to the decoder pathway of your U-Net (as in this repository), or simple addition, which may be more memory efficient?

@FabianIsensee
Member

Hi,
I use concatenation. Addition could work just as well, though. I have not compared the two.
Best,
Fabian
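The difference between the two skip-connection variants can be sketched in NumPy. The shapes below are illustrative assumptions, not values from either paper; the point is only that concatenation doubles the channel count passed to the next layer, while addition keeps it fixed:

```python
import numpy as np

# Hypothetical feature maps at one skip connection, laid out as
# (channels, depth, height, width). Sizes are illustrative only.
encoder_features = np.random.rand(32, 16, 16, 16).astype(np.float32)
decoder_features = np.random.rand(32, 16, 16, 16).astype(np.float32)

# Concatenation along the channel axis: the channel count doubles,
# so the next convolution must process (and store) twice the activations.
concat_skip = np.concatenate([encoder_features, decoder_features], axis=0)

# Addition: channel count is unchanged, which is why it can be
# more memory efficient downstream.
add_skip = encoder_features + decoder_features

print(concat_skip.shape)  # (64, 16, 16, 16)
print(add_skip.shape)     # (32, 16, 16, 16)
```

Note that addition also requires the encoder and decoder feature maps to have matching channel counts, whereas concatenation does not.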

@stalhabukhari
Author

Thank you so much for the prompt response. I just have a few more questions to clear up:

  • It seems that there are no skip connections/residual blocks or pre-activation blocks in No New Net, and that max-pooling is used instead of the strided convolutions in this repository. Am I on track?
  • In No New Net, the term trilinear up-sampling is used. Does this refer to up-sampling by repetition (as in this repository) or to up-sampling by linear interpolation?
  • If it was recorded during implementation, what was the GPU memory footprint of each architecture?
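For what it's worth, the alternatives in the first two bullets can be sketched in NumPy. The array sizes and the constant kernel are illustrative assumptions, not values from either paper:

```python
import numpy as np

# --- Down-sampling by 2: max-pooling vs. strided convolution ---
x = np.arange(16, dtype=np.float32).reshape(4, 4)  # toy 4x4 feature map

# 2x2 max-pooling, stride 2: keep the maximum of each block; no parameters.
blocks = x.reshape(2, 2, 2, 2)       # (row_block, row, col_block, col)
pooled = blocks.max(axis=(1, 3))     # shape (2, 2)

# 2x2 convolution with stride 2: a weighted sum per block. The constant
# kernel here stands in for learned weights.
kernel = np.full((2, 2), 0.25, dtype=np.float32)
strided = np.einsum('ijkl,jl->ik', blocks, kernel)  # shape (2, 2)

# --- Up-sampling by 2 (1-D): repetition vs. linear interpolation ---
y = np.array([0.0, 2.0, 4.0], dtype=np.float32)

# Repetition (nearest neighbour): each value is simply duplicated.
repeated = np.repeat(y, 2)           # [0, 0, 2, 2, 4, 4]

# Linear interpolation: new samples lie between their neighbours.
# Trilinear up-sampling is the 3-D analogue, interpolating along each axis.
positions = np.linspace(0, len(y) - 1, 2 * len(y))
interpolated = np.interp(positions, np.arange(len(y)), y)
```

The key practical difference is that pooling and repetition are parameter-free, while a strided convolution learns its kernel and interpolation produces smooth intermediate values.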

Best Regards,
Talha
