Regarding demo_CrossDatasetOpenSet_training.ipynb, cell [3]: my understanding of the model is that the Generator's output should match the shape of the classifier's (ResNet-18) feature, i.e., a [512, 1, 1] tensor after GAP. In that case, as you said, the Generator is effectively just an MLP. So what do the conv2d layers do? To put it another way: if I use an ordinary nn.Linear + nn.BatchNorm1d network as the generator and reshape its output to [512, 1, 1], are the two equivalent? (See the sketch below for what I mean.)
If so, what do annotations like "state size: (self.ngf*8) x 4 x 4" actually mean? Shouldn't the output of that layer have shape (self.ngf*8) x 1 x 1?
Looking forward to your reply.
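For concreteness, here is a minimal sketch of the equivalence I'm asking about. The hyperparameters (nz=100, batch size 8) and the use of kernel_size=1 are my own assumptions for illustration, not the repo's actual code; the point is that a 1x1 transposed convolution applied to a [nz, 1, 1] input is just a matrix multiply, so it should behave like an nn.Linear of the same dimensions up to the reshape:

```python
import torch
import torch.nn as nn

nz, nf = 100, 512  # latent dim (assumed); feature dim matching ResNet-18 after GAP

# Conv-style generator on a [nz, 1, 1] input: a ConvTranspose2d with a 1x1
# kernel on a 1x1 spatial map reduces to a matrix multiply.
conv_gen = nn.Sequential(
    nn.ConvTranspose2d(nz, nf, kernel_size=1, stride=1, padding=0, bias=False),
    nn.BatchNorm2d(nf),
    nn.ReLU(True),
)

# MLP generator producing the same 512-dim feature, then reshaped.
mlp_gen = nn.Sequential(
    nn.Linear(nz, nf, bias=False),
    nn.BatchNorm1d(nf),
    nn.ReLU(True),
)

z = torch.randn(8, nz)
out_conv = conv_gen(z.view(8, nz, 1, 1))  # shape [8, 512, 1, 1]
out_mlp = mlp_gen(z).view(8, nf, 1, 1)    # shape [8, 512, 1, 1]
print(out_conv.shape, out_mlp.shape)
```

The two modules are initialized independently, so their outputs won't be numerically identical, but as far as I can tell they parameterize the same family of functions.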
Hello, could I take a look at the OTS features generated by your ResNet-18 network? If I could also learn the structure of your ResNet-18, I would be very grateful. I wish you all the best in your work!