Thank you for your work. Do you have pre-trained model weights? If I use my own dataset, is it accurate to use InceptionV3 model parameters pretrained on ImageNet to evaluate FID or IS?
Should I change the "nclass_dict" and "classes_per_sheet_dict" values in utils.py to 3 when my dataset has only three classes?
We haven't released any pretrained model weights, but I think I may still have some lying around somewhere if you need them.
For GAN evaluation it is common practice to use an InceptionV3 model pretrained on ImageNet to evaluate FID and IS. However, it has been shown that this can bias the metrics towards favouring samples that contain features indicative of ImageNet classes (https://arxiv.org/pdf/2203.06026.pdf). An alternative is to use a model trained in a self-supervised manner, such as https://github.com/stanis-morozov/self-supervised-gan-eval.
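For reference, here is a minimal sketch (not this repo's own evaluation code) of computing FID with an ImageNet-pretrained InceptionV3 backend via torchmetrics; the random tensors are placeholders for your real and generated batches, and in practice you would accumulate many more samples than shown here.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# normalize=True expects float images in [0, 1]; feature=2048 is the standard
# InceptionV3 pool3 feature layer used for FID.
fid = FrechetInceptionDistance(feature=2048, normalize=True)

# Placeholder batches of shape (N, 3, H, W) in [0, 1]; replace with batches
# drawn from your dataset and your generator.
real_images = torch.rand(64, 3, 299, 299)
fake_images = torch.rand(64, 3, 299, 299)

fid.update(real_images, real=True)
fid.update(fake_images, real=False)
print(fid.compute())  # lower is better; use thousands of samples for a stable estimate
```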
Yes, if you have your own dataset you'll likely need to update those values accordingly.
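As a rough illustration (the actual dictionary contents in utils.py may differ), adding an entry for your dataset would look something like this, where 'MyData' is a placeholder key for your 3-class dataset:

```python
# Hypothetical sketch of the relevant entries in utils.py; existing keys shown
# here are illustrative only.
nclass_dict = {
    'I128': 1000, 'C10': 10,   # existing datasets (illustrative)
    'MyData': 3,               # your dataset: 3 classes
}
classes_per_sheet_dict = {
    'I128': 50, 'C10': 10,     # existing datasets (illustrative)
    'MyData': 3,               # show all 3 classes on each sample sheet
}
```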