Is there something wrong with the original cfg file, please? #1
Command: sh scripts/meta_training_pascalvoc_split1_resnet101.sh

Error: ValueError: Milestone must be smaller than total number of updates: num_updates=10000, milestone=10000

detectron2 version: 0.5

cfg file:
SOLVER:
  IMS_PER_BATCH: 4
  BASE_LR: 0.002
  STEPS: (15000, 20000)
  MAX_ITER: 20000
  CHECKPOINT_PERIOD: 10000

Has the author encountered this problem?

Environment: Ubuntu 18 + torch 1.8 + CUDA 11.0 + detectron2 v0.5
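The constraint is stated by the error itself: in the multi-step LR scheduler that detectron2 0.5 builds from the config, every milestone in SOLVER.STEPS must be strictly smaller than the total number of updates (SOLVER.MAX_ITER). The reported values (num_updates=10000, milestone=10000) indicate that the config actually loaded by the script has a step equal to its MAX_ITER, and the cfg above shows the same pattern (a step at 20000 with MAX_ITER: 20000). One possible adjustment, sketched below under the assumption that the schedule lengths above are the intended ones, keeps the last step strictly below MAX_ITER; the value 18000 is only an illustrative choice, not the authors' setting, and the alternative is to raise MAX_ITER above the last step instead.

SOLVER:
  IMS_PER_BATCH: 4
  BASE_LR: 0.002
  STEPS: (15000, 18000)      # every step strictly below MAX_ITER; 18000 is an illustrative value
  MAX_ITER: 20000
  CHECKPOINT_PERIOD: 10000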
Comments

Hi,

OK. I also asked the same question on Detectron2; maybe I need to find another way to solve this problem. By the way, does the other paper (Meta Faster R-CNN) use the same configuration and environment?

Yes, we used the same environment in the two repos.

Thank you for your reply, and good luck with your work.

Excuse me, I have another problem:

Both of the two scripts are crucial. We first use fsod_train_net_fewx.py to train the baseline model following the FewX repo, which is reorganized in our fewx module. Then we add the proposed heterogeneous GCNs and use fsod_train_net.py to train the whole model, which is defined in our QA_FewDet module. The two modules, fewx and QA_FewDet, are different, and the two-step meta-training is crucial for our final performance. If we only use the QA_FewDet module, the training is unstable.

Thank you for your excellent work and your reply.
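For reference, the two-step meta-training described above maps to two separate launches, sketched below. This is only a sketch: the flags follow detectron2's standard training scripts (--config-file, --num-gpus), and the config paths and GPU count are placeholders rather than the repo's actual file names; the scripts under scripts/ remain the authoritative entry points.

# Step 1: train the baseline model with the fewx module (FewX-style training).
python fsod_train_net_fewx.py --num-gpus 4 \
    --config-file <baseline meta-training config>.yaml

# Step 2: add the heterogeneous GCNs and train the full model with the QA_FewDet module.
python fsod_train_net.py --num-gpus 4 \
    --config-file <full-model meta-training config>.yaml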