# Multiple Object Stitching for Unsupervised Representation Learning

## 1. Requirements

```shell
conda create -n mos python=3.8
conda activate mos
pip install -r requirements.txt
```

## 2. Datasets

Torchvision provides the CIFAR10 and CIFAR100 datasets. Their root paths are set to `./dataset/cifar10` and `./dataset/cifar100`, respectively.
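If you want to prepare the directory layout up front (torchvision's CIFAR downloaders will populate the roots on first use):

```shell
# Create the dataset roots; torchvision downloads the CIFAR archives into them
mkdir -p ./dataset/cifar10 ./dataset/cifar100
```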

## 3. Trained Model Weights & Accuracy

| Dataset  | Training (#Epochs) | ViT-Tiny/2 | ViT-Small/2 | ViT-Base/2 |
|----------|--------------------|------------|-------------|------------|
| CIFAR10  | Pretrain (800)     | link       | link        | link       |
|          | KNN Accuracy       | 93.2%      | 95.1%       | 95.1%      |
|          | Linear (100)       | link       | link        | link       |
|          | Linear Accuracy    | 94.8%      | 96.3%       | 96.4%      |
|          | Finetune (100)     | link       | link        | link       |
|          | Finetune Accuracy  | 97.6%      | 98.3%       | 98.3%      |
| CIFAR100 | Pretrain (800)     | link       | link        | link       |
|          | KNN Accuracy       | 67.8%      | 73.5%       | 74.4%      |
|          | Linear (100)       | link       | link        | link       |
|          | Linear Accuracy    | 73.5%      | 78.5%       | 79.6%      |
|          | Finetune (100)     | link       | link        | link       |
|          | Finetune Accuracy  | 83.7%      | 86.1%       | 86.2%      |

## 4. Usage: Pretraining

ViT-Small with single-node (8-GPU) training: set the hyperparameters, dataset, and GPUs in `config/pretrain/vit_small_pretrain.py`, then run:

```shell
python main_pretrain.py --arch vit-small
```
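As the project title suggests, pretraining builds composite inputs by stitching multiple images together. The repository's actual augmentation pipeline is not reproduced here; the following is a hypothetical NumPy sketch of the basic idea, stitching a 2×2 grid of CIFAR-sized images into one composite (the function name `stitch_grid` and the grid layout are my assumptions):

```python
import numpy as np

def stitch_grid(images, grid=2):
    """Stitch grid*grid HWC images into one composite image.

    Hypothetical illustration of multi-image stitching; not the
    repository's actual augmentation code.
    """
    assert len(images) == grid * grid
    rows = [np.concatenate(images[r * grid:(r + 1) * grid], axis=1)
            for r in range(grid)]          # stitch each row left-to-right
    return np.concatenate(rows, axis=0)    # stack the rows top-to-bottom

# Four fake 32x32 RGB "CIFAR" images -> one 64x64 composite
imgs = [np.random.rand(32, 32, 3) for _ in range(4)]
composite = stitch_grid(imgs)
print(composite.shape)  # (64, 64, 3)
```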

## 5. Usage: KNN Evaluation

Set the hyperparameters, dataset, and GPUs in `config/knn/knn.py`, then run:

```shell
python main_knn.py --arch vit-small --pretrained-weights /path/to/pretrained-weights.pth
```
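KNN evaluation classifies each test feature by a vote among its nearest training features, commonly under cosine similarity with similarity-weighted voting. A self-contained NumPy sketch of the idea (the value of `k` and the weighting scheme are assumptions; the actual settings live in `config/knn/knn.py`):

```python
import numpy as np

def knn_predict(train_feats, train_labels, test_feats, k=5):
    """Cosine-similarity KNN with similarity-weighted voting.

    Sketch of the evaluation idea, not the repository's exact script.
    """
    # L2-normalize so the dot product equals cosine similarity
    tr = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    te = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sim = te @ tr.T                          # (n_test, n_train) similarities
    nn = np.argsort(-sim, axis=1)[:, :k]     # indices of the top-k neighbors
    preds = []
    for i in range(len(te)):
        votes = {}
        for j in nn[i]:
            votes[train_labels[j]] = votes.get(train_labels[j], 0.0) + sim[i, j]
        preds.append(max(votes, key=votes.get))
    return np.array(preds)

# Toy check: two well-separated feature clusters
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(5, 0.1, (20, 8)), rng.normal(-5, 0.1, (20, 8))])
labels = np.array([0] * 20 + [1] * 20)
print(knn_predict(feats, labels, feats))  # recovers the cluster labels
```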

## 6. Usage: Linear Classification

Set the hyperparameters, dataset, and GPUs in `config/linear/vit_small_linear.py`, then run:

```shell
python main_linear.py --arch vit-small --pretrained-weights /path/to/pretrained-weights.pth
```
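Linear classification ("linear probing") freezes the pretrained backbone and trains only a linear head on its features. A minimal NumPy sketch of softmax regression on frozen features (the learning rate, epoch count, and toy data are assumptions; the real optimizer and schedule are set in `config/linear/vit_small_linear.py`):

```python
import numpy as np

def train_linear_probe(feats, labels, n_classes, lr=0.5, epochs=200):
    """Train a linear (softmax) classifier on frozen backbone features."""
    n, d = feats.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)            # softmax probabilities
        grad = (p - onehot) / n                      # cross-entropy gradient
        W -= lr * feats.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

# Toy frozen "features": two linearly separable blobs
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(2, 0.5, (50, 4)), rng.normal(-2, 0.5, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
W, b = train_linear_probe(X, y, n_classes=2)
acc = ((X @ W + b).argmax(axis=1) == y).mean()
print(acc)  # 1.0 on this separable toy data
```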

## 7. Usage: End-to-End Fine-tuning

Set the hyperparameters, dataset, and GPUs in `config/finetuning/vit_small_finetuning.py`, then run:

```shell
python main_finetune.py --arch vit-small --pretrained-weights /path/to/pretrained-weights.pth
```
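Fine-tuning typically loads only the backbone weights from the pretraining checkpoint and discards the self-supervised projection head. A small pure-Python sketch of that key-filtering pattern (the `backbone.` and `head.` prefixes are assumptions about the checkpoint layout; inspect your checkpoint's keys before loading with `load_state_dict`):

```python
def backbone_state(checkpoint, drop_prefix="head.", strip_prefix="backbone."):
    """Keep backbone parameters and drop the projection head.

    Hypothetical key layout, shown for illustration only.
    """
    out = {}
    for key, value in checkpoint.items():
        if key.startswith(drop_prefix):
            continue                           # discard SSL head weights
        if key.startswith(strip_prefix):
            key = key[len(strip_prefix):]      # match the bare backbone's names
        out[key] = value
    return out

ckpt = {
    "backbone.patch_embed.weight": "...",
    "backbone.blocks.0.attn.qkv.weight": "...",
    "head.mlp.weight": "...",
}
print(sorted(backbone_state(ckpt)))
# ['blocks.0.attn.qkv.weight', 'patch_embed.weight']
```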