Vision Conformer: Incorporating Convolutions into Vision Transformer Layers

vision-conformer

Implementation of Vision Conformer, an image classification model that combines a Vision Transformer with a CNN.

The contents are still under development.
If you have any questions, please contact me: akihiro.kusuda@human.ait.kyushu-u.ac.jp

Usage

You can create a conda environment with envirionment.yaml.

conda env create -f envirionment.yaml
conda activate vision-conformer

You can train the model by running main.py.

python main.py

You can also check the results on the MLflow dashboard.

mlflow ui
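
The dashboard shows whatever parameters and metrics the training script logs. As a point of reference only, here is a minimal MLflow logging sketch (standard MLflow usage with placeholder values, not the repository's actual training code):

import mlflow

# log a hyperparameter and a per-epoch metric so they show up under `mlflow ui`
with mlflow.start_run():
    mlflow.log_param("dim", 256)
    for epoch, acc in enumerate([0.90, 0.93, 0.95]):  # placeholder values
        mlflow.log_metric("val_accuracy", acc, step=epoch)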

Parameters

In main.py, you can change the parameters for this model.

model = VisionConformer(
    image_size=28,
    patch_size=4,
    num_classes=10,
    dim=256,
    depth=3,
    heads=4,
    mlp_dim=256,
    dropout=0.1,
    emb_dropout=0.1,
    pool='cls',
    channels=1,
    hidden_channels=32,
    cnn_depth=2,
)
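
For a quick sanity check, the instantiated model can be called like any other classifier. The sketch below is a minimal example; the import path is an assumption and may need to be adjusted to the repository's actual module layout.

import torch
# assumed import path; adjust to match where VisionConformer is defined in this repo
from vision_conformer import VisionConformer

model = VisionConformer(
    image_size=28, patch_size=4, num_classes=10,
    dim=256, depth=3, heads=4, mlp_dim=256,
    dropout=0.1, emb_dropout=0.1, pool='cls',
    channels=1, hidden_channels=32, cnn_depth=2,
)

# dummy batch of four single-channel 28x28 images (MNIST-sized input)
images = torch.randn(4, 1, 28, 28)
logits = model(images)  # expected shape: (4, 10), one score per class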
