Code for the paper "Image tampering localization network based on multi-class attention and progressive subtraction".
- Ubuntu18.04
- Python 3.8
- PyTorch 1.9.0 + Cuda 11.1
- Detailed Python library requirements can be found in requirements.txt
An example of the dataset index file is given as datasets/Casiav2.txt, where each line contains:
img_path mask_path edge_path label
- 0 represents an authentic image and 1 a manipulated image.
- For an authentic image, both mask_path and edge_path are "None".
- For wild images without ground-truth masks, each line should at least contain "img_path".
Authentic image:
./Casiav2/authentic/Au_ani_00001.jpg None None 0
Manipulated image with pre-generated edge mask:
./Casiav2/tampered/Tp_D_CND_M_N_ani00018_sec00096_00138.tif ./Casiav2/mask/Tp_D_CND_M_N_ani00018_sec00096_00138_gt.png ./Casiav2/edge/Tp_D_CND_M_N_ani00018_sec00096_00138_gt.png 1
Manipulated image without pre-generated edge mask:
./Casiav2/tampered/Tp_D_CND_M_N_ani00018_sec00096_00138.tif ./Casiav2/mask/Tp_D_CND_M_N_ani00018_sec00096_00138_gt.png None 1
You should follow this format and generate your own path file as a xxxx.txt.
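The index format above can be parsed line by line; as a minimal sketch, a hypothetical loader (`parse_index_line` is not part of the released code) might treat the literal string "None" as a missing path and tolerate wild-image lines that contain only `img_path`:

```python
def parse_index_line(line: str):
    """Parse one line of the dataset index file.

    Returns (img_path, mask_path, edge_path, label), where missing fields
    (the literal "None", or absent columns for wild images) become None.
    """
    fields = line.split()
    img_path = fields[0]
    mask_path = fields[1] if len(fields) > 1 and fields[1] != "None" else None
    edge_path = fields[2] if len(fields) > 2 and fields[2] != "None" else None
    label = int(fields[3]) if len(fields) > 3 else None
    return img_path, mask_path, edge_path, label
```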
Limits: At this time, the edge mask is generated during training and cannot be pre-generated, which slows training somewhat: in every epoch an edge mask is regenerated for each image, even though it is always the same mask. A better approach is to generate the edge masks from the ground-truth masks once, before training starts. A script for pre-generating the edge masks will be released later.
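Until that script is released, one way to pre-generate edge masks from ground-truth masks is a morphological gradient (dilation minus erosion) of the binary mask. The following is only a sketch under assumptions, not the authors' method: it uses a cross-shaped structuring element of configurable `thickness`, pure NumPy, and wrap-around at image borders via `np.roll`.

```python
import numpy as np

def mask_to_edge(mask: np.ndarray, thickness: int = 3) -> np.ndarray:
    """Binary edge map of the tampered-region boundary.

    Computes dilation minus erosion of the binarized mask with a
    cross-shaped structuring element of arm length `thickness`.
    """
    m = mask > 0
    dilated = m.copy()
    eroded = m.copy()
    for axis in (0, 1):
        for shift in range(1, thickness + 1):
            for s in (shift, -shift):
                rolled = np.roll(m, s, axis=axis)  # shifted copy of the mask
                dilated |= rolled                  # union -> dilation
                eroded &= rolled                   # intersection -> erosion
    return (dilated & ~eroded).astype(np.uint8) * 255
```

A pre-generation script would loop over the ground-truth masks, apply this function, and save the results under the edge directory referenced by the index file.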
- Please set the training image path in train_base.py, then run train_lanch.py with Python.
- In the models folder we provide several working Python files named "maps_net-xxx.py". Select the .py file you need and rename it to "maps_net.py" for training. These files were also used in our ablation experiments.
- Please set the test image path in inference.py, run inference.py with Python, and then run evaluate.py with Python.