
Data set processing? #7

Open

Description

@xiaoyufenfei

Hello @mapleneverfade, do you mean that all datasets should be arranged as follows:

  • datasets
    • VOC2012

      • JPEGImages
      • gtFine
      • train
        • image.txt
        • label.txt
      • val
        • image.txt
        • label.txt
      • test
        • image.txt
        • label.txt
    • Cityscapes

      • JPEGImages
      • gtFine
      • train
        • image.txt
        • label.txt
      • val
        • image.txt
        • label.txt
      • test
        • image.txt
        • label.txt
But there is another point I don't understand: for gtFine in Cityscapes, do you use *_labelTrainIds.png or *_labelIds.png? I can't figure it out. Can you help me? Thank you very much!

Activity

mapleneverfade (Owner) commented on Jan 17, 2019

I'm not sure I understand what's puzzling you, but gtFine should contain grayscale label images.

xiaoyufenfei (Author) commented on Jan 17, 2019

@mapleneverfade Thank you! I am sure the Cityscapes gtFine folder contains grayscale images; I am just not sure whether you use *_labelIds.png or the processed *_labelTrainIds.png.

When I used *_labelTrainIds.png, I encountered the following problem:

CUDA_VISIBLE_DEVICES=0 python3 train.py --datadir ./data/cityscapes/ --savedir ./save_models/segnet/ --model segnet --num-classes 20 --num-epochs 150 --batch-size 2 --steps-loss 50 --num-workers 4 --epoch-save 10
------------ Options -------------
batch_size: 2
cuda: True
datadir: ./data/cityscapes/
epoch_save: 10
iouTrain: True
iouVal: True
lr: 0.0005
model: segnet
num_classes: 20
num_epochs: 150
num_workers: 4
pretrained: ./pre_trained/~~~.pth
resume: False
savedir: ./save_models/segnet/
state: None
steps_loss: 50
-------------- End ----------------
RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCStorage.cpp:36
/pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:99: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [5,0,0], thread: [862,0,0] Assertion t >= 0 && t < n_classes failed.
/pytorch/aten/src/THCUNN/SpatialClassNLLCriterion.cu:99: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [5,0,0], thread: [863,0,0] Assertion t >= 0 && t < n_classes failed.

I am not sure if it is a dataset label problem. When I replace the labels in the Cityscapes gtFine directory with *_labelIds.png, I encounter the same problem.

I can't figure it out. Can you help me? Thank you very much!
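For what it's worth, the assert `t >= 0 && t < n_classes` fires when some target pixel holds a class id outside [0, num_classes). With *_labelTrainIds.png the void classes are stored as 255, which is out of range for num_classes=20 unless the loss is configured to skip it. A quick sanity check, sketched here with NumPy (the helper name is made up for illustration, not part of this repo):

```python
import numpy as np

def out_of_range_values(label, num_classes=20, ignore_index=255):
    """Return label values that would trip the NLL device-side assert.

    `label` is a 2-D array of class ids, e.g. np.array(Image.open(path)).
    Values equal to `ignore_index` are excluded, on the assumption that
    the loss is told to skip them.
    """
    values = np.unique(label)
    return [int(v) for v in values
            if (v < 0 or v >= num_classes) and v != ignore_index]
```

Running this over a few label files should show only values in 0..num_classes-1 plus 255; if 255 appears and the loss has no ignore_index, that alone would explain the assert.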

mapleneverfade (Owner) commented on Jan 17, 2019

Have you ever noticed this?
[screenshot]

xiaoyufenfei (Author) commented on Jan 17, 2019

@mapleneverfade
My dataset is already processed. I used the cityscapesScripts scripts to generate *_labelTrainIds.png from *_labelIds.png; I am just not sure whether you use *_labelIds.png or *_labelTrainIds.png in gtFine during training. The directory ./utils/cityscapes/helps/labels is part of cityscapesScripts, and I have already run it. I set num_classes=20 for training, but whichever labels I put in the gtFine directory (*_labelIds.png or *_labelTrainIds.png), the program still raises the error above.
I can't figure it out. Can you help me? Thank you very much!

mapleneverfade (Owner) commented on Jan 17, 2019

Actually, I use https://github.com/wkentaro/labelme to generate labels.
Please check whether your label values range from 0 to 19.
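To connect this with the assert above: if the labels are *_labelTrainIds.png, every pixel is already in the train-id range except the void value 255, and one common way to handle that (a minimal sketch with a plain PyTorch CrossEntropyLoss, not this repo's actual training code) is to pass ignore_index:

```python
import torch
import torch.nn as nn

num_classes = 20  # matching the --num-classes 20 flag used above

# ignore_index=255 keeps void pixels from tripping the
# "t >= 0 && t < n_classes" device-side assert.
criterion = nn.CrossEntropyLoss(ignore_index=255)

logits = torch.randn(2, num_classes, 4, 4)         # N, C, H, W
target = torch.randint(0, num_classes, (2, 4, 4))  # valid train ids
target[0, 0, 0] = 255                              # void pixel: ignored
loss = criterion(logits, target)
```

Without ignore_index, the single 255 pixel above would be enough to trigger the exact assertion in the log.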

xiaoyufenfei (Author) commented on Jan 17, 2019

@mapleneverfade
I understand what you mean: the data in ./data/dataset/gtFine was processed with https://github.com/wkentaro/labelme. But the official PASCAL VOC2012 dataset has already been processed (http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html); can it not be used directly?

Cityscapes is likewise an officially processed dataset, but we always need the script https://github.com/kinglintianxia/cityscapesScripts to convert *_labelIds.png to *_labelTrainIds.png.
I want to know how you trained the network on Cityscapes.
Re-labeling Cityscapes is impractical, and it would defeat the purpose of using a public benchmark, unless this is your own new dataset?
I can't figure it out. Can you help me? Thank you very much!
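For reference, the *_labelIds.png → *_labelTrainIds.png step does not re-label anything: cityscapesScripts simply remaps the full label ids to train ids through the table in its labels.py. A minimal sketch with a handful of entries (the four ids shown are taken from that table; every id not listed maps to the ignore value 255):

```python
import numpy as np

# Partial excerpt from cityscapesScripts labels.py (id -> trainId):
# road=7 -> 0, sidewalk=8 -> 1, building=11 -> 2, car=26 -> 13.
ID_TO_TRAINID = {7: 0, 8: 1, 11: 2, 26: 13}

def labelids_to_trainids(label_ids, ignore_index=255):
    """Remap a *_labelIds.png array to train ids via a lookup table."""
    lut = np.full(256, ignore_index, dtype=np.uint8)
    for full_id, train_id in ID_TO_TRAINID.items():
        lut[full_id] = train_id
    return lut[label_ids]
```

With all the entries filled in from labels.py, this should produce the same content as the generated *_labelTrainIds.png files, so there is no need to re-annotate the dataset.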



      Data set processing? · Issue #7 · mapleneverfade/pytorch-semantic-segmentation