Allows processing of images with MMSegmentation.

Uses PyTorch 1.9.0 and CPU support.

MMSegmentation github repo tag/hash:

```
v0.25.0
46326f63ce411c794d237e986dd3924590d0e75e
```

and timestamp:

```
June 3rd, 2022
```
- Log into the registry using *public* credentials:

  ```bash
  docker login -u public -p public public.aml-repo.cms.waikato.ac.nz:443
  ```

- Pull and run the image (adjust volume mappings `-v`):

  ```bash
  docker run --shm-size 8G \
    -v /local/dir:/container/dir \
    -it public.aml-repo.cms.waikato.ac.nz:443/open-mmlab/mmsegmentation:0.25.0_cpu
  ```

- Or pull and run the image from Docker hub (adjust volume mappings `-v`):

  ```bash
  docker run --shm-size 8G \
    -v /local/dir:/container/dir \
    -it waikatodatamining/mmsegmentation:0.25.0_cpu
  ```
- Build the image from the Dockerfile (from within /path_to/mmsegmentation/0.25.0_cpu):

  ```bash
  docker build -t mmseg .
  ```

- Run the container:

  ```bash
  docker run --shm-size 8G -v /local/dir:/container/dir -it mmseg
  ```

  `/local/dir:/container/dir` maps a local disk directory into a directory inside the container.

- Build the image for publishing:

  ```bash
  docker build -t mmsegmentation:0.25.0_cpu .
  ```
- Tag:

  ```bash
  docker tag \
    mmsegmentation:0.25.0_cpu \
    public-push.aml-repo.cms.waikato.ac.nz:443/open-mmlab/mmsegmentation:0.25.0_cpu
  ```

- Push:

  ```bash
  docker push public-push.aml-repo.cms.waikato.ac.nz:443/open-mmlab/mmsegmentation:0.25.0_cpu
  ```

  If the error "no basic auth credentials" occurs, then run the following (enter username/password when prompted):

  ```bash
  docker login public-push.aml-repo.cms.waikato.ac.nz:443
  ```
- Tag:

  ```bash
  docker tag \
    mmsegmentation:0.25.0_cpu \
    waikatodatamining/mmsegmentation:0.25.0_cpu
  ```

- Push:

  ```bash
  docker push waikatodatamining/mmsegmentation:0.25.0_cpu
  ```

  If the error "no basic auth credentials" occurs, then run the following (enter username/password when prompted):

  ```bash
  docker login
  ```
The following scripts are available:

- `mmseg_config` - for expanding/exporting default configurations (calls `/mmsegmentation/tools/misc/print_config.py`)
- `mmseg_predict_poll` - for applying a model to images (uses file-polling, calls `/mmsegmentation/tools/predict_poll.py`)
- `mmseg_predict_redis` - for applying a model to images (via Redis backend), add `--net=host` to the Docker options (calls `/mmsegmentation/tools/predict_redis.py`)
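For instance, a minimal sketch of expanding a bundled config, assuming `mmseg_config` simply forwards its arguments to `print_config.py` (the config path is the PSPNet example used further below):

```bash
# print the fully expanded configuration, with all inherited settings resolved
mmseg_config /mmsegmentation/configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py
```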
- Predict and produce PNG files:

  ```bash
  mmseg_predict_poll \
    --model /path_to/epoch_n.pth \
    --config /path_to/your_data_config.py \
    --prediction_in /path_to/test_imgs \
    --prediction_out /path_to/test_results
  ```

  Run with `-h` for all available options.
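  Since the script polls the `--prediction_in` directory for new files, you can trigger predictions simply by copying images into it (paths match the example above):

  ```bash
  # drop an image into the polled directory...
  cp some_image.png /path_to/test_imgs/
  # ...then check the output directory for the resulting PNG
  ls /path_to/test_results/
  ```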
- Predict via Redis backend

  You need to start the docker container with the `--net=host` option if you are using the host's Redis server. The following command listens for images coming through on channel `images` and broadcasts predicted images on channel `predictions`:

  ```bash
  mmseg_predict_redis \
    --model /path_to/epoch_n.pth \
    --config /path_to/your_data_config.py \
    --redis_in images \
    --redis_out predictions
  ```

  Run with `-h` for all available options.
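  For example, combining the options documented above, the container could be started like this when using the host's Redis server (the volume mapping is just a placeholder):

  ```bash
  docker run --net=host --shm-size 8G \
    -v /local/dir:/container/dir \
    -it public.aml-repo.cms.waikato.ac.nz:443/open-mmlab/mmsegmentation:0.25.0_cpu
  ```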
When running the docker container as a regular user, you will want to set the correct user and group on the files generated by the container (i.e., the user:group launching the container):

```bash
docker run -u $(id -u):$(id -g) -e USER=$USER ...
```
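A complete invocation might look like this, with the image and volume mapping taken from the earlier examples:

```bash
docker run --shm-size 8G -u $(id -u):$(id -g) -e USER=$USER \
  -v /local/dir:/container/dir \
  -it public.aml-repo.cms.waikato.ac.nz:443/open-mmlab/mmsegmentation:0.25.0_cpu
```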
PyTorch downloads base models, if necessary. However, by using Docker, this means that models will get downloaded with each Docker image, using up unnecessary bandwidth and slowing down the startup. To avoid this, you can map a directory on the host machine to cache the base models for all processes (usually, there would be only one concurrent model being trained):

```
-v /somewhere/local/cache:/.cache
```

Or specifically for PyTorch:

```
-v /somewhere/local/cache/torch:/.cache/torch
```

**NB:** When running the container as root rather than a specific user, the internal directory will have to be prefixed with `/root`.
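For example, when running as root, the first mapping becomes:

```
-v /somewhere/local/cache:/root/.cache
```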
You can use `simple-redis-helper` to broadcast images and listen for image segmentation results when testing.
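As a rough smoke test without additional tooling, here is a sketch that assumes `redis-cli` is installed on the host and uses the channel names from the example above:

```bash
# publish the raw bytes of a test image on the "images" channel (-x reads the payload from stdin)
redis-cli -x publish images < some_image.png
# in another terminal: print whatever gets broadcast on the "predictions" channel (binary output)
redis-cli subscribe predictions
```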
You can test the inference of your container with the `image_demo2.py` script as follows:
- create a test directory and change into it

  ```bash
  mkdir test_inference
  cd test_inference
  ```

- create a cache directory

  ```bash
  mkdir -p cache/torch
  ```

- start the container in interactive mode

  ```bash
  docker run --shm-size 8G -u $(id -u):$(id -g) -e USER=$USER \
    -v `pwd`:/workspace \
    -v `pwd`/cache:/.cache \
    -v `pwd`/cache/torch:/.cache/torch \
    -it public.aml-repo.cms.waikato.ac.nz:443/open-mmlab/mmsegmentation:0.25.0_cpu
  ```

- download a pretrained model

  ```bash
  cd /workspace
  mim download mmsegmentation --config pspnet_r50-d8_512x1024_40k_cityscapes --dest .
  ```

- perform inference

  ```bash
  python /mmsegmentation/demo/image_demo2.py \
    --img /mmsegmentation/demo/demo.png \
    --config /mmsegmentation/configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
    --checkpoint pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
    --output_file /workspace/demo_out.png
  ```

- the model saves the result of the segmentation in `test_inference/demo_out.png` (in grayscale)
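Back on the host, you can confirm that the output was written:

```bash
ls -l test_inference/demo_out.png
```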
- `ValueError: SyncBatchNorm expected input tensor to be on GPU`

  Replace `SyncBN` with `BN` in the config file (see here).
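  A minimal sketch of that change, assuming the config uses the usual `norm_cfg = dict(type='SyncBN', ...)` pattern:

  ```bash
  # swap SyncBN for BN in your config (path is a placeholder)
  sed -i "s/type='SyncBN'/type='BN'/g" /path_to/your_data_config.py
  ```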