
Commit 6212712

Dockerfile and README

File tree

2 files changed: +242, -0 lines


Dockerfile (+105 lines)

FROM ubuntu:16.04

RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates apt-transport-https gnupg-curl && \
    rm -rf /var/lib/apt/lists/* && \
    NVIDIA_GPGKEY_SUM=d1be581509378368edeec8c1eb2958702feedf3bc3d17011adbf24efacce4ab5 && \
    NVIDIA_GPGKEY_FPR=ae09fe4bbd223a84b2ccfce3f60f4b3d7fa2af80 && \
    apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub && \
    apt-key adv --export --no-emit-version -a $NVIDIA_GPGKEY_FPR | tail -n +5 > cudasign.pub && \
    echo "$NVIDIA_GPGKEY_SUM  cudasign.pub" | sha256sum -c --strict - && rm cudasign.pub && \
    echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64 /" > /etc/apt/sources.list.d/cuda.list && \
    echo "deb https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64 /" > /etc/apt/sources.list.d/nvidia-ml.list

ENV CUDA_VERSION 9.1.85

ENV CUDA_PKG_VERSION 9-1=$CUDA_VERSION-1
RUN apt-get update && apt-get install -y --no-install-recommends \
        cuda-cudart-$CUDA_PKG_VERSION && \
    ln -s cuda-9.1 /usr/local/cuda && \
    rm -rf /var/lib/apt/lists/*

# nvidia-docker 1.0
LABEL com.nvidia.volumes.needed="nvidia_driver"
LABEL com.nvidia.cuda.version="${CUDA_VERSION}"

RUN echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf && \
    echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf

ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64

# nvidia-container-runtime
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
ENV NVIDIA_REQUIRE_CUDA "cuda>=9.1"

# == CUDA/cuDNN build dependencies: the same set used by the official tensorflow-gpu
# Dockerfiles (tensorflow + cuda and cudnn built with bazel).
# cf. until 1.12.3: https://github.com/tensorflow/tensorflow/blob/v1.12.3/tensorflow/tools/docker/Dockerfile.devel-gpu
# or master: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/dockerfiles/dockerfiles/devel-gpu.Dockerfile
# The difference here is that no jupyter, keras, etc. are needed; the goal is the bare
# minimum needed to build with python3.5, cuda 9.1.85 and cudnn 7.1.3.

ENV CUDNN_PKG_VERSION 7.1.3.16-1+cuda9.1

RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        cmake \
        cuda-command-line-tools-9-1 \
        cuda-cublas-dev-9-1 \
        cuda-cudart-dev-9-1 \
        cuda-cufft-dev-9-1 \
        cuda-curand-dev-9-1 \
        cuda-cusolver-dev-9-1 \
        cuda-cusparse-dev-9-1 \
        cuda-nvrtc-dev-9-1 \
        curl \
        git \
        libcudnn7=$CUDNN_PKG_VERSION \
        libcudnn7-dev=$CUDNN_PKG_VERSION \
        libcurl3-dev \
        libfreetype6-dev \
        libhdf5-serial-dev \
        libpng12-dev \
        libzmq3-dev \
        pkg-config \
        python3-dev \
        python3-pip \
        python3-numpy \
        python3-setuptools \
        python3-scipy \
        python3-wheel \
        rsync \
        software-properties-common \
        unzip \
        zip \
        zlib1g-dev \
        wget \
        && \
    rm -rf /var/lib/apt/lists/* && \
    find /usr/local/cuda-9.1/lib64/ -type f -name 'lib*_static.a' -not -name 'libcudart_static.a' -delete && \
    rm /usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a

RUN git clone -b 'v1.0.0' --single-branch --depth 1 https://github.com/pytorch/pytorch.git

WORKDIR '/pytorch'

RUN git submodule update --init --recursive && \
    pip3 install pyyaml==3.13 wheel && \
    pip3 install -r requirements.txt

RUN CUDAHOSTCXX='/usr/bin/gcc' \
    USE_OPENCV=1 \
    BUILD_TORCH=ON \
    CMAKE_PREFIX_PATH="/usr/bin/" \
    LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/lib:$LD_LIBRARY_PATH \
    CUDA_BIN_PATH=/usr/local/cuda/bin \
    CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ \
    CUDNN_LIB_DIR=/usr/local/cuda/lib64 \
    CUDA_HOST_COMPILER=cc \
    USE_CUDA=1 \
    USE_NNPACK=1 \
    CC=cc \
    CXX=c++ \
    TORCH_CUDA_ARCH_LIST="6.0 6.1+PTX 7.0" \
    TORCH_NVCC_FLAGS="-Xfatbin -compress-all" \
    python3 setup.py bdist_wheel

README.md (+137 lines)

# Build PyTorch from source

(You may want to check the already built [releases](https://github.com/edumucelli/build-torch/releases) before building yourself.)

This project builds PyTorch from source using a Dockerfile. The idea is to
simplify the build so that one can choose the CUDA/CuDNN version that best
fits one's environment.

PyTorch pre-built binaries require specific CUDA/CuDNN versions; for example,
as discussed in [#10971](https://github.com/pytorch/pytorch/issues/10971), there
are no pre-built PyTorch binaries with CUDA 9.1 after the 0.4.0 release. So if
you are using Debian Stretch with the Nvidia distribution packages, you cannot
use newer PyTorch releases, as they are built with CUDA 9.2.

The default configuration of the Dockerfile will build:

* Python 3.5
* PyTorch 1.3
* Cuda 9.1.85
* CuDNN 7.1.3

The Dockerfile downloads the CUDA packages straight from Nvidia
([here](https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64)),
so you can build against the CUDA version you prefer. The following versions are
available; just replace the `CUDA_VERSION` variable with one of them (see the
example after the list):

* 8.0.44
* 8.0.61
* 9.0.176
* 9.1.85
* 9.2.88
* 9.2.148
* 10.0.130
* 10.1.105
* 10.1.168
* 10.1.243

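For example, building against CUDA 10.1.243 would start with a change along these lines (a sketch, not an officially tested configuration; the matching `CUDNN_PKG_VERSION` must be picked from the list below, and the `9-1`/`cuda-9.1` references elsewhere in the Dockerfile must be updated too, as explained further down):

```
ENV CUDA_VERSION 10.1.243
ENV CUDA_PKG_VERSION 10-1=$CUDA_VERSION-1
```
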
Likewise, the CuDNN version can be one of the following; just set the `CUDNN_PKG_VERSION`
variable to the one you prefer (see the example after the list). The packages come directly
from [here](https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64).

* 7.0.1.13-1+cuda8.0
* 7.0.2.38-1+cuda8.0
* 7.0.3.11-1+cuda8.0
* 7.0.3.11-1+cuda9.0
* 7.0.4.31-1+cuda8.0
* 7.0.4.31-1+cuda9.0
* 7.0.5.15-1+cuda8.0
* 7.0.5.15-1+cuda9.0
* 7.0.5.15-1+cuda9.1
* 7.1.1.5-1+cuda8.0
* 7.1.1.5-1+cuda9.0
* 7.1.1.5-1+cuda9.1
* 7.1.2.21-1+cuda8.0
* 7.1.2.21-1+cuda9.0
* 7.1.2.21-1+cuda9.1
* 7.1.3.16-1+cuda8.0
* 7.1.3.16-1+cuda9.0
* 7.1.3.16-1+cuda9.1
* 7.1.4.18-1+cuda8.0
* 7.1.4.18-1+cuda9.0
* 7.1.4.18-1+cuda9.2
* 7.2.1.38-1+cuda8.0
* 7.2.1.38-1+cuda9.0
* 7.2.1.38-1+cuda9.2
* 7.3.0.29-1+cuda9.0
* 7.3.0.29-1+cuda10.0
* 7.3.1.20-1+cuda9.0
* 7.3.1.20-1+cuda9.2
* 7.3.1.20-1+cuda10.0
* 7.4.1.5-1+cuda9.0
* 7.4.1.5-1+cuda9.2
* 7.4.1.5-1+cuda10.0
* 7.4.2.24-1+cuda9.0
* 7.4.2.24-1+cuda9.2
* 7.4.2.24-1+cuda10.0
* 7.5.0.56-1+cuda9.0
* 7.5.0.56-1+cuda9.2
* 7.5.0.56-1+cuda10.0
* 7.5.0.56-1+cuda10.1
* 7.5.1.10-1+cuda9.0
* 7.5.1.10-1+cuda9.2
* 7.5.1.10-1+cuda10.0
* 7.5.1.10-1+cuda10.1
* 7.6.0.64-1+cuda9.0
* 7.6.0.64-1+cuda9.2
* 7.6.0.64-1+cuda10.0
* 7.6.0.64-1+cuda10.1
* 7.6.1.34-1+cuda9.0
* 7.6.1.34-1+cuda9.2
* 7.6.1.34-1+cuda10.0
* 7.6.1.34-1+cuda10.1
* 7.6.2.24-1+cuda9.0
* 7.6.2.24-1+cuda9.2
* 7.6.2.24-1+cuda10.0
* 7.6.2.24-1+cuda10.1
* 7.6.3.30-1+cuda9.0
* 7.6.3.30-1+cuda9.2
* 7.6.3.30-1+cuda10.0
* 7.6.3.30-1+cuda10.1
* 7.6.4.38-1+cuda9.0
* 7.6.4.38-1+cuda9.2
* 7.6.4.38-1+cuda10.0
* 7.6.4.38-1+cuda10.1

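For example, a CUDA 10.1 build could pair with CuDNN 7.6.4 like this (a sketch; the `+cudaX.Y` suffix has to match the CUDA version chosen above):

```
ENV CUDNN_PKG_VERSION 7.6.4.38-1+cuda10.1
```
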
Note that some of the package names embed the CUDA version you are building
against, e.g.,

```
apt-get install ...
    cuda-command-line-tools-9-1 \
    cuda-cublas-dev-9-1 \
    cuda-cudart-dev-9-1 \
    cuda-cufft-dev-9-1 \
    cuda-curand-dev-9-1 \
    cuda-cusolver-dev-9-1 \
    cuda-cusparse-dev-9-1 \
```

all have `9-1` in their names. If you are building against CUDA `10.1`, replace
`9-1` everywhere with `10-1` (a one-liner for this is sketched below).

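One way to do that rewrite on the default Dockerfile (a sketch; review the result before building, since it also adjusts the `cuda-9.1` paths, and `CUDA_VERSION`/`CUDNN_PKG_VERSION` still need to be set by hand as shown above):

```
sed -i 's/9-1/10-1/g; s/cuda-9\.1/cuda-10.1/g' Dockerfile
```
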
## Building

Run `(sudo) docker build -t build-torch .` in the directory where the
Dockerfile is stored. The PyTorch wheel will be stored in the `/pytorch/dist`
directory of the built image.

## Getting the wheel out of the container

There are [several ways](https://stackoverflow.com/questions/22049212/copying-files-from-docker-container-to-host) to copy
data from a container to the host machine. Here is one suggestion:

`sudo docker cp CONTAINER_ID:/tmp/pip/torch-1.3.0a0+50c90a2-cp35-cp35m-linux_x86_64.whl .`

To get the `CONTAINER_ID`, run `docker ps -alq`, or run `docker ps` and look for the
`CONTAINER_ID` of the container for the `build-torch` image (or whatever tag you
passed to `-t` in the build command). An end-to-end sketch is given below.

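For completeness, one possible end-to-end sequence (a sketch, not the project's official workflow; it assumes the wheel ends up under `/pytorch/dist` as described in the Building section, and the exact wheel filename will depend on the build):

```
sudo docker build -t build-torch .
CONTAINER_ID=$(sudo docker create build-torch)
sudo docker cp "$CONTAINER_ID":/pytorch/dist ./dist
sudo docker rm "$CONTAINER_ID"
ls dist/*.whl
```

On the host, the wheel can then be installed with `pip3 install dist/torch-*.whl` (Python 3.5 is required, matching the `cp35` tag).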
