PyTorch Dockerfile based on Ubuntu 18.04, CUDA 10.2, and cuDNN 7
For a machine with GPU support:
docker build -t image_name:tag -f dockerfile_nroot.gpu .
For a CPU-only machine:
docker build -t image_name:tag -f dockerfile_nroot.cpu .
To add other packages, append the install commands at the bottom of the Dockerfile so the cached layers above are reused and rebuild time stays short.
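For example, a hypothetical extra package appended near the bottom of dockerfile_nroot.gpu (the package name is only an illustration); Docker reuses the cached layers above it, so only this step is rebuilt:
# appended at the bottom of the Dockerfile
RUN pip install --no-cache-dir scikit-image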
Create the container (adjust the name, ports, and mounted paths to your setup):
docker create --name ryan_ocr -it \
-p 6006:6006 \
-p 8080:8080 \
-p 5000:5000 \
-p 10086:22 \
-e DISPLAY=unix$DISPLAY \
-e GDK_SCALE \
-e GDK_DPI_SCALE \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /home/ryan/Documents/Github:/home/ryan90/code \
-v /media/ryan/Data:/home/ryan90/data \
--gpus all \
ryan/dl-docker:gpu \
/bin/zsh
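Note that docker create only creates the container; start it before attaching (using the container name ryan_ocr from the command above):
docker start ryan_ocr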
- Use the following command on the host to enable X11 forwarding (for GUI applications only):
xhost +
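To check that X11 forwarding works, run a simple X client from inside the running container (a sketch; it assumes an X11 demo program such as xeyes is installed in the image):
docker exec -it ryan_ocr xeyes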
- Change the paths of the volumes mounted into the container with '-v' to match your own directories.
- For NVIDIA GPU support, follow the instructions at https://github.com/NVIDIA/nvidia-docker to install nvidia-docker.
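Once nvidia-docker is set up, GPU access can be verified with a throwaway container (a sketch; the nvidia/cuda:10.2-base tag is an assumption, any CUDA base image works):
docker run --rm --gpus all nvidia/cuda:10.2-base nvidia-smi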
- Use the following command to enter the container shell:
docker exec -it container_id /bin/zsh
Or use the VS Code Docker extension to manage images and containers; VS Code can then use the Python inside the container as its interpreter directly:
- Right-click the container in VS Code
- Attach Visual Studio Code to the container
- Select the Python inside the container as the interpreter
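Once inside the container (via docker exec or an attached VS Code terminal), a quick sanity check that PyTorch sees the GPU (assuming the image ships PyTorch built with CUDA support):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"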