# 🐳 Getting started with Docker
Before we start, you will need to download Docker Desktop for free from https://www.docker.com/products/docker-desktop/
We will run through everything in the diagram in this `README`.
## 🙋‍♂️ Why is Docker useful?
A common problem you may have faced when using other people's code is that it simply doesn't run on your machine. This is especially frustrating when the code runs just fine on the developer's machine and they cannot help you with the specific bugs that occur when you deploy it onto your operating system.
Let's first take a look at the Dockerfile for the Generative Deep Learning codebase and see how it contains all the information required to build the image.
## 📝 The Dockerfile
In the codebase that you pulled from GitHub, there is a file simply called `Dockerfile` inside the `docker` folder. This is the recipe that Docker will use to build the image. We'll walk through it line by line, explaining what each step does.
```
FROM ubuntu:20.04 #<1>

# ...

COPY /setup.cfg /app

ENV PYTHONPATH="${PYTHONPATH}:/app" #<7>
```
1. The first line defines the base image. Our base image is an Ubuntu 20.04 (Linux) operating system. This is pulled from Docker Hub - the online store of publicly available images (`https://hub.docker.com/_/ubuntu`).
2. Update `apt-get`, the Linux package manager, and install the relevant system packages.
3. Upgrade `pip`, the Python package manager.
4. Change the working directory to `/app`.
5. Copy the `requirements.txt` file into the image and use `pip` to install all of the required Python packages.
6. Copy the relevant folders into the image.
7. Update the `PYTHONPATH` so that we can import functions that we write from our `/app` directory.
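
The middle of the Dockerfile is not reproduced in this excerpt. Based on steps 2 to 6 above, it might look something like the following sketch - the exact package names and folder list are assumptions:

```
# <2> update apt-get and install system packages (exact packages assumed)
RUN apt-get update && apt-get install -y python3 python3-pip

# <3> upgrade pip
RUN pip3 install --upgrade pip

# <4> set the working directory
WORKDIR /app

# <5> install the Python requirements
COPY /requirements.txt /app
RUN pip3 install -r requirements.txt

# <6> copy the relevant folders (folder names assumed)
COPY /notebooks /app/notebooks
```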
You can see how the Dockerfile can be thought of as a recipe for building a particular run-time environment. The magic of Docker is that you do not need to worry about installing a resource-intensive virtual machine on your computer - Docker is lightweight and allows you to build an environment template purely using code.
A running version of an image is called a *container*. You can think of the image as a cookie cutter that can be used to create a particular cookie (the container). There is one other file that we need to look at before we finally get to build our image and run the container - the `docker-compose.yaml` file.
## 🎼 The docker-compose.yaml file
Docker Compose is an extension to Docker that allows you to define how you would like your containers to run, through a simple YAML file called `docker-compose.yaml`.
Let's now take a look at the Docker Compose YAML file.
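
The YAML file itself is not reproduced in this excerpt. A minimal sketch consistent with the numbered notes below might look like this - the exact folder mappings, port variables, and Jupyter command are assumptions:

```
version: '3' #<1>
services: #<2>
  app: #<3>
    build: . #<4>
    stdin_open: true #<5>
    tty: true
    volumes: #<6>
      - ./data:/app/data
      - ./notebooks:/app/notebooks
    ports: #<7>
      - "${JUPYTER_PORT}:${JUPYTER_PORT}"
    env_file: .env #<8>
    command: jupyter lab --ip=0.0.0.0 --port=${JUPYTER_PORT} #<9>
```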
1. This specifies the version of Docker Compose to use (currently version 3).
2. Here, we specify the services we wish to launch.
3. We only have one service, which we call `app`.
4. Here, we tell Docker where to find the Dockerfile (the same directory as the `docker-compose.yaml` file).
5. This allows us to open up an interactive command line inside the container, if we wish.
6. Here, we map folders on our local machine (e.g. `./data`) to folders inside the container (e.g. `/app/data`).
7. Here, we specify the port mappings - the dollar sign means that the ports specified in the `.env` file will be used (e.g. `JUPYTER_PORT=8888`).
8. The location of the `.env` file on your local machine.
9. The command that should be run when the container starts - here, we launch JupyterLab.
## 🧱 Building the image and running the container
We're now at a point where we have everything we need to build our image and run the container. Building the image is simply a case of running the command shown below in your terminal, from the root folder.
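
The build command itself falls outside this excerpt. By analogy with the GPU variant later in this section, it is presumably the default Docker Compose build, run from the root folder:

```
docker-compose build   # build the image from the Dockerfile
docker-compose up      # run the container (this starts the Jupyter server)
```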
Congratulations! You now have a functioning Docker container that you can use to start working through the Generative Deep Learning codebase! To stop the Jupyter server, use `Ctrl-C`, and to bring down the running container, use the command `docker compose down`. Because the volumes are mapped, you won't lose any work that you save whilst working in the Jupyter notebooks, even if you bring the container down.
## ⚡️ Using a GPU
The default `Dockerfile` and `docker-compose.yaml` files assume that you do not want to use a local GPU to train your models. If you do have a GPU that you wish to use (for example, you are using a cloud VM), I have provided two extra files, `Dockerfile-gpu` and `docker-compose-gpu.yaml`, that can be used in place of the default files.
For example, to build an image that includes support for GPU, use the command shown below:
```
docker-compose -f docker-compose-gpu.yml build
```
To run this image, use the command shown below:
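
The run command itself is cut off at the end of this excerpt; by analogy with the build command above, it is presumably:

```
docker-compose -f docker-compose-gpu.yml up
```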