Docker Images: What You Need To Know
Nowadays, most open source tools offer Docker as a way to run them, whether for testing or in a production environment. In this tutorial, we will go through the basics of running Docker Images.
Prerequisites
You need to have Docker installed on your device. It is preferred that you run the mentioned commands on a Linux system (e.g., Ubuntu), since it is easier to work with than other operating systems.
You also need to know the basics of Nginx because that is the tool that we will be using in this tutorial to explain how to run Docker Images.
What Are Docker Images?
Docker Images are basically like “templates” or “blueprints” for what a Docker Container will need in order to run the configured app. For example, a Docker Image for a web-based tool might use a base image like Ubuntu, install the necessary dependencies, and then run the application. The Docker Image is based on a Dockerfile that is written by a developer.
In case you do not know the difference between a Dockerfile, a Docker Image, and a Docker Container:
Dockerfile
The Dockerfile is a text file that lists the steps needed to build an image for the tool. The piece of code below is a simple Dockerfile that runs Nginx with a custom file:
FROM nginx:latest
COPY ./index.html /usr/share/nginx/html/index.html
The first line, FROM nginx:latest, pulls an existing Docker Image (the latest version of Nginx, in this case) to build on top of.
The second line describes the first (and only, in this case) action to run on top of the Nginx image we pulled. It copies a local file, ./index.html, into the container that will eventually be built, at the path of the default file that Nginx serves with its default configuration.
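For a slightly fuller picture, a Dockerfile can also install dependencies and declare which port the application listens on. The sketch below is hypothetical (the extra package is purely illustrative), but every instruction shown is standard Dockerfile syntax:

```dockerfile
# Start from the latest official Nginx image
FROM nginx:latest

# Install an extra dependency (illustrative; curl is just an example)
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

# Copy our local page over the default one
COPY ./index.html /usr/share/nginx/html/index.html

# Document that the application inside listens on port 80
EXPOSE 80
```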
Docker Image
A Docker Image is basically the Dockerfile, but built and ready to be used. We can take the example above and build it (make sure you have an HTML page named index.html next to the Dockerfile so that it works properly):
docker build -t my-nginx .
This builds the Dockerfile into an image that is ready to be used. You can view the image (along with any other images you previously pulled) by running:
docker images
The result should be similar to this:
REPOSITORY TAG IMAGE ID CREATED SIZE
my-nginx latest 0dcb1e22a520 11 minutes ago 187MB
Now you have built a very simple Docker Image.
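The index.html used in the build can be any minimal page; for example:

```html
<!DOCTYPE html>
<html>
  <head><title>My Nginx</title></head>
  <body>
    <h1>Hello from my custom Nginx image!</h1>
  </body>
</html>
```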
Docker Container
A Docker Container is basically a running version of a Docker Image. It is an instance that can be used almost like a normal application. We can start a container for our custom-made image by running the command below:
docker run -p 8888:80 my-nginx
Now open your browser on the same device and go to localhost:8888. You should see the HTML file you wrote previously, served from its own container.
You can view all your running containers by running:
docker ps
You can also view all containers, including stopped ones, by adding -a:
docker ps -a
Recap
By now, the difference between a Dockerfile, a Docker Image, and a Docker Container should be clear:
- A Dockerfile lists the steps needed to build a specific tool (or tools) into an image.
- A Docker Image is the built, ready-to-use result of a Dockerfile.
- A Docker Container is a running instance of a Docker Image.
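Putting the three together, the whole lifecycle from the earlier example can be sketched as a short sequence of commands (assuming a directory containing the Dockerfile and an index.html):

```shell
# Build the Dockerfile in the current directory into an image
docker build -t my-nginx .

# List local images; my-nginx should appear
docker images

# Run a container from the image, mapping host port 8888 to container port 80
docker run -p 8888:80 my-nginx
```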
Now we can go through how to run a Docker Image effectively.
Ports and Volumes
The two things that matter most when running a Docker Image are port mapping and volume mapping (there are other options as well, but they are beyond the scope of this tutorial). I will go through each of them in its own subsection below.
Port Mapping
Usually, a container's application listens on a specific port, which depends on the image being run. For example, an Nginx container listens inside the container on port 80 by default. That means that if you get a shell inside the container and run curl localhost, you will get the response from the Nginx instance running internally.
What would happen if you tried to run curl localhost from outside the container? It would not work.
That is because the application (Nginx) is running on port 80 only inside the container, and not outside. If you wish to access the Nginx running inside the container from the outside, you would have to do some port mapping.
Port mapping exposes an application running inside the container on a port of the host.
For example, if you want to run Nginx on a port that can be accessed from outside the container, you would map a port from the host (the main system where Docker is running) to the container (on port 80).
You can do so by adding the -p flag to indicate port mapping, followed by the host port you want to use, a colon (:), and then the port the application listens on inside the container. You can check out the example below:
docker run -p 8888:80 nginx
In the example above, we run an instance of nginx as a container and specify the port mapping -p 8888:80. This exposes port 8888 on the host system and forwards it to port 80 inside the container.
If you run the command above, curl localhost:8888 should return the page successfully.
Volume Mapping
Sometimes, you might need to use a specific set of configuration files, or maybe you need to save the data generated inside the container on your host system, so that even if the container is deleted, you will still have the data.
Volume mapping allows you to map a directory on the host system to a path inside the container.
For example, say you want to deploy an Nginx container that serves a specific HTML file. You can first create a directory containing an index.html file with some content, then map that directory to the container directory Nginx serves HTML from. You can do this by running:
docker run -v ./:/usr/share/nginx/html -p 8888:80 nginx
In Nginx, the HTML file being served lives at /usr/share/nginx/html inside the container. Our own index.html is in the current directory, ./. We tell Docker that the two paths should act as one and share their files. Our index.html therefore replaces the default one inside the container, so if we open the browser at localhost:8888 we will see our own HTML file deployed.
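One useful property of this kind of mapping (a bind mount) is that it stays live: edits to the file on the host are visible inside the container immediately, with no rebuild. A quick way to see this, assuming the container from the command above is still running:

```shell
# Change the page on the host...
echo "<h1>Updated!</h1>" > index.html

# ...then request it again; the container serves the new content right away
curl localhost:8888
```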
Another common example is when we want to persist data to keep it on the host, even if the container is deleted. We will be using MySQL for this example.
We will first create an empty directory named mysql-data. Then we run the command below:
docker run -e MYSQL_ROOT_PASSWORD=password -v ./mysql-data:/var/lib/mysql -p 9000:3306 mysql
Note: The -e MYSQL_ROOT_PASSWORD=password part is required when running a MySQL container; it sets the root password as an environment variable (environment variables are out of the scope of this article).
We can also see -v ./mysql-data:/var/lib/mysql, which maps the directory we created previously to /var/lib/mysql, the directory where MySQL stores its data.
After we run the command, we will see that the local mysql-data directory has filled up with files created by MySQL.
If you are familiar with MySQL, you can use a client to connect through port 9000 and create a database to test it out.
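For example, with the standard mysql command-line client installed on the host, connecting through the mapped port would look like this (the password is the MYSQL_ROOT_PASSWORD we set above):

```shell
# Connect to the containerized MySQL through the mapped host port 9000
mysql -h 127.0.0.1 -P 9000 -u root -p

# Then, inside the client, create a test database:
# CREATE DATABASE testdb;
```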
Since we mapped the volume locally, we do not have to worry about losing the data if the container gets deleted. You can try it: run docker ps to find the container ID, then stop and delete the container:
# To Stop The Container
docker stop <CONTAINER-ID>
# To Delete The Container
docker rm <CONTAINER-ID>
You will notice that the mysql-data directory still has all the files the container created.
You can now create another container using the same command:
docker run -e MYSQL_ROOT_PASSWORD=password -v ./mysql-data:/var/lib/mysql -p 9000:3306 mysql
It will use the same directory as before, and any data created in the previous container will be available in the new one.
If you created a database previously, you can use the client again and see that the database is still there.
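If you prefer not to install a client on the host, you can also check from inside the new container with docker exec; the query shown is illustrative:

```shell
# Open a mysql client inside the running container (replace <CONTAINER-ID>)
docker exec -it <CONTAINER-ID> mysql -u root -p

# Inside the client, list databases; anything created before the old
# container was deleted should still appear:
# SHOW DATABASES;
```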
Conclusion
Hopefully, you now know how to use Docker Images. I wrote this tutorial because I noticed that a lot of great tools can be run and tested very easily using Docker.
I personally run Docker Images for tools I use often. For example, I use MailHog as a Docker container. I also run my databases, including MySQL, PostgreSQL, and MongoDB, as Docker containers.
I can create new instances and delete them very easily using Docker instead of installing the full application locally or having to build it from scratch.
Thank you for reading; hopefully you benefitted from this tutorial. See you in the next one!