A sneak peek into Docker for Web devs | Part-1
Welcome to this first section on learning Docker for Web developers.
Story so far :- Let’s take a look at a real-world use case. Say we have two teams, Team A and Team B. As of today, the teams share the same infrastructure. In the past their requirements allowed them to use the same versions of the JDK and Tomcat, so they were able to expose different ports and get their applications running in production on the same infrastructure. So they never had any issues.
Current twist in story :- Now the problem is this :-
- Team A has engineers who want to develop software on Mac OSX. However, their application in production is going to run on Ubuntu 16 with JDK 1.7 and Tomcat 7, and the QA team doesn’t like the idea of shipping software that is developed on Mac OSX but tested on Ubuntu.
- Team B, on the other hand, wants to upgrade to JDK 1.8 and Tomcat 8 in the QA environment to test their app.
To summarise the problem :- how can we run two versions of the JDK, 1.7 and 1.8, on the same hardware? And beyond that, the software is being developed on one hardware and operating system, while the environment it runs on in production is completely different.
Solution to problem :- Let’s take a look at how Docker tries to resolve this issue. Here :-
- The infrastructure or the server sits on the bottom.
- On top of the infrastructure, the host operating system is installed. It could be Windows, Mac OSX, Linux, etc.
- On top of this sits the docker-engine, and the docker-engine is responsible for spawning the various containers.
What is a Docker-Engine :- The Docker engine is a client-server application: a long-running daemon process that builds images and runs containers, a REST API for talking to that daemon, and the docker command-line client that we shall be using throughout this blog.
What is a Docker Container :- Imagine a container as being as good as a new virtual PC, but with its own memory and processing ability; containers can mount their own volumes from the operating system and can share volumes too. These containers have their own :-
- Processing ability.
- Memory.
- Volumes mounted to the OS.
These containers can interact with the host operating system or the files on the host operating system.
So as you can see in the picture the docker-engine sits on top of the host machine and is responsible for spawning multiple containers.
- First Container runs on the port 80 hosted on Tomcat-7 and JDK 1.7.
- Second Container runs on the port 80 hosted on Tomcat-8 and JDK 1.8.
Now, you might be wondering how port 80 is shared between these two containers. Each container gets its own isolated network stack, so both can bind to port 80 internally; remember, these two behave like virtual machines, which means they are as good as a PC running inside a PC.
Back to Solved story :-
- Team A installs Docker on their local Mac OSX and develops software using JDK 1.7 and Tomcat-7, and this environment is going to be consistent across all the environments.
- Team B gets to use JDK 1.8 and Tomcat-8 even though they share the servers with Team A.
- QA, on the other hand, is happy, as they get to test an application which was developed using the same OS as that of prod.
Concluding Docker Intro :-
Installation of Docker on MAC :- We can observe here that both the docker server and the docker client have been installed on the machine. The below output comes from querying the docker-daemon.
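For readers following along without the screenshot, a quick way to confirm the same thing from the terminal is sketched below (it assumes Docker Desktop is already running):

```shell
docker version          # shows both the Client and the Server (Engine) sections
docker info             # daemon-level details: CPUs, memory, storage driver
docker-compose version  # confirms docker-compose is installed as well
```

If the Server section of `docker version` is missing, the client is installed but the daemon is not running.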
Now, let’s head to the docker preferences to note that we have allocated 2 GB of RAM and 4 CPUs to our docker.
Let’s head to our terminal and install our first container on it :-
Please also note that docker-compose has been installed on our machine as well.
Use-Case-1 :- Let’s see our first use-case to host the website on Apache httpd web-server. Below is what we are going to do :-
Below is how the flow for this use-case would look like :-
Here is how our static website looks, which we shall be hosting inside a docker-container :-
Step #1 :- We have the below as our Dockerfile :-
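For readers without the screenshot, a minimal sketch of such a Dockerfile might look like this — the `website` folder name and the `httpd.conf` override are taken from the later steps of this blog, while the exact base-image tag is an assumption:

```dockerfile
# Base image: the official Apache httpd image from DockerHub (tag is an assumption)
FROM httpd:2.4

# Override the default Apache config with our own copy
COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf

# Copy the locally developed static website into Apache's document root
COPY ./website/ /usr/local/apache2/htdocs/aditya-web/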
Step #2:- docker build
This command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. The --tag, -t option names the image, optionally with a tag, in the 'name:tag' format.
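Assuming the Dockerfile sits in the current directory, the build invocation might look like this (the image name aditya-web:1.0 is an assumption used throughout these sketches):

```shell
# '.' is the build context; -t names and tags the resulting image
docker build -t aditya-web:1.0 .
```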
Step #3:- docker images
This command docker images will show all top-level images, their repository and tags, and their size. Docker images have intermediate layers that increase reusability, decrease disk usage, and speed up docker build by allowing each step to be cached. These intermediate layers are not shown by default. The SIZE is the cumulative space taken up by the image and all its parent images; this is also the disk space used by the contents of the Tar file created when you docker save an image. An image will be listed more than once if it has multiple repository names or tags, but this single image (identifiable by its matching IMAGE ID) uses up the SIZE listed only once.
Step #4:- docker inspect
This command provides detailed information on constructs controlled by Docker. By default, it will render the results as a JSON array.
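A handy variant uses the --format option with a Go template to pull out a single field instead of the full JSON dump (the image name is the assumed one from the build step):

```shell
# e.g. show only the ports the image exposes
docker inspect --format '{{.Config.ExposedPorts}}' aditya-web:1.0
```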
Step #5:- docker run
This command creates a container from a given image and starts the container using a given command. It is one of the first commands you should become familiar with when starting to work with Docker.
# -i : Keep STDIN open, even if not attached.
# -d : detached mode; note that if you don’t run in detached mode, the life of the container will be tied to the life of the terminal in which you are executing it.
# -p : the host-port to container-port mapping; if you substitute it with -P, you will get a random host port allocated by docker.
# --name : the name of the container.
Here, we shall be launching 2 different containers from the same underlying docker image :-
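Concretely, the two launches might look like this — the image name carries over from the build-step sketch and the container names web-1 and web-2 are assumptions, while host ports 5555 and 5556 are the ones used in the next step:

```shell
# host port 5555 -> container port 80, on Tomcat/Apache inside the container
docker run -itd -p 5555:80 --name web-1 aditya-web:1.0
# second container from the same image, on host port 5556
docker run -itd -p 5556:80 --name web-2 aditya-web:1.0
```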
Step #6:- docker ps -a
This command shows all the containers we currently have, including stopped ones :-
Now, the above output indicates that the containers are running well and we can access them. So, let’s head to our browser and try to access the 2 ports: 5555 & 5556. Recall that in the previous steps we exposed these 2 host ports, which in turn map to port no. 80 of the respective containers.
Step #7:- docker logs -ft <container_id>
This command is usually helpful in debugging scenarios: -f follows the log output and -t shows timestamps. Now, as we access the website, it generates the logs :-
Step #8:- docker inspect <container_id>
This command again is helpful from a debugging perspective, in order for us to inspect the particular container :-
Step #9:- docker stop <container_id>
This command stops the currently running container. Let’s stop the container which was hosting the website on host port 5555 :-
Let’s observe the website now; it is no longer accessible, for the very obvious reason that we have stopped the container which was powering it :-
Step #10:- docker exec -it <container_id> bash
This command helps us get a shell inside the docker container. Remember from our previous discussions that our container behaves like a virtual computer on top of our actual computer.
Step #11:- Now, we know that our container is a Linux-based computer, and now that we have logged into this newly launched container, let’s first install the package that provides ‘ps’, so that we can see which processes are currently running inside this container :-
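The official httpd image is Debian-based, so the package that provides ps is procps; inside the container’s shell the installation might look like this:

```shell
# run inside the container's shell, not on the host
apt-get update && apt-get install -y procps
```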
Step #12:- Next, let’s inspect all the processes which are currently running in this container :-
Now, remember that for every image you download from DockerHub, or even one you create yourself, process ID 1 inside a container launched from that image corresponds to the image’s default startup command. Thus, in our case, Apache httpd started up as process number one.
Step #13:- Next, let’s inspect our beautiful container further. It’s good to revisit our Dockerfile; remember we placed an instruction to copy our static website from our machine (system) to the docker-container. In the below “Dockerfile” @ line #12, we are instructing that the folder website (this folder contains our locally developed static website’s artefacts) should be copied to the directory “/usr/local/apache2/htdocs/aditya-web” inside our container.
So now, we change our directory to the aforementioned path inside our docker container :-
Let’s also see the entire folder structure, that it copied for us.
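Inside the container’s shell, those two steps might look like this:

```shell
cd /usr/local/apache2/htdocs/aditya-web
ls -lR   # list the copied website artefacts recursively
```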
We had also overridden the file “httpd.conf”; let’s have a look at it too. Note that the recent timestamp on this file indicates that it was changed lately.
Let’s also note that we had initially allocated 4 CPUs, 2 GB of RAM and 59 GB of HDD to our docker. Let’s have a look at the same, which confirms our initial allocations :-
Step #14:- docker rm <container_id>
This command helps us remove a particular container. Note that in order to remove a container, we first need to stop it; only then is removal allowed.
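With the container name assumed in the earlier sketches, the stop-then-remove sequence might look like this; the -f flag force-removes a running container in one step:

```shell
docker stop web-1
docker rm web-1
# or, in one step:
docker rm -f web-1
```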
Step #15:- docker rmi <image_id>
This command helps us remove a particular image. Note that in order to remove an image, we need to make sure that no container is being powered by that image.
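Again with the image tag assumed earlier:

```shell
# fails if any container (even a stopped one) still references the image
docker rmi aditya-web:1.0
```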
That’s all for this blog. We shall see further things in the next blog.
References :-
- https://httpd.apache.org/ABOUT_APACHE.html
- https://www.docker.com/
- https://docs.docker.com/engine/reference/commandline/container/
- https://docs.docker.com/engine/reference/commandline/inspect/
- https://docs.docker.com/engine/reference/run/
- https://docs.docker.com/engine/reference/commandline/images/
- https://docs.docker.com/engine/reference/commandline/build/
- https://code.visualstudio.com/download
- https://stackoverflow.com/questions/26982274/ps-command-doesnt-work-in-docker-container
- https://docs.docker.com/engine/reference/commandline/ps/