My Introduction to Docker

My team has decided to transition from Windows Services to Docker Containers. We’ll move to Linux and make all our new systems containerized. Very futuristic!

I was in charge of exploring the Docker world from scratch and figuring out how easy the transition would be.

For this POC I decided to build the new system I was in charge of directly on Docker and let it serve as the proof of concept. I learned a lot from working this way, and as of today, the system is production ready.

I can say this whole POC was a success and now I want the whole team to be Docker-ready.

But what does that mean? I decided to make this list of must-know features to make yourself what I call a Docker Developer. These include the ability to build your own image, to debug it inside and out, and more. All written from a developer’s eye.

This can be seen as a Cheat Sheet of Docker commands, but it is intended to be more than that. Most cheat sheets are just a collection of all the common commands with a short description of what they do. I will attempt to make a more complete cheat sheet, explaining what each command does and when and why you will need it.

Let’s make sure we are talking the same language

Docker

“Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers” – Wikipedia.

What does this mean? Docker allows you to create an enclosed software package that will run on top of your host device’s OS (kernel). This is lighter than running a whole VM on top of your OS, and it’s more straightforward than installing the software directly on your OS. Moreover, since the package runs on top of your OS rather than being installed directly on it, the same package can be used on Windows, Linux and macOS. Say bye-bye to “it works on my machine”!

Image

A closed software package. Images are built in layers, where the first layer is always the base “OS” layer. You can choose whether you want it to be Linux based or Windows based. If you use a Windows based image, you will be able to run it only on Windows.

Every image is identified by a name, a version (tag) and a signature. The default version is “latest”, and that is the image Docker will use if you don’t specify a version.
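
For example, using the official Python image from Docker Hub:

  python:3.12   (the image named "python" at version "3.12")
  python        (no version given, so this is the same as "python:latest")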

Container

A container is an image in a running state. It can actually be running or it can be stopped. Changes you make inside a container are not saved to the image; the next time you start the container from scratch, they will be gone.
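
A quick way to see this for yourself, using "docker run" with the small official Alpine image (the container name "demo" is just an example): a file written inside one container is gone once you start a fresh container from the same image.

  docker run --name demo alpine sh -c "echo hello > /tmp/note.txt"
  docker rm demo
  docker run --rm alpine cat /tmp/note.txt    (fails: the file does not exist in the new container)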

The need-to-know commands

“docker images” (or “docker image ls”)

The first command I used. It shows all the images you have built, loaded or pulled: your local image store. If you have just installed Docker, you’ll see an empty list.
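
The output looks roughly like this (the values here are illustrative only):

  REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
  python       3.12     0123456789ab   2 weeks ago   1.02GB
  alpine       latest   abcdef012345   3 weeks ago   7.8MB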

“docker pull”

Used to pull images from a Docker registry. You most probably won’t use this command directly; it usually happens implicitly when you build an image from a Dockerfile (through the “FROM” instruction).
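
For example, the explicit pull of the official Python image:

  docker pull python:3.12

and the implicit one, triggered by the first instruction of a Dockerfile:

  FROM python:3.12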

“docker build .”

Tells Docker to use the Dockerfile in the current directory to create a new image. You can add “-t <name>:<version>” to tag the image with a name and a version.
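
A minimal sketch, assuming a hypothetical Python microservice whose entry point is main.py. The Dockerfile:

  FROM python:3.12
  WORKDIR /app
  COPY . .
  CMD ["python", "main.py"]

And the build command, tagging the image as "my-service" version "1.0":

  docker build -t my-service:1.0 .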

“docker ps”

Much like the Linux “ps” command, this command shows the currently running containers. It is useful for finding out the names of the containers that are running and how long they have been running.
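
By default only running containers are listed; adding "-a" shows the stopped ones as well:

  docker ps      (running containers only)
  docker ps -a   (running and stopped containers)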

“docker logs <container name>”

Docker routes all stdout and stderr of the running application to a logging engine. By default it is routed to a log file and can be viewed by running this command with the name of the desired container.
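
For example, against a hypothetical container named "my-service":

  docker logs my-service              (everything logged so far)
  docker logs -f my-service           (keep following the log, like "tail -f")
  docker logs --tail 100 my-service   (only the last 100 lines)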

“docker exec -it <container name> /bin/bash”

This is a really helpful Docker command. It allows you to open a bash shell inside a running container. Let’s say you want to see the inner configuration of the image, the inner filesystem or even the environment variables; you can attach yourself to the container and “cat” the configuration file, “ls” the filesystem and more.
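
For example, against a hypothetical container named "my-service" (the paths inside it are made up for illustration):

  docker exec -it my-service /bin/bash
  ls /app                  (look around the inner filesystem)
  cat /app/config.json     (read the inner configuration)
  env                      (list the environment variables)
  exit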

“docker save -o <filename>.tar <image name>” (and its brother “docker load -i <filename>.tar”)

Let’s say you have built your image and you want to move it to another location. You will most probably connect to a local or remote image registry, upload it there and then “pull” the image from the other location. But what if you need to use the image in an offline scenario? You can “docker save” the image as a tar file and then “docker load” it at the other location.
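
A sketch of the offline flow, again with a hypothetical my-service:1.0 image:

  docker save -o my-service.tar my-service:1.0    (on the machine that has the image)
  docker load -i my-service.tar                   (on the offline machine, after copying the tar file over)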

Some architecture decisions made when working on Docker

Building the new system while knowing it would run on Docker led to some architecture guidelines.

  1. No monoliths! Working on Docker really pushes you towards microservices. When you start to work with Docker and realize how easy it is to build a new image, duplicate it and change it, you realize that a monolithic system is simply never the better choice.

  2. You really focus on the logic. You don’t focus on how and where to write the logs, on where the configuration will be, or on other stuff that distracts you from just writing the logic of the microservice. You know that Docker will take care of that.

  3. You can use whatever language and packages you want! Just make sure the image you’ll build contains them and you are good to go! In Dev/Test/Prod environments you’ll just need to make sure Docker itself is installed and voila!

  4. No more keeping track of which port you haven’t used yet. Since Docker bridges your host machine to your containers, all of your microservices can listen on the same port inside their containers; you just map each one to a different host port (see the sketch right after this list).
    And more…
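
As a sketch of the port idea from point 4, assume two hypothetical services that both listen on port 8080 inside their containers; only the host-side ports need to differ:

  docker run -d -p 5001:8080 orders-service:1.0
  docker run -d -p 5002:8080 billing-service:1.0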

Designing an architecture for a containerized environment is different, and if you are an architect, you must realize this from the beginning and take it into account. It can save you hours of overthinking and over-designing.
