More often than not, I meet developers who have been in the industry for a long time and have a good grasp of their technology stack, favourite shortcut commands, and other productivity tools. The only missing piece of the puzzle is deployment: knowing what tools are used, and how your code gets automatically deployed along with all the dependencies you've added locally on your system. The heart of a CI/CD pipeline is Docker, and this article is dedicated to demystifying some of the buzzwords of the DevOps community and making Docker easier to use and implement.

Fun Fact: Some brands get so ingrained in our minds that we start recognising the product by the brand name, e.g. calling any detergent "Surf", thanks to the brand Surf Excel. Something similar happened here: the underlying technology is actually containers, but the tool is so popular that we started calling it Docker. In this article we'll be using the two terms interchangeably.

Now coming to the question: why are containers used at all? Have you ever heard the phrase, "But, it works on my machine"? If you haven't yet, you should really start coding a lot more, go through this painful experience, and bond with other great developers. Container technology was developed to solve exactly these two common problems:

  • Devs want to run the application they're developing with the new code they just wrote, and test that things run fine without breaking anything for others.
  • Devs want to make sure that other people's broken code does not interfere with their workflow.

What does Docker do for me?

Container Image: A container image is immutable (it cannot be changed once created) and defines a version of a single service/application together with its dependencies (libraries, runtimes, etc.). To test code through different development cycles, we use the same container image everywhere: dev, test, staging, and production.

  • A container image pins the application/service version, the library versions, and the versions of all runtime dependencies the application needs.
  • A container runs an image in an isolated environment.
  • Multiple containers can run side-by-side within a single PC/VM.
Credits: Jeffrey Richter
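
As a quick sketch of that immutability, using the public node:20-alpine image from Docker Hub as an example:

docker pull node:20-alpine   # fetch a pinned image version once
docker run --rm node:20-alpine node -e "console.log('same bits in dev, test, staging, and prod')"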

If you've read my previous article Cloud Computing For Dummies, you must already have a fair idea of what a Virtual Machine (VM) does compared to running things directly on your own system. Unlike VMs, containers aren't bulky, i.e. they do not consume a lot of resources (CPU, RAM, storage) to execute, and they are extremely easy to load and work with. You can configure different container images for an application depending on the app version, or on the version of a library or dependent service you need, and then boot a container that uses the given container image.
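
For instance, two runtime versions can live side by side without installing either on the host; here node:18-alpine and node:20-alpine are public images standing in for two versions of a dependency:

docker run --rm node:18-alpine node --version   # one container pinned to Node 18...
docker run --rm node:20-alpine node --version   # ...another on Node 20, running side by side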

Compared to ordinary processes on the same machine, which share memory, containers provide a level of isolation; yet compared to VMs they are far less restrictive, since they share the hardware and software resources of the host.

Credits: Jeffrey Richter

Apart from the other benefits, you can always just kill a container and recreate it whenever you need it again. This frees up all the extra space taken by keeping multiple versions of the same application and its libraries on a single PC, and makes development hassle-free.
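
A minimal sketch of that throwaway workflow (nginx:alpine stands in for your own image):

docker run -d --name web nginx:alpine          # start a container
docker rm -f web                               # kill and remove it
docker run -d --name web nginx:alpine          # recreate it in seconds from the cached image
docker rm -f web && docker rmi nginx:alpine    # tear everything down and reclaim the disk space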

Hyper-V containers are specific to Windows, while container technology itself exists on all operating systems. Hyper-V provides a way to not even share the OS kernel, and hence you can run a Linux container on a Windows machine, which is otherwise impossible since regular containers share the kernel of the host machine's OS. To use containers at scale, all you need is to tell your orchestrator to start, say, 10 containers of container image 1 on your Linux machines, and it'll do that for you.
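
A quick sketch of what this enables: on a Windows machine running Docker Desktop (which boots a small Linux VM under the hood for exactly this purpose), a Linux container works just fine:

docker run --rm alpine uname -a   # a Linux container on a Windows host: prints a Linux kernel version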

How does Docker work?

Credits: Jeffrey Richter

Local Registry: To run a container, a PC/VM needs to have the container image stored in its own file system, known as the local registry. The first time a container image is used on a VM, the Docker daemon pulls the container image from Docker Hub and loads it into the local registry. The next time the same type of container is required, container creation becomes extremely fast, since the image is already present on the system.
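
A quick sketch of this caching behaviour, using the tiny public alpine image:

docker run --rm alpine echo hi   # first run: the daemon pulls the image from Docker Hub
docker run --rm alpine echo hi   # second run: the image is cached locally, so it starts instantly
docker image ls                  # inspect what's sitting in your local registry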

Orchestrator: This is the service that instructs the Docker daemon to start a particular container on a given system (PC/VM). The concept of orchestrators exists even outside of containerization; in general, this is the service responsible for optimal resource utilization (moving services/applications between machines depending on their usage), auto-scaling services depending on load, and restarting killed services.
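
As a concrete sketch, Docker's built-in orchestrator, Swarm, can do exactly this (nginx:alpine again stands in for your own image):

docker swarm init                                             # turn this machine into a one-node Swarm
docker service create --name app --replicas 10 nginx:alpine   # "start 10 containers of this image"
docker service scale app=20                                   # scale out when the load grows
# if a replica is killed, Swarm automatically starts a replacement to keep the count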

Docker Daemon: This usually listens on ports 2375 (unencrypted) and 2376 (TLS). It checks whether the container image is loaded in the system's local registry; if not, it fetches the image from Docker Hub, loads it, and then spins up the container.
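
This is also how one machine's Docker client can drive another machine's daemon; a sketch, where remote-host stands in for a machine whose daemon is listening on that port:

export DOCKER_HOST=tcp://remote-host:2375   # point the local client at the remote daemon
docker ps                                   # lists the containers running on remote-host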

Now, if for any reason the container crashes, the orchestrator can tell the Docker daemon to spin up another container. This time the daemon will check the local registry, find the image there, and spin up a container directly.
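
Even without a full orchestrator, you can ask the daemon itself to do this respawning for a single container; a minimal sketch (redis:alpine is just a stand-in image):

docker run -d --restart always --name cache redis:alpine   # the daemon respawns this container whenever it dies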

How to implement Docker?

  1. Let's add a small Node.js file to your system. Create a new file, app.js, with the content console.log("Hello.js!");
  2. Now add another file, named Dockerfile (no file extension). Also install the Docker extension for your IDE. A sketch of the Dockerfile itself follows below.
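
3. Fill in the Dockerfile. Here is a minimal sketch matching the walkthrough below; the node:alpine base image is an assumption (an Alpine Linux image that ships with Node.js, which our CMD needs), so pick any Node-capable image you like:

# Alpine-based image with Node.js preinstalled (assumed base image)
FROM node:alpine
# copy the current directory into the image at /app
COPY . /app
# run subsequent commands (and the app) from /app
WORKDIR /app
# command executed when a container starts
CMD node app.js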

Here, we are using alpine as our Linux distribution since it is small in size. You are free to choose any other image of your choice from the link.
COPY . /app copies all the content of the current directory into the file system of this image, under the directory /app.
WORKDIR /app then sets /app as the working directory for your container application.
CMD node app.js specifies the command that starts the application when the container runs.

4. In the command line, go to the current working directory and type docker build -t hello-docker . (note the trailing dot). This command builds the Docker image based on the Dockerfile we just created, bundling the alpine distribution and your code into a container image.

5. docker image ls lists your local images, so you can see the one you've just built.

6. docker run hello-docker runs the container image you've built. If an image isn't already in the local registry, Docker automatically pulls it from Docker Hub first.

These are the 6 simple steps to create and run Docker images on your local machine.

How to use a Docker image on a different machine?

  1. Push your container image to your Docker Hub account (see the sketch after this list).
  2. Go to link, sign up, add a new VM with a Linux distribution, and install Docker there.
  3. Use docker pull codewithmosh/hello-docker to pull your Docker image.
  4. Use docker image ls to see the image that you've just pulled.
  5. Use docker run codewithmosh/hello-docker and it should give you the same results.
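
Step 1's push, as a minimal sketch (your-username is a placeholder for your own Docker Hub username; codewithmosh above is Mosh's account):

docker login                                         # authenticate with your Docker Hub account
docker tag hello-docker your-username/hello-docker   # name the local image under your account
docker push your-username/hello-docker               # upload it so any machine can pull it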

Huge shoutout to Jeffrey Richter for providing in-depth knowledge on the topic. Please refer to the link to learn more about it.
Another shoutout to Mosh for creating fun videos on Docker implementation details. Please refer to this video in case you're facing any runtime issues.

Hope this article helps you get a good understanding of the concept. If you liked the content, throw in some claps. P.S. Medium lets you clap up to 50 times for an article, to show your love and support.

