Containers have gained a lot of popularity in the last couple of years. Today, organizations increasingly use containerization to build new applications and to modernize existing applications for the public cloud. However, many IT engineers are still trying to understand what containerization actually does, when to use it, what its benefits are, and how it fits into DevOps.

This blog explains the basics of containers and much more. 

Before we start, if you are new to DevOps, get the basics down first. Check out this entry-level blog on DevOps.

Blog on Basics of DevOps

Let us first understand a few key terms that will be used in this blog.

  • Virtualization
  • Hypervisor
  • Containerization

What is Virtualization?

Imagine a scenario where you want to run Windows and Linux on the same hardware. Back in the day, companies would run each OS on a dedicated physical server, which made running multiple operating systems an expensive endeavor: you not only had to buy a lot of physical servers, you also had to spend more on their operation and maintenance.

In comes Virtualization!!

Virtualization allows you to separate the operating system from the underlying hardware, which means you can run multiple operating systems such as Windows and Linux, at the same time on a single physical machine.

Virtualization works by running a type of software application called a hypervisor on top of a physical server to emulate the underlying hardware (RAM, CPU, I/O, etc.) so that these resources can be used by multiple, fully isolated virtual machines (VMs) running on top of the hypervisor.

Virtualization, as the name suggests, essentially “fakes” all the hardware that a real machine would have. The software running the virtualized OS acts as a sort of intermediary, intercepting every request the guest makes to the physical devices and handling it on the guest's behalf.

For example, if an application needs to make a network request, it engages the “fake” network card. The intermediary picks up the request, adds some wrapping, and passes it to the real network card; when the response comes back, it knows to route it back to the fake network card.

What is a Hypervisor?

A hypervisor, or virtual machine monitor, is the software or firmware layer that enables multiple operating systems to run side-by-side, all with access to the same physical server resources. The hypervisor orchestrates and separates the available resources (computing power, memory, storage, etc.), aligning a portion to each virtual machine as needed.
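To make this a little more concrete, here is a minimal sketch of what driving a hypervisor from the command line can look like, assuming a Linux host running KVM/libvirt and two already-defined guests named ubuntu-vm and windows-vm (both names are placeholders):

```
# List every VM this hypervisor manages on the one physical server
virsh list --all

# Boot two fully isolated guest operating systems on the same hardware
virsh start ubuntu-vm
virsh start windows-vm

# Show the vCPUs and memory the hypervisor has carved out for a guest
virsh dominfo ubuntu-vm
```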

What is Containerization?

Containerization is a type of virtualization at the application level, which allows for multiple isolated user space instances on a single kernel. These instances are called containers.

To break it down into simple terms, containerization is just about building a piece of software inside a box. This lets you put all your configuration files and dependencies together as a single unit, so you can easily deploy a known-working configuration without caring how the host machine is set up. The program in the container, however, still runs as a regular process on the host machine.
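As a rough sketch of that “box,” here is what a minimal Dockerfile could look like for a small Python web app; the base image, file layout, and app.py entry point are all assumptions made for illustration:

```
# Hypothetical Dockerfile for a small Python web app
FROM python:3.11-slim                  # base image supplies the user space and Python runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt    # dependencies are baked into the image, not the host
COPY . .
CMD ["python", "app.py"]               # the process that runs when the container starts
```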

Containers provide a standard method of packaging an application’s code, runtime, system tools, system libraries, and configurations into one executable package. Containers share one kernel (operating system), which is installed on the hardware. Containerization bundles the application code together with the related configuration files, libraries, and dependencies to form a container image, which becomes a real container at runtime through a runtime engine. 
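Assuming the hypothetical Dockerfile above, building the image and turning it into a running container could look like this (the image tag and port are placeholders):

```
docker build -t my-app:1.0 .             # package code, dependencies, and config into an image
docker run -d -p 8080:8080 my-app:1.0    # the runtime engine turns the image into a live container
docker ps                                # the container shows up as just another managed process
```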

All containers on a server use the underlying OS kernel of the host machine. A container runtime (instead of a hypervisor) keeps each container's processes isolated from those of the others while they all share the host OS. In other words, containerization virtualizes the operating system, while virtualization virtualizes the hardware.
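You can see this kernel sharing for yourself with a quick experiment, assuming Docker is installed on a Linux host:

```
uname -r                                     # kernel version reported by the host
docker run --rm alpine uname -r              # a container reports the very same kernel...
docker run --rm alpine cat /etc/os-release   # ...but brings its own user space (Alpine here)
```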

One of the most popular container platforms is Docker. Applications in Docker containers can run across multiple operating systems and cloud environments, such as Amazon ECS, Azure Container Instances, and many more.
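That portability mostly comes down to pushing the same image to a registry that every environment can pull from; a small sketch, with registry.example.com standing in for whatever registry you actually use:

```
docker tag my-app:1.0 registry.example.com/team/my-app:1.0
docker push registry.example.com/team/my-app:1.0
# Amazon ECS, Azure Container Instances, a Kubernetes cluster, etc. can now pull and run the same image
```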


From Virtualization To Containerization

It might come as a surprise that the concept of containerization and process isolation is decades old, but the emergence in 2013 of the open source Docker Engine—an industry standard for containers with simple developer tools and a universal packaging approach—accelerated the adoption of this technology.

Related: AWS DevOps Preparation Guide

Before containers, developers largely built monolithic applications with interwoven components. In other words, the program’s features and functionalities shared one big user interface, back-end code and database.

Below is a side-by-side comparison of virtualization and containerization.

| Feature | Virtualization | Containerization |
|---|---|---|
| Isolation | Provides complete isolation from the host operating system and from other VMs. | Provides lightweight isolation from the host and from other containers, but does not provide as strong a security boundary as a VM. |
| Speed | A virtual machine has to boot an entire operating system, including its services, so startup takes much longer. | A container starts almost immediately, since the operating system is already up and running, so the application starts without any noticeable delay. |
| Security | Provides complete isolation for all the applications it hosts and is therefore more secure. | Depending on the configuration, containers provide only process-level isolation, so the security risks are higher than with virtual machines. |
| Operating system | Runs a complete operating system, including the kernel, and therefore requires more system resources (CPU, memory, and storage). | Runs the user-mode portion of an operating system and can be tailored to contain just the services your app needs, using fewer system resources. |
| Application deployment | Deploy individual VMs by using hypervisor software. | Deploy individual containers by using Docker, or deploy multiple containers by using an orchestrator such as Kubernetes. |
| Application load time | VMs simulate the entire operating environment and therefore take more time to load. | Containers package the application code together with only the libraries and dependencies it needs, so they are generally smaller than VMs and load faster. |
| Persistent storage | Use a virtual hard disk (VHD) for local storage for a single VM, or a Server Message Block (SMB) file share for storage shared by multiple servers. | Use local disks for local storage on a single node, or SMB file shares for storage shared across multiple nodes or servers. |
| Load balancing | VM load balancing is done by running VMs on other servers in a failover cluster. | An orchestrator can automatically start or stop containers on cluster nodes to manage changes in load and availability. |
| Networking | Uses virtual network adapters. | Uses an isolated view of a virtual network adapter, so it provides slightly less virtualization. |

While VMs on the public cloud provide an easy way to scale computing resources on demand, a customer still has to predict how much capacity might be needed, because it takes a certain amount of time to spin up a VM. Containers solve this because the OS is already running. Combined with microservices, containers bring minimal overhead, independent scaling, and easy management via a container orchestrator such as Kubernetes.
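As a sketch of what that orchestration looks like, here is a minimal Kubernetes Deployment for the hypothetical image used earlier; the names, image tag, port, and replica counts are all illustrative:

```
# deployment.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                     # Kubernetes keeps three copies of the container running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/team/my-app:1.0
        ports:
        - containerPort: 8080
```

Applying the manifest and scaling it out takes seconds, because no new operating systems need to boot:

```
kubectl apply -f deployment.yaml                 # hand the desired state to the orchestrator
kubectl scale deployment my-app --replicas=10    # scale out without provisioning new VMs
```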

Related Blog: DevOps Tool Primer: Docker, Kubernetes, Ansible

Conclusion

In a recent IBM survey, 61% of container adopters reported using containers in 50% or more of the new applications they built during the previous two years; 64% of adopters expected 50% or more of their existing legacy applications to be put into containers during the next two years. With containers, you get the flexibility to move between cloud environments and greater scalability, all without rewriting your applications and with only minor configuration changes. Regardless of the cloud provider, you will see some cost savings from containers in the longer run.

The open source Docker Engine for running containers set the industry standard for containers, with simple developer tools and a universal packaging approach that works on any operating system. Software developers can combine Agile methods like Scrum, XP, and Lean Startup with DevOps tools like Git/GitHub, Jenkins, Docker, Puppet, Chef, Ansible, and Vagrant to create cloud-native apps.

We hope this gave you a good overview of containers and the advantages of using them.

Further reading – Check out how Agile, DevOps and CI/CD are related.