Kubernetes for Container Orchestration: An Introduction
Kubernetes is brand new for many developers. That’s why in a recent webinar, I provided an introduction to Kubernetes and the building blocks within it that developers use to create cloud-native applications in Kubernetes.
The Evolution of Abstraction
Physical systems are still very much part of the data center, even though virtualization is now commonplace. Virtualization is about abstracting the underpinning hardware and exposing it to individual virtual machines, so we can run many different machines on a single host. And while you can run dozens of virtual machines on one host, relying on a single host is not sound practice when it comes to high availability (HA), fault tolerance and DRS. Generally you’ll have multiple clusters and an orchestrator, such as VMware vCenter, to move workloads between them.
More recently, the abstraction layer has been divided further by cloud providers such as AWS, Google and Microsoft, as well as MSPs, who enable businesses to consume platforms as a service. Cloud providers have removed the management layer: you simply pay someone to look after your platform and never have to worry about managing the bare metal hardware. Layered on top of the management function is the ability to consume software as a service. Office 365 is a prime example of how we’ve moved from a physical system that previously ran on our Exchange Server to a SaaS-based consumption model. Still, on-premises platforms are available for certain workloads.
The trend toward abstracting various layers of the computing infrastructure paved the way for the introduction of containers, which abstract away certain elements of the operating system. Containers are very small, sometimes a megabyte or less, and run using container runtimes such as Docker, containerd or CRI-O. They enable you to spin up and down multiple workloads on a single host, whether that’s a hypervisor like VMware vSphere or in the cloud, for example on an EC2 instance. Most vendors are now offering Containers as a Service, as well.
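To make this concrete, here is a minimal sketch of how a container image is defined in a Dockerfile. The base image version, binary name and path are illustrative assumptions, not from the original post:

```dockerfile
# Start from a tiny base layer (Alpine is a few megabytes)
FROM alpine:3.19

# Copy an application binary into the image (hypothetical path and name)
COPY ./hello /usr/local/bin/hello

# The command the container runs when it starts
CMD ["/usr/local/bin/hello"]
```

Each instruction adds a layer to the image, which is why container images built on minimal bases stay so small compared to a full VM disk.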
Containers Versus VMs?
If you do a search on your favorite search engine for containers and VMs, you’re likely to see evidence of an ongoing debate about which one is better. The point is moot: they have different use cases and responsibilities when it comes to serving workloads out to end users, and each has its rightful place. Furthermore, each has advantages and disadvantages (see Table 1).
Table 1: Advantages and disadvantages of VMs and containers

VM advantages:
- Suit apps that need full OS functionality
- Useful for deploying multiple apps on one server

VM disadvantages:
- Each VM has its own operating system, and despite new technologies such as thin provisioning, ultimately has a larger footprint compared to a container
- Because each VM runs its own OS, it requires more RAM, CPU, storage and network resources than a container
- Updates are slower from a development standpoint

Container advantages:
- Minimize the number of servers you need for multiple applications
- Very fast to set up

Container disadvantages:
- All container images on a host need to be designed to run on a similar or the same operating system, or be put onto a different node, which can result in a proliferation of nodes
- If there’s a vulnerability inside a container, everything within the container set is also at risk
- When moving workloads between containers, refactoring and rearchitecting is necessary
Whereas virtualization enables you to run multiple operating systems on the hardware of a single physical server, containerization enables you to deploy multiple applications using the same OS on a single virtual machine or server. The point is there’s an overwhelming number of choices for how best to store our workloads and data, and whether to run them in physical or virtual environments. One size does not fit all.
While Containers enable developers to consolidate workloads efficiently, orchestration isn’t always necessary. Take, for example, a web server. Several containers may be required to handle website demand, but they’re static. There are no peaks and troughs in terms of demand; the ability to make rapid updates is all that’s needed. However, for the majority of workloads, that’s not the case, and orchestration is necessary. That’s where Kubernetes comes in.
Kubernetes is a container orchestration platform that helps you manage the deployment, placement and lifecycle of containers, in a similar manner to how VMware vCenter orchestrates the movement of VMs across clusters. Kubernetes also has several other responsibilities:
- It manages clusters of nodes and federates those into one target pool of compute, memory and storage resources.
- It schedules the distribution of containers across nodes.
- It discovers services and distributes client requests across the appropriate containers.
- It provides replication to ensure the right number of nodes and containers are available for the requested workload.
- It detects and replaces “unhealthy” containers and nodes.
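As a sketch of what this looks like in practice, a Kubernetes Deployment manifest declares a desired number of container replicas; Kubernetes then schedules them across nodes and replaces any that become unhealthy. The names and container image below are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # illustrative container image
        ports:
        - containerPort: 80
```

If a node fails or a container crashes, the scheduler recreates pods elsewhere until the observed state matches the three replicas declared here.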
As you can see, Kubernetes simplifies many aspects of running a service-oriented application infrastructure using containers. Kasten K10 by Veeam provides effective backup and recovery of Kubernetes applications, so you can develop and innovate within the Kubernetes environment without worrying about losing your data to accidental deletion, malicious activity or unplanned downtime. Give it a try for yourself.
That’s Just the Beginning...
We’ve just scratched the surface in terms of an introduction to Kubernetes in this post. For a deeper dive, including key building blocks of Kubernetes and how Kasten K10 helps tackle the challenges of data management in Kubernetes with efficient backup and recovery, watch my full webinar on-demand.
A community-first technologist for Kasten by Veeam Software, based in the UK with over 16 years of industry experience and a key focus on technologies such as cloud native, automation and data management. His role at Kasten is to act as a technical thought leader, community champion and project owner, engaging with the community to enable influencers and customers to overcome the challenges of Cloud Native Data Management and be successful. He speaks at events to share the technical vision and corporate strategy, while providing ongoing feedback from the field to product management to shape future success.