Why are so many organizations choosing Kubernetes and migrating to containers? There are a number of key reasons that center around flexibility, agility and performance. In this blog, we will give an overview of how Kubernetes works, its benefits and telco use cases.
A number of components contribute to the overall appeal of Kubernetes, with containers and Kubernetes itself being the main ones. We will refer to them collectively as the “Kubernetes solution.”
A container is a lightweight technology used for packaging applications and their dependencies. The keyword here is lightweight, as this is one of the main reasons containers were invented. A container encapsulates the application, libraries, settings and runtime, enabling it to run consistently in different cloud environments. One of the big differentiators is that, unlike the incumbent technology of Virtual Machines (VMs), containers do not require a guest operating system (OS) for each instance; there is just a single host OS per physical node. This improves performance and efficiency.
Furthermore, unlike VMs, which are monolithic application constructs, containerized applications are typically broken down into many micro-services, each running in a separate container. This provides additional isolation, limits the security attack surface and offers numerous scaling advantages.
Kubernetes is one of the fastest-growing projects in the history of open-source software, second only to the Linux kernel. It is one of the flagship Cloud Native Computing Foundation (CNCF) projects and was originally open-sourced by Google. Many see Kubernetes becoming as ubiquitous as Linux, and it is already proving indispensable in the cloud.
While containers improve resource utilization and isolation for individual applications, Kubernetes extends these ideals by orchestrating and efficiently managing clusters of containers across multiple servers.
A Kubernetes cluster provides the foundation of a container-driven (containerized) infrastructure: a manageable layer for deploying and efficiently operating containerized applications. The cluster architecture enables customers to rapidly scale applications in a high-availability environment. Furthermore, it abstracts away lifecycle-management complexity with a declarative configuration model (“tell me the desired outcome, not every step to get there”), streamlining management of both the applications and the underlying infrastructure. The main elements of a Kubernetes cluster include:
- The control plane – the API server, scheduler, controller manager and etcd datastore that record the cluster’s desired state and continuously drive the cluster toward it.
- Worker nodes – the machines that run the workloads, each hosting a kubelet, kube-proxy and a container runtime.
- Pods – the smallest deployable units, each wrapping one or more containers, which the scheduler places onto the nodes.
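To make the declarative model concrete, here is a minimal, hypothetical Deployment manifest (the names and image are purely illustrative). You declare that three replicas of a micro-service should exist; Kubernetes then continuously reconciles reality against that declaration, rescheduling or restarting containers as needed.

```yaml
# Hypothetical example: declare the desired state (three replicas of one
# micro-service); Kubernetes reconciles the cluster against this declaration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: session-mgmt                # illustrative name
spec:
  replicas: 3                       # the desired outcome, not the steps to get there
  selector:
    matchLabels:
      app: session-mgmt
  template:
    metadata:
      labels:
        app: session-mgmt
    spec:
      containers:
        - name: session-mgmt
          image: registry.example.com/session-mgmt:1.4.0   # illustrative image
          ports:
            - containerPort: 8080
```

Applying this manifest (for example with kubectl apply) is essentially all an operator does; the scheduler, controllers and kubelets handle placement, restarts and recovery.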
The Kubernetes solution was designed from the ground up to address both performance and scale. Containers’ predecessor, the VM, requires significant overhead for EVERY application, not the least of which is an additional operating system: each VM carries its own guest OS. VMs also depend on yet another piece of software, the hypervisor, which emulates all server resources in a highly I/O-intensive scheme, wasting precious CPU cycles. Kubernetes does away with the guest OS and the hypervisor entirely, drastically reducing resource overhead. This lets one use far fewer resources and fewer application instances to perform a task, while cutting unnecessary OS license costs.
On top of this, most containerized applications are broken down into their constituent parts or functions, called micro-services. With VMs, to scale just one part of the application, one needs to replicate the entire VM, including an additional guest OS and all of the compute/store/network resources associated with it – even if it is to address the scaling needs of one simple function. With Kubernetes containerized micro-services, one only needs to scale out the micro-service dedicated to a particular function.
Additionally, resource efficiency becomes increasingly important as solutions scale not only up to the core, but also down to the edge. Adding new racks of equipment, or even building a small remote facility, is a challenge on all fronts: cost, human effort and additional build time. Due to the Kubernetes solution’s lightweight nature, one can achieve higher application density on a single physical host, because fewer CPUs are needed to support each application.
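That density comes from the Kubernetes scheduler bin-packing pods according to the resources they declare. A minimal sketch, with illustrative names and numbers: modest CPU and memory requests let many micro-service pods share one physical host, while limits keep any one of them from starving its neighbors.

```yaml
# Hypothetical example: small requests allow dense packing on a node;
# limits cap what each container may consume.
apiVersion: v1
kind: Pod
metadata:
  name: metrics-agent               # illustrative name
spec:
  containers:
    - name: agent
      image: registry.example.com/metrics-agent:2.0.1   # illustrative image
      resources:
        requests:
          cpu: "100m"               # 0.1 CPU reserved for scheduling decisions
          memory: "128Mi"
        limits:
          cpu: "250m"
          memory: "256Mi"
```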
The resource efficiencies we just discussed also provide significant performance advantages, as there are fewer CPU cycles and less I/O wasted on emulation and guest OSs.
Furthermore, being lightweight, containers can start much faster than VMs, as they don't need to boot an entire guest OS. This allows Kubernetes to spin up and scale applications faster, to meet varying demands without the delay associated with VM boot times. This drastically improves the many real-time use cases enabled by 5G.
Additionally, both containers and VMs run faster on Kubernetes. (Yes, VMs can run on Kubernetes – for example, through projects such as KubeVirt.) While legacy VM cloud platforms can run containerized applications, those applications are still subject to the resource and performance penalties associated with VMs. Because of this overhead, applications run faster on Kubernetes than on legacy VM solutions – as demonstrated for Rakuten Mobile by our own Symcloud Platform.
Automation is critical to the success of any service and is a key component of the overall viability of different Kubernetes offerings. Autoscaling and auto-healing minimize human error and reduce time to outcome, while improving solution reliability for any application. Automation is likewise a key factor in solution agility, enabling one to fix, improve and iterate faster than the competition.
Kubernetes is easily configured to auto-scale micro-services based on a wide range of Key Performance Indicators (KPIs); for example, CPU usage of 80% can be used as a trigger. With similar declarative automation, Kubernetes heals itself whenever there is a discrepancy between the declared desired state and the observed state – unreachable, malfunctioning or crashed resources – with each condition able to trigger a different automated response. The scaling side of this is handled by the Horizontal Pod Autoscaler (HPA), which can rapidly create pod replicas based on resource usage or custom metrics, maintaining performance under varying workloads.
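A minimal sketch of the 80% CPU trigger mentioned above, assuming a Deployment named session-mgmt (the name is illustrative) already exists:

```yaml
# Hypothetical example: keep average CPU utilization around 80% by adding
# or removing replicas of the "session-mgmt" Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: session-mgmt-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: session-mgmt
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80    # the 80% CPU trigger from the text
```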
Portability means that an application can easily run in different environments composed of different resource types, OSs, container runtimes, Kubernetes versions and Kubernetes distributions. In other words, portability allows you to run services anywhere, instantly, without additional constraints or time-consuming software adaptations that would stretch solution delivery timelines.
Furthermore, Kubernetes operates on virtually any type of underlying Information Technology (IT) infrastructure (compute/store/network) and does not care if it exists on a private, public or hybrid cloud.
Similar to portability, multi-cloud allows solution providers and their customers the flexibility to deploy wherever the resources are. It gives them the opportunity to choose the best locations for those resources.
A Kubernetes solution provides security advantages over legacy VM solutions. As we discussed earlier, applications built on containers are split into numerous micro-services, each in its own container, unlike a monolithic VM. This allows us to isolate and limit the attack surface in many ways. Within the application itself, functions are isolated into separate containers, compartmentalizing the attack surface. Micro-service isolation also means that different resources – compute, storage and network – can be isolated per container or per group of similar containers.
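Network isolation between micro-services, for instance, can be declared per workload. A hedged sketch with illustrative labels: only the front-end pods may reach the billing pods, and only on one port.

```yaml
# Hypothetical example: billing pods accept traffic only from front-end pods,
# and only on TCP port 8443; other ingress to them is denied by this policy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: billing-allow-frontend      # illustrative name
spec:
  podSelector:
    matchLabels:
      app: billing
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8443
```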
Kubernetes itself offers pod security standards and admission controls that support best practices. At a configuration level, Kubernetes allows users to granularly configure security parameters at the pod and container levels, defining the privileges and access controls that the pod adheres to. This fine-grained control can enhance the security of running applications.
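As a hedged illustration of that fine-grained control (the names and image are hypothetical), a pod spec can declare exactly which privileges its containers give up:

```yaml
# Hypothetical example: a pod that runs as a non-root user, with a read-only
# root filesystem, no privilege escalation and all Linux capabilities dropped.
apiVersion: v1
kind: Pod
metadata:
  name: billing                     # illustrative name
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
  containers:
    - name: billing
      image: registry.example.com/billing:3.2.0   # illustrative image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```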
Containers also share the host’s OS, instead of running on top of both a host OS and an additional guest OS for each VM, thus reducing OS vulnerabilities. Kubernetes automation reduces the amount of human error and supports automated patching and roll-back deployments should vulnerabilities be discovered.
Lastly, Kubernetes enables an immutable infrastructure, meaning that components are unchangeable and cannot be overwritten by malicious actors or code. Once a container is instantiated, it is never modified; to change the deployment, the container must be replaced entirely by a new version. This approach reduces the risk of tampering and helps prevent security breaches due to unauthorized changes.
Nobody likes vendor lock-in, yet until recently it was the norm for legacy mobility solutions. Just as Open RAN pushed an open solution in the radio network, Kubernetes has enabled the same at the cloud-platform level, making it easier to mix and match 5G network functions (NFs) and applications from different vendors. An added benefit of this openness is that it gives companies the freedom to select leading-edge vendors from a rich, interoperable ecosystem with far less risk.
Kubernetes is a fully open source, community-led project overseen by the CNCF. It has several major sponsors, both vendors and operators. No single group dictates how the platform develops. To many businesses, this open-source strategy makes Kubernetes preferable to other solutions that are closed and vendor-specific, thus facilitating multi-vendor interoperability without lock-in.
From its very early years, Kubernetes was designed to be DevOps friendly, enabling development teams to iterate, test and deploy faster. This is critical for massive scale projects, such as 5G, that need continuous innovation and improvement over existing designs.
As mentioned earlier, Kubernetes brings with it declarative abstraction, where you tell the solution the outcome you want instead of all the steps to get there. This is an operations-friendly approach that drastically reduces lines of code and simplifies deployment models.
Additionally, Kubernetes applications use a highly modular approach that enables faster development with smaller, more focused teams that are each responsible for specific tasks. This modularity makes it easier to isolate dependencies and make use of well-defined, well-tuned, reusable and smaller components.
The Kubernetes deployment structure lends itself to controlled rollouts across clusters, canary deployments and automated rollback plans.
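As a hedged sketch of a controlled rollout (the Deployment and image are hypothetical), the rollout behavior itself is declared alongside the workload, and earlier revisions are retained so a failed release can be rolled back:

```yaml
# Hypothetical example: replace pods gradually (at most one extra pod and one
# unavailable pod at a time) and keep revision history around for rollback.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: session-mgmt
spec:
  replicas: 4
  revisionHistoryLimit: 10          # retained revisions enable rollback
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: session-mgmt
  template:
    metadata:
      labels:
        app: session-mgmt
    spec:
      containers:
        - name: session-mgmt
          image: registry.example.com/session-mgmt:1.5.0   # the new version being rolled out
```

If the new version misbehaves, kubectl rollout undo returns the Deployment to its previous revision; canary patterns typically run a second, smaller Deployment of the new version behind the same Service.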
For the reasons we have discussed in this blog, Kubernetes has gained significant traction in the telco market. Telcos have deployed Kubernetes to address numerous use cases that optimize operations and make their business offerings more agile.
Kubernetes telco use cases include:
- Containerized 5G core network functions (NFs)
- Open RAN and other radio-network workloads
- Edge deployments that bring applications closer to subscribers
- OTT and enterprise applications running alongside network workloads
Because of the reasons we outlined in this blog, Kubernetes is fast becoming the de facto cloud platform and has been deployed across telco, OTT and enterprise application markets. But even with all of these advantages, there is still room for innovation, and not all Kubernetes distributions are created equal. Connect with us to see why Symcloud Platform is the most automated Kubernetes platform out there, with a no-code, App Store look-and-feel as well as a feature-rich, low-footprint architecture.