Spotlight on Tech

Better Telco with Kubernetes

By
Brooke Frischemeier
Head of Product Management, Unified Cloud
Rakuten Symphony
August 3, 2023
12 minute read

Why are so many organizations choosing Kubernetes and migrating to containers? There are a number of key reasons that center around flexibility, agility and performance. In this blog, we will give an overview of how Kubernetes works, its benefits and telco use cases.

What are Containers and What is Kubernetes?

Several components contribute to the appeal of the overall Kubernetes solution, containers and Kubernetes itself being the main ones. We will refer to them collectively as the “Kubernetes Solution.”

Containers

A container is a lightweight technology used for packaging applications and their dependencies. The keyword here is lightweight, as this is one of the main reasons containers were invented. A container encapsulates the application, libraries, settings and runtime, enabling it to run consistently in different cloud environments. One of the big differentiators is that, unlike the incumbent technology of Virtual Machines (VMs), containers do not require a guest operating system (OS) for each instance; there is just a single host OS per physical node. This improves performance and efficiency.

Furthermore, unlike VMs that are monolithic application constructs, containerized applications are typically broken down into many micro-services, where each micro-service runs in a separate container, providing additional isolation and limiting the security attack surface while providing numerous scaling advantages.

Kubernetes

Kubernetes is the fastest-growing project in the history of open-source software after the Linux kernel. It is one of the flagship Cloud Native Computing Foundation (CNCF) projects and was originally open-sourced by Google. Many see Kubernetes becoming as ubiquitous as Linux, and it is already proving indispensable in the cloud.

While containers improve resource utilization and isolation for individual applications, Kubernetes extends these benefits by orchestrating and efficiently managing clusters of containers across multiple servers.

A Kubernetes cluster provides the foundation of a container-driven (containerized) infrastructure, providing a manageable layer to deploy and efficiently manage containerized applications. The cluster architecture enables customers to rapidly scale applications in a high availability environment. Furthermore, it abstracts lifecycle management complexities with a declarative configuration model (“tell me the desired outcome, not every step to get there”), streamlining application management and the underlying infrastructure. The main elements of a Kubernetes cluster include the following:

Fig 1. Kubernetes cluster

  • Master Node: Represents the control plane of the Kubernetes cluster. It provides cluster-wide coordination and management. Control plane components under its domain include the API server, controller manager and the scheduler.
  • Worker Nodes a.k.a. Nodes: These provide the cluster’s compute resources, where application workloads actually run. They are typically physical server nodes but can also be VMs in a hybrid environment.
  • Pods: These are logical subdivisions within a worker node. They are the smallest and simplest units in the Kubernetes object model, representing one or more tightly coupled containers that share resources and network namespaces. Pods are scheduled and run on worker nodes. There are no hard-set rules on how to map containers into pods; containers can be grouped by function, customer, security domain, resource needs and so on.
  • Container Runtime: This software is responsible for running containers within the pods. Examples of popular container runtimes include Docker and containerd. It is a critical part of the container ecosystem and serves as the interface between the node’s host OS and the containerized applications.
  • kubelet: This agent runs on each worker node and communicates with the master node, ensuring that the containers and pods on the node are running in the desired state.
  • kube-proxy: This agent runs on each worker node and is responsible for managing network routing, enabling communication between pods within the cluster and outside entities.
  • etcd: The master node components, along with the individual kubelets, use etcd to store and retrieve cluster information. etcd is responsible for storing the cluster’s configuration data and follows the model of a distributed key-value store.
  • Networking Plugins: Kubernetes itself does not define the network components. However, it does provide the Container Network Interface (CNI) that allows it to interact with various networking plugins. These plugins provide additional functionality and automation, such as advanced IP support.
  • Storage Plugins: Kubernetes itself does not define the storage automation and components. Instead, it provides the Container Storage Interface (CSI) that allows it to interact with various storage plugins that provide additional functionality and automation. These plugins can be either open source or proprietary, and are typically used to support advanced features such as persistent volumes and stateful data sets.
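To make the building blocks above concrete, here is a sketch of a minimal Pod manifest grouping two tightly coupled containers. All names and images are illustrative, not from any real deployment:

```yaml
# Hypothetical example: a Pod with a main container and a sidecar that
# share the pod's network namespace and are scheduled together
apiVersion: v1
kind: Pod
metadata:
  name: upf-worker                # illustrative name
  labels:
    app: upf
spec:
  containers:
  - name: packet-processor        # the micro-service itself
    image: registry.example.com/upf/packet-processor:1.4.2
    resources:
      requests:
        cpu: "2"                  # scheduler places the pod on a node
        memory: 4Gi               # with this much spare capacity
  - name: metrics-sidecar         # helper container in the same pod
    image: registry.example.com/upf/metrics-sidecar:1.4.2
```

The kubelet on whichever worker node the scheduler selects asks the container runtime to start both containers, then keeps them in the declared state.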

Kubernetes Solution Benefits

Resource Efficiency

The Kubernetes solution was designed from the ground up to address both performance and scale. Containers’ predecessor, the VM, requires significant overhead for EVERY application, not least of which is an additional operating system, the guest OS, one per VM. Additionally, VMs rely on yet another piece of software, the hypervisor, which emulates all server resources in a highly I/O-intensive scheme, wasting precious CPU cycles. Kubernetes does away with the guest OS and hypervisor entirely, drastically reducing resource overhead. This allows one to use far fewer resources, reduces the number of application instances needed to perform a task and cuts unnecessary OS license costs.

On top of this, most containerized applications are broken down into their constituent parts or functions, called micro-services. With VMs, to scale just one part of the application, one needs to replicate the entire VM, including an additional guest OS and all of the compute/store/network resources associated with it – even if it is to address the scaling needs of one simple function. With Kubernetes containerized micro-services, one only needs to scale out the micro-service dedicated to a particular function.

Additionally, resource efficiency becomes increasingly important as solutions scale not only up to the core, but also down to the edge. Adding new racks of equipment, or building even a small remote facility, is a challenge on all fronts: cost, human effort and additional build time. Due to the Kubernetes solution’s lightweight nature, one can achieve higher application density on a single physical host, reducing the number of CPUs needed to support the application.

Increased Performance

The resource efficiencies we just discussed also provide significant performance advantages, as there are fewer CPU cycles and less I/O wasted on emulation and guest OSs.

Furthermore, being lightweight, containers can start much faster than VMs, as they don't need to boot an entire guest OS. This allows Kubernetes to spin up and scale applications faster, to meet varying demands without the delay associated with VM boot times. This drastically improves the many real-time use cases enabled by 5G.

Additionally, both containers and VMs run faster on Kubernetes. (Yes, we can run VMs on Kubernetes.) While legacy VM cloud platforms can run containerized applications, those applications are still subject to the resource and performance penalties associated with VMs. Because of this overhead, applications run faster on Kubernetes than on legacy VM solutions – as demonstrated for Rakuten Mobile by our own Symcloud Platform.

Automation, Scaling & Self-Healing

Automation is critical to the success of any service and is a key component of the overall viability of different Kubernetes offerings. Autoscaling and auto-healing minimize human error, reduce time to outcome and improve solution reliability for any application. Automation is also a key factor in solution agility, enabling one to fix, improve and iterate faster than the competition.

Kubernetes is easily configured to auto-scale micro-services and can do so based on a large number of Key Performance Indicators (KPIs). For example, CPU usage of 80% can be used as a trigger. With similar declarative automation, Kubernetes heals itself when there is a discrepancy between the declared optimal state and any suboptimal state – unreachable, malfunctioning or crashed resources – where each state can trigger a different automated response. This also feeds into a mechanism called Horizontal Pod Autoscaling (HPA) that can rapidly create pod replicas based on custom metrics, maintaining better performance under varying workloads.
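The 80% CPU trigger mentioned above can be expressed declaratively. A sketch of an HPA object, with illustrative names, might look like this:

```yaml
# Hypothetical example: keep average CPU utilization near 80% by
# scaling a Deployment between 2 and 10 pod replicas
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: amf-hpa                   # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: amf                     # the micro-service being scaled
  minReplicas: 2                  # floor for high availability
  maxReplicas: 10                 # ceiling to cap resource use
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80    # the KPI trigger
```

The operator declares only the target and the bounds; the HPA controller continuously reconciles replica count against observed load.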

Enhanced Portability and Multi-Cloud

Portability indicates that an application can be easily adapted to run in different environments composed of different resource types, OSs, container runtimes, Kubernetes versions and Kubernetes distributions. In other words, portability allows you to run services anywhere, instantly, without additional constraints or time-consuming software adaptations that stretch solution delivery timelines.

Furthermore, Kubernetes operates on virtually any type of underlying Information Technology (IT) infrastructure (compute/store/network) and does not care if it exists on a private, public or hybrid cloud.  

Similar to portability, multi-cloud allows solution providers and their customers the flexibility to deploy wherever the resources are. It gives them the opportunity to choose the best locations for those resources.

Better Security

A Kubernetes solution provides security advantages over legacy VM solutions. As we discussed earlier, applications built on containers are split up into numerous micro-services, each in its own container, unlike a monolithic VM. This allows us to isolate and limit the attack surface in many ways. In the application itself, functions are isolated into separate containers, compartmentalizing the attack surface. Micro-service isolation also means that different resources – compute, storage and network – can be isolated per container or per group of similar containers.

Kubernetes itself offers pod security standards and admission controls that enforce best practices. On a configuration level, Kubernetes allows users to granularly configure security parameters at the pod and container levels, defining the privileges and access controls that the pod adheres to. This fine-grained control enhances the security of running applications.
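As an illustration of that fine-grained control, a pod spec can declare security settings at both levels. The manifest below is a sketch with illustrative names:

```yaml
# Hypothetical example: pod-level and container-level security contexts
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod              # illustrative name
spec:
  securityContext:
    runAsNonRoot: true            # pod-level: refuse to start as root
    runAsUser: 1000
  containers:
  - name: app
    image: registry.example.com/app:2.0
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true  # container cannot modify its own image
      capabilities:
        drop: ["ALL"]             # shed every Linux capability
```

Each setting narrows what a compromised container could do, shrinking the attack surface described above.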

Containers also share the host’s OS, instead of running on top of both a host OS and an additional guest OS for each VM, thus reducing OS vulnerabilities. Kubernetes automation reduces the amount of human error and supports automated patching and roll-back deployments should vulnerabilities be discovered.

Last, Kubernetes enables an immutable infrastructure, meaning that the components are unchangeable and cannot be overwritten by malicious actors or code. Once a container is instantiated, it is never modified; to change the deployment, the container must be replaced entirely by a new version. This approach reduces the risk of tampering and helps prevent security breaches due to unauthorized changes.

Open Source & Multi-Vendor, Less Risk

Nobody likes vendor lock-in, yet until recently it was the norm for legacy mobility solutions. Just as Open RAN pushed an open solution in the radio network, Kubernetes has enabled the same at the cloud-platform level, making it easier to mix and match vendor 5G NFs and applications. An added benefit of this openness is that it gives companies the freedom to select leading-edge vendors from a rich, interoperable ecosystem with far less risk.

Kubernetes is a fully open source, community-led project overseen by the CNCF. It has several major sponsors, both vendors and operators. No single group dictates how the platform develops. To many businesses, this open-source strategy makes Kubernetes preferable to other solutions that are closed and vendor-specific, thus facilitating multi-vendor interoperability without lock-in.  

Increased Developer Productivity

From its very early years, Kubernetes was designed to be DevOps friendly, enabling development teams to iterate, test and deploy faster. This is critical for massive scale projects, such as 5G, that need continuous innovation and improvement over existing designs.

As mentioned earlier, Kubernetes brings declarative abstraction: you tell the solution the outcome you want instead of all the steps to get there. This operations-friendly approach drastically reduces lines of code and simplifies deployment models.
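To sketch what "declare the outcome" means in practice, a Deployment manifest (illustrative names throughout) states only the desired end state:

```yaml
# Hypothetical example: declare "three replicas of this service should
# exist" and let the controllers reconcile toward that state
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smf                       # illustrative name
spec:
  replicas: 3                     # the "what" – no scripted start-up steps
  selector:
    matchLabels:
      app: smf
  template:
    metadata:
      labels:
        app: smf
    spec:
      containers:
      - name: smf
        image: registry.example.com/smf:3.1.0
```

If a pod crashes or a node fails, Kubernetes notices the drift from three replicas and recreates pods automatically, with no operator-written recovery procedure.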

Additionally, Kubernetes applications use a highly modular approach that enables faster development with smaller, more focused teams that are each responsible for specific tasks. This modularity makes it easier to isolate dependencies and make use of well-defined, well-tuned, reusable and smaller components.  

The Kubernetes deployment structure lends itself to controlled rollouts across clusters, canary deployments and automated rollback plans.  
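A controlled rollout like this is itself declarative. The fragment below, a sketch of part of a Deployment spec, bounds how aggressively a new version replaces the old one:

```yaml
# Hypothetical example: a conservative rolling update strategy; if the
# new version misbehaves, a rollout undo restores the previous revision
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1           # never take more than one pod down at a time
      maxSurge: 1                 # add at most one extra pod during the rollout
```

Because each revision is recorded, an automated or manual rollback simply re-applies the previous declared state.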

Telco Use Cases

For the reasons we have discussed in this blog, Kubernetes has gained significant traction in the telco market. Telcos have deployed Kubernetes to address numerous use cases that optimize operations and make their business offerings more agile.

Kubernetes telco cases include:

  • Telco Infrastructure and Over The Top (OTT) Applications: Enterprise use cases are also telco use cases. Every telco runs enterprise use cases, including human resources, databases, video streaming, analytics, email and so on. If there is one thing we have already learned from the Internet and mobile phones, it is that OTT applications – for streaming, business, gaming and so on – can supercharge revenue. With Kubernetes, both the services and the overall telco infrastructure can be efficiently managed with a single, unified platform, shared between multiple organizations and development teams.
  • DevOps: Kubernetes provides a powerful platform that streamlines various aspects of the development and operations lifecycles. Most Kubernetes solutions offer an easy-to-use self-service model that enables developer teams to spin up their own infrastructure, instead of filling out requests and waiting on administrative teams. Kubernetes’ immutable model and ability to interact with software repositories also adds efficiency, security and single source of truth to deployment states.
  • Legacy Network Function Virtualization (NFV) Migration: The legacy VM-based hardware infrastructure is aging out and so are the software licenses of the cloud platforms that support them. This is an ideal transition point for telcos to invest in a future-proof architecture. Even if a vendor’s applications have not transitioned to containers, they can run both VMs and containers on Kubernetes, migrating at their own pace, without being shackled to their vendors’ roadmaps.
  • 5G RAN to Core and Legacy Mobility Networks: Virtually every 5G network function vendor has a containerized offering, and just a few don’t yet have one on their roadmap. 5G network functions can be finicky, and they require the fine-grained performance control provided by Kubernetes. While many 4G and older-generation functions still run as VNFs, they too can run on a Kubernetes platform as the operator migrates.
  • Edge Computing: Edge computing spans both telco and revenue enterprise applications. At the edge, cloud resources are scarce. Kubernetes solutions are much more resource-efficient than legacy VM-based solutions and they can be used to process and reduce the amount of data flowing inward from the edge.
  • Streaming: Kubernetes’ dynamic scaling can best service the needs of demand-driven resources in a multi-regional environment. Providers can efficiently and dynamically scale pods of transcoders, up and down, without manual intervention.
  • Hosted Private 5G and LTE: Kubernetes enables secure automation that accelerates the deployment and management of private networks for enterprises, Industry 4.0 and smart-X applications. Kubernetes is an ideal fit for these multi-use services, and it also supports the key application of network slicing.
  • Network Slicing: Kubernetes is a key enabler that dovetails seamlessly with the creation and lifecycle management of network slices, as it already supports the key tenets of network slicing, including virtual networks, dedicated resource pooling and policy-based scaling. Furthermore, depending on the deployment model, network slicing can be used to dedicate virtual networks within a Private 5G customer’s own network, or between multiple customers as a shared resource tied to a service level agreement.
  • Analytics: Kubernetes’ built-in capabilities are ideal for analytics solutions, providing a robust platform for deploying, scaling, and managing analytical applications, enabling organizations to analyze large volumes of data efficiently. Furthermore, we see more and more analytics services deployed at the edge, where Kubernetes is a clear winner. Analytics datasets are huge and edge processing enables users to process data into smaller, more meaningful and manageable data sets, before they are transferred over expensive transport links to a second tier of processing or a data warehouse.

Conclusion

For the reasons we outlined in this blog, Kubernetes is fast becoming the de facto cloud platform and has been deployed across the telco and OTT enterprise application markets. But even with all of these advantages, there is still room for innovation: not all Kubernetes distributions are created equal. Connect with us to see why Symcloud Platform is the most automated Kubernetes platform out there, with a no-code, App Store look-and-feel and a feature-rich, low-footprint architecture.
