Enterprises, system integrators and telcos worldwide are adopting intelligent infrastructure and automation technologies as they prepare to deploy next-generation cloud-native applications. Every mobile device, warehouse, factory, mine, port of call, bank, retail complex or sports facility will need to connect securely and communicate seamlessly. This shift is driving technology investments and innovation in robotics, drones, security systems, and connected production lines. At Rakuten Symphony, we see numerous trends impacting the data centers (DCs) of today and tomorrow, and we build those solutions, enabling you to transform not just your DC but your entire business.
Since the introduction of the data center decades ago, its delivery architecture has been constantly evolving, from dumb terminals, to the client/server model, right up to today's multi-cloud, which continues that trend as it grows and moves into every physical location and device. If history has taught us one thing, it's "innovate or die": for the foreseeable future, your cloud platform and automation suite must be adaptable and must unify the cloud no matter where it lives.
Over the last decade, there has been considerable debate on the best location for applications and their data. In the past there were two main choices: in your data center or somewhere in the cloud. Today, we are in the middle of an edge proliferation, with numerous small data centers located at, or near, the data's source. We see this in numerous verticals, including 5G services, IoT, Industry 4.0, logistics, content delivery, remote experts and Smart-X (cities, vehicles and so on). Edge data centers provide immediate, lower-latency decision-making, without the need for data retrieval and storage over slow WAN links. This model enables edge processing efficiencies while reducing the amount of active and backup data that needs to be warehoused in a core data center. This, in turn, reduces WAN utilization and the costs associated with remote data storage. Furthermore, edge deployment provides data affinity with the region or location where the data is actually used, paving the way for low-footprint, low-cost, low-power hyperconverged equipment.
When hyperscalers began rolling out their deployments and eventual customer migrations, it was heralded as “the easy button”, focusing on the model of “one cloud to rule them all”.
Hyperscalers then doubled down with a message of both capex and opex savings.
While much of this is true to some extent, the cloud has certainly not delivered the savings and general business results that were originally promised. Thus, many businesses are now working on a full or partial return to their own data centers. This, in turn, brings a whole new set of challenges, including the migration itself, coordinating hybrid cloud operations and a new high-reliability model.
When we look at the decentralization of the next-gen DC and then add the needs of mobile 5G networks and mobile applications, we see an explosion of teams, products, subscribers and technologies, all interacting with the networks they consume, with significant added complexity under the covers. To meet this challenge, next-gen DC solutions must reconcile even more deployment models, with interactions that span multiple organizations and business entities, unifying the underlying architectures and requiring more automation at scale than ever before.
Multitenancy, observability and chargeback are typically treated as separate areas of operation, but they go hand in hand, and their seamless integration is paramount. While Kubernetes has demonstrated its ability to increase the efficiency, security and dynamic scale of the applications it runs, it has little to no built-in multitenancy. Furthermore, multitenancy is a requisite function for efficient edge and Multi-access Edge Compute (MEC), 5G and many other edge applications.
Without strong, built-in multitenancy and Role-Based Access Control (RBAC), your DevOps strategy can become a rat's nest. Regardless of the DC's location, autonomous teams, from the same or different organizations and business entities, need the capability to intelligently manage resources for both collaborative and isolated tasks. Without intelligent resource partitioning and operational streamlining, service level agreements (SLAs) and the customer experience remain forever uncertain, owing to noisy neighbors, security issues and reduced solution agility.
Lastly, in a multi-tenant solution, resources are not free, regardless of where the different organizations reside or what they belong to. Someone is always accountable for cost, and that accountability falls on the tenants, whether or not they are in the same business entity. Therefore, per-tenant, usage-based billing must account for resources such as CPUs, memory, storage and networking.
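To make the idea concrete, the per-tenant, usage-based billing described above can be sketched as a simple rate card multiplied against metered usage. This is a minimal, illustrative sketch; the resource names and rates are hypothetical, not Symcloud's actual billing model.

```python
# Hypothetical rate card: cost per unit for each metered resource.
RATES = {
    "cpu_core_hours": 0.04,       # $ per vCPU-hour
    "memory_gib_hours": 0.005,    # $ per GiB-hour of RAM
    "storage_gib_hours": 0.0001,  # $ per GiB-hour of persistent storage
    "network_gib": 0.01,          # $ per GiB transferred
}

def tenant_invoice(usage: dict) -> dict:
    """Return a per-resource cost breakdown, plus a total, for one tenant."""
    lines = {res: round(qty * RATES[res], 2) for res, qty in usage.items()}
    lines["total"] = round(sum(lines.values()), 2)
    return lines

# Example: one tenant's metered usage over a billing period.
usage = {
    "cpu_core_hours": 1000,
    "memory_gib_hours": 4000,
    "storage_gib_hours": 50000,
    "network_gib": 200,
}
print(tenant_invoice(usage))
# {'cpu_core_hours': 40.0, 'memory_gib_hours': 20.0,
#  'storage_gib_hours': 5.0, 'network_gib': 2.0, 'total': 67.0}
```

In practice the usage figures would come from the platform's per-tenant metering, but the accounting step itself is this simple once every resource is attributed to a tenant.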
While there will most likely be multiple development teams, organizations are looking to consolidate the number of operations teams into one team that provisions, monitors, fine-tunes and troubleshoots the overall solution. In order to scale for the future, automation domains must be unified and consolidated. Long outdated is the old model where domains act serially with separate tools and teams. All of this needs to be intelligently collapsed. Complex things that once took days now need to take minutes or even less.
Data centers have more working parts than ever as part of their architecture.
Siloed operations will not scale. While there will be a need for per-tenant observability, there still needs to be one holistic view of the entire solution, directly linked to real-time multi-domain automation. Solutions and their domains must be unified under the covers, not just hidden behind a single pane of glass, so that they reduce the total number of systems collecting and processing the data. Otherwise, the result is multiple operations layers, multiple layers of data formatting and conditioning into common formats, and numerous analytics tools sitting on top of them, limiting both solution scale and agility.
The industry is actively transitioning from virtual machines (VMs) to a cloud-native, container-based design and will be doing so for many years. Kubernetes offers numerous advantages that include superior scale, resource efficiency, automation and portability, but the migration is not trivial. It requires planning, vendor support and a better cloud platform.
Most legacy cloud platforms support either VM-based Virtual Network Functions (VNFs) or container-based Cloud-native Network Functions (CNFs). Even those that claim to support both typically do so with two separate platforms hidden under a Graphical User Interface (GUI). Using one platform for CNFs and another for VNFs is not a migration strategy; it's a technology boat anchor created by poor decisions.
Unaccommodating legacy platforms come with serious consequences.
The best step forward is to have a unified platform that supports both CNFs and VNFs on bare metal.
Before we discuss stateful applications, let's first give an example of the simpler "stateless" applications. Stateless applications typically provide a single function, for example a print server, a basic calculator or an old-school web search. A stateless application's transactions do not need to understand or retain any information about a prior transaction in order to perform the current one. In other words, there are no preexisting conditions or states that impact their function. A stateful application, on the other hand, such as one handling bank transactions, does care about preexisting conditions or states. It therefore needs a persistent relationship with its data and users as it scales, migrates, stops/starts and heals. Kubernetes is not inherently stateful and does not adequately account for stateful storage; its Container Storage Interface (CSI) has only a few primitive commands, which enable it neither to properly quiesce applications nor to perform large-scale automation.
In other words, while Kubernetes allows one to scale and enhance performance by leaps and bounds, when it comes to data protection the relationship between storage and application becomes more complex, and the storage needs to become "application-aware".
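The stateless/stateful distinction above can be illustrated in a few lines. This is a toy sketch: a pure function that depends only on its inputs, versus a bank-account object whose every transaction depends on the state left behind by prior ones — state that must survive scaling, migration and restarts.

```python
def add(a: float, b: float) -> float:
    """Stateless: the result depends only on the current inputs."""
    return a + b

class BankAccount:
    """Stateful: each transaction depends on the state left by prior ones."""
    def __init__(self, balance: float = 0.0):
        self.balance = balance  # persistent state that must survive restarts

    def deposit(self, amount: float) -> float:
        self.balance += amount
        return self.balance

    def withdraw(self, amount: float) -> float:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

# The stateless calculator gives the same answer every time...
assert add(2, 3) == 5

# ...but the account's answer depends on its history. If this state were
# lost during a migration or restart, the next transaction would be wrong.
acct = BankAccount()
acct.deposit(100)
print(acct.withdraw(30))  # 70.0
```

A stateless pod can simply be killed and recreated anywhere; the `BankAccount` analogue is what requires persistent, application-aware storage.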
All of this relates back to storage in its own way, and that relationship is constantly changing.
The Symcloud™ family covers all of your cloud platform and orchestration needs. Everything comes fully automated and our unique, policy-driven interface means that you never have to be a domain expert, developer or CLI master to operate it.
Symcloud Orchestrator manages the lifecycle of any workflow, including bare-metal provisioning, Kubernetes cloud platforms (any Kubernetes distribution, including K3s), NF lifecycle management and Network Service (NS) lifecycle design, with a Methods of Procedure (MOPs) engine that enables large-scale management of any device or appliance, all triggered with a single click or automatically via a policy engine. Our automated workflows support container-based NFs, VM-based NFs and third-party physical NFs simultaneously. Symcloud Orchestrator is fast and easy to use with our custom Service Designer, MOPs Automation Studio and Orchestrator GUIs, where one can mix, match and reuse multiple workflow elements, as well as existing executors and scripts. We can reduce all of this down to one contextually aware, unified workflow. With a single click or automated event, you can prep server configurations, spin up Kubernetes clusters, provision network devices, deploy NFs and bring up a custom service chain, working in a GitOps model.
All of this comes with fully contextual, multitenant observability, with easy-to-use analytics and drill-down capabilities across all of the many domains. From bare metal to services, multiple DCs can be easily correlated, managed and healed.
Symcloud™ incorporates a best-of-breed, Kubernetes-based platform designed for network- and storage-intensive workloads, combining 1-click application onboarding with declarative, context-aware workload placement that pins your NFs and services to automated policies. Just tell the platform what resources your service needs, and it will auto-discover and configure them for you, per your policy, over the entire automated lifecycle of the service: stop, start, heal, clone and migrate.
With Symcloud Platform, there is never any hunting or hardcoding. Resources are modeled on numerous NUMA-aware options, including memory, CPU cores, GPU slices, HugePages, overlay/underlay networks and redundancy, applying affinity and anti-affinity rules as needed. This also extends into the compute and storage placement and locality, with persistent addressing.
Our advanced, policy-driven GUI requires only a fundamental understanding of your application to operate. Although the capability is there, no command-line programming or Kubernetes expertise is needed, which is ideal for self-service. This is all backed by a built-in multitenancy, RBAC and chargeback framework that enables the pinning and efficient sharing of resource pools.
Beware of the bait and switch when it comes to running VMs and containers. Unlike the competition, all of Symcloud™ runs VMs and containers on the exact same platform, even in the same pod. There are no disparate underpinnings to run VMs, no resource silos and no lifecycle management or operations silos. You don't have to have separate teams. You don't need different operations models. It's not two products hidden behind a single pane of glass, with two sets of design and operations rules. Symcloud Platform is completely unified under the covers, for all operations.
Symcloud™'s unified design comes complete with a built-in multitenancy, RBAC and chargeback framework that enables the pinning and efficient sharing of resource pools, eliminating noisy neighbors and tightening security, with policy-driven automation that controls and restricts access between namespaces, isolates resources, enforces usage quotas and restricts network access.
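The quota-enforcement side of multitenancy mentioned above boils down to admission control: a tenant's request is rejected if it would push that tenant's namespace over its quota. The sketch below is purely illustrative (the tenant names, resource keys and numbers are invented), but it captures how per-tenant quotas eliminate noisy neighbors.

```python
# Hypothetical per-tenant quotas and current usage (illustrative only).
QUOTAS = {"tenant-a": {"cpu": 16, "mem_gib": 64}}
USED   = {"tenant-a": {"cpu": 12, "mem_gib": 40}}

def admit(tenant: str, request: dict) -> bool:
    """Allow the request only if every resource stays within the tenant's quota."""
    quota, used = QUOTAS[tenant], USED[tenant]
    if any(used[r] + qty > quota[r] for r, qty in request.items()):
        return False            # would exceed quota: reject
    for r, qty in request.items():
        used[r] += qty          # admitted: charge the usage
    return True

print(admit("tenant-a", {"cpu": 2, "mem_gib": 8}))  # True  (14 of 16 CPU used)
print(admit("tenant-a", {"cpu": 4, "mem_gib": 8}))  # False (18 would exceed 16)
```

In Kubernetes terms this is the role ResourceQuota objects play per namespace; a multitenant platform layers tenant-level policy, RBAC and network restrictions on top of the same idea.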
Lastly, while Symcloud™ is a feature-rich platform that has demonstrated immense scale, it can also be scaled down to two cores or less without losing its APIs, operator support or observability, unlike other low-footprint solutions. This is not only ideal for edge deployments; it also means the same platform and operations model you have at the edge, you also have at the core. There is no need to deploy a different, specialized, edge-only platform.
Symcloud Storage includes industry-leading, software-defined storage that supports a comprehensive set of application-aware services, including snapshots, clone, backup, encryption, and business continuity. All data services are application-aware, tracking not only data storage, but the metadata and the ever-changing Kubernetes application config, protecting a wide range of datasets for “application-consistent” disaster recovery of complex network and storage-intensive stateful applications.
Symcloud Storage is true Kubernetes application-aware storage. As mentioned earlier, with Kubernetes applications, the relationship of the containerized application to its storage is constantly changing from the moment it is deployed. For this reason, high-level automation operations need to be performed on more than just the data at some point in time. One must also capture the application config, Kubernetes config, secrets and metadata, with the underpinnings necessary to quiesce complex applications instantly at any point in time.
Symcloud Storage understands, auto-learns and auto-adapts to all application and data permutations. Backups, snapshots, cloning and DR are all application- and Kubernetes-state aware.
Some other vendors claim Kubernetes application awareness, but they require manually intensive tagging and marking over the lifetime of the application, as well as Kubernetes expertise. With Symcloud Storage, we auto-ingest the application from its Helm chart, YAML file or operator, then auto-discover, auto-monitor and adapt to its changes over its entire lifecycle. It is fully automated and far easier to use; no Kubernetes expertise required.
Additionally, Symcloud Storage provides programmable pre- and post-processing policies that auto-adjust to target environments and can even renumber IP addresses when cloning so there are no network clashes. This is further combined with automated storage placement based on easy-to-configure policies and IOPs-based storage QoS.
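The IP-renumbering step described above — remapping a clone's addresses into a fresh subnet so nothing clashes with the original — can be sketched with Python's standard `ipaddress` module. This is a conceptual illustration of the idea, not Symcloud's actual post-processing code: each address keeps its host offset, but within a new, non-overlapping subnet.

```python
import ipaddress

def renumber(ip: str, old_subnet: str, new_subnet: str) -> str:
    """Map an address to the same host offset within a new subnet."""
    old_net = ipaddress.ip_network(old_subnet)
    new_net = ipaddress.ip_network(new_subnet)
    # Host offset of the address within its original subnet.
    offset = int(ipaddress.ip_address(ip)) - int(old_net.network_address)
    return str(new_net.network_address + offset)

# The clone keeps its relative addressing, but in a clash-free subnet.
print(renumber("10.0.1.7", "10.0.1.0/24", "10.0.2.0/24"))  # 10.0.2.7
```

Run as a post-processing policy against every address the clone references, this preserves the application's internal topology while guaranteeing the clone and the original can coexist on the network.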
Symcloud Storage comes completely integrated and automated with Symcloud Platform, but can also be purchased separately and runs on all of the main Kubernetes and hyperscaler distributions.
Delivering More with Symcloud™