The increasing complexity of 5G and early 6G network infrastructure demands sophisticated management solutions.
Much of this complexity comes from networks built with different hardware types, multi-cloud environments, and varying protocols from multiple vendors. This heterogeneous environment poses challenges in cost, performance, and energy consumption – and necessitates intelligent solutions.
One promising approach is integrating AI/ML into the RAN to automate deployment and ongoing operations. For accuracy, however, these AI models need to be trained on data from a live production network – yet training in a live network can negatively affect its services.
One solution is the use of digital twin (DT) technology optimized for the RAN (DT-RAN).
In a previous white paper, I explored how digital twin technology applies to Open RAN and 5G. More recently, I contributed my thoughts on how DT-RAN could support AI/ML training, evaluation, and performance to a DT-RAN research report sponsored by the O-RAN ALLIANCE next Generation Research Group (nGRG) task force. Rakuten Symphony joined 10 other network operators, telecom vendors, and one university from around the world to compile the research for the report.
This research report is one of the documents that will inform the work of standards bodies to create a DT-RAN standard. The 3GPP is developing its DT-RAN standard, slated for Release 19/Release 20 and expected to be finalized in 2025 or 2026. The O-RAN ALLIANCE is also planning to include DT-RAN in its O-RAN Rel005, which will be available at about the same time.
The rest of this post will give you a preview of my use case, but the research report covers other interesting DT-RAN use cases as well, including network testing automation, network planning, and energy savings. I encourage you to download the entire report here.
As network complexity grows, AI/ML solutions become crucial for automating, managing, orchestrating, and optimizing RAN components. However, current AI/ML implementations face several challenges, which I detail in the research report.
Beyond these initial challenges, training AI/ML models is not a one-time event but an ongoing process of optimization, where models continually interact with networks to refine their accuracy.
Without DT-RAN, however, this requirement presents multiple risks and inefficiencies. Experimenting with AI in a production network can degrade network performance, and AI models can make critical errors that impact live services. Building a separate physical network for training is expensive and time-consuming – and if its data is not real-world data, it won't represent the diversity of the production network.
Digital Twin (DT) technology presents a robust solution to the challenges faced by AI/ML in network environments.
A DT-RAN is a digital replica of a real-world system – in this case, the RAN – that enables AI/ML models to train, evaluate, and optimize within a risk-free, controlled environment before interacting with the live network. DT-RAN integrates with the AI/ML workflow to support each of these stages.
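To make the train-in-twin pattern concrete, here is a minimal Python sketch: a candidate model is trained and scored entirely inside a digital replica, and is only promoted toward the live network once it clears an acceptance threshold. Every name in it (`DigitalTwinRAN`, `train_in_twin`, the reward logic) is my own hypothetical illustration of the pattern, not an interface defined in the report or in any O-RAN specification.

```python
import random

# All class and function names below are hypothetical illustrations,
# not APIs from the nGRG report or any O-RAN specification.

class DigitalTwinRAN:
    """Toy digital replica of a RAN: replays production-derived cell loads
    so a model can be trained and evaluated without touching live traffic."""

    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)

    def sample_state(self) -> float:
        # Stand-in for a telemetry snapshot mirrored from the real network.
        return self.rng.uniform(0.0, 1.0)  # normalized cell load

    def evaluate(self, policy, episodes: int = 1000) -> float:
        """Score a policy entirely inside the twin (risk-free)."""
        score = 0.0
        for _ in range(episodes):
            load = self.sample_state()
            action = policy(load)  # e.g., how much capacity to provision
            # Reward close capacity matches; penalize over/under-provisioning.
            score += 1.0 - abs(action - load)
        return score / episodes

def train_in_twin(twin: DigitalTwinRAN, threshold: float = 0.9):
    """Sketch of the DT-RAN loop: train candidates in the twin, promote
    toward the live network only once the twin-side score clears a bar."""
    best_policy, best_score = None, -1.0
    for gain in [0.5, 0.8, 1.0, 1.2]:  # naive candidate search
        policy = lambda load, g=gain: min(1.0, g * load)
        score = twin.evaluate(policy)
        if score > best_score:
            best_policy, best_score = policy, score
    if best_score >= threshold:
        return best_policy  # safe to promote toward the live RAN
    return None  # keep iterating in the twin; never risk production

twin = DigitalTwinRAN()
policy = train_in_twin(twin)
print("deploy" if policy else "keep training in twin")
```

The point of the pattern is the final branch: a model that fails in the twin never touches production, which is exactly the risk described above.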
The DT-RAN approach will be essential in future 6G networks, where it will serve as a key component of the AI/ML framework. This framework, often referred to as the “cognitive plane,” integrates various AI/ML models and supports their training, validation, and deployment across different network functions. In the report, I list the main components of this cognitive plane.
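The report enumerates the actual components; independent of that list, the short sketch below illustrates the lifecycle such a plane manages – models moving from training in the twin, through validation, to deployment on live network functions. All identifiers here are hypothetical examples of mine, not components named in the report.

```python
from enum import Enum, auto

# Illustrative only: a toy model-lifecycle tracker for the kind of
# "cognitive plane" described above. Names are my own, not the report's.

class Stage(Enum):
    TRAINING = auto()    # learning inside the DT-RAN sandbox
    VALIDATION = auto()  # scored against twin-generated scenarios
    DEPLOYED = auto()    # promoted to a live network function

class CognitivePlane:
    """Tracks each AI/ML model's lifecycle stage across network functions."""

    def __init__(self):
        self.models: dict[str, Stage] = {}

    def register(self, model_id: str):
        self.models[model_id] = Stage.TRAINING

    def promote(self, model_id: str):
        order = list(Stage)
        current = self.models[model_id]
        self.models[model_id] = order[min(order.index(current) + 1,
                                          len(order) - 1)]

plane = CognitivePlane()
plane.register("energy-saver-model")   # hypothetical model name
plane.promote("energy-saver-model")    # TRAINING -> VALIDATION (in the twin)
print(plane.models)
```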
As I show in the report, through sandbox environments, advanced visualization tools, and scenario generation, DT-RAN will empower network operators to optimize their infrastructure dynamically, allowing AI/ML solutions to evolve alongside the network itself.
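As a small illustration of the scenario-generation idea, here is a hypothetical generator of “what-if” conditions (traffic surges, cell outages) that a twin could replay against candidate models. The `Scenario` structure and its fields are assumptions of mine, not tooling described in the report.

```python
import random
from dataclasses import dataclass

# Hypothetical scenario generator for a DT-RAN sandbox -- an illustration
# of the scenario-generation idea, not tooling from the report.

@dataclass
class Scenario:
    name: str
    traffic_multiplier: float  # scale factor applied to mirrored traffic
    failed_cells: list[int]    # cells to mark as faulty inside the twin

def generate_scenarios(num_cells: int, n: int, seed: int = 7) -> list[Scenario]:
    """Produce randomized what-if scenarios (traffic surges, cell outages)
    that the twin can replay against candidate AI/ML models."""
    rng = random.Random(seed)
    scenarios = []
    for i in range(n):
        surge = rng.uniform(1.0, 5.0)  # up to 5x normal load
        outages = rng.sample(range(num_cells), rng.randint(0, 2))
        scenarios.append(Scenario(f"what-if-{i}", surge, outages))
    return scenarios

for s in generate_scenarios(num_cells=8, n=3):
    print(s)
```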
DT-RAN provides a critical framework that will facilitate the seamless integration of AI/ML technologies in next-generation networks – delivering performance assurance, reducing risk, and accelerating the development of more intelligent, responsive networks. For more details on the use of DT-RAN in AI/ML management or other use cases, download the research report here.