Spotlight on Tech

The AI Infrastructure Race Has Begun – and It’s Only the First Mile

By Udai Kanukolanu, Global Head of Sales, Rakuten Symphony
September 14, 2025 – 4-minute read

Bell Canada’s recent announcement that it will expand its AI infrastructure – with new data centers, cloud partnerships, and sovereign AI services – marks a pivotal shift in telecom’s role in the digital economy. The move isn’t just about scale; it’s about intent.

AI is forcing telcos to rethink how they build, monetize, and operate their assets.

That’s an encouraging sign. It shows the industry has finally stopped treating AI as a lab experiment and started treating it as infrastructure – a fundamental change in how telcos and technology companies architect, invest in, and operationalize AI. It means moving from using AI to improve the network to letting AI become the network. From Canada to Japan, we’re seeing telcos build new data center capacity, partner with hyperscalers and AI model providers, and rethink operational models to align with compute-driven ecosystems. It’s the start of an era in which connectivity, compute, and intelligence converge into one stack.

AI infrastructure, however, is not just real estate with fiber and cooling. It’s a living ecosystem – where compute meets policy, data meets orchestration, and intelligence becomes the new control plane. Simply putting powerful GPUs into an old data center doesn’t make it AI-ready; running AI at scale needs new architecture, not just new hardware.

The opportunity now lies in how fast telcos can evolve that ecosystem from static hosting to dynamic enablement. After all, we don’t want to miss the AI wave the way we missed the rise of OTT, cloud, data centers, and enterprise digitalization.

This is where partners like Rakuten Symphony come in – not as competitors, but as catalysts. Our work in recent years has focused on turning the network itself into a programmable, AI-ready platform that can integrate seamlessly with any operator’s data center, edge, or sovereign cloud strategy.

In other words, we help telcos make their infrastructure self-aware.

The evolution of AI infrastructure in telecom

In the traditional model, telcos built infrastructure for predictable workloads. Think human-initiated and human-consumed traffic, downlink-heavy consumption, and long-lived sessions.

AI traffic changes the physics of this: it’s burst-heavy, uplink-driven, and deterministic in its performance expectations. These workloads demand real-time orchestration, multi-domain observability, and continuous optimization.

The new performance currency

Reports suggest that the next wave of telecom growth won’t come from connectivity alone, but from enabling the infrastructure behind AI itself. As AI workloads surge, global data-center demand is projected to more than triple by 2030, creating massive opportunities for operators that can combine fiber reach, edge presence, and orchestration intelligence.

Evidently, the economics of telecom are being rewritten.

The next competitive frontier is the ability to guarantee performance consistency: the confidence that a network will deliver fixed latency, bounded jitter, and predictable compute availability regardless of load. This is critical to supporting business goals expressed as service-level objectives (SLOs), such as returning inference responses within milliseconds or completing AI model training in days, not weeks.
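To make that concrete, here is a minimal sketch – purely illustrative and not a Rakuten Symphony API – of how an operator might check measured inference latencies against an assumed SLO of bounded latency and jitter. The function name and threshold values are hypothetical assumptions, not figures from this article.

# Hypothetical SLO check: verify that measured inference latencies meet a
# millisecond-scale latency target with bounded jitter.
# Thresholds below are illustrative assumptions only.
from statistics import quantiles, pstdev

def meets_inference_slo(latencies_ms, p99_target_ms=20.0, max_jitter_ms=5.0):
    """Return True if the 99th-percentile latency and the jitter
    (standard deviation) of the samples stay within the assumed bounds."""
    if len(latencies_ms) < 2:
        return False
    p99 = quantiles(latencies_ms, n=100)[98]   # 99th-percentile latency
    jitter = pstdev(latencies_ms)              # spread around the mean
    return p99 <= p99_target_ms and jitter <= max_jitter_ms

# Example: a batch of measured inference round-trip latencies in milliseconds
samples = [8.2, 9.1, 7.9, 10.4, 8.8, 9.6, 11.0, 8.5]
print(meets_inference_slo(samples))  # True under the assumed thresholds

In practice, checks like this would run continuously against live telemetry, which is where real-time orchestration and multi-domain observability come in.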

So, rather than competing head-on with hyperscalers, telcos should focus on strategic adjacencies – connecting new data centers, offering intelligent network services, and turning underused power and space into compute capacity. Through a robust partner ecosystem, they can evolve to become the AI fabric for enterprises, bridging infrastructure, compute, and intelligence into one seamless platform.

That’s why the real opportunity for telcos like Bell, Deutsche Telekom, or any large-scale operator isn’t just in leasing capacity; it’s in embedding intelligence into that capacity. Rakuten Symphony’s platforms are designed for exactly this – to help operators evolve from managing infrastructure to managing experience, using AI-driven orchestration to optimize cost, utilization, and SLA compliance in real time.

The complementary equation

The world doesn’t need every telco to reinvent orchestration. It needs collaboration, where data-center providers, cloud partners, and AI-native platforms each play to their strengths.

A carrier building sovereign AI data centers can pair that foundation with an orchestration layer that makes every rack, link, and router part of a responsive, self-learning system.

Bell’s move to integrate Cohere’s LLMs through the Bell AI Fabric is a great example: now imagine those same capabilities extending across a self-optimizing, intent-driven network fabric. That’s the synergy we see unfolding. We bring the software DNA – the abstraction, automation, and autonomy – that lets traditional assets behave like intelligent ecosystems. 

That’s what we call “AI-native operations.” More on this in my upcoming blog.
