What the SoftBank–Nokia Announcement Means for AI-RAN Commercialization
By Sam Bowling |
13 Mar 2026 |
IN-8070
NEWS: SoftBank and Nokia's AI-RAN Announcement
In February 2026, SoftBank Corp. and Nokia announced the integration of Nokia Bell Labs' AI-RAN External Compute Engine with SoftBank's AI-Traffic Management System through an expansion of the AITRAS Orchestrator. This enables the deployment of third-party Artificial Intelligence (AI) processing services on AI-Radio Access Network (RAN) infrastructure.
Consequently, the AI-RAN environment will now be able to handle external workloads, in addition to managing internal RAN control and other operator AI functions, converting the RAN into a distributed execution platform for AI rather than a network-only system. While specific applications have not yet been formally defined, the platform is intended to support latency-sensitive edge AI workloads, such as real-time video analytics, Extended Reality (XR) services, and industrial or smart city AI inference, that benefit from distributed compute close to users.
The announcement presents telco infrastructure as a platform for AI compute services, moving beyond network automation to monetizing idle compute capacity via AI Processing-as-a-Service. The model begins to realize one of the architectures identified by the AI-RAN Alliance and provides the step needed to move from AI-RAN conceptual designs to operationally deployable AI-RAN infrastructure, with a public demonstration anticipated at MWC Barcelona 2026 in March. The announcement represents a fundamental change in AI-RAN's role in telecoms business strategy, linking platform capability to revenue generation rather than treating AI only as a network optimization tool.
IMPACT: From Network Infrastructure to AI Platform Infrastructure
This event marks a structural transformation in AI-RAN's economic purpose. Historically, AI-RAN was valued for internal benefits, including automation, performance optimization, predictive maintenance, traffic efficiency, and energy savings. These provided only indirect financial impact, making large capital investment difficult to justify until a validated Return on Investment (ROI) could be demonstrated. SoftBank and Nokia's announcement appears to be the first commercial application of the "AI on the RAN" concept, signalling tangible, direct economic impact.
By leveraging external AI workloads, AI-RAN shifts from a cost-efficient solution to a platform economics model. The RAN becomes a multi-purpose digital platform providing both communication services and AI processing, transforming telecoms infrastructure from a single-purpose utility to a dual-purpose economic asset embedded in both connectivity and AI value chains.
Operators will now be measured not only by capacity, coverage, and performance, but also by utilization, compute aggregation, service monetization, and revenue per site. Previously unutilized compute will turn into a monetizable resource, shifting investment logic from “building for the peak” to “optimizing for continual utilization.” This positions telco operators as distributed AI infrastructure providers, embedding them in the AI economy as low-latency computation resources across regions, enabling AI services through telco networks, rather than centralized clouds.
Nevertheless, this transition creates additional systemic risks: increased Operational Expenditure (OPEX), increased energy usage, and greater exposure to volatility in both the telecoms and AI markets. Succeeding with AI-RAN transformation requires superior technical quality, sustained end-user demand for the technology, continuous engagement from all parties in the ecosystem, and long-term revenue generation.
Operators have been reluctant to adopt Graphics Processing Unit (GPU)-intensive AI-RAN architectures due to concerns regarding Total Cost of Ownership (TCO), energy use, and vendor lock-in. However, heterogeneous computing platforms and AI-ready hardware (e.g., AI-native radios and neural processing accelerators) provide viable alternatives. Please refer to ABI Insight “AI-Enhanced MIMO Implementation: Proven Large-Scale Technology OR Selective Deployment?” for more on AI in massive Multiple Input, Multiple Output (mMIMO) and adaptive radio planning.
Ultimately, AI-RAN represents a shift from internally-optimized architectures to externally-facing economic platforms. Success will be measured not by network optimization alone, but by the ability to support ongoing AI demand, generate sustainable revenue, and operate as an economically viable part of the broader AI ecosystem.
RECOMMENDATIONS: Building AI-RAN as a Sustainable Platform Economy
To operationalize the new AI-RAN model, the industry should treat it as a commercial platform architecture, rather than merely a new network innovation layer. The ecosystem must move toward coordinated commercial validation programs with clear metrics tied to the relevant AI workload models, as opposed to fragmented pilots and proofs-of-concept. Structured roundtables between operators and enterprises, together with field validation initiatives covering third-party AI processing use cases, will be essential to achieving this goal.
As part of this effort, these programs must fully evaluate pricing structures, service demand, and deployment models, while capturing clear commercial metrics (e.g., utilization density, revenue per site, energy cost per workload, service margin, and capital payback) on a regular basis. By publishing cross-business case studies and commercial benchmarks, operators can demonstrate repeatable ROI and help mitigate operators' reluctance to commit capital to AI-RAN deployments.
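The commercial metrics listed above can be tied together in a simple per-site economics model. The sketch below is purely illustrative: every figure (CAPEX, GPU-hour pricing, energy cost, OPEX) is a hypothetical placeholder, not ABI Research data or SoftBank/Nokia pricing, and the metric definitions are one plausible reading of the terms used in this insight.

```python
# Illustrative per-site AI-RAN economics model. All numbers are
# hypothetical placeholders, not vendor or operator data.
from dataclasses import dataclass


@dataclass
class SiteEconomics:
    capex_usd: float              # up-front compute/site upgrade cost
    gpu_hours_sold: float         # billable GPU-hours per month
    gpu_hours_capacity: float     # sellable GPU-hours per month
    price_per_gpu_hour: float     # USD per billable GPU-hour
    energy_cost_per_hour: float   # USD energy cost per GPU-hour consumed
    fixed_opex_monthly: float     # site OPEX attributable to AI services

    def utilization(self) -> float:
        """Share of sellable compute that is actually monetized."""
        return self.gpu_hours_sold / self.gpu_hours_capacity

    def monthly_revenue(self) -> float:
        """Revenue per site from external AI workloads."""
        return self.gpu_hours_sold * self.price_per_gpu_hour

    def monthly_margin(self) -> float:
        """Service margin after energy and attributable OPEX."""
        energy = self.gpu_hours_sold * self.energy_cost_per_hour
        return self.monthly_revenue() - energy - self.fixed_opex_monthly

    def payback_months(self) -> float:
        """Capital payback period; inf if the site never breaks even."""
        margin = self.monthly_margin()
        return self.capex_usd / margin if margin > 0 else float("inf")


site = SiteEconomics(capex_usd=120_000, gpu_hours_sold=4_000,
                     gpu_hours_capacity=10_000, price_per_gpu_hour=2.50,
                     energy_cost_per_hour=0.40, fixed_opex_monthly=3_000)
print(f"utilization:  {site.utilization():.0%}")         # 40%
print(f"revenue/site: ${site.monthly_revenue():,.0f}")   # $10,000
print(f"payback:      {site.payback_months():.1f} months")
```

Under these placeholder inputs, the site reaches payback in roughly 22 months; the point of tracking such metrics on a regular basis is that small shifts in utilization or energy cost move the payback period substantially.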
The next step for operators is to integrate AI-RANs into their production environments, rather than continuing to experiment with their deployment. The initial deployments of AI-RANs should focus on higher-demand urban or industrial locations where edge AI workloads are expected to be generated at a constant rate. Working with enterprise customers to develop initial production workloads will help operators better understand how their customers’ workloads can use those networks and how to price those services.
Operators should also experiment with pricing models, including dynamic pricing for AI processing services and the possibility of a marketplace where developers and enterprises can deploy, scale, and pay for AI applications directly on RAN infrastructure. Further, cooperative arrangements between operators will help accelerate this process by allowing them to share operational insights, establish standard practices, and enable multi-location, commercially viable AI service rollouts as a group or coalition, rather than each operator building out in isolation.
NVIDIA and other AI ecosystem vendors also have an important role in accelerating adoption beyond the initial SoftBank deployment by helping operators operationalize shared AI and RAN compute environments. Vendors should continue developing architectures that allow safe resource sharing between AI and RAN processing workloads, an effort already being advanced through collaborations. This includes NVIDIA's work with zTouch Networks to demonstrate how GPUs can be dynamically allocated between telco workloads and external AI workloads while maintaining deterministic RAN performance.
This will be critical as operators investigate different AI-RAN approaches, including GPU-accelerated platforms and Central Processing Unit (CPU)-centric approaches that are being introduced elsewhere in the market. Vendors should build and provide standards-based AI-RAN development frameworks, operational tools to monitor the performance of mixed AI and RAN workloads, and pre-validated edge AI applications that will allow operators to deploy AI-RAN applications on currently live networks. Through these endeavors, operators will be better positioned to evaluate utilization, effectively manage shared compute resources, and improve upon the timeliness and commercialization of AI-RAN infrastructure.
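The resource-sharing principle behind such architectures, i.e., RAN workloads must keep deterministic access to compute while external AI jobs absorb leftover capacity, can be sketched as a simple priority-ordered allocator. This is a hypothetical illustration of the scheduling idea only, not NVIDIA's or zTouch Networks' actual implementation, and the workload names are invented.

```python
# Minimal sketch of priority-based GPU sharing between RAN and external
# AI workloads. Hypothetical scheduler logic only; real systems would
# also handle preemption, time-slicing, and latency guarantees.
from dataclasses import dataclass, field

RAN_PRIORITY = 0       # lower value = scheduled first
EXTERNAL_PRIORITY = 1


@dataclass(order=True)
class Workload:
    priority: int
    name: str = field(compare=False)
    gpus_needed: int = field(compare=False)


def allocate(total_gpus: int, workloads: list[Workload]) -> dict[str, int]:
    """Grant GPUs to RAN workloads first so RAN performance stays
    deterministic; external AI jobs take whatever capacity remains."""
    grants: dict[str, int] = {}
    free = total_gpus
    for w in sorted(workloads):          # RAN jobs sort ahead of external
        granted = min(w.gpus_needed, free)
        grants[w.name] = granted
        free -= granted
    return grants


jobs = [
    Workload(EXTERNAL_PRIORITY, "video-analytics", 4),
    Workload(RAN_PRIORITY, "l1-beamforming", 3),
    Workload(EXTERNAL_PRIORITY, "xr-rendering", 2),
]
print(allocate(8, jobs))
# The RAN job is fully served first; external jobs share the remaining 5 GPUs.
```

The design choice worth noting is that the RAN's claim on compute is absolute in this sketch: external workloads can only degrade, never the RAN, which mirrors the deterministic-performance requirement described above.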