For the first time, NVIDIA GTC partnered with VivaTech Paris to host one of the largest technology events of the year. Drawing over 180,000 attendees, the event highlighted sovereign Artificial Intelligence (AI) and its transformative potential for the European ecosystem. NVIDIA took this opportunity to position itself, once again, as the de facto partner for regional AI development, while also continuing to develop its message around agentic inference and AI democratization.
The conference featured candid discussions, with speakers and attendees openly acknowledging the structural gaps in Europe’s technology ecosystem and emphasizing the need for increased governmental support and sustained private investment to enable supply chain reshoring. Despite these challenges, the overall tone was optimistic, fueled by several promising developments:
- Mistral’s Expanded Relationship with NVIDIA: Arthur Mensch, in collaboration with Jensen Huang, announced Mistral Compute—a comprehensive full-stack AI infrastructure platform built on NVIDIA hardware, focused on delivering sovereign AI infrastructure solutions. Coupled with Mistral’s recent announcement of Europe’s first AI reasoning model, Magistral, this clearly signals Mistral’s intent to become the region’s first full-stack AI player, forming the foundation upon which the region can realize its global ambitions.
- Expansion of a Neocloud Ecosystem: A growing neocloud ecosystem is developing across Europe to bridge the AI infrastructure gap left by U.S. hyperscalers. These players represent a potential unifying force for European AI development. They will also provide a viable alternative to hyperscalers, driving positive competition in the European ecosystem and providing much-needed choice for AI developers and enterprise customers.
- Alignment of AI Development with Europe’s Industrial Strength: NVIDIA launched the Industrial AI Cloud in partnership with Deutsche Telekom, reinforcing the continent’s strong industrial ecosystem. The Industrial AI Cloud, set to launch in 2027, will be a gigafactory equipped with 100,000 Graphics Processing Units (GPUs), designed specifically to meet the sovereignty, Service-Level Agreement (SLA), and other stringent requirements unique to industrial AI applications. This will provide the infrastructure necessary to support applications in design, engineering, simulation, digital twins, robotics, and Generative Artificial Intelligence (Gen AI). In addition, NVIDIA continues to closely align with key industry players such as Siemens, Schneider Electric, and BMW, with initiatives spanning Omniverse and digital twins.
- Sovereign AI Factories as a Revenue Opportunity: Sovereign AI factories are rapidly emerging as a strategic imperative, driven by growing alignment with NVIDIA’s vision that every nation should “own their intelligence.” But beyond the geopolitical and security dimensions, NVIDIA is positioning these factories as high-value revenue opportunities for a wide array of ecosystem stakeholders. This strategic positioning is certainly resonating: NVIDIA announced expanded partnerships with Orange Business and Swisscom, the latter showcasing its full-stack capabilities built entirely on NVIDIA infrastructure. The momentum was further underscored by the strong presence of next-generation cloud-native players (neoclouds), including Nscale, Scaleway, and FluidStack, all of which are building exclusively, for now, on NVIDIA’s infrastructure.
- Political Imperative for Supply Chain Onshoring: The presence of France’s President Emmanuel Macron highlighted governmental commitment to homegrown innovation and digital sovereignty. The conference emphasized Europe’s—and particularly France’s—opportunity to onshore its supply chain and build sovereign AI capabilities. Political and industry leaders stressed the need for broader ecosystem development encompassing talent, infrastructure, and centers of excellence. NVIDIA is actively advancing this vision through the announcement of six AI hubs across Europe. President Macron further emphasized the need to onshore the remainder of the AI supply chain, supporting the establishment of fabrication plants capable of manufacturing at leading-edge semiconductor nodes.
NVIDIA’s visit has catalyzed infrastructure investment, public sector commitments, and supply chain development across Europe. But it also gave NVIDIA the spotlight to reinforce key messages with new partnerships and product announcements:
Agentic AI and Inference Remain Front and Center
It has been clear for some time that inference has overtaken training as NVIDIA’s core focus—this was once again reinforced at GTC Paris 2025. Agentic AI, and the need for efficient, scalable, and secure AI systems to support the exponential growth in token generation, continues to drive its messaging. NVIDIA continues to highlight its unique ecosystem of tightly integrated partners providing the infrastructure to enable agentic solutions.
Dynamo, an open-source framework that disaggregates inference stages (prefill and decode) across hardware types to maximize efficiency and reduce latency, continues to gain traction and is playing a more central role in NVIDIA’s overall message. It now benefits from partner support, as NVIDIA and F5 aim to optimize inference and security on Sesterce (neocloud) infrastructure. The combined solution enables smart Large Language Model (LLM) routing for NVIDIA microservices, secures the Model Context Protocol (MCP) for agentic workflows, and supports multi-tenancy and integration with NVIDIA Dynamo to reduce inference latency and improve GPU resource utilization. These partnerships support the strategic reframing of AI infrastructure around agentic inference, and they align closely with NVIDIA’s narrative of efficient GPU resource utilization.
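To make the disaggregation idea concrete, the sketch below separates prefill (prompt processing) and decode (token generation) onto distinct worker pools and routes each request through both. This is a minimal, hypothetical illustration of the serving pattern Dynamo popularizes, not Dynamo’s actual API; all class and function names here are invented for clarity.

```python
# Illustrative sketch of disaggregated LLM serving: prefill and decode run on
# separate worker pools (potentially different GPU types) so each pool can be
# sized for its own bottleneck. Not Dynamo's API; names are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class KVCache:
    """Stand-in for the key/value attention cache handed off between stages."""
    prompt: str
    tokens: List[str] = field(default_factory=list)


class PrefillWorker:
    """Compute-bound stage: processes the full prompt once and builds the KV cache."""
    def run(self, prompt: str) -> KVCache:
        return KVCache(prompt=prompt)


class DecodeWorker:
    """Memory-bandwidth-bound stage: generates tokens one at a time from the cache."""
    def run(self, cache: KVCache, max_tokens: int = 8) -> str:
        for i in range(max_tokens):
            cache.tokens.append(f"<tok{i}>")  # placeholder for real sampling
        return " ".join(cache.tokens)


class DisaggregatedRouter:
    """Routes each request through separate prefill and decode pools."""
    def __init__(self, prefill_pool: List[PrefillWorker], decode_pool: List[DecodeWorker]):
        self.prefill_pool = prefill_pool
        self.decode_pool = decode_pool
        self._rr = 0  # simple round-robin counter

    def serve(self, prompt: str) -> str:
        prefill = self.prefill_pool[self._rr % len(self.prefill_pool)]
        decode = self.decode_pool[self._rr % len(self.decode_pool)]
        self._rr += 1
        cache = prefill.run(prompt)   # stage 1: build the KV cache
        return decode.run(cache)      # stage 2: generate tokens from the cache


if __name__ == "__main__":
    router = DisaggregatedRouter([PrefillWorker()], [DecodeWorker(), DecodeWorker()])
    print(router.serve("Summarize the GTC Paris keynote."))
```

The design point is that the two stages stress hardware differently, so scheduling and scaling them independently (rather than co-locating both on every GPU) is what yields the latency and utilization gains described above.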
NVIDIA Underscores AI Democratization Message with DGX Cloud Lepton Announcement
DGX Cloud Lepton is a free platform, built on Lepton AI, that provides developers with a unified interface to manage GPU resources across multiple cloud environments. While it currently supports only owned or rented infrastructure, future iterations will enable on-demand GPU access, potentially transforming access to AI-optimized compute. DGX Cloud Lepton presents a strategic opportunity to optimize resource utilization, empower Europe’s neocloud providers, and create a neutral marketplace for new infrastructure players, such as telcos, to engage developers. However, it also introduces the risk of commoditization: with cost becoming the only differentiator, the platform could drive a race to the bottom and squeeze out smaller players that lack the scale to lower prices while remaining profitable.
Making AI Reliable, Responsible, and Explainable
In view of mounting skepticism around the reliability and understandability of AI-enabled systems and solutions (especially related to Physical AI in the form of robots and driverless cars), the necessity of putting “safety and security guardrails” around AI was widely discussed in the keynote, in various GTC sessions, and across the exhibition. This takes many forms, ranging from letting AI agents check the validity of one another’s decisions at runtime to safety-centric training (NVIDIA AI Safety Recipe). The Halos safety framework for automotive and robotics was also highlighted repeatedly. Importantly, NVIDIA claims that its longstanding open (source) ecosystem approach to AI tools and models is the best guarantee to collectively keep AI under control, not just for physical systems, but also to protect language models against an increasing number of cyberattacks. Clearly, these considerations are testimony to AI fast becoming both mature and mainstream, and hence being subjected to quality and reliability requirements and best practices, often in the form of vertical-specific regulation.
AI and Accelerated Simulation Continuing to Fuel More Vertical Use Cases
This edition of GTC was no different from previous ones in showcasing how AI is further penetrating verticals and becoming more deeply embedded in the very fabric of solutions and systems. Interesting exhibition demos included smart city simulation, searching for corner-case video training data for autonomous driving, AI-driven supply chain simulation, Virtual Reality (VR)-based car design AI agents, and a wide range of manufacturing and industrial design use cases, such as wind tunnel simulation enabled by NVIDIA’s partnership with Siemens on industrial AI. It is important to understand that AI often enhances existing software systems rather than replacing them, with Gen AI increasingly taking the role of a seamless user interface to complex underlying legacy solutions, unlocking value for human operators. At the same time, the horizontal and generic nature of LLMs allows them to be deployed and scaled almost universally across verticals and markets, customizable for a virtually unlimited number of use cases.
Conclusion
Despite positive developments within the European AI ecosystem, significant structural challenges continue to hinder regional progress. None is more critical than power availability and grid limitations across the region: U.K. stakeholders, for example, were particularly animated about the abundance of power available in the North Sea and the grid constraints that prevent it from reaching where it is needed. These issues will remain major barriers to regional AI development in the coming years.
Sustained regional success will require long-term commitment, strong collaboration, and a unified vision from both the public and private sectors to effectively tackle all market challenges. While the NVIDIA roadshow plays an important role, it can only do so much. Now, it is up to Europe to unite and drive forward the creation of a sovereign AI supply chain.
For more updates on NVIDIA’s technology roadmap, check out the following content:
- GTC 2025: Redefining AI's Computational Horizon With Token Economy
- NVIDIA GTC at VivaTech Brings Hope to the European AI Ecosystem, but Some Neoclouds May Be Left Wanting
- Cloud AI Market Update: NVIDIA’s Cloud Strategy, Hyperscalers' ASICs, and DeepSeek
- NVIDIA's Strategy: Dominating AI Through Ecosystem, Access, and Interconnect