NVIDIA Strengthens AI Software Offering for GPUs


2Q 2018 | IN-5099


TensorRT Integration with TensorFlow and Kubernetes Support for GPUs Announced at NVIDIA GTC


NVIDIA is doing some very important heavy lifting when it comes to AI. Much of this momentum is self-reinforcing: for both AI training and inference, its GPUs dramatically outperform CPUs on most fronts, making them the obvious choice for any implementer looking to build and deploy AI algorithms. Selling hardware is NVIDIA’s main goal, so the company is committed to making life as easy as possible for developers and implementers, at whatever scale they operate. At NVIDIA’s annual GPU Technology Conference (GTC) in Silicon Valley, CEO Jensen Huang took the opportunity to make significant announcements regarding NVIDIA’s AI software stack. TensorRT, NVIDIA’s AI implementation layer, is an optimizing compiler and runtime for deep learning inference on GPUs. TensorRT will now be integrated with TensorFlow, and Kubernetes will gain support for scheduling workloads on GPUs.
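The Kubernetes announcement builds on the device-plugin mechanism already present in Kubernetes: once NVIDIA's device plugin is deployed on a cluster's GPU nodes, pods can request GPUs as a schedulable resource under the `nvidia.com/gpu` name. A minimal sketch follows; the pod name, container name, and container image tag are illustrative placeholders, not details from the announcement:

```yaml
# Minimal pod spec requesting one GPU via the NVIDIA device plugin.
# Assumes the plugin is already running on the cluster's GPU nodes.
apiVersion: v1
kind: Pod
metadata:
  name: trt-inference            # illustrative name
spec:
  containers:
  - name: inference
    image: nvcr.io/nvidia/tensorrt  # hypothetical NGC image reference
    resources:
      limits:
        nvidia.com/gpu: 1        # schedule onto a node with a free GPU
```

With a spec like this, the Kubernetes scheduler places the pod only on nodes advertising free `nvidia.com/gpu` capacity, which is what makes GPU-backed inference services manageable alongside ordinary containerized workloads.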

Announcements Should Increase Performance and Developer Options


You must be a subscriber to view this ABI Insight.