FPGA in AI: The Good, the Bad, and the Ugly
In recent years, Field-Programmable Gate Array (FPGA) vendors have pushed aggressively into Artificial Intelligence (AI) and Machine Learning (ML). The FPGA partnership between Microsoft and Intel demonstrated in 2018 that FPGAs can handle large AI inference workloads. The FPGA-fueled architecture provides ultra-high throughput that can run ResNet-50, an industry-standard deep neural network for object classification, without batching, meaning no tradeoff between high performance and low latency-sensitive cost. This stems from the FPGA's hardware flexibility: AI developers can fully customize FPGAs to their needs, adding custom data paths, bit-widths, memory hierarchies, and bit-level manipulations. As a result, FPGAs can adapt to evolving AI algorithms.
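To make the bit-width point concrete, here is a minimal sketch (in plain Python, purely illustrative; the `quantize` helper is hypothetical, not any vendor API) of what choosing a custom fixed-point precision means. On a GPU, arithmetic is confined to a few fixed formats; on an FPGA, a designer can pick, say, a 6-bit multiplier if the model tolerates it, trading precision for area and power.

```python
def quantize(x, bits, frac_bits):
    """Simulate a signed fixed-point datapath with `bits` total bits,
    of which `frac_bits` sit after the binary point (hypothetical helper)."""
    scale = 1 << frac_bits
    lo = -(1 << (bits - 1))        # most negative representable code
    hi = (1 << (bits - 1)) - 1     # most positive representable code
    q = max(lo, min(hi, round(x * scale)))  # round, then saturate
    return q / scale

# A wider datapath preserves more of the value than a narrow one:
print(quantize(0.3141, bits=8, frac_bits=6))  # -> 0.3125
print(quantize(0.3141, bits=4, frac_bits=2))  # -> 0.25
```

The design choice this illustrates: because FPGA logic is synthesized per model, nothing forces every layer to use the same width, so precision can be tuned layer by layer rather than globally.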
Despite all of this, the popularity of FPGAs in the ML space is not on par with Graphics Processing Unit (GPU) and certain Application-Specific Integrated Circuit (ASIC) products. A key obstacle is the programming-language preference of AI software developers. Popular ML toolkits such as cuDNN, Open…