DeGirum Leveraging Pruning to Deliver Efficient Edge AI Solution


By Lian Jye Su | 4Q 2021 | IN-6397


DeGirum Introduced ORCA-NNX


In Q4 2021, Artificial Intelligence (AI) chipset startup DeGirum emerged from stealth mode and announced the launch of its ORCA-NNX Edge Inference Accelerator. Coupled with its Deep Neural Network (DNN) pruning support, the company offers a full-stack, hardware-to-software technology suite for low-power, high-accuracy Machine Learning (ML) inference at the edge.

Founded in 2017, the California-based startup focuses on ML hardware and software solutions built around its support for pruned DNN models. The company’s first ML inference processor, ORCA-NNX, is a flexible Application Specific Integrated Circuit (ASIC) that accelerates data processing through high bandwidth and compute efficiency. The processor comes with a network compiler that identifies and generates the most memory-conserving code for a particular pruned DNN. The company has also developed a set of software libraries and tools targeted at developers.
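To give a sense of what pruning a DNN involves, the sketch below shows generic unstructured magnitude pruning, where the smallest-magnitude weights of a layer are zeroed so that a sparsity-aware compiler and accelerator can skip them. This is a textbook illustration only; DeGirum's actual pruning pipeline and compiler internals are not public, and the function name and threshold scheme here are assumptions for illustration.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    Generic unstructured magnitude pruning, NOT DeGirum's proprietary
    method; shown only to illustrate how pruned models become sparse.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Example: pruning a toy 2x5 weight matrix at 50% sparsity zeroes
# the five smallest-magnitude entries and keeps the rest intact.
layer = np.arange(1.0, 11.0).reshape(2, 5)
pruned = magnitude_prune(layer, 0.5)
```

The payoff of this kind of sparsity is that zeroed weights need not be stored or multiplied, which is precisely what a sparsity-aware ASIC such as ORCA-NNX is positioned to exploit.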

A Combination of Hardware and Software for Pruning


Pruning has always …
