Co-Inference: An Artificial Intelligence Technique that 5G Will Unlock

4Q 2018 | IN-5327
A team of researchers from Sun Yat-sen University in China has developed a new technique for Artificial Intelligence (AI) inference that spreads the workload across both the edge and the cloud; the researchers call this technique “co-inference.” Combined with 5G, co-inference could greatly improve flexibility in how inference is managed on devices. The researchers present a framework called Edgenet, a deep-learning co-inference model that marries the edge and the cloud. Co-inference relies on Deep Neural Network (DNN) partitioning: the layers of a DNN are adaptively split between the edge device and the cloud according to the bandwidth and compute available at each. The critical task is identifying the most computationally intensive layers of the DNN and running the inference of those layers in the cloud. Done correctly, this reduces latency while sending as little data to the cloud as possible.
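The partition-point search behind DNN partitioning can be illustrated with a toy latency model. The sketch below is not the researchers' actual algorithm; the per-layer timings, output sizes, and bandwidth figures are hypothetical, and the function `best_partition` is an illustrative name. It simply tries every possible split of the layer sequence between edge and cloud and keeps the one with the lowest estimated end-to-end latency.

```python
def best_partition(edge_ms, cloud_ms, out_kb, bandwidth_kbps, input_kb):
    """Pick the layer index at which to hand off from edge to cloud.

    Layers [0, k) run on the edge device; layers [k, n) run in the cloud.
    k = 0 means fully cloud (ship the raw input); k = n means fully edge.
    All timing values are hypothetical profiler outputs in milliseconds.
    """
    n = len(edge_ms)
    best_k, best_latency = 0, float("inf")
    for k in range(n + 1):
        edge_time = sum(edge_ms[:k])      # layers executed on-device
        cloud_time = sum(cloud_ms[k:])    # layers executed in the cloud
        # Data crossing the split: the raw input if k == 0,
        # otherwise the output of layer k - 1.
        payload_kb = input_kb if k == 0 else out_kb[k - 1]
        transfer_ms = 0.0 if k == n else payload_kb / bandwidth_kbps * 1000
        total = edge_time + transfer_ms + cloud_time
        if total < best_latency:
            best_k, best_latency = k, total
    return best_k, best_latency


# Toy 4-layer network: early layers produce large activations and are
# expensive on-device, so a good split sits near the front of the network.
edge_ms = [5, 40, 60, 10]      # per-layer time on the edge device
cloud_ms = [1, 4, 6, 1]        # per-layer time in the cloud
out_kb = [200, 80, 20, 1]      # size of each layer's output
k, latency = best_partition(edge_ms, cloud_ms, out_kb,
                            bandwidth_kbps=4000, input_kb=600)
print(k, latency)  # splits after layer 1: 5 + 50 + 11 = 66.0 ms
```

Note how the best split shifts with bandwidth: on a fast 5G link it pays to hand off early, while on a slow link the search converges on running more (or all) layers at the edge, which is exactly the adaptivity the co-inference approach relies on.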