Co-Inference: An Artificial Intelligence Technique that 5G Will Unlock
A team of researchers from Sun Yat-sen University in China has developed a new technique for Artificial Intelligence (AI) inference that spreads the inference workload across both the edge and the cloud; the researchers call this technique "co-inference." Combined with 5G, co-inference could greatly improve flexibility in managing inference on devices. The researchers marry the edge and the cloud in a framework called Edgenet, a deep-learning co-inference model.

Co-inference relies on Deep Neural Network (DNN) partitioning: adaptively splitting a DNN's layers between the edge and the cloud according to the bandwidth and compute available at each, and assigning each layer's processing either to the edge device or to the cloud. The critical task is identifying the most computationally intensive layers of the DNN and running the inference for those layers in the cloud. Done correctly, this reduces latency while sending as little data to the cloud as possible.
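The partitioning idea can be made concrete with a small sketch. The function below is not the researchers' actual algorithm, only an illustration under assumed inputs: hypothetical per-layer execution times on the edge and in the cloud, hypothetical activation sizes, and a measured uplink bandwidth. It enumerates every candidate split point and picks the one that minimizes end-to-end latency, which is the essence of adapting the split to available bandwidth and compute.

```python
# Illustrative sketch of DNN partition-point selection (not the paper's code).
# A split k means layers [0, k) run on the edge device and layers [k, n) run
# in the cloud; the activation produced by layer k-1 is uploaded in between.
# k = 0 uploads the raw input (pure cloud); k = n uploads nothing (pure edge).

def best_split(edge_ms, cloud_ms, out_bytes, bandwidth_bps, input_bytes):
    """Return (split_index, total_latency_ms) minimizing end-to-end latency.

    edge_ms[i]   -- time to run layer i on the edge device, in ms (assumed)
    cloud_ms[i]  -- time to run layer i in the cloud, in ms (assumed)
    out_bytes[i] -- size of layer i's output activation, in bytes (assumed)
    """
    n = len(edge_ms)
    best_k, best_latency = 0, float("inf")
    for k in range(n + 1):
        edge_time = sum(edge_ms[:k])          # compute done locally
        cloud_time = sum(cloud_ms[k:])        # compute done remotely
        # Bytes that cross the network at this split point.
        payload = input_bytes if k == 0 else (0 if k == n else out_bytes[k - 1])
        upload_ms = payload * 8 / bandwidth_bps * 1000
        total = edge_time + upload_ms + cloud_time
        if total < best_latency:
            best_k, best_latency = k, total
    return best_k, best_latency


# Toy four-layer profile: early layers are cheap but emit large activations,
# later layers are heavy but emit small ones. All numbers are invented.
edge = [5, 20, 40, 10]
cloud = [1, 4, 8, 2]
acts = [200_000, 40_000, 10_000, 4_000]

print(best_split(edge, cloud, acts, bandwidth_bps=10e6, input_bytes=600_000))
print(best_split(edge, cloud, acts, bandwidth_bps=100e6, input_bytes=600_000))
```

On the slow 10 Mbps link the minimum-latency split keeps more layers on the edge to shrink the uploaded activation; on the fast 100 Mbps link the optimum shifts toward the cloud, since the heavy layers run faster there. That bandwidth sensitivity is why a high-throughput, low-latency link such as 5G changes which splits are worthwhile.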
Related content:
- Too Much Fragmentation is Decelerating European 5G Deployments
- Digital Map Vendors Are Jumping on the AI Bandwagon
- The Transformative Potential of Massive MIMO
- The “Strategy Paradox” in a 5G Era—Ericsson, Huawei, and Nokia Must Embrace “Options” Now to Have Options in the Future
- IoT Market Tracker: 5G