Artificial Intelligence and Machine Learning
This report delivers a quantitative assessment of Machine Learning (ML) usage across numerous consumer, commercial, and industrial device markets. The assessment was informed and further refined by a qualitative analysis of the technological, business, and political drivers and constraints impacting the use of Artificial Intelligence (AI) technologies.
Detailed shipments and segmentation are provided in product-based Market Data (MD) research deliverables spanning automotive, mobile devices, wearables, smart home, robotics, drones, manufacturing, retail, video systems, buildings, and energy. Device categories, segmentation, and annual shipment volumes used in this Artificial Intelligence and Machine Learning MD are derived from existing device and product data sets.
A team of researchers from Sun Yat-sen University in China has developed a new technique for Artificial Intelligence (AI) inference that spreads the workload across both the edge and the cloud; the researchers call this technique "co-inference." A combination of 5G and co-inference could massively improve flexibility in managing inference on devices. The researchers present their approach in a framework called Edgenet, a deep-learning co-inference model that marries the edge and the cloud. Co-inference relies on Deep Neural Network (DNN) partitioning: the layers of a DNN are adaptively split between the edge device and the cloud according to the bandwidth and compute available at each. The critical task is identifying the most computationally intensive layers of a DNN and running the inference of those layers in the cloud. Done correctly, this reduces latency while sending as little data to the cloud as possible.
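The partitioning idea above can be sketched as a simple latency minimization: for each candidate split point, total latency is edge compute, plus the time to upload the intermediate activation over the available link, plus cloud compute. The function and all per-layer figures below are illustrative assumptions for this sketch, not the researchers' actual Edgenet implementation, which profiles layers and measures bandwidth at runtime.

```python
def best_split(input_mb, edge_ms, cloud_ms, out_mb, bandwidth_mbps):
    """Pick the DNN split point that minimizes end-to-end latency.

    A split at k means layers 0..k-1 run on the edge device and
    layers k.. run in the cloud.

    input_mb       - size of the raw model input, in megabits
    edge_ms[i]     - time to run layer i on the edge device, in ms
    cloud_ms[i]    - time to run layer i in the cloud, in ms
    out_mb[i]      - size of layer i's output activation, in megabits
    bandwidth_mbps - uplink bandwidth, in megabits per second
    Returns (split_index, latency_ms).
    """
    n = len(edge_ms)
    best_k, best_latency = 0, float("inf")
    for k in range(n + 1):
        # Data crossing the link: the raw input if nothing runs at the
        # edge, otherwise the activation of the last edge-side layer.
        transfer_mb = input_mb if k == 0 else out_mb[k - 1]
        latency = (sum(edge_ms[:k])                        # edge compute
                   + transfer_mb / bandwidth_mbps * 1000.0 # upload (ms)
                   + sum(cloud_ms[k:]))                    # cloud compute
        if latency < best_latency:
            best_k, best_latency = k, latency
    return best_k, best_latency


if __name__ == "__main__":
    # Hypothetical 4-layer network: the cloud is ~10x faster per layer,
    # and activations shrink with depth, so a mid-network split wins.
    edge_ms = [40, 60, 120, 150]
    cloud_ms = [4, 6, 12, 15]
    out_mb = [8.0, 4.0, 1.0, 0.1]
    k, t = best_split(16.0, edge_ms, cloud_ms, out_mb, bandwidth_mbps=20.0)
    print(f"split after layer {k}, latency {t:.1f} ms")
```

With these made-up numbers the search keeps the first three layers on the edge, because by that depth the activation has shrunk enough that the upload cost no longer dominates; on a faster link the optimum shifts toward the cloud, which is the adaptivity the co-inference approach exploits.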
The International Broadcaster Conference runs from September 15th to 19th. One of the most significant new trends promoted at this conference will be the implementation of artificial intelligence (AI) and machine learning (ML) in video services. While some solutions marketed as AI or Machine Learning simply migrate from editor- or developer-coded optimization methods to neural-network-trained solutions, a host of new solutions leverage video analytics to generate metadata.
Every client is assigned a key member of our research team based on their organization's needs and goals, and an unlimited number of Analyst Inquiry calls are available to answer your specific questions.
AI is going to see dramatic growth in adoption across many verticals. However, the currently popular model of running AI inference and training in the cloud is simply not appropriate for many use cases, creating a sizable opportunity for edge AI hardware to flourish. In this webinar, Malik Saadi, Vice President of Strategic Technologies, and Jack Vernon, Industry Analyst, will cover the main drivers for shifting AI to the edge, how the AI technology stack is changing to reflect this shift, and the market opportunity in edge AI.
This webinar will address the following questions:
- How is AI currently being implemented?
- What is the case for shifting AI processing to the edge?
- What are the use cases that will drive edge AI?
- What are the hardware options for implementing AI at the edge?
- How big is the market opportunity in edge AI implementation?
The Bright Ideas Visionaries Should Have Learned at CES 2019
Accelerated Growth Opportunities Identified for Telco, Smart Factory, and Enterprise Video Markets
Deep Learning-Based Machine Vision Accelerates the Drive Toward the Smart Factory
ABI Analysts to Provide Strategic Guidance About Transformative Technologies to CES 2019 Attendees
Big Changes for Autonomous Vehicles, Deep Learning and Augmented Reality Markets