Gesture Control in Mobile Devices Galvanized by Google's Radar Technology Approval


By David McQueen | 1Q 2019 | IN-5390


Google Gets FCC Approval for its Project Soli Radar Technology

NEWS


The recent announcement that the Federal Communications Commission (FCC) has approved Google's use of its radar-based sensor technology, called Project Soli, could be the shot in the arm that the smartphone market, wearables, and other device categories have been looking for to make gesture control more widespread across devices and applications. Google's solution looks set to provide an effective alternative to the gesture control technologies already making their mark on the smart devices market, bringing closer a future in which touchless hand gesture recognition becomes the norm and an increasing number of tasks can be performed on a device through a more natural, intuitive interface.

Gesture Control on Mobile Devices a Step Closer, but How Close?

IMPACT


While the FCC has only just given Project Soli the green light, it is far from a new technology. The project has been in development since 2015 under the auspices of Google's Advanced Technology and Projects (ATAP) division, which has a reputation for undertaking projects that never reach fruition. In the case of Project Soli, however, the division may actually be on to a winner.

Project Soli is a radar-based gesture control technology that tracks hand movements in real time in three-dimensional space. Initially, the radar chip had trouble picking up motions and gestures and needed a power boost to become more accurate. For this reason, the project fell foul of the FCC's conditions despite complying with European Telecommunications Standards Institute (ETSI) standards. Once the commission determined that Project Soli could serve the public interest and had little potential to interfere with other technologies or cause harm, it approved Google's request for the chip to operate at higher power levels (although not the levels first proposed) in the 57-64 GHz frequency band.
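
Soli's actual signal processing pipeline is not public, but the principle behind one common millimeter-wave approach, frequency-modulated continuous-wave (FMCW) radar, can be sketched. In the minimal Python example below, all chirp parameters and the hand_range() helper are illustrative assumptions rather than Soli's real design: mixing the transmitted chirp with its received echo yields a beat tone whose frequency is proportional to the hand's distance.

```python
import numpy as np

# Hypothetical FMCW chirp parameters, loosely based on the 57-64 GHz
# band approved for Soli; the real chip's parameters are not public.
C = 3e8                          # speed of light (m/s)
BANDWIDTH = 7e9                  # chirp frequency sweep (Hz)
CHIRP_TIME = 1e-3                # chirp duration (s)
FS = 2e6                         # ADC sample rate (Hz)
SLOPE = BANDWIDTH / CHIRP_TIME   # sweep rate (Hz/s)

def hand_range(beat_signal: np.ndarray) -> float:
    """Estimate target range (m) from one chirp's beat signal.

    Mixing the transmitted chirp with its received echo produces a
    'beat' tone whose frequency is proportional to target distance.
    """
    spectrum = np.abs(np.fft.rfft(beat_signal))
    freqs = np.fft.rfftfreq(len(beat_signal), d=1 / FS)
    f_beat = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return C * f_beat / (2 * SLOPE)

# Synthetic beat signal for a hand about 15 cm from the sensor
t = np.arange(0, CHIRP_TIME, 1 / FS)
f_beat_true = 2 * SLOPE * 0.15 / C  # 7 kHz beat for a 0.15 m target
echo = np.cos(2 * np.pi * f_beat_true * t)
print(f"estimated range: {hand_range(echo):.3f} m")  # ~0.150 m
```

A real pipeline would track such range (and Doppler) estimates frame by frame and feed them to a gesture classifier.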

While not new to smartphones, gesture control has generally been the preserve of camera systems, which allow users to control applications and camera functions with certain gestures, such as a hand wave. ABI Research expects smartphone shipments with this type of gesture control, using dedicated processors and video cameras, to reach 210 million in 2019, rising to nearly 500 million by 2023. Similar camera-based systems have also appeared elsewhere, such as Microsoft's Kinect peripheral for video game consoles and Leap Motion's controller for the PC. Now that Soli has been given the all-clear by the FCC, despite still being at the experimental stage, it could help stimulate greater use of gesture control in many types of devices, based on technologies beyond cameras.

Gesture control innovation using ultrasound is also part of this solutions mix; it uses proximity sensing through speakers, microphones, and dedicated sensors to let users control the device. The technology has been championed by Elliptic Labs and deployed specifically in Xiaomi's Mi MIX smartphones. The attraction of ultrasound for gesturing is that it can use standard components, such as microphones and speakers, already on a device, whereas Project Soli will require the addition of a (potentially costly) radar chip. The ultrasound solution also has the added benefits of replacing a smartphone's proximity sensor, allowing for changes in screen design, and providing presence sensing; these capabilities were key attributes of the Mi MIX, arguably one of the first smartphones to offer a bezel-less design. Use of the Elliptic Labs solution is also set to gather momentum in the smartphone arena following a partnership with AAC Technologies that lets any device OEM already working with AAC leverage its audio components to capture and use ultrasound.
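
Elliptic Labs' implementation is proprietary, but the Doppler principle behind speaker-and-microphone sensing can be illustrated. In the hypothetical Python sketch below, the pilot-tone frequency, frame length, and hand_velocity() helper are assumptions for illustration only: the speaker emits a near-inaudible tone, and the frequency shift of its reflection captured by the microphone indicates whether a hand is moving toward or away from the device.

```python
import numpy as np

FS = 48_000        # standard smartphone audio sample rate (Hz)
TONE = 20_000      # near-ultrasonic pilot tone playable by a speaker (Hz)
SOUND_SPEED = 343  # speed of sound in air (m/s)

def hand_velocity(mic_frame: np.ndarray) -> float:
    """Estimate radial hand velocity (m/s) from the Doppler shift of
    the reflected pilot tone; positive means the hand is approaching."""
    windowed = mic_frame * np.hanning(len(mic_frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(mic_frame), d=1 / FS)
    band = (freqs > TONE - 500) & (freqs < TONE + 500)  # search near the tone
    f_echo = freqs[band][np.argmax(spectrum[band])]
    # Two-way Doppler: shift = 2 * v * f0 / c, solved here for v
    return (f_echo - TONE) * SOUND_SPEED / (2 * TONE)

# Synthetic 100 ms microphone frame: echo from a hand closing at 0.5 m/s
t = np.arange(0, 0.1, 1 / FS)
shift = 2 * 0.5 * TONE / SOUND_SPEED  # ~58 Hz Doppler shift
frame = np.cos(2 * np.pi * (TONE + shift) * t)
print(f"estimated velocity: {hand_velocity(frame):+.2f} m/s")  # ~ +0.5
```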

The additive effect of these solutions is giving a significant boost to the provision of innovative device control features based on touchless hand gesture recognition. These will allow an increasing number of tasks to be performed on a device through a more natural, intuitive interface, replacing or augmenting physical controls. The technologies are expected to be used widely beyond smartphones, as gesture control can overcome the limitations of device size, such as on a smartwatch, while detecting any manner of pre-defined fine hand gestures, such as finger-thumb tapping, rubbing, clicking, and swiping. Such systems could also benefit people with limited mobility or other disabilities.
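
To make the interface concept concrete, the toy Python sketch below shows how recognized micro-gestures of the kind listed above might be dispatched to device actions; the gesture names and bound actions are purely illustrative, not any vendor's actual API.

```python
from typing import Callable

# Hypothetical bindings from recognized micro-gestures to device
# actions; the names are illustrative, not any vendor's actual API.
ACTIONS: dict[str, Callable[[], None]] = {
    "thumb_tap":   lambda: print("select item"),
    "finger_rub":  lambda: print("scroll list"),
    "swipe_left":  lambda: print("previous screen"),
    "swipe_right": lambda: print("next screen"),
}

def on_gesture(name: str) -> None:
    """Run the action bound to a recognized gesture; silently ignore
    anything the recognizer reports that has no binding."""
    action = ACTIONS.get(name)
    if action is not None:
        action()

on_gesture("thumb_tap")  # prints "select item"
```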

"Horses for Courses" Market Approach Expected for Gesture Control Technologies

RECOMMENDATIONS


The approval of Project Soli is a welcome addition to the gesture-based controllers already used or trialed in the devices market. The question is whether a single gesture technology will win out in a given product sector, or whether each technology's innate strengths and weaknesses will dictate where it is used. Generally, the choice will come down to a trade-off between the extra cost of introducing the technology and the effectiveness and accuracy it brings. A technology's ability to scale down to smaller form factor devices, notably wearables, will also be a defining factor in selection, as will power consumption, range, field of view, and privacy.

For example, camera and ultrasonic solutions can be implemented in smartphones relatively easily, as many of the necessary components are already resident and so do not necessarily carry the burden of additional cost. However, cameras cannot capture fine gestures with the accuracy achievable by radar or ultrasound, both of which offer submillimeter detection. Indeed, every solution must deliver accurate control and immediate utility to the end user, such as unlocking a screen, or it risks being dismissed as a novelty.

However the market shakes out in terms of technology choice, there is a clear impetus to bring closer a future in which interactive touchless controllers replace all manner of current physical interfaces or, in some cases, the screen itself. With a number of heavy-hitting companies now involved in these technologies, including Intel and Microsoft, and with more expected to follow, gesture control will create a seamless, intuitive interface implemented across an array of devices, from smartphones, wearables, and computers to televisions, gaming consoles, and smart home devices. Moreover, as developers create new interactions and applications, the use of gesture control is expected to spread beyond consumer electronics to other major sectors, most notably healthcare and automotive.

A further improvement to gesture control will come from the addition of artificial intelligence (AI). At present, machine learning techniques are used to detect gestures, which are in turn translated into pre-defined commands. With AI, gesture technology could be trained to recognize an individual's unique movements and use them to trigger specific actions. As AI finds its way into many device types, notably smartphones, this opens the market to a multitude of potential use cases in which gestures are converted into meaningful commands for a wide variety of circumstances and applications.
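
As a rough illustration of that idea, the toy Python sketch below trains a nearest-centroid classifier on a user's own gesture samples and maps new samples to commands. The eight-element feature vectors and the command names are hypothetical stand-ins for whatever features a radar or ultrasound front end would actually produce.

```python
import numpy as np

class GestureClassifier:
    """Toy nearest-centroid classifier mapping per-user gesture feature
    vectors to commands; a stand-in for the machine learning models
    used in production gesture systems."""

    def __init__(self) -> None:
        self.centroids: dict[str, np.ndarray] = {}

    def train(self, command: str, samples: np.ndarray) -> None:
        # Learn the user's "signature" for a gesture as the mean of
        # their recorded feature vectors (rows of `samples`).
        self.centroids[command] = samples.mean(axis=0)

    def predict(self, features: np.ndarray) -> str:
        # Return the command whose learned signature is closest.
        return min(self.centroids,
                   key=lambda cmd: np.linalg.norm(features - self.centroids[cmd]))

rng = np.random.default_rng(seed=0)
clf = GestureClassifier()
# Hypothetical training data: 20 repetitions per gesture, 8 features each
clf.train("volume_up", rng.normal(loc=1.0, scale=0.1, size=(20, 8)))
clf.train("screen_unlock", rng.normal(loc=-1.0, scale=0.1, size=(20, 8)))
print(clf.predict(rng.normal(loc=1.0, scale=0.1, size=8)))  # volume_up
```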