Microlocation Ushers in the Next Advancement in Mapping and Navigation of Robots

4Q 2018 | IN-5309

Without Geography, You Are Nowhere

NEWS


Robots, for the most part, are highly inflexible systems that cannot operate outside a highly structured environment. This does not matter so much if their task is arc welding or assembly, but when the use case dictates an unstructured, unmapped, or uneven environment, today’s robots are not up to the task of mass deployment. While emphasis is placed on the value of machine learning and the drive toward collaboration, all this potential means very little if the systems cannot locate themselves within a workspace, map out their surroundings, and navigate accordingly. Fortunately, advances in localization, mapping, and navigation, enabled by sensors and machine vision, are changing this.

SLAM and Navigation

IMPACT


Most mobile robots are categorized as automated guided vehicles (AGVs). They navigate their surroundings, usually warehouses, by following magnetic tape, barcodes, and fiducial markers. They are being deployed at grand scale by pure-play e-retailers like Amazon and Ocado, and if you can afford to build your factories and sortation centers from the ground up, incorporating a comprehensive solution for them makes sense. But that choice applies primarily to e-retailing giants enjoying supercharged demand, and to fast-growing warehousing companies in the developing world where CAPEX growth is high. In the developed world, a more adaptable, flexible, and sophisticated solution is necessary.

Enter autonomous mobile robots (AMRs). These are mobile systems that, through a combination of reference maps, sensor fusion technologies, and machine vision, can localize themselves in a changing environment, make alterations to their reference map, and navigate without getting stuck.

This has significant benefits for operations where flexibility is key, operational change is part of the business, and in verticals with a large number of brownfield sites and low capital investment growth; the prime example is manufacturing space in advanced and aging countries.

The technology that enables this is simultaneous localization and mapping (SLAM), which refers to a robot mapping out an area and localizing itself within it at the same time. This has huge value for systems that explore uncharted areas, such as drones used to map out the interior of mines to capture the effects of blasting, and thus improve health and safety standards.

It is worth noting that most SLAM systems do not amount to full dynamic mapping of uncharted environments; they tend to use a reference map that can be updated and made more detailed (see the sketch after this list). This is for a few reasons:

• Dynamic mapping is a challenging technology that is currently unsuited to most market demands.

• Having a reference map adds robustness to the solution.

• Reference maps, outside of unmapped or highly changeable environments, are readily available.
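
To make the reference-map approach concrete, the sketch below shows the common pattern of starting from a prior occupancy grid and refining individual cells as new range observations arrive. It is illustrative only; the grid size, resolution, and log-odds weights are assumptions rather than any vendor's implementation.

```python
import numpy as np

# Illustrative log-odds occupancy grid: 0 = unknown, >0 = likely occupied, <0 = likely free.
# Grid size, resolution, and update weights are arbitrary assumptions for the sketch.
GRID_SIZE = (200, 200)        # cells
RESOLUTION = 0.05             # meters per cell
L_OCC, L_FREE = 0.85, -0.4    # log-odds increments for an obstacle hit / free-space miss

reference_map = np.zeros(GRID_SIZE)   # start from a prior (reference) map, not from scratch

def world_to_cell(x, y):
    """Convert world coordinates (meters) to grid indices."""
    return int(x / RESOLUTION), int(y / RESOLUTION)

def update_cell(grid, x, y, occupied):
    """Refine one cell of the reference map with a new observation."""
    i, j = world_to_cell(x, y)
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] += L_OCC if occupied else L_FREE

# Example: a range reading reports an obstacle at (3.2 m, 1.7 m)
# and free space at (2.0 m, 1.0 m) along the beam.
update_cell(reference_map, 3.2, 1.7, occupied=True)
update_cell(reference_map, 2.0, 1.0, occupied=False)
```

The value of the prior map is graceful degradation: if updates fail or the scene changes faster than the robot can observe, the system falls back on the stored reference rather than an empty map.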

While SLAM often requires a range of optical and laser sensors, visual simultaneous localization and mapping (vSLAM) technology is becoming more mature, with Seegrid’s stereoscopic solution being a prime example. The benefit of a visual system is that it is less expensive than LIDAR and less affected by environmental factors like elevation and dust. A drawback is the need for sensors that allow SLAM at night; for this, multi-sensor solutions are still necessary.

A further development is the testing of monocular vSLAM. Large stereoscopic camera solutions, or cameras combined with other sensors, are expensive and bulky, but research has shown that SLAM can be done successfully using only a single RGB camera. Devices equipped with just a single camera, such as webcams and mobile phones, are vastly more common and accessible than specialist sensing equipment. In 2015, researchers from MIT developed a system that used a single RGB camera to detect and recognize objects in real time, and it was found to perform comparably to, or even better than, traditional approaches.
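
The core building block of a monocular pipeline is estimating camera motion between two frames from matched image features. The sketch below, using OpenCV, is a simplified illustration of that step under assumed inputs: the frame file names and camera intrinsics are placeholders, and a full vSLAM system adds keyframe management, scale handling, and loop closure on top.

```python
import cv2
import numpy as np

# Placeholder inputs: two consecutive frames and an assumed camera intrinsic matrix.
frame1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])

# Detect and match ORB features between the two frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the essential matrix and recover relative rotation and (scale-free) translation.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Relative rotation:\n", R)
print("Translation direction (unknown scale):\n", t.ravel())
```

The scale ambiguity in the recovered translation is exactly why monocular systems need extra cues (known object sizes, wheel odometry, or an IMU) before they can be used for metric navigation.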

There is an intersection developing between SLAM and object recognition. Object recognition can be difficult to accomplish, as it typically relies on complex and costly sensing systems and computationally intensive 3D modeling. The low cost, ubiquity, and increasing power of imaging hardware and software make visual sensing a very compelling solution, and much effort has been expended on developing vision-based object recognition techniques. Much like with vSLAM, vision-based solutions are becoming the norm for object recognition.

SLAM information is increasingly being used to augment and improve object recognition. Preprocessed 3D SLAM maps generated by a single camera depict candidate objects from multiple perspectives. In this way, individual objects are isolated more accurately and more quickly than with approaches using still images, especially when items are occluded or in close proximity to one another.

Once the robot has mapped an area and localized its position, the next obstacle is navigation. This can be achieved by following a simple route or through more expensive and advanced methods using deep learning (DL)-enabled machine vision. DL machine vision techniques are now being used to analyze non-traditional forms of sensor data beyond cameras, such as Light Detection and Ranging (LIDAR), radar, and ultrasound. Combined with sensor fusion technologies, DL machine vision allows for autonomous navigation in conjunction with advanced mapping and positioning.
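
At the simpler end of that spectrum, navigation means planning a route over the map produced by SLAM and then tracking it. The sketch below is a generic grid-based A* planner on a toy occupancy grid; it illustrates the planning step only and is not any vendor's navigation method, and the example map is an assumption.

```python
import heapq

def astar(grid, start, goal):
    """Plan a path on a 2D occupancy grid (0 = free, 1 = occupied) with 4-connected moves."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan distance
    open_set = [(heuristic(start, goal), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, current, parent = heapq.heappop(open_set)
        if current in came_from:
            continue                              # already expanded via a cheaper route
        came_from[current] = parent
        if current == goal:                       # reconstruct the path back to the start
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = g + 1
                if new_g < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = new_g
                    heapq.heappush(open_set, (new_g + heuristic((nr, nc), goal),
                                              new_g, (nr, nc), current))
    return None  # no path found

# Toy occupancy grid: 0 = free, 1 = obstacle.
grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```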

While the Robot Operating System (ROS) was essential to democratizing the capabilities of SLAM and navigation, it is too unwieldy for effective industrial applications. Luckily for the industry, a range of alternatives is developing, whether through company-specific modifications of ROS, like Vecna’s V-ROS, or through proprietary operating systems. Examples include Seegrid’s stereo-based vSLAM OS, BlueBotics’ ANT, which observes environmental structure to determine position, Perceptin’s vSLAM system (powered by Nvidia Jetson), and Brain Corp’s BrainOS, which allows for navigation. Brain Corp has also announced a collaboration with Qualcomm, alongside its existing partnerships with Intel and Nvidia.
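
For context on what ROS democratized, the snippet below sends a navigation goal to the standard move_base interface of the ROS navigation stack. The goal coordinates are placeholders, and the proprietary stacks named above expose broadly similar but vendor-specific interfaces.

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_nav_goal")

# Connect to the move_base action server provided by the ROS navigation stack.
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

# Placeholder goal: 2 m forward and 1 m left of the map origin, facing forward.
goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("Navigation finished with state %s", client.get_state())
```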

Bringing Microlocation to Robots

RECOMMENDATIONS


End users of mobile robotics should not assume that advances in navigation technology will, on their own, localize robots with centimeter- or millimeter-level accuracy. Radio frequency (RF) technologies will have to be added to ensure a robust positioning system and to support asset tracking. Among the RF sensor vendors targeting robots for asset tracking and fleet management are BlackBerry, ORBCOMM, and Verizon Connect (formerly Fleetmatics).

Current localization solutions, using a variety of LIDAR and optical sensors, are often limited to indoor and controlled environments due to the limitations and expense of LIDAR. For outdoor applications, GNSS is often key for positioning, yet this large-scale global system is itself too inaccurate for many robotics tasks. Multipath reflection in urban canyons makes localizing autonomous vehicles in cities particularly problematic for use cases like mobile robots in last-mile delivery. The same goes for built environments like indoor and underground spaces, all areas where robots are expected to proliferate.

Sometimes the precision needed for certain tasks is greater than the capability of current RF sensors; this particularly relates to vehicle-to-vehicle navigation for collision avoidance, mobile manipulation, and wireless charging. In these cases, the accuracy needs to be in the low centimeters or even sub-millimeter range. This has led to the coining of the term “microlocation” to describe smaller constellations of highly precise RF sensors that augment the larger GNSS constellations and fill the role of localization in areas where GNSS fails, ideally at a fraction of the cost of advanced machine vision and expensive LIDAR.

Among the key drivers of microlocation is Boston-based startup Humatics. By deploying RF beacons on mobile robots, on stationary infrastructure like cranes, and across the site, Humatics’ solution can provide 3D microlocation with accuracies reaching 2 centimeters at a maximum range of 500 meters. The company is developing more precise technology that will offer millimeter-level accuracy over a range of 30 meters, enabling mobile manipulation and other use cases. Humatics uses ultra-wideband frequencies and aims to create a new category of RF-based local coordinate frames to complement GNSS coordinate frames in areas where the latter is insufficient.

These RF sensors, which relay coordinates to beacon arrays to establish 3D positioning, will facilitate the nascent capabilities of mobile robots; for example, mobile manipulation via mounted robotic arms is very difficult with current positioning technology but becomes much more achievable with RF sensors. While these local frames will have far less scope than GNSS, they are more accurate and can be deployed in all environments, including underground. Their range, while limited, is significantly greater than that of costly machine vision-based positioning solutions, which require line of sight. RF sensors also position in real time, an improvement on the time it takes machine vision solutions to make a determination. Of course, machine vision is useful for other purposes, like collecting data points and feeding predictive and prescriptive analytics solutions, so the deployment of these differing technologies is not mutually exclusive.
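
The positioning step itself is a standard multilateration problem: given known beacon locations and measured ranges, solve for the position that best explains those ranges. The sketch below is illustrative only; the beacon layout, noise level, and least-squares solver are assumptions, not Humatics’ method.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed fixed beacon positions in a local site frame (meters).
beacons = np.array([[0.0, 0.0, 3.0],
                    [20.0, 0.0, 3.0],
                    [20.0, 15.0, 3.0],
                    [0.0, 15.0, 3.0],
                    [10.0, 7.5, 0.5]])

# Simulated range measurements from a true position, with ~2 cm of noise.
rng = np.random.default_rng(0)
true_pos = np.array([7.5, 4.2, 0.5])
ranges = np.linalg.norm(beacons - true_pos, axis=1) + rng.normal(0, 0.02, len(beacons))

def residuals(pos):
    """Difference between predicted and measured beacon ranges."""
    return np.linalg.norm(beacons - pos, axis=1) - ranges

# Solve for the tag position; the initial guess is the centroid of the beacon array.
estimate = least_squares(residuals, x0=beacons.mean(axis=0)).x
print("Estimated position (m):", np.round(estimate, 3))
```

With centimeter-level range noise, a handful of well-placed beacons is enough to recover a position to a few centimeters, which is the accuracy class the microlocation vendors are targeting.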

In conclusion, localization, mapping, and navigation represent the three primary challenges for robotics in breaking into non-traditional markets with unstructured environments. Sensor proliferation is combining with advances in deep learning-enabled machine vision to make vision-based mobility achievable, while the development of multiple proprietary operating systems is allowing more and more companies to tailor SLAM solutions to their specific needs. End users now have the flexibility to choose between accessible solutions, like rough grid-map references, and real-time dynamic mapping at the advanced end of the spectrum. Alongside this development, advances in RF sensors, previously used for underground mining and millimeter-level positioning in mission-critical use cases, will make robot positioning accurate to the point that mobile manipulation and autonomous movement can be adopted on a large scale in complex and varied environments. Though each individual technology has drawbacks, such as machine vision being expensive, stereoscopic cameras being bulky, LIDAR being vulnerable to environmental factors, or RF sensors requiring more infrastructure, the gradual cross-adoption of these technologies will mean robots can be deployed to construction sites, remote mines, and even the wilderness without significant infrastructure costs.