This insight reviews recent advances in robotic manipulation and the technologies to come. The year 2019 is set to be an important one for general manipulation, as robots are no longer expected just to be mobile, but also to be multi-taskers.
The Importance of General Manipulation
In 2018, the robot industry witnessed the maturation of the Autonomous Mobile Robot (AMR). From wide-scale adoption in warehouses and on factory floors to software deployments on existing infrastructure, such as Brain Corp’s collaboration with Walmart, vision-based robot navigation is no longer a constraint. Robots can now move around independently in unstructured environments.
Looking ahead to 2019, the industry is turning to the next big challenge. Warehousing companies struggle to add enough manpower to keep up with the ever-growing demands of the e-commerce industry. For now, different companies have deployed their own unique solutions. According to ABI Research’s Commercial and Collaborative Robots for E-Commerce report (AN-2527), IAM Robotics and inVia Robotics rely on vacuum grippers to pick up parcels, while Magazino uses a rectangular-shaped holder. However, none of these can be considered a truly universal solution for general manipulation.
Current Hardware and Software Advancements
The idea of general manipulation originates from various attempts to develop robotic grippers that can pick up and hold all types of objects, regardless of packaging, shape, or surface material. The most common solutions on the market right now are finger grippers and vacuum grippers, each with its own pros and cons. Finger grippers have been around for a long time and are commonly deployed in discrete manufacturing. Many companies, such as OnRobot, Robotiq, and Schunk, have introduced innovations to finger grippers, including adaptive grippers that feature a wider stroke for tasks requiring more precision and a higher tolerance for difficult tasks. Yet none is universal enough to handle all types of items, especially soft and fragile ones. Vacuum grippers, on the other hand, are restricted by surface material: if an item is too heavy or too porous, the suction system will fail.
Currently, many companies have introduced innovative solutions to overcome the challenges in manipulation. Soft Robotics, for example, has pioneered the use of deformable, flexible soft materials as gripper fingers. Unlike other gripper companies, it targets the food and beverage industry. Many types of foodstuffs are ill-suited for traditional grippers and grasping methods during processing, which has hindered the proliferation of robotics and automation in the sector. Grippers made of soft materials allow ingredients to be picked up without being squeezed or damaged.
In the textile industry, startups like Grabit have demonstrated the use of electroadhesion to pick up soft fabric and threads. Grabit’s tentacle-like fingers consist of conductive electrodes that can generate electrostatic adhesion capable of holding onto various surfaces. The company is trialing its solution in footwear and apparel, and even in logistics and automotive. Grabit claims that its solution is not only more flexible, but also cheaper and faster, and it consumes far less power.
The Holy Grail of General Manipulation
All of the aforementioned solutions are customized: each is ideal only for certain tasks, and the grippers must be swapped when performing a different one. The constant swapping creates friction and inconvenience in industrial processes, particularly in the picking, sorting, packaging, and assembly processes of warehouses, where robotic arms must handle thousands of different items and packages spanning a wide range of sizes, weights, and form factors.
This is where a combination of hardware and software advancements can play a huge role. Advances in computational hardware allow robotics developers to train deep learning-based machine vision in simulation and achieve large improvements in a short time. Such a machine vision solution can be deployed on on-premises servers, along with cameras and robotic arms, to perform picking and sorting. RightHand Robotics, for example, brings deep learning-based machine vision to warehousing using two Red, Green, Blue (RGB) cameras for depth sensing and a set of deep learning algorithms powered by Graphics Processing Units (GPUs). The company claims its solution can pick up to 1,000 units per hour across an assortment of items, including products the system has never seen before. Because it is self-learning, deep learning-based machine vision can also help manufacturers detect unexpected anomalies and offer insights into flaws in manufacturing processes, and it is already used for quality control, screening, and inspection.
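To make the depth-sensing step concrete, the sketch below shows a deliberately naive grasp-point heuristic: given a depth map (such as one computed from a stereo RGB pair), it picks the patch of surface nearest the camera as the place to grasp. The function name and toy scene are hypothetical illustrations; production systems such as RightHand Robotics' rely on learned grasp prediction rather than a fixed rule like this.

```python
import numpy as np

def select_grasp_point(depth_map, window=3):
    """Return the (row, col) center of the window x window patch whose
    average depth is smallest, i.e., the surface nearest the camera.
    Averaging over a patch avoids latching onto single-pixel noise."""
    best = (np.inf, (0, 0))
    h, w = depth_map.shape
    for r in range(h - window + 1):
        for c in range(w - window + 1):
            d = depth_map[r:r + window, c:c + window].mean()
            if d < best[0]:
                best = (d, (r + window // 2, c + window // 2))
    return best[1]

# Toy 5x5 depth map (meters): an object whose top surface sits 0.4 m
# from the camera, resting on a tabletop 1.0 m away.
scene = np.full((5, 5), 1.0)
scene[1:4, 1:4] = 0.4      # the object's top surface
print(select_grasp_point(scene))  # → (2, 2)
```

A real pipeline would first convert the stereo pair into this depth map and would also score grasp stability, but the nearest-surface rule captures the basic role depth plays in picking.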
Manipulation need not be restricted to vision: more information can be gathered by incorporating sensors that provide haptic feedback, such as a somatosensory system that can sense touch, pressure, position, and vibration. Another startup, Robotics Materials, is adding impedance control and tactile sensing on top of deep learning-based machine vision to achieve general manipulation. The Artificial Intelligence (AI) computation takes into account information collected from the various sensors and adjusts the gripper accordingly, enabling conventional two-finger grippers to perform multiple tasks. The company’s solution can pick up small and soft objects and sort them accordingly.
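The closed-loop idea behind tactile-assisted grasping can be illustrated with a minimal sketch: ramp up the commanded grip force until the fingertip pressure sensor reports firm contact, while respecting a safety limit that protects fragile items. All names and numbers here are hypothetical, not Robotics Materials' actual control scheme.

```python
def close_gripper(read_pressure, target=1.0, max_force=5.0, step=0.5):
    """Increase commanded grip force until the tactile sensor reports
    the target contact pressure, without exceeding a safety limit.

    read_pressure(force) stands in for the real sensor: it returns the
    pressure measured at the fingertips for a given commanded force.
    """
    force = 0.0
    while force < max_force:
        force += step
        if read_pressure(force) >= target:
            return force      # firm grip achieved at this force
    return None               # cannot grip safely within the limit

# Toy sensor model: a soft object transmits 40% of commanded force
# as fingertip pressure, so firm contact is reached at 2.5 units.
soft_object = lambda f: 0.4 * f
print(close_gripper(soft_object))  # → 2.5
```

In practice the loop would run on streaming sensor data and also watch for slip, but the feedback structure, sense, compare, adjust, is the same one that lets a two-finger gripper handle both rigid and fragile items.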
General manipulation is crucial in multi-task environments where robots are required to pick up a wide variety of objects, such as warehousing, manufacturing, and agriculture. It will be even more intriguing when general manipulation is deployed in conjunction with microlocation and navigation technologies to create robots with situational awareness that can pick up items using vision, just like a human. There will always be debate about the economic viability of this combination, but given the demand in the aforementioned industries, general manipulation should see good momentum in 2019.