Lensless Image Sensing from Rambus - Putting the Eye in IoT

Going through my notes from this year’s Mobile World Congress, I have lately become increasingly interested in lensless cameras. My early sightings of the concept came mostly from Alcatel-Lucent’s Bell Labs, which publicized some fruits of its research on the topic around mid-2013. At MWC, it was one of the key technologies for Rambus, which tends to be one of the show’s best sources of insight into the more forward-looking, pre-emergent tech areas – such as IoT security, a topic that my colleagues in our Cybersecurity practice have explored recently.

The point about lensless cameras – or lensless image sensing – is that images aren’t captured directly but computed afterwards. In Rambus’s case, a critical link is a diffraction grating with a spiral-shaped optical pattern. The grating structures the incoming light so that it can be processed into an individual image. The main advantages of the lensless approach are the extremely small form factors it allows and a potentially dramatic reduction in cost. Lenses take up a relatively large amount of space and are notoriously expensive to manufacture, so shifting the work to computation could shake up the market on both fronts.
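To make the “created by computing” idea concrete, here is a minimal sketch of computational image recovery. It assumes the grating acts as a known point-spread function (PSF) convolved with the scene, and inverts that with a Wiener-style filter; the stand-in PSF, the noise parameter, and the whole pipeline are illustrative assumptions – Rambus’s actual spiral grating and reconstruction algorithm are not public.

```python
import numpy as np

def reconstruct(reading, psf, noise_level=1e-6):
    """Recover a scene from a lensless reading by Wiener deconvolution."""
    # Transfer function of the (hypothetical) grating, zero-padded to sensor size.
    H = np.fft.fft2(psf, s=reading.shape)
    Y = np.fft.fft2(reading)
    # Wiener filter: damps frequencies the grating captures only weakly.
    X = Y * np.conj(H) / (np.abs(H) ** 2 + noise_level)
    return np.real(np.fft.ifft2(X))

# Stand-in "grating" PSF: one dominant tap plus a spread of weak ones.
psf = np.zeros((9, 9))
psf[::2, ::2] = 0.5 / 25
psf[4, 4] += 0.5

# Toy scene: a bright square; the sensor sees it smeared by the grating.
scene = np.zeros((64, 64))
scene[20:28, 30:38] = 1.0
reading = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf, s=scene.shape)))

recovered = reconstruct(reading, psf)
```

The raw `reading` looks nothing like the scene, yet the computation recovers the bright square almost exactly – which is the essence of the lensless approach: the optics only need to scramble the light in a known, invertible way.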

Something to bear in mind is that lensless sensing isn’t really meant to replace lenses in conventional photography, as it seems unlikely that entirely computed images will ever match lensed ones in resolution. The concept’s real potential lies in detection – of physical changes and motion. And that’s where it also touches strongly on the IoT. Tiny, cheaply produced cameras could allow countless types of physical objects to “see” their dynamic environments, and to notify and act on changes in them. Security and monitoring are the most obvious use cases that come to mind, but there are many others, ranging from gesture recognition in wearable UIs to contextual awareness in connected vehicles.
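The detection point is worth spelling out: a change or motion detector doesn’t need a reconstructed picture at all, only a comparison of successive low-resolution readings. A minimal sketch, with illustrative thresholds and a hypothetical 16x16 sensor grid (not any specific Rambus part):

```python
import numpy as np

def motion_detected(prev, curr, diff_threshold=0.1, min_fraction=0.02):
    """Flag motion when enough sensor cells change between two readings.

    `prev` and `curr` are raw intensity grids from a tiny sensor; the
    thresholds are illustrative and would be tuned per deployment.
    """
    changed = np.abs(curr - prev) > diff_threshold
    return bool(changed.mean() >= min_fraction)

# Toy readings: an empty scene, then an object entering the field of view.
quiet = np.zeros((16, 16))
busy = np.zeros((16, 16))
busy[5:9, 5:9] = 1.0
```

A detector like this runs in microcontroller-class compute, which is why the approach suits battery-powered IoT endpoints.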

Importantly, lenslessly created images also have a far smaller data footprint than traditionally captured ones, which is an advantage both for transferring them wirelessly and for analyzing them further. This aspect ties in closely with one major theme that we at ABI Research are currently paying close attention to in our Internet of Everything research: distributed intelligence in IoT architectures. This research theme concerns where analytics and automated actions will ultimately take place: in the cloud, in endpoint devices, or in the “foggy” edge (routers, switches) between the two. (This is something that we have covered e.g. in an earlier blog as well as in our free MWC whitepaper.) Most likely, it will largely depend on the parameters of each use case (“why, where and when is the data needed”), which in turn will result in rather diverse architectures. Lensless sensing could address pain points across the IoT continuum, by making both low-power, low-latency endpoint analytics and remote cloud analytics more efficient and advanced.
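The data-footprint point can be made concrete with back-of-the-envelope arithmetic. The resolutions below are illustrative assumptions, not Rambus figures:

```python
# Uncompressed 1080p RGB frame from a conventional camera, in bytes.
lensed_frame_bytes = 1920 * 1080 * 3
# Hypothetical 128x128 grid of 16-bit lensless sensor readings, in bytes.
lensless_grid_bytes = 128 * 128 * 2
# Rough per-frame savings factor when shipping data over the network.
ratio = lensed_frame_bytes / lensless_grid_bytes
```

Even under generous assumptions for the lensless side, the payload is two orders of magnitude smaller per frame – the kind of difference that decides whether analytics can stay on a constrained endpoint or must move up the continuum.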