How neuromorphic vision sensors will change the world
Despite millions of years of evolution, the human eye never developed the ability to capture and reproduce pictures of the kind seen in photographs. It is, however, exceptionally adept at responding to changes in its surroundings, doing so far faster and more efficiently than today's cameras and computers can. Taking inspiration from the brain, we have spent several years creating a camera that possesses the same ability as the human eye, through the discipline of neuromorphic engineering. Our co-founders have worked in this field for several decades at some of the world's top research institutes, including ETH Zurich, Oxford and Caltech.
Neuromorphic engineering replicates the key elements of visual sensing and processing used by the retina and visual cortex in the human eye and brain. The result is a computer chip that drives sensors and cameras yet works in some ways like a biological eye, with the power to capture and process high-quality imagery. Much like the human eye, neuromorphic sensors can directly sense fast changes and send that information onward for processing. Because each pixel records independently and only when triggered, movement is captured as a continuous stream of information rather than frame by frame. This reduces the amount of data that needs to be sent, which lowers power consumption and greatly improves the responsiveness of the camera. It allows vision processing for cameras to be completely reimagined: a neuromorphic sensor can generate up to 1,000 times less data while still responding incredibly quickly, rendering the entire process significantly faster and more efficient.
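The per-pixel behaviour described above can be illustrated with a minimal sketch. This is a toy frame-differencing model, not how a real event sensor is implemented (real pixels fire asynchronously in continuous time, not between discrete frames); the function name, threshold value and event format are all assumptions made for illustration.

```python
import numpy as np

def generate_events(prev_frame, curr_frame, threshold=0.15):
    """Emit one event per pixel whose log-intensity change exceeds a
    threshold, mimicking how each event-camera pixel fires independently
    instead of waiting for a global frame readout. Illustrative only."""
    # Event-camera pixels respond to relative (roughly logarithmic)
    # brightness changes, so compare log intensities.
    diff = np.log1p(curr_frame.astype(float)) - np.log1p(prev_frame.astype(float))
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    polarity = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return list(zip(xs.tolist(), ys.tolist(), polarity.tolist()))

# A static scene produces no events; only the pixel that changes fires.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 200  # one pixel brightens as an object moves through
print(generate_events(prev, curr))  # -> [(2, 1, 1)]
```

Note how the unchanged background generates no output at all, which is the source of the data reduction discussed above.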
Our mission is to design and develop sensors and software that empower technology to rapidly see and understand the world. In our latest sensor, each pixel can independently output its current value, but it can also operate in an event mode in which it outputs changes in intensity. This mode compresses the output so that data is produced only when a change is detected. A further level of compression comes from grouping adjacent pixels that are changing in a similar way. The overall effect is very fast, compressed data with far lower power and bandwidth requirements than a normal full-frame sensor readout. This gives unprecedented advantages over conventional camera systems: ultra-low response latency, low but informative data rates, and low power consumption.
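The grouping idea can be sketched in a few lines. This toy scheme merges horizontally adjacent changed pixels of the same polarity into a single record; the actual on-chip compression format is not public detail here, so the record layout `(row, x_start, x_end, polarity)` is purely an assumption for illustration.

```python
import numpy as np

def compress_events(change_map):
    """Group horizontally adjacent changed pixels that share a polarity
    into one record (row, x_start, x_end, polarity) -- a toy analogue of
    the adjacent-pixel grouping described in the text."""
    records = []
    for y, row in enumerate(change_map):
        x = 0
        while x < len(row):
            if row[x] != 0:  # this pixel reported a change
                start, pol = x, row[x]
                # extend the run while neighbours changed the same way
                while x + 1 < len(row) and row[x + 1] == pol:
                    x += 1
                records.append((y, start, x, int(pol)))
            x += 1
    return records

# A 1x8 row where pixels 2..5 all brightened (+1) and pixel 7 darkened
# (-1): five individual events collapse into just two grouped records.
change_map = np.array([[0, 0, 1, 1, 1, 1, 0, -1]])
print(compress_events(change_map))  # -> [(0, 2, 5, 1), (0, 7, 7, -1)]
```

Even in this tiny example the grouped representation carries the same information in fewer records, which is where the extra bandwidth saving comes from.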
A 2022 report from McKinsey identified neuromorphic computing as one of the top ten technology trends with the potential to reshape the future of several markets and industries in the coming decades. It noted that next-generation computing could help solve pressing societal issues by giving corporations access to previously unattainable levels of functionality. The report further states that neuromorphic event cameras provide faster vision-based control and object tracking, with high potential to impact the world of technology. At iniVation we have created an ecosystem of over 700 customers and partners that, for the first time, are applying our technology in real-world settings. Partners are deploying our technology in areas such as smart home IoT and industrial monitoring, and even in CubeSats and on the International Space Station.
Not only destined for the wider solar system, iniVation's sensors offer a broad range of everyday, real-world capabilities. Our highly intelligent and efficient machine vision can enable robots and vehicles to work independently of humans. This is a huge step forward for the dangerous, repetitive tasks currently carried out by a human workforce. Neuromorphic robots have the potential to run autonomously for long periods in situations that pose a risk to safety. Take, for example, mining, widely considered one of the most dangerous jobs in the world, where workers are routinely exposed to possible cave-ins, explosions and toxic air. Neuromorphic technology has the potential to make this and many other industries significantly safer by deploying robots with highly efficient, ultra-fast sensing capabilities to carry out the most perilous aspects of an occupation.
It is not solely industrial inspection that can benefit from this technology. In fact, any scenario that uses cameras can adopt it, including factories, robotics, consumer electronics and aerospace. In particular, it can enable the development of extremely power-efficient devices for next-generation wearables in AR, VR and IoT.
Having successfully established our technology in industrial and scientific settings, we are now focused on entering more broadly into consumer electronics, mobile robotics and automotive. Ultimately, there is scope to apply our technology in nearly every sector, and indeed every aspect of camera sensing and imaging. We look forward to creating a future in which our machines can see and understand the world just as well as we ourselves can.
Kynan Eng is the CEO and co-founder of iniVation, based in Zurich, Switzerland. With a wealth of experience at the intersection of neuroscience, engineering and human-computer interaction, he oversees the design and production of ultra-high-performance vision sensors and software for automation, robotics, consumer electronics and aerospace. His co-founders helped invent the field of event-based neuromorphic vision. He has been the recipient of a CES Best of Innovation award and a Red Dot Award. Through his involvement with iniVation and other deep-tech startups, he has seen these ventures raise over $20M in research and venture funding. He holds degrees in computer science, applied mathematics and mechanical engineering from Monash University, Melbourne, and a PhD from ETH Zurich.