Immervision Co-Founder and VP of Technology Patrice Roulet Fontani sat down with Jim Beretta of the Robot Industry Podcast to discuss ‘Vision Systems and Deep Seeing Technology for Robots’.

A robotic vision system essentially enables a robot to “see.” Or, as we say at Immervision, ‘a robotic vision system brings eyes to machines’. These vision systems are linked to a computer that processes the images so the robot can interpret what it sees, allowing it to follow instructions and complete tasks such as identification, navigation, inspection and object handling.
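As a rough, illustrative sketch (not Immervision's own pipeline), a minimal capture-process-act loop might look like the Python snippet below; the camera index and the detect_objects() placeholder are assumptions made for the example, standing in for whatever perception model a real robot runs.

```python
# Illustrative capture -> process -> act loop for a generic robotic vision system.
# Assumes OpenCV and a camera at index 0; detect_objects() is a placeholder.
import cv2

def detect_objects(frame):
    # Placeholder: a real system would run a trained detector or segmentation model.
    # Here we return the frame's mean brightness as a stand-in "observation".
    return {"mean_brightness": float(frame.mean())}

def main(num_frames=100):
    cam = cv2.VideoCapture(0)              # 1. capture: the robot's "eye"
    try:
        for _ in range(num_frames):
            ok, frame = cam.read()
            if not ok:
                break
            observation = detect_objects(frame)   # 2. process / interpret the image
            # 3. act: hand the interpretation to navigation, inspection or
            #    handling logic (here we simply print it).
            print(observation)
    finally:
        cam.release()

if __name__ == "__main__":
    main()
```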

At Immervision, we are working to change the way robots see the world! 

Immervision began by developing expertise in wide-angle, high-resolution optics, designing cameras and lenses for human vision, where the main concern is producing the best possible image: higher resolution, vivid colors and sharp detail. Over the past 20 years, Immervision has applied that experience to bring vision to all types of machines, designing eyes for surveillance cameras, automotive monitoring systems, action cameras, lunar roving vehicles, flying drones, phones, laptops and, of course, robots.

The latest vision system trend the company is seeing is an increase in demand for computer vision, and those requirements are very different from human vision needs.

Whether it’s an autonomous vehicle or a robot dog prowling the streets to communicate messages, robots are completing a wide variety of tasks, extending human capabilities and performance in the real world. These tasks require supervision and planning beyond simple cognitive tasks, and they present the two biggest challenges for robotic vision systems.

The first challenge robotic vision systems face is that many applications require dual vision: imagery that serves both human viewing and machine perception. This calls for a dual cortex that interfaces with both the robot's and the human's vision needs, which are not always aligned.

Patrice explains, “Immervision has mastered the complete robotic vision system pipeline and how to optimize the design of the eyes according to the result we want to achieve. We continue to research the best designs for maximizing the efficiency of machine learning and AI. We then apply this research to robotics, as well as to automotive and other industries.”

The second challenge is latency: how do you capture the environment, process it and extract the right information for the rest of the pipeline? Not only does that have to be done efficiently, it also has to be done fast, because the faster the pipeline, the faster the robot can move and make decisions. Many believe this latency can be reduced by electronics or computing alone, but it starts with the lens: the lens must be fast enough to capture more light in a small timeframe. Immervision's accomplishments here are unprecedented: we make not only the complete pipeline fast, but the lens too. Better-quality data and efficient algorithms are key to allowing robots to perceive their environment, a requirement for autonomy.
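To make the latency point concrete, here is a hypothetical timing harness; the stage durations are made-up stand-ins, not measured figures. End-to-end latency is the sum of capture, processing and extraction, and a faster lens that gathers more light per unit time is what allows the capture (exposure) term to shrink.

```python
# Hypothetical timing of a capture -> process -> extract pipeline.
# The stages are dummies; in a real system they would be the camera exposure/readout,
# the image-processing step and the feature/decision extraction.
import time

def capture():
    time.sleep(0.010)   # e.g. sensor exposure + readout (assumed 10 ms)
    return "raw frame"

def process(frame):
    time.sleep(0.020)   # e.g. dewarping, denoising, ISP work (assumed 20 ms)
    return "processed frame"

def extract(frame):
    time.sleep(0.015)   # e.g. inference / feature extraction (assumed 15 ms)
    return "decision"

def timed(stage, *args):
    # Run one stage and return its result plus elapsed time in milliseconds.
    t0 = time.perf_counter()
    result = stage(*args)
    return result, (time.perf_counter() - t0) * 1000.0

frame, t_capture = timed(capture)
processed, t_process = timed(process, frame)
decision, t_extract = timed(extract, processed)

total = t_capture + t_process + t_extract
print(f"capture {t_capture:.1f} ms, process {t_process:.1f} ms, "
      f"extract {t_extract:.1f} ms -> total {total:.1f} ms "
      f"(roughly {1000.0 / total:.0f} decisions per second)")
```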

Within today's robotics industry, there is no single camera that can match the human eye in terms of adaptability, resolution, sensitivity and dynamic range. Even when a machine deploys a wide variety of sensors (camera, radar, lidar/ToF, sonar, thermal, microphone, etc.), which can outperform human capabilities in many cases, there are still limitations. For example, a camera currently cannot match the speed at which a human makes sense of visual input. Human vision is very efficient and, in combination with the human brain, can attach meaning to an image in a fraction of a second, while a vision system needs to correlate information from image input, auxiliary sensors and AI algorithms to do the same job. Dealing with multiple sensors can also lead to contradictory conclusions.
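As a toy illustration of why a multi-sensor setup needs a reconciliation step, the sketch below uses a hypothetical confidence-weighted vote across three sensors that disagree. Real fusion stacks (Kalman filters, learned fusion networks) are far more elaborate, but the underlying problem of contradictory readings is the same.

```python
# Illustrative only: three sensors report whether an obstacle is present,
# each with its own confidence. A simple confidence-weighted vote shows how
# a fusion step must reconcile disagreements before the robot can act.
readings = [
    {"sensor": "camera", "obstacle": True,  "confidence": 0.90},
    {"sensor": "lidar",  "obstacle": True,  "confidence": 0.75},
    {"sensor": "sonar",  "obstacle": False, "confidence": 0.40},
]

vote_for = sum(r["confidence"] for r in readings if r["obstacle"])
vote_against = sum(r["confidence"] for r in readings if not r["obstacle"])

decision = vote_for > vote_against
print(f"obstacle detected: {decision} (for={vote_for:.2f}, against={vote_against:.2f})")
```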

Finding a robotic vision system solution takes a unique and visionary company like Immervision, which was the first to introduce biological freeform lenses that combine a super wide-angle field of view with augmented resolution. Having spent the last two decades working with state-of-the-art optical technologies such as meta-lenses, the company has always pushed the boundaries of machine sensing and perception.

Listen to the full podcast