
An electronic eye that simulates human vision

An event camera rapidly picks out important information and reduces visual intake by discarding unnecessary data for efficient processing

Raúl Limón
Professor Bernabé Linares at CSIC's IMSE laboratory in Spain, which is developing the event camera.

Scientists often aspire to develop technology and systems that mimic the complexity of the human body, composed of some 37 trillion cells. Figuring out the entire body may seem impossible, but incremental advances are being made. Spain’s Institute of Microelectronics of Seville (IMSE), a joint center of the National Research Council (CSIC) and the University of Seville, is focusing on simulating human vision. Unlike conventional cameras, our eyes and brain allow us to perceive and adapt to tiny changes in the environment without storing all the information that floods our field of vision. IMSE simulates this capability using dynamic vision sensors (DVS), the kind of event cameras adopted by tech giants like Samsung and Sony.

Conventional cameras function more like hyperrealistic paintings than like the human visual system. They capture an image within a frame and reproduce it. The main improvements in camera technology have been to increase image resolution by packing in more pixels for better definition and fewer processing defects. “They provide a huge amount of data that needs to be stored, and lots of wiring to transmit it. And someone must process all that information,” said Bernabé Linares, a research professor at IMSE.

“The biological retina doesn’t capture images. Instead, all information is transmitted through the optic nerve and processed by the brain. In the retina, each pixel adjusts its response to light locally by interacting with its neighbors, whereas a conventional camera’s pixels mostly operate under a single global exposure. That’s why a digital image taken inside a tunnel can appear all white or all black, while human vision lets us see both inside and outside, except in very extreme conditions,” said Linares. This is a crucial capability for developing self-driving vehicles. A related feature of human vision is foveation: resolution is maximized in the area of focus and kept low in the periphery, which reduces the amount of information leaving the retina while preserving enough visual recognition for decision-making.

IMSE’s Neuromorphic Systems Group is seeking to develop an electronic eye that mimics these biological capabilities: a sensor that operates at high speed, consumes little power, and requires minimal data for effective processing. Enter the event camera, which captures continuous flows of electrical impulses (events or spikes) instead of frames. Each photosensor fires autonomously when it detects a significant change in light.
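As a rough illustration of that pixel-level logic, here is a minimal Python sketch of a simulated DVS pixel array: each pixel keeps a reference log-intensity and emits a timestamped event with a polarity (brighter or darker) whenever the change exceeds a contrast threshold. The threshold value and function names are illustrative assumptions; the real sensor implements this comparison in analog circuitry, not software.

    import numpy as np

    # Illustrative contrast threshold (an assumption, not IMSE's actual design).
    EVENT_THRESHOLD = 0.15

    def dvs_events(prev_log, frame, t):
        """Emit (x, y, t, polarity) events where log-intensity changed enough.

        prev_log : per-pixel reference log-intensity, updated in place
        frame    : current linear-intensity image (NumPy array)
        t        : timestamp of this snapshot
        """
        log_i = np.log(frame + 1e-6)                 # avoid log(0)
        diff = log_i - prev_log
        events = []
        ys, xs = np.nonzero(np.abs(diff) > EVENT_THRESHOLD)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1   # +1 brighter, -1 darker
            events.append((int(x), int(y), t, polarity))
            prev_log[y, x] = log_i[y, x]             # reset this pixel's reference
        return events

    # Example: a single pixel brightens, so exactly one event is emitted.
    prev = np.log(np.full((4, 4), 0.5) + 1e-6)
    frame = np.full((4, 4), 0.5)
    frame[2, 1] = 0.9
    print(dvs_events(prev, frame, t=0.001))          # [(1, 2, 0.001, 1)]

Because unchanged pixels produce nothing, a mostly static scene yields almost no data, which is the efficiency the researchers describe.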

“These cameras,” said Linares, “capture initial information from the contours of objects as a dynamic flow of pixels [events] that continuously change. But these aren’t images as we know them. During the processing phase, the brain-like algorithm establishes a hierarchy of layers to interpret these events.”

This new approach to imaging began at the California Institute of Technology (Caltech) in the 1990s. About 20 years ago, a European project called CAVIAR (coordinated by IMSE, with partners in Switzerland) began using it to simulate the human eye. This led to new patents, spin-off companies, and the adoption of the technology for image processing by tech giants like Samsung and Sony. “The objective,” said Linares, “is to develop an electronic fovea [the region of the retina where visual acuity is highest].” Such a device efficiently identifies and processes the area of interest in high resolution while generating minimal information. This is crucial for applications like autonomous driving because it makes processing more efficient and minimizes resource usage. “If the camera detects a sign, pedestrian, or another vehicle, it only analyzes the new element, not the entire image,” said Linares.

It also has remarkable applications in surveillance, image tracking, diagnostic imaging and drone navigation. A research team led by Bodo Rueckauer from Radboud University (the Netherlands) developed a dynamic vision sensor much like IMSE’s DVS. The sensor activates when it detects relevant changes in its field of vision and then highlights the affected areas, as sketched below. “The frameless sensor detects pixel-level changes in light intensity. It has a high dynamic range and a temporal resolution of microseconds. A gesture-recognition AI achieves 90% accuracy using the DVS.”
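To make “highlights the affected areas” concrete, here is a small Python sketch that accumulates events (in the format of the dvs_events helper above) into a per-pixel activity map and draws a bounding box around the region that changed. The windowing and box logic are illustrative assumptions, not the Radboud team’s published pipeline.

    import numpy as np

    def change_map(events, shape):
        """Count events per pixel over a time window."""
        counts = np.zeros(shape, dtype=int)
        for x, y, t, polarity in events:
            counts[y, x] += 1
        return counts

    def active_box(counts, min_events=1):
        """Bounding box (x0, y0, x1, y1) around sufficiently active pixels."""
        ys, xs = np.nonzero(counts >= min_events)
        if len(xs) == 0:
            return None                  # nothing moved in this window
        return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

A gesture recognizer would then analyze only the pixels inside that box rather than the full frame, which is where event-based processing gets its speed and power savings.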

IMSE Director Teresa Serrano says neuroscience can use these types of processors to interact with neuronal systems and benefit patients with epilepsy or Parkinson’s disease. IMSE’s Nimble AI project is researching advanced microelectronics and integrated-circuit technology to create secure, private neuromorphic sensing and processing. The innovation lowers costs, energy consumption (up to 100 times less) and latency (response times up to 50 times faster).

One of the companies that came out of the research group is Chronocam, now known as Prophesee. “We’re creating a new way of detecting information that’s quite different from traditional cameras that have been around for ages,” said Prophesee CEO and co-founder Luca Verre.

“Our sensors generate minimal data, enabling low-power and cost-effective systems. By producing event data that the processor can easily handle locally, instead of overwhelming it with excessive frames, the event camera facilitates real-time processing of any scene,” said Verre.
