The CAOS camera captures previously invisible scenes by transforming incident light into coded signals that are detected and then decoded, wireless-style, in the electronic domain to achieve extreme dynamic range, writes Prof Nabeel Riza

Imagine that you are driving at night. You are tired. Suddenly, you see a bright light approaching you, getting even brighter. It is the headlights from a truck. At that instant, your vision is impaired due to the dazzling brightness. You hope for the best that in the next few moments, the truck will pass you by without another vehicle appearing in or near your lane.


Fig 1: H-CAOS camera design using a CMOS PDA as the Hybrid (H) element. L1/L2/L3: Lenses. SM1/SM2/SM3: Smart Modules for light conditioning. PD: Photo-Detector with amplifier

Such wishful thinking, on a statistical scale, can be life-threatening, so the automobile industry has developed night-vision cameras to improve driving safety. Today's widely deployed multi-pixel CMOS (complementary metal-oxide-semiconductor) and CCD (charge-coupled device) camera technology intrinsically supports dynamic ranges around 60 dB, which can be pushed towards 80 dB using a variety of techniques: hardware modifications in the sensor chip, such as increasing pixel size and pixel integration time, or software methods, such as multi-image capture and processing.

However, a challenge remains: reaching extreme dynamic ranges of 190 dB with real-time multi-colour capture of extreme-contrast images, such as those found in natural night-time scenes on smaller rural roads as well as in larger city infrastructures.
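As a rough guide to what these decibel figures mean in linear terms, the short sketch below converts between dB and brightest-to-faintest irradiance ratios, assuming the common 20·log10 ratio convention used for camera dynamic range (the convention itself is an assumption here, not stated in the article):

```python
import math

def dynamic_range_db(max_signal, min_signal):
    """Camera dynamic range in dB, using the 20*log10 irradiance-ratio convention."""
    return 20.0 * math.log10(max_signal / min_signal)

# 60 dB corresponds to a brightest/faintest ratio of 1,000:1
print(round(10 ** (60 / 20)))      # 1000
# 80 dB corresponds to 10,000:1
print(round(10 ** (80 / 20)))      # 10000
# 190 dB corresponds to roughly 3 billion to 1
print(f"{10 ** (190 / 20):.2e}")   # 3.16e+09
```

Under this convention, the jump from an 80 dB conventional camera to a 190 dB extreme dynamic range camera is a factor of over 300,000 in the brightness ratio the camera can span.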

To take on this challenge, I recently invented the CAOS camera. CAOS stands for coded access optical sensor. The CAOS camera was inspired by a postgraduate course I took in 1985 at the California Institute of Technology (Caltech) with the late Nobel Prize-winning physicist Prof Richard Feynman.


Caltech electrical-engineering graduate students in the ‘Potential and Limits of Computation’ class, taught by Prof Richard Feynman (centre). Nabeel Riza (MS EE 85, PhD EE 1989) is second from left

I recall Prof Feynman basically saying: “There is Radio Moscow in this room; there is Radio Beijing in this room; there is Radio Mexico City in this room.” He paused, before continuing: “Aren’t we humans fortunate that we can’t sense all these signals? If we did, we would surely go mad with the massive overload of electromagnetic radiation [radio signals] around us!”

These words of wisdom stayed with me for over 30 years and led to the CAOS invention. Specifically, all the radio signals that Prof Feynman mentioned do exist simultaneously in space, but each carries its own unique signature, or time-frequency-domain radio code.

Today, using advanced device and design technologies, even the weakest of these radio frequency (RF) signals can be detected by a sufficiently sensitive RF receiver tuned to the specific radio code. This is the basis of the world's wireless mobile-phone network. So why not apply this RF wireless multiple-access network design philosophy to optical imaging?

Development of CAOS camera


Fig 2: Laboratory prototype of the H-CAOS camera using a CMOS PDA as the H (Hybrid) element in the overall camera design

Thus, CAOS was born: desired agile pixels of the light captured from an image are rapidly coded, like RF signals, in the time-frequency-space domain using an Optical Array Device (OAD) such as a multi-pixel spatial light modulator (SLM). These coded signals are then simultaneously detected by one point optical-to-RF detector/antenna, and the output of this optical detector undergoes RF decoding via electronic wireless-style processing to recover the light levels of all the agile pixels in the image.
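The wireless analogy can be illustrated in a few lines of Python. In this sketch each agile pixel is assigned its own orthogonal time code (Hadamard/Walsh-style codes, chosen here purely for illustration; the actual CAOS codes and hardware differ), all coded signals sum on a single point detector, and each pixel's light level is recovered by correlating the detector output with that pixel's code:

```python
import numpy as np

# Hypothetical illustration: four agile pixels with unknown light levels,
# including one very bright and one very faint pixel.
pixel_levels = np.array([1000.0, 5.0, 0.02, 37.0])

# Assign each pixel an orthogonal +/-1 time code (rows of a 4x4 Hadamard matrix).
codes = np.array([[ 1,  1,  1,  1],
                  [ 1, -1,  1, -1],
                  [ 1,  1, -1, -1],
                  [ 1, -1, -1,  1]], dtype=float)

# The single point detector sees the sum of all coded pixel signals,
# one sample per time slot.
detector_signal = codes.T @ pixel_levels

# Decoding: correlate the detector output with each pixel's code.
decoded = codes @ detector_signal / codes.shape[1]

print(np.allclose(decoded, pixel_levels))  # True: all levels recovered
```

Because the codes are orthogonal, the bright pixel's signal cancels out of every other pixel's correlation sum, which is the mathematical heart of the low inter-pixel crosstalk claim.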

By contrast, CCD/CMOS cameras simply collect light from an image: photons accumulate in the sensor buckets/wells and are read out as electronic charge values (DC levels). No use is made of the time-frequency content of the photons. Hence, CAOS is a paradigm shift in imager design, empowered by modern-day advances in wireless and wired devices in the optical and electronic domains.

A version of the CAOS camera called the CAOS-CMOS camera (see Figures 1 and 2), or H-CAOS camera (H for the hybrid element), has recently been built and demonstrated using the Texas Instruments (TI) digital micromirror device (DMD) SLM as the CAOS-mode space-time-frequency agile pixel encoder.


Fig 3: 82 dB HDR scaled irradiance map of the CAOS-mode acquired image in a linear (top photo: two faint targets not seen) scale and logarithmic (bottom photo: all three targets seen) scale

To start the imaging operation, the DMD is programmed to direct the incident light to the camera arm containing the CMOS photo-detector array (PDA), which gathers initial scene information. This information is then used to program the DMD in CAOS mode to seek out the desired high dynamic range (HDR) pixel regions of the scene.

This visible-band camera demonstrated a three-orders-of-magnitude (a factor of 1,000) improvement in dynamic range over a commercial 51 dB dynamic range CMOS camera when both were subjected to three test targets creating a scene with extreme brightness as well as extreme contrast (>82 dB) HDR conditions.

Specifically, as shown in Figure 3, the target on the left edge of the CAOS-camera-produced image is extremely bright, while the two targets to its right are extremely faint, near the noise floor of the demonstrated camera. Yet the demonstrated CAOS camera correctly sees all three targets without any attenuation of the incoming light from the imaged scene.

Note that any attenuation of the light applied to eliminate saturation of the CMOS sensor pushes the weak-light targets into the sensor's noise floor, making them impossible to see.
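This trade-off can be illustrated numerically. The sketch below uses invented sensor figures (saturation level, noise floor, and scene brightnesses are hypothetical, chosen only to make the point, not taken from the demonstrated hardware):

```python
# Hypothetical sensor: saturates at 10,000 counts, noise floor at 1 count,
# i.e. an 80 dB linear range of 10,000:1.
SATURATION = 10_000.0
NOISE_FLOOR = 1.0

# Scene with extreme contrast: one bright target, two faint ones.
scene = {
    "bright target": 1_000_000.0,
    "faint target A": 50.0,
    "faint target B": 20.0,
}

# To keep the bright target below saturation we must attenuate by 100x...
attenuation = SATURATION / scene["bright target"]  # 0.01

for name, level in scene.items():
    seen = level * attenuation
    status = "visible" if NOISE_FLOOR <= seen <= SATURATION else "lost"
    print(f"{name}: {seen:g} counts -> {status}")
# ...which drops both faint targets (0.5 and 0.2 counts) below the noise floor.
```

With attenuation, the faint targets land at 0.5 and 0.2 counts, under the 1-count noise floor; without it, the bright target saturates. A fixed-range sensor cannot win either way, which is the gap the CAOS approach targets.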

The CAOS camera platform is undergoing technology development and design optimisation for specific commercial applications. Current DMD technology, for example, supports a 100-agile-pixel CAOS camera operating at 1,000 frames/second, a factor of 33 faster than standard 30 frames/second CCD/CMOS cameras.

When used in unison with current multi-pixel sensor camera technology, the CAOS camera is envisioned to give users a smart, extreme dynamic range (e.g. 190 dB) camera with exceptionally low inter-pixel and inter-wavelength crosstalk levels, opening up a world of the yet unseen across diverse applications, including automobile machine-vision systems that enhance driver and road safety.

More information on the CAOS camera, including videos, presentations and papers, can be seen at:

Nabeel A. Riza, Honorary Fellow, Engineers Ireland
Chair Professor of Electrical and Electronic Engineering
School of Engineering
University College Cork

Nabeel A. Riza (Fellow IEEE, IET, EOS, OSA, SPIE; Hon. Fellow EI) holds a PhD from Caltech. His awards include the 2001 International Commission for Optics (ICO) Prize and the 2001 Ernst Abbe Medal from the Carl Zeiss Foundation, Germany. He has held positions at the General Electric Corporate Research & Development Center (Schenectady, New York), Nuonics and CREOL, USA. In August 2011, he was appointed Chair Professor and Head of the Department of Electrical and Electronic Engineering, UCC, and Associate Member of the Tyndall National Institute. He served as Head of the School of Engineering, UCC, from 2013 to September 2016. In January 2013, he authored his first textbook, Photonic Signals and Systems: An Introduction (McGraw Hill), a book for students in science and engineering. He has published 150 international journal papers and 177 conference papers and holds 46 US patents.