Single-Photon Camera Guided Extreme Dynamic Range Imaging

IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2022)

Yuhao Liu1, Felipe Gutierrez-Barragan1, Atul Ingle1, Mohit Gupta1, Andreas Velten1
1University of Wisconsin-Madison

Abstract

Reconstruction of high-resolution extreme dynamic range images from a small number of low dynamic range (LDR) images is crucial for many computer vision applications. Current high dynamic range (HDR) cameras based on CMOS image sensor technology rely on multi-exposure bracketing, which suffers from motion artifacts and signal-to-noise ratio (SNR) dip artifacts in extreme dynamic range scenes. Recently, single-photon cameras (SPCs) have been shown to achieve orders of magnitude higher dynamic range for passive imaging than conventional CMOS sensors. SPCs are becoming increasingly available commercially, even in some consumer devices. Unfortunately, current SPCs suffer from low spatial resolution. To overcome the limitations of CMOS and SPC sensors, we propose a learning-based CMOS-SPC fusion method to recover high-resolution extreme dynamic range images. We compare the performance of our method against various traditional and state-of-the-art baselines using both synthetic and experimental data. Our method outperforms these baselines, both in terms of visual quality and quantitative metrics.
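
The abstract contrasts the hard saturation of conventional CMOS pixels with the much higher dynamic range of single-photon cameras. The sketch below is a minimal, illustrative simulation of that difference, not the paper's code: it assumes a free-running single-photon detector with a fixed dead time and a simple dead-time-corrected flux estimate, and every parameter value is a placeholder.

# Minimal, illustrative simulation (not the paper's code) contrasting a CMOS pixel,
# which hard-saturates at its full-well capacity, with a free-running single-photon
# detector whose dead-time-corrected counts track flux over a far wider range.
# All parameter values and the dead-time model are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

exposure = 1e-2        # total exposure time in seconds (assumed)
full_well = 30_000     # assumed CMOS full-well capacity (electrons)
dead_time = 150e-9     # assumed single-photon detector dead time in seconds

def cmos_counts(flux):
    """Poisson photon counts clipped at the full-well capacity (hard saturation)."""
    return min(rng.poisson(flux * exposure), full_well)

def spc_flux_estimate(flux):
    """Count detections of a free-running detector and apply a dead-time correction."""
    # After each detection the detector is dead for `dead_time`; the wait for the
    # next photon is exponential with mean 1/flux.
    n_max = int(2 * exposure / (1.0 / flux + dead_time)) + 10
    gaps = rng.exponential(1.0 / flux, size=n_max) + dead_time
    n_det = np.searchsorted(np.cumsum(gaps), exposure)
    # Invert the mean response: n_det ~= exposure / (1/flux + dead_time).
    return n_det / max(exposure - n_det * dead_time, 1e-12)

for flux in [1e3, 1e5, 1e7, 1e9]:  # photons/s spanning an extreme dynamic range
    print(f"flux={flux:.0e}  CMOS counts={cmos_counts(flux):>7d} (cap {full_well})"
          f"  SPC flux estimate={spc_flux_estimate(flux):.2e}")

Under these placeholder settings, the simulated CMOS pixel clips once the flux exceeds roughly full_well / exposure, while the dead-time-corrected single-photon estimate continues to track the true flux several orders of magnitude beyond that point.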

Highlights

  • CMOS-SPC sensor fusion model for extreme dynamic range imaging from only two images.

  • Analysis of artifacts produced by existing high dynamic range imaging approaches that only use one or two images.

Video Overview

Dataset

Please fill out this form to obtain a link to our dataset (hosted on Google Drive). The download link is provided in the confirmation message immediately after you submit the form.

The dataset contains a total of 667 synthetic HDR samples as well as real-world experimental data. The synthetic data is split into a train & dev set and a test set. For the test set, each synthetic sample includes:

  • original HDR input

  • simulated CMOS and SPC images

  • outputs from our proposed model and the baseline methods

You may also find outputs from our ablation models.

Errata

There is a mistake in Section 5.2 (Training and Implementation: Data Pre-processing), where we state that we multiply the input and ground-truth tensors by 255. Our code implementation does not multiply these tensors by 255. The only normalization performed is dividing the CMOS, SPC, and ground-truth tensors by the CMOS photon flux saturation limit.
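
For reference, the corrected pre-processing amounts to the normalization sketched below. This is an illustrative snippet, not the released code; the tensor names and the saturation-limit value are placeholders, and a PyTorch-style pipeline is assumed.

# Illustrative sketch of the corrected pre-processing step (not the released code).
# `cmos_img`, `spc_img`, `gt_hdr`, and the saturation limit value are placeholders.
import torch

CMOS_SATURATION_LIMIT = 30_000.0   # placeholder for the CMOS photon flux saturation limit

def preprocess(cmos_img: torch.Tensor, spc_img: torch.Tensor, gt_hdr: torch.Tensor):
    """Divide the CMOS, SPC, and ground-truth tensors by the CMOS saturation limit.

    No multiplication by 255 is applied, per the erratum above.
    """
    scale = 1.0 / CMOS_SATURATION_LIMIT
    return cmos_img * scale, spc_img * scale, gt_hdr * scale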

Bibtex Citation

@InProceedings{Liu_2022_WACV,
    author    = {Liu, Yuhao and Gutierrez-Barragan, Felipe and Ingle, Atul and Gupta, Mohit and Velten, Andreas},
    title     = {Single-Photon Camera Guided Extreme Dynamic Range Imaging},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {1575-1585}
}


Related projects

From One Photon to a Billion: High Flux Passive Imaging with Single-Photon Sensors

Passive Inter-Photon Imaging
Inter-photon timing measurements captured by a passive single-photon sensitive camera enable unprecedented dynamic range.