Image and Video Denoising

This research was supported by NSF grants OAC 1940097 and OAC 2103936.

Denoising tutorial

This tutorial provides a self-contained description of deep-learning methodology for denoising, emphasizing aspects that are important in real-world scientific applications. The tutorial presents convolutional neural networks from first principles, explaining how to train them in a supervised and unsupervised manner. It also describes how to analyze the denoising strategies learned by these models, and how to evaluate their performance. Illustrative examples are provided based on computational experiments with simple 1D piecewise-constant signals, natural images and simulated electron-microscopy data. In addition, a detailed case study demonstrates the potential and challenges of applying deep denoising in practical scenarios: supervised, unsupervised and semi-supervised deep-learning models are leveraged to denoise transmission-electron-microscopy data acquired at a very low signal-to-noise ratio, with the goal of investigating the dynamics of catalytic nanoparticles.
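As a concrete illustration of the supervised training discussed in the tutorial, the sketch below generates noisy 1D piecewise-constant signals and fits a small convolutional network to them in PyTorch. The architecture, noise level and signal model are illustrative assumptions, not the tutorial's own code.

```python
import torch
import torch.nn as nn

def piecewise_constant(n_signals, length, max_jumps=5):
    """Generate random 1D piecewise-constant signals (illustrative signal model)."""
    x = torch.zeros(n_signals, 1, length)
    for i in range(n_signals):
        jumps = torch.sort(torch.randint(1, length, (max_jumps,))).values
        start, level = 0, torch.rand(1)
        for j in jumps.tolist() + [length]:
            x[i, 0, start:j] = level
            start, level = j, torch.rand(1)
    return x

# Small 1D denoising CNN (the architecture is an illustrative choice).
net = nn.Sequential(
    nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 16, 5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 1, 5, padding=2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
sigma = 0.1  # assumed additive Gaussian noise level

for step in range(1000):
    clean = piecewise_constant(32, 128)
    noisy = clean + sigma * torch.randn_like(clean)
    loss = ((net(noisy) - clean) ** 2).mean()  # supervised MSE loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```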
Discovering the dynamics of catalytic nanoparticles

Materials functionalities may be associated with atomic-level structural dynamics occurring on the millisecond timescale. However, the capability of electron microscopy to image structures with high spatial resolution and millisecond temporal resolution is often limited by poor signal-to-noise ratios. With an unsupervised deep denoising framework, we observed metal nanoparticle surfaces (platinum nanoparticles on cerium oxide) in a gas environment with time resolutions down to 10 milliseconds at a moderate electron dose. On this timescale, many nanoparticle surfaces continuously transition between ordered and disordered configurations. Stress fields can penetrate below the surface, leading to defect formation and destabilization, thus making the nanoparticle fluxional. Combining this unsupervised denoiser with in situ electron microscopy greatly improves spatiotemporal characterization, opening a new window for the exploration of atomic-level structural dynamics in materials.
Unsupervised metrics

Unsupervised deep-learning methods have demonstrated impressive performance on benchmarks based on synthetic noise. However, no metrics are available to evaluate these methods in an unsupervised fashion. This is highly problematic for the many practical applications where ground-truth clean images are not available. In this work, we propose two novel metrics: the unsupervised mean squared error (MSE) and the unsupervised peak signal-to-noise ratio (PSNR), which are computed using only noisy data. We provide a theoretical analysis of these metrics, showing that they are asymptotically consistent estimators of the supervised MSE and PSNR. Controlled numerical experiments with synthetic noise confirm that they provide accurate approximations in practice. We validate our approach on real-world data from two imaging modalities: videos in raw format and transmission electron microscopy.
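The underlying idea can be sketched as follows. Given three independent noisy copies a, b, c of the same clean image with zero-mean noise, the denoised estimate f(a) is statistically independent of the noise in b, so the average of (f(a) - b)^2 equals the true MSE plus the noise variance, and that variance can itself be estimated from (b - c)^2 / 2. The snippet below implements this estimator; it is an illustrative reconstruction and may differ in details (e.g. bias corrections) from the definitions in the paper.

```python
import numpy as np

def unsupervised_mse(denoised_a, noisy_b, noisy_c):
    """Estimate the MSE of `denoised_a` (a denoiser applied to noisy copy a)
    from two additional independent noisy copies b and c of the same image.
    Assumes independent, zero-mean noise; illustrative reconstruction."""
    # E[(f(a) - b)^2] = MSE + noise variance; (b - c)^2 / 2 estimates the variance.
    return np.mean((denoised_a - noisy_b) ** 2) - np.mean((noisy_b - noisy_c) ** 2) / 2

def unsupervised_psnr(denoised_a, noisy_b, noisy_c, peak=1.0):
    """Plug the unsupervised MSE into the usual PSNR formula."""
    return 10 * np.log10(peak ** 2 / unsupervised_mse(denoised_a, noisy_b, noisy_c))
```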
Deep denoising for electron microscopy

Deep convolutional neural networks (CNNs) are the current state of the art in denoising natural images. However, their potential has barely been explored in the context of scientific imaging. Denoising CNNs are typically trained on real natural images artificially corrupted with simulated noise. In contrast, in scientific applications, noiseless ground-truth images are usually not available. To address this issue, we propose a simulation-based denoising (SBD) framework, in which CNNs are trained on simulated images. We test the framework on data obtained from transmission electron microscopy (TEM), an imaging technique with widespread applications in materials science, biology, and medicine. SBD outperforms existing techniques by a wide margin on a simulated benchmark dataset, as well as on real data. We perform a thorough analysis of the generalization capability of SBD, demonstrating that the trained networks are robust to variations of imaging parameters and of the underlying signal structure. We also release a benchmark dataset of TEM images, containing 18,000 examples.
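To make the simulation-based training idea concrete, the sketch below builds noisy/clean training pairs from simulated images. The `simulate_clean_micrograph` placeholder and the Poisson shot-noise model are illustrative assumptions standing in for the actual TEM simulation pipeline; given such pairs, supervised training proceeds exactly as with real data, minimizing the error between the network output and the simulated clean image.

```python
import torch

def simulate_clean_micrograph(n, size=64):
    """Placeholder for a physics-based TEM simulator: draws smooth Gaussian
    blobs as stand-ins for simulated noiseless micrographs, shape (n, 1, H, W)."""
    y, x = torch.meshgrid(torch.linspace(-1, 1, size),
                          torch.linspace(-1, 1, size), indexing="ij")
    imgs = []
    for _ in range(n):
        cx, cy = (torch.rand(2) - 0.5).tolist()
        r = 0.2 + 0.3 * torch.rand(1).item()
        imgs.append(torch.exp(-(((x - cx) ** 2 + (y - cy) ** 2) / r ** 2)))
    return torch.stack(imgs).unsqueeze(1)

def make_training_pair(clean, dose=20.0):
    """Corrupt simulated clean images with Poisson shot noise (assumed noise
    model); `dose` controls the signal-to-noise ratio."""
    noisy = torch.poisson(clean * dose) / dose
    return noisy, clean
```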
Adaptive denoising via GainTuning

Deep convolutional neural networks for image denoising achieve the current state of the art, but they have difficulty generalizing when applied to data that deviate from the training distribution. Recent work has shown that it is possible to train denoisers on a single noisy image. These models adapt to the features of the test image, but their performance is limited by the small amount of information used to train them. Here we propose “GainTuning”, in which CNN models pre-trained on large datasets are adaptively and selectively adjusted for individual test images. To avoid overfitting, GainTuning optimizes a single multiplicative scaling parameter (the “Gain”) of each channel in the convolutional layers of the CNN. We show that GainTuning improves state-of-the-art CNNs on standard image-denoising benchmarks, boosting their denoising performance on nearly every image in a held-out test set. These adaptive improvements are even more substantial for test images differing systematically from the training data, either in noise level or image type. We illustrate the potential of adaptive denoising in a scientific application, in which a CNN is trained on synthetic data and tested on real transmission-electron-microscope images. In contrast to the existing methodology, GainTuning is able to faithfully reconstruct the structure of catalytic nanoparticles from these data at extremely low signal-to-noise ratios.
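A minimal sketch of the mechanism in PyTorch: wrap each convolutional layer of a pre-trained denoiser with a learnable per-channel gain, freeze everything else, and optimize only the gains on the single noisy test image. The choice of adaptation objective is left abstract here (an assumption of this sketch, not specified in the summary above).

```python
import torch
import torch.nn as nn

class GainConv2d(nn.Module):
    """Wrap a pre-trained conv layer with a learnable per-channel gain.
    Only the gain is optimized at test time; the conv weights stay frozen."""
    def __init__(self, conv):
        super().__init__()
        self.conv = conv
        for p in self.conv.parameters():
            p.requires_grad_(False)
        self.gain = nn.Parameter(torch.ones(conv.out_channels, 1, 1))

    def forward(self, x):
        return self.gain * self.conv(x)

def add_gains(model):
    """Recursively replace every Conv2d in a pre-trained denoiser with a gain-wrapped copy."""
    for name, module in model.named_children():
        if isinstance(module, nn.Conv2d):
            setattr(model, name, GainConv2d(module))
        else:
            add_gains(module)
    return model

# Test-time adaptation (sketch):
#   model = load_pretrained_denoiser()              # hypothetical loader
#   for p in model.parameters(): p.requires_grad_(False)
#   add_gains(model)                                # only the new gains are trainable
#   gains = [p for p in model.parameters() if p.requires_grad]
#   opt = torch.optim.Adam(gains, lr=1e-4)
#   loss = adaptation_loss(model(noisy_test), noisy_test)  # unsupervised objective, left abstract
#   loss.backward(); opt.step()
```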
Unsupervised deep video denoising

Deep convolutional neural networks (CNNs) currently achieve state-of-the-art performance in denoising videos. They are typically trained with supervision, minimizing the error between the network output and ground-truth clean videos. However, in many applications, such as microscopy, noiseless videos are not available. To address these cases, we build on recent advances in unsupervised still-image denoising to develop an Unsupervised Deep Video Denoiser (UDVD). UDVD is shown to perform competitively with current state-of-the-art supervised methods on benchmark datasets, even when trained only on a single short noisy video sequence. Experiments on microscopy data illustrate the promise of our approach for imaging modalities where ground-truth clean data is generally not available. In addition, we study the mechanisms used by trained CNNs to perform video denoising. An analysis of the gradient of the network output with respect to its input reveals that these networks perform spatio-temporal filtering that is adapted to the particular spatial structures and motion of the underlying content.
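The gradient analysis mentioned above can be reproduced with a few lines of automatic differentiation: the gradient of a single output pixel with respect to the noisy input frames is the effective spatio-temporal filter the network applies at that location. The sketch below assumes a denoiser that maps a (1, T, H, W) tensor of frames to an output of the same shape; this interface is an illustrative assumption.

```python
import torch

def output_pixel_gradient(denoiser, noisy_frames, t, i, j):
    """Gradient of one denoised output pixel with respect to the noisy input
    frames. For a network that acts (locally) linearly, this row of the
    Jacobian is the adaptive spatio-temporal filter producing that pixel."""
    noisy_frames = noisy_frames.clone().requires_grad_(True)
    out = denoiser(noisy_frames)        # assumed to return shape (1, T, H, W)
    out[0, t, i, j].backward()
    return noisy_frames.grad            # same shape as the input
```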
Interpretable and robust denoising via bias-free networks

We show that deep convolutional neural networks can be rendered robust to changes in noise level by removing additive terms in the architecture. Locally, the networks act linearly on the noisy image, enabling direct analysis of their behavior via linear-algebraic tools. These analyses provide interpretations of network functionality in terms of nonlinear adaptive filtering, and projection onto a union of low-dimensional subspaces, connecting the learning-based method to more traditional denoising methodology.
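A minimal sketch of the architectural change, assuming ReLU nonlinearities: setting bias=False in every convolution (and omitting additive terms in any normalization layers) makes the network first-order homogeneous, so around any input it acts as multiplication by an input-dependent matrix that can be inspected with standard linear-algebraic tools.

```python
import torch.nn as nn

class BiasFreeBlock(nn.Module):
    """Convolution + ReLU with all additive terms removed (bias=False).
    With ReLU nonlinearities and no additive constants anywhere, the network
    acts locally as f(y) = A(y) y, where A(y) is the input-dependent Jacobian,
    enabling the linear-algebraic analysis described above."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))
```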