Perception-Aware Losses Facilitate CT Denoising and Artifact Removal
EasyChair Preprint 6259, version 1, 6 pages. Date: August 7, 2021

Abstract

Concerns over the radiation-related health risks associated with the increasing use of computed tomography (CT) have accelerated the development of low-dose strategies. The need for low dose is even greater in interventional applications, where repeated scanning is performed. However, on the noisier, undersampled low-dose data, standard reconstruction algorithms produce low-resolution images with severe streaking artifacts, which adversely affects CT-assisted interventions. Recently, variational autoencoders (VAEs) have achieved state-of-the-art results in the reconstruction of high-fidelity images. Existing VAE approaches typically use mean squared error (MSE) as the loss because it is convex and differentiable. However, pixel-wise MSE does not capture the perceptual quality difference between the target and the model predictions. In this work, we propose two simple but effective MSE-based perception-aware losses that facilitate better reconstruction quality. The proposed losses are motivated by perceptual fidelity measures used in image quality assessment. One loss computes the MSE in the spectral domain; the other computes the MSE in the pixel space and in the Laplacian of Gaussian transformed domain. We use a hierarchical vector-quantized VAE equipped with the perception-aware losses for the artifact removal task. The best performing perception-aware loss improves the structural similarity index measure (SSIM) from 0.74 to 0.80. Further, we provide an analysis of the role of the pertinent components of the architecture in the denoising and artifact removal task.

Keyphrases: Artifact removal, CT reconstruction, Computed Tomography, Denoising, Low-dose CT, deep learning, perception-aware
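The two perception-aware losses described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the choice of FFT magnitude for the spectral-domain MSE, the Gaussian scale `sigma`, and the weighting factor `alpha` between the pixel and Laplacian-of-Gaussian terms are all assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def spectral_mse(pred, target):
    """MSE computed in the spectral domain.

    Assumption: the comparison is between the magnitudes of the 2D FFTs
    of the prediction and the target.
    """
    p_spec = np.abs(np.fft.fft2(pred))
    t_spec = np.abs(np.fft.fft2(target))
    return np.mean((p_spec - t_spec) ** 2)

def pixel_log_mse(pred, target, sigma=1.5, alpha=0.5):
    """MSE in pixel space plus MSE in the Laplacian-of-Gaussian domain.

    sigma (LoG scale) and alpha (weight of the LoG term) are
    hypothetical hyperparameters, not values from the paper.
    """
    pixel_term = np.mean((pred - target) ** 2)
    log_term = np.mean(
        (gaussian_laplace(pred, sigma) - gaussian_laplace(target, sigma)) ** 2
    )
    return pixel_term + alpha * log_term
```

The LoG term emphasizes edges and fine structure, which pixel-wise MSE tends to over-smooth; in a training setup these functions would be re-expressed with a differentiable framework's FFT and convolution ops so gradients can flow through them.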