
A Light-Weight Monocular Depth Estimation With Edge-Guided Occlusion Fading Reduction

EasyChair Preprint 4388

12 pages · Date: October 13, 2020

Abstract

Self-supervised monocular depth estimation methods suffer from occlusion fading, a consequence of the lack of per-pixel ground-truth supervision. A recent work introduced a post-processing method to reduce occlusion fading; however, its results exhibit a severe halo effect. In this work, we propose a novel edge-guided post-processing method that reduces occlusion fading for self-supervised monocular depth estimation. We also introduce Atrous Spatial Pyramid Pooling with Forward-Path (ASPPF) into the network to reduce computational cost and improve inference performance. The proposed ASPPF-based network is lighter, faster, and more accurate than current depth estimation networks. Our light-weight network needs only 7.6 million parameters and achieves up to 67 frames per second for 256x512 inputs on a single NVIDIA GTX 1080 GPU. The proposed network also outperforms current state-of-the-art methods on the KITTI benchmark. The ASPPF-based network and edge-guided post-processing produce better results, both quantitatively and qualitatively, than the competitors.
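To make the two building blocks referenced in the abstract concrete, the sketch below shows a standard Atrous Spatial Pyramid Pooling (ASPP) module and the common flip-and-average disparity post-processing that prior self-supervised methods use. This is an illustrative assumption in PyTorch, not the authors' code: the Forward-Path extension (ASPPF) and the edge-guided weighting are the paper's contributions and are not reproduced here, and the module/function names and dilation rates are hypothetical.

```python
# Illustrative sketch only -- not the authors' released implementation.
import torch
import torch.nn as nn


class ASPP(nn.Module):
    """Standard ASPP: parallel atrous convolutions at several dilation rates,
    concatenated and fused by a 1x1 projection. The paper's ASPPF adds a
    forward path on top of a module like this (not shown here)."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch,
                          kernel_size=3 if r > 1 else 1,
                          padding=r if r > 1 else 0,
                          dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


def flip_average_postprocess(model, image):
    """Baseline occlusion-fading reduction: average the disparity predicted
    for the image and for its horizontal flip (flipped back). The uniform
    average causes the halo effect the paper describes; the edge-guided
    post-processing replaces it with edge-aware weights."""
    disp = model(image)                                        # B x 1 x H x W
    disp_flipped = torch.flip(model(torch.flip(image, dims=[3])), dims=[3])
    return 0.5 * (disp + disp_flipped)
```

The post-processing function doubles inference cost because the network runs on both the image and its flip; the abstract's reported 67 frames per second refers to the light-weight network itself.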

Keyphrases: Atrous Spatial Pyramid Pooling, Edge-Guided Post-Processing, Monocular Depth Estimation

BibTeX entry
BibTeX does not have an entry type for preprints; the following is a workaround that produces the correct reference:
@booklet{EasyChair:4388,
  author    = {Kuo-Shiuan Peng and Gregory Ditzler and Jerzy Rozenblit},
  title     = {A Light-Weight Monocular Depth Estimation With Edge-Guided Occlusion Fading Reduction},
  howpublished = {EasyChair Preprint 4388},
  year      = {EasyChair, 2020}}