International Conference on Image Processing (ICIP) 2020 (Supplemental Material)


Parallax Motion Effect Generation Through Instance Segmentation And Depth Estimation

Allan Pinto1, Manuel A. Córdova1, Luis G. L. Decker1, Jose L. Flores-Campana1, Marcos R. Souza1, Andreza A. dos Santos1, Jhonatas S. Conceição1, Henrique F. Gagliardi2, Diogo C. Luvizon2, Ricardo da S. Torres3, and Helio Pedrini1

1 Institute of Computing, University of Campinas (UNICAMP), Campinas, SP, Brazil, 13083-852

2 AI R&D Lab, Samsung R&D Institute Brazil, Campinas, SP, Brazil, 13097-160

3 NTNU – Norwegian University of Science and Technology, Ålesund, Norway


Abstract

Stereo vision is a growing topic in computer vision due to the many opportunities this technology offers for the development of modern solutions, such as virtual and augmented reality. Motion parallax estimation is a promising technique for enhancing the user's three-dimensional experience. In this paper, we propose an algorithm for generating parallax motion effects from a single image, taking advantage of state-of-the-art instance segmentation and depth estimation approaches. This work also presents a comparison among such approaches to investigate the trade-off between efficiency and quality of the parallax motion effects, taking into consideration a multi-task learning network capable of performing instance segmentation and depth estimation at once. Experimental results and visual quality assessment indicate that the PyD-Net network (depth estimation) combined with the Mask R-CNN or FBNet networks (instance segmentation) is able to produce parallax motion effects with good visual quality.
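
As a rough illustration of the pipeline described above, the sketch below (not the authors' implementation; the function name, parameters, and the simple layered-shift rendering are assumptions) shows how instance masks and a depth map predicted from a single image could be combined into a parallax motion effect: each segmented layer is shifted horizontally in proportion to its disparity over a sweep of virtual viewpoints, so nearer objects move more than the background.

```python
# Minimal parallax sketch, assuming per-pixel depth (e.g., from PyD-Net) and
# binary instance masks (e.g., from Mask R-CNN) are already available.
import numpy as np

def parallax_frames(image, depth, masks, n_frames=16, max_shift=20):
    """Generate a naive horizontal-parallax frame sequence.

    image : (H, W, 3) uint8 array
    depth : (H, W) float array, larger values = farther away
    masks : list of (H, W) boolean arrays, one per segmented instance
    """
    h, w, _ = image.shape
    # Background layer = pixels not covered by any instance mask.
    background = ~np.any(np.stack(masks), axis=0) if masks else np.ones((h, w), bool)
    layers = [(background, float(depth[background].mean()))]
    layers += [(m, float(depth[m].mean())) for m in masks]
    layers.sort(key=lambda lm: -lm[1])          # paint far layers first

    d_min, d_max = float(depth.min()), float(depth.max())
    frames = []
    for t in np.linspace(-1.0, 1.0, n_frames):  # virtual camera sweep
        frame = np.zeros_like(image)
        for mask, d in layers:
            # Closer layers (smaller mean depth) receive a larger shift.
            disparity = 1.0 - (d - d_min) / (d_max - d_min + 1e-6)
            shift = int(round(t * max_shift * disparity))
            ys, xs = np.nonzero(mask)
            frame[ys, np.clip(xs + shift, 0, w - 1)] = image[ys, xs]
        frames.append(frame)
    return frames
```

Disoccluded regions are simply left black in this toy version; a practical system would need to fill them (e.g., by inpainting).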

Visual Quality Assessment