Document Type

Conference Proceeding

Publication Date

2021

Abstract

360° photogrammetry captures the light arriving at a central point from all directions. To process these images and transmit them over the network to the end user, the most common approach is to project them onto a 2D image using the equirectangular projection, producing a 360° image. However, this projection introduces redundancy into the image, increasing storage and transmission requirements. The standard approach to this problem is to apply general-purpose compression algorithms, such as JPEG or PNG, but these do not take full advantage of the visual redundancy produced by the equirectangular projection. In this study of the 360SP dataset (a collection of Google Street View images), we analyze the redundancy in equirectangular images and show how it is structured across the image. The outcomes of our study will support the development of spherical compression algorithms, improving the immersive experience of Virtual Reality users by reducing loading times and increasing perceived image quality.
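The redundancy discussed in the abstract comes from the geometry of the equirectangular mapping itself: every image row stores the same number of pixels, but a row at latitude φ only covers a circle on the sphere whose circumference shrinks with cos(φ), so pixels near the poles oversample nearly identical directions. The following is a minimal sketch of that mapping and the resulting per-row oversampling factor; the function names and the 512-pixel height are illustrative assumptions, not code from the paper.

```python
import numpy as np

def equirect_pixel_to_direction(u, v, width, height):
    """Map a pixel (u, v) of an equirectangular image to a unit direction
    on the sphere (hypothetical helper, not from the paper)."""
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi     # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi    # latitude in (-pi/2, pi/2)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def row_oversampling(height):
    """Horizontal oversampling per row: each row stores the same number of
    pixels, but the circle it covers has circumference proportional to
    cos(latitude), so samples near the poles are increasingly redundant."""
    v = np.arange(height)
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi
    return 1.0 / np.maximum(np.cos(lat), 1e-6)

if __name__ == "__main__":
    factors = row_oversampling(512)
    print(f"oversampling at the equator: {factors[256]:.2f}x")
    print(f"oversampling near the pole:  {factors[0]:.1f}x")
```

Running this sketch shows the oversampling factor is roughly 1x at the equator and grows into the hundreds near the poles, which is the kind of structured redundancy that generic codecs like JPEG or PNG do not exploit.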

Copyright Statement

This document was originally published in Computer Science Research Notes by Winter School of Computer Graphics. Copyright restrictions may apply. https://doi.org/10.24132/CSRN.2021.3101.6
