Mathematics, Vol. 13, Pages 3110: Neural Network-Based Atlas Enhancement in MPEG Immersive Video


Mathematics doi: 10.3390/math13193110

Authors:
Taesik Lee
Kugjin Yun
Won-Sik Cheong
Dongsan Jun

Recently, the demand for immersive video has surged with the expansion of virtual reality, augmented reality, and metaverse technologies. The Moving Picture Experts Group (MPEG) has developed the MPEG Immersive Video (MIV) international standard to efficiently transmit large-volume immersive videos. The MIV encoder generates atlas videos to convert extensive multi-view videos into low-bitrate formats. When these atlas videos are compressed with conventional video codecs, compression artifacts often appear in the reconstructed atlases. To address this issue, this study proposes a feature-extraction-based convolutional neural network (FECNN) that reduces compression artifacts during MIV atlas video transmission. The proposed FECNN takes quantization parameter (QP) maps and depth information as inputs and consists of shallow feature extraction (SFE) and deep feature extraction (DFE) blocks to exploit layered feature characteristics. Compared with the existing MIV, the proposed method improves the Bjontegaard delta bit-rate (BDBR) by −4.12% and −6.96% in the basic and additional views, respectively (negative BDBR values indicate bitrate savings).
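The abstract states that the FECNN conditions on QP maps and depth information alongside the decoded atlas. A minimal sketch of how such an input tensor might be assembled is shown below; the channel ordering, the constant per-frame QP map, and the 0–51 QP normalization are assumptions for illustration, not the paper's exact layout.

```python
import numpy as np

def build_fecnn_input(decoded_atlas, depth_map, qp, bit_depth=8):
    """Stack decoded atlas luma, depth, and a QP map into one multi-channel
    input (hypothetical layout; the paper's exact preprocessing is assumed).

    decoded_atlas, depth_map: 2-D arrays of shape (H, W)
    qp: integer quantization parameter of the coded frame
    """
    h, w = decoded_atlas.shape
    max_val = float(2 ** bit_depth - 1)
    # Broadcast the scalar QP into a per-pixel map, normalized by the
    # HEVC/VVC QP range 0..51 (an assumption about the normalization).
    qp_map = np.full((h, w), qp / 51.0, dtype=np.float32)
    return np.stack(
        [decoded_atlas.astype(np.float32) / max_val,  # reconstructed texture
         depth_map.astype(np.float32) / max_val,      # geometry side input
         qp_map],                                     # compression-level cue
        axis=0)                                       # shape (3, H, W)
```

A network with SFE blocks followed by DFE blocks would then consume this (3, H, W) tensor and predict an artifact-reduced atlas frame.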



Source: Taesik Lee, www.mdpi.com