MHV ’24: No-Reference Quality of Experience Model for Dynamic Point Clouds in Augmented Reality

ACM Mile High Video (MHV) 2024

February 11-14, 2024 – Denver, USA

https://www.mile-high.video/

[PDF], [GitHub]

Minh Nguyen (Alpen-Adria-Universität Klagenfurt, Austria), Shivi Vats (Alpen-Adria-Universität Klagenfurt, Austria), Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Point cloud streaming is becoming increasingly popular due to its ability to provide six degrees of freedom (6DoF) for immersive media. Measuring the quality of experience (QoE) is essential to evaluate the performance of point cloud applications. However, most existing QoE models for point cloud streaming are complicated and/or not open source. Therefore, it is desirable to provide an open-source QoE model for point cloud streaming.

(…)

In this work, we provide a fine-tuned ITU-T P.1203 model for dynamic point clouds in Augmented Reality (AR) environments. We re-train the P.1203 model on our dataset to obtain the coefficients that achieve the lowest root mean square error (RMSE). The dataset was collected in a subjective test in which the participants watched dynamic point clouds from the 8i lab database with Microsoft's HoloLens 2 AR glasses. Each dynamic point cloud is presented either at a static quality or with a quality switch in the middle of the sequence. We split this dataset into a training set and a validation set, train the coefficients of the P.1203 model on the former, and validate the model's performance on the latter.
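The sketch below illustrates this kind of training procedure in principle: fitting coefficients by minimizing the RMSE against subjective scores and then checking the fit on a held-out split. The feature layout, the parametric form `predict_mos`, and all numbers are hypothetical placeholders for illustration only; they are not the actual P.1203 modules, the authors' training code, or the collected dataset.

```python
# Illustrative sketch only: re-fit model coefficients by minimizing the RMSE
# between predicted and subjective MOS, then evaluate on a validation split.
# Data and the parametric predictor are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

# Hypothetical per-sequence features (e.g., quality level, switch indicator)
# and the corresponding subjective MOS values.
features = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.5], [4.0, 0.5],
                     [1.0, 0.5], [2.0, 0.5], [3.0, 0.0], [4.0, 0.0]])
mos = np.array([2.0, 2.8, 3.6, 4.3, 2.2, 3.0, 3.5, 4.2])

# Simple training/validation split, mirroring the procedure in the text.
train, val = slice(0, 6), slice(6, None)

def predict_mos(coeffs, x):
    """Toy parametric predictor standing in for the P.1203 model."""
    a, b, c = coeffs
    return a + b * x[:, 0] + c * x[:, 1]

def rmse(coeffs, x, y):
    """Root mean square error between predictions and subjective scores."""
    return np.sqrt(np.mean((predict_mos(coeffs, x) - y) ** 2))

# Search for the coefficients that minimize RMSE on the training split.
result = minimize(rmse, x0=np.ones(3), args=(features[train], mos[train]),
                  method="Nelder-Mead")

print("fitted coefficients:", result.x)
print("training RMSE:  ", result.fun)
print("validation RMSE:", rmse(result.x, features[val], mos[val]))
```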

The trained model is available on GitHub: https://github.com/minhkstn/itu-p1203-point-clouds.

Index Terms: Point Clouds, Quality of Experience, Subjective Tests, Augmented Reality
