SPIRIT Project: Open Call 1 is live!

SPIRIT – Open Call 1

Scalable Platform for Innovations on Real-time Immersive Telepresence

https://www.spirit-project.eu/open-call-1/

SPIRIT’s first wave of Open Calls (SPIRIT-OC1) is now open. It provides up to €200,000 to financially support third parties in developing and testing a wide variety of collaborative telepresence applications on the first release of the SPIRIT platform. OC1 aims to engage different organisations to test, further develop, and validate their specific use cases (applications) or to contribute components that enhance or extend the SPIRIT platform.

Applications are open until 27 May 2024, 17:00 CET. Ten third-party projects will be selected, each with an expected total duration of 9 months. For further information about OC1 and the technical results of the SPIRIT project, please refer to the OC1 webpage: https://www.spirit-project.eu/open-call-1/.

Index Terms: Telepresence, Point Clouds, Augmented Reality

Posted in SPIRIT | Comments Off on SPIRIT Project: Open Call 1 is live!

Generative AI for HTTP Adaptive Streaming

15th ACM Multimedia Systems Conference (MMSys)
15 – 18 April 2024 | Bari, Italy.
[PDF], [Slides], [Poster]

Emanuele Artioli (Alpen-Adria-Universität Klagenfurt)

Abstract:

Video streaming stands as the cornerstone of telecommunication networks, constituting over 60% of mobile data traffic as of June 2023. The paramount challenge faced by video streaming service providers is ensuring high Quality of Experience (QoE) for users. In HTTP Adaptive Streaming (HAS), including DASH and HLS, video content is encoded at multiple quality versions, with an Adaptive Bitrate (ABR) algorithm dynamically selecting versions based on network conditions. Concurrently, Artificial Intelligence (AI) is revolutionizing the industry, particularly in content recommendation and personalization. Leveraging user data and advanced algorithms, AI enhances user engagement, satisfaction, and video quality through super-resolution and denoising techniques.
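
As an illustration of the ABR mechanism described above, the following minimal Python sketch selects a representation from a hypothetical bitrate ladder based on a throughput estimate; it is not the algorithm of any particular player or of DASH/HLS itself.

```python
# Minimal throughput-based ABR sketch (illustrative only, not a specific player's algorithm).
# Given the bitrate ladder of an HAS manifest and a throughput estimate,
# pick the highest representation that fits the available bandwidth.

BITRATE_LADDER_KBPS = [235, 750, 1750, 3000, 5800]  # hypothetical DASH/HLS ladder

def estimate_throughput_kbps(segment_bytes: int, download_seconds: float) -> float:
    """Simple last-segment throughput estimate in kbit/s."""
    return segment_bytes * 8 / 1000 / download_seconds

def select_representation(throughput_kbps: float, safety_factor: float = 0.8) -> int:
    """Return the highest bitrate that stays below a fraction of the measured throughput."""
    budget = throughput_kbps * safety_factor
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return max(candidates) if candidates else BITRATE_LADDER_KBPS[0]

if __name__ == "__main__":
    throughput = estimate_throughput_kbps(segment_bytes=1_500_000, download_seconds=2.0)
    print(select_representation(throughput))  # -> 3000 for a 6000 kbit/s estimate
```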

However, challenges persist, such as real-time processing on resource-constrained devices, the need for diverse training datasets, privacy concerns, and model interpretability. Despite these hurdles, the promise of Generative Artificial Intelligence emerges as a transformative force. Generative AI, capable of synthesizing new data based on learned patterns, holds vast potential in the video streaming landscape. In the context of video streaming, it can create realistic and immersive content, adapt in real time to individual preferences, and optimize video compression for seamless streaming in low-bandwidth conditions.

This research proposal outlines a comprehensive exploration at the intersection of advanced AI algorithms and digital entertainment, focusing on the potential of generative AI to elevate video quality, user interactivity, and the overall streaming experience. The objective is to integrate generative models into video streaming pipelines, unraveling novel avenues that promise a future of dynamic, personalized, and visually captivating streaming experiences for viewers.

Posted in ATHENA | Comments Off on Generative AI for HTTP Adaptive Streaming

MMSys ’24: ComPEQ–MR: Compressed Point Cloud Dataset with Eye Tracking and Quality Assessment in Mixed Reality

15th ACM Multimedia Systems Conference

April 15-18, 2024 – Bari, Italy

https://2024.acmmmsys.org/

[PDF], [Dataset]

Minh Nguyen (Fraunhofer Fokus, Germany), Shivi Vats (Alpen-Adria-Universität Klagenfurt, Austria), Xuemei Zhou (Centrum Wiskunde & Informatica and TU Delft, Netherlands), Irene Viola (Centrum Wiskunde & Informatica, Netherlands), Pablo Cesar (Centrum Wiskunde & Informatica, Netherlands), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Point clouds (PCs) have attracted researchers and developers due to their ability to provide immersive experiences with six degrees of freedom (6DoF). However, there are still several open issues in understanding the Quality of Experience (QoE) and visual attention of end users while experiencing 6DoF volumetric videos. First, encoding and decoding point clouds require a significant amount of both time and computational resources. Second, QoE prediction models for dynamic point clouds in 6DoF have not yet been developed due to the lack of visual quality databases. Third, visual attention in 6DoF is hardly explored, which impedes research into more sophisticated approaches for adaptive streaming of dynamic point clouds. In this work, we provide an open-source Compressed Point cloud dataset with Eye-tracking and Quality assessment in Mixed Reality (ComPEQ–MR). The dataset comprises four compressed dynamic point clouds processed by Moving Picture Experts Group (MPEG) reference tools (i.e., VPCC and GPCC), each with 12 distortion levels. We also conducted subjective tests to assess the quality of the compressed point clouds with different levels of distortion. The rating scores are attached to ComPEQ–MR so that they can be used to develop QoE prediction models in the context of MR environments. Additionally, eye-tracking data for visual saliency is included in this dataset, which is necessary to predict where people look when watching 3D videos in MR experiences. We collected opinion scores and eye-tracking data from 41 participants, resulting in 2132 responses and 164 visual attention maps in total. The dataset is available at https://ftp.itec.aau.at/datasets/ComPEQ-MR/.
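
For readers who want to work with the rating scores, the sketch below shows how per-stimulus mean opinion scores (MOS) could be aggregated from raw ratings. The file name and column names are assumptions made for illustration; please consult the dataset documentation at https://ftp.itec.aau.at/datasets/ComPEQ-MR/ for the actual layout.

```python
# Sketch: aggregating subjective ratings into MOS per distortion level.
# File name and column names below are hypothetical, chosen only to illustrate the idea.
import csv
from collections import defaultdict
from statistics import mean

def mean_opinion_scores(ratings_csv: str) -> dict:
    """Compute MOS per (sequence, distortion_level) from raw opinion scores."""
    scores = defaultdict(list)
    with open(ratings_csv, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["sequence"], row["distortion_level"])   # hypothetical columns
            scores[key].append(float(row["opinion_score"]))
    return {key: mean(values) for key, values in scores.items()}

if __name__ == "__main__":
    mos = mean_opinion_scores("ratings.csv")  # hypothetical file from the dataset
    for (seq, level), value in sorted(mos.items()):
        print(f"{seq} @ level {level}: MOS = {value:.2f}")
```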

Index Terms: Point Clouds, Quality of Experience, Subjective Tests, Augmented Reality

Posted in SPIRIT | Comments Off on MMSys ’24: ComPEQ–MR: Compressed Point Cloud Dataset with Eye Tracking and Quality Assessment in Mixed Reality

MHV ’24: No-Reference Quality of Experience Model for Dynamic Point Clouds in Augmented Reality

ACM Mile High Video (MHV) 2024

February 11-14, 2024 – Denver, USA

https://www.mile-high.video/

[PDF], [GitHub]

Minh Nguyen (Alpen-Adria-Universität Klagenfurt, Austria), Shivi Vats (Alpen-Adria-Universität Klagenfurt, Austria), Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Point cloud streaming is becoming increasingly popular due to its ability to provide six degrees of freedom (6DoF) for immersive media. Measuring the Quality of Experience (QoE) is essential to evaluate the performance of point cloud applications. However, most existing QoE models for point cloud streaming are complicated and/or not open source. Therefore, it is desirable to provide an open-source QoE model for point cloud streaming.

(…)

In this work, we provide a fine-tuned ITU-T P.1203 model for dynamic point clouds in Augmented Reality (AR) environments. We re-train the P.1203 model with our dataset to obtain the optimal coefficients of this model, i.e., those that achieve the lowest root mean square error (RMSE). The dataset was collected in a subjective test in which the participants watched dynamic point clouds from the 8i lab database with Microsoft’s HoloLens 2 AR glasses. The dynamic point clouds have static qualities or a quality switch in the middle of the sequence. We split this dataset into a training set and a validation set. We train the coefficients of the P.1203 model with the former set and validate its performance with the latter one.
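
The coefficient fitting can be pictured as a standard RMSE-minimisation problem. The sketch below illustrates that idea with a toy linear model and synthetic placeholder data; it is not the actual P.1203 re-training code, which is available in the repository linked below.

```python
# Generic sketch of coefficient fitting: find coefficients that minimize RMSE on the
# training split, then check performance on the validation split.
# The linear model and the random data are placeholders for illustration only.
import numpy as np
from scipy.optimize import minimize

def predict(coeffs: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Hypothetical linear quality model: score = features @ coeffs."""
    return features @ coeffs

def rmse(coeffs, features, targets):
    return float(np.sqrt(np.mean((predict(coeffs, features) - targets) ** 2)))

def fit(train_features, train_targets):
    x0 = np.zeros(train_features.shape[1])
    result = minimize(rmse, x0, args=(train_features, train_targets))
    return result.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train, y_train = rng.random((80, 4)), rng.random(80) * 4 + 1   # toy training split
    X_val, y_val = rng.random((20, 4)), rng.random(20) * 4 + 1       # toy validation split
    coeffs = fit(X_train, y_train)
    print("validation RMSE:", rmse(coeffs, X_val, y_val))
```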

The trained model is available on GitHub: https://github.com/minhkstn/itu-p1203-point-clouds.

Index Terms: Point Clouds, Quality of Experience, Subjective Tests, Augmented Reality

Posted in SPIRIT | Comments Off on MHV ’24: No-Reference Quality of Experience Model for Dynamic Point Clouds in Augmented Reality

EVCA: Enhanced Video Complexity Analyzer

The 15th ACM Multimedia Systems Conference (Technical Demos)

15-18 April, 2024 in Bari, Italy

[PDF], [GitHub]

Hadi Amirpour (AAU, Austria), Mohammad Ghasempour (AAU, Austria), Lingfen Qu (Guangzhou University, China), Wassim Hamidouche (TII, UAE), and Christian Timmerer (AAU, Austria)

The optimization of video compression and streaming workflows critically relies on understanding video complexity, including both spatial and temporal features. These features play a vital role in guiding rate control, predicting video encoding parameters (such as resolution and frame rate), and selecting test videos for subjective analysis. Traditional methods primarily utilize SI and TI to measure spatial and temporal complexity, respectively. Moreover, VCA has been introduced as a tool employing DCT-based functions to evaluate these features, specifically E and h for spatial and temporal complexity, respectively. In this paper, we introduce the Enhanced Video Complexity Analyzer (EVCA), an advanced tool that integrates the functionalities of both VCA and the SITI approach. Developed in Python to ensure compatibility with GPU processing, EVCA enhances the definition of temporal complexity originally used in VCA. This refinement significantly improves the detection of temporal complexity features in VCA (i.e., h), raising its Pearson Correlation Coefficient (PCC) from 0.6 to 0.77. Furthermore, EVCA demonstrates exceptional performance on GPU devices, achieving feature extraction speeds exceeding 1200 fps for 1080p resolution videos.
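
The Python sketch below illustrates the DCT-energy idea behind a spatial feature E and a temporal feature h. Block size, weighting, and normalisation here are simplified assumptions; the exact definitions and the GPU-accelerated implementation are those in the EVCA repository.

```python
# Rough sketch of DCT-energy-based complexity features in the spirit of VCA/EVCA:
# E = mean DCT energy of luma blocks in a frame (spatial complexity),
# h = mean absolute difference of block energies between consecutive frames (temporal complexity).
import numpy as np
from scipy.fft import dctn

BLOCK = 32  # assumed block size

def block_energies(luma: np.ndarray) -> np.ndarray:
    """Per-block DCT energy of a luma plane, ignoring the DC coefficient."""
    h, w = luma.shape
    energies = []
    for y in range(0, h - h % BLOCK, BLOCK):
        for x in range(0, w - w % BLOCK, BLOCK):
            coeffs = dctn(luma[y:y + BLOCK, x:x + BLOCK].astype(np.float64), norm="ortho")
            coeffs[0, 0] = 0.0  # drop the DC term so brightness does not count as texture
            energies.append(np.sum(np.abs(coeffs)))
    return np.array(energies)

def spatial_E(frame: np.ndarray) -> float:
    return float(np.mean(block_energies(frame)))

def temporal_h(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    return float(np.mean(np.abs(block_energies(frame) - block_energies(prev_frame))))
```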

Posted in ATHENA | Comments Off on EVCA: Enhanced Video Complexity Analyzer

GREEM: An Open-Source Energy Measurement Tool for Video Processing

GREEM: An Open-Source Benchmark Tool Measuring the Environmental Footprint of Video Streaming

The 15th ACM Multimedia Systems Conference (Open-source Software and Datasets)

15-18 April, 2024 in Bari, Italy

[PDF], [GitHub]

Christian Bauer (AAU, Austria), Samira Afzal (AAU, Austria), Sandro Linder (AAU, Austria), Radu Prodan (AAU, Austria), and Christian Timmerer (AAU, Austria)

Addressing climate change requires a global decrease in greenhouse gas (GHG) emissions. In today’s digital landscape, video streaming significantly influences internet traffic, driven by the widespread use of mobile devices and the rising popularity of streaming platforms. This trend emphasizes the importance of evaluating energy consumption and the development of sustainable and eco-friendly video streaming solutions with a low Carbon Dioxide (CO2) footprint. We developed a specialized tool, released as an open-source library called GREEM, addressing this pressing concern. This tool measures video encoding and decoding energy consumption and facilitates benchmark tests. It monitors the computational impact on hardware resources and offers various analysis cases. GREEM is helpful for developers, researchers, service providers, and policy makers interested in minimizing the energy consumption of video encoding and streaming.
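
To give an idea of the underlying measurement principle, the sketch below reads a Linux RAPL energy counter before and after an ffmpeg encode and reports the difference. This is only an illustration of the concept under the assumption of an Intel CPU with RAPL exposed via sysfs; it is not GREEM's actual implementation, which offers further measurement and analysis cases (see the GitHub repository).

```python
# Sketch: estimate encoding energy by differencing a CPU energy counter around an encode.
# Assumes Linux with the Intel RAPL package-0 counter in sysfs; not GREEM's implementation.
import subprocess
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0/energy_uj")  # package-0 energy counter (microjoules)

def read_energy_uj() -> int:
    return int(RAPL.read_text())

def measure_encode(cmd: list[str]) -> float:
    """Return energy in joules consumed while the encoding command runs (counter wrap ignored)."""
    before = read_energy_uj()
    subprocess.run(cmd, check=True)
    after = read_energy_uj()
    return (after - before) / 1e6

if __name__ == "__main__":
    joules = measure_encode(
        ["ffmpeg", "-y", "-i", "input.mp4", "-c:v", "libx265", "-crf", "28", "out.mp4"]
    )
    print(f"Encoding energy: {joules:.1f} J")
```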

Posted in GAIA | Comments Off on GREEM: An Open-Source Energy Measurement Tool for Video Processing

VEEP: Video Encoding Energy and CO2 Emission Prediction

ACM MMsys, GMSys workshop (2024)

15-18 April, 2024 in Bari, Italy

[PDF], [Slides]

Manuel Hoi* (AAU, Austria), Armin Lachini* (AAU, Austria), Samira Afzal (AAU, Austria), Sandro Linder (AAU, Austria), Farzad Tashtarian (AAU, Austria), Radu Prodan (AAU, Austria), and Christian Timmerer (AAU, Austria)

*These authors contributed equally to this work

In the context of rising environmental concerns, this paper introduces VEEP, an architecture designed to predict energy consumption and CO2 emissions in cloud-based video encoding. VEEP combines video analysis with machine learning (ML)-based energy prediction and real-time carbon intensity, enabling precise estimations of CPU energy usage and CO2 emissions during the encoding process. It is trained on the Video Complexity Dataset (VCD) and encoding results from various AWS EC2 instances. VEEP achieves high accuracy, indicated by an R²-score of 0.96, a mean absolute error (MAE) of 2.41 × 10⁵, and a mean squared error (MSE) of 1.67 × 10⁹. An important finding is the potential to reduce emissions by up to 375 times when comparing cloud instances and their locations. These results highlight the importance of considering environmental factors in cloud computing.
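
The core emission calculation implied by this pipeline is a simple product of predicted encoding energy and the carbon intensity of the hosting region, as the Python sketch below illustrates. The energy predictor and the intensity values are placeholders for illustration, not VEEP's trained model or live grid data.

```python
# Sketch of the emission calculation implied by VEEP's pipeline:
# CO2 emissions = predicted encoding energy (kWh) x regional carbon intensity (gCO2/kWh).
# Both the energy model and the intensity table below are illustrative placeholders.

CARBON_INTENSITY_G_PER_KWH = {   # hypothetical regional averages, not live data
    "eu-north-1": 30.0,
    "us-east-1": 380.0,
}

def predicted_energy_kwh(video_complexity: float, encoder_factor: float) -> float:
    """Stand-in for the ML energy model: a toy linear relation."""
    return video_complexity * encoder_factor

def emissions_g(video_complexity: float, encoder_factor: float, region: str) -> float:
    return predicted_energy_kwh(video_complexity, encoder_factor) * CARBON_INTENSITY_G_PER_KWH[region]

if __name__ == "__main__":
    for region in CARBON_INTENSITY_G_PER_KWH:
        print(region, f"{emissions_g(0.8, 0.05, region):.1f} gCO2")
```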

Posted in GAIA | Comments Off on VEEP: Video Encoding Energy and CO2 Emission Prediction