ALIVE: A Latency- and Cost-Aware Hybrid P2P-CDN Framework for Live Video Streaming

IEEE Transactions on Network and Service Management 

[PDF]

Reza Farahani (Alpen-Adria-Universität Klagenfurt, Austria), Ekrem Çetinkaya (Alpen-Adria-Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), Mohammad Shojafar (University of Surrey, UK), Mohammad Ghanbari (University of Essex, UK), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Recent years have witnessed video streaming evolve into one of the most popular Internet applications. With the ever-increasing personalized demands for high-definition and low-latency video streaming services, network-assisted video streaming schemes employing modern networking paradigms have become a promising complementary solution in the HTTP Adaptive Streaming (HAS) context. The emergence of such techniques addresses long-standing challenges of enhancing users’ Quality of Experience (QoE), end-to-end (E2E) latency, as well as network utilization. However, designing a cost-effective, scalable, and flexible network-assisted video streaming architecture that supports the aforementioned requirements for live streaming services is still an open challenge. This article leverages novel networking paradigms, i.e., edge computing and Network Function Virtualization (NFV), and promising video solutions, i.e., HAS, Video Super-Resolution (SR), and Distributed Video Transcoding (TR), to introduce A Latency- and cost-aware hybrId P2P-CDN framework for liVe video strEaming (ALIVE). We first introduce the ALIVE multi-layer architecture and design an action tree that considers all feasible resources (i.e., storage, computation, and bandwidth) provided by peers, edge, and CDN servers for serving peer requests with acceptable latency and quality. We then formulate the problem as a Mixed Integer Linear Programming (MILP) optimization model executed at the edge of the network. To alleviate the optimization model’s high time complexity, we propose a lightweight heuristic, namely, the Greedy-Based Algorithm (GBA). Finally, we (i) design and instantiate a large-scale cloud-based testbed including 350 HAS players, (ii) deploy ALIVE on it, and (iii) conduct a series of experiments to evaluate the performance of ALIVE in various scenarios.
Experimental results indicate that ALIVE (i) improves the users’ QoE by at least 22%, (ii) decreases the incurred cost of the streaming service provider by at least 34%, (iii) shortens clients’ serving latency by at least 40%, (iv) reduces edge server energy consumption by at least 31%, and (v) reduces backhaul bandwidth usage by at least 24% compared to baseline approaches.
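As a rough illustration of the greedy idea behind a heuristic like GBA, the sketch below picks, for each request, the cheapest action whose latency fits the client’s budget. The action names, latency values, and cost units are hypothetical stand-ins, not taken from the paper:

```python
# Hypothetical sketch of greedy action selection over an action tree.
# Latencies and costs are illustrative values, not measurements from ALIVE.

ACTIONS = [
    # (name, latency_ms, cost_units)
    ("fetch_from_peer",        40, 1.0),
    ("fetch_from_edge_cache",  25, 2.0),
    ("transcode_at_edge",      60, 3.5),
    ("super_resolve_at_peer",  80, 0.5),
    ("fetch_from_cdn",         90, 5.0),
]

def greedy_select(latency_budget_ms):
    """Pick the cheapest action whose latency fits the client's budget."""
    feasible = [a for a in ACTIONS if a[1] <= latency_budget_ms]
    if not feasible:
        return None  # no action fits; reject or relax the budget
    return min(feasible, key=lambda a: a[2])

print(greedy_select(50))  # -> ('fetch_from_peer', 40, 1.0)
```

Unlike the MILP model, which optimizes all requests jointly, such a greedy pass handles one request at a time and therefore runs in linear time over the action set.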

Keywords: HTTP Adaptive Streaming (HAS); Edge Computing; Network Function Virtualization (NFV); Content Delivery Network (CDN); Peer-to-Peer (P2P); Quality of Experience (QoE); Video Transcoding; Video Super-Resolution.

 

Posted in ATHENA

Towards Low-Latency and Energy-Efficient Hybrid P2P-CDN Live Video Streaming

Special Issue on Sustainable Multimedia Communications and Services, IEEE COMSOC MMTC Communications – Frontiers

[PDF]

Reza Farahani (Alpen-Adria-Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Streaming segmented videos over the Hypertext Transfer Protocol (HTTP) is an increasingly popular approach in both live and video-on-demand (VoD) applications. However, designing a scalable and adaptable framework that reduces servers’ energy consumption and supports low-latency and high-quality services, particularly for live video streaming scenarios, is still challenging for Over-The-Top (OTT) service providers. To address such challenges, this paper introduces a new hybrid P2P-CDN framework that leverages new networking and computing paradigms, i.e., Network Function Virtualization (NFV) and edge computing, for live video streaming. The proposed framework introduces a multi-layer architecture and a tree of possible actions therein (an action tree), taking into account all available resources from peers, edge, and CDN servers to efficiently distribute video fetching and transcoding tasks across a hybrid P2P-CDN network, consequently reducing users’ latency and enhancing video quality. We also discuss our testbed designed to validate the framework and compare it with baseline methods. The experimental results indicate that the proposed framework improves user Quality of Experience (QoE), reduces client serving latency, and reduces edge server energy consumption compared to baseline approaches.

Keywords: Energy Efficiency; HAS; DASH; Edge Computing; NFV; CDN; P2P; Low Latency; QoE; Video Transcoding.

Posted in ATHENA

SIGMM Test of Time Paper Honorable Mention in the category of “MM Systems & Networking”

We’re excited to share that the ACM Special Interest Group on Multimedia (SIGMM) presents to

Stefan Lederer, Christopher Müller, and Christian Timmerer

The SIGMM Test of Time Paper Honorable Mention in the category of “MM Systems & Networking”

for their paper “Dynamic Adaptive Streaming over HTTP Dataset”. In Proceedings of the 3rd Multimedia Systems Conference, MMSys ’12, pages 89–94, New York, NY, USA, 2012. ACM. doi:10.1145/2155555.2155570

Posted in ATHENA

Machine Learning Based Resource Utilization Prediction in the Computing Continuum

IEEE International Workshop on Computer Aided Modeling and Design of Communication Links and Networks

6–8 November 2023 | Edinburgh, Scotland

Conference Website

[PDF][Slides]

Christian Bauer (Alpen-Adria-Universität Klagenfurt), Narges Mehran (Alpen-Adria-Universität Klagenfurt), Radu Prodan (Alpen-Adria-Universität Klagenfurt) and Dragi Kimovski (Alpen-Adria-Universität Klagenfurt)

Abstract: This paper presents UtilML, a novel approach for tackling resource utilization prediction challenges in the computing continuum. UtilML leverages Long Short-Term Memory (LSTM) neural networks, a machine learning technique, to forecast resource utilization accurately. The effectiveness of UtilML is demonstrated through its evaluation on data extracted from a real GPU cluster in a computing continuum infrastructure comprising more than 1800 computing devices. To assess the performance of UtilML, we compared it with two related approaches that utilize a Baseline-LSTM model. Furthermore, we analyzed the LSTM results against user-predicted values provided by GPU cluster owners for task deployment with estimated allocation values. The results indicate that UtilML outperformed user predictions by 2% to 27% for CPU utilization prediction. For memory prediction, UtilML variants excelled, showing improvements of 17% to 20% compared to user predictions.
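To illustrate the kind of input a sequence forecaster like UtilML consumes, the sketch below turns a utilization trace into lookback windows, each pairing a short history with the next reading to predict. The trace values and the `make_windows` helper are illustrative and not part of the UtilML code:

```python
# Illustrative sketch (not the UtilML implementation): preparing a
# resource-utilization time series for a sequence model such as an LSTM.

def make_windows(series, lookback):
    """Split a utilization trace into (history, next_value) training pairs."""
    return [(series[i:i + lookback], series[i + lookback])
            for i in range(len(series) - lookback)]

# Hypothetical CPU-utilization trace (fraction of allocated capacity)
trace = [0.20, 0.25, 0.30, 0.28, 0.35, 0.40, 0.38]
samples = make_windows(trace, lookback=3)

# Each sample: the model sees 3 past readings and learns to predict the next.
print(samples[0])  # -> ([0.2, 0.25, 0.3], 0.28)
```

In a real pipeline the windows would be batched into tensors and fed to the LSTM; the windowing step itself is the same regardless of the model behind it.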

Keywords: Utilization Prediction, Machine Learning, Computing Continuum, Cloud.

Posted in GAIA

Empowerment of Atypical Viewers via Low-Effort Personalized Modeling of Video Streaming Quality

The 19th International Conference on emerging Networking EXperiments and Technologies

December 5-8, 2023 | Paris, France

[PDF] [PPT] [PPT (Artifacts)]

Leonardo Peroni (IMDEA Networks Institute and UC3M), Sergey Gorinsky (IMDEA Networks Institute), Farzad Tashtarian (AAU, Austria), and Christian Timmerer (AAU, Austria).


Abstract: Quality of Experience (QoE) and QoE models are of increasing importance to networked systems. Traditional QoE modeling for video streaming applications builds a one-size-fits-all QoE model that underserves atypical viewers who perceive QoE differently. To address the problem of atypical viewers, this paper proposes iQoE (individualized QoE), a method that employs explicit, expressible, and actionable feedback from a viewer to construct a personalized QoE model for this viewer. The iterative iQoE design exercises active learning and combines a novel sampler with a modeler. The chief emphasis of our paper is on making iQoE sample-efficient and accurate.
By leveraging the Microworkers crowdsourcing platform, we conduct studies with 120 subjects who provide 14,400 individual scores. According to the subjective studies, a session of about 22 minutes empowers a viewer to construct a personalized QoE model that, compared to the best of the 10 baseline models, delivers an average accuracy improvement of at least 42% for all viewers and at least 85% for the atypical viewers. The large-scale simulations based on a new technique of synthetic profiling expand the evaluation scope by exploring iQoE design choices, parameter sensitivity, and generalizability.
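The sampler/modeler interplay of an active-learning design like iQoE’s can be caricatured as a query-by-committee loop: ask the viewer to rate next the experience on which candidate QoE models disagree most. Everything below (the two linear models, the candidate pool, the feature values) is a hypothetical stand-in, not iQoE’s actual components:

```python
# Toy active-learning sampler: query the candidate with maximal disagreement
# between two crude QoE models. All models and values are illustrative.

candidates = {  # hypothetical experiences -> (bitrate_score, stall_penalty)
    "exp_a": (4.0, 0.5),
    "exp_b": (2.5, 2.0),
    "exp_c": (3.0, 0.1),
}

def model_1(features):
    bitrate, stalls = features
    return bitrate - stalls            # equal weighting of the two features

def model_2(features):
    bitrate, stalls = features
    return 0.8 * bitrate - 1.5 * stalls  # alternative weighting

def next_query(pool):
    """Sampler: pick the experience the committee disagrees on most."""
    return max(pool, key=lambda k: abs(model_1(pool[k]) - model_2(pool[k])))

print(next_query(candidates))  # -> 'exp_b'
```

Each viewer rating would then be used to refit the models, shrinking disagreement where it matters; this is what makes such a loop sample-efficient compared to rating experiences at random.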

 

Posted in ATHENA

IEEE Access: Characterization of the Quality of Experience and Immersion of Point Cloud Videos in Augmented Reality through a Subjective Study

IEEE Access, A Multidisciplinary, Open-access Journal of the IEEE

[PDF]

Minh Nguyen (Alpen-Adria-Universität Klagenfurt, Austria), Shivi Vats (Alpen-Adria-Universität Klagenfurt, Austria), Sam Van Damme (Ghent University – imec and KU Leuven, Belgium), Jeroen van der Hooft (Ghent University – imec, Belgium), Maria Torres Vega (Ghent University – imec and KU Leuven, Belgium), Tim Wauters (Ghent University – imec, Belgium), Filip De Turck (Ghent University – imec, Belgium), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Point cloud streaming has recently attracted research attention as it has the potential to provide six degrees of freedom movement, which is essential for truly immersive media. The transmission of point clouds requires high-bandwidth connections, and adaptive streaming is a promising solution to cope with fluctuating bandwidth conditions. Thus, understanding the impact of different factors in adaptive streaming on the Quality of Experience (QoE) becomes fundamental. Point clouds have been evaluated in Virtual Reality (VR), where viewers are completely immersed in a virtual environment. Augmented Reality (AR) is a novel technology and has recently become popular, yet quality evaluations of point clouds in AR environments are still limited to static images.

In this paper, we perform a subjective study of four impact factors on the QoE of point cloud video sequences in AR conditions, including encoding parameters (quantization parameters, QPs), quality switches, viewing distance, and content characteristics. The experimental results show that these factors significantly impact the QoE. The QoE decreases if the sequence is encoded at high QPs and/or switches to lower quality and/or is viewed at a shorter distance, and vice versa. Additionally, the results indicate that the end user is not able to distinguish the quality differences between two quality levels at a specific (high) viewing distance. An intermediate-quality point cloud encoded at geometry QP (G-QP) 24 and texture QP (T-QP) 32 and viewed at 2.5 m can have a QoE (i.e., score 6.5 out of 10) comparable to a high-quality point cloud encoded at 16 and 22 for G-QP and T-QP, respectively, and viewed at a distance of 5 m. Regarding content characteristics, objects with lower contrast can yield better quality scores. Participants’ responses reveal that the visual quality of point clouds has not yet reached the desired level of immersion; the average QoE of the highest visual quality is less than 8 out of 10. There is also a good correlation between objective metrics (e.g., color Peak Signal-to-Noise Ratio (PSNR) and geometry PSNR) and the QoE score; in particular, the Pearson correlation coefficient of color PSNR is 0.84. Finally, we found that machine learning models are able to accurately predict the QoE of point clouds in AR environments.

The subjective test results and questionnaire responses are available on GitHub: https://github.com/minhkstn/QoE-and-Immersion-of-Dynamic-Point-Cloud.
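A metric-versus-QoE correlation of the kind reported above boils down to a plain Pearson computation, sketched below. The PSNR values and opinion scores are made up for illustration; the published dataset would supply the real ones:

```python
# Sketch: Pearson correlation between an objective metric (e.g., color PSNR)
# and subjective QoE scores. The sample values below are invented.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

psnr = [28.0, 31.5, 34.0, 37.2, 40.1]   # hypothetical color-PSNR values (dB)
qoe  = [4.1, 5.0, 6.3, 7.4, 8.2]        # hypothetical mean opinion scores

print(round(pearson(psnr, qoe), 3))
```

With real data, a coefficient near the reported 0.84 would confirm that color PSNR tracks subjective quality well, though not perfectly.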

Index Terms: Point Clouds, Quality of Experience, Subjective Tests, Augmented Reality

Posted in SPIRIT

Video Coding Enhancements for HTTP Adaptive Streaming using Machine Learning

Klagenfurt, June 7, 2023

Congratulations to Dr. Ekrem Çetinkaya for successfully defending his dissertation on “Video Coding Enhancements for HTTP Adaptive Streaming using Machine Learning” at Universität Klagenfurt in the context of the Christian Doppler Laboratory ATHENA.

Abstract

Video is evolving into a crucial tool as daily lives are increasingly centered around visual communication. The demand for better video content is constantly rising, from entertainment to business meetings. The delivery of video content to users is of utmost significance. HTTP adaptive streaming, in which the video content adjusts to the changing network circumstances, has become the de facto method for delivering Internet video.
As video technology continues to advance, it presents a number of challenges, one of which is the large amount of data required to describe a video accurately. To address this issue, it is necessary to have a powerful video encoding tool. Historically, these efforts have relied on hand-crafted tools and heuristics. However, with the recent advances in machine learning, there has been increasing exploration into using these techniques to enhance video coding performance.

This thesis proposes eight contributions that enhance video coding performance for HTTP adaptive streaming using machine learning. These contributions are presented in four categories:

  1. Fast Multi-Rate Encoding with Machine Learning: This category consists of two contributions that target the need for encoding multiple representations of the same video for HTTP adaptive streaming. FaME-ML tackles the multi-rate encoding problem using convolutional neural networks to guide encoding decisions, while FaRes-ML extends the solution for multi-resolution scenarios. Evaluations showed FaME-ML could reduce parallel encoding time by 41% and FaRes-ML could reduce overall encoding time by 46% while preserving the visual quality.
  2. Enhancing Visual Quality on Mobile Devices: The second category consists of three contributions targeting the need for improved visual quality of videos on mobile devices. The limited hardware of mobile devices makes them a challenging environment for executing complex machine learning models. SR-ABR explores the integration of the super-resolution approach into the adaptive bitrate selection algorithm and can save up to 43% bandwidth. LiDeR addresses the computational complexity of super-resolution networks by proposing an alternative that considers the limitations of mobile devices by design; it can increase execution speed by up to 428% compared to state-of-the-art networks while preserving the visual quality. MoViDNN is proposed to enable straightforward evaluation of machine learning-based solutions for improving visual quality on mobile devices.
  3. Light-Field Image Coding with Super-Resolution: Emerging media formats provide a more immersive experience at the cost of increased data size. The third category proposes a single contribution to tackle the huge data size of light field images by utilizing super-resolution. LFC-SASR can reduce data size by 54% while preserving the visual quality.
  4. Blind Visual Quality Assessment Using Vision Transformers: The final category consists of a single contribution proposed to tackle the blind visual quality assessment problem for videos. BQ-ViT utilizes the recently proposed vision transformer architecture. It can predict the visual quality of a video with a high correlation (0.895 PCC) by using only the encoded frames.

The thesis is available for download here. Slides and video are available as follows:

Posted in ATHENA