Hybrid P2P-CDN Architecture for Live Video Streaming: An Online Learning Approach

IEEE Global Communications Conference (GLOBECOM)

December 4-8, 2022 | Rio de Janeiro, Brazil

[PDF][Slides]

Reza Farahani (Alpen-Adria-Universität Klagenfurt, Austria), Abdelhak Bentaleb (National University of Singapore, Singapore), Ekrem Cetinkaya (Alpen-Adria-Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), Roger Zimmermann (National University of Singapore, Singapore), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: A cost-effective, scalable, and flexible architecture that supports low-latency and high-quality live video streaming is still a challenge for Over-The-Top (OTT) service providers. To cope with this issue, this paper leverages Peer-to-Peer (P2P), Content Delivery Network (CDN), edge computing, Network Function Virtualization (NFV), and distributed video transcoding paradigms to introduce a hybRId P2P-CDN arcHiTecture for livE video stReaming (RICHTER). We first introduce RICHTER’s multi-layer architecture and design an action tree that considers all feasible resources provided by peers, edge servers, and CDN servers for serving peer requests with minimum latency and maximum quality. We then formulate the problem as an optimization model executed at the edge of the network. We present an Online Learning (OL) approach that leverages an unsupervised Self Organizing Map (SOM) to (i) alleviate the time complexity of the optimization model and (ii) make it suitable for large-scale scenarios by enabling decisions for groups of requests instead of single requests. Finally, we implement the RICHTER framework, conduct experiments on a large-scale cloud-based testbed including 350 HAS players, and compare its effectiveness with baseline systems. The experimental results illustrate that RICHTER outperforms baseline schemes in terms of users’ Quality of Experience (QoE), latency, and network utilization, by at least 59%, 39%, and 70%, respectively.

Index Terms—HAS; Edge Computing; NFV; CDN; P2P; Low Latency; QoE; Video Transcoding; Online Learning.
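To illustrate the OL component, below is a minimal sketch of how an unsupervised Self Organizing Map can cluster incoming requests at the edge so that one serving decision covers a whole group of similar requests. The request features (requested bitrate, buffer level, measured throughput), grid size, and hyperparameters are illustrative assumptions, not the RICHTER implementation.

```python
# Minimal Self Organizing Map sketch for grouping streaming requests
# (illustrative only; feature set and hyperparameters are assumptions,
# not the RICHTER implementation).
import numpy as np

class SimpleSOM:
    def __init__(self, grid=(4, 4), dim=3, lr=0.5, sigma=1.5, seed=0):
        rng = np.random.default_rng(seed)
        self.grid = grid
        self.weights = rng.random((grid[0], grid[1], dim))  # one prototype per node
        self.lr, self.sigma = lr, sigma
        # Precompute node coordinates for the neighborhood computation.
        self.coords = np.array([[i, j] for i in range(grid[0]) for j in range(grid[1])])

    def bmu(self, x):
        """Return grid coordinates of the best matching unit for feature vector x."""
        d = np.linalg.norm(self.weights - x, axis=2)
        return np.unravel_index(np.argmin(d), self.grid)

    def train(self, data, epochs=20):
        for t in range(epochs):
            lr = self.lr * np.exp(-t / epochs)        # decaying learning rate
            sigma = self.sigma * np.exp(-t / epochs)  # shrinking neighborhood
            for x in data:
                bi, bj = self.bmu(x)
                # Gaussian neighborhood around the BMU.
                dist2 = ((self.coords - [bi, bj]) ** 2).sum(axis=1)
                h = np.exp(-dist2 / (2 * sigma ** 2)).reshape(self.grid)
                self.weights += lr * h[..., None] * (x - self.weights)

# Hypothetical request features: [requested bitrate (Mbps), buffer level (s), throughput (Mbps)]
requests = np.array([[4.5, 2.0, 6.0], [1.2, 8.0, 1.5], [4.2, 1.5, 5.5], [1.0, 9.0, 1.8]])
normalized = requests / requests.max(axis=0)          # normalize features before training
som = SimpleSOM()
som.train(normalized)
groups = [som.bmu(x) for x in normalized]
print(groups)  # requests mapped to the same node can share one serving decision
```

Requests that fall onto the same SOM node are then handled by one action-tree decision instead of solving the optimization model per request.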

Posted in ATHENA | Comments Off on Hybrid P2P-CDN Architecture for Live Video Streaming: An Online Learning Approach

Between Two and Six? Towards Correct Estimation of JND Step Sizes for VMAF-based Bitrate Laddering

14th International Conference on Quality of Multimedia Experience (QoMEX)

September 5-7, 2022 | Lippstadt, Germany

[PDF][Poster]

Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Raimund Schatz (AIT Austrian Institute of Technology, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract:

We currently witness the rapidly growing importance of intelligent video streaming quality optimization and reduction of video delivery costs. Per-Title encoding, in contrast to a fixed bitrate ladder, shows significant promise to deliver higher quality video streams by addressing the trade-off between compression efficiency and video characteristics such as resolution and frame rate.
Selecting encodings with a noticeable quality difference between them prevents the construction of an inefficient bitrate ladder that suffers from too similar quality representations.
In this respect, the VMAF metric represents a promising foundation for bitrate laddering, as it currently yields the highest video quality prediction performance. However, the minimum noticeable quality difference, referred to as the just-noticeable difference (JND), has not been properly validated for VMAF yet, with existing sources proposing highly diverse ΔVMAF step sizes ranging from two to six.
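As a simple illustration of how a ΔVMAF step size feeds into bitrate laddering, the sketch below prunes a candidate ladder so that adjacent representations differ by at least one JND. The candidate (bitrate, VMAF) pairs and the step size of 6 are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: prune a candidate bitrate ladder so that adjacent
# representations differ by at least one JND in VMAF. The candidate ladder
# and the chosen step size are assumptions, not values from the paper.
def prune_ladder(candidates, jnd_step=6.0):
    """candidates: list of (bitrate_kbps, vmaf) pairs sorted by ascending bitrate."""
    ladder = [candidates[0]]                      # always keep the lowest rung
    for bitrate, vmaf in candidates[1:]:
        if vmaf - ladder[-1][1] >= jnd_step:      # keep only noticeably better rungs
            ladder.append((bitrate, vmaf))
    return ladder

candidates = [(500, 62.0), (1000, 70.5), (1800, 74.0), (3000, 81.0),
              (4500, 85.5), (6000, 88.0), (8000, 92.5)]
print(prune_ladder(candidates, jnd_step=6.0))
# -> [(500, 62.0), (1000, 70.5), (3000, 81.0), (6000, 88.0)]
```

An over- or under-estimated JND step directly changes how many rungs survive, which is why validating the correct ΔVMAF value matters for ladder efficiency.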

 

Posted in ATHENA | Comments Off on Between Two and Six? Towards Correct Estimation of JND Step Sizes for VMAF-based Bitrate Laddering

FuRA: Fully Random Access Light Field Image Compression

10th European Workshop on Visual Information Processing (EUVIP)

September 11-14, 2022 | Lisbon, Portugal

[PDF]

Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Christine Guillemot (INRIA, France), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract:

Light fields are typically represented by multi-view images and enable post-capture actions such as refocusing and perspective shifts. To compress a light field image, its view images are typically converted into a pseudo video sequence (PVS), and the generated PVS is compressed using a video codec. However, when the inter-coding tools of a video codec are used to exploit the redundancy among view images, the possibility to randomly access any view image is lost. On the other hand, when video codecs independently encode view images using intra-coding tools, random access to view images is enabled, however, at the expense of a significant drop in compression efficiency. To address this trade-off, we propose to use neural representations to represent 4D light fields. For each light field, a multi-layer perceptron (MLP) is trained to map the light field's four dimensions to the color space, thus enabling random access even to individual pixels. To achieve higher compression efficiency, neural network compression techniques are deployed. The proposed method outperforms the compression efficiency of HEVC inter-coding while providing random access to view images and even pixel values.
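The core idea can be sketched as a coordinate-based MLP: the four light field coordinates are mapped to RGB, so any view or even a single pixel can be queried independently. Layer sizes, the training loop, and the data below are illustrative assumptions, not the FuRA configuration.

```python
# Minimal sketch of a coordinate-based neural representation for a 4D light
# field: an MLP maps (u, v, s, t) to RGB, so any view or even a single pixel
# can be queried without decoding neighboring views. Layer sizes and the
# training loop are illustrative assumptions, not the FuRA configuration.
import torch
import torch.nn as nn

class LightFieldMLP(nn.Module):
    def __init__(self, hidden=256, layers=4):
        super().__init__()
        dims = [4] + [hidden] * layers
        blocks = []
        for i in range(layers):
            blocks += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
        blocks.append(nn.Linear(hidden, 3))           # output: RGB
        self.net = nn.Sequential(*blocks)

    def forward(self, coords):                        # coords: (N, 4) = (u, v, s, t)
        return torch.sigmoid(self.net(coords))        # RGB in [0, 1]

model = LightFieldMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical training data: sampled coordinates and their ground-truth colors.
coords = torch.rand(1024, 4)                          # normalized (u, v, s, t)
colors = torch.rand(1024, 3)                          # RGB targets from the light field

for step in range(100):                               # overfit the MLP to this light field
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(coords), colors)
    loss.backward()
    optimizer.step()

# Random access: query one pixel of one view directly.
pixel = model(torch.tensor([[0.25, 0.75, 0.10, 0.90]]))
```

Because the "decoder" is just a forward pass at arbitrary coordinates, random access comes for free; compression then amounts to compressing the network weights.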

 

Posted in ATHENA | Comments Off on FuRA: Fully Random Access Light Field Image Compression

Low Latency Live Streaming Implementation in DASH and HLS

ACM Multimedia Conference – OSS Track

10-14 October 2022 | Lisbon, Portugal

[PDF]

Abdelhak Bentaleb (National University of Singapore), Zhengdao Zhan (National University of Singapore), Farzad Tashtarian (AAU, Austria), May Lim (National University of Singapore), Saad Harous (University of Sharjah), Christian Timmerer (AAU, Austria), Hermann Hellwagner (AAU, Austria), and Roger Zimmermann (National University of Singapore)

Abstract: Low-latency live streaming over HTTP using Dynamic Adaptive Streaming over HTTP (LL-DASH) and HTTP Live Streaming (LL-HLS) has emerged as a new way to deliver live content with respectable video quality and short end-to-end latency. Satisfying these requirements while maintaining viewer experience in practice is challenging, and directly adopting conventional adaptive bitrate (ABR) schemes will not work. Therefore, recent solutions including LoL+, L2A, Stallion, and Llama re-think conventional ABR schemes to support low-latency scenarios. These solutions have been integrated with dash.js, which supports LL-DASH. However, their performance in LL-HLS remains in question. To bridge this gap, we implement and integrate existing LL-DASH ABR schemes into the hls.js video player, which supports LL-HLS. Moreover, we conduct a series of real-world trace-driven experiments to evaluate their efficiency under various network conditions, including a comparison with the results achieved for LL-DASH in dash.js.
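As a purely conceptual illustration of what low-latency ABR logic has to balance, the sketch below shows a throughput- and latency-aware bitrate selection rule. It is not LoL+, L2A, Stallion, or Llama, and not the hls.js integration; all thresholds and the safety factor are assumptions.

```python
# Conceptual sketch of a throughput- and latency-aware ABR rule of the kind
# that low-latency schemes refine. This is NOT LoL+, L2A, Stallion, or Llama,
# and not the hls.js integration; thresholds and the safety factor are assumptions.
def select_bitrate(ladder_kbps, throughput_kbps, live_latency_s,
                   target_latency_s=3.0, safety=0.8):
    """Pick the highest bitrate sustainable at the measured throughput,
    stepping down when live latency drifts above the target."""
    budget = throughput_kbps * safety
    if live_latency_s > 1.5 * target_latency_s:
        budget *= 0.5                      # aggressive back-off: latency is far off target
    elif live_latency_s > target_latency_s:
        budget *= 0.75                     # mild back-off: latency slightly above target
    candidates = [b for b in ladder_kbps if b <= budget]
    return max(candidates) if candidates else min(ladder_kbps)

ladder = [300, 750, 1500, 3000, 6000]
print(select_bitrate(ladder, throughput_kbps=4000, live_latency_s=2.5))  # -> 3000
print(select_bitrate(ladder, throughput_kbps=4000, live_latency_s=5.0))  # -> 1500
```

The published schemes refine this kind of rule with buffer dynamics, playback-rate control, and learning-based throughput prediction, which is exactly what the paper ports from dash.js to hls.js.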

Posted in ATHENA | Comments Off on Low Latency Live Streaming Implementation in DASH and HLS

Hermann Hellwagner in a BITMOVIN Webinar featuring ATHENA

Latest Edge Computing Innovations for Video Streaming (ft. ATHENA)

Webinar Recording

14th June 2022

Video streaming is not for online video providers alone. Telco providers are equal players in the game of adaptive video streaming, but to stand out, they face a constant demand for innovation at the edge.

Posted in ATHENA | Comments Off on Hermann Hellwagner in a BITMOVIN Webinar featuring ATHENA

Detection and Localization of Video Transcoding From AVC to HEVC Based on Deep Representations of Decoded Frames and PU Maps

[PDF]

Haichao Yao (Beijing Jiaotong University), Rongrong Ni (Beijing Jiaotong University), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Yao Zhao (Beijing Jiaotong University).

Abstract: In general, manipulated videos will eventually undergo recompression. Video transcoding occurs when the standard used for recompression differs from the prior standard. Therefore, as a special sign of recompression, video transcoding can also be considered evidence of forgery in video forensics. In this paper, we focus on the detection and localization of video transcoding from AVC to HEVC (AVC-HEVC). There are two possible cases of AVC-HEVC transcoding: whole-video transcoding and partial-frame transcoding. However, existing forensic methods only consider the detection of whole-video transcoding and do not consider the localization of partial-frame transcoding. In view of this, we propose a frame-wise scheme based on a convolutional neural network. First, we analyze how the essential difference between AVC-HEVC transcoded and directly HEVC-encoded video is reflected in the high-frequency components of decoded frames. Then, the partition and location information of prediction units (PUs) is introduced to generate frame-level PU maps in order to make full use of the local artifacts of PUs. Finally, taking the decoded frames and PU maps as inputs, a dual-path network including specific convolutional modules and an adaptive fusion module is proposed. With it, the artifacts on a single frame can be better extracted, and the transcoded frames can be detected and localized. Coupled with a simple voting strategy, the results of whole-video transcoding detection can be easily obtained. A large number of experiments are conducted to verify the performance. The results show that the proposed scheme outperforms or rivals state-of-the-art methods in AVC-HEVC transcoding detection and localization.
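To make the dual-path idea concrete, here is a minimal sketch in the spirit of the described design: one path processes decoded frames, the other processes frame-level PU maps, and a learned gate fuses their features before frame-wise classification. Channel counts, depths, and the fusion gate are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a dual-path CNN with adaptive fusion: one path takes the
# decoded frame, the other takes its PU map, and a learned gate fuses their
# features before frame-wise classification. All sizes are illustrative
# assumptions, not the paper's architecture.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class DualPathTranscodeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.frame_path = nn.Sequential(conv_block(3, 16), conv_block(16, 32))
        self.pu_path = nn.Sequential(conv_block(1, 16), conv_block(16, 32))
        self.gate = nn.Sequential(nn.Linear(64, 64), nn.Sigmoid())  # adaptive fusion weights
        self.classifier = nn.Linear(64, 2)        # transcoded vs. not, per frame

    def forward(self, frame, pu_map):
        f = self.frame_path(frame).mean(dim=(2, 3))   # global average pooling
        p = self.pu_path(pu_map).mean(dim=(2, 3))
        feats = torch.cat([f, p], dim=1)              # (N, 64) joint feature
        fused = feats * self.gate(feats)              # element-wise gated fusion
        return self.classifier(fused)

# Hypothetical inputs: a batch of decoded frames and their PU maps.
frames = torch.rand(4, 3, 64, 64)
pu_maps = torch.rand(4, 1, 64, 64)
logits = DualPathTranscodeDetector()(frames, pu_maps)  # (4, 2) frame-wise scores
```

Frame-wise scores like these can then be aggregated with a simple voting strategy to decide whether the whole video was transcoded.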

Posted in ATHENA | Comments Off on Detection and Localization of Video Transcoding From AVC to HEVC Based on Deep Representations of Decoded Frames and PU Maps

Hadi Amirpour to give a talk at Fraunhofer FOKUS MWS

Video Encoding Optimizations for Live Video Streaming

FOKUS Media Web Symposium

20th – 24th June 2022 | Berlin, Germany

 

Abstract: Live video streaming is expected to become mainstream in fifth-generation (5G) mobile networks. Optimizing video encoding for live streaming is challenging because of the latency that any optimization method introduces. In this talk, we introduce low-latency optimization methods that improve the quality of video encodings by predicting optimized encoding parameters.
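A minimal sketch of the general idea of predicting encoding parameters from lightweight content features is shown below; the features, model, and targets are illustrative assumptions, not the methods presented in the talk.

```python
# Minimal sketch: predict an encoding parameter (here, CRF) from lightweight
# content-complexity features, avoiding per-segment brute-force encoding in a
# live setting. Features, model, and targets are illustrative assumptions,
# not the methods presented in the talk.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: [spatial complexity, temporal complexity] per segment,
# and the CRF that met a target quality when found offline by exhaustive encoding.
features = np.array([[0.2, 0.1], [0.5, 0.4], [0.8, 0.7], [0.3, 0.6], [0.9, 0.2]])
best_crf = np.array([30, 26, 22, 27, 24])

model = LinearRegression().fit(features, best_crf)

# At live-encoding time, predict the parameter directly from the new segment's features.
new_segment = np.array([[0.6, 0.5]])
predicted_crf = int(round(model.predict(new_segment)[0]))
print(predicted_crf)
```

The prediction replaces the costly search step, which is what keeps the added latency low enough for live streaming.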

 

Hadi Amirpour is a postdoctoral research fellow at ATHENA, directed by Prof. Christian Timmerer. He received his B.Sc. degrees in Electrical and Biomedical Engineering and his M.Sc. in Electrical Engineering, and he received his Ph.D. in Computer Science from the University of Klagenfurt in 2022. He was appointed co-chair of Task Force 7 (TF7) Immersive Media Experience (IMEx) at the 15th Qualinet meeting. He was involved in the project EmergIMG, a Portuguese consortium on emerging imaging technologies, funded by the Portuguese funding agency and H2020. Currently, he is working on the ATHENA project in cooperation with its industry partner Bitmovin. His research interests include image processing and compression, video processing and compression, quality of experience, emerging 3D imaging technology, and medical image analysis.

 

 

Posted in ATHENA | Comments Off on Hadi Amirpour to give a talk at Fraunhofer FOKUS MWS