Content-adaptive Encoder Preset Prediction for Adaptive Live Streaming

2022 Picture Coding Symposium (PCS)

December 7-9, 2022 | San Jose, CA, USA

Conference Website
[PDF][Slides][Video]

Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Prajit T Rajendran (Université Paris-Saclay, France), Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract:

In live streaming applications, a fixed set of bitrate-resolution pairs (known as the bitrate ladder) is generally used to avoid the additional pre-processing run-time needed to analyze the complexity of every video content and determine an optimized bitrate ladder. Furthermore, live encoders use the fastest available preset for encoding to ensure the minimum possible latency in streaming; the encoding speed is expected to match the video framerate. However, an optimized encoding preset may result in (i) increased Quality of Experience (QoE) and (ii) improved CPU utilization while encoding. In this light, this paper introduces a Content-Adaptive encoder Preset prediction Scheme (CAPS) for adaptive live video streaming applications. In this scheme, the encoder preset is determined using Discrete Cosine Transform (DCT)-energy-based low-complexity spatial and temporal features for every video segment, the number of CPU threads allocated to each encoding instance, and the target encoding speed. Experimental results show that CAPS yields an overall quality improvement of 0.83 dB PSNR and 3.81 VMAF at the same bitrate, compared to encoding the HTTP Live Streaming (HLS) bitrate ladder with the fastest preset of the open-source x265 HEVC encoder. This is achieved while maintaining the desired encoding speed and reducing CPU idle time.
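
The core idea is to pick, per segment, the slowest x265 preset that is still expected to sustain the target (live) encoding speed, given the segment's content complexity and the available CPU threads. The sketch below illustrates such a selection step; the feature names, the pre-trained speed model, and the ffmpeg invocation are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of CAPS-style preset selection; feature names, the speed
# model, and the ffmpeg call are illustrative assumptions, not the paper's code.
import subprocess

X265_PRESETS = ["ultrafast", "superfast", "veryfast", "faster", "fast",
                "medium", "slow", "slower", "veryslow"]

def predict_preset(spatial_complexity, temporal_complexity,
                   cpu_threads, target_fps, speed_model):
    """Pick the slowest preset still expected to reach the target encoding speed.

    `speed_model` is assumed to be a pre-trained regressor that estimates the
    achievable encoding speed (fps) of a segment at a given preset index.
    """
    best = 0  # ultrafast is the safe fallback
    for idx in range(len(X265_PRESETS)):
        predicted_fps = speed_model.predict(
            [[spatial_complexity, temporal_complexity, cpu_threads, idx]])[0]
        if predicted_fps >= target_fps:
            best = idx  # slower presets improve quality; keep the last feasible one
    return X265_PRESETS[best]

def encode_segment(segment_path, out_path, bitrate_kbps, preset, threads):
    # Encode one segment with x265 via ffmpeg using the predicted preset.
    subprocess.run([
        "ffmpeg", "-y", "-i", segment_path,
        "-c:v", "libx265", "-preset", preset,
        "-b:v", f"{bitrate_kbps}k",
        "-x265-params", f"pools={threads}",
        out_path,
    ], check=True)
```

Choosing the slowest feasible preset (rather than the fastest) is what converts spare CPU headroom into higher quality at the same bitrate.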


Light-weight Video Encoding Complexity Prediction using Spatio Temporal Features

2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP)

September 26-28, 2022 | Shanghai, China

Conference Website

[PDF][Slides][Video]

Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Prajit T Rajendran (Université Paris-Saclay, Paris, France), Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract:

The increasing demand for high-quality and low-cost video streaming services calls for the prediction of video encoding complexity. Predicting encoding complexity, i.e., encoding time and bitrate, in advance makes it possible to allocate resources and set optimized encoding parameters effectively. In this paper, a light-weight video encoding complexity prediction (VECP) scheme that predicts the encoding bitrate and the encoding time of a video with high accuracy is proposed. Firstly, low-complexity Discrete Cosine Transform (DCT)-energy-based features, namely spatial complexity, temporal complexity, and brightness, are extracted, which efficiently represent the encoding complexity of videos. Latent vectors are also extracted from a Convolutional Neural Network (CNN) with MobileNet as the backend to obtain additional features from representative frames of each video and assist the prediction process. The extreme gradient boosting (XGBoost) regression algorithm is deployed to predict video encoding complexity using the extracted features. The experimental results demonstrate that VECP predicts the encoding bitrate with an error of at most 3.47% and the encoding time with an error of at most 2.89%, while keeping the overall latency as low as 3.5 milliseconds per frame, which makes it suitable for both Video on Demand (VoD) and live streaming applications.

VECP architecture
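
As a rough illustration of the prediction stage described above, the handcrafted DCT-energy features can be concatenated with the CNN latent vector and fed into an XGBoost regressor, with one model per target quantity (bitrate, encoding time). The sketch below is a hedged approximation; feature layout, hyperparameters, and variable names are assumptions rather than the paper's exact configuration.

```python
# Hypothetical sketch of a VECP-style predictor; feature layout, hyperparameters,
# and variable names are assumptions, not the paper's exact configuration.
import numpy as np
import xgboost as xgb

def build_feature_matrix(dct_energy_features, cnn_latent_vectors):
    # Concatenate handcrafted features (spatial complexity, temporal complexity,
    # brightness) with CNN latent vectors extracted from representative frames.
    return np.concatenate([dct_energy_features, cnn_latent_vectors], axis=1)

def train_complexity_model(features, targets):
    # One XGBoost regressor per target quantity (encoding bitrate or encoding time).
    model = xgb.XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.1)
    model.fit(features, targets)
    return model

# Usage (assuming pre-extracted features and ground-truth encodings):
# X = build_feature_matrix(dct_features, latents)
# bitrate_model = train_complexity_model(X, y_bitrate_kbps)
# time_model = train_complexity_model(X, y_encoding_time_s)
# predicted_bitrate = bitrate_model.predict(X_new)
```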


ETPS: Efficient Two-pass Encoding Scheme for Adaptive Live Streaming

2022 IEEE International Conference on Image Processing (ICIP)

October 16-19, 2022 | Bordeaux, France

Conference Website

[PDF][Slides][Video]

Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract:

In two-pass encoding, also known as multi-pass encoding, the input video content is analyzed in the first pass to help the second-pass encoding make better encoding decisions and improve overall compression efficiency. In live streaming applications, a single-pass encoding scheme is mainly used to avoid the additional first-pass run-time needed to analyze the complexity of every video content. This paper introduces an Efficient low-latency Two-Pass encoding Scheme (ETPS) for live video streaming applications. In this scheme, Discrete Cosine Transform (DCT)-energy-based low-complexity spatial and temporal features are extracted for every video segment in the first pass to predict each target bitrate's optimal constant rate factor (CRF) for the second-pass constrained variable bitrate (cVBR) encoding. Experimental results show that, on average, ETPS yields encoding time savings of 43.78% compared to a traditional two-pass average bitrate encoding scheme, without any noticeable drop in compression efficiency. Additionally, compared to single-pass constant bitrate (CBR) encoding, it yields bitrate savings of 10.89% and 8.60% to maintain the same PSNR and VMAF, respectively.

ETPS architecture
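
A simplified sketch of the two passes follows: the first pass supplies content features to a CRF prediction model, and the second pass runs a CRF-based encode capped by a VBV buffer at the target bitrate (cVBR). The model, feature extraction, and exact encoder settings below are placeholders for illustration, not the paper's implementation.

```python
# Hypothetical sketch of an ETPS-style two-pass pipeline; the CRF model, feature
# extraction, and encoder settings are placeholders for illustration only.
import subprocess

def predict_crf(spatial_complexity, temporal_complexity,
                target_bitrate_kbps, crf_model):
    # crf_model is assumed to be trained offline to map content features and a
    # target bitrate to the CRF value expected to hit that bitrate.
    return crf_model.predict(
        [[spatial_complexity, temporal_complexity, target_bitrate_kbps]])[0]

def encode_cvbr(segment_path, out_path, crf, target_bitrate_kbps):
    # Second pass: CRF-based rate control constrained by a VBV buffer at the
    # target bitrate (constrained VBR), keeping the stream live-deliverable.
    subprocess.run([
        "ffmpeg", "-y", "-i", segment_path,
        "-c:v", "libx265", "-preset", "ultrafast",
        "-crf", str(round(crf)),
        "-maxrate", f"{target_bitrate_kbps}k",
        "-bufsize", f"{2 * target_bitrate_kbps}k",
        out_path,
    ], check=True)
```

The point of predicting the CRF instead of running a full first-pass encode is that the feature extraction is far cheaper than an analysis encode, which is where the reported encoding time savings come from.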


Hermann Hellwagner received appreciation award

We are happy to announce that Dr. Hermann Hellwagner has received the appreciation award of Carinthia in the area of natural/technical sciences. Congratulations!


Towards Better Quality of Experience in HTTP Adaptive Streaming


16th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) – Dijon, France – October 19-21, 2022

Conference Website
[PDF][Slides][Video]

Babak Taraghi (Alpen-Adria-Universität Klagenfurt, Austria), Selina Zoë Haack (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: HTTP Adaptive Streaming (HAS) is nowadays a popular solution for multimedia delivery. The novelty of HAS lies in the possibility of continuously adapting the streaming session to current network conditions, facilitated by Adaptive Bitrate (ABR) algorithms. Various popular streaming and Video on Demand services such as Netflix, Amazon Prime Video, and Twitch use this method. Given this broad consumer base, ABR algorithms are continuously improved to increase user satisfaction. The insights for these improvements are, among others, gathered within the research area of Quality of Experience (QoE). Within this field, various researchers have dedicated their work to identifying potential impairments and testing their impact on viewers’ QoE. Two frequently discussed visual impairments influencing QoE are stalling events and quality switches. So far, it has commonly been assumed that stalling events have the worst impact on QoE. This paper challenges this assumption by comparing stalling events with multiple quality switches and high-amplitude quality switches. Two subjective studies were conducted: in the first, participants received a monetary incentive, while the second was carried out with volunteers. The statistical analysis demonstrated that stalling events do not result in the worst degradation of QoE. These findings suggest that a reevaluation of the effect of stalling events in QoE research is needed, and they may inform further research as well as improvements to current adaptation strategies in ABR algorithms.
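
For readers unfamiliar with how such subjective results are typically compared, the sketch below shows one common nonparametric way to test whether two rating distributions differ (e.g., sessions with stalling events vs. sessions with high-amplitude quality switches). The ratings are illustrative placeholders, and the paper's actual statistical procedure is not reproduced here.

```python
# Hypothetical sketch of comparing subjective ratings for two impairment types;
# the ratings below are illustrative placeholders, not data from the paper.
from scipy import stats

# MOS-style ratings (1-5) collected under two conditions:
stalling_ratings = [2, 3, 2, 3, 3, 2, 4, 3, 2, 3]    # sessions with stalling events
switching_ratings = [2, 2, 3, 2, 3, 2, 2, 3, 2, 2]   # sessions with high-amplitude quality switches

# Mann-Whitney U test: do the two rating distributions differ significantly?
u_stat, p_value = stats.mannwhitneyu(stalling_ratings, switching_ratings,
                                     alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```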


ARARAT: A Collaborative Edge-Assisted Framework for HTTP Adaptive Video Streaming

IEEE Transactions on Network and Service Management (TNSM)

Journal Website

[PDF]

Reza Farahani (Alpen-Adria-Universität Klagenfurt, Austria), Mohammad Shojafar (University of Surrey, UK), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt, Austria), Mohammad Ghanbari (University of Essex, UK), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: With the ever-increasing demands for high-definition and low-latency video streaming applications, network-assisted video streaming schemes have become a promising complementary solution in the HTTP Adaptive Streaming (HAS) context to improve users’ Quality of Experience (QoE) as well as network utilization. Edge computing is considered one of the leading networking paradigms for designing such systems by providing video processing and caching close to the end-users. Despite the wide usage of this technology, designing network-assisted HAS architectures that support low-latency and high-quality video streaming, including edge collaboration, is still a challenge. To address these issues, this article leverages the Software-Defined Networking (SDN), Network Function Virtualization (NFV), and edge computing paradigms to propose A collaboRative edge-Assisted framewoRk for HTTP Adaptive video sTreaming (ARARAT). Aiming at minimizing HAS clients’ serving time and network cost while considering available resources and all possible serving actions, we design a multi-layer architecture and formulate the problem as a centralized optimization model executed by the SDN controller. However, to cope with the high time complexity of the centralized model, we introduce three heuristic approaches that produce near-optimal solutions through efficient collaboration between the SDN controller and edge servers. Finally, we implement the ARARAT framework, conduct our experiments on a large-scale cloud-based testbed including 250 HAS players, and compare its effectiveness with state-of-the-art systems within comprehensive scenarios. The experimental results illustrate that the proposed ARARAT methods (i) improve users’ QoE by at least 47%, (ii) decrease the streaming cost, including bandwidth and computational costs, by at least 47%, and (iii) enhance network utilization by at least 48% compared to state-of-the-art approaches.

Index Terms—HTTP Adaptive Streaming (HAS), Network-Assisted Video Streaming, Software-Defined Networking (SDN), Network Function Virtualization (NFV), Edge Computing, Edge Collaboration, Video Transcoding.
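
To give a flavor of the serving-action selection that ARARAT's heuristics perform at the edge, the sketch below greedily picks, for a single client request, the available action that minimizes a weighted sum of estimated serving time and cost. The action set, numbers, and weighting are simplified assumptions and not the paper's optimization model.

```python
# Hypothetical, heavily simplified sketch of edge serving-action selection; the
# action set, numbers, and weighting are assumptions, not ARARAT's actual model.

# Candidate actions for one client request, each with an estimated serving time
# (seconds) and cost (bandwidth + compute, arbitrary units).
ACTIONS = {
    "local_edge_cache_hit": {"time": 0.05, "cost": 1.0},
    "local_edge_transcode": {"time": 0.20, "cost": 3.0},
    "neighbor_edge_fetch":  {"time": 0.12, "cost": 2.0},
    "origin_fetch":         {"time": 0.40, "cost": 4.0},
}

def choose_action(available, alpha=0.5):
    """Greedy heuristic: pick the available action that minimizes a weighted
    sum of serving time and cost; alpha trades off latency against cost."""
    def score(name):
        a = ACTIONS[name]
        return alpha * a["time"] + (1 - alpha) * a["cost"]
    return min(available, key=score)

# Example: the requested representation is not cached on the local edge server.
print(choose_action(["local_edge_transcode", "neighbor_edge_fetch", "origin_fetch"]))
```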


Hadi Amirpour to give a talk at INSA France

LiVE: Toward Better Live Video Experience

INSA, France

September 27, 2022 | Rennes, France


Abstract: In this presentation, we first introduce the principles of video streaming and the existing challenges. While live video streaming is expected to continue growing at an accelerated pace, one potential area for optimization that has remained relatively untapped is the use of content-aware encoding to improve the quality of live contribution streams, mainly because of the latency it would add. In this talk, we introduce revolutionary real-time content-aware video quality improvement methods for live applications that keep the added latency very low.


Hadi Amirpour is a postdoctoral researcher at the University of Klagenfurt. He received his B.Sc. degrees in Electrical and Biomedical Engineering, pursued his M.Sc. in Electrical Engineering, and obtained his Ph.D. in computer science from the University of Klagenfurt in 2022. He was involved in the project EmergIMG, a Portuguese consortium on emerging imaging technologies, funded by the Portuguese funding agency and H2020. Currently, he is working on the ATHENA project in cooperation with its industry partner Bitmovin. His research interests are image processing and compression, video processing and compression, quality assessment, emerging 3D imaging technologies, and medical image analysis.
