QoCoVi: QoE- and Cost-Aware Adaptive Video Streaming for the Internet of Vehicles

Elsevier Computer Communications journal 

[PDF]

Alireza Erfanian (Alpen-Adria-Universität Klagenfurt), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt).

Abstract: Recent advances in embedded systems and communication technologies enable novel, non-safety applications in Vehicular Ad Hoc Networks (VANETs). Video streaming has become a popular core service for such applications. In this paper, we present QoCoVi as a QoE- and cost-aware adaptive video streaming approach for the Internet of Vehicles (IoV) to deliver video segments requested by mobile users at specified qualities and deadlines. Considering a multitude of transmission data sources with different capacities and costs, the goal of QoCoVi is to serve the desired video qualities with minimum costs. By applying Dynamic Adaptive Streaming over HTTP (DASH) principles, QoCoVi considers cached video segments on vehicles equipped with storage capacity as the lowest-cost sources for serving requests.

We design QoCoVi in two SDN-based operational modes: (i) centralized and (ii) distributed. In centralized mode, we can obtain a suitable solution by introducing a mixed-integer linear programming (MILP) optimization model that can be executed on the SDN controller. However, to cope with the computational overhead of the centralized approach in real IoV scenarios, we propose a fully distributed version of QoCoVi based on the proximal Jacobi alternating direction method of multipliers (ProxJ-ADMM) technique. The effectiveness of the proposed approach is confirmed through emulation with Mininet-WiFi in different scenarios.
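
For intuition, the following is a minimal sketch, in Python with PuLP, of how a segment-to-source assignment can be posed as an MILP. The sources, costs, and capacities below are hypothetical placeholders; QoCoVi's actual model additionally captures requested qualities, deadlines, and link constraints.

```python
# Toy MILP: assign each segment request to the cheapest feasible source.
# All names and numbers are illustrative assumptions, not QoCoVi's model.
import pulp

requests = ["r1", "r2", "r3"]                               # pending segment requests
sources = ["cached_vehicle", "rsu", "cellular"]             # hypothetical delivery sources
cost = {"cached_vehicle": 1, "rsu": 3, "cellular": 10}      # cost per served request
capacity = {"cached_vehicle": 1, "rsu": 2, "cellular": 3}   # max requests per source

prob = pulp.LpProblem("segment_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (requests, sources), cat="Binary")

# Objective: minimize the total delivery cost.
prob += pulp.lpSum(cost[s] * x[r][s] for r in requests for s in sources)

# Each request is served by exactly one source.
for r in requests:
    prob += pulp.lpSum(x[r][s] for s in sources) == 1

# Respect each source's capacity.
for s in sources:
    prob += pulp.lpSum(x[r][s] for r in requests) <= capacity[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for r in requests:
    print(r, "->", next(s for s in sources if x[r][s].value() == 1))
```

Cached segments on vehicles are the cheapest source in this toy cost model, so the solver drains their capacity first, which mirrors the intuition described in the abstract.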


Fast and smooth video experience

Every minute, more than 500 hours of video material are published on YouTube. These days, moving images account for a vast majority of data traffic, and there is no end in sight. This means that technologies that can improve the efficiency of video streaming are becoming all the more important. This is exactly what Hadi Amirpourazarian is working on in the Christian Doppler Laboratory ATHENA at the University of Klagenfurt. Read the full article here.


VCA v1.0 released on Valentine’s day!

As a Valentine’s Day gift to video coding enthusiasts across the globe, we released the Video Complexity Analyzer (VCA) version 1.0 as open-source software on Feb 14, 2022. The primary objective of VCA is to become the best spatial and temporal complexity predictor for every frame/video segment/video, which aids in predicting encoding parameters for applications such as scene-cut detection and online per-title encoding. VCA leverages x86 SIMD and multi-threading optimizations for effective performance. While VCA is primarily designed as a video complexity analyzer library, a command-line executable is provided to facilitate testing and development. We expect VCA to be utilized in many leading video encoding solutions in the coming years.

VCA is available as an open-source library, published under the GPLv3 license. For more details, please visit the software’s online documentation here. The source code can be found here.

Heatmap of spatial complexity (E)

Heatmap of temporal complexity (h)
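
The exact feature definitions are given in the VCA documentation; as a rough illustration only, the following sketch approximates the idea behind the two features with block-wise DCT energies. It is not VCA's implementation.

```python
# Simplified proxy for DCT-energy-based complexity features:
# E = average absolute AC energy of block-wise 2D DCTs (spatial),
# h = average absolute change of block energies between frames (temporal).
import numpy as np
from scipy.fftpack import dct

def block_energies(luma, block=32):
    """Absolute AC energy of each block's 2D DCT (DC dropped)."""
    h, w = luma.shape
    energies = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = luma[y:y + block, x:x + block].astype(np.float64)
            coeffs = dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
            coeffs[0, 0] = 0.0                 # ignore the DC coefficient
            energies.append(np.abs(coeffs).sum())
    return np.array(energies)

def complexity_features(frames):
    """Yield (E, h) per frame from an iterable of (H, W) luma planes."""
    prev = None
    for luma in frames:
        e = block_energies(luma)
        E = e.mean()                           # spatial complexity proxy
        h = np.abs(e - prev).mean() if prev is not None else 0.0
        prev = e
        yield E, h
```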


A performance comparison (frames analyzed per second) of VCA (with different levels of threading enabled) compared to Spatial Information/Temporal Information (SITI) [Github] is shown below:
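
For reference, SITI denotes the classic spatial/temporal information features of ITU-T Rec. P.910. A minimal sketch of their computation, assuming OpenCV and luma-plane inputs, looks as follows:

```python
# SI/TI per ITU-T P.910: SI is the max over frames of the spatial
# std-dev of the Sobel-filtered luma; TI is the max over frames of the
# spatial std-dev of the frame difference.
import cv2
import numpy as np

def si_ti(frames):
    si, ti, prev = [], [], None
    for f in frames:
        f = f.astype(np.float64)
        gx = cv2.Sobel(f, cv2.CV_64F, 1, 0)    # horizontal gradient
        gy = cv2.Sobel(f, cv2.CV_64F, 0, 1)    # vertical gradient
        si.append(np.hypot(gx, gy).std())
        if prev is not None:
            ti.append((f - prev).std())
        prev = f
    return max(si), (max(ti) if ti else 0.0)
```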

Further information about a few possible VCA applications can be found at:

  1. Content-adaptive Encoder Preset Prediction for Adaptive Live Streaming
  2. Light-weight Video Encoding Complexity Prediction using Spatio Temporal Features
  3. ETPS: Efficient Two-pass Encoding Scheme for Adaptive Live Streaming
  4. OPSE: Online Per-Scene Encoding for Adaptive HTTP Live Streaming
  5. Perceptually-aware Per-title Encoding for Video Streaming
  6. Live-PSTR: Live Per-title Encoding for Ultra HD Adaptive Streaming
  7. OPTE: Online Per-title Encoding for Live Video Streaming
  8. CODA: Content-aware Frame Dropping Algorithm for High Frame-rate Video Streaming
  9. VQEG NORM talk on Video Complexity Analyzer
  10. INCEPT: INTRA CU Depth Prediction for HEVC
  11. Efficient Content-Adaptive Feature-based Shot Detection for HTTP Adaptive Streaming

Live-PSTR: Live Per-title Encoding for Ultra HD Adaptive Streaming

2022 NAB Broadcast Engineering and Information Technology (BEIT) Conference

April 24-26, 2022 | Las Vegas, US

[PDF][Slides]

Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Christian Feldmann (Bitmovin, Klagenfurt), Adithyan Ilangovan (Bitmovin, Klagenfurt), Martin Smole (Bitmovin, Klagenfurt), Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt).

Abstract: Current per-title encoding schemes encode the same video content at various bitrates and spatial resolutions to find optimal bitrate-resolution pairs (known as the bitrate ladder) for each video content in Video on Demand (VoD) applications. In live streaming applications, however, a fixed bitrate ladder is used for simplicity and efficiency, avoiding the additional latency of finding optimized bitrate-resolution pairs for every video content. Yet an optimized bitrate ladder may result in (i) decreased storage or network resource consumption and/or (ii) increased Quality of Experience (QoE). In this paper, a fast and efficient per-title encoding scheme (Live-PSTR) is proposed, tailor-made for live Ultra High Definition (UHD) High Framerate (HFR) streaming. It includes a pre-processing step in which Discrete Cosine Transform (DCT)-energy-based low-complexity spatial and temporal features are used to determine the complexity of each video segment, based on which the optimized encoding resolution and framerate for streaming at every target bitrate are determined. Experimental results show that, on average, Live-PSTR yields bitrate savings of 9.46% and 11.99% to maintain the same PSNR and VMAF scores, respectively, compared to the HTTP Live Streaming (HLS) bitrate ladder.

Architecture of Live-PSTR
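
To give a feel for the final selection step, here is an illustrative toy sketch. The quality model, threshold, and candidate set are made-up placeholders, not the trained feature-based prediction used in Live-PSTR.

```python
# Toy complexity-driven selection of (width, height, fps) per target
# bitrate; all numbers below are illustrative assumptions only.
CANDIDATES = [(3840, 2160, 120), (3840, 2160, 60), (2560, 1440, 60),
              (1920, 1080, 60), (1920, 1080, 30)]   # highest demand first

def estimated_quality(E, h, width, height, fps, bitrate_kbps):
    # Placeholder model: quality grows with bits per complexity-weighted
    # pixel per second; Live-PSTR instead predicts this from its
    # DCT-energy features.
    bpp = bitrate_kbps * 1000.0 / (width * height * fps)
    return bpp / (1.0 + E + h)

def select_encoding(E, h, bitrate_kbps, threshold=1e-4):
    # Return the highest candidate whose predicted quality at the
    # target bitrate clears the threshold.
    for width, height, fps in CANDIDATES:
        if estimated_quality(E, h, width, height, fps, bitrate_kbps) >= threshold:
            return width, height, fps
    return CANDIDATES[-1]
```

The point of the structure is that more complex segments (larger E and h) fall back to lower resolutions and framerates at a given bitrate, which is the intuition behind per-title ladders.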


MPEG DASH video streaming technology co-developed in Klagenfurt wins Technology and Engineering Emmy® Award

The Emmy® Awards not only honour the work of actors and directors but also recognize technologies that steadily improve the viewing experience for consumers.

This year, the winners include the MPEG DASH Standard. Christian Timmerer (Department of Information Technology) played a leading role in its development.

Read more about it here.


Take the Red Pill for H3 and See How Deep the Rabbit Hole Goes

ACM Mile-High Video Conference 2022 (MHV)

March 01-03, 2022 | Denver, CO, USA

[PDF][Slides][Video]

Minh Nguyen (AAU, Austria), Christian Timmerer (AAU, Austria), Stefan Pham (Fraunhofer FOKUS, Germany), Daniel Silhavy (Fraunhofer FOKUS, Germany), Ali C. Begen (Ozyegin University, Turkey)

Abstract: With the introduction of HTTP/3 (H3), which uses QUIC at its core, significant improvements in Web-based secure object delivery are expected. As HTTP is a central protocol to the current adaptive streaming methods in all major over-the-top (OTT) services, an important question is what H3 will bring to the table for such services. To answer this question, we present the new features of H3 and QUIC, and compare them to those of HTTP/1.1, HTTP/2, and TCP. We also share the latest research findings in this domain.

Keywords: HTTP adaptive streaming, QUIC, CDN, ABR, OTT, DASH, HLS.
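
One well-known difference between QUIC and TCP is head-of-line blocking. As a toy model only (not a protocol implementation), the following contrasts a shared TCP byte stream, where one lost packet stalls all multiplexed HTTP/2 streams, with QUIC's independent streams, where only the affected stream stalls:

```python
# Toy head-of-line blocking model: which packets are usable before the
# lost packet is retransmitted?
packets = [("A", 1), ("B", 1), ("B", 2), ("A", 2), ("B", 3)]
lost = {("B", 2)}                       # packet 2 of stream B is lost

def delivered_before_retransmit(independent_streams):
    blocked, delivered = set(), []
    for stream, seq in packets:
        if (stream, seq) in lost:
            # On a shared TCP byte stream, a loss blocks every stream;
            # on QUIC, it blocks only the stream it belongs to.
            all_streams = {s for s, _ in packets}
            blocked |= {stream} if independent_streams else all_streams
        elif stream not in blocked:
            delivered.append((stream, seq))
    return delivered

print("H2 over TCP: ", delivered_before_retransmit(False))  # [('A', 1), ('B', 1)]
print("H3 over QUIC:", delivered_before_retransmit(True))   # [('A', 1), ('B', 1), ('A', 2)]
```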


Super-resolution Based Bitrate Adaptation for HTTP Adaptive Streaming for Mobile Devices

ACM Mile-High Video Conference 2022 (MHV)

March 01-03, 2022 | Denver, CO, USA

Conference Website

[PDF][Slides][Video]

Minh Nguyen (AAU, Austria), Ekrem Çetinkaya (AAU, Austria), Hermann Hellwagner (AAU, Austria), and Christian Timmerer (AAU, Austria)

Abstract: The advancement of mobile hardware in recent years has made it possible to run deep neural network (DNN) based approaches on mobile devices. This paper introduces (i) a lightweight super-resolution (SR) network, SR-ABR Net, deployed on mobile devices to upscale low-resolution/low-quality videos, and (ii) a novel adaptive bitrate (ABR) algorithm, WISH-SR, that leverages SR networks at the client to improve video quality depending on the client’s context. WISH-SR takes into account mobile device properties, video characteristics, and user preferences. Experimental results show that the proposed SR-ABR Net can improve video quality compared to traditional SR approaches while running in real time. Moreover, the proposed WISH-SR can significantly boost the visual quality of the delivered content while reducing both bandwidth consumption and the number of stalling events.

Keywords: Super-resolution, Deep Neural Networks, Mobile Devices, ABR
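
The paper's SR-ABR Net architecture is not reproduced here; as a generic illustration of what a lightweight, real-time-capable SR network looks like, here is a small FSRCNN-style sub-pixel model in PyTorch:

```python
# Generic lightweight SR sketch (not the paper's SR-ABR Net): a few
# small convolutions followed by sub-pixel upscaling on the luma plane.
import torch
import torch.nn as nn

class TinySR(nn.Module):
    def __init__(self, scale=2, channels=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 5, padding=2), nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.PReLU(),
            nn.Conv2d(channels, scale * scale, 3, padding=1),
        )
        self.upscale = nn.PixelShuffle(scale)   # rearranges channels to pixels

    def forward(self, x):                       # x: (N, 1, H, W) luma
        return self.upscale(self.body(x))

lr = torch.rand(1, 1, 270, 480)                 # e.g., a 480x270 luma frame
print(TinySR(scale=2)(lr).shape)                # torch.Size([1, 1, 540, 960])
```

Keeping the channel count and kernel sizes small is what makes real-time inference on mobile hardware plausible; the upscaling itself is deferred to the cheap PixelShuffle step at the end.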
