PyStream: Enhancing Video Streaming Evaluation

The 15th ACM Multimedia Systems Conference (Technical Demos)

15-18 April, 2024 in Bari, Italy

[PDF], [GitHub]

Samuel Radler* (AAU, Austria), Leon Prüller* (AAU, Austria), Emanuele Artioli (AAU, Austria), Farzad Tashtarian (AAU, Austria), and Christian Timmerer (AAU, Austria)

*These authors contributed equally to this work

As streaming services become more commonplace, analyzing their behavior effectively under different network conditions is crucial. This is normally quite expensive, requiring multiple players with different bandwidth configurations to be emulated by a powerful local machine or a cloud environment. Furthermore, emulating a realistic network behavior or guaranteeing adherence to a real network trace is challenging. This paper presents PyStream, a simple yet powerful way to emulate a video streaming network, allowing multiple simultaneous tests to run locally. By leveraging a network of Docker containers, many of the implementation challenges are abstracted away, keeping the resulting system easily manageable and upgradeable. We demonstrate how PyStream not only reduces the requirements for testing a video streaming system but also improves the accuracy of the emulations with respect to the current state-of-the-art. On average, PyStream reduces the error between the original network trace and the bandwidth emulated by video players by a factor of 2-3 compared to Wondershaper, a common network traffic shaper in many video streaming evaluation environments. Moreover, PyStream decreases the cost of running experiments compared to existing cloud-based video streaming evaluation environments such as CAdViSE.
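PyStream's headline result is a 2-3x reduction in the error between the target network trace and the bandwidth the players actually experience. The paper's exact error metric is not reproduced here; the sketch below uses mean absolute percentage error (MAPE) between an aligned trace and measured throughput, with made-up sample values, to illustrate how such a trace-adherence comparison can be computed.

```python
# Sketch of a trace-adherence error metric for evaluating a network emulator.
# The metric (MAPE) and all bandwidth values are illustrative assumptions,
# not taken from the PyStream paper.

def trace_error(target_kbps, measured_kbps):
    """Mean absolute percentage error between a bandwidth trace and the
    throughput measured at the client, sample by sample."""
    if len(target_kbps) != len(measured_kbps):
        raise ValueError("trace and measurement must be aligned")
    errors = [abs(t - m) / t for t, m in zip(target_kbps, measured_kbps) if t > 0]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical 5-sample trace and measurements from two shapers.
trace    = [3000, 1500, 800, 1500, 3000]   # target bandwidth (kbit/s)
shaper_a = [2900, 1450, 790, 1480, 2950]   # close adherence
shaper_b = [2600, 1200, 650, 1300, 2700]   # coarser shaping

print(f"shaper A error: {trace_error(trace, shaper_a):.1f}%")
print(f"shaper B error: {trace_error(trace, shaper_b):.1f}%")
```

A lower score means the emulated link tracked the trace more faithfully; comparing two shapers on the same trace reproduces the kind of head-to-head evaluation described above.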

 

Posted in ATHENA | Comments Off on PyStream: Enhancing Video Streaming Evaluation

COCONUT: Content Consumption Energy Measurement Dataset for Adaptive Video Streaming

The 15th ACM Multimedia Systems Conference (Open-source Software and Datasets)

15-18 April, 2024 in Bari, Italy

[PDF], [GitHub]

Farzad Tashtarian (AAU, Austria), Daniele Lorenzi (AAU, Austria), Hadi Amirpour (AAU, Austria), Samira Afzal (AAU, Austria), and Christian Timmerer (AAU, Austria)

HTTP Adaptive Streaming (HAS) has emerged as the predominant solution for delivering video content on the Internet. The urgency of the climate crisis has accentuated the demand for investigations into the environmental impact of HAS techniques. In HAS, clients rely on adaptive bitrate (ABR) algorithms to drive the quality selection for video segments. These algorithms often prioritize maximizing video quality under favorable network conditions, disregarding the impact of energy consumption. Further research is needed to thoroughly investigate the effects on energy consumption of bitrate and of other video parameters such as resolution and codec. In this paper, we propose COCONUT, a COntent COnsumption eNergy measUrement daTaset for adaptive video streaming, collected through a digital multimeter on various types of client devices, such as laptops and smartphones, streaming MPEG-DASH segments. Furthermore, we analyze the dataset and derive insights into the influence on energy consumption of multiple codecs; of various video encoding parameters such as segment length, framerate, bitrate, and resolution; and of decoding type, i.e., hardware or software. We gather and categorize these measurements based on segment retrieval through the network interface card (NIC), decoding, and rendering. Additionally, we compare the impact of different HAS players on energy consumption. This research offers valuable perspectives on the energy usage of streaming devices, which could contribute to a more sustainable and resource-efficient media consumption experience.
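The dataset's actual schema lives in the GitHub repository; the field names and joule values below are invented for illustration. A minimal sketch of the kind of grouping such an analysis performs, mean energy per segment split by codec, decoding type, and measurement phase:

```python
# Illustrative aggregation over COCONUT-style measurements. All records are
# hypothetical; only the grouping pattern (codec x decoding type x phase)
# mirrors the categories described in the abstract.

measurements = [
    # (codec, decoding, phase, energy per segment in joules) -- made-up values
    ("avc",  "hardware", "decoding",  4.1),
    ("avc",  "software", "decoding",  7.9),
    ("hevc", "hardware", "decoding",  4.6),
    ("hevc", "software", "decoding", 10.2),
    ("avc",  "hardware", "rendering", 3.0),
    ("hevc", "hardware", "rendering", 3.1),
]

def mean_energy(rows, codec=None, decoding=None, phase=None):
    """Average energy (J) over rows matching the given filters; a filter
    left as None matches every row."""
    sel = [e for c, d, p, e in rows
           if (codec is None or c == codec)
           and (decoding is None or d == decoding)
           and (phase is None or p == phase)]
    return sum(sel) / len(sel)

print(f"hardware decoding: {mean_energy(measurements, decoding='hardware'):.2f} J")
print(f"software decoding: {mean_energy(measurements, decoding='software'):.2f} J")
```

The same filter function supports any of the comparisons mentioned above, e.g. per-codec or per-phase averages.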

 

Posted in ATHENA | Comments Off on COCONUT: Content Consumption Energy Measurement Dataset for Adaptive Video Streaming

The 1st IEEE ICME Workshop on Surpassing Latency Limits in Adaptive Live Video Streaming (LIVES’24)

Click here for more information.

Posted in ATHENA | Comments Off on The 1st IEEE ICME Workshop on Surpassing Latency Limits in Adaptive Live Video Streaming (LIVES’24)

Patent Approval for “Low-Latency Online Per-Title Encoding”

Low-Latency Online Per-Title Encoding

US Patent

[PDF]

Vignesh Menon (Alpen-Adria-Universität Klagenfurt, Austria), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

 

Abstract: The technology described herein relates to online per-title encoding. A method for online per-title encoding includes receiving a video input, generating segments of the video input, extracting a spatial feature and a temporal feature, predicting bitrate-resolution pairs based on the spatial feature and the temporal feature, using a discrete cosine transform (DCT)-based energy function, and per-title encoding segments of the video input for the predicted bitrate-resolution pairs. A system for online per-title encoding may include memory for storing a set of bitrates and a set of resolutions, and a machine learning module configured to predict bitrate-resolution pairs based on low-complexity spatial and temporal features.
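As a rough illustration of what a DCT-based energy function looks like (the transform size, weighting, and exact feature definitions used in the patent are not reproduced here), the sketch below derives a spatial feature from the AC coefficients of a frame's 2-D DCT and a temporal feature from the DCT of the difference between consecutive frames:

```python
# Pure-Python sketch of DCT-based spatial/temporal energy features.
# Transform size, normalization, and weighting are simplifying assumptions.

import math

def dct2(block):
    """2-D DCT-II of a square block (list of lists), unnormalized."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * y + 1) * v / (2 * n)))
            out[u][v] = s
    return out

def spatial_energy(frame):
    """Sum of absolute AC coefficients: high for textured content."""
    n = len(frame)
    coeffs = dct2(frame)
    return sum(abs(coeffs[u][v])
               for u in range(n) for v in range(n) if (u, v) != (0, 0))

def temporal_energy(prev, curr):
    """DCT energy of the inter-frame difference (DC included)."""
    n = len(curr)
    diff = [[curr[x][y] - prev[x][y] for y in range(n)] for x in range(n)]
    return sum(abs(c) for row in dct2(diff) for c in row)

# Tiny synthetic 4x4 luma blocks: flat area vs. checkerboard texture.
flat    = [[128] * 4 for _ in range(4)]
texture = [[0 if (x + y) % 2 else 255 for y in range(4)] for x in range(4)]
```

A flat block concentrates all energy in the DC coefficient, so its spatial energy is near zero, while the checkerboard spreads energy into the AC coefficients; a static pair of frames yields zero temporal energy.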

 

Posted in ATHENA | Comments Off on Patent Approval for “Low-Latency Online Per-Title Encoding”

EUSIPCO’24 Special Session: Frugality for Video Streaming

EUSIPCO 2024

32nd European Signal Processing Conference

Special Session: Frugality for Video Streaming

https://eusipcolyon.sciencesconf.org/

It’s time to take action against the threat of climate change by making significant changes to our global greenhouse gas (GHG) emissions. That includes rethinking how we consume energy for digital technologies, and video in particular. Indeed, video streaming technology alone is responsible for over half of digital technology’s global impact. With digital and remote work becoming more common, there has been a rapid increase in video data volume, processing, and streaming. Unfortunately, this also means an increase in energy consumption and GHG emissions.

The goal of this special session is to gather the most recent research works dealing with the objective of reducing the impact of video streaming. It includes contributions to reducing the energy cost of generating, compressing, storing, transmitting, and displaying video data. The special session also aims to include works that target global video volume reduction (even by questioning our video usage). Finally, this special session is also dedicated to works that propose reliable models for estimating the video streaming energy cost.

Submission guidelines can be found here and the actual paper submission is here.

Important dates:

  • Full paper submission Mar. 10, 2024
  • Paper acceptance notification May 22, 2024
  • Camera-ready paper deadline Jun. 1, 2024
  • 3-Minute Thesis contest Jun. 15, 2024

Organizers:

  • Thomas Maugey, Senior Researcher at Inria, Rennes, France
  • Cagri Ozcinar, MSK AI, UK
  • Christian Timmerer, Alpen-Adria-Universität, Klagenfurt, Austria

 

Posted in ATHENA, GAIA | Comments Off on EUSIPCO’24 Special Session: Frugality for Video Streaming

ICIP 2024 Grand Challenge on Video Complexity

IEEE International Conference on Image Processing (IEEE ICIP)

Grand Challenge on

Video Complexity

27-30 October 2024, Abu Dhabi, UAE

https://cd-athena.github.io/GCVC

 

Organizers:

  • Ioannis Katsavounidis (Meta, USA)
  • Hadi Amirpour (AAU, Austria)
  • Ali Ak (Nantes Univ., France)
  • Anil Kokaram (TCD, Ireland)
  • Christian Timmerer (AAU, Austria)

 

Abstract: Video compression standards rely heavily on eliminating spatial and temporal redundancy within and across video frames. Intra-frame encoding targets redundancy within blocks of a single video frame, whereas inter-frame coding focuses on removing redundancy between the current frame and its reference frames. The level of spatial and temporal redundancy, or complexity, is a crucial factor in video compression. Generally, videos with higher complexity require a greater bitrate to maintain a specific quality level. Understanding the complexity of a video beforehand can significantly enhance the optimization of video coding and streaming workflows. While Spatial Information (SI) and Temporal Information (TI) are traditionally used to represent video complexity, they often exhibit low correlation with actual video coding performance. In this challenge, the goal is to find innovative methods that can quickly and accurately predict the spatial and temporal complexity of a video, with a high correlation to actual performance. These methods should be efficient enough to be applicable in live video streaming scenarios, ensuring real-time adaptability and optimization.
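The SI/TI features mentioned above are standardized in ITU-T Rec. P.910: SI is the maximum over frames of the standard deviation of the Sobel-filtered luma plane, and TI the maximum standard deviation of the pixel-wise difference between consecutive frames. A pure-Python sketch, operating on frames given as 2-D lists of luma values:

```python
# Minimal SI/TI computation in the spirit of ITU-T Rec. P.910.
# Real implementations work on full-size luma planes; the tiny frames here
# are only for illustration.

import math

def _std(values):
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

def _sobel_magnitude(frame):
    """Gradient magnitudes over the frame interior (borders skipped)."""
    h, w = len(frame), len(frame[0])
    mags = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (frame[y-1][x+1] + 2*frame[y][x+1] + frame[y+1][x+1]
                  - frame[y-1][x-1] - 2*frame[y][x-1] - frame[y+1][x-1])
            gy = (frame[y+1][x-1] + 2*frame[y+1][x] + frame[y+1][x+1]
                  - frame[y-1][x-1] - 2*frame[y-1][x] - frame[y-1][x+1])
            mags.append(math.hypot(gx, gy))
    return mags

def si_ti(frames):
    """Return (SI, TI) for a sequence of luma frames."""
    si = max(_std(_sobel_magnitude(f)) for f in frames)
    ti = 0.0
    for prev, curr in zip(frames, frames[1:]):
        diffs = [curr[y][x] - prev[y][x]
                 for y in range(len(curr)) for x in range(len(curr[0]))]
        ti = max(ti, _std(diffs))
    return si, ti

# Example: a textured but static two-frame "video" -> SI > 0, TI = 0.
frame = [[(3 * x + 5 * y) % 17 for x in range(5)] for y in range(5)]
si, ti = si_ti([frame, frame])
```

As the abstract notes, these classical features are cheap but correlate poorly with coding performance, which is exactly the gap the challenge targets.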

 

Posted in ATHENA | Comments Off on ICIP 2024 Grand Challenge on Video Complexity

Beyond Curves and Thresholds – Introducing Uncertainty Estimation to Satisfied User Ratios for Compressed Video

Picture Coding Symposium (PCS) 

12-14 June 2024, Taichung, Taiwan

https://2024.picturecodingsymposium.org/

[PDF]

Jingwen Zhu (University of Nantes, France), Hadi Amirpour (AAU, Austria), Raimund Schatz (AIT, Austria), Patrick Le Callet (University of Nantes, France), and Christian Timmerer (AAU, Austria)

Abstract: Just Noticeable Difference (JND) establishes the threshold between two images or videos below which differences in quality remain imperceptible to an individual. Aggregated across a population of viewers, these thresholds yield the Satisfied User Ratio (SUR), which holds significant importance in image and video compression applications: the p%SUR is the level at which differences in quality remain imperceptible to p% of users. While substantial efforts have been dedicated to predicting the p%SUR for various encoding parameters (e.g., QP) and quality metrics (e.g., VMAF), referred to as proxies, systematic consideration of the prediction uncertainties associated with these proxies has hitherto remained unexplored. In this paper, we analyze the uncertainty of p%SUR through Confidence Interval (CI) estimation and assess the consistency of various Video Quality Metrics (VQMs) as proxies for SUR. The analysis reveals challenges in directly using p%SUR as ground truth for training models and highlights the need for uncertainty estimation for SUR with different proxies.
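As a sketch of the kind of uncertainty estimation the paper calls for, the following computes a percentile-bootstrap confidence interval around a 75%SUR point estimate. The per-user JND thresholds are invented and the paper's own CI method may differ; with each user reduced to a single JND level, the 75%SUR is the distortion level still imperceptible to 75% of users, i.e. the 25th percentile of thresholds.

```python
# Bootstrap CI around a p%SUR estimate. The JND values and the reduction of
# each user to one threshold are illustrative assumptions.

import random

def p_sur(jnds, p=0.75):
    """Distortion level still imperceptible to a fraction p of users:
    the (1 - p) quantile of the per-user JND thresholds."""
    ordered = sorted(jnds)
    idx = int((1.0 - p) * (len(ordered) - 1))
    return ordered[idx]

def bootstrap_ci(jnds, p=0.75, n_boot=2000, alpha=0.05, seed=7):
    """Percentile-bootstrap CI for the p%SUR point estimate: resample
    users with replacement and take the alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    stats = sorted(p_sur([rng.choice(jnds) for _ in jnds], p)
                   for _ in range(n_boot))
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-user JND thresholds (e.g., QP increments before a
# difference becomes noticeable).
jnds = [4, 5, 5, 6, 6, 7, 7, 7, 8, 8, 9, 10, 10, 11, 12, 13]
point = p_sur(jnds)
low, high = bootstrap_ci(jnds)
print(f"75%SUR = {point}, 95% CI = [{low}, {high}]")
```

Reporting the interval alongside the point estimate is precisely what guards against treating a noisy p%SUR value as exact ground truth.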

Posted in ATHENA | Comments Off on Beyond Curves and Thresholds – Introducing Uncertainty Estimation to Satisfied User Ratios for Compressed Video