EVCA: Enhanced Video Complexity Analyzer

The 15th ACM Multimedia Systems Conference (Technical Demos)

15-18 April, 2024 in Bari, Italy

[PDF], [Github]

Hadi Amirpour (AAU, Austria), Mohammad Ghasempour (AAU, Austria), Lingfen Qu (Guangzhou University, China), Wassim Hamidouche (TII, UAE), and Christian Timmerer (AAU, Austria)

The optimization of video compression and streaming workflows critically relies on understanding video complexity, including both spatial and temporal features. These features play a vital role in guiding rate control, predicting video encoding parameters (such as resolution and frame rate), and selecting test videos for subjective analysis. Traditional methods primarily use Spatial Information (SI) and Temporal Information (TI) to measure spatial and temporal complexity, respectively. More recently, the Video Complexity Analyzer (VCA) was introduced as a tool employing DCT-based functions to evaluate these features, namely E for spatial and h for temporal complexity. In this paper, we introduce the Enhanced Video Complexity Analyzer (EVCA), an advanced tool that integrates the functionalities of both VCA and the SITI approach. Developed in Python to ensure compatibility with GPU processing, EVCA refines the definition of temporal complexity originally used in VCA. This refinement significantly improves the detection of temporal complexity features (i.e., h), raising the Pearson Correlation Coefficient (PCC) from 0.6 to 0.77. Furthermore, EVCA demonstrates exceptional performance on GPU devices, achieving feature extraction speeds exceeding 1200 fps for 1080p videos.
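
For intuition, here is a minimal Python sketch of DCT-energy-based complexity features in the spirit of VCA/EVCA; the block size, weighting, and the exact temporal definition are illustrative simplifications, not the published EVCA formulas (see the GitHub repository for those).

```python
# Minimal sketch of DCT-energy-based complexity features in the spirit of
# VCA/EVCA. Block size, weighting, and the temporal definition are
# simplifications for illustration, not the published EVCA formulas.
import numpy as np
from scipy.fft import dctn

def block_dct_energy(frame: np.ndarray, block: int = 32) -> float:
    """Mean DCT energy over non-overlapping luma blocks (spatial feature E)."""
    h, w = frame.shape
    energies = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = dctn(frame[y:y + block, x:x + block].astype(np.float64))
            coeffs[0, 0] = 0.0          # drop the DC term; keep texture energy
            energies.append(np.abs(coeffs).mean())
    return float(np.mean(energies))

def temporal_complexity(prev: np.ndarray, cur: np.ndarray) -> float:
    """Simplified h: DCT energy of the frame difference (a transform-domain
    analogue of SAD), rather than VCA's exact energy-difference definition."""
    return block_dct_energy(np.abs(cur.astype(np.int16) - prev.astype(np.int16)))

# Usage on two random 1080p luma frames:
f0, f1 = (np.random.randint(0, 256, (1080, 1920), dtype=np.uint8) for _ in range(2))
print("E =", block_dct_energy(f1), "h =", temporal_complexity(f0, f1))
```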

GREEM: An Open-Source Energy Measurement Tool for Video Processing

GREEM: An Open-Source Benchmark Tool Measuring the Environmental Footprint of Video Streaming

The 15th ACM Multimedia Systems Conference (Open-source Software and Datasets)

15-18 April, 2024 in Bari, Italy

[PDF], [Github]

Christian Bauer (AAU, Austria), Samira Afzal (AAU, Austria), Sandro Linder (AAU, Austria), Radu Prodan (AAU, Austria), and Christian Timmerer (AAU, Austria)

Addressing climate change requires a global decrease in greenhouse gas (GHG) emissions. In today’s digital landscape, video streaming accounts for a significant share of internet traffic, driven by the widespread use of mobile devices and the rising popularity of streaming platforms. This trend emphasizes the importance of evaluating energy consumption and developing sustainable, eco-friendly video streaming solutions with a low carbon dioxide (CO2) footprint. To address this pressing concern, we developed a specialized tool, released as an open-source library called GREEM. The tool measures the energy consumption of video encoding and decoding and facilitates benchmark tests. It monitors the computational impact on hardware resources and offers various analysis cases. GREEM is helpful for developers, researchers, service providers, and policy makers interested in minimizing the energy consumption of video encoding and streaming.
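
To illustrate the measurement principle (not GREEM’s actual API), the following Python sketch reads the Linux powercap/RAPL energy counter around an ffmpeg encode; the sysfs path assumes an Intel CPU with RAPL exposed, root privileges may be required, and counter wrap-around is ignored.

```python
# Illustrative energy measurement around an x265 encode via the Linux
# powercap/RAPL interface. GREEM itself has its own API; this path assumes
# an Intel CPU with RAPL exposed and ignores counter wrap-around.
import subprocess, time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 counter

def read_energy_uj() -> int:
    with open(RAPL) as f:
        return int(f.read())

def measure(cmd: list[str]) -> tuple[float, float]:
    e0, t0 = read_energy_uj(), time.time()
    subprocess.run(cmd, check=True)
    e1, t1 = read_energy_uj(), time.time()
    joules = (e1 - e0) / 1e6              # counter is in microjoules
    return joules, joules / (t1 - t0)     # total energy, mean power (W)

energy, watts = measure(["ffmpeg", "-y", "-i", "input.mp4",
                         "-c:v", "libx265", "-preset", "medium", "out.mp4"])
print(f"encode consumed {energy:.1f} J at {watts:.1f} W average")
```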

VEED: Video Encoding Energy and CO2 Emissions Dataset for AWS EC2 instances

The 15th ACM Multimedia Systems Conference (Open-source Software and Datasets)

15-18 April, 2024 in Bari, Italy

[PDF], [Github]

Sandro Linder (AAU, Austria), Samira Afzal (AAU, Austria), Christian Bauer (AAU, Austria), Hadi Amirpour (AAU, Austria), Radu Prodan (AAU, Austria), and Christian Timmerer (AAU, Austria)

Video streaming constitutes 65% of global internet traffic, prompting an investigation into its energy consumption and CO2 emissions. Video encoding, a computationally intensive part of streaming, has moved to cloud computing for its scalability and flexibility. However, the energy consumption of cloud data centers, especially for video encoding, poses environmental challenges. This paper presents VEED, a FAIR Video Encoding Energy and CO2 Emissions Dataset for Amazon Web Services (AWS) EC2 instances. The dataset also contains the duration, CPU utilization, and cost of each encoding. To prepare this dataset, we introduce a model and conduct a benchmark to estimate the energy and CO2 emissions of different Amazon EC2 instances during the encoding of 500 video segments of various complexities and resolutions using Advanced Video Coding (AVC) and High-Efficiency Video Coding (HEVC). VEED and its analysis can provide valuable insights for video researchers and engineers to model energy consumption, manage energy resources, and distribute workloads, contributing to the sustainability and cost-effectiveness of cloud-based video encoding. VEED is available on GitHub.
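
A hypothetical pandas exploration of the dataset might look as follows; the file name and column names (instance_type, codec, energy_wh, co2_g, cost_usd) are assumptions for illustration, so consult the actual schema on GitHub.

```python
# Hypothetical exploration of VEED with pandas; the file name and column
# names are assumptions for illustration -- check the dataset's real schema.
import pandas as pd

df = pd.read_csv("veed.csv")

# Mean energy, CO2, and cost per EC2 instance type and codec (AVC vs. HEVC).
summary = (df.groupby(["instance_type", "codec"])
             [["energy_wh", "co2_g", "cost_usd"]]
             .mean()
             .sort_values("energy_wh"))
print(summary.head(10))

# Energy per dollar spent, to spot instances trading cost for consumption.
df["wh_per_usd"] = df["energy_wh"] / df["cost_usd"]
print(df.groupby("instance_type")["wh_per_usd"].mean().nsmallest(5))
```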

PyStream: Enhancing Video Streaming Evaluation

The 15th ACM Multimedia Systems Conference (Technical Demos)

15-18 April, 2024 in Bari, Italy

[PDF], [Github]

Samuel Radler* (AAU, Austria), Leon Prüller* (AAU, Austria), Emanuele Artioli (AAU, Austria), Farzad Tashtarian (AAU, Austria), and Christian Timmerer (AAU, Austria)

*These authors contributed equally to this work

As streaming services become more commonplace, analyzing their behavior effectively under different network conditions is crucial. This is normally quite expensive, requiring multiple players with different bandwidth configurations to be emulated on a powerful local machine or in a cloud environment. Furthermore, emulating realistic network behavior or guaranteeing adherence to a real network trace is challenging. This paper presents PyStream, a simple yet powerful way to emulate a video streaming network, allowing multiple simultaneous tests to run locally. By leveraging a network of Docker containers, many of the implementation challenges are abstracted away, keeping the resulting system easily manageable and upgradeable. We demonstrate how PyStream not only reduces the requirements for testing a video streaming system but also improves the accuracy of the emulation with respect to the current state of the art. On average, PyStream reduces the error between the original network trace and the bandwidth emulated by video players by a factor of 2-3 compared to Wondershaper, a common network traffic shaper in many video streaming evaluation environments. Moreover, PyStream decreases the cost of running experiments compared to existing cloud-based video streaming evaluation environments such as CAdViSE.
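
The following Python sketch shows trace-driven bandwidth shaping with Linux tc, the kind of per-player throttling PyStream automates inside its Docker containers; the interface name, trace format, and tbf parameters are assumptions, not PyStream’s implementation.

```python
# Sketch of trace-driven bandwidth shaping with Linux tc, the kind of
# per-player throttling PyStream automates inside its Docker containers.
# Interface name, trace format, and tbf parameters are assumptions.
import subprocess, time

def shape(iface: str, kbps: int) -> None:
    # Replace the root qdisc with a token-bucket filter at the target rate.
    subprocess.run(["tc", "qdisc", "replace", "dev", iface, "root", "tbf",
                    "rate", f"{kbps}kbit", "burst", "32kbit",
                    "latency", "400ms"], check=True)

# Trace: (duration_s, bandwidth_kbps) pairs, e.g. parsed from a real trace.
trace = [(4, 3000), (4, 1200), (4, 5000)]
for duration, kbps in trace:
    shape("eth0", kbps)
    time.sleep(duration)
```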

COCONUT: Content Consumption Energy Measurement Dataset for Adaptive Video Streaming

The 15th ACM Multimedia Systems Conference (Open-source Software and Datasets)

15-18 April, 2024 in Bari, Italy

[PDF], [Github]

Farzad Tashtarian (AAU, Austria), Daniele Lorenzi (AAU, Austria), Hadi Amirpour (AAU, Austria), Samira Afzal (AAU, Austria), and Christian Timmerer (AAU, Austria)

HTTP Adaptive Streaming (HAS) has emerged as the predominant solution for delivering video content on the Internet. The urgency of the climate crisis has accentuated the demand for investigations into the environmental impact of HAS techniques. In HAS, clients rely on adaptive bitrate (ABR) algorithms to drive the quality selection for video segments. These algorithms often prioritize maximizing video quality under favorable network conditions, disregarding the impact on energy consumption. Further research is needed to thoroughly investigate the effects on energy consumption of bitrate and other video parameters such as resolution and codec. In this paper, we propose COCONUT, a COntent COnsumption eNergy measUrement daTaset for adaptive video streaming, collected through a digital multimeter on various types of client devices, such as laptops and smartphones, streaming MPEG-DASH segments. Furthermore, we analyze the dataset and derive insights into the influence on energy consumption of multiple codecs, various video encoding parameters such as segment length, framerate, bitrate, and resolution, and decoding type, i.e., hardware or software. We gather and categorize these measurements based on segment retrieval through the network interface card (NIC), decoding, and rendering. Additionally, we compare the impact of different HAS players on energy consumption. This research offers valuable perspectives on the energy usage of streaming devices, which could contribute to a more sustainable and resource-efficient media consumption experience.
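
A hypothetical pandas analysis along these lines is sketched below; the file and column names (codec, decode_type, phase, energy_mj) are illustrative, not the dataset’s actual schema.

```python
# Hypothetical look at COCONUT with pandas; file and column names are
# illustrative assumptions, not the dataset's actual schema.
import pandas as pd

df = pd.read_csv("coconut.csv")

# Hardware vs. software decoding energy per codec.
print(df.pivot_table(index="codec", columns="decode_type",
                     values="energy_mj", aggfunc="mean"))

# Share of energy spent on NIC retrieval, decoding, and rendering.
phase = df.groupby("phase")["energy_mj"].sum()
print((phase / phase.sum()).round(3))
```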

The 1st IEEE ICME Workshop on Surpassing Latency Limits in Adaptive Live Video Streaming (LIVES’24)

Click here for more information.

Patent Approval for “Low-Latency Online Per-Title Encoding”

Low-Latency Online Per-Title Encoding

US Patent

[PDF]

Vignesh Menon (Alpen-Adria-Universität Klagenfurt, Austria), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: The technology described herein relates to online per-title encoding. A method for online per-title encoding includes receiving a video input, generating segments of the video input, extracting a spatial feature and a temporal feature using a discrete cosine transform (DCT)-based energy function, predicting bitrate-resolution pairs based on the spatial feature and the temporal feature, and per-title encoding the segments of the video input for the predicted bitrate-resolution pairs. A system for online per-title encoding may include memory for storing a set of bitrates and a set of resolutions, and a machine learning module configured to predict bitrate-resolution pairs based on low-complexity spatial and temporal features.
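
The Python sketch below is a schematic of the claimed flow rather than the patented implementation: per segment, low-complexity features (E, h) feed a stand-in for the machine learning module, which predicts one resolution per bitrate before encoding. The predict_resolution heuristic and the ffmpeg flags are hypothetical placeholders.

```python
# Schematic of the claimed flow, not the patented implementation:
# segment -> low-complexity DCT features (E, h) -> predicted
# bitrate-resolution pairs -> per-title encode. predict_resolution is a
# hypothetical stand-in for the trained machine learning module.
import subprocess

BITRATES = [500, 1500, 3000, 6000]                     # kbps ladder candidates
RESOLUTIONS = [(640, 360), (1280, 720), (1920, 1080)]

def predict_resolution(E: float, h: float, kbps: int) -> tuple[int, int]:
    # Placeholder heuristic: higher bitrates relative to spatio-temporal
    # complexity afford higher resolutions. A real system would use a model
    # trained to map (E, h, bitrate) to the quality-optimal resolution.
    score = kbps / max(E + h, 1e-6)
    idx = min(int(score // 50), len(RESOLUTIONS) - 1)
    return RESOLUTIONS[idx]

def encode_segment(path: str, E: float, h: float) -> None:
    for kbps in BITRATES:
        w, ht = predict_resolution(E, h, kbps)
        subprocess.run(["ffmpeg", "-y", "-i", path,
                        "-vf", f"scale={w}:{ht}",
                        "-c:v", "libx264", "-b:v", f"{kbps}k",
                        f"{path}_{w}x{ht}_{kbps}k.mp4"], check=True)
```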
