University of Klagenfurt to Host VQEG Meeting

Video Quality Experts Group (VQEG)

1-5 July 2024

University of Klagenfurt, Austria

[website]

The University of Klagenfurt welcomes VQEG members to the next meeting, which will be held in Klagenfurt, Carinthia, Austria, from 1-5 July 2024.

The Video Quality Experts Group (VQEG) convenes a consortium of global specialists from various sectors, including industry, academia, governmental bodies, the International Telecommunication Union, and other standard-setting organizations.

Posted in ATHENA | Comments Off on University of Klagenfurt to Host VQEG Meeting

Patent Approval for “Adaptive Bitrate Algorithm Deployed at Edge Nodes”

Adaptive Bitrate Algorithm Deployed at Edge Nodes

US Patent

[URL][PDF]

Jesús Aguilar-Armijo (Alpen-Adria-Universität Klagenfurt, Austria), Ekrem Çetinkaya (Alpen-Adria-Universität Klagenfurt, Austria), Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: The technology described herein relates to implementing an adaptive bitrate (ABR) algorithm at edge nodes. A method for implementing an ABR algorithm at an edge node may include receiving at the edge node a request for a video segment from a client according to the client’s ABR algorithm, the request indicating a quality. A weighted sum score for each of a set of qualities may be computed based on a quality score and a fairness score using the ABR algorithm at the edge node, the qualities including at least the requested quality and another quality. A modified request may be generated in response to the weighted sum score for the other quality being better than the weighted sum score for the requested quality. The modified request may be sent to a server. The video segment in the other quality may be received from the server and provided to the client.
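The weighted-sum decision described in the abstract can be sketched in a few lines. Note that the scoring functions, the weight `w`, and the bitrate ladder below are illustrative assumptions for this sketch, not taken from the patent:

```python
# Illustrative sketch of an edge-side weighted-sum quality decision.
# The quality/fairness scoring functions and the weight are assumptions
# for illustration only -- they are not the patented method.

def weighted_sum_score(quality, fairness, w=0.5):
    """Combine a quality score and a fairness score into a single score."""
    return w * quality + (1 - w) * fairness

def choose_quality(requested, candidates, quality_score, fairness_score, w=0.5):
    """Return the candidate quality with the best weighted-sum score.

    `candidates` includes the requested quality; if another quality scores
    better, the edge node would rewrite (modify) the client's request.
    """
    assert requested in candidates
    return max(
        candidates,
        key=lambda q: weighted_sum_score(quality_score(q), fairness_score(q), w),
    )

# Toy usage: a three-rung bitrate ladder (kbps). Quality grows with bitrate,
# while fairness penalizes deviating from an assumed fair share of bandwidth.
fair_share = 3000
quality_score = lambda q: q / 6000                     # normalized quality
fairness_score = lambda q: 1 - abs(q - fair_share) / 6000
best = choose_quality(6000, [1500, 3000, 6000],
                      quality_score, fairness_score, w=0.3)
# With fairness weighted heavily, the 6000 kbps request is rewritten to 3000.
```

With these toy scores, the edge node would modify a 6000 kbps request down to 3000 kbps because the fairness term dominates; a quality-heavy weight would leave the request unchanged.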

Posted in ATHENA | Comments Off on Patent Approval for “Adaptive Bitrate Algorithm Deployed at Edge Nodes”

Report on GMSys 2024: Second International ACM Green Multimedia Systems Workshop

The second International ACM Green Multimedia Systems Workshop, hosted and organized as part of the 15th ACM Multimedia Systems Conference, took place on Thursday, April 18, 2024, in Bari, Italy. The workshop provided a platform for discussing innovative ideas and research findings in multimedia systems, specifically focusing on energy usage and environmental impact within multimedia frameworks.

GMSys featured five technical presentations, with an acceptance rate of 62.5%. These presentations, comprising both full and short papers, covered a range of innovative approaches and solutions related to green video streaming, and the workshop provided a valuable opportunity for multimedia researchers to delve into this critical topic.

The workshop brought together experts and researchers from universities and research institutes, including Fraunhofer FOKUS, Alpen-Adria-Universität Klagenfurt, and the University at Buffalo, as well as researchers from leading companies such as InterDigital, Technology Innovation Institute (TII), Bitmovin, and IBM, all focused on advancing green video streaming technology.

We want to thank everyone who made our workshop awesome – participants, speakers, committees, authors, and attendees! Your support was key to our success.

Huge thanks to the ACM organizing team, especially Luca De Cicco and Ali C. Begen, for helping us host this event.

A special thanks to GAIA and Green Streaming for their generous technical sponsorship, which helped us pull everything together smoothly.

Presentations in GMSys’24: 

How to make images less power-hungry: An objective benchmark study 
Emmanuel Sampaio (InterDigital, France), Claire-Hélène Demarty (InterDigital, France), Olivier Le Meur (InterDigital, France)

Energy Cost of Coding Omnidirectional Videos using ARM and x86 Platforms
Ibrahim Farhat (Technology Innovation Institute (TII)), Ibrahim Khadraoui (Technology Innovation Institute (TII)), Wassim Hamidouche (Technology Innovation Institute (TII)), Mohit K. Sharma (Technology Innovation Institute (TII))

Framework for automated energy measurement of video streaming devices
Martin Lasak (Fraunhofer FOKUS), Robert Seeliger (Fraunhofer FOKUS), Goerkem Gueclue (Fraunhofer FOKUS), Stefan Arbanowski (Fraunhofer FOKUS)

VEEP: Video Encoding Energy and CO2 Emission Prediction
Armin Lachini (Bitmovin), Manuel Hoi (Alpen-Adria-Universität Klagenfurt), Samira Afzal (Alpen-Adria-Universität Klagenfurt), Sandro Linder (Alpen-Adria-Universität Klagenfurt), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt), Radu Prodan (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Modeling Video Playback Power Consumption on Mobile Devices
Bekir Turkkan (IBM Research), Adithya Raman (University at Buffalo, SUNY), Tevfik Kosar (University at Buffalo, SUNY)


Posted in GAIA | Comments Off on Report on GMSys 2024: Second International ACM Green Multimedia Systems Workshop

EDVS: An Energy Efficient Deep Q-Learning-based Video Streaming in Harvesting Wireless Sensor Networks

International Conference on Smart Cities, Internet of Things and Applications

14 – 15 May 2024 | Mashhad, Iran

Conference Website

[PDF][Slides]

Alireza Shamloo (Comprehensive University Of The Islamic Revolution, Tehran, Iran), Razieh Mohammadi (Shahid Rajaee Teacher Training University, Tehran, Iran), Zahra Shirmohammadi (Shahid Rajaee Teacher Training University, Tehran, Iran), Samira Afzal (Alpen-Adria-Universität Klagenfurt), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract

Wireless Sensor Networks face a significant challenge in achieving balanced energy consumption during data transmission and harvesting. This challenge is particularly acute for high-demand video transmissions, which are energy-intensive. To address this problem, a duty-cycle (sleep-and-active-based) method called Energy-efficient Deep Q-learning-based Video Streaming (EDVS) is proposed. EDVS uses deep Q-learning to determine when nodes sleep and wake and how many frames each node transmits. EDVS preprocesses video data, detects correlated frames, and sends only keyframes, thus reducing energy consumption. Initial simulation results show our approach significantly outperforms state-of-the-art mechanisms, lowering energy consumption by up to 44%.

Keywords: Video Streaming, Wireless Sensor Network, Energy Efficiency, Deep Q-learning, Energy Harvesting.
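The keyframe-only transmission idea from the abstract can be illustrated with a toy sketch. The mean-absolute-difference correlation measure, the threshold, and the tiny 4-pixel "frames" below are assumptions for illustration only, not the EDVS preprocessing pipeline:

```python
# Toy sketch of "detect correlated frames and send only keyframes": a frame
# is transmitted only when it differs enough from the last transmitted one.
# Correlation measure and threshold are illustrative assumptions.

def mean_abs_diff(a, b):
    """Mean absolute pixel difference between two equal-length frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_keyframes(frames, threshold=10.0):
    """Return indices of the frames a sensor node would actually transmit."""
    keep = [0]          # always send the first frame
    last = frames[0]
    for i, f in enumerate(frames[1:], start=1):
        if mean_abs_diff(f, last) > threshold:
            keep.append(i)
            last = f
    return keep

# Four tiny 4-pixel frames: frames 1 and 2 are near-duplicates of frame 0,
# so only frame 0 and the scene change at frame 3 get transmitted.
frames = [
    [100, 100, 100, 100],
    [101, 100, 99, 100],   # highly correlated -> skipped
    [102, 101, 100, 99],   # still correlated  -> skipped
    [10, 20, 200, 250],    # scene change      -> keyframe
]
```

Skipping the two correlated frames here saves half of the transmissions, which is the energy lever the abstract describes; EDVS additionally learns the sleep/wake schedule with deep Q-learning.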

Posted in ATHENA, GAIA | Comments Off on EDVS: An Energy Efficient Deep Q-Learning-based Video Streaming in Harvesting Wireless Sensor Networks

ATHENA, GAIA, and SPIRIT contributions to ACM MMSys 2024

15th ACM Multimedia Systems Conference (MMSys)
15 – 18 April 2024 | Bari, Italy

Posted in ATHENA, GAIA, SPIRIT | Comments Off on ATHENA, GAIA, and SPIRIT contributions to ACM MMSys 2024

IoT Privacy Protection: JPEG-TPE with Lower File Size Expansion and Lossless Decryption

IEEE Internet of Things Journal (IEEE IoT)

[PDF]

Hongjie He (Southwest Jiaotong University, China), Yuan Yuan (Southwest Jiaotong University, China), Hadi Amirpour (AAU, Klagenfurt, Austria), Lingfeng Qu (Southwest Jiaotong University, China), Christian Timmerer (AAU, Klagenfurt, Austria), Fan Chen (Southwest Jiaotong University, China)


Abstract: With the development of the Internet of Things (IoT) and cloud services, many images generated by IoT devices are stored in the cloud, calling for efficient data encryption methods. To balance security and usability, thumbnail-preserving encryption (TPE) has emerged. However, existing JPEG-based TPE (JPEG-TPE) schemes face challenges in achieving low file size expansion, lossless decryption, and strong privacy protection of detailed information. To solve these challenges, we propose a novel JPEG-TPE scheme. Firstly, to achieve a smaller file size expansion while preserving the thumbnail, we reallocate the values of the DC differences, maintaining their sum, instead of operating on the DC coefficients directly. To ensure that the coefficients do not overflow, the valid range of each reallocated difference is constrained not only by the sum but also by the neighborhood difference. Secondly, to preserve the file size of the AC encryption while improving the security of detailed information, the AC coefficient groups with undivided RSV are permuted adaptively. Besides, intra-TPE-block swapping of DC differences, quantization table modification, non-zero AC coefficient mapping, and block permutation are used to further encrypt the image. The experimental results show that the proposed JPEG-TPE scheme achieves lossless decryption and reduces the file size expansion of encrypted images from 15.41% to 0.64% compared to the state-of-the-art scheme. Additionally, the proposed method effectively resists various attacks, including the deep-learning-based super-resolution attack.
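The sum-preserving idea (scrambling DC differences while keeping their sum, so the DC value reconstructed from the differences is unchanged) can be sketched roughly as follows. The key-driven redistribution between adjacent differences and the coefficient range are illustrative assumptions for this sketch, not the paper's actual reallocation scheme:

```python
# Toy sketch of sum-preserving reallocation of DC differences: shift value
# between adjacent differences under a keyed PRNG, keeping the total sum
# (and hence the final reconstructed DC value) intact and every difference
# inside a valid coefficient range. Illustration only, not the paper's scheme.

import random

def reallocate_dc_diffs(diffs, key, lo=-1024, hi=1023):
    """Redistribute DC differences pairwise; sum and valid range preserved."""
    rng = random.Random(key)   # keyed PRNG: same key -> same (invertible) result
    out = list(diffs)
    for i in range(len(out) - 1):
        # How much value out[i] may give to (or take from) out[i + 1]
        # without either difference leaving [lo, hi].
        max_give = min(out[i] - lo, hi - out[i + 1])
        max_take = min(hi - out[i], out[i + 1] - lo)
        delta = rng.randint(-max_take, max_give)
        out[i] -= delta
        out[i + 1] += delta
    return out

diffs = [5, -3, 12, 0, -7]
enc = reallocate_dc_diffs(diffs, key=42)
assert sum(enc) == sum(diffs)   # the final DC value is unchanged
```

Because the shifts are driven by a keyed generator, a holder of the key can replay the deltas in reverse order to decrypt losslessly; the real scheme additionally constrains each difference by the neighborhood as described in the abstract.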

Posted in ATHENA | Comments Off on IoT Privacy Protection: JPEG-TPE with Lower File Size Expansion and Lossless Decryption

DeepVCA: Deep Video Complexity Analyzer

IEEE Transactions on Circuits and Systems for Video Technology (TCSVT)

[PDF]

 Hadi Amirpour (AAU, Klagenfurt, Austria), Klaus Schoeffmann (AAU, Klagenfurt, Austria), Mohammad Ghanbari (University of Essex, UK), Christian Timmerer (AAU, Klagenfurt, Austria)


Abstract: Video streaming and its applications are growing rapidly, making video optimization a primary target for content providers looking to enhance their services. Enhancing the quality of videos requires the adjustment of different encoding parameters such as bitrate, resolution, and frame rate. To avoid brute force approaches for predicting optimal encoding parameters, video complexity features are typically extracted and utilized. To predict optimal encoding parameters effectively, content providers traditionally use unsupervised feature extraction methods, such as ITU-T’s Spatial Information (SI) and Temporal Information (TI), to represent the spatial and temporal complexity of video sequences. Recently, Video Complexity Analyzer (VCA) was introduced to extract DCT-based features to represent the complexity of a video sequence (or parts thereof). These unsupervised features, however, cannot accurately predict video encoding parameters. To address this issue, this paper introduces a novel supervised feature extraction method named DeepVCA, which extracts the spatial and temporal complexity of video sequences using deep neural networks. In this approach, the encoding bits required to encode each frame in intra-mode and inter-mode are used as labels for spatial and temporal complexity, respectively. Initially, we benchmark various deep neural network structures to predict spatial complexity. We then leverage the similarity of features used to predict the spatial complexity of the current frame and its previous frame to rapidly predict temporal complexity. This approach is particularly useful as the temporal complexity may depend not only on the differences between two consecutive frames but also on their spatial complexity. Our proposed approach demonstrates significant improvement over unsupervised methods, especially for temporal complexity. As an example application, we verify the effectiveness of these features in predicting the encoding bitrate and encoding time of video sequences, which are crucial tasks in video streaming. The source code and dataset are available at https://github.com/cd-athena/DeepVCA.
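For contrast with the supervised DeepVCA features, the classic unsupervised SI/TI baseline mentioned in the abstract can be computed roughly as follows (per ITU-T P.910: SI is the maximum over frames of the spatial standard deviation of the Sobel-filtered luma; TI is the maximum standard deviation of successive frame differences). The tiny random "clip" below is illustrative only:

```python
# Minimal sketch of the unsupervised SI/TI features (ITU-T P.910) that
# DeepVCA is compared against. Frames are 2-D numpy luma arrays; the
# random 16x16 clip is illustrative only.

import numpy as np

def sobel_magnitude(frame):
    """Gradient magnitude via 3x3 Sobel filters (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = frame.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):          # accumulate the 3x3 filter as shifted patches
        for j in range(3):
            patch = frame[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.sqrt(gx ** 2 + gy ** 2)

def si_ti(frames):
    """SI = max over frames of std(Sobel(frame));
       TI = max over successive frame pairs of std(frame difference)."""
    si = max(float(np.std(sobel_magnitude(f))) for f in frames)
    ti = max(float(np.std(frames[i] - frames[i - 1]))
             for i in range(1, len(frames)))
    return si, ti

rng = np.random.default_rng(0)
clip = [rng.uniform(0, 255, (16, 16)) for _ in range(4)]
si, ti = si_ti(clip)
```

These single-number features are cheap but content-agnostic, which is exactly the limitation that motivates learning spatial/temporal complexity from per-frame intra- and inter-mode encoding bits instead.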

Posted in ATHENA | Comments Off on DeepVCA: Deep Video Complexity Analyzer