Samira Afzal talk at 11th Fraunhofer FOKUS MWS

Energy Efficient Video Encoding for Cloud and Edge Computing Instances

11th FOKUS Media Web Symposium

11 June 2024 | Berlin, Germany

 

Abstract: The significant increase in energy consumption within data centers is primarily due to the exponential rise in demand for complex computing workflows and storage resources. Video streaming applications, which are both compute- and storage-intensive, account for the majority of today’s internet traffic. To address this challenge, this talk proposes a novel matching-based method for scheduling video encoding applications on cloud resources. The method optimizes for user-defined objectives, including energy consumption, processing time, cost, CO2 emissions, or a trade-off between these priorities.
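The talk’s method is only summarized above, so the following Python sketch merely illustrates the general idea: matching encoding tasks to compute instances under a weighted, user-defined objective. All instance profiles, weights, and numbers below are hypothetical, and the greedy matching is a simple stand-in for the actual matching-based algorithm.

    # Minimal sketch of matching encoding tasks to cloud/edge instances under a
    # user-defined objective. All profiles and weights are made up; the method
    # presented in the talk may differ substantially.

    # Candidate instances with per-task energy (Wh), time (s), cost (USD), CO2 (g).
    INSTANCES = [
        {"name": "cloud-large", "energy": 45.0, "time": 120.0, "cost": 0.048, "co2": 18.0},
        {"name": "cloud-small", "energy": 20.0, "time": 310.0, "cost": 0.021, "co2": 8.5},
        {"name": "edge-node",   "energy": 12.0, "time": 540.0, "cost": 0.009, "co2": 3.2},
    ]

    def score(instance, weights):
        """Weighted sum over normalized objectives (lower is better)."""
        total = 0.0
        for key, weight in weights.items():
            max_val = max(inst[key] for inst in INSTANCES)
            total += weight * instance[key] / max_val  # normalize each objective to [0, 1]
        return total

    def schedule(tasks, weights):
        """Greedily assign every encoding task to the best-scoring instance."""
        return {task: min(INSTANCES, key=lambda inst: score(inst, weights))["name"]
                for task in tasks}

    # A user prioritizing energy and CO2 over time and cost:
    weights = {"energy": 0.4, "time": 0.1, "cost": 0.1, "co2": 0.4}
    print(schedule(["video_a.mp4", "video_b.mp4"], weights))

Shifting all weight onto "time" instead would steer every task to the fastest but most energy-hungry instance, which is exactly the trade-off between priorities that the talk’s objectives expose.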

Samira Afzal is a postdoctoral researcher at Alpen-Adria-Universität Klagenfurt, Austria, collaborating with Bitmovin. Previously, she was a postdoctoral researcher at the University of São Paulo, where she conducted research on IoT, SWARM, and surveillance systems. She received her Ph.D. from the University of Campinas (UNICAMP) in November 2019. During her Ph.D., she collaborated with Samsung on a project on mobile video streaming over heterogeneous wireless networks and multipath transmission methods to increase perceived video quality. Further information is available here.

Posted in ATHENA, GAIA

ODVista: An Omnidirectional Video Dataset for Super-Resolution and Quality Enhancement Tasks

IEEE International Conference on Image Processing (ICIP 2024)

27-30 October 2024, Abu Dhabi, UAE

[PDF]

Ahmed Telili (TII, UAE), Ibrahim Farhat (TII, UAE), Wassim Hamidouche (TII, UAE), Hadi Amirpour (AAU, Austria)

 

Abstract: Omnidirectional or 360-degree video is being increasingly deployed, largely due to the latest advancements in immersive virtual reality (VR) and extended reality (XR) technology. However, the adoption of these videos in streaming encounters challenges related to bandwidth and latency, particularly in mobility conditions such as with unmanned aerial vehicles (UAVs). Adaptive resolution and compression aim to preserve quality while maintaining low latency under these constraints, yet downscaling and encoding can still degrade quality and introduce artifacts. Machine learning (ML)-based super-resolution (SR) and quality enhancement techniques offer a promising solution by enhancing detail recovery and reducing compression artifacts. However, current publicly available 360-degree video SR datasets lack compression artifacts, which limits research in this field. To bridge this gap, this paper introduces the omnidirectional video streaming dataset (ODVista), which comprises 200 high-resolution and high-quality videos downscaled and encoded at four bitrate ranges using the high-efficiency video coding (HEVC)/H.265 standard. Evaluations show that the dataset not only features a wide variety of scenes but also spans different levels of content complexity, which is crucial for robust solutions that perform well in real-world scenarios and generalize across diverse visual environments. Additionally, we evaluate the performance, considering both quality enhancement and runtime, of two handcrafted and two ML-based SR models on the validation and testing sets of ODVista.
Dataset URL: https://github.com/Omnidirectional-video-group/ODVista
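As a hint of how the dataset might be used, the Python sketch below scores a naive bicubic upscaling baseline against a high-resolution ODVista reference using OpenCV’s built-in PSNR. The file paths, folder layout, and scale are assumptions; the repository linked above documents the actual structure.

    # Sketch: compare a bicubic-upscaled low-resolution clip against its
    # high-resolution reference, frame by frame. Paths are hypothetical.
    import cv2

    def average_psnr(lr_path, hr_path):
        lr, hr = cv2.VideoCapture(lr_path), cv2.VideoCapture(hr_path)
        scores = []
        while True:
            ok_lr, lr_frame = lr.read()
            ok_hr, hr_frame = hr.read()
            if not (ok_lr and ok_hr):
                break
            # Bicubic upscaling serves as the naive SR baseline here.
            upscaled = cv2.resize(lr_frame, (hr_frame.shape[1], hr_frame.shape[0]),
                                  interpolation=cv2.INTER_CUBIC)
            scores.append(cv2.PSNR(upscaled, hr_frame))
        return sum(scores) / len(scores) if scores else float("nan")

    print(average_psnr("ODVista/encoded_qp32/clip001.mp4", "ODVista/hr/clip001.mp4"))

Swapping the bicubic call for a handcrafted or ML-based SR model would reproduce the kind of quality/runtime comparison the paper reports on the validation and testing sets.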

 

Posted in ATHENA

Long Night of Research/Lange Nacht der Forschung 2024

The University of Klagenfurt (AAU) made a strong impression at the 2024 Lange Nacht der Forschung (Long Night of Research), held at the university and Lakeside Park on May 24th and attracting over 8,000 visitors. The ATHENA Lab, a leading research group within AAU, particularly impressed visitors with its three interactive stations – ATHENA (L20), GAIA (L21), and SPIRIT (L22) – showcasing its work at the forefront of technology and sustainability.

ATHENA: L20 – How does video work on the Internet?

The ATHENA (L20) station explored the world of video streaming. Visitors learned how content travels from its source to their devices and, through interactive displays, discovered how innovative technologies ensure videos stream quickly and at the best possible quality, reaching Smart TVs seamlessly.

GAIA: L21 – Greener Video Streaming for a Sustainable Future

The GAIA (L21) station aimed to raise visitors’ awareness about the energy consumption and environmental impact of video streaming. It demonstrated how modern technologies and a conscious approach to video streaming can positively impact the environment. This station encouraged visitors to contribute to a greener future.


SPIRIT: L22 – A look into the future of virtual presence: What do people look like as 3D point clouds?

This station transported visitors to the future of communication. SPIRIT (L22) explored immersive telepresence, where people and objects are no longer confined to 2D video tiles but represented as realistic 3D point clouds for VR/AR glasses. Imagine the iconic “holodeck” from Star Trek coming to life! The station showed what such representations might look like, bridging the gap between the physical and virtual worlds.


The ATHENA Lab’s stations were a magnet for kids at LNDF! Filled with cool activities and demos, the displays showcasing cutting-edge research kept young minds engaged and curious about the future.

 

Posted in ATHENA, GAIA, SPIRIT

On the Security of Selectively Encrypted HEVC Video Bitstreams

ACM Transactions on Multimedia Computing Communications and Applications (ACM TOMM)

[PDF]

Chen Chen (Tsinghua University, China), Lingfeng Qu (Guangzhou University, China), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Xingjun Wang (Tsinghua University, China), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Zhihong Tian (Guangzhou University, China)

Abstract:

With the growing applications of video, ensuring its security has become of utmost importance. Selective encryption (SE) has gained significant attention in the field of video content protection due to its compatibility with video codecs, favorable visual distortion, and low time complexity. However, few studies consider SE security under cryptographic attacks. To fill this gap, we analyze the security concerns of bitstreams encrypted by SE schemes and propose two known-plaintext attacks (KPAs), along with a corresponding defense against them. To validate the effectiveness of the KPAs, they are applied to attack two existing SE schemes with superior visual degradation in HEVC videos.

First, the encrypted bitstreams are generated using the HEVC encoder with SE (HESE). Second, the video sequences are encoded using H.265/HEVC, and the selected syntax elements are recorded during encoding. The recorded syntax elements are then imported into the HEVC decoder using decryption (HDD). By utilizing the encryption parameters and the imported data in the HDD, a significant portion of the original syntax elements can be reconstructed before encryption. Finally, the reconstructed syntax elements are compared with the encrypted syntax elements in the HDD, allowing the design of a pseudo-key stream (PKS) through the inverse of the encryption operations. The PKS is used to decrypt the existing SE schemes, and the experimental results provide evidence that the two existing SE schemes are vulnerable to the proposed KPAs. In the case of single-bitstream estimation (SBE), the average correct rate of key stream estimation exceeds 93%; with multi-bitstream complementation (MBC), the average estimation accuracy further improves to 99%.
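The attacks above target encrypted HEVC syntax elements, but the underlying known-plaintext principle can be demonstrated with a toy example: if the encryption behaves like an XOR with a reused keystream, a single known plaintext/ciphertext pair yields a pseudo-key stream that decrypts other content. The Python snippet below shows only this principle, with plain bytes standing in for syntax elements; it is not the paper’s actual attack.

    # Toy known-plaintext attack (KPA) against a stream-style cipher.
    import os

    def xor(data: bytes, keystream: bytes) -> bytes:
        return bytes(d ^ k for d, k in zip(data, keystream))

    keystream = os.urandom(32)                        # secret, reused across messages

    known_plain = b"known syntax elements ........."  # attacker knows this content
    cipher_known = xor(known_plain, keystream)

    # Pseudo-key stream (PKS) = ciphertext XOR known plaintext.
    pks = xor(cipher_known, known_plain)

    # The PKS now decrypts any other message protected by the same keystream.
    cipher_secret = xor(b"secret syntax elements ........", keystream)
    print(xor(cipher_secret, pks))                    # b'secret syntax elements ........'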

 

Posted in ATHENA

Video Encoding Enhancement via Content-Aware Spatial and Temporal Super-Resolution

European Signal Processing Conference (EUSIPCO)

26-30 August 2024, Lyon, France

[PDF]

Yiying Wei (AAU, Austria), Hadi Amirpour (AAU, Austria), Ahmed Telili (INSA Rennes, France), Wassim Hamidouche (INSA Rennes, France), Guo Lu (Shanghai Jiao Tong University, China), and Christian Timmerer (AAU, Austria)

Abstract: Content-aware deep neural networks (DNNs) are trending in Internet video delivery. They enhance quality within bandwidth limits by transmitting videos as low-resolution (LR) bitstreams with overfitted super-resolution (SR) model streams to reconstruct high-resolution (HR) video on the decoder end. However, these methods underutilize spatial and temporal redundancy, compromising compression efficiency. In response, our proposed video compression framework introduces spatial-temporal video super-resolution (STVSR), which encodes videos into low spatial-temporal resolution (LSTR) content and a model stream, leveraging the combined spatial and temporal reconstruction capabilities of DNNs. Compared to the state-of-the-art approaches that consider only spatial SR, our approach achieves bitrate savings of 18.71% and 17.04% while maintaining the same PSNR and VMAF, respectively.
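To make the encode/reconstruct split concrete, here is a toy numpy sketch: the encoder keeps only low spatial-temporal resolution samples, and the decoder upsamples them back. In the real framework the reconstruction is performed by the transmitted, content-overfitted DNN; the nearest-neighbor repetition below is merely a placeholder, and all shapes and factors are invented.

    # Toy spatial-temporal downscale/reconstruct pipeline in the spirit of STVSR.
    import numpy as np

    video = np.random.rand(16, 128, 128)   # (frames, height, width), toy HR clip

    # Encoder side: 2x temporal and 2x spatial downscaling -> LSTR content.
    lstr = video[::2, ::2, ::2]            # shape (8, 64, 64)

    # Decoder side: placeholder for the overfitted STVSR model stream.
    recon = lstr.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)

    mse = np.mean((video - recon) ** 2)
    print("toy reconstruction PSNR:", 10 * np.log10(1.0 / mse), "dB")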

Posted in ATHENA

Inaugural Lecture by Christian Timmerer

The original announcement can be found here.

On June 5, 2024, Christian Timmerer of the Institute of Information Technology will deliver his inaugural lecture at the University of Klagenfurt on the topic of “Video Streaming: Then, Now, and in the Future”.

The Rectorate of the University of Klagenfurt and the Dean of the Faculty of Technical Sciences cordially invite everyone to Christian Timmerer’s inaugural lecture on June 5, 2024. Christian Timmerer has been University Professor of Multimedia Systems at the Institute of Information Technology of the Faculty of Technical Sciences since December 2022. He will deliver his inaugural lecture on the topic:

“Video Streaming: Then, Now, and in the Future” [Slides]

June 5, 2024
5:00 p.m. (17:00)
University of Klagenfurt
Lecture Hall 2 (central wing of the university)

In his public lecture, Christian Timmerer offers insights into the fascinating history of video streaming, from its humble beginnings before YouTube to the groundbreaking technologies that dominate platforms such as Netflix and ORF ON today. Along the way, he presents provocative contributions of his own that have significantly shaped the industry. He concludes with a look at the challenges ahead and invites the audience to join the discussion.

The inaugural lecture will be held in English.

Posted in ATHENA

Content-aware Reference Frame Synthesis for Enhanced Inter Prediction

European Signal Processing Conference (EUSIPCO)

26-30 August 2024, Lyon, France

[PDF]

Mohammad Ghasempour (AAU, Austria), Yiying Wei (AAU, Austria), Hadi Amirpour (AAU, Austria), and Christian Timmerer (AAU, Austria)

Abstract: Video coding relies heavily on reducing spatial and temporal redundancy to enable efficient transmission. To tackle temporal redundancy, each video frame is predicted from previously encoded frames, known as reference frames. The quality of this prediction is highly dependent on the quality of the reference frames. Recent advancements in machine learning are motivating the exploration of frame synthesis to generate high-quality reference frames. However, the efficacy of such models depends on training with content similar to that encountered during usage, which is challenging given the diverse nature of video data. This paper introduces a content-aware reference frame synthesis framework to enhance inter-prediction efficiency. Unlike conventional approaches that rely on pre-trained models, our framework optimizes a deep learning model for each content by fine-tuning only the last layer of the model, requiring the transmission of only a few kilobytes of additional information to the decoder. Experimental results show that the proposed framework yields significant bitrate savings of 12.76%, outperforming its pre-trained counterpart, which achieves only 5.13% savings in bitrate.
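A rough PyTorch sketch of the per-content adaptation pattern described above: freeze everything except the final layer of a (here invented) frame-synthesis network, fine-tune that layer on the content, and estimate the size of the side information that would travel with the bitstream. The architecture, training data, and numbers are placeholders rather than the paper’s model.

    # Per-content fine-tuning restricted to the last layer of a toy synthesis net.
    import torch
    import torch.nn as nn

    # Invented model: two stacked reference frames in, one synthesized frame out.
    model = nn.Sequential(
        nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1),          # last layer: the only part updated
    )

    for p in model.parameters():
        p.requires_grad = False                  # freeze the pre-trained backbone
    last = model[-1]
    for p in last.parameters():
        p.requires_grad = True                   # fine-tune only the last layer

    optimizer = torch.optim.Adam(last.parameters(), lr=1e-4)
    refs = torch.rand(1, 6, 64, 64)              # toy reference frames
    target = torch.rand(1, 3, 64, 64)            # toy frame to synthesize
    for _ in range(50):                          # per-content fine-tuning loop
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(refs), target)
        loss.backward()
        optimizer.step()

    payload = sum(p.numel() for p in last.parameters()) * 4   # float32 bytes
    print(f"side information for the decoder: ~{payload / 1024:.1f} KiB")

For this toy 16-to-3-channel 3x3 output layer the payload comes to roughly 1.7 KiB of float32 parameters, in line with the “few kilobytes of additional information” mentioned in the abstract.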

Posted in ATHENA