ACM MM’25: Nature-1k: The Raw Beauty of Nature in 4K at 60FPS

Nature-1k: The Raw Beauty of Nature in 4K at 60FPS

ACM Multimedia 2025

October 27 – October 31, 2025

Dublin, Ireland

[PDF]

Mohammad Ghasempour (AAU, Austria), Hadi Amirpour (AAU, Austria), Christian Timmerer (AAU, Austria)

Abstract: The push toward data-driven video processing, combined with recent advances in video coding and streaming technologies, has fueled the need for diverse, large-scale, and high-quality video datasets. However, the limited availability of such datasets remains a key barrier to the development of next-generation video processing solutions. In this paper, we introduce Nature-1k, a large-scale video dataset consisting of 1000 professionally captured 4K Ultra High Definition (UHD) videos, each recorded at 60fps. The dataset covers a wide range of environments, lighting conditions, texture complexities, and motion patterns. To maintain temporal consistency, which is crucial for spatio-temporal learning applications, the dataset avoids scene cuts within the sequences. We further characterize the dataset using established metrics, including spatial and temporal video complexity metrics, as well as colorfulness, brightness, and contrast distribution. Moreover, Nature-1k includes a compressed version to support rapid prototyping and lightweight testing. The quality of the compressed videos is evaluated using four commonly used video quality metrics: PSNR, SSIM, MS-SSIM, and VMAF. Finally, we compare Nature-1k with existing datasets to demonstrate its superior quality and content diversity. The dataset is suitable for a wide range of applications, including Generative Artificial Intelligence (AI), video super-resolution and enhancement, video interpolation, as well as video coding and adaptive video streaming optimization. Dataset URL: Link
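
For readers who want to reproduce this kind of characterization, the sketch below shows how per-frame spatial information (SI), temporal information (TI), brightness, and colorfulness can be approximated in the style of ITU-T P.910 and Hasler-Süsstrunk. The OpenCV-based pipeline and the input file name are illustrative assumptions, not the authors' exact tooling.

    # Hedged sketch: SI/TI (ITU-T P.910 style), brightness, and colorfulness for one video.
    # The input file name is a placeholder; the paper's exact measurement tooling may differ.
    import cv2
    import numpy as np

    def spatial_information(gray):
        # SI: standard deviation of the Sobel-filtered luma plane
        sx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
        sy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
        return np.sqrt(sx ** 2 + sy ** 2).std()

    def colorfulness(bgr):
        # Hasler & Suesstrunk colorfulness metric
        b, g, r = cv2.split(bgr.astype(np.float64))
        rg, yb = r - g, 0.5 * (r + g) - b
        return np.sqrt(rg.std() ** 2 + yb.std() ** 2) + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)

    cap = cv2.VideoCapture("nature1k_sample.mp4")  # placeholder file name
    prev, si, ti, brightness, color = None, [], [], [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        si.append(spatial_information(gray))
        brightness.append(gray.mean())
        color.append(colorfulness(frame))
        if prev is not None:
            ti.append((gray - prev).std())  # TI: std of the frame difference
        prev = gray
    cap.release()
    print(f"SI={max(si):.1f}  TI={max(ti):.1f}  "
          f"brightness={np.mean(brightness):.1f}  colorfulness={np.mean(color):.1f}")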

Posted in ATHENA | Comments Off on ACM MM’25: Nature-1k: The Raw Beauty of Nature in 4K at 60FPS

Receiving Kernel-Level Insights via eBPF: Can ABR Algorithms Adapt Smarter?

Receiving Kernel-Level Insights via eBPF: Can ABR Algorithms Adapt Smarter?

Würzburg Workshop on Next-Generation Communication Networks (WueWoWAS) 2025

6 – 8 Oct 2025, Würzburg, Germany

[PDF]

Mohsen Ghasemi (Sharif University of Technology, Iran); Daniele Lorenzi (Alpen-Adria-Universität Klagenfurt, Austria); Mahdi Dolati (Sharif University of Technology, Iran); Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt, Austria); Sergey Gorinsky (IMDEA Networks Institute, Spain); Christian Timmerer (Alpen-Adria-Universität Klagenfurt & Bitmovin, Austria)

Abstract: The rapid rise of video streaming services such as Netflix and YouTube has made video delivery the largest driver of global Internet traffic, including mobile networks such as 5G and the upcoming 6G. To maintain playback quality, client devices employ Adaptive Bitrate (ABR) algorithms that adjust video quality based on metrics like available bandwidth and buffer occupancy. However, these algorithms often react slowly to sudden bandwidth fluctuations due to limited visibility into network conditions, leading to stall events that significantly degrade the user’s Quality of Experience (QoE). In this work, we introduce CaBR, a Congestion-aware adaptive BitRate decision module designed to operate on top of existing ABR algorithms. CaBR enhances video streaming performance by leveraging real-time, in-kernel network telemetry collected via the extended Berkeley Packet Filter (eBPF). By utilizing congestion metrics such as queue lengths observed at network switches, CaBR refines the bitrate selection of the underlying ABR algorithm for upcoming segments, enabling faster adaptation to changing network conditions. Our evaluation shows that CaBR significantly reduces playback stalls and improves QoE by up to 25% compared to state-of-the-art approaches in a congested environment.
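
As a rough illustration of how such a decision module could sit on top of an existing ABR algorithm, the sketch below caps the ABR-selected bitrate when eBPF-reported queue occupancy is high. The threshold, function names, and telemetry format are assumptions rather than CaBR's actual logic.

    # Illustrative sketch only: a congestion-aware refinement step layered on top of an
    # existing ABR decision, in the spirit of CaBR. Thresholds and names are assumptions.

    def refine_bitrate(abr_choice_bps, ladder_bps, queue_occupancy, high_watermark=0.8):
        """Downgrade the ABR-selected bitrate when in-network queues look congested.

        queue_occupancy: normalized switch queue length in [0, 1], assumed to be
        exported by an eBPF program into user space (e.g., via a BPF map).
        """
        ladder = sorted(ladder_bps)
        idx = ladder.index(abr_choice_bps)
        if queue_occupancy > high_watermark and idx > 0:
            return ladder[idx - 1]      # step one rung down to pre-empt a stall
        return abr_choice_bps           # otherwise keep the underlying ABR decision

    # Example: the ABR picked 8 Mbps, but queues are 90% full -> fall back to 6 Mbps.
    print(refine_bitrate(8_000_000, [2_000_000, 6_000_000, 8_000_000], 0.9))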

Posted in ATHENA | Comments Off on Receiving Kernel-Level Insights via eBPF: Can ABR Algorithms Adapt Smarter?

BMVC’25: Cross-Modal Scene Semantic Alignment for Image Complexity Assessment

Cross-Modal Scene Semantic Alignment for Image Complexity Assessment

British Machine Vision Conference (BMVC) 2025

November, 2025

Sheffield, UK

[PDF]

Yuqing Luo, Yixiao Li, Jiang Liu, Jun Fu, Hadi Amirpour, Guanghui Yue, Baoquan Zhao, Padraig Corcoran, Hantao Liu, Wei Zhou

Abstract: Image complexity assessment (ICA) is a challenging task in perceptual evaluation due to the subjective nature of human perception and the inherent semantic diversity in real-world images. Existing ICA methods predominantly rely on hand-crafted or shallow convolutional neural network-based features of a single visual modality, which are insufficient to fully capture the perceived representations closely related to image complexity. Recently, cross-modal scene semantic information has been shown to play a crucial role in various computer vision tasks, particularly those involving perceptual understanding. However, the exploration of cross-modal scene semantic information in the context of ICA remains unaddressed. Therefore, in this paper, we propose a novel ICA method called Cross-Modal Scene Semantic Alignment (CM-SSA), which leverages scene semantic alignment from a cross-modal perspective to enhance ICA performance, enabling complexity predictions to be more consistent with subjective human perception. Specifically, the proposed CM-SSA consists of a complexity regression branch and a scene semantic alignment branch. The complexity regression branch estimates image complexity levels under the guidance of the scene semantic alignment branch, while the scene semantic alignment branch is used to align images with corresponding text prompts that convey rich scene semantic information by pair-wise learning. Extensive experiments on several ICA datasets demonstrate that the proposed CM-SSA significantly outperforms state-of-the-art approaches.
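
To make the two-branch structure concrete, here is a minimal PyTorch sketch in which a shared image encoder feeds a complexity regression head and an alignment head trained pair-wise against text-prompt embeddings. The toy encoder, module sizes, and loss weighting are assumptions and not the authors' implementation.

    # Minimal sketch of the two-branch idea: complexity regression guided by
    # cross-modal scene semantic alignment. All sizes and weights are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CMSSASketch(nn.Module):
        def __init__(self, feat_dim=512, text_dim=512):
            super().__init__()
            self.backbone = nn.Sequential(          # stand-in for a real image encoder
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
            self.regressor = nn.Linear(feat_dim, 1)          # complexity regression branch
            self.align_proj = nn.Linear(feat_dim, text_dim)  # scene semantic alignment branch

        def forward(self, images, text_emb):
            f = self.backbone(images)
            complexity = self.regressor(f).squeeze(-1)
            img_emb = F.normalize(self.align_proj(f), dim=-1)
            txt_emb = F.normalize(text_emb, dim=-1)
            # pair-wise alignment: matching image/text pairs sit on the diagonal
            logits = img_emb @ txt_emb.t()
            targets = torch.arange(images.size(0))
            align_loss = F.cross_entropy(logits, targets)
            return complexity, align_loss

    model = CMSSASketch()
    imgs = torch.randn(4, 3, 224, 224)
    texts = torch.randn(4, 512)            # e.g., prompt embeddings from a text encoder
    scores = torch.rand(4)                 # subjective complexity labels in [0, 1]
    complexity, align_loss = model(imgs, texts)
    loss = F.mse_loss(complexity, scores) + 0.1 * align_loss
    loss.backward()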

Posted in ATHENA | Comments Off on BMVC’25: Cross-Modal Scene Semantic Alignment for Image Complexity Assessment

Interns at ATHENA (Summer 2025)

In July 2025, the ATHENA Christian Doppler Laboratory hosted two interns working on the following topics:

  • Leon Kordasch: Large-scale 4K 60fps video dataset
  • Theresa Petschenig: Video generation and quality assessment

At the conclusion of their internships, the interns showcased their projects and findings, earning official certificates from the university. The collaboration proved to be a rewarding experience for both the interns and the researchers at ATHENA. Through personalized mentorship, hands-on training, and ongoing support, the interns benefited from an enriched learning journey. This comprehensive guidance enabled them to build strong practical skills while deepening their understanding of research methodologies and technologies in the video streaming domain. We sincerely thank both interns for their enthusiasm, dedication, and insightful feedback, which contributed meaningfully to the ATHENA lab’s ongoing efforts.

Leon Kordasch: My internship at ATHENA was an incredibly valuable experience. The team was welcoming and supportive, and I especially appreciated the guidance of my supervisor, Mohammad Ghasempour, who did a great job explaining the theoretical background and technical concepts needed for my work. During my time there, I developed a high-quality and diverse 4K60 video dataset for applications such as AI training, real-time upscaling, and advanced video encoding research.

Theresa Petschenig: My four-week internship at ATHENA was a really enjoyable and meaningful experience. I worked on a project related to video generation and quality assessment, which allowed me to dive into some fascinating topics. I got a much better understanding of how AI-generated videos are created and evaluated, and what makes them look realistic. The internship gave me a perfect balance of practical work and learning new concepts. My supervisor, Yiying, was very nice and helpful throughout the internship. The atmosphere in the office was calm and welcoming, and the team was really friendly. I’m grateful for everything I’ve learned and for the chance to be part of such a supportive environment. This experience gave me both valuable knowledge and great memories.

Posted in ATHENA | Comments Off on Interns at ATHENA (Summer 2025)

STEP-MR: A Subjective Testing and Eye-Tracking Platform for Dynamic Point Clouds in Mixed Reality

STEP-MR: A Subjective Testing and Eye-Tracking Platform for Dynamic Point Clouds in Mixed Reality

EuroXR 2025

September 03 – September 05, 2025

Winterthur, Switzerland

[PDF, Poster]

Shivi Vats (AAU, Austria), Christian Timmerer (AAU, Austria), Hermann Hellwagner (AAU, Austria)

Abstract: The use of point cloud (PC) streaming in mixed reality (MR) environments is of particular interest due to the immersiveness and the six degrees of freedom (6DoF) provided by the 3D content. However, this immersiveness requires significant bandwidth. Innovative solutions have been developed to address these challenges, such as PC compression and/or spatially tiling the PC to stream different portions at different quality levels. This paper presents a brief overview of a Subjective Testing and Eye-tracking Platform for dynamic point clouds in Mixed Reality (STEP-MR) for the Microsoft HoloLens 2. STEP-MR was used to conduct subjective tests (described in [1]) with 41 participants, yielding over 2000 responses and more than 150 visual attention maps, the results of which can be used, among other things, to improve the dynamic (animated) point cloud streaming solutions mentioned above. Building on our previous platform, the new version now enables eye-tracking tests, including calibration and heatmap generation. Additionally, STEP-MR features modifications to the subjective tests’ functionality, such as a new rating scale and adaptability to participant movement during the tests, along with other user experience changes.
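
As a simplified illustration of the heatmap-generation step, the sketch below aggregates normalized gaze samples into a blurred visual attention map. The resolution, Gaussian width, and coordinate convention are assumptions rather than STEP-MR's actual processing.

    # Hedged sketch: build a visual attention map from gaze fixations.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def attention_map(gaze_points, width=1920, height=1080, sigma_px=40):
        """gaze_points: iterable of (x, y) fixations normalized to [0, 1]."""
        heat = np.zeros((height, width), dtype=np.float64)
        for x, y in gaze_points:
            col = min(int(x * (width - 1)), width - 1)
            row = min(int(y * (height - 1)), height - 1)
            heat[row, col] += 1.0
        heat = gaussian_filter(heat, sigma=sigma_px)   # spread fixations into a smooth map
        return heat / heat.max() if heat.max() > 0 else heat

    heat = attention_map([(0.5, 0.5), (0.52, 0.48), (0.2, 0.8)])
    print(heat.shape, heat.max())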

[1] Nguyen, M., Vats, S., Zhou, X., Viola, I., Cesar, P., Timmerer, C., & Hellwagner, H. (2024). ComPEQ-MR: Compressed Point Cloud Dataset with Eye Tracking and Quality Assessment in Mixed Reality. Proceedings of the 15th ACM Multimedia Systems Conference, 367–373. https://doi.org/10.1145/3625468.3652182

Posted in SPIRIT | Comments Off on STEP-MR: A Subjective Testing and Eye-Tracking Platform for Dynamic Point Clouds in Mixed Reality

ACM MM’25 Open Source: diveXplore – An Open-Source Software for Modern Video Retrieval with Image/Text Embeddings

diveXplore – An Open-Source Software for Modern Video Retrieval with Image/Text Embeddings

ACM Multimedia 2025

October 27 – October 31, 2025

Dublin, Ireland

[PDF]

Mario Leopold (AAU, Austria), Farzad Tashtarian (AAU, Austria), Klaus Schöffmann (AAU, Austria)

Abstract: Effective video retrieval in large-scale datasets presents a significant challenge, with existing tools often being too complex, lacking sufficient retrieval capabilities, or being too slow for rapid search tasks. This paper introduces diveXplore, an open-source software designed for interactive video retrieval. Due to its success in various competitions like the Video Browser Showdown (VBS) and the Interactive Video Retrieval 4 Beginners (IVR4B), as well as its continued development since 2017, diveXplore is a solid foundation for various kinds of retrieval tasks. The system is built on a three-layer architecture, comprising a backend for offline preprocessing, a middleware with a Node.js and Python server for query handling, and a MongoDB for metadata storage, as well as an Angular-based frontend for user interaction. Key functionalities include free-text search using natural language, temporal queries, similarity search, and other specialized search strategies. By open-sourcing diveXplore, we aim to establish a solid baseline for future research and development in the video retrieval community, encouraging contributions and adaptations for a wide range of use cases, even beyond competitive settings.
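
The following sketch illustrates the core of embedding-based free-text search as described above: precomputed keyframe embeddings are ranked by cosine similarity against a text-query embedding. The random stand-in vectors, identifier scheme, and storage layout are assumptions; diveXplore's actual pipeline (including its choice of joint image/text encoder) may differ.

    # Hedged sketch: rank precomputed keyframe embeddings against a text-query embedding.
    import numpy as np

    def search(query_embedding, keyframe_embeddings, keyframe_ids, top_k=5):
        q = query_embedding / np.linalg.norm(query_embedding)
        kf = keyframe_embeddings / np.linalg.norm(keyframe_embeddings, axis=1, keepdims=True)
        scores = kf @ q                              # cosine similarity per keyframe
        best = np.argsort(-scores)[:top_k]
        return [(keyframe_ids[i], float(scores[i])) for i in best]

    # Example with random stand-in embeddings; a real deployment would compute the vectors
    # with an image/text encoder in the offline backend and store them alongside the
    # MongoDB metadata for the middleware to query.
    rng = np.random.default_rng(0)
    db = rng.normal(size=(1000, 512))
    ids = [f"video_{i // 10}_shot_{i % 10}" for i in range(1000)]
    print(search(rng.normal(size=512), db, ids, top_k=3))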

Posted in ATHENA | Comments Off on ACM MM’25 Open Source: diveXplore – An Open-Source Software for Modern Video Retrieval with Image/Text Embeddings

Patent Approval for “Content-adaptive encoder preset prediction for adaptive live streaming”

Content-adaptive encoder preset prediction for adaptive live streaming

US Patent

[PDF]

Vignesh Menon (Alpen-Adria-Universität Klagenfurt, Austria), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Techniques for content-adaptive encoder preset prediction for adaptive live streaming are described herein. A method for content-adaptive encoder preset prediction for adaptive live streaming includes performing video complexity feature extraction on a video segment to extract complexity features such as an average texture energy, an average temporal energy, and an average luminance. These inputs may be provided to an encoding time prediction model, along with a bitrate ladder, a resolution set, a target video encoding speed, and a number of CPU threads for the video segment, to predict an encoding time, and an optimized encoding preset may be selected for the video segment by a preset selection function using the predicted encoding time. The video segment may be encoded according to the optimized encoding preset.
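
A hedged sketch of the preset-selection idea follows: given predicted encoding times per preset, choose the slowest preset that still encodes a segment within its live deadline. The preset names, timing values, and deadline rule are illustrative assumptions, not the claimed method.

    # Hedged sketch: pick the slowest (most efficient) preset whose predicted encoding
    # time still meets the live segment deadline; fall back to the fastest preset otherwise.

    def select_preset(predicted_times_s, segment_duration_s):
        """predicted_times_s: {preset_name: predicted encoding time in seconds},
        ordered from fastest to slowest preset."""
        feasible = [p for p, t in predicted_times_s.items() if t <= segment_duration_s]
        # the last feasible entry is the slowest preset that still encodes in real time
        return feasible[-1] if feasible else next(iter(predicted_times_s))

    # Example: 4-second segments; in the described method, the predictions would come from a
    # model fed with texture energy, temporal energy, luminance, the bitrate ladder, the
    # resolution set, and the number of CPU threads.
    predicted = {"ultrafast": 1.1, "veryfast": 2.3, "medium": 3.8, "slow": 6.5}
    print(select_preset(predicted, 4.0))   # -> "medium"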

Posted in ATHENA | Comments Off on Patent Approval for “Content-adaptive encoder preset prediction for adaptive live streaming”