Best Paper Award at PCS

The paper titled “Beyond Curves and Thresholds – Introducing Uncertainty Estimation to Satisfied User Ratios for Compressed Video,” co-authored by Jingwen Zhu, Hadi Amirpour, Raimund Schatz, Patrick Le Callet, and Christian Timmerer, received the Best Paper Award at the 37th Picture Coding Symposium.

Posted in ATHENA | Comments Off on Best Paper Award at PCS

ACM TOMM Special Issue on ACM Multimedia Systems 2024 and Co-located Workshops

This special issue collects extended versions of papers accepted at ACM Multimedia Systems 2024 and its co-located workshops (i.e., NOSSDAV, MMVE, and GMSys). As in 2023, all accepted MMSys full research papers and workshop papers are eligible for submission, provided the extended version contains at least 25% new material compared to the paper originally accepted at MMSys or the respective co-located workshop.

The ACM Multimedia Systems Conference and associated workshops seek to bring together experts from academia and industry to share their latest research findings in the field of multimedia systems. While research about specific aspects of multimedia systems is regularly published in various venues covering networking, operating systems, real-time systems, databases, mobile computing, distributed systems, computer vision, and middleware communities, MMSys aims to cut across these domains in the context of multimedia data types. This provides a unique opportunity to investigate the intersections and the interplay of the various approaches and solutions developed across these domains.
Topics

Submissions are solicited on all aspects of multimedia systems, including but not limited to:

  • Content generation, adaptation, and summarization
  • Adaptive streaming of multimedia content
  • AI (e.g., machine/deep learning) for all aspects of multimedia systems
  • Network and system support for multimedia
  • Video games and cloud gaming
  • Virtual and augmented reality content and systems
  • Multiview, 360 degrees, 3D, and volumetric videos
  • Internet of Things (IoTs) and multimedia
  • Mobile multimedia and 5G/6G
  • Wearable multimedia
  • Cloud and edge computing for multimedia systems
  • Digital twins
  • Cyber-physical systems
  • Multi-sensory experiences
  • Autonomous multimedia systems
  • Quality of Experience (QoE)
  • Multimedia systems for robotics and unmanned vehicles
  • Multimedia systems for health
  • Audio, image and video coding for humans and machines
  • Analytics for multimedia systems
  • Sustainable (green) multimedia systems

Important Dates

  • Open for submissions: July 15, 2024
  • Submission deadline: September 15, 2024
  • First-round review decisions: November 15, 2024
  • Deadline for revision submissions: January 15, 2025
  • Notification of final decisions: March 15, 2025
  • Tentative publication: April 2025

Submission Information

Prospective authors are invited to submit their manuscripts electronically, adhering to the ACM TOMM journal guidelines (see https://tomm.acm.org/authors.cfm). Manuscripts that do not follow these guidelines will not be considered. The manuscript should be within the scope of ACM TOMM. Please submit your paper through the online system (https://mc.manuscriptcentral.com/tomm) and be sure to select this special issue. Manuscripts must not have been published previously or be currently under submission elsewhere.

Guest Editors

  • Christian Timmerer, University of Klagenfurt, Austria, christian.timmerer@aau.at
  • Maria Martini, Kingston University, UK, M.Martini@kingston.ac.uk
  • Ali C. Begen, Ozyegin University, Türkiye, ali.begen@ozyegin.edu.tr
  • Luca De Cicco, Politecnico di Bari, Italy, luca.decicco@poliba.it

For questions and further information, please contact guest editors using acm-tomm-si-msys2024@itec.aau.at.

Posted in ATHENA | Comments Off on ACM TOMM Special Issue on ACM Multimedia Systems 2024 and Co-located Workshops

Christian Timmerer presents at Telecom Seminar Series at TII about HTTP Adaptive Streaming

HTTP Adaptive Streaming – Quo Vadis?

June 27, 2024, 04:00 PM (Dubai time)

[Slides]

Abstract: Video traffic on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research and of industrial networked multimedia services certainly was the HTTP Adaptive Streaming (HAS) technique. This resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) which, together with HTTP Live Streaming (HLS), is widely used for multimedia delivery in today’s networks. Existing challenges in multimedia systems research deal with the trade-off between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two aspects if not both. This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry. In this talk, we will present selected novel approaches and research results of the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put the work into the context of international multimedia systems research.
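To make the adaptation idea behind HAS concrete, here is a minimal sketch of a generic throughput-based bitrate selection heuristic. The bitrate ladder, safety margin, and function name are illustrative assumptions, not the specific algorithms presented in the talk.

```python
def select_bitrate(ladder_kbps, throughput_kbps, safety=0.8):
    """Pick the highest representation whose bitrate fits within a
    safety-scaled estimate of the measured network throughput.
    A generic throughput-based ABR heuristic; real players combine
    this with buffer occupancy and other signals."""
    budget = throughput_kbps * safety
    feasible = [b for b in sorted(ladder_kbps) if b <= budget]
    return feasible[-1] if feasible else min(ladder_kbps)

# Illustrative bitrate ladder (kbps) for a DASH/HLS stream
ladder = [235, 750, 1750, 4300, 8100]
```

In this sketch, optimizing toward higher quality (choosing a higher rung) directly trades off against latency and stall risk, which is exactly the three-way tension between complexity, time, and QoE described above.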

Biography: Christian Timmerer is a full professor of computer science at Alpen-Adria-Universität Klagenfurt (AAU), Institute of Information Technology (ITEC), and the director of the Christian Doppler (CD) Laboratory ATHENA (https://athena.itec.aau.at/). His research interests include multimedia systems, immersive multimedia communication, streaming, adaptation, and quality of experience, where he co-authored more than 20 patent applications and more than 300 articles. He was the general chair of WIAMIS 2008, QoMEX 2013, MMSys 2016, and PV 2018 and has participated in several EC-funded projects, notably DANAE, ENTHRONE, P2P-Next, ALICANTE, SocialSensor, COST IC1003 QUALINET, ICoSOLE, and SPIRIT. He also participated in ISO/MPEG work for several years, notably in the areas of MPEG-21, MPEG-M, MPEG-V, and MPEG-DASH, where he also served as standard editor. In 2012, he co-founded Bitmovin (http://www.bitmovin.com/) to provide professional services around MPEG-DASH, where he holds the position of Chief Innovation Officer (CIO) – Head of Research and Standardization. Further information at http://timmerer.com.

Posted in ATHENA | Comments Off on Christian Timmerer presents at Telecom Seminar Series at TII about HTTP Adaptive Streaming

Patent Approval for “Per-Title Encoding Using Spatial and Temporal Resolution Downscaling”

Per-Title Encoding Using Spatial and Temporal Resolution Downscaling

US Patent

[PDF]

Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria) and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

 

Abstract: Techniques relating to per-title encoding using spatial and temporal resolution downscaling are disclosed. A method for per-title encoding includes receiving a video input comprising video segments, spatially downscaling the video input, temporally downscaling the video input, encoding the video input to generate an encoded video, and then temporally and spatially upscaling the encoded video. Spatial downscaling may include reducing the resolution of the video input, and temporal downscaling may include reducing its framerate. Objective metrics for the upscaled encoded video show improved quality over conventional methods.
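The downscale-then-encode step of the method can be sketched as an ffmpeg invocation combining the `scale` (spatial) and `fps` (temporal) filters. The filenames, codec choice, and parameter values below are hypothetical illustrations, not taken from the patent:

```python
def build_downscale_encode_cmd(src, dst, width, height, fps, crf=28):
    """Assemble an ffmpeg command that spatially downscales (scale
    filter), temporally downscales (fps filter), and encodes a video
    segment. All parameter values here are illustrative."""
    vf = f"scale={width}:{height},fps={fps}"
    return ["ffmpeg", "-y", "-i", src, "-vf", vf,
            "-c:v", "libx265", "-crf", str(crf), dst]

# Hypothetical example: halve the resolution and the framerate
cmd = build_downscale_encode_cmd("segment.mp4", "out.mp4", 960, 540, 15)
```

The complementary upscaling step on the decoded output would mirror this with the `scale` filter back to the original resolution and a frame-rate restoration filter.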

Posted in ATHENA | Comments Off on Patent Approval for “Per-Title Encoding Using Spatial and Temporal Resolution Downscaling”

Samira Afzal talk at 11th Fraunhofer FOKUS MWS

Energy Efficient Video Encoding for Cloud and Edge Computing Instances

11th FOKUS Media Web Symposium

June 11, 2024 | Berlin, Germany

 

Abstract: The significant increase in energy consumption within data centers is primarily due to the exponential rise in demand for complex computing workflows and storage resources. Video streaming applications, which are both compute- and storage-intensive, account for the majority of today’s Internet services. To address this challenge, the talk proposes a novel matching-based method designed to schedule video encoding applications on Cloud resources. The method optimizes for user-defined objectives, including energy consumption, processing time, cost, and CO2 emissions, or a trade-off between these priorities.
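The matching idea can be pictured as assigning encoding tasks to instances so that a weighted objective is minimized. The brute-force search, instance attributes, and weights below are invented for illustration and do not reflect the actual model or algorithm presented in the talk:

```python
from itertools import permutations

def best_assignment(tasks, instances, cost_fn):
    """Exhaustively match each task to a distinct instance, minimizing
    the summed cost. A brute-force stand-in for a matching-based
    scheduler; only practical for small inputs."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(instances)), len(tasks)):
        total = sum(cost_fn(t, instances[i]) for t, i in zip(tasks, perm))
        if total < best_cost:
            best, best_cost = list(perm), total
    return best, best_cost

# Hypothetical per-unit attributes for two cloud instance types
instances = [{"energy": 3.0, "time": 2.0}, {"energy": 5.0, "time": 1.0}]
tasks = [2.0, 1.0]  # encoding workload sizes in arbitrary units

# Weighted objective: this user prioritizes energy over time
def cost(size, inst, w_energy=1.0, w_time=0.1):
    return size * (w_energy * inst["energy"] + w_time * inst["time"])
```

Changing the weights shifts the schedule toward whichever objective (energy, time, cost, emissions) the user prioritizes, which is the trade-off the abstract describes.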

Samira Afzal is a postdoctoral researcher at Alpen-Adria-Universität Klagenfurt, Austria, collaborating with Bitmovin. Previously, she was a postdoctoral researcher at the University of São Paulo, conducting research on IoT, SWARM, and surveillance systems. She received her Ph.D. in November 2019 from the University of Campinas (UNICAMP). During her Ph.D., she collaborated with Samsung on a project in the area of mobile video streaming over heterogeneous wireless networks and multipath transmission methods to increase perceived video quality. Further information is available here.

Posted in ATHENA, GAIA | Comments Off on Samira Afzal talk at 11th Fraunhofer FOKUS MWS

ODVista: An Omnidirectional Video Dataset for Super-Resolution and Quality Enhancement Tasks

IEEE International Conference on Image Processing (ICIP 2024)

27-30 October 2024, Abu Dhabi, UAE

[PDF]

Ahmed Telili (TII, UAE), Ibrahim Farhat (TII, UAE), Wassim Hamidouche (TII, UAE), Hadi Amirpour (AAU, Austria)

 

Abstract: Omnidirectional or 360-degree video is being increasingly deployed, largely due to the latest advancements in immersive virtual reality (VR) and extended reality (XR) technology. However, the adoption of these videos in streaming encounters challenges related to bandwidth and latency, particularly in mobility conditions such as with unmanned aerial vehicles (UAVs). Adaptive resolution and compression aim to preserve quality while maintaining low latency under these constraints, yet downscaling and encoding can still degrade quality and introduce artifacts. Machine learning (ML)-based super-resolution (SR) and quality enhancement techniques offer a promising solution by enhancing detail recovery and reducing compression artifacts. However, current publicly available 360-degree video SR datasets lack compression artifacts, which limits research in this field. To bridge this gap, this paper introduces the omnidirectional video streaming dataset (ODVista), which comprises 200 high-resolution and high-quality videos downscaled and encoded at four bitrate ranges using the high-efficiency video coding (HEVC)/H.265 standard. Evaluations show that the dataset not only features a wide variety of scenes but also spans different levels of content complexity, which is crucial for robust solutions that perform well in real-world scenarios and generalize across diverse visual environments. Additionally, we evaluate the performance, considering both quality enhancement and runtime, of two handcrafted and two ML-based SR models on the validation and testing sets of ODVista.
Dataset URL: https://github.com/Omnidirectional-video-group/ODVista
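The per-rate encoding step described in the abstract can be pictured as a simple batch loop producing one HEVC output per target bitrate. The four bitrate values, file naming, and encoder settings below are placeholders; the actual ODVista configuration is specified in the paper and repository:

```python
def build_ladder_cmds(src, bitrates_kbps):
    """Generate one ffmpeg HEVC encoding command per target bitrate.
    Bitrate values and naming are placeholders, not ODVista's actual
    four bitrate ranges."""
    cmds = []
    for kbps in bitrates_kbps:
        dst = f"{src.rsplit('.', 1)[0]}_{kbps}k.mp4"
        cmds.append(["ffmpeg", "-y", "-i", src,
                     "-c:v", "libx265", "-b:v", f"{kbps}k", dst])
    return cmds

# Hypothetical four-rung bitrate ladder for one source video
cmds = build_ladder_cmds("erp_video.mp4", [1000, 3000, 6000, 12000])
```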

 

Posted in ATHENA | Comments Off on ODVista: An Omnidirectional Video Dataset for Super-Resolution and Quality Enhancement Tasks

Long Night of Research/Lange Nacht der Forschung 2024

The University of Klagenfurt (AAU) made a strong impression at the 2024 Lange Nacht der Forschung (Long Night of Research), held at the university and Lakeside Park on May 24th and attracting over 8,000 visitors. The ATHENA Lab, a leading research group within AAU, particularly impressed visitors with its three interactive stations – ATHENA (L20), GAIA (L21), and SPIRIT (L22) – showcasing its work at the forefront of technology and sustainability.

ATHENA: L20 – How does video work on the Internet?

The ATHENA (L20) station explored the world of video streaming. Visitors learned how content travels from its source to their devices and, through interactive displays, discovered how innovative technologies ensure videos stream quickly and in the best possible quality, reaching Smart TVs seamlessly.

GAIA: L21 – Greener Video Streaming for a Sustainable Future

The GAIA (L21) station aimed to raise visitors’ awareness about the energy consumption and environmental impact of video streaming. It demonstrated how modern technologies and a conscious approach to video streaming can positively impact the environment. This station encouraged visitors to contribute to a greener future.


SPIRIT: L22 – A look into the future of virtual presence: What do people look like as 3D point clouds?

This station transported visitors to the future of communication. SPIRIT (L22) explored immersive telepresence, where people and objects are no longer confined to 2D video tiles but represented as realistic 3D point clouds for VR/AR glasses. Imagine the iconic “holodeck” from Star Trek coming to life! The station showed what such representations might look like, bridging the gap between the physical and virtual worlds.


The ATHENA Lab’s stations were a magnet for kids at LNDF! Filled with cool activities and demos, the displays showcasing cutting-edge research kept young minds engaged and curious about the future.

 

Posted in ATHENA, GAIA, SPIRIT | Comments Off on Long Night of Research/Lange Nacht der Forschung 2024