VE-Match: Video Encoding Matching-based Model for Cloud and Edge Computing Instances

GMSys 2023: First International ACM Green Multimedia Systems Workshop

7 – 10 June 2023 | Vancouver, Canada

Conference Website

Samira Afzal (Alpen-Adria-Universität Klagenfurt), Narges Mehran (Alpen-Adria-Universität Klagenfurt), Sandro Linder (Bitmovin), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Radu Prodan (Alpen-Adria-Universität Klagenfurt)

Abstract: The considerable surge in energy consumption within data centers can be attributed to the exponential rise in demand for complex computing workflows and storage resources. Video streaming applications are both compute- and storage-intensive and account for the majority of today’s Internet services. In this work, we design a video encoding application consisting of a codec, bitrate, and resolution set for encoding a video segment. We then propose VE-Match, a matching-based method to schedule video encoding applications on both Cloud and Edge resources to optimize cost and energy consumption. Evaluation results on a real computing testbed federated between Amazon Web Services (AWS) EC2 Cloud instances and the Alpen-Adria University (AAU) Edge server reveal that VE-Match achieves 17%-78% lower costs in the cost-optimized scenarios compared to the energy-optimized and cost-energy tradeoff scenarios. Moreover, VE-Match reduces the video encoding energy consumption by 38%-45% and gCO2 emissions by up to 80% in the energy-optimized scenarios compared to the cost-optimized and cost-energy tradeoff scenarios.
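
The matching idea can be sketched with a toy, greedy stand-in for VE-Match (illustrative only, not the authors' actual algorithm): each encoding task is assigned to the Cloud or Edge instance minimizing a weighted cost-energy score. All instance prices and power figures below are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's algorithm): greedy matching of
# encoding tasks to Cloud/Edge instances by a weighted cost-energy score.
# Instance prices and wattages are hypothetical placeholders.

def match_encodings(tasks, instances, w_cost=0.5, w_energy=0.5):
    """For each task, pick the instance with the lowest weighted sum of
    monetary cost and energy consumption over the task's duration."""
    schedule = {}
    for task in tasks:
        best = min(
            instances,
            key=lambda inst: w_cost * inst["cost_per_s"] * task["duration_s"]
                           + w_energy * inst["watts"] * task["duration_s"],
        )
        schedule[task["name"]] = best["name"]
    return schedule

instances = [
    {"name": "aws-c5.xlarge", "cost_per_s": 0.000047, "watts": 95.0},
    {"name": "aau-edge",      "cost_per_s": 0.0,      "watts": 60.0},
]
tasks = [{"name": "seg0-hevc-1080p", "duration_s": 12.0}]

print(match_encodings(tasks, instances))            # cost-energy tradeoff
print(match_encodings(tasks, instances, 1.0, 0.0))  # cost-optimized
```

Varying the weights reproduces the three scenario types from the abstract: cost-optimized (w_cost=1), energy-optimized (w_energy=1), and the tradeoff in between.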

Keywords: Video encoding, Cloud and Edge computing, energy consumption, CO2 emission, scheduling.

Posted in GAIA | Comments Off on VE-Match: Video Encoding Matching-based Model for Cloud and Edge Computing Instances

HTTP Adaptive Streaming – Quo Vadis? (2023)

IEEE ComSoc MMTC Distinguished Lecture Series

Speaker: Prof. Christian Timmerer, Alpen-Adria-Universität Klagenfurt (AAU), Austria

Date/Time: Thursday, Apr 20, 2023, 10:00 AM Pacific Time (US and Canada) / 7:00 PM CEST (Austria)

Title: HTTP Adaptive Streaming (HAS) — Quo Vadis? (2023; for the 2021 version, see here)

Abstract: Video traffic on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler of multimedia systems research and industrial networked multimedia services was certainly the HTTP Adaptive Streaming (HAS) technique. It resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH), which, together with HTTP Live Streaming (HLS), is widely used for multimedia delivery in today’s networks. Existing challenges in multimedia systems research deal with the trade-off between (i) ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two, if not both. This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services), jointly funded by public sources and industry. In this talk, we will present selected novel approaches and research results of the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put this work into the context of international multimedia systems research.

Biography: Christian Timmerer is a full professor of computer science at Alpen-Adria-Universität Klagenfurt (AAU), Institute of Information Technology (ITEC), and the director of the Christian Doppler (CD) Laboratory ATHENA. His research interests include multimedia systems, immersive multimedia communication, streaming, adaptation, and quality of experience; in these areas, he co-authored seven patents and more than 300 articles. He was the general chair of WIAMIS 2008, QoMEX 2013, MMSys 2016, and PV 2018 and has participated in several EC-funded projects, notably DANAE, ENTHRONE, P2P-Next, ALICANTE, SocialSensor, COST IC1003 QUALINET, ICoSOLE, and SPIRIT. He also participated in ISO/MPEG work for several years, notably in the areas of MPEG-21, MPEG-M, MPEG-V, and MPEG-DASH, where he also served as standard editor. In 2012, he co-founded Bitmovin to provide professional services around MPEG-DASH, where he holds the position of Chief Innovation Officer (CIO), Head of Research and Standardization.

Posted in News | Comments Off on HTTP Adaptive Streaming – Quo Vadis? (2023)

MTAP: Performance Analysis of H2BR: HTTP/2-based Segment Upgrading to Improve the QoE in HAS

Multimedia Tools and Applications


Minh Nguyen, Hadi Amirpour, Farzad Tashtarian, Christian Timmerer, and Hermann Hellwagner

(Alpen-Adria-Universität Klagenfurt)

Abstract: HTTP Adaptive Streaming (HAS) plays a key role in over-the-top video streaming with the ability to reduce the video stall duration by adapting the quality of transmitted video segments to the network conditions. However, HAS still suffers from two problems. First, it incurs variations in video quality because of throughput fluctuation. Adaptive bitrate (ABR) algorithms at the HAS client usually select a low-quality segment when the throughput drops to avoid stall events, which impairs the Quality of Experience (QoE) of the end-users. Second, many ABR algorithms choose the lowest-quality segments at the beginning of a video streaming session to ramp up the playout buffer early on. Although this strategy decreases the startup time, clients can be annoyed as they have to watch a low-quality video initially.
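
The throughput-driven quality selection described above can be illustrated with a minimal rate-based ABR sketch (illustrative only; production players use more elaborate heuristics, and the bitrate ladder here is hypothetical):

```python
# Minimal throughput-based ABR sketch (illustrative, not a real player's
# algorithm). The bitrate ladder is a hypothetical example.

BITRATE_LADDER_KBPS = [500, 1200, 2500, 5000]

def select_bitrate(throughput_kbps, buffer_s, safety=0.8, min_buffer_s=5.0):
    """Pick the highest ladder rung sustainable at a safety margin of the
    measured throughput; fall back to the lowest rung when the buffer is
    nearly drained, to avoid a stall."""
    if buffer_s < min_buffer_s:
        return BITRATE_LADDER_KBPS[0]
    budget = throughput_kbps * safety
    feasible = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return feasible[-1] if feasible else BITRATE_LADDER_KBPS[0]

print(select_bitrate(4000, 20))  # healthy buffer, ~4 Mbps -> 2500
print(select_bitrate(4000, 2))   # near-empty buffer -> 500
```

The second call shows exactly the behavior criticized in the abstract: whenever the buffer runs low, the client drops to the lowest quality, trading QoE for stall avoidance.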

To address these issues, we introduced the H2BR technique (HTTP/2-Based Retransmission), which utilizes features of HTTP/2 (including server push, multiplexing, stream priority, and stream termination) to retransmit higher-quality versions of video segments already in the client buffer, thereby improving video quality. Although H2BR was shown to enhance the QoE, only limited streaming scenarios were considered, so general conclusions about H2BR’s performance could not be drawn. Thus, this article provides a comprehensive evaluation to answer three open questions: (i) how H2BR’s performance is impacted by parameters at the server side (i.e., various encoding specifications), the network side (i.e., packet loss rate), and the client side (i.e., buffer size); (ii) how H2BR compares with other state-of-the-art approaches under different configurations of these parameters; and (iii) how to effectively utilize H2BR on top of ABR algorithms in various streaming scenarios.
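
The core decision behind such a segment-upgrade scheme can be sketched as follows (a simplified illustration, not the paper's exact logic): a buffered low-quality segment is only worth re-requesting at a higher quality if the upgraded copy can arrive before its playout deadline.

```python
# Simplified upgrade-decision sketch in the spirit of H2BR (not the
# paper's exact algorithm): re-request a higher-quality copy of a
# buffered segment only if it can be downloaded before it is played.

def can_upgrade(seg_size_bits, throughput_bps, time_to_playout_s, margin=0.9):
    """True if downloading the higher-quality copy fits within the time
    remaining until the segment's playout, with a safety margin."""
    download_time_s = seg_size_bits / throughput_bps
    return download_time_s <= margin * time_to_playout_s

# 2.5 Mbps x 4 s segment = 10 Mbit, 4 Mbps link, plays in 8 s -> upgrade
print(can_upgrade(10_000_000, 4_000_000, 8.0))   # True
# same segment but only 2 s left -> keep the low-quality copy
print(can_upgrade(10_000_000, 4_000_000, 2.0))   # False
```

HTTP/2 stream termination then matters for the negative case: if throughput drops mid-upgrade and the deadline can no longer be met, the client can cancel the stream without tearing down the connection.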

The experimental results show that H2BR’s performance increases with the buffer size and decreases with increasing packet loss rate and/or video segment duration. The number of quality levels can impact H2BR’s performance negatively or positively, depending on the deployed ABR algorithm. In general, H2BR enhances the video quality in both scalable and non-scalable video streaming, by up to 14% in the latter. Compared with an existing retransmission technique (i.e., SQUAD), H2BR achieves improvements of more than 10% in QoE and 9% in average video quality.

Keywords: HTTP adaptive streaming, DASH, Retransmission, QoE, HTTP/2, H2BR

Posted in News | Comments Off on MTAP: Performance Analysis of H2BR: HTTP/2-based Segment Upgrading to Improve the QoE in HAS

Green video complexity analysis for efficient encoding in Adaptive Video Streaming

GMSys 2023: First International ACM Green Multimedia Systems Workshop

7 – 10 June 2023 | Vancouver, Canada

Conference Website

[PDF] [Slides]

Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Christian Feldmann (Bitmovin, Klagenfurt), Klaus Schoeffmann (Alpen-Adria-Universität Klagenfurt), Mohammed Ghanbari (University of Essex), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt)


For adaptive streaming applications, low-complexity yet accurate video complexity features are necessary to analyze the video content in real time, ensuring fast and compression-efficient video streaming without disruptions. The popular state-of-the-art video complexity features, Spatial Information (SI) and Temporal Information (TI), do not correlate well with the encoding parameters in adaptive streaming applications. In this light, the Video Complexity Analyzer (VCA) was introduced, determining features based on Discrete Cosine Transform (DCT) energy. This paper presents optimizations of VCA for faster and more energy-efficient video complexity analysis. Experimental results show that VCA v2.0, using eight CPU threads, Single Instruction Multiple Data (SIMD) instructions, and a low-pass DCT optimization, determines seven complexity features of Ultra High Definition 8-bit videos with better accuracy, at a speed of 292.68 fps and with 97.06% lower energy consumption than the reference SITI implementation.
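
As a simplified illustration of a DCT-energy-based feature in the spirit of VCA (not its exact definition, which applies coefficient weighting and SIMD-optimized integer transforms), the average AC energy of block-wise DCTs of the luma plane can serve as a spatial-complexity proxy:

```python
# Simplified DCT-energy spatial-complexity sketch (illustrative; VCA's
# actual features use weighted coefficients and optimized transforms).
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def spatial_complexity(luma, block=32):
    """Mean AC (non-DC) DCT energy over non-overlapping luma blocks."""
    d = dct_matrix(block)
    h, w = (luma.shape[0] // block) * block, (luma.shape[1] // block) * block
    energies = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            coeffs = d @ luma[y:y+block, x:x+block].astype(float) @ d.T
            coeffs[0, 0] = 0.0  # drop DC: brightness is not texture
            energies.append(np.sum(np.abs(coeffs)))
    return float(np.mean(energies))

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128.0)            # flat gray: near-zero complexity
noisy = rng.integers(0, 256, (64, 64))     # noise: high complexity
print(spatial_complexity(flat), spatial_complexity(noisy))
```

A flat frame yields near-zero AC energy while a noisy frame yields a large value, which is the kind of content discrimination an encoder can exploit for per-title or per-segment parameter selection.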

Content-adaptive encoding framework using video content complexity analysis.

Posted in GAIA, News | Comments Off on Green video complexity analysis for efficient encoding in Adaptive Video Streaming

IXR’23: Interactive eXtended Reality 2023

colocated with ACM Multimedia 2023

October 2023, Ottawa, Canada

Workshop Chairs:

  • Irene Viola, CWI, Netherlands
  • Hadi Amirpour, Klagenfurt University, Austria
  • Stephanie Arévalo Arboleda, TU Ilmenau, Germany
  • Maria Torres Vega, Ghent University, Belgium

Topics of interest include, but are not limited to:

  • Novel low latency encoding techniques for interactive XR applications
  • Novel networking systems and protocols to enable interactive immersive applications, including optimizations ranging from the hardware (e.g., millimeter-wave networks or optical wireless), physical, and MAC layers up to the network, transport, and application layers (such as over-the-top protocols);
  • Significant advances and optimizations in 3D modeling pipelines for AR/VR visualization, accessible and inclusive GUIs, and interactive 3D models;
  • Compression and delivery strategies for immersive media contents, such as omnidirectional video, light fields, point clouds, dynamic and time varying meshes;
  • Quality of Experience management of interactive immersive media applications;
  • Novel rendering techniques to enhance interactivity of XR applications;
  • Application of interactive XR to different areas of society, such as health (e.g., virtual reality exposure therapy), industry (Industry 4.0), and XR e-learning (in line with new global aims)

Important Dates:

  • Submission deadline: 05 July 2023, 23:59 AoE
  • Notifications of acceptance: 30 July 2023
  • Camera ready submission: 06 August 2023
  • Workshop: 29 October to 3 November 2023

Posted in News | Comments Off on IXR’23: Interactive eXtended Reality 2023

VCIP 2025 Conference to be held in Klagenfurt by AAU

 International Conference on Visual Communications and Image Processing (VCIP)

1-4 December 2025

Klagenfurt, Austria


VCIP has a long tradition of showcasing pioneering technologies in visual communication and processing, and many landmark papers first appeared at VCIP. We will carry on this tradition by disseminating the state of the art in visual communication technology and by brainstorming and envisioning the future of visual communication technology and applications.

General Chairs:

  • Lu Yu (ZJU, CN)
  • Shan Liu (Tencent, USA)
  • Christian Timmerer (AAU, AT)

Technical Program Committee Chairs:

  • Fernando Pereira (IST-IT, PT)
  • Carla Pagliari (IME, BR)
  • Hadi Amirpour (AAU, AT)

Plenary Session Chairs:

  • Christine Guillemot (INRIA, FR)
  • Ali Begen (OZU, TR)

Special Session Chairs:

  • Jörn Ostermann (LUH, DE)
  • Frederic Dufaux (CNRS, FR)

Tutorial Chairs:

  • Eckehard Steinbach (TUM, DE)
  • Roger Zimmermann (NUS, SG)

Publicity Chairs:

  • Carl James Debono (UM, MT)
  • Bruno Zatt (ViTech, BR)
  • Wen-Huang Cheng (NYCU, TW)

Publication Chairs:

  • Abdelhak Bentaleb (Concordia Univ., CA)
  • Christian Herglotz (FAU, DE)

Industry Liaison:

  • Iraj Sodagar (Tencent, USA)
  • Michael Raulet (ATEME, FR)
  • Christian Feldmann (Bitmovin, DE)
  • Rufael Mekuria (Unified Streaming, NL)
  • Debargha Mukherjee (Google, USA)

Demo, Open Source, Dataset Chairs:

  • Daniel Silhavy (Fraunhofer FOKUS, DE)
  • Farzad Tashtarian (AAU, AT)
  • TBD

Doctoral Symposium Chairs:

  • Angeliki Katsenou (TCD, IE)
  • Mathias Wien (RWTH, DE)

Diversity and Inclusion Chairs:

  • TBD
  • Samira Afzal (AAU, AT)

Local Organization Team:

  • Martina Steinbacher
  • Margit Letter
  • Mario Taschwer
  • Rudi Messner

More Information to be announced.

Posted in News | Comments Off on VCIP 2025 Conference to be held in Klagenfurt by AAU

IEEE COMST: A Tutorial on Immersive Video Delivery: From Omnidirectional Video to Holography

IEEE Communications Surveys and Tutorials

Journal Website


Jeroen van der Hooft (Ghent University, Belgium), Hadi Amirpour (AAU, Austria), Maria Torres Vega (KU Leuven, Belgium), Yago Sanchez (Fraunhofer/HHI, Germany), Raimund Schatz (AIT, Austria), Thomas Schierl (Fraunhofer/HHI, Germany), and Christian Timmerer (AAU, Austria)

Abstract: Video services are evolving from traditional two-dimensional video to virtual reality and holograms, which offer six degrees of freedom to users, enabling them to freely move around in a scene and change focus as desired. However, this increase in freedom translates into stringent requirements in terms of ultra-high bandwidth (in the order of Gigabits per second) and minimal latency (in the order of milliseconds). To realize such immersive services, the network transport, as well as the video representation and encoding, have to be fundamentally enhanced. The purpose of this tutorial article is to provide an elaborate introduction to the creation, streaming, and evaluation of immersive video. Moreover, it aims to provide lessons learned and to point at promising research paths to enable truly interactive immersive video applications toward holography.
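
A back-of-envelope calculation illustrates why the bandwidth requirements reach Gigabits per second (the figures are illustrative; actual rates depend on the capture setup and codec): an uncompressed dynamic point cloud with one million points per frame at 30 fps already exceeds 3 Gbit/s.

```python
# Back-of-envelope check on the "Gigabits per second" claim, using
# illustrative capture parameters (not from the article itself).
points_per_frame = 1_000_000
bytes_per_point = 3 * 4 + 3 * 1       # xyz as float32 + RGB as uint8
fps = 30
raw_bps = points_per_frame * bytes_per_point * 8 * fps
print(f"{raw_bps / 1e9:.1f} Gbit/s uncompressed")  # 3.6 Gbit/s
```

Even aggressive point cloud compression ratios leave rates far above those of traditional 2D video, which is why both the representation and the network transport must be enhanced, as the tutorial argues.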

Keywords: Immersive video delivery, 3DoF, 6DoF, omnidirectional video, volumetric video, point clouds, meshes, light fields, holography, end-to-end systems

Posted in News | Comments Off on IEEE COMST: A Tutorial on Immersive Video Delivery: From Omnidirectional Video to Holography