LIVES’25

The 2nd IEEE ICME Workshop on

Surpassing Latency Limits in Adaptive Live Video Streaming (LIVES’25)

June 30 to July 4, 2025, Nantes, France.

Call for Submissions (CFP)

Delivering video content from a server to viewers over the Internet is one of the most time-consuming steps in the streaming workflow and must be managed carefully to offer an uninterrupted streaming experience. End-to-end latency, i.e., from camera capture to playback on the user's device, is particularly problematic for live streaming. Applications such as virtual events, esports, online learning, gaming, webinars, and all-hands meetings require low latency to operate properly. At the same time, video streaming is ubiquitous across applications, devices, and fields, and delivering high Quality of Experience (QoE) to viewers is crucial; the amount of data that must be processed to meet these QoE requirements can no longer be handled by manual, human-driven workflows alone.

Important Dates

  • Submission deadline: March 25, 2025
  • Author notification: April 18, 2025
  • Camera-ready: April 30, 2025
  • Workshop date: July 4, 2025

Best Student Paper Award at NAB BEIT Conference 2025

Two-Pass Encoding for Live Video Streaming

NAB Broadcast Engineering and IT (BEIT) Conference

5–9 April 2025 | Las Vegas, NV, USA

[PDF]

Mohammad Ghasempour (AAU, Austria); Hadi Amirpour (AAU, Austria); Christian Timmerer (AAU, Austria)

Abstract: Live streaming has become increasingly important in our daily lives due to the growing demand for real-time content consumption. Traditional live video streaming typically relies on single-pass encoding due to its low latency. However, it lacks video content analysis, often resulting in inefficient compression and quality fluctuations during playback. Constant Rate Factor (CRF) encoding, a type of single-pass method, offers more consistent quality but suffers from unpredictable output bitrate, complicating bandwidth management. In contrast, multi-pass encoding improves compression efficiency through multiple passes. However, its added latency makes it unsuitable for live streaming. In this paper, we propose OTPS, an online two-pass encoding scheme that overcomes these limitations by employing fast feature extraction on a downscaled video representation and a gradient-boosting regression model to predict the optimal CRF for encoding. This approach provides consistent quality and efficient encoding while avoiding the latency introduced by traditional multi-pass techniques. Experimental results show that OTPS offers 3.7% higher compression efficiency than single-pass encoding and achieves up to 28.1% faster encoding than multi-pass modes. Compared to single-pass encoding, encoded videos using OTPS exhibit 5% less deviation from the target bitrate while delivering notably more consistent quality.
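As a rough illustration of the core idea (not the authors' implementation), the sketch below extracts cheap spatial and temporal complexity features from a downscaled copy of an upcoming segment, feeds them together with the target bitrate to a pre-trained gradient-boosting regressor that predicts a CRF value, and then runs a single CRF encode. The feature set, the model file crf_model.joblib, and the x264/ffmpeg settings are illustrative assumptions.

```python
# Illustrative sketch of online two-pass (OTPS-style) CRF prediction.
# Assumptions: a pre-trained gradient-boosting regressor stored in
# "crf_model.joblib", simple complexity features, and x264 via ffmpeg.
import subprocess

import cv2
import joblib
import numpy as np


def segment_features(path: str, width: int = 360) -> np.ndarray:
    """Cheap features from a downscaled representation of the segment."""
    cap = cv2.VideoCapture(path)
    prev, spatial, temporal = None, [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h = int(frame.shape[0] * width / frame.shape[1])
        small = cv2.cvtColor(cv2.resize(frame, (width, h)), cv2.COLOR_BGR2GRAY)
        spatial.append(cv2.Laplacian(small, cv2.CV_64F).var())  # texture proxy
        if prev is not None:
            temporal.append(np.abs(small.astype(np.int16) - prev).mean())  # motion proxy
        prev = small.astype(np.int16)
    cap.release()
    return np.array([np.mean(spatial), np.std(spatial),
                     np.mean(temporal) if temporal else 0.0])


def encode_with_predicted_crf(segment: str, target_kbps: int, out: str) -> None:
    """Predict a CRF for the target bitrate and run a single-pass CRF encode."""
    model = joblib.load("crf_model.joblib")  # assumed pre-trained regressor
    feats = np.append(segment_features(segment), target_kbps)
    crf = float(np.clip(model.predict(feats.reshape(1, -1))[0], 18, 40))
    subprocess.run(["ffmpeg", "-y", "-i", segment, "-c:v", "libx264",
                    "-crf", f"{crf:.1f}", "-preset", "veryfast", out], check=True)
```

The single cheap analysis pass on the downscaled copy stands in for the full first pass of conventional two-pass encoding, which is what keeps the latency compatible with live delivery.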


Merry Christmas and Happy New Year 2025

Merry Christmas & Happy New Year 2025 from the ATHENA research lab


ALPHAS: Adaptive Bitrate Ladder Optimization for Multi-Live Video Streaming

IEEE International Conference on Computer Communications

IEEE INFOCOM 2025

19–22 May 2025 | London, United Kingdom

[PDF]

Farzad Tashtarian (Alpen-Adria Universität Klagenfurt, Austria); Mahdi Dolati (Sharif University of Technology, Iran); Daniele Lorenzi (University of Klagenfurt, Austria); Mojtaba Mozhganfar (University of Tehran, Iran); Sergey Gorinsky (IMDEA Networks Institute, Spain); Ahmad Khonsari (University of Tehran, Iran); Christian Timmerer (Alpen-Adria-Universität Klagenfurt & Bitmovin, Austria); Hermann Hellwagner (Klagenfurt University, Austria)

Abstract: Live streaming routinely relies on the Hypertext Transfer Protocol (HTTP) and content delivery networks (CDNs) to scalably disseminate videos to diverse clients. A bitrate ladder refers to a list of bitrate-resolution pairs, or representations, used for encoding a video. A promising trend in HTTP-based video streaming is to adapt not only the client’s representation choice but also the bitrate ladder during the streaming session. This paper examines the problem of multi-live streaming, where an encoding service performs coordinated CDN-aware bitrate ladder adaptation for multiple live streams delivered to heterogeneous clients in different zones via CDN edge servers. We design ALPHAS, a practical and scalable system for multi-live streaming that accounts for the CDNs’ bandwidth constraints and the encoder’s computational capabilities and also supports stream prioritization. ALPHAS, aware of both video content and streaming context, seamlessly integrates with the end-to-end streaming pipeline and operates in real time transparently to clients and encoding algorithms. We develop a cloud-based ALPHAS implementation and evaluate it through extensive real-world and trace-driven experiments against four prominent baselines that encode each stream independently. The evaluation shows that ALPHAS outperforms the baselines, improving quality of experience, end-to-end latency, and per-stream processing by up to 23%, 21%, and 49%, respectively.
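The abstract does not disclose the exact formulation, but the flavor of coordinated, CDN-aware ladder selection can be sketched as a small integer program: pick which representations to encode for each live stream so that a priority-weighted utility is maximized under an encoder compute budget and per-zone CDN bandwidth budgets. All numbers, the candidate ladder, and the cost/utility models below are made-up placeholders, not the ALPHAS model.

```python
# Simplified, illustrative integer program for CDN-aware multi-live ladder
# selection. The real ALPHAS formulation is richer; every constant here is
# a placeholder.
import pulp

streams = ["s1", "s2"]                               # live channels
ladder = [(432, 1500), (720, 3500), (1080, 6000)]    # (height, kbps) candidates
priority = {"s1": 1.0, "s2": 2.0}                    # stream prioritization weights
utility = {r: r[0] / 1080 for r in ladder}           # toy quality proxy per rung
enc_cost = {r: r[1] / 1000 for r in ladder}          # toy compute cost per rung
ENC_CAPACITY = 12.0                                  # encoder compute budget
ZONE_BW = {"eu": 8000, "us": 5000}                   # per-zone CDN egress budget (kbps)

prob = pulp.LpProblem("alphas_sketch", pulp.LpMaximize)
x = {(s, r): pulp.LpVariable(f"x_{s}_{r[0]}", cat="Binary")
     for s in streams for r in ladder}

# Objective: priority-weighted utility of the representations that get encoded.
prob += pulp.lpSum(priority[s] * utility[r] * x[s, r]
                   for s in streams for r in ladder)

# Each stream keeps at least one representation so every client can play it.
for s in streams:
    prob += pulp.lpSum(x[s, r] for r in ladder) >= 1

# Encoder compute capacity shared across all live streams.
prob += pulp.lpSum(enc_cost[r] * x[s, r]
                   for s in streams for r in ladder) <= ENC_CAPACITY

# Per-zone bandwidth: pessimistically assume every encoded rung may be pulled
# into every zone.
for zone, bw in ZONE_BW.items():
    prob += pulp.lpSum(r[1] * x[s, r] for s in streams for r in ladder) <= bw

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [(s, r) for (s, r), var in x.items() if var.value() == 1]
print("selected ladder rungs:", chosen)
```

Re-solving such a problem periodically, with utilities updated from the current content and zone demand, is one way an encoding service could adapt the ladders in real time without involving the clients.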


Generative AI for Realistic Voice Dubbing Across Languages

ACM 4th Mile-High Video Conference (MHV’25)

18–20 February 2025 | Denver, CO, USA

[PDF]

Emanuele Artioli (Alpen-Adria Universität Klagenfurt, Austria), Daniele Lorenzi (Alpen-Adria Universität Klagenfurt, Austria), Farzad Tashtarian (Alpen-Adria Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria Universität Klagenfurt, Austria)

Abstract: The demand for accessible, multilingual video content has grown significantly with the global rise of streaming platforms, social media, and online learning. Traditional solutions for making content accessible across languages include subtitles, even automatically generated ones such as those YouTube offers, and synthesized voiceovers, as provided, for example, by the Yandex Browser. Subtitles are cost-effective and reflect the original voice of the speaker, which is often essential for authenticity. However, they require viewers to divide their attention between reading text and watching visuals, which can diminish engagement, especially for highly visual content. Synthesized voiceovers, on the other hand, eliminate this need by providing an auditory translation. Still, they typically lack the emotional depth and unique vocal characteristics of the original speaker, which can affect the viewing experience and disconnect audiences from the intended pathos of the content. A straightforward solution would involve having the original actor “perform” in every language, thereby preserving the traits that define their character or narration style. However, recording actors in multiple languages is impractical, time-intensive, and expensive, especially for widely distributed media.

By leveraging generative AI, we aim to develop a client-side tool, to be incorporated into a dedicated video streaming player, that combines the accessibility of multilingual dubbing with the authenticity of the original speaker’s performance, effectively allowing a single actor to deliver their voice in any language. To the best of our knowledge, no current streaming system can capture the speaker’s unique voice or emotional tone.

Index Terms— HTTP adaptive streaming, Generative AI, Audio.
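One plausible way to realize such a client-side tool is a three-stage pipeline: transcribe the original audio, translate the transcript, and synthesize the translation with a voice-cloning TTS conditioned on a short reference clip of the original speaker. The concrete components below (Whisper for ASR, an Opus-MT translation model, Coqui XTTS for synthesis) are assumptions made for this sketch, not the components of the ATHENA tool.

```python
# Illustrative client-side dubbing pipeline: ASR -> MT -> voice-cloned TTS.
# Whisper, Opus-MT, and Coqui XTTS are assumed stand-ins, not the paper's design.
import whisper
from transformers import pipeline
from TTS.api import TTS


def dub_segment(audio_path: str, speaker_ref: str, target_lang: str = "de") -> str:
    # 1) Transcribe the original audio track.
    asr = whisper.load_model("small")
    source_text = asr.transcribe(audio_path)["text"]

    # 2) Translate the transcript; assumes a matching Opus-MT model exists
    #    for the English -> target language pair.
    translator = pipeline("translation", model=f"Helsinki-NLP/opus-mt-en-{target_lang}")
    target_text = translator(source_text)[0]["translation_text"]

    # 3) Synthesize the translation conditioned on a short reference clip of the
    #    original speaker, so the dubbed track keeps their vocal characteristics.
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
    out_path = audio_path.replace(".wav", f".{target_lang}.wav")
    tts.tts_to_file(text=target_text, speaker_wav=speaker_ref,
                    language=target_lang, file_path=out_path)
    return out_path
```

Running the stages per media segment, rather than per full video, is what would let such a pipeline sit inside an adaptive streaming player.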


Adaptive Quality and Energy Enhancement in Video Streaming with RecABR

ACM 4th Mile-High Video Conference (MHV’25)

18–20 February 2025 | Denver, CO, USA

[PDF]

Daniele Lorenzi (Alpen-Adria Universität Klagenfurt, Austria), Farzad Tashtarian (Alpen-Adria Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria Universität Klagenfurt, Austria)

Abstract: HTTP Adaptive Streaming (HAS) dominates video delivery but faces sustainability issues due to its energy demands. Current adaptive bitrate (ABR) algorithms prioritize quality, neglecting the energy costs of higher bitrates. Super-resolution (SR) can enhance quality but increases energy use, especially for GPU-equipped devices in competitive networks. RecABR addresses these challenges by clustering clients based on device attributes (e.g., GPU, resolution) and optimizing parameters via linear programming. This reduces computational overhead and ensures energy-efficient, quality-aware recommendations. Using metrics like VMAF and compressed SR models, RecABR minimizes storage and processing costs, making it scalable for CDN edge deployment.

Index Terms— QoE, HAS, Super-resolution, Energy.
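A minimal sketch of the clustering step described above: clients are grouped by device attributes so that one quality/energy recommendation can later be optimized per cluster (e.g., with a small linear program) instead of per individual client. The attribute set, scaling, and cluster count below are illustrative assumptions.

```python
# Illustrative client clustering on device attributes (not the RecABR code).
# Each client is described by a GPU flag, display height, and a rough power
# budget; the attributes and cluster count are assumptions for the sketch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: has_gpu (0/1), display_height (px), power budget (W)
clients = np.array([
    [1, 2160, 15.0],   # GPU-equipped 4K TV stick
    [1, 1080, 8.0],    # laptop with GPU
    [0, 1080, 6.0],    # laptop, CPU only
    [0, 720,  3.5],    # mid-range phone
    [0, 720,  3.0],    # mid-range phone
    [1, 1440, 10.0],   # gaming handheld
])

features = StandardScaler().fit_transform(clients)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# One recommendation (bitrate plus whether to run the compressed SR model) is
# then optimized per cluster rather than per individual client, which keeps the
# optimization cheap enough for CDN edge deployment.
for c in range(3):
    print(f"cluster {c}: clients {np.where(labels == c)[0].tolist()}")
```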


Patent Approval for “Scalable Per-Title Encoding”

Scalable Per-Title Encoding 

US Patent

[PDF]

Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria) and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

 

Abstract: A scalable per-title encoding technique may include detecting scene cuts in an input video received by an encoding network or system, generating segments of the input video, performing per-title encoding of a segment of the input video, training a deep neural network (DNN) for each representation of the segment, thereby generating a trained DNN, compressing the trained DNN, thereby generating a compressed trained DNN, and generating an enhanced bitrate ladder including metadata comprising the compressed trained DNN. In some embodiments, the method also may include generating a base layer bitrate ladder for CPU devices, and providing the enhanced bitrate ladder for GPU-available devices.
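Read as a workflow, the claim can be sketched structurally as follows. The four stage functions are passed in as callables because the patent only names them (scene-cut detection, per-title encoding, per-representation DNN training, DNN compression); they are hypothetical placeholders, not an API from the patent or from ATHENA code.

```python
# Structural sketch of the claimed scalable per-title encoding workflow.
# The stage callables are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Representation:
    width: int
    height: int
    bitrate_kbps: int
    compressed_dnn: bytes = b""  # enhancement network shipped as ladder metadata


@dataclass
class SegmentLadder:
    segment_id: int
    base_layer: List[Representation] = field(default_factory=list)      # CPU-only devices
    enhanced_layer: List[Representation] = field(default_factory=list)  # GPU-capable devices


def build_scalable_per_title_ladder(
    video_path: str,
    split_at_scene_cuts: Callable[[str], List[str]],
    per_title_encode: Callable[[str], List[Representation]],
    train_enhancement_dnn: Callable[[str, Representation], object],
    compress_dnn: Callable[[object], bytes],
) -> List[SegmentLadder]:
    ladder = []
    for seg_id, segment in enumerate(split_at_scene_cuts(video_path)):
        reps = per_title_encode(segment)                   # per-title representations
        entry = SegmentLadder(segment_id=seg_id, base_layer=reps)
        for rep in reps:
            dnn = train_enhancement_dnn(segment, rep)      # one DNN per representation
            entry.enhanced_layer.append(Representation(
                rep.width, rep.height, rep.bitrate_kbps,
                compressed_dnn=compress_dnn(dnn)))         # e.g., pruning/quantization
        ladder.append(entry)
    return ladder
```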
