MPEG awarded a Technology & Engineering Emmy® Award for DASH

MPEG, specifically ISO/IEC JTC 1/SC 29/WG 3 (MPEG Systems), has just been awarded a Technology & Engineering Emmy® Award for its ground-breaking MPEG-DASH standard. Dynamic Adaptive Streaming over HTTP (DASH) is the first international de jure standard that enables efficient streaming of video over the Internet, and it has changed the entire video streaming industry, including — but not limited to — on-demand, live, and low-latency streaming, as well as 5G and the next generation of hybrid broadcast-broadband services. The first edition was published in April 2012, and MPEG is currently working towards publishing the 5th edition, demonstrating an active and lively ecosystem that continues to be developed and improved to address the requirements and challenges of modern media transport applications and services.

This award belongs to the 90+ researchers and engineers from around 60 companies around the world who participated in the development of the MPEG-DASH standard over more than 12 years.

From left to right: Kyung-mo Park, Cyril Concolato, Thomas Stockhammer, Yuriy Reznik, Alex Giladi, Mike Dolan, Iraj Sodagar, Ali Begen, Christian Timmerer, Gary Sullivan, Per Fröjdh, Young-Kwon Lim, Ye-Kui Wang. (Photo © Yuriy Reznik)

Christian Timmerer, director of the Christian Doppler Laboratory ATHENA, chaired the evaluation of responses to the call for proposals and has since served as MPEG-DASH Ad-hoc Group (AHG) / Break-out Group (BoG) co-chair as well as co-editor of Part 2 of the standard. For a more detailed history of the MPEG-DASH standard, the interested reader is referred to Christian Timmerer’s blog post “HTTP Streaming of MPEG Media” (capturing the development of the first edition) and Nicolas Weill’s blog post “MPEG-DASH: The ABR Esperanto” (DASH timeline).

Posted in News | Comments Off on MPEG awarded a Technology & Engineering Emmy® Award for DASH

Multi-Codec Ultra High Definition 8K MPEG-DASH Dataset

The 13th ACM Multimedia Systems Conference (ACM MMSys 2022) Open Dataset and Software (ODS) track | June 14–17, 2022 |  Athlone, Ireland [PDF][Slides][Video]

Babak Taraghi (Alpen-Adria-Universität Klagenfurt), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt).


Abstract: Many applications produce multimedia traffic over the Internet. Video streaming is among them, with a rapidly growing demand for more bandwidth to deliver higher resolutions such as Ultra High Definition (UHD) 8K content. The HTTP Adaptive Streaming (HAS) technique defines baselines for audio-visual content streaming to balance the delivered media quality and minimize streaming session defects. At the same time, video codec development and standardization support this goal by introducing efficient algorithms and technologies. Versatile Video Coding (VVC) is one of the latest advancements in this area, but it is not yet fully optimized and supported on all platforms; such optimization and broad platform support require years of research and development. This paper offers a dataset that facilitates the research and development of the aforementioned technologies. Our open-source dataset comprises Dynamic Adaptive Streaming over HTTP (MPEG-DASH) multimedia test assets encoded with Advanced Video Coding (AVC), High Efficiency Video Coding (HEVC), AOMedia Video 1 (AV1), and VVC at resolutions of up to 7680×4320 (8K). The dataset has a maximum media duration of 322 seconds, and we offer our MPEG-DASH packaged content with two segment lengths, 4 and 8 seconds.

The dataset is available here.
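As a quick illustration of how such MPEG-DASH packaged content is consumed, the sketch below parses a minimal, hypothetical MPD manifest with Python's standard library and lists its representations. The codec strings, bitrates, and representation IDs are illustrative assumptions, not values from the dataset's actual manifests.

```python
import xml.etree.ElementTree as ET

# Minimal, hypothetical MPD snippet in the spirit of the dataset's
# packaging (all attribute values here are illustrative).
MPD = """<?xml version="1.0"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" mediaPresentationDuration="PT322S">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <Representation id="av1-8k" codecs="av01.0.16M.08"
                      width="7680" height="4320" bandwidth="20000000"/>
      <Representation id="hevc-4k" codecs="hvc1.1.6.L153"
                      width="3840" height="2160" bandwidth="8000000"/>
    </AdaptationSet>
  </Period>
</MPD>"""

NS = {"dash": "urn:mpeg:dash:schema:mpd:2011"}

def list_representations(mpd_xml: str):
    """Return (id, codecs, width, height, bandwidth) per Representation."""
    root = ET.fromstring(mpd_xml)
    return [
        (rep.get("id"), rep.get("codecs"),
         int(rep.get("width")), int(rep.get("height")),
         int(rep.get("bandwidth")))
        for rep in root.findall(".//dash:Representation", NS)
    ]

for rep in list_representations(MPD):
    print(rep)
```

A real player would use these attributes to pick the representation matching the available bandwidth before fetching segments.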

Posted in News | Comments Off on Multi-Codec Ultra High Definition 8K MPEG-DASH Dataset

VCD: Video Complexity Dataset

The 13th ACM Multimedia Systems Conference (ACM MMSys 2022) Open Dataset and Software (ODS) track

June 14–17, 2022 |  Athlone, Ireland

Conference Website
[PDF][Slides][Video]

Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Samira Afzal (Alpen-Adria-Universität Klagenfurt), Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt).

Abstract:

This paper provides an overview of the open Video Complexity Dataset (VCD), which comprises 500 Ultra High Definition (UHD) resolution test video sequences. These sequences are provided at 24 frames per second (fps) and stored online in a losslessly encoded 8-bit 4:2:0 format. All sequences are characterized by their spatial and temporal complexity, rate-distortion complexity, and encoding complexity with the x264 AVC/H.264 and x265 HEVC/H.265 video encoders. The dataset is tailor-made for cutting-edge multimedia applications such as video streaming, two-pass encoding, per-title encoding, and scene-cut detection. Evaluations show that the dataset covers a diverse range of video complexities, so it is recommended for training and testing video coding applications. All data have been made publicly available as part of the dataset and can be used for various applications.
The details of VCD can be accessed online at https://vcd.itec.aau.at.
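Per-title encoding, one of the applications the dataset targets, builds a bitrate ladder per sequence from its rate-distortion behavior: only rate-quality points that are not dominated by a cheaper point of equal or better quality survive. A minimal sketch, assuming hypothetical (bitrate, quality) measurements rather than values from the paper:

```python
def pareto_front(points):
    """Keep (bitrate_kbps, quality) points where quality strictly
    improves as bitrate grows; dominated points are dropped."""
    front, best_quality = [], float("-inf")
    for bitrate, quality in sorted(points):
        if quality > best_quality:
            front.append((bitrate, quality))
            best_quality = quality
    return front

# A low-complexity sequence saturates in quality early ...
low_complexity = [(500, 88.0), (1500, 95.0), (3000, 95.2), (6000, 95.1)]
# ... while a high-complexity one keeps benefiting from more bitrate.
high_complexity = [(500, 55.0), (1500, 70.0), (3000, 82.0), (6000, 91.0)]

print(pareto_front(low_complexity))   # top rung dropped: no quality gain
print(pareto_front(high_complexity))  # all rungs kept
```

Different complexities thus yield different ladders, which is why a complexity-diverse dataset matters for training and evaluating such methods.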

Posted in News | Comments Off on VCD: Video Complexity Dataset

VCA: Video Complexity Analyzer

The 13th ACM Multimedia Systems Conference (ACM MMSys 2022) Open Dataset and Software (ODS) track

June 14–17, 2022 |  Athlone, Ireland

Conference Website
[PDF][Slides][Video]

Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Christian Feldmann (Bitmovin, Klagenfurt), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt)
Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt).

Abstract:

VCA in content-adaptive encoding applications

For online analysis of video content complexity in live streaming applications, selecting low-complexity features is critical to ensure low-latency video streaming without disruptions. To this end, two features are determined for each video (segment): the average texture energy and the average gradient of the texture energy. A DCT-based energy function is introduced to determine the block-wise texture of each frame, and the spatial and temporal features of the video (segment) are derived from this energy function. The Video Complexity Analyzer (VCA) project aims to provide an efficient spatial and temporal complexity analysis of each video (segment), which can be used in various applications to find optimal encoding decisions. VCA leverages x86 Single Instruction Multiple Data (SIMD) optimizations for Intel CPUs and multi-threading to achieve increased performance. VCA is an open-source library published under the GNU GPLv3 license.

Github: https://github.com/cd-athena/VCA
Online documentation: https://cd-athena.github.io/VCA/
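A toy sketch of the idea behind a DCT-based texture energy (not VCA's exact coefficient weighting or its SIMD-optimized implementation): a naive 2-D DCT per block, with texture energy taken as the sum of absolute AC coefficients, and a temporal descriptor as the change in energy between co-located blocks of consecutive frames.

```python
import math

def dct2(block):
    """Naive, unnormalized 2-D DCT-II of a square block
    (for illustration only; real analyzers use fast transforms)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * y + 1) * v / (2 * n)))
            out[u][v] = s
    return out

def texture_energy(block):
    """Sum of absolute AC coefficients: a flat block scores ~0,
    a high-frequency block scores high."""
    coeffs = dct2(block)
    return sum(abs(c) for row in coeffs for c in row) - abs(coeffs[0][0])

flat  = [[128] * 4 for _ in range(4)]          # uniform luma block
edges = [[0, 255, 0, 255] for _ in range(4)]   # strong vertical stripes

print(texture_energy(flat))   # ~0 for the uniform block
print(texture_energy(edges))  # large for the textured block

# Toy temporal descriptor: energy change between two "frames".
h = abs(texture_energy(edges) - texture_energy(flat))
```

Averaging such per-block energies over a frame, and their per-frame differences over a segment, yields spatial and temporal descriptors in the spirit of the features described above.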

 

Posted in News | Comments Off on VCA: Video Complexity Analyzer

Towards Low Latency Live Streaming: Challenges in a Real-World Deployment

The 13th ACM Multimedia Systems Conference (ACM MMSys 2022)

June 14–17, 2022 |  Athlone, Ireland

Conference Website
[PDF][Slides][Video]

Reza Shokri Kalan (Digiturk Company, Istanbul), Reza Farahani (Alpen-Adria-Universität Klagenfurt), Emre Karsli (Digiturk Company, Istanbul), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: Over-the-Top (OTT) service providers need faster, cheaper, and Digital Rights Management (DRM)-capable video streaming solutions. Recently, HTTP Adaptive Streaming (HAS) has become the dominant video delivery technology over the Internet. In HAS, videos are split into short intervals called segments, and each segment is encoded at various qualities/bitrates (i.e., representations) to adapt to the available bandwidth. Utilizing different HAS-based technologies with various segment formats imposes extra cost, complexity, and latency on the video delivery system. Enabling an integrated format for transmitting and storing segments at Content Delivery Network (CDN) servers can alleviate these issues. To this end, the MPEG Common Media Application Format (CMAF) has been presented as a standard format for cost-effective and low-latency streaming. However, CMAF has not yet been adopted by video streaming providers, and it is incompatible with most legacy end-user players. This paper reveals some useful steps towards low-latency live video streaming that can be implemented for non-DRM-sensitive content before jumping to CMAF technology. We first design and instantiate our testbed in a real OTT provider environment, including a heterogeneous network and clients, and then investigate the impact of changing the format, segment duration, and Digital Video Recording (DVR) window length on a real live event. The results illustrate that replacing the transport stream (.ts) format with fragmented MP4 (.fMP4) and shortening the segment duration significantly reduces live latency.
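A back-of-envelope model of why shorter segments help: a live player typically sits a few segments behind the live edge, so buffered media dominates end-to-end latency. The sketch below assumes a three-segment buffer plus fixed encoding/packaging and network delays; all parameter values are illustrative assumptions, not measurements from the paper.

```python
def live_latency(segment_s: float, buffered_segments: int = 3,
                 encode_package_s: float = 1.0, network_s: float = 0.5) -> float:
    """Rough live-edge latency in seconds: buffered media plus
    fixed pipeline delays (all defaults are illustrative)."""
    return buffered_segments * segment_s + encode_package_s + network_s

print(live_latency(8.0))  # e.g., a legacy setup with 8 s segments
print(live_latency(2.0))  # shorter segments shrink the buffered media
```

Under these assumptions, cutting the segment duration from 8 s to 2 s removes 18 s of buffered media from the glass-to-glass latency, which matches the direction of the paper's finding.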


Keywords: HAS, DASH, HLS, CMAF, Live Streaming, Low Latency

 

Posted in News | Comments Off on Towards Low Latency Live Streaming: Challenges in a Real-World Deployment

IXR’22: Interactive eXtended Reality

IXR’22: Interactive eXtended Reality 2022

colocated with ACM Multimedia 2022

October, 2022, Lisbon, Portugal

Workshop Chairs:

  • Irene Viola, CWI, Netherlands
  • Hadi Amirpour, Klagenfurt University, Austria
  • Asim Hameed, NTNU, Norway
  • Maria Torres Vega, Ghent University, Belgium

Topics of interest include, but are not limited to:

  • Novel low-latency encoding techniques for interactive XR applications;
  • Novel networking systems and protocols to enable interactive immersive applications, including optimizations ranging from the hardware (e.g., millimeter-wave networks or optical wireless), physical, and MAC layers up to the network, transport, and application layers (such as over-the-top protocols);
  • Significant advances and optimizations in 3D modeling pipelines for AR/VR visualization, accessible and inclusive GUIs, and interactive 3D models;
  • Compression and delivery strategies for immersive media content, such as omnidirectional video, light fields, point clouds, and dynamic and time-varying meshes;
  • Quality of Experience management for interactive immersive media applications;
  • Novel rendering techniques to enhance the interactivity of XR applications;
  • Applications of interactive XR in different areas of society, such as health (e.g., virtual reality exposure therapy), industry (Industry 4.0), and XR e-learning.

Dates:

  • Submission deadline: 20 June 2022, 23:59 AoE
  • Notifications of acceptance: 29 July 2022
  • Camera ready submission: 21 August 2022
  • Workshop: 10 or 14 October 2022
Posted in News | Comments Off on IXR’22: Interactive eXtended Reality

Artificial Intelligence for Live Video Streaming (ALIS 2022)

ALIS’22: Artificial Intelligence for Live Video Streaming

colocated with ACM Multimedia 2022

October 2022, Lisbon, Portugal

Download ALIS’22 Poster / CfP

Posted in News | Comments Off on Artificial Intelligence for Live Video Streaming (ALIS 2022)