ACMMM’19: Towards 6DoF HTTP Adaptive Streaming Through Point Cloud Compression

Towards 6DoF HTTP Adaptive Streaming Through Point Cloud Compression

https://www.acmmm.org/2019/

[PDF] (coming soon; slides to be provided later)

Jeroen van der Hooft, Tim Wauters, Filip De Turck (Ghent University – imec), Christian Timmerer, and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: The increasing popularity of head-mounted devices and 360° video cameras allows content providers to offer virtual reality video streaming over the Internet, using a relevant representation of the immersive content combined with traditional streaming techniques. While this approach allows the user to freely move her head, her location is fixed by the camera’s position within the scene. Recently, increased interest has been shown in free movement within immersive scenes, referred to as six degrees of freedom (6DoF). One way to realize this is by capturing objects through a number of cameras positioned at different angles and creating a point cloud, which consists of the location and RGB color of a significant number of points in three-dimensional space. Although the concept of point clouds has been around for over two decades, it recently received increased attention from ISO/IEC MPEG, which issued a call for proposals for point cloud compression. As a result, dynamic point cloud objects can now be compressed to bit rates in the order of 3 to 55 Mb/s, allowing feasible delivery over today’s mobile networks. In this paper, we propose PCC-DASH, a standards-compliant means for HTTP adaptive streaming of scenes comprising multiple, dynamic point cloud objects. We present a number of rate adaptation heuristics which use information on the user’s position and focus, the available bandwidth, and the client’s buffer status to decide upon the most appropriate quality representation of each object. Through an extensive evaluation, we discuss the advantages and drawbacks of each solution. We argue that the optimal solution depends on the considered scene and camera path, which opens interesting possibilities for future work.
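
To make the idea of such a rate adaptation heuristic concrete, here is a minimal, hypothetical Python sketch of a viewport- and distance-aware bitrate selection step for multiple point cloud objects within a bandwidth budget. The class names, the greedy upgrade strategy, and the ranking criteria are illustrative assumptions only and do not reproduce the paper's actual PCC-DASH heuristics.

```python
# Hypothetical sketch of a PCC-DASH-style adaptation step (names and
# ranking are illustrative assumptions, not the paper's exact heuristics).
from dataclasses import dataclass

@dataclass
class PointCloudObject:
    name: str
    distance: float      # distance from the user's position (m)
    in_focus: bool       # whether the object lies in the user's viewport
    bitrates: list       # available representation bitrates (Mb/s), ascending

def select_representations(objects, bandwidth_budget, buffer_level, min_buffer=2.0):
    """Greedily give higher-quality representations to objects the user
    focuses on and/or is close to, within the available bandwidth budget."""
    # Start every object at its lowest representation.
    choice = {o.name: 0 for o in objects}
    spent = sum(o.bitrates[0] for o in objects)
    if buffer_level < min_buffer:          # play it safe when the buffer is low
        return choice
    # Rank objects: focused objects first, then nearer objects.
    ranked = sorted(objects, key=lambda o: (not o.in_focus, o.distance))
    upgraded = True
    while upgraded:
        upgraded = False
        for o in ranked:
            idx = choice[o.name]
            if idx + 1 < len(o.bitrates):
                extra = o.bitrates[idx + 1] - o.bitrates[idx]
                if spent + extra <= bandwidth_budget:
                    choice[o.name] = idx + 1
                    spent += extra
                    upgraded = True
    return choice
```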

Keywords: HTTP adaptive streaming, MPEG-DASH, immersive video, point clouds, MPEG V-PCC, rate adaptation

ACM Reference Format:
Jeroen van der Hooft, Tim Wauters, Filip De Turck, Christian Timmerer, and Hermann Hellwagner. 2019. Towards 6DoF HTTP Adaptive Streaming Through Point Cloud Compression. In ACM Multimedia ’19, October 21–25, 2019, Nice, France. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/1122445


ACM NOSSDAV’19: Bandwidth Prediction in Low-Latency Chunked Streaming

Bandwidth Prediction in Low-Latency Chunked Streaming
http://nossdav.org/2019/
[PDF]

Abdelhak Bentaleb (National University of Singapore), Christian Timmerer (Alpen-Adria Universität & Bitmovin Inc.), Ali C. Begen (Ozyegin University), and Roger Zimmermann (National University of Singapore)

Abstract: HTTP adaptive streaming (HAS) with chunked transfer encoding can be used to reduce latency without sacrificing the coding efficiency. While this allows a media segment to be generated and delivered at the same time, it also causes grossly inaccurate bandwidth measurements, leading to incorrect bitrate selections. To overcome this effect, we design a novel Adaptive bitrate scheme for Chunked Transfer Encoding (ACTE) that leverages the unique nature of chunk downloads. It uses a sliding window to accurately measure the available bandwidth and an online linear adaptive filter to predict the available bandwidth into the future. Results show that ACTE achieves 96% measurement accuracy, which translates to a 64% reduction in stalls and a 27% increase in video quality.
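
As an illustration of the two ingredients described in the abstract, the following Python sketch combines a sliding window of per-chunk bandwidth samples with a recursive least squares (RLS) filter that predicts the next sample. The window size, forgetting factor, and initialization below are assumptions for illustration and are not ACTE's actual parameters.

```python
# Minimal sketch of sliding-window bandwidth measurement plus an RLS
# predictor; parameter values are illustrative assumptions.
import numpy as np

class RLSBandwidthPredictor:
    def __init__(self, taps=5, forgetting=0.95, delta=100.0):
        self.taps = taps                  # how many past samples feed the filter
        self.lam = forgetting             # forgetting factor (0 < lambda <= 1)
        self.w = np.zeros(taps)           # filter coefficients
        self.P = np.eye(taps) * delta     # inverse correlation matrix
        self.window = []                  # sliding window of bandwidth samples (Mb/s)

    def add_sample(self, bandwidth):
        """Add a per-chunk bandwidth measurement and update the filter online."""
        if len(self.window) >= self.taps:
            x = np.array(self.window[-self.taps:])   # regressor: last N samples
            # Standard RLS update with the new measurement as the desired output.
            k = (self.P @ x) / (self.lam + x @ self.P @ x)
            err = bandwidth - self.w @ x
            self.w = self.w + k * err
            self.P = (self.P - np.outer(k, x @ self.P)) / self.lam
        self.window.append(bandwidth)

    def predict(self):
        """Predict the bandwidth of the next chunk from the current window."""
        if len(self.window) < self.taps:
            return self.window[-1] if self.window else 0.0
        x = np.array(self.window[-self.taps:])
        return float(self.w @ x)
```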

Keywords: HAS; ABR; DASH; CMAF; low-latency; HTTP chunked transfer encoding; bandwidth measurement and prediction; RLS.

Acknowledgment: This research has been supported in part by the Singapore Ministry of Education Academic Research Fund Tier 1 under MOE’s official grant number T1 251RES1820 and the Austrian Research Promotion Agency (FFG) under the Next Generation Video Streaming project “PROMETHEUS”.

Abdelhak Bentaleb, Christian Timmerer, Ali C. Begen, and Roger Zimmermann. 2019. Bandwidth prediction in low-latency chunked streaming. In Proceedings of the 29th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV ’19). ACM, New York, NY, USA, 7-13. DOI: https://doi.org/10.1145/3304112.3325611


ACMMM’19 Tutorial: A Journey towards Fully Immersive Media Access

ACM Multimedia 2019
October 21-25, 2019, Nice, France
https://www.acmmm.org/2019/

Note: exact date/time slot of this tutorial will be provided at a later stage

Lecturers
Christian Timmerer, Alpen-Adria-Universität Klagenfurt & Bitmovin, Inc.
Ali C. Begen, Ozyegin University and Networked Media

Abstract
Universal media access, as proposed in the late 1990s and early 2000s, is now a reality. Thus, we can generate, distribute, share, and consume any media content, anywhere, anytime, and with/on any device. A major technical breakthrough was adaptive streaming over HTTP, resulting in the standardization of MPEG-DASH, which is now successfully deployed in HTML5 environments thanks to the corresponding Media Source Extensions (MSE). The next big thing in adaptive media streaming is virtual reality applications and, specifically, omnidirectional (360°) media streaming, which is currently built on top of the existing adaptive streaming ecosystems. This tutorial provides a detailed overview of adaptive streaming of both traditional and omnidirectional media within HTML5 environments. The tutorial focuses on the basic principles and paradigms for adaptive streaming of both traditional and omnidirectional media, as well as on already deployed content generation, distribution, and consumption workflows. Additionally, the tutorial provides insights into standards and emerging technologies in the adaptive streaming space. Finally, the tutorial covers the latest approaches for immersive media streaming enabling 6DoF DASH through Point Cloud Compression (PCC) and concludes with open research issues and industry efforts in this domain.

Keywords: Omnidirectional media, HTTP adaptive streaming, over-the-top video, 360-degree video, virtual reality, immersive media access.

Learning Objectives
This tutorial consists of two main parts. In the first part, we provide a detailed overview of the HTML5 standard and show how it can be used for adaptive streaming deployments. In particular, we focus on HTML5 video and media extensions, multi-bitrate encoding, encapsulation and encryption workflows, and survey well-established streaming solutions. Furthermore, we present experiences from existing deployments and the relevant de jure and de facto standards (DASH, HLS, CMAF) in this space. In the second part, we focus on omnidirectional (360-degree) media from creation to consumption as well as first thoughts on dynamic adaptive point cloud streaming. We survey means for the acquisition, projection, coding and packaging of omnidirectional media as well as delivery, decoding and rendering methods. Emerging standards and industry practices are covered as well (OMAF, VR-IF). Both parts present some of the current research trends, open issues that need further exploration and investigation, and various efforts that are underway in the streaming industry. Upon attending this tutorial, the participants will have an overview and understanding of the following topics:

  • Principles of HTTP adaptive streaming for the Web/HTML5
  • Principles of omnidirectional (360-degree) media delivery
  • Content generation, distribution and consumption workflows for traditional and omnidirectional media
  • Standards and emerging technologies in the adaptive streaming space
  • Current and future research on traditional and omnidirectional media delivery, specifically enabling 6DoF adaptive streaming through point cloud compression

ACM Multimedia attracts attendees who are quite knowledgeable in specific areas. However, not all are experts across multiple disciplines (such as the subject matter here), and only a few are familiar with current developments in the field and the related standards. Thus, we believe the proposed tutorial will be of as much interest to this year’s attendees as it was in the past.

Table of Contents
Part I: The HTML5 Standard and Adaptive Streaming

  • HTML5 video and media extensions
  • Survey of well-established streaming solutions
  • Multi-bitrate encoding, encapsulation and encryption workflows
  • The MPEG-DASH standard, Apple HLS and the developing CMAF standard
  • Common issues in scaling and improving quality, multi-screen/hybrid delivery

Part II: Omnidirectional (360-degree) Media

  • Acquisition, projection, coding and packaging of 360-degree video
  • Delivery, decoding and rendering methods
  • The developing MPEG-OMAF and MPEG-I standards
  • Ongoing industry efforts, specifically towards 6DoF adaptive streaming

Speakers
Christian Timmerer received his M.Sc. (Dipl.-Ing.) in January 2003 and his Ph.D. (Dr.techn.) in June 2006 (for research on the adaptation of scalable multimedia content in streaming and constraint environments) both from the Alpen-Adria-Universität (AAU) Klagenfurt. He joined the AAU in 1999 (as a system administrator) and is currently an Associate Professor at the Institute of Information Technology (ITEC) within the Multimedia Communication Group. His research interests include immersive multimedia communication, streaming, adaptation, Quality of Experience, and Sensory Experience. He was the general chair of WIAMIS 2008, QoMEX 2013, MMSys 2016, and PV 2018 and has participated in several EC-funded projects, notably DANAE, ENTHRONE, P2P-Next, ALICANTE, SocialSensor, COST IC1003 QUALINET, and ICoSOLE. He also participated in ISO/MPEG work for several years, notably in the area of MPEG-21, MPEG-M, MPEG-V, and MPEG-DASH where he also served as standard editor. In 2013, he co-founded Bitmovin (http://www.bitmovin.com/) to provide professional services around MPEG-DASH where he holds the position of the Chief Innovation Officer (CIO) – Head of Research and Standardization. He is a senior member of IEEE and member of ACM, specifically IEEE Computer Society, IEEE Communications Society, and ACM SIGMM. Dr. Timmerer was a guest editor of three special issues for the IEEE Journal on Selected Areas in Communications (JSAC) and currently serves as associate editor for IEEE Transactions on Multimedia. Further information available at http://blog.timmerer.com.

Ali C. Begen is the co-founder of Networked Media, a technology company that offers consulting services to industrial, legal and academic institutions in the IP video space. He has been a research and development engineer since 2001, and has broad experience in mathematical modeling, performance analysis, optimization, standards development, intellectual property and innovation. Between 2007 and 2015, he was with the Video and Content Platforms Research and Advanced Development Group at Cisco, where he designed and developed algorithms, protocols, products and solutions in the service provider and enterprise video domains. Currently, he is also affiliated with Ozyegin University, where he is teaching and advising students in the computer science department. Ali has a PhD in electrical and computer engineering from Georgia Tech. To date, he received a number of academic and industry awards, and was granted 30+ US patents. He held editorial positions in leading magazines and journals, and served in the organizing committee of several international conferences and workshops in the field. He is a senior member of both the IEEE and ACM. In 2016, he was elected distinguished lecturer by the IEEE Communications Society, and in 2018, he was re-elected for another two-year term. More details are at http://ali.begen.net.


QoMEX’19: Tile-based Streaming of 8K Omnidirectional Video: Subjective and Objective QoE Evaluation

Tile-based Streaming of 8K Omnidirectional Video: Subjective and Objective QoE Evaluation

https://www.qomex2019.de/

[PDF]

Raimund Schatz (AIT Austrian Institute of Technology), Anatoliy Zabrovskiy (Alpen-Adria Universität Klagenfurt), Christian Timmerer (Alpen-Adria Universität Klagenfurt, Bitmovin Inc.)

Abstract: Omnidirectional video (ODV) streaming applications are becoming increasingly popular. They enable a highly immersive experience as the user can freely choose her/his field of view within the 360-degree environment. Current deployments are fairly simple but viewport-agnostic, which inevitably results in high storage/bandwidth requirements and low Quality of Experience (QoE). A promising solution is referred to as tile-based streaming, which allows for higher quality within the user’s viewport while the quality outside the user’s viewport can be lower. However, empirical QoE assessment studies in this domain are still rare. Thus, this paper investigates the impact of different tile-based streaming approaches and configurations on the QoE of ODV. We present the results of a lab-based subjective evaluation in which participants evaluated 8K omnidirectional video QoE as influenced by different (i) tile-based streaming approaches (full vs. partial delivery), (ii) content types (static vs. moving camera), and (iii) tile encoding quality levels determined by different quantization parameters. Our experimental setup is characterized by high reproducibility since relevant media delivery aspects (including the user’s head movements and dynamic tile quality adaptation) are already rendered into the respective processed video sequences. Additionally, we performed a complementary objective evaluation of the different test sequences focusing on bandwidth efficiency and objective quality metrics. The results, presented and discussed in detail in this paper, confirm that tile-based streaming of ODV improves visual quality while reducing bandwidth requirements.
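
For intuition, a hypothetical Python sketch of viewport-dependent tile quality selection is shown below. The tile grid, QP values, and the simple overlap test (which ignores yaw wrap-around at 360°) are assumptions for illustration; the paper itself evaluates pre-rendered tile configurations rather than a live client-side algorithm like this.

```python
# Illustrative sketch of viewport-dependent tile quality selection for
# tile-based ODV streaming; all values are assumptions, not the paper's setup.
def assign_tile_qps(tiles, viewport, qp_inside=22, qp_outside=37, partial_delivery=False):
    """Return a QP per tile (or None, meaning 'do not fetch' under partial delivery).

    tiles    : list of (yaw_min, yaw_max, pitch_min, pitch_max) in degrees
    viewport : (yaw_min, yaw_max, pitch_min, pitch_max) of the user's view
    """
    def overlaps(a, b):
        # Rectangle overlap test; yaw wrap-around at 360 degrees is ignored here.
        return not (a[1] <= b[0] or b[1] <= a[0] or a[3] <= b[2] or b[3] <= a[2])

    decisions = []
    for tile in tiles:
        if overlaps(tile, viewport):
            decisions.append(qp_inside)    # high quality inside the viewport
        elif partial_delivery:
            decisions.append(None)         # partial delivery: skip unseen tiles
        else:
            decisions.append(qp_outside)   # full delivery: low quality elsewhere
    return decisions
```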

Index Terms: Omnidirectional Video, Tile-based Streaming, Subjective Testing, Objective Metrics, Quality of Experience

Acknowledgment: This work was supported in part by the Austrian Research Promotion Agency (FFG) under the Next Generation Video Streaming project “PROMETHEUS”.


Mobile data traffic report and forecast 2017-2022

In my most recent blog post, I wrote about the “mobile data traffic report and forecast 2017-2022”, based on white papers and reports from Sandvine and Cisco.

The short summary is as follows: Sandvine reports that, in total, around 42% of mobile data traffic is video (compared to almost 58% in its global report from October 2018), while Cisco reports that mobile video traffic accounted for 59% of total mobile data traffic in 2017 and predicts that nearly 79% of the world’s mobile data traffic will be video by 2022, with mobile video traffic increasing 9-fold between 2017 and 2022.
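
As a quick back-of-the-envelope check, the quoted 9-fold increase over the five years from 2017 to 2022 corresponds to roughly 55% compound annual growth:

```python
# Implied compound annual growth rate (CAGR) of a 9x increase over 5 years.
growth_factor = 9
years = 2022 - 2017
cagr = growth_factor ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")   # -> Implied CAGR: 55%
```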

Further details are available here, and ATHENA is expected to address current and upcoming challenges in this area. Interested in how? Please see the about page and consider applying for a job!


ATHENA “Adaptive Streaming over HTTP and Emerging Networked Multimedia Services”

This is the Web site of Christian Doppler (CD) Pilot Laboratory ATHENA
“Adaptive Streaming over HTTP and Emerging Networked Multimedia Services”.

The project will start in 2019; for further details, please see the about page as well as the open jobs.
