Internship 2021 at ATHENA

At ATHENA, we offer an internship*) for 2021 for Master's students. Please submit your application by 21 December 2020 with the following documents (in German or English):

  • CV
  • Record of study/transcript (“Studienerfolgsnachweis”)

*) A 3-month period in 2021 (exact time slot to be discussed), with the possibility to spend up to 1 month at the industrial partner; 20h per week “Universitäts-KV, Verwendungsgruppe C1, studentische Hilfskraft”.

Please send your application by email to nina.stiller@aau.at.


About ATHENA: The Christian Doppler laboratory ATHENA (AdapTive Streaming over HTTP and Emerging Networked MultimediA Services) is jointly proposed by the Institute of Information Technology (ITEC; http://itec.aau.at) at Alpen-Adria-Universität Klagenfurt (AAU) and Bitmovin GmbH (https://bitmovin.com) to address current and future research and deployment challenges of HAS and emerging streaming methods. AAU (ITEC) has been working on adaptive video streaming for more than a decade, has a proven record of successful research projects and publications in the field, and has been actively contributing to MPEG standardization for many years, including MPEG-DASH; Bitmovin is a video streaming software company founded by ITEC researchers in 2013 and has developed highly successful, global R&D and sales activities and a world-wide customer base since then.

The aim of ATHENA is to research and develop novel paradigms, approaches, (prototype) tools, and evaluation results for the phases

  1. multimedia content provisioning,
  2. content delivery, and
  3. content consumption in the media delivery chain as well as for
  4. end-to-end aspects, with a focus on, but not being limited to, HTTP Adaptive Streaming (HAS).

The new approaches and insights are to enable Bitmovin to build innovative applications and services to account for the steadily increasing and changing multimedia traffic on the Internet.

Posted in News | Comments Off on Internship 2021 at ATHENA

20 Years of Streaming in 20 Minutes

Further details and registration available here: https://mile-high.video/

Posted in News | Comments Off on 20 Years of Streaming in 20 Minutes

Christian Timmerer to give a Keynote at WebMedia 2020

HTTP Adaptive Streaming – Where Is It Heading?

WebMedia 2020, November 30 to December 4, 2020, Online

[PDF][Slides]

Abstract: Video traffic on the Internet is constantly growing; networked multimedia applications consume the predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research and of industrial networked multimedia services certainly was the HTTP Adaptive Streaming (HAS) technique. This resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) which, together with HTTP Live Streaming (HLS) by Apple Inc., is widely used for multimedia delivery in today’s networks.
Existing challenges in multimedia systems research deal with the trade-off between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two aspects if not both.

This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry.

In this talk, we will present selected novel approaches and research results of the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put the work into the context of international multimedia systems research.

Biography: Christian Timmerer received his M.Sc. (Dipl.-Ing.) in January 2003 and his Ph.D. (Dr.techn.) in June 2006 (for research on the adaptation of scalable multimedia content in streaming and constrained environments), both from Alpen-Adria-Universität (AAU) Klagenfurt. He is currently an Associate Professor at the Institute of Information Technology (ITEC) and the director of the Christian Doppler (CD) Laboratory ATHENA (https://athena.itec.aau.at/). His research interests include immersive multimedia communication, streaming, adaptation, and Quality of Experience. He has co-authored seven patents and more than 200 articles in this area. He was the general chair of WIAMIS 2008, QoMEX 2013, MMSys 2016, and PV 2018 and has participated in several EC-funded projects, notably DANAE, ENTHRONE, P2P-Next, ALICANTE, SocialSensor, COST IC1003 QUALINET, and ICoSOLE. He also participated in ISO/MPEG work for several years, notably in the areas of MPEG-21, MPEG-M, MPEG-V, and MPEG-DASH, where he also served as standard editor. In 2013 he co-founded Bitmovin (http://www.bitmovin.com/) to provide professional services around MPEG-DASH, where he holds the position of Chief Innovation Officer (CIO) – Head of Research and Standardization.

Posted in News | Comments Off on Christian Timmerer to give a Keynote at WebMedia 2020

Cluster Computing paper: FastTTPS: Fast Approach for Video Transcoding Time Prediction and Scheduling for HTTP Adaptive Streaming Videos

FastTTPS: Fast Approach for Video Transcoding Time Prediction and Scheduling for HTTP Adaptive Streaming Videos

Cluster Computing (Springer Journal) [PDF]

Prateek Agrawal (University of Klagenfurt, Austria), Anatoliy Zabrovskiy (University of Klagenfurt, Austria), Adithyan Ilagovan (Bitmovin Inc., CA, USA), Christian Timmerer (University of Klagenfurt, Austria), Radu Prodan (University of Klagenfurt, Austria)

Abstract:

HTTP adaptive streaming of video content has become an integral part of the Internet and dominates other streaming protocols and solutions. The duration of creating video content for adaptive streaming ranges from seconds up to several hours or days, due to the plethora of video transcoding parameters and video source types. Although the computing resources of different transcoding platforms and services constantly increase, accurate and fast transcoding time prediction and scheduling are still crucial. We propose in this paper a novel method called Fast video Transcoding Time Prediction and Scheduling (FastTTPS) of x264-encoded videos based on three phases: (i) transcoding data engineering, (ii) transcoding time prediction, and (iii) transcoding scheduling. The first phase is responsible for video sequence selection, segmentation, and feature data collection required for predicting the transcoding time. The second phase develops an artificial neural network (ANN) model for segment transcoding time prediction based on transcoding parameters and derived video complexity features. The third phase compares a number of parallel schedulers to map the predicted transcoding segments onto the underlying high-performance computing resources. Experimental results show that our predictive ANN model minimizes the transcoding mean absolute error (MAE) and mean square error (MSE) by up to 1.7 and 26.8, respectively. In terms of scheduling, our method reduces the transcoding time by up to 38% using a Max-Min algorithm compared to the actual transcoding time without prediction information.
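The Max-Min idea behind the third phase, assigning the segment with the largest predicted transcoding time to the currently least-loaded worker, can be sketched as follows. The per-segment predictions and worker count below are made-up illustrative values, not data from the paper:

```python
import heapq

def max_min_schedule(predicted_times, n_workers):
    """Max-Min list scheduling: repeatedly assign the segment with the
    largest predicted transcoding time ("Max") to the worker that is
    currently least loaded ("Min")."""
    heap = [(0.0, w) for w in range(n_workers)]  # (finish time, worker id)
    heapq.heapify(heap)
    loads = [0.0] * n_workers
    assignment = {}
    for seg, t in sorted(predicted_times.items(), key=lambda kv: -kv[1]):
        load, w = heapq.heappop(heap)      # least-loaded worker
        assignment[seg] = w
        loads[w] = load + t
        heapq.heappush(heap, (loads[w], w))
    return assignment, max(loads)          # makespan = last worker to finish

# Hypothetical per-segment transcoding-time predictions (seconds)
times = {"seg0": 9.0, "seg1": 4.0, "seg2": 7.0, "seg3": 3.0, "seg4": 5.0}
assignment, makespan = max_min_schedule(times, n_workers=2)
```

Scheduling the long segments first keeps the workers balanced at the end of the schedule, which is why accurate time predictions pay off.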

Keywords: Transcoding time prediction, Video transcoding, Scheduling, Artificial neural networks, MPEG-DASH, Adaptive streaming

Acknowledgment: This work received support from the Austrian Research Promotion Agency (FFG) under grant agreement 877503571 (APOLLO project) and the European Union Horizon 2020 research and innovation programme under grant agreement 801091 (ASPIDE project).

Posted in APOLLO, News | Comments Off on Cluster Computing paper: FastTTPS: Fast Approach for Video Transcoding Time Prediction and Scheduling for HTTP Adaptive Streaming Videos

MMM’21: Towards Optimal Multirate Encoding for HTTP Adaptive Streaming

Towards Optimal Multirate Encoding for HTTP Adaptive Streaming

The International MultiMedia Modeling Conference (MMM)

25-27 January 2021, Prague, Czech Republic

https://mmm2021.cz

[PDF][Slides][Video]

Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Ekrem Çetinkaya (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK)

Abstract: HTTP Adaptive Streaming (HAS) enables high quality streaming of video contents. In HAS, videos are divided into short intervals called segments, and each segment is encoded at various qualities/bitrates to adapt to the available bandwidth. Multiple encodings of the same content impose a high cost for video content providers. To reduce the time-complexity of encoding multiple representations, state-of-the-art methods typically encode the highest quality representation first and reuse the information gathered during its encoding to accelerate the encoding of the remaining representations. As encoding the highest quality representation requires the highest time-complexity compared to the lower quality representations, it would be a bottleneck in parallel encoding scenarios and the overall time-complexity will be limited to the time-complexity of the highest quality representation. In this paper, to address this problem, we consider all representations from the highest to the lowest quality representation as a potential, single reference to accelerate the encoding of the other, dependent representations. We formulate a set of encoding modes and assess their performance in terms of BD-Rate and time-complexity, using both VMAF and PSNR as objective metrics. Experimental results show that encoding a middle quality representation as a reference can significantly reduce the maximum encoding complexity and hence is an efficient way of encoding multiple representations in parallel. Based on this fact, a fast multirate encoding method is proposed which utilizes the depth and prediction mode of a middle quality representation to accelerate the encoding of the dependent representations.
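The bottleneck argument above can be illustrated with a toy computation. All numbers below (stand-alone encoding times and the acceleration factor each candidate reference provides to the dependent representations) are illustrative assumptions, not measurements from the paper; the only claim carried over is that a lower-quality reference gives weaker acceleration:

```python
def parallel_makespan(times, ref, speedups):
    """Overall parallel encoding time when `ref` is encoded stand-alone
    and every other representation reuses its decisions. Dependent
    encodings can only start once the reference is done, so the worst
    case is the reference time plus the slowest dependent encoding."""
    dependents = [t * speedups[ref] for r, t in times.items() if r != ref]
    return times[ref] + max(dependents)

# Hypothetical stand-alone encoding times (s), highest to lowest quality
times = {"1080p": 100, "720p": 60, "480p": 35, "360p": 20}
# Hypothetical remaining fraction of work for dependent encodings:
# a lower-quality reference accelerates the others less.
speedups = {"1080p": 0.40, "720p": 0.45, "480p": 0.55, "360p": 0.80}

best = min(times, key=lambda r: parallel_makespan(times, r, speedups))
# Under these assumptions, a middle representation minimizes the makespan.
```

The highest-quality reference is itself the slowest to encode, while the lowest-quality reference barely helps the others, so the minimum lands in the middle, matching the paper's finding.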

Keywords: HEVC, Video Encoding, Multirate Encoding, DASH

Posted in News | Comments Off on MMM’21: Towards Optimal Multirate Encoding for HTTP Adaptive Streaming

ISM’20: Dynamic Segment Repackaging at the Edge for HTTP Adaptive Streaming

Dynamic Segment Repackaging at the Edge for HTTP Adaptive Streaming

IEEE International Symposium on Multimedia (ISM)

2-4 December 2020, Naples, Italy

https://www.ieee-ism.org/

[PDF][Slides][Video]

Jesús Aguilar Armijo (Alpen-Adria-Universität Klagenfurt), Babak Taraghi (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: Adaptive video streaming systems typically support different media delivery formats, e.g., MPEG-DASH and HLS, replicating the same content multiple times into the network. Such a diversified system results in inefficient use of storage, caching, and bandwidth resources. The Common Media Application Format (CMAF) emerges to simplify HTTP Adaptive Streaming (HAS), providing a single encoding and packaging format of segmented media content and offering opportunities for bandwidth savings, more cache hits, and less storage needed. However, CMAF is not yet supported by most devices. To solve this issue, we present a solution where we maintain the main advantages of CMAF while supporting heterogeneous devices using different media delivery formats. For that purpose, we propose to dynamically convert the content from CMAF to the desired media delivery format at an edge node. We study the bandwidth savings with our proposed approach using an analytical model and simulation, resulting in bandwidth savings of up to 20% with different media delivery format distributions. We analyze the runtime impact of the required operations on the segmented content performed in two scenarios: the classic one, with four different media delivery formats, and the proposed scenario, using CMAF-only delivery through the network. We compare both scenarios with different edge compute power assumptions. Finally, we perform experiments in a real video streaming testbed delivering MPEG-DASH using CMAF content to serve a DASH and an HLS client, performing the media conversion for the latter one.
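The intuition behind the savings can be captured with a toy upper-bound model; the request distribution and caching behavior below are illustrative assumptions, not the analytical model from the paper. It ignores cache hit rates and segment popularity, which is why it overstates the savings compared to the up-to-20% reported above:

```python
def origin_fetches(format_share, cmaf_only):
    """Expected origin-to-edge transfers per segment.
    Classic delivery: each requested format is cached separately, so the
    edge fetches one copy per distinct format with nonzero demand.
    CMAF-only delivery: a single CMAF copy is fetched and repackaged
    into the requested format at the edge."""
    if cmaf_only:
        return 1
    return sum(1 for share in format_share.values() if share > 0)

# Hypothetical request distribution over four delivery formats
share = {"DASH": 0.5, "HLS": 0.3, "Smooth": 0.15, "HDS": 0.05}
classic = origin_fetches(share, cmaf_only=False)  # one copy per format
cmaf = origin_fetches(share, cmaf_only=True)      # one shared copy
savings = 1 - cmaf / classic                      # upper bound on savings
```

In this worst case (every format requested for every segment), the single CMAF copy replaces four format-specific copies on the origin-to-edge link.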

Keywords: CMAF, Edge Computing, HTTP Adaptive Streaming (HAS)

Posted in News | Comments Off on ISM’20: Dynamic Segment Repackaging at the Edge for HTTP Adaptive Streaming

PCS’21 Special Session: Video encoding for large scale HAS deployments

Video encoding for large scale HAS deployments

Picture Coding Symposium (PCS)

29 June to 2 July 2021, Bristol, UK

https://pcs2021.org

Session organizers: Christian Timmerer (Bitmovin, Austria), Mohammad Ghanbari (University of Essex, UK), and Alex Giladi (Comcast, USA).

Abstract: Video accounts for the vast majority of today’s internet traffic and video coding is vital for efficient distribution towards the end-user. Software- and/or cloud-based video coding is becoming more and more attractive, specifically with the plethora of video codecs available right now (e.g., AVC, HEVC, VVC, VP9, AV1), as also supported by the latest Bitmovin Video Developer Report 2020. Thus, improvements in video coding enabling efficient adaptive video streaming are a requirement for current and future video services. HTTP Adaptive Streaming (HAS) is now mainstream due to its simplicity, reliability, and standard support (e.g., MPEG-DASH). For HAS, the video is usually encoded in multiple versions (i.e., representations) of different resolutions, bitrates, codecs, etc., and each representation is divided into chunks (i.e., segments) of equal length (e.g., 2-10 sec) to enable dynamic, adaptive switching during streaming based on the user’s context conditions (e.g., network conditions, device characteristics, user preferences). In this context, most scientific papers in the literature target various improvements which are evaluated based on open, standard test sequences. We argue that optimizing video encoding for large scale HAS deployments is the next step in order to improve the Quality of Experience (QoE), while optimizing costs.
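The representation/segment structure described above can be sketched as a minimal data model. The ladder values and the 4-second segment length are common illustrative choices, not a recommendation from the session:

```python
from dataclasses import dataclass

@dataclass
class Representation:
    height: int        # spatial resolution (pixels)
    bitrate_kbps: int  # target bitrate
    codec: str

# A hypothetical bitrate ladder for one title
ladder = [
    Representation(1080, 6000, "hevc"),
    Representation(720, 3000, "hevc"),
    Representation(480, 1200, "avc"),
    Representation(360, 600, "avc"),
]

def segment_count(duration_s: int, segment_s: int = 4) -> int:
    """Number of equal-length segments per representation. Every
    representation is cut at the same boundaries so the player can
    switch representations at any segment border."""
    return -(-duration_s // segment_s)  # ceiling division

# A 10-minute title at 4-second segments: 150 segments per representation,
# so a 4-rung ladder means 600 encoded segments overall.
total_segments = len(ladder) * segment_count(600)
```

The multiplicative blow-up (representations × segments, and again per codec) is exactly why encoding cost matters at large-scale HAS deployments.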

Posted in News | Comments Off on PCS’21 Special Session: Video encoding for large scale HAS deployments

IEEE Communication Magazine: From Capturing to Rendering: Volumetric Media Delivery With Six Degrees of Freedom

From Capturing to Rendering: Volumetric Media Delivery With Six Degrees of Freedom

Teaser: “Help me, Obi-Wan Kenobi. You’re my only hope,” said the hologram of Princess Leia in Star Wars: Episode IV – A New Hope (1977). This was the first time in cinematic history that the concept of holographic-type communication was illustrated. Almost five decades later, technological advancements are quickly moving this type of communication from science fiction to reality.

IEEE Communication Magazine

[PDF]

Jeroen van der Hooft (Ghent University), Maria Torres Vega (Ghent University), Tim Wauters (Ghent University), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), Ali C. Begen (Ozyegin University, Networked Media), Filip De Turck (Ghent University), and Raimund Schatz (AIT Austrian Institute of Technology)

Abstract: Technological improvements are rapidly advancing holographic-type content distribution. Significant research efforts have been made to meet the low-latency and high-bandwidth requirements set forward by interactive applications such as remote surgery and virtual reality. Recent research made six degrees of freedom (6DoF) for immersive media possible, where users may both move their heads and change their position within a scene. In this article, we present the status and challenges of 6DoF applications based on volumetric media, focusing on the key aspects required to deliver such services. Furthermore, we present results from a subjective study to highlight relevant directions for future research.

Posted in News | Comments Off on IEEE Communication Magazine: From Capturing to Rendering: Volumetric Media Delivery With Six Degrees of Freedom

MTAP paper: Automated Bank Cheque Verification Using Image Processing and Deep Learning Methods

Automated Bank Cheque Verification Using Image Processing and Deep Learning Methods

Multimedia tools and applications (Springer Journal)

[PDF]

Prateek Agrawal (University of Klagenfurt, Austria), Deepak Chaudhary (Lovely Professional University, India), Vishu Madaan (Lovely professional University, India), Anatoliy Zabrovskiy (University of Klagenfurt, Austria), Radu Prodan (University of Klagenfurt, Austria), Dragi Kimovski (University of Klagenfurt, Austria), Christian Timmerer (University of Klagenfurt, Austria)

Abstract: Automated bank cheque verification using image processing is an attempt to complement the present cheque truncation system, as well as to provide an alternate methodology for the processing of bank cheques with minimal human intervention. When it comes to the clearance of bank cheques and monetary transactions, the process should not only be reliable and robust but also save time, which is one of the major factors for countries with large populations. In order to perform the task of cheque verification, we developed a tool which acquires the cheque leaflet's key components essential for cheque clearance using image processing and deep learning methods. These components include the bank branch code, cheque number, legal as well as courtesy amount, account number, and signature patterns. Our innovation aims at benefiting the banking system by enabling competent cheque-based monetary transaction systems that require automated system intervention. For this research, we used the Institute for Development and Research in Banking Technology (IDRBT) cheque dataset and deep learning-based convolutional neural networks (CNNs), which gave us an accuracy of 99.14% for handwritten numeric character recognition, resulting in improved accuracy and precise assessment of the handwritten components of bank cheques. For machine-printed script, we used MATLAB's in-built OCR method, achieving a satisfactory accuracy of 97.7%. For signature verification, we used the Scale Invariant Feature Transform (SIFT) for feature extraction and a Support Vector Machine (SVM) as classifier, achieving an accuracy of 98.10%.

Keywords: Cheque Truncation System, Image Segmentation, Bank Cheque Clearance, Image Feature Extraction, Convolutional Neural Network, Support Vector Machine, Scale Invariant Feature Transform

Acknowledgment: This work has been partly supported by the European Union Horizon 2020 Research and Innovation Programme under the ARTICONF Project with grant agreement number 644179 and in part by the Austrian Research Promotion Agency (FFG) under the APOLLO project.

Posted in APOLLO, News | Comments Off on MTAP paper: Automated Bank Cheque Verification Using Image Processing and Deep Learning Methods

VCIP’20: FaME-ML: Fast Multirate Encoding for HTTP Adaptive Streaming Using Machine Learning

FaME-ML: Fast Multirate Encoding for HTTP Adaptive Streaming Using Machine Learning

IEEE International Conference on Visual Communications and Image Processing (VCIP)

1-4 December 2020, Macau

http://www.vcip2020.org/

[PDF][Slides][Video]

Ekrem Çetinkaya (Alpen-Adria-Universität Klagenfurt), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK)

Abstract: HTTP Adaptive Streaming (HAS) is the most common approach for delivering video content over the Internet. The requirement to encode the same content at different quality levels (i.e., representations) in HAS is a challenging problem for content providers. Fast multirate encoding approaches try to accelerate this process by reusing information from previously encoded representations. In this paper, we propose to use convolutional neural networks (CNNs) to speed up the encoding of multiple representations with a specific focus on parallel encoding. In parallel encoding, the overall time-complexity is limited to the maximum time-complexity of one of the representations that are encoded in parallel. Therefore, instead of reducing the time-complexity for all representations, the highest time-complexities are reduced. Experimental results show that FaME-ML achieves significant time-complexity savings in parallel encoding scenarios (41% on average) with a slight increase in bitrate and quality degradation compared to the HEVC reference software.

Keywords: Video Coding, Convolutional Neural Networks, HEVC, HTTP Adaptive Streaming (HAS)

Posted in News | Comments Off on VCIP’20: FaME-ML: Fast Multirate Encoding for HTTP Adaptive Streaming Using Machine Learning