MPEC2: Multilayer and Pipeline Video Encoding on the Computing Continuum


Conference Website: IEEE NCA 2022

Samira Afzal (Alpen-Adria-Universität Klagenfurt), Zahra Najafabadi Samani (Alpen-Adria-Universität Klagenfurt), Narges Mehran (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Radu Prodan (Alpen-Adria-Universität Klagenfurt)

Abstract:

Video streaming dominates today's Internet traffic. Media service providers stream video content to their viewers, while users worldwide create and distribute videos through mobile and video applications, further increasing the traffic share. We propose a multilayer and pipeline encoding on the computing continuum (MPEC2) method that addresses the key technical challenge of the high cost and computational complexity of video encoding. MPEC2 splits video encoding into several tasks scheduled on appropriately selected Cloud and Fog computing instance types that satisfy the media service provider's and users' priorities in terms of time and cost.
In the first phase, MPEC2 uses a multilayer resource partitioning method to explore the instance types for encoding a video segment. In the second phase, it distributes the independent segment encoding tasks in a pipeline model on the underlying instances.
We evaluate MPEC2 on a federated computing continuum encompassing Amazon Web Services (AWS) EC2 Cloud and Exoscale Fog instances distributed across seven geographical locations. Experimental results show that MPEC2 achieves 24% faster completion time and 60% lower cost for video encoding compared to related resource-allocation methods. Compared with baseline methods, MPEC2 yields 40%-50% lower completion time and 5%-60% reduced total cost.
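The second phase, pipelining independent segment-encoding tasks over the selected instances, can be sketched with a toy greedy scheduler. Everything below (instance names, speed factors, per-second prices) is invented for illustration and is not the actual MPEC2 model:

```python
# Illustrative sketch (not the actual MPEC2 optimization): distribute
# independent segment-encoding tasks over a set of already-selected
# instances and report completion time and total cost.

def pipeline_schedule(segment_times, instances):
    """Greedily assign each segment to the instance that finishes it earliest.

    segment_times: list of encoding durations (s) on a reference machine
    instances: dict name -> (speed_factor, price_per_second), hypothetical
    Returns (completion_time, total_cost, assignment).
    """
    finish = {name: 0.0 for name in instances}  # busy-until time per instance
    cost = 0.0
    assignment = []
    for seg in segment_times:
        # pick the instance with the earliest finish time for this segment
        best = min(instances,
                   key=lambda n: finish[n] + seg / instances[n][0])
        duration = seg / instances[best][0]
        finish[best] += duration
        cost += duration * instances[best][1]
        assignment.append(best)
    return max(finish.values()), cost, assignment

instances = {"cloud.large": (2.0, 0.004),   # fast but expensive (hypothetical)
             "fog.small":   (1.0, 0.001)}   # slow but cheap (hypothetical)
makespan, cost, plan = pipeline_schedule([8, 8, 8, 8], instances)
```

In MPEC2 the instance set itself comes out of the multilayer partitioning phase; here it is simply given as input.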


Posted in APOLLO, GAIA, News | Comments Off on MPEC2: Multilayer and Pipeline Video Encoding on the Computing Continuum

Elsevier Signal Processing: Reversible Data Hiding for Color Images Based on Pixel Value Order of Overall Process Channel Correlation

Reversible Data Hiding for Color Images Based on Pixel Value Order of Overall Process Channel Correlation

Elsevier Signal Processing

[PDF]

Journal Website

Ningxiong Mao (Southwest Jiaotong University), Hongjie He (Southwest Jiaotong University), Fan Chen (Southwest Jiaotong University), Lingfeng Qu (Southwest Jiaotong University), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract:

Reversible Data Hiding (RDH) for color images is becoming increasingly important as its range of applications steadily grows. This paper proposes an efficient color image RDH scheme based on pixel value ordering (PVO), in which channel correlation is fully utilized to improve embedding performance. In the proposed method, channel correlation is exploited throughout the entire embedding process, including the prediction stage, block selection, and capacity allocation. In the prediction stage, since the pixel values in co-located blocks of different channels are monotonically consistent, large pixel values are collected preferentially by pre-sorting the intra-block pixels, which effectively improves the embedding capacity of PVO-based RDH. In the block selection stage, the accuracy of the block complexity measure is improved by exploiting the texture similarity between channels; smoother blocks are then used preferentially to reduce invalid shifts. To achieve low complexity and high accuracy in capacity allocation, the proportion of each channel's expanded prediction error to the total expanded prediction error is calculated during the allocation process. Experimental results show that the proposed scheme achieves significant superiority in fidelity over a series of state-of-the-art schemes. For example, the PSNR of the Lena image reaches 62.43 dB at an embedding capacity of 20,000 bits, a 0.16 dB gain over the best result in the literature.
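For readers unfamiliar with pixel value ordering, the classic single-block PVO mechanism the paper builds on can be sketched as follows. This is the textbook PVO idea (embedding in the block maximum only), not the proposed channel-correlation scheme:

```python
# Minimal single-block PVO sketch: the largest pixel is predicted from the
# second largest; a prediction error of 1 hides one bit, larger errors are
# shifted by 1 so the mapping stays reversible. Toy values, one block only.

def pvo_embed_max(block, bit):
    """Embed `bit` into the maximum pixel of `block` (list of ints)."""
    order = sorted(range(len(block)), key=lambda i: block[i])
    i_max, i_2nd = order[-1], order[-2]
    e = block[i_max] - block[i_2nd]
    out = list(block)
    if e == 1:                 # embeddable error: carry the payload bit
        out[i_max] += bit
    elif e > 1:                # non-embeddable: shift to stay reversible
        out[i_max] += 1
    return out

def pvo_extract_max(block):
    """Recover the embedded bit (or None) and restore the original block."""
    order = sorted(range(len(block)), key=lambda i: block[i])
    i_max, i_2nd = order[-1], order[-2]
    e = block[i_max] - block[i_2nd]
    out = list(block)
    if e == 1:
        return 0, out          # error 1: a 0 bit was embedded
    if e == 2:
        out[i_max] -= 1
        return 1, out          # error 2: a 1 bit was embedded
    if e > 2:
        out[i_max] -= 1        # undo the shift, no payload
        return None, out
    return None, out           # e == 0: block was never modified

marked = pvo_embed_max([55, 58, 57, 56], 1)   # max=58, 2nd=57, e=1 -> embed
bit, restored = pvo_extract_max(marked)
```

The paper's contribution is to drive the sorting, block selection, and capacity allocation of this kind of scheme with inter-channel correlation rather than treating each color channel independently.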

Keywords: Reversible data hiding, color image, pixel value ordering, channel correlation

Posted in News | Comments Off on Elsevier Signal Processing: Reversible Data Hiding for Color Images Based on Pixel Value Order of Overall Process Channel Correlation

IEEE TIP: Advanced Scalability for Light Field Image Coding

Advanced Scalability for Light Field Image Coding

IEEE Transactions on Image Processing (TIP)

Journal Website

[PDF]

Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), Christine Guillemot (INRIA, France), Mohammad Ghanbari (University of Essex, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Light field imaging, which captures both spatial and angular information, improves user immersion by enabling post-capture actions such as refocusing and changing the view perspective. However, light fields represent very large volumes of highly redundant data, which coding methods aim to remove. State-of-the-art coding methods usually focus on improving compression efficiency and overlook other important features of light field compression, such as scalability. In this paper, we propose a novel light field image compression method that enables (i) viewport scalability, (ii) quality scalability, (iii) spatial scalability, (iv) random access, and (v) uniform quality distribution among viewports, while keeping compression efficiency high. To this end, the light field at each spatial resolution is divided into sequential viewport layers, and the viewports in each layer are encoded using the previously encoded viewports. In each viewport layer, the available viewports are used to synthesize intermediate viewports using a video interpolation deep learning network. The synthesized views are used as virtual reference images to enhance the quality of the intermediate views. An image super-resolution method is applied to improve the quality of the lower spatial resolution layer, and the super-resolved images are also used as virtual reference images to improve the quality of the higher spatial resolution layer.
The proposed structure also improves the flexibility of light field streaming, provides random access to the viewports, and increases error resiliency. The experimental results demonstrate that the proposed method achieves high compression efficiency and can adapt to the display type, transmission channel, network condition, processing power, and user needs.
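The layered viewport organization can be illustrated with a toy assignment of a viewport grid to sequential layers: viewports on coarser sub-grids are encoded first and serve as references for the rest. The strides below are an assumption for illustration, not the paper's actual layer construction:

```python
# Toy viewport-to-layer assignment for a light field grid (illustrative only):
# layer 0 covers a sparse sub-grid, later layers progressively fill it in,
# which is what enables viewport scalability and random access.

def viewport_layer(u, v, strides=(4, 2, 1)):
    """Assign viewport (u, v) to the earliest layer whose sub-grid it lies on."""
    for layer, s in enumerate(strides):
        if u % s == 0 and v % s == 0:
            return layer
    return len(strides)  # unreachable while stride 1 is present

# Layer map for a 5x5 viewport grid: layer-0 views decode first, then 1, ...
grid = [[viewport_layer(u, v) for v in range(5)] for u in range(5)]
```

A decoder needing only one viewport can stop after the layers that viewport depends on, which is the random-access property the abstract describes.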

Keywords—Light field, compression, scalability, random access, deep learning.

Posted in News | Comments Off on IEEE TIP: Advanced Scalability for Light Field Image Coding

GMSys 2023: First International ACM Green Multimedia Systems Workshop, 7 – 10 June 2023, Vancouver, Canada

The threat of climate change requires a drastic reduction of global greenhouse gas (GHG) emissions across several societal spheres, and this also applies to reducing and rethinking the energy consumption of digital technologies. Video streaming technology is responsible for more than half of digital technology's global impact [ref]. The volume of video data, the processing of video content, and streaming are all growing rapidly, especially now that digital and remote work have become mainstream, driving up energy consumption and the associated GHG emissions.

The International Workshop on Green Multimedia Systems 2023 (GMSys 2023) aims to bring together experts and researchers to present and discuss recent developments and challenges for energy reduction in multimedia systems. This workshop focuses on innovations, concepts, and energy-efficient solutions from video generation to processing, delivery, and further usage.

Find further info at https://athena.itec.aau.at/events-gmsys23/


Posted in News | Comments Off on GMSys 2023: First International ACM Green Multimedia Systems Workshop, 7 – 10 June 2023, Vancouver, Canada

MCOM-Live: A Multi-Codec Optimization Model at the Edge for Live Streaming

29th International Conference on MultiMedia Modeling
9 – 12 January 2023 | Bergen, Norway

Daniele Lorenzi (Alpen-Adria-Universität Klagenfurt), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract:

HTTP Adaptive Streaming (HAS) is the predominant technique for delivering video content across the Internet, and demand for its applications keeps increasing. As videos evolve to deliver more immersive experiences, e.g., through higher resolutions and framerates, highly efficient video compression schemes are required to ease the burden on the delivery process. While AVC/H.264 remains the most widely adopted codec, the usage of newer-generation codecs (HEVC/H.265, VP9, AV1, VVC/H.266, etc.) is increasing. Compared to AVC/H.264, these codecs can either achieve the same quality at a reduced bitrate or improve the quality at the same bitrate. In this paper, we propose a Mixed-Binary Linear Programming (MBLP) model called Multi-Codec Optimization Model at the edge for Live streaming (MCOM-Live) to jointly optimize (i) the overall streaming costs and (ii) the visual quality of the content played out by the end-users by efficiently enabling multi-codec content delivery. Given a video content encoded with multiple codecs according to a fixed bitrate ladder, the model chooses among three available policies, i.e., fetch, transcode, or skip, the best option to handle the representations. We compare the proposed model with traditional approaches used in the industry. The experimental results show that our proposed method can reduce the additional latency by up to 23% and the streaming costs by up to 78%, while also improving the visual quality of the delivered segments by up to 0.5 dB in terms of PSNR.
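As a rough illustration of the fetch/transcode/skip decision, the sketch below scores each policy for a single representation with a weighted cost-quality objective. The real MCOM-Live model is a mixed-binary linear program solved jointly over all representations; the scoring function and all numbers here are invented:

```python
# Toy per-representation policy choice inspired by MCOM-Live's three actions.
# fetch: pull the representation from origin (bandwidth cost, full quality)
# transcode: derive it at the edge (compute cost, slight quality loss)
# skip: do not serve it (no cost, no quality contribution)

def choose_policy(rep, alpha=0.5):
    """Pick the action minimizing  alpha * cost - (1 - alpha) * quality."""
    actions = {
        "fetch":     (rep["fetch_cost"], rep["quality"]),
        "transcode": (rep["transcode_cost"],
                      rep["quality"] - rep["transcode_loss"]),
        "skip":      (0.0, 0.0),
    }
    score = lambda a: alpha * actions[a][0] - (1 - alpha) * actions[a][1]
    return min(actions, key=score)

# Hypothetical numbers: transcoding is much cheaper than fetching here,
# at a small quality penalty.
rep = {"fetch_cost": 4.0, "transcode_cost": 1.0,
       "quality": 40.0, "transcode_loss": 0.5}
policy = choose_policy(rep)
```

Pushing `alpha` toward 1 makes the objective purely cost-driven, in which case skipping wins; the joint MBLP balances this trade-off across the whole bitrate ladder at once.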

MCOM architecture overview.

Posted in News | Comments Off on MCOM-Live: A Multi-Codec Optimization Model at the Edge for Live Streaming

OTEC: An Optimized Transcoding Task Scheduler for Cloud and Fog Environments


ACM CoNEXT 2022 – ViSNext 22 Workshop

Samira Afzal (Alpen-Adria-Universität Klagenfurt), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt), Hamid Hadian (Alpen-Adria-Universität Klagenfurt), Alireza Erfanian (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Radu Prodan (Alpen-Adria-Universität Klagenfurt)

Abstract:

Encoding and transcoding videos into multiple codecs and representations is a significant challenge that can take seconds or even days on high-performance computers, depending on many technical characteristics such as video complexity and encoding parameters. Cloud computing, which offers on-demand computing resources tailored to customers' needs and budgets, is a promising technology for accelerating dynamic transcoding workloads. In this work, we propose OTEC, a novel multi-objective optimization method based on a mixed-integer linear programming model that optimizes the selection of computing instances for transcoding processes. OTEC determines the type and number of cloud and fog resource instances for video encoding and transcoding tasks with optimized computation cost and time. We evaluated OTEC on AWS EC2 and Exoscale instances for various administrator priorities, numbers of encoded video segments, and segment transcoding times. The results show that OTEC achieves appropriate resource selections and satisfies the administrator's priorities in terms of time and cost minimization.
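The kind of instance-selection problem OTEC formulates as a MILP can be illustrated by brute-force enumeration on a toy input. The instance types, speeds, prices, and weighting below are assumptions for illustration, not values from the paper:

```python
# Brute-force sketch of an instance-selection problem in the spirit of OTEC:
# choose how many instances of each type to rent so that a weighted sum of
# total transcoding time and monetary cost is minimized.
from itertools import product

def select_instances(n_segments, seg_time, types, w_time=0.5, max_per_type=4):
    """types: dict name -> (speedup_factor, price_per_second), hypothetical."""
    best = None
    names = list(types)
    for counts in product(range(max_per_type + 1), repeat=len(names)):
        if sum(counts) == 0:
            continue  # must rent at least one instance
        # aggregate processing capacity in segments per second
        capacity = sum(c * types[n][0] / seg_time
                       for c, n in zip(counts, names))
        time = n_segments / capacity
        cost = time * sum(c * types[n][1] for c, n in zip(counts, names))
        score = w_time * time + (1 - w_time) * cost
        if best is None or score < best[0]:
            best = (score, dict(zip(names, counts)), time, cost)
    return best[1], best[2], best[3]

types = {"cloud.xlarge": (4.0, 0.008),   # hypothetical specs and prices
         "fog.medium":   (1.0, 0.001)}
plan, time, cost = select_instances(n_segments=40, seg_time=10.0, types=types)
```

A real MILP solver replaces this enumeration so the approach scales to many instance types and constraints; the administrator priority in OTEC plays the role of the weight `w_time` here.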

OTEC architecture overview.

Posted in APOLLO, GAIA, News, SPIRIT | Comments Off on OTEC: An Optimized Transcoding Task Scheduler for Cloud and Fog Environments

EMES: Efficient Multi-Encoding Schemes for HEVC-based Adaptive Bitrate Streaming

Transactions on Multimedia Computing Communications and Applications (TOMM)

Journal Website

[PDF]

Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract:

In HTTP Adaptive Streaming (HAS), videos are encoded at multiple bitrates and spatial resolutions (i.e., representations) to adapt to the heterogeneity of network conditions, device attributes, and end-user preferences. Encoding the same video segment at multiple representations increases costs for content providers. State-of-the-art multi-encoding schemes improve the encoding process by utilizing encoder analysis information from already encoded representation(s) to reduce the encoding time of the remaining representations. These schemes typically use the highest bitrate representation as the reference to accelerate the encoding of the remaining representations. Nowadays, most streaming services utilize cloud-based encoding techniques, enabling a fully parallel encoding process to reduce the overall encoding time. The highest bitrate representation has the longest encoding time of all representations. Thus, utilizing it as the reference encoding is unfavorable in a parallel encoding setup, as the overall encoding time is bound by its encoding time. This paper provides a comprehensive study of various multi-rate and multi-encoding schemes in both serial and parallel encoding scenarios. Furthermore, it introduces novel heuristics to limit the Rate Distortion Optimization (RDO) process across various representations. Based on these heuristics, three multi-encoding schemes are proposed, which rely on encoder analysis sharing across different representations: (i) optimized for the highest compression efficiency, (ii) optimized for the best compression efficiency-encoding time savings trade-off, and (iii) optimized for the best encoding time savings. Experimental results demonstrate that the proposed multi-encoding schemes (i), (ii), and (iii) reduce the overall serial encoding time by 34.71%, 45.27%, and 68.76%, with a 2.3%, 3.1%, and 4.5% bitrate increase to maintain the same VMAF, respectively, compared to stand-alone encodings. The overall parallel encoding time is reduced by 22.03%, 20.72%, and 76.82% compared to stand-alone encodings for schemes (i), (ii), and (iii), respectively.

An example of video representations' storage in HAS. The input video is encoded at multiple resolutions and bitrates. Novel multi-rate and multi-resolution encoder analysis sharing methods are presented to accelerate encoding in more than one representation.

Posted in News | Comments Off on EMES: Efficient Multi-Encoding Schemes for HEVC-based Adaptive Bitrate Streaming