SARENA: SFC-Enabled Architecture for Adaptive Video Streaming Applications

IEEE International Conference on Communications (ICC)

28 May – 01 June 2023 – Rome, Italy

Conference Website
[PDF][Slides]

Reza Farahani (Alpen-Adria-Universität Klagenfurt), Abdelhak Bentaleb (Concordia University, Canada), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), Mohammad Shojafar (University of Surrey, UK), Radu Prodan (Alpen-Adria-Universität Klagenfurt), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt).

Abstract: 5G and 6G networks are expected to support various novel emerging adaptive video streaming services (e.g., live, VoD, immersive media, and online gaming) with versatile Quality of Experience (QoE) requirements such as high bitrate, low latency, and sufficient reliability. It is widely agreed that these requirements can be satisfied by adopting emerging networking paradigms like Software-Defined Networking (SDN), Network Function Virtualization (NFV), and edge computing. Previous studies have leveraged these paradigms to present network-assisted video streaming frameworks, but mostly in isolation without devising chains of Virtualized Network Functions (VNFs) that consider the QoE requirements of various types of Multimedia Services (MS).

To bridge the aforementioned gaps, we first introduce a set of multimedia VNFs at the edge of an SDN-enabled network and form diverse Service Function Chains (SFCs) based on the QoE requirements of different MSs. We then propose SARENA, an SFC-enabled ArchitectuRe for adaptive VidEo StreamiNg Applications. Next, we formulate the problem as a central scheduling optimization model executed at the SDN controller. We also present a lightweight two-phase heuristic solution that runs on the SDN controller and edge servers to alleviate the time complexity of the optimization model in large-scale scenarios. Finally, we design a large-scale cloud-based testbed, including 250 HTTP Adaptive Streaming (HAS) players requesting two popular MS applications (i.e., live and VoD), conduct various experiments, and compare SARENA's effectiveness with baseline systems. Experimental results illustrate that SARENA outperforms baseline schemes in both MS services, improving users' QoE by at least 39.6%, latency by 29.3%, and network utilization by 30%.
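The idea of chaining edge VNFs according to per-service QoE requirements can be illustrated with a small sketch. This is not SARENA's actual model (the paper formulates it as a central scheduling optimization at the SDN controller); the VNF names, latency figures, and chaining rule below are assumptions for demonstration only.

```python
# Candidate multimedia VNFs hosted at the edge (hypothetical names/values).
VNFS = {
    "vproxy": {"adds_latency_ms": 1},   # virtual reverse proxy
    "vcache": {"adds_latency_ms": 2},   # virtual cache
    "vtrans": {"adds_latency_ms": 25},  # virtual transcoder
}

def build_sfc(service):
    """Form a Service Function Chain from a service's QoE requirements.

    Live sessions are latency-critical, so the chain skips the costly
    transcoder; VoD sessions tolerate more delay and may transcode
    missing representations at the edge.
    """
    chain = ["vproxy", "vcache"]
    if service["type"] == "vod" and service["max_latency_ms"] >= 30:
        chain.append("vtrans")
    return chain

live = {"type": "live", "max_latency_ms": 5}
vod = {"type": "vod", "max_latency_ms": 100}
print(build_sfc(live))  # ['vproxy', 'vcache']
print(build_sfc(vod))   # ['vproxy', 'vcache', 'vtrans']
```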

Index Terms: HAS; DASH; NFV; SFC; SDN; Edge Computing.

 

Posted in News | Comments Off on SARENA: SFC-Enabled Architecture for Adaptive Video Streaming Applications

A holistic survey of multipath wireless video streaming


Journal Website: Journal of Network and Computer Applications

[PDF]

Samira Afzal (Alpen-Adria-Universität Klagenfurt), Vanessa Testoni (unico IDtech), Christian Esteve Rothenberg (University of Campinas), Prakash Kolan (Samsung Research America), and Imed Bouazizi (Qualcomm)

Abstract:

Demand for wireless video streaming services is increasing, with users expecting access to high-quality video streaming experiences. Ensuring Quality of Experience (QoE) is quite challenging due to varying bandwidth and time constraints. Since most of today's mobile devices are equipped with multiple network interfaces, one promising approach is to benefit from multipath communications. Multipathing yields higher aggregate bandwidth, and distributing video traffic over multiple network paths improves stability, seamless connectivity, and QoE. However, most current transport protocols do not match the requirements of video streaming applications or are not designed to address relevant issues such as network heterogeneity, head-of-line blocking, and delay constraints. In this comprehensive survey, we first review video streaming standards and technology developments. We then discuss the benefits and challenges of multipath video transmission over wireless networks. We provide a holistic literature review of multipath wireless video streaming, shedding light on the different alternatives from an end-to-end layered stack perspective, reviewing key multipath wireless scheduling functions, unveiling the trade-offs of each approach, and presenting a suitable taxonomy to classify the state-of-the-art. Finally, we discuss open issues and avenues for future work.
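As a flavor of the multipath scheduling functions the survey reviews, here is a minimal "lowest-RTT-first" packet scheduler sketch, in the spirit of MPTCP-style default schedulers. The path objects and their values are hypothetical, and a real scheduler would also account for loss, congestion-window dynamics, and head-of-line blocking, which this toy version ignores.

```python
def min_rtt_schedule(paths, n_packets):
    """Assign each packet to the currently fastest path with free cwnd."""
    assignment = []
    in_flight = {p["name"]: 0 for p in paths}
    for _ in range(n_packets):
        # Only paths with congestion-window headroom are eligible.
        candidates = [p for p in paths if in_flight[p["name"]] < p["cwnd"]]
        if not candidates:
            break  # all paths are congestion-window limited
        best = min(candidates, key=lambda p: p["rtt_ms"])
        assignment.append(best["name"])
        in_flight[best["name"]] += 1
    return assignment

paths = [
    {"name": "wifi", "rtt_ms": 20, "cwnd": 3},
    {"name": "lte", "rtt_ms": 50, "cwnd": 4},
]
print(min_rtt_schedule(paths, 5))  # ['wifi', 'wifi', 'wifi', 'lte', 'lte']
```

Note how the low-RTT Wi-Fi path fills up first; once its window is exhausted, traffic spills over to the slower LTE path, which is exactly the aggregation benefit (and the reordering risk) discussed above.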

 


Reza Farahani to give a talk at 5G/6G Innovation Center, University of Surrey, UK

Collaborative Edge-Assisted Systems for HTTP Adaptive Video Streaming

5G/6G Innovation Center, University of Surrey, UK

6th January 2023 | Guildford, UK

Abstract: The proliferation of novel video streaming technologies, advancement of networking paradigms, and steadily increasing numbers of users who prefer to watch video content over the Internet rather than using classical TV have made video the predominant traffic on the Internet. However, designing cost-effective, scalable, and flexible architectures that support low-latency and high-quality video streaming is still a challenge for both over-the-top (OTT) and ISP companies. In this talk, we first introduce the principles of video streaming and the existing challenges. We then review several 5G/6G networking paradigms and explain how we can leverage networking technologies to form collaborative network-assisted video streaming systems for improving users’ quality of experience (QoE) and network utilization.

Reza Farahani is a final-year Ph.D. candidate at the University of Klagenfurt, Austria, and a visiting Ph.D. student at the University of Surrey, UK. He received his B.Sc. in 2014 from the University of Isfahan, Iran, and his M.Sc. in 2019 from the University of Tehran, Iran. Currently, he is working on the ATHENA project in cooperation with its industry partner Bitmovin. His research focuses on designing modern network-assisted video streaming solutions (via SDN, NFV, MEC, SFC, and P2P paradigms), multimedia communication, computing continuum challenges, and parallel and distributed systems. He has also worked in various roles in the computer networking field, e.g., network administrator, ISP customer support engineer, Cisco network engineer, network protocol designer, network programmer, and Cisco instructor (R&S, SP).


LALISA: Adaptive Bitrate Ladder Optimization in HTTP-based Adaptive Live Streaming


IEEE/IFIP Network Operations and Management Symposium (NOMS)

8–12 May 2023 – Miami, FL, USA

[PDF][PPT][Video]

Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt, Austria), Abdelhak Bentaleb (Concordia University, Canada), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), Babak Taraghi (Alpen-Adria-Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria), Roger Zimmermann (National University of Singapore, Singapore)

Abstract: Video content in live HTTP Adaptive Streaming (HAS) is typically encoded using a pre-defined, fixed set of bitrate-resolution pairs (termed a bitrate ladder), allowing playback devices to adapt to changing network conditions using an adaptive bitrate (ABR) algorithm. However, using a fixed, one-size-fits-all solution in the face of varying content complexities, heterogeneous network conditions, and diverse viewer device resolutions and locations does not result in overall maximal viewer quality of experience (QoE). Here, we consider these factors and design LALISA, an efficient framework for dynamic bitrate ladder optimization in live HAS. LALISA dynamically changes a live video session's bitrate ladder, allowing improvements in viewer QoE and savings in encoding, storage, and bandwidth costs. LALISA is independent of ABR algorithms and codecs, and is deployed along the path between viewers and the origin server. In particular, it leverages the latest developments in video analytics to collect statistics from video players, content delivery networks, and video encoders to perform bitrate ladder tuning. We evaluate the performance of LALISA against existing solutions in various video streaming scenarios using a trace-driven testbed. Evaluation results demonstrate significant improvements in encoding computation (24.4%) and bandwidth (18.2%) costs with an acceptable QoE.
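To give a feel for ladder tuning, here is a heavily simplified stand-in for what LALISA optimizes: prune ladder rungs that players rarely request, based on collected session statistics. The function, thresholds, and numbers are illustrative assumptions, not the paper's actual optimization.

```python
def prune_ladder(ladder, request_counts, min_share=0.05):
    """Drop bitrate-resolution pairs requested by fewer than min_share of
    sessions, always keeping the lowest rung as a safety fallback."""
    total = sum(request_counts.get(b, 0) for b, _ in ladder) or 1
    kept = [(b, r) for b, r in ladder
            if request_counts.get(b, 0) / total >= min_share]
    lowest = min(ladder, key=lambda pair: pair[0])
    if lowest not in kept:
        kept.insert(0, lowest)  # never strand viewers on poor networks
    return kept

ladder = [(500, "360p"), (1500, "720p"), (3000, "1080p"), (6000, "1080p")]
stats = {500: 10, 1500: 450, 3000: 530, 6000: 10}  # requests per rung
print(prune_ladder(ladder, stats))
# [(500, '360p'), (1500, '720p'), (3000, '1080p')]
```

Dropping the barely-used 6000 kbps rung saves encoding, storage, and CDN costs for the live session, which is the kind of trade-off the paper quantifies.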

 


CD-LwTE: Cost- and Delay-aware Light-weight Transcoding at the Edge


IEEE Transactions on Network and Service Management (TNSM)

[PDF]

Alireza Erfanian (Alpen-Adria-Universität Klagenfurt, Austria), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria).

Abstract: The edge computing paradigm brings cloud capabilities close to the clients. Leveraging the edge's capabilities can improve video streaming services by employing the storage capacity and processing power at the edge for caching and transcoding tasks, respectively, resulting in video streaming services with higher quality and lower latency. In this paper, we propose CD-LwTE, a Cost- and Delay-aware Light-weight Transcoding approach at the Edge, in the context of HTTP Adaptive Streaming (HAS). Encoding a video segment requires computationally intensive search processes. The main idea of CD-LwTE is to store the optimal search results as metadata for each bitrate of a video segment and reuse them at the edge servers to reduce the time and computational resources required for transcoding. Aiming at minimizing the cost and delay of Video-on-Demand (VoD) services, we formulate the problem of selecting an optimal policy for serving segment requests at the edge server, including (i) storing at the edge server, (ii) transcoding from a higher bitrate at the edge server, and (iii) fetching from the origin or a CDN server, as a Binary Linear Programming (BLP) model. As a result, CD-LwTE stores popular video segments at the edge and serves unpopular ones by transcoding using metadata or by fetching from the origin/CDN server. In this way, in addition to a significant reduction in bandwidth and storage costs, the transcoding time of a requested segment is remarkably decreased by utilizing its corresponding metadata. Moreover, we prove that the proposed BLP model is NP-hard and propose two heuristic algorithms to mitigate the time complexity of CD-LwTE. We investigate the performance of CD-LwTE in comprehensive scenarios with various video contents, encoding software, encoding settings, and available resources at the edge. The experimental results show that our approach (i) reduces the transcoding time by up to 97%, (ii) decreases the streaming cost, including storage, computation, and bandwidth costs, by up to 75%, and (iii) reduces delay by up to 48% compared to state-of-the-art approaches.
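The three serving policies named above can be sketched as a greedy per-segment cost comparison. This is only an illustrative relaxation of the paper's BLP model: the linear cost terms, the metadata speed-up factor, and all numbers are assumptions, and the real model additionally enforces joint storage and compute budgets across segments.

```python
def choose_policy(seg):
    """Return the min-cost policy for serving one segment.

    store:     storage cost amortized over expected future requests
    transcode: compute cost, cut sharply when reusable search metadata exists
    fetch:     origin/CDN bandwidth cost paid on every request
    """
    store = seg["storage_cost"] / max(seg["expected_requests"], 1)
    transcode = seg["transcode_cost"] * (0.1 if seg["has_metadata"] else 1.0)
    fetch = seg["fetch_cost"]
    costs = {"store": store, "transcode": transcode, "fetch": fetch}
    return min(costs, key=costs.get)

popular = {"storage_cost": 8, "expected_requests": 100,
           "transcode_cost": 5, "has_metadata": True, "fetch_cost": 2}
unpopular = {"storage_cost": 8, "expected_requests": 1,
             "transcode_cost": 5, "has_metadata": True, "fetch_cost": 2}
print(choose_policy(popular))    # 'store'  (0.08 beats 0.5 and 2)
print(choose_policy(unpopular))  # 'transcode'  (0.5 beats 8 and 2)
```

The toy example reproduces the paper's qualitative outcome: popular segments are cheapest to keep stored at the edge, while unpopular ones are better served by metadata-assisted transcoding or fetching.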

 


Special Session on “Optimized Media Delivery” at ICME’23

ICME 2023 Special Session on

“Optimized Media Delivery”

July, 2023, Brisbane, Australia

Link


Organizers:

  • Hadi Amirpour, University of Klagenfurt

  • Angeliki Katsenou, Trinity College Dublin, IE and University of Bristol, UK


Abstract

Video streaming in the context of HTTP Adaptive Streaming (HAS) is replacing legacy media platforms, and its market share is growing rapidly due to its simplicity, reliability, and standard support (e.g., MPEG-DASH). This results in an ever-increasing amount of video content; nowadays, video accounts for the vast majority of internet traffic, either in the form of user-generated content (UGC) or pristine cinematic content. For HAS, a video is usually encoded in multiple versions (i.e., representations) of different resolutions, bitrates, codecs, etc., and each representation is divided into chunks (i.e., segments) of equal length (e.g., 2–10 seconds) to enable dynamic, adaptive switching during streaming based on the user's context (e.g., network conditions, device characteristics, user preferences).
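The representation-switching idea above can be sketched as a minimal throughput-based ABR rule. The bitrate set, safety margin, and throughput values are illustrative; real players (e.g., dash.js) layer buffer-based and hybrid logic on top of such a rule.

```python
BITRATES_KBPS = [500, 1500, 3000, 6000]  # one rung per representation

def next_representation(measured_kbps, safety=0.8):
    """Pick the highest bitrate fitting within a safety margin of the
    measured throughput; fall back to the lowest rung otherwise."""
    budget = measured_kbps * safety
    feasible = [b for b in BITRATES_KBPS if b <= budget]
    return max(feasible) if feasible else BITRATES_KBPS[0]

print(next_representation(4000))  # 3000 (budget 3200 kbps)
print(next_representation(300))   # 500 (lowest rung as fallback)
```

The safety margin keeps the player from requesting a segment it is unlikely to download within the segment duration, trading a little quality for fewer stalls.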

Optimized media delivery requires optimizing streaming from an end-to-end perspective, covering both content provisioning and content consumption. In content provisioning, the quality of the video to be streamed is vital; for example, cinematic content is pristine, while UGC content is already distorted. Thus, video coding/transcoding is crucial both for efficient distribution to the end user (real-time or on demand) and for a high quality of experience. There is a plethora of different techniques for a smooth visual experience. Many researchers focus on improving the compression efficiency of the standardised video codecs (e.g., HEVC, VVC, VP9, AV1, AVS3). Others focus on driving video codecs with perceptual models to improve delivery. At the HAS streaming level, video service providers focus on constructing optimized per-content bitrate ladders that can also reduce streaming costs. New immersive media formats add to the complexity of the optimization required for end-to-end quality of experience.

The goal of this special session is to provide a forum for sharing and discussing cutting-edge research in Media Streaming and Quality Assessment. Possible topics that would be a good fit for this session include but are not limited to:

  • video coding parameter selection for optimized streaming;
  • transcoding techniques for the improved delivery of media;
  • perceptual evaluation of immersive media;
  • end-to-end video adaptive streaming methods;
  • pre- and post-processing for improved compression and delivery.

IEEE TCSVT: DeepStream: Video Streaming Enhancements using Compressed Deep Neural Networks

DeepStream: Video Streaming Enhancements using Compressed Deep Neural Networks

IEEE Transactions on Circuits and Systems for Video Technology (IEEE TCSVT)

Journal Website

[PDF]

Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), Mohammad Ghanbari (University of Essex, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: In HTTP Adaptive Streaming (HAS), each video is divided into smaller segments, and each segment is encoded at multiple pre-defined bitrates to construct a bitrate ladder. To optimize bitrate ladders, per-title encoding approaches encode each segment at various bitrates and resolutions to determine the convex hull. From the convex hull, an optimized bitrate ladder is constructed, resulting in an increased Quality of Experience (QoE) for end-users. With the ever-increasing efficiency of deep learning-based video enhancement approaches, they are increasingly employed at the client side to improve the QoE, specifically when GPU capabilities are available. Therefore, scalable approaches are needed to support end-user devices with both CPU and GPU capabilities (denoted as CPU-only and GPU-available end-users, respectively) as a new dimension of a bitrate ladder.
To address this need, we propose DeepStream, a scalable content-aware per-title encoding approach to support both CPU-only and GPU-available end-users. (i) To support backward compatibility, DeepStream constructs a bitrate ladder based on any existing per-title encoding approach. Therefore, the video content is provided to legacy end-user devices with CPU-only capabilities as a base layer (BL). (ii) For high-end end-user devices with GPU capabilities, an enhancement layer (EL) is added on top of the base layer, comprising lightweight video super-resolution deep neural networks (DNNs) for each bitrate-resolution pair of the bitrate ladder. A content-aware video super-resolution approach leads to higher video quality, however, at the cost of bitrate overhead. To reduce the bitrate overhead for streaming content-aware video super-resolution DNNs, DeepCABAC, context-adaptive binary arithmetic coding for DNN compression, is used. Furthermore, the similarity among (i) segments within a scene and (ii) frames within a segment is used to reduce the training costs of the DNNs.
Experimental results show bitrate savings of 34% and 36% to maintain the same PSNR and VMAF, respectively, for GPU-available end-users, while the CPU-only users get the desired video content as usual.
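The per-title convex-hull step described above boils down to keeping only the (bitrate, quality) test encodes that no cheaper-or-equal encode beats in quality. The sketch below reduces this to a Pareto filter; the encode list and VMAF numbers are made up, whereas real pipelines measure quality (VMAF/PSNR) from actual trial encodings across resolutions.

```python
def pareto_front(encodes):
    """Keep encodes not dominated by a cheaper-or-equal, higher-quality one."""
    front = []
    for e in encodes:
        dominated = any(o["kbps"] <= e["kbps"] and o["vmaf"] > e["vmaf"]
                        for o in encodes)
        if not dominated:
            front.append(e)
    return sorted(front, key=lambda e: e["kbps"])

encodes = [
    {"res": "720p", "kbps": 1500, "vmaf": 78},
    {"res": "1080p", "kbps": 1500, "vmaf": 72},  # dominated by 720p@1500
    {"res": "1080p", "kbps": 3000, "vmaf": 90},
    {"res": "720p", "kbps": 3000, "vmaf": 84},   # dominated by 1080p@3000
]
print(pareto_front(encodes))
```

Note how the winning resolution flips as bitrate grows (720p at 1500 kbps, 1080p at 3000 kbps), which is exactly why per-title ladders pick a resolution per rung instead of one fixed mapping.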

Keywords: HTTP adaptive streaming, per-title encoding, video streaming, video super-resolution.
