JPEG Image Encryption with DC Rotation and Undivided RSV-based AC Group Permutation

IEEE Transactions on Multimedia (TMM)

[PDF]

Yuan Yuan (Southwest Jiaotong University, China), Hongjie He (Southwest Jiaotong University, China), Yaolin Yang (Southwest Jiaotong University, China), Hadi Amirpour (AAU, Klagenfurt, Austria), Christian Timmerer (AAU, Klagenfurt, Austria), Fan Chen (Southwest Jiaotong University, China)

Abstract: Existing JPEG encryption approaches pose a security risk due to the difficulty in changing all block-feature values while considering format compatibility and file size expansion. To address these concerns, this paper introduces a novel JPEG image encryption scheme. First, the security of sketch information against chosen-plaintext attacks is improved by increasing the change rate of block-feature values. Second, a classification global permutation approach is designed to encrypt the undivided run/size, value (RSV)-based AC groups to achieve larger changes in the block-feature values. Third, to reduce file size expansion while maintaining format compatibility, the DC coefficients are rotated based on the mapped DC differences in the same category, and the nonzero AC coefficients are mapped in the same category. Extensive experiments demonstrate that the proposed algorithm is superior to existing schemes in terms of security. Notably, the average change rate of block-feature values is increased by at least 20%. Furthermore, the proposed scheme reduces the file size by an average of 2.036% compared to existing JPEG image encryption methods.
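The paper defines the exact key schedule and category grouping; purely as an illustration of the undivided-group global permutation idea, a key-seeded shuffle of RSV groups (the names and toy seeding below are hypothetical, not the scheme's actual cipher) could look like:

```python
import random

def permute_groups(groups, key):
    """Globally permute a list of AC coefficient groups with a key-seeded PRNG.

    Illustrative only: the paper permutes undivided run/size-value (RSV)
    AC groups per category with a proper cipher-based keystream.
    """
    order = list(range(len(groups)))
    random.Random(key).shuffle(order)          # key-dependent permutation
    return [groups[i] for i in order], order

def invert(permuted, order):
    """Restore the original group order (decryption side)."""
    restored = [None] * len(order)
    for new_pos, old_pos in enumerate(order):
        restored[old_pos] = permuted[new_pos]
    return restored

groups = [[(0, 3, 12)], [(1, 2, -5)], [(0, 1, 7)], [(2, 4, 30)]]  # toy RSV triples
enc, order = permute_groups(groups, key=0xC0FFEE)
assert invert(enc, order) == groups
```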

 

Posted in ATHENA

ALIVE: A Latency- and Cost-Aware Hybrid P2P-CDN Framework for Live Video Streaming

IEEE Transactions on Network and Service Management 

[PDF]

Reza Farahani (Alpen-Adria-Universität Klagenfurt, Austria), Ekrem Çetinkaya (Alpen-Adria-Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), Mohammad Shojafar (University of Surrey, UK), Mohammad Ghanbari (University of Essex, UK), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Recent years have witnessed video streaming demands evolve into one of the most popular Internet applications. With the ever-increasing personalized demands for high-definition and low-latency video streaming services, network-assisted video streaming schemes employing modern networking paradigms have become a promising complementary solution in the HTTP Adaptive Streaming (HAS) context. The emergence of such techniques addresses long-standing challenges of enhancing users’ Quality of Experience (QoE), end-to-end (E2E) latency, as well as network utilization. However, designing a cost-effective, scalable, and flexible network-assisted video streaming architecture that supports the aforementioned requirements for live streaming services is still an open challenge. This article leverages novel networking paradigms, i.e., edge computing and Network Function Virtualization (NFV), and promising video solutions, i.e., HAS, Video Super-Resolution (SR), and Distributed Video Transcoding (TR), to introduce A Latency- and cost-aware hybrId P2P-CDN framework for liVe video strEaming (ALIVE). We first introduce the ALIVE multi-layer architecture and design an action tree that considers all feasible resources (i.e., storage, computation, and bandwidth) provided by peers, edge, and CDN servers for serving peer requests with acceptable latency and quality. We then formulate the problem as a Mixed Integer Linear Programming (MILP) optimization model executed at the edge of the network. To alleviate the optimization model’s high time complexity, we propose a lightweight heuristic, namely, Greedy-Based Algorithm (GBA). Finally, we (i) design and instantiate a large-scale cloud-based testbed including 350 HAS players, (ii) deploy ALIVE on it, and (iii) conduct a series of experiments to evaluate the performance of ALIVE in various scenarios.
Experimental results indicate that ALIVE (i) improves the users’ QoE by at least 22%, (ii) decreases the incurred cost of the streaming service provider by at least 34%, (iii) shortens clients’ serving latency by at least 40%, (iv) reduces edge server energy consumption by at least 31%, and (v) reduces backhaul bandwidth usage by at least 24% compared to baseline approaches.
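The MILP formulation and GBA details are in the paper; as a rough, hypothetical sketch of the greedy idea — serve each request from the cheapest feasible resource in the action tree — one might write the following (the actions, costs, latencies, and capacities are invented placeholders, not ALIVE's actual model):

```python
# Hypothetical action tree: each entry is (action, latency_ms, cost, capacity).
# Real ALIVE actions also cover transcoding/super-resolution at peers and edge.
ACTIONS = [
    ("fetch_from_peer",        20, 0.1, 2),
    ("transcode_at_edge",      35, 0.5, 3),
    ("fetch_from_edge_cache",  30, 0.3, 3),
    ("fetch_from_cdn",         80, 1.0, 10),
]

def greedy_assign(requests, actions, latency_budget_ms=60):
    """Greedily map each request to the lowest-cost feasible action.

    A stand-in for ALIVE's Greedy-Based Algorithm (GBA): feasibility here
    is just remaining capacity plus a flat latency budget.
    """
    capacity = {name: cap for name, _, _, cap in actions}
    plan = {}
    for req in requests:
        feasible = [(cost, lat, name) for name, lat, cost, _ in actions
                    if capacity[name] > 0 and lat <= latency_budget_ms]
        if not feasible:                       # fall back to CDN regardless of budget
            plan[req] = "fetch_from_cdn"
            continue
        cost, lat, name = min(feasible)        # cheapest feasible action wins
        capacity[name] -= 1
        plan[req] = name
    return plan

plan = greedy_assign([f"seg{i}" for i in range(6)], ACTIONS)
```

Peers absorb the first requests, then the edge cache, then edge transcoding — the CDN is touched only when cheaper layers are exhausted, which is the cost-saving intuition behind the heuristic.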

Keywords: HTTP Adaptive Streaming (HAS); Edge Computing; Network Function Virtualization (NFV); Content Delivery Network (CDN); Peer-to-Peer (P2P); Quality of Experience (QoE); Video Transcoding; Video Super-Resolution.

 

Posted in ATHENA

Towards Low-Latency and Energy-Efficient Hybrid P2P-CDN Live Video Streaming

Special Issue on Sustainable Multimedia Communications and Services, IEEE COMSOC MMTC Communications – Frontiers

[PDF]

Reza Farahani (Alpen-Adria-Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Streaming segmented videos over the Hypertext Transfer Protocol (HTTP) is an increasingly popular approach in both live and video-on-demand (VoD) applications. However, designing a scalable and adaptable framework that reduces servers’ energy consumption and supports low-latency and high-quality services, particularly for live video streaming scenarios, is still challenging for Over-The-Top (OTT) service providers. To address such challenges, this paper introduces a new hybrid P2P-CDN framework that leverages new networking and computing paradigms, i.e., Network Function Virtualization (NFV) and edge computing, for live video streaming. The proposed framework introduces a multi-layer architecture and a tree of possible actions therein (an action tree), taking into account all available resources from peers, edge, and CDN servers to efficiently distribute video fetching and transcoding tasks across a hybrid P2P-CDN network, consequently reducing users’ latency and improving video quality. We also discuss our testbed designed to validate the framework and compare it with baseline methods. The experimental results indicate that the proposed framework improves user Quality of Experience (QoE), reduces client serving latency, and reduces edge server energy consumption compared to baseline approaches.

Keywords: Energy Efficiency; HAS; DASH; Edge Computing; NFV; CDN; P2P; Low Latency; QoE; Video Transcoding.

Posted in ATHENA

SIGMM Test of Time Paper Honorable Mention in the category of “MM Systems & Networking”

We’re excited to share that the ACM Special Interest Group on Multimedia (SIGMM) presents to

Stefan Lederer, Christopher Müller, and Christian Timmerer

The SIGMM Test of Time Paper Honorable Mention in the category of “MM Systems & Networking”

for their paper “Dynamic Adaptive Streaming over HTTP Dataset”. In Proceedings of the 3rd Multimedia Systems Conference, MMSys ’12, pages 89–94, New York, NY, USA, 2012. ACM. doi:10.1145/2155555.2155570

Posted in ATHENA

Machine Learning Based Resource Utilization Prediction in the Computing Continuum

IEEE International Workshop on Computer Aided Modeling and Design of Communication Links and Networks

6–8 November 2023 | Edinburgh, Scotland

Conference Website

[PDF] [Slides]

Christian Bauer (Alpen-Adria-Universität Klagenfurt), Narges Mehran (Alpen-Adria-Universität Klagenfurt), Radu Prodan (Alpen-Adria-Universität Klagenfurt) and Dragi Kimovski (Alpen-Adria-Universität Klagenfurt)

Abstract: This paper presents UtilML, a novel approach for tackling resource utilization prediction challenges in the computing continuum. UtilML leverages Long Short-Term Memory (LSTM) neural networks, a machine learning technique, to forecast resource utilization accurately. The effectiveness of UtilML is demonstrated through its evaluation on data extracted from a real GPU cluster in a computing continuum infrastructure comprising more than 1800 computing devices. To assess the performance of UtilML, we compared it with two related approaches that utilize a Baseline-LSTM model. Furthermore, we analyzed the LSTM results against user-predicted values provided by GPU cluster owners for task deployment with estimated allocation values. The results indicate that UtilML outperformed user predictions by 2% to 27% for CPU utilization prediction. For memory prediction, UtilML variants excelled, showing improvements of 17% to 20% compared to user predictions.
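As background, the recurrence that LSTM layers like UtilML's are built from can be sketched with a single untrained cell step in NumPy (the feature vector and random weights below are illustrative, not the trained model or its cluster traces):

```python
import numpy as np

def lstm_cell(x, h, c, W, U, b):
    """One forward step of a standard LSTM cell, the building block
    stacked by models such as UtilML to forecast CPU/memory utilization."""
    z = W @ x + U @ h + b                     # stacked gate pre-activations
    n = h.size
    i = 1 / (1 + np.exp(-z[:n]))              # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))           # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))         # output gate
    g = np.tanh(z[3*n:])                      # candidate cell state
    c = f * c + i * g                         # updated memory
    h = o * np.tanh(c)                        # updated hidden state
    return h, c

rng = np.random.default_rng(0)
hidden, features = 8, 3
W = rng.normal(scale=0.1, size=(4 * hidden, features))
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
h = c = np.zeros(hidden)
for util in [0.2, 0.4, 0.35, 0.5]:            # toy utilization window
    x = np.array([util, util ** 2, 1.0])      # hypothetical per-step features
    h, c = lstm_cell(x, h, c, W, U, b)
# a trained linear readout on h would yield the next-step utilization forecast
```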

Keywords: Utilization Prediction, Machine Learning, Computing Continuum, Cloud.

Posted in GAIA

Empowerment of Atypical Viewers via Low-Effort Personalized Modeling of Video Streaming Quality

The 19th International Conference on emerging Networking EXperiments and Technologies

December 5-8, 2023 | Paris, France

[PDF] [PPT] [PPT (Artifacts)]

Leonardo Peroni (IMDEA Networks Institute and UC3M), Sergey Gorinsky (IMDEA Networks Institute), Farzad Tashtarian (AAU, Austria), and Christian Timmerer (AAU, Austria).


Abstract: Quality of Experience (QoE) and QoE models are of increasing importance to networked systems. Traditional QoE modeling for video streaming applications builds a one-size-fits-all QoE model that underserves atypical viewers who perceive QoE differently. To address the problem of atypical viewers, this paper proposes iQoE (individualized QoE), a method that employs explicit, expressible, and actionable feedback from a viewer to construct a personalized QoE model for this viewer. The iterative iQoE design exercises active learning and combines a novel sampler with a modeler. The chief emphasis of our paper is on making iQoE sample-efficient and accurate.
By leveraging the Microworkers crowdsourcing platform, we conduct studies with 120 subjects who provide 14,400 individual scores. According to the subjective studies, a session of about 22 minutes empowers a viewer to construct a personalized QoE model that, compared to the best of the 10 baseline models, delivers an average accuracy improvement of at least 42% for all viewers and at least 85% for the atypical viewers. The large-scale simulations based on a new technique of synthetic profiling expand the evaluation scope by exploring iQoE design choices, parameter sensitivity, and generalizability.
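As a loose illustration of the sampler-plus-modeler loop described above — not iQoE's actual components — a diversity-based query strategy paired with a ridge-regression QoE model might look like this (feature names, preference vector, and all constants are invented):

```python
import numpy as np

def next_query(pool, labeled_X):
    """Pick the pool sample farthest from anything already rated
    (a simple diversity sampler; iQoE's sampler is more sophisticated)."""
    d = ((pool[:, None, :] - labeled_X[None, :, :]) ** 2).sum(-1).min(1)
    return int(d.argmax())

def fit_qoe(X, y):
    """Ridge-regularized linear QoE model on session features
    (e.g., bitrate, stall time, quality switches)."""
    A = X.T @ X + 1e-3 * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(1)
pool = rng.uniform(size=(50, 3))              # unrated streaming sessions
true_w = np.array([2.0, -3.0, 0.5])           # hidden per-viewer preferences
X, y = pool[:1].copy(), pool[:1] @ true_w     # one seed rating
for _ in range(8):                            # 8 interactive rating rounds
    i = next_query(pool, X)
    X = np.vstack([X, pool[i]])               # viewer "rates" session i
    y = np.append(y, pool[i] @ true_w)
w = fit_qoe(X, y)                             # personalized model after 9 ratings
```

With noiseless toy ratings, a handful of well-spread queries already recover the simulated viewer's preference weights, which is the sample-efficiency intuition behind active learning.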

 

Posted in ATHENA

IEEE Access: Characterization of the Quality of Experience and Immersion of Point Cloud Videos in Augmented Reality through a Subjective Study

IEEE Access, A Multidisciplinary, Open-access Journal of the IEEE

[PDF]

Minh Nguyen (Alpen-Adria-Universität Klagenfurt, Austria), Shivi Vats (Alpen-Adria-Universität Klagenfurt, Austria), Sam Van Damme (Ghent University – imec and KU Leuven, Belgium), Jeroen van der Hooft (Ghent University – imec, Belgium), Maria Torres Vega (Ghent University – imec and KU Leuven, Belgium), Tim Wauters (Ghent University – imec, Belgium), Filip De Turck (Ghent University – imec, Belgium), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Point cloud streaming has recently attracted research attention as it has the potential to provide six degrees of freedom movement, which is essential for truly immersive media. The transmission of point clouds requires high-bandwidth connections, and adaptive streaming is a promising solution to cope with fluctuating bandwidth conditions. Thus, understanding the impact of different factors in adaptive streaming on the Quality of Experience (QoE) becomes fundamental. Point clouds have been evaluated in Virtual Reality (VR), where viewers are completely immersed in a virtual environment. Augmented Reality (AR) is a novel technology and has recently become popular, yet quality evaluations of point clouds in AR environments are still limited to static images.

In this paper, we perform a subjective study of four impact factors on the QoE of point cloud video sequences in AR conditions, including encoding parameters (quantization parameters, QPs), quality switches, viewing distance, and content characteristics. The experimental results show that these factors significantly impact the QoE. The QoE decreases if the sequence is encoded at high QPs and/or switches to lower quality and/or is viewed at a shorter distance, and vice versa. Additionally, the results indicate that the end user is not able to distinguish the quality differences between two quality levels at a specific (high) viewing distance. An intermediate-quality point cloud encoded at geometry QP (G-QP) 24 and texture QP (T-QP) 32 and viewed at 2.5 m can have a QoE (i.e., score 6.5 out of 10) comparable to a high-quality point cloud encoded at 16 and 22 for G-QP and T-QP, respectively, and viewed at a distance of 5 m. Regarding content characteristics, objects with lower contrast can yield better quality scores. Participants’ responses reveal that the visual quality of point clouds has not yet reached an immersion level as desired. The average QoE of the highest visual quality is less than 8 out of 10. There is also a good correlation between objective metrics (e.g., color Peak Signal-to-Noise Ratio (PSNR) and geometry PSNR) and the QoE score. In particular, the Pearson correlation coefficient for color PSNR is 0.84. Finally, we found that machine learning models are able to accurately predict the QoE of point clouds in AR environments.
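For reference, the Pearson correlation the abstract reports between objective metrics and QoE scores is computed as follows (the numbers below are toy values, not the study's data):

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between an objective metric
    (e.g., color PSNR) and subjective QoE scores."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# Toy values only -- not the paper's measurements.
color_psnr = [28.1, 31.4, 33.9, 36.2, 38.0]
qoe_scores = [4.0, 5.5, 6.4, 7.8, 8.1]
r = pearson(color_psnr, qoe_scores)           # close to 1 for monotone pairs
```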

The subjective test results and questionnaire responses are available on Github: https://github.com/minhkstn/QoE-and-Immersion-of-Dynamic-Point-Cloud.

Index Terms: Point Clouds, Quality of Experience, Subjective Tests, Augmented Reality

Posted in SPIRIT