Adaptive Compressed Domain Video Encryption


Expert Systems With Applications

[PDF]

Mohammad Ghasempour (AAU, Austria), Yuan Yuan (Southwest Jiaotong University), Hadi Amirpour (AAU, Austria), Hongjie He (Southwest Jiaotong University), and Christian Timmerer (AAU, Austria)

Abstract: With the ever-increasing amount of digital video content, efficient encryption is crucial to protect visual content across diverse platforms. Existing methods often incur excessive bitrate overhead due to content variability. Furthermore, since most videos are already compressed, encryption in the compressed domain is essential to avoid processing overhead and re-compression quality loss. However, achieving both format compliance and compression efficiency while ensuring that the decoded content remains unrecognizable is challenging in the compressed domain, since only limited information is available without full decoding. This paper proposes an adaptive compressed domain video encryption (ACDC) method that dynamically adjusts the encryption strategy according to content characteristics. Two tunable parameters derived from the bitstream information enable adaptation to various application requirements. An adaptive syntax integrity method is employed to produce format-compliant bitstreams without full decoding. Experimental results show that ACDC reduces bitrate overhead by 48.2% and achieves a 31-fold speedup in encryption time compared to the latest state of the art, while producing visually unrecognizable outputs.
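To make the idea of content-adaptive selective encryption more concrete, here is a loose Python sketch that encrypts a tunable fraction of each coded unit with AES-CTR, scaling that fraction with unit size as a crude stand-in for bitstream-derived statistics. This illustrates the general concept only, not the ACDC method: the unit sizes, thresholds, and ratios are invented, and a real compressed-domain scheme restricts encryption to specific syntax elements to preserve format compliance, which this toy does not attempt.

```python
# General idea of content-adaptive selective encryption (NOT the ACDC method).
# All sizes, thresholds, and ratios below are made-up illustrative values.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_fraction(payload: bytes, key: bytes, nonce: bytes, frac: float) -> bytes:
    """Encrypt the first `frac` of the payload with AES-CTR, keep the rest as-is."""
    cut = int(len(payload) * frac)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(payload[:cut]) + enc.finalize() + payload[cut:]

key = os.urandom(16)
units = [os.urandom(n) for n in (800, 4000, 12000)]  # fake coded units

for i, unit in enumerate(units):
    # "Adaptive" knob: larger units get a higher encryption ratio, standing in
    # for a decision driven by statistics parsed from the bitstream itself.
    frac = 0.1 if len(unit) < 1000 else 0.25 if len(unit) < 8000 else 0.5
    protected = encrypt_fraction(unit, key, os.urandom(16), frac)
    print(f"unit {i}: {len(unit)} bytes, encrypted fraction {frac}")
```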


YTLive: A Dataset of Real-World YouTube Live Streaming Sessions


IEEE/IFIP Network Operations and Management Symposium (NOMS) 2026

Rome, Italy, 18–22 May 2026

[PDF]

Mojtaba Mozhganfar (University of Tehran), Pooya Jamshidi (University of Tehran), Seyyed Ali Aghamiri (University of Tehran), Mohsen Ghasemi (Sharif University of Technology), Mahdi Dolati (Sharif University of Technology), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt), Ahmad Khonsari (University of Tehran), Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract

Live streaming plays a major role in today’s digital platforms, supporting entertainment, education, social media, and more. However, research in this field is limited by the lack of large, publicly available datasets that capture real-time viewer behavior at scale. To address this gap, we introduce YTLive, a public dataset focused on YouTube Live. Collected through the YouTube Researcher Program over May and June 2024, YTLive includes more than 507,000 records from 12,156 live streams, tracking concurrent viewer counts at five-minute intervals along with precise broadcast durations. We describe the dataset design and collection process and present an initial analysis of temporal viewing patterns. Results show that viewer counts are higher and more stable on weekends, especially during afternoon hours. Shorter streams attract larger and more consistent audiences, while longer streams tend to grow slowly and exhibit greater variability. These insights have direct implications for adaptive streaming, resource allocation, and Quality of Experience (QoE) modeling. YTLive offers a timely, open resource to support reproducible research and system-level innovation in live streaming. The dataset is publicly available at: https://github.com/ghalandar/YTLive.
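As a starting point for working with the dataset, the snippet below shows one way such an analysis could be reproduced with pandas. The file name and column names (stream_id, timestamp, concurrent_viewers) are assumptions made for illustration; the repository's documentation defines the actual schema.

```python
# Hedged sketch of summarizing YTLive-style data; column names are assumed.
import pandas as pd

df = pd.read_csv("ytlive.csv", parse_dates=["timestamp"])  # hypothetical file name

# Average concurrent viewers per weekday and hour (weekend/afternoon effects).
df["weekday"] = df["timestamp"].dt.day_name()
df["hour"] = df["timestamp"].dt.hour
summary = (
    df.groupby(["weekday", "hour"])["concurrent_viewers"]
      .agg(["mean", "std", "count"])
      .reset_index()
)
print(summary.head())

# Per-stream duration vs. audience size, mirroring the short-vs-long comparison.
per_stream = df.groupby("stream_id").agg(
    duration_min=("timestamp", lambda t: (t.max() - t.min()).total_seconds() / 60),
    mean_viewers=("concurrent_viewers", "mean"),
)
print(per_stream.sort_values("mean_viewers", ascending=False).head())
```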

Resource Management for Distributed Binary Neural Networks in Programmable Data Plane


IEEE/IFIP Network Operations and Management Symposium (NOMS) 2026

Rome, Italy, 18–22 May 2026

[PDF]

Fatemeh Babaei (Sharif University of Technology), Mahdi Dolati (Sharif University of Technology), Mojtaba Mozhganfar (University of Tehran), Sina Darabi (Università della Svizzera Italiana), Farzad Tashtarian (University of Klagenfurt)

Abstract

Programmable networks enable the deployment of customized network functions that can process traffic at line rate. The growing traffic volume and the increasing complexity of network management have motivated the use of data-driven and machine learning–based functions within the network. Recent studies demonstrate that machine learning models can be fully executed in the data plane to achieve low latency. However, the limited hardware resources of programmable switches pose a significant challenge for deploying such functions. This work investigates Binary Neural Networks (BNNs) as an effective mechanism for implementing network functions entirely in the data plane. We propose a network-wide resource allocation algorithm that exploits the inherent distributability of neural networks across multiple switches. The algorithm builds on the linear programming relaxation and randomized rounding framework to achieve efficient resource utilization. We implement our approach using Mininet and bmv2 software switches. Comprehensive evaluations on two public datasets show that our method attains near-optimal performance in small-scale networks and consistently outperforms baseline schemes in larger deployments.
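The toy example below illustrates the general LP-relaxation-plus-randomized-rounding pattern with scipy, placing a handful of hypothetical BNN layers onto switches under made-up resource budgets. It sketches the flavor of the framework only; the objective, constraints, and numbers are not those of the paper.

```python
# Toy LP relaxation + randomized rounding for layer-to-switch placement.
# Costs, demands, and capacities are arbitrary illustrative numbers.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

L, S = 4, 3                        # layers, switches (hypothetical)
cost = rng.uniform(1, 5, (L, S))   # per-layer processing cost on each switch
demand = rng.uniform(1, 3, L)      # resource units each layer consumes
capacity = np.array([4.0, 5.0, 6.0])

# Decision variables x[l, s] in [0, 1], flattened row-major: index = l * S + s.
c = cost.flatten()

# Each layer is placed exactly once: sum_s x[l, s] = 1.
A_eq = np.zeros((L, L * S))
for l in range(L):
    A_eq[l, l * S:(l + 1) * S] = 1.0
b_eq = np.ones(L)

# Per-switch resource usage stays within capacity: sum_l demand[l] * x[l, s] <= cap[s].
A_ub = np.zeros((S, L * S))
for s in range(S):
    for l in range(L):
        A_ub[s, l * S + s] = demand[l]
b_ub = capacity

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
x = res.x.reshape(L, S)

# Randomized rounding: choose a switch per layer with probability proportional
# to its fractional LP value. A full algorithm would also check and repair any
# capacity violations introduced by rounding; that step is omitted here.
placement = []
for l in range(L):
    probs = np.clip(x[l], 0.0, None)
    placement.append(int(rng.choice(S, p=probs / probs.sum())))

print("fractional solution:\n", x.round(2))
print("rounded placement:", placement)
```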


QoE Modeling in Volumetric Video Streaming: A Short Survey


IEEE/IFIP Network Operations and Management Symposium (NOMS) 2026

Rome, Italy, 18–22 May 2026

[PDF]

Mojtaba Mozhganfar (University of Tehran), Masoumeh Khodarahmi (IMDEA), Daniele Lorenzi (Bitmovin), Mahdi Dolati (Sharif University of Technology), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt), Ahmad Khonsari (University of Tehran), Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract

Volumetric video streaming enables six degrees of freedom (6DoF) interaction, allowing users to navigate freely within immersive 3D environments. Despite notable advancements, volumetric video remains an emerging field, presenting ongoing challenges and vast opportunities in content capture, compression, transmission, decompression, rendering, and display. As user expectations grow, delivering high Quality of Experience (QoE) in these systems becomes increasingly critical due to the complexity of volumetric content and the demands of interactive streaming. This paper reviews recent progress in QoE for volumetric streaming, beginning with an overview of QoE evaluation studies for volumetric video streaming, including subjective assessments tailored to 6DoF content. The core focus of this work is on objective QoE modeling, where we analyze existing models based on their input factors and methodological strategies. Finally, we discuss the key challenges and promising research directions for building perceptually accurate and adaptable QoE models that can support the future of immersive volumetric media.


Perceptual Quality Optimization of Image Super-Resolution


2026 IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE ICASSP 2026)

4–8 May 2026

Barcelona, Spain

[PDF]

Wei Zhou, Yixiao Li, Hadi Amirpour, Xiaoshuai Hao, Jiang Liu, Peng Wang, Hantao Liu

Abstract: Single image super-resolution (SR) has achieved remarkable progress with deep learning, yet most approaches rely on distortion-oriented losses or heuristic perceptual priors, which often lead to a trade-off between fidelity and visual quality. To address this issue, we propose an Efficient Perceptual Bi-directional Attention Network (Efficient-PBAN) that explicitly optimizes SR towards human-preferred quality. The proposed framework is trained on a newly constructed SR quality dataset that covers a wide range of state-of-the-art SR methods with corresponding human opinion scores. Using this dataset, Efficient-PBAN learns to predict perceptual quality in a way that correlates strongly with subjective judgments. The learned metric is further integrated into SR training as a differentiable perceptual loss, enabling closed-loop alignment between reconstruction and perceptual assessment. Extensive experiments demonstrate that our approach delivers superior perceptual quality.
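As a rough illustration of using a learned quality metric as a differentiable training loss, the PyTorch sketch below combines an L1 fidelity term with the output of a small frozen placeholder quality predictor. Both the predictor and the weighting are stand-ins, not Efficient-PBAN or its actual training setup.

```python
# Sketch: fidelity loss + frozen learned quality metric as a perceptual loss.
# "QualityPredictor" is a placeholder network, not the paper's model.
import torch
import torch.nn as nn

class QualityPredictor(nn.Module):
    """Tiny stand-in for a no-reference quality model scoring images in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class PerceptualSRLoss(nn.Module):
    """L1 fidelity plus (1 - predicted quality), weighted by lam."""
    def __init__(self, quality_model, lam=0.1):
        super().__init__()
        self.quality = quality_model.eval()
        for p in self.quality.parameters():   # freeze the metric; gradients still
            p.requires_grad_(False)           # flow into the SR network's output
        self.l1 = nn.L1Loss()
        self.lam = lam

    def forward(self, sr, hr):
        return self.l1(sr, hr) + self.lam * (1.0 - self.quality(sr).mean())

# Usage with dummy tensors standing in for an SR output and its ground truth.
loss_fn = PerceptualSRLoss(QualityPredictor())
sr = torch.rand(2, 3, 64, 64, requires_grad=True)
hr = torch.rand(2, 3, 64, 64)
print(loss_fn(sr, hr).item())
```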

 


ProgressIQA: Progressive Curriculum and Ensemble Self-Training for Filter-Altered Image Quality Assessment


2026 IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE ICASSP 2026)

4–8 May 2026

Barcelona, Spain

[PDF]

MohammadAli Hamidi, Hadi Amirpour, Christian Timmerer, Luigi Atzori

Abstract: Filter-altered images are increasingly prevalent in online visual communication, particularly on social media platforms. Assessing their perceived quality is essential for effectively managing visual communication. However, the perceived quality is content-dependent and non-monotonic, posing challenges for distortion-centric Image Quality Assessment (IQA) models. The Image Manipulation Quality Assessment (IMQA) benchmark addressed this gap with a dual-stream baseline that fuses filter-aware and quality-aware encoders via an MS-CAM attention module. However, only eight of the ten dataset folds are publicly released, making the task more data-constrained than the original 10-fold protocol. To overcome this limitation, we propose ProgressIQA, a data-efficient framework that integrates ensemble self-training, label distribution stratification, and multi-stage progressive curriculum learning. Fold-specific models are ensembled to generate stable teacher predictions, which are used as pseudo-labels for external filter-augmented images. These pseudo-labels are then balanced through stratified sampling and combined with the original data in a progressive curriculum that transfers knowledge from coarse to fine resolution across stages. Under the restricted 8-fold protocol, ProgressIQA achieves PLCC 0.7082 / SROCC 0.7107, outperforming the IMQA baseline (0.5616 / 0.5486) and even surpassing the SROCC of the original 10-fold evaluation (PLCC 0.7253 / SROCC 0.6870).
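The snippet below sketches the ensemble pseudo-labeling and label-distribution stratification steps in isolation, with random arrays standing in for teacher predictions on filter-augmented images; the bin edges and per-bin sample cap are arbitrary and do not reflect the paper's configuration.

```python
# Sketch of ensemble pseudo-labeling + stratified balancing (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

# K fold-specific teachers score N unlabeled, filter-augmented images.
K, N = 8, 1000
true_q = rng.uniform(0, 1, N)                               # hidden "quality"
teacher_preds = np.clip(true_q + rng.normal(0, 0.05, (K, N)), 0, 1)

# Ensemble the teachers to obtain stable pseudo-labels.
pseudo_labels = teacher_preds.mean(axis=0)

# Label-distribution stratification: bin pseudo-labels and cap samples per bin
# so the pseudo-labeled set is not skewed toward one score range.
bins = np.digitize(pseudo_labels, np.linspace(0, 1, 6)[1:-1])
selected = np.concatenate([
    rng.choice(np.where(bins == b)[0],
               size=min(100, int((bins == b).sum())), replace=False)
    for b in np.unique(bins)
])
# A progressive curriculum would then fine-tune the student on these samples
# at coarse-to-fine resolutions; that stage is omitted here.
print(f"{len(selected)} balanced pseudo-labeled samples across {len(np.unique(bins))} bins")
```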

 


BiNR: Live Video Broadcasting Quality Assessment


2026 IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE ICASSP 2026)

4–8 May 2026

Barcelona, Spain

[PDF]

Hadi Amirpour, MohammadAli Hamidi, Wei Zhou, Luigi Atzori, Christian Timmerer

Abstract: Live video broadcasting has become widely accessible through popular platforms such as Instagram, Facebook, and YouTube, enabling real-time content sharing and user interaction. While the Quality of Experience (QoE) has been extensively studied for Video-on-Demand (VoD) services, the QoE of live broadcast videos remains relatively underexplored. In this paper, we address this gap by proposing a novel machine learning–based model for QoE prediction in live video broadcasting scenarios. Our approach, BiNR, introduces two models: BiNR_fast, which uses only bitstream features for ultra-fast QoE predictions, and the full model BiNR_full, which integrates bitstream features with a pixel-based no-reference (NR) quality metric that works on the decoded signal.
We evaluate multiple regression models to predict subjective QoE scores and further conduct feature importance analysis. Experimental results show that our full model achieves a Pearson Correlation Coefficient (PCC)/Spearman Rank Correlation Coefficient (SRCC) of 0.92/0.92 with subjective scores, significantly outperforming the state-of-the-art methods.
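For readers unfamiliar with this evaluation style, the short example below trains a generic regressor on synthetic "bitstream-like" features and reports PCC/SRCC against synthetic subjective scores. The features, targets, and choice of regressor are placeholders rather than BiNR's actual feature set or model.

```python
# Generic regression-to-QoE sketch with PCC/SRCC reporting (synthetic data).
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical per-video bitstream features (e.g., bitrate, resolution, fps,
# average QP, stall ratio) with synthetic MOS-like targets on a 1-5 scale.
X = rng.uniform(0, 1, (300, 5))
y = 1 + 4 * X @ np.array([0.4, 0.2, 0.1, 0.2, 0.1]) + rng.normal(0, 0.2, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

pcc, _ = pearsonr(y_te, pred)
srcc, _ = spearmanr(y_te, pred)
print(f"PCC={pcc:.3f}  SRCC={srcc:.3f}")
```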

 
