Real-Time AI-Driven Avatar Generation for Sign Language in HTTP Adaptive Streaming
The 3rd ACM SIGCOMM Workshop on Emerging Multimedia Systems (ACM EMS 2025)
https://conferences.sigcomm.org/sigcomm/2025/workshop/ems/
8 September 2025 // Coimbra, Portugal
[PDF]
Daniele Lorenzi (AAU, Austria), Emanuele Artioli (AAU, Austria), Farzad Tashtarian (AAU, Austria), Christian Timmerer (AAU, Austria)
Abstract: As digital media consumption over the Internet surges globally, ensuring accessibility for all users becomes paramount. For people with hearing impairments, this means providing inclusion that goes beyond classic captioning, which does not convey the full emotional and contextual depth of spoken content. This work addresses this accessibility gap by exploring the use of AI-generated avatars capable of translating speech into sign language in real time. After defining the multifaceted challenges in this domain, we propose a novel AI-driven task partitioning to animate avatars for accurate and expressive sign language interpretation in live streaming.