Visual Quality Assessment Competition
VQualA
co-located with ICCV 2025
Visual quality assessment (VQA) plays a crucial role in computer vision, serving as a fundamental step in tasks such as image quality assessment (IQA), image super-resolution, document image enhancement, and video restoration. Traditional techniques often rely on scalar metrics such as Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM), which, while effective in certain contexts, fall short of capturing the perceptual quality experienced by human observers. This gap underscores the need for more perceptually aligned and comprehensive evaluation methods that can adapt to the growing demands of applications such as medical imaging, satellite remote sensing, immersive media, and document processing.

In recent years, advances in deep learning, generative models, and multimodal large language models (MLLMs) have opened new avenues for visual quality assessment. These models offer capabilities beyond traditional scalar metrics, enabling more nuanced assessments through natural language explanations, open-ended visual comparisons, and enhanced context awareness. With these innovations, VQA is evolving to better reflect human perceptual judgments, making it a critical enabler for next-generation computer vision applications.
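To make the scalar-metric discussion above concrete, here is a minimal sketch of PSNR computed with NumPy. The function name and the toy images are illustrative only, not part of any workshop baseline or challenge code.

```python
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two same-shape images, in dB."""
    diff = reference.astype(np.float64) - distorted.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a random 8-bit image plus small uniform noise.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-10, 11, size=clean.shape)
noisy = np.clip(clean.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(clean, noisy):.2f} dB")
```

Note that PSNR is a pure pixel-difference statistic: two distortions with identical mean squared error receive the same score regardless of how differently a human would perceive them, which is precisely the limitation that perceptual and learning-based metrics aim to address.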
The VQualA Workshop aims to bring together researchers and practitioners from academia and industry to discuss and explore the latest trends, challenges, and innovations in visual quality assessment. We welcome original research contributions addressing, but not limited to, the following topics:
- Image and video quality assessment
- Perceptual quality assessment techniques
- Multi-modal quality evaluation (image, video, text)
- Visual quality assessment for immersive media (VR/AR)
- Document image enhancement and quality analysis
- Quality assessment under adverse conditions (low light, weather distortions, motion blur)
- Robust quality metrics for medical and satellite imaging
- Perceptual-driven image and video super-resolution
- Visual quality in restoration tasks (denoising, deblurring, upsampling)
- Human-centric visual quality assessment
- Learning-based quality assessment models (CNNs, Transformers, MLLMs)
- Cross-domain visual quality adaptation
- Benchmarking and datasets for perceptual quality evaluation
- Integration of large language models for quality explanation and assessment
- Open-ended comparative assessments with natural language reasoning
- Emerging applications of VQA in autonomous driving, surveillance, and smart cities