Visual Quality Assessment Competition

VQualA

co-located with ICCV 2025

https://vquala.github.io/

Visual quality assessment (VQA) plays a crucial role in computer vision, serving as a fundamental step in tasks such as image quality assessment (IQA), image super-resolution, document image enhancement, and video restoration. Traditional visual quality assessment techniques often rely on scalar metrics such as Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM), which, while effective in certain contexts, fall short in capturing the perceptual quality experienced by human observers. This gap emphasizes the need for more perceptually aligned and comprehensive evaluation methods that can adapt to the growing demands of applications such as medical imaging, satellite remote sensing, immersive media, and document processing. In recent years, advancements in deep learning, generative models, and multimodal large language models (MLLMs) have opened up new avenues for visual quality assessment. These models offer capabilities that extend beyond traditional scalar metrics, enabling more nuanced assessments through natural language explanations, open-ended visual comparisons, and enhanced context awareness. With these innovations, VQA is evolving to better reflect human perceptual judgments, making it a critical enabler for next-generation computer vision applications.
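As a small, hedged illustration of the scalar metrics mentioned above, the sketch below computes PSNR directly from the mean squared error and compares it against the scikit-image implementations of PSNR and SSIM (assuming NumPy and scikit-image are installed; the images are synthetic stand-ins rather than real content):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def psnr(reference: np.ndarray, distorted: np.ndarray, data_range: float = 255.0) -> float:
    """PSNR in dB: 10 * log10(MAX^2 / MSE) between a reference and a distorted image."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)                  # synthetic "reference" frame
    dist = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)   # noisy "distorted" version

    print("PSNR (own formula)  :", psnr(ref, dist))
    print("PSNR (scikit-image) :", peak_signal_noise_ratio(ref, dist, data_range=255))
    print("SSIM (scikit-image) :", structural_similarity(ref, dist, data_range=255))
```

Such scalar scores are exactly the kind of signal that perceptual and MLLM-based approaches aim to go beyond.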

The VQualA Workshop aims to bring together researchers and practitioners from academia and industry to discuss and explore the latest trends, challenges, and innovations in visual quality assessment. We welcome original research contributions addressing, but not limited to, the following topics:

  • Image and video quality assessment
  • Perceptual quality assessment techniques
  • Multi-modal quality evaluation (image, video, text)
  • Visual quality assessment for immersive media (VR/AR)
  • Document image enhancement and quality analysis
  • Quality assessment under adverse conditions (low light, weather distortions, motion blur)
  • Robust quality metrics for medical and satellite imaging
  • Perceptual-driven image and video super-resolution
  • Visual quality in restoration tasks (denoising, deblurring, upsampling)
  • Human-centric visual quality assessment
  • Learning-based quality assessment models (CNNs, Transformers, MLLMs)
  • Cross-domain visual quality adaptation
  • Benchmarking and datasets for perceptual quality evaluation
  • Integration of large language models for quality explanation and assessment
  • Open-ended comparative assessments with natural language reasoning
  • Emerging applications of VQA in autonomous driving, surveillance, and smart cities

 

On 12 May, Dr Felix Schniz gave an invited guest talk at the Department of Culturology, Faculty of Arts, University of Ljubljana.

In his talk, Felix discussed the challenges and necessities of Critical Theory thinking in an age of digital pessimism, with a focus on game studies:

The Frankfurt School is a long-standing moral pillar of cultural studies. As the lines between humankind, virtuality, and technology blur more and more, however, the Frankfurt School’s stance towards the culture industry incrementally enters a fundamental crisis of purpose. In this guest lecture, I elaborate on the crisis of digi-pessimism by drawing a bridge from the early days of Frankfurt School thinking to its contemporary work on interactive virtual artworks, using the example of video games. I outline the origins of the school and its vital key terms along with its most prominent thinkers, its importance in the analysis of cultural artefacts, and its challenges when confronted with a medium that is as highly capitalistic in its conception as it is subversive. Focusing on Bloodborne (FromSoftware 2015), I portray the difficulties of navigating a virtual world that fuses pop-culture gothic horror with practices of anti-capitalist resistance, and the meaning of such spaces in the face of contemporary political and technological ruptures.

 

We are happy to announce that our paper “EnergyLess: An Energy-Aware Serverless Workflow Batch Orchestration on the Computing Continuum” (by Reza Farahani and Radu Prodan) has been accepted for IEEE CLOUD 2025, which will take place in Helsinki, Finland, in July 2025.

Venue: IEEE International Conference on Cloud Computing 2025 (IEEE CLOUD 2025)

Abstract: Serverless cloud computing is increasingly adopted for workflow management, optimizing resource utilization for providers while lowering costs for customers. Integrating edge computing into this paradigm enhances scalability and efficiency, enabling seamless workflow distribution across geographically dispersed resources on the computing continuum. However, existing serverless workflow orchestration methods on the computing continuum often prioritize time and cost objectives, neglecting energy consumption and carbon footprint. This paper introduces EnergyLess, a multi-objective concurrent serverless workflow batch orchestration service for the computing continuum. EnergyLess decomposes workflow functions within a batch into finer-grained sub-functions and schedules either the original or sub-function versions to appropriate regions and instances on the continuum, improving energy consumption, carbon footprint, economic cost, and completion time while considering individual workflow requirements and resource constraints. We formulate the problem as a mixed-integer nonlinear programming (MINLP) model and propose three lightweight heuristic algorithms for function decomposition and scheduling. Evaluations on a large-scale computing continuum testbed with realistic workflows, spanning AWS Lambda, Google Cloud Functions (GCF), and 325 fog and edge instances across six regions, demonstrate that EnergyLess improves cost efficiency by 75%, completion time by 6%, energy consumption by 15%, and CO2 emissions by 20% for a batch size of 300, compared to three baseline methods.
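As a purely illustrative sketch of the multi-objective placement idea described above (the candidate placements, objective weights, units, and greedy rule below are hypothetical and are not taken from the paper), a weighted-score heuristic might look roughly like this:

```python
from dataclasses import dataclass

# Hypothetical example: names, weights, and figures are illustrative assumptions,
# not the EnergyLess algorithms or cost model from the paper.

@dataclass
class Placement:
    region: str
    energy_wh: float   # estimated energy to run the function here
    carbon_g: float    # estimated CO2-equivalent emissions
    cost_usd: float    # estimated monetary cost
    latency_s: float   # estimated completion time

def weighted_score(p: Placement, w_energy=0.25, w_carbon=0.25, w_cost=0.25, w_time=0.25) -> float:
    """Lower is better: a simple weighted sum across the four objectives
    (a real system would normalize each objective before mixing them)."""
    return (w_energy * p.energy_wh + w_carbon * p.carbon_g
            + w_cost * p.cost_usd + w_time * p.latency_s)

def greedy_schedule(functions: dict[str, list[Placement]]) -> dict[str, Placement]:
    """Pick, per function, the candidate placement with the lowest weighted score."""
    return {name: min(candidates, key=weighted_score)
            for name, candidates in functions.items()}

if __name__ == "__main__":
    candidates = {
        "transcode_segment": [
            Placement("edge-eu", energy_wh=2.0, carbon_g=1.0, cost_usd=0.002, latency_s=4.0),
            Placement("cloud-us", energy_wh=5.0, carbon_g=4.0, cost_usd=0.001, latency_s=2.5),
        ],
    }
    for fn, best in greedy_schedule(candidates).items():
        print(fn, "->", best.region)
```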

 

Big news for spider fans: the viral sensation and wacky physics game A Webbing Journey is launching in Early Access on May 19th on PC via Steam, and it has a new trailer too!

Congrats to Sebastian and his team!

A Webbing Journey has been drumming up excitement over the past months, gathering millions of views through regular viral social media posts! The demo is currently available on Steam and has a stunning, overwhelmingly positive review rating of 99% (over 500 reviews!). It comes from indie dev team Fire Totem Games and publisher Future Friends Games (Exo One, SUMMERHOUSE, The Cabin Factory).

 

 

From 3 to 4 May, Game Studies and Engineering was present at HaruCon in Klagenfurt. As in every year, the programme direction hosted an info booth about Game Studies and Engineering @ AAU, enabled students to showcase their current game projects, and organised a workshop for those interested in the programme’s research.

The annual HaruCon is the biggest gaming, pop-culture, and fandom event in Carinthia. Attracting several thousand visitors over the weekend, it has consistently been a great opportunity for the master’s programme Game Studies and Engineering to present itself to an excited crowd of tech and play enthusiasts and to show them that their interest plays an important role in the academic landscape of the region.

Pattern Recognition Special Issue on

Advances in Multimodal-Driven Video Understanding and Assessment

The rapid growth of video content across various domains has led to an increasing demand for more intelligent and efficient video understanding and assessment techniques. This Special Issue focuses on the integration of multimodal information, such as audio, text, and sensor data, with video to enhance processing, analysis, and interpretation. Multimodal-driven approaches are crucial for numerous real-world applications, including automated surveillance, content recommendation, and healthcare diagnostics.

This Special Issue invites cutting-edge research on topics such as video capture, compression, transmission, enhancement, and quality assessment, alongside advancements in deep learning, multimodal fusion, and real-time processing frameworks. By exploring innovative methodologies and emerging applications, we aim to provide a comprehensive perspective on the latest developments in this dynamic and evolving field.

Topics of interest include but are not limited to:

  • Multimodal-driven video capture techniques
  • Video compression and efficient transmission for/using multimodal data
  • Deep learning-based video enhancement and super-resolution
  • Multimodal action and activity recognition
  • Audio-visual and text-video fusion methods
  • Video quality assessment with multimodal cues
  • Video captioning and summarization using multimodal data
  • Real-time multimodal video processing frameworks
  • Explainability and interpretability in multimodal video models
  • Applications in surveillance, healthcare, and autonomous systems

Guest editors:

Wei Zhou, PhD
Cardiff University, Cardiff, United Kingdom
Email: zhouw26@cardiff.ac.uk

Yakun Ju, PhD
University of Leicester, Leicester, United Kingdom
Email: yj174@leicester.ac.uk

Hadi Amirpour, PhD
University of Klagenfurt, Klagenfurt, Austria
Email: hadi.amirpour@aau.at

Bruce Lu, PhD
University of Western Australia, Perth, Australia
Email: bruce.lu@uwa.edu.au

Jun Liu, PhD
Lancaster University, Lancaster, United Kingdom
Email: j.liu81@lancaster.ac.uk

Important dates:

Submission Portal Open: April 04, 2025

Submission Deadline: October 30, 2025

Acceptance Deadline: May 30, 2026

Keywords:

Multimodal video analysis, video understanding, deep learning, video quality assessment, action recognition, real-time video processing, audio-visual learning, text-video processing

Mathias Lux

Games are a subject of research at the University of Klagenfurt. Mathias Lux explains how the “WarThunder” leaks can come about.

An interesting interview with our colleague Mathias Lux in the Kärntner Krone of 6 April 2025.

 

Sustainable and Serverless Service Management

on the Edge-Cloud Continuum

Abstract: In recent years, the computing continuum has transformed distributed computing by integrating centralized cloud infrastructure with decentralized edge devices, enabling support for computationally intensive and data-driven applications. Serverless computing, with its event-driven resource management, dynamic scalability, and infrastructure abstraction, has recently emerged as a complementary paradigm for such an environment. However, integrating serverless computing into the computing continuum introduces additional challenges in service and system management, such as ensuring seamless service scalability across distributed resources, maintaining low-latency and energy-efficient task execution, and orchestrating dynamic resource provisioning across diverse and heterogeneous infrastructures. This talk first reviews the key challenges and emerging solutions for serverless service and application management on the edge-cloud continuum. It then explores the broader implications of these schemes, highlighting open questions for achieving high-performance and sustainable service orchestration. The talk further presents our serverless frameworks designed to address these challenges across serverless functions, application workflows, and workflow batches, which leverage multi-objective scheduling algorithms, adaptive workload allocation strategies, and heuristic-based orchestration techniques to enable fine-grained resource optimization in the computing continuum, balancing energy efficiency, latency, economic cost, and service quality. Through real-world applications and experiments on the computing continuum testbed we designed, the talk demonstrates how these frameworks enable sustainable and scalable service management, paving the way for next-generation distributed computing systems.
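As a toy illustration of the adaptive workload allocation idea mentioned above (the throughput figures and the proportional-split rule are assumptions for illustration, not the frameworks presented in the talk), splitting a request batch between an edge tier and a cloud tier could be sketched as follows:

```python
# Toy sketch: proportional split of a request batch between edge and cloud tiers.
# The rule and the throughput figures are illustrative assumptions only.

def split_requests(total_requests: int, edge_rps: float, cloud_rps: float) -> tuple[int, int]:
    """Split a batch proportionally to each tier's sustainable throughput (requests/s)."""
    edge_share = edge_rps / (edge_rps + cloud_rps)
    to_edge = round(total_requests * edge_share)
    return to_edge, total_requests - to_edge

if __name__ == "__main__":
    # Assume the edge tier sustains 120 req/s and the cloud tier 480 req/s.
    edge, cloud = split_requests(total_requests=3000, edge_rps=120.0, cloud_rps=480.0)
    print(f"edge: {edge} requests, cloud: {cloud} requests")  # edge: 600, cloud: 2400
```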

 

Authors: Mario Colosi (University of Messina, Italy), Reza Farahani (University of Klagenfurt, Austria), Maria Fazio (University of Messina, Italy), Radu Prodan (University of Innsbruck, Austria), Massimo Villari (University of Messina, Italy)

Venue: International Joint Conference on Neural Networks (IJCNN), 30 June – 5 July 2025, Rome, Italy

Abstract: Data within a specific context gains deeper significance beyond its isolated interpretation. In distributed systems, interdependent data sources reveal hidden relationships and latent structures, representing valuable information for many applications. This paper introduces Osmotic Learning (OSM-L), a self-supervised distributed learning paradigm designed to uncover higher-level latent knowledge from distributed data. The core of OSM-L is osmosis, a process that synthesizes dense and compact representations by extracting contextual information, eliminating the need for raw data exchange between distributed entities. OSM-L iteratively aligns local data representations, enabling information diffusion and convergence into a dynamic equilibrium that captures contextual patterns. During training, it also identifies correlated data groups, functioning as a decentralized clustering mechanism. Experimental results confirm OSM-L’s convergence and representation capabilities on structured datasets, achieving over 0.99 accuracy in local information alignment while preserving contextual integrity.
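For intuition only, the toy sketch below (not the OSM-L algorithm itself; the gossip-style mixing rule, ring topology, and dimensions are illustrative assumptions) shows how nodes can iteratively align local representation vectors without exchanging raw data:

```python
import numpy as np

# Conceptual sketch only: a simple gossip-averaging loop that illustrates iterative
# alignment of local representations; it is not the OSM-L procedure from the paper.

def align_representations(local_reps: np.ndarray, adjacency: np.ndarray,
                          steps: int = 50, mix: float = 0.5) -> np.ndarray:
    """Each node repeatedly mixes its vector with the mean of its neighbours' vectors."""
    reps = local_reps.copy()
    for _ in range(steps):
        neighbour_mean = adjacency @ reps / adjacency.sum(axis=1, keepdims=True)
        reps = (1.0 - mix) * reps + mix * neighbour_mean
    return reps

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reps = rng.normal(size=(4, 8))          # 4 nodes, each with an 8-dimensional local representation
    ring = np.array([[0, 1, 0, 1],          # ring topology: each node exchanges with two neighbours
                     [1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [1, 0, 1, 0]], dtype=float)
    aligned = align_representations(reps, ring)
    print("spread across nodes before:", np.std(reps, axis=0).mean())
    print("spread across nodes after :", np.std(aligned, axis=0).mean())  # close to zero once aligned
```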

ACM Transactions on Multimedia Computing, Communications, and Applications

HTTP Adaptive Streaming: A Review on Current Advances and Future Challenges

 

Christian Timmerer (AAU, AT), Hadi Amirpour (AAU, AT), Farzad Tashtarian (AAU, AT), Samira Afzal (AAU, AT), Amr Rizk (Leibniz University Hannover, DE), Michael Zink (University of Massachusetts Amherst, US), and Hermann Hellwagner (AAU, AT)

Abstract: Video streaming has evolved from push-based broadcast/multicast approaches with dedicated hardware and software infrastructures to pull-based unicast schemes utilizing existing Web-based infrastructure to allow for better scalability. In this article, we provide an overview of the foundational principles of HTTP adaptive streaming (HAS), from video encoding to end-user consumption, while focusing on the key advancements in adaptive bitrate algorithms, quality of experience (QoE), and energy efficiency. Furthermore, the article highlights the ongoing challenges of optimizing network infrastructure, minimizing latency, and managing the environmental impact of video streaming. Finally, future directions for HAS, including immersive media streaming and neural network-based video codecs, are discussed, positioning HAS at the forefront of next-generation video delivery technologies.
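To make the adaptive bitrate idea concrete, here is a minimal sketch of a throughput-based rate selection rule (the bitrate ladder, smoothing factor, and safety margin are illustrative assumptions, not values from the article):

```python
# Minimal sketch of a throughput-based ABR rule with a hypothetical bitrate ladder.

BITRATE_LADDER_KBPS = [145, 365, 730, 1100, 2000, 3000, 4500, 6000]  # illustrative HAS ladder

def estimate_throughput(prev_estimate_kbps: float, measured_kbps: float, alpha: float = 0.8) -> float:
    """Exponentially weighted moving average over per-segment throughput samples."""
    return alpha * prev_estimate_kbps + (1.0 - alpha) * measured_kbps

def select_bitrate(throughput_kbps: float, safety: float = 0.8) -> int:
    """Pick the highest ladder rung that fits within a safety margin of the estimate."""
    budget = throughput_kbps * safety
    feasible = [rate for rate in BITRATE_LADDER_KBPS if rate <= budget]
    return feasible[-1] if feasible else BITRATE_LADDER_KBPS[0]

if __name__ == "__main__":
    estimate = 5000.0                                  # optimistic initial estimate (kbps)
    for measured in [4200.0, 3100.0, 1500.0, 900.0]:   # simulated per-segment throughput samples
        estimate = estimate_throughput(estimate, measured)
        print(f"estimate={estimate:7.1f} kbps -> request the {select_bitrate(estimate)} kbps rendition")
```

Real players typically combine such throughput estimates with buffer occupancy and QoE considerations.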

Keywords: HTTP Adaptive Streaming, HAS, DASH, Video Coding, Video Delivery, Video Consumption, Quality of Experience, QoE

 

https://athena.itec.aau.at/2025/03/acm-tomm-http-adaptive-streaming-a-review-on-current-advances-and-future-challenges/