Dragi Kimovski Receives FGCS Outstanding Reviewer Award 2025

We are proud to announce that Dragi Kimovski has been selected as a recipient of the 2025 Outstanding Reviewer Award by the Future Generation Computer Systems journal.

Out of more than 5,400 reviewers worldwide, only 31 were chosen for this distinction, making this recognition highly competitive and a testament to exceptional contributions to the peer-review process. The award recognizes his dedication, expertise, and commitment to maintaining high scientific standards, which have played an important role in supporting the quality and integrity of published research.

The full list of awardees will be featured in an upcoming open-access editorial in FGCS (Volume 182, September 2026).

Title: EPS: Efficient Patch Sampling for Video Overfitting in Deep Super-Resolution Model Training

Authors: Yiying Wei, Hadi Amirpour, Jong Hwan Ko, and Christian Timmerer

Abstract: Leveraging the overfitting property of deep neural networks (DNNs) is trending in video delivery systems to enhance video quality within bandwidth limits. Existing approaches transmit overfitted super-resolution (SR) model streams for low-resolution (LR) bitstreams, which are used to reconstruct high-resolution (HR) videos at the decoder. Although these approaches show promising results, the huge computational cost of training on a large number of video frames limits their practical applications. To overcome this challenge, we propose an efficient patch sampling method named EPS for video SR network overfitting, which identifies the most valuable training patches from video frames.

To this end, we first present two low-complexity Discrete Cosine Transform (DCT)-based spatial-temporal features to measure the complexity score of each patch directly. By analyzing the histogram distribution of these features, we then categorize all possible patches into different clusters and select training patches from the cluster with the highest spatial-temporal information. The number of sampled patches is adaptive based on the video content, addressing the trade-off between training complexity and efficiency.
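To make the idea concrete, here is a minimal sketch of DCT-based patch scoring and histogram-based selection, loosely following the description above. All function names, the score definitions (sum of non-DC DCT magnitudes for spatial complexity, DCT energy of the frame difference for temporal complexity), and the bin-based cluster selection are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.fft import dctn

def spatial_score(patch):
    # 2D DCT of the luma patch; energy outside the DC term is a cheap
    # proxy for spatial complexity (texture and detail).
    coeffs = dctn(patch.astype(np.float64), norm="ortho")
    coeffs[0, 0] = 0.0  # drop the DC term (mean brightness)
    return np.abs(coeffs).sum()

def temporal_score(patch, prev_patch):
    # DCT energy of the frame difference as a proxy for motion/change.
    diff = patch.astype(np.float64) - prev_patch.astype(np.float64)
    return np.abs(dctn(diff, norm="ortho")).sum()

def select_patches(frames, patch=64, n_bins=4):
    """Score every non-overlapping patch, bin the scores with a
    histogram, and keep only patches falling in the top-score bin,
    so the number of sampled patches adapts to the content."""
    scored = []
    for t in range(1, len(frames)):
        cur, prev = frames[t], frames[t - 1]
        for y in range(0, cur.shape[0] - patch + 1, patch):
            for x in range(0, cur.shape[1] - patch + 1, patch):
                s = (spatial_score(cur[y:y+patch, x:x+patch])
                     + temporal_score(cur[y:y+patch, x:x+patch],
                                      prev[y:y+patch, x:x+patch]))
                scored.append(((t, y, x), s))
    scores = np.array([s for _, s in scored])
    edges = np.histogram_bin_edges(scores, bins=n_bins)
    threshold = edges[-2]  # lower edge of the highest-score bin
    return [loc for loc, s in scored if s >= threshold]
```

In this toy version, flat or static regions score near zero and are discarded, while textured or moving patches land in the top bin and are kept for training.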

Our method reduces the number of training patches by 75.00% to 91.69%, depending on the resolution and number of clusters, while preserving high video quality and greatly improving training efficiency. Our method speeds up patch sampling by up to 82.1× compared to the state-of-the-art patch sampling technique (EMT).


Title: Perception-Inspired Network for Stereo Image Quality Assessment

Authors: Yongli Chang, Guanghui Yue, Bo Zhao, Li Yu, Yakun Ju, Hadi Amirpour, Moncef Gabbouj, and Wei Zhou

Abstract: Existing stereo image quality assessment (SIQA) methods generally have limitations in binocular fusion and fine-grained perception modeling. To address these issues, we propose a Perception-Inspired Network for SIQA that simulates binocular difference-guided fusion, high-frequency sensitivity, and hierarchical perception mechanisms of the human visual system (HVS). First, a difference-guided binocular fusion (DGBF) module is designed to mimic the binocular difference sensitivity mechanism, which exploits difference information at both the feature-level and image-level to optimize binocular fusion. Furthermore, the image distortion primarily affects the high-frequency components, which are critical for perceptual quality. To reflect this, we propose a high-frequency enhancement module (HFEM) to simulate the human eye’s sensitivity to edge and texture distortions. Finally, to better achieve fine-grained perception modeling, we propose a hierarchical quality regression strategy that simulates the human perceptual process, from perceiving local details to forming a global quality judgment, thereby achieving a quality prediction more aligned with human subjective evaluation. Experimental results demonstrate that the proposed method outperforms mainstream approaches, achieving a PLCC of 0.9734 on the LIVE I database, and a PLCC of 0.9632 on the LIVE II database.

Sustainability in Video Encoding and Streaming:
Energy-Efficient Techniques and Metrics

Workshop on Media Energy Consumption Measurement and Exposure

[Workshop URL] [Slides] [PDF]

Presenter: Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract: The presentation discusses the increasing environmental impact of video streaming and highlights the urgent need for more sustainable approaches across the entire streaming pipeline. Video traffic dominates internet usage and contributes significantly to global greenhouse gas emissions, while the demand for higher quality content continues to drive up computational complexity and energy consumption in encoding, delivery, and playback.

A central insight is that there is a strong trade-off between video quality and energy consumption, where small reductions in quality can lead to substantial energy savings. By introducing energy as an explicit optimization objective, techniques such as content-aware encoding, energy-aware bitrate ladder construction, and real-time optimization for live streaming can significantly reduce energy usage while maintaining nearly the same perceptual quality.
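As a hedged illustration of what "energy-aware bitrate ladder construction" could look like in practice: among candidate encodings at each rung, pick the one with the lowest estimated energy whose predicted quality stays within a small tolerance of the best achievable. The `Encoding` fields, the VMAF-drop tolerance, and the selection rule are all assumptions for this sketch, not the method presented in the talk.

```python
from dataclasses import dataclass

@dataclass
class Encoding:
    bitrate_kbps: int
    vmaf: float       # predicted perceptual quality
    energy_j: float   # estimated encoding energy (joules)

def energy_aware_ladder(candidates, rungs, max_vmaf_drop=2.0):
    """For each target bitrate rung, consider candidates at or below
    that bitrate, and return the lowest-energy one whose quality is
    within `max_vmaf_drop` VMAF points of the best achievable."""
    ladder = []
    for rung in rungs:
        feasible = [c for c in candidates if c.bitrate_kbps <= rung]
        if not feasible:
            continue  # no encoding fits this rung
        best_quality = max(c.vmaf for c in feasible)
        near_best = [c for c in feasible
                     if c.vmaf >= best_quality - max_vmaf_drop]
        ladder.append(min(near_best, key=lambda c: c.energy_j))
    return ladder
```

The key design choice this sketch illustrates is making energy an explicit objective: a slightly lower-quality encoding is preferred whenever it saves energy and the quality loss stays below a perceptual tolerance.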

The work also emphasizes the role of adaptive bitrate algorithms that incorporate energy consumption alongside traditional quality and buffer-based metrics. These approaches demonstrate that it is possible to simultaneously improve user experience and reduce energy consumption, indicating that sustainability and performance can be aligned rather than conflicting goals.

To enable such optimizations, the presentation introduces a range of metrics and models, including video complexity measures, quality prediction models, and machine learning-based approaches for estimating encoding and decoding energy as well as CO₂ emissions. These tools support more informed, data-driven decisions across the full streaming workflow from encoding to playback.

Another important theme is end-to-end optimization, where energy efficiency depends on the combined behavior of encoding strategies, bitrate selection, and client-side adaptation. Industry efforts confirm the practical relevance of these approaches and highlight the importance of collaboration and real-world validation.

Despite promising results, several challenges remain, including difficulties in measuring and benchmarking energy consumption, the lack of standardized methodologies, and the limited integration of energy considerations into existing workflows. Overall, the presentation argues that energy consumption should become a first-class optimization target in video streaming systems, similar to established quality metrics, to enable truly sustainable media delivery.

Keywords: sustainable streaming, energy-aware encoding, adaptive bitrate streaming, green multimedia, video compression, bitrate ladder optimization, QoE optimization, energy-quality tradeoff, video complexity analysis, CO2 footprint, energy modeling, machine learning for video, end-to-end optimization, eco-efficient streaming, real-time streaming optimization

Title: Dynamic Participatory Game Design with Local AI: From Interviews to Trauma-Aware Interactive Narratives

Authors: Kseniia Harshina, Tom Tucek, Mathias Lux

Location: TextStory 2026 – Delft, The Netherlands, March 2026

Abstract: We present a work-in-progress, trauma-aware participatory storytelling pipeline that uses a locally hosted large language model (LLM) as a neutral chatbot interviewer. The system supports self-paced narration without cloud processing, prioritizing privacy, data sovereignty, and participant control. Interview transcripts are transformed into a structured scene representation (extracted fields and dialogue prompts), which is then replayed through a lightweight prototype interface as an initial step toward interactive memory-based experiences. We report a small formative expert evaluation (n=2) focusing on perceived comfort, emotional safety, and usability. Participants described the interviewer as low-pressure and reflective, while highlighting limitations such as weak acknowledgement of long answers and occasional “forced turns.” We discuss design implications for narrative extraction, turn-taking, and staged evaluation in sensitive contexts, and outline next steps for community-informed studies with participants who have lived experience of displacement.

On Thursday, February 26, 2026, Kurt Horvath successfully defended his PhD thesis (Service Discovery in the Computing Continuum) under the supervision of Prof. Radu Prodan and Dr. Dragi Kimovski. The defense was chaired by Assoc.-Prof. DI Dr. Klaus Schöffmann, and the examiners were Prof. Valeria Cardellini (online) and Prof. Karin Anna Hummel (on-site).

We are pleased to congratulate Dr. Kurt Horvath on successfully passing his Ph.D. examination!

On 27 February, Sabrina Größing and Dr Felix Schniz welcomed a delegation from the University of Vienna's Game Lab to Klagenfurt. After exchanging origin stories, concepts, and objectives, the warm-hearted meeting quickly revealed shared ambitions and core values. The participants agreed to schedule follow-up visits to both Vienna and Klagenfurt, deepening the partnership with the Klagenfurt Critical Game Lab and laying the groundwork for a burgeoning network of game labs across Austria.


On 29 January, Dr Felix Schniz gave an interview on the topic of Spirituality and Video Games for Deutschlandfunk Kultur. The interview discussed his latest publications and offered insight into how technology, first and foremost video games, can help with the contemporary crisis of faith.

Title: Lightweight WebAssembly-Based Intrusion Detection for Zero Trust Edge Networks

Authors: Jonathan Weber (TU Wien, Austria), Ilir Murturi (University of Prishtina, Kosova), Xhevahir Bajrami (University of Prishtina, Kosova), Reza Farahani (University of Klagenfurt, Austria), Praveen Kumar Donta (Stockholm University, Sweden), Schahram Dustdar (TU Wien, Austria)

Venue: IEEE Access

Abstract: IoT devices deployed across computing continuum infrastructures present significant security challenges due to resource constraints and decentralization. Traditional centralized intrusion detection systems struggle in such environments because of limited connectivity, high latency, and single points of failure. To address these challenges, this article extends a learning-driven Zero Trust framework tailored to resource-constrained edge environments and proposes an approach for evaluating lightweight intrusion detection models in such environments. Our extended approach enables systematic evaluation of lightweight machine learning models for localized intrusion detection, comprising three layers: (i) compilation, (ii) execution, and (iii) measurement. The proposed approach is implemented using Rust and WebAssembly to ensure portable, efficient, and isolated execution across heterogeneous devices. Using this framework, seven representative intrusion detection models (i.e., Decision Tree (DT), Random Forest (RF), k-Nearest Neighbor (KNN), Logistic Regression (LR), Artificial Neural Network (ANN), and Convolutional Neural Network (CNN) variants) were implemented and evaluated on the UNSW-NB15 dataset. Results show that RF achieved the best trade-off between detection accuracy and efficiency, while simpler models (DT and LR) offered near-instant inference with minimal resource usage, making them ideal for highly constrained devices. In contrast, more complex models such as deep neural networks and KNN introduced significant overhead for only modest accuracy gains. These findings underscore the need to balance accuracy and resource efficiency for effective Zero Trust edge security.

Title: Performance Evaluation of Privacy Models for Data Streams on the Edge

Authors: Ilir Murturi, Boris Sedlak, Reza Farahani, Schahram Dustdar

Venue: Internet Technology Letters

Abstract: Recent advances in edge computing enable data stream privacy enforcement directly on resource‐constrained devices, reducing latency and the exposure of sensitive information. In this paper, we extend and validate our previously proposed privacy‐enforcing framework, which allows high‐level privacy policies to be expressed as chains of triggers and transformations, executed at the edge. To assess its practical viability, we conduct a comprehensive performance profiling of multiple privacy models across heterogeneous edge hardware platforms. Six privacy‐model chains, ranging from basic face detection to combined face‐and‐person anonymization, are evaluated across three representative edge devices. Key performance metrics (i.e., execution time, CPU utilization, memory usage, and power consumption) are measured to inform optimal placement of privacy transformations. Our evaluation offers critical insights into the effectiveness of the privacy‐enforcing framework on resource‐constrained devices, thereby guiding practitioners in selecting suitable deployment targets for privacy‐preserving data stream analytics on the edge.