Conference: International Symposium on Biomedical Imaging (ISBI 2026), April 8-11, 2026, London, UK

Paper Title: SAM-Fed: SAM-Guided Federated Semi-Supervised Learning for Medical Image Segmentation

Authors: Sahar Nasirihaghighi, Negin Ghamsarian, Yiping Li, Marcel Breeuwer, Raphael Sznitman, and Klaus Schoeffmann

Abstract:

Medical image segmentation is clinically important, yet data privacy and the cost of expert annotation limit the availability of labeled data. Federated semi-supervised learning (FSSL) offers a solution but faces two challenges: pseudo-label reliability depends on the strength of local models, and client devices often require compact or heterogeneous architectures due to limited computational resources. These constraints reduce the quality and stability of pseudo-labels, while large models, though more accurate, cannot be trained or used for routine inference on client devices. We propose SAM-Fed, a federated semi-supervised framework that leverages a high-capacity segmentation foundation model to guide lightweight clients during training. SAM-Fed combines dual knowledge distillation with an adaptive agreement mechanism to refine pixel-level supervision. Experiments on skin lesion and polyp segmentation across homogeneous and heterogeneous settings show that SAM-Fed consistently outperforms state-of-the-art FSSL methods.
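The abstract describes the adaptive agreement mechanism only at a high level. As a rough, hypothetical sketch of what agreement-based pseudo-label refinement between a lightweight client model and a foundation-model teacher can look like (all names, thresholds, and the loss weighting below are illustrative assumptions, not the authors' implementation), consider the following PyTorch snippet:

# Hypothetical sketch of agreement-based pseudo-label refinement, loosely inspired
# by the SAM-Fed description above; not the authors' code.
import torch
import torch.nn.functional as F

def agreement_pseudo_labels(client_logits, sam_logits, threshold=0.5, min_conf=0.8):
    """Combine client and foundation-model predictions into masked pseudo-labels.

    client_logits, sam_logits: (B, 1, H, W) raw logits for binary segmentation.
    Returns pseudo-labels and a per-pixel mask marking confident agreement.
    """
    client_prob = torch.sigmoid(client_logits)
    sam_prob = torch.sigmoid(sam_logits)

    client_pred = (client_prob > threshold).float()
    sam_pred = (sam_prob > threshold).float()

    # Pixels where both models make the same hard decision.
    agree = (client_pred == sam_pred).float()
    # Keep only pixels where the client is also far from the decision boundary.
    confidence = torch.maximum(client_prob, 1.0 - client_prob)
    mask = agree * (confidence > min_conf).float()

    # Use the foundation model's decision as the supervision target.
    return sam_pred, mask

def masked_distillation_loss(student_logits, pseudo, mask, eps=1e-8):
    """Pixel-wise BCE on unlabeled data, restricted to high-agreement pixels."""
    loss = F.binary_cross_entropy_with_logits(student_logits, pseudo, reduction="none")
    return (loss * mask).sum() / (mask.sum() + eps)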

Paper title: ELLMPEG: An Edge-based Agentic LLM Video Processing Tool

Authors: Zoha Azimi, Reza Farahani, Radu Prodan, Christian Timmerer

Venue: MMSys’26, The 17th ACM Multimedia Systems Conference, Hong Kong SAR, 4th – 8th April 2026

Abstract:

Large language models (LLMs), the foundation of generative AI systems like ChatGPT, are transforming many fields and applications, including multimedia, enabling more advanced content generation, analysis, and interaction. However, cloud-based LLM deployments face three key limitations: high computational and energy demands, privacy and reliability risks from remote processing, and recurring API costs. Recent advances in agentic AI, especially in structured reasoning and tool use, offer a better way to exploit open, locally deployed tools and LLMs. This paper presents ELLMPEG, an edge-enabled agentic LLM framework for the automated generation of video-processing commands. ELLMPEG integrates tool-aware Retrieval-Augmented Generation (RAG) with iterative self-reflection to produce and locally verify executable FFmpeg and VVenC commands directly at the edge, eliminating reliance on external cloud APIs. To evaluate ELLMPEG, we collect a dedicated prompt dataset comprising 480 diverse queries covering different categories of FFmpeg and Versatile Video Codec (VVC) encoder (VVenC) commands. We validate command-generation accuracy and evaluate four open-source LLMs in terms of command validity, tokens generated per second, inference time, and energy efficiency. We also execute the generated commands to assess their runtime correctness and practical applicability. Experimental results show that Qwen2.5, when augmented with the ELLMPEG framework, achieves an average command-generation accuracy of 78 % with zero recurring API cost, outperforming all other open-source models across both the FFmpeg and VVenC datasets.
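As a purely illustrative sketch of the generate-and-verify loop described above (the local_llm() stub, the prompts, and the verification strategy are assumptions; the actual ELLMPEG pipeline additionally uses tool-aware RAG and covers VVenC), such a self-reflection loop could look like this in Python:

# Minimal sketch of a generate-and-verify loop in the spirit of the
# self-reflection step described in the abstract; all details are assumptions.
import shlex
import subprocess

def local_llm(prompt: str) -> str:
    """Placeholder for a locally deployed LLM on the edge device."""
    raise NotImplementedError("wire this to your local model runtime")

def verify_command(cmd: str, timeout: int = 60) -> tuple[bool, str]:
    """Execute the candidate command and report success plus captured stderr."""
    try:
        result = subprocess.run(shlex.split(cmd), capture_output=True,
                                text=True, timeout=timeout)
        return result.returncode == 0, result.stderr
    except (OSError, subprocess.TimeoutExpired) as exc:
        return False, str(exc)

def generate_with_reflection(user_query: str, max_rounds: int = 3) -> str:
    prompt = f"Write a single ffmpeg command for this task:\n{user_query}"
    cmd = local_llm(prompt)
    for _ in range(max_rounds):
        ok, stderr = verify_command(cmd)
        if ok:
            return cmd
        # Self-reflection: feed the execution error back to the model.
        prompt = (f"The command\n{cmd}\nfailed with:\n{stderr}\n"
                  f"Fix it for the original task:\n{user_query}")
        cmd = local_llm(prompt)
    return cmd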

On 8 January 2026, Dr Felix Schniz gave a guest lecture at the University of Graz. Invited by the Department of English, he spoke on the narrative capabilities of video games:

 

Video Games as Storytelling Worlds

The capacity for narrative superstructures in video games is often depicted as being at odds with their elementary interactivity. From the faux skirmish between ludology and narratology that defined early game studies, through the question of spatial narrative capacities, to steadily refining perspectives on the literary purpose of video games, the central question has remained the same: what is the connection between agency and the literary?

 

This lecture session explores that question in depth. It builds on an approximate definition of this multidisciplinary and complex medium to map its historical literary emancipation. Following this overview, concrete examples are used to evaluate the use of key literary terminology for the analysis of video games. A distinct focus is placed on the unique potential of video games for interactive storytelling.

December 19-21, 2025
Organized by: Tom Tucek, Patrick Mieslinger

With help from: Bodo Thausing, Kristell Potocnik, Agon Guri

With 63 participants and 15 submitted games, this year’s winter game jam concluded just before the winter holidays.

Students, teachers, alumni, and even individuals not directly affiliated with the university came together to create new video games from scratch, all within a 48-hour time frame. The topic this time was “Resistance”.

A big thank you to everyone who participated or helped; you made this event a big success once again!

Please feel free to check out all the games here:

https://itch.io/jam/klujam-ws25

 

Title: From Latency to Engagement: Technical Synergies and Ethical Questions in IoT-Enabled Gaming

Authors: Kurt Horvath, Tom Tucek

Abstract: The convergence of video games with the Internet of Things (IoT), artificial intelligence (AI), and emerging 6G networks creates unprecedented opportunities and pressing challenges. On a technical level, IoT-enabled gaming requires ultra-low latency, reliable quality of service (QoS), and seamless multi-device integration supported by edge and cloud intelligence. On a societal level, gamification increasingly extends into education, health, and commerce, where points, badges, and immersive feedback loops can enhance engagement but also risk manipulation, privacy violations, and dependency. This position paper examines these dual dynamics by linking technical enablers, such as 6G connectivity, IoT integration, and edge/AI offloading, with ethical concerns surrounding behavioral influence, data usage, and accessibility. We propose a comparative perspective that highlights where innovation aligns with user needs and where safeguards are necessary. We identify open research challenges by combining technical and ethical analysis and emphasize the importance of regulatory and design frameworks to ensure responsible, inclusive, and sustainable IoT-enabled gaming.

Overview – 1st Workshop on Intelligent and Scalable Systems across the Computing Continuum

Title: Wi-Fi Enabled Edge Intelligence Framework for Smart City Traffic Monitoring using Low-Power IoT Cameras

Authors: Raphael Walcher, Kurt Horvath, Dragi Kimovski, Stojan Kitanov

Abstract: Real-time traffic monitoring in smart cities demands ultra-low latency processing to support time-critical decisions such as incident detection and congestion management. While cloud-based solutions offer robust computation, their inherent latency limits their applicability for such tasks. This work proposes a localized edge AI framework in which low-power IoT camera sensors either run inference locally or offload it to an NVIDIA Jetson Nano (GPU). Networking is achieved via Wi-Fi, enabling image classification without relying on wide-area infrastructure such as 5G or wired networks. We evaluate two processing strategies: local inference on the camera nodes and GPU-accelerated offloading to the Jetson Nano. We show that local processing is only feasible for lightweight models and low frame rates, whereas offloading enables near-real-time performance even for more complex models. These results demonstrate the viability of cost-effective, Wi-Fi-based edge AI deployments for latency-critical urban monitoring.
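As a simplified, hypothetical illustration of the offloading path only (the endpoint, address, and payload format below are assumptions, not the paper's actual protocol), a camera node could ship JPEG-encoded frames to an inference server on the Jetson Nano over the local Wi-Fi network like this:

# Hypothetical offloading sketch: a low-power camera node posts one frame to an
# assumed inference endpoint on the Jetson Nano and falls back gracefully on timeout.
import requests

JETSON_URL = "http://192.168.1.50:8080/classify"  # assumed LAN address of the Jetson

def offload_frame(jpeg_bytes: bytes, timeout: float = 0.5):
    """Send one frame for GPU inference; return None on failure so the caller can
    skip the frame or fall back to a lightweight local model."""
    try:
        resp = requests.post(JETSON_URL, data=jpeg_bytes,
                             headers={"Content-Type": "image/jpeg"},
                             timeout=timeout)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        return None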

Overview – 1st Workshop on Intelligent and Scalable Systems across the Computing Continuum

Title: 6G Network O-RAN Energy Efficiency Performance Evaluation

Authors: Ivan Petrov, Kurt Klaus Horvath, Stojan Kitanov, Dragi Kimovski, Fisnik Doko, Toni Janevski

Abstract: The Open Radio Access Network (O-RAN) paradigm introduces disaggregated and virtualized RAN elements connected through open interfaces, enabling AI-driven optimization across the RAN. It allows flexible energy management strategies by leveraging the RAN Intelligent Controller (RIC) and computing continuum (Dew-Edge-Cloud) resources. With its vendor-neutral, disaggregated components and AI-driven control, O-RAN is a strong candidate for future networks. The transition to 6G requires a more open, adaptable, and intelligent RAN architecture, and Open RAN is envisioned as a key enabler, offering agility, cost efficiency, energy savings, and resilience. This paper assesses O-RAN’s performance in 6G mobile networks in terms of energy efficiency.

 

Paper Title: STEP-MR: A Subjective Testing and Eye-Tracking Platform for Dynamic Point Clouds in Mixed Reality

Conference Details:  32nd International Conference on Multimedia Modeling; Jan 29 – Jan 31, 2026; Prague, Czech Republic

Authors: Shivi Vats (AAU, Austria), Christian Timmerer (AAU, Austria), Hermann Hellwagner (AAU, Austria)

Abstract: 

The use of point cloud (PC) streaming in mixed reality (MR) environments is of particular interest due to the immersiveness and the six degrees of freedom (6DoF) provided by the 3D content. However, this immersiveness requires significant bandwidth. Innovative solutions have been developed to address these challenges, such as PC compression and/or spatially tiling the PC to stream different portions at different quality levels. This paper presents a brief overview of a Subjective Testing and Eye-tracking Platform for dynamic point clouds in Mixed Reality (STEP-MR) for the Microsoft HoloLens 2. STEP-MR was used to conduct subjective tests (described in [1]) with 41 participants, yielding over 2000 responses and more than 150 visual attention maps, the results of which can be used, among other things, to improve dynamic (animated) point cloud streaming solutions mentioned above. Building on our previous platform, the new version now enables eye-tracking tests, including calibration and heatmap generation. Additionally, STEP-MR features modifications to the subjective tests’ functionality, such as a new rating scale and adaptability to participant movement during the tests, along with other user experience changes.
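As a loose, simplified analogue of the heatmap generation mentioned above (STEP-MR works with gaze on dynamic 3D point clouds in MR; the 2D grid, resolution, and smoothing parameter below are assumptions for illustration only), a visual attention map can be accumulated from gaze samples as follows:

# Illustrative sketch, not the STEP-MR implementation: accumulate 2D gaze points
# into a grid and smooth with a Gaussian filter to obtain an attention heatmap.
import numpy as np
from scipy.ndimage import gaussian_filter

def attention_heatmap(gaze_xy, width=1920, height=1080, sigma=30.0):
    """gaze_xy: iterable of (x, y) pixel coordinates of gaze samples."""
    grid = np.zeros((height, width), dtype=float)
    for x, y in gaze_xy:
        if 0 <= x < width and 0 <= y < height:
            grid[int(y), int(x)] += 1.0
    heat = gaussian_filter(grid, sigma=sigma)
    return heat / heat.max() if heat.max() > 0 else heat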

Paper Title: Eye-Tracking, Quality Assessment, and QoE Prediction Models for Point Cloud Videos: Extended Analysis of the ComPEQ-MR Dataset

Link: https://ieeexplore.ieee.org/document/11263821

 

Authors: Shivi Vats (AAU, Austria), Minh Nguyen (Fraunhofer FOKUS, Berlin), Christian Timmerer (AAU, Austria), Hermann Hellwagner (AAU, Austria)

Abstract: 

Point cloud videos, also termed dynamic point clouds (DPCs), have the potential to provide immersive experiences with six degrees of freedom (6DoF). However, there are still several open issues in understanding the Quality of Experience (QoE) and visual attention of end users while experiencing 6DoF volumetric videos. For instance, the quality impact of compressing DPCs, which requires a significant amount of both time and computational resources, needs further investigation. Also, QoE prediction models for DPCs in 6DoF have rarely been developed due to the lack of visual quality databases. Furthermore, visual attention in 6DoF is hardly explored, which impedes research into more sophisticated approaches for adaptive streaming of DPCs. In this paper, we review and analyze in detail the open-source Compressed Point cloud dataset with Eye-tracking and Quality assessment in Mixed Reality (ComPEQ-MR). The dataset, initially presented in [24], comprises 4 uncompressed (raw) DPCs as well as compressed versions processed by Moving Picture Experts Group (MPEG) reference tools (i.e., VPCC and 2 GPCC variants). The dataset includes eye-tracking data of 41 study participants watching the raw DPCs with 6DoF, yielding 164 visual attention maps. We analyze this data and present head and gaze movement results here. The dataset also includes results from subjective tests conducted to assess the quality of the DPCs, each both uncompressed and compressed with 12 levels of distortion, resulting in 2132 quality scores. This work presents the QoE performance results of the compression techniques, the factors with significant impact on participant ratings, and the correlation of the objective Peak Signal-to-Noise Ratio (PSNR) metrics with Mean Opinion Scores (MOS). The results indicate superior performance of the VPCC codec as well as significant variations in quality ratings based on codec choice, bitrate, and quality/distortion level, providing insights for optimizing point cloud video compression in MR applications. Finally, making use of the subjective scores, we trained and evaluated models for QoE prediction for DPCs compressed using the pertinent MPEG tools. We present the models and their prediction results, noting that the fine-tuned ITU-T P.1203 models exhibit good correlation with the subjective ratings. The dataset is available at https://ftp.itec.aau.at/datasets/ComPEQ-MR/.
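For readers unfamiliar with how such metric-to-MOS correlations are typically computed, here is a small generic example (with hypothetical values, not the dataset's actual scores) relating an objective metric such as PSNR to MOS via Pearson and Spearman correlation:

# Generic illustration of correlating an objective metric with subjective MOS;
# the numbers are made up for demonstration purposes.
from scipy.stats import pearsonr, spearmanr

psnr = [32.1, 35.4, 38.0, 40.2, 42.5]   # hypothetical per-stimulus PSNR values (dB)
mos = [2.1, 2.9, 3.5, 4.0, 4.4]         # hypothetical mean opinion scores (1-5)

plcc, _ = pearsonr(psnr, mos)    # linear correlation coefficient
srocc, _ = spearmanr(psnr, mos)  # rank-order correlation coefficient
print(f"PLCC={plcc:.3f}, SROCC={srocc:.3f}")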

Paper Title: Predicting Encoding Energy from Low-Pass Anchors for Green Video Streaming

Authors: Zoha Azimi (AAU, Austria), Reza Farahani (AAU, Austria), Vignesh V Menon (Fraunhofer HHI, Berlin), Christian Timmerer (AAU, Austria)

Event: 1st International Workshop on Intelligent and Scalable Systems Across the Computing Continuum (ScaleSys ’25), November 18, 2025, Vienna, Austria, https://scalesys2025.itec.aau.at/

Abstract:  

Video streaming now represents the dominant share of Internet traffic, as ever-higher-resolution content is distributed across a growing range of heterogeneous devices to sustain user Quality of Experience (QoE). However, this trend raises significant concerns about energy efficiency and carbon emissions, requiring methods that trade off energy and QoE. This paper proposes a lightweight energy prediction method that estimates the energy consumption of high-resolution video encodings from reference encodings generated at lower resolutions (so-called anchors), eliminating the need for exhaustive per-segment energy measurements, a process that is infeasible at scale. We automatically select encoding parameters, such as resolution and quantization parameter (QP), to achieve substantial energy savings while keeping perceptual quality, as measured by Video Multimethod Assessment Fusion (VMAF), within acceptable limits. We implement and evaluate our approach with the open-source VVenC encoder on 100 video sequences from the Inter4K dataset across multiple encoding settings. Results show that, for an average VMAF score reduction of only 1.68, which stays below the Just Noticeable Difference (JND) threshold, our method achieves 51.22 % encoding energy savings and 53.54 % decoding energy savings compared to a scenario with no quality degradation.
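As a rough illustration of the anchor idea only (the paper's actual predictor is not reproduced here; the linear-in-pixel-count model and all numbers below are assumptions), one could extrapolate encoding energy from low-resolution anchor measurements like this:

# Illustrative sketch: fit a simple linear model in pixels-per-frame from measured
# low-resolution anchor encodings, then predict energy for a higher resolution.
import numpy as np

# Hypothetical anchor measurements: (width, height, measured encoding energy in joules)
anchors = [(640, 360, 120.0), (960, 540, 255.0)]

pixels = np.array([w * h for w, h, _ in anchors], dtype=float)
energy = np.array([e for _, _, e in anchors], dtype=float)

# Least-squares fit of energy ~ a * pixels + b from the anchor points.
a, b = np.polyfit(pixels, energy, deg=1)

def predict_energy(width: int, height: int) -> float:
    return a * width * height + b

print(f"Predicted 1080p encoding energy: {predict_energy(1920, 1080):.1f} J")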