On November 14, Dr Felix Schniz held a workshop for Master's students wishing to pursue an academic career related to game studies and game engineering. Invited by ÖH representatives, he focused on preparing for one's first conference presentation, covering topics such as abstract writing, conference etiquette, and publishing a conference paper.

On 14.11.2025, Farzad Tashtarian defended his habilitation thesis “Network-Assisted Adaptive Streaming: Toward Optimal QoE through System Collaboration”.

Congratulations!

Committee members:
Prof. Martin Pinzger (Chairperson), Prof. Oliver Hohlfeld (external member), Prof. Bernhard Rinner, Prof. Angelika Wiegele, Prof. Chitchanok Chuengsatiansup, MSc Zoha Azimi Ourimi, Dr. Alice Tarzariol, Kateryna Taranov, and Gregor Lammer

Title: Agentic Edge Intelligence: A Research Agenda

Authors: Lauri Lovén, Reza Farahani, Ilir Murturi, Stephan Sigg, Schahram Dustdar

Abstract: Agentic AI is rapidly transforming autonomous decision-making, yet its deployment across the edge-cloud continuum remains poorly understood. This paper introduces the concept of agentic edge intelligence, an emerging paradigm in which autonomous agents operate across the computing continuum to negotiate computational resources, data, and services within dynamic digital marketplaces. We position this concept at the intersection of edge intelligence, multi-agent systems, and computational economics, where distributed decision-making replaces centralized orchestration. The paper outlines key research challenges, including scalability, interoperability, market stability, and ethical governance, and proposes a research agenda addressing theoretical, architectural, and societal dimensions. By integrating mechanism design with trustworthy AI and edge computing, the envisioned real-time AI economy can serve as a self-organizing infrastructure for efficient, transparent, and equitable resource exchange in future digital ecosystems.
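
The marketplace perspective rests on mechanism design. As a toy illustration (not taken from the paper), the Python sketch below shows how edge agents could compete for a compute slot through a sealed-bid second-price auction, a mechanism under which truthful bidding is the dominant strategy; the agent names and bid values are invented.

    # Toy sketch of the kind of mechanism such a marketplace might use (not from
    # the paper): edge agents submit sealed bids for a compute slot, and a
    # second-price (Vickrey) auction picks the winner. Names and bids are
    # invented for illustration.
    def vickrey_auction(bids: dict[str, float]) -> tuple[str, float]:
        """Return (winning agent, price paid = second-highest bid)."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
        return winner, price

    bids = {"camera_agent": 0.12, "robot_agent": 0.30, "kiosk_agent": 0.21}
    print(vickrey_auction(bids))  # ('robot_agent', 0.21)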

Venue: International Workshop on Intelligent Systems and Paradigms for Next Generation Computing Evolution (INSPIRE 2025) in conjunction with the 18th IEEE/ACM Utility and Cloud Computing Conference (UCC)

Title: Serverless Everywhere: A Comparative Analysis of WebAssembly Workflows Across Browser, Edge, and Cloud

Authors: Mario Colosi, Reza Farahani, Lauri Lovén, Radu Prodan, Massimo Villari

Abstract: WebAssembly (Wasm) is a binary instruction format that enables portable, sandboxed, and near-native execution across heterogeneous platforms, making it well-suited for serverless workflow execution on browsers, edge nodes, and cloud servers. However, its performance and stability depend heavily on factors such as startup overhead, runtime execution model (e.g., Ahead-of-Time (AOT) and Just-in-Time (JIT) compilation), and resource variability across deployment contexts. This paper evaluates a Wasm-based serverless workflow executed consistently from the browser to edge and cloud instances. The setup uses wasm32-wasi modules: in the browser, execution occurs within a web worker, while on Edge and Cloud, an HTTP shim streams frames to the Wasm runtime. We measure cold- and warm-start latency, per-step delays, workflow makespan, throughput, and CPU/memory utilization to capture the end-to-end behavior across environments. Results show that AOT compilation and instance warming substantially reduce startup latency. For workflows with small payloads, the browser achieves competitive performance owing to fully in-memory data exchanges. In contrast, as payloads grow, the workflow transitions into a compute- and memory-intensive phase where AOT execution on edge and cloud nodes distinctly surpasses browser performance.
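
As a rough illustration of the edge/cloud side of such a setup, the Python sketch below shows an HTTP shim that streams an incoming frame to a wasm32-wasi module via the wasmtime CLI and returns the module's output. The module name, port, and invocation details are assumptions for illustration, not the paper's actual implementation.

    # Minimal sketch (not the authors' code): an HTTP shim that forwards each
    # POSTed frame to a wasm32-wasi module via the wasmtime CLI and returns
    # the module's stdout. "step.wasm" and the port are assumed placeholders.
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    WASM_MODULE = "step.wasm"  # hypothetical wasm32-wasi workflow step

    class WasmShim(BaseHTTPRequestHandler):
        def do_POST(self):
            frame = self.rfile.read(int(self.headers["Content-Length"]))
            # Feed the frame to the module over stdin and read the processed
            # result from stdout (a precompiled/AOT module could be run similarly).
            result = subprocess.run(
                ["wasmtime", WASM_MODULE],
                input=frame, capture_output=True, check=True,
            )
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.end_headers()
            self.wfile.write(result.stdout)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), WasmShim).serve_forever()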

Venue: International Workshop on Intelligent and Scalable Systems across the Computing Continuum (ScaleSys 2025) in conjunction with the 15th International Conference on the Internet of Things (IoT 2025)

Title: Toward Sustainability-Aware LLM Inference on Edge Clusters

Authors: Kolichala Rajashekar, Nafiseh Sharghivand, Radu Prodan, Reza Farahani

Abstract: Large language models (LLMs) require substantial computational resources, leading to significant carbon emissions and operational costs. Although training is energy-intensive, the long-term environmental burden arises from inference, amplified by the massive global query volume. Cloud-based inference offers scalability but suffers from latency and bandwidth constraints due to centralized processing and continuous data transfer. Edge clusters can instead mitigate these limitations by enabling localized execution, yet they face trade-offs between performance, energy efficiency, and device constraints. This short paper presents a sustainability-aware LLM inference approach for edge clusters comprising NVIDIA Jetson Orin NX (8GB) and NVIDIA Ada 2000 (16GB) devices. It aims to balance inference latency and carbon footprint through carbon- and latency-aware routing strategies, guided by empirical benchmarking of energy consumption and execution time across diverse prompts and batch (i.e., group of prompts) configurations. We compared baseline greedy strategies to carbon-aware and latency-aware strategies that route prompts to specific hardware based on benchmarking information. Experimental evaluation shows that a batch size of four prompts achieves a trade-off between throughput and energy efficiency, while larger batches risk GPU memory saturation.
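
To make the routing idea concrete, the following Python sketch (not the paper's implementation) routes a prompt batch to the device that minimizes a weighted, normalized combination of benchmarked latency and estimated carbon footprint; the device entries, energy figures, and carbon intensity are invented placeholders, not measurements from the paper.

    # Illustrative carbon- and latency-aware routing over a benchmark table.
    BENCHMARKS = {
        # device: (seconds per batch, watt-hours per batch); placeholder values
        "jetson_orin_nx_8gb": (2.4, 0.9),
        "ada_2000_16gb": (1.1, 2.1),
    }
    CARBON_INTENSITY = 400.0  # gCO2 per kWh, assumed grid value

    def route_batch(alpha: float = 0.5) -> str:
        """Pick a device; alpha=1.0 weighs latency only, alpha=0.0 carbon only."""
        max_latency = max(lat for lat, _ in BENCHMARKS.values())
        max_carbon = max(e / 1000.0 * CARBON_INTENSITY for _, e in BENCHMARKS.values())

        def score(device: str) -> float:
            latency, energy_wh = BENCHMARKS[device]
            carbon = energy_wh / 1000.0 * CARBON_INTENSITY
            return alpha * latency / max_latency + (1 - alpha) * carbon / max_carbon

        return min(BENCHMARKS, key=score)

    print(route_batch(alpha=0.8))  # latency-leaning choice
    print(route_batch(alpha=0.2))  # carbon-leaning choice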

Venue: International Workshop on Intelligent and Scalable Systems across the Computing Continuum (ScaleSys 2025) in conjunction with the 15th International Conference on the Internet of Things (IoT 2025)

On 22 October 2025, Dr Felix Schniz opened the newly founded Media Club of AAU with a spectacular guest lecture. Founded by the Department of English, the Media Club was established to offer students an extracurricular and multidisciplinary journey through a leitmotif every semester. Starting with “Dystopia” in Winter 2025, Felix Schniz took the audience on a journey through the video game “Hellblade: Senua’s Sacrifice” and reflected on technological and psychological facets of game design.

We are happy to announce that our tutorial “Serverless Orchestration on the Edge-Cloud Continuum: From Small Functions to Large Language Models” (by Reza Farahani) has been accepted for IEEE/ACM UCC 2025, which will take place in Nantes, France, in December 2025.

Venue: IEEE/ACM International Conference on Utility and Cloud Computing (UCC) (https://ucc-conference.org/)

Abstract: Serverless computing simplifies application development by abstracting infrastructure management, allowing developers to focus on functionality while cloud providers handle resource provisioning and scaling. However, orchestrating serverless workloads across the edge-cloud continuum presents challenges, from managing heterogeneous resources to ensuring low-latency execution and maintaining fault tolerance and scalability. These challenges intensify when scaling from lightweight functions to compute-intensive tasks such as large language model (LLM) inferences in distributed environments. This tutorial explores serverless computing’s evolution from small functions to large-scale AI workloads. It introduces foundational concepts like Function-as-a-Service (FaaS) and Backend-as-a-Service (BaaS) before covering advanced edge-cloud orchestration strategies. Topics include dynamic workload distribution, multi-objective scheduling, energy-efficient orchestration, and deploying functions with diverse computational requirements. Hands-on demonstrations with Kubernetes, GCP Functions, AWS Lambda, OpenFaaS, OpenWhisk, and monitoring tools provide participants with practical insights into optimizing performance and energy efficiency in serverless orchestration across distributed infrastructures.
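
As a taste of the FaaS basics the tutorial starts from, the snippet below shows a minimal Python handler in the shape used by the OpenFaaS Python template, one of the platforms demonstrated; the function body is a placeholder rather than tutorial material.

    # handler.py — minimal FaaS-style function in the shape of the OpenFaaS
    # Python template. The body is illustrative only.
    import json

    def handle(req: str) -> str:
        """Echo the payload's keys; a real function would do the actual work here."""
        payload = json.loads(req) if req else {}
        return json.dumps({"received_keys": list(payload.keys())})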

Authors: Samira Afzal (Baylor University), Narges Mehran (Salzburg Research Forschungsgesellschaft mbH), Farzad Tashtarian (AAU, Austria), Andrew C. Freeman (Baylor University), Radu Prodan (University of Innsbruck), Christian Timmerer (AAU, Austria)

Venue: IEEE VCIP 2025, December 1 – December 4, 2025, Klagenfurt, Austria

Abstract: The environmental impact of video streaming is gaining more attention due to its growing share in global internet traffic and energy consumption. To support accurate and transparent sustainability assessments, we present SEED (Streaming Energy and Emission Dataset): an open dataset for estimating energy usage and CO2 emissions in adaptive video streaming. SEED comprises over 500 video segments. It provides segment-level measurements of energy consumption and emissions for two primary stages: provisioning, which encompasses encoding and storage on cloud infrastructure, and end-user consumption, including network interface retrieval, video decoding, and display on end-user devices. The dataset covers multiple codecs (AVC, HEVC), resolutions, bitrates, cloud instance types, and geographic regions, reflecting real-world variations in computing efficiency and regional carbon intensity. By combining empirical benchmarks with component-level energy models, SEED enables detailed analysis and supports the development of energy- and emission-aware adaptive bitrate (ABR) algorithms. The dataset is publicly available at: https://github.com/cd-athena/SEED.

SEED is available at: https://github.com/cd-athena/SEED
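
As an illustration of the kind of estimate SEED supports, the Python sketch below combines per-segment energy for the two stages described in the abstract with a regional carbon intensity to obtain grams of CO2 per segment; all numbers are invented placeholders, not values from the dataset.

    # Back-of-the-envelope emission estimate in the spirit of SEED:
    # (provisioning + consumption energy) x regional carbon intensity.
    SEGMENT_ENERGY_WH = {
        "provisioning": 1.8,  # encoding + storage on a cloud instance (assumed)
        "consumption": 0.6,   # retrieval + decoding + display on the device (assumed)
    }
    CARBON_INTENSITY_G_PER_KWH = {"AT": 120.0, "PL": 650.0}  # assumed regional values

    def segment_emissions(region: str) -> float:
        """Return grams of CO2 for delivering one segment in a given region."""
        total_kwh = sum(SEGMENT_ENERGY_WH.values()) / 1000.0
        return total_kwh * CARBON_INTENSITY_G_PER_KWH[region]

    for region in ("AT", "PL"):
        print(region, round(segment_emissions(region), 2), "g CO2 per segment")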

On 24 September 2025, Senior Scientist Dr Felix Schniz held a workshop on Tarot cards for the youth centre kwadr.at in Klagenfurt. Over the course of three hours, he educated a very excited crowd on the history of the cards, their purpose for divination and self-reflection, Tarot-card-based games, and what card games have to do with modern computer games. kwadr.at is a space for young people aged about 14 to 27 to hang out, be creative, and participate in cultural or social activities.

NeVES: Real-Time Neural Video Enhancement for HTTP Adaptive Streaming

IEEE VCIP 2025

December 1 – December 4, 2025

Klagenfurt, Austria

[PDF]

Daniele Lorenzi, Farzad Tashtarian, Christian Timmerer

Abstract: Enhancing low-quality video content has attracted particular interest since recent advances in deep learning. Since most of the video content consumed worldwide is delivered over the Internet via HTTP Adaptive Streaming (HAS), implementing these techniques in web browsers would ease access to visually enhanced content on user devices.

In this paper, we present NeVES, a multimedia system capable of enhancing the quality of video content streamed through HAS in real time.

The demo is available at: https://github.com/cd-athena/NeVES.