
We are glad that the paper was accepted for publication in IEEE Transactions on Multimedia.

Authors: Hadi Amirpour (AAU, AT), Jingwen Zhu (Nantes University, FR), Wei Zhu (Cardiff University, UK), Patrick Le Callet (Nantes University, FR), and Christian Timmerer (AAU, AT)

Abstract: In HTTP Adaptive Streaming (HAS), a video is encoded at various bitrate-resolution pairs, collectively known as the bitrate ladder, allowing users to select the most suitable representation based on their network conditions. Optimizing this set of pairs to enhance the Quality of Experience (QoE) requires accurately measuring the quality of these representations. VMAF and ITU-T’s P.1204.3 are highly reliable metrics for assessing the quality of representations in HAS. However, in practice, using these metrics for optimization is often impractical for live streaming applications due to their high computational costs and the large number of bitrate-resolution pairs in the bitrate ladder that need to be evaluated. To address their high complexity, our paper introduces a new method called VQM4HAS, which extracts low-complexity features including (i) video complexity features, (ii) frame-level encoding statistics logged during the encoding process, and (iii) lightweight video quality metrics. These extracted features are then fed into a regression model to predict VMAF and P.1204.3, respectively.

The VQM4HAS model is designed to operate on a per-bitrate-resolution-pair, per-resolution, and cross-representation basis, optimizing quality predictions across different HAS scenarios. Our experimental results demonstrate that VQM4HAS achieves a high correlation with VMAF and P.1204.3, with Pearson correlation coefficients (PCC) ranging from 0.95 to 0.96 for VMAF and 0.97 to 0.99 for P.1204.3, depending on the resolution. Despite this high correlation, VQM4HAS is significantly less complex than both metrics, with 98% and 99% lower complexity than VMAF and P.1204.3, respectively, making it suitable for live streaming scenarios.
We also conduct a feature importance analysis to further reduce the complexity of the proposed method. Furthermore, we evaluate the effectiveness of our method by using it to predict subjective quality scores. The results show that VQM4HAS achieves a higher correlation with subjective scores at various resolutions, despite its minimal complexity.
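Since the abstract describes the core idea at a high level (feeding low-complexity features into a regression model to predict VMAF), here is a minimal, hedged sketch of that general workflow. The feature names, the choice of a random forest regressor, and the synthetic data are illustrative assumptions and not the paper’s actual VQM4HAS pipeline.

```python
# Illustrative sketch only: predicting VMAF from low-complexity features
# with a regression model, in the spirit of VQM4HAS. Feature names, the
# regressor, and the synthetic data are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 500

# Hypothetical low-complexity features per representation:
# video complexity, bitrate, resolution, encoding statistics, lightweight metric.
X = np.column_stack([
    rng.uniform(0, 100, n),                 # spatial complexity
    rng.uniform(0, 100, n),                 # temporal complexity
    rng.uniform(0.1, 20, n),                # bitrate (Mbps)
    rng.choice([360, 720, 1080, 2160], n),  # resolution height
    rng.uniform(20, 50, n),                 # average frame QP (encoding log)
    rng.uniform(25, 50, n),                 # PSNR as a lightweight quality metric
])
# Synthetic target standing in for VMAF scores (0-100).
y = np.clip(2.0 * X[:, 5] - 0.3 * X[:, 4] + rng.normal(0, 3, n), 0, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pcc, _ = pearsonr(model.predict(X_te), y_te)
print(f"PCC on held-out data: {pcc:.3f}")
```

The PCC values of 0.95 to 0.99 reported in the paper are obtained on real encoded representations; the synthetic setup above only demonstrates the shape of the approach.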

 

The following papers have been accepted at the Intel4EC Workshop 2025, which will be held on June 4, 2025, in Milan, Italy, in conjunction with the 39th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2025).

 

Title: 6G Infrastructures for Edge AI: An Analytical Perspective

Authors: Kurt Horvath, Shpresa Tuda*, Blerta Idrizi*, Stojan Kitanov*, Fisnik Doko*, Dragi Kimovski (*Mother Teresa University Skopje, North Macedonia)

Abstract: The convergence of Artificial Intelligence (AI) and the Internet of Things has accelerated the development of distributed, network-sensitive applications, necessitating ultra-low latency, high throughput, and real-time processing capabilities. While 5G networks represent a significant technological milestone, their ability to support AI-driven edge applications remains constrained by performance gaps observed in real-world deployments. This paper addresses these limitations and highlights critical advancements needed to realize a robust and scalable 6G ecosystem optimized for AI applications. Furthermore, we conduct an empirical evaluation of 5G network infrastructure in central Europe, with latency measurements ranging from 61 ms to 110 ms across geographically close areas. These values exceed the requirements of latency-critical AI applications by approximately 270%, revealing significant shortcomings in current deployments. Building on these findings, we propose a set of recommendations to bridge the gap between existing 5G performance and the requirements of next-generation AI applications.
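The abstract does not detail the measurement methodology, so purely as a hedged illustration of comparing measured latency against an edge-AI latency budget, the sketch below times TCP connection setup to a placeholder endpoint and reports the excess over an assumed 30 ms budget. If the implied budget were on that order, the reported 110 ms upper bound would exceed it by roughly 267%, in line with the approximately 270% figure above.

```python
# Illustrative sketch only: measuring round-trip connection latency and
# comparing it with an assumed latency budget for an edge-AI application.
# The target host, port, and the 30 ms budget are placeholders, not the
# paper's measurement setup.
import socket
import statistics
import time

HOST, PORT = "example.com", 443   # placeholder endpoint
BUDGET_MS = 30.0                  # assumed latency budget

samples = []
for _ in range(10):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass                      # measure connection setup time only
    samples.append((time.perf_counter() - start) * 1000)

median_ms = statistics.median(samples)
excess = (median_ms - BUDGET_MS) / BUDGET_MS * 100
print(f"median latency: {median_ms:.1f} ms "
      f"({excess:+.0f}% vs. {BUDGET_MS:.0f} ms budget)")
```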

 

Title: Blockchain consensus mechanisms for democratic voting environments

Authors: Thomas Auer, Kurt Horvath, Dragi Kimovski

Abstract: Democracy relies on robust voting systems to ensure transparency, fairness, and trust in electoral processes. Despite their foundational role, voting mechanisms – both manual and electronic – remain vulnerable to threats such as vote manipulation, data loss, and administrative interference. These vulnerabilities highlight the need for secure, scalable, and cost-efficient alternatives to safeguard electoral integrity. The proposed fully decentralized voting system leverages blockchain technology to overcome critical challenges in modern voting systems, including scalability, cost-efficiency, and transaction throughput. By eliminating the need for a centralized authority, the system ensures transparency, security, and real-time monitoring through the integration of Distributed Ledger Technologies. This novel architecture reduces operational costs, enhances voter anonymity, and improves scalability, achieving significantly lower costs for 1,000 votes than traditional voting methods.

The system introduces a formalized decentralized voting model that adheres to constitutional requirements and practical standards, making it suitable for implementation in direct and representative democracies. Additionally, the design accommodates high transaction volumes without compromising performance, ensuring reliable operation even in large-scale elections. The results demonstrate that this system outperforms classical approaches regarding efficiency, security, and affordability, paving the way for broader adoption of blockchain-based voting solutions.

 

 

We are happy to announce that our tutorial “Serverless Orchestration on the Edge-Cloud Continuum: From Small Functions to Large Language Models” (by Reza Farahani and Radu Prodan) has been accepted for IEEE ICDCS 2025, which will take place in Glasgow, Scotland, UK, in July 2025.

Venue: 45th IEEE International Conference on Distributed Computing Systems (ICDCS) (https://icdcs2025.icdcs.org/)

Abstract: Serverless computing simplifies application development by abstracting infrastructure management, allowing developers to focus on functionality while cloud providers handle resource provisioning and scaling. However, orchestrating serverless workloads across the edge-cloud continuum presents challenges, from managing heterogeneous resources to ensuring low-latency execution and maintaining fault tolerance and scalability. These challenges intensify when scaling from lightweight functions to compute-intensive tasks such as large language model (LLM) inferences in distributed environments. This tutorial explores serverless computing’s evolution from small functions to large-scale AI workloads. It introduces foundational concepts like Function-as-a-Service (FaaS) and Backend-as-a-Service (BaaS) before covering advanced edge-cloud orchestration strategies. Topics include dynamic workload distribution, multi-objective scheduling, energy-efficient orchestration, and deploying functions with diverse computational requirements. Hands-on demonstrations with Kubernetes, GCP Functions, AWS Lambda, OpenFaaS, OpenWhisk, and monitoring tools provide participants with practical insights into optimizing performance and energy efficiency in serverless orchestration across distributed infrastructures.
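The tutorial’s hands-on part relies on real platforms (Kubernetes, OpenFaaS, AWS Lambda, and others); as a toy, self-contained illustration of the kind of multi-objective placement decision it discusses, the sketch below weighs latency against energy when choosing between a hypothetical edge node and a cloud region for a function invocation. All figures, names, and the weighting scheme are assumptions for illustration only.

```python
# Toy sketch of a multi-objective placement decision for a serverless
# function across the edge-cloud continuum. All numbers and the weighting
# are illustrative assumptions, not part of the tutorial's material.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    latency_ms: float   # expected end-to-end latency
    energy_j: float     # expected energy per invocation
    capacity: int       # concurrent invocations it can absorb

def score(t: Target, w_latency: float = 0.7, w_energy: float = 0.3) -> float:
    """Lower is better: weighted sum of normalized latency and energy."""
    return w_latency * t.latency_ms / 100 + w_energy * t.energy_j / 10

def place(targets: list[Target], load: int) -> Target:
    """Pick the best-scoring target that still has capacity for the load."""
    feasible = [t for t in targets if t.capacity >= load]
    return min(feasible, key=score)

targets = [
    Target("edge-node", latency_ms=15, energy_j=4.0, capacity=8),
    Target("cloud-region", latency_ms=80, energy_j=2.5, capacity=512),
]
print(place(targets, load=4).name)    # small load -> edge
print(place(targets, load=64).name)   # large load -> cloud
```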


Co-located with ACM Multimedia 2025

URL: https://weizhou-geek.github.io/workshop/MM2025.html

In health and medicine, an immense amount of data is being generated by distributed sensors and cameras, as well as multimodal digital health platforms that support multimedia such as audio, video, images, 3D geometry, and text. The availability of such multimedia data from medical devices and digital record systems has greatly increased the potential for automated diagnosis. The past several years have witnessed an explosion of interest, and dizzyingly fast development, in computer-aided medical investigations using MRI, CT, X-rays, images, point clouds, etc. This proposed workshop focuses on various multimedia computing techniques (including mobile and hardware solutions) for health and medicine; it targets real-world healthcare data and problems, involves a large number of stakeholders, and is closely connected with people’s health.

Authors: Emanuele Artioli (Alpen-Adria Universität Klagenfurt, Austria), Farzad Tashtarian (Alpen-Adria Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria Universität Klagenfurt, Austria)

Venue: ACM 35th Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV’25)

Abstract: The primary challenge of video streaming is to balance high video quality with smooth playback. Traditional codecs are well tuned for this trade-off, yet their inability to use context means they must encode the entire video data and transmit it to the client.
This paper introduces ELVIS (End-to-end Learning-based VIdeo Streaming Enhancement Pipeline), an end-to-end architecture that combines server-side encoding optimizations with client-side generative in-painting to remove and reconstruct redundant video data. Its modular design allows ELVIS to integrate different codecs, in-painting models, and quality metrics, making it adaptable to future innovations.
Our results show that current technologies achieve improvements of up to 11 VMAF points over baseline benchmarks, though challenges remain for real-time applications due to computational demands. ELVIS represents a foundational step toward incorporating generative AI into video streaming pipelines, enabling higher quality experiences without increased bandwidth requirements.
By leveraging generative AI, we aim to develop a client-side tool, to be incorporated into a dedicated video streaming player, that combines the accessibility of multilingual dubbing with the authenticity of the original speaker’s performance, effectively allowing a single actor to deliver their voice in any language. To the best of our knowledge, no current streaming system can capture the speaker’s unique voice or emotional tone.
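ELVIS relies on learned generative in-painting models; as a much simpler stand-in that only illustrates the remove-and-reconstruct idea described above, the sketch below blanks out a frame region on the “server” side and restores it on the “client” side with OpenCV’s classical in-painting. The region, frame content, and in-painting method are arbitrary choices and not part of the actual pipeline.

```python
# Simplified stand-in for the remove-and-reconstruct idea behind ELVIS:
# the "server" blanks out a region of the frame (less data to encode),
# and the "client" reconstructs it. The real pipeline uses learned
# generative in-painting models; OpenCV's classical cv2.inpaint is used
# here only to keep the example self-contained.
import cv2
import numpy as np

# Synthetic 720p frame standing in for decoded video content.
frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)
frame = cv2.GaussianBlur(frame, (31, 31), 0)  # smooth so in-painting has structure

# Server side: mark a region as "redundant" and blank it out.
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
mask[300:420, 500:700] = 255
degraded = frame.copy()
degraded[mask == 255] = 0

# Client side: reconstruct the missing region.
restored = cv2.inpaint(degraded, mask, 3, cv2.INPAINT_TELEA)

psnr = cv2.PSNR(frame, restored)
print(f"PSNR of reconstructed frame vs. original: {psnr:.2f} dB")
```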

 

We are glad that the paper was accepted for publication in Future Generation Computer Systems. This journal publishes cutting-edge research on high-performance computing, distributed systems, and advanced computing technologies for future computing environments.

Authors: Juan José Escobar, Pablo Sánchez-Cuevas, Beatriz Prieto, Rukiye Savran Kızıltepe, Fernando Díaz-del-Río, Dragi Kimovski

Abstract: Time and energy efficiency are highly relevant objectives in high-performance computing systems, where executing tasks incurs high costs. Among such tasks, evolutionary algorithms are of particular interest due to their inherent parallel scalability and usually costly fitness evaluation functions. In this respect, several scheduling strategies for workload balancing in heterogeneous systems have been proposed in the literature, with runtime and energy consumption reduction as their goals. Our hypothesis is that a dynamic workload distribution can be fitted with greater precision using metaheuristics, such as genetic algorithms, instead of linear regression. Therefore, this paper proposes a new mathematical model to predict the energy-time behaviour of applications based on multi-population genetic algorithms, which dynamically distributes the evaluation of individuals among the CPU-GPU devices of heterogeneous clusters. An accurate predictor would save time and energy by selecting the best resource set before running such applications. The estimation of the workload distributed to each device has been carried out by simulation, while the model parameters have been fitted in a two-phase run using another genetic algorithm and the experimental energy-time values of the target application as input. When the new model is analysed and compared with another based on linear regression, the one proposed in this work significantly improves on the baseline approach, showing normalised prediction errors of 0.081 for runtime and 0.091 for energy consumption, compared to 0.213 and 0.256 for the baseline approach.
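The paper fits its energy-time model parameters with a genetic algorithm in a two-phase run; as a minimal, hedged illustration of fitting a parametric runtime model to measurements with an evolutionary optimizer, the sketch below uses SciPy’s differential evolution (a different evolutionary method than the paper’s) on a made-up two-parameter model with synthetic data.

```python
# Minimal illustration of fitting a parametric energy-time model to
# measurements with an evolutionary optimizer. The model form, the data,
# and the use of differential evolution (instead of the paper's genetic
# algorithm) are assumptions for the sake of a self-contained example.
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic (GPU workload fraction, runtime in seconds) measurements.
workload = np.linspace(0.1, 0.9, 9)
runtime = 12.0 / (1.0 + 3.0 * workload) + np.random.default_rng(1).normal(0, 0.1, 9)

def model(params, w):
    t0, speedup = params
    return t0 / (1.0 + speedup * w)

def loss(params):
    return np.mean((model(params, workload) - runtime) ** 2)

result = differential_evolution(loss, bounds=[(1.0, 50.0), (0.1, 10.0)])
t0, speedup = result.x
print(f"fitted t0={t0:.2f}s, speedup factor={speedup:.2f}, mse={result.fun:.4f}")
```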

We are glad that the paper was accepted for publication in SCSA Journal. The journal covers research in smart computing systems and applications, with a focus on next-generation networking, cloud, and edge computing solutions.

Authors: Stojan Kitanov, Dragi Kimovski, Fisnik Doko, Kurt Horvath, Shpresa Tuda, Blerta Idrizi

Abstract: The rapid proliferation of IoT devices, coupled with the exponential growth of generated data, has necessitated the development of advanced network architectures. As a result, 5G mobile networks have already begun to face challenges such as network congestion, latency, and scalability limitations. Therefore, the need for a robust and future-proof solution becomes increasingly evident. In this direction, many research initiatives and industrial communities have started to work on the development of 6G mobile networks. On the other hand, the emerging concept of the Computing Continuum encompasses the seamless integration of edge, fog, and cloud computing resources to provide a unified and distributed computing environment, aiming to enable real-time data processing, low-latency response, and intelligent decision-making at the network edge. The primary objective of this research paper is to address the shortcomings of existing network infrastructures by integrating advanced AI capabilities in 6G mobile networks with the Computing Continuum. Moreover, we propose a Computing Continuum Middleware for Artificial Intelligence over 6G networks, offering high-level and well-defined (“standardized”) interfaces that create an automated, sustainable loop for managing IoT applications utilizing AI approaches over 6G networks.

On February 25, 2025, Felix Schniz held a talk titled “Mit Erfahrung lehren: Von Kafka, Spielen, und dem Erleben abstrakter Inhalte” (“Teaching with Experience: On Kafka, Games, and the Experience of Abstract Content”) at the conference Didaktik des TTRPG – Das ludonarrative Rollenspiel im Deutschunterricht in Cologne. His talk focused on the use of video games and information technology didactics and their potential role in Central European high school teaching contexts. The innovative methodologies developed and applied at the University of Klagenfurt, such as tech-focused Post-Mortem documentation from a humanities perspective, were well received by the audience.

We are glad that the paper was accepted for publication at ICFEC 2025. ICFEC focuses on innovations in cloud and edge computing, bringing together researchers and practitioners to discuss emerging challenges and solutions.

Title: ADApt: Edge Device Anomaly Detection and Microservice Replica Prediction

Authors: Narges Mehran, Nikolay Nikolov, Radu Prodan, Dumitru Roman, Dragi Kimovski, Frank Pallas, Peter Dorfinger

Venue: 9th IEEE International Conference on Fog and Edge Computing (ICFEC 2025), in conjunction with CCGrid 2025, 19-22 May 2025, Tromsø, Norway

Abstract: The recent shift towards running more user microservices on Edge computing infrastructure brings new orchestration challenges, such as detecting overutilized resources and scaling out overloaded microservices in response to growing request volumes. In this work, we present ADApt, which uses monitoring data related to Edge computing resources to detect utilization-based resource anomalies (e.g., CORE or MEM), investigate the scalability of microservices, and adapt application executions. To reduce bottlenecks in the use of computing resources, we first explore monitored devices executing microservices with various requirements, detect overutilization-based processing events, and score them. Thereafter, based on the memory requirements, ADApt predicts the processing requirements of the microservices and estimates the number of replicas running on the overutilized devices. The prediction results show that gradient boosting regression-based replica prediction reduces the MAE, MAPE, and RMSE compared to other models. Moreover, ADApt is able to estimate the number of replicas for each microservice close to the actual data obtained without any prediction and to reduce the utilization of the devices.
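As a hedged sketch of the replica-prediction step described above (a gradient boosting regressor mapping device utilization features to a replica count), the following uses scikit-learn on synthetic monitoring data; the feature set, data, and rounding rule are assumptions rather than ADApt’s actual configuration.

```python
# Illustrative sketch only: predicting the number of microservice replicas
# from device utilization features with gradient boosting regression, in
# the spirit of ADApt. Features, data, and thresholds are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400

cpu_util = rng.uniform(0.2, 1.0, n)     # CORE utilization of the device
mem_util = rng.uniform(0.2, 1.0, n)     # MEM utilization of the device
request_rate = rng.uniform(10, 500, n)  # incoming requests per second
X = np.column_stack([cpu_util, mem_util, request_rate])

# Synthetic ground truth: more replicas when the device is overloaded.
y = np.rint(1 + 4 * cpu_util + 3 * mem_util + request_rate / 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = np.rint(model.predict(X_te))
print(f"MAE of predicted replica count: {mean_absolute_error(y_te, pred):.2f}")
```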

The 15th International Conference on the Internet of Things (IoT 2025) is set to take place in late November 2025 in Vienna, Austria, organized by TU Wien. The conference will feature a research paper track, keynotes, workshops, and poster and demo sessions, all held in the unique “Kuppelsaal” of TU Wien. Dragi Kimovski from Klagenfurt University will serve as one of the Workshop Chairs, focusing on attracting high-quality workshops that drive innovation in IoT research. The conference aims to connect world-class researchers with industry experts to steer innovation across various IoT verticals, including smart industry, smart cities, smart health, and smart environments (iot-conference.org).