Title: HTTP Adaptive Streaming: A Review on Current Advances and Future Challenges

Venue: ACM Transactions on Multimedia Computing, Communications, and Applications

 

Authors: Christian Timmerer (AAU, AT), Hadi Amirpour (AAU, AT), Farzad Tashtarian (AAU, AT), Samira Afzal (AAU, AT), Amr Rizk (Leibniz University Hannover, DE), Michael Zink (University of Massachusetts Amherst, US), and Hermann Hellwagner (AAU, AT)

Abstract: Video streaming has evolved from push-based broadcast/multicast approaches with dedicated hardware/software infrastructures to pull-based unicast schemes that utilize existing Web-based infrastructure for better scalability. In this article, we provide an overview of the foundational principles of HTTP adaptive streaming (HAS), from video encoding to end-user consumption, focusing on key advancements in adaptive bitrate algorithms, quality of experience (QoE), and energy efficiency. Furthermore, the article highlights the ongoing challenges of optimizing network infrastructure, minimizing latency, and managing the environmental impact of video streaming. Finally, future directions for HAS, including immersive media streaming and neural network-based video codecs, are discussed, positioning HAS at the forefront of next-generation video delivery technologies.
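To make the client-driven adaptation described in the abstract concrete, the snippet below sketches a simple throughput-based bitrate selection rule of the kind used by adaptive bitrate (ABR) algorithms; the bitrate ladder values and the safety margin are illustrative assumptions, not values from the article.

```python
# Minimal sketch of throughput-based adaptive bitrate (ABR) selection.
# The bitrate ladder and the 0.8 safety margin are illustrative assumptions.

BITRATE_LADDER_KBPS = [235, 750, 1750, 4300, 8100]  # hypothetical representations

def select_bitrate(throughput_estimate_kbps: float, safety_margin: float = 0.8) -> int:
    """Pick the highest representation whose bitrate fits under the
    estimated throughput scaled by a safety margin."""
    budget = throughput_estimate_kbps * safety_margin
    feasible = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return feasible[-1] if feasible else BITRATE_LADDER_KBPS[0]

if __name__ == "__main__":
    # e.g., a 5 Mbps throughput estimate selects the 1750 kbps representation
    print(select_bitrate(5000))
```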

Keywords: HTTP Adaptive Streaming, HAS, DASH, Video Coding, Video Delivery, Video Consumption, Quality of Experience, QoE

 

https://athena.itec.aau.at/2025/03/acm-tomm-http-adaptive-streaming-a-review-on-current-advances-and-future-challenges/

Farzad recently participated in an interview with the Austrian newspaper Der Standard. The conversation covered a range of topics, and the final article has now been published. You can find the full piece at the following link:

https://www.derstandard.at/story/3000000262214/forscher-aus-klagenfurt-inspizieren-windraeder-mit-drohnenschwaermen

 

 

Neural Representations for Scalable Video Coding

IEEE International Conference on Multimedia & Expo (ICME) 2025

 

Authors: Yiying Wei (AAU, Austria), Hadi Amirpour (AAU, Austria) and Christian Timmerer (AAU, Austria)

 

Abstract: Scalable video coding encodes a video stream into multiple layers so that it can be decoded at different levels of quality/resolution, depending on the device’s capabilities or the available network bandwidth. Recent advances in implicit neural representation (INR)-based video codecs have shown compression performance competitive with both traditional and other learning-based methods. In INR approaches, a neural network is trained to overfit a video sequence, and its parameters are compressed to create a compact representation of the video content. While they achieve promising results, existing INR-based codecs require training a separate network for each resolution/quality of a video, which makes scalable compression challenging. In this paper, we propose Neural representations for Scalable Video Coding (NSVC), which encodes multi-resolution/-quality videos into a single neural network comprising multiple layers. The base layer (BL) of the neural network encodes the video stream at the lowest resolution/quality. Enhancement layers (ELs) encode additional information that can be used, with the BL as a starting point, to reconstruct a higher-resolution/-quality video during decoding. This multi-layered structure allows the scalable bitstream to be truncated to adapt to the client’s bandwidth conditions or computational decoding constraints. Experimental results show that NSVC outperforms AVC’s Scalable Video Coding (SVC) extension and surpasses HEVC’s scalable extension (SHVC) in terms of VMAF. Additionally, NSVC achieves comparable decoding speeds at high resolutions/qualities.
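To illustrate the layered idea for readers unfamiliar with it, the sketch below shows how a base layer (BL) could decode a low-resolution frame and an enhancement layer (EL) could refine it into a higher-resolution one. The network sizes, the upsampling factor, and the frame-index embedding are assumptions made purely for illustration; this is not the NSVC architecture.

```python
# Minimal sketch of a base-layer / enhancement-layer split for an INR-style
# video codec. Architecture details are illustrative assumptions, not NSVC.

import torch
import torch.nn as nn

class BaseLayer(nn.Module):
    """Maps a frame-index embedding to a low-resolution frame."""
    def __init__(self, embed_dim=64, h=90, w=160):
        super().__init__()
        self.h, self.w = h, w
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.GELU(),
            nn.Linear(256, 3 * h * w), nn.Sigmoid(),
        )

    def forward(self, t_embed):
        return self.net(t_embed).view(-1, 3, self.h, self.w)

class EnhancementLayer(nn.Module):
    """Refines an upsampled base-layer frame into a higher-resolution one."""
    def __init__(self, scale=2):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False)
        self.refine = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.GELU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, base_frame):
        up = self.up(base_frame)
        return torch.clamp(up + self.refine(up), 0.0, 1.0)

if __name__ == "__main__":
    t_embed = torch.randn(1, 64)   # stand-in frame-index embedding
    bl, el = BaseLayer(), EnhancementLayer()
    low = bl(t_embed)              # decode with BL only (truncated bitstream)
    high = el(low)                 # decode BL + EL for higher resolution/quality
    print(low.shape, high.shape)   # (1, 3, 90, 160) and (1, 3, 180, 320)
```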

 

ICME 2025: Neural Representations for Scalable Video Coding | ATHENA Christian Doppler (CD) Laboratory

 

 

 


We are glad that the following paper was accepted for publication in IEEE Transactions on Multimedia.

Authors: Hadi Amirpour (AAU, AT), Jingwen Zhu (Nantes University, FR), Wei Zhu (Cardiff University, UK), Patrick Le Callet (Nantes University, FR), and Christian Timmerer (AAU, AT)

Abstract: In HTTP Adaptive Streaming (HAS), a video is encoded at various bitrate-resolution pairs, collectively known as the bitrate ladder, allowing users to select the most suitable representation based on their network conditions. Optimizing this set of pairs to enhance the Quality of Experience (QoE) requires accurately measuring the quality of these representations. VMAF and ITU-T’s P.1204.3 are highly reliable metrics for assessing the quality of representations in HAS. However, using these metrics for optimization is often impractical for live streaming applications due to their high computational costs and the large number of bitrate-resolution pairs in the bitrate ladder that need to be evaluated. To address this complexity, our paper introduces a new method called VQM4HAS, which extracts low-complexity features, including (i) video complexity features, (ii) frame-level encoding statistics logged during the encoding process, and (iii) lightweight video quality metrics. These extracted features are then fed into a regression model to predict VMAF and P.1204.3 scores.

The VQM4HAS model is designed to operate on a per bitrate-resolution pair, per-resolution, and cross-representation basis, optimizing quality predictions across different HAS scenarios. Our experimental results demonstrate that VQM4HAS achieves a high correlation with VMAF and P.1204.3, with Pearson correlation coefficients (PCC) ranging from 0.95 to 0.96 for VMAF and 0.97 to 0.99 for P.1204.3, depending on the resolution. Despite achieving a high correlation with VMAF and P.1204.3, VQM4HAS exhibits significantly less complexity than both metrics, with 98% and 99% less complexity for VMAF and P.1204.3, respectively, making it suitable for live streaming scenarios.
We also conduct a feature importance analysis to further reduce the complexity of the proposed method. Furthermore, we evaluate the effectiveness of our method by using it to predict subjective quality scores. The results show that VQM4HAS achieves a higher correlation with subjective scores at various resolutions, despite its minimal complexity.
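As a rough illustration of the overall approach (low-complexity features fed into a regression model that predicts a perceptual quality score), the sketch below trains a generic regressor on synthetic data. The feature names, the data, and the choice of a random-forest regressor are assumptions made for illustration; they do not correspond to the actual VQM4HAS feature set or model.

```python
# Minimal sketch of predicting a perceptual quality score from low-complexity
# encoding features with a regression model. Features, data, and model choice
# are illustrative assumptions, not the VQM4HAS design.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical per-representation features: [spatial complexity, temporal
# complexity, avg QP, bits per pixel, score of a lightweight quality metric]
X = rng.random((500, 5))
# Synthetic stand-in for the VMAF labels the real model would be trained on.
y = 100 * (0.4 * X[:, 3] + 0.3 * X[:, 4] - 0.2 * X[:, 2] + 0.1 * rng.random(500))

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:400], y[:400])
pred = model.predict(X[400:])
print("example predicted quality scores:", np.round(pred[:5], 1))
```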

 

The following papers have been accepted at the Intel4EC Workshop 2025, which will be held on June 4, 2025, in Milan, Italy, in conjunction with the 39th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2025).

 

Title: 6G Infrastructures for Edge AI: An Analytical Perspective

Authors: Kurt Horvath, Shpresa Tuda*, Blerta Idrizi*, Stojan Kitanov*, Fisnik Doko*, Dragi Kimovski (*Mother Teresa University Skopje, North Macedonia)

Abstract: The convergence of Artificial Intelligence (AI) and the Internet of Things has accelerated the development of distributed, network-sensitive applications, necessitating ultra-low latency, high throughput, and real-time processing capabilities. While 5G networks represent a significant technological milestone, their ability to support AI-driven edge applications remains constrained by performance gaps observed in real-world deployments. This paper addresses these limitations and highlights critical advancements needed to realize a robust and scalable 6G ecosystem optimized for AI applications. Furthermore, we conduct an empirical evaluation of 5G network infrastructure in central Europe, with latency measurements ranging from 61 ms to 110 ms across nearby geographical areas. These values exceed the requirements of latency-critical AI applications by approximately 270%, revealing significant shortcomings in current deployments. Building on these findings, we propose a set of recommendations to bridge the gap between existing 5G performance and the requirements of next-generation AI applications.
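The latency-excess figure quoted above is a simple percentage computation. In the sketch below, the 23 ms reference budget is an assumed value chosen only to show the arithmetic; only the 61-110 ms measurement range comes from the abstract.

```python
# Worked example of the latency-excess computation referenced above.
# The 23 ms reference budget is an assumed illustrative value; the paper's
# actual per-application requirements are not restated here.

measured_ms = [61, 110]   # measured 5G latencies reported in the abstract
reference_ms = 23         # assumed latency budget of an edge-AI application

for m in measured_ms:
    excess_pct = (m - reference_ms) / reference_ms * 100
    print(f"{m} ms exceeds the {reference_ms} ms budget by {excess_pct:.0f}%")
```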

 

Title: Blockchain consensus mechanisms for democratic voting environments

Authors: Thomas Auer, Kurt Horvath, Dragi Kimovski

Abstract: Democracy relies on robust voting systems to ensure transparency, fairness, and trust in electoral processes. Despite their foundational role, voting mechanisms – both manual and electronic – remain vulnerable to threats such as vote manipulation, data loss, and administrative interference. These vulnerabilities highlight the need for secure, scalable, and cost-efficient alternatives to safeguard electoral integrity. The proposed fully decentralized voting system leverages blockchain technology to overcome critical challenges in modern voting systems, including scalability, cost-efficiency, and transaction throughput. By eliminating the need for a centralized authority and integrating Distributed Ledger Technologies, the system ensures transparency, security, and real-time monitoring. This novel architecture reduces operational costs, enhances voter anonymity, and improves scalability, achieving significantly lower costs per 1,000 votes than traditional voting methods.

The system introduces a formalized decentralized voting model that adheres to constitutional requirements and practical standards, making it suitable for implementation in direct and representative democracies. Additionally, the design accommodates high transaction volumes without compromising performance, ensuring reliable operation even in large-scale elections. The results demonstrate that this system outperforms classical approaches regarding efficiency, security, and affordability, paving the way for broader adoption of blockchain-based voting solutions.
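To give a flavour of how blockchain-based vote storage provides tamper evidence, the sketch below chains anonymized vote records into hashed blocks. The block structure and hashing scheme are illustrative assumptions; they are not the consensus mechanism evaluated in the paper.

```python
# Minimal sketch of chaining votes into tamper-evident blocks. The block
# fields and hashing scheme are illustrative assumptions only.

import hashlib
import json
import time

def make_block(votes, prev_hash):
    """Bundle anonymized votes with the previous block's hash so that altering
    any earlier vote invalidates all later block hashes."""
    block = {"timestamp": time.time(), "votes": votes, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

if __name__ == "__main__":
    genesis = make_block([], "0" * 64)
    block1 = make_block([{"ballot_id": "a1", "choice": "option_2"}], genesis["hash"])
    print(block1["hash"][:16], "links back to", genesis["hash"][:16])
```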

 

 

We are happy to announce that our tutorial “Serverless Orchestration on the Edge-Cloud Continuum: From Small Functions to Large Language Models” (by Reza Farahani and Radu Prodan) has been accepted for IEEE ICDCS 2025, which will take place in Glasgow, Scotland, UK, in July 2025.

Venue: 45th IEEE International Conference on Distributed Computing Systems (ICDCS) (https://icdcs2025.icdcs.org/)

Abstract: Serverless computing simplifies application development by abstracting infrastructure management, allowing developers to focus on functionality while cloud providers handle resource provisioning and scaling. However, orchestrating serverless workloads across the edge-cloud continuum presents challenges, from managing heterogeneous resources to ensuring low-latency execution and maintaining fault tolerance and scalability. These challenges intensify when scaling from lightweight functions to compute-intensive tasks such as large language model (LLM) inferences in distributed environments. This tutorial explores serverless computing’s evolution from small functions to large-scale AI workloads. It introduces foundational concepts like Function-as-a-Service (FaaS) and Backend-as-a-Service (BaaS) before covering advanced edge-cloud orchestration strategies. Topics include dynamic workload distribution, multi-objective scheduling, energy-efficient orchestration, and deploying functions with diverse computational requirements. Hands-on demonstrations with Kubernetes, GCP Functions, AWS Lambda, OpenFaaS, OpenWhisk, and monitoring tools provide participants with practical insights into optimizing performance and energy efficiency in serverless orchestration across distributed infrastructures.
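For readers new to FaaS, the sketch below shows a minimal function handler in the AWS-Lambda-style (event, context) convention together with a toy edge-vs-cloud placement rule. The event fields and the routing thresholds are illustrative assumptions; the tutorial's hands-on part covers the actual deployment with Kubernetes, OpenFaaS, and related tools.

```python
# Minimal sketch of a Function-as-a-Service handler. Event fields and the
# placement rule are illustrative assumptions, not tutorial material.

import json

def handler(event, context=None):
    """Entry point in the AWS-Lambda-style (event, context) convention."""
    payload_kb = event.get("payload_kb", 0)
    latency_budget_ms = event.get("latency_budget_ms", 100)

    # Toy placement rule: small, latency-critical requests stay at the edge,
    # heavier ones (e.g., LLM inference) are forwarded to a cloud backend.
    target = "edge" if payload_kb < 512 and latency_budget_ms < 50 else "cloud"
    return {"statusCode": 200, "body": json.dumps({"routed_to": target})}

if __name__ == "__main__":
    print(handler({"payload_kb": 128, "latency_budget_ms": 20}))
```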


Co-located with ACM Multimedia 2025

URL: https://weizhou-geek.github.io/workshop/MM2025.html

In health and medicine, an immense amount of data is being generated by distributed sensors and cameras, as well as multimodal digital health platforms that support multimedia such as audio, video, images, 3D geometry, and text. The availability of such multimedia data from medical devices and digital record systems has greatly increased the potential for automated diagnosis. The past several years have witnessed an explosion of interest, and dizzyingly fast development, in computer-aided medical investigations using MRI, CT, X-rays, images, point clouds, etc. This proposed workshop focuses on multimedia computing techniques (including mobile and hardware solutions) for health and medicine; it targets real-world data and problems in healthcare, involves a large number of stakeholders, and is closely connected to people’s health.

Authors: Emanuele Artioli (Alpen-Adria Universität Klagenfurt, Austria), Farzad Tashtarian (Alpen-Adria Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria Universität Klagenfurt, Austria)

Venue: ACM 35th Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV’25)

Abstract: The primary challenge of video streaming is to balance high video quality with smooth playback. Traditional codecs are well tuned for this trade-off, yet their inability to exploit context means they must encode the entire video content and transmit it to the client.
This paper introduces ELVIS (End-to-end Learning-based VIdeo Streaming Enhancement Pipeline), an end-to-end architecture that combines server-side encoding optimizations with client-side generative in-painting to remove and reconstruct redundant video data. Its modular design allows ELVIS to integrate different codecs, in-painting models, and quality metrics, making it adaptable to future innovations.
Our results show that current technologies achieve improvements of up to 11 VMAF points over baseline benchmarks, though challenges remain for real-time applications due to computational demands. ELVIS represents a foundational step toward incorporating generative AI into video streaming pipelines, enabling higher quality experiences without increased bandwidth requirements.
By leveraging generative AI, we aim to develop a client-side tool, to be incorporated into a dedicated video streaming player, that combines the accessibility of multilingual dubbing with the authenticity of the original speaker’s performance, effectively allowing a single actor to deliver their voice in any language. To the best of our knowledge, no current streaming system can capture the speaker’s unique voice or emotional tone.
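To illustrate the server-side "drop" / client-side "reconstruct" idea, the sketch below zeroes out a random subset of blocks before encoding and reconstructs them on the client. A classical OpenCV in-painting call stands in for the generative in-painting model, and the block-dropping pattern is an illustrative assumption, not the ELVIS masking strategy.

```python
# Minimal sketch of server-side block dropping and client-side reconstruction.
# cv2.inpaint is a classical stand-in for a generative in-painting model.

import numpy as np
import cv2

def server_drop_blocks(frame, block=32, keep_ratio=0.9, seed=0):
    """Zero out a random subset of blocks before encoding and return the mask
    the client needs for reconstruction."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    for y in range(0, frame.shape[0], block):
        for x in range(0, frame.shape[1], block):
            if rng.random() > keep_ratio:            # drop this block
                mask[y:y + block, x:x + block] = 255
    degraded = frame.copy()
    degraded[mask == 255] = 0
    return degraded, mask

def client_reconstruct(degraded, mask):
    """Reconstruct the dropped regions; a generative model would be used here."""
    return cv2.inpaint(degraded, mask, 5, cv2.INPAINT_TELEA)

if __name__ == "__main__":
    frame = np.full((180, 320, 3), 128, dtype=np.uint8)   # stand-in frame
    degraded, mask = server_drop_blocks(frame)
    restored = client_reconstruct(degraded, mask)
    print("dropped pixels:", int((mask == 255).sum()), "restored shape:", restored.shape)
```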

 

We are glad that the paper was accepted for publication in Future Generation Computer Systems. This journal publishes cutting-edge research on high-performance computing, distributed systems, and advanced computing technologies for future computing environments.

Authors: Juan José Escobar, Pablo Sánchez-Cuevas, Beatriz Prieto, Rukiye Savran Kızıltepe, Fernando Díaz-del-Río, Dragi Kimovski

Abstract: Time and energy efficiency is a highly relevant objective in high-performance computing systems, where executing tasks incurs high costs. Among such tasks, evolutionary algorithms are of particular interest due to their inherent parallel scalability and usually costly fitness evaluation functions. In this respect, several scheduling strategies for workload balancing in heterogeneous systems have been proposed in the literature, with runtime and energy consumption reduction as their goals. Our hypothesis is that a dynamic workload distribution can be fitted with greater precision using metaheuristics, such as genetic algorithms, instead of linear regression. Therefore, this paper proposes a new mathematical model to predict the energy-time behaviour of applications based on multi-population genetic algorithms, which dynamically distributes the evaluation of individuals among the CPU-GPU devices of heterogeneous clusters. An accurate predictor would save time and energy by selecting the best resource set before running such applications. The estimation of the workload distributed to each device has been carried out by simulation, while the model parameters have been fitted in a two-phase run using another genetic algorithm and the experimental energy-time values of the target application as input. When the new model is analysed and compared with one based on linear regression, the model proposed in this work significantly improves on the baseline approach, showing normalised prediction errors of 0.081 for runtime and 0.091 for energy consumption, compared to 0.213 and 0.256 for the baseline approach.
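To illustrate the core idea of fitting model parameters with a genetic algorithm rather than linear regression, the sketch below fits a toy runtime model t(n) = a + b·n to synthetic measurements with a tiny GA. The model form, the synthetic data, and the GA settings are assumptions for illustration; they are not the model or fitting procedure proposed in the paper.

```python
# Minimal sketch of fitting energy-time model parameters with a genetic
# algorithm. Model form, data, and GA settings are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic "measured" runtimes for workload sizes n (true a=2, b=0.05).
n = np.array([100, 200, 400, 800, 1600], dtype=float)
t_measured = 2.0 + 0.05 * n + rng.normal(0, 0.5, n.size)

def fitness(params):
    a, b = params
    return -np.mean((a + b * n - t_measured) ** 2)   # negative MSE (higher is better)

# Tiny GA: truncation selection plus Gaussian mutation, no crossover.
pop = rng.uniform(0, 1, (50, 2)) * [10.0, 0.2]
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]                        # keep the best 10
    children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.01, (40, 2))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("fitted (a, b):", np.round(best, 3))
```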

We are glad that the paper was accepted for publication in the SCSA Journal. The journal covers research in smart computing systems and applications, with a focus on next-generation networking, cloud, and edge computing solutions.

Authors: Stojan Kitanov, Dragi Kimovski, Fisnik Doko, Kurt Horvath, Shpresa Tuda, Blerta Idrizi

Abstract: The rapid proliferation of IoT devices, coupled with the exponential growth of the data they generate, has necessitated the development of advanced network architectures. As a result, 5G mobile networks have already begun to face challenges such as network congestion, latency, and scalability limitations, and the need for a robust and future-proof solution becomes increasingly evident. In this direction, many research initiatives and industrial communities have started to work on the development of 6G mobile networks. At the same time, the emerging concept of the Computing Continuum encompasses the seamless integration of edge, fog, and cloud computing resources into a unified, distributed computing environment, aiming to enable real-time data processing, low-latency response, and intelligent decision-making at the network edge. The primary objective of this research paper is to address the shortcomings of existing network infrastructures by integrating advanced AI capabilities in 6G mobile networks with the Computing Continuum. Moreover, we propose a Computing Continuum Middleware for Artificial Intelligence over 6G networks that offers high-level, well-defined (“standardized”) interfaces and creates an automated, sustainable loop for managing IoT applications utilizing AI approaches over 6G networks.
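As a rough sketch of what such a "standardized" middleware interface between AI workloads and 6G / computing-continuum resources might look like, the snippet below defines a small placement-and-monitoring protocol. The method names and fields are illustrative assumptions, not an interface defined in the paper.

```python
# Minimal sketch of an assumed computing-continuum middleware interface.
# Method names and fields are illustrative, not from the paper.

from dataclasses import dataclass
from typing import Protocol

@dataclass
class PlacementRequest:
    app_id: str
    latency_budget_ms: float
    requires_gpu: bool

class ContinuumMiddleware(Protocol):
    def register_resource(self, resource_id: str, tier: str) -> None:
        """Announce an edge, fog, or cloud resource to the middleware."""

    def place(self, request: PlacementRequest) -> str:
        """Return the identifier of the resource chosen for the AI workload."""

    def monitor(self, app_id: str) -> dict:
        """Return runtime metrics that close the automated management loop."""
```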