
Perceptually-aware Online Per-title Encoding for Live Video Streaming – US Patent


Vignesh Menon (Alpen-Adria-Universität Klagenfurt, Austria), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Techniques for implementing perceptually aware per-title encoding may include receiving an input video, a set of resolutions, a maximum target bitrate, and a minimum target bitrate, extracting content-aware features for each segment of the input video, predicting a perceptually aware bitrate-resolution pair for each segment using a model configured to optimize for a quality metric using constants trained for each of the set of resolutions, generating a target encoding set including a set of perceptually aware bitrate-resolution pairs, and encoding the target encoding set. The content-aware features may include a spatial energy feature and an average temporal energy. According to these methods, only a subset of bitrates and resolutions, less than a full set of bitrates and resolutions, is encoded to provide high-quality video content for streaming.
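To make the pipeline concrete, here is a minimal Python sketch of the idea: extract simple spatial/temporal complexity features per segment, feed them to a parametric quality model with per-resolution constants, and pick one bitrate-resolution pair per rung between the minimum and maximum target bitrate. The feature definitions, the quality model, and its constants are illustrative placeholders, not the patented method.

    # Illustrative sketch only (assumed feature definitions and model constants).
    import numpy as np

    def content_features(frames):
        """Toy content-complexity features for one segment: mean gradient energy
        (spatial) and mean absolute frame difference (temporal)."""
        frames = np.asarray(frames, dtype=np.float32)  # shape (T, H, W), luma only
        gy, gx = np.gradient(frames, axis=(1, 2))
        spatial_energy = float(np.mean(gx ** 2 + gy ** 2))
        temporal_energy = float(np.mean(np.abs(np.diff(frames, axis=0))))
        return spatial_energy, temporal_energy

    def predict_quality(bitrate, resolution, E, h, constants):
        """Hypothetical parametric quality model with per-resolution constants."""
        a, b, c = constants[resolution]
        return a * np.log(bitrate) + b * np.log1p(E + h) + c

    def build_ladder(E, h, resolutions, bitrate_min, bitrate_max, constants, rungs=5):
        """For each target bitrate, keep the resolution with the highest predicted quality."""
        ladder = []
        for bitrate in np.geomspace(bitrate_min, bitrate_max, rungs):
            best = max(resolutions,
                       key=lambda r: predict_quality(bitrate, r, E, h, constants))
            ladder.append((int(round(bitrate)), best))
        return ladder

In this toy version, only the rungs returned by build_ladder would be encoded, which is the subset-of-encodings idea described in the abstract.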


Tutorial title: Video Coding Advancements in HTTP Adaptive Streaming

Venue: IEEE International Conference on Multimedia & Expo (ICME) 2025 (https://2025.ieeeicme.org/tutorials/)

We are happy to announce that our tutorial “Video Coding Advancements in HTTP Adaptive Streaming” (by Hadi Amirpourazarian and Christian Timmerer) has been accepted for IEEE ICME 2025, which will take place in Nantes, France, June 30 – July 4, 2025.

Description: This tutorial provides a comprehensive exploration of the HTTP Adaptive Streaming (HAS) pipeline, covering advancements from content provisioning to content consumption. We begin by tracing the history of video streaming and the evolution of video coding technologies. Attendees will gain insights into the timeline of significant developments, from early proprietary solutions to modern adaptive streaming standards like HAS. A comparative analysis of video codecs is presented, highlighting milestones such as H.264, HEVC, and the latest standard, Versatile Video Coding (VVC), emphasizing their efficiency, adoption, and impact on streaming technologies. Additionally, new trends in video coding, including AI-based coding solutions, will be covered, showcasing their potential to transform video compression and streaming workflows.

Building on this foundation, we explore per-title encoding techniques, which dynamically tailor bitrate ladders to the specific characteristics of video content. These methods account for factors such as spatial resolution, frame rate, device compatibility, and energy efficiency, optimizing both Quality of Experience (QoE) and environmental sustainability. Next, we highlight cutting-edge advancements in live streaming, including novel approaches to optimizing bitrate ladders without introducing latency. Fast multi-rate encoding methods are also presented, showcasing how they significantly reduce encoding times and computational costs, effectively addressing scalability challenges for streaming providers.

The tutorial further delves into edge computing capabilities for video transcoding, emphasizing how edge-based architectures can streamline the processing and delivery of streaming content. These approaches reduce latency and enable efficient resource utilization, particularly in live and interactive streaming scenarios.

Finally, we discuss the QoE parameters that influence both streaming and coding pipelines, providing a holistic view of how QoE considerations guide decisions in codec selection, bitrate optimization, and delivery strategies. By combining historical context, theoretical foundations, and practical insights, this tutorial equips attendees with the knowledge to navigate and address the evolving challenges in video streaming applications.

Title: Project “Scalable Platform for Innovations on Real-time Immersive Telepresence” (SPIRIT) successfully passed periodic review

The “Scalable Platform for Innovations on Real-time Immersive Telepresence” (SPIRIT) project, a Horizon Europe innovation initiative uniting seven consortium partners, including ITEC from the University of Klagenfurt, has successfully completed its periodic review that took place in November 2024.

SPIRIT aims to develop a “multi-site, interconnected framework dedicated for supporting the operation of heterogeneous collaborative telepresence applications at large scale”.

ITEC focuses on three key areas in SPIRIT:

  • determining subjective and objective metrics for the Quality of Experience (QoE) of volumetric video,
  • developing a Live Low Latency DASH (Dynamic Adaptive Streaming over HTTP) system for the transmission of volumetric video, and
  • contributing to standardisation bodies regarding work done in volumetric video.

The review committee was satisfied with the project’s progress and accepted all deliverables. The project was praised for a successful first round of open calls, which saw a remarkable 61 applicants for 11 available spots.

ITEC’s research on the QoE of volumetric video through subjective testing was also deemed impressive, with over 2,000 data points obtained across two rounds of testing. Contributions to standardisation bodies such as MPEG and 3GPP were also praised.

ITEC continues to work in the SPIRIT project, focusing on the second round of open calls and Live Low Latency DASH transmission of volumetric video.

Efficient Location-Based Service Discovery for IoT and Edge Computing in the 6G Era

Authors: Kurt Horvath, Dragi Kimovski

Conference: 2025 10th International Conference on Information and Network Technologies (ICINT 2025)

Abstract: Efficient service discovery is a cornerstone of the rapidly expanding Internet of Things (IoT) and edge computing ecosystems, where low latency and localized service provisioning are critical. This paper proposes a novel location-based DNS (Domain Name System) method that leverages Location Resource Records (LOC RRs) to enhance service discovery. By embedding geographic data in DNS responses, the system dynamically allocates services to edge nodes based on user proximity, ensuring reduced latency and improved Quality of Service (QoS). Comprehensive evaluations demonstrate minimal computational overhead, with processing times below 1 ms, making the approach highly suitable for latency-sensitive applications. Furthermore, the proposed methodology aligns with emerging 6G standards, which promise sub-millisecond latency and robust connectivity. Future research will focus on real-world deployment, validating the approach in dynamic IoT environments. This work establishes a scalable, efficient, and practical framework for location-aware service discovery, providing a strong foundation for next-generation IoT and edge-computing solutions.
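For intuition, the sketch below shows only the proximity step: given user coordinates and edge-node coordinates (which, in the proposed method, would be carried in DNS LOC resource records), it selects the nearest node by great-circle distance. The node names and coordinates are hypothetical.

    # Minimal proximity-selection sketch; node list and coordinates are hypothetical.
    import math

    EDGE_NODES = {  # name -> (latitude, longitude) in degrees, as a LOC RR would convey
        "edge-vienna.example.net": (48.2082, 16.3738),
        "edge-klagenfurt.example.net": (46.6247, 14.3050),
        "edge-graz.example.net": (47.0707, 15.4395),
    }

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points in kilometres."""
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def nearest_edge_node(user_lat, user_lon):
        """Return the edge node closest to the user's reported location."""
        return min(EDGE_NODES, key=lambda n: haversine_km(user_lat, user_lon, *EDGE_NODES[n]))

    print(nearest_edge_node(46.63, 14.31))  # -> edge-klagenfurt.example.net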

 

Enhancing Traffic Safety with AI and 6G: Latency Requirements and Real-Time Threat Detection

Authors: Kurt Horvath, Dragi Kimovski, Stojan Kitanov, Radu Prodan

Conference: 2025 10th International Conference on Information and Network Technologies (ICINT 2025)

Abstract: The rapid digitalization of urban infrastructure opens the path to smart cities, where IoT-enabled infrastructure enhances public safety and efficiency. This paper presents a 6G and AI-enabled framework for traffic safety enhancement, focusing on real-time detection and classification of emergency vehicles and leveraging 6G as the latest global communication standard. The system integrates sensor data acquisition, convolutional neural network-based threat detection, and user alert dissemination through various software modules of the use case. We define the latency requirements for such a system, segmenting the end-to-end latency into computational and networking components. Our empirical evaluation demonstrates the impact of vehicle speed and user trajectory on system reliability. The results provide insights for network operators and smart city service providers, emphasizing the critical role of low-latency communication and how networks can enable relevant services for traffic safety.
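As a back-of-the-envelope illustration of this latency segmentation, the following sketch sums hypothetical computational and networking components into an end-to-end latency and converts it into the distance a vehicle covers before an alert arrives. All figures are illustrative, not measurements from the paper.

    # Latency-budget arithmetic; all numbers are illustrative placeholders.
    def end_to_end_latency_ms(sensing, uplink, inference, downlink, alert):
        """Sum computational (sensing, inference, alert) and networking (uplink, downlink) parts."""
        return sensing + uplink + inference + downlink + alert

    def distance_travelled_m(speed_kmh, latency_ms):
        """Distance a vehicle covers while the alert is still in flight."""
        return (speed_kmh / 3.6) * (latency_ms / 1000.0)

    latency = end_to_end_latency_ms(sensing=5, uplink=10, inference=30, downlink=10, alert=5)
    print(latency, "ms")                       # 60 ms total in this example
    print(distance_travelled_m(100, latency))  # ~1.7 m travelled at 100 km/h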

Tutorial title: Serverless Orchestration on the Edge-Cloud Continuum: Challenges and Solutions

Venue: 16th ACM/SPEC International Conference on Performance Engineering (ICPE) (https://icpe2025.spec.org/)

We are happy to announce that our tutorial “Serverless Orchestration on the Edge-Cloud Continuum: Challenges and Solutions” (by Reza Farahani and Radu Prodan) has been accepted for ACM/SPEC ICPE 2025, which will take place in Toronto, Canada, in May 2025.

 

Authors: Sahar Nasirihaghighi, Negin Ghamsarian, Raphael Sznitman, Klaus Schoeffmann

Event: International Symposium on Biomedical Imaging (ISBI), April 14-17, 2025

Abstract: Accurate surgical phase recognition is crucial for advancing computer-assisted interventions, yet the scarcity of labeled data hinders training reliable deep learning models. Semi-supervised learning (SSL), particularly with pseudo-labeling, shows promise over fully supervised methods but often lacks reliable pseudo-label assessment mechanisms. To address this gap, we propose a novel SSL framework, Dual Invariance Self-Training (DIST), that incorporates both Temporal and Transformation Invariance to enhance surgical phase recognition. Our two-step self-training process dynamically selects reliable pseudo-labels, ensuring robust pseudo-supervision. Our approach mitigates the risk of noisy pseudo-labels, steering decision boundaries toward true data distribution and improving generalization to unseen data. Evaluations on Cataract and Cholec80 datasets show our method outperforms state-of-the-art SSL approaches, consistently surpassing both supervised and SSL baselines across various network architectures.
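As a simplified stand-in for the dual-invariance idea (not the authors’ implementation), the sketch below keeps a clip’s pseudo-label only when the model’s prediction is confident and agrees across a temporally shifted view and a transformed (augmented) view of the same clip. The confidence threshold and array shapes are assumptions.

    # Simplified dual-invariance pseudo-label selection; thresholds are assumptions.
    import numpy as np

    def select_pseudo_labels(p_orig, p_temporal, p_transformed, conf_thresh=0.9):
        """p_*: (N, C) softmax outputs for the original, temporally shifted, and
        augmented views of N unlabeled clips. Returns kept indices and their labels."""
        y_orig = p_orig.argmax(axis=1)
        y_temp = p_temporal.argmax(axis=1)
        y_trans = p_transformed.argmax(axis=1)
        confident = p_orig.max(axis=1) >= conf_thresh            # reject uncertain clips
        invariant = (y_orig == y_temp) & (y_orig == y_trans)     # require agreement across views
        keep = np.flatnonzero(confident & invariant)
        return keep, y_orig[keep]

The selected pairs would then serve as pseudo-supervision for the next self-training round, which is the step where unreliable pseudo-labels are meant to be filtered out.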

 

DORBINE is a cooperative project between AIR6 Systems and Alpen-Adria-Universität Klagenfurt (AAU) (Farzad Tashtarian, project leader; Christian Timmerer and Hadi Amirpourazarian) and is funded by the Austrian Research Promotion Agency FFG.

Project description: Renewable energy plays a critical role in the global transition to sustainable and environmentally friendly power sources, and among the various technologies, turbines stand out as a key contributor. Wind turbines, for example, can convert up to 45% of the available wind energy into electricity, with modern designs reaching efficiencies as high as 50%, depending on conditions. The DORBINE project aims to enhance wind turbine efficiency in electricity production by developing an innovative inspection framework powered by cutting-edge AI techniques. It leverages a swarm of drones equipped with high-resolution cameras and advanced sensors to perform real-time, detailed blade inspections without the need for turbine shutdowns.

 

Title: For Empowerment’s Sake: Rethinking Video Game Adaptations Of Real-Life Experiences

Author: Kseniia Harshina

Abstract:

In adapting real-life experiences into interactive narratives, video games have prioritized empathy-building as a central design goal (review in Schrier & Farber, 2021). This often involves crafting stories and gameplay mechanics that allow players without direct exposure to experiences such as mental illness, queerness, trauma, or forced migration to better understand and emotionally connect with these realities. This focus on empathy is commendable and serves a critical purpose in broadening awareness and encouraging understanding. However, researchers, game developers, and people with these experiences have criticized the rhetoric of empathy. Common arguments against these video games include that they minimize lived experiences, label them as “other”, and promote the appropriation of affect (Ruberg, 2020). Prioritizing players without these experiences overlooks the opportunity to address the needs of those who have lived them. These players might not seek to feel empathy; instead, they could benefit from video games that reflect their own stories and emotions, providing them with opportunities for healing, empowerment, and self-reflection. We believe there is a need for games designed explicitly with these audiences in mind and argue for an expansion of design goals in video game adaptations of real-life experiences, including empowerment through feelings of catharsis.

Catharsis as a Mechanism for Empowerment

We view catharsis as a key component to fostering empowerment. Derived from Aristotelian concepts, catharsis refers to relieving tension and intense emotion by expressing them in a safe context (Kettles, 1995). To date, psychological research on catharsis in video games has largely focused on its connection to violent video games (e.g., Ferguson et al., 2014; Kersten & Greitemeyer, 2021). This limited focus has led to a gap in understanding how catharsis might operate as a constructive mechanism within video game narratives. We advocate reframing catharsis as a tool for processing emotions, promoting healing, and empowering players.

In our study of Reddit posts about Silent Hill 2, we examine how the game fosters cathartic experiences through its themes of trauma, guilt, and grief. Many players describe how the emotionally charged narrative and immersive world resonate with their own struggles, offering a space to process and confront difficult emotions. The ability to resonate with the game’s themes and process emotions through catharsis makes playing Silent Hill 2 an empowering experience for its players.

Co-Creating Stories: Designing for Affected Communities

To illustrate the potential of empowerment-focused video game adaptations of real-life experiences, we would like to present an ongoing participatory design project involving people with experiences of forced migration (Harshina, 2024; Harshina & Harbig, in press). Our project employs a collaborative methodology, combining surveys and iterative feedback sessions to co-create a game framework. By directly involving affected individuals, we aim to ensure that the resulting narratives authentically represent their lived experiences and emotional journeys.

Our design framework is structured around two distinct pathways: one tailored to players with lived experiences of forced migration and another aimed at promoting empathy among broader audiences. For the former group, the focus lies in crafting narratives and mechanics that provide cathartic, resilience-building experiences, reflecting their struggles and triumphs. For the latter, we emphasize creating opportunities for meaningful engagement and shared understanding through immersive storytelling. Additionally, we explore how a single game could serve as a bridge between these audiences, cultivating dialogue and mutual understanding. By combining empathy and empowerment within a single narrative, we envision games as a space where individuals with diverse perspectives can come together, challenge biases, and build solidarity.

Reimagining Video Game Adaptations

We argue for a reimagining of video game adaptations of real-life experiences that:
1. Ground narratives in authentic stories told by those directly affected.
2. Consider affected people as target audiences.
3. Strive for empowerment through cathartic and resilience-building experiences, rather than solely aiming for empathy.

Empowerment-focused narratives invite players to connect with their own emotions and stories. Our proposed dual-path approach—focusing on both empathy and empowerment—reframes how video games adapt real-life experiences. By centering affected individuals and embracing catharsis as a design principle, games can create meaningful connections across diverse audiences. We encourage game designers and researchers to collaborate with affected communities to craft narratives that resonate deeply, connecting storytelling, personal growth, and societal impact.

Real-Time Quality- and Energy-Aware Bitrate Ladder Construction for Live Video Streaming

IEEE Journal on Emerging and Selected Topics in Circuits and Systems

Mohammad Ghasempour (AAU, Austria), Hadi Amirpour (AAU, Austria), and Christian Timmerer (AAU, Austria)

Abstract: Live video streaming’s growing demand for high-quality content has resulted in significant energy consumption, creating challenges for sustainable media delivery. Traditional adaptive video streaming approaches rely on the over-provisioning of resources leading to a fixed bitrate ladder, which is often inefficient for the heterogeneous set of use cases and video content. Although dynamic approaches like per-title encoding optimize the bitrate ladder for each video, they mainly target video-on-demand to avoid latency and fail to address energy consumption. In this paper, we present LiveESTR, a method for building a quality- and energy-aware bitrate ladder for live video streaming. LiveESTR eliminates the need for exhaustive video encoding processes on the server side, ensuring that the bitrate ladder construction process is fast and energy efficient. A lightweight model for multi-label classification, along with a lookup table, is utilized to estimate the optimized resolution-bitrate pair in the bitrate ladder. Furthermore, both spatial and temporal resolutions are supported to achieve high energy savings while preserving compression efficiency. Therefore, a tunable parameter λ and a threshold τ are introduced to balance the trade-off between compression, quality, and energy efficiency. Experimental results show that LiveESTR reduces the encoder and decoder energy consumption by 74.6% and 29.7%, with only a 2.1% increase in Bjøntegaard Delta Rate (BD-Rate) compared to traditional per-title encoding. Furthermore, it is shown that by increasing λ to prioritize video quality, LiveESTR achieves 2.2% better compression efficiency in terms of BD-Rate while still reducing decoder energy consumption by 7.5%.
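To illustrate the kind of lookup-based selection described above, here is a small Python sketch that filters candidate (resolution, frame rate, bitrate) entries by a quality threshold τ and ranks them with a λ-weighted quality-energy cost. The table values and the exact roles of λ and τ are assumptions for illustration, not the LiveESTR formulation.

    # Illustrative ladder-entry selection from a lookup table; values and the cost
    # definition are assumptions, not the LiveESTR method.
    CANDIDATES = [
        # (resolution, frame_rate, bitrate_kbps, est_quality, est_decode_energy_j)
        ((1920, 1080), 30, 4500, 92.0, 11.0),
        ((1920, 1080), 60, 6000, 94.5, 16.5),
        ((1280, 720), 30, 2800, 88.0, 7.5),
        ((1280, 720), 60, 3600, 90.5, 10.0),
    ]

    def select_entry(candidates, lam=0.5, tau=88.0):
        """Keep candidates whose estimated quality clears tau, then pick the one
        minimizing an energy-minus-weighted-quality cost; lam trades quality for energy."""
        feasible = [c for c in candidates if c[3] >= tau]
        return min(feasible, key=lambda c: c[4] - lam * c[3])

    print(select_entry(CANDIDATES, lam=0.2, tau=88.0))  # favours the low-energy 720p30 entry

Raising lam shifts the choice toward higher-quality entries at the cost of more decoding energy, mirroring the quality/energy trade-off the abstract describes.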

Authors: Zoha Azimi (Alpen-Adria Universität Klagenfurt, Austria), Reza Farahani (Alpen-Adria Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria Universität Klagenfurt, Austria), Radu Prodan (Alpen-Adria Universität Klagenfurt, Austria)

Event: ACM 4th Mile-High Video Conference (MHV’25), 18–20 February 2025 | Denver, CO, USA

Abstract: Large language models (LLMs), the backbone of generative artificial intelligence (AI) like ChatGPT, have become more widely integrated in different fields, including multimedia. The rising number of conversational queries on such platforms now emits as much CO2 as everyday activities, leading to an exponential growth of energy consumption and underscoring urgent sustainability challenges. This short paper introduces an energy-aware LLM-based video processing tool. Employing open-source LLM models and techniques like fine-tuning and Retrieval-Augmented Generation (RAG), this tool recommends video processing commands and executes them in an energy-aware manner. Preliminary results show that it achieves reduced energy consumption per prompt compared to baselines.
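As a toy illustration of the recommendation step, the sketch below retrieves a documentation snippet by keyword matching (a simplified stand-in for embedding-based RAG) and picks the candidate ffmpeg command with the lowest estimated energy. The snippets, candidate commands, and energy figures are placeholders, not the tool’s actual knowledge base or measurements.

    # Toy retrieval-augmented recommendation; snippets, commands, and energy
    # estimates are placeholders for illustration only.
    KNOWLEDGE_BASE = [
        ("downscale", "Use the scale filter, e.g. -vf scale=1280:720, to reduce resolution."),
        ("transcode", "x264 presets trade speed for compression; faster presets use less CPU."),
    ]

    CANDIDATE_COMMANDS = [
        # (command, estimated_energy_wh) -- energy figures are illustrative
        ("ffmpeg -i in.mp4 -c:v libx264 -preset veryfast out.mp4", 1.2),
        ("ffmpeg -i in.mp4 -c:v libx264 -preset medium out.mp4", 2.0),
        ("ffmpeg -i in.mp4 -c:v libx265 -preset slow out.mp4", 4.5),
    ]

    def retrieve(query):
        """Topic-keyword lookup as a simplified stand-in for embedding-based retrieval."""
        words = set(query.lower().split())
        for topic, snippet in KNOWLEDGE_BASE:
            if topic in words:
                return snippet
        return KNOWLEDGE_BASE[0][1]

    def recommend(query):
        """Return retrieved context (which would feed the LLM prompt in the real tool)
        plus the least energy-hungry candidate command."""
        context = retrieve(query)
        command, energy = min(CANDIDATE_COMMANDS, key=lambda c: c[1])
        return context, command, energy

    print(recommend("transcode this video using little energy"))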