Title: For Empowerment’s Sake: Rethinking Video Game Adaptations Of Real-Life Experiences

Author: Kseniia Harshina

Abstract:

In adapting real-life experiences into interactive narratives, video games have prioritized empathy-building as a central design goal (review in Schrier & Farber, 2021). This often involves crafting stories and gameplay mechanics that allow players without direct exposure to experiences such as mental illness, queerness, trauma, or forced migration to better understand and emotionally connect with these realities. This focus on empathy is commendable and serves a critical purpose in broadening awareness and encouraging understanding. However, researchers, game developers, and people with these experiences have raised criticism of the rhetoric of empathy. Common arguments against such games are that they minimize lived experiences, label them as “other,” and promote the appropriation of affect (Ruberg, 2020). Prioritizing players without these experiences overlooks the opportunity to address the needs of those who have lived them. These players might not seek to feel empathy but could instead benefit from video games that reflect their own stories and emotions, providing them with opportunities for healing, empowerment, and self-reflection. We believe there is a need for games designed explicitly with these audiences in mind and argue for an expansion of design goals in video game adaptations of real-life experiences, including empowerment through feelings of catharsis.

Catharsis as a Mechanism for Empowerment

We view catharsis as a key component in fostering empowerment. Derived from Aristotelian concepts, catharsis refers to relieving tension and intense emotions by expressing them in a safe context (Kettles, 1995). To date, psychological research on catharsis in video games has largely focused on its connection to violent video games (e.g., Ferguson et al., 2014; Kersten & Greitemeyer, 2021). This limited focus has left a gap in understanding how catharsis might operate as a constructive mechanism within video game narratives. We advocate reframing catharsis as a tool for processing emotions, promoting healing, and empowering players.

In our study of Reddit posts about Silent Hill 2, we examine how the game fosters cathartic experiences through its themes of trauma, guilt, and grief. Many players describe how the emotionally charged narrative and immersive world resonate with their own struggles, offering a space to process and confront difficult emotions. The ability to resonate with the game’s themes and process emotions through catharsis makes playing Silent Hill 2 an empowering experience for its players.

Co-Creating Stories: Designing for Affected Communities

To illustrate the potential of empowerment-focused video game adaptations of real-life experiences, we would like to present an ongoing participatory design project involving people with experiences of forced migration (Harshina, 2024; Harshina & Harbig, in press). Our project employs a collaborative methodology, combining surveys and iterative feedback sessions to co-create a game framework. By directly involving affected individuals, we aim to ensure that the resulting narratives authentically represent their lived experiences and emotional journeys.

Our design framework is structured around two distinct pathways: one tailored to players with lived experiences of forced migration and another aimed at promoting empathy among broader audiences. For the former group, the focus lies in crafting narratives and mechanics that provide cathartic, resilience-building experiences, reflecting their struggles and triumphs. For the latter, we emphasize creating opportunities for meaningful engagement and shared understanding through immersive storytelling. Additionally, we explore how a single game could serve as a bridge between these audiences, cultivating dialogue and mutual understanding. By combining empathy and empowerment within a single narrative, we envision games as a space where individuals with diverse perspectives can come together, challenge biases, and build solidarity.

Reimagining Video Game Adaptations

We argue for a reimagining of video game adaptations of real-life experiences that:
1. Ground narratives in authentic stories told by those directly affected.
2. Consider affected people as target audiences.
3. Strive for empowerment through cathartic and resilience-building experiences, rather than solely aiming for empathy.

Empowerment-focused narratives invite players to connect with their own emotions and stories. Our proposed dual-path approach—focusing on both empathy and empowerment—reframes how video games adapt real-life experiences. By centering affected individuals and embracing catharsis as a design principle, games can create meaningful connections across diverse audiences. We encourage game designers and researchers to collaborate with affected communities to craft narratives that resonate deeply, connecting storytelling, personal growth, and societal impact.

Real-Time Quality- and Energy-Aware Bitrate Ladder Construction for Live Video Streaming

IEEE Journal on Emerging and Selected Topics in Circuits and Systems

Mohammad Ghasempour (AAU, Austria), Hadi Amirpour (AAU, Austria), and Christian Timmerer (AAU, Austria)

Abstract: Live video streaming’s growing demand for high-quality content has resulted in significant energy consumption, creating challenges for sustainable media delivery. Traditional adaptive video streaming approaches rely on the over-provisioning of resources, leading to a fixed bitrate ladder that is often inefficient for heterogeneous use cases and video content. Although dynamic approaches like per-title encoding optimize the bitrate ladder for each video, they mainly target video-on-demand to avoid latency and fail to address energy consumption. In this paper, we present LiveESTR, a method for building a quality- and energy-aware bitrate ladder for live video streaming. LiveESTR eliminates the need for exhaustive video encoding processes on the server side, ensuring that the bitrate ladder construction process is fast and energy efficient. A lightweight multi-label classification model, along with a lookup table, is utilized to estimate the optimized resolution-bitrate pairs in the bitrate ladder. Furthermore, both spatial and temporal resolutions are supported to achieve high energy savings while preserving compression efficiency. To balance the trade-off between compression, quality, and energy efficiency, a tunable parameter λ and a threshold τ are introduced. Experimental results show that LiveESTR reduces encoder and decoder energy consumption by 74.6% and 29.7%, respectively, with only a 2.1% increase in Bjøntegaard Delta Rate (BD-Rate) compared to traditional per-title encoding. Furthermore, it is shown that by increasing λ to prioritize video quality, LiveESTR achieves 2.2% better compression efficiency in terms of BD-Rate while still reducing decoder energy consumption by 7.5%.
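To make the lookup-table idea concrete, here is a minimal, self-contained sketch; it is not LiveESTR's implementation, and all table values, names, and weights are illustrative assumptions. A content class maps to candidate resolution-bitrate pairs with offline-measured quality and relative energy figures; a weight lam (standing in for λ) trades quality against energy savings, and tau (standing in for τ) is a quality floor.

```python
# Hypothetical lookup table: content class -> candidate entries of
# (resolution, bitrate_kbps, quality score, relative decode energy).
LOOKUP = {
    "low_motion": [
        ((1920, 1080), 3000, 92.0, 1.00),
        ((1280, 720), 1800, 88.0, 0.62),
        ((960, 540), 1000, 81.0, 0.41),
    ],
    "high_motion": [
        ((1920, 1080), 6000, 90.0, 1.00),
        ((1280, 720), 3500, 85.0, 0.66),
        ((960, 540), 2000, 77.0, 0.45),
    ],
}

def select_ladder(content_class, lam=0.5, tau=80.0):
    """Rank resolution-bitrate pairs by a weighted score of quality
    (weight lam) vs. energy savings (weight 1 - lam), dropping
    candidates whose quality falls below the threshold tau."""
    ladder = []
    for res, bitrate, quality, energy in LOOKUP[content_class]:
        if quality < tau:
            continue  # below the acceptable-quality threshold
        score = lam * quality + (1.0 - lam) * (1.0 - energy) * 100.0
        ladder.append((score, res, bitrate))
    ladder.sort(reverse=True)
    return [(res, bitrate) for _, res, bitrate in ladder]
```

Raising lam pushes the high-quality rungs to the top of the ranking, mirroring how the paper's λ prioritizes quality over energy.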

Authors: Zoha Azimi (Alpen-Adria Universität Klagenfurt, Austria), Reza Farahani (Alpen-Adria Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria Universität Klagenfurt, Austria), Radu Prodan (Alpen-Adria Universität Klagenfurt, Austria)

Event: ACM 4th Mile-High Video Conference (MHV’25), 18–20 February 2025 | Denver, CO, USA

Abstract: Large language models (LLMs), the backbone of generative artificial intelligence (AI) like ChatGPT, have become more widely integrated into different fields, including multimedia. The rising number of conversational queries on such platforms now emits as much CO2 as everyday activities, leading to exponential growth in energy consumption and underscoring urgent sustainability challenges. This short paper introduces an energy-aware LLM-based video processing tool. Employing open-source LLM models and techniques like fine-tuning and Retrieval-Augmented Generation (RAG), this tool recommends video processing commands and executes them in an energy-aware manner. Preliminary results show that it achieves reduced energy consumption per prompt compared to baselines.
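The "energy-aware execution" idea can be illustrated with a toy sketch; this is not the paper's LLM/RAG pipeline, and the candidate commands, energy figures, and quality scores below are invented for illustration only: among commands that meet a quality floor, pick the one with the lowest estimated energy cost.

```python
# Illustrative candidates, e.g. as an LLM might recommend:
# (command string, estimated energy in joules, output quality score).
CANDIDATES = [
    ("ffmpeg -i in.mp4 -c:v libx265 -preset slow out.mp4", 850.0, 95),
    ("ffmpeg -i in.mp4 -c:v libx265 -preset fast out.mp4", 320.0, 91),
    ("ffmpeg -i in.mp4 -c:v libx264 -preset fast out.mp4", 180.0, 88),
]

def recommend(min_quality):
    """Return the lowest-energy command meeting the quality floor,
    or None if no candidate qualifies."""
    feasible = [c for c in CANDIDATES if c[2] >= min_quality]
    if not feasible:
        return None
    return min(feasible, key=lambda c: c[1])[0]
```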

 

Authors: Kurt Horvath (University of Klagenfurt, Austria), Dragi Kimovski (University of Klagenfurt, Austria), Stojan Kitanov (Mother Theresa University Skopje, Macedonia), Radu Prodan (University of Klagenfurt, Austria)

Event: 2025 10th International Conference on Information and Network Technologies (ICINT), 12–14 March 2025, Melbourne, Australia

Abstract: The rapid digitalization of urban infrastructure opens the path to smart cities, where IoT-enabled infrastructure enhances public safety and efficiency. This paper presents a 6G and AI-enabled framework for traffic safety enhancement, focusing on real-time detection and classification of emergency vehicles and leveraging 6G as the latest global communication standard. The system integrates sensor data acquisition, convolutional neural network-based threat detection, and user alert dissemination through various software modules of the use case. We define the latency requirements for such a system, segmenting the end-to-end latency into computational and networking components. Our empirical evaluation demonstrates the impact of vehicle speed and user trajectory on system reliability. The results provide insights for network operators and smart city service providers, emphasizing the critical role of low-latency communication and how networks can enable relevant services for traffic safety.
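The paper's latency model splits end-to-end latency into computational and networking components and relates it to vehicle speed. A minimal sketch of that feasibility check, with all numeric figures purely illustrative:

```python
def alert_feasible(vehicle_speed_kmh, distance_m,
                   compute_latency_ms, network_latency_ms):
    """Check whether an emergency-vehicle alert reaches the user before
    the vehicle covers the remaining distance. End-to-end latency is the
    sum of the computational and networking components."""
    speed_ms = vehicle_speed_kmh / 3.6            # km/h -> m/s
    time_to_arrival_s = distance_m / speed_ms
    end_to_end_s = (compute_latency_ms + network_latency_ms) / 1000.0
    return end_to_end_s < time_to_arrival_s
```

Higher vehicle speeds shrink the time-to-arrival budget, which is why the abstract emphasizes low-latency communication.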

 

CLIP-DQA: Blindly Evaluating Dehazed Images from Global and Local Perspectives Using CLIP

The IEEE International Symposium on Circuits and Systems (IEEE ISCAS 2025)

https://2025.ieee-iscas.org/

25–28 May 2025 // London, United Kingdom

Yirui Zeng (Cardiff University, UK), Jun Fu (Cardiff University), Hadi Amirpour (AAU, Austria), Huasheng Wang (Alibaba Group), Guanghui Yue (Shenzhen University, China), Hantao Liu (Cardiff University), Ying Chen (Alibaba Group), Wei Zhou (Cardiff University)

Abstract: Blind dehazed image quality assessment (BDQA), which aims to accurately predict the visual quality of dehazed images without any reference information, is essential for the evaluation, comparison, and optimization of image dehazing algorithms. Existing learning-based BDQA methods have achieved remarkable success, but the small scale of DQA datasets limits their performance. To address this issue, in this paper, we propose to adapt Contrastive Language-Image Pre-Training (CLIP), pre-trained on large-scale image-text pairs, to the BDQA task. Specifically, inspired by the fact that the human visual system understands images based on hierarchical features, we take global and local information of the dehazed image as the input of CLIP. To accurately map the input hierarchical information of dehazed images into the quality score, we tune both the vision branch and language branch of CLIP with prompt learning. Experimental results on two authentic DQA datasets demonstrate that our proposed approach, named CLIP-DQA, achieves more accurate quality predictions over existing BDQA methods.
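The actual method tunes CLIP's vision and language branches with prompt learning; the stdlib-only sketch below shows only the scoring structure one could assume for such a pipeline: similarities between an image embedding and a pair of antonym prompt embeddings ("good photo" vs. "bad photo") are turned into a quality probability via a softmax, and global and local views are fused. The toy feature vectors stand in for real CLIP embeddings.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def quality_from_prompts(img_feat, good_prompt, bad_prompt):
    """Softmax over similarities to antonym prompts yields a
    quality probability in [0, 1]."""
    s_good = math.exp(cosine(img_feat, good_prompt))
    s_bad = math.exp(cosine(img_feat, bad_prompt))
    return s_good / (s_good + s_bad)

def fused_score(global_feat, local_feats, good_prompt, bad_prompt, w=0.5):
    """Fuse the global view with the mean of the local patch scores,
    mirroring the hierarchical (global + local) input idea."""
    g = quality_from_prompts(global_feat, good_prompt, bad_prompt)
    l = sum(quality_from_prompts(f, good_prompt, bad_prompt)
            for f in local_feats) / len(local_feats)
    return w * g + (1.0 - w) * l
```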

Multi-resolution Encoding for HTTP Adaptive Streaming using VVenC

The IEEE International Symposium on Circuits and Systems (IEEE ISCAS 2025)

https://2025.ieee-iscas.org/

25–28 May 2025 // London, United Kingdom

Kamran Qureshi (AAU, Austria), Hadi Amirpour (AAU, Austria), Christian Timmerer (AAU, Austria)

Abstract: HTTP Adaptive Streaming (HAS) is a widely adopted method for delivering video content over the Internet, requiring each video to be encoded at multiple bitrate-resolution pairs, known as representations, to adapt to various network conditions and device capabilities. This multi-bitrate encoding introduces significant challenges due to the computational and time-intensive nature of encoding multiple representations. Conventional approaches often encode these videos independently without leveraging similarities between different representations of the same input video. This paper proposes an accelerated multi-resolution encoding strategy that utilizes representations of lower resolutions as references to speed up the encoding of higher resolutions when using Versatile Video Coding (VVC), specifically VVenC, an optimized open-source software implementation. For multi-resolution encoding, a mid-bitrate representation serves as the reference, allowing interpolated encoded partition data to efficiently guide the partitioning process in higher resolutions. The proposed approach uses shared encoding information to reduce redundant calculations, thereby optimizing partitioning decisions. Experimental results demonstrate that the proposed technique achieves an encoding-time reduction of up to 17% compared to the medium preset across videos of varying complexities, with a minimal BDBR/BDT of 0.12 compared to the fast preset.
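The partition-reuse idea can be sketched in a few lines; this is illustrative of the concept, not VVenC internals: blocks from the lower-resolution reference are scaled up to the target resolution and then used to bound the higher-resolution partition search instead of running it exhaustively.

```python
def project_partitions(ref_blocks, scale=2):
    """Scale encoded partition blocks (x, y, w, h) from a lower-resolution
    reference representation up to a higher resolution, producing seeds
    for the higher-resolution partition search."""
    return [(x * scale, y * scale, w * scale, h * scale)
            for (x, y, w, h) in ref_blocks]

def restrict_search(projected, max_extra_splits=1):
    """Allow at most `max_extra_splits` further splits below each projected
    block, pruning the rate-distortion search space. Returns each block
    with the smallest block size the encoder may still explore."""
    allowed = []
    for (x, y, w, h) in projected:
        min_w = max(4, w >> max_extra_splits)   # 4 = VVC minimum luma size
        min_h = max(4, h >> max_extra_splits)
        allowed.append(((x, y, w, h), (min_w, min_h)))
    return allowed
```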

 

Improving the Efficiency of VVC using Partitioning of Reference Frames

The IEEE International Symposium on Circuits and Systems (IEEE ISCAS 2025)

https://2025.ieee-iscas.org/

25–28 May 2025 // London, United Kingdom

Kamran Qureshi (AAU, Austria), Hadi Amirpour (AAU, Austria), Christian Timmerer (AAU, Austria)

Abstract: In response to the growing demand for high-quality videos, a new coding standard, Versatile Video Coding (VVC), was released in 2020. VVC is based on the same hybrid coding architecture as its predecessor, High-Efficiency Video Coding (HEVC), while providing a bitrate reduction of approximately 50% for the same subjective quality. VVC extends HEVC’s Coding Tree Unit (CTU) partitioning with more flexible block sizes, which increases its encoding complexity. Optimization is essential to making efficient use of VVC in practical applications. VVenC, an optimized open-source VVC encoder, introduces multiple presets to address the trade-off between compression efficiency and encoder complexity. Although an optimized set of encoding tools has been selected for each preset, the rate-distortion (RD) search space in the encoder presets still poses a challenge for efficient encoder implementations. This paper proposes Early Termination using Reference Frames (ETRF), which improves the trade-off between encoding efficiency and time complexity and positions itself as a new preset between the medium and fast presets. The CTU partitioning maps of reference frames in lower temporal layers are employed to accelerate the encoding of frames in higher temporal layers. The results show a reduction in encoding time of around 22% compared to the medium preset. Specifically, for videos with high spatial and temporal complexities, which typically require longer encoding times, the proposed method shows an improved BDBR/BDT compared to the fast preset.
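A toy sketch of the early-termination idea, with the caveat that the actual ETRF logic lives inside the VVenC partition search and the names here are our own: the split-depth map of a reference frame from a lower temporal layer caps how deep the current frame's partition search may go.

```python
def etrf_depth_cap(ref_depth_map, margin=1):
    """Derive a per-CTU depth cap from a reference frame in a lower
    temporal layer, allowing at most `margin` extra split levels in the
    higher-layer frame being encoded."""
    return {ctu: depth + margin for ctu, depth in ref_depth_map.items()}

def should_split(ctu, current_depth, depth_cap):
    """Continue splitting only while below the per-CTU cap; otherwise the
    partition search terminates early for this block."""
    return current_depth < depth_cap[ctu]
```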

 

Authors: Farzad Tashtarian (Alpen-Adria Universität Klagenfurt, Austria); Mahdi Dolati (Sharif University of Technology, Iran); Daniele Lorenzi (University of Klagenfurt, Austria); Mojtaba Mozhganfar (University of Tehran, Iran); Sergey Gorinsky (IMDEA Networks Institute, Spain); Ahmad Khonsari (University of Tehran, Iran); Christian Timmerer (Alpen-Adria-Universität Klagenfurt & Bitmovin, Austria); Hermann Hellwagner (Klagenfurt University, Austria)

Event: IEEE INFOCOM 2025,  19–22 May 2025 // London, United Kingdom

Abstract: Live streaming routinely relies on the Hypertext Transfer Protocol (HTTP) and content delivery networks (CDNs) to scalably disseminate videos to diverse clients. A bitrate ladder refers to a list of bitrate-resolution pairs, or representations, used for encoding a video. A promising trend in HTTP-based video streaming is to adapt not only the client’s representation choice but also the bitrate ladder during the streaming session. This paper examines the problem of multi-live streaming, where an encoding service performs coordinated CDN-aware bitrate ladder adaptation for multiple live streams delivered to heterogeneous clients in different zones via CDN edge servers. We design ALPHAS, a practical and scalable system for multi-live streaming that accounts for CDNs’ bandwidth constraints and the encoder’s computational capabilities, and also supports stream prioritization. ALPHAS, aware of both video content and streaming context, seamlessly integrates with the end-to-end streaming pipeline and operates in real time, transparently to clients and encoding algorithms. We develop a cloud-based ALPHAS implementation and evaluate it through extensive real-world and trace-driven experiments against four prominent baselines that encode each stream independently. The evaluation shows that ALPHAS outperforms the baselines, improving quality of experience, end-to-end latency, and per-stream processing by up to 23%, 21%, and 49%, respectively.
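To illustrate what coordinated, priority-aware ladder adaptation under a shared bandwidth constraint means, here is a deliberately simplified greedy sketch; ALPHAS itself uses a real-time optimization, not this greedy loop, and all stream data below is invented: each stream starts at its lowest rung, and the next-higher rung repeatedly goes to the stream with the largest priority-weighted quality gain per kbps until the CDN edge bandwidth is exhausted.

```python
def build_ladders(streams, edge_bandwidth_kbps):
    """Greedy toy version of coordinated ladder construction.
    streams: name -> (priority, [(bitrate_kbps, quality), ...] low->high).
    Returns the top granted bitrate per stream."""
    chosen = {name: 0 for name in streams}            # granted rung index
    used = sum(s[1][0][0] for s in streams.values())  # everyone gets rung 0
    while True:
        best = None
        for name, (prio, rungs) in streams.items():
            i = chosen[name]
            if i + 1 >= len(rungs):
                continue                              # ladder exhausted
            extra = rungs[i + 1][0] - rungs[i][0]
            if used + extra > edge_bandwidth_kbps:
                continue                              # would exceed the cap
            gain = prio * (rungs[i + 1][1] - rungs[i][1]) / extra
            if best is None or gain > best[0]:
                best = (gain, name, extra)
        if best is None:
            return {n: streams[n][1][chosen[n]][0] for n in streams}
        _, name, extra = best
        chosen[name] += 1
        used += extra
```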

 

Authors: Emanuele Artioli (Alpen-Adria Universität Klagenfurt, Austria), Daniele Lorenzi (Alpen-Adria Universität Klagenfurt, Austria), Farzad Tashtarian (Alpen-Adria Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria Universität Klagenfurt, Austria)

Event: ACM 4th Mile-High Video Conference (MHV’25), 18–20 February 2025 |
Denver, CO, USA

Abstract: The demand for accessible, multilingual video content has grown significantly with the global rise of streaming platforms, social media, and online learning. Traditional solutions for making content accessible across languages include subtitles, even automatically generated ones such as those offered by YouTube, and synthesized voiceovers, offered, for example, by the Yandex Browser. Subtitles are cost-effective and reflect the original voice of the speaker, which is often essential for authenticity. However, they require viewers to divide their attention between reading text and watching visuals, which can diminish engagement, especially for highly visual content. Synthesized voiceovers, on the other hand, eliminate this need by providing an auditory translation. Still, they typically lack the emotional depth and unique vocal characteristics of the original speaker, which can affect the viewing experience and disconnect audiences from the intended pathos of the content. A straightforward solution would be to have the original actor “perform” in every language, thereby preserving the traits that define their character or narration style. However, recording actors in multiple languages is impractical, time-intensive, and expensive, especially for widely distributed media.

By leveraging generative AI, we aim to develop a client-side tool, to be incorporated into a dedicated video streaming player, that combines the accessibility of multilingual dubbing with the authenticity of the original speaker’s performance, effectively allowing a single actor to deliver their voice in any language. To the best of our knowledge, no current streaming system can capture the speaker’s unique voice or emotional tone.
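One plausible shape for such a client-side pipeline is sketched below. Every stage is a placeholder stub, and the stage names (transcribe, translate, clone_voice) are our assumptions about how the tool might be structured, not an API the paper describes.

```python
def transcribe(audio):
    """Speech-to-text on the original audio track (stub)."""
    return "hello world"

def translate(text, target_lang):
    """Machine translation of the transcript (stub)."""
    return {"de": "hallo welt"}.get(target_lang, text)

def clone_voice(text, speaker_profile):
    """Voice-cloned TTS preserving the original speaker's timbre (stub)."""
    return f"[{speaker_profile}] {text}"

def dub(audio, target_lang, speaker_profile="original-speaker"):
    """Chain the stages; a player would run this per segment, client-side."""
    text = transcribe(audio)
    translated = translate(text, target_lang)
    return clone_voice(translated, speaker_profile)
```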

Authors: Daniele Lorenzi (Alpen-Adria Universität Klagenfurt, Austria), Farzad Tashtarian (Alpen-Adria Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria Universität Klagenfurt, Austria)

Event: ACM 4th Mile-High Video Conference (MHV’25), 18–20 February 2025 |
Denver, CO, USA

Abstract: HTTP Adaptive Streaming (HAS) dominates video delivery but faces sustainability issues due to its energy demands. Current adaptive bitrate (ABR) algorithms prioritize quality, neglecting the energy costs of higher bitrates. Super-resolution (SR) can enhance quality but increases energy use, especially for GPU-equipped devices in competitive networks. RecABR addresses these challenges by clustering clients based on device attributes (e.g., GPU, resolution) and optimizing parameters via linear programming. This reduces computational overhead and ensures energy-efficient, quality-aware recommendations. Using metrics like VMAF and compressed SR models, RecABR minimizes storage and processing costs, making it scalable for CDN edge deployment.
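The clustering-plus-recommendation structure can be sketched as follows; RecABR solves the real assignment with linear programming rather than this per-cluster minimum, and the option table, quality scores, and energy figures are illustrative assumptions: clients are grouped by device attributes, and each cluster gets the lowest-energy feasible configuration (bitrate, super-resolution on/off) that meets a quality floor.

```python
def cluster_clients(clients):
    """Group clients by device attributes (has_gpu, display resolution)."""
    clusters = {}
    for c in clients:
        clusters.setdefault((c["has_gpu"], c["resolution"]), []).append(c)
    return clusters

# (bitrate_kbps, use_super_resolution) -> (quality score, energy in J/s).
# SR trades lower network bitrate for extra GPU energy on the client.
OPTIONS = {
    (4000, False): (92, 3.0),
    (2000, True): (90, 5.5),
    (2000, False): (83, 2.0),
}

def recommend(cluster_key, min_quality=85, max_bitrate=10_000):
    """Lowest-energy option meeting the quality floor and bandwidth cap;
    SR options are only feasible for GPU-equipped clusters."""
    has_gpu, _ = cluster_key
    feasible = [(opt, qe) for opt, qe in OPTIONS.items()
                if qe[0] >= min_quality
                and opt[0] <= max_bitrate
                and (has_gpu or not opt[1])]
    return min(feasible, key=lambda kv: kv[1][1])[0]
```

Under a tight bandwidth cap, a GPU cluster gets the SR option despite its higher energy cost, while non-GPU clusters never do; this mirrors the device-aware trade-off the abstract describes.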