Perceptual Quality Assessment of Spatial Videos on Apple Vision Pro

ACMMM IXR 2025

October 27 – October 31, 2025

Dublin, Ireland

[PDF]

Afshin Gholami, Sara Baldoni, Federica Battisti, Wei Zhou, Christian Timmerer, Hadi Amirpour

Abstract: Immersive stereoscopic/3D video experiences have entered a new era with the advent of smartphones capable of capturing spatial videos, advanced video codecs optimized for multiview content, and Head-Mounted Displays (HMDs) that natively support spatial video playback. In this work, we evaluate the quality of spatial videos encoded using an optimized x265 software implementation of MV-HEVC on the Apple Vision Pro (AVP) and compare them with their corresponding 2D versions through a subjective test.

To support this study, we introduce SV-QoE, a novel dataset comprising video clips rendered with a twin-camera setup that replicates the human inter-pupillary distance. Our analysis reveals that spatial videos consistently deliver a superior Quality of Experience (QoE) when encoded at identical bitrates, with the benefits becoming more pronounced at higher bitrates. Additionally, renderings at closer distances exhibit significantly enhanced video quality and depth perception, highlighting the impact of spatial proximity on immersive viewing experiences.

We further analyze the impact of disparity on depth perception and examine the correlation between Mean Opinion Score (MOS) and established objective quality metrics such as PSNR, SSIM, MS-SSIM, VMAF, and AVQT. Additionally, we explore how video quality and depth perception together influence overall quality judgments.
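To make the correlation analysis concrete, here is a minimal sketch (with made-up placeholder numbers, not results from the SV-QoE study) of how per-sequence MOS values could be correlated with objective metric scores using Pearson (PLCC) and Spearman (SROCC) statistics:

```python
# Minimal sketch: correlating MOS with objective quality metrics.
# The numbers below are placeholders, not values from the SV-QoE study.
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos = np.array([2.1, 3.4, 3.9, 4.5, 4.7])  # hypothetical per-sequence MOS

metrics = {
    "PSNR": np.array([31.2, 34.8, 36.1, 38.5, 40.2]),
    "SSIM": np.array([0.88, 0.92, 0.94, 0.96, 0.97]),
    "VMAF": np.array([58.0, 71.5, 79.3, 88.1, 93.4]),
}

for name, scores in metrics.items():
    plcc, _ = pearsonr(scores, mos)    # linear correlation
    srocc, _ = spearmanr(scores, mos)  # rank-order correlation
    print(f"{name}: PLCC={plcc:.3f}, SROCC={srocc:.3f}")
```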

 

SVD: Spatial Video Dataset

ACM Multimedia 2025

October 27 – October 31, 2025

Dublin, Ireland

[PDF]

MH Izadimehr, Milad Ghanbari, Guodong Chen, Wei Zhou, Xiaoshuai Hao, Mallesham Dasari, Christian Timmerer, Hadi Amirpour

Abstract: Stereoscopic video has long been the subject of research due to its ability to deliver immersive three-dimensional content to a wide range of applications, from virtual and augmented reality to advanced human–computer interaction. The dual-view format inherently provides binocular disparity cues that enhance depth perception and realism, making it indispensable for fields such as telepresence, 3D mapping, and robotic vision. Until recently, however, end-to-end pipelines for capturing, encoding, and viewing high-quality 3D video were neither widely accessible nor optimized for consumer-grade devices. Today's smartphones, such as the iPhone Pro, offer built-in support for stereoscopic video capture and hardware-accelerated encoding, while modern HMDs such as the Apple Vision Pro (AVP) and Meta Quest 3 provide seamless playback with minimal user intervention. Apple refers to this streamlined workflow as spatial video. Making the full stereoscopic video pipeline available to everyone has opened the door to new applications. Despite these advances, there remains a notable absence of publicly available datasets that cover the complete spatial video pipeline on consumer platforms, hindering reproducibility and comparative evaluation of emerging algorithms.

In this paper, we introduce SVD, a spatial video dataset comprising 300 five-second video sequences: 150 captured using an iPhone Pro and 150 with an AVP. Additionally, 10 longer videos with a minimum duration of 2 minutes have been recorded. The SVD is publicly released under an open-source license to facilitate research in codec performance evaluation, subjective and objective Quality of Experience assessment, depth-based computer vision, stereoscopic video streaming, and other emerging 3D applications such as neural rendering and volumetric capture. Link to the dataset: https://cd-athena.github.io/SVD/.

 

Nature-1k: The Raw Beauty of Nature in 4K at 60FPS

ACM Multimedia 2025

October 27 – October 31, 2025

Dublin, Ireland

[PDF]

Mohammad Ghasempour (AAU, Austria), Hadi Amirpour (AAU, Austria), Christian Timmerer (AAU, Austria)

Abstract: The push toward data-driven video processing, combined with recent advances in video coding and streaming technologies, has fueled the need for diverse, large-scale, and high-quality video datasets. However, the limited availability of such datasets remains a key barrier to the development of next-generation video processing solutions. In this paper, we introduce Nature-1k, a large-scale video dataset consisting of 1,000 professionally captured 4K Ultra High Definition (UHD) videos, each recorded at 60 fps. The dataset covers a wide range of environments, lighting conditions, texture complexities, and motion patterns. To maintain temporal consistency, which is crucial for spatio-temporal learning applications, the dataset avoids scene cuts within the sequences. We further characterize the dataset using established metrics, including spatial and temporal video complexity, as well as colorfulness, brightness, and contrast distributions. Moreover, Nature-1k includes a compressed version to support rapid prototyping and lightweight testing. The quality of the compressed videos is evaluated using four commonly used video quality metrics: PSNR, SSIM, MS-SSIM, and VMAF. Finally, we compare Nature-1k with existing datasets to demonstrate its superior quality and content diversity. The dataset is suitable for a wide range of applications, including Generative Artificial Intelligence (AI), video super-resolution and enhancement, video interpolation, video coding, and adaptive video streaming optimization. Dataset URL: Link
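As an illustration of the kind of content characterization mentioned above, the following sketch computes per-frame brightness, contrast, and Hasler–Süsstrunk colorfulness for a video with OpenCV; the filename is a placeholder and this is not the dataset's own tooling:

```python
# Sketch: per-frame brightness, contrast, and colorfulness (Hasler & Süsstrunk)
# for a video file. The path is a placeholder; this is not Nature-1k's tooling.
import cv2
import numpy as np

def frame_stats(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    b, g, r = cv2.split(frame_bgr.astype(np.float64))
    rg, yb = r - g, 0.5 * (r + g) - b
    colorfulness = np.sqrt(rg.std() ** 2 + yb.std() ** 2) \
        + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return gray.mean(), gray.std(), colorfulness  # brightness, contrast, colorfulness

cap = cv2.VideoCapture("example_sequence.mp4")  # placeholder filename
stats = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    stats.append(frame_stats(frame))
cap.release()

if stats:
    print("mean brightness/contrast/colorfulness:", np.mean(stats, axis=0))
```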

Receiving Kernel-Level Insights via eBPF: Can ABR Algorithms Adapt Smarter?

Würzburg Workshop on Next-Generation Communication Networks (WueWoWAS) 2025

6 – 8 Oct 2025, Würzburg, Germany

[PDF]

Mohsen Ghasemi (Sharif University of Technology, Iran); Daniele Lorenzi (Alpen-Adria-Universität Klagenfurt, Austria); Mahdi Dolati (Sharif University of Technology, Iran); Farzad Tashtarian (Alpen-Adria Universität Klagenfurt, Austria); Sergey Gorinsky (IMDEA Networks Institute, Spain); Christian Timmerer (Alpen-Adria-Universität Klagenfurt & Bitmovin, Austria)

Abstract: The rapid rise of video streaming services such as Netflix and YouTube has made video delivery the largest driver of global Internet traffic, including on mobile networks such as 5G and the upcoming 6G. To maintain playback quality, client devices employ Adaptive Bitrate (ABR) algorithms that adjust video quality based on metrics like available bandwidth and buffer occupancy. However, these algorithms often react slowly to sudden bandwidth fluctuations due to limited visibility into network conditions, leading to stall events that significantly degrade the user’s Quality of Experience (QoE). In this work, we introduce CaBR, a Congestion-aware adaptive BitRate decision module designed to operate on top of existing ABR algorithms. CaBR enhances video streaming performance by leveraging real-time, in-kernel network telemetry collected via the extended Berkeley Packet Filter (eBPF). By utilizing congestion metrics such as queue lengths observed at network switches, CaBR refines the bitrate selection of the underlying ABR algorithm for upcoming segments, enabling faster adaptation to changing network conditions. Our evaluation shows that CaBR significantly reduces playback stalls and improves QoE by up to 25% compared to state-of-the-art approaches in a congested environment.
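The following is a purely illustrative sketch of the general idea of refining an ABR decision with a kernel-reported congestion signal; it is not CaBR's actual algorithm, and the threshold, scaling rule, and bitrate ladder are assumptions:

```python
# Illustrative sketch (not CaBR's actual logic): cap the bitrate chosen by an
# underlying ABR algorithm when kernel-level telemetry reports congestion.
def refine_bitrate(abr_bitrate_kbps, queue_occupancy, available_ladder_kbps,
                   congestion_threshold=0.7):
    """queue_occupancy: normalized switch queue length in [0, 1], e.g. exported
    by an eBPF program and read from a BPF map (assumed interface)."""
    if queue_occupancy <= congestion_threshold:
        return abr_bitrate_kbps  # no congestion signal: keep ABR's choice
    # Scale the target down in proportion to queue occupancy, then snap to the
    # highest ladder rung not exceeding the scaled target.
    scaled = abr_bitrate_kbps * (1.0 - queue_occupancy)
    feasible = [r for r in available_ladder_kbps if r <= scaled]
    return max(feasible) if feasible else min(available_ladder_kbps)

# Example: ABR picked 4500 kbps, but the reported queue is 85% full.
print(refine_bitrate(4500, 0.85, [1000, 2500, 4500, 8000]))  # -> 1000
```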

 


Cross-Modal Scene Semantic Alignment for Image Complexity Assessment

British Machine Vision Conference (BMVC) 2025

November, 2025

Sheffield, UK

[PDF]

Yuqing Luo, Yixiao Li, Jiang Liu, Jun Fu, Hadi Amirpour, Guanghui Yue, Baoquan Zhao, Padraig Corcoran, Hantao Liu, Wei Zhou

Abstract: Image complexity assessment (ICA) is a challenging task in perceptual evaluation due to the subjective nature of human perception and the inherent semantic diversity in real-world images. Existing ICA methods predominantly rely on hand-crafted or shallow convolutional neural network-based features of a single visual modality, which are insufficient to fully capture the perceived representations closely related to image complexity. Recently, cross-modal scene semantic information has been shown to play a crucial role in various computer vision tasks, particularly those involving perceptual understanding. However, the exploration of cross-modal scene semantic information in the context of ICA remains unaddressed. Therefore, in this paper, we propose a novel ICA method called Cross-Modal Scene Semantic Alignment (CM-SSA), which leverages scene semantic alignment from a cross-modal perspective to enhance ICA performance, enabling complexity predictions to be more consistent with subjective human perception. Specifically, the proposed CM-SSA consists of a complexity regression branch and a scene semantic alignment branch. The complexity regression branch estimates image complexity levels under the guidance of the scene semantic alignment branch, while the scene semantic alignment branch is used to align images with corresponding text prompts that convey rich scene semantic information by pair-wise learning. Extensive experiments on several ICA datasets demonstrate that the proposed CM-SSA significantly outperforms state-of-the-art approaches.
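For intuition, here is a highly simplified PyTorch sketch of a generic two-branch design of this kind, pairing a complexity-regression head with a pair-wise image–text alignment loss; it is not the CM-SSA architecture, and all encoders and dimensions are placeholders:

```python
# Highly simplified sketch of a two-branch design: a complexity-regression head
# on top of an image encoder, plus an image-text alignment (contrastive) branch.
# This is NOT the CM-SSA architecture; encoders and dimensions are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchComplexityModel(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # Placeholder encoders; a real system would use pretrained vision/text models.
        self.image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, embed_dim))
        self.text_encoder = nn.Linear(300, embed_dim)   # e.g. pooled prompt embeddings
        self.regressor = nn.Linear(embed_dim, 1)        # complexity regression branch

    def forward(self, images, text_feats):
        img_emb = F.normalize(self.image_encoder(images), dim=-1)
        txt_emb = F.normalize(self.text_encoder(text_feats), dim=-1)
        complexity = self.regressor(img_emb).squeeze(-1)
        return complexity, img_emb, txt_emb

def alignment_loss(img_emb, txt_emb, temperature=0.07):
    # Pair-wise contrastive alignment: matching image/text pairs on the diagonal.
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for a batch of images and prompts.
model = TwoBranchComplexityModel()
images, prompts, labels = torch.randn(4, 3, 64, 64), torch.randn(4, 300), torch.rand(4)
score, img_emb, txt_emb = model(images, prompts)
loss = F.mse_loss(score, labels) + alignment_loss(img_emb, txt_emb)
loss.backward()
```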

In July 2025, the ATHENA Christian Doppler Laboratory hosted two interns working on the following topics:

  • Leon Kordasch: Large-scale 4K 60fps video dataset
  • Theresa Petschenig: Video generation and quality assessment

At the conclusion of their internships, the interns showcased their projects and findings, earning official certificates from the university. The collaboration proved to be a rewarding experience for both the interns and the researchers at ATHENA. Through personalized mentorship, hands-on training, and ongoing support, the interns benefited from an enriched learning journey. This comprehensive guidance enabled them to build strong practical skills while deepening their understanding of research methodologies and technologies in the video streaming domain. We sincerely thank both interns for their enthusiasm, dedication, and insightful feedback, which contributed meaningfully to the ATHENA lab’s ongoing efforts.

Leon Kordasch: My internship at ATHENA was an incredibly valuable experience. The team was welcoming and supportive, and I especially appreciated the guidance of my supervisor, Mohammad Ghasempour, who did a great job explaining the theoretical background and technical concepts needed for my work. During my time there, I developed a high-quality and diverse 4K60 video dataset for applications such as AI training, real-time upscaling, and advanced video encoding research.

Theresa Petschenig: My four-week internship at ATHENA was a really enjoyable and meaningful experience. I worked on a project related to video generation and quality assessment, which allowed me to dive into some fascinating topics. I got a much better understanding of how AI-generated videos are created and evaluated, and what makes them look realistic. The internship gave me a perfect balance of practical work and learning new concepts. My supervisor, Yiying, was very nice and helpful throughout the internship. The atmosphere in the office was calm and welcoming, and the team was really friendly. I’m grateful for everything I’ve learned and for the chance to be part of such a supportive environment. This experience gave me both valuable knowledge and great memories.

 

diveXplore – An Open-Source Software for Modern Video Retrieval with Image/Text Embeddings

ACM Multimedia 2025

October 27 – October 31, 2025

Dublin, Ireland

[PDF]

Mario Leopold (AAU, Austria), Farzad Tashtarian (AAU, Austria), Klaus Schöffmann (AAU, Austria)

Abstract: Effective video retrieval in large-scale datasets presents a significant challenge, with existing tools often being too complex, lacking sufficient retrieval capabilities, or being too slow for rapid search tasks. This paper introduces diveXplore, an open-source software designed for interactive video retrieval. Due to its success in various competitions like the Video Browser Showdown (VBS) and the Interactive Video Retrieval 4 Beginners (IVR4B), as well as its continued development since 2017, diveXplore is a solid foundation for various kinds of retrieval tasks. The system is built on a three-layer architecture, comprising a backend for offline preprocessing, a middleware with a Node.js and Python server for query handling, and a MongoDB for metadata storage, as well as an Angular-based frontend for user interaction. Key functionalities include free-text search using natural language, temporal queries, similarity search, and other specialized search strategies. By open-sourcing diveXplore, we aim to establish a solid baseline for future research and development in the video retrieval community, encouraging contributions and adaptations for a wide range of use cases, even beyond competitive settings.
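To illustrate the embedding-based free-text search idea (not diveXplore's actual implementation), the sketch below ranks keyframes by cosine similarity to a query embedding; the embeddings are random placeholders standing in for outputs of a joint image/text model such as CLIP:

```python
# Sketch of embedding-based free-text search over keyframes, in the spirit of
# the retrieval functionality described above (not diveXplore's actual code).
import numpy as np

def cosine_top_k(query_vec, keyframe_matrix, k=5):
    """Return indices and scores of the k keyframes most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    m = keyframe_matrix / np.linalg.norm(keyframe_matrix, axis=1, keepdims=True)
    scores = m @ q
    order = np.argsort(-scores)[:k]
    return order, scores[order]

# Placeholder data: 1000 keyframes with 512-dimensional embeddings; the query
# vector would come from embedding a prompt such as "a person riding a bike".
rng = np.random.default_rng(0)
keyframes = rng.normal(size=(1000, 512))
query = rng.normal(size=512)
indices, scores = cosine_top_k(query, keyframes)
print(indices, scores)
```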

Kseniia, Felix, and Tom at Video Game Cultures 2025
Prague, Czech Republic, 10 – 12 September 2025

Kseniia, Felix, and Tom presented at VGC 2025 in Prague. This academic research conference, which examines video games from a variety of angles, was also held in Klagenfurt two years ago and is likely to be hosted here again in the near future.

Author: Kseniia Harshina
Title: Traces of Memory, Traces of Home: Trauma-Aware Environmental Storytelling in Games
Abstract: This presentation explores how game environments can function as emotional architectures, spaces that do not just represent trauma, but embody it. In particular, I examine how digital environments can reflect fractured relationships to memory, identity, and home. I propose a twofold approach: a critical reading of trauma in environmental storytelling, and a participatory design method grounded in co-creation and lived experience.
First, I examine how games such as Silent Hill 2 use space to externalize grief, memory, and emotional fragmentation. These environments are not just settings; they are structured by loss. They invite players to navigate emotional landscapes through movement and embodiment instead of exposition.
Second, I reflect on my research-creation work with people who have experienced forced migration, in which we co-design adaptive environments that shift in response to memory, emotion, and identity. I would like to introduce a dual-role framework: Survivors, who shape and embed memory traces into environments; and Witnesses, who explore these spaces with limited agency. This distinction reflects different relationships to trauma: those who have lived it, and those invited to listen. This model invites reflection on authorship, the emotional labor of sharing trauma, and the ethics of game design. It also challenges dominant design assumptions, suggesting that withholding agency can be a powerful act of care. This work argues that trauma-aware environmental storytelling offers a way to reimagine home, not as a static setting, but as a shifting, layered space where pain and longing coexist.
Ultimately, this approach makes space for grief and displacement not just thematically, but architecturally. What remains are not just spaces, but traces, of memory, of home, of stories that ask us to listen more deeply to others, and to the pasts we carry with us.

 

Author: Felix Schniz
Title: In Cardboard Space, No One Can Hear Your Scream: The Alien Universe Between Digital and Analogue Game Experiences
Abstract: The science-fiction horror that began with Alien (Scott 1979) has long since evolved into a transmedia universe spanning a diverse set of media artefacts (cf. Heinze 2019). Its unique selling points – dark, confined environments and the clearly defined protagonist/antagonist conflict between alien Xenomorphs and human Colonial Marines taking place within them – make the setting an especially favourable topic for game adaptations. Intense digital survival horror games, such as Alien: Isolation (Creative Assembly 2014), have received a fair share of academic attention in game studies (cf. Švelch 2020). Analogue Alien games and their unique mechanical potential for environmental storytelling, however, have yet to receive comparable attention. In my talk, I focus on the analogue interpretations of the Alien universe and how they differ from the digital ones. I identify the opportunities and demands of digital and analogue game spaces that capture the essential experience the universe provides – a storytelling world deeply laden with political, gothic, and evolutionary horrors – and illustrate their manifestation in two-dimensional cardboard and table spaces.
After introducing the universe and the parameters for game design set by its pivotal spaces, I establish a dialogue between spaces of play and represented spaces. Relying on foundational works on transmedia adaptation theories (cf. Hutcheon 2006 and Rauscher 2012), cross-sectioning them with environmental storytelling concepts (cf. Rauscher 2015) and a look at horror in gaming (Perron 2018), I provide an overview of key Alien game adaptations, analysing how said parameters define game mechanics. I focus on the differences in how digital interfaces and systems, as seen in video games such as the already mentioned Alien: Isolation or the recent Aliens: Dark Descent (Tindalos Interactive 2023), compare to the material and social interactions demanded by analogue forms. These observations include rarely discussed works such as the Aliens Predator Customizable Card Game (Ackels et al. 1997), the tabletop war game Aliens Vs Predator: The Hunt Begins (Ewertowski and Olesky 2015), and the board game Aliens: Another Glorious Day in the Corps (Haught 2020).
My outcome is a nuanced understanding of how analogue games adapt an established cinematic universe and environmental storytelling world, revealing specific design strategies employed to evoke a shared yet medium-specific, universe-encapsulating space. I ultimately offer insights for debate on the translation of mechanics required for cross-platform IP adaptation.

Author: Tom Tuček
Title: The Costs of Generative AI in Video Games: Using Locally-Running Models for Sustainability
Abstract: Generative AI is reshaping video game worlds by providing developers and designers with quick and easy access to assets, as well as by allowing for the dynamic creation of environments, dialogue, and narrative during gameplay. While generative AI brings increased potential for creativity and accessibility, it also brings many new issues and questions. We highlight the ethical implications and problems of AI-native games (video games using real-time generative AI as a core part of their design) by focusing on their environmental impact.
Following a short discussion on the ethics of generative AI, we frame the topic within the ongoing climate crisis and investigate the energy demands of models used for and within video games. By comparing the costs of various approaches, we highlight the potential for the use of smaller models in video games, which can run on local end-user machines, such as PCs or game consoles, while using less power. This approach also helps with other issues, such as online dependency and data privacy.
To ground these arguments, we present a case study of our game, One Spell Fits All, an AI-native video game prototype that runs offline on consumer laptops. Preliminary findings show the potential of this approach, showcasing reduced energy consumption while maintaining a high-quality game experience.
Based on these critiques and findings, we propose guidelines for more responsible AI-native video game design, such as prioritizing low-power models and client-side inference, selecting appropriate models for each task, and monitoring the energy consumption of games during the development process.
By looking at AI-native games through the lens of climate ethics, this work contributes to our understanding of the novel field of generative AI in games while also offering best practice approaches for designers, developers, and players committed to greener virtual worlds.

IEEE Conference on Games 2025

Lisbon, Portugal, 26 – 29 August 2025

 

Author: Tom Tuček

Title: Using Large Language Models to Create Meaningful and Dynamic Interactions in Serious Game Contexts

Abstract: Video games have become the most successful entertainment medium, both in terms of financial success and as a carrier of modern culture. At the same time, recent trends in generative artificial intelligence (AI), particularly large language models (LLMs), are bringing about a paradigm shift in how humans interact with games and computers in general. The unpredictability of generative AI has already been utilized to create fun experiences within games, but the same aspect makes it difficult to use in serious contexts (e.g., games dealing with minority status), where unwanted output can potentially cause harm. This doctoral research proposes to find out how LLMs can be used in video games with serious contexts to create and enhance meaningful experiences. Following design science principles, role-playing game (RPG) prototypes that utilize this new technology and deal with serious topics are created and tested for their efficacy in terms of user engagement, narrative coherence, and lasting impact (e.g., changed views or behavior after extended periods of time). Iterative development and validation, through user tests and heuristic evaluations, ensure that the created video game prototypes have the desired effects and findings are incorporated into a framework, which in turn is validated in a long-term study. Other aspects, such as data privacy and latency, are also addressed by focusing on the local deployment of AI models, instead of cloud-based services. The main contribution of this research is a framework that improves the reflected use of generative AI in video games, increasing narrative coherence and player engagement while enabling the creation of games that allow for meaningful, personalized, and dynamic experiences.

IEEE Conference on Games 2025

Lisbon, Portugal, 26 – 29 August 2025

Author: Kseniia Harshina

Title: Developing a Video Game for Empathy and Empowerment in the Context of Forced Migration Experiences

Abstract: This dissertation explores how video games can be designed to foster empathy and empowerment in the context of forced migration. While existing games often focus on raising awareness, they frequently exclude displaced individuals from the design process. To address this, the project proposes a participatory, AI-assisted storytelling system that allows people with lived migration experience to co-create and replay interactive scenes based on personal memories.

The research follows a three-phase structure: data collection through surveys and a participatory game jam; iterative development of an interview-to-game prototype using a locally run large language model (LLM); and a mixed-methods evaluation. The system includes an interview-based chatbot interface, automatic scene generation, and post-game reflection tools. The evaluation examines the system’s emotional, psychological, and representational impact across three player groups: scene authors (migrant participants), other displaced individuals, and players without migration experience.

The project contributes to generative AI research, HCI, and game studies by combining participatory design, storytelling, and technical implementation. It offers both a theoretical framework and a functional prototype to inform future practices in socially responsive game design.