SEED: Streaming Energy and Emission Dataset

Authors: Samira Afzal (Baylor University), Narges Mehran (Salzburg Research Forschungsgesellschaft mbH), Farzad Tashtarian (AAU, Austria), Andrew C. Freeman (Baylor University), Radu Prodan (University of Innsbruck), Christian Timmerer (AAU, Austria)

Venue: IEEE VCIP 2025, December 1 – December 4, 2025, Klagenfurt, Austria

Abstract: The environmental impact of video streaming is gaining more attention due to its growing share in global internet traffic and energy consumption. To support accurate and transparent sustainability assessments, we present SEED (Streaming Energy and Emission Dataset): an open dataset for estimating energy usage and CO2 emissions in adaptive video streaming. SEED comprises over 500 video segments. It provides segment-level measurements of energy consumption and emissions for two primary stages: provisioning, which encompasses encoding and storage on cloud infrastructure, and end-user consumption, including network interface retrieval, video decoding, and display on end-user devices. The dataset covers multiple codecs (AVC, HEVC), resolutions, bitrates, cloud instance types, and geographic regions, reflecting real-world variations in computing efficiency and regional carbon intensity. By combining empirical benchmarks with component-level energy models, SEED enables detailed analysis and supports the development of energy- and emission-aware adaptive bitrate (ABR) algorithms. The dataset is publicly available at: https://github.com/cd-athena/SEED.

SEED is available at: https://github.com/cd-athena/SEED
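For illustration, a segment-level table exported from SEED could be explored with pandas roughly as follows. The file name and column names (codec, resolution_p, bitrate_kbps, energy_wh, co2_g) are assumptions made for this sketch, not the dataset's actual schema, which is documented in the repository.

```python
# Hypothetical sketch: aggregating segment-level energy/emission records.
# Column names are illustrative assumptions, not the actual SEED schema.
import pandas as pd

df = pd.read_csv("seed_segments.csv")  # assumed CSV export of the dataset

# Average energy and emissions per codec and resolution.
summary = (
    df.groupby(["codec", "resolution_p"])[["energy_wh", "co2_g"]]
      .mean()
      .reset_index()
)
print(summary)

# Pick the lowest-emission ladder rung within a bitrate budget.
budget = df[df["bitrate_kbps"] <= 3000]
best = budget.sort_values("co2_g").iloc[0]
print(best[["codec", "resolution_p", "bitrate_kbps", "co2_g"]])
```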

NeVES: Real-Time Neural Video Enhancement for HTTP Adaptive Streaming

IEEE VCIP 2025

December 1 – December 4, 2025

Klagenfurt, Austria

[PDF]

Daniele Lorenzi, Farzad Tashtarian, Christian Timmerer

Abstract: Enhancing low-quality video content is a task that has attracted particular interest since recent developments in deep learning. Since most of the video content consumed worldwide is delivered over the Internet via HTTP Adaptive Streaming (HAS), implementing these techniques in web browsers would ease access to visually enhanced content on user devices.

In this paper, we present NeVES, a multimedia system capable of enhancing the quality of video content streamed through HAS in real time.

The demo is available at: https://github.com/cd-athena/NeVES.

Perceptual Quality Assessment of Spatial Videos on Apple Vision Pro

ACMMM IXR 2025

October 27 – October 31, 2025

Dublin, Ireland

[PDF]

Afshin Gholami, Sara Baldoni, Federica Battisti, Wei Zhou, Christian Timmerer, Hadi Amirpour

Abstract: Immersive stereoscopic/3D video experiences have entered a new era with the advent of smartphones capable of capturing spatial videos, advanced video codecs optimized for multiview content, and Head-Mounted Displays (HMDs) that natively support spatial video playback. In this work, we evaluate the quality of spatial videos encoded using optimized x265 software implementations of MV-HEVC on the Apple Vision Pro (AVP) and compare them with their corresponding 2D versions through a subjective test.

To support this study, we introduce SV-QoE, a novel dataset comprising video clips rendered with a twin-camera setup that replicates the human inter-pupillary distance. Our analysis reveals that spatial videos consistently deliver a superior Quality of Experience (QoE) when encoded at identical bitrates, with the benefits becoming more pronounced at higher bitrates. Additionally, renderings at closer distances exhibit significantly enhanced video quality and depth perception, highlighting the impact of spatial proximity on immersive viewing experiences.

We further analyze the impact of disparity on depth perception and examine the correlation between Mean Opinion Score (MOS) and established objective quality metrics such as PSNR, SSIM, MS-SSIM, VMAF, and AVQT. Additionally, we explore how video quality and depth perception together influence overall quality judgments.
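As a minimal sketch of the MOS-versus-metric correlation analysis mentioned above, Pearson and Spearman correlations can be computed with SciPy; the scores below are placeholders, not results from SV-QoE.

```python
# Sketch of MOS vs. objective metric correlation analysis.
# The values are made-up placeholders, not data from the SV-QoE study.
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos = np.array([2.1, 3.4, 3.9, 4.3, 4.6])       # subjective scores per sequence
vmaf = np.array([35.0, 58.0, 71.0, 82.0, 90.0])  # matching objective scores

plcc, _ = pearsonr(vmaf, mos)    # linear correlation (prediction accuracy)
srocc, _ = spearmanr(vmaf, mos)  # rank correlation (prediction monotonicity)
print(f"PLCC={plcc:.3f}, SROCC={srocc:.3f}")
```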

 

SVD: Spatial Video Dataset

ACM Multimedia 2025

October 27 – October 31, 2025

Dublin, Ireland

[PDF]

MH Izadimehr, Milad Ghanbari, Guodong Chen, Wei Zhou, Xiaoshuai Hao, Mallesham Dasari, Christian Timmerer, Hadi Amirpour

Abstract: Stereoscopic video has long been the subject of research due to its ability to deliver immersive three-dimensional content to a wide range of applications, from virtual and augmented reality to advanced human–computer interaction. The dual-view format inherently provides binocular disparity cues that enhance depth perception and realism, making it indispensable for fields such as telepresence, 3D mapping, and robotic vision. Until recently, however, end-to-end pipelines for capturing, encoding, and viewing high-quality 3D video were neither widely accessible nor optimized for consumer-grade devices. Today’s smartphones, such as the iPhone Pro, and modern HMDs, such as the Apple Vision Pro (AVP), offer built-in support for stereoscopic video capture and hardware-accelerated encoding, with seamless playback on devices like the AVP and Meta Quest 3 requiring minimal user intervention. Apple refers to this streamlined workflow as spatial video. Making the full stereoscopic video pipeline available to everyone has enabled new applications. Despite these advances, there remains a notable absence of publicly available datasets that cover the complete spatial video pipeline on consumer platforms, hindering reproducibility and comparative evaluation of emerging algorithms.

In this paper, we introduce SVD, a spatial video dataset comprising 300 five-second video sequences, i.e., 150 captured using an iPhone Pro and 150 with an AVP. Additionally, 10 longer videos with a minimum duration of 2 minutes have been recorded. SVD is publicly released under an open-source license to facilitate research in codec performance evaluation, subjective and objective Quality of Experience assessment, depth-based computer vision, stereoscopic video streaming, and other emerging 3D applications such as neural rendering and volumetric capture. Link to the dataset: https://cd-athena.github.io/SVD/.

 

Nature-1k: The Raw Beauty of Nature in 4K at 60FPS

ACM Multimedia 2025

October 27 – October 31, 2025

Dublin, Ireland

[PDF]

Mohammad Ghasempour (AAU, Austria), Hadi Amirpour (AAU, Austria), Christian Timmerer (AAU, Austria)

Abstract: The push toward data-driven video processing, combined with recent advances in video coding and streaming technologies, has fueled the need for diverse, large-scale, and high-quality video datasets. However, the limited availability of such datasets remains a key barrier to the development of next-generation video processing solutions. In this paper, we introduce Nature-1k, a large-scale video dataset consisting of 1000 professionally captured 4K Ultra High Definition (UHD) videos, each recorded at 60 fps. The dataset covers a wide range of environments, lighting conditions, texture complexities, and motion patterns. To maintain temporal consistency, which is crucial for spatio-temporal learning applications, the dataset avoids scene cuts within the sequences. We further characterize the dataset using established measures, including spatial and temporal video complexity as well as colorfulness, brightness, and contrast distributions. Moreover, Nature-1k includes a compressed version to support rapid prototyping and lightweight testing. The quality of the compressed videos is evaluated using four commonly used video quality metrics: PSNR, SSIM, MS-SSIM, and VMAF. Finally, we compare Nature-1k with existing datasets to demonstrate its superior quality and content diversity. The dataset is suitable for a wide range of applications, including Generative Artificial Intelligence (AI), video super-resolution and enhancement, video interpolation, video coding, and adaptive video streaming optimization. Dataset URL: Link
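For context, per-frame brightness, RMS contrast, and Hasler-Süsstrunk colorfulness can be computed along the lines below; this is a generic sketch, not necessarily the exact characterization pipeline used for Nature-1k.

```python
# Sketch: simple per-frame colorfulness/brightness/contrast descriptors.
# Generic illustration only, not the Nature-1k characterization code.
import numpy as np

def frame_stats(rgb: np.ndarray):
    """rgb: HxWx3 uint8 frame."""
    r, g, b = [rgb[..., i].astype(np.float64) for i in range(3)]
    # Hasler-Suesstrunk colorfulness measure
    rg = r - g
    yb = 0.5 * (r + g) - b
    colorfulness = (np.hypot(rg.std(), yb.std())
                    + 0.3 * np.hypot(rg.mean(), yb.mean()))
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    brightness = gray.mean()   # mean luma
    contrast = gray.std()      # RMS contrast
    return colorfulness, brightness, contrast

frame = np.random.randint(0, 256, (2160, 3840, 3), dtype=np.uint8)  # dummy 4K frame
print(frame_stats(frame))
```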

Receiving Kernel-Level Insights via eBPF: Can ABR Algorithms Adapt Smarter?

Würzburg Workshop on Next-Generation Communication Networks (WueWoWAS) 2025

6 – 8 Oct 2025, Würzburg, Germany

[PDF]

Mohsen Ghasemi (Sharif University of Technology, Iran); Daniele Lorenzi (Alpen-Adria-Universität Klagenfurt, Austria); Mahdi Dolati (Sharif University of Technology, Iran); Farzad Tashtarian (Alpen-Adria Universität Klagenfurt, Austria); Sergey Gorinsky (IMDEA Networks Institute, Spain); Christian Timmerer (Alpen-Adria-Universität Klagenfurt & Bitmovin, Austria)

Abstract: The rapid rise of video streaming services such as Netflix and YouTube has made video delivery the largest driver of global Internet traffic, including mobile networks such as 5G and the upcoming 6G. To maintain playback quality, client devices employ Adaptive Bitrate (ABR) algorithms that adjust video quality based on metrics like available bandwidth and buffer occupancy. However, these algorithms often react slowly to sudden bandwidth fluctuations due to limited visibility into network conditions, leading to stall events that significantly degrade the user’s Quality of Experience (QoE). In this work, we introduce CaBR, a Congestion-aware adaptive BitRate decision module designed to operate on top of existing ABR algorithms. CaBR enhances video streaming performance by leveraging real-time, in-kernel network telemetry collected via the extended Berkeley Packet Filter (eBPF). By utilizing congestion metrics such as queue lengths observed at network switches, CaBR refines the bitrate selection of the underlying ABR algorithm for upcoming segments, enabling faster adaptation to changing network conditions. Our evaluation shows that CaBR significantly reduces playback stalls and improves QoE by up to 25% compared to state-of-the-art approaches in a congested environment.
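The abstract does not spell out CaBR's decision rule; as a hedged sketch of the general idea, a refinement layer could step down the bitrate proposed by the underlying ABR algorithm whenever the eBPF-reported queue occupancy signals congestion. The thresholds and the normalized queue-occupancy signal below are assumptions, not CaBR's actual algorithm.

```python
# Illustrative sketch only: a congestion-aware cap on top of an existing ABR
# decision. Thresholds and the queue-occupancy signal are assumptions.
def refine_bitrate(abr_choice_kbps: int,
                   ladder_kbps: list[int],
                   queue_occupancy: float) -> int:
    """queue_occupancy: normalized switch queue length in [0, 1],
    e.g. derived from eBPF-collected telemetry."""
    if queue_occupancy < 0.5:
        return abr_choice_kbps                 # no congestion: keep ABR choice
    # Under congestion, step down proportionally to the queue build-up.
    candidates = sorted(r for r in ladder_kbps if r <= abr_choice_kbps)
    steps_down = 1 if queue_occupancy < 0.8 else 2
    idx = max(0, len(candidates) - 1 - steps_down)
    return candidates[idx]

ladder = [500, 1500, 3000, 6000]
print(refine_bitrate(6000, ladder, queue_occupancy=0.85))  # -> 1500
```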

 


Cross-Modal Scene Semantic Alignment for Image Complexity Assessment

British Machine Vision Conference (BMVC) 2025

November, 2025

Sheffield, UK

[PDF]

Yuqing Luo, Yixiao Li, Jiang Liu, Jun Fu, Hadi Amirpour, Guanghui Yue, Baoquan Zhao, Padraig Corcoran, Hantao Liu, Wei Zhou

Abstract: Image complexity assessment (ICA) is a challenging task in perceptual evaluation due to the subjective nature of human perception and the inherent semantic diversity in real-world images. Existing ICA methods predominantly rely on hand-crafted or shallow convolutional neural network-based features of a single visual modality, which are insufficient to fully capture the perceived representations closely related to image complexity. Recently, cross-modal scene semantic information has been shown to play a crucial role in various computer vision tasks, particularly those involving perceptual understanding. However, the exploration of cross-modal scene semantic information in the context of ICA remains unaddressed. Therefore, in this paper, we propose a novel ICA method called Cross-Modal Scene Semantic Alignment (CM-SSA), which leverages scene semantic alignment from a cross-modal perspective to enhance ICA performance, enabling complexity predictions to be more consistent with subjective human perception. Specifically, the proposed CM-SSA consists of a complexity regression branch and a scene semantic alignment branch. The complexity regression branch estimates image complexity levels under the guidance of the scene semantic alignment branch, while the scene semantic alignment branch is used to align images with corresponding text prompts that convey rich scene semantic information by pair-wise learning. Extensive experiments on several ICA datasets demonstrate that the proposed CM-SSA significantly outperforms state-of-the-art approaches.
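As a rough sketch of the two-branch idea described above (not the authors' architecture or training code), a complexity regression head can share pre-extracted image features with an image-text alignment head trained on matched pairs; layer sizes and losses below are illustrative assumptions.

```python
# Minimal two-branch sketch in PyTorch: a shared image feature feeds
# (i) a complexity regression head and (ii) an image-text alignment head.
# Dimensions and losses are assumptions, not CM-SSA itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchICA(nn.Module):
    def __init__(self, img_dim=512, txt_dim=512, emb_dim=256):
        super().__init__()
        self.regressor = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU(),
                                       nn.Linear(128, 1))   # complexity score
        self.img_proj = nn.Linear(img_dim, emb_dim)          # alignment branch
        self.txt_proj = nn.Linear(txt_dim, emb_dim)

    def forward(self, img_feat, txt_feat):
        score = self.regressor(img_feat).squeeze(-1)
        z_img = F.normalize(self.img_proj(img_feat), dim=-1)
        z_txt = F.normalize(self.txt_proj(txt_feat), dim=-1)
        logits = z_img @ z_txt.t()                           # pair-wise similarities
        return score, logits

model = TwoBranchICA()
img, txt = torch.randn(8, 512), torch.randn(8, 512)   # pre-extracted features
score, logits = model(img, txt)
target = torch.arange(8)                               # matched image-text pairs
loss = F.mse_loss(score, torch.rand(8)) + F.cross_entropy(logits, target)
```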

diveXplore – An Open-Source Software for Modern Video Retrieval with Image/Text Embeddings

ACM Multimedia 2025

October 27 – October 31, 2025

Dublin, Ireland

[PDF]

Mario Leopold (AAU, Austria), Farzad Tashtarian (AAU, Austria), Klaus Schöffmann (AAU, Austria)

Abstract: Effective video retrieval in large-scale datasets presents a significant challenge, with existing tools often being too complex, lacking sufficient retrieval capabilities, or being too slow for rapid search tasks. This paper introduces diveXplore, an open-source software designed for interactive video retrieval. Due to its success in various competitions like the Video Browser Showdown (VBS) and the Interactive Video Retrieval 4 Beginners (IVR4B), as well as its continued development since 2017, diveXplore is a solid foundation for various kinds of retrieval tasks. The system is built on a three-layer architecture: a backend for offline preprocessing; a middleware with a Node.js and Python server for query handling and a MongoDB database for metadata storage; and an Angular-based frontend for user interaction. Key functionalities include free-text search using natural language, temporal queries, similarity search, and other specialized search strategies. By open-sourcing diveXplore, we aim to establish a solid baseline for future research and development in the video retrieval community, encouraging contributions and adaptations for a wide range of use cases, even beyond competitive settings.
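A minimal sketch of the embedding-based free-text search such a system builds on (a generic illustration, not diveXplore's middleware code): keyframe embeddings computed during offline preprocessing are ranked by cosine similarity against the embedded text query.

```python
# Generic sketch of free-text retrieval over precomputed image/text embeddings.
# The encoder and storage layout are assumptions, not diveXplore internals.
import numpy as np

def cosine_rank(query_emb: np.ndarray, keyframe_embs: np.ndarray, top_k=10):
    """query_emb: (d,), keyframe_embs: (n, d) L2-normalized embeddings."""
    q = query_emb / np.linalg.norm(query_emb)
    sims = keyframe_embs @ q                 # cosine similarity per keyframe
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]

# Dummy data standing in for embeddings produced in the offline step.
keyframes = np.random.randn(10_000, 512)
keyframes /= np.linalg.norm(keyframes, axis=1, keepdims=True)
query = np.random.randn(512)
ids, scores = cosine_rank(query, keyframes)
print(ids[:5], scores[:5])
```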

GenStream: Semantic Streaming Framework for Generative Reconstruction of Human-centric Media

ACM Multimedia 2025

October 27 – October 31, 2025

Dublin, Ireland

[PDF]

Emanuele Artioli (AAU, Austria), Daniele Lorenzi (AAU, Austria), Shivi Vats (AAU, Austria), Farzad Tashtarian (AAU, Austria), Christian Timmerer (AAU, Austria)

Abstract: Video streaming dominates global internet traffic, yet conventional pipelines remain inefficient for structured, human-centric content such as sports, performance, or interactive media. Standard codecs re-encode entire frames, foreground and background alike, treating all pixels uniformly and ignoring the semantic structure of the scene. This leads to significant bandwidth waste, particularly in scenarios where backgrounds are static and motion is constrained to a few salient actors. We introduce GenStream, a semantic streaming framework that replaces dense video frames with compact, structured metadata. Instead of transmitting pixels, GenStream encodes each scene as a combination of skeletal keypoints, camera viewpoint parameters, and a static 3D background model. These elements are transmitted to the client, where a generative model reconstructs photorealistic human figures and composites them into the 3D scene from the original viewpoint. This paradigm enables extreme compression, achieving over 99.9% bandwidth reduction compared to HEVC. We partially validate GenStream on Olympic figure skating footage and demonstrate its potential for high perceptual fidelity from minimal data. Looking forward, GenStream opens new directions in volumetric avatar synthesis, canonical 3D actor fusion across views, personalized and immersive viewing experiences at arbitrary viewpoints, and lightweight scene reconstruction, laying the groundwork for scalable, intelligent streaming in the post-codec era.
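The abstract does not specify GenStream's wire format; purely to illustrate why per-frame metadata is orders of magnitude smaller than pixel data, a payload might look roughly like this (field names and sizes are hypothetical):

```python
# Hypothetical per-frame payload sketch for a semantic streaming approach.
# Field names and sizes are assumptions for illustration, not GenStream's format.
from dataclasses import dataclass, asdict
import json

@dataclass
class FramePayload:
    frame_idx: int
    keypoints: list[list[float]]   # e.g. 17 skeletal joints x (x, y, confidence)
    camera: dict                   # viewpoint parameters (position, rotation, fov)

payload = FramePayload(
    frame_idx=0,
    keypoints=[[0.51, 0.42, 0.98]] * 17,
    camera={"pos": [0.0, 1.6, 4.0], "rot": [0.0, 180.0, 0.0], "fov": 60.0},
)
encoded = json.dumps(asdict(payload)).encode()
print(len(encoded), "bytes per frame")  # hundreds of bytes vs. kilobytes for a coded frame
```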

Paper Title: STEP-MR: A Subjective Testing and Eye-Tracking Platform for Dynamic Point Clouds in Mixed Reality

Conference Details: EuroXR 2025; September 3 – September 5, 2025; Winterthur, Switzerland

Authors: Shivi Vats (AAU, Austria), Christian Timmerer (AAU, Austria), Hermann Hellwagner (AAU, Austria)

Abstract: The use of point cloud (PC) streaming in mixed reality (MR) environments is of particular interest due to the immersiveness and the six degrees of freedom (6DoF) provided by the 3D content. However, this immersiveness requires significant bandwidth. Innovative solutions have been developed to address these challenges, such as PC compression and/or spatially tiling the PC to stream different portions at different quality levels. This paper presents a brief overview of a Subjective Testing and Eye-tracking Platform for dynamic point clouds in Mixed Reality (STEP-MR) for the Microsoft HoloLens 2. STEP-MR was used to conduct subjective tests (described in another work) with 41 participants, yielding over 2000 responses and more than 150 visual attention maps, the results of which can be used, among other things, to improve dynamic (animated) point cloud streaming solutions mentioned above. Building on our previous platform, the new version now enables eye-tracking tests, including calibration and heatmap generation. Additionally, STEP-MR features modifications to the subjective tests’ functionality, such as a new rating scale and adaptability to participant movement during the tests, along with other user experience changes.
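Heatmap generation from eye-tracking data typically amounts to accumulating gaze hit points and smoothing them; the sketch below illustrates that principle in 2D and is not STEP-MR's HoloLens 2 implementation, which maps gaze onto the point cloud content.

```python
# Generic sketch: turning 2D gaze hit points into a visual attention heatmap.
# Illustrates the principle only, not STEP-MR's actual pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter

def attention_heatmap(gaze_xy: np.ndarray, width=256, height=256, sigma=5.0):
    """gaze_xy: (n, 2) normalized gaze coordinates in [0, 1]."""
    hist, _, _ = np.histogram2d(
        gaze_xy[:, 1] * height, gaze_xy[:, 0] * width,
        bins=[height, width], range=[[0, height], [0, width]],
    )
    heat = gaussian_filter(hist, sigma=sigma)   # smooth fixation counts
    return heat / heat.max() if heat.max() > 0 else heat

gaze = np.random.rand(500, 2)                   # dummy normalized gaze samples
print(attention_heatmap(gaze).shape)
```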