Multimedia Communication


Co-located with ACM Multimedia 2025

URL: https://weizhou-geek.github.io/workshop/MM2025.html

In health and medicine, an immense amount of data is being generated by distributed sensors and cameras, as well as by multimodal digital health platforms that support media such as audio, video, images, 3D geometry, and text. The availability of such multimedia data from medical devices and digital record systems has greatly increased the potential for automated diagnosis. The past several years have witnessed an explosion of interest in, and remarkably fast development of, computer-aided medical investigation using MRI, CT, X-rays, images, point clouds, and more. This workshop focuses on multimedia computing techniques (including mobile and hardware solutions) for health and medicine: it targets real-world data and problems in healthcare, involves a large number of stakeholders, and is closely connected with people’s health.


ACM MM’25 Tutorial: Perceptually Inspired Visual Quality Assessment in Multimedia Communication

ACM MM 2025, October 27, 2025, Dublin, Ireland

https://acmmm2025.org/tutorial/

Tutorial speakers:

  • Wei Zhou (Cardiff University)
  • Hadi Amirpour (University of Klagenfurt)

Tutorial description:

As multimedia services like video streaming, video conferencing, virtual reality (VR), and online gaming continue to expand, ensuring high perceptual quality becomes a priority for maintaining user satisfaction and competitiveness. However, during acquisition, compression, transmission, and storage, multimedia content undergoes various distortions, causing degradation in experienced quality. Thus, perceptual quality assessment, which focuses on evaluating the quality of multimedia content based on human perception, is essential for optimizing user experiences in advanced communication systems. The quality assessment process involves several challenges, including the diverse characteristics of multimedia content (e.g., image, video, VR, point cloud, mesh, and multimodal data), complex distortion scenarios, and varied viewing conditions. The tutorial first presents a detailed overview of principles and methods for perceptually inspired visual quality assessment. This includes both subjective methods, where users directly rate their experience, and objective methods, where algorithms predict human perception based on measurable factors such as bitrate, frame rate, and compression levels. Building on these basics, metrics for different types of multimedia data are then introduced. Beyond traditional images and video, immersive multimedia and AI-generated content are also covered.
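
As a concrete illustration of the objective side, here is a minimal Python sketch of PSNR, one of the simplest full-reference metrics. It is not part of the tutorial materials; real evaluations typically pair it with perceptual metrics such as SSIM, MS-SSIM, or VMAF.

    import numpy as np

    def psnr(reference: np.ndarray, distorted: np.ndarray, max_value: float = 255.0) -> float:
        """Peak signal-to-noise ratio between two same-sized images (higher is better)."""
        mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10(max_value ** 2 / mse)

    # Toy example: measure the quality drop caused by additive noise.
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
    noisy = np.clip(ref + rng.normal(0.0, 5.0, ref.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(ref, noisy):.2f} dB")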


URL: https://dl.acm.org/journal/tomm

Authors: Ahmed Telili (INSA, Rennes, France), Wassim Hamidouche (INSA, Rennes, France), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Sid Ahmed Fezza (INPTIC, Algeria), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Luce Morin (INSA, Rennes, France)

Abstract:
HTTP adaptive streaming (HAS) has emerged as a prevalent approach for over-the-top (OTT) video streaming services due to its ability to deliver a seamless user experience. A fundamental component of HAS is the bitrate ladder, which comprises a set of encoding parameters (e.g., bitrate-resolution pairs) used to encode the source video into multiple representations. This adaptive bitrate ladder enables the client’s video player to dynamically adjust the quality of the video stream in real time based on fluctuations in network conditions, ensuring uninterrupted playback by selecting the most suitable representation for the available bandwidth. The most straightforward approach involves using a fixed bitrate ladder for all videos, consisting of pre-determined bitrate-resolution pairs known as one-size-fits-all. Conversely, the most reliable technique relies on intensively encoding all resolutions over a wide range of bitrates to build the convex hull, thereby optimizing the bitrate ladder by selecting the representations from the convex hull for each specific video. Several techniques have been proposed to predict content-based ladders without performing a costly, exhaustive search encoding. This paper provides a comprehensive review of various convex hull prediction methods, including both conventional and learning-based approaches. Furthermore, we conduct a benchmark study of several handcrafted and deep learning (DL)-based approaches for predicting content-optimized convex hulls across multiple codec settings. The considered methods are evaluated on our proposed large-scale dataset, which includes 300 UHD video shots encoded with software and hardware encoders using three state-of-the-art video standards, AVC/H.264, HEVC/H.265, and VVC/H.266, at various bitrate points. Our analysis provides valuable insights and establishes baseline performance for future research in this field.
Dataset URL: https://nasext-vaader.insa-rennes.fr/ietr-vaader/datasets/br_ladder
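
For orientation, the sketch below shows one common way to carry out the convex-hull selection the abstract refers to: Andrew's monotone-chain upper hull over measured (bitrate, quality) points. This is a generic illustration, not the paper's benchmark code, and the sample measurements are invented.

    def _cross(o, a, b):
        """2D cross product of vectors OA and OB in the (bitrate, quality) plane."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def upper_convex_hull(points):
        """points: iterable of (bitrate_kbps, quality, resolution).
        Returns the subset forming the upper convex hull in the rate-quality plane."""
        hull = []
        for p in sorted(points):  # ascending bitrate
            # Keep only clockwise (right) turns so the boundary stays concave from above.
            while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) >= 0:
                hull.pop()
            hull.append(p)
        return hull

    # Invented example: per-resolution encodings of one video shot (bitrate, VMAF, resolution).
    measurements = [
        (500, 62.0, "540p"), (1000, 74.0, "540p"), (2000, 80.0, "540p"),
        (1000, 70.0, "720p"), (2000, 83.0, "720p"), (4000, 89.0, "720p"),
        (2000, 78.0, "1080p"), (4000, 90.0, "1080p"), (8000, 95.0, "1080p"),
    ]
    for bitrate, quality, resolution in upper_convex_hull(measurements):
        print(f"{bitrate:>5} kbps -> {resolution} (VMAF {quality})")

On this toy input, the ladder keeps 540p at low bitrates and switches to 720p and 1080p only where the higher resolution actually wins on quality, which is exactly the per-title behavior the fixed one-size-fits-all ladder cannot provide.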


Perceptually-aware Online Per-title Encoding for Live Video Streaming – US Patent

[PDF]

Vignesh Menon (Alpen-Adria-Universität Klagenfurt, Austria), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Techniques for implementing perceptually aware per-title encoding may include receiving an input video, a set of resolutions, a maximum target bitrate, and a minimum target bitrate; extracting content-aware features for each segment of the input video; predicting a perceptually aware bitrate-resolution pair for each segment using a model configured to optimize for a quality metric using constants trained for each of the set of resolutions; generating a target encoding set including a set of perceptually aware bitrate-resolution pairs; and encoding the target encoding set. The content-aware features may include a spatial energy feature and an average temporal energy. According to these methods, only a subset of bitrates and resolutions, less than the full set, is encoded to provide high-quality video content for streaming.
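
The patent abstract does not spell out the feature definitions, so the sketch below is only one plausible reading: spatial energy approximated by mean gradient magnitude and average temporal energy by mean absolute frame difference over a segment's luma channel.

    import numpy as np

    def spatial_energy(frames: np.ndarray) -> float:
        """frames: (T, H, W) luma array; mean gradient magnitude over all frames."""
        gy, gx = np.gradient(frames.astype(np.float64), axis=(1, 2))
        return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))

    def average_temporal_energy(frames: np.ndarray) -> float:
        """Mean absolute luma difference between consecutive frames."""
        return float(np.abs(np.diff(frames.astype(np.float64), axis=0)).mean())

    # Random frames stand in for a decoded segment in this toy example.
    rng = np.random.default_rng(1)
    segment = rng.integers(0, 256, size=(30, 270, 480), dtype=np.uint8)
    print(spatial_energy(segment), average_temporal_energy(segment))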

Title: Project “Scalable Platform for Innovations on Real-time Immersive Telepresence” (SPIRIT) successfully passed periodic review

The “Scalable Platform for Innovations on Real-time Immersive Telepresence” (SPIRIT) project, a Horizon Europe innovation initiative uniting seven consortium partners, including ITEC from the University of Klagenfurt, has successfully completed its periodic review, held in November 2024.

SPIRIT aims to develop a “multi-site, interconnected framework dedicated for supporting the operation of heterogeneous collaborative telepresence applications at large scale”.

ITEC focuses on three key areas in SPIRIT:

  • determining subjective and objective metrics for the Quality of Experience (QoE) of volumetric video,
  • developing a Live Low Latency DASH (Dynamic Adaptive Streaming over HTTP) system for the transmission of volumetric video, and
  • contributing to standardisation bodies regarding work done in volumetric video.

The review committee was satisfied with the project’s progress and accepted all deliverables. The project was praised for a successful first round of open calls, which saw a remarkable 61 applicants for 11 available spots.

ITEC’s research on the QoE of volumetric video through subjective testing was also deemed impressive, with over 2,000 data points collected across two rounds of testing. Its contributions to standardisation bodies such as MPEG and 3GPP were likewise praised.

ITEC continues to work in the SPIRIT project, focusing on the second round of open calls and Live Low Latency DASH transmission of volumetric video.

DORBINE is a cooperative project between AIR6 Systems and Alpen-Adria-Universität Klagenfurt (AAU) (Farzad Tashtarian, project leader; Christian Timmerer and Hamid Amirpourazarian) and is funded by the Austrian Research Promotion Agency (FFG).

Project description: Renewable energy plays a critical role in the global transition to sustainable and environmentally friendly power sources, and among the various technologies, turbines stand out as a key contributor. Wind turbines, for example, can convert up to 45% of the available wind energy into electricity, with modern designs reaching efficiencies as high as 50%, depending on conditions. The DORBINE project aims to enhance wind turbine efficiency in electricity production by developing an innovative inspection framework powered by cutting-edge AI techniques. It leverages a swarm of drones equipped with high-resolution cameras and advanced sensors to perform real-time, detailed blade inspections without the need for turbine shutdowns.


The paper “Two-pass Encoding for Live Video Streaming” has been selected as the Best Student Paper at the NAB Broadcast Engineering and IT (BEIT) Conference 2025.

NAB Broadcast Engineering and IT (BEIT) Conference

5–9 April 2025 | Las Vegas, NV, USA

Abstract: Live streaming has become increasingly important in our daily lives due to the growing demand for real-time content consumption. Traditional live video streaming typically relies on single-pass encoding due to its low latency. However, it lacks video content analysis, often resulting in inefficient compression and quality fluctuations during playback. Constant Rate Factor (CRF) encoding, a type of single-pass method, offers more consistent quality but suffers from unpredictable output bitrate, complicating bandwidth management. In contrast, multi-pass encoding improves compression efficiency through multiple passes. However, its added latency makes it unsuitable for live streaming. In this paper, we propose OTPS, an online two-pass encoding scheme that overcomes these limitations by employing fast feature extraction on a downscaled video representation and a gradient-boosting regression model to predict the optimal CRF for encoding. This approach provides consistent quality and efficient encoding while avoiding the latency introduced by traditional multi-pass techniques. Experimental results show that OTPS offers 3.7% higher compression efficiency than single-pass encoding and achieves up to 28.1% faster encoding than multi-pass modes. Compared to single-pass encoding, encoded videos using OTPS exhibit 5% less deviation from the target bitrate while delivering notably more consistent quality.

Authors: Mohammad Ghasempour (AAU, Austria); Hadi Amirpour (AAU, Austria); Christian Timmerer (AAU, Austria)
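
A rough sketch of the prediction step described in the abstract: a gradient-boosting regressor maps fast content features (extracted from a downscaled representation) plus the target bitrate to a CRF value. The feature choice and training data below are invented for illustration and do not reproduce OTPS itself.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(42)
    # Invented training set: [spatial_energy, temporal_energy, target_bitrate_mbps].
    X = rng.uniform([0, 0, 1], [100, 50, 20], size=(500, 3))
    # Synthetic label: higher content complexity or lower target bitrate -> higher CRF.
    y = np.clip(18 + 0.1 * X[:, 0] + 0.2 * X[:, 1] - 0.8 * X[:, 2]
                + rng.normal(0, 1, 500), 0, 51)

    model = GradientBoostingRegressor().fit(X, y)
    segment_features = [[40.0, 12.0, 6.0]]  # one live segment targeting 6 Mbps
    print(f"Predicted CRF: {model.predict(segment_features)[0]:.1f}")

Because inference on a trained regressor takes microseconds, the prediction adds essentially no latency on top of the fast feature extraction, which is what makes a "two-pass-like" scheme viable for live streaming.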



Scalable Per-Title Encoding – US Patent

[PDF]

Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria) and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: A scalable per-title encoding technique may include detecting scene cuts in an input video received by an encoding network or system, generating segments of the input video, performing per-title encoding of a segment of the input video, training a deep neural network (DNN) for each representation of the segment, thereby generating a trained DNN, compressing the trained DNN, thereby generating a compressed trained DNN, and generating an enhanced bitrate ladder including metadata comprising the compressed trained DNN. In some embodiments, the method may also include generating a base layer bitrate ladder for CPU devices and providing the enhanced bitrate ladder for GPU-available devices.
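
A minimal sketch of the data layout this abstract suggests, with invented field names: a base-layer ladder served to CPU-only clients and an enhanced ladder whose entries carry a compressed DNN as metadata for GPU-capable clients.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Representation:
        resolution: str
        bitrate_kbps: int
        compressed_dnn: Optional[bytes] = None  # enhancement model; absent in the base layer

    @dataclass
    class BitrateLadder:
        base_layer: List[Representation] = field(default_factory=list)  # CPU devices
        enhanced: List[Representation] = field(default_factory=list)    # GPU-available devices

    ladder = BitrateLadder(
        base_layer=[Representation("1080p", 4500), Representation("720p", 2500)],
        enhanced=[Representation("1080p", 2800, compressed_dnn=b"<compressed weights>")],
    )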

Authors: Leonardo Peroni (UC3M, Spain); Sergey Gorinsky (IMDEA Networks Institute, Spain); Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt, Austria)

Conference: IEEE 13th International Conference on Cloud Networking (CloudNet)

27–29 November 2024 // Rio de Janeiro, Brazil

Abstract: While ISPs (Internet service providers) strive to improve QoE (quality of experience) for end users, end-to-end traffic encryption by OTT (over-the-top) providers undermines independent inference of QoE by an ISP. Due to the economic and technological complexity of the modern Internet, ISP-side QoE inference based on OTT assistance or out-of-band signaling sees low adoption. This paper presents IQN (in-band quality notification), a novel mechanism for signaling QoE impairments from an automated agent on the end-user device to the server-to-client ISP responsible for QoE-impairing congestion. Compatible with multi-ISP paths, asymmetric routing, and other Internet realities, IQN does not require OTT support and induces the OTT server to emit distinctive packet patterns that encode QoE information, enabling ISPs to infer QoE by monitoring these patterns in network traffic. We develop a prototype system, YouStall, which applies IQN signaling to ISP-side inference of YouTube stalls. Cloud-based experiments with YouStall on YouTube Live streams validate IQN’s feasibility and effectiveness, demonstrating its potential for accurate user-assisted ISP-side QoE inference from encrypted traffic in real Internet environments.
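
The paper's exact signaling format is not described in this abstract, so the following is only a speculative sketch of the general idea: an on-path observer reads a bit pattern from traffic volume in fixed time slots, where a burst encodes 1 and near-silence encodes 0. All parameters are invented.

    def decode_slots(packet_log, slot_ms=50, threshold_bytes=3000):
        """packet_log: time-ordered list of (timestamp_ms, size_bytes) observed in one
        direction. Returns the bit string suggested by per-slot traffic volume."""
        if not packet_log:
            return ""
        start, end = packet_log[0][0], packet_log[-1][0]
        volume = [0] * (int((end - start) // slot_ms) + 1)
        for ts, size in packet_log:
            volume[int((ts - start) // slot_ms)] += size
        return "".join("1" if v >= threshold_bytes else "0" for v in volume)

    # Invented trace: bursts in slots 0 and 2, silence in slot 1, encode "101".
    trace = [(0, 1500), (10, 1500), (20, 1500), (120, 1500), (130, 1500), (140, 1500)]
    print(decode_slots(trace))  # -> "101"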


Authors: Hadi Amirpour (AAU, Austria), Mohammad Ghasempour (AAU, Austria), Farzad Tashtarian (AAU, Austria), Ahmed Telili (TII, UAE), Samira Afzal (AAU, Austria), Wassim Hamidouche (INSA, France), Christian Timmerer (AAU, Austria)

Conference: IEEE Visual Communications and Image Processing (IEEE VCIP 2024) – Tokyo, Japan, December 8-11, 2024

Abstract: In the field of video streaming, the optimization of video encoding and decoding processes is crucial for delivering high-quality video content. Given the growing concern about carbon dioxide emissions, it is equally necessary to consider the energy consumption associated with video streaming. Therefore, to take advantage of machine learning techniques for optimizing video delivery, a dataset encompassing the energy consumption of the encoding and decoding process is needed. This paper introduces a comprehensive dataset featuring diverse video content, encoded and decoded using various codecs and spanning different devices. The dataset includes 1000 videos encoded with four resolutions (2160p, 1080p, 720p, and 540p) at two frame rates (30 fps and 60 fps), resulting in eight unique encodings for each video. Each video is further encoded with four different codecs — AVC (libx264), HEVC (libx265), AV1 (libsvtav1), and VVC (VVenC) — at four quality levels defined by QPs of 22, 27, 32, and 37. In addition, for AV1, three additional QPs of 35, 46, and 55 are considered. We measure both encoding and decoding time and energy consumption on various devices to provide a comprehensive evaluation, employing various metrics and tools. Additionally, we assess encoding bitrate and quality using quality metrics such as PSNR, SSIM, MS-SSIM, and VMAF. All data and the reproduction commands and scripts have been made publicly available as part of the dataset, which can be used for various applications such as rate and quality control, resource allocation, and energy-efficient streaming.

Dataset URL: https://github.com/cd-athena/MVCD
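
To make the encoding matrix concrete, the sketch below enumerates the combinations the abstract describes (four codecs, four resolutions, two frame rates, and per-codec QP sets). The ffmpeg flags are generic placeholders; the exact commands ship with the dataset and may differ per encoder.

    from itertools import product

    CODECS = {
        "libx264": [22, 27, 32, 37],                 # AVC
        "libx265": [22, 27, 32, 37],                 # HEVC
        "libsvtav1": [22, 27, 32, 35, 37, 46, 55],   # AV1, incl. the three extra QPs
        "libvvenc": [22, 27, 32, 37],                # VVC
    }
    RESOLUTIONS = {"2160p": "3840:2160", "1080p": "1920:1080",
                   "720p": "1280:720", "540p": "960:540"}
    FRAME_RATES = [30, 60]

    def encoding_commands(src: str):
        """Yield one placeholder ffmpeg command per encoding in the matrix."""
        for (codec, qps), (label, scale), fps in product(
                CODECS.items(), RESOLUTIONS.items(), FRAME_RATES):
            for qp in qps:
                out = f"{src}_{label}_{fps}fps_{codec}_qp{qp}.mp4"
                yield f"ffmpeg -i {src} -vf scale={scale} -r {fps} -c:v {codec} -qp {qp} {out}"

    commands = list(encoding_commands("video.mp4"))
    print(len(commands))  # encodings per source video in this sketch
    print(commands[0])    # first placeholder command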

Index Terms—Video encoding, decoding, energy, complexity, quality.