Multimedia Communication

PV’20: H2BR: An HTTP/2-based Retransmission Technique to Improve the QoE of Adaptive Video Streaming


Authors: Minh Nguyen (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt / Bitmovin Inc.), Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: HTTP-based Adaptive Streaming (HAS) plays a key role in over-the-top video streaming. It contributes towards reducing the rebuffering duration of video playout by adapting the video quality to the current network conditions. However, it incurs variations of video quality within a streaming session because of throughput fluctuations, which impacts the user’s Quality of Experience (QoE). In addition, many adaptive bitrate (ABR) algorithms choose the lowest-quality segments at the beginning of the streaming session to ramp up the playout buffer as soon as possible. Although this strategy decreases the startup time, users may be annoyed at having to watch a low-quality video at first. In this paper, we propose an efficient retransmission technique, namely H2BR, to replace low-quality segments stored in the playout buffer with higher-quality versions by using features of HTTP/2, including (i) stream priority, (ii) server push, and (iii) stream termination. The experimental results show that H2BR helps users avoid watching low video quality during playback and improves the user’s QoE. H2BR can decrease the time during which users suffer the lowest-quality video by more than 70% and can improve the QoE by up to 13%.

Keywords: HTTP adaptive streaming, DASH, ABR algorithms, QoE, HTTP/2

Packet Video Workshop 2020 (PV) June 10-11, 2020, Istanbul, Turkey (co-located with ACM MMSys’20)
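The core decision in such a retransmission scheme is which buffered low-quality segment to re-fetch at a higher quality before its playback deadline passes (and to cancel the fetch via HTTP/2 stream termination when the deadline can no longer be met). The following is a minimal illustrative sketch of that deadline check, not the paper's implementation; all names (`BufferedSegment`, `select_upgrade_candidate`) and the simple buffer model are assumptions:

```python
from dataclasses import dataclass

@dataclass
class BufferedSegment:
    index: int        # segment position in the stream
    quality: int      # representation level, 0 = lowest
    size_bits: dict   # representation level -> segment size in bits (illustrative)

def select_upgrade_candidate(buffer, playhead_s, seg_duration_s,
                             throughput_bps, target_quality):
    """Pick the earliest buffered segment below target_quality that can be
    re-downloaded at target_quality before its playback deadline."""
    for seg in buffer:
        if seg.quality >= target_quality:
            continue
        # seconds until this segment starts playing
        deadline = seg.index * seg_duration_s - playhead_s
        download_time = seg.size_bits[target_quality] / throughput_bps
        if download_time < deadline:
            return seg
        # otherwise the fetch would miss its deadline; with HTTP/2 the
        # client could terminate such a stream instead of wasting bandwidth
    return None
```

A client would call this whenever spare throughput is detected and issue the higher-quality request (e.g. with elevated HTTP/2 stream priority) for the returned segment.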


ICME’20: Towards View-aware Adaptive Streaming of Holographic content


Authors: Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Mohammad Ghanbari (University of Essex)

Abstract: Holography is able to reconstruct a three-dimensional structure of an object by recording full wave fields of light emitted from the object. This requires a huge amount of data to be encoded, stored, transmitted, and decoded for holographic content, making its practical usage challenging, especially for bandwidth-constrained networks and memory-limited devices. In the delivery of holographic content over the Internet, bandwidth wastage should be avoided to tackle the high bandwidth demands of holography streaming. For real-time applications, encoding time-complexity is also a major problem. In this paper, the concept of dynamic adaptive streaming over HTTP (DASH) is extended to holographic image streaming and view-aware adaptation techniques are studied. As each area of a hologram contains the information of a specific view, instead of encoding and decoding the entire hologram, only the part required to render the selected view is encoded and transmitted over the network, based on the user’s interactivity. Four different strategies, namely monolithic, single-view, adaptive-view, and non-real-time streaming, are explained and compared in terms of bandwidth requirements, encoding time-complexity, and bitrate overhead. Experimental results show that the view-aware methods reduce the required bandwidth for holography streaming at the cost of a bitrate increase.

Keywords: Holography, compression, bitrate adaptation, dynamic adaptive streaming over HTTP, DASH.
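The key idea above, that each area of a hologram carries the information of one view, so only that sub-hologram needs to be encoded and sent, can be illustrated with a toy model. This is a hypothetical sketch assuming a simple horizontal partition of the hologram plane into view windows; the function names and the `overlap` parameter are illustrative, not from the paper:

```python
def view_window(hologram_width, num_views, view_index, overlap=0):
    """Return the (start, end) column range of the sub-hologram that
    carries the information for one horizontal view (toy model)."""
    width = hologram_width // num_views
    start = max(0, view_index * width - overlap)
    end = min(hologram_width, (view_index + 1) * width + overlap)
    return start, end

def extract_subhologram(hologram, num_views, view_index):
    """Crop the rows of a 2-D hologram (list of lists) to the columns
    needed for the selected view, so only this part must be encoded."""
    start, end = view_window(len(hologram[0]), num_views, view_index)
    return [row[start:end] for row in hologram]
```

Under this model, a single-view strategy transmits one such window per request, while an adaptive-view strategy would widen the window (larger `overlap`) to anticipate the user's head motion.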

Philipp Moll

How Players Play Games: Observing the Influences of Game Mechanics


Authors: Philipp Moll, Veit Frick, Natascha Rauscher, Mathias Lux (Alpen-Adria-Universität Klagenfurt)
Abstract: The popularity of computer games is remarkably high and is still growing. Despite the popularity and economic impact of games, data-driven research in game design, or to be more precise, in game mechanics – game elements and rules defining how a game works – is still scarce. As data on user interaction in games is hard to come by, we propose a way to analyze players’ movement and actions based on video streams of games. Utilizing this data, we formulate four hypotheses focusing on player experience, enjoyment, and interaction patterns, as well as the interrelation thereof. Based on a user study for the popular game Fortnite, we discuss the interrelation between game mechanics, enjoyment of players, and different player skill levels in the observed data.
Keywords: Online Games; Game Mechanics; Game Design; Video Analysis
Links: International Workshop on Immersive Mixed and Virtual Environment Systems (MMVE)

Hermann Hellwagner continues as TPC member of INFOCOM 2020

The IEEE Communications Society extends its appreciation to Hermann Hellwagner as a distinguished TPC member of IEEE INFOCOM 2020.
IEEE INFOCOM 2020 – Online Conference July 6-9, 2020

Christian Timmerer

ICME’20: Multi-Period Per-Scene Optimization for HTTP Adaptive Streaming


Authors: Venkata Phani Kumar M (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin) and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: Video delivery over the Internet has become increasingly established in recent years due to the widespread use of Dynamic Adaptive Streaming over HTTP (DASH). The current DASH specification defines a hierarchical data model for Media Presentation Descriptions (MPDs) in terms of periods, adaptation sets, representations and segments. Although multi-period MPDs are widely used in live streaming scenarios, they are not fully utilized in Video-on-Demand (VoD) HTTP adaptive streaming (HAS) scenarios. In this paper, we introduce MiPSO, a framework for Multi-Period per-Scene Optimization, to examine multiple periods in VoD HAS scenarios. MiPSO provides different encoded representations of a video at either (i) maximum possible quality or (ii) minimum possible bitrate, beneficial to both service providers and subscribers. In each period, the proposed framework adjusts the video representations (resolution-bitrate pairs) by taking into account the complexity of the video content, with the aim of achieving streams at either higher qualities or lower bitrates. The experimental evaluation with a test video data set shows that MiPSO reduces the average bitrate of streams with the same visual quality by approximately 10%, or increases the visual quality of streams at the same bitrate by at least 1 dB in terms of Peak Signal-to-Noise Ratio (PSNR), compared to conventional approaches to video content delivery.

Keywords: Adaptive Streaming, Video-on-Demand, Per-Scene Encoding, Media Presentation Description

IEEE International Conference on Multimedia and Expo (ICME), July 6–10, 2020, London, United Kingdom
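The minimum-bitrate objective described in the abstract, picking per scene the cheapest resolution-bitrate pair that still reaches a quality target, can be sketched as follows. This is an illustrative selection over hypothetical per-scene rate-distortion measurements, not the MiPSO framework itself; the data layout and the name `per_scene_ladder` are assumptions:

```python
def per_scene_ladder(scene_rd, target_psnr):
    """For each scene, pick the cheapest (resolution, bitrate) pair whose
    measured PSNR reaches target_psnr (minimum-bitrate objective).
    scene_rd: scene -> list of {"resolution", "bitrate", "psnr"} samples."""
    ladder = {}
    for scene, samples in scene_rd.items():
        feasible = [s for s in samples if s["psnr"] >= target_psnr]
        if feasible:
            pick = min(feasible, key=lambda s: s["bitrate"])
        else:
            # no sample reaches the target: fall back to the best quality
            pick = max(samples, key=lambda s: s["psnr"])
        ladder[scene] = (pick["resolution"], pick["bitrate"])
    return ladder
```

The point of the per-scene split is visible here: a low-complexity scene satisfies the target at a far lower bitrate than a high-complexity one, so a single fixed ladder would waste bits on the former or underserve the latter.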


MMSys’20: Cloud-based Adaptive Video Streaming Evaluation Framework for the Automated Testing of Media Players (CAdViSE)

Authors: Babak Taraghi (Alpen-Adria-Universität Klagenfurt), Anatoliy Zabrovskiy (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin) and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: Attempting to cope with fluctuations of network conditions in terms of available bandwidth, latency and packet loss, and to deliver the highest quality of video (and audio) content to users, research on adaptive video streaming has attracted intense efforts from the research community and huge investments from technology giants. How successful these efforts and investments are is a question that requires precise measurement of the resulting technological advancements. HTTP-based Adaptive Streaming (HAS) algorithms, which seek to improve video streaming over the Internet, introduce video bitrate adaptivity in a way that is scalable and efficient. However, the wide spectrum of variables and configuration options that each HAS implementation takes into account makes it highly complex to measure the results and to visualize statistics of performance and quality of experience. In this paper, we introduce CAdViSE, our Cloud-based Adaptive Video Streaming Evaluation framework for the automated testing of adaptive media players. The paper demonstrates a test environment that can be instantiated in a cloud infrastructure, examines multiple media players under different network attributes at defined points in the experiment time, and finally concludes the evaluation with visualized statistics and insights into the results.

Keywords: HTTP Adaptive Streaming, Media Players, MPEG-DASH, Network Emulation, Automated Testing, Quality of Experience

Link: ACM Multimedia Systems Conference 2020 (MMSys 2020)
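A central piece of such a test environment is applying different network attributes at defined points in the experiment time. A minimal sketch of that scheduling logic is shown below; the schedule format and the name `active_profile` are illustrative assumptions, not CAdViSE's actual API (a real deployment would translate the returned attributes into shaping commands, e.g. via Linux tc):

```python
def active_profile(schedule, elapsed_s):
    """Return the network attributes in effect at `elapsed_s` seconds.
    schedule: list of (start_time_s, attributes) changepoints, sorted by time."""
    current = schedule[0][1]
    for start, attrs in schedule:
        if elapsed_s >= start:
            current = attrs
        else:
            break
    return current
```

With a schedule like "5 Mbit/s for 30 s, then a 1 Mbit/s dip with added latency, then recovery", every player under test experiences exactly the same network trajectory, which is what makes the player comparison fair and repeatable.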

Christian Timmerer

ACM TOMM: Performance Analysis of ACTE: a Bandwidth Prediction Method for Low-Latency Chunked Streaming


Abstract: HTTP adaptive streaming with chunked transfer encoding can offer low-latency streaming without sacrificing coding efficiency. This allows media segments to be delivered while still being packaged. However, conventional schemes often make widely inaccurate bandwidth measurements due to the presence of idle periods between the chunks and hence cause sub-optimal adaptation decisions. To address this issue, we earlier proposed ACTE (ABR for Chunked Transfer Encoding), a bandwidth prediction scheme for low-latency chunked streaming. While ACTE was a significant step forward, in this study we focus on two remaining open areas, namely (i) quantifying the impact of encoding parameters, including chunk and segment durations, bitrate levels, the minimum interval between IDR-frames, and frame rate on ACTE, and (ii) exploring the impact of video content complexity on ACTE. We thoroughly investigate these questions and report on our findings. We also discuss some additional issues that arise in the context of pursuing very-low-latency HTTP video streaming.

Authors: Abdelhak Bentaleb (National University of Singapore), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), Ali C. Begen (Ozyegin University, Networked Media), Roger Zimmermann (National University of Singapore)

Keywords: HAS; ABR; DASH; CMAF; low-latency; HTTP chunked transfer encoding; bandwidth measurement and prediction; RLS; encoding parameters; FFmpeg
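The keywords mention RLS (recursive least squares), the adaptive filter ACTE uses to predict per-chunk download rates despite noisy measurements. The following is a generic textbook RLS predictor over the last few chunk rates, given as a hedged sketch: the class name, the default parameters, and the warm-up behavior are illustrative choices, not ACTE's actual configuration:

```python
class RLSPredictor:
    """Predict the next per-chunk download rate from the last `order`
    measured rates with a standard RLS adaptive filter."""

    def __init__(self, order=3, lam=0.95, delta=1000.0):
        self.n = order
        self.lam = lam                                # forgetting factor
        self.w = [0.0] * order                        # filter weights
        self.P = [[(delta if i == j else 0.0) for j in range(order)]
                  for i in range(order)]              # inverse correlation matrix
        self.history = []

    def predict(self):
        if len(self.history) < self.n:                # warm-up: repeat last sample
            return self.history[-1] if self.history else 0.0
        x = self.history[-self.n:]
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def update(self, measured_rate):
        if len(self.history) >= self.n:
            x = self.history[-self.n:]
            Px = [sum(self.P[i][j] * x[j] for j in range(self.n))
                  for i in range(self.n)]
            denom = self.lam + sum(x[i] * Px[i] for i in range(self.n))
            k = [v / denom for v in Px]               # gain vector
            err = measured_rate - sum(wi * xi for wi, xi in zip(self.w, x))
            self.w = [wi + ki * err for wi, ki in zip(self.w, k)]
            xP = [sum(x[i] * self.P[i][j] for i in range(self.n))
                  for j in range(self.n)]
            self.P = [[(self.P[i][j] - k[i] * xP[j]) / self.lam
                       for j in range(self.n)] for i in range(self.n)]
        self.history.append(measured_rate)
```

The filter is fed only with rates measured during active chunk downloads, which is the point of the scheme: the idle periods between chunks are excluded rather than averaged in.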

Christian Timmerer

QoMEX’20: Objective and Subjective QoE Evaluation for Adaptive Point Cloud Streaming


Abstract: Volumetric media has the potential to provide the six degrees of freedom (6DoF) required by truly immersive media. However, achieving 6DoF requires ultra-high bandwidth transmissions, which real-world wide area networks cannot provide economically. Therefore, recent efforts have started to target efficient delivery of volumetric media, using a combination of compression and adaptive streaming techniques. It remains, however, unclear how the effects of such techniques on the user perceived quality can be accurately evaluated. In this paper, we present the results of an extensive objective and subjective quality of experience (QoE) evaluation of volumetric 6DoF streaming. We use PCC-DASH, a standards-compliant means for HTTP adaptive streaming of scenes comprising multiple dynamic point cloud objects. By means of a thorough analysis we investigate the perceived quality impact of the available bandwidth, rate adaptation algorithm, viewport prediction strategy and user’s motion within the scene. We determine which of these aspects has more impact on the user’s QoE, and to what extent subjective and objective assessments are aligned.

Authors: Jeroen van der Hooft (Ghent University), Maria Torres Vega (Ghent University), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), Ali C. Begen (Ozyegin University, Networked Media), Filip De Turck (Ghent University), Raimund Schatz (Alpen-Adria-Universität Klagenfurt & AIT Austrian Institute of Technology, Austria)

Keywords: Volumetric Media; HTTP Adaptive Streaming; 6DoF; MPEG V-PCC; QoE Assessment; Objective Metrics

International Conference on Quality of Multimedia Experience (QoMEX)
May 26-28, 2020, Athlone, Ireland

200+ excited IT experts at Josef Hammer’s talk on Edge Computing

High-tech meets history. When thousands of international software developers gather at the Vienna Imperial Castle (Hofburg Wien), you can feel that magic is about to happen. Exactly that occurred on November 28 and 29 at this year’s We Are Developers Congress in Vienna.

Josef Hammer - Edge Computing

‘Are you on the Edge? Or still in the Cloud?’ – On one of the three stages, Josef Hammer inspired over 200 IT enthusiasts with a 30-minute talk on Edge Computing and 5G networks. Just as processing once moved from mainframes to desktop computers, in the upcoming years a lot of processing will move from the cloud to the edge of the network, i.e. closer to the user. This will particularly affect areas with high data volumes (IoT, AI) and low-latency requirements (IoT).

Josef gave a short introduction to this exciting new area and its benefits and use cases, the frameworks and tools developers can use right now, and where we might be headed. The presentation of our 5G Playground Carinthia in particular was followed with great curiosity by the attendees, who enjoyed a first glance at the ambitious research projects conducted there.


Christian Timmerer

Opening of the 5G Playground with the ITEC use case “Virtual Realities”


With the 5G Summit Carinthia, a short symposium on the new 5G mobile communication technology, the 5G Playground Carinthia was officially opened today. The 5G Playground Carinthia is Austria’s first service facility for researching and further developing 5G-specific applications, services, and business models. The Federal Ministry for Transport, Innovation and Technology (BMVIT) and the State of Carinthia fund this unique research laboratory in the south of Austria. A1 Telekom Austria provides the technical infrastructure.

The 5G Playground Carinthia offers all research, innovation, and educational institutions as well as SMEs and start-ups the unique opportunity to test their products and applications with this new technology and to trial them in live operation.

The Alpen-Adria-Universität Klagenfurt, and in particular the Institute of Information Technology (ITEC), participates in the 5G Playground with a use case on “Virtual Realities”. The project researches, develops, tests, and evaluates selected VR applications over 5G networks, e.g. streaming of 360° videos and of new forms of immersive media such as volumetric data (point clouds). These applications require and exercise both the high data rates and the extremely low latencies of 5G networks, in the downlink (streaming to a VR headset) as well as in the uplink (streaming live content from a 360° camera). In addition, the edge computing components provided by 5G are used to achieve higher presentation quality and faster reaction times of the VR system when a user moves or interacts. VR systems are being developed that allow the performance of 5G to be demonstrated.