Multimedia Communication

IEEE Access, A Multidisciplinary, Open-access Journal of the IEEE

[PDF; GitHub]

Babak Taraghi, Hermann Hellwagner and Christian Timmerer (Alpen-Adria-Universität Klagenfurt)


Low-latency live streaming by HTTP Chunked Transfer Encoding

Abstract: Live media streaming is a challenging task in itself, and in use cases where low latency is a must, the complexity rises considerably. In a typical media streaming session, the main goal is to provide the highest possible Quality of Experience (QoE), which has proven to be measurable using quality models and various metrics. In a low-latency media streaming session, the requirement is to provide the lowest possible delay between the moment a video frame is captured and the moment it is rendered on the client screen, also known as end-to-end (E2E) latency, while maintaining the QoE. As its primary contribution, this paper proposes a sophisticated cloud-based and open-source testbed that facilitates evaluating low-latency live streaming sessions. The Live Low-Latency Cloud-based Adaptive Video Streaming Evaluation (LLL-CAdViSE) framework can assess live streaming systems running on the two major HTTP Adaptive Streaming (HAS) formats, Dynamic Adaptive Streaming over HTTP (MPEG-DASH) and HTTP Live Streaming (HLS). We use Chunked Transfer Encoding (CTE) to deliver Common Media Application Format (CMAF) chunks to the media players. Our testbed generates the test content (audiovisual streams); therefore, no test sequence is required, and the encoding parameters (e.g., encoder, bitrate, resolution, latency) are defined separately for each experiment. We have integrated the ITU-T P.1203 quality model into our testbed. To demonstrate the flexibility and power of LLL-CAdViSE, as a secondary contribution we have conducted a set of experiments with different network traces, media players, and ABR algorithms, under various requirements (e.g., E2E latency (typical/reduced/low/ultra-low), diverse bitrate ladders, and catch-up logic), and we present the essential findings and experimental results.

Keywords: Live Streaming; Low-latency; HTTP Adaptive Streaming; Quality of Experience; Objective Evaluation; Open-source Testbed.
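To illustrate the delivery mechanism named in the abstract, the following is a minimal sketch of how an origin could push CMAF chunks of a still-growing segment using HTTP/1.1 Chunked Transfer Encoding, so a low-latency player can start decoding before the segment is complete. This is not the LLL-CAdViSE code; the function name and the chunk source are hypothetical, and `conn` is assumed to be a connected TCP socket.

```python
def serve_segment(conn, cmaf_chunks):
    """Send one media segment as a chunked HTTP/1.1 response.

    conn: a connected socket (hypothetical server setup, not shown here).
    cmaf_chunks: an iterator yielding CMAF chunks (moof+mdat pairs) as bytes,
                 typically produced while the encoder is still filling the segment.
    """
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: video/mp4\r\n"
        b"Transfer-Encoding: chunked\r\n\r\n"
    )
    for chunk in cmaf_chunks:
        # Each chunk is framed as: <hex size>\r\n<payload>\r\n
        conn.sendall(f"{len(chunk):X}\r\n".encode() + chunk + b"\r\n")
    # A zero-length chunk terminates the response once the segment is finished.
    conn.sendall(b"0\r\n\r\n")
```

The key point is that the response starts before the segment's total length is known, which is what allows the E2E latency to drop below one segment duration.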

14th ACM Multimedia Systems Conference (MMSys)
7 – 10 June 2023 | Vancouver, BC, Canada

Daniele Lorenzi (Alpen-Adria-Universität Klagenfurt)

Abstract: Video streaming services account for the majority of today’s traffic on the Internet, and according to recent studies, this share is expected to continue growing. This implies that many people around the globe use video streaming services on a daily basis to consume video content. Given this broad utilization, research in video streaming has recently been moving towards energy-aware approaches, which aim at minimizing the energy consumption of the devices involved. On the other hand, the quality perceived by the user plays an important role, and the advent of HTTP Adaptive Streaming (HAS) changed the way quality is perceived: the focus moved from the Quality of Service (QoS) towards the Quality of Experience (QoE) of the user taking part in the streaming session. Therefore, video streaming services need to develop Adaptive BitRate (ABR) techniques to deal with different network environments on the client side or appropriate end-to-end strategies to provide high QoE to the users. The scope of this doctoral study is the end-to-end environment with a focus on the end-user’s domain, referred to as the player environment, including video content consumption and interactivity. This thesis aims to investigate and develop different techniques to increase the QoE delivered to the users and to reduce the energy consumption of the end devices in the HAS context. We present four main research questions to target the related challenges in the domain of content consumption for HAS systems.

5th Workshop on Parallel AI and Systems for the Edge (PAISE 2023), held in conjunction with the 37th IEEE International Parallel & Distributed Processing Symposium (IPDPS 2023), St. Petersburg, Florida, USA

https://edge.itec.aau.at/

Authors: Josef Hammer and Hermann Hellwagner, Alpen-Adria-Universität Klagenfurt

Abstract: Multi-access Edge Computing (MEC) is a central piece of 5G telecommunication systems and is essential to satisfy the challenging low-latency demands of future applications. MEC provides a cloud computing platform at the edge of the radio access network. Our previous publications argue that edge computing should be transparent to clients, leveraging Software-Defined Networking (SDN). While we introduced a solution to implement such a transparent approach, one question remained: How to handle user requests to a service that is not yet running in a nearby edge cluster? One advantage of the transparent edge is that one could process the initial request in the cloud. However, this paper argues that on-demand deployment might be fast enough for many services, even for the first request. We present an SDN controller that automatically deploys an application container in a nearby edge cluster if no instance is running yet. In the meantime, the user’s request is forwarded to another (nearby) edge cluster or kept waiting to be forwarded immediately to the newly instantiated instance. Our performance evaluations on a real edge/fog testbed show that the waiting time for the initial request – e.g., for an nginx-based service – can be as low as 0.5 seconds – satisfactory for many applications.
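The decision described above can be sketched as follows. This is an illustrative sketch only, not the authors' SDN controller; the class, the `deployer` wrapper, and the cluster interface are hypothetical placeholders for the actual edge orchestration.

```python
import time

class OnDemandEdgeController:
    """Sketch of the request-handling policy: deploy on demand, and meanwhile
    either forward the request to another nearby cluster or keep it waiting."""

    def __init__(self, deployer, max_wait_s=1.0):
        self.deployer = deployer        # assumed wrapper around a container platform
        self.max_wait_s = max_wait_s    # waiting budget for the first request

    def handle_request(self, service, local_cluster, nearby_clusters):
        if local_cluster.has_instance(service):
            return local_cluster.endpoint(service)

        # No local instance yet: trigger on-demand deployment right away.
        self.deployer.deploy_async(service, local_cluster)

        # Option 1: serve the initial request from another (nearby) edge cluster.
        for cluster in nearby_clusters:
            if cluster.has_instance(service):
                return cluster.endpoint(service)

        # Option 2: hold the request until the new instance is up (bounded wait).
        deadline = time.monotonic() + self.max_wait_s
        while time.monotonic() < deadline:
            if local_cluster.has_instance(service):
                return local_cluster.endpoint(service)
            time.sleep(0.05)
        raise TimeoutError("service not ready within the waiting budget")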

SPACE: Segment Prefetching and Caching at the Edge for Adaptive Video Streaming

IEEE Access

Jesús Aguilar Armijo (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt) and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: Multi-access Edge Computing (MEC) is a new paradigm that brings storage and computing close to the clients. MEC enables the deployment of complex network-assisted mechanisms for video streaming that improve clients’ Quality of Experience (QoE). One of these mechanisms is segment prefetching, which transmits the future video segments in advance closer to the client to serve content with lower latency. In this work, for HAS-based (HTTP Adaptive Streaming) video streaming and specifically considering a cellular (e.g., 5G) network edge, we present our approach Segment Prefetching and Caching at the Edge for Adaptive Video Streaming (SPACE). We propose and analyze different segment prefetching policies that differ in resource utilization, player and radio metrics needed, and deployment complexity. This variety of policies can dynamically adapt to the network’s current conditions and the service provider’s needs. We present segment prefetching policies based on diverse approaches and techniques: past segment requests, segment transrating (i.e., reducing segment bitrate/quality), a Markov prediction model, machine learning to predict future segment requests, and super-resolution. We study their performance and feasibility using metrics such as QoE characteristics, computing times, prefetching hits, and link bitrate consumption. We analyze and discuss which segment prefetching policy is better under which circumstances, as well as the influence of the client-side Adaptive Bit Rate (ABR) algorithm and the set of available representations (“bitrate ladder”) in segment prefetching. Moreover, we examine the impact on segment prefetching of different caching policies for (pre-)fetched segments, including Least Recently Used (LRU), Least Frequently Used (LFU), and our proposed popularity-based caching policy Least Popular Used (LPU).

Keywords: Adaptive video streaming, content delivery, HAS, edge computing, cellular network edge, MEC, segment prefetching, segment caching.
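The abstract names the popularity-based LPU policy without spelling out its bookkeeping here, so the following is a minimal sketch of one plausible realization: the edge cache evicts the segment with the lowest request count. The class name and the segment-count capacity are assumptions, not the paper's implementation.

```python
class LPUSegmentCache:
    """Sketch of a Least Popular Used (LPU) segment cache for an edge node."""

    def __init__(self, capacity):
        self.capacity = capacity   # capacity in number of segments (simplification)
        self.store = {}            # segment id -> segment bytes
        self.popularity = {}       # segment id -> request count

    def get(self, seg_id):
        """Return a cached segment and update its popularity, or None on a miss."""
        if seg_id in self.store:
            self.popularity[seg_id] += 1
            return self.store[seg_id]
        return None

    def put(self, seg_id, data):
        """Insert a fetched or prefetched segment, evicting the least popular one if full."""
        if seg_id not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda s: self.popularity.get(s, 0))
            del self.store[victim]
        self.store[seg_id] = data
        self.popularity[seg_id] = self.popularity.get(seg_id, 0) + 1
```

Under this reading, LPU differs from LRU/LFU in that popularity can also be fed by prefetching predictions rather than only by observed hits, but that refinement is omitted in the sketch.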

 

Title: Large-scale graph processing and simulation with serverless workflows in federated FaaS

The First Workshop on Serverless, Extreme-Scale, and Sustainable Graph Processing Systems (GraphSys), Co-located with ICPE 2023, April 15-19, Coimbra, Portugal (https://sites.google.com/view/graphsys23/home)

Authors: Sashko Ristov (Universität Innsbruck, Austria), Reza Farahani (Alpen-Adria-Universität Klagenfurt, Austria), Radu Prodan (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Serverless computing offers a cheap and easy way to write lightweight functions that are invoked by events to perform simple tasks. For more complicated processing, multiple serverless functions can be orchestrated as a directed acyclic graph, forming a serverless workflow or function choreography (FC). While all top cloud providers offer FC systems, and many open-source FC systems exist, these systems focus on describing the data flow and control flow between the serverless functions of the FC; they rarely consider the data being processed, which is often in the form of a graph. In this paper, we review the support for graph processing in existing serverless workflow management systems, identify gaps, and recommend future directions for large-scale graph processing with serverless computing.
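As a toy illustration of the FC concept, the sketch below models a choreography as a directed acyclic graph and runs its functions in topological order. It does not follow any particular FC system's schema; the function names and the in-process `invoke` stand in for real serverless calls to a graph-processing workload.

```python
from graphlib import TopologicalSorter

# Hypothetical function choreography: node -> set of predecessor functions.
fc = {
    "partition_graph": set(),
    "count_triangles": {"partition_graph"},
    "rank_pages":      {"partition_graph"},
    "merge_results":   {"count_triangles", "rank_pages"},
}

def invoke(function_name, payload):
    # In a real FC system this would be an HTTP/SDK invocation of a deployed function.
    print(f"invoking {function_name} with {payload}")
    return {"from": function_name}

results = {}
for fn in TopologicalSorter(fc).static_order():   # predecessors come first
    inputs = [results[p] for p in fc[fn]]
    results[fn] = invoke(fn, inputs)
```

The gap the paper points to is visible even here: the workflow description captures only the control and data flow between functions, while the graph being processed is opaque payload.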

Title: Towards Sustainable Serverless Processing of Massive Graphs on the Computing Continuum

The First Workshop on Serverless, Extreme-Scale, and Sustainable Graph Processing Systems (GraphSys), Co-located with ICPE 2023, April 15-19, Coimbra, Portugal (https://sites.google.com/view/graphsys23/home)

Authors: Reza Farahani (Alpen-Adria-Universität Klagenfurt, Austria), Dragi Kimovski (Alpen-Adria-Universität Klagenfurt, Austria), Sashko Ristov (Universität Innsbruck, Austria), Alexandru Iosup (Vrije Universiteit Amsterdam, Netherlands), Radu Prodan (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: With the ever-increasing volume of data and the demand to analyze and comprehend it, graph processing has become an essential approach for solving complex problems in various domains, like social networks, bioinformatics, and finance. Despite the potential benefits of current graph processing platforms, they often encounter difficulties in supporting diverse workloads, models, and languages. Moreover, existing specialized platforms suffer from limited portability and interoperability, resulting in redundant efforts and inefficient resource and energy utilization due to vendor and even platform lock-in. To bridge the aforementioned gaps, the Graph-Massivizer project, funded by the Horizon Europe research and innovation program, conducts research and develops a high-performance, scalable, and sustainable platform for information processing and reasoning based on the massive graph (MG) representation of extreme data. In this paper, we briefly introduce the Graph-Massivizer platform. We then leverage the emerging serverless computing paradigm to devise Graph-Serverlizer, a scalable graph analytics tool over a co-designed computing continuum infrastructure. Finally, we sketch six crucial research questions in Graph-Serverlizer’s design and outline three ongoing and future research directions for addressing them.

How to Optimize Dynamic Adaptive Video Streaming? Challenges and Solutions

[slide]

Abstract: Empowered by today’s rich tools for media generation and collaborative production and by convenient access to the Internet, video streaming has become very popular. Dynamic adaptive video streaming is a technique used to deliver video content to users over the Internet, where the quality of the video adapts in real time based on the network conditions and the capabilities of the user’s device. HTTP Adaptive Streaming (HAS) has become the de-facto standard to provide a smooth and uninterrupted viewing experience, especially when network conditions change frequently. Improving the QoE of users with respect to various applications’ requirements presents several challenges, such as network variability, limited resources, and device heterogeneity. For example, the available network bandwidth can vary over time, leading to frequent changes in video quality. In addition, different users have different preferences and viewing habits, which can further complicate live streaming optimization. Researchers and engineers have developed various approaches to optimize dynamic adaptive streaming, such as QoE-driven adaptation, machine learning-based approaches, and multi-objective optimization, to address these challenges. In this talk, we will give an introduction to the topic of video streaming and point out the significant challenges in the field. We will present a layered architecture for video streaming and then discuss a selection of approaches from our research addressing these challenges. For instance, we will present approaches to improve the QoE of clients in user-generated content applications in centralized and distributed fashions. Moreover, we will present a novel architecture for low-latency live streaming that is agnostic to protocols and codecs and can work equally well with existing HAS-based approaches.
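For readers unfamiliar with how the client-side adaptation mentioned above works, here is a deliberately simplified throughput-based ABR rule. It is for illustration only and is not any specific player's algorithm (real players such as dash.js also factor in buffer occupancy, switching penalties, and more); the safety margin and the example ladder are assumptions.

```python
def select_bitrate(bitrate_ladder_kbps, recent_throughput_kbps, safety=0.8):
    """Pick the highest representation that fits within a safety margin of the
    recently measured throughput; fall back to the lowest one otherwise."""
    budget = recent_throughput_kbps * safety
    feasible = [b for b in sorted(bitrate_ladder_kbps) if b <= budget]
    return feasible[-1] if feasible else min(bitrate_ladder_kbps)

# Example: a four-step ladder and a 3500 kbps throughput estimate -> 2800 kbps.
print(select_bitrate([800, 1600, 2800, 4500], 3500))
```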

36th IEEE/IFIP Network Operations and Management Symposium (NOMS 2023) Miami, USA
Authors: Josef Hammer, Dragi Kimovski, Narges Mehran, Radu Prodan, and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)
Abstract: The challenging demands for the next generation of the Internet of Things have led to a massive increase in edge computing and network virtualization technologies. While there is vast potential for research in these areas, managing complex adaptive infrastructure is difficult, and experiments with real hardware are tedious to set up. Furthermore, proposed solutions often require expensive hardware or labor-intensive procedures to replicate and build on these ideas. With our C3-Edge testbed, we address these challenges and propose a novel approach for automated edge testbed setup with a low-cost software-defined network and adaptive infrastructure configuration. We validated the efficiency of our approach on a real-world computing continuum infrastructure. The evaluation results confirm that our flexible approach is suitable for all but the most bandwidth-intensive applications.
23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid 2023) Bangalore, India
Authors: Josef Hammer and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)
Abstract: Multi-access Edge Computing (MEC) is a central piece of 5G telecommunication systems and is essential to satisfy the challenging low-latency demands of future applications. MEC provides a cloud computing platform at the edge of the radio access network that developers can utilize for their applications. Our previous publications argue that edge computing should be transparent to clients. We introduced an efficient solution to implement such a transparent approach, leveraging Software-Defined Networking (SDN) and virtual IP+port addresses for registered edge services. In this work, we introduce the Unique Mask, a solution superior to the Unique Prefix presented in our previous work that considerably reduces the number of required flows in the switches. Our evaluations show that both algorithms perform very well, with the Unique Mask capable of reducing the number of flows by up to 98 %.
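To give an intuition for why masked matching shrinks flow tables, the snippet below shows the general idea of aggregating many virtual service addresses behind a single wildcarded match. It is not the paper's Unique Mask algorithm; the addresses and the aggregate prefix are made up for illustration.

```python
import ipaddress

# Hypothetical virtual IPs assigned to registered edge services.
virtual_services = [ipaddress.ip_address(f"10.0.0.{i}") for i in range(8, 16)]

# One masked flow rule covering 10.0.0.8/29 replaces up to eight exact-match rules.
aggregate = ipaddress.ip_network("10.0.0.8/29")
covered = [ip for ip in virtual_services if ip in aggregate]
print(f"{len(covered)} of {len(virtual_services)} virtual IPs matched by one masked flow")
```

The actual challenge addressed by the Unique Mask is choosing address assignments and masks so that such aggregates cover exactly the intended services, which is what yields the reported reduction in switch flows.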
7th IEEE International Conference on Fog and Edge Computing (ICFEC 2023) held in conjunction with CCGrid 2023 Bangalore, India
Authors: Josef Hammer and Hermann Hellwagner, Alpen-Adria-Universität Klagenfurt
Abstract: The challenging demands for the next generation of the Internet of Things have led to a massive increase in edge computing and network virtualization technologies. One significant technology is Multi-access Edge Computing (MEC), a central piece of 5G telecommunication systems. MEC provides a cloud computing platform at the edge of the radio access network and is particularly essential to satisfy the challenging low-latency demands of future applications. Our previous publications argue that edge computing should be transparent to clients. We introduced an efficient solution to implement such a transparent approach, leveraging Software-Defined Networking (SDN) and virtual IP+port addresses for registered edge services.