Funded by the EU, HiPEAC (High-Performance Edge And Cloud computing) is the premier focal point for networking, dissemination, training, and collaboration activities in Europe for researchers, industry, and policy related to computing systems.

The HiPEAC webinar series allows you to keep up to date on the latest advances in computer architecture and compilation research via online sessions, which can be accessed anywhere.

The Graph-Massivizer Project webinar took place on November 29, 2023. After an introduction by Nuria De Lama, Radu Prodan presented the background of graph processing, from Euler's five-node graphs to the massive graphs of today, and the motivations for the project.

Project details were presented by Reza Farahani and Matteo Angelinelli.

 

 

With the current popularity of ECO in the Asia–Pacific (APAC), the Bitmovin team in APAC, led by Adrian Britton, expressed an interest in the energy-aware research initiatives conducted within the GAIA project in Austria. Following an introductory meeting between the APAC team and AAU on October 17, 2023, both teams decided to meet in person on November 21, 2023, to explore the topics further.

The meeting proved to be highly productive, centering around two recent research topics:

– VE-Match: Video Encoding Matching-Based Model in the Cloud and Edge (presented by Samira Afzal & Narges Mehran)

– Energy-aware Spatial and Temporal Resolution Selection for Per-Title (presented by Mohammad Ghasempour & Hadi Amirpour)

Each presentation prompted lively Q&A on customer and provider requirements and on the outlook for climate-friendly video streaming in the cloud and at the edge. The fruitful discussions opened up avenues for future exploration in this dynamic field.

The 19th International Conference on emerging Networking EXperiments and Technologies (CoNEXT), Paris, France, December 5–8, 2023

Authors: Leonardo Peroni (IMDEA Networks Institute), Sergey Gorinsky (IMDEA Networks Institute), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria).

Abstract: Quality of Experience (QoE) and QoE models are of increasing importance to networked systems. Traditional QoE modeling for video streaming applications builds a one-size-fits-all QoE model that underserves atypical viewers who perceive QoE differently. To address the problem of atypical viewers, this paper proposes iQoE (individualized QoE), a method that employs explicit, expressible, and actionable feedback from a viewer to construct a personalized QoE model for this viewer. The iterative iQoE design exercises active learning and combines a novel sampler with a modeler. The chief emphasis of our paper is on making iQoE sample-efficient and accurate.
By leveraging the Microworkers crowdsourcing platform, we conduct studies with 120 subjects who provide 14,400 individual scores. According to the subjective studies, a session of about 22 minutes empowers a viewer to construct a personalized QoE model that, compared to the best of the 10 baseline models, delivers an average accuracy improvement of at least 42% for all viewers and at least 85% for the atypical viewers. The large-scale simulations based on a new technique of synthetic profiling expand the evaluation scope by exploring iQoE design choices, parameter sensitivity, and generalizability.
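
To make the sampler-modeler loop more concrete, here is a minimal Python sketch of an iQoE-style personalization session. It assumes candidate experiences are feature vectors, uses the spread of a random-forest ensemble as a simple uncertainty criterion for the sampler, and obtains viewer scores through a hypothetical ask_viewer() callback; the actual iQoE sampler and modeler may differ.

    # Illustrative sketch only, not the authors' iQoE implementation.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def personalize_qoe(candidates, ask_viewer, n_queries=30, seed=0):
        # candidates: (N, d) array of streaming-experience features
        # ask_viewer: callback returning the viewer's score for one experience
        rng = np.random.default_rng(seed)
        pool = list(range(len(candidates)))
        # Bootstrap with two random queries so the model can be fitted.
        labeled = [pool.pop(int(rng.integers(len(pool)))) for _ in range(2)]
        scores = [ask_viewer(candidates[i]) for i in labeled]
        model = RandomForestRegressor(n_estimators=100, random_state=seed)
        for _ in range(n_queries - len(labeled)):
            model.fit(candidates[labeled], np.array(scores))
            # Sampler: query the unlabeled experience with the largest
            # disagreement among the trees (a simple uncertainty proxy).
            preds = np.stack([t.predict(candidates[pool]) for t in model.estimators_])
            next_i = pool.pop(int(np.argmax(preds.std(axis=0))))
            labeled.append(next_i)
            scores.append(ask_viewer(candidates[next_i]))  # explicit viewer feedback
        model.fit(candidates[labeled], np.array(scores))
        return model  # personalized QoE predictor for this viewer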

 

International Conference on Visual Communications and Image Processing (IEEE VCIP’23)

http://www.vcip2023.org/

Authors: Vignesh V Menon, Reza Farahani, Prajit T Rajendran, Samira Afzal, Klaus Schoeffmann, Christian Timmerer

Abstract: With the emergence of multiple modern video codecs, streaming service providers are forced to encode, store, and transmit bitrate ladders of multiple codecs separately, consequently suffering from additional energy costs for encoding, storage, and transmission. To tackle this issue, we introduce an online energy-efficient Multi-Codec Bitrate ladder Estimation scheme (MCBE) for adaptive video streaming applications. In MCBE, quality representations within the bitrate ladder of new-generation codecs (e.g., HEVC, AV1) that lie below the predicted rate-distortion curve of the AVC codec are removed. Moreover, perceptual redundancy between representations of the bitrate ladders of the considered codecs is also minimized based on a Just Noticeable Difference (JND) threshold. To this end, random forest-based models predict the VMAF of bitrate ladder representations of each codec. In a live streaming session where all clients support the decoding of AVC, HEVC, and AV1, MCBE achieves impressive results, reducing cumulative encoding energy by 56.45%, storage energy usage by 94.99%, and transmission energy usage by 77.61% (considering a JND of six VMAF points). These energy reductions are in comparison to a baseline bitrate ladder encoding based on current industry practice.
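
As a rough illustration of the two pruning rules described above, the following Python sketch filters a new-generation codec's ladder against a predicted AVC rate-distortion curve and then applies a JND threshold of six VMAF points between successively kept rungs. The ladder values and the interpolation of the AVC curve are illustrative assumptions; in the paper, the VMAF values come from its random-forest prediction models.

    # Illustrative sketch of the two MCBE pruning rules; not the authors' code.
    import numpy as np

    def prune_ladder(codec_ladder, avc_ladder, jnd=6.0):
        # codec_ladder / avc_ladder: lists of (bitrate_kbps, predicted_vmaf) pairs
        avc_rates = np.array([r for r, _ in avc_ladder], dtype=float)
        avc_vmafs = np.array([v for _, v in avc_ladder], dtype=float)
        kept, last_vmaf = [], None
        for rate, vmaf in sorted(codec_ladder):
            # Rule 1: drop rungs at or below the interpolated AVC rate-distortion curve.
            if vmaf <= np.interp(rate, avc_rates, avc_vmafs):
                continue
            # Rule 2: drop rungs within one JND (six VMAF points) of the last kept rung.
            if last_vmaf is not None and vmaf - last_vmaf < jnd:
                continue
            kept.append((rate, vmaf))
            last_vmaf = vmaf
        return kept

    # Example: an HEVC ladder filtered against a predicted AVC ladder.
    avc = [(1000, 70.0), (3000, 85.0), (6000, 92.0)]
    hevc = [(1000, 78.0), (2000, 86.0), (3000, 90.0), (6000, 95.0)]
    print(prune_ladder(hevc, avc))  # [(1000, 78.0), (2000, 86.0), (6000, 95.0)]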

Authors: Reza Saeedinia (University of Tehran), S. Omid Fatemi (University of Tehran), Daniele Lorenzi (Alpen-Adria-Universität Klagenfurt), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract: Live user-generated content (UGC) has increased significantly in video streaming applications. Improving the quality of experience (QoE) for users is a crucial consideration in UGC live streaming, where a user can be both a subscriber and a streamer. Resource allocation is an NP-complete task in UGC live streaming due to many subscribers and streamers with varying requests, bandwidth limitations, and network constraints. In this paper, to decrease the execution time of the resource allocation algorithm, we first process streamers' and subscribers' requests and then aggregate them into a limited number of groups based on their preferences. Second, we perform resource allocation for these groups, which we call communities. We formulate the resource allocation problem for communities into an optimization problem. With an efficient aggregation of subscribers and streamers at the core of the proposed architecture, the computational complexity of the optimization problem is reduced, consequently improving QoE. This improvement occurs because of the prompt reaction to bandwidth fluctuations and, subsequently, appropriate resource allocation by the proposed model. We conduct experiments in various scenarios. The results show an average of 41% improvement in execution time. To evaluate the impact of bandwidth fluctuations on the proposed algorithm, we employ two network traces: AmazonFCC and NYUBUS. The results show 4% and 28% QoE improvement in a scenario with 5 streamers over the AmazonFCC and the NYUBUS network traces, respectively.
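
The aggregation idea at the heart of the paper can be pictured with a small Python sketch: subscriber requests are grouped into communities by their preferences, and the bottleneck bandwidth is then allocated per community rather than per subscriber. The grouping key and the proportional split below are illustrative assumptions, not the paper's exact optimization formulation.

    # Illustrative sketch of preference-based aggregation into communities.
    from collections import defaultdict

    def build_communities(requests):
        # requests: dicts such as {"subscriber": 1, "streamer": "A", "quality": "720p"}
        communities = defaultdict(list)
        for req in requests:
            communities[(req["streamer"], req["quality"])].append(req["subscriber"])
        return communities

    def allocate(communities, bitrate_kbps, total_kbps):
        # Allocate the bottleneck bandwidth per community (not per subscriber),
        # scaling demands down uniformly if they exceed the available bandwidth.
        demand = {key: bitrate_kbps[key[1]] * len(subs) for key, subs in communities.items()}
        scale = min(1.0, total_kbps / sum(demand.values()))
        return {key: d * scale for key, d in demand.items()}

    requests = [
        {"subscriber": 1, "streamer": "A", "quality": "720p"},
        {"subscriber": 2, "streamer": "A", "quality": "720p"},
        {"subscriber": 3, "streamer": "B", "quality": "1080p"},
    ]
    communities = build_communities(requests)
    print(allocate(communities, {"720p": 2500, "1080p": 4500}, total_kbps=8000))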

Link: 13th International Conference on Computer and Knowledge Engineering (ICCKE)

Title: Designing A Sustainable Serverless Graph Processing Tool on the Computing Continuum

Abstract: Graph processing has become increasingly popular and essential for solving complex problems in various domains, like social networks. However, processing graphs at a massive scale poses critical challenges, such as inefficient resource and energy utilization. To bridge such challenges, the Graph-Massivizer project, funded by the Horizon Europe research and innovation program, conducts research and develops a high-performance, scalable, and sustainable platform for information processing and reasoning based on the massive graph (MG) representation of extreme data. This paper presents an initial architectural design for the Choreographer, one of the five Graph-Massivizer tools. We explain Choreographer’s components and their collaboration with other Graph-Massivizer tools. We demonstrate how Choreographer can adopt the emerging serverless computing paradigm to process Basic Graph Operations (BGOs) as serverless functions across the computing continuum efficiently. Moreover, we present an early vision of our federated Function-as-a-Service (FaaS) testbed, which will be used to conduct experiments and assess Choreographer performance.
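
As a rough idea of what processing a Basic Graph Operation as a serverless function could look like, the following Python sketch wraps a simple one-hop-neighbourhood BGO in a generic FaaS-style handler. The handler signature, event layout, and the chosen BGO are assumptions for illustration; the project's actual BGO catalogue and FaaS runtime may differ.

    # Illustrative sketch of a Basic Graph Operation (BGO) as a serverless function;
    # the handler signature, event layout, and BGO choice are assumptions.
    import json

    def handler(event, context=None):
        # The event carries an edge list for one graph partition plus BGO arguments.
        edges = event["edges"]     # e.g. [[0, 1], [1, 2], [0, 2]]
        source = event["source"]   # vertex whose one-hop neighbourhood we want
        neighbours = sorted({v for u, v in edges if u == source} |
                            {u for u, v in edges if v == source})
        return {"statusCode": 200,
                "body": json.dumps({"source": source, "neighbours": neighbours})}

    # Local invocation, as a FaaS gateway might do once per graph partition.
    print(handler({"edges": [[0, 1], [1, 2], [0, 2]], "source": 0}))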

 

Kurt Horvath presented the paper "A distributed geofence-based discovery scheme for the computing Continuum" at the 29th International European Conference on Parallel and Distributed Computing (Euro-Par 2023).

Authors: Kurt Horvath, Dragi Kimovski, Christoph Uran, Helmut Wöllik, and Radu Prodan

Abstract: Service discovery is a vital process that enables low latency provisioning of Internet of Things applications across the computing continuum. Unfortunately, it becomes increasingly difficult to identify a proper service within strict time constraints due to the high heterogeneity of the computing continuum. Moreover, the plethora of network technologies and protocols commonly used by the Internet of Things applications further hinders service discovery. To address these issues, we introduce a novel mobile edge service discovery algorithm named Mobile Edge Service Discovery using the DNS (MESDD), which utilizes intermediate code to identify a suitable service instance across the computing continuum based on the naming scheme used to identify the users’ location. MESDD utilizes geofences to aid this process, which enables fine-grained resource discovery. We deployed a real-life distributed computing continuum testbed and compared MESDD with three related methods. The evaluation results show that MESDD outperforms the other approaches by 60% after eight discovery iterations.
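
To illustrate how geofences can aid such a discovery step, the following Python sketch tests the client position against named geofences and encodes the matching geofence into a DNS-style service name. Both the point-in-polygon test and the "<geofence>.<service>.example.org" naming pattern are illustrative assumptions rather than the MESDD naming scheme itself.

    # Illustrative sketch of geofence-aided discovery; the polygon test and the
    # naming pattern below are assumptions, not the MESDD implementation.
    def in_geofence(point, polygon):
        # Ray-casting point-in-polygon test; polygon is a list of (lon, lat) vertices.
        x, y = point
        inside = False
        for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
            if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
        return inside

    def discovery_name(point, geofences, service="video-transcode"):
        # Map the client position to a geofence and build a location-encoded DNS name.
        for name, polygon in geofences.items():
            if in_geofence(point, polygon):
                return f"{name}.{service}.example.org"
        return f"default.{service}.example.org"

    fences = {"klu-center": [(14.30, 46.61), (14.32, 46.61), (14.32, 46.63), (14.30, 46.63)]}
    print(discovery_name((14.31, 46.62), fences))  # klu-center.video-transcode.example.org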

A DEEP DIVE INTO VIDEO STREAMING AND GRAPH PROCESSING USE CASES

More information

Prof. Hermann Hellwagner is a keynote speaker at IEEE MIPR, 30th August – 1st September 2023.

Title: Advances in Edge-Based and In-Network Media Processing for Adaptive Video Streaming

Talk Abstract: Media traffic (mainly, video) on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research was the HTTP Adaptive Streaming (HAS) technique. While this technique is widely used and works well in industrial networked multimedia services today, challenges exist for future multimedia systems, dealing with the trade-offs between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, low latency), and (iii) quality of experience (QoE). This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry.

In this talk, I’ll explore one facet of the ATHENA research, namely how and with which benefits edge-based and in-network media processing can cope with adverse network conditions and/or improve media quality/perception. Content Delivery Networks (CDNs) are the classical example of supporting content distribution on today’s Internet. In recent years, though, techniques like Multi-access Edge Computing (MEC), Software Defined Networking (SDN), Network Function Virtualization (NFV), Peer Assistance (PA) for CDNs, and Machine Learning (ML) have emerged that can additionally be leveraged to support adaptive video streaming services. In the talk, I’ll present several approaches of edge-based and in-network media processing in support of adaptive streaming, in four groups:

  1. Edge Computing (EC) support, for instance transcoding, content prefetching, and adaptive bitrate algorithms at the edge.
  2. Virtualized Network Function (VNF) support for live video streaming.
  3. Hybrid P2P, Edge and CDN support including content caching, transcoding, and super-resolution at various layers of the system.
  4. Machine Learning (ML) techniques facilitating various (end-to-end) properties of an adaptive streaming system.

On 22.08.2023, Reza Farahani successfully defended his doctoral thesis, titled "Network-Assisted Delivery of Adaptive Video Streaming Services through CDN, SDN, and MEC", under the supervision of Univ.-Prof. DI Dr. Hermann Hellwagner and Univ.-Prof. DI Dr. Christian Timmerer at ITEC. His defense was chaired by Assoc. Prof. DI Dr. Klaus Schöffmann and examined by Prof. Dr. Tobias Hoßfeld (University of Würzburg, Germany) and Prof. Dr. Filip De Turck (Ghent University, Belgium).
During his doctoral studies, he contributed to the ATHENA and Graph-Massivizer projects.
Reza will continue as a postdoctoral researcher at ITEC in the Graph-Massivizer project.

The abstract of his dissertation is as follows:

Multimedia applications, mainly video streaming services, are currently the dominant source of network load worldwide. In recent VoD and live video streaming services, traditional streaming delivery techniques have been replaced by adaptive solutions based on the HTTP protocol. Current trends toward high-resolution and low-latency VoD and live video streaming pose new challenges to E2E bandwidth demand and impose stringent delay requirements. To cope with these demands, video providers rely on CDNs to provide scalable video streaming services. To support future streaming scenarios involving millions of users, it is necessary to increase the CDNs' efficiency. It is agreed that these requirements may be satisfied by adopting emerging networking techniques to present Network-Assisted Video Streaming (NAVS) methods. Motivated by this, the thesis goes one step beyond traditional pure client-based HAS algorithms by incorporating in-network components with a broader view of the network to present completely transparent NAVS solutions for HAS clients.

Our first contribution concentrates on leveraging the capabilities of the SDN, NFV, and MEC paradigms to introduce ES-HAS and CSDN as edge- and SDN-assisted frameworks. ES-HAS and CSDN introduce VNFs named VRP servers at the edge of an SDN-enabled network to collect HAS clients' requests and retrieve networking information. The SDN controller in these systems manages a single-domain network. VRP servers run optimization models as server/segment selection policies to serve clients' requests with the shortest fetching time, by selecting the most appropriate cache server/video segment quality or by reconstructing the requested quality through transcoding at the edge. Deployment of ES-HAS and CSDN on cloud-based testbeds and estimation of users' QoE using objective metrics demonstrate that clients' requests can be served with 40% higher QoE and 63% lower bandwidth usage compared to state-of-the-art approaches.

Our second contribution designs an architecture that simultaneously supports various types of video streaming (live and VoD), considering their versatile QoE and latency requirements. To this end, the SDN, NFV, and MEC paradigms are leveraged, and three VNFs, i.e., VPF, VCF, and VTF, are designed. We build a series of these function chains through the SFC paradigm, utilize all CDN and edge server resources, and present SARENA, an SFC-enabled architecture for adaptive video streaming applications. We equip SARENA's SDN controller with a lightweight request scheduler and an edge configurator to make it deployable in practical environments and to dynamically scale edge servers based on service requirements, respectively. Experimental results show that SARENA outperforms baseline schemes, with 39.6% higher users' QoE, 29.3% lower E2E latency, and 30% lower backhaul traffic usage for live and VoD services.

Our third contribution aims to use the idle resources of edge servers and employ the capabilities of the SDN controller to establish collaboration among edge servers, in addition to collaboration between edge servers and the SDN controller. We introduce two collaborative edge-assisted frameworks named LEADER and ARARAT. LEADER utilizes sets of actions, presented in an Action Tree, formulates the problem as a central optimization model to enhance the HAS clients' serving time, subject to the network's and edge servers' resource constraints, and proposes a lightweight heuristic algorithm to solve the model. ARARAT extends LEADER's Action Tree, considers network cost in the optimization, devises multiple heuristic algorithms, and runs extensive scenarios. Evaluation results show that LEADER and ARARAT improve users' QoE by 22%, decrease the streaming cost by 47%, and enhance network utilization by 13% compared to related methods.

Our final contribution focuses on incorporating P2P networks and CDNs, utilizing NFV and edge computing techniques, and presenting RICHTER and ALIVE as hybrid P2P-CDN frameworks. RICHTER and ALIVE particularly use HAS clients' potential idle computational resources, besides their available bandwidth, to provide distributed video processing services, e.g., video transcoding and video super-resolution. Both frameworks introduce multi-layer architectures and design Action Trees that consider all feasible resources for serving clients' requests with acceptable latency and quality. Moreover, RICHTER proposes an online learning method and ALIVE utilizes a lightweight algorithm distributed over in-network virtualized components, which are designed to play decision-maker roles in large-scale practical scenarios. Results show that RICHTER and ALIVE improve users' QoE by 22%, decrease the cost incurred by the streaming service provider by 34%, shorten clients' serving latency by 39%, reduce edge server energy consumption by 31%, and reduce backhaul bandwidth usage by 24% compared to the other approaches.