Prof. Hermann Hellwagner is a keynote speaker at IEEE MIPR, 30th August – 1st September 2023.

Title: Advances in Edge-Based and In-Network Media Processing for Adaptive Video Streaming

Talk Abstract: Media traffic (mainly video) on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research was the HTTP Adaptive Streaming (HAS) technique. While this technique is widely used and works well in industrial networked multimedia services today, challenges exist for future multimedia systems, dealing with the trade-offs between (i) the ever-increasing content complexity, (ii) various timing requirements (most importantly, low latency), and (iii) quality of experience (QoE). This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry.

In this talk, I’ll explore one facet of the ATHENA research, namely how, and with which benefits, edge-based and in-network media processing can cope with adverse network conditions and/or improve media quality and perception. Content Delivery Networks (CDNs) are the classical example of supporting content distribution on today’s Internet. In recent years, though, techniques like Multi-access Edge Computing (MEC), Software Defined Networking (SDN), Network Function Virtualization (NFV), Peer Assistance (PA) for CDNs, and Machine Learning (ML) have emerged that can additionally be leveraged to support adaptive video streaming services. In the talk, I’ll present several approaches to edge-based and in-network media processing in support of adaptive streaming, in four groups:

  1. Edge Computing (EC) support, for instance transcoding, content prefetching, and adaptive bitrate algorithms at the edge (see the sketch after this list).
  2. Virtualized Network Function (VNF) support for live video streaming.
  3. Hybrid P2P, Edge and CDN support including content caching, transcoding, and super-resolution at various layers of the system.
  4. Machine Learning (ML) techniques facilitating various (end-to-end) properties of an adaptive streaming system.
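
As an illustration of the first group, here is a minimal, hypothetical sketch of a throughput-based adaptive bitrate decision of the kind an edge node could run on behalf of HAS clients; the bitrate ladder, smoothing factor, and safety margin are assumptions for illustration, not algorithms from the talk.

```python
# Minimal sketch of a throughput-based adaptive bitrate (ABR) decision,
# of the kind group 1 places at the edge. All names are illustrative.

BITRATE_LADDER_KBPS = [300, 750, 1200, 2400, 4800]  # available representations

def smoothed_throughput(samples_kbps: list[float], alpha: float = 0.3) -> float:
    """Exponentially weighted moving average over recent segment downloads."""
    estimate = samples_kbps[0]
    for s in samples_kbps[1:]:
        estimate = alpha * s + (1 - alpha) * estimate
    return estimate

def select_bitrate(throughput_estimate_kbps: float, safety_factor: float = 0.8) -> int:
    """Pick the highest representation that fits the estimated throughput,
    discounted by a safety factor to absorb short-term fluctuations."""
    budget = throughput_estimate_kbps * safety_factor
    feasible = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return feasible[-1] if feasible else BITRATE_LADDER_KBPS[0]

# Example: an edge node observing recent per-segment throughput samples.
samples = [3200.0, 2800.0, 1900.0, 2100.0]
print(select_bitrate(smoothed_throughput(samples)))  # -> 1200
```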

On 22.08.2023, Reza Farahani successfully defended his doctoral thesis entitled “Network-Assisted Delivery of Adaptive Video Streaming Services through CDN, SDN, and MEC” under the supervision of Univ.-Prof. DI Dr. Hermann Hellwagner and Univ.-Prof. DI Dr. Christian Timmerer at ITEC. His defense was chaired by Assoc. Prof. DI Dr. Klaus Schöffmann and examined by Prof. Dr. Tobias Hoßfeld (University of Würzburg, Germany) and Prof. Dr. Filip De Turck (Ghent University, Belgium).
During his doctoral studies, he contributed to the ATHENA and Graph-Massivizer projects.
Reza will continue as a postdoctoral researcher at ITEC in the Graph-Massivizer project.

The abstract of his dissertation is as follows:

Multimedia applications, mainly video streaming services, are currently the dominant source of network load worldwide. In recent VoD and live video streaming services, traditional streaming delivery techniques have been replaced by adaptive solutions based on the HTTP protocol. Current trends toward high-resolution and low-latency VoD and live video streaming pose new challenges to E2E bandwidth demand and impose stringent delay requirements. To meet these demands, video providers rely on CDNs to offer scalable video streaming services. To support future streaming scenarios involving millions of users, the efficiency of CDNs must be increased. It is widely agreed that these requirements can be satisfied by adopting emerging networking techniques to devise Network-Assisted Video Streaming (NAVS) methods. Motivated by this, the thesis goes one step beyond traditional, purely client-based HAS algorithms by incorporating (an) in-network component(s) with a broader view of the network to present fully transparent NAVS solutions for HAS clients.

Our first contribution concentrates on leveraging the capabilities of the SDN, NFV, and MEC paradigms to introduce ES-HAS and CSDN as edge- and SDN-assisted frameworks. ES-HAS and CSDN introduce VNFs named VRP servers at the edge of an SDN-enabled network to collect HAS clients’ requests and retrieve networking information. The SDN controller in these systems manages a single-domain network. VRP servers run optimization models as server/segment selection policies to serve clients’ requests with the shortest fetching time, either by selecting the most appropriate cache server/video segment quality or by reconstructing the requested quality through transcoding at the edge. Deploying ES-HAS and CSDN on cloud-based testbeds and estimating users’ QoE using objective metrics demonstrates that clients’ requests can be served with 40% higher QoE and 63% lower bandwidth usage compared to state-of-the-art approaches.

Our second contribution designs an architecture that simultaneously supports various types of video streaming (live and VoD), considering their diverse QoE and latency requirements. To this end, the SDN, NFV, and MEC paradigms are leveraged, and three VNFs, i.e., VPF, VCF, and VTF, are designed. We build a series of these function chains through the SFC paradigm, utilize all CDN and edge server resources, and present SARENA, an SFC-enabled architecture for adaptive video streaming applications. We equip SARENA’s SDN controller with a lightweight request scheduler to make it deployable in practical environments and with an edge configurator to dynamically scale edge servers based on service requirements. Experimental results show that SARENA outperforms baseline schemes with 39.6% higher users’ QoE, 29.3% lower E2E latency, and 30% lower backhaul traffic usage for live and VoD services.

Our third contribution aims to use the idle resources of edge servers and employ the capabilities of the SDN controller to establish collaboration among edge servers, in addition to collaboration between edge servers and the SDN controller. We introduce two collaborative edge-assisted frameworks named LEADER and ARARAT. LEADER utilizes sets of actions, presented in an Action Tree, formulates the problem as a central optimization model that improves the HAS clients’ serving time subject to the network’s and edge servers’ resource constraints, and proposes a lightweight heuristic algorithm to solve the model. ARARAT extends LEADER’s Action Tree, considers network cost in the optimization, devises multiple heuristic algorithms, and runs extensive evaluation scenarios. Evaluation results show that LEADER and ARARAT improve users’ QoE by 22%, decrease the streaming cost by 47%, and enhance network utilization by 13% compared to alternative approaches.

Our final contribution focuses on incorporating P2P networks and CDNs, utilizing NFV and edge computing techniques, and presenting RICHTER and ALIVE as hybrid P2P-CDN frameworks. RICHTER and ALIVE use HAS clients’ potentially idle computational resources, besides their available bandwidth, to provide distributed video processing services such as video transcoding and video super-resolution. Both frameworks introduce multi-layer architectures and design Action Trees that consider all feasible resources for serving clients’ requests with acceptable latency and quality. Moreover, RICHTER proposes an online learning method, and ALIVE utilizes a lightweight algorithm distributed over in-network virtualized components, which are designed to play decision-maker roles in large-scale practical scenarios. Results show that RICHTER and ALIVE improve users’ QoE by 22%, decrease the streaming service provider’s costs by 34%, shorten clients’ serving latency by 39%, reduce edge servers’ energy consumption by 31%, and reduce backhaul bandwidth usage by 24% compared to the alternatives.
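
The following toy sketch illustrates the flavor of the VRP servers’ server/segment selection described above: pick the (cache server, quality) pair with the shortest estimated fetching time, falling back to edge transcoding from a higher cached quality when the requested one is missing. The cost model and all names are illustrative assumptions, not the thesis’ optimization formulation.

```python
# Hypothetical sketch of a VRP-style server/segment selection: choose the
# (cache server, quality) pair minimizing estimated serving time, with an
# optional edge-transcoding fallback. Cost model and names are illustrative.

def fetch_time_s(segment_bits: float, path_bw_bps: float) -> float:
    return segment_bits / path_bw_bps

def select_action(requested_kbps, seg_duration_s, caches, transcode_rate_factor=0.5):
    """caches: list of dicts {'bw_bps': ..., 'qualities_kbps': [...]}.
    Returns (time_s, cache_index, served_quality_kbps, transcoded)."""
    best = None
    for i, c in enumerate(caches):
        for q in c['qualities_kbps']:
            if q < requested_kbps:
                continue  # never upscale a lower quality
            t = fetch_time_s(q * 1000 * seg_duration_s, c['bw_bps'])
            transcoded = q > requested_kbps
            if transcoded:
                t += seg_duration_s * transcode_rate_factor  # crude edge-transcoding penalty
            if best is None or t < best[0]:
                best = (t, i, requested_kbps, transcoded)
    return best

caches = [
    {'bw_bps': 20e6, 'qualities_kbps': [1200, 2400]},
    {'bw_bps': 5e6,  'qualities_kbps': [1200]},
]
print(select_action(1200, 4, caches))  # -> (0.24, 0, 1200, False)
```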

The University of Klagenfurt is currently researching how large volumes of data can be processed more energy-efficiently. The digital transmission of information consumes energy. Scientists from twelve institutions are working on processing these so-called “massive graphs” more efficiently, according to a statement released by the project team on Wednesday. One of the goals is to introduce an energy label for software code.


“The savings potential in data processing still receives too little attention. We want to make it visible and offer solutions,” said project leader Radu Prodan. Green supercomputing is about organizing computing power more efficiently so that less energy is consumed overall. The researchers have been working on the project “Extreme and Sustainable Graph Processing for Urgent Societal Challenges in Europe” for almost a year and have already produced first results, which have so far been presented at three events in Portugal, Romania, and the USA.

Title: Beyond von Neumann in the Computing Continuum: Architectures, Applications, and Future Directions

Authors: Kimovski, Dragi; Saurabh, Nishant; Jansen, Matthijs; Aral, Atakan; Al-Dulaimy, Auday; Bondi, Andre; Galletta, Antonino; Papadopoulos, Alessandro; Iosup, Alexandru; Prodan, Radu

Abstract: The article discusses emerging non-von Neumann computer architectures and their integration in the computing continuum for supporting modern distributed applications, including artificial intelligence, big data, and scientific computing. It provides a detailed summary of the available and emerging non-von Neumann architectures, which range from power-efficient single-board accelerators to quantum and neuromorphic computers. Furthermore, it explores their potential benefits for revolutionizing data processing and analysis in various fields of society, science, and industry. The paper provides a detailed analysis of the most widely used class of distributed applications and discusses the difficulties in their execution over the computing continuum, including communication, interoperability, orchestration, and sustainability issues.

Sahar Nasirihaghighi presented the paper titled “Action Recognition in Video Recordings from Gynecology Laparoscopy” at the IEEE 36th International Symposium on Computer-Based Medical Systems (CBMS 2023).

Authors: Sahar Nasirihaghighi, Negin Ghamsarian, Daniela Stefanics, Klaus Schoeffmann and Heinrich Husslein

Abstract: Action recognition is a prerequisite for many applications in laparoscopic video analysis, including but not limited to surgical training, operation room planning, follow-up surgery preparation, post-operative surgical assessment, and surgical outcome estimation. However, automatic action recognition in laparoscopic surgeries involves numerous challenges, such as (I) cross-action and intra-action duration variation, (II) relevant content distortion due to smoke, blood accumulation, fast camera motions, organ movements, and object occlusion, and (III) surgical scene variations due to different illuminations and viewpoints. Moreover, action annotations in laparoscopic surgery are limited and expensive, as they require expert knowledge. In this study, we design and evaluate a CNN-RNN architecture as well as a customized training-inference framework to address these challenges in laparoscopic surgery action recognition. Using stacked recurrent layers, our proposed network takes advantage of inter-frame dependencies to mitigate the negative effect of content distortion and variation on action recognition. Furthermore, our proposed frame sampling strategy effectively manages the duration variations in surgical actions to enable action recognition with high temporal resolution. Our extensive experiments confirm the superiority of our proposed method in action recognition compared to static CNNs.
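
For readers who want a concrete picture, the following is a hedged PyTorch sketch of a generic CNN-RNN classifier in the spirit described above (a frame-level CNN backbone feeding stacked recurrent layers); the backbone choice, layer sizes, and input shape are illustrative assumptions, not the authors’ exact architecture.

```python
# Generic CNN-RNN action classifier: a frame-level CNN backbone feeding
# stacked recurrent layers. A sketch, not the paper's exact network.
import torch
import torch.nn as nn
from torchvision import models

class CnnRnnActionNet(nn.Module):
    def __init__(self, num_actions: int, hidden: int = 256, rnn_layers: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=None)   # frame feature extractor
        feat_dim = backbone.fc.in_features         # 512 for ResNet-18
        backbone.fc = nn.Identity()                # keep raw features
        self.cnn = backbone
        self.rnn = nn.GRU(feat_dim, hidden, num_layers=rnn_layers,
                          batch_first=True)        # stacked recurrent layers
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.rnn(feats)                   # model inter-frame dependencies
        return self.head(out[:, -1])               # classify from the last step

logits = CnnRnnActionNet(num_actions=5)(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 5])
```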

Speakers: Dan Nicolae (University of Chicago, USA), Razvan Bunescu (University of North Carolina at Charlotte, USA), Anna Fensel (Wageningen University & Research, the Netherlands), Radu Prodan (University of Klagenfurt, Austria), Ioan Toma (Onlim GmbH, Austria), Dumitru Roman (SINTEF / University of Oslo, Norway), Pawel Gasiorowski (Sofia University – GATE Institute, Bulgaria, and London Metropolitan University – Cyber Security Research Centre, UK), Jože Rožanec (Qlector, Slovenia), Nikolay Nikolov (SINTEF AS, Norway), Viktor Sowinski-Mydlarz (London Metropolitan University, UK and GATE Institute, Bulgaria), Brian Elvesæter (SINTEF AS, Norway)

Summer school organized by the Academia de Studii Economice din București, in collaboration with the GATE Institute at Sofia University St. Kliment Ohridski and the DataCloud, enRichMyData, Graph-Massivizer, UPCAST, and InterTwino projects.

Organizing team: Dan Nicolae, Razvan Bunescu, Dumitru Roman, Sylvia Ilieva, Ahmet Soylu, Raluca C., Iva Krasteva, Irena Pavlova, Cosmin Proșcanu, Miruna Proșcanu, Anca Bogdan, Georgiana Camelia Georgescu (Cretan), Dessislava Petrova-Antonova, Orlin Kouzov, Vasile Alecsandru Strat, Adriana AnaMaria Alexandru (Davidescu), Miruna Mazurencu Marinescu Pele, Daniel Traian Pele, Liviu-Adrian Cotfas, Cristina-Rodica Boboc, Oana Geman, Alina Petrescu-Nita, Ovidiu-Aurel Ghiuta, Codruta Mare


On 14.07.2023, Zahra Najafabadi Samani successfully defended her doctoral thesis entitled “Resource-Aware Time-Critical Application Placement in the Computing Continuum” under the supervision of Prof. Radu Prodan and Assoc.-Prof. Dr. Klaus Schöffmann at ITEC. Her defense was chaired by Univ.-Prof. Dr. Christian Timmerer and examined by Univ.-Prof. Dr. Thomas Fahringer (Leopold Franzens-Universität Innsbruck, AT) and Assoc.-Prof. Dr. Attila Kertesz (University of Szeged, HU).
During her doctoral studies, she contributed to the ARTICONF and DataCloud EU H2020 projects.
Zahra will continue as a postdoctoral researcher at the Leopold Franzens-Universität Innsbruck.

The abstract of her dissertation is as follows:

The rapid expansion of time-critical applications with substantial demands on high bandwidth and ultra-low latency poses critical challenges for Cloud data centers. To address time-critical application demands, the computing continuum emerged as a new distributed platform that extends the Cloud toward nearby Edge and Fog resources, substantially decreasing communication latency and network traffic. However, the distributed and heterogeneous nature of the computing continuum, with sporadic availability of devices, may result in service failures and deadline violations, significantly negating its advantages for hosting time-critical applications and lowering users’ satisfaction. Additionally, the dense deployment and intense competition for limited nearby resources pose resource utilization challenges. To tackle these problems, this thesis investigates the problem of resource-aware time-critical application placement with constrained deadlines and various demands in the heterogeneous computing continuum, with three main contributions (a minimal placement sketch follows the list):
 1. A multilayer resource partitioning model for placing time-critical applications to minimize resource wastage while maximizing deadline satisfaction;
 2. An adaptive placement method for the dynamic computing continuum with sporadic device availability to minimize resource wastage and maximize deadline satisfaction;
 3. A proactive service level agreement-aware placement method, leveraging distributed monitoring to enhance deadline satisfaction and service success.
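
The sketch below illustrates, under simplified assumptions, the kind of resource- and deadline-aware placement decision the thesis addresses: filter out nodes that cannot meet a task’s deadline, then pick the feasible node that wastes the least capacity. The node model and wastage criterion are illustrative assumptions, not the thesis’ algorithms.

```python
# Toy deadline- and resource-aware placement over Edge/Fog/Cloud candidates.
# Node model and the "least leftover capacity" heuristic are illustrative.

def completion_time_s(task_mi, net_latency_s, node_mips):
    return net_latency_s + task_mi / node_mips

def place(task, nodes):
    """task: {'mi': work in mega-instructions, 'deadline_s': ...}.
    nodes: list of {'name', 'mips', 'latency_s', 'free_mips'}."""
    feasible = [n for n in nodes
                if n['free_mips'] >= task['mi'] / task['deadline_s']
                and completion_time_s(task['mi'], n['latency_s'], n['mips'])
                    <= task['deadline_s']]
    if not feasible:
        return None  # deadline violation unavoidable on current resources
    # Least leftover capacity first -> less fragmentation and wastage.
    return min(feasible, key=lambda n: n['free_mips'])['name']

nodes = [
    {'name': 'edge-1',  'mips': 2000,  'latency_s': 0.005, 'free_mips': 1500},
    {'name': 'fog-1',   'mips': 8000,  'latency_s': 0.020, 'free_mips': 6000},
    {'name': 'cloud-1', 'mips': 50000, 'latency_s': 0.120, 'free_mips': 40000},
]
print(place({'mi': 400, 'deadline_s': 0.5}, nodes))  # -> 'edge-1'
```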

Title: A distributed and energy-efficient KNN for EEG classification with dynamic money-saving policy in heterogeneous clusters

Authors: Juan José Escobar, Francisco Rodríguez, Beatriz Prieto, Dragi Kimovski, Andrés Ortiz, and Miguel Damas

Abstract: Due to energy consumption’s increasing importance in recent years, energy-time efficiency is a highly relevant objective to address in High-Performance Computing (HPC) systems, where the cost of the executed tasks is significant. Among these tasks, classification problems are considered due to their great computational complexity, which is sometimes aggravated when processing high-dimensional datasets. In addition, implementing efficient applications for high-performance systems is not an easy task, since hardware must be considered to maximize performance, especially on heterogeneous platforms with multi-core CPUs. Thus, this article proposes an efficient distributed K-Nearest Neighbors (KNN) classifier for Electroencephalogram (EEG) classification that uses minimum Redundancy Maximum Relevance (mRMR) as a feature selection technique to reduce the dimensionality of the dataset. The approach implements an energy policy that can stop or resume the execution of the program based on the cost per megawatt. Since the procedure is based on the master-worker scheme, the performance of three different workload distributions is also analyzed to identify which one is more suitable under the experimental conditions. The proposed approach outperforms the classification results obtained by previous works that use the same dataset. It achieves a speedup of 74.53 when running on a multi-node heterogeneous cluster, consuming only 13.38% of the energy consumed by the sequential version. Moreover, the results show that financial costs can be reduced when the energy policy is activated, and they demonstrate the importance of developing efficient methods, proving that energy-aware computing is necessary for sustainable computing.
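
The following is a hedged sketch of the money-saving idea described in the abstract: a driver that pauses work while the current energy price exceeds a threshold and resumes when the price drops. The price feed, threshold, and chunked execution are assumptions for illustration, not the authors’ implementation.

```python
# Hedged sketch of a price-driven stop/resume energy policy around a
# chunked classification job. Threshold, units, and API are illustrative.
import time

def run_knn_chunks(chunks, classify_chunk, price_feed,
                   max_price_eur_mwh=90.0, poll_s=0.0):
    """Classify chunks in order, sleeping while energy is too expensive.
    price_feed() returns the current electricity price (EUR/MWh here)."""
    results = []
    for chunk in chunks:
        while price_feed() > max_price_eur_mwh:
            time.sleep(poll_s)                     # stopped: wait for cheaper energy
        results.append(classify_chunk(chunk))      # resumed: process next chunk
    return results

# Toy demo: a scripted price series and a stub classifier.
prices = iter([120.0, 95.0, 80.0, 80.0, 80.0, 80.0])
out = run_knn_chunks(chunks=[[1, 2], [3, 4]],
                     classify_chunk=lambda c: [x % 2 for x in c],
                     price_feed=lambda: next(prices))
print(out)  # [[1, 0], [1, 0]]
```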


Our Graph-Massivizer Project is thrilled to be part of the #DataWeek2023 event! Join us for a thought-provoking session on “Are current infrastructures suitable for extreme data processing? Technologies for data management.”

Don’t miss this opportunity to explore cutting-edge solutions and discuss the future of data processing together with Nuria De Lama, Dumitru Roman, Roberta Turra, Radu Prodan, Lilit Axner, Jan Martinovič, Bill Patrowicz, and Irena Pavlova!

📅 Tuesday 13th

⏰ 15:30 – 17:00

BDVA – Big Data Value Association


Title: Container-based Data Pipelines on the Computing Continuum for Remote Patient Monitoring

Authors: Nikolay Nikolov, Arnor Solberg, Radu Prodan, Ahmet Soylu, Mihhail Matskin, Dumitru Roman

Computer Journal, Special Issue on Computing in Telemedicine

Abstract: Diagnosis, treatment, and follow-up care of patients increasingly happen through telemedicine, especially in remote areas where direct interaction is hindered. Over the past three years, following the COVID-19 pandemic, the utility of remote patient care has been further field-tested. Tackling the technical challenges of a growing demand for telemedicine requires a convergence of several fields: 1) software solutions for reliable, secure, and reusable data processing, 2) management of hardware resources (at scale) on the Cloud/Fog/Edge Computing Continuum, and 3) automation of DevOps processes for the deployment of digital healthcare solutions with patients. In this context, the emerging concept of big data pipelines provides relevant solutions and is one of the main enablers. In what follows, we present a data pipeline for remote patient monitoring and show a real-world example of how data pipelines help address the stringent requirements of telemedicine.
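
To illustrate the big data pipeline concept in miniature, the sketch below wires three independent steps (ingestion, anomaly detection, notification) into a sequence; in a container-based deployment, each step would run as its own container. The step names, data shape, and threshold are illustrative assumptions, not the paper’s actual pipeline.

```python
# Miniature data pipeline for remote patient monitoring: independent steps
# composed into a sequence. All names and thresholds are illustrative.

def ingest(vitals):                      # e.g., readings pushed by patient devices
    return [v for v in vitals if v is not None]

def detect_anomalies(vitals, hr_limit=120):
    return [v for v in vitals if v['heart_rate'] > hr_limit]

def notify(alerts):
    for a in alerts:
        print(f"ALERT patient={a['patient']} hr={a['heart_rate']}")
    return alerts

PIPELINE = [ingest, detect_anomalies, notify]   # each step -> one container

def run(data):
    for step in PIPELINE:
        data = step(data)
    return data

run([{'patient': 'p1', 'heart_rate': 131}, None,
     {'patient': 'p2', 'heart_rate': 74}])     # prints one alert for p1
```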