Title: FaaScinating Resilience for Serverless Function Choreographies in Federated Clouds

Authors: Sasko Ristov, Dragi Kimovski, Thomas Fahringer

Abstract: Cloud applications often benefit from deployment on the serverless technology Function-as-a-Service (FaaS), which can instantly spawn numerous functions and charges users only for the time during which serverless functions run. The maximum benefit is achieved when functions are orchestrated in workflows, or function choreographies (FCs). However, many provider limitations specific to FaaS, such as maximum concurrency or duration, often increase the failure rate, which can severely hamper the execution of entire FCs. Current support for resilience is often limited to function retries or try-catch constructs, which are applicable within the same cloud region only. To overcome these limitations, we introduce rAFCL, a middleware platform that maintains the reliability of complex FCs in federated clouds. To support resilient FC execution under rAFCL, our model creates an alternative strategy for each function based on the required availability specified by the user. Alternative strategies are not restricted to the same cloud region, but may contain alternative functions across five providers, invoked concurrently in a single alternative plan or executed subsequently in multiple alternative plans. With this approach, rAFCL offers flexibility in terms of the cost-performance trade-off. We evaluated rAFCL by running three real-life applications across three cloud providers. Experimental results demonstrated that rAFCL outperforms the resilience of AWS Step Functions, increasing the success rate of the entire FC by 53.45% while invoking only 3.94% more functions with zero wasted function invocations. rAFCL significantly improves the availability of entire FCs to almost 1 and survives even massive failures of alternative functions.
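The alternative-strategy mechanism described in the abstract can be pictured with a short sketch: each function is given an ordered list of alternative plans, every plan holds one or more alternative deployments that are invoked concurrently, and the next plan is tried only if the current one fails entirely. The Python code below is a minimal illustration of that idea under these assumptions, not rAFCL's actual implementation; the endpoint URLs and the `invoke` helper are hypothetical.

```python
import concurrent.futures
import urllib.request

def invoke(url: str, payload: bytes, timeout: float = 30.0) -> bytes:
    """Invoke one serverless deployment; raises on failure (hypothetical endpoint)."""
    req = urllib.request.Request(url, data=payload, method="POST")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read()

def run_with_alternatives(plans: list[list[str]], payload: bytes) -> bytes:
    """Try alternative plans in order; within a plan, all alternatives run
    concurrently and the first success wins (rAFCL-style sketch)."""
    for plan in plans:                          # plans are tried one after another
        with concurrent.futures.ThreadPoolExecutor(len(plan)) as pool:
            futures = [pool.submit(invoke, url, payload) for url in plan]
            for fut in concurrent.futures.as_completed(futures):
                try:
                    return fut.result()         # first successful alternative wins
                except Exception:
                    continue                    # this alternative failed; wait for others
    raise RuntimeError("all alternative plans failed")

# Hypothetical deployments of the same function across providers and regions:
plans = [
    ["https://aws.example/us-east-1/f"],        # plan 1: primary deployment
    ["https://aws.example/eu-west-1/f",
     "https://gcp.example/europe-west1/f"],     # plan 2: concurrent cross-provider backups
]
```

Invoking backups concurrently within a plan trades extra invocations for a higher chance of success, which matches the cost-performance flexibility the abstract describes.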

Project Lead: H. Hellwagner, Ch. Timmerer

Abstract: Immersive telepresence technologies will have game-changing impacts on interactions among individuals and with non-human objects (e.g., machines) in cyberspace, blurring the boundaries between the virtual and physical worlds. The impacts of this technology are expected to span a variety of vertical sectors, including education and training, entertainment, healthcare, and the manufacturing industry. The key challenges include limitations of both the application platform and the underlying network support in achieving seamless presentation, processing, and delivery of immersive telepresence content at large scale. Innovative design, rigorous validation, and testing exercises aim to fulfill the key technical requirements identified, such as low-latency communication, high bandwidth demand, and complex real-time content encoding/rendering tasks. The industry-leading SPIRIT consortium will build on the existing TRL4 application platforms and network infrastructures developed by the project partners, aiming to address key technical challenges and further develop all major aspects of telepresence technologies to achieve the targeted TRL7. The SPIRIT project will focus its innovations on network-layer, transport-layer, and application/content-layer techniques, as well as security and privacy mechanisms, to facilitate the large-scale operation of telepresence applications. The project team will develop a fully distributed, interconnected testing infrastructure across two geographical sites in Germany and the UK, allowing large-scale testing of heterogeneous telepresence applications in real-life Internet environments. The network infrastructure will host two mainstream application environments based on WebRTC and low-latency DASH. In addition to the project-designated use case scenarios, the project team will test a variety of additional use cases covering heterogeneous vertical sectors through FSTP participation.

Journal: Elsevier Computer Communications

Authors: Alireza Erfanian (Alpen-Adria-Universität Klagenfurt), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt).

Abstract: Recent advances in embedded systems and communication technologies enable novel non-safety applications in Vehicular Ad Hoc Networks (VANETs). Video streaming has become a popular core service for such applications. In this paper, we present QoCoVi, a QoE- and cost-aware adaptive video streaming approach for the Internet of Vehicles (IoV) that delivers video segments requested by mobile users at specified qualities and deadlines. Considering a multitude of transmission data sources with different capacities and costs, the goal of QoCoVi is to serve the desired video qualities at minimum cost. By applying Dynamic Adaptive Streaming over HTTP (DASH) principles, QoCoVi considers cached video segments on vehicles equipped with storage capacity as the lowest-cost sources for serving requests.

We design QoCoVi in two SDN-based operational modes: (i) centralized and (ii) distributed. In centralized mode, we can obtain a suitable solution by introducing a mixed-integer linear programming (MILP) optimization model that can be executed on the SDN controller. However, to cope with the computational overhead of the centralized approach in real IoV scenarios, we propose a fully distributed version of QoCoVi based on the proximal Jacobi alternating direction method of multipliers (ProxJ-ADMM) technique. The effectiveness of the proposed approach is confirmed through emulation with Mininet-WiFi in different scenarios.
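A toy version of the centralized selection problem can be written as a MILP: binary variables decide which source serves each segment request, the objective minimizes total delivery cost, and constraints enforce that every request is served by exactly one source and that no source exceeds its capacity. The sketch below, using the PuLP library, only illustrates the shape of the problem described in the abstract; the source names, costs, and capacities are invented, and QoCoVi's actual model additionally handles qualities and deadlines.

```python
import pulp

# Hypothetical data: delivery cost per segment for each source, and capacities.
cost = {"vehicle_cache": 1, "rsu": 3, "cellular": 8}       # cost per served segment
capacity = {"vehicle_cache": 2, "rsu": 4, "cellular": 10}  # max segments per source
requests = ["r1", "r2", "r3", "r4"]                        # segment requests

prob = pulp.LpProblem("segment_assignment", pulp.LpMinimize)

# x[s][r] = 1 if source s serves request r.
x = pulp.LpVariable.dicts("x", (cost, requests), cat="Binary")

# Objective: minimize the total delivery cost.
prob += pulp.lpSum(cost[s] * x[s][r] for s in cost for r in requests)

# Each request is served by exactly one source.
for r in requests:
    prob += pulp.lpSum(x[s][r] for s in cost) == 1

# No source exceeds its capacity.
for s in cost:
    prob += pulp.lpSum(x[s][r] for r in requests) <= capacity[s]

prob.solve()
for r in requests:
    chosen = next(s for s in cost if x[s][r].value() == 1)
    print(r, "->", chosen)
```

With these numbers, the cheap vehicle caches fill up first and the remaining requests spill over to costlier sources; the distributed ProxJ-ADMM variant decomposes such a model so that no single controller has to solve it.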

Every minute, more than 500 hours of video material are published on YouTube. These days, moving images account for a vast majority of data traffic, and there is no end in sight. This means that technologies that can improve the efficiency of video streaming are becoming all the more important. This is exactly what Hadi Amirpourazarian is working on in the Christian Doppler Laboratory ATHENA at the University of Klagenfurt. Read the full article here.

The new 5G standard will bring data centers closer to the customer. ITEC researchers are developing a system for the rapid distribution of computing tasks.
The newspaper “Der Standard” reported on this work in the article “Verteiltes Rechnen dank Fog-Computings” (“Distributed computing thanks to fog computing”).

 

The kick-off meeting of the FF4EuroHPC project “CardioHPC” took place online on Friday, March 11, 2022. The purpose of this first meeting was primarily to define work structures and work packages and to get to know the project partners. The project partners are the following institutions: INNO, Ss. Cyril and Methodius University in Skopje, and Klagenfurt University.

 

Electronic health records, like ELGA in Austria, provide an overview of laboratory results, diagnostics, and therapies. With the help of machine learning, much could be learned from the personal and private data of individuals for use in the treatment of others. However, the use of such data is a delicate matter, especially when it comes to diseases that carry a stigma. Researchers involved in the EU project “Enabling the Big Data Pipeline Lifecycle on the Computing Continuum (DataCloud)” are working to make new forms of information processing suitable for medical purposes. Dragi Kimovski and his colleagues recently presented their findings in a publication. Read the complete article here.

 

Title: Big data analytics in Industry 4.0 ecosystems

Authors: Gagangeet Singh Aujla, Radu Prodan, Danda B. Rawat

Journal: “Software: Practice and Experience”

Full editorial/article: https://onlinelibrary.wiley.com/doi/10.1002/spe.3008

The second online meeting between Austria and China took place on February 21, 2022. The consortium discussed aspects of sustainable transportation networks, #blockchain, and new development strategies in line with #UnitedNations #IYBSSD #SDGs.

Prof. Radu Prodan

Title: Big Data Pipelines on the Computing Continuum: Tapping the Dark Data

 

Authors: Dumitru Roman, Radu Prodan, Nikolay Nikolov, Ahmet Soylu, Mihhail Matskin, Andrea Marrella, Dragi Kimovski, Brian Elvesæter, Anthony Simonet-Boulogne, Giannis Ledakis, Hui Song, Francesco Leotta, Evgeny Kharlamov

 

Abstract: Big Data pipelines are essential for leveraging Dark Data, i.e., data that is collected but never used or turned into value. However, tapping their potential requires going beyond existing approaches and frameworks for Big Data processing. The Computing Continuum opens new opportunities for managing Big Data pipelines, in particular for the efficient management of heterogeneous and untrustworthy resources. This article discusses the lifecycle of Big Data pipelines on the Computing Continuum, examines its associated challenges, and outlines a future research agenda in this area.
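To make the pipeline-on-the-continuum idea concrete, one can imagine each pipeline step declaring where on the continuum (edge, fog, cloud) it may run, with a scheduler then mapping steps to concrete resources. The fragment below is a purely illustrative sketch under that assumption, not the DataCloud toolchain; all step names, attributes, and resources are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    allowed_layers: set[str]          # continuum layers where the step may run
    cpu_cores: int
    inputs: list[str] = field(default_factory=list)

# Invented example: a three-step medical analytics pipeline.
pipeline = [
    Step("ingest_records", {"edge"}, cpu_cores=1),
    Step("anonymize", {"edge", "fog"}, cpu_cores=2, inputs=["ingest_records"]),
    Step("train_model", {"cloud"}, cpu_cores=16, inputs=["anonymize"]),
]

resources = [
    {"name": "edge-node-1", "layer": "edge",  "cpu_cores": 4,  "cost": 1},
    {"name": "fog-node-1",  "layer": "fog",   "cpu_cores": 8,  "cost": 2},
    {"name": "cloud-vm-1",  "layer": "cloud", "cpu_cores": 32, "cost": 5},
]

def schedule(steps, resources):
    """Greedy sketch: map each step to the cheapest resource on an allowed layer."""
    plan = {}
    for step in steps:
        candidates = [r for r in resources
                      if r["layer"] in step.allowed_layers
                      and r["cpu_cores"] >= step.cpu_cores]
        plan[step.name] = min(candidates, key=lambda r: r["cost"])["name"]
    return plan

print(schedule(pipeline, resources))
```

Keeping privacy-sensitive steps (here, anonymization) pinned to edge or fog layers is one simple way such a declarative lifecycle model can encode trust constraints on untrustworthy resources.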