Distributed and Parallel Systems

The paper “A Two-Sided Matching Model for Data Stream Processing in the Cloud–Fog Continuum” has been accepted for publication at the 21st IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGrid 2021).

Authors: Narges Mehran, Dragi Kimovski and Radu Prodan

Abstract: Latency-sensitive and bandwidth-intensive stream processing applications are dominant traffic generators over the Internet. A stream consists of a continuous sequence of data elements, which require processing in near real-time. To improve communication latency and reduce network congestion, Fog computing complements Cloud services by moving the computation towards the edge of the network. Unfortunately, the heterogeneity of the new Cloud–Fog continuum raises important challenges related to deploying and executing data stream applications. We explore in this work a two-sided stable matching model called Cloud–Fog to data stream application matching (CODA) for deploying a distributed application, represented as a workflow of stream processing microservices, on heterogeneous Cloud–Fog computing resources. In CODA, the application microservices rank the continuum resources based on their stream processing time, while the resources rank the stream processing microservices based on their residual bandwidth. A stable many-to-one matching algorithm assigns microservices to resources based on their mutual preferences, aiming to optimize the complete stream processing time on the application side and the total streaming traffic on the resource side.
We evaluate the CODA algorithm in simulated and real-world Cloud–Fog scenarios and achieve 11–45% lower stream processing time and 1.3–20% lower streaming traffic compared to related state-of-the-art approaches.
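
To illustrate the matching idea behind CODA, the sketch below shows a many-to-one deferred-acceptance matching in which microservices propose to resources ordered by estimated stream processing time, and each resource keeps the proposals with the highest residual bandwidth up to its capacity. This is a minimal illustration, not the paper’s implementation; all names, capacities and timings are hypothetical.

```python
# Minimal sketch of a many-to-one deferred-acceptance matching in the spirit
# of CODA: microservices propose to resources in order of estimated stream
# processing time; each resource keeps the proposals with the highest residual
# bandwidth up to its capacity. All figures below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    capacity: int                      # maximum number of hosted microservices
    residual_bandwidth: dict           # microservice name -> bandwidth score
    accepted: list = field(default_factory=list)

def coda_like_matching(microservices, processing_time, resources):
    """Return a stable assignment: microservice name -> resource name.
    processing_time[m][r] is the estimated processing time of m on resource r."""
    # Each microservice ranks the resources by ascending processing time.
    prefs = {m: sorted(resources, key=lambda r: processing_time[m][r.name])
             for m in microservices}
    next_choice = {m: 0 for m in microservices}
    unmatched = list(microservices)

    while unmatched:
        m = unmatched.pop(0)
        if next_choice[m] >= len(prefs[m]):
            continue                   # m has exhausted its preference list
        r = prefs[m][next_choice[m]]
        next_choice[m] += 1
        r.accepted.append(m)
        # The resource keeps only its best candidates by residual bandwidth.
        r.accepted.sort(key=lambda s: r.residual_bandwidth[s], reverse=True)
        while len(r.accepted) > r.capacity:
            unmatched.append(r.accepted.pop())

    return {m: r.name for r in resources for m in r.accepted}

# Hypothetical example: two fog nodes, one cloud VM, three microservices.
resources = [
    Resource("fog-1", 1, {"ingest": 9, "detect": 5, "aggregate": 3}),
    Resource("fog-2", 1, {"ingest": 7, "detect": 8, "aggregate": 2}),
    Resource("cloud-1", 2, {"ingest": 4, "detect": 6, "aggregate": 9}),
]
processing_time = {
    "ingest":    {"fog-1": 12, "fog-2": 10, "cloud-1": 25},
    "detect":    {"fog-1": 30, "fog-2": 28, "cloud-1": 15},
    "aggregate": {"fog-1": 20, "fog-2": 22, "cloud-1": 8},
}
print(coda_like_matching(["ingest", "detect", "aggregate"], processing_time, resources))
```

The deferred-acceptance structure is what yields stability: no microservice and resource would both prefer each other over their assigned partners once the loop terminates.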

Prof. Radu Prodan

The project “Kärntner Fog: A 5G-Enabled Fog Infrastructure for Automated Operation of Carinthia’s 5G Playground Application Use Cases” proposes a new infrastructure automation use case in the 5G Playground Carinthia (5GPG). Kärntner Fog plans to create and deploy a distributed service middleware infrastructure over a diverse set of novel heterogeneous 5G edge devices, complemented by a high-performance Cloud data center accessible with low latency according to 5G standards. Such an infrastructure is currently missing in the 5GPG and will represent a horizontal backbone that interconnects and integrates the application use cases. Kärntner Fog will automate the development and operation of the application use cases in the 5GPG in an integrated and more cost-effective fashion to enable more science and innovation within a limited budget.

Involved Organisations: BABEG, ITEC@AAU, ONDA TLC GmbH, FFG/KWF

Coordinator: Prof. Radu Prodan
Project Start: 01.01.2021
Project Duration: 48 months

The manuscript “Cloud, Fog or Edge: Where to Compute?” has been accepted for publication in an upcoming issue of IEEE Internet Computing.

Authors: Dragi Kimovski, Roland Mathá, Josef Hammer, Narges Mehran, Hermann Hellwagner and Radu Prodan

Abstract: The computing continuum extends the high-performance cloud data centers with energy-efficient and low-latency devices close to the data sources located at the edge of the network.
However, the heterogeneity of the computing continuum raises multiple challenges related to application management. These include deciding where to offload an application, from the cloud to the edge, to meet its computation and communication requirements.
To support these decisions, we provide in this article a detailed performance and carbon footprint analysis of a selection of use case applications with complementary resource requirements across the computing continuum on a real-life evaluation testbed.
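
As a purely illustrative reading of the offloading question above, the following sketch picks the tier (edge, fog or cloud) that meets a latency bound at the lowest estimated carbon footprint. The tier figures and field names are invented for the example and do not come from the article’s testbed.

```python
# Hypothetical "where to compute?" heuristic: among the tiers that satisfy the
# application's latency bound, choose the one with the lowest estimated carbon
# footprint. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    round_trip_ms: float      # network latency to the data source
    compute_ms: float         # expected processing time of the workload
    carbon_g: float           # estimated gCO2 per request

def where_to_compute(tiers, latency_bound_ms):
    feasible = [t for t in tiers if t.round_trip_ms + t.compute_ms <= latency_bound_ms]
    if not feasible:
        return None           # no tier meets the deadline
    return min(feasible, key=lambda t: t.carbon_g)

tiers = [
    Tier("edge",  round_trip_ms=5,  compute_ms=80, carbon_g=0.4),
    Tier("fog",   round_trip_ms=15, compute_ms=40, carbon_g=0.6),
    Tier("cloud", round_trip_ms=60, compute_ms=10, carbon_g=1.1),
]
best = where_to_compute(tiers, latency_bound_ms=70)
print(best.name if best else "no feasible tier")   # prints "fog" in this example
```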

Prof. Radu Prodan

The paper “Dynamic Multi-objective Scheduling of Microservices in the Cloud” has been accepted at the 13th IEEE/ACM International Conference on Utility and Cloud Computing (UCC).

Authors: Hamid Mohammadi Fard, Radu Prodan, Felix Wolf

Abstract: For many applications, a microservices architecture promises better performance and flexibility compared to a conventional monolithic architecture. Despite these advantages, deploying microservices poses various challenges for service developers and providers alike. One of these challenges is the efficient placement of microservices on the cluster nodes. Improper allocation of microservices can quickly waste resource capacities and cause low system throughput. In the last few years, new technologies in orchestration frameworks, such as the possibility of multiple schedulers for pods in Kubernetes, have improved microservice scheduling solutions, but using these technologies requires involving both the service developer and the service provider in analyzing the behavior of workloads. Using the memory and CPU requests specified in the service manifest, we propose a general microservices scheduling mechanism that can operate efficiently in private clusters or enterprise clouds. We model the scheduling problem as a complex variant of the knapsack problem and solve it using a multi-objective optimization approach. Our experiments show that the proposed mechanism is highly scalable and simultaneously increases the utilization of both memory and CPU, which in turn leads to better throughput than the state-of-the-art.
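
The sketch below is a simplified, greedy stand-in for the multi-objective placement idea described above: it reads the CPU and memory requests of each pod and scores candidate nodes so that both resources are utilized evenly. It is not the paper’s algorithm or the Kubernetes scheduler; all node and pod figures are made up.

```python
# Greedy illustration of balancing CPU and memory utilization when placing
# pods, using the requests from a (hypothetical) service manifest.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_cap: float      # cores
    mem_cap: float      # GiB
    cpu_used: float = 0.0
    mem_used: float = 0.0

def score(node, cpu_req, mem_req):
    """Lower is better: prefer nodes whose CPU and memory utilization stay
    close to each other after placement (balanced packing)."""
    cpu_util = (node.cpu_used + cpu_req) / node.cpu_cap
    mem_util = (node.mem_used + mem_req) / node.mem_cap
    return abs(cpu_util - mem_util) + max(cpu_util, mem_util)

def schedule(pods, nodes):
    placement = {}
    for pod, (cpu_req, mem_req) in pods.items():
        fitting = [n for n in nodes
                   if n.cpu_used + cpu_req <= n.cpu_cap
                   and n.mem_used + mem_req <= n.mem_cap]
        if not fitting:
            placement[pod] = None          # pod stays pending
            continue
        best = min(fitting, key=lambda n: score(n, cpu_req, mem_req))
        best.cpu_used += cpu_req
        best.mem_used += mem_req
        placement[pod] = best.name
    return placement

nodes = [Node("node-a", cpu_cap=4, mem_cap=8), Node("node-b", cpu_cap=8, mem_cap=16)]
pods = {"api": (1.0, 2.0), "worker": (2.0, 1.0), "cache": (0.5, 4.0)}
print(schedule(pods, nodes))
```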

Prof. Radu Prodan

Prof. Radu Prodan is a keynote speaker at the 13th International Conference on Developments in eSystems Engineering (DeSE), 13–17 December 2020.

Prof. Radu Prodan

Prof. Radu Prodan is a guest of honor at the 9th International Workshop on Soft Computing Applications (SOFA), 27–29 November 2020, Arad, Romania. The title of his talk is “Distribute one Billion”.

Prof. Radu Prodan

The newspaper “Kronen Zeitung” published the article “IM KAMPF GEGEN CORONA: Universität Klagenfurt forscht mit den Chinesen” with Prof. Radu Prodan.

Prof. Radu Prodan

The newspaper “Kleine Zeitung” published the article “Medizinische Schutzausrüstung: Neue IT-Lösung soll Menschenleben retten” with Prof. Radu Prodan.

 

Authors: Dragi Kimovski, Dijana C. Bogatinoska, Narges Mehran, Aleksandar Karadimce, Natasha Paunkoska, Radu Prodan, Ninoslav Marina

Abstract: The proliferation of smart sensing and computing devices, capable of collecting a vast amount of data, has made the gathering of the necessary vehicular traffic data relatively easy. However, the analysis of these big data sets requires computational resources, which are currently provided by Cloud data centers. Nevertheless, Cloud data centers can have unacceptably high latency for vehicular analysis applications with strict time requirements. The recent introduction of the Edge computing paradigm, as an extension of the Cloud services, has partially moved the processing of big data closer to the data sources, thus addressing this issue. Unfortunately, this has introduced multiple challenges related to resource management. Therefore, we present a model for scheduling vehicular traffic analysis applications with partial task offloading across the Cloud–Edge continuum. The approach represents the traffic applications as a set of interconnected tasks composed into a workflow that can be partially offloaded to the Edge. We evaluated the approach in a simulated Cloud–Edge environment using two representative vehicular traffic applications with a focus on video stream analysis. Our results show that the presented approach reduces the application response time by up to eight times while improving energy efficiency by a factor of four.
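
As a rough illustration of partial offloading, the sketch below splits a linear, hypothetical video-analysis pipeline between the Edge and the Cloud at the point that minimizes end-to-end response time, given per-task runtimes and transfer costs. It is a toy model under these assumptions, not the scheduling model evaluated in the paper.

```python
# Illustrative partial offloading of a linear task pipeline: run the first
# `split` tasks on the Edge, ship the intermediate result to the Cloud, and run
# the rest there. Task names and timings are hypothetical.

def best_split(tasks, edge_ms, cloud_ms, transfer_ms):
    """tasks: ordered task names; edge_ms/cloud_ms: per-task runtimes (ms);
    transfer_ms[i]: time to ship the input of task i to the Cloud
    (transfer_ms[0] is the cost of sending the raw input)."""
    best = (float("inf"), None)
    for split in range(len(tasks) + 1):          # split = number of Edge tasks
        edge_part = sum(edge_ms[t] for t in tasks[:split])
        cloud_part = sum(cloud_ms[t] for t in tasks[split:])
        uplink = transfer_ms[split] if split < len(tasks) else 0.0
        total = edge_part + uplink + cloud_part
        if total < best[0]:
            best = (total, split)
    return best                                   # (response time, Edge task count)

# Hypothetical video-analysis pipeline: decode -> detect -> track -> aggregate
tasks = ["decode", "detect", "track", "aggregate"]
edge_ms = {"decode": 20, "detect": 120, "track": 60, "aggregate": 10}
cloud_ms = {"decode": 8, "detect": 30, "track": 15, "aggregate": 3}
transfer_ms = [200, 40, 25, 5]   # intermediate data shrinks along the pipeline
print(best_split(tasks, edge_ms, cloud_ms, transfer_ms))   # (108, 1) here
```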