Distributed and Parallel Systems

Title: Handover Authentication Latency Reduction using Mobile Edge Computing and Mobility Patterns

Authors: Fatima Abdullah, Dragi Kimovski, Radu Prodan, and Kashif Munir

Abstract: With the advancement of technology and the exponential growth of mobile devices, network traffic in cellular networks has increased manifold. As a result, latency reduction has become a challenging issue for mobile devices. Reducing latency in the handover authentication process is crucial for seamless connectivity and minimal disruption during movement. Handover authentication is the process of checking the legitimacy of a mobile node when it crosses the boundary of an access network. This paper proposes an efficient technique that utilizes the mobility patterns of mobile nodes and a mobile Edge computing framework to reduce handover authentication latency. The key idea of the proposed technique is to categorize mobile nodes on the basis of their mobility patterns. We perform simulations to measure the networking latency, and we use a queuing model to measure the processing time of an authentication query at an Edge server. The results show that the proposed approach reduces handover authentication latency by up to 54% in comparison with the existing approach.
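As a rough illustration of the queuing analysis, the sketch below assumes an M/M/1 queue at the Edge server (the paper's actual queuing model and rate values may differ) and computes the mean time an authentication query spends there:

```python
# Minimal M/M/1 queuing sketch for the processing time of an
# authentication query at an Edge server. The arrival and service
# rates below are illustrative assumptions, not values from the paper.

def mm1_sojourn_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time a query spends at the server (waiting + service)
    in an M/M/1 queue: T = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("Unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Example: 80 authentication queries/s arriving at an Edge server
# that serves 100 queries/s on average.
print(f"{mm1_sojourn_time(80, 100) * 1000:.1f} ms")  # prints 50.0 ms
```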

Link: https://c3.itec.aau.at/index.php/paper-accepted-elsevier-computing/


Authors: Yasir Noman Khalid, Muhammad Aleem, Usman Ahmed, Radu Prodan, Muhammad Arshad Islam and Muhammad Azhar Iqbal

Abstract: Employing general-purpose graphics processing units (GPGPUs) with the help of OpenCL has greatly reduced the execution time of data-parallel applications by taking advantage of the massive available parallelism. However, when an application with a small data size is executed on a GPU, it cannot fully utilize the GPU's compute cores and resources are wasted. Due to the lack of operating system support on GPUs, there is no mechanism to share a GPU between two kernels. In this paper, we propose a GPU sharing mechanism between two kernels that increases GPU occupancy and, as a result, reduces the execution time of a job pool. However, if a pair of kernels competes for the same set of resources (i.e., both applications are compute-intensive or memory-intensive), kernel fusion may also significantly increase the execution time of the fused kernels. Therefore, it is pertinent to select an optimal pair of kernels for fusion that results in a significant speedup over their serial execution. This research presents FusionCL, a machine learning-based GPU sharing mechanism between a pair of OpenCL kernels. FusionCL identifies the kernel pairs from the job pool that are suitable candidates for fusion using a machine learning-based fusion suitability classifier. Thereafter, from all candidates, it uses a fusion speedup predictor to select the pair of kernels that will produce the maximum speedup after fusion over their serial execution. The experimental evaluation shows that the proposed kernel fusion mechanism reduces execution time by 2.83× compared to a baseline scheduling scheme. Compared to the state of the art, the reduction in execution time is up to 8%.
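To make the selection step concrete, the hedged sketch below assumes two hypothetical trained models, a fusion suitability classifier and a fusion speedup predictor, and picks the kernel pair with the maximum predicted speedup; the names and feature interface are illustrative, not the paper's API:

```python
from itertools import combinations

# Sketch of FusionCL's two-stage pair selection. The classifier and
# predictor objects stand in for the trained machine-learning models
# described in the paper; their feature inputs are assumptions.

def select_fusion_pair(job_pool, classifier, predictor):
    """Return the kernel pair with the highest predicted speedup,
    or None if no pair is classified as fusion-suitable."""
    candidates = [
        (a, b) for a, b in combinations(job_pool, 2)
        if classifier.predict(a.features, b.features)  # fusion suitability test
    ]
    if not candidates:
        return None
    # Pick the pair whose fused execution is predicted fastest
    # relative to running the two kernels serially.
    return max(candidates,
               key=lambda pair: predictor.predict(pair[0].features,
                                                  pair[1].features))
```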

Link: https://link.springer.com/article/10.1007/s00607-021-00958-2

Title: “A Two-Sided Matching Model for Data Stream Processing in the Cloud-Fog Continuum” by Narges Mehran, Dragi Kimovski and Radu Prodan was presented virtually at the CCGrid 2021 conference.

We at @alpenadriauni are proud of Narges and her work in the @EU_H2020 @DataCloud2020 project, which received an award at cloudbus.org/ccgrid2021/.

Watch award presentation at: https://www.youtube.com/watch?v=aSnzDpd5Kqc
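The paper's title points to a two-sided matching model; as a generic, hedged illustration (not the authors' actual algorithm or preference functions), a deferred-acceptance round between stream-processing tasks and fog/cloud resources could look like this:

```python
# Generic deferred-acceptance (Gale-Shapley) sketch for matching
# stream-processing tasks to fog/cloud resources. The preference
# lists are illustrative; the paper defines its own utility-based
# preferences for both sides. Complete preference lists over
# equal-sized sides are assumed.

def deferred_acceptance(task_prefs, resource_prefs):
    """task_prefs: {task: [resources in preference order]}
    resource_prefs: {resource: [tasks in preference order]}
    Returns a stable one-to-one matching {resource: task}."""
    free = list(task_prefs)                  # tasks still unmatched
    next_choice = {t: 0 for t in task_prefs}
    match = {}                               # resource -> task
    while free:
        task = free.pop(0)
        resource = task_prefs[task][next_choice[task]]
        next_choice[task] += 1               # propose to next resource later
        if resource not in match:
            match[resource] = task           # resource is free: accept
        else:
            rival = match[resource]
            ranks = resource_prefs[resource]
            if ranks.index(task) < ranks.index(rival):
                match[resource] = task       # resource prefers new task
                free.append(rival)
            else:
                free.append(task)            # proposal rejected
    return match
```

Deferred acceptance guarantees a stable matching: no task and resource would both prefer each other over their assigned partners.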


Title: The ARTICONF Approach to Decentralised Car-sharing

Authors: Nishant Saurabh (UNI-KLU), Carlos Rubia (Agilia), Anandakumar Palanisamy (BY), Spiros Koulouzis (UvA), Mirsat Sefidanoski (UIST), Antorweep Chakravorty (UiS), Zhiming Zhao (UvA), Aleksandar Karadimce (UIST), Radu Prodan (UNI-KLU)

Abstract: Social media applications are essential for next-generation connectivity. Today, social media are centralized platforms with a single proprietary organization controlling the network, posing critical trust and governance issues over the created and propagated content.
The ARTICONF project funded by the European Union’s Horizon 2020 program researches a decentralized social media platform based on a novel set of trustworthy, resilient and globally sustainable tools that address privacy, robustness and autonomy-related promises that proprietary social media platforms have failed to deliver so far. This paper presents the ARTICONF approach to a car-sharing decentralized application (DApp) use case, as a new collaborative peer-to-peer model providing an alternative solution to private car ownership. We describe a prototype implementation of the car-sharing social media DApp and illustrate through real snapshots how the different ARTICONF tools support it in a simulated scenario.

The presentation has been accepted to the main track of the Austrian-Slovenian HPC Meeting (ASHPC’21). The meeting will be held in a hybrid format on 31 May – 2 June 2021 at the Institute of Information Science in Maribor, Slovenia.

Title: Automated Workflows Scheduling via Two-Phase Event-based MILP Heuristic for MRCPSP Problem

Authors: Vladislav Kashansky, Gleb Radchenko, Radu Prodan, Anatoliy Zabrovskiy and Prateek Agrawal

Abstract: In today’s reality, massive amounts of data-intensive tasks are managed by utilizing a large number of heterogeneous computing and storage elements interconnected through high-speed communication networks. However, one issue that still requires research effort is enabling efficient workflow scheduling in such complex environments.
As the scale of the system grows and the workloads become more heterogeneous in their inner structure and arrival patterns, the scheduling problem becomes exponentially harder, requiring problem-specific heuristics. Many techniques have evolved to tackle this problem, including, but not limited to, Heterogeneous Earliest Finish Time (HEFT), Dynamic Scaling Consolidation Scheduling (DSCS), Partitioned Balanced Time Scheduling (PBTS), Deadline Constrained Critical Path (DCCP) and Partition Problem-based Dynamic Provisioning Scheduling (PPDPS). In this talk, we discuss a two-phase heuristic for the makespan-optimized assignment of tasks and computing machines on large-scale computing systems, consisting of a matching phase followed by an event-based MILP method for schedule generation. We evaluated the scalability of the heuristic using the Constraint Integer Programming (SCIP) solver with various configurations based on data sets provided by the MACS framework. Preliminary results show that the model provides near-optimal assignments and schedules for workflows composed of up to 100 tasks with complex task I/O interactions, and demonstrates variable sensitivity with respect to the scale of workflows and the resource limitation policies imposed.
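As a miniature illustration of the two-phase idea, the sketch below uses a greedy earliest-finish-time matching in the first phase and event-style placement in the second; the talk's actual second phase is an event-based MILP solved with SCIP, which a short example cannot reproduce:

```python
# Miniature two-phase scheduling sketch. Phase 1 matches each task
# to the machine with the earliest estimated finish time; phase 2
# places the task at that machine's next free slot. Runtimes and the
# precedence-respecting task order are assumed inputs.

def schedule(tasks, machines, runtime):
    """tasks: iterable of task ids, in precedence-respecting order.
    machines: iterable of machine ids.
    runtime[task][machine]: estimated execution time.
    Returns {task: (machine, start, finish)}."""
    ready = {m: 0.0 for m in machines}   # next free time per machine
    plan = {}
    for task in tasks:
        # Phase 1: matching -- pick the machine minimizing finish time.
        machine = min(machines, key=lambda m: ready[m] + runtime[task][m])
        # Phase 2: event-based placement at the machine's next free slot.
        start = ready[machine]
        finish = start + runtime[task][machine]
        ready[machine] = finish
        plan[task] = (machine, start, finish)
    return plan
```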

Keywords: HPC Schedule Generation, MRCPSP Problem, Workflows Scheduling, Two-Phase Heuristic

Acknowledgement: This work has received funding from the EC-funded project H2020 FETHPC ASPIDE (Agreement #801091)

The ADAPT project started with an online kickoff meeting, coordinated by Prof. Radu Prodan.


Prof. Radu Prodan has been nominated as a Management Committee (MC) member of COST Action CA19135 at COST (European Cooperation in Science & Technology).


Conference: 15th International Conference on Research Challenges in Information Science

Title : DataCloud: Enabling the Big Data Pipelines on the Computing Continuum

Authors: Dumitru Roman, Nikolay Nikolov, Brian Elvesæter, Ahmet Soylu, Radu Prodan, Dragi Kimovski, Andrea Marrella, Francesco Leotta, Dario Benvenuti, Mihhail Matskin, Giannis Ledakis, Anthony Simonet-Boulogne, Fernando Perales, Evgeny Kharlamov, Alexandre Ulisses, Arnor Solberg and Raffaele Ceccarelli


Prof. Radu Prodan is a keynote speaker at Memphis DATA 2021, 25–26 March 2021.

Talk Abstract: We live in a digital world estimated to host around 4 billion Internet users and 10 billion mobile connections, generating 2.5 billion billion bytes of data every day. Managing and extracting value from this sheer amount of raw data requires deep software analysis tools on massive distributed and parallel computing infrastructures aggregating billions of cores and threads. The talk gives an overview of the research activities at the University of Klagenfurt, Austria, on optimising system software support for extreme-scale data processing applications, with a focus on scientific simulations, social media and massively multiplayer online games.

Title: WELFake: Word Embedding over Linguistic Features for Fake News Detection

Authors: Pawan Kumar Verma (Lovely Professional University, India | GLA University, India), Prateek Agrawal (University of Klagenfurt, Austria | Lovely Professional University, India), Ivone Amorin (MOG Technologies | University of Porto, Portugal), Radu Prodan (University of Klagenfurt, Austria)

Abstract: Social media is a popular medium for the dissemination of real-time news all over the world. Easy and quick information proliferation is one of the reasons for its popularity. A large number of users of different age groups, genders and societal beliefs are engaged in social media websites. Despite these favorable aspects, a significant disadvantage comes in the form of fake news, as people usually read and share information without caring about its genuineness. Therefore, it is imperative to research methods for the authentication of news. To address this issue, this paper proposes a two-phase benchmark model named WELFake based on word embedding (WE) over linguistic features for fake news detection using machine learning classification. The first phase pre-processes the dataset and validates the veracity of news content by using linguistic features. The second phase merges the linguistic feature sets with WE and applies voting classification. To validate its approach, this paper also carefully designs a novel WELFake dataset with approximately 72,000 articles, which incorporates different datasets to generate an unbiased classification output. Experimental results show that the WELFake model categorises news as real or fake with 96.73% accuracy, improving the overall accuracy by 1.31% compared to BERT and 4.25% compared to CNN models. Our frequency-based model, which focuses on analysing writing patterns, outperforms predictive related works implemented using the Word2vec WE method by up to 1.73%.
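As a hedged sketch of the second phase, the example below merges a linguistic feature matrix with averaged word-embedding vectors and trains a scikit-learn soft-voting ensemble; the concrete features, embeddings and member classifiers are placeholders rather than the paper's exact configuration:

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Sketch of WELFake's second phase: linguistic features merged with
# word-embedding (WE) vectors, then voting classification. Feature
# extraction is assumed done elsewhere; the paper derives its own
# linguistic feature set and Word2vec-style embeddings.

def train_welfake_style(linguistic_feats, embedding_feats, labels):
    """linguistic_feats: (n_samples, n_ling) array
    embedding_feats: (n_samples, n_dim) array of averaged WE vectors
    labels: 0 = real, 1 = fake."""
    X = np.hstack([linguistic_feats, embedding_feats])  # merge feature sets
    model = VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("svm", SVC(probability=True)),
        ],
        voting="soft",  # average predicted class probabilities
    )
    return model.fit(X, labels)
```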

Acknowledgement: ARTICONF project