Distributed and Parallel Systems

Title: Beyond von Neumann in the Computing Continuum: Architectures, Applications, and Future Directions

Authors: Kimovski, Dragi; Saurabh, Nishant; Jansen, Matthijs; Aral, Atakan; Al-Dulaimy, Auday; Bondi, Andre; Galletta, Antonino; Papadopoulos, Alessandro; Iosup, Alexandru; Prodan, Radu

Abstract: The article discusses emerging non-von Neumann computer architectures and their integration into the computing continuum to support modern distributed applications, including artificial intelligence, big data, and scientific computing. It provides a detailed summary of available and emerging non-von Neumann architectures, ranging from power-efficient single-board accelerators to quantum and neuromorphic computers. Furthermore, it explores their potential benefits for revolutionizing data processing and analysis in various societal, scientific, and industrial fields. The paper provides a detailed analysis of the most widely used classes of distributed applications and discusses the difficulties in their execution over the computing continuum, including communication, interoperability, orchestration, and sustainability issues.

Speakers: Dan Nicolae (University of Chicago, USA), Razvan Bunescu (University of North Carolina at Charlotte, USA), Anna Fensel (Wageningen University & Research, the Netherlands), Radu Prodan (University of Klagenfurt, Austria), Ioan Toma (Onlim GmbH, Austria), Dumitru Roman (SINTEF / University of Oslo, Norway), Pawel Gasiorowski (Sofia University – GATE Institute, Bulgaria, and London Metropolitan University – Cyber Security Research Centre, UK), Jože Rožanec (Qlector, Slovenia), Nikolay Nikolov (SINTEF AS, Norway), Viktor Sowinski-Mydlarz (London Metropolitan University, UK and GATE Institute, Bulgaria), Brian Elvesæter (SINTEF AS, Norway)

Summer school organized by Academia de Studii Economice din București, in collaboration with the GATE Institute at Sofia University St. Kliment Ohridski and the DataCloud, enRichMyData, Graph-Massivizer, UPCAST, and InterTwino projects.

Organizing team: Dan Nicolae, Razvan Bunescu, Dumitru Roman, Sylvia Ilieva, Ahmet Soylu, Raluca C., Iva Krasteva, Irena Pavlova, Cosmin Proșcanu, Miruna Proșcanu, Anca Bogdan, Georgiana Camelia Georgescu (Cretan), Dessislava Petrova-Antonova, Orlin Kouzov, Vasile Alecsandru Strat, Adriana AnaMaria Alexandru (Davidescu), Miruna Mazurencu Marinescu Pele, Daniel Traian Pele, Liviu-Adrian Cotfas, Cristina-Rodica Boboc, Oana Geman, Alina Petrescu-Nita, Ovidiu-Aurel Ghiuta, Codruta Mare.

 

On 14.07.2023, Zahra Najafabadi Samani successfully defended her doctoral thesis, titled “Resource-Aware Time-Critical Application Placement in the Computing Continuum”, under the supervision of Prof. Radu Prodan and Assoc.-Prof. Dr. Klaus Schöffmann at ITEC. Her defense was chaired by Univ.-Prof. Dr. Christian Timmerer and examined by Univ.-Prof. Dr. Thomas Fahringer (Leopold Franzens-Universität Innsbruck, AT) and Assoc.-Prof. Dr. Attila Kertesz (University of Szeged, HU).
During her doctoral study, she contributed to ARTICONF and DataCloud EU H2020 projects.
Zahra will continue as a postdoctoral researcher at the Leopold Franzens-Universität Innsbruck.

The abstract of her dissertation is as follows:

The rapid expansion of time-critical applications with substantial demands on high bandwidth and ultra-low latency poses critical challenges for Cloud data centers. To address time-critical application demands, the computing continuum emerged as a new distributed platform that extends the Cloud toward nearby Edge and Fog resources, substantially decreasing communication latency and network traffic. However, the distributed and heterogeneous nature of the computing continuum, with sporadic availability of devices, may result in service failures and deadline violations, significantly negating its advantages for hosting time-critical applications and lowering users’ satisfaction. Additionally, the dense deployment and intense competition for limited nearby resources pose resource utilization challenges. To tackle these problems, this thesis investigates the problem of resource-aware placement of time-critical applications with constrained deadlines and various demands in the heterogeneous computing continuum, with three main contributions:
 1. A multilayer resource partitioning model for placing time-critical applications to minimize resource wastage while maximizing deadline satisfaction;
 2. An adaptive placement for dynamic computing continuum with sporadic device availability to minimize resource wastage and maximize deadline satisfaction;
 3. A proactive service level agreement-aware placement method, leveraging distributed monitoring to enhance deadline satisfaction and service success.
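
To make the underlying placement problem concrete, the following minimal Python sketch greedily assigns applications to continuum devices, handling the tightest deadline first and placing each application on the feasible device that leaves the least unused capacity. The device data, latency model, and greedy heuristic are illustrative assumptions of this summary, not the algorithms contributed by the thesis.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    latency_ms: float    # network latency from the data source
    service_ms: float    # estimated processing time per request
    free_cores: int

@dataclass
class App:
    name: str
    deadline_ms: float
    cores: int

def place(apps, devices):
    # Greedy heuristic: tightest deadline first; among feasible devices,
    # pick the one that leaves the least unused capacity (a simple proxy
    # for minimizing resource wastage).
    plan = {}
    for app in sorted(apps, key=lambda a: a.deadline_ms):
        feasible = [d for d in devices
                    if d.free_cores >= app.cores
                    and d.latency_ms + d.service_ms <= app.deadline_ms]
        if not feasible:
            plan[app.name] = None  # no placement meets the deadline
            continue
        best = min(feasible, key=lambda d: d.free_cores - app.cores)
        best.free_cores -= app.cores
        plan[app.name] = best.name
    return plan

devices = [Device("edge-1", 5, 40, 2),
           Device("fog-1", 15, 25, 4),
           Device("cloud-1", 60, 10, 32)]
apps = [App("video-analytics", 60, 2), App("batch-report", 500, 8)]
print(place(apps, devices))  # {'video-analytics': 'edge-1', 'batch-report': 'cloud-1'}

The latency-sensitive application lands on the Edge while the bulky batch job goes to the Cloud, which is exactly the trade-off the computing continuum is meant to exploit.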

Authors: Juanjuan Li, Rui Qin, Cristina Olaverri-Monreal, Radu Prodan, Fei-Yue Wang

Journal: IEEE Transactions on Intelligent Vehicles

Abstract: As part of TIV’s DHW on Vehicle 5.0, this letter introduces a novel concept, Logistics 5.0, to address the high complexities of logistics Cyber-Physical-Social Systems (CPSS). Building upon the theory of parallel intelligence and leveraging advanced technologies and methods such as blockchain, scenarios engineering, and Decentralized Autonomous Organizations and Operations (DAOs), Logistics 5.0 promises to accelerate the paradigm shift towards intelligent and sustainable logistics. First, the parallel logistics framework is proposed, and the logistics ecosystem is discussed. Then, human-oriented operating systems (HOOS) are suggested for providing intelligent Logistics 5.0 solutions. Logistics 5.0 serves as a critical catalyst in realizing the “6S” objectives, i.e., Safety, Security, Sustainability, Sensitivity, Service, and Smartness, within the logistics industry.

 

Title: A distributed and energy-efficient KNN for EEG classification with dynamic money-saving policy in heterogeneous clusters

Authors: Juan José Escobar, Francisco Rodríguez, Beatriz Prieto, Dragi Kimovski, Andrés Ortiz, and Miguel Damas

Abstract: Due to the increasing importance of energy consumption in recent years, energy-time efficiency is a highly relevant objective to address in High-Performance Computing (HPC) systems, where cost significantly impacts the tasks executed. Among these tasks, classification problems are considered due to their high computational complexity, which is sometimes aggravated when processing high-dimensional datasets. In addition, implementing efficient applications for high-performance systems is not an easy task, since the hardware must be considered to maximize performance, especially on heterogeneous platforms with multi-core CPUs. Thus, this article proposes an efficient distributed K-Nearest Neighbors (KNN) method for Electroencephalogram (EEG) classification that uses minimum Redundancy Maximum Relevance (mRMR) as a feature selection technique to reduce the dimensionality of the dataset. The approach implements an energy policy that can stop or resume the execution of the program based on the cost per megawatt. Since the procedure is based on the master-worker scheme, the performance of three different workload distributions is also analyzed to identify which one is more suitable according to the experimental conditions. The proposed approach outperforms the classification results obtained by previous works that use the same dataset. It achieves a speedup of 74.53 when running on a multi-node heterogeneous cluster, consuming only 13.38% of the energy consumed by the sequential version. Moreover, the results show that financial costs can be reduced when the energy policy is activated and underline the importance of developing efficient methods, proving that energy-aware computing is necessary for sustainable computing.
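
The money-saving policy can be pictured with a small Python sketch: a master loop that pauses dispatching classification work while the electricity price exceeds a threshold and resumes once it drops. The price feed, threshold, and classification stub below are invented for illustration and do not reproduce the paper’s implementation.

import time

PRICE_LIMIT = 120.0  # EUR/MWh; execution pauses above this price (illustrative)

def current_price(step):
    # Stand-in for a real energy-market price feed: alternates between
    # a cheap (100) and an expensive (140) period every three steps.
    return 100.0 + 40.0 * ((step // 3) % 2)

def classify_chunk(chunk):
    # Stand-in for distributed KNN classification of one mRMR-reduced
    # EEG data chunk, normally dispatched to a worker node.
    return [min(chunk), max(chunk)]

def master(chunks):
    results, step = [], 0
    while chunks:
        if current_price(step) > PRICE_LIMIT:
            time.sleep(0.01)  # pause: wait for a cheaper period
        else:
            results.append(classify_chunk(chunks.pop(0)))  # dispatch work
        step += 1
    return results

print(master([[3, 1, 2], [9, 7, 8], [5, 4, 6], [0, 2, 1]]))

With this toy price trace the master processes three chunks, idles through the expensive period, and finishes the last chunk when the price falls again, which is the stop/resume behavior the energy policy describes.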

 

SWForum.eu: The Way Forward: Workshop on Future Challenges in Software Engineering

https://www.flickr.com/photos/198632876@N07/sets/72177720309399251/

 

During the session, experts delved into the challenges of processing massive amounts of data and explored cutting-edge technologies that can handle such extreme data requirements.

From graph-based solutions to distributed computing frameworks, attendees shared valuable insights into the evolving landscape of data management. The discussion highlighted the need for scalable infrastructure and intelligent algorithms to efficiently process and analyze vast datasets. The future of data management is promising, thanks to the innovative approaches showcased in the session. Stay tuned as we continue to push the boundaries of data processing and drive advancements in the field through the Graph-Massivizer Project. Together, we’re shaping the future of extreme data management!

BDVA – Big Data Value Association

Our Graph-Massivizer Project is thrilled to be part of the #DataWeek2023 event! Join us for a thought-provoking session on “Are current infrastructures suitable for extreme data processing? Technologies for data management.”

Don’t miss this opportunity to explore cutting-edge solutions and discuss the future of data processing together with Nuria De Lama, Dumitru Roman, Roberta Turra, Radu Prodan, Lilit Axner, Jan Martinovič, Bill Patrowicz, and Irena Pavlova!

Tuesday 13th

⏰ 15:30 – 17:00


 

Title: Container-based Data Pipelines on the Computing Continuum for Remote Patient Monitoring

Authors: Nikolay Nikolov, Arnor Solberg, Radu Prodan, Ahmet Soylu, Mihhail Matskin, Dumitru Roman

Journal: The Computer Journal, Special Issue on Computing in Telemedicine

Abstract: Diagnosis, treatment, and follow-up care of patients are increasingly happening through telemedicine, especially in remote areas where direct interaction is hindered. Over the past three years, following the COVID-19 pandemic, the utility of remote patient care has been further field-tested. Tackling the technical challenges of a growing demand for telemedicine requires a convergence of several fields: 1) software solutions for reliable, secure, and reusable data processing, 2) management of hardware resources (at scale) on the Cloud/Fog/Edge Computing Continuum, and 3) automation of DevOps processes for deployment of digital healthcare solutions with patients. In this context, the emerging concept of big data pipelines provides relevant solutions and is one of the main enablers. In what follows, we present a data pipeline for remote patient monitoring and show a real-world example of how data pipelines help address the stringent requirements of telemedicine.
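
As a rough illustration of the pipeline idea, the Python sketch below chains three steps (ingest, enrich, alert) over simulated vital-sign data. In the system described in the paper, each step would run in its own container and be deployed across the computing continuum; here the steps are plain functions chained in-process, and the step names, data, and alert threshold are hypothetical.

# Toy model of a three-step remote-patient-monitoring data pipeline.

def ingest(readings):
    # Step 1: collect raw sensor readings (e.g., from a wearable),
    # dropping transmission dropouts.
    return [r for r in readings if r is not None]

def enrich(readings):
    # Step 2: attach simple derived features for downstream analysis.
    mean = sum(readings) / len(readings)
    return {"readings": readings, "mean_hr": mean}

def alert(record, threshold=100.0):
    # Step 3: flag records that a clinician should review.
    record["alert"] = record["mean_hr"] > threshold
    return record

PIPELINE = [ingest, enrich, alert]  # each step would be one container

def run(data):
    for step in PIPELINE:
        data = step(data)
    return data

print(run([72, None, 75, 140, 130]))  # heart-rate samples, one dropout

Keeping each step as an isolated, composable unit is what makes the containerized version reusable: the same ingest or alert step can be redeployed on Edge devices near patients or on Cloud resources without changing the pipeline definition.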