Authors: Seyedehhaleh Seyeddizaji, Joze Martin Rozanec, Reza Farahani, Dumitru Roman and Radu Prodan

Venue: The 2nd Workshop on Serverless, Extreme-Scale, and Sustainable Graph Processing Systems Co-located with ICPE 2024

Abstract: While graph sampling is key to scalable processing, little research has thoroughly compared and explained how sampling preserves features such as degree, clustering, and distances depending on graph size and structural properties. This research evaluates twelve widely adopted sampling algorithms across synthetic and real datasets to assess their sample quality under three metrics: degree, clustering coefficient (CC), and hop plots. We find the random jump algorithm to be an appropriate choice for the degree and hop-plot metrics and the random node algorithm for the CC metric. In addition, we interpret the algorithms’ sample quality by conducting a correlation analysis with diverse graph properties. We discover that eigenvector centrality and path-related features are essential for estimating these algorithms’ degree quality, that the number of nodes (or the size of the largest connected component) is informative for CC quality estimation, and that degree entropy, edge betweenness, and path-related features are meaningful for the hop-plot metric. Furthermore, with increasing graph size, most sampling algorithms produce better-quality samples under the degree and hop-plot metrics.
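As a rough illustration of how such a sampler works, the following Python sketch (using NetworkX) implements a basic random-jump sampler. It is not the paper's code, and the 0.15 jump probability is a common convention in the literature rather than a value taken from the study.

```python
# Illustrative random-jump sampling sketch (not the authors' code).
import random
import networkx as nx

def random_jump_sample(G, target_size, jump_prob=0.15, seed=42):
    """Walk the graph, teleporting to a uniformly random node with
    probability jump_prob, until target_size nodes are collected."""
    rng = random.Random(seed)
    nodes = list(G.nodes)
    current = rng.choice(nodes)
    sampled = {current}
    while len(sampled) < target_size:
        neighbors = list(G.neighbors(current))
        if rng.random() < jump_prob or not neighbors:
            current = rng.choice(nodes)      # jump anywhere (escapes sinks)
        else:
            current = rng.choice(neighbors)  # continue the walk
        sampled.add(current)
    return G.subgraph(sampled).copy()

# Example: sample 10% of a synthetic graph and compare mean degree.
G = nx.barabasi_albert_graph(1000, 3)
S = random_jump_sample(G, target_size=100)
print(sum(d for _, d in G.degree()) / G.number_of_nodes(),
      sum(d for _, d in S.degree()) / S.number_of_nodes())
```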

Authors: Reza Farahani, Frank Loh, Dumitru Roman, and Radu Prodan
Venue: The 2nd Workshop on Serverless, Extreme-Scale, and Sustainable Graph Processing Systems Co-located with ICPE 2024
Abstract: The growing desire among application providers for a pay-per-use cost model, combined with the need for a seamlessly integrated platform to manage the complex workflows of their applications, has spurred the emergence of a promising computing paradigm known as serverless computing. Although serverless computing was initially considered for cloud environments, it has recently been extended to other layers of the computing continuum, i.e., edge and fog. This extension emphasizes that the proximity of computational resources to data sources can further reduce costs and improve performance and energy efficiency. However, orchestrating complex application workflows, composed of sets of serverless functions, across the computing continuum introduces new challenges. This paper investigates the opportunities and challenges that serverless computing introduces for workflow management systems (WMSs) on the computing continuum. In addition, the paper provides a taxonomy of state-of-the-art WMSs and reviews their capabilities.
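For readers unfamiliar with the setting, the hypothetical Python sketch below models such a workflow as a small DAG of serverless functions, each annotated with a preferred continuum layer. The function names and the 'layer' field are invented for illustration and do not come from any surveyed WMS.

```python
# Hypothetical serverless workflow: a DAG of functions with a preferred
# continuum layer per step. All names and layers are illustrative.
workflow = {
    "ingest":    {"layer": "edge",  "depends_on": []},
    "filter":    {"layer": "fog",   "depends_on": ["ingest"]},
    "aggregate": {"layer": "fog",   "depends_on": ["filter"]},
    "train":     {"layer": "cloud", "depends_on": ["aggregate"]},
}

def topological_order(wf):
    """Resolve a run order that respects the dependencies."""
    done, order = set(), []
    while len(order) < len(wf):
        for name, spec in wf.items():
            if name not in done and all(d in done for d in spec["depends_on"]):
                order.append(name)
                done.add(name)
    return order

for step in topological_order(workflow):
    print(f"invoke {step} on {workflow[step]['layer']}")
```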

The enduring popularity of the Pokémon franchise can be explained by a mix of nostalgia, constant innovation, and its appeal to a wide and diverse fan base, as Felix Schniz, game studies scholar and senior scientist at the University of Klagenfurt, explains: “Pokémon is a pioneer of this dynamic. The game perfectly shows how a franchise can stay fresh and relevant through the ongoing reinterpretation of its genre dimensions.”

Read the whole interview (in German) here: Dringel, Severin: “Tag des Pokémon – Warum Pokémon 28 Jahre später immer noch ein Renner ist” [“Pokémon Day – why Pokémon is still a hit 28 years later”]. Kleine Zeitung International, 27.02.2024. https://www.kleinezeitung.at/international/18047876/warum-pokemon-auch-nach-zwanzig-jahren-immer-noch-ein-renner-ist

“Fictional Practices of Spirituality” provides critical insight into the implementation of belief, mysticism, religion, and spirituality in (digital) worlds of fiction. This first volume focuses on interactive, virtual worlds, be it the digital realms of video games and VR applications or the imaginary spaces of live action role-playing and soul-searching practices. It features analyses of spirituality as a gameplay facilitator, sacred spaces and architecture in video game geography, religion in video games, and spiritual acts and their dramaturgic function in video games, tabletop games, or larp, among other topics. The contributors offer a first-ever comprehensive overview of play-rites as spiritual incentives and of playful spirituality in various medial incarnations.

The anthology was edited by Felix Schniz and Leonardo Marcato. It is now available as a printed copy or for download via Open Access, published by transcript in 2023.

Book: Fictional Practices of Spirituality I

On 15.03.2024, Narges Mehran successfully defended her doctoral thesis, “Scheduling of Dataflow Processing Applications on the Computing Continuum,” supervised by Prof. Radu Prodan and Univ.-Prof. Dipl.-Ing. Dr. Hermann Hellwagner and mentored by Postdoc-Ass. Priv.-Doz. Dr. Dragi Kimovski at ITEC.

Her defense was chaired by Assoc.-Prof. Dr. Klaus Schoeffmann and examined by Prof. Dr. Mihhail Matskin (KTH Royal Institute of Technology, SE) and Assoc.-Prof. Dr. Andrea Marrella (Sapienza University of Rome, IT).

During her doctoral studies, she was a university assistant and contributed to the DataCloud EU H2020 project.

The abstract of her dissertation is as follows:

Latency-sensitive and bandwidth-intensive dataflow processing applications, such as the Internet of Things, big data, and machine learning, are dominant traffic generators on the Internet. Such workflows require nearly real-time processing of a continuous sequence of dataflows. To move computation toward the network’s edge, improve communication latency, and reduce network congestion, Fog and Edge computing emerged as promising solutions to extend and complement Cloud services. Unfortunately, the heterogeneity of Cloud, Fog, and Edge computing (the so-called Computing Continuum) raises important challenges related to deploying and executing dataflow processing applications. Therefore, this thesis investigates the scheduling of dataflow processing applications. First, it presents Multi-objective dAtaflow processing aPplication scheduling in cloud, fOg, and edge (MAPO), a method that optimizes completion time, energy consumption, and economic cost, achieving up to a 28% improvement in efficiency. Second, it introduces Cloud, fOg, and edge to Dataflow processing application mAtching (CODA), a double-sided stable matching model for deploying distributed applications on heterogeneous devices, whose acknowledged limitation is that it does not consider microservices’ earliest start and finish times. Finally, CODA is extended with C3-Match, a capacity-aware matching-based algorithm for scheduling asynchronous dataflow processing applications; in experiments with various processing loads, C3-Match achieves lower data processing, queuing, and application completion times at the cost of increased data transmission time.
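As a toy illustration of the trade-off MAPO navigates, the Python sketch below scores candidate devices by a weighted sum of normalized completion time, energy, and cost. All device figures and weights are invented, and MAPO itself performs a genuine multi-objective (Pareto-based) search rather than this simple scalarization.

```python
# Toy trade-off between the three objectives MAPO optimizes (time,
# energy, cost) when placing one microservice. All values are invented.
devices = {
    "edge-1":  {"time_s": 12.0, "energy_j": 40.0,  "cost_usd": 0.002},
    "fog-1":   {"time_s": 6.0,  "energy_j": 70.0,  "cost_usd": 0.004},
    "cloud-1": {"time_s": 3.0,  "energy_j": 150.0, "cost_usd": 0.010},
}

def score(metrics, weights=(0.4, 0.3, 0.3)):
    """Lower is better: normalize each objective, then combine."""
    maxima = {k: max(d[k] for d in devices.values()) for k in metrics}
    wt, we, wc = weights
    return (wt * metrics["time_s"] / maxima["time_s"]
            + we * metrics["energy_j"] / maxima["energy_j"]
            + wc * metrics["cost_usd"] / maxima["cost_usd"])

best = min(devices, key=lambda name: score(devices[name]))
print("place microservice on:", best)
```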

Title: Cloud Storage Tier Optimization through Storage Object Classification

Authors: Akif Quddus Khan, Mihhail Matskin, Radu Prodan, Christoph Bussler, Dumitru Roman, Ahmet Soylu

Abstract: Cloud storage adoption has increased over the years given the high demand for fast processing, low access latency, and the ever-increasing amount of data generated by, e.g., Internet of Things (IoT) applications. In order to meet users’ demands and provide a cost-effective solution, cloud service providers (CSPs) offer tiered storage; however, keeping all data in a single tier is not cost-effective. In this respect, cloud storage tier optimization involves aligning data storage needs with the most suitable and cost-effective storage tier, thus reducing costs while ensuring data availability and meeting performance requirements. Ideally, this process considers the trade-off between performance and cost, as different storage tiers offer different levels of performance and durability. It also encompasses data lifecycle management, where data is automatically moved between tiers based on access patterns, which in turn impacts the storage cost. To this end, this article explores two novel classification approaches, rule-based and game theory-based, to optimize cloud storage cost by reassigning data between different storage tiers. Four distinct storage tiers are considered: premium, hot, cold, and archive. The viability and potential of the proposed approaches are demonstrated by comparing cost savings and analyzing the computational cost using both fully synthetic and semi-synthetic datasets with static and dynamic access patterns. The results indicate that the proposed approaches have the potential to significantly reduce cloud storage cost while remaining computationally feasible for practical applications. Both approaches are lightweight and industry- and platform-independent.
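A minimal sketch of the rule-based idea, with invented thresholds and per-GB prices (the article derives its rules and cost model differently), might classify each object by its recent access count:

```python
# Hedged sketch: reassign a storage object to one of the four tiers
# from its recent access count. Thresholds and prices are invented.
TIER_PRICE_GB_MONTH = {"premium": 0.15, "hot": 0.023, "cold": 0.01, "archive": 0.002}

def classify(accesses_last_30d):
    if accesses_last_30d >= 100:
        return "premium"
    if accesses_last_30d >= 10:
        return "hot"
    if accesses_last_30d >= 1:
        return "cold"
    return "archive"

objects = [
    {"name": "logs.tar", "size_gb": 120, "accesses_last_30d": 0},
    {"name": "site-assets", "size_gb": 2, "accesses_last_30d": 450},
]
for obj in objects:
    tier = classify(obj["accesses_last_30d"])
    monthly = obj["size_gb"] * TIER_PRICE_GB_MONTH[tier]
    print(f'{obj["name"]}: {tier} (~${monthly:.2f}/month storage)')
```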

Venue: Computing, https://link.springer.com/journal/607

The 15th ACM Multimedia Systems Conference (Technical Demos)

15-18 April, 2024 in Bari, Italy

Authors: Samuel Radler* (AAU, Austria), Leon Prüller* (AAU, Austria), Emanuele Artioli (AAU, Austria), Farzad Tashtarian (AAU, Austria), and Christian Timmerer (AAU, Austria)

Abstract: As streaming services become more commonplace, analyzing their behavior effectively under different network conditions is crucial. This is normally quite expensive, requiring multiple players with different bandwidth configurations to be emulated by a powerful local machine or a cloud environment. Furthermore, emulating realistic network behavior or guaranteeing adherence to a real network trace is challenging. This paper presents PyStream, a simple yet powerful way to emulate a video streaming network, allowing multiple simultaneous tests to run locally. By leveraging a network of Docker containers, many of the implementation challenges are abstracted away, keeping the resulting system easily manageable and upgradeable. We demonstrate how PyStream not only reduces the requirements for testing a video streaming system but also improves the accuracy of the emulations with respect to the current state of the art. On average, PyStream reduces the error between the original network trace and the bandwidth emulated by video players by a factor of 2-3 compared to Wondershaper, a common network traffic shaper in many video streaming evaluation environments. Moreover, PyStream decreases the cost of running experiments compared to existing cloud-based video streaming evaluation environments such as CAdViSE.
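To give a flavor of trace-driven shaping (this is not PyStream's actual implementation), the sketch below replays a per-second bandwidth trace onto a container's network interface with the standard Linux tc token-bucket filter. The interface name, trace values, and tbf parameters are assumptions, and the loop must run with root privileges inside the container.

```python
# Illustrative trace-driven shaping loop (not PyStream's code): every
# second, apply the next bandwidth value from a trace via Linux tc.
import subprocess
import time

trace_kbps = [5000, 3500, 1200, 4800]  # made-up per-second trace

for bw in trace_kbps:
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", "eth0", "root", "tbf",
         "rate", f"{bw}kbit", "burst", "32kbit", "latency", "400ms"],
        check=True,
    )
    time.sleep(1)  # hold this rate for the trace's one-second step
```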

The 15th ACM Multimedia Systems Conference (Open-source Software and Datasets)

15-18 April, 2024 in Bari, Italy

Authors: Farzad Tashtarian∗ (AAU, Austria), Daniele Lorenzi∗ (AAU, Austria), Hadi Amirpour (AAU, Austria), Samira Afzal (AAU, Austria), and Christian Timmerer (AAU, Austria)

Abstract: HTTP Adaptive Streaming (HAS) has emerged as the predominant solution for delivering video content on the Internet. The urgency of the climate crisis has accentuated the demand for investigations into the environmental impact of HAS techniques. In HAS, clients rely on adaptive bitrate (ABR) algorithms to drive the quality selection for video segments. These algorithms typically aim to maximize video quality under favorable network conditions, disregarding the impact of energy consumption. Further research is still needed to thoroughly investigate the effects of energy consumption, including the impact of bitrate and other video parameters such as resolution and codec. In this paper, we propose COCONUT, a COntent COnsumption eNergy measUrement daTaset for adaptive video streaming, collected through a digital multimeter on various types of client devices, such as laptops and smartphones, streaming MPEG-DASH segments.
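A segment's energy can be recovered from such multimeter captures by integrating power over time (E ≈ sum of P × Δt). The sketch below shows this on invented samples; the sampling rate and data layout are assumptions, not COCONUT's actual format.

```python
# Sketch: derive per-segment energy from power samples by integration.
# Sampling rate, layout, and values are invented for illustration.
samples = [  # (timestamp_s, power_w) pairs from a hypothetical capture
    (0.0, 2.1), (0.1, 2.4), (0.2, 3.0), (0.3, 2.2),
]

energy_j = 0.0
for (t0, p0), (t1, _) in zip(samples, samples[1:]):
    energy_j += p0 * (t1 - t0)  # left Riemann sum over each interval
print(f"segment energy: {energy_j:.3f} J")
```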

Radu Prodan has been invited to serve as General Chair at ICONIC 2024, April 26-27, 2024, at Lovely Professional University, Punjab, India.

The conference will provide a platform for scientists, researchers, academicians, industrialists, and students to assimilate knowledge and get the opportunity to discuss and share insights through deep-dive research findings on recent disruptions and developments in computing. The technical sessions will largely cover Network Technologies, Artificial Intelligence and Ethics, Advances in Computing, Futuristic Trends in Data Science, Security and Privacy, and Data Mining and Information Retrieval.

Objectives

  • To provide a platform to facilitate the exchange of knowledge, ideas, and innovations among scientists, researchers, academicians, industrialists, and students.
  • To deliberate and disseminate the recent advancements and challenges in the computing sciences.
  • To enable the delegates to establish research or business relations and find international linkage for future collaborations.

Cluster Computing

DFARM: A deadline-aware fault-tolerant scheduler for cloud computing

Authors: Ahmad Awan, Muhammad Aleem, Altaf Hussain, Radu Prodan

Abstract:

Cloud computing has become popular among small businesses due to its cost-effectiveness and the ability to acquire necessary on-demand services, including software, hardware, and network, anytime around the globe. Efficient job scheduling in the Cloud is essential to optimize operational costs in data centers. Therefore, scheduling should assign tasks to Virtual Machines (VMs) in a Cloud environment in a manner that speeds up execution, maximizes resource utilization, and meets users’ SLAs and other constraints such as deadlines. For this purpose, tasks can be prioritized based on their deadlines and lengths, and resources can be provisioned and released as needed. Moreover, to cope with unexpected execution situations or hardware failures, a fault-tolerance mechanism can be employed based on hybrid replication and the re-submission method. Most existing techniques tend to improve performance. However, they fall short in certain respects: they prioritize tasks based on a single value (usually the deadline), employ only a single fault-tolerance mechanism, or release resources immediately, which causes additional overhead. This research work proposes a new scheduler, the Deadline and Fault-aware task Adjusting and Resource Managing (DFARM) scheduler, which dynamically acquires resources and schedules deadline-constrained tasks by considering both their lengths and deadlines while providing fault tolerance through the hybrid replication-resubmission method. Besides acquiring resources, it also releases resources based on their boot time to lessen costs due to reboots. The performance of the DFARM scheduler is compared to that of other scheduling algorithms, such as Random Selection, Round Robin, Minimum Completion Time, RALBA, and OG-RADL. With comparable execution performance, the proposed DFARM scheduler reduces task-rejection rates by 2.34 to 9.53 times compared to state-of-the-art schedulers on two benchmark datasets.
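As a toy illustration of the core idea (the full DFARM scheduler also handles fault tolerance and VM boot/release costs, and all numbers here are invented), the sketch below orders tasks by deadline, places each on the VM with the earliest finish time, and rejects tasks that would miss their deadlines:

```python
# Toy deadline-aware scheduling in the spirit of DFARM; all values invented.
tasks = [  # (name, length in MI, deadline in s)
    ("t1", 4000, 10.0), ("t2", 1000, 3.0), ("t3", 8000, 12.0),
]
vms = {"vm1": {"mips": 1000, "free_at": 0.0},
       "vm2": {"mips": 2000, "free_at": 0.0}}

for name, length, deadline in sorted(tasks, key=lambda t: t[2]):
    # Earliest possible finish time of this task on each VM.
    finish = {v: s["free_at"] + length / s["mips"] for v, s in vms.items()}
    best = min(finish, key=finish.get)
    if finish[best] <= deadline:
        vms[best]["free_at"] = finish[best]
        print(f"{name} -> {best}, finishes at {finish[best]:.1f}s")
    else:
        print(f"{name} rejected (would miss deadline {deadline}s)")
```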