Authors: Reza Farahani, Frank Loh, Dumitru Roman, and Radu Prodan
Venue: The 2nd Workshop on Serverless, Extreme-Scale, and Sustainable Graph Processing Systems Co-located with ICPE 2024
Abstract: The growing desire among application providers for a cost model based on pay-per-use, combined with the need for a seamlessly integrated platform to manage the complex workflows of their applications, has spurred the emergence of a promising computing paradigm known as serverless computing. Although serverless computing was initially considered for cloud environments, it has recently been extended to other layers of the computing continuum, i.e., edge and fog. This extension emphasizes that the proximity of computational resources to data sources can further reduce costs and improve performance and energy efficiency. However, orchestrating the computing continuum in complex application workflows, including a set of serverless functions, introduces new challenges. This paper investigates the opportunities and challenges introduced by serverless computing for workflow management systems (WMS) on the computing continuum. In addition, the paper provides a taxonomy of state-of-the-art WMSs and reviews their capabilities.
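
To illustrate the kind of orchestration the paper discusses, here is a minimal sketch of a workflow of serverless functions annotated with a preferred continuum layer. The function names, layers, and the dispatch() helper are illustrative assumptions, not an interface from the paper.

```python
# Minimal sketch (hypothetical API): a serverless workflow as a list of
# functions annotated with a preferred continuum layer. A real WMS would
# invoke each function on the chosen layer (e.g., via an OpenWhisk/Knative
# endpoint); here we only report the placement decision.
from dataclasses import dataclass, field

@dataclass
class ServerlessTask:
    name: str
    layer: str                      # "edge", "fog", or "cloud"
    depends_on: list = field(default_factory=list)

def dispatch(task: ServerlessTask) -> str:
    return f"run {task.name} on {task.layer}"

workflow = [
    ServerlessTask("ingest", "edge"),
    ServerlessTask("preprocess", "fog", depends_on=["ingest"]),
    ServerlessTask("analytics", "cloud", depends_on=["preprocess"]),
]

for task in workflow:               # topological order assumed for brevity
    print(dispatch(task))
```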

“Fictional Practices of Spirituality” provides critical insight into the implementation of belief, mysticism, religion, and spirituality in (digital) worlds of fiction. This first volume focuses on interactive, virtual worlds – be they the digital realms of video games and VR applications or the imaginary spaces of live action role-playing and soul-searching practices. It features analyses of spirituality as a gameplay facilitator, sacred spaces and architecture in video game geography, religion in video games, and spiritual acts and their dramaturgic function in video games, tabletop, or larp, among other topics. The contributors offer a first-ever comprehensive overview of play-rites as spiritual incentives and playful spirituality in various medial incarnations.

The anthology was edited by Felix Schniz and Leonardo Marcato. It is now available in print or for download via Open Access. Published by transcript, 2023.

book: Fictional Practices of Spirituality I

Cluster Computing

DFARM: A deadline-aware fault-tolerant scheduler for cloud computing

Authors: Ahmad Awan, Muhammad Aleem, Altaf Hussain, Radu Prodan

Abstract:

Cloud computing has become popular among small businesses due to its cost-effectiveness and the ability to acquire necessary on-demand services, including software, hardware, and network resources, anytime around the globe. Efficient job scheduling in the Cloud is essential to optimize operational costs in data centers. Therefore, scheduling should assign tasks to Virtual Machines (VMs) in a Cloud environment in a manner that speeds up execution, maximizes resource utilization, and meets users’ SLAs and other constraints such as deadlines. For this purpose, tasks can be prioritized based on their deadlines and lengths, and resources can be provisioned and released as needed. Moreover, to cope with unexpected execution situations or hardware failures, a fault-tolerance mechanism can be employed based on hybrid replication and re-submission. Most existing techniques aim to improve performance; however, they fall short in certain respects: they prioritize tasks based on a single value (usually the deadline), employ only a single fault-tolerance mechanism, or release resources immediately, which causes additional overhead. This work proposes the Deadline and fault-aware task Adjusting and Resource Managing (DFARM) scheduler, which dynamically acquires resources and schedules deadline-constrained tasks by considering both their lengths and deadlines, while providing fault tolerance through the hybrid replication-resubmission method. Besides acquiring resources, it also releases resources based on their boot time to reduce costs due to reboots. The performance of the DFARM scheduler is compared to other scheduling algorithms, such as Random Selection, Round Robin, Minimum Completion Time, RALBA, and OG-RADL. With comparable execution performance, the proposed DFARM scheduler reduces task-rejection rates by 2.34–9.53 times compared to state-of-the-art schedulers on two benchmark datasets.
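
As a rough illustration of the scheduling idea, the following sketch ranks tasks by both deadline slack and length and falls back to re-submission on failure. The scoring function, units, and the execute() callback are assumptions for illustration, not the published DFARM algorithm.

```python
# Minimal sketch (assumed scoring and units, not DFARM itself): rank tasks by
# urgency derived from deadline slack and task length, then re-submit a task a
# bounded number of times if its primary execution fails.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    length_mi: float       # task length in million instructions (assumed unit)
    deadline_s: float      # absolute deadline in seconds

def urgency(task: Task, now: float, vm_mips: float) -> float:
    # Smaller slack (time left minus estimated runtime) means higher urgency.
    est_runtime = task.length_mi / vm_mips
    return (task.deadline_s - now) - est_runtime

def schedule(tasks, now=0.0, vm_mips=1000.0):
    # Most urgent tasks first; ties broken in favor of longer tasks.
    return sorted(tasks, key=lambda t: (urgency(t, now, vm_mips), -t.length_mi))

def run_with_fault_tolerance(task, execute, max_resubmissions=2):
    # Hybrid idea sketched as: try the primary execution, then re-submit on failure.
    for attempt in range(1 + max_resubmissions):
        if execute(task, attempt):
            return True
    return False   # task rejected: deadline can no longer be met

tasks = [Task("t1", 5000, 20.0), Task("t2", 2000, 10.0)]
print([t.name for t in schedule(tasks)])   # -> ['t2', 't1']
```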

Journal of Grid Computing

Authors: Zeinab Bakhshi, Guillermo Rodriguez-Navas, Hans Hansson, Radu Prodan

Abstract:

This paper analyzes the timing performance of a persistent storage method for distributed container-based architectures in industrial control applications. The method focuses on ensuring data availability and consistency while accommodating faults. The analysis considers four aspects: placement strategy, design options, data size, and evaluation under faulty conditions. Experimental results considering the timing constraints of industrial applications indicate that the storage solution can meet critical deadlines, particularly under specific failure patterns. Moreover, the method is applicable for evaluating timing constraints in other container-based critical applications that require persistent storage. Further comparison results reveal that, while the method may underperform current centralized solutions under fault-free conditions, it outperforms them in failure scenarios.
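
For intuition, a minimal sketch of the kind of timing check such an evaluation performs: comparing measured storage response times against an industrial deadline, fault-free and under injected failures. The deadline value and latency samples below are purely illustrative assumptions.

```python
# Minimal sketch (hypothetical deadline and measurements): compute the fraction
# of storage operations that miss a control-cycle deadline in two scenarios.
DEADLINE_MS = 50.0   # assumed cycle deadline for the control application

def deadline_miss_ratio(latencies_ms, deadline_ms=DEADLINE_MS):
    misses = sum(1 for lat in latencies_ms if lat > deadline_ms)
    return misses / len(latencies_ms)

fault_free = [12.1, 14.8, 13.5, 15.2, 12.9]       # illustrative measurements
node_failure = [18.4, 47.9, 52.3, 21.0, 19.7]     # illustrative measurements

for label, sample in [("fault-free", fault_free), ("node failure", node_failure)]:
    print(f"{label}: {deadline_miss_ratio(sample):.0%} deadline misses")
```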

ACM Mile High Video 2024 (mhv), Denver, Colorado, February 11-14, 2024

Authors: Vignesh V Menon, Prajit T Rajendran, Reza Farahani, Klaus Schoffmann, Christian Timmerer

Abstract: The rise in video streaming applications has increased the demand for video quality assessment (VQA). In 2016, Netflix introduced Video Multi-Method Assessment Fusion (VMAF), a full-reference VQA metric that strongly correlates with perceptual quality, but its computation is time-intensive. This paper proposes a Discrete Cosine Transform (DCT)-energy-based VQA with texture information fusion (VQ-TIF) model for video streaming applications that determines the visual quality of the reconstructed video compared to the original video. VQ-TIF extracts Structural Similarity (SSIM) and spatiotemporal features of the frames from the original and reconstructed videos and fuses them using a long short-term memory (LSTM)-based model to estimate the visual quality. Experimental results show that VQ-TIF estimates the visual quality with a Pearson Correlation Coefficient (PCC) of 0.96 and a Mean Absolute Error (MAE) of 2.71, on average, compared to the ground-truth VMAF scores. Additionally, VQ-TIF estimates the visual quality 9.14 times faster than the state-of-the-art VMAF implementation, along with an 89.44% reduction in energy consumption, assuming an Ultra HD (2160p) display resolution.
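
To make the fusion step concrete, here is a minimal sketch of an LSTM that consumes per-frame features (e.g., SSIM plus DCT-energy texture terms) and regresses a single quality score per clip. The feature dimension, layer sizes, and shapes are assumptions, not the published VQ-TIF model.

```python
# Minimal sketch (assumed shapes and layer sizes): fuse per-frame features with
# an LSTM and regress a VMAF-like quality score for the whole clip.
import torch
import torch.nn as nn

class QualityFusionLSTM(nn.Module):
    def __init__(self, feat_dim=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, features):           # features: (batch, frames, feat_dim)
        _, (h_n, _) = self.lstm(features)  # last hidden state summarizes the clip
        return self.head(h_n[-1]).squeeze(-1)

model = QualityFusionLSTM()
clip_features = torch.randn(2, 120, 8)     # 2 clips, 120 frames, 8 features each
predicted_quality = model(clip_features)   # one score per clip
print(predicted_quality.shape)             # torch.Size([2])
```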

ACM Mile High Video 2024 (mhv), Denver, Colorado, February 11-14, 2024

Authors: Daniele Lorenzi (Alpen-Adria-Universität Klagenfurt, Austria), Minh Nguyen (Alpen-Adria-Universität Klagenfurt, Austria), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: HTTP Adaptive Streaming (HAS) is the de-facto solution for delivering video content over the Internet. The climate crisis has highlighted the environmental impact of information and communication technologies (ICT) solutions and the need for green solutions to reduce ICT’s carbon footprint. As video streaming dominates Internet traffic, research in this direction is vital now more than ever. HAS relies on Adaptive BitRate (ABR) algorithms, which dynamically choose suitable video representations to accommodate device characteristics and network conditions. ABR algorithms typically prioritize video quality, ignoring the energy impact of their decisions. Consequently, they often select the video representation with the highest bitrate under good network conditions, thereby increasing energy consumption. This is problematic, especially for energy-limited devices, because it affects the device’s battery life and the user experience. To address the aforementioned issues, we propose E-WISH, a novel energy-aware ABR algorithm, which extends the already-existing WISH algorithm to consider energy consumption while selecting the quality for the next video segment. According to the experimental findings, E-WISH shows the ability to improve Quality of Experience (QoE) by up to 52% according to the ITU-T P.1203 model (mode 0) while simultaneously reducing energy consumption by up to 12% with respect to state-of-the-art approaches.

Keywords: HTTP adaptive streaming, Energy, Adaptive Bitrate (ABR), DASH
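
As a simple illustration of energy-aware bitrate selection, the sketch below trades off expected quality against estimated per-segment energy under a throughput constraint. The weights, the representation ladder, and the energy values are hypothetical and do not reproduce the E-WISH formulation.

```python
# Minimal sketch (hypothetical weights and measurements): pick the next
# segment's representation by rewarding quality and penalizing energy, subject
# to the current throughput estimate.
def select_representation(representations, throughput_kbps,
                          w_quality=1.0, w_energy=0.5):
    """representations: list of dicts with 'bitrate_kbps', 'quality', 'energy_j'."""
    feasible = [r for r in representations if r["bitrate_kbps"] <= throughput_kbps]
    candidates = feasible or [min(representations, key=lambda r: r["bitrate_kbps"])]
    return max(candidates,
               key=lambda r: w_quality * r["quality"] - w_energy * r["energy_j"])

ladder = [
    {"bitrate_kbps": 1000, "quality": 60, "energy_j": 4.0},
    {"bitrate_kbps": 3000, "quality": 80, "energy_j": 7.5},
    {"bitrate_kbps": 6000, "quality": 90, "energy_j": 12.0},
]
print(select_representation(ladder, throughput_kbps=6500))
```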

 

IEEE Transactions on Network and Service Management

Authors: Reza Farahani, Ekrem Cetinkaya, Christian Timmerer, Mohammad Shojafar, Mohammad Ghanbari, and Hermann Hellwagner

Abstract: Recent years have witnessed video streaming demands evolve into one of the most popular Internet applications. With the ever-increasing personalized demands for high-definition and low-latency video streaming services, network-assisted video streaming schemes employing modern networking paradigms have become a promising complementary solution in the HTTP Adaptive Streaming (HAS) context. The emergence of such techniques addresses long-standing challenges of enhancing users’ Quality of Experience (QoE), end-to-end (E2E) latency, as well as network utilization. However, designing a cost-effective, scalable, and flexible network-assisted video streaming architecture that supports the aforementioned requirements for live streaming services is still an open challenge. This article leverages novel networking paradigms, i.e., edge computing and Network Function Virtualization (NFV), and promising video solutions, i.e., HAS, Video Super-Resolution (SR), and Distributed Video Transcoding (TR), to introduce A Latency- and cost-aware hybrId P2P-CDN framework for liVe video strEaming (ALIVE). We first introduce the ALIVE multi-layer architecture and design an action tree that considers all feasible resources (i.e., storage, computation, and bandwidth) provided by peers, edge, and CDN servers for serving peer requests with acceptable latency and quality. We then formulate the problem as a Mixed Integer Linear Programming (MILP) optimization model executed at the edge of the network. To alleviate the optimization model’s high time complexity, we propose a lightweight heuristic, namely, the Greedy-Based Algorithm (GBA). Finally, we (i) design and instantiate a large-scale cloud-based testbed including 350 HAS players, (ii) deploy ALIVE on it, and (iii) conduct a series of experiments to evaluate the performance of ALIVE in various scenarios. Experimental results indicate that ALIVE (i) improves users’ QoE by at least 22%, (ii) decreases the incurred cost of the streaming service provider by at least 34%, (iii) shortens clients’ serving latency by at least 40%, (iv) reduces edge server energy consumption by at least 31%, and (v) reduces backhaul bandwidth usage by at least 24% compared to baseline approaches.

Keywords: HTTP Adaptive Streaming (HAS); Edge Computing; Network Function Virtualization (NFV); Content Delivery Network (CDN); Peer-to-Peer (P2P); Quality of Experience (QoE); Video Transcoding; Video Super-Resolution.
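
For a flavor of the greedy idea, the sketch below picks, for each request, the cheapest feasible serving action among peer, edge, and CDN under a latency budget. The costs, latencies, and capacities are illustrative assumptions and do not reproduce the ALIVE MILP or the GBA heuristic.

```python
# Minimal sketch (illustrative costs and latencies): greedily select the
# cheapest action that meets the request's latency budget and has capacity.
def serve_request(request, actions):
    """actions: list of dicts with 'source', 'latency_ms', 'cost', 'capacity'."""
    feasible = [a for a in actions
                if a["latency_ms"] <= request["latency_budget_ms"] and a["capacity"] > 0]
    if not feasible:
        return None                          # no action meets the latency budget
    chosen = min(feasible, key=lambda a: a["cost"])
    chosen["capacity"] -= 1                  # consume one unit of the resource
    return chosen["source"]

actions = [
    {"source": "peer",           "latency_ms": 20, "cost": 0.0, "capacity": 3},
    {"source": "edge-transcode", "latency_ms": 35, "cost": 0.2, "capacity": 5},
    {"source": "cdn",            "latency_ms": 60, "cost": 1.0, "capacity": 100},
]
print(serve_request({"latency_budget_ms": 40}, actions))   # -> 'peer'
```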

IEEE Access, A Multidisciplinary, Open-access Journal of the IEEE

Title: Characterization of the Quality of Experience and Immersion of Point Cloud Video Sequences through a Subjective Study @ IEEE Access

Authors: Minh Nguyen, Shivi Vats, Sam Van Damme (Ghent University – imec and KU Leuven, Belgium), Jeroen van der Hooft (Ghent University – imec, Belgium), Maria Torres Vega (Ghent University – imec and KU Leuven, Belgium), Tim Wauters (Ghent University – imec, Belgium), Filip De Turck (Ghent University – imec, Belgium), Christian Timmerer, Hermann Hellwagner

Abstract: Point cloud streaming has recently attracted research attention as it has the potential to provide six degrees of freedom movement, which is essential for truly immersive media. The transmission of point clouds requires high-bandwidth connections, and adaptive streaming is a promising solution to cope with fluctuating bandwidth conditions. Thus, understanding the impact of different factors in adaptive streaming on the Quality of Experience (QoE) becomes fundamental. Point clouds have been evaluated in Virtual Reality (VR), where viewers are completely immersed in a virtual environment. Augmented Reality (AR) is a novel technology and has recently become popular, yet quality evaluations of point clouds in AR environments are still limited to static images.

In this paper, we perform a subjective study of four impact factors on the QoE of point cloud video sequences in AR conditions, including encoding parameters (quantization parameters, QPs), quality switches, viewing distance, and content characteristics. The experimental results show that these factors significantly impact the QoE. The QoE decreases if the sequence is encoded at high QPs and/or switches to lower quality and/or is viewed at a shorter distance, and vice versa. Additionally, the results indicate that the end user is not able to distinguish the quality differences between two quality levels at a specific (high) viewing distance. An intermediate-quality point cloud encoded at geometry QP (G-QP) 24 and texture QP (T-QP) 32 and viewed at 2.5 m can have a QoE (i.e., a score of 6.5 out of 10) comparable to a high-quality point cloud encoded at 16 and 22 for G-QP and T-QP, respectively, and viewed at a distance of 5 m. Regarding content characteristics, objects with lower contrast can yield better quality scores. Participants’ responses reveal that the visual quality of point clouds has not yet reached the desired level of immersion. The average QoE of the highest visual quality is less than 8 out of 10. There is also a good correlation between objective metrics (e.g., color Peak Signal-to-Noise Ratio (PSNR) and geometry PSNR) and the QoE score. In particular, the Pearson correlation coefficient for color PSNR is 0.84. Finally, we found that machine learning models are able to accurately predict the QoE of point clouds in AR environments.

The subjective test results and questionnaire responses are available on Github: https://github.com/minhkstn/QoE-and-Immersion-of-Dynamic-Point-Cloud.
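
As a small illustration of relating objective metrics to subjective scores, the sketch below computes the Pearson correlation between a metric (e.g., color PSNR) and mean QoE ratings. The numeric values are fabricated for illustration only and are not the study's data.

```python
# Minimal sketch (illustrative values only): Pearson correlation between an
# objective quality metric and mean subjective QoE scores.
import numpy as np

color_psnr_db = np.array([28.1, 31.4, 33.0, 35.6, 38.2, 40.5])   # illustrative
mean_qoe_score = np.array([4.1, 5.0, 5.8, 6.6, 7.4, 7.9])         # illustrative

pearson_r = np.corrcoef(color_psnr_db, mean_qoe_score)[0, 1]
print(f"Pearson correlation: {pearson_r:.2f}")
```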

Sebastian Uitz and Michael Steinkellner presented their highly anticipated game, “A Webbing Journey,” at the biggest gaming event in Austria, the Game City in Vienna, from October 13th to 15th, 2023. This event was a bustling hub of innovation, bringing together game developers and enthusiasts from near and far. It offered a remarkable opportunity to connect with fellow developers and immerse themselves in a world of fantastic games from other indie developers and big publishers. 
Nestled within the heart of Game City, our booth provided a gateway into the captivating universe of “A Webbing Journey.” Attendees of all ages were invited to step into the eight-legged shoes of our adventurous spider, experiencing the game’s enchanting storyline and unique gameplay mechanics. Our setup, equipped with a laptop, a Steam Deck, and a Nintendo Switch, allowed players to traverse the spider’s wondrous journey, leaving no web unspun. 
One of the event’s highlights was our engaging interview with the FM4 radio channel. This platform provided an excellent opportunity to share the inspiration behind “A Webbing Journey,” explore the game’s captivating features, and show off the newest level in our game. We were thrilled to offer a glimpse into the game’s development process and reveal the magic that makes our project so unique.

Authors: Gregor Molan, Gregor Dolinar, Jovan Bojkovski, Radu Prodan, Andrea Borghesi, Martin Molan

Journal: IEEE Access

Purpose: The gap between software development requirements and the available resources of software developers continues to widen. This requires changes in how software development is organized and carried out.

Objectives: A model is presented that introduces a quantitative software development management methodology, estimating the relative importance and risk of retaining or abandoning functionalities, which determines the final value of the software product.

Method: The final value of the software product is interpreted as a function of the requirements and functionalities, represented as a computational graph (called a software product graph). The software product graph allows the relative importance of functionalities to be estimated by calculating the corresponding partial derivatives of the value function. The risk of not implementing a functionality is estimated as the resulting reduction in the final value of the product.
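
To illustrate the idea of reading importance from partial derivatives on a computational graph, here is a minimal sketch with a toy value function. The weights, the synergy term, and the completeness variables are assumptions for illustration, not the paper's software product graph.

```python
# Minimal sketch (hypothetical value function): model product value as a
# differentiable function of functionality "completeness" variables and derive
# each functionality's relative importance from the partial derivatives dV/df_i.
import torch

# Completeness of three hypothetical functionalities, in [0, 1].
functionalities = torch.tensor([1.0, 1.0, 1.0], requires_grad=True)
weights = torch.tensor([0.5, 0.3, 0.2])          # assumed requirement weights

# Toy value function: weighted sum plus a synergy term between f0 and f1.
value = (weights * functionalities).sum() + 0.1 * functionalities[0] * functionalities[1]
value.backward()

importance = functionalities.grad                # partial derivatives dV/df_i
print(importance / importance.sum())             # relative importance per functionality
```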

Validation: This model has been applied to two EU projects: CareHD and vINCI. In vINCI, the functionalities with the most significant added value to the application were developed based on the implemented model, and those that brought the least value were abandoned. In CareHD, the optimization was not applied and the project proceeded as initially designed. Consequently, only 71% of CareHD’s potential value has been realized.

Conclusions: The presented model enables rational management and organization of software product development, with real-time quantitative evaluation of the impact of functionalities and assessment of the risk of omitting those without significant impact. A quantitative evaluation of the impacts and risks of retaining or abandoning functionalities is possible with the proposed algorithm, which is the core of the model. This model is a tool for the rational organization and development of software products.