
Authors: Annalisa Gallina (UNIPD, Italy), Hadi Amirpour (AAU, Austria), Sara Baldoni (UNIPD, Italy), Giuseppe Valenzise (UPSaclay, France), Federica Battisti (UNIPD, Italy).

Conference: IEEE Visual Communications and Image Processing (IEEE VCIP 2024) – Tokyo, Japan, December 8-11, 2024

Abstract: Measuring the complexity of visual content is crucial in various applications, such as selecting sources to test processing algorithms, designing subjective studies, and efficiently determining the appropriate encoding parameters and bandwidth allocation for streaming. While spatial and temporal complexity measures exist for 2D videos, a geometric complexity measure for 3D content is still lacking. In this paper, we present the first study to characterize the geometric complexity of 3D point clouds. Inspired by existing complexity measures, we propose several compression-based definitions of geometric complexity derived from the rate-distortion curves obtained by compressing a dataset of point clouds using G-PCC. Additionally, we introduce density-based and geometry-based descriptors to predict complexity. Our initial results show that even simple density measures can accurately predict the geometric complexity of point clouds.
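
To make the compression-based idea concrete, here is a minimal Python sketch of one possible definition: the G-PCC bitrate needed to reach a target D1 PSNR, interpolated from a point cloud's rate-distortion curve. The function, the target quality, and the R-D numbers below are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def geometric_complexity(bitrates_bpp, d1_psnr, target_psnr=70.0):
    """Complexity proxy: bitrate (bits per point) needed to reach a target
    D1 PSNR, linearly interpolated on the rate-distortion curve."""
    order = np.argsort(d1_psnr)  # np.interp expects increasing x values
    return float(np.interp(target_psnr,
                           np.asarray(d1_psnr)[order],
                           np.asarray(bitrates_bpp)[order]))

# Hypothetical R-D points from five G-PCC rate configurations
bpp = [0.05, 0.12, 0.31, 0.78, 1.64]
psnr = [58.2, 63.5, 68.9, 73.4, 77.1]
print(geometric_complexity(bpp, psnr))  # higher value = more complex geometry
```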

Index Terms— Point cloud, complexity, compression, G-PCC.

Authors: Prajit T Rajendran (Universite Paris-Saclay), Samira Afzal (Alpen-Adria-Universität Klagenfurt), Vignesh V Menon (Fraunhofer HHI), Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Conference: IEEE Visual Communications and Image Processing (IEEE VCIP 2024)

Abstract: Optimizing framerate for a given bitrate-spatial resolution pair in adaptive video streaming is essential to maintain perceptual quality while considering decoding complexity. Low framerates at low bitrates reduce compression artifacts and decrease decoding energy. We propose a novel method, Decoding-complexity aware Framerate Prediction (DECODRA), which employs a Variable Framerate Pareto-front approach to predict an optimized framerate that minimizes decoding energy under quality degradation constraints. DECODRA dynamically adjusts the framerate based on current bitrate and spatial resolution, balancing trade-offs between framerate, perceptual quality, and decoding complexity. Extensive experimentation with the Inter-4K dataset demonstrates DECODRA’s effectiveness, yielding an average PSNR and VMAF increase of 0.87 dB and 5.14 points, respectively, for the same bitrate compared to the default 60 fps encoding. Additionally, DECODRA achieves an average reduction in decoding energy consumption of 13.27%, enhancing the viewing experience, extending mobile device battery life, and reducing the energy footprint of streaming services.
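
For intuition, the sketch below implements the selection rule in its simplest form, assuming per-framerate VMAF and decoding-energy measurements for one bitrate-resolution pair are already available; DECODRA itself predicts the framerate with a trained model rather than measuring every option, and all numbers here are hypothetical.

```python
def pick_framerate(candidates, max_vmaf_drop=2.0):
    """Among measured (fps, VMAF, decoding energy) points for one
    bitrate-resolution pair, return the lowest-energy framerate whose
    VMAF stays within max_vmaf_drop of the best candidate."""
    best_vmaf = max(c["vmaf"] for c in candidates)
    feasible = [c for c in candidates if best_vmaf - c["vmaf"] <= max_vmaf_drop]
    return min(feasible, key=lambda c: c["energy_j"])["fps"]

points = [
    {"fps": 60, "vmaf": 91.0, "energy_j": 128.0},
    {"fps": 30, "vmaf": 90.1, "energy_j": 97.5},
    {"fps": 24, "vmaf": 87.4, "energy_j": 88.2},
]
print(pick_framerate(points))  # -> 30: near-best quality, much less energy
```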

Authors: Sashko Ristov, Mika Hautz, Philipp Gritsch, Stefan Nastic, Radu Prodan, Michael Felderer

ICSOC 2024: 22nd International Conference on Service-Oriented Computing https://icsoc2024.redcad.tn/

Abstract: We observe irregular data transfer performance across federated serverless infrastructures (sometimes faster across providers than colocated), which makes workflow scheduling even more challenging in federated FaaS and sky computing. This paper introduces STORELESS, a novel workflow scheduler and heuristic algorithm for serverless storage attachments that dynamically selects, provisions, and configures suitable function deployments and storage backends from the federated serverless infrastructure. STORELESS improves workflow execution time by up to 30% with cross-regional setups compared to the state of the art.
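
As a rough sketch of the kind of decision STORELESS automates, assuming per-backend throughput and setup latency have already been measured (the data layout and cost model below are ours, not the paper's):

```python
def pick_storage(backends, transfer_mb):
    """Choose the storage backend minimizing estimated setup + transfer
    time for one workflow data dependency. A cross-regional backend may
    win if its measured link is faster (the paper's key observation)."""
    def est_s(b):
        return b["setup_s"] + transfer_mb * 8.0 / b["mbps"]
    return min(backends, key=lambda name: est_s(backends[name]))

backends = {  # hypothetical measurements
    "s3_same_region":   {"mbps": 180.0, "setup_s": 0.4},
    "gcs_cross_region": {"mbps": 420.0, "setup_s": 0.9},
}
print(pick_storage(backends, transfer_mb=512))  # -> gcs_cross_region
```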

Authors: Akif Quddus Khan, Mihhail Matskin, Radu Prodan, Christoph Bussler, Dumitru Roman, Ahmet Soylu

Journal of Cloud Computing: https://journalofcloudcomputing.springeropen.com/

Abstract: Cloud computing has become popular among individuals and enterprises due to its convenience, scalability, and flexibility. However, a major concern for many cloud service users is the rising cost of cloud resources. Since cloud computing uses a pay-per-use model, costs can add up quickly, and unexpected expenses can arise from a lack of visibility and control. The cost structure gets even more complicated when working with multi-cloud or hybrid environments. Businesses may spend much of their IT budget on cloud computing, and any savings can improve their competitiveness and financial stability. Hence, efficient cloud cost management is crucial. To overcome this difficulty, new approaches and tools are being developed to provide greater oversight and command over cloud computing expenses. In this respect, this article presents a graph-based approach for modelling cost elements and cloud resources and a potential way to solve the resulting constraint problem of cost optimisation. In this context, we primarily consider utilisation, cost, performance, and availability. The proposed approach is evaluated on three different user scenarios, and results indicate that it could be effective in cost modelling, cost optimisation, and scalability. This approach will eventually help organisations make informed decisions about cloud resource placement and manage the costs of software applications and data workflows deployed in single, hybrid, or multi-cloud environments.
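
To make the graph/constraint idea tangible, here is a toy sketch that assigns components to resources by brute force under an availability constraint; the article's actual model also covers utilisation and performance, and all names and numbers below are illustrative.

```python
import itertools

# Toy model: place application components on cloud resources so that total
# cost is minimal and every chosen resource meets its availability bound.
resources = {
    "small_vm":  {"cost_month": 45, "availability": 0.995},
    "medium_vm": {"cost_month": 70, "availability": 0.999},
    "ha_vm":     {"cost_month": 90, "availability": 0.9999},
}
components = ["frontend", "database"]
min_availability = {"frontend": 0.995, "database": 0.999}

def total_cost(assignment):
    if any(resources[r]["availability"] < min_availability[c]
           for c, r in assignment.items()):
        return float("inf")  # infeasible placement
    return sum(resources[r]["cost_month"] for r in assignment.values())

best = min((dict(zip(components, combo))
            for combo in itertools.product(resources, repeat=len(components))),
           key=total_cost)
print(best, total_cost(best))  # frontend -> small_vm, database -> medium_vm
```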

Authors: Kurt Horvath, Dragi Kimovski, Radu Prodan, Bernd Spiess, Oliver Hohlfeld

Venue: 14th International Conference on Internet of Things (IoT 2024); Oulu, Finland, 19-22 November, https://iot-conference.org/iot2024

Abstract: Traditional network measurement campaigns suffer from the lack of control over network infrastructure and the inability to evaluate communication performance directly, especially for the placement of highly distributed Internet of Things (IoT) services. In response, we propose a novel Scalable Latency Evaluation Methodology for the Computing Continuum (SEAL-CC). SEAL-CC extends beyond short-term evaluations by capturing the long-term responsiveness of networks supporting IoT services on the computing continuum. It organizes and evaluates a network of nodes, offering insights for optimized IoT service placement in urban and international settings. Our contributions include a novel evaluation methodology tailored for IoT services over the computing continuum, a comprehensive framework for transparent network evaluation using distributed Internet measurement platforms, and a real-life case-study validation with recommendations for IoT service placement.
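
The sketch below conveys the flavor of a single long-term latency probe between two nodes, using TCP connect time as an RTT proxy since ICMP is often filtered; SEAL-CC orchestrates such measurements across distributed Internet measurement platforms, and every parameter here is an assumption.

```python
import socket
import statistics
import time

def tcp_rtt_ms(host, port=443, timeout=2.0):
    """One TCP connect as a rough RTT sample."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

def long_term_profile(host, samples=10, interval_s=1.0):
    """Repeated probes summarized as median and p95 -- the long-term view
    of responsiveness that a one-shot measurement cannot provide."""
    rtts = []
    for _ in range(samples):
        rtts.append(tcp_rtt_ms(host))
        time.sleep(interval_s)
    rtts.sort()
    return {"median_ms": statistics.median(rtts),
            "p95_ms": rtts[int(0.95 * (len(rtts) - 1))]}

print(long_term_profile("example.org"))
```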

Authors: Haleh Dizaji, Reza Farahani, Dragi Kimovski, Joze Rozanec, Ahmet Soylu, Radu Prodan

Venue: 31st IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC 2024); Bengaluru, India, 18-21 December, https://www.hipc.org

Abstract: The increasing size of graph structures in real-world applications, such as distributed computing networks, social media, or bioinformatics, requires appropriate sampling algorithms that simplify them while preserving key properties. Unfortunately, predicting the outcome of graph sampling algorithms is challenging due to their irregular complexity and randomized properties. Therefore, it is essential to identify appropriate graph features and apply suitable models capable of estimating their sampling outcomes. In this paper, we compare three machine learning (ML) models for predicting the divergence of five metrics produced by twelve node-, edge-, and traversal-based graph sampling algorithms: degree distribution (D3), clustering coefficient distribution (C2D2), hop-plots distribution (HPD2), the latter also restricted to the largest connected component (HPD2C), and execution time. We use these prediction models to recommend suitable sampling algorithms for each metric and conduct mutual information analysis to extract relevant graph features. Experiments on six large real-world graphs from three categories (scale-free, power-law, binomial) demonstrate an accuracy under 20% in C2D2 and HPD2 prediction for most algorithms despite the relatively high similarity displacement. Sampling algorithm recommendations on ten real-world graphs show higher hits@3 for D3 and C2D2 and comparable results for HPD2 and HPD2C compared to the K-best baseline method accessing true empirical data. Finally, ML models show superior runtime recommendations compared to baseline methods, with hits@3 over 86% for synthetic and real graphs and hits@1 over 60% for small graphs. These findings are promising for algorithm recommendation systems, particularly when balancing quality and runtime preferences.
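
For readers unfamiliar with the hits@k score used above, a minimal sketch (the sampling-algorithm names are made up):

```python
def hits_at_k(recommendations, true_best, k=3):
    """Fraction of graphs for which the empirically best sampling
    algorithm appears among the model's top-k recommendations."""
    hits = sum(best in recs[:k]
               for recs, best in zip(recommendations, true_best))
    return hits / len(true_best)

# Hypothetical rankings for two graphs
recs = [["random_walk", "forest_fire", "random_edge"],
        ["random_node", "random_walk", "bfs"]]
best = ["forest_fire", "bfs"]
print(hits_at_k(recs, best, k=3))  # -> 1.0
```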

Title: High Complexity and Bad Quality? Efficiency Assessment for Video QoE Prediction Approaches

Authors: Frank Loh, Gülnaziye Bingöl, Reza Farahani, Andrea Pimpinella, Radu Prodan, Luigi Atzori, Tobias Hoßfeld

Venue: 20th International Conference on Network and Service Management (CNSM 2024)

Abstract: In recent years, video streaming has dominated Internet data traffic, prompting network providers to ensure high-quality streaming experiences to prevent customer churn. However, since streaming traffic is encrypted, providers must rely on extensive network monitoring to predict streaming quality and improve their services. Several such prediction approaches have been studied in recent years, with a primary focus on the ability to determine key video quality degradation factors, often without considering the required resources or energy consumption. To address this gap, we consider existing methods from the literature for predicting key Quality of Experience (QoE) degradation factors and quantify the data that must be monitored and processed for video streaming applications. Based on this, we assess the efficiency of different QoE degradation factor prediction approaches and quantify the ratio between efficiency and the achieved prediction quality. In this context, we identify significant disparities in efficiency, influenced by the data requirements, the specific prediction approach, and ultimately the resulting prediction quality. Consequently, we provide insights that help network providers choose the most appropriate method for their specific requirements.
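
One simple way to read such an efficiency ratio, sketched under our own assumptions about units (prediction quality per megabyte of monitored traffic; the paper's exact definition may differ):

```python
def efficiency(prediction_accuracy_pct, monitored_bytes):
    """Prediction quality gained per MB of traffic that must be
    monitored and processed -- higher means a 'cheaper' approach."""
    return prediction_accuracy_pct / (monitored_bytes / 1e6)

# Hypothetical comparison of two QoE degradation-factor predictors
approaches = {"packet_level": (92.0, 450e6), "flow_level": (88.0, 12e6)}
for name, (acc, data) in approaches.items():
    print(name, round(efficiency(acc, data), 3))
# flow_level wins by far despite slightly lower accuracy
```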

Published in: From Multimedia Communication to the Future Internet: Essays Dedicated to the Retirement of Prof. Dr. Dr. h.c. Ralf Steinmetz

Authors: Amr Rizk (Leibniz Universität Hannover, Germany), Hermann Hellwagner (AAU, Austria), Christian Timmerer (AAU, Austria), and Michael Zink (University of Massachusetts Amherst, MA, USA)

Abstract: Adaptivity is a cornerstone concept in video streaming. Equipped with the concept of Transitions, we review in this paper adaptivity mechanisms known from classical video streaming scenarios. We specifically highlight how these mechanisms emerge in a specific context, such that their performance ultimately depends on the deployment conditions. Using multiple examples, we highlight the strength of runtime adaptivity for video streaming.

Authors: Michael Seufert (University of Augsburg, Germany), Marius Spangenberger (University of Würzburg, Germany), Fabian Poignée (University of Würzburg, Germany), Florian Wamser (Lucerne University of Applied Sciences and Arts, Switzerland), Werner Robitza (AVEQ GmbH, Austria), Christian Timmerer (Christian Doppler-Labor ATHENA, Alpen-Adria-Universität, Austria), Tobias Hoßfeld (University of Würzburg, Germany)

Journal: ACM Transactions on Multimedia Computing, Communications, and Applications (ACM TOMM)

Abstract: Reaching close-to-optimal bandwidth utilization in Dynamic Adaptive Streaming over HTTP (DASH) systems can, in theory, be achieved with a small discrete set of bit rate representations. This includes typical bit rate ladders used in state-of-the-art DASH systems. In practice, however, we demonstrate that bandwidth utilization, and consequently the Quality of Experience (QoE), can be improved by offering a continuous set of bit rate representations, i.e., a continuous bit rate slide (COBIRAS). Moreover, we find that the buffer fill behavior of different standard adaptive bit rate (ABR) algorithms is sub-optimal in terms of bandwidth utilization. To overcome this issue, we leverage COBIRAS’ flexibility to request segments with any arbitrary bit rate and propose a novel ABR algorithm, MinOff, which helps maximize bandwidth utilization by minimizing download off-phases during streaming. To avoid extensive storage requirements with COBIRAS and to demonstrate the feasibility of our approach, we design and implement a proof-of-concept DASH system for video streaming that relies on just-in-time encoding (JITE), which reduces storage consumption on the DASH server. Finally, we conduct a performance evaluation on our testbed and compare a state-of-the-art DASH system with few bit rate representations and our JITE DASH system, which can offer a continuous bit rate slide, in terms of bandwidth utilization and video QoE for different ABR algorithms.
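
The sketch below captures our reading of the MinOff idea, assuming a continuous bit rate slide is available: request roughly the measured throughput so downloads never pause, scaled down while the buffer is still filling. The control law and constants are our assumptions, not the paper's algorithm.

```python
def minoff_bitrate_kbps(throughput_kbps, buffer_s, target_buffer_s=20.0,
                        min_kbps=200.0, max_kbps=8000.0):
    """With a continuous bit rate slide, request (approximately) the
    measured throughput so the download never pauses; while the buffer
    is still below its target, scale the request down to let it fill."""
    fill = min(buffer_s / target_buffer_s, 1.0)
    return max(min_kbps, min(max_kbps, throughput_kbps * fill))

print(minoff_bitrate_kbps(throughput_kbps=5000, buffer_s=10))  # -> 2500.0
```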

Authors: Reza Farahani, Narges Mehran, Sashko Ristov, and Radu Prodan

Venue: IEEE International Conference on Cluster Computing (CLUSTER), Kobe, Japan, 24-27 September

Abstract: Extending cloud computing towards fog and edge computing yields a heterogeneous computing environment known as the computing continuum. In recent years, increasing demands for scalable, cost-effective, and streamlined maintenance services have led application and service providers to prefer serverless models over monolithic and serverful processing. However, orchestrating the computing continuum in complex application workflows of serverless functions, each with distinct requirements, introduces new resource management and scheduling challenges. This paper introduces HEFTLess, an orchestration service for concurrent serverless workflow processing across the computing continuum. HEFTLess uses two deployment modes tailored to serve each workflow function: predeployed and undeployed. We formulate the problem as a Binary Linear Programming (BLP) optimization model, incorporating multiple groups of constraints to minimize the overall completion time and monetary cost of executing workflow batches. Inspired by the Heterogeneous Earliest Finish Time (HEFT) algorithm, we propose a lightweight serverless workflow scheduling heuristic that copes with the high time complexity of the optimization in polynomial time. We evaluate HEFTLess using two machine learning-based serverless workflows on a real computing continuum testbed, including AWS Lambda and 325 combined on-premise and cloud instances from Exoscale, distributed across five geographic locations. The experimental results confirm that HEFTLess outperforms state-of-the-art methods in terms of both workflow batch completion time and cost.
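
To illustrate the HEFT ingredient that HEFTLess builds on, here is a minimal upward-rank computation for a toy three-function workflow; it ignores communication costs and the predeployed/undeployed modes, and the task names and costs are hypothetical.

```python
def upward_rank(task, dag, cost, memo):
    """HEFT-style upward rank: a task's average execution cost plus the
    longest path from it to the workflow exit; higher ranks go first."""
    if task in memo:
        return memo[task]
    successors = dag.get(task, [])
    memo[task] = cost[task] + (max(upward_rank(s, dag, cost, memo)
                                   for s in successors) if successors else 0.0)
    return memo[task]

# Toy serverless workflow: ingest -> infer -> post
dag = {"ingest": ["infer"], "infer": ["post"], "post": []}
cost = {"ingest": 1.2, "infer": 4.0, "post": 0.8}

memo = {}
order = sorted(dag, key=lambda t: upward_rank(t, dag, cost, memo), reverse=True)
print(order)  # -> ['ingest', 'infer', 'post']
```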