
Authors: Hadi Amirpour (AAU, Austria), Lingfeng Qu (Guangzhou University, China), Jong Hwan Ko (SKKU, South Korea), Cosmin Stejerean (Meta, USA), Christian Timmerer (AAU, Austria)

Conference: IEEE Visual Communications and Image Processing (IEEE VCIP 2024) – Tokyo, Japan, December 8-11, 2024

Abstract: As video dimensions — including resolution, frame rate, and bit depth — increase, a larger bitrate is required to maintain a higher Quality of Experience (QoE). While videos are often optimized for resolution and frame rate to improve compression and energy efficiency, the impact of color space is often overlooked. Larger color spaces are essential for avoiding color banding and delivering High Dynamic Range (HDR) content with richer, more accurate colors, although this comes at the cost of higher processing energy. This paper investigates the effects of bit depth and color subsampling on video compression efficiency and energy consumption. By analyzing different bit depths and subsampling schemes, we aim to determine optimized settings that balance compression efficiency with energy consumption, ultimately contributing to more sustainable and high-quality video delivery. We evaluate both encoding and decoding energy consumption and assess the quality of videos using various metrics including PSNR, VMAF, ColorVideoVDP, and CAMBI. Our findings offer valuable insights for video codec developers and content providers aiming to improve the performance and environmental footprint of their video streaming services.
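As a rough illustration of the settings space the paper studies — not its actual pipeline — one can enumerate FFmpeg/x265 encode commands over bit-depth and chroma-subsampling combinations. The input file name, output names, and CRF value below are hypothetical:

```python
# Hypothetical sketch: build FFmpeg commands covering 8-bit vs 10-bit
# and 4:2:0 / 4:2:2 / 4:4:4 chroma subsampling. File names and the
# CRF value are illustrative; the commands are only assembled, not run.

def ffmpeg_cmd(src, dst, pix_fmt, crf=28):
    """Assemble an x265 encode command for one pixel format."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx265", "-crf", str(crf),
        "-pix_fmt", pix_fmt,  # e.g. yuv420p (8-bit 4:2:0) vs yuv444p10le
        dst,
    ]

pix_fmts = ["yuv420p", "yuv422p", "yuv444p",
            "yuv420p10le", "yuv422p10le", "yuv444p10le"]
cmds = [ffmpeg_cmd("input.y4m", f"out_{p}.mp4", p) for p in pix_fmts]
```

Encoding and decoding energy for each output could then be measured externally (e.g. with a power meter or RAPL counters) to populate the kind of comparison the paper reports.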

Index Terms— Video encoding, video decoding, video quality, bit depth, color subsampling, energy.


Authors: Annalisa Gallina (UNIPD, Italy), Hadi Amirpour (AAU, Austria), Sara Baldoni (UNIPD, Italy), Giuseppe Valenzise (UPSaclay, France), Federica Battisti (UNIPD, Italy).

Conference: IEEE Visual Communications and Image Processing (IEEE VCIP 2024) – Tokyo, Japan, December 8-11, 2024

Abstract: Measuring the complexity of visual content is crucial in various applications, such as selecting sources to test processing algorithms, designing subjective studies, and efficiently determining the appropriate encoding parameters and bandwidth allocation for streaming. While spatial and temporal complexity measures exist for 2D videos, a geometric complexity measure for 3D content is still lacking. In this paper, we present the first study to characterize the geometric complexity of 3D point clouds. Inspired by existing complexity measures, we propose several compression-based definitions of geometric complexity derived from the rate-distortion curves obtained by compressing a dataset of point clouds using G-PCC. Additionally, we introduce density-based and geometry-based descriptors to predict complexity. Our initial results show that even simple density measures can accurately predict the geometric complexity of point clouds.
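One compression-based definition of the kind described could, for instance, score complexity by the area under a rate-distortion curve obtained from G-PCC at several rate points. The sketch below uses invented (rate, distortion) pairs and is an assumed illustration, not the paper's actual measure:

```python
# Illustrative sketch (not the paper's definition): score the geometric
# complexity of a point cloud as the area under its rate-distortion
# curve. The (bitrate, distortion) pairs below are invented.

def rd_area(points):
    """Trapezoidal area under a (rate, distortion) curve.
    A cloud whose curve stays high needs more bits to reach the same
    distortion, i.e. is geometrically more complex."""
    pts = sorted(points)
    return sum((r1 - r0) * (d0 + d1) / 2.0
               for (r0, d0), (r1, d1) in zip(pts, pts[1:]))

simple_cloud = [(0.1, 2.0), (0.5, 1.0), (1.0, 0.5)]  # e.g. flat surface
dense_cloud = [(0.1, 6.0), (0.5, 4.0), (1.0, 3.0)]   # e.g. fine detail
```

Density- or geometry-based descriptors, as the abstract notes, would then try to predict this score without running the codec at all.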

Index Terms— Point cloud, complexity, compression, G-PCC.

From 11 to 13 October, ITEC’s Felix Schniz participated in the annual FROG (Future and Reality of Gaming) conference in Vienna. It is Austria’s biggest academic conference dedicated to (video) games and attracted 50 speakers from 12 countries this year. Under this year’s topic, “Gaming the Apocalypse”, Felix delivered the talk “Scales of Apocalypse: Space and Affect in Dystopian Video Games between Sacred and Profane”.

Also presenting at the conference were AAU’s Kseniia Harshina and the Game Studies and Engineering master’s students Tim Sanders and Elli Chraibi, showcasing the diverse research interests and academic expertise of Game Studies and Engineering staff and students in the field.

The conference proceedings are expected to be published next summer.

Authors: Prajit T Rajendran (Universite Paris-Saclay), Samira Afzal (Alpen-Adria-Universität Klagenfurt), Vignesh V Menon (Fraunhofer HHI), Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Conference: IEEE Visual Communications and Image Processing (IEEE VCIP 2024)

Abstract: Optimizing framerate for a given bitrate-spatial resolution pair in adaptive video streaming is essential to maintain perceptual quality while considering decoding complexity. Low framerates at low bitrates reduce compression artifacts and decrease decoding energy. We propose a novel method, Decoding-complexity aware Framerate Prediction (DECODRA), which employs a Variable Framerate Pareto-front approach to predict an optimized framerate that minimizes decoding energy under quality degradation constraints. DECODRA dynamically adjusts the framerate based on current bitrate and spatial resolution, balancing trade-offs between framerate, perceptual quality, and decoding complexity. Extensive experimentation with the Inter-4K dataset demonstrates DECODRA’s effectiveness, yielding an average PSNR and VMAF increase of 0.87 dB and 5.14 points, respectively, for the same bitrate compared to the default 60 fps encoding. Additionally, DECODRA achieves an average reduction in decoding energy consumption of 13.27 %, enhancing the viewing experience, extending mobile device battery life, and reducing the energy footprint of streaming services.
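The constrained selection at the heart of the approach can be sketched as follows. This is a simplified stand-in for DECODRA, with invented quality/energy predictions and an assumed VMAF-drop threshold:

```python
# Simplified sketch of the selection idea (not DECODRA itself): given
# predicted (quality, decoding energy) per candidate framerate for one
# bitrate-resolution pair, pick the lowest-energy framerate whose
# quality loss versus the 60 fps default stays within a threshold.
# All numbers below are invented.

def pick_framerate(candidates, max_vmaf_drop=2.0):
    """candidates: {fps: (vmaf, energy_joules)}. Return the
    lowest-energy framerate with a bounded VMAF drop vs 60 fps."""
    ref_vmaf = candidates[60][0]            # 60 fps is the default
    feasible = [fps for fps, (v, _) in candidates.items()
                if ref_vmaf - v <= max_vmaf_drop]
    return min(feasible, key=lambda fps: candidates[fps][1])

preds = {60: (92.0, 120.0), 30: (90.5, 70.0), 24: (88.0, 55.0)}
best = pick_framerate(preds)  # 24 fps is excluded: its drop exceeds 2.0
```

Sweeping this choice across the Pareto front of bitrate-resolution pairs yields the framerate ladder the paper evaluates.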

Authors: Sashko Ristov, Mika Hautz, Philipp Gritsch, Stefan Nastic, Radu Prodan, Michael Felderer

ICSOC 2024: 22nd International Conference on Service-Oriented Computing https://icsoc2024.redcad.tn/

Abstract: We observe irregular data transfer performance across federated serverless infrastructures (transfers are sometimes faster across providers than between colocated services), which makes workflow scheduling even more challenging in federated FaaS and sky computing. This paper introduces STORELESS – a novel workflow scheduler and heuristic algorithm for serverless storage attachments that dynamically selects, provisions, and configures suitable function deployments and storage backends from the federated serverless infrastructure. STORELESS improves workflow execution time by up to 30% by running cross-regional setups compared to the state-of-the-art.
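The storage-selection step the abstract hints at might be sketched, in a greatly simplified form with invented timings, as picking the backend with the smallest measured transfer time per function region:

```python
# Toy sketch (not STORELESS's actual heuristic): choose, per function
# region, the storage backend with the lowest measured transfer time,
# even if it lies at a different provider. All timings are invented.

transfer_ms = {  # (function region, storage backend) -> measured ms
    ("aws-eu", "s3-eu"): 120,
    ("aws-eu", "gcs-us"): 95,   # cross-provider can be faster
    ("gcp-us", "gcs-us"): 40,
}

def pick_storage(region, backends):
    """Choose the backend with the lowest measured transfer time."""
    return min(backends, key=lambda b: transfer_ms[(region, b)])

choice = pick_storage("aws-eu", ["s3-eu", "gcs-us"])
```

The counter-intuitive cross-provider win in this toy data mirrors the irregular transfer performance the paper reports.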


Haleh gave a presentation at the first International Workshop on Scaling Knowledge Graphs for Industry (co-located with the 20th International Conference on Semantic Systems – SEMANTICS 2024) in Amsterdam, Sept. 17-19, 2024.

Title: “Modeling and Generating Extreme Volumes of Financial Synthetic Time-Series Data with Knowledge Graphs”

Authors: Laurentiu Vasiliu, S. Haleh S. Dizaji, Aaron Eberhart, Dumitru Roman, and Radu Prodan

Authors: Akif Quddus Khan, Mihhail Matskin, Radu Prodan, Christoph Bussler, Dumitru Roman, Ahmet Soylu

Journal of Cloud Computing: https://journalofcloudcomputing.springeropen.com/

Abstract: Cloud computing has become popular among individuals and enterprises due to its convenience, scalability, and flexibility. However, a major concern for many cloud service users is the rising cost of cloud resources. Since cloud computing uses a pay-per-use model, costs can add up quickly, and unexpected expenses can arise from a lack of visibility and control. The cost structure gets even more complicated when working with multi-cloud or hybrid environments. Businesses may spend much of their IT budget on cloud computing, and any savings can improve their competitiveness and financial stability. Hence, efficient cloud cost management is crucial. To overcome this difficulty, new approaches and tools are being developed to provide greater oversight and command over cloud computing expenses. In this respect, this article presents a graph-based approach for modelling cost elements and cloud resources and a potential way to solve the resulting constraint problem of cost optimisation. In this context, we primarily consider utilisation, cost, performance, and availability. The proposed approach is evaluated on three different user scenarios, and results indicate that it could be effective in cost modelling, cost optimisation, and scalability. This approach will eventually help organisations make informed decisions about cloud resource placement and manage the costs of software applications and data workflows deployed in single, hybrid, or multi-cloud environments.
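A toy version of such a graph-based cost model could look like the following. The prices, node names, and cost split are invented for illustration and are not the article's formulation:

```python
# Toy graph-based cost model (invented prices, not the article's
# formulation): nodes are cloud resources with an hourly price, edges
# carry per-GB data-transfer cost; a placement's total cost sums both.

node_price = {"vm-a": 0.10, "vm-b": 0.08, "bucket-eu": 0.02}  # $/hour
edge_price = {("vm-a", "bucket-eu"): 0.01,                    # $/GB
              ("vm-b", "bucket-eu"): 0.05}

def placement_cost(nodes, edges, hours=1.0, gb=10.0):
    """Total cost = compute time on chosen nodes + data transfer
    along the edges the placement actually uses."""
    compute = sum(node_price[n] for n in nodes) * hours
    transfer = sum(edge_price[e] for e in edges) * gb
    return compute + transfer

cost_b = placement_cost(["vm-b", "bucket-eu"], [("vm-b", "bucket-eu")])
cost_a = placement_cost(["vm-a", "bucket-eu"], [("vm-a", "bucket-eu")])
```

In this toy data the nominally pricier vm-a wins once transfer cost is counted — exactly the kind of trade-off a constraint solver over the full resource graph would resolve.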

Authors: Kurt Horvath, Dragi Kimovski, Radu Prodan, Bernd Spiess, Oliver Hohlfeld

Venue: 14th International Conference on Internet of Things (IoT 2024); Oulu, Finland, 19-22 November, https://iot-conference.org/iot2024

Abstract: Traditional network measurement campaigns suffer from the lack of control over network infrastructure and the inability to evaluate communication performance directly, especially for the placement of highly distributed Internet of Things (IoT) services. In response, we propose a novel Scalable Latency Evaluation Methodology for the Computing Continuum (SEAL-CC). SEAL-CC extends beyond short-term evaluations by capturing the long-term responsiveness of networks supporting IoT services on the computing continuum. It organizes and evaluates a network of nodes, offering insights for optimized IoT service placement in urban and international settings. Our contributions include a novel evaluation methodology tailored for IoT services over the computing continuum, a comprehensive framework for transparent network evaluation using distributed Internet measurement platforms, and a real-life case-study validation with recommendations for IoT service placement.
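A long-term responsiveness summary of the kind SEAL-CC targets could — purely as an assumed illustration, not the methodology itself — reduce repeated measurement rounds per node to median and tail latency:

```python
# Assumed illustration (not SEAL-CC's definition): summarize the
# long-term responsiveness of a node as median and near-p95 latency
# over repeated measurement rounds. The samples are invented.

def summarize(samples_ms):
    """Return (upper median, nearest-rank ~p95 rounded down)."""
    s = sorted(samples_ms)
    median = s[len(s) // 2]
    p95 = s[int(0.95 * (len(s) - 1))]
    return median, p95

rounds = [21, 25, 19, 120, 22, 24, 23, 20, 26, 22]  # one outlier spike
med, p95 = summarize(rounds)
```

Comparing such summaries across candidate nodes is one simple way to rank placements for latency-sensitive IoT services.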

Authors: Haleh Dizaji, Reza Farahani, Dragi Kimovski, Joze Rozanec, Ahmet Soylu, Radu Prodan

Venue: 31st IEEE International Conference on High Performance Computing, Data, and Analytics; Bengaluru, India, 18-21 December

https://www.hipc.org

Abstract: The increasing size of graph structures in real-world applications, such as distributed computing networks, social media, or bioinformatics, requires appropriate sampling algorithms that simplify them while preserving key properties. Unfortunately, predicting the outcome of graph sampling algorithms is challenging due to their irregular complexity and randomized properties. Therefore, it is essential to identify appropriate graph features and apply suitable models capable of estimating their sampling outcomes. In this paper, we compare three machine learning (ML) models for predicting the divergence of five metrics produced by twelve node, edge, and traversal-based graph sampling algorithms: degree distribution (D3), clustering coefficient distribution (C2D2), hop-plots distribution (HPD2) (including the largest connected component (HPD2C)), and execution time. We use these prediction models to recommend suitable sampling algorithms for each metric and conduct mutual information analysis to extract relevant graph features. Experiments on six large real-world graphs from three categories (scale-free, power-law, binomial) demonstrate an accuracy under 20% in C2D2 and HPD2 prediction for most algorithms despite the relatively high similarity displacement. Sampling algorithm recommendations on ten real-world graphs show higher hits@3 for D3 and C2D2 and comparable results for HPD2 and HPD2C compared to the K-best baseline method accessing true empirical data. Finally, ML models show superior runtime recommendations compared to baseline methods, with hits@3 over 86% for synthetic and real graphs and hits@1 over 60% for small graphs. These findings are promising for algorithm recommendation systems, particularly when balancing quality and runtime preferences.
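The hits@k figures above follow the usual recommendation-quality definition, which a small helper can illustrate (algorithm names and rankings below are invented, not the paper's data):

```python
# Generic hits@k sketch: a recommendation counts as a "hit" if the
# true best sampling algorithm appears among the top-k recommended
# ones. The example recommendations and ground truth are invented.

def hits_at_k(recommendations, truth, k):
    """Fraction of cases where the true best algorithm appears in
    the top-k recommended list."""
    hits = sum(1 for recs, best in zip(recommendations, truth)
               if best in recs[:k])
    return hits / len(truth)

recs = [["FF", "RW", "MHRW"], ["RN", "FF", "RE"], ["RE", "RN", "RW"]]
truth = ["RW", "RE", "TIES"]
score = hits_at_k(recs, truth, k=3)  # 2 of 3 cases hit
```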


Dr. Reza Farahani (University of Klagenfurt, Austria) and Dr. Vignesh V Menon (Fraunhofer HHI, Germany) presented a joint tutorial titled ‘Latency- and Energy-Aware Video Coding and Delivery Streaming Systems’ at the 12th European Workshop on Visual Information Processing (EUVIP 2024) on September 8.

Abstract: This tutorial introduces modern performance- and energy-aware video coding and content delivery solutions and tools, focusing on popular video streaming applications, i.e., VoD and live streaming. After introducing the fundamentals of modern video encoding and networking paradigms, we present modern solutions that use per-title encoding, per-scene encoding, virtualized and software-defined networks, edge computing, and overlay networks such as Content Delivery Networks (CDNs) and/or Peer-to-Peer (P2P) paradigms to provide latency- and energy-efficient VoD and live HAS streaming. The tutorial also presents our tools, software, datasets, and testbeds to demonstrate our latest achievements and share practical insights for researchers, engineers, and students who want to improve conversational streaming or test such techniques for immersive video sequences (e.g., tile-based 360-degree VR) with a focus on latency, economic cost, and energy.