Authors: Zoha Azimi, Amritha Premkumar, Reza Farahani, Vignesh V Menon, Christian Timmerer, Radu Prodan

Venue: 32nd European Signal Processing Conference (EUSIPCO’24)

Abstract: Traditional per-title encoding approaches aim to maximize perceptual video quality by optimizing resolutions for each bitrate ladder representation. However, ensuring acceptable decoding times in video streaming, especially with the increased runtime complexity of modern codecs like Versatile Video Coding (VVC) compared to predecessors such as High Efficiency Video Coding (HEVC), is essential, as it leads to diminished buffering time, decreased energy consumption, and an improved Quality of Experience (QoE). This paper introduces a decoding complexity-sensitive bitrate ladder estimation scheme designed to optimize adaptive VVC streaming experiences. We design a customized bitrate ladder for the device configuration, ensuring that the decoding time remains below a threshold to mitigate adverse QoE issues such as rebuffering and to reduce energy consumption. The proposed scheme utilizes an eXtended PSNR (XPSNR)-optimized resolution prediction for each target bitrate, ensuring the highest possible perceptual quality within the constraints of device resolution and decoding time. Furthermore, it employs XGBoost-based models for predicting XPSNR, QP, and decoding time, utilizing the Inter-4K video dataset for training. The experimental results indicate that our approach achieves an average 28.39 % reduction in decoding time using the VVC Test Model (VTM). Additionally, it achieves bitrate savings of 3.7 % and 1.84 % while maintaining almost the same PSNR and XPSNR, respectively, for a display resolution constraint of 2160p and a decoding time constraint of 32 s.
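The resolution selection under decoding-time and display constraints can be pictured as a small search per target bitrate. Below is a minimal Python sketch of that idea; the candidate resolutions, function names, and the stand-in predictors (which the paper realizes with trained XGBoost models) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of decoding-complexity-aware ladder selection (illustrative only).
CANDIDATE_HEIGHTS = [360, 540, 720, 1080, 1440, 2160]  # representation heights (pixels)

def build_ladder(target_bitrates_kbps, predict_xpsnr, predict_dec_time,
                 display_height=2160, dec_time_limit_s=32.0):
    """For each target bitrate, pick the resolution with the highest predicted XPSNR
    whose predicted decoding time stays below the device threshold."""
    ladder = []
    for bitrate in target_bitrates_kbps:
        best = None  # (height, predicted XPSNR)
        for height in CANDIDATE_HEIGHTS:
            if height > display_height:
                continue  # never exceed the display resolution
            if predict_dec_time(bitrate, height) > dec_time_limit_s:
                continue  # violates the decoding-time constraint
            xpsnr = predict_xpsnr(bitrate, height)
            if best is None or xpsnr > best[1]:
                best = (height, xpsnr)
        if best is not None:
            ladder.append({"bitrate_kbps": bitrate, "height": best[0]})
    return ladder
```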


Authors: Zoha Azimi, Reza Farahani, Vignesh V Menon, Christian Timmerer, Radu Prodan

Venue: 16th International Conference on Quality of Multimedia Experience (QoMEX’24)

Abstract: As video streaming dominates Internet traffic, users constantly seek a better Quality of Experience (QoE), often resulting in increased energy consumption and a higher carbon footprint. The increasing focus on sustainability underscores the critical need to balance energy consumption and QoE in video streaming. This paper proposes a modular architecture that refines video encoding parameters by assessing video complexity and encoding settings for the prediction of energy consumption and video quality (based on Video Multimethod Assessment Fusion (VMAF)) using lightweight XGBoost models trained on the multi-dimensional video compression dataset (MVCD). We apply Explainable AI (XAI) techniques to identify the critical encoding parameters that influence the energy consumption and video quality prediction models and then tune them using a weighting strategy between energy consumption and video quality. The experimental results confirm that applying a suitable weighting factor to energy consumption in the x265 encoder results in a 46 % decrease in energy consumption, with a 4-point drop in VMAF, staying below the Just Noticeable Difference (JND) threshold.
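A minimal sketch of how such a weighting between predicted energy and predicted VMAF could drive the choice of encoding parameters follows; the scoring formula, normalization, and predictor callables are assumptions, not the paper's exact method.

```python
# Illustrative weighted trade-off between predicted energy and predicted VMAF.
def select_encoding_config(configs, predict_energy, predict_vmaf, weight=0.5):
    """Score each candidate configuration by a weighted trade-off between normalized
    predicted energy (lower is better) and predicted VMAF (higher is better)."""
    energies = [predict_energy(cfg) for cfg in configs]
    vmafs = [predict_vmaf(cfg) for cfg in configs]
    e_min, e_max = min(energies), max(energies)
    span = (e_max - e_min) or 1.0  # avoid division by zero when all energies are equal
    best_cfg, best_score = None, float("-inf")
    for cfg, energy, vmaf in zip(configs, energies, vmafs):
        energy_norm = (energy - e_min) / span          # 0 = cheapest, 1 = most costly
        score = (1 - weight) * (vmaf / 100.0) - weight * energy_norm
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg
```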

Cloud storage cost: a taxonomy and survey

Authors: Akif Quddus Khan, Mihhail Matskin, Radu Prodan, Christoph Bussler, Dumitru Roman, Ahmet Soylu

Venue: World Wide Web: Internet and Web Information Systems (https://link.springer.com/journal/11280)

Abstract: Cloud service providers offer application providers virtually infinite storage and computing resources, while providing cost-efficiency and various other quality of service (QoS) properties through a storage-as-a-service (StaaS) approach. Organizations also use multi-cloud or hybrid solutions by combining multiple public and/or private cloud service providers to avoid vendor lock-in, achieve high availability and performance, and optimise cost. Indeed, cost is one of the important factors for organizations when adopting cloud storage; however, cloud storage providers offer complex pricing policies, including the actual storage cost and the cost related to additional services (e.g., network usage cost). In this article, we provide a detailed taxonomy of cloud storage cost and a taxonomy of other QoS elements, such as network performance, availability, and reliability. We also discuss various cost trade-offs, including storage and computation, storage and cache, and storage and network.

Finally, we provide a cost comparison across different storage providers under different contexts and a set of user scenarios to demonstrate the complexity of the cost structure, and we discuss existing literature on cloud storage selection and cost optimization. We aim for the work presented in this article to provide decision-makers and researchers focusing on cloud storage selection for data placement, cost modelling, and cost optimization with a better understanding of, and insights into, the elements contributing to storage cost and this complex problem domain.
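To illustrate why the pricing policies discussed above are hard to compare, a toy cost model combining storage, egress, and request charges is sketched below; the prices are placeholders, not any provider's actual tariff.

```python
# Toy monthly cost model in the spirit of the cost elements surveyed above.
def monthly_storage_cost(stored_gb, egress_gb, get_requests,
                         storage_price_per_gb=0.023,
                         egress_price_per_gb=0.09,
                         request_price_per_1k=0.0004):
    """Combine the raw storage cost with the cost of additional services
    (network egress and API requests) into a single monthly figure (USD)."""
    storage_cost = stored_gb * storage_price_per_gb
    network_cost = egress_gb * egress_price_per_gb
    request_cost = (get_requests / 1000.0) * request_price_per_1k
    return storage_cost + network_cost + request_cost

# Example: 5 TB stored, 500 GB served to clients, 2 million GET requests per month.
print(round(monthly_storage_cost(5000, 500, 2_000_000), 2))  # 160.8
```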

 

Title: Cataract-1K Dataset for Deep-Learning-Assisted Analysis of Cataract Surgery Videos

Authors: Negin Ghamsarian, Yosuf El-Shabrawi, Sahar Nasirihaghighi, Doris Putzgruber-Adamitsch, Martin Zinkernagel, Sebastian Wolf, Klaus Schoeffmann, and Raphael Sznitman

Abstract: In recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons’ skills, operating room management, and overall surgical outcomes. However, the progression of deep-learning-powered surgical technologies is profoundly reliant on large-scale datasets and annotations. In particular, surgical scene understanding and phase recognition stand as pivotal pillars within the realm of computer-assisted surgery and post-operative assessment of cataract surgery videos. In this context, we present the largest cataract surgery video dataset that addresses diverse requisites for constructing computerized surgical workflow analysis and detecting post-operative irregularities in cataract surgery. We validate the quality of annotations by benchmarking the performance of several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. In addition, we initiate research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance in cataract surgery videos. The dataset and annotations are publicly available in Synapse.

 

The paper is available here: https://doi.org/10.1038/s41597-024-03193-4

Authors: Sandro Linder (AAU, Austria), Samira Afzal (AAU, Austria), Christian Bauer (AAU, Austria), Hadi Amirpour (AAU, Austria), Radu Prodan (AAU, Austria), and Christian Timmerer (AAU, Austria)

Venue: The 15th ACM Multimedia Systems Conference (Open-source Software and Datasets)

Abstract: Video streaming constitutes 65 % of global internet traffic, prompting an investigation into its energy consumption and CO2 emissions. Video encoding, a computationally intensive part of streaming, has moved to cloud computing for its scalability and flexibility. However, the energy consumption of cloud data centers, especially for video encoding, poses environmental challenges. This paper presents VEED, a FAIR Video Encoding Energy and CO2 Emissions Dataset for Amazon Web Services (AWS) EC2 instances. The dataset also contains the duration, CPU utilization, and cost of the encoding. To prepare this dataset, we introduce a model and conduct a benchmark to estimate the energy and CO2 emissions of different Amazon EC2 instances during the encoding of 500 video segments with various complexities and resolutions using Advanced Video Coding (AVC) and High-Efficiency Video Coding (HEVC). VEED and its analysis can provide valuable insights for video researchers and engineers to model energy consumption, manage energy resources, and distribute workloads, contributing to the sustainability and cost-effectiveness of cloud-based video encoding. VEED is available on GitHub.
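As a rough illustration of how measured encoding energy relates to CO2 emissions in such a dataset, the following sketch converts joules into grams of CO2 via an assumed grid carbon intensity; the intensity value is a placeholder, not a figure taken from VEED.

```python
# Rough sketch: converting encoding energy into CO2 emissions (placeholder values).
def encoding_co2_grams(energy_joules, carbon_intensity_g_per_kwh=400.0):
    """Convert encoding energy (J) into grams of CO2-equivalent using the carbon
    intensity of the electricity powering the instance."""
    energy_kwh = energy_joules / 3.6e6  # 1 kWh = 3.6 MJ
    return energy_kwh * carbon_intensity_g_per_kwh

# Example: an encode that consumed 250 kJ on the assumed grid mix.
print(encoding_co2_grams(250_000))  # ~27.8 g CO2e
```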

 

Authors: Christian Bauer (AAU, Austria), Samira Afzal (AAU, Austria), Sandro Linder (AAU, Austria), Radu Prodan (AAU, Austria), and Christian Timmerer (AAU, Austria)

Venue: The 15th ACM Multimedia Systems Conference (Open-source Software and Datasets)

Abstract: Addressing climate change requires a global decrease in greenhouse gas (GHG) emissions. In today’s digital landscape, video streaming significantly influences internet traffic, driven by the widespread use of mobile devices and the rising popularity of streaming platforms. This trend emphasizes the importance of evaluating energy consumption and the development of sustainable and eco-friendly video streaming solutions with a low Carbon Dioxide (CO2) footprint. We developed a specialized tool, released as an open-source library called GREEM, addressing this pressing concern. This tool measures video encoding and decoding energy consumption and facilitates benchmark tests. It monitors the computational impact on hardware resources and offers various analysis cases. GREEM is helpful for developers, researchers, service providers, and policymakers interested in minimizing the energy consumption of video encoding and streaming.
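GREEM's own API is not shown here; the following is a minimal sketch of the underlying measurement idea, reading the Linux RAPL package-energy counter before and after an encoding command. The ffmpeg invocation and file names are illustrative.

```python
# Not GREEM's API: Linux-only sketch using the RAPL powercap counter (ignores
# counter wrap-around, so it is only suitable for short measurements).
import subprocess

RAPL_COUNTER = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj():
    with open(RAPL_COUNTER) as f:
        return int(f.read())

def measure_encode_energy(cmd):
    """Run an encoding command and return the CPU package energy it consumed (J)."""
    before = read_energy_uj()
    subprocess.run(cmd, check=True)
    after = read_energy_uj()
    return (after - before) / 1e6  # microjoules -> joules

# Example (illustrative): energy of an x265 encode via ffmpeg.
# joules = measure_encode_energy(["ffmpeg", "-i", "input.mp4", "-c:v", "libx265", "out.mp4"])
```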

Authors: Seyedehhaleh Seyeddizaji, Joze Martin Rozanec, Reza Farahani, Dumitru Roman and Radu Prodan

Venue: The 2nd Workshop on Serverless, Extreme-Scale, and Sustainable Graph Processing Systems Co-located with ICPE 2024

Abstract: While graph sampling is key to scalable processing, little research has thoroughly compared and understood how well it preserves features such as degree, clustering, and distances depending on the graph size and structural properties. This research evaluates twelve widely adopted sampling algorithms across synthetic and real datasets to assess their sample quality in terms of three metrics: degree, clustering coefficient (CC), and hop plots. We find the random jump algorithm to be an appropriate choice for the degree and hop-plot metrics and the random node algorithm for the CC metric. In addition, we interpret the algorithms’ sample quality by conducting correlation analysis with diverse graph properties. We discover eigenvector centrality and path-related features as essential features for these algorithms’ degree quality estimation; the number of nodes (or the size of the largest connected component) as informative features for CC quality estimation; and degree entropy, edge betweenness, and path-related features as meaningful features for the hop-plot metric. Furthermore, with increasing graph size, most sampling algorithms produce better-quality samples under the degree and hop-plot metrics.
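As an illustration of the kind of comparison performed (not the paper's code), the sketch below draws a random-node sample with networkx and scores how well it preserves the degree metric using a Kolmogorov-Smirnov distance between degree distributions.

```python
# Illustrative sketch: scoring a random-node sample on the degree metric.
import random

import networkx as nx
from scipy.stats import ks_2samp

def random_node_sample(graph, fraction=0.2, seed=42):
    """Induced subgraph over a uniformly random subset of nodes."""
    rng = random.Random(seed)
    k = int(fraction * graph.number_of_nodes())
    return graph.subgraph(rng.sample(list(graph.nodes), k)).copy()

def degree_quality(original, sample):
    """KS distance between the two degree distributions (smaller = better preserved)."""
    return ks_2samp([d for _, d in original.degree()],
                    [d for _, d in sample.degree()]).statistic

g = nx.barabasi_albert_graph(1000, 3, seed=1)
print(degree_quality(g, random_node_sample(g)))
```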


Authors: Reza Farahani, Frank Loh, Dumitru Roman, and Radu Prodan

Venue: The 2nd Workshop on Serverless, Extreme-Scale, and Sustainable Graph Processing Systems Co-located with ICPE 2024

Abstract: The growing desire among application providers for a cost model based on pay-per-use, combined with the need for a seamlessly integrated platform to manage the complex workflows of their applications, has spurred the emergence of a promising computing paradigm known as serverless computing. Although serverless computing was initially considered for cloud environments, it has recently been extended to other layers of the computing continuum, i.e., edge and fog. This extension emphasizes that the proximity of computational resources to data sources can further reduce costs and improve performance and energy efficiency. However, orchestrating the computing continuum in complex application workflows, including a set of serverless functions, introduces new challenges. This paper investigates the opportunities and challenges introduced by serverless computing for workflow management systems (WMS) on the computing continuum. In addition, the paper provides a taxonomy of state-of-the-art WMSs and reviews their capabilities.

”Fictional Practices of Spirituality” provides critical insight into the implementation of belief, mysticism, religion, and spirituality into (digital) worlds of fiction. This first volume focuses on interactive, virtual worlds – be it the digital realms of video games and VR applications or the imaginary spaces of live action role-playing and soul-searching practices. It features analyses of spirituality as a gameplay facilitator, sacred spaces and architecture in video game geography, religion in video games, and spiritual acts and their dramaturgic function in video games, tabletop, or larp, among other topics. The contributors offer a first-ever comprehensive overview of play-rites as spiritual incentives and playful spirituality in various medial incarnations.

The anthology was edited by Felix Schniz and Leonardo Marcato. It is now available as a printed copy, or for download via Open Access. Published by transcript 2023.

Book: Fictional Practices of Spirituality I

Venue: Cluster Computing

Title: DFARM: A deadline-aware fault-tolerant scheduler for cloud computing

Authors: Ahmad Awan, Muhammad Aleem, Altaf Hussain, Radu Prodan

Abstract:

Cloud computing has become popular for small businesses due to its cost-effectiveness and the ability to acquire necessary on-demand services, including software, hardware, and network, anytime around the globe. Efficient job scheduling in the Cloud is essential to optimize operational costs in data centers. Therefore, scheduling should assign tasks to Virtual Machines (VMs) in a Cloud environment in a manner that speeds up execution, maximizes resource utilization, and meets users’ SLAs and other constraints such as deadlines. For this purpose, tasks can be prioritized based on their deadlines and task lengths, and resources can be provisioned and released as needed. Moreover, to cope with unexpected execution situations or hardware failures, a fault-tolerance mechanism can be employed based on hybrid replication and the re-submission method. Most existing techniques tend to improve performance; however, they fall short in certain aspects: they prioritize tasks based on a single value (usually the deadline), rely on a single fault-tolerance mechanism, or release resources immediately, which causes additional overhead. This research work proposes a new scheduler called the Deadline and Fault-aware task Adjusting and Resource Managing (DFARM) scheduler, which dynamically acquires resources and schedules deadline-constrained tasks by considering both their length and deadlines while providing fault tolerance through the hybrid replication-resubmission method. Besides acquiring resources, it also releases resources based on their boot time to lessen costs due to reboots. The performance of the DFARM scheduler is compared to other scheduling algorithms, such as Random Selection, Round Robin, Minimum Completion Time, RALBA, and OG-RADL. With comparable execution performance, the proposed DFARM scheduler reduces task-rejection rates by 2.34–9.53 times compared to the state-of-the-art schedulers using two benchmark datasets.
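A hypothetical sketch of deadline- and length-aware prioritization in the spirit of DFARM follows; the slack-based ordering, field names, and the single reference-VM assumption are illustrative, not the published algorithm.

```python
# Hypothetical deadline- and length-aware task prioritization (illustrative only).
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    length_mi: float   # task length in million instructions
    deadline_s: float  # deadline, in seconds from now

def prioritize(tasks, reference_vm_mips):
    """Order tasks by slack (deadline minus estimated runtime on a reference VM);
    tasks with the least slack are scheduled first."""
    def slack(task):
        return task.deadline_s - task.length_mi / reference_vm_mips
    return sorted(tasks, key=slack)

# Example: on a 1000-MIPS reference VM, the tighter task t2 is scheduled first.
tasks = [Task("t1", 8000, 20.0), Task("t2", 2000, 5.0)]
print([t.task_id for t in prioritize(tasks, reference_vm_mips=1000)])  # ['t2', 't1']
```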