Authors: Zoha Azimi, Amritha Premkumar, Reza Farahani, Vignesh V Menon, Christian Timmerer, Radu Prodan

Venue: 32nd European Signal Processing Conference (EUSIPCO’24)

Abstract: Traditional per-title encoding approaches aim to maximize perceptual video quality by optimizing resolutions for each bitrate ladder representation. However, ensuring acceptable decoding times in video streaming, especially with the increased runtime complexity of modern codecs like Versatile Video Coding (VVC) compared to predecessors such as High Efficiency Video Coding (HEVC), is essential, as it leads to diminished buffering time, decreased energy consumption, and an improved Quality of Experience (QoE). This paper introduces a decoding complexity-sensitive bitrate ladder estimation scheme designed to optimize adaptive VVC streaming experiences. We design a customized bitrate ladder for the device configuration, ensuring that the decoding time remains below the threshold to mitigate adverse QoE issues such as rebuffering and to reduce energy consumption. The proposed scheme utilizes an eXtended PSNR (XPSNR)-optimized resolution prediction for each target bitrate, ensuring the highest possible perceptual quality within the constraints of device resolution and decoding time. Furthermore, it employs XGBoost-based models for predicting XPSNR, QP, and decoding time, utilizing the Inter-4K video dataset for training. The experimental results indicate that our approach achieves an average 28.39 % reduction in decoding time using the VVC Test Model (VTM). Additionally, it achieves bitrate savings of 3.7 % and 1.84 % to maintain almost the same PSNR and XPSNR, respectively, for a display resolution constraint of 2160p and a decoding time constraint of 32 s.
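To make the selection step concrete, the sketch below (illustrative only, not the authors' code) frames ladder construction as a constrained search: for each target bitrate, keep only resolutions that fit the display and the decoding-time budget, then pick the one with the highest predicted XPSNR. The two predictor functions are simplified stand-ins, with made-up constants, for the paper's trained XGBoost models.

```python
# Illustrative sketch of decoding-complexity-sensitive ladder construction.
import math

RESOLUTIONS = [360, 540, 720, 1080, 1440, 2160]  # vertical resolutions (p)

def predict_xpsnr(bitrate_kbps: float, height: int) -> float:
    # Stand-in model: quality rises with bitrate and saturates at a
    # resolution-dependent ceiling (numbers are invented for illustration).
    demand = height * 4.0  # kbps roughly needed to saturate this resolution
    return 30.0 + 18.0 * min(1.0, bitrate_kbps / demand) + 2.0 * math.log10(height / 360)

def predict_decoding_time(bitrate_kbps: float, height: int) -> float:
    # Stand-in model: decoding time scales with pixel count and bitrate.
    return 1e-6 * (height * height * 16 / 9) + 0.001 * bitrate_kbps

def build_ladder(bitrates_kbps, display_height=2160, max_decode_s=32.0):
    """Map each target bitrate to the best feasible resolution."""
    ladder = {}
    for br in bitrates_kbps:
        feasible = [h for h in RESOLUTIONS
                    if h <= display_height
                    and predict_decoding_time(br, h) <= max_decode_s]
        if feasible:
            ladder[br] = max(feasible, key=lambda h: predict_xpsnr(br, h))
    return ladder

print(build_ladder([1000, 5000, 10000, 20000]))
```

With the paper's actual models in place of the stand-ins, the same loop would yield the decoding-complexity-sensitive ladder.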


The Second Workshop on Serverless, Extreme-Scale, and Sustainable Graph Processing Systems (GraphSys ’24) took place in South Kensington, London, co-located with the 15th ACM/SPEC International Conference on Performance Engineering.

Reza Farahani gave a talk entitled “Serverless Workflow Management Systems on the Computing Continuum”.

Authors: Reza Farahani (AAU, Klagenfurt, Austria), Frank Loh (University of Würzburg, Germany), Dumitru Roman (SINTEF, Oslo, Norway), Radu Prodan (AAU, Klagenfurt, Austria)

Abstract: The growing desire among application providers for a cost model based on pay-per-use, combined with the need for a seamlessly integrated platform to manage the complex workflows of their applications, has spurred the emergence of a promising computing paradigm known as serverless computing. Although serverless computing was initially considered for cloud environments, it has recently been extended to other layers of the computing continuum, i.e., edge and fog. This extension emphasizes that the proximity of computational resources to data sources can further reduce costs and improve performance and energy efficiency. However, orchestrating complex application workflows composed of serverless functions across the computing continuum introduces new challenges. This paper investigates the opportunities and challenges introduced by serverless computing for workflow management systems (WMS) on the computing continuum. In addition, the paper provides a taxonomy of state-of-the-art WMSs and reviews their capabilities.
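To picture what such a WMS coordinates, here is a minimal, hypothetical Python sketch: an application workflow expressed as a DAG of named serverless functions, executed in dependency order. A real system would dispatch each step to a FaaS platform on the cloud, fog, or edge layer rather than invoking it locally; the step names and dependencies are invented for illustration.

```python
# Minimal sketch of a serverless workflow as a DAG executed in dependency order.
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical four-step workflow: each key depends on the steps it lists.
workflow = {
    "ingest":    [],
    "transform": ["ingest"],
    "analyze":   ["transform"],
    "report":    ["analyze"],
}

def invoke(step: str, context: dict) -> dict:
    # Placeholder for a FaaS invocation (e.g., an HTTP-triggered function).
    print(f"running {step}")
    context[step] = f"{step}-output"
    return context

context: dict = {}
for step in TopologicalSorter(workflow).static_order():
    invoke(step, context)
```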

Furthermore, Reza Farahani and the backend Graph-Massivizer team met to discuss the Graph-Massivizer toolkit integration plan.

Dragi Kimovski co-chaired the 7th Workshop on Hot Topics in Cloud Computing Performance (HotCloudPerf 2024) within the International Conference on Performance Engineering (ICPE). During the workshop, he presented a paper titled “Hypergraphs: Facilitating High-Order Modeling of the Computing Continuum.” This event, held at Imperial College London on May 11, 2024, focused on various aspects of cloud computing performance, including elasticity, performance isolation, and dependability.

On May 8th, 2024, Mathias, Tom, and a group of helpers organized the first internal generative AI mini hackathon. More than ten people participated and tried their hand at various forms of generative AI – text, image, sound, and 3D model generation. After 8 hours of coding and testing, the common goal of creating an engine to generate new Pokémon-like creatures started to take shape and achieved some presentable results! Much was learned about what is achievable in such a short time, and insights into many potential uses of generative AI were gained. The event also fostered contact between ITEC, Athena Lab, ISYS, and NES employees! Building on what was learned, a locally hosted LLM (akin to ChatGPT) for ITEC will be presented soon and possibly extended for university-wide use later. Thank you to everyone who attended; hopefully, similar events can be hosted again in the future!

On the weekend of April 27-28th, HaruCon, Carinthia’s youth pop culture convention, took place in Klagenfurt (https://www.harucon.at/). Felix, Tom, Claudia, Sebastian, and many Game Studies and Engineering students were present and represented GSE, TEWI, and the university. Tom and Sebastian held a workshop on how to enter the video game industry, while Felix held an introduction to video game analysis. The convention was an enormous success, with more than 2000 visitors over two days. Flyers, buttons, and stickers were handed out to everyone so that awareness of the university as part of Klagenfurt’s youth culture could continue to grow. Our brave helpers answered questions about studying at the university, especially video games, along with many other burning questions, more than a hundred times during the convention.

Authors: Zoha Azimi, Reza Farahani, Vignesh V Menon, Christian Timmerer, Radu Prodan

Venue: 16th International Conference on Quality of Multimedia Experience (QoMEX’24)

Abstract: As video streaming dominates Internet traffic, users constantly seek a better Quality of Experience (QoE), often resulting in increased energy consumption and a higher carbon footprint. The increasing focus on sustainability underscores the critical need to balance energy consumption and QoE in video streaming. This paper proposes a modular architecture that refines video encoding parameters by assessing video complexity and encoding settings for the prediction of energy consumption and video quality (based on Video Multimethod Assessment Fusion (VMAF)) using lightweight XGBoost models trained on the multi-dimensional video compression dataset (MVCD). We apply Explainable AI (XAI) techniques to identify the critical encoding parameters that influence the energy consumption and video quality prediction models and then tune them using a weighting strategy between energy consumption and video quality. The experimental results confirm that applying a suitable weighting factor to energy consumption in the x265 encoder results in a 46 % decrease in energy consumption, with a 4-point drop in VMAF, staying below the Just Noticeable Difference (JND) threshold.
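The weighting strategy can be illustrated with a small sketch (not the paper's code): each candidate encoding configuration is scored by a weighted combination of predicted energy and predicted VMAF, and the best-scoring one is kept. The two predictors below are crude stand-ins, with invented numbers, for the XGBoost models trained on MVCD.

```python
# Illustrative weighted energy/quality trade-off over encoding configurations.
from itertools import product

PRESETS = ["ultrafast", "medium", "veryslow"]
CRFS = [18, 23, 28]

def predict_energy(preset: str, crf: int) -> float:
    # Stand-in: slower presets and lower CRF cost more energy (made-up joules).
    cost = {"ultrafast": 50.0, "medium": 120.0, "veryslow": 400.0}[preset]
    return cost * (1.0 + (28 - crf) * 0.05)

def predict_vmaf(preset: str, crf: int) -> float:
    # Stand-in: quality improves with lower CRF and slower presets.
    gain = {"ultrafast": 0.0, "medium": 2.0, "veryslow": 4.0}[preset]
    return min(100.0, 70.0 + (28 - crf) * 2.0 + gain)

def select_config(weight: float = 0.5):
    """weight -> importance of energy (0: quality only, 1: energy only)."""
    def score(cfg):
        preset, crf = cfg
        energy = predict_energy(preset, crf) / 400.0  # normalize to [0, 1]
        vmaf = predict_vmaf(preset, crf) / 100.0
        return weight * energy - (1.0 - weight) * vmaf  # lower is better
    return min(product(PRESETS, CRFS), key=score)

print(select_config(weight=0.7))
```

Raising the weight steers the selection toward cheaper-to-encode configurations, which is the mechanism behind the reported energy savings at a small VMAF cost.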

Title: Cloud storage cost: a taxonomy and survey

Authors: Akif Quddus Khan, Mihhail Matskin, Radu Prodan, Christoph Bussler, Dumitru Roman, Ahmet Soylu

Venue: World Wide Web – Internet and Web Information Systems journal (https://link.springer.com/journal/11280)

Abstract: Cloud service providers offer application providers virtually infinite storage and computing resources, while providing cost-efficiency and various other quality of service (QoS) properties through a storage-as-a-service (StaaS) approach. Organizations also use multi-cloud or hybrid solutions by combining multiple public and/or private cloud service providers to avoid vendor lock-in, achieve high availability and performance, and optimise cost. Indeed, cost is one of the important factors for organizations while adopting cloud storage; however, cloud storage providers offer complex pricing policies, including the actual storage cost and the cost related to additional services (e.g., network usage cost). In this article, we provide a detailed taxonomy of cloud storage cost and a taxonomy of other QoS elements, such as network performance, availability, and reliability. We also discuss various cost trade-offs, including storage and computation, storage and cache, and storage and network. Finally, we provide a cost comparison across different storage providers under different contexts and a set of user scenarios to demonstrate the complexity of cost structure and discuss existing literature for cloud storage selection and cost optimization. We hope that the work presented in this article will provide decision-makers and researchers focusing on cloud storage selection for data placement, cost modelling, and cost optimization with a better understanding of and insights into the elements contributing to the storage cost and this complex problem domain.
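A back-of-the-envelope sketch shows why such comparisons are non-trivial: the total monthly bill combines storage, request, and network-egress charges, so the provider with the cheapest per-GB storage is not necessarily the cheapest overall. The provider names and prices below are hypothetical placeholders, not actual offerings.

```python
# Toy comparison of total monthly cost across hypothetical storage providers.
from dataclasses import dataclass

@dataclass
class PricePlan:
    storage_gb_month: float  # $/GB-month
    put_per_1k: float        # $ per 1000 write requests
    get_per_1k: float        # $ per 1000 read requests
    egress_gb: float         # $/GB transferred out

PLANS = {
    "provider_a": PricePlan(0.023, 0.005, 0.0004, 0.09),
    "provider_b": PricePlan(0.018, 0.010, 0.0010, 0.12),  # cheaper storage
}

def monthly_cost(plan: PricePlan, gb_stored: float, puts: int, gets: int,
                 gb_egress: float) -> float:
    return (gb_stored * plan.storage_gb_month
            + puts / 1000 * plan.put_per_1k
            + gets / 1000 * plan.get_per_1k
            + gb_egress * plan.egress_gb)

# A read-heavy workload makes the "cheaper storage" provider more expensive.
for name, plan in PLANS.items():
    print(name, round(monthly_cost(plan, 5000, 100_000, 5_000_000, 2000), 2))
```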


Radu participated and gave a keynote talk at ICONIC 2024. To mark the 100th birth anniversary of Jean Bartik, one of the original six programmers of the ENIAC computer, LPU hosted “BARTIK100 – International Conference on Networks, Intelligence and Computing (ICONIC-2024).” The conference provided a platform for scientists, researchers, academics, industry professionals, and students to share knowledge and discuss in-depth research findings on recent disruptions and developments in computing.

On Thursday, 25.04.2024, Kseniia organized and participated with Marie Biedermann and Rachel Gorden (student and graduate of Game Studies and Engineering) in the Women in Data Science Villach 2024 conference. During this event, they conducted a workshop to acquaint participants with the online interactive fiction tool Twine, guiding them in creating their first game projects. Besides explaining the main features of Twine, they also talked about how data science is used in games and introduced the educational and knowledge dissemination potential of gamification.

Title: Cataract-1K Dataset for Deep-Learning-Assisted Analysis of Cataract Surgery Videos

Authors: Negin Ghamsarian, Yosuf El-Shabrawi, Sahar Nasirihaghighi, Doris Putzgruber-Adamitsch, Martin Zinkernagel, Sebastian Wolf, Klaus Schoeffmann, and Raphael Sznitman

Abstract: In recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons’ skills, operating room management, and overall surgical outcomes. However, the progression of deep-learning-powered surgical technologies is profoundly reliant on large-scale datasets and annotations. In particular, surgical scene understanding and phase recognition stand as pivotal pillars within the realm of computer-assisted surgery and post-operative assessment of cataract surgery videos. In this context, we present the largest cataract surgery video dataset that addresses diverse requisites for constructing computerized surgical workflow analysis and detecting post-operative irregularities in cataract surgery. We validate the quality of annotations by benchmarking the performance of several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. In addition, we initiate research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance in cataract surgery videos. The dataset and annotations are publicly available in Synapse.
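As a flavor of how phase-recognition benchmarks on such a dataset are scored, the following small sketch computes frame-level accuracy and per-phase recall from predicted versus annotated phase labels. The labels here are hypothetical examples, not taken from the dataset.

```python
# Toy frame-level evaluation for surgical phase recognition.
from collections import Counter

def phase_metrics(true_phases, pred_phases):
    assert len(true_phases) == len(pred_phases)
    correct = sum(t == p for t, p in zip(true_phases, pred_phases))
    accuracy = correct / len(true_phases)
    totals, hits = Counter(true_phases), Counter()
    for t, p in zip(true_phases, pred_phases):
        if t == p:
            hits[t] += 1
    per_phase_recall = {phase: hits[phase] / totals[phase] for phase in totals}
    return accuracy, per_phase_recall

true = ["incision", "incision", "phaco", "phaco", "irrigation"]
pred = ["incision", "phaco",    "phaco", "phaco", "irrigation"]
print(phase_metrics(true, pred))  # (0.8, {'incision': 0.5, 'phaco': 1.0, ...})
```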


The paper is available here: https://doi.org/10.1038/s41597-024-03193-4