Title: Cloud storage cost: a taxonomy and survey

Authors: Akif Quddus Khan, Mihhail Matskin, Radu Prodan, Christoph Bussler, Dumitru Roman, Ahmet Soylu

Venue: World Wide Web – Internet and Web Information Systems (Springer), https://link.springer.com/journal/11280

Abstract: Cloud service providers offer application providers virtually infinite storage and computing resources, while providing cost-efficiency and various other quality of service (QoS) properties through a storage-as-a-service (StaaS) approach. Organizations also use multi-cloud or hybrid solutions by combining multiple public and/or private cloud service providers to avoid vendor lock-in, achieve high availability and performance, and optimise cost. Indeed, cost is one of the important factors for organizations when adopting cloud storage; however, cloud storage providers offer complex pricing policies, including the actual storage cost and the cost related to additional services (e.g., network usage cost). In this article, we provide a detailed taxonomy of cloud storage cost and a taxonomy of other QoS elements, such as network performance, availability, and reliability. We also discuss various cost trade-offs, including storage and computation, storage and cache, and storage and network. Finally, we provide a cost comparison across different storage providers under different contexts and a set of user scenarios to demonstrate the complexity of the cost structure, and we discuss the existing literature on cloud storage selection and cost optimization. We aim for the work presented in this article to give decision-makers and researchers focusing on cloud storage selection for data placement, cost modelling, and cost optimization a better understanding of, and insights into, the elements contributing to storage cost and this complex problem domain.
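
To illustrate why the total bill is harder to reason about than the per-gigabyte storage price alone, the following Python sketch combines storage, request, and egress charges into a monthly cost for two hypothetical tiers. All prices and the workload figures are made-up placeholders for illustration only; they are not taken from the paper and do not reflect any provider's actual rates.

```python
# Purely illustrative sketch: a monthly object-storage bill combines several
# cost components beyond the raw storage price. All prices are made-up
# placeholders, NOT actual provider rates and NOT taken from the paper.

from dataclasses import dataclass


@dataclass
class PriceList:
    storage_per_gb_month: float   # cost of keeping 1 GB stored for a month
    put_per_1k_requests: float    # cost per 1,000 write (PUT) requests
    get_per_1k_requests: float    # cost per 1,000 read (GET) requests
    egress_per_gb: float          # network cost per GB transferred out


def monthly_cost(prices: PriceList, stored_gb: float,
                 put_requests: int, get_requests: int,
                 egress_gb: float) -> float:
    """Sum the individual cost components into one monthly figure."""
    storage = stored_gb * prices.storage_per_gb_month
    requests = (put_requests / 1_000) * prices.put_per_1k_requests \
             + (get_requests / 1_000) * prices.get_per_1k_requests
    network = egress_gb * prices.egress_per_gb
    return storage + requests + network


if __name__ == "__main__":
    # Hypothetical "hot" vs. "cold" tiers: cold storage is cheaper to keep
    # but more expensive to read and transfer, so the cheapest tier depends
    # on the access pattern.
    hot = PriceList(0.023, 0.005, 0.0004, 0.09)
    cold = PriceList(0.004, 0.010, 0.0100, 0.12)
    workload = dict(stored_gb=500, put_requests=50_000,
                    get_requests=2_000_000, egress_gb=200)
    print(f"hot tier : ${monthly_cost(hot, **workload):8.2f}")
    print(f"cold tier: ${monthly_cost(cold, **workload):8.2f}")
```

Under such a model, the tier with the lower per-gigabyte price can end up costlier once request and egress charges are included, which is exactly the kind of trade-off the survey's taxonomy makes explicit.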

 

Radu participated and gave a keynote talk at ICONIC 2024. To mark the 100th birth anniversary of Jean Bartik, one of the original six programmers of the ENIAC computer, LPU hosted “BARTIK100 – International Conference on Networks, Intelligence and Computing (ICONIC-2024).” The conference provided a platform for scientists, researchers, academics, industry practitioners, and students to exchange knowledge and share insights through deep-dive research findings on recent disruptions and developments in computing.

 

On Thursday, 25 April 2024, Kseniia organized and took part, together with Marie Biedermann and Rachel Gorden (a student and a graduate of Game Studies and Engineering), in the Women in Data Science Villach 2024 Conference. During this event, they conducted a workshop introducing participants to the online interactive fiction tool Twine and guiding them in creating their first game projects. Besides explaining the main features of Twine, they also talked about how data science is used in games and introduced the educational and knowledge-dissemination potential of gamification.

 

 

Title: Cataract-1K Dataset for Deep-Learning-Assisted Analysis of Cataract Surgery Videos

Authors: Negin Ghamsarian, Yosuf El-Shabrawi, Sahar Nasirihaghighi, Doris Putzgruber-Adamitsch, Martin Zinkernagel, Sebastian Wolf, Klaus Schoeffmann, and Raphael Sznitman

Abstract: In recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons’ skills, operating room management, and overall surgical outcomes. However, the progression of deep-learning-powered surgical technologies is profoundly reliant on large-scale datasets and annotations. In particular, surgical scene understanding and phase recognition stand as pivotal pillars within the realm of computer-assisted surgery and post-operative assessment of cataract surgery videos. In this context, we present the largest cataract surgery video dataset that addresses diverse requisites for constructing computerized surgical workflow analysis and detecting post-operative irregularities in cataract surgery. We validate the quality of annotations by benchmarking the performance of several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. In addition, we initiate research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance in cataract surgery videos. The dataset and annotations are publicly available on Synapse.

 

The paper is available here: https://doi.org/10.1038/s41597-024-03193-4

Authors: Reza Farahani (AAU, Austria), and Vignesh V Menon (Fraunhofer HHI, Berlin, Germany)

Venue: The 12th European Workshop on Visual Information Processing (EUVIP 2024)

08–11 September 2024 in Geneva, Switzerland

The 15th ACM Multimedia Systems Conference (MMSys 2024) was held from 15 to 18 April 2024 in Bari, Italy. MMSys provides a forum for leading researchers from academia and industry to present and share their latest findings in multimedia systems.

Christian Timmerer, Mathias Lux, Samira Afzal, Christian Bauer, Daniele Lorenzi, Emanuele Artioli, Mohammad Ghasempour, Shivi Vats, and Armin Lachini participated and presented ATHENA, GAIA, and SPIRIT contributions:

Within the Organizing Committee, Christian Timmerer served as TPC Chair and Farzad Tashtarian as Proceedings Chair.

 

 

On Friday, 12 April 2024, seven representatives of the Pioneers of Game Development Austria (https://pgda.at/) visited the University of Klagenfurt for the event Press Start: Your Journey into Game Development, organised by the master’s programme Game Studies and Engineering. The PGDA members, video game developers from all over Austria, gave insightful talks, provided feedback on student game projects, and offered personal guidance during a mentoring café. Attracting about 50 GSE students, academic staff, senate members, and many prospective students, the event can be considered the most successful in recent GSE history.

 

When Tom Tuček talks about the world, he always has to be specific: does he mean the real world or virtual worlds? The doctoral student at the Institute of Information Technology is currently working on digital humans, that is, virtual characters we encounter in video games, for example. Tom Tuček wants to find out how interacting with digital humans equipped with new artificial intelligence affects players.

Read the whole interview here: https://www.aau.at/blog/das-spiel-mit-dem-digitalen-menschen/


Title: DeepVCA: Deep Video Complexity Analyzer

Authors: Hadi Amirpour (AAU, Klagenfurt, Austria), Klaus Schoeffmann (AAU, Klagenfurt, Austria), Mohammad Ghanbari (University of Essex, UK), Christian Timmerer (AAU, Klagenfurt, Austria)

Abstract: Video streaming and its applications are growing rapidly, making video optimization a primary target for content providers looking to enhance their services. Enhancing the quality of videos requires the adjustment of different encoding parameters such as bitrate, resolution, and frame rate. To avoid brute force approaches for predicting optimal encoding parameters, video complexity features are typically extracted and utilized. To predict optimal encoding parameters effectively, content providers traditionally use unsupervised feature extraction methods, such as ITU-T’s Spatial Information (SI) and Temporal Information (TI) to represent the spatial and temporal complexity of video sequences. Recently, Video Complexity Analyzer (VCA) was introduced to extract DCT-based features to represent the complexity of a video sequence (or parts thereof). These unsupervised features, however, cannot accurately predict video encoding parameters. To address this issue, this paper introduces a novel supervised feature extraction method named DeepVCA, which extracts the spatial and temporal complexity of video sequences using deep neural networks. In this approach, the encoding bits required to encode each frame in intra-mode and inter-mode are used as labels for spatial and temporal complexity, respectively. Initially, we benchmark various deep neural network structures to predict spatial complexity. We then leverage the similarity of features used to predict the spatial complexity of the current frame and its previous frame to rapidly predict temporal complexity. This approach is particularly useful as the temporal complexity may depend not only on the differences between two consecutive frames but also on their spatial complexity. Our proposed approach demonstrates significant improvement over unsupervised methods, especially for temporal complexity. As an example application, we verify the effectiveness of these features in predicting the encoding bitrate and encoding time of video sequences, which are crucial tasks in video streaming. The source code and dataset are available at https://github.com/cd-athena/DeepVCA.
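
For readers who want a concrete picture of the idea, here is a minimal, hypothetical PyTorch sketch of the setup described above: a small CNN regresses per-frame spatial complexity, and temporal complexity is predicted from the concatenated features of the current and previous frame. The layer sizes and names are illustrative assumptions, not the actual DeepVCA architecture; the real implementation is in the linked repository.

```python
# Hypothetical sketch of the DeepVCA idea (not the actual architecture;
# see https://github.com/cd-athena/DeepVCA for the real implementation).
# A small CNN regresses per-frame spatial complexity, and temporal
# complexity is predicted from the features of the current and previous
# frame. Labels would be the intra-mode / inter-mode encoding bits.

import torch
import torch.nn as nn


class ComplexityNet(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Shared backbone turning a grayscale frame into a feature vector.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Spatial head: current-frame features -> intra-mode bits.
        self.spatial_head = nn.Linear(feat_dim, 1)
        # Temporal head: current + previous frame features -> inter-mode bits.
        self.temporal_head = nn.Linear(2 * feat_dim, 1)

    def forward(self, frame: torch.Tensor, prev_frame: torch.Tensor):
        f_cur = self.backbone(frame)
        f_prev = self.backbone(prev_frame)
        spatial = self.spatial_head(f_cur)
        temporal = self.temporal_head(torch.cat([f_cur, f_prev], dim=1))
        return spatial, temporal


if __name__ == "__main__":
    model = ComplexityNet()
    cur = torch.rand(4, 1, 224, 224)   # batch of 4 grayscale frames
    prev = torch.rand(4, 1, 224, 224)
    s, t = model(cur, prev)
    print(s.shape, t.shape)  # torch.Size([4, 1]) torch.Size([4, 1])
```

Reusing the backbone features of the previous frame is what keeps the temporal prediction fast: those features are already available once the previous frame’s spatial complexity has been estimated, which mirrors the rapid temporal-complexity prediction described in the abstract.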

 

Author: Emanuele Artioli

Abstract: Video streaming stands as the cornerstone of telecommunication networks, constituting over 60% of mobile data traffic as of June 2023. The paramount challenge faced by video streaming service providers is ensuring high Quality of Experience (QoE) for users. In HTTP Adaptive Streaming (HAS), including DASH and HLS, video content is encoded at multiple quality versions, with an Adaptive Bitrate (ABR) algorithm dynamically selecting versions based on network conditions. Concurrently, Artificial Intelligence (AI) is revolutionizing the industry, particularly in content recommendation and personalization. Leveraging user data and advanced algorithms, AI enhances user engagement, satisfaction, and video quality through super-resolution and denoising techniques.
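
As a toy illustration of the ABR selection step mentioned above, the sketch below picks the highest rendition that fits under a safety-scaled throughput estimate. The bitrate ladder and safety margin are arbitrary assumptions, not part of the proposal, and deployed ABR algorithms (buffer-based, hybrid, or learning-based) are considerably more elaborate.

```python
# Minimal, hypothetical throughput-based ABR rule: pick the highest rendition
# whose bitrate fits under a safety-scaled throughput estimate. The ladder and
# margin are arbitrary illustrative values, not taken from the proposal.

BITRATE_LADDER_KBPS = [300, 750, 1500, 3000, 6000]  # hypothetical renditions


def select_bitrate(throughput_kbps: float, safety: float = 0.8) -> int:
    """Return the highest rendition below the scaled throughput estimate,
    falling back to the lowest rendition if none fits."""
    budget = throughput_kbps * safety
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return max(candidates) if candidates else BITRATE_LADDER_KBPS[0]


if __name__ == "__main__":
    for measured in (250, 1200, 4500, 9000):
        print(f"{measured:5d} kbit/s measured -> request "
              f"{select_bitrate(measured)} kbit/s rendition")
```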

However, challenges persist, such as real-time processing on resource-constrained devices, the need for diverse training datasets, privacy concerns, and model interpretability. Despite these hurdles, the promise of Generative Artificial Intelligence emerges as a transformative force. Generative AI, capable of synthesizing new data based on learned patterns, holds vast potential in the video streaming landscape. In the context of video streaming, it can create realistic and immersive content, adapt in real time to individual preferences, and optimize video compression for seamless streaming in low-bandwidth conditions.

This research proposal outlines a comprehensive exploration at the intersection of advanced AI algorithms and digital entertainment, focusing on the potential of generative AI to elevate video quality, user interactivity, and the overall streaming experience. The objective is to integrate generative models into video streaming pipelines, unraveling novel avenues that promise a future of dynamic, personalized, and visually captivating streaming experiences for viewers.