IEEE Cloud Summit 2022, https://www.ieeecloudsummit.org/

Authors: Radu Prodan, Dragi Kimovski, Andrea Bartolini, Michael Cochez,
Alexandru Iosup, Evgeny Kharlamov, Joze Rozanec, Laurentiu Vasiliu, Ana
Lucia Varbanescu

Abstract: The Graph-Massivizer project, funded by the Horizon Europe research and innovation program, researches and develops a high-performance, scalable, and sustainable platform for information processing and reasoning based on the massive graph (MG) representation of extreme data. It delivers a toolkit of five open-source software tools and FAIR graph datasets covering the sustainable lifecycle of processing extreme data as MGs. The tools focus on holistic usability (from extreme data ingestion and MG creation), automated intelligence (through analytics and reasoning), performance modelling, and environmental sustainability tradeoffs, supported by credible data-driven evidence across the computing continuum. Automated operation based on the emerging serverless computing paradigm enables experienced and novice stakeholders from a broad range of large and small organisations to capitalise on extreme data through MG programming and processing.

Graph-Massivizer validates its innovation on four complementary use cases considering their extreme data properties and coverage of the three sustainability pillars (economy, society, and environment): sustainable green finance, global environment protection foresight, green AI for the sustainable automotive industry, and a data centre digital twin for exascale computing. Graph-Massivizer promises 70% more efficient analytics than AliGraph and 30% improved energy awareness for ETL storage operations compared with Amazon Redshift. Furthermore, it aims to demonstrate a possible two-fold improvement in data centre energy efficiency and over 25% lower greenhouse gas emissions for basic graph operations.

18th International Conference on Network and Service Management (CNSM 2022)

Thessaloniki, Greece | 31 October – 4 November 2022

Conference Website

Minh Nguyen (Alpen-Adria-Universität Klagenfurt, Austria), Babak Taraghi (Alpen-Adria-Universität Klagenfurt, Austria), Abdelhak Bentaleb (National University of Singapore, Singapore), Roger Zimmermann (National University of Singapore, Singapore), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Considering network conditions, video content, and viewer device type/screen resolution when constructing a bitrate ladder is necessary to deliver the best Quality of Experience (QoE). A large-screen device like a TV needs a high bitrate with high resolution to provide good visual quality, whereas a small one like a phone requires a low bitrate with low resolution. In addition, encoding high-quality levels at the server side when the network is unable to deliver them causes unnecessary costs for the content provider. Recently, the Common Media Client Data (CMCD) standard has been proposed, which defines the data collected at the client and sent to the server with its HTTP requests. This data is useful for log analysis, quality of service/experience monitoring, and delivery improvements.
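To make the CMCD mechanism concrete, here is a minimal sketch of how a client might attach CMCD data to a segment request as the "CMCD" query argument. The key names (br, tb, bl, sid) are defined by the CMCD specification, but the URL, session identifier, and all values below are made-up illustrations, not material from the paper.

```python
# Illustrative sketch (not from the paper): attaching CMCD data to a segment
# request as the "CMCD" query argument. Keys: br = encoded bitrate (kbps),
# tb = top bitrate (kbps), bl = buffer length (ms), sid = session id.
# The URL and all values are made up.
from urllib.parse import quote

cmcd = {
    "br": 3200,                          # bitrate of the requested representation
    "tb": 8100,                          # highest bitrate the client can render
    "bl": 21300,                         # current buffer length in milliseconds
    "sid": '"6e2fb550-c457-11ec-9d64"',  # session id (string values are quoted)
}
payload = ",".join(f"{key}={value}" for key, value in sorted(cmcd.items()))
segment_url = "https://cdn.example.com/video/seg_0001.m4s?CMCD=" + quote(payload)
print(segment_url)
```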


In this paper, we introduce a CMCD-Aware per-Device bitrate LADder construction (CADLAD) that leverages CMCD to address the above issues. CADLAD comprises components at both the client and server sides. The client calculates the top bitrate (tb) — a CMCD parameter indicating the highest bitrate that can be rendered at the client — and sends it to the server together with its device type and screen resolution. The server decides on a suitable bitrate ladder, whose maximum bitrate and resolution are based on the CMCD parameters, for the client device, with the purpose of providing maximum QoE while minimizing the delivered data. CADLAD has two versions to work in Video on Demand (VoD) and live streaming scenarios. Our CADLAD is client agnostic; hence, it can work with any player and ABR algorithm at the client. The experimental results show that CADLAD is able to increase the QoE by 2.6x while saving 71% of delivered data, compared to an existing bitrate ladder of an available video dataset. We implement our idea within CAdViSE — an open-source testbed for reproducibility.
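As a rough illustration of the server-side idea only, the sketch below caps a reference bitrate ladder using the CMCD top bitrate (tb) and the reported screen resolution. The reference ladder and the capping rule are assumptions for the example; this is not CADLAD's actual implementation.

```python
# Minimal sketch of the server-side idea behind CADLAD (not the authors' code):
# keep only the rungs of a reference bitrate ladder that the client can render,
# based on the CMCD top bitrate (tb) and the reported screen resolution.
# The reference ladder and the capping rule are illustrative assumptions.

# (bitrate in kbps, width, height) -- hypothetical reference ladder.
REFERENCE_LADDER = [
    (235, 320, 240),
    (750, 640, 360),
    (1750, 1280, 720),
    (4300, 1920, 1080),
    (8100, 3840, 2160),
]

def build_ladder(tb_kbps: int, screen_width: int, screen_height: int):
    """Return the subset of the reference ladder suitable for this device."""
    ladder = [
        (bitrate, width, height)
        for (bitrate, width, height) in REFERENCE_LADDER
        if bitrate <= tb_kbps and width <= screen_width and height <= screen_height
    ]
    # Always keep at least the lowest rung so playback can start.
    return ladder or [REFERENCE_LADDER[0]]

if __name__ == "__main__":
    # A phone-like device with a 1280x720 screen and tb = 2000 kbps
    # receives only the three lowest renditions.
    print(build_ladder(2000, 1280, 720))
```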

 

IEEE Global Communications Conference (GLOBECOM)

December 4-8, 2022 | Rio de Janeiro, Brazil
Conference Website

Authors: Reza Farahani (Alpen-Adria-Universität Klagenfurt, Austria), Abdelhak Bentaleb (National University of Singapore, Singapore), Ekrem Cetinkaya (Alpen-Adria-Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), Roger Zimmermann (National University of Singapore, Singapore), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: A cost-effective, scalable, and flexible architecture that supports low-latency and high-quality live video streaming is still a challenge for Over-The-Top (OTT) service providers. To cope with this issue, this paper leverages Peer-to-Peer (P2P), Content Delivery Network (CDN), edge computing, Network Function Virtualization (NFV), and distributed video transcoding paradigms to introduce a hybRId P2P-CDN arcHiTecture for livE video stReaming (RICHTER). We first introduce RICHTER’s multi-layer architecture and design an action tree that considers all feasible resources provided by peers, edge, and CDN servers for serving peer requests with minimum latency and maximum quality. We then formulate the problem as an optimization model executed at the edge of the network. We present an Online Learning (OL) approach that leverages an unsupervised Self-Organizing Map (SOM) to (i) alleviate the time complexity issue of the optimization model and (ii) make it a suitable solution for large-scale scenarios by enabling decisions for groups of requests instead of for single requests. Finally, we implement the RICHTER framework, conduct our experiments on a large-scale cloud-based testbed including 350 HAS players, and compare its effectiveness with baseline systems. The experimental results illustrate that RICHTER outperforms baseline schemes in terms of users’ Quality of Experience (QoE), latency, and network utilization, by at least 59%, 39%, and 70%, respectively.
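To illustrate just the grouping step, the toy sketch below trains a small Self-Organizing Map over request feature vectors so that a serving decision can be made once per group rather than per request. The grid size, the features, and the training schedule are illustrative assumptions, not RICHTER's actual model or feature set.

```python
# Toy Self-Organizing Map that groups request feature vectors so a serving
# decision can be made per group instead of per request (illustrative only;
# not RICHTER's optimization model or its real feature set).
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(3, 3), iters=500, lr0=0.5, sigma0=1.0):
    """Train a small SOM on data of shape (n_samples, n_features) in [0, 1]."""
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    coords = np.stack(
        np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1
    ).astype(float)
    for t in range(iters):
        lr = lr0 * np.exp(-t / iters)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)    # decaying neighbourhood radius
        x = data[rng.integers(len(data))]
        # Best-matching unit: the node whose weight vector is closest to x.
        bmu = np.unravel_index(
            np.argmin(((weights - x) ** 2).sum(axis=-1)), (rows, cols)
        )
        # Pull the BMU and its neighbours towards x.
        dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        influence = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * influence * (x - weights)
    return weights

def assign_group(weights, x):
    """Map one request feature vector to its SOM cell (its group)."""
    rows, cols, _ = weights.shape
    return np.unravel_index(
        np.argmin(((weights - x) ** 2).sum(axis=-1)), (rows, cols)
    )

# Hypothetical normalized request features, e.g.
# [requested quality level, playback deadline, available peer upload capacity].
requests = rng.random((200, 3))
som_weights = train_som(requests)
groups = [assign_group(som_weights, r) for r in requests]
# The edge-side optimization can now be solved once per SOM cell (group).
```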

From August 16 to 19, 2022, a CardioHPC project meeting took place in Skopje, North Macedonia. Radu Prodan, Andrei Amza, and Sashko Ristov participated on behalf of AAU.

From August 1 to 26, 2022, Hamza Baniata, a PhD candidate at the Department of Computer Science, University of Szeged, Hungary, visited the Institute of Information Technology (ITEC) of the University of Klagenfurt, Austria. Under the joint supervision of Prof. Attila Kertesz (SZTE) and Prof. Radu Prodan (ITEC), Hamza carried out several research activities related to the simulation of Blockchain and Fog Computing applications, the enhancement of the FoBSim simulation tool, and the integration of Machine Learning with Blockchain technology. The visit was encouraged and funded by the European COST program under action identifier CA19135 (CERCIRAS), in which Attila, Radu, and Hamza are active members. The scientific results of this research visit are currently being finalized for dissemination at an international scientific conference.

Vignesh V Menon

ACM Multimedia Conference – Doctoral Symposium Track

Lisbon, Portugal | 10-14 October 2022

Vignesh V Menon (Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt)

Abstract: Rapid growth in multimedia streaming traffic over the Internet motivates the research and further investigation of the video coding performance of such services in terms of speed and Quality of Experience (QoE). HTTP Adaptive Streaming (HAS) is today’s de-facto standard to deliver the highest possible video quality to clients. In HAS, the same video content is encoded at multiple bitrates, resolutions, framerates, and coding formats, called representations. This study aims to (i) provide fast and compression-efficient multi-bitrate, multi-resolution representations, (ii) provide fast and compression-efficient multi-codec representations, (iii) improve the encoding efficiency of Video on Demand (VoD) streaming using content-adaptive encoding optimizations, and (iv) provide per-title optimized encoding schemes for live streaming applications to decrease storage or delivery costs and/or increase QoE.

The ideal video compression system for HAS envisioned in this doctoral study.
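To make the notion of multi-bitrate, multi-resolution representations concrete, the sketch below encodes one source video into a small HAS ladder with ffmpeg. The ladder, file names, and encoder settings are plain illustrative defaults, not the content-adaptive or per-title optimized schemes proposed in this doctoral study.

```python
# Minimal sketch: encode one source into a small multi-bitrate, multi-resolution
# HAS ladder with ffmpeg/x265. The ladder, file names, and settings are plain
# illustrative defaults, not the optimized schemes studied in this thesis.
import subprocess

SOURCE = "input.mp4"          # hypothetical source file
LADDER = [                    # (height in pixels, target bitrate in kbps)
    (360, 750),
    (720, 2500),
    (1080, 4500),
]

for height, bitrate in LADDER:
    output = f"rep_{height}p_{bitrate}k.mp4"
    cmd = [
        "ffmpeg", "-y", "-i", SOURCE,
        "-c:v", "libx265",            # HEVC encoding
        "-b:v", f"{bitrate}k",        # target bitrate for this representation
        "-vf", f"scale=-2:{height}",  # set height, keep aspect ratio
        "-an",                        # video-only representation
        output,
    ]
    subprocess.run(cmd, check=True)
```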

 


Secure Reversible Data Hiding in Encrypted Images based on Classification Encryption Difference

IEEE 24th Workshop on MultiMedia Signal Processing (MMSP)

September 26-28, 2022 | Shanghai, China

Authors: Lingfeng Qu (Southwest Jiaotong University), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Mohammad Ghanbari (University of Essex, UK), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Hongjie He (Southwest Jiaotong University)


Abstract: This paper introduces an algorithm to improve the security, efficiency, and embedding capacity of reversible data hiding in encrypted images (RDH-EI). It is based on classification encryption difference and adaptive fixed-length coding. First, the prediction-error image is obtained, the differences whose bin values in the difference histogram exceed the encryption threshold are identified, and these are further modified to obtain the embedding threshold range. Then, under the condition that the differences inside and outside the threshold range are not confused, the differences within the threshold are only scrambled, while the differences outside the threshold are scrambled and mod-encrypted. After obtaining the encrypted image, an adaptive difference fixed-length coding method is proposed to encode and compress the differences within the threshold. The secret data is embedded in the multiple most significant bits of the encoded differences. Experimental results show that the embedding capacity of the proposed algorithm is improved compared with the state-of-the-art algorithm.
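For intuition about the first step described above, the toy sketch below computes a prediction-error image with a simple left-neighbour predictor and measures how many errors fall inside a candidate threshold, i.e. the portion that would later be compressed with fixed-length codes to make room for the secret data. The predictor, the synthetic image, and the capacity measure are illustrative assumptions, not the paper's exact classification or encryption scheme.

```python
# Toy sketch of the prediction-error step (illustrative only; not the paper's
# exact classification or encryption scheme). A left-neighbour predictor gives
# the prediction-error image, and we check how many errors fall inside a
# candidate threshold, i.e. the portion that fixed-length coding would compress
# to free space for the secret data.
import numpy as np

def prediction_errors(image: np.ndarray) -> np.ndarray:
    """Prediction errors using the left neighbour as a simple predictor."""
    img = image.astype(np.int16)
    errors = img.copy()
    errors[:, 1:] = img[:, 1:] - img[:, :-1]  # first column is kept as-is
    return errors

def within_threshold_ratio(errors: np.ndarray, threshold: int) -> float:
    """Fraction of prediction errors inside [-threshold, threshold]."""
    return float(np.mean(np.abs(errors) <= threshold))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Smooth synthetic 8-bit image: most prediction errors are small.
    image = (np.cumsum(rng.integers(-2, 3, size=(64, 64)), axis=1) % 256).astype(np.uint8)
    errors = prediction_errors(image)
    for t in (1, 3, 7):
        print(f"threshold {t}: {within_threshold_ratio(errors, t):.2%} of pixels inside")
```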

During the visit of our Minister of Education, Martin Polaschek, Hermann Hellwagner gave an overview of drone research at the AAU and the use of the drone hall.

Read more about Mr. Polaschek's visit here.

In July-August 2022, the ATHENA Christian Doppler Laboratory hosted four interns working on the following topics:

  • Fabio Zinner: A Study and Evaluation on HTTP Adaptive Video Streaming using Mininet
  • Moritz Pecher: Dataset Creation and HAS Basics
  • Per-Luca Thalmann: Codec-war: is it necessary? Welcome to the multi-codec world
  • Georg Kelih: Server Client Simulator for QoE with practical Implementation

At the end of their internships, they presented their work and the results they achieved, and received official certificates from the university. We believe the joint work was beneficial for both the laboratory and the interns. We would like to thank the interns for their genuine interest, productive work, and excellent feedback about our laboratory.

Fabio Zinner: In my four weeks, I had an amazingly practical and theoretical experience which is very important for my future practical and academic line of work! It was great and fascinating working with Python, Mininet, Linux, FFMpeg, Gpac, Iperf, etc. I really liked working with ATHENA, and the experience I gathered was exceptional. Also, I am very happy that I had Reza Farahani as my supervisor!

Per-Luca Thalmann: I really enjoyed my 4 weeks at ATHENA. At first, I had to read a lot of articles and papers to get a basic understanding of video codecs and encoding. As I started my main project, which evaluated the performance of modern codecs with different video complexities, I noticed that everything I had read before helped me progress faster towards my end goal. After I got the results of my script, which ran for over a week, I also noticed some outcomes that were not expected: basically, that older codecs achieve higher scores than their successors at some very specific settings. Whenever I got stuck or had any questions, my supervisor, Vignesh, helped me. I did not only improve my technical knowledge, but I also got a lot of insights into how research works, what motivates research, and what the process of scientific research looks like.

Georg Kelih: I worked at ATHENA as an intern for a month and was tasked with building a simulator that models the server-client communication (ABR, bitrate ladder, resource allocation) and shows the results in a graph, as well as a server-client script where the server runs on localhost and the client requests segments and plays them using python-vlc.
My daily routine was pretty chill: not only did we have just 30 hours of work per week, but the programming was also quite fun and challenging. My day looked something like this: I get up, go to work, play a round of table soccer, and then start working. I open Visual Studio Code and write the code I thought about yesterday, hope that it runs, but it just shows a few error messages. I start debugging, then notice that it is already time to eat something and that I am hungry. After lunch, I finally find the silly mistake I made, think about the new implementation and better ways to solve things, and then it is already time to go, so I head to the Strandbad for a swim and then drive home. That is roughly what my daily routine looked like. For me, it was a bit too chill for my taste, because I like the stress of a 40-hour week, especially when I only work during my holidays.
But the rest was absolutely nice, and it is pretty cool that there are so many people from different countries here at ATHENA. I did not learn many new skills, but I found out about many new Linux tools and how to find information even more efficiently.

 

For the quality and timeliness of his reviews, Klaus Schöffmann received the Outstanding Reviewer Award at the ACM International Conference on Multimedia Retrieval (ICMR) 2022, held in Newark, NJ, USA, in June 2022.