On Saturday, during #ICPE2023, the Graph-Massivizer Project organized #GraphSys, the first #workshop on #Serverless, #ExtremeScale, and #Sustainable #GraphProcessing #Systems.

It was great to see so many passionate #attendees eager to share and learn about the latest #advancements in #graphsystems. We had some amazing speakers who shared their #insights and #expertise on the topic, sparking plenty of engaging and thought-provoking discussions. Thanks to everyone who participated and made it such a memorable event!

Title: Performance Improvement Strategies of Edge-Enabled Social Impact Applications

Authors: Shajulin Benedict, S. Vivek Reddy, Bhagyalakshmi M., Jiby Mariya Jose, Radu Prodan

International Conference on Inventive Computation Technologies (ICICT 2023)

Abstract: In recent years, social relationships have increasingly been blended with technological advancements to tackle emerging challenges such as loneliness, poverty, pollution, climate change, and health issues. Accordingly, IoT-enabled social good applications have emerged in various dimensions. Those developing IoT-enabled social good applications have to diligently consider the efficiency of the underlying computational infrastructures. This article explores the performance improvement (PI) aspects of edge intelligence techniques that apply to social good applications. It highlights the most commonly practiced PI methods in the literature and lists near-future research perspectives of edge-enabled solutions. The article will be beneficial to researchers and practitioners who wish to address social causes using efficient, edge-enabled intelligent techniques.

Authors: Zahra Najafabadi Samani, Narges Mehran, Dragi Kimovski, Shajulin Benedict, Nishant Saurabh, Radu Prodan

IEEE Transactions on Parallel and Distributed Systems

Abstract: Fog computing platforms became essential for deploying low-latency applications at the network’s edge. However, placing and managing time-critical applications over a Fog infrastructure with many heterogeneous and resource-constrained devices over a dynamic network is challenging. This paper proposes an incremental multilayer resource-aware partitioning (M-RAP) method that minimizes resource wastage and maximizes service placement and deadline satisfaction in a dynamic Fog with many application requests. M-RAP represents the heterogeneous Fog resources as a multilayer graph, partitions it based on the network structure and resource types, and constantly updates it upon dynamic changes in the underlying Fog infrastructure. Finally, it identifies the device partitions for placing the application services according to their resource requirements, which must overlap in the same low-latency network partition. We evaluated M-RAP through extensive simulation and two applications executed on a real testbed. The results show that M-RAP can place 1.6 times as many services, satisfy deadlines for 43% more applications, lower their response time by up to 58%, and reduce resource wastage by up to 54% compared to three state-of-the-art methods.
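The core idea of modeling heterogeneous Fog resources as a graph and placing services into low-latency partitions can be illustrated with a small sketch. The snippet below is not the M-RAP implementation; it uses a single flat graph instead of a multilayer one, greedy modularity communities as a stand-in for the paper's resource-aware partitioning, and hypothetical device names, capacities, and thresholds.

```python
# Illustrative sketch only (not the paper's M-RAP algorithm): model Fog devices
# as a graph whose edges carry network latency, group devices into low-latency
# partitions, and place a service on a partition whose aggregate free resources
# cover the request.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

fog = nx.Graph()
# Each node advertises its free resources (one "layer" per resource type).
fog.add_node("edge-1", cpu=2, mem=4)
fog.add_node("edge-2", cpu=4, mem=8)
fog.add_node("fog-1", cpu=8, mem=16)
fog.add_node("cloud-1", cpu=64, mem=256)
# Edges carry the measured network latency in milliseconds.
fog.add_edge("edge-1", "edge-2", latency=2)
fog.add_edge("edge-2", "fog-1", latency=8)
fog.add_edge("fog-1", "cloud-1", latency=40)

# Group devices so that low-latency links stay inside the same partition
# (1/latency is used as the edge weight for the community detection).
for u, v, d in fog.edges(data=True):
    d["closeness"] = 1.0 / d["latency"]
partitions = greedy_modularity_communities(fog, weight="closeness")

def place(service_cpu, service_mem, deadline_ms):
    """Pick the first partition whose devices jointly satisfy the request."""
    for part in partitions:
        cpu = sum(fog.nodes[n]["cpu"] for n in part)
        mem = sum(fog.nodes[n]["mem"] for n in part)
        worst_latency = max(
            (fog.edges[u, v]["latency"] for u in part for v in part
             if fog.has_edge(u, v)), default=0)
        if cpu >= service_cpu and mem >= service_mem and worst_latency <= deadline_ms:
            return sorted(part)
    return None

print(place(service_cpu=4, service_mem=6, deadline_ms=10))
```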

On 03.02.2023, Dragi Kimovski defended his habilitation thesis “The Computing Continuum in the Internet-of-Things Era: Beyond the Cloud Data Centers”. In the meantime, the procedure has been completed, and we were happy to hand out the certificate. Congratulations!

Dragi Kimovski is a tenure-track researcher at the Institute of Information Technology (ITEC), University of Klagenfurt. He earned his doctoral degree in 2013 from the Technical University of Sofia. He was an assistant professor at the University of Information Science and Technology in Ohrid and a senior researcher and lecturer at the University of Innsbruck. Kimovski conducted multiple research stays at renowned universities, including the University of Michigan, Utrecht University, the University of Bologna, and the University of Granada. He has co-authored more than 60 articles in international conferences and journals. His research interests include parallel and distributed computing and multi-objective optimization for energy efficiency and sustainability. He acted as a scientific coordinator and work-package leader in a dozen Horizon 2020 projects, including DataCloud, ENTICE, and ASPIDE.

Fog and edge computing have been introduced as an extension of cloud services towards the data sources, thus forming the computing continuum. The computing continuum enables the creation of new types of services that span distributed infrastructures and support various Internet of Things (IoT) applications. However, it also raises multiple challenges for the management, deployment, and orchestration of complex distributed applications, such as increased network heterogeneity, limited resource capacity of edge devices, fragmented storage management, high mobility of edge devices, and limited support for native monolithic applications. The habilitation thesis therefore explores novel algorithms for low-latency, scalable, and sustainable computing over heterogeneous resources for information processing and reasoning, thus enabling transparent integration of IoT applications. It tackles the heterogeneity challenge of dynamically changing computing infrastructure topologies and presents a novel concept for sustainable processing at scale.

Vignesh V Menon

2023 ACM Mile High Video (MHV) 

May 7-10, 2023 | Denver, US

Conference Website

Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Reza Farahani (Alpen-Adria-Universität Klagenfurt), Prajit T Rajendran (Université Paris-Saclay), Mohammad Ghanbari (University of Essex), Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt).

Abstract:

In recent years, the proliferation of video streaming applications has increased the demand for Video Quality Assessment (VQA). Reduced-reference video quality assessment (RR-VQA) is a category of VQA in which certain features (e.g., texture, edges) of the original video are provided for quality assessment. It is a popular research area for applications such as social media, online games, and video streaming. This paper introduces a reduced-reference Transcoding Quality Prediction Model (TQPM) to determine the visual quality score of a video that may have been transcoded in multiple stages. The quality is predicted using Discrete Cosine Transform (DCT)-energy-based features of the video (i.e., the video's brightness, spatial texture information, and temporal activity) and the target bitrate representation of each transcoding stage. To this end, the problem is formulated, and a Long Short-Term Memory (LSTM)-based quality prediction model is presented. Experimental results illustrate that, on average, TQPM yields PSNR, SSIM, and VMAF predictions with an R² score of 0.83, 0.85, and 0.87, respectively, and a Mean Absolute Error (MAE) of 1.31 dB, 1.19 dB, and 3.01, respectively, for single-stage transcoding. Furthermore, an R² score of 0.84, 0.86, and 0.91, respectively, and an MAE of 1.32 dB, 1.33 dB, and 3.25, respectively, are observed for a two-stage transcoding scenario. Moreover, the average processing time of TQPM for 4 s segments is 0.328 s, making it a practical VQA method for online streaming applications.
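To make the modeling idea concrete, here is a minimal, hedged sketch of an LSTM-based quality predictor in the spirit of TQPM: a sequence of per-transcoding-stage inputs (DCT-energy features plus the stage's target bitrate) is mapped to predicted PSNR, SSIM, and VMAF. The feature layout, dimensions, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a reduced-reference quality predictor: an LSTM maps a
# sequence of per-transcoding-stage inputs (DCT-energy features of the source
# plus the stage's target bitrate) to predicted PSNR/SSIM/VMAF.
import torch
import torch.nn as nn

class QualityLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        # n_features: e.g., brightness, spatial texture, temporal activity, bitrate
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)   # -> [PSNR, SSIM, VMAF]

    def forward(self, stages):
        # stages: (batch, n_transcoding_stages, n_features)
        out, _ = self.lstm(stages)
        return self.head(out[:, -1, :])    # predict from the last stage

model = QualityLSTM()
# One segment, two transcoding stages (single- vs. two-stage scenarios differ
# only in sequence length); the feature values are placeholders.
x = torch.tensor([[[0.42, 35.1, 12.7, 3000.0],
                   [0.42, 35.1, 12.7, 1500.0]]])
psnr, ssim, vmaf = model(x)[0]
```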

Presentation of Radu Prodan on “Massive Graphs on the Computing Continuum” in the seminar on “AI meets complex knowledge structures: Neuro-Symbolic AI and Graph Technologies” at the Oslo Metropolitan University.

IEEE Access, A Multidisciplinary, Open-access Journal of the IEEE

[PDF; GitHub]

Babak Taraghi, Hermann Hellwagner, and Christian Timmerer (Alpen-Adria-Universität Klagenfurt)


Low-latency live streaming by HTTP Chunked Transfer Encoding

Abstract: Live media streaming is a challenging task in itself, and in use cases where low latency is a must, the complexity rises considerably. In a typical media streaming session, the main goal is to provide the highest possible Quality of Experience (QoE), which has proven measurable using quality models and various metrics. In a low-latency media streaming session, the requirement is to minimize the delay between the moment a video frame is captured and the moment it is rendered on the client screen, known as end-to-end (E2E) latency, while maintaining QoE. As its primary contribution, this paper proposes a sophisticated cloud-based, open-source testbed that facilitates evaluating low-latency live streaming sessions. The Live Low-Latency Cloud-based Adaptive Video Streaming Evaluation (LLL-CAdViSE) framework can assess live streaming systems running on the two major HTTP Adaptive Streaming (HAS) formats, Dynamic Adaptive Streaming over HTTP (MPEG-DASH) and HTTP Live Streaming (HLS). We use Chunked Transfer Encoding (CTE) to deliver Common Media Application Format (CMAF) chunks to the media players. Our testbed generates the test content (audiovisual streams); therefore, no test sequence is required, and the encoding parameters (e.g., encoder, bitrate, resolution, latency) are defined separately for each experiment. We have integrated the ITU-T P.1203 quality model into our testbed. As a secondary contribution demonstrating the flexibility and power of LLL-CAdViSE, we have conducted a set of experiments with different network traces, media players, and ABR algorithms, and with various requirements (e.g., typical/reduced/low/ultra-low E2E latency, diverse bitrate ladders, and catch-up logic), and we present the essential findings and experimental results.

Keywords: Live Streaming; Low-latency; HTTP Adaptive Streaming; Quality of Experience; Objective Evaluation; Open-source Testbed.
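For readers unfamiliar with Chunked Transfer Encoding, the sketch below shows the bare HTTP/1.1 mechanism the testbed relies on to push CMAF chunks to players before a segment is fully encoded. It is a self-contained illustration with a hypothetical chunk source, not LLL-CAdViSE's actual delivery stack.

```python
# Minimal sketch of HTTP/1.1 Chunked Transfer Encoding (CTE): the response
# declares "Transfer-Encoding: chunked" and each CMAF chunk is framed and sent
# as soon as it exists, instead of waiting for the full segment.
import socket

def cmaf_chunks():
    """Stand-in for the encoder: yields CMAF chunks as they become available."""
    for i in range(3):
        yield f"<cmaf-chunk {i}>".encode()

def serve_one_request(port=8080):
    srv = socket.create_server(("", port))
    conn, _ = srv.accept()
    conn.recv(4096)  # read (and ignore) the GET request
    conn.sendall(b"HTTP/1.1 200 OK\r\n"
                 b"Content-Type: video/mp4\r\n"
                 b"Transfer-Encoding: chunked\r\n\r\n")
    for chunk in cmaf_chunks():
        # Each chunk is framed as <hex length>\r\n<payload>\r\n and can be
        # forwarded by the CDN/player as soon as it is written.
        conn.sendall(f"{len(chunk):X}\r\n".encode() + chunk + b"\r\n")
    conn.sendall(b"0\r\n\r\n")  # zero-length chunk terminates the response
    conn.close()
    srv.close()

# serve_one_request()  # then, e.g.: curl -N http://localhost:8080/
```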

14th ACM Multimedia Systems Conference (MMSys)
7 – 10 June 2023 | Vancouver, BC, Canada

Daniele Lorenzi (Alpen-Adria-Universität Klagenfurt)

Abstract:

Video streaming services account for the majority of today's Internet traffic, and according to recent studies, this share is expected to continue growing. This implies that many people around the globe use video streaming services on a daily basis to consume video content. Given this broad utilization, research in video streaming has recently been moving towards energy-aware approaches, which aim to minimize the energy consumption of the devices involved. On the other hand, the quality perceived by the user plays an important role, and the advent of HTTP Adaptive Streaming (HAS) changed the way quality is assessed: the focus moved from the Quality of Service (QoS) towards the Quality of Experience (QoE) of the user taking part in the streaming session. Therefore, video streaming services need to develop Adaptive BitRate (ABR) techniques to deal with different network environments on the client side, or appropriate end-to-end strategies to provide high QoE to the users. The scope of this doctoral study is the end-to-end environment with a focus on the end-user's domain, referred to as the player environment, including video content consumption and interactivity. This thesis aims to investigate and develop techniques to increase the QoE delivered to users and reduce the energy consumption of end devices in the HAS context. We present four main research questions to target the related challenges in the domain of content consumption for HAS systems.
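As a rough illustration of the client-side ABR decision this work builds on, the toy rule below picks the highest bitrate from the ladder that the measured throughput can sustain, optionally capped to save energy on the device; the ladder, safety margin, and energy cap are hypothetical values, not the thesis's proposed techniques.

```python
# Toy throughput-based ABR rule with an optional energy-aware cap.
BITRATE_LADDER_KBPS = [235, 750, 1750, 3000, 4300, 5800]

def select_bitrate(measured_throughput_kbps, energy_cap_kbps=None, margin=0.8):
    budget = measured_throughput_kbps * margin   # keep a safety margin
    if energy_cap_kbps is not None:              # energy-aware cap on the ladder
        budget = min(budget, energy_cap_kbps)
    eligible = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return eligible[-1] if eligible else BITRATE_LADDER_KBPS[0]

print(select_bitrate(4000))                        # -> 3000
print(select_bitrate(4000, energy_cap_kbps=2000))  # -> 1750
```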

On Friday and Saturday (March 10 and 11, 2023), Sebastian Uitz presented his game “A Webbing Journey” with Michael Steinkellner and Noel Treese at the Button Festival at the Messe Graz. Their booth consisted of two PCs, a Steam Deck, and a Nintendo Switch, all running the game. This was the first time the two handheld devices were used at an event, and people loved playing on them. The Nintendo Switch was a fan favourite with the kids, while older players gravitated towards the Steam Deck because it is still a new console and most of them had never had the chance to play on it before. As at other events, a lot of feedback in the form of new ideas for quests and ways to extend the game was gathered, which will be implemented in the following weeks and months, ready for the next event in May.

5th Workshop on Parallel AI and Systems for the Edge (PAISE 2023), held in conjunction with the 37th IEEE International Parallel & Distributed Processing Symposium (IPDPS 2023), St. Petersburg, Florida, USA

https://edge.itec.aau.at/

Authors: Josef Hammer and Hermann Hellwagner, Alpen-Adria-Universität Klagenfurt

Abstract: Multi-access Edge Computing (MEC) is a central piece of 5G telecommunication systems and is essential to satisfy the challenging low-latency demands of future applications. MEC provides a cloud computing platform at the edge of the radio access network. Our previous publications argue that edge computing should be transparent to clients, leveraging Software-Defined Networking (SDN). While we introduced a solution to implement such a transparent approach, one question remained: How to handle user requests to a service that is not yet running in a nearby edge cluster? One advantage of the transparent edge is that one could process the initial request in the cloud. However, this paper argues that on-demand deployment might be fast enough for many services, even for the first request. We present an SDN controller that automatically deploys an application container in a nearby edge cluster if no instance is running yet. In the meantime, the user’s request is forwarded to another (nearby) edge cluster or kept waiting to be forwarded immediately to the newly instantiated instance. Our performance evaluations on a real edge/fog testbed show that the waiting time for the initial request – e.g., for an nginx-based service – can be as low as 0.5 seconds – satisfactory for many applications.
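The on-demand deployment logic can be summarized in a simplified sketch: if no instance of the requested service is running in the nearby edge cluster, the controller starts one there and either holds the request until the instance is ready or redirects it to another cluster. The cluster helpers below are hypothetical stand-ins for the SDN controller's actual Kubernetes and flow-rule machinery.

```python
# Simplified, hypothetical sketch of the on-demand deployment idea; the real
# system uses an SDN controller, flow rules, and Kubernetes deployments.
import time

class EdgeCluster:
    def __init__(self, name):
        self.name = name
        self.running = set()           # services with a ready instance

    def deploy(self, service, startup_s=0.5):
        time.sleep(startup_s)          # container pull + start (mocked)
        self.running.add(service)

def handle_request(service, nearby, fallback, wait_budget_s=1.0):
    if service in nearby.running:
        return f"forward to {nearby.name}"
    start = time.monotonic()
    nearby.deploy(service)             # trigger on-demand deployment
    if time.monotonic() - start <= wait_budget_s:
        return f"held request, now served by {nearby.name}"
    return f"redirect to {fallback.name}"  # startup too slow: use another cluster

edge_a, edge_b = EdgeCluster("edge-a"), EdgeCluster("edge-b")
edge_b.running.add("nginx-svc")
print(handle_request("nginx-svc", nearby=edge_a, fallback=edge_b))
```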