
The manuscript “Mobility-Aware IoT Application Placement in the Cloud – Edge Continuum” has been accepted for publication in the A*-ranked journal IEEE Transactions on Services Computing (TSC) (IF: 5.823).

Authors: Dragi Kimovski, Narges Mehran, Christopher Kerth, Radu Prodan

Abstract: The Edge computing extension of the Cloud services towards the network boundaries raises important placement challenges for IoT applications running in a heterogeneous environment with limited computing capacities. Unfortunately, existing works only partially address this challenge by optimizing a single or aggregate objective (e.g., response time), and not considering the edge devices’ mobility and resource constraints. To address this gap, we propose a novel mobility-aware multi-objective IoT application placement (mMAPO) method in the Cloud – Edge Continuum that optimizes completion time, energy consumption, and economic cost as conflicting objectives. mMAPO utilizes a Markov model for predictive analysis of the Edge device mobility and constrains the optimization to devices that do not frequently move through the network. We evaluate the quality of the mMAPO placements using simulation and real-world experimentation on two IoT applications. Compared to related work, mMAPO reduces the economic cost by 28% and decreases the completion time by 80% while maintaining a stable energy consumption.
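
The abstract above hinges on predicting whether an Edge device will stay put before the optimizer considers it as a placement target. As a rough illustration of that idea only, here is a minimal Python sketch of a first-order Markov mobility filter; the function names, the 0.8 stay-probability threshold, and the trace format are illustrative assumptions, not the authors’ mMAPO implementation.

  # Hypothetical sketch: Markov-chain mobility filter in the spirit of mMAPO.
  # Names, thresholds, and data layout are illustrative assumptions, not the
  # authors' implementation.
  from collections import defaultdict

  def transition_matrix(location_trace):
      """Estimate first-order Markov transition probabilities from a trace
      of discrete locations (e.g., network cells) visited by an Edge device."""
      counts = defaultdict(lambda: defaultdict(int))
      for src, dst in zip(location_trace, location_trace[1:]):
          counts[src][dst] += 1
      matrix = {}
      for src, dsts in counts.items():
          total = sum(dsts.values())
          matrix[src] = {dst: n / total for dst, n in dsts.items()}
      return matrix

  def stay_probability(matrix, location):
      """Probability that a device observed at `location` remains there
      in the next step (self-transition probability)."""
      return matrix.get(location, {}).get(location, 0.0)

  def stable_devices(traces, threshold=0.8):
      """Keep only devices unlikely to move, so the placement optimizer
      can restrict itself to them (the constraint described in the abstract)."""
      stable = []
      for device_id, trace in traces.items():
          matrix = transition_matrix(trace)
          if stay_probability(matrix, trace[-1]) >= threshold:
              stable.append(device_id)
      return stable

  # Example: device "edge-7" mostly stays in cell "A", device "edge-9" roams.
  traces = {
      "edge-7": ["A", "A", "A", "B", "A", "A", "A", "A"],
      "edge-9": ["A", "B", "C", "A", "C", "B", "A", "C"],
  }
  print(stable_devices(traces))  # -> ['edge-7']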

Bitmovin will be sponsoring a classroom in the area of Computer Science for the next five years, starting in June 2021, and thus continuing the long-standing successful cooperation with the University of Klagenfurt.

Read more about our project partner Bitmovin here.

 


Hadi Amirpour has been appointed co-chair of Task Force 7 (TF7) Immersive Media Experiences (IMEx) at the 15th Qualinet meeting.

TF7: Immersive Media Experiences (IMEx)

Immersive media applications are entering our daily lives, ranging from VR/AR/360° video applications to multi-sensory multimedia experiences that potentially address all human senses rather than focusing only on hearing and seeing. The overall goal of providing Immersive Media Experiences (IMEx) to end users is to give them the sensation of being part of the particular media, which shall result in a worthwhile and informative user experience and quality of experience.

The actual objectives of this task force are as follows:

  • disseminating the white paper
  • working towards submission of the extended version
  • liaison with other communities (UX, sensory sciences) and standards developing organizations (JPEG, MPEG, EBU)
  • identification of different QoE aspects of immersive experiences
  • QoE models and QoE assessment approaches for immersive experiences, addressing various audiovisual modalities, e.g., HDR, omnidirectional video, light fields, point clouds, and spatial audio

 

Authors: Shajulin Benedict, Prateek Agrawal, Radu Prodan

Link: Advanced Informatics for Computing Research, CCIS-Springer, 4th ICAICR 2020, Vol. 1393

Abstract: The push for agile pandemic analytic solutions has produced development-stage software modules of applications rather than full-fledged production-stage applications – i.e., performance, scalability, and energy-related concerns are not optimized for the underlying computing domains. While research continues to support the idea that reducing the energy consumption of algorithms improves the lifetime of battery-operated machines, and energy analysis tools are advisable in almost any developer setting, an energy analysis report for R-based analytic programs is indeed a valuable addition. This article proposes an energy analysis framework for R programs that enables data analytic developers, including pandemic-related application developers, to analyze their programs. It presents an energy analysis report for R programs written to predict the new cases of 215 countries using random forest variants. Experiments were carried out at the IoT cloud research lab, and the energy efficiency aspects are discussed in the article. In the experiments, the ranger-based prediction program consumed 95.8 J.
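
The paper’s framework is not reproduced here, but the measurement principle behind such an energy report can be illustrated with a small Python sketch. It assumes a Linux host exposing Intel RAPL counters under /sys/class/powercap and a hypothetical R script named predict_cases.R; both are illustrative assumptions rather than parts of the proposed framework.

  # Hypothetical sketch of energy accounting for an external analytics script
  # (e.g., an R program), assuming a Linux host that exposes Intel RAPL
  # counters under /sys/class/powercap (reading them may require root).
  # This is NOT the framework from the paper, only the measurement principle.
  import subprocess
  import time

  RAPL_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0 counter

  def read_energy_uj():
      with open(RAPL_FILE) as f:
          return int(f.read().strip())

  def measure_energy(cmd):
      """Run `cmd` and return (elapsed seconds, energy in Joules).
      Counter wrap-around is ignored for brevity."""
      e_start, t_start = read_energy_uj(), time.time()
      subprocess.run(cmd, check=True)
      e_end, t_end = read_energy_uj(), time.time()
      return t_end - t_start, (e_end - e_start) / 1e6

  if __name__ == "__main__":
      # Hypothetical R prediction script, e.g. a ranger-based random forest.
      elapsed, joules = measure_energy(["Rscript", "predict_cases.R"])
      print(f"elapsed: {elapsed:.1f} s, energy: {joules:.1f} J")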

HTTP Adaptive Streaming – Quo Vadis?

Christian Timmerer, Tuesday, June 29, 2021

35th Picture Coding Symposium (PCS) 2021

Abstract: Video traffic on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler of multimedia systems research and industrial networked multimedia services was certainly the HTTP Adaptive Streaming (HAS) technique. This resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) which, together with HTTP Live Streaming (HLS), is widely used for multimedia delivery in today’s networks. Existing challenges in multimedia systems research deal with the trade-off between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two, if not both.

This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry.

In this talk, we will present selected novel approaches and research results from the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put this work into the context of international multimedia systems research.
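
For readers less familiar with HAS, the core idea (a client picks each segment’s bitrate from its measured throughput) can be illustrated with the deliberately simple Python heuristic below. It is a generic textbook-style sketch, not an ATHENA algorithm or result; the bitrate ladder and safety factor are made-up values.

  # Minimal, generic throughput-based adaptation heuristic, shown only to
  # illustrate the HAS principle; not an ATHENA algorithm or result.
  def select_bitrate(available_bitrates_kbps, measured_throughput_kbps,
                     safety_factor=0.8):
      """Pick the highest representation that fits within a conservative
      estimate of the current network throughput."""
      budget = measured_throughput_kbps * safety_factor
      feasible = [b for b in sorted(available_bitrates_kbps) if b <= budget]
      return feasible[-1] if feasible else min(available_bitrates_kbps)

  # Example ladder (kbps) as typically exposed in a DASH/HLS manifest.
  ladder = [235, 750, 1750, 4300, 8000]
  print(select_bitrate(ladder, measured_throughput_kbps=3000))  # -> 1750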

Conference: https://escience2021.org/

Title: Where to Encode: A Performance Analysis of x86 and Arm-based Amazon EC2 Instances

Authors: Roland Mathá, Dragi Kimovski, Anatoliy Zabrovskiy, Christian Timmerer and Radu Prodan

Abstract: Video streaming has become an integral part of the Internet. To efficiently utilise the limited network bandwidth, it is essential to encode the video content. However, encoding is a computationally intensive task, involving high-performance resources provided by private infrastructures or public clouds. Public clouds, such as Amazon EC2, provide a large portfolio of services and instances optimized for specific purposes and budgets. The majority of Amazon’s instances use x86 processors, such as Intel Xeon or AMD EPYC. However, following recent trends in computer architecture, Amazon introduced Arm-based instances that promise up to 40% better cost-performance ratio than comparable x86 instances for specific workloads. In this paper, we evaluate the video encoding performance of x86 and Arm instances of four instance families using the latest FFmpeg version and two video codecs. We examine the impact of encoding parameters, such as different presets and bitrates, on the encoding time and cost. Our experiments reveal that Arm instances show a high time- and cost-saving potential of up to 33.63% for specific bitrates and presets, especially for the x264 codec. However, the x86 instances are more general and achieve low encoding times regardless of the codec.
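
As a rough illustration of the kind of measurement behind such an evaluation, the Python sketch below times a libx264 encode for one preset/bitrate combination and converts the wall time into an EC2 cost. The instance prices, file names, and per-second billing assumption are illustrative; they are not the configurations or results reported in the paper.

  # Hypothetical benchmarking sketch: time an FFmpeg x264 encode for a given
  # preset/bitrate and convert the wall time into an EC2 cost. Prices and
  # file names are illustrative assumptions, not the values from the study.
  import subprocess
  import time

  HOURLY_PRICE_USD = {"c5.xlarge": 0.17, "c6g.xlarge": 0.136}  # example on-demand prices

  def encode_time_seconds(src, preset="fast", bitrate_kbps=2000):
      """Encode `src` with libx264 and return the wall-clock encoding time."""
      cmd = ["ffmpeg", "-y", "-i", src,
             "-c:v", "libx264", "-preset", preset,
             "-b:v", f"{bitrate_kbps}k", "encoded.mp4"]
      start = time.time()
      subprocess.run(cmd, check=True, capture_output=True)
      return time.time() - start

  def encoding_cost(seconds, instance_type):
      """Cost of the encode, assuming per-second billing at the hourly rate."""
      return HOURLY_PRICE_USD[instance_type] * seconds / 3600

  if __name__ == "__main__":
      t = encode_time_seconds("input.mp4", preset="medium", bitrate_kbps=4000)
      print(f"time: {t:.1f} s, cost: ${encoding_cost(t, 'c5.xlarge'):.5f}")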

Title: Handover Authentication Latency Reduction using Mobile Edge Computing and Mobility Patterns

Authors: Fatima Abdullah, Dragi Kimovski, Radu Prodan, and Kashif Munir

Abstract: With the advancement of technology and the exponential growth of mobile devices, network traffic has increased manifold in cellular networks. For this reason, latency reduction has become a challenging issue for mobile devices. In order to achieve seamless connectivity and minimal disruption during movement, latency reduction is crucial in the handover authentication process. Handover authentication is a process in which the legitimacy of a mobile node is checked when it crosses the boundary of an access network. This paper proposes an efficient technique that utilizes the mobility patterns of mobile nodes and a mobile Edge computing framework to reduce handover authentication latency. The key idea of the proposed technique is to categorize mobile nodes on the basis of their mobility patterns. We perform simulations to measure the networking latency. In addition, we use a queuing model to measure the processing time of an authentication query at an Edge server. The results show that the proposed approach reduces the handover authentication latency by up to 54% in comparison with the existing approach.

Link: https://c3.itec.aau.at/index.php/paper-accepted-elsevier-computing/
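
The queuing-model step mentioned in the handover abstract above can be illustrated with the textbook M/M/1 mean response time W = 1/(μ − λ). The sketch below is a generic example with made-up arrival and service rates, not necessarily the exact model or parameters used in the paper.

  # Generic M/M/1 illustration of how queuing theory gives the processing
  # time of authentication queries at an Edge server; the paper's exact
  # queuing model and parameters may differ.
  def mm1_response_time(arrival_rate, service_rate):
      """Mean time a query spends at the server (waiting + service),
      W = 1 / (mu - lambda), valid only for lambda < mu."""
      if arrival_rate >= service_rate:
          raise ValueError("Queue is unstable: arrival rate must be below service rate")
      return 1.0 / (service_rate - arrival_rate)

  # Example: 80 queries/s arriving at a server that handles 100 queries/s.
  print(f"{mm1_response_time(80, 100) * 1000:.1f} ms")  # -> 50.0 ms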


Authors: Yasir Noman Khalid, Muhammad Aleem, Usman Ahmed, Radu Prodan, Muhammad Arshad Islam and Muhammad Azhar Iqbal

Abstract: Employing general-purpose graphics processing units (GPGPUs) with the help of OpenCL has greatly reduced the execution time of data-parallel applications by taking advantage of the massive available parallelism. However, when an application with a small data size is executed on a GPU, GPU resources are wasted because the application cannot fully utilize the GPU compute cores. There is no mechanism to share a GPU between two kernels due to the lack of operating system support on GPUs. In this paper, we propose a GPU sharing mechanism between two kernels that increases GPU occupancy and, as a result, reduces the execution time of a job pool. However, if a pair of kernels competes for the same set of resources (i.e., both applications are compute-intensive or memory-intensive), kernel fusion may also result in a significant increase in the execution time of the fused kernels. Therefore, it is pertinent to select an optimal pair of kernels for fusion that will result in a significant speedup over their serial execution. This research presents FusionCL, a machine learning-based GPU sharing mechanism between a pair of OpenCL kernels. FusionCL identifies each pair of kernels (from the job pool) that are suitable candidates for fusion using a machine learning-based fusion suitability classifier. Thereafter, from all the candidates, it selects the pair of kernels that will produce the maximum speedup after fusion over their serial execution, using a fusion speedup predictor. The experimental evaluation shows that the proposed kernel fusion mechanism reduces execution time by 2.83× compared to a baseline scheduling scheme. Compared to the state-of-the-art, the reduction in execution time is up to 8%.

Link: https://link.springer.com/article/10.1007/s00607-021-00958-2
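
The pair-selection step described in the FusionCL abstract can be sketched in a few lines of Python. The suitability classifier and speedup predictor below are stand-in stubs with a toy cost model, not the trained models from FusionCL, and all kernel names and numbers are invented for illustration.

  # Hypothetical sketch of the pair-selection step: from a job pool, pick the
  # kernel pair with the highest predicted fusion speedup. The classifier and
  # predictor are stubs, not FusionCL's trained models.
  from itertools import combinations

  def fusion_suitable(kernel_a, kernel_b):
      """Stub classifier: assume fusion only pays off when a compute-bound
      kernel is paired with a memory-bound one."""
      return kernel_a["bound"] != kernel_b["bound"]

  def predicted_speedup(kernel_a, kernel_b):
      """Stub regressor standing in for the learned speedup predictor."""
      serial = kernel_a["time"] + kernel_b["time"]
      fused = max(kernel_a["time"], kernel_b["time"]) * 1.2  # toy cost model
      return serial / fused

  def best_fusion_pair(job_pool):
      """Return the suitable pair with the highest predicted speedup."""
      candidates = [(a, b) for a, b in combinations(job_pool, 2)
                    if fusion_suitable(a, b)]
      if not candidates:
          return None
      return max(candidates, key=lambda pair: predicted_speedup(*pair))

  pool = [
      {"name": "matmul", "bound": "compute", "time": 4.0},
      {"name": "stencil", "bound": "memory", "time": 3.5},
      {"name": "reduce", "bound": "memory", "time": 1.0},
  ]
  pair = best_fusion_pair(pool)
  print([k["name"] for k in pair])  # -> ['matmul', 'stencil']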

Our project “ADAPT” started in March 2021, during the most critical phase of the COVID-19 outbreak in Europe. The demand for Personal Protective Equipment (PPE) from each country’s health care system has surpassed national stock amounts by far.

Learn more about it in an interview with Univ.-Prof. DI Dr. Radu Aurel Prodan in the University of Klagenfurt’s journal “ad astra” (pdf).