The newspaper “Kleine Zeitung” published the article “Medizinische Schutzausrüstung: Neue IT-Lösung soll Menschenleben retten” (“Medical protective equipment: new IT solution aims to save human lives”) featuring Prof. Radu Prodan.
Authors: Dragi Kimovski, Dijana C. Bogatinoska, Narges Mehran, Aleksandar Karadimce, Natasha Paunkoska, Radu Prodan, Ninoslav Marina
Abstract: The proliferation of smart sensing and computing devices, capable of collecting vast amounts of data, has made gathering the necessary vehicular traffic data relatively easy. However, the analysis of these big data sets requires computational resources, which are currently provided by Cloud data centers. Nevertheless, Cloud data centers can have unacceptably high latency for vehicular analysis applications with strict time requirements. The recent introduction of the Edge computing paradigm, as an extension of the Cloud services, has partially moved the processing of big data closer to the data sources, thus addressing this issue. Unfortunately, this has unlocked multiple challenges related to resource management. Therefore, we present a model for scheduling vehicular traffic analysis applications with partial task offloading across the Cloud-Edge continuum. The approach represents the traffic applications as a set of interconnected tasks composed into a workflow that can be partially offloaded to the Edge. We evaluated the approach through a simulated Cloud-Edge environment that considers two representative vehicular traffic applications with a focus on video stream analysis. Our results show that the presented approach reduces the application response time by up to eight times while improving energy efficiency by a factor of four.
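As a minimal sketch of the kind of partial offloading decision the abstract describes, the snippet below applies a greedy earliest-finish-time rule that places each task of a toy workflow on the Edge or in the Cloud. All task sizes, resource parameters, and the greedy rule itself are illustrative assumptions, not the scheduling model evaluated in the paper.

# Greedy Cloud-Edge placement sketch for a toy vehicular analysis workflow.
# Every number below is a made-up assumption for illustration only.
tasks = {  # compute demand (MI), input data (MB), predecessors; topological order
    "capture":   {"mi": 200,  "data_mb": 50, "pred": []},
    "detect":    {"mi": 1500, "data_mb": 40, "pred": ["capture"]},
    "track":     {"mi": 900,  "data_mb": 10, "pred": ["detect"]},
    "aggregate": {"mi": 300,  "data_mb": 5,  "pred": ["track"]},
}
resources = {  # hypothetical tiers: speed (MIPS) and bandwidth to the data source (MB/s)
    "edge":  {"mips": 2000,  "bw": 100},
    "cloud": {"mips": 10000, "bw": 10},
}

def finish_time(task, res, ready_time):
    """Estimated completion time: input transfer plus computation."""
    return ready_time + task["data_mb"] / res["bw"] + task["mi"] / res["mips"]

placement, done = {}, {}
for name, task in tasks.items():
    ready = max((done[p] for p in task["pred"]), default=0.0)
    # Offload the task to whichever tier finishes it earlier.
    best = min(resources, key=lambda r: finish_time(task, resources[r], ready))
    placement[name] = best
    done[name] = finish_time(task, resources[best], ready)

print(placement)
print(f"estimated response time: {max(done.values()):.2f} s")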
This project started during the most critical phase of the COVID-19 outbreak in Europe, when the demand for Personal Protective Equipment (PPE) from each country’s health care system far surpassed national stock amounts. Therefore, the ADAPT consortium agreed to bundle its joint resources to develop an adaptive and autonomous decision-making network that supports the stakeholders involved along the PPE supply chain in their endeavour to save and protect human lives as quickly as possible.
The partners will do so by providing a Blockchain solution capable of optimizing supply, demand, and transport capacities between the stakeholders, by elaborating a technical solution for transparent and real-time certification checks on equipment and production documentation, and by offering distributed and parallel decision-making capabilities on all levels of this multi-dimensional research problem.
In total, the world community will spend more than €49.6 billion on PPE medical equipment in 2020. Of this, €7.7 billion could be saved with the transport optimization of ADAPT, and an additional €5.18 billion could be freed up in the financing and banking sector and reinvested immediately into the expansion of the world’s national health care systems.
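As a purely illustrative aside, the transport optimization mentioned above can be pictured as matching PPE supply to demand while minimizing transport cost. The greedy rule and all quantities in the sketch below are assumptions for illustration only and are not the consortium’s actual Blockchain-based solution.

# Toy supply-demand-transport matching; all figures are hypothetical.
supply = {"factory_a": 80_000, "factory_b": 50_000}    # PPE units available
demand = {"hospital_x": 60_000, "hospital_y": 55_000}  # PPE units needed
cost = {                                               # transport cost per unit (EUR)
    ("factory_a", "hospital_x"): 0.02, ("factory_a", "hospital_y"): 0.05,
    ("factory_b", "hospital_x"): 0.04, ("factory_b", "hospital_y"): 0.01,
}

# Greedy allocation: always serve the cheapest remaining supplier-hospital lane.
shipments, total_cost = [], 0.0
for (src, dst), c in sorted(cost.items(), key=lambda kv: kv[1]):
    qty = min(supply[src], demand[dst])
    if qty > 0:
        shipments.append((src, dst, qty))
        supply[src] -= qty
        demand[dst] -= qty
        total_cost += qty * c

print(shipments)                          # who ships how much to whom
print(f"transport cost: EUR {total_cost:,.2f}")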
ADAPT is a 36-month project submitted to the 6th Call for Austrian-Chinese Coop. RTD Projects (FFG & CAS).
Partners:
The manuscript “Inter-host Orchestration Platform Architecture for Ultra-scale Cloud Applications” has been accepted for publication in an upcoming issue of IEEE Internet Computing.
Authors: Sasko Ristov, Thomas Fahringer, Radu Prodan, Magdalena Kostoska, Marjan Gusev, Shahram Dustdar
Abstract: Cloud data centers exploit many memory page management techniques that reduce the total memory utilization and access time. These techniques are mainly applied within the hypervisor of a single host (intra-hypervisor), without the possibility to exploit the knowledge obtained by a group of hosts (a cluster). We introduce a novel inter-hypervisor orchestration platform to provide intelligent memory page management for horizontal scaling. It will use the performance behavior of faster virtual machines to activate pre-fetching mechanisms that reduce the number of page faults. The overall platform consists of five modules: profiler, collector, classifier, predictor, and pre-fetcher. We developed and deployed a prototype of the platform comprising the first three modules. The evaluation shows that data collection is feasible in real time, which means that if our approach is used on top of existing memory page management techniques, it can significantly lower the miss rate that initiates page faults.
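A minimal sketch of the inter-hypervisor pre-fetching idea: the page accesses already observed on the fastest replica of a horizontally scaled application indicate which pages a slower replica will need next. The trace format, the lookahead window, and the simple positional alignment below are illustrative assumptions, not the platform’s predictor.

# Use the faster VM's page-access history to pre-fetch pages for a slower replica.
# Traces and the lookahead window are hypothetical.
traces = {
    "vm_fast": [1, 2, 3, 7, 8, 9, 12, 13],   # ordered page numbers already accessed
    "vm_slow": [1, 2, 3],
}

def prefetch_candidates(fast_trace, slow_trace, window=4):
    """Pages the fast VM touched right after the slow VM's current position."""
    pos = len(slow_trace)                     # how far the slower replica has come
    upcoming = fast_trace[pos:pos + window]   # what the faster replica did next
    seen = set(slow_trace)
    return [p for p in upcoming if p not in seen]

print(prefetch_candidates(traces["vm_fast"], traces["vm_slow"]))  # [7, 8, 9, 12]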
Authors: Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Ekrem Çetinkaya (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK)
Abstract: HTTP Adaptive Streaming (HAS) enables high-quality streaming of video contents. In HAS, videos are divided into short intervals called segments, and each segment is encoded at various qualities/bitrates to adapt to the available bandwidth. Multiple encodings of the same content impose a high cost on video content providers. To reduce the time-complexity of encoding multiple representations, state-of-the-art methods typically encode the highest quality representation first and reuse the information gathered during its encoding to accelerate the encoding of the remaining representations. As encoding the highest quality representation requires the highest time-complexity compared to the lower quality representations, it would be a bottleneck in parallel encoding scenarios, and the overall time-complexity would be limited by the time-complexity of the highest quality representation. In this paper, to address this problem, we consider each representation from the highest to the lowest quality as a potential single reference to accelerate the encoding of the other, dependent representations. We formulate a set of encoding modes and assess their performance in terms of BD-Rate and time-complexity, using both VMAF and PSNR as objective metrics. Experimental results show that encoding a middle quality representation as a reference can significantly reduce the maximum encoding complexity and hence is an efficient way of encoding multiple representations in parallel. Based on this fact, a fast multirate encoding method is proposed which utilizes the depth and prediction mode of a middle quality representation to accelerate the encoding of the dependent representations.
The International MultiMedia Modeling Conference (MMM)
25-27 January 2021, Prague, Czech Republic
Link: https://mmm2021.cz
Keywords: HEVC, Video Encoding, Multirate Encoding, DASH
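A simplified sketch of the general idea of reusing a reference representation’s block partitioning: the CU depths decided in the middle-quality reference bound the depth search of the dependent encodes. The pruning rule below (cap at reference depth + 1 for higher-quality targets, at the reference depth for lower-quality ones) is an illustrative assumption, not the exact scheme proposed in the paper.

# Bound the HEVC CTU depth search of dependent representations using the
# depths chosen by a middle-quality reference encode (illustrative rule only).
MAX_DEPTH = 3  # quad-tree depths 0..3 (64x64 CTU down to 8x8 CUs)

def allowed_depths(ref_depth, target_is_higher_quality):
    """Depth range a dependent encode evaluates for one CTU."""
    cap = min(MAX_DEPTH, ref_depth + 1) if target_is_higher_quality else ref_depth
    return list(range(cap + 1))

reference_depths = [0, 1, 2, 3, 1]  # example depth decisions for one CTU row
for ctu, d in enumerate(reference_depths):
    print(f"CTU {ctu}: ref depth {d} -> "
          f"higher-quality search {allowed_depths(d, True)}, "
          f"lower-quality search {allowed_depths(d, False)}")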
Authors: Negin Ghamsarian (Alpen-Adria-Universität Klagenfurt), Mario Taschwer (Alpen-Adria-Universität Klagenfurt), Doris Putzgruber-Adamitsch (Klinikum Klagenfurt), Stephanie Sarny (Klinikum Klagenfurt), Klaus Schoeffmann (Alpen-Adria-Universität Klagenfurt)
Abstract: In cataract surgery, the operation is performed with the help of a microscope. Since the microscope enables watching real-time surgery by up to two people only, a major part of surgical training is conducted using recorded videos. To optimize the training procedure with the video content, surgeons require an automatic relevance detection approach. In addition to relevance-based retrieval, these results can be further used for skill assessment and irregularity detection in cataract surgery videos. In this paper, a three-module framework is proposed to detect and classify the relevant phase segments in cataract videos. Taking advantage of an idle frame recognition network, the video is divided into idle and action segments. To boost the performance in relevance detection, Mask R-CNN is utilized to detect the cornea in each frame, where the relevant surgical actions are conducted. The spatio-temporally localized segments, containing higher-resolution information about the pupil texture and actions as well as complementary temporal information from the same phase, are fed into the relevance detection module. This module consists of four parallel recurrent CNNs, each responsible for detecting one of four relevant phases defined together with medical experts. The results are then integrated to classify the action phases as irrelevant or as one of the four relevant phases. Experimental results reveal that the proposed approach outperforms static CNNs and different configurations of feature-based and end-to-end recurrent networks.
25th International Conference on Pattern Recognition, Milan, Italy
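As a rough illustration of one of the recurrent CNNs mentioned in the abstract, the sketch below combines a small per-frame CNN, a GRU over the clip, and a binary relevance head (PyTorch assumed). The layer sizes are arbitrary; the paper’s actual backbone, cornea cropping, and training setup are not reproduced here.

# A toy recurrent CNN for binary phase-relevance prediction on a video clip.
import torch
import torch.nn as nn

class RelevanceDetector(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                        # per-frame features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # -> (N, 32)
        )
        self.gru = nn.GRU(32, hidden, batch_first=True)  # temporal aggregation
        self.head = nn.Linear(hidden, 1)                 # relevant vs. irrelevant

    def forward(self, clip):                             # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        _, h = self.gru(feats)                           # h: (1, B, hidden)
        return torch.sigmoid(self.head(h[-1]))           # relevance probability

# One such detector would run per relevant phase; a single forward pass:
model = RelevanceDetector()
print(model(torch.randn(2, 8, 3, 112, 112)).shape)       # torch.Size([2, 1])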
The FOG just moved from Lake Wörthersee to ITEC ;)! Lead researchers Dragi Kimovski and Narges Mehran from Radu Prodan’s lab and Josef Hammer from Hermann Hellwagner’s lab set up UNI-KLU’s first FOG infrastructure with 40 computing nodes, including 5 GPU-enabled ones.
Why should Cloud have all the FUN xD?
The Faculty of Technical Sciences at the University of Klagenfurt nominated Alexander Lercher from ITEC (Radu Prodan‘s group) for the Best Performer Award owing to his outstanding performance in his studies. The honor will be conferred on him at a public presentation in lecture hall -3 of the University of Klagenfurt on September 16, 2020. In the course of research carried out by the Studies and Examination Department, Alexander was identified as the most successful student in his field of study.
Elsevier’s Journal of Information and Software Technology (INSOF) accepted the manuscript “A Dynamic Evolutionary Multi-Objective Virtual Machine Placement Heuristic for Cloud Infrastructures”.
Authors: Ennio Torre, Juan J. Durillo (Leibniz Supercomputing Center), Vincenzo de Maio (Vienna University of Technology), Prateek Agrawal (University of Klagenfurt), Shajulin Benedict (Indian Institute of Information Technology), Nishant Saurabh (University of Klagenfurt), Radu Prodan (University of Klagenfurt).
Abstract: Minimizing resource wastage reduces the energy cost of operating a data center, but may also lead to considerably high resource over-commitment affecting the Quality of Service (QoS) of the running applications. The effective trade-off between resource wastage and over-commitment is a challenging task in virtualized Clouds and depends on the allocation of virtual machines (VMs) to physical resources. We propose in this paper a multi-objective method for dynamic VM placement, which exploits live migration mechanisms to simultaneously optimize resource wastage, over-commitment ratio, and migration energy. Our optimization algorithm uses a novel evolutionary meta-heuristic based on an island population model to approximate the Pareto optimal set of VM placements with good accuracy and diversity. Simulation results using traces collected from a real Google cluster demonstrate that our method outperforms related approaches by reducing the migration energy by up to 57% with a QoS increase below 6%.
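To make the three objectives above concrete, here is a minimal sketch of how a candidate VM placement could be scored and compared by Pareto dominance, as evolutionary multi-objective heuristics do. The formulas and numbers are simplified illustrations, not the models used in the paper.

# Score a VM placement on resource wastage, over-commitment, and migration
# energy, and compare two placements by Pareto dominance (illustrative only).
def objectives(placement, vms, hosts, migrated, energy_per_migration=10.0):
    """placement: vm -> host; vms/hosts map names to CPU demand/capacity."""
    wastage, overcommit = 0.0, 0.0
    for host, cap in hosts.items():
        load = sum(vms[v] for v, h in placement.items() if h == host)
        wastage += max(0.0, cap - load)             # unused capacity
        overcommit += max(0.0, load - cap) / cap    # demand beyond capacity
    return (wastage, overcommit, energy_per_migration * len(migrated))

def dominates(a, b):
    """a is no worse than b in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

vms = {"vm1": 2.0, "vm2": 3.0, "vm3": 4.0}   # CPU cores demanded
hosts = {"h1": 8.0, "h2": 4.0}               # CPU cores available
old = objectives({"vm1": "h1", "vm2": "h2", "vm3": "h2"}, vms, hosts, migrated=[])
new = objectives({"vm1": "h1", "vm2": "h1", "vm3": "h2"}, vms, hosts, migrated=["vm2"])
print(old, new, dominates(new, old))  # migrating vm2 removes over-commitment but costs energy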
Acknowledgements:
This work is supported by:
The manuscript “Expelliarmus: Semantic-Centric Virtual Machine Image Management in IaaS Clouds” has been accepted for publication in the Journal of Parallel and Distributed Computing (JPDC) (https://www.journals.elsevier.com/journal-of-parallel-and-distributed-computing).
Authors: Nishant Saurabh (University of Klagenfurt), Shajulin Benedict (Indian Institute of Information Technology, Kottayam), Jorge G. Barbosa (LIACC, Faculdade de Engenharia da Universidade do Porto), Radu Prodan (University of Klagenfurt).
Abstract: Infrastructure-as-a-Service (IaaS) Clouds concurrently accommodate diverse sets of user requests, requiring an efficient strategy for storing and retrieving virtual machine images (VMIs) at a large scale. VMI storage management requires dealing with multiple VMIs, typically in the magnitude of gigabytes, which entails VMI sprawl issues hindering elastic resource management and provisioning. Nevertheless, existing techniques to facilitate VMI management overlook VMI semantics (i.e., at the level of the base image and software packages), offering either a restricted possibility to identify and extract reusable functionalities or higher VMI publish and retrieval overheads. In this paper, we design, implement, and evaluate Expelliarmus, a novel VMI management system that helps to minimize storage, publish, and retrieval overheads. To achieve this goal, Expelliarmus incorporates three complementary features. First, it models VMIs as semantic graphs to expedite the similarity computation between multiple VMIs. Second, Expelliarmus provides semantic-aware VMI decomposition and base image selection to extract and store non-redundant base images and software packages. Third, Expelliarmus can also assemble VMIs based on the required software packages upon user request. We evaluate Expelliarmus through a representative set of synthetic Cloud VMIs on a real test-bed. Experimental results show that our semantic-centric approach is able to reduce the repository size by 2.3-22 times compared to state-of-the-art systems (e.g. IBM’s Mirage and Hemera), with a significant improvement in VMI publish performance and a slight improvement in retrieval performance.
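A minimal sketch of the semantic-centric idea: describe each VMI by its base image and software packages, compute pairwise similarity, and store every shared package only once. The Jaccard measure and the toy repository below are illustrative assumptions, not Expelliarmus’ actual semantic graph model.

# Toy VMI repository described at the level of base images and packages.
vmis = {
    "vmi_web":  {"base": "ubuntu20.04", "pkgs": {"nginx", "openssl", "python3"}},
    "vmi_data": {"base": "ubuntu20.04", "pkgs": {"postgres", "openssl", "python3"}},
    "vmi_ml":   {"base": "debian11",    "pkgs": {"python3", "numpy", "openssl"}},
}

def similarity(a, b):
    """Jaccard similarity over package sets, plus a bonus for a shared base image."""
    jaccard = len(a["pkgs"] & b["pkgs"]) / len(a["pkgs"] | b["pkgs"])
    return jaccard + (1.0 if a["base"] == b["base"] else 0.0)

# Deduplicated store: each package is kept once and VMIs are reassembled on request.
store = set().union(*(v["pkgs"] for v in vmis.values()))
refs = sum(len(v["pkgs"]) for v in vmis.values())
print(f"{refs} package references across VMIs, {len(store)} packages stored once")

names = list(vmis)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        print(x, y, round(similarity(vmis[x], vmis[y]), 2))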
Acknowledgements:
This work is supported by: