The manuscript “Simplified Workflow Simulation on Clouds based on Computation and Communication Noisiness” has been accepted for publication in the journal IEEE Transactions on Parallel and Distributed Systems (TPDS), which has an impact factor of 4.181.

Authors: Roland Mathá, Sasko Ristov, Thomas Fahringer, Radu Prodan.

Abstract: Many researchers rely on simulations to analyze and validate their methods on Cloud infrastructures. However, determining relevant simulation parameters and instantiating them correctly to match real Cloud performance is a difficult and costly operation, as minor configuration changes can easily produce unreliable and inaccurate simulation results. Using legacy values experimentally determined by other researchers can reduce the configuration costs, but remains inaccurate, as the underlying public Clouds and the number of active tenants differ widely and change over time. To overcome these deficiencies, we propose a novel model that simulates the dynamic Cloud performance by introducing noise in the computation and communication tasks, determined from a small set of runtime execution data. Although the estimation method appears costly, a comprehensive sensitivity analysis shows that the configuration parameters determined for a certain simulation setup can be reused for other simulations too, thereby reducing the tuning cost by up to 82.46% while lowering the simulation accuracy by only 1.98% on average. Extensive evaluation also shows that our novel model outperforms other state-of-the-art dynamic Cloud simulation models, achieving up to 22% lower makespan inaccuracy.
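The paper defines its own calibrated noise model; purely as an illustration of the general idea, the following Python sketch perturbs predicted task runtimes with multiplicative noise estimated from a small sample of measured executions (all names and values below are our own assumptions, not the authors' implementation):

```python
import random
import statistics

def estimate_noise(measured_runtimes, predicted_runtime):
    """Estimate mean and standard deviation of the multiplicative
    deviation ("noisiness") between measured and predicted runtimes."""
    ratios = [m / predicted_runtime for m in measured_runtimes]
    return statistics.mean(ratios), statistics.stdev(ratios)

def noisy_runtime(predicted_runtime, mu, sigma, rng=random):
    """Perturb a predicted runtime with Gaussian multiplicative noise,
    clamped so a simulated task never gets a non-positive duration."""
    return predicted_runtime * max(rng.gauss(mu, sigma), 0.01)

# Calibrate from a small set of runtime execution data (values invented)
measured = [10.4, 11.1, 9.8, 12.0, 10.7]  # seconds
mu, sigma = estimate_noise(measured, predicted_runtime=10.0)
print(noisy_runtime(10.0, mu, sigma))
```

The same kind of perturbation would apply to communication tasks, with transfer times in place of runtimes.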

Acknowledgments: This work has been supported by the ASPIDE Project funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 801091.

The CD Laboratory for Adaptive Streaming over HTTP and Emerging Networked Multimedia Services is being established at the Universität Klagenfurt. The laboratory's mission is to research new tools and methods for encoding, transporting, and playing back live and on-demand video using the HTTP adaptive streaming approach. Christian Doppler Laboratories conduct high-level, application-oriented basic research, with outstanding scientists cooperating with innovative companies. The Christian Doppler Forschungsgesellschaft is internationally regarded as a best-practice example for fostering this kind of cooperation. Christian Doppler Laboratories are financed jointly by the public sector and the participating companies. Their most important public sponsor is the Bundesministerium für Digitalisierung und Wirtschaftsstandort (BMDW). We cordially invite you to the festive opening of the CD Laboratory.

Dragi Kimovski

The manuscript “Multi-objective scheduling of extreme data scientific workflows in Fog” has been accepted for publication in the “Future Generation Computer Systems” journal, published by Elsevier. The journal is ranked Q1 (SJR) in the areas of “Hardware and Architectures” and “Computer Networks and Communications”, with an impact factor of 5.768 (Journal Citation Reports). The manuscript was prepared in collaboration with the Technical University of Vienna.

Authors: Vincenzo De Maio and Dragi Kimovski

Abstract: The concept of “extreme data” is a recent re-incarnation of the “big data” problem, distinguished by the massive amounts of information that must be analyzed under strict time requirements. In the past decade, Cloud data centers have been envisioned as the essential computing architectures for enabling extreme data workflows. However, Cloud data centers are often geographically distributed, which increases offloading latency and makes them unsuitable for processing workflows with strict latency requirements, as the data transfer times can be very high. Fog computing has emerged as a promising solution to this issue, as it allows partial workflow processing in lower network layers. Performing data processing in the Fog significantly reduces data transfer latency, making it possible to meet the workflows’ strict latency requirements. However, the Fog layer is highly heterogeneous and loosely connected, which affects the reliability and response time of task offloading. In this work, we investigate the potential of the Fog for scheduling extreme data workflows with strict response time requirements. Moreover, we propose a novel Pareto-based approach for task offloading in the Fog, called Multi-objective Workflow Offloading (MOWO). MOWO considers three optimization objectives, namely response time, reliability, and financial cost. We evaluate the MOWO workflow scheduler on a set of real-world biomedical, meteorological, and astronomy workflows representing examples of extreme data applications with strict latency requirements.
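The paper's scheduler is considerably more involved; as a minimal sketch of the Pareto idea it builds on, the following Python snippet filters offloading candidates to those not dominated in the three objectives (response time and cost minimized, reliability maximized). All names and numbers here are illustrative assumptions:

```python
from typing import NamedTuple

class Offloading(NamedTuple):
    response_time: float  # seconds, minimize
    cost: float           # monetary units, minimize
    reliability: float    # success probability, maximize

def dominates(a: Offloading, b: Offloading) -> bool:
    """True if a is at least as good as b in every objective and
    strictly better in at least one (Pareto dominance)."""
    no_worse = (a.response_time <= b.response_time and
                a.cost <= b.cost and a.reliability >= b.reliability)
    better = (a.response_time < b.response_time or
              a.cost < b.cost or a.reliability > b.reliability)
    return no_worse and better

def pareto_front(candidates):
    """Keep only the non-dominated offloading candidates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical offloading options for one workflow task
options = [Offloading(1.2, 0.05, 0.95),  # nearby Fog node
           Offloading(0.9, 0.20, 0.99),  # distant Cloud data center
           Offloading(1.5, 0.25, 0.90)]  # dominated by both
print(pareto_front(options))  # the third option is filtered out
```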

Acknowledgments: ASPIDE (H2020) and ATOMICFOG (ÖAD AT/MK 2018)

Narges Mehran presented the paper “MAPO: A Multi-Objective Model for IoT Application Placement in a Fog Environment” at the 9th International Conference on the Internet of Things (IoT 2019) in Bilbao, Spain (October 22-25, 2019).

Authors: Narges Mehran, Dragi Kimovski, and Radu Prodan (Alpen-Adria-Universität Klagenfurt).

Abstract: The emergence of the Fog computing paradigm, which leverages in-network virtualized resources, raises important challenges for resource and IoT application management in a heterogeneous environment with limited computing resources. In this work, we propose a novel Pareto-based approach for application placement close to the data sources, called Multi-objective IoT Application Placement in fOg (MAPO). MAPO models applications based on a finite state machine and uses three conflicting optimization objectives, namely completion time, energy consumption, and economic cost, considering both the computation and communication aspects. In contrast to existing solutions that optimize a single objective, MAPO enables multi-objective energy- and cost-aware application placement. To evaluate the quality of the MAPO placements, we created both simulated and real-world testbeds tailored to a set of medical IoT application case studies. Compared to state-of-the-art approaches, MAPO reduces the economic cost by 28%, decreases the energy requirements by 29-64% on average, and improves the completion time by a factor of six.
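MAPO's finite-state-machine model is richer than can be shown here; as a toy illustration only, the following Python sketch evaluates the three conflicting objectives for placing one application component on a candidate node, covering both the computation and the communication share (the node parameters and workload figures are invented for the example):

```python
from dataclasses import dataclass

@dataclass
class Node:
    cpu_rate: float   # instructions per second
    power: float      # watts while computing
    price: float      # monetary units per CPU-second
    bandwidth: float  # bytes per second from the data source

def placement_objectives(workload, data_in, node):
    """Completion time, energy, and cost of one placement candidate."""
    compute_time = workload / node.cpu_rate
    transfer_time = data_in / node.bandwidth
    completion_time = compute_time + transfer_time  # seconds
    energy = node.power * compute_time              # joules (compute only)
    cost = node.price * compute_time                # monetary units
    return completion_time, energy, cost

# A small Fog node close to the sensor vs. a faster but distant Cloud VM
fog = Node(cpu_rate=1e9, power=10, price=0.0001, bandwidth=50e6)
cloud = Node(cpu_rate=4e9, power=80, price=0.0004, bandwidth=5e6)
print(placement_objectives(2e9, 100e6, fog))    # slower CPU, fast link
print(placement_objectives(2e9, 100e6, cloud))  # fast CPU, slow link
```

A Pareto-based placement such as MAPO would then keep the candidates that are non-dominated across these three values.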

Track: IoT Edge and Cloud @IoT’19
Acknowledgement: This work was funded by the Austrian Research Promotion Agency (FFG) through the Tiroler Cloud project (no. 848448).

Christian Timmerer

With the 5G Summit Carinthia, a short symposium on the new 5G mobile communications technology, the 5G Playground Carinthia was officially opened today. The 5G Playground Carinthia is Austria's first service facility for researching and further developing 5G-specific applications, services, and business models. The Bundesministerium für Verkehr, Innovation und Technologie (BMVIT) and the state of Carinthia finance this unique research laboratory in the south of Austria. A1 Telekom Austria provides the technical infrastructure.

The 5G Playground Carinthia offers all research, innovation, and educational institutions as well as SMEs and start-ups the unique opportunity to test their products and applications with this new technology and to trial them in live operation.

The Alpen-Adria-Universität Klagenfurt, and in particular the Institut für Informationstechnologie, participates in the 5G Playground with a use case on “Virtual Realities”. The project researches, develops, tests, and evaluates selected VR applications over 5G networks, e.g., the streaming of 360° videos and of new forms of immersive media such as volumetric data (point clouds). These applications require and exercise both the high data rates and the extremely low latencies of 5G networks, in the downlink (streaming to a VR headset) as well as in the uplink (streaming live content from a 360° camera). In addition, the edge computing components envisaged by 5G are used to achieve higher presentation quality and faster reaction times of the VR system when a user moves or interacts. VR systems are being developed that demonstrate the capabilities of 5G.

Link: https://5gplayground.at/

Read more about the High-Level Symposium.

The University of Klagenfurt hosted the first technical meeting of the ASPIDE project (September 30 – October 2), which aims at designing scalable software solutions for exascale computing.

ASPIDE Meeting at Klagenfurt University

2019 ASPIDE Meeting Klagenfurt (Social Event)

Natalia Sokolova

Our paper has been accepted for publication at the MMM 2020 Conference on Multimedia Modeling. The work was conducted in the context of the ongoing OVID project.

Authors: Natalia Sokolova, Klaus Schoeffmann, Mario Taschwer (AAU Klagenfurt); Doris Putzgruber-Adamitsch, Yosuf El-Shabrawi (Klinikum Klagenfurt)

Abstract: In the field of ophthalmic surgery, many clinicians nowadays record their microscopic procedures with a video camera and use the recorded footage for later purposes, such as forensics, teaching, or training. However, in order to use the video material efficiently after surgery, the video content needs to be analyzed automatically. Important semantic content to be analyzed and indexed in these videos are the operation instruments, since they indicate the corresponding operation phase and surgical action. Related work has already shown that it is possible to accurately detect instruments in cataract surgery videos. However, their underlying dataset (from the CATARACTS challenge) has very good visual quality, which does not reflect the typical quality of videos acquired in general hospitals. In this paper, we therefore analyze the generalization performance of deep learning models for instrument recognition under dataset change. More precisely, we trained models such as ResNet-50, Inception v3, and NASNet Mobile on a dataset of high visual quality (CATARACTS) and tested them on another dataset of low visual quality (Cataract-101), and vice versa. Our results show that generalizability is rather low overall, but clearly worse for the models trained on the high-quality dataset. Another important observation is that the trained models are able to detect similar instruments in the other dataset even when their appearance differs.
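As a rough, self-contained sketch of the kind of cross-dataset evaluation described above (not the authors' actual pipeline; the class count and the random stand-in data are assumptions for illustration), one could adapt a ResNet-50 for multi-label instrument recognition and measure per-label accuracy on frames from the other dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet50

NUM_INSTRUMENTS = 10  # hypothetical number of instrument classes

# Multi-label head: several instruments can be visible in one frame
model = resnet50(weights=None)  # in practice: pretrain, then fine-tune on CATARACTS
model.fc = torch.nn.Linear(model.fc.in_features, NUM_INSTRUMENTS)
model.eval()

# Random tensors stand in for frames of the *other* dataset (Cataract-101)
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, NUM_INSTRUMENTS)).float()
loader = DataLoader(TensorDataset(frames, labels), batch_size=4)

correct = total = 0
with torch.no_grad():
    for x, y in loader:
        preds = (torch.sigmoid(model(x)) > 0.5).float()  # per-label decisions
        correct += (preds == y).sum().item()
        total += y.numel()
print(f"per-label accuracy on the unseen dataset: {correct / total:.2%}")
```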