Authors: Hadi Amirpour, Ekrem Çetinkaya (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Mohammad Ghanbari (University of Tehran, University of Essex)

Abstract: Adaptive HTTP streaming is the preferred method to deliver multimedia content on the internet. It provides multiple representations of the same content in different qualities (i.e., bit-rates and resolutions) and allows the client to request segments from the available representations in a dynamic, adaptive way depending on its context. The growing number of representations in adaptive HTTP streaming makes encoding one video segment at different representations a challenging task in terms of encoding time-complexity. In this paper, information from both the highest and the lowest quality representations is used to limit Rate Distortion Optimization (RDO) for each Coding Tree Unit (CTU) in High Efficiency Video Coding (HEVC). Our proposed method first encodes the highest quality representation and subsequently uses it to encode the lowest quality representation. In particular, the block structure and the selected reference frame of both the highest and the lowest quality representations are then used to predict and shorten the RDO process of each CTU for the intermediate quality representations. By employing parallel processing techniques, our proposed method introduces a delay of two CTUs. Experimental results show that a significant reduction in time-complexity over the reference software (38%) and the state-of-the-art (10%) is achieved, while quality degradation is negligible.

Keywords:  HTTP adaptive streaming, Multi-rate encoding, HEVC, Fast block partitioning

Link: Data Compression Conference 2020
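
The idea in the abstract can be illustrated with a minimal sketch, assuming that the CU depths observed in the already encoded highest- and lowest-quality representations bound the depths the RDO loop evaluates for the co-located CTU of an intermediate representation (the function and variable names below are illustrative, not the authors' implementation):

```python
# A minimal sketch (not the authors' code) of bounding the RDO depth search
# for one CTU of an intermediate representation using the CU depths of the
# co-located CTU in the highest- and lowest-quality encodes.

def depth_search_range(ctu_depths_high, ctu_depths_low):
    """Return (min_depth, max_depth) to evaluate for one CTU.

    Assumption: an intermediate representation rarely uses CUs larger than
    those of the lowest-quality encode or smaller than those of the
    highest-quality encode.
    """
    min_depth = min(ctu_depths_low)    # coarsest blocks come from the low-quality encode
    max_depth = max(ctu_depths_high)   # finest blocks come from the high-quality encode
    return min_depth, max_depth


# Example: the high-quality encode split this CTU down to depth 3,
# while the low-quality encode stopped at depth 1.
lo, hi = depth_search_range(ctu_depths_high=[2, 3, 3, 2], ctu_depths_low=[1, 1, 2, 1])
print(f"Evaluate only depths {lo}..{hi} in the RDO loop")  # -> 1..3
```

Narrowing the evaluated depth range in this way is what skips large parts of the RDO search and yields the reported reduction in time-complexity.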

The manuscript “Simplified Workflow Simulation on Clouds based on Computation and Communication Noisiness” has been accepted for publication in the IEEE Transactions on Parallel and Distributed Systems (TPDS) journal, which has an impact factor of 4.181.

Authors: Roland Mathá, Sasko Ristov, Thomas Fahringer, Radu Prodan.

Abstract: Many researchers rely on simulations to analyze and validate their researched methods on Cloud infrastructures. However, determining relevant simulation parameters and correctly instantiating them to match the real Cloud performance is a difficult and costly operation, as minor configuration changes can easily generate unreliable and inaccurate simulation results. Using legacy values experimentally determined by other researchers can reduce the configuration costs, but is still inaccurate, as the underlying public Clouds and the number of active tenants differ widely and change dynamically over time. To overcome these deficiencies, we propose a novel model that simulates the dynamic Cloud performance by introducing noise in the computation and communication tasks, determined by a small set of runtime execution data. Although the estimation method appears costly, a comprehensive sensitivity analysis shows that the configuration parameters determined for a certain simulation setup can be used for other simulations too, thereby reducing the tuning cost by up to 82.46%, while lowering the simulation accuracy by only 1.98% on average. Extensive evaluation also shows that our novel model outperforms other state-of-the-art dynamic Cloud simulation models, leading to up to 22% lower makespan inaccuracy.

Acknowledgments: This work has been supported by the ASPIDE Project funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 801091.
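
A minimal sketch of the noise-injection idea follows, assuming the dynamic Cloud performance can be approximated by multiplicative factors drawn from distributions fitted to a small set of runtime measurements (the function name and parameter values below are illustrative, not taken from the paper):

```python
# A minimal sketch: perturb nominal task durations with multiplicative noise
# so that repeated simulation runs reflect the variability of a real Cloud.
import random

def noisy_duration(nominal_seconds, mean_factor=1.0, stddev=0.08, rng=random):
    """Scale a nominal task duration by a Gaussian noise factor (illustrative)."""
    factor = max(0.0, rng.gauss(mean_factor, stddev))
    return nominal_seconds * factor

# Computation and communication tasks would use separately fitted parameters.
compute_time = noisy_duration(120.0, mean_factor=1.05, stddev=0.10)   # computation task
transfer_time = noisy_duration(30.0, mean_factor=1.20, stddev=0.25)   # communication task
print(f"simulated compute: {compute_time:.1f}s, transfer: {transfer_time:.1f}s")
```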

The Christian Doppler (CD) Laboratory for Adaptive Streaming over HTTP and Emerging Networked Multimedia Services is being established at the University of Klagenfurt. The laboratory's mission is to research new tools and methods for the encoding, transport, and playback of live and on-demand video using HTTP adaptive streaming. Christian Doppler Laboratories conduct application-oriented basic research at a high level; outstanding scientists cooperate with innovative companies for this purpose. The Christian Doppler Research Association is internationally regarded as a best-practice example for fostering this kind of collaboration. Christian Doppler Laboratories are jointly funded by the public sector and the participating companies. The most important public funding body is the Federal Ministry for Digital and Economic Affairs (BMDW). We cordially invite you to the festive opening of the CD Laboratory.

Dragi Kimovski

The manuscript “Multi-objective scheduling of extreme data scientific workflows in Fog” has been accepted for publication in the “Future Generation Computer Systems” journal published by Elsevier. The journal is ranked Q1 (SJR) in the areas of “Hardware and Architectures” and “Computer Networks and Communications”, with an Impact Factor of 5.768 (Journal Citation Reports). The manuscript was prepared in collaboration with the Technical University of Vienna.

Authors: Vincenzo De Maio and Dragi Kimovski

Abstract: The concept of “extreme data” is a recent re-incarnation of the “big data” problem, which is distinguished by the massive amounts of information that must be analyzed with strict time requirements. In the past decade, Cloud data centers have been envisioned as the essential computing architectures for enabling extreme data workflows. However, Cloud data centers are often geographically distributed. Such geographical distribution increases offloading latency, making them unsuitable for processing workflows with strict latency requirements, as the data transfer times can be very high. Fog computing has emerged as a promising solution to this issue, as it allows partial workflow processing in lower network layers. Performing data processing in the Fog significantly reduces data transfer latency, making it possible to meet the workflows’ strict latency requirements. However, the Fog layer is highly heterogeneous and loosely connected, which affects the reliability and response time of task offloading. In this work, we investigate the potential of the Fog for scheduling extreme data workflows with strict response time requirements. Moreover, we propose a novel Pareto-based approach for task offloading in the Fog, called Multi-objective Workflow Offloading (MOWO). MOWO considers three optimization objectives, namely response time, reliability, and financial cost. We evaluate the MOWO workflow scheduler on a set of real-world biomedical, meteorological, and astronomy workflows representing examples of extreme data applications with strict latency requirements.

Acknowledgments: ASPIDE H2020 and ATOMICFOG ÖAD AT/MK 2018
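
A minimal sketch of the Pareto-based selection underlying such a scheduler is shown below, assuming three minimized objectives per candidate placement (response time, failure probability as the complement of reliability, and financial cost); the tuples are illustrative and not the MOWO implementation:

```python
# A minimal sketch of Pareto filtering over candidate offloading decisions.

def dominates(a, b):
    """True if candidate a is at least as good as b in all objectives
    and strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only non-dominated (response_time, failure_prob, cost) tuples."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Example: three possible placements of a workflow task.
placements = [(2.5, 0.02, 0.10),   # fast Fog node, cheap, slightly less reliable
              (4.0, 0.01, 0.30),   # Cloud VM, reliable but slower and pricier
              (5.0, 0.03, 0.35)]   # dominated by both others
print(pareto_front(placements))    # -> the first two placements
```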

At the 26th International Conference on MultiMedia Modeling (MMM 2020) in Daejeon, Korea, researchers from ITEC have successfully presented several scientific contributions to the multimedia community. First, Natalia Sokolova presented her first paper on “Evaluating the Generalization Performance of Instrument Classification in Cataract Surgery Videos”. Next, Sabrina Kletz presented her work on “Instrument Recognition in Laparoscopy for Technical Skill Assessment”. Finally, Andreas Leibetseder talked about “GLENDA: Gynecologic Laparoscopy Endometriosis Dataset”.

The paper “Deblurring Cataract Surgery Videos Using a Multi-Scale Deconvolutional Neural Network” has been accepted for publication at the “IEEE International Symposium on Biomedical Imaging”, to be held in Iowa City, Iowa, USA (April 3-7, 2020). This conference is a joint initiative of the IEEE Signal Processing Society (SPS) and the IEEE Engineering in Medicine and Biology Society (EMBS).
Authors: Negin Ghamsarian, Klaus Schoeffmann, Mario Taschwer

Abstract: A common quality impairment observed in surgery videos is blur, caused by object motion or a defocused camera. Degraded image quality hampers the progress of machine-learning-based approaches in learning and recognizing semantic information in surgical video frames like instruments, phases, and surgical actions. This problem can be mitigated by automatically deblurring video frames as a preprocessing method for any subsequent video analysis task. In this paper, we propose and evaluate a multi-scale deconvolutional neural network to deblur cataract surgery videos. Experimental results confirm the effectiveness of the proposed approach in terms of the visual quality of frames as well as PSNR improvement.

Keywords: Video Deblurring, Deconvolutional Neural Networks, Cataract Surgery Videos

Acknowledgment: This work was funded by the FWF Austrian Science Fund under grant P 31486-N31
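
Since the evaluation is reported in terms of PSNR improvement, a minimal sketch of the standard PSNR computation is shown below (textbook formula, not the authors' evaluation code; the frames are random placeholders rather than surgery data):

```python
# A minimal sketch of the PSNR metric used to report deblurring quality.
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two frames."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((max_val ** 2) / mse)

# Example with random frames standing in for a blurred vs. deblurred pair.
ref = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
out = ref.copy()
out[::2] = np.clip(out[::2].astype(int) + 5, 0, 255).astype(np.uint8)  # mild distortion
print(f"PSNR: {psnr(ref, out):.2f} dB")
```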

Natalia Sokolova

VBS 2020 in Daejeon (South Korea) was an amazing event with a lot of fun! Eleven teams, each consisting of two users (coming from 11 different countries), competed against each other in a private session of about 5 hours and a public session of almost 3 hours. ITEC also participated with two teams. In total, all teams had to solve 22 challenging video retrieval tasks on a shared dataset consisting of 1,000 hours of content (V3C1)! Many thanks go to the VBS teams, but also to the VBS organizers as well as the local organizers, who did a great job and made VBS 2020 a wonderful and entertaining event!

Prof. Radu Prodan

High-tech meets history. When thousands of international software developers gather at the Vienna Imperial Castle (Hofburg Wien), you can feel that magic is about to happen. Exactly that occurred on November 28 and 29 at this year’s We Are Developers Congress in Vienna.

Josef Hammer - Edge Computing

‘Are you on the Edge? Or still in the Cloud?’ – On one of the three stages, Josef Hammer inspired over 200 IT enthusiasts with a 30-minute talk on Edge Computing and 5G networks. As with the transition from mainframes to desktop computers, in the upcoming years a lot of processing will move from the cloud to the edge of the network, i.e. closer to the user. This will particularly affect areas with high data volume (IoT, AI) and low latency requirements (IoT).

Josef gave a short introduction to this exciting new area, its benefits and use cases, the frameworks and tools developers can use right now, and where we might be headed. The presentation of our 5G Playground Carinthia in particular was followed with great interest by the attendees, who enjoyed a first glance at the ambitious research projects conducted there.

More information:

https://5gplayground.at/

https://www.wearedevelopers.com/events/congress-vienna/

The paper has been accepted (through double-blind peer review) as a regular paper at the Euromicro PDP’2020 conference, to be held in Västerås, Sweden, on 11-13 March 2020.

Title: M3AT: Monitoring Agents Assignment Model for Data-Intensive Applications

Authors: Vladislav Kashansky, Dragi Kimovski, Radu Prodan, Prateek Agrawal, Fabrizio Marozzo, Iuhasz Gabriel, Marek Justyna and Javier Garcia-Blas

Abstract: Nowadays, massive amounts of data are acquired, transferred, and analyzed nearly in real-time by utilizing a large number of computing and storage elements interconnected through high-speed communication networks. However, one issue that still requires research effort is enabling efficient monitoring of the applications and infrastructures of such complex systems. In this paper, we introduce an Integer Linear Programming (ILP) model called M3AT for the optimised assignment of monitoring agents and aggregators on large-scale computing systems. We identified a set of requirements from three representative data-intensive applications and exploited them to define the model’s input parameters. We evaluated the scalability of M3AT using the Constraint Integer Programming (SCIP) solver with its default configuration on synthetic data sets. Preliminary results show that the model provides optimal assignments for systems composed of up to 200 monitoring agents while keeping the number of aggregators constant, and it demonstrates variable sensitivity with respect to the scale of monitoring data aggregators and the limitation policies imposed.

Keywords: Monitoring systems, high performance computing, aggregation, systems control, data-intensive systems, generalized assignment problem, SCIP optimization suite.

Acknowledgement: This work has received funding from the EC-funded project H2020 FETHPC ASPIDE (Agreement #801091)
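
As the keywords suggest, the assignment core of such a model is a generalized assignment problem; a generic sketch of that formulation (not the exact M3AT objective or constraints) is:

```latex
% Generic generalized assignment problem: assign each monitoring agent i to
% exactly one aggregator j without exceeding aggregator capacities.
\begin{align}
\min_{x}\quad & \sum_{i=1}^{n}\sum_{j=1}^{m} c_{ij}\,x_{ij} \\
\text{s.t.}\quad & \sum_{j=1}^{m} x_{ij} = 1 \quad \forall i && \text{(each agent assigned once)} \\
& \sum_{i=1}^{n} w_{ij}\,x_{ij} \le C_j \quad \forall j && \text{(aggregator capacity)} \\
& x_{ij} \in \{0,1\} \quad \forall i,j
\end{align}
```

Here $c_{ij}$ denotes the cost of assigning agent $i$ to aggregator $j$, $w_{ij}$ its monitoring load, and $C_j$ the capacity of aggregator $j$; an ILP solver such as SCIP then searches for a cost-minimal feasible assignment.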