Paper accepted in the 10th IEEE Conference on Big Data and Cloud Computing: “Cloud-Edge Offloading Model for Vehicular Traffic Analysis”


Authors: Dragi Kimovski, Dijana C. Bogatinoska, Narges Mehran, Aleksandar Karadimce, Natasha Paunkoska, Radu Prodan, Ninoslav Marina

Abstract: The proliferation of smart sensing and computing devices, capable of collecting vast amounts of data, has made gathering the necessary vehicular traffic data relatively easy. However, the analysis of these big data sets requires computational resources, which are currently provided by Cloud Data Centers. Nevertheless, Cloud Data Centers can have unacceptably high latency for vehicular analysis applications with strict time requirements. The recent introduction of the Edge computing paradigm, as an extension of Cloud services, has partially moved the processing of big data closer to the data sources, thus addressing this issue. Unfortunately, it has also introduced multiple challenges related to resource management. We therefore present a model for scheduling vehicular traffic analysis applications with partial task offloading across the Cloud-Edge continuum. The approach represents traffic applications as a set of interconnected tasks composed into a workflow that can be partially offloaded to the Edge. We evaluated the approach in a simulated Cloud-Edge environment with two representative vehicular traffic applications focused on video stream analysis. Our results show that the presented approach reduces the application response time by up to eight times while improving energy efficiency by a factor of four.
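As a rough illustration of the partial offloading idea (the capacities, link rates, and the greedy per-task placement rule below are our own simplification, not the scheduling model from the paper), each workflow task can be placed on the tier where its transfer-plus-compute time is lowest:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cycles: float        # compute demand in megacycles
    data_mb: float       # input data transferred from the sensor (MB)

# Illustrative capacities and uplink rates (assumptions, not from the paper).
EDGE_MHZ, CLOUD_MHZ = 2_000, 16_000        # processing rates
EDGE_MBPS, CLOUD_MBPS = 100.0, 20.0        # bandwidth towards each tier

def response_time(task: Task, tier: str) -> float:
    """Transfer time plus compute time on the chosen tier, in seconds."""
    if tier == "edge":
        return task.data_mb * 8 / EDGE_MBPS + task.cycles / EDGE_MHZ
    return task.data_mb * 8 / CLOUD_MBPS + task.cycles / CLOUD_MHZ

def place(workflow: list[Task]) -> dict[str, str]:
    """Greedy partial offloading: each task goes where it finishes sooner."""
    return {t.name: min(("edge", "cloud"), key=lambda tier: response_time(t, tier))
            for t in workflow}

workflow = [
    Task("decode", cycles=500, data_mb=50),     # data-heavy: Edge wins
    Task("detect", cycles=40_000, data_mb=2),   # compute-heavy: Cloud wins
]
print(place(workflow))   # -> {'decode': 'edge', 'detect': 'cloud'}
```

The split mirrors the intuition in the abstract: bandwidth-bound tasks stay at the Edge, while compute-bound tasks are offloaded to the Cloud.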

Feedback on online teaching

Teaching in times of Corona is a particular challenge. An online survey among AAU students shows that they were delighted with the digital teaching formats. Here you will find the best feedback from the students:
https://www.aau.at/feedback-zur-online-lehre/

Josef received very positive reviews, for example: “The course became more and more enjoyable, not only because of the content but also because of the technical aids: He integrated special effects, course intro, applause at the weekly quizzes, which significantly loosened the atmosphere.”


Prof. Radu Prodan

FFG project “ADaptive and Autonomous data Performance connectivity and decentralized Transport decision-making Network” (ADAPT) accepted


This project started during the most critical phase of the COVID-19 outbreak in Europe, when the demand for Personal Protective Equipment (PPE) from each country’s health care system surpassed national stock amounts by far. Therefore, the ADAPT consortium agreed to bundle its joint resources to develop an adaptive and autonomous decision-making network that supports the stakeholders along the PPE supply chain in their endeavour to save and protect human lives as quickly as possible.

The partners will do that by providing a Blockchain solution capable of optimizing supply, demand, and transport capacities between them, elaborating a technical solution for transparent and real-time certification checks on equipment and production documentation, as well as distributed and parallel decision-making capabilities on all levels of this multi-dimensional research problem.

In total, the world community will spend more than €49.6 billion on PPE medical equipment in 2020. Of this, €7.7 billion could be saved through ADAPT’s transport optimization, and an additional €5.18 billion could be freed up in the financing and banking sector and reinvested immediately into the expansion of the world’s national health care systems.

ADAPT is a 36-month project submitted to the 6th Call for Austrian-Chinese Cooperation RTD Projects (FFG & CAS).

Partners:

  • Alpen-Adria Universität Klagenfurt, Institute of Information Technology (UNI-KLU)
  • Johannes-Kepler-Universität Linz, Intelligent Transport Systems-Sustainable Transport Logistics 4.0. (JKU)
  • Logoplan – Logistik, Verkehrs und Umweltschutz Consulting GmbH (LP)
  • Intact GmbH (INTACT)
  • Chinese Academy of Sciences, Institute of Computing Technology (ICTCAS)
Prof. Radu Prodan

Paper accepted in IEEE Internet Computing: “Inter-host Orchestration Platform Architecture for Ultra-scale Cloud Applications”


The manuscript “Inter-host Orchestration Platform Architecture for Ultra-scale Cloud Applications” has been accepted for publication in an upcoming issue of IEEE Internet Computing.

Authors: Sasko Ristov, Thomas Fahringer, Radu Prodan, Magdalena Kostoska, Marjan Gusev, Shahram Dustdar

Abstract: Cloud data centers exploit many memory page management techniques that reduce the total memory utilization and access time. These techniques are mainly applied within a hypervisor on a single host (intra-hypervisor), without the possibility of exploiting the knowledge obtained by a group of hosts (clusters). We introduce a novel inter-hypervisor orchestration platform to provide intelligent memory page management for horizontal scaling. It will use the performance behavior of faster virtual machines to activate pre-fetching mechanisms that reduce the number of page faults. The overall platform consists of five modules: profiler, collector, classifier, predictor, and pre-fetcher. We developed and deployed a prototype of the platform, which comprises the first three modules. The evaluation shows that data collection is feasible in real-time, which means that if our approach is used on top of the existing memory page management techniques, it can significantly lower the miss rate that initiates page faults.
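The pre-fetching idea can be sketched with a toy page-trace simulation (entirely illustrative; the platform’s real modules operate on live hypervisor data, not on recorded lists). A slower virtual machine running the same horizontally scaled workload pulls in pages that a faster peer touched just ahead of it:

```python
def simulate(trace, resident, prefetch_from=None, window=2):
    """Count page faults for an access trace, given an initial resident set.
    If prefetch_from (a faster VM's trace) is supplied, pages that the faster
    VM touched around this point are pulled into memory ahead of the access,
    as an inter-hypervisor pre-fetcher module would."""
    resident = set(resident)
    faults = 0
    for i, page in enumerate(trace):
        if prefetch_from:
            resident.update(prefetch_from[i:i + window])  # look-ahead prefetch
        if page not in resident:
            faults += 1          # page fault: fetch on demand
            resident.add(page)
    return faults

fast_vm_trace = [1, 2, 3, 4, 5, 6]
slow_vm_trace = [1, 2, 3, 4, 5, 6]   # same workload, horizontally scaled
baseline = simulate(slow_vm_trace, resident={1})
prefetched = simulate(slow_vm_trace, resident={1}, prefetch_from=fast_vm_trace)
print(baseline, prefetched)          # prefetching eliminates the cold misses
```

In this toy run the baseline faults on every cold page, while the pre-fetched run avoids all of them; real miss-rate reductions depend on how well the peer traces align.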

Paper accepted at MMM’21: Towards Optimal Multirate Encoding for HTTP Adaptive Streaming


Authors: Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Ekrem Çetinkaya (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK)

Abstract: HTTP Adaptive Streaming (HAS) enables high quality streaming of video contents. In HAS, videos are divided into short intervals called segments, and each segment is encoded at various qualities/bitrates to adapt to the available bandwidth. Multiple encodings of the same content impose a high cost for video content providers. To reduce the time-complexity of encoding multiple representations, state-of-the-art methods typically encode the highest quality representation first and reuse the information gathered during its encoding to accelerate the encoding of the remaining representations. As encoding the highest quality representation requires the highest time-complexity compared to the lower quality representations, it would be a bottleneck in parallel encoding scenarios, and the overall time-complexity will be limited to that of the highest quality representation. In this paper, to address this problem, we consider all representations from the highest to the lowest quality as a potential single reference to accelerate the encoding of the other, dependent representations. We formulate a set of encoding modes and assess their performance in terms of BD-Rate and time-complexity, using both VMAF and PSNR as objective metrics. Experimental results show that encoding a middle quality representation as a reference can significantly reduce the maximum encoding complexity and hence is an efficient way of encoding multiple representations in parallel. Based on this fact, a fast multirate encoding method is proposed which utilizes the depth and prediction mode of a middle quality representation to accelerate the encoding of the dependent representations.
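The parallel-encoding bottleneck argued in the abstract can be shown with back-of-the-envelope numbers (the per-representation timings and the 0.4 speed-up factor below are illustrative assumptions, not measurements from the paper):

```python
# Standalone encoding times per representation (seconds, illustrative only).
t = {"2160p": 100.0, "1080p": 45.0, "720p": 20.0, "480p": 9.0}
SPEEDUP = 0.4  # assumed fraction of time remaining when reusing reference info

def parallel_time(reference: str) -> float:
    """The reference encodes alone first; the dependent representations then
    run in parallel, each accelerated by reusing the reference's CU depth
    and prediction-mode decisions."""
    dependents = [SPEEDUP * v for k, v in t.items() if k != reference]
    return t[reference] + max(dependents)

for ref in ("2160p", "1080p"):
    print(ref, parallel_time(ref))
```

With these numbers, the highest quality reference yields 100 + 0.4·45 = 118 s, while the middle quality reference yields 45 + 0.4·100 = 85 s, matching the paper’s observation that a middle representation is the better single reference for parallel multirate encoding.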

The International MultiMedia Modeling Conference (MMM)

25-27 January 2021, Prague, Czech Republic

Link: https://mmm2021.cz

Keywords: HEVC, Video Encoding, Multirate Encoding, DASH

Grand Challenge Keynote on “Deep Video Understanding and the User” at ACMMM2020


Today, Klaus Schöffmann will present his keynote talk on “Deep Video Understanding and the User” at the ACM Multimedia 2020 Grand Challenge (GC) on “Deep Video Understanding”. The talk will highlight user aspects of automatic video content search based on deep neural networks, and show several examples where users have serious issues in finding the correct content scene when video search systems rely too much on the “automatic search” scenario and ignore the user behind it. Registered users of ACMMM2020 can watch the talk online; the corresponding GC session is scheduled for October 14, 21:00-22:00 (15:00-16:00 New York time).

Link: https://2020.acmmm.org/

ICPR 2020: Relevance Detection in Cataract Surgery Videos by Spatio-Temporal Action Localization


Authors: Negin Ghamsarian (Alpen-Adria-Universität Klagenfurt), Mario Taschwer (Alpen-Adria-Universität Klagenfurt), Doris Putzgruber-Adamitsch (Klinikum Klagenfurt), Stephanie Sarny (Klinikum Klagenfurt), Klaus Schoeffmann (Alpen-Adria-Universität Klagenfurt)

Abstract: In cataract surgery, the operation is performed with the help of a microscope. Since the microscope enables watching real-time surgery by up to two people only, a major part of surgical training is conducted using the recorded videos. To optimize the training procedure with the video content, the surgeons require an automatic relevance detection approach. In addition to relevance-based retrieval, these results can be further used for skill assessment and irregularity detection in cataract surgery videos. In this paper, a three-module framework is proposed to detect and classify the relevant phase segments in cataract videos. Taking advantage of an idle frame recognition network, the video is divided into idle and action segments. To boost the performance in relevance detection, Mask R-CNN is utilized to detect the cornea in each frame where the relevant surgical actions are conducted. The spatio-temporally localized segments, containing higher-resolution information about the pupil texture and actions, and complementary temporal information from the same phase, are fed into the relevance detection module. This module consists of four parallel recurrent CNNs responsible for detecting four relevant phases that have been defined with medical experts. The results are then integrated to classify the action phases as irrelevant or as one of the four relevant phases. Experimental results reveal that the proposed approach outperforms static CNNs and different configurations of feature-based and end-to-end recurrent networks.

25th International Conference on Pattern Recognition, Milan, Italy

Link: https://www.micc.unifi.it/icpr2020/

Paper accepted ISM’20: Dynamic Segment Repackaging at the Edge for HTTP Adaptive Streaming


Authors: Jesús Aguilar Armijo (Alpen-Adria-Universität Klagenfurt), Babak Taraghi (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: Adaptive video streaming systems typically support different media delivery formats, e.g., MPEG-DASH and HLS, replicating the same content multiple times into the network. Such a diversified system results in inefficient use of storage, caching, and bandwidth resources. The Common Media Application Format (CMAF) emerges to simplify HTTP Adaptive Streaming (HAS), providing a single encoding and packaging format of segmented media content and offering the opportunities of bandwidth savings, more cache hits, and less storage needed. However, CMAF is not yet supported by most devices. To solve this issue, we present a solution where we maintain the main advantages of CMAF while supporting heterogeneous devices using different media delivery formats. For that purpose, we propose to dynamically convert the content from CMAF to the desired media delivery format at an edge node. We study the bandwidth savings with our proposed approach using an analytical model and simulation, resulting in bandwidth savings of up to 20% with different media delivery format distributions. We analyze the runtime impact of the required operations on the segmented content in two scenarios: the classic one, with four different media delivery formats, and the proposed scenario, using CMAF-only delivery through the network. We compare both scenarios with different edge compute power assumptions. Finally, we perform experiments in a real video streaming testbed delivering MPEG-DASH using CMAF content to serve a DASH and an HLS client, performing the media conversion for the latter one.
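The core storage/traffic argument can be sketched with a toy cache-level model (this is our own simplification, not the paper’s analytical model, which reports up to 20% savings under realistic request distributions): in the classic scenario the edge must fetch one copy per requested delivery format, while with CMAF it fetches a single copy and repackages per client.

```python
def origin_traffic(requests_by_format, segment_mb, cmaf=False):
    """Origin-to-edge traffic for one segment requested in several formats.
    Classic: one copy per delivery format actually requested.
    CMAF: a single CMAF copy, repackaged at the edge for each client."""
    formats_needed = 1 if cmaf else sum(1 for n in requests_by_format.values() if n)
    return formats_needed * segment_mb

# Hypothetical client mix for one segment (request counts per format).
mix = {"DASH": 60, "HLS": 35, "Smooth": 5, "HDS": 0}
classic = origin_traffic(mix, segment_mb=2.0)
cmaf = origin_traffic(mix, segment_mb=2.0, cmaf=True)
print(classic, cmaf)   # three replicated copies vs. a single CMAF copy
```

Cache hits improve for the same reason: every client format maps to the same cached CMAF segment instead of to format-specific copies.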

IEEE International Symposium on Multimedia (ISM)

2-4 December 2020, Naples, Italy

https://www.ieee-ism.org/

Keywords: CMAF, Edge Computing, HTTP Adaptive Streaming (HAS)

PCS’21 Special Session: Video encoding for large scale HAS deployments


Abstract: Video accounts for the vast majority of today’s internet traffic, and video coding is vital for efficient distribution towards the end-user. Software- and/or cloud-based video coding is becoming more and more attractive, specifically with the plethora of video codecs available right now (e.g., AVC, HEVC, VVC, VP9, AV1, etc.), which is also supported by the latest Bitmovin Video Developer Report 2020. Thus, improvements in video coding enabling efficient adaptive video streaming are a requirement for current and future video services. HTTP Adaptive Streaming (HAS) is now mainstream due to its simplicity, reliability, and standard support (e.g., MPEG-DASH). For HAS, the video is usually encoded in multiple versions (i.e., representations) of different resolutions, bitrates, codecs, etc., and each representation is divided into chunks (i.e., segments) of equal length (e.g., 2-10 sec) to enable dynamic, adaptive switching during streaming based on the user’s context conditions (e.g., network conditions, device characteristics, user preferences). In this context, most scientific papers in the literature target various improvements which are evaluated based on open, standard test sequences. We argue that optimizing video encoding for large scale HAS deployments is the next step in order to improve the Quality of Experience (QoE) while optimizing costs.
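The representation-switching mechanism described above can be sketched as a minimal rate-based adaptation rule (the bitrate ladder and the 0.8 safety margin are illustrative assumptions; production players use far richer heuristics):

```python
# A toy bitrate ladder (kbps by vertical resolution); real ladders are
# content- and codec-dependent.
LADDER = {360: 800, 480: 1500, 720: 3000, 1080: 6000, 2160: 16000}

def select_representation(bandwidth_kbps: float, safety: float = 0.8):
    """Pick the highest representation whose bitrate fits within a safety
    fraction of the measured throughput; fall back to the lowest rung."""
    budget = bandwidth_kbps * safety
    fitting = [(res, br) for res, br in LADDER.items() if br <= budget]
    return max(fitting) if fitting else min(LADDER.items())

print(select_representation(5000))   # -> (720, 3000)
print(select_representation(100))    # -> (360, 800), the lowest rung
```

Each segment boundary gives the player a chance to re-run this selection, which is exactly what makes per-representation encoding decisions so consequential at large scale.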

Session organizers: Christian Timmerer (Bitmovin, Austria), Mohammad Ghanbari (University of Essex, UK), and Alex Giladi (Comcast, USA).

Picture Coding Symposium (PCS)  at 29 June to 2 July 2021, UK

Link: https://pcs2021.org

H2020 project “DataCloud: Enabling the Big Data Pipeline Lifecycle on the Computing Continuum” accepted with an excellent score of 15 (out of 15)


DataCloud provides a novel paradigm covering the complete lifecycle of managing Big Data pipelines through discovery, design, simulation, provisioning, deployment, and adaptation across the Computing Continuum. Big Data pipelines in DataCloud interconnect the end-to-end industrial operations of collecting, preprocessing, and filtering data, transforming and delivering insights, training simulation models, and applying them in the cloud to achieve a business goal. DataCloud delivers a toolbox of new languages, methods, infrastructures, and prototypes for discovering, simulating, deploying, and adapting Big Data pipelines on heterogeneous and untrusted resources. DataCloud separates the design from the run-time aspects of Big Data pipeline deployment, empowering domain experts to take an active part in their definition. The main exploitation targets the operation and monetization of the toolbox in European markets and in the Spanish-speaking countries of Latin America. Its aim is to lower the technological entry barriers for the incorporation of Big Data pipelines in organizations’ business processes and make them accessible to a wider set of stakeholders regardless of the hardware infrastructure. DataCloud validates its plan through a strong selection of complementary business cases offered by SMEs and a large company targeting higher mobile business revenues in smart marketing campaigns, reduced production costs of sport events, trustworthy eHealth patient data management, and reduced time to production and better analytics in Industry 4.0 manufacturing. The balanced consortium consists of 11 partners from eight countries. It has three strong university partners specialised in Big Data, distributed computing, and high-productivity languages, led by a research institute. DataCloud gathers six SMEs and one large company (as technology providers and stakeholders/users/early adopters) that prioritise the business focus of the project in achieving high business impacts.

DataCloud is a 36-month project submitted to the H2020-ICT-2020-2 call as a Research and Innovation Action (RIA).

Principal investigator at University of Klagenfurt is Univ.-Prof. Dr. Radu Prodan.