Journal: Computer Communications (Elsevier)

Alireza Erfanian (Alpen-Adria-Universität Klagenfurt), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt).

Abstract: Recent advances in embedded systems and communication technologies enable novel, non-safety applications in Vehicular Ad Hoc Networks (VANETs). Video streaming has become a popular core service for such applications. In this paper, we present QoCoVi, a QoE- and cost-aware adaptive video streaming approach for the Internet of Vehicles (IoV) that delivers video segments requested by mobile users at specified qualities and deadlines. Considering a multitude of transmission data sources with different capacities and costs, QoCoVi aims to serve the desired video qualities at minimum cost. By applying Dynamic Adaptive Streaming over HTTP (DASH) principles, QoCoVi considers cached video segments on vehicles equipped with storage capacity as the lowest-cost sources for serving requests.

We design QoCoVi in two SDN-based operational modes: (i) centralized and (ii) distributed. In centralized mode, we can obtain a suitable solution by introducing a mixed-integer linear programming (MILP) optimization model that can be executed on the SDN controller. However, to cope with the computational overhead of the centralized approach in real IoV scenarios, we propose a fully distributed version of QoCoVi based on the proximal Jacobi alternating direction method of multipliers (ProxJ-ADMM) technique. The effectiveness of the proposed approach is confirmed through emulation with Mininet-WiFi in different scenarios.
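
To make the centralized mode more concrete, below is a minimal sketch of how a cost-minimizing source-selection MILP could be expressed with PuLP. It is not the QoCoVi model from the paper: the sources, costs, and capacity limits are illustrative assumptions standing in for the actual QoE, bandwidth, and deadline constraints.

```python
# Minimal, illustrative MILP sketch for cost-aware source selection.
# NOT the exact QoCoVi formulation; sources, costs, and capacities are
# made-up values used only to show the structure of such a model.
import pulp

requests = ["r1", "r2", "r3"]                  # requested video segments
sources = ["vehicle_cache", "edge", "cloud"]   # candidate transmission sources
cost = {"vehicle_cache": 1, "edge": 3, "cloud": 10}      # cost per served segment
capacity = {"vehicle_cache": 1, "edge": 2, "cloud": 3}   # max segments per source

prob = pulp.LpProblem("qocovi_sketch", pulp.LpMinimize)

# x[s][r] = 1 if request r is served by source s
x = pulp.LpVariable.dicts("x", (sources, requests), cat=pulp.LpBinary)

# Objective: minimize the total delivery cost
prob += pulp.lpSum(cost[s] * x[s][r] for s in sources for r in requests)

# Each request is served by exactly one source
for r in requests:
    prob += pulp.lpSum(x[s][r] for s in sources) == 1

# Per-source capacity (a stand-in for bandwidth/deadline constraints)
for s in sources:
    prob += pulp.lpSum(x[s][r] for r in requests) <= capacity[s]

prob.solve()
for s in sources:
    for r in requests:
        if x[s][r].value() > 0.5:
            print(f"{r} served by {s}")
```

In the distributed mode, an equivalent problem would instead be decomposed and solved iteratively across nodes (e.g., via ProxJ-ADMM) rather than centrally on the SDN controller.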

Every minute, more than 500 hours of video material are published on YouTube. These days, moving images account for a vast majority of data traffic, and there is no end in sight. This means that technologies that can improve the efficiency of video streaming are becoming all the more important. This is exactly what Hadi Amirpourazarian is working on in the Christian Doppler Laboratory ATHENA at the University of Klagenfurt. Read the full article here.

The new 5G standard will bring data centers closer to the customer. ITEC researchers are developing a system for the rapid distribution of computing tasks.
The newspaper “Der Standard” reported on this work and published the article “Verteiltes Rechnen dank Fog-Computings” (“Distributed computing thanks to fog computing”).


The kick-off meeting of the FF4EuroHPC project “CardioHPC” took place online on Friday, March 11, 2022. The purpose of this first meeting was primarily to define the work structures and work packages and to get to know each partner region. The project partners are the following institutions: INNO, Ss. Cyril and Methodius University in Skopje, and the University of Klagenfurt.


Electronic health records, like ELGA in Austria, provide an overview of laboratory results, diagnostics, and therapies. With the help of machine learning, much could be learned from the personal and private data of individuals for use in the treatment of others. However, the use of such data is a delicate matter, especially when it comes to diseases that carry a stigma. Researchers involved in the EU project “Enabling the Big Data Pipeline Lifecycle on the Computing Continuum (DataCloud)” are working to make new forms of information processing suitable for medical purposes. Dragi Kimovski and his colleagues recently presented their findings in a publication. Read the complete article here.


Title: Big data analytics in Industry 4.0 ecosystems

Authors: Gagangeet Singh Aujla, Radu Prodan, Danda B. Rawat

Journal: “Software: Practice and Experience”

Full editorial/article: https://onlinelibrary.wiley.com/doi/10.1002/spe.3008

The second online meeting between Austria and China took place on February 21, 2022. The consortium discussed aspects of sustainable transportation networks, #blockchain, and new development strategies in line with the #UnitedNations #IYBSSD #SDGs.

Prof. Radu Prodan

Title: Big Data Pipelines on the Computing Continuum: Tapping the Dark Data


Authors: Dumitru Roman, Radu Prodan, Nikolay Nikolov, Ahmet Soylu, Mihhail Matskin, Andrea Marrella, Dragi Kimovski, Brian Elvesæter, Anthony Simonet-Boulogne, Giannis Ledakis, Hui Song, Francesco Leotta, Evgeny Kharlamov


Abstract: Big Data pipelines are essential for leveraging Dark Data, i.e., data that is collected but not used and turned into value. However, tapping their potential requires going beyond existing approaches and frameworks for Big Data processing. The Computing Continuum enables new opportunities for managing Big Data pipelines in terms of efficient management of heterogeneous and untrustworthy resources. This article discusses the Big Data pipeline lifecycle on the Computing Continuum and its associated challenges, and outlines a future research agenda in this area.

Vignesh V Menon

2022 NAB Broadcast Engineering and Information Technology (BEIT) Conference

April 24-26, 2022 | Las Vegas, US

Conference Website

Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Christian Feldmann (Bitmovin, Klagenfurt), Adithyan Ilangovan (Bitmovin, Klagenfurt), Martin Smole (Bitmovin, Klagenfurt), Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt).

Abstract:

Current per-title encoding schemes encode the same video content at various bitrates and spatial resolutions to find the optimal bitrate-resolution pairs (known as the bitrate ladder) for each video content in Video on Demand (VoD) applications. In live streaming applications, however, a fixed bitrate ladder is used for simplicity and efficiency, avoiding the additional latency of finding optimized bitrate-resolution pairs for every video content. Yet an optimized bitrate ladder may result in (i) decreased storage or network resources and/or (ii) increased Quality of Experience (QoE). In this paper, a fast and efficient per-title encoding scheme (Live-PSTR) is proposed, tailor-made for live Ultra High Definition (UHD) High Framerate (HFR) streaming. It includes a pre-processing step in which Discrete Cosine Transform (DCT)-energy-based low-complexity spatial and temporal features are used to determine the complexity of each video segment, based on which the optimized encoding resolution and framerate for streaming at every target bitrate are determined. Experimental results show that, on average, Live-PSTR yields bitrate savings of 9.46% and 11.99% to maintain the same PSNR and VMAF scores, respectively, compared to the HTTP Live Streaming (HLS) bitrate ladder.
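
As a rough illustration of the kind of DCT-energy-based features mentioned above, the sketch below computes a block-wise texture energy per luma frame (a spatial complexity value) and the frame-to-frame change of that energy (a temporal complexity value). The block size, DC-term handling, and averaging are simplifying assumptions, not the exact feature definitions used in Live-PSTR.

```python
# Rough sketch of DCT-energy-based spatial/temporal complexity features.
# Block size, DC handling, and normalization are simplified assumptions.
import numpy as np
from scipy.fft import dctn

BLOCK = 32  # analysis block size (assumption)

def block_dct_energy(frame: np.ndarray) -> np.ndarray:
    """Per-block texture energy: sum of absolute AC DCT coefficients."""
    h, w = frame.shape
    h, w = h - h % BLOCK, w - w % BLOCK
    energies = []
    for y in range(0, h, BLOCK):
        row = []
        for x in range(0, w, BLOCK):
            coeffs = dctn(frame[y:y + BLOCK, x:x + BLOCK].astype(np.float64), norm="ortho")
            coeffs[0, 0] = 0.0  # drop the DC term, keep texture (AC) energy
            row.append(np.abs(coeffs).sum())
        energies.append(row)
    return np.array(energies)

def spatial_complexity(frame: np.ndarray) -> float:
    """Average block texture energy of a luma frame (spatial complexity)."""
    return float(block_dct_energy(frame).mean())

def temporal_complexity(prev: np.ndarray, curr: np.ndarray) -> float:
    """Average absolute change of block energies between frames (temporal complexity)."""
    return float(np.abs(block_dct_energy(curr) - block_dct_energy(prev)).mean())

# Example on random luma frames (stand-ins for decoded video frames)
f0 = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
f1 = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
print("spatial:", spatial_complexity(f1), "temporal:", temporal_complexity(f0, f1))
```

In Live-PSTR, such segment-level complexity values are then used to pick the encoding resolution and framerate for each target bitrate, as described in the abstract above.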

Architecture of Live-PSTR

As a Valentine’s Day gift to video coding enthusiasts across the globe, we release Video Complexity Analyzer (VCA) version 1.0 as open-source software on February 14, 2022. The primary objective of VCA is to become the best spatial and temporal complexity predictor for every frame, video segment, and video, which aids in predicting encoding parameters for applications such as scene-cut detection and online per-title encoding. VCA leverages x86 SIMD and multi-threading optimizations for effective performance. While VCA is primarily designed as a video complexity analyzer library, a command-line executable is provided to facilitate testing and development. We expect VCA to be utilized in many leading video encoding solutions in the coming years.
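
As a rough illustration of how per-frame complexity values could feed an application such as scene-cut detection, the sketch below flags frames whose temporal complexity spikes relative to the previous frame. The CSV column names (POC, h) and the jump threshold are assumptions for illustration, not VCA's documented output format.

```python
# Illustrative only: threshold-based shot-change flagging on per-frame
# complexity values. The column names ("POC", "h") and the threshold are
# assumptions, not VCA's documented CSV schema.
import csv

def flag_scene_cuts(csv_path: str, jump_factor: float = 3.0) -> list[int]:
    """Return frame numbers whose temporal complexity jumps sharply vs. the previous frame."""
    cuts, prev_h = [], None
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            poc, h = int(row["POC"]), float(row["h"])
            if prev_h is not None and prev_h > 0 and h > jump_factor * prev_h:
                cuts.append(poc)
            prev_h = h
    return cuts

# e.g.: cuts = flag_scene_cuts("complexity.csv")
```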

VCA is available as an open-source library, published under the GPLv3 license. For more details, please visit the software's online documentation here. The source code can be found here.

Heatmap of spatial complexity (E)

Heatmap of temporal complexity (h)