NOSSDAV’21: The 31st edition of the Workshop on Network and Operating System Support for Digital Audio and Video
Sept. 28-Oct. 1, 2021, Istanbul, Turkey
Conference Website

Authors: Babak Taraghi (Alpen-Adria-Universität Klagenfurt), Abdelhak Bentaleb (National University of Singapore), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), Roger Zimmermann (National University of Singapore) and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: Adaptive BitRate (ABR) algorithms play a crucial role in delivering the highest possible Quality of Experience (QoE) to viewers in HTTP Adaptive Streaming (HAS). Online video streaming service providers use HAS, the dominant video streaming technique on the Internet, to deliver the best QoE for their users. Viewer satisfaction relies heavily on how well the media player's ABR algorithm can adapt the stream's quality to the current network conditions. The QoE of end-to-end video streaming sessions has been evaluated in many research projects to give better insight into quality metrics. Objective evaluation models such as ITU Telecommunication Standardization Sector (ITU-T) P.1203 allow the calculation of a Mean Opinion Score (MOS) from various QoE metrics, while subjective evaluation remains the best approach for investigating end-user opinion of the quality experienced during a video streaming session. We have conducted subjective evaluations with crowdsourced participants and evaluated the MOS of the sessions using the ITU-T P.1203 quality model. This paper's main contribution is a comparison of subjective and objective evaluations for well-known heuristic-based ABR algorithms.

Keywords: HTTP Adaptive Streaming, ABR Algorithms, Quality of Experience, Crowdsourcing, Subjective Evaluation, Objective Evaluation, MOS, (ITU-T) P.1203
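For illustration, the following sketch maps per-segment session metrics (delivered bitrate, stalling, quality switches) to a 1-to-5 MOS-like score. It is a simplified, hypothetical QoE proxy, not the ITU-T P.1203 model; the weights and the formula are assumptions made for this example.

```python
# Minimal sketch of a heuristic QoE proxy for a HAS session.
# NOTE: illustrative only -- the weights and the formula are assumptions,
# not the ITU-T P.1203 model.

from dataclasses import dataclass
from typing import List


@dataclass
class Segment:
    bitrate_kbps: float   # encoded bitrate of the downloaded representation
    stall_s: float        # stalling time observed before/while playing it


def session_qoe(segments: List[Segment],
                max_bitrate_kbps: float,
                w_stall: float = 4.0,
                w_switch: float = 1.0) -> float:
    """Map a streaming session to a 1..5 MOS-like score (higher is better)."""
    if not segments:
        return 1.0
    # Reward average delivered quality relative to the best representation.
    avg_quality = sum(s.bitrate_kbps for s in segments) / (
        len(segments) * max_bitrate_kbps)
    # Penalize total stalling time and the number of quality switches.
    total_stall = sum(s.stall_s for s in segments)
    switches = sum(1 for a, b in zip(segments, segments[1:])
                   if a.bitrate_kbps != b.bitrate_kbps)
    score = 1.0 + 4.0 * avg_quality \
        - w_stall * (total_stall / (total_stall + 10.0)) \
        - w_switch * (switches / len(segments))
    return max(1.0, min(5.0, score))


if __name__ == "__main__":
    session = [Segment(4300, 0.0), Segment(1500, 1.2), Segment(4300, 0.0)]
    print(round(session_qoe(session, max_bitrate_kbps=4300), 2))
```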

The paper “A Two-Sided Matching Model for Data Stream Processing in the Cloud–Fog Continuum” has been accepted for publication at the 21st IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGrid 2021).

Authors: Narges Mehran, Dragi Kimovski and Radu Prodan

Abstract: Latency-sensitive and bandwidth-intensive stream processing applications are dominant traffic generators over the Internet. A stream consists of a continuous sequence of data elements, which require processing in near real-time. To reduce communication latency and network congestion, Fog computing complements Cloud services by moving the computation towards the edge of the network. Unfortunately, the heterogeneity of the new Cloud–Fog continuum raises important challenges related to deploying and executing data stream applications. We explore in this work a two-sided stable matching model called Cloud–Fog to data stream application matching (CODA) for deploying a distributed application, represented as a workflow of stream processing microservices, on heterogeneous Cloud–Fog computing resources. In CODA, the application microservices rank the continuum resources based on their microservice stream processing time, while resources rank the stream processing microservices based on their residual bandwidth. A stable many-to-one matching algorithm assigns microservices to resources based on their mutual preferences, aiming to optimize the complete stream processing time on the application side and the total streaming traffic on the resource side.
We evaluate the CODA algorithm in simulated and real-world Cloud–Fog scenarios and achieve 11 to 45% lower stream processing time and 1.3 to 20% lower streaming traffic compared to related state-of-the-art approaches.
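A minimal sketch of the many-to-one deferred-acceptance idea behind such a matching is shown below; the data structures, preference models, and tie-breaking are illustrative assumptions, not the exact CODA algorithm from the paper.

```python
# Illustrative many-to-one deferred-acceptance matching of microservices to
# resources (names and preference models are assumptions, not the CODA
# paper's exact formulation).

def match(microservices, resources, proc_time, capacity, residual_bw):
    """
    microservices: list of microservice ids
    resources:     list of resource ids
    proc_time[m][r]:   estimated stream processing time of m on r (lower is better for m)
    residual_bw[r][m]: residual bandwidth r keeps when hosting m (higher is better for r)
    capacity[r]:   how many microservices r can host
    """
    # Each microservice proposes to resources from fastest to slowest.
    prefs = {m: sorted(resources, key=lambda r: proc_time[m][r])
             for m in microservices}
    next_choice = {m: 0 for m in microservices}
    hosted = {r: [] for r in resources}           # current tentative assignment
    free = list(microservices)

    while free:
        m = free.pop()
        if next_choice[m] >= len(prefs[m]):
            continue                              # m stays unmatched
        r = prefs[m][next_choice[m]]
        next_choice[m] += 1
        hosted[r].append(m)
        if len(hosted[r]) > capacity[r]:
            # The resource keeps the microservices leaving it the most residual
            # bandwidth and rejects the worst one, which proposes again later.
            hosted[r].sort(key=lambda x: residual_bw[r][x], reverse=True)
            rejected = hosted[r].pop()
            free.append(rejected)
    return hosted
```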

ITEC is delighted to announce the next speaker in our guest lecture series: Prof. Carsten Griwodz from the University of Oslo & SIMULA Research Laboratory, Norway. The online course will take place from March 5 to May 28, 2021.

This course is meant to provide participants with the means to evaluate end-user satisfaction with interactive applications. Please register for course 780.411.

Further information is available HERE.

Authors: Vishu Madaan (Lovely Professional University, India), Aditya Roy (Lovely Professional University, India), Charu Gupta (Bhagwan Parashuram Institute of Technology, New Delhi, India), Prateek Agrawal (Institute of ITEC, University of Klagenfurt, Austria), Cristian Bologa (Babes-Bolyai University, Cluj-Napoca, Romania) and Radu Prodan (Institute of ITEC, University of Klagenfurt, Austria).

Abstract: The COVID-19 (also known as SARS-CoV-2) pandemic has spread across the entire world. It is a contagious disease that easily spreads from one person to another through direct contact and is classified by experts into five categories: asymptomatic, mild, moderate, severe, and critical. As of 5 December 2020, more than 66 million people have been infected worldwide, with more than 22 million active patients, and the rate is accelerating. More than 1.5 million patients (approximately 2.5% of total reported cases) across the world have lost their lives. In many places, COVID-19 detection takes place through reverse transcription polymerase chain reaction (RT-PCR) tests, which may take longer than 48 hours. This delay is one major reason for the disease's severity and rapid spread. We propose in this paper a two-phase X-ray image classification model called XCOVNet for early COVID-19 detection using convolutional neural networks. XCOVNet detects COVID-19 infections in chest X-ray patient images in two phases. The first phase pre-processes a dataset of 392 chest X-ray images, of which half are COVID-19 positive and half are negative. The second phase trains and tunes the neural network model to achieve a 98.44% accuracy in patient classification.

Journal: New Generation Computing

Acknowledgement: This work is partially supported by ARTICONF
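For readers curious how a two-phase workflow like the one described above could look in code, the following Keras sketch normalizes the input images and trains a small binary classifier; the architecture and hyperparameters are assumptions for illustration, not the XCOVNet model from the paper.

```python
# Minimal sketch of a binary chest X-ray classifier in Keras.
# The architecture and hyperparameters below are illustrative assumptions,
# not the XCOVNet model described in the paper.

import tensorflow as tf
from tensorflow.keras import layers


def build_model(input_shape=(224, 224, 1)):
    return tf.keras.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=input_shape),   # phase 1: normalize pixels
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),                   # COVID-19 positive vs. negative
    ])


model = build_model()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Phase 2 (training/tuning) would then run on the pre-processed X-ray dataset:
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```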


Authors: Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK)

Abstract: Light field imaging enables post-processing capabilities such as refocusing, changing the view perspective, and depth estimation. As light field images are represented by multiple views, they contain a huge amount of data, which makes compression inevitable. Although there are some proposals to efficiently compress light field images, their main focus is on encoding efficiency. However, some important functionalities such as viewpoint and quality scalability, random access, and uniform quality distribution have not been addressed adequately. In this paper, an efficient light field image compression method based on a deep neural network is proposed, which classifies multiple views into various layers. In each layer, the target view is synthesized from the available views of previously encoded/decoded layers using a deep neural network. This synthesized view is then used as a virtual reference for inter-coding the target view. In this way, random access to an arbitrary view is provided. Moreover, uniform quality distribution among multiple views is addressed. At higher bitrates, where random access to an arbitrary view is more crucial, the required bitrate to access the requested view is minimized.

Keywords: Light field, Compression, Scalable, Random Access.

Data Compression Conference (DCC)

23-26 March 2021, Snowbird, Utah, USA

https://www.cs.brandeis.edu/~dcc
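For intuition about layered coding with random access, the sketch below groups the views of an 8x8 light field grid into layers so that each layer only depends on views from previously decoded layers; the layer pattern is an assumption for illustration, not the classification learned by the paper's neural network.

```python
# Illustrative layer assignment for an 8x8 light field view grid: coarser
# layers are decoded first and can serve as (synthesized) references for
# finer layers. The pattern below is an assumption, not the paper's scheme.

def view_layer(row, col, steps=(7, 4, 2, 1)):
    """Return the layer index of view (row, col); lower layers are coded first."""
    for layer, step in enumerate(steps):
        if row % step == 0 and col % step == 0:
            return layer
    return len(steps)


grid = [[view_layer(r, c) for c in range(8)] for r in range(8)]
for row in grid:
    print(" ".join(str(layer) for layer in row))

# Random access to view (r, c) then only requires decoding views whose layer
# index is strictly smaller, plus the requested view itself.
```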


The project “Kärntner Fog: A 5G-Enabled Fog Infrastructure for Automated Operation of Carinthia’s 5G Playground Application Use Cases” proposes a new infrastructure automation use case in the 5G Playground Carinthia (5GPG). Kärntner Fog plans to create and deploy a distributed service middleware infrastructure over a diverse set of novel heterogeneous 5G edge devices, complemented by a high-performance Cloud data center accessible with low latency according to 5G standards. Such an infrastructure is currently missing in the 5GPG and will represent a horizontal backbone that interconnects and integrates the application use cases. Kärntner Fog will automate the development and operation of the application use cases in the 5GPG in an integrated and more cost-effective fashion to enable more science and innovation within a limited budget.

Involved Organisations: BABEG, ITEC@AAU, ONDA TLC GmbH, FFG/KWF

Coordinator: Prof. Radu Prodan
Project Start: 01.01.2021
Project Duration: 48 months

Dr. Shajulin Benedict worked at the Alpen-Adria-Universität Klagenfurt in 2019 as part of a research cooperation. Now the cooperation between build! and the IIIT-Kottayam Startup Center is getting underway. In 2021, build! launches a Startup in Residence program (rollout planned for summer), with India as its first partner. An exchange for startup entrepreneurs between Kottayam and Carinthia is planned from Q3. More information is available here.

The manuscript “Cloud, Fog or Edge: Where to Compute?” has been accepted for publication in an upcoming issue of IEEE Internet Computing.

Authors: Dragi Kimovski, Roland Mathá, Josef Hammer, Narges Mehran, Hermann Hellwagner and Radu Prodan

Abstract: The computing continuum extends the high-performance cloud data centers with energy-efficient and low-latency devices close to the data sources located at the edge of the network.
However, the heterogeneity of the computing continuum raises multiple challenges related to application management. These include where to offload an application – from the cloud to the edge – to meet its computation and communication requirements.
To support these decisions, we provide in this article a detailed performance and carbon footprint analysis of a selection of use case applications with complementary resource requirements across the computing continuum over a real-life evaluation testbed.
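As a toy illustration of the kind of offloading decision such an analysis can inform, the sketch below picks the continuum layer with the best trade-off between estimated completion time and carbon footprint; the numbers and the weighting are assumptions, not results from the article.

```python
# Toy offloading decision across the computing continuum. The numbers and the
# weighting are illustrative assumptions, not measurements from the article.

LAYERS = {
    # name:  (compute_speedup, network_rtt_s, carbon_g_per_task)
    "edge":  (1.0, 0.005, 2.0),
    "fog":   (2.0, 0.015, 4.0),
    "cloud": (8.0, 0.060, 9.0),
}


def best_layer(task_compute_s, transfers, carbon_weight=0.05):
    """Pick the layer minimizing runtime plus a carbon penalty (lower is better)."""
    def cost(layer):
        speedup, rtt, carbon = LAYERS[layer]
        runtime = task_compute_s / speedup + transfers * rtt
        return runtime + carbon_weight * carbon
    return min(LAYERS, key=cost)


print(best_layer(task_compute_s=0.2, transfers=10))   # latency-sensitive -> edge/fog
print(best_layer(task_compute_s=30.0, transfers=5))   # compute-heavy -> cloud
```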


The paper “Dynamic Multi-objective Scheduling of Microservices in the Cloud” has been accepted at the 13th IEEE/ACM International Conference on Utility and Cloud Computing (UCC).

Authors: Hamid Mohammadi Fard, Radu Prodan, Felix Wolf

Abstract: For many applications, a microservices architecture promises better performance and flexibility compared to a conventional monolithic architecture. In spite of these advantages, deploying microservices poses various challenges for service developers and providers alike. One of these challenges is the efficient placement of microservices on the cluster nodes, as improper allocation of microservices can quickly waste resource capacities and cause low system throughput. In the last few years, new features in orchestration frameworks, such as support for multiple pod schedulers in Kubernetes, have improved microservice scheduling, but exploiting them requires both the service developer and the service provider to analyze workload behavior. Using the memory and CPU requests specified in the service manifest, we propose a general microservices scheduling mechanism that can operate efficiently in private clusters or enterprise clouds. We model the scheduling problem as a complex variant of the knapsack problem and solve it using a multi-objective optimization approach. Our experiments show that the proposed mechanism is highly scalable and simultaneously increases the utilization of both memory and CPU, which in turn leads to better throughput compared to the state-of-the-art.
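As a small illustration of placement driven by CPU and memory requests, the sketch below assigns each microservice to the feasible node that keeps the two resource utilizations most balanced; the scoring rule is an assumption for illustration, not the paper's multi-objective knapsack optimizer.

```python
# Illustrative placement of microservices on cluster nodes using CPU/memory
# requests. The balancing score is an assumption, not the paper's
# multi-objective formulation.

def place(microservices, nodes):
    """
    microservices: list of (name, cpu_request, mem_request)
    nodes: dict name -> [cpu_free, mem_free] (mutated as capacity is consumed)
    Returns a mapping microservice name -> node name (or None if unschedulable).
    """
    placement = {}
    for name, cpu, mem in microservices:
        feasible = [n for n, (c, m) in nodes.items() if c >= cpu and m >= mem]
        if not feasible:
            placement[name] = None
            continue

        # Prefer the node where CPU and memory utilization stay most balanced,
        # so neither resource is exhausted long before the other.
        def score(n):
            c, m = nodes[n]
            cpu_left, mem_left = (c - cpu) / c, (m - mem) / m
            return abs(cpu_left - mem_left)

        best = min(feasible, key=score)
        nodes[best][0] -= cpu
        nodes[best][1] -= mem
        placement[name] = best
    return placement


services = [("frontend", 0.5, 256), ("cache", 0.2, 1024), ("worker", 1.0, 512)]
cluster = {"node-a": [2.0, 2048], "node-b": [4.0, 4096]}
print(place(services, cluster))
```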