Title: WELFake: Word Embedding over Linguistic Features for Fake News Detection

Authors: Pawan Kumar Verma (Lovely Professional University, India | GLA University, India), Prateek Agrawal (University of Klagenfurt, Austria | Lovely Professional University, India), Ivone Amorim (MOG Technologies | University of Porto, Portugal), Radu Prodan (University of Klagenfurt, Austria)

Abstract: Social media is a popular medium for the dissemination of real-time news all over the world. Easy and quick information proliferation is one of the reasons for its popularity. An extensive number of users with different age groups, genders and societal beliefs are engaged in social media websites. Despite these favorable aspects, a significant disadvantage comes in the form of fake news, as people usually read and share information without caring about its genuineness. Therefore, it is imperative to research methods for the authentication of news. To address this issue, this paper proposes a two-phase benchmark model named WELFake based on word embedding (WE) over linguistic features for fake news detection using machine learning classification. The first phase pre-processes the dataset and validates the veracity of news content by using linguistic features. The second phase merges the linguistic feature sets with WE and applies voting classification. To validate its approach, this paper also carefully designs a novel WELFake dataset with approximately 72,000 articles, which incorporates different datasets to generate an unbiased classification output. Experimental results show that the WELFake model categorises news as real or fake with 96.73% accuracy, which improves the overall accuracy by 1.31% compared to BERT and 4.25% compared to CNN models. Our frequency-based model, which focuses on analyzing writing patterns, outperforms predictive-based related works implemented using the Word2vec WE method by up to 1.73%.
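Below is a minimal sketch, assuming scikit-learn, of how the two-phase idea could look in code: hand-crafted linguistic features are merged with a frequency-based text representation (TF-IDF used here as an illustrative stand-in for the WE features) and fed to a soft-voting ensemble. The feature set, classifiers and parameters are illustrative placeholders, not the exact WELFake pipeline.

# Minimal sketch (not the exact WELFake pipeline): merge simple linguistic
# features with a frequency-based text representation and vote over classifiers.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

class LinguisticFeatures(BaseEstimator, TransformerMixin):
    # Illustrative hand-crafted features: article length, punctuation counts,
    # and the share of fully capitalised words.
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        rows = []
        for text in X:
            words = text.split()
            caps = sum(1 for w in words if w.isupper() and len(w) > 1)
            rows.append([len(words), text.count("!"), text.count("?"),
                         caps / max(len(words), 1)])
        return np.array(rows)

features = FeatureUnion([
    ("linguistic", LinguisticFeatures()),
    ("text", TfidfVectorizer(max_features=5000)),  # frequency-based stand-in for WE
])

model = Pipeline([
    ("features", features),
    ("vote", VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(n_estimators=200))],
        voting="soft")),
])
# Usage (hypothetical data): model.fit(train_texts, train_labels); model.predict(test_texts)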

Acknowledgement: ARTICONF project

The full paper has been accepted to the main track of the International Conference on Computational Science (ICCS’21). The conference will be held in a virtual format on 16-18 June 2021.

Title: Monte-Carlo Approach to the Computational Capacities Analysis of the Computing Continuum

Authors: Vladislav Kashansky, Gleb Radchenko, Radu Prodan

Abstract: This article proposes an approach to the problem of computational capacities analysis of the computing continuum via the theoretical framework of equilibrium phase transitions and numerical simulations. We introduce the concept of phase transitions in the computing continuum and show how this phenomenon can be explored in the context of workflow makespan, which we treat as an order parameter. We simulate the behavior of the computational network in the equilibrium regime within the framework of the XY-model defined over a complex agent network with Barabasi-Albert topology. More specifically, we define a Hamiltonian over the complex network topology and sample the resulting spin-orientation distribution with the Metropolis-Hastings technique. A key aspect of the paper is the derivation of the bandwidth matrix as the emergent effect of the “low-level” collective spin interaction. This allows us to study the first-order approximation to the makespan of the “high-level” system-wide workflow model in the presence of data-flow anisotropy and phase transitions of the bandwidth matrix controlled by means of a “noise regime” parameter. For this purpose, we have built a simulation engine in Python 3.6. Simulation results confirm the existence of the phase transition, revealing complex transformations in the computational abilities of the agents. A notable feature is that the bandwidth distribution undergoes a critical transition from a single-mode to a multi-mode case. Our simulations generally open new perspectives for reproducible comparative performance analysis of novel and classic scheduling algorithms.
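As a rough illustration of the sampling step described above, the following minimal sketch (assuming numpy and networkx; the network size, coupling and temperature are illustrative, not the paper’s settings) runs Metropolis-Hastings over an XY model on a Barabasi-Albert graph and reports the mean spin orientation as an order parameter.

# Minimal illustration (not the paper's simulation engine): Metropolis-Hastings
# sampling of an XY model defined over a Barabasi-Albert network.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(n=200, m=3, seed=0)  # illustrative network size
J, T = 1.0, 0.5                                   # coupling and "noise regime" (temperature)
theta = rng.uniform(0.0, 2 * np.pi, G.number_of_nodes())

def local_energy(i, angle):
    # XY Hamiltonian contribution of node i: -J * sum over neighbours j of cos(theta_i - theta_j)
    return -J * sum(np.cos(angle - theta[j]) for j in G.neighbors(i))

for step in range(200000):
    i = rng.integers(G.number_of_nodes())
    proposal = theta[i] + rng.normal(scale=0.5)
    dE = local_energy(i, proposal) - local_energy(i, theta[i])
    if dE <= 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance rule
        theta[i] = proposal % (2 * np.pi)

magnetisation = np.abs(np.mean(np.exp(1j * theta)))  # order parameter
print(f"order parameter |m| = {magnetisation:.3f}")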

Keywords: Complex Networks, Computing Continuum, Phase Transitions, Computational Model, MCMC, Metropolis-Hastings, XY-model, Equilibrium Model

Acknowledgement: This work has received funding from the EC-funded project H2020 FETHPC ASPIDE (Agreement #801091)

The paper “PSTR: Per-title encoding using Spatio-Temporal Resolutions” has been accepted for publication at the IEEE International Conference on Multimedia and Expo (ICME) 2021, to be held on July 5-9, 2021 in Shenzhen, China.

Authors: Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK)

Abstract: Current per-title encoding schemes encode the same video content (or snippets/subsets thereof) at various bitrates and spatial resolutions to find an optimal bitrate ladder for each video content. Compared to traditional approaches, in which a predefined, content-agnostic (“fit-to-all”) encoding ladder is applied to all video contents, per-title encoding can result in (i) a significant decrease of storage and delivery costs and (ii) an increase in the Quality of Experience. In current per-title encoding schemes, the bitrate ladder is optimized using only spatial resolutions, while we argue that with the emergence of high-framerate videos, this principle can be extended to temporal resolutions as well. In this paper, we improve per-title encoding for each content using spatio-temporal resolutions. Experimental results show that our proposed approach doubles the bitrate savings when considering both temporal and spatial resolutions compared to considering only spatial resolutions.
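To make the idea concrete, the sketch below (a hedged illustration, not the paper’s actual method) enumerates encodings over pairs of spatial and temporal resolutions and keeps, for each target bitrate, the pair with the highest measured quality; encode_and_measure() is a hypothetical helper that would wrap an encoder and a quality metric such as VMAF.

# Hedged sketch: build a bitrate ladder over spatio-temporal resolutions.
from itertools import product

BITRATES_KBPS = [500, 1500, 3000, 6000, 12000]
RESOLUTIONS = [(1280, 720), (1920, 1080), (3840, 2160)]
FRAMERATES = [30, 60, 120]

def encode_and_measure(content, bitrate, resolution, framerate):
    # Placeholder for an actual encode (e.g. via ffmpeg) plus quality measurement.
    raise NotImplementedError

def build_ladder(content):
    ladder = {}
    for bitrate in BITRATES_KBPS:
        best = None
        for resolution, framerate in product(RESOLUTIONS, FRAMERATES):
            quality = encode_and_measure(content, bitrate, resolution, framerate)
            if best is None or quality > best[0]:
                best = (quality, resolution, framerate)
        ladder[bitrate] = best  # per-title choice of spatio-temporal resolution
    return ladder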

Keywords: Bitrate ladder, per-title encoding, framerate, spatial resolution.

IEEE International Conference on Multimedia and Expo (ICME), 5-9 July 2021, Shenzhen, China

Authors: Alireza Erfanian (Alpen-Adria-Universität Klagenfurt), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt), Anatoliy Zabrovskiy (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: Live video streaming traffic and related applications have experienced significant growth in recent years. However, this growth has been accompanied by some challenging issues, especially in terms of resource utilization. Although IP multicasting can be recognized as an efficient mechanism to cope with these challenges, it suffers from many problems. Applying software-defined networking (SDN) and network function virtualization (NFV) technologies enables researchers to cope with IP multicasting issues in novel ways. In this paper, by leveraging the SDN concept, we introduce OSCAR (Optimizing reSourCe utilizAtion in live video stReaming) as a new cost-aware video streaming approach to provide advanced video coding (AVC)-based live streaming services in the network. We use two types of virtualized network functions (VNFs): the virtual reverse proxy (VRP) and the virtual transcoder function (VTF). At the edge of the network, VRPs are responsible for collecting clients’ requests and sending them to an SDN controller. Then, by executing a mixed-integer linear program (MILP), the SDN controller determines a group of optimal multicast trees for streaming the requested videos from an appropriate origin server to the VRPs. Moreover, to improve the efficiency of resource allocation and meet the given end-to-end latency threshold, OSCAR delivers only the highest requested quality from the origin server to an optimal group of VTFs over a multicast tree. The selected VTFs then transcode the received video segments and transmit them to the requesting VRPs in a multicast fashion. To mitigate the time complexity of the proposed MILP model, we present a simple and efficient heuristic algorithm that determines a near-optimal solution in polynomial time. Using the MiniNet emulator, we evaluate the performance of OSCAR in various scenarios. The results show that OSCAR surpasses other SVC- and AVC-based multicast and unicast approaches in terms of cost and resource utilization.
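The flavour of the controller’s optimization can be pictured with a toy placement MILP, assuming the PuLP library and made-up node names and costs: it only picks transcoder (VTF) locations and assigns each proxy (VRP) to one of them at minimum cost, which is far simpler than the multicast-tree MILP of the paper.

# Toy placement MILP (illustrative only, much simpler than the paper's model).
import pulp

vtf_nodes = ["n1", "n2", "n3"]                # candidate transcoder locations
vrps = ["vrp_a", "vrp_b", "vrp_c"]            # edge proxies with client requests
transcode_cost = {"n1": 4, "n2": 6, "n3": 5}  # illustrative per-node costs
link_cost = {("n1", "vrp_a"): 1, ("n1", "vrp_b"): 3, ("n1", "vrp_c"): 4,
             ("n2", "vrp_a"): 2, ("n2", "vrp_b"): 1, ("n2", "vrp_c"): 3,
             ("n3", "vrp_a"): 4, ("n3", "vrp_b"): 2, ("n3", "vrp_c"): 1}

prob = pulp.LpProblem("toy_vtf_placement", pulp.LpMinimize)
use = pulp.LpVariable.dicts("use", vtf_nodes, cat="Binary")
assign = pulp.LpVariable.dicts("assign", list(link_cost), cat="Binary")

prob += (pulp.lpSum(transcode_cost[n] * use[n] for n in vtf_nodes)
         + pulp.lpSum(link_cost[e] * assign[e] for e in link_cost))
for v in vrps:                                # each VRP served by exactly one VTF
    prob += pulp.lpSum(assign[(n, v)] for n in vtf_nodes) == 1
for n, v in link_cost:                        # assignments only to activated VTFs
    prob += assign[(n, v)] <= use[n]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({n: int(use[n].value()) for n in vtf_nodes})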

Link: IEEE Transactions on Network and Service Management (TNSM)

Keywords: Dynamic Adaptive Streaming over HTTP (DASH), Live Video Streaming, Software Defined Networking (SDN), Video Transcoding, Network Function Virtualization (NFV).

NOSSDAV’21: The 31st edition of the Workshop on Network and Operating System Support for Digital Audio and Video
Sept. 28-Oct. 1, 2021, Istanbul, Turkey
Conference Website

Authors: Babak Taraghi (Alpen-Adria-Universität Klagenfurt), Abdelhak Bentaleb (National University of Singapore), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), Roger Zimmermann (National University of Singapore) and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: Adaptive BitRate (ABR) algorithms play a crucial role in delivering the highest possible viewer Quality of Experience (QoE) in HTTP Adaptive Streaming (HAS). Online video streaming service providers use HAS – the dominant video streaming technique on the Internet – to deliver the best QoE for their users. The viewer’s satisfaction relies heavily on how well the ABR algorithm of a media player can adapt the stream’s quality to the current network conditions. QoE for end-to-end video streaming sessions has been evaluated in many research projects to give better insight into the quality metrics. Objective evaluation models such as ITU Telecommunication Standardization Sector (ITU-T) P.1203 allow for the calculation of a Mean Opinion Score (MOS) by considering various QoE metrics, while subjective evaluation is the best assessment approach for investigating end-user opinion of the quality experienced during a video streaming session. We have conducted subjective evaluations with crowdsourced participants and evaluated the MOS of the sessions using the ITU-T P.1203 quality model. This paper’s main contribution is a comparison of subjective and objective evaluation results for well-known heuristic-based ABR algorithms.
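The comparison itself can be pictured with a small sketch, assuming scipy and made-up placeholder scores (not results from the paper): per ABR algorithm, the subjective MOS ratings are correlated with the objective P.1203-style MOS predictions.

# Hedged sketch: correlate subjective MOS with objective (P.1203-style) MOS per ABR.
from scipy.stats import pearsonr, spearmanr

subjective_mos = {"throughput_based_abr": [3.8, 4.1, 3.5, 4.3],  # crowdsourced ratings
                  "buffer_based_abr":     [4.0, 4.2, 3.9, 4.1]}
objective_mos = {"throughput_based_abr": [3.6, 4.0, 3.4, 4.2],   # model-predicted MOS
                 "buffer_based_abr":     [4.1, 4.0, 3.8, 4.0]}

for abr in subjective_mos:
    subj, obj = subjective_mos[abr], objective_mos[abr]
    plcc, _ = pearsonr(subj, obj)   # linear correlation
    srcc, _ = spearmanr(subj, obj)  # rank-order correlation
    print(f"{abr}: PLCC={plcc:.2f}, SRCC={srcc:.2f}")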

Keywords: HTTP Adaptive Streaming, ABR Algorithms, Quality of Experience, Crowdsourcing, Subjective Evaluation, Objective Evaluation, MOS, (ITU-T) P.1203

The paper “A Two-Sided Matching Model for Data Stream Processing in the Cloud–Fog Continuum” has been accepted for publication at the 21st IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGrid 2021).

Authors: Narges Mehran, Dragi Kimovski and Radu Prodan

Abstract: Latency-sensitive and bandwidth-intensive stream processing applications are dominant traffic generators over the Internet. A stream consists of a continuous sequence of data elements, which require processing in nearly real-time. To reduce communication latency and network congestion, Fog computing complements Cloud services by moving computation towards the edge of the network. Unfortunately, the heterogeneity of the new Cloud–Fog continuum raises important challenges related to deploying and executing data stream applications. We explore in this work a two-sided stable matching model called Cloud–Fog to data stream application matching (CODA) for deploying a distributed application, represented as a workflow of stream processing microservices, on heterogeneous Cloud–Fog computing resources. In CODA, the application microservices rank the continuum resources based on their microservice stream processing time, while the resources rank the stream processing microservices based on their residual bandwidth. A stable many-to-one matching algorithm assigns microservices to resources based on their mutual preferences, aiming to optimize the complete stream processing time on the application side and the total streaming traffic on the resource side.
We evaluate the CODA algorithm using simulated and real-world Cloud–Fog scenarios and achieve 11% to 45% lower stream processing time and 1.3% to 20% lower streaming traffic compared to related state-of-the-art approaches.
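The many-to-one matching step can be illustrated with a generic deferred-acceptance sketch in which microservices propose to resources in order of preference and each resource keeps its best proposers up to a capacity limit; the preference lists and capacities below are hypothetical, and this is a textbook illustration rather than the exact CODA algorithm.

# Generic many-to-one deferred-acceptance sketch (not the exact CODA algorithm).
def match(service_prefs, resource_prefs, capacity):
    free = list(service_prefs)                  # microservices still unassigned
    next_choice = {s: 0 for s in service_prefs}
    assigned = {r: [] for r in resource_prefs}
    while free:
        s = free.pop(0)
        if next_choice[s] >= len(service_prefs[s]):
            continue                            # s has exhausted all resources
        r = service_prefs[s][next_choice[s]]
        next_choice[s] += 1
        assigned[r].append(s)
        if len(assigned[r]) > capacity[r]:      # over capacity: evict the worst-ranked proposer
            worst = max(assigned[r], key=lambda x: resource_prefs[r].index(x))
            assigned[r].remove(worst)
            free.append(worst)
    return assigned

# Hypothetical example: services rank resources by processing time, resources rank services by bandwidth.
service_prefs = {"ms1": ["r1", "r2"], "ms2": ["r1", "r2"], "ms3": ["r2", "r1"]}
resource_prefs = {"r1": ["ms2", "ms1", "ms3"], "r2": ["ms3", "ms1", "ms2"]}
print(match(service_prefs, resource_prefs, capacity={"r1": 1, "r2": 2}))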

Authors: Vishu Madaan (Lovely Professional University, India), Aditya Roy (Lovely Professional University, India), Charu Gupta (Bhagwan Parashuram Institute of Technology, New Delhi, India), Prateek Agrawal (Institute of ITEC, University of Klagenfurt, Austria), Cristian Bologa (Babes-Bolyai University, Cluj-Napoca, Romania) and Radu Prodan (Institute of ITEC, University of Klagenfurt, Austria).

Abstract: The COVID-19 pandemic, caused by the SARS-CoV-2 virus, has spread across the entire world. It is a contagious disease that spreads easily from one person to another through direct contact and is classified by experts into five categories: asymptomatic, mild, moderate, severe, and critical. More than 66 million people have been infected worldwide, with more than 22 million active patients as of 5 December 2020, and the rate is accelerating. More than 1.5 million patients (approximately 2.5% of total reported cases) across the world have lost their lives. In many places, COVID-19 detection takes place through reverse transcription polymerase chain reaction (RT-PCR) tests, which may take longer than 48 hours. This is one major reason for its severity and rapid spread. We propose in this paper a two-phase X-ray image classification model called XCOVNet for early COVID-19 detection using convolutional neural networks. XCOVNet detects COVID-19 infections in chest X-ray patient images in two phases. The first phase pre-processes a dataset of 392 chest X-ray images, of which half are COVID-19 positive and half are negative. The second phase trains and tunes the neural network model to achieve a 98.44% accuracy in patient classification.
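A minimal sketch of the kind of classifier the second phase trains, assuming Keras/TensorFlow; the layer sizes, input resolution and training settings are illustrative placeholders and not the published XCOVNet architecture.

# Illustrative small CNN for binary chest X-ray classification
# (placeholder architecture, not the published XCOVNet model).
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # COVID-19 positive vs. negative
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Usage (hypothetical data): build_model().fit(train_images, train_labels, epochs=20)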

Journal: New Generation Computing

Acknowledgement: This work is partially supported by ARTICONF

Prof. Radu Prodan

The project “Kärntner Fog: A 5G-Enabled Fog Infrastructure for Automated Operation of Carinthia’s 5G Playground Application Use Cases” proposes a new infrastructure automation use case in the 5G Playground Carinthia (5GPG). Kärntner Fog plans to create and deploy a distributed service middleware infrastructure over a diverse set of novel heterogeneous 5G edge devices, complemented by a high-performance Cloud data center accessible with low latency according to 5G standards. Such an infrastructure is currently missing in the 5GPG and will represent a horizontal backbone that interconnects and integrates the application use cases. Kärntner Fog will automate the development and operation of the application use cases in the 5GPG in an integrated and more cost-effective fashion to enable more science and innovation within a limited budget.

Involved Organisations: BABEG, ITEC@AAU, ONDA TLC GmbH, FFG/KWF

Coordinator: Prof. Radu Prodan
Project Start: 01.01.2021
Project Duration: 48 months

Prof. Radu Prodan

The 13th IEEE/ACM International Conference on Utility and Cloud Computing (UCC) accepted the paper “Dynamic Multi-objective Scheduling of Microservices in the Cloud”.

Authors: Hamid Mohammadi Fard, Radu Prodan, Felix Wolf

Abstract: For many applications, a microservices architecture promises better performance and flexibility compared to a conventional monolithic architecture. In spite of these advantages, deploying microservices poses various challenges for service developers and providers alike. One of these challenges is the efficient placement of microservices on the cluster nodes. Improper allocation of microservices can quickly waste resource capacities and cause low system throughput. In the last few years, new technologies in orchestration frameworks, such as the possibility of multiple schedulers for pods in Kubernetes, have improved microservice scheduling solutions, but using these technologies requires involving both the service developer and the service provider in the behavior analysis of workloads. Using the memory and CPU requests specified in the service manifest, we propose a general microservices scheduling mechanism that can operate efficiently in private clusters or enterprise clouds. We model the scheduling problem as a complex variant of the knapsack problem and solve it using a multi-objective optimization approach. Our experiments show that the proposed mechanism is highly scalable and simultaneously increases the utilization of both memory and CPU, which in turn leads to better throughput compared to the state-of-the-art.
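A highly simplified sketch of the underlying formulation, with hypothetical node capacities and pod requests: each node is treated as a two-dimensional knapsack over the CPU and memory requests from the service manifest, and pods are placed greedily so that CPU and memory utilization stay balanced. This illustrates the problem, not the paper’s multi-objective optimization algorithm.

# Simplified illustration of the placement problem (not the paper's algorithm):
# nodes act as 2-D knapsacks over CPU (millicores) and memory (MiB) requests.
nodes = {"node1": {"cpu": 4000, "mem": 8192},
         "node2": {"cpu": 8000, "mem": 16384}}
used = {n: {"cpu": 0, "mem": 0} for n in nodes}
pods = [{"name": "svc-a", "cpu": 500, "mem": 1024},   # hypothetical requests
        {"name": "svc-b", "cpu": 2000, "mem": 2048},
        {"name": "svc-c", "cpu": 1000, "mem": 4096}]

def imbalance(node, pod):
    # Score a placement by how unevenly it would load CPU vs. memory on the node.
    cpu = (used[node]["cpu"] + pod["cpu"]) / nodes[node]["cpu"]
    mem = (used[node]["mem"] + pod["mem"]) / nodes[node]["mem"]
    return None if cpu > 1 or mem > 1 else abs(cpu - mem)   # None: does not fit

placement = {}
for pod in sorted(pods, key=lambda p: -(p["cpu"] + p["mem"])):   # largest requests first
    scores = [(imbalance(n, pod), n) for n in nodes if imbalance(n, pod) is not None]
    if not scores:
        raise RuntimeError(f"no node fits {pod['name']}")
    _, best = min(scores)
    used[best]["cpu"] += pod["cpu"]
    used[best]["mem"] += pod["mem"]
    placement[pod["name"]] = best
print(placement)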

Authors: Shajulin Benedict (IIIT Kottayam, India), Prateek Agrawal (University of Klagenfurt, Austria & Lovely Professional University, India), Radu Prodan (University of Klagenfurt, Austria)

Abstract: The push for agile pandemic analytic solutions has rapidly yielded development-stage software modules rather than full-fledged production-stage products, meaning that performance, scalability, and energy-related concerns still need to be optimized for the underlying computing domains. While research continues to show that reducing the energy consumption of algorithms improves the lifetime of battery-operated machines, and energy analysis tools are advisable in almost any development setting, an energy analysis report for R-based analytic programs is a valuable contribution. This article proposes an energy analysis framework for R programs that enables data analytic developers, including developers of pandemic-related applications, to analyze their code. It presents an energy analysis report for R programs written to predict new cases in 215 countries using random forest variants. Experiments were carried out at the IoT cloud research lab, and the energy efficiency aspects are discussed in the article. In the experiments, the ranger-based prediction program consumed 95.8 Joules.
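One way such a measurement could be taken on Linux is sketched below: reading the Intel RAPL package-energy counter through the powercap sysfs interface before and after running an R script as a subprocess. The script name is hypothetical, the counter may be absent or require elevated permissions on some machines, and this is not the framework proposed in the article.

# Hedged sketch (not the article's framework): approximate the energy consumed by
# an R program on Linux using the Intel RAPL powercap counter (microjoules).
import subprocess

RAPL_COUNTER = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 energy counter

def read_energy_uj():
    with open(RAPL_COUNTER) as f:
        return int(f.read())

before = read_energy_uj()
subprocess.run(["Rscript", "predict_cases.R"], check=True)   # hypothetical R program
after = read_energy_uj()

# Note: the counter wraps around periodically; a robust tool must handle overflow.
print(f"approximate energy: {(after - before) / 1e6:.1f} J")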

4th International Conference on Advanced Informatics for Computing Research (ICAICR-2020) 

Link: http://informaticsindia.co.in/

Acknowledgement: This work is supported by IIIT-Kottayam faculty research fund and OEAD-DST fund.