Authors: Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK)
Abstract: Light field imaging enables post-processing capabilities such as refocusing, changing the view perspective, and depth estimation. As light field images are represented by multiple views, they contain a huge amount of data, which makes compression inevitable. Although there are some proposals to compress light field images efficiently, their main focus is on coding efficiency, while important functionalities such as viewpoint and quality scalability, random access, and uniform quality distribution have not been addressed adequately. In this paper, an efficient light field image compression method based on a deep neural network is proposed, which classifies the multiple views into various layers. In each layer, the target view is synthesized from the available views of previously encoded/decoded layers using a deep neural network. This synthesized view is then used as a virtual reference for inter-coding the target view. In this way, random access to an arbitrary view is provided. Moreover, uniform quality distribution among the views is addressed. At higher bitrates, where random access to an arbitrary view is more crucial, the bitrate required to access the requested view is minimized.
Keywords: Light field, Compression, Scalable, Random Access.
Data Compression Conference (DCC)
23-26 March 2021, Snowbird, Utah, USA
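As a loose illustration of the layered coding idea described in the abstract above, the sketch below shows one possible hierarchical layer assignment for a grid of views and how a synthesized view could serve as a virtual reference. The layer pattern and the `synthesize_view` and `encode_with_reference` placeholders are assumptions for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch (not the paper's implementation): hierarchical layer
# assignment for a U x V grid of light-field views, where each view in layer k
# is predicted from a view synthesized out of already-coded layers (< k).

from itertools import product

def assign_layers(u_views=9, v_views=9):
    """Assign each (u, v) view to a coding layer; corner views form layer 0."""
    corners = {(0, 0), (0, v_views - 1), (u_views - 1, 0), (u_views - 1, v_views - 1)}
    layers = {}
    for u, v in product(range(u_views), range(v_views)):
        if (u, v) in corners:
            layers[(u, v)] = 0          # anchor views, intra-coded
        elif u % 4 == 0 and v % 4 == 0:
            layers[(u, v)] = 1
        elif u % 2 == 0 and v % 2 == 0:
            layers[(u, v)] = 2
        else:
            layers[(u, v)] = 3          # remaining views, highest layer
    return layers

def encode_light_field(views, synthesize_view, encode_with_reference):
    """Encode views layer by layer; `synthesize_view` stands in for the
    deep-network view synthesis, `encode_with_reference` for the inter coder."""
    layers = assign_layers()
    decoded = {}
    for layer in sorted(set(layers.values())):
        for pos, lay in layers.items():
            if lay != layer:
                continue
            if layer == 0:
                decoded[pos] = views[pos]                    # intra coding (simplified)
            else:
                virtual_ref = synthesize_view(pos, decoded)  # virtual reference view
                decoded[pos] = encode_with_reference(views[pos], virtual_ref)
    return decoded
```

Because every non-anchor view depends only on lower layers, decoding an arbitrary view requires only the anchors and the layers below it, which is where the random-access property comes from.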
The project “Kärntner Fog: A 5G-Enabled Fog Infrastructure for Automated Operation of Carinthia’s 5G Playground Application Use Cases” proposes a new infrastructure automation use case in the 5G Playground Carinthia (5GPG). Kärntner Fog plans to create and deploy a
distributed service middleware infrastructure over a diverse set of novel heterogeneous 5G edge devices, complemented by a high-performance cloud data center accessible with low latency according to 5G standards. Such an infrastructure is currently missing in the 5GPG and will represent a horizontal backbone that interconnects and integrates the application use cases. Kärntner Fog will automate the development and operation of the application use cases in the 5GPG in an integrated and more cost-effective fashion to enable more science and innovation within a limited budget.
Involved Organisations: BABEG, ITEC@AAU, ONDA TLC GmbH, FFG/KWF
Coordinator: Prof. Radu Prodan
Project Start: 01.01.2021
Project Duration: 48 months
The manuscript “Cloud, Fog or Edge: Where to Compute?” has been accepted for publication in an upcoming issue of IEEE Internet Computing.
Authors: Dragi Kimovski, Roland Mathá, Josef Hammer, Narges Mehran, Hermann Hellwagner and Radu Prodan
Abstract: The computing continuum extends high-performance cloud data centers with energy-efficient and low-latency devices close to the data sources located at the edge of the network.
However, the heterogeneity of the computing continuum raises multiple challenges related to application management. These include deciding where along the continuum, from the cloud to the edge, to offload an application so that it meets its computation and communication requirements.
To support these decisions, this article provides a detailed performance and carbon footprint analysis of a selection of use-case applications with complementary resource requirements across the computing continuum on a real-life evaluation testbed.
The paper “Dynamic Multi-objective Scheduling of Microservices in the Cloud” has been accepted at the 13th IEEE/ACM International Conference on Utility and Cloud Computing (UCC).
Authors: Hamid Mohammadi Fard, Radu Prodan, Felix Wolf
Abstract: For many applications, a microservices architecture promises better performance and flexibility compared to a conventional monolithic architecture. In spite of these advantages, deploying microservices poses various challenges for service developers and providers alike. One of these challenges is the efficient placement of microservices on the cluster nodes: improper allocation of microservices can quickly waste resource capacities and cause low system throughput. In the last few years, new technologies in orchestration frameworks, such as the possibility of multiple schedulers for pods in Kubernetes, have improved the scheduling of microservices, but using these technologies requires involving both the service developer and the service provider in the behavior analysis of workloads. Using the memory and CPU requests specified in the service manifest, we propose a general microservices scheduling mechanism that can operate efficiently in private clusters or enterprise clouds. We model the scheduling problem as a complex variant of the knapsack problem and solve it using a multi-objective optimization approach. Our experiments show that the proposed mechanism is highly scalable and simultaneously increases the utilization of both memory and CPU, which in turn leads to better throughput when compared to the state-of-the-art.
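As a rough illustration of the placement problem described in the abstract, the sketch below greedily places microservices on nodes using the CPU and memory requests from the service manifest, scored so that both resources stay balanced. The scoring heuristic and all names are illustrative assumptions, not the paper's multi-objective algorithm.

```python
# Illustrative sketch (not the paper's algorithm): placing microservices on
# cluster nodes using the CPU and memory requests from the service manifest,
# scored so that both resources are utilized evenly (a simple heuristic for
# the underlying knapsack-like, multi-objective placement problem).

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_free: float      # millicores
    mem_free: float      # MiB

@dataclass
class Microservice:
    name: str
    cpu_req: float
    mem_req: float

def placement_score(node: Node, svc: Microservice) -> float:
    """Lower is better: prefer nodes where CPU and memory stay balanced."""
    cpu_left = node.cpu_free - svc.cpu_req
    mem_left = node.mem_free - svc.mem_req
    if cpu_left < 0 or mem_left < 0:
        return float("inf")                     # service does not fit on this node
    # Penalize imbalance between the normalized remaining capacities.
    return abs(cpu_left / node.cpu_free - mem_left / node.mem_free)

def schedule(services, nodes):
    """Place the largest requests first on the best-balanced feasible node."""
    plan = {}
    for svc in sorted(services, key=lambda s: s.cpu_req + s.mem_req, reverse=True):
        best = min(nodes, key=lambda n: placement_score(n, svc))
        if placement_score(best, svc) == float("inf"):
            raise RuntimeError(f"no node can host {svc.name}")
        best.cpu_free -= svc.cpu_req
        best.mem_free -= svc.mem_req
        plan[svc.name] = best.name
    return plan

if __name__ == "__main__":
    nodes = [Node("n1", 4000, 8192), Node("n2", 2000, 16384)]
    services = [Microservice("api", 500, 512), Microservice("db", 1000, 4096)]
    print(schedule(services, nodes))
```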
Authors: Nakisa Shams (ETS, Montreal, Canada), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK)
Abstract: By utilizing the spectrum holes in licensed frequency bands, cognitive radio networks are able to manage the radio spectrum efficiently. A significant improvement in spectrum use can be achieved by giving secondary users access to these spectrum holes. Predicting spectrum holes can save much of the energy otherwise consumed in detecting them, because the secondary users only select the channels that are predicted to be idle. However, collisions can occur either between a primary user and secondary users or among the secondary users themselves. This paper introduces a centralized channel allocation algorithm in a scenario with multiple secondary users to control both primary and secondary collisions. The proposed allocation algorithm, which uses a channel status predictor, provides good performance with fairness among the secondary users and minimal interference with the primary user. The simulation results show that the probability of a wrong prediction of an idle channel state in a multi-channel system is less than 0.9%. In addition, the channel state prediction saves up to 73% of the sensing energy, and the utilization of the spectrum can be improved by more than 77%.
Keywords: Cognitive radio, Biological neural networks, Prediction, Idle channel.
International Congress on Information and Communication Technology
25-26 February 2021, London, UK
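As a rough illustration of the centralized allocation idea in the abstract above, the sketch below assigns predicted-idle channels to secondary users: one channel per user avoids secondary-secondary collisions, and channels predicted busy are skipped to avoid interfering with the primary user. The `predict_idle_prob` callback, the threshold, and the omission of a fairness rotation across slots are illustrative assumptions, not the paper's scheme.

```python
# Illustrative sketch (not the paper's scheme): a centralized allocator that
# hands out predicted-idle channels to secondary users for the next time slot.
# `predict_idle_prob` stands in for the channel-status predictor.

def allocate_channels(secondary_users, channels, predict_idle_prob,
                      idle_threshold=0.5):
    """Return {user: channel}, using the most-likely-idle channels first."""
    # Rank channels by predicted idle probability (best candidates first).
    ranked = sorted(channels, key=predict_idle_prob, reverse=True)
    usable = [ch for ch in ranked if predict_idle_prob(ch) >= idle_threshold]

    allocation = {}
    for user, channel in zip(secondary_users, usable):
        allocation[user] = channel      # distinct channels: no secondary collisions
    return allocation                    # users beyond len(usable) wait this slot

if __name__ == "__main__":
    probs = {"ch1": 0.9, "ch2": 0.2, "ch3": 0.7}   # hypothetical predictor output
    print(allocate_channels(["su1", "su2", "su3"], list(probs), probs.get))
```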
Prof. Radu Prodan is a keynote speaker at the 13th International Conference on Developments in eSystems Engineering (DeSE), 13-17 December 2020.
Further details and registration available here: https://mile-high.video/
Authors: Shajulin Benedict (IIIT Kottayam, India), Prateek Agrawal (University of Klagenfurt, Austria & Lovely Professional University, India), Radu Prodan (University of Klagenfurt, Austria)
Abstract: The push for agile pandemic analytic solutions has rapidly produced development-stage software modules rather than full-fledged production-stage products; performance, scalability, and energy-related concerns still need to be optimized for the underlying computing domains. While research continues to support the idea that reducing the energy consumption of algorithms improves the lifetime of battery-operated machines, and energy analysis tools are advisable in almost any developer setting, an energy analysis report for R-based analytic programs is indeed valuable. This article proposes an energy analysis framework for R programs that enables data analytic developers, including developers of pandemic-related applications, to analyze their code. It presents an energy analysis report for R programs written to predict the new cases of 215 countries using random forest variants. Experiments were carried out at the IoT cloud research lab, and the energy efficiency aspects are discussed in the article. In the experiments, the ranger-based prediction program consumed 95.8 Joules.
4th International Conference on Advanced Informatics for Computing Research (ICAICR-2020)
Link: http://informaticsindia.co.in/
Acknowledgement: This work is supported by IIIT-Kottayam faculty research fund and OEAD-DST fund.
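As a rough illustration of how the energy of an R program might be measured on a Linux machine, the sketch below wraps an `Rscript` run between two readings of the Intel RAPL powercap counter. The counter path, the hypothetical script name, and the decision to ignore counter wrap-around are assumptions for illustration; this is not the framework proposed in the article.

```python
# Illustrative sketch (not the article's framework): measuring the energy of an
# R script on Linux via the Intel RAPL powercap counter. Assumes the counter
# below is readable, the script is the dominant load on the machine, and the
# counter does not wrap around during the run.

import subprocess

RAPL_COUNTER = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 energy, microjoules

def read_energy_uj() -> int:
    with open(RAPL_COUNTER) as f:
        return int(f.read().strip())

def energy_of_r_script(script_path: str) -> float:
    """Run `Rscript script_path` and return the consumed package energy in Joules."""
    before = read_energy_uj()
    subprocess.run(["Rscript", script_path], check=True)
    after = read_energy_uj()
    return (after - before) / 1e6            # microjoules -> Joules

if __name__ == "__main__":
    print(f"{energy_of_r_script('predict_cases.R'):.1f} J")  # hypothetical script name
```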