Where does technology help us in our daily lives?

Interview with Felix Schniz, Game Studies and Engineering SPL @ ITEC

 

We meet Felix Schniz for an interview in Lakeside Park, in the CD Laboratory ATHENA, building B12B, to learn something about him and his work and why he chose his career. For those who don't yet know Felix: he is always neatly dressed, has a smile on his lips, and is eager for a mutual exchange of ideas and opinions. So he was quick to accept the invitation to be the first person featured in our new series "People Behind Informatics". He is passionate about his work and is happy to share his views with us.

 

Hello Felix, thanks for taking the time to talk to us. Please tell me something about yourself, where you come from, and how your professional career has evolved.

I was born in Bietigheim-Bissingen near Stuttgart. I studied in Mannheim, focusing my Bachelor's degree on English and American Studies. For my Master's, I specialized in culture in the process of modernity. In addition to literature and film, we also dealt with digitization processes, and that is how I came to video games. That was my "unusual entry" into the technical sciences. After my Master's degree, it was clear to me: I wanted to write a doctoral thesis on video games. The academic path is simply mine, and the topic offers many exciting perspectives, as it is still largely unexplored. During my search for the right environment for such a research project, I met René Schallegger at a conference in Oxford. We stayed in contact. When a vacancy for a university assistant was advertised at the Department of English in 2016, I applied for the position, started my doctorate at the same time, and have been here ever since.

 

Such a coincidence, and very lucky that you found exactly what you were looking for. How was your start at the University of Klagenfurt?

I started immediately and also took on the role of SPL (programme director) of the Master's programme "Game Studies and Engineering", which combines both humanities and technical aspects. This is also what is special about this programme: the students learn technical approaches to video games and what role a technical medium plays in society.

 

What do you particularly like about your work?

I am taken seriously and can combine my passion for technology and the humanities. I love to ask: What is the reason for that, what is behind it, and what else needs to be considered? In my work, I can live that to the full.

 

And how did your doctorate continue?

In my doctorate, I asked the research question of what a video game experience actually is. It is not that easy to define and has to be illuminated from many sides: philosophically, psychologically, sociologically, from a media studies perspective… The path leads from one's own personal experience to the technical implementation. I wrote up the theoretical foundations, worked with content analyses, and scientifically processed my own experiences. This opened up a new, exciting field of questions for myself and for research on video games, because how can we speak scientifically about the content of a medium that we experience in such a personal way?

 

What consensus emerged for you?

Video games help us to get a bigger, better picture of people in the digital age. We have to ask ourselves what kind of influence video games can and should have in the future, and we need to raise awareness of the responsibility that video game programmers have. Programmers should also ask themselves what they want to offer people. The virtual worlds that video games open up can offer us a lot, but we have to learn how to deal with them.

In short, I have to ask myself: What do I want to achieve with technology? What role should it play in my life?

Over the past few years, we have been able to see what role virtual worlds can play in people's lives. The well-known video game "Fortnite", for example, was suddenly not just a popular game but also a much-needed social meeting point and a retreat for young people whose social and private spaces had been taken away by the pandemic.

Video games can be of great importance for each of us. They can offer us things we need emotionally, socially, or intellectually, or allow us to explore ourselves. This does not mean that the virtual should replace the real world, but it can be a great addition to it. To pursue these thoughts further in more focused ways, I have also written a lot about coping with grief alongside my doctoral thesis. I am currently working on a book about the spiritual experience of interactive media in general. It will be published later this year.

 

Thank you very much for inviting us into your interesting area of work. We wish you a lot of joy and success in your favourite research area.

Journal Website: Journal of Network and Computer Applications


Authors: Samira Afzal (Alpen-Adria-Universität Klagenfurt), Vanessa Testoni (unico IDtech), Christian Esteve Rothenberg (University of Campinas), Prakash Kolan (Samsung Research America), and Imed Bouazizi (Qualcomm)

Abstract:

Demand for wireless video streaming services increases as users expect access to high-quality video streaming experiences. Ensuring Quality of Experience (QoE) is quite challenging due to varying bandwidth and time constraints. Since most of today's mobile devices are equipped with multiple network interfaces, one promising approach is to benefit from multipath communications. Multipathing leads to higher aggregate bandwidth, and distributing video traffic over multiple network paths improves stability, seamless connectivity, and QoE. However, most current transport protocols do not match the requirements of video streaming applications or are not designed to address relevant issues such as network heterogeneity, head-of-line blocking, and delay constraints. In this comprehensive survey, we first review video streaming standards and technology developments. We then discuss the benefits and challenges of multipath video transmission over wireless. We provide a holistic literature review of multipath wireless video streaming, shedding light on the different alternatives from an end-to-end layered stack perspective, reviewing key multipath wireless scheduling functions, unveiling the trade-offs of each approach, and presenting a suitable taxonomy to classify the state-of-the-art. Finally, we discuss open issues and avenues for future work.

 

Journal: Sensors

Authors: Akif Quddus Khan, Nikolay Nikolov, Mihhail Matskin, Radu Prodan, Dumitru Roman, Bekir Sahin, Christoph Bussler, Ahmet Soylu

Abstract: Big data pipelines are developed to process data characterized by one or more of the three big data features, commonly known as the three Vs (volume, velocity, and variety), through a series of steps (e.g., extract, transform, and move), laying the groundwork for the use of advanced analytics and ML/AI techniques. The computing continuum (i.e., cloud/fog/edge) allows access to a virtually infinite amount of resources, where data pipelines could be executed at scale; however, the implementation of data pipelines on the continuum is a complex task that needs to take computing resources, data transmission channels, triggers, data transfer methods, integration of message queues, etc., into account. The task becomes even more challenging when data storage is considered as part of the data pipelines. Local storage is expensive, hard to maintain, and comes with several challenges (e.g., data availability, data security, and backup). The use of cloud storage, i.e., storage-as-a-service (StaaS), instead of local storage has the potential of providing more flexibility in terms of scalability, fault tolerance, and availability. In this article, we propose a generic approach to integrate StaaS with data pipelines, i.e., computation on an on-premises server or on a specific cloud, but with storage integrated via StaaS, and develop a ranking method for available storage options based on five key parameters: cost, proximity, network performance, server-side encryption, and user weights/preferences. The evaluation carried out demonstrates the effectiveness of the proposed approach in terms of data transfer performance, utility of the individual parameters, and feasibility of dynamic selection of a storage option based on four primary user scenarios.
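To make the ranking idea more concrete, the following Python sketch shows one way a weighted, normalized score over the five parameters named in the abstract (cost, proximity, network performance, server-side encryption, and user weights/preferences) could be computed for a set of storage options. The class, function names, and example numbers are illustrative assumptions, not the ranking method implemented in the paper.

```python
# Hypothetical sketch of a weighted ranking over storage options, loosely
# inspired by the five parameters named in the abstract. Not the paper's method.
from dataclasses import dataclass

@dataclass
class StorageOption:
    name: str
    cost: float             # e.g. USD per GB-month (lower is better)
    proximity: float        # e.g. round-trip time in ms (lower is better)
    net_performance: float  # e.g. measured throughput in MB/s (higher is better)
    encryption: float       # 1.0 if server-side encryption is available, else 0.0

def rank_storage(options, weights):
    """Return (score, name) pairs sorted by a weighted, min-max normalized score."""
    def normalize(values, higher_is_better):
        lo, hi = min(values), max(values)
        if hi == lo:
            return [1.0] * len(values)
        return [(v - lo) / (hi - lo) if higher_is_better else (hi - v) / (hi - lo)
                for v in values]

    cost_s = normalize([o.cost for o in options], higher_is_better=False)
    prox_s = normalize([o.proximity for o in options], higher_is_better=False)
    perf_s = normalize([o.net_performance for o in options], higher_is_better=True)
    enc_s = [o.encryption for o in options]  # already in [0, 1]

    scored = []
    for i, o in enumerate(options):
        score = (weights["cost"] * cost_s[i] + weights["proximity"] * prox_s[i]
                 + weights["performance"] * perf_s[i] + weights["encryption"] * enc_s[i])
        scored.append((score, o.name))
    return sorted(scored, reverse=True)

# Example: a user who cares most about cost and proximity (invented numbers).
options = [
    StorageOption("cloud-A", cost=0.023, proximity=12.0, net_performance=95.0, encryption=1.0),
    StorageOption("cloud-B", cost=0.018, proximity=45.0, net_performance=60.0, encryption=0.0),
    StorageOption("cloud-C", cost=0.030, proximity=8.0, net_performance=110.0, encryption=1.0),
]
weights = {"cost": 0.4, "proximity": 0.3, "performance": 0.2, "encryption": 0.1}
print(rank_storage(options, weights))
```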


Journal Website

Authors: Ningxiong Mao (Southwest Jiaotong University), Hongjie He (Southwest Jiaotong University), Fan Chen (Southwest Jiaotong University), Lingfeng Qu (Southwest Jiaotong University), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Color image Reversible Data Hiding (RDH) is becoming more and more important since the number of its applications is steadily growing. This paper proposes an efficient color image RDH scheme based on pixel value ordering (PVO), in which the channel correlation is fully utilized to improve the embedding performance. In the proposed method, the channel correlation is used throughout the data embedding process, including the prediction stage, block selection, and capacity allocation. In the prediction stage, since the pixel values in co-located blocks in different channels are monotonically consistent, the large pixel values are collected preferentially by pre-sorting the intra-block pixels. This can effectively improve the embedding capacity of PVO-based RDH. In the block selection stage, the accuracy of the block complexity value is improved by exploiting the texture similarity between the channels. Smoother blocks are then preferentially used to reduce invalid shifts. To achieve low complexity and high accuracy in capacity allocation, the proportion of the expanded prediction error to the total expanded prediction error in each channel is calculated during the capacity allocation process. The experimental results show that the proposed scheme achieves significant superiority in fidelity over a series of state-of-the-art schemes. For example, the PSNR of the Lena image reaches 62.43 dB, which is a 0.16 dB gain compared to the best results in the literature at a 20,000-bit embedding capacity.

Keywords: Reversible data hiding, color image, pixel value ordering, channel correlation
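For readers unfamiliar with PVO, the following minimal Python sketch shows the classic single-channel pixel value ordering step that such schemes build on: within a block, the largest pixel is predicted by the second largest, and one bit is reversibly embedded when the prediction error equals 1. This is the textbook PVO idea under simplified assumptions, not the channel-correlation scheme proposed in the paper.

```python
# Classic PVO embedding/extraction for the maximum pixel of one block.
# Illustrative only; the paper's multi-channel scheme works differently.

def pvo_embed_max(block, bit):
    """Embed one bit into the maximum pixel of a flat block (list of ints)."""
    order = sorted(range(len(block)), key=lambda i: block[i])
    i_max, i_2nd = order[-1], order[-2]
    error = block[i_max] - block[i_2nd]
    out = block[:]
    if error == 1:        # expandable: carry the bit
        out[i_max] += bit
    elif error > 1:       # shift to keep decoding unambiguous
        out[i_max] += 1
    return out            # error == 0: block left unchanged, no bit carried

def pvo_extract_max(block):
    """Recover the embedded bit (if any) and restore the original maximum pixel."""
    order = sorted(range(len(block)), key=lambda i: block[i])
    i_max, i_2nd = order[-1], order[-2]
    error = block[i_max] - block[i_2nd]
    out, bit = block[:], None
    if error == 1:
        bit = 0
    elif error == 2:
        bit = 1
        out[i_max] -= 1
    elif error > 2:
        out[i_max] -= 1
    return out, bit

marked = pvo_embed_max([52, 55, 57, 58], bit=1)   # -> [52, 55, 57, 59]
restored, bit = pvo_extract_max(marked)           # -> [52, 55, 57, 58], 1
```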


IEEE ISM’2022 (https://www.ieee-ism.org/)

Authors: Shivi Vats, Jounsup Park, Klara Nahrstedt, Michael Zink, Ramesh Sitaraman, and Hermann Hellwagner

Abstract: In a 5G testbed, we use 360° video streaming to test, measure, and demonstrate the 5G infrastructure, including the capabilities and challenges of edge computing support. Specifically, we use the SEAWARE (Semantic-Aware View Prediction) software system, originally described in [1], at the edge of the 5G network to support a 360° video player (handling tiled videos) by view prediction. Originally, SEAWARE performs semantic analysis of a 360° video on the media server, by extracting, e.g., important objects and events. This video semantic information is encoded in specific data structures and shared with the client in a DASH streaming framework. Making use of these data structures, the client/player can perform view prediction without in-depth, computationally expensive semantic video analysis. In this paper, the SEAWARE system was ported and adapted to run (partially) on the edge where it can be used to predict views and prefetch predicted segments/tiles in high quality in order to have them available close to the client when requested. The paper gives an overview of the 5G testbed, the overall architecture, and the implementation of SEAWARE at the edge server. Since an important goal of this work is to achieve low motion-to-glass latencies, we developed and describe “tile postloading”, a technique that allows non-predicted tiles to be fetched in high quality into a segment already available in the player buffer. The performance of 360° tiled video playback on the 5G infrastructure is evaluated and presented. Current limitations of the 5G network in use and some challenges of DASH-based streaming and of edge-assisted viewport prediction under “real-world” constraints are pointed out; further, the performance benefits of tile postloading are disclosed.
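As a rough illustration of what edge-assisted view prediction buys the player, the sketch below selects the tiles of an equirectangular tiling that overlap a predicted viewport so that they can be prefetched in high quality while the remaining tiles stay at low quality. The grid size, field of view, and function name are assumptions made for illustration and do not reflect the SEAWARE implementation.

```python
# Hypothetical tile selection for a predicted viewport on an 8x4 equirectangular
# tiling. All parameters are illustrative assumptions.

def tiles_to_prefetch(pred_yaw_deg, pred_pitch_deg, cols=8, rows=4,
                      hfov_deg=110.0, vfov_deg=90.0):
    """Return (col, row) indices of tiles overlapping the predicted viewport."""
    selected = []
    for r in range(rows):
        for c in range(cols):
            # Tile center in degrees (yaw in [-180, 180), pitch in [-90, 90]).
            tile_yaw = -180.0 + (c + 0.5) * 360.0 / cols
            tile_pitch = 90.0 - (r + 0.5) * 180.0 / rows
            # Wrap-around yaw distance and plain pitch distance to the prediction.
            dyaw = abs((tile_yaw - pred_yaw_deg + 180.0) % 360.0 - 180.0)
            dpitch = abs(tile_pitch - pred_pitch_deg)
            # Keep the tile if it can intersect the predicted field of view.
            if dyaw <= hfov_deg / 2 + 180.0 / cols and dpitch <= vfov_deg / 2 + 90.0 / rows:
                selected.append((c, r))
    return selected

# Example: viewer predicted to look slightly right and up.
high_quality_tiles = tiles_to_prefetch(pred_yaw_deg=30.0, pred_pitch_deg=15.0)
```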

 


IEEE Transactions on Image Processing (TIP)
Journal Website

 

Authors: Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), Christine Guillemot (INRIA, France), Mohammad Ghanbari (University of Essex, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Light field imaging, which captures both spatial and angular information, improves user immersion by enabling post-capture actions such as refocusing and changing the view perspective. However, light fields represent very large volumes of data with a lot of redundancy, which coding methods try to remove. State-of-the-art coding methods indeed usually focus on improving compression efficiency and overlook other important features in light field compression such as scalability. In this paper, we propose a novel light field image compression method that enables (i) viewport scalability, (ii) quality scalability, (iii) spatial scalability, (iv) random access, and (v) uniform quality distribution among viewports, while keeping compression efficiency high. To this end, light fields in each spatial resolution are divided into sequential viewport layers, and viewports in each layer are encoded using the previously encoded viewports. In each viewport layer, the available viewports are used to synthesize intermediate viewports using a video interpolation deep learning network. The synthesized views are used as virtual reference images to enhance the quality of intermediate views. An image super-resolution method is applied to improve the quality of the lower spatial resolution layer. The super-resolved images are also used as virtual reference images to improve the quality of the higher spatial resolution layer. The proposed structure also improves the flexibility of light field streaming, provides random access to the viewports, and increases error resiliency. The experimental results demonstrate that the proposed method achieves high compression efficiency and can adapt to the display type, transmission channel, network condition, processing power, and user needs.

Keywords: Light field, compression, scalability, random access, deep learning.
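The following small sketch illustrates one plausible way the viewports of a light field could be split into sequential layers so that each layer is predicted only from previously encoded views; the concrete layering (on-grid views first, then progressively denser grids) is an assumption for illustration, not the exact structure used in the paper.

```python
# Illustrative layering of a grid x grid light field into sequential viewport
# layers; earlier layers are encoded first and serve as references for later ones.

def viewport_layers(grid=9):
    """Assign each (u, v) viewport of a grid x grid light field to a layer."""
    layers = {}
    step = grid - 1
    layer = 0
    while step >= 1:
        for u in range(0, grid, step):
            for v in range(0, grid, step):
                layers.setdefault((u, v), layer)  # keep the earliest layer assignment
        layer += 1
        step //= 2
    return layers

layers = viewport_layers(9)
# Corner views come first; denser intermediate views land in later layers.
assert layers[(0, 0)] == 0 and layers[(4, 4)] == 1 and layers[(1, 1)] == 3
```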

2022 IEEE/ACM 2nd Workshop on Distributed Machine Learning for the Intelligent Computing Continuum (DML-ICC), in conjunction with IEEE/ACM UCC 2022, December 6-9, 2022, Vancouver, Washington, USA

Authors: Narges Mehran (Alpen-Adria-Universität Klagenfurt) and Radu Prodan (Alpen-Adria-Universität Klagenfurt)

Abstract: Processing rapidly growing data encompasses complex workflows that utilize the Cloud for high-performance computing and Fog and Edge devices for low-latency communication. For example, autonomous driving applications require inspection, recognition, and classification of road signs for safety assessments, especially on crowded roads. Such applications are among the prominent research and industrial exploration topics in computer vision and machine learning. In this work, we design a road sign inspection workflow consisting of 1) encoding and framing tasks for video streams captured by camera sensors embedded in the vehicles, and 2) convolutional neural network (CNN) training and inference models for accurate visual object recognition. We explore a matching-theoretic algorithm named CODA [1] to place the workflow on the computing continuum, targeting the workflow processing time, data transfer intensity, and energy consumption as objectives. Evaluation results on a real computing continuum testbed federated among four Cloud, Fog, and Edge providers reveal that CODA achieves 50%-60% lower completion time, 33%-59% lower CO2 emissions, and 19%-45% lower data transfer intensity compared to two state-of-the-art methods.
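The toy Python sketch below illustrates the kind of multi-objective placement decision described above: each workflow task is assigned to the Cloud, Fog, or Edge device that minimizes a weighted combination of processing time, data transfer time, and energy. It is a greedy illustration with invented cost models and numbers, not the CODA matching algorithm evaluated in the paper.

```python
# Greedy multi-objective placement of workflow tasks onto continuum devices.
# Cost model, weights, and numbers are illustrative assumptions only.

def place_tasks(tasks, devices, w_time=0.4, w_transfer=0.3, w_energy=0.3):
    """tasks: {name: (million_instructions, input_data_mb)},
       devices: {name: (mips, bandwidth_mb_per_s, watts)}."""
    placement = {}
    for task, (mi, data_mb) in tasks.items():
        best, best_cost = None, float("inf")
        for dev, (mips, bandwidth, watts) in devices.items():
            proc_time = mi / mips                          # seconds of computation
            transfer_time = data_mb / bandwidth            # seconds of data movement
            energy = watts * (proc_time + transfer_time)   # rough energy estimate in joules
            cost = w_time * proc_time + w_transfer * transfer_time + w_energy * energy
            if cost < best_cost:
                best, best_cost = dev, cost
        placement[task] = best
    return placement

# Example with hypothetical task sizes and device capacities.
tasks = {"framing": (200.0, 5.0), "cnn_inference": (5000.0, 50.0)}
devices = {"edge": (1000.0, 100.0, 5.0), "fog": (8000.0, 40.0, 60.0), "cloud": (20000.0, 10.0, 200.0)}
print(place_tasks(tasks, devices))
```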

As a HiPEAC member, we are hosting Zeinab Bakhshi, a Ph.D. student from Mälardalen University in Sweden. Zeinab received a HiPEAC collaboration grant and is now hosted by Professor Radu Prodan to expand her research on container-based fog architectures. Taking advantage of the multi-layer computing continuum architecture in the Klagenfurt lab helps Zeinab deploy the use case she is researching. These scientific experiments take her research work to the next level. We are planning to publish our collaborative research in a series of papers based on the upcoming results.

IEEE Transactions on Network and Service Management (TNSM)

Journal Website

Authors: Reza Farahani (Alpen-Adria-Universität Klagenfurt, Austria), Mohammad Shojafar (University of Surrey, UK), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt, Austria), Mohammad Ghanbari (University of Essex, UK), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: With the ever-increasing demands for high-definition and low-latency video streaming applications, network-assisted video streaming schemes have become a promising complementary solution in the HTTP Adaptive Streaming (HAS) context to improve users' Quality of Experience (QoE) as well as network utilization. Edge computing is considered one of the leading networking paradigms for designing such systems by providing video processing and caching close to the end-users. Despite the wide usage of this technology, designing network-assisted HAS architectures that support low-latency and high-quality video streaming, including edge collaboration, is still a challenge. To address these issues, this article leverages the Software-Defined Networking (SDN), Network Function Virtualization (NFV), and edge computing paradigms to propose A collaboRative edge-Assisted framewoRk for HTTP Adaptive video sTreaming (ARARAT). Aiming at minimizing HAS clients' serving time and network cost, besides considering available resources and all possible serving actions, we design a multi-layer architecture and formulate the problem as a centralized optimization model executed by the SDN controller. However, to cope with the high time complexity of the centralized model, we introduce three heuristic approaches that produce near-optimal solutions through efficient collaboration between the SDN controller and edge servers. Finally, we implement the ARARAT framework, conduct our experiments on a large-scale cloud-based testbed including 250 HAS players, and compare its effectiveness with state-of-the-art systems in comprehensive scenarios. The experimental results illustrate that the proposed ARARAT methods (i) improve users' QoE by at least 47%, (ii) decrease the streaming cost, including bandwidth and computational costs, by at least 47%, and (iii) enhance network utilization by at least 48% compared to state-of-the-art approaches.
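To give a feeling for the per-request trade-off that such a framework optimizes at much larger scale, the hypothetical sketch below picks among candidate serving actions at an edge server by weighing expected serving time against network cost. The action names, numbers, and cost model are illustrative assumptions, not the ARARAT optimization model or its heuristics.

```python
# Hypothetical per-request serving-action selection at an edge server,
# trading off expected serving time (QoE) against network cost.

def choose_action(actions, alpha=0.7):
    """actions: {name: (expected_serving_time_s, network_cost_units)}.
    alpha weights latency against network cost."""
    def weighted(item):
        _, (time_s, cost) = item
        return alpha * time_s + (1 - alpha) * cost
    name, _ = min(actions.items(), key=weighted)
    return name

request_options = {
    "local_cache": (0.05, 0.0),      # segment already cached at this edge server
    "neighbour_edge": (0.12, 1.0),   # fetch over the edge-to-edge link
    "edge_transcode": (0.30, 0.2),   # transcode from a cached higher bitrate
    "origin_fetch": (0.40, 3.0),     # fall back to the origin server
}
print(choose_action(request_options))  # -> "local_cache" under these invented numbers
```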

IEEE Cloud Summit 2022, https://www.ieeecloudsummit.org/

Authors: Radu Prodan, Dragi Kimovski, Andrea Bartolini, Michael Cochez, Alexandru Iosup, Evgeny Kharlamov, Joze Rozanec, Laurentiu Vasiliu, Ana Lucia Varbanescu

Abstract: The Graph-Massivizer project, funded by the Horizon Europe research and innovation program, researches and develops a high-performance, scalable, and sustainable platform for information processing and reasoning based on the massive graph (MG) representation of extreme data. It delivers a toolkit of five open-source software tools and FAIR graph datasets covering the sustainable lifecycle of processing extreme data as MGs. The tools focus on holistic usability (from extreme data ingestion and MG creation), automated intelligence (through analytics and reasoning), performance modelling, and environmental sustainability tradeoffs, supported by credible data-driven evidence across the computing continuum. The automated operation based on the emerging serverless computing paradigm supports experienced and novice stakeholders from a broad group of large and small organisations to capitalise on extreme data through MG programming and processing.

Graph-Massivizer validates its innovation on four complementary use cases considering their extreme data properties and coverage of the three sustainability pillars (economy, society, and environment): sustainable green finance, global environment protection foresight, green AI for the sustainable automotive industry, and data centre digital twin for exascale computing. Graph-Massivizer promises 70% more efficient analytics than AliGraph and 30% better energy awareness for ETL storage operations than Amazon Redshift. Furthermore, it aims to demonstrate a possible two-fold improvement in data centre energy efficiency and over 25% lower greenhouse gas emissions for basic graph operations.