HiPEAC magazine https://www.hipeac.net/news/#/magazine/

HiPEACINFO 68, pages 27-28.

Authors: Dragi Kimovski (Alpen-Adria-Universität Klagenfurt, Austria), Narges Mehran (Alpen-Adria-Universität Klagenfurt, Austria), Radu Prodan (Alpen-Adria-Universität Klagenfurt, Austria), Souvik Sengupta (iExec Blockchain Tech, France), Anthony Simonet-Boulgone (iExec Blockchain Tech, France), Ioannis Plakas (UBITECH, Greece), Giannis Ledakis (UBITECH, Greece), and Dumitru Roman (University of Oslo and SINTEF AS, Norway)

Abstract: Modern big-data pipeline applications, such as machine learning, encompass complex workflows for real-time data gathering, storage and analysis. Big-data pipelines often have conflicting requirements, such as low communication latency and high computational speed. These require different kinds of computing resources, from cloud to edge, distributed across multiple geographical locations – in other words, the computing continuum. The Horizon 2020 DataCloud project is creating a novel paradigm for big-data pipeline processing over the computing continuum, covering the complete lifecycle of big-data pipelines. To overcome the runtime challenges associated with automating big-data pipeline processing on the computing continuum, we’ve created the DataCloud architecture. By separating the discovery, definition, and simulation of big-data pipelines from runtime execution, this architecture empowers domain experts with little infrastructure or software knowledge to take an active part in defining big-data pipelines.

This work received funding from the European Union’s Horizon 2020 research and innovation programme (DataCloud project) under grant agreement no. 101016835.

IEEE International Conference on Communications (ICC)

28 May – 01 June 2023 – Rome, Italy

Conference Website

Reza Farahani (Alpen-Adria-Universität Klagenfurt),  Abdelhak Bentaleb (Concordia University, Canada), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), Mohammad Shojafar (University of Surrey, UK), Radu Prodan (Alpen-Adria-Universität Klagenfurt), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: 5G and 6G networks are expected to support various novel emerging adaptive video streaming services (e.g., live, VoD, immersive media, and online gaming) with versatile Quality of Experience (QoE) requirements such as high bitrate, low latency, and sufficient reliability. It is widely agreed that these requirements can be satisfied by adopting emerging networking paradigms like Software-Defined Networking (SDN), Network Function Virtualization (NFV), and edge computing. Previous studies have leveraged these paradigms to present network-assisted video streaming frameworks, but mostly in isolation without devising chains of Virtualized Network Functions (VNFs) that consider the QoE requirements of various types of Multimedia Services (MS).

To bridge the aforementioned gaps, we first introduce a set of multimedia VNFs at the edge of an SDN-enabled network and form diverse Service Function Chains (SFCs) based on the QoE requirements of different MS services. We then propose SARENA, an SFC-enabled ArchitectuRe for adaptive VidEo StreamiNg Applications. Next, we formulate the problem as a central scheduling optimization model executed at the SDN controller. We also present a lightweight heuristic solution consisting of two phases that run on the SDN controller and edge servers to alleviate the time complexity of the optimization model in large-scale scenarios. Finally, we design a large-scale cloud-based testbed, including 250 HTTP Adaptive Streaming (HAS) players requesting two popular MS applications (i.e., live and VoD), conduct various experiments, and compare SARENA’s effectiveness with baseline systems. Experimental results illustrate that SARENA outperforms baseline schemes in terms of users’ QoE by at least 39.6%, latency by 29.3%, and network utilization by 30% in both MS services.

Index Terms: HAS; DASH; NFV; SFC; SDN; Edge Computing.

 

Where does technology help us in our daily lives?

Interview with Felix Schniz, Game Studies and Engineering SPL @ ITEC

 

We meet Felix Schniz for an interview in Lakeside Park, in the CD laboratory Athena, building B12B, to learn something about him and his work and why he chose his career. For those who don’t yet know Felix: he is always neatly dressed, has a smile on his lips and is eager for a mutual exchange of ideas and opinions. So, he was quick to accept the invitation to be the first person featured in the new “People Behind Informatics” series. He is passionate about his work and is happy to share his views with us.

 

Hello Felix, thanks for taking the time to talk to us. Please tell me something about yourself, where you come from, and how your professional career has evolved.

I was born in Bietigheim-Bissingen near Stuttgart. I studied in Mannheim, focusing my Bachelor’s degree on English and American Studies. For my Master, I specialized in culture in the process of modernity. In addition to literature and film, we also dealt with digitization processes, and that’s how I came to video games. That was my “unusual entry” into the technical sciences. After my Master’s degree, it was clear to me: I wanted to write a doctoral thesis on video games. The academic path is simply mine, and the topic offers many exciting perspectives, as it is still unexplored in large parts. During my search for the right environment for such a research project, I met René Schallegger at a conference in Oxford. We stayed in contact. When a vacancy for a university assistant was advertised at the Department of English in 2016, I applied for this position, started my doctorate at the same time, and have been here since then.

 

Such a coincidence, and very lucky that you found exactly what you were looking for. How was your start at the University of Klagenfurt?

I started immediately and also took on the role of the SPL (programme director) of the Master’s degree in “Game Studies and Engineering”, which combines both humanities and technical aspects. This is also what is special about this programme: the students learn technical approaches to video games and what role a technical medium plays in society.

 

What do you particularly like about your work?

I am taken seriously and can combine my passion for technology and humanities. I am very happy to question: What is the reason for that, what is behind it, and what else needs to be considered? I can live that to the full in my work.

 

And how did your doctorate continue?

In my doctorate, I asked the research question of what a video game experience actually is. It’s not that easy to pin down and has to be illuminated from many sides: philosophically, psychologically, sociologically, and from a media-science perspective. The path leads from one’s own personal experience to the technical implementation. I laid theoretical foundations, worked with content analyses and scientifically processed my own experiences. This gave me a new, exciting field of questions for myself and for research on video games – because how can we speak scientifically about the content of the medium when we experience it in such a personal way?

 

What consensus emerged for you?

Video games help us to get a bigger, better picture of people in the digital age. We have to ask ourselves what kind of influence video games can and should have in the future, and we need to raise awareness of the responsibility video game programmers carry. Programmers should also ask themselves what they want to offer people. The virtual worlds that video games open up can offer us a lot, but we have to learn how to deal with them.

In short, I have to ask myself: What do I want to achieve with technology? What role should it play in my life?

Over the past few years, we have been able to follow what role virtual worlds can play in people’s lives. The well-known video game “Fortnite”, for example, was suddenly not just a popular game, but also a much-needed social meeting point and a retreat for young people whose social and private spaces were taken away by the pandemic.

Video games can be of great importance for each of us. They can offer us things we need emotionally, socially, or intellectually, or allow us to explore ourselves. This does not mean that the virtual should replace the real world – but it can be a great addition to it. To continue pursuing these thoughts in a targeted way, I have also written a lot about coping with grief alongside my doctoral thesis. I am currently working on a book about the spiritual experience of interactive media in general. It will be published later this year.

 

Thank you very much for inviting us into your interesting area of work. We wish you a lot of joy and success in your favourite research area.

Journal Website: Journal of Network and Computer Applications


Samira Afzal (Alpen-Adria-Universität Klagenfurt), Vanessa Testoni (unico IDtech), Christian Esteve Rothenberg (University of Campinas), Prakash Kolan (Samsung Research America), and Imed Bouazizi (Qualcomm)

Abstract:

Demand for wireless video streaming services increases with users expecting to access high-quality video streaming experiences. Ensuring Quality of Experience (QoE) is quite challenging due to varying bandwidth and time constraints. Since most of today’s mobile devices are equipped with multiple network interfaces, one promising approach is to benefit from multipath communications. Multipathing leads to higher aggregate bandwidth, and distributing video traffic over multiple network paths improves stability, seamless connectivity, and QoE. However, most current transport protocols do not match the requirements of video streaming applications or are not designed to address relevant issues, such as network heterogeneity, head-of-line blocking, and delay constraints. In this comprehensive survey, we first review video streaming standards and technology developments. We then discuss the benefits and challenges of multipath video transmission over wireless. We provide a holistic literature review of multipath wireless video streaming, shedding light on the different alternatives from an end-to-end layered stack perspective, reviewing key multipath wireless scheduling functions, unveiling trade-offs of each approach, and presenting a suitable taxonomy to classify the state-of-the-art. Finally, we discuss open issues and avenues for future work.

 

Collaborative Edge-Assisted Systems for HTTP Adaptive Video Streaming

5G/6G Innovation Center,  University of Surrey, UK

6th January 2023 | Guildford, UK

Abstract: The proliferation of novel video streaming technologies, advancement of networking paradigms, and steadily increasing numbers of users who prefer to watch video content over the Internet rather than using classical TV have made video the predominant traffic on the Internet. However, designing cost-effective, scalable, and flexible architectures that support low-latency and high-quality video streaming is still a challenge for both over-the-top (OTT) and ISP companies. In this talk, we first introduce the principles of video streaming and the existing challenges. We then review several 5G/6G networking paradigms and explain how we can leverage networking technologies to form collaborative network-assisted video streaming systems for improving users’ quality of experience (QoE) and network utilization.

 

 

Reza Farahani is a last-year Ph.D. candidate at the University of Klagenfurt, Austria, and a visiting Ph.D. researcher at the University of Surrey, UK. He received his B.Sc. in 2014 from the University of Isfahan, Iran, and his M.Sc. in 2019 from the University of Tehran, Iran. Currently, he is working on the ATHENA project in cooperation with its industry partner Bitmovin. His research focuses on designing modern network-assisted video streaming solutions (via SDN, NFV, MEC, SFC, and P2P paradigms), multimedia communication, computing continuum challenges, and parallel and distributed systems. He has also worked in various roles in the computer networks field, e.g., network administrator, ISP customer support engineer, Cisco network engineer, network protocol designer, network programmer, and Cisco instructor (R&S, SP).

Journal: Sensors

Authors: Akif Quddus Khan, Nikolay Nikolov, Mihhail Matskin, Radu Prodan, Dumitru Roman, Bekir Sahin, Christoph Bussler, Ahmet Soylu

Abstract: Big data pipelines are developed to process data characterized by one or more of the three big data features, commonly known as the three Vs (volume, velocity, and variety), through a series of steps (e.g., extract, transform, and move), laying the groundwork for the use of advanced analytics and ML/AI techniques. The computing continuum (i.e., cloud/fog/edge) allows access to a virtually infinite amount of resources, where data pipelines can be executed at scale; however, the implementation of data pipelines on the continuum is a complex task that needs to take computing resources, data transmission channels, triggers, data transfer methods, integration of message queues, etc., into account. The task becomes even more challenging when data storage is considered as part of the data pipelines. Local storage is expensive, hard to maintain, and comes with several challenges (e.g., data availability, data security, and backup). The use of cloud storage, i.e., storage-as-a-service (StaaS), instead of local storage has the potential of providing more flexibility in terms of scalability, fault tolerance, and availability. In this article, we propose a generic approach to integrate StaaS with data pipelines, i.e., computation on an on-premise server or on a specific cloud, but integration with StaaS, and develop a ranking method for available storage options based on five key parameters: cost, proximity, network performance, server-side encryption, and user weights/preferences. The evaluation carried out demonstrates the effectiveness of the proposed approach in terms of data transfer performance, utility of the individual parameters, and feasibility of dynamic selection of a storage option based on four primary user scenarios.
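The core of such a ranking method can be pictured as a weighted score over normalized parameters. The following sketch is purely illustrative (the parameter normalization, weights, and provider names are assumptions, not the paper’s exact model):

```python
# Illustrative sketch of ranking StaaS options by a weighted score.
# Each option carries normalized values in [0, 1]; cost and proximity are
# "lower is better", so they are inverted before weighting.

def rank_storage_options(options, weights):
    """Return storage options sorted from best to worst weighted score."""
    def score(opt):
        return (
            weights["cost"] * (1 - opt["cost"])            # cheaper is better
            + weights["proximity"] * (1 - opt["proximity"])  # closer is better
            + weights["network"] * opt["network"]            # faster is better
            + weights["encryption"] * opt["encryption"]      # SSE support is better
        )
    return sorted(options, key=score, reverse=True)

# Hypothetical providers and user preference weights (weights sum to 1).
options = [
    {"name": "provider-a", "cost": 0.8, "proximity": 0.2, "network": 0.9, "encryption": 1.0},
    {"name": "provider-b", "cost": 0.3, "proximity": 0.6, "network": 0.5, "encryption": 0.0},
]
weights = {"cost": 0.4, "proximity": 0.2, "network": 0.3, "encryption": 0.1}
best = rank_storage_options(options, weights)[0]["name"]
```

With these example weights, the nearer, faster, encrypted provider wins despite its higher cost; shifting weight toward cost would flip the ranking, which is exactly the user-preference dimension the abstract describes.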

Every year, Carinthia celebrates its cultural and scientific greats by awarding a total of 13 prizes based on the proposal of the Carinthian Cultural Board. This year Hermann Hellwagner received one of the three appreciation prizes in the natural and technical sciences category. Congratulations! Further information: https://www.aau.at/blog/kulturpreise-des-landes-kaernten-fuer-hermann-hellwagner-roswitha-rissner-und-wolfgang-puschnig/

IEEE/IFIP Network Operations and Management Symposium (NOMS)

8–12 May 2023 – Miami, FL, USA

Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt, Austria), Abdelhak Bentaleb (Concordia University, Canada), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), Babak Taraghi (Alpen-Adria-Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria), Roger Zimmermann (National University of Singapore, Singapore)

Abstract: Video content in Live HTTP Adaptive Streaming (HAS) is typically encoded using a pre-defined, fixed set of bitrate-resolution pairs (termed Bitrate Ladder), allowing playback devices to adapt to changing network conditions using an adaptive bitrate (ABR) algorithm. However, using a fixed one-size-fits-all solution when faced with various content complexities, heterogeneous network conditions, and viewer device resolutions and locations does not result in an overall maximal viewer quality of experience (QoE). Here, we consider these factors and design LALISA, an efficient framework for dynamic bitrate ladder optimization in live HAS. LALISA dynamically changes a live video session’s bitrate ladder, allowing improvements in viewer QoE and savings in encoding, storage, and bandwidth costs. LALISA is independent of ABR algorithms and codecs, and is deployed along the path between viewers and the origin server. In particular, it leverages the latest developments in video analytics to collect statistics from video players, content delivery networks and video encoders, to perform bitrate ladder tuning. We evaluate the performance of LALISA against existing solutions in various video streaming scenarios using a trace-driven testbed. Evaluation results demonstrate significant improvements in encoding computation (24.4%) and bandwidth (18.2%) costs with an acceptable QoE.
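To illustrate the general idea of ladder tuning from player statistics (not LALISA’s actual algorithm; thresholds and inputs here are assumptions), one can prune rungs that lie above the observed throughput ceiling or that players rarely select:

```python
# Illustrative sketch: prune a fixed bitrate ladder using player statistics.
# Rungs above the observed throughput ceiling, or selected by fewer than
# `min_share` of requests, are dropped to save encoding/bandwidth cost.

def tune_ladder(ladder_kbps, selection_counts, max_observed_kbps, min_share=0.05):
    total = sum(selection_counts.values()) or 1
    tuned = [
        b for b in ladder_kbps
        if b <= max_observed_kbps
        and selection_counts.get(b, 0) / total >= min_share
    ]
    # Always keep the lowest rung as a fallback for poor connections.
    if min(ladder_kbps) not in tuned:
        tuned.insert(0, min(ladder_kbps))
    return tuned

# Hypothetical session: nobody reaches 10 Mbps, and 6 Mbps is barely used.
ladder = tune_ladder(
    [500, 1500, 3000, 6000, 10000],
    {500: 10, 1500: 40, 3000: 45, 6000: 5},
    max_observed_kbps=7000,
)
```

A real system would refresh such statistics per segment window and also account for content complexity, but the cost saving comes from the same place: not encoding rungs no viewer benefits from.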

IEEE Transactions on Network and Service Management (TNSM)

Alireza Erfanian (Alpen-Adria-Universität Klagenfurt, Austria), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: The edge computing paradigm brings cloud capabilities close to the clients. Leveraging the edge’s capabilities can improve video streaming services by employing the storage capacity and processing power at the edge for caching and transcoding tasks, respectively, resulting in video streaming services with higher quality and lower latency. In this paper, we propose CD-LwTE, a Cost- and Delay-aware Light-weight Transcoding approach at the Edge, in the context of HTTP Adaptive Streaming (HAS). The encoding of a video segment requires computationally intensive search processes. The main idea of CD-LwTE is to store the optimal search results as metadata for each bitrate of video segments and reuse it at the edge servers to reduce the required time and computational resources for transcoding. Aiming at minimizing the cost and delay of Video-on-Demand (VoD) services, we formulate the problem of selecting an optimal policy for serving segment requests at the edge server, including (i) storing at the edge server, (ii) transcoding from a higher bitrate at the edge server, and (iii) fetching from the origin or a CDN server, as a Binary Linear Programming (BLP) model. As a result, CD-LwTE stores the popular video segments at the edge and serves the unpopular ones by transcoding using metadata or fetching from the origin/CDN server. In this way, in addition to the significant reduction in bandwidth and storage costs, the transcoding time of a requested segment is remarkably decreased by utilizing its corresponding metadata. Moreover, we prove the proposed BLP model is an NP-hard problem and propose two heuristic algorithms to mitigate the time complexity of CD-LwTE. We investigate the performance of CD-LwTE in comprehensive scenarios with various video contents, encoding software, encoding settings, and available resources at the edge. The experimental results show that our approach (i) reduces the transcoding time by up to 97%, (ii) decreases the streaming cost, including storage, computation, and bandwidth costs, by up to 75%, and (iii) reduces delay by up to 48% compared to state-of-the-art approaches.
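The paper formulates the policy selection jointly as a BLP; as a much-simplified intuition (a greedy per-segment sketch with made-up cost figures, not the paper’s model), the three serving options can be compared by expected cost, with per-request costs scaled by segment popularity:

```python
# Greedy per-segment sketch in the spirit of CD-LwTE's three options:
# store at the edge, transcode per request (metadata-assisted), or fetch
# per request from origin/CDN. Costs here are hypothetical units.

def choose_policy(popularity, store_cost, transcode_cost, fetch_cost):
    """Pick the cheapest way to serve one segment bitrate at the edge.

    popularity      -- expected number of requests for this segment
    store_cost      -- one-off storage cost of keeping it cached
    transcode_cost  -- per-request cost of transcoding from a higher bitrate
    fetch_cost      -- per-request cost of fetching from origin/CDN
    """
    costs = {
        "store": store_cost,
        "transcode": popularity * transcode_cost,
        "fetch": popularity * fetch_cost,
    }
    return min(costs, key=costs.get)

# A popular segment amortizes its storage cost; a rarely requested one
# is cheaper to transcode on demand, matching the behaviour in the abstract.
policy_popular = choose_policy(100, store_cost=1.0, transcode_cost=0.05, fetch_cost=0.08)
policy_rare = choose_policy(2, store_cost=1.0, transcode_cost=0.05, fetch_cost=0.08)
```

The actual BLP additionally couples these choices across all segments under shared storage and compute capacities at the edge, which is what makes the joint problem NP-hard.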