IEEE Transactions on Network and Service Management

Authors: Reza Farahani, Ekrem Cetinkaya, Christian Timmerer, Mohammad Shojafar, Mohammad Ghanbari, and Hermann Hellwagner

Abstract: Recent years have witnessed video streaming demands evolve into one of the most popular Internet applications. With the ever-increasing personalized demands for high-definition and low-latency video streaming services, network-assisted video streaming schemes employing modern networking paradigms have become a promising complementary solution in the HTTP Adaptive Streaming (HAS) context. The emergence of such techniques addresses long-standing challenges of enhancing users’ Quality of Experience (QoE), end-to-end (E2E) latency, as well as network utilization. However, designing a cost-effective, scalable, and flexible network-assisted video streaming architecture that supports the aforementioned requirements for live streaming services is still an open challenge. This article leverages novel networking paradigms, i.e., edge computing and Network Function Virtualization (NFV), and promising video solutions, i.e., HAS, Video Super-Resolution (SR), and Distributed Video Transcoding (TR), to introduce A Latency- and cost-aware hybrId P2P-CDN framework for liVe video strEaming (ALIVE). We first introduce the ALIVE multi-layer architecture and design an action tree that considers all feasible resources (i.e., storage, computation, and bandwidth) provided by peers, edge, and CDN servers for serving peer requests with acceptable latency and quality. We then formulate the problem as a Mixed Integer Linear Programming (MILP) optimization model executed at the edge of the network. To alleviate the optimization model’s high time complexity, we propose a lightweight heuristic, namely, Greedy-Based Algorithm (GBA). Finally, we (i) design and instantiate a large-scale cloud-based testbed including 350 HAS players, (ii) deploy ALIVE on it, and (iii) conduct a series of experiments to evaluate the performance of ALIVE in various scenarios.
Experimental results indicate that ALIVE (i) improves the users’ QoE by at least 22%, (ii) decreases the incurred cost of the streaming service provider by at least 34%, (iii) shortens clients’ serving latency by at least 40%, (iv) reduces edge server energy consumption by at least 31%, and (v) reduces backhaul bandwidth usage by at least 24% compared to baseline approaches.
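To make the greedy serving idea concrete, the following is a minimal sketch of how a latency- and cost-aware greedy assignment of requests to resources might look. This is not the paper’s GBA: the `Resource` model, the `alpha` latency/cost weighting, and all numbers below are illustrative assumptions.

```python
# Illustrative sketch (not the paper's GBA): greedily assign each client
# request to the feasible resource (peer, edge, or CDN action) with the
# lowest combined latency/cost score, subject to a latency threshold.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    latency_ms: float      # expected serving latency
    cost: float            # monetary/bandwidth cost per request
    capacity: int          # remaining requests this resource can serve

def greedy_assign(requests, resources, max_latency_ms=1000.0, alpha=0.5):
    """Assign each request to the cheapest feasible resource.

    alpha weighs latency against cost in the scoring function.
    Returns a dict: request id -> resource name (or None if unservable).
    """
    assignment = {}
    for req in requests:
        feasible = [r for r in resources
                    if r.capacity > 0 and r.latency_ms <= max_latency_ms]
        if not feasible:
            assignment[req] = None
            continue
        best = min(feasible,
                   key=lambda r: alpha * r.latency_ms / max_latency_ms
                               + (1 - alpha) * r.cost)
        best.capacity -= 1
        assignment[req] = best.name
    return assignment

# Example: three resource types roughly mirroring peer/edge/CDN trade-offs.
resources = [
    Resource("peer", latency_ms=40, cost=0.0, capacity=2),
    Resource("edge", latency_ms=20, cost=0.5, capacity=1),
    Resource("cdn",  latency_ms=120, cost=1.0, capacity=10),
]
print(greedy_assign(["r1", "r2", "r3", "r4"], resources))
```

Once the cheap peer capacity is exhausted, requests spill over to the edge and then to the CDN, which mirrors the hybrid P2P-CDN spirit of the framework.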

Keywords: HTTP Adaptive Streaming (HAS); Edge Computing; Network Function Virtualization (NFV); Content Delivery Network (CDN); Peer-to-Peer (P2P); Quality of Experience (QoE); Video Transcoding; Video Super-Resolution.

Radu Prodan participated on November 1, 2023, as an external opponent at the PhD defense of Ruyue Xin (title of the dissertation: Towards Effective Performance Diagnosis for Distributed Applications), supervised by Dr. Zhiming Zhao, Dr. Paola Grosso, and Prof. Cees de Laat at the University of Amsterdam, Netherlands.

 

The Graph Massivizer Project is part of the European Big Data Value Forum!

The team participates in a session exploring the latest in #KnowledgeGraph technology with real-world use cases in agrifood, industry 4.0 and healthcare. As part of the session, metaphacts GmbH founder Peter Haase will discuss the importance of knowledge graphs as a foundational layer for #AI applications.

The @DataCloud2020 dissemination workshop, organized by project partner @SINTEF, took place on the 26th of October as part of the @icpm_conf 2023, hosted by @SapienzaRoma. Narges Mehran participated for UNI-KLU.
@DataCloud2020 booth stand at the Auditorium Antonianum for the 5th International Conference on Process Mining (@icpm_conf 2023).

The Symposium “The Data Science and Artificial Intelligence (DSAI) carnival” took place on the 19th of October 2023 at the Wageningen University & Research Campus and was organized in collaboration with the Wageningen Data Competence Center (WDCC).

This symposium provided an in-depth examination of cutting-edge themes from areas such as the Web, Semantic Web, linked data and knowledge graphs, LLMs, MLOps, cloud computing, data infrastructures and data space, FAIR data management, and related developments.

Leading experts shared the latest research and applications in these areas, fostering collaboration and offering insights into emerging trends.

The event concluded with the inaugural lecture of Prof. Dr. Anna Fensel.

 

IEEE Access, A Multidisciplinary, Open-access Journal of the IEEE

Title: Characterization of the Quality of Experience and Immersion of Point Cloud Video Sequences through a Subjective Study @ IEEE Access

Authors: Minh Nguyen, Shivi Vats, Sam Van Damme (Ghent University – imec and KU Leuven, Belgium), Jeroen van der Hooft (Ghent University – imec, Belgium), Maria Torres Vega (Ghent University – imec and KU Leuven, Belgium), Tim Wauters (Ghent University – imec, Belgium), Filip De Turck (Ghent University – imec, Belgium), Christian Timmerer, Hermann Hellwagner

Abstract: Point cloud streaming has recently attracted research attention as it has the potential to provide six degrees of freedom movement, which is essential for truly immersive media. The transmission of point clouds requires high-bandwidth connections, and adaptive streaming is a promising solution to cope with fluctuating bandwidth conditions. Thus, understanding the impact of different factors in adaptive streaming on the Quality of Experience (QoE) becomes fundamental. Point clouds have been evaluated in Virtual Reality (VR), where viewers are completely immersed in a virtual environment. Augmented Reality (AR) is a novel technology and has recently become popular, yet quality evaluations of point clouds in AR environments are still limited to static images.

In this paper, we perform a subjective study of four impact factors on the QoE of point cloud video sequences in AR conditions, including encoding parameters (quantization parameters, QPs), quality switches, viewing distance, and content characteristics. The experimental results show that these factors significantly impact the QoE. The QoE decreases if the sequence is encoded at high QPs and/or switches to lower quality and/or is viewed at a shorter distance, and vice versa. Additionally, the results indicate that the end user is not able to distinguish the quality differences between two quality levels at a specific (high) viewing distance. An intermediate-quality point cloud encoded at geometry QP (G-QP) 24 and texture QP (T-QP) 32 and viewed at 2.5 m can have a QoE (i.e., score 6.5 out of 10) comparable to a high-quality point cloud encoded at 16 and 22 for G-QP and T-QP, respectively, and viewed at a distance of 5 m. Regarding content characteristics, objects with lower contrast can yield better quality scores. Participants’ responses reveal that the visual quality of point clouds has not yet reached the desired level of immersion; the average QoE of the highest visual quality is less than 8 out of 10. There is also a good correlation between objective metrics (e.g., color Peak Signal-to-Noise Ratio (PSNR) and geometry PSNR) and the QoE score. In particular, the Pearson correlation coefficient for color PSNR is 0.84. Finally, we found that machine learning models are able to accurately predict the QoE of point clouds in AR environments.

The subjective test results and questionnaire responses are available on Github: https://github.com/minhkstn/QoE-and-Immersion-of-Dynamic-Point-Cloud.
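As a toy illustration of the correlation analysis mentioned in the abstract, the Pearson coefficient between an objective metric (such as color PSNR) and subjective scores can be computed as below. The PSNR and score values are made up for illustration; they are not the study’s data.

```python
# Minimal sketch: Pearson correlation between an objective metric
# (e.g., color PSNR) and subjective QoE scores.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

psnr = [28.0, 31.5, 34.0, 37.5, 40.0]   # hypothetical color PSNR values (dB)
mos  = [4.1, 5.0, 6.2, 7.4, 7.9]        # hypothetical mean opinion scores
print(round(pearson(psnr, mos), 3))
```

A coefficient close to 1 (as the paper reports for color PSNR, 0.84) indicates that the objective metric tracks the subjective scores well.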

Sebastian Uitz and Michael Steinkellner presented their highly anticipated game, “A Webbing Journey,” at the biggest gaming event in Austria, the Game City in Vienna, from October 13th to 15th, 2023. This event was a bustling hub of innovation, bringing together game developers and enthusiasts from near and far. It offered a remarkable opportunity to connect with fellow developers and immerse themselves in a world of fantastic games from other indie developers and big publishers. 
Nestled within the heart of Game City, our booth provided a gateway into the captivating universe of “A Webbing Journey.” Attendees of all ages were invited to step into the eight-legged shoes of our adventurous spider, experiencing the game’s enchanting storyline and unique gameplay mechanics. Our setup, equipped with a laptop, a Steam Deck, and a Nintendo Switch, allowed players to traverse the spider’s wondrous journey, leaving no web unspun. 
One of the event’s highlights was our engaging interview with the FM4 radio channel. This platform provided an excellent opportunity to share the inspiration behind “A Webbing Journey,” explore the game’s captivating features, and show off the newest level in our game. We were thrilled to offer a glimpse into the game’s development process and reveal the magic that makes our project so unique.

The 19th International Conference on emerging Networking EXperiments and Technologies (CoNEXT) Paris, France, December 5-8, 2023

Authors: Leonardo Peroni (IMDEA Networks Institute), Sergey Gorinsky (IMDEA Networks Institute), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria).

Abstract: Quality of Experience (QoE) and QoE models are of increasing importance to networked systems. Traditional QoE modeling for video streaming applications builds a one-size-fits-all QoE model that underserves atypical viewers who perceive QoE differently. To address the problem of atypical viewers, this paper proposes iQoE (individualized QoE), a method that employs explicit, expressible, and actionable feedback from a viewer to construct a personalized QoE model for this viewer. The iterative iQoE design exercises active learning and combines a novel sampler with a modeler. The chief emphasis of our paper is on making iQoE sample-efficient and accurate.
By leveraging the Microworkers crowdsourcing platform, we conduct studies with 120 subjects who provide 14,400 individual scores. According to the subjective studies, a session of about 22 minutes empowers a viewer to construct a personalized QoE model that, compared to the best of the 10 baseline models, delivers an average accuracy improvement of at least 42% for all viewers and at least 85% for the atypical viewers. The large-scale simulations based on a new technique of synthetic profiling expand the evaluation scope by exploring iQoE design choices, parameter sensitivity, and generalizability.
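The sampler-plus-modeler loop described above can be sketched in a few lines. This is a hedged toy, not the actual iQoE algorithm: the single "bitrate" feature, the distance-based sampler, and the linear viewer oracle below are all illustrative assumptions.

```python
# Hedged sketch (not the actual iQoE method): an active-learning loop that
# alternates a sampler (query the session most distant from those already
# rated) with a modeler (a closed-form least-squares line).

def fit_line(x, y):
    """Closed-form least squares for y ~ w*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    w = (sum((a - mx) * (c - my) for a, c in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
    return w, my - w * mx

def farthest_sampler(pool, labeled):
    """Index of the pool point farthest from every already-rated point."""
    return max(range(len(pool)),
               key=lambda i: min(abs(pool[i] - x) for x in labeled))

pool = [0.1, 0.25, 0.4, 0.6, 0.8, 1.0]   # candidate (normalized) bitrates
oracle = lambda b: 2 + 6 * b             # stand-in for the viewer's ratings

x, y = [pool[0]], [oracle(pool[0])]      # seed with one rating
for _ in range(3):                       # three active queries
    i = farthest_sampler(pool, x)
    x.append(pool[i]); y.append(oracle(pool[i]))
w, b = fit_line(x, y)                    # personalized (toy) QoE model
```

The point of the design is sample efficiency: a handful of well-chosen queries, rather than an exhaustive rating session, already pins down the viewer’s model.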

 

Delighted to host the last #PlenaryMeeting of the #Datacloud 2020 project. Final adjustments for tool integration and business case deployments ahead.

Authors: Gregor Molan, Gregor Dolinar, Jovan Bojkovski, Radu Prodan, Andrea Borghesi, Martin Molan

Journal: IEEE Access

Purpose: The gap between software development requirements and the available resources of software developers continues to widen. This requires changes in the organization and management of software development.

Objectives: We present a model introducing a quantitative software development management methodology that estimates the relative importance of functionalities and the risk of retaining or abandoning them, which determines the final value of the software product.

Method: The final value of the software product is interpreted as a function of the requirements and functionalities, represented as a computational graph (called a software product graph). The software product graph allows the relative importance of functionalities to be estimated by calculating the corresponding partial derivatives of the value function. The risk of not implementing the functionality is estimated by reducing the final value of a product.
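The derivative-based importance estimate can be illustrated with a tiny sketch. This is not the paper’s formulation: the toy value function, the functionality names, and the finite-difference approximation below are all illustrative assumptions.

```python
# Illustrative sketch: a toy "software product graph" value function over
# functionality levels, with the relative importance of each functionality
# estimated from the partial derivative of the value function, here
# approximated by central finite differences.

def product_value(f):
    """Made-up value function over three functionality levels f[0..2]."""
    core, ui, extras = f
    return 10 * core + 4 * core * ui + 1.5 * extras

def partials(value_fn, f, h=1e-6):
    """Central finite-difference estimate of d(value)/d(f_i)."""
    grads = []
    for i in range(len(f)):
        up = list(f); up[i] += h
        dn = list(f); dn[i] -= h
        grads.append((value_fn(up) - value_fn(dn)) / (2 * h))
    return grads

f = [1.0, 1.0, 1.0]          # all functionalities fully implemented
g = partials(product_value, f)
# The largest partial derivative flags the functionality whose omission
# would cost the most product value; the smallest is the safest to drop.
ranked = sorted(range(len(g)), key=lambda i: -g[i])
```

Here the "core" functionality dominates the value (also through its interaction with "ui"), while "extras" contribute least, which is the kind of ranking the model uses to decide what to abandon.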

Validation: This model has been applied to two EU projects: CareHD and vINCI. In vINCI, the functionalities with the most significant added value to the application were developed based on the implemented model, and those that brought the least value were abandoned. Optimization was not applied in the CareHD project, which proceeded as initially designed. Consequently, only 71% of CareHD’s potential value has been realized.

Conclusions: The presented model enables rational management and organization of software product development, with real-time quantitative evaluation of the impact of functionalities and assessment of the risks of omitting them without significant loss of value. A quantitative evaluation of the impacts and risks of retaining or abandoning functionalities is possible based on the proposed algorithm, which is the core of the model. This model is a tool for the rational organization and development of software products.