Christian Timmerer

Authors: Venkata Phani Kumar M (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin) and Hermann Hellwagner  (Alpen-Adria-Universität Klagenfurt)

Abstract: Video delivery over the Internet has become increasingly established in recent years due to the widespread use of Dynamic Adaptive Streaming over HTTP (DASH). The current DASH specification defines a hierarchical data model for Media Presentation Descriptions (MPDs) in terms of periods, adaptation sets, representations and segments. Although multi-period MPDs are widely used in live streaming scenarios, they are not fully utilized in Video-on-Demand (VoD) HTTP adaptive streaming (HAS) scenarios. In this paper, we introduce MiPSO, a framework for Multi-Period per-Scene Optimization, to examine multiple periods in VoD HAS scenarios. MiPSO provides different encoded representations of a video at either (i) maximum possible quality or (ii) minimum possible bitrate, benefiting both service providers and subscribers. In each period, the proposed framework adjusts the video representations (resolution-bitrate pairs) by taking into account the complexity of the video content, with the aim of achieving streams at either higher quality or lower bitrates. The experimental evaluation with a test video data set shows that MiPSO reduces the average bitrate of streams with the same visual quality by approximately 10% or increases the visual quality of streams by at least 1 dB in terms of Peak Signal-to-Noise Ratio (PSNR) at the same bitrate, compared to conventional approaches to video content delivery.
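The core idea of per-period optimization can be illustrated with a small sketch: given a per-scene complexity estimate and a candidate bitrate ladder, pick the cheapest rung per scene that reaches a target quality (mode (ii) above). The rate-quality model, its parameters, and the ladder below are hypothetical, for illustration only; they are not the paper's model.

```python
# Illustrative sketch of per-period (per-scene) representation selection.
# The logarithmic rate-quality model and all parameters are hypothetical.
import math

def psnr_estimate(bitrate_kbps, complexity):
    """Toy rate-quality model: harder scenes need more bits for the same PSNR."""
    return 30.0 + 10.0 * math.log10(bitrate_kbps / (500.0 * complexity))

def min_bitrate_for_quality(target_psnr, complexity, ladder):
    """Pick the cheapest ladder rung that reaches the target quality."""
    for br in sorted(ladder):
        if psnr_estimate(br, complexity) >= target_psnr:
            return br
    return max(ladder)

ladder = [500, 1000, 2000, 4000, 8000]   # candidate bitrates (kbps)
scenes = [0.6, 1.0, 1.8]                 # per-period content complexity estimates
per_period = [min_bitrate_for_quality(38.0, c, ladder) for c in scenes]
print(per_period)  # easy scenes receive lower bitrates than hard ones
```

Compared with a single fixed ladder for the whole title, the easy scenes here are served at a fraction of the bitrate while still meeting the quality target, which is where the reported bitrate savings come from.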

Keywords: Adaptive Streaming, Video-on-Demand, Per-Scene Encoding, Media Presentation Description

IEEE International Conference on Multimedia and Expo (ICME) 2020, July 6–10, London, United Kingdom

Link: https://www.2020.ieeeicme.org/

The manuscript “The Workflow Trace Archive: Open-Access Data from Public and Private Computing Infrastructures” has been accepted for publication in the A* ranked IEEE Transactions on Parallel and Distributed Systems (TPDS) journal.

Authors: Laurens Versluis, Roland Mathá, Sacheendra Talluri, Tim Hegeman, Radu Prodan, Ewa Deelman, and Alexandru Iosup

Abstract: Realistic, relevant, and reproducible experiments often need input traces collected from real-world environments. We focus in this work on traces of workflows—common in datacenters, clouds, and HPC infrastructures. We show that the state of the art in using workflow traces raises important issues: (1) the use of realistic traces is infrequent, and (2) the use of realistic, open-access traces even more so. Alleviating these issues, we introduce the Workflow Trace Archive (WTA), an open-access archive of workflow traces from diverse computing infrastructures and tooling to parse, validate, and analyze traces. The WTA includes >48 million workflows captured from >10 computing infrastructures, representing a broad diversity of trace domains and characteristics. To emphasize the importance of trace diversity, we characterize the WTA contents and analyze in simulation the impact of trace diversity on experiment results. Our results indicate significant differences in characteristics, properties, and workflow structures between workload sources, domains, and fields.
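The kind of trace characterization the WTA tooling performs can be sketched in a few lines: iterate over workflow traces and compute per-workflow summary statistics. The trace schema below is purely illustrative; it is not the actual WTA format.

```python
# Hypothetical sketch of per-workflow trace characterization.
# The in-memory trace schema here is illustrative, not the WTA's format.
from statistics import mean

traces = [
    {"workflow_id": 1, "tasks": [{"runtime": 4.0}, {"runtime": 6.0}]},
    {"workflow_id": 2, "tasks": [{"runtime": 1.0}, {"runtime": 2.0}, {"runtime": 3.0}]},
]

def characterize(trace):
    """Summarize one workflow: task count, total and mean task runtime."""
    runtimes = [t["runtime"] for t in trace["tasks"]]
    return {"tasks": len(runtimes),
            "total_runtime": sum(runtimes),
            "mean_runtime": mean(runtimes)}

stats = [characterize(t) for t in traces]
print(stats)
```

Aggregating such per-workflow statistics across many sources is what makes the differences between workload domains and fields visible in the first place.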

Acknowledgments: This work is supported by the projects Vidi MagnaData, Commit, the European Union’s Horizon 2020 Research and Innovation Programme, grant agreement number 801091 “ASPIDE”, and the National Science Foundation award number 1664162.

Abstract: Real-time video streaming traffic and related applications have witnessed significant growth in recent years. However, this has been accompanied by some challenging issues, predominantly resource utilization. IP multicasting has been proposed as a solution, but it suffers from problems of its own. Scalable video coding could not gain wide adoption in the industry, due to reduced compression efficiency and additional computational complexity. The emerging software-defined networking (SDN) and network function virtualization (NFV) paradigms enable researchers to cope with IP multicasting issues in novel ways. In this paper, by leveraging the SDN and NFV concepts, we introduce a cost-aware approach to provide advanced video coding (AVC)-based real-time video streaming services in the network. In this study, we use two types of virtualized network functions (VNFs): virtual reverse proxy (VRP) and virtual transcoder (VTF) functions. At the edge of the network, VRPs are responsible for collecting clients’ requests and sending them to an SDN controller. Then, executing a mixed-integer linear program (MILP) determines an optimal multicast tree from an appropriate set of video source servers to the optimal group of transcoders. The desired video is sent over the multicast tree. The VTFs transcode the received video segments and stream them to the requesting VRPs over unicast paths. To mitigate the time complexity of the proposed MILP model, we propose a heuristic algorithm that determines a near-optimal solution in a reasonable amount of time. Using the MiniNet emulator, we evaluate the proposed approach and show it achieves better performance in terms of cost and resource utilization in comparison with traditional multicast and unicast approaches.
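The intuition behind the multicast-tree optimization can be shown with a toy comparison (this is not the paper's MILP): with per-client unicast every edge on every path is paid once per client, while in a multicast tree each shared edge is paid only once. The topology and unit edge costs below are hypothetical.

```python
# Toy illustration of the bandwidth-cost advantage of a multicast tree
# over per-client unicast delivery. Paths and unit edge costs are hypothetical.

unicast_paths = {                 # source -> VRP paths, as edge lists
    "vrp1": [("src", "a"), ("a", "vrp1")],
    "vrp2": [("src", "a"), ("a", "vrp2")],
    "vrp3": [("src", "a"), ("a", "vrp3")],
}

# Unicast: every edge of every path carries its own copy of the stream.
unicast_cost = sum(len(path) for path in unicast_paths.values())

# Multicast: each distinct edge in the union of the paths is paid once.
multicast_edges = {e for path in unicast_paths.values() for e in path}
multicast_cost = len(multicast_edges)

print(unicast_cost, multicast_cost)  # 6 vs 4: the src->a link is shared
```

The MILP in the paper additionally decides where to place the transcoders on such a tree; this sketch only captures the link-sharing effect that makes multicast cheaper.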

Authors: Alireza Erfanian, Farzad Tashtarian, Reza Farahani, Christian Timmerer, Hermann Hellwagner

IEEE Conference on Network Softwarization (NetSoft 2020), 29 June – 3 July 2020, Ghent, Belgium. Link: http://netsoft2020.netsoft-ieee.org

Keywords: Dynamic Adaptive Streaming over HTTP (DASH), Real-time Video Streaming, Software-Defined Networking (SDN), Video Transcoding, Network Function Virtualization (NFV)

The first review of the ASPIDE project took place on 25.02.2020 on the premises of the European Commission in Luxembourg. During the project review, a live demo of the platform for supporting extreme-scale applications was presented, and future research and development activities were discussed with the reviewers.

Aspide Review 2020

ARTICONF: EU first review

Bitmovin, a world leader in online video technology, is teaming up with the University of Klagenfurt, Institute of Information Technology (ITEC) and the Austrian Federal Ministry of Digital and Economic Affairs (BMDW) in a multi-million Euro research project to uncover techniques that will enhance the video streaming experiences of the future. The joint project establishes a dedicated research team to investigate potential new tools and methodologies for encoding, transport and playback of live and on-demand video using the HTTP Adaptive Streaming protocol that is widely used by online video and TV providers. The resulting findings will help empower the creation of next-generation solutions for higher quality video experiences at lower latency, while also potentially reducing storage and distribution costs.

The ITEC team participated in the HiPEAC 2020 International Workshop on Exascale programming models for extreme data with a presentation titled “Monitoring data collection and mining for Exascale systems”. The ITEC team also attended the co-located ASPIDE meeting and actively participated in deciding the project’s next research activities.

Dragi Kimovski

Title of the talk: Mobility-Aware Scheduling of Extreme Data Workflows across the Computing Continuum

Abstract: The appearance of the Fog/Edge computing paradigm, as an emanation of the computing continuum closer to the edge of the network, opens up important opportunities for the execution of complex business and scientific workflows near the data sources. The main characteristics of these workflows are (i) their distributed nature, (ii) the vast amount of data (on the order of petabytes) they generate, and (iii) their strict latency requirements. Current workflow management approaches rely exclusively on Cloud data centers, which, due to their geographical distance from the data sources, can negatively affect latency and cause violations of workflow requirements. It is therefore essential to research novel concepts for partial offloading of complex workflows closer to where the data is generated, thus reducing the communication latency and the need for frequent data transfers.

In this talk we will explore the potential of the computing continuum for scheduling and partial offloading of complex workflows with strict response-time requirements, and expose the resource provisioning challenges related to the heterogeneity and mobility of the Fog/Edge environment. Consequently, we will discuss a novel mobility-aware Pareto-based approach for task offloading across the continuum, which considers three optimization objectives, namely response time, reliability, and financial cost. In addition, the approach introduces a Markov model to perform a single-step predictive analysis of the mobility of the Fog/Edge devices, thus constraining the task offloading optimization problem to devices that do not frequently move (roam) within the computing continuum. As a conclusion to the talk, we will discuss the efficiency of the presented approach, based on both a simulated and a real-world testbed environment tailored for a set of real-world biomedical, meteorological, and astronomy workflows.
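The two stages described above, mobility filtering followed by multi-objective selection, can be sketched as follows. The device data, objective values, and the stay-probability threshold are hypothetical, invented for illustration; the talk's actual Markov model and objective formulation are not reproduced here.

```python
# Illustrative sketch of mobility-filtered Pareto selection over three
# objectives: response time (min), reliability (max), cost (min).
# All device data and the stay-probability threshold are hypothetical.

devices = [
    {"name": "edge1",  "rt": 10, "rel": 0.95,  "cost": 3, "p_stay": 0.9},
    {"name": "edge2",  "rt": 12, "rel": 0.99,  "cost": 4, "p_stay": 0.8},
    {"name": "mobile", "rt": 8,  "rel": 0.90,  "cost": 2, "p_stay": 0.3},
    {"name": "cloud",  "rt": 40, "rel": 0.999, "cost": 1, "p_stay": 1.0},
]

def dominates(a, b):
    """a dominates b: no worse in all objectives and strictly better in one."""
    no_worse = (a["rt"] <= b["rt"] and a["cost"] <= b["cost"]
                and a["rel"] >= b["rel"])
    better = a["rt"] < b["rt"] or a["cost"] < b["cost"] or a["rel"] > b["rel"]
    return no_worse and better

# Stand-in for the Markov mobility prediction: keep devices likely to stay.
stable = [d for d in devices if d["p_stay"] >= 0.7]

# Pareto front over the remaining candidates.
pareto = [d for d in stable
          if not any(dominates(o, d) for o in stable if o is not d)]
print([d["name"] for d in pareto])
```

Note how the fast but highly mobile device is excluded before the Pareto step, even though it would otherwise sit on the front: that is exactly the constraint the mobility model imposes.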

IWCoCo 2020 in Bologna