Nishant Saurabh

The manuscript "Expelliarmus: Semantic-Centric Virtual Machine Image Management in IaaS Clouds" has been accepted for publication in the Journal of Parallel and Distributed Computing (JPDC) (https://www.journals.elsevier.com/journal-of-parallel-and-distributed-computing).

Authors: Nishant Saurabh (University of Klagenfurt), Shajulin Benedict (Indian Institute of Information Technology, Kottayam), Jorge G. Barbosa (LIACC, Faculdade de Engenharia da Universidade do Porto), Radu Prodan (University of Klagenfurt).

Abstract: Infrastructure-as-a-Service (IaaS) Clouds concurrently accommodate diverse sets of user requests and therefore need an efficient strategy for storing and retrieving virtual machine images (VMIs) at large scale. VMI storage management requires dealing with multiple VMIs, typically gigabytes in size, which entails VMI sprawl issues that hinder elastic resource management and provisioning. Nevertheless, existing VMI management techniques overlook VMI semantics (i.e., at the level of the base image and software packages) and either offer limited possibilities to identify and extract reusable functionality or incur high VMI publish and retrieval overheads. In this paper, we design, implement, and evaluate Expelliarmus, a novel VMI management system that helps to minimize storage, publish, and retrieval overheads. To achieve this goal, Expelliarmus incorporates three complementary features. First, it models VMIs as semantic graphs to expedite the similarity computation between multiple VMIs. Second, Expelliarmus provides semantic-aware VMI decomposition and base image selection to extract and store non-redundant base images and software packages. Third, Expelliarmus can assemble VMIs based on the required software packages upon user request. We evaluate Expelliarmus through a representative set of synthetic Cloud VMIs on a real testbed. Experimental results show that our semantic-centric approach reduces the repository size by 2.3-22 times compared to state-of-the-art systems (e.g., IBM's Mirage and Hemera), with a significant improvement in VMI publish performance and a slight improvement in retrieval performance.
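As a rough illustration of the semantic-graph idea only (the graph model, package names, and Jaccard-style similarity below are simplifying assumptions, not Expelliarmus' actual data model or algorithm), a VMI can be viewed as a base-image node plus a set of software-package nodes, and pairwise similarity can then be computed over those nodes:

```python
# Illustrative sketch: a VMI as a tiny "semantic graph" (base image -> software packages)
# and a Jaccard-style similarity used to find reusable parts. All names are made up.

def vmi_graph(base, packages):
    """Model a VMI as a set of semantic nodes: the base image plus its packages."""
    return {"base": base, "packages": set(packages)}

def similarity(a, b):
    """Jaccard similarity over the package nodes of two VMI graphs."""
    inter = a["packages"] & b["packages"]
    union = a["packages"] | b["packages"]
    return len(inter) / len(union) if union else 0.0

vmi1 = vmi_graph("ubuntu-18.04", {"python3", "nginx", "postgresql"})
vmi2 = vmi_graph("ubuntu-18.04", {"python3", "nginx", "redis"})

shared = vmi1["packages"] & vmi2["packages"]   # candidate reusable packages
print(f"similarity={similarity(vmi1, vmi2):.2f}, shared packages={sorted(shared)}")
```

Expelliarmus builds on richer semantics than this sketch, but the shared-package intersection hints at how non-redundant base images and packages can be identified before storage.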

Acknowledgements:

This work is supported by:

  • European Union’s Horizon 2020 research and innovation programme, grant agreement 825134, “Smart Social Media Ecosystem in a Blockchain Federated Environment (ARTICONF)”;
  • Austrian Agency for International Cooperation in Education and Research (OeAD-GmbH) and Indian Department of Science and Technology (DST), project number IN 20/2018, “Energy Aware Workflow Compiler for Future Heterogeneous Systems”.

The Sixth IEEE International Conference on Multimedia Big Data (BigMM 2020)
http://bigmm2020.org/

Authors: Anatoliy Zabrovskiy (Alpen-Adria-Universität Klagenfurt), Prateek Agrawal (Alpen-Adria-Universität Klagenfurt, Lovely Professional University), Roland Mathá (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin) and Radu Prodan (Alpen-Adria-Universität Klagenfurt).

Abstract: HTTP Adaptive Streaming of video content is becoming an integral part of the Internet and accounts for the majority of today’s traffic. Although Internet bandwidth is constantly increasing, video compression technology plays an important role and the major challenge is to select and set up multiple video codecs, each with hundreds of transcoding parameters. Additionally, the transcoding speed depends directly on the selected transcoding parameters and the infrastructure used. Predicting the transcoding time for multiple transcoding parameters with different codecs and processing units is a challenging task, as it depends on many factors. This paper provides a novel and fast method for transcoding time prediction using video content classification and neural network prediction. Our artificial neural network (ANN) model predicts the transcoding times of video segments for state-of-the-art video codecs based on transcoding parameters and content complexity. We evaluated our method for two video codecs/implementations (AVC/x264 and HEVC/x265) as part of large-scale HTTP Adaptive Streaming services. The ANN model of our method predicts the transcoding time with a mean absolute error (MAE) of 1.37 and 2.67 for the x264 and x265 codecs, respectively. For x264, this is an improvement of 22% compared to the state of the art.
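A minimal sketch of the general approach, using synthetic data and an assumed feature set (the paper's actual network architecture, features, and training data are not reproduced here):

```python
# Illustrative only: predict transcoding time from encoding parameters and a content-
# complexity feature with a small feed-forward ANN. Features and data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# columns: [bitrate_kbps, resolution_height, preset_index, complexity_class]
X = rng.uniform([300, 240, 0, 0], [8000, 2160, 8, 4], size=(500, 4))
# synthetic target: transcoding time grows with resolution, slower presets, and complexity
y = 0.002 * X[:, 1] + 0.5 * X[:, 2] + 1.5 * X[:, 3] + rng.normal(0, 0.3, 500)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X, y)

segment = [[4300, 1080, 5, 2]]   # one segment's transcoding parameters
print(f"predicted transcoding time: {model.predict(segment)[0]:.2f}")
```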

Keywords: Transcoding time prediction, adaptive streaming, video transcoding, neural networks, video encoding, video complexity class, MPEG-DASH

Authors: Minh Nguyen, Hadi Amirpour, Christian Timmerer, Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: HTTP/2 has been explored widely for video streaming, but still suffers from Head-of-Line (HoL) blocking and three-way handshake delay due to TCP. Meanwhile, QUIC, running on top of UDP, can tackle these issues. In addition, although many adaptive bitrate (ABR) algorithms have been proposed for scalable and non-scalable video streaming, the literature lacks an algorithm designed for both types of video streaming approaches. In this paper, we investigate the impact of QUIC and HTTP/2 on the performance of ABR algorithms in terms of different metrics. Moreover, we propose an efficient approach for utilizing scalable video coding formats for adaptive video streaming that combines a traditional video streaming approach (based on non-scalable video coding formats) and a retransmission technique. The experimental results show that QUIC benefits significantly from our proposed method in the context of packet loss and retransmission.

Compared to HTTP/2, it improves the average video quality and also provides a smoother adaptation behavior. Finally, we demonstrate that our proposed method, originally designed for non-scalable video codecs, also works efficiently for scalable videos such as Scalable High Efficiency Video Coding (SHVC).
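The combination of non-scalable streaming with retransmission for scalable content can be sketched roughly as follows; the layer sizes, capacity model, and scheduling rule are illustrative assumptions, not the algorithm evaluated in the paper:

```python
# Illustrative sketch assuming an SHVC-like layout with one base and one enhancement
# layer per segment (sizes and capacities are made up). Idea: always fetch the base layer
# of the next segment; spend leftover capacity retransmitting enhancement layers for
# segments already buffered at base quality only.

def schedule(next_base_mbit, buffered, capacity_mbit):
    """buffered: list of (segment_index, has_enhancement, enhancement_size_mbit)."""
    plan = [("base", "next")]
    remaining = capacity_mbit - next_base_mbit
    for idx, has_enh, enh_size in buffered:        # earliest playout deadline first
        if not has_enh and enh_size <= remaining:
            plan.append(("enhancement", idx))
            remaining -= enh_size
    return plan

print(schedule(next_base_mbit=4,
               buffered=[(5, False, 6), (6, True, 6), (7, False, 6)],
               capacity_mbit=18))
```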

Keywords: QUIC, H2BR, HTTP adaptive streaming, Retransmission, SHVC

Conference: ACM SIGCOMM 2020 Workshop on Evolution, Performance, and Interoperability of QUIC (EPIQ 2020), August 10-14, 2020, New York City, USA.

Link: https://conferences.sigcomm.org/sigcomm/2020/workshop-epiq.html

Authors: Anandhakumar Palanisamy, Mirsat Sefidanoski, Spiros Koulouzis, Carlos Rubia, Nishant Saurabh and Radu Prodan

Abstract: Social media applications are essential for next generation connectivity. Today, social media are centralized platforms with a single proprietary organization controlling the network and posing critical trust and governance issues over the created and propagated content. The ARTICONF project funded by the European Union’s Horizon 2020 program researches a decentralized social media platform based on a novel set of trustworthy, resilient and globally sustainable tools to fulfil the privacy, robustness and autonomy-related promises that proprietary social media platforms have failed to deliver so far. This paper presents the ARTICONF approach to a car-sharing use case application, as a new collaborative peer-to-peer model providing an alternative solution to private car ownership. We describe a prototype implementation of the car-sharing social media application and illustrate through real snapshots how the different ARTICONF tools support it in a simulated scenario.

Link: https://sites.google.com/view/brain-2020/

Natalia Sokolova

Authors: Natalia Sokolova, Mario Taschwer, Stephanie Sarny, Doris Putzgruber-Adamitsch and Klaus Schoeffmann

Abstract: Automatically detecting clinically relevant events in surgery video recordings is becoming increasingly important for documentary, educational, and scientific purposes in the medical domain. From a medical image analysis perspective, such events need to be treated individually and associated with specific visible objects or regions. In the field of cataract surgery (lens replacement in the human eye), pupil reaction (dilation or restriction) during surgery may lead to complications and hence represents a clinically relevant event. Its detection requires automatic segmentation and measurement of pupil and iris in recorded video frames. In this work, we contribute to research on pupil and iris segmentation methods by (1) providing a dataset of 82 annotated images for training and evaluating suitable machine learning algorithms, and (2) applying the Mask R-CNN algorithm to this problem, which – in contrast to existing techniques for pupil segmentation – predicts free-form pixel-accurate segmentation masks for iris and pupil.

The proposed approach achieves consistently high segmentation accuracy on several metrics while delivering acceptable prediction efficiency, establishing a promising basis for further segmentation and event detection approaches on eye surgery videos.
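A minimal sketch of how such a model could be set up with torchvision's Mask R-CNN implementation (the paper does not prescribe this framework; the class layout and hidden-layer size below are assumptions):

```python
# Illustrative sketch: fine-tune a pre-trained Mask R-CNN for three classes
# (background, iris, pupil) so it predicts free-form, pixel-accurate masks.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_pupil_iris_model(num_classes=3):         # background, iris, pupil
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # replace the box classification head for our classes
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # replace the mask head so it predicts per-class segmentation masks
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

model = build_pupil_iris_model()
model.eval()                                        # ready for fine-tuning or inference
```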

Link: http://2020.biomedicalimaging.org/

Authors: Minh Nguyen (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt / Bitmovin Inc.), Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: HTTP-based Adaptive Streaming (HAS) plays a key role in over-the-top video streaming. It contributes towards reducing the rebuffering duration of video playout by adapting the video quality to the current network conditions. However, it incurs variations in video quality within a streaming session because of throughput fluctuations, which impact the user’s Quality of Experience (QoE). Besides, many adaptive bitrate (ABR) algorithms choose the lowest-quality segments at the beginning of the streaming session to ramp up the playout buffer as soon as possible. Although this strategy decreases the startup time, users can be annoyed as they have to watch a low-quality video initially. In this paper, we propose an efficient retransmission technique, namely H2BR, to replace low-quality segments stored in the playout buffer with higher-quality versions by using features of HTTP/2, including (i) stream priority, (ii) server push, and (iii) stream termination. The experimental results show that H2BR helps users avoid watching low video quality during playback and improves the user’s QoE. H2BR can reduce the time during which users suffer the lowest video quality by up to 70% or more and improves the QoE by up to 13%.
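A rough sketch of the retransmission decision only (the deadline check and safety margin below are illustrative assumptions, not the H2BR specification):

```python
# Illustrative sketch: a buffered low-quality segment is re-requested at a higher quality
# only if the re-download is expected to finish before its playout deadline; otherwise the
# retransmission stream would be terminated (HTTP/2 stream termination).

def can_upgrade(playout_deadline_s, segment_size_mbit, est_throughput_mbps, margin=0.8):
    """Return True if the higher-quality copy should arrive before playback reaches it."""
    download_time_s = segment_size_mbit / (est_throughput_mbps * margin)
    return download_time_s < playout_deadline_s

# segment 12 plays in 6 s; the higher-quality copy is 18 Mbit; ~5 Mbps estimated throughput
print(can_upgrade(playout_deadline_s=6, segment_size_mbit=18, est_throughput_mbps=5))
```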

Keywords: HTTP adaptive streaming, DASH, ABR algorithms, QoE, HTTP/2

Packet Video Workshop 2020 (PV) June 10-11, 2020, Istanbul, Turkey (co-located with ACM MMSys’20)

Link: https://2020.packet.video/

Authors: Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Mohammad Ghanbari (University of Essex)

Abstract: Holography is able to reconstruct a three-dimensional structure of an object by recording full wave fields of light emitted from the object. This requires a huge amount of data to be encoded, stored, transmitted, and decoded for holographic content, making its practical usage challenging, especially for bandwidth-constrained networks and memory-limited devices. When delivering holographic content over the Internet, bandwidth wastage should therefore be avoided to cope with the high bandwidth demands of holography streaming. For real-time applications, encoding time-complexity is also a major problem. In this paper, the concept of dynamic adaptive streaming over HTTP (DASH) is extended to holographic image streaming and view-aware adaptation techniques are studied. As each area of a hologram contains the information of a specific view, instead of encoding and decoding the entire hologram, only the part required to render the selected view is encoded and transmitted over the network based on the user’s interactivity. Four different strategies, namely monolithic, single-view, adaptive-view, and non-real-time streaming, are explained and compared in terms of bandwidth requirements, encoding time-complexity, and bitrate overhead. Experimental results show that the view-aware methods reduce the required bandwidth for holography streaming at the cost of a bitrate increase.
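A toy sketch of the view-aware idea (the tiling and geometry are deliberately simplified and not taken from the paper): map the user's current viewing window to the hologram tiles that carry the corresponding view and request only those tiles.

```python
# Illustrative sketch: since each area of a hologram carries the information of a specific
# view, only the tiles covering the user's current viewing window need to be requested.

def tiles_for_view(view_x, view_width, hologram_width, n_tiles):
    """Return indices of the hologram tiles (columns) overlapping the selected view."""
    tile_w = hologram_width / n_tiles
    first = int(view_x // tile_w)
    last = int((view_x + view_width - 1) // tile_w)
    return list(range(max(first, 0), min(last, n_tiles - 1) + 1))

# a 16384-sample-wide hologram split into 16 tiles; the view window covers samples 5000-7000
print(tiles_for_view(view_x=5000, view_width=2000, hologram_width=16384, n_tiles=16))
```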

Keywords: Holography, compression, bitrate adaptation, dynamic adaptive streaming over HTTP, DASH.

Christian Timmerer

Authors: Venkata Phani Kumar M (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin) and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: Video delivery over the Internet has become more and more established in recent years due to the widespread use of Dynamic Adaptive Streaming over HTTP (DASH). The current DASH specification defines a hierarchical data model for Media Presentation Descriptions (MPDs) in terms of periods, adaptation sets, representations and segments. Although multi-period MPDs are widely used in live streaming scenarios, they are not fully utilized in Video-on-Demand (VoD) HTTP adaptive streaming (HAS) scenarios. In this paper, we introduce MiPSO, a framework for MultiPeriod per-Scene Optimization, to examine multiple periods in VoD HAS scenarios. MiPSO provides different encoded representations of a video at either (i) maximum possible quality or (ii) minimum possible bitrate, beneficial to both service providers and subscribers. In each period, the proposed framework adjusts the video representations (resolution-bitrate pairs) by taking into account the complexities of the video content, with the aim of achieving streams at either higher qualities or lower bitrates. The experimental evaluation with a test video data set shows that MiPSO reduces the average bitrate of streams with the same visual quality by approximately 10% or increases the visual quality of streams by at least 1 dB in terms of Peak Signal-to-Noise Ratio (PSNR) at the same bitrate compared to conventional approaches to video content delivery.
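A simplified sketch of the per-scene principle in the "minimum possible bitrate" direction (the rate-quality samples and selection rule below are invented for illustration and do not reproduce MiPSO's optimization):

```python
# Illustrative sketch: for each scene (period), pick the cheapest resolution-bitrate pair
# whose measured quality reaches a target PSNR, instead of one fixed ladder for the whole video.

def pick_per_scene(scenes, target_psnr):
    ladder = {}
    for scene, samples in scenes.items():          # samples: (height, bitrate_kbps, psnr_db)
        ok = [s for s in samples if s[2] >= target_psnr]
        ladder[scene] = min(ok, key=lambda s: s[1]) if ok else max(samples, key=lambda s: s[2])
    return ladder

scenes = {
    "low-motion":  [(720, 1800, 38.2), (1080, 3000, 40.1), (1080, 4500, 41.0)],
    "high-motion": [(720, 1800, 33.5), (1080, 3000, 36.9), (1080, 4500, 38.4)],
}
print(pick_per_scene(scenes, target_psnr=38.0))
```

The low-motion scene gets away with the 720p/1800 kbps pair while the high-motion scene needs 1080p/4500 kbps, which is the intuition behind encoding representations per period rather than per video.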

Keywords: Adaptive Streaming, Video-on-Demand, Per-Scene Encoding, Media Presentation Description

IEEE International Conference on Multimedia and Expo (ICME), July 6-10, 2020, London, United Kingdom

Link: https://www.2020.ieeeicme.org/

The manuscript “The Workflow Trace Archive: Open-Access Data from Public and Private Computing Infrastructures” has been accepted for publication in the A* ranked IEEE Transactions on Parallel and Distributed Systems (TPDS) journal.

Authors: Laurens Versluis, Roland Mathá, Sacheendra Talluri, Tim Hegeman, Radu Prodan, Ewa Deelman, and Alexandru Iosup

Abstract: Realistic, relevant, and reproducible experiments often need input traces collected from real-world environments. In this work, we focus on traces of workflows, which are common in datacenters, clouds, and HPC infrastructures. We show that the state of the art in using workflow traces raises important issues: (1) the use of realistic traces is infrequent, and (2) the use of realistic, open-access traces even more so. To alleviate these issues, we introduce the Workflow Trace Archive (WTA), an open-access archive of workflow traces from diverse computing infrastructures, together with tooling to parse, validate, and analyze traces. The WTA includes more than 48 million workflows captured from more than 10 computing infrastructures, representing a broad diversity of trace domains and characteristics. To emphasize the importance of trace diversity, we characterize the WTA contents and analyze in simulation the impact of trace diversity on experiment results. Our results indicate significant differences in characteristics, properties, and workflow structures between workload sources, domains, and fields.
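A small sketch of the kind of trace analysis such an archive enables, using a made-up task schema (the actual WTA trace format and column names may differ):

```python
# Illustrative sketch: load a task table of a workflow trace and derive simple
# per-workflow statistics. The schema and values here are invented.
import pandas as pd

tasks = pd.DataFrame({
    "workflow_id": [1, 1, 1, 2, 2],
    "runtime_s":   [12.0, 30.5, 7.2, 3.3, 81.0],
    "cpu_cores":   [1, 4, 1, 2, 8],
})

per_workflow = tasks.groupby("workflow_id").agg(
    n_tasks=("runtime_s", "size"),
    total_runtime_s=("runtime_s", "sum"),
    max_cores=("cpu_cores", "max"),
)
print(per_workflow)
```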

Acknowledgments: This work is supported by the projects Vidi MagnaData, Commit, the European Union’s Horizon 2020 Research and Innovation Programme, grant agreement number 801091 “ASPIDE”, and the National Science Foundation award number 1664162.

Abstract: Real-time video streaming traffic and related applications have witnessed significant growth in recent years. However, this has been accompanied by some challenging issues, predominantly resource utilization. IP multicasting, as a solution to this problem, suffers from many problems. Scalable video coding could not gain wide adoption in the industry due to its reduced compression efficiency and additional computational complexity. The emerging software-defined networking (SDN) and network function virtualization (NFV) paradigms enable researchers to cope with IP multicasting issues in novel ways. In this paper, by leveraging the SDN and NFV concepts, we introduce a cost-aware approach to provide advanced video coding (AVC)-based real-time video streaming services in the network. In this study, we use two types of virtualized network functions (VNFs): virtual reverse proxy (VRP) and virtual transcoder (VTF) functions. At the edge of the network, VRPs are responsible for collecting clients’ requests and sending them to an SDN controller. Then, executing a mixed-integer linear program (MILP) determines an optimal multicast tree from an appropriate set of video source servers to the optimal group of transcoders. The desired video is sent over the multicast tree. The VTFs transcode the received video segments and stream them to the requesting VRPs over unicast paths. To mitigate the time complexity of the proposed MILP model, we propose a heuristic algorithm that determines a near-optimal solution in a reasonable amount of time. Using the MiniNet emulator, we evaluate the proposed approach and show that it achieves better performance in terms of cost and resource utilization in comparison with traditional multicast and unicast approaches.
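A toy placement model in PuLP gives a flavor of this kind of optimization, although it is far simpler than the paper's MILP (the topology, costs, and constraints below are invented):

```python
# Toy model, not the paper's formulation: choose at which candidate nodes to run virtual
# transcoders (VTFs) so that every edge proxy (VRP) is served, minimizing the sum of node
# activation and delivery costs.
import pulp

nodes = ["n1", "n2", "n3"]                  # candidate transcoder locations
vrps = ["vrp_a", "vrp_b"]                   # edge proxies with client requests
activation = {"n1": 10, "n2": 6, "n3": 8}   # cost of running a VTF on a node
delivery = {("n1", "vrp_a"): 2, ("n1", "vrp_b"): 5,
            ("n2", "vrp_a"): 4, ("n2", "vrp_b"): 4,
            ("n3", "vrp_a"): 6, ("n3", "vrp_b"): 1}

prob = pulp.LpProblem("vtf_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("open", nodes, cat="Binary")          # VTF placed on node
y = pulp.LpVariable.dicts("serve", (nodes, vrps), cat="Binary") # node serves VRP

prob += (pulp.lpSum(activation[n] * x[n] for n in nodes)
         + pulp.lpSum(delivery[(n, v)] * y[n][v] for n in nodes for v in vrps))
for v in vrps:                              # every VRP is served by exactly one VTF
    prob += pulp.lpSum(y[n][v] for n in nodes) == 1
for n in nodes:                             # a node can serve only if a VTF runs on it
    for v in vrps:
        prob += y[n][v] <= x[n]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(n, v) for n in nodes for v in vrps if y[n][v].value() == 1])
```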

Authors: Alireza Erfanian, Farzad Tashtarian, Reza Farahani, Christian Timmerer, Hermann Hellwagner

IEEE Conference on Network Softwarization (NetSoft 2020), 29 June - 3 July 2020, Ghent, Belgium. http://netsoft2020.netsoft-ieee.org

Keywords: Dynamic Adaptive Streaming over HTTP (DASH), Real-time Video Streaming, Software Defined Networking (SDN), Video Transcoding, Network Function Virtualization (NFV)