Natalia Sokolova

Authors: Natalia Sokolova, Mario Taschwer, Stephanie Sarny, Doris Putzgruber-Adamitsch and Klaus Schoeffmann

Abstract: Automatically detecting clinically relevant events in surgery video recordings is becoming increasingly important for documentary, educational, and scientific purposes in the medical domain. From a medical image analysis perspective, such events need to be treated individually and associated with specific visible objects or regions. In the field of cataract surgery (lens replacement in the human eye), pupil reaction (dilation or restriction) during surgery may lead to complications and hence represents a clinically relevant event. Its detection requires automatic segmentation and measurement of pupil and iris in recorded video frames. In this work, we contribute to research on pupil and iris segmentation methods by (1) providing a dataset of 82 annotated images for training and evaluating suitable machine learning algorithms, and (2) applying the Mask R-CNN algorithm to this problem, which – in contrast to existing techniques for pupil segmentation – predicts free-form pixel-accurate segmentation masks for iris and pupil.

The proposed approach achieves consistently high segmentation accuracy on several metrics while delivering acceptable prediction efficiency, establishing a promising basis for further segmentation and event detection approaches on eye surgery videos.

Link: http://2020.biomedicalimaging.org/
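For readers who want to experiment with a similar setup, the sketch below shows how a pre-trained Mask R-CNN from torchvision could be adapted to predict iris and pupil masks and to derive a simple pupil-to-iris area ratio as a dilation indicator. This is a minimal illustration under our own assumptions (torchvision's standard Mask R-CNN heads, three classes: background, iris, pupil), not the authors' exact configuration or training code.

```python
# Minimal sketch (not the authors' code): fine-tuning torchvision's Mask R-CNN
# for three classes (background, iris, pupil) and estimating a pupil/iris area ratio.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 3  # background, iris, pupil (assumed label layout)

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
# Replace the box and mask heads so they predict our class set.
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, NUM_CLASSES)
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, NUM_CLASSES)

# ... fine-tune on the annotated frames, then run inference on a video frame:
model.eval()
frame = torch.rand(3, 540, 720)  # placeholder for a normalized RGB frame tensor
with torch.no_grad():
    pred = model([frame])[0]

def area(label):
    """Pixel area of the most confident predicted mask of a class (1=iris, 2=pupil)."""
    idx = [i for i, l in enumerate(pred["labels"]) if l == label]
    return float((pred["masks"][idx[0], 0] > 0.5).sum()) if idx else 0.0

iris_area, pupil_area = area(1), area(2)
if iris_area > 0:
    print("pupil/iris area ratio:", pupil_area / iris_area)  # tracks dilation over time
```

Tracking this ratio across frames would be one simple way to flag sudden pupil size changes as candidate events.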

This year’s ACM MMSys was held as a fully virtual/online event, and Slido was used for asking questions about keynotes and presentations, including offline discussions with presenters. The interaction report provides some interesting key insights, including the word cloud below, which gives an overview of this year’s discussion items. Although ACM MMSys 2020 is over, everyone is welcome to join the MMSys Slack workspace, where the discussion will continue until ACM MMSys 2021 (available soon!) and hopefully beyond.

The IEEE Transactions on Parallel and Distributed Systems (TPDS) paper “Simplified Workflow Simulation on Clouds based on Computation and Communication Noisiness”, published by Roland Mathá, Prof. Radu Prodan, et al., has been awarded the Code Reviewed Reproducibility EXCELLENCE Badge.

Christian Timmerer

Title: Objective and Subjective QoE Evaluation for Adaptive Point Cloud Streaming

Authors: Jeroen van der Hooft (Ghent University), Maria Torres Vega (Ghent University), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), Ali C. Begen (Ozyegin University, Networked Media), Filip De Turck (Ghent University), Raimund Schatz (Alpen-Adria Universität Klagenfurt & AIT Austrian Institute of Technology, Austria)

Abstract: Volumetric media has the potential to provide the six degrees of freedom (6DoF) required by truly immersive media. However, achieving 6DoF requires ultra-high bandwidth transmissions, which real-world wide area networks cannot provide economically. Therefore, recent efforts have started to target efficient delivery of volumetric media, using a combination of compression and adaptive streaming techniques. It remains, however, unclear how the effects of such techniques on the user-perceived quality can be accurately evaluated. In this paper, we present the results of an extensive objective and subjective quality of experience (QoE) evaluation of volumetric 6DoF streaming. We use PCC-DASH, a standards-compliant means for HTTP adaptive streaming of scenes comprising multiple dynamic point cloud objects. By means of a thorough analysis, we investigate the perceived quality impact of the available bandwidth, the rate adaptation algorithm, the viewport prediction strategy, and the user’s motion within the scene. We determine which of these aspects has more impact on the user’s QoE, and to what extent subjective and objective assessments are aligned.

Keywords: Volumetric Media; HTTP Adaptive Streaming; 6DoF; MPEG V-PCC; QoE Assessment; Objective Metrics
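As a rough illustration of the kind of rate adaptation logic such a system needs (our own simplification, not PCC-DASH itself), the sketch below greedily spends a bandwidth budget across several point cloud objects, preferring quality upgrades for objects closer to the viewer. The object names, bitrates, and distance weighting are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class PCObject:
    name: str
    bitrates: list    # available representation bitrates (kbps), ascending
    distance: float   # distance from viewer; closer objects matter more

def allocate(objects, budget_kbps):
    """Greedily spend a bandwidth budget on per-object quality upgrades."""
    choice = {o.name: 0 for o in objects}            # start every object at lowest quality
    spent = sum(o.bitrates[0] for o in objects)
    while True:
        best, best_gain, best_extra = None, 0.0, 0
        for o in objects:
            i = choice[o.name]
            if i + 1 >= len(o.bitrates):
                continue                              # already at highest quality
            extra = o.bitrates[i + 1] - o.bitrates[i]
            if spent + extra > budget_kbps:
                continue                              # upgrade does not fit the budget
            gain = (1.0 / o.distance) / extra         # weight closer objects higher
            if gain > best_gain:
                best, best_gain, best_extra = o, gain, extra
        if best is None:
            return choice
        choice[best.name] += 1
        spent += best_extra

objects = [PCObject("dancer", [300, 800, 2000], 1.5),
           PCObject("chair", [200, 500, 1200], 4.0)]
print(allocate(objects, budget_kbps=2500))  # -> quality index per object, here {'dancer': 2, 'chair': 1}
```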

International Conference on Quality of Multimedia Experience (QoMEX)
May 26-28, 2020, Athlone, Ireland
http://qomex2020.ie/

Christian Timmerer

Christian Timmerer and Peter Schelkens have been elected as Chairs of the QoMEX Steering Committee and Sebastian Möller has been elected as Treasurer.

The primary goal of the conference is to bring together leading professionals and scientists in multimedia quality and user experience from around the world. QoMEX takes place annually in early summer and is guided by a steering committee.

The 12th International Conference on Quality of Multimedia Experience will be held from May 26th to 28th, 2020 in Athlone, Ireland (online). QoMEX 2020 will provide a warm welcome to leading experts from academia and industry to present and discuss current and future research on multimedia quality, quality of experience (QoE), and user experience (UX).

Authors: Minh Nguyen (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt / Bitmovin Inc.), Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: HTTP-based Adaptive Streaming (HAS) plays a key role in over-the-top video streaming. It contributes towards reducing the rebuffering duration of video playout by adapting the video quality to the current network conditions. However, it incurs variations of video quality within a streaming session because of throughput fluctuation, which impacts the user’s Quality of Experience (QoE). Besides, many adaptive bitrate (ABR) algorithms choose the lowest-quality segments at the beginning of the streaming session to ramp up the playout buffer as soon as possible. Although this strategy decreases the startup time, users can be annoyed as they initially have to watch a low-quality video. In this paper, we propose an efficient retransmission technique, namely H2BR, to replace low-quality segments stored in the playout buffer with higher-quality versions by using features of HTTP/2, including (i) stream priority, (ii) server push, and (iii) stream termination. The experimental results show that H2BR helps users avoid watching low video quality during playback and improves the user’s QoE. H2BR can reduce the time during which users suffer the lowest video quality by more than 70% and can improve QoE by up to 13%.

Keywords: HTTP adaptive streaming, DASH, ABR algorithms, QoE, HTTP/2
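To make the buffer-replacement idea more concrete, here is a small, purely illustrative decision routine (our own simplification, not the H2BR algorithm from the paper): it scans the playout buffer for the earliest segment that can be re-downloaded at a higher quality before its playout deadline, given the current throughput estimate.

```python
def pick_segment_to_upgrade(buffer, throughput_kbps, now, bitrates_kbps, seg_duration):
    """Return (segment index, target quality) of the first buffered segment worth
    upgrading, or None. 'buffer' is ordered by playout time and holds dicts with
    'idx', 'quality' (index into bitrates_kbps), and 'deadline' (seconds)."""
    for seg in buffer:                                   # earliest segments are watched first
        for q in range(len(bitrates_kbps) - 1, seg['quality'], -1):
            size_kbit = bitrates_kbps[q] * seg_duration
            download_time = size_kbit / throughput_kbps
            if now + download_time < seg['deadline']:    # re-download finishes before playout
                return seg['idx'], q
    return None

buffer = [{'idx': 12, 'quality': 0, 'deadline': 8.0},
          {'idx': 13, 'quality': 0, 'deadline': 12.0}]
print(pick_segment_to_upgrade(buffer, throughput_kbps=4000, now=0.0,
                              bitrates_kbps=[500, 1500, 3000], seg_duration=4))  # (12, 2)
```

In the paper's setting, such a retransmission would be realized with HTTP/2 server push and stream priority, and cancelled via stream termination if the deadline comes too close.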

Packet Video Workshop 2020 (PV), June 10-11, 2020, Istanbul, Turkey (co-located with ACM MMSys’20)

Link: https://2020.packet.video/

Authors: Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin), and Mohammad Ghanbari (University of Essex)

Abstract: Holography is able to reconstruct the three-dimensional structure of an object by recording full wave fields of light emitted from the object. This requires a huge amount of data to be encoded, stored, transmitted, and decoded for holographic content, making its practical usage challenging, especially for bandwidth-constrained networks and memory-limited devices. In the delivery of holographic content via the internet, bandwidth wastage should be avoided to tackle the high bandwidth demands of holography streaming. For real-time applications, encoding time-complexity is also a major problem. In this paper, the concept of dynamic adaptive streaming over HTTP (DASH) is extended to holography image streaming, and view-aware adaptation techniques are studied. As each area of a hologram contains the information of a specific view, instead of encoding and decoding the entire hologram, just the part required to render the selected view is encoded and transmitted via the network, based on the user’s interactivity. Four different strategies, namely monolithic, single view, adaptive view, and non-real-time streaming, are explained and compared in terms of bandwidth requirements, encoding time-complexity, and bitrate overhead. Experimental results show that the view-aware methods reduce the required bandwidth for holography streaming at the cost of a bitrate increase.

Keywords: Holography, compression, bitrate adaptation, dynamic adaptive streaming over HTTP, DASH.
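The view-aware idea can be pictured with a toy mapping from a requested viewing angle to the hologram sub-areas ("tiles") that contribute to that view; only those tiles would then be encoded and transmitted. The tiling scheme, field of view, and safety margin below are illustrative assumptions, not the partitioning used in the paper.

```python
def tiles_for_view(view_angle_deg, num_tiles, hologram_fov_deg=60.0, margin=1):
    """Map a requested viewing angle to the horizontal hologram tiles that
    contribute to that view, plus a safety margin for viewer motion."""
    tile_fov = hologram_fov_deg / num_tiles
    center = int((view_angle_deg + hologram_fov_deg / 2) // tile_fov)
    center = max(0, min(num_tiles - 1, center))          # clamp to valid tile range
    return [t for t in range(center - margin, center + margin + 1)
            if 0 <= t < num_tiles]

# A viewer looking straight ahead needs only the central tiles of a 6-tile hologram:
print(tiles_for_view(view_angle_deg=0.0, num_tiles=6))   # -> [2, 3, 4]
```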

Philipp Moll

Authors: Philipp Moll, Veit Frick, Natascha Rauscher, Mathias Lux (Alpen-Adria-Universität Klagenfurt)
Abstract: The popularity of computer games is remarkably high and is still growing. Despite the popularity and economic impact of games, data-driven research in game design, or, to be more precise, in game mechanics – the game elements and rules defining how a game works – is still scarce. As data on user interaction in games is hard to come by, we propose a way to analyze players’ movement and actions based on video streams of games. Utilizing this data, we formulate four hypotheses focusing on player experience, enjoyment, and interaction patterns, as well as the interrelation thereof. Based on a user study for the popular game Fortnite, we discuss the interrelation between game mechanics, the enjoyment of players, and different player skill levels in the observed data.
Keywords: Online Games; Game Mechanics; Game Design; Video Analysis
Links: International Workshop on Immersive Mixed and Virtual Environment Systems (MMVE)

Our paper “Pixel-Based Tool Segmentation in Cataract Surgery Videos with Mask R-CNN” has been accepted for publication at the IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS – http://cbms2020.org).
Authors: Markus Fox, Klaus Schöffmann, Mario Taschwer
Abstract:
Automatically detecting surgical tools in recorded surgery videos is an important building block of further content-based video analysis. In ophthalmology, the results of such methods can support training and teaching of operation techniques and enable investigation of medical research questions on a dataset of recorded surgery videos. While previous methods used frame-based classification techniques to predict the presence of surgical tools without localizing them, we apply a recent deep-learning segmentation method (Mask R-CNN) to localize and segment surgical tools used in ophthalmic cataract surgery. We add ground-truth annotations for multi-class instance segmentation to two existing datasets of cataract surgery videos and make the resulting datasets publicly available for research purposes. In the absence of comparable results from the literature, we tune and evaluate the Mask R-CNN approach on these datasets for instrument segmentation/localization and achieve promising results (61% mean average precision at 50% intersection over union for instance segmentation, working even better for bounding box detection or binary segmentation), establishing a reasonable baseline for further research. Moreover, we experiment with common data augmentation techniques and analyze the achieved segmentation performance with respect to each class (instrument), providing evidence for future improvements of this approach.
Acknowledgments:
This work was funded by the FWF Austrian Science Fund under grant P 31486-N31.
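For context on the reported numbers: mean average precision at 50% intersection over union (IoU) counts a predicted instance as correct when its mask overlaps the ground-truth mask of the same class by at least half. A minimal IoU computation on binary masks could look like the following (our own sketch, independent of the authors' evaluation code).

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection over union of two binary masks (H x W numpy arrays)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 0.0

# A predicted instance counts as a true positive at the IoU@0.5 threshold if
# mask_iou(pred_mask, gt_mask) >= 0.5 and the predicted class (instrument) matches.
```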

The IEEE Communications Society extends its appreciation to Hermann Hellwagner as a distinguished member of IEEE INFOCOM 2020.
See more information here.
IEEE INFOCOM 2020 – Online Conference July 6-9, 2020