Tag Archive for: virtual reality

Christian Timmerer

Abstract: Universal media access, as proposed in the late 1990s and early 2000s, is now a reality: we can generate, distribute, share, and consume any media content, anywhere, anytime, and with/on any device. A major technical breakthrough was adaptive streaming over HTTP, which resulted in the standardization of MPEG-DASH and is now successfully deployed in HTML5 environments thanks to the corresponding Media Source Extensions (MSE). The next big thing in adaptive media streaming is virtual reality applications and, specifically, omnidirectional (360°) media streaming, which is currently built on top of existing adaptive streaming ecosystems. This tutorial provides a detailed overview of adaptive streaming of both traditional and omnidirectional media within HTML5 environments. It covers the basic principles and paradigms of adaptive streaming as well as already deployed content generation, distribution, and consumption workflows, and it provides insights into standards and emerging technologies in the adaptive streaming space. Finally, the tutorial presents the latest approaches for immersive media streaming enabling 6DoF DASH through Point Cloud Compression (PCC) and concludes with open research issues and industry efforts in this domain.
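
For readers unfamiliar with how MSE underpins DASH playback in HTML5, here is a minimal sketch that fetches media segments over HTTP and appends them to a SourceBuffer. The segment file names and the codec string are hypothetical placeholders; a real DASH player (such as dash.js) parses the MPD and runs an adaptation logic to decide which representation to fetch next.

```typescript
// Minimal sketch of MSE-based segment playback (not a full DASH player).
// Segment URLs and the MIME/codec string are hypothetical placeholders;
// a real player derives them from the MPD and its adaptation logic.
const video = document.querySelector('video') as HTMLVideoElement;
const mimeCodec = 'video/mp4; codecs="avc1.42E01E"';

const segments = [
  'init.mp4',       // initialization segment
  'segment-1.m4s',  // media segments, typically a few seconds each
  'segment-2.m4s',
];

async function play(): Promise<void> {
  if (!('MediaSource' in window) || !MediaSource.isTypeSupported(mimeCodec)) {
    throw new Error('MSE or codec not supported');
  }
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);
  await new Promise<void>((resolve) =>
    mediaSource.addEventListener('sourceopen', () => resolve(), { once: true })
  );

  const sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
  for (const url of segments) {
    const data = await (await fetch(url)).arrayBuffer();
    sourceBuffer.appendBuffer(data);
    // Wait until the buffer has consumed the segment before appending more.
    await new Promise<void>((resolve) =>
      sourceBuffer.addEventListener('updateend', () => resolve(), { once: true })
    );
  }
  mediaSource.endOfStream();
}

play().catch(console.error);
```

In practice, the player measures throughput while downloading each segment and switches between quality representations accordingly; this adaptation logic is the core of what the tutorial covers, for both traditional and tiled 360° content.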

Lecturers: Christian Timmerer, Alpen-Adria-Universität Klagenfurt & Bitmovin, Inc.
Ali C. Begen, Ozyegin University and Networked Media

Paper Title: Alternative inputs for games and AR/VR applications: deep headbanging on the web

Abstract: In multimedia research, scientific progress is often slowed down by high demands on hard- and software. However, hardware continuously improves, and today's hardware has become powerful enough to meet the performance demands of complex 3D and deep learning applications. With this demo, we demonstrate that utilizing deep learning and 3D modeling is not a major barrier anymore when building prototypes for showcasing research projects. Our web-based game, called "HeadbangZ", showcases a novel gesture-based input methodology realized through deeply learned pose estimation and user interaction in a 3D environment. Since gesture-based inputs increase the immersion in virtual environments, we assume this input methodology to be especially useful for AR/VR applications and games. Furthermore, we demonstrate that rapid prototyping of applications using novel technologies, such as deep learning, is possible even within 48 hours by developing a working demo within this time frame. Finally, we provide insights into what we learned during the development of HeadbangZ to encourage other researchers to make use of novel technologies. In reference to Stephen Harper's quote, "Having hit a wall, the next logical step is not to bang our heads against it.", we hope that the presentation of HeadbangZ encourages researchers to bang their heads rhythmically to rock music instead of angrily against a virtual wall created by hard- and software limitations.
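
The abstract does not name the pose-estimation model behind the demo, so the following is only a hypothetical reconstruction of such a gesture input loop, assuming TensorFlow.js PoseNet and using the nose keypoint's downward velocity as the "headbang" signal. The threshold value and the `onBang` callback are made up for illustration.

```typescript
// Hypothetical sketch of a head-gesture input loop via pose estimation.
// PoseNet is an assumption; the paper does not specify its model.
import * as posenet from '@tensorflow-models/posenet';

// Downward nose displacement (px/frame) counted as a "bang" (assumed value).
const BANG_THRESHOLD_PX = 40;

async function runHeadbangInput(
  video: HTMLVideoElement,
  onBang: () => void
): Promise<void> {
  const net = await posenet.load();
  let previousY: number | null = null;

  const tick = async (): Promise<void> => {
    const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
    const nose = pose.keypoints.find((k) => k.part === 'nose');
    if (nose && nose.score > 0.5) {
      // A fast downward nose movement between frames is treated as a headbang.
      if (previousY !== null && nose.position.y - previousY > BANG_THRESHOLD_PX) {
        onBang();
      }
      previousY = nose.position.y;
    }
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}

// Usage (hypothetical): wire a webcam video element to the game's input.
// runHeadbangInput(webcamVideo, () => game.registerBang());
```

In practice, one would also debounce successive detections and smooth keypoint positions over a few frames so that a single head movement is not counted twice.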

Authors: Philipp Moll, Andreas Leibetseder, Sabrina Kletz, Mathias Lux, Bernd Münzer (Institute of Information Technology, Alpen-Adria-Universität Klagenfurt)

More information on the paper can be found at: https://dl.acm.org/citation.cfm?id=3323832