
Paper Accepted: Efficient Transparent Access to 5G Edge Services

1st International Workshop on Edge Network Softwarization (ENS 2022) co-located with IEEE International Conference on Network Softwarization (NetSoft 2022)  Milan, Italy

Authors: Josef Hammer and Hermann Hellwagner, Alpen-Adria-Universität Klagenfurt

Abstract: Multi-access Edge Computing (MEC) is a central piece of 5G telecommunication systems and is essential to satisfy the challenging low-latency demands of future applications. MEC provides a cloud computing platform at the edge of the radio access network that developers can utilize for their applications. In [1] we argued that edge computing should be transparent to clients and introduced a solution to that end. This paper presents how to efficiently implement such a transparent approach, leveraging Software-Defined Networking. For high performance and scalability, our architecture focuses on three aspects: (i) a modular architecture that can easily be distributed onto multiple switches/controllers, (ii) multiple filter stages to avoid screening traffic not intended for the edge, and (iii) several strategies to keep the number of flows low to make the best use of the precious flow table memory in hardware switches. A performance evaluation is shown, with results from a real edge/fog testbed.

Keywords: 5G, Multi-Access Edge Computing, MEC, Patricia Trie, SDN, Software-Defined Networking
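The keywords hint at a Patricia trie for fast destination matching in the filter stages. Purely as an illustration (and not the authors' SDN implementation), the following Python sketch keeps hypothetical edge-service IPv4 prefixes in a simple binary prefix trie and performs longest-prefix matching, so that only traffic destined for registered edge prefixes would be handed to further filter stages; a real Patricia trie would additionally path-compress single-child chains.

# Illustrative sketch only: binary prefix trie for longest-prefix matching of
# edge-service destinations (IPv4). Prefixes and service names are hypothetical.
import ipaddress


class PrefixTrie:
    def __init__(self):
        self.root = {}  # node = {'0': child, '1': child, 'service': name}

    def insert(self, cidr: str, service: str) -> None:
        net = ipaddress.ip_network(cidr)
        bits = format(int(net.network_address), "032b")[: net.prefixlen]
        node = self.root
        for b in bits:
            node = node.setdefault(b, {})
        node["service"] = service

    def longest_match(self, addr: str):
        bits = format(int(ipaddress.ip_address(addr)), "032b")
        node, best = self.root, None
        for b in bits:
            if "service" in node:
                best = node["service"]
            if b not in node:
                return best
            node = node[b]
        return node.get("service", best)


if __name__ == "__main__":
    trie = PrefixTrie()
    trie.insert("10.0.1.0/24", "edge-video-cache")  # hypothetical edge service
    trie.insert("10.0.0.0/16", "edge-default")
    print(trie.longest_match("10.0.1.42"))  # -> edge-video-cache
    print(trie.longest_match("8.8.8.8"))    # -> None (not edge traffic)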


Paper accepted: Video Complexity Dataset (VCD) at the 13th ACM Multimedia Systems Conference, Open Dataset and Software (ODS) track

Title: Video Complexity Dataset (VCD)

The 13th ACM Multimedia Systems Conference (ACM MMSys 2022) Open Dataset and Software (ODS) track

June 14–17, 2022 |  Athlone, Ireland

Authors: Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Samira Afzal (Alpen-Adria-Universität Klagenfurt), Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt).

Abstract: This paper provides an overview of the open Video Complexity Dataset (VCD), which comprises 500 Ultra High Definition (UHD) resolution test video sequences. These sequences are provided at 24 frames per second (fps) and stored online in a losslessly encoded 8-bit 4:2:0 format. In this paper, all sequences are characterized by spatial and temporal complexity, rate-distortion complexity, and encoding complexity with the x264 AVC/H.264 and x265 HEVC/H.265 video encoders. The dataset is tailor-made for cutting-edge multimedia applications such as video streaming, two-pass encoding, per-title encoding, scene-cut detection, etc. Evaluations show that the dataset covers a diverse range of video complexities; hence, it is recommended for training and testing video coding applications. All data have been made publicly available as part of the dataset and can be used for various applications.
The details of VCD can be accessed online at https://vcd.itec.aau.at.
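To give a feel for the kind of spatial and temporal complexity measures the abstract mentions, the following Python sketch computes simple SI/TI-style values per sequence with OpenCV. It is a hypothetical example rather than the tooling behind VCD, and it assumes opencv-python and numpy are installed and that sequence.y4m is a local test file.

# Hypothetical sketch of SI/TI-style spatial/temporal complexity measures;
# not the actual tooling used to build the VCD dataset.
import cv2
import numpy as np


def spatial_temporal_complexity(path: str):
    cap = cv2.VideoCapture(path)
    si_values, ti_values, prev = [], [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        luma = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # Spatial information: std-dev of the Sobel gradient magnitude
        gx = cv2.Sobel(luma, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(luma, cv2.CV_32F, 0, 1)
        si_values.append(float(cv2.magnitude(gx, gy).std()))
        # Temporal information: std-dev of the luma difference to the previous frame
        if prev is not None:
            ti_values.append(float((luma - prev).std()))
        prev = luma
    cap.release()
    return max(si_values, default=0.0), max(ti_values, default=0.0)


if __name__ == "__main__":
    si, ti = spatial_temporal_complexity("sequence.y4m")  # hypothetical file
    print(f"SI(max)={si:.2f}, TI(max)={ti:.2f}")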


EUVIP 2022 Special Session

EUVIP 2022 Special Session on

“Machine Learning for Immersive Content Processing”

September, 2022, Lisbon, Portugal

Link

Organizers:

  • Hadi Amirpour, Klagenfurt University, Austria
  • Christine Guillemot, INSA, France
  • Christian Timmerer, Klagenfurt University, Austria

 

Brief description:

Remote communication has become increasingly important, particularly since the COVID-19 crisis. However, a truly realistic visual experience requires more than the traditional two-dimensional (2D) interfaces we know today. Immersive media such as 360-degree video, light fields, point clouds, ultra-high definition, and high dynamic range can fill this gap. These modalities, however, face several challenges from capture to display. Learning-based solutions show great promise and significant performance gains over traditional approaches in addressing these challenges. This special session focuses on research aimed at extending and improving the use of learning-based architectures for immersive imaging technologies.

Important dates:

Paper Submissions: 6th June, 2022
Paper Notifications: 11th July, 2022

 


MPEG awarded a Technology & Engineering Emmy® Award for DASH

MPEG, specifically ISO/IEC JTC 1/SC 29/WG 3 (MPEG Systems), has just been awarded a Technology & Engineering Emmy® Award for its ground-breaking MPEG-DASH standard. Dynamic Adaptive Streaming over HTTP (DASH) is the first international de-jure standard that enables efficient streaming of video over the Internet, and it has changed the entire video streaming industry, including, but not limited to, on-demand, live, and low-latency streaming, as well as 5G and the next generation of hybrid broadcast-broadband services. The first edition was published in April 2012, and MPEG is currently working towards publishing the 5th edition, demonstrating an active and lively ecosystem that is still being developed and improved to address the requirements and challenges of modern media transport applications and services.

This award belongs to 90+ researchers and engineers from around 60 companies all around the world who participated in the development of the MPEG-DASH standard for over 12 years.

From left to right: Kyung-mo Park, Cyril Concolato, Thomas Stockhammer, Yuriy Reznik, Alex Giladi, Mike Dolan, Iraj Sodagar, Ali Begen, Christian Timmerer, Gary Sullivan, Per Fröjdh, Young-Kwon Lim, Ye-Kui Wang. (Photo © Yuriy Reznik)

Christian Timmerer, director of the Christian Doppler Laboratory ATHENA, chaired the evaluation of responses to the call for proposals and has since served as MPEG-DASH Ad-hoc Group (AHG) / Break-out Group (BoG) co-chair as well as co-editor of Part 2 of the standard. For a more detailed history of the MPEG-DASH standard, the interested reader is referred to Christian Timmerer’s blog post “HTTP Streaming of MPEG Media” (capturing the development of the first edition) and Nicolas Weill’s blog post “MPEG-DASH: The ABR Esperanto” (DASH timeline).


Paper accepted: Towards Low Latency Live Streaming: Challenges in a Real-World Deployment

The 13th ACM Multimedia Systems Conference (ACM MMSys 2022)

June 14–17, 2022 |  Athlone, Ireland

Conference Website

Reza Shokri Kalan (Digiturk Company, Istanbul), Reza Farahani (Alpen-Adria-Universität Klagenfurt), Emre Karsli (Digiturk Company, Istanbul), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Over-the-Top (OTT) service providers need faster, cheaper, and Digital Rights Management (DRM)-capable video streaming solutions. Recently, HTTP Adaptive Streaming (HAS) has become the dominant video delivery technology over the Internet. In HAS, videos are split into short intervals called segments, and each segment is encoded at various qualities/bitrates (i.e., representations) to adapt to the available bandwidth. Utilizing different HAS-based technologies with various segment formats imposes extra cost, complexity, and latency on the video delivery system. Enabling an integrated format for transmitting and storing segments at Content Delivery Network (CDN) servers can alleviate the aforementioned issues. To this end, the MPEG Common Media Application Format (CMAF) has been presented as a standard format for cost-effective and low-latency streaming. However, CMAF has not yet been adopted by video streaming providers and is incompatible with most legacy end-user players. This paper reveals some useful steps towards low-latency live video streaming that can be implemented for non-DRM-sensitive content before jumping to CMAF technology. We first design and instantiate our testbed in a real OTT provider environment, including a heterogeneous network and clients, and then investigate the impact of changing the segment format, segment duration, and Digital Video Recording (DVR) window length on a real live event. The results illustrate that replacing the transport stream (.ts) format with fragmented MP4 (.fMP4) and shortening the segments’ duration reduces live latency significantly.

Keywords: HAS, DASH, HLS, CMAF, Live Streaming, Low Latency
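As a rough illustration of the two main levers the paper examines, switching from .ts to fragmented MP4 segments and shortening the segment duration while bounding the DVR window, the Python sketch below drives ffmpeg's HLS muxer accordingly. It is not the authors' deployment; the RTMP source URL is hypothetical and the options assume a reasonably recent ffmpeg build.

# Illustrative sketch: package a live feed as HLS with fMP4 segments, a short
# segment duration, and a bounded DVR window (not the paper's actual setup).
import subprocess

SEGMENT_SECONDS = 2       # shorter segments tend to lower live latency
DVR_WINDOW_SEGMENTS = 30  # playlist length bounds the DVR window (~60 s here)

cmd = [
    "ffmpeg", "-i", "rtmp://example.org/live/stream",  # hypothetical live source
    "-c:v", "libx264", "-c:a", "aac",
    "-f", "hls",
    "-hls_segment_type", "fmp4",             # .fMP4 instead of .ts segments
    "-hls_time", str(SEGMENT_SECONDS),
    "-hls_list_size", str(DVR_WINDOW_SEGMENTS),
    "-hls_flags", "delete_segments",
    "stream.m3u8",
]
subprocess.run(cmd, check=True)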

 


Workshop “IXR’22: Interactive eXtended Reality 2022”

colocated with ACM Multimedia 2022

October, 2022, Lisbon, Portugal

Workshop Chairs:

  • Irene Viola, CWI, Netherlands
  • Hadi Amirpour, Klagenfurt University, Austria
  • Asim Hameed, NTNU, Norway
  • Maria Torres Vega, Ghent University, Belgium

Topics of interest include, but are not limited to:

  • Novel low latency encoding techniques for interactive XR applications
  • Novel networking systems and protocols to enable interactive immersive applications. This includes optimizations ranging from the hardware (e.g., millimeter-wave networks or optical wireless), physical, and MAC layers up to the network, transport, and application layers (such as over-the-top protocols);
  • Significant advances and optimizations in 3D modeling pipelines for AR/VR visualization, accessible and inclusive GUIs, and interactive 3D models;
  • Compression and delivery strategies for immersive media contents, such as omnidirectional video, light fields, point clouds, dynamic and time varying meshes;
  • Quality of Experience management of interactive immersive media applications;
  • Novel rendering techniques to enhance interactivity of XR applications;
  • Application of interactive XR to different areas of society, such as health (e.g., virtual reality exposure therapy), industry (Industry 4.0), and XR e-learning (according to new global aims);

Dates:

  • Submission deadline: 20 June 2022, 23:59 AoE
  • Notifications of acceptance: 29 July 2022
  • Camera ready submission: 21 August 2022
  • Workshop: 10th or 14th October

Workshop “Artificial Intelligence for Live Video Streaming (ALIS 2022)”

ALIS’22: Artificial Intelligence for Live Video Streaming


colocated with ACM Multimedia 2022


October 2022, Lisbon, Portugal

Download ALIS’22 Poster / CfP

 


Horizon Europe project “Scalable Platform for Innovations on Real-time Immersive Telepresence” (SPIRIT) accepted

Project Lead: H. Hellwagner, Ch. Timmerer

Abstract: Immersive telepresence technologies will have game-changing impacts on interactions amongst individuals or with non-human objects (e.g., machines) in cyberspace, with blurred boundaries between the virtual and physical world. The impacts of this technology are expected to span a variety of vertical sectors, including education and training, entertainment, healthcare, and the manufacturing industry. The key challenges include limitations of both the application platform and the underlying network support to achieve seamless presentation, processing, and delivery of immersive telepresence content at a large scale. Innovative design, rigorous validation, and testing exercises aim to fulfill the key technical requirements identified, such as low-latency communication, high bandwidth demand, and complex content encoding/rendering tasks in real time. The industry-leading SPIRIT consortium will build on the existing TRL4 application platforms and network infrastructures developed by the project partners, aiming to address key technical challenges and further develop all major aspects of telepresence technologies to achieve the targeted TRL7. The SPIRIT project will focus its innovations on network-layer, transport-layer, and application/content-layer techniques, as well as security and privacy mechanisms, to facilitate the large-scale operation of telepresence applications. The project team will develop a fully distributed, interconnected testing infrastructure across two geographical sites in Germany and the UK, allowing large-scale testing of heterogeneous telepresence applications in real-life Internet environments. The network infrastructure will host two mainstream application environments based on WebRTC and low-latency DASH. In addition to the project-designated use case scenarios, the project team will test a variety of additional use cases covering heterogeneous vertical sectors through FSTP participation.


Paper accepted: Take the Red Pill for H3 and See How Deep the Rabbit Hole Goes

ACM Mile-High Video 2022 (MHV)

March 01-03, 2022 | Denver, CO, USA

Conference Website

Authors: Minh Nguyen (Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt, Austria), Christian Timmerer (Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt, Austria), Stefan Pham (Fraunhofer FOKUS, Germany), Daniel Silhavy (Fraunhofer FOKUS, Germany), Ali C. Begen (Ozyegin University, Turkey)

Abstract: With the introduction of HTTP/3 (H3) and QUIC at its core, there is an expectation of significant improvements in Web-based secure object delivery. As HTTP is a central protocol to the current adaptive streaming methods in all major over-the-top (OTT) services, an important question is what H3 will bring to the table for such services. To answer this question, we present the new features of H3 and QUIC and compare them to those of HTTP/1.1, HTTP/2, and TCP. We also share the latest research findings in this domain.

Keywords: HTTP adaptive streaming, QUIC, CDN, ABR, OTT, DASH, HLS.
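As a small client-side illustration of comparing HTTP versions (not an experiment from the paper), the Python sketch below times the same download over HTTP/1.1 and HTTP/2 using the httpx library (installed as httpx[http2]); HTTP/3 requires a QUIC-capable client such as aioquic and is omitted here. The URL is hypothetical.

# Illustrative comparison of HTTP/1.1 vs. HTTP/2 download time with httpx.
import time

import httpx

URL = "https://example.com/segment_001.m4s"  # hypothetical media segment

for use_http2 in (False, True):
    with httpx.Client(http2=use_http2) as client:
        start = time.perf_counter()
        response = client.get(URL)
        elapsed = time.perf_counter() - start
        print(f"{response.http_version}: {len(response.content)} bytes "
              f"in {elapsed * 1000:.1f} ms")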


MPEG DASH video streaming technology co-developed in Klagenfurt wins Technology & Engineering Emmy® Award

The Emmy® Awards do not only honour the work of actors and directors; they also recognise technologies that steadily improve the viewing experience for consumers. This year, the winners include the MPEG-DASH standard. Christian Timmerer (Department of Information Technology) played a leading role in its development. Read more about it here.