Multimedia Communication

At the Christian Doppler laboratory ATHENA, we offer an internship*) for 2023 for Master students and kindly request your application by 20 January 2023 with the following documents (in German or English):

  • CV
  • Record of study/transcript (“Studienerfolgsnachweis”)

*) A 3-month period in 2023 (exact time slot to be discussed), with the possibility to spend up to one month at the industrial partner; 20 hours per week, “Universitäts-KV, Verwendungsgruppe C1, studentische Hilfskraft”

Please send your application by email to nina.stiller@aau.at.

About ATHENA: The Christian Doppler laboratory ATHENA (AdapTive Streaming over HTTP and Emerging Networked MultimediA Services) is jointly proposed by the Institute of Information Technology (ITEC; http://itec.aau.at) at Alpen-Adria-Universität Klagenfurt (AAU) and Bitmovin GmbH (https://bitmovin.com) to address current and future research and deployment challenges of HAS and emerging streaming methods. AAU (ITEC) has been working on adaptive video streaming for more than a decade, has a proven record of successful research projects and publications in the field, and has been actively contributing to MPEG standardization for many years, including MPEG-DASH; Bitmovin is a video streaming software company founded by ITEC researchers in 2013 and has developed highly successful, global R&D and sales activities and a world-wide customer base since then.

The aim of ATHENA is to research and develop novel paradigms, approaches, (prototype) tools, and evaluation results for the phases

  1. multimedia content provisioning,
  2. content delivery, and
  3. content consumption in the media delivery chain as well as for
  4. end-to-end aspects, with a focus on, but not limited to, HTTP Adaptive Streaming (HAS).

The new approaches and insights should enable Bitmovin to build innovative applications and services that account for the steadily increasing and changing multimedia traffic on the Internet.

Vignesh V Menon

2022 Picture Coding Symposium (PCS)

December 7-9, 2022 | San Jose, CA, USA

Conference Website

Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Prajit T Rajendran (Université Paris-Saclay, France), Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract:

In live streaming applications, a fixed set of bitrate-resolution pairs (known as a bitrate ladder) is generally used to avoid the additional pre-processing run-time needed to analyze the complexity of every video and determine an optimized bitrate ladder. Furthermore, live encoders use the fastest available preset for encoding to ensure the minimum possible latency in streaming. For live encoders, the encoding speed is expected to match the video framerate. However, an optimized encoding preset may result in (i) increased Quality of Experience (QoE) and (ii) improved CPU utilization while encoding. In this light, this paper introduces a Content-Adaptive encoder Preset prediction Scheme (CAPS) for adaptive live video streaming applications. In this scheme, the encoder preset is determined using Discrete Cosine Transform (DCT)-energy-based low-complexity spatial and temporal features for every video segment, the number of CPU threads allocated for each encoding instance, and the target encoding speed. Experimental results show that CAPS yields an overall quality improvement of 0.83 dB PSNR and 3.81 VMAF at the same bitrate, compared to fastest-preset encoding of the HTTP Live Streaming (HLS) bitrate ladder using the open-source x265 HEVC encoder. This is achieved while maintaining the desired encoding speed and reducing CPU idle time.
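To make the idea concrete, here is a minimal, hypothetical sketch of content-adaptive preset selection in the spirit of CAPS (this is not the paper's implementation): given normalized spatial/temporal complexity features, the thread budget, and the target live encoding speed, it picks the slowest x265 preset that is still expected to keep up. The speed model, its coefficients, and the helper names are invented for illustration.

```python
# Hypothetical sketch of CAPS-style preset selection (not the paper's code).
# Pick the slowest x265 preset whose estimated encoding speed still reaches the
# target live speed for the given content complexity and thread budget.
# The speed model and all coefficients are invented placeholders; CAPS learns
# this relation from DCT-energy-based features.

X265_PRESETS = ["ultrafast", "superfast", "veryfast", "faster", "fast", "medium"]

# invented relative speed factors (ultrafast = fastest)
PRESET_SPEED_FACTOR = {"ultrafast": 1.00, "superfast": 0.80, "veryfast": 0.55,
                       "faster": 0.40, "fast": 0.30, "medium": 0.20}

def estimate_fps(preset, spatial_c, temporal_c, threads):
    """Placeholder speed model: higher complexity -> slower, more threads -> faster."""
    base = 400.0 * PRESET_SPEED_FACTOR[preset]            # fps of a trivial clip
    penalty = 1.0 + 4.0 * spatial_c + 2.0 * temporal_c    # features assumed in [0, 1]
    return base * min(threads, 16) ** 0.8 / penalty

def select_preset(spatial_c, temporal_c, threads, target_fps):
    """Return the slowest preset still expected to hit the target encoding speed."""
    chosen = X265_PRESETS[0]                              # fall back to ultrafast
    for preset in X265_PRESETS:                           # ordered fast -> slow
        if estimate_fps(preset, spatial_c, temporal_c, threads) >= target_fps:
            chosen = preset
        else:
            break
    return chosen

# e.g. a moderately complex 30 fps segment encoded with 8 threads
print(select_preset(spatial_c=0.4, temporal_c=0.3, threads=8, target_fps=30.0))
```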

Hadi

2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP)

September 26-28, 2022 | Shanghai, China

Conference Website

Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Prajit T Rajendran (Université Paris-Saclay, Paris, France), Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract:

The increasing demand for high-quality and low-cost video streaming services calls for the prediction of video encoding complexity. Predicting video encoding complexity in advance, including encoding time and bitrate, makes it possible to allocate resources and set optimized encoding parameters effectively. In this paper, a lightweight video encoding complexity prediction (VECP) scheme that predicts the encoding bitrate and encoding time of a video with high accuracy is proposed. First, low-complexity Discrete Cosine Transform (DCT)-energy-based features, namely the spatial complexity, temporal complexity, and brightness of videos, are extracted, which can efficiently represent the encoding complexity of videos. Latent vectors are also extracted from a Convolutional Neural Network (CNN) with MobileNet as the backend to obtain additional features from representative frames of each video and assist the prediction process. The extreme gradient boosting (XGBoost) regression algorithm is deployed to predict video encoding complexity using the extracted features. The experimental results demonstrate that VECP predicts the encoding bitrate with an error of up to 3.47% and the encoding time with an error of up to 2.89%, while maintaining a low overall latency of 3.5 milliseconds per frame, which makes it suitable for both Video on Demand (VoD) and live streaming applications.

VECP architecture
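As a rough illustration of the prediction stage described above (not the paper's code), the sketch below trains XGBoost regressors on per-video features to predict encoding bitrate and encoding time; the feature values and targets are random placeholders standing in for the DCT-energy features and CNN latent vectors.

```python
# Rough illustration of the VECP prediction stage (not the paper's code): train
# XGBoost regressors on per-video features to predict encoding bitrate and
# encoding time. Features and targets here are random placeholders.

import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n_videos, latent_dim = 200, 16

# [spatial complexity, temporal complexity, brightness] + CNN latent vector
features = np.hstack([rng.random((n_videos, 3)), rng.random((n_videos, latent_dim))])
bitrate_kbps = rng.uniform(500, 8000, n_videos)      # placeholder targets
encode_time_s = rng.uniform(1, 30, n_videos)

bitrate_model = XGBRegressor(n_estimators=200, max_depth=4).fit(features, bitrate_kbps)
time_model = XGBRegressor(n_estimators=200, max_depth=4).fit(features, encode_time_s)

new_video = features[:1]                             # pretend this is an unseen video
print("predicted bitrate (kbps):", float(bitrate_model.predict(new_video)[0]))
print("predicted encoding time (s):", float(time_model.predict(new_video)[0]))
```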

Vignesh V Menon

2022 IEEE International Conference on Image Processing (ICIP)

October 16-19, 2022 | Bordeaux, France

Conference Website

[Video]

Vignesh V Menon (Alpen-Adria-Universität Klagenfurt), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Mohammad Ghanbari (School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract:

In two-pass encoding, also known as multi-pass encoding, the input video content is analyzed in the first pass to help the second-pass encoding make better encoding decisions and improve overall compression efficiency. In live streaming applications, a single-pass encoding scheme is mainly used to avoid the additional first-pass encoding run-time needed to analyze the complexity of every video. This paper introduces an Efficient low-latency Two-Pass encoding Scheme (ETPS) for live video streaming applications. In this scheme, Discrete Cosine Transform (DCT)-energy-based low-complexity spatial and temporal features are extracted for every video segment in the first pass to predict each target bitrate’s optimal constant rate factor (CRF) for the second-pass constrained variable bitrate (cVBR) encoding. Experimental results show that, compared to a traditional two-pass average-bitrate encoding scheme, ETPS yields average encoding time savings of 43.78% without any noticeable drop in compression efficiency. Additionally, compared to single-pass constant bitrate (CBR) encoding, it yields bitrate savings of 10.89% and 8.60% to maintain the same PSNR and VMAF, respectively.

ETPS architecture
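The following is a simplified, hypothetical sketch of the second-pass step: a placeholder model maps the first-pass complexity features and the target bitrate to a CRF, and the segment is then encoded with a VBV-capped (constrained VBR) x265 encode via ffmpeg. The coefficients and function names are invented; ETPS's actual CRF predictor is learned from DCT-energy features.

```python
# Simplified, hypothetical sketch of the ETPS second pass (not the paper's code):
# predict a CRF for the target bitrate from first-pass complexity features with a
# placeholder model, then run a VBV-capped (constrained VBR) x265 encode.
# All coefficients and names are invented for illustration.

import math
import subprocess

def predict_crf(target_kbps, spatial_c, temporal_c):
    """Placeholder CRF model: higher target bitrate -> lower CRF,
    higher content complexity -> higher CRF at the same bitrate."""
    crf = 32.0 - 5.0 * math.log2(target_kbps / 1000.0) + 6.0 * spatial_c + 3.0 * temporal_c
    return max(0, min(51, round(crf)))

def encode_segment(src, dst, target_kbps, spatial_c, temporal_c):
    crf = predict_crf(target_kbps, spatial_c, temporal_c)
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx265", "-preset", "veryfast",
        "-crf", str(crf),
        # cap the bitrate (mapped to x265 VBV settings) for constrained VBR
        "-maxrate", f"{target_kbps}k", "-bufsize", f"{2 * target_kbps}k",
        dst,
    ]
    subprocess.run(cmd, check=True)

# e.g. encode_segment("segment.mp4", "out_3000k.mp4", 3000, spatial_c=0.4, temporal_c=0.3)
```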

16th International Conference on Signal-Image Technology & Internet-Based Systems – Dijon, France – October 19-21, 2022

Conference Website

Babak Taraghi (Alpen-Adria-Universität Klagenfurt, Austria), Selina Zoë Haack (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: HTTP Adaptive Streaming (HAS) is nowadays a popular solution for multimedia delivery. The novelty of HAS lies in the possibility of continuously adapting the streaming session to current network conditions, facilitated by Adaptive Bitrate (ABR) algorithms. Popular streaming and Video on Demand services such as Netflix, Amazon Prime Video, and Twitch use this method. Given this broad consumer base, ABR algorithms are continuously improved to increase user satisfaction. The insights for these improvements are, among others, gathered within the research area of Quality of Experience (QoE). Within this field, various researchers have dedicated their work to identifying potential impairments and testing their impact on viewers’ QoE. Two frequently discussed visual impairments influencing QoE are stalling events and quality switches. So far, it has commonly been assumed that stalling events have the worst impact on QoE. This paper challenges this belief by comparing stalling events with multiple quality switches and high-amplitude quality switches. Two subjective studies were conducted: in the first, participants received a monetary incentive, while the second was carried out with volunteers. The statistical analysis demonstrated that stalling events do not result in the worst degradation of QoE. These findings suggest that a reevaluation of the effect of stalling events in QoE research is needed, and they may be used in further research and to improve current adaptation strategies in ABR algorithms.

IEEE Transactions on Network and Service Management (TNSM)

Journal Website

Authors: Reza Farahani (Alpen-Adria-Universität Klagenfurt, Austria), Mohammad Shojafar (University of Surrey, UK), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt, Austria), Mohammad Ghanbari (University of Essex, UK), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: With the ever-increasing demands for high-definition and low-latency video streaming applications, network-assisted video streaming schemes have become a promising complementary solution in the HTTP Adaptive Streaming (HAS) context to improve users’ Quality of Experience (QoE) as well as network utilization. Edge computing is considered one of the leading networking paradigms for designing such systems by providing video processing and caching close to the end-users. Despite the wide usage of this technology, designing network-assisted HAS architectures that support low-latency and high-quality video streaming, including edge collaboration, is still a challenge. To address these issues, this article leverages the Software-Defined Networking (SDN), Network Function Virtualization (NFV), and edge computing paradigms to propose A collaboRative edge-Assisted framewoRk for HTTP Adaptive video sTreaming (ARARAT). Aiming at minimizing HAS clients’ serving time and network cost, while considering available resources and all possible serving actions, we design a multi-layer architecture and formulate the problem as a centralized optimization model executed by the SDN controller. However, to cope with the high time complexity of the centralized model, we introduce three heuristic approaches that produce near-optimal solutions through efficient collaboration between the SDN controller and edge servers. Finally, we implement the ARARAT framework, conduct our experiments on a large-scale cloud-based testbed including 250 HAS players, and compare its effectiveness with state-of-the-art systems in comprehensive scenarios. The experimental results illustrate that the proposed ARARAT methods (i) improve users’ QoE by at least 47%, (ii) decrease the streaming cost, including bandwidth and computational costs, by at least 47%, and (iii) enhance network utilization by at least 48% compared to state-of-the-art approaches.
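As a toy illustration of the kind of edge-side heuristic the abstract mentions (not the ARARAT algorithms themselves), the sketch below greedily assigns client requests to the feasible serving action with the lowest weighted combination of serving time and cost; all actions, capacities, and numbers are invented.

```python
# Toy sketch of an edge-side serving heuristic (not the ARARAT algorithms):
# assign each request to the feasible serving action with the lowest weighted
# combination of serving time and cost, under simple capacity budgets.
# All actions, capacities, and numbers are invented placeholders.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    latency_ms: float   # expected serving time
    cost: float         # bandwidth/compute cost (arbitrary units)
    resource: str       # capacity pool the action consumes

ACTIONS = [
    Action("edge-cache-hit", 10,  0.1, "edge_bw"),
    Action("edge-transcode", 40,  0.5, "edge_cpu"),
    Action("neighbor-edge",  60,  0.7, "edge_bw"),
    Action("cdn-fetch",      120, 1.0, "cdn_bw"),
    Action("origin-fetch",   250, 1.5, "origin_bw"),
]

CAPACITY = {"edge_bw": 50, "edge_cpu": 20, "cdn_bw": 200, "origin_bw": 1000}

def serve(num_requests, alpha=0.7):
    """Greedy assignment; alpha weights serving time against cost."""
    used = dict.fromkeys(CAPACITY, 0)
    assignment = {a.name: 0 for a in ACTIONS}
    ranked = sorted(ACTIONS, key=lambda a: alpha * a.latency_ms + (1 - alpha) * 100 * a.cost)
    for _ in range(num_requests):
        for action in ranked:                      # cheapest feasible action wins
            if used[action.resource] < CAPACITY[action.resource]:
                used[action.resource] += 1
                assignment[action.name] += 1
                break
    return assignment

print(serve(100))   # e.g. {'edge-cache-hit': 50, 'edge-transcode': 20, 'cdn-fetch': 30, ...}
```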

Hadi

LiVE: Toward Better Live Video Experience

INSA, France

September 27, 2022 | Rennes, France

 

Abstract: In this presentation, we first introduce the principles of video streaming and the existing challenges. While live video streaming is expected to continue growing at an accelerated pace, one potential area for optimization that has remained relatively untapped is the use of content-aware encoding to improve the quality of live contribution streams, which has traditionally been avoided because of the latency it adds. In this talk, we introduce revolutionary real-time content-aware video quality improvement methods for live applications that keep the added latency very low.

 

 

 

Hadi Amirpour is a postdoctoral researcher at the University of Klagenfurt. He received B.Sc. degrees in Electrical Engineering and Biomedical Engineering and an M.Sc. in Electrical Engineering, and he obtained his Ph.D. in Computer Science from the University of Klagenfurt in 2022. He was involved in the project EmergIMG, a Portuguese consortium on emerging imaging technologies funded by the Portuguese funding agency and H2020. Currently, he is working on the ATHENA project in cooperation with its industry partner Bitmovin. His research interests are image processing and compression, video processing and compression, quality assessment, emerging 3D imaging technologies, and medical image analysis.

The project partners reunited at @itecmmc for a final project review. Thank you, Horizon2020 @EU_Commission; it has been an honour to collaborate on the Future Hyper-connected Sociality.

ARTICONF project review

@UvA_Amsterdam
@mscdigsoc
@UOhrid
@MOGTechnologies
@AgiliaCenter
@vialog_io
@bitYogaAS
@itec

18th International Conference on Network and Service Management (CNSM 2022)

Thessaloniki, Greece | 31 October – 4 November 2022

Conference Website

Minh Nguyen (Alpen-Adria-Universität Klagenfurt, Austria), Babak Taraghi (Alpen-Adria-Universität Klagenfurt, Austria), Abdelhak Bentaleb (National University of Singapore, Singapore), Roger Zimmermann (National University of Singapore, Singapore), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Considering network conditions, video content, and viewer device type/screen resolution to construct a bitrate ladder is necessary to deliver the best Quality of Experience (QoE). A large-screen device like a TV needs a high bitrate with high resolution to provide good visual quality, whereas a small one like a phone requires a low bitrate with low resolution. In addition, encoding high-quality levels at the server side while the network is unable to deliver them causes unnecessary cost for the content provider. Recently, the Common Media Client Data (CMCD) standard has been proposed, which defines the data that is collected at the client and sent to the server with its HTTP requests. This data is useful in log analysis, quality of service/experience monitoring, and delivery improvements.

CADLAD

 

In this paper, we introduce a CMCD-Aware per-Device bitrate LADder construction (CADLAD) that leverages CMCD to address the above issues. CADLAD comprises components at both the client and server sides. The client calculates the top bitrate (tb) — a CMCD parameter indicating the highest bitrate that can be rendered at the client — and sends it to the server together with its device type and screen resolution. The server decides on a suitable bitrate ladder, whose maximum bitrate and resolution are based on the CMCD parameters, for the client device with the purpose of providing maximum QoE while minimizing delivered data. CADLAD has two versions to work in Video on Demand (VoD) and live streaming scenarios. CADLAD is client-agnostic; hence, it can work with any player and ABR algorithm at the client. The experimental results show that CADLAD is able to increase the QoE by 2.6x while saving 71% of delivered data, compared to an existing bitrate ladder of an available video dataset. We implement our idea within CAdViSE — an open-source testbed for reproducibility.
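To illustrate the server-side idea, here is a minimal sketch (not the paper's implementation) that trims a hypothetical bitrate ladder using the CMCD top-bitrate key tb and the reported screen resolution; the ladder values are made up.

```python
# Minimal server-side sketch of the CADLAD idea (not the paper's implementation):
# read the CMCD top-bitrate key 'tb' (kbps) plus the device's screen resolution
# and trim a hypothetical full bitrate ladder so no rung exceeds what the client
# can render or display. Ladder values are made up.

# (bitrate in kbps, vertical resolution in pixels)
FULL_LADDER = [(145, 144), (365, 240), (730, 360), (1100, 480),
               (2000, 720), (3000, 720), (4500, 1080), (6000, 1080), (7800, 1440)]

def per_device_ladder(cmcd, screen_height):
    """Keep only the rungs the requesting device can actually use."""
    top_bitrate_kbps = int(cmcd.get("tb", FULL_LADDER[-1][0]))
    ladder = [(b, h) for (b, h) in FULL_LADDER
              if b <= top_bitrate_kbps and h <= screen_height]
    return ladder or FULL_LADDER[:1]              # never return an empty ladder

# e.g. a phone reporting tb=3000 with a 720p screen
print(per_device_ladder({"tb": "3000", "sf": "d"}, screen_height=720))
# -> [(145, 144), (365, 240), (730, 360), (1100, 480), (2000, 720), (3000, 720)]
```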

 

IEEE Global Communications Conference (GLOBECOM)

December 4-8, 2022 | Rio de Janeiro, Brazil
Conference Website

Authors: Reza Farahani (Alpen-Adria-Universität Klagenfurt, Austria), Abdelhak Bentaleb (National University of Singapore, Singapore), Ekrem Cetinkaya (Alpen-Adria-Universität Klagenfurt, Austria), Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria), Roger Zimmermann (National University of Singapore, Singapore), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: A cost-effective, scalable, and flexible architecture that supports low-latency and high-quality live video streaming is still a challenge for Over-The-Top (OTT) service providers. To cope with this issue, this paper leverages Peer-to-Peer (P2P), Content Delivery Network (CDN), edge computing, Network Function Virtualization (NFV), and distributed video transcoding paradigms to introduce a hybRId P2P-CDN arcHiTecture for livE video stReaming (RICHTER). We first introduce RICHTER’s multi-layer architecture and design an action tree that considers all feasible resources provided by peers, edge servers, and CDN servers for serving peer requests with minimum latency and maximum quality. We then formulate the problem as an optimization model executed at the edge of the network. We present an Online Learning (OL) approach that leverages an unsupervised Self-Organizing Map (SOM) to (i) alleviate the time complexity issue of the optimization model and (ii) make it a suitable solution for large-scale scenarios by enabling decisions for groups of requests instead of single requests. Finally, we implement the RICHTER framework, conduct our experiments on a large-scale cloud-based testbed including 350 HAS players, and compare its effectiveness with baseline systems. The experimental results illustrate that RICHTER outperforms baseline schemes in terms of users’ Quality of Experience (QoE), latency, and network utilization by at least 59%, 39%, and 70%, respectively.
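As a rough sketch of the request-grouping idea (illustrative only, not the RICHTER code), the snippet below maps requests described by a few placeholder features onto a small Self-Organizing Map using the third-party minisom package, then makes one serving decision per SOM cell; the features and the per-cell policy are invented.

```python
# Rough sketch of SOM-based request grouping (illustrative only, not RICHTER):
# map requests, described by placeholder features, onto a small Self-Organizing
# Map and make one serving decision per SOM cell instead of per request.

import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)

# per-request features: [requested bitrate, buffer level, RTT to edge], normalized
requests = rng.random((500, 3))

som = MiniSom(3, 3, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(requests)
som.train_random(requests, 1000)

# group requests by their winning SOM cell
groups = {}
for r in requests:
    cell = tuple(int(i) for i in som.winner(r))   # winning SOM cell (row, col)
    groups.setdefault(cell, []).append(r)

# one decision per group (placeholder policy): relaxed groups go to peers/edge,
# urgent low-buffer groups go straight to the CDN
for cell, members in sorted(groups.items()):
    _, mean_buffer, mean_rtt = np.mean(members, axis=0)
    action = "P2P/edge" if mean_buffer > 0.5 and mean_rtt < 0.5 else "CDN"
    print(f"cell {cell}: {len(members)} requests -> {action}")
```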