Multimedia Communication

Vignesh V Menon

VQEG NORM talk on the Video Complexity Analyzer

Vignesh V Menon and Hadi Amirpour gave a talk on ‘Video Complexity Analyzer for Streaming Applications’ at the Video Quality Experts Group (VQEG) meeting on December 14, 2021. The talk presented our research activities on video complexity analysis.

The link to the presentation can be found here (pdf).


IEEE VCIP’21 Tutorial: A Journey towards Fully Immersive Media Access

Sunday, December 5, 2021

Find further info in the blog post here.


ANGELA won the 2nd Best Paper Award in IFIP/IEEE PEMWN 2021 Conference

The ANGELA: HTTP Adaptive Streaming and Edge Computing Simulator paper from the ATHENA CD laboratory has won the 2nd Best Paper Award at the 10th IFIP/IEEE International Conference on Performance Evaluation and Modeling in Wired and Wireless Networks (PEMWN).

More information about the paper can be found in the blog post.


Farzad Tashtarian to give a talk at IMDEA Networks Institute, Madrid, Spain

Farzad Tashtarian has been invited to give a talk on “LwTE: Light-weight Transcoding at the Edge” at the IMDEA Networks Institute, Madrid, Spain.


Internship 2022 at ATHENA

Are you a Master’s student who would like to get to know ATHENA during a three-month internship in 2022?

Come and join our team! Apply now.

(Please note: application deadline is 14 December 2021)



FaRes-ML Won the Best New Streaming Innovation Award in the Streaming Media Readers’ Choice Awards 2021

The Fast Multi-Resolution and Multi-Rate Encoding for HTTP Adaptive Streaming Using Machine Learning paper from ATHENA lab has won the Best New Streaming Innovation Award in the Streaming Media Readers’ Choice Awards 2021.

The journey that led to the publication of the FaRes-ML paper was quite an insightful one.

It all started with the question, “How can we efficiently provide multi-rate representations over a wide range of resolutions for HTTP Adaptive Streaming?” This led to the first publication, Fast Multi-Rate Encoding for Adaptive HTTP Streaming, in which we proposed a double-bound approach to speed up multi-rate encoding. After analyzing the results, we saw room for improvement in parallel encoding performance, which led to the second publication, Towards Optimal Multirate Encoding for HTTP Adaptive Streaming. The results were promising, but we believed encoding performance could be improved further with machine learning. That was the primary motivation behind our third paper, FaME-ML: Fast Multirate Encoding for HTTP Adaptive Streaming Using Machine Learning. In FaME-ML, we used convolutional neural networks (CNNs) to better exploit information from the reference representation when encoding the other representations, resulting in a significant improvement in multi-rate encoding performance. Finally, in the Fast Multi-Resolution and Multi-Rate Encoding for HTTP Adaptive Streaming Using Machine Learning paper, we proposed FaRes-ML, which extends FaME-ML to multi-resolution scenarios.

Here is the list of publications that led to FaRes-ML:

  1. Fast Multi-Rate Encoding for Adaptive HTTP Streaming. Published in DCC’20.
  2. Towards Optimal Multirate Encoding for HTTP Adaptive Streaming. Published in MMM’21.
  3. FaME-ML: Fast Multirate Encoding for HTTP Adaptive Streaming Using Machine Learning. Published in VCIP’20.
  4. Fast Multi-Resolution and Multi-Rate Encoding for HTTP Adaptive Streaming Using Machine Learning. Published in IEEE OJ-SP.
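The speed-up idea behind this line of work can be illustrated with a toy count of the search-space reduction. This is only a sketch of the general bounding principle (a reference encoding's block-partition depths bound the search in dependent encodings), not the papers' actual double-bound algorithm; the depth values are made up:

```python
def bounded_depth_search(ref_depths, max_depth=3):
    """Toy illustration: for each block, search only partition depths up to
    the depth chosen in the reference encoding, instead of all depths.

    Returns (options searched without the bound, options searched with it).
    """
    full = (max_depth + 1) * len(ref_depths)      # exhaustive search per block
    bounded = sum(d + 1 for d in ref_depths)      # depths 0..ref_depth per block
    return full, bounded

# Five blocks whose reference encoding chose depths 0..3.
full, bounded = bounded_depth_search([0, 1, 3, 2, 1])
```

With these illustrative depths, the bounded search evaluates 12 of the 20 exhaustive options, i.e., a 40% reduction for this toy example.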

Christian Timmerer and Babak Taraghi will give talks at DDRC’21

The 1st IEEE International Workshop on Data-Driven Rate Control for Media Streaming (DDRC’21), co-located with the IEEE International Conference on Multimedia Big Data (BigMM’21)

November 15-17, 2021 | Taichung, Taiwan

Conference Website

HTTP Adaptive Streaming (HAS) — Quo Vadis?
Speaker: Professor Christian Timmerer
Time: November 16, 2021 12:10 (UTC +1)

CAdViSE or how to find the Sweet Spots of ABR Systems
Speaker: Babak Taraghi, M.Sc.
Time: November 16, 2021 13:00 (UTC +1)

Online attendance is free. Visit here for more information.


Paper accepted: ECAS-ML: Edge Computing Assisted Adaptation Scheme with Machine Learning for HTTP Adaptive Streaming

28th International Conference on Multimedia Modeling (MMM)

April 05-08, 2022 | Qui Nhon, Vietnam

Conference Website

Jesús Aguilar Armijo (Alpen-Adria-Universität Klagenfurt), Ekrem Çetinkaya (Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt) and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: As the video streaming traffic in mobile networks is increasing, improving the content delivery process becomes crucial, e.g., by utilizing edge computing support. At an edge node, we can deploy adaptive bitrate (ABR) algorithms with a better understanding of network behavior and access to radio and player metrics. In this work, we present ECAS-ML, Edge Assisted Adaptation Scheme for HTTP Adaptive Streaming with Machine Learning. ECAS-ML focuses on managing the tradeoff among bitrate, segment switches and stalls to achieve a higher quality of experience (QoE). For that purpose, we use machine learning techniques to analyze radio throughput traces and predict the best parameters of our algorithm to achieve better performance. The results show that ECAS-ML outperforms other client-based and edge-based ABR algorithms.

Keywords: HTTP Adaptive Streaming, Edge Computing, Content Delivery, Network-assisted Video Streaming, Quality of Experience, Machine Learning.
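The bitrate/switch/stall tradeoff that ECAS-ML manages is commonly captured by a linear QoE model: reward delivered bitrate, penalize quality switches and stalls. A minimal sketch of such a model, not the paper's actual formulation; the weights and session values are illustrative:

```python
def qoe_score(bitrates, stall_times, w_bitrate=1.0, w_switch=1.0, w_stall=4.0):
    """Linear QoE model over a streaming session.

    bitrates:    per-segment delivered bitrate (Mbps)
    stall_times: per-segment stall duration (seconds)
    """
    reward = w_bitrate * sum(bitrates)
    # Penalize the magnitude of bitrate changes between consecutive segments.
    switches = sum(abs(b - a) for a, b in zip(bitrates, bitrates[1:]))
    stalls = sum(stall_times)
    return reward - w_switch * switches - w_stall * stalls

# A steady session vs. one with large switches and a 1.5 s stall.
steady = qoe_score([3.0, 3.0, 3.0], [0.0, 0.0, 0.0])
unstable = qoe_score([1.0, 4.0, 1.0], [0.0, 1.5, 0.0])
```

Under this model the steady session scores higher even though both deliver similar total bitrate, which is exactly the behavior an edge-assisted scheme tunes its parameters toward.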


Paper accepted: MoViDNN: A Mobile Platform for Evaluating Video Quality Enhancement with Deep Neural Networks

28th International Conference on Multimedia Modeling (MMM)

April 05-08, 2022 | Qui Nhon, Vietnam

Conference Website

Ekrem Çetinkaya (Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt), Minh Nguyen (Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt), and Christian Timmerer (Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt)

Abstract: Deep neural network (DNN) based approaches have been studied intensively to improve video quality, thanks to their fast advancement in recent years. Due to their high computational cost, these approaches are designed mainly for desktop devices. However, with the increasing performance of mobile devices in recent years, it has become possible to execute DNN based approaches on mobile devices as well. Even though the required computational power is now available, utilizing DNNs to improve video quality on mobile devices is still an active research area. In this paper, we propose an open-source mobile platform, namely MoViDNN, to evaluate DNN based video quality enhancement methods, such as super-resolution, denoising, and deblocking. Our proposed platform can be used to evaluate DNN based approaches both objectively and subjectively. For objective evaluation, we report common metrics such as execution time, PSNR, and SSIM. For subjective evaluation, the Mean Opinion Score (MOS) is reported. The proposed platform is available publicly at

Keywords: Super resolution, Deblocking, Deep Neural Networks, Mobile Devices
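Of the objective metrics MoViDNN reports, PSNR is the simplest: it is derived directly from the mean squared error between a reference frame and the enhanced/distorted frame. A minimal NumPy sketch (not MoViDNN's implementation):

```python
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio between two 8-bit frames, in dB."""
    ref = reference.astype(np.float64)
    dist = distorted.astype(np.float64)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((max_val ** 2) / mse)

# Example: a mid-gray frame vs. a copy with small uniform noise.
rng = np.random.default_rng(0)
frame = np.full((64, 64), 128, dtype=np.uint8)
noisy = np.clip(frame + rng.integers(-2, 3, frame.shape), 0, 255).astype(np.uint8)
score = psnr(frame, noisy)
```

Higher is better; lossless reconstruction gives infinite PSNR, while the lightly noised copy above lands in the 40+ dB range typical of near-transparent quality.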


Paper accepted: LwTE-Live: Light-weight Transcoding at the Edge for Live Streaming

The 1st ACM CoNEXT Workshop on Design, Deployment, and Evaluation of Network-Assisted Video Streaming (ViSNext 2021)

Conference Website

Alireza Erfanian (Alpen-Adria-Universität Klagenfurt), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt), Farzad Tashtarian (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: Live video streaming is widely embraced in video services, and its applications have attracted much attention in recent years. The increasing number of users demanding high-quality (e.g., 4K resolution) live videos increases the bandwidth utilization in the backhaul network. To decrease bandwidth utilization in HTTP Adaptive Streaming (HAS), on-the-fly transcoding approaches deliver only the highest bitrate representation to the edge and generate the other representations by transcoding at the edge. However, this approach is inefficient due to the high transcoding cost. In this paper, we propose LwTE-Live, a light-weight transcoding-at-the-edge method for live applications, to decrease the bandwidth utilization and the overall live streaming cost. During the encoding processes at the origin server, the optimal encoding decisions are saved as metadata, and the metadata replaces the corresponding representation in the bitrate ladder. The significantly reduced size of the metadata compared to its corresponding representation decreases the bandwidth utilization. The extracted metadata is then utilized at the edge to decrease the transcoding time. We formulate the problem as a Mixed-Binary Linear Programming (MBLP) model to optimize the live streaming cost, including the bandwidth and computation costs. We compare the proposed model with state-of-the-art approaches, and the experimental results show that our proposed method saves the cost and backhaul bandwidth utilization by up to 34% and 45%, respectively.

Keywords: live video streaming, network function virtualization, NFV, light-weight transcoding, transcoding, edge computing
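The core binary decision in the MBLP, fetching a representation over the backhaul versus transcoding it at the edge from the top representation, can be illustrated with a toy cost model. This brute-force sketch is a drastic simplification of the paper's optimization (one segment, flat costs, no metadata term), and all cost figures are made up:

```python
from itertools import product

def cheapest_plan(reps, bw_cost_per_mb, compute_cost_per_s):
    """Choose fetch vs. edge-transcode per representation to minimize cost.

    reps: list of (size_mb, transcode_s) tuples, highest quality first.
    The top representation is always fetched, since edge transcoding
    derives the lower representations from it.
    """
    top_size, _ = reps[0]
    best = None
    for choice in product(("fetch", "transcode"), repeat=len(reps) - 1):
        cost = top_size * bw_cost_per_mb  # top representation always fetched
        for (size_mb, transcode_s), c in zip(reps[1:], choice):
            if c == "fetch":
                cost += size_mb * bw_cost_per_mb       # backhaul bandwidth cost
            else:
                cost += transcode_s * compute_cost_per_s  # edge computation cost
        if best is None or cost < best[0]:
            best = (cost, choice)
    return best

# Three representations: (segment size in MB, edge transcode time in s).
reps = [(8.0, 0.0), (4.0, 0.5), (2.0, 0.4)]
cost, plan = cheapest_plan(reps, bw_cost_per_mb=1.0, compute_cost_per_s=2.0)
```

With these toy numbers, transcoding both lower representations at the edge is cheaper than fetching them; the real MBLP makes this choice jointly across representations and time while also accounting for the metadata-assisted reduction in transcoding time.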