
Authors: Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), Mohammad Ghanbari (University of Essex, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Journal Website

Abstract: In HTTP Adaptive Streaming (HAS), each video is divided into smaller segments, and each segment is encoded at multiple pre-defined bitrates to construct a bitrate ladder. To optimize bitrate ladders, per-title encoding approaches encode each segment at various bitrates and resolutions to determine the convex hull. From the convex hull, an optimized bitrate ladder is constructed, resulting in an increased Quality of Experience (QoE) for end-users. With the ever-increasing efficiency of deep learning-based video enhancement approaches, they are increasingly employed at the client side to improve the QoE, specifically when GPU capabilities are available. Therefore, scalable approaches are needed to support end-user devices with both CPU and GPU capabilities (denoted as CPU-only and GPU-available end-users, respectively) as a new dimension of a bitrate ladder.
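The convex-hull step described above can be sketched as a Pareto-front selection over per-segment encodings. The measurement tuples and VMAF values below are made-up illustrations, not data from the paper:

```python
# Minimal sketch: keep only encodings that are not dominated in both
# bitrate (lower is better) and quality (higher is better).
def pareto_front(points):
    best = []
    # Sort by bitrate ascending; break ties by higher quality first.
    for p in sorted(points, key=lambda p: (p["bitrate"], -p["vmaf"])):
        if not best or p["vmaf"] > best[-1]["vmaf"]:
            best.append(p)
    return best

# Hypothetical per-segment measurements across resolutions.
encodings = [
    {"res": "1080p", "bitrate": 4500, "vmaf": 95},
    {"res": "1080p", "bitrate": 3000, "vmaf": 90},
    {"res": "720p",  "bitrate": 3000, "vmaf": 92},
    {"res": "720p",  "bitrate": 1500, "vmaf": 85},
    {"res": "480p",  "bitrate": 1500, "vmaf": 80},
]
hull = pareto_front(encodings)
# The ladder keeps, per bitrate, the resolution giving the highest quality.
```

At 3000 kbps the 720p encode outranks 1080p here, which is exactly the kind of crossover a per-title ladder exploits.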

Christina Obmann, one of our first Game Studies and Engineering students, has been recognized as outstanding in the Carinthia region by the local newspaper Kleine Zeitung. Besides her interest and work in games, she teaches at the university, is learning Chinese, and was awarded a scholarship from Huawei.


Students at Klagenfurt University decide who is the best teacher: They nominate courses for the “Teaching Award 2022”. The 14 best-rated teachers submitted teaching concepts, which were evaluated and ranked by a jury. Josef Hammer was nominated this year. Congrats!

MPEC2: Multilayer and Pipeline Video Encoding on the Computing Continuum

Conference Website: IEEE NCA 2022

Samira Afzal (Alpen-Adria-Universität Klagenfurt), Zahra Najafabadi Samani (Alpen-Adria-Universität Klagenfurt), Narges Mehran (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Radu Prodan (Alpen-Adria-Universität Klagenfurt)

Abstract:

Video streaming is the dominant traffic in today’s data-sharing world. Media service providers stream video content to their viewers, while users worldwide create and distribute videos using mobile or video system applications, significantly increasing the traffic share. We propose a multilayer and pipeline encoding on the computing continuum (MPEC2) method that addresses the key technical challenge of the high cost and computational complexity of video encoding. MPEC2 splits the video encoding into several tasks scheduled on appropriately selected Cloud and Fog computing instance types that satisfy the media service provider and user priorities in terms of time and cost.
In the first phase, MPEC2 uses a multilayer resource partitioning method to explore the instance types for encoding a video segment. In the second phase, it distributes the independent segment encoding tasks in a pipeline model on the underlying instances.
We evaluate MPEC2 on a federated computing continuum encompassing Amazon Web Services (AWS) EC2 Cloud and Exoscale Fog instances distributed across seven geographical locations. Experimental results show that MPEC2 achieves 24% faster completion time and 60% lower cost for video encoding compared to related resource allocation methods. When compared with baseline methods, MPEC2 yields 40%-50% lower completion time and 5%-60% reduced total cost.
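The time/cost trade-off MPEC2 navigates can be illustrated with a toy scheduler. The instance names, timings, and prices below are invented for illustration; the actual method performs multilayer resource partitioning before pipelining the segment encodings:

```python
import math

# Hypothetical instance types: (name, encoding seconds per segment,
# price per second). Not real AWS/Exoscale figures.
INSTANCES = [
    ("fog-small",   40, 0.00002),
    ("cloud-large", 10, 0.00015),
]

def schedule(num_segments, deadline_s, workers=4):
    """Return (cost, completion_time, instance) for the cheapest instance
    type whose pipelined completion time meets the deadline, or None.

    Segments are independent, so they can be spread over `workers`
    parallel instances; completion time is the longest pipeline lane.
    """
    options = []
    for name, secs, price in INSTANCES:
        completion = math.ceil(num_segments / workers) * secs
        cost = num_segments * secs * price
        if completion <= deadline_s:
            options.append((cost, completion, name))
    return min(options) if options else None
```

With a tight deadline the scheduler is forced onto the faster, pricier cloud instances; with a loose one the cheaper fog instances win, mirroring the time-versus-cost priorities in the abstract.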

Radu Prodan participated in the panel on “Fueling Industrial AI with Data Pipelines” and presented the Graph-Massivizer project at the European Big Data Value Forum on November 22 in Prague, Czech Republic.


Journal Website

Authors: Ningxiong Mao (Southwest Jiaotong University), Hongjie He (Southwest Jiaotong University), Fan Chen (Southwest Jiaotong University), Lingfeng Qu (Southwest Jiaotong University), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Color image Reversible Data Hiding (RDH) is becoming increasingly important as the number of its applications steadily grows. This paper proposes an efficient color image RDH scheme based on pixel value ordering (PVO), in which the channel correlation is fully utilized to improve the embedding performance. In the proposed method, the channel correlation is used throughout the data embedding process, including the prediction stage, block selection, and capacity allocation. In the prediction stage, since the pixel values in the co-located blocks in different channels are monotonically consistent, the large pixel values are collected preferentially by pre-sorting the intra-block pixels. This can effectively improve the embedding capacity of PVO-based RDH. In the block selection stage, the accuracy of the block complexity value is improved by exploiting the texture similarity between the channels. The smoother blocks are then preferentially used to reduce invalid shifts. To achieve low complexity and high accuracy in capacity allocation, the proportion of the expanded prediction error to the total expanded prediction error in each channel is calculated during the capacity allocation process. The experimental results show that the proposed scheme achieves significant superiority in fidelity over a series of state-of-the-art schemes. For example, the PSNR of the Lena image reaches 62.43 dB, a 0.16 dB gain over the best results in the literature at a 20,000-bit embedding capacity.
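For readers unfamiliar with PVO, the max-side embedding rule it builds on can be sketched in a few lines. This is the standard single-channel PVO rule, not the paper’s channel-correlated variant:

```python
def pvo_embed_max(block, bit):
    """Embed one bit into the largest pixel of `block` (a flat list of
    pixel values) using max-side pixel value ordering.

    The largest pixel is predicted by the second largest; a prediction
    error of exactly 1 is 'expandable' and carries the payload bit,
    larger errors are shifted by 1 so decoding stays unambiguous.
    """
    order = sorted(range(len(block)), key=lambda i: block[i])
    i_max, i_2nd = order[-1], order[-2]
    err = block[i_max] - block[i_2nd]
    out = block[:]
    if err == 1:        # expandable: carry one payload bit
        out[i_max] += bit
    elif err > 1:       # non-expandable: shift only
        out[i_max] += 1
    return out
```

Only blocks with small prediction errors carry data, which is why preferring smooth blocks (as the paper’s block selection does) reduces wasted, "invalid" shifts.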

Keywords: Reversible data hiding, color image, pixel value ordering, channel correlation


IEEE ISM’2022 (https://www.ieee-ism.org/)

Authors: Shivi Vats, Jounsup Park, Klara Nahrstedt, Michael Zink, Ramesh Sitaraman, and Hermann Hellwagner

Abstract: In a 5G testbed, we use 360° video streaming to test, measure, and demonstrate the 5G infrastructure, including the capabilities and challenges of edge computing support. Specifically, we use the SEAWARE (Semantic-Aware View Prediction) software system, originally described in [1], at the edge of the 5G network to support a 360° video player (handling tiled videos) by view prediction. Originally, SEAWARE performs semantic analysis of a 360° video on the media server, by extracting, e.g., important objects and events. This video semantic information is encoded in specific data structures and shared with the client in a DASH streaming framework. Making use of these data structures, the client/player can perform view prediction without in-depth, computationally expensive semantic video analysis. In this paper, the SEAWARE system was ported and adapted to run (partially) on the edge where it can be used to predict views and prefetch predicted segments/tiles in high quality in order to have them available close to the client when requested. The paper gives an overview of the 5G testbed, the overall architecture, and the implementation of SEAWARE at the edge server. Since an important goal of this work is to achieve low motion-to-glass latencies, we developed and describe “tile postloading”, a technique that allows non-predicted tiles to be fetched in high quality into a segment already available in the player buffer. The performance of 360° tiled video playback on the 5G infrastructure is evaluated and presented. Current limitations of the 5G network in use and some challenges of DASH-based streaming and of edge-assisted viewport prediction under “real-world” constraints are pointed out; further, the performance benefits of tile postloading are disclosed.
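Edge-assisted prefetching of this kind ultimately reduces to mapping a predicted viewport centre onto the tile grid. The sketch below is illustrative only (not the SEAWARE implementation), with an assumed 8x4 equirectangular tiling and 110-degree field of view:

```python
def tiles_in_view(yaw_deg, pitch_deg, cols=8, rows=4, fov_deg=110):
    """Return the (col, row) tiles whose centres fall within a
    fov_deg-wide viewport centred at (yaw, pitch), on an
    equirectangular layout spanning 360 x 180 degrees."""
    tile_w, tile_h = 360 / cols, 180 / rows
    half = fov_deg / 2
    selected = set()
    for c in range(cols):
        for r in range(rows):
            cx = c * tile_w + tile_w / 2 - 180    # tile centre yaw
            cy = 90 - (r * tile_h + tile_h / 2)   # tile centre pitch
            # Yaw wraps around at +/-180 degrees.
            dyaw = min(abs(yaw_deg - cx), 360 - abs(yaw_deg - cx))
            if dyaw <= half and abs(pitch_deg - cy) <= half:
                selected.add((c, r))
    return selected
```

An edge-side predictor would run this for the predicted head pose and prefetch the returned tiles in high quality; tile postloading then patches any mispredicted tiles into the already-buffered segment.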



IEEE Transactions on Image Processing (TIP)
Journal Website


Authors: Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), Christine Guillemot (INRIA, France), Mohammad Ghanbari (University of Essex, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Light field imaging, which captures both spatial and angular information, improves user immersion by enabling post-capture actions, such as refocusing and changing view perspective. However, light fields represent very large volumes of data with a lot of redundancy that coding methods try to remove. State-of-the-art coding methods indeed usually focus on improving compression efficiency and overlook other important features in light field compression such as scalability. In this paper, we propose a novel light field image compression method that enables (i) viewport scalability, (ii) quality scalability, (iii) spatial scalability, (iv) random access, and (v) uniform quality distribution among viewports, while keeping compression efficiency high. To this end, light fields in each spatial resolution are divided into sequential viewport layers, and viewports in each layer are encoded using the previously encoded viewports. In each viewport layer, the available viewports are used to synthesize intermediate viewports using a video interpolation deep learning network. The synthesized views are used as virtual reference images to enhance the quality of intermediate views. An image super-resolution method is applied to improve the quality of the lower spatial resolution layer. The super-resolved images are also used as virtual reference images to improve the quality of the higher spatial resolution layer.
The proposed structure also improves the flexibility of light field streaming, provides random access to the viewports, and increases error resiliency. The experimental results demonstrate that the proposed method achieves a high compression efficiency and it can adapt to the display type, transmission channel, network condition, processing power, and user needs.
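The idea of sequential viewport layers can be illustrated by a simple layer assignment over the viewport grid. The dyadic pattern below (coarse anchors first, intermediates in later layers) is an assumed example, not the paper's exact layering:

```python
def assign_layers(rows, cols, num_layers=3):
    """Assign each viewport (r, c) of a light-field grid to a layer.

    Layer 0 holds views on the coarsest sampling grid; each subsequent
    layer adds the views revealed by halving the sampling stride, so a
    decoder can stop after any layer (quality/viewport scalability) or
    fetch only the layers needed for a target view (random access).
    """
    layers = {}
    for k in range(num_layers):
        stride = 2 ** (num_layers - 1 - k)
        for r in range(0, rows, stride):
            for c in range(0, cols, stride):
                layers.setdefault((r, c), k)  # keep the earliest layer
    return layers
```

Views in layer k are predicted (or synthesized, in the paper, by a deep interpolation network) from the already-decoded views of layers 0..k-1, which is what makes truncating the bitstream at a layer boundary safe.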

Keywords: Light field, compression, scalability, random access, deep learning.

The threat of climate change requires a drastic reduction of global greenhouse gas (GHG) emissions across several societal spheres. This also applies to reducing and rethinking the energy consumption of digital technologies. Video streaming technology is responsible for more than half of digital technology’s global impact [ref]. The volume of video data, the processing of video content, and streaming are all growing rapidly, not least now that digital and remote work have become mainstream, driving up energy consumption and the associated GHG emissions.

The International Workshop on Green Multimedia Systems 2023 (GMSys 2023) aims to bring together experts and researchers to present and discuss recent developments and challenges for energy reduction in multimedia systems. This workshop focuses on innovations, concepts, and energy-efficient solutions from video generation to processing, delivery, and further usage.

Find further info at https://athena.wp.itec.aau.at/events-gmsys23/