From December 9 to December 11, the 6th Klagenfurt Winter Jam took place at the Alpen-Adria-Universität Klagenfurt. More than 80 highly motivated game enthusiasts worked for 48 hours on 21 new games and presented their results to the public on Sunday. More jammers joined online to participate remotely. It was an excellent comeback after the time of quarantines and restrictions, and the game jammers appreciated the opportunity to make new contacts, work together, and meet old friends in a relaxed and creative environment. Check out our video.

Save the date for the next Game Jams!

2nd Hüttenjam, a special event with limited seats, 13 – 16 April 2023

10th Game Jam will be on the weekend of 2 – 4 June 2023


We are happy to announce that the Call for Papers for our conference Video Game Cultures 2023: Exploring New Horizons is online now.

Please see our website for more information and submission details.


ICME'23, July 2023, Brisbane, Australia

Organizers:

  • Hadi Amirpour, University of Klagenfurt

  • Angeliki Katsenou, Trinity College Dublin, IE and University of Bristol, UK

Abstract

Video streaming in the context of HTTP Adaptive Streaming (HAS) is replacing legacy media platforms, and its market share is growing rapidly due to its simplicity, reliability, and standard support (e.g., MPEG-DASH). This results in an ever-increasing amount of video content; nowadays, video accounts for the vast majority of internet traffic, either in the form of user-generated content (UGC) or pristine cinematic content. For HAS, the video is usually encoded in multiple versions (i.e., representations) of different resolutions, bitrates, codecs, etc., and each representation is divided into chunks (i.e., segments) of equal length (e.g., 2-10 seconds) to enable dynamic, adaptive switching during streaming based on the user’s context (e.g., network conditions, device characteristics, user preferences). Read more
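To make the representation/segment structure concrete, here is a minimal sketch of a client-side rate-adaptation step. The ladder entries, segment duration, and the pick_representation helper are illustrative assumptions, not part of any specific player or of MPEG-DASH itself.

```python
# Minimal sketch of HAS rate adaptation (illustrative values, not a real player).

# A bitrate ladder: each representation has a resolution and a target bitrate (kbps).
LADDER = [
    {"resolution": (640, 360),   "bitrate_kbps": 700},
    {"resolution": (1280, 720),  "bitrate_kbps": 2400},
    {"resolution": (1920, 1080), "bitrate_kbps": 4800},
    {"resolution": (3840, 2160), "bitrate_kbps": 16000},
]

SEGMENT_DURATION_S = 4  # segments of equal length, e.g., 2-10 seconds


def pick_representation(measured_throughput_kbps: float, safety_factor: float = 0.8):
    """Choose the highest representation whose bitrate fits the measured throughput,
    leaving some headroom (safety_factor) for throughput fluctuations."""
    budget = measured_throughput_kbps * safety_factor
    feasible = [r for r in LADDER if r["bitrate_kbps"] <= budget]
    return feasible[-1] if feasible else LADDER[0]


if __name__ == "__main__":
    # The client re-evaluates its choice before requesting each segment.
    for throughput in (1500, 5200, 900, 20000):
        rep = pick_representation(throughput)
        print(f"throughput={throughput} kbps -> "
              f"{rep['resolution'][1]}p @ {rep['bitrate_kbps']} kbps "
              f"for the next {SEGMENT_DURATION_S}s segment")
```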


Authors: Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), Mohammad Ghanbari (University of Essex, UK), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Journal Website

Abstract: In HTTP Adaptive Streaming (HAS), each video is divided into smaller segments, and each segment is encoded at multiple pre-defined bitrates to construct a bitrate ladder. To optimize bitrate ladders, per-title encoding approaches encode each segment at various bitrates and resolutions to determine the convex hull. From the convex hull, an optimized bitrate ladder is constructed, resulting in an increased Quality of Experience (QoE) for end-users. As deep learning-based video enhancement approaches become ever more efficient, they are increasingly employed at the client side to improve the QoE, particularly when GPU capabilities are available. Therefore, scalable approaches are needed to support end-user devices with both CPU and GPU capabilities (denoted as CPU-only and GPU-available end-users, respectively) as a new dimension of a bitrate ladder. Read more
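To illustrate the convex-hull step, the sketch below builds an upper convex hull over hypothetical (bitrate, quality, resolution) points gathered from test encodes at several resolutions; the sample points, quality numbers, and helper names are invented for illustration and are not taken from the paper.

```python
# Sketch of per-title convex-hull construction from test encodes.
# The (bitrate_kbps, quality, resolution) points below are hypothetical.

points = [
    (500,  62.0, "360p"),  (1000, 71.0, "360p"),  (2000, 75.0, "360p"),
    (1000, 68.0, "720p"),  (2000, 79.0, "720p"),  (4000, 86.0, "720p"),
    (2000, 74.0, "1080p"), (4000, 85.0, "1080p"), (8000, 92.0, "1080p"),
]


def _cross(o, a, b):
    """2D cross product of vectors o->a and o->b in (bitrate, quality) space."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])


def upper_convex_hull(pts):
    """Upper convex hull over (bitrate, quality) points: the diminishing-returns
    frontier from which an optimized bitrate ladder can be selected."""
    pts = sorted(pts, key=lambda p: (p[0], p[1]))
    hull = []
    for p in pts:
        # Pop points that would make the frontier non-concave.
        while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull


if __name__ == "__main__":
    for bitrate, quality, resolution in upper_convex_hull(points):
        print(f"{resolution:>6} @ {bitrate:>5} kbps -> quality {quality}")
```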

Christina Obmann, one of our first Game Studies and Engineering students, has been recognized as outstanding in the Carinthia region by the local newspaper Kleine Zeitung. Besides her interest and work in games, she teaches at the university, is learning Chinese, and was awarded a scholarship from Huawei.


Students at Klagenfurt University decide who the best teachers are: they nominate courses for the “Teaching Award 2022”. The 14 best-rated teachers submitted teaching concepts, which were evaluated and ranked by a jury. Josef Hammer was nominated this year. Congrats!

MPEC2: Multilayer and Pipeline Video Encoding on the Computing Continuum

Conference Website: IEEE NCA 2022

Authors: Samira Afzal (Alpen-Adria-Universität Klagenfurt), Zahra Najafabadi Samani (Alpen-Adria-Universität Klagenfurt), Narges Mehran (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), and Radu Prodan (Alpen-Adria-Universität Klagenfurt)

Abstract:

Video streaming is the dominant traffic in today’s data-sharing world. Media service providers stream video content to their viewers, while worldwide users create and distribute videos using mobile or video system applications, which significantly increases the traffic share. We propose a multilayer and pipeline encoding on the computing continuum (MPEC2) method that addresses the key technical challenge of the high price and computational complexity of video encoding. MPEC2 splits video encoding into several tasks scheduled on appropriately selected Cloud and Fog computing instance types that satisfy the media service provider’s and users’ priorities in terms of time and cost.
In the first phase, MPEC2 uses a multilayer resource partitioning method to explore the instance types for encoding a video segment. In the second phase, it distributes the independent segment encoding tasks in a pipeline model on the underlying instances.
We evaluate MPEC2 on a federated computing continuum encompassing Amazon Web Services (AWS) EC2 Cloud and Exoscale Fog instances distributed across seven geographical locations. Experimental results show that MPEC2 achieves 24% faster completion time and 60% lower cost for video encoding compared to related resource allocation methods. Compared with baseline methods, MPEC2 yields 40%-50% lower completion time and 5%-60% reduced total cost.
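As a rough illustration of the two-phase idea, the toy sketch below first picks instance types by a simple time or cost priority and then list-schedules the independent segment-encoding tasks onto them; the instance catalogue, prices, speeds, and helper names are invented and do not reflect the paper's actual partitioning or scheduling model.

```python
# Toy sketch of two-phase segment scheduling on a cloud/fog continuum.
# Instance types, prices, and speeds are invented for illustration only.

from dataclasses import dataclass
import heapq


@dataclass
class InstanceType:
    name: str
    tier: str              # "cloud" or "fog"
    encode_speed: float    # encoded seconds of video per wall-clock second
    price_per_hour: float  # currency units per hour


CATALOGUE = [
    InstanceType("cloud.large", "cloud", 2.0, 0.40),
    InstanceType("cloud.small", "cloud", 1.0, 0.10),
    InstanceType("fog.edge",    "fog",   0.5, 0.02),
]


def pick_instances(num_workers: int, prefer: str):
    """Phase 1 (sketch): select instance types by a simple priority
    ('time' -> fastest, 'cost' -> cheapest per encoded second)."""
    if prefer == "time":
        key = lambda it: -it.encode_speed
    else:  # "cost"
        key = lambda it: it.price_per_hour / it.encode_speed
    best = sorted(CATALOGUE, key=key)[0]
    return [best] * num_workers


def pipeline_encode(segment_lengths, workers):
    """Phase 2 (sketch): dispatch independent segment-encoding tasks to the
    earliest-available worker (simple list scheduling in a pipeline)."""
    free_at = [(0.0, i) for i in range(len(workers))]  # (time worker is free, index)
    heapq.heapify(free_at)
    total_cost = 0.0
    makespan = 0.0
    for seg_len in segment_lengths:
        start, idx = heapq.heappop(free_at)
        w = workers[idx]
        duration = seg_len / w.encode_speed
        finish = start + duration
        total_cost += duration / 3600.0 * w.price_per_hour
        makespan = max(makespan, finish)
        heapq.heappush(free_at, (finish, idx))
    return makespan, total_cost


if __name__ == "__main__":
    segments = [4.0] * 30  # thirty 4-second segments
    for prefer in ("time", "cost"):
        workers = pick_instances(num_workers=4, prefer=prefer)
        makespan, cost = pipeline_encode(segments, workers)
        print(f"{prefer:>4}-optimized: makespan={makespan:.1f}s, cost={cost:.4f}")
```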

Radu Prodan participated in the panel on “Fueling Industrial AI with Data Pipelines” and presented the Graph-Massivizer project at the European Big Data Value Forum on November 22 in Prague, Czech Republic.


Journal Website

Authors: Ningxiong Mao (Southwest Jiaotong University), Hongjie He (Southwest Jiaotong University), Fan Chen (Southwest Jiaotong University), Lingfeng Qu (Southwest Jiaotong University), Hadi Amirpour (Alpen-Adria-Universität Klagenfurt, Austria), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Austria)

Abstract: Color image Reversible Data Hiding (RDH) is becoming increasingly important as the number of its applications steadily grows. This paper proposes an efficient color image RDH scheme based on pixel value ordering (PVO), in which the channel correlation is fully utilized to improve the embedding performance. In the proposed method, the channel correlation is used throughout the data embedding process, including the prediction stage, block selection, and capacity allocation. In the prediction stage, since the pixel values in the co-located blocks of different channels are monotonically consistent, the large pixel values are collected preferentially by pre-sorting the intra-block pixels. This can effectively improve the embedding capacity of PVO-based RDH. In the block selection stage, the accuracy of the block complexity value is improved by exploiting the texture similarity between the channels. Smoother blocks are then preferentially used to reduce invalid shifts. To achieve low complexity and high accuracy in capacity allocation, the proportion of the expanded prediction error to the total expanded prediction error in each channel is calculated during the capacity allocation process. The experimental results show that the proposed scheme achieves significant superiority in fidelity over a series of state-of-the-art schemes. For example, the PSNR of the Lena image reaches 62.43 dB, a 0.16 dB gain compared to the best results in the literature at a 20,000-bit embedding capacity.

Keywords: Reversible data hiding, color image, pixel value ordering, channel correlation
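For readers unfamiliar with PVO, here is a minimal sketch of the classic max-side embedding step on a single block; the block values and helper name are illustrative, and the paper's actual contributions (channel-correlation-based pre-sorting, block selection, and capacity allocation across the color channels) are not reproduced here.

```python
# Minimal sketch of the max-side step of pixel-value-ordering (PVO) embedding
# for a single block (grayscale values shown; illustrative, not the paper's scheme).

def pvo_embed_max(block, bit):
    """Classic PVO on one block: predict the largest pixel from the
    second-largest; embed one bit when the prediction error equals 1,
    shift by 1 when it is larger, and leave equal values unchanged."""
    order = sorted(range(len(block)), key=lambda i: block[i])  # ascending
    i_max, i_2nd = order[-1], order[-2]
    error = block[i_max] - block[i_2nd]
    out = list(block)
    if error == 1:
        out[i_max] += bit       # embeds 0 or 1
    elif error > 1:
        out[i_max] += 1         # no capacity here: just shift
    return out


if __name__ == "__main__":
    block = [101, 103, 99, 104]          # a 2x2 block, flattened
    print(pvo_embed_max(block, bit=1))   # -> [101, 103, 99, 105]
    print(pvo_embed_max(block, bit=0))   # -> [101, 103, 99, 104]
```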


IEEE ISM’2022 (https://www.ieee-ism.org/)

Authors: Shivi Vats, Jounsup Park, Klara Nahrstedt, Michael Zink, Ramesh Sitaraman, and Hermann Hellwagner

Abstract: In a 5G testbed, we use 360° video streaming to test, measure, and demonstrate the 5G infrastructure, including the capabilities and challenges of edge computing support. Specifically, we use the SEAWARE (Semantic-Aware View Prediction) software system, originally described in [1], at the edge of the 5G network to support a 360° video player (handling tiled videos) by view prediction. Originally, SEAWARE performs semantic analysis of a 360° video on the media server by extracting, e.g., important objects and events. This video semantic information is encoded in specific data structures and shared with the client in a DASH streaming framework. Making use of these data structures, the client/player can perform view prediction without in-depth, computationally expensive semantic video analysis. In this paper, the SEAWARE system was ported and adapted to run (partially) on the edge, where it can be used to predict views and prefetch predicted segments/tiles in high quality in order to have them available close to the client when requested. The paper gives an overview of the 5G testbed, the overall architecture, and the implementation of SEAWARE at the edge server. Since an important goal of this work is to achieve low motion-to-glass latencies, we developed and describe “tile postloading”, a technique that allows non-predicted tiles to be fetched in high quality into a segment already available in the player buffer. The performance of 360° tiled video playback on the 5G infrastructure is evaluated and presented. Current limitations of the 5G network in use and some challenges of DASH-based streaming and of edge-assisted viewport prediction under “real-world” constraints are pointed out; further, the performance benefits of tile postloading are disclosed.
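To sketch the prefetching and tile-postloading idea at a high level, the toy example below marks predicted tiles for high-quality prefetch and later upgrades only the low-quality tiles that end up in the real viewport; the tile indices, quality labels, and function names are placeholders, not the actual SEAWARE implementation.

```python
# Schematic sketch of edge-assisted tile prefetching with "tile postloading":
# placeholder names and data structures, not the actual SEAWARE implementation.

HIGH, LOW = "high", "low"


def prefetch_segment(predicted_tiles, all_tiles):
    """Edge side: fetch predicted viewport tiles in high quality and the
    remaining tiles in low quality for one upcoming segment."""
    return {t: (HIGH if t in predicted_tiles else LOW) for t in all_tiles}


def postload_tiles(buffered_segment, actual_viewport_tiles):
    """Client side: if the real viewport contains tiles that were only fetched
    in low quality, re-fetch just those tiles in high quality into the
    already-buffered segment instead of re-downloading the whole segment."""
    to_upgrade = [t for t in actual_viewport_tiles
                  if buffered_segment.get(t) == LOW]
    for t in to_upgrade:
        buffered_segment[t] = HIGH  # stands in for the per-tile HTTP request
    return to_upgrade


if __name__ == "__main__":
    all_tiles = list(range(12))                 # e.g., a 4x3 tiling
    predicted = {0, 1, 4, 5}                    # tiles the edge predicted
    segment = prefetch_segment(predicted, all_tiles)
    upgraded = postload_tiles(segment, actual_viewport_tiles={4, 5, 6})
    print("postloaded tiles:", upgraded)        # -> [6]
```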