Medical Multimedia Information Systems

On June 14, 2024, Klaus Schöffmann, together with Cathal Gurrin from DCU, Ireland, gave a keynote talk on “From Concepts to Embeddings: Charting the Use of AI in Digital Video and Lifelog Search Over the Last Decade” at the International Workshop on Multimodal Video Retrieval and Multimodal Language Modelling (MVRMLM’24), co-located with the ACM ICMR 2024 conference in Phuket, Thailand.


Here is the abstract of the talk:

In the past decade, the field of interactive multimedia retrieval has undergone a transformative evolution driven by advances in artificial intelligence (AI). This keynote talk will explore the journey from early concept-based retrieval systems to the sophisticated embedding-based techniques that dominate the landscape today. By examining the progression of such AI-driven approaches at both the VBS (Video Browser Showdown) and the LSC (Lifelog Search Challenge), we will highlight the pivotal role of comparative benchmarking in accelerating innovation and establishing performance standards. We will also look ahead to potential future developments in interactive multimedia retrieval benchmarking, including emerging trends, the integration of multimodal data, and future comparative benchmarking challenges within our community.
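To make the shift from concept-based to embedding-based retrieval concrete, the sketch below ranks video keyframes by cosine similarity between their embeddings and a query embedding mapped into the same space. The vectors, their dimensionality, and the function name are invented for illustration and are not taken from any VBS or LSC system:

```python
import numpy as np

def cosine_similarity(query, embeddings):
    """Score each item embedding by cosine similarity to the query."""
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return e @ q

# Hypothetical 4-dimensional embeddings for three video keyframes.
keyframes = np.array([
    [0.9, 0.1, 0.0, 0.1],
    [0.1, 0.8, 0.3, 0.0],
    [0.0, 0.2, 0.9, 0.2],
])
# A query (e.g., text) embedded into the same space by a joint model.
query = np.array([1.0, 0.0, 0.0, 0.0])

scores = cosine_similarity(query, keyframes)
ranking = np.argsort(-scores)  # indices of keyframes, best match first
print(ranking)
```

In a real embedding-based search system the vectors would come from a joint text-image model and be indexed for approximate nearest-neighbour search rather than scored exhaustively as here.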


On June 10, 2024, the 7th Lifelog Search Challenge (LSC 2024), an international competition on lifelog retrieval, took place as a workshop at the ACM International Conference on Multimedia Retrieval (ICMR 2024) in Phuket, Thailand. The LSC is organized by a large international team (Cathal Gurrin, Björn Þór Jónsson, Duc-Tien Dang-Nguyen, Jakub Lokoc, Klaus Schoeffmann, Minh-Triet Tran, Steve Hodges, Graham Healy, Luca Rossetto, and Werner Bailer) and attracted 21 teams from all around the world (Austria, Czechia, Germany, Iceland, Ireland, Italy, Netherlands, Norway, Portugal, Switzerland, and Vietnam). The competition tests how quickly and accurately state-of-the-art lifelog retrieval systems can solve search tasks (known-item search, ad-hoc search, visual question answering) in a shared dataset of about 720,000 images, collected by an anonymous lifelogger over 18 months. With the LIFEXPLORE system developed by Martin Rader, Mario Leopold, and Klaus Schöffmann, ITEC won this competition for the second time in a row and received the award for Best LSC System. Congratulations!

From June 10 to June 14, 2024, the ACM International Conference on Multimedia Retrieval (ICMR 2024) took place in Phuket, Thailand. It was organized by Cathal Gurrin (DCU), Klaus Schoeffmann (ITEC, AAU), and Rachada Kongkachandra (Thammasat University). ICMR 2024 received 348 paper submissions, plus about 80 more to the nine co-located workshops (LSC’24, AI-SIPM’24, MORE’24, ICDAR’24, MAD’24, AIQAM’24, MUWS’24, R2B’24, and MVRMLM’24). The conference attracted 202 on-site participants (including local organizers) and featured 10 oral sessions, an on-site and a virtual poster session, a demo session, a reproducibility session, two keynotes on Multimodal Retrieval in Computer Vision (Mubarak Shah) and AI-Based Video Analytics (Supavadee Aramvith), a panel on LLMs and Multimedia (Alan Smeaton), and four interesting tutorials.


The diveXplore video retrieval system, by Klaus Schoeffmann and Sahar Nasirihaghighi, won the award for best ‘Video Question-Answering-Tool for Novices’ at the 13th Video Browser Showdown (VBS 2024), an international video search challenge held annually at the International Conference on Multimedia Modeling (MMM 2024), which this year took place in Amsterdam, the Netherlands. VBS 2024 was a six-hour challenge with many search tasks of different types (known-item search/KIS, ad-hoc video search/AVS, question answering/QA) in three different datasets amounting to about 2,500 hours of video content; some tasks were performed by experts and others by novices recruited from the conference audience.


The 13th Video Browser Showdown (VBS 2024) was held on January 29, 2024, in Amsterdam, the Netherlands, at the International Conference on Multimedia Modeling (MMM 2024). Twelve international teams (from Austria, China, Czech Republic, Germany, Greece, Iceland, Ireland, Italy, Singapore, Switzerland, the Netherlands, and Vietnam) competed for about six hours to solve many search tasks of different types (known-item search/KIS, ad-hoc video search/AVS, question answering/QA) quickly and accurately in three datasets with about 2,500 hours of video content. As in previous years, this large-scale international video retrieval challenge was an exciting event that demonstrated the state-of-the-art performance of interactive video retrieval systems.

The Interactive Video Retrieval for Beginners (IVR4B) special session and competition took place on September 21, 2023, at the International Conference on Content-Based Multimedia Indexing (CBMI 2023) in Orléans, France.

We are happy to announce that Klaus Schoeffmann secured the Best KIS-Visual award in this competition with his interactive video search system diveXplore.

From September 7-9, nearly 40 participants joined us at AAU Klagenfurt to discuss and theorise about the theme of “Video Game Cultures: Exploring New Horizons.” VGC is a recurring conference reinstated after the lockdowns, coordinated between universities and scholars from the US, UK, Czech Republic, Austria, and Germany. This year’s conference was an organisational collaboration between ITEC and the Department of English at AAU; Felix Schniz and René Schallegger were the local organising chairs. We had the pleasure not only of listening to a wonderful variety of perspectives and approaches from our participants in and around the field of Game Studies, but also of cultivating a kind and constructive atmosphere. Many thanks to everyone who helped set up this year’s VGC, especially our sponsors!


Sahar Nasirihaghighi presented the paper titled “Action Recognition in Video Recordings from Gynecology Laparoscopy” at the IEEE 36th International Symposium on Computer-Based Medical Systems (CBMS 2023).

Authors: Sahar Nasirihaghighi, Negin Ghamsarian, Daniela Stefanics, Klaus Schoeffmann and Heinrich Husslein

Abstract: Action recognition is a prerequisite for many applications in laparoscopic video analysis including but not limited to surgical training, operation room planning, follow-up surgery preparation, post-operative surgical assessment, and surgical outcome estimation. However, automatic action recognition in laparoscopic surgeries involves numerous challenges such as (I) cross-action and intra-action duration variation, (II) relevant content distortion due to smoke, blood accumulation, fast camera motions, organ movements, and object occlusion, and (III) surgical scene variations due to different illuminations and viewpoints. Moreover, action annotations in laparoscopic surgery are limited and expensive, as they require expert knowledge. In this study, we design and evaluate a CNN-RNN architecture as well as a customized training-inference framework to deal with the mentioned challenges in laparoscopic surgery action recognition. Using stacked recurrent layers, our proposed network takes advantage of inter-frame dependencies to negate the negative effect of content distortion and variation in action recognition. Furthermore, our proposed frame sampling strategy effectively manages the duration variations in surgical actions to enable action recognition with high temporal resolution. Our extensive experiments confirm the superiority of our proposed method in action recognition compared to static CNNs.
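The abstract highlights a frame sampling strategy for coping with strong duration variation across surgical actions. The paper's exact strategy is not reproduced here; as a rough sketch of the general idea, the function below draws a fixed-length sequence of frame indices from clips of very different lengths, padding short clips and picking segment centres from long ones (the function name, segment count, and padding rule are illustrative assumptions):

```python
import numpy as np

def sample_frames(num_frames, num_samples=8):
    """Return num_samples frame indices for a clip of num_frames frames,
    so that short and long actions yield equal-length network inputs."""
    if num_frames <= num_samples:
        # Short action: keep all frames, repeat the last one as padding.
        idx = list(range(num_frames)) + [num_frames - 1] * (num_samples - num_frames)
        return np.array(idx)
    # Long action: split into num_samples equal segments, take each centre.
    edges = np.linspace(0, num_frames, num_samples + 1)
    return ((edges[:-1] + edges[1:]) / 2).astype(int)

print(sample_frames(4))    # short clip, padded to 8 indices
print(sample_frames(240))  # long clip, subsampled to 8 indices
```

In a CNN-RNN pipeline like the one described, each sampled frame would be encoded by the CNN and the resulting feature sequence fed to the stacked recurrent layers.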

Sebastian Uitz and Michael Steinkellner showcased their game, A Webbing Journey, at the A1 Austria eSports Festival in the Austria Center Vienna on May 27, 2023. The booth, featuring two PCs, a Steam Deck, and a Nintendo Switch, offered players of all ages a delightful experience. Valuable feedback was gathered, fueling the team’s determination to enhance the game for future events.

We are grateful for the positive response and eagerly await incorporating the feedback received. With its endearing storyline and unique gameplay mechanics, the game continues to build anticipation for its official release, offering an enchanting adventure filled with exploration and heartwarming quests.

Authors: Negin Ghamsarian, Javier Gamazo Tejero, Pablo Márquez Neila, Sebastian Wolf, Martin Zinkernagel, Klaus Schoeffmann, and Raphael Sznitman

26th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2023), Vancouver, Canada, 8-12 October 2023

Abstract: Models capable of leveraging unlabelled data are crucial in overcoming large distribution gaps between the acquired datasets across different imaging devices and configurations. In this regard, self-training techniques based on pseudo-labeling have been shown to be highly effective for unsupervised domain adaptation. However, the unreliability of pseudo labels can hinder the capability of self-training techniques to induce abstract representation from the unlabeled target dataset, especially in the case of large distribution gaps. Since the neural network performance should be invariant to image transformations, we look to this fact to identify uncertain pseudo labels. Indeed, we argue that transformation invariant detections can provide more reasonable approximations of ground truth. Accordingly, we propose an unsupervised domain adaptation strategy termed transformation-invariant self-training (TI-ST) to assess pixel-wise pseudo-labels’ reliability and filter out unreliable detections during self-training. We perform comprehensive evaluations for domain adaptation using three different modalities of medical images, two different network architectures, and several alternative state-of-the-art domain adaptation methods. Experimental results confirm the superiority of our proposed method in mitigating the lack of target domain annotation and boosting segmentation performance in the target domain.
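The core idea of transformation-invariant self-training can be sketched in a few lines: segment an image and a transformed copy of it, undo the transformation on the second prediction, and keep a pixel's pseudo-label only where the two predictions agree and are confident. The sketch below is not the authors' implementation; the horizontal flip, confidence threshold, and ignore index 255 are illustrative assumptions:

```python
import numpy as np

def filter_pseudo_labels(probs, probs_flip, conf_thresh=0.8):
    """Pixel-wise pseudo-label filtering via a flip-invariance check.

    probs:      (C, H, W) softmax map predicted on the image.
    probs_flip: (C, H, W) softmax map predicted on the horizontally
                flipped image; it is flipped back here for comparison.
    Returns an (H, W) pseudo-label map with unreliable pixels set to 255.
    """
    probs_back = probs_flip[:, :, ::-1]          # undo the horizontal flip
    labels = probs.argmax(axis=0)
    labels_back = probs_back.argmax(axis=0)
    confident = probs.max(axis=0) >= conf_thresh
    reliable = (labels == labels_back) & confident
    return np.where(reliable, labels, 255)       # 255 = ignore in training

# Toy example: 2 classes, a 1x3 image.
probs = np.array([[[0.9, 0.6, 0.2]],
                  [[0.1, 0.4, 0.8]]])
probs_flip = np.array([[[0.7, 0.6, 0.9]],
                       [[0.3, 0.4, 0.1]]])
pseudo = filter_pseudo_labels(probs, probs_flip)
print(pseudo)  # pixel 1 fails the confidence check, pixel 2 the invariance check
```

Self-training would then continue on the target domain using only the surviving pseudo-labels as supervision.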