A Tutorial at ACM SIGCOMM 2025

Optimizing Low-Latency Video Streaming: AI-Assisted Codec-Network Coordination

Coimbra, Portugal, September 8–11, 2025.


Tutorial speakers:

  • Farzad Tashtarian (Alpen-Adria-Universität – AAU)
  • Zili Meng (Hong Kong University of Science and Technology – HKUST)
  • Abdelhak Bentaleb (Concordia University)
  • Mahdi Dolati (Sharif University of Technology)

This tutorial addresses the emerging need for ultra-low-latency video streaming and shows how AI-assisted coordination between codecs and network infrastructure can significantly improve performance. Traditional end-to-end streaming pipelines are often disjointed, leading to inefficiencies under tight latency constraints. We present a cross-layer approach that leverages AI for real-time encoding parameter adaptation, network-aware bitrate selection, and joint optimization across codec behavior and transport protocols. The tutorial examines the integration of AI models with programmable network architectures (e.g., SDN, P4) and modern transport technologies such as QUIC and Media over QUIC (MoQ) to minimize startup delay, stall events, and encoding overhead. Practical use cases and experimental insights illustrate how aligning codec dynamics with real-time network conditions enhances both quality of experience (QoE) and system efficiency. Designed for both researchers and engineers, this session provides a foundation for developing next-generation intelligent video delivery systems capable of sustaining low-latency performance in dynamic environments.
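
As a concrete illustration of the network-aware bitrate selection mentioned above, the short Python sketch below picks an encoding bitrate from measured throughput and client buffer occupancy. It is a minimal, hypothetical example written for this page: the bitrate ladder, the 0.8 safety margin, and the half-of-target buffer threshold are illustrative assumptions, not the algorithm presented in the tutorial.

    # Minimal, hypothetical sketch of network-aware bitrate selection.
    # The ladder values, safety margin, and buffer threshold are illustrative
    # assumptions, not the tutorial's actual algorithm.

    BITRATE_LADDER_KBPS = [500, 1200, 2500, 4500, 8000]  # assumed encoding ladder

    def select_bitrate(throughput_kbps: float, buffer_ms: float,
                       target_latency_ms: float = 1000.0) -> int:
        """Pick the highest ladder rung that fits the measured throughput,
        then back off if the client buffer is too shallow for the latency target."""
        # Keep a safety margin so the sender does not saturate the link.
        safe_rate = 0.8 * throughput_kbps
        candidates = [b for b in BITRATE_LADDER_KBPS if b <= safe_rate]
        choice = candidates[-1] if candidates else BITRATE_LADDER_KBPS[0]

        # Under tight latency budgets the playback buffer is small; step down one
        # rung whenever it falls below half the latency target to avoid stalls.
        if buffer_ms < 0.5 * target_latency_ms and choice != BITRATE_LADDER_KBPS[0]:
            choice = BITRATE_LADDER_KBPS[BITRATE_LADDER_KBPS.index(choice) - 1]
        return choice

    # Example: 3.2 Mbps measured throughput, 400 ms of buffered media -> 1200 kbps.
    print(select_bitrate(3200.0, 400.0))

In the cross-layer setting the tutorial targets, a decision like this would also feed back into the encoder (for example, adjusting rate control or frame structure) rather than driving only the client-side request.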

A Tutorial at EUVIP 2025

Malta, October 13–16, 2025.

Tutorial speakers:

  • Wei Zhou (Cardiff University)
  • Hadi Amirpour (University of Klagenfurt)

Tutorial description:

As multimedia services like video streaming, video conferencing, virtual reality (VR), and online gaming continue to evolve, ensuring high perceptual visual quality is crucial for enhancing user experience and maintaining competitiveness. However, multimedia content inevitably undergoes various distortions during acquisition, compression, transmission, and storage, leading to quality degradation. Therefore, perceptual visual quality assessment, which evaluates multimedia quality from a human perception perspective, plays a vital role in optimizing user experience in modern communication systems. This tutorial provides a comprehensive overview of perceptual visual quality assessment, covering both subjective methods, where human observers directly rate their experience, and objective methods, where computational models predict perceptual quality based on measurable factors such as bitrate, frame rate, and compression levels. The session also explores quality assessment metrics tailored to different types of multimedia content, including images, videos, VR, point clouds, meshes, and AI-generated media. Furthermore, we discuss challenges posed by diverse multimedia characteristics, complex distortion scenarios, and varying viewing conditions. By the end of this tutorial, attendees will gain a deep understanding of the principles, methodologies, and latest advancements in perceptual visual quality assessment for multimedia communication.
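
To make the notion of an objective method concrete, the short Python sketch below computes PSNR, one of the simplest full-reference quality metrics, between a reference frame and a distorted copy. It is an illustrative example written for this page rather than material from the tutorial; modern perceptual metrics such as SSIM, VMAF, or learned models track human judgment far more closely.

    # Illustrative example of a simple full-reference objective metric (PSNR).
    # Modern perceptual metrics (SSIM, VMAF, learned models) are more faithful
    # to human perception; PSNR is shown only because it is easy to follow.
    import numpy as np

    def psnr(reference: np.ndarray, distorted: np.ndarray, max_value: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB between a reference frame and a distorted one."""
        mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical frames
        return 10.0 * np.log10((max_value ** 2) / mse)

    # Synthetic example: a random 720p luma frame and a noisy copy of it.
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(720, 1280), dtype=np.uint8)
    noisy = np.clip(frame.astype(np.float64) + rng.normal(0.0, 5.0, frame.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(frame, noisy):.2f} dB")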