AOMedia Highlights from QoMEX 2025
During the 17th International Conference on Quality of Multimedia Experience (QoMEX 2025), AOMedia hosted a half-day industry workshop showcasing insights from leading voices at Amazon, Google (YouTube), Meta, Netflix, Samsung, and Trinity College Dublin. By bringing together industry leaders and academic experts, the workshop helped foster collaboration and new perspectives that will shape the next generation of media standards and formats. The session was organized by AOMedia members Netflix and Meta.
Leading member companies and distinguished academics shared their latest research outcomes, delving into innovative approaches in media codecs, media processing, and related industry applications. Presenters gave the following updates:
- Live AV1 Encoding at Broadcast Quality: Ramzi Khsib showcased how AWS Elemental turned the open-source SVT-AV1 into a broadcast-grade live encoder, achieving real-time performance with CPU- and algorithm-level optimizations, low-latency parallelization, psycho-visual enhancements, and refined rate control for production-ready reliability. View the full presentation here.
- Film Grain Synthesis at Netflix Scale: Li-Heng Chen shared Netflix's approach to scaling AV1 film grain synthesis, highlighting optimizations that preserve texture and visual quality while maintaining efficiency at large scale. Find out what's next here.
- AVM Video Codec Architecture (AV2): Andrey Norkin of Netflix discussed AV2's recent results, highlighting its bitrate savings compared to AV1. He also covered a suite of advanced tools improving prediction, transforms, filtering, and screen content coding. View the full presentation here.
- Latest AVM Coding Gain Results: Ryan Lei of Meta highlighted the latest Common Test Conditions for evaluating AV2 coding gains, went into more depth on the specific coding gains AV2 has produced, and shared some of the future work planned for the AOMedia Testing Subgroup. Watch the video here.
- Psychophysical Models for Optimizing Media: Rafał Mantiuk of the University of Cambridge discussed contrast sensitivity functions and visual difference predictors as they relate to psychophysical models for optimizing media. He introduced ColorVideoVDP and castleCSF, a comprehensive model of contrast sensitivity in the human visual system that predicts the smallest contrast discernible by the human eye. Learn more here.
Additional presentations included Introduction to IAMF Technology (Jani Huoponen, YouTube & Woohyun Nam, Samsung), Applying to Cinema-Grade Virtual Production (François Pitié & Vibhoothi, Trinity College Dublin), and Learned Image Compression (Jona Ballé, NYU). Full recordings are available here: QoMEX 2025 Presentations Playlist.
Following the AOMedia presentations, the second half of the session featured a panel discussion on innovating for future media. Four experts (Patrick Le Callet, Ramzi Khsib, Rafał Mantiuk, and Jona Ballé), moderated by Zhi Li, explored how merging psychophysics, signal processing, and AI can create more efficient, human-centered models for visual perception and media compression. The full panel discussion is available here: QoMEX 2025 Panel Discussion.
Special thanks to the AOMedia member companies, workshop participants, and the dedicated organizers of QoMEX 2025 for their efforts in hosting the AOMedia Research Workshop Europe. Stay informed about upcoming events and all things AOMedia by signing up for our newsletter and following us on LinkedIn.