Please introduce your company and yourself briefly.
Vimeo develops and provides video solutions in a unified platform, from production and editing to hosting and sharing. Our goal is to make professional-quality video accessible to businesses and individuals alike.
I joined Vimeo's video transcoding team in 2019 after studying information technology engineering at the École de technologie supérieure in Montréal, Canada. I became interested in AV1 while it was still in development, which led me to contribute to the rav1e AV1 encoder through Google Summer of Code. I have since worked on a number of AV1-related projects at Vimeo.
What first brought you to AOMedia, and why did you decide to become a member?
Vimeo's membership in AOMedia predates me, but I believe a commitment to innovation and the early adoption of promising technologies, as well as a focus on sharing knowledge through open-source and royalty-free initiatives, are values that Vimeo and AOMedia share. Being a member of the Alliance allows us to follow next-generation codec development and closely monitor the ongoing testing efforts and tool adoption, which helps us integrate and deploy new tools quickly.
What are you currently working on with regard to AV1? How will end-users benefit?
Vimeo was an early adopter of AV1, and we recently deployed AVIF support for all images on our platform. We are currently looking at ways to reduce video transcoding times so we can scale our coverage and use AV1 across all user-uploaded videos. The coding-efficiency improvements of AV1 over our other supported formats will help reduce file sizes and improve video quality, translating into a smoother, more visually pleasing experience for users.
What AOMedia efforts are you most excited about?
I am very interested in the current research into new and improved coding tools, particularly those involving machine learning. The high-performance requirements of a video codec pose an interesting challenge to the usual neural network architectures, and I am excited to see how this problem will be tackled.