Last week I attended another edition of the Real Time Communications Conference and Expo. The conference is supported by two important institutions, the Illinois Institute of Technology and the IEEE Chicago Section, and it’s one of the few conferences that approach real-time communications from a scientific perspective. It brings together software developers, network engineers, entrepreneurs, business executives, students, and researchers from both industry and academia to promote an open exchange of ideas in the rapidly changing field of real-time communications.
This year I was once again humbled to serve as co-chair of the Research Track alongside Dr. Ronald Marx. The Research Track accepts full paper submissions on real-time multimedia communications, describing architecture, design, theoretical results, experiments, innovative systems, prototyping efforts, and case studies. We presented seven talks on a diverse set of topics, including investigations into blockchain, neural-based security, 5G for smart cities, and speaker verification. There were also three talks related to videoconferencing research, including one by Evercast Senior Lead Engineer David Diaz about how to stream high-color video in the browser using data channels.
Additionally, I was asked to chair the WebRTC & Real-Time Applications Track in the absence of Alberto González Trastoy from WebRTC.Ventures. WebRTC has emerged as the standard for allowing easy access to the microphone and camera via the browser and for enabling peer-to-peer video, audio, and data connections directly between browsers and other devices. In this track we presented eight talks, covering topics such as AI voice assistants and real-time generative conversations, WebRTC call stats and metrics, and expanding scope through machine learning. This track also included a session with Evercast CEO Damien Stolarz about delivering spatial video on Apple Vision Pro.
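For readers who haven’t touched the API, here is a minimal sketch of what that looks like in browser-side TypeScript: capturing the microphone and camera, attaching the tracks to a peer connection, and opening a data channel. The STUN server URL and the `#remote` video element are illustrative, and the signaling exchange that carries the offer, answer, and ICE candidates is left out entirely.

```typescript
async function startCall(): Promise<void> {
  // Capture local audio and video (prompts the user for permission).
  const localStream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  });

  // Create a peer connection; the STUN server here is illustrative.
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Attach the captured tracks so they are sent to the remote peer.
  for (const track of localStream.getTracks()) {
    pc.addTrack(track, localStream);
  }

  // An ordered, reliable data channel for arbitrary application data.
  const channel = pc.createDataChannel("app-data");
  channel.onopen = () => channel.send("hello from the browser");

  // Render whatever media the remote peer sends back.
  pc.ontrack = (event) => {
    const video = document.querySelector<HTMLVideoElement>("#remote");
    if (video) video.srcObject = event.streams[0];
  };

  // The offer/answer and ICE candidates would be exchanged over an
  // application-specific signaling channel (omitted here).
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
}
```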
One of the highlights of the conference was a presentation by Erik Språng (Senior Software Engineer at Google) about what is coming down the line from the latest W3C working group discussions. In particular, developments in WebCodecs and RtpTransport will allow an application to unbundle video encoding from the WebRTC monolith and create fully custom per-frame adaptive reference structures. These ideas were very well received by the conference attendees, who were interested in collaborating with Google to develop these new standards.
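RtpTransport is still at the proposal stage, but the WebCodecs half of that picture already ships in Chromium-based browsers, so a rough sketch can illustrate the direction. The idea below is to pull raw frames from a camera track with MediaStreamTrackProcessor (Chromium-specific, and it may need extra type definitions in TypeScript), feed them into an application-owned VideoEncoder, and decide per frame whether to force a keyframe. The fixed every-150-frames rule is only a stand-in for whatever custom reference policy an application might implement, and packetization and transport are out of scope here.

```typescript
// Application-driven encoding with WebCodecs: the app, not the WebRTC
// stack, owns each encoded chunk and the per-frame encoding decisions.
const encoder = new VideoEncoder({
  output: (chunk, metadata) => {
    // The app can now packetize, prioritize, or drop each chunk
    // according to its own policy.
    console.log(`encoded ${chunk.type} frame, ${chunk.byteLength} bytes`);
  },
  error: (e) => console.error(e),
});

encoder.configure({
  codec: "vp8",
  width: 1280,
  height: 720,
  bitrate: 2_000_000,
  framerate: 30,
});

// Pull frames from a local camera track and encode them, forcing a
// keyframe every 150 frames as a placeholder for a custom per-frame
// reference policy.
async function encodeFrom(track: MediaStreamTrack): Promise<void> {
  const processor = new MediaStreamTrackProcessor({ track });
  const reader = processor.readable.getReader();
  let frameCount = 0;
  for (;;) {
    const { value: frame, done } = await reader.read();
    if (done || !frame) break;
    encoder.encode(frame, { keyFrame: frameCount % 150 === 0 });
    frame.close();
    frameCount++;
  }
}
```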
Finally, I’d like to emphasize the overwhelming presence of AI in the talks and discussions at the conference. AI is revolutionizing the way content creators work together, and combining it with real-time communications is key to increasing productivity and lowering production costs. Because of its unique scientific approach, the IEEE RTC conference is an important venue for the future of this collaboration, and I expect it to be a key event in the coming years, catalyzing new technological opportunities and setting the stage for an exciting future in both industry and academia.