Wednesday, October 13 • 4:00pm - 5:00pm
Robert Bleidt • John Kean • David Bialik
Audio-specific metadata can offer new consumer benefits as well as a more compelling listening experience. This presentation discusses the development of systems using audio metadata and explains the types of metadata that describe key characteristics of audio content, such as loudness, dynamic range, and signal peaks. The metadata is added to the content during encoding, whether for real-time distribution (as in streams) or for file storage (as with podcasts). In the playback device or system, this metadata is decoded along with the audio data frames. It will be shown that audio dynamic range can be controlled according to the noise environment around the listener: quiet parts of a performance can be raised to audibility, but only for those listeners who need it. Content producers need to master to only one target level for all listeners, rather than one for smart speakers, another for fidelity-conscious listeners, and so on. An audio demonstration is planned to allow the audience to hear the same encoded program on the same device over a range of playback conditions, from riding public transit to full dynamic range for listeners who want the highest fidelity.
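The playback-side behavior described above can be sketched in a few lines. This is an illustrative sketch only, not the presenters' system: the function name, the noise thresholds, and the compression ratios are all hypothetical, and it assumes per-frame loudness and program loudness have been carried as metadata from the encoder.

```python
# Hedged sketch of metadata-driven playback gain: normalize the program to a
# target level, then compress dynamic range only when the listener's
# environment is noisy. All names and thresholds here are illustrative.

def playback_gain_db(frame_loudness_db, program_loudness_db=-24.0,
                     target_db=-24.0, ambient_noise_db=35.0):
    """Return the gain (dB) to apply to one decoded audio frame.

    frame_loudness_db   -- short-term loudness stored as metadata at encode time
    program_loudness_db -- integrated loudness of the whole program (metadata)
    target_db           -- playback target level on this device
    ambient_noise_db    -- measured noise around the listener (e.g. from a mic)
    """
    # Normalize the whole program to the playback target once.
    gain = target_db - program_loudness_db

    # Pick a compression ratio from the ambient noise (hypothetical mapping).
    if ambient_noise_db > 50.0:       # e.g. riding public transit
        ratio = 3.0
    elif ambient_noise_db > 40.0:     # e.g. a busy room
        ratio = 1.5
    else:                             # quiet room: leave full dynamic range
        return gain

    # Raise only the quiet frames: shrink a frame's (negative) deviation
    # from program loudness by the ratio, leaving loud frames untouched.
    deviation = frame_loudness_db - program_loudness_db
    if deviation < 0:
        gain += deviation / ratio - deviation
    return gain
```

Because the one stream carries the metadata, the same encoded program yields full dynamics in a quiet room and lifted quiet passages on transit, with no second master needed.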