MPEG-H 3D Audio is the first next-generation audio system used in terrestrial 4K TV: since May 2017, the new UHD TV system in South Korea has been on air using the MPEG-H TV Audio System. Next year, viewers in Korea will enjoy the benefits of the new audio codec during the TV broadcast of the Olympic Winter Games. Viewers can adjust the audio mix of a program to their individual preferences.
At IBC 2017 visitors can experience a live test of the MPEG-H TV Audio System:
We will present tools for producing 3D sound with MPEG-H, such as the plugin for the Spatial Audio Designer (SAD) and DSpatial. In addition, we will show the new Linear Acoustic AMS™ Authoring and Monitoring System and the Jünger Multichannel Monitoring and Authoring unit, two comprehensive solutions for real-time authoring, rendering and monitoring of immersive audio programs that support MPEG-H 3D Audio.
Experience the first MPEG-H 3D Audio enabled TV sets from LG and Samsung made for the new UHD TV system in South Korea. You will be amazed by our reference design for a 3D soundbar that transforms any living room into a home cinema, without requiring the installation of a 3D loudspeaker setup.
The MPEG-H TV Audio System is designed to work with today's broadcast and streaming equipment. The MPEG-H TV Audio System is part of the ATSC 3.0 standard and the DVB A/V codec specification.
At IBC, Fraunhofer IIS will showcase an end-to-end VR audio system based on MPEG-H for production, delivery, playback and rendering of immersive sound. The entire chain consists of Fraunhofer upHear® Spatial Audio Microphone Processing, Cingo Composer post-production plugin, the MPEG-H 3D Audio codec and a VR player SDK for high-quality VR experiences.
Capture spatial audio for Virtual Reality: Fraunhofer upHear has been designed to significantly improve the sound capture capabilities of professional and consumer 360° cameras and mobile devices using built-in microphones. It automatically transforms the captured sound in real time into any popular surround or immersive audio reproduction format, such as first-order Ambisonics (FOA), higher-order Ambisonics (HOA), 5.1 channels, or 7.1+4 height channels, while preserving the authenticity of the audio scene. Spatial Audio Microphone Processing is the first audio technology delivered under Fraunhofer's upHear brand of immersive audio innovations.
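FOA, one of the reproduction formats mentioned above, represents a sound field with four channels (W, Y, Z, X in ACN ordering). As an illustration of the underlying math only (not of upHear's proprietary processing), here is a minimal sketch that pans a mono signal to FOA using the standard SN3D encoding equations; the function name and signal are hypothetical.

```python
import numpy as np

def foa_encode(mono, azimuth_deg, elevation_deg):
    """Pan a mono signal to first-order Ambisonics (ACN order W, Y, Z, X; SN3D)."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono * 1.0                         # omnidirectional component
    y = mono * np.sin(az) * np.cos(el)     # left/right
    z = mono * np.sin(el)                  # up/down
    x = mono * np.cos(az) * np.cos(el)     # front/back
    return np.stack([w, y, z, x])

# A source straight ahead (azimuth 0°, elevation 0°) lands entirely in W and X:
signal = np.ones(4)
foa = foa_encode(signal, 0.0, 0.0)
```

The four resulting channels can then be rotated or decoded to any loudspeaker layout, which is what makes the format attractive for 360° capture.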
Looking for a tool that simplifies mixing immersive sound for VR? If so, be sure to check out the Cingo Composer plugin at IBC! We recently released a beta version of the plugin that allows sound designers to easily mix, pan and monitor audio channels, Ambisonics audio and audio objects using Fraunhofer Cingo. The plugin supports the export of MPEG-H-ready audio essence and metadata, as well as FOA and 5.1 exports for legacy platforms. As a result, you only need to mix once for any distribution channel.
The next-generation audio codec enables interactive and immersive audio experiences for VR, as it can carry audio channels, Ambisonics audio and audio objects with metadata. Fraunhofer's implementation allows the transmission of 3D sound to mobile devices at the same bit rates used today for the playback of 2D surround sound.
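To give a flavor of what object metadata enables at the receiver, the sketch below renders a mono audio object to stereo from an azimuth value carried as metadata, using the generic constant-power (sine/cosine) panning law. This is textbook amplitude panning, not Fraunhofer's renderer; all names and the ±30° mapping are illustrative assumptions.

```python
import numpy as np

def render_object_stereo(samples, azimuth_deg):
    """Constant-power pan of a mono object to L/R using its azimuth metadata.

    azimuth_deg: -30 (full left) ... +30 (full right), mapped linearly
    to a pan angle theta in [0, pi/2].
    """
    theta = (azimuth_deg + 30.0) / 60.0 * (np.pi / 2)
    left = samples * np.cos(theta)    # theta = 0    -> all signal on the left
    right = samples * np.sin(theta)   # theta = pi/2 -> all signal on the right
    return left, right

# A centered object (azimuth 0) gets equal gain of about 0.707 on both channels,
# so the total power L^2 + R^2 equals the object's power:
obj = np.ones(8)
L, R = render_object_stereo(obj, 0.0)
```

Because the position lives in metadata rather than in baked-in channel signals, the same object can be re-rendered for headphones, a soundbar, or a full 3D loudspeaker setup.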
In addition to the efficient delivery of immersive sound using MPEG-H, Fraunhofer Cingo optimizes the 3D audio rendering on VR devices and applications with a stunning level of immersion, creating the experience of “being there”. Cingo's reference customers include Samsung, which integrated Cingo into the first generation of the »Gear VR«; LG, for its »LG 360 VR« glasses; and Hulu, which integrated Cingo into its app for mobile and tethered VR experiences.
Besides immersive sound, Cingo enables excellent surround sound over stereo speakers and headphones when playing any media content on mobile devices such as tablets or smartphones. Thanks to loudness optimization, Cingo also improves the intelligibility of dialog and commentary, making clear, authentic sound possible even in noisy environments. Cingo has been utilized in devices worldwide, for example in the Google Nexus and Pixel family of devices.
For effortless integration on VR playback systems, Fraunhofer IIS provides a VR Audio SDK, ready to create a VR experience with MPEG-H 3D Audio decoding and the best-in-class audio rendering.
The European research project ORPHEUS develops new radio services using object-based audio technology. ORPHEUS specifies, implements and validates an IP-based broadcast chain built on open standards. The resulting user experience will be presented at IBC 2017 in an iOS app developed by elephantcandy.
MPEG-H enables immersive 3D sound and allows users to adjust the sound to their personal preferences. At the same time, MPEG-H is very efficient: 7.1+4H playback requires only 384 kbit/s.
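To put the 384 kbit/s figure in perspective, a quick back-of-the-envelope calculation (illustrative arithmetic only): a 7.1+4H layout carries 12 channels, so the average payload per channel works out to 32 kbit/s.

```python
# Average per-channel bitrate for a 7.1+4H MPEG-H stream at 384 kbit/s
# (illustrative arithmetic only; MPEG-H does not actually code channels independently)
total_kbps = 384
channels = 7 + 1 + 4              # 7 main + 1 LFE + 4 height = 12 channels
per_channel = total_kbps / channels
print(per_channel)                # 32.0 kbit/s per channel on average
```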
At IBC, Fraunhofer IIS shows technologies and applications for the entire chain of the global digital radio standards DRM and DAB.
The Fraunhofer ContentServer™ R6 technology is at the core of a flexible and highly reliable professional broadcasting solution for the digital radio standards DAB and DRM. It combines internal audio encoding, support for external audio encoders, data service management and multiplex generation. A user-friendly web interface enables configuration and system monitoring via remote access.
Moreover, Fraunhofer IIS offers a diverse set of technologies to decode and present digital radio services as part of its Software Defined Radio (SDR) approach, of which the following will be presented at IBC:
Newcomers to film production, small post-production houses, and production teams that submit to film festivals only a few times a year are looking for a reliable solution that guarantees standards-conformant creation and playback on the big screen. The post-production tool experts at Fraunhofer IIS have developed a solution for exactly these requirements: the easyDCP Publisher.
The easyDCP Publisher is an all-in-one software solution for generating and playing back DCPs. Based on the easyDCP suite, which is recognized and widely used by cinematographic specialists and post-production houses around the world, it is a lean version that provides all essential creation and playback features. Its project-based license model is especially suited to production teams and post-production houses for which DCP creation is not a core business.
The easyDCP Publisher provides a cost-effective way to generate a DCP with minimal effort and risk for approval and playback on a cinema server. Professionals simply render, preview and fine-tune their content, and the DCP is ready to be published.
New distribution channels for video content arise almost daily. The Interoperable Master Format (IMF), the universal format standardized and released by SMPTE, has become a recognized exchange format in professional film production for defining and automating transcoding steps. IMF is the first choice for today's challenge of exchanging content at the highest image quality without having to maintain many different formats.
For automated Quality Control (QC) of these formats, the experts of Fraunhofer IDMT and Fraunhofer IIS present a first demo version of a technology that performs quality checks on Interoperable Master Packages (IMPs) as well as on the distribution formats derived from them via the Output Profile List (OPL).
The big advantage of the Fraunhofer solution is that typical quality issues introduced by video transcoders, such as audio and video errors caused by incorrect operations, wrong parameters, interruptions in the processing chain, or inconsistent interpretation of standards and recommendations, can be detected as early as possible. Furthermore, technical constraints on the transcoding process, for example the need for low bitrates, can significantly degrade audiovisual quality. It is therefore indispensable to detect and evaluate such issues and, if necessary, adjust the parameters both before and after transcoding and distribution.
The demo shows a first set of important quality checks and how detected errors are reported and documented in the well-known, widely used easyDCP software environment for IMF creation.
Today, almost every film set uses more than one camera to capture the scene. Especially for special shots or visual effects, more and more cameras or even camera arrays record the production to generate as many different views as possible. These perspectives of a scene are then rectified and combined into a unified representation in which visual effects can be applied to real-action content, just as in CGI. This is what makes the light-field approach so appealing for new ways of content post-production. Effects such as refocusing, virtual camera movements, and re-lighting of scenes can be carried out with light-field technology, using the Realception® tools from Fraunhofer IIS in a post-production environment familiar to most professionals.
Our experts from Fraunhofer IIS will present a new image coding technology: the upcoming JPEG XS standard. JPEG XS will offer a low-latency, lightweight image coding system that supports increasing resolutions (up to 8K) and frame rates in a cost-effective way, making it applicable, for example, to Video over IP.
Please feel free to attend the Paper Session “Beyond HEVC - how to build an even better codec” (Emerald Room, 14 Sep 2017, 13:15-14:45), where Dr. Siegfried Fößel will present the paper “Introduction to JPEG XS - The new low complexity codec standard for professional video production”.