MPEG Meeting

MPEG news: A Report from the 106th Meeting, Geneva, Switzerland

Christian Timmerer

November, 2013, Geneva, Switzerland

Here comes a news report from the 106th MPEG meeting in Geneva, Switzerland, which happened to fall on the Austrian national day; Austrian Airlines had a nice present (see picture) for their passengers.

The official press release can be found here. At this meeting, ISO/IEC 23008-1 (i.e., MPEG-H Part 1) MPEG Media Transport (MMT) reached Final Draft International Standard (FDIS) status. Looking back to when this project was started, with the aim of superseding the widely adopted MPEG-2 Transport Stream (M2TS), which received the Technology & Engineering Emmy® Award in Jan'14, MMT now supports the following features:

  • self-contained multiplexing structure
  • strict timing model
  • reference buffer model
  • flexible splicing of content
  • name based access of data
  • AL-FEC (application layer forward error correction)
  • multiple qualities of service within one packet flow

Interestingly, MMT supports the carriage of MPEG-DASH segments and MPD for uni-directional environments such as broadcasting.
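The AL-FEC bullet above can be illustrated with the simplest possible scheme: a single XOR parity packet per source block, which can repair exactly one lost packet. This is a toy sketch, not the actual FEC codes MMT specifies; all names are illustrative.

```python
from functools import reduce
from typing import Optional

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets: list) -> bytes:
    """Build one repair packet: the XOR of all source packets."""
    return reduce(xor_bytes, packets)

def recover(received: list, parity: bytes) -> list:
    """Recover at most one lost packet (marked as None) using the parity."""
    lost = [i for i, p in enumerate(received) if p is None]
    if len(lost) != 1:
        return received  # nothing lost, or too many losses for a single parity
    repaired = reduce(xor_bytes, (p for p in received if p is not None), parity)
    out = list(received)
    out[lost[0]] = repaired
    return out
```

XOR parity is the degenerate case; real application-layer FEC schemes use stronger codes that tolerate multiple losses per block, at the cost of more repair overhead.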
MPEG-H now comprises three major technologies: part 1 is about transport (MMT; at FDIS stage), part 2 deals with video coding (HEVC; at FDIS stage), and part 3 will be about audio coding, specifically 3D audio coding (still in its infancy; technical responses have only recently been evaluated). Other parts of MPEG-H relate to these three.
In terms of research, it is important to determine the efficiency, overhead, and — in general — the use cases enabled by MMT. From a business point of view, it will be interesting to see whether MMT will actually supersede M2TS and how it will evolve compared, or in relation to DASH.
On another topic, MPEG-7 visual reached an important milestone at this meeting. The Committee Draft (CD) for Part 13 (ISO/IEC 15938-13), entitled Compact Descriptors for Visual Search (CDVS), has been approved. This image description enables comparing and finding pictures that show similar content, e.g., the same object seen from different viewpoints. CDVS mainly deals with images, but MPEG has also started work on compact descriptors for video search.
The CDVS standard truly helps to reduce the semantic gap. However, research in this domain is already well developed, and it is unclear whether the research community will adopt CDVS, particularly because interest in MPEG-7 descriptors has declined recently. On the other hand, such a standard will enable interoperability among vendors and services (e.g., Google Goggles), reducing the number of proprietary formats and, hopefully, APIs. Ultimately, the most important question is whether CDVS will be adopted by industry (and researchers).
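The core retrieval idea behind a compact descriptor can be sketched in a few lines: images are summarized as short binary codes, and similarity is a cheap Hamming distance. This is a toy illustration, not the actual CDVS pipeline (which selects and compresses local features); the descriptors and names below are made up.

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def best_match(query: int, database: dict) -> tuple:
    """Return the database image whose descriptor is closest to the query."""
    name = min(database, key=lambda k: hamming(query, database[k]))
    return name, hamming(query, database[name])

# Hypothetical 8-bit descriptors; real ones are hundreds of bits.
db = {
    "tower_front": 0b10110010,
    "tower_side":  0b10110110,
    "cat":         0b01001101,
}
```

A query descriptor close to `tower_side` would match it with a small distance, while unrelated content (the `cat` entry) stays far away, which is exactly the property a standardized descriptor needs for interoperable search.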

Finally, what about MPEG-DASH?

The 2nd edition of part 1 (MPD and segment formats) and the 1st edition of part 2 (conformance and reference software) were finalized at the 105th MPEG meeting (FDIS). Additionally, we held a public/open workshop at that meeting on session management and control for DASH. These and other new topics are being further developed within so-called core experiments (CEs), of which I'd like to give a brief overview:

  • Server and Network assisted DASH Operation (SAND), the immediate result of the workshop at the 105th MPEG meeting, introduces a DASH-Aware Media Element (DANE) as depicted in the figure below. Parameters from this element (as well as others) may support the DASH client in its operations, i.e., in downloading the "best" segments for its context. SAND parameters typically come from the network itself, whereas parameters for enhancing delivery by DANE (PED) come from the content author.


Server and Network assisted DASH Operation (SAND)

  • Spatial Relationship Description (SRD) is about delivering (tiled) ultra-high-resolution content to heterogeneous clients while at the same time providing interactivity (e.g., zooming). Thus, not only the temporal but also the spatial relationship of representations needs to be described.
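The spatial-relationship idea can be sketched as follows: each representation carries a position on the full-resolution canvas, and a client fetches only the tiles intersecting its current viewport (e.g., after a zoom). The tuple layout below is illustrative, not the actual SRD syntax.

```python
from typing import NamedTuple

class Tile(NamedTuple):
    rep_id: str  # representation identifier in the MPD (hypothetical)
    x: int       # top-left corner on the full-resolution canvas
    y: int
    w: int
    h: int

def overlaps(t: Tile, vx: int, vy: int, vw: int, vh: int) -> bool:
    """Axis-aligned rectangle intersection test against the viewport."""
    return t.x < vx + vw and vx < t.x + t.w and t.y < vy + vh and vy < t.y + t.h

def tiles_for_viewport(tiles: list, vx: int, vy: int, vw: int, vh: int) -> list:
    """IDs of the representations the client needs for the current viewport."""
    return [t.rep_id for t in tiles if overlaps(t, vx, vy, vw, vh)]

# A hypothetical 2x2 tiling of a 1920x1080 canvas.
grid = [
    Tile("t00", 0, 0, 960, 540), Tile("t10", 960, 0, 960, 540),
    Tile("t01", 0, 540, 960, 540), Tile("t11", 960, 540, 960, 540),
]
```

A small viewport in the top-left corner needs only one tile, while a viewport straddling the center needs all four; bandwidth scales with what is actually shown rather than the full panorama.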

Other CEs are related to signaling intended source and display characteristics, controlling the DASH client behavior, and DASH client authentication and content access authorization.
The outcome of these CEs is potentially interesting for future amendments. One CE dealt with at this meeting was about including quality information within DASH, e.g., as part of an additional track within ISOBMFF and an additional representation within the MPD. Clients may access this quality information in advance to assist the adaptation logic in making informed decisions about which segment to download next.
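A minimal sketch of how a client might exploit such quality hints: instead of blindly taking the highest bitrate that fits the measured throughput, it picks the feasible representation with the best signaled quality. The field names are made up for illustration; a real client would parse them from the ISOBMFF quality track or the MPD.

```python
def pick_representation(throughput_bps: float, candidates: list) -> dict:
    """Choose the highest-quality representation whose bitrate fits the
    measured throughput; fall back to the lowest bitrate if none fits."""
    feasible = [c for c in candidates if c["bitrate"] <= throughput_bps]
    if feasible:
        return max(feasible, key=lambda c: c["quality"])
    return min(candidates, key=lambda c: c["bitrate"])

# Hypothetical per-segment metadata (bitrate in bit/s, quality in [0, 1]).
reps = [
    {"id": "low",  "bitrate": 1_000_000, "quality": 0.6},
    {"id": "mid",  "bitrate": 3_000_000, "quality": 0.8},
    {"id": "high", "bitrate": 6_000_000, "quality": 0.9},
]
```

The interesting cases are segments where a lower bitrate already achieves near-identical quality (e.g., static scenes): with per-segment quality signaling the client can save bandwidth without a visible penalty, which throughput-only heuristics cannot do.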
Interested people may join the MPEG-DASH Ad-hoc Group (AhG), where these topics (and others) are discussed.
Finally, additional information and outcomes from the last meeting, including publicly available documents (some may have an editing period), are accessible via the MPEG website.


Dr. Christian Timmerer
CIO Bitmovin GmbH | [email protected]
Alpen-Adria-Universität Klagenfurt | [email protected]

Christian Timmerer

Chief Innovation Officer

Prof. Dr. Christian Timmerer is the Chief Innovation Officer and a co-founder at Bitmovin. His work focuses on research and standardization in the area of adaptive video streaming, video adaptation, and Quality of Experience. He is an active member of ISO/IEC MPEG and an editor for the MPEG-21, MPEG-M, MPEG-V, and MPEG-DASH standards, and thus also has broad knowledge of, and contacts within, the international technology market. He holds the position of Full Professor for Multimedia Systems at the Alpen-Adria-Universität Klagenfurt, where he has published more than 300 papers at international conferences and in journals.
