VP9 is the next level in video compression and can help you save up to 50% on your CDN costs, or significantly increase the quality of your streams.
We are happy to announce full VP9 support for our HTML5 video player and our video encoder, both in the cloud and for containerized deployments that can run on-premise or in your own cloud account with Docker and Kubernetes. VP9 has recently gained popularity because the royalty situation around HEVC, its main competitor, is still uncertain. Similar to HEVC, VP9 can perform up to 50% better as a compression format than H.264/AVC, especially for UHD and 4K resolutions. This means higher-quality video can be delivered to your users, or bandwidth can be saved, reducing CDN costs by up to 50%!
VP9 is a royalty-free codec developed by Google as an alternative to commercial video formats. YouTube has been successfully using VP9 to deliver video content to its users for several years and claims to deliver the same quality at half the bandwidth used by H.264/AVC. This is why YouTube prefers to stream VP9 on browsers and devices that support it, delivering better quality with less bandwidth. When streaming UHD and 4K content, VP9 gets even more efficient. YouTube has chosen to deliver 4K resolutions only in VP9, effectively locking Safari users out of consuming 4K content via YouTube.
This rise in popularity of VP9 is caused not only by the uncertain situation around HEVC royalties, but also by the ongoing development of AV1, a royalty-free video coding format developed by the Alliance for Open Media that can be seen as the successor to VP9.
Looking at the range of supported browsers, VP9 is well ahead of HEVC. As of early 2017, VP9 is supported by roughly 75% of the browser market, including Google Chrome, Firefox, Opera, and, since summer 2016, Microsoft Edge. HEVC, on the other hand, is only supported in Microsoft Edge, and only where hardware decoding is available.
The Bitmovin encoder produces segmented VP9, which is perfectly suited for VoD as well as live streams. Furthermore, our Live-to-VoD workflow fits this format perfectly and allows you to generate VoD streams out of a live stream right after it has finished, or even while it is still running.
VoD Encoding for MPEG-DASH VP9
With the Bitmovin API you can create MPEG-DASH VP9 content for live as well as VoD use cases. First we will demonstrate how to create an encoding job with MPEG-DASH VP9 output using our C# API client. A full example can be found in our examples list in the GitHub repository.
Set up the Bitmovin API client with your API key
var bitmovin = new BitmovinApi(API_KEY);

Create an output configuration
We are using a Google Cloud Storage bucket as the output location for the MPEG-DASH VP9 content. However, you could also use AWS S3, Azure Blob Storage, Scality, FTP, SFTP, or any S3-compatible storage instead.
var output = bitmovin.Output.Gcs.Create(new GcsOutput
{
    Name = "GCS Output",
    AccessKey = GCS_ACCESS_KEY,
    SecretKey = GCS_SECRET_KEY,
    BucketName = GCS_BUCKET_NAME
});

Create an encoding and define the cloud region and version to be used
When you create an encoding you can choose the cloud region where the encoder should run. Ideally, this region matches the cloud region in which your bucket resides, so you save on egress traffic. Besides the cloud region, you can also pin a specific encoder version, or use our STABLE branch, which always points to the latest stable encoder version.
var encoding = bitmovin.Encoding.Encoding.Create(new Encoding.Encoding
{
    Name = "VP9 VoD Encoding C#",
    CloudRegion = EncodingCloudRegion.GOOGLE_EUROPE_WEST_1,
    EncoderVersion = "STABLE"
});

Create an input source
We need to create a source for your input file. If your input files are stored on an HTTP server, you can simply configure this server as the source of your inputs with the code below. Please note that many other input sources, such as AWS S3, Google Cloud Storage, Azure Blob Storage, Aspera, (S)FTP, Scality, and any S3-compatible storage, are also supported.
var httpHost = bitmovin.Input.Http.Create(new HttpInput
{
    Name = "HTTP Input",
    Host = INPUT_HTTP_HOST
});

Create video codec configurations and add them to the encoding
A codec configuration contains the encoding-related configuration for a video or audio rendition. You link a codec configuration to a stream of your encoding, which connects an input stream with the codec configuration. For example, linking your input video stream to an H.264 1080p codec configuration will encode this video stream to H.264 1080p output. The following example uses the VIDEO_RELATIVE selection mode with position 0, thus linking this configuration to the first video stream of the input file. Besides VP9, we also support H.264/AVC and H.265/HEVC as codecs, and resolutions of 8K and higher.
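The code below calls a CreateStream helper method whose implementation is part of the full example in the GitHub repository. As a rough sketch of what such a helper might look like (the property and type names here are assumptions based on the C# API client of that time and may differ from the actual example):

```csharp
// Hypothetical sketch of the CreateStream helper used in the snippets below.
// The real implementation lives in the full GitHub example; property names
// are assumptions and may differ slightly.
private static Encoding.Stream CreateStream(Input input, string inputPath, int position,
    CodecConfig codecConfig, SelectionMode selectionMode)
{
    // Select one stream (video or audio) from the input file by position
    var inputStream = new InputStream
    {
        InputId = input.Id,
        InputPath = inputPath,
        Position = position,
        SelectionMode = selectionMode
    };

    // Connect the selected input stream with the codec configuration
    return new Encoding.Stream
    {
        InputStreams = new List<InputStream> { inputStream },
        CodecConfigId = codecConfig.Id
    };
}
```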
var videoConfig1080p = bitmovin.Codec.VP9.Create(new VP9VideoConfiguration
{
    Name = "VP9_Profile_1080p",
    Width = 1920,
    Height = 1080,
    Bitrate = 4800000,
    Rate = 30.0f
});
var videoStream1080p = bitmovin.Encoding.Encoding.Stream.Create(encoding.Id,
    CreateStream(httpHost, INPUT_HTTP_PATH, 0, videoConfig1080p, SelectionMode.VIDEO_RELATIVE));

Similar to the code above, you can add more video codec configurations and attach them to streams of your encoding to generate alternative renditions (e.g., 720p, 360p, etc.). In addition to the video renditions, you may also want to add an audio track. This works the same way as for video, as you will see in the example below:
var audioConfig = bitmovin.Codec.Aac.Create(new AACAudioConfiguration
{
    Name = "AAC_Profile_128k",
    Bitrate = 128000,
    Rate = 48000
});
var audioStream = bitmovin.Encoding.Encoding.Stream.Create(encoding.Id,
    CreateStream(httpHost, INPUT_HTTP_PATH, 0, audioConfig, SelectionMode.AUDIO_RELATIVE));

Mux the encoded data for MPEG-DASH
In order to create MPEG-DASH, the VP9-encoded data needs to be packaged accordingly. In the following lines of code we will define segmented WebM muxings for MPEG-DASH. Here you also define how long a single segment should be, as well as where the segments should be stored in your output bucket. You have full control over the output location of the video and audio streams. If you added multiple video renditions, you also need to create a segmented WebM muxing for each rendition.
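The next snippet uses CreateSegmentedWebmMuxing and CreateFMP4Muxing helpers, which are defined in the full GitHub example. A hedged sketch of the WebM one (property names are assumptions and may differ from the actual example; the fMP4 variant is analogous):

```csharp
// Hypothetical sketch of the CreateSegmentedWebmMuxing helper used below;
// the real version is part of the full GitHub example, and property names
// are assumptions that may differ slightly.
private static SegmentedWebm CreateSegmentedWebmMuxing(Encoding.Stream stream, Output output,
    string outputPath, int segmentLength)
{
    return new SegmentedWebm
    {
        // Length of a single segment in seconds
        SegmentLength = segmentLength,
        // Where the segments of this rendition are written in the bucket
        Outputs = new List<Encoding.Output>
        {
            new Encoding.Output { OutputId = output.Id, OutputPath = outputPath }
        },
        // The encoded stream that feeds this muxing
        Streams = new List<MuxingStream> { new MuxingStream { StreamId = stream.Id } }
    };
}
```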
First we create the required muxings: a segmented WebM muxing for the VP9 video rendition and an fMP4 muxing for the AAC audio rendition:

var videoWebmMuxing1080p = bitmovin.Encoding.Encoding.SegmentedWebm.Create(encoding.Id,
    CreateSegmentedWebmMuxing(videoStream1080p, output, OUTPUT_PATH + "video/1080p", segmentLength));
var audioFMP4Muxing = bitmovin.Encoding.Encoding.Fmp4.Create(encoding.Id,
    CreateFMP4Muxing(audioStream, output, OUTPUT_PATH + "audio/128kbps", segmentLength));

Start the VP9 Encoding
Finally we can start the encoding job to encode your source asset to MPEG-DASH VP9.
bitmovin.Encoding.Encoding.Start(encoding.Id);

With the following code snippet you can wait for the encoding job to finish:
var encodingTask = bitmovin.Encoding.Encoding.RetrieveStatus(encoding.Id);
while (encodingTask.Status != Status.ERROR && encodingTask.Status != Status.FINISHED)
{
    // Wait for the encoding to finish
    encodingTask = bitmovin.Encoding.Encoding.RetrieveStatus(encoding.Id);
    Thread.Sleep(2500);
}

Besides polling, you can also use webhooks to get notified as soon as the encoding job has finished.
Create the MPEG-DASH Manifest
After the encoding is finished, we also need an MPEG-DASH manifest in order to play back the content with MPEG-DASH players. With the Bitmovin API you have full control over manifest creation, e.g., you can create multiple manifests with different sets of qualities targeting desktop or mobile. When creating the MPEG-DASH manifest, you also specify the output as well as the location and filename of the manifest:
var manifestOutput = new Encoding.Output
{
    OutputPath = OUTPUT_PATH,
    OutputId = output.Id,
    Acl = new List<Acl> { new Acl { Permission = Permission.PUBLIC_READ } }
};
var manifestDash = bitmovin.Manifest.Dash.Create(new Dash
{
    Name = "MPEG-DASH VP9 Manifest",
    ManifestName = "stream.mpd",
    Outputs = new List<Encoding.Output> { manifestOutput }
});

Define the default period and the video and audio adaptation sets. In the audio adaptation set you can define the language of the audio track.
var period = bitmovin.Manifest.Dash.Period.Create(manifestDash.Id, new Period());
var videoAdaptationSet = bitmovin.Manifest.Dash.VideoAdaptationSet.Create(manifestDash.Id, period.Id, new VideoAdaptationSet());
var audioAdaptationSet = bitmovin.Manifest.Dash.AudioAdaptationSet.Create(manifestDash.Id, period.Id, new AudioAdaptationSet { Lang = "en" });

Add the created muxings to the adaptation sets. You also need to define the path to the segments relative to the manifest output location:
bitmovin.Manifest.Dash.Webm.Create(manifestDash.Id, period.Id, videoAdaptationSet.Id, new Manifest.Webm
{
    Type = SegmentScheme.TEMPLATE,
    EncodingId = encoding.Id,
    MuxingId = videoWebmMuxing1080p.Id,
    SegmentPath = "video/1080p"
});
bitmovin.Manifest.Dash.Fmp4.Create(manifestDash.Id, period.Id, audioAdaptationSet.Id, new Manifest.Fmp4
{
    Type = SegmentScheme.TEMPLATE,
    EncodingId = encoding.Id,
    MuxingId = audioFMP4Muxing.Id,
    SegmentPath = "audio/128kbps"
});

Now that the manifest is fully configured, we can start the manifest creation:
bitmovin.Manifest.Dash.Start(manifestDash.Id);

As with the encoding job, we need to wait for the manifest creation to finish:
var status = bitmovin.Manifest.Dash.RetrieveStatus(manifestDash.Id);
while (status.Status == Status.RUNNING)
{
    status = bitmovin.Manifest.Dash.RetrieveStatus(manifestDash.Id);
    Thread.Sleep(2500);
}

Again, you can also use webhooks here to get notified as soon as the manifest creation is finished. After that, we have an MPEG-DASH manifest for the VP9-encoded content and can test playback in MPEG-DASH-compatible players such as the Bitmovin player, Shaka Player, or dash.js.
Live Encoding for MPEG-DASH VP9
Starting a live encoding for MPEG-DASH VP9 is not much different from starting a VoD encoding. We also have a full example available in our GitHub repository.
Obviously, the input will not be based on a file, but will instead be an RTMP source. The following shows how to retrieve the default RTMP input that is available in your account:

var rtmpInput = bitmovin.Input.Rtmp.RetrieveList(0, 100)[0];

When creating the streams in the different qualities, use the rtmpInput instead of the HTTP input from the example above.
The second difference concerns the manifest generation, which must be done before the live encoder is started. Just create the MPEG-DASH manifest as in the example above, and pass it in the start encoding call:

bitmovin.Encoding.Encoding.StartLive(encoding.Id, new StartLiveEncodingRequest
{
    StreamKey = "YourStreamKey",
    DashManifests = new List<LiveDashManifest>
    {
        new LiveDashManifest
        {
            ManifestId = manifestDash.Id,
            Timeshift = 300,
            LiveEdgeOffset = 180
        }
    }
});

That is all that differs when starting a live encoding compared to a VoD encoding.
Playback of MPEG-DASH VP9 Content
There is no difference between the playback of VP9-encoded and H.264-encoded MPEG-DASH streams. In both cases you set the MPEG-DASH manifest as the source for your player; there is no need to specify which codec is used. Below you can see an example of our player with VP9 content encoded through our Bitmovin API.
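The original post embeds a live player demo at this point. As a hedged sketch of what the setup looks like in the browser (the container id, player key, and manifest URL are placeholders, and the configuration keys reflect the Bitmovin Player API of that era and may differ in current versions):

```javascript
// Hypothetical player setup; 'player-container', the player key, and the
// manifest URL are placeholders. The player selects VP9 automatically when
// the browser supports it - no codec needs to be specified in the source.
var config = {
  key: 'YOUR-PLAYER-KEY',
  source: {
    dash: 'https://your-cdn.example.com/path/to/stream.mpd'
  }
};

var player = bitmovin.player('player-container');
player.setup(config).then(function () {
  console.log('Player is ready');
});
```

On browsers without VP9 support (e.g., Safari), you would provide an additional H.264 rendition so the player can fall back to it.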