Live Streaming with the Bitmovin API

How Live Streaming Works with the Bitmovin API

The Bitmovin cloud encoding service is a powerful tool for live streaming, and our API makes it easy to implement. There are four main steps involved when using our live streaming service in the cloud.

1. Ingest RTMP Stream to our Live Encoder

Usually, a mezzanine encoder processing the live signal transcodes it to a high-quality mezzanine format and pushes it to the RTMP ingest point of our live encoder. To guarantee a stable, low-latency connection between the mezzanine encoder and our live encoder, it is crucial to carefully select the cloud and region in which your live encoder will run.

2. Encoding of the Input Stream to MPEG-DASH and HLS

You can define multiple output resolutions and bitrates for MPEG-DASH and HLS, and choose whether to encode to H.264 (AVC) or H.265 (HEVC). There are practically no limits on the output you can request from our live encoder; for example, it can easily handle multiple 4K 60 FPS streams encoded to HEVC.

3. Direct Output to Your Storage

Our live encoder writes the encoded files directly to your AWS S3 or Google Cloud Storage (GCS) bucket. You can use this output storage as the origin for one or more CDNs. This works well with GCS, as Google offers a CDN Interconnect for several CDNs such as Akamai, Fastly, Level3, Highwinds, CloudFlare, Limelight, and Verizon. AWS S3 can likewise be connected to the CloudFront CDN, giving you an out-of-the-box solution within a single cloud.

4. Playback on any Device or Browser

If you are using the Bitmovin HTML5 player, your content will play back on any device and browser. However, you are not limited to our player and can also choose among others, as described in our recent blog post on encoding for multiple HTML5 players.

Starting a Live Stream

Now, let’s see how we can start a live stream that will generate MPEG-DASH and HLS. To showcase the setup of the live stream we will use the .NET C# API client. The full example can also be found in the examples list of the .NET C# API client.

Setup the Bitmovin API client with your API key:

var bitmovin = new BitmovinApi(API_KEY);
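
The snippets in this post also reference a few constants. These are placeholders you would define yourself; the names below match the ones used throughout the examples, and the values are illustrative only:

// Placeholder values - replace with your own credentials and settings
private const string API_KEY = "<YOUR_BITMOVIN_API_KEY>";
private const string GCS_ACCESS_KEY = "<YOUR_GCS_ACCESS_KEY>";
private const string GCS_SECRET_KEY = "<YOUR_GCS_SECRET_KEY>";
private const string GCS_BUCKET_NAME = "<YOUR_GCS_BUCKET_NAME>";
private const string OUTPUT_PATH = "live/output/"; // path prefix inside the bucket (example value)
private const double segmentLength = 4;            // segment duration in seconds (example value)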

Create an Output Configuration

We are using a Google Cloud Storage bucket as the output location of the live encoder; if you prefer, you could also use an AWS S3 bucket instead.

var output = bitmovin.Output.Gcs.Create(new GcsOutput
{
    Name = "GCS Ouput",
    AccessKey = GCS_ACCESS_KEY,
    SecretKey = GCS_SECRET_KEY,
    BucketName = GCS_BUCKET_NAME
});

Create an Encoding and Define the Cloud Region and Version to be Used

When you create an encoding you can choose the cloud region where the encoder should run. Choose the cloud region carefully to ensure a stable, low-latency connection between your mezzanine encoder and our live encoder's ingest point.

Besides the cloud region, you can also pin a specific encoder version or use BETA to access newer features, although we recommend the STABLE branch for most use cases.

var encoding = bitmovin.Encoding.Encoding.Create(new Encoding.Encoding
{
    Name = "Live Stream C#",
    CloudRegion = EncodingCloudRegion.GOOGLE_EUROPE_WEST_1,
    EncoderVersion = "STABLE"
});

Create a Live Input

For live streams we currently support RTMP as the ingest protocol. Your account comes with a preconfigured RTMP input that you can retrieve with the line below:

var rtmpInput = bitmovin.Input.Rtmp.RetrieveList(0, 100)[0];

Create Video Codec Configurations and Add Them to the Encoding

A codec configuration contains the encoding-related settings for a video or audio rendition. You need to link the codec configuration to a stream of your encoding, which connects an input stream with the codec configuration. For example, linking your input video stream to an H.264 1080p codec configuration will encode that stream to 1080p H.264 output. The following example uses the AUTO selection mode with position 0, which links this configuration to the first video stream of the RTMP input. Besides H.264 we also support H.265 (HEVC) as a codec, and resolutions of 8K or even higher.

var videoConfig1080p = bitmovin.Codec.H264.Create(new H264VideoConfiguration
{
    Name = "H264_Profile_1080p",
    Profile = H264Profile.HIGH,
    Width = 1920,
    Height = 1080,
    Bitrate = 4800000,
    Rate = 30.0f
});
var videoStream1080p = bitmovin.Encoding.Encoding.Stream.Create(encoding.Id,
    CreateStream(rtmpInput, "live", 0, videoConfig1080p, SelectionMode.AUTO));

Similar to the code above, you can add more video codec configurations and attach them to streams of your encoding to generate alternative renditions (e.g., 720p, 360p, etc.). In addition to the video renditions you may also want to add an audio track. Audio works in much the same way as video, as you can see in the example below:

var audioConfig = bitmovin.Codec.Aac.Create(new AACAudioConfiguration
{
    Name = "AAC_Profile_128k",
    Bitrate = 128000,
    Rate = 48000
});
var audioStream = bitmovin.Encoding.Encoding.Stream.Create(encoding.Id,
    CreateStream(rtmpInput, "live", 0, audioConfig, SelectionMode.AUTO));
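
The snippets above rely on a small CreateStream helper taken from the example project. A minimal sketch of what it might look like (the exact model and base class names, such as Input and CodecConfig, are assumptions and may differ between client versions):

private static Encoding.Stream CreateStream(Input input, string inputPath, int position,
    CodecConfig codecConfig, SelectionMode selectionMode)
{
    // Select one stream (e.g., the first video or audio track) from the RTMP input
    var inputStream = new InputStream
    {
        InputId = input.Id,
        InputPath = inputPath,
        Position = position,
        SelectionMode = selectionMode
    };

    // Connect the selected input stream with the codec configuration
    return new Encoding.Stream
    {
        InputStreams = new List<InputStream> { inputStream },
        CodecConfigId = codecConfig.Id
    };
}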

Mux the Encoded Data for MPEG-DASH and HLS

In order to create MPEG-DASH and HLS content, the encoded data needs to be packaged accordingly. In the following lines of code we define segmented fMP4 for MPEG-DASH and segmented TS for HLS. Here you also define how long a single segment should be. This can be an important parameter when creating video clips out of your live stream: the smaller this value, the more accurately you can “cut” the video clip (e.g., with 4-second segments a clip boundary can be off by up to 4 seconds). It is also possible to create VoD manifests out of the encoded data created from the live stream, as explained in our blog post: Live-to-VoD.

Note that we also define where the segments should be stored in your output bucket, giving you full control over the output location of the video and audio streams. If you added multiple video renditions, you also need to create an fMP4 muxing for each rendition.

First we will create the required fMP4 muxings for MPEG-DASH:

var videoFMP4Muxing1080p = bitmovin.Encoding.Encoding.Fmp4.Create(encoding.Id,
    CreateFMP4Muxing(videoStream1080p, output, OUTPUT_PATH + "video/1080p_dash", segmentLength));
var audioFMP4Muxing = bitmovin.Encoding.Encoding.Fmp4.Create(encoding.Id,
    CreateFMP4Muxing(audioStream, output, OUTPUT_PATH + "audio/128kbps_dash", segmentLength));

The following shows the same for segmented TS muxings:

var videoTsMuxing1080p = bitmovin.Encoding.Encoding.Ts.Create(encoding.Id,
    CreateTsMuxing(videoStream1080p, output, OUTPUT_PATH + "video/1080p_hls", segmentLength));
var audioTsMuxing = bitmovin.Encoding.Encoding.Ts.Create(encoding.Id,
    CreateTsMuxing(audioStream, output, OUTPUT_PATH + "audio/128kbps_hls", segmentLength));
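
Like CreateStream, the CreateFMP4Muxing and CreateTsMuxing helpers come from the example project. A rough sketch of CreateFMP4Muxing under the same assumptions as above (CreateTsMuxing is analogous, returning a TS muxing instead; the FMP4Muxing and MuxingStream class names are assumptions):

private static FMP4Muxing CreateFMP4Muxing(Encoding.Stream stream, GcsOutput output,
    string outputPath, double segmentLength)
{
    return new FMP4Muxing
    {
        // Segment duration in seconds, as discussed above
        SegmentLength = segmentLength,
        // Where the segments of this muxing are written in the output bucket
        Outputs = new List<Encoding.Output>
        {
            new Encoding.Output
            {
                OutputPath = outputPath,
                OutputId = output.Id,
                Acl = new List<Acl> { new Acl { Permission = Permission.PUBLIC_READ } }
            }
        },
        // The encoded stream (video or audio) that should be packaged
        Streams = new List<MuxingStream> { new MuxingStream { StreamId = stream.Id } }
    };
}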

Define the MPEG-DASH Live Manifest

In order to play back MPEG-DASH content we also need a manifest. With the Bitmovin API you have full control over creating manifests, e.g., you can create multiple manifests with different rendition sets. You can also create VoD manifests out of the encoded data created from the live stream, as covered in our Live-to-VoD blog post.

When creating the MPEG-DASH manifest you also specify the output as well as the location and filename of the manifest:

var manifestOutput = new Encoding.Output
{
    OutputPath = OUTPUT_PATH,
    OutputId = output.Id,
    Acl = new List<Acl> {new Acl {Permission = Permission.PUBLIC_READ}}
};

var manifestDash = bitmovin.Manifest.Dash.Create(new Dash
{
    Name = "MPEG-DASH Live Manifest",
    ManifestName = "stream.mpd",
    Outputs = new List<Encoding.Output> { manifestOutput }
});

Next, define the default period as well as the video and audio adaptation sets. In the audio adaptation set you can define the language of the audio track.

var period = bitmovin.Manifest.Dash.Period.Create(manifestDash.Id, new Period());
var videoAdaptationSet =
    bitmovin.Manifest.Dash.VideoAdaptationSet.Create(manifestDash.Id, period.Id, new VideoAdaptationSet());
var audioAdaptationSet = bitmovin.Manifest.Dash.AudioAdaptationSet.Create(manifestDash.Id, period.Id,
    new AudioAdaptationSet { Lang = "en" });

Add the created fMP4 muxings to the adaptation sets. You also need to define the path to the segments, relative to the manifest output location:

bitmovin.Manifest.Dash.Fmp4.Create(manifestDash.Id, period.Id, videoAdaptationSet.Id,
    new Manifest.Fmp4
    {
        Type = SegmentScheme.TEMPLATE,
        EncodingId = encoding.Id,
        MuxingId = videoFMP4Muxing1080p.Id,
        SegmentPath = "video/1080p_dash"
    });
bitmovin.Manifest.Dash.Fmp4.Create(manifestDash.Id, period.Id, audioAdaptationSet.Id,
    new Manifest.Fmp4
    {
        Type = SegmentScheme.TEMPLATE,
        EncodingId = encoding.Id,
        MuxingId = audioFMP4Muxing.Id,
        SegmentPath = "audio/128kbps_dash"
    });

Define the HLS Live Manifest

The definition of the HLS live manifest works similarly to MPEG-DASH. First we define the output location and name of the HLS live manifest:

var manifestHls = bitmovin.Manifest.Hls.Create(new Hls
{
    Name = "HLS Live Manifest",
    ManifestName = "stream.m3u8",
    Outputs = new List<Encoding.Output> {manifestOutput}
});

Add the audio TS muxing as a media info to the HLS manifest and give it a group name. Here you also define the name of the playlist file that will be created for the audio segments:

var mediaInfo = new MediaInfo
{
    GroupId = "audio",
    Name = "English",
    Uri = "audio.m3u8",
    Type = MediaType.AUDIO,
    SegmentPath = "audio/128kbps_hls/",
    StreamId = audioStream.Id,
    MuxingId = audioTsMuxing.Id,
    EncodingId = encoding.Id,
    Language = "en",
    AssocLanguage = "en",
    Autoselect = false,
    IsDefault = false,
    Forced = false
};
bitmovin.Manifest.Hls.AddMediaInfo(manifestHls.Id, mediaInfo);

Next, create a variant stream for each video rendition and link it to the defined audio group. Also define the name of the video playlist file for this rendition:

bitmovin.Manifest.Hls.AddStreamInfo(manifestHls.Id, new StreamInfo
{
    Uri = "video_1080.m3u8",
    EncodingId = encoding.Id,
    StreamId = videoStream1080p.Id,
    MuxingId = videoTsMuxing1080p.Id,
    Audio = "audio",
    SegmentPath = "video/1080p_hls/"
});

Start the Live Stream

Finally, we can start the live stream, passing both created manifests in the start call. You have the option to set live-specific options such as the timeshift parameter, which defines how many seconds a user will be able to seek back in time. In this example we allow users to seek back 5 minutes (300 seconds).

For MPEG-DASH you can also define how far behind the real live signal the player will start to play back the segments. If you are not aiming for low-latency live streams, choose a value between 60 and 120 seconds to give the player enough headroom for buffering.

bitmovin.Encoding.Encoding.StartLive(encoding.Id, new StartLiveEncodingRequest
{
    StreamKey = "yourStreamKey",
    HlsManifests = new List<LiveHlsManifest>
    {
        new LiveHlsManifest
        {
            ManifestId = manifestHls.Id,
            Timeshift = 300
        }
    },
    DashManifests = new List<LiveDashManifest>
    {
        new LiveDashManifest
        {
            ManifestId = manifestDash.Id,
            Timeshift = 300,
            LiveEdgeOffset = 90
        }
    }
});

Get Live Stream Details and Ingest Point

Now that you have started the live stream, you need to wait for it to be ready before you can ingest your RTMP stream. Once it is ready you can easily query the live stream details and print them to the console:

var liveEncoding = bitmovin.Encoding.Encoding.RetrieveLiveDetails(encoding.Id);
Console.WriteLine("Live stream started");
Console.WriteLine("Encoding ID: {0}", encoding.Id);
Console.WriteLine("IP: {0}", liveEncoding.EncoderIp);
Console.WriteLine("Rtmp URL: rtmp://{0}/live", liveEncoding.EncoderIp);
Console.WriteLine("Stream Key: {0}", liveEncoding.StreamKey);

The console output will contain the RTMP push URL and the stream key. You can use a mezzanine encoder from Elemental, Teradek, Teracue, etc., or software like the popular OBS Studio or FFmpeg. As mentioned above, choose your encoding region wisely to get a stable RTMP ingest from your location. Should the RTMP signal to our ingest point be interrupted, we will continue to stream the last frame we received from your encoder as a still image, so playback of the stream is not interrupted. As soon as your encoder is able to reconnect to our ingest point, we will continue to stream your content.

Shutdown the Live Stream

After the live event is finished you need to shut down the live stream to avoid unnecessary costs in your account. The following line of code shows how to stop a live stream:

bitmovin.Encoding.Encoding.StopLive(encoding.Id);