Updated: Jan 11, 2019
When you watch a live broadcast on the Internet, it usually takes about 30 seconds for the streamer's scenes to reach you. That is, even if you send a chat message to the streamer right now, you can't see the streamer's response until 30 seconds have passed, so you have to keep watching for 30 seconds after sending a message in order to communicate with the streamer. The chat message is plain text, so it arrives immediately, but the streamer's response comes back as a video stream, which takes 30 seconds to reach you. This delay is called latency.
Until now, broadcasting has been a one-way transmission from the content provider to the viewer, so a 30-second latency caused no major inconvenience and was not a big deal. However, interactive broadcasting, an increasing trend these days, requires viewers to participate and intervene in the broadcast in near real time, as in games or stock trading. With a 30-second latency like that of existing broadcasts, viewers would find the experience too frustrating and would stop using it, and such a service could not survive.
Why does latency occur during streaming?
Commonly, a live stream works according to the following flow. Please refer to the inserted figure.
1. Video source: the original video, such as your camera or game screen.
2. Encoder: encodes the original video and transmits it over the Internet. The encoder output usually uses the RTMP or MPEG-TS/UDP protocol.
3. Ingest: receives the video from the encoder.
4. Transcoding: converts the video quality by changing the codec or adjusting the bitrate and resolution.
5. Packaging: packages the video in a format the player can play. HLS or MPEG-DASH is most often used when playing media on the web or mobile without plug-ins, so the video is usually packaged in TS format, generating TS chunks of N seconds each.
6. The chunks packaged in the Packaging section are delivered to the Origin server.
7. They pass through the Edge server.
8. The Player plays them.
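To make the Packaging step above concrete, a minimal HLS media playlist with three 10-second TS chunks might look like the following (the segment file names are hypothetical, chosen only for illustration):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXTINF:10.0,
segment2.ts
```

The player downloads this playlist from the edge server, fetches the listed chunks, and periodically re-requests the playlist to discover new chunks.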
Latency accumulates little by little across all of these sections, but the biggest cause is the length of the chunks created in the Packaging section. HLS has traditionally recommended buffering three chunks of about 10 seconds each, which adds up to a 30-second latency. In other words, latency grows in proportion to the chunk length.
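The contribution of chunk length can be sketched with simple arithmetic; the numbers below are just the traditional HLS defaults mentioned above:

```python
def chunk_latency(chunk_count: int, chunk_seconds: float) -> float:
    """Latency contributed by chunked packaging: the player typically
    buffers `chunk_count` chunks of `chunk_seconds` each before playing."""
    return chunk_count * chunk_seconds

# Traditional HLS recommendation: 3 chunks of about 10 seconds each.
print(chunk_latency(3, 10.0))  # 30.0 seconds
```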
The next biggest cause is buffering in components such as the encoder, ingest, transcoder, and player. Buffering is required to absorb the frequent transmission interruptions caused by network jitter, but it adds latency equal to the buffered duration. Therefore, buffering should be minimized in every section to reduce latency.
What is Ultra-Low Latency?
Chunk-based streaming such as HLS or MPEG-DASH has a latency of 30 seconds or more, but reducing the chunk size, at the cost of some reliability, can bring latency down to about 10 seconds. This is how major streaming service providers support streaming with a latency of 6 to 10 seconds.
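Using the same arithmetic as before, shrinking the chunks shows how providers reach the 6-to-10-second range (the 2-second figure is illustrative, not a quote from any provider):

```python
def chunk_latency(chunk_count: int, chunk_seconds: float) -> float:
    """Player buffers chunk_count chunks of chunk_seconds each."""
    return chunk_count * chunk_seconds

print(chunk_latency(3, 10.0))  # traditional HLS: 30.0 seconds
print(chunk_latency(3, 2.0))   # reduced chunks:   6.0 seconds
```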
Using RTMP or the Smooth Streaming protocol instead of traditional chunked delivery can bring latency down to about 2 to 3 seconds; this is usually called Low Latency. Going further, delivering video with less than a second of latency is called Ultra-Low Latency, a new category that has emerged with technological advances.
What do you need for ultra-low latency?
Improving the protocol between the Edge and the Player is the most important step in providing an ultra-low latency streaming service. If the chunks are made too small to implement ultra-low latency in a traditional streaming protocol such as HLS or MPEG-DASH, the player must request a new chunk list from the edge server very frequently, which severely degrades the quality of service. Of course, improving device performance and reducing buffering in all the sections described in "Why does latency occur during streaming?" above must come first.
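The playlist-polling problem can be quantified with a rough model: in chunked protocols the player re-requests the chunk list about once per chunk duration, so shrinking chunks multiplies the request load on the edge server (this is a sketch for intuition, not a measurement):

```python
def playlist_requests_per_minute(chunk_seconds: float) -> float:
    """Rough model: the player polls the chunk list about once per chunk."""
    return 60.0 / chunk_seconds

print(playlist_requests_per_minute(10.0))  # 6.0 requests/minute per viewer
print(playlist_requests_per_minute(1.0))   # 60.0 requests/minute per viewer
```

With 1-second chunks, every viewer polls the edge ten times as often as with 10-second chunks, which is why simply shrinking chunks does not scale.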
Recently, the media industry has come to regard WebRTC and CMAF (delivered over MPEG-DASH) as new alternatives, because both enable ultra-low latency of less than one second and both can play back without plug-ins on most modern browsers.
WebRTC is a free, open-source project that provides real-time communication for web browsers and mobile applications, standardized by the W3C and IETF. It is a P2P (peer-to-peer) protocol created for direct communication between devices, but it can also be applied to streaming servers.
CMAF (Common Media Application Format) is backed by companies such as Google and Akamai. It uses MPEG-DASH for delivery, but restructures the chunk format for low latency.
How can Ultra-low latency streaming be created?
Implementing the protocols and the many features needed for an ultra-low latency streaming service is complicated. That is why AirenSoft has released OvenMediaEngine (OME), an ultra-low latency streaming server, as open source to contribute to the development of the media industry. OME supports RTMP input and WebRTC output for ultra-low latency, and also supports traditional outputs such as HLS, MPEG-DASH, and RTMP for general streaming. Moreover, we are developing support for more protocols, such as CMAF and SRT, to improve usability, and we are actively adding the features needed to operate a streaming service.
OME is implemented as a single system covering everything from the Ingest to the Edge server section. Since the live transcoder is built into the streaming server, anyone can use it with a simple configuration and no separate systems.
As you can see on AirenSoft's GitHub, OvenPlayer has been released alongside OME. OvenPlayer is an HTML5 player and another of AirenSoft's open-source projects. It supports WebRTC, MPEG-DASH, HLS, and RTMP, and you can embed it into your web page by inserting just a few lines of source code.
In future posts, we will take time to explain WebRTC and other technologies in depth on our blog. Thank you!