Updated: Aug 12
When you watch a live broadcast on the Internet, the scene the streamer captures usually takes about 30 seconds to reach you. Even if you send a chat message to the streamer right now, you won't see the streamer's response until those 30 seconds have elapsed, so you have to keep watching for 30 seconds after sending a message in order to communicate with the streamer. The chat message is plain text and arrives almost immediately, but the streamer's response comes back as a video stream, so it takes 30 seconds to reach you. This delay is called latency.
Traditional online broadcasting was one-way, from the content provider to the viewer, so a 30-second latency caused no major inconvenience. However, interactive broadcasting, an increasing trend these days, requires viewers to participate and intervene in near real time, for example in game or stock-trading streams. With 30 seconds of latency, as in conventional broadcasting, viewers find the experience frustrating and will simply stop using it, so such a service cannot survive.
Why does latency occur during streaming?
In general, live streaming follows the flow below. Please refer to the inserted figure.
The video source is the original video, such as your camera or game screen.
The encoder encodes the original video and transmits it over the Internet. The protocols typically used for encoder output are RTMP or MPEG-TS over UDP.
The ingest section receives the video from the encoder.
The transcoding section converts the video quality by changing the codec or adjusting the bit rate and resolution.
The packaging section packages the video in a format the player can play. HLS or MPEG-Dash is most often used for playing media on the web or mobile without plug-ins, so the video is usually packaged in TS format as chunks of N seconds each.
The chunks created in the packaging section are delivered to the Origin server.
The chunks pass through the Edge server.
Finally, the Player streams the video.
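To see how delay builds up across these stages, here is a minimal sketch. The per-stage numbers are illustrative assumptions for a typical chunk-based setup, not measurements:

```javascript
// Illustrative (assumed) latency contribution of each pipeline stage, in seconds.
const stages = [
  { name: "Encoder buffering", seconds: 1.0 },
  { name: "Ingest", seconds: 0.5 },
  { name: "Transcoding", seconds: 1.0 },
  { name: "Packaging (3 x 10s HLS chunks)", seconds: 30.0 },
  { name: "Origin/Edge delivery", seconds: 0.5 },
  { name: "Player buffering", seconds: 2.0 },
];

// End-to-end latency is roughly the sum of the per-stage delays.
const totalLatency = stages.reduce((sum, s) => sum + s.seconds, 0);
console.log(`Estimated end-to-end latency: ${totalLatency} seconds`);
```

Even with these rough numbers, it is clear that packaging dominates: every other stage combined contributes only a few seconds.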
Latency accumulates little by little across many sections, but the biggest cause is the length of the chunks created in the Packaging section. HLS has traditionally recommended buffering three chunks of about 10 seconds each, which adds up to 30 seconds of latency. In other words, latency grows in proportion to chunk length.
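For example, a hypothetical HLS media playlist with three 10-second TS chunks looks like the fragment below (the file names and sequence numbers are made up). A player that buffers all three segments before starting playback is already about 30 seconds behind the live edge:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:100
#EXTINF:10.0,
chunk-100.ts
#EXTINF:10.0,
chunk-101.ts
#EXTINF:10.0,
chunk-102.ts
```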
The next biggest cause is buffering in devices such as the Encoder, Ingest, Transcoding, or Player. Buffering is required to absorb frequent transmission interruptions caused by network jitter, but it adds latency equal to the buffered duration. Therefore, buffering should be minimized in every section to reduce latency.
What is Sub-Second Latency?
Chunk-based streaming, such as HLS or MPEG-Dash, has a latency of 30 seconds or more, but reducing the chunk size, at some cost to reliability, can cut latency to about 10 seconds. This is why major streaming service providers can support streaming with a latency of 6 to 10 seconds.
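As a rough model, ignoring encoding and network delay, chunked-streaming latency scales with the chunk length times the number of chunks the player buffers:

```javascript
// Rough approximation: player-side latency for chunked streaming.
// Real-world latency also includes encoding, transcoding, and network delay.
function chunkedLatency(chunkSeconds, chunksBuffered) {
  return chunkSeconds * chunksBuffered;
}

console.log(chunkedLatency(10, 3)); // classic HLS: 3 x 10s chunks -> 30 seconds
console.log(chunkedLatency(2, 3));  // smaller chunks: 3 x 2s chunks -> 6 seconds
```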
Using RTMP or the Smooth Streaming protocol instead of traditional chunked delivery can bring latency down to about 2 or 3 seconds; this is usually called Low Latency. Delivering video with less than one second of latency is called Ultra-Low Latency (Sub-Second Latency), a new category made possible by recent technological advances.
What is needed for Sub-Second Latency?
Improving the protocol between the Edge and Player sections is the most important part of providing a Sub-Second Latency streaming service. When the chunk size of a traditional streaming protocol such as HLS or MPEG-Dash is reduced excessively to achieve low latency, the Player must request a new chunk list from the Edge server very frequently, which severely degrades the quality of service. Of course, improving device performance and reducing buffering in every section, as described above in "Why does latency occur during streaming?", must come first.
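The cost of shrinking chunks can be sketched with simple arithmetic. This is a simplified model in which, per viewer, the player fetches one refreshed chunk list and one media chunk per chunk interval; real players behave in more complicated ways:

```javascript
// Simplified model: per viewer, the player fetches one refreshed chunk list
// and one media chunk roughly every chunk interval.
function requestsPerMinute(chunkSeconds) {
  const fetchesPerChunk = 2; // 1 playlist request + 1 chunk request (assumption)
  return (60 / chunkSeconds) * fetchesPerChunk;
}

console.log(requestsPerMinute(10)); // 10s chunks -> 12 requests/min per viewer
console.log(requestsPerMinute(1));  // 1s chunks -> 120 requests/min per viewer
```

Cutting the chunk length from 10 seconds to 1 second multiplies the request load on the Edge server by ten, which is why a different protocol, rather than ever-smaller chunks, is needed for sub-second latency.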
Recently, the media industry has come to regard WebRTC and CMAF (delivered over MPEG-Dash) as new alternatives. Both enable Sub-Second Latency of less than one second, and both can be played back without plug-ins in most modern browsers.
WebRTC is a free, open-source project, standardized by the W3C and IETF, that provides real-time communication for web browsers and mobile applications. It is a P2P (peer-to-peer) protocol created for direct communication between devices, but it can also be applied to streaming servers.
CMAF (Common Media Application Format) is supported by companies such as Google and Akamai. It uses MPEG-Dash for delivery, but restructures the chunk format for low latency.
How can Sub-Second Latency Streaming be created?
Implementing the protocols and various features required for a Sub-Second Latency streaming service yourself is not easy. That is why AirenSoft has released OvenMediaEngine (OME), a large-scale, high-definition, sub-second-latency streaming server, as open source, to contribute to the development of the media industry. OME accepts WebRTC, RTMP, SRT, MPEG-2 TS, and RTSP as input, and provides WebRTC, Low Latency DASH, and Low Latency HLS (soon) as sub-second-latency output; traditional outputs such as HLS, MPEG-Dash, and RTMP are also available for general streaming. Moreover, we are still actively developing OME, adding the various functions needed to operate a streaming service.
OME is implemented as a single system covering everything from Ingest to the Edge server. Since the live transcoder is built into the streaming server, anyone can use it with simple settings, without a separate system.
As you can see on AirenSoft's GitHub, OvenPlayer has been released alongside OME. OvenPlayer is an HTML5 player and another of AirenSoft's open-source projects. It supports WebRTC, Low Latency DASH, MPEG-DASH, HLS, and RTMP, and you can embed the player into your web page by inserting just a few lines of source code.
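As a minimal sketch of such an embed, the snippet below builds a source list with a WebRTC source for sub-second playback and an HLS fallback. The element id, hostnames, ports, and URLs are placeholder assumptions, not real endpoints; check the OvenPlayer documentation for the exact options:

```javascript
// Placeholder source list for an OvenPlayer embed (URLs are assumptions).
const playerOptions = {
  sources: [
    // WebRTC source for sub-second latency playback.
    { label: "WebRTC", type: "webrtc", file: "wss://example.com:3333/app/stream" },
    // Traditional HLS source as a fallback.
    { label: "HLS", type: "hls", file: "https://example.com/app/stream/playlist.m3u8" },
  ],
};

// In the browser, after loading the OvenPlayer script, you would pass the
// options to the player, e.g.:
//   const player = OvenPlayer.create("player_id", playerOptions);
console.log(playerOptions.sources.length);
```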
In the future, we will take time on our blog to explain WebRTC and other technologies in depth. Thank you!