using this tool.
If you want to support most browsers, you can re-encode the stream with the H264 and AAC codecs, for instance by using FFmpeg:
ffmpeg -i rtsp://original-source \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -b:v 600k \
-c:a aac -b:a 160k \
-f rtsp rtsp://localhost:8554/mystream
In order to correctly display Low-Latency HLS streams in Safari running on Apple devices (iOS or macOS), a TLS certificate is needed and can be generated with OpenSSL:
openssl genrsa -out server.key 2048
openssl req -new -x509 -sha256 -key server.key -out server.crt -days 3650
Set the `hlsEncryption`, `hlsServerKey` and `hlsServerCert` parameters in the configuration file:
hlsEncryption: yes
hlsServerKey: server.key
hlsServerCert: server.crt
Keep also in mind that not all H264 video streams can be played on Apple devices due to some intrinsic properties (distance between I-Frames, profile). If the video can't be played correctly, you can either:
- re-encode it by following the instructions in this README
- disable the Low-Latency variant of HLS and go back to the legacy variant:
hlsVariant: mpegts
In HLS, latency is introduced since a client must wait for the server to generate segments before downloading them. This latency amounts to 500ms-3s when the Low-Latency HLS variant is enabled (as it is by default); otherwise it amounts to 1-15 seconds.
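As a back-of-the-envelope illustration of where those numbers come from: a player buffers a few segments (legacy HLS) or a few parts (Low-Latency HLS) before starting playback, so latency roughly scales with the duration of the buffered media. The buffer counts and durations below are illustrative assumptions, not fixed values:

```python
def legacy_hls_latency(segment_duration_s: float, buffered_segments: int = 3) -> float:
    """Approximate startup latency of legacy (MPEG-TS) HLS."""
    return segment_duration_s * buffered_segments

def ll_hls_latency(part_duration_s: float, buffered_parts: int = 3) -> float:
    """Approximate startup latency of Low-Latency HLS."""
    return part_duration_s * buffered_parts

# With 4s segments, legacy HLS falls in the 1-15s range quoted above:
print(legacy_hls_latency(4.0))           # 12.0
# With 200ms parts, LL-HLS lands in the 500ms-3s range:
print(round(ll_hls_latency(0.2), 3))     # 0.6
```

This is why shortening segments (and, as explained below, the IDR frame interval that constrains segment duration) is the main lever for reducing latency.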
To decrease the latency, you need to shorten the interval between IDR frames:
- if the stream is hardware-generated (i.e. by a camera), there's usually a setting called Key-Frame Interval in the camera configuration page
- otherwise, the stream must be re-encoded. It's possible to tune the IDR frame interval by using FFmpeg's -g option:
ffmpeg -i rtsp://original-stream -c:v libx264 -pix_fmt yuv420p -preset ultrafast -b:v 600k -max_muxing_queue_size 1024 -g 30 -f rtsp rtsp://localhost:$RTSP_PORT/compressed
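Note that the -g value is a frame count, not a duration: to target a given interval between IDR frames, multiply it by the stream's frame rate. A small sketch (the helper name and the frame rates are illustrative):

```python
def gop_size(fps: float, idr_interval_s: float) -> int:
    """Frame count to pass to FFmpeg's -g option for a desired
    interval (in seconds) between IDR frames."""
    return round(fps * idr_interval_s)

# One IDR frame per second on a 30fps stream matches the "-g 30"
# in the command above; a 25fps stream would need "-g 25".
print(gop_size(30, 1.0))  # 30
print(gop_size(25, 1.0))  # 25
```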
FFmpeg can read a stream from the server in several ways (RTSP, RTMP, HLS, WebRTC with WHEP, SRT). The recommended one consists in reading with RTSP:
ffmpeg -i rtsp://localhost:8554/mystream -c copy output.mp4
The RTSP protocol supports several underlying transport protocols, each with its own characteristics (see RTSP-specific features). You can set the transport protocol by using the `rtsp_transport` flag:
ffmpeg -rtsp_transport tcp -i rtsp://localhost:8554/mystream -c copy output.mp4
GStreamer can read a stream from the server in several ways (RTSP, RTMP, HLS, WebRTC with WHEP, SRT). The recommended one consists in reading with RTSP:
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/mystream latency=0 ! decodebin ! autovideosink
The RTSP protocol supports several underlying transport protocols, each with its own characteristics (see RTSP-specific features). You can change the transport protocol by using the `protocols` property:
gst-launch-1.0 rtspsrc protocols=tcp location=rtsp://127.0.0.1:8554/mystream latency=0 ! decodebin ! autovideosink
If encryption is enabled, set `tls-validation-flags` to `0`:
gst-launch-1.0 rtspsrc tls-validation-flags=0 location=rtsps://ip:8322/...
GStreamer also supports reading streams with WebRTC/WHEP, although track codecs must be specified in advance through the `video-caps` and `audio-caps` parameters. Furthermore, if audio is not present, `audio-caps` must be set anyway and must point to a PCMU codec. For instance, the command for reading a video-only H264 stream is:
gst-launch-1.0 whepsrc whep-endpoint=http://127.0.0.1:8889/stream/whep use-link-headers=true \
video-caps="application/x-rtp,media=video,encoding-name=H264,payload=127,clock-rate=90000" \
audio-caps="application/x-rtp,media=audio,encoding-name=PCMU,payload=0,clock-rate=8000" \
! rtph264depay ! decodebin ! autovideosink
While the command for reading an audio-only Opus stream is:
gst-launch-1.0 whepsrc whep-endpoint="http://127.0.0.1:8889/stream/whep" use-link-headers=true \
audio-caps="application/x-rtp,media=audio,encoding-name=OPUS,payload=111,clock-rate=48000,encoding-params=(string)2" \
! rtpopusdepay ! decodebin ! autoaudiosink
While the command for reading a H264 and Opus stream is:
gst-launch-1.0 whepsrc whep-endpoint=http://127.0.0.1:8889/stream/whep use-link-headers=true \
video-caps="application/x-rtp,media=video,encoding-name=H264,payload=127,clock-rate=90000" \
audio-caps="application/x-rtp,media=audio,encoding-name=OPUS,payload=111,clock-rate=48000,encoding-params=(string)2" \
! decodebin ! autovideosink
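Under the hood, every WHEP client (including whepsrc above and the Unity example later in this document) performs the same HTTP exchange: an SDP offer is POSTed to the WHEP endpoint with the application/sdp content type, and the SDP answer comes back in the response body. A minimal sketch of that request using the Python standard library (the endpoint URL and the offer string are placeholders; a real session also needs a WebRTC stack to generate the offer and consume the answer):

```python
import urllib.request

def build_whep_request(endpoint: str, sdp_offer: str) -> urllib.request.Request:
    """Prepare the HTTP POST that starts a WHEP session."""
    return urllib.request.Request(
        endpoint,
        data=sdp_offer.encode(),
        headers={"Content-Type": "application/sdp"},
        method="POST",
    )

req = build_whep_request("http://127.0.0.1:8889/stream/whep", "v=0\r\n...")
# Sending it with urllib.request.urlopen(req) returns the SDP answer
# in the response body, which is then set as the remote description
# of the peer connection.
print(req.get_method(), req.get_header("Content-type"))  # POST application/sdp
```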
VLC can read a stream from the server in several ways (RTSP, RTMP, HLS, SRT). The recommended one consists in reading with RTSP:
vlc --network-caching=50 rtsp://localhost:8554/mystream
The RTSP protocol supports several underlying transport protocols, each with its own characteristics (see RTSP-specific features).
In order to use the TCP transport protocol, use the `--rtsp-tcp` flag:
vlc --network-caching=50 --rtsp-tcp rtsp://localhost:8554/mystream
In order to use the UDP-multicast transport protocol, append `?vlcmulticast` to the URL:
vlc --network-caching=50 rtsp://localhost:8554/mystream?vlcmulticast
The VLC shipped with Ubuntu 21.10 doesn’t support playing RTSP due to a license issue (see here and here). To fix the issue, remove the default VLC instance and install the snap version:
sudo apt purge -y vlc
snap install vlc
At the moment VLC doesn’t support reading encrypted RTSP streams. However, you can use a proxy like stunnel or nginx or a local MediaMTX instance to decrypt streams before reading them.
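For instance, a stunnel client configuration along these lines terminates TLS locally and exposes the stream over plain RTSP (the local port 8555 and the server address are illustrative placeholders; 8322 is the default RTSPS port):

```
[rtsps]
client = yes
accept = 127.0.0.1:8555
connect = mediamtx-ip:8322
```

VLC can then read the decrypted stream from rtsp://127.0.0.1:8555/mystream.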
OBS Studio can read streams from the server by using the RTSP protocol.
Open OBS, click on Add Source, then Media Source, then OK. Uncheck Local file and insert in Input:
rtsp://localhost:8554/stream
Then press OK.
Software written with the Unity Engine can read a stream from the server by using the WebRTC protocol.
Create a new Unity project or open an existing one.
Open Window -> Package Manager, click on the plus sign, Add Package by name… and insert `com.unity.webrtc`. Wait for the package to be installed.
In the Project window, under `Assets`, create a new C# script called `WebRTCReader.cs` with this content:
```csharp
using System.Collections;
using UnityEngine;
using Unity.WebRTC;

public class WebRTCReader : MonoBehaviour
{
    // WHEP endpoint of the stream to read.
    public string url = "http://localhost:8889/stream/whep";

    private RTCPeerConnection pc;
    private MediaStream receiveStream;

    void Start()
    {
        UnityEngine.UI.RawImage rawImage = gameObject.GetComponentInChildren<UnityEngine.UI.RawImage>();
        AudioSource audioSource = gameObject.GetComponentInChildren<AudioSource>();

        pc = new RTCPeerConnection();
        receiveStream = new MediaStream();

        pc.OnTrack = e =>
        {
            receiveStream.AddTrack(e.Track);
        };

        receiveStream.OnAddTrack = e =>
        {
            if (e.Track is VideoStreamTrack videoTrack)
            {
                // Render incoming video frames onto the Raw Image.
                videoTrack.OnVideoReceived += (tex) =>
                {
                    rawImage.texture = tex;
                };
            }
            else if (e.Track is AudioStreamTrack audioTrack)
            {
                // Play incoming audio through the Audio Source.
                audioSource.SetTrack(audioTrack);
                audioSource.loop = true;
                audioSource.Play();
            }
        };

        // Declare receive-only audio and video transceivers.
        RTCRtpTransceiverInit init = new RTCRtpTransceiverInit();
        init.direction = RTCRtpTransceiverDirection.RecvOnly;
        pc.AddTransceiver(TrackKind.Audio, init);
        pc.AddTransceiver(TrackKind.Video, init);

        StartCoroutine(WebRTC.Update());
        StartCoroutine(createOffer());
    }

    private IEnumerator createOffer()
    {
        var op = pc.CreateOffer();
        yield return op;
        if (op.IsError) {
            Debug.LogError("CreateOffer() failed");
            yield break;
        }
        yield return setLocalDescription(op.Desc);
    }

    private IEnumerator setLocalDescription(RTCSessionDescription offer)
    {
        var op = pc.SetLocalDescription(ref offer);
        yield return op;
        if (op.IsError) {
            Debug.LogError("SetLocalDescription() failed");
            yield break;
        }
        yield return postOffer(offer);
    }

    private IEnumerator postOffer(RTCSessionDescription offer)
    {
        // Send the SDP offer to the WHEP endpoint; the response body
        // contains the SDP answer.
        var content = new System.Net.Http.StringContent(offer.sdp);
        content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/sdp");
        var client = new System.Net.Http.HttpClient();

        var task = System.Threading.Tasks.Task.Run(async () => {
            var res = await client.PostAsync(new System.UriBuilder(url).Uri, content);
            res.EnsureSuccessStatusCode();
            return await res.Content.ReadAsStringAsync();
        });
        yield return new WaitUntil(() => task.IsCompleted);
        if (task.Exception != null) {
            Debug.LogError(task.Exception);
            yield break;
        }

        yield return setRemoteDescription(task.Result);
    }

    private IEnumerator setRemoteDescription(string answer)
    {
        RTCSessionDescription desc = new RTCSessionDescription();
        desc.type = RTCSdpType.Answer;
        desc.sdp = answer;
        var op = pc.SetRemoteDescription(ref desc);
        yield return op;
        if (op.IsError) {
            Debug.LogError("SetRemoteDescription() failed");
            yield break;
        }
        yield break;
    }

    void OnDestroy()
    {
        pc?.Close();
        pc?.Dispose();
        receiveStream?.Dispose();
    }
}
```
Edit the `url` variable according to your needs.
In the Hierarchy window, find or create a scene. Inside the scene, add a Canvas. Inside the Canvas, add a Raw Image and an Audio Source. Then add the `WebRTCReader.cs` script as a component of the Canvas by dragging it into the Inspector window. Finally, press the Play button at the top of the page.
Web browsers can read a stream from the server in several ways (WebRTC or HLS).
You can read a stream by using the WebRTC protocol by visiting the web page:
http://localhost:8889/mystream
This web page can be embedded into another web page by using an iframe:
<iframe src="http://mediamtx-ip:8889/mystream" scrolling="no"></iframe>
For more advanced setups, you can create and serve a custom web page by starting from the source code of the WebRTC read page. In particular, there’s a ready-to-use, standalone JavaScript class for reading streams with WebRTC, available in reader.js.
Web browsers can also read a stream with the HLS protocol. Latency is higher, but there are fewer problems related to connectivity between server and clients; furthermore, the server load can be balanced by using a common HTTP CDN (like Cloudflare or CloudFront), which allows handling an unlimited number of readers. Visit the web page:
http://localhost:8888/mystream
This web page can be embedded into another web page by using an iframe:
<iframe src="http://mediamtx-ip:8888/mystream" scrolling="no"></iframe>
For more advanced setups, you can create and serve a custom web page by starting from the source code of the HLS read page.