Separate RTSP payloads from gst-rtsp-server - gstreamer

I have an RTSP video source (h265) which I can display using VLC. I would like to split the stream into two, one at native resolution (encoded with h265) and the other at a new, lower resolution (encoded with h264). Both of the new streams should also be RTSP streams, viewable with VLC.
Due to bandwidth considerations, I can only connect a single client to the primary source.
So far, I have a working gst-rtsp-server setup, with a single media factory running this gst-launch string:
rtspsrc location=... ! rtph265depay ! h265parse ! tee name=t !
  queue ! rtph265pay name=pay1 pt=96
t. ! queue ! decodebin ! videoscale ! videorate !
  video/x-raw,framerate=30/1,width=640,height=480 !
  x264enc bitrate=500 speed-preset=superfast tune=zerolatency !
  h264parse ! rtph264pay name=pay0 pt=96
I set up a mount point for the media factory and can connect with VLC, e.g. "rtsp://127.0.0.1:8550/test". With this, I can only get whichever substream is pay0 in VLC. I can see that both substreams are working by changing which one is pay0. But how can I have VLC show pay1?
Alternatively, how can I tee the original video source and then have two different media factories (with different gst-launch strings) use the tee branches as their own sources?

Both streams are being sent to you at the same time.
The usual case for pay0 & pay1 is sending video & audio.
For your case, where you want 2 separate video streams, you will need to modify the code.
A simple example of what you want to achieve can be done by modifying the file at gst-rtsp-server/examples/test-launch.c:
factory = gst_rtsp_media_factory_new ();
gst_rtsp_media_factory_set_launch (factory, argv[1]);
gst_rtsp_media_factory_set_shared (factory, TRUE);
gst_rtsp_mount_points_add_factory (mounts, "/stream1", factory);
/* a second, separate factory is needed here; calling set_launch again
   on the first factory would overwrite its launch string for both mounts */
factory2 = gst_rtsp_media_factory_new ();
gst_rtsp_media_factory_set_launch (factory2, argv[2]);
gst_rtsp_media_factory_set_shared (factory2, TRUE);
gst_rtsp_mount_points_add_factory (mounts, "/stream2", factory2);
Then start with ./test-launch "rtspsrc location=... ! rtph265depay ! h265parse ! rtph265pay name=pay1 pt=96" "rtspsrc location=... ! rtph265depay ! h265parse ! decodebin ! videoscale ! videorate ! video/x-raw,framerate=30/1,width=640,height=480 ! x264enc bitrate=500 speed-preset=superfast tune=zerolatency ! h264parse ! rtph264pay name=pay0 pt=96"
You would then have 2 consumers on your camera though.
If you prefer to only consume once, it would be up to you to tee the stream and make it available as the source for your gst_rtsp_media_factory_set_launch pipelines, as sketched below.
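One way to sketch that (untested; the local ports, RTP caps and the config-interval setting are my assumptions, not from the question): receive from the camera exactly once in a separate process, tee the depayloaded H.265 and fan it out to two local UDP ports, then point one media factory at each port.

gst-launch-1.0 rtspsrc location=... ! rtph265depay ! h265parse config-interval=-1 ! tee name=t t. ! queue ! rtph265pay pt=96 ! udpsink host=127.0.0.1 port=5004 t. ! queue ! rtph265pay pt=96 ! udpsink host=127.0.0.1 port=5006

The two factory launch strings would then be along these lines:

( udpsrc port=5004 caps="application/x-rtp,media=video,encoding-name=H265,clock-rate=90000" ! rtph265depay ! h265parse ! rtph265pay name=pay0 pt=96 )

( udpsrc port=5006 caps="application/x-rtp,media=video,encoding-name=H265,clock-rate=90000" ! rtph265depay ! h265parse ! avdec_h265 ! videoscale ! videorate ! video/x-raw,framerate=30/1,width=640,height=480 ! x264enc bitrate=500 speed-preset=superfast tune=zerolatency ! h264parse ! rtph264pay name=pay0 pt=96 )

This way the camera sees a single consumer, regardless of how many RTSP clients attach to the two mounts.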

Related

GStreamer: get video from RTP, setting the format automatically

Let's say I've got a simple RTP video server:
ximagesrc ! video/x-raw,framerate=30/1 ! videoscale ! videoconvert ! x264enc tune=zerolatency bitrate=500 speed-preset=slow ! rtph264pay ! udpsink host=127.0.0.1 port=5000
And I'm receiving it just fine:
udpsrc port=5000 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! autovideosink
But if my server was sending H265 (or any other format), I'd have to use another pipeline. Considering there are a ton of possible formats, that is definitely something I'd like to avoid. Is there any way to get my video decoded from any format?
No. RTP streams are supposed to be initiated by the SDP and RTSP control protocols, which will tell you what kind of stream you are dealing with. If you want self-describing media, perhaps try looking at MPEG-TS instead.
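For illustration, a hedged sketch of the MPEG-TS route (standard elements, but these exact pipelines are untested assumptions, not from the original answer): the transport stream carries its own codec metadata, so the receiver can pick a decoder automatically.

gst-launch-1.0 ximagesrc ! video/x-raw,framerate=30/1 ! videoscale ! videoconvert ! x264enc tune=zerolatency bitrate=500 ! h264parse ! mpegtsmux ! udpsink host=127.0.0.1 port=5000

gst-launch-1.0 udpsrc port=5000 caps="video/mpegts" ! tsdemux ! decodebin ! videoconvert ! autovideosink

Swapping x264enc for another encoder on the sender requires no change on the receiver, which is the point.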

How to combine appsink and filesink using GStreamer?

I'm new to GStreamer and I'm trying to create a pipeline to display a video and record it at the same time. I've managed to make the display part using:
ss << "filesrc location=/home/videos/video1.avi ! avidemux name=demux demux.video_0 ! mpeg4videoparse ! avdec_mpeg4 ! nvvidconv ! video/x-raw,format=I420 ! appsink name=mysink";
Also, I've read that filesink location=somepath is used for saving data into a file, but I don't know how to combine it with the rest of the pipeline.
So, how do I use appsink and filesink in the same pipeline?
GStreamer offers a tee element for such cases. Note, however, that in most cases you will want a queue at the start of each branch of the tee to prevent deadlocks. E.g.
filesrc location=/home/videos/video1.avi ! avidemux name=demux demux.video_0 ! mpeg4videoparse ! avdec_mpeg4 ! nvvidconv ! video/x-raw,format=I420 ! tee name=mytee ! queue ! appsink name=mysink mytee. ! queue ! filesink location=out.raw
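If the pipeline is driven from C rather than gst-launch, a minimal sketch for consuming the appsink branch could look like the following (hedged: it assumes GStreamer 1.x, keeps the Jetson-specific nvvidconv from the question, and the frame-handling spot is only marked by a comment):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* Called for every decoded frame that reaches the appsink branch. */
static GstFlowReturn
on_new_sample (GstAppSink *sink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample (sink);
  if (sample == NULL)
    return GST_FLOW_ERROR;

  GstBuffer *buf = gst_sample_get_buffer (sample);
  GstMapInfo map;
  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    /* map.data / map.size hold one raw I420 frame; process it here */
    gst_buffer_unmap (buf, &map);
  }
  gst_sample_unref (sample);
  return GST_FLOW_OK;
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* same pipeline as above; emit-signals=true enables the callback */
  GstElement *pipeline = gst_parse_launch (
      "filesrc location=/home/videos/video1.avi ! avidemux name=demux "
      "demux.video_0 ! mpeg4videoparse ! avdec_mpeg4 ! nvvidconv ! "
      "video/x-raw,format=I420 ! tee name=mytee "
      "mytee. ! queue ! appsink name=mysink emit-signals=true "
      "mytee. ! queue ! filesink location=out.raw", NULL);

  GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "mysink");
  g_signal_connect (sink, "new-sample", G_CALLBACK (on_new_sample), NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (loop);  /* run until interrupted */

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (sink);
  gst_object_unref (pipeline);
  return 0;
}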

How to demux audio and video from rtspsrc and then save to file using matroska mux?

I have been working on an application where I use rtspsrc to gather audio and video from one network camera to another. However, I cannot watch the stream from the camera and thereby can't verify that the stream works as intended. To verify that the stream is correct, I want to record it on an SD card and then play the file on a computer. The problem is that I want the camera to do as much of the parsing, decoding and depayloading as possible, since that is the purpose of the application.
I thereby have to separate the audio and video streams with a demuxer, do the parsing, decoding etc., and thereafter mux them back into a Matroska file.
The video decoder has been omitted since it is not done yet for this camera.
Demux to live playback sink (works)
gst-launch-0.10 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! autoaudiosink d. ! rtph264depay ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
Multiple rtspsrc to matroska (works)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmovie.mkv rtspsrc location="rtsp://root:pass#192.168.0.91/axis-media/media.amp?resolution=1280x720" latency=0 ! rtph264depay ! h264parse ! mux.
Single rtspsrc to matroska (fails)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
The last example fails with the error message
WARNING: erroneous pipeline: link without source element
Have I misunderstood the usage of matroskamux, and why do the two examples above work but not the last?
The problem is here:
queue ! mux. ! filesink
You need to do
queue ! mux. mux. ! filesink
mux. means that gst-launch should select a pad automatically from mux. and link it. You could also specify a pad name manually, like mux.src. So, syntactically, you are missing another element/pad there to link to the other element.
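Applied to the failing single-rtspsrc example above, the corrected command would look something like this (same URL and elements, only the mux/filesink linking changed; untested):

gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! mux. d. ! queue ! rtph264depay ! h264parse ! queue ! mux. matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv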

How to remove a branch of tee in an active GStreamer pipeline?

The version of GStreamer I use is 1.x. I've spent a lot of time searching for a way to delete a tee branch.
In an active pipeline, a recording bin is created as below and inserted into this pipeline by branching the tee element.
"queue ! video/x-h264, width=800, height=600, framerate=10/1, stream-format=(string)byte-stream ! h264parse ! mp4mux ! filesink location=/xxxx"
It works perfectly, except that I want to dynamically delete the recording bin and get a playable mp4 file. According to some discussions and tutorials, to get a correct mp4 file we need to handle something about EOS. After trying some methods, I always got broken mp4 files.
Does anyone have sample code written in C to show me? I'd appreciate your help.
Your best bet for cases like this may be to create two processes. The first process would run the video, and one branch of its tee would deliver the h264 data to the second process through whatever means.
Here are two pipelines demonstrating the concept using UDP sockets.
gst-launch-1.0 videotestsrc ! x264enc ! tee name=t ! h264parse ! avdec_h264 ! videoconvert ! ximagesink t. ! queue ! h264parse ! rtph264pay ! udpsink host=localhost port=8888
gst-launch-1.0 udpsrc port=8888 num-buffers=300 ! application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! mp4mux ! filesink location=/tmp/264.mp4
The trick to getting that clean mp4 is to make sure an EOS event is delivered reliably; here, num-buffers=300 on udpsrc makes the receiver post EOS after 300 buffers (alternatively, running gst-launch-1.0 with -e forwards EOS on shutdown).
Instead of dynamically adding it, you just have it in the pipeline by default and add a probe callback at the source pad of the queue. In the probe callback you either pass the buffer on or not (GST_PAD_PROBE_DROP drops the buffer, GST_PAD_PROBE_OK passes it to the next element), so when you get an event to start/stop recording you just return the appropriate value. For the filesink you can use multifilesink instead, so as to write to a different file every time you start/stop.
Note that the queue which drops the buffers needs to be before the mux element, otherwise the file would be corrupt.
Hope that helps!
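A minimal sketch of that gating probe (hedged: record_gate_probe and the recording flag are hypothetical names, and queue is assumed to be the element feeding the mux):

static gboolean recording = FALSE;  /* toggled by your start/stop logic */

/* Installed on the src pad of the queue feeding the mux: pass buffers
   while recording, drop them otherwise. */
static GstPadProbeReturn
record_gate_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  return recording ? GST_PAD_PROBE_OK : GST_PAD_PROBE_DROP;
}

/* installation, after the pipeline is built: */
GstPad *qpad = gst_element_get_static_pad (queue, "src");
gst_pad_add_probe (qpad, GST_PAD_PROBE_TYPE_BUFFER,
    record_gate_probe, NULL, NULL);
gst_object_unref (qpad);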
Finally, I came up with a solution.
Let's say that there is an active pipeline including a recording bin.
"udpsrc port=4444 caps=\"application/x-rtp, media=(string)video,
clock-rate=(int)90000, encoding-name=(string)H264 ! rtph264depay !
tee name=tp tp. ! queue ! video/x-h264, width=800, height=600,
framerate=10/1 ! decodebin ! videoconvert ! video/x-raw, format=RGBA !
autovideosink"
recording bin:
"queue ! video/x-h264, width=800, height=600, framerate=10/1,
stream-format=(string)byte-stream ! h264parse ! mp4mux ! filesink
location=/xxxx"
After a period of time, we want to stop recording and save the mp4 file, while the video media keeps streaming.
First, I use a blocking probe to block the src pad of the tee. In this blocking probe callback, I use an event probe to catch EOS on the sink pad of the filesink, and busy-wait.
If EOS is caught in the event probe callback:
    self->isGotEOS = YES;
Busy-wait in the blocking probe callback:
    while (self->isGotEOS == NO) {
        usleep(100000);
    }
Before entering the busy-waiting while loop, an EOS event is created and sent to the sink pad of the recording bin.
After the busy waiting is done:
usleep(200000);
[self destory_record_elements];
I think usleep(200000) is a trick. Without it, a non-playable mp4 file is usually the result. It would seem that 200 ms is long enough for handling the EOS.
I had a similar problem previously; my pipeline:
videotestsrc do-timestamp="TRUE" ! videoflip method=0 ! tee name=t
t. ! queue ! videoconvert ! glupload ! glshader ! autovideosink async="FALSE"
t. ! queue ! identity drop-probability=1 ! videoconvert name=conv2 ! openh264enc ! h264parse ! avimux ! multifilesink async="FALSE" post-messages=true next-file=4
Then I just change the drop-probability property on the identity element:
To stop recording: set drop-probability=1 and send an EOS event to the sink pad of conv2, i.e. gst_pad_send_event(conv2_sinkpad, gst_event_new_eos()), so that multifilesink finalizes the current file. A sketch follows below.
To resume recording: set drop-probability=0.
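As a hedged sketch (set_recording is my hypothetical helper name; identity and conv2_sinkpad refer to the elements in the pipeline above):

static void
set_recording (GstElement *identity, GstPad *conv2_sinkpad, gboolean on)
{
  if (on) {
    /* resume: let buffers through again */
    g_object_set (identity, "drop-probability", 0.0, NULL);
  } else {
    /* stop: drop everything and push EOS so multifilesink finalizes the file */
    g_object_set (identity, "drop-probability", 1.0, NULL);
    gst_pad_send_event (conv2_sinkpad, gst_event_new_eos ());
  }
}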

Receive a stream with gst-rtsp-server

I have a question about GStreamer.
I made a streaming server using gst-rtsp-server. I'm trying to send the camera capture to another machine (on the local network) and save it into an .ogv file.
The transmission of the stream works fine, and I'm able to write the data into the file, but I can't read or use the file with any application afterwards. It seems that some information is missing (probably related to the encoding technique; I don't really know much about it).
Server side command (inside c++ code):
....
gst_rtsp_media_factory_set_launch (factory,
    "( v4l2src device=/dev/video0 ! videorate ! "
    "video/x-raw-yuv,width=320,height=240,framerate=30/1 ! videoscale ! "
    "ffmpegcolorspace ! theoraenc ! rtptheorapay name=pay0 pt=96 )");
gst_rtsp_media_factory_set_shared (factory, TRUE);
/* attach the test factory to the /test url */
gst_rtsp_media_mapping_add_factory (mapping, "/stream", factory);
....
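Note that this is the 0.10-era API and caps (video/x-raw-yuv, ffmpegcolorspace, gst_rtsp_media_mapping_add_factory). A rough GStreamer 1.0 equivalent, as an untested sketch rather than part of the original post, would be:

gst_rtsp_media_factory_set_launch (factory,
    "( v4l2src device=/dev/video0 ! videorate ! videoscale ! videoconvert ! "
    "video/x-raw,format=I420,width=320,height=240,framerate=30/1 ! "
    "theoraenc ! rtptheorapay name=pay0 pt=96 )");
/* in 1.0, mount points replace the media mapping */
gst_rtsp_mount_points_add_factory (mounts, "/stream", factory);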
Client side command (terminal command):
gst-launch -v rtspsrc location=rtsp://192.168.0.115:8554/stream !
rtptheoradepay name=pay0 ! oggmux ! filesink location=/home/jean/Desktop/stream.ogv
Any kind of help is well appreciated!
Jean
To view the stream, I could decode it as follows: gst-launch -v rtspsrc location="rtsp://localhost:8554/test" name=demux demux. ! queue ! rtptheoradepay ! theoradec ! ffmpegcolorspace ! autovideosink
To save it to a file:
gst-launch -v rtspsrc location="rtsp://localhost:8554/test" ! application/x-rtp, payload=96 ! rtptheoradepay ! theoradec ! videorate ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=GIBBERISH.ogg
I decode it and re-encode it, with the videorate, before writing to the file. There may be a more optimal way to do the same, but this is just a workaround.
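A possibly more efficient variant (my untested assumption, not from the original answer) avoids the decode/re-encode round trip by only parsing the depayloaded Theora before muxing, provided the depayloader delivers the codec headers:

gst-launch -v rtspsrc location="rtsp://localhost:8554/test" ! application/x-rtp, payload=96 ! rtptheoradepay ! theoraparse ! oggmux ! filesink location=stream.ogv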