Receive stream with gst-rtsp-server - C++

I have a question about gstreamer.
I made a streaming server using gst-rtsp-server. I'm trying to send the camera capture to another machine (on the local network) and save it into an .ogv file.
The transmission of the stream works fine, and I'm able to write the data into the file, but I can't read or use the file with any application afterwards. It seems that some information is missing (probably related to the encoding; I don't know much about it).
Server-side command (inside the C++ code):
....
gst_rtsp_media_factory_set_launch (factory, "( v4l2src device=/dev/video0 ! videorate !
video/x-raw-yuv,width=320,height=240,framerate=30/1 ! videoscale ! ffmpegcolorspace !
theoraenc ! rtptheorapay name=pay0 pt=96 )");
gst_rtsp_media_factory_set_shared (factory, TRUE);
/* attach the test factory to the /test url */
gst_rtsp_media_mapping_add_factory (mapping, "/stream", factory);
....
Client-side command (terminal):
gst-launch -v rtspsrc location=rtsp://192.168.0.115:8554/stream !
rtptheoradepay name=pay0 ! oggmux ! filesink location=/home/jean/Desktop/stream.ogv
Any kind of help is appreciated!
Jean

I could decode the stream and view it with the following pipeline:
gst-launch -v rtspsrc location="rtsp://localhost:8554/test" name=demux demux. ! queue ! rtptheoradepay ! theoradec ! ffmpegcolorspace ! autovideosink
To decode it and write it to a file:
gst-launch -v rtspsrc location="rtsp://localhost:8554/test" ! application/x-rtp, payload=96 ! rtptheoradepay ! theoradec ! videorate ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=GIBBERISH.ogg
I decode the stream and encode it back (with a videorate element) before writing it to the file. There may be a more efficient way to do the same thing, but this works as a workaround.

Related

Separate RTSP payloads from gst-rtsp-server

I have an RTSP video source (h265) which I can display using VLC. I would like to split the stream into two, one at native resolution (encoded with h265) and the other at a new, lower resolution (encoded with h264). Both of the new streams should also be RTSP streams, viewable with VLC.
Due to bandwidth considerations, I can only connect a single client to the primary source.
So far, I have a working gst-rtsp-server setup, with a single media factory running this gst launch string:
rtspsrc location=... ! rtph265depay !
h265parse ! tee name=t ! queue ! rtph265pay name=pay1 pt=96 t. ! queue
! decodebin ! videoscale ! videorate !
video/x-raw,framerate=30/1,width=640,height=480 ! x264enc bitrate=500
speed-preset=superfast tune=zerolatency ! h264parse ! rtph264pay
name=pay0 pt=96
I set up a mount point for the media factory and can connect with VLC, e.g. "rtsp://127.0.0.1:8550/test". With this, I only get whichever substream is pay0 in VLC. I can see that both substreams are working by changing which one is pay0. But how can I get VLC to show pay1?
Otherwise, how can I tee the original video source and then have two different media factories (with different gst launch strings...) use the tee's branches as their sources?
Both streams are being sent to you at the same time.
The usual case for pay0 & pay1 is sending video and audio together.
For your case, where you want two separate video streams, you will need to modify the code.
A simple example of what you want to achieve can be done by modifying the file gst-rtsp-server/examples/test-launch.c:
factory = gst_rtsp_media_factory_new ();
gst_rtsp_media_factory_set_launch (factory, argv[1]);
gst_rtsp_media_factory_set_shared (factory, TRUE);
gst_rtsp_mount_points_add_factory (mounts, "/stream1", factory);
/* a second, independent factory is needed for the second mount point;
 * reusing the first one would overwrite its launch string for both mounts */
factory2 = gst_rtsp_media_factory_new ();
gst_rtsp_media_factory_set_launch (factory2, argv[2]);
gst_rtsp_media_factory_set_shared (factory2, TRUE);
gst_rtsp_mount_points_add_factory (mounts, "/stream2", factory2);
Then start with ./test-launch "rtspsrc location=... ! rtph265depay ! h265parse ! rtph265pay name=pay1 pt=96" "rtspsrc location=... ! rtph265depay ! h265parse ! decodebin ! videoscale ! videorate ! video/x-raw,framerate=30/1,width=640,height=480 ! x264enc bitrate=500 speed-preset=superfast tune=zerolatency ! h264parse ! rtph264pay name=pay0 pt=96"
You would then have two consumers on your camera, though.
If you prefer to consume the camera only once, it is up to you to tee the stream and make it available as the source for each gst_rtsp_media_factory_set_launch pipeline; a sketch of one way to do that follows.
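One possible way to realize that tee (a sketch only, untested; the localhost UDP hand-off, the port numbers 5000/5001 and the udpsrc caps are my assumptions, not part of the original answer): a separate feeder pipeline consumes the camera once, tees the parsed H.265 and re-payloads each branch to a local UDP port, and the two factories from the modified test-launch.c above read from those ports instead of from the camera.
/* feeder: consume the camera once, tee the parsed H.265 */
GstElement *feeder = gst_parse_launch (
    "rtspsrc location=rtsp://... ! rtph265depay ! h265parse ! tee name=t "
    "t. ! queue ! rtph265pay config-interval=1 ! udpsink host=127.0.0.1 port=5000 "
    "t. ! queue ! decodebin ! videoscale ! videorate ! "
    "video/x-raw,framerate=30/1,width=640,height=480 ! "
    "x264enc bitrate=500 speed-preset=superfast tune=zerolatency ! "
    "rtph264pay config-interval=1 ! udpsink host=127.0.0.1 port=5001", NULL);
gst_element_set_state (feeder, GST_STATE_PLAYING);

/* pass-through H.265 mount point */
gst_rtsp_media_factory_set_launch (factory,
    "( udpsrc port=5000 caps=\"application/x-rtp,media=video,clock-rate=90000,encoding-name=H265,payload=96\" "
    "! rtph265depay ! h265parse ! rtph265pay name=pay0 pt=96 )");

/* re-encoded H.264 mount point */
gst_rtsp_media_factory_set_launch (factory2,
    "( udpsrc port=5001 caps=\"application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96\" "
    "! rtph264depay ! h264parse ! rtph264pay name=pay0 pt=96 )");
config-interval=1 makes the feeder's payloaders repeat the parameter sets in-band; without an SDP the factories' depayloaders have no other way of getting them.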

Save H264 encoded stream without re-encoding

I have a gstreamer pipeline that streams using:
v4l2src ! x264enc ! rtph264pay pt=96 ! udpsink host=ip port=8554
And this pipeline that receives the stream:
                                                      / queue ! avdec_h264 ! appsink
udpsrc ! capsfilter ! rtpjitterbuffer ! rtph264depay ! tee !
                                                      \ queue ! h264parse ! mp4mux ! filesink
The simplified receiver pipeline, without the tee, is:
gst-launch-1.0 udpsrc port=8080 caps="lots-of-caps" ! rtpjitterbuffer ! rtph264depay ! h264parse ! mp4mux ! filesink location=/home/rish/Desktop/recorded.264 -e
Question:
Is there a way to save the H264 encoded stream received from udpsrc without having to re-encode it? How do I correctly close the filesink?
What I've tried so far: the discussion in this thread suggests the pipeline I've tried above, but the file is still corrupt (not correctly closed).
This question asks something similar; however, I do not want to decode and re-encode. Another answer in that thread suggests using the matroskamux element instead of mp4mux. This works, but I would rather use mp4mux (no particular reason, but I'd like to know why matroskamux works and mp4mux doesn't).
Your pipeline already muxes without re-encoding; there is no encoder in it. h264parse is just a parser.
You already have an answer on how to close the stream here: Sending EoS to filesink while removing branch from tee
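As for why matroskamux works where mp4mux doesn't: mp4mux can only write the moov header (the index a player needs) once it receives EOS, whereas Matroska is a streamable format that stays largely readable even after an abrupt stop. So the mp4 file is only valid if the pipeline is shut down via EOS instead of being killed. A minimal sketch of doing that from application code, assuming the whole pipeline (held in a variable named pipeline here) is being stopped; if only the recording branch of the tee should be closed, the linked answer covers sending EOS to just that branch:
/* ask the sources to push EOS downstream so mp4mux can finalize the file */
gst_element_send_event (pipeline, gst_event_new_eos ());

/* wait until EOS (or an error) reaches the bus before tearing the pipeline down */
GstBus *bus = gst_element_get_bus (pipeline);
GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
    GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
if (msg)
  gst_message_unref (msg);
gst_object_unref (bus);

gst_element_set_state (pipeline, GST_STATE_NULL);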

Record camera stream from gstreamer

I have a gstreamer pipeline which works perfectly and takes a camera stream, encodes it as H.264 video, saves it to a file AND displays it on the screen as follows:
gst-launch-1.0 -v autovideosrc ! tee name = t ! queue ! omxh264enc !
'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! qtmux !
filesink location=test.mp4 t. ! queue ! videoscale ! video/x-raw,
width=480,height=270 ! xvimagesink -e sync=false
Now, I am trying to do something even simpler and just record the stream to a file (without displaying it on screen), and this does not seem to work! It writes a file, but I cannot play it. What I have tried so far is:
gst-launch-1.0 -v autovideosrc ! queue ! omxh264enc ! 'video/x-h264,
stream-format=(string)byte-stream' ! h264parse ! qtmux ! filesink
location=test.mp4 sync=false
I can also remove the queue element but with the same result:
gst-launch-1.0 -v autovideosrc ! omxh264enc ! 'video/x-h264,
stream-format=(string)byte-stream' ! h264parse ! qtmux ! filesink
location=test.mp4 sync=false
It does not give any errors, but it seems it just does not write a valid stream to my filesink.
How do you stop the stream? Will the camera correctly inject an EOS signal? If not, and you just press Ctrl-C to stop the operation, the .mp4 file will be missing important headers which are required for proper playback.
Add -e to your command line. Then, when you press Ctrl-C, the pipeline is not simply stopped but properly shut down, by sending an EOS event through the pipeline.
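If the same recording runs inside an application instead of gst-launch, the -e behaviour has to be reproduced by hand; a rough sketch (the handler and variable names are made up for illustration):
#include <gst/gst.h>
#include <glib-unix.h>
#include <signal.h>

/* Ctrl-C handler: inject EOS instead of killing the pipeline, so qtmux can
 * write the headers required for playback (this is what -e does in gst-launch) */
static gboolean
on_sigint (gpointer user_data)
{
  GstElement *pipeline = user_data;
  gst_element_send_event (pipeline, gst_event_new_eos ());
  return G_SOURCE_REMOVE;   /* handle the signal only once */
}

/* during setup: */
g_unix_signal_add (SIGINT, on_sigint, pipeline);
/* then, in the bus watch, wait for GST_MESSAGE_EOS, quit the main loop and
 * set the pipeline to GST_STATE_NULL */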

How to demux audio and video from rtspsrc and then save to file using matroska mux?

I have been working on an application where I use rtspsrc to gather audio and video from one network camera to another. However, I cannot watch the stream from the camera and therefore can't verify that it works as intended. To verify that the stream is correct, I want to record it on an SD card and then play the file on a computer. The problem is that I want the camera to do as much of the parsing, decoding and depayloading as possible, since that is the purpose of the application.
I therefore have to separate the audio and video streams with a demuxer, do the parsing, decoding, etc., and then mux them back into a Matroska file.
The video decoder has been omitted since it is not done yet for this camera.
Demux to live playback sink (works)
gst-launch-0.10 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! autoaudiosink d. ! rtph264depay ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
Multiple rtspsrc to matroska (works)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux ! filesink location=/var/spool/storage/SD_DISK/testmovie.mkv rtspsrc location="rtsp://root:pass#192.168.0.91/axis-media/media.amp?resolution=1280x720" latency=0 ! rtph264depay ! h264parse ! mux.
Single rtspsrc to matroska (fails)
gst-launch-1.0 -v rtspsrc location="rtsp://host:pass#192.168.0.91/XXX/XXXX?resolution=1280x720&audio=1&audiocodec=g711&audiosamplerate=8000&audiobitrate=64000" latency=0 name=d d. ! queue ! rtppcmudepay ! mulawdec ! audioresample ! audioconvert ! queue ! matroskamux name=mux d. ! queue ! rtph264depay ! h264parse ! queue ! mux. ! filesink location=/var/spool/storage/SD_DISK/testmoviesinglertsp.mkv
The last example fails with the error message:
WARNING: erroneous pipeline: link without source element
Have I misunderstood the usage of matroskamux, and why do the two examples above work but not the last one?
The problem is here:
queue ! mux. ! filesink
You need to do
queue ! mux. mux. ! filesink
mux. means that gst-launch should automatically pick a pad on mux and link it. You could also name a pad explicitly, e.g. mux.src. So, syntactically, the ! filesink part of your original command has nothing on its left-hand side to link from, which is why gst-launch reports a link without a source element.

Feed a video file to v4l2sink using gstreamer

I would like to feed a video file to my virtual video device using gstreamer and v4l2loopback.
Using videotestsrc, something like this works (i.e. I can open my virtual device from VLC):
gst-launch -v videotestsrc ! queue ! decodebin2 name=dec ! queue ! ffmpegcolorspace ! v4l2sink device=/dev/video0
However, the exact same code does not work with my video file:
gst-launch filesrc location=~/Documents/my_video.ogv ! queue ! decodebin2 name=dec ! queue ! ffmpegcolorspace ! v4l2sink device=/dev/video0
It actually gets stuck in the "PREROLLING" phase:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Can anybody see why? Am I missing some conversion between filesrc and decodebin2?
I don't know exactly why, but I was missing the ! videoscale ! step. The ! queue ! elements are apparently not necessary.
Here is the working line:
gst-launch filesrc location=~/Documents/my_video.ogv ! decodebin2 ! ffmpegcolorspace ! videoscale ! ffmpegcolorspace ! v4l2sink device=/dev/video0