I have created an application that uses appsrc to record MP4/MPEG files. An EOS event is sent whenever I have to stop recording, and the file is created successfully. Everything works well; my pipeline is:
appsrc ! queue ! videorate ! ffmpegcolorspace ! x264enc ! mp4mux ! filesink location=video.mp4
If my application happens to crash (and so is unable to send a successful EOS), all of the recorded data is lost.
Is there a way to recover such files in GStreamer? I was wondering whether I could append an EOS by reading such files back in GStreamer. Is there a provision for that, or something similar, so that I don't lose the data?
You may wish to mux the data into an MPEG transport stream (.ts) instead of an MP4 file. The reason the MP4 file is unreadable after an application crash is that mp4mux doesn't get a chance to write the file's 'moov' atom, which can only be done after all the multimedia data has been recorded (i.e., when EOS is processed). A .ts file is built for streaming and can still be read even if the end of the file is incomplete.
To invoke it, change the end of your pipeline to:
... ! x264enc ! mpegtsmux ! filesink location=video.ts
If MP4 is a requirement, the .ts file can easily be losslessly remuxed into an MP4 after recording.
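For example, assuming the H.264 recording from the pipeline above, a remux along these lines should work (adjust the parser if your codec differs):
gst-launch-1.0 filesrc location=video.ts ! tsdemux ! h264parse ! mp4mux ! filesink location=video.mp4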
You can use the "moov-recovery-file" property to be able to repair the file in the case of a crash. See atomsrecovery for details.
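As a sketch (the property and the recovery element come from the isomp4 plugin; verify the names with gst-inspect-1.0 on your GStreamer version), the recording pipeline would end with:
... ! x264enc ! mp4mux moov-recovery-file=video.moovrec ! filesink location=video.mp4
and after a crash the broken file could be repaired with something like:
gst-launch-1.0 qtmoovrecover recovery-input=video.moovrec broken-input=video.mp4 fixed-output=video-fixed.mp4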
Related
Is it possible to extract the NTP timestamps from the RTCP SR to be synchronized later?
I am currently recording my video and audio feeds separately using the following launch strings:
matroskamux name=mux ! filesink location=/tmp/video.mkv udpsrc port=26770 caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,payload=(int)101,encoding-name=(string)VP8,ssrc=(uint)608893168" ! .recv_rtp_sink rtpsession name=session .recv_rtp_src ! rtpvp8depay ! queue ! mux.video_0 udpsrc address=127.0.0.1 port=24694 ! session.recv_rtcp_sink
and
matroskamux name=mux ! filesink location=/tmp/audio.mkv udpsrc port=26540 caps="application/x-rtp,media=(string)audio,clock-rate=(int)48000,payload=(int)100,encoding-name=(string)OPUS,ssrc=(uint)632722900" ! .recv_rtp_sink rtpsession name=session .recv_rtp_src ! rtpopusdepay ! queue ! mux.audio_0 udpsrc address=127.0.0.1 port=23815 ! session.recv_rtcp_sink
I am recording them separately because the video or audio may start and stop multiple times throughout the session (for example, the user begins with just audio, then starts streaming video, then stops audio, etc etc etc). I figured that it would be easier to just start/stop separate pipelines when the user starts/stops a stream rather than trying to stop an audio-only pipeline and switch to an audio+video pipeline (and potentially have a gap in the audio between the stop and the start).
As a proof-of-concept, I used the created timestamp of the resulting mkv file (using gst-discoverer-1.0 and looking at the datetime field) and it mostly worked but seemed to be just a little bit off. I'm wondering if there's a way I can use the RTCP SR packets to encode the "real" timestamp for the start of the stream. It would be great if I could encode it into the mkv somehow but even if I could just access it in code with a signal then I could store the information elsewhere. I looked through the signals on rtpsession but nothing was jumping out at me as a possible solution.
I'm developing an app receiving an H.264 video stream from an RTSP camera and displaying and storing it to MP4 without transcoding. For the purpose of my current test, I record for 5 sec only.
My problem is that the MP4 is not playable. The resulting file varies in size from one run of the app to another, which shows something is very wrong (unexpected, since the recording time is fixed).
Here are my pipelines:
rtspsrc location = rtsp://192.168.0.61:8554/quality_h264 latency=0 ! rtph264depay ! h264parse ! video/x-h264,stream-format=avc ! queue ! interpipesink name=cam1
interpipesrc allow-renegotiation=true name=src listen-to=cam1 is-live=true ! h264parse ! queue ! decodebin ! autovideoconvert ! d3dvideosink sync=false
interpipesrc allow-renegotiation=true name=src listen-to=cam1 is-live=true ! h264parse ! queue ! mp4mux ! filesink location=test.mp4
In a next step I will add more cameras and will need to be able to change which camera gets recorded to MP4 on the fly, as well as pause/resume the recording. For this reason, I've opted to use interpipesink/interpipesrc, a set of GStreamer elements that allows communication between two independent pipelines: https://github.com/RidgeRun/gst-interpipe
A thread waits for 10 sec, then sends EOS on the 3rd pipeline (recording). Then, when the bus receives GST_MESSAGE_EOS, it sets the pipeline state to NULL. I have checked with a pad probe that the EOS event is indeed received on the sink pad of the filesink.
I send EOS using this code: gst_element_send_event(m_pipeline, gst_event_new_eos()); where m_pipeline is the 3rd pipeline.
Those exact pipelines produce a playable MP4 when run with gst-launch with -e added at the end.
If I replace mp4mux with matroskamux in my app, the MKV is playable and has the expected size. However, there's something wrong with the timestamps, as the player shows playback starting at 10 sec instead of 0. Do I need to edit the timestamps before passing the buffers to the mux (mp4mux or matroskamux)?
It looks to me as if the MP4 is not fully written, but I can't see what else I can do apart from sending EOS.
I'm open to suggestions to restructure the app, in case the use of the interpipe elements causes a problem (although I can't see why it would at the moment).
I'm using GStreamer 1.18.2 on Windows 10 (x64).
I am trying to capture and store a webcam stream. The requirements are 1920x1080@30fps, and it must be done on a single-board computer (a Raspberry Pi).
The duration to capture is 10 minutes. (For the moment I only capture 10 seconds for testing)
In general, the camera (a usbfhd01m from ELP) is able to provide an MJPEG stream at 1920x1080@30fps. I am just not able to store it, and I don't know why. I tried with the following pipeline:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 do-timestamp=true ! image/jpeg,width=1920,height=1080,framerate=30/1 ! queue ! avimux ! filesink location=test.avi
The result is a video file that is far from fluent. What is missing in my pipeline?
When I use the same pipeline, but decode the stream and save it in a raw file like this:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 do-timestamp=true ! image/jpeg,width=1920,height=1080,framerate=30/1 ! queue ! jpegdec ! filesink location=test.yuv
then the raw video is absolutely fluent. Therefore, I think the pipeline and the device are able to record at 1920x1080@30fps, but something seems to go wrong when saving the stream.
Storing the stream in the Matroska file format does not change the problem. And for transcoding on the fly to H.264, the Raspberry Pi 3 doesn't seem to be powerful enough (even using omxh264enc).
What happens when you remove do-timestamp=true? This option applies the current pipeline timestamps to the sample buffers, overwriting those coming from the device. You probably want to store the device's original timestamps instead of overwriting them, since the pipeline timestamps can carry some jitter.
In your second pipeline you save the stream as raw data, which basically removes all timestamp information you have (including the jittery timestamps). So when you play back the raw stream, a constant framerate is assumed instead.
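For reference, that would be the original capture pipeline with the flag dropped:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! image/jpeg,width=1920,height=1080,framerate=30/1 ! queue ! avimux ! filesink location=test.avi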
I'm recording a WAV file with GStreamer, receiving a G.711 flow through a UDP port.
Any WAV player can play the file, but it shows a wrong duration and cannot fast forward.
I believe GStreamer writes the header at the beginning of the file with empty (placeholder) data.
This pipeline can reproduce the issue:
gst-launch-1.0 udpsrc port=3000 caps="application/x-rtp,media=(string)audio, payload=0,clock-rate=(int)8000" ! rtpjitterbuffer ! rtppcmudepay ! mulawdec ! wavenc ! filesink append=true location=c:/recordings/audio-zz.wav
Florian Zwoch suggested using -e so that the file is closed properly.
Indeed it works perfectly.
I'm using this pipeline inside a Java program with the gst1-java-core library.
It seems I'm missing something when closing the pipeline.
My program has the same behaviour as gst-launch without the -e parameter.
Before stopping the pipeline I send an EOS Event.
pipeline.sendEvent(new EOSEvent());
How can I fix it?
The append property of the filesink element does not allow rewriting the header.
How do you stop the pipeline? If you interrupt the pipeline with ctrl-c it may indeed be that the header finalization is skipped. Run your pipeline with the -e option so that on ctrl-c your pipeline gets stopped gracefully.
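For example, the same pipeline run with -e and stopped with ctrl-c; append=true is left out here since, in append mode, the sink may not allow wavenc to seek back and rewrite the header:
gst-launch-1.0 -e udpsrc port=3000 caps="application/x-rtp,media=(string)audio,payload=0,clock-rate=(int)8000" ! rtpjitterbuffer ! rtppcmudepay ! mulawdec ! wavenc ! filesink location=c:/recordings/audio-zz.wav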
I'm just trying to save the dummy video to my directory.
In that case I end up with this error, so I know something is wrong in the pipeline.
Am I missing any parameters here?
gst-launch -v videotestsrc ! ximagesink ! filesink location=~/cupcake.mp4
WARNING: erroneous pipeline: could not link ximagesink0 to filesink0
I just want to record only the video.
ximagesink is a sink element and as such doesn't have an output (source pad).
This command will tell you about the details of an element:
gst-inspect-1.0 ximagesink
Notice that ximagesink has only a sink pad and no source pads, so it doesn't generate any output.
You can dump the video directly to file by using:
gst-launch-1.0 videotestsrc ! filesink location=~/cupcake.raw
Unfortunately, this is still not what you want, as videotestsrc generates raw video that is neither encoded nor muxed into MP4. If you want MP4, you need to pass the data through mp4mux, which will organize what it receives into the MP4 container. It is also recommended to encode the video to reduce its size. Let's assume you want to use H.264 as your codec; you can use the x264enc element to encode to H.264:
gst-launch-1.0 -e videotestsrc ! x264enc ! mp4mux ! filesink location=~/cupcake.mp4
Notice that I also added the "-e" parameter, which makes gst-launch-1.0 send an EOS event and wait for the EOS message indicating that the elements have finished working. Without the flag, the pipeline is simply interrupted and aborted.
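For a test recording that stops on its own, you can also limit the number of buffers and fix the caps, for example:
gst-launch-1.0 -e videotestsrc num-buffers=300 ! video/x-raw,width=640,height=480,framerate=30/1 ! videoconvert ! x264enc ! mp4mux ! filesink location=~/cupcake.mp4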
In any case I'd recommend going back to the manuals for application development: http://gstreamer.freedesktop.org/documentation/
The manpage for gst-launch-1.0 is also useful.
Disclaimer: You are using GStreamer 0.10, which has been unmaintained for 3 years and is obsolete; please upgrade to 1.0. (This answer targets 1.0, but it can easily be applied to 0.10 by adapting the commands to their 0.10 versions.)