GStreamer multifilesink WAV file splitting

I have a problem with recording streams using GStreamer.
I have to write audio and video separately and cut them when a signal arrives. The video part works correctly, but I still have problems with the WAV files.
Even a simple pipeline in gst-launch doesn't work correctly. I have a WAV file and I am trying to split it using multifilesink:
gst-launch filesrc location=test.wav ! multifilesink location=test2%d.wav next-file=4 max-file-size=512000
But the resulting WAV files are corrupted, while the same pipeline works fine with .ts files:
gst-launch-1.0 filesrc location=test.ts ! multifilesink location=test2%d.ts next-file=4 max-file-size=2000000

multifilesink doesn't know anything about the data it splits up, so it won't take care of adding headers to each of the files it writes.
The reason your .ts files work is that MPEG-TS was designed as a streaming format in which each packet is treated independently. Therefore, one can just 'tune in' to the stream whenever one likes. The decoder will simply look for the next packet header it finds and start decoding there (for details, have a look at the MPEG-TS wiki page).
The WAV file format, however, was designed as a pure file format (not a streaming format). Therefore, there's only one header at the start of the file. When you split that file up into multiple files, these headers are missing (each subsequent file contains only raw PCM data).
To work around that issue, you can...
manually copy the .wav header from the first file to all the other ones (see the sketch after the example pipeline below)
use programs that support raw PCM files and either work directly with them or convert the files (you'll have to set the channel count, sample rate and sample format/bit depth manually when opening those files, though).
use another, stream-oriented file format like .mp3, which comes from the same MPEG family as .ts and also uses a separate 4-byte header for each frame (keep in mind, though, that MP3 is a lossy format).
An example pipeline would be:
gst-launch filesrc location=test.wav ! wavparse ! lame ! multifilesink location=test%d.mp3 next-file=4 max-file-size=100000
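For the first of those options, a minimal bash sketch could look like this. It assumes the canonical 44-byte PCM WAV header with no extra RIFF chunks, and that test20.wav is the first, header-carrying chunk produced by the test2%d.wav pattern above; note that the size fields inside the copied header will be wrong for the other chunks, which most players tolerate but strict parsers may not:
#!/bin/bash -e
# Extract the (assumed canonical) 44-byte PCM WAV header from the first chunk
dd if=test20.wav of=header.bin bs=44 count=1
# Prepend it to every headerless chunk
for file in test21.wav test22.wav test23.wav; do
    cat header.bin "$file" > "fixed-$file"
done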

If you're willing to use some scripting as well and split the task up into different gst-launch calls, I can offer you another possible way to solve your little problem:
The following is a Linux bash script; you should be able to translate it to a Windows batch script (or a C or Python app if you want):
#!/bin/bash -e
# First write the buffer stream to .buff files (annotated using GStreamer's GDP format)
gst-launch -e filesrc location=test.wav ! wavparse ! gdppay ! multifilesink next-file=4 max-file-size=1000000 location=foo%05d.buff
# use the following instead for any other source (e.g. internet radio streams)
#gst-launch -e uridecodebin uri=http://url.to/stream ! gdppay ! multifilesink next-file=4 max-file-size=1000000 location=foo%05d.buff
# After we're done, convert each of the resulting files to proper .wav files with headers
for file in *.buff; do
    tgtFile="${file%.buff}.wav"
    gst-launch-0.10 filesrc "location=$file" ! gdpdepay ! wavenc ! filesink "location=$tgtFile"
done
# Uncomment the following line to remove the .buff files. To avoid accidentally
# deleting stuff we haven't properly converted if something went wrong, I'm not enabling it here.
#rm *.buff
Now to what the script does:
First, we use multifilesink to create a set of .buff files, each under 1 MB in size (gdppay annotates each buffer with its caps; the -e flag of gst-launch triggers an EOS if the process gets killed prematurely, which is useful if you're reading and decoding an internet stream)
The second gst-launch invocation within the for loop takes each of the .buff files in turn, parses the GDP headers using gdpdepay (and strips them), adds a WAV header via wavenc and writes the result to a .wav file.
Hope this is a solution you can live with, because I doubt there's a way to do it with one single gst-launch run.

Related

gst-launch equivalent for gst-play

Is it possible to get the gst-launch string equivalent of an arbitrary gst-play command?
For example, playing an RTSP stream with gst-play could be:
gst-play-1.0.exe rtsp://path/to/source
That command connects to the server and opens an internal (GStreamer) window for playback.
The equivalent gst-launch command could be (I'm not really sure):
gst-launch-1.0.exe uridecodebin uri=rtsp://path/to/source ! autovideosink
But how do I get it in the general case?
My main purpose is to redirect the video stream to an AVI file when all I know is a working gst-play command. So I need to replace autovideosink with filesink in the resulting command.
After your update, I would say that you have several options:
1, Use gst-play with the option --videosink, but you would also need the AVI mux element there, and the video must be encoded as H.264.. so this approach would need some hacking in the source code of gst-play, which you obviously do not want
1a, You can also use playbin, as suggested by @thiagoss, with the video-sink parameter.. then you can maybe use a named bin and pass it to playbin (not sure if this is possible this way, but you may try this):
gst-launch-1.0 playbin uri=rtsp video-sink=bin_avi \( name=bin_avi x264enc ! avimux ! filesink location=file.avi \)
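Alternatively (untested sketch), gst-launch-1.0 can parse a bin description directly when you assign it to an element-typed property, so you might get away without the named-bin syntax:
gst-launch-1.0 playbin uri=rtsp://path/to/source video-sink="videoconvert ! x264enc ! avimux ! filesink location=file.avi"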
2, Get the pipeline picture, analyse it and create the same thing yourself manually. On Unix-like systems, do:
export GST_DEBUG_DUMP_DOT_DIR=`pwd`
gst-play-1.0 rtsp://...
#or use gst-launch and playbin.. it's the same thing basically
Check the generated *.dot files.. choose the latest one (the one in PLAYING state) and use the Graphviz dot tool to turn it into a picture:
dot -T png *PLAYING.dot -o playbin.png
3, Use just the uridecodebin stuff and continue like I wrote in 1a:
uridecodebin ! video/x-raw ! x264enc ! ....
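A complete (untested) pipeline along those lines could be (-e makes Ctrl-C send EOS so the AVI gets finalized properly):
gst-launch-1.0 -e uridecodebin uri=rtsp://path/to/source ! videoconvert ! x264enc ! avimux ! filesink location=file.avi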
HTH
Use playbin:
gst-launch-1.0.exe playbin uri=rtsp://path/to/source

Segmented mp4 in GStreamer

I have this pipeline:
gst-launch-1.0 rtspsrc location=rtsp://ip/cam ! rtph264depay ! h264parse ! mp4mux fragment-duration=10000 streamable=1 ! multifilesink next-file=2 location=file-%03d.mp4
The first segment plays well, the others don't. When I try to view the structure of a damaged mp4, I see an interesting bug:
MOOV
Some data
MOOF
MDAT
MOOF
MDAT
The most interesting thing is the "Some data" block. It has no header data; the bytes simply exist. Judging by the block size, I think it is an MDAT box. If I determine the size of the block and add an MDAT header before it, the file immediately becomes valid and plays. But that unknown piece still can't be played, because there is no MOOF header before it.
The problem occurs with both mp4mux and qtmux. Tested on GStreamer 1.1.0 and 1.2.2; all results are identical.
Am I using multifilesink incorrectly?
If you take a look at the documentation for multifilesink, you will find the answer:
It is not possible to use this element to create independently playable mp4 files, use the splitmuxsink element for that instead. ...
So use splitmuxsink, and don't forget to send EOS when you are done, to correctly finish the last file.
Update:
It looks like at the time the question was asked there was no such element as splitmuxsink.
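For the pipeline from the question, a hedged sketch with splitmuxsink could look like this (max-size-time is in nanoseconds, so 10000000000 is 10 seconds; -e makes Ctrl-C send EOS so the last file gets finalized):
gst-launch-1.0 -e rtspsrc location=rtsp://ip/cam ! rtph264depay ! h264parse ! splitmuxsink muxer=mp4mux location=file-%03d.mp4 max-size-time=10000000000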
Can this be reproduced using videotestsrc instead of rtsp?
Try replacing your H.264 receiving and depayloading with "videotestsrc num-buffers= ! x264enc ! mp4mux ...".
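For instance, a self-contained reproduction attempt could look like this (the num-buffers value of 3000 is an arbitrary choice, roughly 100 seconds at videotestsrc's default 30 fps):
gst-launch-1.0 videotestsrc num-buffers=3000 ! x264enc ! mp4mux fragment-duration=10000 streamable=true ! multifilesink next-file=2 location=file-%03d.mp4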
This might be a bug; please file it at https://bugzilla.gnome.org/enter_bug.cgi?product=GStreamer so it gets proper attention from the maintainers.
Also, how are you trying to play it?
Thanks

Convert a video to a sequence of frame images

I need to capture a video using a webcam and output a single image for each video frame captured.
I have tried using gstreamer with a multifilesink, e.g.:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"
However, this does not actually output every frame, meaning that if I record for 2 seconds at 30 fps, I don't get 60 images. I'm assuming this is because the encoding can't go that fast, so I need another method.
I figured it might work if I have one pipeline capture a video, and a separate pipeline convert that video to frames, but I don't know enough about codecs. Do I need to encode the video to a format like H.264 or MP4 just to then decode it again?
Does anyone have any thoughts or suggestions? Keep in mind that I need to be able to do this in code, not using an application like Adobe Premiere, for example.
Thanks!
You could simply add a queue in there like this:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! queue ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"
This should make sure the video capture is allowed to run at 30 fps, and writing to disk can then happen at its own tempo. Just be aware that the queue will grow quite large if you leave this setup running for too long.
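If frames still get dropped because the queue's default limits are hit, you can (at your own memory-usage risk) make it unbounded by zeroing all three limits:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"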
The solution I have to offer doesn't use GStreamer but ffmpeg. I hope that's fine for you too.
As described in this forum post, you can use something like this:
ffmpeg -i movie.avi frame%d.png
to get a png/jpg image for each frame of the video.
But depending on the input file you use, you might have to convert it to an MPEG video before running ffmpeg.
Note:
If you want leading zeroes in your image file names, use %05d instead (for 5-digit numbers, like in C's printf()):
ffmpeg -i movie.avi frame%05d.png
The output file format depends on the file extension, so you might use .jpg, .bmp, ... instead of .png.
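For example, for JPEG output with near-best quality (the -q:v scale for ffmpeg's JPEG encoder runs from 2, best, to 31, worst):
ffmpeg -i movie.avi -q:v 2 frame%05d.jpg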
I ended up doing this in two parts.
Write video to file.
gst-launch v4l2src device=/dev/video2 ! video/x-raw-yuv,framerate=30/1 ! xvidenc ! queue ! avimux ! filesink location=test.avi
Post-process.
gst-launch-1.0 --gst-debug-level=3 filesrc location=test.avi ! decodebin ! queue ! autovideoconvert ! pngenc ! multifilesink location="frame%d.png"

Extract still image from MJPEG file?

I need to modify each JPEG image of an MJPEG file.
I have to use Visual Studio C++ 2010.
So far, I need to a) load an MJPEG file from a source and b) extract a bitmap (CImage, byte array, ...).
In pseudo code it should look like:
fun getBitmap(filename, timestamp)
{
    MJPEG myInput = Open(filename);
    BITMAP myOutput = myInput.getBitmap(timestamp);
    return myOutput;
}
What would be a good way to solve this problem?
I already tried to get it working with OpenCV 2.1.0, but there is always a LNK2001 error
(following the tutorial from the official site).
Is OpenCV the correct way, or does anyone know an easier way?
Do you want to use the JPEG images from the video as they are, or would you decompress them anyway?
An MJPEG stream consists of lots of JPEG images, usually without the full JPEG headers (in particular the Huffman code tables). So if you want to extract them losslessly and get a lot of JPEG files in a directory, other tools may suit you better.
However, if you would modify them anyway, you'll need to decompress them. OpenCV is a nice way to do that, as long as you have the necessary backends to decode the stream (some codecs on Windows... on Linux, the ffmpeg libraries will decode almost everything for you).
So I would do something like this:
#include <opencv/highgui.h>

CvCapture *capture = cvCaptureFromFile("filename.avi");
IplImage *current_frame = NULL;
while ((current_frame = cvQueryFrame(capture)) != NULL) {
    process(current_frame); // that's your modification code
}
cvReleaseCapture(&capture); // frames returned by cvQueryFrame must not be freed manually
See this:
http://opencv.willowgarage.com/documentation/c/highgui_reading_and_writing_images_and_video.html
For the linker error: LNK2001 is "unresolved external symbol", so... have you added the libraries? Under Additional Dependencies, add all four libs (cv210.lib, cxcore210.lib, cvaux210.lib and highgui210.lib; check your OpenCV installation for the correct names). Ensure that your project is compiled for 32 bits (or the same architecture as OpenCV), and do not forget to add the path to the libraries.
Using the GStreamer tools:
For an IP-based source:
gst-launch souphttpsrc location="http://[ip]:[port]/[dir]/xxx.cgi" do-timestamp=true is_live=true ! multipartdemux ! jpegdec ! videoflip method=vertical-flip ! jpegenc ! multifilesink location=image-out-%05d.jpg
For a file source with MJPEG codec and an AVI container:
gst-launch filesrc location="xyz.avi" ! avidemux ! jpegdec ! videoflip method=vertical-flip ! jpegenc ! multifilesink location=image-out-%05d.jpg

How do I use gstreamer to encode an ffv1 file?

I'd like to encode a video with GStreamer to an FFV1 (ffmpeg's lossless video codec) file. However, I cannot work out what type of muxing to use. If I run this:
gst-launch videotestsrc ! ffenc_ffv1 ! filesink location="test.ffv1"
then the thing runs OK, but the resulting file doesn't appear to be a valid video file. When creating Theora videos, I've previously written "theora ! oggmux ! filesink" in the pipeline, and this works. However, oggmux doesn't work here. What type of container should I be using here, and what is the correct gst-launch fudge to use?
Cheers.
Muxing FFV1 does not seem to be supported in the version I have installed. You can check this for your version by saving the output of gst-xmlinspect to a file and searching for video/x-ffv in that file. The elements where this mime type is mentioned are:
avidemux
ogmvideoparse
ffdec_ffv1
ffenc_ffv1
So it seems this is supported by the avi demuxer but not by any muxer.
PS: The mime type can be found with gst-inspect ffenc_ffv1.
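As a hedged note for newer GStreamer 1.x installs: the libav encoder is called avenc_ffv1 there, and Matroska is the container most commonly paired with FFV1, so something like the following may work, provided your matroskamux build accepts video/x-ffv:
gst-launch-1.0 videotestsrc num-buffers=300 ! videoconvert ! avenc_ffv1 ! matroskamux ! filesink location=test.mkv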