GStreamer: transcoding Matroska video to MP4

The hardware we are working on doesn't support playback of MKV files,
so I'm required to transcode Matroska (MKV) video files to MP4 video files.
As I have understood from the material available online on transcoding, I'm required to do the following:
separate out the different streams of the MKV file using the matroskademux element,
decode the audio and video streams into raw format using the available decoders, and
supply this data to the MP4 muxer element, re-encoding to the required format.
Could anyone please tell me if I am applying the right approach?
Any information/link on this would be very helpful.
vikram

Depending on what is in the Matroska file you might not need to decode it at all, just remux.
I assume the video, for instance, is H264, in which case you can just remux it.
Below is an example pipeline for gst-launch for remuxing a file with h264 and mp3.
gst-launch-0.10 -v filesrc location=$file \
! matroskademux name="demux" demux. ! h264parse ! queue \
! mp4mux name=mux ! filesink location=$file._out.mp4 demux. \
! mp3parse ! queue ! mux.
You can also look at the Transmageddon transcoder (www.linuxrising.org), which should give you what you want.
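If you are on GStreamer 1.0, the equivalent remux should look roughly like this (a sketch; mpegaudioparse is the 1.0 replacement for mp3parse, and the output file name is a placeholder):
gst-launch-1.0 -v filesrc location=$file \
! matroskademux name=demux demux. ! h264parse ! queue \
! mp4mux name=mux ! filesink location=out.mp4 demux. \
! mpegaudioparse ! queue ! mux.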

Related

GStreamer - RTSP to HLS / mp4

I am trying to save an RTSP H.264 stream to HLS MP4 files:
gst-launch-1.0 rtspsrc location="rtsp://....." ! rtph264depay ! h264parse ! matroskamux ! hlssink max-files=0 playlist-length=0 location="/home/user/ch%05d.mp4" playlist-location="/home/user/list.m3u8" target-duration=15
As a result there is only one file, ch00000.mp4, which contains the whole video stream (3 minutes instead of the 15 seconds set in target-duration).
If I save with mpegtsmux to .ts files, everything is OK with the same command.
What is wrong? Thanks in advance.
HLS consists of MPEG transport stream segments, so first of all: matroskamux does not make sense here; you will need mpegtsmux instead. To indicate what the segments really are, you would normally give the files a .ts extension. GStreamer may still work with the .mp4 name, as it is just a file name, but players may refuse to play the segments because they expect a different file format.
E.g.
gst-launch-1.0 rtspsrc location="rtsp://....." ! rtph264depay ! h264parse ! \
mpegtsmux ! hlssink max-files=0 playlist-length=0 location="/home/user/ch%05d.ts" \
playlist-location="/home/user/list.m3u8" target-duration=15
Do you have to use GStreamer? If not, I believe this ffmpeg command does what you want:
ffmpeg -i rtsp://... -c copy -hls_list_size 10 -hls_segment_type fmp4 output.m3u8

Storing and Retrieving AAC Audio

I would like to store a file containing AAC audio frames.
For that I used the pipeline below:
gst-launch-1.0 filesrc location=Test_44100Hz_2ch_s16le.wav ! "audio/x-raw,rate=44100,format=s16le,channels=2" ! audioparse format=raw raw-format=s16le rate=44100 channels=2 ! faac ! aacparse ! queue ! filesink location=a1
While reading that file back to pulsesink using the pipeline below,
gst-launch-1.0 filesrc location=a1 ! aacparse ! faad ! audioconvert ! audioresample ! pulsesink
I receive the error below. I ran with GST_DEBUG=3, but I am not able to find the solution.
0:00:00.031924804 3379 0x2231d60 WARN basesrc gstbasesrc.c:3483:gst_base_src_start_complete:<filesrc0> pad not activated yet
Pipeline is PREROLLING ...
0:00:00.033044700 3379 0x2231050 WARN baseparse gstbaseparse.c:3255:gst_base_parse_loop:<aacparse0> error: No valid frames found before end of stream
ERROR: from element /GstPipeline:pipeline0/GstAacParse:aacparse0: No valid frames found before end of stream
Additional debug info:
gstbaseparse.c(3255): gst_base_parse_loop (): /GstPipeline:pipeline0/GstAacParse:aacparse0
ERROR: pipeline doesn't want to preroll.
Can anybody help me solve this? I need to store AAC audio frames and then stream that file as an AAC audio stream.
This is it, tested working:
gst-launch-1.0 filesrc location=WAV_44_16bit.wav ! decodebin ! audioconvert ! queue ! voaacenc ! aacparse ! queue ! mp4mux ! filesink location=aac.mp4
gst-launch-1.0 filesrc location=aac.mp4 ! decodebin ! audioconvert ! audioresample ! alsasink
Metadata is stored in the container; without it the decoder does not know how to process the data. AAC audio streams therefore require a container in order to be useful within GStreamer.
For decoder initialization it is necessary to know the sampling frequency and the Audio Object Type. In GStreamer we are unable to pass this metadata directly to the parser or the decoder; instead the parser collects this data from the MP4 header, and the decoder then inherits the frame structure/size and sample rate. So this is a deficiency in either aacparse (the parser) or avdec_aac/faad (the decoders), none of which expose parameters for specifying the frame size of a raw file, i.e. the aforementioned metadata. That said, I haven't found a compelling reason why anyone would need to do this. I found myself trying to do it before I discovered that the AAC simply needed to be muxed into MP4 (mp4mux) or another container to work and be portable. The container/framing only adds a small amount of data to the stream.
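If the goal is then to stream the muxed file over the network as AAC, a pipeline along these lines should work (a sketch, assuming RTP over UDP; the host and port are placeholders):
gst-launch-1.0 filesrc location=aac.mp4 ! qtdemux ! aacparse ! rtpmp4gpay ! udpsink host=127.0.0.1 port=5000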

Timing is lost when converting H264 video to non-segmented MP4 using GStreamer

I would like to create a non-segmented .mp4 video from a Matroska source. I have seen this post and created a similar pipeline. My source contains only H264 video and no sound, so my pipeline looks like this:
gst-launch-1.0 filesrc location=x.mkv ! matroskademux ! h264parse ! mp4mux ! filesink location=x.mp4
However, running gst-discoverer-1.0 on the result gives a duration of 0:00:00.000000000. Also, VLC is not able to play the resulting .mp4 file, and it cannot be used in an HTML5 <video> element (which is the final purpose of this conversion).
If I create a segmented .mp4 by adding fragment-duration=1000 to the mp4mux element, then VLC can play the .mp4, but this is not what I want. I need an .mp4 where the total length is known. What am I doing wrong?
Additional information: the length was present in the Matroska source, as displayed by gst-discoverer-1.0, and VLC can play that source. I can also replay the non-segmented .mp4 with GStreamer (using gst-launch-1.0 filesrc location=x.mp4 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink). Inspecting the generated .dot file reveals a framerate of 10000/1 coming out of qtdemux, which seems quite strange.
The solution was to add disable-passthrough=true to the h264parse element, so the pipeline now looks like this:
gst-launch-1.0 filesrc location=x.mkv ! \
matroskademux ! \
h264parse disable-passthrough=true ! \
mp4mux ! \
filesink location=x.mp4
Now the resulting .mp4 file includes the timing information and plays nicely in VLC as well as in a <video> tag, including forward/backward navigation.
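As a quick sanity check, the same tool used above should now report the real duration instead of zero:
gst-discoverer-1.0 x.mp4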

convert a video to a sequence of frame images

I need to capture a video using a webcam and output a single image for each video frame captured.
I have tried using gstreamer with a multifilesink, e.g.:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"
However, this does not actually output every frame, meaning that if I record for 2 seconds at 30 fps, I don't get 60 images. I'm assuming this is because the encoding can't go that fast, so I need another method.
I figured it might work if I have one pipeline capture a video, and a separate pipeline convert that video to frames, but I don't know enough about codecs. Do I need to encode the video to a file like h264 or mp4 just to then decode it again?
Does anyone have any thoughts or suggestions? Keep in mind that I need to be able to do this in code, not using an application like Adobe Premiere, for example.
Thanks!
You could simply add a queue in there like this:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! queue ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"
This should make sure the video capture is allowed to run at 30 fps, while writing to
disk can happen at its own pace. Just be aware that the queue will grow quite large
if you leave this setup running for too long.
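If no frame may ever be dropped and memory permits, the queue's limits can also be lifted so it never blocks the capture (a sketch; an unbounded queue can grow without limit):
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"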
The solution I have to offer doesn't use GStreamer but ffmpeg; I hope that's fine for you too.
As described in this forum post, you can use something like this:
ffmpeg -i movie.avi frame%d.png
to get a png/jpg image for each frame of the video.
But depending on the input file you use, you might have to convert it to an MPEG video before running ffmpeg.
Note:
If you want leading zeroes in your image file names, use %05d instead (for 5-digit numbers, like in C's printf()):
ffmpeg -i movie.avi frame%05d.png
The output file format depends on the file extension, so you might use .jpg, .bmp, ... instead of .png.
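If you want one image per frame at a fixed rate regardless of the source timing, the fps filter can force that (a sketch, assuming a 30 fps target):
ffmpeg -i movie.avi -vf fps=30 frame%05d.png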
I ended up doing this in two parts.
Write video to file.
gst-launch v4l2src device=/dev/video2 ! video/x-raw-yuv,framerate=30/1 ! xvidenc ! queue ! avimux ! filesink location=test.avi
Post process.
gst-launch-1.0 --gst-debug-level=3 filesrc location=test.avi ! decodebin ! queue ! autovideoconvert ! pngenc ! multifilesink location="frame%d.png"

Compress H264 Stream Using Gstreamer

I am trying to create a GStreamer pipeline (v 1.0) in order to record and play a special file format.
For recording purpose I use the following pipeline:
gst-launch-1.0 videotestsrc ! video/x-raw,format=I420,width=640,height=480 ! videoconvert ! x264enc byte-stream=1 ! queue ! appsink
In appsink (using the new-sample callback) I use a compression method to compress the H264 stream and finally store it in an output file.
I use the following pipeline to play the recorded file:
gst-launch-1.0 appsrc ! video/x-h264 ! avdec_h264 ! autovideosink
In appsrc I decompress the H264 stream and send it to the appsrc buffer (using push-buffer). The size of each buffer is 4095 bytes.
Unfortunately, after pushing 2 buffers, GStreamer prints the following error message:
Error: Internal data flow error.
Is there any way to fix the problem?
Add legacyh264parse or h264parse (depending on your version of the GStreamer components) before your decoder. You need to be able to send full frames to the decoder.
After avdec_h264 it would be good to have a videoconvert (ffmpegcolorspace in 0.10) to convert the video format to your display's requirements.
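Putting that together, the playback pipeline would look roughly like this (a sketch; the caps assume the byte-stream H264 produced by the recording pipeline above, and appsrc is still fed from your application via push-buffer):
gst-launch-1.0 appsrc ! video/x-h264,stream-format=byte-stream ! h264parse ! avdec_h264 ! videoconvert ! autovideosink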