I'm trying to cut a 60-minute file into 60 one-minute files using this GStreamer pipeline:
gst-launch-1.0 filesrc location=video1.mp4 ! video/x-raw ! decodebin ! videoconvert ! hlssink target-duration=5 location=./video1/%05d.mp4
After 54 seconds it stops because I don't have enough space left on my disk.
The file that GStreamer tried to save is unplayable and weighs 60 GB.
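For reference, hlssink writes MPEG-TS segments rather than MP4, and the pipeline above hands decoded raw video straight to the sink, which would explain the huge, unplayable output. A sketch of an approach that should give one-minute MP4 segments instead, assuming GStreamer 1.6+ for splitmuxsink (re-encoding with x264enc, since splitmuxsink can only cut at keyframes):
# max-size-time is in nanoseconds: 60 s = 60000000000 ns
gst-launch-1.0 -e filesrc location=video1.mp4 ! decodebin ! videoconvert ! \
x264enc key-int-max=60 ! h264parse ! \
splitmuxsink location=./video1/%05d.mp4 max-size-time=60000000000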
I need to demux the video frames and KLV data from an MPEG-TS stream in sync, frame-by-frame.
The following command demuxes the KLV data and outputs it to a text file:
gst-launch-1.0 filesrc location="some_file.ts" ! tsdemux name=demux \
demux. ! queue ! meta/x-klv ! filesink location="some_file-KLV.txt"
The following command demuxes the video and outputs a video file:
gst-launch-1.0 filesrc location="some_file.ts" ! tsdemux name=demux \
demux. ! queue ! decodebin ! videorate ! videoscale ! x264enc ! mp4mux ! filesink location="some_file-video.mp4"
On combining the above two:
gst-launch-1.0 filesrc location="some_file.ts" ! tsdemux name=demux \
demux. ! queue ! decodebin ! videorate ! videoscale ! x264enc ! mp4mux ! filesink location="some_file-video.mp4"
demux. ! queue ! meta/x-klv ! filesink location="some_file.txt"
The command doesn't work. It just gets stuck after the following messages on the terminal:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
and the text and video files are 0 bytes in size.
An example .ts file can be found here (I did not create or upload this file; it is part of the data for some code on GitHub: https://gist.github.com/All4Gis/509fbe06ce53a0885744d16595811e6f): https://drive.google.com/drive/folders/1AIbCGTqjk8NgA4R818pGSvU1UCcm-lib?usp=sharing
Thank you for helping! Cheers. :)
Edit:
I realised that there can be some confusion. The files in the link above were just used to create the .ts file.
The .ts file I am using, is available directly in either of the links below:
https://drive.google.com/drive/folders/1t-u8rnEE2MftWQkS1q3UB-J3ogXBr3p9?usp=sharing
https://easyupload.io/xufeny
It seems that if we use GStreamer's multiqueue element instead of queue, the files are created.
I tried the above based on a suggestion from a commenter on another website I had posted the question on.
But the KLV data and frames are still not in sync; that is what I am trying to fix now.
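For reference, the multiqueue variant looks roughly like this (a sketch only: the video/x-h264 caps filter is an assumption about the video track in the TS, and the multiqueue pad numbers are picked by hand):
gst-launch-1.0 -e filesrc location="some_file.ts" ! tsdemux name=demux multiqueue name=mq \
demux. ! video/x-h264 ! mq.sink_0 mq.src_0 ! decodebin ! videorate ! videoscale ! x264enc ! mp4mux ! filesink location="some_file-video.mp4" \
demux. ! meta/x-klv ! mq.sink_1 mq.src_1 ! filesink location="some_file-KLV.txt"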
Regards.
I cannot figure out a gst-launch invocation to simply play an Opus file to PulseAudio. Any help?
Things I've tried
130 % file foo.opus
foo.opus: Ogg data, Opus audio, version 0.1, stereo, 44100 Hz (Input Sample Rate)
0 % gst-launch-1.0 filesrc location=foo.opus ! pulsesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstPulseSink:pulsesink0: The stream is in the wrong format.
1 % gst-launch-1.0 filesrc location=foo.opus ! opusparse ! pulsesink
WARNING: erroneous pipeline: could not link opusparse0 to pulsesink0
1 % gst-launch-1.0 filesrc location=foo.opus ! opusdec ! pulsesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc0: Internal data stream error.
# this runs, but only makes one burst of noise
0 % gst-launch-1.0 filesrc location=foo.opus ! opusparse ! opusdec ! pulsesink
Here is a working example.
gst-launch-1.0 filesrc location=foo.opus ! oggdemux ! opusparse ! opusdec ! alsasink
I was able to piece this together from the example pipelines here
https://gstreamer.freedesktop.org/documentation/opus/opusdec.html?gi-language=c
and
https://gstreamer.freedesktop.org/documentation/opus/opusenc.html?gi-language=c
Although, in those examples, there is no opusparse. I don't know why opusparse is needed for my files but not for a sine.ogg created with opusenc, or what the difference is between parsing and decoding the Opus file data ...
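As I understand it (my reading, not from the docs above): a parser such as opusparse only packetizes the compressed bitstream and fills in caps and timestamps, while opusdec actually decompresses it to raw PCM. For PulseAudio specifically, letting decodebin auto-plug the demuxer, parser, and decoder should also work, assuming a running PulseAudio daemon:
gst-launch-1.0 filesrc location=foo.opus ! decodebin ! audioconvert ! audioresample ! pulsesink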
I'm trying to convert an mp3 file to wav with gstreamer. Here's the pipeline:
gst-launch-1.0 filesrc location=audio.mp3 ! audio/mpeg ! mpg123audiodec ! wavenc ! filesink location=audio.wav
Also, I'd like the output to be 24-bit/48 kHz.
I get this error:
ERROR: from element /GstPipeline:pipeline0/GstCapsFilter:capsfilter0: Filter caps do not completely specify the output format
There was another similar thread that I saw here and tried to comment on, but I'd need 50 reputation points or whatever ;)
I would make use of bins to make your life easier. I came up with this:
gst-launch-1.0 filesrc location=in.mp3 ! decodebin ! audioresample ! audioconvert ! \
audio/x-raw,format=S24LE,rate=48000 ! wavenc ! filesink location=out.wav
Which gives me this result:
$ file out.wav
out.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 24 bit, stereo 48000 Hz
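For what it's worth, the original error comes from the bare audio/mpeg caps filter, which (as the message says) does not completely specify the format. Keeping mpg123audiodec but letting a parser fill in the stream details should also work; a sketch, assuming mpegaudioparse from gst-plugins-good is available:
gst-launch-1.0 filesrc location=audio.mp3 ! mpegaudioparse ! mpg123audiodec ! audioresample ! audioconvert ! \
audio/x-raw,format=S24LE,rate=48000 ! wavenc ! filesink location=audio.wav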
I could not find a stable, reliable approach to demux the A/V stream and then save the video as a playable H.264 Annex B file.
Well, I tried the following pipeline on the file:
gst-launch-0.10 filesrc location=h264_720p_mp_3.1_3mbps_aac_shrinkage.mkv ! matroskademux ! filesink location=abc.h264
-rw-rw-r-- 1 XXX XXX 28697147 Nov 1 10:04 h264_720p_mp_3.1_3mbps_aac_shrinkage.mkv
-rw-rw-r-- 1 XXX XXX 27581733 Nov 1 10:19 abc.h264
A file got saved with a "not much smaller" size, but it is not playable. However, the parent container is playable with the following pipeline:
gst-launch-0.10 filesrc location=h264_720p_mp_3.1_3mbps_aac_shrinkage.mkv ! matroskademux ! h264parse ! ffdec_h264 ! ffmpegcolorspace ! ximagesink
Questions
Q1. What are the methods to extract the video ES and audio ES from different containers using GStreamer?
Q2. Are there other methods for Q1 that always work and/or are easier?
In general, you need to specify which pad you're interested in. Otherwise you couldn't distinguish the audio ES from the video ES.
The following works on my machine:
gst-launch-1.0 filesrc location=example.mkv ! queue ! matroskademux name=dmux dmux.video_0 ! queue ! filesink location=vid.265 dmux.audio_0 ! queue ! filesink location=aud.aac
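Note that a raw dump like vid.265 is not guaranteed to be playable on its own; to get an Annex B byte-stream you would add h264parse on the video branch. A sketch (assuming the Matroska file carries H.264 video and AAC audio):
gst-launch-1.0 filesrc location=example.mkv ! matroskademux name=dmux \
dmux.video_0 ! queue ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=vid.h264 \
dmux.audio_0 ! queue ! filesink location=aud.aac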
All of the following commands work for me. They create an H.264 byte-stream file from an MP4 video file. The newly created files can also be played using ffplay or gst-play-1.0.
gst-launch-1.0 filesrc location=./VID-20190903-WA0012.mp4 ! qtdemux name=pnkj_demux ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=./VID-20190903-WA0012_1.264
gst-launch-1.0 -e filesrc location=./VID-20190903-WA0012.mp4 ! qtdemux name=pnkj_demux ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=./VID-20190903-WA0012_2.264
gst-launch-1.0 filesrc location=./VID-20190903-WA0012.mp4 ! qtdemux name=pnkj_demux pnkj_demux.video_0 ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=./VID-20190903-WA0012_3.264
gst-launch-1.0 -e filesrc location=./VID-20190903-WA0012.mp4 ! qtdemux name=pnkj_demux pnkj_demux.video_0 ! h264parse ! video/x-h264,stream-format=byte-stream ! filesink location=./VID-20190903-WA0012_4.264
My pipeline is simply trying to mux an audiotestsrc with a videotestsrc and output to a filesink.
videotestsrc num-buffers=150 ! video/x-raw-yuv,width=1920,height=1080 ! \
timeoverlay ! videorate ! queue ! xvidenc ! avimux name=mux \
mux. ! filesink sync=true location=new.avi \
audiotestsrc num-buffers=150 ! queue ! audioconvert ! audiorate ! mux.
new.avi is produced.
Video is exactly 5 seconds long as expected
Audio is about 3.5 seconds long and the remaining 1.5 seconds is silent.
What am I missing here? I've tried every combination of sync="" properties, etc.
What pipeline would generate a test clip with an audio test pattern and a video test pattern muxed together, where audio and video have the same duration?
Thanks
audiotestsrc num-buffers=150
By default each buffer contains 1024 samples (the samplesperbuffer property).
Which means you are generating 150*1024=153600 samples.
Assuming 44.1 kHz, the duration would be 153600/44100 = 3.48 seconds.
So if you need 5 seconds of audio, you need 5*44100 = 220500 samples. With samplesperbuffer=1024, this means 220500/1024 = 215.33 buffers (i.e. 215 or 216 buffers).
It would be easier if you set samplesperbuffer to 441; then you need exactly 100 buffers for every second of audio:
audiotestsrc num-buffers=500 samplesperbuffer=441
Alternatively, you can make use of the blocksize property of audiotestsrc to match the duration of a video frame. This is in bytes, so you might want to use a caps filter after audiotestsrc to pin down the sampling rate and sample format.
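Putting that together, a complete sketch in 1.0 syntax (avenc_mpeg4 from gst-libav stands in for the old xvidenc, and a 30 fps framerate is assumed so that 150 video buffers = 5 seconds):
# 150 buffers at 30 fps = 5 s of video; 500 buffers * 441 samples at 44100 Hz = 5 s of audio
gst-launch-1.0 -e videotestsrc num-buffers=150 ! video/x-raw,width=1920,height=1080,framerate=30/1 ! \
timeoverlay ! avenc_mpeg4 ! avimux name=mux ! filesink location=new.avi \
audiotestsrc num-buffers=500 samplesperbuffer=441 ! audio/x-raw,rate=44100 ! audioconvert ! mux.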