Using avprobe to examine one TS file, I get the following.
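For reference, the invocation is simply avprobe pointed at the file:
avprobe /tmp/file.ts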
Input #0, mpegts, from '/tmp/file.ts':
Duration: 00:00:17.06, start: 82902.417489, bitrate: 3533kb/s
Program 30601
Program 30602
Program 30603
Program 30604
Program 30605
Program 30606
Program 30607
Program 30608
Program 30609
Program 30610
Program 30611
Stream #0.0[0xa0]: Video: mpeg2video (Main), yuv420p, 720x576 [PAR 64:45 DAR 16:9], 7647 kb/s, 25 fps, 90k tbn, 50 tbc
Stream #0.1[0x50](spa): Audio: mp2, 48000 Hz, 2 channels, s16p, 128 kb/s (clean effects)
Stream #0.2[0x51](dos): Audio: mp2, 48000 Hz, 2 channels, s16p, 128 kb/s (clean effects)
Stream #0.3[0xd0]: Data: [192][0][0][0] / 0x00C0
Stream #0.4[0xde]: Data: [192][0][0][0] / 0x00C0
Stream #0.5[0xd5]: Data: [193][0][0][0] / 0x00C1
Stream #0.6[0xfd]: Data: [193][0][0][0] / 0x00C1
Stream #0.7[0x133]: Data: [193][0][0][0] / 0x00C1
Stream #0.8[0x164]: Data: [193][0][0][0] / 0x00C1
Stream #0.9[0x188]: Data: [193][0][0][0] / 0x00C1
Stream #0.10[0x135]: Data: [192][0][0][0] / 0x00C0
Stream #0.11[0x276]: Data: [193][0][0][0] / 0x00C1
Stream #0.12[0x378]: Data: [193][0][0][0] / 0x00C1
Program 30612
I am testing this command to transcode one test TS file to MP4 over the network. It works fine, but it uses the default video and audio streams of program 30611:
gst-launch-1.0 filesrc location=/tmp/file.ts ! \
tsdemux program-number=30611 name=demux demux. ! \
queue ! \
mpegvideoparse ! \
omxmpeg2videodec ! \
queue ! \
omxh264enc ! \
video/x-h264,stream-format=byte-stream,profile=high,framerate=25/1 ! \
h264parse config-interval=1 ! \
mpegtsmux name=mux ! \
tcpserversink host=ipaddress port=port demux. ! \
queue ! \
mpegaudioparse ! \
mpg123audiodec ! \
audioconvert dithering=0 ! \
audio/x-raw,channels=1 ! \
avenc_mp2 bitrate=32768 ! \
mux.
But I would like to select the first or second audio stream instead. I can't find how to do it in the documentation or on the internet. Could you help me, please?
This worked on my Raspberry Pi 2 with the MPEG-2 hardware decoder license enabled (you have to buy the codec license):
nohup gst-launch-1.0 filesrc location=/tmp/file.ts ! \
tsdemux name=demux demux.${VIDEOSELTXT} ! \
queue ! \
mpegvideoparse ! \
omxmpeg2videodec ! \
queue ! \
omxh264enc ! \
video/x-h264,stream-format=byte-stream,profile=high,width=360,height=288,framerate=25/1 ! \
h264parse config-interval=1 ! \
mpegtsmux name=mux ! \
tcpserversink host=${IP2LISTEN} port=${PORT2LISTEN} demux.${AUDIOSELTXT} ! \
queue ! \
mpegaudioparse ! \
mpg123audiodec ! \
audioconvert dithering=0 ! \
audio/x-raw,channels=1 ! \
avenc_mp2 bitrate=32768 ! \
mux. > /tmp/sal.log &
In the previous example, change the video variable:
VIDEOSELTXT to video_00a0
and the audio variable:
AUDIOSELTXT to audio_0050 for the spa audio stream
or to audio_0051 for the dos audio stream
In my tests this only works with four hex digits in the stream-id part of the audio and video pad names; the following variants do not work:
audio_00050
audio_50
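For reference, here is the first test pipeline with the pad names substituted explicitly, selecting the video stream (0xa0) and the second, dos, audio stream (0x51) of program 30611 (ipaddress and port are placeholders as before):
gst-launch-1.0 filesrc location=/tmp/file.ts ! \
tsdemux program-number=30611 name=demux demux.video_00a0 ! \
queue ! \
mpegvideoparse ! \
omxmpeg2videodec ! \
queue ! \
omxh264enc ! \
video/x-h264,stream-format=byte-stream,profile=high,framerate=25/1 ! \
h264parse config-interval=1 ! \
mpegtsmux name=mux ! \
tcpserversink host=ipaddress port=port demux.audio_0051 ! \
queue ! \
mpegaudioparse ! \
mpg123audiodec ! \
audioconvert dithering=0 ! \
audio/x-raw,channels=1 ! \
avenc_mp2 bitrate=32768 ! \
mux.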
I need a way to convert an H.264 stream with stream-format byte-stream to avc (a packetized format) so that it can be fed into matroskamux.
I have programmed my pipeline in C and tried putting an h264parse element before matroskamux, but it simply parses the stream without changing the stream format.
The result is as below
However, when I used the command line to script out the same static pipeline, mimicking the same setup, the result was miraculously what I need, with virtually no h264parse properties set. I can obtain what I need to feed matroskamux.
The result
I tried to figure out which property to set on h264parse, but none of them seems able to explicitly convert to another stream format. Is there something else that packetizes the byte stream into avc, or must I force it on the pad?
I am using vpuenc_h264 on an ARM platform, so my choice of plugins is limited, but I would still be happy to hear what I could do about the problem.
Thanks
==========================================================
More information:
How I obtained the video pad of splitmuxsink:
First, I get the pad template:
video_pad_template = gst_element_class_get_pad_template (GST_ELEMENT_GET_CLASS (splitmuxsink),
                                                         "video");
Next, I get the pad using a request pad:
pad = tee_pad = gst_element_request_pad (splitmuxsink,
                                         video_pad_template,
                                         NULL, NULL);
After which I link it as usual:
if (gst_pad_link (src_pad, pad) != GST_PAD_LINK_OK)
{
    g_printerr (" could not be linked.\n");
    gst_object_unref (pipeline);
    return -1;
}
============================================================
The CLI command (which I use):
gst-launch-1.0 \
v4l2src \
! video/x-raw,width=640,height=480,framerate=30/1,is-live=true \
! videorate \
! video/x-raw,framerate=25/1 \
! tee name=tv \
tv. \
! queue \
! vpuenc_h264 \
! tee name=tv2 \
tv2. \
! queue \
! h264parse \
! queue \
! m.video \
tv2. \
! queue \
! rtph264pay pt=96 \
! udpsink host="192.168.50.3" port=2534 \
pulsesrc \
! audioconvert \
! audioresample \
! audio/x-raw,rate=8000,channels=1,depth=8,format=S16LE \
! tee name=ta \
! queue \
! mulawenc \
! tee name=ta2 \
ta2. \
! queue \
! rtppcmupay pt=97 \
! udpsink host="192.168.50.3" port=2536 \
ta2. \
! queue \
! m.audio_0 splitmuxsink location=log%02d.mkv muxer=matroskamux max-size-time=60000000000 name=m
The full source or a minimal reproducible example would be useful to help you better. It seems like matroskamux is not being considered during the caps negotiation so AVC conversion is not being forced.
Ideally, you want to figure out why this is happening. But you can simply force the conversion from byte-stream to AVC by placing a capsfilter element right after the h264parse and configuring its caps property accordingly. For example:
... ! h264parse ! capsfilter caps="video/x-h264,stream-format=avc,alignment=au" ! splitmuxsink
The output format of the h264parse is controlled via caps, not properties.
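For a self-contained illustration, here is a minimal sketch using videotestsrc and x264enc as stand-ins for your capture source and VPU encoder (in gst-launch, bare caps between two elements create exactly such a capsfilter):
gst-launch-1.0 -e videotestsrc num-buffers=250 \
    ! x264enc \
    ! h264parse \
    ! "video/x-h264,stream-format=avc,alignment=au" \
    ! splitmuxsink location=log%02d.mkv muxer=matroskamux max-size-time=60000000000
The same caps filter, expressed as the capsfilter element in your C code, should pin h264parse's source pad to avc during negotiation.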
I'm trying to stream a video encoded in H264 over RTP/UDP.
Sending:
gst-launch-1.0 \
videotestsrc ! \
video/x-raw,format=RGBx,width=960,height=540,framerate=25/1 ! \
videoconvert ! \
x264enc bitrate=2000 ! \
rtph264pay config-interval=1 pt=96 ! \
udpsink port=5000
Receiving:
gst-launch-1.0 \
udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! \
rtph264depay ! \
decodebin ! \
videoconvert ! \
ximagesink
If I start receiving the video before sending it, then everything works as intended.
However, if I start receiving video after the start of sending, then the image "breaks".
An example of a corrupted image
How to fix this problem?
The problem was solved by specifying caps after videoconvert:
...
videoconvert ! video/x-raw,format=I420
...
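For completeness, the full sending pipeline with that fix applied:
gst-launch-1.0 \
    videotestsrc ! \
    video/x-raw,format=RGBx,width=960,height=540,framerate=25/1 ! \
    videoconvert ! \
    video/x-raw,format=I420 ! \
    x264enc bitrate=2000 ! \
    rtph264pay config-interval=1 pt=96 ! \
    udpsink port=5000
Pinning I420 makes the format x264enc receives deterministic, instead of whatever videoconvert happens to negotiate.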
I use the following pipeline to play my video on screen:
gst-launch-1.0 filesrc location=01_189_libxvid_1920x1080_6M.mp4 ! \
qtdemux ! mpeg4videoparse ! omxmpeg4videodec ! videobalance brightness=100 ! \
video/x-raw,format=BGRA ! waylandsink --gst-debug=*:2
but now, instead of playing it directly, I want to encode it and save it to a folder. Please suggest a pipeline.
It should be something like this (an example with the H.264 codec; note the -e flag, which makes gst-launch send EOS on Ctrl-C so qtmux can finalize the file):
gst-launch-1.0 -e --gst-debug=3 \
filesrc location="/path/input/sample_in.mp4" \
! qtdemux \
! mpeg4videoparse \
! omxmpeg4videodec \
! queue \
! x264enc \
! qtmux \
! filesink location="/path/output/sample_out.mp4"
I am trying to encode an audio file using GStreamer. I am using this command:
gst-launch filesrc location=s.pcm ! audio/x-raw-int, rate=4000, channels=2, endianness=1234, width=16, depth=16, signed=true ! ffenc_aac ! filesink location=file.wav
And I am getting an error message:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2625): gst_base_src_loop (): /GstPipeline:pipeline0/GstFileSrc:filesrc0:
streaming task paused, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
Can anyone guide me on how to overcome this issue?
Don't confuse encoding with containers. You cannot have an AAC-encoded WAV; WAVs are PCM. You can have a 4 kHz WAV, or you can have an AAC-encoded file in an MP4 or M4A container. Examples of both are below. Note that in these examples the AAC encoders get very picky if you try to take the sample rate below 48000.
Create raw audio file
gst-launch audiotestsrc num-buffers=100 \
! audio/x-raw-int, rate=48000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! filesink location=foo.pcm
Encode it as a WAV
gst-launch filesrc location=foo.pcm \
! audio/x-raw-int, rate=48000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! audioresample \
! audio/x-raw-int, rate=4000 \
! wavenc \
! filesink location=foo.wav
Encode it as AAC and mux into mp4
I don't really know why I had to encode and then decode again, but nothing else worked, even though I could go directly from audiotestsrc.
gst-launch filesrc location=foo.pcm \
! audio/x-raw-int, rate=48000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! wavenc \
! wavparse \
! ffenc_aac \
! mp4mux \
! filesink location=foo.mp4
Alternatively, using faac, the pipeline was a lot cleaner and the output file was smaller:
gst-launch filesrc location=foo.pcm \
! audio/x-raw-int, rate=48000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! faac \
! mp4mux \
! filesink location=foo.mp4
Or voaacenc. It wouldn't work below 48000, even though it appears to have the most flexible capabilities. I tried 8k, 16k, 48k, 96k, and 44100, which anecdotally changed the pitch of the test tone.
gst-launch filesrc location=foo.pcm \
! audio/x-raw-int, rate=48000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! voaacenc \
! mp4mux \
! filesink location=foo.mp4
Low sample-rate AAC
The lowest AAC sample rate I was successful with was 16000. Here are those tests, again noting that faac produced the smallest file size.
gst-launch audiotestsrc num-buffers=100 \
! audio/x-raw-int, rate=16000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! ffenc_aac \
! mp4mux \
! filesink location=foo.mp4
gst-launch audiotestsrc num-buffers=100 \
! audio/x-raw-int, rate=16000, channels=2, endianness=1234, width=16, depth=16, signed=true \
! faac \
! mp4mux \
! filesink location=foo.mp4
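Note that all of the pipelines above use the legacy GStreamer 0.10 syntax (gst-launch, audio/x-raw-int, ffenc_aac). On GStreamer 1.0 the raw-audio caps changed, so a rough equivalent of the WAV example might look like this (a sketch, assuming rawaudioparse is available in your build):
gst-launch-1.0 filesrc location=foo.pcm \
    ! rawaudioparse format=pcm pcm-format=s16le sample-rate=48000 num-channels=2 \
    ! audioconvert \
    ! audioresample \
    ! audio/x-raw,rate=4000 \
    ! wavenc \
    ! filesink location=foo.wav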
I am trying to convert a DVD to an MKV file with GStreamer. The pipeline I use is:
gst-launch -evv multifilesrc location="VTS_01_%d.VOB" index=1 ! dvddemux name=demuxer \
matroskamux name=mux ! filesink location=test.mkv \
demuxer.current_video ! queue ! mpeg2dec ! x264enc ! mux. \
demuxer.current_audio ! queue ! ffdec_ac3 ! lamemp3enc ! mux.
Unfortunately, the pipeline does not get beyond prerolling. When I replace x264enc with, for instance, ffenc_mpeg4, everything works fine.
This may work:
gst-launch filesrc location=file.vob \
! queue \
! dvddemux name=demuxer matroskamux name=mux \
! queue \
! filesink location=test.mkv demuxer.current_video \
! queue \
! ffdec_mpeg2video \
! ffdeinterlace \
! x264enc \
! 'video/x-h264, width=720, height=576, framerate=25/1' \
! mux. demuxer.current_audio \
! queue max-size-bytes=0 max-size-buffers=0 max-size-time=10000000000 \
! ffdec_ac3 \
! audioconvert \
! lamemp3enc \
! mux.
(The x264enc byte-stream property should be 0; sorry for that earlier.)
You need to give the caps of the video after x264enc, and you need to increase the limits on the audio queue to handle the delay introduced by x264enc. These two changes got the pipeline running at my end.
The deinterlacer is optional but desirable for interlaced content.
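As an aside (untested on this exact pipeline): the latency that forces the larger audio queue comes from x264enc's lookahead, so another option is to reduce it at the encoder itself via its tune property, replacing the video branch with something like the following, which may let you keep the default queue limits:
demuxer.current_video \
! queue \
! ffdec_mpeg2video \
! ffdeinterlace \
! x264enc tune=zerolatency \
! 'video/x-h264, width=720, height=576, framerate=25/1' \
! mux.
Note that tune=zerolatency trades some compression efficiency for the lower latency.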