gstreamer pipeline only generates mono stream

I'm trying to get UPnP streaming to work. Rygel runs fine; however, all I get is a mono stream, even if the input is stereo. Doing some debugging, I replicated Rygel's GStreamer pipeline with
gst-launch-1.0 pulsesrc device=upnp.monitor num-buffers=100 ! audioconvert ! lamemp3enc target=quality quality=6 ! filesink location=test.mp3
where the problem is also apparent:
mp3info -x test.mp3
...
Media Type: MPEG 1.0 Layer III
Audio: Variable kbps, 44 kHz (mono)
...
Where does this pipeline lose the second channel? How can I debug this?

You never ask for stereo:
gst-launch-1.0 pulsesrc device=upnp.monitor num-buffers=100 ! "audio/x-raw,channels=2" ! audioconvert ! lamemp3enc target=quality quality=6 ! filesink location=test.mp3

Add a -v to the launch-line to see all the caps negotiated on all pads of the pipeline. Look for "channels" and see where it goes from 2 to 1.
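For example, you can run the stereo-capped pipeline verbosely and filter the output for the channel count (just a sketch, assuming a shell with grep available; the device name is taken from the question):
gst-launch-1.0 -v pulsesrc device=upnp.monitor num-buffers=100 ! "audio/x-raw,channels=2" ! audioconvert ! lamemp3enc target=quality quality=6 ! filesink location=test.mp3 2>&1 | grep channels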

Related

adding audio delay in decoding pipeline - decklinkaudiosink

Dear GStreamer community,
I am running GStreamer (1.20.3) on Ubuntu 22.04 LTS with Decklink drivers (12.4).
After building the following pipeline (and playing around with GStreamer, watching tutorials, etc.), I am able to decode a high-quality HD SRT stream (UDP streaming) and output it to SDI (in 1080i50); it works very well.
gst-launch-1.0 -v srtsrc uri=srt://x.x.x.x:xxxx latency=200 ! tsdemux name=demux demux. ! h264parse ! video/x-h264 ! avdec_h264 ! queue ! videoconvert ! video/x-raw,format=UYVY ! decklinkvideosink mode=1080i50 sync=false demux. ! avdec_aac ! queue ! audioconvert ! audio/x-raw, format=S32LE, channels=2 ! decklinkaudiosink
Audio-to-video sync is stable for hours (I didn't test for days), but when testing the encoder-to-decoder chain end to end, the audio on my GStreamer pipeline comes a little too early (about 60 ms).
I tried changing only the buffer size in the audio part of the pipeline to correct the timing on the audio side, e.g.
queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=60000000
but the audio-to-video offset didn't change, even after trying several different min-threshold-time values.
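For reference, this is roughly how that modified queue sat in the audio branch (just the original pipeline with the attempted settings merged in; not a fix):
gst-launch-1.0 -v srtsrc uri=srt://x.x.x.x:xxxx latency=200 ! tsdemux name=demux demux. ! h264parse ! video/x-h264 ! avdec_h264 ! queue ! videoconvert ! video/x-raw,format=UYVY ! decklinkvideosink mode=1080i50 sync=false demux. ! avdec_aac ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=60000000 ! audioconvert ! audio/x-raw, format=S32LE, channels=2 ! decklinkaudiosink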
For decklinkaudiosink there is no ts-offset property to change the timing, and changing the buffer-time property didn't change anything either.
Can anybody please help me correct the audio timing/latency relative to the video decoding in my pipeline?
Thanks!

Synchronize two RTSP/RTP H264 video streams capture using GStreamer

I have two AXIS IP cameras streaming an H264 stream over RTSP/RTP. Both cameras are set to synchronize with the same NTP server, so I assume both cameras have the exact same clock (maybe a minor difference of a few ms).
In my application, both cameras point at the same view, and it is required to process images from both cameras captured at the same time. Thus, I want to synchronize the image capture using GStreamer.
I have tried invoking the two pipelines separately on different command prompts, but the videos are 2-3 seconds apart.
gst-launch rtspsrc location=rtsp://192.168.16.136:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam1_video_%d.mp4
gst-launch rtspsrc location=rtsp://192.168.16.186:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam2_video_%d.mp4
Can someone suggest a gstreamer pipeline to synchronize both H264 streams and record them into separate video files?
Thanks!
ARM
I am able to launch a pipeline using gst-launch as shown below. It shows a good improvement in captured-frame synchronization compared to launching two pipelines: most of the time they differ by 0-500 ms. However, I still want to synchronize them to better than 150 ms accuracy.
rtspsrc location=rtsp://192.168.16.136:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_136_%d.mp4 \
rtspsrc location=rtsp://192.168.16.186:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_186_%d.mp4
I'd appreciate it if someone could point out other ideas!
~Arm
What do you mean by synchronize? If you record to separate video files you do not need any synchronization, as this is going to totally separate them. Each RT(S)P stream will contain different timestamps; if you want to align them somehow to the same time (I mean real human time, like "both should start from 15:00"), then you have to configure the cameras that way somehow (this is just an idea).
Also, you did not tell us what's inside those RTP/RTSP streams (is it MPEG TS or a plain stream, etc.), so I will give an example of MPEG TS encapsulated in RTP streams.
We will go step by step:
Suppose this is one camera, just to demonstrate how it may look:
gst-launch-1.0 -v videotestsrc ! videoconvert ! x264enc ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=8888
Then this would be the receiver (it must use rtpmp2tdepay; we are encapsulating the metadata inside an MPEG container):
gst-launch-1.0 udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! decodebin ! videoconvert ! autovideosink
If you test this with your camera, autovideosink means that a new window will pop up displaying your camera.
Then you can try to store it in a file; we will use mp4mux.
So for the same camera input we do:
gst-launch-1.0 -e udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=test.mp4
Explanation: we do not decode and re-encode (a waste of processing power), so I just demux the MPEG TS stream and then, instead of decoding the H264, I just parse it for mp4mux, which accepts video/x-h264.
Now you could use the same pipeline for each camera, or you can just copy-paste all the elements into one pipeline, as sketched below.
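For example, recording both cameras from a single gst-launch invocation could look roughly like this (a sketch; the second port 8889 and the file names are assumptions):
gst-launch-1.0 -e \
udpsrc port=8888 caps="application/x-rtp, media=(string)video, encoding-name=(string)MP2T" ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=cam1.mp4 \
udpsrc port=8889 caps="application/x-rtp, media=(string)video, encoding-name=(string)MP2T" ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=cam2.mp4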
Now, as you did not provide any (at least partial) attempt to make something out of this, that is going to be your homework :) or make yourself clearer about the synchronization, as I do not understand it.
UPDATE
After your update to the question this answer is not very useful, but I will keep it here as a reference. I have no idea how to synchronize that.
Another piece of advice: try to look at the timestamps after udpsrc; maybe they are synchronized already. In that case you can use streamsynchronizer to synchronize the two streams, or maybe a video/audio mixer:
gst-launch-1.0 -v udpsrc port=8888 ! identity silent=false ! fakesink
This should print the timestamps (PTS, DTS, Duration ..):
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (1328 bytes, dts: 0:00:02.707033598, pts:0:00:02.707033598, duration: none, offset: -1, offset_end: -1, flags: 00004000 tag-memory ) 0x7f57dc016400
Compare the PTS of each stream. Maybe you could combine the two udpsrc elements in one pipeline and put an identity after each udpsrc (with a different name=something1 on each) to make them start receiving together; see the sketch below.
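A rough sketch of that idea (the ports and names are just placeholders):
gst-launch-1.0 -v \
udpsrc port=8888 ! identity name=cam1_id silent=false ! fakesink \
udpsrc port=8889 ! identity name=cam2_id silent=false ! fakesink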
HTH

No key frames when sending H263 with gstreamer

I'm trying to stream H263 via RTP with gstreamer 1.0. It works just fine aside from no key frames being sent. The command line looks like this:
gst-launch-1.0 videotestsrc pattern=ball ! avenc_h263 ! rtph263pay pt=34 ! udpsink host=10.0.75.196 port=25782 sync=true
The result is that the picture starts from black and only the changes show up thereafter. Could it have anything to do with avenc_h263 using features that only H263+ or H263++ handle?
I would be very grateful for any help on this!
I finally found the problem! The default rtp-payload-size is 0. Changing this parameter to anything above zero (I tried 1 and 20) makes it run smoothly and with full frames.
gst-launch-1.0 videotestsrc pattern=ball ! avenc_h263 rtp-payload-size=10 ! rtph263pay pt=34 ! udpsink host=10.0.75.196 port=25782 sync=true
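If you want to double-check the default of that property on your own installation, something like this should show it (assuming gst-inspect-1.0 is available):
gst-inspect-1.0 avenc_h263 | grep -A 2 rtp-payload-size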

play encoded stream in gstreamer

I used the following GStreamer pipeline to store my encoded stream in a binary file:
gst-launch v4l2src ! videorate ! video/x-raw-yuv, framerate=\(fraction\)10/1 \
! videoscale ! video/x-raw-yuv, format=\(fourcc\)I420, width=640, height=480\
! ffmpegcolorspace ! x264enc ! fdsink > vid.bin
Now I want to play the previously recorded files in GStreamer using the following pipeline:
cat vid.bin | gst-launch fdsrc ! ffdec_h264 ! autovideosink
But then it gives the following error:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/ffdec_h264:ffdec_h2640: Internal GStreamer error: negotiation problem. Please file a bug at http://bugzilla.gnome.org/enter_bug.cgi?product=GStreamer.
Additional debug info:
gstffmpegdec.c(2804): gst_ffmpegdec_chain (): /GstPipeline:pipeline0/ffdec_h264:ffdec_h2640:
ffdec_h264: input format was not set before data start
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
I know that the best way to capture video is to use muxers, but is there any way to play my previously recorded files?
Thanks
I'm not sure your pipeline is right.
If you want to write to a file, why not simply use filesink and filesrc?
fdsink > vid.bin will not work well because the messages printed by gst-launch will also go into the file. [Just open vid.bin in a text editor and you will see what I mean.]
Also, for an x264 stream to be stored without a muxer, you need to set byte-stream=1 on your x264enc so it is stored in Annex B format and is decodable.
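Putting those two points together, the recording command could be rewritten roughly like this (a sketch based on the original 0.10 pipeline, only swapping fdsink for filesink and enabling byte-stream):
gst-launch v4l2src ! videorate ! video/x-raw-yuv, framerate=\(fraction\)10/1 \
! videoscale ! video/x-raw-yuv, format=\(fourcc\)I420, width=640, height=480 \
! ffmpegcolorspace ! x264enc byte-stream=1 ! filesink location=vid.bin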
To play back the raw x264 stream you need to have a colorspace converter before the video sink:
gst-launch filesrc location=inputfile ! legacyh264parse ! ffdec_h264 ! queue ! ffmpegcolorspace ! autovideosink
plays just fine here on my end.
Or, to play back a raw H264 file with GStreamer 1.0:
gst-launch-1.0 filesrc location=/tmp/video.h264 ! h264parse ! avdec_h264 ! autovideosink

Recording audio+video from webcam with gstreamer

I'm having a problem trying to record audio+video from my webcam to a file. If I use videotestsrc and autoaudiosrc I get everything right (read as: I get a file with audio recorded from the webcam's mic and the test video image), but as soon as I replace videotestsrc with v4l2src (or autovideosrc) I get Error starting streaming on device '/dev/video0'.
The command I'm using:
gst-launch-0.10 videotestsrc ! queue ! ffmpegcolorspace ! theoraenc ! queue ! oggmux name=mux autoaudiosrc ! queue ! audioconvert ! vorbisenc ! queue ! mux. mux. ! queue ! filesink location=test.ogg
Why is that happening? What am I doing wrong?
EDIT:
In fact, something as simple as
gst-launch-0.10 autovideosrc ! autovideosink autoaudiosrc ! autoaudiosink
is failing with the same error (Error starting streaming on device '/dev/video0')
Replacing autovideosrc with videotestsrc gives me test image + real audio.
Replacing autoaudiosrc with audiotestsrc gives me real image + test audio.
I'm starting to think that this is some kind of limitation of my webcam. Is that possible?
EDIT:
GST_DEBUG=2 log here: http://pastie.org/4755009
EDIT 2:
GST_DEBUG="v4l2*:5" (gstreamer 0.10): http://pastie.org/4810519
GST_DEBUG="v4l2*:5" (gstreamer 1.0): http://pastie.org/4810502
Please do a
gst-launch-1.0 v4l2src ! videoscale ! videoconvert ! autovideosink
Does that run? If not, repeat it as
GST_DEBUG="v4l2*:5" GST_DEBUG_NO_COLOR=1 gst-launch 2>debug.log ...
and check the log for errors. You might also want to run v4l-info (install the v4l-conf package under Debian/Ubuntu) and report what formats your camera supports.
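For example, a full debug run of the test pipeline above might look like this (a sketch; substitute whatever pipeline actually fails for you):
GST_DEBUG="v4l2*:5" GST_DEBUG_NO_COLOR=1 gst-launch-1.0 v4l2src ! videoscale ! videoconvert ! autovideosink 2>debug.log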