I run GStreamer with a udpsrc waiting for raw PCM that goes through an audiomixer.
proc="rawaudioparse use-sink-caps=false format=pcm \
pcm-format=s16be sample-rate=44100 num-channels=2 \
! audioconvert ! audioresample"
gst-launch-1.0 audiomixer name=mix ! queue ! autoaudiosink \
udpsrc port=3000 ! $proc ! mix.
And I feed it with FFmpeg:
ffmpeg -re -i audio.mp3 -ac 2 -ar 44100 -f s16be udp://localhost:3000
And it works fine.
But when I connect a second udpsrc:
gst-launch-1.0 audiomixer name=mix ! queue ! autoaudiosink \
udpsrc port=3000 ! $proc ! mix. \
udpsrc port=3001 ! $proc ! mix.
and feed only one of them (say port 3000), the audio gets clicky/glitchy. As soon as the second port is also fed, the audio is fine.
I also tried RTP instead of raw PCM, with the same result.
Thanks for any tips.
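(One workaround consistent with the observation above, that the glitches disappear once both ports receive data, would be to keep the idle port fed with silence. A sketch using FFmpeg's anullsrc source, untested against this exact setup:
ffmpeg -re -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -f s16be udp://localhost:3001
This mirrors the real feed's format, so the mixer always has data on both pads.)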
I want to encode jpg/png images into an H.264/H.265 MP4 video (H.265 is preferred, if possible).
I tried using the commands from this question:
How to create a mp4 video file from PNG images using Gstreamer
I got an mp4 video out with this command:
gst-launch-1.0 -e multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec ! videoconvert ! omxh265enc ! qtmux ! filesink location=image2.mp4
or
gst-launch-1.0 -e multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec ! videoconvert ! queue ! x264enc ! queue ! mp4mux ! filesink location=image3.mp4
However according to the docs:
Accelerated_GStreamer_User_Guide
We can have hardware acceleration with:
H.265 Encode (NVIDIA Accelerated Encode)
gst-launch-1.0 nvarguscamerasrc ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, \
format=(string)NV12, framerate=(fraction)30/1' ! nvv4l2h265enc \
bitrate=8000000 ! h265parse ! qtmux ! filesink \
location=<filename_h265.mp4> -e
I changed it a little bit for images as input:
gst-launch-1.0 multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec ! videoconvert ! queue ! nvv4l2h265enc bitrate=8000000 ! h265parse ! qtmux ! filesink location=output.mp4 -e
However I get the error:
WARNING: erroneous pipeline: could not link queue0 to nvv4l2h264enc0
According to the docs, the nvv4l2h265enc encoder should be available in GStreamer version 1.0.
What am I doing wrong?
NVIDIA's DevTalk forum is the best place for these sorts of questions, but multifilesrc probably puts the images in normal CPU memory, not in the GPU NvBuffers that the nvv4l2h265enc element expects. Furthermore, the encoder only seems to work with NV12-formatted YCbCr data, while I think the decoded PNGs are probably RGB.
The nvvidconv element converts between the "CPU" parts and the "NVIDIA accelerated" parts by moving the data to GPU memory and converting the color space to NV12.
This launch string worked for me:
gst-launch-1.0 \
multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec \
! nvvidconv \
! 'video/x-raw(memory:NVMM), format=(string)NV12' \
! queue \
! nvv4l2h265enc bitrate=8000000 \
! h265parse \
! qtmux \
! filesink location=output.mp4 -e
The caps string after nvvidconv isn't actually necessary (I also ran successfully without it): nvv4l2h265enc provides caps of its own, and nvvidconv knows what needs changing (color space and memory type). I added it for illustration, to show what is actually going on.
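For reference, the same launch string without the explicit caps filter, which also ran fine for me:
gst-launch-1.0 \
multifilesrc location="IMG%03d.png" index=1 caps="image/png,framerate=30/1" ! pngdec \
! nvvidconv \
! queue \
! nvv4l2h265enc bitrate=8000000 \
! h265parse \
! qtmux \
! filesink location=output.mp4 -e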
I hope this helps!
I have googled it all, but I couldn't find a solution to my problem. I will be happy if anyone has had a similar need and resolved it somehow.
I stream to an RTMP server with the following command. It captures video from an HDMI encoder, then crops and rotates it.
gst-launch-1.0 -e v4l2src device=/dev/v4l/by-path/platform-fe801000.csi-video-index0 ! video/x-raw,format=UYVY,framerate=20/1 ! videoconvert ! videoscale ! video/x-raw, width=1280,height=720 ! videocrop top=0 left=0 right=800 bottom=0 ! videoflip method=counterclockwise ! omxh264enc ! h264parse ! flvmux name=mux streamable=true ! rtmpsink sync=true async=true location='rtmp://XXXXX live=true'
I want to add audio from the microphone attached to the Raspberry Pi. For example, I can record microphone input to a WAV file with the pipeline below:
gst-launch-1.0 alsasrc num-buffers=1000 device="hw:1,0" ! audio/x-raw,format=S16LE ! wavenc ! filesink location=a.wav
My question is: how can I add audio to my existing command line that streams to the RTMP server? Also, when I capture audio to a file, there is a lot of noise. How can I avoid it?
Thank you
Update: I have combined audio and video, but I still have noise on the audio.
gst-launch-1.0 -e v4l2src device=/dev/v4l/by-path/platform-fe801000.csi-video-index0 ! video/x-raw,format=UYVY,framerate=20/1 ! videoconvert ! videoscale ! video/x-raw, width=1280,height=720 ! videocrop top=0 left=0 right=800 bottom=0 ! videoflip method=counterclockwise ! omxh264enc ! h264parse ! flvmux name=mux streamable=true ! rtmpsink sync=true async=true location='rtmp://XXXXXXXXXXXXXXXX' alsasrc device="hw:1,0" ! queue ! audioconvert ! audioresample ! audio/x-raw,rate=44100 ! queue ! voaacenc bitrate=128000 ! audio/mpeg ! aacparse ! audio/mpeg, mpegversion=4 ! mux.
I have partly resolved the noise with the following FFmpeg command (called from Python), but it is still not great:
"ffmpeg -ar 48000 -ac 1 -f alsa -i hw:1,0 -acodec aac -ab 128k -af 'highpass=f=200, lowpass=f=200' -f flv rtmp://XXXXX.XXXXXXX.XXXXX/LiveApp/"+ str(Id) + "-" + str(deviceId)+"-Audio"
I'm trying to make a server and client application that sends and receives a raw video stream using rtpbin. To send an uncompressed video stream, I'm using rtpgstpay and rtpgstdepay to payload the data.
The server application can successfully send the video stream with the following pipeline:
gst-launch-1.0 -vvv rtpbin name=rtpbin \
videotestsrc ! \
rtpgstpay ! application/x-rtp,media=application,payload=96,encoding-name=X-GST ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! udpsink port=5000 host=127.0.0.1 name=vrtpsink \
rtpbin.send_rtcp_src_0 ! udpsink port=5002 host=127.0.0.1 sync=false async=false name=vrtcpsink
The client pipeline looks like this:
gst-launch-1.0 -vvv rtpbin name=rtpbin \
udpsrc caps="application/x-rtp,payload=96,media=application,encoding-name=X-GST" port=5000 ! rtpbin.recv_rtp_sink_0 \
rtpbin. ! rtpgstdepay ! videoconvert ! autovideosink \
udpsrc port=5002 ! rtpbin.recv_rtcp_sink_0
rtpbin successfully creates a sink and links to the udpsrc, but no stream comes out of the RTP source pad.
The same pipeline without rtpbin successfully displays the stream:
gst-launch-1.0 -vvv \
udpsrc caps="application/x-rtp,payload=96,media=application,encoding-name=X-GST" port=5000 ! \
rtpgstdepay ! videoconvert ! autovideosink
What am I doing wrong that rtpbin doesn't want to output the stream?
I also tried replacing the rtp_source part of the client with a fakesink to see if it would output anything, but still nothing comes out of rtpbin.
I found the solution to my problem. In case anyone comes across the same problem, this is how to fix it:
First of all, rtpbin needs the clock-rate to be specified in the caps.
Second, when using rtpgst(de)pay, you need to specify the caps event string in the caps filter at the receiver. You can find it by printing the caps of the rtpgstpay element at the transmitter, e.g.:
application/x-rtp, media=(string)application, clock-rate=(int)90000, encoding-name=(string)X-GST, caps=(string)"dmlkZW8veC1yYXcsIGZvcm1hdD0oc3RyaW5nKUdSQVk4LCB3aWR0aD0oaW50KTY0MCwgaGVpZ2h0PShpbnQpNDYwLCBpbnRlcmxhY2UtbW9kZT0oc3RyaW5nKXByb2dyZXNzaXZlLCBwaXhlbC1hc3BlY3QtcmF0aW89KGZyYWN0aW9uKTEvMSwgY29sb3JpbWV0cnk9KHN0cmluZykxOjQ6MDowLCBmcmFtZXJhdGU9KGZyYWN0aW9uKTI1LzE\=", capsversion=(string)0, payload=(int)96, ssrc=(uint)2501988797, timestamp-offset=(uint)1970605309, seqnum-offset=(uint)2428, a-framerate=(string)25
So here the caps event string is
dmlkZW8veC1yYXcsIGZvcm1hdD0oc3RyaW5nKUdSQVk4LCB3aWR0aD0oaW50KTY0MCwgaGVpZ2h0PShpbnQpNDYwLCBpbnRlcmxhY2UtbW9kZT0oc3RyaW5nKXByb2dyZXNzaXZlLCBwaXhlbC1hc3BlY3QtcmF0aW89KGZyYWN0aW9uKTEvMSwgY29sb3JpbWV0cnk9KHN0cmluZykxOjQ6MDowLCBmcmFtZXJhdGU9KGZyYWN0aW9uKTI1LzE\=
When adding this to the caps at the receiver, you have to add a null terminator ( \0 ) at the end of the string.
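Putting both fixes together, the receiving pipeline becomes something like the sketch below, where dmlk... abbreviates the full base64 caps string printed at your transmitter, with the \0 terminator appended:
gst-launch-1.0 -vvv rtpbin name=rtpbin \
udpsrc port=5000 caps="application/x-rtp,media=(string)application,clock-rate=(int)90000,encoding-name=(string)X-GST,caps=(string)dmlk...\0" ! rtpbin.recv_rtp_sink_0 \
rtpbin. ! rtpgstdepay ! videoconvert ! autovideosink \
udpsrc port=5002 ! rtpbin.recv_rtcp_sink_0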
Here is what I'm trying:
gst-launch -v udpsrc port=1234 ! fakesink dump=1
I test with:
gst-launch -v audiotestsrc ! udpsink host=127.0.0.1 port=1234
And everything works fine; I can see the packets arriving from the audiotestsrc.
Now let's test with the webcam source:
gst-launch -v v4l2src device=/dev/video0 ! queue ! videoscale method=1 ! "video/x-raw-yuv,width=320,height=240" ! queue ! videorate ! "video/x-raw-yuv,framerate=(fraction)15/1" ! queue ! udpsink host=127.0.0.1 port=1234
And nothing happens; no packet appears in the dump.
Here is a logdump of what verbose shows in the server.
Does anyone have a clue on this?
Try these (you may have to install the gstreamer-ugly plugins for this):
UDP streaming from Webcam (stream over the network)
gst-launch v4l2src device=/dev/video0 ! 'video/x-raw-yuv,width=640,height=480' ! x264enc pass=qual quantizer=20 tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=1234
UDP Streaming received from webcam (receive over the network)
gst-launch udpsrc port=1234 ! "application/x-rtp, payload=127" ! rtph264depay ! ffdec_h264 ! xvimagesink sync=false
Update
To determine the payload at the streaming end, simply use the verbose option with gst-launch -v ...
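For example, running the sender above as
gst-launch -v v4l2src device=/dev/video0 ! 'video/x-raw-yuv,width=640,height=480' ! x264enc pass=qual quantizer=20 tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=1234
prints the negotiated application/x-rtp caps, and the payload=(int)... field in them is the number to put in the receiver's caps filter.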
Maybe the packets are too large for UDP? UDP datagrams are limited to 64 KB. Try resizing the frames to a really small size to check whether this is the reason. If so, you may be interested in compression and payloaders/depayloaders (gst-inspect | grep pay).
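As a quick sanity check of that limit: a raw 320x240 frame in I420 (a common v4l2src default) is 320 x 240 x 1.5 = 115,200 bytes, and in a 4:2:2 format like UYVY it is 320 x 240 x 2 = 153,600 bytes. Either way, a single frame already exceeds the 65,535-byte maximum UDP datagram, so the raw pipeline above cannot work without compression or an RTP payloader splitting the data.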
gstreamer1-1.16.0-1.fc30
gst-launch-1.0 -v filesrc location=/.../.../.../sample-mp4-file.mp4 ! qtdemux ! h264parse ! queue ! rtph264pay config-interval=10 pt=96 ! udpsink port=8888 host=127.0.0.1
https://en.wikipedia.org/wiki/RTP_audio_video_profile
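A matching receiver for that sender might look like this (a sketch assuming the payload type 96 set above and the usual H.264 RTP caps; avdec_h264 comes from gst-libav):
gst-launch-1.0 -v udpsrc port=8888 caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264,payload=(int)96" ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink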
I'm trying to create a GStreamer pipeline with rtpbin to stream a webcam both ways (a videophone). However, I am not even able to make rtpbin work with a simple snippet like the one below, which just takes the webcam source and streams it out, while another udpsrc captures the RTP packets and displays them. All on localhost. When split into two pipelines and launched separately, it works; like this, it does not. I feel it has something to do with threading, but I am stuck, as no queue placement has worked for me so far.
Basically, what I need is to display the incoming video stream and to stream my webcam out to the remote party.
gst-launch -v \
gstrtpbin name=rtpbin \
udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263" port=5000 ! rtpbin. \
rtpbin. ! rtph263depay ! ffdec_h263 ! ffmpegcolorspace ! xvimagesink \
v4l2src ! video/x-raw-yuv, framerate=30/1, width=320, height=240 ! videoscale ! videorate ! "video/x-raw-yuv,width=352,height=288,framerate=30/1" ! ffenc_h263 ! rtph263pay ! rtpbin. \
rtpbin. ! udpsink port=5000
OK, I have to answer myself: it was enough to add sync=false async=false to the udpsink (so the sink neither synchronizes buffers against the clock nor waits for preroll, which suits this live pipeline):
gst-launch -v \
gstrtpbin name=rtpbin udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263" port=5000 ! queue ! rtpbin. \
rtpbin. ! rtph263depay ! ffdec_h263 ! ffmpegcolorspace ! xvimagesink \
v4l2src ! video/x-raw-yuv, framerate=30/1, width=320, height=240 ! videoscale ! videorate ! "video/x-raw-yuv,width=352,height=288,framerate=30/1" ! ffenc_h263 ! rtph263pay ! rtpbin. \
rtpbin. ! udpsink port=5000 sync=false async=false