While streaming RTP packets I want to change the frequency to 48 kHz; by default the maximum frequency is 44.1 kHz. Is there any API to do this directly in PulseAudio?
You can set many parameters in GStreamer; one of them is the clock-rate, which in this case is the same as your frequency:
gst-launch-0.10 -v udpsrc port=5000 ! "application/x-rtp,media=(string)audio,
clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16,
encoding-params=(string)1, channels=(int)1, channel-positions=(int)1,
payload=(int)96" ! rtpL16depay ! audioconvert ! alsasink sync=false
You can change the integer after clock-rate to any value you desire.
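For example, a quick sketch of the same receiver with the caps declared at 48 kHz (this only makes sense if the sender really streams 48 kHz L16 audio; the caps describe the incoming stream, they do not resample it, which is why an audioresample is added before the sink):
gst-launch-0.10 -v udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)48000, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! rtpL16depay ! audioconvert ! audioresample ! alsasink sync=false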
Related
I have googled all over but couldn't find a solution to my problem. I will be happy if anyone had a similar need and resolved it somehow.
I stream to an RTMP server with the following command. It captures video from an HDMI encoder, then crops and rotates it.
gst-launch-1.0 -e v4l2src device=/dev/v4l/by-path/platform-fe801000.csi-video-index0 ! video/x-raw,format=UYVY,framerate=20/1 ! videoconvert ! videoscale ! video/x-raw,width=1280,height=720 ! videocrop top=0 left=0 right=800 bottom=0 ! videoflip method=counterclockwise ! omxh264enc ! h264parse ! flvmux name=mux streamable=true ! rtmpsink sync=true async=true location='rtmp://XXXXX live=true'
and I want to add audio from the microphone attached to the Raspberry Pi. For example, I can record the microphone input to a WAV file with the pipeline below.
gst-launch-1.0 alsasrc num-buffers=1000 device="hw:1,0" ! audio/x-raw,format=S16LE ! wavenc ! filesink location=a.wav
My question is: how can I add audio to my existing command line, which streams to the RTMP server? Also, when I capture audio to a file there is a lot of noise. How can I avoid it?
Thank you
I have combined audio and video, but I still have noise on the audio.
gst-launch-1.0 -e v4l2src device=/dev/v4l/by-path/platform-fe801000.csi-video-index0 ! video/x-raw,format=UYVY,framerate=20/1 ! videoconvert ! videoscale ! video/x-raw,width=1280,height=720 ! videocrop top=0 left=0 right=800 bottom=0 ! videoflip method=counterclockwise ! omxh264enc ! h264parse ! flvmux name=mux streamable=true ! rtmpsink sync=true async=true location='rtmp://XXXXXXXXXXXXXXXX' alsasrc device="hw:1,0" ! queue ! audioconvert ! audioresample ! audio/x-raw,rate=44100 ! queue ! voaacenc bitrate=128000 ! audio/mpeg ! aacparse ! audio/mpeg,mpegversion=4 ! mux.
I have more or less resolved the noise with the following command, but it is still not great.
"ffmpeg -ar 48000 -ac 1 -f alsa -i hw:1,0 -acodec aac -ab 128k -af 'highpass=f=200, lowpass=f=200' -f flv rtmp://XXXXX.XXXXXXX.XXXXX/LiveApp/"+ str(Id) + "-" + str(deviceId)+"-Audio"
Let's say I've got a simple RTP video server:
ximagesrc ! video/x-raw,framerate=30/1 ! videoscale ! videoconvert ! x264enc tune=zerolatency bitrate=500 speed-preset=slow ! rtph264pay ! udpsink host=127.0.0.1 port=5000
And I'm receiving it just fine:
udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! autovideosink
But if my server were sending H.265 (or any other format), I'd have to use another pipeline. Considering there are a ton of possible formats, that is definitely something I'd like to avoid. Is there any way to get my video decoded regardless of format?
No. RTP streams are supposed to be set up via SDP and control protocols such as RTSP, which tell you what kind of stream you are dealing with. If you want self-describing media, perhaps look at MPEG-TS instead.
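As a rough sketch of that suggestion (not a drop-in replacement for the pipelines above; the element choices are my assumptions): mux into MPEG-TS on the sender, and let tsdemux plus decodebin figure out the codec on the receiver, so the receiving pipeline no longer depends on the encoding:
sender: gst-launch-1.0 ximagesrc ! video/x-raw,framerate=30/1 ! videoscale ! videoconvert ! x264enc tune=zerolatency bitrate=500 speed-preset=slow ! h264parse ! mpegtsmux ! udpsink host=127.0.0.1 port=5000
receiver: gst-launch-1.0 udpsrc port=5000 caps="video/mpegts, systemstream=(boolean)true" ! tsdemux ! decodebin ! videoconvert ! autovideosink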
The video I am transferring is already encoded. Why should I encode it again when transferring?
example: gst-launch-1.0 -v filesrc location=123.mp4 ! decodebin ! x264enc ! rtph264pay ! udpsink host=192.168.10.186 port=9001
Can I just send the video without re-encoding and view it on the other side?
For example:
server: gst-launch-1.0 -v filesrc location=123.mp4 ! udpsink host=192.168.10.186 port=9001
(123.mp4 is encoded with H.265)
client: gst-launch-1.0 udpsrc port=9001 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H265, payload=(int)96" ! rtph265depay ! h265parse ! nvh265dec ! autovideosink
Best regards
Okay, with the clarification that your input is assumed to be an MP4 file containing an H.265 stream: yes, then it's possible (if this assumption doesn't hold, this will not work).
The following should do the trick:
gst-launch-1.0 filesrc location=123.mp4 ! qtdemux ! h265parse config-interval=-1 ! rtph265pay ! udpsink host=192.168.10.186 port=9001
Explanation:
the qtdemux will demux the MP4 container into the contained video/audio/subtitle streams (if there's more than one stream inside that container, you'll need to link to it multiple times, or GStreamer will error out)
the h265parse config-interval=-1 will make sure that your stream contains the correct SPS/PPS parameter sets. If the stream inside your original input file is not an H.265 video stream, this will fail to link.
the rtph265pay will translate this into RTP packets.
... which the udpsink can then send over the specified socket
P.S.: you might also be interested in the rtpsink (which used to live out-of-tree, but is now included in the latest master of GStreamer)
P.P.S.: you should use an even port to send an RTP stream
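For completeness, a hedged sketch of the same pipeline with rtpsink instead of udpsink; this assumes your GStreamer build ships the element, the rtp:// URI usage is my assumption of its interface, and the port is simply bumped to an even number per the P.P.S.:
gst-launch-1.0 filesrc location=123.mp4 ! qtdemux ! h265parse config-interval=-1 ! rtph265pay ! rtpsink uri=rtp://192.168.10.186:9002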
What I want to do is create an m3u8 file from an ALSA sound card input.
Like:
arecord hw:1,0 -d 10 test.wav | gst-launch-1.0 ....
I tried this for testing:
gst-launch-1.0 audiotestsrc ! audioconvert ! audioresample ! hlssink
but it doesn't work.
Thank you for helping.
You can't directly create HLS transport segments (.ts) from a raw audio source. You need to encode it with some encoder and then mux it before sending it to the hlssink plugin.
One of the problems you'll encounter is that the hlssink plugin won't split the segments when there is only an audio stream, so you are going to need something like keyunitsscheduler to split the stream correctly and create the files.
An example pipeline using voaacenc to encode the audio and mpegtsmux to mux it would be as follows:
gst-launch-1.0 audiotestsrc is-live=true ! audioconvert ! voaacenc bitrate=128000 ! aacparse ! audio/mpeg ! queue ! mpegtsmux ! keyunitsscheduler interval=5000000000 ! hlssink playlist-length=5 max-files=10 target-duration=5 playlist-root="http://localhost/hls/" playlist-location="/var/www/html/hls/stream0.m3u8" location="/var/www/html/hls/fragment%05d.ts"
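To get from the test tone to your actual ALSA input, audiotestsrc can presumably be swapped for alsasrc; the device string below is just the "hw:1,0" from your arecord attempt, and the playlist/segment paths simply mirror the example above:
gst-launch-1.0 alsasrc device="hw:1,0" ! audioconvert ! audioresample ! voaacenc bitrate=128000 ! aacparse ! audio/mpeg ! queue ! mpegtsmux ! keyunitsscheduler interval=5000000000 ! hlssink playlist-length=5 max-files=10 target-duration=5 playlist-root="http://localhost/hls/" playlist-location="/var/www/html/hls/stream0.m3u8" location="/var/www/html/hls/fragment%05d.ts"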
I have two GStreamer instances: a sender and a receiver. I want to stream RTP/VP8 video. It works perfectly fine if I stream via UDP, like this:
sender
gst-launch-0.10 -v videotestsrc ! vp8enc ! rtpvp8pay ! udpsink host=127.0.0.1 port=9001
receiver
gst-launch-0.10 udpsrc port=9001 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)VP8-DRAFT-IETF-01, payload=(int)96" ! rtpvp8depay ! vp8dec ! ffmpegcolorspace ! autovideosink
That works fine. But when I try to stream through a FIFO / named pipe (created with mkfifo()) like this:
sender
gst-launch-0.10 -v videotestsrc ! vp8enc ! rtpvp8pay ! filesink location=myPipe
receiver
gst-launch-0.10 filesrc location=myPipe ! capsfilter caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)VP8-DRAFT-IETF-01, payload=(int)96" ! rtpvp8depay ! vp8dec ! ffmpegcolorspace ! autovideosink
It fails and my receiver continuously outputs:
WARNING: from element /GstPipeline:pipeline0/GstRtpVP8Depay:rtpvp8depay0: Could not decode stream.
Additional debug info:
gstbasertpdepayload.c(387): gst_base_rtp_depayload_chain (): /GstPipeline:pipeline0/GstRtpVP8Depay:rtpvp8depay0:
Received invalid RTP payload, dropping
I think I read somewhere (but can't find it again) that this is because, when using UDP, the RTP packets are delimited properly, whereas when written to a named pipe like this they are "chained" together (not properly separated), so GStreamer doesn't know how many bytes to read to get one RTP packet.
Is this correct, and if so, how can I change that?
Thanks in advance!
When going through a named pipe, the RTP packets are not framed properly. You could either:
Send the encoded stream directly through as a byte-stream, without using the rtpvp8pay element.
Use another RTP element in GStreamer that handles a byte-stream format, such as rtpstreampay or rtpgdppay (I believe rtpstreampay might be a GStreamer 1.0 element though); see the sketch below.
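For reference, a minimal GStreamer 1.0 sketch of the second option using rtpstreampay/rtpstreamdepay over the same FIFO (untested; note the caps change to application/x-rtp-stream and to the 1.0 VP8 encoding name):
sender: gst-launch-1.0 -v videotestsrc ! vp8enc ! rtpvp8pay ! rtpstreampay ! filesink location=myPipe
receiver: gst-launch-1.0 filesrc location=myPipe ! "application/x-rtp-stream, media=(string)video, clock-rate=(int)90000, encoding-name=(string)VP8, payload=(int)96" ! rtpstreamdepay ! rtpvp8depay ! vp8dec ! videoconvert ! autovideosink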
I finally solved my problem.
I did not manage to get the byte-stream approach working through a pipe, but I did manage to use an appsrc to feed the GStreamer pipeline.
So my whole pipeline (might be useful for other people) looks like this: appsrc -> rtpvp8depay -> vp8dec -> videoconvert -> videoscale -> appsink (I'm using GStreamer 1.0 on Arch Linux).
Hope this helps!