How to configure the GStreamer audio sink to use the virtual sink - gstreamer

I am trying to stream audio to a virtual audio sink that I created.
I created the virtual audio sink using
pactl load-module module-null-sink sink_name=virtsink sink_properties=device.description=Virtual_Sink
Now all I need to do is configure the GStreamer client to use the sink I created. I was able to get it working with:
gst-launch-1.0 ... ! autoaudiosink sync=false
and selecting Virtual_Sink as the Output Device via Settings > Sound > Output. This works because autoaudiosink follows the system's default output, which is what changes when I alter the Output device.
Is there a way to achieve the same thing without having to modify the system's Sound settings?
I tried this:
gst-launch-1.0 -v udpsrc port=<udp-port> \
caps="..." \
! rtpL24depay \
! audioconvert \
! audioresample \
! alsasink \
device=virtsink
But I am getting an error:
Playback open error on device 'virtsink': No such file or directory
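A likely fix (a minimal sketch, assuming a PulseAudio setup): alsasink only opens ALSA devices, and a null sink created with pactl is a PulseAudio sink, not an ALSA device, which is why alsasink reports 'No such file or directory'. Pointing pulsesink at the sink by name should route the audio there without touching the system Sound settings:
# untested sketch; pulsesink's device property takes the PulseAudio sink name
gst-launch-1.0 -v udpsrc port=<udp-port> \
caps="..." \
! rtpL24depay \
! audioconvert \
! audioresample \
! pulsesink device=virtsink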

Related

kurento can't receive rtp from gstreamer correctly

I installed Kurento Media Server and ran the Kurento Java tutorial (RTP receiver). Kurento offers a GStreamer pipeline:
PEER_V=23490 PEER_IP=10.0.176.127 SELF_V=5004 SELF_VSSRC=112233
bash -c 'gst-launch-1.0 -t \
rtpbin name=r \
v4l2src device=/dev/video0 ! videoconvert ! x264enc tune=zerolatency \
! rtph264pay ! "application/x-rtp,payload=(int)103,clock-rate=(int)90000,ssrc=(uint)$SELF_VSSRC" \
! r.send_rtp_sink_1 \
r.send_rtp_src_1 ! udpsink host=$PEER_IP port=$PEER_V bind-port=$SELF_V \
'
This is the pipeline I simplified from the official one, and it runs successfully.
But there is a problem when I implement this pipeline in C or C++ code: Kurento can't receive the RTP stream, although I can receive it with my own RTP receiver written in C++.
The Kurento Media Server logs these warnings:
[screenshot of Kurento Media Server log warnings]
It looks like Kurento is treating the stream as audio rather than video, even though I never send an audio stream.
So I want to know how to change the C code so that my video stream reaches Kurento. My code: [link]
After a few days of struggling, I figured out this problem today.
PEER_V=23490 PEER_IP=10.0.176.127 SELF_V=5004 SELF_VSSRC=112233
bash -c 'gst-launch-1.0 -t \
rtpbin name=r \
v4l2src device=/dev/video0 ! videoconvert ! x264enc tune=zerolatency \
! rtph264pay ! "application/x-rtp,payload=(int)103,clock-rate=(int)90000,ssrc=(uint)$SELF_VSSRC" \
! r.send_rtp_sink_1 \
r.send_rtp_src_1 ! udpsink host=$PEER_IP port=$PEER_V bind-port=$SELF_V \
'
With this pipeline, if you change the payload to 96, Kurento Media Server reports the same warning as in the screenshot in the question.
So I suspected my payload-type setting was the error.
I then added a pad probe to inspect the pad's caps, and indeed that was it: the caps I set downstream were not taking effect.
So I set the "pt" property on rtph264pay directly, and it runs successfully (see the sketch below).
The code: [link]
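For reference, a minimal sketch of that fix (same variables as above; I haven't run it): setting pt on the payloader itself instead of relying on a downstream capsfilter, so the outgoing RTP packets actually carry payload type 103:
# untested sketch; pt set directly on the payloader rather than via caps
gst-launch-1.0 -t rtpbin name=r \
v4l2src device=/dev/video0 ! videoconvert ! x264enc tune=zerolatency \
! rtph264pay pt=103 ! r.send_rtp_sink_1 \
r.send_rtp_src_1 ! udpsink host=$PEER_IP port=$PEER_V bind-port=$SELF_V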

Extract h264 stream from USB webcam (logitech C920)

So, I'm starting to play around with GStreamer and I'm able to run very simple pipelines such as
gst-launch-1.0 -v v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=640,height=480,framerate=10/1 ! videoconvert ! autovideosink
Now, as my USB webcam (which is video1, video0 being the computer's built-in camera) supports H.264 (I have checked using lsusb), I would like to try to get the H.264 feed directly. I understand that this feed is muxed into the MJPEG one, but looking around on the web it seems that GStreamer is able to get it nonetheless.
Since my end goal is to stream it from a BeagleBone, I made an attempt using the solution given in this post (adding a listener from a different terminal):
#sender
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-264,width=320,height=90,framerate=10/1 ! tcpserversink host=192.168.148.112 port=9999
But this yields the following error:
WARNING: erroneous pipeline: could not link v4l2src0 to tcpserversink0
I also tried something similar to my first command, changing the source caps from raw to H.264 (based on that post; trying the full command given there produces the same error message):
gst-launch-1.0 -v v4l2src device=/dev/video1 ! video/x-h264,width=640,height=480,framerate=10/1 ! h264parse ! avdec_h264 ! autovideosink
But again, this did not work either:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason not-negotiated (-4)
Execution ended after 0:00:00.036309961
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
I admit this is driving me pretty crazy: looking on SO or elsewhere on the web, there seem to be plenty of people who made it work with exactly the same webcam as mine (a Logitech C920), but I keep running into one issue after another.
What would be an example of a correct pipeline to extract the H.264 stream from that webcam?
You definitely need to use a payloader before it hits the wire, for example rtph264pay. Here is an example that I cannot test, as I don't have your hardware available. I have working UDP examples from alternate sources if this doesn't steer you in the right direction.
server
gst-launch v4l2src device=/dev/video1 \
! video/x-raw-yuv,width=320,height=90,framerate=10/1 \
! ffmpegcolorspace \
! x264enc \
! queue \
! rtph264pay config-interval=3 pt=96 mtu=1500 \
! queue \
! tcpserversink host=127.0.0.1 port=9002
client
gst-launch tcpclientsrc host=127.0.0.1 port=9002 \
! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 \
! rtph264depay \
! video/x-h264 \
! queue \
! ffdec_h264 \
! queue \
! xvimagesink
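As a side note on the failing second attempt in the question: not-negotiated usually means the exact caps combination (format, size, framerate) isn't one the device actually offers, so it is worth copying a mode verbatim from the output of v4l2-ctl --list-formats-ext. A sketch assuming the C920's common 30 fps H.264 mode (I can't verify it without the hardware):
# untested sketch; width/height/framerate must match a mode the camera reports
gst-launch-1.0 -v v4l2src device=/dev/video1 \
! video/x-h264,width=640,height=480,framerate=30/1 \
! h264parse ! avdec_h264 ! autovideosink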

Record & live preview V4L2 /dev/video0 to H264 file with GStreamer

How can I record only video from a V4L2 input device and encode it to a file using H.264, while seeing a live preview of the input at the same time?
Using GStreamer 0.10.36
Command gst-launch-1.0
Using v4l-utils 1.6.3-3
Command v4l2-ctl
Determine available resolutions and formats:
v4l2-ctl -d /dev/video0 --list-formats-ext
Preview, record & encode at the same time:
"format", "width", "height" and "framerate" need to be filled in.
"keyframe_period" specifies how often a keyframe appears in the video, which is used for reconstruction of a video frame and (to my understanding) seeking.
"min-qp" specifies compression quality where lower means better quality.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
video/x-raw,format=YV12,width=960,height=544,framerate=30/1 ! \
tee name=t ! \
queue ! \
autovideosink sync=false t. ! \
videorate ! \
queue ! \
vaapiencode_h264 keyframe_period=5 tune=high-compression min-qp=50 ! \
queue ! \
mpegtsmux ! \
filesink location=FIRST.mp4
(The resulting FIRST.mp4 cannot be seeked; players complain about invalid timestamps. Note that mpegtsmux writes an MPEG transport stream, not an MP4 container, so the .mp4 name is misleading and the stream carries no index for seeking; see also the mp4mux alternative sketched below.)
Remuxing into a real MP4 container without re-encoding produces a seekable file:
ffmpeg -i FIRST.mp4 -c:v copy SECOND.mp4
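An alternative sketch that writes a real MP4 container directly and so avoids the remux step (untested; it assumes the same VAAPI encoder; the -e flag makes gst-launch-1.0 send EOS on Ctrl+C so mp4mux can finalize the file):
# untested sketch; mp4mux needs a clean EOS (-e) to write the seek index
gst-launch-1.0 -e v4l2src device=/dev/video0 ! \
video/x-raw,format=YV12,width=960,height=544,framerate=30/1 ! \
tee name=t ! \
queue ! \
autovideosink sync=false t. ! \
videorate ! \
queue ! \
vaapiencode_h264 keyframe_period=5 tune=high-compression min-qp=50 ! \
h264parse ! \
mp4mux ! \
filesink location=FIRST.mp4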

How to stream a file.avi over network using gstreamer

I'm trying to use GStreamer to send a sample .avi file over the network. The command that I'm using to build my pipeline is the following:
gst-launch-1.0 -v rtpbin name=rtpbin latency=200 \
filesrc location=/home/enry/drop.avi ! decodebin name=dec \
dec. ! queue ! x264enc byte-stream=false bitrate=300 ! rtph264pay ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! udpsink port=5000 host=127.0.0.1 ts-offset=0 name=vrtpsink \
rtpbin.send_rtcp_src_0 ! udpsink port=5001 host=127.0.0.1 sync=false async=false \
name=vrtcpsink udpsrc port=5005 name=vrtpsrc ! rtpbin.recv_rtcp_sink_0
When I try to execute this command I'm getting this error:
gstavidemux.c(5383): gst_avi_demux_loop ():
/GstPipeline:pipeline0/GstDecodeBin:dec/GstAviDemux:avidemux0:
streaming stopped, reason not-linked
Execution ended after 0:00:00.032906515
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Can you help me to solve this?
I think you need a demux element before the decodebin, since an AVI file contains both an audio and a video stream, but you are using just one decodebin to decode them.
Take a look at the avidemux element here. You need it to get the video stream out of the file; a sketch follows below.
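A minimal sketch of that suggestion (untested; it assumes the file's first video stream comes out of avidemux on the pad named video_0):
# untested sketch; demux.video_0 assumes the first video stream in the AVI
gst-launch-1.0 -v rtpbin name=rtpbin latency=200 \
filesrc location=/home/enry/drop.avi ! avidemux name=demux \
demux.video_0 ! queue ! decodebin ! videoconvert \
! x264enc byte-stream=false bitrate=300 ! rtph264pay ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! udpsink port=5000 host=127.0.0.1 ts-offset=0 name=vrtpsink \
rtpbin.send_rtcp_src_0 ! udpsink port=5001 host=127.0.0.1 sync=false async=false \
name=vrtcpsink udpsrc port=5005 name=vrtpsrc ! rtpbin.recv_rtcp_sink_0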

How can I create a live stream using GStreamer?

I would like to stream my webcam. I tried with VLC, but I'm getting a 10-15 s delay between the server and the client on the same network:
vlc v4l2:// :v4l2-dev=/dev/video0 :v4l2-width=640 :v4l2-height=480 --sout="#transcode{vcodec=h264,vb=800,scale=1,acodec=mp4a,ab=128,channels=2,samplerate=44100}:rtp{sdp=rtsp://:8554/live.ts}" -I dummy
Now I would like to try GStreamer, but I couldn't find any examples. How can I set up a live webcam stream (RTSP or HTTP) using GStreamer?
To create a YouTube live event, one needs an RTMP stream containing H.264 (x264) video and AAC audio:
gst-launch -v videotestsrc \
! video/x-raw-yuv,width=640,height=480,framerate=30/1 \
! x264enc key-int-max=60 \
! h264parse \
! flvmux name=mux \
audiotestsrc ! queue ! audioconvert ! ffenc_aac ! aacparse ! mux. \
mux. ! rtmpsink location="rtmp://<stream-server-url>/"
Key frames in a live feed must appear at least every 2 seconds, hence key-int-max set to twice the framerate (60 for 30 fps).
Note that RTMP works over TCP, so on a bad channel it will suffer significant delays.
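To adapt this to the webcam in the question, a sketch using the same 0.10-era elements (untested; the stream URL is a placeholder) would swap the test sources for v4l2src and a real audio capture source:
# untested 0.10 sketch; replace <stream-server-url> with your RTMP endpoint
gst-launch -v v4l2src device=/dev/video0 \
! video/x-raw-yuv,width=640,height=480,framerate=30/1 \
! ffmpegcolorspace \
! x264enc key-int-max=60 \
! h264parse \
! flvmux name=mux \
pulsesrc ! queue ! audioconvert ! ffenc_aac ! aacparse ! mux. \
mux. ! rtmpsink location="rtmp://<stream-server-url>/"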
Take a look at the rtsp-server examples in
http://cgit.freedesktop.org/gstreamer/gst-rtsp-server/tree/examples