rtpvp8depay + rtpvp8pay appear to introduce artifacts on Janus Gateway - gstreamer

A VP8 stream comes from Janus Videoroom plugin with restreaming to 10002/10004 locally. From there, it's picked up with the following gstreamer pipeline:
gst-launch-1.0 -v udpsrc \
caps="application/x-rtp,media=(string)video,encoding-name=(string)VP8,payload=100" \
address=127.0.0.1 port=10004 ! \
rtpvp8depay ! rtpvp8pay ! \
udpsink host=127.0.0.1 port=5004
and sent to the Streaming plugin. As you can see, there is no transcoding here, just depayloading and payloading. The resulting video breaks down into artifacts on some keyframes, roughly once every 10 keyframes, and is only fixed on the next keyframe.
If I remove the depay and pay elements and simply forward at the RTP level, like this:
gst-launch-1.0 -v udpsrc \
caps="application/x-rtp,media=(string)video,encoding-name=(string)VP8,payload=100" \
address=127.0.0.1 port=10004 ! \
udpsink host=127.0.0.1 port=5004
it never happens.
I understand this is not a Janus issue but a GStreamer issue, but maybe someone has an idea what the problem could be? This has been tested very reliably: the problem is easy to reproduce in the former case and never happens in the latter.
Of course, the goal of what I am doing is transcoding, and there was a lot more in the setup and the pipeline before I boiled it down to this level. Reproduced on Janus installed on a fresh Ubuntu 18.04 machine with all out-of-the-box settings.
Update:
export GST_DEBUG="rtp*:4";
revealed this message, which is printed each time the artifacts appear:
rtpbasedepayload gstrtpbasedepayload.c:473:gst_rtp_base_depayload_handle_buffer:
<rtpvp8depay0> 12 <= 100, dropping old packet
with the first number ("12" here) fluctuating, typically between 5 and 12.

This was the fix:
rtpjitterbuffer latency=50 !
before rtpvp8depay.
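In full, the working pipeline is the one above with the jitter buffer inserted before the depayloader:
gst-launch-1.0 -v udpsrc \
caps="application/x-rtp,media=(string)video,encoding-name=(string)VP8,payload=100" \
address=127.0.0.1 port=10004 ! \
rtpjitterbuffer latency=50 ! \
rtpvp8depay ! rtpvp8pay ! \
udpsink host=127.0.0.1 port=5004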
Logically, the order of the packets at that point is the same as the order in which they arrived over the internet between the sending browser and Janus. If we don't depay+pay, they travel the same way to the receiving browser connected to the Streaming plugin, which has its own jitter buffer and can therefore fix the order; but if we do depay+pay here, there is no buffer, so out-of-order packets are dropped, resulting in broken frames.
And yes, I brought back transcoding, the rest of my pipeline and all the other bells and whistles that were around, and it still works fine.

Related

Using GStreamer, I can't find a solution to send AV1 video through udpsink in RTP packets

I'm currently working with GStreamer and my goal is to take video from a camera (natively encoded in h264), decode it, then encode it in AV1 and send it over UDP to another computer on the network.
My pipelines currently are:
Server :
gst-launch-1.0 -v rtspsrc location=rtsp://192.168.33.104:8554/vis.0 latency=1 is-live=TRUE ! decodebin ! autovideoconvert ! x265enc tune=zerolatency bitrate=300 speed-preset=3 ! rtph265pay ! udpsink host=192.168.33.39 port=8123
Client :
gst-launch-1.0 udpsrc address=192.168.33.39 port=8123 ! application/x-rtp,media=video,clock-rate=90000,encoding-name=H265,payload=96 ! rtph265depay ! avdec_h265 ! autovideosink
So with h265 it works, but I cannot figure out how to do it with AV1 because I can't find an rtpav1pay (and depay).
Thanks in advance.
I searched for rtpav1pay but found nothing. I tried rtpgstpay (and depay), but it didn't work. The main goal is to use as little network bandwidth as possible without lag, so maybe this isn't even the best approach; if you have any other idea, please share it.
There are rtpav1pay and rtpav1depay plugins provided by gst-plugins-rs; they can be built along with GStreamer if you enable the Rust plugins option, but you could also build them separately from their own repo (instructions are in the README).
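For reference, a rough, untested sketch of how the asker's two pipelines might look once those elements are installed (av1enc and av1parse come from gst-plugins-bad, dav1ddec from gst-plugins-rs; the exact caps, in particular encoding-name=AV1, are an assumption, so check gst-inspect-1.0 on your build):
Server:
gst-launch-1.0 -v rtspsrc location=rtsp://192.168.33.104:8554/vis.0 latency=1 ! decodebin ! videoconvert ! av1enc ! av1parse ! rtpav1pay ! udpsink host=192.168.33.39 port=8123
Client:
gst-launch-1.0 udpsrc address=192.168.33.39 port=8123 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=AV1,payload=96" ! rtpav1depay ! av1parse ! dav1ddec ! videoconvert ! autovideosink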

h264 streaming to a website using GStreamer-1.0

I am very much new to the whole GStreamer thing, so I would be happy if you could help me.
I need to stream a near-zero-latency video signal from a webcam to a server and then be able to view the stream on a website.
The webcam is connected to a Raspberry Pi 3, because there are space constraints on the mounting platform. As a result of using the Pi I really can't transcode the video on the Pi itself. Therefore I bought a Logitech C920 webcam, which is able to output a raw h264 stream.
So far I have managed to view the stream on my Windows machine, but didn't manage to get the whole website part working.
My "achievements":
Sender:
gst-launch-1.0 -e -v v4l2src device=/dev/video0 ! video/x-h264,width=1920,height=1080,framerate=30/1 ! rtph264pay pt=96 config-interval=5 mtu=60000 ! udpsink host=192.168.0.132 port=5000
My understanding of this command is: get the signal from video device 0, which is an h264 stream with a certain width, height and framerate; then pack it into RTP packets with a high enough MTU to avoid artifacts, encapsulate the RTP packets in UDP packets and stream them to an IP address and port.
Receiver:
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false text-overlay=false
My understanding of this command is: receive UDP packets on port 5000. The application/x-rtp caps say that RTP packets are inside. I don't know exactly what rtpjitterbuffer does, but it reduces the latency of the video a bit.
rtph264depay says that inside the RTP packets there is an h264-encoded stream. To get the raw data that fpsdisplaysink understands, we need to decode the h264 signal with avdec_h264.
My next step was to change the receiver sink to a local TCP sink and output that signal with the following HTML5 tag:
<video width=320 height=240 autoplay>
<source src="http://localhost:#port#">
</video>
If I view the website I can't see the stream, but when I analyse the data I can see the video data arriving as plain text.
Am I missing a video container like MP4 for my video? (see the sketch after this question)
Am I wrong with decoding?
What am I doing wrong?
How can I improve my solution?
How would you solve that problem?
Best regards
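On the container question above: a raw h264 byte stream pushed over a plain TCP socket is not something the HTML5 video tag can play directly. One common approach is to segment the stream on the receiving server and serve it over HTTP, for example with hlssink from gst-plugins-bad. A rough, untested sketch (the file paths and playlist URL are placeholders, and note that HLS adds a few seconds of latency, trading away the near-zero-latency goal for browser compatibility):
gst-launch-1.0 -e udpsrc port=5000 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! rtpjitterbuffer ! rtph264depay ! h264parse ! mpegtsmux ! hlssink playlist-location=/var/www/html/live.m3u8 location=/var/www/html/segment%05d.ts target-duration=2
The website would then reference the playlist instead of a raw TCP port:
<video width=320 height=240 autoplay>
<source src="http://localhost/live.m3u8">
</video>
Safari plays HLS natively; other browsers need a small JavaScript player such as hls.js.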

x264 enc v4l2 /dev/video2 stream

I have this working, but have been unable to get video from my Magewell to integrate and could use help with the correct pipeline.
gst-launch-1.0 videotestsrc ! video/x-raw,width=848,height=480,framerate=25/1 ! x264enc bitrate=700 ! video/x-h264,width=848,height=480,framerate=25/1,stream-format=byte-stream,profile=baseline ! tee name=t \
t. ! queue ! tcpclientsink host=172.18.0.3 port=8000 \
t. ! queue ! tcpclientsink host=172.18.0.4 port=8000
I do not see the receiver-side pipeline in the question. It is needed to verify that there are no issues on the receiving end. Based on your current pipeline I have the following suggestions:
You don't need to set the caps again after x264enc, because its output is already of type video/x-h264. What you do need is to add h264parse after x264enc. You should also add h264parse before passing the data to the decoder on the receiver side.
The bitrate set for x264enc is also very low. The units are kbit/s, and for video this may be too little. It's best to leave it at the default if you do not have strict resource constraints; otherwise try a higher value.
Also, is there any reason why you are using TCP? UDP might be a better fit for video, provided occasional packet loss is not an issue.
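Putting these suggestions together, the sending pipeline would look roughly like this (an untested sketch that keeps the hosts and ports from the question, drops the caps filter after the encoder, leaves the bitrate at its default and adds h264parse):
gst-launch-1.0 videotestsrc ! video/x-raw,width=848,height=480,framerate=25/1 ! x264enc ! h264parse ! tee name=t \
t. ! queue ! tcpclientsink host=172.18.0.3 port=8000 \
t. ! queue ! tcpclientsink host=172.18.0.4 port=8000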

Extract h264 stream from USB webcam (logitech C920)

So, I'm starting to play around with gstreamer and I'm able to do very simple pipes such as
gst-launch-1.0 -v v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=640,height=480,framerate=10/1 ! videoconvert ! autovideosink
Now, as my USB webcam (which is video1, video0 being the computer's built-in camera) supports h264 (I have checked using lsusb), I would like to try to get the h264 feed directly. I understand that this feed is muxed into the mjpeg one, but looking around on the web it seems that GStreamer is able to get it nonetheless.
Since my end goal is to stream it from a Beaglebone, I made an attempt using the solution given in this post (adding a listener from a different terminal):
#sender
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-264,width=320,height=90,framerate=10/1 ! tcpserversink host=192.168.148.112 port=9999
But this yields the following error :
WARNING: erroneous pipeline: could not link v4l2src0 to tcpserversink0
I also tried something similar to my first command, changing the source from raw to h264 (based on that post , trying the full command given there gives the same error message)
gst-launch-1.0 -v v4l2src device=/dev/video1 ! video/x-h264,width=640,height=480,framerate=10/1 ! h264parse ! avdec_h264 ! autovideosink
But again, this did not work either:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason not-negotiated (-4)
Execution ended after 0:00:00.036309961
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
I admit this is driving me pretty crazy: looking on SO or elsewhere on the web, there seem to be a lot of people who have made it work with exactly the same webcam as mine (Logitech C920), but I keep running into one issue after another.
What would be an example of correct pipe to extract the h264 from that webcam?
You definitely need to use a payloader before the stream hits the wire, for example rtph264pay. Here is an example that I cannot test, as I don't have your hardware available. I have working UDP examples from alternate sources if this doesn't steer you in the right direction.
server
gst-launch v4l2src device=/dev/video1 \
! video/x-raw-yuv,width=320,height=90,framerate=10/1 \
! x264enc \
! queue \
! rtph264pay config-interval=3 pt=96 mtu=1500 \
! queue \
! tcpserversink host=127.0.0.1 port=9002
client
gst-launch tcpclientsrc host=127.0.0.1 port=9002 \
! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 \
! rtph264depay \
! video/x-h264 \
! queue \
! ffdec_h264 \
! queue \
! xvimagesink
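For reference, the same idea over UDP/RTP in GStreamer 1.0 syntax (an untested sketch along the same lines; the resolution, host and port are placeholders, and x264enc re-encodes rather than using the camera's built-in h264 encoder):
server
gst-launch-1.0 -v v4l2src device=/dev/video1 ! video/x-raw,width=640,height=480,framerate=10/1 ! videoconvert ! x264enc tune=zerolatency ! rtph264pay config-interval=3 pt=96 ! udpsink host=127.0.0.1 port=9002
client
gst-launch-1.0 udpsrc port=9002 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! autovideosink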

Gstreamer: Pausing/resuming video in RTP streams

I'm constructing a GStreamer pipeline that receives two RTP streams from a networked source:
ILBC Audio stream + corresponding RTCP stream
H263 Video stream + corresponding RTCP stream
Everything is put into one gstreamer pipeline so it will use the RTCP from both streams to synchronize audio/video. So far I've come up with this (using gst-launch for prototyping):
gst-launch -vvv gstrtpbin name=rtpbin
udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-2000" port=40000 ! rtpbin.recv_rtp_sink_0
rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink
udpsrc port=40001 ! rtpbin.recv_rtcp_sink_0
rtpbin.send_rtcp_src_0 ! udpsink port=40002 sync=false async=false
udpsrc caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMU,encoding-params=(string)1,octet-align=(string)1" port=60000 ! rtpbin.recv_rtp_sink_1
rtpbin. ! rtppcmudepay ! autoaudiosink
udpsrc port=60001 ! rtpbin.recv_rtcp_sink_1
rtpbin.send_rtcp_src_1 ! udpsink port=60002 sync=false async=false
This pipeline works well if the networked source starts out sending both video and audio. If the video stream is paused later on, GStreamer will still play back the audio and will even start playing the video again when the networked source resumes the video stream.
My problem, however, is that if the networked source starts out with only an audio stream (video might be added later on), the pipeline seems to pause/freeze until the video stream starts as well.
Since video is optional in my application (and can be added/removed at will by the user), is there any way I can hook up, for instance, a 'videotestsrc' that provides some kind of fallback video data to keep the pipeline running when there is no networked video data?
I've tried experimenting with 'videotestsrc' and a thing called 'videomixer' but I think that mixer still requires both streams to be alive. Any feedback is greatly appreciated!
Here is a simple function for pause/resume by swapping bins. In the following example I provide the logic to change the destination bin on the fly, dynamically. This does not completely stop the pipeline, which I believe is what you are after. Similar logic could be used for source bins: there you could remove your network source bin and the related decoder/demux bins and add videotestsrc bins instead.
private static void dynamic_bin_replacement(Pipeline pipe, Element src_bin, Element dst_bin_new, Element dst_bin_old) {
    pipe.pause();                        // pause the running pipeline
    src_bin.unlink(dst_bin_old);         // detach the old destination bin from the source bin
    pipe.remove(dst_bin_old);            // remove the old bin from the pipeline
    pipe.add(dst_bin_new);               // add the new destination bin
    dst_bin_new.syncStateWithParent();   // bring the new bin to the pipeline's state
    src_bin.link(dst_bin_new);           // link the source bin to the new destination bin
    pipe.ready();                        // see the note on state transition order below
    pipe.play();                         // resume playback
}
The other approach you may want to try is pad blocking ("PADLOCKING"). Please take a look at the following posts:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
and
http://web.archiveorange.com/archive/v/8yxpz7FmOlGqxVYtkPb4
and
Adding and removing audio sources to/from GStreamer pipeline on-the-go
UPDATE
Try the output-selector and input-selector elements, as they seem to be a better alternative. I found them the most reliable and have had immense luck with them. I use fakesink or fakesrc respectively as the other end of the selector.
The valve element is another alternative, which I found doesn't even need the fakesink or fakesrc elements. It is also extremely reliable.
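For example, the input-selector wiring could look roughly like this in GStreamer 1.0 syntax (an untested sketch: the test-source caps are made up, the decoder mirrors the question rather than a verified setup, and switching the selector's active-pad at runtime has to be done from application code, not from gst-launch):
gst-launch-1.0 input-selector name=sel ! xvimagesink \
videotestsrc is-live=true ! video/x-raw,width=352,height=288,framerate=25/1 ! sel.sink_0 \
udpsrc port=40000 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H263-2000" ! rtph263pdepay ! avdec_h263 ! queue ! sel.sink_1
With nothing arriving on the second branch, the selector keeps pushing the test pattern; once the networked stream is up, the application switches active-pad to the second sink pad.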
Also note the correct state transition order for a media file source:
NULL -> READY -> PAUSED -> PLAYING (Upwards)
PLAYING -> PAUSED -> READY -> NULL (Downwards)
The order in my example above should be corrected so that ready() comes before pause(). Also, I would tend to think unlinking should be performed after the NULL state and not after pause(). I haven't tried these changes, but theoretically they should work.
See the following link for detailed info
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-states.txt?h=BRANCH-RELEASE-0_10_19