H264 streaming to a website using GStreamer-1.0

I am very new to the whole GStreamer thing, so I would be happy if you could help me.
I need to stream a near-zero-latency video signal from a webcam to a server and then be able to view the stream on a website.
The webcam is attached to a Raspberry Pi 3, because there are space constraints on the mounting platform. As a result of using the Pi, I really can't transcode the video on the Pi itself. Therefore I bought a Logitech C920 webcam, which can output a raw H264 stream.
By now I have managed to view the stream on my Windows machine, but I haven't managed to get the whole website part working.
My "achievements":
Sender:
gst-launch-1.0 -e -v v4l2src device=/dev/video0 ! video/x-h264,width=1920,height=1080,framerate=30/1 ! rtph264pay pt=96 config-interval=5 mtu=60000 ! udpsink host=192.168.0.132 port=5000
My understanding of this command: grab the signal from video device 0, which is an H264 stream with a given width, height and framerate. Then pack it into RTP packets with an MTU high enough to avoid artifacts, encapsulate the RTP packets in UDP packets, and stream them to an IP address and port.
Receiver:
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false text-overlay=false
My understanding of this command: receive UDP packets on port 5000; the application/x-rtp caps say there are RTP packets inside. I don't know exactly what rtpjitterbuffer does, but it reduces the latency of the video a bit.
rtph264depay says that inside the RTP packets is an H264-encoded stream. To get the raw data that fpsdisplaysink understands, we need to decode the H264 signal using avdec_h264.
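(For reference: rtpjitterbuffer buffers and reorders incoming RTP packets to smooth out network jitter, at the cost of a fixed delay set by its latency property, which defaults to 200 ms. Lowering it trades robustness for latency; here is a sketch with an illustrative 50 ms value, otherwise identical to the receiver above:)
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer latency=50 ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false text-overlay=false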
My next step was to change the receiver sink to a local TCP sink and play that signal with the following HTML5 tag:
<video width=320 height=240 autoplay>
<source src="http://localhost:#port#">
</video>
If I open the website I can't see the stream, but when I analyse the traffic I can see the video data arriving as plain text.
Am I missing a video container like MP4 for my video?
Am I wrong with decoding?
What am I doing wrong?
How can I improve my solution?
How would you solve that problem?
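(One possible direction on the container question: browsers will not play a bare H264 byte stream arriving over a raw TCP socket; they expect a container plus a proper HTTP response, e.g. fragmented MP4 or an HLS playlist. A minimal, untested sketch using hlssink from gst-plugins-bad, with /var/www placeholder paths served by an ordinary web server; the <source> would then point at http://server/playlist.m3u8. Safari plays HLS natively, other browsers need a helper like hls.js:)
gst-launch-1.0 -e v4l2src device=/dev/video0 ! video/x-h264,width=1280,height=720,framerate=30/1 ! h264parse ! mpegtsmux ! hlssink location=/var/www/segment%05d.ts playlist-location=/var/www/playlist.m3u8 target-duration=2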
Best regards

Related

Using GStreamer, I can't find a solution to send AV1 video through udpsink in RTP packets

I'm currently working with GStreamer, and my goal is to take video from a camera (natively encoded in H264), decode it, then encode it in AV1 and send it over UDP to another computer on the network.
My pipelines currently are :
Server :
gst-launch-1.0 -v rtspsrc location=rtsp://192.168.33.104:8554/vis.0 latency=1 is-live=TRUE ! decodebin ! autovideoconvert ! x265enc tune=zerolatency bitrate=300 speed-preset=3 ! rtph265pay ! udpsink host=192.168.33.39 port=8123
Client :
gst-launch-1.0 udpsrc address=192.168.33.39 port=8123 ! application/x-rtp,media=video,clock-rate=90000,encoding-name=H265,payload=96 ! rtph265depay ! avdec_h265 ! autovideosink
So with H265 it works, but I cannot find how to do it with AV1 because I can't find an rtpav1pay (and depay).
Thanks in advance.
I tried to search for rtpav1pay but found nothing. I tried rtpgstpay (and depay), but that didn't work. The main goal is to use as little network bandwidth as possible without lag, so maybe this is not the best solution. If you have any other ideas, please share them.
There are rtpav1pay and rtpav1depay plugins provided by gst-plugins-rs; they can be built along with GStreamer if you enable the Rust plugins option, but you could also build them separately from their own repo (instructions in the README).
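A sketch of what the pipelines might look like once those plugins are installed (assumptions: av1enc/av1dec from the AOM plugin in gst-plugins-bad, av1parse to satisfy the payloader's caps, and encoding-name=AV1 for the RTP caps; untested):
Server:
gst-launch-1.0 rtspsrc location=rtsp://192.168.33.104:8554/vis.0 latency=1 ! decodebin ! videoconvert ! av1enc ! av1parse ! rtpav1pay ! udpsink host=192.168.33.39 port=8123
Client:
gst-launch-1.0 udpsrc address=192.168.33.39 port=8123 ! application/x-rtp,media=video,clock-rate=90000,encoding-name=AV1 ! rtpav1depay ! av1dec ! videoconvert ! autovideosink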

Synchronize two RTSP/RTP H264 video streams capture using GStreamer

I have two AXIS IP cameras streaming H264 over RTSP/RTP. Both cameras are set to synchronize with the same NTP server, so I assume both cameras have exactly the same clock (maybe a minor difference in milliseconds).
In my application both cameras point at the same view, and it is required to process images from both cameras for the same instant. Thus, I want to synchronize the image capture using GStreamer.
I have tried invoking the two pipelines separately in different command prompts, but the videos are 2-3 seconds apart.
gst-launch-1.0 rtspsrc location=rtsp://192.168.16.136:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam1_video_%d.mp4
gst-launch-1.0 rtspsrc location=rtsp://192.168.16.186:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam2_video_%d.mp4
Can someone suggest a gstreamer pipeline to synchronize both H264 streams and record them into separate video files?
Thanks!
ARM
I am able to launch a single pipeline using gst-launch-1.0 as shown below. It shows a good improvement in captured-frame synchronization compared to launching two pipelines: most of the time they differ by 0-500 ms. Still, I want to synchronize them to better than 150 ms accuracy.
gst-launch-1.0 rtspsrc location=rtsp://192.168.16.136:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_136_%d.mp4 \
rtspsrc location=rtsp://192.168.16.186:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_186_%d.mp4
I would appreciate it if someone could point out other ideas!
~Arm
What do you mean by synchronize? If you record to separate video files you do not need any synchronization, as this is going to totally separate them. Each RT(S)P stream will contain different timestamps; if you want to align them somehow to the same time (I mean real human time, like "both should start from 15:00"), then you have to configure the cameras that way somehow (this is just an idea).
Also, you did not tell us what's inside those RTP/RTSP streams (is it MPEG-TS or a plain stream, etc.), so I will give an example with MPEG-TS encapsulated RTP streams.
We will go step by step:
Suppose this is one camera, just to demonstrate how it may look:
gst-launch-1.0 -v videotestsrc ! videoconvert ! x264enc ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=8888
Then this would be the receiver (it must use rtpmp2tdepay; we are encapsulating the data inside the MPEG container):
gst-launch-1.0 udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! decodebin ! videoconvert ! autovideosink
If you test this with your camera, the autovideosink means that a new window will pop up displaying your camera.
Then you can try to store it in a file; we will use mp4mux.
So for the same camera input we do:
gst-launch-1.0 -e udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=test.mp4
Explanation: we do not decode and re-encode (a waste of processing power); I just demux the MPEG-TS stream and then, instead of decoding the H264, I just parse it for mp4mux, which accepts video/x-h264.
Now you could use the same pipeline for each camera, or you can just copy-paste all the elements into one pipeline.
Now, as you did not provide any - at least partial - attempt to make something out of this, that is going to be your homework :) or make yourself clearer about the synchronization, as I do not understand it.
UPDATE
After your update to the question this answer is not very useful, but I will keep it here as a reference. I have no idea how to synchronize that.
Another piece of advice: try to look at the timestamps after udpsrc; maybe they are synchronized already. In that case you can use streamsynchronizer to synchronize the two streams, or maybe a video/audio mixer:
gst-launch-1.0 -v udpsrc port=8888 ! identity silent=false ! fakesink
This should print the timestamps (PTS, DTS, duration, ...):
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (1328 bytes, dts: 0:00:02.707033598, pts:0:00:02.707033598, duration: none, offset: -1, offset_end: -1, flags: 00004000 tag-memory ) 0x7f57dc016400
Compare the PTS of each stream. Maybe you could combine the two udpsrc elements in one pipeline and put an identity (with a different name=something1) after each udpsrc to make them start reception together.
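A combined pipeline along those lines might look like this (the ports are placeholders; both receivers then share one pipeline clock, and the named identities print comparable PTS values):
gst-launch-1.0 -v udpsrc port=8888 ! identity name=cam136 silent=false ! fakesink udpsrc port=8889 ! identity name=cam186 silent=false ! fakesink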
HTH

gstreamer pipeline only generates mono stream

I'm trying to get UPnP streaming to work. Rygel runs fine; however, all I get is a mono stream, even if the input is stereo. Doing some debugging, I replicated Rygel's GStreamer pipeline with
gst-launch-1.0 pulsesrc device=upnp.monitor num-buffers=100 ! audioconvert ! lamemp3enc target=quality quality=6 ! filesink location=test.mp3
where the problem is also apparent:
mp3info -x test.mp3
...
Media Type: MPEG 1.0 Layer III
Audio: Variable kbps, 44 kHz (mono)
...
Where does this pipeline lose the second channel? How can I debug this?
You never ask for stereo:
gst-launch-1.0 pulsesrc device=upnp.monitor num-buffers=100 ! "audio/x-raw,channels=2" ! audioconvert ! lamemp3enc target=quality quality=6 ! filesink location=test.mp3
Add -v to the launch line to see all the caps negotiated on all pads of the pipeline. Look for "channels" and see where it goes from 2 to 1.
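(If it turns out the source itself only delivers one channel, placing the caps filter after audioconvert instead asks audioconvert to upmix to stereo; an untested variant of the same launch line:)
gst-launch-1.0 pulsesrc device=upnp.monitor num-buffers=100 ! audioconvert ! "audio/x-raw,channels=2" ! lamemp3enc target=quality quality=6 ! filesink location=test.mp3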

convert a video to a sequence of frame images

I need to capture a video using a webcam and output a single image for each video frame captured.
I have tried using gstreamer with a multifilesink, e.g.:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"
However, this does not actually output every frame: if I record for 2 seconds at 30 fps, I don't get 60 images. I'm assuming this is because the encoding can't keep up, so I need another method.
I figured it might work if I have one pipeline capture the video and a separate pipeline convert that video to frames, but I don't know enough about codecs. Do I need to encode the video to a file format like H264 or MP4 just to then decode it again?
Does anyone have any thoughts or suggestions? Keep in mind that I need to be able to do this in code, not using an application like Adobe Premiere, for example.
Thanks!
You could simply add a queue in there like this:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! queue ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"
This should make sure the video capture is allowed to run at 30 fps, while writing to disk can happen at its own tempo. Just be aware that the queue will grow quite large if you leave this setup running for too long.
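(Note: by default a queue caps how much it buffers; if you really want it to absorb a long burst of frames, you can lift the limits, where 0 means unlimited. An untested variant of the line above:)
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"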
The solution I have to offer doesn't use GStreamer but ffmpeg; I hope that's fine for you too.
As described in this forum post, you can use something like this:
ffmpeg -i movie.avi frame%d.png
to get a png/jpg image for each frame of the video.
But depending on the input file you use, you might have to convert it to an MPEG video before running ffmpeg.
Note:
If you want leading zeroes in your image file names, use %05d instead (for 5-digit numbers, like in C's printf()):
ffmpeg -i movie.avi frame%05d.png
The output file format depends on the file extension, so you can use .jpg, .bmp, etc. instead of .png.
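If you only need a subset of the frames, the fps filter can thin the output, e.g. to one image per second (an illustrative variant, not from the original answer):
ffmpeg -i movie.avi -vf fps=1 frame%05d.png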
I ended up doing this in two parts.
Write the video to a file:
gst-launch v4l2src device=/dev/video2 ! video/x-raw-yuv,framerate=30/1 ! xvidenc ! queue ! avimux ! filesink location=test.avi
Post-process:
gst-launch-1.0 --gst-debug-level=3 filesrc location=test.avi ! decodebin ! queue ! autovideoconvert ! pngenc ! multifilesink location="frame%d.png"
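(For reference, the GStreamer 1.0 equivalents of the 0.10 capture elements are video/x-raw instead of video/x-raw-yuv and videoconvert instead of ffmpegcolorspace, so the direct capture-to-PNG attempt could also be retried as, untested:)
gst-launch-1.0 v4l2src device=/dev/video2 ! video/x-raw,framerate=30/1 ! videoconvert ! pngenc ! multifilesink location="frame%d.png"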

Streaming using GStreamer

I have an HD video "ed_hd.avi" on System #1. I would like to stream it over the network and play the content on System #2. I am using GStreamer on Ubuntu 11.04 and have tried a lot; the variety of errors makes this objective difficult to diagnose. I would be thankful for a working command for the System #1 end and the System #2 end.
What I have tried is as follows:
System #1:
gst-launch filesrc location=ed_hd.avi ! decodebin ! x264enc ! video/x-h264 ! rtph264pay ! udpsink host=127.0.0.1 port=5000
System #2:
gst-launch udpsrc port=5000 ! rtph264depay ! decodebin ! xvimagesink
The objective is: convert the AVI file to a raw video stream, and view it on the second system, System #2.
Thank You.
Could you try the following
gst-launch filesrc location=ed_hd.avi ! decodebin ! ffenc_mpeg4 ! rtpmp4vpay ! udpsink host=127.0.0.1 port=5000
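A matching receiver on System #2 might then look like this (the caps are an assumption for MPEG-4 part 2 over RTP, since udpsrc gets no SDP; you may also need config-interval set on rtpmp4vpay so the decoder configuration is sent in-band):
gst-launch udpsrc port=5000 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=MP4V-ES" ! rtpmp4vdepay ! decodebin ! xvimagesink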
If that doesn't work then proceed:
Did you try the following? You need to replace the author's audio bins with video bins:
http://delog.wordpress.com/2011/06/01/stream-raw-vorbis-audio-over-udp-or-tcp-with-gstreamer/
Also take a look at
http://pastebin.com/PtD21Bx7
Here, replace v4l2src with your video src.
Also
https://metalab.at/wiki/Gstreamer_One_Liners
I think your problem is the 127.0.0.1 portion. That is a loopback address (check ifconfig lo0 to see Link encap:Local Loopback for the 127.0.0.1 address). This won't work across two systems, though it works fine on a single system.
Instead, use the address that is publicly visible for the second machine; check the ip addr show or ifconfig output to find it. Write the actual address of System #2 in the command line on System #1.