I have a fisheye camera piped through GStreamer over the internet to another PC, where I want to display it on an Oculus Rift. The Oculus expects a 1280×800 input just like a normal monitor, but the left 640×800 of the screen is shown in the left eye and the right 640×800 in the right eye.
I need to modify this:
gst-launch-1.0 -e -v udpsrc port=5001 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false text-overlay=false
to show the stream twice, side by side. If I run this command and press Win+Left arrow, it displays really well in one eye; the Oculus even crops out the edges (read: window decorations). But GStreamer won't let me run gst-launch twice at the same time. Is there any way to make that work? Admittedly it's quite a hack, but it seemed to work quite well in the one eye.
Alternatively, can someone help me use videomixer?
Windows 8, btw.
Thanks!
You should be able to duplicate the video with a tee:
... ! tee name=t ! queue ! videomixer name=m sink_0::xpos=0 sink_1::xpos=640 ! ... t. ! queue ! m.
The key is to use the videomixer's pad properties to position the two copies.
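Putting that together with the receive pipeline from the question, a rough sketch (assuming the decoded frames are 640 pixels wide; adjust sink_1::xpos to the actual width, and add videoscale if the frames need resizing first) could be:
gst-launch-1.0 -e -v udpsrc port=5001 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! tee name=t ! queue ! videomixer name=m sink_0::xpos=0 sink_1::xpos=640 ! videoconvert ! fpsdisplaysink sync=false text-overlay=false t. ! queue ! m.
That gives one window with the stream shown twice side by side, which you can then snap to the Rift display as before.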
I'm trying to create a screenshot (i.e. grab one frame) from an RTSP camera stream using a GStreamer pipeline.
The pipeline used looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
The problem is that the resulting image is always gray, with random artifacts. It looks like it grabs the very first frame and doesn't wait for a key frame.
Is there any way I can modify the pipeline to grab the first valid frame of video, or at least wait long enough to be sure a key frame has arrived?
I'm unsure why, but after some trial and error it is now working with decodebin3 instead of decodebin. The documentation is still a bit discouraging, though, stating that decodebin3 "is still experimental API and a technology preview. Its behaviour and exposed API is subject to change."
Full pipeline looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin3 ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
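If you would rather stay on decodebin, another sketch that tends to avoid the gray first frame is to keep writing frames and only keep the newest one on disk (max-files is a multifilesink property; stop the pipeline with Ctrl-C once it has run for a moment):
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin ! videoconvert ! jpegenc ! multifilesink location=/tmp/frame_%03d.jpg max-files=1
The last file written is then a fully decoded frame rather than the partial one before the first key frame.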
I am trying to get a camera to capture images with GStreamer at a specific time interval. While searching the web I found the following line:
gst-launch-1.0 -v videotestsrc is-live=true ! clockoverlay font-desc="Sans, 48" ! videoconvert ! videorate ! video/x-raw,framerate=1/3 ! jpegenc ! multifilesink location=file-%02d.jpg
I believe it would work great, but unfortunately I don't know how to get it to work with my specific camera: I don't know how to identify my video source on the RPi4, or whether that is the only thing I have to change. I would appreciate either help with the video source or any other method to get those images.
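For what it's worth, a minimal sketch for the RPi4, assuming the camera shows up as a V4L2 device such as /dev/video0 (check with v4l2-ctl --list-devices) and can output raw video, would be to swap videotestsrc for v4l2src:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! clockoverlay font-desc="Sans, 48" ! videoconvert ! videorate ! video/x-raw,framerate=1/3 ! jpegenc ! multifilesink location=file-%02d.jpg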
I am trying to create a virtual web camera using GStreamer and v4l2loopback. The problem is that I want to use playbin, but the video plays too fast when I use it. For instance, it happens when I execute the following command:
gst-launch-1.0 -v playbin uri=file:/vagrant/test.avi
video-sink="videoconvert
! videoscale
! video/x-raw,format=YUY2,width=320,height=320
! v4l2sink device=/dev/video0"
Adding "framerate=20/1" to the caps throws "Not negotiated error" while setting it to "30/1" works but doesn't help to fix the issue with the speed.
On the other hand, I am getting normal speed when executing the following command:
gst-launch-1.0 -v filesrc location=/vagrant/test.avi
! avidemux
! decodebin
! videoconvert
! videoscale
! "video/x-raw,format=YUY2,width=320,height=320"
! v4l2sink device=/dev/video0
I tried a lot of combinations of the filters from the last example with playbin, but none of them helped.
Any help would be highly appreciated!
The problem was with the virtual machine running on top of VirtualBox. To be more precise, I had 3D acceleration turned on, which caused all videos to play at 2x speed.
Turning off 3D acceleration by setting --accelerate3d=off solved the issue.
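For reference, a sketch of that change from the host side (the VM name is a placeholder, and the VM has to be powered off while you change the setting) would be:
VBoxManage modifyvm "my-vm" --accelerate3d off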
I'm launching a gst-launch-1.0 pipeline that captures camera images with nvcamerasrc. The images are encoded to VP9 video. The video is tee'd to a filesink, which saves it in a WebM container, and to a VP9 decoder, which pipes the images into an appsink.
Later, I want to extract frames from the saved video and run them through the application again. It is important that the frames are absolutely identical to the ones that were piped into the appsink during video capture.
Unfortunately, the decoded frames look slightly different, depending on how you extract them.
A minimal working example:
Recording:
$ gst-launch-1.0 nvcamerasrc ! "video/x-raw(memory:NVMM), format=NV12" ! omxvp9enc ! tee name=splitter \
splitter. ! queue ! webmmux ! filesink location="record.webm" \
splitter. ! queue ! omxvp9dec ! nvvidconv ! "video/x-raw,format=RGBA" ! pngenc ! multifilesink location="direct_%d.png"
Replaying with nvvidconv element:
$ gst-launch-1.0 filesrc location=record.webm ! matroskademux ! omxvp9dec \
! nvvidconv ! pngenc ! multifilesink location="extracted_nvvidconv_%d.png"
Replaying with videoconvert element:
$ gst-launch-1.0 filesrc location=record.webm ! matroskademux ! omxvp9dec \
! videoconvert ! pngenc ! multifilesink location="extracted_videoconvert_%d.png"
Testing image differences:
$ compare -metric rmse direct_25.png extracted_nvvidconv_25.png null
0
$ compare -metric rmse direct_25.png extracted_videoconvert_25.png null
688.634 (0.0105079)
(The nvvidconv and videoconvert comparison images are omitted here.)
My guess is that this has to do with the I420 to RGB conversion. So videoconvert seems to use a different color conversion than nvvidconv.
Launching the pipeline with gst-launch -v shows that the element capabilities are basically the same for both replay pipelines; the only difference is that videoconvert uses RGB by default, while nvvidconv uses RGBA. However, adding the caps string "video/x-raw,format=RGBA" after videoconvert makes no difference to the color conversion.
Note that this is on an Nvidia Jetson TX2, and I would like to use hardware-accelerated GStreamer plugins during recording (omxvp9enc, nvvidconv), but not during replay on another machine.
How can I extract images from the video that are identical to the images running through the pipeline during recording, but without the use of Nvidia's Jetson-specific plugins?
Check for colorimetry information - https://developer.gnome.org/gst-plugins-libs/stable/gst-plugins-base-libs-gstvideo.html#GstVideoColorimetry
videoconvert, for example, takes these into account when converting images, depending on the caps found at its input and output.
You probably have to check what the Tegra is doing here. Most likely there is a difference in whether the signal is interpreted as full range or TV range, or the matrices differ between BT.601 and BT.709.
Depending on the precision, there may still be some loss during the conversion. For video codec metrics it may make sense to stay in the YUV color space and use RGB only for display, if you must.
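As a sketch of that suggestion, the colorimetry can be pinned explicitly in the caps after videoconvert in the replay pipeline; the exact value is an assumption and has to match whatever the Tegra side used (e.g. bt601 vs. bt709, or a full-range variant):
gst-launch-1.0 filesrc location=record.webm ! matroskademux ! omxvp9dec ! videoconvert ! "video/x-raw,format=RGBA,colorimetry=bt709" ! pngenc ! multifilesink location="extracted_videoconvert_%d.png"
Comparing the output of that against the direct frames for each candidate colorimetry should reveal which interpretation nvvidconv applied.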
I am currently working on a project that uses an Nvidia Jetson. We need to stream 3 cameras over RTP/UDP to a single destination (unicast) while saving the contents of all three cameras.
I am having issues with my pipeline; it is probably a simple mistake somewhere that I am just not seeing.
gst-launch-1.0 -e \
  v4l2src device=/dev/video0 ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=c \
    c. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8574 host=129.21.57.204 port=8574 loop=false \
    c. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-RightFacingCamera.mp4 \
  v4l2src device=/dev/video1 ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=b \
    b. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8564 host=129.21.57.204 port=8564 loop=false \
    b. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-LeftFacingCamera.mp4 \
  v4l2src device=/dev/video2 ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=a \
    a. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8554 host=129.21.57.204 port=8554 loop=false \
    a. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-FrontFacingCamera.mp4
The issue is that 2 of the 3 streams simply stop without cause; there is no debug information at all. They cease streaming and writing to file after about 2 minutes of uptime.
Additionally, I have considered converting this to C/C++ with GStreamer, but I would not know where to begin if someone wants to point me in a direction. Currently I have JavaScript code written up that detects each camera by serial number, assigns a port to each camera, and then runs this command.
Thanks for any help.
This issue was caused by the cameras themselves. It turns out that ECON-brand cameras have an issue where 3 of the identical camera will not work with v4l2. My team and I bought new cameras, all of an identical model, to test, and it works fine.
We were using ECONs because of their supposed scientific quality and USB 3 speeds. Unfortunately we do not have USB 3 speeds or bandwidth, so we are stuck at a lower resolution.
Hope that helps anyone who runs into a similar problem; the cameras that currently all seem to work asynchronously over USB 2.0 are Logitech C922s.
This is a USB bandwidth limitation of the Jetson. It can support 3 cameras at a time by compromising on the frame rate. The Logitech camera, by comparison, is an H.264 camera (it delivers compressed frames), so it can afford to give 60 fps within that bandwidth.
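As a loosely related sketch: if the cameras can also deliver MJPEG (check with v4l2-ctl --list-formats-ext -d /dev/video0), requesting compressed frames from v4l2src reduces the USB bandwidth each camera needs, e.g. for one camera branch of the pipeline above:
gst-launch-1.0 -e v4l2src device=/dev/video0 ! 'image/jpeg, width=640, height=480' ! jpegdec ! tee name=c c. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink host=129.21.57.204 port=8574 c. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-RightFacingCamera.mp4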