I am planning to use OpenCV's VideoCapture for a video file stream / live RTSP stream. However, VideoCapture has a lot of latency when used in my program, so I decided to use a GStreamer pipeline instead. For example, I used
VideoCapture capVideo("filesrc location=CarsDriving.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink ", CAP_GSTREAMER);
My program runs, but if I do something like
capVideo.get(CAP_PROP_FRAME_COUNT)
it always returns -1, because GStreamer emits these warnings:
[ WARN:0] global /home/nvidia/Downloads/opencv-4.4.0/source/modules/videoio/src/cap_gstreamer.cpp (898) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global /home/nvidia/Downloads/opencv-4.4.0/source/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=1, duration=-1
How do I get the frame count in OpenCV if I use GStreamer for the video pipeline? I need the frame count for exception handling and also for video processing techniques.
This is a bug which #alekhin mentioned here and here, along with how to fix it. After making the change you should rebuild OpenCV.
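If rebuilding OpenCV is not an option right away, one possible workaround (my own sketch, assuming a finite file source rather than a live RTSP stream) is to count the frames once by reading through the capture:

int countFrames(cv::VideoCapture& cap)
{
    int frames = 0;
    cv::Mat frame;
    // read() returns false at end of stream, so this counts every decodable frame
    while (cap.read(frame))
        ++frames;
    return frames; // reopen the capture afterwards to decode from the beginning again
}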
Also you said:
However, VideoCapture has a lot of latency when used in my program, so I decided to use a GStreamer pipeline instead.
RTSP cameras generally stream H.264/H.265-encoded data. If you decode that data on the CPU rather than the GPU, you will not gain much speed. Why not use the CAP_FFMPEG flag instead of CAP_GSTREAMER? CAP_FFMPEG will generally be faster than CAP_GSTREAMER.
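For illustration, a minimal sketch of the CAP_FFMPEG route, assuming the CarsDriving.mp4 file from the question (for a live RTSP source you would pass the rtsp:// URL instead, and a frame count is not meaningful there):

// Let FFmpeg handle demuxing and decoding of the file
cv::VideoCapture capVideo("CarsDriving.mp4", cv::CAP_FFMPEG);
// With the FFmpeg backend the frame count of a file is reported directly
double frameCount = capVideo.get(cv::CAP_PROP_FRAME_COUNT);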
Related
I'm trying to create a screenshot (i.e. grab one frame) from an RTSP camera stream using a GStreamer pipeline.
The pipeline used looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
The problem is that the resulting image is always gray, with random artifacts. It looks like it grabs the very first frame and does not wait for a key frame.
Is there any way I can modify the pipeline to grab the first valid frame of video? Or just wait long enough to be sure that at least one key frame has already arrived?
I'm unsure why, but after some trial and error it is now working with decodebin3 instead of decodebin. The documentation is still a bit discouraging though, stating that decodebin3 is still experimental API and a technology preview, and that its behaviour and exposed API are subject to change.
Full pipeline looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin3 ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
I have created a Video0 device using v4l2loopback and used the following sample Git code, V4l2Loopback_cpp, as an application to stream JPG images from a folder sequentially, after altering some conditions in the code. The code reads the images as 24-bit RGB and sends them to the Video0 device, which is fine, because the images play like a proper video in VLC's video device capture. As I mentioned, the VLC properties of the video show the content listed below.
I need this video0 device to stream RTSP H.264 video in VLC using the GStreamer library.
I have used the following command on the command line for testing, but it shows an internal process error:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,width=590,height=332,framerate=30/1 ! x264enc ! rtph264pay name=pay0 pt=96
I don't know what the problem is here: is it the 24-bit image format or the GStreamer command I use? I need a proper GStreamer command line to make the video0 device stream H.264 RTSP video. Any help is appreciated, thank you.
Image format - JPG (sequence of images passed)
Video0 receives - 24-bit RGB image
Output needed - H.264 RTSP stream from video0
Not sure this is a solution, but the following may help:
You may try adjusting the resolution according to what V4L reports (width=584):
v4l2src device=/dev/video0 ! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 ! videoconvert ! x264enc insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96
Note that this error may show up on the v4l2loopback receiver side even though the real problem is on the sender side. If you're feeding v4l2loopback from a GStreamer pipeline into v4l2sink, you could try adding identity, such as:
... ! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 ! identity drop-allocation=1 ! v4l2sink
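If the sender is an OpenCV application rather than the C++ loopback writer above, one possible way to feed /dev/video0 through such a pipeline is cv::VideoWriter with a GStreamer string; a sketch under that assumption (untested on your setup):

// Push BGR frames from OpenCV into the loopback device; videoconvert handles the
// BGR -> RGB conversion required by the caps, identity avoids the allocation issue.
cv::VideoWriter sink("appsrc ! videoconvert ! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 ! identity drop-allocation=1 ! v4l2sink device=/dev/video0",
                     cv::CAP_GSTREAMER, 0, 30.0, cv::Size(584, 332), true);
// sink << frame;  // frame must be a 584x332 BGR cv::Mat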
I'm trying to make a UI-3370CP-C-HQ R2 Camera work on a Coral DevBoard with gstreamer.
Since the camera is not a standard v4l2 camera, I downloaded and compiled the ueyesrc gst plugin (https://github.com/atdgroup/gst-plugin-ueye) on the devboard.
In my application I need to have the frames as OpenGL textures, and I'm stuck at building a working pipeline.
So far the only way I have managed to get something from the camera is to save a frame as a JPEG:
gst-launch-1.0 ueyesrc num-buffers=10 ! jpegenc ! filesink location=ueyesrc-frame.jpg
The pipeline example provided with ueyesrc, gst-launch-1.0 ueyesrc ! videoconvert ! xvimagesink, doesn't work in my case because there is no X server on the device (only Wayland).
gst-launch-1.0 ueyesrc ! videoconvert ! glimagesink returns the following error:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Got context from element 'sink': gst.gl.GLDisplay=context, gst.gl.GLDisplay=(GstGLDisplay)"\(GstGLDisplayWayland\)\ gldisplaywayland0";
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstGLImageSinkBin:glimagesinkbin0/GstGLImageSink:sink: Failed to convert multiview video buffer
Additional debug info:
gstglimagesink.c(1741): gst_glimage_sink_prepare (): /GstPipeline:pipeline0/GstGLImageSinkBin:glimagesinkbin0/GstGLImageSink:sink
Execution ended after 0:00:00.486558117
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
With a standard USB webcam (Logitech HD Pro Webcam C920), gst-launch-1.0 v4l2src ! videoconvert ! glimagesink works fine.
I don't really understand what is going wrong or how to find more clues about it. I suppose I'm missing a conversion step in the middle, but I don't know how to fix it. Does someone have an idea?
Edit 1: it is indeed a conversion issue. I got it to work by specifying the format in the videoconvert caps: gst-launch-1.0 ueyesrc exposure=2 ! videoconvert ! video/x-raw,format=YUY2 ! glimagesink sync=False
The CPU usage is very high though (>90% on all 4 cores of the i.MX8) and the framerate reaches a maximum of 6.5 fps.
I've looked through tons of threads on OpenCV and GStreamer and simply cannot resolve my error. I am trying to open a GStreamer pipeline in OpenCV. I have built OpenCV with GStreamer support, and the CMake step reports GStreamer: YES. The command to retrieve the stream works fine from the command line, but in OpenCV it just displays one frame and hangs.
My Syntax for Server:
gst-launch-1.0 v4l2src device="/dev/video0" ! video/x-raw,format=I420,width=640,height=480,framerate=15/1 ! jpegenc ! rtpjpegpay ! udpsink host=<IP Address> port=5000
My Syntax in OpenCV for Client (C++):
Mat frame;
//create video capture from video camera
VideoCapture cap("udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 ! rtpjpegdepay ! jpegdec ! autovideosink ! appsink");
cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
for(;;)
{
    cap >> frame;
    char c = (char)waitKey(1);
    //![display]
    imshow(window_name, frame);
    frame.release();
}
The error:
GStreamer Plugin: Embedded video playback halted; module autovideosink0-actual-sink-xvimage reported: Output window was closed
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline) in icvStartPipeline, file /home/dev/Downloads/OpenCV/opencv-3.0.0/modules/videoio/src/cap_gstreamer.cpp, line 383
terminate called after throwing an instance of 'cv::Exception'
  what(): /home/dev/Downloads/OpenCV/opencv-3.0.0/modules/videoio/src/cap_gstreamer.cpp:383: error: (-2) GStreamer: unable to start pipeline in function icvStartPipeline
Please provide any assistance; I've been through at least 20 Stack Overflow posts and I am no closer than when I started, with the exception of having GStreamer enabled in OpenCV. I have even tried different versions of OpenCV.
Thanks
VideoCapture cap("udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink");
After a lot more digging through the GStreamer documentation today, I solved it: adding videoconvert in place of autovideosink fixed the pipeline. According to the GStreamer documentation, videoconvert automatically converts the data to the appropriate format for appsink, which allows the frames to be read correctly by OpenCV's VideoCapture.
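For completeness, a minimal client loop built around that corrected pipeline could look like this (a sketch; the window name is arbitrary and error handling is omitted):

cv::Mat frame;
cv::VideoCapture cap("udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink");
while (cap.read(frame))
{
    cv::imshow("stream", frame);
    if (cv::waitKey(1) == 27) // Esc to stop
        break;
}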
So I am not sure whether this is the best place to ask this question, but I'll give it a try.
I have some C++ OpenCV code running remotely. The OpenCV code draws different things on the images that are continuously being captured by the camera.
In other words, when I run my software locally I can just do
imshow("some image", image);
and look at what my code draws on the frames.
However, now I would like to be able to see those frames remotely, like a video stream. What are the possibilities?
How can I see what is being output by my openCV software?
If you want to create a video stream that can be viewed remotely, you can use GStreamer to create a pipeline over the network. For that you can use cv::VideoWriter to write the frames to the GStreamer pipeline.
cv::VideoWriter writer;
writer.open("appsrc ! videoconvert ! x264enc ! h264parse ! rtph264pay ! udpsink host=localhost port=9999", 0, (double)30, cv::Size(640, 480), true);
if (!writer.isOpened()) {
    printf("=ERR= can't create video writer\n");
    return -1;
}

cv::Mat frame;
while (true) {
    /* Process the frame here */
    writer << frame;
}
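One caveat worth adding: the frames pushed into the writer must match the size and format passed to open(), so if your processing produces a different resolution, resize before writing, for example:

cv::Mat out;
cv::resize(frame, out, cv::Size(640, 480)); // match the Size given to writer.open()
writer << out;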
You can replace localhost with the IP of your remote machine. On the receiving machine you can use the following command:
gst-launch-1.0 udpsrc port=9999 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink
For more information on GStreamer you can see this link.
By using TCP socket programming you can send the image data over the network. There is a project, sable-netcv, at https://code.google.com/archive/p/sable-netcv/ that you can try out.
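A minimal sender-side sketch of that idea (plain POSIX sockets, JPEG-compressing each frame with OpenCV and prefixing it with its length so the receiver knows where one frame ends; the framing and names here are my own, not from sable-netcv):

#include <opencv2/opencv.hpp>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <vector>

// Send one frame as a length-prefixed JPEG over an already-connected TCP socket.
bool sendFrame(int sock, const cv::Mat& frame)
{
    std::vector<uchar> buf;
    cv::imencode(".jpg", frame, buf);
    uint32_t len = htonl(static_cast<uint32_t>(buf.size()));
    if (send(sock, &len, sizeof(len), 0) != sizeof(len))
        return false;
    return send(sock, buf.data(), buf.size(), 0) == static_cast<ssize_t>(buf.size());
}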