I've looked through tons of threads on OpenCV and GStreamer and simply cannot resolve my error. I am trying to open a GStreamer pipeline in OpenCV. I built OpenCV with GStreamer support, and the CMake step reports GStreamer: YES, so the build picked it up. The command to retrieve the stream works fine from the command line; in OpenCV, however, it just displays one frame and hangs.
My Syntax for Server:
gst-launch-1.0 v4l2src device="/dev/video0" ! video/x-raw,format=I420,width=640,height=480,framerate=15/1 ! jpegenc ! rtpjpegpay ! udpsink host=<IP Address> port=5000
My Syntax in OpenCV for Client (C++):
Mat frame;
// create video capture from the camera
VideoCapture cap("udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 ! rtpjpegdepay ! jpegdec ! autovideosink ! appsink");
cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
for (;;)
{
    cap >> frame;
    char c = (char)waitKey(1);
    imshow(window_name, frame);
    frame.release();
}
The error:
GStreamer Plugin: Embedded video playback halted; module autovideosink0-actual-sink-xvimage reported: Output window was closed
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline) in icvStartPipeline, file /home/dev/Downloads/OpenCV/opencv-3.0.0/modules/videoio/src/cap_gstreamer.cpp, line 383
terminate called after throwing an instance of 'cv::Exception'
  what(): /home/dev/Downloads/OpenCV/opencv-3.0.0/modules/videoio/src/cap_gstreamer.cpp:383: error: (-2) GStreamer: unable to start pipeline in function icvStartPipeline
Please provide any assistance; I've been through at least 20 Stack Overflow posts and I am no closer than when I started, except for having GStreamer enabled in OpenCV. I even tried different versions of OpenCV.
Thanks
VideoCapture cap("udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink");
After a lot more digging through Gstreamer documentation today I solved the issue. The addition of videoconvert solved the issue. According to the Gstreamer documentation videoconvert automatically converts the data to the appropriate format for appsink. This allows it to be read correctly in OpenCV VideoCapture.
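Putting it together, a minimal client sketch with the corrected pipeline (the window name, the isOpened check, and the empty-frame/Esc handling are my additions; OpenCV 3.x as in the question):

#include <cstdio>
#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    // videoconvert feeds appsink a raw format OpenCV can consume
    VideoCapture cap("udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 "
                     "! rtpjpegdepay ! jpegdec ! videoconvert ! appsink");
    if (!cap.isOpened()) {
        printf("failed to open pipeline\n");
        return -1;
    }
    Mat frame;
    for (;;) {
        cap >> frame;
        if (frame.empty()) break;          // stream ended or pipeline stalled
        imshow("stream", frame);
        if ((char)waitKey(1) == 27) break; // Esc to quit
    }
    return 0;
}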
I created a video0 device using v4l2loopback and used the following sample Git code, V4l2Loopback_cpp, as an application to stream jpg images from a folder sequentially, after altering some conditions in the code. The code reads the images as 24-bit RGB and sends them to the video0 device, which works: the images play like a proper video in VLC's video device capture. As I mentioned earlier, if I check the VLC properties of the video, it shows the following content:
I need this video0 device to stream RTSP h264 video in VLC using the GStreamer lib.
I used the following command on the command line for testing, but it shows some internal process error:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,width=590,height=332,framerate=30/1 ! x264enc ! rtph264pay name=pay0 pt=96
I don't know what the problem is here: is it the 24-bit RGB format or the GStreamer command I use? I need a proper GStreamer command line to stream h264 RTSP video from the video0 device. Any help is appreciated, thank you.
image format - jpg (sequential images passed)
video0 receives - 24-bit RGB image
output needed - h264 RTSP stream from video0
Not sure this is a solution, but the following may help:
You may try adjusting resolution according to what V4L reports (width=584) :
v4l2src device=/dev/video0 ! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 ! videoconvert ! x264enc insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96
Note that this error may happen on the v4l2loopback receiver side while actually being a sender error. If you're feeding v4l2loopback from a GStreamer pipeline into v4l2sink, you could try adding an identity element such as:
... ! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 ! identity drop-allocation=1 ! v4l2sink
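Since the name=pay0 pt=96 pattern suggests the launch string is intended for gst-rtsp-server, here is a minimal, untested sketch of serving that pipeline over RTSP with the library (the port 8554 and the /stream mount point are arbitrary choices of mine):

#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    GstRTSPServer *server = gst_rtsp_server_new();
    gst_rtsp_server_set_service(server, "8554");

    /* gst-rtsp-server expects the launch string in parentheses, and the
       payloader must be named pay0 so the server can find it */
    GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new();
    gst_rtsp_media_factory_set_launch(factory,
        "( v4l2src device=/dev/video0 "
        "! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 "
        "! videoconvert ! x264enc insert-vui=1 ! h264parse "
        "! rtph264pay name=pay0 pt=96 )");

    GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points(server);
    gst_rtsp_mount_points_add_factory(mounts, "/stream", factory);
    g_object_unref(mounts);

    gst_rtsp_server_attach(server, NULL);
    g_main_loop_run(loop); /* VLC can then open rtsp://<host>:8554/stream */
    return 0;
}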
I am planning on doing VideoCapture from OpenCV for a video file stream/live RTSP stream. However, VideoCapture has a lot of latency when used in my program, so I decided to use a GStreamer pipeline instead. For example, I used
VideoCapture capVideo("filesrc location=CarsDriving.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink ", CAP_GSTREAMER);
My program runs, but if I do something like
capVideo.get(CAP_PROP_FRAME_COUNT)
It always returns -1 because GStreamer emits these warnings:
[ WARN:0] global /home/nvidia/Downloads/opencv-4.4.0/source/modules/videoio/src/cap_gstreamer.cpp (898) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global /home/nvidia/Downloads/opencv-4.4.0/source/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=1, duration=-1
How do I get the frame count in OpenCV if I use GStreamer for the video pipeline? I need the frame count for exceptions and also for video processing techniques.
This is a bug which @alekhin mentioned here and here, along with how to fix it. After applying the change you should rebuild OpenCV.
Also you said:
However, the VideoCapture has alot of latency when used in my program
so i decided to use the gstreamer pipeline instead.
RTSP cameras generally stream h264/h265-encoded data. If you decode that data on the CPU rather than the GPU, you will not gain much speed. Why don't you choose the CAP_FFMPEG flag instead of CAP_GSTREAMER? CAP_FFMPEG will be faster than CAP_GSTREAMER.
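If duration queries matter, a minimal sketch of the CAP_FFMPEG route (using CarsDriving.mp4 from the question; for live streams the duration is undefined, so counting frames as you read them is the usual fallback):

#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    // The FFmpeg backend can report frame counts for container files
    cv::VideoCapture cap("CarsDriving.mp4", cv::CAP_FFMPEG);
    if (!cap.isOpened()) return -1;

    double reported = cap.get(cv::CAP_PROP_FRAME_COUNT); // -1 on live streams

    cv::Mat frame;
    long counted = 0;
    while (cap.read(frame))
        ++counted; // manual count works even when the backend cannot report one

    printf("reported=%.0f counted=%ld\n", reported, counted);
    return 0;
}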
I'm trying to make a UI-3370CP-C-HQ R2 Camera work on a Coral DevBoard with gstreamer.
Since the camera is no standard v4l2 camera, I downloaded and compiled the ueyesrc gst plugin (https://github.com/atdgroup/gst-plugin-ueye) on the devboard.
In my application I need to have the frames as OpenGL textures, and I'm stuck at building a working pipeline.
So far the only way I have managed to get something from the camera is to save a frame as JPEG:
gst-launch-1.0 ueyesrc num-buffers=10 ! jpegenc ! filesink location=ueyesrc-frame.jpg
The pipeline example provided with ueyesrc, gst-launch-1.0 ueyesrc ! videoconvert ! xvimagesink, doesn't work in my case because there is no X server on the device (only Wayland).
gst-launch-1.0 ueyesrc ! videoconvert ! glimagesink returns the following error:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Got context from element 'sink': gst.gl.GLDisplay=context, gst.gl.GLDisplay=(GstGLDisplay)"\(GstGLDisplayWayland\)\ gldisplaywayland0";
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstGLImageSinkBin:glimagesinkbin0/GstGLImageSink:sink: Failed to convert multiview video buffer
Additional debug info:
gstglimagesink.c(1741): gst_glimage_sink_prepare (): /GstPipeline:pipeline0/GstGLImageSinkBin:glimagesinkbin0/GstGLImageSink:sink
Execution ended after 0:00:00.486558117
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
With a standard USB webcam (Logitech HD Pro Webcam C920), gst-launch-1.0 v4l2src ! videoconvert ! glimagesink works fine.
I don't really understand what is going wrong or how to find more clues about it. I suppose I'm missing a conversion step in the middle, but I don't know how to fix it. Does someone have an idea?
edit 1: it is indeed a conversion issue. I got it to work by specifying the format in the videoconvert caps: gst-launch-1.0 ueyesrc exposure=2 ! videoconvert ! video/x-raw,format=YUY2 ! glimagesink sync=False
However, the CPU usage is very high (>90% on all 4 cores of the i.MX8) and the framerate reaches a max of 6.5 fps.
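If the conversion cost is the bottleneck, it may be worth trying to move the colorspace conversion onto the GPU with the GL elements, e.g. gst-launch-1.0 ueyesrc ! glupload ! glcolorconvert ! glimagesink (assuming those elements are available in the GStreamer GL build on the device; I have not verified this on the i.MX8).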
I am capturing and processing video frames with OpenCV, and I would like to write them out as an h265 video file. I am struggling to get a proper GStreamer pipeline to work from OpenCV.
Gstreamer works fine by itself. In particular, I am able to run this command, which encodes video very quickly (thanks to GPU acceleration) and saves it to a mkv file:
gst-launch-1.0 videotestsrc num-buffers=90 ! 'video/x-raw, format=(string)I420, width=(int)640, height=(int)480' ! omxh265enc ! matroskamux ! filesink location=test.mkv
Now I would like to do the same thing from within my OpenCV application. My code is something like:
Mat img_vid = Mat(1024, 1024, CV_8UC3);
VideoWriter video;
video.open("appsrc ! autovideoconvert ! omxh265enc ! matroskamux ! filesink location=test.mkv", 0, (double)25, cv::Size(1024, 1024), true);
if (!video.isOpened()) {
    printf("can't create writer\n");
    return -1;
}
while ( ... ) {
    // Capture frame into img_vid => That works fine
    video.write(img_vid);
    ...
}
At first sight, this seems to work, but what it actually does is create a file named "appsrc ! autovideoconvert ! omxh265enc ! matroskamux ! filesink location=test.mkv" and fill it with uncompressed video frames, completely ignoring the fact that this is a GStreamer pipeline.
I have tried other pipelines, but they result in a variety of errors:
video.open("appsrc ! autovideoconvert ! omxh264enc ! 'video/x-h264, streamformat=(string)byte-stream' ! h264parse ! qtmux ! filesink location=test.mp4 -e", 0, (double)25, cv::Size(1024, 1024), true);
Which results in:
(Test:5533): GStreamer-CRITICAL **: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
OpenCV Error: Unspecified error (GStreamer: cannot find appsrc in manual pipeline) in CvVideoWriter_GStreamer::open, file /home/ubuntu/opencv/modules/videoio/src/cap_gstreamer.cpp, line 1363
VIDEOIO(cvCreateVideoWriter_GStreamer(filename, fourcc, fps, frameSize, is_color)): raised OpenCV exception:
/home/ubuntu/opencv/modules/videoio/src/cap_gstreamer.cpp:1363: error: (-2) GStreamer: cannot find appsrc in manual pipeline in function CvVideoWriter_GStreamer::open
I also tried the simple:
video.open("appsrc ! autovideosink", 0, (double)25, cv::Size(1024, 1024), true);
which yields:
GStreamer Plugin: Embedded video playback halted; module appsrc0 reported: Internal data flow error.
I am using OpenCV 3.1 with Gstreamer support. The hardware is a Jetson TX1 with L4T 24.2.1.
I encountered a similar problem before. Since the pipeline/file name ends with .mkv, OpenCV interprets it as a video file instead of a pipeline.
You can try ending it with a dummy space after .mkv:
video.open("appsrc ! autovideoconvert ! omxh265enc ! matroskamux ! filesink location=test.mkv ", 0, (double)25, cv::Size(1024, 1024), true);
or with a dummy property like
video.open("appsrc ! autovideoconvert ! omxh265enc ! matroskamux ! filesink location=test.mkv sync=false", 0, (double)25, cv::Size(1024, 1024), true);
So I am not sure whether this is the best place to ask this question, but I'll give it a try.
I have some C++ OpenCV code running remotely. The OpenCV code draws different things on the images that are continuously being captured by the camera.
In other words, when I run my software locally I can just do
imshow("some image", image);
and look at what my code draws on the frames.
However, now I would like to be able to see those frames remotely, like a video stream. What are the possibilities?
How can I see what is being output by my openCV software?
If you want to create a video stream remotely, you can use GStreamer to create a pipeline over the network. For that you can use cv::VideoWriter to write the frames to the GStreamer pipeline.
cv::VideoWriter writer;
writer.open("appsrc ! videoconvert ! x264enc ! h264parse ! rtph264pay ! udpsink host=localhost port=9999", 0, (double)30, cv::Size(640, 480), true);
if (!writer.isOpened()) {
    printf("=ERR= can't create video writer\n");
    return -1;
}
while (true) {
    /* Process the frame here */
    writer << frame;
}
You can replace localhost with the IP of your remote machine. On the receiving machine you can use the following command: gst-launch-1.0 udpsrc port=9999 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink
For more information on GStreamer you can see this link.
By using TCP socket programming you can send the image data over the network. There is a project, sable-netcv, at https://code.google.com/archive/p/sable-netcv/ that you can try out.
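As a rough illustration of that approach, a minimal sketch of the sending side (POSIX sockets; assumes an already-connected socket, a continuous CV_8UC3 BGR frame, and a receiver that knows the frame dimensions):

#include <opencv2/opencv.hpp>
#include <sys/socket.h>

// Send one raw BGR frame over a connected TCP socket
bool sendFrame(int sock, const cv::Mat& frame) {
    cv::Mat bgr = frame.isContinuous() ? frame : frame.clone();
    size_t total = bgr.total() * bgr.elemSize();
    const char* data = reinterpret_cast<const char*>(bgr.data);
    size_t sent = 0;
    while (sent < total) {
        ssize_t n = send(sock, data + sent, total - sent, 0);
        if (n <= 0) return false; // connection closed or error
        sent += static_cast<size_t>(n);
    }
    return true;
}

The receiver reads the same number of bytes into a cv::Mat of matching size and type. Raw frames are large, so compressing each frame with cv::imencode before sending is a common refinement.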