GStreamer Pipeline with OpenCV on Coral Dev Board - C++

I am trying to access the CSI camera on the Coral Dev Board via OpenCV and GStreamer in C++.
This is my pipeline, which seems to work fine when I test it with gst-launch:
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1
But when I try to open it with OpenCV, it doesn't seem to do the trick:
#include "opencv2/opencv.hpp"
#include <iostream>
int main() {
    std::string pipeline = "v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1";
    cv::VideoCapture capture(pipeline, cv::CAP_GSTREAMER);
    if (!capture.isOpened()) {
        std::cout << "Could not open camera" << std::endl;
    }
    return 0;
}
Also, is there a more detailed error message available via OpenCV? When running gst-launch-1.0 it tells me quite specifically what it didn't like about my pipeline string, but OpenCV just seems to tell me that it didn't work.

OpenCV's VideoCapture has several overloaded constructors, and none of them matches the way it is being called here. You can use the following as VideoCapture inputs:
a video file or an image file sequence
a capturing device index
an IP video stream link
As I see in your command, your CSI camera is located at /dev/video0, so you can open it simply with the line:
cv::VideoCapture capture(0, cv::CAP_GSTREAMER);

Related

OpenCV returns no error when open is called, but gstreamer does

I have a problem when opening a camera with GStreamer: when the camera is not connected, I don't get an error code back from OpenCV. GStreamer prints an error to the console, yet when I check whether the camera is open with .isOpened(), the return value is true. When the camera is connected, everything works without any issue.
std::string pipeline = "nvarguscamerasrc sensor_id=0 ! video/x-raw(memory:NVMM), width=(int)3264, height=(int)2464, format=(string)NV12, framerate=(fraction)21/1 ! nvvidconv flip-method=2 ! video/x-raw, width=(int)3264, height=(int)2464, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink";
cap_device_.open(pipeline, cv::CAP_GSTREAMER);
bool err = cap_device_.isOpened();
if (!err) {
printf("Not possible to open camera");
return EXIT_FAILURE;
}
The GStreamer error code in the console is:
(Argus) Error Timeout: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 219)
(Argus) Error Timeout: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 106)
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:720 Failed to create CameraProvider
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
If I understand everything correctly, .isOpened() should return false. If not, how can I check whether the pipeline was initialized correctly?
My system runs with Ubuntu 18.04 on an Nvidia Jetson Nano with a MIPI-CSI camera.
GStreamer version 1.14.5, OpenCV version 4.1.1
This may just be a typo: nvarguscamerasrc has no property sensor_id, but it does have sensor-id. It should work after fixing this.
In the non-working case, cap.read() should return false, even though isOpened() reports true.
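Since OpenCV does not validate the pipeline string for you, a cheap pre-flight check in your own code can catch the most common slips before they turn into silent failures. A toy sketch; the checked substrings are illustrative, not an exhaustive lint:

```cpp
#include <string>
#include <vector>

// Collect obvious problems in a GStreamer capture pipeline string before
// handing it to cv::VideoCapture. Purely illustrative checks.
std::vector<std::string> lint_capture_pipeline(const std::string& p) {
    std::vector<std::string> problems;
    if (p.find("appsink") == std::string::npos)
        problems.push_back("capture pipeline must end in appsink");
    if (p.find("sensor_id") != std::string::npos)
        problems.push_back("nvarguscamerasrc property is sensor-id, not sensor_id");
    return problems;
}
```

With the pipeline string from the question, this would flag the sensor_id typo; it is no substitute for also checking the result of cap.read() on the first frame.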

How to use OpenCV VideoWriter with GStreamer?

I am trying to transfer an h264 stream using OpenCV VideoWriter and receive it on another PC on the network using VideoCapture. However, I am stuck on VideoWriter: executing this code returns an error, and out.isOpened() is always false.
    int FOURCC = cv::VideoWriter::fourcc('H', '2', '6', '4');
    cv::VideoWriter out;
    out.open ("appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=500 speed-preset=superfast ! rtph264pay ! udpsink host=127.0.0.1 port=5000",
        cv::CAP_GSTREAMER,
        FOURCC,
        16,
        cv::Size (640, 480),
        true);
    if (!out.isOpened ()) {
        qDebug() << "\n***** Failed to open videowriter *****";
        return -1;
    }
[ WARN:0] global D:\Libs\opencv-4.3.0\modules\videoio\src\cap_gstreamer.cpp (1274) cv::CvVideoWriter_GStreamer::close_ OpenCV | GStreamer warning: No source in GStreamer pipeline. Ignore
[ERROR:0] global D:\Libs\opencv-4.3.0\modules\videoio\src\cap.cpp (527) cv::VideoWriter::open VIDEOIO(GSTREAMER): raised OpenCV exception:
OpenCV(4.3.0) D:\Libs\opencv-4.3.0\modules\videoio\src\cap_gstreamer.cpp:144: error: (-215:Assertion failed) ptr in function 'cv::`anonymous-namespace'::GSafePtr<struct _GstElement>::get'
***** Failed open videowriter *****
Even a simple example returns an error, and out.isOpened() is false.
out.open("autovideosrc ! videoconvert ! appsink",
cv::CAP_GSTREAMER,
FOURCC,
16,
cv::Size(640, 480),
true);
[ WARN:0] global D:\Libs\opencv-4.3.0\modules\videoio\src\cap_gstreamer.cpp (1500) cv::CvVideoWriter_GStreamer::open OpenCV | GStreamer warning: OpenCV backend does not support this file type (extension): autovideosrc ! videoconvert ! appsink
[ WARN:0] global D:\Libs\opencv-4.3.0\modules\videoio\src\cap_gstreamer.cpp (1274) cv::CvVideoWriter_GStreamer::close_ OpenCV | GStreamer warning: No source in GStreamer pipeline. Ignore
***** Failed to open videowriter *****
OpenCV 4.3.0 was compiled from source with GStreamer support.
cv::getBuildInformation() says:
    Video I/O:
      DC1394: NO
      FFMPEG: YES (prebuilt binaries)
        avcodec: YES (58.54.100)
        avformat: YES (58.29.100)
        avutil: YES (56.31.100)
        swscale: YES (5.5.100)
        avresample: YES (4.0.0)
      GStreamer: YES (1.16.2)
      DirectShow: YES
      Media Foundation: YES
        DXVA: YES
How can I send the stream? What parameters should be passed to VideoWriter? I tried various tips from Google, but none of them helped. I would be grateful for a simple example of how to send a stream from VideoWriter on one side and receive it with VideoCapture on the other.
I am using Windows 10 x64 and Qt 5.13 with MSVC2017.
You need to feed raw video to appsrc. Setting the fourcc to H264 forces VideoWriter to encode the video itself instead of handing raw frames to the GStreamer pipeline. Set the fourcc to 0 to push raw video. The following should work:
cv::VideoWriter out;
out.open("appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=500 speed-preset=superfast ! rtph264pay ! udpsink host=127.0.0.1 port=5000",
    cv::CAP_GSTREAMER,
    0,                     // fourcc 0: hand raw frames to the pipeline
    16,                    // frames per second
    cv::Size(640, 480),
    true);
I believe that this is a bug in OpenCV, which should be fixed by this PR, or maybe this one (or maybe both).
Indeed, it seems like the OpenCV code used to work with some version of GStreamer that would set the error pointer to NULL when the function succeeds, but that is not guaranteed with 1.16.2 in my experience (and from the documentation).
With those PRs, it should properly use the return value instead of relying on the state of the error pointer.
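On the closing request for a simple end-to-end example: the receiving side is just a VideoCapture whose pipeline mirrors the sender stage by stage. A sketch of the two pipeline strings, assuming the same host/port as the answer above and that gst-libav (which provides avdec_h264) is installed on the receiver:

```cpp
#include <string>

// Sender: raw BGR frames pushed by cv::VideoWriter -> x264 -> RTP -> UDP.
const std::string sender =
    "appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=500 "
    "speed-preset=superfast ! rtph264pay ! udpsink host=127.0.0.1 port=5000";

// Receiver: the same chain undone in reverse, for cv::VideoCapture on the
// other PC. The caps after udpsrc must match what rtph264pay produced.
const std::string receiver =
    "udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 "
    "! rtph264depay ! avdec_h264 ! videoconvert ! appsink";
```

On the receiving side this would be opened as cv::VideoCapture cap(receiver, cv::CAP_GSTREAMER); and read frame by frame with cap.read(frame).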

H264 streaming to a website using GStreamer-1.0

I am very new to the whole GStreamer thing, so I would be happy if you could help me.
I need to stream a near-zero-latency video signal from a webcam to a server and then be able to view the stream on a website.
The webcam is attached to a Raspberry Pi 3, because there are space constraints on the mounting platform. As a result of using the Pi, I really can't transcode the video on the Pi itself. I therefore bought a Logitech C920 webcam, which can output a raw h264 stream.
So far I have managed to view the stream on my Windows machine, but I haven't managed to get the website part working.
My achievements so far:
Sender:
gst-launch-1.0 -e -v v4l2src device=/dev/video0 ! video/x-h264,width=1920,height=1080,framerate=30/1 ! rtph264pay pt=96 config-interval=5 mtu=60000 ! udpsink host=192.168.0.132 port=5000
My understanding of this command: take the signal of video device 0, which is an h264 stream with a certain width, height and framerate; pack it into RTP packets with an MTU high enough to avoid artefacts; encapsulate the RTP packets in UDP; and stream them to an IP address and port.
Receiver:
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false text-overlay=false
My understanding of this command: receive UDP packets on port 5000; the application/x-rtp caps say that an RTP packet is inside. I don't know exactly what rtpjitterbuffer does, but it reduces the latency of the video a bit.
rtph264depay says that inside the RTP stream is h264-encoded video. To get the raw data that fpsdisplaysink understands, we decode the h264 signal with avdec_h264.
My next step was to change the receiver sink to a local TCP sink and embed that signal with the following HTML5 tag:
<video width=320 height=240 autoplay>
<source src="http://localhost:#port#">
</video>
If I view the website I can't see the stream, but when I analyse the traffic I can see the video data arriving as plain bytes.
Am I missing a video container like MP4 for my video?
Am I wrong with decoding?
What am I doing wrong?
How can I improve my solution?
How would you solve that problem?
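One hedged pointer on the container question: a browser's <video> tag cannot consume a bare TCP byte stream; the h264 usually has to be wrapped in a container and served over ordinary HTTP, for example as HLS segments. A sketch of what the server-side receiving pipeline could look like, assuming hlssink from gst-plugins-bad is available (the file paths and segment duration are illustrative choices, not requirements):

```cpp
#include <string>

// Receive the RTP/h264 stream and repackage it as HLS (a playlist plus
// MPEG-TS segments) that an ordinary web server can serve to <video> or
// hls.js clients. hlssink and its properties come from gst-plugins-bad.
const std::string hls_pipeline =
    "udpsrc port=5000 ! application/x-rtp,payload=96 ! rtpjitterbuffer "
    "! rtph264depay ! h264parse ! mpegtsmux "
    "! hlssink location=/var/www/segment%05d.ts "
    "playlist-location=/var/www/playlist.m3u8 target-duration=2";
```

This string would be launched on the server with gst-launch-1.0 (or gst_parse_launch); the HTML page then points its source at playlist.m3u8 instead of a raw TCP port.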
Best regards

Error while streaming MJPEG video with RTSP protocol

I want to stream MJPEG video from my Ethernet camera with OpenCV and GStreamer.
I have tried to open the stream with the following GStreamer pipeline:
std::string pipe(
"rtspsrc location=rtsp://192.168.1.219:554/video.pro1 ! application/x-rtp,encoding-name=JPEG,payload=26 ! "
"rtpjpegdepay ! jpegdec ! xvimagesink sync=false ! appsink"
);
cv::VideoCapture cap(pipe, cv::CAP_GSTREAMER);
but the previous code returns me the following error:
OpenCV Error: Unspecified error (GStreamer: cannot find appsink in manual pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/Documents/CameraTests/src/opencv-3.3.0/modules/videoio/src/cap_gstreamer.cpp, line 796
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:
/home/nvidia/Documents/CameraTests/src/opencv-3.3.0/modules/videoio/src/cap_gstreamer.cpp:796: error: (-2) GStreamer: cannot find appsink in manual pipeline
in function cvCaptureFromCAM_GStreamer
I managed to stream the same RTSP source with the H264 and H265 codecs; MJPEG is the only format that doesn't work, so I suppose it is a GStreamer pipeline problem.
Thanks
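The error text itself points at a likely cause: OpenCV's GStreamer backend expects the pipeline to terminate in exactly one appsink, but the string in the question places xvimagesink, which is already a sink, in front of it. A sketch of the string with the display sink removed, keeping the rest of the question's pipeline (a videoconvert is added so appsink receives the BGR frames OpenCV expects):

```cpp
#include <string>

// xvimagesink removed: OpenCV's GStreamer backend needs appsink to be the
// one and only sink element in the pipeline.
const std::string mjpeg_pipeline =
    "rtspsrc location=rtsp://192.168.1.219:554/video.pro1 "
    "! application/x-rtp,encoding-name=JPEG,payload=26 "
    "! rtpjpegdepay ! jpegdec ! videoconvert ! appsink";
```

To preview the stream in a window while debugging, a tee before videoconvert could feed xvimagesink on a separate branch, but for cv::VideoCapture the main branch must end in appsink.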

Using glcolorscale with gstreamer GPU video scaling

I am trying to find a way to do video scaling on the GPU, and the only thing I could find was the glcolorscale filter. I am running GStreamer 1.8.0 on my ARM device and tried to execute the following:
gst-launch-1.0 -v videotestsrc ! "video/x-raw-yuv" | glcolorscale ! ximagesink
This is an example I found in the documentation for glcolorscale, but it returns an error:
"Could not return videotestsrc0 to glcolorscale0"