UE5 OpenCV: Can’t grab VideoCapture - c++

I’m trying to read camera input using the built-in OpenCV library in UE5, but no matter what I do I can’t get cv::VideoCapture::read() to return a frame. cv::VideoCapture::grab() also returns false every time.
The same code works fine on the same machine in a regular C++ project with OpenCV 4.6.0, and the VideoCapture is definitely open and the camera turns on as expected.
Is there something about Unreal’s built-in implementation I need to know about (other than that it uses OpenCV 4.5.5)? I can’t seem to find any info online about this.
This is what my header files look like:
#pragma once
#include "PreOpenCVHeaders.h"
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/imgproc.hpp>
#include "PostOpenCVHeaders.h"
#include "CameraReader.generated.h"
And my Plugin’s Build file:
PublicDependencyModuleNames.AddRange(
    new string[]
    {
        "Core",
        "OpenCVHelper",
        "OpenCV",
        // ... add other public dependencies that you statically link with here ...
    }
);
Using Windows 11 with Kinect 2.0.
I thought maybe I was missing some OpenCV DLLs, since I can only find a custom Unreal version of the OpenCV World DLL, so I tried overriding the default OpenCV plugin and adding the FFmpeg and MSMF DLLs myself, but that didn’t change anything.
Edit: After many attempts at building OpenCV from source, I only managed to get the GStreamer and FFmpeg backends to work.

The OpenCV library that comes with UE5 is not built with FFmpeg or with the Media Foundation library, so it cannot read video from a file or from a camera.
You can try to build your own version of OpenCV with FFmpeg or Media Foundation support and use that instead of the one that comes with UE5.
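If it helps to confirm this from inside the engine, OpenCV 4.x can report at runtime which video I/O backends it was compiled with. A minimal sketch, assuming the same PreOpenCVHeaders/PostOpenCVHeaders wrappers as in the question (the function name and log category are just examples):

#include "CoreMinimal.h"
#include "PreOpenCVHeaders.h"
#include <opencv2/videoio.hpp>
#include <opencv2/videoio/registry.hpp>
#include "PostOpenCVHeaders.h"

// Logs every video I/O backend compiled into the OpenCV library this module links against.
// If neither MSMF nor FFMPEG shows up, VideoCapture cannot open a camera or a video file.
static void LogOpenCVVideoBackends()
{
    for (const auto Backend : cv::videoio_registry::getBackends())
    {
        const FString Name(cv::videoio_registry::getBackendName(Backend).c_str());
        UE_LOG(LogTemp, Log, TEXT("OpenCV videoio backend: %s"), *Name);
    }
}

A stock Windows build of OpenCV typically lists MSMF and DSHOW here; if the UE5-bundled build lists neither, that matches the behaviour described above.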

Related

Files/directories to include in Visual Studio C++ to use CUDA?

I want to use CUDA/GPU in OpenCV in Visual Studio, for example cuda::GpuMat. I successfully built OpenCV with the extra modules and with CUDA enabled.
I tried the following code:
#include <string>
#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/photo/cuda.hpp>
#include <opencv2/photo.hpp>
using namespace std;
using namespace cv;
int main()
{
    string imageName("input.bmp");

    // CPU version
    Mat image = imread(imageName.c_str(), IMREAD_GRAYSCALE);

    // CUDA version
    cuda::GpuMat imageGPU;
    cuda::GpuMat downloadGPU;
    Mat buff;

    imageGPU.upload(image);
    downloadGPU.download(buff);

    imwrite("gpu.bmp", buff);
    return 0;
}
But I get an unhandled exception error.
I originally downloaded OpenCV in C:\Users\me\Downloads\opencv
I then downloaded and installed the latest OpenCV extra modules with CUDA on in
In Property Pages->C/C++->General->Additional Include Directories, I have:
C:\Users\me\Downloads\opencv\build\include\opencv
C:\Users\me\Downloads\opencv\build\include\opencv2
C:\Users\me\Downloads\opencv\build\include\
In Property Pages->Linker->General->Additional Library Directories, I have:
C:\Users\me\Downloads\opencv\build\x64\vc15\lib
and in Property Pages->Linker->Input->Additional Dependencies, I have:
opencv_world343d.lib
opencv_world343.lib
What else am I supposed to include so I can get GpuMat to work properly?
In most cases, yes, but you need to know which library you need to add; it may be cufft.lib, cublas.lib, cudnn.lib, etc. It depends on the functions you use in your code.
OpenCV ships a CMake config file that will set all of this up for you if you use CMake to generate your VS test project. The file, OpenCVConfig.cmake, is in the root of the OpenCV install directory, i.e. the one produced after building OpenCV and running cmake --install (or the equivalent in VS). In your project's CMakeLists.txt you would then call find_package(OpenCV), which locates the OpenCV install and sets up a few variables that you use to link your app.
I can post a sample CMakeLists.txt file if you feel that would help.
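For reference, a minimal sketch of such a CMakeLists.txt might look like this (the project and target names are placeholders):

cmake_minimum_required(VERSION 3.10)
project(opencv_cuda_test)

# find_package() reads OpenCVConfig.cmake from the OpenCV install;
# pass -DOpenCV_DIR=<path to the install> if it is not found automatically.
find_package(OpenCV REQUIRED)

add_executable(opencv_cuda_test main.cpp)

# OpenCV_INCLUDE_DIRS and OpenCV_LIBS are set by OpenCVConfig.cmake.
target_include_directories(opencv_cuda_test PRIVATE ${OpenCV_INCLUDE_DIRS})
target_link_libraries(opencv_cuda_test PRIVATE ${OpenCV_LIBS})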

How to deal with "DNN module was not built with CUDA backend; switching to CPU" warning in C++?

I am trying to run YOLOv3 on Visual Studio 2019 using CUDA 10.2 with cuDNN v7.6.5 on Windows 10 using NVidia GeForce 930M. Here is part of the code I used.
#include <fstream>
#include <sstream>
#include <iostream>
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
using namespace dnn;
using namespace std;
int main()
{
    // Load names of classes
    string classesFile = "coco.names";
    ifstream ifs(classesFile.c_str());
    string line;
    while (getline(ifs, line)) classes.push_back(line);

    // Give the configuration and weight files for the model
    String modelConfiguration = "yolovs.cfg";
    String modelWeights = "yolov3.weights";

    // Load the network
    Net net = readNetFromDarknet(modelConfiguration, modelWeights);
    net.setPreferableBackend(DNN_BACKEND_CUDA);
    net.setPreferableTarget(DNN_TARGET_CUDA);

    // Open the video file
    inputFile = "vid.mp4";
    cap.open(inputFile);

    // Get frame from the video
    cap >> frame;

    // Create a 4D blob from a frame
    blobFromImage(frame, blob, 1 / 255.0, Size(inpWidth, inpHeight), Scalar(0, 0, 0), true, false);

    // Sets the input to the network
    net.setInput(blob);

    // Runs the forward pass to get output of the output layers
    vector<Mat> outs;
    net.forward(outs, getOutputsNames(net));
}
Although I added $(CUDNN)\include;$(cudnn)\include; to Additional Include Directories in both C/C++ and Linker, added CUDNN_HALF;CUDNN; to C/C++ > Preprocessor Definitions, and added cudnn.lib; to Linker > Input, I still get this warning:
DNN module was not built with CUDA backend; switching to CPU
and it runs on the CPU instead of the GPU. Can anyone help me with this problem?
I solved it by using CMake, but I first had to add opencv_contrib and then rebuild OpenCV using Visual Studio. Make sure that WITH_CUDA, WITH_CUBLAS, WITH_CUDNN, OPENCV_DNN_CUDA, and BUILD_opencv_world are checked in CMake.
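As a quick sanity check before running the network, you can ask the rebuilt library at runtime whether CUDA support actually made it into the binary; this is plain OpenCV API, nothing project-specific:

#include <iostream>
#include <opencv2/core/utility.hpp>
#include <opencv2/core/cuda.hpp>

int main()
{
    // Prints the compile-time configuration; look for the "NVIDIA CUDA" and "cuDNN" lines.
    std::cout << cv::getBuildInformation() << std::endl;

    // Number of CUDA devices OpenCV can see; 0 means this build cannot use the GPU.
    std::cout << "CUDA devices: " << cv::cuda::getCudaEnabledDeviceCount() << std::endl;
    return 0;
}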
I had a similar issue happen to me about a week ago, but I was using Python and TensorFlow. Although the language was different from C++, I did get the same error. To fix it, I uninstalled CUDA 10.2 and downgraded to CUDA 10.1. From what I have found, there might be a dependency issue with CUDA, or in your case, OpenCV hasn't added support yet for the latest version of CUDA.
EDIT
After some further research, it seems to be an issue with OpenCV rather than CUDA. Referencing this GitHub thread: if you installed OpenCV with CMake, remove the arch bin versions below 7 from the config file, then rebuild/reinstall OpenCV. If that doesn't work, another option is to remove the CUDA arch bin versions below 5.3 and rebuild.

OpenCV on C++/Namespace issue?

I am new to C++, so I am having some trouble using OpenCV in my C++ project. I'm using Xcode as an IDE.
I used brew to install OpenCV with these two commands:
brew install opencv3 --with-ffmpeg --with-tbb --with-contrib
brew reinstall opencv3 --HEAD --with-python3 --with-ffmpeg --with-tbb --with-contrib
I checked the paths to add to my project to load the library, using recursive search, so in Xcode I added the following to both the header search paths and the library search paths:
/usr/local/Cellar/**
I also tried to install it another way, but still got the same issue:
brew install opencv
And adding the path to:
/usr/local/include/**
Everything seems to work, since the library is detected, but the import is not working: I get namespace errors inside the OpenCV headers, for instance:
No type named 'unique_ptr' in namespace 'std'
No member named 'allocator_traits' in namespace 'std'; did you mean 'allocator_arg_t'?
I checked on the internet, and it might be due to the C++ language dialect or standard library, but I use GNU++14 and libc++. From what I found it should work with that configuration, but I still get the errors. Do you have any ideas?
EDIT: I'm not even trying to use OpenCV yet; I just added the include and printed a hello:
#include <iostream>
#include "cv.h"

int main(int argc, char *argv[]) {
    std::cout << "hello";
}
I also tried cv.hpp instead of cv.h, but it's still not working.
Thanks a lot!
I don't think you are using the correct #include paths. If you look at the OpenCV examples, you need the following for OpenCV 3.0 to open an image:
#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui/highgui.hpp>
None of these are like the headers that you have, which are likely for older versions.
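For example, a minimal program using those headers (the image file name is just a placeholder) would be:

#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Load an image from disk; "image.png" is a placeholder path.
    cv::Mat img = cv::imread("image.png", cv::IMREAD_COLOR);
    if (img.empty())
    {
        return 1; // file missing or unreadable
    }
    cv::imshow("preview", img);
    cv::waitKey(0);
    return 0;
}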
This tutorial looks like a very sensible one for getting up and running with Xcode, and the example at the bottom looks like a better starting point.

cv has no member BackgroundSubtractorMOG

I am new to opencv and followed instructions to install it as described here:
http://docs.opencv.org/doc/tutorials/introduction/windows_install/windows_install.html#windows-installation
I used the section "Installation by Making Your Own Libraries from the Source Files", which worked well (using Visual Studio 2013). I am able to run basic commands, like reading an image, writing an image, running edge detection, processing video, etc.
But now I tried to use BackgroundSubtractorMOG and I get the error that BackgroundSubtractorMOG is not a member of cv. The simplest code is below and I don't know where to start. Am I missing something in my installation? Any ideas?
#include "stdafx.h"
#include<opencv2/opencv.hpp>
int main()
{
cv::BackgroundSubtractorMOG bg;
return 0;
}
With OpenCV 3.0, BackgroundSubtractorMOG was moved to the opencv_contrib repo.
To use the remaining BackgroundSubtractorMOG2 or BackgroundSubtractorKNN, you'd have to use:
Ptr<BackgroundSubtractorMOG2> bgm = createBackgroundSubtractorMOG2(...);
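A minimal sketch of how that is typically used on a video stream (the input file name is a placeholder):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("input.mp4"); // placeholder path
    cv::Ptr<cv::BackgroundSubtractorMOG2> bgm = cv::createBackgroundSubtractorMOG2();

    cv::Mat frame, fgMask;
    while (cap.read(frame))
    {
        bgm->apply(frame, fgMask); // foreground mask for the current frame
        cv::imshow("foreground", fgMask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}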
You forgot to include the header:
#include <background_segm.hpp>
Reference: http://physics.nyu.edu/grierlab/manuals/opencv/classcv_1_1BackgroundSubtractorMOG.html
The path to the header file could be: /opencv2/video/background_segm.hpp

Use OpenCV C++ commands within OpenFrameworks Xcode OSX Project

I would like to use OpenFrameworks for an OSX application I am building. However, I need to include some existing code that uses C++ OpenCV commands, e.g. cv::imread().
The Xcode linker throws the error Undefined symbols for architecture i386: "cv::imread(std::string const&, int)". At first I tried to use the existing OpenCV code that is in ofxOpenCv, then fell back to including the OpenCV framework as I had in previous non-OpenFrameworks projects. Neither approach solved the linking problem.
As far as I can tell, the problem is that OpenCV is compiled against the libc++ standard library, while OpenFrameworks is compiled against libstdc++.
This presentation shows how to use C++ OpenCV commands within OpenFrameworks, but it is for Windows and not a detailed account.
This SO question implies that OpenCV can be recompiled against libstdc++, but the solution given is for iOS and the referenced makefile does not exist for OSX/Linux.
Is it at all possible to use OpenFrameworks with the OpenCV C++ commands under OSX?
Please use Kyle McDonald's ofxCv addon; it's a much nicer interface to OpenCV from openFrameworks. It includes utilities to convert between cv::Mat and ofImage, for example, and helpers to draw a cv::Mat straight in your OF application:
#include "testApp.h"
using namespace ofxCv;
using namespace cv;
Mat img;
void testApp::setup() {
img = imread("yourImage.png");//make sure the path is correct
}
void testApp::update() {
}
void testApp::draw() {
drawMat(img,0, 0);
}
Also, your Undefined symbols for architecture i386: error means you probably linked against the x64 (64-bit) OpenCV .dylib files, not the i386 (32-bit) ones.