I want to use CUDA/GPU with OpenCV in Visual Studio, for example cuda::GpuMat. I successfully built OpenCV with the extra modules and CUDA enabled.
I tried the following code
#include <string>
#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/photo/cuda.hpp>
#include <opencv2/photo.hpp>
using namespace std;
using namespace cv;
int main() {
    string imageName("input.bmp");

    // CPU version
    Mat image = imread(imageName.c_str(), IMREAD_GRAYSCALE);

    // CUDA version
    cuda::GpuMat imageGPU;
    cuda::GpuMat downloadGPU;
    Mat buff;

    imageGPU.upload(image);
    downloadGPU.download(buff);

    imwrite("gpu.bmp", buff);
    return 0;
}
But I get an unhandled exception error.
I originally downloaded OpenCV in C:\Users\me\Downloads\opencv
I then downloaded and installed the latest OpenCV extra modules with CUDA enabled in
In Property Pages->C/C++->General->Additional Include Directories, I have:
C:\Users\me\Downloads\opencv\build\include\opencv
C:\Users\me\Downloads\opencv\build\include\opencv2
C:\Users\me\Downloads\opencv\build\include\
In Property Pages->Linker->General->Additional Library Directories, I have:
C:\Users\me\Downloads\opencv\build\x64\vc15\lib
and in Property Pages->Linker->Input->Additional Dependencies, I have:
opencv_world343d.lib
opencv_world343.lib
What else am I supposed to include so I can get GpuMat to work properly?
In most cases, yes, but you need to know which library you need to add; it may be cufft.lib, cublas.lib, cudnn.lib, etc. It depends on which functions you use in your code.
OpenCV ships a CMake config file that will set all of this up for you if you use CMake to generate your VS test project. This file will be in the root of the OpenCV install directory, i.e. after building OpenCV and running cmake --install or the equivalent in VS. The file is OpenCVConfig.cmake, and it is picked up from your project's CMakeLists.txt: you call find_package(OpenCV), which locates the OpenCV install and sets up a few variables, which you then use to link your app against OpenCV.
I can post a sample CMakeLists.txt file if you feel that would help.
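For illustration, a minimal sketch of such a CMakeLists.txt might look like the following (the project name, source file name, and the commented-out OpenCV_DIR path are placeholders, not taken from your setup):

cmake_minimum_required(VERSION 3.10)
project(gpumat_test)

# If OpenCV is not found automatically, point CMake at the folder containing OpenCVConfig.cmake, e.g.:
# set(OpenCV_DIR "C:/Users/me/Downloads/opencv/build")

find_package(OpenCV REQUIRED)

add_executable(gpumat_test main.cpp)
target_include_directories(gpumat_test PRIVATE ${OpenCV_INCLUDE_DIRS})
target_link_libraries(gpumat_test PRIVATE ${OpenCV_LIBS})

find_package(OpenCV) reads OpenCVConfig.cmake and fills in OpenCV_INCLUDE_DIRS and OpenCV_LIBS, so whatever CUDA-enabled modules (or opencv_world) you built get linked without listing them by hand.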
I am trying to run YOLOv3 on Visual Studio 2019 using CUDA 10.2 with cuDNN v7.6.5 on Windows 10, with an NVIDIA GeForce 930M. Here is part of the code I used.
#include <fstream>
#include <sstream>
#include <iostream>
#include <string>
#include <vector>
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

using namespace cv;
using namespace dnn;
using namespace std;

// These were declared elsewhere in the full program; declared here so the excerpt is self-contained
vector<string> classes;
string inputFile;
VideoCapture cap;
Mat frame, blob;
int inpWidth = 416;   // assumed network input width (YOLOv3 commonly uses 416)
int inpHeight = 416;  // assumed network input height

// Helper (as in the standard OpenCV YOLO sample): names of the unconnected output layers
vector<String> getOutputsNames(const Net& net)
{
    vector<String> names;
    vector<int> outLayers = net.getUnconnectedOutLayers();
    vector<String> layersNames = net.getLayerNames();
    names.resize(outLayers.size());
    for (size_t i = 0; i < outLayers.size(); ++i)
        names[i] = layersNames[outLayers[i] - 1];   // layer indices are 1-based
    return names;
}

int main()
{
    // Load names of classes
    string classesFile = "coco.names";
    ifstream ifs(classesFile.c_str());
    string line;
    while (getline(ifs, line)) classes.push_back(line);

    // Give the configuration and weight files for the model
    String modelConfiguration = "yolov3.cfg";
    String modelWeights = "yolov3.weights";

    // Load the network
    Net net = readNetFromDarknet(modelConfiguration, modelWeights);
    net.setPreferableBackend(DNN_BACKEND_CUDA);
    net.setPreferableTarget(DNN_TARGET_CUDA);

    // Open the video file
    inputFile = "vid.mp4";
    cap.open(inputFile);

    // Get frame from the video
    cap >> frame;

    // Create a 4D blob from a frame
    blobFromImage(frame, blob, 1 / 255.0, Size(inpWidth, inpHeight), Scalar(0, 0, 0), true, false);

    // Sets the input to the network
    net.setInput(blob);

    // Runs the forward pass to get output of the output layers
    vector<Mat> outs;
    net.forward(outs, getOutputsNames(net));
    return 0;
}
Although I added $(CUDNN)\include;$(cudnn)\include; to Additional Include Directories under both C/C++ and Linker, added CUDNN_HALF;CUDNN; to C/C++ > Preprocessor Definitions, and added cudnn.lib; to Linker > Input, I still get this warning:
DNN module was not built with CUDA backend; switching to CPU
and it runs on the CPU instead of the GPU. Can anyone help me with this problem?
I solved it by using CMake, but I first had to add opencv_contrib and then rebuild OpenCV using Visual Studio. Make sure that WITH_CUDA, WITH_CUBLAS, WITH_CUDNN, OPENCV_DNN_CUDA, and BUILD_opencv_world are checked in CMake.
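For illustration only, the equivalent command-line configuration might look something like this (the generator and the source/opencv_contrib paths are placeholders for your own setup):

cmake -D WITH_CUDA=ON -D WITH_CUBLAS=ON -D WITH_CUDNN=ON ^
      -D OPENCV_DNN_CUDA=ON -D BUILD_opencv_world=ON ^
      -D OPENCV_EXTRA_MODULES_PATH=C:/path/to/opencv_contrib/modules ^
      -G "Visual Studio 16 2019" C:/path/to/opencv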
I had a similar issue happen to me about a week ago, but I was using Python and TensorFlow. Although the language was different from C++, I did get the same error. To fix this, I uninstalled CUDA 10.2 and downgraded to CUDA 10.1. From what I have found, there might be a dependency issue with CUDA, or, in your case, OpenCV may not yet support the latest version of CUDA.
EDIT
After some further research, it seems to be an issue with OpenCV rather than CUDA. Referencing this GitHub thread: if you installed OpenCV with CMake, remove the arch bin versions below 7 from the config file, then rebuild/reinstall OpenCV. If that doesn't work, another option is to remove the CUDA arch bin versions < 5.3 and rebuild.
I am making a simple C++ program with the OpenCV library included. The Eclipse IDE recognises OpenCV commands and library locations, but when I try to build the project the compiler gives an external error, referring to opencv.hpp or core.hpp calling an "opencv2/core.hpp" path, which does not exist in the OpenCV folder. I figured out that the problem is linked to the way core.hpp is included, but the library files are read-only.
From what I saw in the opencv.hpp file, this relative "opencv2/[module].hpp" reference is used not only for core, but for all the other modules as well. In fact, there is no opencv2 folder at all inside the directory OpenCV is installed to.
I've tried reinstalling and remaking OpenCV with different make arguments, using a different IDE, and adding direct search folders in Eclipse. The problem apparently lies in the files themselves, or in the way the library gets installed on the system. The problem persists on both my main Ubuntu machine and the Armbian Orange Pi.
I get this error when trying to include any OpenCV header that contains
#include "opencv2/[opencv module].hpp"
in it.
As a result, compilation is terminated with the error message: "/usr/local/include/opencv4/opencv2/opencv.hpp:52:28: fatal error: opencv2/core.hpp: No such file or directory"
Edit 1: the GCC C++ compiler options are -Iusr/local/include/opencv4/opencv2 -O3 -Wall -c -fmessage-length=0, and the linker options are -L/usr/local/lib.
The code is a simple displayImage example:
#include <opencv4/opencv2/opencv.hpp>
#include <opencv4/opencv2/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
int main( int argc, char** argv )
{
    Mat image;
    image = imread( argv[1], 1 );
    namedWindow( "Display Image", CV_WINDOW_AUTOSIZE );
    imshow( "Display Image", image );
    waitKey(0);
    return 0;
}
Edit 2: $ pkg-config --libs opencv does not see OpenCV as installed on the system, although I've made sure to run make install and ldconfig on the path. This may be a sign of a faulty installation, but it is just a side note, not entirely related to the main problem. I have tried reinstalling, including to different folders, but this persists as well, along with the main problem.
Apparently, @sgarizvi's comment was the answer: I just needed to set the include path to -I/usr/local/include/opencv4 and it worked. After that, the error was fixed.
I am replying to my own question to close the case, as I cannot upvote/verify a comment.
In your case, since your include path is /usr/local/include/opencv4/opencv2, replace the first three lines
#include <opencv4/opencv2/opencv.hpp>
#include <opencv4/opencv2/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
by
#include <opencv.hpp>
#include <imgproc.hpp>
#include <highgui.hpp>
I am new to C++, so I am having some trouble using OpenCV in my C++ project. I'm using Xcode as an IDE.
So I used brew to install OpenCV with the following two commands:
brew install opencv3 --with-ffmpeg --with-tbb --with-contrib
brew reinstall opencv3 --HEAD --with-python3 --with-ffmpeg --with-tbb --with-contrib
I checked which path to add to my project to load the library recursively, so in Xcode I added this path for both the header search paths and the library search paths:
/usr/local/Cellar/**
I also tried to install it another way, but still got the same issue:
brew install opencv
And adding the path to:
/usr/local/include/**
Everything seems to work since the library is detected, but the include does not work because I get namespace errors in the OpenCV files, for instance:
No type named 'unique_ptr' in namespace 'std'
No member named 'allocator_traits' in namespace 'std'; did you mean 'allocator_arg_t'?
I checked on the internet and it might be due to the C++ language dialect or standard library, but I use GNU++14 and libc++. From what I found it should work with that configuration, but I still get the errors. Do you have any ideas?
EDIT: I'm not even trying to use it yet; I just added the include and printed a hello:
#include <iostream>
#include "cv.h"

int main(int argc, char *argv[]){
    std::cout << "hello";
}
I also tried cv.hpp instead of cv.h; it's still not working.
Thanks a lot !
I don't think you are using the correct #include paths. If you look at the OpenCV examples, you need the following for OpenCV 3.0 to open an image:
#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui/highgui.hpp>
None of these are like the headers that you have, which are likely for older versions.
This tutorial looks like a very sensible one for getting up and running with Xcode, and the example at the bottom looks like a better starting point.
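For illustration, here is a minimal sketch that loads and shows an image with those headers (the filename is just a placeholder):

#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main()
{
    // "test.jpg" is a placeholder filename
    cv::Mat image = cv::imread("test.jpg", cv::IMREAD_COLOR);
    if (image.empty())
    {
        std::cout << "Could not open the image" << std::endl;
        return 1;
    }
    cv::imshow("Display window", image);
    cv::waitKey(0);
    return 0;
}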
First, I am new to C++ and dlib but I have successfully built the examples and started working on my own project. Things have been progressing smoothly until I try to save a jpeg. Attempting to compile code using dlib::save_jpeg throws a linker error and I cannot track down the solution. I have attempted to add #define DLIB_JPEG_SUPPORT above and below my #includes but no luck. I am using XCode and used cmake -G "Xcode" .. when I compiled the examples. Relevant code below. Since I am on a Mac, I have added header and library search paths for X11 (for dlib gui), OpenCV, and DLIB. I have libjpeg.dylib and linked that to my project with and without #define DLIB_JPEG_SUPPORT in main.cpp. Is there some other build setting I need to specify? Thank you in advance for your help.
Finally, I have seen other questions and pages about dlib and libjpeg issues but no luck yet. And yes I have source.cpp included in the project.
// the standard stuff
#include <string>
#include <vector>
#include <iostream>
#include <unistd.h>
// opencv mat object
#include <opencv2/opencv.hpp>
// dlib
#include <dlib/opencv.h>
#include <dlib/image_io.h>
#include <dlib/gui_widgets.h>
#include <dlib/image_transforms.h>

int main(int argc, const char * argv[]) {
    // retrieving images from a TCP connection (omitted here);
    // rawImage, image_id and img stand in for data produced by that code
    std::vector<unsigned char> rawImage;
    int image_id = 0;
    cv::Mat img;

    // decode data stream
    img = cv::imdecode(rawImage, CV_LOAD_IMAGE_COLOR);

    // perform image processing
    dlib::cv_image<dlib::bgr_pixel> d_image(img);

    // finally save the result to jpg
    std::string fname = argv[1] + std::to_string(image_id) + ".jpg";
    dlib::save_jpeg(d_image, fname); // <- line that won't compile
    return 0;
}
After quite a bit of struggling and side-by-side comparison, I finally found the issue. In Xcode, go to Build Settings and modify Other Linker Flags, Runpath Search Paths, and Other C++ Flags to match the compiled and working face_ex example. I wholesale copied all of those flags, included the missing libjpeg.dylib, and was able to get things running. It should look something like this for the C++ flags. Hope this helps the next person.
I am new to opencv and followed instructions to install it as described here:
http://docs.opencv.org/doc/tutorials/introduction/windows_install/windows_install.html#windows-installation
I used the section "Installation by Making Your Own Libraries from the Source Files", which worked well (using Visual Studio 2013). I am able to run basic commands, like read an image, write an image, run edge detection, video processing etc.
But now I tried to use BackgroundSubtractorMOG and I get the error that BackgroundSubtractorMOG is not a member of cv. The simplest code is below and I don't know where to start. Am I missing something in my installation? Any ideas?
#include "stdafx.h"
#include<opencv2/opencv.hpp>
int main()
{
cv::BackgroundSubtractorMOG bg;
return 0;
}
With OpenCV 3.0, BackgroundSubtractorMOG was moved to the opencv_contrib repo.
To use the remaining BackgroundSubtractorMOG2 or BackgroundSubtractorKNN, you'd have to use:
Ptr<BackgroundSubtractorMOG2> bgm = createBackgroundSubtractorMOG2(...);
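For illustration, a minimal sketch of how that is typically used on a video (the filename and window name are placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    // "vtest.avi" is a placeholder input video
    cv::VideoCapture cap("vtest.avi");
    if (!cap.isOpened())
        return 1;

    // create the MOG2 background subtractor with default parameters
    cv::Ptr<cv::BackgroundSubtractorMOG2> bgm = cv::createBackgroundSubtractorMOG2();

    cv::Mat frame, fgMask;
    while (cap.read(frame))
    {
        bgm->apply(frame, fgMask);        // update the model and get the foreground mask
        cv::imshow("foreground mask", fgMask);
        if (cv::waitKey(30) == 27)        // stop on Esc
            break;
    }
    return 0;
}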
You forgot to include the header:
#include <opencv2/video/background_segm.hpp>
Reference: http://physics.nyu.edu/grierlab/manuals/opencv/classcv_1_1BackgroundSubtractorMOG.html
path to the header file could be: /opencv2/video/background_segm.hpp