OpenCV GPU SURF ROI error? - c++

I am using VS2012 C++/CLI on a Win 8.1 64 bit machine with OpenCV 3.0.
I am trying to implement the GPU version of SURF.
When I do not specify an ROI, I have no problem. Thus, the following line gives no problem and detects keypoints in the full image:
surf(*dImages->d_gpuGrayFrame, cuda::GpuMat(), Images->SurfKeypoints);
However, efforts to specify an ROI cause a crash. For example, I specify the ROI by Top,Left and Bottom,Right coordinates (which I know work in non-GPU code). In the GPU code, the following causes a crash if the ROI is any smaller than the source image itself (no crash if the ROI is the same size as the source image).
int theight = (Images->LoadFrameClone.rows - Images->CropBottom) - Images->CropTop;
int twidth = (Images->LoadFrameClone.cols - Images->CropRight) - Images->CropLeft;
Rect tRect = Rect(Images->CropLeft, Images->CropTop, twidth, theight);
cuda::GpuMat tmask = cuda::GpuMat(*dImages->d_gpuGrayFrame, tRect);
surf(*dImages->d_gpuGrayFrame, tmask, Images->SurfKeypoints); // fails on this line
I know that tmask is non-zero in size and that the underlying image is correct. As far as I can tell, the only issue is specifying an ROI in the SURF GPU call. Any leads on why this may be happening?
Thanks

I experienced the same problem in OpenCV 3.1. Presumably the SURF algorithm doesn't work with images whose step (stride) has been changed by taking an ROI view. I have not tried masking to see if this makes a difference.
A workaround is to just copy the ROI into a separate, contiguous GpuMat. Device-to-device copies are almost free on the GPU (my GTX 780 does them at 142 GB/s), which makes this hack a bit less odious.
GpuMat src; // filled with some image
GpuMat srcRoi = GpuMat(src, roi); // roi within src
GpuMat dst;
srcRoi.copyTo(dst);
surf(dst, GpuMat(), KeypointsGpu, DescriptorsGpu);
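If you would rather keep the full-size image and restrict detection with an explicit mask instead (untested here, as noted above), a minimal sketch could look like the following, with roi being the same rectangle as before:
// Build a full-size CV_8UC1 mask on the host, white inside the ROI and black elsewhere
cv::Mat cpuMask = cv::Mat::zeros(src.size(), CV_8UC1);
cpuMask(roi).setTo(cv::Scalar(255));
// Upload it and pass it as the mask argument (it must be the same size as the image)
GpuMat gpuMask;
gpuMask.upload(cpuMask);
surf(src, gpuMask, KeypointsGpu, DescriptorsGpu);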

Related

OpenCV blur creating lines/blob-like areas

I'm using OpenCV with a very large kernel size (50 and higher) to get a very exaggerated blur effect.
I am getting these weird line/area like effects on the generated imagery. Please refer to the wall area on the image below.
Is this something that is inherent to blurring at a very high kernel size?
What would be some techniques to smooth out and eliminate this effect?
I am using OpenFrameworks with the ofxCV addon. The relevant part of my code is just
blur(camScaled, 51);
If you are not familiar with it, ofxCV is essentially a bridge that maps back to this OpenCV call in the end:
CV_EXPORTS_W void blur( InputArray src, OutputArray dst,
Size ksize, Point anchor=Point(-1,-1),
int borderType=BORDER_DEFAULT );
This effect is pretty normal, because blurring means averaging the pixel values over the kernel.
You should try an edge-preserving filter such as the bilateral filter.
If you still want to use a "classic" blur, you could try a median blur instead of the mean (box) blur; that should at least give you a more attenuated result.
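For reference, a minimal OpenCV sketch of both suggestions (the parameter values are illustrative assumptions, not tuned, and src is assumed to be an 8-bit image):
// Edge-preserving alternative: the bilateral filter smooths flat regions while keeping edges sharp
cv::Mat smoothed;
cv::bilateralFilter(src, smoothed, 9, 75, 75);
// Or a median blur (kernel size must be odd), which attenuates the blocky artifacts of a large box blur
cv::medianBlur(src, smoothed, 51);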

EasyAR access Camera Frames as OpenCV Mat

I'm using EasyAR to develop an app on Android using C++, and I'm trying to use OpenCV with it. What I'm trying to achieve is: get the frames that EasyAR grabs from the camera as a Mat, do some processing with OpenCV, then return the frames to the view.
Why do all that? Simply put, I'm only after cross-platform access to the EasyAR camera frames (I think it's really fast; I just built the HelloAR sample).
In the HelloAR sample, there is a line
auto frame = streamer->peek();
Is there a way to convert this so it can be used in OpenCV?
Is there an alternative way to access camera frames from C++ on both iOS & Android (min API 16)?
Your help is appreciated, thank you.
Here is the samples link; I'm using HelloAR:
http://s3-us-west-2.amazonaws.com/easyar/sdk/EasyAR_SDK_2.0.0_Basic_Samples_Android_2017-05-29.tar.xz
Okay, I managed to find a solution for this.
Simply put, frame (class Frame in EasyAR) contains a vector of images (probably different images for the same frame). Accessing that vector returns an Image object with a data() method (a byte array), and that can be used to initialize a Mat in OpenCV.
Here is the code, to clarify for anyone searching for the same thing:
unsigned char* imageBuffer = static_cast<unsigned char*>(frame->images().at(0)->data());
int height = frame->images()[0]->height(); // height of the image
int width = frame->images()[0]->width(); // width of image
// The obtained frame is YUV (NV21) by default, so convert it to RGBA
cv::Mat _yuv(height+height/2, width, CV_8UC1, imageBuffer);
cv::cvtColor(_yuv, _yuv, CV_YUV2RGBA_NV21);
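A small optional refinement: converting into a separate output Mat keeps _yuv as a cheap wrapper over the EasyAR buffer instead of letting cvtColor reallocate it, e.g.:
cv::Mat _yuv(height + height / 2, width, CV_8UC1, imageBuffer); // wraps the EasyAR buffer, no copy
cv::Mat rgba;
cv::cvtColor(_yuv, rgba, CV_YUV2RGBA_NV21); // rgba owns its own memory
// ... do any OpenCV processing on rgba here ...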

OpenCV result changes between Debug / Release and on other machine

I have a program that tries to detect rectangular objects on an image (i.e. solar modules). For that I use c++ with opencv 3 and Visual Studio 2015 Update 1.
In general my program uses GaussianBlur -> morphologyEx -> Canny -> HoughLines -> findContours -> approxPolyDP. Since I have problems finding optimal parameters, I tried running many parameter combinations in order to get an optimal parameter setting.
The problem I have is that I get different results between "Debug in Visual Studio", "Debug by using the generated .exe", "Release in Visual Studio", "Release by using the generated .exe". Additionally running the .exe files on other machines once again changes the result.
Running the program on the same machine with the same settings does not change the result (i.e. it seems to be deterministic). There is also no concurrency in the program (except there is some in opencv I am not aware of).
Any idea why there is such a huge mismatch between the different settings (parameter combinations that detect a solar module with 99% accuracy in one setting do not detect the module at all in another)?
EDIT:
I tried to create a minimal working example (see below) where I included the code up to the first mismatch (perhaps there are more mismatches later on). I tried to initialize every variable I found.
The identifier parameterset is an instance of an object that contains all parameters I modify to find the optimum. I checked that those parameters are all initialized and identical in Debug and Release.
With this code, the first 3 images created by writeIntermediateResultImage (which basically just uses the OpenCV method imwrite and only specifies the path the image is stored to) are identical, but the morphology image differs (by 13.43% according to an online image comparison tool I found). One difference is that the left and upper edge of the morphology image in Release mode is black for some pixels, but there are additional differences within the image, too.
Edit: It seems that when running the code with the generated .exe file in Release mode, the morphology algorithm isn't applied at all; the image is just shifted left and down, leaving a black edge at the top and bottom.
Edit: This shift seems to depend on the machine it is running on. On my notebook I get the shift without morphology being applied, and on my desktop morphology is applied without a shift and black edges.
void findSquares(const Mat& image, vector<vector<Point> >& squares, string srcName)
{
    // 1) Get HSV channels
    Mat firstStepResult(image.size(), CV_8U);
    Mat hsvImage(image.size(), CV_8UC3);

    // Convert to HSV space
    cvtColor(image, hsvImage, CV_BGR2HSV);
    writeIntermediateResultImage("HSV.jpg", hsvImage, srcName);

    // Transform Value channel of HSV image to greyscale
    Mat channel0Mat(image.size(), CV_8U);
    Mat channel1Mat(image.size(), CV_8U);
    Mat channel2Mat(image.size(), CV_8U);
    Mat hsv_channels[3]{ channel0Mat, channel1Mat, channel2Mat };
    split(hsvImage, hsv_channels);
    firstStepResult = hsv_channels[parameterset.hsvChannel];
    writeIntermediateResultImage("HSVChannelImage.jpg", firstStepResult, srcName);

    // 2) Gaussian Denoising
    Mat gaussImage = firstStepResult;
    GaussianBlur(gaussImage, gaussImage, Size(parameterset.gaussKernelSize, parameterset.gaussKernelSize), 0, 0);
    writeIntermediateResultImage("GaussianBlur.jpg", gaussImage, srcName);

    // 3) Morphology
    Mat morphologyImage = gaussImage;
    morphologyEx(morphologyImage, morphologyImage, parameterset.morphologyOperator, Mat(parameterset.dilateKernelSize, parameterset.dilateKernelSize, 0), cv::Point(-1, -1), parameterset.numMorpholgies);
    writeIntermediateResultImage("Morphology.jpg", morphologyImage, srcName);
}
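For completeness, writeIntermediateResultImage is not shown above; based on the description it does roughly the following (the exact path construction is an assumption):
void writeIntermediateResultImage(const string& fileName, const Mat& image, const string& srcName)
{
    // Build an output path from the source name and the step name, then write the image to disk
    string path = srcName + "_" + fileName;
    imwrite(path, image);
}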
I also checked the library paths and the right libraries are used in the right compile mode (Debug with 'd', Release without).
I found the error in my code and I now get the same result in each configuration. The problem was the line that used the morphology operator.
morphologyEx(morphologyImage, morphologyImage, parameterset.morphologyOperator, Mat(parameterset.dilateKernelSize, parameterset.dilateKernelSize, 0), cv::Point(-1, -1), parameterset.numMorpholgies);
Even though the created Mat object (Mat(parameterset.dilateKernelSize, parameterset.dilateKernelSize, 0)) appeared to work as a structuring element in Debug, it messed up everything in Release: this constructor leaves the pixel data uninitialized, so the kernel contents are whatever happens to be in memory and can differ between builds and machines.
Using
getStructuringElement(MORPH_RECT, Size(parameterset.dilateKernelSize, parameterset.dilateKernelSize))
as the structuring element did the trick.
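Put together, the corrected call looks like this (variable names as in the code above):
// A properly initialized rectangular kernel instead of an uninitialized Mat
Mat kernel = getStructuringElement(MORPH_RECT, Size(parameterset.dilateKernelSize, parameterset.dilateKernelSize));
morphologyEx(morphologyImage, morphologyImage, parameterset.morphologyOperator, kernel, cv::Point(-1, -1), parameterset.numMorpholgies);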

Taking a screenshot of a particular area

I'm looking for a way to take a screenshot of a particular area of the screen in C++ (so not the whole screen). It should then be saved as .png, .jpg, or whatever, to use it with another function afterwards.
Also, I am going to use it, somehow, with OpenCV. Thought I'd mention that; maybe it's a helpful detail.
OpenCV cannot take screenshots of your computer directly. You will need a different framework/method to do this. @Ben is correct, this link would be worth investigating.
Once you have read this image in, you will need to store it in a cv::Mat so that you are able to perform OpenCV operations on it.
In order to crop an image in OpenCV, the following code snippet would help.
CvMat* imagesource; // legacy C-style image obtained from your screenshot method
// Transform it into the C++ cv::Mat format
cv::Mat image = cv::cvarrToMat(imagesource);
// Setup a rectangle to define your region of interest
cv::Rect myROI(10, 10, 100, 100);
// Crop the full image to that image contained by the rectangle myROI
// Note that this doesn't copy the data
cv::Mat croppedImage = image(myROI);
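If you need the crop as an independent image (e.g. to save it as a .png or .jpg, as mentioned in the question), clone it and write it out; the file name below is just an example:
// Copy the ROI into its own buffer and save it to disk
cv::Mat croppedCopy = croppedImage.clone();
cv::imwrite("cropped.png", croppedCopy);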

Face detector in OpenCV becoming slow when there is no face in the frame

I am trying to implement a face detection program with webcam input using the Viola-Jones face detector in OpenCV, and it works fine except that it becomes about 10 times slower when no face is detected in the frame.
This is really weird, because if there is no face in the frame, most of the windows will be rejected in the earlier stages of the cascade, so I would guess it should be slightly faster (NOT slower!).
I am using the detectMultiScale function (not cvHaarDetectObjects) for various reasons, but I don't think this should matter in any way.
Can anyone give me an advice on this problem please?
Thanks in advance.
Did you try adding the min and max size of the face rectangle to be detected?
You can also check your pyramid scale value: it must be > 1, and if detection is too slow, try using a higher value; the detection will not be as good, but it will be faster.
cv::CascadeClassifier cascade;
// ... init classifier
cv::Mat grayImage;
// ... set image
std::vector<cv::Rect> results;
cv::Size minSize(60, 60);
cv::Size maxSize(80, 80);
int minNeighbors = 3;
float pyramidScale = 1.1f;
cascade.detectMultiScale(grayImage, results, pyramidScale, minNeighbors,
                         0, minSize, maxSize);