I am using the ORB detector to detect keypoints in the frames of a video, but it gives me the following error:
OpenCV Error: Assertion failed (img.type() == (((0) & ((1 << 3) - 1)) + (((1)-1) << 3))) in detectAsync
The CPU version of the ORB detector works fine, but somehow the GPU version is not able to detect the keypoints of the image. I have also tried the CUDA FastFeatureDetector, but it fails as well.
I am attaching my code below:
int main()
{
    cv::VideoCapture input("/home/admin/Pictures/cars.mp4");

    // Create image matrices
    cv::Mat img, desc;
    cv::cuda::GpuMat obj1;

    // Create vector of keypoints
    std::vector<cv::KeyPoint> keypoints;

    // Create keypoint detector
    cv::Ptr<cv::cuda::ORB> detector = cv::cuda::ORB::create();

    for(;;)
    {
        if(!input.read(img))
            break;

        obj1.upload(img);
        detector->detect(obj1, keypoints, cv::cuda::GpuMat());
        obj1.download(desc);

        // Draw a circle at each keypoint
        for(size_t i = 0; i < keypoints.size(); i++)
            cv::circle(desc, keypoints[i].pt, 2, cv::Scalar(0, 0, 255), 1);

        // Display image
        cv::imshow("img", desc);

        // NOTE: press any key to run the next frame, wait for a key press
        char c = cv::waitKey();
        if(c == 27) // 27 is the ESC key code
            break;
    }
}
The main problem is that the detector takes CV_8UC1 as the input format for the Mat, but the format of my image is CV_8UC3. I have tried converting the image using img.convertTo(img1, CV_8UC1), but it is still unable to process it and throws the error: OpenCV Error: Bad flag (parameter or structure field) (Unrecognized or unsupported array type) in cvGetMat.
You have to convert your image from CV_8UC3 (a 3-channel image) to CV_8UC1 (a single-channel, grayscale image). Note that img.convertTo() only changes the bit depth, not the number of channels, which is why it did not help. To do the conversion, simply call

cv::cvtColor(img, img, cv::COLOR_BGR2GRAY);

prior to uploading the data to the GPU.
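For completeness, here is a minimal sketch of the loop with the conversion added (it keeps the original variable names, adds a new img_gray Mat, and draws the circles on the original colour frame instead of downloading the grayscale GPU image):

cv::Mat img, img_gray;
cv::cuda::GpuMat obj1;
std::vector<cv::KeyPoint> keypoints;
cv::Ptr<cv::cuda::ORB> detector = cv::cuda::ORB::create();

for(;;)
{
    if(!input.read(img))
        break;

    // CUDA ORB expects a single-channel 8-bit image
    cv::cvtColor(img, img_gray, cv::COLOR_BGR2GRAY);
    obj1.upload(img_gray);
    detector->detect(obj1, keypoints, cv::cuda::GpuMat());

    // Draw the keypoints on the original colour frame
    for(size_t i = 0; i < keypoints.size(); i++)
        cv::circle(img, keypoints[i].pt, 2, cv::Scalar(0, 0, 255), 1);

    cv::imshow("img", img);
    if(cv::waitKey() == 27) // ESC
        break;
}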
Related
I am trying to extract features from an image using SIFT in OpenCV 4.5.1, but when I try to check the result using drawKeypoints() I keep getting this cryptic error:
OpenCV(4.5.1) Error: Assertion failed (!fixedType() || ((Mat*)obj)->type() == mtype) in cv::debug_build_guard::_OutputArray::create, file C:\build\master_winpack-build-win64-vc14\opencv\modules\core\src\matrix_wrap.cpp, line 1147
D:\School\IP2\OpenCVApplication-VS2019_OCV451_basic\x64\Debug\OpenCVApplication.exe (process 6140) exited with code -1.
The problem seems to be with the drawKeypoints() function, but I'm not sure what causes it.
The function:
vector<KeyPoint> extractFeatures(String path) {
    Mat_<uchar> source = imread(path, 0);
    Mat_<uchar> output(source.rows, source.cols);
    vector<KeyPoint> keypoints;

    Ptr<SIFT> sift = SIFT::create();
    sift->detect(source, keypoints);
    drawKeypoints(source, keypoints, output);

    imshow("sift_result", output);
    return keypoints;
}
You are getting an exception because the output argument of drawKeypoints must be a 3-channel colored image, and you are initializing output as a 1-channel (grayscale) image.
When you pass a plain Mat (e.g. Mat output;), the drawKeypoints function creates a new colored matrix automatically. When you pass the derived template matrix class Mat_<uchar>, whose type is fixed to one channel, drawKeypoints raises an exception!
You may replace: Mat_<uchar> output(source.rows, source.cols); with:
Mat_<Vec3b> output(source.rows, source.cols); //Create 3 color channels image (matrix).
Note:
You may also use Mat instead of Mat_:
Mat output; //The matrix is going to be dynamically allocated inside drawKeypoints function.
Note:
My current OpenCV version (4.2.0) has no SIFT support, so I used ORB instead (for testing).
Here is the code sample used for testing:
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
int main()
{
Mat_<uchar> source = imread("graf.png", 0);
Mat_<Vec3b> output(source.rows, source.cols); //Create 3 color channels image (matrix).
vector<KeyPoint> keypoints;
Ptr<ORB> orb = ORB::create();
orb->detect(source, keypoints);
drawKeypoints(source, keypoints, output);
imshow("orb_result", output);
waitKey(0);
destroyAllWindows();
return 0;
}
Result:
I'm creating the visual histograms (following the Bag of Visual Words model) of a dataset of n images using OpenCV.
This is the code:
std::vector<std::string> fileList;
std::unique_ptr<cv::BOWImgDescriptorExtractor> bowDE;
cv::Mat vocabulary; // will be filled through KMeans
...
bowDE->setVocabulary(vocabulary);
cv::Ptr<cv::Feature2D> featureDetectorDescriptor = cv::xfeatures2d::SIFT::create(0, 3, 0.04, 10, 1.6);

for(size_t i = 0; i < n; i++) {
    cv::UMat img;
    cv::Mat word;
    imread(fileList[i], cv::IMREAD_GRAYSCALE).copyTo(img);

    std::vector<cv::KeyPoint> keyPoints;
    featureDetectorDescriptor->detect(img, keyPoints);
    bowDE->compute(img, keyPoints, word);

    histograms.push_back(word);
    assert(histograms.rows == (i + 1)); // this is for testing
}
The problem is that after 4625 images have been inserted, the assert condition is false. I checked keypoints.size() and it is 0: no keypoints are detected!
I tried removing the image that triggered the error, but another image then causes the same error (so the error did not seem to depend on the image).
Why does this happen?
Update: I actually found out that the error is image dependent after all. It seems that with the default parameters cv::SIFT detects 0 keypoints for this image from Caltech101:
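One way to keep the loop from tripping the assert is to handle images for which the detector finds no keypoints. Here is a minimal sketch (whether skipping the BOW computation and pushing an all-zero histogram row is the right policy for your pipeline is an assumption on my part):

for(size_t i = 0; i < n; i++) {
    cv::UMat img;
    cv::Mat word;
    imread(fileList[i], cv::IMREAD_GRAYSCALE).copyTo(img);

    std::vector<cv::KeyPoint> keyPoints;
    featureDetectorDescriptor->detect(img, keyPoints);

    if(keyPoints.empty()) {
        // No features found: push a zero histogram so every image still gets a row
        histograms.push_back(cv::Mat::zeros(1, vocabulary.rows, CV_32F));
        continue;
    }

    bowDE->compute(img, keyPoints, word);
    histograms.push_back(word);
}

Alternatively, you can relax the detector thresholds (e.g. the contrastThreshold parameter of cv::xfeatures2d::SIFT::create) so that such low-texture images still yield some keypoints.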
I am trying to compute an average background, but something is wrong because it crashes the app.
cv::Mat firstFrame;
cv::Mat averageBackground;
int frameCounter = 0;

// this function is called for every frame of the camera
- (void)processImage:(Mat&)image; {
    cv::Mat diffFrame;
    cv::Mat currentFrame;
    cv::Mat colourCopy;

    cvtColor(image, currentFrame, COLOR_BGR2GRAY);

    averageBackground = cv::Mat::zeros(image.size(), CV_32FC3);
    cv::accumulateWeighted(currentFrame, averageBackground, 0.01);

    cvtColor(image, colourCopy, COLOR_BGR2RGB);
In the crash logs I see:
OpenCV Error: Assertion failed (_src.sameSize(_dst) && dcn == scn) in accumulateWeighted, file /Volumes/Linux/builds/precommit_ios/opencv/modules/imgproc/src/accum.cpp, line 1108
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Volumes/Linux/builds/precommit_ios/opencv/modules/imgproc/src/accum.cpp:1108: error: (-215) _src.sameSize(_dst) && dcn == scn in function accumulateWeighted
In cv::accumulateWeighted the input and the output image must have the same number of channels. In your case currentFrame has only one channel because of the COLOR_BGR2GRAY you did before, and averageBackground has three channels.
Also be careful with averageBackground = cv::Mat::zeros(image.size(), CV_32FC3); because with this line you re-initialize the result image on every frame (so you delete the previously accumulated values that allow you to calculate the average). You must initialize this image only once, at the beginning of your program.
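Here is a minimal sketch of the corrected flow inside processImage, assuming you want to accumulate the grayscale frames (it reuses the global averageBackground declared above and initializes it lazily on the first frame instead of on every call):

cv::Mat currentFrame;
cvtColor(image, currentFrame, COLOR_BGR2GRAY);

// Initialize the accumulator once, with the same size and channel count as the input
if(averageBackground.empty())
    averageBackground = cv::Mat::zeros(currentFrame.size(), CV_32FC1);

cv::accumulateWeighted(currentFrame, averageBackground, 0.01);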
I have a grabber which can get the images and show them on the screen with the following code:
while((lastPicNr = Fg_getLastPicNumberBlockingEx(fg, lastPicNr + 1, 0, 10, _memoryAllc)) < 200) {
    iPtr = (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc);
    ::DrawBuffer(nId, iPtr, lastPicNr, "testing");
}
But I want to use the pointer to the image data and display the images with OpenCV, because I need to do processing on the pixels. My camera is a CCD mono camera and the pixel depth is 8 bits. I am new to OpenCV: is there any option in OpenCV that can take the return value of (unsigned char*)Fg_getImagePtrEx(fg,lastPicNr,0,_memoryAllc); and display it on the screen, or get the data from the iPtr pointer and allow me to use the image data?
Creating an IplImage from unsigned char* raw_data takes 2 important instructions: cvCreateImageHeader() and cvSetData():
// 1 channel for a mono camera; for RGB it would be 3
int channels = 1;

IplImage* cv_image = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, channels);
if (!cv_image)
{
    // print error, failed to allocate image!
}

cvSetData(cv_image, raw_data, cv_image->widthStep);

cvNamedWindow("win1", CV_WINDOW_AUTOSIZE);
cvShowImage("win1", cv_image);
cvWaitKey(10);

// release resources
cvReleaseImageHeader(&cv_image);
cvDestroyWindow("win1");
I haven't tested the code, but the roadmap for the code you are looking for is there.
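If you prefer the C++ API, the same raw buffer can be wrapped in a cv::Mat header without copying. A minimal sketch, assuming the buffer is width x height, 8-bit, single channel and tightly packed (if the grabber pads its rows, pass the row step as a fifth constructor argument):

// Wrap the grabber's buffer; cv::Mat does not copy or take ownership of raw_data
cv::Mat cv_image(height, width, CV_8UC1, raw_data);

cv::imshow("win1", cv_image);
cv::waitKey(10);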
If you are using C++ and your camera is supported by OpenCV, I don't understand why you are not simply doing it this way:
cv::VideoCapture capture(0);
if(!capture.isOpened()) {
    // print error
    return -1;
}

cv::namedWindow("viewer");
cv::Mat frame;

while( true )
{
    capture >> frame;
    // ... processing here
    cv::imshow("viewer", frame);

    int c = cv::waitKey(10);
    if( (char)c == 'c' ) { break; } // press c to quit
}
I would recommend starting to read the docs and tutorials which you can find here.
This code compiles, but when I try to run it, it gives the following error:
The application has requested the Runtime to terminate in an unusual way.
This is the code:
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    cv::VideoCapture c(0);
    double rate = 10;
    bool stop(false);
    cv::Mat frame;
    cv::namedWindow("Hi!");
    int delay = 1000 / rate;
    cv::Mat corners;

    while(!stop) {
        if(!c.read(frame))
            break;

        cv::cornerHarris(frame, corners, 3, 3, 0.1);
        cv::imshow("Hi!", corners);

        if(cv::waitKey(delay) >= 0)
            stop = true;
    }
    return 0;
}
BTW, I get the same error when using the Canny edge detector.
Your corners matrix is declared as a variable, but no memory is allocated for it. The same goes for your frame variable. First you have to create a matrix big enough for the image to fit into it.
I suggest you first take a look at cvCreateImage so you can learn how basic images are created and handled, before you start working with video streams.
Make sure the capture is ready, and the image is ok:
if(!c.isOpened())
    break;

if(!c.read(frame))
    break;

if(frame.empty())
    break;
You need to convert the image to grayscale before you use the corner detector:
cv::Mat frameGray;
cv::cvtColor(frame, frameGray, cv::COLOR_BGR2GRAY);
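Putting the checks and the conversion together, here is a minimal sketch of the corrected program (the normalize step before imshow is my addition so that the Harris response, which is a float image with very small values, is actually visible; the original parameters are kept):

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture c(0);
    if(!c.isOpened())
        return -1;

    cv::namedWindow("Hi!");
    cv::Mat frame, frameGray, corners, cornersNorm;
    double rate = 10;
    int delay = 1000 / rate;

    while(true) {
        if(!c.read(frame) || frame.empty())
            break;

        // cornerHarris expects a single-channel 8-bit or floating-point image
        cv::cvtColor(frame, frameGray, cv::COLOR_BGR2GRAY);
        cv::cornerHarris(frameGray, corners, 3, 3, 0.1);

        // Scale the response to 0..255 so it can be displayed
        cv::normalize(corners, cornersNorm, 0, 255, cv::NORM_MINMAX, CV_8UC1);
        cv::imshow("Hi!", cornersNorm);

        if(cv::waitKey(delay) >= 0)
            break;
    }
    return 0;
}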