Visual Studio 2015 Community - Function definition not found but can compile - C++

I have a project in Visual Studio 2015 Community. It compiles without any error, but I get a green squiggly line under compute_edge_map_via_lab and compute_local_minima that says "Function definition for 'compute_edge_map_via_lab' not found." If I right-click the line that calls compute_edge_map_via_lab and choose "Go to Definition", it even takes me to the definition in the cpp file, implying that Visual Studio knows where the function is defined. So I don't understand this green error. Can anyone help me with this?
I have pasted the function for compute_edge_map_via_lab and the image showing the error.
#include <boost/heap/fibonacci_heap.hpp>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/iteration_macros.hpp>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <unordered_set>
#include "image-processing.h"
int main() {
    cv::Mat image = cv::imread("0001.jpg", CV_LOAD_IMAGE_COLOR);
    //compute edge map
    cv::Mat magnitude;
    compute_edge_map_via_lab(image, magnitude);
    //compute local minima
    cv::Mat markers;
    compute_local_minima(magnitude, markers);
}
image-processing.h
#pragma once
#include <opencv2/opencv.hpp>
void compute_edge_map_via_lab(cv::Mat image, cv::Mat edge_map);
void compute_local_minima(cv::Mat magnitude, cv::Mat markers);
image-processing.cpp
#include "image-processing.h"
void compute_edge_map_via_lab(cv::Mat image, cv::Mat edge_map) {
int rows = image.rows;
int cols = image.cols;
//convert bgr to lab
cv::Mat image_lab;
cv::cvtColor(image, image_lab, CV_BGR2Lab);
//split lab
std::vector<cv::Mat> image_lab_split(3);
cv::split(image_lab, image_lab_split);
//run sobel x and y on lab sets
std::vector<cv::Mat> image_lab_split_dx(3), image_lab_split_dy(3);
for (int i = 0; i < 3; i++)
{
cv::Sobel(image_lab_split[i], image_lab_split_dx[i], CV_32FC1, 1, 0, 3);
cv::Sobel(image_lab_split[i], image_lab_split_dy[i], CV_32FC1, 0, 1, 3);
}
//-----------------------------------------------------------------------------
//compute magnitude = term_a + term_b
// = sqrt(Lx^2 + Ly^2) + sqrt(2(ax^2 + ay^2 + bx^2 + by^2))
//-----------------------------------------------------------------------------
//compute sqrt(Lx^2 + Ly^2)
cv::Mat Lx_squared = cv::Mat(cv::Size(cols, rows), CV_32FC1),
Ly_squared = cv::Mat(cv::Size(cols, rows), CV_32FC1);
cv::pow(image_lab_split_dx[0], 2, Lx_squared);
cv::pow(image_lab_split_dy[0], 2, Ly_squared);
//compute term_a
cv::Mat term_a = cv::Mat(cv::Size(cols, rows), CV_32FC1);
term_a = Lx_squared + Ly_squared;
cv::sqrt(term_a, term_a);
//compute sqrt(2(ax^2 + ay^2 + bx^2 + by^2))
cv::Mat ax_squared = cv::Mat(cv::Size(cols, rows), CV_32FC1),
ay_squared = cv::Mat(cv::Size(cols, rows), CV_32FC1),
bx_squared = cv::Mat(cv::Size(cols, rows), CV_32FC1),
by_squared = cv::Mat(cv::Size(cols, rows), CV_32FC1);
cv::pow(image_lab_split_dx[1], 2, ax_squared);
cv::pow(image_lab_split_dy[1], 2, ay_squared);
cv::pow(image_lab_split_dx[2], 2, bx_squared);
cv::pow(image_lab_split_dy[2], 2, by_squared);
//compute term_b
cv::Mat term_b = 2 * (ax_squared + ay_squared + bx_squared + by_squared);
cv::sqrt(term_b, term_b);
//compute magnitude
edge_map = term_a + term_b;
}
void compute_local_minima(cv::Mat magnitude, cv::Mat markers) {
}

As far as the C++ standard is concerned, it doesn't matter: you can declare function prototypes without definitions so long as the functions are not called.
In the old days before C++11 this was even exploited, e.g. declaring a default constructor prototype, with no definition, to suppress unwanted construction.
Such cases are difficult for IntelliSense to spot, and perhaps it's a good thing that it does highlight them. (By the way, IntelliSense uses a different lexical analyser from the actual compiler!)
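For illustration only (not from the question), here is a minimal sketch of that pre-C++11 idiom; the class is made up, and since C++11 you would write "= delete" instead:
// Pre-C++11 idiom: declare the constructors, but intentionally never define them.
// This is legal as long as they are never called, and it suppresses unwanted construction.
class Connection {
public:
    explicit Connection(int socket) : socket_(socket) {}
private:
    Connection();                              // declared, never defined: no default construction
    Connection(const Connection&);             // declared, never defined: no copying
    Connection& operator=(const Connection&);  // declared, never defined: no assignment
    int socket_;
};
int main() {
    Connection c(42);   // fine
    // Connection d;    // would not compile: default constructor is private and undefined
    // Connection e(c); // would not compile: copy constructor is private and undefined
}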

Related

How do I know which distortion coefficients to use from my calibration?

So in the function below
Mat intrinsic = Mat(3, 3, CV_32FC1);
Mat distCoeffs;
vector<Mat> rvecs;
vector<Mat> tvecs;
calibrateCamera(object_points, image_points, gray_image.size(), intrinsic, distCoeffs, rvecs, tvecs);//you'll have the intrinsic matrix, distortion coefficients and
distCoeffs is calculated from calibrateCamera resulting in a 1 x 5 Mat of 5 values. Is it safe to assume that these values correspond to K1 K2 P1 P2 K3, in that order? If I want to use the same values in another program that only calls for 4 distortion coefficients, would that correspond to K1 K2 P1 P2? Is it better to include as many coefficients as possible?
My code is below. The trouble starts when I try to use 5 of my coefficients instead of 4. If I uncomment this line:
//distCoeffs.at<double>(4) = CV_P3; // 5th value
And expand the matrix here:
cv::Mat distCoeffs = cv::Mat(1, 4, CV_64F, double(0));
to
cv::Mat distCoeffs = cv::Mat(1, 5, CV_64F, double(0));
then it produces an exception. In other words, as soon as I switch from 4 values to 5, as the comments above suggest, it breaks.
Here is the whole, executable code:
#include <iostream>
#include <opencv2/highgui.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/core/mat.hpp>
// set mats
/**/
using namespace std;
using namespace cv;
void set_camIntrinsics(Mat &cam_intrinsic, Mat &distCoeffs)
{
//set intrinsics
double CV_CX = 386.6963736491591,
CV_CY = 234.7746525148251,
CV_FX = 260.257998425127,
CV_FY = 260.1583925187085;
cam_intrinsic.at<double>(0, 0) = CV_FX;
cam_intrinsic.at<double>(1, 0) = 0.0;
cam_intrinsic.at<double>(2, 0) = 0.0;
cam_intrinsic.at<double>(0, 1) = 0.0;
cam_intrinsic.at<double>(1, 1) = CV_FY;
cam_intrinsic.at<double>(2, 1) = 0.0;
cam_intrinsic.at<double>(0, 2) = CV_CX;
cam_intrinsic.at<double>(1, 2) = CV_CY;
cam_intrinsic.at<double>(2, 2) = 1.0;
// new coeffs 11.29.19
double CV_K1 = -0.2666308246430311,
CV_K2 = 0.06474699227144737,
CV_P1 = 0.0003621024764932747,
CV_P2 = -0.000726010205813438,
CV_P3 = -0.006384634912197317;
distCoeffs.at<double>(0) = CV_K1;
distCoeffs.at<double>(1) = CV_K2;
distCoeffs.at<double>(2) = CV_P1;
distCoeffs.at<double>(3) = CV_P2;
//distCoeffs.at<double>(4) = CV_P3; otherwise creates exception
}
int main()
{
cv::Mat cam_intrinsic = cv::Mat(3, 3, CV_64F, double(0));
cv::Mat distCoeffs = cv::Mat(1, 4, CV_64F, double(0));
// cv::Mat distCoeffs = cv::Mat(1, 5, CV_64F, double(0));
set_camIntrinsics(cam_intrinsic, distCoeffs);
cv::Mat input_frame = cv::imread("fisheye_pic.png");
cv::Mat output_frame;
cv::fisheye::undistortImage(input_frame, output_frame, cam_intrinsic, distCoeffs, cv::noArray(), cv::Size(input_frame.cols, input_frame.rows));
cv::imshow("Input Image", input_frame);
//cv::imshow("Output Image", output_frame);
cv::waitKey(-1);
return 0;
}
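For reference, here is a hedged sketch of how those five values map onto OpenCV's standard (pinhole) distortion model, assuming they come from a plain calibrateCamera run; it reuses cam_intrinsic and input_frame from the main above. The cv::fisheye functions use a different four-parameter model (k1..k4) and expect exactly four coefficients, which would explain the exception when a 1 x 5 matrix is passed to fisheye::undistortImage.
// Standard-model coefficient order is (k1, k2, p1, p2[, k3]), so all five values fit here.
cv::Mat distCoeffs5 = (cv::Mat_<double>(1, 5) <<
    -0.2666308246430311,    // k1
     0.06474699227144737,   // k2
     0.0003621024764932747, // p1
    -0.000726010205813438,  // p2
    -0.006384634912197317); // k3 (named CV_P3 in the question)
cv::Mat undistorted;
// cv::undistort is the standard-model counterpart of fisheye::undistortImage
// (it may need an extra #include <opencv2/imgproc.hpp> depending on the OpenCV version).
cv::undistort(input_frame, undistorted, cam_intrinsic, distCoeffs5);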

Unsharp mask implementation with OpenCV

I want to apply an unsharp mask like Adobe Photoshop does.
I know this answer, but it's not as sharp as Photoshop.
Photoshop has 3 parameters in its Smart Sharpen dialog: Amount, Radius and Reduce Noise; I want to implement all of them.
This is the code I wrote, based on various sources on SO.
The result is good in some stages ("blurred", "unsharpMask", "highContrast"), but in the last stage ("retval") it is not good.
Where am I wrong? What should I improve?
Is it possible to improve the following algorithm in terms of performance?
#include "opencv2/opencv.hpp"
#include "fstream"
#include "iostream"
#include <chrono>
using namespace std;
using namespace cv;
// from https://docs.opencv.org/3.4/d3/dc1/tutorial_basic_linear_transform.html
void increaseContrast(Mat img, Mat* dst, int amountPercent)
{
*dst = img.clone();
double alpha = amountPercent / 100.0;
*dst *= alpha;
}
// from https://stackoverflow.com/a/596243/7206675
float luminanceAsPercent(Vec3b color)
{
return (0.2126 * color[2]) + (0.7152 * color[1]) + (0.0722 * color[0]);
}
// from https://stackoverflow.com/a/2938365/7206675
Mat usm(Mat original, int radius, int amountPercent, int threshold)
{
// copy original for our return value
Mat retval = original.clone();
// create the blurred copy
Mat blurred;
cv::GaussianBlur(original, blurred, cv::Size(0, 0), radius);
cv::imshow("blurred", blurred);
waitKey();
// subtract blurred from original, pixel-by-pixel to make unsharp mask
Mat unsharpMask;
cv::subtract(original, blurred, unsharpMask);
cv::imshow("unsharpMask", unsharpMask);
waitKey();
Mat highContrast;
increaseContrast(original, &highContrast, amountPercent);
cv::imshow("highContrast", highContrast);
waitKey();
// assuming row-major ordering
for (int row = 0; row < original.rows; row++)
{
for (int col = 0; col < original.cols; col++)
{
Vec3b origColor = original.at<Vec3b>(row, col);
Vec3b contrastColor = highContrast.at<Vec3b>(row, col);
Vec3b difference = contrastColor - origColor;
float percent = luminanceAsPercent(unsharpMask.at<Vec3b>(row, col));
Vec3b delta = difference * percent;
if (*(uchar*)&delta > threshold) {
retval.at<Vec3b>(row, col) += delta;
//retval.at<Vec3b>(row, col) = contrastColor;
}
}
}
return retval;
}
int main(int argc, char* argv[])
{
if (argc < 2) exit(1);
Mat mat = imread(argv[1]);
mat = usm(mat, 4, 110, 66);
imshow("usm", mat);
waitKey();
//imwrite("USM.png", mat);
}
Original Image:
Blurred stage - Seemingly good:
UnsharpMask stage - Seemingly good:
HighContrast stage - Seemingly good:
Result stage of my code - Looks bad!
Result From Photoshop - Excellent!
First of all, judging by the artefacts that Photoshop left on the borders of the petals, I'd say that it applies the mask by using a weighted sum between the original image and the mask, as in the answer you tried first.
I modified your code to implement this scheme and tried to tweak the parameters to get as close as possible to the Photoshop result, but I couldn't do so without creating a lot of noise. I won't try to guess exactly what Photoshop is doing (I would definitely like to know), but I found that its result is fairly reproducible by applying some filter to the mask to reduce the noise. The algorithm scheme would be:
blurred = blur(image, Radius)
mask = image - blurred
mask = some_filter(mask)
sharpened = (mask < Threshold) ? image : image + Amount * mask
I implemented this and tried using basic filters (median blur, mean filter, etc) on the mask and this is the kind of result I can get:
which is a bit noisier than the Photoshop image but, in my opinion, close enough to what you wanted.
On another note, it of course depends on what you use your filter for, but I think the settings you used in Photoshop are too strong (you have big overshoots near the petal borders). This is sufficient for a nice image to the naked eye, with limited overshoot:
Finally, here is the code I used to generate the two images above:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;
Mat usm(Mat original, float radius, float amount, float threshold)
{
// work using floating point images to avoid overflows
cv::Mat input;
original.convertTo(input, CV_32FC3);
// copy original for our return value
Mat retbuf = input.clone();
// create the blurred copy
Mat blurred;
cv::GaussianBlur(input, blurred, cv::Size(0, 0), radius);
// subtract blurred from original, pixel-by-pixel to make unsharp mask
Mat unsharpMask;
cv::subtract(input, blurred, unsharpMask);
// --- filter on the mask ---
//cv::medianBlur(unsharpMask, unsharpMask, 3);
cv::blur(unsharpMask, unsharpMask, {3,3});
// --- end filter ---
// apply mask to image
for (int row = 0; row < original.rows; row++)
{
for (int col = 0; col < original.cols; col++)
{
Vec3f origColor = input.at<Vec3f>(row, col);
Vec3f difference = unsharpMask.at<Vec3f>(row, col);
if(cv::norm(difference) >= threshold) {
retbuf.at<Vec3f>(row, col) = origColor + amount * difference;
}
}
}
// convert back to unsigned char
cv::Mat ret;
retbuf.convertTo(ret, CV_8UC3);
return ret;
}
int main(int argc, char* argv[])
{
if (argc < 3) exit(1);
Mat original = imread(argv[1]);
Mat expected = imread(argv[2]);
// closer to Photoshop
Mat current = usm(original, 0.8, 12., 1.);
// better settings (in my opinion)
//Mat current = usm(original, 2., 1., 3.);
cv::imwrite("current.png", current);
// comparison plot
cv::Rect crop(127, 505, 163, 120);
cv::Mat crops[3];
cv::resize(original(crop), crops[0], {0,0}, 4, 4, cv::INTER_NEAREST);
cv::resize(expected(crop), crops[1], {0,0}, 4, 4, cv::INTER_NEAREST);
cv::resize( current(crop), crops[2], {0,0}, 4, 4, cv::INTER_NEAREST);
char const* texts[] = {"original", "photoshop", "current"};
cv::Mat plot = cv::Mat::zeros(120 * 4, 163 * 4 * 3, CV_8UC3);
for(int i = 0; i < 3; ++i) {
cv::Rect region(163 * 4 * i, 0, 163 * 4, 120 * 4);
crops[i].copyTo(plot(region));
cv::putText(plot, texts[i], region.tl() + cv::Point{5,40},
cv::FONT_HERSHEY_SIMPLEX, 1.5, CV_RGB(255, 0, 0), 2.0);
}
cv::imwrite("plot.png", plot);
}
Here's my attempt at 'smart' unsharp masking. The result isn't very good, but I'm posting it anyway. The Wikipedia article on unsharp masking has details about smart sharpening.
Several things I did differently:
Convert BGR to Lab color space and apply the enhancements to the brightness channel
Use an edge map to apply enhancement to the edge regions
Original:
Enhanced: sigma=2 amount=3 low=0.3 high=.8 w=2
Edge map: low=0.3 high=.8 w=2
#include "opencv2/core.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <string>
cv::Mat not_so_smart_sharpen(
const cv::Mat& bgr,
double sigma,
double amount,
double canny_low_threshold_weight,
double canny_high_threshold_weight,
int edge_weight)
{
cv::Mat enhanced_bgr, lab, enhanced_lab, channel[3], blurred, difference, bw, kernel, edges;
// convert to Lab
cv::cvtColor(bgr, lab, cv::ColorConversionCodes::COLOR_BGR2Lab);
// perform the enhancement on the brightness component
cv::split(lab, channel);
cv::Mat& brightness = channel[0];
// smoothing for unsharp masking
cv::GaussianBlur(brightness, blurred, cv::Size(0, 0), sigma);
difference = brightness - blurred;
// calculate an edge map. I'll use Otsu threshold as the basis
double thresh = cv::threshold(brightness, bw, 0, 255, cv::ThresholdTypes::THRESH_BINARY | cv::ThresholdTypes::THRESH_OTSU);
cv::Canny(brightness, edges, thresh * canny_low_threshold_weight, thresh * canny_high_threshold_weight);
// control edge thickness. use edge_weight=0 to use Canny edges unaltered
cv::dilate(edges, edges, kernel, cv::Point(-1, -1), edge_weight);
// unsharp masking on the edges
cv::add(brightness, difference * amount, brightness, edges);
// use the enhanced brightness channel
cv::merge(channel, 3, enhanced_lab);
// convert to BGR
cv::cvtColor(enhanced_lab, enhanced_bgr, cv::ColorConversionCodes::COLOR_Lab2BGR);
// cv::imshow("edges", edges);
// cv::imshow("difference", difference * amount);
// cv::imshow("original", bgr);
// cv::imshow("enhanced", enhanced_bgr);
// cv::waitKey(0);
return enhanced_bgr;
}
int main(int argc, char *argv[])
{
double sigma = std::stod(argv[1]);
double amount = std::stod(argv[2]);
double low = std::stod(argv[3]);
double high = std::stod(argv[4]);
int w = std::stoi(argv[5]);
cv::Mat bgr = cv::imread("flower.jpg");
cv::Mat enhanced = not_so_smart_sharpen(bgr, sigma, amount, low, high, w);
cv::imshow("original", bgr);
cv::imshow("enhanced", enhanced);
cv::waitKey(0);
return 0;
}

Image Convolution with Multi Channel Kernel

I want to do three-channel image filtering with the C++ OpenCV library, using 3x3 kernels with different values for each channel. To do this, I first split the RGB image into its three channels: red, green and blue. Then I defined a different kernel matrix for each of the three channels. When I then process them with the filter2D function, the code throws an exception:
Unhandled exception at 0x00007FFAA150A388 in opencvTry.exe: Microsoft C++ exception: cv::Exception at memory location 0x0000002D4CAF9660 occurred.
What is the reason I can't do it in the code below?
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <typeinfo>
#include <stdlib.h>
using namespace cv;
using namespace std;
int main()
{
Mat src = imread("path\\color_palette.png", IMREAD_COLOR); //load image
int blue_array[159][318];
int green_array[159][318];
int red_array[159][318];
for (int i = 0; i < src.rows; i++) {
for (int j = 0; j < src.cols; j++) {
int a = int(src.at<Vec3b>(i, j).val[0]);
blue_array[i][j] = a;
//cout << blue_array[i][j] << ' ' ;
int b = int(src.at<Vec3b>(i, j).val[1]);
green_array[i][j] = b;
int c = int(src.at<Vec3b>(i, j).val[2]);
red_array[i][j] = c;
}
}
cv::Mat blue_array_mat(159, 318, CV_32S, blue_array);
cv::Mat green_array_mat(159, 318, CV_32S, green_array);
cv::Mat red_array_mat(159, 318, CV_32S, red_array);
float kernelForBlueData[9] = { 1,0,1, 2,0,-2, 1,0,-1};
cv::Mat kernelForBlue(3, 3, CV_32F, kernelForBlueData);
float kernelForGreenData[9] = { 1./16, 2./16, 1./16, 2./16, 4./16,2./16, 1./16, 2./16, 1./16 };
cv::Mat kernelForGreen(3, 3, CV_32F, kernelForGreenData);
float kernelForRedData[9] = { 1./9,1./9, 1./9, 1./9, 1./9,1./9, 1./9, 1./9,1./9 };
cv::Mat kernelForRed(3, 3, CV_32F, kernelForRedData);
//cv::filter2D(blue_array_mat, blue_array_mat, -1, kernelForBlue, Point(-1, -1), 5.0, BORDER_REPLICATE);
filter2D(blue_array_mat, blue_array_mat, 0, kernelForBlue);
imshow("filter", blue_array_mat);
waitKey(0);
return 0;
}
You're using a constructor for cv::Mat that expects a pointer to data (e.g. int*), but you put an int** into it. This is the reason for the crash, I presume.
Why not create the cv::Mat first and then write the data directly into it?
Note that OpenCV has a function that does this for you:
cv::Mat chans[3];
cv::split(src, chans);
//...
cv::filter2D(chans[0], chans[0], 0, kernelForBlue);
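For completeness, here is a minimal sketch along those lines; it reuses the kernels and the color_palette.png input from the question (after cv::split on a BGR image, channel 0 is blue, 1 is green, 2 is red), and filters each 8-bit channel directly instead of copying the pixels into int arrays:
#include <opencv2/opencv.hpp>
#include <iostream>
int main()
{
    cv::Mat src = cv::imread("color_palette.png", cv::IMREAD_COLOR);
    if (src.empty()) { std::cerr << "could not load image" << std::endl; return 1; }
    // split into B, G, R channels
    cv::Mat chans[3];
    cv::split(src, chans);
    // the kernels from the question
    float kernelForBlueData[9]  = { 1, 0, 1,  2, 0, -2,  1, 0, -1 };
    float kernelForGreenData[9] = { 1.f/16, 2.f/16, 1.f/16,  2.f/16, 4.f/16, 2.f/16,  1.f/16, 2.f/16, 1.f/16 };
    float kernelForRedData[9]   = { 1.f/9, 1.f/9, 1.f/9,  1.f/9, 1.f/9, 1.f/9,  1.f/9, 1.f/9, 1.f/9 };
    cv::Mat kernelForBlue(3, 3, CV_32F, kernelForBlueData);
    cv::Mat kernelForGreen(3, 3, CV_32F, kernelForGreenData);
    cv::Mat kernelForRed(3, 3, CV_32F, kernelForRedData);
    // ddepth = -1 keeps each filtered channel as 8-bit; filter2D accepts CV_8U input directly
    cv::Mat filteredChans[3];
    cv::filter2D(chans[0], filteredChans[0], -1, kernelForBlue);
    cv::filter2D(chans[1], filteredChans[1], -1, kernelForGreen);
    cv::filter2D(chans[2], filteredChans[2], -1, kernelForRed);
    // recombine the three filtered channels
    cv::Mat filtered;
    cv::merge(filteredChans, 3, filtered);
    cv::imshow("filtered", filtered);
    cv::waitKey(0);
    return 0;
}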

OpenCV - warpPerspective

I'm trying to use the function "warpPerspective" with OpenCV 3.0. I'm using this example:
http://answers.opencv.org/question/98110/how-do-i-stitch-images-with-two-different-angles/
I have to create a ROI on the right side of the first image and another one on the left side of the second image, use ORB to detect keypoints and compute descriptors, and match them. I didn't change much of the original code, just the ROIs.
The problem is that every image whose perspective I try to warp comes out like this:
I have already tried multiple pairs of images and the problem persists.
#include "opencv2/opencv.hpp"
#include <iostream>
#include <fstream>
#include <ctype.h>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
Mat img1 = imread("image2.jpg");
Mat img2 = imread("image1.jpg");
namedWindow("I2", WINDOW_NORMAL); namedWindow("I1", WINDOW_NORMAL);
Ptr<ORB> o1 = ORB::create();
Ptr<ORB> o2 = ORB::create();
vector<KeyPoint> pts1, pts2;
Mat desc1, desc2;
vector<DMatch> matches;
Size s = img1.size();
Size s2 = img2.size();
Rect r1(s.width - 200, 0, 200, s.height);
//rectangle(img1, r1, Scalar(255, 0, 0), 5);
Rect r2(0, 0, 200, s2.height);
//rectangle(img2, r2, Scalar(255, 0, 0), 5);
Mat mask1 = Mat::zeros(img1.size(), CV_8UC1);
Mat mask2 = Mat::zeros(img1.size(), CV_8UC1);
mask1(r1) = 1;
mask2(r2) = 1;
o1->detectAndCompute(img1, mask1, pts1, desc1);
o2->detectAndCompute(img2, mask2, pts2, desc2);
BFMatcher descriptorMatcher(NORM_HAMMING, true);
descriptorMatcher.match(desc1, desc2, matches, Mat());
// Keep best matches only to have a nice drawing.
// We sort distance between descriptor matches
Mat index;
int nbMatch = int(matches.size());
Mat tab(nbMatch, 1, CV_32F);
for (int i = 0; i<nbMatch / 2; i++)
{
tab.at<float>(i, 0) = matches[i].distance;
}
sortIdx(tab, index, SORT_EVERY_COLUMN + SORT_ASCENDING);
vector<DMatch> bestMatches;
vector<Point2f> src, dst;
for (int i = 0; i < nbMatch / 2; i++)
{
int j = index.at<int>(i, 0);
cout << pts1[matches[j].queryIdx].pt << "\t" << pts2[matches[j].trainIdx].pt << "\n";
src.push_back(pts1[matches[j].queryIdx].pt + Point2f(0, img1.rows)); // necessary offset
dst.push_back(pts2[matches[j].trainIdx].pt);
}
cout << "\n";
Mat h = findHomography(src, dst, RANSAC);
Mat result;
cout << h << endl;
warpPerspective(img2, result, h.inv(), Size(3 * img2.cols + img1.cols, 2 * img2.rows + img1.rows));
imshow("I1", img1);
imshow("I2", img2);
Mat roi1(result, Rect(0, img1.rows, img1.cols, img1.rows));
img1.copyTo(roi1);
namedWindow("I3", WINDOW_NORMAL);
imshow("I3", result);
imwrite("result.jpg", result);
waitKey();
return 0;
}
Does that come from bad matches? Am I missing something? Since I'm kind of new to this topic, any help or ideas would be really appreciated.
Here are the quick things you need to check when your warp perspective is not working:
Did you select the right points in both images?
Reason: you need to choose exactly the points that correspond to each other when finding a perspective transform; unrelated points ruin it.
Are your points in the right order in the arrays?
Reason: you need to put them in the same corresponding order in both the source and destination arrays before passing them to findHomography.
Are you passing them to findHomography in the right order? Try switching the two arrays if you are not sure, so that it doesn't warp in the reverse direction.
Those are the mistakes I made when I first used it. Now if you look at your images, only a small part of them overlaps; you need to be more careful there. Your rect mask might be at fault.
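As a minimal sketch of that ordering point (it reuses the names from your code, where desc1/pts1 are the query side of the matcher and desc2/pts2 the train side), the source and destination points should be built in one consistent query/train order before calling findHomography:
// Build corresponding point lists in a consistent order and estimate the homography.
// pts1/pts2 are the ORB keypoints, matches the (sorted) DMatch list from the question.
cv::Mat homographyFromMatches(const std::vector<cv::KeyPoint>& pts1,
                              const std::vector<cv::KeyPoint>& pts2,
                              const std::vector<cv::DMatch>& matches)
{
    std::vector<cv::Point2f> src, dst;
    for (const cv::DMatch& m : matches)
    {
        src.push_back(pts1[m.queryIdx].pt); // point in img1 (the query image)
        dst.push_back(pts2[m.trainIdx].pt); // corresponding point in img2 (the train image)
    }
    // The result maps img1 coordinates into img2's frame; invert it to warp img2 onto img1.
    return cv::findHomography(src, dst, cv::RANSAC);
}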

OpenCV Harris Corner Detection crashes

I'm trying to use the Harris corner detection algorithm in OpenCV to find corners in an image. I then want to track them across consecutive frames using Lucas-Kanade pyramidal optical flow.
I have this C++ code, which doesn't seem to work for some reason:
#include <stdio.h>
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
void main()
{
Mat img1, img2;
Mat disp1, disp2;
int thresh = 200;
vector<Point2f> left_corners;
vector<Point2f> right_corners;
vector<unsigned char> status;
vector<float> error;
Size s;
s.height = 400;
s.width = 400;
img1 = imread("D:\\img_l.jpg",0);
img2 = imread("D:\\img_r.jpg",0);
resize(img2, img2, s, 0, 0, INTER_CUBIC);
resize(img1, img1, s, 0, 0, INTER_CUBIC);
disp1 = Mat::zeros( img1.size(), CV_32FC1 );
disp2 = Mat::zeros( img2.size(), CV_32FC1 );
int blockSize = 2;
int apertureSize = 3;
double k = 0.04;
cornerHarris( img1, disp1, blockSize, apertureSize, k, BORDER_DEFAULT );
normalize( disp1, disp1, 0, 255, NORM_MINMAX, CV_32FC1, Mat() );
for( int j = 0; j < disp1.size().height ; j++ )
{
for( int i = 0; i < disp1.size().width; i++ )
{
if( (int) disp1.at<float>(j,i) > thresh )
{
left_corners.push_back(Point2f( j, i ));
}
}
}
right_corners.resize(left_corners.size());
calcOpticalFlowPyrLK(img1,img2,left_corners,right_corners,status,error, Size(11,11),5);
printf("Vector size : %d",left_corners.size());
waitKey(0);
}
When I run it, I get the following error message:
Microsoft Visual Studio C Runtime Library has detected a fatal error in OpenCVTest.exe.
(OpenCVTest being the name of my project)
OpenCV Error: Assertion failed ((npoints = prevPtsMat.checkVector(2, CV_32F, true)) >= 0) in unknown function, file ..\..\OpenCV-2.3.0-win-src\OpenCV-2.3.0\modules\video\src\lkpyramid.cpp, line 71
I have been trying to debug this since yesterday, but in vain. Please help.
As we can see in the source code, this error is thrown if the previous points array is in some way faulty. Exactly what makes it bad is hard to say, since the documentation for checkVector is a bit sketchy, but you can still look at the code to find out.
My guess is that your left_corners variable has either the wrong type (not CV_32F) or the wrong shape.
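As a minimal sketch of what that assertion expects (reusing disp1 and thresh from the question), prevPts must be convertible to an N x 1, 2-channel, 32-bit float array, which a std::vector<cv::Point2f> is. Note also that cv::Point2f takes (x, y), i.e. (column, row), so the Point2f(j, i) in the question's loop swaps the coordinates:
// Collect corner locations in the layout calcOpticalFlowPyrLK expects.
std::vector<cv::Point2f> collectCorners(const cv::Mat& disp1, float thresh)
{
    std::vector<cv::Point2f> corners;
    for (int row = 0; row < disp1.rows; row++)
        for (int col = 0; col < disp1.cols; col++)
            if (disp1.at<float>(row, col) > thresh)
                corners.push_back(cv::Point2f((float)col, (float)row)); // x = column, y = row
    // Mirror the failing assertion: an N x 1, 2-channel, 32-bit float, continuous array.
    CV_Assert(cv::Mat(corners).checkVector(2, CV_32F, true) >= 0);
    return corners;
}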