How to use mask in drawMatches function (C++)

I'm writing a program with OpenCV and a stereo camera. I want to know which detected point in the first camera corresponds to which detected point in the second camera. The thing is, I have several detector, extractor and matcher methods and, following the OpenCV example, an algorithm that filters the matches and draws only the good ones; in my case, though, the min_dist parameter depends on my trackbar position.
This is the code of the OpenCV example: http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html#feature-flann-matcher
And here are the changes I made so that the minimum distance between matches follows the trackbar:
// TrackBar position
dist_track = getTrackbarPos(nombreTrackbar, BUTTON_WINDOW);
cout << "Posicion de la barra: " << dist_track << endl;
good_matches.clear();

// Obtain good_matches
for (int i = 0; i < descriptors[0].rows; i++)
{
    if (matches[i].distance <= coef * dist_track)
    {
        good_matches.push_back(matches[i]);
    }
}
The main thing is that when the trackbar is at the beginning I get correct matches, but when the trackbar is at the end the matches I find aren't correct. In that case I find a lot of matches, but many of them are wrong.
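As an aside, here is a hedged sketch (not from the original post) of Lowe's ratio test, which usually filters matches more robustly than an absolute distance threshold; the FLANN matcher and the 0.75 ratio are assumptions:

Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
vector<vector<DMatch> > knnMatches;
matcher->knnMatch(descriptors[0], descriptors[1], knnMatches, 2); // two best matches per query

vector<DMatch> goodMatches;
for (size_t i = 0; i < knnMatches.size(); i++)
    if (knnMatches[i].size() == 2 && knnMatches[i][0].distance < 0.75f * knnMatches[i][1].distance)
        goodMatches.push_back(knnMatches[i][0]); // keep only clearly-best matches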
Now I'm trying to get the images right. I want to use a mask in the drawMatches function to force the detected second-camera points to lie near the epipolar line. Can someone tell me something about this?
Does someone know how to use the mask parameter to force the found matches to lie on the epipolar line?
Or how to create the mask parameter?
Thanks friends!
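For reference, a minimal sketch of one way to build the matchesMask argument of drawMatches from epipolar geometry. It assumes the fundamental matrix F has already been estimated (for example with findFundamentalMat), that keypoints1/keypoints2 hold the two cameras' detections, and that a 3-pixel tolerance suits the setup:

// Epipolar line in image 2 for the left point of each match: a*x + b*y + c = 0.
vector<Point2f> pts1;
for (size_t i = 0; i < matches.size(); i++)
    pts1.push_back(keypoints1[matches[i].queryIdx].pt);
vector<Vec3f> epiLines;
computeCorrespondEpilines(pts1, 1, F, epiLines);

// Mark only the matches whose right point lies near its epipolar line.
vector<char> matchesMask(matches.size(), 0);
const float maxDist = 3.0f; // pixel tolerance; an assumption
for (size_t i = 0; i < matches.size(); i++) {
    Point2f p2 = keypoints2[matches[i].trainIdx].pt;
    const Vec3f& l = epiLines[i];
    float d = std::abs(l[0] * p2.x + l[1] * p2.y + l[2]) / std::sqrt(l[0] * l[0] + l[1] * l[1]);
    matchesMask[i] = (d < maxDist) ? 1 : 0;
}
Mat outImg;
drawMatches(img1, keypoints1, img2, keypoints2, matches, outImg,
            Scalar::all(-1), Scalar::all(-1), matchesMask);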

Finally I decided to change my approach. I'm trying to crop the original images and keep only the necessary information. I mean, I throw away the parts of the photo that don't matter for my application and keep only the information that I'm going to use.
My idea is to use the epipolar lines of both cameras to determine my area of interest: I compute where the epipolar lines lie in both images, then crop the images and keep only the information around those lines.
Doing this I obtain two new images, and my idea is to pass the new images to the matcher method to see if I can obtain more successful matching.
Image before cropping:
Image after cropping:
However, I have a problem with computation time. My code has a high computational cost, and sometimes the program crashes with "Segmentation fault: 11". A "Bus error: 10" appears instead if I remove the waitKey() line.
My code to copy the main image content into the second one is here:
for (int i = 0; i < RightEpipolarLines.rows; i++) {
    float m = -RightEpipolarLines(i, 0) / RightEpipolarLines(i, 1);
    float n = -RightEpipolarLines(i, 2) / RightEpipolarLines(i, 1);
    for (int x = 0; x < 480; x++) {
        float y_prima = m * x + n;
        int y = int(y_prima);
        cut_image[0].at<float>(y, x) = capture[0].at<float>(y, x);
    }
}
waitKey();
for (int i = 0; i < LeftEpipolarLines.rows; i++) {
    float m = LeftEpipolarLines(i, 0) / LeftEpipolarLines(i, 1);
    float n = -LeftEpipolarLines(i, 2) / LeftEpipolarLines(i, 1);
    for (int x = 0; x < 480; x++) {
        float y_prima = m * x + n;
        int y = int(y_prima);
        cut_image[1].at<float>(y, x) = capture[1].at<float>(y, x);
    }
}
waitKey();
Does someone know how to pass the information from the real capture to cut_image more efficiently? I only want to copy the pixel information that lies near the epipolar lines.
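A hedged sketch (not from the thread) of the right-image copy, keeping a band of +/- band pixels around each epipolar line with explicit bounds checks, so an out-of-range y can no longer trigger the segmentation fault. Names and the at<float> access mirror the code above, which assumes the captures really are CV_32F:

const int band = 5; // half-width of the strip to keep; an assumption
cut_image[0] = Mat::zeros(capture[0].size(), capture[0].type());
for (int i = 0; i < RightEpipolarLines.rows; ++i) {
    float a = RightEpipolarLines(i, 0);
    float b = RightEpipolarLines(i, 1);
    float c = RightEpipolarLines(i, 2);
    if (std::abs(b) < 1e-6f) continue; // near-vertical line: avoid dividing by ~0
    for (int x = 0; x < capture[0].cols; ++x) {
        int yc = cvRound((-a * x - c) / b); // y of the epipolar line at this column
        int y0 = std::max(yc - band, 0);
        int y1 = std::min(yc + band, capture[0].rows - 1);
        for (int y = y0; y <= y1; ++y) // clamped, so at() never goes out of bounds
            cut_image[0].at<float>(y, x) = capture[0].at<float>(y, x);
    }
}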

Related

producing "unhandled exception X at memory location Y" when checking every pixel of an grayscale image

I am trying to solve a problem where I have to check every pixel of a 640x480 cv::Mat (a greyscale video image -> cv::Mat MainImageSW (global)) and test whether the pixels above, below, to the left and to the right are equal to or below the threshold value (GetTreshholdSW()).
My target matrix XY_ThreshholdMat is also 640x480 and is set equal to Matrix255, which is initialized with "255".
If the requirements are met, I set the pixel of my new Mat XY_ThreshholdMat to "0" at that location.
The aim is to have every pixel which meets the requirements be black and all the others white.
To do so I wrote the following code:
void GraphicsView_PP::Fill_XY_TreshholdMat()
{
    cv::Mat Matrix255(GetPixmapWidth(), GetPixmapHeight(), CV_8UC1, cv::Scalar(255));
    XY_ThreshholdMat = Matrix255;
    for (int y = 1; y < (GetPixmapHeight() - 1); y++)
    {
        for (int x = 1; x < (GetPixmapWidth() - 1); x++)
        {
            if (MainImageSW.at<uchar>(y, x) <= GetTreshholdSW())
            {
                if ((MainImageSW.at<uchar>(y + 1, x) <= GetTreshholdSW()) &&
                    (MainImageSW.at<uchar>(y - 1, x) <= GetTreshholdSW()) &&
                    (MainImageSW.at<uchar>(y, x + 1) <= GetTreshholdSW()) &&
                    (MainImageSW.at<uchar>(y, x - 1) <= GetTreshholdSW()))
                {
                    XY_ThreshholdMat.at<uchar>(y, x) = 0;
                }
            }
        }
    }
    QImage Image((uchar*)XY_ThreshholdMat.data, XY_ThreshholdMat.cols, XY_ThreshholdMat.rows,
                 XY_ThreshholdMat.step, QImage::Format_Grayscale8);
    QPixmap Pixmap = QPixmap::fromImage(Image);
    ui.graphicsView_Bild->UpdateStream(Pixmap);
    cv::waitKey(5000);
}
Because I am checking the pixels above, below, to the left and to the right, I start both for loops at x/y = 1 and let them run to (MaxWidth-1) and (MaxHeight-1).
After checking every individual pixel I want to save the new Mat, where every match is a "0" and everything else is "255", to a pixmap and display it in my custom QGraphicsView element.
Once I run the code I get the typical "unhandled exception X at memory location Y" error, but I don't know why.
Also: I am rather new at programming and I know that the way the code is designed is not the most efficient. All that matters is that it works.
Does anyone have an idea how to solve this problem?
cv::Mat Matrix255(GetPixmapWidth(), GetPixmapHeight(), ...
should be
cv::Mat Matrix255(GetPixmapHeight(), GetPixmapWidth(), ...
Because of this mistake you access the elements out of bounds and your program therefore has undefined behavior.
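One way to avoid mixing up the two arguments entirely is to construct the matrix from the source image's size, since a cv::Size already carries width and height in the order the constructor expects:

cv::Mat Matrix255(MainImageSW.size(), CV_8UC1, cv::Scalar(255));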

Find a single approximating boundary for multiple objects in a binary image

An example case of the image is given below. I use the binary version of the image for further processing. I would like to get the contour of the object to perform shape matching. I have written a function that returns the boundary contour given a binary image. However, a problem arises when the image does not contain one single object, as is the case here: you can see that the lower portion of the leg and even a palm are not attached to the body. I am interested in finding a contour approximating the entire object. So far OpenCV has not helped.
My getContour function uses the opencvblobslib library and looks like this. Basically it detects all the blobs and filters out the really small ones, then checks whether there is only one contour left. This works as long as the objects are connected, but I would like to make it robust so that it can handle the multiple-object case as well.
vector<Point> getContour(Mat binaryImg) {
    CBlobResult res(binaryImg, Mat(), NUMCORES); // We get all possible bounding boxes using this object
    CBlobResult resFiltered;                     // We shall then filter the bboxes if their area is less
                                                 // than alpha % of the total area
    double originalImageArea = binaryImg.rows * binaryImg.cols;
    double alpha = 0.05;
    double lowerLimitRectArea = alpha * originalImageArea;
    for (int i = 0; i < res.GetNumBlobs(); i++) {
        CBlob* b = res.GetBlob(i);
        Rect bbox = b->GetBoundingBox();
        double bbox_area = bbox.area();
        if (bbox_area < lowerLimitRectArea) {
            b->to_be_deleted = 1.0;
        }
    }
    res.Filter(resFiltered, B_EXCLUDE, CBlobGetTBDeleted(), B_EQUAL, 1.0, 1.0);
    if (resFiltered.GetNumBlobs() > 1) {
        cout << " Error case " << endl;
        exit(1);
    }
    CBlob* b = resFiltered.GetBlob(0);
    CBlobContour* extCont = b->GetExternalContour();
    t_PointList contPts = extCont->GetContourPoints();
    vector<Point> contour;
    contour.resize(contPts.size());
    for (int i = 0; i < contPts.size(); i++) {
        contour[i] = cv::Point(contPts[i].x, contPts[i].y);
    }
    return contour;
}
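Since the thread carries no answer, here is a hedged sketch (plain OpenCV, not opencvblobslib) of one way to get a single approximating boundary: morphologically close small gaps, then take the convex hull of every remaining foreground contour. The kernel size is an assumption that depends on how far apart the detached parts are:

vector<Point> getSingleBoundary(const Mat& binaryImg)
{
    // Bridge small gaps between detached parts (leg, palm) before tracing contours.
    Mat closed;
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(15, 15));
    morphologyEx(binaryImg, closed, MORPH_CLOSE, kernel);

    vector<vector<Point> > contours;
    findContours(closed, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    // Pool all remaining contour points and wrap them in one convex hull.
    vector<Point> allPoints, hull;
    for (size_t i = 0; i < contours.size(); i++)
        allPoints.insert(allPoints.end(), contours[i].begin(), contours[i].end());
    if (!allPoints.empty())
        convexHull(allPoints, hull);
    return hull;
}

A convex hull over-approximates concave shapes such as a human silhouette; if that is too loose for shape matching, an alternative is to grow the closing kernel until the parts merge and then keep the largest external contour, which preserves concavities.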

Implementing FFT low-pass filter in C with FFTW

I am trying to create a very simple C++ program that, given an argument in the range [0-100], applies a low-pass filter to a grayscale image that should "compress" it proportionally to the value of the given argument.
I am using the FFTW library.
I have some doubts about how I define the frequency threshold, cut. Is there a more effective way to define such a value?
// fftw_complex *fft
// double[] magnitude
// . . .
int percent = 100;
if (percent < 0 || percent > 100) {
    cerr << "Compression rate must be a value between 0 and 100." << endl;
    return -1;
}
double cut = (double)(w * h) * ((double)percent / (double)100);
for (i = 0; i < (w * h); i++) {
    magnitude[i] = sqrt(pow(fft[i][0], 2.0) + pow(fft[i][1], 2.0));
    if (magnitude[i] < cut) {
        fft[i][0] = 0.0;
        fft[i][1] = 0.0;
    }
}
Update1:
I've changed my code to this, but again I'm not sure this is a proper way to filter frequencies. The image is certainly compressed, but non-square images get messed up, and setting the compression to 100% isn't the real maximum available (I can go up to ~140%).
Here you can find an image of what I see now.
int cX = w / 2;
int cY = h / 2;
cout << "TEST " << ((double)percent / (double)100) * h << endl;
for (i = 0; i < (w * h); i++) {
    int row = i / s;
    int col = i % s;
    int distance = sqrt((col - cX) * (col - cX) + (row - cY) * (row - cY));
    if (distance < ((double)percent / (double)100) * min(cX, cY)) {
        fft[i][0] = 0.0;
        fft[i][1] = 0.0;
    }
}
This is not a low-pass filter at all. A low-pass filter passes low frequencies, i.e. it removes fine details (blurring). You obviously need a 2D FFT for that.
This code just removes random bits, essentially.
[edit]
The new code looks a lot more like a low-pass filter. The 141% setting is expected: the diagonal of a square is sqrt(2), about 1.41 times its side. Converting an index into a row/column pair should use the image width, not some unexplained s.
I don't know where your zero frequency is located. It should be easy to spot (the largest value), but it might be at (0,0) instead of (w/2, h/2).
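For reference, a hedged sketch (not from the thread) of a radial low-pass filter built on FFTW's r2c/c2r transforms, where the DC term sits at index (0,0) and rows past h/2 hold negative frequencies; every name is illustrative, and img is assumed to be a w*h row-major array of doubles:

#include <fftw3.h>
#include <algorithm>
#include <cmath>

void lowPass(double* img, int w, int h, double cutoffFrac /* 0..1 */)
{
    const int wc = w / 2 + 1; // r2c output stores only w/2+1 columns per row
    fftw_complex* freq = (fftw_complex*)fftw_malloc(sizeof(fftw_complex) * h * wc);

    fftw_plan fwd = fftw_plan_dft_r2c_2d(h, w, img, freq, FFTW_ESTIMATE);
    fftw_plan inv = fftw_plan_dft_c2r_2d(h, w, freq, img, FFTW_ESTIMATE);
    fftw_execute(fwd);

    const double maxR = cutoffFrac * 0.5 * std::min(w, h); // cutoff radius in frequency bins
    for (int r = 0; r < h; ++r) {
        const double fy = (r <= h / 2) ? r : r - h; // map rows past h/2 to negative frequencies
        for (int c = 0; c < wc; ++c) {
            const double fx = c; // columns cover only non-negative frequencies
            if (std::sqrt(fx * fx + fy * fy) > maxR) {
                freq[r * wc + c][0] = 0.0; // zero everything beyond the cutoff radius
                freq[r * wc + c][1] = 0.0;
            }
        }
    }

    fftw_execute(inv); // the c2r transform destroys freq, which is fine here
    const double scale = 1.0 / ((double)w * h); // FFTW's inverse transform is unnormalized
    for (int i = 0; i < w * h; ++i) img[i] *= scale;

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(inv);
    fftw_free(freq);
}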

Assert thrown by OpenCV checkVector() on otherwise valid vector<vector<Point3f>>

Been chasing this bug all night, so please forgive any incoherence.
I'm attempting to use the OpenCV's calibrateCamera() to extract intrinsic and extrinsic parameters from a set of fifteen pictures whose object points and world points are given. From what I can tell from debugging, I'm grabbing valid points from the input files and placing them in a vector<Point3f>, which is itself placed into another vector.
I pass the whole shebang to calibrateCamera(),
double rms = calibrateCamera(worldPoints, pixelPoints, src.size(), intrinsic, distCoeffs, rvecs, tvecs);
which throws Assertion failed (ni >= 0) in unknown function, file ...\calibration.cpp, line 3173
Pulling up this file gives us
static void collectCalibrationData( InputArrayOfArrays objectPoints,
                                    InputArrayOfArrays imagePoints1,
                                    InputArrayOfArrays imagePoints2,
                                    Mat& objPtMat, Mat& imgPtMat1, Mat* imgPtMat2,
                                    Mat& npoints )
{
    int nimages = (int)objectPoints.total();
    int i, j = 0, ni = 0, total = 0;
    CV_Assert(nimages > 0 && nimages == (int)imagePoints1.total() &&
        (!imgPtMat2 || nimages == (int)imagePoints2.total()));

    for( i = 0; i < nimages; i++ )
    {
        ni = objectPoints.getMat(i).checkVector(3, CV_32F);
        CV_Assert( ni >= 0 );
        total += ni;
    }
    ...
So far as I know, a Point3f is of CV_32F depth, and I can see good data in the vector of vectors just before calibrateCamera is called.
Any ideas what might be happening here? calibrateCamera() requires a vector<vector<Point3f>>, as said by http://aishack.in/tutorials/calibrating-undistorting-with-opencv-in-c-oh-yeah/ and the documentation; hopefully getMat(i) isn't failing due to that.
Could it possibly have been called on the vector<vector<Point2f>> of pixel points just after it? I have been over so many errors I am willing to believe anything.
Edit:
Incidentally, checkVector()'s documentation was not really helpful:
int cv::Mat::checkVector(int elemChannels, int depth = -1, bool requireContinuous = true) const
returns N if the matrix is 1-channel (N x ptdim) or ptdim-channel (1 x N) or (N x 1); a negative number otherwise
The problem is possibly in one of your InputArrayOfArrays arguments (in worldPoints precisely, if the assertion is thrown from the line pasted in your question). Mats should work just fine here.
I solved the same assertion error in my code by making all three InputArrayOfArrays (vector<vector<Point3f>> and vector<vector<Point2f>> in my case) vectors of the same length with fully populated entries. So my problem was in my architecture: my objectPoints vector contained empty entries (even though the existing data was valid), and calibrate.cpp requires that no empty entries are present in any of the three InputArrayOfArrays. By the way, I am using greyscale images for calibration, so single-channel data.
In the calib3d source, once you have checked that the data types match, the most probable reason for the error being thrown is a null value. You might try double-checking your input data:
1) count the number of valid calibration images from your chosen structure:
validCalibImages = (int)goodCalibrationImages.size()
2) define worldPoints as vector<vector<Point3f>> worldPoints
3) IMPORTANT: resize it to accommodate the data for each calibration entry:
worldPoints.resize(validCalibImages)
4) populate it with data, e.g.
for (int k = 0; k < (int)goodCalibImages.size(); k++) {
    for (int i = 0; i < chessboardSize.height; i++) {
        for (int j = 0; j < chessboardSize.width; j++) {
            worldPoints[k].push_back(Point3f(i * squareSize, j * squareSize, 0));
        }
    }
}
Hope it helps!
I agree with FSaccilotto: call checkVector and make sure you are passing a vector of size n of Mat 1x1 (3-channel), not a vector of Mat 1 x n (3-channel), or worse, Mat 1 x n (2-channel), which is what MatOfPoint spits out. That usually fixes 90% of "assertion failed" issues. Explicitly declare the Mat yourself.
The object pattern is somewhat strange in that the x, y, z coordinates are in the channels, not in the Mat dimensions.
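For reference, a minimal end-to-end sketch of building calibrateCamera inputs so that checkVector(3, CV_32F) passes. It assumes chessboard calibration with corner lists already collected in allCorners; every name here is illustrative:

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// One fully populated vector<Point3f> per view: no empty entries, all CV_32F data.
double calibrateFromCorners(const std::vector<std::vector<Point2f> >& allCorners,
                            Size boardSize, float squareSize, Size imageSize,
                            Mat& intrinsic, Mat& distCoeffs)
{
    std::vector<std::vector<Point3f> > worldPoints(allCorners.size());
    for (size_t k = 0; k < allCorners.size(); ++k)
        for (int i = 0; i < boardSize.height; ++i)
            for (int j = 0; j < boardSize.width; ++j)
                worldPoints[k].push_back(Point3f(j * squareSize, i * squareSize, 0.f));

    std::vector<Mat> rvecs, tvecs;
    return calibrateCamera(worldPoints, allCorners, imageSize,
                           intrinsic, distCoeffs, rvecs, tvecs);
}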

Finding local maxima in a grayscale image with OpenCV

I am trying to create my personal blob detection algorithm.
As far as I know, I first must create different Gaussian kernels with different sigmas (which I am doing using Mat kernel = getGaussianKernel(x, y);), then get the Laplacian of that kernel, and then filter the image with it to create my scale space. Now I need to find the local maxima in each resulting image of the scale space, but I cannot seem to find a proper way to do so. My code so far is:
vector<Point> GetLocalMaxima(const cv::Mat Src, int MatchingSize, int Threshold)
{
    vector<Point> vMaxLoc(0);
    if (MatchingSize % 2 == 0) // MatchingSize has to be "odd" and > 0
    {
        return vMaxLoc;
    }
    vMaxLoc.reserve(100); // Reserve place for fast access
    Mat ProcessImg = Src.clone();
    int W = Src.cols;
    int H = Src.rows;
    int SearchWidth = W - MatchingSize;
    int SearchHeight = H - MatchingSize;
    int MatchingSquareCenter = MatchingSize / 2;
    uchar* pProcess = (uchar*)ProcessImg.data; // The pointer to image data
    int Shift = MatchingSquareCenter * (W + 1);
    int k = 0;
    for (int y = 0; y < SearchHeight; ++y)
    {
        int m = k + Shift;
        for (int x = 0; x < SearchWidth; ++x)
        {
            if (pProcess[m++] >= Threshold)
            {
                Point LocMax;
                Mat mROI(ProcessImg, Rect(x, y, MatchingSize, MatchingSize));
                minMaxLoc(mROI, NULL, NULL, NULL, &LocMax);
                if (LocMax.x == MatchingSquareCenter && LocMax.y == MatchingSquareCenter)
                {
                    vMaxLoc.push_back(Point(x + LocMax.x, y + LocMax.y));
                    // imshow("W1", mROI); cvWaitKey(0); // For debug
                }
            }
        }
        k += W;
    }
    return vMaxLoc;
}
which I found in this thread, and which supposedly returns a vector of points where the maxima are. It does return a vector of points, but all the x and y coordinates of each point are always -17891602... What to do?
Please, if you are going to point me to something other than a correction of my code, be informative, because I know nothing about OpenCV. I am just learning.
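For the scale-space construction described at the top of the question, here is a minimal sketch of one LoG level (Gaussian blur, then Laplacian, then scale normalization), assuming a CV_32F grayscale input; the function name is illustrative:

Mat buildLoGLevel(const Mat& gray32f, double sigma)
{
    Mat blurred, logImg;
    GaussianBlur(gray32f, blurred, Size(0, 0), sigma); // kernel size derived from sigma
    Laplacian(blurred, logImg, CV_32F);                // LoG response at this scale
    logImg *= sigma * sigma;                           // normalize responses across scales
    return logImg;
}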
The problem here is that your LocMax point is declared inside the inner loop and never initialized, so it's returning garbage data every time. If you look back at the StackOverflow question you linked, you'll see that their similar variable Point maxLoc(0,0) is declared at the top and constructed to point at the middle of the search window. It only needs to be initialized once. Subsequent loop iterations will replace the value with the minMaxLoc function result.
In summary, remove this line in your inner loop:
Point LocMax; // delete this
And add a slightly altered version near the top:
vector <Point> vMaxLoc(0); // This was your original first line
Point LocMax(0,0); // your new second line
That should get you started anyway.
I found it, guys. The problem was that my threshold was too high. I do not understand why it gave me negative points instead of zero points, but lowering the threshold worked.
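For reference, a hedged alternative (not from this thread): local maxima can also be found by comparing the image with its own gray-level dilation, since a pixel that equals the dilated value is the maximum of its neighborhood. findNonZero then collects the surviving locations:

vector<Point> localMaxima(const Mat& img, int winSize, int threshold)
{
    Mat dilated;
    dilate(img, dilated, getStructuringElement(MORPH_RECT, Size(winSize, winSize)));
    Mat maxMask = (img == dilated) & (img >= threshold); // CV_8U mask of local maxima

    vector<Point> maxima;
    findNonZero(maxMask, maxima);
    return maxima;
}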