OpenCV solve throws cv::Exception - C++

On Windows 10, running Visual Studio 2015, with OpenCV 3.0.
I am using OpenCV to correlate two images and determine the translation between them with matchTemplate. I want a subpixel estimate, so I take an 11x11 window of values from the correlation output and fit a quadratic surface to those points.
void Sector1::ResampSector(cv::Mat In, cv::Mat R, cv::Mat Out, cv::Point Loc)
{
    // first get fractional offset
    int lsq = 5;
    // Ax^2 + Bxy + Cy^2 + Dx + Ey + F = R
    cv::setBreakOnError(true);
    cv::Mat A(121, 6, CV_32F);
    cv::Mat B(121, 1, CV_32F);
    cv::Mat C(6, 1, CV_32F);
    int L = 0;
    for (int i = Loc.y - lsq; i <= Loc.y + lsq; i++) {
        for (int j = Loc.x - lsq; j <= Loc.x + lsq; j++) {
            A.at<float>(L, 0) = (float)(i * i);
            A.at<float>(L, 1) = (float)(i * j);
            A.at<float>(L, 2) = (float)(j * j);
            A.at<float>(L, 3) = (float)i;
            A.at<float>(L, 4) = (float)j;
            A.at<float>(L, 5) = 1.f;
            B.at<float>(L) = R.at<float>(i, j); // since this is 3-band stuff?
            L++;
        } // for j
    } // for i
    bool rc = cv::solve(A, B, C);
The call to cv::solve returns false, and two cv::Exceptions are thrown at the same address, which lies outside any of the image matrices or other variables. I have looked at the contents of A, B and C in the memory window and they all appear correct, as do their structures. I have tried to step into solve, but I do not have a build of the library with debug symbols.
Any clue where I have gone wrong? Any suggestions for tracking the problem further?

LAPACK complains that the default method will not work. The correction is to pass the flag DECOMP_QR as the fourth, optional, argument to the call to solve().
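In code, the fix looks something like this (a sketch based on the answer above, reusing A, B and C from the question; the default DECOMP_LU expects a square system, while this one is 121x6):

// Over-determined least-squares system: request QR decomposition explicitly.
// DECOMP_SVD or DECOMP_NORMAL would also be valid choices here.
bool rc = cv::solve(A, B, C, cv::DECOMP_QR);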


Affine transform in C++

I am currently working on a school project on image processing in Visual Studio 2013, using OpenCV 3.1. My goal (for now) is to transform an image, using an affine transform, so that a trapezoidal board becomes a rectangle.
To do that I have subtracted certain channels and thresholded the image, so I now have a binary image with white blocks in the corners of the board.
Now I need to pick the 4 white points closest to each corner and (using an affine transform) map them to the corners of the transformed image.
Since this is my first time using OpenCV, I am stuck.
Here's my code:
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <stdlib.h>
#include <stdio.h>
#include <vector>
#include <algorithm> // for std::min
int main() {
    double dist;
    cv::Mat image;
    image = cv::imread("C:\\Users\\...\\ideal.png");
    cv::Mat imagebin;
    imagebin = cv::imread("C:\\Users\\...\\ideal.png");
    cv::Mat imageerode;
    //cv::imshow("Test", image);
    cv::Mat src = cv::imread("C:\\Users\\...\\ideal.png");
    std::vector<cv::Mat> img_rgb;
    cv::split(src, img_rgb);
    //cv::imshow("ideal.png", img_rgb[2] - img_rgb[1]);
    cv::threshold(img_rgb[2] - 0.5 * img_rgb[1], imagebin, 20, 255, CV_THRESH_BINARY);
    cv::erode(imagebin, imageerode, cv::Mat(), cv::Point(1, 1), 2, 1, 1);
    cv::erode(imageerode, imageerode, cv::Mat(), cv::Point(1, 1), 2, 1, 1);

    // cv::Point2f array[4];
    // std::vector<cv::Point2f> array;

    dist = imageerode.rows + imageerode.cols; // initialise before taking the minimum
    for (int i = 0; i < imageerode.cols; i++)
    {
        for (int j = 0; j < imageerode.rows; j++)
        {
            // note: cv::Mat::at takes (row, col), i.e. (y, x)
            if (imageerode.at<uchar>(j, i) > 0)
            {
                dist = std::min(dist, (double)(i + j));
            }
        }
    }

    //cv::imshow("Test binary", imagebin);
    cv::namedWindow("Test", CV_WINDOW_NORMAL);
    cv::imshow("Test", imageerode);
    cv::waitKey(0);
    std::cout << "Hello world!";
    return 0;
}
As you can see, I don't know how to loop over each white pixel using image.at and save its distance to each corner.
I would appreciate some help.
Also: I don't want this just done for me. I really want to learn how to do it, but I'm currently stuck.
Thank you
EDIT:
I think I'm done finding the coordinates of the 4 points, but I can't really get my head around the warpAffine syntax.
Code:
for (int i = 0; i < imageerode.cols; i++)
{
    for (int j = 0; j < imageerode.rows; j++)
    {
        if (imageerode.at<uchar>(i, j) > 0)
        {
            if (i + j < distances[0])
            {
                distances[0] = i + j;
                coordinates[0] = i;
                coordinates[1] = j;
            }
            if (i + imageerode.cols - j < distances[1])
            {
                distances[1] = i + imageerode.cols - j;
                coordinates[2] = i;
                coordinates[3] = j;
            }
            if (imageerode.rows - i + j < distances[2])
            {
                distances[2] = imageerode.rows - i + j;
                coordinates[4] = i;
                coordinates[5] = j;
            }
            if (imageerode.rows - i + imageerode.cols - j < distances[3])
            {
                distances[3] = imageerode.rows - i + imageerode.cols - j;
                coordinates[6] = i;
                coordinates[7] = j;
            }
        }
    }
}
I initialise all of the distances values to imageerode.cols + imageerode.rows, since that is the maximum value they can take.
Also: note that I'm using taxicab geometry. I was told it's faster and the results are pretty much the same.
If anyone could help me with warpAffine it would be great. I don't understand where to put the coordinates I have found.
Thank you
I am not sure what your "trapezoidal board" looks like, but if it is related to the rectangle by a perspective distortion, as when you photograph a rectangle with a camera, then an affine transform is not enough. Use a perspective transform. I think the Features2D + Homography to find a known object tutorial is very close to what you want to do.
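A minimal sketch of that perspective route (the corner points and output size below are hypothetical placeholders; in the question they would come from the corner search, and image is the loaded board image):

// Hypothetical corners of the board, ordered top-left, top-right,
// bottom-right, bottom-left.
cv::Point2f tl(42, 35), tr(590, 30), br(605, 420), bl(38, 430);
cv::Size dstSize(640, 480); // assumed output size

std::vector<cv::Point2f> srcPts = { tl, tr, br, bl };
std::vector<cv::Point2f> dstPts = {
    cv::Point2f(0.f, 0.f),
    cv::Point2f(dstSize.width - 1.f, 0.f),
    cv::Point2f(dstSize.width - 1.f, dstSize.height - 1.f),
    cv::Point2f(0.f, dstSize.height - 1.f)
};

cv::Mat H = cv::getPerspectiveTransform(srcPts, dstPts);
cv::Mat warped;
cv::warpPerspective(image, warped, H, dstSize);
// A pure affine warp follows the same pattern, but with three point pairs
// and cv::getAffineTransform / cv::warpAffine.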

Adding floats in OpenCV 3.0

I have a problem with OpenCV 3.0.
I applied 12 Gabor filters (12 different orientations) to one image and stored the results.
Now I want to add all those images and then divide each value by 12 to obtain the mean of the 12 filter responses.
Because those images are RGB, I have to work on each channel separately.
The problem is: when I add all the values, I obtain values > 12, even though all the individual values are between 0 and 1.
The buggy part of the code:
for (i = 0; i < gaborV.size(); ++i) { // gaborV contains the 12 Gabor filter outputs
    std::vector<cv::Mat> vec_split; // I split because of the 3 channels
    cv::split(gaborV[i], vec_split);
    for (int k = 0; k < imgCol.rows; ++k) {
        for (int j = 0; j < imgCol.cols; ++j) {
            if (k == 1 && j == 1)
                std::cout << mat_X.at<float>(k, j) << " " << vec_split[0].at<float>(k, j) << std::endl;
            mat_X.at<float>(k, j) += vec_split[0].at<float>(k, j);
            mat_Y.at<float>(k, j) += vec_split[1].at<float>(k, j);
            mat_Z.at<float>(k, j) += vec_split[2].at<float>(k, j);
        }
    }
}
and mat_X, mat_Y and mat_Z are created as follows:
mat_X = mat_Y = mat_Z = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
As I said, all values in vec_split are between 0 and 1, but when the loop finishes, mat_X, mat_Y and mat_Z contain values > 12.
The output of the cout I used:
0 0.507358
1.54751 0.496143
3.00963 0.528832
4.53887 0.465426
... and at the end I have 15.9459
And I don't understand, since 0 + 0.507358 != 1.54751 and 1.54751 + 0.496143 != 3.00963 ...
Does someone understand the problem?
Thanks, everyone!
I think the problem is here:
mat_X = mat_Y = mat_Z = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
The way you initialise these arrays results in all three cv::Mat objects referencing the same data. Only one Mat is created, so your code increments the values in this single array three times.
For info, OpenCV uses a reference-counting mechanism with cv::Mat, and the assignment operator simply creates a new reference to the existing data. If you want a genuine deep copy of a cv::Mat, you need to use cv::Mat::clone().
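A minimal standalone illustration of that aliasing (not the asker's code):

cv::Mat a = cv::Mat::zeros(2, 2, CV_32FC1);
cv::Mat b = a;              // b shares a's data; no copy is made
b.at<float>(0, 0) = 5.f;    // a.at<float>(0, 0) is now also 5.f
cv::Mat c = a.clone();      // c is a genuine deep copy
c.at<float>(0, 0) = 9.f;    // a is unaffected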
So, instead, initialise like so:
mat_X = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
mat_Y = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
mat_Z = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
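As a side note, the same initialisation can be written with cv::Mat::zeros, and the per-pixel accumulation loop can be replaced by whole-matrix arithmetic (a sketch, assuming the split channels are CV_32F like mat_X):

mat_X = cv::Mat::zeros(imgColNormalize.rows, imgColNormalize.cols, CV_32FC1);
mat_Y = cv::Mat::zeros(imgColNormalize.rows, imgColNormalize.cols, CV_32FC1);
mat_Z = cv::Mat::zeros(imgColNormalize.rows, imgColNormalize.cols, CV_32FC1);

for (size_t i = 0; i < gaborV.size(); ++i) {
    std::vector<cv::Mat> vec_split;
    cv::split(gaborV[i], vec_split);
    mat_X += vec_split[0];              // element-wise Mat addition
    mat_Y += vec_split[1];
    mat_Z += vec_split[2];
}
// Mean over the 12 filter responses.
mat_X = mat_X / static_cast<double>(gaborV.size());
mat_Y = mat_Y / static_cast<double>(gaborV.size());
mat_Z = mat_Z / static_cast<double>(gaborV.size());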

Convert Matlab based Ridge Segment Function into C++

I want to perform ridge segmentation on an input image using OpenCV. On the internet I found the following Matlab code, which fits my goal quite well:
function [normim, mask, maskind] = ridgesegment(im, blksze, thresh)
    im = normalise(im, 0, 1);  % normalise to have zero mean, unit std dev
    fun = inline('std(x(:))*ones(size(x))');
    stddevim = blkproc(im, [blksze blksze], fun);
    mask = stddevim > thresh;
    maskind = find(mask);
    % Renormalise image so that the *ridge regions* have zero mean, unit
    % standard deviation.
    im = im - mean(im(maskind));
    normim = im/std(im(maskind));
end
So I tried to convert it to C++. Up to now, I can only finish these parts:
cv::Mat ridgeSegment(cv::Mat inputImg, int blockSize, double thresh)
{
    cv::normalize(inputImg, inputImg, 0, 1.0, cv::NORM_MINMAX, CV_8UC1);
    blkproc(inputImg, cv::Size(blockSize, blockSize), thresh);
    ... // how to do the next steps ????
}

cv::Mat blkproc(cv::Mat img, cv::Size size, double thresh)
{
    cv::Mat croppedImg;
    for (int i = 0; i < img.cols; i += size.width)
    {
        for (int j = 0; j < img.rows; j += size.height)
        {
            croppedImg = img(cv::Rect(i, j, size.width, size.height)).clone();
            // perform standard deviation calculation here???
        }
    }
    return croppedImg;
}
I don't know how to proceed from here, especially the stddevim computation and the steps after it. Could someone explain and show me the rest? Thank you in advance.
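For reference, one possible sketch of the remaining steps (an untested approximation of the Matlab logic, not a drop-in answer; it returns the mask through an extra output parameter and assumes blockSize fits within the image):

cv::Mat ridgeSegment(const cv::Mat& inputImg, int blockSize, double thresh, cv::Mat& mask)
{
    // Normalise to zero mean, unit standard deviation (like the Matlab normalise()).
    cv::Mat im;
    inputImg.convertTo(im, CV_32F);
    cv::Scalar mean, stddev;
    cv::meanStdDev(im, mean, stddev);
    im = (im - mean[0]) / stddev[0];

    // Per-block standard deviation -> mask of ridge blocks (the blkproc step).
    mask = cv::Mat::zeros(im.size(), CV_8U);
    for (int y = 0; y + blockSize <= im.rows; y += blockSize)
        for (int x = 0; x + blockSize <= im.cols; x += blockSize)
        {
            cv::Rect roi(x, y, blockSize, blockSize);
            cv::meanStdDev(im(roi), mean, stddev);
            if (stddev[0] > thresh)
                mask(roi).setTo(cv::Scalar(255));
        }

    // Renormalise so that the ridge regions have zero mean, unit std dev.
    cv::meanStdDev(im, mean, stddev, mask);
    return (im - mean[0]) / stddev[0];
}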

SVM in OpenCV throws "cv::Exception at memory location"

I'm trying to create a simple OCR application with SVM, OpenCV, C++ and Visual Studio 2008 (an MFC app).
My training samples are binary images of machine-printed digits (0-9). I want to use DAGSVM for this multi-class problem, so I need to create 45 SVMs, each of which separates 2 classes (SVM(0,1), SVM(0,2) ... SVM(8,9)).
Here's how things are going:
SVM's parameters:
CvSVMParams params;
params.svm_type = CvSVM::C_SVC;
params.kernel_type = CvSVM::LINEAR;
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
The training images of class i are stored in the matrix trainData[i] (each row holds the pixels of a 28x28 image, so the matrix has 784 columns).
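For context, one way such a row might be built (a sketch with a hypothetical digitImg, not the asker's actual code):

// Flatten a 28x28 8-bit digit image into a single 1x784 CV_32F row and
// append it to the training matrix for class i.
cv::Mat row;
digitImg.convertTo(row, CV_32FC1, 1.0 / 255.0); // scale pixel values to [0, 1]
row = row.reshape(1, 1);                        // 1 channel, 1 row -> 1 x 784
trainData[i].push_back(row);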
When training each SVM, I create two matrices called curTrainData and curTrainLabel.
for (int i = 0; i < 9; i++)
    for (int j = i + 1; j < 10; j++)
    {
        curTrainData.create(trainData[i].rows + trainData[j].rows, 784, CV_32FC1);
        curTrainLabel.create(curTrainData.rows, 1, CV_32FC1);
        // merge the two matrices trainData[i] & trainData[j]
        for (int k = 0; k < trainData[i].rows; k++)
        {
            curTrainLabel.at<float>(k, 0) = 1.0; // class of digit i
            for (int l = 0; l < 784; l++)
                curTrainData.at<float>(k, l) = trainData[i].at<float>(k, l);
        }
        for (int k = 0; k < trainData[j].rows; k++)
        {
            curTrainLabel.at<float>(k + trainData[i].rows, 0) = -1.0; // class of digit j
            for (int l = 0; l < 784; l++)
                curTrainData.at<float>(k + trainData[i].rows, l) = trainData[j].at<float>(k, l);
        }
        svms[i][j].train(curTrainData, curTrainLabel, Mat(), Mat(), params);
    }
I get an error at the call svms[i][j].train(...). The full error is:
Unhandled exception at 0x75b5d36f in svm.exe: Microsoft C++ exception: cv::Exception at memory location 0x0022af8c..
To tell the truth, I don't fully understand the SVM implementation in OpenCV, and I can't find any example of it working with objects in images.
I would be really grateful if someone could tell me what is wrong :(
Update 09/03:
I was mistaken. The error actually comes from:
str.Format(_T("Results\trained_%d_%d.xml"), i, j);
svms[i][j].save(CT2A(str));
str is a CString variable.
The error remains even if I change it to:
svms[i][j].save("Results\trained.xml");
I've created the Results folder, and other files are written into it without problems (files from fopen(), imwrite(), ...). I don't know why I can't include the folder in the path when it comes to the SVM's save method.
If you use a backslash "\", you have to write "\\" instead (or use a forward slash "/"); otherwise the "\t" in the path is interpreted as a tab character.
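Applied to the code above, that would be (same variables as in the question):

// "\t" in "Results\trained_%d_%d.xml" was being read as a tab character.
str.Format(_T("Results\\trained_%d_%d.xml"), i, j);
svms[i][j].save(CT2A(str));
// or, equivalently:
str.Format(_T("Results/trained_%d_%d.xml"), i, j);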

Accessing a certain pixel's RGB value in OpenCV

I have searched the internet and Stack Overflow thoroughly, but I haven't found an answer to my question:
How can I get and set the RGB value of a certain pixel (given by its x, y coordinates) in OpenCV? Importantly, I'm writing in C++ and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very comfortable to use; as far as I know, it comes from the C API.
Yes, I'm aware of the existing Pixel access in OpenCV 2.2 thread, but it was only about black-and-white bitmaps.
EDIT:
Thank you very much for all your answers. I see there are many ways to get/set a pixel's RGB value. I got one more idea from a close friend of mine (thanks, Benny!). It's very simple and effective. I think it's a matter of taste which one you choose.
Mat image;
(...)
Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x);
And then you can read/write RGB values with:
p->x //B
p->y //G
p->z //R
Try the following:
cv::Mat image = ...do some stuff...;
image.at<cv::Vec3b>(y,x) gives you the pixel value as a cv::Vec3b (note that OpenCV stores colour images in BGR order by default, so index 0 is blue). You can set a new value with:
image.at<cv::Vec3b>(y,x)[0] = newval[0];
image.at<cv::Vec3b>(y,x)[1] = newval[1];
image.at<cv::Vec3b>(y,x)[2] = newval[2];
The low-level way would be to access the matrix data directly. In an RGB image (which I believe OpenCV typically stores as BGR), and assuming your cv::Mat variable is called frame, you could get the blue value at location (x, y) (from the top left) this way:
frame.data[frame.channels()*(frame.cols*y + x)];
Likewise, to get B, G, and R:
uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];
Note that this code assumes the matrix is continuous, i.e. that the stride (step) of each row equals the image width times the number of channels.
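A stride-safe alternative is to go through the row pointer, which accounts for the matrix step (a sketch, assuming frame is of type CV_8UC3):

cv::Vec3b* row = frame.ptr<cv::Vec3b>(y);  // pointer to the start of row y
uchar b = row[x][0];
uchar g = row[x][1];
uchar r = row[x][2];
row[x] = cv::Vec3b(255, 0, 0);             // write: set pixel (x, y) to pure blue (BGR)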
A piece of code is easier to follow for people who have this problem, so I'm sharing mine; you can use it directly. Please note that OpenCV stores pixels in BGR order.
cv::Mat vImage_;
if (src_)
{
    cv::Vec3f vec_;
    for (int i = 0; i < vHeight_; i++)
        for (int j = 0; j < vWidth_; j++)
        {
            vec_ = cv::Vec3f((*src_)[0] / 255.0, (*src_)[1] / 255.0, (*src_)[2] / 255.0); // please note that OpenCV stores pixels as BGR
            vImage_.at<cv::Vec3f>(vHeight_ - 1 - i, j) = vec_;
            ++src_;
        }
}
if (!vImage_.data) // check for invalid input
    printf("failed to read image by OpenCV.");
else
{
    cv::namedWindow(windowName_, CV_WINDOW_AUTOSIZE);
    cv::imshow(windowName_, vImage_); // show the image
}
The current version allows the cv::Mat::at function to handle 3 dimensions. So for a Mat object m, m.at<uchar>(0,0,0) should work.
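A short sketch of that 3-dimensional access (the sizes are arbitrary):

int sizes[] = { 4, 4, 3 };                   // a 4x4x3 single-channel volume
cv::Mat m(3, sizes, CV_8UC1, cv::Scalar(0));
m.at<uchar>(0, 0, 2) = 255;                  // indexed as (i0, i1, i2)
uchar v = m.at<uchar>(0, 0, 2);              // v == 255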
uchar* value = img2.data; // pointer to the first pixel byte; the data is one flat array of channel values
int r = 2;
for (size_t i = 0; i < img2.cols * (img2.rows * img2.channels()); i++)
{
    if (r > 2) r = 0;           // r cycles over the three channel bytes of each pixel
    if (r == 0) value[i] = 0;
    if (r == 1) value[i] = 0;
    if (r == 2) value[i] = 255;
    r++;
}
// Marks (in the green channel) every pixel lying inside the given rotated ellipse.
const double pi = boost::math::constants::pi<double>(); // requires <boost/math/constants/constants.hpp>

cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse)
{
    float distance = 2.0f;
    float angle = ellipse.angle;
    cv::Point ellipse_center = ellipse.center;
    float major_axis = ellipse.size.width / 2;
    float minor_axis = ellipse.size.height / 2;

    for (int x = 0; x < image.cols; x++)
    {
        for (int y = 0; y < image.rows; y++)
        {
            // rotate the point into the ellipse's coordinate frame
            auto u =  cos(angle * pi / 180) * (x - ellipse_center.x) + sin(angle * pi / 180) * (y - ellipse_center.y);
            auto v = -sin(angle * pi / 180) * (x - ellipse_center.x) + cos(angle * pi / 180) * (y - ellipse_center.y);
            distance = (u / major_axis) * (u / major_axis) + (v / minor_axis) * (v / minor_axis);
            if (distance <= 1)
            {
                image.at<cv::Vec3b>(y, x)[1] = 255; // set the green channel
            }
        }
    }
    return image;
}