Operations with vectors and an XML file for SVM in C++

I'm writing code for SVM detection using OpenCV. Currently, for the training data, I have two matrices (positive and negative features) created by:
const size_t N=12;
std::vector<std::array<int,N>> matrixForTrainingDataPos;
std::vector<std::array<int,N>> matrixForTrainingDataNeg;
populated with 12 features for each image. I have 100 positive images and 140 negative images, so matrixForTrainingDataPos is [100][12] and matrixForTrainingDataNeg is [140][12]. Now I have to concatenate them to get:
float trainingData[240][12] = {--------};
Mat trainingDataMat(240, 12, CV_32FC1, trainingData);
I tried some operations such as push_back but did not succeed. I did, however, manage to build an array of 240 elements for the labels: 100 set to 1 and 140 set to -1, using two for loops. The next step is to save trainingData to an XML file, so that when the program is launched it creates the file if it does not exist and skips all of the trainingData processing if it has already been done.
Can you help me?
Thanks!

int count = 0;
/* copy positive sample matrix data */
for (int i = 0; i < matrixForTrainingDataPos.size(); i++)
{
    for (int j = 0; j < N; j++)
    {
        trainingData[count][j] = matrixForTrainingDataPos[i][j];
    }
    count++;
}
/* copy negative sample matrix data in the same way */
for (int i = 0; i < matrixForTrainingDataNeg.size(); i++)
{
    for (int j = 0; j < N; j++)
    {
        trainingData[count][j] = matrixForTrainingDataNeg[i][j];
    }
    count++;
}

It works. But the compiler warned about comparing signed and unsigned values, so I had to declare the i and j variables as unsigned: for (unsigned i = 0; ...
As for my second question: how can I save and load this matrix to an XML file on the first run, so that it does not have to be recalculated on later runs?
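One way to handle the caching (a minimal sketch, not tested; the file name trainingData.xml and the node name "trainingData" are arbitrary choices here) is cv::FileStorage, which can read and write a cv::Mat as XML:

cv::Mat trainingDataMat;
cv::FileStorage fs("trainingData.xml", cv::FileStorage::READ);
if (fs.isOpened())
{
    // the file already exists: just load the cached matrix
    fs["trainingData"] >> trainingDataMat;
    fs.release();
}
else
{
    // first run: build trainingData[240][12] from the two feature vectors as above,
    // wrap it in a Mat (clone so the Mat owns its data), and save it for later runs
    trainingDataMat = cv::Mat(240, 12, CV_32FC1, trainingData).clone();
    cv::FileStorage out("trainingData.xml", cv::FileStorage::WRITE);
    out << "trainingData" << trainingDataMat;
    out.release();
}

After this block, trainingDataMat can be passed to the SVM training whether it was loaded from the file or just computed.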

Related

c++ Search Blocks Of Large Image For Sub Image

Given that my large image and sub-image are 2D matrices, how would I be able to search my large matrix block by block until my sub-matrix is found? It's like OpenCV template matching, but I'm not using that, so this needs to be pure C++.
Something like this. Some sample code would be much appreciated.
SearchBlock(/* parameters */)
{
    Matrix Block;
    Block.Rows = /* define block rows */;
    Block.Cols = /* define block cols */;
    Block.data = new double[Block.Rows * Block.Cols];
    for (int i = 0; i < Block.Rows; i++)
        for (int j = 0; j < Block.Cols; j++)
        {
            // copy/compare elements of the large image here
        }
    return Block;
}
You can achieve this by iterating over the elements of the image matrix and checking if the neighbouring elements correspond to the elements of the subimage you are looking for.
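As a rough illustration of that idea (a sketch only, assuming a hypothetical Matrix struct with the Rows, Cols and flat double data fields used in the question, stored row-major), a naive exhaustive search could look like this:

bool MatchAt(const Matrix &img, const Matrix &sub, int top, int left)
{
    // compare every element of the sub image against the corresponding
    // element of the large image at offset (top, left)
    for (int i = 0; i < sub.Rows; i++)
        for (int j = 0; j < sub.Cols; j++)
            if (img.data[(top + i) * img.Cols + (left + j)] != sub.data[i * sub.Cols + j])
                return false;
    return true;
}

bool SearchBlock(const Matrix &img, const Matrix &sub, int &foundRow, int &foundCol)
{
    // slide the sub image over every position where it still fits inside img
    for (int r = 0; r + sub.Rows <= img.Rows; r++)
        for (int c = 0; c + sub.Cols <= img.Cols; c++)
            if (MatchAt(img, sub, r, c))
            {
                foundRow = r;
                foundCol = c;
                return true;
            }
    return false;
}

For real images you would usually compare with a tolerance (or a sum-of-squared-differences threshold) instead of exact equality, since pixel values rarely match exactly.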

OpenCV not recognizing Mat size

I'm trying to print an image using OpenCV defining a 400x400 Mat:
plot2 = cv::Mat(400,400, CV_8U, 255);
But when I try to print the points, something strange happens. The y coordinate only goes up to the first 100 values. That is, if I print the point (50,100), it does not appear at the 100/400th part of the columns, but at the end. Somehow, 400 columns have turned into 100.
For example, when running this:
for (int j = 0; j < 95; ++j){
plot2.at<int>(20, j) = 0;
}
cv::imshow("segunda pared", plot2);
The result shows a line that goes to 95 but occupies almost all of the 400 points, when it should only occupy 95/400ths of the screen.
What am I doing wrong?
When you defined your cv::Mat, you clearly stated that its type is CV_8U:
plot2 = cv::Mat(400,400, CV_8U, 255);
But when you access it, you claim that its type is int, which is usually a signed 32-bit type, not an unsigned 8-bit one. So the solution is:
for (int j = 0; j < 95; ++j){
plot2.at<uchar>(20, j) = 0;
}
Important note: be aware that OpenCV uses the standard C++ types, not the fixed-width ones, so there is no need to use fixed-size types like uint16_t or similar; when OpenCV and your code are compiled on another platform, both of them will change together.
By the way, one good way to iterate through your cv::Mat is:
for (int row = 0; row < my_mat.rows; ++row){
    auto row_ptr = my_mat.ptr<uchar>(row);
    for (int col = 0; col < my_mat.cols; ++col){
        // do whatever you want with row_ptr[col] (read/write)
    }
}

Opencv Mat vector assignment to a row of a matrix, fastest way?

What is the fastest way of assigning a vector to a matrix row in a loop? I want to fill a data matrix along its rows with vectors. These vectors are computed in a loop, and the loop runs until all the entries of the data matrix have been filled with those vectors.
Currently I am using the cv::Mat::at<>() method to access the elements of the matrix and fill them with the vector; however, this process seems quite slow. I have tried another way, using X.row(index) = data_vector; it works fast but fills my matrix X with garbage values, and I cannot understand why.
I read that there is another way, using pointers (the fastest way), but I have not been able to understand it. Can somebody explain how to use them, or suggest other methods?
Here is a part of my code:
#define OFFSET 2
cv::Mat im = cv::imread("001.png", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat X = cv::Mat((im.rows - 2*OFFSET)*(im.cols - 2*OFFSET), 25, CV_64FC1); // Holds the training data. Data contains image patches
cv::Mat patch = cv::Mat(5, 5, im.type()); // Holds a cropped image patch
typedef cv::Vec<float, 25> Vec25f;
int ind = 0;
for (int row = 0; row < (im.rows - 2*OFFSET); row++){
    for (int col = 0; col < (im.cols - 2*OFFSET); col++){
        cv::Mat temp_patch = im(cv::Rect(col, row, 5, 5)); // crop an image patch (5x5) at each pixel
        patch = temp_patch.clone(); // Needs to do this because temp_patch is not continuous in memory
        patch.convertTo(patch, CV_64FC1);
        Vec25f data_vector = patch.reshape(0, 1); // make it a row vector (1x25).
        for (int i = 0; i < 25; i++)
        {
            X.at<float>(ind, i) = data_vector[i]; // Currently I am using this way (quite slow).
        }
        //X_train.row(ind) = patch.reshape(0, 1); // Tried this but it assigns some garbage values to the data matrix!
        ind += 1;
    }
}
To do it the regular OpenCV way, you could do:
ImageMat.row(RowIndex) = RowMat.clone();
or
RowMat.copyTo(ImageMat.row(RowIndex));
Haven't tested for correctness or speed.
Just a couple of edits in your code
double * xBuffer = X.ptr<double>(0);
for (int row = 0; row < (im.rows - 2*OFFSET); row++){
    for (int col = 0; col < (im.cols - 2*OFFSET); col++){
        cv::Mat temp_patch = im(cv::Rect(col, row, 5, 5)); // crop an image patch (5x5) at each pixel
        patch = temp_patch.clone(); // Needs to do this because temp_patch is not continuous in memory
        patch.convertTo(patch, CV_64FC1);
        memcpy(xBuffer, patch.data, 25*sizeof(double));
        xBuffer += 25;
    }
}
Also, you don't seem to do any computation on the patch, you just extract grey-level values, so you can create X with the same type as im and convert it to double at the end. In this way, you could memcpy each row of your patch, the address in memory being unsigned char* buffer = im.ptr(row) + col (see the sketch below).
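A minimal sketch of that suggestion, reusing the variable names from the question (im, OFFSET, 5x5 patches) and assuming im is a single-channel CV_8U image; X8 and X64 are names introduced here only for illustration:

cv::Mat X8((im.rows - 2*OFFSET) * (im.cols - 2*OFFSET), 25, CV_8UC1);
uchar* xBuffer = X8.ptr<uchar>(0);
for (int row = 0; row < (im.rows - 2*OFFSET); row++){
    for (int col = 0; col < (im.cols - 2*OFFSET); col++){
        // copy the five rows of the 5x5 patch straight out of the source image
        for (int r = 0; r < 5; r++){
            const unsigned char* buffer = im.ptr(row + r) + col;
            memcpy(xBuffer, buffer, 5 * sizeof(uchar));
            xBuffer += 5;
        }
    }
}
cv::Mat X64;
X8.convertTo(X64, CV_64FC1); // a single conversion to double at the very end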
According to the docs:
if you need to process a whole row of a matrix, the most efficient way is to get the pointer to the row first, and then just use the plain C operator []:
// compute sum of positive matrix elements
// (assuming that M is double-precision matrix)
double sum=0;
for(int i = 0; i < M.rows; i++)
{
    const double* Mi = M.ptr<double>(i);
    for(int j = 0; j < M.cols; j++)
        sum += std::max(Mi[j], 0.);
}

exc_bad_access when using Mat in OpenCV although it looks like my indexes are correct

I'm developing an algorithm on a Mac using Xcode 5 and OpenCV, in C++.
I define a matrix:
Mat src;
int cols = 560;
int rows = 260;
src.create( cols, rows, DataType<double>::type);
In the code I have a loop that looks like this:
for (int i=0; i<src.rows; i++) {
    const double* srcIterator = src.ptr<double>(i);
    for (int j=0; j<src.cols; j++) {
        double temp = srcIterator[j];
        temp++;
    }
}
I call the function that contains this loop for every frame I read. Most of the time it runs correctly (it runs in an endless loop and is usually fine).
In some runs I get an exc_bad_access error. When it happens, it happens on the first frame.
The error is on the line: double temp = srcIterator[j];
When it happens, j is well below 560 and always above 500, but each time it has a different value.
I thought maybe I had mixed up the cols and rows, but if that were the case I would expect this error when j reached 260 (the number of rows).
Does anyone have any guess what it could be?
From the documentation, it looks like you have inverted the rows and columns parameters in the call to cv::Mat::create().
This would also explain why you get an invalid access when you read with large values of i and j.
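For reference, cv::Mat::create() takes the number of rows first and then the number of columns, so assuming the matrix is meant to have 260 rows and 560 columns, the call would be:

// cv::Mat::create(rows, cols, type) -- rows come first
src.create( rows, cols, DataType<double>::type ); // i.e. create(260, 560, ...)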

FFT of an image

I have an assignment about FFTW and I was trying to write a small program to compute the FFT of an image. I am using CImg to read and write images. But all I get is a dark image with a single white dot :(
I'm most likely doing this the wrong way, and I would appreciate it if someone could explain how this should be done. I don't need the code; I just need to know the right way to do this.
Here is my code:
CImg<double> input("test3.bmp");
CImg<double> image_fft(input, false);
unsigned int nx = input.dimx(), ny = input.dimy();
size_t align = sizeof(Complex);
array2<Complex> in (nx, ny, align);
fft2d Forward(-1, in);
for (int i = 0; i < input.dimx(); ++i) {
    for (int j = 0; j < input.dimy(); ++j) {
        in(i,j) = input(i,j);
    }
}
Forward.fft(in);
for (int i = 0; i < input.dimx(); ++i) {
    for (int j = 0; j < input.dimy(); ++j) {
        image_fft(i,j,0) = image_fft(i,j,1) = image_fft(i,j,2) = std::abs(in(i,j));
    }
}
image_fft.normalize(0, 255);
image_fft.save("test.bmp");
You need to take the log of the magnitude. The single white dot is the base value (0 Hz, DC, whatever you want to call it), so it will almost ALWAYS be by far the largest component of any image you take (since pixel values cannot be negative, the DC value will always be positive and large).
What you need to do is calculate the log (natural log or any other logarithm) of the magnitude at each point, after you have converted from complex to magnitude/phase form (phasor notation, iirc), before you normalize it.
Please note that the values are there, they are just REALLY small compared to the DC value; taking the log (which boosts the small values a lot relative to the large ones) will make the other frequencies visible.
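For example, a minimal adaptation of the output loop from the question (a sketch only; it keeps the same CImg/fftw++ names and assumes <cmath> is included for std::log) would store log(1 + |F|) instead of the raw magnitude:

for (int i = 0; i < input.dimx(); ++i) {
    for (int j = 0; j < input.dimy(); ++j) {
        // log(1 + magnitude) compresses the huge DC peak so the other frequencies survive normalization
        double mag = std::log(1.0 + std::abs(in(i,j)));
        image_fft(i,j,0) = image_fft(i,j,1) = image_fft(i,j,2) = mag;
    }
}
image_fft.normalize(0, 255);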