Assignment error with Mat OpenCV - C++

I am working with OpenCV and C++ on a project and I have run into the following problem: after initializing a Mat with the statement
Mat or_mat = Mat(img->height, img->width, CV_32FC1);
I assign values such as
or_mat.at<float>(i, j) = atan(fy/fx)/2 + 1.5707963;
The function then returns this Mat as its output, but when I go to read it, many of the values do not correspond to what was assigned. To be precise, the incorrect entries contain values like -4.31602e+008, yet if I cout the expression at assignment time the value is correct. What could be the error?
Relevant code:
Mat or_mat=Mat(img->height,img->width,CV_32FC1);
// compute the angle
if (fx > 0) {
    or_mat.at<float>(i,j) = atan(fy/fx)/2 + 1.5707963;
}
else if (fx < 0 && fy > 0) {
    or_mat.at<float>(i,j) = atan(fy/fx)/2 + 3.1415926;
}
else if (fx < 0 && fy < 0) {
    or_mat.at<float>(i,j) = atan(fy/fx)/2;
}
else if (fy != 0 && fx == 0) {
    or_mat.at<float>(i,j) = 1.5707963;
}
I have to calculate the local orientation of a fingerprint image; in the code above I have omitted several statements and calculations that have no errors.
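As an aside (a hedged observation, not a confirmed fix): the ladder assigns nothing when fx < 0 && fy == 0 or when fx == 0 && fy == 0, and Mat(height, width, CV_32FC1) does not zero its memory, so any element the ladder skips keeps whatever garbage was in that memory, which can look exactly like -4.31602e+008. Also, the three fx != 0 branches collapse into a single std::atan2 call; a sketch:
#include <cmath>
// std::atan2(fy, fx) returns the angle of the vector (fx, fy) in (-pi, pi],
// folding the quadrant corrections of the three fx != 0 branches into one
// call (it also gives fx < 0 && fy == 0 a defined value, which the
// original ladder leaves unset).
if (fx != 0) {
    or_mat.at<float>(i, j) = std::atan2(fy, fx) / 2.0f + 1.5707963f;
} else if (fy != 0) {
    or_mat.at<float>(i, j) = 1.5707963f;
}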

I would triple-check that you are indexing correctly. The following code shows me initialising a matrix full of zeros and then filling it with some float using the .at operator. It compiles and runs nicely:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    int height = 10;
    int width = 3;
    // Initialise or_mat with every element set to zero
    cv::Mat or_mat = cv::Mat::zeros(height, width, CV_32FC1);
    std::cout << "Original or_mat:\n" << or_mat << std::endl;
    // Loop through and set each element equal to some float
    float value = 10.254;
    for (int i = 0; i < or_mat.rows; ++i)
    {
        for (int j = 0; j < or_mat.cols; ++j)
        {
            or_mat.at<float>(i,j) = value;
        }
    }
    std::cout << "Final or_mat:\n" << or_mat << std::endl;
    return 0;
}

Related

How to count a raster's pixels with a certain value inside a given rectangle in C++ with GDAL?

In C++, is there a function to count all the pixels with a certain value inside a given rectangle (xmin, ymin, xmax, ymax) using the GDAL library? Or will I have to read each block and count pixel by pixel? I have searched, but only found a few Python scripts that do this.
I wrote one myself (it's a boolean raster - 0/1 - and I'm counting the "1" pixels), but the result differs from that given by the GRASS GIS r.report function (via QGIS).
long long openTIF(string ftif, double x0, double y0, double x1, double y1) {
    long long sum = 0;
    GDALDataset *poDataset;
    GDALAllRegister();
    poDataset = (GDALDataset*)GDALOpen(ftif.c_str(), GA_ReadOnly);
    if (poDataset == NULL) {
        cout << "Error reading raster '" << ftif << "'\n";
        exit(1);
    }
    int tx = poDataset->GetRasterXSize(),
        ty = poDataset->GetRasterYSize();
    double geoTransf[6], rx0 = 0, ry0 = 0, rx1 = 0, ry1 = 0;
    if (poDataset->GetGeoTransform(geoTransf) == CE_None) {
        rx0 = geoTransf[0];
        ry0 = geoTransf[3] - ty*geoTransf[1];
        rx1 = geoTransf[0] - tx*geoTransf[5];
        ry1 = geoTransf[3];
    } else {
        exit(2);
    }
    int nBlockXSize, nBlockYSize;
    GDALRasterBand *poBand;
    poBand = poDataset->GetRasterBand(1);
    poBand->GetBlockSize(&nBlockXSize, &nBlockYSize);
    uint32_t i;
    int y,
        col0 = round((rx0-x0)/geoTransf[5]),
        col1 = round((rx0-x1)/geoTransf[5]),
        row0 = round((ry1-y1)/geoTransf[1]),
        row1 = round((ry1-y0)/geoTransf[1]);
    CPLErr error;
    uint8_t *data = (uint8_t*)CPLMalloc(nBlockXSize*nBlockYSize);
    GDALDataType type = poBand->GetRasterDataType();
    if (type == GDT_Byte) {
        cout << "Byte" << endl;
    }
    for (y = row0; y < row1; y++) {
        error = poBand->ReadBlock(0, y, data);
        if (error > 0) {
            cout << "Error reading data block.\n";
            exit(3);
        }
        for (i = col0; i < col1; i++) {
            sum += (uint8_t)data[i];
        }
    }
    CPLFree(data);
    return sum;
}
For a given raster and the given coordinates (precision %.8f), the GRASS GIS r.report function reports 28096011 pixels with value 1, while my sum gives 28094709 (difference = -1302). With other coordinates r.report gives 5605458, my sum gives 5604757 (difference = -701). Any idea what may be going on?
EDIT: Since my sum was always smaller than the GRASS GIS r.report value, I thought of including the last row and column, changing the lines
for (y=row0; y<row1; y++) {
for (i=col0; i<col1; i++) {
to
for (y=row0; y<=row1; y++) {
for (i=col0; i<=col1; i++) {
but now, with another set of coordinates, r.report gives 249707, while my sum gives 250157 (difference = +450).
It looks like a rounding error; you need to round your column and row indexes down (floor). It is easier to reason about if you think of every raster pixel as a rectangular region. For example, the first pixel of your raster spans from (x, y) to (x + width, y + height).
double rx = geoTransf[0];
double rx_size = geoTransf[1]; // you got geoTransf[1] and geoTransf[5] swapped!
double ry = geoTransf[3];
double ry_size = geoTransf[5];
// assert that geoTransf[2] and geoTransf[4] are zero (no rotation)
int col_first = floor((x0 - rx) / rx_size);
int col_last  = floor((x1 - rx) / rx_size);
int row_first = floor((y0 - ry) / ry_size);
int row_last  = floor((y1 - ry) / ry_size);
for (int y = row_first; y <= row_last; ++y) {
    poBand->ReadBlock(col_first, y, data);
    for (int x = 0; x <= col_last - col_first; ++x) {
        sum += data[x]; // process the pixel
    }
}

Number and character recognition using ANN OpenCV 3.1

I have implemented a neural network using the OpenCV ANN library. I am a newbie in this field and I learned everything about it online (mostly on StackOverflow).
I am using this ANN for number plate detection. I did the segmentation part using the OpenCV image processing library and it is working well. It performs character segmentation and hands the result to the NN part of the project, which is to recognize the number plate.
I have sample images of 20x30, therefore I have 600 neurons in the input layer. As there are 36 possibilities (0-9, A-Z) I have 36 output neurons. I kept 100 neurons in the hidden layer. The predict function of OpenCV gives me the same output for every segmented image, and that output also contains a large negative value (< -1). I have used cv::ml::ANN_MLP::SIGMOID_SYM as the activation function.
Please don't mind that there is a lot of code commented out (I am doing trial and error).
I need to find out what the output of the predict function means. Thank you for your help.
#include <opencv2/opencv.hpp>
int inputLayerSize = 1;
int outputLayerSize = 1;
int numSamples = 2;
Mat layers = Mat(3, 1, CV_32S);
layers.row(0) =Scalar(600) ;
layers.row(1) = Scalar(20);
layers.row(2) = Scalar(36);
vector<int> layerSizes = { 600,100,36 };
Ptr<ml::ANN_MLP> nnPtr = ml::ANN_MLP::create();
vector <int> n;
//nnPtr->setLayerSizes(3);
nnPtr->setLayerSizes(layers);
nnPtr->setTrainMethod(ml::ANN_MLP::BACKPROP);
nnPtr->setTermCriteria(TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 1000, 0.00001f));
nnPtr->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM, 1, 1);
nnPtr->setBackpropWeightScale(0.5f);
nnPtr->setBackpropMomentumScale(0.5f);
/*CvANN_MLP_TrainParams params = CvANN_MLP_TrainParams(
// terminate the training after either 1000
// iterations or a very small change in the
// network wieghts below the specified value
cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 1000, 0.000001),
// use backpropogation for training
CvANN_MLP_TrainParams::BACKPROP,
// co-efficents for backpropogation training
// (refer to manual)
0.1,
0.1);*/
/* Mat samples(Size(inputLayerSize, numSamples), CV_32F);
samples.at<float>(Point(0, 0)) = 0.1f;
samples.at<float>(Point(0, 1)) = 0.2f;
Mat responses(Size(outputLayerSize, numSamples), CV_32F);
responses.at<float>(Point(0, 0)) = 0.2f;
responses.at<float>(Point(0, 1)) = 0.4f;
*/
//reading chaos image
// we will read the classification numbers into this variable as though it is a vector
// close the traning images file
/*vector<int> layerInfo;
layerInfo=nnPtr->get;
for (int i = 0; i < layerInfo.size(); i++) {
cout << "size of 0" <<layerInfo[i] << endl;
}*/
cv::imshow("chaos", matTrainingImagesAsFlattenedFloats);
// cout <<abc << endl;
matTrainingImagesAsFlattenedFloats.convertTo(matTrainingImagesAsFlattenedFloats, CV_32F);
//matClassificationInts.reshape(1, 496);
matClassificationInts.convertTo(matClassificationInts, CV_32F);
matSamples.convertTo(matSamples, CV_32F);
std::cout << matClassificationInts.rows << " " << matClassificationInts.cols << " ";
std::cout << matTrainingImagesAsFlattenedFloats.rows << " " << matTrainingImagesAsFlattenedFloats.cols << " ";
std::cout << matSamples.rows << " " << matSamples.cols;
imshow("Samples", matSamples);
imshow("chaos", matTrainingImagesAsFlattenedFloats);
Ptr<ml::TrainData> trainData = ml::TrainData::create(matTrainingImagesAsFlattenedFloats, ml::SampleTypes::ROW_SAMPLE, matSamples);
nnPtr->train(trainData);
bool m = nnPtr->isTrained();
if (m)
std::cout << "training complete\n\n";
// cv::Mat matCurrentChar = Mat(cv::Size(matTrainingImagesAsFlattenedFloats.cols, matTrainingImagesAsFlattenedFloats.rows), CV_32F);
// cout << "samples:\n" << samples << endl;
//cout << "\nresponses:\n" << responses << endl;
/* if (!nnPtr->train(trainData))
return 1;*/
/* cout << "\nweights[0]:\n" << nnPtr->getWeights(0) << endl;
cout << "\nweights[1]:\n" << nnPtr->getWeights(1) << endl;
cout << "\nweights[2]:\n" << nnPtr->getWeights(2) << endl;
cout << "\nweights[3]:\n" << nnPtr->getWeights(3) << endl;*/
//predicting
std::vector <cv::String> filename;
cv::String folder = "./plate/";
cv::glob(folder, filename);
if (filename.empty()) { // if unable to open image
std::cout << "error: image not read from file\n\n"; // show error message on command line
return(0); // and exit program
}
String strFinalString;
for (int i = 0; i < filename.size(); i++) {
cv::Mat matTestingNumbers = cv::imread(filename[i]);
cv::Mat matGrayscale; //
cv::Mat matBlurred; // declare more image variables
cv::Mat matThresh; //
cv::Mat matThreshCopy;
cv::Mat matCanny;
//
cv::cvtColor(matTestingNumbers, matGrayscale, CV_BGR2GRAY); // convert to grayscale
matThresh = cv::Mat(cv::Size(matGrayscale.cols, matGrayscale.rows), CV_8UC1);
for (int i = 0; i < matGrayscale.cols; i++) {
for (int j = 0; j < matGrayscale.rows; j++) {
if (matGrayscale.at<uchar>(j, i) <= 130) {
matThresh.at<uchar>(j, i) = 255;
}
else {
matThresh.at<uchar>(j, i) = 0;
}
}
}
// blur
cv::GaussianBlur(matThresh, // input image
matBlurred, // output image
cv::Size(5, 5), // smoothing window width and height in pixels
0); // sigma value, determines how much the image will be blurred, zero makes function choose the sigma value
// filter image from grayscale to black and white
/* cv::adaptiveThreshold(matBlurred, // input image
matThresh, // output image
255, // make pixels that pass the threshold full white
cv::ADAPTIVE_THRESH_GAUSSIAN_C, // use gaussian rather than mean, seems to give better results
cv::THRESH_BINARY_INV, // invert so foreground will be white, background will be black
11, // size of a pixel neighborhood used to calculate threshold value
2); */ // constant subtracted from the mean or weighted mean
// cv::imshow("thresh" + std::to_string(i), matThresh);
matThreshCopy = matThresh.clone(); // make a copy of the thresh image; this is necessary b/c findContours modifies the image
std::vector<std::vector<cv::Point> > ptContours; // declare a vector for the contours
std::vector<cv::Vec4i> v4iHierarchy; // declare a vector for the hierarchy
cv::Canny(matBlurred, matCanny, 20, 40, 3);
/*std::vector<std::vector<cv::Point> > ptContours; // declare a vector for the contours
std::vector<cv::Vec4i> v4iHierarchy; // declare a vector for the hierarchy (we won't use this in this program but this may be helpful for reference)
cv::findContours(matThreshCopy, // input image, make sure to use a copy since the function will modify this image in the course of finding contours
ptContours, // output contours
v4iHierarchy, // output hierarchy
cv::RETR_EXTERNAL, // retrieve the outermost contours only
cv::CHAIN_APPROX_SIMPLE); // compress horizontal, vertical, and diagonal segments and leave only their end points
/*std::vector<std::vector<cv::Point> > contours_poly(ptContours.size());
std::vector<cv::Rect> boundRect(ptContours.size());
for (int i = 0; i < ptContours.size(); i++)
{
approxPolyDP(cv::Mat(ptContours[i]), contours_poly[i], 3, true);
boundRect[i] = cv::boundingRect(cv::Mat(contours_poly[i]));
}*/
/*for (int i = 0; i < ptContours.size(); i++) { // for each contour
ContourWithData contourWithData; // instantiate a contour with data object
contourWithData.ptContour = ptContours[i]; // assign contour to contour with data
contourWithData.boundingRect = cv::boundingRect(contourWithData.ptContour); // get the bounding rect
contourWithData.fltArea = cv::contourArea(contourWithData.ptContour); // calculate the contour area
allContoursWithData.push_back(contourWithData); // add contour with data object to list of all contours with data
}
for (int i = 0; i < allContoursWithData.size(); i++) { // for all contours
if (allContoursWithData[i].checkIfContourIsValid()) { // check if valid
validContoursWithData.push_back(allContoursWithData[i]); // if so, append to valid contour list
}
}
//sort contours from left to right
std::sort(validContoursWithData.begin(), validContoursWithData.end(), ContourWithData::sortByBoundingRectXPosition);
// std::string strFinalString; // declare final string, this will have the final number sequence by the end of the program
*/
/*for (int i = 0; i < validContoursWithData.size(); i++) { // for each contour
// draw a green rect around the current char
cv::rectangle(matTestingNumbers, // draw rectangle on original image
validContoursWithData[i].boundingRect, // rect to draw
cv::Scalar(0, 255, 0), // green
2); // thickness
cv::Mat matROI = matThresh(validContoursWithData[i].boundingRect); // get ROI image of bounding rect
cv::Mat matROIResized;
cv::resize(matROI, matROIResized, cv::Size(RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT)); // resize image, this will be more consistent for recognition and storage
*/
cv::Mat matROIFloat;
cv::resize(matThresh, matThresh, cv::Size(RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT));
matThresh.convertTo(matROIFloat, CV_32FC1, 1.0 / 255.0); // convert Mat to float, necessary for call to find_nearest
cv::Mat matROIFlattenedFloat = matROIFloat.reshape(1, 1);
cv::Point maxLoc = { 0,0 };
cv::Point minLoc;
cv::Mat output = cv::Mat(cv::Size(36, 1), CV_32F);
vector<float>output2;
// cv::Mat output2 = cv::Mat(cv::Size(36, 1), CV_32F);
nnPtr->predict(matROIFlattenedFloat, output2);
// float max = output.at<float>(0, 0);
int fo = 0;
float m = output2[0];
imshow("predicted input", matROIFlattenedFloat);
// float b = output.at<float>(0, 0);
// cout <<"\n output0,0:"<<b<<endl;
// minMaxLoc(output, 0, 0, &minLoc, &maxLoc, Mat());
// cout << "\noutput:\n" << maxLoc.x << endl;
for (int j = 1; j < 36; j++) {
float value =output2[j];
if (value > m) {
m = value;
fo = j;
}
}
cout << "j value in output " << fo << " Max value " << m << endl; // print the value itself, not a pointer to it
//imshow("output image" + to_string(i), output);
// cout << "\noutput:\n" << minLoc.x << endl;
//float fltCurrentChar = (float)maxLoc.x;
output.release();
m = 0;
fo = 0;
}
// strFinalString = strFinalString + char(int(fltCurrentChar)); // append current char to full string
// cv::imshow("Predict output", output);
/*cv::Point maxLoc = {0,0};
Mat output=Mat (cv::Size(matSamples.cols,matSamples.rows),CV_32F);
nnPtr->predict(matTrainingImagesAsFlattenedFloats, output);
minMaxLoc(output, 0, 0, 0, &maxLoc, 0);
cout << "\noutput:\n" << maxLoc.x << endl;*/
// getchar();
/*for (int i = 0; i < 10;i++) {
for (int j = 0; j < 36; j++) {
if (matCurrentChar.at<float>(i, j) >= 0.6) {
cout << " "<<j<<" ";
}
}
}*/
waitKey(0);
return(0);
}
void gen() {
std::string dir, filepath;
int num, imgArea, minArea;
int pos = 0;
bool f = true;
struct stat filestat;
cv::Mat imgTrainingNumbers;
cv::Mat imgGrayscale;
cv::Mat imgBlurred;
cv::Mat imgThresh;
cv::Mat imgThreshCopy;
cv::Mat matROIResized=cv::Mat (cv::Size(RESIZED_IMAGE_WIDTH,RESIZED_IMAGE_HEIGHT),CV_8UC1);
cv::Mat matROI;
std::vector <cv::String> filename;
std::vector<std::vector<cv::Point> > ptContours;
std::vector<cv::Vec4i> v4iHierarchy;
int count = 0, contoursCount = 0;
matSamples = cv::Mat(cv::Size(36, 496), CV_32FC1);
matTrainingImagesAsFlattenedFloats = cv::Mat(cv::Size(600, 496), CV_32FC1);
for (int j = 0; j <= 35; j++) {
int tmp = j;
cv::String folder = "./Training Data/" + std::to_string(tmp);
cv::glob(folder, filename);
for (int k = 0; k < filename.size(); k++) {
count++;
// If the file is a directory (or is in some way invalid) we'll skip it
// if (stat(filepath.c_str(), &filestat)) continue;
//if (S_ISDIR(filestat.st_mode)) continue;
imgTrainingNumbers = cv::imread(filename[k]);
imgArea = imgTrainingNumbers.cols*imgTrainingNumbers.rows;
// read in training numbers image
minArea = imgArea * 50 / 100;
if (imgTrainingNumbers.empty()) {
std::cout << "error: image not read from file\n\n";
//return(0);
}
cv::cvtColor(imgTrainingNumbers, imgGrayscale, CV_BGR2GRAY);
//cv::equalizeHist(imgGrayscale, imgGrayscale);
imgThresh = cv::Mat(cv::Size(imgGrayscale.cols, imgGrayscale.rows), CV_8UC1);
/*cv::adaptiveThreshold(imgGrayscale,
imgThresh,
255,
cv::ADAPTIVE_THRESH_GAUSSIAN_C,
cv::THRESH_BINARY_INV,
3,
0);
*/
for (int i = 0; i < imgGrayscale.cols; i++) {
for (int j = 0; j < imgGrayscale.rows; j++) {
if (imgGrayscale.at<uchar>(j, i) <= 130) {
imgThresh.at<uchar>(j, i) = 255;
}
else {
imgThresh.at<uchar>(j, i) = 0;
}
}
}
// cv::imshow("imgThresh"+std::to_string(count), imgThresh);
imgThreshCopy = imgThresh.clone();
cv::GaussianBlur(imgThreshCopy,
imgBlurred,
cv::Size(5, 5),
0);
cv::Mat imgCanny;
// cv::Canny(imgBlurred,imgCanny,20,40,3);
cv::findContours(imgBlurred,
ptContours,
v4iHierarchy,
cv::RETR_EXTERNAL,
cv::CHAIN_APPROX_SIMPLE);
for (int i = 0; i < ptContours.size(); i++) {
if (cv::contourArea(ptContours[i]) > MIN_CONTOUR_AREA) {
contoursCount++;
cv::Rect boundingRect = cv::boundingRect(ptContours[i]);
cv::rectangle(imgTrainingNumbers, boundingRect, cv::Scalar(0, 0, 255), 2); // draw red rectangle around each contour as we ask user for input
matROI = imgThreshCopy(boundingRect); // get ROI image of bounding rect
std::string path = "./" + std::to_string(contoursCount) + ".JPG";
cv::imwrite(path, matROI);
// cv::imshow("matROI" + std::to_string(count), matROI);
cv::resize(matROI, matROIResized, cv::Size(RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT)); // resize image, this will be more consistent for recognition and storage
std::cout << filename[k] << " " << contoursCount << "\n";
//cv::imshow("matROI", matROI);
//cv::imshow("matROIResized"+std::to_string(count), matROIResized);
// cv::imshow("imgTrainingNumbers" + std::to_string(contoursCount), imgTrainingNumbers);
int intChar;
if (j<10)
intChar = j + 48;
else {
intChar = j + 55;
}
/*if (intChar == 27) { // if esc key was pressed
return(0); // exit program
}*/
// if (std::find(intValidChars.begin(), intValidChars.end(), intChar) != intValidChars.end()) { // else if the char is in the list of chars we are looking for . . .
// append classification char to integer list of chars
cv::Mat matImageFloat;
matROIResized.convertTo(matImageFloat,CV_32FC1);// now add the training image (some conversion is necessary first) . . .
//matROIResized.convertTo(matImageFloat, CV_32FC1); // convert Mat to float
cv::Mat matImageFlattenedFloat = matImageFloat.reshape(1, 1);
//matTrainingImagesAsFlattenedFloats.push_back(matImageFlattenedFloat);// flatten
try {
//matTrainingImagesAsFlattenedFloats.push_back(matImageFlattenedFloat);
std::cout << matTrainingImagesAsFlattenedFloats.rows << " " << matTrainingImagesAsFlattenedFloats.cols;
//unsigned char* re;
int ii = 0; // Current column in training_mat
for (int i = 0; i<matImageFloat.rows; i++) {
for (int j = 0; j < matImageFloat.cols; j++) {
matTrainingImagesAsFlattenedFloats.at<float>(contoursCount-1, ii++) = matImageFloat.at<float>(i,j);
}
}
}
catch (std::exception &exc) {
f = false;
exc.what();
}
if (f) {
matClassificationInts.push_back((float)intChar);
matSamples.at<float>(contoursCount-1, j) = 1.0;
}
f = true;
// add to Mat as though it was a vector, this is necessary due to the
// data types that KNearest.train accepts
} // end if
//} // end if
} // end for
}//end i
}//end j
}
[Screenshot: output of the predict function]
Unfortunately, I don't have the necessary time to really review the code, but I can say off the top of my head that to train a model that predicts well across 36 classes, you will need several things:
A large number of good quality images. Ideally, you'd want thousands of images for each class. Of course, you can see somewhat decent results with less than that, but if you only have a few images per class, it's never going to be able to generalize adequately.
You need a model that is large and sophisticated enough to provide the necessary expressiveness to solve the problem. For a problem like this, a plain old multi-layer perceptron with one hidden layer with 100 units may not be enough. This is actually a problem that would benefit from using a Convolutional Neural Net (CNN) with a couple layers just to extract useful features first. But assuming you don't want to go down that path, you may at least want to tweak the size of your hidden layer.
To even get to a point where the training process converges, you will probably need to experiment; crucially, you need an effective way to test the accuracy of the ANN after each experiment. Ideally, you want to observe the loss as the training proceeds, but I'm not sure whether that is possible using OpenCV's ML functionality. At a minimum, you should fully expect to have to play around with the various so-called "hyper-parameters" and run many experiments before you have a reasonable model.
Anyway, the most important thing is to make sure you have a solid mechanism for validating the accuracy of the model after training. If you aren't already doing so, set aside some images as a separate test set, and after each experiment, use the trained ANN to predict each test image to measure the accuracy, as in the sketch below.
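A hedged sketch of that kind of check (assuming hypothetical names: testSamples is an N x 600 CV_32F Mat prepared exactly like the training rows, and testLabels[r] holds the true class index 0..35 of row r):
std::vector<int> testLabels; // hypothetical: filled when the test images are loaded
int correct = 0;
for (int r = 0; r < testSamples.rows; ++r) {
    cv::Mat response; // 1 x 36 row of output-neuron activations
    nnPtr->predict(testSamples.row(r), response);
    cv::Point maxLoc;
    cv::minMaxLoc(response, 0, 0, 0, &maxLoc); // argmax over the 36 outputs
    if (maxLoc.x == testLabels[r]) ++correct;
}
std::cout << "accuracy: " << (double)correct / testSamples.rows << std::endl;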
One final general note: what you're trying to do is complex. You will save yourself a huge number of headaches if you take the time early and often to refactor your code. No matter how many experiments you run, if there's some defect causing (for example) your training data to be fundamentally different in some way than your test data, you will never see good results.
Good luck!
EDIT: I should also point out that seeing the same result for every input image is a classic sign that training failed. Unfortunately, there are many reasons why that might happen and it will be very difficult for anyone to isolate that for you without some cleaner code and access to your image data.
I have solved the issue of not getting the output of predict. The issue arose because the input Mat used for training (i.e. matTrainingImagesAsFlattenedFloats) had values of 255.0 for white pixels. This happened because I hadn't used convertTo() properly. You need to call convertTo(outputImage, CV_32FC1, 1.0 / 255.0), which converts all the pixel values of 255.0 to 1.0; after that I get the correct output.
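For clarity, the scaling call looks like this (a sketch using the names from the question; the third parameter of convertTo is a scale factor applied during the conversion, so 8-bit values 0..255 become floats 0..1 in one call):
cv::Mat trainingFloats;
matThresh.convertTo(trainingFloats, CV_32FC1, 1.0 / 255.0);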
Thank you for all the help.
This is too broad to be one question; sorry for the bad news. I tried this over and over and couldn't find a solution. I recommend that you first implement a simple AND, OR or XOR, just to make sure that the learning part is working and that you get better results the more passes you do (a minimal XOR sketch follows below). I also suggest trying the hyperbolic tangent as a transfer function instead of the sigmoid. And good luck!
Here are some of my own posts that might help you:
Results exactly like yours: HERE
Some code: HERE
I don't want to say it, but several professors I have met said backpropagation just doesn't work, and they had (and I have had) to implement their own method of teaching the network.
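A minimal XOR sanity check along those lines might look like this (my sketch against the OpenCV 3.x ml API; the layer sizes and training parameters are arbitrary choices):
#include <opencv2/opencv.hpp>
#include <iostream>
int main()
{
    // Four XOR samples (inputs) and their targets (outputs).
    float in[]  = { 0,0,  0,1,  1,0,  1,1 };
    float out[] = { 0, 1, 1, 0 };
    cv::Mat samples(4, 2, CV_32F, in);
    cv::Mat responses(4, 1, CV_32F, out);
    // 2 inputs, 4 hidden units, 1 output.
    cv::Mat layers = (cv::Mat_<int>(3, 1) << 2, 4, 1);
    cv::Ptr<cv::ml::ANN_MLP> nn = cv::ml::ANN_MLP::create();
    nn->setLayerSizes(layers);
    nn->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM, 1, 1);
    nn->setTrainMethod(cv::ml::ANN_MLP::BACKPROP, 0.1, 0.1);
    nn->setTermCriteria(cv::TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 10000, 1e-6));
    nn->train(cv::ml::TrainData::create(samples, cv::ml::ROW_SAMPLE, responses));
    // If the learning part works, the predictions should approach 0, 1, 1, 0.
    cv::Mat pred;
    nn->predict(samples, pred);
    std::cout << pred << std::endl;
    return 0;
}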

Calculating the mean and standard deviation in C++ for single channeled histogram

I want to calculate the mean and standard deviation for a histogram of an HSV image, but I only want to build the histogram and do these calculations for the V channel.
I have been reading examples of how to do this for a set of channels and have tried those approaches, but I am getting confused about whether my approach to creating the histogram is correct for just one channel, because the program keeps crashing when I try to execute it.
Here is what I have at the moment (the variable test is a cv::Mat image; any image can be used to recreate the issue). I have probably missed something obvious, and the for loop might not be correct in terms of the range of values, but I haven't done this in C++ before.
cv::cvtColor(test, test, CV_BGR2HSV);
int v_bins = 50;
int histSize[] = { v_bins };
cv::MatND hist;
float v_ranges[] = { 0, 255 };
cv::vector<cv::Mat> channel(3);
split(test, channel);
const float* ranges[] = { v_ranges };
int channels[] = { 0 };
cv::calcHist(&channel[2], 1, channels, cv::Mat(), hist, 1, histSize, ranges, true, false); // histogram calculation
float mean = 0;
float rows = hist.size().height;
float cols = hist.size().width;
for (int v = 0; v < v_bins; v++)
{
    std::cout << hist.at<float>(v, v) << std::endl;
    mean = mean + hist.at<float>(v);
}
mean = mean / (rows*cols);
std::cout << mean << std::endl;
You can simply use cv::meanStdDev, which calculates the mean and standard deviation of array elements.
Note that both mean and stddev arguments are cv::Scalar, so you need to do mean[0] and stddev[0] to get the double values of your single channel array hist.
This code clarifies its usage:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat test = cv::imread("path_to_image");
    cv::cvtColor(test, test, CV_BGR2HSV);
    int v_bins = 50;
    int histSize[] = { v_bins };
    cv::MatND hist;
    float v_ranges[] = { 0, 255 };
    cv::vector<cv::Mat> channel(3);
    split(test, channel);
    const float* ranges[] = { v_ranges };
    int channels[] = { 0 };
    cv::calcHist(&channel[2], 1, channels, cv::Mat(), hist, 1, histSize, ranges, true, false); // histogram calculation
    cv::Scalar mean, stddev;
    cv::meanStdDev(hist, mean, stddev);
    std::cout << "Mean: " << mean[0] << " StdDev: " << stddev[0] << std::endl;
    return 0;
}
UPDATE
You can compute the mean and the standard deviation by their definition:
double dmean = 0.0;
double dstddev = 0.0;
// Mean standard algorithm
for (int i = 0; i < v_bins; ++i)
{
    dmean += hist.at<float>(i);
}
dmean /= v_bins;
// Standard deviation standard algorithm
std::vector<double> var(v_bins);
for (int i = 0; i < v_bins; ++i)
{
    var[i] = (dmean - hist.at<float>(i)) * (dmean - hist.at<float>(i));
}
for (int i = 0; i < v_bins; ++i)
{
    dstddev += var[i];
}
dstddev = sqrt(dstddev / v_bins);
std::cout << "Mean: " << dmean << " StdDev: " << dstddev << std::endl;
and you'll get the same values as OpenCV meanStdDev.
Be careful about calculating statistics on a histogram. If you just run meanStdDev, you'll get the mean and stdev of the bin values. That doesn't tell you an awful lot.
Probably what you want is the mean and stdev intensity.
So, if you want to derive the image mean and standard deviation from a histogram (or set of histograms), then you can use the following code:
// assume histogram is of type cv::Mat and comes from cv::calcHist
double s = 0;
double total_hist = 0;
for (int i = 0; i < histogram.total(); ++i) {
    s += histogram.at<float>(i) * (i + 0.5); // bin centre
    total_hist += histogram.at<float>(i);
}
double mean = s / total_hist;
double t = 0;
for (int i = 0; i < histogram.total(); ++i) {
    double x = (i - mean);
    t += histogram.at<float>(i)*x*x;
}
double stdev = std::sqrt(t / total_hist);
From the definitions of the mean and standard deviation:
mean = sum(x * p(x))                   // expectation
std  = sqrt(sum(p(x) * (x - mean)**2)) // sqrt(variance)
The mean is the expectation value for x. So histogram[x]/sum(histogram) gives you p(x). The definition of standard deviation is similar and comes from the variance. The numbers are slightly simpler because pixels can only take integer values and are unit spaced.
Note this is also useful if you want to calculate normalisation statistics for a batch of images using the accumulate option (see the sketch below).
Adapted from: How to calculate the standard deviation from a histogram? (Python, Matplotlib)
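For example, a sketch of batch accumulation (assuming a std::vector<cv::Mat> of single-channel 8-bit images named batch; with the final calcHist argument set to true each call adds into hist instead of clearing it, so the loop yields one histogram covering the whole batch):
cv::Mat hist;
int histSize[] = { 256 };
float range[] = { 0, 256 };
const float* ranges[] = { range };
int channels[] = { 0 };
for (const cv::Mat& img : batch) {
    // last two arguments: uniform = true, accumulate = true
    cv::calcHist(&img, 1, channels, cv::Mat(), hist, 1, histSize, ranges, true, true);
}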

Unhandled Exception at Memory Location

I am new to OpenCV and I am trying to write a simple piece of code to get the mean of each block in an image. The build is fine; however, debugging gives me an unhandled exception at a memory location. The exception occurs at the following line:
mean_img.at<double>(i/block_size, j/block_size) = mean_img.at<double>(i/block_size,j/block_size) + new_img.at<double>(i + x, j + y) / (mean);
So I would be grateful if anyone could give me some hints. Thanks in advance; here is the whole code:
#include "opencv2/highgui/highgui.hpp" // Include Libs for OpenCV and Image Processing
#include <opencv2/opencv.hpp> // check that
#include "opencv2/core/core.hpp" // check that
#include <iostream> // Include Libs for C++
#include "opencv2/imgproc/imgproc.hpp" // Include Libs for OpenCV and Image Processing
#include <math.h>
using namespace cv; // namespace parameters not important in OpenCV2.4.6
using namespace std; // namespace parameters not important in OpenCV2.4.6
int main( int argc, const char** argv )
{
    /* This part computes the parameters (block size, resize parameter) of new_img */
    int resize_parameter; // the resize parameter must be a multiple of 2
    resize_parameter = 500;
    int block_size; // the resize parameter must be divisible by the block size
    block_size = 50;
    if ((resize_parameter % 2) != 0) resize_parameter = resize_parameter - (resize_parameter % 2);
    while ((resize_parameter % block_size) != 0) block_size = block_size - 1;
    int mean_size = resize_parameter/block_size; // this is the size of the mean matrix
    int mean = block_size * block_size; // this number is used to get the mean of every element in the matrix
    //int mean_img [mean_size][mean_size] = {}; // the mean image matrix initialized by zero
    /* This part allocates the array with dynamic size */
    //int** mean_img = new int*[mean_size];
    //for(int x = 0; x < mean_size; x++)
    //    mean_img[x] = new int[mean_size];
    /* Then we can use the array */
    /* This part fills all the elements of the mean matrix with zeros */
    //memset(mean_img, 0, sizeof(mean_img[0][0]) * mean_size * mean_size);
    /* This part defines the matrices that are used for the images */
    Mat mean_img = Mat(mean_size, mean_size, CV_64FC4, cv::Scalar(0)); // define a new matrix with mean_size*mean_size elements to compute the mean
    Mat mean_img_full = Mat(resize_parameter, resize_parameter, CV_64FC4, cv::Scalar(0)); // define a new matrix with resize_parameter*resize_parameter elements to fill with the means
    Mat new_img = Mat(resize_parameter, resize_parameter, CV_64FC4); // define a new matrix with resize_parameter*resize_parameter elements
    Mat original_img = imread("Desert.JPG", CV_LOAD_IMAGE_GRAYSCALE); // read the image data from the file "Desert.JPG" and store it in 'original_img'
    // note: the image must be in the same directory as the C++ file
    if (original_img.empty()) // check whether the image is loaded or not
    {
        cout << "Error : Image cannot be loaded..!!" << endl;
        //system("pause"); // wait for a key press
        return -1;
    }
    // explicitly specify dsize = dst.size(); fx and fy will be computed from that.
    // resize(src matrix, dst matrix, dst.size() to set the size of the dst matrix, 0, 0 for the scale factors, interpolation: AREA, CUBIC or LINEAR)
    resize(original_img, new_img, new_img.size(), 0, 0, CV_INTER_AREA);
    /* This part computes the mean of each block */
    for ( int i = 0; i < resize_parameter; i = i + block_size) // i represents the index of the row
    {
        for ( int j = 0; j < resize_parameter; j = j + block_size) // for the blocks in the same row with different columns
        {
            for ( int x = 0; x < block_size; x++) // x represents the index of the row
            {
                for ( int y = 0; y < block_size; y++) // y represents the index of the column
                {
                    //cout << i; //cout << "\n"; //cout << j; //cout << "\n"; //cout << x; //cout << "\n"; //cout << y; //cout << "\n";
                    mean_img.at<double>(i/block_size, j/block_size) = mean_img.at<double>(i/block_size, j/block_size) + new_img.at<double>(i + x, j + y) / (mean);
                }
            }
        }
    }
    /* This is the end of the part that computes the mean of each block */
    /* This part fills the full-size matrix with the mean values */
    for ( int x = 0; x < resize_parameter/block_size; x++) // x represents the index of the row in the mean matrix
    {
        for ( int y = 0; y < resize_parameter/block_size; y++) // y represents the index of the column in the mean matrix
        {
            for ( int i = 0; i < block_size; i++) // i represents the index of the row in the mean_full matrix
            {
                for ( int j = 0; j < block_size; j++) // j represents the index of the column in the mean_full matrix
                {
                    mean_img_full.at<double>((x*block_size)+i, (y*block_size)+j) = mean_img.at<double>(x, y);
                }
            }
        }
    }
    //cout << cv::getBuildInformation() << endl;
    /* This is the end of the part that fills the full-size matrix with the mean values */
    namedWindow("OriginalImage", CV_WINDOW_AUTOSIZE); // create a window with the name "OriginalImage"
    imshow("OriginalImage", original_img); // display the image stored in 'original_img' in the "OriginalImage" window
    namedWindow("NewImage", CV_WINDOW_AUTOSIZE); // create a window with the name "NewImage"
    imshow("NewImage", new_img); // display the image stored in 'new_img' in the "NewImage" window
    namedWindow("MeanImage", CV_WINDOW_AUTOSIZE); // create a window with the name "MeanImage"
    imshow("MeanImage", mean_img); // display the image stored in 'mean_img' in the "MeanImage" window
    namedWindow("MeanFullImage", CV_WINDOW_AUTOSIZE); // create a window with the name "MeanFullImage"
    imshow("MeanFullImage", mean_img_full); // display the image stored in 'mean_img_full' in the "MeanFullImage" window
    waitKey(0); // wait indefinitely for a key press
    destroyWindow("OriginalImage"); // destroy the window with the name "OriginalImage"
    destroyWindow("NewImage"); // destroy the window with the name "NewImage"
    destroyWindow("MeanImage"); // destroy the window with the name "MeanImage"
    destroyWindow("MeanFullImage"); // destroy the window with the name "MeanFullImage"
    return 0;
}
The problem was the definition of the type of each matrix: it has to be 8-bit unsigned char. It is working now. Thanks a lot.
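For reference, a minimal sketch of that type fix (my reading of the remark, not the poster's exact code): imread with CV_LOAD_IMAGE_GRAYSCALE yields a CV_8UC1 Mat, and resize gives the destination the same type as its source, so new_img must be read with at<uchar>; the accumulator can stay floating point as long as its declared type matches the at<> accessor:
// new_img ends up CV_8UC1 after resize(original_img, new_img, ...),
// so read it with at<uchar>; mean_img is declared CV_64FC1 so that
// at<double> matches its element type.
Mat mean_img = Mat(mean_size, mean_size, CV_64FC1, cv::Scalar(0));
// ... inside the four nested loops:
mean_img.at<double>(i/block_size, j/block_size) += new_img.at<uchar>(i + x, j + y) / (double)mean;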

How to access the RGB values in Opencv?

I am confused about the use of the number of channels.
Which of the following is correct?
// roi is the image matrix
for (int i = 0; i < roi.rows; i++)
{
    for (int j = 0; j < roi.cols; j += roi.channels())
    {
        int b = roi.at<cv::Vec3b>(i,j)[0];
        int g = roi.at<cv::Vec3b>(i,j)[1];
        int r = roi.at<cv::Vec3b>(i,j)[2];
        cout << r << " " << g << " " << b << endl;
    }
}
Or,
for (int i = 0; i < roi.rows; i++)
{
    for (int j = 0; j < roi.cols; j++)
    {
        int b = roi.at<cv::Vec3b>(i,j)[0];
        int g = roi.at<cv::Vec3b>(i,j)[1];
        int r = roi.at<cv::Vec3b>(i,j)[2];
        cout << r << " " << g << " " << b << endl;
    }
}
The second one is correct. The rows and cols of a Mat represent the number of pixels, while the channel count has nothing to do with the row and column counts. OpenCV uses BGR order by default, so assuming the Mat has not been converted to RGB, the code is correct.
Reference: personal experience and the OpenCV docs.
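If RGB order is actually needed, the conversion is one line (a sketch; rgbRoi is a new name):
cv::Mat rgbRoi;
cv::cvtColor(roi, rgbRoi, CV_BGR2RGB); // reorders the channels into R,G,B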
A quicker way to get the color components of an image is to represent the image as an IplImage structure and then use the pixel size and the number of channels to iterate through it with pointer arithmetic.
For example, if you know that your image is a 3-channel image with 1 byte per pixel and its format is BGR (the default in OpenCV), the following code will get access to its components:
(In the following code, img is of type IplImage.)
for (int y = 0; y < img->height; y++) {
    for (int x = 0; x < img->width; x++) {
        // indexing the row pointer yields a uchar value, not a pointer
        uchar blue  = ((uchar*)(img->imageData + img->widthStep*y))[x*3];
        uchar green = ((uchar*)(img->imageData + img->widthStep*y))[x*3+1];
        uchar red   = ((uchar*)(img->imageData + img->widthStep*y))[x*3+2];
    }
}
For a more flexible approach, you can use the CV_IMAGE_ELEM macro defined in types_c.h:
/* get a reference to the pixel at (col,row);
   for multi-channel images (col) should be multiplied by the number of channels */
#define CV_IMAGE_ELEM( image, elemtype, row, col ) \
    (((elemtype*)((image)->imageData + (image)->widthStep*(row)))[(col)])
I guess the 2nd one is correct; nevertheless, it is very time-consuming to get at the data like that.
A quicker method would be to use the IplImage* data structure and increment a pointer by the size of the data contained in roi, as in the answer above; a sketch of the modern cv::Mat equivalent follows.
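With the C++ API, a sketch of the equivalent fast access using cv::Mat::ptr (assuming roi is a CV_8UC3 BGR image):
for (int i = 0; i < roi.rows; ++i) {
    // ptr<Vec3b> returns a pointer to the first pixel of row i; indexing it
    // walks the row one 3-byte BGR pixel at a time, avoiding per-pixel at<>.
    const cv::Vec3b* row = roi.ptr<cv::Vec3b>(i);
    for (int j = 0; j < roi.cols; ++j) {
        int b = row[j][0];
        int g = row[j][1];
        int r = row[j][2];
    }
}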