Different Pixel Values in MATLAB and C++ with OpenCV - c++

I see there are similar questions to this, but they don't quite answer what I am asking, so here is my question.
In C++ with OpenCV, I run the code provided below and it returns an average pixel value of 6.32. However, when I open the image and use the mean function in MATLAB, it returns an average pixel intensity of approximately 6.92. As you can see, I convert the OpenCV values to double to try to ease this issue, and I have found that OpenCV loads the image as a set of integers, whereas MATLAB loads the image as decimal values that are approximately, but not exactly, the same as those integers. So my question is, being new to coding: which is correct? I'm assuming MATLAB is returning more accurate values, and if that is the case, I would like to know if there is a way to load the images in the same fashion to avoid the discrepancy.
Thank you. Code below:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("Cells2.tif");
    cv::cvtColor(img, img, CV_BGR2GRAY);
    cv::imshow("stuff", img);
    Mat dst;
    if (img.channels() == 3)
    {
        img.convertTo(dst, CV_64FC1);
    }
    else if (img.channels() == 1)
    {
        img.convertTo(dst, CV_64FC1);
    }
    cv::imshow("output", dst / 255);
    int NumPixels = img.total();
    double avg;
    double c = 0;
    double std;
    for (int y = 0; y < dst.cols; y++)
    {
        for (int x = 0; x < dst.rows; x++)
        {
            c += dst.at<double>(x, y) * 255;
        }
    }
    avg = c / NumPixels;
    cout << "asfa = " << c << endl;
    double deviation;
    double var;
    double z = 0;
    double q;
    for (int y = 0; y < dst.cols; y++)
    {
        for (int x = 0; x < dst.rows; x++)
        {
            q = dst.at<double>(x, y);
            deviation = q - avg;
            z = z + pow(deviation, 2);
        }
    }
    var = z / NumPixels;
    std = sqrt(var);
    cv::Scalar avgPixel = cv::mean(dst);
    cout << "Avg Value = " << avg << endl;
    cout << "StdDev = " << std << endl;
    cout << "AvgPixel = " << avgPixel;
    cvWaitKey(0);
    return 0;
}

According to your comment, the image seems to be stored with a 16-bit depth. MATLAB loads the TIFF image as is, while by default OpenCV will load images as 8-bit. This might explain the difference in precision that you are seeing.
Use the following to open the image in OpenCV:
cv::Mat img = cv::imread("file.tif", cv::IMREAD_ANYDEPTH|cv::IMREAD_ANYCOLOR);
In MATLAB, it's simply:
img = imread('file.tif');
Next, you need to be aware of the data type you are working with: in OpenCV it's CV_16U, in MATLAB it's uint16. Therefore you need to convert types accordingly.
For example, in MATLAB:
img2 = double(img) ./ double(intmax('uint16'));
would convert it to a double image with values in the range [0,1].
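For comparison, the equivalent scaling in OpenCV might look like this (a sketch, assuming the image really was loaded as 16-bit by the imread call above):
cv::Mat img2;
img.convertTo(img2, CV_64F, 1.0 / 65535.0); // intmax('uint16') == 65535, so values end up in [0,1]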

When you load the image, you must use similar methods in both environments (MATLAB and OpenCV) to avoid possible conversions which may be done by default in either environment.

You are converting the image only if certain conditions are met; this can change some color values, while MATLAB may choose not to convert the image but use the raw data instead.
Colors are mostly represented in hex format, with popular implementations in the format 0xAARRGGBB or 0xRRGGBBAA, so 32-bit integers will do (unsigned/signed doesn't matter; the hex value is the same). Create a 64-bit variable, add all the 32-bit values together, and then divide by the number of pixels. This will give you a quite accurate result for images up to 16384 by 16384 pixels (where one 32-bit value represents the color of one pixel); if larger, a 64-bit integer will not be enough.
long long total = 0;
long long divisor = image.width * image.height;
for (int x = 0; x < image.width; ++x)
{
    for (int y = 0; y < image.height; ++y) // fixed: the inner loop must advance y, not x
    {
        total += image.at(x, y).color;
    }
}
double avg = (double)total / divisor; // cast first, otherwise the division truncates
std::cout << "Average color value: " << avg << std::endl;

I'm not sure what difficulty you are having with the mean value in Matlab versus OpenCV. If I understand your question correctly, your goal is to implement Matlab's mean(image(:)) in OpenCV. For example, in Matlab you would do the following:
>> image = imread('sheep.jpg');
>> avg = mean(image(:))
avg =
  119.8210
Here's how you do the same in OpenCV:
Mat image = imread("sheep.jpg");
Scalar avg_pixel = mean(image);
float avg = 0;
cout << "mean pixel (RGB): " << avg_pixel << endl;
for (int i = 0; i < image.channels(); ++i) { // note: i must be initialized to 0
    avg = avg + avg_pixel[i];
}
avg = avg / image.channels();
cout << "mean, that's equivalent to mean(image(:)) in Matlab: " << avg << endl;
OpenCV console output:
mean pixel (RGB): [77.4377, 154.43, 127.596, 0]
mean, that's equivalent to mean(image(:)) in Matlab: 119.821
So the results are the same in Matlab and OpenCV.
Follow up
Found some problems in your code.
OpenCV stores data differently from Matlab. Look at this answer for a rough explanation on how to access a pixel in OpenCV. For example:
// NOT a correct way to access a pixel in a CV_64FC3 image
double pixel = image.at<double>(x, y);

// The correct way (the pixel value is stored in a vector)
// Note that Vec3d is defined as: typedef Vec<double, 3> Vec3d;
Vec3d pixel = image.at<Vec3d>(x, y);
Another error I found:
if (img.channels() == 3)
{
    img.convertTo(dst, CV_64FC1); // should be CV_64FC3, instead of CV_64FC1
}
Accessing Mat elements may be confusing. I suggest getting a book on OpenCV to get started, for example this one, and read OpenCV tutorials and documentation. Hope this helps.

Related

Accessing RGB values of all pixels in a certain image in openCV

I have searched the internet and Stack Overflow thoroughly, but I didn't find exactly what I'm looking for!
How can I get the RGB (BGR, actually) values of all pixels of a certain image in OpenCV? I'm using C++, and the image is stored in a cv::Mat variable.
I'm showing some of my efforts so far: I tried this code from another Stack Overflow link, but every time I re-run the code the hexadecimal value changes! For example, once it's 00CD5D7C, in the next run it is 00C09D7C.
cv::Mat img_rgb = cv::imread("img6.jpg");
Point3_<uchar>* p = img_rgb.ptr<Point3_<uchar> >(10,10);
p->x; //B
p->y; //G
p->z; //R
std::cout<<p;
In another try I used this code from another answer. Here the output is always -858993460.
img_rgb.at<cv::Vec3b>(10,10);
img_rgb.at<cv::Vec3b>(10,10)[0] = newval[0];
img_rgb.at<cv::Vec3b>(10,10)[1] = newval[1];
img_rgb.at<cv::Vec3b>(10,10)[2] = newval[2];
cout<<newval[0]; //For cout<<newval[1]; cout<<newval[2]; the result is still same
NOTE: I used (10,10) as a test to get the RGB values; my goal is to get the RGB values of the whole image!
Since you are loading a color image (of type CV_8UC3), you need to access its elements with .at<Vec3b>(row, col). The elements are in BGR order:
Mat img_bgr = imread("path_to_img");
for (int r = 0; r < img_bgr.rows; ++r) {
    for (int c = 0; c < img_bgr.cols; ++c) {
        std::cout << "Pixel at position (x, y) : (" << c << ", " << r << ") = "
                  << img_bgr.at<Vec3b>(r, c) << std::endl;
    }
}
You can also simplify using Mat3b (aka Mat_<Vec3b>), so you don't need to use the .at function, but using directly the parenthesis:
Mat3b img_bgr = imread("path_to_img");
for (int r = 0; r < img_bgr.rows; ++r) {
    for (int c = 0; c < img_bgr.cols; ++c) {
        std::cout << "Pixel at position (x, y) : (" << c << ", " << r << ") = "
                  << img_bgr(r, c) << std::endl;
    }
}
To get each single channel, you can easily do:
Vec3b pixel = img_bgr(r,c); // or img_bgr.at<Vec3b>(r,c)
uchar blue = pixel[0];
uchar green = pixel[1];
uchar red = pixel[2];

Number and character recognition using ANN OpenCV 3.1

I have implemented a neural network using the OpenCV ANN library. I am a newbie in this field and I learned everything about it online (mostly from Stack Overflow).
I am using this ANN for number plate detection. I did the segmentation part using the OpenCV image processing library and it works well. It performs character segmentation and passes the characters to the NN part of the project, which is supposed to recognize the number plate.
I have sample images of 20x30, therefore I have 600 neurons in the input layer. As there are 36 possibilities (0-9, A-Z), I have 36 output neurons. I kept 100 neurons in the hidden layer. The predict function of OpenCV gives me the same output for every segmented image, and that output also contains some large negative values (< -1). I have used cv::ml::ANN_MLP::SIGMOID_SYM as the activation function.
Please don't mind that there is a lot of commented-out code (I am doing trial and error).
I need to find out what the output of the predict function means. Thank you for your help.
#include <opencv2/opencv.hpp>
int inputLayerSize = 1;
int outputLayerSize = 1;
int numSamples = 2;
Mat layers = Mat(3, 1, CV_32S);
layers.row(0) =Scalar(600) ;
layers.row(1) = Scalar(20);
layers.row(2) = Scalar(36);
vector<int> layerSizes = { 600,100,36 };
Ptr<ml::ANN_MLP> nnPtr = ml::ANN_MLP::create();
vector <int> n;
//nnPtr->setLayerSizes(3);
nnPtr->setLayerSizes(layers);
nnPtr->setTrainMethod(ml::ANN_MLP::BACKPROP);
nnPtr->setTermCriteria(TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 1000, 0.00001f));
nnPtr->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM, 1, 1);
nnPtr->setBackpropWeightScale(0.5f);
nnPtr->setBackpropMomentumScale(0.5f);
/*CvANN_MLP_TrainParams params = CvANN_MLP_TrainParams(
// terminate the training after either 1000
// iterations or a very small change in the
// network wieghts below the specified value
cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 1000, 0.000001),
// use backpropogation for training
CvANN_MLP_TrainParams::BACKPROP,
// co-efficents for backpropogation training
// (refer to manual)
0.1,
0.1);*/
/* Mat samples(Size(inputLayerSize, numSamples), CV_32F);
samples.at<float>(Point(0, 0)) = 0.1f;
samples.at<float>(Point(0, 1)) = 0.2f;
Mat responses(Size(outputLayerSize, numSamples), CV_32F);
responses.at<float>(Point(0, 0)) = 0.2f;
responses.at<float>(Point(0, 1)) = 0.4f;
*/
//reading chaos image
// we will read the classification numbers into this variable as though it is a vector
// close the traning images file
/*vector<int> layerInfo;
layerInfo=nnPtr->get;
for (int i = 0; i < layerInfo.size(); i++) {
cout << "size of 0" <<layerInfo[i] << endl;
}*/
cv::imshow("chaos", matTrainingImagesAsFlattenedFloats);
// cout <<abc << endl;
matTrainingImagesAsFlattenedFloats.convertTo(matTrainingImagesAsFlattenedFloats, CV_32F);
//matClassificationInts.reshape(1, 496);
matClassificationInts.convertTo(matClassificationInts, CV_32F);
matSamples.convertTo(matSamples, CV_32F);
std::cout << matClassificationInts.rows << " " << matClassificationInts.cols << " ";
std::cout << matTrainingImagesAsFlattenedFloats.rows << " " << matTrainingImagesAsFlattenedFloats.cols << " ";
std::cout << matSamples.rows << " " << matSamples.cols;
imshow("Samples", matSamples);
imshow("chaos", matTrainingImagesAsFlattenedFloats);
Ptr<ml::TrainData> trainData = ml::TrainData::create(matTrainingImagesAsFlattenedFloats, ml::SampleTypes::ROW_SAMPLE, matSamples);
nnPtr->train(trainData);
bool m = nnPtr->isTrained();
if (m)
std::cout << "training complete\n\n";
// cv::Mat matCurrentChar = Mat(cv::Size(matTrainingImagesAsFlattenedFloats.cols, matTrainingImagesAsFlattenedFloats.rows), CV_32F);
// cout << "samples:\n" << samples << endl;
//cout << "\nresponses:\n" << responses << endl;
/* if (!nnPtr->train(trainData))
return 1;*/
/* cout << "\nweights[0]:\n" << nnPtr->getWeights(0) << endl;
cout << "\nweights[1]:\n" << nnPtr->getWeights(1) << endl;
cout << "\nweights[2]:\n" << nnPtr->getWeights(2) << endl;
cout << "\nweights[3]:\n" << nnPtr->getWeights(3) << endl;*/
//predicting
std::vector <cv::String> filename;
cv::String folder = "./plate/";
cv::glob(folder, filename);
if (filename.empty()) { // if unable to open image
std::cout << "error: image not read from file\n\n"; // show error message on command line
return(0); // and exit program
}
String strFinalString;
for (int i = 0; i < filename.size(); i++) {
cv::Mat matTestingNumbers = cv::imread(filename[i]);
cv::Mat matGrayscale; //
cv::Mat matBlurred; // declare more image variables
cv::Mat matThresh; //
cv::Mat matThreshCopy;
cv::Mat matCanny;
//
cv::cvtColor(matTestingNumbers, matGrayscale, CV_BGR2GRAY); // convert to grayscale
matThresh = cv::Mat(cv::Size(matGrayscale.cols, matGrayscale.rows), CV_8UC1);
for (int i = 0; i < matGrayscale.cols; i++) {
for (int j = 0; j < matGrayscale.rows; j++) {
if (matGrayscale.at<uchar>(j, i) <= 130) {
matThresh.at<uchar>(j, i) = 255;
}
else {
matThresh.at<uchar>(j, i) = 0;
}
}
}
// blur
cv::GaussianBlur(matThresh, // input image
matBlurred, // output image
cv::Size(5, 5), // smoothing window width and height in pixels
0); // sigma value, determines how much the image will be blurred, zero makes function choose the sigma value
// filter image from grayscale to black and white
/* cv::adaptiveThreshold(matBlurred, // input image
matThresh, // output image
255, // make pixels that pass the threshold full white
cv::ADAPTIVE_THRESH_GAUSSIAN_C, // use gaussian rather than mean, seems to give better results
cv::THRESH_BINARY_INV, // invert so foreground will be white, background will be black
11, // size of a pixel neighborhood used to calculate threshold value
2); */ // constant subtracted from the mean or weighted mean
// cv::imshow("thresh" + std::to_string(i), matThresh);
matThreshCopy = matThresh.clone();
std::vector<std::vector<cv::Point> > ptContours; // declare a vector for the contours
std::vector<cv::Vec4i> v4iHierarchy;// make a copy of the thresh image, this in necessary b/c findContours modifies the image
cv::Canny(matBlurred, matCanny, 20, 40, 3);
/*std::vector<std::vector<cv::Point> > ptContours; // declare a vector for the contours
std::vector<cv::Vec4i> v4iHierarchy; // declare a vector for the hierarchy (we won't use this in this program but this may be helpful for reference)
cv::findContours(matThreshCopy, // input image, make sure to use a copy since the function will modify this image in the course of finding contours
ptContours, // output contours
v4iHierarchy, // output hierarchy
cv::RETR_EXTERNAL, // retrieve the outermost contours only
cv::CHAIN_APPROX_SIMPLE); // compress horizontal, vertical, and diagonal segments and leave only their end points
/*std::vector<std::vector<cv::Point> > contours_poly(ptContours.size());
std::vector<cv::Rect> boundRect(ptContours.size());
for (int i = 0; i < ptContours.size(); i++)
{
approxPolyDP(cv::Mat(ptContours[i]), contours_poly[i], 3, true);
boundRect[i] = cv::boundingRect(cv::Mat(contours_poly[i]));
}*/
/*for (int i = 0; i < ptContours.size(); i++) { // for each contour
ContourWithData contourWithData; // instantiate a contour with data object
contourWithData.ptContour = ptContours[i]; // assign contour to contour with data
contourWithData.boundingRect = cv::boundingRect(contourWithData.ptContour); // get the bounding rect
contourWithData.fltArea = cv::contourArea(contourWithData.ptContour); // calculate the contour area
allContoursWithData.push_back(contourWithData); // add contour with data object to list of all contours with data
}
for (int i = 0; i < allContoursWithData.size(); i++) { // for all contours
if (allContoursWithData[i].checkIfContourIsValid()) { // check if valid
validContoursWithData.push_back(allContoursWithData[i]); // if so, append to valid contour list
}
}
//sort contours from left to right
std::sort(validContoursWithData.begin(), validContoursWithData.end(), ContourWithData::sortByBoundingRectXPosition);
// std::string strFinalString; // declare final string, this will have the final number sequence by the end of the program
*/
/*for (int i = 0; i < validContoursWithData.size(); i++) { // for each contour
// draw a green rect around the current char
cv::rectangle(matTestingNumbers, // draw rectangle on original image
validContoursWithData[i].boundingRect, // rect to draw
cv::Scalar(0, 255, 0), // green
2); // thickness
cv::Mat matROI = matThresh(validContoursWithData[i].boundingRect); // get ROI image of bounding rect
cv::Mat matROIResized;
cv::resize(matROI, matROIResized, cv::Size(RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT)); // resize image, this will be more consistent for recognition and storage
*/
cv::Mat matROIFloat;
cv::resize(matThresh, matThresh, cv::Size(RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT));
matThresh.convertTo(matROIFloat, CV_32FC1, 1.0 / 255.0); // convert Mat to float, necessary for call to find_nearest
cv::Mat matROIFlattenedFloat = matROIFloat.reshape(1, 1);
cv::Point maxLoc = { 0,0 };
cv::Point minLoc;
cv::Mat output = cv::Mat(cv::Size(36, 1), CV_32F);
vector<float>output2;
// cv::Mat output2 = cv::Mat(cv::Size(36, 1), CV_32F);
nnPtr->predict(matROIFlattenedFloat, output2);
// float max = output.at<float>(0, 0);
int fo = 0;
float m = output2[0];
imshow("predicted input", matROIFlattenedFloat);
// float b = output.at<float>(0, 0);
// cout <<"\n output0,0:"<<b<<endl;
// minMaxLoc(output, 0, 0, &minLoc, &maxLoc, Mat());
// cout << "\noutput:\n" << maxLoc.x << endl;
for (int j = 1; j < 36; j++) {
float value =output2[j];
if (value > m) {
m = value;
fo = j;
}
}
float * p = 0;
p = &m;
cout << "j value in output " << fo << " Max value " << p << endl;
//imshow("output image" + to_string(i), output);
// cout << "\noutput:\n" << minLoc.x << endl;
//float fltCurrentChar = (float)maxLoc.x;
output.release();
m = 0;
fo = 0;
}
// strFinalString = strFinalString + char(int(fltCurrentChar)); // append current char to full string
// cv::imshow("Predict output", output);
/*cv::Point maxLoc = {0,0};
Mat output=Mat (cv::Size(matSamples.cols,matSamples.rows),CV_32F);
nnPtr->predict(matTrainingImagesAsFlattenedFloats, output);
minMaxLoc(output, 0, 0, 0, &maxLoc, 0);
cout << "\noutput:\n" << maxLoc.x << endl;*/
// getchar();
/*for (int i = 0; i < 10;i++) {
for (int j = 0; j < 36; j++) {
if (matCurrentChar.at<float>(i, j) >= 0.6) {
cout << " "<<j<<" ";
}
}
}*/
waitKey(0);
return(0);
}
void gen() {
std::string dir, filepath;
int num, imgArea, minArea;
int pos = 0;
bool f = true;
struct stat filestat;
cv::Mat imgTrainingNumbers;
cv::Mat imgGrayscale;
cv::Mat imgBlurred;
cv::Mat imgThresh;
cv::Mat imgThreshCopy;
cv::Mat matROIResized=cv::Mat (cv::Size(RESIZED_IMAGE_WIDTH,RESIZED_IMAGE_HEIGHT),CV_8UC1);
cv::Mat matROI;
std::vector <cv::String> filename;
std::vector<std::vector<cv::Point> > ptContours;
std::vector<cv::Vec4i> v4iHierarchy;
int count = 0, contoursCount = 0;
matSamples = cv::Mat(cv::Size(36, 496), CV_32FC1);
matTrainingImagesAsFlattenedFloats = cv::Mat(cv::Size(600, 496), CV_32FC1);
for (int j = 0; j <= 35; j++) {
int tmp = j;
cv::String folder = "./Training Data/" + std::to_string(tmp);
cv::glob(folder, filename);
for (int k = 0; k < filename.size(); k++) {
count++;
// If the file is a directory (or is in some way invalid) we'll skip it
// if (stat(filepath.c_str(), &filestat)) continue;
//if (S_ISDIR(filestat.st_mode)) continue;
imgTrainingNumbers = cv::imread(filename[k]);
imgArea = imgTrainingNumbers.cols*imgTrainingNumbers.rows;
// read in training numbers image
minArea = imgArea * 50 / 100;
if (imgTrainingNumbers.empty()) {
std::cout << "error: image not read from file\n\n";
//return(0);
}
cv::cvtColor(imgTrainingNumbers, imgGrayscale, CV_BGR2GRAY);
//cv::equalizeHist(imgGrayscale, imgGrayscale);
imgThresh = cv::Mat(cv::Size(imgGrayscale.cols, imgGrayscale.rows), CV_8UC1);
/*cv::adaptiveThreshold(imgGrayscale,
imgThresh,
255,
cv::ADAPTIVE_THRESH_GAUSSIAN_C,
cv::THRESH_BINARY_INV,
3,
0);
*/
for (int i = 0; i < imgGrayscale.cols; i++) {
for (int j = 0; j < imgGrayscale.rows; j++) {
if (imgGrayscale.at<uchar>(j, i) <= 130) {
imgThresh.at<uchar>(j, i) = 255;
}
else {
imgThresh.at<uchar>(j, i) = 0;
}
}
}
// cv::imshow("imgThresh"+std::to_string(count), imgThresh);
imgThreshCopy = imgThresh.clone();
cv::GaussianBlur(imgThreshCopy,
imgBlurred,
cv::Size(5, 5),
0);
cv::Mat imgCanny;
// cv::Canny(imgBlurred,imgCanny,20,40,3);
cv::findContours(imgBlurred,
ptContours,
v4iHierarchy,
cv::RETR_EXTERNAL,
cv::CHAIN_APPROX_SIMPLE);
for (int i = 0; i < ptContours.size(); i++) {
if (cv::contourArea(ptContours[i]) > MIN_CONTOUR_AREA) {
contoursCount++;
cv::Rect boundingRect = cv::boundingRect(ptContours[i]);
cv::rectangle(imgTrainingNumbers, boundingRect, cv::Scalar(0, 0, 255), 2); // draw red rectangle around each contour as we ask user for input
matROI = imgThreshCopy(boundingRect); // get ROI image of bounding rect
std::string path = "./" + std::to_string(contoursCount) + ".JPG";
cv::imwrite(path, matROI);
// cv::imshow("matROI" + std::to_string(count), matROI);
cv::resize(matROI, matROIResized, cv::Size(RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT)); // resize image, this will be more consistent for recognition and storage
std::cout << filename[k] << " " << contoursCount << "\n";
//cv::imshow("matROI", matROI);
//cv::imshow("matROIResized"+std::to_string(count), matROIResized);
// cv::imshow("imgTrainingNumbers" + std::to_string(contoursCount), imgTrainingNumbers);
int intChar;
if (j<10)
intChar = j + 48;
else {
intChar = j + 55;
}
/*if (intChar == 27) { // if esc key was pressed
return(0); // exit program
}*/
// if (std::find(intValidChars.begin(), intValidChars.end(), intChar) != intValidChars.end()) { // else if the char is in the list of chars we are looking for . . .
// append classification char to integer list of chars
cv::Mat matImageFloat;
matROIResized.convertTo(matImageFloat,CV_32FC1);// now add the training image (some conversion is necessary first) . . .
//matROIResized.convertTo(matImageFloat, CV_32FC1); // convert Mat to float
cv::Mat matImageFlattenedFloat = matImageFloat.reshape(1, 1);
//matTrainingImagesAsFlattenedFloats.push_back(matImageFlattenedFloat);// flatten
try {
//matTrainingImagesAsFlattenedFloats.push_back(matImageFlattenedFloat);
std::cout << matTrainingImagesAsFlattenedFloats.rows << " " << matTrainingImagesAsFlattenedFloats.cols;
//unsigned char* re;
int ii = 0; // Current column in training_mat
for (int i = 0; i<matImageFloat.rows; i++) {
for (int j = 0; j < matImageFloat.cols; j++) {
matTrainingImagesAsFlattenedFloats.at<float>(contoursCount-1, ii++) = matImageFloat.at<float>(i,j);
}
}
}
catch (std::exception &exc) {
f = false;
exc.what();
}
if (f) {
matClassificationInts.push_back((float)intChar);
matSamples.at<float>(contoursCount-1, j) = 1.0;
}
f = true;
// add to Mat as though it was a vector, this is necessary due to the
// data types that KNearest.train accepts
} // end if
//} // end if
} // end for
}//end i
}//end j
}
Output of predict function
Unfortunately, I don't have the necessary time to really review the code, but I can say off the top that to train a model that performs well for prediction with 36 classes, you will need several things:
A large number of good quality images. Ideally, you'd want thousands of images for each class. Of course, you can see somewhat decent results with less than that, but if you only have a few images per class, it's never going to be able to generalize adequately.
You need a model that is large and sophisticated enough to provide the necessary expressiveness to solve the problem. For a problem like this, a plain old multi-layer perceptron with one hidden layer with 100 units may not be enough. This is actually a problem that would benefit from using a Convolutional Neural Net (CNN) with a couple layers just to extract useful features first. But assuming you don't want to go down that path, you may at least want to tweak the size of your hidden layer.
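For instance, widening the hidden layer is a one-line change to the setup code from the question (the size below is purely illustrative, not a tuned value):
std::vector<int> layerSizes = { 600, 256, 36 }; // input, hidden (illustrative), output
nnPtr->setLayerSizes(layerSizes);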
To even get to the point where the training process converges, you will probably need to experiment; crucially, you also need an effective way to test the accuracy of the ANN after each experiment. Ideally, you want to observe the loss as training proceeds, but I'm not sure whether that's possible using OpenCV's ML functionality. At a minimum, you should fully expect to play around with the various so-called "hyper-parameters" and run many experiments before you have a reasonable model.
Anyway, the most important thing is to make sure you have a solid mechanism for validating the accuracy of the model after training. If you aren't already doing so, set aside some images as a separate test set, and after each experiment, use the trained ANN to predict each test image to see the accuracy.
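For illustration, a minimal accuracy check could look like the sketch below, where testData and testLabels are hypothetical Mats holding one flattened test sample per row and the corresponding class index:
int correct = 0;
for (int r = 0; r < testData.rows; ++r) {
    cv::Mat response;
    nnPtr->predict(testData.row(r), response);                   // one score per class
    cv::Point maxLoc;
    cv::minMaxLoc(response, nullptr, nullptr, nullptr, &maxLoc); // arg max = predicted class
    if (maxLoc.x == (int)testLabels.at<float>(r, 0))
        ++correct;
}
std::cout << "test accuracy: " << (double)correct / testData.rows << std::endl;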
One final general note: what you're trying to do is complex. You will save yourself a huge number of headaches if you take the time early and often to refactor your code. No matter how many experiments you run, if there's some defect causing (for example) your training data to be fundamentally different in some way than your test data, you will never see good results.
Good luck!
EDIT: I should also point out that seeing the same result for every input image is a classic sign that training failed. Unfortunately, there are many reasons why that might happen and it will be very difficult for anyone to isolate that for you without some cleaner code and access to your image data.
I have solved the issue of not getting the output of predict. The problem was that the input Mat used for training (i.e. matTrainingImagesAsFlattenedFloats) contained the value 255.0 for white pixels. This happened because I hadn't used convertTo() properly: you need to call convertTo(outputImage, CV_32FC1, 1.0 / 255.0), which scales pixel values of 255.0 down to 1.0. After that I get the correct output.
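In other words, the fix is a single scale argument to convertTo (using the Mat names from the code above):
cv::Mat matImageFloat;
matROIResized.convertTo(matImageFloat, CV_32FC1, 1.0 / 255.0); // white pixels become 1.0 instead of 255.0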
Thank you for all the help.
This is too broad to be one question; sorry for the bad news. I tried this over and over and couldn't find a solution. I recommend that you implement a simple AND, OR, or XOR first, just to make sure that the learning part is working and that you get better results the more passes you do (see the sketch below). I also suggest trying the hyperbolic tangent as a transfer function instead of the sigmoid. Good luck!
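A minimal XOR sanity check with ANN_MLP might look like this (a sketch; the layer sizes, training parameters, and iteration count are illustrative):
float in[4][2]  = { {0,0}, {0,1}, {1,0}, {1,1} };
float out[4][1] = { {0}, {1}, {1}, {0} };
cv::Mat samples(4, 2, CV_32F, in);
cv::Mat responses(4, 1, CV_32F, out);

cv::Ptr<cv::ml::ANN_MLP> net = cv::ml::ANN_MLP::create();
std::vector<int> sizes = { 2, 4, 1 }; // 2 inputs, a small hidden layer, 1 output
net->setLayerSizes(sizes);
net->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM, 1, 1);
net->setTrainMethod(cv::ml::ANN_MLP::BACKPROP, 0.1, 0.1);
net->setTermCriteria(cv::TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 10000, 1e-6));
net->train(cv::ml::TrainData::create(samples, cv::ml::ROW_SAMPLE, responses));

cv::Mat pred;
net->predict(samples, pred);
std::cout << pred << std::endl; // the four outputs should approach 0, 1, 1, 0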
Here are some of my own posts that might help you:
Results exactly like yours: HERE
Some code: HERE
I don't want to say it, but several professors I have met said backpropagation just doesn't work, and they had (and I have had) to implement their own method of teaching the network.

Calculating the mean and standard deviation in C++ for single channeled histogram

I want to calculate the mean and standard deviation for a histogram of an HSV image, but I only want to compute the histogram and the statistics for the V channel.
I have been reading examples of how to do this for a set of channels and have tried those approaches, but I am getting confused over whether my approach to initially creating the histogram is correct for just one channel, because the program keeps crashing when I try to execute it.
Here is what I have at the moment (the variable test is a cv::Mat image; you can use any image you wish to recreate the issue). I have probably missed something obvious, and the for loop might not be correct in terms of the range of values, but I haven't done this in C++ before.
cv::cvtColor(test, test, CV_BGR2HSV);
int v_bins = 50;
int histSize[] = { v_bins };
cv::MatND hist;
float v_ranges[] = { 0, 255};
cv::vector<cv::Mat> channel(3);
split(test, channel);
const float* ranges[] = { v_ranges };
int channels[] = {0};
cv::calcHist(&channel[2], 1, channels, cv::Mat(), hist, 1, histSize, ranges, true, false); //histogram calculation
float mean=0;
float rows= hist.size().height;
float cols = hist.size().width;
for (int v = 0; v < v_bins; v++)
{
    std::cout << hist.at<float>(v, v) << std::endl;
    mean = mean + hist.at<float>(v);
}
mean = mean / (rows * cols);
std::cout << mean << std::endl;
You can simply use cv::meanStdDev, that calculates a mean and standard deviation of array elements.
Note that both mean and stddev arguments are cv::Scalar, so you need to do mean[0] and stddev[0] to get the double values of your single channel array hist.
This code will clarify its usage:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat test = cv::imread("path_to_image");
    cv::cvtColor(test, test, CV_BGR2HSV);

    int v_bins = 50;
    int histSize[] = { v_bins };
    cv::MatND hist;
    float v_ranges[] = { 0, 255 };
    cv::vector<cv::Mat> channel(3);
    split(test, channel);
    const float* ranges[] = { v_ranges };
    int channels[] = { 0 };
    cv::calcHist(&channel[2], 1, channels, cv::Mat(), hist, 1, histSize, ranges, true, false); // histogram calculation

    cv::Scalar mean, stddev;
    cv::meanStdDev(hist, mean, stddev);
    std::cout << "Mean: " << mean[0] << " StdDev: " << stddev[0] << std::endl;
    return 0;
}
UPDATE
You can compute the mean and the standard deviation by their definition:
double dmean = 0.0;
double dstddev = 0.0;

// Mean standard algorithm
for (int i = 0; i < v_bins; ++i)
{
    dmean += hist.at<float>(i);
}
dmean /= v_bins;

// Standard deviation standard algorithm
std::vector<double> var(v_bins);
for (int i = 0; i < v_bins; ++i)
{
    var[i] = (dmean - hist.at<float>(i)) * (dmean - hist.at<float>(i));
}
for (int i = 0; i < v_bins; ++i)
{
    dstddev += var[i];
}
dstddev = sqrt(dstddev / v_bins);
std::cout << "Mean: " << dmean << " StdDev: " << dstddev << std::endl;
and you'll get the same values as OpenCV meanStdDev.
Be careful about calculating statistics on a histogram. If you just run meanStdDev, you'll get the mean and stdev of the bin values. That doesn't tell you an awful lot.
Probably what you want is the mean and stdev intensity.
So, if you want to derive the image mean and standard deviation from a histogram (or set of histograms), then you can use the following code:
// assume histogram is of type cv::Mat and comes from cv::calcHist
double s = 0;
double total_hist = 0;
for (int i = 0; i < (int)histogram.total(); ++i) {
    s += histogram.at<float>(i) * (i + 0.5); // bin centre
    total_hist += histogram.at<float>(i);
}
double mean = s / total_hist;

double t = 0;
for (int i = 0; i < (int)histogram.total(); ++i) {
    double x = (i + 0.5 - mean); // use the same bin centre as above
    t += histogram.at<float>(i) * x * x;
}
double stdev = std::sqrt(t / total_hist);
From the definitions of the mean and standard deviation:
mean = sum(x * p(x))                  // expectation
std  = sqrt(sum(p(x) * (x - mean)^2)) // sqrt(variance)
The mean is the expectation value of x, and histogram[x]/sum(histogram) gives you p(x). The definition of the standard deviation is similar and comes from the variance. The numbers are slightly simpler here because pixels can only take integer values and are unit spaced.
Note this is also useful if you want to calculate normalisation statistics for a batch of images using the accumulate option.
Adapted from: How to calculate the standard deviation from a histogram? (Python, Matplotlib)

Detecting difference between 2 images

I am working on the following code
#include <iostream>
#include <string>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/background_segm.hpp>

using namespace std;
using namespace cv;

int main()
{
    Mat current, currentGrey, next, abs;
    VideoCapture cam1, cam2;
    std::vector<vector<Point>> contours;
    vector<vector<Point>> contoursPoly(contours.size());
    cam1.open(0);
    cam2.open(0);
    namedWindow("Normal");
    namedWindow("Difference");
    if (!cam1.isOpened())
    {
        cout << "Cam not found" << endl;
        return -1;
    }
    while (true)
    {
        // Take the input
        cam1 >> current;
        currentGrey = current;
        cam2 >> next;
        // Convert to grey
        cvtColor(currentGrey, currentGrey, CV_RGB2GRAY);
        cvtColor(next, next, CV_RGB2GRAY);
        // Reduce noise
        cv::GaussianBlur(currentGrey, currentGrey, Size(0, 0), 4);
        cv::GaussianBlur(next, next, Size(0, 0), 4);
        imshow("Normal", currentGrey);
        // Get the absolute difference
        absdiff(currentGrey, next, abs);
        imshow("Difference", abs);
        for (int i = 0; i < abs.rows; i++)
        {
            for (int j = 0; j < abs.cols; j++)
            {
                if (abs.at<int>(j, i) > 0)
                {
                    cout << "Change Detected" << endl;
                    j = abs.cols + 1;
                    i = abs.rows + 1;
                }
            }
        }
        if (waitKey(30) >= 0)
        {
            break;
        }
    }
}
Here, what I am trying to do is print a message whenever a difference between the images is detected. The following part is the technique:
for (int i = 0; i < abs.rows; i++)
{
    for (int j = 0; j < abs.cols; j++)
    {
        if (abs.at<int>(j, i) > 0)
        {
            cout << "Change Detected" << endl;
            j = abs.cols + 1;
            i = abs.rows + 1;
        }
    }
}
Unfortunately, instead of printing the message only when a difference is detected, it always prints the message. Why is this?
You should calculate the mean square error between the two frames.
MSE = sum((frame1-frame2)^2 ) / no. of pixels
There is an example of calculating it in an OpenCV tutorial.
Based on that code you could have
double getMSE(const Mat& I1, const Mat& I2)
{
    Mat s1;
    absdiff(I1, I2, s1);      // |I1 - I2|
    s1.convertTo(s1, CV_32F); // cannot make a square on 8 bits
    s1 = s1.mul(s1);          // |I1 - I2|^2

    Scalar s = sum(s1);       // sum elements per channel
    double sse = s.val[0] + s.val[1] + s.val[2]; // sum channels

    if (sse <= 1e-10) // for small values return zero
        return 0;
    else
    {
        double mse = sse / (double)(I1.channels() * I1.total());
        return mse;
        // Instead of returning MSE, the tutorial code returned PSNR (below).
        // double psnr = 10.0 * log10((255 * 255) / mse);
        // return psnr;
    }
}
You can use it in your code like this:
if(getMSE(currentGrey,next) > some_threshold)
cout << "Change Detected" << endl;
It is up to you to decide the magnitude of MSE below which you consider the images to be the same.
Also, you should prefilter with GaussianBlur() to reduce noise, as you already do. The blur method suggested by #fatih_k is not a Gaussian filter; it is a box filter, and although faster it may introduce artifacts.
Image differencing has some tricks; due to noise, any two frames may not be identical.
To alleviate the effect of the noise, you can apply blur() or GaussianBlur() to every frame so that minute details are removed by a simple box or Gaussian filter.
Then, as a similarity criterion, you can take the difference of the two frames, take the absolute value of the resulting difference matrix with abs, sum all the elements, and compute the ratio of this sum to the total pixel sum of the first frame. If this ratio is above some threshold, say 0.05, you can infer that the image frames are sufficiently different.
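A minimal sketch of that ratio criterion, assuming 8-bit BGR frames (the 5x5 kernel and the 0.05 threshold are illustrative):
bool framesDiffer(const cv::Mat& f1, const cv::Mat& f2, double threshold = 0.05)
{
    cv::Mat b1, b2, diff;
    cv::GaussianBlur(f1, b1, cv::Size(5, 5), 0); // suppress minute details and noise
    cv::GaussianBlur(f2, b2, cv::Size(5, 5), 0);
    cv::absdiff(b1, b2, diff);                   // element-wise |f1 - f2|
    cv::Scalar d = cv::sum(diff), r = cv::sum(b1);
    double diffSum = d[0] + d[1] + d[2];         // summed over all channels
    double refSum  = r[0] + r[1] + r[2];         // total pixel sum of the first frame
    return refSum > 0 && diffSum / refSum > threshold;
}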
Let's take a look at what the OpenCV documentation says about the cv::waitKey return value:
Returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed.
So... the loop is infinite and "Change Detected" is printed once for every two images compared until the program is terminated.
The function getMSE() described above can be tweaked a little to better handle the unsigned 8-bit integer data type: a difference computed on unsigned 8-bit integers produces 0 whenever the result would be negative. By first converting the matrices to a floating-point type and then computing the mean squared error, this problem is avoided.
double getMSE(const Mat& I1, const Mat& I2)
{
    Mat f1, f2, s1;
    // convert copies to float to avoid producing zero for negative differences
    // (the inputs are left untouched, so there is no need to convert them back;
    // in the original version the convert-back calls sat after the return
    // statements and were never executed)
    I1.convertTo(f1, CV_32F);
    I2.convertTo(f2, CV_32F);

    absdiff(f1, f2, s1); // |I1 - I2|
    s1 = s1.mul(s1);     // |I1 - I2|^2

    Scalar s = sum(s1);  // sum elements per channel
    double sse = s.val[0] + s.val[1] + s.val[2]; // sum channels

    if (sse <= 1e-10) // for small values return zero
        return 0;

    double mse = sse / (double)(I1.channels() * I1.total());
    return mse;
    // Instead of returning MSE, the tutorial code returned PSNR (below).
    // double psnr = 10.0 * log10((255 * 255) / mse);
    // return psnr;
}
The above code returns zero for small MSE values (under 1e-10). The terms s.val[1] and s.val[2] are zero for single-channel images.
If you want to test single-channel input as well (the function basically supports 3-channel images), use the following code (with random unsigned numbers):
Mat I1(12, 12, CV_8UC1), I2(12, 12, CV_8UC1);
double low = 0;
double high = 255;
cv::randu(I1, Scalar(low), Scalar(high));
cv::randu(I2, Scalar(low), Scalar(high));
double mse = getMSE(I1, I2);
cout << mse << endl;
If you want to test 3-channel image input, use the following code (with random unsigned numbers):
Mat I1(12, 12, CV_8UC3), I2(12, 12, CV_8UC3);
double low = 0;
double high = 255;
cv::randu(I1, Scalar(low), Scalar(high));
cv::randu(I2, Scalar(low), Scalar(high));
double mse = getMSE(I1, I2);
cout << mse << endl;

How to access the RGB values in Opencv?

I am confused about the use of the number of channels.
Which of the following is correct?
// roi is the image matrix
for (int i = 0; i < roi.rows; i++)
{
    for (int j = 0; j < roi.cols; j += roi.channels())
    {
        int b = roi.at<cv::Vec3b>(i, j)[0];
        int g = roi.at<cv::Vec3b>(i, j)[1];
        int r = roi.at<cv::Vec3b>(i, j)[2];
        cout << r << " " << g << " " << b << endl;
    }
}
Or,
for (int i = 0; i < roi.rows; i++)
{
    for (int j = 0; j < roi.cols; j++)
    {
        int b = roi.at<cv::Vec3b>(i, j)[0];
        int g = roi.at<cv::Vec3b>(i, j)[1];
        int r = roi.at<cv::Vec3b>(i, j)[2];
        cout << r << " " << g << " " << b << endl;
    }
}
The second one is correct.
The rows and cols of the Mat represent the number of pixels, while the channel count has nothing to do with the rows and cols.
OpenCV uses BGR order by default, so assuming the Mat has not been converted to RGB, the code is correct.
Reference: personal experience and the OpenCV docs.
A quicker way to get the color components of an image is to have the image represented as an IplImage structure and then make use of the pixel size and the number of channels to iterate through it using pointer arithmetic.
For example, if you know that your image is a 3-channel image with 1 byte per channel and its format is BGR (the default in OpenCV), the following code will get access to its components:
(In the following code, img is of type IplImage*.)
for (int y = 0; y < img->height; y++) {
    for (int x = 0; x < img->width; x++) {
        uchar blue  = ((uchar*)(img->imageData + img->widthStep * y))[x * 3];     // note: values, not pointers
        uchar green = ((uchar*)(img->imageData + img->widthStep * y))[x * 3 + 1];
        uchar red   = ((uchar*)(img->imageData + img->widthStep * y))[x * 3 + 2];
    }
}
For a more flexible approach, you can use the CV_IMAGE_ELEM macro defined in types_c.h:
/* get reference to pixel at (col,row),
   for multi-channel images (col) should be multiplied by number of channels */
#define CV_IMAGE_ELEM( image, elemtype, row, col )       \
    (((elemtype*)((image)->imageData + (image)->widthStep*(row)))[(col)])
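For example, for a 3-channel 8-bit BGR IplImage, the macro might be used like this (a sketch; x and y are pixel coordinates):
uchar blue  = CV_IMAGE_ELEM(img, uchar, y, x * 3);
uchar green = CV_IMAGE_ELEM(img, uchar, y, x * 3 + 1);
uchar red   = CV_IMAGE_ELEM(img, uchar, y, x * 3 + 2);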
I guess the second one is correct; nevertheless, it is very time consuming to get the data like that.
A quicker method would be to use the IplImage* data structure and increment the pointer by the size of the data contained in roi...
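As an illustration of that pointer-walk idea using the cv::Mat API instead of IplImage (a sketch, assuming roi is a CV_8UC3 BGR image):
for (int i = 0; i < roi.rows; ++i) {
    const uchar* p = roi.ptr<uchar>(i);          // start of row i
    for (int j = 0; j < roi.cols; ++j, p += 3) { // 3 bytes per BGR pixel
        int b = p[0], g = p[1], r = p[2];
        cout << r << " " << g << " " << b << endl;
    }
}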