Number and character recognition using ANN OpenCV 3.1 - c++

I have implemented a neural network using the OpenCV ANN library. I am a newbie in this field and I learned everything about it online (mostly on StackOverflow).
I am using this ANN for number plate recognition. I did the segmentation part using the OpenCV image processing library and it is working well. It performs character segmentation and hands the characters to the NN part of the project, and the NN is supposed to recognize the number plate.
I have sample images of 20x30, therefore I have 600 neurons in the input layer. As there are 36 possibilities (0-9, A-Z) I have 36 output neurons. I kept 100 neurons in the hidden layer. The predict function of OpenCV is giving me the same output for every segmented image, and that output also contains large negative values (< -1). I have used cv::ml::ANN_MLP::SIGMOID_SYM as the activation function.
Please don't mind that a lot of code is commented out (I am doing trial and error).
I need to find out what the output of the predict function means. Thank you for your help.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

// globals used below, declared here so the snippet is self-contained
const int RESIZED_IMAGE_WIDTH = 20;  // sample images are 20x30
const int RESIZED_IMAGE_HEIGHT = 30;
const int MIN_CONTOUR_AREA = 100;    // placeholder: the original value was not shown
Mat matTrainingImagesAsFlattenedFloats;
Mat matClassificationInts;
Mat matSamples;

void gen(); // builds the training Mats (defined after main)

int main() {
gen(); // populate matTrainingImagesAsFlattenedFloats, matClassificationInts, matSamples
int inputLayerSize = 1;
int outputLayerSize = 1;
int numSamples = 2;
Mat layers = Mat(3, 1, CV_32S);
layers.row(0) = Scalar(600); // input layer: one neuron per pixel of a 20x30 sample
layers.row(1) = Scalar(100); // hidden layer
layers.row(2) = Scalar(36);  // output layer: 0-9 and A-Z
vector<int> layerSizes = { 600,100,36 };
Ptr<ml::ANN_MLP> nnPtr = ml::ANN_MLP::create();
vector <int> n;
//nnPtr->setLayerSizes(3);
nnPtr->setLayerSizes(layers);
nnPtr->setTrainMethod(ml::ANN_MLP::BACKPROP);
nnPtr->setTermCriteria(TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 1000, 0.00001f));
nnPtr->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM, 1, 1);
nnPtr->setBackpropWeightScale(0.5f);
nnPtr->setBackpropMomentumScale(0.5f);
/*CvANN_MLP_TrainParams params = CvANN_MLP_TrainParams(
// terminate the training after either 1000
// iterations or a very small change in the
// network weights below the specified value
cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 1000, 0.000001),
// use backpropagation for training
CvANN_MLP_TrainParams::BACKPROP,
// coefficients for backpropagation training
// (refer to manual)
0.1,
0.1);*/
/* Mat samples(Size(inputLayerSize, numSamples), CV_32F);
samples.at<float>(Point(0, 0)) = 0.1f;
samples.at<float>(Point(0, 1)) = 0.2f;
Mat responses(Size(outputLayerSize, numSamples), CV_32F);
responses.at<float>(Point(0, 0)) = 0.2f;
responses.at<float>(Point(0, 1)) = 0.4f;
*/
//reading chaos image
// we will read the classification numbers into this variable as though it is a vector
// close the traning images file
/*vector<int> layerInfo;
layerInfo=nnPtr->get;
for (int i = 0; i < layerInfo.size(); i++) {
cout << "size of 0" <<layerInfo[i] << endl;
}*/
cv::imshow("chaos", matTrainingImagesAsFlattenedFloats);
// cout <<abc << endl;
matTrainingImagesAsFlattenedFloats.convertTo(matTrainingImagesAsFlattenedFloats, CV_32F);
//matClassificationInts.reshape(1, 496);
matClassificationInts.convertTo(matClassificationInts, CV_32F);
matSamples.convertTo(matSamples, CV_32F);
std::cout << matClassificationInts.rows << " " << matClassificationInts.cols << " ";
std::cout << matTrainingImagesAsFlattenedFloats.rows << " " << matTrainingImagesAsFlattenedFloats.cols << " ";
std::cout << matSamples.rows << " " << matSamples.cols;
imshow("Samples", matSamples);
imshow("chaos", matTrainingImagesAsFlattenedFloats);
Ptr<ml::TrainData> trainData = ml::TrainData::create(matTrainingImagesAsFlattenedFloats, ml::SampleTypes::ROW_SAMPLE, matSamples);
nnPtr->train(trainData);
bool m = nnPtr->isTrained();
if (m)
std::cout << "training complete\n\n";
// cv::Mat matCurrentChar = Mat(cv::Size(matTrainingImagesAsFlattenedFloats.cols, matTrainingImagesAsFlattenedFloats.rows), CV_32F);
// cout << "samples:\n" << samples << endl;
//cout << "\nresponses:\n" << responses << endl;
/* if (!nnPtr->train(trainData))
return 1;*/
/* cout << "\nweights[0]:\n" << nnPtr->getWeights(0) << endl;
cout << "\nweights[1]:\n" << nnPtr->getWeights(1) << endl;
cout << "\nweights[2]:\n" << nnPtr->getWeights(2) << endl;
cout << "\nweights[3]:\n" << nnPtr->getWeights(3) << endl;*/
//predicting
std::vector <cv::String> filename;
cv::String folder = "./plate/";
cv::glob(folder, filename);
if (filename.empty()) { // if unable to open image
std::cout << "error: image not read from file\n\n"; // show error message on command line
return(0); // and exit program
}
String strFinalString;
for (int i = 0; i < filename.size(); i++) {
cv::Mat matTestingNumbers = cv::imread(filename[i]);
cv::Mat matGrayscale; //
cv::Mat matBlurred; // declare more image variables
cv::Mat matThresh; //
cv::Mat matThreshCopy;
cv::Mat matCanny;
//
cv::cvtColor(matTestingNumbers, matGrayscale, CV_BGR2GRAY); // convert to grayscale
matThresh = cv::Mat(cv::Size(matGrayscale.cols, matGrayscale.rows), CV_8UC1);
// manual inverted binary threshold (equivalent to cv::threshold with THRESH_BINARY_INV)
for (int i = 0; i < matGrayscale.cols; i++) {
    for (int j = 0; j < matGrayscale.rows; j++) {
        if (matGrayscale.at<uchar>(j, i) <= 130) {
            matThresh.at<uchar>(j, i) = 255;
        }
        else {
            matThresh.at<uchar>(j, i) = 0;
        }
    }
}
// blur
cv::GaussianBlur(matThresh, // input image
matBlurred, // output image
cv::Size(5, 5), // smoothing window width and height in pixels
0); // sigma value, determines how much the image will be blurred, zero makes function choose the sigma value
// filter image from grayscale to black and white
/* cv::adaptiveThreshold(matBlurred, // input image
matThresh, // output image
255, // make pixels that pass the threshold full white
cv::ADAPTIVE_THRESH_GAUSSIAN_C, // use gaussian rather than mean, seems to give better results
cv::THRESH_BINARY_INV, // invert so foreground will be white, background will be black
11, // size of a pixel neighborhood used to calculate threshold value
2); */ // constant subtracted from the mean or weighted mean
// cv::imshow("thresh" + std::to_string(i), matThresh);
matThreshCopy = matThresh.clone(); // make a copy of the thresh image, this is necessary b/c findContours modifies the image
std::vector<std::vector<cv::Point> > ptContours; // declare a vector for the contours
std::vector<cv::Vec4i> v4iHierarchy; // declare a vector for the hierarchy
cv::Canny(matBlurred, matCanny, 20, 40, 3);
/*std::vector<std::vector<cv::Point> > ptContours; // declare a vector for the contours
std::vector<cv::Vec4i> v4iHierarchy; // declare a vector for the hierarchy (we won't use this in this program but this may be helpful for reference)
cv::findContours(matThreshCopy, // input image, make sure to use a copy since the function will modify this image in the course of finding contours
ptContours, // output contours
v4iHierarchy, // output hierarchy
cv::RETR_EXTERNAL, // retrieve the outermost contours only
cv::CHAIN_APPROX_SIMPLE); // compress horizontal, vertical, and diagonal segments and leave only their end points
/*std::vector<std::vector<cv::Point> > contours_poly(ptContours.size());
std::vector<cv::Rect> boundRect(ptContours.size());
for (int i = 0; i < ptContours.size(); i++)
{
approxPolyDP(cv::Mat(ptContours[i]), contours_poly[i], 3, true);
boundRect[i] = cv::boundingRect(cv::Mat(contours_poly[i]));
}*/
/*for (int i = 0; i < ptContours.size(); i++) { // for each contour
ContourWithData contourWithData; // instantiate a contour with data object
contourWithData.ptContour = ptContours[i]; // assign contour to contour with data
contourWithData.boundingRect = cv::boundingRect(contourWithData.ptContour); // get the bounding rect
contourWithData.fltArea = cv::contourArea(contourWithData.ptContour); // calculate the contour area
allContoursWithData.push_back(contourWithData); // add contour with data object to list of all contours with data
}
for (int i = 0; i < allContoursWithData.size(); i++) { // for all contours
if (allContoursWithData[i].checkIfContourIsValid()) { // check if valid
validContoursWithData.push_back(allContoursWithData[i]); // if so, append to valid contour list
}
}
//sort contours from left to right
std::sort(validContoursWithData.begin(), validContoursWithData.end(), ContourWithData::sortByBoundingRectXPosition);
// std::string strFinalString; // declare final string, this will have the final number sequence by the end of the program
*/
/*for (int i = 0; i < validContoursWithData.size(); i++) { // for each contour
// draw a green rect around the current char
cv::rectangle(matTestingNumbers, // draw rectangle on original image
validContoursWithData[i].boundingRect, // rect to draw
cv::Scalar(0, 255, 0), // green
2); // thickness
cv::Mat matROI = matThresh(validContoursWithData[i].boundingRect); // get ROI image of bounding rect
cv::Mat matROIResized;
cv::resize(matROI, matROIResized, cv::Size(RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT)); // resize image, this will be more consistent for recognition and storage
*/
cv::Mat matROIFloat;
cv::resize(matThresh, matThresh, cv::Size(RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT));
matThresh.convertTo(matROIFloat, CV_32FC1, 1.0 / 255.0); // convert Mat to float, necessary for call to find_nearest
cv::Mat matROIFlattenedFloat = matROIFloat.reshape(1, 1);
cv::Point maxLoc = { 0,0 };
cv::Point minLoc;
cv::Mat output = cv::Mat(cv::Size(36, 1), CV_32F);
vector<float>output2;
// cv::Mat output2 = cv::Mat(cv::Size(36, 1), CV_32F);
nnPtr->predict(matROIFlattenedFloat, output2);
// float max = output.at<float>(0, 0);
int fo = 0;
float m = output2[0];
imshow("predicted input", matROIFlattenedFloat);
// float b = output.at<float>(0, 0);
// cout <<"\n output0,0:"<<b<<endl;
// minMaxLoc(output, 0, 0, &minLoc, &maxLoc, Mat());
// cout << "\noutput:\n" << maxLoc.x << endl;
// find the output neuron with the strongest response (argmax)
for (int j = 1; j < 36; j++) {
    float value = output2[j];
    if (value > m) {
        m = value;
        fo = j;
    }
}
cout << "j value in output " << fo << " Max value " << m << endl; // print the value itself, not a pointer to it
//imshow("output image" + to_string(i), output);
// cout << "\noutput:\n" << minLoc.x << endl;
//float fltCurrentChar = (float)maxLoc.x;
output.release();
m = 0;
fo = 0;
}
// strFinalString = strFinalString + char(int(fltCurrentChar)); // append current char to full string
// cv::imshow("Predict output", output);
/*cv::Point maxLoc = {0,0};
Mat output=Mat (cv::Size(matSamples.cols,matSamples.rows),CV_32F);
nnPtr->predict(matTrainingImagesAsFlattenedFloats, output);
minMaxLoc(output, 0, 0, 0, &maxLoc, 0);
cout << "\noutput:\n" << maxLoc.x << endl;*/
// getchar();
/*for (int i = 0; i < 10;i++) {
for (int j = 0; j < 36; j++) {
if (matCurrentChar.at<float>(i, j) >= 0.6) {
cout << " "<<j<<" ";
}
}
}*/
waitKey(0);
return(0);
}
void gen() {
std::string dir, filepath;
int num, imgArea, minArea;
int pos = 0;
bool f = true;
struct stat filestat;
cv::Mat imgTrainingNumbers;
cv::Mat imgGrayscale;
cv::Mat imgBlurred;
cv::Mat imgThresh;
cv::Mat imgThreshCopy;
cv::Mat matROIResized=cv::Mat (cv::Size(RESIZED_IMAGE_WIDTH,RESIZED_IMAGE_HEIGHT),CV_8UC1);
cv::Mat matROI;
std::vector <cv::String> filename;
std::vector<std::vector<cv::Point> > ptContours;
std::vector<cv::Vec4i> v4iHierarchy;
int count = 0, contoursCount = 0;
// use Mat::zeros: the plain Mat(...) constructor leaves the elements uninitialized
matSamples = cv::Mat::zeros(cv::Size(36, 496), CV_32FC1);
matTrainingImagesAsFlattenedFloats = cv::Mat::zeros(cv::Size(600, 496), CV_32FC1);
for (int j = 0; j <= 35; j++) {
int tmp = j;
cv::String folder = "./Training Data/" + std::to_string(tmp);
cv::glob(folder, filename);
for (int k = 0; k < filename.size(); k++) {
count++;
// If the file is a directory (or is in some way invalid) we'll skip it
// if (stat(filepath.c_str(), &filestat)) continue;
//if (S_ISDIR(filestat.st_mode)) continue;
imgTrainingNumbers = cv::imread(filename[k]);
imgArea = imgTrainingNumbers.cols*imgTrainingNumbers.rows;
// read in training numbers image
minArea = imgArea * 50 / 100;
if (imgTrainingNumbers.empty()) {
    std::cout << "error: image not read from file\n\n";
    continue; // skip unreadable files instead of crashing in cvtColor below
}
cv::cvtColor(imgTrainingNumbers, imgGrayscale, CV_BGR2GRAY);
//cv::equalizeHist(imgGrayscale, imgGrayscale);
imgThresh = cv::Mat(cv::Size(imgGrayscale.cols, imgGrayscale.rows), CV_8UC1);
/*cv::adaptiveThreshold(imgGrayscale,
imgThresh,
255,
cv::ADAPTIVE_THRESH_GAUSSIAN_C,
cv::THRESH_BINARY_INV,
3,
0);
*/
// manual inverted binary threshold; the loop variables shadow the outer i/j only inside this block
for (int i = 0; i < imgGrayscale.cols; i++) {
    for (int j = 0; j < imgGrayscale.rows; j++) {
        if (imgGrayscale.at<uchar>(j, i) <= 130) {
            imgThresh.at<uchar>(j, i) = 255;
        }
        else {
            imgThresh.at<uchar>(j, i) = 0;
        }
    }
}
// cv::imshow("imgThresh"+std::to_string(count), imgThresh);
imgThreshCopy = imgThresh.clone();
cv::GaussianBlur(imgThreshCopy,
imgBlurred,
cv::Size(5, 5),
0);
cv::Mat imgCanny;
// cv::Canny(imgBlurred,imgCanny,20,40,3);
cv::findContours(imgBlurred,
ptContours,
v4iHierarchy,
cv::RETR_EXTERNAL,
cv::CHAIN_APPROX_SIMPLE);
for (int i = 0; i < ptContours.size(); i++) {
if (cv::contourArea(ptContours[i]) > MIN_CONTOUR_AREA) {
contoursCount++;
cv::Rect boundingRect = cv::boundingRect(ptContours[i]);
cv::rectangle(imgTrainingNumbers, boundingRect, cv::Scalar(0, 0, 255), 2); // draw red rectangle around each contour as we ask user for input
matROI = imgThreshCopy(boundingRect); // get ROI image of bounding rect
std::string path = "./" + std::to_string(contoursCount) + ".JPG";
cv::imwrite(path, matROI);
// cv::imshow("matROI" + std::to_string(count), matROI);
cv::resize(matROI, matROIResized, cv::Size(RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT)); // resize image, this will be more consistent for recognition and storage
std::cout << filename[k] << " " << contoursCount << "\n";
//cv::imshow("matROI", matROI);
//cv::imshow("matROIResized"+std::to_string(count), matROIResized);
// cv::imshow("imgTrainingNumbers" + std::to_string(contoursCount), imgTrainingNumbers);
int intChar;
if (j < 10)
    intChar = j + 48; // classes 0-9 map to ASCII '0'-'9'
else {
    intChar = j + 55; // classes 10-35 map to ASCII 'A'-'Z'
}
/*if (intChar == 27) { // if esc key was pressed
return(0); // exit program
}*/
// if (std::find(intValidChars.begin(), intValidChars.end(), intChar) != intValidChars.end()) { // else if the char is in the list of chars we are looking for . . .
// append classification char to integer list of chars
cv::Mat matImageFloat;
matROIResized.convertTo(matImageFloat,CV_32FC1);// now add the training image (some conversion is necessary first) . . .
//matROIResized.convertTo(matImageFloat, CV_32FC1); // convert Mat to float
cv::Mat matImageFlattenedFloat = matImageFloat.reshape(1, 1);
//matTrainingImagesAsFlattenedFloats.push_back(matImageFlattenedFloat);// flatten
try {
//matTrainingImagesAsFlattenedFloats.push_back(matImageFlattenedFloat);
std::cout << matTrainingImagesAsFlattenedFloats.rows << " " << matTrainingImagesAsFlattenedFloats.cols;
//unsigned char* re;
int ii = 0; // Current column in training_mat
for (int i = 0; i<matImageFloat.rows; i++) {
for (int j = 0; j < matImageFloat.cols; j++) {
matTrainingImagesAsFlattenedFloats.at<float>(contoursCount-1, ii++) = matImageFloat.at<float>(i,j);
}
}
}
catch (std::exception &exc) {
    f = false;
    std::cout << exc.what() << "\n"; // actually report the error
}
if (f) {
matClassificationInts.push_back((float)intChar);
matSamples.at<float>(contoursCount-1, j) = 1.0;
}
f = true;
// add to Mat as though it was a vector, this is necessary due to the
// data types that KNearest.train accepts
} // end if
//} // end if
} // end for
}//end i
}//end j
}
[Image: output of the predict function]

Unfortunately, I don't have the necessary time to really review the code, but I can say off the top that to train a model that performs well for prediction with 36 classes, you will need several things:
A large number of good quality images. Ideally, you'd want thousands of images for each class. Of course, you can see somewhat decent results with less than that, but if you only have a few images per class, it's never going to be able to generalize adequately.
You need a model that is large and sophisticated enough to provide the necessary expressiveness to solve the problem. For a problem like this, a plain old multi-layer perceptron with one hidden layer with 100 units may not be enough. This is actually a problem that would benefit from using a Convolutional Neural Net (CNN) with a couple layers just to extract useful features first. But assuming you don't want to go down that path, you may at least want to tweak the size of your hidden layer.
To even get to a point where the training process converges, you will probably need to experiment and, crucially, you need an effective way to test the accuracy of the ANN after each experiment. Ideally, you want to observe the loss as training proceeds, but I'm not sure whether that's possible using OpenCV's ML functionality. At a minimum, you should fully expect to have to play around with the various so-called "hyper-parameters" and run many experiments before you have a reasonable model.
Anyway, the most important thing is to make sure you have a solid mechanism for validating the accuracy of the model after training. If you aren't already doing so, set aside some images as a separate test set, and after each experiment, use the trained ANN to predict each test image to see the accuracy.
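For example, here is a minimal sketch of such a test loop, assuming a Mat testData holding one flattened 20x30 sample per row (CV_32F, scaled like the training data) and a vector<int> testLabels with the ground-truth class index (0-35) for each row; these names are illustrative, not from your code:
int correct = 0;
for (int r = 0; r < testData.rows; r++) {
    cv::Mat response;
    nnPtr->predict(testData.row(r), response); // one row of 36 output activations
    cv::Point maxLoc;
    cv::minMaxLoc(response, 0, 0, 0, &maxLoc); // index of the strongest output
    if (maxLoc.x == testLabels[r]) correct++;
}
std::cout << "test accuracy: " << (100.0 * correct / testData.rows) << "%\n";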
One final general note: what you're trying to do is complex. You will save yourself a huge number of headaches if you take the time early and often to refactor your code. No matter how many experiments you run, if there's some defect causing (for example) your training data to be fundamentally different in some way than your test data, you will never see good results.
Good luck!
EDIT: I should also point out that seeing the same result for every input image is a classic sign that training failed. Unfortunately, there are many reasons why that might happen and it will be very difficult for anyone to isolate that for you without some cleaner code and access to your image data.

I have solved the issue of not getting the output of predict. The issue arose because the input Mat used for training (i.e. matTrainingImagesAsFlattenedFloats) had values of 255.0 for white pixels. This happened because I hadn't used convertTo() properly. You need to call convertTo(outputImage, CV_32FC1, 1.0 / 255.0), which scales pixel values of 255.0 down to 1.0; after that I am getting the correct output.
Thank you for all the help.
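In other words, the training samples must be scaled into the same [0, 1] range that the prediction path already uses. A minimal sketch of the corrected conversion inside gen():
cv::Mat matImageFloat;
// divide by 255 during the conversion so white pixels become 1.0 instead of 255.0,
// matching the 1.0 / 255.0 factor used on the prediction side
matROIResized.convertTo(matImageFloat, CV_32FC1, 1.0 / 255.0);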

This is too broad to be one question, I'm sorry to say. I tried this over and over and couldn't find a solution. I recommend that you first implement a simple AND, OR, or XOR, just to make sure that the learning part is working and that you get better results the more passes you do. Also, I suggest trying the hyperbolic tangent as the transfer function instead of the sigmoid. And good luck!
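For instance, a minimal XOR sanity check with the same OpenCV 3.x ml API might look like this (a toy setup to verify that training converges, not a tuned configuration):
float in[4][2] = { {0,0}, {0,1}, {1,0}, {1,1} };
float out[4][1] = { {0}, {1}, {1}, {0} };
cv::Mat samples(4, 2, CV_32F, in);
cv::Mat responses(4, 1, CV_32F, out);
cv::Ptr<cv::ml::ANN_MLP> net = cv::ml::ANN_MLP::create();
cv::Mat layerSizes = (cv::Mat_<int>(3, 1) << 2, 4, 1); // 2 inputs, 4 hidden, 1 output
net->setLayerSizes(layerSizes);
net->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM, 1, 1);
net->setTrainMethod(cv::ml::ANN_MLP::BACKPROP, 0.1, 0.1);
net->setTermCriteria(cv::TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 10000, 1e-6));
net->train(cv::ml::TrainData::create(samples, cv::ml::ROW_SAMPLE, responses));
cv::Mat result;
net->predict(samples, result); // the four outputs should approach 0, 1, 1, 0
std::cout << result << std::endl;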
Here are some of my own posts that might help you:
Exact results as yours: HERE
Some codes: HERE
I don't want to say it, but several professors I have met said backpropagation just doesn't work, and they had (and I have had) to implement their own method of teaching the network.

Related

How to interpret mediapipe palm detection model outputs

I'm trying to use the mediapipe palm detection model in OpenCV C++.
I downloaded the model in pb format from this GitHub repo, and I can successfully load it and get the output values of the model. However, I am unable to use these outputs to draw the detection rectangles.
code :
cvtColor(frame,frame,COLOR_BGR2RGB);
Mat blob = dnn::blobFromImage(frame,1.0/255, cv::Size(256, 256), cv::Scalar(0, 0, 0));
net.setInput(blob);
cv::Mat outputs, classificators_outs;
net.forward(outputs,"regressors");
net.forward(classificators_outs,"classificators");
Mat reg = outputs.reshape(1, outputs.size[1]); //2d Mat, 2944 rows, 18 cols
Mat prob = classificators_outs.reshape(1, classificators_outs.size[1]); //2d Mat, 2944 rows, 1 col
float dw = float(frame.cols) / 256;
float dh = float(frame.rows) / 256;
vector<Rect> boxes;
std::vector<float> confidences;
for (int i = 0; i < reg.rows; i++) {
    if (prob.at<float>(i, 0) < 0.7)
        continue;
    cout << prob.at<float>(i, 0) << endl;
    Mat_<float> row = reg.row(i);
    Mat_<float> pro = prob.row(i);
    if (i == 1)
    {
        cout << row << endl;
    }
    // scale to orig. image coords:
    Rect b(row(0, 0) * dw, row(0, 1) * dh, row(0, 2) * dw, row(0, 3) * dh);
    boxes.push_back(b);
    cout << b << endl;
}
result of cout << row << endl; :
[3.6905072, 5.4042335, 32.863857, 32.863861, 6.3343191, 5.0829744, 7.8449326, 4.7407198, -0.25462124, 1.8523651, -6.654418, 0.3679803, -11.397835, 0.078130387, 13.685674, 8.8667402, 15.410878, 11.636487]
results of cout << b << endl; :
[13129 x 9847 from (-552, -668)]
I also noticed that if I let the code run further, the rectangles get bigger and bigger.
Can anyone help?
Thanks in advance.

How to apply custom filters on image?

I'm using OpenCV4 on Ubuntu 20.04 LTS on WSL + XServer for GUI.
I want to create custom convolutional filter kernels and apply them to my image. This is the code I've written for it:
cv::Mat filter2D(cv::Mat input, cv::Mat filter)
{
    using namespace cv;
    Mat dst = input.clone();
    //cout << " filter data successfully found. Rows:" << filter.rows << " cols:" << filter.cols << " channels:" << filter.channels() << "\n";
    //cout << " input data successfully found. Rows:" << input.rows << " cols:" << input.cols << " channels:" << input.channels() << "\n";
    for (int i = 0 - (filter.rows / 2); i < input.rows - (filter.rows / 2); i++)
    {
        for (int j = 0 - (filter.cols / 2); j < input.cols - (filter.cols / 2); j++)
        {   // adding k and l to i and j will make up the difference and allow us to process the whole image
            float filtertotal = 0;
            for (int k = 0; k < filter.rows; k++)
            {
                for (int l = 0; l < filter.rows; l++)
                {
                    if (i + k >= 0 && i + k < input.rows && j + l >= 0 && j + l < input.cols)
                    {   // don't try to process pixels off the edge of the map
                        float a = input.at<uchar>(i + k, j + l);
                        float b = filter.at<float>(k, l);
                        float product = a * b;
                        filtertotal += product;
                    }
                }
            }
            // filter all processed for this pixel, write it to dst
            dst.at<uchar>(i + (filter.rows / 2), j + (filter.cols / 2)) = filtertotal;
        }
    }
    return dst;
}
int main(int argc, char** argv)
{
    // Declare variables
    cv::Mat_<float> src;
    const char* window_name = "filter2D Demo";
    // Loads an image
    src = cv::imread("fapan.png", cv::IMREAD_GRAYSCALE); // Load an image
    if (src.empty())
    {
        printf(" Error opening image\n");
        return EXIT_FAILURE;
    }
    static float x[3][3] = {
        {-1, -1, -1},
        {-1, 8, -1},
        {-1, -1, -1}
    };
    cv::Mat kernel(3, 3, CV_16FC1, x);
    // Apply filter
    filter2D(src, kernel);
    cv::imshow(window_name, src);
    cv::waitKey(0);
    return EXIT_SUCCESS;
}
The problem is that the output image looks like this: [output image omitted]
As you can see, not only are the edges white, but the inside is white too.
[input image omitted]
The output you have posted for the input code is correct, as you are applying a normal filter to an image.
It may cause a little blurring or sharpening, but it will never make the output contain only edges.
In order to detect only the edges in the image, you must apply a Laplacian along a certain direction.
https://www.l3harrisgeospatial.com/docs/LaplacianFilters.html#:~:text=A%20Laplacian%20filter%20is%20an,an%20edge%20or%20continuous%20progression. (a link with some info)
The Laplacian is a derivative of the image, so it only responds to changes.
I recommend you try this in the MATLAB Image Processing Toolbox first.
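For reference, here is a minimal sketch comparing a hand-rolled kernel against OpenCV's built-in routines; the file name "input.png" is illustrative. Note the kernel Mat is CV_32F, matching the float data it is built from:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (src.empty()) return 1;
    cv::Mat kernel = (cv::Mat_<float>(3, 3) <<
        -1, -1, -1,
        -1,  8, -1,
        -1, -1, -1);
    cv::Mat custom, laplacian;
    cv::filter2D(src, custom, CV_32F, kernel); // OpenCV's own filter2D
    cv::Laplacian(src, laplacian, CV_32F, 3);  // built-in Laplacian for comparison
    cv::convertScaleAbs(custom, custom);       // scale float results back to a displayable 8-bit range
    cv::convertScaleAbs(laplacian, laplacian);
    cv::imshow("custom kernel", custom);
    cv::imshow("Laplacian", laplacian);
    cv::waitKey(0);
    return 0;
}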

Splitting individual contour points into its HSV channels to perform additional operations

I am currently playing around with the idea of calculating the average HSV for points in a contour. I did some research and came across the split function, which allows a Mat of an image to be broken into its channels. However, the contour datatype is a vector of points. Here is an example of the code:
findContours(detected_edges, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
vector<vector<Point>> ContourHsvChannels(3);
split(contours,ContourHsvChannels);
Basically the goal is to split each point of a contour into its HSV channels so I can perform operations on them. Any guidance would be appreciated.
You can simply draw the contours onto a blank image the same size as your original image to create a mask, and then use that to mask your image (in HSV or whatever colorspace you want). The mean() function takes in a mask parameter so that you only get the mean of the values highlighted by the mask.
If you also want the standard deviation you can use the meanStdDev() function, it also accepts a mask.
Here's an example in Python:
import cv2
import numpy as np
# read image, ensure binary
img = cv2.imread('fg.png', 0)
img[img>0] = 255
# find contours in the image
contours = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[1]
# create an array of blank images to draw contours on
n_contours = len(contours)
contour_imgs = [np.zeros_like(img) for i in range(n_contours)]
# draw each contour on a new image
for i in range(n_contours):
    cv2.drawContours(contour_imgs[i], contours, i, 255)
# color image of where the HSV values are coming from
color_img = cv2.imread('image.png')
hsv = cv2.cvtColor(color_img, cv2.COLOR_BGR2HSV)
# find the means and standard deviations of the HSV values for each contour
means = []
stddevs = []
for cnt_img in contour_imgs:
    mean, stddev = cv2.meanStdDev(hsv, mask=cnt_img)
    means.append(mean)
    stddevs.append(stddev)
print('First mean:')
print(means[0])
print('First stddev:')
print(stddevs[0])
First mean:
[[ 146.3908046 ]
[ 51.2183908 ]
[ 202.95402299]]
First stddev:
[[ 7.92835204]
[ 11.78682811]
[ 9.61549043]]
There's three values; one for each channel.
The other option is to just look up all the values; a contour is an array of points, so you can index the image with those points for each contour in your contour array and store them in individual arrays, and then find the meanStdDev() or mean() over those (and not bother with the mask). For e.g. (again in Python, sorry about that):
# color image of where the HSV values are coming from
color_img = cv2.imread('image.png')
hsv = cv2.cvtColor(color_img, cv2.COLOR_BGR2HSV)
# read image, ensure binary
img = cv2.imread('fg.png', 0)
img[img>0] = 255
# find contours in the image
contours = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[1]
means = []
stddevs = []
for contour in contours:
    contour_colors = []
    n_points = len(contour)
    for point in contour:
        x, y = point[0]
        contour_colors.append(hsv[y, x])
    contour_colors = np.array(contour_colors).reshape(1, n_points, 3)
    mean, stddev = cv2.meanStdDev(contour_colors)
    means.append(mean)
    stddevs.append(stddev)
print('First mean:')
print(means[0])
print('First stddev:')
print(stddevs[0])
First mean:
[[ 146.3908046 ]
[ 51.2183908 ]
[ 202.95402299]]
First stddev:
[[ 7.92835204]
[ 11.78682811]
[ 9.61549043]]
So this gives the same values. In Python I just simply created blank lists for the means and standard deviations and appended to them. In C++ you can create a std::vector<cv::Vec3b> (assuming uint8 image, otherwise Vec3f or whatever is appropriate) for each. Then inside the loop I create another blank list to hold the colors for each contour; again this would be a std::vector<cv::Vec3b>, and then run the meanStdDev() on that vector in each loop, and append the value to the means and standard deviations vectors. You don't have to append, you can easily grab the number of contours and the number of points in each contour and preallocate for speed, and then just index into those vectors instead of appending.
In Python there's virtually no speed difference between either method. Of course there's better memory efficiency in the second example; instead of storing a whole blank Mat we just store a few of the values. However the backend OpenCV methods work really quickly for masking operations, so you'll have to test the speed difference yourself in C++ and see which way is better. As the number of contours increases I imagine the benefits of the second method increases. If you do time both approaches, please let us know your results!
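To make the C++ translation concrete, here is a minimal sketch of the mask-based approach described above, assuming the same two input files as the Python example ("fg.png" binary, "image.png" color):
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("image.png"); // color image the HSV values come from
    cv::Mat fg = cv::imread("fg.png", cv::IMREAD_GRAYSCALE); // binary image with the shapes
    cv::Mat hsv;
    cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(fg, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);
    for (int i = 0; i < (int)contours.size(); i++) {
        // draw one contour onto a blank mask, then take the masked mean / stddev
        cv::Mat mask = cv::Mat::zeros(fg.size(), CV_8UC1);
        cv::drawContours(mask, contours, i, cv::Scalar(255));
        cv::Scalar mean, stddev;
        cv::meanStdDev(hsv, mean, stddev, mask);
        std::cout << "contour " << i << ": mean " << mean << " stddev " << stddev << std::endl;
    }
    return 0;
}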
Here is the solution written in C++:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
#include <cmath>
using namespace cv;
using namespace std;
int main(int argc, char** argv) {
// Mat Declarations
// Mat img = imread("white.jpg");
// Mat src = imread("Rainbro.png");
Mat src = imread("multi.jpg");
// Mat src = imread("DarkRed.png");
Mat Hist;
Mat HSV;
Mat Edges;
Mat Grey;
vector<vector<Vec3b>> hueMEAN;
vector<vector<Point>> contours;
// Variables
int edgeThreshold = 1;
int const max_lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;
int lowThreshold = 0;
// Windows
namedWindow("img", WINDOW_NORMAL);
namedWindow("HSV", WINDOW_AUTOSIZE);
namedWindow("Edges", WINDOW_AUTOSIZE);
namedWindow("contours", WINDOW_AUTOSIZE);
// Color Transforms
cvtColor(src, HSV, CV_BGR2HSV);
cvtColor(src, Grey, CV_BGR2GRAY);
// Perform Hist Equalization to help equalize Red hues so they stand out for
// better Edge Detection
equalizeHist(Grey, Grey);
// Image Transforms
blur(Grey, Edges, Size(3, 3));
Canny(Edges, Edges, max_lowThreshold, lowThreshold * ratio, kernel_size);
findContours(Edges, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
//Rainbro MAT
//Mat drawing = Mat::zeros(432, 700, CV_8UC1);
//Multi MAT
Mat drawing = Mat::zeros(630, 1200, CV_8UC1);
//Red variation Mat
//Mat drawing = Mat::zeros(600, 900, CV_8UC1);
vector <vector<Point>> ContourPoints;
/* This code for loops through all contours and assigns the value of the y coordinate as a parameter
for the row pointer in the HSV mat. The value vec3b pointer pointing to the pixel in the mat is accessed
and stored for any Hue value that is between 0-10 and 165-179 as Red only contours.*/
for (int i = 0; i < contours.size(); i++) {
    vector<Vec3b> vf;
    vector<Point> points;
    bool isContourRed = false;
    for (int j = 0; j < contours[i].size(); j++) {
        // Row Y-Coordinate of Mat from Y-Coordinate of Contour
        int MatRow = int(contours[i][j].y);
        // Column X-Coordinate of Mat from X-Coordinate of Contour
        int MatCol = int(contours[i][j].x);
        Vec3b *HsvRow = HSV.ptr<Vec3b>(MatRow);
        int h = int(HsvRow[int(MatCol)][0]);
        int s = int(HsvRow[int(MatCol)][1]);
        int v = int(HsvRow[int(MatCol)][2]);
        cout << "Coordinate: ";
        cout << contours[i][j].x;
        cout << ",";
        cout << contours[i][j].y << endl;
        cout << "Hue: " << h << endl;
        // Get contours that are only in the red spectrum Hue 0-10, 165-179
        if ((h <= 10 || (h >= 165 && h <= 180)) && ((s > 0) && (v > 0))) {
            cout << "Coordinate: ";
            cout << contours[i][j].x;
            cout << ",";
            cout << contours[i][j].y << endl;
            cout << "Hue: " << h << endl;
            vf.push_back(Vec3b(h, s, v));
            points.push_back(contours[i][j]);
            isContourRed = true;
        }
    }
    if (isContourRed == true) {
        hueMEAN.push_back(vf);
        ContourPoints.push_back(points);
    }
}
drawContours(drawing, ContourPoints, -1, Scalar(255, 255, 255), 2, 8);
// Calculate Mean and STD for each Contour
cout << "contour Means & STD of Vec3b:" << endl;
for (int i = 0; i < hueMEAN.size(); i++) {
    Scalar meanTemp = mean(hueMEAN.at(i));
    Scalar sdTemp;
    cout << i << ": " << endl;
    cout << meanTemp << endl;
    cout << " " << endl;
    meanStdDev(hueMEAN.at(i), meanTemp, sdTemp);
    cout << sdTemp << endl;
    cout << " " << endl;
}
cout << "Actual Contours: " << contours.size() << endl;
cout << "# Contours: " << hueMEAN.size() << endl;
imshow("img", src);
imshow("HSV", HSV);
imshow("Edges", Edges);
imshow("contours", drawing);
waitKey(0);
return 0;
}

Calculating the mean and standard deviation in C++ for a single-channel histogram

I want to calculate the mean and standard deviation for a histogram of an HSV image, but I only want to build the histogram and do the calculations for the V channel.
I have been reading examples of how to do this for a set of channels and have tried those approaches, but I am getting confused over whether my approach for initially creating the histogram is correct for just one channel, because the program keeps crashing when I try to execute it.
Here is what I have at the moment (the variable test is a cv::Mat image, and this can be any image you wish to use to recreate the issue). I have probably missed something obvious, and the for loop might not be correct in terms of the range of values, but I haven't done this in C++ before.
cv::cvtColor(test, test, CV_BGR2HSV);
int v_bins = 50;
int histSize[] = { v_bins };
cv::MatND hist;
float v_ranges[] = { 0, 255};
cv::vector<cv::Mat> channel(3);
split(test, channel);
const float* ranges[] = { v_ranges };
int channels[] = {0};
cv::calcHist(&channel[2], 1, channels, cv::Mat(), hist, 1, histSize, ranges, true, false); //histogram calculation
float mean=0;
float rows= hist.size().height;
float cols = hist.size().width;
for (int v = 0; v < v_bins; v++)
{
    std::cout << hist.at<float>(v, v) << std::endl;
    mean = mean + hist.at<float>(v);
}
mean = mean / (rows * cols);
std::cout << mean << std::endl;
You can simply use cv::meanStdDev, which calculates the mean and standard deviation of array elements.
Note that both the mean and stddev arguments are cv::Scalar, so you need to do mean[0] and stddev[0] to get the double values of your single-channel array hist.
This code will clarify its usage:
#include <opencv2\opencv.hpp>
#include <iostream>
int main()
{
cv::Mat test = cv::imread("path_to_image");
cv::cvtColor(test, test, CV_BGR2HSV);
int v_bins = 50;
int histSize[] = { v_bins };
cv::MatND hist;
float v_ranges[] = { 0, 255 };
cv::vector<cv::Mat> channel(3);
split(test, channel);
const float* ranges[] = { v_ranges };
int channels[] = { 0 };
cv::calcHist(&channel[2], 1, channels, cv::Mat(), hist, 1, histSize, ranges, true, false); //histogram calculation
cv::Scalar mean, stddev;
cv::meanStdDev(hist, mean, stddev);
std::cout << "Mean: " << mean[0] << " StdDev: " << stddev[0] << std::endl;
return 0;
}
UPDATE
You can compute the mean and the standard deviation by their definition:
double dmean = 0.0;
double dstddev = 0.0;
// Mean standard algorithm
for (int i = 0; i < v_bins; ++i)
{
    dmean += hist.at<float>(i);
}
dmean /= v_bins;
// Standard deviation standard algorithm
std::vector<double> var(v_bins);
for (int i = 0; i < v_bins; ++i)
{
    var[i] = (dmean - hist.at<float>(i)) * (dmean - hist.at<float>(i));
}
for (int i = 0; i < v_bins; ++i)
{
    dstddev += var[i];
}
dstddev = sqrt(dstddev / v_bins);
std::cout << "Mean: " << dmean << " StdDev: " << dstddev << std::endl;
and you'll get the same values as OpenCV meanStdDev.
Be careful about calculating statistics on a histogram. If you just run meanStdDev, you'll get the mean and stdev of the bin values. That doesn't tell you an awful lot.
Probably what you want is the mean and stdev intensity.
So, if you want to derive the image mean and standard deviation from a histogram (or set of histograms), then you can use the following code:
// assume histogram is of type cv::Mat and comes from cv::calcHist
double s = 0;
double total_hist = 0;
for (int i = 0; i < histogram.total(); ++i) {
    s += histogram.at<float>(i) * (i + 0.5); // bin centre
    total_hist += histogram.at<float>(i);
}
double mean = s / total_hist;
double t = 0;
for (int i = 0; i < histogram.total(); ++i) {
    double x = (i - mean);
    t += histogram.at<float>(i) * x * x;
}
double stdev = std::sqrt(t / total_hist);
From the definitions of the mean:
mean = sum(x * p(x))                   // expectation
std  = sqrt(sum(p(x) * (x - mean)**2)) // sqrt(variance)
The mean is the expectation value for x. So histogram[x]/sum(histogram) gives you p(x). The definition of standard deviation is similar and comes from the variance. The numbers are slightly simpler because pixels can only take integer values and are unit spaced.
Note this is also useful if you want to calculate normalisation statistics for a batch of images using the accumulate option.
Adapted from: How to calculate the standard deviation from a histogram? (Python, Matplotlib)

Access Mat binary images elements in OpenCV

I tried the following code to print all the white pixels of this binary image, without success:
Mat grayImage;
Mat binImage;
Mat rgb_Image;
int Max_value = 255;
int Global_Threshold = 155;
rgb_Image = imread("../Tests/Object/Object.jpg", CV_LOAD_IMAGE_COLOR); // Read the file
if (!rgb_Image.data) // Check for invalid input
{
    cout << "Could not open or find the image" << endl;
    return -1;
}
//Convert to Grayscale.
cvtColor(rgb_Image, grayImage, CV_BGR2GRAY);
//Binarize the image with a fixed threshold.
threshold(grayImage, binImage, Global_Threshold, Max_value, 0);
//Printing white pixels
for (int i = 0; i < 640; i++)
{
    for (int j = 0; j < 480; j++)
    {
        if (255 == binImage.at<float>(i, j))
        {
            cout << binImage.at<float>(i, j) << endl;
        }
    }
}
If I print the values that are not zero, I get strange values, but never 255.
Thanks,
cvtColor will create a uchar image, and threshold will keep the data format, so your binImage is made up of uchars, not floats.
Change
binImage.at<float>(i,j)
to
binImage.at<uchar>(i,j)
and keep in mind that comparing floats with == is usually a bad idea (even when you actually work with floats) because of floating-point representation errors. You may end up with a matrix full of 254.9999999999 and never meet the image(i,j) == 255 condition.
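A minimal sketch of the corrected loop; note also that cv::Mat::at takes (row, col), so iterating over the image's own rows and cols avoids out-of-bounds access when the image isn't 640x480:
for (int r = 0; r < binImage.rows; r++)
{
    for (int c = 0; c < binImage.cols; c++)
    {
        if (binImage.at<uchar>(r, c) == 255)
        {
            // cast to int so << prints the number rather than a raw character
            cout << (int)binImage.at<uchar>(r, c) << endl;
        }
    }
}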
You also need to cast the value before printing it.
Although the pixels are stored as unsigned chars, << treats a uchar as a character rather than a number, so cout prints a raw (often unprintable) symbol instead of the value. Cast it to int (or float) to see the numeric value.