Accessing binary image Mat elements in OpenCV - C++

I tried the following code to print all the white pixels of this binary image without success:
Mat grayImage;
Mat binImage;
Mat rgb_Image;
int Max_Value = 255;
int Global_Threshold = 155;

rgb_Image = imread("../Tests/Object/Object.jpg", CV_LOAD_IMAGE_COLOR); // Read the file
if (!rgb_Image.data) // Check for invalid input
{
    cout << "Could not open or find the image" << endl;
    return -1;
}

// Convert to grayscale.
cvtColor(rgb_Image, grayImage, CV_BGR2GRAY);

// Binarize the image with a fixed threshold.
threshold(grayImage, binImage, Global_Threshold, Max_Value, 0);

// Print white pixels.
for (int i = 0; i < 640; i++)
{
    for (int j = 0; j < 480; j++)
    {
        if (255 == binImage.at<float>(i, j))
        {
            cout << binImage.at<float>(i, j) << endl;
        }
    }
}
If I print the values that are not zero, I get strange values, but never 255.
Thanks,

cvtColor will create a uchar image, and threshold will keep that data format, so your binImage is made up of uchars, not floats.
Change
binImage.at<float>(i,j)
to
binImage.at<uchar>(i,j)
Also keep in mind that comparing floating-point values with == is usually a bad idea (even when you genuinely work with floats), because of floating-point representation errors. You may end up with a matrix full of 254.9999999999 values and never satisfy the image(i,j) == 255 condition.
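A minimal sketch of the corrected loop (assuming binImage is the CV_8U output of threshold; iterating over the Mat's own dimensions also avoids the hard-coded 640/480, since at(row, col) takes the row index first):
for (int i = 0; i < binImage.rows; i++)
{
    for (int j = 0; j < binImage.cols; j++)
    {
        if (binImage.at<uchar>(i, j) == 255)
        {
            cout << "white pixel at (" << i << ", " << j << ")" << endl;
        }
    }
}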

You need to cast the value before printing it.
Even though the pixel data is numeric, at<uchar> returns an unsigned char, and operator<< treats that as a character rather than a number, so the console shows raw characters instead of the pixel values. Cast the value to int (or float) before printing.
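For example, casting the value at print time (a small illustrative line, not from the original post):
cout << (int)binImage.at<uchar>(i, j) << endl; // prints 255 rather than an unreadable character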

Related

How to apply custom filters to an image?

I'm using OpenCV4 on Ubuntu 20.04 LTS on WSL + XServer for GUI.
I want to create custom convolutional filter kernels and apply them to my image. This is the code I've written for it:
cv::Mat filter2D(cv::Mat input, cv::Mat filter)
{
    using namespace cv;
    Mat dst = input.clone();
    //cout << " filter data successfully found. Rows:" << filter.rows << " cols:" << filter.cols << " channels:" << filter.channels() << "\n";
    //cout << " input data successfully found. Rows:" << input.rows << " cols:" << input.cols << " channels:" << input.channels() << "\n";
    for (int i = 0 - (filter.rows / 2); i < input.rows - (filter.rows / 2); i++)
    {
        for (int j = 0 - (filter.cols / 2); j < input.cols - (filter.cols / 2); j++)
        {   // adding k and l to i and j will make up the difference and allow us to process the whole image
            float filtertotal = 0;
            for (int k = 0; k < filter.rows; k++)
            {
                for (int l = 0; l < filter.rows; l++)
                {
                    if (i + k >= 0 && i + k < input.rows && j + l >= 0 && j + l < input.cols)
                    {   // don't try to process pixels off the edge of the map
                        float a = input.at<uchar>(i + k, j + l);
                        float b = filter.at<float>(k, l);
                        float product = a * b;
                        filtertotal += product;
                    }
                }
            }
            // filter all processed for this pixel, write it to dst
            dst.at<uchar>(i + (filter.rows / 2), j + (filter.cols / 2)) = filtertotal;
        }
    }
    return dst;
}
int main(int argc, char** argv)
{
    // Declare variables
    cv::Mat_<float> src;
    const char* window_name = "filter2D Demo";
    // Loads an image
    src = cv::imread("fapan.png", cv::IMREAD_GRAYSCALE); // Load an image
    if (src.empty())
    {
        printf(" Error opening image\n");
        return EXIT_FAILURE;
    }
    static float x[3][3] = {
        {-1, -1, -1},
        {-1,  8, -1},
        {-1, -1, -1}
    };
    cv::Mat kernel(3, 3, CV_16FC1, x);
    // Apply filter
    filter2D(src, kernel);
    cv::imshow(window_name, src);
    cv::waitKey(0);
    return EXIT_SUCCESS;
}
The problem is that the output image looks like this: not only the edges are white, but the inside of the object is white too.
The input image:
The output you have posted is what this code produces, because you are applying an ordinary filter to the image.
It may blur or sharpen the image a little, but it will not by itself give you a clean edge map.
In order to detect only the edges in the image you should apply a Laplacian filter along a certain direction.
https://www.l3harrisgeospatial.com/docs/LaplacianFilters.html#:~:text=A%20Laplacian%20filter%20is%20an,an%20edge%20or%20continuous%20progression. (a link with some info)
The Laplacian is a derivative of the image, so it responds only to changes in intensity.
I recommend trying this in the MATLAB Image Processing Toolbox first.
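Not part of the original answer, but for reference: if the goal is simply to apply the custom 3x3 kernel from the question, OpenCV's built-in cv::filter2D can do it directly (the Laplacian recommended above is likewise available as cv::Laplacian). A minimal sketch, reusing the file name from the question, is below; note the kernel is created as CV_32F, since it wraps a float array, and the filtered result is stored in a separate Mat that is then displayed:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("fapan.png", cv::IMREAD_GRAYSCALE);
    if (src.empty())
        return EXIT_FAILURE;

    float x[3][3] = { {-1, -1, -1},
                      {-1,  8, -1},
                      {-1, -1, -1} };
    cv::Mat kernel(3, 3, CV_32F, x);    // CV_32F matches the float initializer

    cv::Mat dst;
    cv::filter2D(src, dst, -1, kernel); // built-in filtering with the custom kernel

    cv::imshow("filter2D Demo", dst);   // show the result, not the untouched source
    cv::waitKey(0);
    return EXIT_SUCCESS;
}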

Splitting individual contour points into their HSV channels to perform additional operations

I am currently playing around with the idea of calculating an average HSV value for the points in a contour. I did some research and came across the split function, which allows a Mat image to be broken into its channels; however, the contour data type is a vector of points. Here is an example of the code.
findContours(detected_edges, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
vector<vector<Point>> ContourHsvChannels(3);
split(contours, ContourHsvChannels);
Basically the goal is to split each point of a contour into its HSV channels so I can perform operations on them. Any guidance would be appreciated.
You can simply draw the contours onto a blank image the same size as your original image to create a mask, and then use that to mask your image (in HSV or whatever colorspace you want). The mean() function takes in a mask parameter so that you only get the mean of the values highlighted by the mask.
If you also want the standard deviation you can use the meanStdDev() function, it also accepts a mask.
Here's an example in Python:
import cv2
import numpy as np
# read image, ensure binary
img = cv2.imread('fg.png', 0)
img[img>0] = 255
# find contours in the image
contours = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[1]
# create an array of blank images to draw contours on
n_contours = len(contours)
contour_imgs = [np.zeros_like(img) for i in range(n_contours)]
# draw each contour on a new image
for i in range(n_contours):
    cv2.drawContours(contour_imgs[i], contours, i, 255)
# color image of where the HSV values are coming from
color_img = cv2.imread('image.png')
hsv = cv2.cvtColor(color_img, cv2.COLOR_BGR2HSV)
# find the means and standard deviations of the HSV values for each contour
means = []
stddevs = []
for cnt_img in contour_imgs:
    mean, stddev = cv2.meanStdDev(hsv, mask=cnt_img)
    means.append(mean)
    stddevs.append(stddev)
print('First mean:')
print(means[0])
print('First stddev:')
print(stddevs[0])
First mean:
[[ 146.3908046 ]
[ 51.2183908 ]
[ 202.95402299]]
First stddev:
[[ 7.92835204]
[ 11.78682811]
[ 9.61549043]]
There are three values, one for each channel.
The other option is to just look up all the values directly: a contour is an array of points, so you can index the image with those points for each contour in your contour array, store the values in individual arrays, and then run meanStdDev() or mean() over those (and not bother with the mask). For example (again in Python, sorry about that):
# color image of where the HSV values are coming from
color_img = cv2.imread('image.png')
hsv = cv2.cvtColor(color_img, cv2.COLOR_BGR2HSV)
# read image, ensure binary
img = cv2.imread('fg.png', 0)
img[img>0] = 255
# find contours in the image
contours = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[1]
means = []
stddevs = []
for contour in contours:
    contour_colors = []
    n_points = len(contour)
    for point in contour:
        x, y = point[0]
        contour_colors.append(hsv[y, x])
    contour_colors = np.array(contour_colors).reshape(1, n_points, 3)
    mean, stddev = cv2.meanStdDev(contour_colors)
    means.append(mean)
    stddevs.append(stddev)
print('First mean:')
print(means[0])
print('First stddev:')
print(stddevs[0])
First mean:
[[ 146.3908046 ]
[ 51.2183908 ]
[ 202.95402299]]
First stddev:
[[ 7.92835204]
[ 11.78682811]
[ 9.61549043]]
So this gives the same values. In Python I just simply created blank lists for the means and standard deviations and appended to them. In C++ you can create a std::vector<cv::Vec3b> (assuming uint8 image, otherwise Vec3f or whatever is appropriate) for each. Then inside the loop I create another blank list to hold the colors for each contour; again this would be a std::vector<cv::Vec3b>, and then run the meanStdDev() on that vector in each loop, and append the value to the means and standard deviations vectors. You don't have to append, you can easily grab the number of contours and the number of points in each contour and preallocate for speed, and then just index into those vectors instead of appending.
In Python there's virtually no speed difference between either method. Of course there's better memory efficiency in the second example; instead of storing a whole blank Mat we just store a few of the values. However the backend OpenCV methods work really quickly for masking operations, so you'll have to test the speed difference yourself in C++ and see which way is better. As the number of contours increases I imagine the benefits of the second method increases. If you do time both approaches, please let us know your results!
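To make the first (mask-based) option concrete in C++, here is a rough, untested sketch of what the answer above describes, reusing the hypothetical file names from the Python example; it draws each contour as a mask and passes that mask to meanStdDev():
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat bin = cv::imread("fg.png", cv::IMREAD_GRAYSCALE); // binary image with the shapes
    cv::Mat color = cv::imread("image.png");                  // image the HSV values come from
    cv::Mat hsv;
    cv::cvtColor(color, hsv, cv::COLOR_BGR2HSV);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

    for (size_t i = 0; i < contours.size(); i++)
    {
        // draw the i-th contour on a blank mask, then restrict the statistics to it
        cv::Mat mask = cv::Mat::zeros(bin.size(), CV_8UC1);
        cv::drawContours(mask, contours, static_cast<int>(i), cv::Scalar(255));

        cv::Scalar mean, stddev;
        cv::meanStdDev(hsv, mean, stddev, mask);
        std::cout << "contour " << i << " mean: " << mean << " stddev: " << stddev << std::endl;
    }
    return 0;
}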
Here is the solution written in C++:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
#include <cmath>
using namespace cv;
using namespace std;
int main(int argc, char** argv) {
// Mat Declarations
// Mat img = imread("white.jpg");
// Mat src = imread("Rainbro.png");
Mat src = imread("multi.jpg");
// Mat src = imread("DarkRed.png");
Mat Hist;
Mat HSV;
Mat Edges;
Mat Grey;
vector<vector<Vec3b>> hueMEAN;
vector<vector<Point>> contours;
// Variables
int edgeThreshold = 1;
int const max_lowThreshold = 100;
int ratio = 3;
int kernel_size = 3;
int lowThreshold = 0;
// Windows
namedWindow("img", WINDOW_NORMAL);
namedWindow("HSV", WINDOW_AUTOSIZE);
namedWindow("Edges", WINDOW_AUTOSIZE);
namedWindow("contours", WINDOW_AUTOSIZE);
// Color Transforms
cvtColor(src, HSV, CV_BGR2HSV);
cvtColor(src, Grey, CV_BGR2GRAY);
// Perform Hist Equalization to help equalize Red hues so they stand out for
// better Edge Detection
equalizeHist(Grey, Grey);
// Image Transforms
blur(Grey, Edges, Size(3, 3));
Canny(Edges, Edges, max_lowThreshold, lowThreshold * ratio, kernel_size);
findContours(Edges, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
//Rainbro MAT
//Mat drawing = Mat::zeros(432, 700, CV_8UC1);
//Multi MAT
Mat drawing = Mat::zeros(630, 1200, CV_8UC1);
//Red variation Mat
//Mat drawing = Mat::zeros(600, 900, CV_8UC1);
vector <vector<Point>> ContourPoints;
/* This code for loops through all contours and assigns the value of the y coordinate as a parameter
for the row pointer in the HSV mat. The value vec3b pointer pointing to the pixel in the mat is accessed
and stored for any Hue value that is between 0-10 and 165-179 as Red only contours.*/
for (int i = 0; i < contours.size(); i++) {
vector<Vec3b> vf;
vector<Point> points;
bool isContourRed = false;
for (int j = 0; j < contours[i].size(); j++) {
//Row Y-Coordinate of Mat from Y-Coordinate of Contour
int MatRow = int(contours[i][j].y);
//Row X-Coordinate of Mat from X-Coordinate of Contour
int MatCol = int(contours[i][j].x);
Vec3b *HsvRow = HSV.ptr <Vec3b>(MatRow);
int h = int(HsvRow[int(MatCol)][0]);
int s = int(HsvRow[int(MatCol)][1]);
int v = int(HsvRow[int(MatCol)][2]);
cout << "Coordinate: ";
cout << contours[i][j].x;
cout << ",";
cout << contours[i][j].y << endl;
cout << "Hue: " << h << endl;
// Get contours that are only in the red spectrum Hue 0-10, 165-179
if ((h <= 10 || h >= 165 && h <= 180) && ((s > 0) && (v > 0))) {
cout << "Coordinate: ";
cout << contours[i][j].x;
cout << ",";
cout << contours[i][j].y << endl;
cout << "Hue: " << h << endl;
vf.push_back(Vec3b(h, s, v));
points.push_back(contours[i][j]);
isContourRed = true;
}
}
if (isContourRed == true) {
hueMEAN.push_back(vf);
ContourPoints.push_back(points);
}
}
drawContours(drawing, ContourPoints, -1, Scalar(255, 255, 255), 2, 8);
// Calculate Mean and STD for each Contour
cout << "contour Means & STD of Vec3b:" << endl;
for (int i = 0; i < hueMEAN.size(); i++) {
Scalar meanTemp = mean(hueMEAN.at(i));
Scalar sdTemp;
cout << i << ": " << endl;
cout << meanTemp << endl;
cout << " " << endl;
meanStdDev(hueMEAN.at(i), meanTemp, sdTemp);
cout << sdTemp << endl;
cout << " " << endl;
}
cout << "Actual Contours: " << contours.size() << endl;
cout << "# Contours: " << hueMEAN.size() << endl;
imshow("img", src);
imshow("HSV", HSV);
imshow("Edges", Edges);
imshow("contours", drawing);
waitKey(0);
return 0;
}

Using saturate_cast or not

This is a simple program to change the contrast and brightness of an image. I have noticed that there is another version of the program with one simple difference: saturate_cast is added to the code.
I don't understand the reason for doing this, and there seems to be no need to convert to unsigned char or uchar: both versions (with saturate_cast<uchar> and without it) output the same result. I would appreciate any help.
Here is the code:
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include "Source.h"
using namespace cv;
double alpha;
int beta;
int main(int, char** argv)
{
/// Read image given by user
Mat image = imread(argv[1]);
Mat image2 = Mat::zeros(image.size(), image.type());
/// Initialize values
std::cout << " Basic Linear Transforms " << std::endl;
std::cout << "-------------------------" << std::endl;
std::cout << "* Enter the alpha value [1.0-3.0]: ";std::cin >> alpha;
std::cout << "* Enter the beta value [0-100]: "; std::cin >> beta;
for (int x = 0; x < image.rows; x++)
{
for (int y = 0; y < image.cols; y++)
{
for (int c = 0; c < 3; c++)
{
image2.at<Vec3b>(x, y)[c] =
saturate_cast<uchar>(alpha*(image.at<Vec3b>(x, y)[c]) + beta);
}
}
}
/// Create Windows
namedWindow("Original Image", 1);
namedWindow("New Image", 1);
/// Show stuff
imshow("Original Image", image);
imshow("New Image", image2);
/// Wait until user press some key
waitKey();
return 0;
}
Since the result of your expression may go outside the valid range for uchar, i.e. [0,255], you'd better always use saturate_cast.
In your case, the result of the expression: alpha*(image.at<Vec3b>(x, y)[c]) + beta is a double, so it's safer to use saturate_cast<uchar> to clamp values correctly.
Also, this improves readability, since it's easy to see that you want a uchar out of an expression.
Without using saturate_cast you may have unexpected values:
uchar u1 = 257; // u1 = 1, why a very bright value is set to almost black?
uchar u2 = saturate_cast<uchar>(257); // u2 = 255, a very bright value is set to white
inline unsigned char saturate_cast_uchar(double val) {
    val += 0.5; // to round the value
    return static_cast<unsigned char>(val < 0 ? 0 : (val > 0xff ? 0xff : val));
}
If val lies between 0 and 255, this function returns the rounded value;
if val lies outside the range [0, 255], it returns the lower or upper boundary value.
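A small, self-contained illustration (not from the original answers) of the difference between a plain cast and saturate_cast when the brightness expression overflows; alpha, beta and the pixel value are arbitrary:
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    double alpha = 3.0; // arbitrary contrast gain
    int beta = 100;     // arbitrary brightness offset
    int pixel = 200;    // a bright input pixel

    double result = alpha * pixel + beta; // 700.0, outside [0, 255]

    // Plain conversion: 700 wraps modulo 256 down to 188 (a darker value).
    unsigned char plain = static_cast<unsigned char>(static_cast<int>(result));
    // Saturating conversion: clamps to the valid upper bound, 255.
    unsigned char clamped = cv::saturate_cast<uchar>(result);

    std::cout << "plain cast: " << (int)plain
              << ", saturate_cast: " << (int)clamped << std::endl;
    return 0;
}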

Number and character recognition using ANN OpenCV 3.1

I have implemented a neural network using the OpenCV ANN library. I am a newbie in this field and I have learned everything about it online (mostly on StackOverflow).
I am using this ANN for number plate detection. I did the segmentation part using the OpenCV image processing library and it is working well. It performs character segmentation and hands the characters to the NN part of the project, which is supposed to recognize the number plate.
I have sample images of 20x30 pixels, therefore I have 600 neurons in the input layer. As there are 36 possibilities (0-9, A-Z) I have 36 output neurons, and I kept 100 neurons in the hidden layer. The predict function of OpenCV is giving me the same output for every segmented image, and that output also contains some large negative values (< -1). I have used cv::ml::ANN_MLP::SIGMOID_SYM as the activation function.
Please excuse the large amount of commented-out code (I am doing trial and error).
I need to find out what the output of the predict function means. Thank you for your help.
#include <opencv2/opencv.hpp>
int inputLayerSize = 1;
int outputLayerSize = 1;
int numSamples = 2;
Mat layers = Mat(3, 1, CV_32S);
layers.row(0) =Scalar(600) ;
layers.row(1) = Scalar(20);
layers.row(2) = Scalar(36);
vector<int> layerSizes = { 600,100,36 };
Ptr<ml::ANN_MLP> nnPtr = ml::ANN_MLP::create();
vector <int> n;
//nnPtr->setLayerSizes(3);
nnPtr->setLayerSizes(layers);
nnPtr->setTrainMethod(ml::ANN_MLP::BACKPROP);
nnPtr->setTermCriteria(TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 1000, 0.00001f));
nnPtr->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM, 1, 1);
nnPtr->setBackpropWeightScale(0.5f);
nnPtr->setBackpropMomentumScale(0.5f);
/*CvANN_MLP_TrainParams params = CvANN_MLP_TrainParams(
// terminate the training after either 1000
// iterations or a very small change in the
// network wieghts below the specified value
cvTermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 1000, 0.000001),
// use backpropogation for training
CvANN_MLP_TrainParams::BACKPROP,
// co-efficents for backpropogation training
// (refer to manual)
0.1,
0.1);*/
/* Mat samples(Size(inputLayerSize, numSamples), CV_32F);
samples.at<float>(Point(0, 0)) = 0.1f;
samples.at<float>(Point(0, 1)) = 0.2f;
Mat responses(Size(outputLayerSize, numSamples), CV_32F);
responses.at<float>(Point(0, 0)) = 0.2f;
responses.at<float>(Point(0, 1)) = 0.4f;
*/
//reading chaos image
// we will read the classification numbers into this variable as though it is a vector
// close the traning images file
/*vector<int> layerInfo;
layerInfo=nnPtr->get;
for (int i = 0; i < layerInfo.size(); i++) {
cout << "size of 0" <<layerInfo[i] << endl;
}*/
cv::imshow("chaos", matTrainingImagesAsFlattenedFloats);
// cout <<abc << endl;
matTrainingImagesAsFlattenedFloats.convertTo(matTrainingImagesAsFlattenedFloats, CV_32F);
//matClassificationInts.reshape(1, 496);
matClassificationInts.convertTo(matClassificationInts, CV_32F);
matSamples.convertTo(matSamples, CV_32F);
std::cout << matClassificationInts.rows << " " << matClassificationInts.cols << " ";
std::cout << matTrainingImagesAsFlattenedFloats.rows << " " << matTrainingImagesAsFlattenedFloats.cols << " ";
std::cout << matSamples.rows << " " << matSamples.cols;
imshow("Samples", matSamples);
imshow("chaos", matTrainingImagesAsFlattenedFloats);
Ptr<ml::TrainData> trainData = ml::TrainData::create(matTrainingImagesAsFlattenedFloats, ml::SampleTypes::ROW_SAMPLE, matSamples);
nnPtr->train(trainData);
bool m = nnPtr->isTrained();
if (m)
std::cout << "training complete\n\n";
// cv::Mat matCurrentChar = Mat(cv::Size(matTrainingImagesAsFlattenedFloats.cols, matTrainingImagesAsFlattenedFloats.rows), CV_32F);
// cout << "samples:\n" << samples << endl;
//cout << "\nresponses:\n" << responses << endl;
/* if (!nnPtr->train(trainData))
return 1;*/
/* cout << "\nweights[0]:\n" << nnPtr->getWeights(0) << endl;
cout << "\nweights[1]:\n" << nnPtr->getWeights(1) << endl;
cout << "\nweights[2]:\n" << nnPtr->getWeights(2) << endl;
cout << "\nweights[3]:\n" << nnPtr->getWeights(3) << endl;*/
//predicting
std::vector <cv::String> filename;
cv::String folder = "./plate/";
cv::glob(folder, filename);
if (filename.empty()) { // if unable to open image
std::cout << "error: image not read from file\n\n"; // show error message on command line
return(0); // and exit program
}
String strFinalString;
for (int i = 0; i < filename.size(); i++) {
cv::Mat matTestingNumbers = cv::imread(filename[i]);
cv::Mat matGrayscale; //
cv::Mat matBlurred; // declare more image variables
cv::Mat matThresh; //
cv::Mat matThreshCopy;
cv::Mat matCanny;
//
cv::cvtColor(matTestingNumbers, matGrayscale, CV_BGR2GRAY); // convert to grayscale
matThresh = cv::Mat(cv::Size(matGrayscale.cols, matGrayscale.rows), CV_8UC1);
for (int i = 0; i < matGrayscale.cols; i++) {
for (int j = 0; j < matGrayscale.rows; j++) {
if (matGrayscale.at<uchar>(j, i) <= 130) {
matThresh.at<uchar>(j, i) = 255;
}
else {
matThresh.at<uchar>(j, i) = 0;
}
}
}
// blur
cv::GaussianBlur(matThresh, // input image
matBlurred, // output image
cv::Size(5, 5), // smoothing window width and height in pixels
0); // sigma value, determines how much the image will be blurred, zero makes function choose the sigma value
// filter image from grayscale to black and white
/* cv::adaptiveThreshold(matBlurred, // input image
matThresh, // output image
255, // make pixels that pass the threshold full white
cv::ADAPTIVE_THRESH_GAUSSIAN_C, // use gaussian rather than mean, seems to give better results
cv::THRESH_BINARY_INV, // invert so foreground will be white, background will be black
11, // size of a pixel neighborhood used to calculate threshold value
2); */ // constant subtracted from the mean or weighted mean
// cv::imshow("thresh" + std::to_string(i), matThresh);
matThreshCopy = matThresh.clone();
std::vector<std::vector<cv::Point> > ptContours; // declare a vector for the contours
std::vector<cv::Vec4i> v4iHierarchy;// make a copy of the thresh image, this in necessary b/c findContours modifies the image
cv::Canny(matBlurred, matCanny, 20, 40, 3);
/*std::vector<std::vector<cv::Point> > ptContours; // declare a vector for the contours
std::vector<cv::Vec4i> v4iHierarchy; // declare a vector for the hierarchy (we won't use this in this program but this may be helpful for reference)
cv::findContours(matThreshCopy, // input image, make sure to use a copy since the function will modify this image in the course of finding contours
ptContours, // output contours
v4iHierarchy, // output hierarchy
cv::RETR_EXTERNAL, // retrieve the outermost contours only
cv::CHAIN_APPROX_SIMPLE); // compress horizontal, vertical, and diagonal segments and leave only their end points
/*std::vector<std::vector<cv::Point> > contours_poly(ptContours.size());
std::vector<cv::Rect> boundRect(ptContours.size());
for (int i = 0; i < ptContours.size(); i++)
{
approxPolyDP(cv::Mat(ptContours[i]), contours_poly[i], 3, true);
boundRect[i] = cv::boundingRect(cv::Mat(contours_poly[i]));
}*/
/*for (int i = 0; i < ptContours.size(); i++) { // for each contour
ContourWithData contourWithData; // instantiate a contour with data object
contourWithData.ptContour = ptContours[i]; // assign contour to contour with data
contourWithData.boundingRect = cv::boundingRect(contourWithData.ptContour); // get the bounding rect
contourWithData.fltArea = cv::contourArea(contourWithData.ptContour); // calculate the contour area
allContoursWithData.push_back(contourWithData); // add contour with data object to list of all contours with data
}
for (int i = 0; i < allContoursWithData.size(); i++) { // for all contours
if (allContoursWithData[i].checkIfContourIsValid()) { // check if valid
validContoursWithData.push_back(allContoursWithData[i]); // if so, append to valid contour list
}
}
//sort contours from left to right
std::sort(validContoursWithData.begin(), validContoursWithData.end(), ContourWithData::sortByBoundingRectXPosition);
// std::string strFinalString; // declare final string, this will have the final number sequence by the end of the program
*/
/*for (int i = 0; i < validContoursWithData.size(); i++) { // for each contour
// draw a green rect around the current char
cv::rectangle(matTestingNumbers, // draw rectangle on original image
validContoursWithData[i].boundingRect, // rect to draw
cv::Scalar(0, 255, 0), // green
2); // thickness
cv::Mat matROI = matThresh(validContoursWithData[i].boundingRect); // get ROI image of bounding rect
cv::Mat matROIResized;
cv::resize(matROI, matROIResized, cv::Size(RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT)); // resize image, this will be more consistent for recognition and storage
*/
cv::Mat matROIFloat;
cv::resize(matThresh, matThresh, cv::Size(RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT));
matThresh.convertTo(matROIFloat, CV_32FC1, 1.0 / 255.0); // convert Mat to float, necessary for call to find_nearest
cv::Mat matROIFlattenedFloat = matROIFloat.reshape(1, 1);
cv::Point maxLoc = { 0,0 };
cv::Point minLoc;
cv::Mat output = cv::Mat(cv::Size(36, 1), CV_32F);
vector<float>output2;
// cv::Mat output2 = cv::Mat(cv::Size(36, 1), CV_32F);
nnPtr->predict(matROIFlattenedFloat, output2);
// float max = output.at<float>(0, 0);
int fo = 0;
float m = output2[0];
imshow("predicted input", matROIFlattenedFloat);
// float b = output.at<float>(0, 0);
// cout <<"\n output0,0:"<<b<<endl;
// minMaxLoc(output, 0, 0, &minLoc, &maxLoc, Mat());
// cout << "\noutput:\n" << maxLoc.x << endl;
for (int j = 1; j < 36; j++) {
float value =output2[j];
if (value > m) {
m = value;
fo = j;
}
}
float * p = 0;
p = &m;
cout << "j value in output " << fo << " Max value " << p << endl;
//imshow("output image" + to_string(i), output);
// cout << "\noutput:\n" << minLoc.x << endl;
//float fltCurrentChar = (float)maxLoc.x;
output.release();
m = 0;
fo = 0;
}
// strFinalString = strFinalString + char(int(fltCurrentChar)); // append current char to full string
// cv::imshow("Predict output", output);
/*cv::Point maxLoc = {0,0};
Mat output=Mat (cv::Size(matSamples.cols,matSamples.rows),CV_32F);
nnPtr->predict(matTrainingImagesAsFlattenedFloats, output);
minMaxLoc(output, 0, 0, 0, &maxLoc, 0);
cout << "\noutput:\n" << maxLoc.x << endl;*/
// getchar();
/*for (int i = 0; i < 10;i++) {
for (int j = 0; j < 36; j++) {
if (matCurrentChar.at<float>(i, j) >= 0.6) {
cout << " "<<j<<" ";
}
}
}*/
waitKey(0);
return(0);
}
void gen() {
std::string dir, filepath;
int num, imgArea, minArea;
int pos = 0;
bool f = true;
struct stat filestat;
cv::Mat imgTrainingNumbers;
cv::Mat imgGrayscale;
cv::Mat imgBlurred;
cv::Mat imgThresh;
cv::Mat imgThreshCopy;
cv::Mat matROIResized=cv::Mat (cv::Size(RESIZED_IMAGE_WIDTH,RESIZED_IMAGE_HEIGHT),CV_8UC1);
cv::Mat matROI;
std::vector <cv::String> filename;
std::vector<std::vector<cv::Point> > ptContours;
std::vector<cv::Vec4i> v4iHierarchy;
int count = 0, contoursCount = 0;
matSamples = cv::Mat(cv::Size(36, 496), CV_32FC1);
matTrainingImagesAsFlattenedFloats = cv::Mat(cv::Size(600, 496), CV_32FC1);
for (int j = 0; j <= 35; j++) {
int tmp = j;
cv::String folder = "./Training Data/" + std::to_string(tmp);
cv::glob(folder, filename);
for (int k = 0; k < filename.size(); k++) {
count++;
// If the file is a directory (or is in some way invalid) we'll skip it
// if (stat(filepath.c_str(), &filestat)) continue;
//if (S_ISDIR(filestat.st_mode)) continue;
imgTrainingNumbers = cv::imread(filename[k]);
imgArea = imgTrainingNumbers.cols*imgTrainingNumbers.rows;
// read in training numbers image
minArea = imgArea * 50 / 100;
if (imgTrainingNumbers.empty()) {
std::cout << "error: image not read from file\n\n";
//return(0);
}
cv::cvtColor(imgTrainingNumbers, imgGrayscale, CV_BGR2GRAY);
//cv::equalizeHist(imgGrayscale, imgGrayscale);
imgThresh = cv::Mat(cv::Size(imgGrayscale.cols, imgGrayscale.rows), CV_8UC1);
/*cv::adaptiveThreshold(imgGrayscale,
imgThresh,
255,
cv::ADAPTIVE_THRESH_GAUSSIAN_C,
cv::THRESH_BINARY_INV,
3,
0);
*/
for (int i = 0; i < imgGrayscale.cols; i++) {
for (int j = 0; j < imgGrayscale.rows; j++) {
if (imgGrayscale.at<uchar>(j, i) <= 130) {
imgThresh.at<uchar>(j, i) = 255;
}
else {
imgThresh.at<uchar>(j, i) = 0;
}
}
}
// cv::imshow("imgThresh"+std::to_string(count), imgThresh);
imgThreshCopy = imgThresh.clone();
cv::GaussianBlur(imgThreshCopy,
imgBlurred,
cv::Size(5, 5),
0);
cv::Mat imgCanny;
// cv::Canny(imgBlurred,imgCanny,20,40,3);
cv::findContours(imgBlurred,
ptContours,
v4iHierarchy,
cv::RETR_EXTERNAL,
cv::CHAIN_APPROX_SIMPLE);
for (int i = 0; i < ptContours.size(); i++) {
if (cv::contourArea(ptContours[i]) > MIN_CONTOUR_AREA) {
contoursCount++;
cv::Rect boundingRect = cv::boundingRect(ptContours[i]);
cv::rectangle(imgTrainingNumbers, boundingRect, cv::Scalar(0, 0, 255), 2); // draw red rectangle around each contour as we ask user for input
matROI = imgThreshCopy(boundingRect); // get ROI image of bounding rect
std::string path = "./" + std::to_string(contoursCount) + ".JPG";
cv::imwrite(path, matROI);
// cv::imshow("matROI" + std::to_string(count), matROI);
cv::resize(matROI, matROIResized, cv::Size(RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT)); // resize image, this will be more consistent for recognition and storage
std::cout << filename[k] << " " << contoursCount << "\n";
//cv::imshow("matROI", matROI);
//cv::imshow("matROIResized"+std::to_string(count), matROIResized);
// cv::imshow("imgTrainingNumbers" + std::to_string(contoursCount), imgTrainingNumbers);
int intChar;
if (j<10)
intChar = j + 48;
else {
intChar = j + 55;
}
/*if (intChar == 27) { // if esc key was pressed
return(0); // exit program
}*/
// if (std::find(intValidChars.begin(), intValidChars.end(), intChar) != intValidChars.end()) { // else if the char is in the list of chars we are looking for . . .
// append classification char to integer list of chars
cv::Mat matImageFloat;
matROIResized.convertTo(matImageFloat,CV_32FC1);// now add the training image (some conversion is necessary first) . . .
//matROIResized.convertTo(matImageFloat, CV_32FC1); // convert Mat to float
cv::Mat matImageFlattenedFloat = matImageFloat.reshape(1, 1);
//matTrainingImagesAsFlattenedFloats.push_back(matImageFlattenedFloat);// flatten
try {
//matTrainingImagesAsFlattenedFloats.push_back(matImageFlattenedFloat);
std::cout << matTrainingImagesAsFlattenedFloats.rows << " " << matTrainingImagesAsFlattenedFloats.cols;
//unsigned char* re;
int ii = 0; // Current column in training_mat
for (int i = 0; i<matImageFloat.rows; i++) {
for (int j = 0; j < matImageFloat.cols; j++) {
matTrainingImagesAsFlattenedFloats.at<float>(contoursCount-1, ii++) = matImageFloat.at<float>(i,j);
}
}
}
catch (std::exception &exc) {
f = false;
exc.what();
}
if (f) {
matClassificationInts.push_back((float)intChar);
matSamples.at<float>(contoursCount-1, j) = 1.0;
}
f = true;
// add to Mat as though it was a vector, this is necessary due to the
// data types that KNearest.train accepts
} // end if
//} // end if
} // end for
}//end i
}//end j
}
Output of predict function
Unfortunately, I don't have the necessary time to really review the code, but I can say off the top that to train a model that performs well for prediction with 36 classes, you will need several things:
A large number of good quality images. Ideally, you'd want thousands of images for each class. Of course, you can see somewhat decent results with less than that, but if you only have a few images per class, it's never going to be able to generalize adequately.
You need a model that is large and sophisticated enough to provide the necessary expressiveness to solve the problem. For a problem like this, a plain old multi-layer perceptron with one hidden layer with 100 units may not be enough. This is actually a problem that would benefit from using a Convolutional Neural Net (CNN) with a couple layers just to extract useful features first. But assuming you don't want to go down that path, you may at least want to tweak the size of your hidden layer.
To even get to a point where the training process converges, you will probably need to experiment and crucially, you need an effective way to test the accuracy of the ANN after each experiment. Ideally, you want to observe the loss as the training is proceeding, but I'm not sure whether that's possible using OpenCV's ML functionality. At a minimum, you should fully expect to have to play around with the various so-called "hyper-parameters" and run many experiments before you have a reasonable model.
Anyway, the most important thing is to make sure you have a solid mechanism for validating the accuracy of the model after training. If you aren't already doing so, set aside some images as a separate test set, and after each experiment, use the trained ANN to predict each test image to see the accuracy.
One final general note: what you're trying to do is complex. You will save yourself a huge number of headaches if you take the time early and often to refactor your code. No matter how many experiments you run, if there's some defect causing (for example) your training data to be fundamentally different in some way than your test data, you will never see good results.
Good luck!
EDIT: I should also point out that seeing the same result for every input image is a classic sign that training failed. Unfortunately, there are many reasons why that might happen and it will be very difficult for anyone to isolate that for you without some cleaner code and access to your image data.
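As a concrete (hypothetical) illustration of the validation advice above, one way to measure accuracy on a held-out set with OpenCV's ANN_MLP might look like the fragment below; the trained network and the test Mats are assumed to exist already:
// Sketch only: assumes `nnPtr` is a trained cv::Ptr<cv::ml::ANN_MLP>,
// `testSamples` is a CV_32F Mat with one flattened 20x30 image per row,
// and `testLabels` holds the true class index (0-35) for each row.
int correct = 0;
for (int r = 0; r < testSamples.rows; r++)
{
    cv::Mat response;
    nnPtr->predict(testSamples.row(r), response);   // response is a 1x36 row

    cv::Point maxLoc;
    cv::minMaxLoc(response, nullptr, nullptr, nullptr, &maxLoc);
    int predicted = maxLoc.x;                       // index of the strongest output neuron

    if (predicted == testLabels.at<int>(r, 0))
        correct++;
}
std::cout << "test accuracy: "
          << 100.0 * correct / testSamples.rows << "%" << std::endl;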
I have solved the issue of not getting a meaningful output from predict. The problem was that the input Mat used for training (i.e. matTrainingImagesAsFlattenedFloats) contained the value 255.0 for white pixels. This happened because I hadn't used convertTo() properly. You need to call convertTo(outputImage, CV_32FC1, 1.0 / 255.0); this scales all pixel values of 255.0 down to 1.0, and after that I am getting the correct output.
Thank you for all the help.
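For reference, the scaling call described above looks roughly like this (variable names taken from the gen() function in the question):
// Scale 8-bit pixel values (0-255) down to the 0.0-1.0 range before training/prediction.
matROIResized.convertTo(matImageFloat, CV_32FC1, 1.0 / 255.0);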
This is too broad to cover in one question, sorry for the bad news. I tried this over and over and couldn't find a solution. I recommend that you implement a simple AND, OR or XOR first, just to make sure that the learning part is working and that you get better results the more passes you do. I also suggest trying the hyperbolic tangent as the transfer function instead of the sigmoid. Good luck!
Here are some of my own posts that might help you:
Exact results as yours: HERE
Some codes: HERE
I don't like to say it, but several professors I met said backpropagation just doesn't work, and they had (and I have had) to implement their own methods of training the network.

Different Pixel Values in MATLAB and C++ with OpenCV

I see there are similar questions to this, but they don't quite answer what I am asking, so here is my question.
In C++ with OpenCV I run the code provided below and it returns an average pixel value of 6.32. However, when I open the image and use the mean function in MATLAB it returns an average pixel intensity of approximately 6.92. As you can see, I convert the OpenCV values to double to try to ease this issue, and I have found that OpenCV loads the image as a set of integers whereas MATLAB loads the image as decimal values that are approximately, but obviously not exactly, the same as those integers. So my question is, being new to coding, which is correct? I'm assuming MATLAB is returning more accurate values, and if that is the case I would like to know whether there is a way to load the images in the same fashion to avoid the discrepancy.
Thank you, code below:
Mat img = imread("Cells2.tif");
cv::cvtColor(img, img, CV_BGR2GRAY);
cv::imshow("stuff",img);
Mat dst;
if(img.channels() == 3)
{
img.convertTo(dst, CV_64FC1);
}
else if (img.channels() == 1)
{
img.convertTo(dst, CV_64FC1);
}
cv::imshow("output",dst/255);
int NumPixels = img.total();
double avg;
double c = 0;
double std;
for(int y = 0; y < dst.cols; y++)
{
for(int x = 0; x < dst.rows; x++)
{
c+=dst.at<double>(x,y)*255;
}
}
avg = c/NumPixels;
cout << "asfa = " << c << endl;
double deviation;
double var;
double z = 0;
double q;
//for(int a = 0; a<= img.cols; a++)
for(int y = 0; y< dst.cols; y++)
{
//for(int b = 0; b<= dst.rows; b++)
for(int x = 0; x< dst.rows; x++)
{
q=dst.at<double>(x,y);
deviation = q - avg;
z = z + pow(deviation,2);
//cout << "q = " << q << endl;
}
}
var = z/(NumPixels);
std = sqrt(var);
cv::Scalar avgPixel = cv::mean(dst);
cout << "Avg Value = " << avg << endl;
cout << "StdDev = " << std << endl;
cout << "AvgPixel =" << avgPixel;
cvWaitKey(0);
return 0;
}
According to your comment, the image seems to be stored with a 16-bit depth. MATLAB loads the TIFF image as is, while by default OpenCV will load images as 8-bit. This might explain the difference in precision that you are seeing.
Use the following to open the image in OpenCV:
cv::Mat img = cv::imread("file.tif", cv::IMREAD_ANYDEPTH|cv::IMREAD_ANYCOLOR);
In MATLAB, it's simply:
img = imread('file.tif');
Next you need to be aware of the data type you are working with. In OpenCV it's CV_16U, in MATLAB it's uint16. Therefore you need to convert types accordingly.
For example, in MATLAB:
img2 = double(img) ./ double(intmax('uint16'));
This would convert it to a double image with values in the range [0,1].
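The rough OpenCV-side equivalent of that normalization (a sketch, assuming the TIFF really is 16-bit as discussed above) would be:
cv::Mat img = cv::imread("file.tif", cv::IMREAD_ANYDEPTH | cv::IMREAD_ANYCOLOR);
cv::Mat img2;
// Scale the 16-bit range [0, 65535] into [0, 1], mirroring the MATLAB line above.
img.convertTo(img2, CV_64F, 1.0 / 65535.0);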
When you load the image, you must use similar methods in both environments (MATLAB and OpenCV) to avoid possible conversions which may be done by default in either environment.
You are converting the image only if certain conditions are met, and this can change some color values, while MATLAB may choose not to convert the image and instead work with the raw data.
Colors are mostly represented in hex format, with popular implementations using the layout 0xAARRGGBB or 0xRRGGBBAA, so a 32-bit integer per pixel is enough (unsigned or signed doesn't matter, the hex value is the same). Create a 64-bit accumulator, add all the 32-bit values together and then divide by the number of pixels. This gives a reasonably accurate result for images up to 16384 by 16384 pixels (with one 32-bit value per pixel); for larger images a 64-bit integer accumulator is no longer enough.
long long total = 0;
long long divisor = image.width * image.height;
for (int x = 0; x < image.width; ++x)
{
    for (int y = 0; y < image.height; ++y)
    {
        total += image.at(x, y).color; // pseudocode: the packed 32-bit color of the pixel
    }
}
double avg = static_cast<double>(total) / divisor;
std::cout << "Average color value: " << avg << std::endl;
Not sure what difficulty you are having with mean value in Matlab versus OpenCV. If I understand your question correctly, your goal is to implement Matlab's mean(image(:)) in OpenCV. For example in Matlab you do the following:
>> image = imread('sheep.jpg')
>> avg = mean(image(:))
ans =
119.8210
Here's how you do the same in OpenCV:
Mat image = imread("sheep.jpg");
Scalar avg_pixel;
avg_pixel = mean(image);
float avg = 0;
cout << "mean pixel (RGB): " << avg_pixel << endl;
for (int i = 0; i < image.channels(); ++i) {
    avg = avg + avg_pixel[i];
}
avg = avg / image.channels();
cout << "mean, that's equivalent to mean(image(:)) in Matlab: " << avg << endl;
OpenCV console output:
mean pixel (RGB): [77.4377, 154.43, 127.596, 0]
mean, that's equivalent to mean(image(:)) in Matlab: 119.821
So the results are the same in Matlab and OpenCV.
Follow up
Found some problems in your code.
OpenCV stores data differently from Matlab. Look at this answer for a rough explanation on how to access a pixel in OpenCV. For example:
// NOT a correct way to access a pixel in a CV_64FC3 image
double pixel = image.at<double>(x, y);
// The correct way (the pixel value is stored in a vector)
// Note that Vec3d is defined as: typedef Vec<double, 3> Vec3d;
Vec3d pixel = image.at<Vec3d>(x, y);
Another error I found
if (img.channels() == 3)
{
    img.convertTo(dst, CV_64FC1); // should be CV_64FC3, instead of CV_64FC1
}
Accessing Mat elements may be confusing. I suggest getting a book on OpenCV to get started, for example this one, and reading the OpenCV tutorials and documentation. Hope this helps.
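Putting those two fixes together, a hedged sketch of how the averaging loop from the question could look for a 3-channel double image (variable names follow the question's code; this reproduces mean(image(:)) by averaging the channel values of every pixel):
// Assumes the conversion was done with img.convertTo(dst, CV_64FC3).
double sum = 0.0;
for (int y = 0; y < dst.rows; y++)
{
    for (int x = 0; x < dst.cols; x++)
    {
        cv::Vec3d pixel = dst.at<cv::Vec3d>(y, x); // row index comes first
        sum += (pixel[0] + pixel[1] + pixel[2]) / 3.0;
    }
}
double avg = sum / dst.total();
std::cout << "Average pixel value: " << avg << std::endl;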