Segmentation error when trying to equalize an image - c++

I've been reading about OpenCV and working through some exercises. In this case I want to perform an image equalization. I have implemented the following code, but when I execute it I get this error:
"Segmentation fault (core dumped)"
I have no idea what causes it.
The formula I am trying to use is the following:
eqIm(x, y) = image(x, y) + m - green(x, y), where m is the mean intensity of image.
The code is the following:
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <stdio.h>

using namespace cv;
using namespace std;

void equalization(cv::Mat &image, cv::Mat &green, int m) {
    Mat eqIm;
    int nl = image.rows;                    // number of lines
    int nc = image.cols * image.channels();
    for (int j = 0; j < nl; j++) {
        uchar* data  = image.ptr<uchar>(j);
        uchar* data2 = green.ptr<uchar>(j);
        uchar* eqIm  = green.ptr<uchar>(j);
        for (int i = 0; i < nc; i++) {
            eqIm[i] = data[i] + m - data2[i];
        }
    }
    cv::imshow("Image", eqIm);
    imwrite("eqIm.png", eqIm);
}
float mean(cv::Mat &image){
    cv:Scalar tempVal = mean( image );
    float myMAtMean = tempVal.val[0];
    cout << "The value is " << myMAtMean;
}
int main(int argc, char** argv ){
    Mat dst;
    Mat image = cv::imread("img.jpg");
    Mat green = cv::imread("green.jpg");
    cv::imshow("Image", image);
    float m = mean(image);
    equalization(image, green, m);
    cv::namedWindow("Image");
    cv::imshow("Image", image);
    imwrite("equalizated.png", dst);
    waitKey(0);
    return 0;
}
Also, the image "equalizated.png" that is written contains nothing.

You never initialized Mat eqIm, so when you do cv::imshow("Image", eqIm); and
imwrite("eqIm.png", eqIm); there is nothing in the Mat. See https://docs.opencv.org/2.4/doc/tutorials/core/mat_the_basic_image_container/mat_the_basic_image_container.html
Also, note that you have two variables named eqIm: the empty Mat at function scope and the uchar* inside the loop. That shadowing may be part of the confusion.
One last thing: in your mean function you may end up with infinite recursion, because the unqualified call mean(image) resolves to your own function rather than cv::mean (and your function never returns a value). You should specify which mean function you are using inside the one you create, i.e.
float mean(cv::Mat &image) {
    cv::Scalar tempVal = cv::mean(image);
    float myMAtMean = tempVal.val[0];
    cout << "The value is " << myMAtMean;
    return myMAtMean;
}
The following is something closer to what you are looking for in your equalization function.
void equalization(cv::Mat &image, cv::Mat &green, int m) {
    Mat eqIm(image.rows, image.cols, image.type());
    int nl = image.rows;                    // number of lines
    int nc = image.cols * image.channels();
    for (int j = 0; j < nl; j++) {          // j is each row
        for (int ec = 0; ec < nc; ec++) {   // ec is each col and channel
            eqIm.data[j*image.cols*image.channels() + ec] =
                image.data[j*image.cols*image.channels() + ec] + m
                - green.data[j*image.cols*image.channels() + ec];
        }
    }
    cv::imshow("Image", eqIm);
    imwrite("eqIm.png", eqIm);
}
I use j*image.cols*image.channels() to step over j full rows (each row being the number of columns times the number of channels per pixel).
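For reference, the same computation can also be written with row pointers, which works even when the Mat is not continuous. This is only a sketch (equalizationSafe is an illustrative name); it adds cv::saturate_cast because data[i] + m - data2[i] can fall outside the 0..255 range of a uchar and would otherwise wrap around:
void equalizationSafe(const cv::Mat &image, const cv::Mat &green, int m, cv::Mat &eqIm)
{
    CV_Assert(image.size() == green.size() && image.type() == green.type());
    eqIm.create(image.rows, image.cols, image.type());
    int nc = image.cols * image.channels();
    for (int j = 0; j < image.rows; j++) {
        const uchar* data  = image.ptr<uchar>(j);  // input row
        const uchar* data2 = green.ptr<uchar>(j);  // reference row
        uchar* out         = eqIm.ptr<uchar>(j);   // output row
        for (int i = 0; i < nc; i++) {
            out[i] = cv::saturate_cast<uchar>(data[i] + m - data2[i]); // clamp to [0, 255]
        }
    }
}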

Related

OpenCV: can't get segmentation of image using k-means

Before going deep into my question, I want you to know that I've read other posts on this forum, but none of them addresses my problem.
In particular, the post here answers the question "how to do this?" with k-means, while I already know that I have to use it, and I'd like to know why my implementation doesn't work.
I want to use the k-means algorithm to divide the pixels of an input image into clusters according to their color. Then, after completing that task, I want each pixel to have the color of the center of the cluster it has been assigned to.
Taking the OpenCV examples and other material found on the web as a reference, I've designed the following code:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main( int argc, char** argv )
{
Mat src = imread( argv[1], 1 );
// reshape matrix
Mat resized(src.rows*src.cols, 3, CV_8U);
int row_counter = 0;
for(int i = 0; i<src.rows; i++)
{
for(int j = 0; j<src.cols; j++)
{
Vec3b channels = src.at<Vec3b>(i,j);
resized.at<char>(row_counter,0) = channels(0);
resized.at<char>(row_counter,1) = channels(1);
resized.at<char>(row_counter,2) = channels(2);
row_counter++;
}
}
//cout << src << endl;
// change data type
resized.convertTo(resized, CV_32F);
// determine termination criteria and number of clusters
TermCriteria criteria(TermCriteria::COUNT + TermCriteria::EPS, 10, 1.0);
int K = 8;
// apply k-means
Mat labels, centers;
double compactness = kmeans(resized, K, labels, criteria, 10, KMEANS_RANDOM_CENTERS, centers);
// change data type in centers
centers.convertTo(centers, CV_8U);
// create output matrix
Mat result = Mat::zeros(src.rows, src.cols, CV_8UC3);
row_counter = 0;
int matrix_row_counter = 0;
while(row_counter < result.rows)
{
for(int z = 0; z<result.cols; z++)
{
int index = labels.at<char>(row_counter+z, 0);
//cout << index << endl;
Vec3b center_channels(centers.at<char>(index,0),centers.at<char>(index,1), centers.at<char>(index,2));
result.at<Vec3b>(matrix_row_counter, z) = center_channels;
}
row_counter += result.cols;
matrix_row_counter++;
}
cout << "Labels " << labels.rows << " " << labels.cols << endl;
//cvtColor( src, gray, CV_BGR2GRAY );
//gray.convertTo(gray, CV_32F);
imshow("Result", result);
waitKey(0);
return 0;
}
Anyway, at the end of the computation, I simply get a black image.
Do you know why?
Strangely, if I initialize the result matrix as
Mat result(src.size(), src.type())
at the end of the algorithm it displays exactly the input image, without any segmentation.
In particular, I have two doubts:
1) Is it correct to lay the RGB values of a pixel on each row of the matrix resized the way I've done it? Is there a way to do it without a loop?
2) What exactly is the content of centers after the kmeans function finishes working? It's a 3-column matrix; does it contain the RGB values of the clusters' centers?
Thanks for the support.
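As a side note on doubt 1: OpenCV's Mat::reshape can build the N x 3 sample matrix without an explicit loop. A minimal sketch (assuming src is the continuous CV_8UC3 image loaded above; toSamples is just an illustrative name):
#include <opencv2/opencv.hpp>
using namespace cv;

// View the HxW 3-channel image as an (H*W) x 3 single-channel matrix,
// then convert to the float type kmeans expects. imread output is
// continuous, so reshape only creates a new header over the same data.
Mat toSamples(const Mat& src)
{
    Mat samples = src.reshape(1, src.rows * src.cols); // 1 channel, H*W rows, 3 cols
    samples.convertTo(samples, CV_32F);
    return samples;
}
And on doubt 2: centers comes back as a K x 3 CV_32F matrix whose i-th row holds the color of cluster i, in the channel order of the input (BGR here, since imread loads BGR).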
- The OpenCV program posted below assigns a user-preferred color to a particular pixel value in an image.
- ScanImageAndReduceC() is a method from the OpenCV tutorials for scanning through all the pixels of an image.
- I.at<uchar>(10, 10) = 255; is used to access a particular pixel value of an image.
Here is the code:
Mat& ScanImageAndReduceC(Mat& I)
{
    // accept only char type matrices
    CV_Assert(I.depth() == CV_8U);

    int channels = I.channels();
    int nRows = I.rows;
    int nCols = I.cols * channels;

    if (I.isContinuous())
    {
        nCols *= nRows;
        nRows = 1;
    }

    int i, j;
    uchar* p;
    for (i = 0; i < nRows; ++i)
    {
        p = I.ptr<uchar>(i);
        for (j = 0; j < nCols; ++j)
        {
            I.at<uchar>(10, 10) = 255;
        }
    }
    return I;
}
-------Main Program-------
Calling the above method in our main program:
    diff = ScanImageAndReduceC(diff);
    namedWindow("Difference", WINDOW_AUTOSIZE); // Create a window for display.
    imshow("Difference", diff);                 // Show our image inside it.
    waitKey(0);                                 // Wait for a keystroke in the window
    return 0;
}

How to access a particular kmeans cluster in opencv

I am new to OpenCV, and I am trying to find and save the largest cluster of a k-means clustered image. I have:
- clustered the image following the method provided by Mercury and Bill the Lizard in the following post (Color classification with k-means in OpenCV),
- determined the largest cluster by finding the largest label count from the kmeans output (bestLables),
- tried to store the positions of the pixels that constitute the largest cluster in an array of Point2i.
However, the mystery is that I end up with a number of stored points that is significantly less than the count obtained when finding the largest cluster. In other words: inc < max. Moreover, the number given by inc does not correspond to any other cluster's point count either.
What did I do wrong? Or is there a better way to do what I'm trying to do? Any input will be much appreciated.
Thanks in advance for your precious help!!
#include <iostream>
#include "opencv2/opencv.hpp"
#include <opencv2/highgui/highgui.hpp>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    Mat img = imread("pic.jpg", CV_LOAD_IMAGE_COLOR);
    if (!img.data)
    {
        cout << "Could not open or find the image" << std::endl;
        return -1;
    }
    //imshow("img", img);
    Mat imlab;
    cvtColor(img, imlab, CV_BGR2Lab);

    /* Cluster image */
    vector<cv::Mat> imgRGB;
    int k = 5;
    int n = img.rows * img.cols;
    Mat img3xN(n, 3, CV_8U);
    split(imlab, imgRGB);
    for (int i = 0; i != 3; ++i)
        imgRGB[i].reshape(1, n).copyTo(img3xN.col(i));
    img3xN.convertTo(img3xN, CV_32F);

    Mat bestLables;
    kmeans(img3xN, k, bestLables, cv::TermCriteria(), 10, cv::KMEANS_RANDOM_CENTERS);
    /*bestLables = bestLables.reshape(0, img.rows);
    cv::convertScaleAbs(bestLables, bestLables, int(255/k));
    cv::imshow("result", bestLables);*/

    /* Find the largest cluster */
    int max = 0, indx = 0, id = 0;
    int clusters[5];
    for (int i = 0; i < bestLables.rows; i++)
    {
        id = bestLables.at<int>(i, 0);
        clusters[id]++;
        if (clusters[id] > max)
        {
            max = clusters[id];
            indx = id;
        }
    }

    /* save largest cluster */
    int cluster = 1, inc = 0;
    Point2i shape[2000];
    for (int y = 0; y < imlab.rows; y++)
    {
        for (int x = 0; x < imlab.cols; x++)
        {
            if (bestLables.data[y + x*imlab.cols] == cluster) shape[inc++] = { y, x };
        }
    }

    waitKey(0);
    return 0;
}
You are pretty close, but there are a few errors. The code below should work as expected. I also added a small piece of code to show the classification result, where pixels of the largest cluster are red and the others are shades of green.
You never initialized int clusters[5];, so it will contain random values at the beginning, compromising it as an accumulator. I recommend using vector<int> instead.
You access bestLables with wrong indices. Instead of bestLables.data[y + x*imlab.cols], it should be bestLables.data[y*imlab.cols + x]. That caused your inc < max issue. In the code below I used a vector<int> to contain indices, since it's easier to see the content of the vector. So I access bestLables a little differently, i.e. bestLables[y*imlab.cols + x] instead of bestLables.data[y*imlab.cols + x], but the result is the same.
You had Point2i shape[2000];. I used a vector<Point>. Note that Point is just a typedef of Point2i. Since you don't know how many points there will be, it's better to use a dynamic array. If you know that there will be, say, 2000 points, you'd better call reserve to avoid reallocations, but that's not mandatory. With Point2i shape[2000]; you'll go out of bounds if you have more than 2000 points; with a vector you're safe. I used emplace_back to avoid a copy when appending the point (just like you did with the initializer list). Note that the constructor of Point is (x,y), not (y,x).
Using vector<Point> you don't need inc, since you append the value at the end. If you need inc to store the number of points in the largest cluster, simply call int inc = shape.size();
You initialized int cluster = 1. That's an error; you should initialize it with the index of the largest cluster, i.e. int cluster = indx;.
You called the vector of planes imgRGB, but you're working on Lab. You'd better change the name, but it's not an issue per se. Also, remember that color values are stored in OpenCV as BGR, not RGB (reversed order).
I prefer Mat1b, Mat3b, etc... where possible over Mat. It allows easier access and is more readable (in my opinion). That's not an issue, but you'll see that in my code.
Here we go:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    Mat3b img = imread("path_to_image");
    if (!img.data)
    {
        std::cout << "Could not open or find the image" << std::endl;
        return -1;
    }

    Mat3b imlab;
    cvtColor(img, imlab, CV_BGR2Lab);

    /* Cluster image */
    vector<cv::Mat1b> imgRGB; // single-channel planes: split outputs one Mat per channel
    int k = 5;
    int n = img.rows * img.cols;
    Mat img3xN(n, 3, CV_8U);
    split(imlab, imgRGB);
    for (int i = 0; i != 3; ++i)
        imgRGB[i].reshape(1, n).copyTo(img3xN.col(i));
    img3xN.convertTo(img3xN, CV_32F);

    vector<int> bestLables;
    kmeans(img3xN, k, bestLables, cv::TermCriteria(), 10, cv::KMEANS_RANDOM_CENTERS);

    /* Find the largest cluster */
    int max = 0, indx = 0, id = 0;
    vector<int> clusters(k, 0);
    for (size_t i = 0; i < bestLables.size(); i++)
    {
        id = bestLables[i];
        clusters[id]++;
        if (clusters[id] > max)
        {
            max = clusters[id];
            indx = id;
        }
    }

    /* Save largest cluster */
    int cluster = indx;
    vector<Point> shape;
    shape.reserve(2000);
    for (int y = 0; y < imlab.rows; y++)
    {
        for (int x = 0; x < imlab.cols; x++)
        {
            if (bestLables[y*imlab.cols + x] == cluster)
            {
                shape.emplace_back(x, y);
            }
        }
    }
    int inc = (int)shape.size();

    // Show results: largest cluster in red, the others in shades of green
    Mat3b res(img.size(), Vec3b(0, 0, 0));
    vector<Vec3b> colors;
    for (int i = 0; i < k; ++i)
    {
        if (i == indx) {
            colors.push_back(Vec3b(0, 0, 255));
        } else {
            colors.push_back(Vec3b(0, 255 / (i + 1), 0));
        }
    }
    for (int r = 0; r < img.rows; ++r)
    {
        for (int c = 0; c < img.cols; ++c)
        {
            res(r, c) = colors[bestLables[r*imlab.cols + c]];
        }
    }

    imshow("Clustering", res);
    waitKey(0);
    return 0;
}

Error. Expression must have a class type

I'm trying to pass a Mat to a function but I'm getting some errors when I try to get the float data of the image. Can someone enlighten me on what's wrong?
int _tmain(int argc, _TCHAR* argv[])
{
    cv::Mat img; // gradients from fingerprint image
    cv::Mat dst;
    bh2Rad(&img, &dst);
}

void bh2Rad(Mat* srcMat, cv::Mat* dstMat)
{
    for (int i = 0; i < srcMat->rows; i++)
    {
        float* srcP = srcMat->data.fl + srcMat->width * i; // srcMat error
        float* dstP = dstMat->data.fl + dstMat->width * i; // dstMat error
        for (int j = 0; j < srcMat->cols; j++)
            dstP[j] = srcP[j] * BH_DEG_TO_RAD;
    }
}
You seem to be confusing the older (C API) CvMat with cv::Mat in the pixel operations.
Also, a grayscale image is uchar, not float, and you can't access its pixels in an arbitrary format (unless you convertTo() a float Mat first).
// please use references with cv::Mat, not pointers.
// those things are refcounted; you're thrashing that by passing pointers.
void bh2Rad(const cv::Mat & srcMat, cv::Mat & dstMat); // forward declaration so main compiles

int main(int argc, char* argv[])
{
    cv::Mat img = cv::imread("original.bmp", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat dst;
    bh2Rad(img, dst);
}

void bh2Rad(const cv::Mat & srcMat, cv::Mat & dstMat)
{
    dstMat.create(srcMat.size(), srcMat.type());
    for (int i = 0; i < srcMat.rows; i++)
    {
        const uchar* srcP = srcMat.ptr<uchar>(i);
        uchar* dstP = dstMat.ptr<uchar>(i);
        for (int j = 0; j < srcMat.cols; j++)
            dstP[j] = srcP[j] * BH_DEG_TO_RAD;
    }
}
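Following the note about convertTo() above: if you really need float pixels (as the question's data.fl suggests), a minimal sketch of that variant could look like this (my illustration; the original code presumably defines BH_DEG_TO_RAD elsewhere, so the value below is an assumption):
#include <opencv2/opencv.hpp>

// Assumed here for illustration; the original project defines this constant elsewhere.
static const float BH_DEG_TO_RAD = (float)(CV_PI / 180.0);

// Float variant: convert the uchar input to CV_32F first, then scale each element.
void bh2RadF(const cv::Mat & srcMat, cv::Mat & dstMat)
{
    cv::Mat srcF;
    srcMat.convertTo(srcF, CV_32F);     // uchar -> float
    dstMat.create(srcF.size(), CV_32F);
    for (int i = 0; i < srcF.rows; i++)
    {
        const float* srcP = srcF.ptr<float>(i);
        float* dstP = dstMat.ptr<float>(i);
        for (int j = 0; j < srcF.cols; j++)
            dstP[j] = srcP[j] * BH_DEG_TO_RAD;
    }
}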
The error marks the only instance where you didn't qualify Mat with the namespace cv. I assume you don't have a using-directive for namespace cv; therefore the type Mat, which is declared only in cv, is unknown and not recognized.
void bh2Rad(cv::Mat* srcMat, cv::Mat* dstMat)
(note the cv:: directly after the opening parenthesis).

OpenCV: Find all non-zero coordinates of a binary Mat image

I'm attempting to find the non-zero (x,y) coordinates of a binary image.
I've found a few references to the function countNonZero(), which only counts the non-zero elements, and findNonZero(), which I'm unsure how to access or use since it seems to have been removed from the documentation completely.
This is the closest reference I found, but it is still not helpful. I would appreciate any specific help.
Edit:
- To clarify, this is using C++.
Here is an explanation of how findNonZero() saves non-zero elements. The following code should be useful for accessing the non-zero coordinates of your binary image. Method 1 uses OpenCV's findNonZero(), and Method 2 checks every pixel to find the non-zero (positive) ones.
Method 1:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace std;
using namespace cv;

int main(int argc, char** argv) {
    // load as a single-channel image; findNonZero expects one channel
    Mat img = imread("binary image", CV_LOAD_IMAGE_GRAYSCALE);
    Mat nonZeroCoordinates;
    findNonZero(img, nonZeroCoordinates);
    for (int i = 0; i < (int)nonZeroCoordinates.total(); i++) {
        cout << "NonZero#" << i << ": " << nonZeroCoordinates.at<Point>(i).x << ", " << nonZeroCoordinates.at<Point>(i).y << endl;
    }
    return 0;
}
Method 2:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace std;
using namespace cv;

int main(int argc, char** argv) {
    // load as a single-channel image, matching the at<uchar> access below
    Mat img = imread("binary image", CV_LOAD_IMAGE_GRAYSCALE);
    for (int i = 0; i < img.cols; i++) {
        for (int j = 0; j < img.rows; j++) {
            if (img.at<uchar>(j, i) > 0) {
                cout << i << ", " << j << endl; // Do your operations
            }
        }
    }
    return 0;
}
There is the following source code that was supplied for OpenCV 2.4.3, which may be helpful:
#include <opencv2/core/core.hpp>
#include <vector>

/*! @brief find non-zero elements in a Matrix
 *
 * Given a binary matrix (likely returned from a comparison
 * operation such as compare(), >, ==, etc.), return all of
 * the non-zero indices as a std::vector<cv::Point> (x,y)
 *
 * This function aims to replicate the functionality of
 * Matlab's command of the same name
 *
 * Example:
 * \code
 * // find the edges in an image
 * Mat edges, thresh;
 * sobel(image, edges);
 * // threshold the edges
 * thresh = edges > 0.1;
 * // find the non-zero components so we can do something useful with them later
 * vector<Point> idx;
 * find(thresh, idx);
 * \endcode
 *
 * @param binary the input image (type CV_8UC1)
 * @param idx the output vector of Points corresponding to non-zero indices in the input
 */
void find(const cv::Mat& binary, std::vector<cv::Point>& idx) {
    assert(binary.cols > 0 && binary.rows > 0 && binary.channels() == 1 && binary.depth() == CV_8U);
    const int M = binary.rows;
    const int N = binary.cols;
    for (int m = 0; m < M; ++m) {
        const char* bin_ptr = binary.ptr<char>(m);
        for (int n = 0; n < N; ++n) {
            if (bin_ptr[n] > 0) idx.push_back(cv::Point(n, m));
        }
    }
}
Note - it looks like the function signature was wrong so I've changed the output vector to pass-by-reference.
You can find it without using OpenCV's findNonZero() method; rather, you can get it simply with two for loops. Here is the snippet (note that it uses the OpenCV Java API, not C++). Hope it can help you.
for (int i = 0; i < image.rows(); i++) {      // image: the binary image
    for (int j = 0; j < image.cols(); j++) {
        double[] returned = image.get(i, j);
        int value = (int) returned[0];
        if (value == 255) {
            // prints the (x, y) coordinates of all white pixels
            System.out.println("x: " + i + "\ty: " + j);
        }
    }
}
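Since the question asks about C++, a rough C++ equivalent of that Java snippet could look like this (a sketch, assuming image is a single-channel 8-bit binary Mat; printWhitePixels is an illustrative name):
#include <opencv2/opencv.hpp>
#include <iostream>

// Print the coordinates of all white (255) pixels in a CV_8UC1 binary image.
void printWhitePixels(const cv::Mat& image)
{
    CV_Assert(image.type() == CV_8UC1);
    for (int i = 0; i < image.rows; i++) {
        for (int j = 0; j < image.cols; j++) {
            if (image.at<uchar>(i, j) == 255) {
                std::cout << "x: " << i << "\ty: " << j << std::endl;
            }
        }
    }
}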