Get coordinates of white pixels (OpenCV) - C++

In OpenCV (C++) I have a b&w image where some shapes appear filled with white (255). Knowing this, how can I get the coordinate points in the image where these objects are? I'm interested in getting all the white pixel coordinates.
Is there a cleaner way than this?
std::vector<int> coordinates_white; // will temporarily store the coordinates where "white" is found
for (int i = 0; i < img_size.height; i++) {
    for (int j = 0; j < img_size.width; j++) {
        if (img_tmp.at<uchar>(i,j) > 250) { // 8-bit image, so read uchar, not int
            coordinates_white.push_back(i);
            coordinates_white.push_back(j);
        }
    }
}
// copy the coordinates into a matrix where each row holds a (row, col) pair;
// note: this Mat shares the vector's buffer, so clone() it if it must outlive the vector
cv::Mat coordinates = cv::Mat(coordinates_white.size()/2, 2, CV_32S, &coordinates_white.front());

There is a built-in function to do that: cv::findNonZero.
It returns the list of locations of non-zero pixels.
Given a binary matrix (likely returned from an operation such as cv::threshold(), cv::compare(), >, ==, etc.), it returns all of the non-zero indices as a cv::Mat or std::vector<cv::Point>.
For example:
cv::Mat binaryImage; // input, binary image
cv::Mat locations;   // output, locations of non-zero pixels
cv::findNonZero(binaryImage, locations);
// access pixel coordinates
cv::Point pnt = locations.at<cv::Point>(i);
or
cv::Mat binaryImage;              // input, binary image
std::vector<cv::Point> locations; // output, locations of non-zero pixels
cv::findNonZero(binaryImage, locations);
// access pixel coordinates
cv::Point pnt = locations[i];
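Note that cv::findNonZero reports each location as a cv::Point, i.e. (x, y) = (column, row), which is the reverse of the (row, column) order collected by the manual loop in the question.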

You can also use this method to get the white pixels (note this snippet uses the OpenCV Java API); I hope it helps.
for (int i = 0; i < image.rows(); i++) {      // image: the binary image
    for (int j = 0; j < image.cols(); j++) {
        double[] returned = image.get(i, j);
        int value = (int) returned[0];
        if (value == 255) {
            System.out.println("x: " + j + "\ty: " + i); // (x,y) = (col,row) coordinates
        }
    }
}
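For reference, the same loop with the OpenCV C++ API would look roughly like this (a minimal sketch, assuming image is a CV_8UC1 binary cv::Mat):
// C++ sketch of the loop above, assuming a CV_8UC1 binary image
for (int i = 0; i < image.rows; i++) {       // i = row = y
    for (int j = 0; j < image.cols; j++) {   // j = column = x
        if (image.at<uchar>(i, j) == 255) {
            std::cout << "x: " << j << "\ty: " << i << std::endl;
        }
    }
}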

Related

OpenCV: output image is blue

I'm working on a project where I reflect an image in OpenCV (without using the flip function), and the only remaining problem (I think) is that the image that is supposed to come out reflected comes out all blue.
The code I have (I took out the usual part; the problem should be around here):
Mat imageReflectionFinal = Mat::zeros(Size(220,220), CV_8UC3);
for (unsigned int r = 0; r < 221; r++)
    for (unsigned int c = 0; c < 221; c++) {
        Vec3b intensity = image.at<Vec3b>(r, c);
        imageReflectionFinal.at<Vec3b>(r, c) = (uchar)(c, -r + (220)/2);
    }
///displays images
imshow( "Original Image", image );
imshow("Reflected Image", imageReflectionFinal);
waitKey(0);
return 0;
}
There are some problems with your code. As pointed out, your iteration variables go beyond the actual image dimensions. Do not use hardcoded bounds; use inputImage.cols and inputImage.rows instead to obtain the image dimensions.
There's also a variable (a BGR Vec3b) that is set but never used: Vec3b intensity = image.at<Vec3b>(r,c);
Most importantly, it is not clear what you are trying to achieve. The line (uchar)(c, -r + (220)/2); does not give much info. Also, which direction are you flipping the original image around, the X or Y axis?
Here’s a possible solution to flip your image in the X direction:
//get input image:
cv::Mat testMat = cv::imread( "lena.png" );
//Get the input image size:
int matCols = testMat.cols;
int matRows = testMat.rows;
//prepare the output image:
cv::Mat imageReflectionFinal = cv::Mat::zeros( testMat.size(), testMat.type() );
//the image will be flipped around the x axis, so the "target"
//row will start at the last row of the input image:
int targetRow = matRows-1;
//loop thru the original image, getting the current pixel value:
for( int r = 0; r < matRows; r++ ){
for( int c = 0; c < matCols; c++ ) {
//get the source pixel:
cv::Vec3b sourcePixel = testMat.at<cv::Vec3b>( r , c );
//source and target columns are the same:
int targetCol = c;
//set the target pixel
imageReflectionFinal.at<cv::Vec3b>( targetRow , targetCol ) = sourcePixel;
}
//for every iterated source row, decrease the number of
//target rows, as we are flipping the pixels in the x dimension:
targetRow--;
}
Result: (output image omitted)

Assign 3x1 mat to 3 channels mat

This question is a continuation of my question in this link. After I get the mat matrix, the 3x1 matrix is multiplied with a 3x3 mat matrix.
for (int i = 0; i < im.rows; i++)
{
    for (int j = 0; j < im.cols; j++)
    {
        for (int k = 0; k < nChannels; k++)
        {
            zay(k) = im.at<Vec3b>(i, j)[k]; // get the pixel value and assign it to Vec4b zay
        }
        //convert to mat, so I can easily multiply it
        mat.at<double>(0, 0) = zay[0];
        mat.at<double>(1, 0) = zay[1];
        mat.at<double>(2, 0) = zay[2];
We get a 3x1 mat matrix and do the multiplication with the filter.
multiply = Filter * mat;
And I get a 3x1 mat matrix back. I want to assign its values into my new 3-channel mat matrix; how do I do that? I want to construct an image using this operation. I'm not using a convolution function, because I think the result would be different. I'm working in C++, and I want to change a coloured image to another colour using matrix multiplication. I got the algorithm from this paper; in that paper, several matrices need to be multiplied to get the result.
OpenCV gives you a reshape function to change the number of channels/rows/columns implicitly:
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-reshape
This is very efficient since no data is copied, only the matrix header is changed.
try:
cv::Mat mat3Channels = mat.reshape(3,1);
I didn't test it, but it should work. It should give you a 1x1 matrix with a 3-channel element (Vec3d). If you want a Vec3b element instead, you have to convert it:
cv::Mat mat3ChannelsVec3b;
mat3Channels.convertTo(mat3ChannelsVec3b, CV_8UC3);
If you just want to write your mat back, it might be better to create a single Vec3b element instead:
cv::Vec3b element3Channels;
element3Channels[0] = multiply.at<double>(0,0);
element3Channels[1] = multiply.at<double>(1,0);
element3Channels[2] = multiply.at<double>(2,0);
But take care in all cases that Vec3b elements can't hold values < 0 or > 255.
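If you do the per-element assignment shown above, cv::saturate_cast can do the clamping and rounding for you, for example:
// clamp and round each double into the 0..255 range of a uchar
element3Channels[0] = cv::saturate_cast<uchar>(multiply.at<double>(0, 0));
element3Channels[1] = cv::saturate_cast<uchar>(multiply.at<double>(1, 0));
element3Channels[2] = cv::saturate_cast<uchar>(multiply.at<double>(2, 0));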
Edit: After reading your question again, you ask how to assign...
I guess you have another matrix:
cv::Mat outputMatrix = cv::Mat(im.rows, im.cols, CV_8UC3, cv::Scalar(0,0,0));
Now, to assign multiply to an element of outputMatrix, you can do:
cv::Vec3b element3Channels;
element3Channels[0] = multiply.at<double>(0,0);
element3Channels[1] = multiply.at<double>(1,0);
element3Channels[2] = multiply.at<double>(2,0);
outputMatrix.at<Vec3b>(i, j) = element3Channels;
If you need alpha channel too, you can adapt that easily.
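For example, a sketch of the 4-channel variant (outputMatrixA and the constant alpha are my assumptions; im, i, j and multiply are reused from above):
// 4-channel output: BGR from the multiplication result plus a constant alpha
cv::Mat outputMatrixA = cv::Mat(im.rows, im.cols, CV_8UC4, cv::Scalar(0, 0, 0, 0));
cv::Vec4b element4Channels;
element4Channels[0] = cv::saturate_cast<uchar>(multiply.at<double>(0, 0));
element4Channels[1] = cv::saturate_cast<uchar>(multiply.at<double>(1, 0));
element4Channels[2] = cv::saturate_cast<uchar>(multiply.at<double>(2, 0));
element4Channels[3] = 255; // fully opaque, as an assumption
outputMatrixA.at<cv::Vec4b>(i, j) = element4Channels;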

How to rotate pixels of contour or approximate polygon of contour in OpenCV?

After finding contours in an image, suppose I have the contour pixels and the approximate polygon of the contour.
I want to rotate the contour pixels, or the approximate polygon of the contour, by a given angle. Is this possible in OpenCV?
This is how you can rotate an object within the image. The input image (omitted here) has a known object/contour position (the colored region).
int main()
{
    cv::Mat input = cv::imread("rotateObjects_input.png");

    std::vector<cv::Point> myContour;
    myContour.push_back(cv::Point(100,100));
    myContour.push_back(cv::Point(150,100));
    myContour.push_back(cv::Point(150,300));
    myContour.push_back(cv::Point(100,300));

    // compute the center of gravity of the contour points
    cv::Point2f cog(0,0);
    for (unsigned int i = 0; i < myContour.size(); ++i)
    {
        cog = cog + cv::Point2f(myContour[i].x, myContour[i].y);
    }
    cog = 1.0f/(float)myContour.size() * cog;
    std::cout << "center of gravity: " << cog << std::endl;

    // create and draw mask
    cv::Mat mask = cv::Mat::zeros(input.size(), CV_8UC1);
    cv::fillConvexPoly(mask, myContour, 255);

    // create rotation mat
    float angleDEG = 45;
    cv::Mat transformation = cv::getRotationMatrix2D(cog, angleDEG, 1);
    std::cout << transformation << std::endl;

    // rotated mask holds the object position after rotation
    cv::Mat rotatedMask;
    cv::warpAffine(mask, rotatedMask, transformation, input.size());

    cv::Mat rotatedInput;
    cv::warpAffine(input, rotatedInput, transformation, input.size());

    cv::imshow("input", input);
    cv::imshow("rotated input", rotatedInput);
    cv::imshow("rotated mask", rotatedMask);

    // copy rotated object to original image:
    cv::Mat output = input.clone();
    rotatedInput.copyTo(output, rotatedMask);
    cv::imwrite("rotateObjects_beforeHolefilling.png", output);

    // now there are pixels left over from the old object position.
    cv::Mat holePixelMask = mask & (255 - rotatedMask);
    // we have to fill those pixels with some kind of background...
    cv::Mat legalBackground = (255 - mask);

    // fill holes. here you could try to find some better background color
    // by averaging in a neighborhood or sth.
    cv::Vec3b lastLegalPixel(0,0,0);
    for (int j = 0; j < mask.rows; ++j)
        for (int i = 0; i < mask.cols; ++i)
        {
            if (holePixelMask.at<unsigned char>(j,i))
            {
                output.at<cv::Vec3b>(j,i) = lastLegalPixel;
            }
            else
            {
                if (legalBackground.at<unsigned char>(j,i))
                    lastLegalPixel = input.at<cv::Vec3b>(j,i);
            }
        }

    cv::imshow("holes before filling", holePixelMask);
    cv::imshow("legal background", legalBackground);
    cv::imshow("result", output);
    cv::waitKey(-1);
    return 0;
}
This is the output before hole filling, and this is after hole filling (both images omitted).
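If you only need the rotated contour points or polygon themselves, rather than moving pixels, you can apply the same 2x3 affine matrix to the point list with cv::transform. A minimal sketch, reusing myContour, cog and angleDEG from above:
// rotate the contour points with the same affine matrix used for the image
cv::Mat rotMat = cv::getRotationMatrix2D(cog, angleDEG, 1.0);
std::vector<cv::Point2f> srcPoints(myContour.begin(), myContour.end());
std::vector<cv::Point2f> rotatedPoints;
cv::transform(srcPoints, rotatedPoints, rotMat);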

Finding all objects in an image based on color

I am looking for a way to take an image and get masks of all objects in it by color. My goal is to be able to separate similarly colored objects into layers so I can further examine each layer. The plan is to use each mask against the original image to create a histogram of the colors in each object and determine the similarity with other objects in the image. If something is similar enough it will be combined with other objects to form a layer.
The problem is that I cannot find a function in OpenCV to find all objects in an image based on color contiguity. I am sure such an algorithm exists, but it seems to be evading me. Does anyone know of an algorithm or function like this?
The best method that I have found is k-means clustering. This separates the image into different layers based on color by clustering the pixel colors into k groups. With this I am able to effectively split the image into several layers of similar color.
#define numClusters 7

cv::Mat src = cv::imread("img0.png");
cv::Mat kMeansSrc(src.rows * src.cols, 3, CV_32F);

//resize the image to src.rows*src.cols x 3
//cv::kmeans expects an image that is in rows with 3 channel columns
//this rearranges the image into (rows * columns, numChannels)
for (int y = 0; y < src.rows; y++)
{
    for (int x = 0; x < src.cols; x++)
    {
        for (int z = 0; z < 3; z++)
            kMeansSrc.at<float>(y + x*src.rows, z) = src.at<Vec3b>(y,x)[z];
    }
}

cv::Mat labels;
cv::Mat centers;
int attempts = 2;
//perform kmeans on kMeansSrc where numClusters is defined previously as 7
//end either when desired accuracy is met or the maximum number of iterations is reached
cv::kmeans(kMeansSrc, numClusters, labels,
           cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 8, 1),
           attempts, KMEANS_PP_CENTERS, centers);

//create an array of numClusters colors
int colors[numClusters];
for (int i = 0; i < numClusters; i++) {
    colors[i] = 255/(i+1);
}

std::vector<cv::Mat> layers;
for (int i = 0; i < numClusters; i++)
{
    layers.push_back(cv::Mat::zeros(src.rows, src.cols, CV_32F));
}

//use the labels to draw the layers
//using the array of colors, draw the pixels onto each label image
for (int y = 0; y < src.rows; y++)
{
    for (int x = 0; x < src.cols; x++)
    {
        int cluster_idx = labels.at<int>(y + x*src.rows, 0);
        layers[cluster_idx].at<float>(y, x) = (float)(colors[cluster_idx]);
    }
}

std::vector<cv::Mat> srcLayers;
//each layer masks a portion of the original image
//this leaves us with sections of similar color from the original image
for (int i = 0; i < numClusters; i++)
{
    layers[i].convertTo(layers[i], CV_8UC1);
    srcLayers.push_back(cv::Mat());
    src.copyTo(srcLayers[i], layers[i]);
}
I suggest you convert the image to HSV space (Hue-Saturation-Value). Then make a histogram based on the hue value to find thresholds online, or define them beforehand (depending on whether this is a general problem or a specific one).
Create one-channel images for each layer you want to form (set them to black).
Then use the HSV image and mark a layer based on the threshold values. You might want to add some constant thresholds for value and saturation too (to avoid dark and washed-out areas).
Does this make sense to you?
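A minimal sketch of that idea; the hue bands and the saturation/value floors below are placeholder values you would tune or derive from the histogram:
#include <opencv2/opencv.hpp>
#include <vector>

// split a BGR image into one mask per hue band
std::vector<cv::Mat> hueLayers(const cv::Mat& bgr)
{
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, CV_BGR2HSV);
    // example hue bands (OpenCV hue runs 0..179)
    const int hueBounds[] = { 0, 30, 60, 90, 120, 150, 180 };
    const int numBands = 6;
    std::vector<cv::Mat> layers;
    for (int k = 0; k < numBands; k++) {
        cv::Mat mask;
        // floor S and V at 50 to skip very dark and washed-out pixels
        cv::inRange(hsv,
                    cv::Scalar(hueBounds[k], 50, 50),
                    cv::Scalar(hueBounds[k + 1] - 1, 255, 255),
                    mask);
        layers.push_back(mask);
    }
    return layers;
}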
I think that you should proceed with the following process:
Smooth your image if it has too much detail.
Find edges.
Find all contours.
Try to find the color of each contour. Let's say you want to keep all contours which are red; so, keep only those contours which are red.
Once you find the contours you want to keep, create a mask image based upon those contours.
Using the mask image, extract the required objects from the original image (a sketch of these steps follows below).
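A sketch of that process for red objects; the blur kernel, Canny thresholds, and the "is red" heuristic are placeholder choices:
#include <opencv2/opencv.hpp>
#include <vector>

// keep only contours whose mean interior colour is red, then mask the original
cv::Mat extractRedObjects(const cv::Mat& src)
{
    cv::Mat blurred, gray, edges;
    cv::GaussianBlur(src, blurred, cv::Size(5, 5), 0);   // 1. smooth
    cv::cvtColor(blurred, gray, CV_BGR2GRAY);
    cv::Canny(gray, edges, 100, 200);                    // 2. find edges
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(edges, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // 3.
    cv::Mat mask = cv::Mat::zeros(src.size(), CV_8UC1);
    for (size_t i = 0; i < contours.size(); i++)
    {
        // 4. check the mean colour inside this contour
        cv::Mat contourMask = cv::Mat::zeros(src.size(), CV_8UC1);
        cv::drawContours(contourMask, contours, (int)i, cv::Scalar(255), CV_FILLED);
        cv::Scalar meanBGR = cv::mean(src, contourMask);
        if (meanBGR[2] > 150 && meanBGR[1] < 100 && meanBGR[0] < 100) // red heuristic
            cv::drawContours(mask, contours, (int)i, cv::Scalar(255), CV_FILLED); // 5.
    }
    cv::Mat result;
    src.copyTo(result, mask);                            // 6. extract
    return result;
}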

How to run findContours() on meanShiftSegmentation() output?

I'm trying to rewrite my very slow naive segmentation using floodFill to something faster. I ruled out meanShiftFiltering a year ago because of the difficulty in labelling the colours and then finding their contours.
The current version of opencv seems to have a fast new function that labels segments using mean shift: gpu::meanShiftSegmentation(). It produces images like the following:
(example output image omitted; source: ekran.org)
So this looks to me pretty close to being able to generate contours. How can I run findContours to generate segments?
It seems to me this would be done by extracting the labelled colours from the image, and then testing which pixel values in the image match each label colour, to make a boolean image suitable for findContours. This is what I have done in the following (but it's a bit slow, and it strikes me there should be a better way):
Mat image = imread("test.png");
...
// gpu operations on image resulting in gpuOpen
...
// Mean shift
TermCriteria iterations = TermCriteria(CV_TERMCRIT_ITER, 2, 0);
gpu::meanShiftSegmentation(gpuOpen, segments, 10, 20, 300, iterations);

// convert to greyscale (HSV image)
vector<Mat> channels;
split(segments, channels);

// get labels from histogram of image.
int size = 256;
labels = Mat(256, 1, CV_32SC1);
calcHist(&channels.at(2), 1, 0, Mat(), labels, 1, &size, 0);

// Loop through hist bins
for (int i = 0; i < 256; i++) {
    float count = labels.at<float>(i);
    // Does this bin represent a label in the image?
    if (count > 0) {
        // find areas of the image that match this label and findContours on the result.
        Mat label = Mat(channels.at(2).rows, channels.at(2).cols, CV_8UC1, Scalar::all(i)); // image filled with label colour.
        Mat boolImage = (channels.at(2) == label); // which pixels in the labelled image are identical to this label?
        vector<vector<Point> > labelContours;
        findContours(boolImage, labelContours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
        // Loop through contours.
        for (size_t idx = 0; idx < labelContours.size(); idx++) {
            // get bounds for this contour.
            Rect bounds = boundingRect(labelContours[idx]);
            // create ROI for bounds to extract this region
            Mat patchROI = image(bounds);
            Mat maskROI = boolImage(bounds);
        }
    }
}
Is this the best approach or is there a better way to get the label colours? Seems it would be logical for meanShiftSegmentation to provide this information? (vector of colour values, or vector of masks for each label, etc.)
Thank you.
Following is another way of doing this without throwing away the colour information in the meanShiftSegmentation results. I did not compare the two for performance.
// Loop through the whole image, pixel by pixel, and use the colour to index
// an array of bools indicating presence.
vector<Scalar> colours;
vector<Scalar>::iterator colourIter;
vector< vector< vector<bool> > > colourSpace;
vector< vector< vector<bool> > >::iterator colourSpaceBIter;
vector< vector<bool> >::iterator colourSpaceGIter;
vector<bool>::iterator colourSpaceRIter;

// Initialize 3D vector
colourSpace.resize(256);
for (int i = 0; i < 256; i++) {
    colourSpace[i].resize(256);
    for (int j = 0; j < 256; j++) {
        colourSpace[i][j].resize(256);
    }
}

// Loop through pixels in the image (should be fastish; look into LUT for faster)
uchar r, g, b;
for (int i = 0; i < segments.rows; i++)
{
    Vec3b* pixel = segments.ptr<Vec3b>(i); // point to first pixel in row
    for (int j = 0; j < segments.cols; j++)
    {
        b = pixel[j][0];
        g = pixel[j][1];
        r = pixel[j][2];
        colourSpace[b][g][r] = true; // this colour is in the image.
    }
}

// Get all the unique colours from colourSpace
// loop through colourSpace
int bi = 0;
for (colourSpaceBIter = colourSpace.begin(); colourSpaceBIter != colourSpace.end(); colourSpaceBIter++) {
    int gi = 0;
    for (colourSpaceGIter = colourSpaceBIter->begin(); colourSpaceGIter != colourSpaceBIter->end(); colourSpaceGIter++) {
        int ri = 0;
        for (colourSpaceRIter = colourSpaceGIter->begin(); colourSpaceRIter != colourSpaceGIter->end(); colourSpaceRIter++) {
            if (*colourSpaceRIter)
                colours.push_back(Scalar(bi, gi, ri));
            ri++;
        }
        gi++;
    }
    bi++;
}
// For each colour
int segmentCount = 0;
for (colourIter = colours.begin(); colourIter != colours.end(); colourIter++) {
    Mat label = Mat(segments.rows, segments.cols, CV_8UC3, *colourIter); // image filled with label colour.
    Mat boolImage; // single-channel mask, allocated by inRange
    inRange(segments, *colourIter, *colourIter, boolImage); // which pixels in the labelled image are identical to this label?
    vector<vector<Point> > labelContours;
    findContours(boolImage, labelContours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    // Loop through contours.
    for (size_t idx = 0; idx < labelContours.size(); idx++) {
        // get bounds for this contour.
        Rect bounds = boundingRect(labelContours[idx]);
        float area = contourArea(labelContours[idx]);
        // Draw this contour on a new blank image
        Mat maskImage = Mat::zeros(boolImage.rows, boolImage.cols, boolImage.type());
        drawContours(maskImage, labelContours, idx, Scalar(255,255,255), CV_FILLED);
        Mat patchROI = image(bounds); // extract this region from the original input image
        Mat maskROI = maskImage(bounds);
    }
    segmentCount++;
}