OpenCV: How to write vector<Point> to an image - C++

cv::Mat in = cv::imread("SegmentedImage.png");

// vector with all non-white point positions
std::vector<cv::Point> nonWhiteList;
nonWhiteList.reserve(in.rows * in.cols);

// add all non-white points to the vector
for (int j = 0; j < in.rows; ++j)
{
    for (int i = 0; i < in.cols; ++i)
    {
        // if not white: add to the list
        if (in.at<cv::Vec3b>(j, i) != cv::Vec3b(255, 255, 255))
        {
            nonWhiteList.push_back(cv::Point(i, j));
        }
    }
}

cv::Mat BKGR = cv::imread("photo_booth_Cars.png", cv::IMREAD_COLOR); // 1529x736
I need to write the vector<Point> nonWhiteList to the image BKGR. How do I do that?
Basically, I need to remove the white background from the image and put the non-white points on another background image. I have researched grabCut and findContours extensively.
I am completely new to OpenCV. Thanks so much for the help.

cv::Mat BKGR = cv::imread("photo_booth_Cars.png", cv::IMREAD_COLOR); // 1529x736
for (int j = 0; j < in.rows; ++j)
{
    for (int i = 0; i < in.cols; ++i)
    {
        if (in.at<cv::Vec3b>(j, i) != cv::Vec3b(255, 255, 255))
        {
            BKGR.at<cv::Vec3b>(j, i) = in.at<cv::Vec3b>(j, i);
        }
    }
}
cv::imwrite("newFinalImage.png", BKGR);

If the images have the same dimensions, this makes sense; otherwise a direct copy is difficult unless you know the camera parameters of both images. However, you can use interpolation (resizing) to bring the two images to the same size.
If the images are the same size, why create a std::vector and then copy it to another cv::Mat? You can achieve this in the same loop without the extra computational overhead; it is as simple as filling an array. However, your question is ambiguous.

Related

Zeros Mat object has some values other than zero

I declared a Mat object D4 with initial values of zero, with the given dimensions and datatype. Then, to display it at a smaller size, I wrote this:
Mat D4 = Mat::zeros(7168, 7424, CV_32FC1);
Mat res6;
for (int i = 0; i < 7168; i++)
    for (int j = 0; j < 7424; j++)
    {
        DC_.at<uchar>(i, j) = (unsigned char)D4.at<float>(i, j);
    }
resize(DC_, res6, Size(512, 512));
imshow("Test", res6);
I expect a completely black image, but I get a patch of gray values on the bottom-right side (the patch resembles my input image at that exact location). Why does this happen? What is going wrong?
Can you please check whether the problem still occurs if you use this snippet instead?
Mat D4 = Mat::zeros(DC_.rows, DC_.cols, CV_32FC1);
Mat res6;
for (int i = 0; i < DC_.rows; i++)
    for (int j = 0; j < DC_.cols; j++)
    {
        DC_.at<uchar>(i, j) = (unsigned char)D4.at<float>(i, j);
    }
resize(DC_, res6, Size(512, 512));
imshow("Test", res6);

Change pixel in a zero matrix not working correctly

I am trying to loop over a new zero matrix and change every pixel to white.
cv::Mat background = cv::Mat::zeros(frame.rows, frame.cols, frame.type());
for (int i = 0; i < frame.rows; i++)
{
    for (int j = 0; j < frame.cols; j++)
    {
        background.at<char>(i, j) = 255;
    }
}
At the end I should have a completely white matrix, but I don't understand why I instead get this picture:
Thanks
EDIT:
solution:
cv::Mat background = cv::Mat::zeros(frame.rows, frame.cols, frame.type());
for (int i = 0; i < frame.rows; i++)
{
    for (int j = 0; j < frame.cols; j++)
    {
        Vec3b bgrPixel = Vec3b(255, 255, 255);
        background.at<Vec3b>(i, j) = bgrPixel;
        // background.at<char>(i,j) = 255;
    }
}
Thank you !
Your matrix is made up of rows × cols pixels, and each pixel is made up of 3 (BGR) or 4 (BGRA) chars (bytes/channels). By indexing with at<char>, you are only looping over the first rows × cols bytes of the matrix, when you need to be looping over all of its pixels. I'm guessing whatever type you pass as the third argument to zeros() is the 'pixel type'.
Look here for an example usage: OpenCV get pixel channel value from Mat image

Finding all objects in an image based on color

I am looking for a way to take an image and get masks of all objects in it by color. My goal is to be able to separate similarly colored objects into layers so I can further examine each layer. The plan is to use each mask against the original image to create a histogram of the colors in each object and determine the similarity with other objects in the image. If something is similar enough it will be combined with other objects to form a layer.
The problem is that I cannot find a function in OpenCV to find all objects in an image based on color contiguity. I am sure such an algorithm exists, but it seems to be evading me. Does anyone know of an algorithm or function like this?
The best method I have found is k-means clustering, which partitions the image into clusters based on color. With this I am able to effectively split the image into several layers of similar color.
#define numClusters 7

cv::Mat src = cv::imread("img0.png");

// cv::kmeans expects its samples in rows with 3-channel columns,
// so rearrange the image into a (rows * cols) x 3 float matrix
cv::Mat kMeansSrc(src.rows * src.cols, 3, CV_32F);
for (int y = 0; y < src.rows; y++)
{
    for (int x = 0; x < src.cols; x++)
    {
        for (int z = 0; z < 3; z++)
            kMeansSrc.at<float>(y + x * src.rows, z) = src.at<cv::Vec3b>(y, x)[z];
    }
}

cv::Mat labels;
cv::Mat centers;
int attempts = 2;

// perform kmeans on kMeansSrc, where numClusters is defined above as 7;
// end either when the desired accuracy is met or the maximum number of
// iterations is reached
cv::kmeans(kMeansSrc, numClusters, labels,
           cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::MAX_ITER, 8, 1),
           attempts, cv::KMEANS_PP_CENTERS, centers);

// create an array of numClusters gray values
int colors[numClusters];
for (int i = 0; i < numClusters; i++) {
    colors[i] = 255 / (i + 1);
}

std::vector<cv::Mat> layers;
for (int i = 0; i < numClusters; i++)
{
    layers.push_back(cv::Mat::zeros(src.rows, src.cols, CV_32F));
}

// use the labels to draw the layers:
// using the array of colors, draw the pixels onto each label image
for (int y = 0; y < src.rows; y++)
{
    for (int x = 0; x < src.cols; x++)
    {
        int cluster_idx = labels.at<int>(y + x * src.rows, 0);
        layers[cluster_idx].at<float>(y, x) = (float)(colors[cluster_idx]);
    }
}

// use each layer to mask a portion of the original image;
// this leaves us with sections of similar color from the original image
std::vector<cv::Mat> srcLayers;
for (int i = 0; i < numClusters; i++)
{
    layers[i].convertTo(layers[i], CV_8UC1);
    srcLayers.push_back(cv::Mat());
    src.copyTo(srcLayers[i], layers[i]);
}
I suggest you convert the image to the HSV space (Hue-Saturation-Value). Then make a histogram based on the hue value to find thresholds online, or define them beforehand (depending on whether this is a general problem or a specific one).
Create one-channel images for each layer you want to form (initialize them as black).
Then use the HSV image and mark each layer based on the threshold values. You might want to add constant thresholds for value and saturation too, to avoid dark and washed-out areas.
Does this make sense to you?
I think you should proceed with the following process:
1. Smooth your image if it has too much detail.
2. Find edges.
3. Find all contours.
4. Determine the color of each contour. Say you want to keep all contours that are red; then keep only those contours.
5. Once you have found the contours you want to keep, create a mask image based on them.
6. Using the mask image, extract the required objects from the original image.

How to access vector<vector<Point>> contours in opencv like matrix element?

My problem is that I do not know how to access a vector<vector<Point>> contour (this is a 2D, matrix-like structure in OpenCV).
I want to do the following: if a Mat element does not lie within the contour area, I want to suppress that matrix element. In order to do this, I need to know the contour elements too.
You need a for loop over both the outer and the inner vector. Something like this:
vector< vector<Point> > contours;
for (size_t i = 0; i < contours.size(); i++)
{
    for (size_t j = 0; j < contours[i].size(); j++)
    {
        cout << contours[i][j] << endl; // do whatever
    }
}
If I try hard to understand your question, you basically want to consider a contour at the pixel level. In order to do that, you should draw the contour into a blank matrix with drawContours, and then compare the two matrices, or match individual pixels in that matrix if you want to work pixel by pixel.
If you need all points rather than just the edge points, you can use drawContours(..., thickness=cv::FILLED) to dump the contour onto a dummy Mat, then obtain those points by scanning the dummy Mat.

LUT for different channels in C++, opencv2

I've been playing around with OpenCV 2 in C++ for a couple of days and noticed that lookup tables are the fastest way to apply changes to an image. However, I've been having some trouble using them for my purposes.
The code below shows an example of inverting pixel values:
bool apply(Image& img) {
    int dim(256);
    Mat lut(1, &dim, CV_8U);
    for (int i = 0; i < 256; i++)
        lut.at<uchar>(i) = 255 - i;
    LUT(img.final, lut, img.final);
    return true;
}

class Image {
public:
    const Mat& original;
    Mat final;
    ...
};
As it is much more efficient than changing each pixel individually (verified by my own tests), I'd like to use this method for other operations. To do this, however, I have to access each channel (each color; the picture is in BGR) separately. So, for example, I'd like to change blue to 255-i, green to 255-i/2, and red to 255-i/3.
I've been searching the net for a while but couldn't come up with a correct solution. As far as I know it's possible (see the documentation), but I can't find a way to implement it.
The key is this paragraph in the docs:
the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as in the source array
So, you must create a multichannel LUT:
bool apply(Image& img) {
    int dim(256);
    Mat lut(1, &dim, CV_8UC(img.final.channels()));
    if (img.final.channels() == 1)
    {
        for (int i = 0; i < 256; i++)
            lut.at<uchar>(i) = 255 - i;
    }
    else // assumes the images are either single-channel or 3-channel
    {
        for (int i = 0; i < 256; i++)
        {
            lut.at<Vec3b>(i)[0] = 255 - i;     // first channel (B)
            lut.at<Vec3b>(i)[1] = 255 - i/2;   // second channel (G)
            lut.at<Vec3b>(i)[2] = 255 - i/3;   // third channel (R)
        }
    }
    LUT(img.final, lut, img.final); // are you sure you want final -> final?
                                    // if yes, correct the LUT allocation part
    return true;
}