Multi-region mask OpenCV - C++

I would like to create a mask in OpenCV containing some full rectangular regions (say 1 to 10 regions). Think of it as a mask showing the location of features of interest on an image. I know the pixel coordinates of the corners of each region.
Right now, I am first initializing a Mat to 0, then looping through each element. Using "if" logic, I set each pixel to 255 if it belongs to one of the regions, such as:
for (int i = 0; i < mymask.cols; i++) {
    for (int j = 0; j < mymask.rows; j++) {
        if ( ((i > x_lowbound1) && (i < x_highbound1) &&
              (j > y_lowbound1) && (j < y_highbound1)) ||
             ((i > x_lowbound2) && (i < x_highbound2) &&
              (j > y_lowbound2) && (j < y_highbound2))) {
            mymask.at<uchar>(j, i) = 255; // at() takes (row, col), i.e. (y, x)
        }
    }
}
But this is very clumsy and, I think, inefficient. In this case I "fill" 2 rectangular regions with 255, but there is no convenient way to change the number of regions I fill, besides using a switch-case and repeating the code n times.
Can anyone think of something more intelligent? I would rather not use third-party libraries (besides OpenCV ;) ), and I am using Visual Studio 2012.

Use cv::rectangle():
// bounds are inclusive in this code!
cv::Rect region(x_lowbound1, y_lowbound1,
                x_highbound1 - x_lowbound1 + 1,
                y_highbound1 - y_lowbound1 + 1);
cv::rectangle(mymask, region, cv::Scalar(255), CV_FILLED);
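If the number of regions varies, one possible sketch (collecting the bounds into a std::vector<cv::Rect>; the container name and the mask dimensions mask_rows/mask_cols are illustrative) is:

// Build the mask once, then fill an arbitrary number of rectangles.
cv::Mat mymask = cv::Mat::zeros(mask_rows, mask_cols, CV_8UC1);

std::vector<cv::Rect> regions; // one cv::Rect per region of interest
regions.push_back(cv::Rect(x_lowbound1, y_lowbound1,
                           x_highbound1 - x_lowbound1 + 1,
                           y_highbound1 - y_lowbound1 + 1));
regions.push_back(cv::Rect(x_lowbound2, y_lowbound2,
                           x_highbound2 - x_lowbound2 + 1,
                           y_highbound2 - y_lowbound2 + 1));
// ...push back as many regions as needed

for (size_t k = 0; k < regions.size(); ++k)
    cv::rectangle(mymask, regions[k], cv::Scalar(255), CV_FILLED); // cv::FILLED in newer OpenCV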

Related

OpenCV: obtain certain pixel RGB values based on mask

My title may not be clear enough, but please look carefully at the following description. Thanks in advance.
I have a RGB image and a binary mask image:
Mat img = imread("test.jpg");
Mat mask = Mat::zeros(img.rows, img.cols, CV_8U);
Set some elements of the mask to one, and assume the number of ones is N. The nonzero coordinates are now known, and based on these coordinates we can obtain the corresponding pixel RGB values of the original image. I know this can be accomplished with the following code:
Mat colors = Mat::zeros(N, 3, CV_8U);
int counter = 0;
for (int i = 0; i < mask.rows; i++)
{
    for (int j = 0; j < mask.cols; j++)
    {
        if (mask.at<uchar>(i, j) == 1)
        {
            colors.at<uchar>(counter, 0) = img.at<Vec3b>(i, j)[0];
            colors.at<uchar>(counter, 1) = img.at<Vec3b>(i, j)[1];
            colors.at<uchar>(counter, 2) = img.at<Vec3b>(i, j)[2];
            counter++;
        }
    }
}
The resulting coordinates are shown in an image in the original post.
However, this two-layer for loop costs too much time. I was wondering if there is a faster method to obtain colors; I hope you can understand what I am trying to convey.
PS: If I could use Python, this could be done in a single line:
colors = img[mask == 1]
The .at() method is the slowest way to access Mat values in C++. The fastest is to use pointers, but best practice is to use an iterator. See the OpenCV tutorial on scanning images.
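For illustration, a minimal iterator-based sketch of the same extraction (assuming img is CV_8UC3 and mask is CV_8U of the same size, and collecting into a std::vector instead of a Mat) could look like this:

// Walk the image and the mask in lockstep with Mat iterators.
std::vector<cv::Vec3b> colors;
auto itImg  = img.begin<cv::Vec3b>();
auto itMask = mask.begin<uchar>();
for (; itMask != mask.end<uchar>(); ++itMask, ++itImg) {
    if (*itMask == 1)
        colors.push_back(*itImg); // keep the BGR triple of every masked pixel
}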
Just a note: even though Python's syntax is nice for something like this, it still has to loop through all of the elements at the end of the day, and since it has some overhead on top of that, it is in practice slower than C++ loops with pointers. You necessarily need to loop through all the elements regardless of your library, since you're doing a comparison with the mask for every element.
If you are flexible about using another open-source C++ library, try Armadillo. You can do all linear algebra operations with it, and you can also reduce the above code to one line (similar to your Python snippet).
Alternatively, try the findNonZero() function to find the coordinates of all non-zero values in the image. See this answer: https://stackoverflow.com/a/19244484/7514664
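A rough sketch of that approach (assuming img is CV_8UC3 and mask is CV_8U, as in the question):

// Let OpenCV collect the masked coordinates, then read the colors directly.
std::vector<cv::Point> locations;
cv::findNonZero(mask, locations); // coordinates of all non-zero mask pixels
std::vector<cv::Vec3b> colors;
colors.reserve(locations.size());
for (size_t k = 0; k < locations.size(); ++k)
    colors.push_back(img.at<cv::Vec3b>(locations[k])); // at() accepts a Point (x, y)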
Compile with optimizations enabled, profile this version, and tell us if it is faster:
vector<Vec3b> colors;
if (img.isContinuous() && mask.isContinuous()) {
    auto pimg = img.ptr<Vec3b>();
    for (auto pmask = mask.datastart; pmask < mask.dataend; ++pmask, ++pimg) {
        if (*pmask)
            colors.emplace_back(*pimg);
    }
}
else {
    for (int r = 0; r < img.rows; ++r) {
        auto prowimg = img.ptr<Vec3b>(r);
        auto prowmask = mask.ptr(r); // row pointer must come from mask, not img
        for (int c = 0; c < img.cols; ++c) {
            if (prowmask[c])
                colors.emplace_back(prowimg[c]);
        }
    }
}
If you know the size of colors, reserve the space for it beforehand.
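For example, the exact count of masked pixels is available up front via cv::countNonZero (a one-line sketch, assuming the mask from the question):

colors.reserve(static_cast<size_t>(cv::countNonZero(mask))); // avoids repeated reallocations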

OpenCV pixel manipulation sometimes not working

I am trying to modify a BGRA Mat using a pointer like this:
// Bound the value between 0 and 255
uchar boundPixelValue(double c) {
    c = int(c);
    if (c > 255)
        c = 255;
    if (c < 0)
        c = 0;
    return (uchar) c;
}
for (int i = 0; i < rows; i++)
    for (int j = 0; j < cols; j++)
        for (int k = 0; k < 3; k++) {
            // This loop accesses the first three channels
            mat.ptr<Vec4b>(i)[j][k] = boundPixelValue(
                1.0 * mat.ptr<Vec4b>(i)[j][k] * max / avg[k]);
        }
But this gives different outputs every time: sometimes it works and sometimes it gives a blank white image. I suspect this is due to non-continuous data; can anyone help?
One extra question: usually we access the columns of a 2D array before the rows because it is usually faster. However, I have to access the pixels using mat.ptr<Vec4b>(row)[col]. So should I loop through the rows first and then the columns?
An easier, less intensive way of doing this is:
std::vector<cv::Mat> matArray;
cv::split(toBoundMat, matArray);
matArray[0].setTo(0, matArray[0] < 0);
matArray[0].setTo(255, matArray[0] > 255);
matArray[1].setTo(0, matArray[1] < 0);
matArray[1].setTo(255, matArray[1] > 255);
matArray[2].setTo(0, matArray[2] < 0);
matArray[2].setTo(255, matArray[2] > 255);
cv::Mat boundedMat;
cv::merge(matArray, boundedMat);
But I really don't understand what you are trying to do. Your double data may have values anywhere in the range of about ±1.7E+308. Either you are expecting a very specific kind of data, or you are going to mess it up. If you just want to make a Mat visualizable, normalize it like this:
cv::normalize(inMat, destMat, 0, 255, CV_MINMAX);
destMat.convertTo(destMat, CV_8UC1); //--(8-bit visualizable mat)
This will check the min and max of your current Mat and will map the minimum to 0, the maximum to 255, and all the in-between values proportionally :)
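As a side note, a possible alternative sketch for the original per-channel scaling (assuming mat, max and avg are the variables from the question): cv::Mat::convertTo multiplies by a scale factor and saturate-casts the result to the target type, so the manual clamping function is not needed.

// Scale the first three channels of the BGRA Mat; convertTo clamps to [0, 255].
std::vector<cv::Mat> channels;
cv::split(mat, channels); // B, G, R, A as separate CV_8U mats
for (int k = 0; k < 3; ++k)
    channels[k].convertTo(channels[k], CV_8U, 1.0 * max / avg[k]);
cv::merge(channels, mat);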

OpenCV function pointPolygonTest() does not behave as I thought

Please check my code; it does not work well. No errors occur during either the build or the debug session. I want to mark all the pixels inside every contour WHITE. The contours are correct because I have drawn them separately, but the final result is not right.
// Draw the sketches
Mat sketches(detected.size(), CV_8UC1, Scalar(0));
for (int j = ptop; j <= pbottom; ++j) {
    for (int i = pleft; i <= pright; ++i) {
        if (pointPolygonTest(contours[firstc], Point(j, i), false) >= 0) {
            sketches.at<uchar>(i, j) = 255;
        }
        if (pointPolygonTest(contours[secondc], Point(j, i), false) >= 0) {
            sketches.at<uchar>(i, j) = 255;
        }
    }
}
The variable "Mat detected" is another image used for hand detection. I have extracted two contours from it as contours[firstc] and contours[secondc]. And I also narrow down the hand part in the image to row(ptop:pbottom), and col(pleft,pright), and the two "for" loop goes correctly as well. So where exactly is the problem?.
Here is my result! Something goes wrong with it!
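As a side note on the API conventions used in this snippet (a sketch only, not a verified fix): cv::pointPolygonTest expects the test point as (x, y), i.e. Point(column, row), while Mat::at<uchar>(row, col) takes the row index first. With i running over columns (pleft..pright) and j over rows (ptop..pbottom), the two calls would look like this:

// pointPolygonTest wants (x, y); Mat::at wants (row, col).
if (pointPolygonTest(contours[firstc], Point2f((float)i, (float)j), false) >= 0) {
    sketches.at<uchar>(j, i) = 255; // row = j (y), col = i (x)
}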

Using cv::Mat image (OpenCV), how can I detect an object?

Using
Mat image;
I used
inRange(image,Scalar(170,100,0),Scalar(255,255,70),image);
and I detect the object in blue, but I can't draw a rectangle around the object.
Should I use a mask or something?
inRange(image, Scalar(170,100,0), Scalar(255,255,70), image);
GaussianBlur(image, image, Size(9,9), 1.5);
for (int i = 2; i < image.cols-2; i++)
    for (int j = 2; j < image.rows-2; j++) {
        if (image.at<Vec3b>(i-1,j-1)[0] > 200 &&
            image.at<Vec3b>(i-1,j)[0]   > 200 &&
            image.at<Vec3b>(i-1,j+1)[0] > 200 &&
            image.at<Vec3b>(i,j-1)[0]   > 200 &&
            image.at<Vec3b>(i,j)[0]     > 200 &&
            image.at<Vec3b>(i,j+1)[0]   > 200 &&
            image.at<Vec3b>(i+1,j-1)[0] > 200 &&
            image.at<Vec3b>(i+1,j)[0]   > 200 &&
            image.at<Vec3b>(i+1,j+1)[0] > 200)
        {
            if (min_x > i) min_x = i;
            if (min_y > j) min_y = j;
            if (max_x < i) max_x = i;
            if (max_y < j) max_y = j;
        }
    }
if (!(max_x == 0 && max_y == 0 && min_x == image.rows && min_y == image.cols))
{
    rectangle(image, Point(min_x,min_y), Point(max_x,max_y), CV_RGB(255,0,0), 2);
}
imshow("working", image);
if (waitKey(100) >= 0) break;
}
}
This isn't working and gives a runtime error.
I don't know why; please help me!
Some tips:
Your image might be CV_8UC3, but inRange produces a single-channel CV_8U output, so it is better to write the result to a new Mat instance rather than overwriting image.
Use cv::findContours to detect your area.
Study mean shift, which OpenCV uses for tracking; it might help you.
You should not use an RGB image with the inRange method here. Transform your image to HSV color space and use the hue range for blue, which is roughly 95-135; there are far too many possible "blues" in RGB space.
inRange(image, Scalar(95,0,0), Scalar(135,255,255), image);
The result will be a binary image; just find the contours and draw a bounding rectangle around them.
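A minimal sketch of that approach (the hue bounds follow the suggestion above and may need tuning; variable names are illustrative):

// Convert to HSV, threshold the blue hue range, then box each detected blob.
cv::Mat hsv, mask;
cv::cvtColor(image, hsv, cv::COLOR_BGR2HSV);
cv::inRange(hsv, cv::Scalar(95, 0, 0), cv::Scalar(135, 255, 255), mask);

std::vector<std::vector<cv::Point> > contours;
cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (size_t k = 0; k < contours.size(); ++k) {
    cv::Rect box = cv::boundingRect(contours[k]);
    cv::rectangle(image, box, cv::Scalar(0, 0, 255), 2); // red box (BGR) on the original
}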

XNA 4.0: preparing for multiple collision detection with a list

I have a List of objects which all have a bounding rectangle that is updated every time... How can I iterate over them efficiently? I thought that checking them like this is fine, but any ideas?
for (int i = 0; i < birds.Count; i++)
{
    for (int j = 0; j < birds.Count; j++)
    {
        if (j > i)
        {
            if (birds[j].boundingRectangle.Intersects(birds[i].boundingRectangle))
            {
                birds[i].tintColor = Color.Yellow;
                birds[j].tintColor = Color.Yellow;
            }
            else
            {
                birds[i].tintColor = Color.White;
                birds[j].tintColor = Color.White;
            }
        }
    }
}
I can't see why that would fail to detect the collision; the code seems to be OK. You should output some strings showing the values of the bounding rectangles, to check whether they're set properly, whether you are executing that code at all, and whether your "birds" list survives the scope in which this code is executed (you may be modifying a copy instead of the actual list).
As for improvements, you can do:
for (int j = i+1; j < birds.Count; j++)
and then you can remove the if (j > i) (j will always be > i).
Another thing I would recommend is not declaring int j in the for statement. It is better to declare it outside than to create a new one on every iteration of i, so:
int i, j;
for (i = 0; i < birds.Count; i++)
for (j = i+1; j < birds.Count; j++)
I don't think there is much more room for improvement there without being able to use pointers. Your method is fine for 2D graphics anyway, unless you're checking for hundreds of objects.
PS: I believe your question could fit in the Game Development - SE site (unless you're using XNA and bounding boxes for something else :P)
Your problem is that when comparing rectangles you set the birds to white even if they were previously set to yellow by an earlier hit detection, so a bird may have been set to yellow but will be set back to white if a later test fails.
How about setting all the birds to white at the start of each frame (before collision detection), and then setting them to yellow only when you get a collision (leaving them alone if there's no collision)?