Using
Mat image;
I called
inRange(image, Scalar(170,100,0), Scalar(255,255,70), image);
and I can detect the blue object, but I can't draw a rectangle around it.
Should I use a mask, or something else? Here is my code:
inRange(image, Scalar(170,100,0), Scalar(255,255,70), image);
GaussianBlur(image, image, Size(9,9), 1.5);

for (int i = 2; i < image.cols-2; i++)
    for (int j = 2; j < image.rows-2; j++) {
        if (image.at<Vec3b>(i-1,j-1)[0] > 200 &&
            image.at<Vec3b>(i-1,j)[0]   > 200 &&
            image.at<Vec3b>(i-1,j+1)[0] > 200 &&
            image.at<Vec3b>(i,j-1)[0]   > 200 &&
            image.at<Vec3b>(i,j)[0]     > 200 &&
            image.at<Vec3b>(i,j+1)[0]   > 200 &&
            image.at<Vec3b>(i+1,j-1)[0] > 200 &&
            image.at<Vec3b>(i+1,j)[0]   > 200 &&
            image.at<Vec3b>(i+1,j+1)[0] > 200)
        {
            if (min_x > i) min_x = i;
            if (min_y > j) min_y = j;
            if (max_x < i) max_x = i;
            if (max_y < j) max_y = j;
        }
    }

if (!(max_x == 0 && max_y == 0 && min_x == image.rows && min_y == image.cols))
{
    rectangle(image, Point(min_x,min_y), Point(max_x,max_y), CV_RGB(255,0,0), 2);
}
imshow("working", image);
if (waitKey(100) >= 0) break;
}
}
This isn't working, and I get a runtime error.
I don't know why; please help me!
Some tips:
Your image might be CV_8UC3, but inRange outputs a single-channel CV_8UC1 mask, so it is better to write the result to a new Mat instance instead of overwriting the input.
Use cv::findContours to detect your area.
Study the meanshift tracking provided by OpenCV, which might also help you.
It is not practical to use a BGR/RGB image with inRange here. You should transform your image to the HSV color space and then threshold on the hue range of blue, which is roughly 95-135; there are far too many possible "blue" values in RGB space.
inRange(image,Scalar(95,0,0),Scalar(135,255,255),image);
The result will be a binary image; just find the contours and draw a bounding rectangle around each of them.
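For illustration, here is a minimal sketch of that pipeline, with the input file name and the saturation/value bounds as assumptions (they may need tuning for your lighting): convert to HSV, threshold the blue hue range into a separate mask, then find contours and draw a bounding rectangle around each one.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat bgr = cv::imread("input.jpg");   // assumed input file
    if (bgr.empty()) return -1;

    // OpenCV loads images as BGR, so convert BGR -> HSV
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    // Hue 95-135 covers blue; the S/V lower bounds are assumptions
    cv::inRange(hsv, cv::Scalar(95, 50, 50), cv::Scalar(135, 255, 255), mask);

    // One bounding rectangle per detected blue region
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (size_t i = 0; i < contours.size(); i++)
        cv::rectangle(bgr, cv::boundingRect(contours[i]), cv::Scalar(0, 0, 255), 2);

    cv::imshow("detected", bgr);
    cv::waitKey(0);
    return 0;
}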
Related
I'm working on OpenCV 4 in ROS Melodic. After undistort(), images have a black background that is detected by SURF. How can I fix this?
I found a solution thanks to Micka's comment: I filter the features during the Lowe ratio test:
//-- Filter matches using Lowe's ratio test (default ratio_thresh: 0.7f)
//-- and drop keypoints that fall in the black background.
//   width_low/width_high and height_low/height_high bound the valid
//   (non-black) region of the undistorted image.
vector<DMatch> matches;
for (size_t i = 0; i < knn_matches.size(); i++)
{
    bool lowe_condition = knn_matches[i][0].distance < ratio_thresh * knn_matches[i][1].distance;
    bool black_background_condition =
        (keypoints1[i].pt.x >= width_low) && (keypoints1[i].pt.x <= width_high) &&
        (keypoints1[i].pt.y >= height_low) && (keypoints1[i].pt.y <= height_high);
    if (lowe_condition && black_background_condition)
    {
        matches.push_back(knn_matches[i][0]);
    }
}
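For completeness, here is a minimal sketch of the surrounding detection and matching code this snippet assumes (SURF from opencv_contrib's xfeatures2d module and a brute-force knnMatch with k = 2). The function name, the Hessian threshold, and the interpretation of width_low/width_high and height_low/height_high as the bounds of the valid, non-black region are assumptions:

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <vector>

// Sketch: detect and match SURF features, then apply the Lowe-ratio /
// black-background filter from the snippet above.
std::vector<cv::DMatch> matchInsideRoi(const cv::Mat &img1, const cv::Mat &img2,
                                       int width_low, int width_high,
                                       int height_low, int height_high,
                                       float ratio_thresh = 0.7f)
{
    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400.0);

    std::vector<cv::KeyPoint> keypoints1, keypoints2;
    cv::Mat descriptors1, descriptors2;
    surf->detectAndCompute(img1, cv::noArray(), keypoints1, descriptors1);
    surf->detectAndCompute(img2, cv::noArray(), keypoints2, descriptors2);

    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch> > knn_matches;
    matcher.knnMatch(descriptors1, descriptors2, knn_matches, 2);

    std::vector<cv::DMatch> matches;
    for (size_t i = 0; i < knn_matches.size(); i++)
    {
        if (knn_matches[i].size() < 2)
            continue;
        const cv::Point2f &pt = keypoints1[knn_matches[i][0].queryIdx].pt;
        bool lowe_ok = knn_matches[i][0].distance < ratio_thresh * knn_matches[i][1].distance;
        bool roi_ok  = pt.x >= width_low && pt.x <= width_high &&
                       pt.y >= height_low && pt.y <= height_high;
        if (lowe_ok && roi_ok)
            matches.push_back(knn_matches[i][0]);
    }
    return matches;
}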
How do I get all objects from an image? I am separating image objects by color.
There are almost 20 colors in the following image. I want to extract all colors and their positions into a vector (Vec3b and Rect).
I'm using the EGBIS algorithm for segmentation.
Segmented image
Mat src, dst;
vector<Vec3b> colors;                       // unique colours found so far
String imageName("/home/pathToImage.jpg");
src = imread(imageName, 1);
if (src.rows < 1)
    return -1;

for (int i = 0; i < src.rows; i = i + 5)
{
    for (int j = 0; j < src.cols; j = j + 5)
    {
        Vec3b color = src.at<Vec3b>(i, j);   // (row, col) access
        if (colors.empty())
        {
            colors.push_back(color);
        }
        else
        {
            bool add = true;
            for (int k = 0; k < colors.size(); k++)
            {
                int bmin = colors[k].val[0] - 5, bmax = colors[k].val[0] + 5,
                    gmin = colors[k].val[1] - 5, gmax = colors[k].val[1] + 5,
                    rmin = colors[k].val[2] - 5, rmax = colors[k].val[2] + 5;
                if ((color.val[0] >= bmin && color.val[0] <= bmax) &&
                    (color.val[1] >= gmin && color.val[1] <= gmax) &&
                    (color.val[2] >= rmin && color.val[2] <= rmax))
                {
                    add = false;
                    break;
                }
            }
            if (add)
                colors.push_back(color);
        }
    }
}

for (int i = 0; i < colors.size(); i++)
{
    Mat inrangeImage;
    //cv::inRange(src, Scalar(lowBlue, lowGreen, lowRed), Scalar(highBlue, highGreen, highRed), redColorOnly);
    cv::inRange(src,
                cv::Scalar(colors[i].val[0] - 1, colors[i].val[1] - 1, colors[i].val[2] - 1),
                cv::Scalar(colors[i].val[0] + 1, colors[i].val[1] + 1, colors[i].val[2] + 1),
                inrangeImage);
    imwrite("/home/kavtech/Segmentation/1/opencv-wrapper-egbis/images/inrangeImage.jpg", inrangeImage);
}
/// Display
namedWindow("Image", WINDOW_AUTOSIZE );
imshow("Image", src );
waitKey(0);
I want to get each color's position so that I can differentiate the object positions. Please help!
That's just a trivial data formatting problem. You want to turn a truecolour image with only 20 or so colours into a colour-indexed image.
So simply step through the image, look up the colour in your growing dictionary, and assign an integer 0-20 to each pixel.
Now you can turn the index image into binary masks simply by saying one colour is set and the rest are clear, and use standard algorithms for fitting rectangles.
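As a hedged sketch of that approach (assuming the segmented image is flat-coloured, so exact colour comparison is enough; the file path is the one from the question): build a growing palette of Vec3b colours, write an integer index per pixel, and grow one Rect per colour as you go. If the same colour appears in several separate objects, the single Rect spans all of them; run findContours or connectedComponents on the per-colour mask if you need them separated.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat src = cv::imread("/home/pathToImage.jpg", 1);   // segmented image
    if (src.empty()) return -1;

    std::vector<cv::Vec3b> palette;                 // colour dictionary
    std::vector<cv::Rect> boxes;                    // one bounding box per colour
    cv::Mat indexed(src.rows, src.cols, CV_32SC1);  // colour-indexed image

    for (int r = 0; r < src.rows; r++)
        for (int c = 0; c < src.cols; c++)
        {
            const cv::Vec3b &px = src.at<cv::Vec3b>(r, c);
            int idx = -1;
            for (size_t k = 0; k < palette.size(); k++)
                if (palette[k] == px) { idx = (int)k; break; }
            if (idx < 0)
            {
                idx = (int)palette.size();
                palette.push_back(px);
                boxes.push_back(cv::Rect(c, r, 1, 1));
            }
            else
            {
                boxes[idx] |= cv::Rect(c, r, 1, 1);  // grow this colour's box
            }
            indexed.at<int>(r, c) = idx;
        }

    for (size_t k = 0; k < palette.size(); k++)
    {
        // Per-colour binary mask from the index image (255 where colour k)
        cv::Mat mask = (indexed == (int)k);
        std::cout << "colour " << palette[k] << " -> " << boxes[k]
                  << " (" << cv::countNonZero(mask) << " px)" << std::endl;
    }
    return 0;
}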
Please check my code; it does not work well. No errors occur during either the build or the debug session. I want to mark all the pixels inside every contour WHITE. The contours are correct, because I have drawn them separately, but the final result is not right.
// Draw the sketches
Mat sketches(detected.size(), CV_8UC1, Scalar(0));
for (int j = ptop; j <= pbottom; ++j) {
    for (int i = pleft; i <= pright; ++i) {
        if (pointPolygonTest(contours[firstc], Point(j, i), false) >= 0) {
            sketches.at<uchar>(i, j) = 255;
        }
        if (pointPolygonTest(contours[secondc], Point(j, i), false) >= 0) {
            sketches.at<uchar>(i, j) = 255;
        }
    }
}
The variable "Mat detected" is another image used for hand detection. I have extracted two contours from it as contours[firstc] and contours[secondc]. And I also narrow down the hand part in the image to row(ptop:pbottom), and col(pleft,pright), and the two "for" loop goes correctly as well. So where exactly is the problem?.
Here is my result! Something goes wrong with it!
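For reference, here is a hedged sketch that fills contour interiors with cv::drawContours (a negative thickness means "filled") instead of testing every pixel with pointPolygonTest; the parameter names mirror the ones in the question, and the wrapper function itself is illustrative:

#include <opencv2/opencv.hpp>
#include <vector>

// Fill the interiors of two contours into a single-channel sketch image.
// 'size' is the size of the source image (detected.size() in the question).
cv::Mat fillTwoContours(cv::Size size,
                        const std::vector<std::vector<cv::Point> > &contours,
                        int firstc, int secondc)
{
    cv::Mat sketches = cv::Mat::zeros(size, CV_8UC1);
    cv::drawContours(sketches, contours, firstc,  cv::Scalar(255), -1);  // -1 = filled
    cv::drawContours(sketches, contours, secondc, cv::Scalar(255), -1);
    return sketches;
}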
I would like to create a mask in OpenCV containing some full rectangular regions (say 1 to 10 regions). Think of it as a mask showing the location of features of interest on an image. I know the pixel coordinates of the corners of each region.
Right now, I am first initializing a Mat to 0, then looping through each element. Using "if" logic, I set each pixel to 255 if it belongs to a region, such as:
for (int i = 0; i < mymask.cols; i++) {
    for (int j = 0; j < mymask.rows; j++) {
        if (((i > x_lowbound1) && (i < x_highbound1) &&
             (j > y_lowbound1) && (j < y_highbound1)) ||
            ((i > x_lowbound2) && (i < x_highbound2) &&
             (j > y_lowbound2) && (j < y_highbound2))) {
            mymask.at<uchar>(i, j) = 255;
        }
    }
}
But this is very clumsy and, I think, inefficient. In this case I "fill" two rectangular regions with 255, but there is no viable way to change the number of regions I fill, besides using a switch-case and repeating the code n times.
Can anyone think of something more intelligent? I would rather not use third-party stuff (besides OpenCV ;) ), and I am using Visual Studio 2012.
Use cv::rectangle():
//bounds are inclusive in this code!
cv::Rect region(x_lowbound1, y_lowbound1,
                x_highbound1 - x_lowbound1 + 1,
                y_highbound1 - y_lowbound1 + 1);
cv::rectangle(mymask, region, cv::Scalar(255), CV_FILLED);
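If the number of regions varies, you can keep the rectangles in a std::vector and loop over it; a short sketch (the function name and the commented usage are illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

// Build a mask with any number of filled rectangular regions.
cv::Mat makeMask(cv::Size size, const std::vector<cv::Rect> &regions)
{
    cv::Mat mask = cv::Mat::zeros(size, CV_8UC1);
    for (size_t k = 0; k < regions.size(); k++)
        cv::rectangle(mask, regions[k], cv::Scalar(255), CV_FILLED);  // CV_FILLED == -1
    return mask;
}

// Usage (bounds inclusive, as above):
//   std::vector<cv::Rect> regions;
//   regions.push_back(cv::Rect(x_lowbound1, y_lowbound1,
//                              x_highbound1 - x_lowbound1 + 1,
//                              y_highbound1 - y_lowbound1 + 1));
//   cv::Mat mymask = makeMask(image.size(), regions);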
I painted a picture to test:
I want to know how many blobs there are inside the black circle and the size of each blob (all the blobs are roughly white).
For example, in this case I have 12 spots:
I know how to find white pixels, and it is easy to check for a sequence from the left:
int whitePixels = 0;
for (int i = 0; i < height; ++i)
{
    uchar *pixel = image.ptr<uchar>(i);
    for (int j = 0; j < width; ++j)
    {
        if (j > 0 && pixel[j-1] == 0)   // to group pixels for one spot
            whitePixels++;
    }
}
but it's clear that this code is not good enough (blobs can extend diagonally, etc.).
So, the bottom line: I need help. How can I detect the blobs?
Thank you
The following code finds bounding rects (blobs) for all white spots.
Remark: if we can assume the white spots are really white (namely, have the value 255 in the grayscale image), you can use this snippet. Consider putting it in a class to avoid passing unnecessary parameters to the function Traverse; it works as it is, though. The idea is based on DFS. Apart from the grayscale image, we keep an ids matrix to assign and remember which pixel belongs to which blob (all pixels having the same id belong to the same blob).
#include <opencv2/opencv.hpp>
#include <stack>
#include <vector>

// Iterative DFS flood fill: labels the blob containing (xs, ys) with blobID
// and updates its bounding box (leftTop, rightBottom).
void Traverse(int xs, int ys, cv::Mat &ids, cv::Mat &image, int blobID,
              cv::Point &leftTop, cv::Point &rightBottom)
{
    std::stack<cv::Point> S;
    S.push(cv::Point(xs, ys));
    while (!S.empty())
    {
        cv::Point u = S.top();
        S.pop();
        int x = u.x;
        int y = u.y;
        // Skip background pixels and pixels that already belong to a blob
        if (image.at<unsigned char>(y, x) == 0 || ids.at<unsigned char>(y, x) > 0)
            continue;
        ids.at<unsigned char>(y, x) = blobID;
        // Grow the bounding box
        if (x < leftTop.x)     leftTop.x = x;
        if (x > rightBottom.x) rightBottom.x = x;
        if (y < leftTop.y)     leftTop.y = y;
        if (y > rightBottom.y) rightBottom.y = y;
        // Push 4-connected neighbours
        if (x > 0)            S.push(cv::Point(x - 1, y));
        if (x < ids.cols - 1) S.push(cv::Point(x + 1, y));
        if (y > 0)            S.push(cv::Point(x, y - 1));
        if (y < ids.rows - 1) S.push(cv::Point(x, y + 1));
    }
}
int FindBlobs(cv::Mat &image, std::vector<cv::Rect> &out, float minArea)
{
    // ids stores the blob label of every pixel (CV_8UC1 limits this to 255 blobs)
    cv::Mat ids = cv::Mat::zeros(image.rows, image.cols, CV_8UC1);
    cv::Mat thresholded;
    cv::cvtColor(image, thresholded, CV_RGB2GRAY);
    const int thresholdLevel = 130;
    cv::threshold(thresholded, thresholded, thresholdLevel, 255, CV_THRESH_BINARY);

    int blobId = 1;
    for (int x = 0; x < ids.cols; x++)
        for (int y = 0; y < ids.rows; y++)
        {
            // Start a new blob at every white pixel that has no label yet
            if (thresholded.at<unsigned char>(y, x) > 0 && ids.at<unsigned char>(y, x) == 0)
            {
                cv::Point leftTop(ids.cols - 1, ids.rows - 1), rightBottom(0, 0);
                Traverse(x, y, ids, thresholded, blobId++, leftTop, rightBottom);
                cv::Rect r(leftTop, rightBottom);
                if (r.area() > minArea)
                    out.push_back(r);
            }
        }
    return blobId - 1;   // number of blobs labelled (including those below minArea)
}
EDIT: I fixed a bug and lowered the threshold level; the output is given below. I think it is a good starting point.
EDIT2: I got rid of the recursion in Traverse(); on bigger images the recursion caused a stack overflow.
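A short usage sketch for the functions above; the file name, the minimum area, and the drawing colour are placeholders:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

// Traverse() and FindBlobs() as defined above.

int main()
{
    cv::Mat image = cv::imread("spots.png");   // assumed input file
    if (image.empty()) return -1;

    std::vector<cv::Rect> blobs;
    FindBlobs(image, blobs, 4.0f);             // drop blobs of roughly 4 px or less

    for (size_t i = 0; i < blobs.size(); i++)
        cv::rectangle(image, blobs[i], cv::Scalar(0, 0, 255), 1);

    std::cout << "blobs kept: " << blobs.size() << std::endl;
    cv::imshow("blobs", image);
    cv::waitKey(0);
    return 0;
}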