I'm working with OpenCV 4 in ROS Melodic. After undistort(), the images have a black border region, and SURF detects keypoints in it. How can I fix this?
I found a solution thanks to Micka's comment: I filter the features during the Lowe ratio test:
//-- Filter matches using Lowe's ratio test, and reject keypoints
//-- that fall inside the black border left by undistort()
//-- Default ratio_thresh: 0.7f
vector<DMatch> matches;
for (size_t i = 0; i < knn_matches.size(); i++)
{
    bool lowe_condition = (knn_matches[i][0].distance < ratio_thresh * knn_matches[i][1].distance);
    // True when the keypoint lies inside the valid (non-black) image region
    bool valid_region_condition =
        (keypoints1[i].pt.x >= width_low)  && (keypoints1[i].pt.x <= width_high) &&
        (keypoints1[i].pt.y >= height_low) && (keypoints1[i].pt.y <= height_high);
    if (lowe_condition && valid_region_condition)
    {
        matches.push_back(knn_matches[i][0]);
    }
}
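If you need the bounds for that filter, one way to get them (a sketch, assuming cameraMatrix, distCoeffs and imageSize come from your calibration, and that you undistort with the returned new camera matrix) is to ask getOptimalNewCameraMatrix for the valid pixel ROI:
cv::Rect validRoi;
cv::Mat newCameraMatrix = cv::getOptimalNewCameraMatrix(cameraMatrix, distCoeffs,
                                                        imageSize, 1.0, imageSize, &validRoi);
// Bounds of the region free of black border pixels
int width_low   = validRoi.x;
int width_high  = validRoi.x + validRoi.width;
int height_low  = validRoi.y;
int height_high = validRoi.y + validRoi.height;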
How can I get all objects from an image? I am separating the objects by color.
There are about 20 colors in the following image, and I want to extract every color and its position into a vector (Vec3b and Rect).
I'm using the EGBIS algorithm for segmentation.
Segmented image
Mat src, dst;
String imageName("/home/pathToImage.jpg");
src = imread(imageName, 1);
if (src.rows < 1)
    return -1;
vector<Vec3b> colors;
// Sample every 5th pixel and collect the distinct colors
// (note: at<Vec3b>(row, col) -- the original passed Point(i,j) with swapped coordinates)
for (int i = 0; i < src.rows; i += 5)
{
    for (int j = 0; j < src.cols; j += 5)
    {
        Vec3b color = src.at<Vec3b>(i, j);
        if (colors.empty())
        {
            colors.push_back(color);
        }
        else
        {
            bool add = true;
            for (size_t k = 0; k < colors.size(); k++)
            {
                // Vec3b is stored in BGR order
                int bmin = colors[k].val[0] - 5, bmax = colors[k].val[0] + 5,
                    gmin = colors[k].val[1] - 5, gmax = colors[k].val[1] + 5,
                    rmin = colors[k].val[2] - 5, rmax = colors[k].val[2] + 5;
                if ((color.val[0] >= bmin && color.val[0] <= bmax) &&
                    (color.val[1] >= gmin && color.val[1] <= gmax) &&
                    (color.val[2] >= rmin && color.val[2] <= rmax))
                {
                    add = false;
                    break;
                }
            }
            if (add)
                colors.push_back(color);
        }
    }
}
for (size_t i = 0; i < colors.size(); i++)
{
    Mat inrangeImage;
    cv::inRange(src,
                cv::Scalar(colors[i].val[0] - 1, colors[i].val[1] - 1, colors[i].val[2] - 1),
                cv::Scalar(colors[i].val[0] + 1, colors[i].val[1] + 1, colors[i].val[2] + 1),
                inrangeImage);
    // Write each mask to its own file (the original overwrote one file every iteration)
    imwrite("/home/kavtech/Segmentation/1/opencv-wrapper-egbis/images/inrangeImage" + std::to_string(i) + ".jpg", inrangeImage);
}
/// Display
namedWindow("Image", WINDOW_AUTOSIZE);
imshow("Image", src);
waitKey(0);
I want to get each color's position so that I can differentiate the object positions. Please help!
That's just a trivial data-formatting problem. You want to turn a truecolour image with only 20 or so colours into a colour-indexed image.
So simply step through the image, look up each colour in your growing dictionary, and assign an integer 0-20 to each pixel.
Now you can turn the image into binary images simply by saying one colour is set and the rest are clear, and use standard algorithms for fitting rectangles. A sketch of this idea is below.
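A minimal sketch of that indexing approach, assuming OpenCV and a clean segmentation (every object pixel has exactly one of the ~20 colours); the file name is illustrative:
#include <opencv2/opencv.hpp>
#include <map>
#include <iostream>

int main()
{
    cv::Mat src = cv::imread("/home/pathToImage.jpg");
    if (src.empty()) return -1;
    // Dictionary: packed BGR value -> label index
    std::map<int, int> labelOf;
    cv::Mat labels(src.size(), CV_8U);
    for (int y = 0; y < src.rows; y++)
        for (int x = 0; x < src.cols; x++)
        {
            cv::Vec3b c = src.at<cv::Vec3b>(y, x);
            int key = (c[0] << 16) | (c[1] << 8) | c[2];
            auto it = labelOf.find(key);
            if (it == labelOf.end())
                it = labelOf.emplace(key, (int)labelOf.size()).first;
            labels.at<uchar>(y, x) = (uchar)it->second;
        }
    // One binary mask per colour; boundingRect gives the position (Rect)
    for (const auto& kv : labelOf)
    {
        cv::Mat mask = (labels == kv.second); // this colour set, the rest clear
        cv::Rect box = cv::boundingRect(mask);
        std::cout << "label " << kv.second << " rect " << box << std::endl;
    }
    return 0;
}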
Everywhere online you can find little tutorials on the different parts of a BOW pipeline, but (from what I've found, anyway) nothing on what you do after:
bowDE.setVocabulary(dictionary);
...
bowDE.compute(image, keypoints, descriptors);
Once you've used the BOWImgDescriptorExtractor to compute, what do you do next?
How do you find out what is a good match and what is not?
And can you then utilize that information? If so, how?
If you have both the descriptors and the extractor, you can use a matcher to find matches.
Here is a sample function:
void drawMatches(const Mat& Img1, const Mat& Img2,
                 const vector<KeyPoint>& Keypoints1, const vector<KeyPoint>& Keypoints2,
                 const Mat& Descriptors1, const Mat& Descriptors2)
{
    Ptr<DescriptorMatcher> descriptorMatcher = DescriptorMatcher::create("BruteForce");
    vector<DMatch> matches;
    descriptorMatcher->match(Descriptors1, Descriptors2, matches);
    Mat matchImg;
    drawMatches(Img1, Keypoints1, Img2, Keypoints2, matches, matchImg,
                Scalar::all(-1), CV_RGB(255, 255, 255), Mat(),
                DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    imshow("match", matchImg); // the original showed an undefined Mat "show"
}
Once you get those matches, you can determine which matches are "good" by inspecting their max distance, average distance, total match size and so on.
There is also an official tutorial about how to use those descriptors and keypoints to get matches:
Features2D + Homography to find a known object
Although it uses a different feature detector from yours, you can still use the matching part of the article.
Update:
There is no way to give a definitive answer as to whether a match is "correct", but you can inspect the distances of the matching pairs.
Here is an example of "wrong" matches and "right" matches, using the SIFT feature detector and the BruteForce matcher.
Part of the code:
size_t matches_size = matches.size();
for (size_t i = 0; i < matches_size; i++)
{
    if (matches[i].distance < MY_GOOD_DISTANCE) // you can read the matching distance like this
    {
        good_matches.push_back(matches[i]);
    }
}
Here is a correct match. After computing the matches, I listed their distances:
27.7669 43.715 45.2217 47.4552 53.1601 54.074 57.3672 58.2924 59.0593 63.3009
63.6475 64.1093 64.8922 67.0075 70.9718 73.4507 74.0878 76.6225 76.6551 80.075
81.2219 82.2192 83.6959 89.2412 90.7855 91.4604 95.3363 95.352 95.6033 98.209
98.3362 98.3412 99.4082 101.035 104.024 109.567 110.095 110.345 112.858 118.339
119.311 123.976 125.948 126.625 128.02 128.269 130.219 133.015 135.739 138.43
144.499 146.055 146.492 147.054 152.925 160.044 161.165 168.899 170.871 179.881
183.39 183.573 187.061 192.764 192.961 194.268 194.44 196.489 202.255 204.854
230.643 230.92 231.961 233.238 235.253 236.023 244.225 246.337 253.829 260.384
261.383 263.934 266.933 269.232 272.586 273.651 283.891 289.261 291.805 297.165
297.22 297.627 304.132 307.633 307.695 314.798 325.294 334.74 335.272 344.17
352.095 353.456 354.144 357.398 363.762 366.344 367.301 368.977 371.102 371.44
371.863 372.459 372.85 373.17 376.082 378.844 382.372 389.01 389.704 397.028
398.236 400.53 414.523 417.628 422.61 430.731 461.3
Min value: 27.76
Max value: 461.3
Average: 210.2526882
And here is a wrong match:
336.161 437.132 310.587 376.245 368.683 449.708 334.148 354.79 333.981 399.794 368.889
361.653 341.778 266.443 259.365 338.726 352.789 381.097 427.143 350.732 355.522 349.819
358.569 373.139 348.201 341.923 383.188 378.233 399.844 294.16 505.107 347.978 314.021
332.983 335.364 403.217 385.8 408.859 381.472 372.078 434.167 436.489 279.646 253.271
268.522 376.303 418.071 373.3 369.004 272.145 254.448 408.185 326.351 351.886 333.981
371.59 440.336 230.558 250.928 337.368 288.579 262.107 409.971 339.391 380.58 374.162
361.96 392.59 345.936 328.691 383.586 398.986 336.283 365.768 492.984 392.379 377.042
371.652 279.014 370.849 378.213 351.048 311.148 319.168 324.268 319.191 261.555 339.257
298.572 241.622 406.977 286.068 438.586
Min value: 230
Max value: 505
Average: 352.6009711
After you get the distances of all matches, you can easily see which are "good" matches and which are "bad".
Here's the scoring part. It is a little tricky and highly data-dependent.
MY_AVG_DISTANCE, MY_LEAST_DISTANCE, MY_MIN_DISTANCE, MY_MAX_DISTANCE and MY_GOOD_DISTANCE are thresholds you should select carefully. Check your own matching distances and pick values for them.
int good_size = good_matches.size() > 30 ? 30 : good_matches.size(); // cap it, in case there are too many "good" matches
//...
//=========== SCORE ============
double avg = 0; // average distance of the mid-range matches
int avgCount = 0;
int goodCount = 0;
for (size_t i = 0; i < matches.size(); i++)
{
    double dist = matches[i].distance;
    if (dist < MY_AVG_DISTANCE && dist > MY_LEAST_DISTANCE)
    {
        avg += dist;
        avgCount++;
    }
    if (dist < MY_GOOD_DISTANCE && dist > MY_LEAST_DISTANCE)
    {
        goodCount++;
    }
}
if (avgCount > 6) {
    avg /= avgCount;
    if (goodCount < 12) {
        avg = avg + (12 - goodCount) * 4; // penalize images with few good matches
    }
} else {
    avg = MY_MAX_DISTANCE; // too few usable matches: worst score
}
// Clamp to [MY_MIN_DISTANCE, MY_AVG_DISTANCE], then map linearly to 0-100
avg = avg > MY_AVG_DISTANCE ? MY_AVG_DISTANCE : avg;
avg = avg < MY_MIN_DISTANCE ? MY_MIN_DISTANCE : avg;
double score_avg = (MY_AVG_DISTANCE - avg) / (MY_AVG_DISTANCE - MY_MIN_DISTANCE) * 100;
if (formsHomography) { // some bonus... not related to your matching method, but you can adopt something like this
    score_avg += 40;
    score_avg = score_avg > 100 ? 100 : score_avg;
} else {
    score_avg -= 5;
    score_avg = score_avg < 0 ? 0 : score_avg;
}
return score_avg;
You can find a simple implementation of bag of words in C++ here, so you don't need to depend on OpenCV.
#include <cctype>
#include <cmath>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <unordered_set>

class Statistics {
    std::unordered_map<std::string, int64_t> _counts;
    int64_t _totWords = 0; // must be initialized; IsEmpty() relies on it
    void process(std::string& token);
public:
    explicit Statistics(const std::string& text);
    double Dist(const Statistics& fellow) const;
    bool IsEmpty() const { return _totWords == 0; }
};

namespace {
    const std::string gPunctStr = ".,;:!?";
    const std::unordered_set<char> gPunctSet(gPunctStr.begin(), gPunctStr.end());
}
Statistics::Statistics(const std::string& text) {
    std::string lastToken;
    for (size_t i = 0; i < text.size(); i++) {
        int ch = static_cast<uint8_t>(text[i]);
        if (!isspace(ch)) {
            lastToken.push_back(tolower(ch));
            continue;
        }
        process(lastToken);
    }
    process(lastToken); // flush the final token
}
void Statistics::process(std::string& token) {
    // Strip at most one trailing punctuation character
    do {
        if (token.size() == 0) {
            break;
        }
        if (gPunctSet.find(token.back()) != gPunctSet.end()) {
            token.pop_back();
        }
    } while (false);
    if (token.size() != 0) {
        auto it = _counts.find(token);
        if (it == _counts.end()) {
            _counts.emplace(token, 1);
        }
        else {
            it->second++;
        }
        _totWords++;
        token.clear();
    }
}
double Statistics::Dist(const Statistics& fellow) const {
    // Euclidean distance between the word-frequency vectors;
    // note only words present in *this contribute to the sum
    double sum = 0;
    for (const auto& wordInfo : _counts) {
        const std::string& wordText = wordInfo.first;
        const double freq = double(wordInfo.second) / _totWords;
        auto it = fellow._counts.find(wordText);
        double fellowFreq;
        if (it == fellow._counts.end()) {
            fellowFreq = 0;
        }
        else {
            fellowFreq = double(it->second) / fellow._totWords;
        }
        const double d = freq - fellowFreq;
        sum += d * d;
    }
    return std::sqrt(sum);
}
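A hypothetical usage sketch of the class above (the sentences are made up, just to show the API):
#include <iostream>

int main()
{
    Statistics a("the quick brown fox jumps over the lazy dog.");
    Statistics b("the quick red fox leaps over a sleepy cat.");
    if (!a.IsEmpty() && !b.IsEmpty())
        std::cout << "distance: " << a.Dist(b) << std::endl; // smaller means more similar
    return 0;
}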
I'm doing a project in OpenCV.
The goal is to recognize 2D chess pieces on a chessboard and their locations.
The algorithm goes like this:
1. Run Canny on the picture.
2. Run HoughLines on the image.
3. Merge close lines.
4. Sort the lines into vertical and horizontal ones.
5. Sort the lines and choose all the border lines + 1 (so we get the actual playing-area lines and not the outer board lines).
6. Find the intersections of the border lines (4 corners).
7. Do a homography with the found corners (a minimal sketch of this step is shown after the problem description below).
8. Split the new image into 8x8 cells (length/8, width/8).
9. Detect the pieces on the board and their locations using template matching.
Right now I'm stuck on doing the homography:
1. Sometimes I detect the lines of the outer board, sometimes I don't, and sometimes I detect 2 lines on the outer board.
2. The merge-lines function I found online does not seem to do a good job; there can be two vertical lines right on top of each other that it does not merge.
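For reference, here is a minimal sketch of step 7, assuming tl, tr, br, bl are the four border intersections from step 6, ordered top-left, top-right, bottom-right, bottom-left (the 800x800 target size is arbitrary):
std::vector<cv::Point2f> corners = { tl, tr, br, bl };                    // from step 6
std::vector<cv::Point2f> target  = { {0, 0}, {800, 0}, {800, 800}, {0, 800} };
cv::Mat H = cv::getPerspectiveTransform(corners, target);
cv::Mat board;
cv::warpPerspective(image, board, H, cv::Size(800, 800));                 // rectified board, ready to split into 8x8 cells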
void processChessboardImage(Mat& image)
{
    Vec2f temp(0, 0);
    vector<Vec2f> lines;
    vector<Vec2f> borderLines;
    vector<Vec2f> verticalLines;
    vector<Vec2f> horizontalLines;
    Mat canny_output, hough_output;
    canny_output = CannyOnImage(image);
    //cvtColor(canny_output, image, CV_GRAY2BGR);
    HoughLines(canny_output, lines, 1, CV_PI/180, 120, 0, 0);
    // At this point we have all straight lines in the image
    MergeRelatedLines(&lines, canny_output);
    for (size_t i = 0; i < lines.size(); i++)
    {
        if (lines[i][1] != -100) // skip lines marked as merged
        {
            if ((lines[i][1] >= 0 && lines[i][1] < 1) || lines[i][1] >= 3) // theta near 0 or pi: vertical
            {
                verticalLines.push_back(lines[i]);
            }
            else // horizontal
            {
                horizontalLines.push_back(lines[i]);
            }
        }
    }
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Vec2f& elem1, const Vec2f& elem2){ return elem1[0]*cos(elem1[1]) < elem2[0]*cos(elem2[1]); });
    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Vec2f& elem1, const Vec2f& elem2){ return elem1[0] < elem2[0]; });
    int numVerticalLines = verticalLines.size();
    int numHorizontalLines = horizontalLines.size();
    borderLines.push_back(verticalLines[0]);
    borderLines.push_back(verticalLines[verticalLines.size() - 1]);
    borderLines.push_back(horizontalLines[0]);
    borderLines.push_back(horizontalLines[horizontalLines.size() - 1]);
}
void MergeRelatedLines(vector<Vec2f>* lines, Mat& img)
{
    vector<Vec2f>::iterator current;
    vector<Vec4i> points(lines->size());
    for (current = lines->begin(); current != lines->end(); current++)
    {
        if ((*current)[0] == 0 && (*current)[1] == -100) // already merged
            continue;
        float p1 = (*current)[0];
        float theta1 = (*current)[1];
        Point pt1current, pt2current;
        // Compute the two endpoints of the current line on the image border
        if (theta1 > CV_PI*45/180 && theta1 < CV_PI*135/180) // roughly horizontal
        {
            pt1current.x = 0;
            pt1current.y = p1/sin(theta1);
            pt2current.x = img.size().width;
            pt2current.y = -pt2current.x/tan(theta1) + p1/sin(theta1);
        }
        else // roughly vertical
        {
            pt1current.y = 0;
            pt1current.x = p1/cos(theta1);
            pt2current.y = img.size().height;
            pt2current.x = -pt2current.y/tan(theta1) + p1/cos(theta1);
        }
        vector<Vec2f>::iterator pos;
        for (pos = lines->begin(); pos != lines->end(); pos++)
        {
            if (*current == *pos)
                continue;
            // Candidate for merging: rho within 20 px and theta within 10 degrees
            if (fabs((*pos)[0] - (*current)[0]) < 20 && fabs((*pos)[1] - (*current)[1]) < CV_PI*10/180)
            {
                float p = (*pos)[0];
                float theta = (*pos)[1];
                Point pt1, pt2;
                if ((*pos)[1] > CV_PI*45/180 && (*pos)[1] < CV_PI*135/180)
                {
                    pt1.x = 0;
                    pt1.y = p/sin(theta);
                    pt2.x = img.size().width;
                    pt2.y = -pt2.x/tan(theta) + p/sin(theta);
                }
                else
                {
                    pt1.y = 0;
                    pt1.x = p/cos(theta);
                    pt2.y = img.size().height;
                    pt2.x = -pt2.y/tan(theta) + p/cos(theta);
                }
                // Merge only if both endpoint pairs are within 64 px of each other
                if (((double)(pt1.x - pt1current.x)*(pt1.x - pt1current.x)
                     + (pt1.y - pt1current.y)*(pt1.y - pt1current.y) < 64*64)
                    && ((double)(pt2.x - pt2current.x)*(pt2.x - pt2current.x)
                        + (pt2.y - pt2current.y)*(pt2.y - pt2current.y) < 64*64))
                {
                    printf("Merging\n");
                    // Average the two lines and mark the second one as merged
                    (*current)[0] = ((*current)[0] + (*pos)[0])/2;
                    (*current)[1] = ((*current)[1] + (*pos)[1])/2;
                    (*pos)[0] = 0;
                    (*pos)[1] = -100;
                }
            }
        }
    }
}
Example images:
good image
bad image
For the bad image this code works, but of course I need something more generic...
borderLines.push_back(verticalLines[1]);
borderLines.push_back(verticalLines[verticalLines.size() - 3]);
borderLines.push_back(horizontalLines[1]);
borderLines.push_back(horizontalLines[horizontalLines.size() - 2]);
Thanks in advance for your help!
I painted a picture to test:
And I want to know how many blobs I have in the black circle and what the size of each blob is (all blobs are roughly white).
For example, in this case I have 12 spots:
I know how to find white pixels, and it is easy to check runs from the left:
int whitePixels = 0;
for (int i = 0; i < height; ++i)
{
    uchar* pixel = image.ptr<uchar>(i);
    for (int j = 0; j < width; ++j)
    {
        if (pixel[j] > 0 && j > 0 && pixel[j-1] == 0) // count a new run where black turns to white
            whitePixels++;
    }
}
but it's clear that this code is not good enough (blobs can connect diagonally, etc.).
So, the bottom line, I need help: how can I identify the blobs?
Thank you.
The following code finds the bounding rects (blobs) of all white spots.
Remark: if we can assume the white spots are really white (namely, have value 255 in the grayscale image), you can use this snippet. Consider putting it in a class to avoid passing unnecessary parameters to the function Traverse; it works as-is, though. The idea is based on DFS. Apart from the grayscale image, we have an ids matrix to assign and remember which pixel belongs to which blob (all pixels having the same id belong to the same blob).
void Traverse(int xs, int ys, cv::Mat& ids, cv::Mat& image, int blobID, cv::Point& leftTop, cv::Point& rightBottom) {
    // Iterative DFS flood fill; note ids is CV_8UC1, so at most 255 blobs are supported
    std::stack<cv::Point> S;
    S.push(cv::Point(xs, ys));
    while (!S.empty()) {
        cv::Point u = S.top();
        S.pop();
        int x = u.x;
        int y = u.y;
        if (image.at<unsigned char>(y, x) == 0 || ids.at<unsigned char>(y, x) > 0)
            continue; // background pixel, or already labelled
        ids.at<unsigned char>(y, x) = blobID;
        // Grow the bounding box of this blob
        if (x < leftTop.x)
            leftTop.x = x;
        if (x > rightBottom.x)
            rightBottom.x = x;
        if (y < leftTop.y)
            leftTop.y = y;
        if (y > rightBottom.y)
            rightBottom.y = y;
        // Push the 4-connected neighbours
        if (x > 0)
            S.push(cv::Point(x-1, y));
        if (x < ids.cols-1)
            S.push(cv::Point(x+1, y));
        if (y > 0)
            S.push(cv::Point(x, y-1));
        if (y < ids.rows-1)
            S.push(cv::Point(x, y+1));
    }
}
int FindBlobs(cv::Mat& image, std::vector<cv::Rect>& out, float minArea) {
    cv::Mat ids = cv::Mat::zeros(image.rows, image.cols, CV_8UC1);
    cv::Mat thresholded;
    cv::cvtColor(image, thresholded, cv::COLOR_RGB2GRAY);
    const int thresholdLevel = 130;
    cv::threshold(thresholded, thresholded, thresholdLevel, 255, cv::THRESH_BINARY);
    int blobId = 1;
    for (int x = 0; x < ids.cols; x++)
        for (int y = 0; y < ids.rows; y++) {
            // Start a new blob at every unlabelled white pixel
            if (thresholded.at<unsigned char>(y, x) > 0 && ids.at<unsigned char>(y, x) == 0) {
                cv::Point leftTop(ids.cols-1, ids.rows-1), rightBottom(0, 0);
                Traverse(x, y, ids, thresholded, blobId++, leftTop, rightBottom);
                cv::Rect r(leftTop, rightBottom);
                if (r.area() > minArea)
                    out.push_back(r);
            }
        }
    return blobId - 1; // blobId was post-incremented, so this is the number of blobs
}
EDIT: I fixed a bug and lowered the threshold level, and now the output is given below. I think it is a good starting point.
EDIT2: I got rid of the recursion in Traverse(); on bigger images the recursion caused a stack overflow.
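A hypothetical usage of FindBlobs (the file name and minimum area are illustrative):
cv::Mat img = cv::imread("spots.png");
std::vector<cv::Rect> blobs;
int n = FindBlobs(img, blobs, 4.0f); // ignore blobs smaller than 4 px
std::cout << blobs.size() << " blobs kept out of " << n << std::endl;
for (const cv::Rect& r : blobs)
    std::cout << "blob at " << r.tl() << ", size " << r.size() << std::endl;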
Using
Mat image;
I used
inRange(image, Scalar(170,100,0), Scalar(255,255,70), image);
and I detect the object in blue, but I can't draw a rectangle around it.
Should I use a mask or something?
inRange(image, Scalar(170,100,0), Scalar(255,255,70), image);
GaussianBlur(image, image, Size(9,9), 1.5);
for (int i = 2; i < image.cols-2; i++)
    for (int j = 2; j < image.rows-2; j++) {
        if (image.at<Vec3b>(i-1,j-1)[0] > 200 &&
            image.at<Vec3b>(i-1,j)[0]   > 200 &&
            image.at<Vec3b>(i-1,j+1)[0] > 200 &&
            image.at<Vec3b>(i,j-1)[0]   > 200 &&
            image.at<Vec3b>(i,j)[0]     > 200 &&
            image.at<Vec3b>(i,j+1)[0]   > 200 &&
            image.at<Vec3b>(i+1,j-1)[0] > 200 &&
            image.at<Vec3b>(i+1,j)[0]   > 200 &&
            image.at<Vec3b>(i+1,j+1)[0] > 200)
        {
            if (min_x > i)
                min_x = i;
            if (min_y > j)
                min_y = j;
            if (max_x < i)
                max_x = i;
            if (max_y < j)
                max_y = j;
        }
    }
if (!(max_x == 0 && max_y == 0 && min_x == image.rows && min_y == image.cols))
{
    rectangle(image, Point(min_x,min_y), Point(max_x,max_y), CV_RGB(255,0,0), 2);
}
imshow("working", image);
if (waitKey(100) >= 0) break;
}
}
This isn't working; I get a runtime error and I don't know why. Help me!
Some tips:
Your input image might be CV_8UC3, but inRange outputs a single-channel CV_8U mask, so it is better to write the output into a new Mat instance.
Use cv::findContours to detect your area.
Study mean shift, which OpenCV uses for tracking; it might help you.
You cannot reliably define "blue" in RGB space for the inRange method; there are too many "blue" possibilities there. You should transform your image to the HSV color space and then use the hue range of blue, which is 95-135.
inRange(image, Scalar(95,0,0), Scalar(135,255,255), image);
The result will be a binary image; just find the contours and draw a bounding rectangle around them.
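A minimal sketch of that approach, assuming a BGR input image (the hue range 95-135 is from the answer above; the saturation/value bounds are left wide open as in the snippet):
cv::Mat hsv, mask;
cv::cvtColor(image, hsv, cv::COLOR_BGR2HSV);  // convert first; don't overwrite the input
cv::inRange(hsv, cv::Scalar(95, 0, 0), cv::Scalar(135, 255, 255), mask);
std::vector<std::vector<cv::Point>> contours;
cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (const auto& c : contours)
    cv::rectangle(image, cv::boundingRect(c), CV_RGB(255, 0, 0), 2); // box each blue region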