Problems with for loop and Points - C++

So I'm trying to clean up my code, as I had too many individually written Points. I came up with the idea of using a for loop instead, but unfortunately I can't seem to get it to work.
I've changed my points into CvPoint arrays and made a for loop, but I can't get it to work either.
Anyone know how I can make this work? My error is: cannot convert CvPoint to int.
My functions:
bool FindWhiteLine(Vec3b white)
{
    bool color = false;
    uchar blue = white.val[0];
    uchar green = white.val[1];
    uchar red = white.val[2];
    if (blue == 255 && green == 255 && red == 255)
    {
        color = true;
    }
    return color;
}
// Extends the line until a white line is found
CvPoint DrawingLines(Mat img, CvPoint point, bool right)
{
    int cols = img.cols;
    Vec3b drawingLine = img.at<Vec3b>(point); // color at the current position
    while (point.x != cols) {
        if (right) {
            point.x = point.x + 1; // extend the line to the right
        } else {
            point.x = point.x - 1; // extend the line to the left
        }
        drawingLine = img.at<cv::Vec3b>(point);
        if (FindWhiteLine(drawingLine)) { // quit when a white line is found
            break;
        }
    }
    return point;
}
My main:
void LaneDetector::processImage() {
    // Handling images: http://docs.opencv.org/doc/user_guide/ug_mat.html
    Mat matImg(m_image);
    Mat gray; // for converting to gray
    cvtColor(matImg, gray, CV_BGR2GRAY); // let's make the image gray
    Mat canny; // Canny for detecting edges, http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
    Canny(gray, canny, 50, 170, 3); // inputting the Canny limits
    cvtColor(canny, matImg, CV_GRAY2BGR); // convert back from gray
    // Get the matrix size: http://docs.opencv.org/modules/core/doc/basic_structures.html
    int rows = matImg.rows;
    int cols = matImg.cols;
    // Points
    Point centerPoint; // old way
    Point centerPointEnd;
    CvPoint startPos[4], endXRight[4], endxLeft[4]; // new way I tried
    for (int i = 0; i < 4; i++) {
        startPos[i].x = cols / 2;
        endXRight[i].x = DrawingLines(matImg, endXRight[i], true); // error here
        endxLeft[i].x = DrawingLines(matImg, endxLeft[i], false);
    }
    if (m_debug) {
        line(matImg, centerPoint, centerPointEnd, cvScalar(0, 0, 255), 2, 8);
        for (int i = 0; i < 4; i++) {
            line(matImg, startPos[i], endXRight[i], cvScalar(0, 0, 255), 2, 8);
            line(matImg, startPos[i], endxLeft[i], cvScalar(0, 0, 255), 2, 8);
        }
    }
}
Error code:
/home/nicho/2015-mini-smart-vehicles/project-template/sources/OpenDaVINCI-msv/apps/lanedetector/src/LaneDetector.cpp:176:25: error: cannot convert ‘CvPoint’ to ‘int’ in assignment
endXRight[i].x = DrawingLines(matImg,endXRight[i],true);
/home/nicho/2015-mini-smart-vehicles/project-template/sources/OpenDaVINCI-msv/apps/lanedetector/src/LaneDetector.cpp:177:24: error: cannot convert ‘CvPoint’ to ‘int’ in assignment
endxLeft[i].x = DrawingLines(matImg,endxLeft[i],false);

The error couldn't be much clearer. The function returns a value of type CvPoint, and you try to assign it to a variable of type int. That can't be done because you can't convert CvPoint to int.
It looks like you want to assign to the point itself, not one of its co-ordinates:
endXRight[i] = DrawingLines(matImg,endXRight[i],true);
^ remove .x
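A minimal sketch of the corrected loop (note the posted code also never initializes the points' y fields, or the end points before passing them to DrawingLines, so this seeds them from startPos; the starting row is illustrative):
for (int i = 0; i < 4; i++) {
    startPos[i].x = cols / 2;
    startPos[i].y = rows / 2;    // illustrative: pick some scan row
    endXRight[i] = startPos[i];  // seed the scans from the start position
    endxLeft[i] = startPos[i];
    endXRight[i] = DrawingLines(matImg, endXRight[i], true);  // assign the whole CvPoint
    endxLeft[i] = DrawingLines(matImg, endxLeft[i], false);
}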

Related

OpenCV: Get all objects from a segmented colorful image

How do I get all objects from an image? I am separating image objects by color.
There are almost 20 colors in the following image. I want to extract all the colors and their positions into a vector (Vec3b and Rect).
I'm using the EGBIS algorithm for segmentation.
Segmented image
Mat src, dst;
String imageName("/home/pathToImage.jpg");
std::vector<Vec3b> colors; // dictionary of colors seen so far
src = imread(imageName, 1);
if (src.rows < 1)
    return -1;
for (int i = 0; i < src.rows; i = i + 5)
{
    for (int j = 0; j < src.cols; j = j + 5)
    {
        Vec3b color = src.at<Vec3b>(i, j); // row i, column j
        if (colors.empty())
        {
            colors.push_back(color);
        }
        else
        {
            bool add = true;
            for (int k = 0; k < colors.size(); k++)
            {
                // note: val[0] is blue in BGR order, despite the r/g/b names
                int rmin = colors[k].val[0] - 5, rmax = colors[k].val[0] + 5,
                    gmin = colors[k].val[1] - 5, gmax = colors[k].val[1] + 5,
                    bmin = colors[k].val[2] - 5, bmax = colors[k].val[2] + 5;
                if (color.val[0] >= rmin && color.val[0] <= rmax &&
                    color.val[1] >= gmin && color.val[1] <= gmax &&
                    color.val[2] >= bmin && color.val[2] <= bmax)
                {
                    add = false;
                    break;
                }
            }
            if (add)
                colors.push_back(color);
        }
    }
}
int size = colors.size();
for (int i = 0; i < colors.size(); i++)
{
    Mat inrangeImage;
    //cv::inRange(src, Scalar(lowBlue, lowGreen, lowRed), Scalar(highBlue, highGreen, highRed), redColorOnly);
    cv::inRange(src, cv::Scalar(colors[i].val[0] - 1, colors[i].val[1] - 1, colors[i].val[2] - 1),
                cv::Scalar(colors[i].val[0] + 1, colors[i].val[1] + 1, colors[i].val[2] + 1), inrangeImage);
    imwrite("/home/kavtech/Segmentation/1/opencv-wrapper-egbis/images/inrangeImage.jpg", inrangeImage);
}
/// Display
namedWindow("Image", WINDOW_AUTOSIZE);
imshow("Image", src);
waitKey(0);
I want to get each color's position so that I can differentiate object positions. Please help!
That's just a trivial data-formatting problem. You want to turn a truecolour image with only 20 or so colours into a colour-indexed image.
So simply step through the image, look each colour up in your growing dictionary, and assign an integer 0-20 to each pixel.
Now you can turn that into binary images simply by saying one colour index is set and the rest are clear, and use standard algorithms for fitting rectangles.
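A minimal sketch of that indexing step, assuming exact colour matches after segmentation (the names palette and labels are illustrative):
// Build a colour-indexed image: one integer label per distinct colour.
std::vector<cv::Vec3b> palette;               // growing dictionary of seen colours
cv::Mat labels(src.rows, src.cols, CV_32SC1); // one label per pixel
for (int y = 0; y < src.rows; ++y) {
    for (int x = 0; x < src.cols; ++x) {
        cv::Vec3b c = src.at<cv::Vec3b>(y, x);
        int id = -1;
        for (size_t k = 0; k < palette.size(); ++k)
            if (palette[k] == c) { id = (int)k; break; }
        if (id < 0) { id = (int)palette.size(); palette.push_back(c); }
        labels.at<int>(y, x) = id;
    }
}
// One binary mask per label; boundingRect then gives the position.
for (size_t k = 0; k < palette.size(); ++k) {
    cv::Mat mask = (labels == (int)k);     // 255 where the pixel has label k
    cv::Rect box = cv::boundingRect(mask); // Rect for that colour's pixels
}
If one colour can appear as several separate objects, run findContours (or connected components) on each mask instead of a single boundingRect.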

Create function to draw a line

So I'm pretty new to C++ and OpenCV, and I've run into a problem where I keep reusing the same piece of code over and over again. I would like to turn it into a function where I can just pass in my points and have it run this code for each argument I enter.
So my question is: how do I turn the code below into a function where I can just replace "myPoint" with my own argument, so I don't need to copy-paste the code several times?
Vec3b Rightcolor = matImg.at<Vec3b>(myPoint); // color at the current position
while (myPoint.x != cols) {
    myPoint.x = myPoint.x + 1; // extend the arrow
    int blue = Rightcolor.val[0];
    int green = Rightcolor.val[1];
    int red = Rightcolor.val[2];
    Rightcolor = matImg.at<cv::Vec3b>(myPoint);
    if (blue == 255 && green == 255 && red == 255) {
        break; // color is white at the current position
    }
}
// end of function
if (m_debug) { // this draws the line in the main file
    line(matImg, bottomPoint, myPoint, cvScalar(0, 0, 102), 1, 8);
}
Updated with my function, which doesn't work (it doesn't draw a line), and the rest of my code below it:
Point increase(Mat img, Point myPoint, int cols)
{
    Vec3b rightLaneColor = img.at<Vec3b>(myPoint);
    while (myPoint.x != cols) {
        myPoint.x = myPoint.x + 1; // extend the arrow
        int blue = rightLaneColor.val[0];
        int green = rightLaneColor.val[1];
        int red = rightLaneColor.val[2];
        rightLaneColor = img.at<cv::Vec3b>(myPoint); // check the color at the current position
        if (blue == 255 && green == 255 && red == 255) {
            break; // color is white at the current position
        }
    }
    return myPoint;
}
void LaneDetector::processImage() {
    Mat mat_img(m_image); // IplImage is outdated
    Mat gray;
    Mat canny; // Canny for detecting edges
    cvtColor(mat_img, gray, CV_BGR2GRAY);
    Canny(gray, canny, 50, 170, 3);
    cvtColor(canny, mat_img, CV_GRAY2BGR);
    canny.release();
    int rows = mat_img.rows;
    int cols = mat_img.cols;
    Point bottomPoint; /* right-bottom arrow start */
    Point rightPoint;  /* right arrow end */
    bottomPoint.x = cols / 2;
    bottomPoint.y = 350;
    rightPoint.x = bottomPoint.x;
    rightPoint.y = bottomPoint.y;
    /*
    Vec3b rightLaneColor = mat_img.at<Vec3b>(rightPoint); // color at the current position
    while (rightPoint.x != cols) {
        rightPoint.x = rightPoint.x + 1; // extend the arrow
        int blue = rightLaneColor.val[0];
        int green = rightLaneColor.val[1];
        int red = rightLaneColor.val[2];
        rightLaneColor = mat_img.at<cv::Vec3b>(rightPoint); // check the color at the current position
        if (blue == 255 && green == 255 && red == 255) {
            break; // color is white at the current position
        }
    }
    */
    increase(mat_img, rightPoint, cols);
    if (m_debug) {
        line(mat_img, bottomPoint, rightPoint, cvScalar(80, 255, 12), 1, 8);
    }
    imshow("Lanedetection", mat_img);
    cvWaitKey(10);
    // Steering instructions from here
    SteeringData sd;
    // Create container for finally sending the data.
    Container c(Container::USER_DATA_1, sd);
    // Send container.
    getConference().send(c);
}
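The likely reason the updated code draws nothing: increase() takes myPoint by value and its return value is discarded, so rightPoint never changes. A minimal fix sketch:
rightPoint = increase(mat_img, rightPoint, cols); // keep the returned end point
line() then receives the extended end point instead of the untouched rightPoint.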

OpenCV chessboard and 2D pieces recognition

I'm doing a project in OpenCV.
The goal is to recognize 2D chess pieces on a chessboard, and their locations.
The algorithm goes like this:
1. Run Canny on the picture.
2. Run HoughLines on the image.
3. Merge close lines.
4. Sort the lines into vertical and horizontal.
5. Sort the lines and choose all the border lines + 1 (so we get the actual play-board lines and not the outer board lines).
6. Find the intersections of the border lines (4 corners).
7. Do a homography with the found corners.
8. Split the new image into 8x8 cells (length/8, width/8).
9. Detect the pieces on the board and their locations using template matching.
Right now I'm stuck on doing the homography.
1. Sometimes I detect the lines of the outer board, sometimes I don't, and sometimes I detect 2 lines on the outer board.
2. The merge-lines function I found online seems not to be doing a good job: there can be 2 vertical lines on top of each other which it does not merge.
void processChessboardImage(Mat &image)
{
    Vec2f temp(0, 0);
    vector<Vec2f> lines;
    vector<Vec2f> boarderLines;
    vector<Vec2f> verticalLines;
    vector<Vec2f> horizontalLines;
    Mat canny_output, hough_output;
    canny_output = CannyOnImage(image);
    //cvtColor(canny_output, image, CV_GRAY2BGR);
    HoughLines(canny_output, lines, 1, CV_PI/180, 120, 0, 0);
    // at this point we have all straight lines in the image
    MergeRelatedLines(&lines, canny_output);
    for (int i = 0; i < lines.size(); i++)
    {
        if (lines[i][1] != -100) // skip lines erased by the merge step
        {
            if ((lines[i][1] >= 0 && lines[i][1] < 1) || lines[i][1] >= 3) // vertical
            {
                verticalLines.push_back(lines[i]);
            }
            else // horizontal
            {
                horizontalLines.push_back(lines[i]);
            }
        }
    }
    sort(verticalLines.begin(), verticalLines.end(),
         [](const Vec2f &elem1, const Vec2f &elem2) { return elem1[0]*cos(elem1[1]) < elem2[0]*cos(elem2[1]); });
    sort(horizontalLines.begin(), horizontalLines.end(),
         [](const Vec2f &elem1, const Vec2f &elem2) { return elem1[0] < elem2[0]; });
    int numVerticalLines = verticalLines.size();
    int numHorizontalLines = horizontalLines.size();
    boarderLines.push_back(verticalLines[0]);
    boarderLines.push_back(verticalLines[verticalLines.size() - 1]);
    boarderLines.push_back(horizontalLines[0]);
    boarderLines.push_back(horizontalLines[horizontalLines.size() - 1]);
void MergeRelatedLines(vector<Vec2f> *lines, Mat &img)
{
    vector<Vec2f>::iterator current;
    vector<Vec4i> points(lines->size());
    for (current = lines->begin(); current != lines->end(); current++)
    {
        if ((*current)[0] == 0 && (*current)[1] == -100) // already merged away
            continue;
        float p1 = (*current)[0];
        float theta1 = (*current)[1];
        Point pt1current, pt2current;
        if (theta1 > CV_PI*45/180 && theta1 < CV_PI*135/180)
        {
            pt1current.x = 0;
            pt1current.y = p1 / sin(theta1);
            pt2current.x = img.size().width;
            pt2current.y = -pt2current.x / tan(theta1) + p1 / sin(theta1);
        }
        else
        {
            pt1current.y = 0;
            pt1current.x = p1 / cos(theta1);
            pt2current.y = img.size().height;
            pt2current.x = -pt2current.y / tan(theta1) + p1 / cos(theta1);
        }
        vector<Vec2f>::iterator pos;
        for (pos = lines->begin(); pos != lines->end(); pos++)
        {
            if (*current == *pos)
                continue;
            if (fabs((*pos)[0] - (*current)[0]) < 20 && fabs((*pos)[1] - (*current)[1]) < CV_PI*10/180)
            {
                float p = (*pos)[0];
                float theta = (*pos)[1];
                Point pt1, pt2;
                if ((*pos)[1] > CV_PI*45/180 && (*pos)[1] < CV_PI*135/180)
                {
                    pt1.x = 0;
                    pt1.y = p / sin(theta);
                    pt2.x = img.size().width;
                    pt2.y = -pt2.x / tan(theta) + p / sin(theta);
                }
                else
                {
                    pt1.y = 0;
                    pt1.x = p / cos(theta);
                    pt2.y = img.size().height;
                    pt2.x = -pt2.y / tan(theta) + p / cos(theta);
                }
                if (((double)(pt1.x - pt1current.x)*(pt1.x - pt1current.x)
                     + (pt1.y - pt1current.y)*(pt1.y - pt1current.y) < 64*64)
                    && ((double)(pt2.x - pt2current.x)*(pt2.x - pt2current.x)
                        + (pt2.y - pt2current.y)*(pt2.y - pt2current.y) < 64*64))
                {
                    printf("Merging\n");
                    // merge the two lines by averaging rho and theta
                    (*current)[0] = ((*current)[0] + (*pos)[0]) / 2;
                    (*current)[1] = ((*current)[1] + (*pos)[1]) / 2;
                    // mark the absorbed line as invalid
                    (*pos)[0] = 0;
                    (*pos)[1] = -100;
                }
            }
        }
    }
}
Example images:
good image
bad image
For the bad image, this code works, but of course I need something more generic...
boarderLines.push_back(verticalLines[1]);
boarderLines.push_back(verticalLines[verticalLines.size()-3]);
boarderLines.push_back(horizontalLines[1]);
boarderLines.push_back(horizontalLines[horizontalLines.size() -2 ]);
Thanks in advance for your help!
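For the homography step itself, a minimal sketch assuming the four play-board corners have already been found (the corner names and the output size are illustrative):
// Map the four detected corners to a square, top-down board.
std::vector<cv::Point2f> corners = { tl, tr, br, bl }; // from the line intersections
const float side = 400.f;                              // illustrative output size
std::vector<cv::Point2f> target = {
    {0, 0}, {side, 0}, {side, side}, {0, side}
};
cv::Mat H = cv::getPerspectiveTransform(corners, target);
cv::Mat board;
cv::warpPerspective(image, board, H, cv::Size((int)side, (int)side));
// Each cell is then board(cv::Rect(col*side/8, row*side/8, side/8, side/8)).
The corner order must match between corners and target (here: top-left, top-right, bottom-right, bottom-left).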

How to detect white blobs using OpenCV

I painted a picture to test:
And I want to know how many blobs I have in the black circle, and what the size of each blob is (all blobs are ~white).
For example, in this case I have 12 spots:
I know how to find white pixels, and it is easy to verify a sequence from the left:
int whitePixels = 0;
for (int i = 0; i < height; ++i)
{
    uchar *pixel = image.ptr<uchar>(i);
    for (int j = 0; j < width; ++j)
    {
        if (j > 0 && pixel[j-1] == 0) // to group pixels for one spot
            whitePixels++;
    }
}
but it's clear that this code is not good enough (blobs can extend diagonally, etc.).
So, the bottom line: I need help. How can I define the blobs?
Thank you
The following code finds bounding rects (blobs) for all white spots.
Remark: if we can assume the white spots are really white (namely, have the value 255 in the grayscaled image), you can use this snippet. Consider putting it in some class to avoid passing unnecessary params to the function Traverse; it works as-is, though. The idea is based on DFS. Apart from the grayscaled image, we have an ids matrix to assign and remember which pixel belongs to which blob (all pixels having the same id belong to the same blob).
void Traverse(int xs, int ys, cv::Mat &ids, cv::Mat &image, int blobID, cv::Point &leftTop, cv::Point &rightBottom)
{
    std::stack<cv::Point> S; // explicit stack instead of recursion
    S.push(cv::Point(xs, ys));
    while (!S.empty()) {
        cv::Point u = S.top();
        S.pop();
        int x = u.x;
        int y = u.y;
        if (image.at<unsigned char>(y, x) == 0 || ids.at<unsigned char>(y, x) > 0)
            continue; // background pixel, or already assigned to a blob
        ids.at<unsigned char>(y, x) = blobID;
        // grow the bounding box
        if (x < leftTop.x)
            leftTop.x = x;
        if (x > rightBottom.x)
            rightBottom.x = x;
        if (y < leftTop.y)
            leftTop.y = y;
        if (y > rightBottom.y)
            rightBottom.y = y;
        // push the 4-connected neighbours
        if (x > 0)
            S.push(cv::Point(x-1, y));
        if (x < ids.cols - 1)
            S.push(cv::Point(x+1, y));
        if (y > 0)
            S.push(cv::Point(x, y-1));
        if (y < ids.rows - 1)
            S.push(cv::Point(x, y+1));
    }
}
int FindBlobs(cv::Mat &image, std::vector<cv::Rect> &out, float minArea)
{
    cv::Mat ids = cv::Mat::zeros(image.rows, image.cols, CV_8UC1);
    cv::Mat thresholded;
    cv::cvtColor(image, thresholded, CV_RGB2GRAY);
    const int thresholdLevel = 130;
    cv::threshold(thresholded, thresholded, thresholdLevel, 255, CV_THRESH_BINARY);
    int blobId = 1;
    for (int x = 0; x < ids.cols; x++)
        for (int y = 0; y < ids.rows; y++) {
            if (thresholded.at<unsigned char>(y, x) > 0 && ids.at<unsigned char>(y, x) == 0) {
                // unvisited white pixel: flood it as a new blob
                cv::Point leftTop(ids.cols - 1, ids.rows - 1), rightBottom(0, 0);
                Traverse(x, y, ids, thresholded, blobId++, leftTop, rightBottom);
                cv::Rect r(leftTop, rightBottom);
                if (r.area() > minArea)
                    out.push_back(r);
            }
        }
    return blobId;
}
EDIT: I fixed a bug and lowered the threshold level, and the output is now given below. I think it is a good starting point.
EDIT2: I got rid of the recursion in Traverse(); on bigger images the recursion caused a stack overflow.
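A minimal usage sketch for the snippet above (the file path and minimum area are illustrative):
cv::Mat img = cv::imread("spots.png"); // illustrative path
std::vector<cv::Rect> blobs;
FindBlobs(img, blobs, 4.0f);           // drop blobs of 4 px area or less
for (size_t i = 0; i < blobs.size(); ++i)
    cv::rectangle(img, blobs[i], cv::Scalar(0, 0, 255), 1);
std::cout << "blob count: " << blobs.size() << std::endl;
On newer OpenCV versions, cv::connectedComponentsWithStats does the same labelling-plus-bounding-box job in one call.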

Finding HSV Thresholds Via Histograms with OpenCV

I'm trying to write a method that will find the proper threshold values in HSV space for an object placed at the center of the screen. These values are used by an object-tracking algorithm. I've tested that piece of code with hand-coded threshold values, and it works well. The idea behind the method is that it should calculate the histogram for each of the channels and then return the 5th and 95th percentile of each to be used as the threshold values (credit: How to find RGB/HSV color parameters for color tracking?). The image being passed in is a picture of the object to be tracked (which is set by the user before the whole process begins). Here is the code:
std::vector<cv::Scalar> HSV_Threshold_Determiner::Get_Threshold_Values(const cv::Mat& image)
{
    cv::Mat inputImage;
    cv::cvtColor(image, inputImage, CV_BGR2HSV);
    std::vector<cv::Mat> bgrPlanes; // actually HSV planes after the conversion above
    cv::split(inputImage, bgrPlanes);
    cv::Mat hHist, sHist, vHist;
    int hMax = 180, svMax = 256;
    float hRanges[] = { 0, (float)hMax };
    const float* hRange = { hRanges };
    float svRanges[] = { 0, (float)svMax };
    const float* svRange = { svRanges };
    //float sRanges[] = { 0, 256 };
    cv::calcHist(&bgrPlanes[0], 1, 0, cv::Mat(), hHist, 1, &hMax, &hRange);
    cv::calcHist(&bgrPlanes[1], 1, 0, cv::Mat(), sHist, 1, &svMax, &svRange);
    cv::calcHist(&bgrPlanes[2], 1, 0, cv::Mat(), vHist, 1, &svMax, &svRange);
    int totalEntries = image.cols * image.rows;
    int fiveCutoff = (int)(totalEntries * .05);
    int ninetyFiveCutoff = (int)(totalEntries * .95);
    float hTotal = 0, sTotal = 0, vTotal = 0;
    bool hMinFound = false, hMaxFound = false, sMinFound = false, sMaxFound = false,
         vMinFound = false, vMaxFound = false;
    cv::Scalar hThresholds;
    cv::Scalar sThresholds;
    cv::Scalar vThresholds;
    for (int i = 0; i < vHist.rows; ++i)
    {
        if (i < hHist.rows)
        {
            hTotal += hHist.at<float>(i, 0);
            if (hTotal >= fiveCutoff && !hMinFound)
            {
                hThresholds.val[0] = i;
                hMinFound = true;
            }
            else if (hTotal >= ninetyFiveCutoff && !hMaxFound)
            {
                hThresholds.val[1] = i;
                hMaxFound = true;
            }
        }
        sTotal += sHist.at<float>(i, 0);
        vTotal += vHist.at<float>(i, 0);
        if (sTotal >= fiveCutoff && !sMinFound)
        {
            sThresholds.val[0] = i;
            sMinFound = true;
        }
        else if (sTotal >= ninetyFiveCutoff && !sMaxFound)
        {
            sThresholds.val[1] = i;
            sMaxFound = true;
        }
        if (vTotal >= fiveCutoff && !vMinFound)
        {
            vThresholds.val[0] = i;
            vMinFound = true;
        }
        else if (vTotal >= ninetyFiveCutoff && !vMaxFound)
        {
            vThresholds.val[1] = i;
            vMaxFound = true;
        }
        if (vMaxFound && sMaxFound && hMaxFound)
        {
            break;
        }
    }
    std::vector<cv::Scalar> returnVect;
    returnVect.push_back(hThresholds);
    returnVect.push_back(sThresholds);
    returnVect.push_back(vThresholds);
    return returnVect;
}
What I am trying to do is sum up the number of entries in each bucket until I get a number greater than or equal to five percent and ninety-five percent of the total. Unfortunately, the numbers I get are never close to the ones I get when I do the thresholding by hand.
Mat img = ... // from camera or some other source
// STEP 1: learning phase
Mat hsv, imgThreshed, processed, denoised;
cv::GaussianBlur(img, denoised, cv::Size(5,5), 2, 2); // remove noise
cv::cvtColor(denoised, hsv, CV_BGR2HSV);
// let's say we manually picked a 100x100 px region with the interesting color/object using the mouse
cv::Mat roi = hsv(cv::Range(mousey-50, mousey+50), cv::Range(mousex-50, mousex+50)); // rows, then cols
// must split all channels to get Hue only
std::vector<cv::Mat> hsvPlanes;
cv::split(roi, hsvPlanes);
// compute statistics for the Hue value
cv::Scalar mean, stddev;
cv::meanStdDev(hsvPlanes[0], mean, stddev);
// take +-3 sigma around the mean to cover nearly all valid Hue samples (~99.7% for a normal distribution)
float minHue = mean[0] - stddev[0]*3;
float maxHue = mean[0] + stddev[0]*3;
// STEP 2: detection phase
cv::inRange(hsvPlanes[0], cv::Scalar(minHue), cv::Scalar(maxHue), imgThreshed);
imshow("thresholded", imgThreshed);
cv_erode(imgThreshed, processed, 5);  // minimizes noise (cv_erode/cv_dilate look like custom wrappers around cv::erode/cv::dilate)
cv_dilate(processed, processed, 20);  // maximizes the regions left
imshow("final", processed);
// STEP 3: do some blob/contour detection on the processed image & find the maximum blob/region, etc.
A much simpler solution: just calculate the mean & standard deviation for a region of interest, i.e. one containing the Hue value.
Since Hue is the most stable component in the image, the other components, saturation & value, should be discarded as they vary too much. However, you can still compute their means if needed.
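One caveat worth adding: OpenCV stores 8-bit Hue in [0, 180), so the computed bounds should be clamped, and strongly red objects can wrap around the ends of the range. A minimal sketch of the clamping:
float minHue = (float)std::max(0.0, mean[0] - stddev[0]*3);
float maxHue = (float)std::min(179.0, mean[0] + stddev[0]*3);
// A red object's hue may wrap around 0/179; if so, threshold the two
// sub-ranges separately and OR the resulting masks together.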