OpenCV floodFill() fills unconnected regions - c++

I have implemented the connected component identification algorithm from here, but it seems that cv::floodFill(...) fills unconnected regions in some cases.
First of all, here is the code:
void ImageMatchingOpenCV::getConnectedComponents(const cv::Mat& binImg, vector<vector<cv::Point>>& components, vector<vector<cv::Point>>& contours, const int minSize)
{
    cv::Mat ccImg;
    binImg.convertTo(ccImg, CV_32FC1);
    int gap=startPointParams.gap;
    int label = 1;
    for(int y=gap; y<binImg.rows-gap; ++y)
    {
        for(int x=gap; x<binImg.cols-gap; ++x)
        {
            if((int)ccImg.at<float>(y, x)!=255) continue;
            cv::Rect bBox;
            cv::floodFill(ccImg, cv::Point(x, y), cv::Scalar(label), &bBox, cv::Scalar(0), cv::Scalar(0), 4 /*| cv::FLOODFILL_FIXED_RANGE*/);
            if(bBox.x<gap || bBox.y<gap || bBox.x+bBox.width>=binImg.cols-gap || bBox.y+bBox.height>=binImg.rows-gap) continue;
            components.push_back(vector<cv::Point>());
            contours.push_back(vector<cv::Point>());
            for(int i=bBox.y; i<bBox.y+bBox.height; ++i)
            {
                for(int j=bBox.x; j<bBox.x+bBox.width; ++j)
                {
                    if((int)ccImg.at<float>(i, j)!=label) continue;
                    components.back().push_back(cv::Point(j, i));
                    if(   (int)ccImg.at<float>(i+1, j)!=label
                       || (int)ccImg.at<float>(i-1, j)!=label
                       || (int)ccImg.at<float>(i, j+1)!=label
                       || (int)ccImg.at<float>(i, j-1)!=label) contours.back().push_back(cv::Point(j, i));
                }
            }
            if(components.back().size()<minSize)
            {
                components.pop_back();
                contours.pop_back();
            }
            else
            {
                ++label;
                if(label==255) ++label;
                break;
            }
        }
        if(label!=1) break;
    }
}
The input cv::Mat contains 2448x2050 pixels of type CV_8U. The pixel values are either 0 (background) or 255 (foreground). There are 17 connected components in the image. All components but the first are identified correctly. The erroneous component is by far the largest one (~1.5 million pixels), contains some small disconnected pixel groups, and encompasses all of the other components. The small disconnected pixel groups that are wrongly assigned to the first component are all connected to the top of the component's bounding box.
EDIT: I added some images to visualize the problem. The first image shows all identified connected components. The second image shows only the erroneous component (notice the small disconnected pixel groups at the top). The third image zooms in on a part of the second image:
If someone has an idea where the error might be, I would be thankful.

I found the bug myself. At the end of the method, small components are thrown away. In this case the component's number (label) is not increased:
if(components.back().size()<minSize)
{
    components.pop_back();
    contours.pop_back();
}
else
{
    ++label;
    if(label==255) ++label;
}
This means the label number is used again to mark the next component in the image. Hence, several small components and a sufficiently large component might end up with the same label number. If the bounding box of the large component is then iterated, it might contain some of those small, previously identified but discarded components carrying the same label number.
The solution is to remove the else branch and always increase the label number instead.
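Applied to the code above, the end of the loop body becomes something like this (the same statements with the increment moved out of the else branch):
if(components.back().size()<minSize)
{
    components.pop_back();
    contours.pop_back();
}
++label;                // always advance the label, even after discarding a small component
if(label==255) ++label; // skip the foreground value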


How to find length of upper and lower arc from ellipse image

Here I try to find the upper and lower arcs using the image vector (the contours of the image), but it doesn't give the exact result. Please suggest another method to find the upper and lower arcs of the ellipse and their lengths.
Here is my code:
Mat image = cv::imread("thinning/20d.jpg");
int i=0, j=0, k=0, x=320;
for(int y = 0; y < image.rows; y++)
{
    if(image.at<Vec3b>(Point(x, y))[0] >= 250 &&
       image.at<Vec3b>(Point(x, y))[1] >= 250 &&
       image.at<Vec3b>(Point(x, y))[2] >= 250)
    {
        qDebug()<<x<<y;
        x1[i]=x;
        y1[i]=y;
        i=i+1;
    }
}
for(i=0; i<=1; i++)
{
    qDebug()<<x1[i]<<y1[i];
}
qDebug()<<"UPPER ARC";
for(int x = 0; x < image.cols; x++)
{
    for(int y = 0; y <= (y1[0]+20); y++)
    {
        if(image.at<Vec3b>(Point(x, y))[0] >= 240 &&
           image.at<Vec3b>(Point(x, y))[1] >= 240 &&
           image.at<Vec3b>(Point(x, y))[2] >= 240)
        {
            x2[j]=x;
            y2[j]=y;
            j=j+1;
            qDebug()<<x<<y;
        }
    }
}
qDebug()<<"Lower ARC";
for(int x = 0; x < image.cols; x++)
{
    for(int y = (y1[1]-20); y < image.rows; y++) // < rather than <= to stay inside the image
    {
        if(image.at<Vec3b>(Point(x, y))[0] >= 240 &&
           image.at<Vec3b>(Point(x, y))[1] >= 240 &&
           image.at<Vec3b>(Point(x, y))[2] >= 240)
        {
            x3[k]=x;
            y3[k]=y;
            k=k+1;
            qDebug()<<x<<y;
        }
    }
}
With the above code I get coordinates; using those coordinate points I can find the length of each arc, but it doesn't match the exact result.
Here is the actual image:
Image1:
After thinning I got:
Expected output:
As you are unable to define what exactly the upper/lower arc is, I will assume you cut the ellipse into halves by a horizontal line going through the ellipse's middle point. If that is not the case, then you have to adapt this on your own... OK, now how to do it:
binarize image
As you provide a JPG, the colors are distorted, so there is more than just black and white.
thin the border to 1 pixel
Fill the inside with white and then recolor all white pixels not neighboring any black pixels to some unused or black color. There are many other variations of how to achieve this...
find the bounding box
Search all pixels and remember the min and max x,y coordinates of all white pixels. Let's call them x0,y0,x1,y1.
compute center of ellipse
simply find the middle point of the bounding box
cx=(x0+x1)/2
cy=(y0+y1)/2
count the pixels for each elliptic arc
Have a counter for each arc and simply increment the upper arc counter for any white pixel that has y<=cy, and the lower one if y>=cy. If your coordinate system is different, then the conditions may be reversed.
find ellipse parameters
Simply find the white pixel closest to (cx,cy); this will be the endpoint of the minor semi-axis b, let's call it (bx,by). Also find the white pixel farthest from (cx,cy); that will be the major semi-axis endpoint (ax,ay). The distances between them and the center give you a,b, and their positions minus the center give you vectors carrying the rotation of your ellipse. The angle can be obtained by atan2, or use the basis vectors as I do. You can test orthogonality by the dot product. There can be more than 2 candidate points for the closest and farthest positions; in that case you should find the middle of each group to enhance precision.
Integrate fitted ellipse
First you need to find the angles at which the ellipse points have y=cy, then integrate the ellipse between these two angles. The other half is the same; just integrate the angles + PI. To determine which half it is, just compute the point in the middle of the angle range and decide according to y>=cy ...
[Edit2] Here is the updated C++ code I put together for this:
picture pic0,pic1,pic2;
// pic0 - source
// pic1 - output
float a,b,a0,a1,da,xx0,xx1,yy0,yy1,ll0,ll1;
int x,y,i,threshold=127,x0,y0,x1,y1,cx,cy,ax,ay,bx,by,aa,bb,dd,l0,l1;
pic1=pic0;
// bbox, center, recolor (white,black)
x0=pic1.xs; x1=0;
y0=pic1.ys; y1=0;
for (y=0;y<pic1.ys;y++)
    for (x=0;x<pic1.xs;x++)
        if (pic1.p[y][x].db[0]>=threshold)
        {
            if (x0>x) x0=x;
            if (y0>y) y0=y;
            if (x1<x) x1=x;
            if (y1<y) y1=y;
            pic1.p[y][x].dd=0x00FFFFFF;
        }
        else pic1.p[y][x].dd=0x00000000;
cx=(x0+x1)/2; cy=(y0+y1)/2;
// fill the inside (gray), leaving a single pixel wide border (thinning)
for (y=y0;y<=y1;y++)
{
    for (x=x0;x<=x1;x++) if (pic1.p[y][x].dd)
    {
        for (i=x1;i>=x;i--) if (pic1.p[y][i].dd)
        {
            for (x++;x<i;x++) pic1.p[y][x].dd=0x00202020;
            break;
        }
        break;
    }
}
for (x=x0;x<=x1;x++)
{
    for (y=y0;y<=y1;y++) if (pic1.p[y][x].dd) { pic1.p[y][x].dd=0x00FFFFFF; break; }
    for (y=y1;y>=y0;y--) if (pic1.p[y][x].dd) { pic1.p[y][x].dd=0x00FFFFFF; break; }
}
// find min,max radius (semi-axis endpoints)
bb=pic1.xs+pic1.ys; bb*=bb; aa=0;
ax=cx; ay=cy; bx=cx; by=cy;
for (y=y0;y<=y1;y++)
    for (x=x0;x<=x1;x++)
        if (pic1.p[y][x].dd==0x00FFFFFF)
        {
            dd=((x-cx)*(x-cx))+((y-cy)*(y-cy));
            if (aa<dd) { ax=x; ay=y; aa=dd; }
            if (bb>dd) { bx=x; by=y; bb=dd; }
        }
aa=sqrt(aa); ax-=cx; ay-=cy;
bb=sqrt(bb); bx-=cx; by-=cy;
//a=float((ax*bx)+(ay*by))/float(aa*bb); // if (fabs(a)>zero_threshold) not perpendicular semi-axes
// separate/count upper,lower arc by horizontal line
l0=0; l1=0;
for (y=y0;y<=y1;y++)
    for (x=x0;x<=x1;x++)
        if (pic1.p[y][x].dd==0x00FFFFFF)
        {
            if (y>=cy) { l0++; pic1.p[y][x].dd=0x000000FF; } // red
            if (y<=cy) { l1++; pic1.p[y][x].dd=0x00FF0000; } // blue
        }
// here is just VCL/GDI info layer output so you can ignore it...
// arc separator axis
pic1.bmp->Canvas->Pen->Color=0x00808080;
pic1.bmp->Canvas->MoveTo(x0,cy);
pic1.bmp->Canvas->LineTo(x1,cy);
// draw analytical ellipse to compare
pic1.bmp->Canvas->Pen->Color=0x0000FF00;
pic1.bmp->Canvas->MoveTo(cx,cy);
pic1.bmp->Canvas->LineTo(cx+ax,cy+ay);
pic1.bmp->Canvas->MoveTo(cx,cy);
pic1.bmp->Canvas->LineTo(cx+bx,cy+by);
pic1.bmp->Canvas->Pen->Color=0x00FFFF00;
da=0.01*M_PI; // dash step [rad]
a0=0.0;       // start
a1=2.0*M_PI;  // end
for (i=1,a=a0;i;)
{
    a+=da; if (a>=a1) { a=a1; i=0; }
    x=cx+(ax*cos(a))+(bx*sin(a));
    y=cy+(ay*cos(a))+(by*sin(a));
    pic1.bmp->Canvas->MoveTo(x,y);
    a+=da; if (a>=a1) { a=a1; i=0; }
    x=cx+(ax*cos(a))+(bx*sin(a));
    y=cy+(ay*cos(a))+(by*sin(a));
    pic1.bmp->Canvas->LineTo(x,y);
}
// integrate the arc lengths from the fitted ellipse
da=0.001*M_PI; // integration step [rad] (accuracy)
// find start-end angles
ll0=M_PI; ll1=M_PI;
for (i=1,a=0.0;i;)
{
    a+=da; if (a>=2.0*M_PI) { a=0.0; i=0; }
    xx1=(ax*cos(a))+(bx*sin(a));
    yy1=(ay*cos(a))+(by*sin(a));
    b=atan2(yy1,xx1);
    xx0=fabs(b-0.0); if (xx0>M_PI) xx0=2.0*M_PI-xx0;
    xx1=fabs(b-M_PI);if (xx1>M_PI) xx1=2.0*M_PI-xx1;
    if (ll0>xx0) { ll0=xx0; a0=a; }
    if (ll1>xx1) { ll1=xx1; a1=a; }
}
// [upper half]
ll0=0.0;
xx0=cx+(ax*cos(a0))+(bx*sin(a0));
yy0=cy+(ay*cos(a0))+(by*sin(a0));
for (i=1,a=a0;i;)
{
    a+=da; if (a>=a1) { a=a1; i=0; }
    xx1=cx+(ax*cos(a))+(bx*sin(a));
    yy1=cy+(ay*cos(a))+(by*sin(a));
    // sum arc-line sizes
    xx0-=xx1; xx0*=xx0;
    yy0-=yy1; yy0*=yy0;
    ll0+=sqrt(xx0+yy0);
    // pic1.p[int(yy1)][int(xx1)].dd=0x0000FF00; // recolor to visually check the right arc selection
    xx0=xx1; yy0=yy1;
}
// lower half
a0+=M_PI; a1+=M_PI; ll1=0.0;
xx0=cx+(ax*cos(a0))+(bx*sin(a0));
yy0=cy+(ay*cos(a0))+(by*sin(a0));
for (i=1,a=a0;i;)
{
    a+=da; if (a>=a1) { a=a1; i=0; }
    xx1=cx+(ax*cos(a))+(bx*sin(a));
    yy1=cy+(ay*cos(a))+(by*sin(a));
    // sum arc-line sizes
    xx0-=xx1; xx0*=xx0;
    yy0-=yy1; yy0*=yy0;
    ll1+=sqrt(xx0+yy0);
    // pic1.p[int(yy1)][int(xx1)].dd=0x00FF00FF; // recolor to visually check the right arc selection
    xx0=xx1; yy0=yy1;
}
// handle if the upper/lower parts are swapped
a=a0+0.5*(a1-a0);
if ((ay*cos(a))+(by*sin(a))<0.0) { a=ll0; ll0=ll1; ll1=a; }
// info texts
pic1.bmp->Canvas->Font->Color=0x00FFFF00;
pic1.bmp->Canvas->Brush->Style=bsClear;
x=5; y=5; i=16; y-=i;
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("center = (%i,%i) px",cx,cy));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("a = %i px",aa));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("b = %i px",bb));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("upper = %i px",l0));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("lower = %i px",l1));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("upper`= %.3lf px",ll0));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("lower`= %.3lf px",ll1));
pic1.bmp->Canvas->Brush->Style=bsSolid;
It uses my own picture class with these members:
xs,ys - resolution of the image
p[y][x].dd - pixel access as one 32-bit unsigned integer (the whole color)
p[y][x].db[4] - pixel access as 4 x 8-bit unsigned integers (the color channels)
You can look at the picture::p member as a simple 2D array of
union color
{
    DWORD dd; WORD dw[2]; byte db[4];
    int i; short int ii[2];
    color(){}; color(color& a){ *this=a; }; ~color(){};
    color* operator = (const color *a) { dd=a->dd; return this; };
    /*color* operator = (const color &a) { ...copy... return this; };*/
};
int xs,ys;
color p[ys][xs];
Graphics::TBitmap *bmp; // VCL GDI Bitmap object; you do not need this...
where each cell can be accessed as a 32-bit pixel p[][].dd, as 0xAABBGGRR or 0xAARRGGBB (not sure now which). You can also access the channels directly with p[][].db[4] as 8-bit BYTEs.
The bmp member is a GDI bitmap, so bmp->Canvas-> accesses all the GDI stuff, which is not important for you.
Here is the result for your second image:
The gray horizontal line is the arc boundary line going through the center.
Red and blue are the arc halves (recolored during counting).
Green are the semi-axes basis vectors.
The aqua dash-dash is the analytical ellipse overlay to compare the fit.
As you can see, the fit is pretty close (+/- 1 pixel). The counted arc lengths upper,lower are pretty close to the approximated average circle half perimeter (circumference).
You should add a range check on a0 to decide whether the start is the upper or the lower half, because there is no guarantee which side of the major axis this will find. The integration of both halves is almost the same and saturates, for integration steps around 0.001*M_PI, at about 307.3 pixels per arc length, which is only 17 and 22 pixels away from the direct pixel counts. That is even better than I anticipated, given the aliasing ...
For more eccentric ellipses the fit is not as good, but the results are still good enough:

QGraphicsScene.itemAt() only returns zero and whole scene is slow

I am trying to create a dot matrix in a QGraphicsScene. I have a button which fills a 2D array with random numbers, and then I paint a pixel at every position where the array has a 0.
Now, when I want to generate the matrix again, I want to check every pixel and array field whether they are empty or not. If the pixel is empty and the array field is not, I want to set a pixel. If there is a pixel but the array field is empty, I want to remove the pixel. The problem is that the function itemAt() always returns 0, even though I can clearly see existing pixels.
What is my problem?
//creating the scene in the constructor
QPainter MyPainter(this);
scene = new QGraphicsScene(this);
ui.graphicsView->setScene(scene);

//generating matrix
void MaReSX_ClickDummy::generate(void)
{
    QGraphicsItem *item;
    int x, y;
    for(x=0; x<400; x++)
    {
        for(y=0; y<400; y++)
        {
            dataArray[x][y] = rand()%1001;
        }
    }
    for(x=0; x<400; x++)
    {
        for(y=0; y<400; y++)
        {
            //supposed to check whether there is already a pixel at that place, but always returns zero
            item = scene->itemAt(x, y, QTransform());
            if(dataArray[x][y] == 0 && item == 0)
                scene->addEllipse(x, y, 1, 1);
            //does not work
            else if(dataArray[x][y] != 0 && item != 0)
                scene->removeItem(item);
        }
    }
}
Also, the generating of the matrix is very slow. Since the matrix is supposed to show realtime data later, it should run as fast as possible (and the scene will be bigger than the current 400*400 pixels). Any ideas how to improve the code?
And can somebody explain what the third parameter of itemAt() is supposed to do?
Thank you!
A 400x400 'dot matrix' is up to 160,000 dots, or up to 2500 characters, which is quite big.
QGraphicsScene is designed to handle a small number of large shapes and was probably not designed to handle this many. Using it this way, to create thousands of identical tiny 'pixel' objects, is incredibly inefficient.
Could you create a 400x400 bitmap (QBitmap?) instead, and set the individual pixels that you want?
You are supposed to be using a QGraphicsPixmapItem instead of an array of dots!
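For illustration, a minimal sketch of that pixmap-based approach might look like this (reusing the dataArray and scene from the question; QImage, QPixmap, and QGraphicsPixmapItem are standard Qt classes):
// Build the whole matrix as a single image, then show it as one item.
QImage img(400, 400, QImage::Format_RGB32);
img.fill(Qt::black);
for (int x = 0; x < 400; ++x)
    for (int y = 0; y < 400; ++y)
        if (dataArray[x][y] == 0)
            img.setPixel(x, y, qRgb(255, 255, 255)); // one "dot"
// Replace the previous content instead of adding/removing thousands of ellipses.
scene->clear();
QGraphicsPixmapItem *pixItem = scene->addPixmap(QPixmap::fromImage(img));
To regenerate, rebuild the QImage and call pixItem->setPixmap(QPixmap::fromImage(img)) again; no per-pixel itemAt() lookups are needed.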

Only drawing contours that exist over several frames to remove flickering

I've been researching here and the rest of the web for over a week now and am unable to come up with anything.
I'm coding using C++ and opencv on linux.
I have this video in black and white of a cloud chamber (http://youtu.be/40wnB8ukI7s). I want to draw contours around the moving particle tracks. Currently I'm using findContours and drawContours; however, it draws contours around all of the white pixels, including the ones that quickly appear and disappear. I don't want to draw contours around my background, the flickering white pixels.
My problem is that the background is also moving so background subtraction doesn't work. Is there a way to:
a) only draw a contour if it exists roughly in the same location over several frames
b) remove a white pixel if it doesn't exist for multiple frames (probably at least 4 or 5 frames)
Thank you for any help you can provide.
Edit: Code for comparing two frames (firstFrame and secondFrame)
Vec3b frameColour;
Vec3b frameColour2;
for (int x = 0; x < firstFrame.cols; x++)
{
    for (int y = 0; y < firstFrame.rows; y++)
    {
        frameColour = firstFrame.at<Vec3b>(Point(x, y));
        frameColour2 = secondFrame.at<Vec3b>(Point(x, y));
        if(frameColour == white && frameColour2 == white)
        {
            secondFrameAfter.at<Vec3b>(Point(x, y)) = white;
        }
        else
        {
            secondFrameAfter.at<Vec3b>(Point(x, y)) = black;
        }
    }
}
You could implement your idea:
For each frame do:
    For each white pixel do:
        If the pixels in the neighbourhood of the last N frames are *mostly* white
            Set the current pixel to white
        Else
            Set the current pixel to black
The neighbourhood can be defined as a 3x3 mask around the pixel.
*Mostly* refers to an appropriate threshold; let's say 80% of the N frames should support (be white at) the pixel position.
The red pixel is the current pixel (x,y) and the green pixels are its neighbourhood.
Comparing the neighbouring pixels of a pixel (x,y) can be achieved as follows:
const int MASK_SIZE = 3;
int numberOfSupportingFrames = 0;
for(int k = 0; k < N; k++)
{
    Mat currentPreviousFrame = previousFrames.at(k);
    bool whitePixelAvailable = false;
    // note: <= so the mask really covers MASK_SIZE columns/rows
    for(int i = x-(MASK_SIZE/2); i <= x+(MASK_SIZE/2) && !whitePixelAvailable; i++)
    {
        for(int j = y-(MASK_SIZE/2); j <= y+(MASK_SIZE/2) && !whitePixelAvailable; j++)
        {
            if(currentPreviousFrame.at<Vec3b>(Point(i, j)) == white)
            {
                whitePixelAvailable = true;
                numberOfSupportingFrames++;
            }
        }
    }
}
if((float)numberOfSupportingFrames / (float)N > 0.8)
    secondFrameAfter.at<Vec3b>(Point(x, y)) = white;
else
    secondFrameAfter.at<Vec3b>(Point(x, y)) = black;
The previous frames are stored inside std::vector<Mat> previousFrames.
The algorithm checks the spatio-temporal neighbourhood of the pixel (x,y). The outer loop iterates over the neighbouring frames (temporal neighbourhood), while the inner two loops iterate over the neighbouring eight pixels (spatial neighbourhood). If there is a white pixel in the current spatial neighbourhood, that previous frame supports the current pixel (x,y). At the end it is checked whether there are enough frames supporting the current pixel (80% of the previous frames should contain at least one white pixel in the 8-neighbourhood).
This code should be nested inside your two for loops, with some modifications (variable names, border handling), as in the sketch below.
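Putting it together, a sketch of the whole filter might look like the following (hypothetical function and variable names; the mask is clamped at the image borders, and previousFrames is assumed to hold the last N binarized frames):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Keep a pixel white only if at least 80% of the last N frames contain a
// white pixel somewhere in its 3x3 neighbourhood (clamped at the borders).
void temporalFilter(const std::vector<cv::Mat>& previousFrames,
                    const cv::Mat& frame, cv::Mat& result)
{
    const cv::Vec3b white(255, 255, 255);
    const int N = (int)previousFrames.size();
    result = cv::Mat(frame.size(), frame.type(), cv::Scalar(0, 0, 0)); // all black
    for (int y = 0; y < frame.rows; y++)
        for (int x = 0; x < frame.cols; x++)
        {
            if (frame.at<cv::Vec3b>(y, x) != white) continue;
            int numberOfSupportingFrames = 0;
            for (int k = 0; k < N; k++)
            {
                const cv::Mat& prev = previousFrames[k];
                bool whitePixelAvailable = false;
                // 3x3 mask clamped to the image
                for (int i = std::max(0, y-1); i <= std::min(frame.rows-1, y+1) && !whitePixelAvailable; i++)
                    for (int j = std::max(0, x-1); j <= std::min(frame.cols-1, x+1) && !whitePixelAvailable; j++)
                        if (prev.at<cv::Vec3b>(i, j) == white)
                            whitePixelAvailable = true;
                if (whitePixelAvailable) numberOfSupportingFrames++;
            }
            if ((float)numberOfSupportingFrames / (float)N > 0.8f)
                result.at<cv::Vec3b>(y, x) = white;
        }
}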

Unable to calculate pixel difference with opencv

I am forced to calculate the pixel difference, pixel by pixel, between frames of a video.
I initialize the pixeldifference variable to 0 (assume all the frames are valid frames in the video).
The problem is that lastFrame and frame are always identical, which means the code with the cout statement "Pixel count incremented" is never triggered. I know a couple of frames can be identical, but I never see that output statement even once, which leads me to believe the two frames are always identical. Should I do something else? I'd appreciate any guidance. I'm very new to opencv (excuse a little bad coding practice inside; it was for debugging purposes).
Mat lastFrame;
Mat frame;
capture.read(lastFrame);
capture.read(frame);
while(counter<tofind)
{
    for(int cur_row=min_PX; cur_row<max_PX; cur_row++)
    {
        for(int cur_cols=0; cur_cols<frame.cols; cur_cols++)
        {
            Vec3b pixels_currentFrame = frame.at<cv::Vec3b>(cur_row, cur_cols);
            Vec3b pixels_lastFrame = lastFrame.at<cv::Vec3b>(cur_row, cur_cols);
            bCur=int(pixels_currentFrame.val[0]);
            gCur=int(pixels_currentFrame.val[1]);
            rCur=int(pixels_currentFrame.val[2]);
            bPrev=int(pixels_lastFrame.val[0]);
            gPrev=int(pixels_lastFrame.val[1]);
            rPrev=int(pixels_lastFrame.val[2]);
            bDiff=abs(bCur-bPrev);
            gDiff=abs(gCur-gPrev);
            rDiff=abs(rCur-rPrev);
            if (abs(bCur-bPrev) > checkval ||
                abs(rCur-rPrev) > checkval ||
                abs(gCur-gPrev) > checkval)
            {
                pixeldifference++;
                cout<<"Pixel count incremented"<<endl;
            }
        }
    }
    lastFrame=frame;
    capture.read(frame);
    /*
    some other stuff happens here
    */
    counter++;
}

Finding furthest away pixel (Open CV)

I've been struggling with a small problem for some time now and just can't figure out what is wrong.
So I have a black 126 x 126 image with a 1 pixel blue border ( [B,G,R] = [255, 0, 0] ).
What I want is the pixel which is furthest away from all blue pixels (such as the border). I understand how this is done: iterate through every pixel; if it is black, compute the distance to every blue pixel, looking for the minimum; then select the black pixel with the largest minimum distance to any blue pixel.
Note: I don't need to know the true distance, so when doing the sum of squares for the distance I don't take the square root; I only want to know which distance is larger (less expensive).
The first thing I do is loop through every pixel, and if it is blue, add its row and column to a vector. I can confirm this part works correctly. Next, I loop through all pixels again and compare every black pixel's distance to every pixel in the blue pixel vector.
Here blue is a vector of Blue objects (each has a row and column), and image is the image being searched:
int distance;
int localShortest = 0;
int bestDist = 0;
int posX = 0;
int posY = 0;
for(int i = 0; i < image.rows; i++)
{
    for(int j = 0; j < image.cols; j++)
    {
        //Make sure pixel is black
        if(image.at<cv::Vec3b>(i,j)[0] == 0
           && image.at<cv::Vec3b>(i,j)[1] == 0
           && image.at<cv::Vec3b>(i,j)[2] == 0)
        {
            for(int k = 0; k < blue.size(); k++)
            {
                //Distance between pixels
                distance = (i - blue.at(k).row)*(i - blue.at(k).row) + (j - blue.at(k).col)*(j - blue.at(k).col);
                if(k == 0)
                {
                    localShortest = distance;
                }
                if(distance < localShortest)
                {
                    localShortest = distance;
                }
            }
            if(localShortest > bestDist)
            {
                posX = i;
                posY = j;
                bestDist = localShortest;
            }
        }
    }
}
This works absolutely fine for a 1 pixel border around the edge.
https://dl.dropboxusercontent.com/u/3879939/works.PNG
Similarly, if I add more blue but keep a squarish black region, then it also works.
https://dl.dropboxusercontent.com/u/3879939/alsoWorks.PNG
But as soon as the image does not have a square black portion but, say, a rectangular one, the 'furthest away' pixel is off. Sometimes it even says a blue pixel is the furthest away from blue, which is just not right.
https://dl.dropboxusercontent.com/u/3879939/off.PNG
Any help much appreciated! Hurting my head a bit.
One possibility, given that you're using OpenCV anyway, is to just use the supplied distance transform function.
For your particular case, you would need to do the following:
Convert your input to a single-channel binary image (e.g. map black to white and blue to black)
Run the cv::distanceTransform function with CV_DIST_L2 (Euclidean distance)
Examine the resulting greyscale image to get the results.
Note that there may be more than one pixel at the maximum distance from the border, so you need to handle this case according to your application.
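A minimal sketch of those three steps might look like this (assuming OpenCV 3+ constant names and a hypothetical input file; note that cv::minMaxLoc reports only one of possibly several maximum locations):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat img = cv::imread("bordered.png"); // hypothetical input
    // Blue border pixels ([B,G,R] = [255,0,0]) become 0, everything else 255.
    cv::Mat blueMask, binary;
    cv::inRange(img, cv::Scalar(255, 0, 0), cv::Scalar(255, 0, 0), blueMask);
    cv::bitwise_not(blueMask, binary);
    // Euclidean distance of every non-zero pixel to the nearest zero pixel.
    cv::Mat dist;
    cv::distanceTransform(binary, dist, cv::DIST_L2, 3);
    // The brightest pixel of the distance map is farthest from any blue pixel.
    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(dist, nullptr, &maxVal, nullptr, &maxLoc);
    std::cout << "farthest pixel: " << maxLoc << " distance: " << maxVal << "\n";
    return 0;
}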
The brightest pixels in the distance transform will be the ones that you need. For example, here is a white rectangle and its distance transform:
In a square, due to its symmetry, the furthest black point (the center) is the furthest no matter in which direction you look from it. But now imagine a very long rectangle with a very short height: there will be multiple points on its horizontal axis whose largest minimum distance is the short distance to both the top and bottom sides, because the left and right sides are far away. In this case the pixel your algorithm finds can be any one on this line, and the result will depend on your pixel scanning order.
It's because there is a line (more than one pixel) meeting your condition for a rectangle.