How to remove elongated structures (contours) from a binary image - python-2.7

I am trying to remove elongated contours from my binary image but I am still getting most of them. I have tried to remove them by considering compactness and eccentricity factors, but that didn't work in my case.
import cv2
import numpy as np

im = cv2.imread('thresh.png')
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
cv2.imshow('thres', gray)
gray2 = gray.copy()
mask = np.zeros(gray.shape, np.uint8)
contours, hier = cv2.findContours(gray2, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area >= 5:
        ellipse = cv2.fitEllipse(cnt)
        # center, axis lengths and orientation of the fitted ellipse
        (center, axes, orientation) = ellipse
        # lengths of the major and minor axes
        majoraxis_length = max(axes)
        minoraxis_length = min(axes)
        eccentricity = np.sqrt(1 - (minoraxis_length / majoraxis_length)**2)
        ############# compactness ################
        equi_diameter = np.sqrt(4 * area / np.pi)
        compactness = equi_diameter / majoraxis_length
        ##########################################
        if (eccentricity <= 0.6 or eccentricity > 1) or (compactness < 0.8):
            cv2.drawContours(gray2, [cnt], 0, (0, 0, 0), 1)
            cv2.drawContours(mask, [cnt], 0, 255, -1)
cv2.imshow('final', mask)
cv2.waitKey(0)
Can anyone suggest a method for removing these elongated contours?

One option I can think of is to calculate each object's area and maximum length, then set a threshold on the ratio area/length, roughly as in the sketch below.
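A minimal sketch of that idea, assuming the OpenCV 2.4 / Python 2.7 API from the question; the threshold value (3.0) and the input file name are made-up placeholders to tune:

import cv2
import numpy as np

# For each contour, compare its area to the length of the longer side of its
# rotated bounding box. Elongated blobs have a small area-to-length ratio.
im = cv2.imread('thresh.png', cv2.IMREAD_GRAYSCALE)
mask = np.zeros(im.shape, np.uint8)
contours, hier = cv2.findContours(im.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    area = cv2.contourArea(cnt)
    (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)
    max_length = max(w, h)
    if max_length > 0 and area / max_length > 3.0:   # keep compact blobs only
        cv2.drawContours(mask, [cnt], 0, 255, -1)
cv2.imshow('filtered', mask)
cv2.waitKey(0)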

Related

better detection C++

My idea was to implement a simple square detection using OpenCV & C++ (Objective-C++). I've already extracted the biggest areas of the image, as you can see below (the colored ones), but now I'd like to extract the corner points (TopLeft, TopRight, BottomLeft, BottomRight) of all the areas to afterwards check whether the distance between the 4 corners is similar on each of them, or the angle between the lines is nearly 45°.
See the images I was talking about:
However, I got to this point where I've tried to extract the areas' corner points to get something like this afterwards:
This was my first idea of how to get the 4 corner points (see the steps below):
1. Calculate the contour's center:
double avgx = 0, avgy = 0;
for (int i = 0; i < contourPoints.size(); i++) {
    avgx += contourPoints[i].x;
    avgy += contourPoints[i].y;
}
avgx /= contourPoints.size(); // centerx
avgy /= contourPoints.size(); // centery
2. Loop through all contour points to get the points with the highest distance to the center --> probably the corners, if the contour is a square/rectangle:
std::vector<double> distvector;
for (int i = 0; i < contourPoints.size(); i++) {
    double dx = abs(avgx - contourPoints[i].x);
    double dy = abs(avgy - contourPoints[i].y);
    double dist = sqrt(dx * dx + dy * dy);
    distvector.push_back(dist);
}
// sort distvector and take the 4 points with the highest distance to the center -> hopefully the corners.
This procedure was my idea, but I'm pretty sure there must be a better way to detect squares and extract their corner points using just the given contour coordinates.
So any help on how to improve my code to get a better and more efficient detection would be very much appreciated. Thanks a million in advance, Tempi.

Perspective Transformation for bird's eye view opencv c++

I am interested in perspective transformation to a bird's eye view. So far I have tried getPerspectiveTransform and findHomography and then passed the result to warpPerspective. The results are quite close, but a skew in TL and BR is present. Also, the contour areas are not translated equally post-transformation.
The contour is a square with multiple shapes inside.
Any suggestions on how to proceed?
Code block of what I have done so far.
std::vector<Point2f> quad_pts;
std::vector<Point2f> squre_pts;

cv::approxPolyDP(Mat(validContours[largest_contour_index]), contours_poly[0], epsilon, true);
if (contours_poly[0].size() > 4) return false;
for (int i = 0; i < 4; i++)
    quad_pts.push_back(contours_poly[0][i]);
if (!orderRectPoints(quad_pts))
    return false;

float widthTop = (float)distanceBetweenPoints(quad_pts[1], quad_pts[0]); // sqrt( pow(quad_pts[1].x - quad_pts[0].x, 2) + pow(quad_pts[1].y - quad_pts[0].y, 2));
float widthBottom = (float)distanceBetweenPoints(quad_pts[2], quad_pts[3]); // sqrt( pow(quad_pts[2].x - quad_pts[3].x, 2) + pow(quad_pts[2].y - quad_pts[3].y, 2));
float maxWidth = max(widthTop, widthBottom);

float heightLeft = (float)distanceBetweenPoints(quad_pts[1], quad_pts[2]); // sqrt( pow(quad_pts[1].x - quad_pts[2].x, 2) + pow(quad_pts[1].y - quad_pts[2].y, 2));
float heightRight = (float)distanceBetweenPoints(quad_pts[0], quad_pts[3]); // sqrt( pow(quad_pts[0].x - quad_pts[3].x, 2) + pow(quad_pts[0].y - quad_pts[3].y, 2));
float maxHeight = max(heightLeft, heightRight);

int mDist = (int)max(maxWidth, maxHeight);

// transform TO points
const int offset = 50;
squre_pts.push_back(Point2f(offset, offset));
squre_pts.push_back(Point2f(mDist - 1, offset));
squre_pts.push_back(Point2f(mDist - 1, mDist - 1));
squre_pts.push_back(Point2f(offset, mDist - 1));

maxWidth += offset; maxHeight += offset;
Size matSize((int)maxWidth, (int)maxHeight);

Mat transmtx = getPerspectiveTransform(quad_pts, squre_pts);
// Mat homo = findHomography(quad_pts, squre_pts);
warpPerspective(mRgba, mRgba, transmtx, matSize);
return true;
Link to transformed image
Image pre-transformation
corner on pre-transformed image
Corners from CornerSubPix
Your original pre-transformation image is not so good: the squares have different sizes there and it looks wavy. The results you get are quite good given the quality of your input.
You could try to calibrate your camera (https://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html) to compensate for lens distortion, and your results may improve.
EDIT: Just to summarize the comments below, approxPolyDP may not locate the corners properly if the square has rounded corners or is blurred. You may need to improve the corner location by other means, such as a sharper original image, different preprocessing (median filter or threshold, as you suggest in the comments), or other algorithms for finer corner location (such as using the cornerSubPix function, or detecting the sides with a Hough transform and then calculating their intersections).
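For reference, refining coarse corner estimates with cornerSubPix might look roughly like the sketch below (Python here for brevity; the input file, the initial corner coordinates, the window size and the termination criteria are all placeholders to adapt to the actual image):

import cv2
import numpy as np

# Rough sketch: push approximate corner positions to sub-pixel accuracy.
# The starting corners would normally come from approxPolyDP or similar.
gray = cv2.imread('board.png', cv2.IMREAD_GRAYSCALE)
corners = np.array([[12, 15], [210, 18], [208, 220], [10, 218]], dtype=np.float32).reshape(-1, 1, 2)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)   # refines 'corners' in place
print(corners.reshape(-1, 2))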

How do I find an object in image/video knowing its real physical dimension?

I have a sample of images and would like to detect an object among others in the image/video, already knowing in advance the real physical dimensions of that object. I have one image sample (it is an airplane door) and would like to find the window in the airplane door, knowing its physical dimensions (let's say it has an inner radius of 20 cm and an outer radius of 23 cm) and its real-world position in the door (for example, its minimal distance to the door frame is 15 cm). I also know my camera resolution in advance. Is there any MATLAB or OpenCV C++ code that can do that automatically with image processing?
Here is my image sample
And more complex image with round logos.
I ran the code on the second, more complex image and do not get the same results. Here is the resulting image.
You are looking for a circle in the image, so I suggest you use the Hough circle transform.
Convert image to gray
Find edges in the image
Use the Hough circle transform to find circles in the image.
For each candidate circle, sample the values along the circle and accept it if they correspond to the predefined values.
The code:
clear all

% Parameters
minValueWindow = 90;
maxValueWindow = 110;

% Read file
I = imread('image1.jpg');
Igray = rgb2gray(I);
[row,col] = size(Igray);

% Edge detection
Iedge = edge(Igray,'canny',[0 0.3]);

% Hough circle transform
rad = 40:80; % The approximate radius in pixels
detectedCircle = {};
detectedCircleIndex = 1;
for radIndex=1:1:length(rad)
    [y0detect,x0detect,Accumulator] = houghcircle(Iedge,rad(1,radIndex),rad(1,radIndex)*pi/2);
    if ~isempty(y0detect)
        circles = struct;
        circles.X = x0detect;
        circles.Y = y0detect;
        circles.Rad = rad(1,radIndex);
        detectedCircle{detectedCircleIndex} = circles;
        detectedCircleIndex = detectedCircleIndex + 1;
    end
end

% For each detection run a color filter
ang = 0:0.01:2*pi;
finalCircles = {};
finalCircleIndex = 1;
for i=1:1:detectedCircleIndex-1
    rad = detectedCircle{i}.Rad;
    xp = rad*cos(ang);
    yp = rad*sin(ang);
    for detectedPointIndex=1:1:length(detectedCircle{i}.X)
        % Take each detected center and sample the gray image
        samplePointsX = round(detectedCircle{i}.X(detectedPointIndex) + xp);
        samplePointsY = round(detectedCircle{i}.Y(detectedPointIndex) + yp);
        sampleValueInd = sub2ind([row,col],samplePointsY,samplePointsX);
        sampleValueMean = mean(Igray(sampleValueInd));
        % Check if the circle color is good
        if (sampleValueMean > minValueWindow && sampleValueMean < maxValueWindow)
            circle = struct();
            circle.X = detectedCircle{i}.X(detectedPointIndex);
            circle.Y = detectedCircle{i}.Y(detectedPointIndex);
            circle.Rad = rad;
            finalCircles{finalCircleIndex} = circle;
            finalCircleIndex = finalCircleIndex + 1;
        end
    end
end

% Find the main circle by merging close hypotheses together
for finaCircleInd=1:1:length(finalCircles)
    circleCenter(finaCircleInd,1) = finalCircles{finaCircleInd}.X;
    circleCenter(finaCircleInd,2) = finalCircles{finaCircleInd}.Y;
    circleCenter(finaCircleInd,3) = finalCircles{finaCircleInd}.Rad;
end
[ind,C] = kmeans(circleCenter,2);
c = [length(find(ind==1)); length(find(ind==2))];
[~,maxInd] = max(c);
xCircle = median(circleCenter(ind==maxInd,1));
yCircle = median(circleCenter(ind==maxInd,2));
radCircle = median(circleCenter(ind==maxInd,3));

% Plot circle
imshow(Igray);
hold on
ang = 0:0.01:2*pi;
xp = radCircle*cos(ang);
yp = radCircle*sin(ang);
plot(xCircle+xp, yCircle+yp, 'Color','red', 'LineWidth',5);
The resulting image:
Remarks:
For other images you will still have to fine-tune several parameters, like the radius range you search over, the color window, the Hough circle threshold, and the Canny edge thresholds.
In the code I searched for circles with a radius from 40 to 80 pixels. Here you can use your prior information about the real-world radius of the window and the resolution of the camera. If you know approximately the distance from the camera to the airplane, the resolution of the camera, and the window radius in cm, you can use this to estimate the radius in pixels and feed it to the Hough circle transform.
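As a rough, back-of-the-envelope illustration of that conversion (a simple pinhole-camera sketch in Python; the focal length and distance below are made-up placeholders):

# Rough pinhole-camera estimate of the expected window radius in pixels:
# radius_px ~= focal_length_px * radius_metres / distance_metres.
focal_length_px = 1400.0      # focal length expressed in pixels (placeholder)
distance_m = 5.0              # approximate camera-to-door distance in metres (placeholder)
window_radius_m = 0.23        # outer window radius, 23 cm

radius_px = focal_length_px * window_radius_m / distance_m
print("expected radius: about %.0f px" % radius_px)   # ~64 px with these numbers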
I wouldn't worry too much about the exact geometry and calibration and rather find the window by its own characteristics.
Binarization works relatively well, be it on the whole image or in a large region of interest.
Then you can select the most likely blob based on its approximate area and/or circularity.
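A minimal sketch of that binarize-then-pick idea (Python, assuming the OpenCV 2.4-style findContours; the file name, Otsu thresholding, minimum area and circularity cutoff are assumptions to adjust):

import cv2
import numpy as np

gray = cv2.imread('door.jpg', cv2.IMREAD_GRAYSCALE)
# Binarize (Otsu here, but any threshold that separates the window works)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, hier = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

best = None
for cnt in contours:
    area = cv2.contourArea(cnt)
    perimeter = cv2.arcLength(cnt, True)
    if perimeter == 0 or area < 500:                     # skip tiny blobs
        continue
    circularity = 4 * np.pi * area / perimeter ** 2      # 1.0 for a perfect circle
    if circularity > 0.8 and (best is None or area > best[0]):
        best = (area, cnt)

if best is not None:
    cv2.drawContours(gray, [best[1]], -1, 255, 3)        # outline the chosen blob
cv2.imshow('window candidate', gray)
cv2.waitKey(0)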

How to remove border components in Python 2.7 using Opencv

I want to remove the components which are touching the border of the image.
I'm using OpenCV 2.4.10 and Python 2.7.
I have done HSV conversion and THRESHOLD_BINARY of the image; next I want to remove the components (objects) which are touching the border of the image.
It was explained in Matlab here - http://blogs.mathworks.com/steve/2007/09/04/clearing-border-components/
but I want to do in Python using OpenCV.
Please explain the code to me.
There is no direct method in OpenCV to do that. You can write a function using floodFill and loop over the border pixels as seed points.
floodFill(dstImg,seed,Scalar (0));
where:
dstImg : Output with border removed.
seed : [(x,y) points] All the border co-ordinates
Scalar(0) : The color to fill with when a connected region around a seed point is found. Hence (0), since in your case you want to fill it with black.
Sample:
int totalRows = srcImg.rows;
int totalCols = srcImg.cols;
int strt = 0, flg = 0;
int iRows = 0, jCols = 0;

while (iRows < srcImg.rows)
{
    if (flg == 1)
        totalRows = -1;
    Point seed(strt, iRows);
    iRows++;
    floodFill(dstImg, seed, Scalar(0));
    if (iRows == totalRows)
    {
        flg++;
        iRows = 0;
        strt = totalCols - 1;
    }
}
Do the same for the columns (the top and bottom borders).
Hope it helps.
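Since the question asks for Python, a rough Python equivalent of that idea might look like the sketch below (the input file name is a placeholder, and the image is assumed to be binary with white objects):

import cv2
import numpy as np

# Clear components touching the border by flood-filling black from every
# white border pixel of a binary image.
binary = cv2.imread('thresh.png', cv2.IMREAD_GRAYSCALE)
h, w = binary.shape
mask = np.zeros((h + 2, w + 2), np.uint8)    # floodFill wants a mask 2 px larger

for x in range(w):                           # top and bottom rows
    for y in (0, h - 1):
        if binary[y, x] == 255:
            cv2.floodFill(binary, mask, (x, y), 0)
for y in range(h):                           # left and right columns
    for x in (0, w - 1):
        if binary[y, x] == 255:
            cv2.floodFill(binary, mask, (x, y), 0)

cv2.imshow('border components removed', binary)
cv2.waitKey(0)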
Not very elegant, but you could enclose each contour in a bounding rectangle and test whether the coordinates of this rectangle fall on or outside the boundary of the image (im):
for c in contours:
    include = True
    # omit this contour if it touches the edge of the image
    x, y, w, h = cv2.boundingRect(c)
    if x <= 1 or y <= 1:
        include = False
    if x + w + 1 >= im.shape[1] or y + h + 1 >= im.shape[0]:
        include = False
    # draw the contour
    if include:
        cv2.drawContours(im, [c], -1, (255, 0, 255), 2)

OpenCV: Calculating new red pixel value

I'm currently aiming to adjust the red pixels in an image (more specifically, an eye region to remove red eyes caused by flash), and this works well, but the issue I'm getting is sometimes green patches appear on the skin.
This is a good result (before and after):
I realize why this is happening, but when I adjust the threshold to a higher value (meaning the red intensity must be stronger), fewer red pixels are picked up and changed, i.e.:
The lower the threshold, the more green shows up on the skin.
I was wondering if there was an alternate method to what I'm currently doing to change the red pixels?
int lcount = 0;
for (int y = 0; y < lcroppedEye.rows; y++)
{
    for (int x = 0; x < lcroppedEye.cols; x++)
    {
        double b = lcroppedEye.at<cv::Vec3b>(y, x)[0];
        double g = lcroppedEye.at<cv::Vec3b>(y, x)[1];
        double r = lcroppedEye.at<cv::Vec3b>(y, x)[2];

        double redIntensity = r / ((g + b) / 2);
        // currently causes issues with non-red-eye images
        if (redIntensity >= 1.8)
        {
            double newRedValue = (g + b) / 2;
            cv::Vec3b pixelColor(newRedValue, g, b);
            lroi.at<cv::Vec3b>(cv::Point(x, y)) = pixelColor;
            lcount++;
        }
    }
}
EDIT: I can possibly add in a check to ensure the new RGB values are low enough, and so R, G, B values are similar/close values so black/grey pixels are written out only... or have a range of RGB values (greenish) which aren't allowed... would that work?
Adjusting color in RGB space has caveats like the greenish areas you faced. Convert the R, G, B values to a better color space, like HSV or LUV.
I suggest you go for HSV to detect and change the red-eye colors. R/(G+B) is not a good way of calculating red intensity: it means you are calling (R=10, G=1, B=0) a very red color, while it is actually nearly black. Take a look at the comparison below:
So you'd better check whether Saturation and Value are high, which is the case for a red-eye color. If you encounter other high-intensity colors, you may additionally check that the Hue is in a range like [0-20] or [340-359]. But even without this, you are still safe against white itself, as it has a very low saturation and you won't select white areas anyway.
That was for the selection. For changing the color, it is again better not to use RGB, as changes in that space are not linear in the way we perceive colors. Looking at the image above, you can see that lowering both the saturation and value would be a good start. But you may experiment with it and see what looks better. Maybe you'll be fine with a dark gray always; that would mean setting Saturation to zero and lowering the Value a bit. You may think a dark brown would be better: go for a low saturation and value, but set Hue to something about 30 degrees.
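To make that concrete, a rough sketch (Python here for compactness, although the question is in C++; the hue, saturation and value thresholds are guesses to tune, and the file name is a placeholder):

import cv2
import numpy as np

# Select strongly red, saturated, bright pixels (typical of red-eye) in HSV
# and desaturate/darken them. All thresholds are illustrative guesses.
eye_bgr = cv2.imread('eye_roi.png')
hsv = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

red_hue = np.logical_or(h <= 10, h >= 170)         # OpenCV hue range is 0-179
mask = np.logical_and.reduce((red_hue, s > 150, v > 100))

s[mask] = 0                                        # drop saturation to zero
v[mask] = (v[mask] * 0.6).astype(np.uint8)         # darken a bit
corrected = cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)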
References that may help you:
Converting color values in OpenCV
An online tool to experiment with RGB and HSV colors
It may be better to change
double redIntensity = r / ((g + b) / 2);
to
double redIntensity = r / ((g+b+1) / 2);
because g+b can be equal to 0, and you'll get NAN.
Also take a look at the cv::floodFill method.
Maybe it is better to ignore the color information in the red zones altogether, since the color information in the extra-red area is too distorted by the extra red values. So the new values could be:
newRedValue = (g+b)/2; newGreenValue = newRedValue; newBlueValue = newRedValue;
Even if you detect the wrong red area, desaturating it will give a better result than a greenish area.
You can also use a morphological closing operation (with a circular structuring element) to avoid gaps in your red-area mask. So you will need to perform three steps: 1. find the red areas and create a mask for them, 2. apply a morphological closing to the mask, 3. desaturate the image using this mask (see the sketch below).
And yes, don't use r / ((g+b)/2) as it can lead to a division-by-zero error.
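A possible sketch of those three steps (Python; the kernel size and the file name are made-up, and the 1.8 threshold is the one from the question):

import cv2
import numpy as np

# 1. find red pixels and build a mask
eye = cv2.imread('eye_roi.png').astype(np.float32)
b, g, r = cv2.split(eye)
red_intensity = r / ((g + b + 1) / 2)              # +1 avoids division by zero
mask = (red_intensity >= 1.8).astype(np.uint8) * 255

# 2. close small gaps in the mask with a circular structuring element
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# 3. desaturate the masked pixels: set all three channels to (g + b) / 2
gray_value = (g + b) / 2
for channel in (b, g, r):
    channel[mask > 0] = gray_value[mask > 0]
corrected = cv2.merge((b, g, r)).astype(np.uint8)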
Prepare a mask the same size as your lcroppedEye image, which is initially all black (I'll call this image maskImage here onwards).
For every pixel in lcroppedEye(row, col) that pass your (redIntensity >= 1.8) condition, set the maskImage(row, col) pixel to white.
When you are done with all the pixels in lcroppedEye, maskImage will have all redeye-like pixels in white.
If you perform a connected component analysis on this maskImage, you should be able to filter out other regions by considering circle- or disk-like features, etc.
Now you can use this maskImage as a mask to apply the color transformation to the ROI of the original image.
(You may have to do some preprocessing on maskImage before moving on to the connected component analysis. Also, you can replace the pixel loop in the question with the split, divide and threshold functions unless there's a special reason to iterate through pixels; a rough sketch follows.)
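For illustration, building the mask without a pixel loop and keeping only disk-like components might look roughly like this (Python, assuming the OpenCV 2.4-style findContours; the thresholds and file name are assumptions):

import cv2
import numpy as np

# Build the red-intensity mask with whole-image operations instead of a loop
eye = cv2.imread('eye_roi.png')
b, g, r = [c.astype(np.float32) for c in cv2.split(eye)]
red_intensity = cv2.divide(r, (g + b + 1) / 2)
_, mask = cv2.threshold(red_intensity, 1.8, 255, cv2.THRESH_BINARY)
mask = mask.astype(np.uint8)

# Keep only roughly disk-shaped connected components
contours, hier = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
clean_mask = np.zeros_like(mask)
for cnt in contours:
    area = cv2.contourArea(cnt)
    perimeter = cv2.arcLength(cnt, True)
    if perimeter > 0 and 4 * np.pi * area / perimeter ** 2 > 0.6:
        cv2.drawContours(clean_mask, [cnt], -1, 255, -1)
# clean_mask can now drive the color replacement on the original ROI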
The problem seems to be that you replace pixels regardless of the presence of any red eye, so you must somehow test whether there are any high red values (more red than your skin).
My guess is that in the areas where there is a reflection there will also be specific blue and green values, either high or low, that should be checked; so you would, for example, need high red values combined with low blue and/or low green values.
// first pass, getting the highest red value
int highRed = 0;
cv::Point redPos = cv::Point(0, 0);
for (int y = 0; y < lcroppedEye.rows; y++)
{
    for (int x = 0; x < lcroppedEye.cols; x++)
    {
        double r = lcroppedEye.at<cv::Vec3b>(y, x)[2];
        if (r > highRed)
        {
            highRed = (int)r;
            redPos = cv::Point(x, y);
        }
    }
}

// decide if it's red enough; a good minRed value still needs to be found.
if (highRed < minRed)
    return;
Original code here with the following changes.
// avoid division by zero, code from #AndreySmorodov
double redIntensity = r / ((g + b + 1) / 2);

// add check for actual red colour.
if (redIntensity >= 1.8 && r > highRed * 0.75)
// potential add check for low absolute r/b values.
{
    double newRedValue = (g + b) / 2;
    cv::Vec3b pixelColor(newRedValue, g, b);
    lroi.at<cv::Vec3b>(cv::Point(x, y)) = pixelColor;
    lcount++;
    }
}