I want to draw a line through two given points in an image using OpenCV in Python. A line segment is often defined as "the shortest distance between two points," but I don't want the line to stop at the points; I want it to extend all the way to the borders of my image. Currently I am just using the cv2.line function, with 1 being the thickness of the line.
cv2.line(img,(x1,y1),(x2,y2),blue,1)
The algorithm I'm writing doesn't really matter; I just can't find any function in cv2 or numpy that draws a "line" through two points instead of between them.
Thanks in advance
EDIT: I solved it myself:
# slope and intercept of the line through (x1, y1) and (x2, y2)
a = (1.0*(y2 - y1)) / (1.0*(x2 - x1))
b = -a*x1 + y1
# intersect the line with the top and bottom image borders
y1_ = 0
y2_ = int(np.shape(img)[0])  # image height is shape[0], not shape[1] (width)
x1_ = int((y1_ - b) / a)
x2_ = int((y2_ - b) / a)
cv2.line(img, (x1_, y1_), (x2_, y2_), (colors[i][0], colors[i][1], colors[i][2]), 5)
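Note that this divides by a, so it fails for vertical lines (x1 == x2, where the slope is undefined) and for horizontal ones (a == 0). A minimal sketch of a more defensive variant (the helper name draw_full_line is my own):

import cv2

def draw_full_line(img, p1, p2, color, thickness=1):
    # draw the infinite line through p1 and p2, clipped to the image borders
    (x1, y1), (x2, y2) = p1, p2
    h, w = img.shape[:2]
    if x1 == x2:
        # vertical line: just span the full image height
        cv2.line(img, (x1, 0), (x1, h - 1), color, thickness)
        return
    a = float(y2 - y1) / float(x2 - x1)
    b = y1 - a * x1
    # evaluate y at the left and right borders; cv2.line clips internally,
    # so even very steep lines come out correctly
    cv2.line(img, (0, int(round(b))), (w - 1, int(round(a * (w - 1) + b))),
             color, thickness)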
Recently I started working on a project with OpenCV and C++.
I graph a curved line on Desmos and feed it into the program. The objective of the program is to measure how curved the line is, and how much to straighten it.
The red line you see in the image below (with the green and red dots on it) is the graphed line from Desmos. The purple line with hollow circles on it is the model line that needs to be met. I have written some basic functions to calculate the x value for a given y value on the straight line. I did this in order to align the points from the curved line with points on the straight line.
The code for calculating the values is:
double slopeOf(Point first, Point second) {
    return (second.y - first.y) / (second.x - first.x);
}

double f(Point first, Point second, int y) {
    // return x of point with y value, on given line
    return ((y - first.y) / slopeOf(first, second)) + first.x;
}
As you can see, the hollow points at the bottom of the line are centered on the straight line, and they are accurate. But the higher up you go on the line, the more the points stray away from it.
Why is this happening?
Also: I tried using the LineIterator in OpenCV, but I couldn't get the flexible results I get with my own functions.
Thanks in advance, any help is appreciated.
I have a QPainterPath that has two Elements, both of which are cubic Bezier curves, like this:
If I want to find a particular point along the path, I can use the pointAtPercent method. The documentation states that
When curves are present the percentage argument is mapped to the t parameter of the Bezier equations.
When I get the percentage, it's from 0 to 1 along the length of the entire path. That middle control point, for example, is at t = 0.46, when actually it's the end of the left Element (t = 1.0) and the start of the next (t = 0). So in my image if I get the percentage at the green circle, it'll be around 0.75. What I'd like is to get something like 0.5 for the green circle, i.e. the percentage of just the second Bezier.
So my question is, is there a way in Qt to determine the percentage value of a given Element instead of relative to the entire path length? In my example I happen to know the percentage value for the middle control point, but in general I won't know it, so I can't just scale the percentages or assume even distribution.
I'm using PyQt4 (Qt 4.8) if that matters. Thanks!
t scales along the total length(), but you can also know the length of individual segments and adjust t accordingly. The path's element is a rather specific term: there are 3 elements for each cubicTo, assuming no intervening position changes. An arbitrary path like yours will consist of a MoveToElement, a CurveToElement, two CurveToDataElements, another CurveToElement, and another two CurveToDataElements. You have to iterate the elements and extract the length of the first cubic to adjust the t.
A function extracting the first cubic, determining its length, and then using that to compute t2 from t would look similar to this (untested):
from PyQt4.QtCore import QPointF
from PyQt4.QtGui import QPainterPath

def t2(path, t):
    # map a whole-path percentage t to a percentage within the second cubic
    if path.elementCount() != 7:
        raise ValueError('invalid path element count')
    def pt(i):
        # elements expose x/y; wrap them in a QPointF for moveTo/cubicTo
        e = path.elementAt(i)
        return QPointF(e.x, e.y)
    # elements 0-3 describe the first cubic: MoveTo (start point), CurveTo
    # (1st control point), two CurveToData (2nd control point and end point)
    path1 = QPainterPath()
    path1.moveTo(pt(0))
    path1.cubicTo(pt(1), pt(2), pt(3))
    l = path.length()
    l1 = path1.length()  # length of the first cubic
    l2 = l - l1          # length of the second cubic
    return (t*l - l1)/l2
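A quick usage sketch (the control points here are made up):

path = QPainterPath()
path.moveTo(0, 0)
path.cubicTo(10, 40, 40, 40, 50, 0)     # first cubic
path.cubicTo(60, -40, 90, -40, 100, 0)  # second cubic
# a whole-path percentage of 0.75 lands inside the second cubic;
# t2 re-expresses it as a fraction of that cubic alone (about 0.5 here)
print(t2(path, 0.75))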
I have found GitHub code for pupil detection (Pupil Detection with Python and OpenCV) that explains how to detect the eye pupil, but only for one eye. I would like to detect both eyes. Please give me ideas on how I can detect both pupils from that code.
Thanks
Briefly looking over that code, it looks like it finds both eyes but then only gives you one. Just modify the code as needed to extract the two found blobs rather than just one. Lines 55-72 are where it prunes your candidate pool from some number of blobs (possible pupils) down to 1.
All of these lines of the form "if len(contours) >= n" are basically saying: if you still have more than one blob, try to cut one out. The thing is, you want the TWO biggest blobs, not one. So you need to rewrite those checks so that you eliminate all but the two largest blobs, and then draw circles on each of their centroids. As far as I can tell, nothing else should need modification.
Here is some sample code (untested) that may help. I don't know Python syntax well and just modified some code from your linked file:
while len(contours) > 2:
    # find the smallest blob and delete it
    minArea = 1000000  # anything at least the area of your image is safe here
    MAindex = 0        # index of the blob to remove
    currentIndex = 0
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area < minArea:
            minArea = area
            MAindex = currentIndex
        currentIndex = currentIndex + 1
    del contours[MAindex]   # remove the smallest contour
    del distanceX[MAindex]  # keep the parallel list from the linked file in sync
This will get you down to your two eye blobs; then you still need to add circle drawing for each blob center. (You need to delete all the "if len..." statements and replace them with this one while statement.)
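For the circle drawing, a minimal sketch using image moments for the centroids (frame is assumed to be the image variable from your linked file):

for cnt in contours:  # by now only the two largest blobs remain
    m = cv2.moments(cnt)
    if m['m00'] == 0:
        continue  # degenerate contour with no area
    cx = int(m['m10'] / m['m00'])
    cy = int(m['m01'] / m['m00'])
    cv2.circle(frame, (cx, cy), 10, (0, 0, 255), 2)  # mark each pupil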
I want to compute the fundamental matrix between two images. For that purpose I use SIFT features, which are matched via the cv::FlannBasedMatcher. Afterwards I call cv::findFundamentalMat(left_pts, right_pts, CV_RANSAC, 2.0, 0.99, ransac_mask) with the found matches.
To visualize the computed fundamental matrix I want to draw the epipolar lines. To do that I tried cv::computeCorrespondEpilines, and also a manual multiplication with a point and extraction of the resulting line equation (ax + by + c = 0).
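For reference, the two approaches look roughly like this (simplified; F is the matrix returned by cv::findFundamentalMat):

// lines in the left image corresponding to points in the right image
std::vector<cv::Vec3f> lines1;
cv::computeCorrespondEpilines(right_pts, 2, F, lines1);

// the manual variant: l = F^T * x' for a homogeneous right-image point x'
cv::Mat x = (cv::Mat_<double>(3, 1) << right_pts[0].x, right_pts[0].y, 1.0);
cv::Mat l = F.t() * x;  // l = (a, b, c) with a*x + b*y + c = 0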
To draw the lines I simply use this snippet (which I've found in many other examples):
for (std::vector<cv::Vec3f>::const_iterator it = lines1.begin(); it != lines1.end(); ++it)
{
    // line (a, b, c): intersect it with the left (x = 0) and right (x = cols) borders
    cv::line(left_new,
             cv::Point(0, -(*it)[2] / (*it)[1]),
             cv::Point(left.cols, -((*it)[2] + (*it)[0] * left.cols) / (*it)[1]),
             cv::Scalar(255, 255, 255));
}
But corresponding points are not on the epipolar lines. I've even tried it on already rectified images, but the lines are not horizontal.
Now I've also tested this example:
http://opencv-cookbook.googlecode.com/svn/trunk/Chapter%2009/estimateF.cpp
only to find out that it has the same problem.
Can someone share a working example for such a use case, or give me a hint about what could be going wrong? (The matches seem to be fine, so it has to be something with the actual estimation.)
I want to create an image of an object from its morphological skeleton. Is there any function in MATLAB, or C/C++ code, that does this? Thanks in advance.
Original image, and its skeleton (obtained using bwmorph(image,'skel',Inf)):
As stated in the comments above, bwmorph(..,'skel',Inf) gives you a binary image of the skeleton, which is not enough on its own to recover the original image.
On the other hand, if you had, for each skeleton pixel, the value returned by the distance transform, then you could successfully apply the inverse distance transform (as suggested by @belisarius):
Note that this implementation of InverseDistanceTransform is rather slow (I based it on a previous answer). It repeatedly uses POLY2MASK to get the pixels inside the specified circles, so there is room for improvement.
%# get binary image
BW = ~imread('http://img546.imageshack.us/img546/3154/hand2.png');

%# SkeletonTransform[]
skel = bwmorph(BW,'skel',Inf);
DD = double(bwdist(~BW));
D = zeros(size(DD));
D(skel) = DD(skel);

%# zero-centered unit circle
t = linspace(0,2*pi,50);
ct = cos(t);
st = sin(t);

%# InverseDistanceTransform[] : union of all disks centered around each
%# pixel of the distance transform, taking pixel values as radius
[r c] = size(D);
BW2 = false(r,c);
for j=1:c
    for i=1:r
        if D(i,j)==0, continue; end
        mask = poly2mask(D(i,j).*st + j, D(i,j).*ct + i, r, c);
        BW2(mask) = true;
    end
end

%# plot
figure
subplot(131), imshow(BW), title('original')
subplot(132), imshow(D,[]), title('Skeleton+DistanceTransform')
subplot(133), imshow(BW2), title('InverseDistanceTransform')
The result:
Depending on your object, you may be able to get a meaningful result using dilation (IMDILATE in Matlab).
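For instance, a rough sketch of a faster reconstruction that replaces the per-pixel POLY2MASK calls with one dilation per integer radius (reusing skel and D from the answer above; exact only up to the integer quantization of the radii):

BW3 = false(size(D));
for rad = 1:ceil(max(D(:)))
    %# every skeleton pixel with radius >= rad contributes a disk of
    %# radius rad; the union over all rad recovers the full disks
    BW3 = BW3 | imdilate(skel & (D >= rad), strel('disk', rad, 0));
end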
The function bwmorph supports code generation, as listed in "Image Processing functions for code generation". Write the code in a MATLAB function and use the codegen command to generate code. Code generation for bwmorph has been available since R2012b (see the MATLAB Release Notes).
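A minimal sketch (the wrapper name skeletonize and the fixed image size are my own assumptions):

%# skeletonize.m -- wrapper function to compile
function S = skeletonize(BW) %#codegen
S = bwmorph(BW, 'skel', Inf);
end

%# compile with, e.g.:
%# codegen skeletonize -args {coder.typeof(false, [480 640])}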