I made a C++ program using OpenCV that uses my webcam to detect my face and my eyes. I would now like to determine the centers of my pupils and then the point or area of gaze on my screen. Does anybody know how to do that? Please note that my program uses a simple computer webcam.
Thank you in advance for your advice.
I think my Optimeyes project here:
https://github.com/LukeAllen/optimeyes
does what you're looking for: pupil detection and gaze tracking. Its included "Theory Paper" PDF discusses the principles of operation and has references to other papers. The project was written using the Python version of OpenCV, but you're welcome to port it to C++!
In case you are looking to identify the point of gaze (POG) on your laptop screen, below is a method you can use:
Using shape_predictor_68_face_landmarks.dat (dlib's 68-point model), get the eye landmarks (six points per eye).
Calculate the center of the eye (Ex, Ey) from the eye landmarks.
Get the iris center (Ix, Iy), e.g. from the answer above or from a Hough circle transform (HCT).
Calculate the eye's width and height from the landmarks:
W(eye) = RightCorner(x) - LeftCorner(x)
H(eye) = LowerLid(y) - UpperLid(y)
Scaling factors: R(x) = W(screen)/W(eye)
R(y) = H(screen)/H(eye)
POG(x) = (W(screen)/2) + (R(x) * r(x))
POG(y) = (H(screen)/2) + (R(y) * r(y))
r(x) and r(y) indicate the distance of the iris center (COI) from the eye center (COE), calculated as follows:
r(x) = COI(x) - COE(x)
r(y) = COI(y) - COE(y)
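As a rough C++ sketch of these formulas (the function name, the example screen size, and the assumption that you already extracted the eye and iris centers from the landmarks are mine, not part of the original method):

#include <opencv2/core/core.hpp>

// Map the iris offset from the eye center to a point on the screen,
// following the POG formulas above. 1920x1080 is just an example screen.
cv::Point2f pointOfGaze(cv::Point2f eyeCenter,   // (Ex, Ey) = COE
                        cv::Point2f irisCenter,  // (Ix, Iy) = COI
                        float eyeW, float eyeH,  // W(eye), H(eye) in pixels
                        float screenW = 1920.0f, float screenH = 1080.0f)
{
    float rx = irisCenter.x - eyeCenter.x;   // r(x)
    float ry = irisCenter.y - eyeCenter.y;   // r(y)
    float Rx = screenW / eyeW;               // scaling factor R(x)
    float Ry = screenH / eyeH;               // scaling factor R(y)
    return cv::Point2f(screenW / 2.0f + Rx * rx,
                       screenH / 2.0f + Ry * ry);
}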
Hope this helps!!
I'm trying to project a point from 3D to 2D in OpenCV with C++. At the moment, I'm using cv::projectPoints(), but it's just not working out.
But first things first. I'm trying to write a program that finds the intersection between a point cloud and a line in space. So I calibrated two cameras, did rectification, and did matching using SGBM. Finally, I projected the disparity map to 3D using reprojectImageTo3D(). That all works very well, and in MeshLab I can visualize my point cloud.
After that, I wrote an algorithm to find the intersection between the point cloud and a line which I coded manually. That works fine, too. I found a point in the point cloud about 1.5 mm away from the line, which is good enough for a start. So I took this point and tried to project it back to the image, so I could mark it. But here is the problem:
The point is not inside the image anymore. As I took an intersection in the middle of the image, this should not be possible. I think the problem could be in the coordinate systems, as I don't know in which coordinate system the point cloud is expressed (left camera, right camera, or something else).
My projectPoints call looks like this:
projectPoints(intersectionPoint3D, R, T, cameraMatrixLeft, distortionCoeffsLeft, intersectionPoint2D, noArray(), 0);
R and T are the rotation and translation from one camera to the other (I got them from stereoCalibrate). This might be my mistake, but how can I fix it? I also tried setting them to (0,0,0), but that doesn't work either. I also tried converting the R matrix to a vector using Rodrigues. Still the same problem.
I'm sorry if this has been asked before, but I'm not sure how to search for this problem. I hope my text is clear enough to help me... if you need more information, I will gladly provide it.
Many thanks in advance.
You have a 3D point and you want the corresponding 2D location of it, right? If you have the camera calibration matrix (a 3x3 matrix), you can project the point into the image:
cv::Point2d get2DFrom3D(cv::Point3d p, cv::Mat1d CameraMat)
{
    // Pinhole projection: u = fx * X/Z + cx, v = fy * Y/Z + cy
    cv::Point2d pix;
    pix.x = (p.x * CameraMat(0, 0)) / p.z + CameraMat(0, 2);
    pix.y = (p.y * CameraMat(1, 1)) / p.z + CameraMat(1, 2);
    return pix;
}
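For example, assuming the point cloud from reprojectImageTo3D() is expressed in the left camera's frame (so no extra R/T is needed), a hypothetical call with placeholder intrinsics might look like this:

// Placeholder intrinsics; use cameraMatrixLeft from your calibration instead.
cv::Mat1d K = (cv::Mat1d(3, 3) << 700.0,   0.0, 320.0,
                                    0.0, 700.0, 240.0,
                                    0.0,   0.0,   1.0);
cv::Point3d intersection(0.05, -0.02, 1.25);  // 3D point, left camera frame
cv::Point2d pixel = get2DFrom3D(intersection, K);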
This is quite complicated to explain, so I will do my best; sorry if there is anything I missed out. Let me know and I will rectify it.
My question is: I have been tasked to draw this shape, a crescent moon.
(image source: learnersdictionary.com)
This is to be done using C++ to write code that will calculate the points on this shape.
Important details.
User Input - Centre Point (X, Y), number of points to be shown, Font Size (influences radius)
Output - List of co-ordinates on the shape.
The overall aim, once I have the points, is to put them into a graph in Excel, which will hopefully draw it for me at the size the user input!
I know that the maximum radius is 165 mm and the minimum is 35 mm. I have decided that my base Font Size shall be 20. I then did some thinking and came up with the equation:
Radius = (Chosen Font Size / 20) * 130. This is just an estimate; I realise it is probably not right, but I thought it could at least work as a template.
I then decided that I should create two different circles with two different centre points, then link them together to create the shape. I thought that the INSIDE line would have to have a larger radius and a centre point further along the X-axis (Y staying constant), as then it could cut into the outside line.*
*(I know this is not what it looks like on the picture, just my chain of thought as it will still give the same shape)
So I defined the 2nd centre point as (X+4, Y). (Again, just an estimate; I thought it doesn't really matter how far apart they are.)
I then decided Radius 2 = (Chosen Font Size/20)*165 (max radius)
So, I have my 2 Radii, and two centre points.
This is my code so far (it works, and everything is declared/input above):
for(int i=0; i<=n; i++) // output displayed to user
{
    Xnew = -i*(Y+R1)/n;                             // calculate x coordinate
    Ynew = pow(((Y+R1)*(Y+R1)) - (Xnew*Xnew), 0.5); // calculate y coordinate on circle 1
}

AND

for(int j=0; j<=n; j++) // calculation for angles and output displayed to user
{
    Xnew2 = -j*(Y+R2)/(n + 0.00001*(n==0));         // calculate x coordinate (guard against n==0)
    Ynew2 = Y*pow(fabs(1 - pow(Xnew2/X, 2)), 0.5);  // calculate y coordinate on circle 2
    if(fabs(Ynew2) <= R1)                           // fabs, not abs: these are doubles
        cout<<"\n("<<Xnew2<<", "<<Ynew2<<")"<<endl;
}
The problem I am having in drawing the crescent moon is that I cannot get the two circles to have the same starting point.
I have managed to get the results into Excel; everything in that regard works. But when I plot the points on a graph in Excel, they do not have the same starting points. It's essentially just two half circles, one smaller than the other (each stops at the Y axis, giving the half-doughnut shape).
If this makes sense: I am trying to get two partial circles to draw the shape such that they have the same start and end points.
If anyone has any suggestions on how to do this, it would be great; currently all I am getting is more of a 'half doughnut' shape, due to the circles not being connected.
So. Does anyone have any hints/tips/links they can share with me on how to fix this exactly?
Thanks again, any problems with the question, sorry will do my best to rectify if you let me know.
Cheers
Formula for points on a circle:
(x-h)^2 + (y-k)^2 = r^2
The center of the circle is at (h, k).
Solving for y:
y1,2 = k +/- sqrt(-x^2 + 2hx + r^2 - h^2)
So now, if the inner circle has its center at (h, k), the half-moon will begin at h and will stretch to h - r2.
Now you need to solve the endpoint formula for the inner circle and the outer circle and plot them. Per x you should receive up to 4 points (solve the equation two times, each giving two solutions).
I did not implement it, but this would be my train of thought...
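A minimal C++ sketch of that train of thought (the centers, radii, and point count below are made-up example values, not the asker's):

#include <cmath>
#include <iostream>

// For each x across the circle, print both solutions
// y = k +/- sqrt(r^2 - (x-h)^2)
void printCirclePoints(double h, double k, double r, int n)
{
    for (int i = 0; i <= n; i++)
    {
        double x = (h - r) + (2.0 * r * i) / n;   // sweep x from h-r to h+r
        double disc = r * r - (x - h) * (x - h);  // term under the square root
        if (disc < 0.0) continue;                 // no real solution at this x
        double s = std::sqrt(disc);
        std::cout << "(" << x << ", " << k + s << ") "
                  << "(" << x << ", " << k - s << ")\n";
    }
}

int main()
{
    printCirclePoints(0.0, 0.0, 130.0, 40);  // outer circle
    printCirclePoints(4.0, 0.0, 165.0, 40);  // inner circle, shifted along x
    return 0;
}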
I am struggling with the interpretation of Kinect depth data.
In order to obtain real-world distance from the Kinect, I used the following formula:
// fill a lookup table indexed by the raw 11-bit depth value i
for(int i = 0; i < 2048; i++){
    if(i < 2047){
        depthToMeterTable[i] = i * -0.0030711016 + 3.3309495161;
    }
    else{
        depthToMeterTable[i] = 0;
    }
}
This formula gives a pretty good distance estimate.
However, I obtain strange output when visualizing a 90° wall corner.
The following image contains two different pieces of information. First, the violet lines represent the wall as I SHOULD see it: a 90° corner. The red dots represent the wall as seen by the Kinect. As you can see, the angle between the two planes is now bigger.
http://img843.imageshack.us/img843/4061/kinectbias.jpg
Do you have any idea where this bias comes from and how I could correct it?
Thank you for reading,
Al_th
I'm not familiar with that conversion formula (I'm also not sure how your depthToMeterTable gets filled, i.e. what formula is used there).
There's a built-in function in libfreenect for that, though: freenect_camera_to_world.
Before that utility function was added, I used Matt Fischer's conversion functions (RawDepthToMeters and DepthToWorld).
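From memory, those conversion functions look roughly like this (the intrinsic constants are the commonly quoted Kinect v1 approximations, so treat them as assumptions and calibrate your own device):

// Raw 11-bit depth to meters. Note the reciprocal: the linear expression
// approximates 1/distance, not the distance itself.
float RawDepthToMeters(int depthValue)
{
    if (depthValue < 2047)
        return 1.0f / (depthValue * -0.0030711016f + 3.3309495161f);
    return 0.0f;
}

struct Vec3 { float x, y, z; };

// Back-project pixel (x, y) with its raw depth value into 3D camera space
Vec3 DepthToWorld(int x, int y, int depthValue)
{
    static const double fx_d = 1.0 / 5.9421434211923247e+02;
    static const double fy_d = 1.0 / 5.9104053696870778e+02;
    static const double cx_d = 3.3930780975300314e+02;
    static const double cy_d = 2.4273913761751615e+02;

    double depth = RawDepthToMeters(depthValue);
    Vec3 result;
    result.x = float((x - cx_d) * depth * fx_d);
    result.y = float((y - cy_d) * depth * fy_d);
    result.z = float(depth);
    return result;
}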
HTH
EDIT: If the question is badly written, have a look at the video (3), same link as at the bottom of this page.
I am trying to draw a very simple Bézier curve using ccBezierConfig and Cocos2D. Reading Wikipedia, I tried to understand control points a bit and found this image:
http://upload.wikimedia.org/wikipedia/commons/thumb/b/bf/Bezier_2_big.png/240px-Bezier_2_big.png
If you look at the Wikipedia page from which I took the image, there is a cool animation. Have a look here.
This is the code I used:
CCSprite *r = [CCSprite spriteWithFile:@"hi.png"];
r.anchorPoint = CGPointMake(0.5f, 0.5f);
r.position = CGPointMake(0.0f, 200.0f);
ccBezierConfig bezier;
bezier.controlPoint_1 = CGPointMake(0.0f, 200.0f);
bezier.controlPoint_2 = CGPointMake(180.0f, 330.0f);
bezier.endPosition = CGPointMake(320.0f,200.0f);
id bezierForward = [CCBezierBy actionWithDuration:1 bezier:bezier];
[r runAction:bezierForward];
[self addChild:r z:0 tag:77];
The app runs in portrait mode, and my speculation, matching the control points in image 1 to the ones in my code, was that:
sprite.position should correspond to P0
bezier.controlPoint_1 should correspond to P0
bezier.controlPoint_2 should correspond to P1
bezier.endPosition should correspond to P2
I tried two approaches: setting the position of the sprite, and not setting it.
I assumed that the position should be the same as controlPoint_1, since in Wikipedia schema 1 there are only three points.
I get an output I don't quite understand... I made a little video of it; it's a private YouTube video:
to see the video click here
OK, the answer is rather simple...
The quadratic Bézier curve is not the one drawn by cocos2d. Instead, check the same wiki page for the cubic Bézier curve. That's what you should be looking at.
Initial position is the position of the sprite (P0)
Control points 1 & 2 are P1 & P2, respectively.
The end position is the curve's end point (P3).
Nice video, btw, I enjoyed it xD.
Evidence from the CCActionInterval.h file in the cocos2d library:
/** An action that moves the target with a cubic Bezier curve by a certain distance.
*/
@interface CCBezierBy : CCActionInterval <NSCopying>
{
ccBezierConfig config;
CGPoint startPosition;
}
This link should help you construct your cubic Bézier curves visually, instead of tweaking the values and running the app.
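Putting that together with the code from the question (same thread, so this is Objective-C; the control-point coordinates below are just illustrative):

CCSprite *r = [CCSprite spriteWithFile:@"hi.png"];
r.anchorPoint = CGPointMake(0.5f, 0.5f);
r.position = CGPointMake(0.0f, 200.0f);               // P0: start of the curve

ccBezierConfig bezier;
bezier.controlPoint_1 = CGPointMake(100.0f, 330.0f);  // P1
bezier.controlPoint_2 = CGPointMake(220.0f, 330.0f);  // P2
bezier.endPosition    = CGPointMake(320.0f, 200.0f);  // P3: end of the curve

id bezierForward = [CCBezierBy actionWithDuration:1 bezier:bezier];
[r runAction:bezierForward];

Also keep in mind that CCBezierBy interprets its points relative to the sprite's starting position, while CCBezierTo takes absolute coordinates.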
I am building a recognition program in C++, and to make it more robust, I need to be able to find the distance of an object in an image.
Say I have an image that was taken 22.3 inches away from an 8.5 x 11 picture. The system correctly identifies that picture in a box with dimensions 319 pixels by 409 pixels.
What is an effective way to relate the actual height and width (AH and AW) and the pixel height and width (PH and PW) to the distance (D)?
I am assuming that when I actually go to use the equation, PH and PW will be inversely proportional to D, and AH and AW are constants (as the recognized object will always be one whose width and height the user can indicate).
I don't know if you changed your question at some point, but my first answer is quite complicated for what you want. You can probably do something simpler.
1) Long and complicated solution (more general problems)
First you need to know the size of the object.
You can look at computer vision algorithms. If you know the object (its dimensions and shape), your main problem is the problem of pose estimation (that is, finding the position of the object relative to the camera). You can look at [1] and [2] (for example; you can find other articles on it if you are interested) or search for POSIT and SoftPOSIT. You can formulate the problem as an optimization problem: find the pose that minimizes the "difference" between the real image and the expected image (the projection of the object given the estimated pose). This difference is usually the sum of the (squared) distances between each image point Ni and the projection P(Mi) of the corresponding object (3D) point Mi for the current parameters.
From this you can extract the distance.
For this you need to calibrate your camera (roughly, find the relation between a pixel position and the corresponding viewing angle).
Now you may not want to code all of this by yourself; you can use computer vision libraries such as OpenCV, Gandalf [3], ...
Now you may want to do something simpler (and approximate). If you can find the image distance between two points at the same "depth" (Z) from the camera, you can relate the image distance d to the real distance D with d = a*D/Z (where a is a parameter of the camera related to the focal length and the number of pixels, which you can find using camera calibration).
2) Short solution (for your simple problem)
But here is the (simple, short) answer: if your picture is on a plane parallel to the "camera plane" (i.e. it is perfectly facing the camera), you can use:
PH = a AH / Z
PW = a AW / Z
where Z is the depth of the plane of the picture and a is an intrinsic parameter of the camera.
For reference, the pinhole camera model relates image coordinates m = (u,v) to world coordinates M = (X,Y,Z) with:
m ~ K M
[u]   [ au  as  u0 ] [X]
[v] ~ [  0  av  v0 ] [Y]
[1]   [  0   0   1 ] [Z]

u = au * X/Z + as * Y/Z + u0
v = av * Y/Z + v0
where "~" means "proportional to" and K is the matrix of intrinsic parameters of the camera. You need to do camera calibration to find the K parameters. Here I assumed au=av=a and as=0.
You can recover the Z parameter from either of those equations (or take the average of both). Note that Z is not the distance from the object (which varies over the different points of the object) but the depth of the object (the distance between the camera plane and the object plane). But I guess that is what you want anyway.
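A tiny C++ sketch of the short solution (averaging the two estimates is my choice, not part of the derivation):

// Estimate depth Z from the known object size and its measured pixel size,
// assuming the object plane is parallel to the image plane.
double estimateDepth(double pixelH, double pixelW,    // PH, PW (pixels)
                     double actualH, double actualW,  // AH, AW (real units)
                     double a)                        // focal length in pixels
{
    double zFromH = a * actualH / pixelH;  // from PH = a * AH / Z
    double zFromW = a * actualW / pixelW;  // from PW = a * AW / Z
    return 0.5 * (zFromH + zFromW);        // average the two estimates
}

Plugging in the numbers from the question (11 in seen as 409 px and 8.5 in seen as 319 px at 22.3 in), both equations give a of roughly 830 pixels, so the model is at least self-consistent for that setup.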
[1] Linear N-Point Camera Pose Determination, Long Quan and Zhongdan Lan
[2] A Complete Linear 4-Point Algorithm for Camera Pose Determination, Lihong Zhi and Jianliang Tang
[3] http://gandalf-library.sourceforge.net/
If you know the size of the real-world object and the angle of view of the camera, then, assuming you know the horizontal angle of view alpha (*) and the horizontal resolution of the image xres, the distance dw to an object in the middle of the image that is xp pixels wide in the image and xw meters wide in the real world can be derived as follows (how is your trigonometry?):
# Distance in "pixel space" relates to distance in the real world
# (we take half of xres, xw and xp because we use the half angle of view):
(xp/2)/dp = (xw/2)/dw
dw = ((xw/2)/(xp/2))*dp = (xw/xp)*dp (1)
# we know xp and xw, we're looking for dw, so we need to calculate dp:
# we can do this because we know xres and alpha
# (remember, tangent = opposite/adjacent):
tan(alpha) = (xres/2)/dp
dp = (xres/2)/tan(alpha) (2)
# combine (1) and (2):
dw = ((xw/xp)*(xres/2))/tan(alpha)
# pretty print:
dw = (xw*xres)/(xp*2*tan(alpha))
(*) alpha = the angle between the camera axis and a line going through the leftmost point on the middle row of the image that is just visible.
Link to your variables:
dw = D, xw = AW, xp = PW
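Wrapped up as a small C++ helper (parameter names follow the derivation above; alpha must be in radians):

#include <cmath>

// dw = (xw * xres) / (xp * 2 * tan(alpha)), from the derivation above
double distanceToObject(double xw,     // real-world width of the object
                        int xp,        // its width in the image, in pixels
                        int xres,      // horizontal image resolution in pixels
                        double alpha)  // half the horizontal angle of view
{
    return (xw * xres) / (xp * 2.0 * std::tan(alpha));
}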
This may not be a complete answer, but it may push you in the right direction. Have you ever seen how NASA does it in those pictures from space, with those tiny crosses all over the images? That's how they get a fair idea of the depth and size of an object, as far as I know. The solution might be to have an object of known size and depth in the picture and then calculate the others relative to that. Time for you to do some research. If that's the way NASA does it, then it should be worth checking out.
I have got to say, this is one of the most interesting questions I have seen on Stack Overflow for a long time :D. I just noticed you have only two tags attached to this question; adding something more related to images might help you get better answers.