This is quite complicated to explain, so I will do my best; sorry if there is anything I have missed out, let me know and I will rectify it.
My question is: I have been tasked to draw this crescent moon shape (original image source: learnersdictionary.com).
This is to be done using C++ to write code that will calculate the points on this shape.
Important details.
User Input - Centre Point (X, Y), number of points to be shown, Font Size (influences radius)
Output - List of co-ordinates on the shape.
The overall aim, once I have the points, is to put them into a graph in Excel, which will hopefully draw the shape for me at the user-inputted size!
I know that the maximum radius is 165mm and the minimum is 35mm. I have decided that my base font size shall be 20. I then did some thinking and came up with the equation:
Radius = (Chosen Font Size / 20) * 130. This is just an estimate; I realise it is probably not right, but I thought it could at least work as a template.
I then decided that I should create two different circles, with two different centre points, then link them together to create the shape. I thought that the INSIDE line will have to have a larger Radius and a centre point further along the X-Axis (Y staying constant), as then it could cut into the outside line.*
*(I know this is not what it looks like on the picture, just my chain of thought as it will still give the same shape)
So I defined the 2nd centre point as (X+4, Y). (Again, just an estimate, though it doesn't really matter how far apart they are.)
I then decided Radius 2 = (Chosen Font Size/20)*165 (max radius)
So, I have my 2 Radii, and two centre points.
This is my code so far (it works, and everything is declared/inputted above)
for(int i=0; i<=n; i++) //output displayed to user
{
    Xnew = -i*(Y+R1)/n; //calculate x coordinate
    Ynew = pow((((Y+R1)*(Y+R1)) - (Xnew*Xnew)), 0.5); //calculate y coordinate
}

and

for(int j=0; j<=n; j++) //calculation for angles and output displayed to user
{
    Xnew2 = -j*(Y+R2)/((n)+((0.00001)*(n==0))); //calculate x coordinate
    Ynew2 = Y*(pow(abs(1-(pow((Xnew2/X),2))),0.5)); //calculate y coordinate
    if(abs(Ynew2) <= R1)
        cout<<"\n("<<Xnew2<<", "<<Ynew2<<")"<<endl;
}
The problem I am having drawing the crescent moon is that I cannot get the two circles to have the same starting point.
I have managed to get the results into Excel; everything in that regard works. But when I plot the points on a graph in Excel, they do not have the same starting points. It's essentially just two half circles, one smaller than the other (each stops at the Y axis, giving the half-doughnut shape).
If this makes sense: I am trying to get two circle arcs to draw the shape such that they have the same start and end points.
If anyone has any suggestions on how to do this, it would be great; currently all I am getting is more of a 'half doughnut' shape, due to the circles not being connected.
So. Does anyone have any hints/tips/links they can share with me on how to fix this exactly?
Thanks again; if there are any problems with the question, let me know and I will do my best to rectify them.
Cheers
Formula for points on a circle:
(x-h)^2+(y-k)^2=r^2
The center of the circle is at (h, k).
Solving for y:
y1,2 = k +/- sqrt(r^2 - (x - h)^2)
So now if the inner circle has its center at (h, k), the half-moon will begin at h and will stretch to h - r2.
Now you need to solve the endpoint formula for the inner circle and the outer circle and plot it. Per x you should receive 4 points (solve the equation two times, each with two solutions).
I did not implement it, but this would be my train of thought...
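If it helps, here is a rough, untested sketch of one way to get both arcs to share the same start and end points: rather than solving per x, it computes the two intersection points of the circles and parametrises each arc by angle between them. The centre, radii, offset and point count are just placeholder values.

#include <cmath>
#include <cstdio>

int main()
{
    const double PI = 3.14159265358979323846;
    const double cx = 0.0, cy = 0.0;      // centre of the outer circle
    const double r1 = 130.0, r2 = 165.0;  // outer radius and the larger "cutting" radius
    const double d  = 60.0;               // the cutting circle's centre is at (cx + d, cy)
    const int    n  = 50;                 // points per arc

    // Intersection points of the two circles (requires |r2 - r1| < d < r1 + r2).
    const double a  = (d * d + r1 * r1 - r2 * r2) / (2.0 * d);
    const double h  = std::sqrt(r1 * r1 - a * a);
    const double ix = cx + a;                       // both intersection points share this x
    const double iyTop = cy + h, iyBot = cy - h;    // P1 (top) and P2 (bottom)

    // Outer arc of circle 1: from P1 to P2 through the side facing away from circle 2.
    // (These angle unwraps assume the cutting circle sits on the +x side of the outer circle.)
    const double a0 = std::atan2(iyTop - cy, ix - cx);
    const double a1 = std::atan2(iyBot - cy, ix - cx) + 2.0 * PI;
    for (int i = 0; i <= n; ++i) {
        const double t = a0 + (a1 - a0) * i / n;
        std::printf("(%f, %f)\n", cx + r1 * std::cos(t), cy + r1 * std::sin(t));
    }

    // Inner arc of circle 2: from P2 back to P1 through the part that lies inside circle 1,
    // so the outline closes on exactly the same two points.
    const double b0 = std::atan2(iyBot - cy, ix - (cx + d)) + 2.0 * PI;
    const double b1 = std::atan2(iyTop - cy, ix - (cx + d));
    for (int j = 0; j <= n; ++j) {
        const double t = b0 + (b1 - b0) * j / n;
        std::printf("(%f, %f)\n", (cx + d) + r2 * std::cos(t), cy + r2 * std::sin(t));
    }
    return 0;
}

Plotting the printed points in order as a scatter-with-lines chart in Excel should then trace the closed crescent outline.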
Assume that I took two panoramic images with a vertical offset of H, and that each image is presented in equirectangular projection with size Xm and Ym. To do this, I placed my panoramic camera at position A and took an image, then moved the camera H metres up and took another image.
I know that a point in image 1 with coordinates X1, Y1 is the same point in image 2 with coordinates X2, Y2 (assuming that X1 = X2, as we only have a vertical offset).
My question is: how can I calculate the range of the selected point (the point whose coordinates X1, Y1 are known in image 1 and whose position in image 2 is X2, Y2) from point A (where the camera was when image 1 was taken)?
Yes, you can do it - hold on!!!
Key thing: y = focal length of your lens - now I can do it!
So, I think your question can be re-stated more simply by saying that if you move your camera (on the right in the diagram) up H metres, a point moves down p pixels in the image taken from the new location.
Like this, if you imagine looking from the side, across at you taking the picture.
If you know the micron spacing of the camera's CCD from its specification, you can convert p from pixels to metres to match the units of H.
Your range from the camera to the plane of the scene is given by x + y (both in red at the bottom), and
x=H/tan(alpha)
y=p/tan(alpha)
so your range is
R = x + y = H/tan(alpha) + p/tan(alpha)
and
alpha = tan inverse(p/y)
where y is the focal length of your lens. As y is likely to be something like 50mm, it is negligible, so, to a pretty reasonable approximation, your range is
H/tan(alpha)
and
alpha = tan inverse(p in metres/focal length)
Or, by similar triangles
Range = (H x focal length of lens) / ((Y2 - Y1) x CCD photosite spacing)
being very careful to put everything in metres.
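As a quick illustration, here is a tiny untested sketch of the similar-triangles formula above; all the input values (camera offset, focal length, photosite spacing, disparity) are made-up placeholders, and everything is converted to metres first.

#include <cstdio>

int main()
{
    const double H = 1.0;                    // vertical camera offset, metres
    const double focalLength = 0.050;        // e.g. a 50mm lens, in metres
    const double photositeSpacing = 4.5e-6;  // CCD pixel pitch in metres, from the sensor spec
    const double disparityPixels = 120.0;    // Y2 - Y1, measured in pixels

    // Range = (H x focal length) / ((Y2 - Y1) x photosite spacing)
    const double range = (H * focalLength) / (disparityPixels * photositeSpacing);
    std::printf("range to the scene: %.2f m\n", range);
    return 0;
}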
Here is a shot in the dark: given my understanding of the problem at hand, you want to do something similar to computer stereo vision; I point you to http://en.wikipedia.org/wiki/Computer_stereo_vision to start. I am not sure whether this is still possible in the manner you are suggesting (it sounds like you may need some more physical constraints), but I do remember being able to correlate two 2D points in images related by a strict translation. Think:
lambda * [x, y, 1]^T = W * [r1, tx; r2, ty; r3, tz] * [X; Y; Z; 1]^T
where lambda is a scale factor, W is a 3x3 matrix covering the intrinsic parameters of your camera, r1, r2 and r3 are the row vectors that make up the 3x3 rotation matrix (in your case you can assume the identity matrix, since you have only applied a translation), and tx, ty, tz are your translation components.
Since you are looking at two 2D points corresponding to the same 3D point [X, Y, Z], that 3D point is shared by both 2D points. I cannot say whether you can recover the actual X, Y and Z values, particularly for your depth calculation, but this is where I would start.
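To make that equation concrete, here is a small untested sketch with the rotation fixed to the identity; the intrinsic parameters, the translation and the 3D point are all made-up placeholder values.

#include <cstdio>

int main()
{
    // Intrinsics W: fx, fy = focal length in pixels; cx, cy = principal point.
    const double fx = 800.0, fy = 800.0, cx = 960.0, cy = 540.0;
    // Translation between the two camera positions (here H metres straight up).
    const double tx = 0.0, ty = -1.0, tz = 0.0;
    // A 3D point expressed in the first camera's frame.
    const double X = 0.5, Y = 0.2, Z = 4.0;

    // With R = identity, the point in the second camera's frame is just the point plus t.
    const double Xc = X + tx, Yc = Y + ty, Zc = Z + tz;

    // Projection: lambda = Zc, u = fx*Xc/Zc + cx, v = fy*Yc/Zc + cy.
    const double u = fx * Xc / Zc + cx;
    const double v = fy * Yc / Zc + cy;
    std::printf("the point projects to pixel (%.1f, %.1f) in image 2\n", u, v);
    return 0;
}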
I'm trying to implement a 'raypicker' for selecting objects within my project. I do not fully understand how to implement this, but I understand conceptually how it should work. I've been trying to learn how to do this, but most tutorials I find go way over my head. My current code is based on one of the recent tutorials I found, here.
After several hours of revisions, I believe the problem I'm having with my raypicker is actually the creation of the ray in the first place. If I substitute/hardcode my near/far planes with a coordinate that would undisputably be located within the region of a triangle, the picker identifies it correctly.
My problem is this: my ray creation doesn't seem to fully take my current "camera" or perspective into account, so camera rotation won't affect where my mouse is.
I believe that to remedy this I need something like gluUnProject(), but whenever I used it, the x, y, z coordinates returned would be incredibly small.
My current ray creation is a mess. I tried the methods that others proposed initially, but it seemed like whatever method I tried, it never worked with my picker/intersection function.
Here's the code for my ray creation:
void oglWidget::mousePressEvent(QMouseEvent *event)
{
    QVector3D nearP = QVector3D(event->x()+camX, -event->y()-camY, -1.0);
    QVector3D farP  = QVector3D(event->x()+camX, -event->y()-camY, 1.0);
    int i = -1;
    for (int x = 0; x < tileCount; x++)
    {
        bool rayInter = intersect(nearP, farP, tiles[x]->vertices);
        if (rayInter == true)
            i = x;
    }
    if (i != -1)
    {
        tiles[i]->showSelection();
    }
    else
    {
        for (int x = 0; x < tileCount; x++)
            tiles[x]->hideSelection();
    }
    //tiles[0]->showSelection();
}
To repeat: I used to load up the viewport, model and projection matrices and unproject the mouse coordinates, but within a 1920x1080 window all I got were values in the range of -2 to 2 for x, y and z for each mouse event, which is why I'm trying this method instead; but this method doesn't work with camera rotation and zoom.
I don't want to do pixel color picking, because who knows, I may need this technique later on, and I'd rather not give up after the amount of effort I've put in so far.
As you seem to have problems constructing your rays, here's how I would do it (not tested directly), making sure that all vectors are in the same space. If you use multiple model matrices (or stacks thereof), the calculation needs to be repeated separately with each of them.
use pos = gluUnProject(winx, winy, 0, ...) to get the position of the mouse coordinate on the near plane in model space (a window z of 0 corresponds to the near plane you set with glFrustum() or gluPerspective())
origin of the ray is the camera position in model space: rayorig = inv(modelmat) * camera_in_worldspace
the direction of the ray is the normalized vector from the ray origin to the position from step 1: raydir = normalize(pos - rayorig)
On the website linked they use two points for the ray and they don't seem to normalize the ray direction vector, so this is optional.
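A rough, untested sketch of those three steps might look like this; it assumes the fixed-function matrices used for rendering are still current and that cameraPos is already expressed in the same space as the unprojected point, both of which are assumptions you would need to adapt to your setup.

#include <GL/glu.h>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 mouseRayDir(int mouseX, int mouseY, const Vec3 &cameraPos)
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    // Window y runs top-down, OpenGL's runs bottom-up, so flip it.
    const double winX = mouseX;
    const double winY = viewport[3] - mouseY;

    // Step 1: mouse position on the near plane (window z = 0) in model space.
    Vec3 pos;
    gluUnProject(winX, winY, 0.0, model, proj, viewport, &pos.x, &pos.y, &pos.z);

    // Step 2: the ray origin is the camera position (assumed to be in the same space here).
    // Step 3: the ray direction is normalize(pos - rayorig).
    Vec3 dir = { pos.x - cameraPos.x, pos.y - cameraPos.y, pos.z - cameraPos.z };
    const double len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    dir.x /= len; dir.y /= len; dir.z /= len;
    return dir;
}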
Ok, so this is the beginning of my trail of breadcrumbs.
I was somehow having issues with the Qt data types for the matrices, and with the logic pertaining to matrix transformations.
This particular problem in this question resulted from not actually performing any transformations whatsoever.
Steps to solving this problem were:
Converting the mouse coordinates into NDC space (within the range of -1 to 1): x_ndc = x / width * 2 - 1, y_ndc = (height - y) / height * 2 - 1
Grabbing the 4x4 matrix for my view matrix (can be the one used when rendering, or recalculated)
Computing a new matrix equal to the inverse view matrix multiplied by the inverse projection matrix
In order to build the ray, I had to do the following:
Take the previously calculated product of the inverted matrices. Multiply it by a vec4 holding the previously calculated x and y coordinates, with z = -1 and w = 1.
Then divide this vector by its last component (w).
Create another vec4 just like the last, but with z = +1 instead of -1.
Once again divide that by its w component.
Now the coordinates for the ray have been created at the far and near planes, so it can intersect with anything along it in the scene.
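For reference, an untested sketch of those steps using the Qt types from the question might look like the following; viewMatrix, projectionMatrix and the widget size are assumed to be the same ones used for rendering.

#include <QMatrix4x4>
#include <QVector3D>
#include <QVector4D>

void buildMouseRay(int mouseX, int mouseY, int width, int height,
                   const QMatrix4x4 &viewMatrix, const QMatrix4x4 &projectionMatrix,
                   QVector3D &nearP, QVector3D &farP)
{
    // Mouse position in NDC space: x to the right, y up, both in [-1, 1].
    const float ndcX = 2.0f * mouseX / width - 1.0f;
    const float ndcY = 2.0f * (height - mouseY) / height - 1.0f;

    // Inverse of (projection * view), i.e. inverse(view) * inverse(projection).
    const QMatrix4x4 unproject = viewMatrix.inverted() * projectionMatrix.inverted();

    // Near-plane point: z = -1, w = 1, then divide by w.
    const QVector4D nearH = unproject * QVector4D(ndcX, ndcY, -1.0f, 1.0f);
    nearP = nearH.toVector3D() / nearH.w();

    // Far-plane point: same thing with z = +1.
    const QVector4D farH = unproject * QVector4D(ndcX, ndcY, 1.0f, 1.0f);
    farP = farH.toVector3D() / farH.w();
}

The resulting nearP and farP could then be fed to the same intersect() test as in the original code.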
I opened a series of questions (because of great uncertainty with my series of problems), so parts of my problem overlap in them too.
In here, I learned that I needed to take the screen height into consideration when switching the origin of the y axis for a Cartesian system, since Windows has the y axis start at the top left. Additionally, the retrieval of matrices was redundant, but also wrong, since they were never declared "properly".
In here, I learned that unProject wasn't working because I was trying to pull the model and view matrices using OpenGL functions, but I never actually set them in the first place, because I built the matrices by hand. I solved that problem in two ways: I did the math manually, and I made all the matrices the same data type (they were mixed data types earlier, leading to issues as well).
And lastly, in here, I learned that my order of operations was slightly off (need to multiply matrices by a vector, not the reverse), that my near plane needs to be -1, not 0, and that the last value of the vector which would be multiplied with the matrices (value "w") needed to be 1.
Credit goes to those individuals who helped me solve these problems:
srobins of facepunch, in this thread
derhass from here, in this question, and this discussion
Take a look at
http://www.realtimerendering.com/intersections.html
There is a lot of help there for determining intersections between various kinds of geometry.
http://geomalgorithms.com/code.html also has some C++ functions; one of them serves your purpose.
Basically, I have a sprite that I render using SDL 2.0, which I can rotate a variable amount clockwise around a centre origin point of the texture using SDL_RenderCopyEx(). I want to rotate it based on the mouse position, using the angle x between my physical slope line and my two straight lines based off of my base line. The base line I'm talking about can be represented mathematically as x = origin_x, where origin_x is the rotation origin. The other line is a segment along the baseline that connects the horizontal line's end point vertically to the origin_x point. The angle to the mouse cursor is the one I want to find in order to rotate my character.
Please no complicated math symbols. I would rather the formula be posted in C-style format, and please explain the logic behind the math so I can maybe understand what's happening and fix similar future problems if needed.
Some basic trigonometry: you can use atan2(delta_y, delta_x). This gives you the angle in radians. Because SDL_RenderCopyEx uses degrees for its angle, you need to convert it; a full circle is 360 degrees and 2*PI radians. So
angle_deg = atan2(delta_y, delta_x) * 180.0 / M_PI
Now you have your angle to pass to SDL_RenderCopyEx.
BTW:
delta_y = origin_y - mouse_y
and
delta_x = origin_x - mouse_x
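Wrapped up as a small untested helper (names are placeholders), that could look like this; note that SDL's y axis points down and SDL_RenderCopyEx rotates clockwise, so depending on which way your sprite faces you may still need to flip a sign or add an offset.

#include <cmath>

double angleToMouseDegrees(double origin_x, double origin_y,
                           double mouse_x, double mouse_y)
{
    const double kPi = 3.14159265358979323846;
    const double delta_y = origin_y - mouse_y;
    const double delta_x = origin_x - mouse_x;
    return std::atan2(delta_y, delta_x) * 180.0 / kPi;   // radians -> degrees
}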
I'm writing an application in OpenGL (though I don't think this problem is related to that). I have some 2d point set data that I need to rotate. It later gets projected into 3d.
I apply my rotation using this formula:
x' = x cos f - y sin f
y' = y cos f + x sin f
Where 'f' is the angle. When I rotate the point set, the result is skewed. The severity of the effect varies with the angle.
It's hard to describe so I have pictures;
The red things are some simple geometry. The 2D point sets are the vertices for the white polylines you see around them. The first picture shows the undistorted point sets, and the second picture shows them after rotation. It's not just skew that's occurring with the rotation; sometimes it seems like displacement occurs as well.
The code itself is trivial:
double cosTheta = cos(2.4);
double sinTheta = sin(2.4);

CalcSimplePolyCentroid(listHullVx, xlate);

for(size_t j=0; j < listHullVx.size(); j++) {
    // translate
    listHullVx[j] = listHullVx[j] - xlate;
    // rotate
    double xPrev = listHullVx[j].x;
    double yPrev = listHullVx[j].y;
    listHullVx[j].x = ((xPrev*cosTheta) - (yPrev*sinTheta));
    listHullVx[j].y = ((yPrev*cosTheta) + (xPrev*sinTheta));
    // translate
    listHullVx[j] = listHullVx[j] + xlate;
}
If I comment out the code under '//rotate' above, the output of the application is the first image. And adding it back in gives the second image. There's literally nothing else that's going on (afaik).
The data types being used are all doubles, so I don't think it's a precision issue. Does anyone have any idea why rotation would cause skewing like the above pictures show?
EDIT
filipe's comment below was correct. This probably has nothing to do with the rotation, and I hadn't provided enough information for the problem:
The geometry I've shown in the pictures represents buildings. They're generated from lon/lat map coordinates. In the point data I use to do the transform, I forgot to use an actual projection to cartesian coordinate space and just mapped x->lon, y->lat, and I think this is the reason I'm seeing the distortion. I'm going to request that this question be deleted since I don't think it'll be useful to anyone else.
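For anyone hitting the same issue, a rough untested sketch of what "use an actual projection" could mean in practice is below; it is not the original code, just an illustration using a simple local equirectangular-style approximation (fine for building-sized extents), with refLon/refLat as an arbitrary reference point such as the data's centroid.

#include <cmath>

struct Pt { double x, y; };

// Project lon/lat (degrees) to a local metric frame before rotating.
Pt lonLatToLocal(double lon, double lat, double refLon, double refLat)
{
    const double kPi = 3.14159265358979323846;
    const double kEarthRadius = 6378137.0;   // metres
    const double x = (lon - refLon) * (kPi / 180.0) * kEarthRadius * std::cos(refLat * kPi / 180.0);
    const double y = (lat - refLat) * (kPi / 180.0) * kEarthRadius;
    Pt p = { x, y };
    return p;
}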
Update:
As a result of your comments, it turned out that it is unlikely that the bug is in the presented code.
One final hint: the standard transform formulas are only valid if the coordinate system is Cartesian; on iOS you sometimes have an inverted y axis.
I am developing application to track small animals in Petri dishes (or other circular containers).
Before any tracking takes place, the first few frames are used to define areas.
Each dish will match a circular, independent, static area (i.e. it will not be updated during tracking).
The user can request the program to try to find dishes from the original image and use them as areas.
Here are examples:
In order to perform this task, I am using Hough Circle Transform.
But in practice, different users will have very different settings and images and I do not want to ask the user to manually define the parameters.
I cannot just guess all the parameters either.
However, I have some additional information that I would like to use:
I know the exact number of circles to be detected.
All the circles have almost the same dimensions.
The circles cannot overlap.
I have a rough idea of the minimal and maximal size of the circles.
The circles must be entirely in the picture.
I can therefore narrow down the number of parameters to define to one: the threshold.
Using this information, and considering that I have N circles to find, my current solution is to test many threshold values and keep the set of circles whose radii have the smallest standard deviation (since all the circles should have a similar size):
//at this point, minRad and maxRad were calculated from the size of the image and the number of circles to find,
//assuming circles should altogether fill more than 1/3 of the image but cannot altogether be larger than the image.
//N is the integer number of circles to find.
//img is the picture of the scene (filtered).

//the vectors containing the detected circles and the --so far-- best circles found.
std::vector<cv::Vec3f> circles, bestCircles;
//the score of the --so far-- best set of circles
double bestSsem = 0;

for(int t=5; t<400; t=t+2){
    //Apply Hough Circles with the threshold t
    cv::HoughCircles(img, circles, CV_HOUGH_GRADIENT, 3, minRad*2, t, 3, minRad, maxRad);
    if(circles.size() >= N){
        //call a routine to give a score to this set of circles according to the similarity of their radii
        double ssem = scoreSetOfCircles(circles, N);
        //if no circles are recorded yet, or if the score of this set of circles is higher than the former best
        if(bestCircles.size() < N || ssem > bestSsem){
            //this set becomes the temporary best set of circles
            bestCircles = circles;
            bestSsem = ssem;
        }
    }
}
With:
//the method to assess how good a set of circles is (the more similar the circles are, the higher ssem is)
double scoreSetOfCircles(std::vector<cv::Vec3f> circles, int N){
    double ssem = 0, sum = 0;
    double mean;

    //mean radius of the N circles
    for(int j = 0; j < N; j++){
        sum = sum + circles[j][2];
    }
    mean = sum/N;

    //sum of squared deviations of the radii from the mean
    for(int j = 0; j < N; j++){
        double em = mean - circles[j][2];
        ssem = ssem + em*em;
    }

    //invert once at the end so that similar radii give a high score
    //(the small constant avoids division by zero when all radii are identical)
    return 1/(ssem + 1e-9);
}
I have reached a higher accuracy by performing a second pass in which I repeated this algorithm narrowing the [minRad:maxRad] interval using the result of the first pass.
For instance minRad2 = 0.95 * average radius of best circles and maxRad2 = 1.05 * average radius of best circles.
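As a sketch (not the original code), that narrowing step could look something like this, reusing bestCircles from the first pass:

// Average radius of the best circles from the first pass...
double avgRad = 0;
for (size_t k = 0; k < bestCircles.size(); k++)
    avgRad += bestCircles[k][2];
avgRad /= bestCircles.size();

// ...then narrow the radius interval and re-run the threshold loop with these bounds.
double minRad2 = 0.95 * avgRad;
double maxRad2 = 1.05 * avgRad;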
I had fairly good results using this method so far. However, it is slow and rather dirty.
My questions are:
Can you think of any alternative algorithm to solve this problem in a cleaner/faster manner?
Or what would you suggest to improve this algorithm?
Do you think I should investigate the generalised Hough transform?
Thank you for your answers and suggestions.
The following approach should work pretty well for your case:
Binarize your image (you might need to do this at several threshold levels to make the algorithm independent of the lighting conditions)
Find contours
For each contour calculate the moments
Filter them by area to remove too small contours
Filter contours by circularity:
double area = moms.m00;
double perimeter = arcLength(Mat(contours[contourIdx]), true);
double ratio = 4 * CV_PI * area / (perimeter * perimeter);
ratio close to 1 will give you circles.
Calculate radius and center of each circle
center = Point2d(moms.m10 / moms.m00, moms.m01 / moms.m00);
And you can add more filters to improve the robustness.
Actually you can find an implementation of the whole procedure in OpenCV. Look how the SimpleBlobDetector class and findCirclesGrid function are implemented.
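If it is useful, here is an untested sketch of the whole contour-based procedure in OpenCV 2.x style (matching the CV_HOUGH_GRADIENT code in the question); the threshold value, the area cut-off and the circularity cut-off are placeholders you would need to tune for your images.

#include <opencv2/imgproc/imgproc.hpp>
#include <cmath>
#include <vector>

// Returns (x, y, radius) triplets, the same layout HoughCircles produces.
std::vector<cv::Vec3f> findDishes(const cv::Mat &gray)
{
    std::vector<cv::Vec3f> dishes;

    cv::Mat bin;
    cv::threshold(gray, bin, 128, 255, cv::THRESH_BINARY);   // could be repeated for several levels

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bin, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); i++) {
        cv::Moments moms = cv::moments(cv::Mat(contours[i]));
        double area = moms.m00;
        if (area < 1000)                                      // filter out contours that are too small
            continue;

        double perimeter = cv::arcLength(cv::Mat(contours[i]), true);
        double ratio = 4 * CV_PI * area / (perimeter * perimeter);
        if (ratio < 0.8)                                      // keep only near-circular contours
            continue;

        cv::Point2d center(moms.m10 / moms.m00, moms.m01 / moms.m00);
        double radius = std::sqrt(area / CV_PI);              // radius from the enclosed area
        dishes.push_back(cv::Vec3f((float)center.x, (float)center.y, (float)radius));
    }
    return dishes;
}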
Within the current algorithm, the biggest thing that sticks out is the for(int t=5; t<400; t=t+2) loop. Try recording the score values for some test images and graph score(t) versus t. With any luck, it will either suggest a smaller range for t or be a smoothish curve with a single maximum. In the latter case you can change your loop over all t values into a smarter search using hill-climbing methods.
Even if it's fairly noisy, you can first loop over multiples of, say, 30, and for the best 1 or 2 of those loop over nearby multiples of 2.
Also, in your score function, you should disqualify any results with overlapping circles and maybe penalize overly spaced out circles.
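An untested sketch of that coarse-to-fine idea is below. It assumes a hypothetical scoreThreshold() helper that runs HoughCircles at threshold t and returns the score from the question's scoreSetOfCircles() (or 0 if fewer than N circles are found).

#include <opencv2/imgproc/imgproc.hpp>
#include <algorithm>
#include <vector>

// Scoring routine from the question.
double scoreSetOfCircles(std::vector<cv::Vec3f> circles, int N);

// Hypothetical helper: run HoughCircles at threshold t and score the result.
static double scoreThreshold(const cv::Mat &img, int N, double minRad, double maxRad, int t)
{
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(img, circles, CV_HOUGH_GRADIENT, 3, minRad*2, t, 3, minRad, maxRad);
    return (circles.size() >= (size_t)N) ? scoreSetOfCircles(circles, N) : 0.0;
}

int findBestThreshold(const cv::Mat &img, int N, double minRad, double maxRad)
{
    int bestT = 5;
    double bestScore = 0;

    // Coarse pass: multiples of 30.
    for (int t = 30; t < 400; t += 30) {
        double s = scoreThreshold(img, N, minRad, maxRad, t);
        if (s > bestScore) { bestScore = s; bestT = t; }
    }

    // Fine pass: steps of 2 around the best coarse threshold.
    for (int t = std::max(5, bestT - 30); t < bestT + 30; t += 2) {
        double s = scoreThreshold(img, N, minRad, maxRad, t);
        if (s > bestScore) { bestScore = s; bestT = t; }
    }
    return bestT;
}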
You don't explain why you are using a black background. Unless you are using a telecentric lens (which seems unlikely, given the apparent field of view), and ignoring radial distortion for the moment, the images of the dishes will be ellipses, so estimating them as circles may lead to significant errors.
All in all, it doesn't seem to me that you are following a good approach. If the goal is simply to remove the background so you can track the bugs inside the dishes, then your goal should be just that: find which pixels are background and mark them. The easiest way to do that is to take a picture of the background without dishes, under the same illumination and camera, and directly detect differences against the picture with the dishes. A colored background would be preferable for that, with a color unlikely to appear in the dishes (e.g. green or blue velvet). You would then have reduced the problem to bluescreening (or chroma keying), a classic machine-vision technique as applied to visual effects. Do a Google search for "matte petro vlahos assumption" to find classic algorithms for solving this problem.