How to draw an Arc in OpenGL - C++

While making a little Pong game in C++ OpenGL, I decided it'd be fun to create arcs (semi-circles) when stuff bounces. I decided to skip Bezier curves for the moment and just go with straight algebra, but I didn't get far. My algebra follows the two branches of a sideways parabola (y = ±sqrt(mx + c)).
This little excerpt is just an example I've yet to fully parameterize; I just wanted to see how it would look. When I draw this, however, it gives me a straight vertical line where the curve's tangent approaches ±1.0.
Is this a limitation of the GL_LINE_STRIP style or is there an easier way to draw semi-circles / arcs? Or did I just completely miss something obvious?
void Ball::drawBounce()
{
    float piecesToDraw = 100.0f;
    float arcWidth = 10.0f;
    float arcAngle = 4.0f;
    glBegin(GL_LINE_STRIP);
    for (float i = 0.0f; i < piecesToDraw; i += 1.0f) // Positive half
    {
        float currentX = (i / piecesToDraw) * arcWidth;
        glVertex2f(currentX, sqrtf((-currentX * arcAngle) + arcWidth));
    }
    for (float j = piecesToDraw; j > 0.0f; j -= 1.0f) // Negative half (go backwards in X direction now)
    {
        float currentX = (j / piecesToDraw) * arcWidth;
        glVertex2f(currentX, -sqrtf((-currentX * arcAngle) + arcWidth));
    }
    glEnd();
}
Thanks in advance.

What is the purpose of sqrtf((-currentX * arcAngle) + arcWidth)? Once i > 25, that argument goes negative and sqrtf() returns NaN. The proper way of doing this would be to use sin()/cos() to generate the X and Y coordinates of a semi-circle, as stated in your question. If you want to use a parabola instead, the cleaner way would be to calculate y = H - H(x/W)^2, as sketched below.
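For example, a minimal sketch of both options (function names and parameters are illustrative; the immediate-mode GL context is assumed to match the question's):
#include <cmath>

// sin()/cos() version: sweep an angle from 0 to pi and place vertices on a
// circle of the given radius, centered at (cx, cy).
void drawSemiCircle(float cx, float cy, float radius, int segments)
{
    glBegin(GL_LINE_STRIP);
    for (int i = 0; i <= segments; i++)
    {
        float theta = 3.14159265f * i / segments; // 0 .. pi, upper half
        glVertex2f(cx + radius * cosf(theta), cy + radius * sinf(theta));
    }
    glEnd();
}

// Parabola version: y = H - H * (x/W)^2, swept from x = -W to x = W.
void drawParabolaArc(float W, float H, int segments)
{
    glBegin(GL_LINE_STRIP);
    for (int i = 0; i <= segments; i++)
    {
        float x = -W + 2.0f * W * i / segments;
        glVertex2f(x, H - H * (x / W) * (x / W));
    }
    glEnd();
}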

Related

Moving an object in the direction of the camera

I'm making a project where I need to move a player in any direction using an analog stick. I'm limited to specific functions and I only have the positions of the camera and the player and the analog stick. The camera is always pointed to the player.
vec2 &leftStick = getLeftStick(-1); // results in an x and a y, both ranging from -1 to 1.
vec3 *playerPos = getTrans(player);
vec3 *cameraPos = getCameraPos(player, 0);
playerPos->x += leftStick.x * 10.0f;
playerPos->z -= leftStick.y * 10.0f;
This code works to move the player, however it's using the orientation of the world. I need it so that holding up on the analog stick (left stick y = 1) makes the player go forward, no matter which way the player/camera are facing.
My solution, thanks to @Borgleader for a majority of it:
I found an equation online for the distance and the x/z velocity, then tested combinations of signs until it worked properly. Not a good way to arrive at it, but it worked out.
// this all replaces the last two lines of the previous code snippet
float speed = 30.0f;
float d = sqrt(powf(playerPos->x - cameraPos->x, 2) + powf(playerPos->z - cameraPos->z, 2));
float vx = (speed/d)*(playerPos->x - cameraPos->x);
float vz = (speed/d)*(playerPos->z - cameraPos->z);
playerPos->x -= leftStick.x * vz;
playerPos->z += leftStick.x * vx;
playerPos->x += leftStick.y * vx;
playerPos->z += leftStick.y * vz;
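For comparison, here is the same idea written out explicitly: normalize the camera-to-player direction on the ground plane, rotate it 90 degrees to get a right vector, and blend the two by the stick axes. This is only a sketch; the struct definitions stand in for the engine's real vec2/vec3 types.
#include <cmath>

struct vec2 { float x, y; };
struct vec3 { float x, y, z; };

void movePlayer(vec3 &playerPos, const vec3 &cameraPos,
                const vec2 &leftStick, float speed)
{
    // Forward on the ground plane: from the camera toward the player.
    float fx = playerPos.x - cameraPos.x;
    float fz = playerPos.z - cameraPos.z;
    float len = std::sqrt(fx * fx + fz * fz);
    if (len == 0.0f)
        return; // camera is on top of the player; direction undefined
    fx /= len;
    fz /= len;

    // Right: forward rotated 90 degrees about the Y axis.
    float rx = -fz;
    float rz = fx;

    // Stick Y pushes along forward, stick X along right.
    playerPos.x += (leftStick.y * fx + leftStick.x * rx) * speed;
    playerPos.z += (leftStick.y * fz + leftStick.x * rz) * speed;
}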

Skewed / Off-axis stereoscopic projection with glm::frustum flickering

I have a 3d, stereoscopic rendering application that currently uses parallel stereoscopy by just moving (shifting) the camera to the side for each Left and Right views. It does work, but recently I felt it could be much improved if I had the off-axis option. I got a semi-working algorithm for glm::frustum() to allow for this but am having some troubles immediately when I switch to it over glm::perspective().
I followed the only GL guide I could find, Simple, Low-Cost Stereographics, which said to replace my existing glm::perspective() with two glm::frustum() calls, one per eye:
//OFF-AXIS STEREO
if (myAbj.stereoOffsetAxis) {
    glm::vec3 targ0_stored = i->targO;
    if (myAbj.stereoLR == 0)
    {
        float sgn = -1.f * (float)myAbj.stereoSwitchLR;
        float eyeSep = myAbj.stereoSep;
        float focalLength = 50.f;
        float eyeOff = (sgn * (eyeSep / 2.f) * (myAbj.selCamLi->nearClip->val_f / focalLength));
        float top = myAbj.selCamLi->nearClip->val_f * tan(myAbj.selCamLi->fov->val_f / 2.f);
        float right = myAbj.aspect * top;
        myAbj.selCamLi->PM = glm::frustum(-right - eyeOff, right - eyeOff, -top, top, myAbj.selCamLi->nearClip->val_f, myAbj.selCamLi->farClip->val_f);
        i->targO += myAbj.selCamLi->rightO * myAbj.stereoSep * (float)myAbj.stereoSwitchLR;
        VMup(i);
        i->targO = targ0_stored;
    }
    if (myAbj.stereoLR == 1)
    {
        float sgn = 1.f * (float)myAbj.stereoSwitchLR;
        float eyeSep = myAbj.stereoSep;
        float focalLength = 50.f;
        float eyeOff = (sgn * (eyeSep / 2.f) * (myAbj.selCamLi->nearClip->val_f / focalLength));
        float top = myAbj.selCamLi->nearClip->val_f * tan(myAbj.selCamLi->fov->val_f / 2.f);
        float right = myAbj.aspect * top;
        myAbj.selCamLi->PM = glm::frustum(-right - eyeOff, right - eyeOff, -top, top, myAbj.selCamLi->nearClip->val_f, myAbj.selCamLi->farClip->val_f);
        i->targO += myAbj.selCamLi->rightO * -myAbj.stereoSep * (float)myAbj.stereoSwitchLR;
        VMup(i);
        i->targO = targ0_stored;
    }
}
Using this equation, my View Matrix is rotated 180 degrees on the Z axis. However, the bigger issue is a large amount of black dots and flickering on my objects. When I move the camera to a close enough point the flickering stops. Even when I minimize the scene, the issue is still there.
Why is this flickering happening and what can I do to prevent it? It is ruining my scenes.
My near clip was causing the problem. It couldn't be set to the same low value that glm::perspective() was using - it needed to be a little larger. (The black dots and flickering were depth-buffer precision artifacts: with such a small near plane, nearly all of the depth precision is packed right up against the camera, so distant surfaces z-fight.)
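For what it's worth, the left/right branches above differ only in the sign of eyeOff, so the projection setup can be folded into one helper. A sketch (plain parameters stand in for the myAbj/selCamLi member lookups; sgn is -1 for the left eye, +1 for the right):
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::frustum

glm::mat4 offAxisFrustum(float sgn, float eyeSep, float focalLength,
                         float nearClip, float farClip,
                         float fov, float aspect)
{
    float eyeOff = sgn * (eyeSep / 2.f) * (nearClip / focalLength);
    float top = nearClip * std::tan(fov / 2.f);
    float right = aspect * top;
    // Per the fix above, nearClip must be somewhat larger than the value
    // that worked with glm::perspective(), or the z-fighting reappears.
    return glm::frustum(-right - eyeOff, right - eyeOff,
                        -top, top, nearClip, farClip);
}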

Simple Ray Tracing with Lambertian Shading, Confusion

I didn't see another post with a problem similar to mine, so hopefully this is not redundant.
I've been reading a book on the fundamentals of computer graphics (third edition) and I've been implementing a basic ray tracing program based on the principles I've learned from it. I had little trouble implementing parallel and perspective projection but after moving onto Lambertian and Blinn-Phong Shading I've run into a snag that I'm having trouble figuring out on my own.
I believe my problem is related to how I am calculating the ray-sphere intersection point and the vectors to the camera/light. I attached a picture of the output when I run simple perspective projection with no shading.
Perspective Output
However, when I attempt the same scene with Lambertian shading the spheres disappear.
Blank Output
While trying to debug this myself I noticed that if I negate the x, y, z coordinates calculated as the hit point, the spheres appear again. And I believe the light is coming from the opposite direction I expect.
Lambertian, negated hitPoint
I am calculating the hit point by adding the product of the projected direction vector and the t value, calculated by the ray-sphere intersection formula, to the origin (where my "camera" is, 0,0,0) or just e + td.
The vector from the hit point to the light, l, I am setting to the light's position minus the hit point's position (so light's coords minus hit point's coords).
v, the vector from the hit point to the camera, I am getting by simply negating the projected view vector.
And the surface normal I am getting by hit point minus the sphere's position.
All of which I believe is correct. However, while stepping through the part that calculates the surface normal, I noticed something I think is odd. When subtracting the sphere's position from the hit point's position to get the vector from the sphere's center to the hit point, I believe I should expect a vector whose components all lie within the range (-r, r); but that is not happening.
This is an example from stepping through my code:
Calculated hit point: (-0.9971, 0.1255, -7.8284)
Sphere center: (0, 0, 8) (radius is 1)
After subtracting, I get a vector where the z value is -15.8284. This seems wrong to me, but I do not know what is causing it. Would a z value of -15.8284 not imply that the sphere center and the hit position are ~16 units apart along the z axis? Obviously the two numbers are within 1 of each other in absolute-value terms (7.8284 vs. 8); that's what leads me to think my problem has something to do with this.
Here's the main ray-tracing loop:
auto origin = Position3f(0, 0, 0);
for (int i = 0; i < numPixX; i++)
{
    for (int j = 0; j < numPixY; j++)
    {
        for (SceneSurface* object : objects)
        {
            float imgPlane_u = left + (right - left) * (i + 0.5f) / numPixX;
            float imgPlane_v = bottom + (top - bottom) * (j + 0.5f) / numPixY;
            Vector3f direction = (w.negated() * focal_length) + (u * imgPlane_u) + (v * imgPlane_v);
            Ray viewingRay(origin, eye, direction);
            RayTestResult testResult = object->TestViewRay(viewingRay);
            if (testResult.m_bRayHit)
            {
                Position3f hitPoint = (origin + (direction) * testResult.m_fDist); //.negated();
                Vector3f light_direction = (light - hitPoint).toVector().normalized();
                Vector3f view_direction = direction.negated().normalized();
                Vector3f surface_normal = object->GetNormalAt(hitPoint);
                image[j][i] = object->color * intensity * fmax(0, surface_normal * light_direction);
            }
        }
    }
}
GetNormalAt is simply:
Vector3f Sphere::GetNormalAt(Position3f &surface)
{
    return (surface - position).toVector().normalized();
}
My spheres are positioned at (0, 0, 8) and (-1.5, -1, 6) with rad 1.0f.
My light is at (-3, -3, 0) with an intensity of 1.0f.
I ignore any intersection where t is not greater than 0 so I do not believe that is causing this problem.
I think I may be making some kind of mistake when it comes to keeping positions and vectors in the same coordinate system (the same transform?), but I'm still learning and admittedly don't understand that very well. If the view direction is always in the -w direction, why do we position scene objects in the positive w direction?
Any help or wisdom is greatly appreciated. I'm teaching this all to myself so far and I'm pleased with how much I've taken in, but something in my gut tells me this is a relatively simple mistake.
Just in case it is of any use, here's the TestViewRay function:
RayTestResult Sphere::TestViewRay(Ray &viewRay)
{
    RayTestResult result;
    result.m_bRayHit = false;
    Position3f &c = position;
    float r = radius;
    Vector3f &d = viewRay.getDirection();
    Position3f &e = viewRay.getPosition();
    float part = d * (e - c);
    Position3f part2 = (e - c);
    float part3 = d * d;
    float discriminant = ((part * part) - (part3) * ((part2 * part2) - (r * r)));
    if (discriminant > 0)
    {
        float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2)-sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 2;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    else if (discriminant == 0)
    {
        float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2)-sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 1;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    return result;
}
EDIT:
I'm happy to report I figured out my problem.
Upon sitting down with my sister to look at this I noticed in my ray-sphere hit detection I had this:
float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
Which is incorrect. d should be negative. It should be:
float t_add = ((neg_d * (e_min_c)) + sqrt(discriminant)) / (part2);
(I renamed a couple variables.) Previously I had a zeroed vector so I could express -d as (zero_vector - d), and I had removed that because I implemented a member function to negate any given vector; but I forgot to go back and call it on d. After fixing that and moving my spheres into the negative z plane, my Lambertian and Blinn-Phong shading implementations work correctly.
Lambertian + Blinn-Phong
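For anyone hitting the same issue, here is a minimal, self-contained sketch of the corrected intersection math (the Vec3 type and names are illustrative, not my actual classes):
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Nearest positive t for the ray e + t*d against a sphere centered at c
// with radius r; returns -1.0f when there is no hit in front of e.
float intersectSphere(const Vec3 &e, const Vec3 &d, const Vec3 &c, float r)
{
    Vec3 ec = { e.x - c.x, e.y - c.y, e.z - c.z }; // e - c
    float a = dot(d, d);
    float b = dot(d, ec); // half of the usual quadratic B term
    float disc = b * b - a * (dot(ec, ec) - r * r);
    if (disc < 0.0f)
        return -1.0f; // ray misses the sphere
    float s = std::sqrt(disc);
    float t_sub = (-b - s) / a; // note the negated dot product
    float t_add = (-b + s) / a;
    if (t_sub > 0.0f)
        return t_sub; // nearest hit in front of the ray origin
    if (t_add > 0.0f)
        return t_add; // ray origin is inside the sphere
    return -1.0f;
}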

Making balls bounce off each other (openGL)

I'm trying to make an application where balls bounce off the walls and also off each other. The bouncing off the walls works fine, but I'm having some trouble getting them to bounce off each other. Here's the code I'm using to make them bounce off another ball (for testing I only have 2 balls)
// Calculate the distance using Pyth. Thrm.
GLfloat x1, y1, x2, y2, xd, yd, distance;
x1 = balls[0].xPos;
y1 = balls[0].yPos;
x2 = balls[1].xPos;
y2 = balls[1].yPos;
xd = x2 - x1;
yd = y2 - y1;
distance = sqrt((xd * xd) + (yd * yd));
if (distance < (balls[0].ballRadius + balls[1].ballRadius))
{
    std::cout << "Collision\n";
    balls[0].xSpeed = -balls[0].xSpeed;
    balls[0].ySpeed = -balls[0].ySpeed;
    balls[1].xSpeed = -balls[1].xSpeed;
    balls[1].ySpeed = -balls[1].ySpeed;
}
What happens is that they randomly bounce, or pass through each other. Is there some physics that I'm missing?
EDIT: Here's the full function
// Callback handler for window re-paint event
void display()
{
    glClear(GL_COLOR_BUFFER_BIT); // Clear the color buffer
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    // FOR LOOP
    for (int i = 0; i < numOfBalls; i++)
    {
        glLoadIdentity(); // Reset model-view matrix
        int numSegments = 100;
        GLfloat angle = 0;
        glTranslatef(balls[i].xPos, balls[i].yPos, 0.0f); // Translate to (xPos, yPos)
        // Use triangular segments to form a circle
        glBegin(GL_TRIANGLE_FAN);
        glColor4f(balls[i].colorR, balls[i].colorG, balls[i].colorB, balls[i].colorA);
        glVertex2f(0.0f, 0.0f); // Center of circle
        for (int j = 0; j <= numSegments; j++)
        {
            // Last vertex same as first vertex
            angle = j * 2.0f * PI / numSegments; // 360 deg for all segments
            glVertex2f(cos(angle) * balls[i].ballRadius, sin(angle) * balls[i].ballRadius);
        }
        glEnd();
        // Animation Control - compute the location for the next refresh
        balls[i].xPos += balls[i].xSpeed;
        balls[i].yPos += balls[i].ySpeed;
        // Calculate the distance using Pyth. Thrm.
        GLfloat x1, y1, x2, y2, xd, yd, distance;
        x1 = balls[0].xPos;
        y1 = balls[0].yPos;
        x2 = balls[1].xPos;
        y2 = balls[1].yPos;
        xd = x2 - x1;
        yd = y2 - y1;
        distance = sqrt((xd * xd) + (yd * yd));
        if (distance < (balls[0].ballRadius + balls[1].ballRadius))
        {
            std::cout << "Collision\n";
            balls[0].xSpeed = -balls[0].xSpeed;
            balls[0].ySpeed = -balls[0].ySpeed;
            balls[1].xSpeed = -balls[1].xSpeed;
            balls[1].ySpeed = -balls[1].ySpeed;
        }
        else
        {
            std::cout << "No collision\n";
        }
        // Check if the ball exceeds the edges
        if (balls[i].xPos > balls[i].xPosMax)
        {
            balls[i].xPos = balls[i].xPosMax;
            balls[i].xSpeed = -balls[i].xSpeed;
        }
        else if (balls[i].xPos < balls[i].xPosMin)
        {
            balls[i].xPos = balls[i].xPosMin;
            balls[i].xSpeed = -balls[i].xSpeed;
        }
        if (balls[i].yPos > balls[i].yPosMax)
        {
            balls[i].yPos = balls[i].yPosMax;
            balls[i].ySpeed = -balls[i].ySpeed;
        }
        else if (balls[i].yPos < balls[i].yPosMin)
        {
            balls[i].yPos = balls[i].yPosMin;
            balls[i].ySpeed = -balls[i].ySpeed;
        }
    }
    glutSwapBuffers(); // Swap front and back buffers (of double buffered mode)
}
Note: most of the function uses a for loop with numOfBalls as the counter, but to test collision I'm only using 2 balls, hence the balls[0] and balls[1].
Here are some things to consider.
If the length of (xSpeed, ySpeed) is roughly comparable with .ballRadius, it is possible for two balls to travel "through" each other between "ticks" of the simulation's clock (one step). Consider two balls traveling perfectly vertically, one up, one down, and 1 .ballRadius apart horizontally. In real life they would clearly collide, but it would be easy for your simulation to miss the event if .ySpeed ~ .ballRadius. A crude guard is sketched below.
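The guard: advance the simulation in substeps small enough that no ball moves more than half a radius per step. This is a sketch, not your code; maxSpeed, minRadius, and checkCollisions() are placeholders.
int substeps = (int)ceil(maxSpeed / (0.5f * minRadius));
if (substeps < 1) substeps = 1;
for (int s = 0; s < substeps; s++)
{
    for (int i = 0; i < numOfBalls; i++)
    {
        balls[i].xPos += balls[i].xSpeed / substeps;
        balls[i].yPos += balls[i].ySpeed / substeps;
    }
    checkCollisions(); // run the distance test above once per substep
}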
Second, your change to the balls' velocity vectors results in each ball coming to rest, since
balls[0].xSpeed -= balls[0].xSpeed;
is a really exotic way of writing
balls[0].xSpeed = 0;
To get the physics almost correct, you need to invert only the velocity component perpendicular to the plane of contact.
In other words, take collision_vector to be the vector between the centers of the balls (just subtract one center's coordinates from the other's). Because you have spheres, this also happens to be the normal of the collision plane (normalize it before using it below).
Now for each ball in turn, you need to decompose its speed. The A component will be the one aligned with the collision_vector; you can obtain it with some vector arithmetic: A = dot(speed, collision_vector) * collision_vector. This is the thing you want to invert. You also want to extract the B component that is parallel to the collision plane. Because it's parallel, it won't change because of the collision. You obtain it by subtracting A from the speed vector.
Finally, the new speed will be something like B - A. If you want the balls to spin, you will need an angular momentum in the direction of A - B. If the balls have different masses, then you will need to use the mass ratio as a multiplier for A in the first formula.
This will make the collision look legit. The detection still needs to happen correctly: make sure the speeds are significantly smaller than the radius of the balls. For comparable or bigger speeds you will need more complex algorithms.
Note: most of the stuff above is vector arithmetic. Also, it's late here, so I might have mixed up some signs (sorry). Take a simple example on paper and work it out; it will also help you understand the solution better.
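Here is a rough sketch of that decomposition for equal masses (the Ball fields match the question; the function itself is illustrative):
#include <cmath>

struct Ball { float xPos, yPos, xSpeed, ySpeed, ballRadius; };

void bounceOff(Ball &ball, const Ball &other)
{
    // collision_vector between the centers; for spheres this is also the
    // normal of the collision plane.
    float nx = other.xPos - ball.xPos;
    float ny = other.yPos - ball.yPos;
    float len = std::sqrt(nx * nx + ny * ny);
    if (len == 0.0f)
        return; // coincident centers: no meaningful normal
    nx /= len;
    ny /= len;

    // A = dot(speed, n) * n, the component along the collision vector.
    float along = ball.xSpeed * nx + ball.ySpeed * ny;

    // New speed = B - A = speed - 2A; the tangential part B is untouched.
    ball.xSpeed -= 2.0f * along * nx;
    ball.ySpeed -= 2.0f * along * ny;
}
Call it once per ball, e.g. bounceOff(balls[0], balls[1]) and then bounceOff(balls[1], balls[0]).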

OpenGl Implicit Circle function-- incomplete circle

I am having a little issue drawing a circle. The function draws an almost complete circle; I am just missing a tiny bit of the loop. I am assuming the issue has something to do with an automatic redraw of something?
Here is the function
for (x = radius; x >= -radius; x -= 0.05) // draw the plot
{
    double temp = (radius * radius) - (x * x);
    y = sqrt(temp);
    glVertex2f(x, y);
}
for (x = -radius; x <= radius; x += 0.05) // draw the plot
{
    double temp = (radius * radius) - (x * x);
    y = sqrt(temp);
    glVertex2f(x, -y);
}
Would any of the other code be helpful?
I think what you're experiencing is just a floating-point precision issue. You assume your x values go to exactly -radius (or radius, respectively) at the end of each loop, which they probably don't, due to accumulated rounding error from all the additions.
This is no problem at -radius, since it's merged with the start of the second loop anyway, but the second loop won't end exactly at radius. Try making the whole thing a GL_LINE_LOOP instead of a GL_LINE_STRIP to connect the last vertex back to the first.
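Alternatively, parameterizing by angle sidesteps the rounding issue altogether. A sketch (immediate-mode setup assumed, as in the question):
#include <cmath>

void drawCircle(float radius, int segments)
{
    glBegin(GL_LINE_LOOP); // the loop closes the final gap automatically
    for (int i = 0; i < segments; i++)
    {
        float theta = 2.0f * 3.14159265f * i / segments;
        glVertex2f(radius * std::cos(theta), radius * std::sin(theta));
    }
    glEnd();
}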