Rotating a vector around a point - C++

I've looked around here for answers on this and found a few good ones, but when I implement them in my code, I get unexpected results.
Here's my problem:
I'm creating a top-down geometry shooter, and when an enemy is hit by a bullet, the enemy should explode into smaller clones, shooting out from the center of the enemy in a circular fashion, at even intervals around the enemy. I assumed I could accomplish this by getting an initial vector coming straight out of the side of the enemy shape, then rotating that vector the appropriate number of times. Here's my code:
void Game::spawnSmallEnemies(s_ptr<Entity> e)
{
    int vertices = e->cShape->shape.getPointCount();
    float angle = 360.f / vertices;
    double conv = M_PI / 180.f;   // degrees-to-radians factor (unused below)
    double cs = cos(angle * (M_PI / 180));
    double sn = sin(angle * (M_PI / 180));
    // m_enemyCfg.SR is the radius of the enemy shape
    Vec2 velocity { e->cTransform->m_pos.m_x + m_enemyCfg.SR, e->cTransform->m_pos.m_y };
    velocity = velocity.get_normal();
    Vec2 origin { e->cTransform->m_pos };
    for (int i = 0; i < vertices; i++)
    {
        auto small = m_entityMgr.addEntity("small");
        small->cTransform = std::make_shared<CTransform>(origin, velocity * 3, 0);
        small->cShape = std::make_shared<CShape>(m_enemyCfg.SR / 4, vertices,
            e->cShape->shape.getFillColor(), e->cShape->shape.getOutlineColor(),
            e->cShape->shape.getOutlineThickness(), small->cTransform->m_pos);
        small->cCircleCollider = std::make_shared<CCircleCollider>(m_enemyCfg.SR / 4);
        small->cLife = std::make_shared<CLifespan>(m_enemyCfg.L);
        velocity.m_x = ((velocity.m_x - origin.m_x) * cs) - ((origin.m_y - velocity.m_y) * sn) + origin.m_x;
        velocity.m_y = origin.m_y - ((origin.m_y - velocity.m_y) * cs) + ((velocity.m_x - origin.m_x) * sn);
    }
}
I got the rotation code at the bottom from this post; however, all of the smaller shapes shoot toward the bottom right, clumped together. I assume my error is one of logic, but I have been unable to find a solution.
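For reference, the standard 2D rotation of a point (px, py) around a pivot (ox, oy) computes both new components from the pre-rotation offsets, so the offsets must be saved before either component is overwritten; in the loop above, velocity.m_x is assigned first and then read again by the velocity.m_y line, so the two components rotate inconsistently. A minimal sketch of the rotation step on plain floats (rotateAround is an illustrative helper, not the game's API):

#include <cmath>

// Rotate the point (px, py) counter-clockwise by `degrees` around (ox, oy).
// Both components are computed from the saved pre-rotation offsets dx/dy.
void rotateAround(float& px, float& py, float ox, float oy, float degrees)
{
    const float rad = degrees * static_cast<float>(M_PI) / 180.f;
    const float cs = std::cos(rad);
    const float sn = std::sin(rad);
    const float dx = px - ox;   // offset from the pivot, saved first
    const float dy = py - oy;
    px = ox + dx * cs - dy * sn;
    py = oy + dx * sn + dy * cs;
}

Note also that velocity above is built from a world-space point (the enemy's position plus the radius) and then normalized, so it depends on the enemy's absolute position; a pure direction should be rotated about (0, 0), not about the enemy's position.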

Related

2D Elastic Collision with Circles

I've seen there are a lot of posts about this already, but I can't find one that relates to what I want to do.
I used the formula from here:
https://www.vobarian.com/collisions/2dcollisions2.pdf
As well as this one:
https://www.plasmaphysics.org.uk/programs/coll2d_cpp.htm
I think they are basically the same thing. Now, my problem is that one of my circles is always static, and what I want is for the moving circle to bounce back with the same speed when it hits the static one straight on; but these formulas have the moving circle stop still, presumably because it passes its energy to the other circle, which would then move away.
I tried doing things like bounce = vel.x pre-collision - vel.y post-collision and adding or subtracting that to vel.x post-collision, and it kind of works, but not really: the angles are wrong, and depending on which direction the ball is coming from it may bounce up instead of down, or left instead of right. It would probably require a lot of if/else statements to get it to work at all.
Can someone suggest something?
Here's the code for the function:
void Collision2(sf::CircleShape* b1, sf::CircleShape* b2, sf::Vector2f vel1, sf::Vector2f& vel2) {
    // vel1 is (0,0) but I might want to use it later
    // masses
    float m1 = 10;
    float m2 = 10;
    // normal vector
    sf::Vector2f nVec((b2->getPosition().x - b1->getPosition().x), (b2->getPosition().y - b1->getPosition().y));
    // unit normal vector
    sf::Vector2f uNVec(nVec / sqrt((nVec.x * nVec.x) + (nVec.y * nVec.y)));
    // unit tangent vector
    sf::Vector2f uTVec(-uNVec.y, uNVec.x);
    // velocity components along the normal and tangent
    float v1n = (uNVec.x * vel1.x) + (uNVec.y * vel1.y);
    float v2n = (uNVec.x * vel2.x) + (uNVec.y * vel2.y);
    float v1t = (uTVec.x * vel1.x) + (uTVec.y * vel1.y);
    float v2t = (uTVec.x * vel2.x) + (uTVec.y * vel2.y);
    // tangential components are unchanged; normal components after collision
    float v1tN = v1t;
    float v2tN = v2t;
    float v1nN = (v1n * (m1 - m2) + (2 * m2) * v2n) / (m1 + m2);
    float v2nN = (v2n * (m2 - m1) + (2 * m1) * v1n) / (m1 + m2);
    // new velocities
    sf::Vector2f vel1N(v1nN * uNVec);
    sf::Vector2f vel1tN(v1tN * uTVec);
    sf::Vector2f vel2N(v2nN * uNVec);
    sf::Vector2f vel2tN(v2tN * uTVec);
    vel1 = (vel1N + vel1tN);
    vel2 = (vel2N + vel2tN);
}
Physics part
The sources you added illustrate the physics behind it very well: when the two balls collide, they transfer momentum between them, and in an elastic collision this transfer keeps the energy of the system the same.
We can think of the collision in terms of inertia and momentum rather than starting from velocity. The kinetic energy of a body is p^2/(2m), so if we transfer dp from the moving body (mass m) to the stationary one (mass M), the change in energy is dE = -p*dp/m + dp^2/(2m) + dp^2/(2M) = 0. Rearranging gives p*dp/m = dp^2 * (1/(2m) + 1/(2M)). Taking m = M yields dp = p, i.e. all momentum is transferred (note: this is a simplistic view, dealing only with head-on collisions). In the limit where the stationary object is massive (M >> m), however, the result is dp = 2p: the moving body simply bounces off.
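Explicitly, solving the rearranged equation for the transferred momentum:
dp = (p/m) / (1/(2m) + 1/(2M)) = 2pM / (m + M) = 2p / (1 + m/M),
which reproduces both limits above: dp = p for M = m (all momentum transferred, the moving ball stops), and dp -> 2p as M -> infinity (final momentum p - dp = -p, i.e. the ball bounces straight back with the same speed).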
Programming
You can achieve the result by setting M to the maximum allowed float value (if I recall, inf/inf == NaN in the IEEE standard, so using true infinity doesn't work, unfortunately). Alternatively you can do the collision within the circle by creating custom classes like:
class Circle : public sf::CircleShape {
public:
    virtual void collide(Circle*);
};

class StaticCircle : public Circle {
public:
    void collide(Circle*) override;
};
In the second one you can omit any term where you divide by the mass of the circle, as it is in essence infinite.
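In that M -> infinity limit the update collapses to reflecting the moving circle's velocity about the contact normal: the normal component flips sign, the tangential component is untouched, and speed is preserved. A minimal sketch reusing the question's setup (StaticCollision is an illustrative name, not an existing API):

#include <SFML/Graphics.hpp>
#include <cmath>

// Collision of a moving circle (velocity vel) against an immovable circle.
void StaticCollision(const sf::CircleShape* staticB, const sf::CircleShape* movingB,
                     sf::Vector2f& vel)
{
    // Unit normal pointing from the static circle toward the moving one.
    sf::Vector2f n = movingB->getPosition() - staticB->getPosition();
    n /= std::sqrt(n.x * n.x + n.y * n.y);
    // Reflect: v' = v - 2 (v . n) n  -- same speed, normal component negated.
    float vn = vel.x * n.x + vel.y * n.y;
    vel -= 2.f * vn * n;
}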

Skewed / Off-axis stereoscopic projection with glm::frustum flickering

I have a 3D stereoscopic rendering application that currently does parallel stereoscopy by just moving (shifting) the camera to the side for each of the left and right views. It works, but recently I felt it could be much improved if I had an off-axis option. I got a semi-working algorithm for glm::frustum() to allow for this, but I'm having trouble as soon as I switch to it from glm::perspective().
I followed the only GL guide I could find, Simple, Low-Cost Stereographics, which said to replace my existing glm::perspective() call with two calls to glm::frustum():
// OFF-AXIS STEREO
if (myAbj.stereoOffsetAxis) {
    glm::vec3 targ0_stored = i->targO;
    if (myAbj.stereoLR == 0)
    {
        float sgn = -1.f * (float)myAbj.stereoSwitchLR;
        float eyeSep = myAbj.stereoSep;
        float focalLength = 50.f;
        float eyeOff = (sgn * (eyeSep / 2.f) * (myAbj.selCamLi->nearClip->val_f / focalLength));
        float top = myAbj.selCamLi->nearClip->val_f * tan(myAbj.selCamLi->fov->val_f / 2.f);
        float right = myAbj.aspect * top;
        myAbj.selCamLi->PM = glm::frustum(-right - eyeOff, right - eyeOff, -top, top, myAbj.selCamLi->nearClip->val_f, myAbj.selCamLi->farClip->val_f);
        i->targO += myAbj.selCamLi->rightO * myAbj.stereoSep * (float)myAbj.stereoSwitchLR;
        VMup(i);
        i->targO = targ0_stored;
    }
    if (myAbj.stereoLR == 1)
    {
        float sgn = 1.f * (float)myAbj.stereoSwitchLR;
        float eyeSep = myAbj.stereoSep;
        float focalLength = 50.f;
        float eyeOff = (sgn * (eyeSep / 2.f) * (myAbj.selCamLi->nearClip->val_f / focalLength));
        float top = myAbj.selCamLi->nearClip->val_f * tan(myAbj.selCamLi->fov->val_f / 2.f);
        float right = myAbj.aspect * top;
        myAbj.selCamLi->PM = glm::frustum(-right - eyeOff, right - eyeOff, -top, top, myAbj.selCamLi->nearClip->val_f, myAbj.selCamLi->farClip->val_f);
        i->targO += myAbj.selCamLi->rightO * -myAbj.stereoSep * (float)myAbj.stereoSwitchLR;
        VMup(i);
        i->targO = targ0_stored;
    }
}
Using this equation, my view matrix is rotated 180 degrees on the Z axis. The bigger issue, however, is a large number of black dots and flickering on my objects. When I move the camera close enough, the flickering stops. Even when I minimize the scene, the issue is still there.
Why is this flickering happening and what can I do to prevent it? It is ruining my scenes.
My near clip was causing the problem. It couldn't be set to the same low value that glm::perspective() was using; it needed to be a little bit larger.
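For illustration, reusing the names from the question (the numeric value here is hypothetical), the fix amounts to passing a slightly larger near plane into glm::frustum(); depth-buffer precision degrades with the near/far ratio, and an overly small near plane shows up as exactly this kind of z-fighting speckle:

float nearClip = 0.5f;   // hypothetical: a bit larger than the old glm::perspective() value
float eyeOff = sgn * (eyeSep / 2.f) * (nearClip / focalLength);
float top = nearClip * tan(myAbj.selCamLi->fov->val_f / 2.f);
float right = myAbj.aspect * top;
myAbj.selCamLi->PM = glm::frustum(-right - eyeOff, right - eyeOff, -top, top,
                                  nearClip, myAbj.selCamLi->farClip->val_f);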

Simple Ray Tracing with Lambertian Shading, Confusion

I didn't see another post with a problem similar to mine, so hopefully this is not redundant.
I've been reading a book on the fundamentals of computer graphics (third edition) and implementing a basic ray-tracing program based on the principles I've learned from it. I had little trouble implementing parallel and perspective projection, but after moving on to Lambertian and Blinn-Phong shading I've run into a snag that I'm having trouble figuring out on my own.
I believe my problem is related to how I am calculating the ray-sphere intersection point and the vectors to the camera/light. I attached a picture that is output when I run simple perspective projection with no shading.
Perspective Output
However, when I attempt the same scene with Lambertian shading the spheres disappear.
Blank Output
While trying to debug this myself, I noticed that if I negate the x, y, z coordinates calculated as the hit point, the spheres appear again, and I believe the light is coming from the opposite direction from what I expect.
Lambertian, negated hitPoint
I am calculating the hit point by adding to the origin (where my "camera" is, (0, 0, 0)) the product of the projected direction vector and the t value from the ray-sphere intersection formula, or just e + t*d.
The vector from the hit point to the light, l, I am setting to the light's position minus the hit point's position (so light's coords minus hit point's coords).
v, the vector from the hit point to the camera, I get by simply negating the projected view vector.
And the surface normal I am getting by hit point minus the sphere's position.
All of which I believe is correct. However, while stepping through the part that calculates the surface normal, I noticed something I think is odd. When subtracting the sphere's position from the hit point's position to get the vector from the sphere's center to the hit point, I would expect a vector whose components all lie within the range (-r, r); but that is not happening.
This is an example from stepping through my code:
Calculated hit point: (-0.9971, 0.1255, -7.8284)
Sphere center: (0, 0, 8) (radius is 1)
After subtracting, I get a vector whose z value is -15.8284. This seems wrong to me, but I do not know what is causing it. Wouldn't a z value of -15.8284 imply that the sphere center and the hit position are ~16 units apart along the z axis? These two numbers are within 1 of each other in absolute value, which is what leads me to think my problem has something to do with this.
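(Sanity check on those numbers: the hit point minus the center is (-0.9971 - 0, 0.1255 - 0, -7.8284 - 8) = (-0.9971, 0.1255, -15.8284), whose length is about 15.86, far more than the radius of 1, so the computed hit point really is not on the sphere. The coordinates only look plausible because the hit point's z has the opposite sign from the sphere's, which is consistent with the negation observation above.)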
Here's the main ray-tracing loop:
auto origin = Position3f(0, 0, 0);
for (int i = 0; i < numPixX; i++)
{
    for (int j = 0; j < numPixY; j++)
    {
        for (SceneSurface* object : objects)
        {
            float imgPlane_u = left + (right - left) * (i + 0.5f) / numPixX;
            float imgPlane_v = bottom + (top - bottom) * (j + 0.5f) / numPixY;
            Vector3f direction = (w.negated() * focal_length) + (u * imgPlane_u) + (v * imgPlane_v);
            Ray viewingRay(origin, eye, direction);
            RayTestResult testResult = object->TestViewRay(viewingRay);
            if (testResult.m_bRayHit)
            {
                Position3f hitPoint = (origin + (direction) * testResult.m_fDist); //.negated();
                Vector3f light_direction = (light - hitPoint).toVector().normalized();
                Vector3f view_direction = direction.negated().normalized();
                Vector3f surface_normal = object->GetNormalAt(hitPoint);
                image[j][i] = object->color * intensity * fmax(0, surface_normal * light_direction);
            }
        }
    }
}
GetNormalAt is simply:
Vector3f Sphere::GetNormalAt(Position3f &surface)
{
    return (surface - position).toVector().normalized();
}
My spheres are positioned at (0, 0, 8) and (-1.5, -1, 6), both with radius 1.0f.
My light is at (-3, -3, 0) with an intensity of 1.0f.
I ignore any intersection where t is not greater than 0, so I do not believe that is causing this problem.
I think I may be making some kind of mistake when it comes to keeping positions and vectors in the same coordinate system (the same transform?), but I'm still learning and admittedly don't understand that very well. If the view direction is always in the -w direction, why do we position scene objects in the positive w direction?
Any help or wisdom is greatly appreciated. I'm teaching this all to myself so far and I'm pleased with how much I've taken in, but something in my gut tells me this is a relatively simple mistake.
Just in case it is of any use, here's the TestViewRay function:
RayTestResult Sphere::TestViewRay(Ray &viewRay)
{
    RayTestResult result;
    result.m_bRayHit = false;
    Position3f &c = position;
    float r = radius;
    Vector3f &d = viewRay.getDirection();
    Position3f &e = viewRay.getPosition();
    float part = d * (e - c);
    Position3f part2 = (e - c);
    float part3 = d * d;
    float discriminant = ((part * part) - (part3) * ((part2 * part2) - (r * r)));
    if (discriminant > 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 2;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    else if (discriminant == 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 1;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    return result;
}
EDIT:
I'm happy to report I figured out my problem.
Upon sitting down with my sister to look at this, I noticed that in my ray-sphere hit detection I had this:
float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
which is incorrect: d should be negated. It should be:
float t_add = ((neg_d * (e_min_c)) + sqrt(discriminant)) / (part2);
(I renamed a couple of variables.) Previously I had a zeroed vector so I could express -d as (zero_vector - d), and I had removed that because I implemented a member function to negate any given vector, but I forgot to go back and call it on d. After fixing that and moving my spheres into the negative z plane, my Lambertian and Blinn-Phong shading implementations work correctly.
Lambertian + Blinn-Phong
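For reference, here is the whole corrected solve written in the standard quadratic form: for a ray e + t*d and a sphere with center c and radius r, solve (d.d) t^2 + 2 d.(e-c) t + (e-c).(e-c) - r^2 = 0 and keep the smallest positive root. A self-contained sketch with illustrative names, not the project's actual types:

#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Returns the nearest t > 0 at which the ray e + t*d hits the sphere (c, r).
std::optional<float> hitSphere(const Vec3& e, const Vec3& d, const Vec3& c, float r)
{
    const Vec3 ec = e - c;
    const float a = dot(d, d);
    const float halfB = dot(d, ec);                  // negated below: -d.(e-c)
    const float disc = halfB * halfB - a * (dot(ec, ec) - r * r);
    if (disc < 0.f) return std::nullopt;             // ray misses the sphere
    const float sq = std::sqrt(disc);
    float t = (-halfB - sq) / a;                     // near root
    if (t <= 0.f) t = (-halfB + sq) / a;             // near root is behind: try the far one
    if (t <= 0.f) return std::nullopt;
    return t;
}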

How to check if two circles drawn on an Adafruit TFT screen are touching each other?

I'm making (or rather, trying to make, lol) a snake game on an Adafruit 1.8 TFT screen. The snake head of course needs to know when it hits the "point", and therefore I need to know when the two circles, which are of equal size, are touching each other. However, my function for this is not working (in other words, it keeps printing "NOT TOUCHING").
I'm trying to follow this formula for the distance between the centers:
sqrt(dx^2 + dy^2)
The radius of both circles is 3, and I get the center for the formula by adding each circle's screen x and y positions together (am I even getting the centers correctly?).
void pointCondition() {
    double centerPoint = pointPositionX + pointPositionY;
    double centerSnakeHead = positionX + positionY;
    int distanceBetweenCenter = (sqrt(centerPoint * 3 + centerSnakeHead * 3));
    int weight = 3 / 2;
    if (distanceBetweenCenter < weight) {
        Serial.println("TOUCHING");
    } else {
        Serial.println("NOT TOUCHING");
    }
}
Can you see what I am doing wrong?
You need something like this:
double dx = pointPositionX - positionX,
       dy = pointPositionY - positionY,
       d = sqrt(dx * dx + dy * dy);
// the circles touch when the distance between centers is at most the sum of the radii
bool touching = d <= 3 + 3;
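Dropped into your pointCondition(), and assuming positionX/positionY and pointPositionX/pointPositionY hold the circle centers in pixels, that becomes (a sketch):

void pointCondition() {
    double dx = pointPositionX - positionX;
    double dy = pointPositionY - positionY;
    double distanceBetweenCenters = sqrt(dx * dx + dy * dy);
    // Two radius-3 circles touch when their centers are at most 3 + 3 = 6 apart.
    if (distanceBetweenCenters <= 6) {
        Serial.println("TOUCHING");
    } else {
        Serial.println("NOT TOUCHING");
    }
}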

OpenGL trackball

I am trying to rotate an OpenGL scene using a trackball. The problem I am having is that I get rotations opposite to the direction of my swipe on the screen. Here is the snippet of code:
prevPoint.y = viewPortHeight - prevPoint.y;
currentPoint.y = viewPortHeight - currentPoint.y;
prevPoint.x = prevPoint.x - centerx;
prevPoint.y = prevPoint.y - centery;
currentPoint.x = currentPoint.x - centerx;
currentPoint.y = currentPoint.y - centery;

double angle = 0;
if (prevPoint.x == currentPoint.x && prevPoint.y == currentPoint.y) {
    return;
}

double d, z, radius = viewPortHeight * 0.5;
if (viewPortWidth > viewPortHeight) {
    radius = viewPortHeight * 0.5f;
} else {
    radius = viewPortWidth * 0.5f;
}

d = (prevPoint.x * prevPoint.x + prevPoint.y * prevPoint.y);
if (d <= radius * radius * 0.5) { /* Inside sphere */
    z = sqrt(radius * radius - d);
} else { /* On hyperbola */
    z = (radius * radius * 0.5) / sqrt(d);
}
Vector refVector1(prevPoint.x, prevPoint.y, z);
refVector1.normalize();

d = (currentPoint.x * currentPoint.x + currentPoint.y * currentPoint.y);
if (d <= radius * radius * 0.5) { /* Inside sphere */
    z = sqrt(radius * radius - d);
} else { /* On hyperbola */
    z = (radius * radius * 0.5) / sqrt(d);
}
Vector refVector2(currentPoint.x, currentPoint.y, z);
refVector2.normalize();

Vector axisOfRotation = refVector1.cross(refVector2);
axisOfRotation.normalize();
angle = acos(refVector1 * refVector2);
I recommend artificially setting prevPoint and currentPoint to (0,0) and (0,1), then stepping through the code (with a debugger or with your eyes) to check that each part makes sense to you and that the angle of rotation and the axis at the end of the block are what you expect.
If they are what you expect, then I'm guessing the error is in the logic that occurs after that, i.e. where you take the angle and axis and convert them to a matrix which gets multiplied in to move the model. A number of convention choices happen in this pipeline which, if swapped, can lead to the type of bug you're having:
Whether the formula assumes the angle winds left- or right-handedly around the axis.
Whether the transformation is meant to rotate an object in the world or to rotate the camera.
Whether the matrix is meant to operate by multiplication on the left or on the right.
Whether rows or columns of matrices are contiguous in memory.
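As a concrete check of the first item on that list: swapping the operand order of the cross product negates the rotation axis, which reverses the perceived rotation, and any single such flip in the pipeline produces exactly the opposite-direction symptom. A self-contained sketch (illustrative Vec3 type, not the question's Vector class):

#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

int main()
{
    Vec3 v1{ 0, 0, 1 }, v2{ 1, 0, 0 };             // trackball vectors for a 90-degree swipe
    Vec3 ab = cross(v1, v2);                       // axis one way...
    Vec3 ba = cross(v2, v1);                       // ...and its negation
    std::printf("v1 x v2 = (%g, %g, %g)\n", ab.x, ab.y, ab.z);  // (0, 1, 0)
    std::printf("v2 x v1 = (%g, %g, %g)\n", ba.x, ba.y, ba.z);  // (0, -1, 0)
    return 0;
}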