Incorrect rotation and scaling of shape objects (C++)

(Please help, I am still unable to solve this.) After applying rotation and scaling to my polygon-shaped objects, I managed to render an image, but it differs from the correct image shown below. I am puzzled as to why. I found the center of the vertices and scaled and rotated each polygon about that center, hoping to get a straight path, but I still cannot get the straight path I want. As I am new to rotation, scaling, and translation, I would sincerely appreciate any help getting the image to match. I do not know what I need to change. Do I also need to find the center of the vertices for scaling? And should I then translate the points back to the center, or back to the original pivot point? I have the same question for rotation. If you can, please help me identify the mistake in my code. I hope the question is clear. Thank you.
Note: In my test case, translation is called first, followed by rotate, and then scale.
So, t->translate({ 0.0f, 50.0f }); then r->rotate(0.25f); then s->scale(0.85f);. The test case CANNOT be modified.
[Incorrect image]
[Correct image]
Translating method
template<typename T>
void translate(const T& displacement)
{
    _pivotPt = T((_pivotPt.x() + displacement.x()),
                 (_pivotPt.y() + displacement.y()));
}
Scaling method
template<typename T>
void Polygon<T>::scale(const float factor) // temporarily treat another point as origin
{
    for (size_t i{}; i < _nsize; i++)
    {
        center += _npts[i];
    }
    center = T(center.x() / _nsize, center.y() / _nsize);
    for (auto& verts : _npts)
    {
        verts = T((static_cast<float>(center.x()) +
                   factor * static_cast<float>(verts.x() - center.x())),
                  (static_cast<float>(center.y()) +
                   factor * static_cast<float>(verts.y() - center.y())));
    }
}
Rotation method
template<typename T>
void Polygon<T>::rotate(const float angle)
{
    typename Point<T>::type _xn, _yn;
    for (size_t i{}; i < _nsize; i++)
    {
        center += _npts[i];
    }
    center = T(center.x() / _nsize, center.y() / _nsize); // find center of all given coordinates
    for (auto& verts : _npts)
    {
        float xn = verts.x() - center.x(); // subtract pivot point
        float yn = verts.y() - center.y();
        _xn = (center.x() + std::cos(angle) * xn - std::sin(angle) * yn); // rotate, then translate back
        _yn = (center.y() + std::sin(angle) * xn + std::cos(angle) * yn);
        verts = T(_xn, _yn);
    }
}

It seems like you should rotate around the centroid, so I don't see why you are using _pivotPt.x() when computing the new coordinates. It should be:
_xn = (center.x() + std::cos(angle) * xn - std::sin(angle) * yn);
_yn = (center.y() + std::sin(angle) * xn + std::cos(angle) * yn);
Edit: It seems like center and _pivotPt should always be the same.
Edit: Your center object is a global variable that keeps being updated. Each time you try to compute the centroid, the old value corrupts the computation, so reset it to zero before accumulating.
P.S.: Your translation method translates only the centroid (pivot point) and assumes the new value will be used correctly by the next functions. By itself that is not a bad idea, but it is error prone. Given your situation, it makes more sense to code conservatively and translate all the points in _npts, as in the sketch below.
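A minimal sketch of that conservative approach, assuming the same Polygon<T>, _npts, _nsize and _pivotPt members as in the question (the centroid() helper is hypothetical, added so the accumulator is local rather than a stale global):
template<typename T>
void Polygon<T>::translate(const T& displacement)
{
    for (auto& v : _npts) // move every vertex, not just the pivot
        v = T(v.x() + displacement.x(), v.y() + displacement.y());
    _pivotPt = T(_pivotPt.x() + displacement.x(),
                 _pivotPt.y() + displacement.y());
}

template<typename T>
T Polygon<T>::centroid() const // hypothetical helper
{
    float cx = 0.0f, cy = 0.0f; // fresh, local accumulation each call
    for (const auto& v : _npts)
    {
        cx += v.x();
        cy += v.y();
    }
    return T(cx / _nsize, cy / _nsize);
}
With a local accumulator like this, rotate and scale can each call centroid() without inheriting the previous call's sum.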

Related

Enemies path following (Space Shooter game)

I have recently been working with the SFML libraries, trying to build a Space Shooter game from scratch. After some time working on it I have something that works fine, but I am facing one issue and I do not know exactly how to proceed, so I hope your wisdom can lead me to a good solution. I will try to explain it as best I can:
Enemies following a path: currently in my game, enemies can follow linear paths by doing the following:
float vx = (float)m_wayPoints_v[m_wayPointsIndex_ui8].x - (float)m_pos_v.x;
float vy = (float)m_wayPoints_v[m_wayPointsIndex_ui8].y - (float)m_pos_v.y;
float len = sqrt(vx * vx + vy * vy);
//cout << len << endl;
if (len < 2.0f)
{
    // Close enough, entity has arrived
    //cout << "Has arrived" << endl;
    m_wayPointsIndex_ui8++;
    if (m_wayPointsIndex_ui8 >= m_wayPoints_v.size())
    {
        m_wayPointsIndex_ui8 = 0;
    }
}
else
{
    vx /= len;
    vy /= len;
    m_pos_v.x += vx * float(m_moveSpeed_ui16) * time;
    m_pos_v.y += vy * float(m_moveSpeed_ui16) * time;
}
m_wayPoints_v is a vector that basically holds the 2D points to be followed.
Related to this small piece of code, I have to say that it sometimes gives me problems: the higher the enemies' speed, the harder it becomes for them to get close enough to the next point.
Is there any other way to be more accurate at path following, independently of the enemy speed? And also related to path following: how would I do an introduction of the enemies before each wave's movement pattern starts (doing circles, spirals, ellipses or whatever before reaching the final point)?
For example, in the picture below:
The black line is the path I want a spaceship to follow before starting the AI pattern (move from left to right and from right to left), which is the red circle.
Is it done by hardcoding each and every one of the movements, or is there a better solution?
I hope I made myself clear on this; in case I did not, please let me know and I will give more details. Thank you very much in advance!
Way points
You need to add some additional information to the way points, and track the NPC's position in relation to them.
The code snippet below shows how a set of way points can be created as a linked list. Each way point has a link to the next way point, the distance to the next way point, and the total distance along the path so far.
Then each step you just increase the NPC's distance along the set of way points. If that distance is greater than the totalDistance at the next way point, follow the link to the next one. You can use a while loop to search for the next way point, so you will always be at the correct position no matter what your speed is.
Once you are at the correct way point, it is just a matter of calculating where the NPC is between the current and next way point.
Define a way point
class WayPoint {
public:
    WayPoint(float px, float py);
    float x, y, distanceToNext, totalDistance;
    WayPoint* next; // a linked list needs a pointer here in C++, not a value member
    WayPoint* addNext(WayPoint* wp);
};

WayPoint::WayPoint(float px, float py) {
    x = px; y = py;
    next = nullptr;
    distanceToNext = 0.0f;
    totalDistance = 0.0f;
}

WayPoint* WayPoint::addNext(WayPoint* wp) {
    next = wp;
    distanceToNext = sqrt((next->x - x) * (next->x - x) + (next->y - y) * (next->y - y));
    next->totalDistance = totalDistance + distanceToNext;
    return wp;
}
Declaring and linking waypoints
WayPoint a(10.0f, 10.0f);
WayPoint b(100.0f, 400.0f);
WayPoint c(200.0f, 100.0f);
a.addNext(&b);
b.addNext(&c);
NPC follows the way point path at any speed
WayPoint* currentWayPoint = &a;
NPC ship;
ship.distance += ship.speed * time;
while (ship.distance > currentWayPoint->next->totalDistance) {
    currentWayPoint = currentWayPoint->next;
}
float unitDist = (ship.distance - currentWayPoint->totalDistance) / currentWayPoint->distanceToNext;
// NOTE: to smooth the line following, use the ease curve. See bottom of answer:
// float unitDist = sigBell((ship.distance - currentWayPoint->totalDistance) / currentWayPoint->distanceToNext, power);
ship.pos.x = (currentWayPoint->next->x - currentWayPoint->x) * unitDist + currentWayPoint->x;
ship.pos.y = (currentWayPoint->next->y - currentWayPoint->y) * unitDist + currentWayPoint->y;
Note you can link back to the start, but be careful to check for the total distance wrapping back to zero in the while loop, or you will end up in an infinite loop. When you pass zero, recalculate the NPC distance modulo the last way point's totalDistance, so you never travel more than one loop of way points to find the next.
e.g. in the while loop, if passing the last way point:
if (currentWayPoint->next->totalDistance == 0.0f) {
    ship.distance = std::fmod(ship.distance, currentWayPoint->totalDistance);
}
Smooth paths
Using the above method you can add additional information to the way points.
For example, for each way point add a vector that is 90 degrees off the path to the next:
// 90 deg CW
offX = -(next->y - y) / distanceToNext; // yes, offX is computed from -y
offY =  (next->x - x) / distanceToNext;
offDist = ?; // how far from the line you want the path to go
Then, when you calculate the unitDist along the line between two way points, you can use that unit distance to smoothly interpolate the offset:
float unitDist = (ship.distance - currentWayPoint->totalDistance) / currentWayPoint->distanceToNext;
// very basic ease in (0 -> 1) then ease out (1 -> 0), or use the sigBell curve;
// both halves meet at 1 when unitDist == 0.5 so the offset is continuous
float unitOffset = unitDist < 0.5f
    ? (unitDist * 2.0f) * (unitDist * 2.0f)
    : ((1.0f - unitDist) * 2.0f) * ((1.0f - unitDist) * 2.0f);
float x = currentWayPoint->offX * currentWayPoint->offDist * unitOffset;
float y = currentWayPoint->offY * currentWayPoint->offDist * unitOffset;
ship.pos.x = (currentWayPoint->next->x - currentWayPoint->x) * unitDist + currentWayPoint->x + x;
ship.pos.y = (currentWayPoint->next->y - currentWayPoint->y) * unitDist + currentWayPoint->y + y;
Now if you add three way points, with the first offDist a positive distance and the second a negative offDist, you will get a path that follows smooth curves like the ones you show in the image.
Note that the actual speed of the NPC will change over each way point. The maths to get a constant speed with this method is too heavy to be worth the effort, as for small offsets no one will notice. If your offsets are too large, rethink your way point layout.
Note: the above method is a modification of a quadratic Bézier curve where the control point is defined as an offset from the center point between the end points.
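For reference, a minimal sketch of the standard quadratic Bézier evaluation this approximates (Vec2 and quadraticBezier are illustrative names; p0 and p1 are the two way points and c the control point):
#include <cmath>

struct Vec2 { float x, y; };

// Quadratic Bezier: B(t) = (1-t)^2 * p0 + 2(1-t)t * c + t^2 * p1, for t in [0, 1].
// The offset method above effectively places c at the segment midpoint,
// pushed out by offDist along the segment's 90-degree normal.
Vec2 quadraticBezier(Vec2 p0, Vec2 c, Vec2 p1, float t) {
    float u = 1.0f - t;
    return { u * u * p0.x + 2.0f * u * t * c.x + t * t * p1.x,
             u * u * p0.y + 2.0f * u * t * c.y + t * t * p1.y };
}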
Sigmoid curve
You don't need to add the offsets: you can get some (limited) smoothing along the path just by manipulating the unitDist value (see the comment in the first snippet).
Use the following two functions to convert unit values into a bell-like curve (sigBell) and a standard ease-out/ease-in curve (sigmoid). Use the power argument to control the slopes of the curves.
float sigmoid(float unit, float power) {
    // power should be > 0: 1 is a straight line, 2 is ease out/ease in,
    // 0.5 is ease to center/ease from center
    float u = unit <= 0.0f ? 0.0f : (unit >= 1.0f ? 1.0f : unit); // clamp, as float errors will show
    float p = pow(u, power);
    return p / (p + pow(1.0f - u, power));
}
float sigBell(float unit, float power) {
    float u = unit < 0.5f ? unit * 2.0f : 1.0f - (unit - 0.5f) * 2.0f;
    return sigmoid(u, power);
}
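The smoothed variant of the interpolation would then look like this (the power value 2.0f is just an illustrative choice):
// Replace the linear unitDist with a smoothed one before interpolating:
float rawUnit = (ship.distance - currentWayPoint->totalDistance)
              / currentWayPoint->distanceToNext;
float unitDist = sigBell(rawUnit, 2.0f); // eases in and out of each way point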
This doesn't answer your specific question, but I'm curious why you don't use the SFML type sf::Vector2 (or its typedefs Vector2i, Vector2u, Vector2f)? It seems like it would clean up some of your code.
As far as the animation is concerned, you could consider loading the positions for the flight pattern you want into a stack or something similar. Then pop each position, move your ship to that position, render, and repeat.
And if you want a sine-like flight path similar to your picture, you can find an equation that resembles the flight path you like. Use Desmos or a similar tool to make a graph that fits your needs, then iterate at whatever interval, feeding each iteration into the equation; the results are your positions at each iteration.
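For instance, such a sampled path could be built like this (a sketch only; the Point2f type and all parameter values are illustrative):
#include <cmath>
#include <vector>

struct Point2f { float x, y; };

// Sample a sine-like flight path: x advances at a constant rate while y oscillates.
std::vector<Point2f> buildSinePath(float startX, float baseY, float amplitude,
                                   float wavelength, int samples, float stepX)
{
    std::vector<Point2f> path;
    path.reserve(samples);
    for (int i = 0; i < samples; ++i) {
        float x = startX + i * stepX;
        float phase = 2.0f * 3.14159265f * (x - startX) / wavelength;
        path.push_back({ x, baseY + amplitude * std::sin(phase) });
    }
    return path;
}
The resulting points can be fed straight into the way point list from the answer above.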
Well, I think I found one of the problems, but I am not sure what the solution could be.
When using the piece of code I posted before, I found that there is a problem when reaching the destination point, due to the speed value. Currently, to move a spaceship fluently, I need to set the speed to 200, which means that in these formulas:
m_pos_v.x += vx * float(m_moveSpeed_ui16) * time;
m_pos_v.y += vy * float(m_moveSpeed_ui16) * time;
the new position might overshoot the 2.0f tolerance, so the spaceship never finds the destination point and gets stuck, because the minimum movement that can be done per frame (assuming 60 fps) is 200 * 1/60 ≈ 3.33 px. Is there any way this behavior can be avoided?
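One common fix (a sketch of my own, not from the thread): clamp the per-frame step to the remaining distance, so the ship lands exactly on the way point instead of oscillating around it. This reuses vx, vy and len exactly as computed in the snippet above:
float step = float(m_moveSpeed_ui16) * time; // distance covered this frame
if (step >= len)
{
    // Snap to the way point instead of overshooting it, then advance the index.
    m_pos_v.x = (float)m_wayPoints_v[m_wayPointsIndex_ui8].x;
    m_pos_v.y = (float)m_wayPoints_v[m_wayPointsIndex_ui8].y;
    m_wayPointsIndex_ui8 = (m_wayPointsIndex_ui8 + 1) % m_wayPoints_v.size();
}
else
{
    m_pos_v.x += (vx / len) * step;
    m_pos_v.y += (vy / len) * step;
}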

Projecting line from camera

I'm trying to convert a viewport click onto a world position for an object.
It would be quite simple if all I wanted was to draw a point exactly where the user clicks in the canvas:
void Canvas::getClickPosition(int x, int y, Vector3d (&out)[2]) const
{
    Vector4d point4d[2];
    Vector2d point2d(x, y);
    int w = canvas.width();
    int h = canvas.height();
    Matrix4d model = m_world * m_camera;
    for (int i = 0; i < 2; ++i) {
        Vector4d sw(point2d.x() / (0.5 * w) - 1,
                    point2d.y() / (0.5 * h) - 1, i * 1.0, 1);
        point4d[i] = (m_proj * model).inverse() * sw;
        out[i] = point4d[i].head<3>();
    }
}
The expected behavior is achieved with this simple code.
The problem arises when I try to actually make a line that will look like a one pixel when the user first clicks it. Until the camera is rotated in any direction it should look like it was perfectly shot from the camera and that it has whatever length (doesn't matter).
I tried the obvious:
Vector4d sw(point2d.x() / (0.5 * w) - 1,
            point2d.y() / (0.5 * h) - 1, 1, 1); // Z is now 1 instead of 0.
The result is, as most of you would expect, a line that converges on the vanishing point at the center of the screen. Therefore, the farther I click from the center, the more the line is skewed from its expected direction.
What can I do to have a line show as a dot from the click point of view, no matter where at the screen?
EDIT: for better clarity, I'm trying to draw the lines like this:
glBegin(GL_LINES);
line.p1 = m_proj * (m_world * m_camera) * line.p1;
line.p2 = m_proj * (m_world * m_camera) * line.p2;
glVertex3f(line.p1.x(), line.p1.y(), line.p1.z());
glVertex3f(line.p2.x(), line.p2.y(), line.p2.z());
glEnd();
Your initial attempt is actually very close. The only thing you are missing is the perspective divide:
out[i] = point4d[i].head<3>() / point4d[i].w();
Depending on your projection matrix, you might also need to specify a z-value of -1 for the near plane instead of 0.
And yes, your order of matrices projection * model * view seems strange. But as long as you keep the same order in both procedures, you should get a consistent result.
Make sure that the y-axis of your window coordinate system is pointing upwards. Otherwise, you will get a result that is reflected at the horizontal midline.
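Putting both fixes together, a minimal sketch of the unprojection (assuming the same Eigen-style members as in the question; getClickRay is an illustrative name, and the -1/+1 z values assume a GL-style clip volume):
// Sketch: unproject the click at the near and far planes to get a ray through the scene.
void Canvas::getClickRay(int x, int y, Vector3d (&out)[2]) const
{
    const double nx = x / (0.5 * canvas.width()) - 1.0;
    const double ny = 1.0 - y / (0.5 * canvas.height()); // flip y so it points up
    const Matrix4d inv = (m_proj * m_world * m_camera).inverse();
    const double zClip[2] = { -1.0, 1.0 }; // near and far planes, GL convention
    for (int i = 0; i < 2; ++i) {
        Vector4d p = inv * Vector4d(nx, ny, zClip[i], 1.0);
        out[i] = p.head<3>() / p.w(); // the perspective divide from the answer
    }
}
Both endpoints project back onto the clicked pixel, so the segment between them renders as a single dot from the original viewpoint.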

Raytracing Reflection distortion

I've started coding a raytracer, but today I ran into a problem with reflection.
First, here is an image of the problem:
I only computed the object's reflected color (so no lighting is applied to the reflected object).
The problem is the distortion, which I really don't understand.
I looked at the angle between my rayVector and the normalVector and it looks OK; the reflected vector also looks fine.
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
    double cosAngle;
    Vector copyNormal = normal;
    Vector copyView = ray;
    copyNormal.makeUnit();
    copyView.makeUnit();
    cosAngle = copyView.scale(copyNormal);
    return (-2.0 * cosAngle * normal + ray);
}
So for example when my ray is hitting the bottom of my sphere I have the following values:
cos: 1
ViewVector: [185.869,-2.44308,-26.3504]
NormalVector: [185.869,-2.44308,-26.3504]
ReflectedVector: [-185.869,2.44308,26.3504]
Below is the code that handles the reflection:
Color Rt::getReflectedColor(std::shared_ptr<SceneObj> obj, Camera camera,
                            Vector rayVec, double k, unsigned int pass) {
    if (pass > 10)
        return obj->getColor();
    if (obj->getReflectionIndex() == 0) {
        // apply effects
        return obj->getColor();
    }
    Color cuColor(obj->getColor());
    Color newColor(0);
    Math math;
    Vector view;
    Vector normal;
    Vector reflected;
    Position impact;
    std::pair<std::shared_ptr<SceneObj>, double> reflectedObj;
    normal = math.calcNormalVector(camera.pos, obj, rayVec, k, impact);
    view = Vector(impact.x, impact.y, impact.z) -
           Vector(camera.pos.x, camera.pos.y, camera.pos.z);
    reflected = math.calcReflectedVector(view, normal);
    reflectedObj = this->getClosestObj(reflected, Camera(impact));
    if (reflectedObj.second <= 0) {
        cuColor.mix(0x000000, obj->getReflectionIndex());
        return cuColor;
    }
    newColor = this->getReflectedColor(reflectedObj.first, Camera(impact),
                                       reflected, reflectedObj.second, pass + 1);
    // apply effects
    cuColor.mix(newColor, obj->getReflectionIndex());
    return newColor;
}
To calculate the normal and the reflected Vector:
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
    double cosAngle;
    Vector copyRay = ray;
    copyRay.makeUnit();
    cosAngle = copyRay.scale(normal);
    return (-2.0 * cosAngle * normal + copyRay);
}

Vector Math::calcNormalVector(Position pos, std::shared_ptr<SceneObj> obj,
                              Vector rayVec, double k, Position& impact) const {
    const Position &objPos = obj->getPosition();
    Vector normal;
    impact.x = pos.x + k * rayVec.x;
    impact.y = pos.y + k * rayVec.y;
    impact.z = pos.z + k * rayVec.z;
    obj->calcNormal(normal, impact);
    return normal;
}
[EDIT 1]
I have a new image; I removed the plane to keep only the spheres:
As you can see, there is blue and yellow on the border of the sphere.
Thanks to neam, I colored the spheres by applying the following formula:
newColor.r = reflected.x * 127.0 + 127.0;
newColor.g = reflected.y * 127.0 + 127.0;
newColor.b = reflected.z * 127.0 + 127.0;
Below is the visual result:
Ask me if you need any information.
Thanks in advance
There are many little things with the example you provided. This may or may not answer your question, but as I suppose you're writing a raytracer for learning purposes (either at school or in your free time), I'll give you some hints.
You have two classes, Vector and Position. It may well seem like a good idea, but why not treat the position as the translation vector from the origin? This would avoid some code duplication, I think (unless you've done something like using Position = Vector;). You may also want to look at libraries that do all the mathematical work for you (like glm). (That way you'll also avoid errors like naming your dot function scale().)
You create a camera from the position (that is a really strange thing). Reflections don't involve any camera. In a typical raytracer, you have one camera {position + direction + fov + ...}, and for each pixel of your image/reflections/refractions/... you cast rays {origin + direction} (thus the name raytracer, which isn't cameratracer). The Camera class is usually tied to the concept of a physical camera, with things like focal length, depth of field, aperture, chromatic aberration, ..., whereas a ray is simply... a ray (it could be a ray from the image plane to the first object, or a ray created by reflection, diffraction, scattering, ...).
And for the final point, I think your error may come from the Math::calcNormalVector(...) function. For a sphere at position P and an intersection point I, the normal N is N = normalize(I - P); see the sketch just below.
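A minimal sketch of that formula using the Vector and Position types from the question (calcSphereNormal is a hypothetical name; makeUnit() is assumed to normalize in place, as in the question's code):
// Sphere normal at intersection point 'impact' for a sphere centered at 'center':
// N = normalize(I - P)
Vector Math::calcSphereNormal(const Position& center, const Position& impact) const
{
    Vector n(impact.x - center.x,
             impact.y - center.y,
             impact.z - center.z);
    n.makeUnit(); // scale to unit length
    return n;
}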
EDIT: it seems your problem actually comes from Rt::getClosestObj. Everything else looks fine.
There's a ton of websites/blogs/educational content online about writing a simple raytracer, so for the first two points I'll let them teach you. Take a look at glm.
If you can't figure out what is wrong with calcNormalVector(...), please post its code :)
Did that work?
I assume that your ray and normal vector are already normalized.
Vector Math::reflect(const Vector &ray, const Vector &normal) const
{
    return ray - 2.0 * Math::dot(normal, ray) * normal;
}
Moreover, with the code you provided, I can't understand this call:
this->getClosestObj(reflected, Camera(obj->getPosition()));
Shouldn't it be something like this?
this->getClosestObj(reflected, Camera(impact));

Free Flight Camera - strange rotation around X-axis

So I have nearly implemented a free-flight camera using vectors and something like gluLookAt.
The movement in all four directions and the rotation around the Y-axis work fine.
For the rotation around the Y-axis, I calculate the vector between the eye and center vectors and then rotate it with the rotation matrix like this:
Vector temp = vecmath.vector(center.x() - eye.x(),
                             center.y() - eye.y(), center.z() - eye.z());
float vecX = (temp.x() * (float) Math.cos(-turnSpeed)) + (temp.z() * (float) Math.sin(-turnSpeed));
float vecY = temp.y();
float vecZ = (temp.x() * (float) -Math.sin(-turnSpeed)) + (temp.z() * (float) Math.cos(-turnSpeed));
center = vecmath.vector(vecX, vecY, vecZ);
At the end I just set center to the newly calculated vector.
Now when I try to do the same thing for the rotation around the X-axis, it DOES rotate the vector, but in a very strange way, kind of like it is moving along a wavy line.
I use the same logic as for the previous rotation, just with the X rotation matrix:
Vector temp = vecmath.vector(center.x() - eye.x(),
                             center.y() - eye.y(), center.z() - eye.z());
float vecX = temp.x();
float vecY = (temp.y() * (float) Math.cos(turnSpeed)) + (temp.z() * (float) -Math.sin(turnSpeed));
float vecZ = (temp.y() * (float) Math.sin(turnSpeed)) + (temp.z() * (float) Math.cos(turnSpeed));
center = vecmath.vector(vecX, vecY, vecZ);
But why does this not work? Or am I doing something else wrong somewhere?
The problem you're facing is the exact same one I had trouble with the first time I tried to implement camera movement. It occurs because if you first turn so that you are looking straight down the X-axis, and then try to "tilt" the camera by rotating around the X-axis, you will actually spin around the direction you are looking.
I find that the best way to handle camera movement is to accumulate the angles in separate variables and to rotate from the origin orientation each time. If you do this, you can first "tilt" by rotating around the X-axis and then turn by rotating around the Y-axis. Doing it in this order makes sure that the tilt is always around the correct axis relative to the camera. Something like this:
public void pan(float turnSpeed)
{
    totalPan += turnSpeed;
    updateOrientation();
}

public void tilt(float turnSpeed)
{
    totalTilt += turnSpeed;
    updateOrientation();
}

private void updateOrientation()
{
    float afterTiltX = 0.0f; // not used; only to make things clearer
    float afterTiltY = (float) Math.sin(totalTilt);
    float afterTiltZ = (float) Math.cos(totalTilt);

    float vecX = (float) Math.sin(totalPan) * afterTiltZ;
    float vecY = afterTiltY;
    float vecZ = (float) Math.cos(totalPan) * afterTiltZ;

    center = eye + vecmath.vector(vecX, vecY, vecZ);
}
I don't know if the syntax is completely correct; I haven't programmed in Java in a while.

Calculating vertices of a rotated rectangle

I am trying to calculate the vertices of a rotated rectangle (2D).
It's easy enough if the rectangle has not been rotated; I figured that part out.
If the rectangle has been rotated, I thought of two possible ways to calculate the vertices:
1. Figure out how to transform the vertices from local/object/model space (the ones I figured out below) to world space. I honestly have no clue, and if it is the best way, I feel like I would learn a lot from it if I could figure it out.
2. Use trig to somehow figure out where the endpoints of the rectangle are relative to the position of the rectangle in world space. This is the way I have been trying up until now; I just haven't figured it out.
Here's the function that calculates the vertices so far. Thanks for any help.
void Rect::calculateVertices()
{
    if (m_orientation == 0) // if no rotation
    {
        setVertices(
            &Vertex((m_position.x - (m_width / 2) * m_scaleX), (m_position.y + (m_height / 2) * m_scaleY), m_position.z),
            &Vertex((m_position.x + (m_width / 2) * m_scaleX), (m_position.y + (m_height / 2) * m_scaleY), m_position.z),
            &Vertex((m_position.x + (m_width / 2) * m_scaleX), (m_position.y - (m_height / 2) * m_scaleY), m_position.z),
            &Vertex((m_position.x - (m_width / 2) * m_scaleX), (m_position.y - (m_height / 2) * m_scaleY), m_position.z));
    }
    else
    {
        // if the rectangle has been rotated..
    }
    //GLfloat theta = RAD_TO_DEG( atan( ((m_width/2) * m_scaleX) / ((m_height / 2) * m_scaleY) ) );
    //LOG->writeLn(&theta);
}
I would just transform each point, applying the same rotation matrix to each one. For a 2D planar rotation, it would look like this:
x' = x*cos(t) - y*sin(t)
y' = x*sin(t) + y*cos(t)
where (x, y) are the original points, (x', y') are the rotated coordinates, and t is the angle measured in radians from the x-axis. The rotation is counter-clockwise as written.
My recommendation would be to do it out on paper once. Draw a rectangle, calculate the new coordinates, and redraw the rectangle to satisfy yourself that it's correct before you code. Then use this example as a unit test to ensure that you coded it properly.
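A minimal sketch of that approach applied to the rectangle from the question (Vertex2 and rotateRect are illustrative names; halfW and halfH correspond to (m_width / 2) * m_scaleX and (m_height / 2) * m_scaleY from the question's code):
#include <cmath>

struct Vertex2 { float x, y; }; // illustrative 2D vertex type

// Rotate the rectangle's four corners about its center (cx, cy) by angle t
// (radians, counter-clockwise), then translate them into world space.
void rotateRect(float cx, float cy, float halfW, float halfH,
                float t, Vertex2 (&out)[4])
{
    const Vertex2 local[4] = { { -halfW,  halfH }, {  halfW,  halfH },
                               {  halfW, -halfH }, { -halfW, -halfH } };
    const float c = std::cos(t), s = std::sin(t);
    for (int i = 0; i < 4; ++i) {
        out[i].x = cx + local[i].x * c - local[i].y * s; // x' = x*cos(t) - y*sin(t)
        out[i].y = cy + local[i].x * s + local[i].y * c; // y' = x*sin(t) + y*cos(t)
    }
}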
I think you were on the right track using atan() to return an angle. However, you want to pass height divided by width instead of the other way around. That will give you the default (unrotated) angle to the upper-right vertex of the rectangle. You should be able to do the rest like this:
// Get the original/default vertex angles
GLfloat vertex1_theta = RAD_TO_DEG( atan(
    (m_height / 2 * m_scaleY)
    / (m_width / 2 * m_scaleX) ) );
GLfloat vertex2_theta = -vertex1_theta;      // lower right vertex
GLfloat vertex3_theta = vertex1_theta - 180; // lower left vertex
GLfloat vertex4_theta = 180 - vertex1_theta; // upper left vertex

// Now get the rotated vertex angles
vertex1_theta += rotation_angle;
vertex2_theta += rotation_angle;
vertex3_theta += rotation_angle;
vertex4_theta += rotation_angle;

// Calculate the distance from the center (same for each vertex)
GLfloat r = sqrt(pow(m_width / 2 * m_scaleX, 2) + pow(m_height / 2 * m_scaleY, 2));

/* Calculate each vertex (I'm not familiar with OpenGL; DEG_TO_RAD
 * might be a constant instead of a macro)
 */
vertexN_x = m_position.x + cos(DEG_TO_RAD(vertexN_theta)) * r;
vertexN_y = m_position.y + sin(DEG_TO_RAD(vertexN_theta)) * r;

// Now you would draw the rectangle, proceeding from vertex1 to vertex4.
This is obviously more long-winded than necessary, for the sake of clarity. Of course, duffymo's solution using a transformation matrix is probably more elegant and efficient :)
EDIT: My code should now actually work. I changed (width / height) to (height / width) and used a constant radius from the center of the rectangle to calculate the vertices. Working Python (turtle) code at http://pastebin.com/f1c76308c