I am dealing with some positions of objects in Cocos2dx but this question can apply to virtually every situation in which a smooth start and stop is necessary.
Here's what I am looking for:
Given an origin position at x = 0 and a final position of x = 8, I want to start slowly, speed up the further I get from the start, and then slow down again as I reach the end. Is there a smoothing algorithm for this?
There are lots of algorithms for this. One idea is to set up a linear interpolation:
x(t) = (1.0 - t) * x0 + t * x1;
If you feed evenly spaced values of t from 0.0 to 1.0, you'll get a smooth, linear animation.
If you want a slow start and a slow end, you can use t = sin(theta)/2.0 + 0.5 for theta running from -pi/2 to pi/2; this keeps t in [0.0, 1.0] while bunching the samples near both ends.
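Putting the two ideas together, a minimal C++ sketch (lerp() and easeInOut() are hypothetical helper names, not Cocos2dx API):

#include <cmath>

float lerp(float x0, float x1, float t)   // linear interpolation, t in [0, 1]
{
    return (1.0f - t) * x0 + t * x1;
}

float easeInOut(float u)                  // maps u in [0, 1] to an eased t in [0, 1]
{
    const float PI = 3.14159265f;
    float theta = -PI / 2.0f + PI * u;    // theta sweeps -pi/2 .. pi/2
    return std::sin(theta) / 2.0f + 0.5f; // slow start, slow end
}

// per frame: x = lerp(0.0f, 8.0f, easeInOut(elapsed / duration));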
A second-order smooth path has constant acceleration during the first half, then constant deceleration during the second part.
This means you accelerate from x=0 to x=4. The formula for the first half is x(t) = a*t*t, so your choice of acceleration a directly determines the time needed. If you decelerate at the same rate, you arrive at x=8 after twice that time. The formula for the second half is the mirror image, x(t) = 8 - a*(T - t)*(T - t), where T = 2*sqrt(4/a) is the total travel time; the halfway point in time is t = sqrt(4/a).
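A minimal C++ sketch of that profile, hard-coded to the 0-to-8 example (positionAt() is my own name; note that with the x(t) = a*t*t convention above, a is half the physical acceleration):

#include <cmath>

float positionAt(float t, float a)     // a > 0 picks how aggressive the motion is
{
    float tHalf = std::sqrt(4.0f / a); // time at which x reaches 4
    float T = 2.0f * tHalf;            // total travel time
    if (t <= tHalf)
        return a * t * t;              // accelerating half
    if (t >= T)
        return 8.0f;                   // arrived: clamp at the target
    float r = T - t;                   // time remaining
    return 8.0f - a * r * r;           // decelerating half, mirror image
}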
I'm currently building a simplified Reaction Control System for a satellite game, and need a way to use the system to align the satellite to a given unit direction in world-space coordinates. Because this is a game simulation, I am faking the system and just applying a torque around the object's center of mass.
This is difficult because in my case the torque cannot be varied in strength: it is either on or off, full force or no force. Calculating the direction that the torque needs to be applied in is relatively easy, but I'm having trouble getting the satellite to align perfectly without spinning out of control and getting stuck in a logical loop. It needs to apply the opposing force at precisely the right time to land on the target orientation with zero angular velocity.
What I've determined so far is that I need to calculate the 'time' it will take to reach zero velocity based on my current angular velocity and the angle between the two vectors. If that exceeds the time until I reach angle zero, then it needs to apply the opposing torque. In theory this will also prevent it from 'bouncing' around the axis too much. I almost have it working, but in some cases it seems to get stuck applying force in one direction, so I'm hoping somebody can check the logic. My simulation does NOT take mass into account at the moment, so you can ignore the Inertia Tensor (unless it makes the calculation easier!)
For one axis, I'm currently doing it this way, but I figure someone will have a far more elegant solution that can actually compute both Yaw and Pitch axes at once (Roll is invalid).
Omega = Angular Velocity in Local-Space (Degrees Per Second)
Force = Strength of the Thrusters
// Calculate Time Variables
float Angle = AcosD(DotProduct(ForwardVector, DirectionVector));
float Time1 = Abs(Angle / Omega.Z); // Time taken to reach angle 0 at current velocity
float Time2 = Abs(DeltaTime * (Omega.Z / Force)); // Time it will take to reach zero velocity based on force strength.
// Calculate Direction we need to apply the force to rotate toward the target direction. Note that if we are at perfect opposites, this will be zero!
float AngleSign = Sign(DotProduct(RightVector, DirectionVector));
Torque.Z = 0;
if (Time1 < Time2)
{
Torque.Z = AngleSign * Force;
}
else
{
Torque.Z = AngleSign * Force * -1.0f;
}
// Torque is applied to object as a change in acceleration (no mass) and modified by DeltaSeconds for frame-rate independent force.
This is far from elegant and there are definitely some sign issues. Do you folks know a better way to achieve this?
EDIT:
If anybody understands Unreal Engine's Blueprint system, this is how I'm currently prototyping it before I move it to C++
Beginning from the "Calculate Direction" line, you could instead directly compute the correction torque vector in 3D, then modify its sign if you know that the previous correction is about to overshoot:
// Calculate Direction we need to apply the force to rotate toward the target direction
Torque = CrossProduct(DirectionVector, ForwardVector)
Torque = Normalize(Torque) * Force
if (Time2 < Time1)
{
Torque = -Torque
}
But you should handle the problematic cases:
// Calculate Direction we need to apply the force to rotate toward the target direction
Torque = CrossProduct(DirectionVector, ForwardVector)
if (Angle < 0.1 degrees)
{
// Avoid divide by zero in Normalize
Torque = {0, 0, 0}
}
else
{
// Handle case of exactly opposite direction (where CrossProduct is zero)
if (Angle > 179.9 degrees)
{
Torque = {0, 0, 1}
}
Torque = Normalize(Torque) * Force
if (Time2 < Time1)
{
Torque = -Torque
}
}
Okay, what I take from the pseudocode above is that you want to start braking when the time needed to brake exceeds the time left until angle 0 is reached. Have you tried slowly starting to brake (in short steps, because of the constant torque) BEFORE the time to brake exceeds the time until angle 0?
When you do so, and your satellite is near angle 0 with a very low velocity, you can just set the velocity and angle to 0 so it doesn't wobble around anymore.
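That snap-to-rest deadband might look like this, in the same pseudocode style as the question (AngleEpsilon and OmegaEpsilon are thresholds of my own choosing):

if (Abs(Angle) < AngleEpsilon && Abs(Omega.Z) < OmegaEpsilon)
{
    Omega.Z = 0;  // kill the residual spin
    Torque.Z = 0; // stop thrusting
    // optionally hard-set the orientation to the target here
}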
Did you ever figure this out? I'm working on a similar problem in UE4. I also have a constant force, and I'm rotating to a new forward vector. I've realized the time can't be predicted. Say you're rotating on the Z axis at 100 degrees/second, and a reverse force applied for exactly .015 seconds would nail your desired rotation and velocity, but the next frame takes .016 seconds to render: you've just overshot it, since you aren't changing your force. I think the solution is something like cheating by manually setting the forward vector once velocity is zeroed out.
In GLSL (specifically 3.00, which I'm using), there are two versions of atan(): atan(y_over_x) can only return angles between -PI/2 and PI/2, while atan(y, x) can take all four quadrants into account, so the angle range covers everything from -PI to PI, much like atan2() in C++.
I would like to use the second atan to convert XY coordinates to angle.
However, atan() in GLSL, besides not being able to handle x = 0, is not very stable. Especially where x is close to zero, the division can overflow, resulting in an angle on the opposite side (you get something close to -PI/2 where you were supposed to get approximately PI/2).
What is a good, simple implementation that we can build on top of GLSL atan(y,x) to make it more robust?
I'm going to answer my own question to share my knowledge. We first notice that the instability happens when x is near zero. However, we can also state that condition as abs(x) << abs(y). So we divide the plane (assuming we are on a unit circle) into two regions: one where |x| <= |y| and another where |x| > |y|.
We know that atan(x,y) is much more stable in the first region (|x| <= |y|): when x is close to zero we simply have something close to atan(0.0), which is very stable numerically, while the usual atan(y,x) is more stable in the second region (|x| > |y|). You can also convince yourself that this relationship:
atan(x,y) = PI/2 - atan(y,x)
holds (modulo 2*PI) for all (x, y) except the origin, where it is undefined. Here we are talking about the atan(y,x) that can return angle values in the entire range -PI to PI, not atan(y_over_x), which only returns angles between -PI/2 and PI/2. Therefore, our robust atan2() routine for GLSL is quite simple:
float atan2(in float y, in float x)
{
bool s = (abs(x) > abs(y));
return mix(PI/2.0 - atan(x,y), atan(y,x), s);
}
As a side note, the identity for the mathematical function atan(x) is actually:
atan(x) + atan(1/x) = sgn(x) * PI/2
which is true because its range is (-PI/2, PI/2).
Depending on your targeted platform, this might be a solved problem. The OpenGL spec for atan(y, x) specifies that it should work in all quadrants, leaving behavior undefined only when x and y are both 0.
So one would expect any decent implementation to be stable near all axes, as this is the whole purpose behind 2-argument atan (or atan2).
The questioner/answerer is correct that some implementations do take shortcuts. However, the accepted solution assumes that a bad implementation will always be unstable when x is near zero; on some hardware (my Galaxy S4, for example) the value is stable when x is near zero but unstable when y is near zero.
To test your GLSL renderer's implementation of atan(y,x), here's a WebGL test pattern. Follow the link below; as long as your OpenGL implementation is decent, the pattern should render cleanly.
Test pattern using native atan(y,x): http://glslsandbox.com/e#26563.2
If all is well, you should see 8 distinct colors (ignoring the center).
The linked demo samples atan(y,x) for several values of x and y, including 0, very large, and very small values. The central box is atan(0.,0.)--undefined mathematically, and implementations vary. I've seen 0 (red), PI/2 (green), and NaN (black) on hardware I've tested.
Here's a test page for the accepted solution. Note: the host's WebGL version lacks mix(float,float,bool), so I added an implementation that matches the spec.
Test pattern using atan2(y,x) from accepted answer: http://glslsandbox.com/e#26666.0
Your proposed solution still fails in the case x=y=0. Here both of the atan() functions return NaN.
Further, I would not rely on mix to switch between the two cases. I am not sure how it is implemented/compiled, but IEEE float rules for x*NaN and x+NaN yield NaN again. So if your compiler really implements mix as interpolation, the result will be NaN for x=0 or y=0.
Here is another fix which solved the problem for me:
float atan2(in float y, in float x)
{
return x == 0.0 ? sign(y)*PI/2 : atan(y, x);
}
When x=0 the angle can be ±PI/2; which of the two depends on y only. If y=0 too, the angle can be arbitrary (the vector has length 0), and sign(y) returns 0 in that case, which is just fine.
Sometimes the best way to improve the performance of a piece of code is to avoid calling it in the first place. For example, one of the reasons you might want to determine the angle of a vector is so that you can use this angle to construct a rotation matrix using combinations of the angle's sine and cosine. However, the sine and cosine of a vector's angle (relative to the origin) are already hidden in plain sight inside the vector itself. All you need to do is create a normalized version of the vector by dividing each coordinate by the vector's total length. Here's the two-dimensional example, calculating the sine and cosine of the angle of vector [ x y ]:
double length = sqrt(x*x + y*y); // vector magnitude (assumes a non-zero vector)
double cosA = x / length;        // cosine of the vector's angle
double sinA = y / length;        // sine of the vector's angle
Once you have the sine and cosine values, you can directly populate a rotation matrix with them to perform a clockwise or counterclockwise rotation of arbitrary vectors by that same angle, or you can concatenate a second rotation matrix to rotate to an angle other than zero. In this case, you can think of the rotation matrix as "normalizing" the angle of an arbitrary vector to zero. This approach extends to the three-dimensional (or N-dimensional) case as well, although for 3D rotation you will have three angles, and therefore three sine/cosine pairs, to calculate (one angle per rotation plane).
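As a sketch of that idea in C++ (types and names here are illustrative only):

#include <cmath>

struct Mat2 { double m[2][2]; };

// builds a counterclockwise rotation by the angle of (x, y),
// without ever computing the angle itself; assumes (x, y) != (0, 0)
Mat2 rotationFromVector(double x, double y)
{
    double length = std::sqrt(x * x + y * y);
    double c = x / length; // cosine of the vector's angle
    double s = y / length; // sine of the vector's angle
    return Mat2{{{ c, -s },
                 { s,  c }}};
}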
In situations where you can use this approach, you get a big win by bypassing the atan calculation completely, which is possible since the only reason you wanted to determine the angle was to calculate the sine and cosine values. By skipping the conversion to angle space and back, you not only avoid worrying about division by zero, but you also improve precision for angles which are near the poles and would otherwise suffer from being multiplied/divided by large numbers. I've successfully used this approach in a GLSL program which rotates a scene to zero degrees to simplify a computation.
It can be easy to get so caught up in an immediate problem that you can lose sight of why you need this information in the first place. Not that this works in every case, but sometimes it helps to think out of the box...
A formula that gives an angle in the four quadrants for any value of coordinates x and y. For x = y = 0 the result is undefined.
f(x,y) = PI - PI/2*(1 + sign(x))*(1 - sign(y^2)) - PI/4*(2 + sign(x))*sign(y)
         - sign(x*y)*atan((abs(x) - abs(y))/(abs(x) + abs(y)))
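Transcribed into C++ so it can be tested directly (sgn() is a small helper of my own; note that the result appears to land in [0, 2*PI) rather than (-PI, PI]):

#include <cmath>

double sgn(double v) { return (v > 0) - (v < 0); }

double fourQuadrantAngle(double x, double y) // undefined for x = y = 0
{
    const double PI = 3.14159265358979323846;
    return PI
         - PI / 2 * (1 + sgn(x)) * (1 - sgn(y * y))
         - PI / 4 * (2 + sgn(x)) * sgn(y)
         - sgn(x * y) * std::atan((std::fabs(x) - std::fabs(y))
                                / (std::fabs(x) + std::fabs(y)));
}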
I want to limit the maximum speed a body can travel at.
The problem is, even if I do something like this answer suggests:
/* after applying forces from input for example */
b2Vec2 vel = body->GetLinearVelocity();
float speed = vel.Normalize(); // normalizes the vector and returns its length
if ( speed > maxSpeed )
body->SetLinearVelocity( maxSpeed * vel );
What if, for example, right before clamping the velocity I am applying some huge force to the body?
Even if linear velocity is capped to maxSpeed for the moment, in the next timestep Box2D will take the b2Body::m_force value into account and effectively move my body faster than maxSpeed.
So I came up with this (had to move b2Body::m_force to public):
if ( speed > maxSpeed ) {
body->SetLinearVelocity( maxSpeed * vel );
body->m_force = b2Vec2(0, 0);
}
Yet this still doesn't handle the problem properly.
What if the velocity is slightly smaller than maxSpeed, so the condition is not hit, but the m_force value is still big enough to increase the velocity too much?
The point is that I can't accurately predict how a force will impact the velocity, because I am stepping with a delta accumulator and I don't know in advance how many physics steps a given frame will require.
Is there any way to handle this other than just to limit the velocity directly before integrating position in Box2D source code?
My first attempt at solving this was to execute the above pieces of code not once per game loop, but once per physics substep; that is, if my delta accumulator tells me I have to perform n b2World::Step's, I also cap the velocity n times:
// source code taken from the above link and modified for my purposes
for (int i = 0; i < nStepsClamped; ++i)
{
resetSmoothStates_ ();
// here I execute whole systems that apply accelerations, drag forces and limit maximum velocities
// ...
if ( speed > maxSpeed )
body->SetLinearVelocity( maxSpeed * vel );
// ...
singleStep_ (FIXED_TIMESTEP);
// NOTE I'M CLEARING FORCES EVERY SUBSTEP to avoid excessive accumulation
world_->ClearForces ();
}
Now while this gives me constant speed regardless of frame rate (which was my primary concern, as my movement was jittery), it is not always <= maxSpeed. Same scenario as before: imagine a huge force applied just before capping the velocity and executing b2World::Step.
Now, I could simply calculate the actual force to apply based on the current velocity, since I know the force will be applied only once before the next validation, but there's another simple solution that I've already mentioned and eventually stuck with:
Go to Box2D\Dynamics\b2Body.h
Add a public float32 m_max_speed member and initialize it with -1.f, so that initially no body's velocity is limited.
Go to Box2D\Dynamics\b2Island.cpp.
Locate line number 222.
Add following if condition:
m_positions[i].c = c;
m_positions[i].a = a;
if (b->m_max_speed >= 0.f) {
    float32 speed = v.Normalize(); // scales v to unit length, returns the old length
    if (speed > b->m_max_speed)
        v *= b->m_max_speed;       // clamp to the limit
    else
        v *= speed;                // restore the original velocity
}
m_velocities[i].v = v;
m_velocities[i].w = w;
This will work even without substepping that I've described above but keep in mind that if you were to simulate air resistance, applying drag force every substep guarantees correctness of the simulation even with varying framerates.
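With the patch in place, usage from game code is then just (a sketch; m_max_speed is the member added in step 2, -1.f leaves a body unrestricted, and world is your b2World):

b2Body* body = world.CreateBody(&bodyDef);
body->m_max_speed = 5.0f; // clamp applied inside b2Island::Solve each substep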
First, decide for yourself what can apply force to a body. Box2D itself can affect bodies via contacts and gravity. Contacts use impulses, not forces; to manage them, set up a contact listener and modify normalImpulses and tangentImpulses. Gravity, I think, can't affect a body much, but it can also be controlled via b2BodyDef::gravityScale.
If your own code applies manual forces, it may be useful to introduce a proxy interface to manage them.
I can't see an easy way beyond that, because at each step Box2D runs several velocity and position iterations, so forces and impulses applied at the beginning of the step change the resulting position accordingly.
I can't see a way to strictly limit velocity without hacking the Box2D source code. By the way, I don't think that's a bad option. For example, insert a restriction on the v and w variables in Dynamics/b2Island.cpp:219 (b2Island::Solve).
I'm making an API for skeletal animation. Right now it works fine, except for one thing: let's say you want to go from an angle of 2.0f to 1.0f. The bone ends up doing almost a full circle when it should only do about 1/6th of one.
I think I've got a way to find out whether it should go counterclockwise, but I'm not sure how to use it with this:
bool CCW = fmod( (endKeyFrame->getAngle() -
startKeyFrame->getAngle() + TWO_PI), TWO_PI) > 3.141592;
remainingInterpolationFrames = endKeyFrame->getFrame() - startKeyFrame->getFrame();
//Linear interpolation
curIncreaseAngle = (endKeyFrame->getAngle() -
startKeyFrame->getAngle()) / remainingInterpolationFrames;
Thanks
I think this may help. Especially sections 8,9 and 30.
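For anyone landing here later, the usual fix is to wrap the angle difference into (-PI, PI] before dividing by the frame count, so the interpolation always takes the short way around. A sketch in C++ (shortestDelta() is my own helper; the keyframe calls mirror the question's code):

#include <cmath>

// wraps the difference (to - from) into (-PI, PI]
float shortestDelta(float from, float to)
{
    const float PI = 3.1415927f;
    const float TWO_PI = 6.2831853f;
    float d = std::fmod(to - from, TWO_PI); // raw difference, in (-TWO_PI, TWO_PI)
    if (d > PI)   d -= TWO_PI; // longer than half a turn: go the other way
    if (d <= -PI) d += TWO_PI;
    return d;
}

// curIncreaseAngle = shortestDelta(startKeyFrame->getAngle(),
//                                  endKeyFrame->getAngle())
//                    / remainingInterpolationFrames;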
I'm making a software rasterizer for school, and I'm using an unusual rendering method instead of traditional matrix calculations. It's based on a pinhole camera. I have a few points in 3D space, and I convert them to 2D screen coordinates by taking the vector between each point and the camera and normalizing it:
Vec3 ray_to_camera = (a_Point - plane_pos).Normalize();
This gives me a directional vector towards the camera. I then turn that direction into a ray by placing the ray's origin at the camera and performing a ray-plane intersection with a plane slightly behind the camera.
Vec3 plane_pos = m_Position + (m_Direction * m_ScreenDistance);
float dot = ray_to_camera.GetDotProduct(m_Direction);
if (dot < 0)
{
float time = (-m_ScreenDistance - plane_pos.GetDotProduct(m_Direction)) / dot;
// if time is smaller than 0 the ray is either parallel to the plane or misses it
if (time >= 0)
{
// retrieving the actual intersection point
a_Point -= (m_Direction * ((a_Point - plane_pos).GetDotProduct(m_Direction)));
// subtracting the plane origin from the intersection point
// puts the point at world origin (0, 0, 0)
Vec3 sub = a_Point - plane_pos;
// the axes are calculated by saying the directional vector of the camera
// is the new z axis
projected.x = sub.GetDotProduct(m_Axis[0]);
projected.y = sub.GetDotProduct(m_Axis[1]);
}
}
This works wonderfully, but I'm wondering: can the algorithm be made any faster? Right now, for every triangle in the scene, I have to calculate three normals:
float length = 1 / sqrtf(GetSquaredLength());
x *= length;
y *= length;
z *= length;
Even with a fast reciprocal square root approximation (1 / sqrt(x)) that's going to be very demanding.
My questions are thus:
Is there a good way to approximate the three normals?
What is this rendering technique called?
Can the three vertex points be approximated using the normal of the centroid? ((v0 + v1 + v2) / 3)
Thanks in advance.
P.S. "You will build a fully functional software rasterizer in the next seven weeks with the help of an expert in this field. Begin." I ADORE my education. :)
EDIT:
Vec2 projected;
// the plane is behind the camera
Vec3 plane_pos = m_Position + (m_Direction * m_ScreenDistance);
float scale = m_ScreenDistance / (m_Position - plane_pos).GetSquaredLength();
// times -100 because of the squared length instead of the length
// (which would involve a square root)
projected.x = a_Point.GetDotProduct(m_Axis[0]) * scale * -100;
projected.y = a_Point.GetDotProduct(m_Axis[1]) * scale * -100;
return projected;
This returns the correct results, however the model is now independent of the camera position. :(
It's a lot shorter and faster though!
This is called a ray-tracer - a rather typical assignment for a first computer graphics course* - and you can find a lot of interesting implementation details in the classic Foley/van Dam textbook (Computer Graphics: Principles and Practice). I strongly suggest you buy/borrow this textbook and read it carefully.
*Just wait until you get started on reflections and refraction... Now the fun begins!
It is difficult to understand exactly what your code is doing, because it seems to perform a lot of redundant operations! However, if I understand what you say you're trying to do, you are:
finding the vector from the pinhole to the point
normalizing it
projecting backwards along the normalized vector to an "image plane" (behind the pinhole, natch!)
finding the vector to this point from a central point on the image plane
doing dot products on the result with "axis" vectors to find the x and y screen coordinates
If the above description represents your intentions, then the normalization should be redundant -- you shouldn't have to do it at all! If removing the normalization gives you bad results, you are probably doing something slightly different from your stated plan... in other words, it seems likely that you have confused yourself along with me, and that the normalization step is "fixing" it to the extent that it looks good enough in your test cases, even though it probably still isn't doing quite what you want it to.
The overall problem, I think, is that your code is massively overengineered: you are writing all your high-level vector algebra as code to be executed in the inner loop. The way to optimize this is to work out all your vector algebra on paper, find the simplest expression possible for your inner loop, and precompute all the necessary constants for this at camera setup time. The pinhole camera specs would only be the inputs to the camera setup routine.
Unfortunately, unless I miss my guess, this should reduce your pinhole camera to the traditional, boring old matrix calculations. (ray tracing does make it easy to do cool nonstandard camera stuff -- but what you describe should end up perfectly standard...)
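To make "precompute everything at camera setup time" concrete, here is one hypothetical shape for it, reusing the question's Vec3/Vec2 names (the Vec2 constructor and the member layout are my assumptions, not the asker's actual API):

struct PinholeCamera
{
    Vec3  m_Position;       // pinhole location
    Vec3  m_Axis[3];        // right, up, forward -- an orthonormal basis
    float m_ScreenDistance; // pinhole-to-image-plane distance

    // projects a world-space point; assumes p lies in front of the camera
    Vec2 Project(const Vec3& p) const
    {
        Vec3  rel = p - m_Position;             // move into camera space
        float z = rel.GetDotProduct(m_Axis[2]); // depth along the view axis
        float s = m_ScreenDistance / z;         // the perspective divide
        return Vec2(rel.GetDotProduct(m_Axis[0]) * s,  // screen x
                    rel.GetDotProduct(m_Axis[1]) * s); // screen y
    }
};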
Your code is a little unclear to me (what is plane_pos?), but it does seem that you could cut out some unnecessary calculation.
Instead of normalizing the ray (scaling it to length 1), why not scale it so that the z component is equal to the distance from the camera to the plane? In fact, scale x and y by this factor; you don't need z.
float scale = distance_to_plane / z; // z = the point's depth along the camera direction
x *= scale;
y *= scale;
This will give the x and y coordinates on the plane, no sqrt(), no dot products.
Well, off the bat, you can calculate the normals for every triangle when your program starts up. Then, when you're actually running, you just have to access the precomputed normals. This sort of startup calculation to save costs later happens a lot in graphics; it's why we have long loading screens in a lot of our video games!
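A sketch of what that startup pass might look like, assuming a hypothetical Triangle/mesh layout and a GetCrossProduct() helper in the same style as the question's vector class:

// one-time, at load: cache a unit face normal per triangle
for (Triangle& tri : mesh.triangles)
{
    Vec3 e1 = tri.v1 - tri.v0; // two edges of the triangle
    Vec3 e2 = tri.v2 - tri.v0;
    tri.normal = e1.GetCrossProduct(e2); // face normal via cross product
    tri.normal.Normalize();              // one sqrt per triangle, paid once
}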