I'm curious if there's a way to make this function:
float linearInterpolation(float startPoint, float endPoint, float time)
{
    return startPoint + ((endPoint - startPoint) * time);
}
More linear: right now, when the start point nears its endpoint, it goes slower. I just want it to move at the same speed all the way through, with no slowing down or speeding up. If I need to implement another variable or something, that can be done. Another function that takes the same variables and outputs the next value would also be acceptable.
That looks like a correct implementation of lerp to me. Without seeing the code you are calling it with, I'm going to guess that the reason it slows down is that you are calling it with a different startPoint each time (frame?), which makes it decelerate because the distance left to interpolate shrinks on each call. Calling it with the previous result of linearInterpolation as the new startPoint would cause exactly this.
Make sure the startPoint and endPoint variables are the same for each call over the life of the interpolation and only the time variable is increasing.
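A minimal sketch of the intended usage, with illustrative values: the endpoints stay fixed for the whole interpolation and only the time parameter advances.

// Correct usage: start and end are fixed; only t advances from 0 to 1.
float start = 0.0f, end = 100.0f;
for (float t = 0.0f; t <= 1.0f; t += 0.1f)
{
    float pos = linearInterpolation(start, end, t); // constant speed
    // render/update with pos here
}
// By contrast, feeding each result back in as the new start point
// (start = pos) shrinks the remaining distance on every call and
// produces exactly the ease-out slowdown described in the question.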
I wrote a program that simulates perfectly elastic collisions in 1D using an event-driven algorithm. The problem is something akin to Newton's cradle.
I was trying to fix something I perceive to be an issue, which is that when I give two spheres an equal initial velocity, the position and velocity arrays are updated twice due to simultaneous collisions (and thus I get a double output for the same instant).
To this effect, I created a variable to check whether the next collision happens within 0 seconds (i.e., "now"). However, when I do that, the time until the collision changes completely for those simultaneous collisions.
There are 5 particles in total, and each one has a radius of 1, with a separation of 0.5 between them. Collisions are perfectly elastic, gravity is disregarded, and the algorithm stops when the final particle hits a wall placed arbitrarily in front of it.
Initial velocity for the first two particles is 1, so the first collision should occur after 0.5, and the second and third collisions should occur simultaneously after another 0.5.
Before adding the variable to check whether or not the time until collision is 0, the time until collision was output as 6.94906e-310 (verified by printing the return value of the function that calculates the time until collision).
With the new variable that was going to be used to check whether the previous value was zero, the time until collision during a simultaneous collision is now output as -1, both in the new variable and in the return value of the aforementioned function.
I'm guessing this has something to do with the fact that it's an extremely small double value, but I don't quite understand the problem. How could creating one variable affect the value of another (albeit somewhat related) variable to this extent?
The code below is just to help visualize the problem. It is not a MWE. I'd have to post almost my entire code here to produce a MWE. Because this is for a class, I could get plagiarized or be accused of plagiarism myself, so I don't feel comfortable posting it.
Again, not a MWE, just to better explain what I'm doing.
//x and v are arrays with the positions and
//velocities, respectively, of each particle.
//hasReachedWall(x) checks if the last particle has hit the wall.
while (!hasReachedWall(x)) {
    //The return value of updatePos(x, v) is a double with the time
    //until the next collision.
    //aux is the cause of the problem. Its existence somehow changes
    //the return value into -1 when it should be near-zero. Without
    //this variable, the return value is as expected (near-zero).
    double aux = updatePos(x, v);
    cout << aux << endl;
    //t is also a double
    t += aux;
}
EDIT: I am aware of how doubles are stored internally and that operations performed on them have errors. My problem is that the mere creation of an intermediary variable completely changes the result of an operation.
I'm creating a double to store the return value of a function (another double - not long double, just double), and the return value of the function changes radically. I don't understand why.
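For reference, a "the next collision happens now" test is normally written against a small tolerance rather than exact zero, since near-zero doubles rarely compare equal to 0. Below is a hedged sketch of that kind of check; the epsilon value is illustrative and not taken from the original code.

#include <cmath>

// Illustrative tolerance; tune it to the time scale of the simulation.
const double EPS = 1e-9;

// True if the next collision is numerically simultaneous with the
// current instant, i.e. its time-to-collision is indistinguishable from 0.
bool collisionIsNow(double timeToCollision)
{
    return std::fabs(timeToCollision) < EPS;
}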
I'm trying to make an Xbox controller move the cursor. Is there any way I can use the SetCursorPos() function and increment by two doubles instead of two ints? The problem is that an increment of 1 is still too fast a change.
SetCursorPos() doesn't increment the cursor; it moves it to a new absolute x/y location. See the documentation for a description. And no, you can't call it with float params; it takes int params.
You did not provide any code, so commenting on other ways to do it is impossible. If you are incrementing the location, e.g.
x = x + 1;
y = y + 1;
SetCursorPos(x, y);
then to make it move slower you can simply add a delay between successive calls to SetCursorPos().
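Another common workaround, not from the answer above but consistent with it, is to keep the position in doubles so that sub-unit increments accumulate, and only convert to int at the API boundary. A hedged sketch with illustrative names:

#include <windows.h>
#include <cmath>

// Cursor position kept as doubles so fractional stick input accumulates.
double cx = 100.0, cy = 100.0;

void NudgeCursor(double dx, double dy)
{
    cx += dx; // dx/dy can be well below 1.0, e.g. 0.25 per poll
    cy += dy;
    // SetCursorPos() itself still takes ints, so round only here.
    SetCursorPos(static_cast<int>(std::lround(cx)),
                 static_cast<int>(std::lround(cy)));
}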
For reliable input injection you should be using SendInput instead of SetCursorPos. This ensures that the system performs the entire input-processing pipeline, keeping applications happy. Setting the MOUSEEVENTF_ABSOLUTE flag allows you to pass normalized coordinates in the range [0..65535]. In the vast majority of cases this provides a higher resolution than the display device; essentially, it allows you to use sub-pixel precision.
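A minimal sketch of that approach, assuming a single primary monitor (a multi-monitor setup would also need MOUSEEVENTF_VIRTUALDESK and the virtual-screen metrics):

#include <windows.h>

// Move the cursor to (x, y), given in pixels, by injecting a normalized
// absolute move through SendInput. The caller can keep x and y as doubles;
// precision is only lost in the final normalization.
void MoveCursorAbsolute(double x, double y)
{
    INPUT input = {};
    input.type = INPUT_MOUSE;
    // Map pixel coordinates onto the normalized [0..65535] range.
    input.mi.dx = static_cast<LONG>(x * 65535.0 / GetSystemMetrics(SM_CXSCREEN));
    input.mi.dy = static_cast<LONG>(y * 65535.0 / GetSystemMetrics(SM_CYSCREEN));
    input.mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;
    SendInput(1, &input, sizeof(INPUT));
}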
I am working on a very time-consuming application and I want to speed it up a little. I analyzed the runtime of individual parts using the clock() function of the ctime library and found something that is not totally clear to me.
I have time prints outside and inside of a method; let's call it Method1. The print inside Method1 covers the whole body of it; only the return of a float is excluded, of course. The thing is that the print outside reports two to three times the time of the print inside Method1. It's obvious that the print outside should report more time, but the difference seems quite big to me.
My method looks as follows; I am using references and pointers as parameters to prevent copying of data. Note that the data vector holds 330,000 pointers to instances.
float ClassA::Method1(vector<DataClass*>& data, TreeClass* node)
{
    //start time measurement
    vector<Mat> offset_vec_1 = vector<Mat>();
    vector<Mat> offset_vec_2 = vector<Mat>();
    for (size_t i = 0; i < data.size(); i++)
    {
        DataClass* cur_data = data.at(i);
        Mat offset1 = Mat();
        Mat offset2 = Mat();
        getChildParentOffsets(cur_data, node, offset1, offset2);
        offset_vec_1.push_back(offset1);
        offset_vec_2.push_back(offset2);
    }
    float ret = CalculateCovarReturnTrace(offset_vec_1) + CalculateCovarReturnTrace(offset_vec_2);
    //end time measurement
    return ret;
}
Is there any "obvious" way to increase the call speed? I would prefer to keep the method as it is for readability reasons; is there anything I can change to gain a speed-up?
I appreciate any suggestions!
Based on your updated code, the only code between the end-time measurement and the measurement after the function call is the destructors for the objects constructed in the function: the two vectors of 330,000 Mats each. Destroying those will likely take some time, depending on the resources used by each of those Mats.
Without trying to lay claim to any of the comments made by others to the OP ...
(1) The short answer might well be "no." This function appears to be quite clear, and it's doing a lot of work 330,000 times. Then, it's doing a calculation over "all that data."
(2) Consider re-using the "offset1" and "offset2" matrices instead of creating entirely new ones for each iteration (see the sketch after this answer). It remains to be seen, of course, whether this would actually be faster. (And in any case, see below: it amounts to "diddling the code.")
(3) Therefore, borrowing from The Elements of Programming Style: "Don't 'diddle' code to make it faster: find a better algorithm." And in this case, there just might not be one. You might need to address the runtime issue by "throwing silicon at it," and I'd suggest that the first thing to do would be to add as much RAM as possible to this computer. A process that deals with a lot of data is very exposed to virtual-memory page faults, each of which requires on the order of milliseconds to resolve. (Those hundredths of a second add up fast.)
I personally do not see anything categorically wrong with this code, nor anything that would categorically cause it to run faster. Nor would I advocate re-writing ("diddling") the code from the very clear expression of it that you have right now.
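A hedged sketch of suggestion (2), combined with reserving the vectors up front. It assumes Mat behaves like cv::Mat (copying a Mat shares the underlying buffer, and clone() makes a deep copy); whether it actually helps depends on what getChildParentOffsets does.

float ClassA::Method1(vector<DataClass*>& data, TreeClass* node)
{
    vector<Mat> offset_vec_1;
    vector<Mat> offset_vec_2;
    offset_vec_1.reserve(data.size()); // avoid repeated reallocation
    offset_vec_2.reserve(data.size());

    Mat offset1, offset2; // constructed once, re-used every iteration
    for (size_t i = 0; i < data.size(); i++)
    {
        getChildParentOffsets(data[i], node, offset1, offset2);
        // Push deep copies so the stored entries are not all aliases
        // of the two re-used buffers.
        offset_vec_1.push_back(offset1.clone());
        offset_vec_2.push_back(offset2.clone());
    }
    return CalculateCovarReturnTrace(offset_vec_1) + CalculateCovarReturnTrace(offset_vec_2);
}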
This may be the wrong approach, but as I dig deeper into developing my game engine I have run into a timing issue.
So let's say I have a set of statements like:
for (int i = 0; i < 400; i++)
{
    engine->get2DObject("bullet")->Move(1, 0);
}
The bullet will move; however, on the screen there won't be any animation. It will basically "warp" from one part of the screen to the next.
So I am thinking: create a vector of function pointers for each base object (which the user inherits from), and when they call "Move" I don't actually move the object until the next iteration of the game loop.
So something like:
while (game.running())
{
    game.go();
}

go()
{
    for (...)
        2dobjs.at(i)->action.pop_back();
}
Or something like that. This way it runs only a single action of each object during each iteration (of course, I'll add a check to see if there are actually "actions" to be run).
And if that is a good idea, my question is how do I store the parameters? Since each object can do more than a single type of "action" besides move (rotateby is one example), I think it would be nonsensical to create a struct similar in fashion to:
struct MoveAction {
    MovePTR...;
    int para1;
    int para2;
};
Thoughts? Or is this the completely wrong direction?
Thoughts? Or is this the completely wrong direction?
It's the wrong direction.
Actually, the idea of being able to queue actions or orders has some merit. But usually this applies to things like AI and to more general tasks (like "proceed to location X", "attack anything you see", etc.), so it usually doesn't cover the frame-by-frame movement issue that you're dealing with here.
For very simple entities like bullets, queuing tasks is rather overkill. After all, what else is a bullet going to do except move forward each time? It's also an awkward way to implement such simple motion. It brings up issues like: why only 400 steps forward? What if the area in front is longer? What if it's shorter? What if some other object gets in the way after only, say, 50 steps? (Then 350 queued move actions were a waste.)
Another problem with this idea (at least in the simplistic way you've presented it) is that your bullets are going to move a fixed amount per iteration of your game loop. But not all iterations take the same amount of time. Some people's computers will obviously be able to run the code faster than others. And even disregarding that, sometimes the game loop may be doing considerably more work than at other times (such as when many more entities are active in the game). So you would really want a way to factor in the time each iteration takes, adjusting the movement so that everything appears to move at a consistent speed to the user, regardless of how fast the game loop is running.
So instead of each movement action being "move X amount" it would be "move at X speed", multiplied by something like the time elapsed since the last iteration. And then you would really never know how many move actions to queue, because it would depend on how fast your game loop is running in that scenario. But again, this is another indication that queuing movement actions is an awkward solution to frame-by-frame movement.
The way most game engines do it is to simply call some sort of function on each object each frame which evaluates its movement. Maybe a function like void ApplyMovement(float timeElapsedSinceLastFrame);. In the case of a bullet, it would multiply the speed at which the bullet should move per second by the passed-in time since the last frame to determine the amount the bullet should move this frame. For more complex objects you might want to do math with rotations, accelerations, decelerations, target-seeking, etc. But the general idea is the same: call a function each iteration of the game loop to evaluate what should happen for that iteration.
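A minimal sketch of that pattern; the names and speeds are illustrative, not from the original engine.

// Per-frame, time-scaled movement: speed is in pixels per second, so the
// on-screen speed is the same regardless of frame rate.
struct Bullet
{
    float x = 0.0f, y = 0.0f;
    float speedX = 400.0f; // pixels per second, not per frame
    float speedY = 0.0f;

    void ApplyMovement(float dt) // dt = seconds elapsed since last frame
    {
        x += speedX * dt;
        y += speedY * dt;
    }
};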
I don't think this is the right direction. After you store 400 moves in a vector of function pointers, you'll still need to pop and perform a move, redraw the screen, and repeat. Isn't it easier just to move(), redraw, and repeat?
I'd say your bullet is warping because it's moving 400 pixels per frame, not because you need to delay the move calculations.
But if this is the correct solution for your architecture, in C++ you should use classes rather than function pointers. For example:
class Action // an abstract interface for Actions
{
public:
    virtual ~Action() {}        // virtual destructor: we delete through Action*
    virtual void execute() = 0; // pure virtual function
};

class MoveAction : public Action
{
public:
    MoveAction(Point vel) : velocity(vel) {}
    virtual void execute();
    Point velocity;
    ...
};

std::vector<Action*> todo;

gameloop
{
    ...
    todo.push_back(new MoveAction(Point(1, 0)));
    ...
}
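Presumably the game loop would then drain the queue once per frame; a sketch of that step (freeing each Action, since they were allocated with new):

for (Action* a : todo)
{
    a->execute(); // dispatches to MoveAction::execute(), etc.
    delete a;
}
todo.clear();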
So where does 400 come from? Why not just do it this way:
go()
{
    engine->get2DObject("bullet")->Move(1, 0);
}
I think the general approach could be different to get what you want. I personally would prefer a setup where each go() loop calls functions to set the positions of all objects, including the bullet, and then draws the scene. So on each loop it would put the bullet where it should be right then.
If you do decide to go your route, you will likely have to do your struct idea. It's not pleasant, but until we get closures, you'll have to make do.
Here is what I'm doing: my application takes points from the user while dragging and displays a filled polygon in real time.
It basically adds the mouse position on MouseMove. Each point is a USERPOINT and has bezier handles, because eventually I will do beziers, and this is why I must transfer them into a vector.
So basically MousePos -> USERPOINT. The USERPOINT gets added to a std::vector<USERPOINT>. Then, in my UpdateShape() function, I do this:
DrawingPoints is defined like this:
std::vector<std::vector<GLdouble>> DrawingPoints;
Contour[i].DrawingPoints.clear();

for (unsigned int x = 0; x < Contour[i].UserPoints.size() - 1; ++x)
    SetCubicBezier(
        Contour[i].UserPoints[x],
        Contour[i].UserPoints[x + 1],
        i);
SetCubicBezier() currently looks like this:
void OGLSHAPE::SetCubicBezier(USERPOINT &a, USERPOINT &b, int &currentcontour)
{
    std::vector<GLdouble> temp(2);
    if (a.RightHandle.x == a.UserPoint.x && a.RightHandle.y == a.UserPoint.y
        && b.LeftHandle.x == b.UserPoint.x && b.LeftHandle.y == b.UserPoint.y)
    {
        temp[0] = (GLdouble)a.UserPoint.x;
        temp[1] = (GLdouble)a.UserPoint.y;
        Contour[currentcontour].DrawingPoints.push_back(temp);
        temp[0] = (GLdouble)b.UserPoint.x;
        temp[1] = (GLdouble)b.UserPoint.y;
        Contour[currentcontour].DrawingPoints.push_back(temp);
    }
    else
    {
        //do cubic bezier calculation
    }
}
So, for the cubic beziers, I need to turn USERPOINTs into GLdouble[2] (since the GLU tessellator takes a flat array of doubles).
So I did some profiling. At ~100 points, the code:
for (unsigned int x = 0; x < Contour[i].UserPoints.size() - 1; ++x)
    SetCubicBezier(
        Contour[i].UserPoints[x],
        Contour[i].UserPoints[x + 1],
        i);
took 0 ms to execute. Then, at around 120 points, it jumps to 16 ms and never looks back. I'm positive this is due to std::vector. What can I do to make it stay at 0 ms? I don't mind using lots of memory while generating the shape and then removing the excess once the shape is finalized, or something like that.
0 ms is no time... nothing executes in no time. This should be your first indicator that you might want to check your timing methods rather than your timing results.
Namely, timers typically don't have good resolution. Your pre-16 ms results are probably actually 1-15 ms being incorrectly reported as 0 ms. In any case, if we could tell you how to keep it at 0 ms, we'd be rich and famous.
Instead, find out which parts of the loop take the longest, and optimize those. Don't work towards an arbitrary time measure. I'd recommend getting a good profiler to get accurate results. Then you don't need to guess what's slow (something in the loop), but can actually see what part is slow.
You could use vector::reserve() to avoid unnecessary reallocations in DrawingPoints:

Contour[i].DrawingPoints.reserve(2 * (Contour[i].UserPoints.size() - 1)); // two points pushed per segment in the straight-line case
for (unsigned int x = 0; x < Contour[i].UserPoints.size() - 1; ++x) {
    ...
}
If you actually timed only the second code snippet (as you stated in your post), then you're probably just reading from the vector, which means the cause cannot be the reallocation cost of the vector. In that case, it may be due to CPU cache effects: small datasets can be read at lightning speed from the CPU cache, but whenever the dataset is larger than the cache (or when alternately reading from different memory locations), the CPU has to access RAM, which is distinctly slower than cache access.
If the part of the code which you profiled appends data to the vector, then use std::vector::reserve() with an appropriate capacity (the number of expected entries in the vector) before filling it.
However, keep two general rules of profiling/benchmarking in mind (a short sketch follows the two points):
1) Use time measurement methods with high resolution (as others stated, the resolution of your timer IS too low).
2) In any case, run the code snippet more than once (e.g. 100 times), take the total time of all runs and divide it by the number of runs. This will give you some REAL numbers.
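A minimal sketch of both points using std::chrono (C++11): sample a monotonic, high-resolution clock and average over many runs. BuildDrawingPoints() is a hypothetical stand-in for the loop being profiled.

#include <chrono>
#include <iostream>
#include <vector>

// Hypothetical stand-in for the code under test; replace with the real work.
void BuildDrawingPoints()
{
    std::vector<double> pts;
    for (int i = 0; i < 100000; ++i)
        pts.push_back(i * 0.5);
}

int main()
{
    const int runs = 100;
    auto start = std::chrono::steady_clock::now();
    for (int r = 0; r < runs; ++r)
        BuildDrawingPoints();
    auto end = std::chrono::steady_clock::now();
    // Total elapsed time in milliseconds, averaged over all runs.
    double totalMs = std::chrono::duration<double, std::milli>(end - start).count();
    std::cout << "average: " << totalMs / runs << " ms per run\n";
}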
There's a lot of guessing going on here. Good guesses, I imagine, but guesses nevertheless. And when you try to measure the time functions take, that doesn't tell you how they spend it. You can see, by trying different things, that the time will change, and from that you can get some suggestion of what was taking the time, but you can't really be certain.
If you really want to know what's taking the time, you need to catch it while it's taking that time and find out for certain what it's doing. One way is to single-step it at the instruction level through that code, but I suspect that's out of the question. The next best way is to get stack samples. You can find profilers that are based on stack samples. Personally, I rely on the manual technique, for the reasons given here.
Notice that it's not really about measuring time; it's about finding out why that extra time is being spent, which is a very different question.