Tweener framework for C++?

For ActionScript there are quite a few "tweening" frameworks that facilitate animating objects, for example TweenLite: http://www.greensock.com/tweenlite/
It lets you animate an arbitrary object with a single line of code:
Pseudocode:
tween(myObject, 3.0f, {xpos:300});
What this line of code does is instantiate a new tweening object which will, step by step over 3 seconds, animate the "xpos" property of myObject from whatever value it currently has to 300. Additionally, it allows you to use a variety of different interpolation functions.
So in order to animate an object to a new point, I can write a single line of code and forget about it (the tweening object will destroy itself once it has finished animating the value).
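Nothing prevents the same style in C++: a tween is just a small object updated once per frame. Below is a minimal hand-rolled sketch of the idea; the Tween type and its fields are hypothetical and not any real library's API:

#include <vector>

struct Tween {
    float* target;     // property being animated (must outlive the tween)
    float  from;
    float  to;
    float  duration;   // seconds
    float  elapsed;    // starts at 0

    // Call once per frame with the frame's delta time; returns false when done.
    bool update(float dt) {
        elapsed += dt;
        float t = elapsed < duration ? elapsed / duration : 1.0f;
        *target = from + (to - from) * t;  // linear easing; swap in other curves here
        return t < 1.0f;
    }
};

// Usage, roughly matching tween(myObject, 3.0f, {xpos:300}):
//   std::vector<Tween> tweens;
//   tweens.push_back(Tween{&myObject.xpos, myObject.xpos, 300.0f, 3.0f, 0.0f});
//   ...each frame: update every tween and erase the ones that return false.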
My question is whether there is anything comparable for C++.
I know the two languages are completely different. Still, I think it should be possible and would be highly convenient, so if anyone knows a framework that does the trick, it would be welcome :)
Thanks!

I have stumbled upon libclaw's tweeners, and it looks promising: well documented, pretty mature, and more or less alive.
I'm not sure I like the fact that it operates on doubles only, whereas I would need it primarily for floats and sometimes ints, but I don't think the performance penalty of computing in double and casting should be too big...

How about cpptweener? It is a port of the awesome AS3 Tweener library.


Factory pattern: Can "definition" be too large?

Is there a disadvantage when the "definition of an object" that is passed into a "factory" becomes (too) big/complex?
Example
In a game engine, I have a prototype class for a 3D graphics object that is quite large, at least for me.
It contains:
a pointer (handle) to a 3D mesh
pointers (handles) to 8 textures (e.g. lambertian, specular)
colors for the 8 textures (e.g. color multipliers), 4 floats each
custom settings for the 8 textures, 4 floats each
~10 boolean flags for blending, depth test, depth write, etc. (gradually added as the project proceeds)
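For context, the prototype described above would look roughly like this (the handle types are placeholders, not the asker's actual code):

#include <cstdint>

using MeshHandle    = std::uint32_t;  // assumed handle representation
using TextureHandle = std::uint32_t;

struct GraphicObjectPrototype {
    MeshHandle    mesh;                  // handle to the 3D mesh
    TextureHandle textures[8];           // e.g. lambertian, specular, ...
    float         textureColor[8][4];    // color multiplier, 4 floats per texture
    float         textureSetting[8][4];  // custom setting, 4 floats per texture
    bool          blending, depthTest, depthWrite;  // ~10 such flags in total
};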
In the game logic, I cache some (100?) instances of the prototype, scattered around. Most of them are stored as fields in many subsystems.
I have found that it is also very convenient to store the prototype by value.
Question
Besides the obvious direct memory/CPU cost, are there any easily overlooked disadvantages that arise when the prototype is very big?
What are the criteria for deciding that a prototype (the definition passed into a factory) is too big/complex? What remedy or design pattern can cure it?
Should the prototype instead be stored in the business/game logic by handle/pointer? (I have this idea because people tend to use pointers for large objects, but that is a very weak reason.)
Answers:
Besides the obvious direct memory/CPU cost, are there any easily overlooked disadvantages that arise when the prototype is very big?
If you hold copies of one graphics object in different places, then whenever you change the object you have to change all the copies, under a lock if threads are involved, or you will run into inconsistency issues. This increases code complexity.
What are the criteria for deciding that a prototype (the definition passed into a factory) is too big/complex? What remedy or design pattern can cure it?
The factory pattern is about object creation. If you find the logic or code in the factory too complex, the problem is likely your object structure, not the factory pattern.
Should the prototype instead be stored in the business/game logic by handle/pointer? (I have this idea because people tend to use pointers for large objects, but that is a very weak reason.)
For your case, I recommend the pimpl idiom or a smart pointer to store the one shared object, which can greatly reduce the complexity and the number of objects.
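As a sketch of that last suggestion (the names here are hypothetical), subsystems can share a single prototype through a smart pointer instead of each holding a copy:

#include <memory>

struct Prototype { /* mesh handle, texture handles, colors, flags... */ };

struct RenderSubsystem {
    std::shared_ptr<const Prototype> proto;  // cheap to copy; one real object
};

int main() {
    auto proto = std::make_shared<const Prototype>();
    RenderSubsystem a{proto}, b{proto};  // both observe the same instance
    // The prototype is destroyed when the last shared_ptr to it goes away.
}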

Can splitting a one-liner into multiple lines with temporary variables impact performance, e.g. by inhibiting some optimizations?

This is a very general C++ question. Consider the following two blocks (they do the same thing):
v_od=((x-wOut*svd.matrixV().topLeftCorner(p,Q).adjoint()).cwiseAbs2().rowwise().sum()).array().sqrt();
and
MatrixXd wtemp=(x-wOut*svd.matrixV().topLeftCorner(p,Q).adjoint());
v_od=(wtemp.cwiseAbs2().rowwise().sum()).array().sqrt();
Now, the first construct feels more efficient. But is that true, or would the C++ compiler compile both down to the same thing? (I'm assuming the compiler is a good one with all the safe optimization flags turned on. For argument's sake, wtemp is mid-sized, say a matrix with 100k elements all told.)
I know the generic answer to this is "benchmark it and come back to us", but I want a general answer.
There are two cases where your second expression could be fundamentally less efficient than your first.
The first case is where the writer of the MatrixXd class provided rvalue-reference-to-this (ref-qualified) overloads of cwiseAbs2(). In the first code block, the value we call the method on is a temporary; in the second, it is not. We can fix this by simply changing the second expression to:
v_od=(std::move(wtemp).cwiseAbs2().rowwise().sum()).array().sqrt();
which casts wtemp to an rvalue reference and basically tells cwiseAbs2() that the matrix it is being called on can be reused as scratch space. This only matters if the writers of the MatrixXd class implemented this particular feature.
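For illustration, a ref-qualified overload pair looks like this on a hypothetical matrix class (a sketch, not Eigen's actual implementation):

#include <utility>
#include <vector>

struct Matrix {
    std::vector<double> buf;

    Matrix cwiseAbs2() const& {   // called on lvalues: must allocate a copy
        Matrix out{buf};
        for (double& v : out.buf) v *= v;
        return out;
    }

    Matrix cwiseAbs2() && {       // called on rvalues: reuse our own storage
        for (double& v : buf) v *= v;
        return std::move(*this);
    }
};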
The second possible way it could be fundamentally slower is if the writers of the MatrixXd class used expression templates for pretty much every operation listed. This technique builds up the parse tree of the operations and only evaluates the whole thing when you assign the result to a value at the end.
Some expression templates are written to handle being stored in an intermediate object like this:
auto&& wtemp=(x-wOut*svd.matrixV().topLeftCorner(p,Q).adjoint());
v_od=(std::move(wtemp).cwiseAbs2().rowwise().sum()).array().sqrt();
where the first line stores the expression template wtemp rather than evaluating it into a matrix, and the second line consumes that intermediate result. Other expression template implementations break horribly if you try to do something like the above.
Expression templates are also something that the matrix class writers would have had to implement explicitly. It is again a somewhat obscure technique; it is mainly useful in situations where a buffer is repeatedly extended by seemingly cheap operations, like string append.
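To make the idea concrete, here is a toy expression template with hypothetical types (real libraries such as Eigen are far more elaborate):

#include <cstddef>
#include <vector>

// A node that records "l + r" instead of computing it.
template <class L, class R>
struct Sum {
    const L& l;
    const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

struct Vec {
    std::vector<double> data;
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Assignment evaluates the whole tree in a single loop, with no temporaries.
    template <class E>
    Vec& operator=(const E& e) {
        data.resize(e.size());
        for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
        return *this;
    }
};

// operator+ builds tree nodes; nothing is evaluated yet.
template <class R>
Sum<Vec, R> operator+(const Vec& l, const R& r) { return {l, r}; }
template <class A, class B, class R>
Sum<Sum<A, B>, R> operator+(const Sum<A, B>& l, const R& r) { return {l, r}; }

// d = a + b + c;  // a single pass over the data
// Note that a Sum only holds references to its operands, which is exactly why
// storing one in a local variable can dangle in naive implementations.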
Barring those two cases, any difference in performance is going to be purely "noise": there is no reason, a priori, to expect the compiler to be confused by one form more than the other.
Both of these are relatively advanced/modern techniques.
Neither of them happens "in the compiler" without being implemented explicitly by the library writer.
In general, the second version is much more readable, and that is why it is preferred. It gives the temporary a clear name, which helps readers understand the code. Moreover, it is much easier to debug! That is why I would strongly recommend going for the second option.
I would not worry much about the performance difference: I think a good compiler will generate identical code for both examples.
The most important aspects of code, in order from most important to least important:
Correct code
Readable code
Fast code
Of course, this can change (e.g. on embedded devices where you have to squeeze out every last bit of performance in a limited memory space), but this is the general case.
Therefore, you want the code that is easier to read over a possibly negligible performance increase.
I wouldn't expect a performance hit for storing temporaries, at least not in the general case. In fact, in some cases you can expect it to be faster, e.g. caching the result of strlen() when working with C strings (the first example that comes to mind).
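To illustrate with the strlen() example, hoisting the call into a named temporary is the faster version:

#include <cstring>

void upcaseAscii(char* s) {
    const std::size_t len = std::strlen(s);  // computed once, not per iteration
    for (std::size_t i = 0; i < len; ++i)
        if (s[i] >= 'a' && s[i] <= 'z') s[i] -= 'a' - 'A';
}
// Writing "i < std::strlen(s)" in the loop condition instead would make the
// loop quadratic unless the compiler can prove the string never changes.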
Once you have written the code, verified that it is correct, and found a performance problem, only then should you worry about profiling and making it faster. At that point, you'll probably find that having more maintainable/readable code actually helps you isolate the problem.

Optimization and testability at the same time - how to break up code into smaller units

I am trying to break up a long "main" program in order to be able to modify it, and also perhaps to unit-test it. It uses some huge data, so I hesitate:
What is best: to have function calls, with possibly extremely large (memory-wise) data being passed
(a) by value, or
(b) by reference?
(By extremely large, I mean maps and vectors of vectors of some structures and small classes... even images... that can be really large.)
(c) Or to have private data that all the functions can access? That may also mean that main_processing() or something could hold a vector of all of them, while some functions would only get a single item... with the advantage of the functions being testable.
My question, though, has to do with optimization: while I am trying to break this monster into baby monsters, I also do not want to run out of memory.
It is not very clear to me how many copies of the data I am going to have if I create local variables.
Could someone please explain?
Edit: this is not a generic "how do I break a very large program down into classes" question. This program is part of a large solution that is already broken down into small entities.
The executable I am looking at, while fairly large, is a single entity with non-divisible data. So the data will either all be created as member variables in a single class, which I have already created, or all of it will be passed around as arguments between functions.
Which is better?
If you want unit testing, you cannot "have private data that all the functions can access", because then all of that data would be part of each test case.
So you must think about each function and define exactly which part of the data it works on. As for function parameters and return values, it's very simple: use pass-by-value for small objects and pass-by-reference for large objects.
You can use a guesstimate for the threshold that separates small from large. I use the rule "8 is small, anything more is large", but what is good for my system may not be equally good for yours.
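A short sketch of that rule, with hypothetical types:

#include <cstddef>
#include <vector>

struct Image { std::vector<unsigned char> pixels; };  // "extremely large" data

// Small object: pass by value.
double scaled(double x, double factor) { return x * factor; }

// Large and read-only: pass by const reference, so no copy is made.
std::size_t totalPixels(const std::vector<Image>& images) {
    std::size_t n = 0;
    for (const Image& img : images) n += img.pixels.size();
    return n;
}

// Large and modified in place: pass by non-const reference.
void blank(Image& img) { img.pixels.assign(img.pixels.size(), 0); }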
This seems more like a general question about OOP. Split your data into logically grouped concepts (classes), place the code that works with those data elements alongside the data (member functions), and then tie it all together with composition, inheritance, etc.
Your question is too broad to give more specific advice.

How should I store and use gun fire data in a game?

In this game I have 3 defense towers (the number is configurable) which fire a "bullet" every 3 seconds at 30 km/h. These defense towers have a radar, and they only start firing when the player is under the tower's radar. That's not the issue.
My question is how to store the data for the gun fire. I'm not sure exactly what data I need for each bullet, but one obvious candidate is the bullet's position. Let's assume for now that I only need to store that (I already have a struct defined for a 3D point).
Should I try to figure out the maximum number of bullets the game can have at any one time and declare an array of that size? Should I use a linked list? Or maybe something else?
I really have no idea how to do this. I don't need anything fancy or complex; something basic that just works and is easy to use and implement is more than enough.
P.S.: I didn't post this question on the game development site (despite the tag) because I think it fits better here.
Generally, fixed length arrays aren't a good idea.
Given your game model, I wouldn't go for any data structure that doesn't allow O(1) removal. That rules out plain arrays anyway, and it might suggest a linked list. However, the underlying details should be abstracted away by using a generic container class with the right characteristics.
As for what you should store:
Position (as you mentioned)
Velocity
Damage factor (your guns are upgradeable, aren't they?)
Maximum range (ditto)
EDIT: To complicate matters slightly, the STL containers always take copies of the elements put into them, so in practice, if any of the attributes might change over the object's lifetime, you'll need to allocate your structures on the heap and store (smart?) pointers to them in the collection.
I'd probably use a std::vector or std::list. Whatever's easiest.
Caveat: if you are coding for a very constrained platform (slow CPU, little memory), then it might make sense to use a plain old fixed-size C array. But that's very unlikely these days. Start with whatever is easiest to code, and change it later if and only if it turns out you can't afford the abstractions.
I guess you can start off with std::vector<BulletInfo> and see how it works from there. It provides an array-like interface but is dynamically resizable.
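A minimal sketch of that approach (the names are made up), including the O(1) "swap and pop" removal mentioned above, which is fine because bullet order doesn't matter:

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Bullet {
    Vec3  position;
    Vec3  velocity;
    float damage;    // if the guns are upgradeable
    float maxRange;  // ditto
};

std::vector<Bullet> bullets;

void removeBullet(std::size_t i) {
    bullets[i] = bullets.back();  // overwrite the dead bullet with the last one...
    bullets.pop_back();           // ...then shrink: O(1), order not preserved
}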
In instances like this I prefer a slightly more complex method of managing bullets. Since the number of bullets possible on screen is directly related to the number of towers, I would keep a small fixed-length array of bullets inside each tower class. Whenever a tower goes to fire a bullet, it searches through its array, finds an unused bullet, sets it up with a new position/velocity, and marks it active.
The slightly more complex part is that I like to keep a second list of bullets in an outside manager, say a BulletManager. When each tower is created, it adds all of its bullets to the central manager, and the central manager is then in charge of updating the bullets.
I like this method because it makes it easy to manage the memory constraints related to bullets: just tweak the "number of active towers" value and all of the bullets are created for you. You don't need to allocate bullets on the fly, because they are all pooled, and you don't have one central pool whose size you constantly need to change as you add/remove towers.
It does involve slightly more overhead, because there is a central manager with a list of pointers, and you need to be careful to always remove a destroyed tower's bullets from the central manager. But for me the benefits are worth it.
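A sketch of the per-tower pool (the class name and capacity are assumptions, with Bullet/Vec3 as in the earlier sketch):

struct Vec3 { float x, y, z; };
struct Bullet { Vec3 position, velocity; bool active; };

struct Tower {
    static const int kMaxBullets = 16;  // assumed per-tower capacity
    Bullet bullets[kMaxBullets];        // fixed pool, allocated up front

    Tower() : bullets() {}              // zero-initialize: all bullets inactive

    // Find an unused slot, activate it, and hand it back; nullptr if exhausted.
    Bullet* fire(const Vec3& pos, const Vec3& vel) {
        for (Bullet& b : bullets) {
            if (!b.active) {
                b = Bullet{pos, vel, true};
                return &b;
            }
        }
        return nullptr;
    }
};
// A central BulletManager would hold Bullet* pointers into these pools and
// update them each frame; remember to unregister a tower's bullets when it dies.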

Building static (but complicated) lookup table using templates

I am currently in the process of optimizing a numerical analysis code. Within the code, there is a 200x150-element lookup table (currently a static std::vector<std::vector<double>>) that is constructed at the beginning of every run. The construction of the lookup table is actually quite complex: the values in it are computed using an iterative secant method on a complicated set of equations. Currently, constructing the lookup table accounts for 20% of the run time (run times are on the order of 25 seconds; table construction takes 5 seconds). While 5 seconds might not seem like a lot, when running our MC simulations, where we run 50k+ of them, it suddenly becomes a big chunk of time.
Along with some other ideas, one thing that has been floated is: can we construct this lookup table using templates at compile time? The table itself never changes. Hard-coding a large array isn't a maintainable solution (the equations that generate the table are constantly being tweaked), but it seems that if the table could be generated at compile time, it would give us the best of both worlds (easily maintainable, no overhead at runtime).
So, I propose the following (much simplified) scenario. Let's say you want to generate a static array (use whatever container suits you best: 2D C array, vector of vectors, etc.) at compile time. You have a function defined:
double f(int row, int col);
where the return value is the entry in the table, row is the lookup table row, and col is the lookup table column. Is it possible to generate this static array at compile time using templates, and how?
Usually the best solution is code generation. There you have all the freedom, and you can be sure that the output is actually a double[][].
Save the table to disk the first time the program is run, and regenerate it only if it is missing; otherwise load it from the cache.
Include a version string in the file so that the table is regenerated when the code changes.
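A sketch of that cache (the file layout, version string, and fixed 200x150 dimensions are assumptions based on the question):

#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

const std::string kTableVersion = "table-v1";  // bump whenever the equations change
const std::size_t kRows = 200, kCols = 150;

bool loadTable(std::vector<std::vector<double>>& table, const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::string version;
    if (!in || !std::getline(in, version) || version != kTableVersion)
        return false;  // missing or stale: the caller regenerates and saves
    table.assign(kRows, std::vector<double>(kCols));
    for (std::vector<double>& row : table)
        in.read(reinterpret_cast<char*>(row.data()), kCols * sizeof(double));
    return static_cast<bool>(in);
}

void saveTable(const std::vector<std::vector<double>>& table, const std::string& path) {
    std::ofstream out(path, std::ios::binary);
    out << kTableVersion << '\n';
    for (const std::vector<double>& row : table)
        out.write(reinterpret_cast<const char*>(row.data()), kCols * sizeof(double));
}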
A couple of things here.
What you want to do is almost certainly at least partially possible.
Floating point values are invalid template arguments (they just are, don't ask why). Although you can represent rational numbers in templates using an N1/N2 representation, the set of operations you can perform on them does not cover everything that can be done with rational numbers; root(n), for instance, is unavailable (see root(2)). And unless you want a bajillion instantiations of static double variables, you'll want your value accessor to be a function. (Maybe you can come up with a new template floating point representation that splits the exponent and mantissa, though, and then you're as well off as with the double type... have fun :P)
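The N1/N2 representation mentioned above looks roughly like this; it is essentially what std::ratio provides in C++11:

// Rational numbers as types: numerator and denominator are template arguments,
// and arithmetic produces new types. Note the value accessor is a function.
template <long N, long D>
struct Ratio {
    static const long num = N;
    static const long den = D;
    static double value() { return static_cast<double>(N) / D; }
};

template <class A, class B>
struct Mul {
    typedef Ratio<A::num * B::num, A::den * B::den> type;
};

// Mul<Ratio<1, 2>, Ratio<3, 4>>::type::value() == 0.375
// ...but there is no way to express root(2) exactly in this scheme.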
Metaprogramming code is hard to format in a legible way, and by its very nature it is rather tough to read. Even an expert is going to have a hard time analyzing a piece of TMP code they didn't write, even when it's rather simple.
If an intern or anyone below senior level even THINKS about looking at TMP code, their head explodes. Although sometimes senior devs blow up louder, because they're freaking out at new stuff (making your boss feel incompetent can have serious repercussions, even though it shouldn't).
All of that said... templates are a Turing-complete language. You can do "anything" with them, where "anything" means anything that doesn't require some external ability like system access (you just can't make the compiler spawn new threads, for example). You can build your table. The question you'll then need to answer is whether you actually want to.
Why not have two separate programs? One generates the table and stores it in a file; the other loads the file and runs the simulation on it. That way, when you need to tweak the equations that generate the table, you only need to recompile the generator.
If your table were a bunch of ints, then yes, you could. Maybe. But what you certainly couldn't do is generate doubles at compile time.
More importantly, I think a plain double[][] would be better than a vector of vectors here: you're paying for a LOT of dynamic allocation for a statically sized table.
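That is, with the 200x150 dimensions from the question, something like the following, filled in at startup (or loaded from disk, as suggested above):

// One contiguous block: no per-row heap allocations, cache-friendly access.
static double table[200][150];

double lookup(int row, int col) { return table[row][col]; }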