In my program I track device movement with CMMotionManager and use a quaternion representation of the device attitude. To position the device in a global coordinate system I need to do some basic calculations involving quaternions. For that I want to write a Quaternion class with all the needed functions implemented. Does somebody know the proper way to do this, or some general guidelines for how it should be done?
I've found a sample project from Apple which has Quaternion, Vector2 and Vector3 classes written in C++, but I think they're not very easy to use in Cocoa, since I can't declare properties of these C++ types in an Obj-C header file. Or am I wrong?
Thank you.
You're quite possibly not interested in OpenGL at all, but as part of GLKit Apple supplies GLKQuaternion. It has a C interface but should be easy to wrap in a class. Using it is recommended over writing the mathematics out yourself, since it likely uses the hardware vector unit as fully as possible.
Alternatively, you can use GLKQuaternion directly from both C++ and Objective-C, since both are supersets of C.
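As a rough sketch of what that looks like in practice (assuming GLKit is linked and its C math headers are available to the translation unit; the helper names here are made up for illustration):

#include <GLKit/GLKMath.h>  // plain C math types and functions, usable from C, C++ and Objective-C(++)

// Build a GLKQuaternion from the x/y/z/w doubles of a CMAttitude quaternion.
static inline GLKQuaternion AttitudeToGLK(double x, double y, double z, double w)
{
    return GLKQuaternionMake((float)x, (float)y, (float)z, (float)w);
}

// Rotate a device-frame vector by the attitude quaternion.
static inline GLKVector3 RotateByAttitude(GLKQuaternion attitude, GLKVector3 v)
{
    GLKQuaternion q = GLKQuaternionNormalize(attitude);
    return GLKQuaternionRotateVector3(q, v);
}

// Compose two rotations: apply 'first', then 'second'.
static inline GLKQuaternion Compose(GLKQuaternion second, GLKQuaternion first)
{
    return GLKQuaternionMultiply(second, first);
}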
You're asking multiple questions here: how to implement a quaternion class, how to use quaternions to represent orientation (attitude), how to integrate a C++ implementation with Objective-C, and how to integrate all that into Cocoa. I'll answer the last with a question: does Cocoa really need to know what you are using under the hood to represent attitude?
There are lots of existing packages out there that use quaternions to represent orientation in three-dimensional space. Eigen is one; there are lots of others. Don't reinvent the wheel. There are some gotchas that you do need to beware of, particularly when using a third-party package. See "View Matrix from Quaternion". If you scroll down and look at some of the other answers there, you'll see that two of the packages mentioned by others are subject to the very issues I talked about in my answer.
Note well: I am not saying don't use quaternions. That would be rather hypocritical; I use them all the time. They work very nicely as a compact means of representing rotation. You just need to beware of issues, mainly because too many people who use them / implement software for them are clueless regarding those issues.
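As an example of leaning on an existing package, here is a minimal Eigen sketch (and one of the conventions you have to check: Eigen's Quaterniond constructor takes w first, while coeffs() stores x, y, z, w):

#include <Eigen/Geometry>
#include <cmath>
#include <iostream>

int main()
{
    // Identity attitude; note the constructor order is (w, x, y, z).
    Eigen::Quaterniond attitude(1.0, 0.0, 0.0, 0.0);

    // A 90-degree rotation about the z axis, built from an axis-angle.
    Eigen::Quaterniond qz(Eigen::AngleAxisd(M_PI / 2.0, Eigen::Vector3d::UnitZ()));

    // Compose the rotations and rotate a vector.
    Eigen::Quaterniond combined = (qz * attitude).normalized();
    Eigen::Vector3d rotated = combined * Eigen::Vector3d(1.0, 0.0, 0.0);

    std::cout << rotated.transpose() << std::endl;   // approximately (0, 1, 0)
    return 0;
}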
I am using Box2D for a game that communicates to a server and I need complete determinism. I would simply like to use integer math/fixed-point math to achieve this and I was wondering if there was a way to enable that in Box2D.
Yes. Albeit with a fixed-point implementation and with modifications to the Box2D library code.
The C++ library code for Box2D 2.3.2 uses the float32 type for its implementation of real-number-like values. As float32 is defined in b2Settings.h via a typedef (to the C++ float type), that one line can be changed to use a different underlying implementation of real-number-like values.
Unfortunately, some of the code (like b2Max) is written or used in ways that break if float32 is not a float, so those errors have to be chased down and the errant code rewritten so that the new type can be used.
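The swap itself is roughly the following (a sketch only: Fixed64 is a stand-in for whatever fixed-point implementation you supply, not code from the actual fork):

#include <cstdint>

// A hypothetical 64-bit fixed-point type with a compile-time number of fractional
// bits. Only a few operators are sketched; the real thing needs many more, plus
// the math helpers (sqrt, sin, cos, atan2, ...) that Box2D calls on float32 values.
template <unsigned FractionBits>
class Fixed64
{
public:
    Fixed64() : m_value(0) {}
    Fixed64(float f) : m_value(static_cast<std::int64_t>(f * (std::int64_t(1) << FractionBits))) {}
    explicit operator float() const
    {
        return static_cast<float>(m_value) / (std::int64_t(1) << FractionBits);
    }
    friend Fixed64 operator+(Fixed64 a, Fixed64 b) { return FromRaw(a.m_value + b.m_value); }
    friend Fixed64 operator-(Fixed64 a, Fixed64 b) { return FromRaw(a.m_value - b.m_value); }
    friend bool    operator<(Fixed64 a, Fixed64 b) { return a.m_value < b.m_value; }
    // ... multiplication, division, remaining comparisons, etc.
private:
    static Fixed64 FromRaw(std::int64_t raw) { Fixed64 f; f.m_value = raw; return f; }
    std::int64_t m_value;
};

// b2Settings.h: the one line that selects Box2D's real-number type.
// typedef float float32;      // original
typedef Fixed64<24> float32;   // fixed-point replacement with 24 fractional bits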
I have done this sort of work myself, including writing my own fixed-point implementation. The short of it is that I'd recommend using a 64-bit implementation with between 14 and 24 bits for the fractional portion of values (at least to make it through most of the Testbed tests without unusable amounts of underflow/overflow issues). You can take a look at my fork to see how I've done this, but it's not presently code that's ready for release (not as of 2/11/2017).
The only way you can achieve determinism in a physics engine is to use a fixed-timestep update on the physics engine. You can read more in this link: http://saltares.com/blog/games/fixing-your-timestep-in-libgdx-and-box2d/
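The usual pattern from that kind of article looks roughly like this (a sketch; the iteration counts are just typical values and 'world' is your b2World):

#include <Box2D/Box2D.h>

// Fixed-timestep accumulator loop: the physics always advances in identical
// 1/60 s steps regardless of how fast frames are rendered.
const float TIME_STEP = 1.0f / 60.0f;
const int   VELOCITY_ITERATIONS = 8;
const int   POSITION_ITERATIONS = 3;

static float accumulator = 0.0f;

void updatePhysics(b2World& world, float frameDeltaSeconds)
{
    accumulator += frameDeltaSeconds;
    while (accumulator >= TIME_STEP)
    {
        world.Step(TIME_STEP, VELOCITY_ITERATIONS, POSITION_ITERATIONS);
        accumulator -= TIME_STEP;
    }
}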
I was trying to find some examples or documentation on how to implement functionality using the Unreal 4 ProceduralMeshComponents through code. The documentation of these classes on the website is very sparse and only provides the barest details of how they function:
https://docs.unrealengine.com/latest/INT/BlueprintAPI/Components/ProceduralMesh/index.html
I know, I know, they are already exposed to the Blueprint Editor so I am aware I can use them in the engine itself. However, I want to understand the exact ins and outs of the process, which means that I need to implement this in a project through code.
Also I feel that using these components through Blueprint nodes alone limits the extent of what can be done with this powerful functionality.
I have also searched for examples (either on the net or on the forums) but can't find any that don't involve using Blueprints in some way. The other problem is that this functionality was introduced relatively recently, and before this Rama (a stellar Unreal user) had put up a similar API that allowed procedural mesh generation. However, it is deprecated now and many examples refer to that version instead.
Don't get me wrong, I'm not dissing Blueprints here. I love the tool and consider them one of the best bits of Unreal 4. But for my purpose I require the process to be completely exposed to me from start to finish.
I would appreciate any resources or examples that you could share that implement the Unreal Procedural Mesh classes completely through code towards some effect.
This is quite a big question, since Procedural Mesh Components can be used in very many ways. They just represent an arbitrary runtime-generated mesh.
Most of the functions listed in the documentation are quite self-explanatory if you know the data representations of meshes in 3D applications.
A mesh can have several LODs. Each individual mesh LOD is made up of sections. Each section can have a visual representation and a collision representation. For the "visual" representation, there are lists of point locations, lists of triangles that are represented by three point indices, lists of edges connecting two points, lists of normals for each triangle, lists of UV-space positions for each point, etc. For "collision" representation of the meshes, there are of course separate lists, in most cases smaller in size for more optimized calculation.
Depending on your use case, you fill these lists with the data you need. This data can be generated in whatever way you like, of course; that is up to you. Whatever you do, you just need to have some arrays of the needed stuff, be it points, edges, etc.
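To make that concrete, here is a minimal sketch of filling those arrays for a single triangle and handing them to a UProceduralMeshComponent (it assumes the ProceduralMeshComponent module is listed in your Build.cs; the actor and component setup is omitted):

#include "ProceduralMeshComponent.h"

// Build a single-triangle section on an existing UProceduralMeshComponent.
void BuildTriangleSection(UProceduralMeshComponent* Mesh)
{
    // Point locations.
    TArray<FVector> Vertices;
    Vertices.Add(FVector(0.f, 0.f, 0.f));
    Vertices.Add(FVector(0.f, 100.f, 0.f));
    Vertices.Add(FVector(0.f, 0.f, 100.f));

    // Triangles are triples of indices into the vertex array.
    TArray<int32> Triangles = { 0, 1, 2 };

    // Per-vertex normals and UV-space positions; these arrays may also be left empty.
    TArray<FVector> Normals;
    Normals.Init(FVector(-1.f, 0.f, 0.f), 3);
    TArray<FVector2D> UV0 = { FVector2D(0.f, 0.f), FVector2D(1.f, 0.f), FVector2D(0.f, 1.f) };

    TArray<FColor> VertexColors;
    TArray<FProcMeshTangent> Tangents;

    // Section 0, with simple collision generated from the same triangles.
    Mesh->CreateMeshSection(0, Vertices, Triangles, Normals, UV0,
                            VertexColors, Tangents, /*bCreateCollision=*/true);
}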
can't find any that don't involve using Blueprints in some way
The beauty of UE is that any "Blueprint" example can act as a C++ example. You can recreate a BP graph in code easily, one-to-one, since BP nodes are based on C++ functions which are marked as UFUNCTION(BlueprintCallable). You can see the C++ source of the function behind a node by double-clicking the node.
P.S. I understand it might not be ideal that I give you a long-winded explanation instead of code examples, but for a question this broad there's really no one-size-fits-all code snippet.
Plus, procedural mesh generation is a massive world of its own, with each individual scenario requiring thousands of lines of specialized code. If you make a procedural terrain vs a procedural animal vs a procedural chair, you can imagine the code is both very complex and so specific it's nearly useless for the other use cases.
This is something that's been bugging me for some time now, but I couldn't find a definitive answer for it:
Is anyone aware of a proposal to introduce a standard 2D and/or 3D vector (a struct with x, y and z members) to the STL?
If not, is there a realistic way to get such a class into the next version of the standard - short of writing a complete and perfectly written proposal myself?
And, are there any good reasons (aside from no one having the time) why this hasn't already been done?
I'm definitely willing to contribute, but I believe I lack the experience to produce something of high enough quality to get accepted (I'm not a professional programmer).
Reasoning / Background
By now I've seen dozens of libraries and frameworks (be it for graphics, physics, math, navigation, sensor fusion ...) which all basically implement their own version of
struct Vector2d {
    double x, y;
    //...
};
/* ...
 * operator overloads
 */
and/or its 3D equivalent - not to mention all the occasions, where I implemented one myself before I took the time to do a proper, reusable version.
Obviously, this is not something difficult and I'm not worrying about suboptimal implementations, but every time I want to combine two libraries or reuse code from a different project, I have to take care of converting one version into the other (either by casting or - if possible - text replacement).
Now that the committee strives to significantly extend the standard library for C++17 (especially with a 2D graphics framework), I really would like to have a common 2D vector baked into all interfaces from the start, so I can just write e.g.:
drawLine(transformCoordinates(trackedObject1.estimatePos(), params),
         transformCoordinates(trackedObject2.estimatePos(), params));
rather than
MyOwnVec2D t1{trackedObject1.estimatePosX(), trackedObject1.estimatePosY()};
MyOwnVec2D t2{trackedObject2.estimatePosX(), trackedObject2.estimatePosY()};
t1 = transformCoordinates(t1, params);
t2 = transformCoordinates(t2, params);
drawLine(t1.x, t1.y, t2.x, t2.y);
The example might be a little exaggerated, but I think it shows my point.
I'm aware of std::valarray, which already goes in the right direction, as it allows standard operations like addition and multiplication, but it carries far too much weight if all you need are two or three coordinates. I think a valarray, with fixed size and no dynamic memory allocation (e.g. based on std::array) would be an acceptable solution, especially as it would come with a trivial iterator implementation, but I personally would prefer a class with x, y (and z) members.
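Something along the lines of what I have in mind, just as a sketch of the shape of the interface (the names are purely illustrative, not from any proposal):

#include <array>

// Fixed size, no dynamic memory allocation, trivial iterators via std::array,
// plus named accessors for the coordinates.
struct vec2d
{
    std::array<double, 2> v{};

    double& x() { return v[0]; }
    double& y() { return v[1]; }
    double  x() const { return v[0]; }
    double  y() const { return v[1]; }

    auto begin()       { return v.begin(); }
    auto end()         { return v.end(); }
    auto begin() const { return v.begin(); }
    auto end()   const { return v.end(); }
};

inline vec2d operator+(vec2d a, vec2d b) { return {{a.v[0] + b.v[0], a.v[1] + b.v[1]}}; }
inline vec2d operator*(double s, vec2d a) { return {{s * a.v[0], s * a.v[1]}}; }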
Remark: I'm sorry if this topic has already been discussed (and I would be surprised if it hasn't), but every time I'm searching for 2d vectors I get results talking about something like std::vector<std::vector<T>> or how to implement a certain transformation, but nothing on the topic of standardization.
are there any good reasons (aside from no one having the time) why this hasn't already been done?
There's essentially no reason to.
Forming a type that contains two or three elements is utterly trivial, and all the operations can be trivially defined too. Furthermore, the C++ standard library is not intended to be a general-purpose mathematical toolsuite: it makes sense to use specialised third-party libraries for that if you are serious about mathematical types and constructs beyond the functions and operators that you can throw together in half an hour.
And we do not standardise things that do not need to be standardised.
If C++ were to gain some kind of standardised 3D graphics API then I can see this changing but not until then. And hopefully C++ will never gain any kind of standardised 3D graphics API, because that is not what it is for.
However, if you feel strongly about it, you can start a conversation on std-discussion where all the experts (and some assuredly non-experts) live; sometimes such conversations lead to the formation of proposals, and it needn't necessarily be you who ends up writing it.
In case someone else has an interest in it, I wanted to point out that the July 2014 version of "A Proposal to Add 2D Graphics Rendering and Display to C++" includes a 2D point class/struct (my question was based on the initial draft from January 2014). So maybe there will be at least a simple standard 2D vector in C++1z.
I'm doing some linear algebra math, and was looking for a really lightweight and simple-to-use matrix class that could handle different dimensions: 2x2, 2x1, 3x1 and 1x2, basically.
I presume such class could be implemented with templates and using some specialization in some cases, for performance.
Anybody know of any simple implementation available for use? I don't want "bloated" implementations, as I'll be running this in an embedded environment where memory is constrained.
Thanks
You could try Blitz++ -- or Boost's uBLAS
I've recently looked at a variety of C++ matrix libraries, and my vote goes to Armadillo.
The library is heavily templated and header-only.
Armadillo also leverages templates to implement a delayed evaluation framework (resolved at compile time) to minimize temporaries in the generated code (resulting in reduced memory usage and increased performance).
However, these advanced features are only a burden to the compiler and not your implementation running in the embedded environment, because most Armadillo code 'evaporates' during compilation due to its design approach based on templates.
And despite all that, one of its main design goals has been ease of use - the API is deliberately similar in style to Matlab syntax (see the comparison table on the site).
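For the small sizes mentioned in the question, a sketch of what this looks like with Armadillo's fixed-size types (whose storage is embedded in the object, so there is no heap allocation for the matrix data):

#include <armadillo>

int main()
{
    // 2x2 matrix and 2x1 column vector with compile-time-fixed sizes.
    arma::mat::fixed<2,2> A;
    A(0,0) = 1.0;  A(0,1) = 2.0;
    A(1,0) = 3.0;  A(1,1) = 4.0;

    arma::vec::fixed<2> x;
    x(0) = 5.0;  x(1) = 6.0;

    arma::vec::fixed<2> y = A * x;      // 2x2 times 2x1
    double d = arma::dot(x, y);         // scalar product
    arma::mat::fixed<2,2> At = A.t();   // transpose

    y.print("y = A*x:");
    At.print("A^T:");
    (void)d;
    return 0;
}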
Additionally, although Armadillo can work standalone, you might want to consider using it with one of the available LAPACK (and BLAS) implementations to improve performance. A good option would be, for instance, OpenBLAS (or ATLAS). Check Armadillo's FAQ; it covers some important topics.
A quick search on Google dug up this presentation showing that Armadillo has already been used in embedded systems.
std::valarray is pretty lightweight.
I use the Newmat library for matrix computations. It's open source and easy to use, although I'm not sure it fits your definition of lightweight (it includes over 50 source files, which Visual Studio compiles into a 1.8 MB static library).
CML matrix is pretty good, but may not be lightweight enough for an embedded environment. Check it out anyway: http://cmldev.net/?p=418
Another option, although it may be too late, is:
https://launchpad.net/lwmatrix
I for one wasn't able to find a simple enough library, so I wrote one myself: http://koti.welho.com/aarpikar/lib/
I think it should be able to handle different matrix dimensions (2x2, 3x3, 3x1, etc.) by simply setting some rows or columns to zero. It won't be the fastest approach, since internally all operations are done with 4x4 matrices. Then again, in theory there might exist processors that can handle a 4x4 operation in one tick. At least I would much rather believe in the existence of such processors than go optimizing those low-level matrix calculations. :)
How about just storing the matrix in an array, with the dimensions first, like
2x3 matrix = {2,3,val1,val2,...,val6}
This is really simple, and addition operations are trivial. However, you need to write your own multiplication function.
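A minimal sketch of that multiplication, assuming the layout above (rows, then columns, then the values in row-major order):

#include <cstddef>
#include <vector>

// Matrices are stored as {rows, cols, v00, v01, ..., v(r-1)(c-1)} in row-major order.
std::vector<double> multiply(const std::vector<double>& a, const std::vector<double>& b)
{
    const std::size_t rowsA = static_cast<std::size_t>(a[0]);
    const std::size_t colsA = static_cast<std::size_t>(a[1]);
    const std::size_t colsB = static_cast<std::size_t>(b[1]);
    // (a[1] must equal b[0] for the product to be defined.)

    std::vector<double> result(2 + rowsA * colsB, 0.0);
    result[0] = static_cast<double>(rowsA);
    result[1] = static_cast<double>(colsB);

    for (std::size_t i = 0; i < rowsA; ++i)
        for (std::size_t j = 0; j < colsB; ++j)
            for (std::size_t k = 0; k < colsA; ++k)
                result[2 + i * colsB + j] += a[2 + i * colsA + k] * b[2 + k * colsB + j];

    return result;
}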
I would need some basic vector mathematics constructs in an application. Dot product, cross product. Finding the intersection of lines, that kind of stuff.
I can do this by myself (in fact, have already) but isn't there a "standard" to use so bugs and possible optimizations would not be on me?
Boost does not have it. Its mathematics part is about statistical functions, as far as I was able to see.
Addendum:
Boost 1.37 indeed seems to have this. They also gracefully introduce a number of other solutions in the field, and explain why they still went and did their own. I like that.
Re-check that good ol' friend of C++ programmers called Boost. It has a linear algebra package that may well suit your needs.
I've not tested it, but the C++ Eigen library is becoming increasingly popular these days. According to them, they are on par with the fastest libraries out there, and their API looks quite neat to me.
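For the operations in the question, a minimal sketch of what that looks like with Eigen (header-only, so nothing to link):

#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::Vector3d a(1.0, 0.0, 0.0);
    Eigen::Vector3d b(0.0, 1.0, 0.0);

    double          d = a.dot(b);       // dot product
    Eigen::Vector3d c = a.cross(b);     // cross product (3D only)

    std::cout << "dot = " << d << "\ncross = " << c.transpose() << std::endl;
    return 0;
}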
Armadillo
Armadillo employs a delayed evaluation approach to combine several operations into one and reduce (or eliminate) the need for temporaries. Where applicable, the order of operations is optimised. Delayed evaluation and optimisation are achieved through recursive templates and template meta-programming.
While chained operations such as addition, subtraction and multiplication (matrix and element-wise) are the primary targets for speed-up opportunities, other operations, such as manipulation of submatrices, can also be optimised. Care was taken to maintain efficiency for both "small" and "big" matrices.
I would stay away from using NRC code for anything other than learning the concepts.
I think what you are looking for is Blitz++
Check www.netlib.org, which is maintained by Oak Ridge National Lab and the University of Tennessee. You can search for numerical packages there. There's also Numerical Recipes in C++, which has code that goes with it, but the C++ version of the book is somewhat expensive and I've heard the code described as "terrible." The C and FORTRAN versions are free, and the associated code is quite good.
There is a nice vector library for 3D graphics in the Prophecy SDK:
Check out http://www.twilight3d.com/downloads.html
For linear algebra, try JAMA/TNT. That would cover dot products (plus matrix factoring and other stuff). As far as vector cross products go (really valid only for 3D, otherwise I think you get into tensors), I'm not sure.
For an extremely lightweight (single .h file) library, check out CImg. It's geared towards image processing, but has no problem handling vectors.