I need to serialize/deserialize certain data structures between two identical applications, but built using different compilers.
Consider primitive datatypes. The Boost documentation mentions that, in order to ensure compatibility, you need to use the numeric types in <boost/cstdint.hpp>. So did I understand it right that I can't simply declare int number;, but should instead write something like int16_t number;?
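In other words, instead of plain int members, I would declare something like this (the Packet struct is just a made-up example):

#include <boost/cstdint.hpp>

// Made-up record exchanged between the two applications. The fixed-width
// types keep each field's size and signedness the same under both compilers
// (endianness and struct padding still have to be handled separately).
struct Packet {
    boost::int16_t number;      // instead of: int number;
    boost::uint32_t length;
    boost::int64_t timestamp;
};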
I am using a large physics package, Geant4, for running simulations. There are a number of typedefs defined in the package and used exclusively throughout it.
typedef double G4double;
typedef float G4float;
typedef int G4int;
typedef bool G4bool;
typedef long G4long;
I understand the use of typedefs for exposing numeric types as domain-specific types, as this improves readability and allows the typedef to be changed at a later point, if needed. In this case, though, the typedefs are so broad that they do not serve that purpose.
I have also heard of typedefs being used to ensure a consistent bit size for each type, since sizeof(int) is not guaranteed by the standard. That cannot be the purpose here, as these typedefs are hard-coded and always the same, rather than being generated by a script after checking the size of the type in question.
What other purposes might there be that I am missing?
Some libraries like to do this to maintain consistency within their code. All types they use (even primitives) start with G4.
In theory you could redefine these on specific platforms if the size of some primitive type causes a problem.
Personally, I've never heard of anyone needing to do so. Where specific sizes are required, fixed-size types are generally used in the first place. But that doesn't mean it hasn't happened.
It's impossible for me to state why this particular library did this. However, I have done something similar in my own library, and I can give a few reasons, some of which you already mentioned:
It allows you to enhance cross-platform portability without having intXX_t types floating around.
It allows you to swap out types, possibly at build time. For example, if you wanted a 3D engine that supported both single- and double-precision floats, you might declare typedef float MyLibraryFloat; or typedef double MyLibraryFloat;, selected by a preprocessor define.
It can help library abstraction. Say you have a library that acts as the front end to several different libraries with (mostly) compatible APIs. Abstracting the types can help this, although it is often not as simple as a typedef in this case.
I think the best reason is the second. It is definitely the most useful if you are dealing with a situation where your types might change based on compiler settings. However, the first is common, especially in older C programs. OpenGL/GLUT does abstraction of types for size safety.
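For illustration, the second reason might look something like this; MyLibraryFloat and MYLIB_USE_DOUBLE are made-up names, not taken from any particular library:

// mylibrary_types.h (hypothetical header)
// Select the library-wide floating-point precision at build time,
// e.g. by compiling with -DMYLIB_USE_DOUBLE.
#ifdef MYLIB_USE_DOUBLE
typedef double MyLibraryFloat;
#else
typedef float MyLibraryFloat;
#endif

// All library code then uses MyLibraryFloat instead of float or double:
struct Vec3 {
    MyLibraryFloat x, y, z;
};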
In my 3d graphics program, I can write my classes to use either float or double to represent real numbers. I want to be able to easily make the choice at compile-time. I see two options:
1) Change all classes into templates which take the floating point type as an argument. Almost all of my classes will need to be changed to templates.
2) Create a typedef (e.g. typedef float real;) and then make sure I use only real throughout my classes.
I am now at a point at which I can make both changes to my existing code. Is there any existing idiom for doing this, or do you see anything wrong with either of the choices above?
In my opinion, templates are overkill for this situation (and they may lengthen compilation); a typedef is sufficient and also preserves type safety and readability.
The GLM Maths library also chose typedefs, for example:
GLM Repository
There are existing guidelines for this: use the typedef.
Standards like the MISRA C++ Coding Standard define an advisory rule for this (Rule 3-9-2):
typedefs that indicate size and signedness should be used in place of the basic numerical types
There might be other standards recommending the same practice. AFAIK the Windows API uses typedefs (WORD, DWORD, etc.) to do this.
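As a rough sketch of what such typedefs can look like (the short alias names here are made up), built on the standard fixed-width types:

#include <cstdint>

// Hypothetical project aliases that encode size and signedness,
// in the spirit of MISRA Rule 3-9-2.
typedef std::int16_t  s16;
typedef std::uint16_t u16;
typedef std::int32_t  s32;
typedef std::uint32_t u32;

s32 sampleCount = 0;   // instead of: int sampleCount = 0;
u16 statusFlags = 0;   // instead of: unsigned short statusFlags = 0;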
I've spent the day reading notes and watching a video on boost::fusion, and I really don't get some aspects of it.
Take for example, the boost::fusion::has_key<S> function. What is the purpose of having this in boost::fusion? Is the idea that we just try and move as much programming as possible to happen at compile-time? So pretty much any boost::fusion function is the same as the run-time version, except it now evaluates at compile time? (and we assume doing more at compile-time is good?).
Related to boost::fusion, I'm also a bit confused about why metafunctions always return types. Why is this?
Another way to look at boost::fusion is to think of it as a "poor man's introspection" library. The original motivation for boost::fusion comes from the direction of the boost::spirit parser/generator framework, in particular the need to support what are called "parser attributes".
Imagine, you've got a CSV string to parse:
aaaa, 1.1
The type this string parses into can be described as "a tuple of string and double". We can define such tuples in "plain" C++, either with old-school structs (struct { string a; double b; }) or the newer tuple<string, double>. The only thing we are missing is some sort of adapter which would allow us to pass tuples (and some other types) of arbitrary composition to a unified parser interface and expect it to make sense of them without passing any out-of-band information (such as the format strings used by scanf).
That's where boost::fusion comes into play. The most straightforward way to construct a "fusion sequence" is to adapt a normal struct:
#include <string>
#include <boost/fusion/include/adapt_struct.hpp>

struct a {
    std::string s;
    double d;
};

BOOST_FUSION_ADAPT_STRUCT(a, (std::string, s)(double, d))
The "ADAPT_STRUCT" macro adds the necessary information for parser framework (in this example) to be able to "iterate" over members of struct a to the tune of the following questions:
I just parsed a string. Can I assign it to first member of struct a?
I just parsed a double. Can I assign it to second member of struct a?
Are there any other members in struct a or should I stop parsing?
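To make that "iteration" concrete, here is a minimal, self-contained sketch (repeating the adapted struct for completeness) that visits the members with boost::fusion::for_each; the print_member functor is just for illustration and is not part of Spirit:

#include <iostream>
#include <string>
#include <boost/fusion/include/adapt_struct.hpp>
#include <boost/fusion/include/for_each.hpp>

struct a {
    std::string s;
    double d;
};

BOOST_FUSION_ADAPT_STRUCT(a, (std::string, s)(double, d))

// Generic visitor: works for any member type that supports operator<<
struct print_member {
    template <typename T>
    void operator()(const T& value) const { std::cout << value << '\n'; }
};

int main() {
    a parsed = { "aaaa", 1.1 };
    boost::fusion::for_each(parsed, print_member());  // visits s, then d
}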
Obviously, this basic example can be further extended (and boost::fusion supplies the capability) to address much more complex cases:
Variants - let's say the parser can encounter either a string or a double and wants to assign it to the right member of struct a. BOOST_FUSION_ADAPT_ASSOC_STRUCT comes to the rescue (now our parser can ask questions like "which member of struct a is of type double?").
Transformations - our parser may have been designed to accept certain types as parameters, but the rest of the program has changed quite a bit. Fusion metafunctions can then be conveniently used to adapt new types to old realities (or vice versa).
The rest of boost::fusion's functionality follows naturally from these basics. Fusion really shines when there is a need to convert (in either direction) between "loose" I/O data and the strongly typed/structured data that C++ programs operate on, especially when efficiency is a concern. It is the enabling factor behind spirit::qi and spirit::karma being such efficient (probably the fastest) I/O frameworks.
Fusion is there as a bridge between compile-time and run-time containers and algorithms. You may or may not want to move some of your processing to compile-time, but if you do want to then Fusion might help. I don't think it has a specific manifesto to move as much as possible to compile-time, although I may be wrong.
Metafunctions return types because template metaprogramming wasn't invented on purpose. It was discovered more or less by accident that C++ templates can be used as a compile-time programming language. A metafunction is a mapping from template arguments to an instantiation of a template. As of C++03 there were two kinds of template (class and function), so a metafunction has to "return" either a class or a function. Classes are more useful than functions, since you can put values etc. in their static data members.
C++11 adds another kind of template (alias templates, for typedefs), but that is largely irrelevant to metaprogramming. More importantly for compile-time programming, C++11 adds constexpr functions. They're properly designed for the purpose and they return values just like normal functions. Of course, their input is not a type, so they can't be mappings from types to something else in the way that templates can. So in that sense they lack the "meta-" part of metaprogramming. They're "just" compile-time evaluation of normal C++ functions, not metafunctions.
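A minimal sketch of the contrast (add_pointer_meta and square are made-up names):

#include <type_traits>

// Classic metafunction: "returns" a type through a nested typedef.
template <typename T>
struct add_pointer_meta {
    typedef T* type;
};

// C++11 constexpr: an ordinary function evaluated at compile time, returning a value.
constexpr int square(int x) { return x * x; }

static_assert(std::is_same<add_pointer_meta<int>::type, int*>::value,
              "the metafunction maps int to int*");
static_assert(square(4) == 16, "the constexpr function returns a value");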
In C++, is string a built-in data type?
Thanks.
What definition of "built-in" do you want to use? Is it built into the compiler toolset that you have? Yes, it should be. Is it treated specially by the compiler? No, the compiler treats that type like any user-defined type. Note that the same can probably be said of many other languages, for which most people would answer yes.
One of the focuses of the C++ committee is keeping the core language to a bare minimum and providing as much functionality as possible in libraries. That has two intentions: the core language stays more stable, and libraries can be reimplemented or enhanced without changing the compiler core. But more importantly, the fact that you do not need special compiler support to handle most of the standard library guarantees that the core language is expressive enough for most uses.
Put more simply, in negated form: if the language required special compiler support to implement std::string, that would mean users don't have enough power to express that or a similar concept in the core language.
It's not a primitive -- that is, it's not "built in" the way that int, char, etc. are. The closest built-in string-like type is char * or char[], which is the old C way of doing string handling, but even that requires a bunch of library code to use productively.
Rather, std::string is a part of the standard library that comes with nearly every modern C++ compiler in existence. You'll need to #include <string> (or include something else that includes it, but really you should include what your code refers to) in order to use it.
If you are talking about std::string then no.
If you are talking about a character array, I guess you can treat it as an array of a built-in type.
No.
Built-in or "primitive" types can be used to create string-like functionality with the built-in type char. This, along with utility functions, is what was used in C. In C++ this functionality still exists, but a more intuitive way of using strings was added.
The string class is a part of the std namespace and is an instantiation of the basic_string template class. It is defined as
typedef basic_string<char> string;
It is a class with the ability to dynamically resize as needed and has many member functions acting as utilities. It also uses operator overloading so it is more intuitive to use. However, this functionality also means it has an overhead in terms of speed.
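A short sketch of what that looks like in practice:

#include <iostream>
#include <string>

int main() {
    std::string s = "built";     // constructed from a char array literal
    s += "-in? ";                // operator overloading; storage grows as needed
    s.append("no, library");     // one of many member-function utilities
    std::cout << s << " (size " << s.size() << ")\n";
}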
Depends on what you mean by built-in, but probably not. std::string is defined by the standard library (and thus the C++ standard) and is very universally supported by different compilers, but it is not a part of the core language like int or char.
It can be built-in, but it doesn't have to be.
The C++ standard library has a documented interface for its components. This can be realized either as library code or as compiler built-ins. The standard doesn't say how it should be implemented.
When you use #include <string> you have the std::string implementation available. This could be either because the compiler implements it directly, or because it links to some library code. We don't know for sure, unless we check each compiler.
No known compiler has chosen to make it a built-in type, because none had to. The performance of a pure library implementation was evidently good enough.
No. It's part of the standard library.
No, string is a class.
Definitely not. String is a class from the standard library.
char * and char[] are built-in (compound) types, but char, int, float, double, void, and bool without any additions (such as pointers, arrays, or sign and size modifiers like unsigned and long) are the fundamental types.
No. There are different implementations (e.g. Microsoft Visual C++'s), but char* is the inherited C way of representing strings.
I'm writing a math library, the core of it is in C++. Later it may be implemented in pure C (C99 I suppose). I think I need a C like API so that I can get Python and matlab and the like to use the library. My impression is that doing this with C++ is painful.
So is there a good or standard or proper way to cast between double complex *some_array_in_C99 and complex<double> *some_array_in_cpp?
I could just use void *pointers, but I'm not sure if that's good.
This may be nitpicking, because ctypes seems to work fine with complex<double>, but I'm worried about matlab and other possible numerical environments.
The C99 and C++0x standards both specify that their respective double complex types must have the same alignment and layout as an array of two doubles. This means that you can get away with passing arguments as a void * and have your routines be (relatively) easily callable from either language, and this is an approach that many libraries have taken.
The C++0x standard guarantees (§26.4) that a reinterpret_cast of std::complex<double>* to double* will do the right thing; if I remember correctly, this was not so clearly specified in earlier versions of the standard. If you are willing to target C++0x, it may be possible for you to use this to do something cleaner for your interfaces.
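For example, something along these lines is well-defined under C++0x/C++11 (c_routine stands in for a hypothetical C function expecting an interleaved double array):

#include <complex>
#include <cstddef>
#include <vector>

// Stand-in for the C routine; in a real project this would be extern "C".
void c_routine(const double* data, std::size_t n) { (void)data; (void)n; }

void pass_to_c(std::vector< std::complex<double> >& v) {
    // Guaranteed by 26.4: a std::complex<double> is laid out as two adjacent
    // doubles, so the array can be reinterpreted in place (v must be non-empty).
    double* raw = reinterpret_cast<double*>(&v[0]);
    c_routine(raw, 2 * v.size());
}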
Given that the actual layout and alignment specifications are defined to agree, I would be tempted to just condition the type in the header file on the language; your implementation can use either language, and the data will be laid out properly in memory either way. I'm not sure how MATLAB does things internally, though, so I don't know whether this is compatible with MATLAB or not; if they use the standard LAPACK approach, then it will be compatible on many, but not all, platforms and circumstances; LAPACK defines its own double complex type as a struct with two double members, which will usually be laid out the same way in memory (this isn't guaranteed), but could follow a different calling convention on some platforms.
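A sketch of that "condition the type on the language" idea for a shared header (all names here are hypothetical):

/* mymath_complex.h -- shared between the C99 and C++ sides */
#ifdef __cplusplus
  #include <complex>
  typedef std::complex<double> mymath_complex;
#else
  #include <complex.h>
  typedef double complex mymath_complex;
#endif

#ifdef __cplusplus
extern "C" {
#endif
/* the same declaration is visible to both C and C++ callers */
void mymath_fft(mymath_complex *data, int n);
#ifdef __cplusplus
}
#endif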