I am using a large physics package, Geant4, for running simulations. The package defines a number of typedefs that are used exclusively throughout its code:
typedef double G4double;
typedef float G4float;
typedef int G4int;
typedef bool G4bool;
typedef long G4long;
I understand the use of typedefs for exposing numeric types as domain-specific types, as this improves readability and allows the typedef to be changed at a later point, if needed. In this case, though, the typedefs are so broad that they do not serve that purpose.
I have also heard of typedefs being used to ensure a consistent bit size for each type, since sizeof(int) is not guaranteed by the standard. That cannot be the purpose here, as these typedefs are always defined the same way, rather than being generated by a script after checking the size of the type in question.
What other purposes might there be that I am missing?
Some libraries like to do this to maintain consistency within their code. All types they use (even primitives) start with G4.
In theory you could redefine these on specific platforms if the size of some primitive type causes a problem.
Personally, I've never heard of anyone needing to do so. Where specific sizes are required, fixed-width types are generally used in the first place. But that doesn't mean it hasn't happened.
It's impossible for me to state why this particular library did this. However, I have done something similar in my own library, and I can give a few reasons, some of which you already mentioned:
It allows you to enhance cross-platform portability without having intXX_t types floating around.
It allows you to swap out types, possibly at build time. For example, if you wanted a 3D engine that supported both single and double float precision, you might declare typedef float MyLibraryFloat; or typedef double MyLibraryFloat;, selected by a preprocessor define.
It can help library abstraction. Say you have a library that acts as the front end to several different libraries with (mostly) compatible APIs. Abstracting the types can help this, although it is often not as simple as a typedef in this case.
I think the best reason is the second. It is definitely the most useful if you are dealing with a situation where your types might change based on compiler settings. However, the first is common, especially in older C programs. OpenGL/GLUT does abstraction of types for size safety.
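That second reason can be sketched as a build-time switch. This is a minimal illustration, not code from any actual library; the macro and the dot function are hypothetical:

```cpp
#include <type_traits>

// Precision selected at build time; define MYLIB_USE_DOUBLE to switch.
#ifdef MYLIB_USE_DOUBLE
typedef double MyLibraryFloat;
#else
typedef float MyLibraryFloat;
#endif

// All library code uses MyLibraryFloat, never float or double directly,
// so one flag changes the precision of the whole library.
MyLibraryFloat dot(const MyLibraryFloat a[3], const MyLibraryFloat b[3]) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
```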
Related
In my 3d graphics program, I can write my classes to use either float or double to represent real numbers. I want to be able to easily make the choice at compile-time. I see two options:
1) Change all classes into templates which take the floating point type as an argument. Almost all of my classes will need to be changed to templates.
2) Create a typedef (e.g. typedef float real;) and then make sure I use only real throughout my classes.
I am now at a point at which I can make both changes to my existing code. Is there any existing idiom for doing this, or do you see anything wrong with either of the choices above?
In my opinion, templates are overkill for this situation (and they lengthen compilation); a typedef is sufficient, and it also preserves type safety and readability.
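The typedef route might look like this; the name real comes from the question, while the Vec3 class is a hypothetical illustration:

```cpp
// One line controls the precision of the whole codebase.
typedef float real;   // change to: typedef double real;

struct Vec3 {
    real x, y, z;
    // Every member that touches floating point uses 'real'.
    real lengthSquared() const { return x * x + y * y + z * z; }
};
```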
The GLM Maths library also chose typedefs, for example:
GLM Repository
There are established guidelines for this: use the typedef.
Standards like the MISRA C++ Coding Standard define an advisory rule for this (Rule 3-9-2):
typedefs that indicate size and signedness should be used in place of the basic numerical types
Other standards suggest the same practice. As far as I know, the Win32 API uses typedefs (WORD, DWORD, etc.) to do this.
I find this atrocious:
std::numeric_limits<int>::max()
And really wish I could just write this:
int::max
Yes, there is INT_MAX and friends. But sometimes you are dealing with something like streamsize, which is a synonym for an unspecified built-in, so you don't know whether you should use INT_MAX or LONG_MAX or whatever. Is there a technical limitation that prevents something like int::max from being put into the language? Or is it just that nobody but me is interested in it?
Primitive types are not class types, so they don't have static members; that's it.
If you made them class types, you would be changing the foundations of the language (although, thinking about it, it wouldn't be such a problem for compatibility; more like some headaches for the standards committee to figure out exactly which members to add).
But more importantly, I think that nobody but you is interested in it :) ; personally I don't find numeric_limits so atrocious (actually, it's quite C++-ish - although many can argue that often what is C++-ish looks atrocious :P ).
All in all, I'd say this is the usual case of "every feature starts with minus 100 points"; the article talks about C#, but it's even more relevant for C++, which already has tons of language features and subtleties, a complex standard, and many compiler vendors that can exercise their vetoes:
One way to do that is through the concept of “minus 100 points”. Every feature starts out in the hole by 100 points, which means that it has to have a significant net positive effect on the overall package for it to make it into the language. Some features are okay features for a language to have, they just aren't quite good enough to make it into the language.
Even if the proposal were carefully prepared by someone else, it would still take time for the standard committee to examine and discuss it, and it would probably be rejected because it would be a duplication of stuff that is already possible without problems.
There are actually multiple issues:
built-in types aren't classes in C++
classes can't be extended with new members in C++
assuming the implementation were required to supply certain "members": which ones? There are lots of other attributes you might want to query for a type, and using traits allows new ones to be added.
That said, if you feel you want shorter notation for this, just create it:
#include <limits>

namespace traits {
    template <typename T> constexpr T max() {
        return std::numeric_limits<T>::max();
    }
}
int m = traits::max<int>();
using namespace traits;
int n = max<int>();
Why don't you use std::numeric_limits<streamsize>::max()? As for why it's a function (max()) instead of a constant (max), I don't know. In my own app I made my own num_traits type that provides the maximum value as a static constant instead of a function, (and provides significantly more information than numeric_limits).
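A num_traits-style wrapper like the one described could expose the maximum as a constant rather than a function. This is a hypothetical sketch, not the answerer's actual code:

```cpp
#include <limits>

// Hypothetical sketch: expose numeric_limits values as static constants,
// so call sites read num_traits<T>::max_value instead of calling max().
template <typename T>
struct num_traits {
    static constexpr T max_value = std::numeric_limits<T>::max();
    static constexpr T min_value = std::numeric_limits<T>::lowest();
};
```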
It would be nice if they had defined some constants and functions on "int" itself, the way C# has int.MinValue, int.MaxValue and int.Parse(string), but that's just not what the C++ committee decided.
For my projects, I usually define a lot of aliases for types like unsigned int, char and double as well as std::string and others.
I also aliased and to &&, or to ||, not to !, etc.
Is this considered bad practice or okay to do?
Defining types to add context within your code is acceptable, and even encouraged. Screwing around with operators will only encourage the next person that has to maintain your code to bring a gun to your house.
Well, consider the newcomers who are accustomed to C++. They will have difficulties maintaining your project.
Mind that there are many cases where aliasing is genuinely valid. A good example is complicated nested STL containers.
Example:
typedef int ClientID;
typedef double Price;
typedef map<ClientID, map<Date, Price> > ClientPurchases;
Now, instead of
map<int, map<Date, double> >::iterator it = clientPurchases.find(clientId);
you can write
ClientPurchases::iterator it = clientPurchases.find(clientId);
which seems to be more clear and readable.
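A compilable version of the example above, using std::string as a stand-in for the Date type (which the original leaves unspecified):

```cpp
#include <map>
#include <string>

typedef int ClientID;
typedef double Price;
typedef std::string Date;  // stand-in for a real Date type
typedef std::map<ClientID, std::map<Date, Price> > ClientPurchases;

// The alias keeps call sites readable even as the nesting grows.
Price lookup(ClientPurchases& purchases, ClientID id, const Date& date) {
    ClientPurchases::iterator it = purchases.find(id);
    return (it == purchases.end()) ? 0.0 : it->second[date];
}
```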
If you're only using it for pointlessly renaming language features (as opposed to the example @Vlad gives), then it's the Wrong Thing.
It definitely makes the code less readable - anyone proficient in C++ will see (x ? y : z) and know that it's a ternary conditional operator. Although ORLY x YARLY y NOWAI z KTHX could be the same thing, it will confuse the viewer: "is this YARLY NOWAI the exact same thing as ? :, renamed for the author's convenience, or does it have subtle differences?" If these "aliases" are the same thing as the standard language elements, they will only slow down the next person to maintain your code.
TLDR: Reading code, any code, is hard enough without having to look up your private alternate syntax all the time.
That’s horrible. Don’t do it, please. Write idiomatic C++, not some macro-riddled monstrosity. In general, it’s extremely bad practice to define such macros, except in very specific cases (such as the BOOST_FOREACH macro).
That said, and, or and not are actually already valid aliases for &&, || and ! in C++!
It’s just that Visual Studio only knows them if you first include the standard header <ciso646>. Other compilers don’t need this.
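For example, this is standard C++ with no macros involved; the in_range function is just an illustration:

```cpp
// 'and', 'or' and 'not' are standard alternative tokens; conforming
// compilers need no header for them (older MSVC requires <ciso646>
// or /permissive-).
bool in_range(int x, int lo, int hi) {
    // identical to: x >= lo && x <= hi && !(x == 0)
    return x >= lo and x <= hi and not (x == 0);
}
```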
Types are something else. Using typedef to create type aliases depending on context makes sense if it augments the expressiveness of the code. Then it’s encouraged. However, even better would be to create own types instead of aliases. For example, I can’t imagine it ever being beneficial to create an alias for std::string – why not just use std::string directly? (An exception are of course generic data structures and algorithms.)
"and", "or", "not" are OK because they're part of the language, but it's probably better to write C++ in a style that other C++ programmers use, and very few people bother using them. Don't alias them yourself: they're reserved names and it's not valid in general to use reserved names even in the preprocessor. If your compiler doesn't provide them in its default mode (i.e. it's not standard-compliant), you could fake them up with #define, but you may be setting yourself up for trouble in future if you change compiler, or change compiler options.
typedefs for builtin types might make sense in certain circumstances. For example in C99 (but not in C++03), there are extended integer types such as int32_t, which specifies a 32 bit integer, and on a particular system that might be a typedef for int. They come from stdint.h (<cstdint> in C++0x), and if your C++ compiler doesn't provide that as an extension, you can generally hunt down or write a version of it that will work for your system.

If you have some purpose in mind for which you might in future want to use a different integer type (on a different system perhaps), then by all means hide the "real" type behind a name that describes the important properties that are the reason you chose that type for the purpose.

If you just think "int" is unnecessarily brief, and it should be "integer", then you're not really helping anyone, even yourself, by trying to tweak the language in such a superficial way. It's an extra indirection for no gain, and in the long run you're better off learning C++ than changing C++ to "make more sense" to you.
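The fixed-width names describe the property that matters (the width), whatever the underlying type happens to be, and a domain-level alias on top can document the purpose. A sketch, with SampleValue as a hypothetical name:

```cpp
#include <cstdint>
#include <climits>

// int32_t is often a typedef for int, but what it guarantees is the
// width, not the underlying type.
static_assert(sizeof(std::int32_t) * CHAR_BIT == 32,
              "int32_t is exactly 32 bits wide");

// A domain-level alias on top documents *why* this width was chosen.
typedef std::int32_t SampleValue;  // hypothetical name
```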
I can't think of a good reason to use any other name for string, except in a case similar to the extended integer types, where your name will perhaps be a typedef for string on some builds, and wstring on others.
If you're not a native English-speaker, and are trying to "translate" C++ to another language, then I sort of see your point but I don't think it's a good idea. Even other speakers of your language will know C++ better than they know the translations you happen to have picked. But I am a native English speaker, so I don't really know how much difference it makes. Given how many C++ programmers there are in the world who don't translate languages, I suspect it's not a huge deal compared with the fact that all the documentation is in English...
If every C++ developer were familiar with your aliases, then why not; but with these aliases you are essentially introducing a new language to whoever needs to maintain your code.
Why add an extra mental step that for the most part does not add any clarity? (It's pretty obvious to any C/C++ programmer what && and || are doing, and anyway in C++ you can use the and and or keywords directly.)
I'm writing a math library, the core of it is in C++. Later it may be implemented in pure C (C99 I suppose). I think I need a C like API so that I can get Python and matlab and the like to use the library. My impression is that doing this with C++ is painful.
So is there a good or standard or proper way to cast between double complex *some_array_in_C99, and complex<double> *some_array_in_cpp ?
I could just use void *pointers, but I'm not sure if that's good.
This may be nitpicking, because ctypes seems to work fine with complex<double>, but I'm worried about matlab and other possible numerical environments.
The C99 and C++0x standards both specify that their respective double complex types must have the same alignment and layout as an array of two doubles. This means that you can get away with passing arguments as a void * and have your routines be (relatively) easily callable from either language, and this is an approach that many libraries have taken.
The C++0x standard guarantees (§26.4) that a reinterpret_cast of std::complex<double>* to double* will do the right thing; if I remember correctly, this was not so clearly specified in earlier versions of the standard. If you are willing to target C++0x, it may be possible for you to use this to do something cleaner for your interfaces.
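That guarantee can be exercised like this (a minimal sketch; the function name is made up for illustration):

```cpp
#include <complex>

// C++11 [complex.numbers] guarantees that reinterpret_cast of a
// std::complex<double>* to double* yields the real part at index 0
// and the imaginary part at index 1.
double real_part_via_cast(std::complex<double>& z) {
    double* d = reinterpret_cast<double*>(&z);
    return d[0];
}
```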
Given that the layout and alignment specifications are defined to agree, I would be tempted to just condition the type in the header file on the language; your implementation can use either language, and the data will be laid out properly in memory either way. I'm not sure how MATLAB works internally, though, so I don't know whether this is compatible with MATLAB. If it uses the standard LAPACK approach, it will be on many, but not all, platforms: LAPACK defines its own double-complex type as a struct with two double members, which will usually be laid out the same way in memory (though this isn't guaranteed) but could follow a different calling convention on some platforms.
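Conditioning the header on the language could look like this; the typedef name is hypothetical:

```cpp
/* Shared header sketch: the same name maps to the layout-compatible
   complex type in each language. */
#ifdef __cplusplus
  #include <complex>
  typedef std::complex<double> mylib_complex;
#else
  #include <complex.h>  /* C99 */
  typedef double _Complex mylib_complex;
#endif
```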
I have a char (ie. byte) buffer that I'm sending over the network. At some point in the future I might want to switch the buffer to a different type like unsigned char or short. I've been thinking about doing something like this:
typedef char bufferElementType;
And whenever I do anything with a buffer element I declare it as bufferElementType rather than char. That way I could switch to another type by changing this typedef (of course it wouldn't be that simple, but it would at least be easy to identify the places that need to be modified... there'll be a bufferElementType nearby).
Is this a valid / good use of typedef? Is it not worth the trouble? Is it going to give me a headache at some point in the future? Is it going to make maintainance programmers hate me?
I've read through When Should I Use Typedef In C++, but no one really covered this.
It is a great (and normal) usage. You have to be careful, though, that the type you select meets the same signed/unsigned criteria and responds similarly to operators; that will make it easier to change the type later.
Another option is to use templates, which avoid fixing the type until compile time. A class that is defined as:
template <typename CharType>
class Whatever
{
    CharType aChar;
    ...
};
is able to work with any char type you select, while it responds to all the operators in the same way.
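A usage sketch of that approach: each instantiation fixes the element type, and the operators behave uniformly across them. Buffer and sum are hypothetical names:

```cpp
// Hypothetical sketch: the element type is a template parameter, so the
// same code works for char, unsigned char, short, int, ...
template <typename CharType>
class Buffer {
public:
    CharType sum(const CharType* data, int n) const {
        CharType total = 0;
        for (int i = 0; i < n; ++i) total += data[i];
        return total;
    }
};
```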
Another advantage of typedefs is that, if used wisely, they can increase readability. As a really dumb example, a Meter and a Degree can both be doubles, but you'd like to differentiate between them. Using a typedef is one quick & easy solution to make errors more visible.
Note: a more robust solution to the above example would have been to create different types for a meter and a degree. Thus, the compiler can enforce things itself. This requires a bit of work, which doesn't always pay off, however. Using typedefs is a quick & easy way to make errors visible, as described in the article linked above.
Yes, this is the perfect usage for typedef, at least in C.
For C++ it may be argued that templates are a better idea (as Diego Sevilla has suggested) but they have their drawbacks. (Extra work if everything using the data type is not already wrapped in a few classes, slower compilation times and a more complex source file structure, etc.)
It also makes sense to combine the two approaches, that is, give a typedef name to a template parameter.
Note that as you're sending data over a network, char and other integer types may not be interchangeable (e.g. due to endian-ness). In that case, using a templated class with specialized functions might make more sense. (send<char> sends the byte, send<short> converts it to network byte order first)
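The endianness concern can be handled explicitly by serialising multi-byte values to network (big-endian) order byte by byte. A hypothetical sketch, not code from the question:

```cpp
#include <cstdint>

// Write a 16-bit value in network byte order, independent of host
// endianness; a single byte needs no conversion at all.
inline void put_u16_be(std::uint16_t v, unsigned char out[2]) {
    out[0] = static_cast<unsigned char>(v >> 8);    // most significant byte first
    out[1] = static_cast<unsigned char>(v & 0xFF);  // least significant byte second
}
```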
Yet another solution would be to create a "BufferElementType" class with helper methods (convertToNetworkOrderBytes()), but I'll bet that would be overkill for you.