My knowledge of C++ at this point is more academic than anything else. In all my reading thus far, the use of explicit conversion with named casts (const_cast, static_cast, reinterpret_cast, dynamic_cast) has come with a big warning label (and it's easy to see why) that implies that explicit conversion is symptomatic of bad design and should only be used as a last resort in desperate circumstances. So, I have to ask:
Is explicit conversion with named casts really just jury rigging code or is there a more graceful and positive application to this feature? Is there a good example of the latter?
There are cases when you just can't do without it. One example: you have multiple inheritance and need to convert the this pointer to void* while ensuring that the pointer that goes into the void* still points to the right subobject of the current object. Using an explicit cast is the only way to achieve that (see the sketch below).
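Here is a minimal sketch of that situation. The interface names (IUnknownLike, IEventSink) and the register_sink callback are made up for illustration; the point is only that the explicit static_cast selects the correct subobject before the conversion to void*.

struct IUnknownLike { virtual ~IUnknownLike() = default; virtual void Release() = 0; };
struct IEventSink   { virtual ~IEventSink() = default;   virtual void OnEvent(int) = 0; };

// Widget inherits both interfaces, so the IEventSink subobject generally
// starts at a different address than the Widget object itself.
struct Widget : IUnknownLike, IEventSink {
    void Release() override {}
    void OnEvent(int) override {}

    void registerSelf(void (*register_sink)(void* sink)) {
        // Passing plain `this` would hand the API the address of the whole
        // Widget (same as the IUnknownLike subobject). static_cast selects
        // the IEventSink subobject before the implicit conversion to void*.
        register_sink(static_cast<IEventSink*>(this));
    }
};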
There's an opinion that if you can't do without a cast, you have a bad design. I can't completely agree with this: different situations are possible, including the one mentioned above. But if you need explicit casts very often, that probably is a sign of bad design.
There are situations when you can't really avoid explicit casts, especially when interacting with C libraries or badly designed C++ libraries (like the COM library sharptooth used as an example).
In general, the use of explicit casts IS a red flag. It does not necessarily mean bad code, but it does draw attention to a potentially dangerous use.
However, you should not throw the four casts into the same bag: static_cast and dynamic_cast are frequently used for down-casting (from Base to Derived) or for navigating between related types. Their occurrence in the code is pretty normal (indeed, it's difficult to write a Visitor pattern without either).
On the other hand, the use of const_cast and reinterpret_cast is much more dangerous.
using const_cast to try and modify a read-only object is undefined behavior (thanks to James McNellis for correction)
reinterpret_cast is normally only used to deal with raw memory (allocators)
They have their use, of course, but should not be encountered in normal code. For dealing with external or C APIs they might be necessary though.
At least that's my opinion.
How bad a cast is typically depends on the type of cast. There are legitimate uses for all of these casts, but some smell worse than others.
const_cast is used to cast away constness (since adding it doesn't require a cast). Ideally, that should never be used. It makes it easy to invoke undefined behavior (trying to change an object originally designated const), and in any case breaks the const-correctness of the program. It is sometimes necessary when interfacing with APIs that are not themselves const-correct, which may for example ask for a char * when they're going to treat it as const char *, but since you shouldn't write APIs that way it's a sign that you're using a really old API or somebody screwed up.
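For illustration, here's a hedged sketch of that interfacing case; legacy_print is a hypothetical C function that takes char* but is documented not to modify the string it receives.

#include <string>

// Stand-in for an old, non-const-correct API.
extern "C" void legacy_print(char* text);

void print_message(const std::string& msg) {
    // Safe only because legacy_print never writes through the pointer;
    // if it did, this would be undefined behavior.
    legacy_print(const_cast<char*>(msg.c_str()));
}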
reinterpret_cast is always going to be platform-dependent, and is therefore at best questionable in portable code. Moreover, unless you're doing low-level operations on the physical structure of objects, it doesn't preserve meaning. In C and C++, a type is supposed to be meaningful. An int is a number that means something; an int that is basically the concatenation of chars doesn't really mean anything.
dynamic_cast is normally used for downcasting; e.g. from Base * to Derived *, with the proviso that either it works or it returns 0. This subverts OO in much the same way as a switch statement on a type tag does: it moves the code that defines what a class is away from the class definition. This couples the class definitions with other code and increases the potential maintenance load.
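A minimal illustration of that downcast, using a hypothetical Shape/Circle hierarchy:

struct Shape  { virtual ~Shape() = default; };
struct Circle : Shape { double radius = 1.0; };

double radius_or_zero(Shape* s) {
    // dynamic_cast yields a null pointer if s does not actually point to a Circle.
    if (Circle* c = dynamic_cast<Circle*>(s)) {
        return c->radius;
    }
    return 0.0;
}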
static_cast is used for data conversions that are known to be generally correct, such as conversions to and from void *, known safe pointer casts within the class hierarchy, that sort of thing. About the worst you can say for it is that it subverts the type system to some extent. It's likely to be needed when interfacing with C libraries, or the C part of the standard library, as void * is often used in C functions.
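As a sketch of that void* round trip, consider the C standard library's qsort: the comparator receives const void*, and static_cast restores the real element type.

#include <cstdlib>   // std::qsort, std::size_t

int compare_ints(const void* a, const void* b) {
    int lhs = *static_cast<const int*>(a);
    int rhs = *static_cast<const int*>(b);
    return (lhs > rhs) - (lhs < rhs);
}

void sort_ints(int* data, std::size_t count) {
    std::qsort(data, count, sizeof(int), &compare_ints);
}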
In general, well-designed and well-written C++ code will avoid the use cases above, in some cases because the only use of the cast is to do potentially dangerous things, and in other cases because such code tends to avoid the need for such conversions. The C++ type system is generally seen as a good thing to maintain, and casts subvert it.
IMO, like most things, they're tools, with appropriate uses and inappropriate ones. Casting is probably an area where the tools frequently get used inappropriately, for example, to cast between an int and pointer type with a reinterpret_cast (which can break on platforms where the two are different sizes), or to const_cast away constness purely as a hack, and so on.
If you know what they're for and the intended uses, there's absolutely nothing wrong with using them for what they were designed for.
There is an irony to explicit casts. The developer whose poor C++ design skills lead him to write code requiring a lot of casting is the same developer who doesn't use the explicit casting mechanisms appropriately, or at all, and litters his code with C-style casts.
On the other hand, the developer who understands their purpose, when to use them and when not to, and what the alternatives are, is not writing code that requires much casting!
Check out the more fine-grained variations on these casts, such as polymorphic_cast, in the boost conversions library, to give you an idea of just how careful C++ programmers are when it comes to casting.
Casts are a sign that you're trying to put a round peg in a square hole. Sometimes that's part of the job. But if you have some control over both the hole and the peg, it would be better not to create this condition, and writing a cast should prompt you to ask yourself whether there is something you could have done so the fit was a little smoother.
Related
I recently learned that it is Undefined Behavior to reinterpret a POD as a different POD by reinterpret_casting its address. So I'm just wondering what a potential use-case of reinterpret_cast might be, if it can't be used for what its name suggests?
There are two situations in which I’ve used reinterpret_cast:
To cast to and from char* for the purpose of serialisation or when talking to a legacy API. In this case the cast from char* to an object pointer is still strictly speaking UB (even though done very frequently). And you don’t actually need reinterpret_cast here — you could use memcpy instead, but the cast might under specific circumstances avoid a copy (but in situations where reinterpreting the bytes is valid in the first place, memcpy usually doesn’t generate any redundant copies either, the compiler is smart enough for that).
To cast pointers from/to std::uintptr_t to serialise them across a legacy API or to perform some non-pointer arithmetic on them. This is definitely an odd beast and doesn’t happen frequently (even in low-level code) but consider the situation where one wants to exploit the fact that pointers on a given platform don’t use the most significant bits, and these bits can thus be used to store some bit flags. Garbage collector implementations occasionally do this. The lower bits of a pointer can sometimes also be used, if the programmer knows that the pointer will always be aligned e.g. at an 8 byte boundary (so the lowest three bits must be 0).
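Below is a hedged sketch of both situations; the Header and Node types are made up for illustration.

#include <cstdint>
#include <cstring>
#include <vector>

struct Header { std::uint32_t magic; std::uint32_t length; };  // trivially copyable

// Case 1: viewing an object as bytes. The cast to const char* is the classic
// serialisation use; memcpy does the same job without any cast.
void append_bytes(std::vector<char>& out, const Header& h) {
    const char* p = reinterpret_cast<const char*>(&h);
    out.insert(out.end(), p, p + sizeof h);
}

void read_header(const char* buffer, Header& h) {
    std::memcpy(&h, buffer, sizeof h);   // avoids the char* -> Header* cast entirely
}

// Case 2: stashing flag bits in the low bits of a pointer that is known to be
// at least 8-byte aligned (so its low three bits are zero).
struct alignas(8) Node { int value; };

std::uintptr_t tag(Node* p, unsigned flags) {               // flags must fit in 3 bits
    return reinterpret_cast<std::uintptr_t>(p) | (flags & 0x7u);
}

Node* untag(std::uintptr_t tagged) {
    return reinterpret_cast<Node*>(tagged & ~std::uintptr_t{0x7});
}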
But to be honest I can’t remember the last concrete, legitimate situation where I’ve actually used reinterpret_cast. It’s definitely many years ago.
Conforming implementations of C and C++ are allowed to extend the semantics of the languages by behaving meaningfully even in cases where the Standards do not require them to do so. Implementations that do so may be suitable for a wider range of tasks than implementations that do not. In many cases, it is useful to have a consistent syntax for constructs that will be processed meaningfully and consistently by implementations designed for low-level programming tasks, even if implementations not designed for such purposes would process them nonsensically.
One very frequent use case is when you're working with C library functions that take an opaque void * that gets forwarded to a callback function. Using reinterpret_cast on both sides of the fence, so to speak, keeps everything proper.
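A sketch of that pattern, with register_timer standing in for a hypothetical C API that stores a user-supplied void* and later hands it back to the callback unchanged (a static_cast round trip through void* works just as well):

extern "C" void register_timer(void (*callback)(void* user_data), void* user_data);

struct Blinker { int interval_ms = 500; };

void on_timer(void* user_data) {
    // Cast back to exactly the type that was passed in below.
    Blinker* b = reinterpret_cast<Blinker*>(user_data);
    b->interval_ms += 1;
}

void install(Blinker& b) {
    register_timer(&on_timer, reinterpret_cast<void*>(&b));
}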
I am new to C++ style casts and I am worried that using C++ style casts will ruin the performance of my application because I have a real-time-critical deadline in my interrupt-service-routine.
I heard that some casts will even throw exceptions!
I would like to use the C++ style casts because it would make my code more "robust". However, if there is any performance hit then I will probably not use C++ style casts and will instead spend more time testing the code that uses C-style casts.
Has anyone done any rigorous testing/profiling to compare the performance of C++ style casts to C style casts?
What were your results?
What conclusions did you draw?
If the C++ style cast can be conceptually replaced by a C-style cast there will be no overhead. If it can't, as in the case of dynamic_cast, for which there is no C equivalent, you have to pay the cost one way or another.
As an example, the following code:
int x;
float f = 123.456f;
x = (int) f;
x = static_cast<int>(f);
generates identical code for both casts with VC++ - code is:
00401041 fld dword ptr [ebp-8]
00401044 call __ftol (0040110c)
00401049 mov dword ptr [ebp-4],eax
The only C++ cast that can throw is dynamic_cast when casting to a reference. To avoid this, cast to a pointer, which will return 0 if the cast fails.
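To illustrate both flavours (with a made-up Base/Derived pair): the pointer form returns a null pointer on failure, while the reference form throws std::bad_cast.

#include <typeinfo>   // std::bad_cast

struct Base    { virtual ~Base() = default; };
struct Derived : Base {};

void use(Base& b) {
    // Pointer form: nullptr on failure, never throws.
    if (Derived* p = dynamic_cast<Derived*>(&b)) {
        (void)p;  // ... use p ...
    }

    // Reference form: throws std::bad_cast on failure.
    try {
        Derived& d = dynamic_cast<Derived&>(b);
        (void)d;  // ... use d ...
    } catch (const std::bad_cast&) {
        // conversion failed
    }
}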
The only one with any extra cost at runtime is dynamic_cast, which has capabilities that cannot be reproduced directly with a C style cast anyway. So you have no problem.
The easiest way to reassure yourself of this is to instruct your compiler to generate assembler output, and examine the code it generates. For example, in any sanely implemented compiler, reinterpret_cast will disappear altogether, because it just means "go blindly ahead and pretend the data is of this type".
Why would there be a performance hit? They perform exactly the same functionality as C casts. The only difference is that they catch more errors at compile-time, and they're easier to search for in your source code.
static_cast<float>(3) is exactly equivalent to (float)3, and will generate exactly the same code.
Given a float f = 42.0f
reinterpret_cast<int*>(&f) is exactly equivalent to (int*)&f, and will generate exactly the same code.
And so on. The only cast that differs is dynamic_cast, which, yes, can throw an exception. But that is because it does things that the C-style cast cannot do. So don't use dynamic_cast unless you need its functionality.
It is usually safe to assume that compiler writers are intelligent. Given two different expressions that have the same semantics according to the standard, it is usually safe to assume that they will be implemented identically in the compiler.
Oops: The second example should be reinterpret_cast, not dynamic_cast, of course. Fixed it now.
Ok, just to make it absolutely clear, here is what the C++ standard says:
§5.4/5:
The conversions performed by
a const_cast (5.2.11),
a static_cast (5.2.9),
a static_cast followed by a const_cast,
a reinterpret_cast (5.2.10), or
a reinterpret_cast followed by a const_cast
can be performed using the cast notation of explicit type conversion. The same semantic restrictions and behaviors apply. If a conversion can be interpreted in more than one of the ways listed above, the interpretation that appears first in the list is used, even if a cast resulting from that interpretation is ill-formed.
So if anything, since the C-style cast is implemented in terms of the C++ casts, C-style casts should be slower. (Of course they aren't, because the compiler generates the same code in either case, but it's more plausible than the C++-style casts being slower.)
There are four C++ style casts:
const_cast
static_cast
reinterpret_cast
dynamic_cast
As already mentioned, the first three are compile-time operations. There is no run-time penalty for using them. They are messages to the compiler that data that has been declared one way needs to be accessed a different way. "I said this was an int*, but let me access it as if it were a char* pointing to sizeof(int) chars" or "I said this data was read-only, and now I need to pass it to a function that won't modify it, but doesn't take the parameter as a const reference."
Aside from data corruption caused by casting to the wrong type and trampling over data (always a possibility with C-style casts), the most common run-time problem with these casts involves data that was actually defined const: the cast compiles, but modifying an object that was originally defined const is undefined. Undefined means you're not even guaranteed to get a crash.
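A short illustration of that distinction:

const int limit = 10;   // object defined const

void bad() {
    int* p = const_cast<int*>(&limit);  // compiles fine
    *p = 20;                            // undefined behavior: limit really is const
}

void tolerated(int& counter) {
    // By contrast, casting away const that was merely added to a view of a
    // non-const object, and then modifying it, is well defined.
    const int& view = counter;
    const_cast<int&>(view) = 42;
}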
dynamic_cast is a run-time construct and has to have a run-time cost.
The value of these casts is that they specifically say what you're trying to cast from/to, stick out visually, and can be searched for with brain-dead tools. I would recommend using them over using C-style casts.
When using dynamic_cast, several checks are made at runtime to prevent you from doing something stupid (more details on the GCC mailing list); the cost of a single dynamic_cast depends on how many classes are involved, which classes they are, and so on.
If you're really sure the cast is safe, you can use static_cast instead and skip the runtime check.
Although I agree with the statement "the only one with any extra cost at runtime is dynamic_cast", keep in mind there may be compiler-specific differences.
I've seen a few bugs filed against my current compiler where code generation or optimization differed slightly depending on whether you used a C-style cast or a C++-style static_cast.
So if you're worried, check the disassembly on hotspots. Otherwise just avoid dynamic casts when you don't need them. (If you turn off RTTI, you can't use dynamic_cast anyway.)
The canonical truth is the assembly, so try both and see if you get different logic.
If you get the exact same assembly, there is no difference; there can't be. The only place you really need to stick with the old C casts is in pure C routines and libraries, where it makes no sense to introduce a C++ dependency just for type casting.
One thing to be aware of is that casts happen all over the place in a decent-sized piece of code. In my entire career I've never searched for "all casts" in a piece of logic; you tend to search for casts to a specific type like A, and a search on "(A)" is usually just as effective as something like "static_cast<A>". Use the newer casts for things like type validation and such, not because they make searches you'll never do anyway easier.
I have been told that I should not use reinterpret_cast or const_cast for pointer-to-pointer conversions, and that only dynamic_cast should be used, because the other casts can create problems later. So my question is: why haven't reinterpret_cast and the other dangerous casts been removed from the C++ standard?
Because there are times when you need a cast which does what reinterpret_cast does. As for dynamic_cast, I almost never use this cast; that's only for casting from a parent type to a more derived type. Most of the time I can prove which child type I'm working with, and I use static_cast. I also use static_cast for compile time type conversions, for example from signed to unsigned integers.
static_cast, not dynamic_cast, is the most common kind of cast. If your design relies on dynamic_cast too heavily, that's a code smell indicating your class hierarchy violates LSP in most cases.
In the real world, you frequently have to cast pointers in ways the compiler/runtime can't validate. For example, in pthreads you pass a void* to the new thread's start routine. Sometimes that argument is really a pointer to a class object, and the compiler has no way to tell. It's one of those "real life" issues.
Incidentally, I find myself using dynamic_cast infrequently. My main use for it has been exception type scraping in catch blocks.
They are dangerous but sometimes you need them.
C++ is not known for removing constructs that are dangerous to the new user. We will let you run with those scissors (while eating cake).
What makes them good is that the dangerous code sticks out, so it is easy to spot. When people do code reviews they can quickly find the dangerous parts and give them a little extra scrutiny.
Use reinterpret_cast for casting between unrelated pointer types.
Use static_cast for explicit, supported conversions.
Use dynamic_cast to cast a pointer of one type to a pointer of a derived type.
If you know a pointer to a parent type points to a child type, you can safely static_cast from the parent type to the child type. A cast from a child type pointer to a parent type pointer is implicit and requires no explicit cast.
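A minimal sketch of that up/down relationship, with hypothetical Animal/Dog types:

struct Animal { virtual ~Animal() = default; };
struct Dog    : Animal { void bark() {} };

void example(Dog& dog) {
    Animal* a = &dog;               // child -> parent: implicit, no cast needed
    Dog* d = static_cast<Dog*>(a);  // parent -> child: safe only because we
                                    // know a really points at a Dog
    d->bark();
}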
A reinterpret_cast example from my own code base:
unsigned int CTaskManager::CWorker::WorkerMain(void* Parameters)
{
CWorker* This = reinterpret_cast<CWorker*>(Parameters);
// ...
}
bool CTaskManager::CWorker::Initialize()
{
// ...
// Create worker.
m_ThreadHandle = reinterpret_cast<HANDLE>(_beginthreadex(NULL, 0, &(WorkerMain), this, 0, NULL));
// ...
}
A lot of dangerous operations, though they should generally be avoided, do have rare legitimate uses. Furthermore, the rule of thumb for language features and APIs is that once something is there, you can't get rid of it; removing a feature from the C++ language has the potential to break lots of existing code. Typically, removing a feature requires demonstrating that it is not used, or that its use is so limited that the cost of getting rid of it would be small. Even trigraphs, which are almost never used (unless you are at IBM) and which people wanted to get rid of, survived the axe. These various casts and other usually dangerous operations are used far, far more than trigraphs are.
Could somebody please elaborate on the differences between C-style casts and the C++ named casts?
The difference is that (int)foo can mean half a dozen different things.
It might be a static_cast (converting between statically known types), a const_cast (adding or removing const-ness), a reinterpret_cast (converting between pointer types), or some combination of these.
The compiler tries each of them until it finds one that works. Which means that it may not always pick the one you expect, so it can become a subtle source of bugs.
Further, static_cast is a lot easier to search for or do search/replace on.
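To illustrate how the C-style cast can silently pick a conversion you didn't expect (hypothetical unrelated types A and B):

struct A { int x; };
struct B { double y; };

void demo(A* a) {
    B* b1 = (B*)a;                    // compiles: silently a reinterpret_cast
    // B* b2 = static_cast<B*>(a);    // error: A and B are unrelated, caught at compile time
    (void)b1;
}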
Look at what Stroustrup has to say about that, including the following:
Because the C-style cast (T) can be used to express many logically different operations, the compiler has only the barest chance to catch misuses. [...]
The "new-style casts" were introduced to give programmers a chance to state their intentions more clearly and for the compiler to catch more errors. [...]
In particular, C++ makes the distinction between static_cast and reinterpret_cast:
The idea is that conversions allowed by static_cast are somewhat less likely to lead to errors than those that require reinterpret_cast. In principle, it is possible to use the result of a static_cast without casting it back to its original type, whereas you should always cast the result of a reinterpret_cast back to its original type before using it to ensure portability.
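A minimal sketch of that distinction, with made-up helper functions: the result of a static_cast through void* is directly usable, whereas the reinterpret_cast to an integer is only portably useful for casting back to the original pointer type.

#include <cstdint>

void* stash(int* p) {
    return static_cast<void*>(p);           // the void* is directly usable
}

int* recover(void* stored) {
    return static_cast<int*>(stored);
}

std::uintptr_t as_integer(int* p) {
    // Only portable use of the result: cast it back to int* later.
    return reinterpret_cast<std::uintptr_t>(p);
}

int* from_integer(std::uintptr_t v) {
    return reinterpret_cast<int*>(v);
}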
(int) foo is closest in spirit to C++'s reinterpret_cast<int>, i.e. no checks are made on the validity of the cast.