In the WinAPI, WndProc has lParam and wParam parameters, which are plain integer types. This means you generally have to typecast them to the correct type.
I've read that message systems in OOP should not need to cast things and that this is a bad practice. Therefore, in a language like C++, how would a basic message system work, where each message has 2 parameters, or even object pointers, depending on the message, but doing so without typecasting?
Thanks
For the general case I doubt you can do without some typecasting.
However, in a C++-level design the typecasting can mostly be centralized.
Look up the visitor pattern.
Cheers & hth.,
The problem with type-casting is that it isn't safe. Boost provides a number of ways to do typecasting in a safe way.
If the data that could be sent in your message system is well-defined, limited to a few possible choices, then one could employ a boost::variant object. Variants are kind of like type-safe unions that have built-in visitation support.
However, if the set of possible data is more or less arbitrary, then you won't be able to use a variant. You still want to preserve type-safety, so that the person receiving the message cannot cast it to a type other than the one it was originally stored with. In that case, boost::any is a good choice. Yes, you still have to use an any_cast, but that cast will fail if the contents are not of the proper type.
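For concreteness, here is a minimal sketch of both approaches using C++17's std::variant and std::any, the standard-library descendants of the boost types mentioned above (the message types Resize and SetTitle are made up for illustration):

#include <any>
#include <iostream>
#include <string>
#include <type_traits>
#include <variant>

struct Resize   { int width; int height; };
struct SetTitle { std::string title; };

// The well-defined, closed set of messages lives in the variant itself.
using Message = std::variant<Resize, SetTitle>;

void handle(const Message& msg) {
    // Visitation dispatches on the type actually held -- no casts in user code.
    std::visit([](const auto& m) {
        using T = std::decay_t<decltype(m)>;
        if constexpr (std::is_same_v<T, Resize>)
            std::cout << "resize to " << m.width << "x" << m.height << '\n';
        else
            std::cout << "title: " << m.title << '\n';
    }, msg);
}

int main() {
    handle(Resize{800, 600});
    handle(SetTitle{"hello"});

    // For an open-ended payload, any keeps type safety: any_cast returns a null
    // pointer (or throws, in its value form) if the stored type doesn't match.
    std::any payload = 42;
    if (auto* p = std::any_cast<int>(&payload))
        std::cout << "int payload: " << *p << '\n';
}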
Related
I recently learned that it is Undefined Behavior to reinterpret a POD as a different POD by reinterpret_casting its address. So I'm just wondering what a potential use-case of reinterpret_cast might be, if it can't be used for what its name suggests?
There are two situations in which I’ve used reinterpret_cast:
To cast to and from char* for the purpose of serialisation, or when talking to a legacy API. In this case the cast from char* to an object pointer is still, strictly speaking, UB (even though it's done very frequently). And you don't actually need reinterpret_cast here: you could use memcpy instead. The cast might avoid a copy under specific circumstances, but in situations where reinterpreting the bytes is valid in the first place, memcpy usually doesn't generate a redundant copy either; the compiler is smart enough for that.
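As a small illustration of that last point, here is the memcpy version of pulling an integer out of a byte buffer (read_u32 is just an illustrative name):

#include <cstdint>
#include <cstring>
#include <iostream>

// Read a uint32_t (in native byte order) out of a raw byte buffer.
std::uint32_t read_u32(const unsigned char* buf) {
    // Dereferencing reinterpret_cast<const std::uint32_t*>(buf) would be UB unless
    // buf really points at a (suitably aligned) uint32_t; memcpy expresses the same
    // intent legally, and compilers routinely turn it into a single load.
    std::uint32_t value;
    std::memcpy(&value, buf, sizeof value);
    return value;
}

int main() {
    unsigned char buf[] = {0x78, 0x56, 0x34, 0x12};
    std::cout << std::hex << read_u32(buf) << '\n';   // 12345678 on a little-endian machine
}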
To cast pointers from/to std::uintptr_t to serialise them across a legacy API or to perform some non-pointer arithmetic on them. This is definitely an odd beast and doesn’t happen frequently (even in low-level code) but consider the situation where one wants to exploit the fact that pointers on a given platform don’t use the most significant bits, and these bits can thus be used to store some bit flags. Garbage collector implementations occasionally do this. The lower bits of a pointer can sometimes also be used, if the programmer knows that the pointer will always be aligned e.g. at an 8 byte boundary (so the lowest three bits must be 0).
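A rough sketch of that tagging trick (it assumes, as stated, that the pointees are sufficiently aligned so the low bit is known to be zero; this is not portable in general):

#include <cassert>
#include <cstdint>

// Stash a boolean flag in the lowest bit of an aligned pointer.
template <typename T>
std::uintptr_t tag(T* p, bool flag) {
    auto bits = reinterpret_cast<std::uintptr_t>(p);
    assert((bits & 1u) == 0 && "pointer must be at least 2-byte aligned");
    return bits | static_cast<std::uintptr_t>(flag);
}

template <typename T>
T* untag(std::uintptr_t bits) {
    // Round-tripping T* -> uintptr_t -> T* gives back the original pointer.
    return reinterpret_cast<T*>(bits & ~std::uintptr_t{1});
}

bool flag_of(std::uintptr_t bits) { return (bits & 1u) != 0; }

int main() {
    alignas(8) int value = 7;
    std::uintptr_t packed = tag(&value, true);
    assert(untag<int>(packed) == &value);
    assert(flag_of(packed));
}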
But to be honest I can’t remember the last concrete, legitimate situation where I’ve actually used reinterpret_cast. It’s definitely many years ago.
Conforming implementations of C and C++ are allowed to extend the semantics of the languages by behaving meaningfully even in cases where the Standards would not require them to do so. Implementations that do so may be suitable for a wider range of tasks than implementations that do not. In many cases, it is useful to have a consistent syntax to specify constructs which will be processed meaningfully and consistently by implementations that are designed to be suitable for low-level programming tasks, even if implementations which are not designed for such purposes would process them nonsensically.
One very frequent use case is when you're working with C library functions that take an opaque void * that gets forwarded to a callback function. Using reinterpret_cast on both sides of the fence, so to speak, keeps everything proper.
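A typical shape of that pattern, sketched with a made-up C-style API (register_callback and Widget are hypothetical names):

#include <iostream>

// Hypothetical C-style API: it stores an opaque void* and hands it back to the callback.
using Callback = void (*)(void* user_data);
void register_callback(Callback cb, void* user_data) { cb(user_data); }

struct Widget {
    void on_event() { std::cout << "event for widget\n"; }
};

// Trampoline: cast the opaque pointer back to exactly the type that was passed in.
void widget_trampoline(void* user_data) {
    auto* w = reinterpret_cast<Widget*>(user_data);
    w->on_event();
}

int main() {
    Widget w;
    // Cast to void* on the way in, back to Widget* on the way out -- both sides of the fence.
    register_callback(widget_trampoline, reinterpret_cast<void*>(&w));
}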
I've just learned what std::function really is about and what it is used for, and I have a question: now that we essentially have delegates, where and when should we use Abstract Base Classes, and when should we instead implement polymorphism via std::function objects fed to a generic class? Did ABCs receive a fatal blow in C++11?
Personally, my experience so far is that switching delegates is much simpler to code than creating multiple inherited classes, each for a particular behaviour... so I am a little confused about how useful Abstract Bases will be from now on.
Prefer well-defined interfaces over callbacks
The problem with std::function (previously boost::function) is that most of the time you need to have a callback to a class method, and therefore need to bind this to the function object. However in the calling code, you have no way to know if this is still around. In fact, you have no idea that there even is a this because bind has molded the signature of the calling function into what the caller requires.
This can naturally cause weird crashes as the callback attempts to fire into methods for classes that no longer exist.
You can, of course use shared_from_this and bind a shared_ptr to a callback, but then your instance may never go away. The person that has a callback to you now participates in your ownership without them even knowing about it. You probably want more predictable ownership and destruction.
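A small sketch of that ownership trap (Button and Dialog are made-up names): once the callback captures a shared_ptr, the callback holder silently co-owns the object.

#include <functional>
#include <iostream>
#include <memory>

struct Button {
    std::function<void()> onClick;   // the holder knows nothing about any 'this'
};

struct Dialog : std::enable_shared_from_this<Dialog> {
    void handleClick() { std::cout << "clicked\n"; }

    void wire(Button& b) {
        // Capturing a raw 'this' risks a dangling callback if the Dialog dies first.
        // Capturing shared_from_this() instead keeps the Dialog alive for as long as
        // the Button holds the callback -- the Button now participates in ownership.
        auto self = shared_from_this();
        b.onClick = [self] { self->handleClick(); };
    }
};

int main() {
    Button button;
    {
        auto dlg = std::make_shared<Dialog>();
        dlg->wire(button);
    }                   // the last named reference to the Dialog is gone...
    button.onClick();   // ...but the captured shared_ptr still keeps it alive
}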
Another problem, even if you can get the callback to work fine, is that with callbacks the code can be too decoupled. The relationships between objects can be so difficult to ascertain that code readability decreases. Interfaces, however, provide a good compromise between an appropriate level of decoupling and a clearly specified relationship, as defined by the interface's contract. You can also more clearly specify, in this relationship, issues like who owns whom, destruction order, etc.
An additional problem with std::function is that many debuggers do not support it well. In VS2008 with boost functions, you have to step through about 7 layers to get to your function. Even if, all other things being equal, a callback were the best choice, the sheer annoyance and time wasted accidentally stepping over the target of a std::function is reason enough to avoid it. Inheritance is a core feature of the language, and stepping into an overridden method of an interface is instantaneous.
Lastly, I'll just add that we don't have delegates in C++. Delegates in C# are a core part of the language, just like inheritance is in C++ and C#. We have a standard library feature, which IMO is one layer removed from core language functionality, so it's not going to be as tightly integrated with other core features of the language. Instead it helps formalize the idea of function objects, which have been a C++ idiom for a good while now.
I do not see how one can come to the conclusion that function pointers make abstract base classes obsolete.
A class encapsulates methods and data that pertain to it.
A function pointer is a function pointer. It has no notion of an encapsulating object; it merely knows about the parameters passed to it.
When we write classes, we are describing well-defined objects.
Function pointers are great for a good number of things, but changing the behaviour of an object isn't necessarily their target audience (though I admit there may be times when you would want to do so, e.g. callbacks, as Doug.T mentions).
Please don't confuse the two.
My knowledge of C++ at this point is more academic than anything else. In all my reading thus far, the use of explicit conversion with named casts (const_cast, static_cast, reinterpret_cast, dynamic_cast) has come with a big warning label (and it's easy to see why) that implies that explicit conversion is symptomatic of bad design and should only be used as a last resort in desperate circumstances. So, I have to ask:
Is explicit conversion with named casts really just jury-rigging code, or is there a more graceful and positive application of this feature? Is there a good example of the latter?
There are cases when you just can't go without it. One example: you have multiple inheritance and need to convert the this pointer to void* while ensuring that the pointer stored in the void* still points to the right subobject of the current object. Using an explicit cast is the only way to achieve that.
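A small sketch of why the cast matters there (the class names are made up): converting straight to void* records the address of the complete object, while an explicit cast first selects the subobject you actually mean.

#include <iostream>

struct A { int a = 0; };
struct B { int b = 0; };
struct C : A, B { };

int main() {
    C c;
    void* whole = &c;                   // implicit conversion: address of the complete object
    void* as_b  = static_cast<B*>(&c);  // explicit cast first: address of the B subobject
    std::cout << (whole == as_b ? "same address\n" : "different addresses\n");  // typically "different"
}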
There's an opinion that if you can't go without a cast you have bad design. I can't agree with this completely: different situations are possible, including the one mentioned above. But perhaps if you need to use explicit casts too often, you really do have bad design.
There are situations when you can't really avoid explicit casts, especially when interacting with C libraries or badly designed C++ libraries (like the COM library sharptooth used as an example).
In general, the use of explicit casts IS a red flag. It does not necessarily mean bad code, but it does draw attention to a potentially dangerous use.
However, you should not throw the 4 casts into the same bag: static_cast and dynamic_cast are frequently used for down-casting (from Base to Derived) or for navigating between related types. Their occurrence in the code is pretty normal (indeed it's difficult to write a Visitor Pattern without either).
On the other hand, the use of const_cast and reinterpret_cast is much more dangerous.
using const_cast to try and modify a read-only object is undefined behavior (thanks to James McNellis for correction)
reinterpret_cast is normally only used to deal with raw memory (allocators)
They have their use, of course, but should not be encountered in normal code. For dealing with external or C APIs they might be necessary though.
At least that's my opinion.
How bad a cast is typically depends on the type of cast. There are legitimate uses for all of these casts, but some smell worse than others.
const_cast is used to cast away constness (since adding it doesn't require a cast). Ideally, that should never be used. It makes it easy to invoke undefined behavior (trying to change an object originally designated const), and in any case breaks the const-correctness of the program. It is sometimes necessary when interfacing with APIs that are not themselves const-correct, which may for example ask for a char * when they're going to treat it as const char *, but since you shouldn't write APIs that way it's a sign that you're using a really old API or somebody screwed up.
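For example, a sketch of the old-API case (legacy_print is a made-up stand-in for such a function):

#include <cstdio>
#include <string>

// Stand-in for a legacy function: declared to take char*, but documented never to write through it.
int legacy_print(char* text) { return std::puts(text); }

void print_message(const std::string& msg) {
    // Safe only because legacy_print does not actually modify the buffer;
    // using const_cast to then modify a genuinely const object would be UB.
    legacy_print(const_cast<char*>(msg.c_str()));
}

int main() { print_message("hello"); }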
reinterpret_cast is always going to be platform-dependent, and is therefore at best questionable in portable code. Moreover, unless you're doing low-level operations on the physical structure of objects, it doesn't preserve meaning. In C and C++, a type is supposed to be meaningful. An int is a number that means something; an int that is basically the concatenation of chars doesn't really mean anything.
dynamic_cast is normally used for downcasting; e.g. from Base * to Derived *, with the proviso that either it works or it returns 0. This subverts OO in much the same way as a switch statement on a type tag does: it moves the code that defines what a class is away from the class definition. This couples the class definitions with other code and increases the potential maintenance load.
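The usual shape of that works-or-returns-0 downcast (Shape and Circle are illustrative):

#include <iostream>

struct Shape  { virtual ~Shape() = default; };
struct Circle : Shape { double radius = 1.0; };

void describe(Shape& s) {
    // dynamic_cast yields a Circle* if s really is a Circle, and nullptr otherwise.
    if (auto* c = dynamic_cast<Circle*>(&s))
        std::cout << "circle, radius " << c->radius << '\n';
    else
        std::cout << "some other shape\n";
}

int main() {
    Circle c;
    Shape  s;
    describe(c);   // circle, radius 1
    describe(s);   // some other shape
}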
static_cast is used for data conversions that are known to be generally correct, such as conversions to and from void *, known safe pointer casts within the class hierarchy, that sort of thing. About the worst you can say for it is that it subverts the type system to some extent. It's likely to be needed when interfacing with C libraries, or the C part of the standard library, as void * is often used in C functions.
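For instance, the classic case of getting a typed pointer back out of a void* handed over by a C standard-library function:

#include <cstdlib>
#include <iostream>

// Comparator for std::qsort: the C interface only speaks void*.
int compare_ints(const void* a, const void* b) {
    // static_cast is appropriate here: the pointees really are ints,
    // we are just undoing the implicit conversion to void*.
    int lhs = *static_cast<const int*>(a);
    int rhs = *static_cast<const int*>(b);
    return (lhs > rhs) - (lhs < rhs);
}

int main() {
    int values[] = {3, 1, 2};
    std::qsort(values, 3, sizeof values[0], compare_ints);
    for (int v : values) std::cout << v << ' ';   // 1 2 3
}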
In general, well-designed and well-written C++ code will avoid the use cases above, in some cases because the only use of the cast is to do potentially dangerous things, and in other cases because such code tends to avoid the need for such conversions. The C++ type system is generally seen as a good thing to maintain, and casts subvert it.
IMO, like most things, they're tools, with appropriate uses and inappropriate ones. Casting is probably an area where the tools frequently get used inappropriately, for example, to cast between an int and pointer type with a reinterpret_cast (which can break on platforms where the two are different sizes), or to const_cast away constness purely as a hack, and so on.
If you know what they're for and the intended uses, there's absolutely nothing wrong with using them for what they were designed for.
There is an irony to explicit casts. The developer whose poor C++ design skills lead him to write code requiring a lot of casting is the same developer who doesn't use the explicit casting mechanisms appropriately, or at all, and litters his code with C-style casts.
On the other hand, the developer who understands their purpose, when to use them and when not to, and what the alternatives are, is not writing code that requires much casting!
Check out the more fine-grained variations on these casts, such as polymorphic_cast, in the Boost conversion library, to get an idea of just how careful C++ programmers are when it comes to casting.
Casts are a sign that you're trying to put a round peg in a square hole. Sometimes that's part of the job. But if you have some control over both the hole and the peg, it would be better not to create this condition, and writing a cast should prompt you to ask yourself whether there was something you could have done so that it fit a little more smoothly.
I have been reading a lot about C++ casting and I am starting to get confused because I have always used C style casting.
I have read that C-style casting should be avoided in C++ and that reinterpret_cast is very, very dangerous and should not be used whenever there is an alternative. Yet contrary to that advice, I have seen reinterpret_cast used many times in sample code on MSDN. This leads me to ask my first question: when is it OK to use reinterpret_cast?
For example:
LRESULT CALLBACK WndProc(HWND hWnd, UINT Msg, WPARAM wParam, LPARAM lParam)
{
    switch (Msg)
    {
    case WM_CREATE:
        {
            LPCREATESTRUCT lpCreateStruct = reinterpret_cast<LPCREATESTRUCT>(lParam);
            return 0;
        }
    }
    ...
}
If that is not ok, then how would I cast the LPARAM value to a pointer using only static, dynamic, and/or const casting?
Also: if reinterpret_cast is not portable, how would I rewrite the above to be portable (for good practice)?
Using reinterpret_cast is acceptable if you know that the pointer was originally of the destination type. Any other use is taking advantage of implementation-dependent behavior, although in many cases this is necessary and useful, such as casting a pointer to a structure into a pointer to bytes so that it can be serialized.
It is considered dangerous because it does no checking, either at compile-time or at runtime. If you make a mistake, it can and will crash and burn horribly, and be difficult to debug. You are essentially telling the compiler "I know better than you what this actually is, so just compile the code and let me worry about the consequences."
The reason you see it on MSDN is because the Win32 API is a C API, but people insist on giving examples in C++.
reinterpret_cast is fine when you're writing code that interfaces with other libraries. It should be avoided within your own app.
This is an example of programming the Windows Platform SDK, a C API, with C++. The window procedure only has the WPARAM and LPARAM parameters and if you need to pass a pointer to a structure through a window message, it has to be cast. This is a perfectly acceptable use of reinterpret_cast<> in my opinion. You cannot avoid a cast because the SDK you are writing to, which is not your code, was not designed for C++, much less type-safety, and needs the casting to provide generic parameter types with a C binding.
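A sketch of that round trip, with the Win32 types stubbed out so it compiles anywhere (CreateInfo and fake_wndproc are made up; on Windows the real WPARAM/LPARAM come from <windows.h>):

#include <cstdint>
#include <iostream>

using LPARAM = std::intptr_t;   // stand-in for the real pointer-sized Win32 type

struct CreateInfo { const char* title; int width; };

void fake_wndproc(unsigned msg, LPARAM lParam) {
    if (msg == 1) {
        // Receiver: recover the pointer the sender squeezed into the integer slot.
        auto* info = reinterpret_cast<CreateInfo*>(lParam);
        std::cout << info->title << ", width " << info->width << '\n';
    }
}

int main() {
    CreateInfo info{"main window", 800};
    // Sender: the API only offers an integer-sized parameter, so the pointer is cast in.
    fake_wndproc(1, reinterpret_cast<LPARAM>(&info));
}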
The reinterpret_cast<> here is a flag that lets you know you need to be careful, but it is not to be avoided at all costs.
If, on the other hand, you controlled both sides of the code, the API and the consumer, it would be better to make an API that was type-safe and did not require the consumer to perform casts to use it correctly.
Not to disrespect MSDN, but MSDN is not the best place to go for proper C++ coding.
One reason to use reinterpret_cast is when you're casting to/from opaque datatypes. reinterpret_cast is not "dangerous"; it's just easy to screw up with it and cause problems in your code, which is why it should be avoided.
The reason the C++-style casts are preferred is that static_cast is type-safe, and all of the cast types are easier to search for.
Programmers [incorrectly] often use casts to "cast off compiler warnings", such as when converting from unsigned to signed integers, or from a 32-bit integer to an 8-bit one.
Essentially, reinterpret_cast is "safe" with C structures and basic types (barring plain mistakes like casting an int to a pointer and back, which works on an ILP32 architecture but breaks on an LP64 one). A C structure doesn't have anything in it, except possibly padding for alignment, that you didn't declare.
reinterpret_cast is not safe with C++ polymorphic types, since the compiler inserts data items into your class: things like pointers to virtual tables and pointers to virtual base classes. The other C++ casts take care of adjusting these when, say, down-casting from a pointer to a base class to a pointer to a derived class; reinterpret_cast and C-style casts don't.
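A sketch of the adjustment being described (Base1, Base2, and Derived are illustrative): with multiple polymorphic bases, static_cast adjusts the pointer to the right subobject, while reinterpret_cast merely relabels the same address.

#include <iostream>

struct Base1   { virtual ~Base1() = default; int x = 1; };
struct Base2   { virtual ~Base2() = default; int y = 2; };
struct Derived : Base1, Base2 { };

int main() {
    Derived d;
    Base2* adjusted  = static_cast<Base2*>(&d);       // points at the Base2 subobject
    Base2* relabeled = reinterpret_cast<Base2*>(&d);  // same address as &d; not usable as a Base2
    std::cout << (adjusted == relabeled ? "same\n" : "different\n");  // typically "different"
}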
Last week a young student asked me whether marshalling is the same as casting.
My answer was definitely no. Marshalling is serialization: the way to transform the memory representation of an object into bytes in order to be transmitted over a network, whereas casting is related to type conversion/coercion.
Later on, rethinking the question, I thought that marshalling can be seen as a special case of casting. For example, if the transformation of the memory representation is into XML, then one can say that you are "casting" to the type defined by the corresponding XSD grammar of that XML file.
What do you think about this?
Casting doesn't modify the data type. That is a major distinction. When you marshal something, you are transforming the data into something else.
A simple cast only changes how you are interpreting the object, not what the object is internally.
I agree that the distinction should be kept clear, or else people unfamiliar with the terms may be confused.
They're both a "type conversion", but they are different kinds: casting is usually between related object types (e.g. a downcast from a superclass to a subclass), whereas marshalling might be, for example, from an object graph to a plain-text representation.
Marshalling is generally about a technology boundary (e.g. going across a network or from one memory type to another as in the case of managed/unmanaged) whereas casting is generally within the same technology boundary therefore I think they are definitely different things.
It would be exceptionally confusing if we used the same term for both approaches; since they have different behaviours, we need to define them differently.