How to cast from bool to void*? - c++

I'm trying to build cairomm for gtkmm on Windows using MinGW. Compilation breaks at a function call that passes a bool to a void* parameter via reinterpret_cast:
cairo_font_face_set_user_data(cobj(), &USER_DATA_KEY_DEFAULT_TEXT_TO_GLYPHS, reinterpret_cast<void*>(true), NULL);
This is where the build fails, and the error is "invalid reinterpret_cast from bool to void*". Why is this happening, and how can I modify this line to get it to compile?

Since this is user data and you have control over what is done with the value, cast the bool to an int first: reinterpret_cast<void *>(static_cast<int>(true)). Doing this makes sense in that the void* parameter takes the place of a template parameter in this ANSI-C library; all you need is a true/false value, so there is no real danger in temporarily encoding it as a pointer, as long as that is well documented. Really, you would be better off with reinterpret_cast<void *>(1) or reinterpret_cast<void *>(+true).
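Applied to the line from the question, a minimal sketch of that workaround (the identifiers come from the question's cairomm code, otherwise unchanged) would be:
cairo_font_face_set_user_data(cobj(), &USER_DATA_KEY_DEFAULT_TEXT_TO_GLYPHS,
                              reinterpret_cast<void*>(static_cast<int>(true)), NULL);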

It looks like it should work, according to the standard. Section 3.9.1-7 says bool is an integral type, and 5.2.10-5 says a value of integral type can be explicitly converted to a pointer using reinterpret_cast. It appears that your compiler is not fully standard.
Could you get away with changing the "true" to a 1? Converting between integers and pointer types is an old and dishonorable tradition in C and hence C++, and it would be surprising to find a compiler that wouldn't do it.
Or, if you really really have to do this, try (void *)true. Then wash your hands.

The only compiler I have that complains about this is GCC (MinGW with GCC 3.4.5) - and I'm not sure why. The standard seems to clearly indicate this is permitted:
3.9.1 Fundamental types
...
Types bool, char, wchar_t, and the signed and unsigned integer types are collectively called integral types.
5.2.10 Reinterpret cast:
...
A value of integral type or enumeration type can be explicitly converted to a pointer.
That said, monjardin's suggestions of reinterpret_cast<void *>(static_cast<int>(true)) or reinterpret_cast<void *>(1) are reasonable workarounds.

reinterpret_cast is a bad idea. Tell us more about the problem you're trying to solve, and perhaps we'll find a solution without resorting to reinterpret. Why do you want to convert bool to void*?

In some situations, it is highly desirable to have the compiler warn or error on code like reinterpret_cast<void*>(true), even though this code is apparently legal C++. For example, it aids in porting to 64-bit platforms.
Casting a 64-bit pointer into an integral type that is smaller than a pointer (such as int or bool) is often a bug: you're truncating the pointer's value. Furthermore, the C++ specification doesn't seem to guarantee that you can directly cast a pointer into a smaller integral type (emphasis added):
5.2.10.4. A pointer can be explicitly converted to any integral type large enough to hold it. The mapping function is implementation-defined.
Likewise, casting a smaller integral type into a 64-bit pointer (as with reinterpret_cast<void*>(true)) is often a bug as well: the compiler has to fill in the pointer's upper bits with something; does it zero-fill or sign-extend? Unless you're writing low-level platform-specific code for memory mapped I/O access or DMA, you usually don't want to be doing this at all, unless you're doing something hacky (like stuffing a Boolean into a pointer). But the C++ specification doesn't seem to say much about this case other than that it is implementation-defined (footnote omitted):
5.2.10.5. A value of integral type or enumeration type can be explicitly converted to a pointer.*
A pointer converted to an integer of sufficient size (if any such exists on the implementation) and back to the same pointer type will have its original value; mappings between pointers and integers are otherwise implementation-defined.
monjardin suggested reinterpret_cast<void*>(static_cast<int>(true)). If the origin of the error was a mismatch between the integral type's size and the pointer size, then this will work on most 32-bit platforms (where both int and void* are 32 bits) but fail on most 64-bit platforms (where int is 32 bits and void* is 64 bits). In that case, replacing int in this expression with a pointer-sized integer type such as uintptr_t or DWORD_PTR (on Windows) should work, since conversions between bool and pointer-sized integers are allowed, and so are conversions between pointer-sized integers and pointers.
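As a minimal sketch of that suggestion (the helper names below are illustrative, not part of cairo or cairomm), round-tripping the flag through a pointer-sized integer behaves the same on 32-bit and 64-bit targets:
#include <cstdint>

// Illustrative helpers: encode a bool in a void* user-data slot and decode it again.
// std::uintptr_t is pointer-sized, so there are no truncation or sign-extension surprises.
void* encode_flag(bool flag)
{
    return reinterpret_cast<void*>(static_cast<std::uintptr_t>(flag));
}

bool decode_flag(void* user_data)
{
    return reinterpret_cast<std::uintptr_t>(user_data) != 0;
}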
Later versions of GCC have the following warning suppression options, but not for C++:
-Wno-int-to-pointer-cast (C and Objective-C only)
Suppress warnings from casts to pointer type of an integer of a different size.
-Wno-pointer-to-int-cast (C and Objective-C only)
Suppress warnings from casts from a pointer to an integer type of a different size.

It fails because the cast makes no sense - you are taking a boolean true/false value and asking the compiler to interpret it as a pointer, which in blunt terms is a memory location. The two aren't even remotely related.

Try a newer version of your compiler. I just tested and this cast works on at least gcc 4.1 and above. I don't know exactly how gcc versions map to mingw versions though.

Related

Can wrapping a type conversion in a static_cast ever hurt?

I'm updating some old code and I get hundreds of warnings along the lines of warning C4244: '+=': conversion from 'std::streamsize' to 'unsigned long', possible loss of data in Visual Studio.
The project compiles and runs fine when just ignoring the warnings, but I want to remove them by wrapping each offending expression in a static_cast<unsigned long>(). Considering the code runs fine now, could this potentially be harmful?
Yes, static_cast can hurt: it tells the compiler to shut up because you know what you are doing. The question is whether you actually do know what you are doing.
Obviously, casting to a smaller type can produce an unexpected result if the stored value exceeds the smaller type's range. Use a static_cast only if you know for sure that this case can never happen, or if you actually expect a truncated value. If not, keep the warning until you have properly redesigned your code.
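As a small illustration of that truncation risk (assuming an LLP64 platform such as 64-bit Windows, where unsigned long is 32 bits while std::streamsize is 64 bits):
#include <iostream>

int main()
{
    long long bytes_transferred = 5000000000LL;    // stands in for a large std::streamsize value
    unsigned long n = static_cast<unsigned long>(bytes_transferred);
    std::cout << n << '\n';                        // prints 705032704 when unsigned long is 32 bits
}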
The project compiles and runs fine when just ignoring the warnings
To start with, never ignore warnings. Think over what your code actually does instead.
Considering the code runs fine now, could this potentially be harmful?
Regarding your particular case, a static_cast<unsigned long> from std::streamsize will be harmful.
As the reference documentation of std::streamsize says, it's intentionally a signed type:
The type std::streamsize is a signed integral type used to represent the number of characters transferred in an I/O operation or the size of an I/O buffer. It is used as a signed counterpart of std::size_t, similar to the POSIX type ssize_t.
Static casting in that case actually means loss of semantics.
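If the cast really is needed somewhere, a hedged sketch of a checked conversion (a helper of my own, not from this answer) avoids silently losing range or sign:
#include <ios>        // std::streamsize
#include <limits>
#include <stdexcept>

// Illustrative helper: only narrow when the value survives the round trip.
unsigned long to_unsigned_long(std::streamsize n)
{
    if (n < 0 || static_cast<unsigned long long>(n) >
                     std::numeric_limits<unsigned long>::max())
    {
        throw std::range_error("streamsize value does not fit in unsigned long");
    }
    return static_cast<unsigned long>(n);
}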
The idea is that conversions allowed by static_cast are somewhat less likely to lead to errors than those that require reinterpret_cast. In principle, it is possible to use the result of a static_cast without casting it back to its original type, whereas you should always cast the result of a reinterpret_cast back to its original type before using it to ensure portability.
from Bjarne Stroustrup's C++ Style and Technique FAQ
Be careful with consts. static_cast doesn't cast away const.
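A tiny illustration of that point:
void const_example(const int& read_only)
{
    // int& bad = static_cast<int&>(read_only);   // error: static_cast cannot cast away const
    int& writable = const_cast<int&>(read_only);  // const_cast is the (dangerous) tool for that
    (void)writable;
}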
In general, I think it would be better to rewrite your code instead of "casting every problem away" in hope that everything goes well.

MISRA C++ 2008 Rule 5-2-7 violation: An object with pointer type shall not be converted to an unrelated pointer type, either directly or indirectly

In the following example:
void bad_function()
{
    char_t* ptr = 0;

    // MISRA doesn't complain here; it allows casting char* to a void* pointer
    void* p2 = ptr;

    // The following two MISRA violations are reported for each of the casts below (two per code line):
    // (1) Event misra_violation: [Required] MISRA C++-2008 Rule 5-2-7 violation: An object with pointer type shall not be converted to an unrelated pointer type, either directly or indirectly
    // (1) Event misra_violation: [Required] MISRA C++-2008 Rule 5-2-8 violation: An object with integer type or pointer to void type shall not be converted to an object with pointer type
    ptr = (char_t*) (p2);
    ptr = static_cast<char_t*> (p2);
    ptr = reinterpret_cast<char_t*> (p2);
}
MISRA 5-2-8 and 5-2-7 violations are reported.
How can I remove this violation? I need someone experienced with C++ static analysis to help me; I have been hitting my head against these stupid rules for a few days now.
According to the MISRA C++ standard (MISRA-Cpp-2008.pdf), Rule 5-2-7 (required) states: An object with pointer type shall not be converted to an unrelated pointer type, either directly or indirectly.
OK, but we have a lot of code that, for example, needs to convert an address to char* in order to use it with std::ifstream, whose read(char* buffer, int length) function requires the address to be cast to (char_t*). So how, according to the MISRA guys, is someone supposed to program in C++ without using any casts at all? The standard doesn't say HOW pointer conversion must be done instead.
In my production code my problems are in file reading operations using read with std::ifstream from files in predefined data structures:
if (file.read((char_t*)&info, (int32_t)sizeof(INFO)).gcount() != (int32_t)sizeof(INFO))
{
    LOG("ERROR: Couldn't read the file info header\n");
    res = GENERAL_FAILURE;
}
How is one supposed to do this according to MISRA?
So are there any solutions at all?
EDIT: Peter's and Q.Q.'s answers are both correct; it seems that MISRA really wants everything done without any casts, which is hard to achieve when the project is in its final stage. There are two options:
1 - document the MISRA deviations one by one and explain why the casts are OK and how this has been tested (Q.Q.'s suggestion);
2 - use a byte array of char for file.read(), then after safely reading the file content assemble the header fields from the byte array. This has to be done member by member, because casting char* to int32_t* is again a Rule 5-2-7 violation; sometimes it is simply too much work (see the sketch just below).
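A minimal sketch of option 2 (the function name and the little-endian layout are my own assumptions, not part of the original code) might look like this:
#include <cstdint>

// Illustrative only: assemble a 32-bit field from a raw byte buffer
// (little-endian file layout assumed) without casting char* to int32_t*.
std::int32_t read_int32_le(const char* bytes)
{
    std::uint32_t value =
          static_cast<std::uint32_t>(static_cast<unsigned char>(bytes[0]))
        | static_cast<std::uint32_t>(static_cast<unsigned char>(bytes[1])) << 8
        | static_cast<std::uint32_t>(static_cast<unsigned char>(bytes[2])) << 16
        | static_cast<std::uint32_t>(static_cast<unsigned char>(bytes[3])) << 24;
    return static_cast<std::int32_t>(value);
}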
The basic reason for the MISRA rule is that converting any pointer/address to any non-void pointer allows using that address as if it were a different object than it actually is. The compiler would complain about an implicit conversion in those cases. Using a typecast (or the C++ _cast operators) essentially stops the compiler complaining, and - in too many circumstances to count - dereferencing that pointer gives undefined behaviour.
In other words, by forcing a type conversion you are introducing potential undefined behaviour, and turning off any possibility of the compiler alerting you to it. MISRA think that is a bad idea, notwithstanding the fact that a lot of programmers who think in terms of ease of coding consider it a good idea in some cases.
You have to realise that the philosophy of MISRA checks is less concerned about ease of programming than typical programmers, and more concerned about preventing circumstances where undefined (or implementation defined or unspecified, etc) behaviours get past all checks, and result in code "in the wild" that can cause harm.
The thing is, in your actual use case, you are relying on file.read() correctly populating the data structure (presumably) named info.
if (file.read((char_t*)&info, (int32_t)sizeof(INFO)).gcount() != (int32_t)sizeof(INFO))
{
    LOG("ERROR: Couldn't read the file info header\n");
    res = GENERAL_FAILURE;
}
What you need to do is work a bit harder to provide valid code that will pass the MISRA checker. Something like
std::streamsize size_to_read = whatever();
std::vector<char> buffer(size_to_read);
if (file.read(&buffer[0], size_to_read).gcount() == size_to_read)
{
    // use some rules to interpret the contents of buffer (i.e. a protocol) and populate info;
    // generally these rules will check that the data is in a valid form,
    // but they will not rely on doing any pointer type conversions
}
else
{
    LOG("ERROR: Couldn't read the file info header\n");
    res = GENERAL_FAILURE;
}
Yes, I realise it is more work than simply using a type conversion and allowing binary saves and reads of structs. But them's the breaks. Apart from getting past the MISRA checker, this approach has other advantages if you do it right, such as the file format being completely independent of which compiler is used to build your code. With the struct approach your code depends on implementation-defined quantities (the layout of members in a struct, the results of sizeof), so code built with compiler A may be unable to read a file generated by code built with compiler B. And one of the common themes of MISRA requirements is reducing or eliminating any code whose behaviour may be sensitive to implementation-defined quantities.
Note: you are also passing char_t * to std::istream::read() as first argument and an int32_t as the second argument. Both are actually incorrect. The actual arguments are of type char * and std::streamsize (which may be, but is not required to be an int32_t).
Converting unrelated pointers to char* is not a good practice.
But, if you have a large legacy codebase doing this frequently, you can suppress rules by adding special comments.
fread is a perfectly good C++ function for file input and it uses void*, which MISRA allows.
It also is good at reading binary data, unlike fstream which processes all data through localized character conversion logic (this is the "facet" on an iostream, which is configurable, but the Standard doesn't define any portable way to achieve a no-op conversion).
The C-style fopen/fclose pairing is unfortunate in a C++ program, since you might forget to clean up your files. Luckily we have std::unique_ptr, which can add RAII behaviour to an arbitrary pointer type. Use std::unique_ptr<FILE, decltype(&fclose)> and you get fast, exception-safe binary file I/O in C++.
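A hedged sketch of that idea (the file name and buffer size are arbitrary):
#include <cstdio>
#include <memory>

int main()
{
    // fclose runs automatically when the unique_ptr goes out of scope,
    // even if an exception is thrown in between.
    std::unique_ptr<std::FILE, decltype(&std::fclose)>
        file(std::fopen("info.bin", "rb"), &std::fclose);

    if (file)
    {
        unsigned char header[16] = {};
        std::size_t got = std::fread(header, 1, sizeof header, file.get());
        (void)got;   // ... interpret 'got' bytes of the header ...
    }
}   // file closed here (if it was opened)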
NB: A common misconception is that std::ios::binary gives binary file I/O. It does not. All it affects are newline conversion and (on some systems) end-of-file marker processing, but there is no effect on facet-driven character conversion.

Conversion to enumeration type requires an explicit cast (static_cast, C-style cast or function-style cast)

I need to work on a project that was written in MSVC++ 6.0 SP6.
I DIDN'T write the project. I know very little about its inner workings. I DO know it WAS POSSIBLE to build it in the past.
While trying to build this project (which was built successfully in the past, though not by me), I get the error:
Conversion to enumeration type requires an explicit cast (static_cast, C-style cast or function-style cast)
for example:
error C2664: 'strncpy' : cannot convert parameter 2 from 'const unsigned short *' to 'const char *'
error C2664: 'void __cdecl CString::Format(const unsigned short *,...)' : cannot convert parameter 1
for a few dozen implicit conversions. I mustn't change the code. How can I force the compiler to accept the implicit conversions?
I mustn't change the code. How can I force the compiler to accept the implicit conversions?
Quite likely you need to get the same compiler that was used for the code in the first place, and use that.
If my guess (in a comment on unwind's answer) is correct about that unsigned short* error then it's simply not possible to compile this code in Unicode mode, because the source is insufficiently portable. Suppressing the error for the conversion, even if it's possible via some compiler setting, will just result in code that compiles but doesn't work.
I'd expect that also to imply that the old dll probably isn't compatible with the rest of your current code, but if you've been using it up to now then either I'm wrong about the reason, or else you've got away with it somehow.
That sounds crazy.
The use of unsigned short * with string-handling functions like strncpy() initially seems to make no sense at all. On second thought, though, it makes me wonder if there is some kind of "wide character" configuration that is failing. If strncpy() was "re-targeted" by the compiler to work on 16-bit characters, having it expect unsigned short * makes sense and would explain why the code passes it such a pointer. At least it "kind of" explains it; it's still odd.
You can't. There are no such implicit conversions defined by the C++ language.
Visual C++ 6.0 was a law unto itself; by implementing something that merely looked a bit like the C++ language, it may have accepted this invalid code.
C++ is a typesafe language. But it allows you to tell the compiler to "shut up" by the evil known as casting.
Casting from an integer to an enum is often a necessary "evil" cast. For example, you cannot loop directly over an enum: where you have a restricted set of values with named enumerators, you have to loop over an integer and cast it to the enum type for this purpose.
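A short sketch of that pattern (the enumeration here is made up purely for illustration):
enum Colour { Red = 0, Green, Blue, ColourCount };

void visit_all_colours()
{
    for (int i = 0; i < ColourCount; ++i)
    {
        Colour c = static_cast<Colour>(i);   // int -> enum requires an explicit cast
        // ... use c ...
        (void)c;
    }
}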
Sometimes you do need to cast data structures to const char * rather than const void * just so you can perform pointer arithmetic. However, for the purposes of strcpy it is difficult to see why you would want to cast to unsigned short. If these are wide characters (and the old compiler did not know about wchar_t), then it may be "safe" to cast to const wchar_t * and use a wide-string copy. You could also use the C++ string classes, i.e. std::string and std::wstring.
If you really do not wish to update the source code for ISO compliance, then your best bet is to use the original VC++ 6.0 compiler for your legacy code, not least because even though you know this code works, building it with a different compiler produces different code that may no longer work. Any undefined or implementation-defined compiler behaviour, exploited either deliberately or inadvertently, could cause problems if a different compiler is used.
If you have an MSDN subscription, you can download all previous versions of VC++ for this purpose.

Why does C++ allow implicit conversion from int to unsigned int?

Consider following code:
void foo(unsigned int x)
{
}

int main()
{
    foo(-5);
    return 0;
}
This code compiles with no problems. Errors like this can cause lots of problems and are hard to find. Why does C++ allow such conversion?
The short answer is because C supported such conversions originally and they didn't want to break existing software in C++.
Note that some compilers will warn on this. For example g++ -Wconversion will warn on that construct.
In many cases the implicit conversion is useful, for example when an int is used in calculations but the end result will never be negative (known from the algorithm and optionally asserted upon).
EDIT: Additional probable explanation: Remember that originally C was a much looser-typed language than C++ is now. With K&R style function declarations there would have been no way for the compiler to detect such implicit conversions, so why bother restricting it in the language. For example your code would look roughly like this:
int foo(x)
unsigned int x;
{
}

int main()
{
    foo(-5);
    return 0;
}
while the declaration alone would have been int foo();
The compiler actually relied on the programmer to pass the right types into each function call and did no conversions at the call site. Then when the function actually got called the data on the stack (etc) was interpreted in the way the function declaration indicated.
Once code was written that relied on that sort of implicit conversion it would have become much harder to remove it from ANSI C even when function prototypes were added with actual type information. This is likely why it remains in C even now. Then C++ came along and again decided to not break backwards compatibility with C, continuing to allow such implicit conversions.
Just another quirk of a language that has lots of silly quirks.
The conversion is well-defined to wrap around, which may be useful in some cases.
It's backward-compatible with C, which does it for the above reasons.
Take your pick.
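A quick demonstration of that wrap-around (assuming a 32-bit unsigned int):
#include <iostream>
#include <limits>

int main()
{
    unsigned int x = -5;   // well-defined: -5 wraps modulo 2^N
    std::cout << x << '\n';                                               // 4294967291 when unsigned int is 32 bits
    std::cout << (std::numeric_limits<unsigned int>::max() - 4) << '\n';  // the same value
}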
user168715 is right. C++ was initially designed to be a superset of C, aiming to be as backward-compatible with it as possible.
The "C" philosophy is to deliver most of the responsibility to the programmer, instead of disallowing dangerous things. For C programmers it is heaven, for Java programmers, it is hell... a matter of taste.
I will dig the standards to see where exactly it is written, but I have no time for this right now. I'll edit my answer as soon as I can.
I also agree that some of the inherited freedom can lead to errors that are really hard to debug, so I'll add to what has been said that in g++ you can turn on a warning that prevents you from making this kind of mistake: the -Wconversion flag.
-Wconversion
Warn for implicit conversions that may alter a value. This includes conversions between real and integer, like abs (x) when x is double; conversions between signed and unsigned, like unsigned ui = -1; and conversions to smaller types, like sqrtf (M_PI). Do not warn for explicit casts like abs ((int) x) and ui = (unsigned) -1, or if the value is not changed by the conversion like in abs (2.0). Warnings about conversions between signed and unsigned integers can be disabled by using -Wno-sign-conversion.
For C++, also warn for confusing overload resolution for user-defined conversions; and conversions that will never use a type conversion operator: conversions to void, the same type, a base class or a reference to them. Warnings about conversions between signed and unsigned integers are disabled by default in C++ unless -Wsign-conversion is explicitly enabled.
Other compilers may have similar flags.
By the time of the original C standard, the conversion was already allowed by many (all?) compilers. Based on the C rationale, there appears to have been little (if any) discussion of whether such implicit conversions should be allowed. By the time C++ came along, such implicit conversions were sufficiently common that eliminating them would have rendered the language incompatible with a great deal of C code. It would probably have made C++ cleaner; it would certainly have made it much less used -- to the point that it would probably never have gotten beyond the "C with Classes" stage, and even that would just be a mostly-ignored footnote in the history of Bell labs.
The only real question along this line was between "value preserving" and "unsigned preserving" rules when promoting unsigned values "smaller" than int. The difference between the two arises when you have (for example) an unsigned short being added to an unsigned char.
Unsigned preserving rules say that you promote both to unsigned int. Value preserving rules say that you promote both values to int, if it can represent all values of the original type (e.g., the common case of 8-bit char, 16-bit short, and 32-bit int). On the other hand, if int and short are both 16 bits, so int cannot represent all values of unsigned short, then you promote the unsigned short to unsigned int (note that it's still considered a promotion, even though it only happens when it's really not a promotion -- i.e., the two types are the same size).
For better or worse, (and it's been argued both directions many times) the committee chose value preserving rather than unsigned preserving promotions. Note, however, that this deals with a conversion in the opposite direction: rather than from signed to unsigned, it's about whether you convert unsigned to signed.
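For a concrete illustration of the value-preserving rules (a sketch assuming a typical platform with 8-bit char, 16-bit short and 32-bit int):
#include <type_traits>

void promotion_example(unsigned char a, unsigned short b)
{
    auto sum = a + b;   // both operands promote to (signed) int before the addition
    static_assert(std::is_same<decltype(sum), int>::value,
                  "value-preserving promotion: unsigned char + unsigned short yields int here");
}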
Because the standard allows implicit conversion from signed to unsigned types.
Also, (int)a + (unsigned)b yields an unsigned result - that is what the C++ standard specifies.
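A small sketch of that rule:
#include <type_traits>

void mixed_arithmetic(int a, unsigned int b)
{
    auto r = a + b;   // the usual arithmetic conversions turn the int operand into unsigned int
    static_assert(std::is_same<decltype(r), unsigned int>::value,
                  "int + unsigned int yields unsigned int");
}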

static_cast wchar_t* to int* or short* - why is it illegal?

In both Microsoft VC2005 and g++ compilers, the following results in an error:
On win32 VC2005: sizeof(wchar_t) is 2
wchar_t *foo = 0;
static_cast<unsigned short *>(foo);
Results in
error C2440: 'static_cast' : cannot convert from 'wchar_t *' to 'unsigned short *' ...
On Mac OS X or Linux g++: sizeof(wchar_t) is 4
wchar_t *foo = 0;
static_cast<unsigned int *>(foo);
Results in
error: invalid static_cast from type 'wchar_t*' to type 'unsigned int*'
Of course, I can always use reinterpret_cast. However, I would like to understand why it is deemed illegal by the compiler to static_cast to the appropriate integer type. I'm sure there is a good reason...
You cannot cast between unrelated pointer types; the size of the type pointed to is irrelevant. Consider the case where the types have different alignment requirements: allowing a cast like this could generate illegal code on some processors. It is also possible for pointers to different types to have different sizes, which could leave you with an invalid pointer, or one pointing at an entirely different location. reinterpret_cast is one of the escape hatches you have if you know that, for your program, compiler, architecture and OS, you can get away with it.
As with char, the signedness of wchar_t is not defined by the standard. Put this together with the possibility of non-2's-complement integers, and for a wchar_t value c,
*reinterpret_cast<unsigned short *>(&c)
may not equal:
static_cast<unsigned short>(c)
In the second case, on implementations where wchar_t is a sign+magnitude or 1's-complement type, any negative value of c is converted to unsigned modulo 2^N, which changes the bits. In the first case the bit pattern is picked up and used as-is (if it works at all).
Now, if the results are different, then there's no realistic way for the implementation to provide a static_cast between the pointer types. What could it do, set a flag on the unsigned short* pointer, saying "by the way, when you load from this, you have to also do a sign conversion", and then check this flag on all unsigned short loads?
That's why it's not, in general, safe to cast between pointers to distinct integer types, and I believe this unsafety is why there is no conversion via static_cast between them.
If the type you're casting to happens to be the so-called "underlying type" of wchar_t, then the resulting code would almost certainly be OK for the implementation, but would not be portable. So the standard doesn't offer a special case allowing you a static_cast just for that type, presumably because it would conceal errors in portable code. If you know reinterpret_cast is safe, then you can just use it. Admittedly, it would be nice to have a straightforward way of asserting at compile time that it is safe, but as far as the standard is concerned you should design around it, since the implementation is not required even to dereference a reinterpret_casted pointer without crashing.
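One way to get the compile-time assertion wished for above (a sketch of my own; it only checks size and signedness, which is usually what such a reinterpret_cast relies on):
#include <type_traits>

// Refuse to compile on platforms where the assumed representation does not hold.
static_assert(sizeof(wchar_t) == sizeof(unsigned short) &&
                  std::is_unsigned<wchar_t>::value,
              "wchar_t is not a 16-bit unsigned type on this implementation");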
By the spec, static_cast is restricted to conversions between related types, e.g. std::ostream& to std::ofstream&. wchar_t is a distinct built-in type, even though it is widely used as if it were just another integer type.
If you really need this conversion, it has to be done with reinterpret_cast.
By the way, MSVC++ has an option to treat wchar_t either as a typedef for unsigned short or as a native, stand-alone type.
Pointers are not magic "no limitations, anything goes" tools.
They are, by the language specification actually very constrained. They do not allow you to bypass the type system or the rest of the C++ language, which is what you're trying to do.
You are trying to tell the compiler to "pretend that the wchar_t you stored at this address earlier is actually an int. Now read it."
That does not make sense. The object stored at that address is a wchar_t, and nothing else. You are working in a statically typed language, which means that every object has one, and just one, type.
If you're willing to wander into implementation-defined behavior-land, you can use a reinterpret_cast to tell the compiler to just pretend it's ok, and interpret the result as it sees fit. But then the result is not specified by the standard, but by the implementation.
Without that cast, the operation is meaningless. A wchar_t is not an int or a short.