I'm trying to use boost::polygon with boost::multiprecision to calculate polygon differences, but no matter how I try to combine the two I always get compiler errors like
assigning to 'long double' from incompatible type 'boost::multiprecision::number<boost::multiprecision::backends::cpp_int_backend<0, 0, boost::multiprecision::signed_magnitude, boost::multiprecision::unchecked, std::allocator<unsigned long long>>, boost::multiprecision::et_off>'
in
https://github.com/boostorg/polygon/blob/develop/include/boost/polygon/detail/polygon_arbitrary_formation.hpp#L401
and similar places when trying to read back the results:
using DifferenceResults = std::vector<gtl::polygon_with_holes_data<ValueType>>;
DifferenceResults differenceResult;
result.value().get<DifferenceResults>(differenceResult);
Here's what I tried.
Am I doing this right?
I tried different things for coordinate_traits, but in the end it looks like the types and expressions used in the lazy functions such as evalAtXforYlazy simply don't support this.
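Concretely, one of the variations I tried looked roughly like the sketch below. The ValueType alias (et_off matches the error message above) and the typedef choices inside the specialization are my own guesses, modelled on the library's stock coordinate_traits<int> specialization; they are not anything Boost documents for multiprecision.
#include <boost/multiprecision/cpp_int.hpp>
#include <boost/polygon/polygon.hpp>

namespace gtl = boost::polygon;
namespace mp  = boost::multiprecision;

// Expression templates off, so arithmetic yields plain numbers (my guess at
// what the library expects from a coordinate type).
using ValueType = mp::number<mp::cpp_int_backend<>, mp::et_off>;

namespace boost { namespace polygon {
  // Modelled on the stock coordinate_traits<int> specialization; the
  // floating-point typedefs are where the long double conversions bite.
  template <>
  struct coordinate_traits<ValueType> {
    typedef ValueType   coordinate_type;
    typedef ValueType   area_type;
    typedef ValueType   manhattan_area_type;
    typedef ValueType   unsigned_area_type;
    typedef ValueType   coordinate_difference;
    typedef long double coordinate_distance;
  };
}}

int main() {
  // Simple usage like this compiles; the errors only appear once the boolean
  // operations (difference etc.) instantiate the arbitrary-formation code.
  gtl::point_data<ValueType> p(ValueType(1), ValueType(2));
  return p.x() == ValueType(1) ? 0 : 1;
}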
It compiles fine without multiprecision.
I also tried to modify polygon_arbitrary_formation.hpp and added explicit casts wherever needed, which was promising at first but in the end led me to overload resolution problems originating in the same header.
I further tried to replace Boost.Multiprecision with MPIR, but its types don't even have explicit cast operators, only member functions for conversions, which would be even less compatible with evalAtXforYlazy and the other lazy helpers.
What am i doing wrong?
I was writing some code recently and found myself doing a lot of c-style casts, such as the following:
Client* client = (Client*)GetWindowLong(hWnd, GWL_USERDATA);
I thought to myself: why do we actually need to do these?
I can somewhat understand why this would be needed in circumstances where there is a lot of code and the compiler may not know what types can be converted to what, such as when using reflection.
But when casting from a long to a pointer, where both types are the same size, I don't understand why the compiler would not allow us to do this.
When casting from a long to a pointer, where both types are the same size, I don't understand why the compiler would not allow us to do this.
Ironically, this is exactly where the compiler's intervention is most important!
In the vast majority of situations, converting between a long and a pointer is a programming error that you don't want to go unnoticed, even if your platform allows it.
For example, when you write this
unsigned long *ptr = getLongPtr();
unsigned long val = ptr; // Probably an error
it is almost a certainty that you are missing an asterisk in front of ptr:
unsigned long val = *ptr; // This is what it should be
Finding errors like this without the compiler's help is very hard, hence the compiler wants you to tell it explicitly that you know what you are doing with conversions like that.
Moreover, something that is fine on one platform may not work on other platforms. For example, an integral type and a pointer may have the same size on 32-bit platforms, but have different sizes on 64-bit platforms. If you want to maintain any degree of portability, the compiler should warn you about the conversion even on the 32-bit platform, where the sizes are identical. The compiler warning will help you identify the error and switch to the portable pointer-as-integer type intptr_t.
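A minimal sketch of that portable round trip, assuming only the standard <cstdint> header:
#include <cstdint>
#include <cstdio>

int main() {
    int x = 42;
    // Round-trip the pointer through intptr_t, which is guaranteed to be
    // wide enough to hold a pointer, instead of assuming long is.
    std::intptr_t as_int = reinterpret_cast<std::intptr_t>(&x);
    int* back = reinterpret_cast<int*>(as_int);
    std::printf("%d\n", *back);  // prints 42
}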
I think the idea is that we want the compiler to tell us when we are doing something dodgy and/or potentially unintended, so that we don't do it by accident. The compiler therefore complains unless we explicitly tell it that this is what we really, really want. We do that by using a cast.
Edited to add:
It might be better to ask why we are allowed to cast between types at all. Originally C was created as a strongly typed language. Although it allows promotion/conversion between related object types (like between ints and floats), it is supposed to prevent access and assignment to the wrong type as a language feature, a safety measure. However, circumventing those rules is occasionally useful, so casting was put in the language to let us do that on the occasions when we need to.
Suppose there's a complex expression EXPRESSION, and it's quite hard even for the IDE to find some of the methods called in it, so it's very hard to figure out the type it evaluates to. Currently, to make the compiler (gcc) print the type in human-readable form, I'm using a construct like
struct {} s=EXPRESSION;
which won't compile for any expression that doesn't evaluate to that empty struct type. In this case gcc says something like
Conversion from Type_I_am_Interested_In to non-scalar type main()::<anonymous struct> requested
which allows me to see the Type_I_am_Interested_In.
My question is: is there a nicer way to get a human-readable Type_I_am_Interested_In using some gcc/clang extension or similar, instead of relying on the error message format?
You can use decltype to get the type of the expression and then use partially specialized templates and typeid (demangle via cxxabi.h) to create a readable form as you like.
While you can skip the template decomposition step, you will receive slightly less information without it.
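For example, a rough sketch of that approach on GCC/Clang (the demangle helper and the PRINT_TYPE macro are names I made up); note that typeid drops references and top-level cv-qualifiers, which is exactly the information the template decomposition step would preserve:
#include <cstdlib>
#include <cxxabi.h>      // GCC/Clang-specific demangling API
#include <iostream>
#include <memory>
#include <string>
#include <typeinfo>
#include <utility>
#include <vector>

// Turn the mangled name from typeid into something readable; fall back to
// the mangled form if __cxa_demangle fails.
std::string demangle(const char* mangled) {
    int status = 0;
    std::unique_ptr<char, void (*)(void*)> demangled(
        abi::__cxa_demangle(mangled, nullptr, nullptr, &status), std::free);
    return (status == 0 && demangled) ? demangled.get() : mangled;
}

// typeid(decltype(expr)) strips references and top-level const/volatile,
// so this prints the decayed type only.
#define PRINT_TYPE(expr) \
    std::cout << #expr << " : " << demangle(typeid(decltype(expr)).name()) << '\n'

int main() {
    std::vector<std::pair<int, double>> v;
    PRINT_TYPE(v.begin());   // prints the iterator type of the vector
}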
In a program using libtooling, is there a way to have some types recognized as "built-in" types?
For example, I'd like int16_t, uint32_t, etc. to be recognized as canonical built-in types rather than as typedefs to short, unsigned, etc.
If you have a look at ".../llvm/tools/clang/include/clang/AST/BuiltinTypes.def", you'll see that it declares the builtin types like int and long long. It's not entirely straightforward, though. You will need to modify quite a bit of code; for example, there are portions of the type definitions in ".../llvm/tools/clang/lib/Sema/Sema.cpp" and ".../llvm/tools/clang/lib/AST/Type.cpp". If you grep for Int128 (a good choice, as clang itself doesn't use it [much], as opposed to, for example, size_t), you will see that it turns up in a lot of places. You'd have to cover all (or at least most) of those places with additional code to introduce new types of your own making.
I would say that it's probably much easier to do something like clang -include cstdint myprog.cpp. In other words, make sure that the #include <cstdint> [or your own version of the same kind of file] is done behind the scenes in the compiler - you could add this to your driver in your own code too.
I need to work on a project that was written in MSVC++ 6.0 SP6.
I DIDN'T write the project. I know very little about its inner workings. I DO know it WAS POSSIBLE to build it in the past.
While trying to build this project, which was built successfully in the past (not by me), I get the error:
Conversion to enumeration type requires an explicit cast (static_cast, C-style cast or function-style cast)
for example:
error C2664: 'strncpy' : cannot convert parameter 2 from 'const unsigned short *' to 'const char *'
error C2664: 'void __cdecl CString::Format(const unsigned short *,...)' : cannot convert parameter 1
for a few dozen implicit conversions. I mustn't change the code. How can I force the compiler to accept the implicit conversions?
I mustn't change the code. How can I force the compiler to accept the implicit conversions?
Quite likely you need to get the same compiler that was used for the code in the first place, and use that.
If my guess (in a comment on unwind's answer) is correct about that unsigned short* error then it's simply not possible to compile this code in Unicode mode, because the source is insufficiently portable. Suppressing the error for the conversion, even if it's possible via some compiler setting, will just result in code that compiles but doesn't work.
I'd expect that also to imply that the old dll probably isn't compatible with the rest of your current code, but if you've been using it up to now then either I'm wrong about the reason, or else you've got away with it somehow.
That sounds crazy.
The use of unsigned short * with string-handling functions like strncpy() initially seems to make no sense at all. On second thought, though, it makes me wonder if there is some kind of "wide character" configuration that is failing. If strncpy() was "re-targeted" by the compiler to work on 16-bit characters, having it expect unsigned short * makes sense and would explain why the code passes it such arguments. At least it "kind of" explains it; it's still odd.
You can't. There are no such implicit conversions defined by the C++ language.
Visual C++ 6.0 was a law unto itself; by implementing something that merely looked a bit like the C++ language, it may have accepted this invalid code.
C++ is a typesafe language. But it allows you to tell the compiler to "shut up" by the evil known as casting.
Casting from integers to enums is often a necessary "evil" cast. For example, you cannot loop directly over the values of an enum, so where you have a restricted set of enumerated values you have to loop over an integer and cast it to the enum type.
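A small sketch of that pattern (the enum and its values are made up purely for illustration):
#include <iostream>

enum Colour { Red = 0, Green, Blue, ColourCount };  // hypothetical enum

int main() {
    // The loop variable has to be an integer, because ++ is not defined
    // for a plain enum; each value is cast back to the enum type.
    for (int i = Red; i < ColourCount; ++i) {
        Colour c = static_cast<Colour>(i);
        std::cout << "colour value " << c << '\n';
    }
}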
Sometimes you do need to cast data structures to const char * rather than const void *, just so you can perform pointer arithmetic. However, for the purposes of strncpy it is difficult to see why you would want to pass it unsigned shorts. If these are wide characters (and the old compiler treated wchar_t as a typedef for unsigned short rather than a distinct built-in type), then it may be "safe" to cast them to const wchar_t * and use a wide-string copy. You could also use C++ strings, i.e. std::string and std::wstring.
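For example, assuming a Windows-style toolchain where wchar_t is 16 bits wide (the buffer below is only a stand-in for the legacy data):
#include <iostream>
#include <string>

int main() {
    // Stand-in for the legacy buffer: on VC6, wchar_t was a typedef for
    // unsigned short, which is why the old code passes unsigned short* around.
    const unsigned short legacy[] = { 'H', 'i', 0 };

    // Only reasonable where wchar_t is 16 bits wide and the buffer really
    // holds wide-character text; on other platforms this cast is wrong.
    std::wstring copy(reinterpret_cast<const wchar_t*>(legacy));
    std::wcout << copy << L'\n';
}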
If you really do not wish to update the source code for ISO compliance, then your best bet is to use the original VC++ 6.0 compiler for your legacy code. Not least because, even though you know this code works, if it were compiled with a different compiler it would be different code and might no longer work. Any undefined or implementation-defined compiler behaviour, either exploited or used inadvertently, could cause problems if a different compiler is used.
If you have an MSDN subscription, you can download all previous versions of VC++ for this purpose.
When I moved a program from a Mac to this Windows PC, the VC++ 2008 compiler started giving me errors for passing unsigned ints to the cmath pow() function. As I understand it, this function is not overloaded to accept anything but floating-point numbers.
Is there some compiler flag/setting that will ignore these errors? Also does anyone know how to find the documentation on the VC++ compiler?
Edit
This isn't a warning, it's an error. However, for me it's not an issue since my program is only dealing with numbers that come out as integers, so I don't care that they aren't floats. If it was just warnings I would move on with my life, but it's not letting me compile. Can I suppress errors somehow? Like I said, the errors aren't coming up on my Mac and the program is fine.
Regarding other answers here, it is not a good idea to tell the question author to turn off this warning. His code is broken - he's passing an unsigned int instead of a float. You should be telling him to fix his code!
This isn't a warning, it's an error. However, for me it's not an issue since my program is only dealing with numbers that come out as integers, so I don't care that they aren't floats. If it was just warnings I would move on with my life, but it's not letting me compile. Can I suppress errors somehow? Like I said, the errors aren't coming up on my Mac and the program is fine.
Integers and floats use different representations internally. If you have the same number in an int and a float, the bit pattern inside the storage for them is completely different. You cannot under any circumstances whatsoever expect your code to work if you are passing an integer when you should be passing a float.
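To make that concrete, here is a small sketch (assuming a 32-bit int and IEEE-754 float) that prints the raw bit patterns of the "same" value stored both ways:
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    int   i = 2;
    float f = 2.0f;
    std::uint32_t ibits = 0, fbits = 0;
    std::memcpy(&ibits, &i, sizeof ibits);  // copy out the raw bit patterns
    std::memcpy(&fbits, &f, sizeof fbits);
    std::printf("int 2   : 0x%08X\n", static_cast<unsigned>(ibits));  // 0x00000002
    std::printf("float 2 : 0x%08X\n", static_cast<unsigned>(fbits));  // 0x40000000
}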
Furthermore, I assert that your Mac code is either silently using an overloaded version of that function (e.g. on that platform you are compiling it as C++), or you believe it works when in fact it is working by chance or not actually working.
Addendum
No compiler ever written has the ability to turn off errors.
A warning means the compiler thinks you're making a mistake.
An error means the compiler doesn't know what to do.
There are a couple of options:
In C, the solution is simply to cast the ints to doubles:
pow((double)i, (double)j)
In C++, you can do the same, although you should use a C++-style cast:
pow(static_cast<double>(i), static_cast<double>(j))
But a better idea is to use the overload C++ provides:
std::pow(static_cast<double>(i), j);
The base still has to be a floating-point value, but at least the exponent can be an int.
The std:: prefix probably isn't necessary (most compilers make the function available in the global namespace as well).
Of course, to access the C++ versions of the function, you have to include the C++ version of the header.
So instead of #include <math.h> you need to #include <cmath>
C++ provides C++ versions of every C header, using this naming convention. If the C header is called foo.h, the C++ version will be cfoo. When you're writing in C++, you should always prefer these versions.
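Putting the pieces together, a minimal sketch of the fix (variable names are just illustrative):
#include <cmath>     // C++ header: brings the pow overloads into namespace std
#include <iostream>

int main() {
    unsigned int base = 3;
    unsigned int exponent = 4;
    // Convert explicitly so there is no ambiguity about which overload is meant.
    double result = std::pow(static_cast<double>(base), static_cast<int>(exponent));
    std::cout << result << '\n';  // prints 81
}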
I don't know of a flag, but getting rid of the warnings was easy enough for me. Just double-click on each of the warnings in the "Task List" and add the appropriate cast, whether you prefer
(double) my_variable
or
static_cast<double>(my_variable)
I'm guessing that if you're getting the ambiguity diagnostic, there are multiple pow overloads in scope. It's better to be explicit in my opinion anyway. For what it's worth, my vote goes with the static_cast option.
As Mehrdad mentioned, use the #pragma warning syntax to disable a warning. Documentation is here - http://msdn.microsoft.com/en-us/library/2c8f766e.aspx
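For reference, a sketch of that #pragma syntax; 4244 (an implicit-conversion warning) is just one example of something that can be suppressed this way, whereas the ambiguous-call error from pow itself is a hard error and cannot be:
#include <cmath>

int main() {
    // Scoped suppression: push the current warning state, disable one
    // specific warning number, then restore the state afterwards.
#pragma warning(push)
#pragma warning(disable : 4244)  // e.g. "conversion from 'double' to 'int', possible loss of data"
    int truncated = std::pow(2.0, 10);   // double result silently truncated to int
#pragma warning(pop)
    return truncated == 1024 ? 0 : 1;
}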
I would be inclined to fix the warnings rather than hide them though!
C++ has overloads for pow/powf for int exponents. Heed the warning.
Don't ignore this or any warnings. Fix them. The compiler is your friend, trying to get you to write good code. It's a friend that believes in tough love, but it is your friend.
If you have an unsigned int and need a float, convert your unsigned int to a float.
And the MSDN Library is the documentation for both the VC++ implementation of the language and the IDE itself.