error: cast from 'char*' to 'unsigned int' loses precision - c++

I am compiling some code, which gives me an error in the following function:
inline char *align(char *var, DataType_e type)
{
    return (DataTypeSize[type] == 0) ? var :
        (char*) (((unsigned int)(var) + DataTypeSize[type] - 1) /
                 DataTypeSize[type] * DataTypeSize[type]);
}
The error is reported on the line containing "(unsigned int)(var)":
error: cast from 'char*' to 'unsigned int' loses precision
If I change "unsigned int" to "unsigned long", the compilation works, but I don't get the expected results when running my program. Any idea how to resolve this issue?

In C and C++ you should use [u]intptr_t if you need to convert a pointer to an integer type (though you should avoid such conversions in the first place).
Where these optional types exist, they are guaranteed not to lose information.

The uintptr_t type is only guaranteed to be able to hold object pointers. Pointers to other entities, and pointers to member functions in particular, can be larger.
inline char *align(char *var, DataType_e type)
{
    size_t alignSize = DataTypeSize[type];
    if (alignSize <= 1) {
        return var;   // nothing to align to
    }
    uintptr_t varInt = reinterpret_cast<uintptr_t>(var);
    varInt = alignSize * ((varInt + alignSize - 1) / alignSize);
    return reinterpret_cast<char *>(varInt);
}

When you do math with pointers you should use pointer arithmetic rather than casting your pointer values to 'int' or 'long', calculating, and casting back to a pointer. That approach is prone to bad results, because the compiler cannot enforce alignment rules during the integer calculations.
I'm fairly sure that the 'unexpected' result of the function in your example has nothing to do with the cast at all. You should explain more about the calculations done with the DataTypeSize[type] values and what you want to achieve with them.

Related

C++ warn when storing 32 bit value in a 64 bit variable

I recently discovered a hard-to-find bug in a project I am working on. The problem was that a calculation had a uint32_t result, which we then stored in a uint64_t variable. We expected the result to be computed as a uint64_t, because we knew the value could be too big for a 32-bit unsigned integer.
My question is: is there a way to make the compiler (or a static analysis tool like clang-tidy) warn me when something like this happens?
An example:
#include <cstdint>
#include <cstdlib>
#include <iostream>

constexpr uint64_t MUL64 { 0x00000000ffffffff };
constexpr uint32_t MUL32 { 0xffffffff };

int main() {
    const uint32_t value { 0xabababab };
    const uint64_t value1 { MUL64 * value }; // the result is a uint64_t because
                                             // MUL64 is a uint64_t
    const uint64_t value2 { MUL32 * value }; // I'd like to have a warning here
    if (value1 == value2) {
        std::cout << "Looks good!\n";
        return EXIT_SUCCESS;
    }
    std::cout << "Whoopsie\n";
    return EXIT_FAILURE;
}
Edit:
The overflow was expected, i.e. we knew that we would need a uint64_t to store the calculated value. We also know how to fix the problem, and we changed it later to something like:
const uint64_t value2 { static_cast<uint64_t>(MUL32) * value };
That way the upper 32 bits aren't cut off during the calculation. But things like that may still happen from time to time, and I just want to know whether there is a way to detect this kind of mistake.
Thanks in advance!
Greetings,
Sebastian
The multiplication behavior for unsigned integral types is well-defined to wrap around modulo 2 to the power of the width of the integer type. Therefore there isn't anything here that the compiler could be warning about. The behavior is expected and may be intentional. Warning about it would give too many false positives.
Also, in general the compiler cannot test for overflow at compile-time outside a constant expression evaluation. In this specific case the values are obvious enough that it could do that though.
Warning about any widening conversion after arithmetic would very likely also be very noisy.
I am not aware of any compiler flag that would add warnings for the reasons given above.
Clang-tidy does have a check named bugprone-implicit-widening-of-multiplication-result specifically for this case of performing a multiplication in a narrower type which is then implicitly widened. It seems the check is present since LLVM 13. I don't think there is an equivalent for addition though.
This check works here as expected:
<source>:11:29: warning: performing an implicit widening conversion to type 'const uint64_t' (aka 'const unsigned long') of a multiplication performed in type 'unsigned int' [bugprone-implicit-widening-of-multiplication-result]
const uint64_t value2 { MUL32 * value }; // i'd like to have a warning here
^
<source>:11:29: note: make conversion explicit to silence this warning
const uint64_t value2 { MUL32 * value }; // i'd like to have a warning here
^~~~~~~~~~~~~
static_cast<const uint64_t>( )
<source>:11:29: note: perform multiplication in a wider type
const uint64_t value2 { MUL32 * value }; // i'd like to have a warning here
^~~~~
static_cast<const uint64_t>()
Clang's undefined behavior sanitizer also has a check that flags all unsigned integer overflows at runtime, which is not normally included in -fsanitize=undefined. It can be included with -fsanitize=unsigned-integer-overflow. That will very likely require adding suppressions for intended wrap-around behavior. See https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html for details.
It seems however that this check isn't applied here since the arithmetic is performed by the compiler at compile-time. If you remove the const on value2, UBSan does catch it:
/app/example.cpp:11:29: runtime error: unsigned integer overflow: 4294967295 * 2880154539 cannot be represented in type 'unsigned int'
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /app/example.cpp:11:29 in
Whoopsie
GCC does not seem to have an equivalent option.
If you want consistent warnings for overflow in unsigned arithmetic, you need to define your own wrapper classes around the integer types that perform the overflow check and, for example, throw an exception when it fails. Alternatively, you can implement functions for overflow-safe addition/multiplication, which you would then use instead of the + and * operators.

Recommended way to perform a cast to a smaller type on micro controller in C++

I am programming for the Arduino Due, which is based on a 32-bit microcontroller. I want to read an ADC result register (it is 32 bits wide if I am right, but only the lower 12 bits are meaningful, which is the resolution of the ADC), and write it at a given location in an array of 16-bit integers.
This works:
volatile uint16_t Buf[nchannels];
[... some code...]
void ADC_Handler() {
    for (int i = 0; i < nchannels; i++)
    {
        Buf[i] = (volatile uint16_t) * (ADC->ADC_CDR + channels[i]); // my modification
    }
    FlagConversion = true;
}
But using instead a more "explicit" cast does not work:
Buf[i] = dynamic_cast<volatile uint16_t *>(ADC->ADC_CDR + channels[i]);
Produces:
"error: cannot dynamic_cast '(((RoReg*)1074528336) + ((sizetype)(((unsigned int)channels[i]) * 4)))' (of type 'RoReg* {aka volatile long unsigned int*}') to type 'volatile uint16_t* {aka volatile short unsigned int*}' (target is not pointer or reference to class)"
and similar unclear errors with static and reinterpret casts:
"error: cannot dynamic_cast '(((RoReg*)1074528336) + ((sizetype)(((unsigned int)channels[i]) * 4)))' (of type 'RoReg* {aka volatile long unsigned int*}') to type 'volatile uint16_t* {aka volatile short unsigned int*}' (target is not pointer or reference to class)"
and
"error: invalid conversion from 'volatile uint16_t* {aka volatile short unsigned int*}' to 'uint16_t {aka short unsigned int}"
Any idea why these more explicit casts fail?
What would be the best practice here?
A 'safe' way to do the cast is to first dereference the pointer to the 32-bit data, then mask out the upper 16 bits of that data item, then static_cast the result to your 16-bit destination data:
Buf[i] = static_cast<volatile uint16_t>( *(ADC->ADC_CDR + channels[i]) & 0x0FFFF );
It is always best to use the 'simplest' form of cast, and to avoid both C-style casts and reinterpret_cast whenever possible. In this case, it makes little sense to cast the pointer, and it is much safer and simpler to convert/cast the data itself.
Strictly speaking, the masking with 0x0FFFF is unnecessary, since the narrowing conversion discards the upper bits anyway. However, it makes it clear to any future reader of your code that you do, indeed, know what you're doing!

Error when compiling C++ program: "error: cast from ‘double*’ to ‘int’ loses precision"

I get 2 errors when trying to compile this code:
#include <iostream>
using namespace std;

int main() {
    int i;
    char myCharArray[51] = "This string right here contains exactly 50 chars.";
    double myDoubleArray[4] = {100, 101, 102, 103};
    char *cp, *cbp;
    double *dp, *dbp;
    dp = &myDoubleArray[0];
    dbp = myDoubleArray;
    cp = &myCharArray[0];
    cbp = myCharArray;
    while ((cp - cbp) < sizeof(myCharArray)) { cp++; dp++; }
    cout << "Without cast: " << (dp - dbp) << endl;
    cout << "      Cast 1: " << ((int *) dp - (int *) dbp) << endl;
    cout << "      Cast 2: " << ((int) dp - (int) dbp) << endl;
}
The errors I get are:
error: cast from ‘double*’ to ‘int’ loses precision [-fpermissive]
error: cast from ‘double*’ to ‘int’ loses precision [-fpermissive]
g++ won't let me compile the program. I'm asking what I could change to make it compile.
cast from ‘double*’ to ‘int’ loses precision
means exactly what it says: the number of bits in an int is less than the number of bits in a pointer on your platform. On some platforms it helps to change the int to an unsigned int, because there a pointer can be stored in an unsigned int. An unsigned int has one more bit for the value because there is no need to distinguish between positive and negative, and pointers are always non-negative. But on your 64-bit platform even unsigned int is too small.
And even better is to use the types for such things to make your code more portable. Have a look for uintptr_t
Your "Without cast" line performs pointer subtraction, which yields the difference (in units of the size of the pointed-to type) between two pointers. If the two pointers point to elements of the same array, or just past the end of it, then the difference is the number of array elements between them. The result is of the signed integer type ptrdiff_t.
That's a perfectly sensible thing to do.
Your second line ("Cast 1:") converts the pointers (which are of type double*) to int* before the subtraction. That in effect pretends that the pointers are pointing to elements of an array of int, and determines the number of elements between the int objects to which they point. It's not at all clear why you'd want to do that.
Your third line ("Cast 2:") converts both pointer values to int before subtracting them. If int is not big enough to hold the converted pointer value, then the result may be nonsense. If it is, then on most systems it will probably yield the distance between the two pointed-to objects in bytes. But I've worked on systems (Cray T90) where the byte offset of a pointer is stored in the high-order 3 bits of the pointer value. On such a system your code would probably yield the distance between the pointed-to objects in words. Or it might yield complete nonsense. In any case, the behavior is undefined.
The problem with the conversion from double* to int isn't just that it loses precision (which is what your compiler happened to complain about). It's that the result of the conversion doesn't necessarily mean anything.
The easiest, and probably the best, way to get your code to compile is to delete the second and third lines.
If you want a solution other than that, you'll have to explain what you're trying to do. Converting the pointer values to uintptr_t will probably avoid the error message, but it won't cause what you're doing to make sense.

error: cast from ‘...’ to ‘unsigned int’ loses precision [-fpermissive]

In my code Graph is a class having a member node, which is a structure. When I do
unsigned int id = ((unsigned int)n - (unsigned int)_nodes) / sizeof(Graph::node);
I get the following error (compiled on 64-bit Linux):
error: cast from ‘Graph::node* {aka Graph::node_st*}’ to ‘unsigned int’ loses precision [-fpermissive]
Googled and found a similar question, but it does not seem to me that the answer is applicable here (note that I want to use the size of the object, not the object itself).
Thank you in advance for any suggestions!
If n and _nodes point to Graph::node, i.e., they are of type Graph::node * (which seems to be the case from the error message), and if you want to calculate the "distance" between the two in terms of the number of Graph::node elements, you can do:
unsigned int id = n - _nodes;
In C and C++, pointer arithmetic will result in the difference being the number of elements (instead of number of bytes).
For this to work portably, both n and _nodes must point to a contiguous block of Graph::node values, and n should be "after" _nodes. If you can get negative differences, you can use ptrdiff_t type instead of unsigned int.
The first answer in the SO post you have a link to provides an answer that should work for you.
Use
intptr_t id = ((intptr_t)n - (intptr_t)_nodes) / sizeof(Graph::node);

Can I cast an unsigned char* to an unsigned int*?

error: invalid static_cast from type ‘unsigned char*’ to type ‘uint32_t* {aka unsigned int*}’
uint32_t *starti = static_cast<uint32_t*>(&memory[164]);
I've allocated an array of chars, and I want to read 4 bytes as a 32-bit int, but I get a compiler error.
I know that I can bit shift, like this:
(start[0] << 24) + (start[1] << 16) + (start[2] << 8) + start[3];
And it will do the same thing, but this is a lot of extra work.
Is it possible to just cast those four bytes as an int somehow?
static_cast is meant to be used for "well-behaved" casts, such as double -> int.
You must use reinterpret_cast:
uint32_t *starti = reinterpret_cast<uint32_t*>(&memory[164]);
Or, if you are up to it, C-style casts:
uint32_t *starti = (uint32_t*)&memory[164];
Yes, you can convert an unsigned char* pointer value to uint32_t* (using either a C-style cast or a reinterpret_cast) -- but that doesn't mean you can necessarily use the result.
The result of such a conversion might not point to an address that's properly aligned to hold a uint32_t object. For example, an unsigned char* might point to an odd address; if uint32_t requires even alignment, you'll have undefined behavior when you try to dereference the result.
If you can guarantee somehow that the unsigned char* does point to a properly aligned address, you should be ok.
I am used to BDS2006 C++, but anyway this should work fine on other compilers too:
char memory[164];
int *p0, *p1, *p2;
p0 = (int*)(void*)(memory);        // p0 starts at the beginning
p1 = (int*)(void*)(memory + 64);   // p1 starts at the 64th char
p2 = (int*)(void*)(&memory[64]);   // p2 starts at the 64th char
You can use reinterpret_cast as suggested by faranwath but please understand the risk of going that route.
The value you get back will be radically different on a little-endian system vs a big-endian system. Your shifting method will work the same way in both cases.