In my code, Graph is a class with a member node, which is a struct. When I do
unsigned int id = ((unsigned int)n - (unsigned int)_nodes) / sizeof(Graph::node);
I get the following error (compiled on 64-bit Linux):
error: cast from ‘Graph::node* {aka Graph::node_st*}’ to ‘unsigned int’ loses precision [-fpermissive]
I googled and found a similar question, but it does not seem to me that its answer applies here (note that I want to use the size of the object, not the object itself).
Thank you in advance for any suggestions!
If n and _nodes point to Graph::node, i.e., they are of type Graph::node * (which seems to be the case from the error message), and if you want to calculate the "distance" between the two in terms of the number of Graph::node elements, you can do:
unsigned int id = n - _nodes;
In C and C++, subtracting two pointers gives the difference as a number of elements (not a number of bytes).
For this to work portably, both n and _nodes must point into a contiguous block of Graph::node values, and n should be "after" _nodes. If the difference can be negative, use the ptrdiff_t type instead of unsigned int.
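For example, a minimal sketch of the idea, using a standalone node struct standing in for the real Graph::node:

#include <cstddef>
#include <iostream>

struct node { int data[4]; };

int main()
{
    node nodes[8];                   // contiguous block, like _nodes presumably is
    node* n = &nodes[5];             // some element inside the block

    std::ptrdiff_t id = n - nodes;   // element count, not bytes
    std::cout << id << '\n';         // prints 5
}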
The first answer in the SO post you linked to should work for you.
Use
intptr_t id = ((intptr_t)n - (intptr_t)_nodes) / sizeof(Graph::node);
Let's say I want to store the low 16 bits of a uint32_t in a uint16_t on Windows. I could do it either
uint32_t value = 123456789;
uint16_t low1 = value; //like this
uint16_t low2 = value & 0xFFFF; //or this
There appears to be no difference in the results but I couldn't find any documentation explicitly stating that this is defined behavior. Could it be different under circumstances X or Y? Or is this just how it works?
The C++ standard guarantees that assignment to and initialization of unsigned types gives you the value modulo 2^n, where n is the number of bits in the value representation of the unsigned type.
In Windows all bits participate in the value representation.
Hence using a bitmask serves no purpose other than to put a little stumbling block in the way for the future, when one might change the types.
If you absolutely want to use a mask, e.g. to avoid a compilation warning from an over-zealous compiler, then you can do it in a type-independent way like this, assuming that the type is unsigned:
uint16_t low2 = value & uint16_t(-1);
This relies on the aforementioned modulo-2^n guarantee.
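To make that concrete, here is the asker's example worked through (the value is the one from the question; the hex comments are just for illustration):

#include <cstdint>

uint32_t value = 123456789;            // 0x075BCD15
uint16_t low1 = value;                 // 52501 == 0xCD15 == value mod 2^16
uint16_t low2 = value & uint16_t(-1);  // same result: the mask is 0xFFFF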
If you use -Wconversion, the compiler will give you a warning that value will be truncated:
warning: conversion to 'uint16_t {aka short unsigned int}' from 'uint32_t {aka unsigned int}' may alter its value [-Wconversion]
uint16_t low1 = value;
^
With the bitmask, g++ does not produce a warning...
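If the goal is only to document the truncation and silence -Wconversion, an explicit cast works too (same value as in the question):

uint16_t low3 = static_cast<uint16_t>(value);  // explicit cast: no -Wconversion warning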
I get 2 errors when trying to compile this code:
#include <iostream>
using namespace std;
int main() {
    int i;
    char myCharArray[51] = "This string right here contains exactly 50 chars.";
    double myDoubleArray[4] = {100, 101, 102, 103};
    char *cp, *cbp;
    double *dp, *dbp;

    dp = &myDoubleArray[0];
    dbp = myDoubleArray;
    cp = &myCharArray[0];
    cbp = myCharArray;

    while ((cp-cbp) < sizeof(myCharArray)) { cp++; dp++; }

    cout << "Without cast: " << (dp-dbp) << endl;
    cout << "      Cast 1: " << ((int *) dp-(int *) dbp) << endl;
    cout << "      Cast 2: " << ((int) dp-(int) dbp) << endl;
}
The errors I get are:
error: cast from ‘double*’ to ‘int’ loses precision [-fpermissive]
error: cast from ‘double*’ to ‘int’ loses precision [-fpermissive]
g++ won't let me compile the program. I'm asking what I could change to make it compile.
cast from ‘double*’ to ‘int’ loses precision
is as simple as it reads: an int has fewer bits than a pointer on your platform, so the pointer value cannot be stored in it without losing information. Changing int to unsigned int only gains you a single bit (there is no sign bit to spend, and pointer values are never negative), so it only helps on platforms where a pointer actually fits into an unsigned int; on 64-bit Linux, where pointers are 64 bits wide and unsigned int is 32 bits, it does not.
Even better is to use the integer type intended for exactly this, which also makes your code more portable: have a look at uintptr_t.
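Applied to the code in the question, the offending line would become something like this (a sketch; note that the difference is then a raw address difference in bytes on typical platforms, not an element count):

#include <cstdint>   // for uintptr_t

cout << "      Cast 2: " << ((uintptr_t) dp - (uintptr_t) dbp) << endl;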
Your "Without cast" line performs pointer subtraction, which yields the difference (in units of the size of the pointed-to type) between two pointers. If the two pointers point to elements of the same array, or just past the end of it, then the difference is the number of array elements between them. The result is of the signed integer type ptrdiff_t.
That's a perfectly sensible thing to do.
Your second line ("Cast 1:") converts the pointers (which are of type double*) to int* before the subtraction. That in effect pretends that the pointers are pointing to elements of an array of int, and determines the number of elements between the int objects to which they point. It's not at all clear why you'd want to do that.
Your third line ("Cast 2:") converts both pointer values to int before subtracting them. If int is not big enough to hold the converted pointer value, then the result may be nonsense. If it is big enough, then on most systems it will probably yield the distance between the two pointed-to objects in bytes. But I've worked on systems (Cray T90) where the byte offset of a pointer is stored in the high-order 3 bits of the pointer value. On such a system your code would probably yield the distance between the pointed-to objects in words. Or it might yield complete nonsense. In any case, the behavior is undefined.
The problem with the conversion from double* to int isn't just that it loses precision (which is what your compiler happened to complain about). It's that the result of the conversion doesn't necessarily mean anything.
The easiest, and probably the best, way to get your code to compile is to delete the second and third lines.
If you want a solution other than that, you'll have to explain what you're trying to do. Converting the pointer values to uintptr_t will probably avoid the error message, but it won't cause what you're doing to make sense.
I am getting
g++ -O3 cache-l1-line.cpp -o cache-l1-line -lrt
cache-l1-line.cpp: In function 'int main()':
cache-l1-line.cpp:33:58: warning: format '%d' expects argument of type 'int', but argument 2 has type 'long unsigned int' [-Wformat]
On my school's SunFire server ... but not on my machine (Arch Linux). Why might that be? The line in question seems to be
printf("%d, %1.2f \n", i * sizeof(int), totalTime/TIMES);
Where i is defined:
for (int i = 4; i <= MAX_STRIDE/sizeof(int); i*=2) {
What's the problem? Full source is on GitHub (link to revision).
sizeof() returns a size_t, not an int. You should always cast such "special" types to the ones expected by your printf format:
printf("%d, %1.2f \n", (int)(i * sizeof(int)), totalTime/TIMES);
Note: some people prefer digging into their library headers to see what the types are typedef'd to, and then using the matching type in the format string. However, this has two problems: first, the typedef can differ between compilers. Second, a size_t is not a long, it's a size_t; in C90 there is no format specifier that takes a size_t at all. C99 and C++11 added %zu for exactly this, but older compilers and runtimes do not support it.
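For example, either of these would silence the warning on the line in question (the second form needs a C99/C++11-conforming printf):

size_t n = i * sizeof(int);
printf("%lu, %1.2f \n", (unsigned long)n, totalTime/TIMES);   // cast to a type whose specifier you know
printf("%zu, %1.2f \n", n, totalTime/TIMES);                  // or use %zu, the dedicated size_t specifier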
On a 64-bit architecture the expression is evaluated at 64-bit width. Thus the proper specifier on that architecture would be %llu. Or, conversely, the expression should be cast to the width and type expected by the %d specifier.
EDIT: %llu instead of %lld -- thanks for comment.
The expression i * sizeof(int) has a different width and type on different architectures; it is apparently unsigned int on your i3 system and unsigned long long on your i7 system.
Normally printf is not type safe. This means that if you have something like this:
struct Point { int x; int y; } point;
printf("%d", point);
it will compile but will probably crash when executed. But gcc has an extension that allows checking the format string against the arguments.
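That extension is the format function attribute: you can put it on your own printf-like functions and gcc/clang will check their call sites the same way they check printf itself. A small sketch (log_msg is a hypothetical wrapper, not part of the asker's code):

#include <cstdarg>
#include <cstdio>

// The attribute says: argument 1 is a printf-style format string and the
// variadic arguments start at position 2 -- check them at every call site.
__attribute__((format(printf, 1, 2)))
void log_msg(const char* fmt, ...);

void log_msg(const char* fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    vprintf(fmt, args);
    va_end(args);
}

int main()
{
    log_msg("%d\n", sizeof(int));   // warns (with -Wformat / -Wall): '%d' expects int, argument is a size_t
}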
"%d" expects an int but "i * sizeof(int)" is of type long unsigned int that's why you are getting a warning.
I am new to programming and I have been trying to create a program that would solve any Sudoku puzzle. However, I've been getting a lot of errors, and in this one I just can't figure out what's wrong.
This is the code where the error is:
for (short o = indice; o >= divergencias[n_diver]; o--) {
    N = historico[o];
    P = tabela[N];   //Line 205
    tabela[N] = 0;   //Line 206
}
indice -= divergencias[n_diver];
n_diver--;
}
And the errors, which happened on the lines marked with comments, are:
C:\(...)\main.cpp|205|error: invalid conversion from 'short unsigned int*' to 'short unsigned int'|
and
C:\(...)\main.cpp|206|error: incompatible types in assignment of 'int' to 'short unsigned int [9]'|
I've been searching for this error and didn't find any satisfying answer to it. Moreover, the website from which I learned what I know about programming says that writing something like b = billy[a+2]; is valid. So I just can't understand what's wrong with this...
It looks like tabela is declared as short unsigned tabela[9][9]. In order to get an item of type unsigned short from it you have to provide two indexes, not one.
On the other hand, if you are looking to get an entire sub-array from tabela, the left side of the assignment needs to be compatible with a 1-D array of unsigned short, for example, an unsigned short* pointer.
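A minimal sketch of the difference, assuming (as guessed above) that tabela really is a 9x9 board of unsigned short:

unsigned short tabela[9][9];

void exemplo()
{
    unsigned short P = tabela[3][5];     // one cell: two indexes give an unsigned short
    unsigned short* linha = tabela[3];   // one index gives a whole row, which decays to unsigned short*
    linha[5] = P;                        // same cell as tabela[3][5]
}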
I am compiling some code, which gives me an error in the following function:
inline char *align(char *var, DataType_e type)
{
return (DataTypeSize[type] == 0) ? var :
(char*) (((unsigned int)(var) + DataTypeSize[type]-1) /
DataTypeSize[type] * DataTypeSize[type]);
}
The following error is reported for the line containing (unsigned int)(var):
error: cast from 'char*' to 'unsigned int' loses precision
If I change unsigned int to unsigned long, the compilation works, but I don't get the expected results when running my program. Any idea how to resolve this issue?
In C and C++ you should use [u]intptr_t if you need to convert a pointer to an integer type (something you should avoid doing in the first place).
If they exist, these types are guaranteed to not lose information.
The uintptr_t type is large enough to hold a pointer to ordinary (POD) data. Pointers to members, and pointers to member functions in particular, can be larger.
inline char *align(char *var, DataType_e type)
{
    size_t alignSize = DataTypeSize[type];
    if (1 >= alignSize) {
        return var;                    // size 0 or 1: nothing to align
    }
    uintptr_t varInt = reinterpret_cast<uintptr_t>(var);
    varInt = alignSize * ((varInt + alignSize - 1) / alignSize);   // round up to a multiple of alignSize
    return reinterpret_cast<char *>(varInt);
}
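A quick self-contained check of the rounding above, with a hypothetical 8-byte element size standing in for DataTypeSize[type]:

#include <cstdint>
#include <cstdio>

int main()
{
    alignas(8) char buffer[64];
    const uintptr_t size = 8;                  // pretend DataTypeSize[type] == 8

    uintptr_t varInt = reinterpret_cast<uintptr_t>(buffer + 3);
    varInt = size * ((varInt + size - 1) / size);
    char* aligned = reinterpret_cast<char*>(varInt);

    std::printf("%td\n", aligned - buffer);    // prints 8: offset 3 rounded up to the next multiple of 8
}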
When you are going to do math on pointers, you should use pointer arithmetic instead of casting your pointer values to int or long, calculating, and casting back to a pointer. That approach is prone to bad results, because the compiler cannot enforce alignment rules for such calculations.
I'm pretty sure that the 'unexpected' result of the function in your example has nothing to do with the cast at all. You should explain more about the calculations done with the DataTypeSize[type] values and what you want to achieve with them.
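If the element sizes involved are powers of two, one cast-free alternative is std::align from <memory> (C++11), which adjusts a pointer within a buffer without any manual integer arithmetic. A sketch, not the asker's original code:

#include <cstddef>
#include <cstdio>
#include <memory>

int main()
{
    char buffer[64];
    void* p = buffer + 3;
    std::size_t space = sizeof(buffer) - 3;

    // Bump p up to the next 8-byte boundary, provided 8 more bytes still fit in 'space':
    if (std::align(8, 8, p, space)) {
        std::printf("adjusted by %zu bytes\n", (sizeof(buffer) - 3) - space);
    }
}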