Why system accepts without warning long int passed as int argument? - c++

Quick simple question (almost out of a curiosity):
If I declare, for example, a long int in a C++ program and then pass it to a function taking an int, I know it will work without any problem unless I give it a value that needs 4 bytes, which will lead to garbage being printed.
However, what surprises me is that it doesn't warn in any way about this. If I declare a 4-byte long int, the system knows it has 32 bits to store that value. But then, if I pass that same long int to a function that takes only an int (2 bytes), I'm assuming I'm using 16 bits of memory that shouldn't be used by this value.
Am I right? Or will it use only the lowest 16 bits from that long int received as argument? What is the process here?
Code example:
#include <stdio.h>

void test(int x) { // My question is why it accepts this?
    printf("%d", x);
}

int main() {
    long int y = 4294967200; // 32 bits
    test(y);
    return 0;
}

Most likely it's because you didn't enable that warning in your compiler. For example, GCC with -Wconversion enabled gives:
warning: conversion to ‘int’ from ‘long int’ may alter its value
If the question is why such warnings aren't enabled by default, it's because a lot of very common code patterns produce a large quantity of spurious warnings due to automatic promotions. For example, unsigned char p[10]; ... p[1] ^= 1;.
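To make that concrete, here is a minimal sketch (the file and function names are made up, and the invocation is illustrative) of the kind of innocuous code that trips the warning once -Wconversion is on:

// warn.cpp -- compile with: g++ -Wconversion -c warn.cpp
unsigned char p[10];

void flip() {
    // p[1] and 1 are promoted to int, the ^ happens in int, and the
    // compound assignment narrows the int result back to unsigned char.
    // -Wconversion (at least in the GCC versions the answer refers to)
    // flags that narrowing even though no data can actually be lost here.
    p[1] ^= 1;
}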

Related

how does the short(vector.size()) command conversion work in C++?

I don't know any way to get the size of a vector other than the .size() member function, and it works very well, but it returns a value of type long long unsigned int. In many cases that's fine, but I'm sure my program will never have a vector so big that it needs a return type that large; a short int is more than enough.
I know that for today's computers those few unused bytes are irrelevant, but I don't like to leave these "loose ends" even if they are small, and when I was programming I came across some details that bothered me.
Look at these examples:
for(short int X = 0 ; X < Vector.size() ; X++){
}
compiling this, I receive this warning:
warning: comparison of integer expressions of different signedness: 'short int' and 'std::vector<unsigned char>::size_type' {aka 'long long unsigned int'} [-Wsign-compare]|
This is because the .size() return type is different from the short int I'm comparing it against: X is a short int and Vector.size() returns a long long unsigned int, which was expected. So if I do this:
for(size_t X = 0 ; X < Vector.size() ; X++){
}
the problem is gone, but by doing this I'm creating a long long unsigned int (the size_t variable X) and .size() is returning another long long unsigned int, so my computer allocates two long long unsigned int values. What can I do to get back a simple short int? I don't need anything more than that; long long unsigned int is overkill. So I did this:
for(short int X = 0 ; X < short(Vector.size()) ; X++){
}
but... how is this working? short int X = 0 allocates a short int, nothing new, but what about short(Vector.size())? Is the computer allocating a long long unsigned int and then converting it to a short int, or is the compiler "changing" the return of the .size() function so that it naturally returns a short int and, in that case, never allocates a long long unsigned int? I know compilers are responsible for optimizing the code too, so is there any "problem" or "detail" when using this method? Since I rarely see anyone using it, what exactly is this short() doing in terms of memory allocation, and where can I read more about it?
(thanks to everyone who responded)
Forget for a moment that this involves a for loop; that's important for the underlying code, but it's a distraction from what's going on with the conversion.
short X = Vector.size();
That line calls Vector.size(), which returns a value of type std::size_t. std::size_t is an unsigned type, large enough to hold the size of any object. So it could be unsigned long, or it could be unsigned long long. In any event, it's definitely not short. So the compiler has to convert that value to short, and that's what it does.
Most compilers these days don't trust you to understand what this actually does, so they warn you. (Yes, I'm rather opinionated about compilers that nag; that doesn't change the analysis here.) So unless you turn that warning off, you'll see it. If you want to write code that doesn't generate it, you have to change the code to say "yes, I know, and I really mean it". You do that with a cast:
short X = short(Vector.size());
The cast tells the compiler to call Vector.size() and convert the resulting value to short. The code then assigns the result of that conversion to X. So, more briefly, in this case it tells the compiler that you want it to do exactly what it would have done without the cast. The difference is that because you wrote a cast, the compiler won't warn you that you might not know what you're doing.
Some folks prefer to write that cast with a static_cast:
short X = static_cast<short>(Vector.size());
That does the same thing: it tells the compiler to do the conversion to short and, again, the compiler won't warn you that you did it.
In the original for loop, a different conversion occurs:
X < Vector.size()
That bit of code calls Vector.size(), which still returns an unsigned type. In order to compare that value with X, the two sides of the < have to have the same type, and the rules for this kind of expression require that X gets promoted to std::size_t, i.e., that the value of X gets treated as an unsigned type. That's okay as long as the value isn't negative. If it's negative, the conversion to the unsigned type is okay, but it will produce results that probably aren't what was intended. Since we know that X is not negative here, the code works perfectly well.
But we're still in the territory of compiler nags: since X is signed, the compiler warns you that promoting it to an unsigned type might do something that you don't expect. Again, you know that that won't happen, but the compiler doesn't trust you. So you have to insist that you know what you're doing, and again, you do that with a cast:
X < short(Vector.size())
Just like before, that cast converts the result of calling Vector.size() to short. Now both sides of the < are the same type, so the < operation doesn't require a conversion from a signed to an unsigned type, so the compiler has nothing to complain about. There is still a conversion, because the rules say that values of type short get promoted to int in this expression, but don't worry about that for now.
Another possibility is to use an unsigned type for that loop index:
for (unsigned short X = 0; X < Vector.size(); ++X)
But the compiler might still insist on warning you that not all values of type std::size_t can fit in an unsigned short. So, again, you might need a cast. Or change the type of the index to match what the compiler thinks you need:
for (std::size_t X = 0; X < Vector.size(); ++X)
If I were to go this route, I would use unsigned int and if the compiler insisted on telling me that I don't know what I'm doing I'd yell at the compiler (which usually isn't helpful) and then I'd turn off that warning. There's really no point in using short here, because the loop index will always be converted to int (or unsigned int) wherever it's used. It will probably be in a register, so there is no space actually saved by storing it as a short.
Even better, as recommended in other answers, is to use a range-based for loop, which avoids managing that index:
for (auto& value: Vector) ...
In all cases, X has automatic storage duration, and the result of Vector.size() does not outlive the full expression in which it is created.
I don't need anything more than this, long long unsigned int is overkill
Typically, automatic duration variables are "allocated" either on the stack, or as registers. In either case, there is no performance benefit to decreasing the allocation size, and there can be a performance penalty in narrowing and then widening values.
In the very common case where you are using X solely to index into Vector, you should strongly consider using a different kind of for:
for (auto & value : Vector) {
// replace Vector[X] with value in your loop body
}
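Putting those pieces together, here is a minimal sketch (the vector's contents are made up for illustration) that compiles cleanly with the usual warnings enabled, using either the size_t index or the range-based loop:

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<unsigned char> Vector = {1, 2, 3, 4};

    // Index loop: std::size_t matches what size() returns,
    // so there is no signed/unsigned comparison to warn about.
    for (std::size_t X = 0; X < Vector.size(); ++X)
        std::cout << static_cast<int>(Vector[X]) << ' ';
    std::cout << '\n';

    // Range-based loop: no index to manage at all.
    for (auto& value : Vector)
        std::cout << static_cast<int>(value) << ' ';
    std::cout << '\n';
}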

Are signed hexadecimal literals possible?

I have an array of bitmasks, the idea being to use them to clear a specified number of the least significant bits of an integer that is being used as a set of flags. It is defined as follows:
int clearLow[10]=
{
0xffffffff, 0xfffffffe, 0xfffffffc, 0xfffffff8, 0xfffffff0, 0xffffffe0, 0xffffffc0, 0xffffff80, 0xffffff00, 0xfffffe00
};
Having recently switched to gcc 4.8, I have found that this array starts throwing warnings:
warning: narrowing conversion of ‘4294967295u’ from ‘unsigned int’ to ‘int’ inside { } is ill-formed in C++11
(and so on for the remaining elements)
Clearly my hexadecimal literals are being taken as unsigned ints, and the fix is easy since, honestly, I do not care whether this array is int or unsigned int; it just needs to have the appropriate bits set in each element. But my question is this:
Are there any ways to set literals in hexadecimal, for the purposes of simply setting bits, without the compiler assuming them to be unsigned?
You say that you just want to use the values as operands to bit operations. That being the case, just always use unsigned types. That's the simple solution.
It looks like you just want an array of unsigned int to use for your bit masking:
const unsigned clearLow[] = {
0xffffffff, 0xfffffffe, 0xfffffffc, 0xfffffff8, 0xfffffff0, 0xffffffe0, 0xffffffc0, 0xffffff80, 0xffffff00, 0xfffffe00
};
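For completeness, a hedged usage sketch (the flags value is made up) showing the mask table clearing the low bits as intended:

#include <cstdio>

const unsigned clearLow[10] = {
    0xffffffff, 0xfffffffe, 0xfffffffc, 0xfffffff8, 0xfffffff0,
    0xffffffe0, 0xffffffc0, 0xffffff80, 0xffffff00, 0xfffffe00
};

int main() {
    unsigned flags = 0x12345677;     // made-up flag set
    flags &= clearLow[3];            // clear the 3 least significant bits
    std::printf("%#x\n", flags);     // prints 0x12345670
}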

Why is memcpy from int to char not working?

I have the hex value 0x48656c6c6f, in which every byte represents the ASCII value of a character in the string "Hello". I also have a char array that I want to insert these values into.
When I had a smaller hex value (for example, 0x48656c6c, which represents "Hell"), printing out the char array gave the correct output. But the following code prints "olle" (in little-endian order) rather than "olleH". Why is this?
#include <iostream>
#include <cstring>

int main()
{
    char x[6] = {0};
    int y = 0x48656c6c6f;
    std::memcpy(x, &y, sizeof y);
    for (char c : x)
        std::cout << c;
}
Demo is here.
Probably int is 32 bits on your machine, which means that the upper byte of your constant is cut off; so your int y = 0x48656c6c6f; is effectively int y = 0x656c6c6f; (by the way, converting an out-of-range value to a signed type is implementation-defined rather than strictly undefined behavior, but either way it's not what you want; with unsigned int the wrap-around would at least be well defined).
So, on a little endian machine the in-memory representation of y is 6f 6c 6c 65, which is copied to x, resulting in the "olle" you see.
To "fix" the problem, you should use a bigger-sized integer, which, depending on your platform, may be long long, int64_t or similar stuff. In such a case, be sure to make x big enough (char x[sizeof(y)+1]={0}) to avoid buffer overflows or to change the memcpy to copy only the bytes that fit in x.
Also, always use unsigned integers when doing these kind of tricks - you avoid UB and get predictable behavior in case of overflow.
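A minimal corrected sketch along those lines (assuming you only want the five "Hello" bytes and a little-endian machine):

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    char x[6] = {0};
    std::uint64_t y = 0x48656c6c6f;   // fits comfortably in 64 bits
    std::memcpy(x, &y, 5);            // copy only the 5 bytes that fit
    for (char c : x)                  // prints "olleH" on little-endian;
        std::cout << c;               // reverse x if you want "Hello"
    std::cout << '\n';
}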
Probably int is four bytes on your platform.
ideone does show the warning, but only if there is also an error:
http://ideone.com/TSmDk5
prog.cpp: In function ‘int main()’:
prog.cpp:7:13: warning: overflow in implicit constant conversion [-Woverflow]
prog.cpp:12:5: error: ‘error’ was not declared in this scope
int y = 0x48656c6c6f;
int is not guaranteed to be able to store it; it is probably only 32 bits on your machine. Use long long instead.
It is because an int is only 4 bytes on your platform and the H is being cut out when you provide a literal that is larger than that.

c bitfields strange behaviour with long int in struct

I am observing strange behaviour when I run the following code.
I create a bit field using a struct, where I want to use 52 bits, so I use long int.
The size of long int is 64 bits on my system; I check it inside the code.
Somehow, when I try to set one bit, it always sets two bits: one of them is the one I wanted to set and the second one is at the index of the first one plus 32.
Can anybody tell me why that is?
#include <stdio.h>

typedef struct foo {
    long int x:52;
    long int :12;
};

int main(){
    struct foo test;
    int index=0;

    printf("%ld\n",sizeof(test));

    while(index<64){
        if(test.x & (1<<index))
            printf("%i\n",index);
        index++;
    }

    test.x=1;
    index=0;

    while(index<64){
        if(test.x & (1<<index))
            printf("%i\n",index);
        index++;
    }
    return 0;
}
Sorry, I forgot to post the output, so my question was basically not understandable...
The output it gives me is the following:
8
0
32
index is of type int, which is probably 32 bits on your system. Shifting a value by an amount greater than or equal to the number of bits in its type has undefined behavior.
Changing the type of index alone won't help, because what matters is the type of the left operand of <<. Change 1<<index to 1L << index, or even 1LL << index; better still, use an unsigned constant such as 1UL << index or 1ULL << index, since shifting bits into the sign position of a signed type is ill-advised.
As others have pointed out, test is uninitialized. You can initialize it to all zeros like this:
struct foo test = { 0 };
The correct printf format for size_t is %zu, not %ld.
And it wouldn't be a bad idea to modify your code so it doesn't depend on the non-portable assumption that long is 64 bits; it can be as narrow as 32 bits. Consider using the uintN_t types defined in <stdint.h>.
I should also mention that bit fields of types other than int, unsigned int, signed int, and _Bool (or bool) are implementation-defined.
You have undefined behavior in your code, because you check the bits in test.x without initializing the structure. Since you don't initialize the variable, it will contain indeterminate (garbage) data.
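Pulling the answers together, here is a minimal corrected sketch (it assumes a compiler such as GCC that accepts 64-bit bit-field types, as noted above):

#include <stdio.h>
#include <stdint.h>

struct foo {
    int64_t x : 52;   /* bit-field types other than int are implementation-defined */
    int64_t   : 12;
};

int main(void){
    struct foo test = { 0 };        /* zero-initialize: no indeterminate bits */
    printf("%zu\n", sizeof(test));  /* %zu is the correct format for size_t */

    test.x = 1;
    for (int index = 0; index < 64; index++){
        if (test.x & (1ULL << index))   /* 64-bit constant: no UB when index >= 32 */
            printf("%i\n", index);
    }
    return 0;
}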

assigning a value to a long long integer using gcc on sparc solaris

I came across something that I find rather strange. The test program
#include <iostream>
using namespace std;

int main(int argc, char* argv[])
{
    cout << "hello" << endl;
    long unsigned l = 0x12345678;
    long long unsigned ll = 0x12345678;
    cout << sizeof(l) << endl;
    cout << sizeof(ll) << endl;
}
output is:
hello
4
8
No surprises there. The long int has a size of 4 bytes and the long long has a size of 8 bytes.
However, when I change it so that the long long is assigned
long long unsigned ll = 0x123456789;
at compile time I get
error: integer constant is too large for "long" type
Now this same test does compile if I force a 64 bit build using the option -m64. Am I doing something wrong or is this a bug in GCC?
Change that to
long long unsigned ll = 0x123456789ULL; // notice the suffix
Without the suffix, the literal is bigger than the maximum unsigned long value on your machine, and that, according to C++03 (but not C++11, which has long long), is undefined behavior. This means that anything can happen, including a compile-time error.
It's also worth noting that there is no long long in C++03, so it's not guaranteed to work; you're relying on an extension. You'd probably be better off using C++11 instead.
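If you want to check what the actual limits on your platform are, a quick sketch using the standard <limits> facility (the unsigned long long specialization requires C++11 or a compiler extension; the printed values depend on the platform and on -m32/-m64):

#include <iostream>
#include <limits>

int main()
{
    std::cout << "unsigned long max:      "
              << std::numeric_limits<unsigned long>::max() << '\n';
    std::cout << "unsigned long long max: "
              << std::numeric_limits<unsigned long long>::max() << '\n';
}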
The thing here is that many people seem to look at a line of code like your:
unsigned long long ll = 0x123456789; /* ANTI-PATTERN! Don't do this! */
and reason "oh, the type is unsigned long long, so the value is unsigned long long and it gets assigned", but that's just not how C works. Literals have their own type, that doesn't depend on the context in which they're being used. And the type of integer literals is int.
This is the same fallacy as when folks do:
const double one_third = 1 / 3; /* ANTI-PATTERN! Don't do this! */
Thinking "the type on the left is double, so this should assign 0.3333333...". That's just (again!) not how C works. The types of the literals being divided is still int, so the right hand side evaluates to exactly 0, which is then converted to double and stored in the one_third variable.
For some reason, this behavior is deeply non-intuitive to many people, which is why there are many variants of the same question.
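A minimal sketch showing both fixes discussed above (the printed values assume a conforming implementation):

#include <iostream>

int main()
{
    // The suffix gives the literal a 64-bit unsigned type up front,
    // so nothing is truncated before the assignment happens.
    unsigned long long ll = 0x123456789ULL;

    // Making one operand floating-point makes the division floating-point.
    const double one_third = 1.0 / 3;

    std::cout << ll << '\n';         // prints 4886718345
    std::cout << one_third << '\n';  // prints 0.333333
}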