I don't know any way to get the size of a vector other than the .size() member function, and it works very well, but it returns a value of type long long unsigned int. In most cases that's fine, but I'm sure my program will never have a vector so big that it needs a return type that large; short int is more than enough.
I know that for today's computers those few unused bytes are irrelevant, but I don't like to leave these "loose ends", even small ones, and while I was programming I came across some details that bothered me.
Look at these examples:
for(short int X = 0 ; X < Vector.size() ; X++){
}
compiling this, I receive this warning:
warning: comparison of integer expressions of different signedness: 'short int' and 'std::vector<unsigned char>::size_type' {aka 'long long unsigned int'} [-Wsign-compare]|
this is because the .size() return type is different from the short int I'm comparing against: "X" is a short int, and Vector.size() returns a long long unsigned int. That was expected, so if I do this:
for(size_t X = 0 ; X < Vector.size() ; X++){
}
the problem is gone, but by doing this I'm creating a long long unsigned int variable (the size_t) and comparing it against another long long unsigned int, so my computer allocates two long long unsigned int variables. So what do I do to get back a simple short int? I don't need anything more than that; long long unsigned int is overkill. So I did this:
for(short int X = 0 ; X < short(Vector.size()) ; X++){
}
but... how is this working? short int X = 0 allocates a short int, nothing new, but what about short(Vector.size())? Is the computer allocating a long long unsigned int and then converting it to a short int? Or is the compiler "changing" the return of the .size() function, making it naturally return a short int and, in that case, never allocating a long long unsigned int? I know compilers are responsible for optimizing code too. Is there any "problem" or "detail" with this method? Since I rarely see anyone using it, what exactly is short() doing in terms of memory allocation? Where can I read more about it?
(thanks to everyone who responded)
Forget for a moment that this involves a for loop; that's important for the underlying code, but it's a distraction from what's going on with the conversion.
short X = Vector.size();
That line calls Vector.size(), which returns a value of type std::size_t. std::size_t is an unsigned type, large enough to hold the size of any object. So it could be unsigned long, or it could be unsigned long long. In any event, it's definitely not short. So the compiler has to convert that value to short, and that's what it does.
Most compilers these days don't trust you to understand what this actually does, so they warn you. (Yes, I'm rather opinionated about compilers that nag; that doesn't change the analysis here). So if you want to see that warning (i.e., you don't turn it off), you'll see it. If you want to write code that doesn't generate that warning, then you have to change the code to say "yes, I know, and I really mean it". You do that with a cast:
short X = short(Vector.size());
The cast tells the compiler to call Vector.size() and convert the resulting value to short. The code then assigns the result of that conversion to X. So, more briefly, in this case it tells the compiler that you want it to do exactly what it would have done without the cast. The difference is that because you wrote a cast, the compiler won't warn you that you might not know what you're doing.
Some folks prefer to write that cast with a static_cast:
short X = static_cast<short>(Vector.size());
That does the same thing: it tells the compiler to do the conversion to short and, again, the compiler won't warn you that you did it.
In the original for loop, a different conversion occurs:
X < Vector.size()
That bit of code calls Vector.size(), which still returns an unsigned type. In order to compare that value with X, the two sides of the < have to have the same type, and the rules for this kind of expression require that X gets promoted to std::size_t, i.e., that the value of X gets treated as an unsigned type. That's okay as long as the value isn't negative. If it's negative, the conversion to the unsigned type is okay, but it will produce results that probably aren't what was intended. Since we know that X is not negative here, the code works perfectly well.
But we're still in the territory of compiler nags: since X is signed, the compiler warns you that promoting it to an unsigned type might do something that you don't expect. Again, you know that that won't happen, but the compiler doesn't trust you. So you have to insist that you know what you're doing, and again, you do that with a cast:
X < short(Vector.size())
Just like before, that cast converts the result of calling Vector.size() to short. Now both sides of the < are the same type, so the < operation doesn't require a conversion from a signed to an unsigned type, so the compiler has nothing to complain about. There is still a conversion, because the rules say that values of type short get promoted to int in this expression, but don't worry about that for now.
Another possibility is to use an unsigned type for that loop index:
for (unsigned short X = 0; X < Vector.size(); ++X)
But the compiler might still insist on warning you that not all values of type std::size_t can fit in an unsigned short. So, again, you might need a cast. Or change the type of the index to match what the compiler thinks you need:
for (std::size_t X = 0; X < Vector.size(); ++X)
If I were to go this route, I would use unsigned int and if the compiler insisted on telling me that I don't know what I'm doing I'd yell at the compiler (which usually isn't helpful) and then I'd turn off that warning. There's really no point in using short here, because the loop index will always be converted to int (or unsigned int) wherever it's used. It will probably be in a register, so there is no space actually saved by storing it as a short.
Even better, as recommended in other answers, is to use a range-based for loop, which avoids managing that index:
for (auto& value: Vector) ...
In all cases, X has automatic storage duration, and the result of Vector.size() does not outlive the full expression in which it is created.
I don't need anything more than this, long long unsigned int is overkill
Typically, automatic duration variables are "allocated" either on the stack, or as registers. In either case, there is no performance benefit to decreasing the allocation size, and there can be a performance penalty in narrowing and then widening values.
In the very common case where you are using X solely to index into Vector, you should strongly consider using a different kind of for:
for (auto & value : Vector) {
// replace Vector[X] with value in your loop body
}
This question already has answers here:
C++ : integer constant is too large for its type
(2 answers)
Closed 3 years ago.
This is a curiosity question. I was working with a Boolean to keep track of some parts of my code. I had the Boolean, say track, initialised to false. Now when I change it somewhere else to true using an integer constant like:
track = 1;
this is defined. I understand how this works, true being 1 and false being 0. But now when I say
track = 500;
this is still defined. Reasonable, since any nonzero value means it's true. My confusion is when I do
track = 2147483648
which is 1 greater than INT_MAX, and the behaviour is still defined as true. Even when I push it a bit further, to 2147483649454788, it works. But when I assign 21474836494547845784578 it throws an error:
error: integer constant is too large for its type [-Werror]
_softExit = 21474836494547845784578;
^~~~~~~~~~~~~~~~~~~~~~~
Now this is just confusing. I'm pretty new to C++, so I'm not sure why this happens or what any of it means. I know I could just use track = true; but I'm curious.
As you have discovered yourself, an int object implicitly converts to a bool. So does a long long (or std::int64_t). So far so good, but the compiler message you show has nothing to do with bool. It's exactly what it says: your program contains an integer literal that doesn't fit into the range that any built-in integer type can handle. Hence the error; you would get it even without trying to initialize a bool.
So this is ok:
const bool test = std::numeric_limits<long long>::max();
while writing out a literal that is one greater than the value std::numeric_limits<long long>::max() yields is not ok.
This question already has answers here:
Why does the most negative int value cause an error about ambiguous function overloads?
(3 answers)
Closed 3 years ago.
I'm trying to write a test case for some corner case. For input of type int64_t, the following line won't compile:
int64_t a = -9223372036854775808LL;
The error/warning is:
error: integer constant is so large that it is unsigned [-Werror]
I thought the number was out of range, so I tried:
std::cout << std::numeric_limits<int64_t>::min() << std::endl;
It outputs exactly the same number!!! So the constant is within the range.
How can I fix this error?
You may write
int64_t a = -1 - 9223372036854775807LL;
The problem is that the - is not part of the literal, it is unary minus. So the compiler first sees 9223372036854775808LL (out of range for signed int64_t) and then finds the negative of this.
By applying binary minus, we can use two literals which are each in range.
Ben's already explained the reason, here's two other possible solutions.
Try this
int64_t a = INT64_MIN;
or this
int64_t a = std::numeric_limits<int64_t>::min();
This question already has answers here:
When to use unsigned values over signed ones?
(5 answers)
Closed 6 years ago.
So I understand that unsigned variables can only hold positive values, while signed variables can hold both negative and positive ones. However, it is unclear to me why someone would use unsigned variables. Isn't that risky? I would personally just stick with signed variables to be safe. Is there a memory/performance advantage to using unsigned variables?
Selecting the right kind of primitive data type for your particular problem is all about correctly expressing your intent. For example, the size of an array could be stored in a signed type (as is the case in Java or C#), but why should it be? An array cannot have a negative size. Doing so anyway only confuses readers of your program and gives you no benefit.
There is no measurable performance gain from avoiding unsigned values; in fact it can even be dangerous to do so, since unsigned values can hold bigger positive numbers than signed values of the same memory size. You thus risk a narrowing conversion when assigning, for example, array sizes (which are naturally unsigned) to a signed value of the same memory size:
// While unlikely, truncation can happen
int64_t x = sizeof(...);
// ~~~~^~~~~~~ uint64_t on my system
Those bugs are generally hard to track, but compilers have gotten better at warning you about committing them.
Naturally, you should be aware that using unsigned integers can indeed be dangerous in some cases. As an example, consider a simple for loop. We do not expect the value of i to ever be negative, so we make the seemingly correct decision to use an unsigned type:
for(unsigned i = 5; i >= 0; --i)
{
}
But in this case, the loop will never terminate, since the unsigned evaluation of 0 - 1 (which happens when i is finally decremented from 0) is a big positive value (this is called wrap-around), thus defeating the loop termination check.
This can, for example, be solved like this:
for(unsigned i = 5; (i+1) > 0; --i)
{
}
But this should not deter you from using the right data type. Just exercise caution about things like value ranges and wrap around and you will be fine.
In conclusion, use the type that is most appropriate and seems to show your intent the best.
Unsigned is more appropriate if your value is actually a bit-field, and if you do bit manipulations.
Signed overflow is undefined behaviour, whereas unsigned arithmetic is defined to wrap around.
The ranges of signed and unsigned numbers are different.
For example, on a 32 bit system the range of a signed integer would be between -2G and 2G - 1 (i.e. -2147483648 to 2147483647).
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions concerning problems with code you've written must describe the specific problem — and include valid code to reproduce it — in the question itself. See SSCCE.org for guidance.
Closed 9 years ago.
What is the safest way to divide B by A, assuming the following types for each of them?
unsigned long long A;
unsigned long int B;
I am already using the following line to do that. It works fine; however, sometimes it fails with a segmentation fault.
double C;
C= double(B)/double(A);
Thanks
(Firstly, unsigned long int is the same as unsigned long)
Data type promotion rules mean that when evaluating B / A, B is promoted to unsigned long long and the division is performed in integer arithmetic; i.e. any remainder is lost.
Casting either A or B to double causes the operation to be performed in double-precision floating point. (But note that converting a long long to double can result in precision loss.)
Rest assured that C = double(B) / double(A); will not cause a segmentation fault. You must have memory corruption / other undefined behaviour prior to this statement. I suspect you've messed up your stack.
These are integers, so there is no need to cast to double unless you actually expect a floating-point fractional result (i.e. you probably do want the cast, but that has nothing to do with your fault).
More than likely you're getting errors because of divide by zero.