What is wrong with auto? - c++

vector<int> function(vector<int> &arr)
{
    for (auto i = arr.size() - 1; i >= 0; i--)
        std::cout << arr[i] << " ";
    return arr;
}
int main()
{
    vector<int> arr{1, 2, 3, 4};
    function(arr);
}
Why does the above program loop forever?
If I replace auto with int, everything works fine.

arr.size() returns an unsigned type, usually size_t. With i being unsigned, i >= 0 is always true. Subtracting 1 from an unsigned variable that holds 0 wraps around to the largest value the type can hold. As a result, the loop cycles forever.
What happens then is undefined, since your array index turns into a gigantic value, and arr[i] has undefined behavior for indices >= arr.size(). With int instead of auto it works because i-- eventually makes i equal to -1, at which point i >= 0 is false and the loop exits.
An explanation of this rollover behavior can be found here:
Unsigned integer arithmetic is always performed modulo 2^n, where n is the number of bits in that particular integer. E.g. for unsigned int, adding one to UINT_MAX gives 0, and subtracting one from 0 gives UINT_MAX.
So, for size_t, subtracting 1 from 0 results in SIZE_MAX, which commonly has a value of 18446744073709551615.
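A minimal sketch of that wraparound (the exact printed value depends on the width of size_t on your platform):
#include <cstddef>
#include <iostream>

int main()
{
    std::size_t zero = 0;
    // Unsigned arithmetic wraps modulo 2^n: 0 - 1 becomes SIZE_MAX,
    // e.g. 18446744073709551615 when size_t is 64 bits wide.
    std::cout << zero - 1 << "\n";
}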

What your problem is was already answered by Blaze and rafix07, but I want to add that in modern C++ it's better to use iterators whenever possible. This has a few advantages, including code portability, better performance, and more readable code.
Your code can look something like this:
std::vector<int> function(std::vector<int> &arr)
{
    for (auto it = arr.rbegin(); it != arr.rend(); ++it)
        std::cout << *it << " ";
    return arr;
}
or like this:
std::vector<int> function(std::vector<int> &arr)
{
    // std::for_each lives in <algorithm>
    std::for_each(arr.rbegin(), arr.rend(), [](int val) {
        std::cout << val << " ";
    });
    return arr;
}
or even like this:
std::vector<int> function(std::vector<int> &arr)
{
    // std::copy lives in <algorithm>, std::ostream_iterator in <iterator>
    std::copy(arr.rbegin(), arr.rend(), std::ostream_iterator<int>(std::cout, " "));
    return arr;
}
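For completeness, here is a self-contained version of the last variant with the headers it needs (a sketch built around the question's function):
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

std::vector<int> function(std::vector<int> &arr)
{
    // print the elements back to front
    std::copy(arr.rbegin(), arr.rend(), std::ostream_iterator<int>(std::cout, " "));
    return arr;
}

int main()
{
    std::vector<int> arr{1, 2, 3, 4};
    function(arr); // prints: 4 3 2 1
}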

When you have a loop that goes the other way (from max to 0), you usually run into this problem:
Either max is a size_t, so i >= 0 is always true,
or you change i to int, which gives you a comparison of a signed with an unsigned value (and a compiler warning), or a comparison of an int against a larger size_t on x64.
Redesign the loop (see the sketch after this list):
Use a signed type for i that is as wide as size_t, i.e. long on x86 and long long on x64; then i >= 0 works as long as your objects hold fewer than 2^63 elements on x64 (most likely).
Break inside the loop when i == 0.
Change to the normal i = 0 and i < obj.size() method and index from the back.
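Here is the promised sketch of the three redesigns (option 2 assumes a non-empty vector; std::ptrdiff_t stands in for the "long on x86, long long on x64" type):
#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> arr{1, 2, 3, 4};

    // Option 1: a signed counter as wide as a pointer.
    for (std::ptrdiff_t i = static_cast<std::ptrdiff_t>(arr.size()) - 1; i >= 0; --i)
        std::cout << arr[i] << " ";

    // Option 2: keep the unsigned counter and break at 0 (requires a non-empty vector).
    for (std::size_t i = arr.size() - 1; ; --i) {
        std::cout << arr[i] << " ";
        if (i == 0) break;
    }

    // Option 3: the normal forward loop, indexing from the back.
    for (std::size_t i = 0; i < arr.size(); ++i)
        std::cout << arr[arr.size() - 1 - i] << " ";
}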

Related

why for (int i=0; i<-1; i++) runs? [duplicate]

Please take a look at this simple program:
#include <iostream>
#include <vector>
using namespace std;
int main() {
    vector<int> a;
    std::cout << "vector size " << a.size() << std::endl;
    int b = -1;
    if (b < a.size())
        std::cout << "Less";
    else
        std::cout << "Greater";
    return 0;
}
I'm confused by the fact that it outputs "Greater", even though -1 is obviously less than 0. I understand that the size method returns an unsigned value, but the comparison is still between -1 and 0. So what's going on? Can anyone explain this?
Because the size of a vector is an unsigned integral type. You are comparing an unsigned type with a signed one, and the negative signed integer is converted to unsigned. That corresponds to a large unsigned value.
This code sample shows the same behaviour that you are seeing:
#include <iostream>
int main()
{
    std::cout << std::boolalpha;
    unsigned int a = 0;
    int b = -1;
    std::cout << (b < a) << "\n";
}
output:
false
The signature for vector::size() is:
size_type size() const noexcept;
size_type is an unsigned integral type. When an unsigned and a signed integer are compared, the signed one is converted to unsigned. Here, -1 is negative, so it wraps around, effectively yielding the maximum value representable by size_type. Hence it compares greater than zero.
As an unsigned value, -1 is higher than zero: the high bit, which indicates a negative value in a signed type, is instead used to expand the range of representable numbers in an unsigned type, so it no longer acts as a sign bit. The comparison is done as (unsigned int)-1 < 0, which is false.
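As an aside, if C++20 is available, the heterogeneous comparison helpers in <utility> compare a signed and an unsigned value by their mathematical values; a minimal sketch:
#include <iostream>
#include <utility> // std::cmp_less, C++20
#include <vector>

int main()
{
    std::vector<int> a;
    int b = -1;
    std::cout << std::boolalpha
              << (b < static_cast<int>(a.size())) << "\n" // true: both operands signed
              << std::cmp_less(b, a.size()) << "\n";      // true: value-correct mixed comparison
}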

Maximum Pair Product strange results

I have been trying to learn C++ coming from a C# background, and I am running into a very weird mistake in my Maximum Pair Product method.
The method works as expected for small numbers. For products of large numbers it produces strangely truncated, incorrect output.
int64_t Testing::MaximumPairProductFast(const std::vector<int>& numbers)
{
    int64_t largestIntA = 0;
    int64_t largestIntB = 0;
    for (int i = 0; i < numbers.size(); i++)
    {
        // find first largest number
        if (numbers[i] > largestIntA) largestIntA = numbers[i];
    }
    for (int i = 0; i < numbers.size(); i++)
    {
        // find second largest number
        if (numbers[i] > largestIntB && numbers[i] != largestIntA) largestIntB = numbers[i];
    }
    return largestIntA * largestIntB;
}
The main program
int main()
{
    Testing ch;
    std::vector<int> test{ 100000, 90000 };
    int result = ch.MaximumPairProductFast(test);
    std::cout << result << "\n";
}
The output of the computation is 410065408 instead of 9000000000, which is the correct answer.
The weird thing is when I try to print this:
std::cout << (int64_t) 100000 * 90000 << "\n";
It does give the correct result 9000000000.
But if I don't cast it to int64_t:
std::cout << 100000 * 90000 << "\n";
The result is 410065408, which is exactly the output from my method, even though I made sure to return an int64_t type.
I have also tried this, but the outcome is the same:
int64_t Test::MaximumPairProductFast(const std::vector<int>& numbers)
{
    int64_t largestIntA = 0;
    int64_t largestIntB = 0;
    for (int i = 0; i < numbers.size(); i++)
    {
        // find first largest number
        if ((int64_t)numbers[i] > largestIntA) largestIntA = (int64_t)numbers[i];
    }
    for (int i = 0; i < numbers.size(); i++)
    {
        // find second largest number
        if ((int64_t)numbers[i] > largestIntB && (int64_t)numbers[i] != largestIntA) largestIntB = (int64_t)numbers[i];
    }
    return largestIntA * largestIntB;
}
Am I missing an obvious detail?
You are truncating your result from an int64_t to an int. On a typical system these days, you'll truncate a 64-bit integer and store it in a 32-bit value.
If you compile with the warning level increased, the compiler will tell you about this problem.
The solution is to declare result with the proper type:
int64_t result = ch.MaximumPairProductFast(test);
Or, just let the compiler use the correct type with auto:
auto result = ch.MaximumPairProductFast(test);
In your second example, the constants 100000 and 90000 are ints, so the multiplication is done as ints. When you cast one of the values to int64_t, the multiplication is done with 64-bit ints. The same can be expressed with the literal 100000LL.
On your system int is a 32-bit integer. int64_t is 64-bit integer. The problem in your code is you are multiplying two 32-bit ints so you're getting a 32-bit int result. Your answer is overflowing. When you explicitly cast it to int64_t, the operands are now 64-bit ints and you're getting a 64-bit int result which can hold the large value and not overflow.
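To see where 410065408 comes from (assuming the usual wraparound of 32-bit arithmetic on two's-complement hardware): 100000 × 90000 = 9000000000 = 2 × 4294967296 + 410065408, so keeping only the low 32 bits of the product leaves exactly 410065408.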
I suggest you either:
use std::vector<int64_t> instead of std::vector<int>, or,
if you're worried about space, do as you did: explicitly cast to a 64-bit int and then multiply.

I ran into a weird bug in C++ where a statement calculating an addition of 2 small integers overflows into a long long value

I recently ran into this weird C++ bug that I could not understand. Here's my code:
#include <bits/stdc++.h>
using namespace std;
typedef vector<int> vi;
typedef pair<int, int> ii;
#define ff first
#define ss second
#define pb push_back
const int N = 2050;
int n, k, sum = 0;
vector<ii> a;
vi pos;
int main(void) {
    cin >> n >> k;
    for (int i = 1; i < n+1; ++i) {
        int val;
        cin >> val;
        a.pb(ii(val, i));
    }
    cout << a.size()-1 << " " << k << " " << a.size()-k-1 << "\n";
}
When I tried out with test:
5 5
1 1 1 1 1
it returned:
4 5 4294967295
but when I changed the declaration from:
int n, k, sum = 0;
to:
long long n, k, sum = 0;
then the program returned the correct value which was:
4 5 -1
I could not figure out why the program behaved like that, since -1 should not exceed the range of an int. Can anyone explain this to me? I really appreciate your kind help.
Thanks
Obviously, on your machine, size_t is a 32-bit integer, whereas long long is 64-bit. size_t is always an unsigned type, so you get:
cout << a.size() - 1
//      ^ unsigned   ^ promoted to unsigned
//      -> output as uint32_t (!)

a.size() - k - 1
//  ^ promoted to long long, as it is of smaller size!
//  -> overall expression is int64_t (!)
You would not have seen any difference between the two printed values (both would have been 18446744073709551615) if size_t were 64-bit as well, since then the signed long long k (int64_t) would have been promoted to unsigned (uint64_t) instead.
Be aware that static_cast<UnsignedType>(-1) always evaluates (according to C++ conversion rules) to std::numeric_limits<UnsignedType>::max()!
Side note about size_t: it is defined as an unsigned integral type large enough to hold the maximum size you can allocate on your system for an object, so its size in bits is hardware dependent and, in the end, correlates with the width of the memory address bus (the first power of two not smaller than it).
vector::size returns size_t (unsigned), so the expression a.size()-k-1 evaluates to an unsigned type and you end up with an underflow.
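One way to sidestep the promotion is to convert the size to a signed 64-bit type before the subtraction; a minimal sketch (mirroring the question's k = 5 with five elements):
#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> a(5); // five elements, like the test input
    long long k = 5;
    // Converting the size first makes the whole expression signed 64-bit.
    std::cout << static_cast<std::int64_t>(a.size()) - k - 1 << "\n"; // prints -1
}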

Auto failed for vector size

I am trying use auto to infer the type.
for (auto i = remaining_group.size() - 1; i >= 0; --i) {
    std::cout << i;
}
I get a very large number like 18446744073709534800, which is not expected. When I change auto to int, I get the numbers I expected, between 0 and 39.
Is there any reason auto will fail here?
remaining_group's type is std::vector<lidar_point>, and lidar_point is a struct like:
struct LidarPoint {
    float x;
    float y;
    float z;
    uint8_t field1;
    double field2;
    uint8_t field3;
    uint16_t field4;
};
When using auto, the type of i is std::vector::size_type, which is an unsigned integer type. That means the condition i >= 0 is always true, and when wraparound happens you get some very large numbers.
Unsigned integer arithmetic is always performed modulo 2^n, where n is the number of bits in that particular integer. E.g. for unsigned int, adding one to UINT_MAX gives 0, and subtracting one from 0 gives UINT_MAX.
Simple reproduction of problem:
#include <cstddef> // size_t
#include <iostream>
size_t size() { return 1; }
int main() {
    for (auto i = size() - 1; i >= 0; --i) {
        std::cout << i << std::endl;
    }
}
size() has type size_t, and the literal constant 1 is converted to size_t, so auto deduces size_t, which can never be less than zero; the result is an infinite loop and underflow of i.
If you need a reverse index loop, use operator -->
When you write a normal index loop, you write it with 0, size, and <.
When you write a normal reverse index loop, things get a bit wonky: you need size - 1, >=, and 0, and you can't use an unsigned index, because an unsigned i is always >= 0, so the check i >= 0 always returns true and your loop could run forever.
With the fake "goes to" operator, you can use 0, size, and > to write the reverse index loop, and it doesn't matter whether i is signed or unsigned:
for (auto i = a.size(); i --> 0; ) //or just i--, or i --> 1, i --> 10...
std::cout << i << ' ';

Find largest unsigned int .... Why doesn't this work?

Couldn't you initialize an unsigned int and then increment it until it doesn't increment anymore? That's what I tried to do and I got a runtime error "Timeout." Any idea why this doesn't work? Any idea how to do it correctly?
#include <iostream>
int main() {
    unsigned int i(0), j(1);
    while (i != j) {
        ++i;
        ++j;
    }
    std::cout << i;
    return 0;
}
Unsigned arithmetic is defined as modulo 2^n in C++ (where n is the number of bits). So when you increment the maximum value, you get 0.
Because of this, the simplest way to get the maximum value is to use -1:
unsigned int i = -1;
std::cout << i;
(If the compiler gives you a warning and this bothers you, you can use 0U - 1, or initialize with 0 and then decrement.)
Since i will never be equal to j, you have an infinite loop.
Additionally, this is a very inefficient method for determining the maximum value of an unsigned int. numeric_limits gives you the result without looping for 2^(16, 32, 64, or however many bits are in your unsigned int) iterations. If you don't want to use it, you can write a much smaller loop:
unsigned int shifts = sizeof(unsigned int) * 8; // or * CHAR_BIT from <climits>
unsigned int maximum_value = 1;
for (unsigned int i = 1; i < shifts; ++i)
{
    maximum_value <<= 1;
    ++maximum_value;
}
Or simply do
unsigned int maximum = (unsigned int)-1;
i will always be different from j, so you have entered an endless loop. If you want to take this approach, your code should look like this:
unsigned int i(0), j(1);
while (i < j) {
    ++i;
    ++j;
}
std::cout << i;
return 0;
Notice I changed it to while (i < j). Once j overflows, i will be greater than j.
When an overflow happens, the value doesn't just stay at the highest; it wraps back around to the lowest possible number.
i and j will never be equal to each other. When an unsigned integral value reaches its maximum, adding 1 to it wraps around to the minimum, which is 0.
For example, consider unsigned char: its maximum is 255, and after adding 1 you get 0.
So your loop is infinite.
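A quick illustration of that unsigned char example (the cast to int is only so the value prints as a number rather than a character):
#include <iostream>

int main()
{
    unsigned char c = 255; // maximum for an 8-bit unsigned char
    ++c;                   // wraps around to 0
    std::cout << static_cast<int>(c) << "\n"; // prints 0
}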
I assume you're trying to find the maximum value that an unsigned integer can store (65,535 for a 16-bit unsigned int; on most modern systems unsigned int is 32 bits, with a maximum of 4,294,967,295). The reason the program times out is that when the int hits the maximum value it can store, it "goes off the end": the next time j increments, it wraps to 0 while i holds the maximum.
This means that the way you're going about it, i will NEVER equal j, and the loop runs indefinitely. If you changed it to what Damien has, you'd end with i equal to the maximum and j equal to 0.
Couldn't you initialize an unsigned int and then increment it until it doesn't increment anymore?
No. Unsigned arithmetic is modular, so it wraps around to zero after the maximum value. You can carry on incrementing it forever, as your loop does.
Any idea how to do it correctly?
unsigned int max = -1; // or
unsigned int max = std::numeric_limits<unsigned int>::max();
or, if you want to use a loop to calculate it, change your condition to (j != 0) or (i < j) to stop when j wraps. Since i is one behind j, that will contain the maximum value. Note that this might not work for signed types - they give undefined behaviour when they overflow.
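A sketch of that loop with the corrected condition; it stops when j wraps to 0, leaving the maximum in i:
#include <iostream>

int main()
{
    unsigned int i = 0, j = 1;
    while (j != 0) { // stops when j wraps around past the maximum
        ++i;
        ++j;
    }
    std::cout << i << "\n"; // e.g. 4294967295 for a 32-bit unsigned int
}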