I have a function with two arguments: a vector whose elements are being tested, and a bool that we pass as true or false. If we pass true, it is supposed to isolate all the elements whose digit sum is an even number, place them into a new vector (in the same order they came in), and return that vector. With false it is the opposite: odd digit sums. And you can pretty much only use the things I have already used here, nothing else.
This is how it looks.
std::vector<int> IzdvojiElemente(std::vector<int> v, bool flag){
    std::vector<int> n;
    for(int i(0); i<v.size(); i++){
        int suma(0);
        int temp(v[i]);
        if(temp<0) temp*=-1;
        while(temp>0){
            suma+=temp%10;
            temp/=10;
        }
        if(flag && suma%2==0) n.push_back(v[i]);
        if(!flag && suma%2!=0) n.push_back(v[i]);
    }
    return n;
}
And this is one of the main functions for which it doesn't work:
std::vector<int> v1 {1,std::numeric_limits<int>::min(),2, std::numeric_limits<int>::max(),5};
std::vector<int> v2;
v2 = IzdvojiElemente(v1, false);
for(int i=0; i < v2.size(); i++)
std::cout << v2[i] << " ";
This is what I was supposed to get (as output):
1 -2147483648 5
This is what I got:
1 5
For some reason it either ignores the numeric limits, or doesn't ignore them but puts them into the wrong vector. And I don't know why. In all other cases it works as it should. Maybe it's overflow, but I can't see where.
Yes, this is overflow. Note that in 2's complement representation of signed integers (the common representation on mainstream platforms), the representable range is not symmetric: when the lowest representable number is -2147483648, then the highest representable one is 2147483647.
-2147483648 * -1 is therefore signed integer overflow and Undefined Behaviour, meaning the program is wrong and anything can happen.
If you're supposed to handle std::numeric_limits<int>::min() correctly regardless of internal representation, you will have to deal with negative numbers differently (for example, by computing the digit sum with negative remainders and then sign-reversing the computed sum).
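For reference, here is a minimal sketch of that approach, using only the constructs already present in the question. It accumulates the digit sum with negative remainders when the input is negative and then sign-reverses the small total, so the value itself is never negated and std::numeric_limits<int>::min() is handled. (This assumes C++11 or later, where integer division truncates toward zero, so temp % 10 yields negative digits for a negative temp.)
std::vector<int> IzdvojiElemente(std::vector<int> v, bool flag){
    std::vector<int> n;
    for(int i(0); i < v.size(); i++){
        int suma(0);
        int temp(v[i]);
        while(temp != 0){          // also terminates for negative temp
            suma += temp % 10;     // digits come out negative for negative temp
            temp /= 10;
        }
        if(suma < 0) suma *= -1;   // safe: the digit sum is small in magnitude
        if(flag && suma % 2 == 0) n.push_back(v[i]);
        if(!flag && suma % 2 != 0) n.push_back(v[i]);
    }
    return n;
}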
Related
If I use nounce = 32766 it only produces the output once, but with 32767 it goes into an infinite loop... why? The same thing happens when I use int.
#include<iostream>
using namespace std;
class Mining
{
short int Nounce;
};
int main()
{
Mining Mine;
Mine.Nounce = 32767;
for (short int i = 0; i <= Mine.Nounce; i++)
{
if (i == Mine.Nounce)
{
cout << " Nounce is " << i << endl;
}
}
return 0;
}
When you use the largest possible positive value, every value of that type will be <= it, so this loop goes on forever:
for(short int i=0;i<=Mine.Nounce;i++)
You can see that 32767 is the largest value for a short on your platform by using numeric_limits:
std::cout << std::numeric_limits<short>::max() << std::endl; //32767
When i reaches 32767, i++ will attempt to increment it. This is undefined behavior because of signed overflow; however, most implementations (like yours, apparently) will simply roll over to the most negative value, and then i++ will happily count up again.
Numeric types have a limit to the range of values they can represent. It seems the maximum value a short int can store on your platform is 32767. So i <= 32767 is necessarily true; there exists no short int value larger than 32767 on your platform. This is also why the compiler complains when you attempt to assign 100000 to Mine.Nounce: it cannot represent that value. See std::numeric_limits to find out what the limits are for your platform.
To increment a signed integer variable that already has the largest possible representable value is undefined behavior. Your loop will eventually try to execute i++ when i == 32767 which will lead to undefined behavior.
Consider using a larger integer type. int is at least 32 bit on the majority of platforms, which would allow it to represent values up to 2147483647. You could also consider using unsigned short which on your platform would likely be able to represent values up to 65535.
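To illustrate that suggestion, here is a hedged sketch of the question's program with the loop counter widened to int (one possible fix, not the only one): because i can now actually exceed Mine.Nounce, the loop terminates.
#include <iostream>
#include <limits>

struct Mining { short int Nounce; };

int main() {
    Mining Mine;
    Mine.Nounce = std::numeric_limits<short>::max(); // 32767 on this platform
    // i is an int, so it can reach 32768 and the loop condition becomes false
    for (int i = 0; i <= Mine.Nounce; i++) {
        if (i == Mine.Nounce)
            std::cout << "Nounce is " << i << std::endl;
    }
    return 0;
}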
In your for loop, i will never be greater than the value of Mine.Nounce because of the way shorts are represented in memory. Most implementations use 2 bytes for a short, with one bit as the sign bit. Therefore, the maximum value that can be represented by a signed short is 2^15 - 1 = 32767.
It goes into an infinite loop because your program exhibits undefined behavior due to a signed integer overflow.
The variable i of type short overflows after it reaches the value of Mine.Nounce, which is 32767, probably the maximum value short can hold on your implementation. You should change your condition to:
i < Mine.Nounce
which will keep the value of i within range.
I am not an advanced C++ programmer, but I have been using C++ for a long time now, so I love playing with it. Lately I was thinking about ways to maximize a variable programmatically, so I tried bitwise operators to fill a variable with 1's. Then there's the signed vs. unsigned issue. My knowledge of memory representation is not very good. However, I ended up writing the following code, which works for both signed and unsigned short, int and long (although int and long are basically the same). Unfortunately, for long long, the program fails.
So, what is going on behind the scenes for long long? How is it represented in memory? Besides, Is there any better way to do achieve the same thing in C++?
#include <bits/stdc++.h>
using namespace std;
template<typename T>
void Maximize(T &val, bool isSigned)
{
int length = sizeof(T) * 8;
cout << "\nlength = " << length << "\n";
// clearing
for(int i=0; i<length; i++)
{
val &= 0 << i;
}
if(isSigned)
{
length--;
}
val = 1 << 0;
for(int i=1; i<length; i++)
{
val |= 1 << i;
cout << "\ni = " << i << "\nval = " << val << "\n";
}
}
int main()
{
long long i;
Maximize(i, true);
cout << "\n\nsizeof(i) = " << sizeof(i) << " bytes" << "\n";
cout << "i = " << i << "\n";
return 0;
}
The basic issue with your code is in the statements
val &= 0 << i;
and
val |= 1 << i;
in the case that val is longer than an int.
In the first expression, 0 << i is (most likely) always 0, regardless of i (technically, it suffers from the same undefined behaviour described below, but you are not likely to encounter the problem). So there was no need for the loop at all; all of the statements do the same thing, which is to zero out val. Of course, val = 0; would have been a simpler way of writing that.
The issue with 1 << i is that the constant literal 1 is an int (because it is small enough to be represented as an int, and int is the narrowest representation used for integral constants). Since 1 is an int, so is 1 << i. If i is greater than or equal to the number of value bits in an int, that expression has undefined behaviour, so in theory the result could be anything. In practice, however, the result is likely to be the same width as an int, so only the low-order bits will be affected.
It is certainly possible to convert the 1 to type T (although in general, you might need to be cautious about corner cases when T is signed), but it is easier to convert the 1 to an unsigned type at least as wide as T by using the maximum-width unsigned integer type defined in cstdint, uintmax_t:
val |= std::uintmax_t(1) << i;
In real-world code, it is common to see the assumption that the widest integer type is long long:
val |= 1ULL << i;
which will work fine if the program never attempts to instantiate the template with an extended integer type.
Of course, this is not the way to find the largest value for an integer type. The correct solution is to #include <limits> and then use the appropriate specialization: std::numeric_limits<T>::max().
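For illustration only, here is a minimal sketch of the shift-based routine with the fix above applied: the 1 is widened to uintmax_t before shifting, and the sign bit of a signed T is never set. It assumes 8-bit bytes and no padding bits; the robust solution remains std::numeric_limits<T>::max().
#include <cstdint>
#include <iostream>

template <typename T>
void Maximize(T& val, bool isSigned)
{
    int length = sizeof(T) * 8;   // assumes CHAR_BIT == 8 and no padding bits
    if (isSigned)
        length--;                 // never shift a 1 into the sign bit
    val = 0;
    for (int i = 0; i < length; i++)
        val |= std::uintmax_t(1) << i;   // shift performed in uintmax_t, not int
}

int main()
{
    long long x;
    Maximize(x, true);
    std::cout << x << "\n";   // prints 9223372036854775807 for a 64-bit long long
}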
C++ allows only one representation for positive (and unsigned) integers, and three possible representations for negative signed integers. Positive and unsigned integers are simply represented as a sequence of bits in binary notation. There may be padding bits as well, and signed integers have a single sign bit which must be 0 in the case of positive integers, so there is no guarantee that there are 8*sizeof(T) useful bits in the representation, even if the number of bits in a byte is known to be 8 (and, in theory, it could be larger). [Note 1]
The sign bit for negative signed integers is always 1, but there are three different formats for the value bits. The most common is "two's complement", where the value bits, interpreted as a positive number, would be exactly 2^k more than the actual value of the number, where k is the number of value bits. (This is equivalent to giving the sign bit a weight of -2^k, which is why it is called 2's complement.)
Another alternative is "one's complement", in which the value bits are all inverted individually. This differs by exactly one from two's-complement representation.
The third allowable alternative is "sign-magnitude", in which the value bits are precisely the absolute value of the negative number. This representation is frequently used for floating point values, but only rarely used in integer values.
Both sign-magnitude and one's complement suffer from the disadvantage that there is a bit pattern which represents "negative 0". On the other hand, two's complement representation has the feature that the magnitude of the most negative representable value is one larger than the magnitude of the most positive representable value, with the result that both -x and x/-1 can overflow, leading to undefined behaviour.
Notes
I believe that it is theoretically possible for padding to be inserted between the value bits and the sign bits, but I certainly do not know of any real-world implementation with that feature. However, the fact that attempting to shift a 1 into the sign bit position is undefined behaviour makes it incorrect to assume that the sign bit is contiguous with the value bits.
I was thinking about ways to maximize a variable programmatically.
You are trying to reinvent the wheel. The C++ standard library already has this functionality: std::numeric_limits<T>::max()
// x any kind of numeric type: any integer or any floating point value
x = std::numeric_limits<decltype(x)>::max();
This is also better since you will not rely on undefined behavior.
As harold commented, the solution is to use T(1) << i instead of 1 << i. Also, as Some programmer dude mentioned, long long is represented as consecutive bytes (typically 8 bytes) with the sign bit at the MSB if it is signed.
This question is regarding the modulo operator %. We know that, in general, a % b returns the remainder when a is divided by b, and that the remainder is greater than or equal to zero and strictly less than b. But does the above hold when a and b are of magnitude 10^9?
I seem to be getting a negative output for the following code for input:
74 41 28
However, changing the final output statement does the trick and the result becomes correct!
#include<iostream>
using namespace std;
#define m 1000000007
int main(){
int n,k,d;
cin>>n>>k>>d;
if(d>n)
cout<<0<<endl;
else
{
long long *dp1 = new long long[n+1], *dp2 = new long long[n+1];
//build dp1:
dp1[0] = 1;
dp1[1] = 1;
for(int r=2;r<=n;r++)
{
dp1[r] = (2 * dp1[r-1]) % m;
if(r>=k+1) dp1[r] -= dp1[r-k-1];
dp1[r] %= m;
}
//build dp2:
for(int r=0;r<d;r++) dp2[r] = 0;
dp2[d] = 1;
for(int r = d+1;r<=n;r++)
{
dp2[r] = ((2*dp2[r-1]) - dp2[r-d] + dp1[r-d]) % m;
if(r>=k+1) dp2[r] -= dp1[r-k-1];
dp2[r] %= m;
}
cout<<dp2[n]<<endl;
}
}
changing the final output statement to:
if(dp2[n]<0) cout<<dp2[n]+m<<endl;
else cout<<dp2[n]<<endl;
does the trick, but why was it required?
By the way, the code is actually my solution to this question
This is a limit imposed by the range of int.
int can only hold values between -2,147,483,648 and 2,147,483,647.
Consider using long long for your m, n, k, d & r variables. If possible use unsigned long long if your calculations should never have a negative value.
long long can hold values from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807,
while unsigned long long can hold values from 0 to 18,446,744,073,709,551,615 (2^64 - 1).
The range of positive values is approximately halved in signed types compared to unsigned types, because the most significant bit is used for the sign; when you try to assign a positive value outside the range of the specified data type, the most significant bit gets set and the value is interpreted as negative.
Well, no, modulo with positive operands does not produce negative results.
However .....
The int type is only guaranteed by the C and C++ standards to support values in the range -32767 to 32767, which means your macro m does not necessarily expand to a literal of type int. It will fit in a long, though (which is guaranteed to have a large enough range).
If that's happening (e.g. a compiler that has a 16-bit int type and a 32-bit long type) the results of your modulo operations will be computed as long, and may have values that exceed what an int can represent. Converting that value to an int (as will be required with statements like dp1[r] %= m since dp1 is a pointer to int) gives undefined behaviour.
Mathematically, there is nothing special about big numbers, but computers only have a limited width to write down numbers in, so when things get too big you get "overflow" errors. A common analogy is the counter of miles traveled on a car dashboard - eventually it will show as all 9s and roll round to 0. Because of the way negative numbers are handled, standard signed integers don't roll round to zero, but to a very large negative number.
You need to switch to larger variable types so that they overflow less quickly - "long int" or "long long int" instead of just "int", the range doubling with each extra bit of width. You can also use unsigned types for a further doubling, since no range is used for negatives.
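As a side note on the fix shown in the question (adding m when the result is negative): since C++11, integer division truncates toward zero, so a % b takes the sign of a. When the subtraction makes dp2[r] negative, dp2[r] % m stays negative, which is why the adjustment is needed. A tiny self-contained sketch:
#include <iostream>

int main() {
    const long long m = 1000000007;
    long long x = -5;                        // e.g. a negative intermediate value
    std::cout << x % m << "\n";              // prints -5, not 1000000002
    std::cout << ((x % m) + m) % m << "\n";  // prints 1000000002, the non-negative residue
}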
I've run into a pretty weird problem with doubles. I have a list of floating point numbers (double) that are sorted in decreasing order. Later in my program I find, however, that they are not exactly sorted anymore. For example:
0.65801139819
0.6545651031 <-- a
0.65456513001 <-- b
0.64422968678
The two numbers in the middle are flipped. One might think that this problem lies in the representations of the numbers, and they are just printed out wrong. But I compare each number with the previous one using the same operator I use to sort them - there is no conversion to base 10 or similar going on:
double last_pt = 0;
for (int i = 0; i < npoints; i++) {
if (last_pt && last_pt < track[i]->Pt()) {
cout << "ERROR: new value " << track[i]->Pt()
<< " is higher than previous value " << last_pt << endl;
}
last_pt = track[i]->Pt();
}
The values are compared during sorting by
bool moreThan(const Track& a, const Track& b) {
return a.Pt() > b.Pt();
}
and I made sure that they are always double, and not converted to float. Pt() returns a double. There are no NaNs in the list, and I don't touch the list after sorting.
Why is this, what's wrong with these numbers, and (how) can I sort the numbers so that they stay sorted?
Are you sure you're not converting double to float at some point? Let us take a look at the binary representations of these two numbers:
0 01111111110 0100111100100011001010000011110111010101101100010101
0 01111111110 0100111100100011001010010010010011111101011010001001
In double we have 1 sign bit, 11 exponent bits and 52 stored mantissa bits, while in float there is 1 sign bit, 8 exponent bits and 23 stored mantissa bits. Notice that the mantissas of both numbers are identical in their first 23 bits.
Depending on the rounding method, the behaviour differs. In the case where the bits beyond the 23rd are simply trimmed, these two numbers are identical as float:
0 011111110 01001111001000110010100 (trim: 00011110111010101101100010101)
0 011111110 01001111001000110010100 (trim: 10010010011111101011010001001)
You're comparing the return value of a function. Floating point return
values are returned in a floating point register, which has higher
precision than a double. When comparing two such values (e.g. a.Pt() >
b.Pt()), the compiler will call one of the functions, store the return
value in an unnamed temporary of type double (thus rounding the
results to double), then call the other function, and compare its
results (still in the floating point register, and not rounded to
double) with the stored value. This means that you can end up with
cases where a.Pt() > b.Pt() and b.Pt() > a.Pt(), or a.Pt() >
a.Pt(). Which will cause sort to get more than a little confused.
(Formally, if we're talking about std::sort here, this results in
undefined behavior, and I've heard of cases where it did cause a core
dump.)
On the other hand, you say that Pt() "just returns a double field".
If Pt() does no calculation what so ever; if it's only:
double Pt() const { return someDouble; }
, then this shouldn't be an issue (provided someDouble has type
double). The extended precision can represent all possible double
values exactly.
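If Pt() does compute something on each call, a commonly suggested mitigation (shown here only as a sketch, and not guaranteed on every compiler: GCC on x87 targets historically also needed options such as -ffloat-store, or an SSE-based target, to fully enforce the rounding) is to store both results in named doubles before comparing, so both operands have been rounded to double. This reuses the Track type and Pt() member from the question:
bool moreThan(const Track& a, const Track& b) {
    const double pa = a.Pt();   // force the result into a double-typed object
    const double pb = b.Pt();
    return pa > pb;             // compare two values that were both rounded to double
}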
I am a novice to programming and computing.
I am running a C++ based program that is taking approx 6 hours on this machine. I use the timing utility of the framework I work in.
I tried calculating the total number of iterations of my nested loop via a simple program:
{
    int k=0;
    for (int i = 0; i < 196779; i++)
        for (int j= i+1; j< 196779; j++)
        {
            k++;
            if((k+1)%10000 == 0)
                cout<< "\n Number of Instructions: " << k;
        }
    cout<< "\n Total Number of iterations = " << k << endl;
}
Mathematically I would expect it to agree with the value 1.9360889031 × 10^10, which is the total number of 2-element subsets. I inserted that cout statement to see if something funny was happening, and indeed something funny does happen.
The output exceeds the mathematically expected value. Is my calculation wrong?
The output goes into negative values after a while as it exceeds the int range, but it shouldn't.
Sample output at the end, where I manually break:
Number of Instructions: -2078590001
Number of Instructions: -2078580001
Number of Instructions: -2078570001
Number of Instructions: -2078560001
I found the maximum value of int to be 2147483647, but I had done the calculation and concluded that my k should never exceed that limit. So where is the problem?
Mathematically I would expect it to agree with the value 1.9360889031 × 10^10
1.9360889031 × 10^10 (19,360,889,031) is larger than 2,147,483,647, the maximum value representable by an int (at least on your compiler).
The output goes into negative values after a while as it exceeds the int range, but it shouldn't.
You can use an unsigned int, whose arithmetic is defined to wrap around when a computation yields a value too large to be represented (or a negative value); when you overflow a signed int, you get undefined behavior.
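If the goal is to hold the full count of 19,360,889,031, a wider counter is one option. Here is a minimal sketch of the loop from the question with a long long counter (unsigned long long would work equally well here):
#include <iostream>

int main() {
    long long k = 0;   // at least 64 bits, so 19,360,889,031 fits
    for (int i = 0; i < 196779; i++)
        for (int j = i + 1; j < 196779; j++)
            k++;
    std::cout << "Total number of iterations = " << k << std::endl;
}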