I have the following code in Ruby.
x = 33078
x << 16
# => 2167799808
In C++, that code is
int x = 33078;
x << 16
// => -2127167488
I know this has to do with overflows, but how can I get the C++ to give the same result as Ruby?
33078 << 16 does not fit into an int, which is why in C++ it overflows and ends up at a negative value. Meanwhile, in Ruby the type is automatically promoted to something big enough to store the result of this computation.
If you want to be able to compute this value in C++, use a type with a higher maximum value. unsigned int will be enough in this case, but if you want to compute bigger values you may need long long or even unsigned long long.
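For example (not from the original answer, just a minimal sketch assuming a 64-bit long long), widening the operand before the shift gives the same result as Ruby:
#include <iostream>

int main()
{
    long long x = 33078;    // 64-bit, so 33078 << 16 fits
    std::cout << (x << 16); // prints 2167799808, same as Ruby
}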
You need to use an integer that is the same byte size as a Ruby int.
pry(main)> x = 33078
=> 33078
pry(main)> x.size
=> 8
Try
long int x
Generally ints in C are 32-bit, not 64-bit (8 bytes).
#include <cstdint>
#include <iostream>

int main()
{
    uint64_t x = 33078;
    std::cout << (x << 16);
}
$ g++ -std=c++11 test.cpp && ./a.out
2167799808
I am trying to solve the problem of finding the square root of a given number using binary search in C++. It works perfectly for small numbers, but if the input is >= 2000000000 it doesn't work at all.
code:
#include <iostream>
using namespace std;

int main() {
    int n; cin >> n;
    int l = 0, r = n + 1;
    while (r - l > 1) {
        int m = (r + l) / 2;
        if (m * m <= n) {
            l = m;
        } else {
            r = m;
        }
    }
    cout << l;
    return 0;
}
some tests:
1
1
16
4
but
2000000000000000
-3456735426738
can't understand why...
I tested the same code in Python and it works fine,
so it's probably some C++ behaviour I don't know about.
A number n >= 2000000000 works fine by itself, as long as it doesn't exceed its type's maximum allowed value (more on that shortly).
Because it seems you're not familiar with data types and their sizes in C and C++, I'll keep it simple.
A type of int is normally 4 bytes (yes, I said "normally" as there are exceptions to this rule - this is a different discussion regarding platforms and their architecture, for now, take the simple explanation that it's 4 bytes in most cases), meaning 32 bits. It can be signed or unsigned.
Minor caveat: when unsigned is not explicitly specified, then it's considered to be signed by default, so int x; would mean that x can take negative values as well
A signed int (signed, meaning it has both positive and negative numbers; apart from zero and the most negative number, you'd have each value "twice", once with + and once with -, hence the terminology of signed) has the following range: -2147483648 to +2147483647.
To "increase" the maximum allowed value, you'd need an unsigned int. Its range is 0 to 4294967295.
There are "bigger" types in C and C++ but I think that discussion is slightly more advanced. Short version is this: for a 64bit integer, if you're using GCC you can use uint64_t, if you're using MSVS you can either use __int64, but you can also use uint64_t.
For even larger values, well... it gets really complicated. Python has native support for larger numbers, that is why it works there from the get-go.
You need to check the data types available in C and C++, preferably by reading up on C17 (the 2017 C standard, which is the newest released) and C++20 (the 2020 C++ standard). The roadmap says the next standard update for both would be in 2023 (so fingers crossed :) ).
Regarding your code, however, also keep in mind what molbdnilo and ALX23z said regarding overflow in their comments. Even if you chose a data type with a sufficient range, there's still a risk of overflow due to mistakes in your code (see the sketch after these comments):
molbdnilo: m * m overflows
ALX23z: Instead of m * m <= n write m < n/m. And inspect better the case when m == n/m
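Not part of the original answers, but a sketch of the loop with both fixes applied: 64-bit integers plus the division-based test from the comments (written as m <= n / m so the m == n/m case is handled too):
#include <iostream>
using namespace std;

int main() {
    long long n; cin >> n;
    long long l = 0, r = n + 1;
    while (r - l > 1) {
        long long m = (r + l) / 2;
        if (m <= n / m) {   // same test as m * m <= n, but cannot overflow
            l = m;
        } else {
            r = m;
        }
    }
    cout << l;
    return 0;
}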
While debugging an issue in our codebase, I stumbled upon a problem which is quite similar to this sample problem below
#include <iostream>
#include <vector>
int main() {
    std::vector<int> v;
    int MAX = 100;
    int result = (v.size() - 1) / MAX;
    std::cout << result << std::endl;
    return 0;
}
I would expect the output of the program to be 0, but it's -171798692.
Can someone help me understand this?
v.size() returns an unsigned value of type std::vector::size_type, which is typically size_t.
Arithmetic on unsigned values wraps around, so (v.size() - 1) will be 0xffffffffffffffff (18446744073709551615) if your size_t is 64 bits long.
Dividing this value with 100 yields 0x28F5C28F5C28F5C (184467440737095516).
Then, this result is converted to int. If your int is 32-bit long and the conversion is done by simple truncation, the value will be 0xF5C28F5C.
This value represents -171798692, which you got, in two's complement.
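A sketch reproducing each step of that chain (assuming a 64-bit size_t, a 32-bit int, and two's-complement truncation on the conversion):
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v;
    std::size_t wrapped = v.size() - 1;   // wraps to 0xffffffffffffffff
    std::size_t divided = wrapped / 100;  // 0x028f5c28f5c28f5c
    int truncated = (int)divided;         // keeps the low 32 bits: 0xf5c28f5c
    std::printf("%zx %zx %d\n", wrapped, divided, truncated);  // ffffffffffffffff 28f5c28f5c28f5c -171798692
    return 0;
}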
The problem is v.size() - 1.
The size() function returns an unsigned value. When you subtract 1 from unsigned 0 you don't get -1 but rather a very large value.
Then you convert this large unsigned value back into a signed integer type, which can turn it negative.
Not only that, but on a 64-bit system it's likely that size() returns a 64-bit value, while int stays 32 bits, making you lose half the data.
Vector v is empty, so v.size() is 0. Also v.size() is unsigned, so strange things happen when you subtract from that 0.
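Not from the answers themselves, but one possible fix is to do the subtraction in a signed type (or check v.empty() first); a minimal sketch:
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    int MAX = 100;
    // Cast the size to a signed type before subtracting, so 0 - 1 really is -1.
    int result = (static_cast<int>(v.size()) - 1) / MAX;
    std::cout << result << std::endl;   // prints 0 (-1 / 100 truncates toward zero)
    return 0;
}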
I'm programming in C++ and I have to store big numbers in one of my exercises.
The biggest number I have to store is: 9 780 321 563 842.
Each time I try to print the number (contained in a variable) it gives me a wrong result (not that number).
A 32-bit type isn't enough, since 2^32 is a 10-digit number and I have to store a 13-digit number. But with 64 bits you can represent a number that has 20 digits. So I tried using the type "uint64_t" but that didn't work for me and I really don't understand why.
So I searched on the internet to find which type would be sufficient for my variable to fit in. I saw people on this forum with the same problem, but they solved it using long long int or long double as the type. Neither worked for me (nor did long float).
I really don't know which other type could store that number, as I tried a lot but nothing worked for me.
Thanks for your help! :)
--
EDIT: The code is a bit long and complex and doesn't matter for the question, so this is essentially what I do with the variable containing that number:
string barcode_s = "9780321563842";
uint64_t barcode = atoi(barcode_s.c_str());
cout << "Barcode is : " << barcode << endl;
Of course I don't put that number in a variable (of type string) "barcode_s" just to convert it directly to a number, but that's what happens in my program. I read text from an input file and put it in "barcode_s" (the text I read and put in that variable is always a number) and then I convert that string to a number (using atoi).
So I presume the problem comes from the "atoi" function?
Thanks for your help!
The problem is indeed atoi: it returns an int, which is a 32-bit integer on most platforms. Converting from int to uint64_t will not magically restore the information that has already been lost.
There are several solutions, though. In C++03, you could use stringstream to handle the conversion:
std::istringstream stream(barcode_s);
unsigned long long barcode = 0;
if (not (stream >> barcode)) { std::abort(); }
In C++11, you can simply use stoul or stoull:
unsigned long long const barcode = std::stoull(barcode_s);
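Put together with the snippet from the question, a C++11 version might look like this:
#include <iostream>
#include <string>

int main() {
    std::string barcode_s = "9780321563842";
    unsigned long long barcode = std::stoull(barcode_s);
    std::cout << "Barcode is : " << barcode << std::endl;   // 9780321563842
    return 0;
}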
Your number 9 780 321 563 842 is hex 8E52897B4C2, which fits into 44 bits (4 bits per hex digit), so any 64-bit integer, no matter if signed or unsigned, will have space to spare. 'uint64_t' will work, and it will even fit into a 'double' with no loss of precision.
It follows that the remaining issue is a mistake in your code, usually either an accidental conversion of the 64-bit number to another type somewhere, or calling the wrong function to print a 64-bit integer.
Edit: just saw your code. 'atoi' returns int. As in 'int32_t'. Converting that to 'uint64_t' will not reconstruct the 64-bit number. Have a look at this: http://msdn.microsoft.com/en-us/library/czcad93k.aspx
The atoll() function converts a char* to a long long.
If you don't have that function available, write your own in the meantime.
#include <cstdint>
#include <string>
// Hand-rolled conversion: no validation and no overflow checking.
uint64_t parse_u64(const std::string& str) {
    uint64_t result = 0;
    for (unsigned int ii = 0; str.c_str()[ii] != 0; ++ii) {
        result *= 10;
        result += str.c_str()[ii] - '0';
    }
    return result;
}
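Hypothetical usage (parse_u64 is just a name chosen for the loop above; like the original snippet it does no error checking, unlike std::stoull):
uint64_t barcode = parse_u64(barcode_s);   // 9780321563842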
As the question title reads, assigning 2^31 to a signed and unsigned 32-bit integer variable gives an unexpected result.
Here is the short program (in C++), which I made to see what's going on:
#include <cstdio>
using namespace std;

int main()
{
    unsigned long long n = 1<<31;
    long long n2 = 1<<31; // this works as expected
    printf("%llu\n", n);
    printf("%lld\n", n2);
    printf("size of ULL: %d, size of LL: %d\n", sizeof(unsigned long long), sizeof(long long));
    return 0;
}
Here's the output:
MyPC / # c++ test.cpp -o test
MyPC / # ./test
18446744071562067968 <- Should be 2^31 right?
-2147483648 <- This is correct ( -2^31 because of the sign bit)
size of ULL: 8, size of LL: 8
I then added another function p(), to it:
void p()
{
    unsigned long long n = 1<<32; // since n is 8 bytes, this should be legal for any integer from 32 to 63
    printf("%llu\n", n);
}
On compiling and running, this is what confused me even more:
MyPC / # c++ test.cpp -o test
test.cpp: In function ‘void p()’:
test.cpp:6:28: warning: left shift count >= width of type [enabled by default]
MyPC / # ./test
0
MyPC /
Why should the compiler complain about left shift count being too large? sizeof(unsigned long long) returns 8, so doesn't that mean 2^63-1 is the max value for that data type?
It struck me that maybe n*2 and n<<1, don't always behave in the same manner, so I tried this:
void s()
{
    unsigned long long n = 1;
    for (int a = 0; a < 63; a++) n = n * 2;
    printf("%llu\n", n);
}
This gives the correct value of 2^63 as the output, which is 9223372036854775808 (I verified it using Python). But what is wrong with doing a left shift?
A left arithmetic shift by n is equivalent to multiplying by 2^n
(provided the value does not overflow)
-- Wikipedia
The value is not overflowing, only a minus sign will appear since the value is 2^63 (the most significant bit is set).
I'm still unable to figure out what's going on with left shift, can anyone please explain this?
PS: This program was run on a 32-bit system running linux mint (if that helps)
On this line:
unsigned long long n = 1<<32;
The problem is that the literal 1 is of type int - which is probably only 32 bits. Therefore the shift will push it out of bounds.
Just because you're storing into a larger datatype doesn't mean that everything in the expression is done at that larger size.
So to correct it, you need to either cast it up or make it an unsigned long long literal:
unsigned long long n = (unsigned long long)1 << 32;
unsigned long long n = 1ULL << 32;
The reason 1 << 32 fails is because 1 doesn't have the right type (it is int). The compiler doesn't do any converting magic before the assignment itself actually happens, so 1 << 32 gets evaluated using int arithmetic, giving a warning about an overflow.
Try using 1LL or 1ULL instead which respectively have the long long and unsigned long long type.
The line
unsigned long long n = 1<<32;
results in an overflow, because the literal 1 is of type int, so 1 << 32 is also an int, which is 32 bits in most cases.
The line
unsigned long long n = 1<<31;
also overflows, for the same reason. Note that 1 is of type signed int, so it really only has 31 bits for the value and 1 bit for the sign. So when you shift 1 << 31, it overflows the value bits, resulting in -2147483648, which is then converted to an unsigned long long, which is 18446744071562067968. You can verify this in the debugger, if you inspect the variables and convert them.
So use
unsigned long long n = 1ULL << 31;
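A small check combining the fixes from these answers (assuming a 64-bit unsigned long long):
#include <cstdio>

int main() {
    unsigned long long a = 1ULL << 31;
    unsigned long long b = 1ULL << 32;
    std::printf("%llu\n%llu\n", a, b);   // prints 2147483648 and 4294967296
    return 0;
}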
I tried to get the MAX value for int using tilde. But the output is not what I expected.
When I run this:
#include <stdio.h>
#include <limits.h>
int main() {
    int a = 0;
    a = ~a;
    printf("\nMax value: %d", -a);
    printf("\nMax value: %d", INT_MAX);
    return 0;
}
I get output:
Max value: 1
Max value: 2147483647
I thought (for example) that if I have 0000 in RAM (I know that the first bit shows whether the number is positive or negative), then after ~ I get 0000 => 1111, and after - I get -(1111) => 0111, so I would get the MAX value.
You have a 32-bit two's complement system, so a = 0 is straightforward. ~a is 0xffffffff. In a 32-bit two's complement representation, 0xffffffff is -1. Basic algebra says that -(-1) is 1, so that's where your first printout comes from. INT_MAX is 0x7fffffff.
Your logical error is in this statement: "-(1111) => 0111", which is not true. The arithmetic negation operation for a two's complement number is equivalent to ~x+1 - for your example:
~x + 1 = ~(0xffffffff) + 1
= 0x00000000 + 1
= 0x00000001
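For instance, the identity -x == ~x + 1 can be checked directly (a small sketch assuming two's complement):
#include <stdio.h>

int main() {
    int x = -1;
    printf("%d %d\n", -x, ~x + 1);   /* both print 1 */
    return 0;
}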
Is there a reason you can't use std::numeric_limits<int>::max()? Much easier and impossible to make simple mistakes.
In your case, assuming 32 bit int:
int a = 0; // a = 0
a = ~a; // a = 0xffffffff = -1 in any twos-comp system
a = -a; // a = 1
So that math is an incorrect way of computing the max. I can't see a formulaic way to compute the max: just use numeric_limits (or INT_MAX if you're in a C-only codebase).
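For example, a minimal sketch of the numeric_limits approach (the printed values assume a 32-bit int):
#include <iostream>
#include <limits>

int main() {
    std::cout << std::numeric_limits<int>::max() << '\n';   // 2147483647 on a 32-bit int
    std::cout << std::numeric_limits<int>::min() << '\n';   // -2147483648 on a 32-bit int
    return 0;
}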
Your trick of using '~' to get maximum value works with unsigned integers. As others have pointed out, it doesn't work for signed integers.
Your posting shows an int which is equivalent to signed int. Try changing the type to unsigned int and see what happens.
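A sketch of that experiment (assuming a 32-bit unsigned int):
#include <stdio.h>

int main() {
    unsigned int a = 0;
    a = ~a;              /* all bits set */
    printf("%u\n", a);   /* 4294967295, i.e. UINT_MAX */
    return 0;
}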
There is no formula to compute the max value of a signed integer type in C. You simply must use the INT_MAX, etc. macros from limits.h and stdint.h.
Binary 1...1111 always represents -1. Simple math says -1 * -1 = 1!
Always remember there's just one zero: 0...0000. If you simply flipped the MSB (as you assumed), you'd have 10...0000, which would then be -0; that can't be right (0 = -0 in math, but the two binary patterns would be different).
Getting the negative value of a number isn't just about swapping the MSB.
It's not quite as straightforward as the top-bit indicating the sign. If it were, you could have both +0 and -0. You should read up on two's complement.
One correct way to write it is
max = (int)(~0u >> 1);
There is no >>> operator in C or C++ (that's Java); you need a shift that does NOT do sign extension, so the shift has to be done on an unsigned value, hence the ~0u above.
In 2's complement notation 111111... is -1; now, the unary minus operator does not simply change the sign bit (otherwise it would give strange results in every normal context), but correctly computes the opposite of the number, i.e. +1.
If you want to change the MSB you could use bitwise operators to simply set it to zero. Note, however, that this way of finding the maximum value for the int type is not portable, since you're making assumptions about how the number is represented that are not required by the standard.