I compiled the following code in turbo C compiler
#include <iostream.h>   // old Turbo C++ header (modern compilers use <iostream>)

void main()
{
    int i = 400 * 400 / 400;
    if (i == 400)
        cout << "Filibusters";
    else
        cout << "Sea ghirkins";
}
I expected the value of 'i' to be 400, and hence the output should be Filibusters. However, the output I got is Sea ghirkins. How is this possible?
You are overflowing your int type: the behaviour on doing that is undefined.
The range of an int can be as small as -32767 to +32767. Check the value of sizeof(int). If it's 2, then an int will not be able to represent 400 * 400. You can also check the values of INT_MAX and INT_MIN.
Use a long instead, which must be at least 32 bits. And perhaps treat yourself to a new compiler this weekend?
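If you want to verify this on your own setup, here is a minimal sketch of those checks (written in standard C++ rather than the old Turbo C dialect, so the header and main signature differ from the question's code) that prints the relevant limits and redoes the calculation with long:
#include <iostream>
#include <climits>

int main() {
    std::cout << "sizeof(int): " << sizeof(int) << "\n";   // 2 on a 16-bit compiler, 4 on most modern ones
    std::cout << "INT_MIN: " << INT_MIN << "\n";
    std::cout << "INT_MAX: " << INT_MAX << "\n";

    long i = 400L * 400L / 400L;   // long is at least 32 bits, so the intermediate 160000 fits
    std::cout << i << "\n";        // prints 400
    return 0;
}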
Look at operator associativity: * and / are left-associative, which means your expression is evaluated in this order: (400*400)/400.
400*400 = 160000. Computer arithmetic is finite. You are using a 16-bit compiler, where an int fits into 16 bits and can only hold values in the range -32768 ... 32767 (why -32768?). 160000 obviously doesn't fit into that range and gets trimmed (integer overflow occurs). Then you divide the trimmed value by 400 and get something you didn't expect. Compilers of the past were quite straightforward, so I would expect something like 72 to be stored in i.
The above means you can either use a bigger integer type - long, which is able to store 160000 - or change the order of evaluation manually using parentheses: 400*(400/400).
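To see where a value like 72 comes from, here is a small sketch that reproduces the 16-bit wraparound on a modern compiler; the fixed-width types are my addition, not part of the original Turbo C code, and the conversion to int16_t is implementation-defined before C++20, though it wraps as shown on common two's-complement platforms:
#include <cstdint>
#include <iostream>

int main() {
    std::int32_t full = 400 * 400;                            // 160000, computed in a wide type
    std::int16_t wrapped = static_cast<std::int16_t>(full);   // 160000 - 2*65536 = 28928
    std::cout << wrapped << "\n";                             // 28928
    std::cout << wrapped / 400 << "\n";                       // 72, roughly what a 16-bit int would give for i
    return 0;
}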
I am currently attempting to learn C++. I usually learn by playing around with stuff, and since I was reading up on data types and the ways you can write an integer's value (decimal, binary, hex, etc.), I decided to test how unsigned shorts work. I am now confused.
Here is my code:
#include <cstdio>

int main() {
    unsigned short a = 0b0101010101010101;
    unsigned short b = 0b01010101010101011;
    unsigned short c = 0b010101010101010101;
    printf("%hu\n%hu\n%hu\n", a, b, c);
}
Integers of type "unsigned short" should have a size of 2 bytes across all operating systems.
I have used binary to declare the values of these integers, because this is the easiest way to make the source of my confusion obvious.
Integer "a" has 16 digits in binary. 16 digits in a data type with a size of 16 bits (2 bytes). When I print it, I get the number 21845. Seems okay. Checks out.
Then it gets weird.
Integer "b" has 17 digits. When we print it, we get the decimal version of the whole 17 digit number, 43691. How does a binary number that takes up 17 digits fit into a variable that should only have 16 bits of memory allocated to it? Is someone lying about the size? Is this some sort of compiler magic?
And then it gets even weirder. Integer "c" has 18 digits, but here we hit the upper limit. When we build, we get the following warning:
/home/dimitrije/workarea/c++/helloworld.cpp: In function ‘int main()’:
/home/dimitrije/workarea/c++/helloworld.cpp:6:22: warning: large integer implicitly truncated to unsigned type [-Woverflow]
unsigned short c = 0b010101010101010101;
Okay, so we can put 17 digits in 16 bits, but we can't put in 18. Makes some kind of sense, I guess? Like we can magic away one digit, but two won't work. But the supposed "truncation", rather than truncating to the actual maximum value of 17 digits (43691 in this example), truncates to what the limit logically should be, 21845.
This is frying my brain and I'm too far down the rabbit hole to stop now. Does anyone understand why C++ behaves this way?
---EDIT---
So after someone pointed out to me that my binary numbers started with a 0, I realized I was stupid.
However, when I took the 0 from the left-hand side and moved it to the right (meaning that a, b, c were actually 16, 17, 18 bits respectively), I realized that the truncating behaviour still doesn't make sense. Here is the output:
43690
21846
43690
43690 is the maximum value for 16 bits. I could've checked this before asking the original question and saved myself some time.
Why do 17 bits truncate to 15, while 18 (and also 19, 20, 21) truncate to 16?
---EDIT 2---
I've changed all the digits in my integers to 1, and my mistake makes sense now. I get back 65535. I took the time to type 2^16 into a calculator this time. The entirety of my question was a result of the fact that I didn't properly look at the binary value I was assigning.
Thanks to the guy who linked implicit conversions, I will read up on that.
On most systems an unsigned short is 16 bits. No matter what you assign to an unsigned short, it will be truncated to 16 bits. In your example the first bit is a 0, which is essentially ignored, in the same way that int x = 05; will just equal 5 and not 05.
If you change the first bit from a 0 to a 1, you will see the expected behaviour of the assignment truncating the value to 16 bits.
The range for an unsigned short int (16 bits) is 0 to 65535
65535 = 1111 1111 1111 1111 in binary
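A minimal sketch of that truncation, reusing the literals from the question's edit (17 and 18 binary digits); most compilers will emit the same -Woverflow warning shown above, and the stored values match the edit's output:
#include <cstdio>

int main() {
    unsigned short b = 0b10101010101010110;    // 17 bits: 87382 % 65536 = 21846
    unsigned short c = 0b101010101010101010;   // 18 bits: 174762 % 65536 = 43690
    printf("%hu\n%hu\n", b, c);                // prints 21846 and 43690
    return 0;
}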
I am implementing an image processing filter within OTB (a C++ library).
I got some strange behaviour with some very basic testing. When I read my image as "char", the following code always outputs the pixel value (in the 0-200 range), even if it is larger than 150. However, it works fine when I use "short int". Is there some special behaviour with "char" (like a letter comparison instead of a comparison of its numerical value), or could there be some other reason?
Of course, my image pixels are stored in bytes, so I prefer to handle "char" instead of "int" because the image is quite large (> 10 GB).
if (pixelvalue > 150)
{
    out = 255;
}
else
{
    out = pixelvalue;
}
unsigned char runs to (at least) 255, but char may be signed and limited to 127.
The type char is either signed or unsigned (depending on what the compiler "likes" - and sometimes there are options to select signed or unsigned char types), and the guaranteed size is 8 bits minimum (it could be 9, 16, 18, 32 or 36 bits, even though such machines are relatively rare).
Your value of 150 is higher than the biggest signed value for an 8-bit value, which implies that the system you are using has a signed char.
If the purpose of the type is to be an integer value of a particular size, use [u]intN_t - in this case, since you want an unsigned value, uint8_t. That indicates much better that the data you are working on is not a "string" (even if, behind the scenes, the compiler translates it to unsigned char). It will also fail to compile if you ever encounter a machine where char isn't 8 bits, which is a safeguard against trying to debug weird problems.
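Here is a hedged sketch of the effect being described; the pixel value 200 is just an illustration, and whether plain char is signed is implementation-defined, so the first result assumes a platform with a signed 8-bit char:
#include <cstdint>
#include <iostream>

int main() {
    char signed_pixel = static_cast<char>(200);   // becomes -56 where char is signed 8-bit
    std::uint8_t unsigned_pixel = 200;            // always 200

    std::cout << (signed_pixel > 150) << "\n";    // 0 (false) on platforms with signed char
    std::cout << (unsigned_pixel > 150) << "\n";  // 1 (true)
    return 0;
}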
Up to 255, I can understand how integers are stored in char and unsigned char.
#include <stdio.h>

int main()
{
    unsigned char a = 256;
    printf("%d\n", a);
    return 0;
}
In the code above I have an output of 0 for unsigned char as well as char.
For 256, I think this is the way the integer is stored (this is just a guess):
First, 256 is converted to its binary representation, which is 100000000 (9 bits in total).
Then the leftmost bit (the bit which is set) is removed, because the char data type only has 8 bits of memory.
So it is stored in memory as 00000000, and that's why it prints 0 as output.
Is my guess correct, or is there another explanation?
Your guess is correct. Conversion to an unsigned type uses modular arithmetic: if the value is out of range (either too large, or negative) then it is reduced modulo 2^N, where N is the number of bits in the target type. So, if (as is often the case) char has 8 bits, the value is reduced modulo 256, so that 256 becomes zero.
Note that there is no such rule for conversion to a signed type - out-of-range values give implementation-defined results. Also note that char is not specified to have exactly 8 bits, and can be larger on less mainstream platforms.
On your platform (as well as on any other "normal" platform) unsigned char is 8 bits wide, so it can hold numbers from 0 to 255.
Trying to assign 256 (which is an int literal) to it makes the value wrap around, as the standard prescribes: the result of u = n, where u is an unsigned integral type and n is an integer outside its range, is u = n % (max_value_of_u + 1).
This is just a convoluted way of saying what you already said: the standard guarantees that in these cases the assignment keeps only the bits that fit in the target variable. This rule exists because most platforms already implement it at the assembly language level (unsigned integer overflow typically results in this wraparound, plus some kind of overflow flag being set to 1).
Notice that none of this holds for signed integers (which plain char often is): signed integer overflow is undefined behaviour.
Yes, that's correct. 8 bits can hold 0 to 255 unsigned, or -128 to 127 signed. Above that you've hit an overflow situation and bits will be lost.
Does the compiler give you a warning on the above code? You might be able to increase the warning level and see something. It won't warn you if the value you assign can't be determined statically (before execution), but in this case it's pretty clear you're assigning something too large for the size of the variable.
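A small sketch of the modular reduction described in these answers, with a couple of extra out-of-range values added purely for illustration (compilers will typically warn about each initializer, as in the question):
#include <cstdio>

int main() {
    unsigned char a = 256;   // 256 % 256 = 0
    unsigned char b = 257;   // 257 % 256 = 1
    unsigned char c = 300;   // 300 % 256 = 44
    printf("%d %d %d\n", a, b, c);   // prints 0 1 44
    return 0;
}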
I have this function which generates a specified number of so-called 'triangle numbers'. If I print out the deque afterwards, the numbers increase, jump down, then increase again. Triangle numbers should never get lower as i rises, so there must be some kind of overflow happening. I tried to fix it by adding the line if(toPush > INT_MAX) return i - 1; to stop the function from generating more numbers (and return the count it generated) if the result is overflowing. That is not working, however; the output continues to be incorrect (it increases for a while, jumps down to a lower number, then increases again). The line I added doesn't actually seem to be doing anything at all; the return is never reached. Does anyone know what's going on here?
#include <iostream>
#include <deque>
#include <climits>

int generateTriangleNumbers(std::deque<unsigned int> &triangleNumbers, unsigned int generateCount) {
    for (unsigned int i = 1; i <= generateCount; i++) {
        unsigned int toPush = (i * (i + 1)) / 2;
        if (toPush > INT_MAX) return i - 1;
        triangleNumbers.push_back(toPush);
    }
    return generateCount;
}
INT_MAX is the maximum value of a signed int. It's about half the maximum value of an unsigned int (UINT_MAX). Your calculation of toPush may well get much higher than UINT_MAX, because you essentially square i: once i is around the square root of UINT_MAX (about 65535), the product i * (i + 1) no longer fits in the unsigned int that toPush can hold. In that case toPush wraps around and results in a smaller value than the previous one.
First of all, your comparison to INT_MAX is flawed, since your type is unsigned int, not signed int. Secondly, even a comparison to UINT_MAX would be incorrect, since it implies that toPush (the left operand of the comparison expression) can hold a value above its maximum - and that's not possible. The correct way is to compare each generated number with the previous one: if it's lower, you know you have an overflow and you should stop.
Additionally, you may want to use a type that can hold a larger range of values (such as unsigned long long).
The 92682nd triangle number is already greater than UINT32_MAX. But the culprit here is much earlier, in the computation of i * (i + 1). There, the calculation overflows for the 65536th triangular number. If we ask Python, with its native bignum support:
>>> 2**16 * (2**16+1) > 0xffffffff
True
Oops. Then if you inspect your stored numbers, you will see your sequence dropping back to low values. To attempt to emulate what the Standard says about the behaviour of this case, in Python:
>>> ((2**16 * (2**16+1)) % 2**32) >> 1
32768
and that is the value you will see for the 65536th triangular number, which is incorrect.
One way to detect overflow here is to ensure that the sequence of numbers you generate is monotonic; that is, check that the Nth triangle number generated is strictly greater than the (N-1)th triangle number.
To avoid overflow, you can use 64-bit variables to both generate and store the numbers, or use a big-number library if you need a very large count of triangle numbers.
In Visual C++ int (and of course unsigned int) is 32 bits even on 64-bit computers.
Either use unsigned long long or uint64_t to get a 64-bit value.
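Pulling those suggestions together, here is a hedged sketch of the generator using a 64-bit type plus the previous-value check described above; the uint64_t signature and the previous variable are my additions, not part of the original function:
#include <deque>
#include <cstdint>

std::uint64_t generateTriangleNumbers(std::deque<std::uint64_t> &triangleNumbers,
                                      std::uint64_t generateCount) {
    std::uint64_t previous = 0;
    for (std::uint64_t i = 1; i <= generateCount; i++) {
        std::uint64_t toPush = (i * (i + 1)) / 2;   // 64-bit arithmetic, safe far beyond UINT32_MAX
        if (toPush <= previous) return i - 1;       // sequence stopped increasing: overflow, stop here
        triangleNumbers.push_back(toPush);
        previous = toPush;
    }
    return generateCount;
}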
This is a very basic question. Please don't mind, but I need to ask it: adding two integers.
#include <iostream>
using namespace std;

int main()
{
    cout << "Enter a string: ";
    int a, b, c;
    cout << "Enter a";
    cin >> a;
    cout << "\nEnter b";
    cin >> b;
    cout << a << "\n" << b << "\n";
    c = a + b;
    cout << "\n" << c;
    return 0;
}
If I give a = 2147483648, then b automatically takes a value of 4046724 (note that cin will not prompt for it), and the result c is 7433860.
If int is 2^32 and if the first bit is MSB then it becomes 2^31
c= 2^31+2^31
c=2^(31+31)
is this correct?
So how to implement c= a+b for a= 2147483648 and b= 2147483648 and should c be an integer or a double integer?
When you perform any sort of input operation, you must always include an error check! For the stream extraction operator, the check could look like this:
int n;
if (!(std::cin >> n)) { std::cerr << "Error!\n"; std::exit(-1); }
// ... rest of program
If you do this, you'll see that your initial extraction of a already fails, so whatever values are read afterwards are not well defined.
The reason the extraction fails is that the literal token "2147483648" does not represent a value of type int on your platform (it is too large), no different from, say, "1z" or "Hello".
The real danger in programming is to assume silently that an input operation succeeds when often it doesn't. Fail as early and as noisily as possible.
The int type is signed, and therefore its maximum value is 2^31 - 1 = 2147483648 - 1 = 2147483647.
Even if you used an unsigned int, its maximum value is 2^32 - 1 = a + b - 1 for the values of a and b you give.
For the arithmetic you are doing, you would be better off using "long long", which has a maximum value of 2^63 - 1 and is signed, or "unsigned long long", which has a maximum value of 2^64 - 1 but is unsigned.
c= 2^31+2^31
c=2^(31+31)
is this correct?
No, but you're right that the result takes more than 31 bits. In this case the result takes 32 bits (whereas 2^(31+31) would take 62 bits). You're confusing multiplication with addition: 2^31 * 2^31 = 2^(31+31).
Anyway, the basic problem you're asking about is called overflow. There are a few options: you can detect it and report it as an error, detect it and redo the calculation in a way that gets the correct answer, or just use data types that allow you to do the calculation correctly no matter what the input values are.
Signed overflow in C and C++ is technically undefined behavior, so detection consists of figuring out what input values will cause it (because if you do the operation and then look at the result to see if overflow occurred, you may have already triggered undefined behavior and you can't count on anything). Here's a question that goes into some detail on the issue: Detecting signed overflow in C/C++
Alternatively, you can just perform the operation using a data type that won't overflow for any of the input values. For example, if the inputs are ints then the correct result for any pair of ints can be stored in a wider type such as (depending on your implementation) long or long long.
int a, b;
...
long c = (long)a + (long)b;
If int is 32 bits then it can hold any value in the range [-2^31, 2^31-1]. So the smallest value obtainable would be -2^31 + -2^31 which is -2^32. And the largest value obtainable is 2^31 - 1 + 2^31 - 1 which is 2^32 - 2. So you need a type that can hold these values and every value in between. A single extra bit would be sufficient to hold any possible result of addition (a 33-bit integer would hold any integer from [-2^32,2^32-1]).
Or, since double can probably represent every integer you need (a 64-bit IEEE 754 floating point data type can represent integers up to 53 bits exactly) you could do the addition using doubles as well (though adding doubles may be slower than adding longs).
If you have a library that offers arbitrary precision arithmetic you could use that as well.
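As a concrete illustration of the earlier suggestions, here is a hedged rework of the original program that reads into long long and checks that each extraction succeeds; the prompt text and error messages are mine, not the asker's:
#include <iostream>

int main()
{
    long long a, b;
    std::cout << "Enter a: ";
    if (!(std::cin >> a)) { std::cerr << "Bad input for a\n"; return 1; }
    std::cout << "Enter b: ";
    if (!(std::cin >> b)) { std::cerr << "Bad input for b\n"; return 1; }

    long long c = a + b;   // 2147483648 + 2147483648 = 4294967296 fits comfortably in long long
    std::cout << a << "\n" << b << "\n" << c << "\n";
    return 0;
}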