Storing negative number in an unsigned int [closed] - c++

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 8 years ago.
I have access to a program which I'm running which SHOULD be guessing a very low number for certain things and outputting that number (probably 0 or 1). However, about 0.2% of the time, when it should output 0 it instead outputs a number between 4,294,967,286 and 4,294,967,295. (Note: the latter is the maximum value a 32-bit unsigned integer can hold.)
What I GUESS is happening is that the function computes a value just below 0 (somewhere in the range -1 to -9), and when that number is assigned to an unsigned int it wraps around to the maximum value, or close to it.
I therefore assumed the program is written in C (I do not have access to the source code) and then tested, in a small C program in Visual Studio 2012, what happens when I assign a variety of negative numbers to an unsigned integer. Unfortunately, nothing seemed to happen: the number was still printed to the console as a negative integer. I'm wondering whether this is MSVS 2012 trying to be smart, or something else entirely.
Anyway, am I correct in assuming that this is what is happening, and the reason why the program outputs the maximum value of an unsigned int? Or are there other valid reasons why this could happen?
Edit: All I want to know is whether it's valid to assume that assigning a negative number to an unsigned integer can set the integer to its maximum value, i.e. 4,294,967,295. If that is IMPOSSIBLE, then fine. I'm not looking for SPECIFICS on exactly why this is happening with this program, as I do not have access to the code. I just want to know whether it's possible, and therefore a plausible explanation for the results I'm getting.

In C and C++, assigning -1 to an unsigned variable gives you the maximum value of that unsigned type.
This is guaranteed by the standard, and every compiler I know of (even VC) implements this part correctly. Your C example probably has some other problem that keeps it from showing this result (hard to say without seeing the code).
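As a minimal check (a sketch, assuming a 32-bit unsigned int as on the asker's platform, and printing through the unsigned variable itself rather than via a signed type or a %d format):

#include <iostream>

int main()
{
    unsigned int u = -1;   // converted modulo 2^32 -> 4294967295
    unsigned int v = -6;   // -> 4294967290, right in the range the asker is seeing

    std::cout << u << '\n' << v << '\n';
}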

You can think of a negative number as having its first (most significant) bit count as negative.
A 4-bit integer would be:

Binary        HEX   INT4             UINT4
(in memory)         (as decimal)     (as decimal)
0000          0x0    0                0  (UINT4_MIN)
0001          0x1    1                1
0010          0x2    2                2
0100          0x4    4                4
0111          0x7    7  (INT4_MAX)    7
1000          0x8   -8  (INT4_MIN)    8
1111          0xF   -1               15  (UINT4_MAX)
It may also be that the header of a library lies to you and the value really is negative.
If the library has no other means of telling you about errors, this may be a deliberate error value; I have seen "nonsensical" values used in that manner before.
Such an error could be encoded as (UINT4_MAX - error), or the function could always return UINT4_MAX when an error occurs.
Really, without any source code this is a guessing game.
EDIT:
I expanded the illustrative table a bit.
If you want to log a number like that, consider logging it in hexadecimal form. The hex view lets you peek into memory a bit more quickly once you are used to it.
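For example (just a sketch, again assuming a 32-bit unsigned int), the wrapped value is much easier to recognize at a glance in hex:

#include <iostream>

int main()
{
    unsigned int value = -10;                                  // wraps to 4294967286
    std::cout << std::hex << std::showbase << value << '\n';   // prints 0xfffffff6
}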

Related

Why are large positive numbers stored as negative numbers in computer memory? [duplicate]

This question already has answers here:
C++ integer overflow
(4 answers)
Closed 1 year ago.
I am using C++14. The size of int is 4 bytes.
Here is my code:
#include <iostream>
using namespace std;

int main()
{
    int a = 4294967290;
    int b = -6;
    if (b == a)
        cout << "Equal numbers";
}
This prints Equal numbers, which means 4294967290 and -6 have the same binary representation in memory.
Then how are large positive numbers distinguished from negative numbers?
Is this specific to C++, or does it happen in other programming languages as well?
Bits is bits. What the bits mean is up to you.
Let's talk about 8-bit quantities to make it easier on us. Consider the bit pattern
1 0 0 0 0 0 0 0
What does that 'mean'?
If you want to consider it as an unsigned binary integer, it's 128 (2 to the 7th power).
If you want to consider it as a signed binary integer in two's-complement representation, it's -128.
If you want to treat it as a signed binary integer in sign-and-magnitude representation (which nobody does any more), it's -0. That is one reason we don't do that.
In short, the way large positive numbers are distinguished from negative numbers is that the programmer knows what he intends the bits to mean. It's something that does not exist in the bits themselves.
Languages like C/C++ have signed and unsigned types to help (by defining whether, for example, 1000 0000 is greater or less than 0000 0000), but there will always be pitfalls you need to be aware of, because integers in computer hardware are finite, unlike the real world.
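To make that concrete with the numbers from the question, here is a small sketch (casting an out-of-range unsigned value back to a signed type is implementation-defined before C++20, but wraps as shown on the usual two's-complement machines, and is guaranteed to since C++20):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t bits = 0xFFFFFFFAu;                        // the bit pattern of -6
    std::int32_t  as_signed = static_cast<std::int32_t>(bits);

    std::cout << bits << '\n';        // 4294967290 when the bits are read as unsigned
    std::cout << as_signed << '\n';   // -6 when the very same bits are read as signed
}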

Two's complement addition: -48 - 23. Is it necessary to use an 8-bit representation?

The approach I am using to solve this is to first write both numbers in two's complement form.
For this I first convert 48 and 23 to binary, then take the one's complement of each and add 1.
48 = (0110000), 23 = (0010111) (in 7-bit signed binary representation)
Now their two's complements are -48 = (1010000) and -23 = (1101001).
Now I just add them:
  1010000
+ 1101001
---------
 10111001
Now in my textbook it's written that the final carry 1 should be discarded. If I discard it, I get the wrong answer. If I use an 8-bit representation instead of 7 bits, I get the correct answer.
So my question is: why isn't the 7-bit representation giving the correct answer? Is it necessary to use some 2^n-bit representation?
What you've just encountered is the classic 'overflow' problem. If you only have 7 bits to represent a number, the 'correct answer' (-71) is unrepresentable, because a signed 7-bit number can only hold values from -64 to 63, so its magnitude is simply too large to fit within 7 bits. Of course, in an ideal case you would want to keep all the bits to ensure your answer is correct, but this is subject to the limitations of hardware. This is how integers get their associated maximum and minimum values (e.g. 2,147,483,647 for a 32-bit signed integer).
For some added info: overflow checks are common in programming (automated in some higher-level languages, manual in others). Generally, if you add two numbers with the same sign (both positive or both negative) and the result has the opposite sign (because the carry spills into the sign bit), you know an overflow has occurred.
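Here is a sketch of that same-sign rule (my own illustration, not from the answer; the addition is done on uint32_t because signed overflow is undefined behaviour in C++, while unsigned wraparound is well defined):

#include <cstdint>
#include <iostream>

bool add_overflows(std::int32_t a, std::int32_t b)
{
    std::uint32_t ua = static_cast<std::uint32_t>(a);
    std::uint32_t ub = static_cast<std::uint32_t>(b);
    std::uint32_t sum = ua + ub;                          // wraps modulo 2^32

    bool same_sign_in = ((ua ^ ub) & 0x80000000u) == 0;   // operands agree in sign
    bool sign_changed = ((ua ^ sum) & 0x80000000u) != 0;  // result sign differs from a's
    return same_sign_in && sign_changed;
}

int main()
{
    std::cout << add_overflows(2000000000, 2000000000) << '\n';   // 1: overflows 32 bits
    std::cout << add_overflows(-48, -23) << '\n';                 // 0: -71 fits in 32 bits
}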

Is there anything wrong if the outcome is zero

I'm doing an exercise on two's complement; the question sounds like this:
Solving 11 (base 10) - 11 (base 10) using 2's complement will lead to a problem when using a 7-bit data representation. Explain what the problem is and suggest steps to overcome it.
I got 0 for the answer because 11 - 11 = 0. What problem can there be if the answer is 0?
And is there a way to overcome it?
So 11 in base 10 is the following in 7-bit base 2:
000 1011
To subtract 11, you need to find -11 first. One of the many ways is to invert all the bits and add 1, leaving you with:
111 0101
Add the two numbers together:
1000 0000
Well, that's interesting. The 8th bit is a 1.
You didn't end up with zero. Or did you?
That's the question that your homework is attempting to get you to answer.
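If it helps, the whole exercise can be mimicked in a few lines of C++ by masking everything to 7 bits (just an illustration of the arithmetic above, not part of the original answer):

#include <cstdio>

int main()
{
    const unsigned MASK7 = 0x7F;             // keep only 7 bits

    unsigned a      = 11;                    // 000 1011
    unsigned neg_a  = (~a + 1u) & MASK7;     // invert and add 1 -> 111 0101
    unsigned raw    = a + neg_a;             // 1000 0000: bit 7 is the carry out
    unsigned result = raw & MASK7;           //  000 0000 once the carry is discarded

    std::printf("raw sum = 0x%02X, 7-bit result = 0x%02X\n", raw, result);
}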

Advice needed for an API for reading bits

I found a wonderful project called python-bitstring, and I believe a C++ port could be very helpful in quite a few situations (certainly in some projects of mine).
While porting the read/write/patch bytes methods, I didn't get any problems at all; it was as easy as translating Python to C++.
Anyway, now I'm getting to the bits methods and I'm not really sure how to express that functionality.
For example, let's say I want to create a method like:
readBits(uint64_t n_bits_to_read, uint64_t n_bits_to_skip = 0) {...}
Let's suppose, for the sake of this example, that this->data is a chunk of memory (void *) holding the entire data from which I'm reading.
So, the method will receive a number of bits to read and an optional number of bits to skip.
this->readBits(5, 2);
That way I'll be reading bits from position 2 to position 6 inclusive (forget little/big endian for the sake of this example).
0 1 1 0 1 0 1 1
    ‾ ‾ ‾ ‾ ‾
I can't return anything smaller than a byte (or can I?), so even if I actually read 5 bits, I'll still be returning 8. But what if I read 14 bits and skip 1? Is there any other way I could return only those bits in some more useful way?
I'm thinking about a few common situations, for example:
Do the first 14 bits match "010101....."
Do the next 13 bits after skipping 2 match "00011010....."
Read the first 5 bits and convert them to an int/float
Read 7 bits after skipping 5 and convert them to an int/float
My question is: what type of data/structure/methods should I return/expose in order to make working with bits easier (or at least easier for the previously described situations).
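Purely as a sketch of one possible shape (nothing below comes from python-bitstring; the struct and function names are made up): return a small struct that carries both the bits, right-aligned in a uint64_t, and how many of them are valid. That covers all four situations above, as long as a single read is capped at 64 bits.

#include <cstdint>

struct BitSlice {
    std::uint64_t value = 0;   // the bits that were read, right-aligned
    std::uint64_t count = 0;   // how many of the low bits are meaningful
};

// Hypothetical free-function version of readBits: reads n_bits_to_read bits
// from `data`, starting n_bits_to_skip bits in, most significant bit first.
// Assumes n_bits_to_read <= 64.
BitSlice readBits(const unsigned char* data,
                  std::uint64_t n_bits_to_read,
                  std::uint64_t n_bits_to_skip = 0)
{
    BitSlice out;
    out.count = n_bits_to_read;
    for (std::uint64_t i = 0; i < n_bits_to_read; ++i) {
        std::uint64_t pos = n_bits_to_skip + i;
        unsigned bit = (data[pos / 8] >> (7 - pos % 8)) & 1u;   // MSB-first within each byte
        out.value = (out.value << 1) | bit;
    }
    return out;
}

A caller can then compare value against a bit pattern, or convert it to an int/float, without caring where the byte boundaries fall.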

How are Overflow situations dealt with? [duplicate]

This question already has answers here:
Why is unsigned integer overflow defined behavior but signed integer overflow isn't?
(6 answers)
Closed 7 years ago.
I just wanted to know: who is responsible for dealing with mathematical overflow cases in a computer?
For example, in the following C++ code:
short x = 32768;
std::cout << x;
Compiling and running this code on my machine gave me a result of -32767.
A short variable's size is 2 bytes, and we know 2 bytes can hold a maximum decimal value of 32767 (if signed). So when I assigned 32768 to x, after exceeding its maximum value of 32767, it started counting from -32767 all over again towards 32767, and so on.
What exactly happened so that the value -32767 was produced in this case?
I.e., what are the binary calculations done in the background that resulted in this value?
So, who decided that this happens? I mean, who is responsible for deciding that when a mathematical overflow happens in my program, the value of the variable simply starts again from its minimum value, or an exception is thrown, or the program simply freezes, etc.?
Is it the language standard, the compiler, my OS, my CPU, or something else?
And how does it deal with that overflow situation? (A simple explanation, or a link explaining it in detail, would be appreciated :) )
And by the way, who decides what the size of a 'short int', for example, would be on my machine? Again, is it the language standard, the compiler, the OS, the CPU, etc.?
Thanks in advance! :)
Edit:
OK, so I understood from here: Why is unsigned integer overflow defined behavior but signed integer overflow isn't?
that it's the processor that defines what happens in an overflow situation (like on my machine, where it started from -32767 all over again), depending on the processor's representation for signed values, i.e. whether it is sign-magnitude, one's complement, or two's complement...
Is that right?
And in my case, where the result looked like it started from the minimum value -32767 again, how do you suppose my CPU is representing signed values, and how did the value -32767 come up? (Again, the binary calculations that lead to this, please :) )
It doesn't start at its minimum value per se. The value just gets truncated: for a 4-bit number, you can count up to 1111 (binary, 15 decimal). If you increment by one you get 10000, but there is no room for that, so the leading digit is dropped and 0000 remains. If you calculate 1111 + 10, you get 1.
You can add them up as you would on paper:
 1111
 0010
----- +
10001
But instead of adding up the entire number, the processor will only add up until it reaches (in this case) 4 bits. After that there is no more room to add anything up, but if there is still a 1 to 'carry', it sets the carry flag in its status register, so you can check whether the last addition it did overflowed.
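A minimal sketch of that 4-bit truncation (the mask plays the role of the 4-bit register, and the bit that falls off plays the role of the carry flag):

#include <iostream>

int main()
{
    unsigned a = 0b1111, b = 0b0010;
    unsigned full  = a + b;              // 10001 in binary
    unsigned kept  = full & 0xF;         // only 4 bits fit: 0001 = 1
    unsigned carry = (full >> 4) & 1u;   // the dropped bit, i.e. the "carry"

    std::cout << kept << " carry=" << carry << '\n';   // prints "1 carry=1"
}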
Processors have basic instructions to add numbers, and they have those for smaller and larger values. A 64-bit processor can add 64-bit numbers (actually, they usually don't add two numbers into a third one, but add the second number to the first, modifying the first; that's not really important for the story, though).
But apart from 64 bits, they can often also add 32-, 16- and 8-bit numbers. That's partly because it can be more efficient to add only 8 bits if you don't need more, but also to stay backwards compatible with programs written for a previous version of the processor that could add 32-bit but not 64-bit numbers.
Such a program uses an instruction to add 32-bit numbers, and the same instruction must also exist on the 64-bit processor, with the same behavior when there is an overflow; otherwise the program wouldn't run properly on the newer processor.
Apart from adding up using the core instructions of the processor, you can also add up in software. You could write an inc function that treats a big chunk of bits as a single value. To increment it, you let the processor increment the first 64 bits; the result is stored in the first part of your chunk. If the carry flag is set in the processor, you take the next 64 bits and increment those too. This way you can go beyond the processor's limitation and handle larger numbers in software.
And the same goes for the way an overflow is handled: the processor just sets the flag, and your application can decide whether to act on it or not. If you want a counter that just counts up to 65535 and then wraps to 0, you (your program) don't need to do anything with the flag.
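A rough software version of that multi-word scheme (my own sketch, not from the answer): the counter is stored as 64-bit limbs, and a limb that wraps to 0 after ++ plays the role of the carry flag, telling us to carry into the next limb.

#include <cstddef>
#include <cstdint>
#include <vector>

void increment(std::vector<std::uint64_t>& limbs)   // least significant limb first
{
    for (std::size_t i = 0; i < limbs.size(); ++i) {
        if (++limbs[i] != 0)        // no wraparound, so nothing to carry
            return;
        // this limb wrapped from 0xFFFF...FFFF to 0: carry into the next one
    }
    limbs.push_back(1);             // carried out of the top limb: the number grows
}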