I have the following code as my entity:
/**
 * @Entity
 * @Table(name="group")
 */
class Group {
    /**
     * @Column(type="string", nullable=true)
     */
    private $phone;

    // (etc...)

    public function __get($var) {
        return $this->$var;
    }

    public function __set($var, $val) {
        $this->$var = $val;
    }
}
I then try to set this to a number longer than nine digits, for example:
$group->phone = 5147811326;
$phone is set to 2147483647. The same occurs with 1111111111 (it is converted to 2147483647).
If I try 514781, it keeps the number fine, but anything over 9 digits just gets converted. I originally had the MySQL column as an integer but converted it to a string... still a problem.
What is happening?!
The number "5147811326" cannot be stored in a 32-bit integer because there are simply not enough bits. The highest binary value of a 32-bit integer is "11111111 11111111 11111111 11111111" (or 2147483647 in the decimal system). Try eiter a 64-bit integer or a string to store the phone number.
This means the data type you are using is 32 bits. 2147483647 is equal to 231 − 1.
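Regardless of language, the arithmetic limit is the same. A minimal C++ sketch of the limit involved (the variable names here are illustrative, not part of the original code):

#include <cstdint>
#include <iostream>

int main() {
    // INT32_MAX is 2^31 - 1 = 2147483647; 5147811326 does not fit.
    std::int64_t phone = 5147811326;           // fine in 64 bits
    std::cout << (phone > INT32_MAX) << '\n';  // prints 1: out of 32-bit range
    // A MySQL INT column (in non-strict mode) saturates out-of-range values
    // to 2147483647, which matches the value observed in the question.
}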
Related
I want to know why, in the formula to calculate the range of any data type,
i.e. 2^(n-1), it is n-1, where n is the number of bits occupied by the given data type.
Assuming that the type is unsigned, the maximum value is 2^n - 1, because there are 2^n values, and one of them is zero.
2^(n-1) is the value of the nth bit alone: bit 1 is 2^0, bit 2 is 2^1, and so on.
This is the same for any number base. In the decimal system, n digits can represent 10^n different values, with the maximum value being 10^n - 1, and the nth digit is "worth" 10^(n-1).
For example, the largest number with three decimal digits is 999 (that is, 10^3 - 1), and the third decimal digit is "the hundreds digit", 10^2.
First, 2^(n-1) is not correct; the maximum (unsigned) number represented by the data type is:
max = 2^n - 1
So for an 8-bit data type, the maximum represented value is 255.
2^n tells you the number of values represented (256 for the 8-bit example), but because you want to include 0, the range is 0 to 255 and not 1 to 256.
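To make the formula concrete, a short sketch that computes these maxima directly:

#include <cstdio>

int main() {
    // Maximum unsigned value in n bits is 2^n - 1.
    for (unsigned n = 8; n <= 32; n += 8) {
        // Shift in 64-bit so n == 32 does not overflow.
        unsigned long long max = (1ULL << n) - 1;
        std::printf("%2u bits: %llu\n", n, max);  // 255, 65535, 16777215, 4294967295
    }
}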
I saw the following line of code here in C.
int mask = ~0;
I have printed the value of mask in C and C++. It always prints -1.
So I do have some questions:
Why assign the value ~0 to the mask variable?
What is the purpose of ~0?
Can we use -1 instead of ~0?
It's a portable way to set all the binary bits in an integer to 1 bits without having to know how many bits are in the integer on the current architecture.
C and C++ allow 3 different signed integer formats: sign-magnitude, one's complement and two's complement
~0 will produce all-one bits regardless of the sign format the system uses, so it's more portable than -1.
You can add the U suffix (i.e. -1U) to generate an all-one bit pattern portably¹. However, ~0 states the intention more clearly: invert all the bits in the value 0, whereas -1 says that a value of minus one is needed, not its binary representation.
¹ Because unsigned operations are always reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.
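As a quick illustration of why an all-ones mask is useful (a minimal sketch; the variable names are made up):

#include <cstdio>

int main() {
    int mask = ~0;                                 // every bit set to 1
    int value = 0x1234;
    // ANDing with an all-ones mask leaves any value unchanged.
    std::printf("%d\n", (mask & value) == value);  // prints 1
}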
On a two's complement platform (which is assumed here), ~0 gives you -1, but writing -1 directly is forbidden by the rules (only the integer constants 0..255, unary ! and ~, and binary &, ^, |, +, << and >> are allowed).
You are looking at a coding challenge with a number of restrictions on operators and language constructions to perform given tasks.
The first problem is to return the value -1 without the use of the - operator.
On machines that represent negative numbers with two's complement, the value -1 is represented with all bits set to 1, so ~0 evaluates to -1:
/*
* minusOne - return a value of -1
* Legal ops: ! ~ & ^ | + << >>
* Max ops: 2
* Rating: 1
*/
int minusOne(void) {
    // ~0 = 111...111 = -1
    return ~0;
}
Other problems in the file are not always implemented correctly. The second problem, returning a boolean value representing whether an int value would fit in a 16-bit signed short, has a flaw:
/*
* fitsShort - return 1 if x can be represented as a
* 16-bit, two's complement integer.
* Examples: fitsShort(33000) = 0, fitsShort(-32768) = 1
* Legal ops: ! ~ & ^ | + << >>
* Max ops: 8
* Rating: 1
*/
int fitsShort(int x) {
    /*
     * After shifting left by 16 and right by 16, the upper 16 bits of x
     * are either 000...0 or 111...1. If x is unchanged by the round trip,
     * it can be represented in 16 bits.
     */
    return !(((x << 16) >> 16) ^ x);
}
Left-shifting a negative value, or a value whose shifted result is out of the range of int, has undefined behavior, and right-shifting a negative value is implementation-defined, so the above solution is incorrect (although it is probably the expected solution).
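Outside the puzzle's operator restrictions, a portable check avoids shifting signed values altogether (a sketch, not the expected datalab answer):

#include <cstdio>

// Returns 1 if x fits in a 16-bit two's complement integer.
int fitsShortPortable(int x) {
    return x >= -32768 && x <= 32767;
}

int main() {
    std::printf("%d %d\n", fitsShortPortable(33000), fitsShortPortable(-32768));  // 0 1
}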
Loooong ago, this was how you saved memory on extremely limited equipment such as the 1K ZX80 or ZX81 computer. In BASIC, you would write
LET X = NOT PI
rather than
LET X = 0
Since numbers were stored as 4-byte floating points, the latter takes 2 bytes more than the NOT PI alternative, where each of NOT and PI takes up a single byte.
There are multiple ways of encoding numbers across computer architectures. When using 2's complement, ~0 == -1 is always true. On the other hand, some computers use 1's complement to encode negative numbers, for which the above is untrue, because there ~0 == -0. Yes, 1's complement has a negative zero, and that is why it is not very intuitive.
So, to your questions:
- ~0 is assigned to mask so that all the bits in mask are equal to 1, making mask & sth == sth;
- ~0 is used to make all bits equal to 1 regardless of the platform used;
- you can use -1 instead of ~0 if you are sure that your platform uses 2's complement number encoding.
My personal advice: make your code as platform-independent as you can. The cost is relatively small and the code becomes fail-proof.
As the title says, I am trying to convert hex numbers like 0x320000dd to decimal numbers.
My code only works for positive numbers but fails for hex numbers that represent negative decimal numbers. Here is an excerpt of my code:
cin >> hex >> x;
unsigned int number = x;
int r = number & 0xffffff;
My input is already in hex, and the computer converts it automatically to an integer. What I am trying to do is get the operand of the hex number, i.e. the last 24 bits.
Can you help me get my code working for negative values like 0x32fffdc9 or 0x32ffffff? Thanks a lot!
EDIT:
I would like my output to be:
0x32fffdc9 --> -567
or
0x32ffffff --> -1
so just the plain decimal values, but instead it gives me 16776649 and 16777215 for the examples above.
Negative integers are typically stored in 2's complement, meaning that if the most significant bit (MSB) is not set, the number is not negative. Here the sign bit of your 24-bit field is bit 23. So just as you need to clear the 8 MSBs of a 32-bit number to clamp it to a positive 24-bit number, you need to set the 8 MSBs to sign-extend a negative one:
const int32_t r = 0x800000 & number ? 0xFF000000 | number : number & 0xFFFFFF;
vector<bool> or bitset may be worth your consideration, as they would clarify the hexadecimal numbers to the range of bits to be set.
float x = 384.951257;
std::cout << std::fixed << std::setprecision(6) << x << std::endl;
The output is 384.951263. Why? I'm using gcc.
float is usually only 32-bit. With about 3 bits per decimal digit (2^10 roughly equals 10^3), that means it can't possibly represent more than about 11 decimal digits, and accounting for other information it also needs to represent, such as magnitude, let's say 6-7 decimal digits. Hey, that's what you got!
Check e.g. Wikipedia for details.
Use double or long double for better precision. double is the default in C++. E.g., the literal 3.14 is of type double.
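A minimal sketch of the difference, with the output a typical IEEE 754 implementation produces:

#include <iomanip>
#include <iostream>

int main() {
    float  f = 384.951257f;   // ~6-7 significant decimal digits
    double d = 384.951257;    // ~15-16 significant decimal digits
    std::cout << std::fixed << std::setprecision(6)
              << f << '\n'    // 384.951263 (nearest representable float)
              << d << '\n';   // 384.951257
}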
Floats have a limited resolution, so the value gets rounded when you assign it to x.
All answers here talk as though the issue were due to floating-point numbers and their capacity, but those are just implementation details; the issue is deeper than that. The issue occurs whenever decimal numbers are represented in the binary number system. Even something as simple as 0.1₁₀ is not precisely representable in binary: binary can only represent exactly those numbers expressible as a finite fraction whose denominator is a power of 2. Unfortunately, this does not include most of the numbers that can be represented as finite fractions in base 10, like 0.1.
The single-precision float datatype usually maps to binary32, as the IEEE 754 standard calls it. It has 32 bits, partitioned into 1 sign bit, 8 exponent bits and 23 significand bits (excluding the hidden/implicit bit). Thus we have to calculate up to 24 significant bits when converting to binary32.
Other answers here evade the actual calculations involved; I'll try to do them. This method is explained in greater detail here. So let's convert the real number into a binary number:
Integer part: 384₁₀ = 110000000₂ (using the usual method of successive division by 2)
Fractional part: 0.951257₁₀ can be converted by successive multiplication by 2, taking the integer part at each step:
0.951257 * 2 = 1.902514
0.902514 * 2 = 1.805028
0.805028 * 2 = 1.610056
0.610056 * 2 = 1.220112
0.220112 * 2 = 0.440224
0.440224 * 2 = 0.880448
0.880448 * 2 = 1.760896
0.760896 * 2 = 1.521792
0.521792 * 2 = 1.043584
0.043584 * 2 = 0.087168
0.087168 * 2 = 0.174336
0.174336 * 2 = 0.348672
0.348672 * 2 = 0.697344
0.697344 * 2 = 1.394688
0.394688 * 2 = 0.789376
Gathering the obtained fractional bits, we have 0.111100111000010₂. The overall number in binary is 110000000.111100111000010₂; this has the required 24 significant bits.
Converting this back to decimal gives 384 + (15585 / 16384) = 384.951232₁₀. But the next fraction bit would be a 1 (0.789376 × 2 = 1.578752), so under round-to-nearest the stored value rounds up by one unit in the last place, to the 384.951263₁₀ that you see.
This can be verified here.
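For a quick check, a minimal sketch that prints the value the float actually stores:

#include <cstdio>

int main() {
    float x = 384.951257f;
    // Prints 384.9512634277: the stored value is 12614083 / 32768.
    std::printf("%.10f\n", x);
}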
What kind of method does the compiler use to store numbers? One char is 0-255. Two chars side by side are 0-255255. In C++ one short is 2 bytes, with a range of 0-65535. Now, how does the compiler get from 255255 to 65535, and what happens with the - in the unsigned numbers?
The maximum value you can store in n bits (when the lowest value is 0 and the values represented are a continuous range), is 2ⁿ − 1. For 8 bits, this gives 255. For 16 bits, this gives 65535.
Your mistake is thinking that you can just concatenate 255 with 255 to get the maximum value in two chars; this is definitely wrong. Instead, to get from the number of values in 8 bits, which is 256, to the number of values in 16 bits, you multiply: 256 × 256 = 65536. Since the first value is 0, the maximum value is once again one less: 65535.
Note that a char is only guaranteed to have at least 8 bits and a short at least 16 bits (and must be at least as large as a char).
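A quick sketch of how two 8-bit values actually combine (illustrative variable names):

#include <cstdio>

int main() {
    unsigned char hi = 255, lo = 255;
    // Bytes combine by place value, not by digit concatenation:
    unsigned int both = (hi << 8) | lo;   // 255 * 256 + 255
    std::printf("%u\n", both);            // 65535, not 255255
}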
You have got the math wrong. Here's how it really is:
Since each bit can take on only one of two states (1 and 0), n bits as a whole can represent 2^n different values. A standard short integer of 2 bytes (n = 16) can therefore represent the values 0 through 2^16 - 1 = 65535, which are mapped to decimal numbers for computational convenience.
Two characters, however, are two separate entities (a string is an array of characters). There are many ways to read two characters as a whole; if you read them as a string, they are simply two separate characters side by side. Let me give you an example (I will be using hexadecimal notation for simplicity; if you have doubts about mapping ASCII characters to hex, take a look at an ASCII-to-hex table):
For simplicity, let us assume the characters stored in two adjacent positions are both A. The hex code for A is 0x41, so the memory would look like
1st byte ....... 2nd byte
01000001         01000001
If you were to read this from memory as a string and print it out, the output would be
AA
If you were to read the whole 2 bytes as an integer, this would represent
0 * 2^15 + 1 * 2^14 + 0 * 2^13 + 0 * 2^12 + 0 * 2^11 + 0 * 2^10 + 0 * 2^9 + 1 * 2^8 + 0 * 2^7 + 1 * 2^6 + 0 * 2^5 + 0 * 2^4 + 0 * 2^3 + 0 * 2^2 + 0 * 2^1 + 1 * 2^0
= 16705
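A sketch of that reinterpretation in code (ASCII and a 2-byte short are assumed; since 0x4141 has two identical bytes, byte order does not affect this particular example):

#include <cstdio>
#include <cstring>

int main() {
    char two[2] = {'A', 'A'};     // 0x41 0x41 in memory
    unsigned short n;
    std::memcpy(&n, two, 2);      // read the same 2 bytes as an integer
    std::printf("%u\n", n);       // 16705 (0x4141)
}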
If unsigned integers were used, the 2 bytes of data would be mapped to integers between 0 and 65535. If the same bytes represented a signed value, then, though the number of representable values remains the same, the biggest positive number would be 32767: the values would lie between -32768 and 32767. This is because not all 16 bits carry magnitude; the highest-order bit is reserved to determine the sign, with 1 representing negative and 0 representing positive.
You must also note that type conversion (two characters read as a single integer) might not always give you the desired results, especially when you narrow the range (for example, when a double-precision float is converted to an integer). For more on that, see this answer on double to int.
Hope this helps.
When using the decimal system, it's true that the range of one digit is 0-9 and the range of two digits is 0-99. The same thing applies in the hexadecimal system, but you have to do the math in hexadecimal. The range of one hexadecimal digit is 0-Fh, and the range of two hexadecimal digits (one byte) is 0-FFh. The range of two bytes is 0-FFFFh, which translates to 0-65535 in the decimal system.
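These ranges are easy to confirm with a trivial sketch:

#include <cstdio>

int main() {
    // One hex digit, two hex digits (a byte), four hex digits (two bytes):
    std::printf("%d %d %d\n", 0xF, 0xFF, 0xFFFF);  // 15 255 65535
}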
Decimal is a base-10 number system. This means that each successive digit from right-to-left represents an incremental power of 10. For example, 123 is 3 + (2*10) + (1*100). You may not think of it in these terms in day-to-day life, but that's how it works.
Now you take the same concept from decimal (base-10) to binary (base-2) and now each successive digit is a power of 2, rather than 10. So 1100 is 0 + (0*2) + (1*4) + (1*8).
Now let's take an 8-bit number (char); there are 8 binary digits in this number, so the maximum value is 255 (2^8 - 1), or another way, 11111111 == 1 + (1*2) + (1*4) + (1*8) + (1*16) + (1*32) + (1*64) + (1*128).
When there are another 8 bits available to make a 16-bit value, you just continue counting powers of 2; you don't just "stick" the two 255s together to make 255255. So the maximum value is 65535, or another way, 1111111111111111 == 1 + (1*2) + (1*4) + (1*8) + (1*16) + (1*32) + (1*64) + (1*128) + (1*256) + (1*512) + (1*1024) + (1*2048) + (1*4096) + (1*8192) + (1*16384) + (1*32768).
It depends on the type: integral types must be stored as binary (or at least, appear so to a C++ program), so you have one bit per binary digit. With very few exceptions, all of the bits are significant (although this is not required, and there is at least one machine where there are extra bits in an int). On a typical machine, char will be 8 bits, and if it isn't signed, can store values in the range [0, 2^8); in other words, between 0 and 255 inclusive. unsigned short will be 16 bits (range [0, 2^16)), unsigned int 32 bits (range [0, 2^32)), and unsigned long either 32 or 64 bits.

For the signed values, you'll have to use at least one of the bits for the sign, reducing the maximum positive value. The exact representation of negative values can vary, but on most machines it will be 2's complement, so the ranges will be signed char: [-2^7, 2^7), etc.
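Rather than memorizing the ranges, you can query them on the current machine (a small sketch using the standard library):

#include <iostream>
#include <limits>

int main() {
    // Actual ranges on this implementation, whatever the representation.
    std::cout << int(std::numeric_limits<signed char>::min()) << ' '
              << int(std::numeric_limits<signed char>::max()) << '\n'  // -128 127
              << std::numeric_limits<unsigned short>::max() << '\n';   // 65535
}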
If you're not familiar with base two, I'd suggest you learn it very quickly; it's fundamental to how all modern machines store numbers. You should find out about 2's complement as well: the usual human representation is called sign plus magnitude, and is very rare in computers.