Add two integers without using + or -. This is my solution:
class Solution {
public:
    int getSum(int a, int b) {
        int temp = a & b;
        a = a ^ b;
        while (temp > 0) {
            b = temp << 1;
            temp = a & b;
            a = a ^ b;
        }
        return a;
    }
};
But it doesn't work for cases such as a = -12, b = -8. Comparing it side by side with another person's working solution:
class Solution {
public:
    int getSum(int a, int b) {
        int sum = a;
        while (b != 0)
        {
            sum = a ^ b;      // calculate sum of a and b without the carry
            b = (a & b) << 1; // calculate the carry
            a = sum;          // add sum (without carry) and carry
        }
        return sum;
    }
};
which is basically the same. Why doesn't my solution work?
Strictly speaking, both your solution and the one you are comparing it with are incorrect, unless you make specific assumptions about the representation of signed integral types. The practical difference is your loop condition: temp > 0 is false as soon as the carry acquires its sign bit (which happens here, since -12 & -8 is negative), so your loop exits before the carry has been fully propagated; the working version tests b != 0 instead.
The explanation is written in the C standard itself. For example, from the 2011 ISO C standard (ISO/IEC 9899:2011), Section 6.5, paragraph 4:
Some operators (the unary operator ~ , and the binary operators <<, >>, &, ^, and |, collectively described as bitwise operators) shall have operands that have integral type. These operators return values that depend on the internal representations of integers, and thus have implementation-defined and undefined aspects for signed types.
These concerns hit home with expressions like a & b if either a or b is negative, and your example has both negative. a << 1 raises a similar concern if a is negative.
To eliminate the problem, work with unsigned values, for which the bitwise operators have well-defined behaviour. If you need to deal with negative values, simply keep track of the sign in another way (e.g. in another variable of type bool).
In practice, bitwise operations work as expected for signed types with a twos-complement representation. The problem with relying on that, however, is that an implementation is not required to use such a representation.
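For illustration, here is a minimal sketch of the same carry loop rewritten over unsigned values (assuming 32-bit int; the final cast back to int relies on the usual two's-complement conversion, which is implementation-defined before C++20, but the bitwise steps themselves are well defined):

#include <cstdint>

int getSum(int a, int b) {
    std::uint32_t ua = static_cast<std::uint32_t>(a);
    std::uint32_t ub = static_cast<std::uint32_t>(b);
    while (ub != 0) {
        std::uint32_t carry = (ua & ub) << 1; // carry bits, shifted into position
        ua ^= ub;                             // sum without the carry
        ub = carry;                           // feed the carry back in
    }
    return static_cast<int>(ua);
}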
Related
There are several posts here about floating point numbers and their nature. It is clear that comparing floats and doubles must always be done cautiously. Asking for equality has also been discussed and the recommendation is clearly to stay away from it.
But what if there is a direct assignment:
double a = 5.4;
double b = a;
Assuming a is any non-NaN value, can a == b ever be false?
It seems that the answer is obviously no, yet I can't find any standard text defining this behaviour in a C++ environment. IEEE-754 states that two floating point numbers with equal (non-NaN) bit patterns are equal. Does that mean I can continue comparing my doubles this way without having to worry about maintainability? Do I have to worry about other compilers / operating systems and their implementations in this regard? Or maybe a compiler that optimizes some bits away and ruins their equality?
I wrote a little program that generates and compares non-NaN random doubles forever - until it finds a case where a == b yields false. Can I compile/run this code anywhere and anytime in the future without having to expect a halt? (ignoring endianness and assuming sign, exponent and mantissa bit sizes / positions stay the same).
#include <iostream>
#include <random>
#include <cstdint>
#include <cstring>

struct double_content {
    std::uint64_t mantissa : 52;
    std::uint64_t exponent : 11;
    std::uint64_t sign : 1;
};
static_assert(sizeof(double) == sizeof(double_content), "must be equal");

void set_double(double& n, std::uint64_t sign, std::uint64_t exponent, std::uint64_t mantissa) {
    double_content convert;
    std::memcpy(&convert, &n, sizeof(double));
    convert.sign = sign;
    convert.exponent = exponent;
    convert.mantissa = mantissa;
    std::memcpy(&n, &convert, sizeof(double_content));
}

void print_double(double& n) {
    double_content convert;
    std::memcpy(&convert, &n, sizeof(double));
    std::cout << "sign: " << convert.sign << ", exponent: " << convert.exponent
              << ", mantissa: " << convert.mantissa << " --- " << n << '\n';
}

int main() {
    std::random_device rd;
    std::mt19937_64 engine(rd());
    std::uniform_int_distribution<std::uint64_t> mantissa_distribution(0ull, (1ull << 52) - 1);
    std::uniform_int_distribution<std::uint64_t> exponent_distribution(0ull, (1ull << 11) - 1);
    std::uniform_int_distribution<std::uint64_t> sign_distribution(0ull, 1ull);

    double a = 0.0;
    double b = 0.0;
    bool found = false;
    while (!found) {
        auto sign = sign_distribution(engine);
        auto exponent = exponent_distribution(engine);
        auto mantissa = mantissa_distribution(engine);
        // re-assign exponent for NaN cases
        if (mantissa) {
            while (exponent == (1ull << 11) - 1) {
                exponent = exponent_distribution(engine);
            }
        }
        // force -0.0 to be 0.0
        if (mantissa == 0u && exponent == 0u) {
            sign = 0u;
        }
        set_double(a, sign, exponent, mantissa);
        b = a;
        // here could be more (unmodifying) code to delay the next comparison
        if (b != a) { // not equal!
            print_double(a);
            print_double(b);
            found = true;
        }
    }
}
using Visual Studio Community 2017 Version 15.9.5
The C++ standard clearly specifies in [basic.types]#3:
For any trivially copyable type T, if two pointers to T point to distinct T objects obj1 and obj2, where neither obj1 nor obj2 is a potentially-overlapping subobject, if the underlying bytes ([intro.memory]) making up obj1 are copied into obj2, obj2 shall subsequently hold the same value as obj1.
It gives this example:
T* t1p;
T* t2p;
// provided that t2p points to an initialized object ...
std::memcpy(t1p, t2p, sizeof(T));
// at this point, every subobject of trivially copyable type in *t1p contains
// the same value as the corresponding subobject in *t2p
The remaining question is what a value is. We find in [basic.fundamental]#12 (emphasis mine):
There are three floating-point types: float, double, and long double.
The type double provides at least as much precision as float, and the type long double provides at least as much precision as double.
The set of values of the type float is a subset of the set of values of the type double; the set of values of the type double is a subset of the set of values of the type long double.
The value representation of floating-point types is implementation-defined.
Since the C++ standard has no further requirements on how floating point values are represented, this is all you will find as guarantee from the standard, as assignment is only required to preserve values ([expr.ass]#2):
In simple assignment (=), the object referred to by the left operand is modified by replacing its value with the result of the right operand.
As you correctly observed, IEEE-754 requires that non-NaN, non-zero floats compare equal if and only if they have the same bit pattern. So if your compiler uses IEEE-754-compliant floats, you should find that assignment of non-NaN, non-zero floating point numbers preserves bit patterns.
And indeed, your code
double a = 5.4;
double b = a;
should never allow (a == b) to return false. But as soon as you replace 5.4 with a more complicated expression, most of this nicety vanishes. It's not the exact subject of the article, but https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/ mentions several possible ways in which innocent looking code can yield different results (which breaks "identical to the bit pattern" assertions). In particular, you might be comparing an 80 bit intermediate result with a 64 bit rounded result, possibly yielding inequality.
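As a quick illustration, a minimal check (assuming 64-bit doubles with no padding bits) that compares the object representations directly:

#include <cassert>
#include <cstring>

int main() {
    double a = 5.4;
    double b = a;
    assert(std::memcmp(&a, &b, sizeof a) == 0); // same bit pattern after assignment
    assert(a == b);                             // hence they compare equal
}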
There are some complications here. First, note that the title asks a different question than the question. The title asks:
is assigning two doubles guaranteed to yield the same bitset patterns?
while the question asks:
can a == b ever be false?
The first of these asks whether different bits might occur from an assignment (which could be due to either the assignment not recording the same value as its right operand or due to the assignment using a different bit pattern that represents the same value), while the second asks whether, whatever bits are written by an assignment, the stored value must compare equal to the operand.
In full generality, the answer to the first question is no. Using IEEE-754 binary floating-point formats, there is a one-to-one map between non-zero numeric values and their encodings in bit patterns. However, this admits several cases where an assignment could produce a different bit pattern:
The right operand is the IEEE-754 −0 entity, but +0 is stored. This is not a proper IEEE-754 operation, but C++ is not required to conform to IEEE 754. Both −0 and +0 represent mathematical zero and would satisfy C++ requirements for assignment, so a C++ implementation could do this.
IEEE-754 decimal formats have one-to-many maps between numeric values and their encodings. By way of illustration, three hundred could be represented with bits whose direct meaning is 3·10^2 or bits whose direct meaning is 300·10^0. Again, since these represent the same mathematical value, it would be permissible under the C++ standard to store one in the left operand of an assignment when the right operand is the other.
IEEE-754 includes many non-numeric entities called NaNs (for Not a Number), and a C++ implementation might store a NaN different from the right operand. This could include either replacing any NaN with a "canonical" NaN for the implementation or, upon assignment of a signaling NaN, indicating the signal in some way and then converting the signaling NaN to a quiet NaN and storing that.
Non-IEEE-754 formats may have similar issues.
Regarding the latter question, can a == b be false after a = b, where both a and b have type double, the answer is no. The C++ standard does require that an assignment replace the value of the left operand with the value of the right operand. So, after a = b, a must have the value of b, and therefore they are equal.
Note that the C++ standard does not impose any restrictions on the accuracy of floating-point operations (although I see this stated only in non-normative notes). So, theoretically, one might interpret assignment or comparison of floating-point values to be floating-point operations and say that they do not need to be accurate, so the assignment could change the value or the comparison could return an inaccurate result. I do not believe this is a reasonable interpretation of the standard; the lack of restrictions on floating-point accuracy is intended to allow latitude in expression evaluation and library routines, not simple assignment or comparison.
One should note the above applies specifically to a double object that is assigned from a simple double operand. This should not lull readers into complacency. Several similar but different situations can result in failure of what might seem intuitive mathematically, such as:
After float x = 3.4;, the expression x == 3.4 will generally evaluate as false, since 3.4 is a double and has to be converted to a float for the assignment. That conversion reduces precision and alters the value.
After double x = 3.4 + 1.2;, the expression x == 3.4 + 1.2 is permitted by the C++ standard to evaluate to false. This is because the standard permits floating-point expressions to be evaluated with more precision than the nominal type requires. Thus, 3.4 + 1.2 might be evaluated with the precision of long double. When the result is assigned to x, the standard requires that the excess precision be “discarded,” so the value is converted to a double. As with the float example above, this conversion may change the value. Then the comparison x == 3.4 + 1.2 may compare a double value in x to what is essentially a long double value produced by 3.4 + 1.2.
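A minimal demonstration of the first pitfall (the printed results are what IEEE-754 implementations typically produce, not something the standard guarantees):

#include <iostream>

int main() {
    float x = 3.4;                    // 3.4 is a double; the conversion to float loses precision
    std::cout << (x == 3.4) << '\n';  // typically 0: x is widened back to double, values differ
    std::cout << (x == 3.4f) << '\n'; // 1: both operands are float with the same value
}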
For example:

short a = 10;
int b = a & 0xffff;

Similarly, if I want to convert from int to short, how do I do it using bitwise operators? I don't want to use the usual cast with (short).
If you want sign extension:
int b = a;
If you don't (i.e. negative values of a will yield (weird) positive values of b):

// note that the standard conversion of short to int happens before &
// (std::numeric_limits requires <limits>)
int b = a & std::numeric_limits<unsigned short>::max();
Doing bit operations on signed types may not be a good idea and can lead to surprising results: Are the results of bitwise operations on signed integers defined? Why do you need bit operations?
short int2short(int x) {
    if (x > std::numeric_limits<short>::max()) {
        // what to do now? throw an exception, return a default value ...
    }
    else if (x < std::numeric_limits<short>::min()) {
        // what to do now? throw an exception, return a default value ...
    }
    else {
        return static_cast<short>(x);
    }
}
This could be generalized into a template method and also have policies for the error cases.
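For instance, a minimal sketch of such a generalization with a clamping policy (the name narrow_clamp is illustrative; a throwing policy would look similar):

#include <limits>

template <typename To, typename From>
To narrow_clamp(From x) {
    if (x > static_cast<From>(std::numeric_limits<To>::max()))
        return std::numeric_limits<To>::max(); // clamp instead of overflowing
    if (x < static_cast<From>(std::numeric_limits<To>::min()))
        return std::numeric_limits<To>::min();
    return static_cast<To>(x);                 // value is in range: safe narrowing
}

short s = narrow_clamp<short>(100000); // yields 32767 with a 16-bit short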
Why not use (short)? That's the easiest way and gets you what you want.
Unless it's an interview problem; then you need to assume how many bits a short and an int contain. If the number is positive, just use bitwise AND. If the number is negative, flip it to a positive number, do the bitwise AND, and after the AND set the highest bit back to 1. (A sketch of the bit-level route follows.)
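On a two's-complement machine the goal of those steps is achieved more directly by keeping the low 16 bits and sign-extending them; a minimal sketch under stated assumptions (16-bit short, 32-bit int, two's complement):

short int_to_short_bits(int x) {
    unsigned v = static_cast<unsigned>(x) & 0xFFFFu; // keep the low 16 bits
    if (v & 0x8000u)      // bit 15 set: the value is negative as a short
        v |= 0xFFFF0000u; // manual sign extension of the upper bits
    // converting an out-of-range unsigned to short is implementation-defined
    // before C++20, but yields the expected value on two's complement
    return static_cast<short>(v);
}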
Suppose I have these two types:
typedef unsigned long long uint64;
typedef signed long long sint64;
And I have these variables:
uint64 a = ...;
uint64 b = ...;
sint64 c;
I want to subtract b from a and assign the result to c. Clearly, if the absolute value of the difference is greater than 2^63 then it will wrap (or be undefined), which is OK. But for cases where the absolute difference is less than 2^63 I want the result to be correct.
Of the following three ways:
c = a - b; // sign conversion warning ignored
c = sint64(a - b);
c = sint64(a) - sint64(b);
Which of them are guaranteed to work by the standard? (And why/how?)
None of the three work. The first fails if the difference is negative (no matter the absolute value), the second is the same as the first, and the third fails if either operand is too large.
It's impossible to implement without a branch.
c = b < a ? a - b : -static_cast<sint64>(b - a);
Fundamentally, unsigned types use modulo arithmetic without any kind of sign bit. They don't know they wrapped around, and the language spec doesn't identify wraparound with negative numbers. Also, assigning a value outside the range of a signed integral variable results in an implementation-defined, potentially nonsense result (integral overflow).
Consider a machine with no hardware to convert between native negative integers and two's complement. It can perform two's complement subtraction using bitwise negation and native two's complement addition, though. (Bizarre, maybe, but that is what C and C++ currently require.) The language leaves it up to the programmer, then, to convert the negative values. The only way to do that is to negate a positive value, which requires that the computed difference be positive. So…
The best solution is to avoid any attempt to represent a negative number as a large positive number in the first place.
EDIT: I forgot the cast before, which would have produced a large unsigned value, equivalent to the other solutions!
Potatoswatter's answer is probably the most pragmatic solution, but "impossible to implement without a branch" is like a red rag to a bull for me. If your hypothetical system implements undefined overflow/cast operations like that, my hypothetical system implements branches by killing puppies.
So I'm not completely familiar with what the standard(s) would say, but how about this:
sint64 c, d, r;
c = a >> 1;      // half of a (low bit dropped)
d = b >> 1;      // half of b (low bit dropped)
r = (c - d) * 2; // difference of the halves, scaled back up
c = a & 1;       // low bit of a
d = b & 1;       // low bit of b
r += c - d;      // correct for the dropped low bits
I've written it in a fairly verbose fashion so the individual operations are clear, but have left some implicit casts. Is anything there undefined?
Steve Jessop rightly points out that this does fail in the case where the difference is exactly 2^63-1, as the multiply overflows before the 1 is subtracted.
So here's an even uglier version which should cover all underflow/overflow conditions:
sint64 c, d, r, ov;
c = a >> 1;           // half of a
d = b >> 1;           // half of b
ov = a >> 63;         // top bit of a, used to pre-compensate the overflow case
r = (c - d - ov) * 2; // difference of the halves, biased to avoid overflow
c = a & 1;            // low bit of a
d = b & 1;            // low bit of b
r += ov + ov + c - d; // restore the bias and correct the low bits
if the absolute value of the difference is greater than 2^63 then it will wrap (or be undefined) which is ok. But for cases where the absolute difference is less than 2^63 I want the result to be correct.
Then all three of the notations you suggest work, assuming a conventional architecture. The notable difference is that the third one, sint64(a) - sint64(b), invokes undefined behavior when the difference is not representable, whereas the first two are guaranteed to wrap around: unsigned arithmetic overflow is guaranteed to wrap around, and conversion from unsigned to signed is implementation-defined, whereas signed arithmetic overflow is undefined.
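For instance, a quick demonstration of the first notation (assumes a conventional two's-complement implementation; the out-of-range unsigned-to-signed conversion is implementation-defined before C++20):

#include <cstdint>
#include <iostream>

int main() {
    std::uint64_t a = 5, b = 8;
    std::int64_t c = a - b; // a - b wraps to 2^64 - 3; converting to signed yields -3
    std::cout << c << '\n'; // prints -3 on conventional implementations
}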
I want to convert the storage of a floating-point number to an integer (the 'number' value is not required to be equal).
If a floating-point number (e.g. 10) is represented in binary (at least on my implementation) as:
01000001001000000000000000000000
Then it should (also on my implementation) represent the integer value 1092616192.
I am currently doing this through the following:
union UNFI {
    float a;
    int b;
};

UNFI n;
n.a = 10;
int bits = n.b; // foo!
for (int i = 31; i >= 0; --i) {
    bool bit = ((bits >> i) & 1);
    std::cout << bit;
}
which achieves what I would like, but is, by its nature, undefined behavior. Therefore I would like to know what the 'correct' way of achieving this outcome is.
From the C99 standard:
With one exception, if the value of a member of a union object is used when the most recent store to the object was to a different member, the behavior is implementation-defined.
Is that not undefined behavior?
The correct way is to call memcpy. Most compilers will optimize this to an efficient single-word memory access just like the union, but on platforms with extra alignment requirements, they will do the right thing. And it won't ever trigger a signalling NaN.
float a = 10;
char bits[sizeof a];
memcpy(bits, &a, sizeof a); // memcpy requires <cstring>
or
int b;
static_assert(sizeof b == sizeof a);
memcpy(&b, &a, sizeof a);
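If C++20 is available, std::bit_cast from <bit> expresses the same intent directly; a minimal sketch, assuming a 32-bit float (the matching sizes are enforced at compile time):

#include <bit>
#include <cstdint>

float a = 10;
std::uint32_t b = std::bit_cast<std::uint32_t>(a); // the object representation of a, as an integer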
What's undefined? The layout of an IEEE float is very well defined; the only extra complications are the byte ordering on your platform and how big an int is.
What you're after is simple type punning. What you are doing seems like the easiest thing. Another alternative, I suppose, would be pointer conversion:

int const bits = *(reinterpret_cast<int *>(&n));
Here are the goals I'm trying to achieve:
I need to pack 32 bit IEEE floats into 30 bits.
I want to do this by decreasing the size of mantissa by 2 bits.
The operation itself should be as fast as possible.
I'm aware that some precision will be lost, and this is acceptable.
It would be an advantage if this operation did not ruin special cases like SNaN, QNaN, infinities, etc. But I'm ready to sacrifice this for speed.
I guess this question consists of two parts:
1) Can I simply clear the least significant bits of the mantissa? I've tried this, and so far it works, but maybe I'm asking for trouble... Something like:
float f;
int packed = (*(int*)&f) & ~3;
// later
f = *(float*)&packed;
2) If there are cases where 1) will fail, then what would be the fastest way to achieve this?
Thanks in advance
You actually violate the strict aliasing rules (section 3.10 of the C++ standard) with these reinterpret casts. This will probably blow up in your face when you turn on compiler optimizations.
C++ standard, section 3.10 paragraph 15 says:
If a program attempts to access the stored value of an object through an lvalue of other than one of the following types the behavior is undefined:
the dynamic type of the object,
a cv-qualified version of the dynamic type of the object,
a type similar to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union),
a type that is a (possibly cv-qualified) base class type of the dynamic type of the object,
a char or unsigned char type.
Specifically, 3.10/15 doesn't allow us to access a float object via an lvalue of type unsigned int. I actually got bitten myself by this. The program I wrote stopped working after turning on optimizations. Apparently, GCC didn't expect an lvalue of type float to alias an lvalue of type int which is a fair assumption by 3.10/15. The instructions got shuffled around by the optimizer under the as-if rule exploiting 3.10/15 and it stopped working.
Under the following assumptions
float really corresponds to a 32-bit IEEE float,
sizeof(float)==sizeof(int)
unsigned int has no padding bits or trap representations
you should be able to do it like this:
/// returns a 30 bit number
unsigned int pack_float(float x) {
    unsigned int r;
    std::memcpy(&r, &x, sizeof r); // requires <cstring>
    return r >> 2;
}

float unpack_float(unsigned int x) {
    x <<= 2;
    float r;
    std::memcpy(&r, &x, sizeof r);
    return r;
}
This doesn't suffer from the "3.10-violation" and is typically very fast. At least GCC treats memcpy as an intrinsic function. In case you don't need the functions to work with NaNs, infinities or numbers with extremely high magnitude you can even improve accuracy by replacing "r >> 2" with "(r+1) >> 2":
unsigned int pack_float(float x) {
    unsigned int r;
    std::memcpy(&r, &x, sizeof r);
    return (r + 1) >> 2;
}
This works even if it changes the exponent due to a mantissa overflow because the IEEE-754 coding maps consecutive floating point values to consecutive integers (ignoring +/- zero). This mapping actually approximates a logarithm quite well.
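For example, a quick round trip with the functions above (the precision loss shown is typical for IEEE-754 floats):

#include <cstdio>

int main() {
    float x = 3.14159265f;
    unsigned int p = pack_float(x); // 30-bit value; the low 2 mantissa bits are rounded away
    float y = unpack_float(p);
    std::printf("%.9g -> %.9g\n", x, y); // y differs from x only in the last couple of ULPs
}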
Blindly dropping the 2 LSBs of the float may fail for a small number of unusual NaN encodings.
A NaN is encoded as exponent = 255, mantissa != 0, but IEEE-754 doesn't say anything about which mantissa values should be used. If the mantissa value is <= 3, you could turn a NaN into an infinity!
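A sketch of a guard against that edge case, building on the pack_float above (same 32-bit IEEE float assumption): force one mantissa bit to survive the shift when the input is a NaN.

unsigned int pack_float_nan_safe(float x) {
    unsigned int r;
    std::memcpy(&r, &x, sizeof r);
    bool is_nan = (r & 0x7F800000u) == 0x7F800000u // exponent all ones
               && (r & 0x007FFFFFu) != 0;          // nonzero mantissa
    if (is_nan)
        r |= 0x4u; // set a mantissa bit that survives the >> 2
    return r >> 2;
}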
You should encapsulate it in a struct so that you don't accidentally mix the usage of the tagged float with a regular unsigned int:
#include <iostream>
using namespace std;

struct TypedFloat {
private:
    union {
        unsigned int raw : 32;
        struct {
            unsigned int num : 30;
            unsigned int type : 2;
        };
    };
public:
    TypedFloat(unsigned int type = 0) : num(0), type(type) {}

    operator float() const {
        unsigned int tmp = num << 2;
        return reinterpret_cast<float&>(tmp);
    }
    void operator=(float newnum) {
        num = reinterpret_cast<int&>(newnum) >> 2;
    }
    unsigned int getType() const {
        return type;
    }
    void setType(unsigned int type) {
        this->type = type;
    }
};

int main() {
    const unsigned int TYPE_A = 1;
    TypedFloat a(TYPE_A);
    a = 3.4;
    cout << a + 5.4 << endl;
    float b = a;
    cout << a << endl;
    cout << b << endl;
    cout << a.getType() << endl;
    return 0;
}
I can't guarantee its portability though.
How much precision do you need? If a 16-bit float is enough (sufficient for some types of graphics), then ILM's 16-bit float ("half"), part of OpenEXR, is great, obeys all kinds of rules (http://www.openexr.com/), and you'll have plenty of space left over after you pack it into a struct.
On the other hand, if you know the approximate range of values your numbers will take, you should consider fixed point. It's more useful than most people realize.
I can't select any of the answers as the definitive one, because most of them contain valid information, but not quite what I was looking for. So I'll just summarize my conclusions.
The method for conversion I posted in part 1) of my question is clearly wrong by the C++ standard, so other methods should be used to extract a float's bits.
And most importantly: as far as I understand from the responses and other sources about IEEE-754 floats, it's OK to drop the least significant bits of the mantissa. It will mostly affect only precision, with one exception: sNaN. Since an sNaN is represented by the exponent set to 255 and a mantissa != 0, there can be a situation where the mantissa is <= 3, and dropping the last two bits would convert the sNaN to +/-Infinity. But since sNaNs are not generated during floating-point operations on the CPU, it's safe in a controlled environment.