XOR operator not evaluating correctly in C++

I'm building a BigInt class from scratch in C++, but something is driving me nuts: my XOR isn't working properly, and I have no idea why. I was hoping someone could enlighten me. Below is a minimal working example:
class BigInt
{
private:
    bool pos;
    int size; // Number of binary digits to use
    short compare(BigInt input);
public:
    BigInt(long long input, int inSize) { pos = true; size = inSize; }
};
short BigInt::compare(BigInt input)
{
    // Partial compare function for minimal working example
    // Return:
    //  1: (*this) > input
    //  0: (*this) == input
    // -1: (*this) < input
    bool c = (*this).size > input.size, d = (*this).pos ^ input.pos;
    bool thispos = (*this).pos, inpos = input.pos;
    bool xorpos = (thispos != inpos);
    bool x = true, y = true;
    bool z = x ^ y;
    if ((*this).size > input.size || (*this).pos != input.pos)
        return 1 - ((*this).pos ? 0 : 2);
    else if ((*this).size < input.size)
        return -1 + ((*this).pos ? 0 : 2);
    return 0;
}
I have a breakpoint on the first if statement. Below is what I have on my watch list.
thispos  true   bool
inpos    true   bool
xorpos   true   bool
x        true   bool
y        true   bool
z        false  bool
Anyone know what's going on? I'd rather avoid kludging my if statement; I've never had a problem with such simple use of XOR before.
As far as I can tell there should be nothing wrong, yet these values just won't evaluate the way I expect them to.
Edit: Changed code to minimal working example.

Well, even though ^ is a bitwise XOR operator, your initializations
bool thispos = (*this).pos, inpos = input.pos;
are required to convert the source values to bool type. Values of bool type are guaranteed to act as either 0 or 1 in arithmetic contexts. This means that
bool xorpos = thispos ^ inpos;
is required to initialize xorpos with false if both thispos and inpos were originally true.
If you observe different behavior, it might be a bug in your compiler. Integral-to-bool conversion might be implemented incorrectly or something like that.
Another possibility is that someone "redefined" the bool keyword by doing something like
#define bool unsigned char
This will disable the proper bool semantics in the first pair of initializations and cause the bitwise nature of ^ to affect the result.
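Here is a minimal sketch of that failure mode (the values are made up; the unsigned char pair stands in for what the macro would produce):

#include <iostream>

int main() {
    int truthy = 2;

    // Real bool: any nonzero value is normalized to exactly 1,
    // so "true ^ true" is always 0.
    bool b1 = truthy, b2 = true;
    std::cout << (b1 ^ b2) << '\n';     // prints 0

    // With "#define bool unsigned char" there is no normalization,
    // so ^ on two "true" values can still be nonzero.
    unsigned char c1 = truthy, c2 = true;
    std::cout << (c1 ^ c2) << '\n';     // prints 3, which still tests as true
    return 0;
}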

Why not simply use x != y? It is more consistent with your types as well.


When do you need to do this in C++: if (x = 0) {}

I noticed that developers sometimes do these in their projects:
while (int x = 0) { /*run code*/ }
int y = 0;
if (y = someFunction())
{ /*run code*/ }
I want to ask:
why does C++ allow this in loops like while (and maybe for as well?) and in if/else statements, and what is the usage?
when should someone do this in his project?
and when do conditions like these give the result true?
Declaration inside if/for/while allows you to reduce the scope of the variable.
So, instead of
int y = somevalue();
if (y) { // y != 0
    // ...
}
// y still usable :/
or
{ // extra block :/
    int y = somevalue();
    if (y) { // y != 0
        // ...
    }
}
// y no longer usable :)
you might directly do:
if (int y = somevalue()) { // y != 0
    // ...
}
// y no longer usable :)
with a syntax which might indeed surprise.
More controversial:
int y;
// ...
if (y = somevalue()) { // y != 0
    // ...
}
// y still usable :)
It is allowed because the assignment is an expression convertible to bool (the value of y = x is y, after x has been assigned to it).
It is more controversial because it is error-prone: it is unclear whether = or == was intended.
Some conventions use extra parentheses to express more clearly that the assignment is intended:
if ((y = somevalue())) { // really = intended, not ==
    // ...
}
when should someone do this in his project?
For existing projects, try to follow the existing convention.
For a new project, you have to make the trade-off: reduced scope versus a syntax that is not shared with other languages and might be surprising.
and when do conditions like these give the result true?
When the given expression converts to true: a non-zero integral value, a non-null pointer, and so on.
C++17 introduces another syntax that allows you to split the declaration and the condition for if and switch (for already allows that split):
if (int y = somevalue(); y != 42) { // y != 42
    // ...
}
// y no longer usable :)
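Putting it together, here is a minimal compilable sketch of the C++17 form (somevalue() is just a stand-in):

#include <iostream>

int somevalue() { return 42; }

int main() {
    // Declare y, then test a separate condition; y is scoped to the if/else.
    if (int y = somevalue(); y != 42) {
        std::cout << "y is " << y << '\n';
    } else {
        std::cout << "y is exactly 42\n";   // this branch runs with the stand-in above
    }
    // y is no longer usable here.
    return 0;
}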

Is there an easy and efficient way to assign the values of a bool[8] to a std::uint8_t?

Consider the following variables:
std::uint8_t value;
const bool bits[8] = { true, false, false, true,
                       false, false, true, false };
If I were to print out the array of bools to the console
for (int i = 0; i < 8; i++)
    std::cout << bits[i];
it would give the following output:
10010010
Simple enough and straightforward.
What I would like to do is write a constexpr function, a function template, a lambda, or a combination of them that can run either at compile time or at runtime depending on the context in which it is used, and that takes each of these boolean 0s and 1s and stores them into the variable value above. If the value is known at compile time, then I'd like this assignment to be resolved at compile time. If it isn't, then value will be initialized to 0 until it is updated and used in a runtime context.
However, there is one caveat that isn't obvious at first: when indexing through the array, the 0th index of the array is the LSB of value and the 7th index is the MSB. So the order of bits you see printed to the screen has the hex value 0x92, but the value to be stored needs to be 01001001, which has the hex value 0x49, or 73 in decimal, not 146.
These are members of a class where one is the data value representation and the array of bools is the bit representation. I have a few constructors: one sets the data value member directly, and the others set the array of bools. I need both of these representations to stay in sync with each other through the life of the class object; if one updates, the other needs to change as well. Also, the array of bools is a member of an anonymous union together with an anonymous struct of 8 individual bools, each a single bit within a bit-field. The class also has an index operator to access the individual bits as single boolean values of 0 or 1.
Here is what my class looks like:
#include <cstdint>

constexpr unsigned BIT_WIDTH(const unsigned bits = 8) { return bits; }

struct Register_8 {
    union {
        bool bits_[BIT_WIDTH()];
        struct {
            bool b0 : 1;
            bool b1 : 1;
            bool b2 : 1;
            bool b3 : 1;
            bool b4 : 1;
            bool b5 : 1;
            bool b6 : 1;
            bool b7 : 1;
        };
    };
    std::uint8_t data_;

    Register_8() : data_{ 0 } {}
    Register_8(std::uint8_t data) : data_{ data } {}
    Register_8(const bool bits[BIT_WIDTH()]) {
        for (unsigned i = 0; i < BIT_WIDTH(); i++)
            bits_[i] = bits[i];
    }
    Register_8(const bool a, const bool b, const bool c, const bool d,
               const bool e, const bool f, const bool g, const bool h) {
        b0 = a; b1 = b; b2 = c; b3 = d;
        b4 = e; b5 = f; b6 = g; b7 = h;
    }
    std::uint8_t operator[](std::uint8_t idx) const {
        // I know there is no bounds checking here, I'll add that later!
        return bits_[idx];
    }
};
So how can I make each of the values in bits_[] be the individual bits of value, where bits_[0] is the LSB of value? I would also like to do this in a way that does not generate any UB! Or does there already exist an algorithm within the STL under C++17 that will do this for me? I don't have a C++20 compiler yet... I've tried including the std::uint8_t within the union, but it doesn't work as I would like it to, and I wouldn't expect it to work either!
I walked away for a little bit and came back to what I was working on... I think the short break helped. The suggestion by user Nicol Bolas also helped by pointing out that I can do it with a constexpr function. Now I don't have to worry about templates or lambdas for this part of the code.
Here is the function I came up with that I believe assigns the bits in the appropriate order.
#include <climits>
#include <cstdint>

constexpr unsigned BIT_WIDTH(const unsigned bits = CHAR_BIT) { return bits; }

constexpr std::uint8_t byte_from_bools(const bool bits[BIT_WIDTH()]) {
    std::uint8_t ret = 0x00;
    for (unsigned i = 0; i < BIT_WIDTH(); i++) {
        // bits[0] lands in the LSB, bits[7] in the MSB
        ret |= static_cast<std::uint8_t>(bits[i]) << i;
    }
    return ret;
}
If there are any optimizations that can be done, or any bugs or code smells, please let me know...
Now, it's just a matter of extracting individual bits and assigning them to my bit-field members, and then tracking when either one changes to make sure both are kept in sync.
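For that extraction step, here is a hedged sketch of the reverse helper (bools_from_byte is a hypothetical name; it uses the same LSB-first convention as byte_from_bools):

#include <climits>
#include <cstdint>
#include <iostream>

constexpr unsigned BIT_WIDTH(const unsigned bits = CHAR_BIT) { return bits; }

// Unpack a byte into an LSB-first bool array.
constexpr void bools_from_byte(std::uint8_t value, bool (&bits)[BIT_WIDTH()]) {
    for (unsigned i = 0; i < BIT_WIDTH(); i++)
        bits[i] = ((value >> i) & 1u) != 0;
}

int main() {
    bool bits[BIT_WIDTH()] = {};
    bools_from_byte(0x49, bits);               // 0x49 == 0b01001001 == 73
    for (unsigned i = 0; i < BIT_WIDTH(); i++)
        std::cout << bits[i];                  // prints 10010010 (LSB first)
    std::cout << '\n';
    return 0;
}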

How to safely compare two unsigned integer counters?

We have two unsigned counters, and we need to compare them to check for some error conditions:
uint32_t a, b;
// a increased in some conditions
// b increased in some conditions
if (a/2 > b) {
    perror("Error happened!");
    return -1;
}
The problem is that a and b will overflow some day. If a overflowed, it's still OK. But if b overflowed, it would be a false alarm. How to make this check bulletproof?
I know that making a and b uint64_t would delay the false alarm, but it still could not completely fix this issue.
===============
Let me clarify a little bit: the counters are used to track memory allocations, and this problem was found in dmalloc/chunk.c:
#if LOG_PNT_SEEN_COUNT
    /*
     * We divide by 2 here because realloc which returns the same
     * pointer will seen_c += 2. However, it will never be more than
     * twice the iteration value. We divide by two to not overflow
     * iter_c * 2.
     */
    if (slot_p->sa_seen_c / 2 > _dmalloc_iter_c) {
        dmalloc_errno = ERROR_SLOT_CORRUPT;
        return 0;
    }
#endif
I think you misinterpreted the comment in the code:
We divide by two to not overflow iter_c * 2.
No matter where the values are coming from, it is safe to write a/2, but it is not safe to write a*2. Whatever unsigned type you are using, you can always divide a number by two, while multiplying by two may result in overflow.
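A tiny sketch of that asymmetry (the value is chosen to make the wrap visible):

#include <cstdint>
#include <iostream>

int main() {
    uint32_t a = 0x80000001u;    // 2147483649
    std::cout << a / 2 << '\n';  // 1073741824: dividing can never overflow
    std::cout << a * 2 << '\n';  // 2: the multiplication wrapped around
    return 0;
}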
If the condition were written like this:
if (slot_p->sa_seen_c > _dmalloc_iter_c * 2) {
then roughly half of the possible input values would overflow the multiplication and produce a wrong result. That being said, if you worry about the counters themselves overflowing, you could wrap them in a class:
#include <algorithm>

class check {
    unsigned a = 0;
    unsigned b = 0;
    bool odd = true;
    void normalize() {
        auto m = std::min(a, b);
        a -= m;
        b -= m;
    }
public:
    void incr_a() {
        if (odd) ++a;   // counts every other call, i.e. a/2
        odd = !odd;
        normalize();
    }
    void incr_b() {
        ++b;
        normalize();
    }
    bool error() const { return a > b; }   // renamed; a member cannot share its class's name
};
Note that to avoid overflow completely you would have to take additional measures, but if a and b are increased at more or less the same rate, this might be fine already.
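A usage sketch, assuming the wrapper above (with the member renamed to error()):

int main() {
    check c;
    for (int i = 0; i < 6; ++i)
        c.incr_a();             // six raw increments are counted as a == 3
    c.incr_b();                 // one increment of b
    return c.error() ? 1 : 0;   // fires: effectively a/2 (3) > b (1)
}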
The posted code actually doesn't seem to use counters that may wrap around.
What the comment in the code is saying is that it is safer to compare a/2 > b instead of a > 2*b, because the latter could potentially overflow while the former cannot. This is particularly true if the type of a is larger than the type of b.
Note overflows as they occur.
uint32_t a, b;
bool aof = false;
bool bof = false;

if (condition_to_increase_a()) {
    a++;
    aof = a == 0;   // a just wrapped
}
if (condition_to_increase_b()) {
    b++;
    bof = b == 0;   // b just wrapped
}
// aof*0x80000000 adds back the wrapped 2^32 after the division by 2
if (!bof && a/2 + aof*0x80000000 > b) {
    perror("Error happened!");
    return -1;
}
Each of a and b independently has 2^32 + 1 different states reflecting its value and conditional increment, so more than a uint32_t of information is needed. You could use uint64_t, variant code paths, or an auxiliary variable like the bool here.
Normalize the values as soon as they wrap by forcing them both to wrap at the same time. Maintain the difference between the two when they wrap.
Try something like this:
uint32_t a, b;
// a increased in some conditions
// b increased in some conditions

if (a == UINT32_MAX || b == UINT32_MAX) {   // one of them is about to wrap
    if (a > b) {
        a = a - b;
        b = 0;
    } else {
        b = b - a;
        a = 0;
    }
}
if (a/2 > b) {
    perror("Error happened!");
    return -1;
}
If even using 64 bits is not enough, then you need to code your own "increase" method instead of overloading the ++ operator (which may mess up your code if you are not careful).
The method would just reset the variable to 0 or some other meaningful value.
If your intention is to ensure that action x happens no more than twice as often as action y, I would suggest doing something like:
uint32_t x_count = 0;
uint32_t scaled_y_count = 0;

void action_x(void)
{
    // the difference wraps below zero when x gets too far ahead
    if ((uint32_t)(scaled_y_count - x_count) > 0xFFFF0000u)
        fault();
    x_count++;
}
void action_y(void)
{
    // cap the credit so action_y operations can't be stored up forever
    if ((uint32_t)(scaled_y_count - x_count) < 0xFFFF0000u)
        scaled_y_count += 2;
}
In many cases, it may be desirable to reduce the constants in the comparison used when incrementing scaled_y_count so as to limit how many action_y operations can be "stored up". The above, however, should work precisely in cases where the operations remain anywhere close to balanced in a 2:1 ratio, even if the number of operations exceeds the range of uint32_t.
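A quick harness to watch the guard trip (a sketch; fault() here just reports and exits):

#include <cstdint>
#include <cstdio>
#include <cstdlib>

static uint32_t x_count = 0;
static uint32_t scaled_y_count = 0;

static void fault(void) {
    std::puts("ratio exceeded: x ran more than twice as often as y");
    std::exit(1);
}

static void action_x(void) {
    if ((uint32_t)(scaled_y_count - x_count) > 0xFFFF0000u)
        fault();
    x_count++;
}

static void action_y(void) {
    if ((uint32_t)(scaled_y_count - x_count) < 0xFFFF0000u)
        scaled_y_count += 2;
}

int main() {
    action_y();      // one y grants credit for two x's
    action_x();      // ok
    action_x();      // ok
    action_x();      // ok: the difference just reaches zero
    action_x();      // the difference wraps below zero, fault fires
    return 0;
}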

Are enums the canonical way to implement bit flags?

Currently I'm using enums to represent a state in a little game experiment. I declare them like so:
namespace State {
    enum Value {
        MoveUp    = 1 << 0, // 00001 == 1
        MoveDown  = 1 << 1, // 00010 == 2
        MoveLeft  = 1 << 2, // 00100 == 4
        MoveRight = 1 << 3, // 01000 == 8
        Still     = 1 << 4, // 10000 == 16
        Jump      = 1 << 5
    };
}
So that I can use them this way:
State::Value state = State::Value(0);
state = State::Value(state | State::MoveUp);
if (mState & State::MoveUp)
movement.y -= mPlayerSpeed;
But I'm wondering if this is the right way to implement bit flags. Isn't there a special container for bit flags? I heard about std::bitset, is it what I should use? Do you know something more efficient?
Am I doing it right?
I forgot to point out I was overloading the basic operators of my enum:
inline State::Value operator|(State::Value a, State::Value b)
{ return static_cast<State::Value>(static_cast<int>(a) | static_cast<int>(b)); }
inline State::Value operator&(State::Value a, State::Value b)
{ return static_cast<State::Value>(static_cast<int>(a) & static_cast<int>(b)); }
inline State::Value& operator|=(State::Value& a, State::Value b)
{ return (State::Value&)((int&)a |= (int)b); }
I had to use a C-style cast for the |=, it didn't work with a static_cast - any idea why?
The STL contains std::bitset, which you can use for precisely such a case.
Here is just enough code to illustrate the concept:
#include <iostream>
#include <bitset>
#include <string>

class State {
public:
    // Observer
    std::string ToString() const { return state_.to_string(); }
    // Getters
    bool MoveUp()    const { return state_[0]; }
    bool MoveDown()  const { return state_[1]; }
    bool MoveLeft()  const { return state_[2]; }
    bool MoveRight() const { return state_[3]; }
    bool Still()     const { return state_[4]; }
    bool Jump()      const { return state_[5]; }
    // Setters
    void MoveUp(bool on)    { state_[0] = on; }
    void MoveDown(bool on)  { state_[1] = on; }
    void MoveLeft(bool on)  { state_[2] = on; }
    void MoveRight(bool on) { state_[3] = on; }
    void Still(bool on)     { state_[4] = on; }
    void Jump(bool on)      { state_[5] = on; }
private:
    std::bitset<6> state_;
};

int main() {
    State s;
    auto report = [&s](std::string const& msg) {
        std::cout << msg << " " << s.ToString() << std::endl;
    };
    report("initial value");
    s.MoveUp(true);
    report("move up set");
    s.MoveDown(true);
    report("move down set");
    s.MoveLeft(true);
    report("move left set");
    s.MoveRight(true);
    report("move right set");
    s.Still(true);
    report("still set");
    s.Jump(true);
    report("jump set");
    return 0;
}
Here's it working: http://ideone.com/XLsj4f
The interesting thing about this is that you get std::hash support for free, which is typically one of the things you would need when using state inside various data structures.
EDIT:
There is one limitation to std::bitset and that is the fact that you need to know the maximum number of bits in your bitset at compile time. However, that is the same case with enums anyway.
However, if you don't know the size of your bitset at compile time, you can use boost::dynamic_bitset, which according to this paper (see page 5) is actually really fast. Finally, according to Herb Sutter, std::bitset was designed to be used in cases where you would normally want to use std::vector<bool>.
That said, there really is no substitute for real world tests. So if you really want to know, profile. That will give you performance numbers for a context that you care about.
I should also mention that std::bitset has an advantage that enum does not - there is no upper limit on the number of bits you can use. So std::bitset<1000> is perfectly valid.
I believe that your approach is right (except for a couple of things):
1. You can explicitly specify the underlying type to save memory;
2. You can avoid casting unspecified values (like 0) into the enum by adding a None enumerator.
namespace State {
    enum Value : char {
        None      = 0,
        MoveUp    = 1 << 0, // 00001 == 1
        MoveDown  = 1 << 1, // 00010 == 2
        MoveLeft  = 1 << 2, // 00100 == 4
        MoveRight = 1 << 3, // 01000 == 8
        Still     = 1 << 4, // 10000 == 16
        Jump      = 1 << 5
    };
}
and:
State::Value state = State::Value::None;
state = State::Value(state | State::MoveUp);
if (mState & State::MoveUp) {
movement.y -= mPlayerSpeed;
}
about overloading:
inline State::Value& operator|=(State::Value& a, State::Value b) {
    return a = static_cast<State::Value>(static_cast<int>(a) | static_cast<int>(b));
}
and since you use C++11, you should use constexpr everywhere possible:
inline constexpr State::Value operator|(State::Value a, State::Value b) {
    return static_cast<State::Value>(static_cast<int>(a) | static_cast<int>(b));
}
inline constexpr State::Value operator&(State::Value a, State::Value b) {
    return static_cast<State::Value>(static_cast<int>(a) & static_cast<int>(b));
}
Note the cast through int in all three: writing a | b inside operator| would call the operator itself recursively, and a C++11 constexpr function cannot assign to its parameter.
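As a quick sanity check, here is a self-contained sketch using those operators (flag values as in the enum above):

#include <cstdio>

namespace State {
    enum Value : char {
        None = 0, MoveUp = 1 << 0, MoveDown = 1 << 1,
        MoveLeft = 1 << 2, MoveRight = 1 << 3,
        Still = 1 << 4, Jump = 1 << 5
    };
}

inline constexpr State::Value operator|(State::Value a, State::Value b) {
    return static_cast<State::Value>(static_cast<int>(a) | static_cast<int>(b));
}
inline constexpr State::Value operator&(State::Value a, State::Value b) {
    return static_cast<State::Value>(static_cast<int>(a) & static_cast<int>(b));
}
inline State::Value& operator|=(State::Value& a, State::Value b) {
    return a = a | b;   // reuses operator| above
}

// constexpr lets flag combinations be checked at compile time.
static_assert((State::MoveUp | State::Jump) == 33, "1 | 32 == 33");

int main() {
    State::Value state = State::None;
    state |= State::MoveUp;
    if (state & State::MoveUp)
        std::puts("MoveUp flag is set");
    return 0;
}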
To be honest I don't think there is a consistent pattern for them.
Just look at std::ios_base::openmode and std::regex_constants::syntax_option_type as two completely different ways of structuring it in the standard library -- one using a struct, the other using an entire namespace. Both are enums all right, but structured differently.
Check your standard library implementation to see the details of how the above two are implemented.

Getting "Debug Assertion Failed!" for set comparator

I know a similar issue has been answered at this link: Help me fix this C++ std::set comparator.
But unfortunately I am facing exactly the same issue, and I am unable to understand the reason behind it, so I need some help to resolve it.
I am using VS2010, and my release binary runs fine without any issue, but the debug binary fails with the "Debug Assertion Failed!" dialog.
My comparator looks like this:
struct PathComp {
    bool operator() (const wchar_t* path1, const wchar_t* path2) const
    {
        int c = wcscmp(path1, path2);
        if (c < 0 || c > 0) {
            return true;
        }
        return false;
    }
};
My set is declared like this:
set<wchar_t*,PathComp> pathSet;
Could somebody tell me why my debug binary is failing at this assertion? Is it because I am using the wcscmp() function to compare the wide character strings stored in my set?
Thanks in advance!!!
std::set requires a valid comparator that behaves like operator< or std::less.
The std::set code detected that your "operator<" is not valid and, as a help to you, triggered the assert you saw.
And indeed: your comparator looks like an operator!=, not like an operator<.
One of the rules an operator< must follow is that a<b and b<a cannot both be true. In your implementation, both are true whenever the two strings differ.
Correct your code to:
bool operator() (const wchar_t* path1, const wchar_t* path2) const
{
    int c = wcscmp(path1, path2);
    return (c < 0);
}
and you should be fine.
The problem is that your comparator does not induce a strict weak ordering. It should only return true for paths that are "less", not for all paths that differ. Change it to:
struct PathComp {
    bool operator() (const wchar_t* path1, const wchar_t* path2) const
    {
        int c = wcscmp(path1, path2);
        if (c < 0) { // <- this is different
            return true;
        }
        return false;
    }
};
Alternatively, using only c > 0 will also work - but the set will have a reverse order.
The algorithm needs to know the difference between smaller and greater to work; "unequal" alone does not give enough information.
Without smaller-than/greater-than information, the set cannot possibly maintain an order - but that is what a set is all about.
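For illustration, a minimal sketch of the corrected comparator in use (the sample paths are made up):

#include <cwchar>
#include <iostream>
#include <set>

struct PathComp {
    bool operator()(const wchar_t* path1, const wchar_t* path2) const {
        return std::wcscmp(path1, path2) < 0;   // a strict weak ordering
    }
};

int main() {
    std::set<const wchar_t*, PathComp> pathSet;
    pathSet.insert(L"C:\\temp\\b.txt");
    pathSet.insert(L"C:\\temp\\a.txt");
    pathSet.insert(L"C:\\temp\\a.txt");    // duplicate: silently ignored
    std::cout << pathSet.size() << '\n';   // prints 2
    return 0;
}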
After spending some more time on it, we finally decided to take another approach, which worked for me.
We converted the wchar_t* to a string using this method:
#include <windows.h>
#include <string>
using std::string;

// Converts LPWSTR to string
bool convertLPWSTRToString(string& str, const LPWSTR wStr)
{
    bool b = false;
    char* p = 0;
    int bSize;

    // get the required buffer size in bytes
    bSize = WideCharToMultiByte(CP_UTF8, 0, wStr, -1, 0, 0, NULL, NULL);
    if (bSize > 0) {
        p = new char[bSize];
        int rc = WideCharToMultiByte(CP_UTF8, 0, wStr, -1, p, bSize, NULL, NULL);
        if (rc != 0) {
            p[bSize - 1] = '\0';
            str = p;
            b = true;
        }
    }
    delete[] p;
    return b;
}
And then stored that string in the set. By doing this I didn't have to worry about comparing the elements being stored to make sure that all entries are unique.
// set that will hold unique path
set<string> strSet;
So all I had to do was this:
string str;
convertLPWSTRToString(str, FileName);
// store path in the set
strSet.insert(str);
Though I still don't know what was causing "Debug Assertion Failed" issue when I was using a set comparator (PathComp) for set<wchar_t*,PathComp> pathSet;