Gray code addition - C++

Is there any known way to compute the addition (and maybe the subtraction) of two Gray codes without having to convert the two Gray codes to regular binary, perform a binary addition then convert the result back to a Gray code? I managed to write increment and decrement functions, but the addition and subtraction seem even less documented and harder to write.

In this document, under #6, there is an algorithm for serial Gray code addition (copied directly; note that ⊕ is XOR):
procedure add (n: integer; A, B: word; PA, PB: bit;
               var S: word; var PS: bit; var CE, CF: bit);
var
    i: integer; E, F, T: bit;
begin
    E := PA; F := PB;
    for i := 0 to n-1 do begin   {in parallel, using previous inputs}
        S[i] := (E and F) ⊕ A[i] ⊕ B[i];
        E := (E and (not F)) ⊕ A[i];
        F := ((not E) and F) ⊕ B[i];
    end;
    CE := E; CF := F;
end;
This adds the Gray code words A and B to form the Gray code word S. The operand parities are PA and PB; the sum parity is PS. The algorithm propagates two carry bits internally, E and F, and produces two external carry bits, CE and CF.
Unfortunately, it doesn't state anything about subtraction, but I assume that, as long as you can encode negative numbers, you can use addition for subtraction as well.

I accepted Sebastian Dressler's answer because the suggested algorithm indeed works. For the sake of completeness, I propose here a corresponding C99 implementation of the algorithm (C++ compatible):
#include <limits.h>   // CHAR_BIT
#include <stdbool.h>  // bool

// lhs and rhs are encoded as Gray codes
unsigned add_gray(unsigned lhs, unsigned rhs)
{
    // e and f, initialized with the parity of lhs and rhs
    // (0 means even, 1 means odd)
    bool e = __builtin_parity(lhs);
    bool f = __builtin_parity(rhs);

    unsigned res = 0u;
    for (unsigned i = 0u; i < CHAR_BIT * sizeof(unsigned); ++i)
    {
        // Get the ith bit of rhs and lhs
        bool lhs_i = (lhs >> i) & 1u;
        bool rhs_i = (rhs >> i) & 1u;

        // Copy e and f (see {in parallel} in the original paper)
        bool e_cpy = e;
        bool f_cpy = f;

        // Set the ith bit of res
        unsigned res_i = (e_cpy & f_cpy) ^ lhs_i ^ rhs_i;
        res |= (res_i << i);

        // Update e and f
        e = (e_cpy & (!f_cpy)) ^ lhs_i;
        f = ((!e_cpy) & f_cpy) ^ rhs_i;
    }
    return res;
}
Note: __builtin_parity is a compiler intrinsic (GCC and Clang) that returns the parity of the number of set bits in an integer (if the intrinsic does not exist, there are other ways to compute it by hand). A Gray code is even when it has an even number of set bits. The algorithm can still be improved, but this implementation is rather faithful to the original algorithm. If you want details about an optimized implementation, you can have a look at this Q&A on Code Review.
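If the intrinsic is not available, a hand-rolled fallback along these lines should work (a minimal sketch assuming a 32-bit unsigned; the name parity_fallback is mine):
// XOR-fold the word so that the lowest bit ends up
// holding the parity of all 32 bits.
static bool parity_fallback(unsigned x)
{
    x ^= x >> 16;
    x ^= x >> 8;
    x ^= x >> 4;
    x ^= x >> 2;
    x ^= x >> 1;
    return x & 1u;
}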

I recently devised a new algorithm to add two Gray codes. Unfortunately, it is still slower than the naive double-conversion solution and is also slower than Harold Lucal's algorithm (the one in the accepted answer). But any new solution to a problem is welcome, right?
// lhs and rhs are encoded as Gray codes
unsigned add_gray(unsigned lhs, unsigned rhs)
{
    // Highest power of 2 in lhs and rhs
    unsigned lhs_base = hyperfloor(lhs);
    unsigned rhs_base = hyperfloor(rhs);

    if (lhs_base == rhs_base) {
        // If lhs and rhs are equal, return lhs * 2
        if (lhs == rhs) {
            return (lhs << 1u) ^ __builtin_parity(lhs);
        }
        // Else return base*2 + (lhs - base) + (rhs - base)
        return (lhs_base << 1u) ^ add_gray(lhs_base ^ lhs, lhs_base ^ rhs);
    }

    // It's easier to operate from the greatest value
    if (lhs_base < rhs_base) {
        swap(&lhs, &rhs);
        swap(&lhs_base, &rhs_base);
    }

    // Compute lhs + rhs
    if (lhs == lhs_base) {
        return lhs ^ rhs;
    }

    // Compute (lhs - base) + rhs
    unsigned tmp = add_gray(lhs ^ lhs_base, rhs);
    if (hyperfloor(tmp) < lhs_base) {
        // Compute base + (lhs - base) + rhs
        return lhs_base ^ tmp;
    }

    // Here, hyperfloor(lhs) == hyperfloor(tmp)
    // Compute hyperfloor(lhs) * 2 + ((lhs - hyperfloor(lhs)) + rhs) - hyperfloor(lhs)
    return (lhs_base << 1u) ^ (lhs_base ^ tmp);
}
The algorithm uses the following utility functions in order to work correctly:
// Swap two values
void swap(unsigned* a, unsigned* b)
{
    unsigned temp = *a;
    *a = *b;
    *b = temp;
}

// Isolate the most significant bit
unsigned isomsb(unsigned x)
{
    for (unsigned i = 1u; i <= CHAR_BIT * sizeof(unsigned) / 2u; i <<= 1u) {
        x |= x >> i;
    }
    return x & ~(x >> 1u);
}

// Return the greatest power of 2 not higher than
// x, where x and the power of 2 are encoded in Gray
// code
unsigned hyperfloor(unsigned x)
{
    unsigned msb = isomsb(x);
    return msb | (msb >> 1u);
}
So, how does it work?
I have to admit, it's quite a wall of code for something as « simple » as an addition. It is mostly based on observations about bit patterns in Gray codes; that is, I didn't prove anything formally, but I have yet to find a case where the algorithm doesn't work (overflow aside: it does not handle overflow). Here are the main observations used to construct the algorithm; assume that every value is a Gray code:
2 * n = (n << 1) ⊕ parity(n)
If a is a power of 2 and a > b, then a ⊕ b = a + b.
Consequently, if a is a power of 2 and a < b, then a ⊕ b = b - a. This only works if b < 2 * a, though.
If a and b have the same hyperfloor but are not equal, then a + b = (hyperfloor(a) << 1) ⊕ ((hyperfloor(a) ⊕ a) + (hyperfloor(b) ⊕ b)).
Basically, it means that we know how to multiply by 2, how to add a power of 2 to a smaller Gray code, and how to subtract a power of 2 from a Gray code that is bigger than that power of 2 but smaller than the next power of 2. Everything else is tricks so that we can reason in terms of equal values or powers of 2.
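For what it's worth, the first observation is easy to sanity-check against the standard binary-to-Gray conversion (a small test harness of mine; gray(n) = n ^ (n >> 1) is the usual encoding):
#include <assert.h>

// Standard binary-to-Gray conversion
static unsigned to_gray(unsigned n)
{
    return n ^ (n >> 1);
}

int main(void)
{
    // Observation 1: doubling a value shifts its Gray code
    // left by one and appends the code's parity bit.
    for (unsigned n = 0u; n < 1000u; ++n) {
        unsigned g = to_gray(n);
        assert(to_gray(2u * n) == ((g << 1u) ^ __builtin_parity(g)));
    }
    return 0;
}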
If you want more details/information, you can also check this Q&A on Code Review which proposes a modern C++ implementation of the algorithm as well as some optimizations (as a bonus, there are some nice MathJax equations that we can't have here :D).

Related

A few questions about checking whether a set in C++ is an algebraic group

I've started making a library for doing things with abstract algebra. Right now I'm trying to make a function that checks whether a set is a group. It should be self-explanatory:
In mathematics, a group is a set of elements together with an operation that combines any two of its elements to form a third element satisfying four conditions called the group axioms, namely closure, associativity, identity and invertibility. One of the most familiar examples of a group is the set of integers together with the addition operation; the addition of any two integers forms another integer. (http://en.wikipedia.org/wiki/Group_(mathematics))
#include <set>
#include <iostream>

template <typename ObType, typename BinaryFunction>
bool isGroup(const std::set<ObType> & S, BinaryFunction & op)
{
    /*
        isGroup returns true or false depending on whether the set S
        along with the operator op is a group in the algebraic sense.
        That is, S is a group if and only if all the 4 following
        conditions are true:
            (1) If a, b in S, then a op b in S
            (2) If a, b, c in S, then (a + b) + c = a + (b + c)
            (3) There is an element 0 in S such that a + 0 = 0 + a for all a in S
            (4) If a in S, then there is a b in S such that a + b = b + a = 0
    */
    typename std::set<ObType>::const_iterator beg(S.cbegin()), offend(S.cend());
    bool noProblems = true;
    for (typename std::set<ObType>::const_iterator ia = beg; ia != offend && noProblems; ++ia)
    {
        for (typename std::set<ObType>::const_iterator ib = beg; ib != offend && noProblems; ++ib)
        {
            // ---------- (1) --------------
            if (S.count(op(*ia, *ib)) == 0)
                noProblems = false;
            // -----------------------------
            for (typename std::set<ObType>::const_iterator ic = beg; ic != offend && noProblems; ++ic)
            {
                // ---------- (2) -------------
                if (((*ia + *ib) + *ic) != (*ia + (*ib + *ic)))
                    noProblems = false;
                // ----------------------------
            }
        }
    }
    return noProblems;
}

template <typename T>
class Plus
{
public:
    T operator() (const T & x, const T & y) { return x + y; }
};

int main()
{
    std::set<int> S1 = { 0, 1, -1 };
    std::set<int> S2 = { 0 };
    Plus<int> p;
    std::cout << isGroup(S1, p);
    return 0;
}
No compiler errors, but I have a few questions:
How can I check for (3) and (4) inside my nest of loops?
Later, I'd like to check whether entire sets of native objects like int and long are groups. How can I set S1 equal to an std::set of all longs?
You should create a class to express the notion of a set with an operation op ("+") (note: this "+" is not the ordinary arithmetic +, so
// ---------- (2) -------------
if (((*ia + *ib) + *ic) != (*ia + (*ib + *ic)))
    noProblems = false;
is wrong, this should be
// ---------- (2) -------------
if (op(op(*ia, *ib), *ic) != op(*ia, op(*ib, *ic)))
    noProblems = false;
), the values (or rather elements) of this set (a container), and a special element called 1 (or e): it is 0 for the integers Z under the operation called addition +, but 1 for Z\{0} under multiplication "×". You need to add such a 1 to your class; it is absolutely necessary in order to check (3) and (4). Further, the identity 1 is not the integer 0 in general, but a description of some special identity element that yields the same element x when x is combined with it: x + 1 = x and 1 + x = x. (You can skip one of the two expressions if the operation "+" is commutative, which is true if S is an abelian group.)
Now, what you do depends on whether you would like to introduce a hint parameter or not. To find the identity element in a given set with a hint, you can write:
template <typename ObType, typename BinaryFunction>
bool isGroup(const std::set<ObType> & S, BinaryFunction & op, ObType e)
{
    //... important: define BinaryFunction as taking const args!
    typename std::set<ObType>::const_iterator beg(S.cbegin()), offend(S.cend());
    bool isGroup = true;
    for (typename std::set<ObType>::const_iterator ia = beg; ia != offend && isGroup; ++ia)
    {
        // ---------- (3) --------------
        if (op(*ia, e) != *ia || op(e, *ia) != *ia)
            isGroup = false;
        // -----------------------------
Indicating the identity element in general is not straightforward. The simple example of integers or other arithmetic types with our well-known + is one of the simplest and is not extensible; e.g. in the field of fractions of the ring Z, Q(Z), the e for + is given by the pair [0,1], and for "×" by [1,1]. So to make this more general you have to iterate over the elements, choose a candidate e, and call op to check whether (3) holds for all elements.
template <typename ObType, typename BinaryFunction>
bool isGroup(const std::set<ObType> & S, BinaryFunction & op)
{
    //... important: define BinaryFunction as taking const args!
    typename std::set<ObType>::const_iterator beg(S.cbegin()), offend(S.cend());
    for (typename std::set<ObType>::const_iterator ia = beg; ia != offend; ++ia)
    {
        // ---------- (3) -------------
        /* let e be an *ia */
        ObType e = *ia;
        bool isGroup = true;
        for (auto ia2 : S) {
            if (op(ia2, e) != ia2 || op(e, ia2) != ia2) {
                isGroup = false;
                break;
            }
        }
        // identity found: set e_ in the class to e and return
        if (isGroup) {
            e_ = e; // e_ is the identity member of the class described above
            return true;
        }
    }
    /* identity not found, this is not a group */
    return false;
}
First, an error: associativity requires op too, not +.
Closure looks OK.
Neutral element: well, you have to search for an element N so that each element A of the set fulfills op(A, N) = A and op(N, A) = A (these two conditions are not the same). There has to be such an N, or it isn't a group. Try every element for N, and in the N loop every A...
And inverse elements: for each element A there has to be a B (which can be the same as A) so that op(A, B) = N (the N from before). These checks fit easily into your loops: every A { every B { ... } }, with N already known; see the sketch below.
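A minimal sketch of those two loops, assuming the identity N has already been found (the helper name allHaveInverses is mine, not part of the original code):
#include <set>

// Check condition (4): every element must have an inverse.
// Assumes N, the identity, was already located by the check for (3).
template <typename ObType, typename BinaryFunction>
bool allHaveInverses(const std::set<ObType> & S, BinaryFunction & op, const ObType & N)
{
    for (const ObType & a : S) {
        bool hasInverse = false;
        for (const ObType & b : S) {
            if (op(a, b) == N && op(b, a) == N) {
                hasInverse = true;
                break;
            }
        }
        if (!hasInverse)
            return false; // a has no inverse, so S is not a group
    }
    return true;
}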
BUT: if you want to work with big data sets like all of long (or even infinite sets), you can't use such simple methods anymore (even the associativity check is bad). And asking about reimplementing Mathematica etc. is a bit much...

Fast inner product of ternary vectors

Consider two vectors, A and B, of size n, 7 <= n <= 23. Both A and B consist of -1s, 0s and 1s only.
I need a fast algorithm which computes the inner product of A and B.
So far I've thought of storing the signs and values in separate uint32_ts using the following encoding:
sign 0, value 0 → 0
sign 0, value 1 → 1
sign 1, value 1 → -1.
The C++ implementation I've thought of looks like the following:
#include <cstdint>

struct ternary_vector {
    uint32_t sign, value;
};

int inner_product(const ternary_vector & a, const ternary_vector & b) {
    uint32_t psign = a.sign ^ b.sign;
    uint32_t pvalue = a.value & b.value;
    psign &= pvalue;
    pvalue ^= psign;
    return __builtin_popcount(pvalue) - __builtin_popcount(psign);
}
This works reasonably well, but I'm not sure whether it is possible to do it better. Any comment on the matter is highly appreciated.
I like having the two uint32_ts, but I think your actual calculation is a bit wasteful.
Just a few minor points:
I'm not sure about the reference (getting a and b by const &): this adds a level of indirection compared to putting them on the stack. When the code is this small (a couple of clock cycles, maybe), that is significant. Try passing by value and see what you get.
__builtin_popcount can, unfortunately, be very inefficient. I've used it myself, but found that even a very basic implementation I wrote was far faster. This is platform-dependent, however: if the platform has a hardware popcount instruction, __builtin_popcount uses it; if not, it falls back on a very inefficient replacement.
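For reference, a basic portable popcount along the lines the answer alludes to might look like this (a sketch of the classic SWAR approach, not the answerer's actual code):
#include <cstdint>

// Classic SWAR popcount for a 32-bit word: count bits in parallel
// in 2-bit, then 4-bit groups, and finally sum the four byte counts.
int popcount32(uint32_t x)
{
    x = x - ((x >> 1) & 0x55555555u);
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);
    x = (x + (x >> 4)) & 0x0F0F0F0Fu;
    return (x * 0x01010101u) >> 24;
}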
The one serious problem here is the reuse of the psign and pvalue variables for the positive and negative vectors. You are doing neither your compiler nor yourself any favors by obfuscating your code in this way.
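To illustrate the point, here is the same computation with separate, named intermediates (my naming; it reuses the question's ternary_vector):
int inner_product_clear(const ternary_vector & a, const ternary_vector & b) {
    // Positions where both elements are non-zero
    uint32_t nonzero   = a.value & b.value;
    // Among those, positions where the signs differ: the product is -1
    uint32_t negatives = (a.sign ^ b.sign) & nonzero;
    // The remaining non-zero positions contribute +1
    uint32_t positives = nonzero & ~negatives;
    return __builtin_popcount(positives) - __builtin_popcount(negatives);
}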
Would it be possible for you to encode your ternary state in a std::bitset<2> and define the product in terms of AND? For example, if your ternary types are:
1 = P = (1, 1)
0 = Z = (0, 0)
-1 = M = (1, 0) or (0, 1)
I believe you could define their product as:
1 * 1 = 1 => P * P = P => (1, 1) & (1, 1) = (1, 1) = P
1 * 0 = 0 => P * Z = Z => (1, 1) & (0, 0) = (0, 0) = Z
1 * -1 = -1 => P * M = M => (1, 1) & (1, 0) = (1, 0) = M
Then the inner product could start by taking the AND of the bits of the elements and... I am working on how to add them together.
Edit:
My foolish suggestion did not consider that (-1) * (-1) = 1, which cannot be handled by the representation I proposed. Thanks to @user92382 for bringing this up.
Depending on your architecture, you may want to optimize away the temporary bit vectors; e.g. if your code is going to be compiled to an FPGA, or laid out as an ASIC, then a sequence of logical operations will be better in terms of speed/energy/area than storing and reading/writing two big buffers.
In this case, you can do:
int inner_product(const ternary_vector & a, const ternary_vector & b) {
    return __builtin_popcount(a.value & b.value & ~(a.sign ^ b.sign))
         - __builtin_popcount(a.value & b.value &  (a.sign ^ b.sign));
}
This will lay out very well -- the (a.value & b.value & ... ) can enable/disable an XOR gate, whose output splits into two signed accumulators, with the first pathway NOTed before accumulation.

How can I check if std::pow will overflow double

I have a function that deals with arbitrarily large grids. Because I use std::pow, I need to determine whether a grid size raised to the power of another number will fit into a double. If it cannot, I want to take a different branch and use the GNU multiprecision library instead of the normal one.
Is there a quick way to see if:
int a = 1024;
int b = 0-10; // b ranges from 0 to 10
if (checkPowFitsDouble(a, b)) {
    long c = static_cast<long>(std::pow(a, b)); // this will only work if b < 6
} else {
    mpz_t c; // yada yada gmp
}
I am completely stumped on checkPowFitsDouble; perhaps there is some math trick I don't know of.
A common trick to check whether exponentiations will overflow uses logarithms. The idea is based on these relationships:
a^b <= m <=> log(a^b) <= log(m) <=> b * log(a) <= log(m) <=> b <= log(m) / log(a)
For instance,
int a = 1024;
for (int b = 0; b < 10; ++b) {
    if (b * std::log(a) < std::log(std::numeric_limits<long>::max())) {
        long c = std::pow(a, b);
        std::cout << c << '\n';
    }
    else
        std::cout << "overflow\n";
}
This gives the idea. I hope this helps.
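Wrapped up as the checkPowFitsDouble the question asks for, the same trick might look like this (a sketch, assuming a > 1 and b >= 0; DBL_MAX is the largest finite double):
#include <cfloat>
#include <cmath>

// True if a^b stays within the finite range of a double.
bool checkPowFitsDouble(int a, int b)
{
    return b * std::log(a) < std::log(DBL_MAX);
}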
Unless it's particularly performance-critical, the suggestion would be to try it and see. If it overflows a double, std::pow will return HUGE_VAL. Hence something like:
double val = std::pow(a, b);
if (val != HUGE_VAL) {
    // ...
} else {
    mpz_t c;
    // ...
}
You can easily use the reverse functions in the test:
if ( std::log( DBL_MAX ) / std::log( a ) < b ) {
    // std::pow( a, b ) will overflow...
} else {
    // safe to use std::pow( a, b )
}
It might be just as good to just do the pow, and see if it succeeds:
errno = 0;
double powab = std::pow( a, b );
if ( errno == 0 ) {
    // std::pow succeeded (without overflow)
} else {
    // some error (probably overflow) with std::pow.
}
You won't gain much time by just calculating std::log( a ). (std::log( DBL_MAX ) is, of course, a constant, so it only needs to be calculated once.)
With base-10 logarithms, you can deduce that std::pow(a, b) has about log10(a^b) = b * log10(a) digits. You can then trivially see whether it fits in a double, which can hold values up to DBL_MAX.
However, this method performs more computation than just computing a^b once. Measure a version with GMP first and see whether checking for overflow actually provides any measurable and reproducible benefit.
EDIT: Ignore this, std::pow already returns an appropriate value in case an overflow occurs, so use that.

Simplify this expression

Let a, b be positive integers with different values. Is there any way to simplify these expressions:
bool foo(unsigned a, unsigned b)
{
    if (a % 2 == 0)
        return (b % 2) ^ (a < b);    // Should I write "!=" instead of "^"?
    else
        return !((b % 2) ^ (a < b)); // Should I write "(b % 2) == (a < b)"?
}
I am interpreting the returned value as a boolean.
How is it different from
(a%2)^(b%2)^(a<b)
which in turn is
((a^b)&1)^(a<b)
or, indeed
((a ^ b) & 1) != (a < b)
Edited to add: Thinking about it some more, this is just the xor of the first and last bits of (a-b) (if you use 2's complement), so there is probably a machine-specific ASM sequence which is faster, involving a rotate instruction.
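If in doubt, the claimed equivalence is cheap to verify exhaustively over a small range (a throwaway check of mine, not part of either post):
#include <cassert>

bool foo(unsigned a, unsigned b)
{
    if (a % 2 == 0)
        return (b % 2) ^ (a < b);
    else
        return !((b % 2) ^ (a < b));
}

int main()
{
    // Compare foo against the proposed single-expression form.
    for (unsigned a = 0; a < 256; ++a)
        for (unsigned b = 0; b < 256; ++b)
            assert(foo(a, b) == (((a ^ b) & 1) != (a < b)));
    return 0;
}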
As a rule of thumb, don't mix operators of different operator families. You are mixing relational/boolean operators with bitwise operators, and regular arithmetic.
This is what I think you are trying to do; I'm not sure, since I don't understand the purpose of your code: it is neither readable nor self-explanatory.
bool result;
bool a_is_even = (a % 2) == 0;
bool b_is_even = (b % 2) == 0;
if (a_is_even == b_is_even) // both even or both odd
    result = a < b;
else
    result = a >= b;
return result;
I program in C#, but I'd think about something like this:
return (a % 2 == 0) && ((b % 2) ^ (a < b))
considering from your comments that '^' is equivalent to '=='.
If you are returning a truth value, a boolean, then your proposed changes do not change the semantics of the code. That's because bitwise XOR, when used in a truth context, is the same as !=.
In my view your proposed changes make the code much easier to understand. Quite why the author thought bitwise XOR would be appropriate eludes me. I guess some people think that sort of coding is clever. I don't.
If you want to know the relative performance of the two versions, write a program and time the difference. I'd be surprised if you could measure any difference between them. And I'd be equally surprised if these lines of code were your performance bottleneck.
Since there is not much information around this issue, try this:
int temp = (b % 2) ^ (a < b);
if (a % 2 == 0)
    return temp;
else
    return !temp;
This should be less code if the compiler hasn't optimized it already.

JavaScript Bit Manipulation to C++ Bit Manipulation

The following pseudocode and JavaScript code are an extract from the implementation of an algorithm; I want to convert it to C++.
Pseudocode:
for b from 0 to 2^|R| do
    for i from 0 to |R| do
        if BIT-AT(b, i) = 1 then // b's bit at index i
JavaScript code:
for (var b = 0; b < Math.pow(2, orders[r].length); b++) // use b's bits for directions
{
    for (var i = 0; i < orders[r].length; i++)
    {
        if (((b >> i) & 1) == 1) { // is b's bit at index i on?
I don't understand what is happening in the last line of this code. What should be the C++ code for the above JavaScript code? So far what I have written is:
for (int b = 0; b < pow(2, orders.at(r).size()); b++)
{
    for (int i = 0; i < orders.at(r).size(); i++)
    {
        if (((b >> i) & 1) == 1) // This line is not doing what it is supposed to do according to the pseudocode
The last line is giving me a segmentation fault.
Edit: I apologize, the problem was somewhere else. This code works fine.
(((b >> i) & 1) == 1)
     |     |
     |     |
     |     bitwise AND between the result of the shift and the number 1
     |
     shift b by i bits to the right
After that, the result is compared with the number 1.
So if, for example, b is 8 and i is 2, it will do the following:
shift 8 (which is 00001000) by 2 bits to the right. The result will be 00000010.
apply the bitwise AND: 00000010 BITWISE_AND 00000001, the result will be 0.
Compare it with 1. Since 0 != 1, you will not enter that last if.
As for the logic behind this, the expression ((b >> i) & 1) == 1 returns true if bit number i of the b variable is 1, and false otherwise.
And I believe the C++ code will be the same, with the exception that we don't have the Math class in C++, and you'll have to replace the vars with the corresponding types.
>> is the right shift operator, i.e. it takes the left operand and moves its bits n positions to the right (n being the right operand).
So essentially, 1 << 5 would move 1 to 100000 (that one being the left shift).
In your example, ((b >> i) & 1) == 1 will check whether the i-th bit is set (1) by means of the bitwise AND (&).
As for your code, you can use it (almost) directly in C or C++. Math.pow() would become pow() from math.h, but (in this case) you could simply use the left shift operator:
for (int b = 0; b < (1 << orders[r].size()); ++b) // added the brackets to make it easier to read
    for (int i = 0; i < orders[r].size(); ++i)
        if (((b >> i) & 1) == 1) {
            // ...
        }
1 << orders[r].size() will essentially be the same as pow(2, orders[r].size()), but without any function call.
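Put together as a minimal compilable example (my scaffolding; the orders structure is reduced to a single std::vector for illustration):
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> order = {10, 20, 30}; // stand-in for orders[r]

    // Enumerate every subset of 'order' via the bits of b
    for (int b = 0; b < (1 << order.size()); ++b)
    {
        std::cout << "subset " << b << ":";
        for (std::size_t i = 0; i < order.size(); ++i)
            if (((b >> i) & 1) == 1) // is b's bit at index i on?
                std::cout << ' ' << order[i];
        std::cout << '\n';
    }
    return 0;
}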