How can you convert a std::bitset<64> to a double?

Is there a way to convert a std::bitset<64> to a double without using any external library (Boost, etc.)? I am using a bitset to represent a genome in a genetic algorithm and I need a way to convert a set of bits to a double.

The C++11 road:
union Converter { uint64_t i; double d; };

double convert(std::bitset<64> const& bs) {
    Converter c;
    c.i = bs.to_ullong();
    return c.d;
}
EDIT: As noted in the comments, we can instead use char* aliasing, since reading an object's representation through a char* is unspecified behaviour rather than undefined behaviour.
double convert(std::bitset<64> const& bs) {
    static_assert(sizeof(uint64_t) == sizeof(double), "Cannot use this!");

    uint64_t const u = bs.to_ullong();
    double d;

    // Aliases to `char*` are explicitly allowed in the Standard (and only them)
    char const* cu = reinterpret_cast<char const*>(&u);
    char* cd = reinterpret_cast<char*>(&d);

    // Copy the bitwise representation from u to d
    memcpy(cd, cu, sizeof(u));

    return d;
}
C++11 is still required for to_ullong.
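A round-trip check of the memcpy approach might look like this (the helper names and the main() are illustrative, not part of the original answer):

#include <bitset>
#include <cstdint>
#include <cstring>
#include <iostream>

// Copy the bit pattern of a double into a bitset<64>.
std::bitset<64> to_bits(double d) {
    uint64_t u;
    std::memcpy(&u, &d, sizeof(u));
    return std::bitset<64>(u);
}

// Copy the bit pattern of a bitset<64> back into a double.
double from_bits(std::bitset<64> const& bs) {
    uint64_t const u = bs.to_ullong();
    double d;
    std::memcpy(&d, &u, sizeof(d));
    return d;
}

int main() {
    double original = 3.141592653589793;
    std::bitset<64> bits = to_bits(original);
    std::cout << bits << '\n';
    std::cout << from_bits(bits) << '\n';  // prints the original value back
}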

Most people are trying to provide answers that let you treat the bit-vector as though it directly contained an encoded int or double.
I would advise you to completely avoid that approach. While it does "work" for some definition of working, it introduces Hamming cliffs all over the place. You usually want your encoding to arrange things so that if two decoded values are near one another, then their encoded values are near one another as well. It also forces you to use 64 bits of precision.
I would manage the conversion manually. Say you have three variables to encode, x, y, and z. Your domain expertise can be used to say, for example, that -5 <= x < 5, 0 <= y < 100, and 0 <= z < 1, where you need 8 bits of precision for x, 12 bits for y, and 10 bits for z. This gives you a total search space of only 30 bits. You can have a 30-bit string, treat the first 8 bits as encoding x, the next 12 as y, and the last 10 as z. You are also free to Gray-code each one to remove the Hamming cliffs.
I've personally done the following in the past:
inline void binary_encoding::encode(const vector<double>& params)
{
    unsigned int start = 0;
    for(unsigned int param = 0; param < params.size(); ++param) {
        // m_bpp[i] = number of bits in encoding of parameter i
        unsigned int num_bits = m_bpp[param];

        // map the double onto the appropriate integer range
        // m_range[i] is a pair of (min, max) values for ith parameter
        pair<double,double> prange = m_range[param];
        double range = prange.second - prange.first;
        double max_bit_val = pow(2.0, static_cast<double>(num_bits)) - 1;
        int int_val = static_cast<int>((params[param] - prange.first) * max_bit_val / range + 0.5);

        // convert the integer to binary
        vector<int> result(m_bpp[param]);
        for(unsigned int b = 0; b < num_bits; ++b) {
            result[b] = int_val % 2;
            int_val /= 2;
        }

        if(m_gray) {
            for(unsigned int b = 0; b < num_bits - 1; ++b) {
                result[b] = !(result[b] == result[b+1]);
            }
        }

        // insert the bits into the correct spot in the encoding
        copy(result.begin(), result.end(), m_genotype.begin() + start);
        start += num_bits;
    }
}
inline void binary_encoding::decode()
{
    unsigned int start = 0;

    // for each parameter
    for(unsigned int param = 0; param < m_bpp.size(); param++) {
        unsigned int num_bits = m_bpp[param];
        unsigned int intval = 0;

        if(m_gray) {
            // convert from gray to binary
            vector<int> binary(num_bits);
            binary[num_bits-1] = m_genotype[start+num_bits-1];
            intval = binary[num_bits-1];
            for(int i = num_bits - 2; i >= 0; i--) {
                binary[i] = !(binary[i+1] == m_genotype[start+i]);
                intval += intval + binary[i];
            }
        }
        else {
            // convert from binary encoding to integer
            for(int i = num_bits - 1; i >= 0; i--) {
                intval += intval + m_genotype[start+i];
            }
        }

        // convert from integer to double in the appropriate range
        pair<double,double> prange = m_range[param];
        double range = prange.second - prange.first;
        double m = range / (pow(2.0, double(num_bits)) - 1.0);

        // m_phenotype is a vector<double> containing all the decoded parameters
        m_phenotype[param] = m * double(intval) + prange.first;
        start += num_bits;
    }
}
Note that for reasons that probably don't matter to you, I wasn't using bit vectors -- just an ordinary vector<int> to encode things. And of course, there's a bunch of stuff tied into this code that isn't shown here, but you can probably get the basic idea.
One other note, if you're doing GPU calculations or if you have a particular problem such that 64 bits are the appropriate size anyway, it may be worth the extra overhead to stuff everything into native words. Otherwise, I would guess that the overhead you add to the search process will probably overwhelm whatever benefits you get by faster encoding and decoding.

Edit: I've decided that I was being a bit silly with this. While you do end up with a double, it assumes that the bitset holds an integer... which is a big assumption to make. You will end up with a predictable and repeatable value per bitset, but I still don't think this is what the author intended.
Well, if you iterate over the bit values and do
output_double += pow( 2, 64-(bit_position+1) ) * bit_value;
that would work, as long as bit position 0 is taken to be the most significant bit (i.e., big-endian bit ordering). A version using std::bitset's own least-significant-bit-first indexing is sketched below.
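A minimal, self-contained version of that idea (names are illustrative; here bit i carries weight 2^i, matching std::bitset's indexing):

#include <bitset>
#include <cmath>
#include <iostream>

// Interpret the bitset as a plain unsigned integer and return it as a double.
double bits_to_value(std::bitset<64> const& bs) {
    double output_double = 0.0;
    for (std::size_t bit_position = 0; bit_position < bs.size(); ++bit_position) {
        // bs[bit_position] is the bit with weight 2^bit_position
        double bit_value = bs.test(bit_position) ? 1.0 : 0.0;
        output_double += std::pow(2.0, static_cast<double>(bit_position)) * bit_value;
    }
    return output_double;
}

int main() {
    std::bitset<64> bs(42);                   // holds the integer 42
    std::cout << bits_to_value(bs) << '\n';   // prints 42
}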

Related

Calculating the mean of an array in C++

I encountered a problem when I tried to calculate the mean of an array in two ways. Below is the code:
float sum1, sum2, tmp, mean1, mean2;
double sum1_double, sum2_double, tmp_double, mean1_double, mean2_double;
int i, j;
int Nt = 29040000;                // array size
int piecesize = 32;
int Npiece = Nt / piecesize;
float* img;
float* d_img;
double* img_double;

img_double = (double*)calloc(Nt, sizeof(double));
cudaHostAlloc((void**)&img, sizeof(float)*Nt, cudaHostAllocDefault);
cudaMalloc((void**)&d_img, sizeof(float)*Nt);
...
// Some calculation is done in GPU and the results are stored in d_img;
...
cudaMemcpy(img, d_img, Nt*sizeof(float), cudaMemcpyDeviceToHost);
for (i = 0; i < Nt; i++) img_double[i] = (double)img[i];

// Method 1: one long accumulation over the whole array
sum1 = 0;
for (i = 0; i < Nt; i++)
{ sum1 += img[i]; }

sum1_double = 0;
for (i = 0; i < Nt; i++)
{ sum1_double += img_double[i]; }

// Method 2: accumulate in pieces of 32, then sum the partial sums
sum2 = 0;
for (i = 0; i < Npiece; i++)
{   tmp = 0;
    for (j = 0; j < piecesize; j++)
    { tmp += img[i*piecesize+j]; }
    sum2 += tmp;
}

sum2_double = 0;
for (i = 0; i < Npiece; i++)
{   tmp_double = 0;
    for (j = 0; j < piecesize; j++)
    { tmp_double += img_double[i*piecesize+j]; }
    sum2_double += tmp_double;
}

mean1 = sum1 / (float)Nt;
mean2 = sum2 / (float)Nt;
mean1_double = sum1_double / (double)Nt;
mean2_double = sum2_double / (double)Nt;

cout << setprecision(15) << mean1 << endl;
cout << setprecision(15) << mean2 << endl;
cout << setprecision(15) << mean1_double << endl;
cout << setprecision(15) << mean2_double << endl;
Output:
132.221862792969
129.565872192383
129.565938340543
129.565938340543
The results obtained from the two methods, mean1 = 132.2 and mean2 = 129.6, are significantly different. May I know why?
Thanks a lot in advance!
The reason is that floating point arithmetic is not precise. When you accumulate integers, float becomes imprecise when abs(value) grows larger than 2^24 (I'm assuming IEEE-754 32-bit here). For example, float cannot store 16777217 exactly (it will become 16777216 or 16777218, depending on the rounding mode).
Your second calculation is presumably the more precise one: less precision is lost, because of the separate tmp accumulation.
Change your sum1, sum2, tmp variables to long long int, and hopefully you'll get the same result for both calculations.
Note: I've assumed that your img stores integer data. If it stores floats, then there is no easy way to fix this perfectly. One way is to use double instead of float for sum1, sum2 and tmp. The difference will still be there, but it will be much smaller. And there are techniques for accumulating floats more precisely than simple summation, such as Kahan summation (sketched below).
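For reference, a minimal Kahan (compensated) summation sketch, not taken from the question's code; data and n are illustrative parameters:

#include <cstddef>

// Kahan summation: the rounding error of each addition is kept in `c`
// and fed back in on the next iteration.
double kahan_sum(const float* data, std::size_t n) {
    double sum = 0.0;
    double c = 0.0;                 // running compensation for lost low-order bits
    for (std::size_t i = 0; i < n; ++i) {
        double y = static_cast<double>(data[i]) - c;
        double t = sum + y;         // low-order bits of y are lost here...
        c = (t - sum) - y;          // ...and recovered here
        sum = t;
    }
    return sum;
}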

Arithmetic Coding FPAQ0 (a simple order-0 arithmetic file compressor)

I am trying to understand the code of the fpaq0 arithmetic compressor, but I am not able to fully understand it. Here is the link to the code: fpaq0.cpp
I am not able to understand exactly how ct[512][2] and cxt work. I am also not clear on how the decoder works, or why e.encode(0) is called before encoding every character.
NOTE: I have understood the arithmetic coder presented in the link Data Compression with Arithmetic Encoding.
void update(int y) {
    if (++ct[cxt][y] > 65534) {
        ct[cxt][0] >>= 1;
        ct[cxt][1] >>= 1;
    }
    if ((cxt += cxt + y) >= 512)
        cxt = 1;
}

// Assume a stationary order 0 stream of 9-bit symbols
int p() const {
    return 4096 * (ct[cxt][1] + 1) / (ct[cxt][0] + ct[cxt][1] + 2);
}
inline void Encoder::encode(int y) {
    // Update the range
    const U32 xmid = x1 + ((x2 - x1) >> 12) * predictor.p();
    assert(xmid >= x1 && xmid < x2);
    if (y)
        x2 = xmid;
    else
        x1 = xmid + 1;
    predictor.update(y);

    // Shift equal MSB's out
    while (((x1 ^ x2) & 0xff000000) == 0) {
        putc(x2 >> 24, archive);
        x1 <<= 8;
        x2 = (x2 << 8) + 255;
    }
}

inline int Encoder::decode() {
    // Update the range
    const U32 xmid = x1 + ((x2 - x1) >> 12) * predictor.p();
    assert(xmid >= x1 && xmid < x2);
    int y = 0;
    if (x <= xmid) {
        y = 1;
        x2 = xmid;
    }
    else
        x1 = xmid + 1;
    predictor.update(y);

    // Shift equal MSB's out
    while (((x1 ^ x2) & 0xff000000) == 0) {
        x1 <<= 8;
        x2 = (x2 << 8) + 255;
        int c = getc(archive);
        if (c == EOF) c = 0;
        x = (x << 8) + c;
    }
    return y;
}
fpaq0 is a file compressor that uses an order-0 bitwise model for modeling and a 12-bit carry-less arithmetic coder for the entropy coding stage. ct[512][2] stores a pair of counters for each context, used to compute symbol probabilities. The context (order-0 in fpaq0) is built from the bits of the current byte seen so far, with a leading one (to simplify the calculations).
For an easier explanation, let's skip the EOF symbol for now. Without it, the order-0 context is used for encoding as follows (simplified):
// Full byte encoding
int cxt = 1; // context starts with a leading one
for (int i = 0; i < 8; ++i) {
    // Encoding part
    int y = ReadNextBit();
    int p = GetProbability(cxt);
    EncodeBit(y, p);

    // Model updating
    UpdateCounter(cxt, y);  // update the related counter
    cxt = (cxt << 1) | y;   // shift left and insert the new bit
}
For decoding, the context is used the same way, again without the EOF symbol (simplified):
// Full byte decoding
int cxt = 1; // context starts with a leading one
for (int i = 0; i < 8; ++i) {
    // Decoding part
    int p = GetProbability(cxt);
    int y = DecodeBit(p);
    WriteBit(y);

    // Model updating
    UpdateCounter(cxt, y);  // update the related counter
    cxt = (cxt << 1) | y;   // shift left and insert the new bit
}
fpaq0 is designed as a streaming compressor, meaning that it doesn't need to know the exact length of the input stream. So how does the decoder know when to stop? The EOF symbol is used exactly for that. Before encoding each byte, a zero bit is encoded as a flag to indicate that more data follows; a one indicates that the end of the stream has been reached, so the decoder knows when to stop. That's why the context model is 9 bits (EOF flag + 8 data bits).
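Schematically, the per-byte loop around the encoder looks like this (a simplified sketch of the fpaq0 structure, with in, e and flush() as in the original source):

// Encoding: one EOF flag bit, then the 8 data bits, for every byte.
int c;
while ((c = getc(in)) != EOF) {
    e.encode(0);                        // 0 = "another byte follows"
    for (int i = 7; i >= 0; --i)
        e.encode((c >> i) & 1);         // MSB first, matching the context shift
}
e.encode(1);                            // 1 = end of stream
e.flush();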
Now, the last part: probability calculation. fpaq0 uses just the counts of past symbols under the order-0 context to calculate the final probability.
n0 = count of 0
n1 = count of 1
p = n1 / (n0 + n1)
There are two implementation details that should be addressed: counter overflow and division by zero.
Counter overflow is addressed by halving both counts when either reaches a threshold. Since only the ratio of the counts matters for p, halving roughly preserves it.
Division by zero is addressed by adding one to each count in the formula. So,
p = (n1 + 1) / ((n0 + 1) + (n1 + 1))
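For example, plugging small counts into that formula at the coder's 12-bit scale (the same expression as p() above; scaled_p is just an illustrative name):

#include <iostream>

// The smoothed estimate, scaled to 12 bits exactly as in fpaq0's p().
int scaled_p(int n0, int n1) {
    return 4096 * (n1 + 1) / (n0 + n1 + 2);
}

int main() {
    std::cout << scaled_p(0, 0) << '\n';   // no history yet: 2048, i.e. p = 0.5
    std::cout << scaled_p(1, 3) << '\n';   // n0=1, n1=3: 4096*4/6 = 2730, i.e. p ~ 0.67
}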

Converting double array values to char values in C++

I have a matrix array of doubles that I need to store into an array of chars. These double values are guaranteed to be small enough to fit into an 8-bit char value (the maximum value in my program is 31). I've researched a bit, and what I find are solutions to store a double as a char*, in other words to convert a double to a C string. This is NOT what I seek to achieve.
// I'm dealing with a 15*4 double array
double **d_array = new double*[15];
d_array[i] = new double[4];        // done in a loop over i
// This creates a char array (that will have > 15*4 slots)
unsigned char *c_array = new unsigned char[1024];
I can iterate over the double matrix with a loop to store the values into the character array.
Say I had d_array[1][0] = 4. I want to have c_array[5] = 4. Because 4 is 00000100, it should be able to fit.
I think you should be able to just make the assignment in your loop and it will automatically be truncated and converted (you may get a compiler warning):
c_array[0] = d_array[0][0];
To be safe, you could do
c_array[0] = (char)(int)d_array[0][0];
You may want to use uint8_t, since plain char may be signed or unsigned depending on the implementation.
You can use static_cast:
uint8_t value = static_cast<uint8_t>(d_array[i][j]);
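For the whole matrix from the question, a minimal loop along those lines might be (assuming the 15*4 d_array and the c_array declared earlier; the flattening index is just one possible layout):

const int rows = 15, cols = 4;
for (int i = 0; i < rows; ++i) {
    for (int j = 0; j < cols; ++j) {
        // values are guaranteed to fit in 8 bits, so the narrowing cast is safe here
        c_array[i * cols + j] = static_cast<unsigned char>(d_array[i][j]);
    }
}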
If you want to copy the bytes of a floating point to a buffer:
uint8_t buffer[4096];
std::size_t index = 0;   // wherever in the buffer you want to write
float f_value = 3.14f;
// A float* cannot be converted with static_cast; byte-level access needs reinterpret_cast
const uint8_t* p_float = reinterpret_cast<const uint8_t*>(&f_value);
for (unsigned int i = 0; i < sizeof(float); ++i)
{
    buffer[index + i] = p_float[i];
}

Split Multiplication of integers

I need an algorithm that takes two 32-bit integers as parameters and returns their product split into two other 32-bit integers: the highest 32 bits and the lowest 32 bits.
I would try:
uint32_t p1, p2; // globals to hold the result
void mult(uint32_t x, uint32_t y){
    uint64_t r = (uint64_t)x * y;   // cast first, so the multiplication is done in 64 bits
    p1 = r >> 32;
    p2 = r & 0xFFFFFFFF;
}
Although it works [1], the existence of 64-bit integers on the machine is not guaranteed, nor is their use by the compiler.
So, what is the best way to solve this?
Note [1]: Actually, it didn't work, because my compiler does not support 64-bit integers.
Obs: Please avoid using Boost.
Just use 16-bit digits.
void multiply(uint32_t a, uint32_t b, uint32_t* h, uint32_t* l) {
    uint32_t const base = 0x10000;            // work in base 2^16
    uint32_t al = a % base, ah = a / base, bl = b % base, bh = b / base;

    *l = al * bl;                             // low * low
    *h = ah * bh;                             // high * high

    uint32_t rlh = *l / base + al * bh;       // carry from low part + first cross term
    *h += rlh / base;

    rlh = rlh % base + ah * bl;               // second cross term
    *h += rlh / base;

    *l = (rlh % base) * base + *l % base;     // reassemble the low 32 bits
}
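A quick way to check the routine: 0xFFFFFFFF * 0xFFFFFFFF should give 0xFFFFFFFE00000001 (this test harness is illustrative, not part of the original answer):

#include <cstdint>
#include <cstdio>

int main() {
    uint32_t h, l;
    multiply(0xFFFFFFFFu, 0xFFFFFFFFu, &h, &l);
    std::printf("%08X %08X\n", h, l);   // prints FFFFFFFE 00000001
}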
As I commented, you can treat each number as a binary string of length 32.
Just multiply these numbers using school arithmetic; you will get a 64-character-long string.
Then just partition it.
If you want fast multiplication, you can look into the Karatsuba multiplication algorithm.
Here is an explanation and an implementation of the Karatsuba algorithm.
I have downloaded the code and run it several times. It seems to work well. You can modify the code according to your needs.
If the unsigned long long type is supported, this should work:
void umult32(uint32_t a, uint32_t b, uint32_t* c, uint32_t* d)
{
    unsigned long long x = ((unsigned long long)a) * ((unsigned long long)b); // Thanks to @Толя
    *c = x & 0xffffffff;            // low 32 bits
    *d = (x >> 32) & 0xffffffff;    // high 32 bits
}
Logic borrowed from here.

float value issue

I am facing a problem using float: in a loop, its value gets stuck at 8388608.00.
int count = 0;
long X = 10;
cout.precision(flt::digits10);   // flt is presumably an alias for numeric_limits<float> (defined elsewhere)
cout << "Iteration #" << setw(15) << "Add" << setw(21) << "Mult" << endl;

float Start = 0.0;
float Multiplication = Addition * N;   // Addition and N are set elsewhere
long i = 1;
for (i; i <= N; i++){
    float temp = Start + Addition;
    Start = temp;
    count++;
    if(count % X == 0 && count != 0)
    {
        X *= 10;
        cout << i;
        cout << fixed << setw(30) << Start << setw(20) << fixed << i*Addition << endl;
    }
}
What should I do?
Floating point addition doesn't work well when you're adding a (relatively) small number to a (relatively) big one. It's caused by the way float is stored in memory.
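You can see the effect directly: 8388608 is 2^23, and at that magnitude the spacing between adjacent float values is 1.0, so adding anything smaller than 0.5 rounds away (a minimal demonstration, not from the question's code):

#include <iostream>

int main() {
    float x = 8388608.0f;   // 2^23; the gap between adjacent floats here is 1.0
    x += 0.25f;             // rounds back to 8388608 -- the addition is lost
    std::cout.precision(10);
    std::cout << x << '\n'; // prints 8388608
}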
You may try replacing single precision floating point (float) with double precision floating point (double) representation, but if that doesn't work, you'll probably need to implement a hack like this:
// Let's say
double OriginalAddition = 0.123;
int Addition = 1;

// You just use base math substitution:
// Addition = OriginalAddition
int temp = Start + Addition;   // you transform floating point into fixed point,
                               // with step 0.123, so 1 = 0.123

// And when displaying the result (transform back into the original floating point):
printf("%f", (double)result * OriginalAddition);
This needs a lot of thought to find a substitution that doesn't cause data loss, covers the required precision, and won't cause the int to overflow. Try googling fixed-point arithmetic in C to get a better idea of what to do.