Decoding integer value - c++

I have this program that encodes integer values:
#include "stdafx.h"
#define _SECURE_SCL_DEPRECATE 0
#include <iostream>
#include <list>
#include <vector>
#include <algorithm>
using namespace std;
template<class T>
vector<unsigned char> nToB(T );
unsigned long ByteToint(vector<unsigned char> v)
{
    unsigned long int a = 0;
    int s = v.size();
    for (int i = 0; i < s; i++)
    {
        a |= (v[s - 1 - i] << (8 * (s - i - 1)));
    }
    return a;
}
static unsigned long int Encode7Bits(unsigned long int);
int main()
{
    cout << Encode7Bits(420);
    getchar();
    return 0;
}
static unsigned long int Encode7Bits(unsigned long int x)
{
    vector<unsigned char> Result;
    do
    {
        unsigned long int tmp = x & 0x7f;
        x = x >> 7;
        if (x > 0)
            tmp |= 0x80;
        Result.push_back((unsigned char)tmp);
    } while (x > 0);
    return ByteToint(Result);
}
If the argument to this function is 420 it will return 932.
My question is whether it is possible to do the reverse operation, a decoding function that given 932, returns 420.

No, it isn't, at least not in general.
|= is non-invertible in the sense that if you write c = a | b, then given c and either a or b, you cannot recover the other operand.
The shift operators << and >> are likewise lossy, since bits shifted past the end of the value are discarded and zeros are shifted in.
You'll have better luck with XOR: if you write c = a ^ b, then c ^ b gives back a.

Related

Convert int bits to float verbatim and print them

I'm trying to just copy the contents of a 32-bit unsigned int to be used as a float. Not casting it, just reinterpreting the integer bits as a float. I'm aware memcpy is the most-suggested option for this. However, when I memcpy from uint32_t to float and print out the individual bits, I see they are quite different.
Here is my code snippet:
#include <iostream>
#include <stdint.h>
#include <cstring>
using namespace std;
void print_bits(unsigned n) {
    unsigned i;
    for (i = 1u << 31; i > 0; i /= 2)
        (n & i) ? printf("1") : printf("0");
}
union {
    uint32_t u_int;
    float u_float;
} my_union;
int main()
{
    uint32_t my_int = 0xc6f05705;
    float my_float;
    //Method 1 using memcpy
    memcpy(&my_float, &my_int, sizeof(my_float));
    //Print using function
    print_bits(my_int);
    printf("\n");
    print_bits(my_float);
    //Print using printf
    printf("\n%0x\n", my_int);
    printf("%0x\n", my_float);
    //Method 2 using unions
    my_union.u_int = 0xc6f05705;
    printf("union int = %0x\n", my_union.u_int);
    printf("union float = %0x\n", my_union.u_float);
    return 0;
}
Outputs:
11000110111100000101011100000101
11111111111111111000011111010101
c6f05705
400865
union int = c6f05705
union float = 40087b
Can someone explain what's happening? I expected the bits to match. Didn't work with a union either.
You need to change the function print_bits to
#include <climits> // for CHAR_BIT
inline int is_big_endian(void)
{
    const union
    {
        uint32_t i;
        char c[sizeof(uint32_t)];
    } e = { 0x01000000 };
    return e.c[0];
}
void print_bits(const void *src, unsigned int size)
{
    // Check the byte order (endianness) used by the platform:
    int t, c;
    if (is_big_endian())
    {
        t = 0;
        c = 1;
    }
    else
    {
        t = size - 1;
        c = -1;
    }
    for (; t >= 0 && t < (int)size; t += c)
    {
        // print the bits of each byte from the MSB to the LSB
        unsigned char i;
        unsigned char n = ((unsigned char*)src)[t];
        for (i = 1 << (CHAR_BIT - 1); i > 0; i /= 2)
        {
            printf("%d", (n & i) != 0);
        }
    }
    printf("\n");
}
and call it like this:
int a = 7;
print_bits(&a, sizeof(a));
that way there won't be any type conversion when you call print_bits and it would work for any struct size.
EDIT: I replaced 7 with CHAR_BIT - 1 because the size of a byte can differ from 8 bits.
EDIT 2: I added support for both little-endian and big-endian compilers.
Also, as @M.M suggested in the comments, you can use a template so that the call becomes print_bits(a) instead of print_bits(&a, sizeof(a)).

C++ combination function always resulting 0

Can anybody tell me why my Combination function always results in 0?
I also tried to calculate the combination directly with the factorial function, without using the permutation function, and the result is still 0.
#include <iostream>
#include <cmath>
using namespace std;
int factorial(int& n)
{
    if (n <= 1)
    {
        return 1;
    }
    else
    {
        n = n - 1;
        return (n + 1) * factorial(n);
    }
}
int permutation(int& a, int& b)
{
    int x = a - b;
    return factorial(a) / factorial(x);
}
int Combination(int& a, int& b)
{
    return permutation(a, b) / factorial(b);
}
int main()
{
    int f, s;
    cin >> f >> s;
    cout << permutation(f, s) << endl;
    cout << Combination(f, s);
    return 0;
}
Your immediate problem is that you pass a modifiable reference to your function. This means that you have Undefined Behaviour here:
return (n+1) * factorial(n);
// ^^^ ^^^
because factorial(n) modifies n, and is indeterminately sequenced with (n+1). A similar problem exists in Combination(), where b is modified twice in the same expression:
return permutation(a,b) / factorial(b);
// ^^^ ^^^
You will get correct results if you pass n, a and b by value, like this:
int factorial(int n)
Now, factorial() gets its own copy of n, and doesn't affect the n+1 you're multiplying it with.
While we're here, I should point out some other flaws in the code.
Avoid using namespace std; - it has traps for the unwary (and even for the wary!).
You can write factorial() without modifying n once you pass by value (rather than by reference):
int factorial(const int n)
{
if (n <= 1) {
return 1;
} else {
return n * factorial(n-1);
}
}
Consider using iterative code to compute factorial.
We should probably be using unsigned int, since the operations are meaningless for negative numbers. You might consider unsigned long or unsigned long long for greater range.
Computing one factorial and dividing by another is not only inefficient, it also risks unnecessary overflow (when a is as low as 13, with 32-bit int). Instead, we can multiply just down to the other number:
unsigned int permutation(const unsigned int a, const unsigned int b)
{
    if (a < b) return 0;
    unsigned int permutations = 1;
    for (unsigned int i = a; i > a - b; --i) {
        permutations *= i;
    }
    return permutations;
}
This works with much higher a, when b is small.
We didn't need the <cmath> header for anything.
Suggested fixed code:
unsigned int factorial(const unsigned int n)
{
    unsigned int result = 1;
    for (unsigned int i = 2; i <= n; ++i) {
        result *= i;
    }
    return result;
}
unsigned int permutation(const unsigned int a, const unsigned int b)
{
    if (a < b) return 0;
    unsigned int result = 1;
    for (unsigned int i = a; i > a - b; --i) {
        result *= i;
    }
    return result;
}
unsigned int combination(const unsigned int a, const unsigned int b)
{
    // C(a, b) == C(a, a - b), but it's faster to compute with small b
    if (b > a - b) {
        return combination(a, a - b);
    }
    return permutation(a, b) / factorial(b);
}
The underlying issue is that you pass by reference, so the functions modify the caller's variables instead of working on their own copies.

Store a non numeric string as binary integer [duplicate]

This question already has answers here:
Fastest way to Convert String to Binary?
(3 answers)
Closed 5 years ago.
How do I convert a string like
string a = "hello";
to its bit representation, stored in an int
int b = 0110100001100101011011000110110001101111
with a and b here being equivalent.
You cannot store a long character sequence (e.g. an std::string) inside an int (or inside a long int) because the size of a character is usually 8-bit and the length of an int is usually 32-bit, therefore a 32-bit long int can store only 4 characters.
If you limit the length of the number of characters, you can store them as the following example shows:
#include <iostream>
#include <string>
#include <climits>
int main() {
    std::string foo = "Hello";
    unsigned long bar = 0ul;
    for (std::size_t i = 0; i < foo.size() && i < sizeof(bar); ++i)
        bar |= static_cast<unsigned long>(foo[i]) << (CHAR_BIT * i);
    std::cout << "Test: " << std::hex << bar << std::endl;
}
Seems like a daft thing to do, but I think the following (untested) code should work:
#include <algorithm>
#include <climits>
#include <string>
int foo(std::string const & s) {
    int result = 0;
    for (std::size_t i = 0; i < std::min(sizeof(int), s.size()); ++i) {
        result = (result << CHAR_BIT) | s[i];
    }
    return result;
}
int output[CHAR_BIT];
char c;
int i;
for (i = 0; i < CHAR_BIT; ++i) {
    output[i] = (c >> i) & 1;
}
More info in this link: how to convert a char to binary?

c++ random number between two integers using WELL512

I see that this question may have been answered here: Random using WELL512
However, it's not quite user-friendly and doesn't provide an example of how to use it in a 'real world' piece of code.
Here is what I currently have:
#define m (unsigned long)2147483647
#define q (unsigned long)127773
#define a (unsigned int)16807
#define r (unsigned int)2836
static unsigned long seed;
void x_srandom(unsigned long initial_seed);
unsigned long x_random(void);
void x_srandom(unsigned long initial_seed)
{
    seed = initial_seed;
}
unsigned long x_random(void)
{
    int lo, hi, test;
    hi = (seed / q);
    lo = (seed % q);
    test = (a * lo - r * hi);
    if (test > 0)
        seed = test;
    else
        seed = (test + m);
    return (seed);
}
int RANDOM(int from, int to)
{
    if (from > to)
    {
        int tmp = from;
        from = to;
        to = tmp;
    }
    return ((x_random() % (to - from + 1)) + from);
}
// Real-world function using RANDOM()
void testFunction()
{
    printf("A random number between 1 and 1000 is %d \r\n", RANDOM(1, 1000));
    printf("A random number between 36 and 100 is %d \r\n", RANDOM(36, 100));
    printf("A random number between 1 and 2147483647 is %d \r\n", RANDOM(1, 2147483647));
    printf("A random number between 1 and 5 is %d \r\n", RANDOM(1, 5));
}
The above example shows everything you need to know to implement it.
I would like to use WELL512 to determine my random numbers instead of the way in which I currently am, put in a way as exampled above.
It is really time to move away from using % for generating a distribution.
In my view, you should use WELL512 as a uniform random number generator (just like mt19937 in the standard library). Wrap it in a class that exposes a typedef (or using) for result_type — in your case that would probably be unsigned long. Then you need two constexpr static member functions, min() and max(), returning 0 and ULONG_MAX. Finally, you need to expose an operator() that returns a single unsigned long.
After that you use the features in <random> together with your engine.
class well512 {
public:
    typedef unsigned long result_type;
    static constexpr result_type min() { return 0; }
    static constexpr result_type max() { return ULONG_MAX; }
    result_type operator()() { /* return some value from the underlying well512 implementation */ }
};
int main()
{
    well512 engine; // note: "well512 engine();" would declare a function, not a variable
    std::uniform_int_distribution<> dist { 1, 5 };
    for (int i = 0; i != 10; ++i)
    {
        std::cout << dist(engine) << std::endl;
    }
    return 0;
}
Here is a complete example. It does not have all the bells and whistles you may want. E.g. there is no default constructor, or a constructor from a single word. I leave that as an exercise.
#include <algorithm>
#include <array>
#include <cstdint>
#include <functional>
#include <iostream>
#include <iterator>
#include <limits>
#include <numeric>
#include <ostream>
#include <random>
#include <vector>
class seed_seq
{
public:
    template <typename InputIterator>
    seed_seq(InputIterator first, InputIterator last)
    {
        for (; first != last; ++first)
        {
            v.push_back(*first);
        }
    }
    template <typename RandomAccessIterator>
    void generate(RandomAccessIterator first, RandomAccessIterator last)
    {
        std::vector<unsigned int>::size_type i = 0;
        for (; first != last; ++first)
        {
            *first = v[i];
            if (++i == v.size()) { i = 0; }
        }
    }
private:
    std::vector<unsigned int> v;
};
class well512
{
public:
    using result_type = unsigned int;
    static result_type min() { return 0; }
    static result_type max() { return std::numeric_limits<std::uint32_t>::max(); }
    static const unsigned int state_size = 16;
    explicit well512(seed_seq& sequence) : index(0)
    { sequence.generate(std::begin(state), std::end(state)); }
    result_type operator()()
    {
        std::uint32_t z0 = state[(index + 15) & 0x0fU];
        std::uint32_t z1 = xsl(16, state[index]) ^ xsl(15, state[(index + 13) & 0x0fU]);
        std::uint32_t z2 = xsr(11, state[(index + 9) & 0x0fU]);
        state[index] = z1 ^ z2;
        std::uint32_t t = xslm(5, 0xda442d24U, state[index]);
        index = (index + state_size - 1) & 0x0fU;
        state[index] = xsl(2, z0) ^ xsl(18, z1) ^ (z2 << 28) ^ t;
        return state[index];
    }
private:
    // xor-shift-right
    std::uint32_t xsr(unsigned int shift, std::uint32_t value)
    { return value ^ (value >> shift); }
    // xor-shift-left
    std::uint32_t xsl(unsigned int shift, std::uint32_t value)
    { return value ^ (value << shift); }
    // xor-shift-left and mask
    std::uint32_t xslm(unsigned int shift, std::uint32_t mask, std::uint32_t value)
    { return value ^ ((value << shift) & mask); }
    unsigned int index;
    std::array<std::uint32_t, state_size> state;
};
int main()
{
    // Use a random device to generate 16 random words used as seed for the well512 engine
    std::random_device rd;
    std::vector<well512::result_type> seed_data;
    std::generate_n(std::back_inserter(seed_data), well512::state_size, std::ref(rd));
    seed_seq sequence(std::begin(seed_data), std::end(seed_data));
    // Create a well512 engine
    well512 engine(sequence);
    // Now apply it like any other random engine in C++11
    std::uniform_int_distribution<> dist{ 1, 6 };
    auto rand = std::function<int()>{ std::bind(std::ref(dist), std::ref(engine)) };
    // Print out some random numbers between 1 and 6 (simulating throwing a dice)
    const int n = 100;
    std::generate_n(std::ostream_iterator<int>(std::cout, " "), n, rand);
    std::cout << std::endl;
    return 0;
}
Like this, user515430?
#define m (unsigned long)2147483647
#define W 32
#define R 16
#define P 0
#define M1 13
#define M2 9
#define M3 5
#define MAT0POS(t,v) (v^(v>>t))
#define MAT0NEG(t,v) (v^(v<<(-(t))))
#define MAT3NEG(t,v) (v<<(-(t)))
#define MAT4NEG(t,b,v) (v ^ ((v<<(-(t))) & b))
#define V0 STATE[state_i ]
#define VM1 STATE[(state_i+M1) & 0x0000000fU]
#define VM2 STATE[(state_i+M2) & 0x0000000fU]
#define VM3 STATE[(state_i+M3) & 0x0000000fU]
#define VRm1 STATE[(state_i+15) & 0x0000000fU]
#define VRm2 STATE[(state_i+14) & 0x0000000fU]
#define newV0 STATE[(state_i+15) & 0x0000000fU]
#define newV1 STATE[state_i ]
#define newVRm1 STATE[(state_i+14) & 0x0000000fU]
#define FACT 2.32830643653869628906e-10
static unsigned int state_i = 0;
static unsigned int STATE[R];
static unsigned int z0, z1, z2;
void InitWELLRNG512a(unsigned int *init)
{
    int j;
    state_i = 0;
    for (j = 0; j < R; j++)
        STATE[j] = init[j];
}
double WELLRNG512a(void)
{
    z0 = VRm1;
    z1 = MAT0NEG(-16, V0) ^ MAT0NEG(-15, VM1);
    z2 = MAT0POS(11, VM2);
    newV1 = z1 ^ z2;
    newV0 = MAT0NEG(-2, z0) ^ MAT0NEG(-18, z1) ^ MAT3NEG(-28, z2) ^ MAT4NEG(-5, 0xda442d24U, newV1);
    state_i = (state_i + 15) & 0x0000000fU;
    return ((double)STATE[state_i]) * FACT;
}
int RANDOM(int from, int to)
{
    if (from > to)
    {
        int tmp = from;
        from = to;
        to = tmp;
    }
    // WELLRNG512a() already returns a value in [0, 1), so scale it directly:
    return from + (int)(WELLRNG512a() * (to - from + 1));
}
The simplest way to get a uniformly distributed random integer between two values is with floating point math.
double get_uniform_rand() {
    /* Assumes an unsigned 32-bit return value from myrnd in range 0 - 0xFFFFFFFF */
    return (double)myrnd() / (double)0xFFFFFFFF;
}
int32_t get_rnd_in_range(int32_t l, int32_t h) {
    return (int32_t)((double)l + get_uniform_rand() * (double)(h - l));
}
This is more of a C approach, as, like user515430 mentioned, there's a standard C++ way of doing this (though I personally have not used it).

convert 64-bit binary string representation of a double number back to double number in c++

I have an IEEE754 double-precision 64-bit binary string representation of a double number.
example : double value = 0.999;
Its binary representation is "0011111111101111111101111100111011011001000101101000011100101011"
I want to convert this string back to a double number in c++.
I don't want to use any external libraries or DLLs, as my program should run on any platform.
C string solution:
#include <cstring> // needed for all three solutions because of memcpy
double bitstring_to_double(const char* p)
{
    unsigned long long x = 0;
    for (; *p; ++p)
    {
        x = (x << 1) + (*p - '0');
    }
    double d;
    memcpy(&d, &x, 8);
    return d;
}
std::string solution:
#include <string>
double bitstring_to_double(const std::string& s)
{
    unsigned long long x = 0;
    for (std::string::const_iterator it = s.begin(); it != s.end(); ++it)
    {
        x = (x << 1) + (*it - '0');
    }
    double d;
    memcpy(&d, &x, 8);
    return d;
}
generic solution:
template<typename InputIterator>
double bitstring_to_double(InputIterator begin, InputIterator end)
{
    unsigned long long x = 0;
    for (; begin != end; ++begin)
    {
        x = (x << 1) + (*begin - '0');
    }
    double d;
    memcpy(&d, &x, 8);
    return d;
}
example calls:
#include <iostream>
int main()
{
const char * p = "0011111111101111111101111100111011011001000101101000011100101011";
std::cout << bitstring_to_double(p) << '\n';
std::string s(p);
std::cout << bitstring_to_double(s) << '\n';
std::cout << bitstring_to_double(s.begin(), s.end()) << '\n';
std::cout << bitstring_to_double(p + 0, p + 64) << '\n';
}
Note: I assume unsigned long long has 64 bits. A cleaner solution would be to include <cstdint> and use uint64_t instead, assuming your compiler is up to date and provides that C++11 header.
A starting point would be to iterate through the individual characters in the string and set individual bits of an existing double.
Is it really a character string of binary bits? If so, first convert to a 64-bit int. Then either use a library routine (probably there is one somewhere), or more simply, use a double aliased over the 64-bit int to convert to double.
(If it's already a 64-bit int then skip the first step.)
Ignoring byte-ordering issues, I suppose this should be a viable option:
The below has an outcome of .999 on i386 with gcc. See it live: https://ideone.com/i4ygJ
#include <cstdint>
#include <sstream>
#include <iostream>
#include <bitset>
int main()
{
    std::istringstream iss("0011111111101111111101111100111011011001000101101000011100101011");
    std::bitset<32> hi, lo;
    if (iss >> hi >> lo)
    {
        struct { uint32_t lo, hi; } words = { static_cast<uint32_t>(lo.to_ulong()),
                                              static_cast<uint32_t>(hi.to_ulong()) };
        double converted = *reinterpret_cast<double*>(&words);
        std::cout << hi << std::endl;
        std::cout << lo << std::endl;
        std::cout << converted << std::endl;
    }
}
my program would operate in any platform
I assume that includes platforms whose double format isn't IEEE. Something like this should work:
#include <assert.h>
#include <math.h>
...
int const dbl_exponent_bits = 11;
int const dbl_exponent_offset = 1023;
int const dbl_significand_bits = 52;

bool negative = (*num++ == '1');
int exponent = 0;
for (int i = 0; i < dbl_exponent_bits; ++i, ++num) {
    exponent = 2*exponent + (*num == '1' ? 1 : 0);
}
double significand = 1;
for (int i = 0; i < dbl_significand_bits; ++i, ++num) {
    significand = 2*significand + (*num == '1' ? 1 : 0);
}
assert(*num == '\0');
double result = ldexp(significand, exponent - (dbl_exponent_offset + dbl_significand_bits));
if (negative)
    result = -result;