Converting many bits to base 10 - C++

I am building a class in C++ that can be used to store arbitrarily large integers. I am storing them as binary in a vector. I need to be able to print this vector in base 10 so it is easier for a human to read. I know I could convert it to an int and then output that int, but my numbers will be much larger than any primitive type. How can I convert it directly to a string?
Here is my code so far. I am new to C++, so if you have any other suggestions, that would be great too. I need help filling in the string toBaseTenString() function.
#include <vector>
#include <string>
#include <sstream>
#include <algorithm>
using namespace std;

class BinaryInt
{
private:
    mutable bool lastDataUser = true; // mutable so the copy constructor can mark the source as no longer owning the data
    vector<bool> * data;              // bits stored least-significant first
    BinaryInt(vector<bool> * pointer)
    {
        data = pointer;
    }
public:
    BinaryInt(int n)
    {
        data = new vector<bool>();
        while(n > 0)
        {
            data->push_back(n % 2);
            n = n >> 1;
        }
    }
    BinaryInt(const BinaryInt & from)
    {
        from.lastDataUser = false;
        this->data = from.data;
    }
    ~BinaryInt()
    {
        if(lastDataUser)
            delete data;
    }
    string toBinaryString();
    string toBaseTenString();
    static BinaryInt add(BinaryInt a, BinaryInt b);
    static BinaryInt mult(BinaryInt a, BinaryInt b);
};
BinaryInt BinaryInt::add(BinaryInt a, BinaryInt b)
{
    int aSize = a.data->size();
    int bSize = b.data->size();
    int newDataSize = max(aSize, bSize);
    vector<bool> * newData = new vector<bool>(newDataSize);
    bool carry = 0;
    for(int i = 0; i < newDataSize; i++)
    {
        int sum = (i < aSize ? a.data->at(i) : 0) + (i < bSize ? b.data->at(i) : 0) + carry;
        (*newData)[i] = sum % 2;
        carry = sum >> 1;
    }
    if(carry)
        newData->push_back(carry);
    return BinaryInt(newData);
}

string BinaryInt::toBinaryString()
{
    stringstream ss;
    for(int i = data->size() - 1; i >= 0; i--)
    {
        ss << (*data)[i];
    }
    return ss.str();
}

string BinaryInt::toBaseTenString()
{
    //Not sure how to do this
}

I know you said in your OP that "my numbers will be much larger than any primitive types", but just hear me out on this.
In the past, I've used std::bitset to work with binary representations of numbers and to convert back and forth between various other representations. std::bitset is basically a fancy fixed-size std::vector<bool> with some added functionality, and it's worth reading up on if it sounds interesting. Here's a small example to show how it could work:
std::bitset<8> myByte;
myByte |= 1;  // myByte = 00000001
myByte <<= 4; // myByte = 00010000
myByte |= 1;  // myByte = 00010001
std::cout << myByte.to_string() << '\n'; // Outputs '00010001'
std::cout << myByte.to_ullong() << '\n'; // Outputs '17'
You can access the bitset by standard array notation as well. By the way, that second conversion I showed (to_ullong) converts to an unsigned long long, which I believe has a max value of 18,446,744,073,709,551,615. If you need larger values than that, good luck!
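For instance, the array-style access mentioned above looks roughly like this (a tiny illustration of std::bitset indexing, not code from the answer above):
std::bitset<8> b(0x11);  // 00010001
bool low = b[0];         // true: the least significant bit is set
b[7] = true;             // set the most significant bit -> 10010001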

Just iterate your vector<bool> backwards and accumulate the corresponding power of two whenever the bit is set:
int base10(const std::vector<bool> &value)
{
    // assumes value[0] is the most significant bit
    int result = 0;
    int bit = 1;
    for (std::vector<bool>::const_reverse_iterator b = value.rbegin(), e = value.rend(); b != e; ++b, bit <<= 1)
        result += (*b ? bit : 0);
    return result;
}
Beware! This code is only a guide; you will need to take care of int overflow if the value is big.
Hope it helps.
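If you want to sidestep the overflow problem entirely, one option is to build the decimal string directly, doubling a decimal accumulator and adding each bit. This is only a sketch (the name to_base10_string is mine, and it assumes the LSB-first vector<bool> layout from the question):
#include <string>
#include <vector>

std::string to_base10_string(const std::vector<bool> &bits)
{
    std::vector<int> digits{0};                       // decimal digits, least significant first
    for (auto it = bits.rbegin(); it != bits.rend(); ++it)
    {
        int carry = *it ? 1 : 0;                      // result = result * 2 + bit
        for (std::size_t i = 0; i < digits.size(); ++i)
        {
            int v = digits[i] * 2 + carry;
            digits[i] = v % 10;
            carry = v / 10;
        }
        if (carry)
            digits.push_back(carry);
    }
    std::string s;
    for (auto it = digits.rbegin(); it != digits.rend(); ++it)
        s += static_cast<char>('0' + *it);
    return s;
}
Because the accumulator is itself a vector of decimal digits, it never overflows no matter how many bits the number has.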

Related

Using C++, is it possible to convert an ASCII character to hex?

I have written a program that sets up a client/server TCP socket over which the user sends an integer value to the server through the use of a terminal interface. On the server side I am executing byte commands for which I need hex values stored in my array.
sprintf(mychararray, "%X", myintvalue);
This code takes my integer and prints it as a hex value into a char array. The only problem is that when I use that array to set my commands, it registers as ASCII characters. So, for example, if I send an integer equal to 3000, it is converted to 0x0BB8 and then stored as 'B' 'B' '8', which corresponds to 42 42 38 in hex. I have looked all over for a solution and have not been able to come up with one.
Finally came up with a solution to my problem. First I created an array and stored all byte values from 0x00 to 0xFF in it.
char m_list[256]; //array defined in class
m_list[0] = 0x00; //set first array index to zero
int count = 1; //count variable to step through the array and set members
while (count < 256)
{
    m_list[count] = m_list[count - 1] + 0x01; //populate array with values from 0x00 - 0xFF
    count++;
}
Next I created a function that lets me group my hex values into individual bytes and store into the array that will be processing my command.
void parse_input(char hex_array[], int i, char ans_array[])
{
    int n = 0;
    int j = 0;
    int idx = 0;
    string hex_values;
    while (n < i - 1)
    {
        if (hex_array[n] == '\0')   // comparison (==), not assignment (=)
        {
            hex_values = '0';
        }
        else
        {
            hex_values = hex_array[n];
        }
        if (hex_array[n + 1] == '\0')
        {
            hex_values += '0';
        }
        else
        {
            hex_values += hex_array[n + 1];
        }
        cout << "This is the string being used in stoi: " << hex_values; //statement for testing
        idx = stoul(hex_values, nullptr, 16);
        ans_array[j] = m_list[idx];
        n = n + 2;
        j++;
    }
}
This function will be called right after my previous code.
sprintf(mychararray, "%X", myintvalue);
parse_input(arrayA, sizeof(arrayA), arrayB);
Example: arrayA is an 8-byte char array and arrayB is a 4-byte char array. arrayA should be double the size of arrayB, since you are taking two ASCII values and making a byte pair, e.g. 'A' 'B' = 0xAB.
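The two-characters-to-one-byte idea can also be expressed more compactly; here is a minimal sketch (the helper name hex_string_to_bytes is mine, and it assumes the input holds exactly 2 * out_len hex characters):
#include <cstddef>
#include <cstdlib>

void hex_string_to_bytes(const char *hex, unsigned char *out, std::size_t out_len)
{
    for (std::size_t i = 0; i < out_len; ++i)
    {
        char pair[3] = { hex[2 * i], hex[2 * i + 1], '\0' };                   // e.g. "BB"
        out[i] = static_cast<unsigned char>(std::strtoul(pair, nullptr, 16));  // -> 0xBB
    }
}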
While I was trying to understand your question, I realized that what you need is more than a single variable. You need a class, because you want both a string that represents the hex code to be printed out and the number itself in the form of an unsigned 16-bit integer, which I deduced would be something like unsigned short int. So I created a class named hexset (I got the idea from bitset) that does all of this for you:
#include <iostream>
#include <string>

class hexset {
public:
    hexset(int num) {
        this->hexnum = (unsigned short int) num;
        this->hexstring = hexset::to_string(num);
    }
    unsigned short int get_hexnum() {return this->hexnum;}
    std::string get_hexstring() {return this->hexstring;}
private:
    static std::string to_string(int decimal) {
        int length = int_length(decimal);
        std::string ret = "";
        for (int i = (length > 1 ? int_length(decimal) - 1 : length); i >= 0; i--) {
            ret = hex_arr[decimal % 16] + ret;
            decimal /= 16;
        }
        if (ret[0] == '0') {
            ret = ret.substr(1, ret.length() - 1);
        }
        return "0x" + ret;
    }
    static int int_length(int num) {
        int ret = 1;
        while (num >= 10) { // >= so that e.g. 10 counts as two digits
            num /= 10;
            ++ret;
        }
        return ret;
    }
    static constexpr char hex_arr[16] = {'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'};
    unsigned short int hexnum;
    std::string hexstring;
};
constexpr char hexset::hex_arr[16];

int main() {
    int number_from_file = 3000; // This number is in all forms technically; hex is just another way to represent it.
    hexset hex(number_from_file);
    std::cout << hex.get_hexstring() << ' ' << hex.get_hexnum() << std::endl;
    return 0;
}
I assume you'll probably want to do some operator overloading to make it so you can add and subtract from this number or assign new numbers or do any kind of mathematical or bit shift operation.
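For example, a free operator+ might be a starting point (a sketch only; it just reuses the class's existing constructor and accessors, and the overload itself is not part of the class above):
hexset operator+(hexset a, hexset b) {
    // taken by value because the accessors above are not const-qualified
    return hexset(a.get_hexnum() + b.get_hexnum());
}
With that in place, hexset(3000) + hexset(24) would carry the value 3024 and render as 0xBD0.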

How to convert a big integer to a string

I have a vector with the digits of a number; the vector represents a big integer in a positional system with base 2^32. For example:
vector <unsigned> vec = {453860625, 469837947, 3503557200, 40}
This vector represent this big integer:
base = 2 ^ 32
3233755723588593872632005090577 = 40 * base ^ 3 + 3503557200 * base ^ 2 + 469837947 * base + 453860625
How do I get this decimal representation as a string?
Here is an inefficient way to do what you want, get a decimal string from a vector of word values representing an integer of arbitrary size.
I would have preferred to implement this as a class, for better encapsulation and so math operators could be added, but to better comply with the question, this is just a bunch of free functions for manipulating std::vector<unsigned> objects. This does use a typedef BiType as an alias for std::vector<unsigned> however.
Functions for doing the binary division make up most of this code. Much of it duplicates what can be done with std::bitset, but for bitsets of arbitrary size, as vectors of unsigned words. If you want to improve efficiency, plug in a division algorithm which does per-word operations, instead of per-bit. Also, the division code is general-purpose, when it is only ever used to divide by 10, so you could replace it with special-purpose division code.
The code generally assumes a vector of unsigned words and also that the base is the maximum unsigned value, plus one. I left a comment wherever things would go wrong for smaller bases or bases which are not a power of 2 (binary division requires base to be a power of 2).
Also, I only tested for 1 case, the one you gave in the OP -- and this is new, unverified code, so you might want to do some more testing. If you find a problem case, I'll be happy to fix the bug here.
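For reference, the special-purpose, per-word division by 10 mentioned above might look something like this (a sketch only; it assumes 32-bit unsigned words and the same least-significant-word-first layout as BiType below, and the name divmod10 is mine):
#include <cstdint>
#include <vector>

// Divide a base-2^32 number (least significant word first) by 10 in one pass,
// returning the remainder, which is the next decimal digit.
unsigned divmod10(std::vector<unsigned>& words) {
    std::uint64_t rem = 0;
    for (std::size_t i = words.size(); i-- > 0; ) {
        std::uint64_t cur = (rem << 32) | words[i];
        words[i] = static_cast<unsigned>(cur / 10);
        rem = cur % 10;
    }
    while (!words.empty() && words.back() == 0) words.pop_back(); // drop leading zero words
    return static_cast<unsigned>(rem);
}
This replaces one call to the per-bit divide() below with a single pass over the words, which is the efficiency improvement the paragraph above alludes to.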
#include <iostream>
#include <string>
#include <vector>
#include <algorithm>

namespace bigint {

using BiType = std::vector<unsigned>;

// cmp compares a with b, returning 1:a>b, 0:a==b, -1:a<b
int cmp(const BiType& a, const BiType& b) {
    const auto max_size = std::max(a.size(), b.size());
    for(auto i=max_size-1; i+1; --i) {
        const auto wa = i < a.size() ? a[i] : 0;
        const auto wb = i < b.size() ? b[i] : 0;
        if(wa != wb) { return wa > wb ? 1 : -1; }
    }
    return 0;
}

bool is_zero(BiType& bi) {
    for(auto w : bi) { if(w) return false; }
    return true;
}

// canonize removes leading zero words
void canonize(BiType& bi) {
    const auto size = bi.size();
    if(!size || bi[size-1]) return;
    for(auto i=size-2; i+1; --i) {
        if(bi[i]) {
            bi.resize(i + 1);
            return;
        }
    }
    bi.clear();
}

// subfrom subtracts b from a, modifying a
// a >= b must be guaranteed by caller
void subfrom(BiType& a, const BiType& b) {
    unsigned borrow = 0;
    for(std::size_t i=0; i<b.size(); ++i) {
        if(b[i] || borrow) {
            // TODO: handle error if i >= a.size()
            const auto w = a[i] - b[i] - borrow;
            // this relies on the automatic w = w (mod base),
            // assuming unsigned max is base-1
            // if this is not the case, w must be set to w % base here
            borrow = w >= a[i];
            a[i] = w;
        }
    }
    for(auto i=b.size(); borrow; ++i) {
        // TODO: handle error if i >= a.size()
        borrow = !a[i];
        --a[i];
        // a[i] must be set modulo base here too
        // (this is automatic when base is unsigned max + 1)
    }
}

// binary division and its helpers: these require base to be a power of 2
// hi_bit_set is base/2
// the definition assumes CHAR_BIT == 8
const auto hi_bit_set = unsigned(1) << (sizeof(unsigned) * 8 - 1);

// shift_right_1 divides bi by 2, truncating any fraction
void shift_right_1(BiType& bi) {
    unsigned carry = 0;
    for(auto i=bi.size()-1; i+1; --i) {
        const auto next_carry = (bi[i] & 1) ? hi_bit_set : 0;
        bi[i] >>= 1;
        bi[i] |= carry;
        carry = next_carry;
    }
    // if carry is nonzero here, 1/2 was truncated from the result
    canonize(bi);
}

// shift_left_1 multiplies bi by 2
void shift_left_1(BiType& bi) {
    unsigned carry = 0;
    for(std::size_t i=0; i<bi.size(); ++i) {
        const unsigned next_carry = !!(bi[i] & hi_bit_set);
        bi[i] <<= 1; // assumes high bit is lost, i.e. base is unsigned max + 1
        bi[i] |= carry;
        carry = next_carry;
    }
    if(carry) { bi.push_back(1); }
}

// sets an indexed bit in bi, growing the vector when required
void set_bit_at(BiType& bi, std::size_t index, bool set=true) {
    std::size_t widx = index / (sizeof(unsigned) * 8);
    std::size_t bidx = index % (sizeof(unsigned) * 8);
    if(bi.size() < widx + 1) { bi.resize(widx + 1); }
    if(set) { bi[widx] |= unsigned(1) << bidx; }
    else { bi[widx] &= ~(unsigned(1) << bidx); }
}

// divide divides n by d, returning the result and leaving the remainder in n
// this is implemented using binary division
BiType divide(BiType& n, BiType d) {
    if(is_zero(d)) {
        // TODO: handle divide by zero
        return {};
    }
    std::size_t shift = 0;
    while(cmp(n, d) == 1) {
        shift_left_1(d);
        ++shift;
    }
    BiType result;
    do {
        if(cmp(n, d) >= 0) {
            set_bit_at(result, shift);
            subfrom(n, d);
        }
        shift_right_1(d);
    } while(shift--);
    canonize(result);
    canonize(n);
    return result;
}

std::string get_decimal(BiType bi) {
    std::string dec_string;
    // repeat division by 10, using the remainder as a decimal digit
    // this will build a string with digits in reverse order, so
    // before returning, it will be reversed to correct this.
    do {
        const auto next_bi = divide(bi, {10});
        const char digit_value = static_cast<char>(bi.size() ? bi[0] : 0);
        dec_string.push_back('0' + digit_value);
        bi = next_bi;
    } while(!is_zero(bi));
    std::reverse(dec_string.begin(), dec_string.end());
    return dec_string;
}

} // namespace bigint

int main() {
    bigint::BiType my_big_int = {453860625, 469837947, 3503557200, 40};
    auto dec_string = bigint::get_decimal(my_big_int);
    std::cout << dec_string << '\n';
}
Output:
3233755723588593872632005090577

Large factorial series

I have to print the series:
n*(n-1), n*(n-1)*(n-2), n*(n-1)*(n-2)*(n-3), n*(n-1)*(n-2)*(n-3)*(n-4), ..., n!
The problem is the large value of n: it can go up to 37, and n! will obviously overflow the built-in integer types.
I just can't get started. How would you have tackled this situation if you were in my place?
It depends on the language you are using. Some languages automatically switch to a large integer package when numbers get too large for the machine's native integer representation. In other languages, just use a large integer library, which should handle 37! easily.
Wikipedia has a list of arbitrary-precision arithmetic libraries for some languages. There are also lots of other resources on the web.
This 3-year-old problem looked fun.
Simply create a routine to "multiply" a decimal string by a factor. Not highly efficient, yet it gets the job done.
#include <stdlib.h>
#include <string.h>

void mult_array(char *x, unsigned factor) {
    unsigned accumulator = 0;
    size_t n = strlen(x);
    size_t i = n;
    while (i > 0) {
        i--;
        accumulator += (unsigned)(x[i]-'0')*factor;
        x[i] = (char) (accumulator%10 + '0');
        accumulator /= 10;
    }
    while (accumulator > 0) {
        memmove(x+1, x, ++n);
        x[i] = (char) (accumulator%10 + '0');
        accumulator /= 10;
    }
}
#include <stdio.h>

void AS_Factorial(unsigned n) {
    char buf[1000]; // Right-size buffer (problem for another day)
    sprintf(buf, "%u", n);
    fputs(buf, stdout);
    while (n>1) {
        n--;
        mult_array(buf, n);
        printf(",%s", buf);
    }
    puts("");
}
Sample usage and output
int main(void) {
    AS_Factorial(5);
    AS_Factorial(37);
    return 0;
}
5,20,60,120,120
37,1332,46620,1585080,52307640,1673844480,...,13763753091226345046315979581580902400000000
I have just tried BigInteger in Java and it works.
Working code for demonstration purpose:
import java.math.BigInteger;

public class Factorial {
    public static int[] primes = {2,3,5,7,11,13,17,19,23,29,31,37};

    public static BigInteger computeFactorial(int n) {
        if (n==0) {
            return new BigInteger(String.valueOf(1));
        } else {
            return new BigInteger(String.valueOf(n)).multiply(computeFactorial(n-1));
        }
    }

    public static String getPowers(int n){
        BigInteger input = computeFactorial(n);
        StringBuilder sb = new StringBuilder();
        int count = 0;
        for (int i = 0; i < primes.length && input.intValue() != 1;) {
            BigInteger[] result = input.divideAndRemainder(new BigInteger(String.valueOf(primes[i])));
            if (result[1].intValue() == 0) {
                input = input.divide(new BigInteger(String.valueOf(primes[i])));
                count++;
                if (input.intValue() == 1) {sb.append(primes[i] + "(" + count + ") ");}
            } else {
                if (count!=0)
                    sb.append(primes[i] + "(" + count + ") ");
                count = 0;
                i++;
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(getPowers(37));
    }
}
Feel free to use it without worrying about copyright if you want.
Update: I didn't realize you were using C++ until now. In that case, you can give boost BigInteger a try.
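Boost's big-integer support now ships as Boost.Multiprecision; assuming that library and its cpp_int type, a minimal sketch of the series might look like this:
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main() {
    namespace mp = boost::multiprecision;
    int n = 37;
    mp::cpp_int acc = n;
    for (int k = n - 1; k >= 1; --k) {
        acc *= k;                                 // n*(n-1), n*(n-1)*(n-2), ..., n!
        std::cout << acc << (k > 1 ? "," : "\n");
    }
    return 0;
}
cpp_int is header-only, so no extra linking is required.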
You may use a big integer type, though it still has some limitations; even so, this data type can handle very large values. The values a big integer can hold range from
-9223372036854775808 to 9223372036854775807 for the signed big integer, and
0 to 18446744073709551615 for the unsigned big integer.
If you really plan to compute values bigger than that data type can hold, why not try the GMP library?
As from what the site says, "GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers. There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a regular interface." - gmplib.org
You could implement your own big-integer type if you are not permitted to use any third-party libraries. You can do something like this:
#include <iostream>
#include <iomanip>
#include <vector>
using namespace std;

const int base = 1000 * 1000 * 1000; // base value, should be a power of 10
const int lbase = 9; // log10(base): decimal digits per element

void output_biginteger(vector<int>& a) {
    cout << a.back();
    for (int i = (int)a.size() - 2; i >= 0; --i)
        cout << setw(lbase) << setfill('0') << a[i];
    cout << endl;
}

void multiply_biginteger_by_integer(vector<int>& a, int b) {
    int carry = 0;
    for (int i = 0; i < (int)a.size(); ++i) {
        long long cur = (long long)a[i] * b + carry;
        carry = cur / base;
        a[i] = cur % base;
    }
    if (carry > 0) {
        a.push_back(carry);
    }
}

int main() {
    int n = 37; // input your n here
    vector<int> current(1, n);
    for (int i = n - 1; i >= 1; --i) { // multiply by n-1, n-2, ..., 1
        multiply_biginteger_by_integer(current, i);
        output_biginteger(current);
    }
    return 0;
}

Convert integer to binary and store it in an integer array of specified size: C++

I want to convert an integer to a binary string and then store each bit of that string in an element of an integer array of a given size. I am sure that the input integer's binary representation won't exceed the size of the specified array. How can I do this in C++?
Pseudo code:
int value = ???? // assuming a 32 bit int
int i;
for (i = 0; i < 32; ++i) {
    array[i] = (value >> i) & 1;
}
#include <iostream>
#include <iterator>
#include <climits> // CHAR_BIT

template<class output_iterator>
void convert_number_to_array_of_digits(const unsigned number,
                                       output_iterator first, output_iterator last)
{
    const unsigned number_bits = CHAR_BIT*sizeof(int);
    //extract bits one at a time
    for(unsigned i=0; i<number_bits && first!=last; ++i) {
        const unsigned shift_amount = number_bits-i-1;
        const unsigned this_bit = (number>>shift_amount)&1;
        *first = this_bit;
        ++first;
    }
    //pad the rest with zeros
    while(first != last) {
        *first = 0;
        ++first;
    }
}

int main() {
    int number = 413523152;
    int array[32];
    convert_number_to_array_of_digits(number, std::begin(array), std::end(array));
    for(int i=0; i<32; ++i)
        std::cout << array[i] << ' ';
}
You could use C++'s bitset library, as follows.
#include<iostream>
#include<bitset>
using namespace std;

int main()
{
    int N; //input number in base 10
    cin>>N;
    int O[32]; //The output array
    bitset<32> A=N; //A will hold the binary representation of N
    for(int i=0,j=31;i<32;i++,j--)
    {
        //Assigning the bits one by one.
        O[i]=A[j];
    }
    return 0;
}
A couple of points to note here:
First, 32 in the bitset declaration statement tells the compiler that you want 32 bits to represent your number, so even if your number takes fewer bits to represent, the bitset variable will have 32 bits, possibly with many leading zeroes.
Second, bitset is a really flexible way of handling binary: you can construct it from a string or from a number, and you can use it as an array or as a string. It's a really handy library.
You can print out the bitset variable A as
cout<<A;
and see how it works.
You can do it like this:
while (input != 0) {
    if (input & 1)
        result[index] = 1;
    else
        result[index] = 0;
    input >>= 1; // dividing by two
    index++;
}
As Mat mentioned above, an int is already a bit-vector (using bitwise operations, you can check each bit). So, you can simply try something like this:
// Note: This depends on the endianness of your machine
int x = 0xdeadbeef; // Your integer?
int arr[sizeof(int)*CHAR_BIT];
for(int i = 0 ; i < sizeof(int)*CHAR_BIT ; ++i) {
    arr[i] = (x & (0x01 << i)) ? 1 : 0; // Take the i-th bit
}
Decimal to Binary: Size independent
Two ways; both store the binary representation into a dynamically allocated array bits (most significant bit first, least significant bit last).
First Method:
#include <stdlib.h> // for calloc
#include <limits.h> // for CHAR_BIT

int* binary(int dec){
    int* bits = calloc(sizeof(int) * CHAR_BIT, sizeof(int));
    if(bits == NULL) return NULL;
    int i = 0;
    // conversion
    int left = sizeof(int) * CHAR_BIT - 1;
    for(i = 0; left >= 0; left--, i++){
        bits[i] = !!(dec & ( 1u << left ));
    }
    return bits;
}
Second Method:
#include <stdlib.h> // for calloc
#include <limits.h> // for CHAR_BIT

int* binary(unsigned int num)
{
    unsigned int mask = 1u << ((sizeof(int) * CHAR_BIT) - 1);
    //mask = 1000 0000 ... 0000 (only the highest bit set)
    int* bits = calloc(sizeof(int) * CHAR_BIT, sizeof(int));
    if(bits == NULL) return NULL;
    int i = 0;
    //conversion
    while(mask > 0){
        if((num & mask) == 0 )
            bits[i] = 0;
        else
            bits[i] = 1;
        mask = mask >> 1 ; // Right Shift
        i++;
    }
    return bits;
}
I know it doesn't add as many zeros as you might wish for positive numbers, but for negative binary numbers it works pretty well. I just wanted to post a solution for once :)
int BinToDec(int Value, int Padding = 8)
{
    int Bin = 0;
    for (int I = 1, Pos = 1; I < (Padding + 1); ++I, Pos *= 10)
    {
        Bin += ((Value >> (I - 1)) & 1) * Pos;
    }
    return Bin;
}
This is what I use. It also lets you give the number of bits that will be in the final vector, and it fills any unused bits with leading 0s.
std::vector<int> to_binary(int num_to_convert_to_binary, int num_bits_in_out_vec)
{
    std::vector<int> r;
    // make binary vec of minimum size backwards (LSB at .begin() and MSB at .end())
    while (num_to_convert_to_binary > 0)
    {
        //cout << " top of loop" << endl;
        if (num_to_convert_to_binary % 2 == 0)
            r.push_back(0);
        else
            r.push_back(1);
        num_to_convert_to_binary = num_to_convert_to_binary / 2;
    }
    while (r.size() < (size_t)num_bits_in_out_vec)
        r.push_back(0);
    return r;
}

How to check if the binary representation of an integer is a palindrome?

How to check if the binary representation of an integer is a palindrome?
Hopefully correct:
_Bool is_palindrome(unsigned n)
{
    unsigned m = 0;
    for(unsigned tmp = n; tmp; tmp >>= 1)
        m = (m << 1) | (tmp & 1);
    return m == n;
}
Since you haven't specified a language in which to do it, here's some C code (not the most efficient implementation, but it should illustrate the point):
#include <limits.h>  /* CHAR_BIT */
#include <stdbool.h>

/* WORDSIZE was left undefined in the original; the natural definition is the bit width of unsigned int */
#define WORDSIZE (sizeof(unsigned int) * CHAR_BIT)

/* flip n */
unsigned int flip(unsigned int n)
{
    int i, newInt = 0;
    for (i=0; i<WORDSIZE; ++i)
    {
        newInt += (n & 0x0001);
        newInt <<= 1;
        n >>= 1;
    }
    return newInt;
}

bool isPalindrome(int n)
{
    int flipped = flip(n);
    /* shift to remove trailing zeroes (note: n == 0 would loop forever here) */
    while (!(flipped & 0x0001))
        flipped >>= 1;
    return n == flipped;
}
EDIT: fixed for your 10001 example.
Create a 256-entry table containing each char and its bit-reversed char.
Given a 4-byte integer,
take the first byte, look it up in the table, and compare the result to the last byte of the integer.
If they differ, it is not a palindrome; if they are the same, repeat with the middle bytes.
If those differ, it is not a palindrome; otherwise it is.
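A sketch of that table approach, assuming the whole 32-bit pattern (leading zeros included) is what should read the same in both directions; the names rev_table, init_rev_table, and is_palindrome32 are mine:
#include <cstdint>

static std::uint8_t rev_table[256];

// fill the 256-entry bit-reversal table once at startup
static void init_rev_table()
{
    for (int v = 0; v < 256; ++v) {
        std::uint8_t r = 0;
        for (int b = 0; b < 8; ++b)
            if (v & (1 << b))
                r |= static_cast<std::uint8_t>(1 << (7 - b));
        rev_table[v] = r;
    }
}

static bool is_palindrome32(std::uint32_t n)
{
    std::uint8_t b0 = n >> 24, b1 = n >> 16, b2 = n >> 8, b3 = n;
    // outer byte pair, then the middle byte pair
    return rev_table[b0] == b3 && rev_table[b1] == b2;
}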
Plenty of nice solutions here. Let me add one that is not the most efficient, but very readable, in my opinion:
#include <stdint.h>
#include <stdbool.h>

/* Reverses the digits of num assuming the given base. */
uint64_t
reverse_base(uint64_t num, uint8_t base)
{
    uint64_t rev = num % base;
    for (; num /= base; rev = rev * base + num % base);
    return rev;
}

/* Tells whether num is palindrome in the given base. */
bool
is_palindrome_base(uint64_t num, uint8_t base)
{
    /* A palindrome is equal to its reverse. */
    return num == reverse_base(num, base);
}

/* Tells whether num is a binary palindrome. */
bool
is_palindrome_bin(uint64_t num)
{
    /* A binary palindrome is a palindrome in base 2. */
    return is_palindrome_base(num, 2);
}
The following should be adaptable to any unsigned type. (Bit operations on signed types tend to be fraught with problems.)
bool test_pal(unsigned n)
{
    unsigned t = 0;
    for(unsigned bit = 1; bit && bit <= n; bit <<= 1)
        t = (t << 1) | !!(n & bit);
    return t == n;
}
int palindrome(int num)
{
    int rev = 0;
    int n = num;                 // work on a copy so the original is kept for the final comparison
    while (n != 0)
    {
        rev = (rev << 1) | (n & 1);
        n >>= 1;                 // was "num >> 1;", which discarded the shifted result
    }
    return (rev == num) ? 1 : 0; // comparison (==), not assignment (=)
}
I always have a palindrome function that works with Strings, that returns true if it is, false otherwise, e.g. in Java. The only thing I need to do is something like:
int number = 245;
String test = Integer.toString(number, 2);
if (isPalindrome(test)) {
    ...
}
A generic version:
#include <iostream>
#include <climits> // CHAR_BIT
using namespace std;

template <class T>
bool ispalindrome(T x) {
    if (x == 0) return true;
    size_t f = 0, l = (CHAR_BIT * sizeof x) - 1;
    // strip leading zeros
    while (!(x & (T(1) << l))) l--;
    for (; f < l; ++f, --l) {
        bool left = (x & (T(1) << f)) > 0;
        bool right = (x & (T(1) << l)) > 0;
        if (left != right) return false;
    }
    return true;
}

int main() {
    cout << ispalindrome(17u) << "\n"; // prints 1: 10001 reads the same both ways (use an unsigned value so the shifts are well defined)
}
I think the best approach is to start at the ends and work your way inward, i.e. compare the first bit and the last bit, the second bit and the second to last bit, etc, which will have O(N/2) where N is the size of the int. If at any point your pairs aren't the same, it isn't a palindrome.
bool IsPalindrome(int n) {
    bool palindrome = true;
    size_t len = sizeof(n) * 8;
    for (size_t i = 0; i < len / 2; i++) {
        bool left_bit = !!(n & (1u << (len - i - 1)));
        bool right_bit = !!(n & (1u << i));
        if (left_bit != right_bit) {
            palindrome = false;
            break;
        }
    }
    return palindrome;
}
Sometimes it's good to report a failure too.
There are lots of great answers here about the obvious way to do it, by analyzing the bit pattern in some form or other. I got to wondering, though, whether there were any mathematical solutions. Are there properties of palindromic numbers that we might take advantage of?
So I played with the math a little bit, but the answer should really have been obvious from the start. It's trivial to prove that all binary palindromic numbers must be either odd or zero: once leading zeros are dropped, the highest bit is 1, so for the pattern to read the same backwards the lowest bit must also be 1, which makes the number odd. That's about as far as I was able to get with it.
A little research showed no such approach for decimal palindromes, so it's either a very difficult problem or not solvable via a formal system. It might be interesting to prove the latter...
public static bool IsPalindrome(int n) {
for (int i = 0; i < 16; i++) {
if (((n >> i) & 1) != ((n >> (31 - i)) & 1)) {
return false;
}
}
return true;
}
bool PaLInt (unsigned int i, unsigned int bits)
{
    unsigned int t = i;
    unsigned int x = 0;
    while(i)
    {
        x = x << bits;
        x = x | (i & ((1<<bits) - 1));
        i = i >> bits;
    }
    return x == t;
}
Call PaLInt(i, 1) for binary palindromes.
Call PaLInt(i, 3) for octal palindromes.
Call PaLInt(i, 4) for hex palindromes.
I know this question was posted 2 years ago, but here is a solution that doesn't depend on the word size at all:
int temp = 0;
int i = num;
while (1)
{   // let's say num is the number which has to be checked
    if (i & 0x1)
    {
        temp = temp + 1;
    }
    i = i >> 1;
    if (i) {
        temp = temp << 1;
    }
    else
    {
        break;
    }
}
return temp == num;
In JAVA there is an easy way if you understand basic binary airthmetic, here is the code:
public static void main(String []args){
    Integer num=73;
    String bin=getBinary(num);
    String revBin=reverse(bin);
    Integer revNum=getInteger(revBin);
    System.out.println("Is Palindrome: "+((num^revNum)==0));
}

static String getBinary(int c){
    return Integer.toBinaryString(c);
}

static Integer getInteger(String c){
    return Integer.parseInt(c,2);
}

static String reverse(String c){
    return new StringBuilder(c).reverse().toString();
}
#include <iostream>
#include <math.h>
using namespace std;

int main()
{
    unsigned int n = 134217729;
    unsigned int bits = floor(log(n)/log(2)+1);
    cout<< "Number of bits:" << bits << endl;
    unsigned int i=0;
    bool isPal = true;
    while(i<(bits/2))
    {
        if(((n & (unsigned int)pow(2,bits-i-1)) && (n & (unsigned int)pow(2,i)))
           ||
           (!(n & (unsigned int)pow(2,bits-i-1)) && !(n & (unsigned int)pow(2,i))))
        {
            i++;
            continue;
        }
        else
        {
            cout<<"Not a palindrome" << endl;
            isPal = false;
            break;
        }
    }
    if(isPal)
        cout<<"Number is binary palindrome" << endl;
}
The solution below works in python:
def CheckBinPal(b):
    b = str(bin(b))
    if b[2:] == b[:1:-1]:
        return True
    else:
        return False
where b is the integer
If you're using Clang, you can make use of some __builtins.
bool binaryPalindrome(const uint32_t n) {
    return n == __builtin_bitreverse32(n << __builtin_clz(n));
}
One thing to note is that __builtin_clz(0) is undefined so you'll need to check for zero. If you're compiling on ARM using Clang (next generation mac), then this makes use of the assembly instructions for reverse and clz (compiler explorer).
clz w8, w0
lsl w8, w0, w8
rbit w8, w8
cmp w8, w0
cset w0, eq
ret
x86 has instructions for clz (sort of) but not reversing. Still, Clang will emit the fastest code possible for reversing on the target architecture.
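For completeness, a sketch of that zero check (my addition, not from the original answer):
#include <cstdint>

bool binaryPalindromeSafe(const uint32_t n) {
    if (n == 0) return true; // __builtin_clz(0) is undefined, so handle zero up front
    return n == __builtin_bitreverse32(n << __builtin_clz(n));
}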
Javascript Solution
function isPalindrome(num) {
    const binaryNum = num.toString(2);
    console.log(binaryNum);
    for (let i = 0, j = binaryNum.length - 1; i <= j; i++, j--) {
        if (binaryNum[i] !== binaryNum[j]) return false;
    }
    return true;
}
console.log(isPalindrome(0))