I am writing a program that needs to take an array of size n and convert it into its hex value as follows:
int a[] = { 0, 1, 1, 0 };
I would like to take each value of the array to represent it as binary and convert it to a hex value. In this case:
0x6000000000000000; // 0110...0
It also has to be padded on the right with 0s to make 64 bits (I am on a 64-bit machine).
Or I could also take the array elements, convert to decimal and then to hexadecimal, if that's easier... What would be the best way of doing this in C++?
(this is not homework)
The following assumes that your a[] will only ever use 0 and 1 to represent bits. You'll also need to know the array length; sizeof(a)/sizeof(int) can be used in this case, but not for heap-allocated arrays. Also, result will need to be a 64-bit integer type.
unsigned long long result = 0;
for (int c = 0; c < array_len; c++)
    result |= (unsigned long long)a[c] << (63 - c);
If you want to see what it looks like in hex, you can use (s)printf( "%I64x", result ) on MSVC, or the standard "%llx" format for an unsigned long long.
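For reference, a minimal runnable sketch of this approach (the variable names and the standard "%llx" format are my own choices, not from the answer above):

#include <cstdio>

int main() {
    int a[] = { 0, 1, 1, 0 };
    const int array_len = sizeof(a) / sizeof(a[0]);

    unsigned long long result = 0;
    for (int c = 0; c < array_len; c++)
        result |= (unsigned long long)a[c] << (63 - c);  // a[0] becomes bit 63 (leftmost)

    std::printf("0x%016llx\n", result);  // prints 0x6000000000000000
    return 0;
}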
std::bitset<64>::to_ullong() (or to_ulong(), if unsigned long is 64 bits on your platform) might be your friend. The order will be backwards relative to your array: bit 0 of a bitset is the least significant bit, so index 3 is fetched by right-shifting the word by 3 and masking with 1. You can remedy that by subtracting the desired index from 63.
#include <bitset>
#include <iomanip>
#include <iostream>

std::bitset<64> bits;
for ( unsigned index = 0; index < sizeof a / sizeof *a; ++ index ) {
    bits[ 63 - index ] = a[ index ];
}
std::cout << std::hex << std::setw(16) << std::setfill('0')
          << bits.to_ullong() << std::endl;
unsigned long long answer = 0;
for (unsigned i = 0; i < sizeof(a)/sizeof(a[0]); ++i)
{
    answer = (answer << 1) | a[i];
}
answer <<= (64 - sizeof(a)/sizeof(a[0]));
Assumptions: a[] has at most 64 entries, is defined at compile time, and only contains 0 or 1. Being defined at compile time sidesteps the issue of shifting left by 64, since you cannot declare an empty array.
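A minimal sketch wrapping that snippet into a complete program; the hex printing at the end is my addition:

#include <iostream>

int main() {
    int a[] = { 0, 1, 1, 0 };
    const unsigned count = sizeof(a) / sizeof(a[0]);

    unsigned long long answer = 0;
    for (unsigned i = 0; i < count; ++i)
        answer = (answer << 1) | a[i];   // accumulate bits, most significant first
    answer <<= (64 - count);             // pad with zeros on the right to 64 bits

    std::cout << std::hex << answer << std::endl;  // prints 6000000000000000
    return 0;
}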
Here's a rough answer:
unsigned long long ConvertBitArrayToInt64(const int a[], int len)
{
    unsigned long long answer = 0;
    for(int i=0; i<64; ++i)
    {
        if (i < len)   // bits we actually have
        {
            answer = answer << 1 | a[i];
        }
        else           // pad with zeros on the right
        {
            answer = answer << 1;
        }
    }
    return answer;
}
unsigned char hexValues[16];
for(int i = 0; i < 16; i++)
{
    hexValues[i] = a[i*4] * 8 + a[i*4 + 1] * 4 + a[i*4 + 2] * 2 + a[i*4 + 3];
}
This will give you an array of bytes where each byte represents one of your hex values.
Note that each entry in hexValues will be a value from 0 to 15.
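To turn that into a printable hex string, one possible follow-up (this helper is mine, and it assumes hexValues was filled by the loop above from a 64-entry a[]):

#include <iostream>

// assumes hexValues[16] holds one value in 0..15 per hex digit, leftmost first
void printHex(const unsigned char hexValues[16]) {
    const char* digits = "0123456789ABCDEF";
    std::cout << "0x";
    for (int i = 0; i < 16; i++)
        std::cout << digits[hexValues[i]];  // map each nibble to its hex character
    std::cout << std::endl;                 // e.g. 0x6000000000000000
}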
void decimaltobin()
{
binaryNum = 0;
m = 1;
while (num != 0)
{
rem = num % 2;
num /= 2;
binaryNum += rem * m;
m *= 10;
}
}
Just wondering if there was an easy fix to get this function to print an 8-bit binary number instead of a 4-bit number, e.g. 0000 0101 instead of 0101.
As mentioned in the comments, your code does not print anything yet and the data type of binaryNum is not clear. Here is a working solution.
#include <iostream>
using namespace std;
void decToBinary(int n)
{
// array to store binary number
int binaryNum[32];
// counter for binary array
int i = 0;
while (n > 0) {
// storing remainder in binary array
binaryNum[i] = n % 2;
n = n / 2;
i++;
}
// printing the required number of zeros
int zeros = 8 - i;
for(int m = 0; m < zeros; m++){
cout<<0;
}
// printing binary array in reverse order
for (int j = i - 1; j >= 0; j--)
cout << binaryNum[j];
}
// Driver program to test above function
int main()
{
int n = 17;
decToBinary(n);
return 0;
}
The code implements the following:
Store the remainder when the number is divided by 2 in an array.
Divide the number by 2
Repeat the above two steps while the number is greater than zero.
Print the required number of zeros, that is, 8 minus the length of the binary number. Note that this code will only work for numbers that can be expressed in 8 bits.
Now print the array in reverse order.
Maybe I am missing your reason, but why do you want to code this from scratch instead of using the standard library?
You can use standard C++ without having to code a conversion from scratch, for instance with std::bitset<NB_OF_BITS>.
Here is a simple example:
#include <iostream>
#include <bitset>
std::bitset<8> decimalToBin(int numberToConvert)
{
return std::bitset<8>(numberToConvert);
}
int main() {
int a = 4, b=8, c=12;
std::cout << decimalToBin(a)<< std::endl;
std::cout << decimalToBin(b)<< std::endl;
std::cout << decimalToBin(c)<< std::endl;
}
It outputs:
00000100
00001000
00001100
Problem: Given an integer led as input, create a bitset (16 bits) with led bits set to 1. Then, create the following sequence (assume in this case led = 7):
0000000001111111
0000000000111111
0000000001011111
0000000001101111
0000000001110111
0000000001111011
0000000001111101
0000000001111110
Note that it is a "zero" that is moving to the right. The code I wrote is:
void create_mask(int led){
string bitString;
for (int i = 0; i < led; i++){
bitString += "1";
}
bitset<16> bitMap(bitString);
for (int i = led; i >= 0; i--){
bitMap[i] = false;
cout << bitMap << endl;
bitString = "";
for (int j = 0; j < led; j++){
bitString += "1";
}
bitMap = bitset<16> (bitString);
}
}
I don't like the nested loop where I set each bit to 0. I think that could be made better with less complexity.
This is what I came up with:
void createMask(int len) {
std::bitset<16> bitMap;
for (int i = 1; i < len; i++)
{
bitMap.set();
bitMap >>= 16 - len;
bitMap[len - i] = false;
std::cout << bitMap << std::endl;
}
}
bitMap.set() sets all bits in the bitset to 1 (or true)
bitMap >>= 16 - len shifts all the bits to the right by 16 - len (16 - 7 = 9 if len is 7), so there are nine zeros followed by seven ones.
bitMap[len - i] = false sets the bit at position len - i to 0 (or false). As i grows, this places the zero further to the right, starting from the left end of the block of ones.
The loop starts at 1 because you're setting a bit to 0 anyway, and it avoids the out-of-range access bitMap[16] when len is 16.
If you want to use a std::bitset, you can take advantage of the bit functions like shifting and XOR'ing. In this solution I have a base bitset of all ones, a mask that shifts right, and I output the XOR of the two on each iteration.
Untested.
#include <bitset>
#include <ostream>

void output_masks(int bits, std::ostream& os){
    std::bitset<16> all_ones((1 << bits) - 1);
    std::bitset<16> bit_mask(1 << (bits - 1));
    while (bit_mask.any()) {
        os << (all_ones ^ bit_mask) << '\n';
        bit_mask >>= 1;
    }
}
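A possible usage, assuming the function above is in scope (led = 7 as in the question):

#include <iostream>
#include <ostream>

void output_masks(int bits, std::ostream& os);  // defined above

int main() {
    output_masks(7, std::cout);  // prints the moving-zero patterns, one per line
    return 0;
}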
I need a fast algorithm that will generate all possible N-digit binary numbers into an array.
e.g N=3
Then the array should be {0,0,0},{0,0,1}.....{1,1,1}
N<=17.
I have tried this so far which is a recursive solution.
void print_digits(int n, std::string const& prefix = "") {
if (!n) {
printf("%s,",prefix.c_str());
return;
}
print_digits(n-1, prefix + '0');
print_digits(n-1, prefix + '1');
}
I need a better algorithm.
All the integers in C++ are stored directly in memory in their binary representation. Thus, if you just want to store the numbers, you can write them directly into an array "as-is":
std::vector<unsigned> Numbers;
// if N is the length of the number, the maximum value is 2^N - 1
long long Max = (1LL << N) - 1;
for (unsigned i = 0; i <= Max; ++i)
    Numbers.push_back(i);
If you want to write them out in binary representation, it's also pretty straightforward, even if you want to code it all by yourself. (Please excuse me, as this is just a simple example implementation.)
#include <iostream>
using namespace std;

void PrintAsBits(unsigned value) {
    for (int i = sizeof(unsigned) * 8 - 1; i >= 0; --i)
        cout << (((1u << i) & value) ? 1 : 0);
    cout << '\n';
}
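Tying the two snippets together, a possible driver (N = 3 as in the question; PrintAsBits is assumed to be the function defined above):

#include <vector>

void PrintAsBits(unsigned value);  // defined in the snippet above

int main() {
    const int N = 3;
    long long Max = (1LL << N) - 1;      // 2^N - 1 possible values
    std::vector<unsigned> Numbers;
    for (unsigned i = 0; i <= Max; ++i)
        Numbers.push_back(i);

    for (unsigned value : Numbers)
        PrintAsBits(value);              // each line shows the 32-bit pattern
    return 0;
}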
Just in case anyone cares anymore, the following code implements the original spec, which calls for a way to populate a 2-dimensional array where each value is represented as a numeric array whose elements correspond to its value's binary digits, in big-endian order.
#include <iostream>
static const int DIGIT_COUNT = 10;
static const int VALUE_COUNT = 1 << DIGIT_COUNT;
unsigned char g_binarray[VALUE_COUNT][DIGIT_COUNT];
void Populate() {
for(int i=0; i<VALUE_COUNT; ++i) {
unsigned char (&curr)[DIGIT_COUNT] = g_binarray[i];
for(int di=0; di<DIGIT_COUNT; ++di) {
curr[di] = static_cast<unsigned char>((i >> (DIGIT_COUNT - 1 - di)) & 1);
}
}
}
void DumpArray() {
static const char *digits = "01";
for(int i=1; i<VALUE_COUNT; ++i) {
for(int di=0; di<DIGIT_COUNT; ++di) {
std::cout << digits[!!g_binarray[i][di]];
}
std::cout << " " << i << std::endl;
}
}
int main(int argc, char* argv[]) {
Populate();
DumpArray();
return 0;
}
As I wrote in another post:
Example: if you need arrays of length 4, then there are 2^4 = 16 different arrays.
You can use this simple Java code to generate all arrays:
for (int i=0; i < 16; i++) {
System.out.println(Integer.toBinaryString(i));
}
The output of this:
0
1
10
11
100
101
110
111
1000
1001
1010
1011
1100
1101
1110
1111
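For a rough C++ equivalent of that Java loop (my own sketch; like Integer.toBinaryString, it prints no leading zeros):

#include <iostream>
#include <string>

int main() {
    for (int i = 0; i < 16; i++) {
        std::string s;
        for (int v = i; v > 0; v /= 2)
            s.insert(s.begin(), char('0' + v % 2));  // prepend the next binary digit
        if (s.empty())
            s = "0";                                 // Integer.toBinaryString(0) is "0"
        std::cout << s << '\n';
    }
    return 0;
}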
Convert a positive integer in C++ (0 to 2,147,483,647) to a 32-bit binary representation and display it.
I want to do it in the traditional "mathematical" way (rather than use bitset, vector *.push_back*, recursion, or something else special to C++...), one reason being that you could then implement it in other languages as well.
So I went ahead and implemented a simple program like this:
#include <iostream>
using namespace std;
int main()
{
int dec,rem,i=1,sum=0;
cout << "Enter the decimal to be converted: ";
cin>>dec;
do
{
rem=dec%2;
sum=sum + (i*rem);
dec=dec/2;
i=i*10;
} while(dec>0);
cout <<"The binary of the given number is: " << sum << endl;
system("pause");
return 0;
}
The problem is that when you input a large number such as 9999, the result will be negative or some weird number, because sum is an int and the result exceeds its range. A 32-bit binary number has 32 digits, so is it too big for any numeric type in C++? Any suggestions on this, and on how to display the 32-bit number as the question requires?
What you get in sum as a result is hardly usable for anything but printing. It's a decimal number that just looks like binary.
If the decimal-to-binary conversion is not an end in itself, note that numbers in computer memory are already represented in binary (this is not specific to C++), and the only thing you need is a way to print them. One of the possible ways is as follows:
int size = 0;
for (int tmp = dec; tmp; tmp >>= 1)
size++;
for (int i = size - 1; i >= 0; --i)
cout << ((dec >> i) & 1);
Another variant using a character array:
char repr[33] = { 0 };
int size = 0;
for (int tmp = dec; tmp; tmp >>= 1)
size++;
for (int i = 0; i < size; ++i)
repr[i] = ((dec >> (size - i - 1)) & 1) ? '1' : '0';
cout << repr << endl;
Note that neither variant works if dec is negative (and if dec is 0, nothing is printed).
You have a number and want its binary representation, i.e., a string. So use a string, not a numeric type, to store your result.
Using a for-loop, and a predefined array of zero-chars:
#include <iostream>
using namespace std;
int main()
{
int dec;
cout << "Enter the decimal to be converted: ";
cin >> dec;
char bin32[] = "00000000000000000000000000000000";
for (int pos = 31; pos >= 0; --pos)
{
if (dec % 2)
bin32[pos] = '1';
dec /= 2;
}
cout << "The binary of the given number is: " << bin32 << endl;
}
For performance reasons, you may terminate the for loop early:
for (int pos = 31; pos >= 0 && dec; --pos)
Note that in C++ you can treat an integer as a boolean: everything != 0 is considered true.
You could use an unsigned integer type. However, even with a larger type you will eventually run out of space to store binary representations. You'd probably be better off storing them in a string.
As others have pointed out, you need to generate the results in a string. The classic way to do this (which works for any base between 2 and 36) is:
#include <algorithm>
#include <cassert>
#include <string>

std::string
toString( unsigned n, int precision, unsigned base )
{
    assert( base >= 2 && base <= 36 );
    static char const digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    std::string retval;
    while ( n != 0 ) {
        retval += digits[ n % base ];
        n /= base;
    }
    while ( retval.size() < static_cast<std::string::size_type>( precision ) ) {
        retval += '0';   // pad with leading zeros up to the requested precision
    }
    std::reverse( retval.begin(), retval.end() );
    return retval;
}
You can then display it.
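For instance, the 32-bit binary display asked about could then be produced like this (toString as defined above):

#include <iostream>
#include <string>

std::string toString( unsigned n, int precision, unsigned base );  // as above

int main() {
    unsigned dec = 9999;
    std::cout << toString( dec, 32, 2 ) << std::endl;  // 32 binary digits, zero-padded
    return 0;
}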
Recursion. In pseudocode:
function toBinary(integer num)
if (num < 2)
then
print(num)
else
toBinary(num DIV 2)
print(num MOD 2)
endif
endfunction
This does not handle leading zeros or negative numbers. The recursion stack is used to reverse the binary bits into the standard order.
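A direct C++ translation of that pseudocode, for anyone who wants to try it (the function name is kept from the pseudocode):

#include <iostream>

void toBinary(unsigned num) {
    if (num < 2) {
        std::cout << num;        // base case: a single binary digit
    } else {
        toBinary(num / 2);       // print the higher-order bits first
        std::cout << num % 2;    // then the lowest bit of the current value
    }
}

int main() {
    toBinary(9999);              // prints 10011100001111
    std::cout << std::endl;
    return 0;
}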
Just write:
long int dec, rem, i = 1, sum = 0;
Instead of:
int dec,rem,i=1,sum=0;
That should solve the problem.
I am working on decimal to binary conversion. I can convert them, using
char bin_x [10];
itoa (x,bin_x,2);
But the problem is, I want the answer in 8 bits. For example, for x = 5 the output is 101, but I want 00000101.
Is there any way to prepend zeros at the start of the array? Or is it possible to get the answer in 8 bits straight away? I am doing this in C++.
In C++, the easiest way is probably to use a std::bitset:
#include <iostream>
#include <bitset>
int main() {
int x = 5;
std::bitset<8> bin_x(x);
std::cout << bin_x;
return 0;
}
Result:
00000101
To print out the bits of a single digit, you need to do the following:
//get the digit (in this case, the least significant digit)
short digit = number % 10; // a short is at least 16 bits; only the low 8 are printed below
//print out each bit of the digit
for(int i = 0; i < 8; i++){
if(0x80 & digit) //if the high bit is on, print 1
cout << 1;
else
cout << 0; //otherwise print 0
digit = digit << 1; //shift the bits left by one to get the next highest bit.
}
itoa() is not a standard function, so it's not a good choice if you want to write portable code.
You can also use something like that:
#include <string>
#include <vector>

std::string printBinary(int num, int bits) {
    std::vector<char> digits;
    digits.reserve(bits);
    for (int i = 0; i < bits; ++i) {
        digits.push_back(static_cast<char>('0' + num % 2));  // collect bits, least significant first
        num >>= 1;
    }
    return std::string(digits.rbegin(), digits.rend());
}

std::cout << printBinary(x, 8) << std::endl;
However, I must agree that using bitset would be better.