Problem: Given an integer led as input, create a bitset (16 bits) with led bits set to 1. Then, create the following sequence (assume in this case led = 7):
0000000001111111
0000000000111111
0000000001011111
0000000001101111
0000000001110111
0000000001111011
0000000001111101
0000000001111110
Note that it is a "zero" that is moving to the right. The code I wrote is:
void create_mask(int led) {
    string bitString;
    for (int i = 0; i < led; i++) {
        bitString += "1";
    }
    bitset<16> bitMap(bitString);
    for (int i = led; i >= 0; i--) {
        bitMap[i] = false;
        cout << bitMap << endl;
        bitString = "";
        for (int j = 0; j < led; j++) {
            bitString += "1";
        }
        bitMap = bitset<16>(bitString);
    }
}
I don't like the nested loop where I set each bit to 0. I think that could be made better with less complexity.
This is what I came up with:
void createMask(int len) {
    std::bitset<16> bitMap;
    for (int i = 1; i < len; i++) {
        bitMap.set();
        bitMap >>= 16 - len;
        bitMap[len - i] = false;
        std::cout << bitMap << std::endl;
    }
}
bitMap.set() sets all bits in the bitset to 1 (or true).
bitMap >>= 16 - len shifts all the bits to the right by 16 - len places (16 - 7 = 9 if len is 7), so there are nine zeros followed by seven ones.
bitMap[len - i] = false sets the bit at position len - i to 0 (or false). As i increases, len - i decreases, so the zero starts at the left end of the group of ones and works its way toward the right.
The loop starts at 1 because with i = 0 you would only be clearing a bit that is already 0, and skipping it prevents the program from crashing (an out-of-range index) when len is 16.
If you want to use a std::bitset, you can take advantage of the bit functions like shifting and XOR'ing. In this solution I have a base bitset of all ones, a mask that shifts right, and I output the XOR of the two on each iteration.
Untested.
void output_masks(int bits, std::ostream& os) {
    std::bitset<16> all_ones((1 << bits) - 1);
    std::bitset<16> bit_mask(1 << (bits - 1));
    while (bit_mask.any()) {
        os << (all_ones ^ bit_mask) << '\n';
        bit_mask >>= 1;
    }
}
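For illustration, here is one way to drive it (my own untested sketch, assuming output_masks is in scope and using led = 7 as in the question):

#include <bitset>
#include <iostream>

int main() {
    // Should print the masks from 0000000000111111 down to 0000000001111110.
    output_masks(7, std::cout);
    return 0;
}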
I'm wondering how you can write something like this recursively or using a different loop structure:
std::string a = "00000000";
for (int i = 0; i < 8; i++) {
    a[i] = '1';
    for (int j = 0; j < 8; j++) {
        if (i != j) {
            a[j] = '1';
            ... // more for loops with the same structure
            std::cout << a[j] << "\n";
            a[j] = '0';
        }
        a[i] = '0';
    }
}
I'm trying to print out every possible eight bit combination of 0s and 1s without using any libraries (except bitset if I have to). If I do it this way, I'll end up with 8 for loops, which is a bit much. I'm wondering whether there is a way to condense this using either recursion or a clever trick with using the standard do/while/for loops.
This task can be achieved with a simple for loop and bitwise operations.
Shift i right by the bit's position, then AND it with 1 to extract that bit.
#include <iostream>

void printBinary()
{
    for (int i = 0; i < 256; i++) {
        for (int bit = 7; bit >= 0; bit--) {
            std::cout << (i >> bit & 1);
        }
        std::cout << std::endl;
    }
}
First, your loops are incorrect: they run from 0 to 7, inclusive, while they should be running from 0 to 1, inclusive, because a bit is either zero or one.
As far as going through all 8-bit combinations goes, you can do it with a single loop: use an int counting from 0 to 255, inclusive, and print its binary representation:
for (int i = 0; i != 256; i++) {
    cout << bitset<8>(i).to_string() << endl;
}
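Since the question explicitly asks about recursion, here is a minimal recursive sketch as well (my own illustration; the name printCombinations is made up). Each call fixes one position to '0' and then to '1' and recurses, printing once all eight characters have been chosen:

#include <iostream>
#include <string>

void printCombinations(std::string& a, std::size_t pos) {
    if (pos == a.size()) {            // all positions chosen: print one combination
        std::cout << a << "\n";
        return;
    }
    a[pos] = '0';
    printCombinations(a, pos + 1);    // every combination with a 0 at this position
    a[pos] = '1';
    printCombinations(a, pos + 1);    // every combination with a 1 at this position
}

int main() {
    std::string a = "00000000";
    printCombinations(a, 0);          // prints all 256 strings, 00000000 to 11111111
    return 0;
}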
I'm working on a program that will allow me to multiply/divide/add/subtract binary numbers. In my program, all integers are represented as vectors of digits.
I've managed to figure out how to do this with addition, but multiplication has got me stumped, and I was wondering if anyone could give me some advice on the pseudocode as a guide for this program.
Thanks in advance!
EDIT: To clear things up, I'm still trying to figure out the multiplication algorithm. Any help would be appreciated. I usually don't work with C++, so it takes me a bit longer to figure things out with it.
You could also consider Booth's algorithm if you'd like to multiply:
Booth's multiplication algorithm
Long multiplication in pseudocode would look something like:
vector<digit> x;
vector<digit> y;
total = 0;
multiplier = 1;
for i = x->last to x->first   // start with the least significant digit of x
    total = total + i * y * multiplier;
    multiplier *= 10;
return total;
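A rough translation of that idea into compilable C++ (my own sketch, assuming digits are stored least significant first in a std::vector<int>; the name multiplyDigits is mine):

#include <vector>

// Multiply two numbers stored as base-10 digit vectors, least significant digit first.
// e.g. x = {3,2,1} (123) times y = {2,1} (12) yields {6,7,4,1} (1476).
std::vector<int> multiplyDigits(const std::vector<int>& x, const std::vector<int>& y)
{
    std::vector<int> total(x.size() + y.size(), 0); // the product never needs more digits
    for (std::size_t i = 0; i < x.size(); ++i)
    {
        int carry = 0;
        for (std::size_t j = 0; j < y.size(); ++j)
        {
            int t = total[i + j] + x[i] * y[j] + carry;
            total[i + j] = t % 10; // keep one digit
            carry = t / 10;        // push the rest up
        }
        total[i + y.size()] += carry;
    }
    while (total.size() > 1 && total.back() == 0) // trim leading zeros
        total.pop_back();
    return total;
}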
You could also try simulating a binary multiplier or any other circuit that is used in a CPU.
Just tried something, and this would work if you only multiply unsigned values in binary:
unsigned int multiply(unsigned int left, unsigned int right)
{
    unsigned long long result = 0; // 64-bit result
    unsigned int R = right;        // 32-bit right input
    unsigned long long M = left;   // left input, widened so shifting doesn't drop high bits
    while (R > 0)
    {
        if (R & 1)
        {   // the least significant bit is set
            result += M; // add the shifted multiplicand
        }
        R >>= 1;
        M <<= 1; // next bit
    }
    /*-- if you want to check for multiplication overflow: --
    if ((result >> 32) != 0)
    {   // the result has more than 32 bits
        return -1; // multiplication overflow
    }*/
    return (unsigned int)result;
}
However, that works at the binary level of it... I just noticed you have a vector of digits as input.
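Assuming the function above is in scope, a quick sanity check (my own example) could look like:

#include <iostream>

int main() {
    std::cout << multiply(25, 5) << "\n"; // prints 125
    std::cout << multiply(6, 7) << "\n";  // prints 42
    return 0;
}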
I made an algorithm that combines a binary addition function I found on the web with some code that first "shifts" the numbers before sending them off to be added together.
It works with the logic that's in this video https://www.youtube.com/watch?v=umqLvHYeGiI
and this is the code:
#include <iostream>
#include <string>
using namespace std;

// This function adds two binary strings and returns the
// result as a third string
string addBinary(string a, string b)
{
    string result = ""; // Initialize result
    int s = 0;          // Initialize digit sum

    // Traverse both strings starting from their last characters
    int i = a.size() - 1, j = b.size() - 1;
    while (i >= 0 || j >= 0 || s == 1)
    {
        // Compute the sum of the digits from right to left:
        // add the current bit of each string to the digit sum
        s += ((i >= 0) ? a[i] - '0' : 0);
        s += ((j >= 0) ? b[j] - '0' : 0);

        // If the current digit sum is 1 or 3, prepend 1 to the result;
        // otherwise prepend 0 (2 % 2 + '0' = '0'). The new digit goes
        // on the left of the string.
        result = char(s % 2 + '0') + result;

        // Compute the carry; integer division, so it is either 1 or 0
        s /= 2;

        // Move to the next digits (further to the left)
        i--; j--;
    }
    return result;
}
int main()
{
    string a, b, result = "0"; // Multiplicand, multiplier, and result
    string temp = "0";         // Our buffer
    int shifter = 0;           // Shift counter

    puts("Enter your binary values");
    cout << "Multiplicand = ";
    cin >> a;
    cout << endl;
    cout << "Multiplier = ";
    cin >> b;
    cout << endl;

    // Set an index that looks at the multiplier starting from the rightmost bit
    int j = b.size() - 1;
    // Loop through the whole string and look for 1s
    while (j >= 0)
    {
        if (b[j] == '1')
        {
            // Reassign the original value every loop to discard the old shift
            temp = a;
            // We "shift" by appending zeros to the string of bits.
            // On the first iteration nothing is appended because we have not shifted yet.
            temp.append(shifter, '0');
            // Add the shifted buffer bits to the result variable
            result = addBinary(result, temp);
        }
        // We shifted one place
        ++shifter;
        // Move to the next bit on the left
        j--;
    }
    cout << "Result = " << result << endl;
    return 0;
}
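For reference, a sample run (my own check: 101 × 11, i.e. 5 × 3 = 15):

Enter your binary values
Multiplicand = 101
Multiplier = 11
Result = 1111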
while (i < length)
{
    pow = 1;
    for (int j = 0; j < 8; j++, pow *= 2)
    {
        ch += (str[j] - 48) * pow;
    }
    str = str.substr(8);
    i += 8;
    cout << ch;
    ch = 0;
}
This seems to be slowing my program down a lot. Is it because of the string functions I'm using in there, or is this approach wrong in general? I know there's the approach where you implement long division, but I wanted to see if that was actually more efficient than this method. I can't think of another way that doesn't use the same general algorithm, so maybe it's just my implementation that is the problem.
Perhaps you might want to look into using the standard library functions. They're probably at least as optimised as anything you could write yourself:
#include <iostream>
#include <iomanip>
#include <cstdlib>

int main (void) {
    const char *str = "10100101";
    // Use str.c_str() if it's a real C++ string.
    long int li = std::strtol (str, 0, 2);
    std::cout
        << "binary string = " << str
        << ", decimal = " << li
        << ", hex = " << std::setbase (16) << li
        << '\n';
    return 0;
}
The output is:
binary string = 10100101, decimal = 165, hex = a5
You are doing some things unnecessarily, like creating a new substring for each loop iteration. You could just use str[i + j] instead.
It is also not necessary to multiply 0 or 1 with the power. Just use an if-statement.
while (i < length)
{
    pow = 1;
    for (int j = 0; j < 8; j++, pow *= 2)
    {
        if (str[i + j] == '1')
            ch += pow;
    }
    i += 8;
    cout << ch;
    ch = 0;
}
This will at least run a bit faster.
A short answer could be:
long int x = strtol(your_binary_c++_string.c_str(), (char **)NULL, 2);
You can probably use an int or long int, as below.
Just traverse the binary number digit by digit, from position 0 to n-1 (where position n-1 holds the most significant bit),
multiply each digit by the corresponding power of 2, and add the products together. E.g. to convert 1000 (the binary equivalent of 8), do the following:

1 0 0 0  ==> going from right to left:
0 x 2^0 = 0
0 x 2^1 = 0
0 x 2^2 = 0
1 x 2^3 = 8

Now add them together: 0 + 0 + 0 + 8 = 8; this is the decimal equivalent of 1000. Please read the program below for a better understanding of how the concept
works. Note: the program works only for 16-bit binary numbers (non-floating) or less. Leave a comment if anything is not clear. You are bound to receive a reply.
// Program to convert binary to its decimal equivalent
#include <iostream>

int main()
{
    long long x; // 16 binary digits read as a decimal number would overflow an int
    int i = 0, sum = 0;

    // Prompt the user to input a 16-bit binary number
    std::cout << " Enter the binary number (16-bit) : ";
    std::cin >> x;

    while (i != 16) // runs 16 times
    {
        sum += (x % 10) * (1 << i); // current digit times 2^i
        x = x / 10;
        i++;
    }
    std::cout << "\n The decimal equivalent is : " << sum;
    return 0;
}
How about something like:
int binstring_to_int(const std::string &str)
{
    // 16 bits are 16 characters, but subtract 1 since bits are numbered 0 to 15
    std::string::size_type bitnum = str.length() - 1;
    int value = 0;
    for (auto ch : str)
    {
        value |= (ch == '1') << bitnum--;
    }
    return value;
}
It's the simplest I can think of. Note that this uses the C++11 range-based for loop; if your compiler can't handle it, you can use
for (std::string::const_iterator i = str.begin(); i != str.end(); i++)
{
    char ch = *i;
    // ...
}
Minimize the number of operations and don't compute things more than once. Just multiply and move up:
unsigned int result = 0;
for (char *p = str; *p != 0; ++p)
{
    result *= 2;
    result += (*p - '0'); // this is either 0 or 1
}
The scheme is readily generalized to any base < 10.
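As a quick illustration of that generalization (my own sketch; digits_to_int is a made-up name), the only change is multiplying by the base instead of 2:

unsigned int digits_to_int(const char *str, unsigned int base)
{
    unsigned int result = 0;
    for (const char *p = str; *p != 0; ++p)
    {
        result *= base;       // move the accumulated value up one digit
        result += (*p - '0'); // add the current digit, 0 .. base-1
    }
    return result;
}
// digits_to_int("10100101", 2) == 165, digits_to_int("165", 10) == 165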
I am working on decimal to binary conversion. I can convert them using
char bin_x[10];
itoa(x, bin_x, 2);
but the problem is, I want the answer in 8 bits. For example, for x = 5 the output is 101, but I want 00000101.
Is there any way to prepend zeros at the start of the array? Or is it possible to get the answer in 8 bits straight away? I am doing this in C++.
In C++, the easiest way is probably to use a std::bitset:
#include <iostream>
#include <bitset>
int main() {
    int x = 5;
    std::bitset<8> bin_x(x);
    std::cout << bin_x;
    return 0;
}
Result:
00000101
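If you need the padded digits in a string rather than printed directly, std::bitset can also hand them back via its to_string() member:

std::string s = std::bitset<8>(5).to_string(); // "00000101"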
To print out the bits of a single digit, you need to do the following:
// Get the digit (in this case, the least significant digit).
// Note: short is at least 16 bits; only the low 8 bits are used here.
short digit = number % 10;

// Print out each bit of the digit
for (int i = 0; i < 8; i++) {
    if (0x80 & digit)   // if the high bit of the low byte is on, print 1
        cout << 1;
    else
        cout << 0;      // otherwise print 0
    digit = digit << 1; // shift left by one to bring up the next bit
}
itoa() is not a standard function so it's not good to use it if you want to write portable code.
You can also use something like this:
std::string printBinary(int num, int bits) {
    std::vector<char> digits;
    digits.reserve(bits); // reserve rather than size the vector, since push_back appends
    for (int i = 0; i < bits; ++i) {
        digits.push_back(num % 2 + '0');
        num >>= 1;
    }
    return std::string(digits.rbegin(), digits.rend());
}

std::cout << printBinary(x, 8) << std::endl;
However I must agree that using bitset would be better.
I am writing a program that needs to take an array of size n and convert it into its hex value as follows:
int a[] = { 0, 1, 1, 0 };
I would like to treat the values of the array as binary digits and convert them to a hex value. In this case:
0x6000000000000000; // 0110...0
It also has to be padded to the right with 0s to make 64 bits (I am on a 64-bit machine).
Or I could also take the array elements, convert them to decimal, and then to hexadecimal if that's easier... What would be the best way of doing this in C++?
(This is not homework.)
The following assumes that your a[] will only ever use 0 and 1 to represent bits. You'll also need to specify the array length; sizeof(a)/sizeof(int) can be used in this case, but not for heap-allocated arrays. Also, result will need to be a 64-bit integer type, and each element must be widened before shifting so the high bits aren't lost:
unsigned long long result = 0;
for (int c = 0; c < array_len; c++)
    result |= (unsigned long long)a[c] << (63 - c);
If you want to see what it looks like in hex, you can use (s)printf("%I64x", result) (or the standard "%llx" outside MSVC).
std::bitset<64>::to_ulong() might be your friend. The order will be backwards relative to your array (bitset index 3 is the bit with value 2^3), but you can remedy that by subtracting the desired index from 63.
#include <bitset>
#include <iomanip>
#include <iostream>

std::bitset<64> bits;
for (int index = 0; index < sizeof a / sizeof *a; ++index) {
    bits[63 - index] = a[index];
}
std::cout << std::hex << std::setw(16) << std::setfill('0')
          << bits.to_ulong() << std::endl;
unsigned long long answer = 0;
for (int i = 0; i < sizeof(a)/sizeof(a[0]); ++i)
{
    answer = (answer << 1) | a[i];
}
answer <<= (64 - sizeof(a)/sizeof(a[0]));
Assumptions: a[] is at most 64 entries, is defined at compile time, and only contains 1 or 0. Being defined at compile time sidesteps issues of shifting left by 64, as you cannot declare an empty array.
Here's a rough answer:
unsigned long long ConvertBitArrayToInt64(int a[])
{
    unsigned long long answer = 0;
    for (int i = 0; i < 64; ++i)
    {
        if (isValidIndex(i)) // placeholder: true while i is still inside the input array
        {
            answer = answer << 1 | a[i];
        }
        else
        {
            answer = answer << 1;
        }
    }
    return answer;
}
unsigned char hexValues[16];
for (int i = 0; i < 16; i++)
{
    hexValues[i] = a[i*4] * 8 + a[i*4 + 1] * 4 + a[i*4 + 2] * 2 + a[i*4 + 3];
}
This will give you an array of bytes where each byte represents one of your hex values.
Note that each byte in hexValues will be a value from 0 to 15.
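To turn those nibble values into printable hex characters (my own follow-up sketch, assuming a[] holds 64 entries as above), you can index into a digit table:

#include <iostream>

int main()
{
    int a[64] = { 0, 1, 1, 0 }; // remaining entries are zero-initialized
    unsigned char hexValues[16];
    for (int i = 0; i < 16; i++)
        hexValues[i] = a[i*4] * 8 + a[i*4 + 1] * 4 + a[i*4 + 2] * 2 + a[i*4 + 3];

    const char *digits = "0123456789abcdef";
    std::cout << "0x";
    for (int i = 0; i < 16; i++)
        std::cout << digits[hexValues[i]]; // prints 0x6000000000000000
    std::cout << std::endl;
    return 0;
}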