I've been working on a project for my C++ class where we have to create a binary calculator, and the professor says the functions should return an 8-bit binary set. My issue is this:
11111111 + 11111111 = 111111110
Yet the function we originally created gives this outcome:
11111111 + 11111111 = 00000000
which to me is incorrect, so I changed my function to this:
Decimal to Binary
string DecToBin(int num)
{
    /*
    Purpose: Changing a Decimal to a Binary Set
    Pre: Valid positive integer
    Post: Returns the valid binary string
    */
    string bin = "";
    while (num >= 0)
    {
        bin += (num % 2 == 0 ? "0" : "1");
        if (num != 0)
            num /= 2;
        else
            break;
    }
    return bin;
}
But here the issue appears again:
01010101 + 10101010 = 011111111
But my function above returns
01010101 + 10101010 = 111111110
What would be the best function to write, given that I NEED to return an 8-bit set? Or, since my function above returns the correct answer for some inputs and the wrong answer for others, why is that happening in the first place?
Binary to Decimal
int BinToDec(string bin)
{
    /*
    Purpose: To generate a decimal integer from a string binary set
    Pre: Valid string binary set
    Post: Returns the resulting decimal integer
    */
    int output = 0;          // initialize output as 0
    int base2Start = 128;    // place value of the leftmost bit (assumes an 8-bit string)
    int len = bin.length();  // get the string length
    for (int i = 0; i < len; i++)
    {
        if (bin[i] == '1')   // if this character of the string is a '1'
        {
            output = output + base2Start; // add this bit's place value
        }
        base2Start = base2Start / 2;      // halve the place value each iteration
    }
    return output;
}
Addition Function
int Addition(string st1, string st2)
{
    /*
    Purpose: Gets two valid binary sets, adds their decimal conversions, and returns the sum
    Pre: Needs two strings that SHOULD be valid binary
    Post: Returns the decimal sum (to be converted back to binary)
    */
    int first, second;
    if (ValidBin(st1))
    {
        first = BinToDec(st1);
    }
    else return 0;
    if (ValidBin(st2))
    {
        second = BinToDec(st2);
    }
    else return 0;
    add++; // presumably a global counter of additions performed
    return first + second;
}
bin += (num % 2 == 0 ? "0" : "1");
Should be
bin = (num % 2 == 0 ? "0" : "1") + bin;
Each iteration appends the least significant bit of num to the string, so at the end the least significant bit ends up leftmost instead of rightmost. Prepending instead fixes the order.
Edit: In order to truncate the result to an 8-bit width, change the following line:
return first + second;
with this one:
return (first + second) & 0xFF; // Same as (first + second) % 256
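Putting both fixes together, here is a minimal sketch of a version that always returns exactly 8 bits (DecToBin8 is a name I'm introducing, and it assumes you only want the low 8 bits of the value):
#include <string>
using std::string;

// Sketch: always emit exactly 8 bits, prepending each new bit so the
// most significant bit ends up on the left.
string DecToBin8(int num)
{
    num &= 0xFF; // keep only the low 8 bits
    string bin;
    for (int i = 0; i < 8; i++)
    {
        bin = (num % 2 == 0 ? "0" : "1") + bin; // prepend the least significant bit
        num /= 2;
    }
    return bin;
}
Feeding it 510 (11111111 + 11111111) yields 11111110, the truncated 8-bit result.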
#include <iostream>
#include <string> // for std::string
using namespace std;

string sumBinary (string s1, string s2);

int main()
{
    cout << "output = " << sumBinary ("10","10");
}
string sumBinary (string s1, string s2)
{
    if (s1.empty())
        return s2;
    if (s2.empty())
        return s1;
    int len1 = s1.length() - 1;
    int len2 = s2.length() - 1;
    string s3;
    s3 = len1 > len2 ? s1 : s2;
    int len3 = s3.length() - 1;
    bool carry = false;
    while (len1 >= 0 || len2 >= 0) {
        int i1 = len1 >= 0 ? s1[len1--] - '0' : 0;
        int i2 = len2 >= 0 ? s2[len2--] - '0' : 0;
        // Check for any invalid character
        if (i1 < 0 || i1 > 1 || i2 < 0 || i2 > 1)
            return "";
        // Full-adder sum of the two bits plus the carry
        int sum = i1 ^ i2 ^ carry;
        // Full-adder carry out
        carry = (i1 & carry) | (i2 & carry) | (i1 & i2);
        s3[len3--] = '0' + sum;
    }
    if (carry)
        s3 = "1" + s3;
    return s3;
}
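With the main above, this prints output = 100, the expected binary sum of 10 and 10.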
The statement below gives the smallest number whose sum of digits equals a given number n.
If the input is 10, the output will be 19 (1 + 9 = 10).
digits = (c % 9 + 1) * pow(10, (c / 9)) - 1
But when the input is larger, like 100000, the output shows Inf. Can anyone help me solve this? I even tried unsigned long long int.
Assuming you just want to print the answer and not keep it stored in an integer variable, you can avoid overflow by taking the first digit as c % 9 and appending c / 9 '9' characters to complete the summation.
#include <string>

std::string getDigits(long long c)
{
    if (c == 0)
    {
        return "0";
    }
    if (c < 0)
    {
        return "";
    }
    auto first = (c % 9);
    c -= first;
    auto nineCount = c / 9;
    std::string result;
    if (first != 0)
    {
        result += std::string(1, (char)(first + '0'));
    }
    result += std::string(nineCount, '9');
    return result;
}
Example run:
#include <iostream>

int main()
{
    std::cout << getDigits(987) << std::endl;
    return 0;
}
prints:
69999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
I was asked to write code to convert a decimal number to its binary form. I have tried several different ways, but none gives me the bits in the order I need, so I am currently stuck on how to proceed.
I tried the usual division method, but it produces the bits in the wrong order: if the correct answer is 1001100, I get 0011001, and I have no way of reversing the order. I am not allowed to use any library other than iostream, cmath, and string. I am now trying to find the conversion using powers of 2 (2^exponent).
This is what I currently have:
#include <iostream>
#include <cmath>
#include <string>
using namespace std;

int main() {
    int num, exp, rem;
    string biNum;
    cout << "Enter decimal number: " << endl;
    cin >> num;
    for (exp = 0; pow(2, exp) < num; exp++) {
    }
    while (num > 0) {
        rem = num % (int) pow(2, exp);
        if (rem != 0) {
            biNum = biNum + '1';
        } else {
            biNum = biNum + '0';
        }
        exp--;
    }
    cout << biNum;
    return 0;
}
I am currently receiving no result at all.
Here is an example that collects the bits in least-significant-bit (LSB) first order:
//...
while (num > 0)
{
    const char bit = '0' + (num & 1);
    biNum += bit;
    num = num >> 1;
}
Explanation
The loop continues until the num variable is zero. There is no point in adding extra zeros unless you really want them.
The (num & 1) expression returns 1 if the bit is 1, or 0 if the bit is 0.
This is then added to the character '0' to produce either '0' or '1'.
The variable is declared as const since it won't be modified after declaration (definition).
The newly created character is appended to the bit string.
Finally, num is shifted right by one bit (because that bit has already been processed).
There are many other ways to collect the bits in Most Significant Bit (MSB) order; most are left for the OP and the reader, but one simple option is sketched below. :-)
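For instance, here is a sketch of my own (staying within the iostream/cmath/string constraint) that prepends each bit instead of appending it:
//...
while (num > 0)
{
    const char bit = '0' + (num & 1);
    biNum = bit + biNum; // prepend, so the most significant bit ends up first
    num = num >> 1;
}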
Here you go. This outputs the bits in the right order:
#include <iostream>
#include <string>

int main ()
{
    unsigned num;
    std::string biNum;
    std::cin >> num;
    while (num)
    {
        char bit = (num & 1) + '0';
        biNum.insert (biNum.cbegin (), bit);
        num >>= 1;
    }
    std::cout << biNum;
    return 0;
}
You can use a recursive function to print the digits in reverse order of their computation, avoiding a container/array, like so:
#include <iostream>

void to_binary(int num) {
    int rem = num % 2;
    num = (num - rem) / 2;
    if (num < 2) {
        // base case: num now holds the most significant bit, so print it before rem
        std::cout << num << rem;
        return;
    }
    to_binary(num);
    std::cout << rem;
}

int main()
{
    to_binary(100);
}
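This prints 1100100, which is 100 in binary.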
This is what I have:
string decimal_to_binary(int n){
    string result = "";
    while(n > 0){
        result = string(1, (char) (n%2 + 48)) + result;
        n = n/2;
    }
    return result;
}
This works, but it doesn't work if I put a negative number, any help?
Just
#include <bitset>
Then use std::bitset and its to_string to convert from an int to a string:
std::cout << std::bitset<sizeof(n)*8>(n).to_string();
It works for negative numbers too.
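For example, if you specifically want the 8-bit form, a fixed-width bitset shows the two's-complement pattern directly (a small sketch with a hypothetical value n):
#include <bitset>
#include <iostream>

int main()
{
    int n = -5;
    // prints the low 8 bits of n in two's complement: 11111011
    std::cout << std::bitset<8>(n) << '\n';
    return 0;
}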
Well, I would recommend calling a separate function for negative numbers, given that, for example, -1 and 255 will both return 11111111. Converting from the positive to the negative is easier than changing the logic entirely to handle both.
Going from a positive binary value to its negative is just flipping the bits (an XOR with all ones) and adding 1.
You can modify your code like this for a quick fix:
string decimal_to_binary(int n){
    if (n < 0){ // check if negative and alter the number
        n = 256 + n;
    }
    string result = "";
    while(n > 0){
        result = string(1, (char) (n%2 + 48)) + result;
        n = n/2;
    }
    return result;
}
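Note that this quick fix assumes the value fits in 8 bits, so it only handles n from -128 to -1 correctly (for example, -1 becomes 255, which prints as 11111111, matching the point above).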
Check whether the number is negative. If so, call the function again with -n and return the concatenated result.
You also need to add a clause to check against 0 unless you want to return an empty string when the input is 0.
std::string decimal_to_binary(int n){
    if ( n < 0 )
    {
        return std::string("-") + decimal_to_binary(-n);
    }
    if ( n == 0 )
    {
        return std::string("0");
    }
    std::string result = "";
    while(n > 0){
        result = std::string(1, (char) (n%2 + 48)) + result;
        n = n/2;
    }
    return result;
}
I'm working on a program that will allow me to multiply/divide/add/subtract binary numbers. In my program, all integers are represented as vectors of digits.
I've managed to figure out how to do this for addition; however, multiplication has me stumped, and I was wondering if anyone could give me some advice on the pseudocode as a guide for this program.
Thanks in advance!
EDIT: To clear things up, I'm still trying to figure out the algorithm for multiplication. Any help would be appreciated. I don't usually work with C++, so it takes me a bit longer to figure things out with it.
You could also consider Booth's algorithm if you'd like to multiply:
Booth's multiplication algorithm
Long multiplication in pseudocode would look something like:
vector<digit> x;
vector<digit> y;

total = 0;
multiplier = 1;
for i = x->last -> x->first // start with the least significant digit of x
    total = total + i * y * multiplier
    multiplier *= 10;
return total
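As a rough C++ rendering of that pseudocode, here is a sketch of my own (the multiply name and base parameter are mine; it assumes digits are stored least-significant-first in the vector, and works in base 10 — pass base 2 for binary digits):
#include <cstddef>
#include <vector>

// Long multiplication over digit vectors, least significant digit first.
// result[i + j] accumulates x[i] * y[j], exactly like pencil-and-paper.
std::vector<int> multiply(const std::vector<int>& x,
                          const std::vector<int>& y,
                          int base = 10)
{
    std::vector<int> result(x.size() + y.size(), 0);
    for (std::size_t i = 0; i < x.size(); i++)
    {
        int carry = 0;
        for (std::size_t j = 0; j < y.size(); j++)
        {
            int t = result[i + j] + x[i] * y[j] + carry;
            result[i + j] = t % base;
            carry = t / base;
        }
        result[i + y.size()] += carry;
    }
    // trim leading zeros (stored at the back)
    while (result.size() > 1 && result.back() == 0)
        result.pop_back();
    return result;
}
For example, with base 10, multiplying {3, 2} by {4, 1} (i.e. 23 * 14) yields {2, 2, 3}, i.e. 322.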
You could try simulating a binary multiplier or any other multiplication circuit used in a CPU.
Just tried something, and this would work if you only multiply unsigned values in binary:
unsigned int multiply(unsigned int left, unsigned int right)
{
    unsigned long long result = 0; // 64 bit result
    unsigned int R = right;        // 32 bit right input
    unsigned long long M = left;   // the multiplicand, held in 64 bits so the shifts below can't overflow
    while (R > 0)
    {
        if (R & 1)
        { // if the least significant bit is set
            result += M; // add the shifted multiplicand
        }
        R >>= 1;
        M <<= 1; // next bit
    }
    /*-- if you want to check for multiplication overflow: --
    if ((result >> 32) != 0)
    { // if it has more than 32 bits
        return -1; // multiplication overflow
    }*/
    return (unsigned int)result;
}
However, that works at the binary level of it... I see you have a vector of digits as input, so you would need to adapt the idea to that representation.
I made this algorithm, which uses a binary addition function I found on the web, combined with some code that first "shifts" the numbers before sending them to be added together.
It works with the logic that's in this video: https://www.youtube.com/watch?v=umqLvHYeGiI
and this is the code:
#include <iostream>
#include <string>
#include <cstdio> // for puts
using namespace std;

// This function adds two binary strings and returns the
// result as a third string
string addBinary(string a, string b)
{
    string result = ""; // Initialize result
    int s = 0;          // Initialize digit sum

    // Traverse both strings starting from the last characters
    int i = a.size() - 1, j = b.size() - 1;
    while (i >= 0 || j >= 0 || s == 1)
    {
        // Compute the sum of the digits from right to left
        // x = (condition) ? (value_if_true) : (value_if_false);
        // add the current bit of each string to the digit sum
        s += ((i >= 0) ? a[i] - '0' : 0);
        s += ((j >= 0) ? b[j] - '0' : 0);

        // If the current digit sum is 1 or 3, prepend a 1 to the result;
        // otherwise prepend a 0 (2 % 2 + 0 = 0). The new digit goes at
        // the head of the string (to the left)
        result = char(s % 2 + '0') + result;

        // Compute the carry
        // Integer division, so we get either 1 or 0 as a result
        s /= 2;

        // Move to the next digits (more to the left)
        i--; j--;
    }
    return result;
}
int main()
{
    string a, b, result = "0"; // Multiplier, multiplicand, and result
    string temp = "0";         // Our buffer
    int shifter = 0;           // Shifting counter

    puts("Enter your binary values");
    cout << "Multiplicand = ";
    cin >> a;
    cout << endl;
    cout << "Multiplier = ";
    cin >> b;
    cout << endl;

    // Set a pointer that looks at the multiplier from the rightmost bit
    int j = b.size() - 1;
    // Loop through the whole string and see if there are any 1's
    while (j >= 0)
    {
        if (b[j] == '1')
        {
            // Reassign the original value every loop to discard the old shift
            temp = a;
            // We shift by appending zeros to the string of bits
            // (on the first iteration nothing is appended, since we haven't shifted yet)
            temp.append(shifter, '0');
            // Add the shifted buffer bits to the result variable
            result = addBinary(result, temp);
        }
        // we shifted one place
        ++shifter;
        // move to the next bit on the left
        j--;
    }

    cout << "Result = " << result << endl;
    return 0;
}
I had to convert the 128 bits of a character array of size 16 (1 byte per character) into decimal and hexadecimal, without using any libraries other than those included. Converting to hexadecimal was easy, since four bits are processed at a time and each hex digit can be printed as soon as it is generated.
But when it comes to decimal, converting it the normal mathematical way was not possible, in which each bit is multiplied by 2 to the power of its index from the right.
So I thought to convert it like I did with hexadecimal, by printing digit by digit. The problem is that in decimal this doesn't work: the maximum digit is 9, which needs 4 bits to be represented, while 4 bits can represent decimal numbers up to 15. I tried building some mechanism to carry the extra part, but couldn't find a way to do so, and I don't think that was going to work either. I have been trying aimlessly for three days, I have no idea what to do, and I couldn't find any helpful solution on the internet.
So, I want some way to get this done.
Here is my complete code:
#include <iostream>
#include <cstring>
#include <cmath>
using namespace std;

const int strng = 128;
const int byts = 16;

class BariBitKari {
    char bits_ar[byts];
public:
    BariBitKari(char inp[strng]) {
        set_bits_ar(inp);
    }

    void set_bits_ar(char in_ar[strng]) {
        char b_ar[byts];
        cout << "Binary 1: ";
        for (int i=0, j=0; i<byts; i++) {
            for (int k=7; k>=0; k--) {
                if (in_ar[j] == '1') {
                    cout << '1';
                    b_ar[i] |= 1UL << k;
                }
                else if (in_ar[j] == '0') {
                    cout << '0';
                    b_ar[i] &= ~(1UL << k);
                }
                j++;
            }
        }
        cout << endl;
        strcpy(bits_ar, b_ar);
    }

    char * get_bits_ar() {
        return bits_ar;
    }

    // Functions

    void print_deci() {
        char b_ar[byts];
        strcpy(b_ar, get_bits_ar());
        int sum = 0;
        int carry = 0;
        cout << "Decimal : ";
        for (int i=byts-1; i >= 0; i--){
            for (int j=4; j>=0; j-=4) {
                char y = (b_ar[i] << j) >> 4;
                // sum = 0;
                for (int k=0; k <= 3; k++) {
                    if ((y >> k) & 1) {
                        sum += pow(2, k);
                    }
                }
                // sum += carry;
                // if (sum > 9) {
                //     carry = 1;
                //     sum -= 10;
                // }
                // else {
                //     carry = 0;
                // }
                // cout << sum;
            }
        }
        cout << endl;
    }

    void print_hexa() {
        char b_ar[byts];
        strcpy(b_ar, get_bits_ar());
        char hexed;
        int sum;
        cout << "Hexadecimal : 0x";
        for (int i=0; i < byts; i++){
            for (int j=0; j<=4; j+=4) {
                char y = (b_ar[i] << j) >> 4;
                sum = 0;
                for (int k=3; k >= 0; k--) {
                    if ((y >> k) & 1) {
                        sum += pow(2, k);
                    }
                }
                if (sum > 9) {
                    hexed = sum + 55;
                }
                else {
                    hexed = sum + 48;
                }
                cout << hexed;
            }
        }
        cout << endl;
    }
};

int main() {
    char ar[strng];
    for (int i=0; i<strng; i++) {
        if ((i+1) % 8 == 0) {
            ar[i] = '0';
        }
        else {
            ar[i] = '1';
        }
    }

    BariBitKari arr(ar);
    arr.print_hexa();
    arr.print_deci();
    return 0;
}
To convert a 128-bit number into a "decimal" string, I'm going to make the assumption that the large decimal value just needs to be contained in a string and that we're only in the "positive" space. Without using a proper big number library, I'll demonstrate a way to convert any array of bytes into a decimal string. It's not the most efficient way because it continually parses, copies, and scans strings of digit characters.
We'll take advantage of the fact that any large number such as the following:
0x87654321 == 2,271,560,481
can be converted into a series of bytes shifted in 8-bit chunks, and adding back these shifted chunks reproduces the original value:
0x87 << 24 == 0x87000000 == 2,264,924,160
0x65 << 16 == 0x00650000 == 6,619,136
0x43 << 8 == 0x00004300 == 17,152
0x21 << 0 == 0x00000021 == 33
Sum == 0x87654321 == 2,271,560,481
So our strategy for converting a 128-bit number into a string will be to:
1. Convert the original 16-byte array into 16 strings, each representing the decimal equivalent of one byte of the array.
2. "Shift left" each string by the appropriate number of bits based on the index of the original byte in the array, taking advantage of the fact that a left shift is equivalent to multiplying by 2.
3. Add all these shifted strings together.
So to make this work, we introduce a function that can "Add" two strings (consisting only of digits) together:
#include <algorithm> // std::reverse, std::swap
#include <iostream>
#include <string>
using namespace std;

// s1 and s2 are strings consisting of digit chars only ('0'..'9')
// This function will compute the "sum" of s1 and s2 as a string
string SumStringValues(const string& s1, const string& s2)
{
    string result;
    string str1 = s1, str2 = s2;

    // make str2 the bigger string
    if (str1.size() > str2.size())
    {
        swap(str1, str2);
    }

    // pad zeros onto the front of str1 so it's the same size as str2
    while (str1.size() < str2.size())
    {
        str1 = string("0") + str1;
    }

    // now do the addition operation as a loop on these strings
    size_t len = str1.size();
    bool carry = false;
    while (len)
    {
        len--;
        int d1 = str1[len] - '0';
        int d2 = str2[len] - '0';
        int sum = d1 + d2 + (carry ? 1 : 0);
        carry = (sum > 9);
        if (carry)
        {
            sum -= 10;
        }
        result.push_back('0' + sum);
    }
    if (carry)
    {
        result.push_back('1');
    }

    // digits were pushed least significant first, so flip them around
    std::reverse(result.begin(), result.end());
    return result;
}
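For example, SumStringValues("999", "1") pads and adds digit by digit, yielding "1000".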
Next, we need a function to do a "shift left" on a decimal string:
// s is a string of digits only (interpreted as decimal number)
// This function will "shift left" the string by N bits
// Basically "multiplying by 2" N times
string ShiftLeftString(const string& s, size_t N)
{
string result = s;
while (N > 0)
{
result = SumStringValues(result, result); // multiply by 2
N--;
}
return result;
}
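For example, ShiftLeftString("1", 8) doubles "1" eight times, yielding "256".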
Then, to put it all together and convert a byte array to a decimal string:
string MakeStringFromByteArray(unsigned char* data, size_t len)
{
    string result = "0";
    for (size_t i = 0; i < len; i++)
    {
        auto tmp = to_string((unsigned int)data[i]);   // byte to decimal string
        tmp = ShiftLeftString(tmp, (len - i - 1) * 8); // shift left into position
        result = SumStringValues(result, tmp);         // accumulate the running sum
    }
    return result;
}
Now let's test it out on the original 32-bit value we used above:
int main()
{
    // 0x87654321
    unsigned char data[4] = { 0x87, 0x65, 0x43, 0x21 };
    cout << MakeStringFromByteArray(data, 4) << endl;
    return 0;
}
The resulting program will print out 2271560481, the same value as above.
Now let's try it out on a 16 byte value:
int main()
{
    // 0x87654321aabbccddeeff432124681111
    unsigned char data[16] = { 0x87, 0x65, 0x43, 0x21, 0xaa, 0xbb, 0xcc, 0xdd,
                               0xee, 0xff, 0x43, 0x21, 0x24, 0x68, 0x11, 0x11 };
    std::cout << MakeStringFromByteArray(data, sizeof(data)) << endl;
    return 0;
}
The above prints: 179971563002487956319748178665913454865
And we'll use python to double-check our results:
Python 3.8.3 (tags/v3.8.3:6f8c832, May 13 2020, 22:37:02) [MSC v.1924 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> int("0x87654321aabbccddeeff432124681111", 16)
179971563002487956319748178665913454865
>>>
Looks good to me.
I originally had an implementation that would do the chunking and summation in 32-bit chunks instead of 8-bit chunks. However, little-endian vs. big-endian byte-order issues get involved, so I'll leave that potential optimization as an exercise for another day.