I need help with code to convert a signed integer to its 8-bit binary equivalent (in 2's complement) and vice versa. The program should first read in either a signed integer or a binary number (always in 2's complement) provided by the user.
For example, when I input 00110101 to convert to signed decimal, I get 203, but the correct answer is 53.
#include <iostream>
#include <string>
using namespace std;
string decimalToBinary(int);
string findTwosComplement(string);
string decimalToBinary(int n) {
    if (n < 0)
        n = 256 + n;
    string res = "";
    while (n > 0)
    {
        res = string(1, (char)(n % 2 + 48)) + res;
        n = n / 2;
    }
    return res;
}
string findTwosComplement(string s)
{
    int num = s.length();
    int i;
    for (i = num - 1; i >= 0; i--)
        if (s[i] == '1')
            break;
    if (i == -1)
        return '1' + s;
    for (int j = i - 1; j >= 0; j--)
    {
        if (s[j] == '1')
            s[j] = '0';
        else
            s[j] = '1';
    }
    return s;
}
I am assuming you want this for didactic purposes; otherwise you can just do (signed char) n.
The two's complement value of an n-bit number b_{n-1}...b_0 is -b_{n-1}*2^(n-1) + sum(b_k*2^k for 0 <= k < n-1), so for 8 bits you get the usual binary representation for the numbers 0 <= N < 128; the difference is that the most significant bit does not add 128, it subtracts 128. This is why it can represent numbers -128 <= N < 128. The important thing is that you can widen the number by replicating the sign bit. C++ ints are already two's complement numbers, 32 bits wide on typical platforms, so for numbers in the valid range the 24 most significant bits are either all 0 or all 1 and you can just slice off the low 8 bits.
#include <iostream>
#include <string>
using namespace std;

string decimalToBinary(int n) {
    // C++ integers are already stored in two's complement, so you only
    // have to extract the bits.
    if ((n < -128) || (n >= 128)) {
        std::cerr << "The number " << n << " does not fit into an 8-bit two's complement representation" << std::endl;
    }
    string res = "";
    for (int i = 0; i < 8; ++i) {
        res = string(1, (char)(((n >> i) & 1) + '0')) + res;
    }
    return res;
}

int binaryToDecimal(string s) {
    // The most significant bit carries the sign: it counts as -2^(n-1).
    int result = -((int)(s[0] - '0'));
    for (size_t i = 1; i < s.size(); ++i) {
        result = 2 * result + ((int)(s[i] - '0'));
    }
    return result;
}

int main() {
    int input;
    cout << "> ";
    while (cin >> input) {
        string binary = decimalToBinary(input);
        int output = binaryToDecimal(binary);
        cout << input << " -> " << binary << " -> " << output << endl;
        cout << "> ";
    }
}
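For completeness, here is a minimal sketch of the non-didactic shortcut mentioned at the top (casting to a signed 8-bit type); on a two's complement platform the cast simply reinterprets the low 8 bits, which also explains the 203 vs 53 confusion in the question:

#include <cstdint>
#include <iostream>

int main() {
    // 0xCB is 11001011 in binary: 203 read as unsigned, -53 in 8-bit two's complement.
    unsigned int pattern = 0xCB;
    std::int8_t asSigned = static_cast<std::int8_t>(pattern);  // reinterpret the 8 bits
    std::cout << pattern << " -> " << static_cast<int>(asSigned) << '\n';  // prints 203 -> -53
}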
For this program, I input a binary number and it converts it into a decimal number. At the end I want to print the number of digits in the binary number I entered; for example, 1001 has 4 binary digits. The output for the number of digits is always 0. Should I use size_type to do it?
#include <iostream>
#include <string>
#include <bitset>
#include <limits>
#include <algorithm>
using namespace std;

int multiply(int x);

int multiply(int x)
{
    if (x == 0)
    {
        return 1;
    }
    if (x == 1)
    {
        return 2;
    }
    else
    {
        return 2 * multiply(x - 1);
    }
}

int count_set_bit(int n)
{
    int count = 0;
    while (n != 0)
    {
        if (n & 1 == 1)
        {
            count++;
        }
        n = n >> 1;
    }
    return count;
}

int main()
{
    string binary;
    cout << "\n\tDualzahlen : ";
    cin >> binary;
    reverse(binary.begin(), binary.end());
    int sum = 0;
    int size = binary.size();
    for (int x = 0; x < size; x++)
    {
        if (binary[x] == '1')
        {
            sum = sum + multiply(x);
        }
    }
    cout << "\tDezimal : " << sum << endl;
    int n{};
    cout << "\tAnzahl der Stelle : " << count_set_bit(n) << endl;
}
It looks like you are on the right track for unsigned integers. Signed integers in a multiply are generally converted to positive first, with the sign of the result saved separately.
You can save some time by working on the bits directly, as testing values is not cost free and a bitwise 'and' has no carry delay. Of course, some CPUs can do ++ faster than += 1, but hopefully the compiler knows how to use that:
int bits( unsigned long num ){
    int retVal = 0;
    while ( num ){
        retVal += ( num & 1 );
        num >>= 1;
    }
    return retVal;
}
I recall the H-4201 multiplied 2 bits at a time, using maybe shift, maybe add, maybe carry/borrow, so 0 was no add, 1 was add, 2 was shift and add, 3 was carry/borrow add (4 - 1)! That was before IC multiply got buried in circuits and ROMs. :D
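As a side note, with C++20 you can let std::popcount from <bit> count the set bits in one call, which typically compiles down to a single instruction:

#include <bit>        // C++20
#include <iostream>

int main() {
    unsigned long num = 0b101101;                // 45, which has four set bits
    std::cout << std::popcount(num) << '\n';     // prints 4
}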
Problem: Children are taught to add multi-digit numbers from right to left, one digit at a time. Many find the “carry” operation, where a 1 is carried from one digit position to the next, to be a significant challenge. Your job is to count the number of carry operations for each of a set of addition problems so that educators may assess their difficulty.
Input: Each line of input contains two unsigned integers of less than 10 digits. The last line of input contains “0 0”.
Output: For each line of input except the last, compute the number of carry operations that result from adding the two numbers and print them in the format shown below.
This is my original code, which works for inputs of less than 10 digits:
#include <iostream>
#include <cstdio>
#include <cstdlib>

int main(void){
    unsigned int n1, n2, remain_n1, remain_n2, carry;
    while(1)
    {
        std::cin >> n1 >> n2;
        if(n1 == 0 && n2 == 0)
            break;
        int carry = 0;
        int count = 0;
        int sum = 0;
        while(n1 != 0 || n2 != 0)
        {
            remain_n1 = n1 % 10;
            remain_n2 = n2 % 10;
            if(carry == 1)
                remain_n1++;
            sum = remain_n1 + remain_n2;
            carry = 0;
            if(sum >= 10){
                carry = carry + 1;
                count = count + 1;
            }
            n1 = n1 / 10;
            n2 = n2 / 10;
        }
        if(count == 0)
            std::cout << "No carry operation." << std::endl;
        else if(count == 1)
            std::cout << count << " " << "carry operation" << std::endl;
        else
            std::cout << count << " " << "carry operations" << std::endl;
    }
    return 0;
}
The problem says the inputs have less than 10 digits, but I want to change the code to handle inputs of any length, so I switched to strings. This is my attempt; how should I fix it?
std::string n1, n2;
while(1){
    std::cin >> n1 >> n2;
    if(n1 == "0" && n2 == "0")
        break;
    int max_len = n1.size();
    if (max_len < n2.size())
        max_len = n2.size();
    int nn1[max_len] = {0};
    int nn2[max_len] = {0};
    for(int i = 0; i < n1.size(); i++)
        nn1[max_len - n1.size() + i] = n1[i] - '0';
}
You don't need any extra storage for the numbers; you can use the strings' digits directly and convert as you go.
Something like this, perhaps:
std::string a, b;
std::cin >> a >> b;
int carry = 0;
int count = 0;
// Iterate over both numbers in reverse (least significant digit first).
auto ia = a.rbegin(), ib = b.rbegin();
for (; ia != a.rend() && ib != b.rend(); ++ia, ++ib)
{
    int sum = (*ia - '0') + (*ib - '0') + carry;
    carry = sum >= 10;
    count += carry;
}
// Keep propagating the carry through whichever number still has digits left.
for (; ia != a.rend(); ++ia) { carry = ((*ia - '0') + carry) >= 10; count += carry; }
for (; ib != b.rend(); ++ib) { carry = ((*ib - '0') + carry) >= 10; count += carry; }
I wish to create a string with up to 46 octets, filled in with 7-bit ASCII chars. For example, for the string 'Hello':
1. I take the last 7 bits of 'H' (0x48 - 100 1000) and put it in the first 7 bits of the first octet.
2. I take the next char 'e' (0x65 - 110 0101); the first bit will go to the last bit of the first octet, then it will fill the next 6 bits of octet 2.
3. Repeat 1-2 until the end of the string, then the rest of the octets will be filled in with 1's.
Here is my attempt, which I have worked on quite a bit. I have tried using bitset, but it does not seem appropriate for this task since I do not always need 46 octets: if the string can fit in 12 (or 24, or 36) octets, with the rest filled in by 1's, then I do not have to use 46.
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main()
{
    std::string a = "Hello";
    int N = 0;
    if (a.size() <= 11) {
        // I'm supposed to implement some logic here to check if it
        // will fit 12, 24, 36 or 46 octets but I will do it later.
        N = 80;
    }
    std::vector<bool> temp(N);
    int j = 0;
    for (int i = 0; i < a.size(); i++) {
        std::vector<bool> chartemp(a[i]);
        cout << a[i] << "\n";
        cout << chartemp[0] << "\n";
        cout << chartemp[1] << "\n";
        cout << chartemp[2] << "\n";
        cout << chartemp[3] << "\n";
        cout << chartemp[4] << "\n";
        temp[j++] = chartemp[0];
        temp[j++] = chartemp[1];
        temp[j++] = chartemp[2];
        temp[j++] = chartemp[3];
        temp[j++] = chartemp[4];
        temp[j++] = chartemp[5];
        temp[j++] = chartemp[6];
    }
    for (int k = j; k < N; k++) {
        temp[j++] = 1;
    }
    std::string s = "";
    for (int l = 0; l <= temp.size(); l++)
    {
        if (temp[l]) {
            s += '1';
        }
        else {
            s += '0';
        }
    }
    cout << s << "\n";
}
The result is
000000000000000000000000000000000001111111111111111111111111111111111111111111110
It seems you expect the statement std::vector<bool> chartemp(a[i]) to copy the i'th character of a into the vector as a series of bits. However, that vector constructor interprets the value as the initial size, and a[i] is the ASCII value of the respective character in a (e.g. 72 for 'H'). So you end up creating vectors far larger than expected, with every position initialized to false.
Instead, I'd suggest using bit masking:
temp[j++] = a[i] & (1 << 6);
temp[j++] = a[i] & (1 << 5);
temp[j++] = a[i] & (1 << 4);
temp[j++] = a[i] & (1 << 3);
temp[j++] = a[i] & (1 << 2);
temp[j++] = a[i] & (1 << 1);
temp[j++] = a[i] & (1 << 0);
And instead of using temp[j++], you could use temp.push_back(a[i] & (1 << 0)), which also avoids having to initialize the vector with the right size up front.
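A minimal sketch of that push_back variant, reusing the string a from the question but declaring temp without a size:

std::vector<bool> temp;                          // no pre-sizing needed
for (std::size_t i = 0; i < a.size(); ++i) {
    for (int bit = 6; bit >= 0; --bit) {
        temp.push_back((a[i] >> bit) & 1);       // 7 bits per character, MSB first
    }
}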
Try something like this:
#include <string>
#include <vector>
std::string stuffIt(const std::string &str, const int maxOctets)
{
    const int maxBits = maxOctets * 8;
    const int maxChars = maxBits / 7;
    if (str.size() > maxChars)
    {
        // too many chars to stuff into maxOctets!
        return "";
    }
    std::vector<bool> temp(maxBits);
    int idx = temp.size() - 1;
    for (int i = 0; i < str.size(); ++i)
    {
        char ch = str[i];
        for (int j = 0; j < 7; ++j)
            temp[idx--] = (ch >> (6 - j)) & 1;
    }
    int numBits = (((7 * str.size()) + 7) & ~7);
    for (int i = (temp.size() - numBits - 1); i >= 0; --i) {
        temp[i] = 1;
    }
    std::string s;
    s.reserve(temp.size());
    for (int j = temp.size() - 1; j >= 0; --j)
        s.push_back(temp[j] ? '1' : '0');
    return s;
}
stuffIt("Hello", 12) returns:
100100011001011101100110110011011110000011111111111111111111111111111111111111111111111111111111
stuffIt("Hello", 24) returns:
100100011001011101100110110011011110000011111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
stuffIt("Hello", 36) returns:
100100011001011101100110110011011110000011111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
stuffIt("Hello", 46) returns:
10010001100101110110011011001101111000001111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
If you want to know how many octets a given string will require (without adding octets full of 1s), you can use this formula:
const int numChars = str.size();
const int numBits = (numChars * 7);
const int bitsNeeded = ((numBits + 7) & ~7);
const int octetsNeeded = (bitsNeeded / 8);
If you want the extra 1s, just round octetsNeeded up to the desired value (for instance, the next multiple of 12).
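As a quick sanity check of that formula, here is a minimal sketch; the values in the comments are for "Hello":

#include <iostream>
#include <string>

int main()
{
    const std::string str = "Hello";
    const int numChars = str.size();               // 5
    const int numBits = (numChars * 7);            // 35
    const int bitsNeeded = ((numBits + 7) & ~7);   // 40, rounded up to whole octets
    const int octetsNeeded = (bitsNeeded / 8);     // 5, which then rounds up to 12
    std::cout << numChars << " chars -> " << octetsNeeded << " octets\n";
}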
I am trying to write code that takes a binary number input as a string and only accepts 1's or 0's; otherwise an error message should be displayed. Then it should go through a loop digit by digit to convert the binary string to decimal. I have the part that only accepts 1's or 0's working, but when it gets into the calculations something goes wrong and I can't figure out what. This is the closest I have gotten to a working version. Could anyone give me a hint or help me with what I am doing wrong?
#include <iostream>
#include <string>
using namespace std;

string a;
int input();

int main()
{
    input();
    int decimal, x = 0, length, total = 0;
    length = a.length();
    // atempting to make it put the digits through a formula backwords.
    for (int i = length; i >= 0; i--)
    {
        // Trying to make it only add the 2^x if the number is 1
        if (a[i] = '1')
        {
            //should make total equal to the old total plus 2^x if a[i] = 1
            total = total + pow(x,2);
        }
        //trying to let the power start at 0 and go up each run of the loop
        x++;
    }
    cout << endl << total;
    int stop;
    cin >> stop;
    return 0;
}

int input()
{
    int x, x2, count, repeat = 0;
    while (repeat == 0)
    {
        cout << "Enter a string representing a binary number => ";
        cin >> a;
        count = a.length();
        for (x = 0; x < count; x++)
        {
            if (a[x] != '0' && a[x] != '1')
            {
                cout << a << " is not a string representing a binary number>" << endl;
                repeat = 0;
                break;
            }
            else
                repeat = 1;
        }
    }
    return 0;
}
I don't think pow is suitable for integer calculation; in this case, you can use the shift operator.
a[i] = '1' sets the value of a[i] to '1' and returns '1', which is always true; you want a[i] == '1'.
You shouldn't access a[length]; that index is one past the last character.
Fixed code:
int main()
{
    input();
    int decimal, x = 0, length, total = 0;
    length = a.length();
    // atempting to make it put the digits through a formula backwords.
    for (int i = length - 1; i >= 0; i--)
    {
        // Trying to make it only add the 2^x if the number is 1
        if (a[i] == '1')
        {
            //should make total equal to the old total plus 2^x if a[i] = 1
            total = total + (1 << x);
        }
        //trying to let the power start at 0 and go up each run of the loop
        x++;
    }
    cout << endl << total;
    int stop;
    cin >> stop;
    return 0;
}
I would use this approach...
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string str{ "10110011" }; // max length can be sizeof(int) x 8
    int dec = 0, mask = 1;
    for (int i = str.length() - 1; i >= 0; i--) {
        if (str[i] == '1') {
            dec |= mask;
        }
        mask <<= 1;
    }
    cout << "Decimal number is: " << dec;
    // system("pause");
    return 0;
}
Works for binary strings up to 32 bits. Swap int out for a 64-bit type such as long long to get 64 bits.
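A 64-bit variant of the same approach might look like this (a sketch using unsigned long long so the mask can safely reach the top bit):

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string str{ "1111000011110000111100001111000011110000" }; // up to 64 digits
    unsigned long long dec = 0, mask = 1;
    for (int i = str.length() - 1; i >= 0; i--) {
        if (str[i] == '1') {
            dec |= mask;
        }
        mask <<= 1;
    }
    cout << "Decimal number is: " << dec;
    return 0;
}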
#include <iostream>
#include <stdio.h>
#include <string>
using namespace std;

string getBinaryString(int value, unsigned int length, bool reverse) {
    string output = string(length, '0');
    if (!reverse) {
        for (unsigned int i = 0; i < length; i++) {
            if ((value & (1 << i)) != 0) {
                output[i] = '1';
            }
        }
    }
    else {
        for (unsigned int i = 0; i < length; i++) {
            if ((value & (1 << (length - i - 1))) != 0) {
                output[i] = '1';
            }
        }
    }
    return output;
}

unsigned long getInteger(const string& input, size_t lsbindex, size_t msbindex) {
    unsigned long val = 0;
    unsigned int offset = 0;
    if (lsbindex > msbindex) {
        size_t length = lsbindex - msbindex;
        for (size_t i = msbindex; i <= lsbindex; i++, offset++) {
            if (input[i] == '1') {
                val |= (1 << (length - offset));
            }
        }
    }
    else { //lsbindex < msbindex
        for (size_t i = lsbindex; i <= msbindex; i++, offset++) {
            if (input[i] == '1') {
                val |= (1 << offset);
            }
        }
    }
    return val;
}

int main() {
    int value = 23;
    cout << value << ": " << getBinaryString(value, 5, false) << endl;
    string str = "01011";
    cout << str << ": " << getInteger(str, 1, 3) << endl;
}
I see multiple mistakes in your code.
Your for-loop should start at i = length - 1 instead of i = length.
a[i] = '1' sets a[i] to '1' and does not compare it.
pow(x,2) means x^2 and not 2^x. pow is also not designed for integer operations. Use 2*2*... or 1 << x instead.
Also, there are shorter ways to achieve it. Here is an example of how I would do it:
std::size_t fromBinaryString(const std::string &str)
{
    std::size_t result = 0;
    for (std::size_t i = 0; i < str.size(); ++i)
    {
        // '0' - '0' == 0 and '1' - '0' == 1.
        // If you don't want to assume that, you can use if or switch
        result = (result << 1) + str[i] - '0';
    }
    return result;
}
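For example, a quick check of that function (assuming the definition above is in scope):

#include <iostream>
#include <string>

int main()
{
    std::cout << fromBinaryString("1101") << '\n';   // prints 13 (8 + 4 + 1)
}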
We were asked to make a function that converts decimal numbers into binary; it takes an integer as input but must give a string as output. How do I store the array of digits into one string?
string DecToBin(int num)
{
    string res;
    for (int n = 15; n >= 0; n--)
    {
        if ((num - pow(2, n)) >= 0)
        {
            res[n] = 1;
            num -= pow(2, n);
        }
        else
        {
            res[n] = 0;
        }
    }
    for (int n = 15; n >= 0; n--)
    {
        res[n];
    }
}
You just return the string
return res;
that's it.
You could do something like this,
#include <iostream>
#include <string>

std::string foo(int n)
{
    std::string s;
    while (n > 0) {
        s += n % 2 == 0 ? '0' : '1';
        n = (n >> 1);
    }
    return std::string(s.rbegin(), s.rend());
}

int main()
{
    int n = 13;
    std::cout << foo(n) << std::endl;
    return 0;
}
Prints
1101
As you have already gotten answers on how to return the result, I'll add a supplement on generating the string.
As an alternative, you can include <climits>, which gives you CHAR_BIT, and then calculate the number of bits by e.g.:
#include <climits>
int bits = CHAR_BIT * sizeof(int);
or you can use <limits> and say something like:
#include <limits>
int bits = std::numeric_limits<int>::digits;
Then you just start the loop from the most significant bit:
string int2bitstr(int n)
{
    int bits = numeric_limits<int>::digits;
    string s;
    int i;
    for (i = bits - 1; i >= 0; --i)
And say something like:
s += n & (1 << i) ? '1' : '0';
or:
s += (n >> i) & 1 ? '1' : '0';
or the bit (no pun intended) quicker:
s += ((n >> i) & 1) + '0';
The reason the last one works is that you add the character value of the glyph '0' to the number 0 or 1, in effect giving '0' or '1'.
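Putting the pieces together, a complete version of that sketch might look like this (using the <limits> variant; note that numeric_limits<int>::digits counts only the value bits, excluding the sign bit):

#include <limits>
#include <string>
using namespace std;

string int2bitstr(int n)
{
    int bits = numeric_limits<int>::digits;   // 31 for a typical 32-bit int
    string s;
    for (int i = bits - 1; i >= 0; --i)
        s += ((n >> i) & 1) + '0';
    return s;
}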
Just add
return res;
to the end of your function.