I wrote a 'simple' (it took me 30 minutes) program that converts a decimal number to binary. I am SURE that there's a much simpler way, so can you show me?
Here's the code:
#include <iostream>
#include <stdlib.h>
using namespace std;
int a1, a2, remainder;
int tab = 0;
int maxtab = 0;
int table[32]; //room for the bits of an int
int main()
{
system("clear");
cout << "Enter a decimal number: ";
cin >> a1;
a2 = a1; //we need our number for later on so we save it in another variable
while (a1!=0) //dividing by two until we hit 0
{
remainder = a1%2; //getting a remainder - decimal number(1 or 0)
a1 = a1/2; //dividing our number by two
maxtab++; //+1 to max elements of the table
}
maxtab--; //-1 to max elements of the table (when dividing finishes it adds 1 additional element that we don't want and it's equal to 0)
a1 = a2; //we must do calculations one more time so we're getting back our original number
while (a1!=0) //same calculations 2nd time but adding every 1 or 0 (remainder) to separate element in table
{
remainder = a1%2; //getting a remainder
a1 = a1/2; //dividing by 2
table[tab] = remainder; //adding 0 or 1 to an element
tab++; //tab (element count) increases by 1 so next remainder is saved in another element
}
tab--; //same as with maxtab--
cout << "Your binary number: ";
while (tab>=0) //until we get to the 0 (1st) element of the table
{
cout << table[tab] << " "; //write the value of an element (0 or 1)
tab--; //decreasing by 1 so we show 0's and 1's FROM THE BACK (correct way)
}
cout << endl;
return 0;
}
By the way, it's complicated, but I tried my best.
edit - Here is the solution I ended up using:
std::string toBinary(int n)
{
std::string r;
while(n!=0) {r=(n%2==0 ?"0":"1")+r; n/=2;}
return r;
}
std::bitset has a .to_string() method that returns a std::string holding a text representation in binary, with leading-zero padding.
Choose the width of the bitset as needed for your data, e.g. std::bitset<32> to get 32-character strings from 32-bit integers.
#include <iostream>
#include <bitset>
int main()
{
std::string binary = std::bitset<8>(128).to_string(); //to binary
std::cout<<binary<<"\n";
unsigned long decimal = std::bitset<8>(binary).to_ulong();
std::cout<<decimal<<"\n";
return 0;
}
EDIT: Please do not edit my answer for Octal and Hexadecimal. The OP specifically asked for Decimal To Binary.
The following is a recursive function which takes a positive integer and prints its binary digits to the console.
As Alex suggested, for efficiency you may want to remove printf() and store the result in memory; depending on the storage method, the result may come out reversed.
/**
* Takes an unsigned integer, converts it into binary and prints it to the console.
* @param n the number to convert and print
*/
void convertToBinary(unsigned int n)
{
if (n / 2 != 0) {
convertToBinary(n / 2);
}
printf("%d", n % 2);
}
Credits to UoA ENGGEN 131
*Note: The benefit of using an unsigned int is that it can't be negative.
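A minimal sketch of the in-memory variant mentioned above (my own illustration, not part of the original answer; the name convertToBinaryString is made up): collect the remainders in a std::string and reverse it at the end, since they are produced lowest bit first.
#include <algorithm>
#include <string>
std::string convertToBinaryString(unsigned int n)
{
    std::string bits;
    do {
        bits.push_back('0' + (n % 2)); // lowest bit first
        n /= 2;
    } while (n != 0);
    std::reverse(bits.begin(), bits.end()); // highest bit first, as usually written
    return bits;
}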
You can use std::bitset to convert a number to its binary format.
Use the following code snippet:
std::string binary = std::bitset<8>(n).to_string();
I found this on stackoverflow itself. I am attaching the link.
A pretty straight forward solution to print binary:
#include <iostream>
using namespace std;
int main()
{
int num,arr[64];
cin>>num;
int i=0,r;
while(num!=0)
{
r = num%2;
arr[i++] = r;
num /= 2;
}
for(int j=i-1;j>=0;j--){
cout<<arr[j];
}
}
Non recursive solution:
#include <iostream>
#include<string>
std::string toBinary(int n)
{
std::string r;
while(n!=0) {r=(n%2==0 ?"0":"1")+r; n/=2;}
return r;
}
int main()
{
std::string i= toBinary(10);
std::cout<<i;
}
Recursive solution:
#include <iostream>
#include<string>
std::string r="";
std::string toBinary(int n)
{
r=(n%2==0 ?"0":"1")+r;
if (n / 2 != 0) {
toBinary(n / 2);
}
return r;
}
int main()
{
std::string i=toBinary(10);
std::cout<<i;
}
An int variable is not in decimal, it's in binary. What you're looking for is a binary string representation of the number, which you can get by applying a mask that filters individual bits, and then printing them:
for( int i = sizeof(value)*CHAR_BIT-1; i>=0; --i)
cout << ((value & (1 << i)) ? '1' : '0');
That's the solution if your question is algorithmic. If not, you should use the std::bitset class to handle this for you:
bitset< sizeof(value)*CHAR_BIT > bits( value );
cout << bits.to_string();
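Assembled into a complete program (my own sketch; the variable name value and the test input 42 are arbitrary):
#include <bitset>
#include <climits>
#include <iostream>
int main()
{
    unsigned int value = 42; // arbitrary test value
    // bit-masking loop, highest bit first
    for (int i = sizeof(value) * CHAR_BIT - 1; i >= 0; --i)
        std::cout << ((value & (1u << i)) ? '1' : '0');
    std::cout << '\n';
    // the same thing via std::bitset
    std::bitset<sizeof(value) * CHAR_BIT> bits(value);
    std::cout << bits.to_string() << '\n';
}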
Here are two approaches. The one is similar to your approach
#include <iostream>
#include <string>
#include <limits>
#include <algorithm>
int main()
{
while ( true )
{
std::cout << "Enter a non-negative number (0-exit): ";
unsigned long long x = 0;
std::cin >> x;
if ( !x ) break;
const unsigned long long base = 2;
std::string s;
s.reserve( std::numeric_limits<unsigned long long>::digits );
do { s.push_back( x % base + '0' ); } while ( x /= base );
std::cout << std::string( s.rbegin(), s.rend() ) << std::endl;
}
}
and the other uses std::bitset as others suggested.
#include <iostream>
#include <string>
#include <bitset>
#include <limits>
int main()
{
while ( true )
{
std::cout << "Enter a non-negative number (0-exit): ";
unsigned long long x = 0;
std::cin >> x;
if ( !x ) break;
std::string s =
std::bitset<std::numeric_limits<unsigned long long>::digits>( x ).to_string();
std::string::size_type n = s.find( '1' );
std::cout << s.substr( n ) << std::endl;
}
}
The conversion from a natural number to a binary string:
string toBinary(int n) {
    if (n==0) return "0";
    else if (n==1) return "1";
    else if (n%2 == 0) return toBinary(n/2) + "0";
    else return toBinary(n/2) + "1";
}
For this, in C++ you can use the itoa() function. This function converts an integer to its binary, decimal, hexadecimal, or octal string representation.
#include<bits/stdc++.h>
using namespace std;
int main(){
int a;
char res[1000];
cin>>a;
itoa(a,res,10);
cout<<"Decimal- "<<res<<endl;
itoa(a,res,2);
cout<<"Binary- "<<res<<endl;
itoa(a,res,16);
cout<<"Hexadecimal- "<<res<<endl;
itoa(a,res,8);
cout<<"Octal- "<<res<<endl;return 0;
}
However, it is only supported by specific compilers.
You can see also: itoa - C++ Reference
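A portable alternative, for what it's worth (my own sketch, not part of the original answer): since C++17, std::to_chars from <charconv> accepts a base argument from 2 to 36, so it can produce binary without relying on the non-standard itoa().
#include <charconv>
#include <iostream>
#include <string>
int main()
{
    int a = 100;
    char buf[64];
    auto res = std::to_chars(buf, buf + sizeof buf, a, 2); // base 2
    std::cout << "Binary- " << std::string(buf, res.ptr) << '\n';
}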
Here is a modern variant that can be used for ints of different sizes.
#include <type_traits>
#include <bitset>
template<typename T>
std::enable_if_t<std::is_integral_v<T>,std::string>
encode_binary(T i){
return std::bitset<sizeof(T) * 8>(i).to_string();
}
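For example (my own usage note, assuming the encode_binary template above is in scope and <cstdint>/<iostream> are included):
std::cout << encode_binary<std::uint16_t>(5); // prints 0000000000000101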
Your solution needs a modification. The final string should be reversed before returning:
std::reverse(r.begin(), r.end());
return r;
DECIMAL TO BINARY NO ARRAYS USED *made by Oya:
I'm still a beginner, so this code will only use loops and variables xD...
Hope you like it. This can probably be made simpler than it is...
#include <iostream>
#include <cmath>
#include <cstdlib>
using namespace std;
int main()
{
int i;
int expoentes; //the sequence > pow(2,i) or 2^i
int decimal = 0; //initialised so the first loop test below is well-defined
int extra; //this will be used to add some 0s between the 1s
int x = 1;
cout << "\nThis program converts natural numbers into binary code\nPlease enter a Natural number:";
cout << "\n\nWARNING: Only works up to about 1.073 billion (2^30)\n";
cout << " To exit, enter a negative number\n\n";
while(decimal >= 0){
cout << "\n----- // -----\n\n";
cin >> decimal;
cout << "\n";
if(decimal == 0){
cout << "0";
}
while(decimal >= 1){
i = 0;
expoentes = 1;
while(decimal >= expoentes){
i++;
expoentes = pow(2,i);
}
x = 1;
cout << "1";
decimal -= pow(2,i-x);
extra = pow(2,i-1-x);
while(decimal < extra){
cout << "0";
x++;
extra = pow(2,i-1-x);
}
}
}
return 0;
}
Here is a simple converter using std::string as the container. It also accepts a negative value.
#include <iostream>
#include <string>
#include <limits>
int main()
{
int x = -14;
int n = std::numeric_limits<int>::digits - 1;
std::string s;
s.reserve(n + 1);
do
s.push_back(((x >> n) & 1) + '0');
while(--n > -1);
std::cout << s << '\n';
}
This is an even simpler program
//Program to convert Decimal into Binary
#include<iostream>
using namespace std;
int main()
{
long int dec;
int rem,i,j,bin[100],count=-1;
again:
cout<<"ENTER THE DECIMAL NUMBER:- ";
cin>>dec;//input of Decimal
if(dec<0)
{
cout<<"PLEASE ENTER A POSITIVE DECIMAL";
goto again;
}
else
{
cout<<"\nITS BINARY FORM IS:- ";
for(i=0;dec!=0;i++)//making array of binary, but reversed
{
rem=dec%2;
bin[i]=rem;
dec=dec/2;
count++;
}
for(j=count;j>=0;j--)//reversed binary is printed in correct order
{
cout<<bin[j];
}
}
return 0;
}
There is in fact a very simple way to do so: use a recursive function which is given the number (int) as its parameter. It is pretty easy to understand, and you can add other conditions/variations too. Here is the code:
int binary(int num)
{
int rem;
if (num <= 1)
{
cout << num;
return num;
}
rem = num % 2;
binary(num / 2);
cout << rem;
return rem;
}
// function to convert decimal to binary
void decToBinary(int n)
{
// array to store binary number
int binaryNum[1000];
// counter for binary array
int i = 0;
while (n > 0) {
// storing remainder in binary array
binaryNum[i] = n % 2;
n = n / 2;
i++;
}
// printing binary array in reverse order
for (int j = i - 1; j >= 0; j--)
cout << binaryNum[j];
}
Refer to:
https://www.geeksforgeeks.org/program-decimal-binary-conversion/
Or, using std::bitset:
#include<bits/stdc++.h>
using namespace std;
int main()
{
int n;cin>>n;
cout<<bitset<8>(n).to_string()<<endl;
}
Or, using a left shift:
#include<bits/stdc++.h>
using namespace std;
int main()
{
// here n is the number of bit representation we want
int n;cin>>n;
// num is a number whose binary representation we want
int num;
cin>>num;
for(int i=n-1;i>=0;i--)
{
if( num & ( 1 << i ) ) cout<<1;
else cout<<0;
}
}
#include <iostream>
#include <bitset>
#define bits(x) (std::string( \
std::bitset<8>(x).to_string<char,std::string::traits_type, std::string::allocator_type>() ).c_str() )
int main() {
std::cout << bits( -86 >> 1 ) << ": " << (-86 >> 1) << std::endl;
return 0;
}
Okay.. I might be a bit new to C++, but I feel the above examples don't quite get the job done right.
Here's my take on this situation.
char* DecimalToBinary(unsigned __int64 value, int bit_precision)
{
int length = (bit_precision + 7) >> 3 << 3;
static char* binary = new char[1 + length];
int begin = length - bit_precision;
unsigned __int64 bit_value = 1;
for (int n = length; --n >= begin; )
{
binary[n] = 48 | ((value & bit_value) == bit_value);
bit_value <<= 1;
}
for (int n = begin; --n >= 0; )
binary[n] = 48;
binary[length] = 0;
return binary;
}
#value = The Value we are checking.
#bit_precision = The highest left most bit to check for.
#Length = The Maximum Byte Block Size. E.g. 7 = 1 Byte and 9 = 2 Byte, but we represent this in form of bits so 1 Byte = 8 Bits.
#binary = just some dumb name I gave to the array of chars we are setting. We make it static so it won't be recreated with every call. For simply getting a result and displaying it this works fine, but if, say, you wanted to display multiple results in a UI, they would all show up as the last result. This can be fixed by removing static, but then make sure you delete [] the result when you are done with it.
#begin = This is the lowest index that we are checking. Everything beyond this point is ignored. Or as shown in 2nd loop set to 0.
#first loop - Here we set the value to 48 and basically add a 0 or 1 to 48 based on the bool value of (value & bit_value) == bit_value. If this is true the char is set to 49. If this is false the char is set to 48. Then we shift the bit_value or basically multiply it by 2.
#second loop - Here we set all the indexes we ignored to 48 or '0'.
SOME EXAMPLE OUTPUTS!!!
int main()
{
int val = -1;
std::cout << DecimalToBinary(val, 1) << '\n';
std::cout << DecimalToBinary(val, 3) << '\n';
std::cout << DecimalToBinary(val, 7) << '\n';
std::cout << DecimalToBinary(val, 33) << '\n';
std::cout << DecimalToBinary(val, 64) << '\n';
std::cout << "\nPress any key to continue. . .";
std::cin.ignore();
return 0;
}
00000001 //Value = 2^1 - 1
00000111 //Value = 2^3 - 1.
01111111 //Value = 2^7 - 1.
0000000111111111111111111111111111111111 //Value = 2^33 - 1.
1111111111111111111111111111111111111111111111111111111111111111 //Value = 2^64 - 1.
SPEED TESTS
Original Question's Answer: "Method: toBinary(int);"
Executions: 10,000 , Total Time (Milli): 4701.15 , Average Time (Nanoseconds): 470114
My Version: "Method: DecimalToBinary(int, int);"
//Using 64 Bit Precision.
Executions: 10,000,000 , Total Time (Milli): 3386 , Average Time (Nanoseconds): 338
//Using 1 Bit Precision.
Executions: 10,000,000, Total Time (Milli): 634, Average Time (Nanoseconds): 63
Below is simple C code that converts binary to decimal and back again. I wrote it long ago for a project in which the target was an embedded processor and the development tools had a stdlib that was way too big for the firmware ROM.
This is generic C code that does not use any library, nor does it use division or the remainder (%) operator (which is slow on some embedded processors), nor does it use any floating point, nor any table lookup, nor does it emulate any BCD arithmetic. What it does make use of is the type long long, more specifically unsigned long long (or uint64_t), so if your embedded processor (and the C compiler that goes with it) cannot do 64-bit integer arithmetic, this code is not for your application. Otherwise, I think this is production quality C code (maybe after changing long to int32_t and unsigned long long to uint64_t). I have run this overnight to test it for all 2³² signed integer values and there is no error in conversion in either direction.
We had a C compiler/linker that could generate executables and we needed to do what we could without any stdlib (which was a pig). So no printf() nor scanf(). Not even an sprintf() nor sscanf(). But we still had a user interface and had to convert base-10 numbers into binary and back. (We also made up our own malloc()-like utility and our own transcendental math functions.)
So this was how I did it (the main program and calls to stdlib were there for testing this thing on my mac, not for the embedded code). Also, because some older dev systems don't recognize "int64_t" and "uint64_t" and similar types, the types long long and unsigned long long are used and assumed to be 64 bits. And long is assumed to be 32 bits. I guess I could have typedefed it.
// returns an error code, 0 if no error,
// -1 if too big, -2 for other formatting errors
int decimal_to_binary(char *dec, long *bin)
{
int i = 0;
int past_leading_space = 0;
while (i <= 64 && !past_leading_space) // first get past leading spaces
{
if (dec[i] == ' ')
{
i++;
}
else
{
past_leading_space = 1;
}
}
if (!past_leading_space)
{
return -2; // 64 leading spaces does not a number make
}
// at this point the only legitimate remaining
// chars are decimal digits or a leading plus or minus sign
int negative = 0;
if (dec[i] == '-')
{
negative = 1;
i++;
}
else if (dec[i] == '+')
{
i++; // do nothing but go on to next char
}
// now the only legitimate chars are decimal digits
if (dec[i] == '\0')
{
return -2; // there needs to be at least one good
} // digit before terminating string
unsigned long abs_bin = 0;
while (i <= 64 && dec[i] != '\0')
{
if ( dec[i] >= '0' && dec[i] <= '9' )
{
if (abs_bin > 214748364)
{
return -1; // this is going to be too big
}
abs_bin *= 10; // previous value gets bumped to the left one digit...
abs_bin += (unsigned long)(dec[i] - '0'); // ... and a new digit appended to the right
i++;
}
else
{
return -2; // not a legit digit in text string
}
}
if (dec[i] != '\0')
{
return -2; // not terminated string in 64 chars
}
if (negative)
{
if (abs_bin > 2147483648)
{
return -1; // too big
}
*bin = -(long)abs_bin;
}
else
{
if (abs_bin > 2147483647)
{
return -1; // too big
}
*bin = (long)abs_bin;
}
return 0;
}
void binary_to_decimal(char *dec, long bin)
{
unsigned long long acc; // 64-bit unsigned integer
if (bin < 0)
{
*(dec++) = '-'; // leading minus sign
bin = -bin; // make bin value positive
}
acc = 989312855LL*(unsigned long)bin; // very nearly 0.2303423488 * 2^32
acc += 0x00000000FFFFFFFFLL; // we need to round up
acc >>= 32;
acc += 57646075LL*(unsigned long)bin;
// (2^59)/(10^10) = 57646075.2303423488 = 57646075 + (989312854.979825)/(2^32)
int past_leading_zeros = 0;
for (int i=9; i>=0; i--) // maximum number of digits is 10
{
acc <<= 1;
acc += (acc<<2); // an efficient way to multiply a long long by 10
// acc *= 10;
unsigned int digit = (unsigned int)(acc >> 59); // the digit we want is in bits 59 - 62
if (digit > 0)
{
past_leading_zeros = 1;
}
if (past_leading_zeros)
{
*(dec++) = '0' + digit;
}
acc &= 0x07FFFFFFFFFFFFFFLL; // mask off this digit and go on to the next digit
}
if (!past_leading_zeros) // if all digits are zero ...
{
*(dec++) = '0'; // ... put in at least one zero digit
}
*dec = '\0'; // terminate string
}
#if 1
#include <stdlib.h>
#include <stdio.h>
int main (int argc, const char* argv[])
{
char dec[64];
long bin, result1, result2;
unsigned long num_errors;
long long long_long_bin;
num_errors = 0;
for (long_long_bin=-2147483648LL; long_long_bin<=2147483647LL; long_long_bin++)
{
bin = (long)long_long_bin;
if ((bin&0x00FFFFFFL) == 0)
{
printf("bin = %ld \n", bin); // this is to tell us that things are moving along
}
binary_to_decimal(dec, bin);
decimal_to_binary(dec, &result1);
sscanf(dec, "%ld", &result2); // decimal_to_binary() should do the same as this sscanf()
if (bin != result1 || bin != result2)
{
num_errors++;
printf("bin = %ld, result1 = %ld, result2 = %ld, num_errors = %ld, dec = %s \n",
bin, result1, result2, num_errors, dec);
}
}
printf("num_errors = %ld \n", num_errors);
return 0;
}
#else
#include <stdlib.h>
#include <stdio.h>
int main (int argc, const char* argv[])
{
char dec[64];
long bin;
printf("bin = ");
scanf("%ld", &bin);
while (bin != 0)
{
binary_to_decimal(dec, bin);
printf("dec = %s \n", dec);
printf("bin = ");
scanf("%ld", &bin);
}
return 0;
}
#endif
My way of converting decimal to binary in C++. Since it uses mod and division, the same approach would also work for hexadecimal or octal. You can also specify the number of bits. This function keeps calculating the least significant bit and places it at the end of the string. If you are not familiar with this method, you can visit: https://www.wikihow.com/Convert-from-Decimal-to-Binary
#include <bits/stdc++.h>
using namespace std;
string itob(int bits, int n) {
int count;
char str[bits + 1]; // +1 to append NULL character.
str[bits] = '\0'; // The NULL character in a character array flags the end
// of the string, not appending it may cause problems.
count = bits - 1; // If the length of a string is n, than the index of the
// last character of the string will be n - 1. Cause the
// index is 0 based not 1 based. Try yourself.
do {
if (n % 2)
str[count] = '1';
else
str[count] = '0';
n /= 2;
count--;
} while (n > 0);
while (count > -1) {
str[count] = '0';
count--;
}
return str;
}
int main() {
cout << itob(1, 0) << endl; // 0 in 1 bit binary.
cout << itob(2, 1) << endl; // 1 in 2 bit binary.
cout << itob(3, 2) << endl; // 2 in 3 bit binary.
cout << itob(4, 4) << endl; // 4 in 4 bit binary.
cout << itob(5, 15) << endl; // 15 in 5 bit binary.
cout << itob(6, 30) << endl; // 30 in 6 bit binary.
cout << itob(7, 61) << endl; // 61 in 7 bit binary.
cout << itob(8, 127) << endl; // 127 in 8 bit binary.
return 0;
}
The Output:
0
01
010
0100
01111
011110
0111101
01111111
Since you asked for a simple way, I am sharing this answer, after 8 years.
Here is the expression:
result = ((Number - (Number % binaryHolder)) / binaryHolder) % 2
Is it not interesting that there is no if condition, and we can get 0 or 1 with just a simple expression? Well yes: NO if, NO long division.
Here is what each variable means:
Number: 0-infinity (the value to be converted to binary)
binary holder: 1 / 2 / 4 / 8 / 16 / 32 / ... (the binary place value, just like tens, hundreds)
Result: 0 or 1
If you want the binary holder to run over 1 / 2 / 3 / 4 / 5 / ... instead of 1 / 2 / 4 / 8 / 16 / ..., replace binaryHolder in the expression with 2^(i-1), where i is that index.
The procedure is simple: the Number variable always stays the same, and the binary holder variable is changed in a for loop, x2 each step in the first form (or +1 each step for the index form).
I don't know C++ a lot, so here is JS code for your understanding:
function FindBinary(Number) {
var x,i,BinaryValue = "",binaryHolder = 1;
for (i = 1; Math.pow(2, i) <= Number; i++) {}//for trimming, you can even remove this and set i to 7,see the result
for (x = 1; x <= i; x++) {
var Algorithm = ((Number - (Number % binaryHolder)) / binaryHolder) % 2;//Main algorithm
BinaryValue = Algorithm + BinaryValue;
binaryHolder += binaryHolder;
}
return BinaryValue;
}
console.log(FindBinary(17));//your number
Moreover, I think the language doesn't matter a lot for algorithm questions.
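Since the rest of the thread is C++, here is a direct port of that JavaScript function (my own translation; only the expression above is taken from the answer):
#include <iostream>
#include <string>
std::string FindBinary(unsigned int number)
{
    std::string binaryValue;
    unsigned int binaryHolder = 1;               // place value: 1, 2, 4, 8, ...
    int places = 1;
    for (unsigned long long p = 2; p <= number; p *= 2)
        ++places;                                // how many binary places we need
    for (int x = 1; x <= places; ++x)
    {
        unsigned int bit = ((number - number % binaryHolder) / binaryHolder) % 2; // the main expression
        binaryValue = static_cast<char>('0' + bit) + binaryValue;                 // prepend, as in the JS version
        binaryHolder += binaryHolder;
    }
    return binaryValue;
}
int main()
{
    std::cout << FindBinary(17) << '\n'; // prints 10001
}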
You want to do something like:
cout << "Enter a decimal number: ";
cin >> a1;
cout << setbase(2);
cout << a1;
Note, however, that std::setbase (from <iomanip>) only takes effect for bases 8, 10 and 16, so setbase(2) will not actually print binary; for base 2 you still need std::bitset or a manual loop.
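For the bases std::setbase does support, a minimal sketch (my own illustration):
#include <iomanip>
#include <iostream>
int main()
{
    int a1 = 0;
    std::cout << "Enter a decimal number: ";
    std::cin >> a1;
    std::cout << std::setbase(8) << a1 << '\n';  // octal
    std::cout << std::setbase(16) << a1 << '\n'; // hexadecimal
}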
#include "stdafx.h"
#include<iostream>
#include<vector>
#include<cmath>
using namespace std;
int main() {
// Initialize Variables
double x;
int xOct;
int xHex;
//Initialize a vector that stores the binary digits (remainders), least significant first
vector<int> rem;
//Get Demical value
cout << "Number (decimal base): ";
cin >> x;
//Set the variables
xOct = x;
xHex = x;
//Get the binary value
for (int i = 0; x >= 1; i++) {
rem.push_back(abs(remainder(x, 2)));
x = floor(x / 2);
}
//Print binary value
cout << "Binary: ";
int n = rem.size();
while (n > 0) {
n--;
cout << rem[n];
} cout << endl;
//Print octal base
cout << oct << "Octal: " << xOct << endl;
//Print hexadecimal base
cout << hex << "Hexadecimal: " << xHex << endl;
system("pause");
return 0;
}
#include <iostream>
using namespace std;
int main()
{
int a,b;
cin>>a;
for(int i=31;i>=0;i--)
{
b=(a>>i)&1;
cout<<b;
}
}
HOPE YOU LIKE THIS SIMPLE CODE OF CONVERSION FROM DECIMAL TO BINARY
#include<iostream>
using namespace std;
int main()
{
int input,rem,res,count=0,i=0;
cout<<"Input number: ";
cin>>input;
int num=input;
while(input > 0)
{
input=input/2;
count++;
}
int arr[count];
while(num > 0)
{
arr[i]=num%2;
num=num/2;
i++;
}
for(int i=count-1 ; i>=0 ; i--)
{
cout<<" " << arr[i]<<" ";
}
return 0;
}
#include <iostream>
// x is our number to test
// pow is a power of 2 (e.g. 128, 64, 32, etc...)
int printandDecrementBit(int x, int pow)
{
// Test whether our x is greater than some power of 2 and print the bit
if (x >= pow)
{
std::cout << "1";
// If x is greater than our power of 2, subtract the power of 2
return x - pow;
}
else
{
std::cout << "0";
return x;
}
}
int main()
{
std::cout << "Enter an integer between 0 and 255: ";
int x;
std::cin >> x;
x = printandDecrementBit(x, 128);
x = printandDecrementBit(x, 64);
x = printandDecrementBit(x, 32);
x = printandDecrementBit(x, 16);
std::cout << " ";
x = printandDecrementBit(x, 8);
x = printandDecrementBit(x, 4);
x = printandDecrementBit(x, 2);
x = printandDecrementBit(x, 1);
return 0;
}
This is a simple way to get the binary form of an int. Credit to learncpp.com. I'm sure this could be used in different ways to get to the same point.
In this approach, the decimal is converted to the corresponding binary number in string format. The string return type is chosen since it can handle a wider range of input values.
class Solution {
public:
string ConvertToBinary(int num)
{
vector<int> bin;
string op;
for (int i = 0; num > 0; i++)
{
bin.push_back(num % 2);
num /= 2;
}
reverse(bin.begin(), bin.end());
for (size_t i = 0; i < bin.size(); ++i)
{
op += to_string(bin[i]);
}
return op;
}
};
Using a bitmask and bitwise AND:
string int2bin(int n){
string x;
for(int i=0;i<32;i++){
if(n&1) {x+='1';}
else {x+='0';}
n>>=1;
}
reverse(x.begin(),x.end());
return x;
}
You could use std::bitset:
#include <bits/stdc++.h>
int main()
{
std::string binary = std::bitset<(int)ceil(log2(10))>(10).to_string(); // decimal number is 10
std::cout << binary << std::endl; // 1010
return 0;
}
SOLUTION 1
Shortest function. Recursive. No headers required.
size_t bin(int i) {return i<2?i:10*bin(i/2)+i%2;}
The simplicity of this function comes at the cost of some limitations. It returns correct values only for arguments between 0 and 1048575 (2 to the power of the number of decimal digits in the largest unsigned long long (ULLONG_MAX), minus 1). I used the following program to test it:
#include <iostream> // std::cout, std::cin
#include <climits> // ULLONG_MAX
#include <math.h> // pow()
int main()
{
size_t bin(int);
int digits(size_t);
int i = digits(ULLONG_MAX); // maximum digits of the return value of bin()
int iMax = pow(2.0,i)-1; // maximum value of a valid argument of bin()
while(true) {
std::cout << "Decimal: ";
std::cin >> i;
if (i<0 or i>iMax) {
std::cout << "\nB Integer out of range, 12:1";
return 0;
}
std::cout << "Binary: " << bin(i) << "\n\n";
}
return 0;
}
size_t bin(int i) {return i<2?i:10*bin(i/2)+i%2;}
int digits(size_t i) {return i<10?1:digits(i/10)+1;}
SOLUTION 2
Short. Recursive. Some headers required.
std::string bin(size_t i){return !i?"0":i==1?"1":bin(i/2)+(i%2?'1':'0');}
This function can return the binary representation of the largest integers as a string. I used the following program to test it:
#include <string> // std::string
#include <iostream> // std::cout, std::cin
int main()
{
std::string s, bin(size_t);
size_t i = 0, x;
std::cout << "Enter exit code: "; // Used to exit the program.
std::cin >> x;
while(i!=x) {
std::cout << "\nDecimal: ";
std::cin >> i;
std::cout << "Binary: " << bin(i) << "\n";
}
return 0;
}
std::string bin(size_t i){return !i?"0":i==1?"1":bin(i/2)+(i%2?'1':'0');}
Convert a positive integer in C++ (0 to 2,147,483,647) to a 32-bit binary representation and display it.
I want to do it in the traditional "mathematical" way (rather than using bitset, vector .push_back, a recursive function, or something special in C++), one reason being that you can then implement it in different languages, well maybe.
So I went ahead and implemented a simple program like this:
#include <iostream>
using namespace std;
int main()
{
int dec,rem,i=1,sum=0;
cout << "Enter the decimal to be converted: ";
cin>>dec;
do
{
rem=dec%2;
sum=sum + (i*rem);
dec=dec/2;
i=i*10;
} while(dec>0);
cout <<"The binary of the given number is: " << sum << endl;
system("pause");
return 0;
}
The problem is that when you input a large number such as 9999, the result will be negative or some weird number, because sum is an int and the value exceeds its maximum range. A 32-bit binary number can have 32 digits, so is that too big for any number type in C++? Any suggestions on how to display a 32-bit number as the question requires?
What you get in sum as a result is hardly usable for anything but printing. It's a decimal number which just looks like a binary.
If the decimal-binary conversion is not an end in itself, note that numbers in computer memory are already represented in binary (and it's not the property of C++), and the only thing you need is a way to print it. One of the possible ways is as follows:
int size = 0;
for (int tmp = dec; tmp; tmp >>= 1)
size++;
for (int i = size - 1; i >= 0; --i)
cout << ((dec >> i) & 1);
Another variant using a character array:
char repr[33] = { 0 };
int size = 0;
for (int tmp = dec; tmp; tmp >>= 1)
size++;
for (int i = 0; i < size; ++i)
repr[i] = ((dec >> (size - i - 1)) & 1) ? '1' : '0';
cout << repr << endl;
Note that neither variant works if dec is negative.
You have a number and want its binary representation, i.e., a string. So, use a string, not a numeric type, to store your result.
Using a for-loop, and a predefined array of zero-chars:
#include <iostream>
using namespace std;
int main()
{
int dec;
cout << "Enter the decimal to be converted: ";
cin >> dec;
char bin32[] = "00000000000000000000000000000000";
for (int pos = 31; pos >= 0; --pos)
{
if (dec % 2)
bin32[pos] = '1';
dec /= 2;
}
cout << "The binary of the given number is: " << bin32 << endl;
}
For performance reasons, you may terminate the for loop early:
for (int pos = 31; pos >= 0 && dec; --pos)
Note that in C++ you can treat an integer as a boolean: everything != 0 is considered true.
You could use an unsigned integer type. However, even with a larger type you will eventually run out of space to store binary representations. You'd probably be better off storing them in a string.
As others have pointed out, you need to generate the results in a
string. The classic way to do this (which works for any base between 2 and 36) is:
std::string
toString( unsigned n, int precision, unsigned base )
{
assert( base >= 2 && base <= 36 );
static char const digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
std::string retval;
while ( n != 0 ) {
retval += digits[ n % base ];
n /= base;
}
while ( retval.size() < precision ) {
retval += ' ';
}
std::reverse( retval.begin(), retval.end() );
return retval;
}
You can then display it.
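For example (my own usage note; note that the padding character above is a space, so short values come out right-aligned rather than zero-filled):
std::cout << toString(255, 8, 2) << '\n'; // "11111111"
std::cout << toString(10, 8, 2) << '\n';  // "    1010"  (space-padded to 8 characters)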
Recursion. In pseudocode:
function toBinary(integer num)
if (num < 2)
then
print(num)
else
toBinary(num DIV 2)
print(num MOD 2)
endif
endfunction
This does not handle leading zeros or negative numbers. The recursion stack is used to reverse the binary bits into the standard order.
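A direct C++ rendering of this pseudocode (my own transcription):
#include <iostream>
void toBinary(unsigned int num)
{
    if (num < 2)
    {
        std::cout << num;          // base case: a single bit
    }
    else
    {
        toBinary(num / 2);         // recurse on the higher bits first
        std::cout << num % 2;      // then print the current lowest bit
    }
}
int main()
{
    toBinary(13);                  // prints 1101
    std::cout << '\n';
}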
Just write:
long long int dec,rem,i=1,sum=0;
Instead of:
int dec,rem,i=1,sum=0;
That extends the usable range, although even a 64-bit integer can only hold about 19 of these binary-looking decimal digits, so it still cannot represent all 32 bits.
I am currently working on a basic program which converts a binary number to octal. Its task is to print a table with all the numbers between 0-256, with their binary, octal and hexadecimal equivalents. The task requires me to use only my own code (i.e. using loops etc. and not built-in functions). The code I have made (it is quite messy at the moment) is as follows (this is only a snippet):
int counter = ceil(log10(fabs(binaryValue)+1));
int iter;
if (counter%3 == 0)
{
iter = counter/3;
}
else if (counter%3 != 0)
{
iter = ceil((counter/3));
}
c = binaryValue;
for (int h = 0; h < iter; h++)
{
tempOctal = c%1000;
c /= 1000;
int count = ceil(log10(fabs(tempOctal)+1));
for (int counter = 0; counter < count; counter++)
{
if (tempOctal%10 != 0)
{
e = pow(2.0, counter);
tempDecimal += e;
}
tempOctal /= 10;
}
octalValue += (tempDecimal * pow(10.0, h));
}
The output is completely wrong. When, for example, the binary code is 1111 (decimal value 15), it outputs 7. I can understand why this happens (the last three digits of the binary number, 111, are 7 in decimal), but I can't identify the problem in the code. Any ideas?
Edit: After some debugging and testing I figured out the answer.
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
while (true)
{
int binaryValue, c, tempOctal, tempDecimal, octalValue = 0, e;
cout << "Enter a binary number to convert to octal: ";
cin >> binaryValue;
int counter = ceil(log10(binaryValue+1));
cout << "Counter " << counter << endl;
int iter;
if (counter%3 == 0)
{
iter = counter/3;
}
else if (counter%3 != 0)
{
iter = (counter/3)+1;
}
cout << "Iterations " << iter << endl;
c = binaryValue;
cout << "C " << c << endl;
for (int h = 0; h < iter; h++)
{
tempOctal = c%1000;
cout << "3 digit binary part " << tempOctal << endl;
int count = ceil(log10(tempOctal+1));
cout << "Digits " << count << endl;
tempDecimal = 0;
for (int counterr = 0; counterr < count; counterr++)
{
if (tempOctal%10 != 0)
{
e = pow(2.0, counterr);
tempDecimal += e;
cout << "Temp Decimal value 0-7 " << tempDecimal << endl;
}
tempOctal /= 10;
}
octalValue += (tempDecimal * pow(10.0, h));
cout << "Octal Value " << octalValue << endl;
c /= 1000;
}
cout << "Final Octal Value: " << octalValue << endl;
}
system("pause");
return 0;
}
This looks overly complex. There's no need to involve floating-point math, and it can very probably introduce problems.
Of course, the obvious solution is to use a pre-existing function to do this (like { char buf[32]; snprintf(buf, sizeof buf, "%o", binaryValue); } and be done, but if you really want to do it "by hand", you should look into using bit-operations:
Use binaryValue & 7 to mask out the three lowest bits. These will be your next octal digit (three bits is 0..7, which is one octal digit).
Use binaryValue >>= 3 to shift the number to get three new bits into the lowest position
Reverse the number afterwards, or (if possible) start from the end of the string buffer and emit digits backwards (see the sketch below)
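A minimal sketch of that approach (my own illustration, not from the original answer; the test value 15 and the buffer size 12 are arbitrary, 12 being enough octal digits for a 32-bit value):
#include <iostream>
int main()
{
    unsigned int binaryValue = 15;               // arbitrary test value (1111 in binary)
    char digits[12];
    int len = 0;
    do {
        digits[len++] = '0' + (binaryValue & 7); // three lowest bits = one octal digit
        binaryValue >>= 3;                       // bring the next three bits down
    } while (binaryValue != 0);
    for (int i = len - 1; i >= 0; --i)           // emit digits from the end, highest first
        std::cout << digits[i];
    std::cout << '\n';                           // prints 17 for the input 15
}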
I don't understand your code; it seems far too complicated. But one
thing is sure, if you are converting an internal representation into
octal, you're going to have to divide by 8 somewhere, and do a % 8
somewhere. And I don't see them. On the other hand, I see
operations with both 10 and 1000, neither of which should be present.
For starters, you might want to write a simple function which converts
a value (preferably an unsigned of some type—get unsigned
right before worrying about the sign) to a string using any base, e.g.:
//! \pre
//! base >= 2 && base < 36
//!
//! Digits are 0-9, then A-Z.
std::string convert(unsigned value, unsigned base);
This shouldn't take more than about 5 or 6 lines of code. But attention,
the normal algorithm generates the digits in reverse order: if you're
using std::string, the simplest solution is to push_back each digit,
then call std::reverse at the end, before returning it. Otherwise: a
C style char[] works well, provided that you make it large enough.
(sizeof(unsigned) * CHAR_BIT + 2 is more than enough, even for
signed, and even with a '\0' at the end, which you won't need if you
return a string.) Just initialize the pointer to buffer +
sizeof(buffer), and pre-decrement each time you insert a digit. To
construct the string you return:
std::string( pointer, buffer + sizeof(buffer) ) should do the trick.
As for the loop, the end condition could simply be value == 0.
(You'll be dividing value by base each time through, so you're
guaranteed to reach this condition.) If you use a do ... while,
rather than just a while, you're also guaranteed at least one digit in
the output.
(It would have been a lot easier for me to just post the code, but since
this is obviously homework, I think it better to just give indications
concerning what needs to be done.)
Edit: I've added my implementation, and some comments on your new
code:
First for the comments: there's a very misleading prompt: "Enter a
binary number" sounds like the user should enter binary; if you're
reading into an int, the value input should be decimal. And there are
still the % 1000 and / 1000 and % 10 and / 10 that I don't
understand. Whatever you're doing, it can't be right if there's no %
8 and / 8. Try it: input "128", for example, and see what you get.
If you're trying to input binary, then you really have to input a
string, and parse it yourself.
My code for the conversion itself would be:
//! \pre
//! base >= 2 && base <= 36
//!
//! Digits are 0-9, then A-Z.
std::string toString( unsigned value, unsigned base )
{
assert( base >= 2 && base <= 36 );
static char const digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
char buffer[sizeof(unsigned) * CHAR_BIT];
char* dst = buffer + sizeof(buffer);
do
{
*--dst = digits[value % base];
value /= base;
} while (value != 0);
return std::string(dst, buffer + sizeof(buffer));
}
If you want to parse input (e.g. for binary), then something like the
following should do the trick:
unsigned fromString( std::string const& value, unsigned base )
{
assert( base >= 2 && base <= 36 );
static char const digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
unsigned results = 0;
for (std::string::const_iterator iter = value.begin();
iter != value.end();
++ iter)
{
unsigned digit = std::find
( digits, digits + sizeof(digits) - 1,
toupper(static_cast<unsigned char>( *iter ) ) ) - digits;
if ( digit >= base )
throw std::runtime_error( "Illegal character" );
if ( results >= UINT_MAX / base
&& (results > UINT_MAX / base || digit > UINT_MAX % base) )
throw std::runtime_error( "Overflow" );
results = base * results + digit;
}
return results;
}
It's more complicated than toString because it has to handle all sorts
of possible error conditions. It's also still probably simpler than you
need; you probably want to trim blanks, etc., as well (or even ignore
them: entering 01000000 is more error prone than 0100 0000).
(Also, the end iterator for find has a - 1 because of the trailing
'\0' the compiler inserts into digits.)
Actually I don't understand why you need such complex code to accomplish what you need.
First of all, there is no such thing as conversion from binary to octal (the same is true for converting to/from decimal etc.). The machine always works in binary; there's nothing you can (or should) do about this.
This is actually a question of formatting. That is, how do you print a number as octal, and how do you parse the textual representation of the octal number.
Edit:
You may use the following code for printing a number in any base:
const int PRINT_NUM_TXT_MAX = 33; // worst-case for binary
void PrintNumberInBase(unsigned int val, int base, PSTR szBuf)
{
// calculate the number of digits
int digits = 0;
for (unsigned int x = val; x; digits++)
x /= base;
if (digits < 1)
digits = 1; // will emit zero
// Print the value from right to left
szBuf[digits] = 0; // zero-term
while (digits--)
{
int dig = val % base;
val /= base;
char ch = (dig <= 9) ?
('0' + dig) :
('a' + dig - 0xa);
szBuf[digits] = ch;
}
}
Example:
char sz[PRINT_NUM_TXT_MAX];
PrintNumberInBase(19, 8, sz);
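// sz now contains "23" (19 decimal is 23 octal)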
The code the OP is asking to produce is what your scientific calculator would do when you want a number in a different base.
I think your algorithm is wrong. Just looking over it, I see a function that is squared towards the end. why? There is a simple mathematical way to do what you are talking about. Once you get the math part, then you can convert it to code.
If you had pencil and paper, and no calculator (similar to not using pre-built functions), the method is to take the base you are in, convert it to base 10, then convert to the base you require. In your case that would be base 2, to base 10, to base 8.
This should get you started. All you really need are if/else statements with modulus to get the remainders.
http://www.purplemath.com/modules/numbbase3.htm
Then you have to figure out how to get your desired output. Maybe store the remainders in an array or output to a txt file.
(Problems like this are the reason why I want to double major in applied math.)
Since you want conversion from decimal 0-256, it would be easiest to make functions, say call them int binary(), char hex(), and int octal(). Do the binary and octal first, as those would be the easiest since they can be represented by plain integers.
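A rough sketch of what two of those functions might look like (my own interpretation of the suggestion, not code from the answer; hex would need a string or char buffer):
#include <iostream>
// Build the binary digits of n as a decimal-looking int (enough for the 0-256 range of the task)
int binary(int n)
{
    int result = 0, place = 1;
    while (n > 0) {
        result += (n % 2) * place;
        place *= 10;
        n /= 2;
    }
    return result;
}
// The same idea with base 8 gives the octal digits
int octal(int n)
{
    int result = 0, place = 1;
    while (n > 0) {
        result += (n % 8) * place;
        place *= 10;
        n /= 8;
    }
    return result;
}
int main()
{
    std::cout << binary(15) << ' ' << octal(15) << '\n'; // prints 1111 17
}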
#include <cmath>
#include <iostream>
#include <string>
#include <cstring>
#include <cctype>
#include <cstdlib>
using namespace std;
char* toBinary(char* doubleDigit)
{
int digit = atoi(doubleDigit);
char* binary = new char[64](); // allocate a real buffer; a single char is not enough
int x = 0 ;
binary[x]='(';
//int tempDigit = digit;
int k=1;
for(int i = 9 ; digit != 0; i--)
{
k=1;//cout << digit << endl;
//cout << "i"<< i<<endl;
if(digit-k *pow(8,i)>=0)
{
k =1;
cout << "i" << i << endl;
cout << k*pow(8,i)<< endl;
while((k*pow(8,i)<=digit))
{
//cout << k <<endl;
k++;
}
k= k-1;
digit = digit -k*pow(8,i);
binary[x+1]= k+'0';
binary[x+2]= '*';
binary[x+3]= '8';
binary[x+4]='^';
binary[x+5]=i+'0';
binary[x+6]='+';
x+=6;
}
}
binary[x]=')';
binary[x+1]='\0'; // terminate the string before returning
return binary;
}
int main()
{
char value[6]={'4','0','9','8','7','9'};
cout<< toBinary(value);
return 0 ;
}