Convert negative binary number to decimal - C++

For example:
string binaryValue = "11111111111111111111111111111011"; // -5
I need to convert this string to the decimal representation of this number.
stoi(binaryValue, nullptr, 2)
Will throw an exception in this case. So how can I do this in C++? String or int doesn't matter.

See the documentation of:
int std::stoi( const std::string& str, std::size_t* pos = 0, int base = 10 );
in particular:
The valid integer value [of str] consists of the following parts:
(optional) plus or minus sign
...
...
If the minus sign was part of the input sequence, the numeric value calculated
from the sequence of digits is negated as if by unary minus in the result type.
Exceptions
std::invalid_argument if no conversion could be performed
std::out_of_range if the converted value would fall out of the range of
the result type...
In the absence of a preceding minus-sign, the string:
std::string binaryValue = "11111111111111111111111111111011";
will be interpreted in the call:
std::stoi(binaryValue, nullptr, 2);
as a non-negative integer value in base-2 representation. But as such,
it is out of range, so std::out_of_range is thrown.
To represent -5 as a string that your std::stoi call will convert as you expect,
use:
std::string const binaryValue = "-101";
If you don't want to prefix a minus sign to a non-negative base-2 numeral, or cannot do so in your real-world
situation, but wish to interpret "11111111111111111111111111111011"
as the two's complement representation of a signed integer using the std::sto* API,
then you must first convert the string to an unsigned integer of a wide-enough
type, and then convert that unsigned value to a signed one. E.g.
#include <string>
#include <iostream>

int main()
{
    // Parse the bit pattern as an unsigned value first...
    auto ul = std::stoul("11111111111111111111111111111011", nullptr, 2);
    std::cout << ul << std::endl;
    // ...then convert that unsigned value to a signed int of the same width.
    int i = ul;
    std::cout << i << std::endl;
    return 0;
}

As you probably know, negative numbers are stored in two's complement.
To convert, in simple pseudocode:
flip the bits 0->1, 1->0 from the left until you reach the last (rightmost) '1' in the string; don't toggle that one or anything after it.
This will be your answer:
00000000000000000000000000000101 = 5, so the decimal value is -5.
Here is the code, brought from https://www.geeksforgeeks.org/efficient-method-2s-complement-binary-string/
#include <iostream>
#include <string>
using namespace std;

// Return the two's complement of a binary string.
string findTwoscomplement(string str)
{
    int n = str.length();

    // Traverse the string to get the first '1' from the end of the string
    int i;
    for (i = n - 1; i >= 0; i--)
        if (str[i] == '1')
            break;

    // If there exists no '1', concat a '1' at the start of the string
    if (i == -1)
        return '1' + str;

    // Continue traversal before the position of that rightmost '1'
    for (int k = i - 1; k >= 0; k--)
    {
        // Just flip the values
        if (str[k] == '1')
            str[k] = '0';
        else
            str[k] = '1';
    }

    // Return the modified string
    return str;
}

int main()
{
    string str = "11111111111111111111111111111011";
    cout << findTwoscomplement(str) << '\n';
    // Now you can convert it to decimal if you want; the original number
    // is the negative of this magnitude.
    cout << stoul(findTwoscomplement(str), nullptr, 2) << '\n';
    return 0;
}
preview at https://onlinegdb.com/SyFYLVtdf

Related

Hexadecimal to decimal conversion problem. Also, how to convert a char number to an actual int number

Please help me identify the error in this program. It looks correct to me and I have checked it, but it gives wrong answers.
In this program I have checked explicitly for A, B, C, D, E, F and assigned their respective values.
[Edited]: Also, this question relates to how a character digit is converted to an actual integer number.
#include<iostream>
#include<cmath>
#include<bits/stdc++.h>
using namespace std;
void convert(string num)
{
long int last_digit;
int s=num.length();
int i;
long long int result=0;
reverse(num.begin(),num.end());
for(i=0;i<s;i++)
{
if(num[i]=='a' || num[i]=='A')
{
last_digit=10;
result+=last_digit*pow(16,i);
}
else if(num[i]=='b'|| num[i]=='B')
{
last_digit=11;
result+=last_digit*pow(16,i);
}
else if(num[i]=='c' || num[i]=='C')
{
last_digit=12;
result+=last_digit*pow(16,i);
}
else if(num[i]=='d'|| num[i]=='D' )
{
last_digit=13;
result+=last_digit*pow(16,i);
}
else if(num[i]=='e'|| num[i]=='E' )
{
last_digit=14;
result+=last_digit*pow(16,i);
}
else if(num[i]=='f' || num[i]=='F')
{
last_digit=15;
result+=last_digit*pow(16,i);
}
else {
last_digit=num[i];
result+=last_digit*pow(16,i);
}
}
cout<<result;
}
int main()
{
string hexa;
cout<<"Enter the hexadecimal number:";
getline(cin,hexa);
convert(hexa);
}
Your code is very convoluted and wrong.
You probably want this:
void convert(string num)
{
long int last_digit;
int s = num.length();
int i;
long long int result = 0;
for (i = 0; i < s; i++)
{
result <<= 4; // multiply by 16, using pow is overkill
auto digit = toupper(num[i]); // convert to upper case
if (digit >= 'A' && digit <= 'F')
last_digit = digit - 'A' + 10; // digit is in range 'A'..'F'
else
last_digit = digit - '0'; // digit is (hopefully) in range '0'..'9'
result += last_digit;
}
cout << result;
}
But this is still not very good:
the function should return a long long int instead of printing the result
a few other things can be done more elegantly
So a better version would be this:
#include <iostream>
#include <string>
using namespace std;
long long int convert(const string & num) // always pass objects as const & if possible
{
long long int result = 0;
for (const auto & ch : num) // use range based for loops whenever possible
{
result <<= 4;
auto digit = toupper(ch);
long int last_digit; // declare local variables in the inner most scope
if (digit >= 'A' && digit <= 'F')
last_digit = digit - 'A' + 10;
else
last_digit = digit - '0';
result += last_digit;
}
return result;
}
int main()
{
string hexa;
cout << "Enter the hexadecimal number:";
getline(cin, hexa);
cout << convert(hexa);
}
There is still room for more improvements as the code above assumes that the string to convert contains only hexadecimal characters. Ideally a check for invalid characters should be done somehow. I leave this as an exercise.
The line last_digit = digit - 'A' + 10; assumes that the codes for letters A to F are contiguous, which in theory might not be the case. But the probability that you'll ever encounter an encoding scheme where this is not the case is close to zero though. The vast majority of computer systems in use today use the ASCII encoding scheme, some use EBCDIC, but in both of these encoding schemes the character codes for letters A to F are contiguous. I'm not aware of any other encoding scheme in use today.
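If you want to avoid relying on that assumption at all, one possibility (just a sketch, with a hypothetical helper name) is to look each character up in a table of digits instead of doing arithmetic on character codes:
#include <cctype>
#include <cstring>

// Hypothetical helper: value of a hex digit, without assuming that the
// character codes for 'A'..'F' are contiguous.
int hexDigitValue(char ch)
{
    static const char digits[] = "0123456789ABCDEF";
    if (ch == '\0')
        return -1;
    const char* p = std::strchr(digits, std::toupper(static_cast<unsigned char>(ch)));
    return p ? static_cast<int>(p - digits) : -1;   // -1 means "not a hex digit"
}
Such a helper could replace both branches of the if/else in convert, and it doubles as the invalid-character check mentioned above.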
Your problem is in the else case, in which you convert num[i] from char to its ASCII code. Thus, for instance, if you try to convert A0, the '0' is converted into 48, not 0.
To correct this, you should instead convert num[i] into its equivalent integer value (not its ASCII code).
To do so, replace:
else {
    last_digit=num[i];
    result+=last_digit*pow(16,i);
}
with
else {
    last_digit = num[i]-'0';
    result+=last_digit*pow(16,i);
}
In the new line, last_digit = num[i]-'0'; is equivalent to last_digit = (int)num[i]-(int)'0';, which subtracts the representation code of '0' from the representation code of the one-digit number in num[i].
It works because the C++ standard guarantees that the number representations of the 10 decimal digits are contiguous and in increasing order (official ref iso-cpp, stated in chapter 2.3, paragraph 3).
Thus, if you take the representation (for instance the ASCII code) of any one-digit number num[i] and subtract the representation code of '0' (which is 48 in ASCII), you obtain the digit itself as an integer value.
An example of execution after the correction would give:
A0
160
F5
245
A small code review:
You are repeating yourself with the many result+=last_digit*pow(16,i); statements. You could do that only once, at the end of the loop. But that's another matter.
You are complicating the problem more than you need to (std::pow is also rather slow). std::stoul can take a numerical base and automatically convert to an integer for you:
#include <string>
#include <iostream>

int main()
{
    std::size_t char_count{0u};
    std::string hexa{};
    std::getline(std::cin, hexa);
    hexa = "0x" + hexa;   // optional here, since base 16 is passed explicitly below
    unsigned long value_uint = std::stoul(hexa, &char_count, 16);
    std::cout << value_uint << '\n';
    return 0;
}

Find Rightmost unset bit of a very large number

How do I obtain the position of the rightmost unset bit of a number N in C++?
1 <= N <= 2^60
Storing it as a number does not work, since 2^60 can only be stored in a string; thus the following does not work:
long long getBitUtil(long long n) {
return log2(n&-n)+1;
}
long long getBit(long long n) {
return getBitUtil(~n);
}
Try this. The code is self-explanatory with comments:
int getRightmostUnSetBit(string s, int pos)
{
    int l = s.size();
    char lc = s[l - 1];
    if (lc >= '0' && lc <= '9' && (lc - '0') % 2 == 0)
        return pos;   // if the last digit is even, this bit position is unset
    else
    {   // divide the number in half and call the function again
        string s2 = "";
        int rem = 0;
        for (int i = 0; i < l; i++)
        {
            int d = s[i] - '0';
            d = rem * 10 + d;
            int q = d / 2;
            rem = d % 2;
            s2.push_back('0' + q);
        }
        return getRightmostUnSetBit(s2, pos + 1); // recurse on s/2
    }
}
Take the input as a string and call from main:
int pos = getRightmostUnSetBit(s,1); // will not work if s is "0" or similar to "000...", so check for that before calling the function.
For normal integers the solution is basically given in the book Hacker's Delight. I can only refer to the book, I cannot copy it. But section 2.1 already gives good hints.
Depending on your OS you will most likely have 64-bit data types. With 64-bit data types, you can still use arithmetic solutions for your given number range. Above that, you should use string representations.
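For the in-range case, a minimal arithmetic sketch (assuming a 64-bit unsigned type, which easily holds values up to 2^60; the function name is only for illustration) might look like this:
#include <cstdint>
#include <iostream>

// 1-based position of the rightmost unset bit of n.
// ~n & (n + 1) isolates the lowest zero bit; for n <= 2^60 such a bit always exists.
int rightmostUnsetBitPosition(std::uint64_t n)
{
    std::uint64_t isolated = ~n & (n + 1);
    int pos = 1;
    while ((isolated & 1u) == 0)
    {
        isolated >>= 1;
        ++pos;
    }
    return pos;
}

int main()
{
    std::cout << rightmostUnsetBitPosition(11) << '\n';   // 11 = 0b1011 -> position 3
    return 0;
}
For numbers that really do not fit into a 64-bit type, the string-based approach described next applies.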
Then we will convert a big decimal number given as string into a string containing its binary equivalent.
Then we search for the last 0 in the resulting binary string.
The secret is to do a division by 2 for a number given as string. This can be done as we learned in school on a piece of paper.
And then check, if the number is even or odd and put a 1 or 0 respectively in the resulting string.
This will work for arbitrarily big numbers. The limiting factor (but here not really) is the length of the resulting binary string. That must fit into a std::string :-)
See:
#include <string>
#include <iostream>
#include <regex>
#include <algorithm>

// Odd numbers. We will find out if a digit is odd or even
std::string oddNumbers{ "13579" };

// Divide a number in a string by 2. Just like in school on a piece of paper
void divideDecimalNumberAsStringBy2(std::string& str)
{
    // Handling of overflow during divisions
    unsigned int nextNumberToAdd{ 0 };
    // The resulting new string
    std::string newString{};
    // Go through all digits, starting from the beginning
    for (char& c : str) {
        // Get overflow for the next round
        unsigned int numberToAdd = nextNumberToAdd;
        // Get the overflow value for the next division run. If the digit is odd, it will be 5
        nextNumberToAdd = (oddNumbers.find(c) != std::string::npos) ? 5 : 0;
        // Do the division. Add overflow from the last round
        unsigned int newDigit = (c - '0') / 2 + numberToAdd;
        // Build up the new string
        newString += static_cast<char>(newDigit + '0');
    }
    // Remove leading zeroes
    str = std::regex_replace(newString, std::regex("^0+"), "");
}

// Convert a string with a big decimal number to a string with the binary representation of the number
std::string convertDecimalStringToBinaryString(std::string& str)
{
    // Resulting string
    std::string binaryDigits{};
    // Until the string is empty. It will get shorter and shorter because of the division by 2
    while (!str.empty()) {
        // For an even number we add 0, for an odd number we add 1
        binaryDigits += (oddNumbers.find(str.back()) == std::string::npos) ? '0' : '1';
        // And divide by 2
        divideDecimalNumberAsStringBy2(str);
    }
    // Bits come in the wrong order, so we need to reverse them
    std::reverse(binaryDigits.begin(), binaryDigits.end());
    return binaryDigits;
}

int main()
{
    // Initial string with a big number. Get input from the user
    std::string bigNumber{};
    std::cout << "Enter big number: ";
    std::cin >> bigNumber;
    // Convert it
    std::string binaryDigits = convertDecimalStringToBinaryString(bigNumber);
    // Find the last 0 (use std::size_t so the comparison with npos is correct)
    std::size_t posOfLast0 = binaryDigits.rfind('0');
    // Show result
    if (posOfLast0 == std::string::npos)
        std::cout << "\nNo digit is 0 --> " << binaryDigits << '\n';
    else
        std::cout << "\nSize of resulting string: " << binaryDigits.size() << "\nPos of last 0: " << posOfLast0 + 1 << "\nBinary String:\n\n" << binaryDigits << '\n';
    return 0;
}

How do I print the elements of a char array?

I have to convert a decimal value into a string that shows the binary value, e.g. given 8, I need to print a string "1000". I have the conversion from decimal to binary, but when I print the values directly from the char array, I get little question marks instead of numbers. I know it has something to do with the way char arrays read values, but I can't figure out how to correct the issue.
void dec2Bin(int value, char binaryString[]) {
int remainder = 0;
int binDigit = 0;
int i = 0;
while (value != 0) {
binDigit = value % 2;
value /= 2;
binaryString[i] = char(binDigit);
i++;
}
for (int k = i - 1; k > 0; k--) {
cout << binaryString[k];
}
}
int main()
{
cout << "Enter a decimal number: ";
int num;
cin >> num;
char binaryString[20] = "";
dec2Bin(num, binaryString);
return 0;
}
When you do
binaryString[i] = char(binDigit);
you are assigning the decimal value 0 or 1 to binaryString[i]. That's okay, a char is basically nothing more than a small integer.
The problem comes when you want to print the value, as the only overloaded << operator that handles char treats the value as a character, and in most encodings the values 0 and 1 are not printable characters.
There are two solutions:
Either you convert the character you want to print into a larger integer which won't be treated as a character:
cout << static_cast<int>(binaryString[k]);
Or you make the array contain actual printable characters instead:
binaryString[i] = binDigit + '0';
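Applying the second fix to the question's function, a corrected sketch could look like this (note that the print loop also needs k >= 0 rather than k > 0, or the lowest bit is dropped):
#include <iostream>
using namespace std;

void dec2Bin(int value, char binaryString[]) {
    int binDigit = 0;
    int i = 0;
    while (value != 0) {
        binDigit = value % 2;
        value /= 2;
        binaryString[i] = binDigit + '0';   // store a printable character
        i++;
    }
    for (int k = i - 1; k >= 0; k--) {      // print from the most significant digit down
        cout << binaryString[k];
    }
}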

A cleaner way to convert a string to int after checking for hex prefix?

This little exercise is meant to get a string from the user that could be decimal, hexadecimal, or octal. First I need to identify which kind of number the string is. Second I need to convert that number to int and display the number in its proper format, e.g.:
cout <<(dec,hex,oct, etc)<< number;
Here's what I came up with. I'd like a simpler, cleaner way to write this.
string number = "";
cin >> number;
string prefix = "dec";
char zero = '0';
char hex_prefix = 'x';
string temp = "";
int value = 0;
for(int i =0; i<number.size();++i)
{
if(number[0] == zero)//must be octal or hex
{
if (number[0] == zero && number[1] == hex_prefix ) //is hex
{
prefix = "hex";
for(int i = 0; i < (number.size() - 2); ++i)
{
temp[i] = number[i+2];
}
value = atoi(temp.c_str());
}
//... code continues to deal with octal and decimal
You are checking number[0] twice; that's the most obvious problem.
The inner if already checks both number[0] and number[1], so I don't see the point of the outer one.
The outermost loop is also hard to understand: do you expect non-hex data before the number, or what? Your question could be clearer on how the expected input string looks.
I think the cleanest would be to ignore this, and push it into existing (library) code that can parse integers in any base. In C I would recommend strtoul(); you can of course use that in C++ too.
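For illustration, strtoul() with base 0 already does the prefix detection for you ("0x" means hex, a leading "0" means octal, anything else decimal), so the whole classification can collapse to a sketch like this:
#include <cstdlib>
#include <iostream>
#include <string>

int main()
{
    std::string number;
    std::cin >> number;

    // Base 0 tells strtoul to infer the base from the prefix.
    char* end = nullptr;
    unsigned long value = std::strtoul(number.c_str(), &end, 0);

    if (end == number.c_str() || *end != '\0')
        std::cout << "not a valid number\n";
    else
        std::cout << value << '\n';
    return 0;
}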
You have two nested loops using the same loop variable i; this could cause a conflict in your code. I suggest you look at the isdigit and islower functions in the C++ library and take advantage of them to accomplish your task: isdigit & islower.
Good luck!
This prints the number after deleting the hex prefix; otherwise it prints 0:
#include <iostream>
#include <string>
#include <cstdlib>
using namespace std;
int main(){
    string number = "";
    cin >> number;
    string prefix = "dec";
    char zero = '0';
    char hex_prefix = 'x';
    string temp = "";
    int value = 0;
    if (number.size() >= 2 && number[0] == zero && number[1] == hex_prefix) // is hex
    {
        prefix = "hex";
        for (string::size_type i = 0; i < number.size() - 2; ++i)
        {
            temp += number[i + 2];   // append; writing temp[i] into an empty string is undefined behaviour
        }
        value = atoi(temp.c_str());
    }
    cout << value;
    return 0;
}
This partial solution that I found is as clean as possible, but it doesn't report the format of the integer:
int string_to_int(std::string str)   // needs <sstream> and <stdexcept>
{
    std::istringstream stream(str);   // the stream must actually be constructed from the string
    stream.unsetf(std::ios_base::dec); // accept any base, not just decimal
    int result;
    if (stream >> result)
        return result;
    else
        throw std::runtime_error("blah");
}
...
cout << string_to_int("55") << '\n'; // prints 55
cout << string_to_int("0x37") << '\n'; // prints 55
The point here is stream.unsetf(std::ios_base::dec) - it unsets the "decimal" flag that is set by default. This format flag tells iostreams to expect a decimal integer. If it is not set, iostreams expect the integer in any base.
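If you also want to report the detected format, one possible sketch is to let std::stoul do the parsing with base 0 (which, like strtoul, auto-detects the 0x and 0 prefixes) and classify the prefix yourself; error handling for the exceptions stoul may throw is omitted here:
#include <iostream>
#include <string>

int main()
{
    std::string number;
    std::cin >> number;

    // Base 0 makes stoul detect the base from the prefix.
    unsigned long value = std::stoul(number, nullptr, 0);

    std::string prefix = "dec";
    if (number.size() > 1 && number[0] == '0')
        prefix = (number[1] == 'x' || number[1] == 'X') ? "hex" : "oct";

    std::cout << prefix << " " << value << '\n';
    return 0;
}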

Octal conversion using loops in C++

I am currently working on a basic program which converts a binary number to octal. Its task is to print a table with all the numbers between 0-256, with their binary, octal and hexadecimal equivalents. The task requires me to use only my own code (i.e. using loops etc. and not built-in functions). The code I have made (it is quite messy at the moment) is as follows (this is only a snippet):
int counter = ceil(log10(fabs(binaryValue)+1));
int iter;
if (counter%3 == 0)
{
iter = counter/3;
}
else if (counter%3 != 0)
{
iter = ceil((counter/3));
}
c = binaryValue;
for (int h = 0; h < iter; h++)
{
tempOctal = c%1000;
c /= 1000;
int count = ceil(log10(fabs(tempOctal)+1));
for (int counter = 0; counter < count; counter++)
{
if (tempOctal%10 != 0)
{
e = pow(2.0, counter);
tempDecimal += e;
}
tempOctal /= 10;
}
octalValue += (tempDecimal * pow(10.0, h));
}
The output is completely wrong. When, for example, the binary code is 1111 (decimal value 15), it outputs 7. I can understand why this happens (the last three digits in the binary number, 111, are 7 in decimal format), but I can't identify the problem in the code. Any ideas?
Edit: After some debugging and testing I figured out the answer.
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
while (true)
{
int binaryValue, c, tempOctal, tempDecimal, octalValue = 0, e;
cout << "Enter a binary number to convert to octal: ";
cin >> binaryValue;
int counter = ceil(log10(binaryValue+1));
cout << "Counter " << counter << endl;
int iter;
if (counter%3 == 0)
{
iter = counter/3;
}
else if (counter%3 != 0)
{
iter = (counter/3)+1;
}
cout << "Iterations " << iter << endl;
c = binaryValue;
cout << "C " << c << endl;
for (int h = 0; h < iter; h++)
{
tempOctal = c%1000;
cout << "3 digit binary part " << tempOctal << endl;
int count = ceil(log10(tempOctal+1));
cout << "Digits " << count << endl;
tempDecimal = 0;
for (int counterr = 0; counterr < count; counterr++)
{
if (tempOctal%10 != 0)
{
e = pow(2.0, counterr);
tempDecimal += e;
cout << "Temp Decimal value 0-7 " << tempDecimal << endl;
}
tempOctal /= 10;
}
octalValue += (tempDecimal * pow(10.0, h));
cout << "Octal Value " << octalValue << endl;
c /= 1000;
}
cout << "Final Octal Value: " << octalValue << endl;
}
system("pause");
return 0;
}
This looks overly complex. There's no need to involve floating-point math, and it can very probably introduce problems.
Of course, the obvious solution is to use a pre-existing function to do this (like { char buf[32]; snprintf(buf, sizeof buf, "%o", binaryValue); }) and be done with it, but if you really want to do it "by hand", you should look into using bit operations (a short sketch follows these steps):
Use binaryValue & 7 to mask out the three lowest bits. These will be your next octal digit (three bits give 0..7, which is one octal digit).
use binaryValue >>= 3 to shift the number to get three new bits into the lowest position
Reverse the number afterwards, or (if possible) start from the end of the string buffer and emit digits backwards
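A minimal sketch of those steps (building the octal digits from the low bits and reversing at the end):
#include <algorithm>
#include <iostream>
#include <string>

// Octal representation of a non-negative value, using only bit operations.
std::string toOctal(unsigned int value)
{
    if (value == 0)
        return "0";
    std::string result;
    while (value != 0)
    {
        result += static_cast<char>('0' + (value & 7));   // lowest three bits = one octal digit
        value >>= 3;                                       // bring the next three bits down
    }
    std::reverse(result.begin(), result.end());            // digits were produced backwards
    return result;
}

int main()
{
    std::cout << toOctal(15) << '\n';   // prints 17
    return 0;
}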
I don't understand your code; it seems far too complicated. But one thing is sure: if you are converting an internal representation into octal, you're going to have to divide by 8 somewhere, and do a % 8 somewhere. And I don't see them. On the other hand, I see both operations with both 10 and 1000, neither of which should be present.
For starters, you might want to write a simple function which converts
a value (preferably an unsigned of some type—get unsigned
right before worrying about the sign) to a string using any base, e.g.:
//! \pre
//! base >= 2 && base < 36
//!
//! Digits are 0-9, then A-Z.
std::string convert(unsigned value, unsigned base);
This shouldn't take more than about 5 or 6 lines of code. But attention,
the normal algorithm generates the digits in reverse order: if you're
using std::string, the simplest solution is to push_back each digit,
then call std::reverse at the end, before returning it. Otherwise: a
C style char[] works well, provided that you make it large enough.
(sizeof(unsigned) * CHAR_BIT + 2 is more than enough, even for
signed, and even with a '\0' at the end, which you won't need if you
return a string.) Just initialize the pointer to buffer +
sizeof(buffer), and pre-decrement each time you insert a digit. To
construct the string you return:
std::string( pointer, buffer + sizeof(buffer) ) should do the trick.
As for the loop, the end condition could simply be value == 0.
(You'll be dividing value by base each time through, so you're
guaranteed to reach this condition.) If you use a do ... while,
rather than just a while, you're also guaranteed at least one digit in
the output.
(It would have been a lot easier for me to just post the code, but since
this is obviously homework, I think it better to just give indications
concerning what needs to be done.)
Edit: I've added my implementation, and some comments on your new
code:
First for the comments: there's a very misleading prompt: "Enter a
binary number" sounds like the user should enter binary; if you're
reading into an int, the value input should be decimal. And there are
still the % 1000 and / 1000 and % 10 and / 10 that I don't
understand. Whatever you're doing, it can't be right if there's no %
8 and / 8. Try it: input "128", for example, and see what you get.
If you're trying to input binary, then you really have to input a
string, and parse it yourself.
My code for the conversion itself would be:
//! \pre
//! base >= 2 && base <= 36
//!
//! Digits are 0-9, then A-Z.
std::string toString( unsigned value, unsigned base )
{
assert( base >= 2 && base <= 36 );
static char const digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
char buffer[sizeof(unsigned) * CHAR_BIT];
char* dst = buffer + sizeof(buffer);
do
{
*--dst = digits[value % base];
value /= base;
} while (value != 0);
return std::string(dst, buffer + sizeof(buffer));
}
If you want to parse input (e.g. for binary), then something like the
following should do the trick:
unsigned fromString( std::string const& value, unsigned base )
{
assert( base >= 2 && base <= 36 );
static char const digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
unsigned results = 0;
for (std::string::const_iterator iter = value.begin();
iter != value.end();
++ iter)
{
unsigned digit = std::find
( digits, digits + sizeof(digits) - 1,
toupper(static_cast<unsigned char>( *iter ) ) ) - digits;
if ( digit >= base )
throw std::runtime_error( "Illegal character" );
if ( results >= UINT_MAX / base
&& (results > UINT_MAX / base || digit > UINT_MAX % base) )
throw std::runtime_error( "Overflow" );
results = base * results + digit;
}
return results;
}
It's more complicated than toString because it has to handle all sorts
of possible error conditions. It's also still probably simpler than you
need; you probably want to trim blanks, etc., as well (or even ignore
them: entering 01000000 is more error prone than 0100 0000).
(Also, the end iterator for find has a - 1 because of the trailing
'\0' the compiler inserts into digits.)
Actually I don't understand why you need such complex code to accomplish what you need.
First of all, there is no such thing as conversion from binary to octal (the same is true for converting to/from decimal, etc.). The machine always works in binary; there's nothing you can (or should) do about this.
This is actually a question of formatting. That is, how do you print a number as octal, and how do you parse the textual representation of an octal number.
Edit:
You may use the following code for printing a number in any base:
const int PRINT_NUM_TXT_MAX = 33; // worst-case for binary
void PrintNumberInBase(unsigned int val, int base, char* szBuf)
{
// calculate the number of digits
int digits = 0;
for (unsigned int x = val; x; digits++)
x /= base;
if (digits < 1)
digits = 1; // will emit zero
// Print the value from right to left
szBuf[digits] = 0; // zero-term
while (digits--)
{
int dig = val % base;
val /= base;
char ch = (dig <= 9) ?
('0' + dig) :
('a' + dig - 0xa);
szBuf[digits] = ch;
}
}
Example:
char sz[PRINT_NUM_TXT_MAX];
PrintNumberInBase(19, 8, sz);
The code the OP is asking to produce is what your scientific calculator would do when you want a number in a different base.
I think your algorithm is wrong. Just looking over it, I see a power function towards the end. Why? There is a simple mathematical way to do what you are talking about. Once you get the math part, you can convert it to code.
If you had pencil and paper and no calculator (similar to not using pre-built functions), the method is to take the base you are in, change it to base 10, then change to the base you require. In your case that would be base 2, to base 10, to base 8.
This should get you started. All you really need are if/else statements with modulus to get the remainders.
http://www.purplemath.com/modules/numbbase3.htm
Then you have to figure out how to get your desired output. Maybe store the remainders in an array or output to a text file.
(Problems like this are the reason I want to double major in applied math.)
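A compact sketch of that pencil-and-paper route for the task at hand (binary text to a base-10 value, then the value to octal text via repeated division by 8), assuming the input holds only '0' and '1':
#include <iostream>
#include <string>

int main()
{
    std::string bits = "1111";   // binary text, e.g. read from the user

    // Step 1: binary text -> base-10 value.
    unsigned int value = 0;
    for (char c : bits)
        value = value * 2 + (c - '0');

    // Step 2: base-10 value -> octal text, collecting remainders of division by 8.
    std::string octal;
    do
    {
        octal = static_cast<char>('0' + value % 8) + octal;   // prepend the next remainder
        value /= 8;
    } while (value != 0);

    std::cout << octal << '\n';   // prints 17 for 1111
    return 0;
}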
Since you want conversion from decimal 0-256, it would be easiest to make functions, say call them int binary(), char hex(), and int octal(). Do the binary and octal first, as those would be the easiest since they can be represented by only integers.
#include <cmath>
#include <iostream>
#include <string>
#include <cstdlib>
using namespace std;

// Despite its name, this builds a string that decomposes the value into powers of 8.
string toBinary(const char* doubleDigit)
{
    int digit = atoi(doubleDigit);
    string binary = "(";

    for (int i = 9; digit != 0; i--)
    {
        if (digit - pow(8, i) >= 0)
        {
            // Find the largest k with k * 8^i <= digit
            int k = 1;
            while (k * pow(8, i) <= digit)
            {
                k++;
            }
            k = k - 1;
            digit = digit - k * pow(8, i);
            // Append "k*8^i+"
            binary += static_cast<char>(k + '0');
            binary += "*8^";
            binary += static_cast<char>(i + '0');
            binary += '+';
        }
    }
    binary.back() = ')';   // replace the trailing '+' with the closing parenthesis
    return binary;
}

int main()
{
    const char value[] = "409879";   // must be null-terminated for atoi
    cout << toBinary(value);
    return 0;
}