Octal conversion using loops in C++

I am currently working on a basic program which converts a binary number to octal. Its task is to print a table with all the numbers from 0 to 256, with their binary, octal and hexadecimal equivalents. The task requires me to use only my own code (i.e. using loops etc. and not built-in functions). The code I have made (it is quite messy at the moment) is as follows (this is only a snippet):
int counter = ceil(log10(fabs(binaryValue)+1));
int iter;
if (counter%3 == 0)
{
iter = counter/3;
}
else if (counter%3 != 0)
{
iter = ceil((counter/3));
}
c = binaryValue;
for (int h = 0; h < iter; h++)
{
tempOctal = c%1000;
c /= 1000;
int count = ceil(log10(fabs(tempOctal)+1));
for (int counter = 0; counter < count; counter++)
{
if (tempOctal%10 != 0)
{
e = pow(2.0, counter);
tempDecimal += e;
}
tempOctal /= 10;
}
octalValue += (tempDecimal * pow(10.0, h));
}
The output is completely wrong. When, for example, the binary code is 1111 (decimal value 15), it outputs 7. I can understand why this happens (the last three digits of the binary number, 111, are 7 in decimal format), but I can't identify the problem in the code. Any ideas?
Edit: After some debugging and testing I figured out the answer.
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
while (true)
{
int binaryValue, c, tempOctal, tempDecimal, octalValue = 0, e;
cout << "Enter a binary number to convert to octal: ";
cin >> binaryValue;
int counter = ceil(log10(binaryValue+1));
cout << "Counter " << counter << endl;
int iter;
if (counter%3 == 0)
{
iter = counter/3;
}
else if (counter%3 != 0)
{
iter = (counter/3)+1;
}
cout << "Iterations " << iter << endl;
c = binaryValue;
cout << "C " << c << endl;
for (int h = 0; h < iter; h++)
{
tempOctal = c%1000;
cout << "3 digit binary part " << tempOctal << endl;
int count = ceil(log10(tempOctal+1));
cout << "Digits " << count << endl;
tempDecimal = 0;
for (int counterr = 0; counterr < count; counterr++)
{
if (tempOctal%10 != 0)
{
e = pow(2.0, counterr);
tempDecimal += e;
cout << "Temp Decimal value 0-7 " << tempDecimal << endl;
}
tempOctal /= 10;
}
octalValue += (tempDecimal * pow(10.0, h));
cout << "Octal Value " << octalValue << endl;
c /= 1000;
}
cout << "Final Octal Value: " << octalValue << endl;
}
system("pause");
return 0;
}

This looks overly complex. There's no need to involve floating-point math, and doing so can easily introduce problems.
Of course, the obvious solution is to use a pre-existing function to do this (like { char buf[32]; snprintf(buf, sizeof buf, "%o", binaryValue); }) and be done, but if you really want to do it "by hand", you should look into using bit operations:
Use binaryValue & 7 to mask out the three lowest bits. These will be your next octal digit (three bits is 0..7, which is one octal digit).
Use binaryValue >>= 3 to shift the number and get three new bits into the lowest positions.
Reverse the number afterwards, or (if possible) start from the end of the string buffer and emit digits backwards.
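A minimal sketch of that mask-and-shift approach, assuming the value arrives as an ordinary unsigned int (the function name toOctal is just illustrative, not from the answer):
#include <algorithm>
#include <iostream>
#include <string>

std::string toOctal(unsigned value)
{
    std::string result;
    do {
        result += char('0' + (value & 7)); // the three lowest bits form one octal digit (0..7)
        value >>= 3;                       // shift the next three bits down
    } while (value != 0);
    std::reverse(result.begin(), result.end()); // digits were produced least-significant first
    return result;
}

int main()
{
    std::cout << toOctal(15u) << '\n'; // prints 17, the octal form of binary 1111
}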

I don't understand your code; it seems far too complicated. But one thing is sure: if you are converting an internal representation into octal, you're going to have to divide by 8 somewhere, and do a % 8 somewhere, and I don't see them. On the other hand, I see operations with both 10 and 1000, neither of which should be present.
For starters, you might want to write a simple function which converts a value (preferably an unsigned of some type—get unsigned right before worrying about the sign) to a string using any base, e.g.:
//! \pre
//! base >= 2 && base <= 36
//!
//! Digits are 0-9, then A-Z.
std::string convert(unsigned value, unsigned base);
This shouldn't take more than about 5 or 6 lines of code. But be careful: the normal algorithm generates the digits in reverse order. If you're using std::string, the simplest solution is to push_back each digit, then call std::reverse at the end, before returning it. Otherwise, a C-style char[] works well, provided that you make it large enough. (sizeof(unsigned) * CHAR_BIT + 2 is more than enough, even for signed, and even with a '\0' at the end, which you won't need if you return a string.) Just initialize the pointer to buffer + sizeof(buffer), and pre-decrement each time you insert a digit. To construct the string you return, std::string( pointer, buffer + sizeof(buffer) ) should do the trick.
As for the loop, the end condition could simply be value == 0. (You'll be dividing value by base each time through, so you're guaranteed to reach this condition.) If you use a do ... while, rather than just a while, you're also guaranteed at least one digit in the output.
(It would have been a lot easier for me to just post the code, but since this is obviously homework, I think it better to just give indications concerning what needs to be done.)
Edit: I've added my implementation, and some comments on your new code.
First for the comments: the prompt is very misleading. "Enter a binary number" sounds like the user should enter binary; if you're reading into an int, the value input should be decimal. And there are still the % 1000 and / 1000 and % 10 and / 10 that I don't understand. Whatever you're doing, it can't be right if there's no % 8 and / 8. Try it: input "128", for example, and see what you get. If you're trying to input binary, then you really have to input a string, and parse it yourself.
My code for the conversion itself would be:
//! \pre
//! base >= 2 && base <= 36
//!
//! Digits are 0-9, then A-Z.
std::string toString( unsigned value, unsigned base )
{
assert( base >= 2 && base <= 36 );
static char const digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
char buffer[sizeof(unsigned) * CHAR_BIT];
char* dst = buffer + sizeof(buffer);
do
{
*--dst = digits[value % base];
value /= base;
} while (value != 0);
return std::string(dst, buffer + sizeof(buffer));
}
If you want to parse input (e.g. for binary), then something like the
following should do the trick:
unsigned fromString( std::string const& value, unsigned base )
{
assert( base >= 2 && base <= 36 );
static char const digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
unsigned results = 0;
for (std::string::const_iterator iter = value.begin();
iter != value.end();
++ iter)
{
unsigned digit = std::find
( digits, digits + sizeof(digits) - 1,
toupper(static_cast<unsigned char>( *iter ) ) ) - digits;
if ( digit >= base )
throw std::runtime_error( "Illegal character" );
if ( results >= UINT_MAX / base
&& (results > UINT_MAX / base || digit > UINT_MAX % base) )
throw std::runtime_error( "Overflow" );
results = base * results + digit;
}
return results;
}
It's more complicated than toString because it has to handle all sorts of possible error conditions. It's also still probably simpler than you need; you probably want to trim blanks, etc., as well (or even ignore them: entering 01000000 is more error prone than 0100 0000). (Also, the end iterator for find has a - 1 because of the trailing '\0' the compiler inserts into digits.)
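For the original binary-text-to-octal-text task, a possible way to combine the two functions above might look like the following sketch (it assumes toString and fromString as defined above are in scope, together with the headers they need: <algorithm>, <cassert>, <cctype>, <climits>, <stdexcept>):
#include <iostream>
#include <string>

int main()
{
    std::string input;
    std::cout << "Enter a binary number: ";
    std::cin >> input;
    unsigned value = fromString(input, 2);    // parse the binary text, e.g. "1111" -> 15
    std::cout << toString(value, 8) << '\n';  // print it in octal, e.g. "17"
}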

Actually, I don't understand why you need such complex code to accomplish this.
First of all, there is no such thing as conversion from binary to octal (the same is true for converting to/from decimal, etc.). The machine always works in binary; there's nothing you can (or should) do about this.
This is actually a question of formatting: that is, how do you print a number as octal, and how do you parse the textual representation of an octal number.
Edit:
You may use the following code for printing a number in any base:
const int PRINT_NUM_TXT_MAX = 33; // worst-case for binary
void PrintNumberInBase(unsigned int val, int base, PSTR szBuf)
{
// calculate the number of digits
int digits = 0;
for (unsigned int x = val; x; digits++)
x /= base;
if (digits < 1)
digits = 1; // will emit zero
// Print the value from right to left
szBuf[digits] = 0; // zero-term
while (digits--)
{
int dig = val % base;
val /= base;
char ch = (dig <= 9) ?
('0' + dig) :
('a' + dig - 0xa);
szBuf[digits] = ch;
}
}
Example:
char sz[PRINT_NUM_TXT_MAX];
PrintNumberInBase(19, 8, sz);
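With those arguments, sz ends up holding "23", since 19 decimal is 23 in octal.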

The code the OP is asking to produce is what your scientific calculator would do when you want a number in a different base.
I think your algorithm is wrong. Just looking over it, I see calls to a power function (pow) towards the end. Why? There is a simple mathematical way to do what you are talking about. Once you get the math part, you can convert it to code.
If you had pencil and paper, and no calculator (similar to not using pre-built functions), the method is to take the base you are in, change it to base 10, then change to the base you require. In your case that would be base 2, to base 10, to base 8.
This should get you started. All you really need are if/else statements with modulus to get the remainders.
http://www.purplemath.com/modules/numbbase3.htm
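For example, converting decimal 156 to octal with nothing but division and modulus:
156 / 8 = 19 remainder 4
19 / 8 = 2 remainder 3
2 / 8 = 0 remainder 2
Reading the remainders from last to first gives 234 octal, and indeed 2*64 + 3*8 + 4 = 156.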
Then you have to figure out how to get your desired output. Maybe store the remainders in an array or output to a txt file.
(Problems like this are the reason why I want to double major in applied math.)
Since you want conversion from decimal 0-256, it would be easiest to make functions, say call them int binary(), char hex(), and int octal(). Do the binary and octal first, as those would be the easiest since they can be represented by only integers.

#include <cmath>
#include <iostream>
#include <string>
#include <cstring>
#include <cctype>
#include <cstdlib>
using namespace std;
char* toBinary(char* doubleDigit)
{
    int digit = atoi(doubleDigit);
    char* binary = new char[64]();   // was new char(), which allocates only a single char
    int x = 0;
    binary[x] = '(';
    int k = 1;
    for (int i = 9; digit != 0; i--)
    {
        k = 1;
        if (digit - k * pow(8, i) >= 0)
        {
            k = 1;
            cout << "i" << i << endl;
            cout << k * pow(8, i) << endl;
            while (k * pow(8, i) <= digit)
            {
                k++;
            }
            k = k - 1;
            digit = digit - k * pow(8, i);
            binary[x + 1] = k + '0';
            binary[x + 2] = '*';
            binary[x + 3] = '8';
            binary[x + 4] = '^';
            binary[x + 5] = i + '0';
            binary[x + 6] = '+';
            x += 6;
        }
    }
    binary[x] = ')';       // overwrites the trailing '+'
    binary[x + 1] = '\0';  // terminate the string so it can be printed
    return binary;         // note: what is built here is an octal expansion string, not binary
}
int main()
{
    char value[] = "409879";   // must be null-terminated for atoi
    char* result = toBinary(value);
    cout << result;
    delete[] result;
    return 0;
}

Related

Code to convert decimal to hexadecimal without using arrays

I have this code here and I'm trying to do decimal to hexadecimal conversion without using arrays. It is mostly working, but it gives me wrong answers for values greater than 1000. What am I doing wrong? Are there any solutions? Can anyone give suggestions on how to improve this code?
for(int i = num; i > 0; i = i/16)
{
temp = i % 16;
(temp < 10) ? temp = temp + 48 : temp = temp + 55;
num = num * 100 + temp;
}
cout<<"Hexadecimal = ";
for(int j = num; j > 0; j = j/100)
{
ch = j % 100;
cout << ch;
}
There are a couple of errors in the code, but elements of the approach are clear.
This line sort of works:
(temp < 10) ? temp = temp + 48 : temp = temp + 55;
But it is confusing because it uses 48 and 55 as magic numbers! It also may lead to overflow, it repacks hex digits as decimal character values, and it's unconventional to use ?: in that way.
Half the trick of radix output is that each digit is n%r followed by n/r, but the digits come out 'backwards' for conventional left-to-right output.
The code below reverses the hex digits into another variable and then reads them out, so it avoids any overflow risk. It works with an unsigned value for clarity, and because there is no specification of how to handle negative values.
#include <iostream>
void hex(unsigned num){
    unsigned val=num;
    const unsigned radix=16;
    unsigned temp=0;
    int digits=0;                 // count the digits so trailing zeros aren't lost in the reversal
    while(val!=0){
        temp=temp*radix+val%radix;
        val/=radix;
        ++digits;
    }
    do{
        unsigned digit=temp%16;
        char c=digit<10?'0'+digit:'A'+(digit-10);
        std::cout << c;
        temp/=16;
    }while(--digits>0);
    std::cout << '\n';
}
int main(void) {
    hex(0x23U);
    hex(0x0U);
    hex(0x7U);
    hex(0xABCDU);
    return 0;
}
Expected Output:
23
0
7
ABCD
Arguably it's more obvious what is going on if the middle lines of the first loop are:
while(val!=0){
temp=(temp<<4)+(val&0b1111);
val=val>>4;
}
That exposes that we're building temp as blocks of 4 bits of val in reverse order.
So the value 0x89AB will become 0xBA98, which is then output in reverse.
I've not done that because bitwise operations may not be familiar.
It's a double reverse!
The mapping into characters is done at output to avoid overflow issues.
Using character literals like '0' instead of integer literals like 48 is more readable and makes the intention clearer.
So here's a single-loop version of the solution to the problem which should work for any size of integer:
#include <iostream>
#include <limits>
#include <string>
using namespace std;
int main(int argc, char *argv[])
{
    try
    {
        unsigned
            value = argc == 2 ? stoi(argv[1]) : 64;
        for (unsigned i = numeric_limits<unsigned>::digits; i > 0; i -= 4)
        {
            unsigned
                digit = (value >> (i - 4)) & 0xf;
            cout << (char)((digit < 10) ? digit + 48 : digit + 55);
        }
        cout << endl;
    }
    catch (const exception& e)
    {
        cout << e.what() << endl;
    }
}
There is a mistake in your code: the second loop needs a way to know when to stop, so here the cumulative num is seeded with a non-zero value (1) and the loop exits once only that seed remains (j > 99). I also changed the cumulative num to be a long int; the rest should be fine.
void tohex(int value){
long int num = 1;
char ch = 0;
int temp = 0;
for(int i = value; i > 0; i = i/16)
{
temp = i % 16;
(temp < 10) ? temp = temp + 48 : temp = temp + 55;
num = num * 100 + temp;
}
cout<<"Hexadecimal = ";
for(long int j = num; j > 99; j = j/100)
{
ch = j % 100;
cout << ch;
}
cout << endl;
}
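For example, tohex(1234) prints Hexadecimal = 4D2.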
If this is a homework assignment, it is probably related to the chapter on recursion. See a solution below. To understand it, you need to know:
what a lookup table is
what recursion is
how to convert a number from one base to another iteratively
basic I/O
void hex_out(unsigned n)
{
static const char* t = "0123456789abcdef"; // lookup table
if (!n) // recursion break condition
return;
hex_out(n / 16);
std::cout << t[n % 16];
}
Note that there is no output for zero. This can be solved simply by calling the recursive function from a second function.
You can also add a second parameter, base, so that you can call the function this way:
b_out(123, 10); // decimal
b_out(123, 2); // binary
b_out(123, 8); // octal
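One possible b_out along those lines (a hedged sketch; the helper name b_out_impl is just one way to structure the wrapper-plus-recursion idea described above):
#include <iostream>

// Recursive part: prints nothing for n == 0, so no leading zeros appear.
void b_out_impl(unsigned n, unsigned base)
{
    static const char* t = "0123456789abcdef"; // lookup table, as above
    if (!n)
        return;
    b_out_impl(n / base, base);
    std::cout << t[n % base];
}

// Wrapper that handles the zero case the recursion skips.
void b_out(unsigned n, unsigned base)
{
    if (n == 0)
        std::cout << '0';
    else
        b_out_impl(n, base);
}

int main()
{
    b_out(123, 10); std::cout << '\n'; // 123
    b_out(123, 2);  std::cout << '\n'; // 1111011
    b_out(123, 8);  std::cout << '\n'; // 173
}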

Printing in Base 10-16 with letters

I am stuck on a project where I have to print out any number in any base from 10-16. The problem is that in those bases, you have to add a letter to the front, which I don't really understand how to do with recursion. Can anyone help me?
int conversionFunction(int num, int base)
{
if (num == 0)
return 0;
int x = num % base;
num /= base;
if (x < 0)
num = num + 1;
conversionFunction(num, base);
if (x < 0){
cout << x+(base * -1);
}
else{
cout << x;
return x;
}
}
If I do 246 in base 16, I get 156. I know that the actual answer should be F6. 15 translates to F when converting. But how would I do that?
Something like
static const char* digits = "0123456789abcdef";
and
cout << digits[num % base];
is a nice way. static just means that digits has static (program-long) lifetime but is scoped to your function (basically, you won't have to recreate it over and over every time you enter your function).
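Plugged into a recursive printer, that lookup might look like the following sketch (printBase is an illustrative name, and it assumes a non-negative num):
#include <iostream>

void printBase(int num, int base)
{
    static const char* digits = "0123456789abcdef";
    if (num >= base)
        printBase(num / base, base); // print the more significant digits first
    std::cout << digits[num % base];
}

int main()
{
    printBase(246, 16); // prints f6 (use an uppercase table if you want F6)
    std::cout << '\n';
}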
You seem to be stuck just on the problem of converting between bases. I can think of two ways to do it:
Divide by decreasing powers of the radix, from n-1 down to 0, where n is the number of digits. That requires you to know the largest value that you might have to convert. Each division gives you a digit in the place that corresponds to that power. Using your example, you could decide to go up to four digits, so you'd have:
246 / 16^3 = 0
246 / 16^2 = 0
246 / 16^1 = F (15), leaving a remainder of 6
6 / 16^0 = 6
So the answer is 0x00F6.
Use modulo arithmetic with increasing powers of the radix, from 1 to n. Again, each operation gives you a digit in the place that corresponds to the power of the radix. Using the same example:
246 mod 16^1 = 6
240 mod 16^2 = 240 = 0xF0, i.e. F in the sixteens place
So again, you've got 0xF6.
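A minimal sketch of the first approach (a fixed number of digit positions, dividing by decreasing powers of the radix); the function name printFixedWidth and the maxPower parameter are illustrative additions, not from the answer:
#include <iostream>

void printFixedWidth(unsigned value, unsigned base, int maxPower)
{
    static const char* digits = "0123456789ABCDEF";
    unsigned divisor = 1;
    for (int i = 0; i < maxPower; ++i)
        divisor *= base;                          // divisor = base^maxPower
    for (; divisor != 0; divisor /= base)
        std::cout << digits[(value / divisor) % base];
    std::cout << '\n';
}

int main()
{
    printFixedWidth(246, 16, 3); // prints 00F6, matching the worked example above
}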
Here's a version with comments in the code, using a similar approach to okovko's answer and Caleb's second solution. It starts with the least significant digit and extracts digits until num is zero. It supports conversions in the range (INTMAX_MIN, INTMAX_MAX] using a base in the range [2, 36].
#include <iostream>
#include <string>
#include <cstdint> // std::intmax_t, std::uintmax_t
std::string itos(
std::intmax_t num, // number to convert, range: (INTMAX_MIN, INTMAX_MAX]
const int base=10, // base, range: [2, 36]
const std::string& prefix="", // user defined prefix
bool add_plus=false) // add plus sign for positive numbers
{
static const std::string digits = "0123456789abcdefghijklmnopqrstuvwxyz";
if(base>36 || base<2) return ""; // erroneous base
std::string rv; // the return value we'll create
if(num) {
bool negative = false;
if(num<0) {
if(num==INTMAX_MIN) return ""; // the ONE std::intmax_t number you can't use
// make it positive for the calculation
num = -num;
negative = true;
}
std::uintmax_t x;
while(num) {
x = num % base; // extract least significant digits index
rv.insert(rv.begin(), digits[x]); // insert digit first
num -= x; // reduce num with the extracted value
num /= base; // divide num down for next extraction
}
// the below two inserts could be moved to just before the
// return if you want to add the prefix for the value zero too
// insert prefix
rv.insert(0, prefix);
// insert minus sign if negative or plus if desired
if(negative) rv.insert(rv.begin(), '-');
else if(add_plus) rv.insert(rv.begin(), '+');
} else rv = "0"; // special case
return rv;
}
int main() {
std::cout << "bin " << itos(255, 2, "0b") << "\n";
std::cout << "oct " << itos(255, 8, "0") << "\n";
std::cout << "dec " << itos(255, 10, "", true) << "\n";
std::cout << "hex " << itos(-INTMAX_MAX, 16, "0x") << "\n";
std::cout << "hex " << itos(INTMAX_MAX, 16, "0x") << "\n";
}
Possible output:
bin 0b11111111
oct 0377
dec +255
hex -0x7fffffffffffffff
hex 0x7fffffffffffffff

C++ Program abruptly ends after cin

I am writing code to get the last digit of very large Fibonacci numbers such as fib(239), etc. I am using strings to store the numbers, grabbing the individual chars from end to beginning, converting them to int, and then storing the values back into another string. I have not been able to test what I have written because my program keeps abruptly closing after the std::cin >> n; line.
Here is what I have so far.
#include <iostream>
#include <string>
using std::cin;
using std::cout;
using namespace std;
char get_fibonacci_last_digit_naive(int n) {
cout << "in func";
if (n <= 1)
return (char)n;
string previous= "0";
string current= "1";
for (int i = 0; i < n - 1; ++i) {
//long long tmp_previous = previous;
string tmp_previous= previous;
previous = current;
//current = tmp_previous + current; // could also use previous instead of current
// for with the current length of the longest of the two strings
//iterates from the end of the string to the front
for (int j=current.length(); j>=0; --j) {
// grab consectutive positions in the strings & convert them to integers
int t;
if (tmp_previous.at(j) == '\0')
// tmp_previous is empty use 0 instead
t=0;
else
t = stoi((string&)(tmp_previous.at(j)));
int c = stoi((string&)(current.at(j)));
// add the integers together
int valueAtJ= t+c;
// store the value into the equivalent position in current
current.at(j) = (char)(valueAtJ);
}
cout << current << ":current value";
}
return current[current.length()-1];
}
int main() {
int n;
std::cin >> n;
//char& c = get_fibonacci_last_digit_naive(n); // reference to a local variable returned WARNING
// http://stackoverflow.com/questions/4643713/c-returning-reference-to-local-variable
cout << "before call";
char c = get_fibonacci_last_digit_naive(n);
std::cout << c << '\n';
return 0;
}
The output is consistently the same: no matter what I enter for n, the output never changes. These are the commands I used to run the code, and the output:
$ g++ -pipe -O2 -std=c++14 fibonacci_last_digit.cpp -lm
$ ./a.exe
10
There is a newline after the 10 and the 10 is what I input for n.
I appreciate any help. And happy holidays!
I'm posting this because your understanding of the problem seems to be taking a backseat to the choice of solution you're attempting to deploy. This is an example of an XY Problem, where the choice of solution method, and the roadblocks hit while implementing it, obfuscate the actual problem you're trying to solve.
You are trying to calculate the final digit of the Nth Fibonacci number, where N could be very large. A basic understanding of the Fibonacci sequence tells you that
fib(0) = 0
fib(1) = 1
fib(n) = fib(n-1) + fib(n-2), for all n larger than 1.
The iterative solution to solving fib(N) for its value would be:
unsigned fib(unsigned n)
{
if (n <= 1)
return n;
unsigned previous = 0;
unsigned current = 1;
for (int i=1; i<n; ++i)
{
unsigned value = previous + current;
previous = current;
current = value;
}
return current;
}
which is all well and good, but will obviously overflow once N causes an overflow of the storage capabilities of our chosen data type (in the above case, unsigned on most 32bit platforms will overflow after a mere 47 iterations).
But we don't need the actual fib values for each iteration. We only need the last digit of each iteration. Well, the base-10 last-digit is easy enough to get from any unsigned value. For our example, simply replace this:
current = value;
with this:
current = value % 10;
giving us a near-identical algorithm, but one that only "remembers" the last digit on each iteration:
unsigned fib_last_digit(unsigned n)
{
if (n <= 1)
return n;
unsigned previous = 0;
unsigned current = 1;
for (int i=1; i<n; ++i)
{
unsigned value = previous + current;
previous = current;
current = value % 10; // HERE
}
return current;
}
Now current always holds the single last digit of the prior sum; whether that prior sum exceeded 10 or not isn't relevant to us. Once we have that, the next iteration can use it to calculate the sum of two single positive digits, which cannot exceed 18, and again we only need the last digit from that for the next iteration, etc. This continues until we've iterated however many times were requested, and when finished, the final answer presents itself.
Validation
We know the first 20 or so fibonacci numbers look like this, run through fib:
0:0
1:1
2:1
3:2
4:3
5:5
6:8
7:13
8:21
9:34
10:55
11:89
12:144
13:233
14:377
15:610
16:987
17:1597
18:2584
19:4181
20:6765
Here's what we get when we run the algorithm through fib_last_digit instead:
0:0
1:1
2:1
3:2
4:3
5:5
6:8
7:3
8:1
9:4
10:5
11:9
12:4
13:3
14:7
15:0
16:7
17:7
18:4
19:1
20:5
That should give you a budding sense of confidence this is likely the algorithm you seek, and you can forego the string manipulations entirely.
Running this code on a Mac I get:
libc++abi.dylib: terminating with uncaught exception of type std::out_of_range: basic_string before callin funcAbort trap: 6
The most obvious problem with the code itself is in the following line:
for (int j=current.length(); j>=0; --j) {
Reasons:
If you are doing things like current.at(j), this will crash immediately. For example, the string "blah" has length 4, but there is no character at position 4.
The length of tmp_previous may be different from current. Calling tmp_previous.at(j) will crash when you go from 8 to 13 for example.
Additionally, as others have pointed out, if the only thing you're interested in is the last digit, you do not need to go through the trouble of looping through every digit of every number. The trick here is to only remember the last digit of previous and current, so large numbers are never a problem and you don't have to do things like stoi.
An alternative to a previous answer would be string addition.
I tested it with the Fibonacci number of 100000 and it works fine in just a few seconds. Working only with the last digit solves your problem for even larger numbers for sure. For all of you requiring the full Fibonacci number as well, here is an algorithm:
#include <algorithm> // for std::max and std::transform
#include <iostream>
#include <string>
using namespace std;
std::string str_add(std::string a, std::string b)
{
// http://ideone.com/o7wLTt
size_t n = max(a.size(), b.size());
if (n > a.size()) {
a = string(n-a.size(), '0') + a;
}
if (n > b.size()) {
b = string(n-b.size(), '0') + b;
}
string result(n + 1, '0');
char carry = 0;
std::transform(a.rbegin(), a.rend(), b.rbegin(), result.rbegin(), [&carry](char x, char y)
{
char z = (x - '0') + (y - '0') + carry;
if (z > 9) {
carry = 1;
z -= 10;
} else {
carry = 0;
}
return z + '0';
});
result[0] = carry + '0';
n = result.find_first_not_of("0");
if (n != string::npos) {
result = result.substr(n);
}
return result;
}
std::string str_fib(size_t i)
{
std::string n1 = "0";
std::string n2 = "1";
for (size_t idx = 0; idx < i; ++idx) {
const std::string f = str_add(n1, n2);
n1 = n2;
n2 = f;
}
return n1;
}
int main() {
const size_t i = 100000;
const std::string f = str_fib(i);
if (!f.empty()) {
std::cout << "fibonacci of " << i << " = " << f << " | last digit: " << f[f.size() - 1] << std::endl;
}
std::cin.sync(); std::cin.get();
return 0;
}
Try first calculating the Fibonacci number and then converting the int to a std::string using std::to_string(). In the following you can extract the last digit using the [] operator on the last index.
int fib(int i)
{
int number = 1;
if (i > 2) {
number = fib(i - 1) + fib(i - 2);
}
return number;
}
int main() {
const int i = 10;
const int f = fib(i);
const std::string s = std::to_string(f);
if (!s.empty()) {
std::cout << "fibonacci of " << i << " = " << f << " | last digit: " << s[s.size() - 1] << std::endl;
}
std::cin.sync(); std::cin.get();
return 0;
}
Avoid duplicate using declarations (the question's code has both using std::cout and using namespace std).
Also consider switching from int to long or long long when your numbers get bigger. Since the Fibonacci numbers are positive, also use unsigned.

Is this an inefficient way to convert from a binary string to a decimal value?

while(i < length)
{
pow = 1;
for(int j = 0; j < 8; j++, pow *=2)
{
ch += (str[j] - 48) * pow;
}
str = str.substr(8);
i+=8;
cout << ch;
ch = 0;
}
This seems to be slowing my program down a lot. Is it because of the string functions I'm using in there, or is this approach wrong in general? I know there's the way where you implement long division, but I wanted to see if that was actually more efficient than this method. I can't think of another way that doesn't use the same general algorithm, so maybe it's just my implementation that is the problem.
Perhaps you might want to look into using the standard library functions. They're probably at least as optimised as anything you run through the compiler:
#include <iostream>
#include <iomanip>
#include <cstdlib>
int main (void) {
const char *str = "10100101";
// Use str.c_str() if it's a real C++ string.
long int li = std::strtol (str, 0, 2);
std::cout
<< "binary string = " << str
<< ", decimal = " << li
<< ", hex = " << std::setbase (16) << li
<< '\n';
return 0;
}
The output is:
binary string = 10100101, decimal = 165, hex = a5
You are doing some things unnecessarily, like creating a new substring for each loop iteration. You could just use str[i + j] instead.
It is also not necessary to multiply 0 or 1 with the power. Just use an if-statement.
while(i < length)
{
pow = 1;
for(int j = 0; j < 8; j++, pow *=2)
{
if (str[i + j] == '1')
ch += pow;
}
i+=8;
cout << ch;
ch = 0;
}
This will at least run a bit faster.
A short answer could be:
long int x = strtol(your_binary_c++_string.c_str(), (char **)NULL, 2);
Probably you can use an integer type as below (note that for a full 16-digit binary input you need a 64-bit type such as long long, since 16 decimal digits overflow a 32-bit int). Just traverse the binary number step by step, starting from position 0 up to n-1, where n is the most significant bit (MSB); multiply each bit by the corresponding power of 2 and add the results together. E.g. to convert 1000 (the binary equivalent of 8), just do the following:
1 0 0 0 ==> going from right to left
0 x 2^0 = 0
0 x 2^1 = 0
0 x 2^2 = 0
1 x 2^3 = 8
Now add them together, i.e. 0+0+0+8 = 8; this is the decimal equivalent of 1000. Please read the program below to get a better understanding of how the concept works. Note: the program works only for 16-bit binary numbers (non-floating) or less. Leave a comment if anything is not clear. You are bound to receive a reply.
// Program to convert binary to its decimal equivalent
#include <iostream>
#include <math.h>
int main()
{
long long x; // 64-bit: a 16-digit input like 1111111111111111 would overflow a 32-bit int
int i=0,sum = 0;
// prompts the user to input a 16-bit binary number
std::cout<<" Enter the binary number (16-bit) : ";
std::cin>>x;
while ( i != 16 ) // runs 16 times
{
sum += (x%10) * pow(2,i);
x = x/10;
i++;
}
std::cout<<"\n The decimal equivalent is : "<<sum;
return 0;
}
How about something like:
int binstring_to_int(const std::string &str)
{
// 16 bits are 16 characters, but -1 since bits are numbered 0 to 15
std::string::size_type bitnum = str.length() - 1;
int value = 0;
for (auto ch : str)
{
value |= (ch == '1') << bitnum--;
}
return value;
}
It's the simplest I can think of. Note that this uses the C++11 range-based for loop construct; if your compiler can't handle it, you can use
for (std::string::const_iterator i = str.begin(); i != str.end(); i++)
{
char ch = *i;
// ...
}
Minimize the number of operations and don't compute things more than once. Just multiply and move up:
unsigned int result = 0;
for (char * p = str; *p != 0; ++p)
{
result *= 2;
result += (*p - '0'); // this is either 0 or 1
}
The scheme is readily generalized to any base < 10.
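Wrapped up as a function, the same multiply-and-add loop might look like the following sketch (parseBinary is an illustrative name; it assumes the string contains only '0' and '1' characters):
#include <iostream>
#include <string>

unsigned parseBinary(const std::string& str)
{
    unsigned result = 0;
    for (char c : str) {
        result *= 2;          // shift the accumulated value up one binary place
        result += (c - '0');  // add the current bit, 0 or 1
    }
    return result;
}

int main()
{
    std::cout << parseBinary("10100101") << '\n'; // prints 165
}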

convert decimal to 32 bit binary?

Convert a positive integer number in C++ (0 to 2,147,483,647) to 32-bit binary and display it.
I want to do it in the traditional "mathematical" way (rather than use bitset, vector push_back, a recursive function, or something else special to C++), one reason being so that you can implement it in different languages (well, maybe).
So I went ahead and implemented a simple program like this:
#include <iostream>
using namespace std;
int main()
{
int dec,rem,i=1,sum=0;
cout << "Enter the decimal to be converted: ";
cin>>dec;
do
{
rem=dec%2;
sum=sum + (i*rem);
dec=dec/2;
i=i*10;
} while(dec>0);
cout <<"The binary of the given number is: " << sum << endl;
system("pause");
return 0;
}
The problem is that when you input a large number such as 9999, the result will be negative or some weird number, because sum is an int and can't hold more than its max range. A 32-bit binary number will have 32 digits, so is it too big for any number type in C++? Any suggestions here, and about displaying a 32-bit number as the question requires?
What you get in sum as a result is hardly usable for anything but printing. It's a decimal number which just looks like a binary.
If the decimal-binary conversion is not an end in itself, note that numbers in computer memory are already represented in binary (and it's not the property of C++), and the only thing you need is a way to print it. One of the possible ways is as follows:
int size = 0;
for (int tmp = dec; tmp; tmp >>= 1)
size++;
for (int i = size - 1; i >= 0; --i)
cout << ((dec >> i) & 1);
Another variant using a character array:
char repr[33] = { 0 };
int size = 0;
for (int tmp = dec; tmp; tmp >>= 1)
size++;
for (int i = 0; i < size; ++i)
repr[i] = ((dec >> (size - i - 1)) & 1) ? '1' : '0';
cout << repr << endl;
Note that both variants don't work if dec is negative.
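One common workaround is to copy dec into an unsigned variable first, so the shifts are well defined and the first loop terminates, e.g.:
unsigned udec = dec;
and then shift and mask udec instead of dec.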
You have a number and want its binary representation, i.e, a string. So, use a string, not an numeric type, to store your result.
Using a for-loop, and a predefined array of zero-chars:
#include <iostream>
using namespace std;
int main()
{
int dec;
cout << "Enter the decimal to be converted: ";
cin >> dec;
char bin32[] = "00000000000000000000000000000000";
for (int pos = 31; pos >= 0; --pos)
{
if (dec % 2)
bin32[pos] = '1';
dec /= 2;
}
cout << "The binary of the given number is: " << bin32 << endl;
}
For performance reasons, you may terminate the for loop early:
for (int pos = 31; pos >= 0 && dec; --pos)
Note that in C++ you can treat an integer as a boolean: everything != 0 is considered true.
You could use an unsigned integer type. However, even with a larger type you will eventually run out of space to store binary representations. You'd probably be better off storing them in a string.
As others have pointed out, you need to generate the results in a string. The classic way to do this (which works for any base between 2 and 36) is:
std::string
toString( unsigned n, int precision, unsigned base )
{
assert( base >= 2 && base <= 36 );
static char const digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
std::string retval;
while ( n != 0 ) {
retval += digits[ n % base ];
n /= base;
}
while ( retval.size() < precision ) {
retval += ' ';
}
std::reverse( retval.begin(), retval.end() );
return retval;
}
You can then display it.
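For the question's 32-bit display, a call along the lines of std::cout << toString(dec, 32, 2) << '\n'; would do it; note that the padding loop above appends ' ', so if you want leading zeros rather than leading spaces, append '0' there instead.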
Recursion. In pseudocode:
function toBinary(integer num)
if (num < 2)
then
print(num)
else
toBinary(num DIV 2)
print(num MOD 2)
endif
endfunction
This does not handle leading zeros or negative numbers. The recursion stack is used to reverse the binary bits into the standard order.
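A direct C++ rendering of that pseudocode (same limitations: no leading zeros and no handling of negative numbers) might be:
#include <iostream>

void toBinary(unsigned num)
{
    if (num < 2) {
        std::cout << num;      // base case: a single bit
    } else {
        toBinary(num / 2);     // emit the higher bits first
        std::cout << num % 2;  // then the current bit
    }
}

int main()
{
    toBinary(9999u); // prints 10011100001111
    std::cout << '\n';
}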
Just write:
long long int dec, rem, i = 1, sum = 0;
Instead of:
int dec, rem, i = 1, sum = 0;
That extends the range somewhat, but note that even a 64-bit integer only holds about 19 decimal digits, so this trick of building a decimal number that merely looks like binary still can't cover all 32 bits; for the full 32-bit case, use one of the string-based approaches above.