Here is code in Java:
int a = 456;
int b = 5;
String s = Integer.toString(a, b);
System.out.println(s); // prints "3311"
Now I want the same in C++, but all the conversions I find convert to base 10 only. Of course I don't want to implement this myself; why write something that already exists?
Although std::strtol is more flexible, in a controlled case you can use the non-standard itoa as well:
int a = 456;
int b = 5;
char buffer[32];
itoa(a, buffer, b); // buffer now holds "3311"
If you want base 8 or 16 you can easily use the stream manipulators std::oct and std::hex. If you want arbitrary bases, I suggest checking out this question.
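For example, a minimal sketch of my own showing both manipulators:

#include <iostream>
#include <sstream>

int main()
{
    std::ostringstream oss;
    oss << std::hex << 456;               // "1c8"
    std::cout << oss.str() << "\n";
    std::cout << std::oct << 456 << "\n"; // "710"
}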
Without error handling http://ideone.com/nCj2XG:
char *toString(unsigned int value, unsigned int radix)
{
    // digit table for radixes up to 36
    const char digit[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    char stack[32];
    static char out[33]; // static buffer: the result is overwritten on every call
    unsigned int quot, rem;
    int digits = 0;

    // collect the digits, least significant first
    do
    {
        quot = value / radix;
        rem  = value % radix;
        stack[digits] = digit[rem];
        value = quot;
        digits++;
    } while (value);

    // pop them back off so the most significant digit comes first
    int i = 0;
    while (digits--)
    {
        out[i++] = stack[digits];
    }
    out[i] = 0;
    return out;
}
There is no standard function itoa that performs conversion to an arbitrary numeral system, and the non-standard one isn't guaranteed to exist; my version of the compiler, for example, doesn't ship it. My solution:
#include <string>

// maximum radix - base36
std::string int2string(unsigned int value, unsigned int radix) {
    const char base36[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    if (value == 0) return "0"; // the loop below would otherwise return an empty string

    std::string result;
    while (value > 0) {
        unsigned int remainder = value % radix;
        value /= radix;
        result.insert(result.begin(), base36[remainder]);
    }
    return result;
}
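For example, int2string(456, 5) returns "3311", matching the Integer.toString(456, 5) call from the Java snippet above (note that Java uses lowercase letters for digits above 9, so for larger radixes the case differs).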
I am trying to convert a bit string (bitString) of length 'sLength' to an int.
The following code works fine for me on my computer. Is there any case where it may not work?
int toInt(string bitString, int sLength){
    int tempInt;
    int num = 0;
    for(int i = 0; i < sLength; i++){
        tempInt = bitString[i] - '0';
        num = num + tempInt * pow(2, (sLength-1-i));
    }
    return num;
}
Thanks in advance
pow works with doubles, so the result may be inaccurate. Use bit arithmetic instead:
num |= (1 << (sLength-1-i)) * tempInt;
Also don't forget about the cases where bitString contains characters other than '0' and '1', or is too long to fit in an int.
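A minimal validation sketch along those lines (my own illustration, assuming the whole value should fit in an int):

#include <climits>
#include <stdexcept>
#include <string>

int toInt(const std::string& bitString)
{
    // reject strings that cannot fit in an int (keep one bit for the sign)
    if (bitString.size() >= sizeof(int) * CHAR_BIT)
        throw std::out_of_range("bit string too long for int");

    int num = 0;
    for (char c : bitString) {
        if (c != '0' && c != '1')
            throw std::invalid_argument("bit string may contain only '0' and '1'");
        num = (num << 1) | (c - '0');
    }
    return num;
}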
Or, you can let the standard library do the heavy lifting:
#include <bitset>
#include <string>
#include <sstream>
#include <climits>

// note the result is always unsigned
unsigned long toInt(std::string const &s) {
    static const std::size_t MaxSize = CHAR_BIT * sizeof(unsigned long);
    if (s.size() > MaxSize) return 0; // handle error or just truncate?

    std::bitset<MaxSize> bits;
    std::istringstream is(s);
    is >> bits;
    return bits.to_ulong();
}
Why not change your for loop to the more efficient and far simpler C++11 version:
for (char c : bitString)
    num = (num << 1) | // Shift the current set of bits to the left one bit
          (c - '0');   // Add in the current bit via a bitwise-or
By the way, you should also check that the number of bits specified does not overrun an int, and you may want to make sure that each char in the string is either '0' or '1'.
An answer and a warning about the inaccuracy of floating-point numbers have already been given; here's a more readable implementation with integer arithmetic, though:
int toInt(const std::string &s)
{
    int n = 0;
    for (std::size_t i = 0; i < s.size(); i++) {
        n <<= 1;
        n |= s[i] - '0';
    }
    return n;
}
Notes:
You don't need an explicit length. That's why we have std::string::length().
Counting from zero results in cleaner code, because you don't have to do the subtraction every time.
int value = 1; // place value of the current bit, starting at the least significant
for (std::string::reverse_iterator it = bitString.rbegin();
     it != bitString.rend(); ++it) {
    num += *it == '1' ? value : 0;
    value *= 2;
}
I directly see three cases where it may not work:
pow works with double, so your result may be inaccurate; you can fix it with:
num |= tempInt * ( 1 << ( sLength - 1 - i ) );
If bitString[i] is not '0' or '1';
If the number in the string is bigger than the int limit.
If you have control over the last two points, your resulting code could be :
int toInt( const string& bitString )
{
    int num = 0;
    for ( char c : bitString )
    {
        num <<= 1;
        num |= ( c - '0' );
    }
    return num;
}
Don't forget the const reference as a parameter.
string convert_binary_to_hex(string binary_value, int number_of_bits)
{
    bitset<number_of_bits> set(binary_value); // error: number_of_bits is not a compile-time constant
    ostringstream result;
    result << hex << set.to_ulong() << endl;
    return result.str();
}
In the above method, I am converting binary strings to hex strings. Since each hex digit represents 4 bits, the number_of_bits variable needs to be a multiple of 4; the binary_value could range anywhere from 4 bits to 256 bits in the application I'm writing.
How do I get bitset to take a variable size?
My includes:
#include <stdio.h>
#include <iostream>
#include <string>
#include <bitset>
#include <sstream>
You can't. Template parameters like that need to be known at compile time since the compiler will need to generate different code based on the values passed.
In this case you probably want to iterate through your string instead and build up the value yourself, e.g.
unsigned long result = 0;
for (std::string::size_type i = 0; i < binary_value.length(); ++i)
{
    result <<= 1;
    if (binary_value[i] != '0') result |= 1;
}
This also assumes that your result fits in an unsigned long, though, and won't accommodate a 256-bit value - but neither will your sample code. You'll need a big-number type for that.
std::bitset's size can only be a compile-time known constant (constant expression) because it is an integral template parameter. Constant expressions include integral literals and/or constant integer variables initialized with constant expressions.
e.g.
std::bitset<4> q; //OK, 4 is a constant expression
const int x = 4;
std::bitset<x> qq; //OK, x is a constant expression, because it is const and is initialized with constant expression 4;
int y = 3;
const int z = y;
std::bitset<z> qqq; //Error, z isn't a constant expression, because even though it is const it is initialized with a non-constant expression
Use std::vector<bool> or boost::dynamic_bitset instead for a dynamic (not known at compile time) size.
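For instance, a minimal sketch with boost::dynamic_bitset (assuming Boost is available; the function name mirrors the question):

#include <boost/dynamic_bitset.hpp>
#include <sstream>
#include <string>

std::string convert_binary_to_hex(const std::string& binary_value)
{
    boost::dynamic_bitset<> set(binary_value); // the size is chosen at runtime
    std::ostringstream result;
    result << std::hex << set.to_ulong();      // to_ulong() throws if the value doesn't fit
    return result.str();
}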
You don't - std::bitset is a template, its size must be specified at compile-time.
You need to make convert_binary_to_hex a template on its own. If the size is only known at runtime, you have to find another solution.
template<size_t number_of_bits>
string convert_binary_to_hex(string binary_value)
{
    bitset<number_of_bits> set(binary_value);
    ostringstream result;
    result << hex << set.to_ulong() << endl;
    return result.str();
}
As far as I remember you can use templates to get around this issue:
template <size_t number_of_bits>
string convert_binary_to_hex(string binary_value)
{
    bitset<number_of_bits> set(binary_value);
    ostringstream result;
    result << hex << set.to_ulong() << endl;
    return result.str();
}
then call it like this:
convert_binary_to_hex<32>("110110111");
Note that you can still pass only constants as the template argument, but every call can now use a different constant :)
Make your method a template if you know the size at compile time; otherwise you'll need to use std::vector<bool>, which is actually specialized to use only one bit per bool anyway, but then you'll have to build the ulong manually using ORs and bit shifts.
//template version
template <size_t number_of_bits>
string convert_binary_to_hex(string binary_value) {
    bitset<number_of_bits> set(binary_value);
    ostringstream result;
    result << hex << set.to_ulong() << endl;
    return result.str();
}
But since you're already assuming that a ulong will be large enough to hold the value, and since extra bits make no difference for the code you've given, why not just make it the size of a ulong?
//reuses ulong assumption
string convert_binary_to_hex(string binary_value) {
    // sizeof gives bytes, so multiply by CHAR_BIT to get the number of bits
    bitset<sizeof(unsigned long) * CHAR_BIT> set(binary_value);
    ostringstream result;
    result << hex << set.to_ulong() << endl;
    return result.str();
}
Or else you can just have two functions: one that performs the actual conversion for a 4-bit number, and another that uses that function to build up arbitrary-length numbers:
string convert_nibble_to_hex(string binary_value) {
    bitset<4> set(binary_value);
    ostringstream result;
    result << hex << set.to_ulong(); // no endl, or the concatenation below picks up newlines
    return result.str();
}

string convert_binary_to_hex(string binary_value) {
    // call convert_nibble_to_hex binary_value.length()/4 times
    // and concatenate the results
    string result;
    for (string::size_type i = 0; i < binary_value.length(); i += 4)
        result += convert_nibble_to_hex(binary_value.substr(i, 4));
    return result;
}
You could use the fact that one 8-bit character always needs exactly two hex digits. There's no need to turn the entire string into a bit sequence at once; the string elements can be processed separately.
string convert_octets_to_hex(string value)
{
    string result(2 * value.size(), '\0'); // two hex digits per 8-bit character
    for (string::size_type i = 0; i < value.size(); i++) {
        result[2*i]   = "0123456789abcdef"[(value[i] >> 4) & 0x0f];
        result[2*i+1] = "0123456789abcdef"[value[i] & 0x0f];
    }
    return result;
}
Oh, I see you have a string of 1-bit characters. You can handle that the same way:
string convert_binary_to_hex(string binary_value, int number_of_bits = -1)
{
    if (number_of_bits < 0) number_of_bits = binary_value.size();

    string result((number_of_bits + 3) / 4, '\0');
    unsigned work;
    char* in = &binary_value[0];
    char* out = &result[0];

    // handle a leading group of fewer than four bits
    if (number_of_bits & 3) {
        work = 0;
        while (number_of_bits & 3) {
            work <<= 1;
            work |= *(in++) & 1;
            number_of_bits--;
        }
        *(out++) = "0123456789abcdef"[work];
    }

    // then convert four bits at a time
    while (number_of_bits) {
        work = ((in[0] & 1) << 3) | ((in[1] & 1) << 2) | ((in[2] & 1) << 1) | (in[3] & 1);
        in += 4;
        *(out++) = "0123456789abcdef"[work];
        number_of_bits -= 4;
    }
    return result;
}
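A quick usage check of my own (not the demo mentioned below):

convert_binary_to_hex("110110111"); // "1b7" (439 decimal); bit counts that aren't multiples of 4 work too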
EDIT: Fixed some bugs, added a demo
I have an int that I want to store as a binary string representation. How can this be done?
Try this:
#include <bitset>
#include <iostream>
int main()
{
    std::bitset<32> x(23456);
    std::cout << x << "\n";

    // If you don't want a variable just create a temporary.
    std::cout << std::bitset<32>(23456) << "\n";
}
I have an int that I want to first convert to a binary number.
What exactly does that mean? There is no type "binary number". Well, an int is already represented in binary form internally unless you're using a very strange computer, but that's an implementation detail -- conceptually, it is just an integral number.
Each time you print a number to the screen, it must be converted to a string of characters. It just so happens that most I/O systems chose a decimal representation for this process so that humans have an easier time. But there is nothing inherently decimal about int.
Anyway, to generate a base b representation of an integral number x, simply follow this algorithm:
1. Initialize s with the empty string.
2. m = x % b
3. x = x / b
4. Convert m into a digit, d.
5. Append d to s.
6. If x is not zero, go to step 2.
7. Reverse s.
Step 4 is easy if b <= 10 and your computer uses a character encoding where the digits 0-9 are contiguous, because then it's simply d = '0' + m. Otherwise, you need a lookup table.
Steps 5 and 7 can be simplified to appending d on the left of s if you know ahead of time how much space you will need, and you start from the right end of the string.
In the case of b == 2 (e.g. binary representation), step 2 can be simplified to m = x & 1, and step 3 can be simplified to x = x >> 1.
Solution with reverse:
#include <string>
#include <algorithm>
std::string binary(unsigned x)
{
    std::string s;
    do
    {
        s.push_back('0' + (x & 1));
    } while (x >>= 1);
    std::reverse(s.begin(), s.end());
    return s;
}
Solution without reverse:
#include <string>
std::string binary(unsigned x)
{
    // Warning: this breaks for numbers with more than 64 bits
    char buffer[64];
    char* p = buffer + 64;
    do
    {
        *--p = '0' + (x & 1);
    } while (x >>= 1);
    return std::string(p, buffer + 64);
}
AND the number with 100000..., then 010000..., 0010000..., etc. Each time, if the result is 0, put a '0' in a char array, otherwise put a '1'.
const int numberOfBits = sizeof(int) * 8; // const, so it can be used as the array size below
char binary[numberOfBits + 1];
int decimal = 29;

for (int i = 0; i < numberOfBits; ++i) {
    if ((decimal & (0x80000000 >> i)) == 0) { // 0x80000000 assumes a 32-bit int
        binary[i] = '0';
    } else {
        binary[i] = '1';
    }
}
binary[numberOfBits] = '\0';
string binaryString(binary);
http://www.phanderson.com/printer/bin_disp.html is a good example.
The basic principle of a simple approach:

Loop until the # is 0:
  & (bitwise AND) the # with 1. Print the result (1 or 0) to the end of the string buffer.
  Shift the # by 1 bit using >>=.
Repeat the loop.
Print the reversed string buffer.
To avoid reversing the string or needing to limit yourself to #s fitting the buffer string length, you can:
Compute ceiling(log2(N)) - say L
Compute mask = 2^L
Loop until mask == 0:
& (bitwise and) the mask with the #. Print the result (1 or 0).
number &= (mask-1)
mask >>= 1 (divide by 2)
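A sketch of this second recipe (my own rendering; note that because ceiling(log2(N)) rounds up, numbers that are not powers of two get one leading zero, and the shift assumes the value fits in 31 bits):

#include <cmath>
#include <string>

std::string to_binary(unsigned number)
{
    if (number == 0) return "0";

    unsigned L = static_cast<unsigned>(std::ceil(std::log2(number)));
    std::string out;
    for (unsigned mask = 1u << L; mask != 0; mask >>= 1) {
        out += (number & mask) ? '1' : '0';
        number &= (mask - 1); // the "number &= (mask-1)" step: clear the bit just printed
    }
    return out;
}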
I assume this is related to your other question on extensible hashing.
First define some mnemonics for your bits:
const int FIRST_BIT = 0x1;
const int SECOND_BIT = 0x2;
const int THIRD_BIT = 0x4;
Then you have your number you want to convert to a bit string:
int x = someValue;
You can check if a bit is set by using the bitwise & operator.
if(x & FIRST_BIT)
{
// The first bit is set.
}
You can keep a std::string and append '1' to it if a bit is set and '0' if it is not. Depending on the order you want the string in, you can start with the last bit and move to the first, or just go first to last.
You can refactor this into a loop, and use it for arbitrarily sized numbers, by computing the mnemonic bits above with current_bit_value <<= 1 after each iteration, as sketched below.
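A minimal sketch of that loop (my own illustration):

#include <string>

std::string to_bit_string(int x)
{
    std::string result;
    // current_bit_value walks from the lowest bit upward; prepending each digit
    // puts the most significant bit on the left.
    for (unsigned current_bit_value = 1; current_bit_value != 0; current_bit_value <<= 1)
        result.insert(result.begin(), (x & current_bit_value) ? '1' : '0');
    return result;
}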
There isn't a direct function; you can just walk along the bits of the int (hint: see >>) and insert a '1' or '0' into the string.
Sounds like a standard interview / homework type question
Use the sprintf function to store the formatted output in a string variable, instead of printf, which prints directly. Note, however, that these functions only work with C strings, not C++ strings, and that printf-style formatting has no standard binary conversion (before C23's %b), so you would still have to build the digits yourself.
There's a small header-only library you can use for this here.
Example:
std::cout << ConvertInteger<Uint32>::ToBinaryString(21);
// Displays "10101"

auto x = ConvertInteger<Int8>::ToBinaryString(21, true);
std::cout << x << "\n"; // displays "00010101"

auto y = ConvertInteger<Uint8>::ToBinaryString(21, true, "0b");
std::cout << y << "\n"; // displays "0b00010101"
Solution without reverse, no additional copy, and with 0-padding:
#include <iostream>
#include <string>
template <short WIDTH>
std::string binary( unsigned x )
{
    std::string buffer( WIDTH, '0' );
    char *p = &buffer[ WIDTH ]; // one past the last character; decremented before each write

    do {
        --p;
        if (x & 1) *p = '1';
    } while (x >>= 1);

    return buffer;
}

int main()
{
    std::cout << "'" << binary<32>(0xf0f0f0f0) << "'" << std::endl;
    return 0;
}
This is my best implementation for converting integers (of any type) to a std::string. You can remove the template if you are only going to use it for a single integer type. To the best of my knowledge, it strikes a good balance between the safety of C++ and the cryptic nature of C. Make sure to include the needed headers.
template<typename T>
std::string bstring(T n){
    std::string s;
    for(int m = sizeof(n) * 8; m--; ){
        s.push_back('0' + ((n >> m) & 1));
    }
    return s;
}
Use it like so,
std::cout << bstring<size_t>(371) << '\n';
This is the output on my computer (it differs between computers):
0000000000000000000000000000000000000000000000000000000101110011
Note that the entire binary representation is copied, including the padding zeros, which helps show the bit size. So the length of the string is the size of size_t in bits.
Let's try a signed integer (a negative number):
std::cout << bstring<signed int>(-1) << '\n';
This is the output on my computer (as stated, it differs between computers):
11111111111111111111111111111111
Note that the string is now shorter, which shows that signed int consumes less space than size_t. As you can see, my computer uses the two's complement method to represent signed integers (negative numbers). You can now see why unsigned short(-1) > signed int(1).
Here is a version made just for signed integers, without templates; use this if you only intend to convert signed integers to string.
std::string bstring(int n){
    std::string s;
    for(int m = sizeof(n) * 8; m--; ){
        s.push_back('0' + ((n >> m) & 1));
    }
    return s;
}
I'm taking a beginner C++ course. I received an assignment telling me to write a program that converts an arbitrary number from any base between binary and hex to another base between binary and hex. I was asked to use separate functions to convert to and from base 10. It was to help us get used to using arrays. (We already covered passing by reference previously in class.) I already turned this in, but I'm pretty sure this wasn't how I was meant to do it:
#include <iostream>
#include <conio.h>
#include <cstring>
#include <cmath>
using std::cout;
using std::cin;
using std::endl;
int to_dec(char value[], int starting_base);
char* from_dec(int value, int ending_base);
int main() {
    char value[30];
    int starting_base;
    int ending_base;

    cout << "This program converts from one base to another, so long as the bases are" << endl
         << "between 2 and 16." << endl
         << endl;

input_numbers:
    cout << "Enter the number, then starting base, then ending base:" << endl;
    cin >> value >> starting_base >> ending_base;

    if (starting_base < 2 || starting_base > 16 || ending_base < 2 || ending_base > 16) {
        cout << "Invalid base(s). ";
        goto input_numbers;
    }

    for (int i=0; value[i]; i++) value[i] = toupper(value[i]);

    cout << "Base " << ending_base << ": " << from_dec(to_dec(value, starting_base), ending_base) << endl
         << "Press any key to exit.";
    getch();
    return 0;
}
int to_dec(char value[], int starting_base) {
    char hex[16] = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F'};
    long int return_value = 0;
    unsigned short int digit = 0;
    for (short int pos = strlen(value)-1; pos > -1; pos--) {
        for (int i=0; i<starting_base; i++) {
            if (hex[i] == value[pos]) {
                return_value += i * pow((float)starting_base, digit++);
                break;
            }
        }
    }
    return return_value;
}
char* from_dec(int value, int ending_base) {
    char hex[16] = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F'};
    char *return_value = (char *)malloc(30);
    unsigned short int digit = (int)ceil(log10((double)(value+1))/log10((double)ending_base));
    return_value[digit] = 0;
    for (; value != 0; value /= ending_base) return_value[--digit] = hex[value % ending_base];
    return return_value;
}
I'm pretty sure this is more advanced than it was meant to be. How do you think I was supposed to do it?
I'm essentially looking for two kinds of answers:
Examples of what a simple solution like the one my teacher probably expected would be.
Suggestions on how to improve the code.
I don't think you need the inner loop:
for (int i=0; i<starting_base; i++) {
What is its purpose?
Rather, you should get the character at value[ pos ] and convert it to an integer. The conversion depends on base, so it may be better to do it in a separate function.
You are defining char hex[16] twice, once in each function. It may be better to define it in only one place.
EDIT 1:
Since this is "homework" tagged, I cannot give you the full answer. However, here is an example of how to_dec() is supposed to work. (Ideally, you should have constructed this!)
Input:
char * value = "3012",
int base = 4,
Math:
Number = 3 * 4^3 + 0 * 4^2 + 1 * 4^1 + 2 * 4^0 = 192 + 0 + 4 + 2 = 198
Expected working of the loop:
x = 0
x = 4x + 3 = 3
x = 4x + 0 = 12
x = 4x + 1 = 49
x = 4x + 2 = 198
return x;
EDIT 2:
Fair enough! So, here is some more :-)
Here is a code sketch. Not compiled or tested though. This is direct translation of the example I provided earlier.
unsigned
to_dec( char * inputString, unsigned base )
{
    unsigned rv = 0; // return value
    unsigned c;      // character converted to integer

    for( char * p = inputString; *p; ++p ) // p iterates through the string
    {
        c = *p - hex[0]; // hex[0] is '0', so this handles digits; letters still need the lookup
        rv = base * rv + c;
    }
    return rv;
}
I would stay away from goto statements unless they are absolutely necessary. goto statements are easy to use but can lead to 'spaghetti code'.
Try using a loop instead. Something along the lines of this:
bool base_is_invalid = true;
while ( base_is_invalid ) {
    cout << "Enter the number, then starting base, then ending base:" << endl;
    cin >> value >> starting_base >> ending_base;

    if (starting_base < 2 || starting_base > 16 || ending_base < 2 || ending_base > 16)
        cout << "Invalid base(s). ";
    else
        base_is_invalid = false;
}
You can initialize arrays from string literals. Note, though, that in C++ (unlike C) the array must also have room for the terminating \0, so it's easiest to let the compiler deduce the size:

char const hex[] = "0123456789ABCDEF";
Or just use a pointer to the string literal for the same effect:
char const* hex = "0123456789ABCDEF";
to_dec() looks too complicated; here is my shot at it:
int to_dec(char* value, int starting_base)
{
    int return_value = 0;
    for (char* cur = value; *cur; cur++) { // most significant digit first
        // assuming chars are ascii/utf: '0'-'9' = 48-57, 'A'-'F' = 65-70
        // faster than a loop
        int inval = *cur - '0';
        if (inval > 9) {
            inval = *cur - ('A' - 10);
            if (inval > 15) {
                // throw input error
            }
        }
        if (inval < 0) {
            // throw input error
        }
        if (inval >= starting_base) {
            // throw input error
        }
        // now the simple calc
        return_value *= starting_base;
        return_value += inval;
    }
    return return_value;
}
For the initial conversion from ASCII to an integer, you can also use a lookup table (just as you are using a lookup table for the conversion the other way around), which is much faster than searching through the array for every digit.
int to_dec(char value[], int starting_base)
{
    static const char asc2BaseTab[] = {0,1,2,3,4,5,6,7,8,9,-1,-1,-1,-1,-1,-1,-1,10,11,12,13,14,15, //0-9 and A-F (big caps)
        -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1, //unused ascii chars
        10,11,12,13,14,15}; //a-f (small caps)

    int number = 0;
    for (int srcIdx = 0; value[srcIdx]; srcIdx++) // most significant digit first
    {
        number *= starting_base;
        char asciiDigit = value[srcIdx];
        if (asciiDigit < '0' || asciiDigit > 'f')
        {
            //display input error
        }
        char digit = asc2BaseTab[asciiDigit - '0'];
        if (digit == -1)
        {
            //display input error
        }
        number += digit;
    }
    return number;
}
P.S. Excuse me if there are some compile errors in this... I couldn't test it... but the logic is sound.
In your description of the assignment as given it says:
"I was asked to use separate functions to convert to and from base 10."
If that is really what the teacher meant and wanted, which is doubtful, your code doesn't do that:
int to_dec(char value[], int starting_base)
returns an int, which is a binary number. :-) Which, in my opinion, does make more sense.
Did the teacher even notice that?
C and C++ are different languages, with different styles of programming. You had better not mix them. (Where C and C++ differ)
If you are trying to use C++, then:
Use std::string instead of char* or char[].
int to_dec(string value, int starting_base);
string from_dec(int value, int ending_base);
No mallocs; use new/delete. But actually C++ manages memory automatically. Memory is freed as soon as a variable goes out of scope (unless you are dealing with pointers), and pointers are the last thing you need to deal with.
We don't need any lookup tables here, just a magic string.
string hex = "0123456789ABCDEF";//The index of the letter is its decimal value. A is 10, F is 15.
//usage
char c = 'B';
int value = hex.find( c );//works only with uppercase;
The refactored to_dec can look like this:
int to_dec(string value, int starting_base) {
    string hex = "0123456789ABCDEF";
    int result = 0;
    for (string::size_type power = 0; power < value.size(); ++power) {
        result += hex.find( value.at(value.size()-power-1) ) * pow((float)starting_base, power);
    }
    return result;
}
There is also a more elegant algorithm for converting from base 10 to any other base; see there for an example. You have the opportunity to code it yourself :)
In your from_dec function, you're converting the digits from left to right. An alternative is to convert from right to left. That is,
std::string from_dec(int n, int base)
{
    // DIGITS is a lookup string such as "0123456789ABCDEF"
    std::string result;
    bool is_negative = n < 0;

    if (is_negative)
    {
        n = - n;
    }

    while (n != 0)
    {
        result = DIGITS[n % base] + result;
        n /= base;
    }

    if (is_negative)
    {
        result = '-' + result;
    }

    return result;
}
This way, you won't need the log function.
(BTW, to_dec and from_dec are inaccurate names. Your computer doesn't store numbers in base 10.)
Got this question on an interview once and brainfarted and spun wheels for a while. Go figure. Anyway, a couple years later I'm going through Math and Physics for Programmers to brush up for positions that are more math intensive than what I've been doing. CH1 "assignment" has
// Write a function ConvertBase(Number, Base1, Base2) which takes a
// string or array representing an integer in Base1 and converts it
// into base Base2, returning the new string.
So I took an approach mentioned above: I convert a string in an arbitrary base to a UINT64, then I convert the UINT64 back to an arbitrary base:
CString ConvertBase(const CString& strNumber, int base1, int base2)
{
    return ValueToBaseString(BaseStringToValue(strNumber, base1), base2);
}
Each of the subfunctions has a recursive solution. Here's one for example:
UINT64 BaseStringToValue(const CString& strNumber, int base)
{
    if (strNumber.IsEmpty())
    {
        return 0;
    }

    CString outDigit = strNumber.Right(1);
    UINT64 output = DigitToInt(outDigit[0]);

    CString strRemaining = strNumber.Left(strNumber.GetLength() - 1);
    UINT64 val = BaseStringToValue(strRemaining, base);
    output += val * base;

    return output;
}
I find the other one slightly harder to grasp mentally, but it works roughly the same way.
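ValueToBaseString isn't shown above; a hypothetical sketch of it in the same style (my own rendering, assuming an IntToDigit helper like the one below) could be:

CString ValueToBaseString(UINT64 value, int base)
{
    if (value == 0)
    {
        return CString(); // the recursion bottoms out with an empty string
    }

    // emit the higher-order digits first, then append the least significant one
    CString result = ValueToBaseString(value / base, base);
    result += IntToDigit(static_cast<int>(value % base));
    return result;
}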
I also implemented DigitToInt and IntToDigit which work just like they sound. You can take some neat shortcuts there, by the way, if you realize that chars are ints then you don't need huge switch statements:
int DigitToInt(wchar_t cDigit)
{
    cDigit = toupper(cDigit);
    if (cDigit >= '0' && cDigit <= '9')
    {
        return cDigit - '0';
    }
    return cDigit - 'A' + 10;
}
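IntToDigit isn't shown either; a hypothetical inverse in the same spirit:

wchar_t IntToDigit(int n)
{
    return static_cast<wchar_t>(n < 10 ? L'0' + n : L'A' + (n - 10));
}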
and unit tests are really your friend here:
typedef struct
{
    CString number;
    int base1;
    int base2;
    CString answer;
} Input;

Input input[] =
{
    { "345678", 10, 16, "5464E" },
    { "FAE211", 16, 8, "76561021" },
    { "FAE211", 16, 2, "111110101110001000010001" },
    { "110110111", 2, 10, "439" }
};
(snip)
for (int i = 0; i < sizeof(input) / sizeof(input[0]); i++)
{
    CString result = ConvertBase(input[i].number, input[i].base1, input[i].base2);
    printf("%S in base %d is %S in base %d (%S expected - %s)\n",
           (const WCHAR*)input[i].number,
           input[i].base1,
           (const WCHAR*)result,
           input[i].base2,
           (const WCHAR*)input[i].answer,
           result == input[i].answer ? "CORRECT" : "WRONG");
}
And here's the output:
345678 in base 10 is 5464E in base 16 (5464E expected - CORRECT)
FAE211 in base 16 is 76561021 in base 8 (76561021 expected - CORRECT)
FAE211 in base 16 is 111110101110001000010001 in base 2 (111110101110001000010001 expected - CORRECT)
110110111 in base 2 is 439 in base 10 (439 expected - CORRECT)
Now, I took some shortcuts in coding by using CString types, etc. I gave no consideration to efficiency or performance; I just wanted to solve the algorithm with the easiest coding possible.
It can help to understand how these algorithms are recursive if you write them out. Say you want to determine the value of the string "B4A3", which is in base 13. You know it's 3 + 13(A) + 13(13)(4) + 13(13)(13)(B). Another way to write that is 0 + 3 + 13(A + 13(4 + 13(B))) - and voila! Recursion.
Apart from the things already mentioned, I would suggest using the new operator instead of malloc. The advantages of new are that it also calls constructors - which is irrelevant here since you're using a POD type, but important when it comes to objects such as std::string or your own custom classes - and that you can overload the new operator to suit your specific needs (which is irrelevant here, too :p). But don't go ahead using malloc for PODs and new for classes, since mixing them is considered bad style.
But okay, you got yourself some heap memory in from_dec... but where is it freed again? Basic rule: memory that you malloc (or calloc etc) must be passed to free at some point. The same rule applies to the new-operator, just that the release-operator is called delete. Note that for arrays, you need new[] and delete[]. DON'T ever allocate with new and release with delete[] or the other way around, since the memory won't be released correctly.
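A minimal illustration of the pairing rules:

int* single = new int(5);    // allocated with new ...
delete single;               // ... so released with delete

char* buffer = new char[30]; // allocated with new[] ...
delete[] buffer;             // ... so released with delete[]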
Nothing evil will happen if your toy program doesn't release the memory... I guess your PC has enough RAM to cope with it, and when you shut down your program, the OS releases the memory anyway... but not all programs are (a) that tiny and (b) shut down often.
Also I'd avoid conio.h, since this is not portable. You're not using the most complicated IO, so the standard headers (iostream etc) should do.
Likewise, I think most programmers using modern languages follow the rule "Only use goto if other solutions are really crippled or tons more work". This is a situation that can be easily solved by using loops, as shown by emceefly. In your program the goto is easy to handle, but you won't be writing such small programs forever, will you? ;)
I, for example, was presented with some legacy code recently... 2000 lines of goto-littered code, yay! Trying to follow the code's logical flow was almost impossible ("Oh, jump ahead 200 lines, great... who needs context anyway"), and rewriting the damn thing was even harder.
So okay, your goto doesn't hurt here, but where's the benefit? 2-3 lines shorter? It doesn't really matter overall (if you're paid by lines of code, this could even be a major disadvantage ;)). Personally, I find the loop version more readable and cleaner.
As you see, most of the points here can be ignored easily for your program, since it's a toy program. But when you think of larger programs, they make more sense (hopefully) ;)