C++: Format String Containing Hex Value

Working with C++ on Visual Studio 2010.
Trying to come up with a robust function that takes a hex value as a string and a size as an integer, and outputs the formatted hex value.
For example:
If the input string is "A2" and size is 1, then the output is "0xA2"
If the input string is "800" and size is 2, then the output is "0x0800"
If the input string is "DEF" and size is 4, then the output is "0x00000DEF"
If the input string is "00775" and size is 4, then the output is "0x00000775"
If the input string is "FB600" and size is 3, then the output is "0x0FB600"
The basic idea is: multiply size by 2, and if the string length is less than that, add leading zeros to the hex value, then prepend "0x".
"0x" is prepended regardless of whether leading zeros are added.
As you can see in the 1st example, there are no zeros to add because the string already contains 2 characters.
I came up with the function below, but it has memory corruption. Also, when I try to process a large amount of data by calling this function a few hundred times, it crashes. It seems my logic has memory holes in it.
So I am hoping that someone can come up with a robust, intelligent implementation of this function.
What I tried:
void formatHexString(char* inputHex, int size, char* outputFormattedHex)
{
    int len = size * 2;
    int diff = len - strlen(inputHex);
    char * tempHex = new char [diff + 2]; //"2" is for holding "0x"
    tempHex[0] = '0';
    tempHex[1] = 'x';
    if (len > strlen(inputHex))
    {
        for (int i = 2; i < ((len - strlen(inputHex)) + 2); i++)
        {
            tempHex[i] = '0';
        }
    }
    strcat(tempHex, inputHex);
    sprintf(outputFormattedHex, "%s", tempHex);
    delete [] tempHex;
    cout << outputFormattedHex << endl;
}
int main()
{
    char bbb1[24];
    formatHexString("23", 1, bbb1);
    char bbb2[24];
    formatHexString("A3", 2, bbb2);
    char bbb3[24];
    formatHexString("0AA23", 4, bbb3);
    char bbb4[24];
    formatHexString("7723", 4, bbb4);
    char bbb5[24];
    formatHexString("AA023", 4, bbb5);
    return 0;
}
UPDATED:
I cannot modify the arguments of the original function, as this function is called from a different application. So I modified my original function with your code, but it is not working. Any ideas?
void formatHexString(char* inputHex, int size, char* outputFormattedHex)
{
    string input(inputHex);
    std::size_t const input_len(input.length());
    if (!size || (size * 2 < input_len))
        size = input_len / 2 + input_len % 2;
    std::stringstream ss;
    ss << "0x" << std::setw(2 * size) << std::setfill('0') << input;
    sprintf(outputFormattedHex, "%s", ss.str());
}

#include <iostream>
#include <sstream>
#include <iomanip>
#include <string>
#include <cstddef>

std::string formatHexString(std::string const & input, std::size_t size = 0)
{
    std::size_t const input_len(input.length());
    // always round up to an even count of digits if no size is specified
    // or size would cause the output to be truncated
    if (!size || (size * 2 < input_len))
        size = input_len / 2 + input_len % 2;
    std::stringstream ss;
    ss << "0x" << std::setw(2 * size) << std::setfill('0') << input;
    return ss.str();
}

int main()
{
    std::cout << formatHexString( "23") << '\n'
              << formatHexString( "A3", 2) << '\n'
              << formatHexString( "AA23", 4) << '\n'
              << formatHexString( "7723", 4) << '\n'
              << formatHexString("AA023", 4) << '\n';
}
Solution without std::stringstream:
#include <string>
#include <cstddef>

std::string formatHexString(std::string const & input, std::size_t size = 0)
{
    std::size_t const input_len(input.length());
    // always round up to an even count of digits if no size is specified
    // or size would cause the output to be truncated
    if (!size || (size * 2 < input_len))
        size = input_len / 2 + input_len % 2;
    std::string result("0x");
    for (std::size_t i = 0, leading_zeros = size * 2 - input_len; i < leading_zeros; ++i)
        result += '0';
    result += input;
    return result;
}
Updated:
#define _CRT_SECURE_NO_WARNINGS
#include <iostream>
#include <string>
#include <iomanip>
#include <sstream>
#include <cstddef>
#include <cstring> // std::strlen, std::strcpy
#include <cstdio>

void formatHexString(char const * inputHex, int size, char * outputFormattedHex)
{
    int const input_len(std::strlen(inputHex));
    if (!size || (size * 2 < input_len))
        size = input_len / 2 + input_len % 2;
    std::stringstream ss;
    ss << "0x" << std::setw(2 * size) << std::setfill('0') << inputHex;
    std::strcpy(outputFormattedHex, ss.str().c_str());
}

int main()
{
    char output[24];
    formatHexString("23", 1, output);
    std::cout << output << '\n';
    formatHexString("A3", 2, output);
    std::cout << output << '\n';
    formatHexString("0AA23", 4, output);
    std::cout << output << '\n';
    formatHexString("7723", 4, output);
    std::cout << output << '\n';
    formatHexString("AA023", 4, output);
    std::cout << output << '\n';
}

It is unclear from your question what you expect to happen with leading zeros in the input: i.e., either input "00000000EA" with size 2 turns into "00EA", or it keeps all its leading zeros.
This simple solution handles both cases (bTrim = true for the 1st case):
#include <string>

void formatHexString(std::string & strHex, unsigned int nSize, bool bTrim = true)
{
    if (bTrim) // Trim leading-zeros (keep a single '0' if the value is all zeros):
    {
        std::string::size_type nFirst = strHex.find_first_not_of('0');
        strHex = (nFirst == std::string::npos) ? "0" : strHex.substr(nFirst);
    }
    if (nSize > strHex.length()) // Pad with leading-zeros to fit nSize:
        strHex.insert(0, std::string(nSize - strHex.length(), '0'));
    strHex.insert(0, "0x"); // Insert prefix
}
If it's important to keep the original signature, wrap the above formatHexString with:
void formatHexString(char* inputHex, int size, char* outputFormattedHex)
{
    std::string strHex(inputHex);
    formatHexString(strHex, size * 2);
    strcpy_s(outputFormattedHex, strHex.length() + 1, strHex.c_str()); // Visual Studio
}

Related

How to output a 4 byte string as a 4 byte integer in CPP

I am currently working on a hash function and need to interpret a 4-byte string as a 4-byte integer. Specifically, I want every char value of the string interpreted as part of the integer.
You can just copy the 4 bytes into a 32-bit unsigned integer variable to be able to interpret them as a 4-byte integer:
#include <cstdint>
#include <cstring>
#include <iostream>
#include <stdexcept> // std::runtime_error
#include <string>

std::uint32_t foo(const std::string& str) {
    if (str.size() != 4) throw std::runtime_error("wrong string length");
    std::uint32_t rv;
    std::memcpy(&rv, str.c_str(), 4);
    return rv;
}

int main() {
    std::cout << (1 + 256 + 65536 + 16777216) << '\n'; // 16843009
    std::cout << foo("\001\001\001\001") << '\n';      // 16843009
    // The result is platform dependent, so these are possible results:
    std::cout << foo("\004\003\002\001") << '\n';      // 16909060 or 67305985
    std::cout << foo("\001\002\003\004") << '\n';      // 16909060 or 67305985
}
This function takes a string and interprets its char values as a 4-byte integer number.
I have only tested it with strings of length up to 4.
#include <cstdint>
#include <string>

uint32_t buffToInteger( std::string buffer )
{
    uint32_t b = 0;
    for ( uint64_t i = 0; i < buffer.length(); i++ )
    {
        // cast the byte first, then shift; shifting inside a cast to unsigned char
        // would throw the high bits away
        b |= static_cast<uint32_t>( static_cast<unsigned char>(buffer.at( i )) )
             << (24 - (4 - buffer.length() + i) * 8);
    }
    return b;
}

Tiny Encryption Algorithm implementation produces unexpected results

Here is an implementation of TEA, that attempts to encrypt a file containing a text message:
main.cpp
#include <iostream>
#include <iomanip>
#include <string>
#include <fstream>
#include "TEA.h"
int main()
try
{
    // std::cout << "sizeof(long) = " << sizeof(long) << '\n';
    std::string src("in.txt");
    std::string dest("out.txt");
    std::string key("bs");
    send_msg(src, dest, key);
}
catch (std::exception& e)
{
    std::cerr << e.what();
    exit(1);
}
TEA.h
#ifndef TEA_h
#define TEA_h
/*
src - eight (2 words or 2*4 bytes) characters to be enciphered.
dest- enciphered output.
key - array of 4 words.
Assumes sizeof(long) == 4 bytes.
*/
void encipher(const unsigned long* const v,
              unsigned long* const w,
              const unsigned long* const k)
{
    unsigned long y = v[0];
    unsigned long z = v[1];
    unsigned long sum = 0;
    unsigned long delta = 0x9E3779B9;
    unsigned long n = 32;
    while (n-- > 0)
    {
        y += (z<<4 ^ z>>5) + z^sum + k[sum&3];
        sum += delta;
        z += (z<<4 ^ z>>5) + y^sum + k[sum>>11&3];
    }
    w[0] = y;
    w[1] = z;
}
//---------------------------------------------------------------------------
/*
Sends the clear text from: src_file as
encrypted text to: dest_file, using TEA
with key: the last argument.
*/
void send_msg(std::string& src_file,
              std::string& dest_file,
              std::string key)
{
    const int nchar = 2 * sizeof(long);   // size of I/O chunk: 8 bytes = 64 bits
    const int kchar = 2 * nchar;          // size of key: 16 bytes = 128 bits
    // pad key with 0's to match en-/de- cipher argument input size
    while (key.size() < kchar)
    {
        key += '0';
    }
    // prepare files
    std::ifstream ifs(src_file.c_str());
    std::ofstream ofs(dest_file.c_str());
    if (!ifs || !ofs)
    {
        throw std::runtime_error("File can't open!\n");
    }
    // key: extract raw string data interpreted as pointer to const unsigned long
    const unsigned long* k = reinterpret_cast<const unsigned long*>(key.data());
    // define C-compatible way to read & write from / to file 128 bits (two unsigned longs) at a time
    unsigned long outptr[2];
    char inbuf[nchar];
    unsigned long* inptr = reinterpret_cast<unsigned long*>(inbuf);
    int count = 0;
    while (ifs.get(inbuf[count]))
    {
        ofs << std::hex;          // write output in hex
        if (++count == nchar)     // 8 characters in the input buffer: ready to encode
        {
            encipher(inptr, outptr, k);
            // pad with leading 0's
            ofs << std::setw(8) << std::setfill('0') << outptr[0] << ' '
                << std::setw(8) << std::setfill('0') << outptr[1] << ' ';
            count = 0;
        }
    }
    if (count)                    // pad at the end
    {
        while (count != nchar)
        {
            inbuf[count++] = '0';
        }
        encipher(inptr, outptr, k);
        ofs << outptr[0] << ' ' << outptr[1] << ' ';
    }
}
#endif
Input file, in.txt:
The Tiny
Expected in Output file:
5b8fb57c 806fbcce
Actual in Output file, out.txt:
f3a810ff 3874d755
What am I doing wrong?
The + operation has a higher precedence than ^, so (z<<4 ^ z>>5) + z^sum + k[sum&3] is parsed as
(((z<<4) ^ (z>>5)) + z)^(sum + k[sum&3]).
Similarly for the other expression.
You should add parentheses to make your expression explicit about how it executes.
The problem was indeed in those expressions (as pointed out by #1201ProgramAlarm); however, it is not caused by the implicit operator precedence (nor by arity).
y += (z<<4 ^ z>>5) + z^sum + k[sum&3];
sum += delta;
z += (z<<4 ^ z>>5) + y^sum + k[sum>>11&3]; // <------ the problem is here
The left and right bit-shift operations have to be applied to the variable y, i.e.:
z += (y<<4 ^ y>>5) + y^sum + k[sum>>11&3];
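For completeness, here is a sketch of the whole encipher routine with only that single line changed (everything else exactly as in the question's TEA.h); the name encipher_fixed is just to distinguish it from the original:
void encipher_fixed(const unsigned long* const v,
                    unsigned long* const w,
                    const unsigned long* const k)
{
    unsigned long y = v[0];
    unsigned long z = v[1];
    unsigned long sum = 0;
    unsigned long delta = 0x9E3779B9;
    unsigned long n = 32;
    while (n-- > 0)
    {
        y += (z<<4 ^ z>>5) + z^sum + k[sum&3];
        sum += delta;
        z += (y<<4 ^ y>>5) + y^sum + k[sum>>11&3];  // y, not z, feeds the shifts here
    }
    w[0] = y;
    w[1] = z;
}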

printf int to char array with left padded spaces

I'm having a bit of a brain-dead moment.
I have to store the string representation of an int in a char[], but the ASCII representation has to be left-padded with spaces.
A snprintf will do the job.
char data [6];
int msg_len = 10;
std::snprintf(data, 6, "%*d", 5, msg_len);
//" 10" <-- OK
I just wonder if there is a more elegant way to do it.
I have access to C++11.
Also, there's a bit of a problem: I think snprintf will also add a terminating character, and I have to avoid that.
I could have an intermediary buffer and copy that one into my data, but it would add additional complexity.
I need to do it in place, because those data structures are part of a message I have to send to a server that accepts input formatted this way.
The message looks like:
struct
{
    char first_field [6];
    char second_field [8];
    char data_field [12];
};
And I might need to set the second_field before having set the first one. Also I have a lot more fields to fill in, so a generic solution would be appreciated.
As long as I can convert an int to that string representation would be fine.
This can be achieved with a combination of std::ostringstream and I/O manipulators such as std::setfill, std::setw and others:
std::ostringstream s;
s << std::setfill('0') << std::setw(7) << 1015;
std::string str = s.str();
std::cout << str << std::endl; // outputs "0001015"
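Since the question actually needs space padding (not zeros) and a fixed-width field without a terminating null, here is a minimal sketch along the same lines; write_padded is a made-up helper name, and formatting into an ostringstream first and then copying without the terminator is just one possible approach:
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <iomanip>
#include <sstream>
#include <string>

// Sketch: left-pad `value` with spaces and copy it into a fixed-width char field
// (no terminating null is written into the field).
void write_padded(char* field, std::size_t width, int value)
{
    std::ostringstream s;
    s << std::setw(static_cast<int>(width)) << value;   // default fill character is ' '
    const std::string str = s.str();
    std::memcpy(field, str.data(), std::min(width, str.size()));
}
For example, write_padded(msg.first_field, sizeof msg.first_field, 10) would store "    10" into the question's first_field (msg here being an instance of the question's struct).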
Within your indicated constraints this may improve on your own answer:
#include <algorithm>
#include <cstddef>
#include <cassert>
inline void
unsigned_to_decimal( unsigned long number, char* buffer, std::size_t size)
{
std::size_t i = size;
buffer[size - 1] = '0';
for (unsigned long n ;(n = number) > 0 && i > 0 ;) {
buffer[--i] = '0' + n - (10 * (number /= 10));
}
assert(number == 0);
std::fill(buffer,buffer + (i - (i == size)),' ');
}
For a demo, append:
#include <iostream>
#include <string>
#include <climits>
#include <array>
constexpr std::size_t decimal_digits(unsigned long n)
{
return n / 10 > 0 ? 1 + decimal_digits(n / 10) : 1;
}
int main()
{
const std::size_t max_digits = decimal_digits(ULONG_MAX);
std::cout << "Print decimal 0, UINT_MAX, ULONG_MAX "
"from left-padded char buffer, size " << max_digits << ":-\n";
for (auto ul : std::array<unsigned long,3>{0,UINT_MAX,ULONG_MAX}) {
char buf[max_digits];
std::fill(buf,buf + max_digits,'?');
std::cout << '[' << std::string(buf,buf + max_digits) << "]\n";
unsigned_to_decimal(ul,buf,max_digits);
std::cout << '[' << std::string(buf,buf + max_digits) << "]\n";
}
return 0;
}
which runs like:
Print decimal 0, UINT_MAX, ULONG_MAX from left-padded char buffer, size 20:-
[????????????????????]
[ 0]
[????????????????????]
[ 4294967295]
[????????????????????]
[18446744073709551615]
(g++ -Wall -std=c++11 -pedantic, GCC 5.3.1)
I just wonder if there is a more elegant way to do it.
Andrei R. already showed an elegant way, so I won't repeat it, but I'll answer your other questions.
Also, there's a bit of a problem: I think snprintf will also add a terminating character, and I have to avoid that.
There is no way to avoid writing the terminating null with snprintf.
I could have an intermediary buffer and copy that one into my data, but it would add additional complexity.
That is the typical way to achieve what you want. This approach can also be taken using the result of a std::ostringstream as the intermediary buffer.
Depending on what you intend to use the non-null-terminated string for, there might be other approaches.
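To make the intermediary-buffer idea concrete, here is a minimal sketch (not from the original answer): snprintf writes into a local buffer that has room for its terminating null, and only the characters, without the null, are copied into the fixed-width field. The Message struct mirrors the one in the question; set_first_field is a made-up helper name:
#include <cstdio>
#include <cstring>

struct Message
{
    char first_field[6];
    char second_field[8];
    char data_field[12];
};

void set_first_field(Message& msg, int value)
{
    char tmp[sizeof msg.first_field + 1];                           // +1 for snprintf's null
    std::snprintf(tmp, sizeof tmp, "%*d", (int)sizeof msg.first_field, value);
    std::memcpy(msg.first_field, tmp, sizeof msg.first_field);      // the field itself stays unterminated
}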
For those interested, I rolled my own simplistic solution, which fits my needs for now.
I only handle unsigned values and no locale, but it works for what I need to do.
#include <algorithm> // std::reverse, std::fill, std::rotate
#include <cstddef>
#include <iterator>  // std::distance

inline void unsigned_to_decimal( unsigned long number, char* buffer, std::size_t size)
{
    char* start = buffer;
    char* end = buffer + size;
    if (number == 0)
    {
        *buffer++ = '0';
    }
    else
    {
        while (number != 0)
        {
            *buffer++ = '0' + number % 10;
            number /= 10;
        }
        std::reverse(start, buffer);
    }
    // fill the remaining characters with spaces
    std::fill(buffer, end, ' ');
    // move the padding in front of the digits so the result is left padded
    std::rotate(start, start + std::distance(start, buffer), end);
}

How to put bit sequence into bytes (C/C++)

I have a couple of integers, for example (in binary representation):
00001000, 01111111, 10000000, 00000001
and I need to put them in sequence into an array of bytes (chars), without the leading zeros, like so:
10001111 11110000 0001000
I understand that it must be done by bit shifting with <<, >> and using binary OR |. But I can't find the correct algorithm; can you suggest the best approach?
The integers I need to put there are unsigned long long ints, so the length of one can be anywhere from 1 bit to 8 bytes (64 bits).
You could use a std::bitset:
#include <bitset>
#include <iostream>

int main() {
    unsigned i = 242122534;
    std::bitset<sizeof(i) * 8> bits;
    bits = i;
    std::cout << bits.to_string() << "\n";
}
There are doubtless other ways of doing it, but I would probably go with the simplest:
std::vector<unsigned char> integers; // Has your list of bytes
integers.push_back(0x02);
integers.push_back(0xFF);
integers.push_back(0x00);
integers.push_back(0x10);
integers.push_back(0x01);

std::string str; // Will have your resulting string
for (unsigned int i = 0; i < integers.size(); i++)
    for (int j = 0; j < 8; j++)
        str += ((integers[i] << j) & 0x80 ? "1" : "0");

std::cout << str << "\n";
size_t begin = str.find("1");
if (begin > 0) str.erase(0, begin);
std::cout << str << "\n";
I wrote this up before you mentioned that you were using long ints or whatnot, but that doesn't actually change very much of this. The mask needs to change, and the j loop variable, but otherwise the above should work.
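As a rough illustration of that adjustment (my sketch, not part of the original answer), the same loop for 64-bit values uses a 64-bit mask and 64 iterations per value:
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Sketch: same bit-printing idea as above, adapted for unsigned 64-bit values.
std::string to_bit_string(const std::vector<std::uint64_t>& integers)
{
    std::string str;
    for (std::size_t i = 0; i < integers.size(); i++)
        for (int j = 0; j < 64; j++)
            str += ((integers[i] << j) & 0x8000000000000000ULL ? "1" : "0");
    std::size_t begin = str.find("1");
    if (begin != std::string::npos)
        str.erase(0, begin);            // strip the leading zeros
    return str;
}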
Convert them to strings, then erase all leading zeros:
#include <iostream>
#include <sstream>
#include <string>
#include <cstdint>

std::string to_bin(uint64_t v)
{
    std::stringstream ss;
    for (size_t x = 0; x < 64; ++x)
    {
        if (v & 0x8000000000000000)
            ss << "1";
        else
            ss << "0";
        v <<= 1;
    }
    return ss.str();
}

void trim_right(std::string& in)
{
    size_t non_zero = in.find_first_not_of("0");
    if (std::string::npos != non_zero)
        in.erase(in.begin(), in.begin() + non_zero);
    else
    {
        // no 1 in data set, what to do?
        in = "<no data>";
    }
}

int main()
{
    uint64_t v1 = 437148234;
    uint64_t v2 = 1;
    uint64_t v3 = 0;
    std::string v1s = to_bin(v1);
    std::string v2s = to_bin(v2);
    std::string v3s = to_bin(v3);
    trim_right(v1s);
    trim_right(v2s);
    trim_right(v3s);
    std::cout << v1s << "\n"
              << v2s << "\n"
              << v3s << "\n";
    return 0;
}
A simple approach would be having the "current byte" (acc in the following), the associated number of used bits in it (bitcount) and a vector of fully processed bytes (output):
int acc = 0;
int bitcount = 0;
std::vector<unsigned char> output;

void writeBits(int size, unsigned long long x)
{
    while (size > 0)
    {
        // sz = how many bits we're about to copy
        int sz = size;
        // max avail space in acc
        if (sz > 8 - bitcount) sz = 8 - bitcount;
        // get the bits
        acc |= ((x >> (size - sz)) << (8 - bitcount - sz));
        // zero them off in x (1ULL so the mask is computed in 64 bits)
        x &= (1ULL << (size - sz)) - 1;
        // acc got bigger and x got smaller
        bitcount += sz;
        size -= sz;
        if (bitcount == 8)
        {
            // got a full byte!
            output.push_back(acc);
            acc = bitcount = 0;
        }
    }
}

void writeNumber(unsigned long long x)
{
    // How big is it?
    int size = 0;
    while (size < 64 && x >= (1ULL << size))
        size++;
    writeBits(size, x);
}
Note that at the end of the processing you should check whether there are any bits still in the accumulator (bitcount > 0) and, if so, flush them by doing output.push_back(acc);.
Note also that if speed is an issue, then using a bigger accumulator is probably a good idea (however, the output will then depend on machine endianness), and that discovering how many bits are used in a number can be made much faster than a linear search in C++ (for example, x86 has a special machine-language instruction, BSR, dedicated to this).
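A short usage sketch (my addition, assuming the acc/bitcount/output globals and the writeBits/writeNumber functions above): it packs the four example values from the question and flushes the partially filled accumulator at the end, as just described:
#include <cstdio>

int main()
{
    // the question's example values: 00001000, 01111111, 10000000, 00000001
    writeNumber(0x08);
    writeNumber(0x7F);
    writeNumber(0x80);
    writeNumber(0x01);
    if (bitcount > 0)               // flush whatever is still sitting in the accumulator
        output.push_back(acc);
    for (unsigned char byte : output)
        std::printf("%02X ", byte);
    std::printf("\n");
}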

base32 conversion in C++

Does anybody know of a commonly used library for C++ that provides methods for encoding and decoding numbers from base 10 to base 32 and vice versa?
Thanks,
Stefano
[Updated] Apparently, the C++ std::setbase() IO manipulator and the normal << and >> IO operators only handle bases 8, 10, and 16, and are therefore useless for handling base 32.
So to solve your issue of converting
strings with a base 10/32 representation of numbers read from some input into integers in the program
integers in the program into strings with base 10/32 representations to be output
you will need to resort to other functions.
For converting C style strings containing base 2..36 representations to integers, you can use #include <cstdlib> and use the strtol(3) & Co. set of functions.
As for converting integers to strings with arbitrary base... I cannot find an easy answer. printf(3) style format strings only handle bases 8,10,16 AFAICS, just like std::setbase. Anyone?
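For completeness, here is a small hand-rolled sketch of that missing direction (integer to string in an arbitrary base, 2 to 36, with digits 0-9 then a-z; to_base is a made-up name):
#include <algorithm>
#include <string>

std::string to_base(unsigned long long value, unsigned base)
{
    static const char digits[] = "0123456789abcdefghijklmnopqrstuvwxyz";
    if (base < 2 || base > 36)
        return "";
    std::string out;
    do {
        out += digits[value % base];
        value /= base;
    } while (value > 0);
    std::reverse(out.begin(), out.end());   // digits were produced least significant first
    return out;
}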
Did you mean "base 10 to base 32", rather than integer to base32? The latter seems more likely and more useful; by default standard formatted I/O functions generate base 10 string format when dealing with integers.
For the base 32 to integer conversion the standard library strtol() function will do that. For the reciprocal, you don't need a library for something you can easily implement yourself (not everything is a lego brick).
Here's an example; not necessarily the most efficient, but simple:
#include <cstdlib>   // strtol
#include <string>

long b32tol( std::string b32 )
{
    return strtol( b32.c_str(), 0, 32 ) ;
}

std::string itob32( long i )
{
    unsigned long u = *reinterpret_cast<unsigned long*>( &i ) ;
    std::string b32 ;
    do
    {
        int d = u % 32 ;
        if( d < 10 )
        {
            b32.insert( 0, 1, '0' + d ) ;
        }
        else
        {
            b32.insert( 0, 1, 'a' + d - 10 ) ;
        }
        u /= 32 ;
    } while( u > 0 );
    return b32 ;
}
#include <iostream>
int main()
{
    long i = 32*32*11 + 32*20 + 5 ;   // BK5 in base 32
    std::string b32 = itob32( i ) ;
    long ii = b32tol( b32 ) ;
    std::cout << i << std::endl ;     // Original
    std::cout << b32 << std::endl ;   // Converted to b32
    std::cout << ii << std::endl ;    // Converted back
    return 0 ;
}
In direct answer to the original (and now old) question, I don't know of any common library for encoding byte arrays in base32, or for decoding them again afterward. However, I was presented last week with a need to decode SHA1 hash values represented in base32 into their original byte arrays. Here's some C++ code (with some notable Windows/little endian artifacts) that I wrote to do just that, and to verify the results.
Note that in contrast with Clifford's code above, which, if I'm not mistaken, assumes the "base32hex" alphabet mentioned in RFC 4648, my code assumes the "base32" alphabet ("A-Z" and "2-7").
// This program illustrates how SHA1 hash values in base32 encoded form can be decoded
// and then re-encoded in base16.
#include "stdafx.h"
#include <string>
#include <vector>
#include <iostream>
#include <cassert>
using namespace std;
unsigned char Base16EncodeNibble( unsigned char value )
{
if( value >= 0 && value <= 9 )
return value + 48;
else if( value >= 10 && value <= 15 )
return (value-10) + 65;
else //assert(false);
{
cout << "Error: trying to convert value: " << value << endl;
}
return 42; // sentinel for error condition
}
void Base32DecodeBase16Encode(const string & input, string & output)
{
// Here's the base32 decoding:
// The "Base 32 Encoding" section of http://tools.ietf.org/html/rfc4648#page-8
// shows that every 8 bytes of base32 encoded data must be translated back into 5 bytes
// of original data during a decoding process. The following code does this.
int input_len = input.length();
assert( input_len == 32 );
const char * input_str = input.c_str();
int output_len = (input_len*5)/8;
assert( output_len == 20 );
// Because input strings are assumed to be SHA1 hash values in base32, it is also assumed
// that they will be 32 characters (and bytes in this case) in length, and so the output
// string should be 20 bytes in length.
unsigned char *output_str = new unsigned char[output_len];
char curr_char, temp_char;
long long temp_buffer = 0; //formerly: __int64 temp_buffer = 0;
for( int i=0; i<input_len; i++ )
{
curr_char = input_str[i];
if( curr_char >= 'A' && curr_char <= 'Z' )
temp_char = curr_char - 'A';
if( curr_char >= '2' && curr_char <= '7' )
temp_char = curr_char - '2' + 26;
if( temp_buffer )
temp_buffer <<= 5; //temp_buffer = (temp_buffer << 5);
temp_buffer |= temp_char;
// if 8 encoded characters have been decoded into the temp location,
// then copy them to the appropriate section of the final decoded location
if( (i>0) && !((i+1) % 8) )
{
unsigned char * source = reinterpret_cast<unsigned char*>(&temp_buffer);
//strncpy(output_str+(5*(((i+1)/8)-1)), source, 5);
int start_index = 5*(((i+1)/8)-1);
int copy_index = 4;
for( int x=start_index; x<(start_index+5); x++, copy_index-- )
output_str[x] = source[copy_index];
temp_buffer = 0;
// I could be mistaken, but I'm guessing that the necessity of copying
// in "reverse" order results from temp_buffer's little endian byte order.
}
}
// Here's the base16 encoding (for human-readable output and the chosen validation tests):
// The "Base 16 Encoding" section of http://tools.ietf.org/html/rfc4648#page-10
// shows that every byte original data must be encoded as two characters from the
// base16 alphabet - one charactor for the original byte's high nibble, and one for
// its low nibble.
unsigned char out_temp, chr_temp;
for( int y=0; y<output_len; y++ )
{
out_temp = Base16EncodeNibble( output_str[y] >> 4 ); //encode the high nibble
output.append( 1, static_cast<char>(out_temp) );
out_temp = Base16EncodeNibble( output_str[y] & 0xF ); //encode the low nibble
output.append( 1, static_cast<char>(out_temp) );
}
delete [] output_str;
}
int _tmain(int argc, _TCHAR* argv[])
{
//string input = "J3WEDSJDRMJHE2FUHERUR6YWLGE3USRH";
vector<string> input_b32_strings, output_b16_strings, expected_b16_strings;
input_b32_strings.push_back("J3WEDSJDRMJHE2FUHERUR6YWLGE3USRH");
expected_b16_strings.push_back("4EEC41C9238B127268B4392348FB165989BA4A27");
input_b32_strings.push_back("2HPUCIVW2EVBANIWCXOIQZX6N5NDIUSX");
expected_b16_strings.push_back("D1DF4122B6D12A10351615DC8866FE6F5A345257");
input_b32_strings.push_back("U4BDNCBAQFCPVDBL4FBG3AANGWVESI5J");
expected_b16_strings.push_back("A7023688208144FA8C2BE1426D800D35AA4923A9");
// Use the base conversion tool at http://darkfader.net/toolbox/convert/
// to verify that the above base32/base16 pairs are equivalent.
int num_input_strs = input_b32_strings.size();
for(int i=0; i<num_input_strs; i++)
{
string temp;
Base32DecodeBase16Encode(input_b32_strings[i], temp);
output_b16_strings.push_back(temp);
}
for(int j=0; j<num_input_strs; j++)
{
cout << input_b32_strings[j] << endl;
cout << output_b16_strings[j] << endl;
cout << expected_b16_strings[j] << endl;
if( output_b16_strings[j] != expected_b16_strings[j] )
{
cout << "Error in conversion for string " << j << endl;
}
}
return 0;
}
I'm not aware of any commonly-used library devoted to base32 encoding but Crypto++ includes a public domain base32 encoder and decoder.
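If you go with Crypto++, its usual pipelining style looks roughly like the sketch below. This is my illustration rather than something from the original answer; the header paths depend on how Crypto++ is installed, and as far as I recall its Base32Encoder does not use the RFC 4648 alphabet by default, so check the documentation if you need a specific alphabet:
#include <iostream>
#include <string>
#include <cryptopp/base32.h>    // Base32Encoder, Base32Decoder
#include <cryptopp/filters.h>   // StringSource, StringSink

int main()
{
    std::string data = "hello world", encoded, decoded;
    // Encode: pump the bytes through a Base32Encoder into a string sink.
    CryptoPP::StringSource ss1(data, true,
        new CryptoPP::Base32Encoder(new CryptoPP::StringSink(encoded)));
    // Decode back again.
    CryptoPP::StringSource ss2(encoded, true,
        new CryptoPP::Base32Decoder(new CryptoPP::StringSink(decoded)));
    std::cout << encoded << '\n' << decoded << '\n';
}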
I don't use C++, so correct me if I'm wrong. I wrote this code by translating it from C# to save my acquaintance the trouble. The original source that I used to create these methods is in a different post here on Stack Overflow:
https://stackoverflow.com/a/10981113/13766753
That being said, here's my solution:
#include <iostream>
#include <string>   // std::string
#include <cstdlib>  // abs
#include <math.h>

class Base32 {
public:
    static std::string dict;

    static std::string encode(int number) {
        std::string result = "";
        bool negative = false;
        if (number < 0) {
            negative = true;
        }
        number = abs(number);
        do {
            result = Base32::dict[fmod(floor(number), 32)] + result;
            number /= 32;
        } while (number > 0);
        if (negative) {
            result = "-" + result;
        }
        return result;
    }

    static int decode(std::string str) {
        int result = 0;
        int negative = 1;
        if (str.rfind("-", 0) == 0) {
            negative = -1;
            str = str.substr(1);
        }
        for (char& letter : str) {
            result += Base32::dict.find(letter);
            result *= 32;
        }
        return result / 32 * negative;
    }
};

std::string Base32::dict = "0123456789abcdefghijklmnopqrstuvwxyz";

int main() {
    std::cout << Base32::encode(0) + "\n" << Base32::decode(Base32::encode(0)) << "\n";
    return 0;
}