Why is the function displaying the hex code in reverse order? - c++

The following code (in C++) is supposed to take some data along with its size (in bytes) and return a string containing the hexadecimal representation. size is the size of the memory block whose location is stored in val.
std::string byteToHexString(const unsigned char* val, unsigned long long size)
{
    unsigned char temp;
    std::string vf;
    vf.resize(2 * size + 1);
    for (unsigned long long i = 0; i < size; i++)
    {
        temp = val[i] / 16;
        vf[2*i] = (temp <= 9) ? '0' + temp : 'A' + temp - 10; // i.e., (10 = 9 + 1)
        temp = val[i] % 16;
        vf[2*i+1] = (temp <= 9) ? '0' + temp : 'A' + temp - 10; // i.e., (10 = 9 + 1)
    }
    vf[2*size] = '\0';
    return (vf);
}
So on executing the above function the following way:
int main()
{
    unsigned int a = 5555;
    std::cout << byteToHexString((unsigned char*)(&a), 4);
    return 0;
}
The output we obtain is:
B3150000
Shouldn't the output rather be 000015B3? So why is this displayed in reverse order? Is there something wrong with the code (I am using the g++ compiler on Ubuntu)?

You are seeing the order in which bytes are stored to represent integers on your architecture, which happens to be little-endian. That means the least-significant byte comes first.
If you want to display it in normal numeric form, you either need to detect the endianness of your architecture and switch the code accordingly, or just use a string stream:
#include <iomanip>   // std::hex, std::setw, std::setfill
#include <sstream>   // std::ostringstream

unsigned int a = 5555;
std::ostringstream ss;
ss << std::setfill('0') << std::setw(sizeof(a) * 2) << std::hex << a;
std::cout << ss.str() << std::endl;
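If you prefer the other route the answer mentions (detecting the byte order yourself), here is a minimal sketch of that idea; the helper names and the runtime probe are mine, not part of the answer:

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <string>

// Probe the byte order at runtime: on a little-endian machine the low byte of 1 comes first.
static bool is_little_endian()
{
    const std::uint16_t probe = 1;
    return *reinterpret_cast<const unsigned char*>(&probe) == 1;
}

// Emit the bytes most-significant first, regardless of how they are laid out in memory.
std::string bytesToHexMSBFirst(const unsigned char* val, std::size_t size)
{
    static const char digits[] = "0123456789ABCDEF";
    std::string out;
    out.reserve(2 * size);
    for (std::size_t i = 0; i < size; ++i)
    {
        unsigned char b = is_little_endian() ? val[size - 1 - i] : val[i];
        out += digits[b >> 4];
        out += digits[b & 0x0F];
    }
    return out;
}

int main()
{
    unsigned int a = 5555;
    std::cout << bytesToHexMSBFirst(reinterpret_cast<unsigned char*>(&a), sizeof a); // 000015B3
    return 0;
}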

Related

Store string as hex without converting

I have a string value:
string str = "2018";
Now I have to store it in an unsigned char array as a hex-looking representation, without really converting the value to hex:
unsigned char data [2]; //[0x20,0x18]
If I do it this way
data[0] = 0x20;
data[1] = 0x18;
It works, but my input is a string; how can I do this?
Edit
If my input is an unsigned char instead of a string, like
unsigned char y1 = 20;
unsigned char y2 = 18;
Is there any better way?
A brief search turned up the function QString::toInt(bool *, int), which can be useful for your intent.
Basically you could:
bool ok;
if (str.size() % 2 == 1) {
    str.prepend('0');
}
for (int i = 0; i < str.size() / 2; i++) {
    data[i] = str.mid(2 * i, 2).toInt(&ok, 16); // assumes str is a QString
}
I did not try this code; there is surely a better way to extract the substring, and probably a more efficient way than iterating over it.
Perhaps you could try something like this:
#include <cstdio>
#include <iostream>
#include <string>

int main()
{
    std::string s = "2018";
    unsigned i;
    std::sscanf(s.c_str(), "%04x", &i);
    unsigned char data[2];
    data[0] = i >> 8;
    data[1] = i;
    std::cout << std::hex << (int)data[0] << " " << (int)data[1] << std::endl;
    return 0;
}
https://ideone.com/SyYKUl
Prints:
20 18
If you can assume the string to have 4 digits, you can convert it to BCD format simply and efficiently this way:
void convert_to_bcd4(unsigned char *data, const char *str) {
    data[0] = (str[0] - '0') * 16 + (str[1] - '0');
    data[1] = (str[2] - '0') * 16 + (str[3] - '0');
}
You can complete the conversion of "2018" to 0x20 0x18 with a hex-string-to-binary converter. For example, sscanf with the "%x" conversion will do this; it typically yields an int, and you can extract the byte values from the int in the normal way. (This method does not check for errors.)
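For the edit in the question (inputs already held as unsigned char values such as 20 and 18), a minimal sketch of packing each decimal value into a BCD byte could look like this; the function name is mine:

#include <cstdio>

// 20 -> 0x20, 18 -> 0x18: the tens digit goes in the high nibble, the units digit in the low nibble.
unsigned char to_bcd(unsigned char decimal)
{
    return static_cast<unsigned char>(((decimal / 10) << 4) | (decimal % 10));
}

int main()
{
    unsigned char y1 = 20, y2 = 18;
    unsigned char data[2] = { to_bcd(y1), to_bcd(y2) };
    std::printf("%02X %02X\n", data[0], data[1]); // prints: 20 18
    return 0;
}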

Using c++ is it possible to convert an Ascii character to Hex?

I have written a program that sets up a client/server TCP socket over which the user sends an integer value to the server through the use of a terminal interface. On the server side I am executing byte commands for which I need hex values stored in my array.
sprintf(mychararray, "%X", myintvalue);
This code takes my integer and prints it as a hex value into a char array. The only problem is that when I use that array to set my commands, it registers as ASCII chars. So, for example, if I send an integer equal to 3000, it is converted to 0x0BB8 and then stored as 'B' 'B' '8', which corresponds to 42 42 38 in hex. I have looked all over the place for a solution, and have not been able to come up with one.
Finally came up with a solution to my problem. First I created an array and stored all hex values from 0x00 to 0xFF in it.
char m_list[256]; // array defined in class
m_list[0] = 0x00; // set first array index to zero
int count = 1;    // count variable to step through the array and set members
while (count < 256)
{
    m_list[count] = m_list[count - 1] + 0x01; // populate array with hex from 0x00 - 0xFF
    count++;
}
Next I created a function that lets me group my hex values into individual bytes and store them into the array that will be processing my command.
void parse_input(char hex_array[], int i, char ans_array[])
{
int n = 0;
int j = 0;
int idx = 0;
string hex_values;
while (n < i-1)
{
if (hex_array[n] == '\0')
{
hex_values = '0';
}
else
{
hex_values = hex_array[n];
}
if (hex_array[n+1] == '\0')
{
hex_values += '0';
}
else
{
hex_values += hex_array[n+1];
}
cout<<"This is the string being used in stoi: "<<hex_values; //statement for testing
idx = stoul(hex_values, nullptr, 16);
ans_array[j] = m_list[idx];
n = n + 2;
j++;
}
}
This function will be called right after my previous code.
sprintf(mychararray, "%X", myintvalue);
parse_input(arrayA, sizeof(arrayA), arrayB);
Example: arrayA is an 8-byte char array, and arrayB is a 4-byte char array. arrayA should be double the size of arrayB, since you are taking two ASCII values and making one byte pair, e.g. 'A' 'B' = 0xAB.
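A shorter sketch of the same idea (mine, not the asker's code): each pair of ASCII hex digits can be turned into one byte with std::stoi in base 16, with no lookup table:

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

std::vector<unsigned char> hex_string_to_bytes(const std::string& hex)
{
    std::vector<unsigned char> bytes;
    for (std::size_t i = 0; i + 1 < hex.size(); i += 2)
    {
        // std::stoi with base 16 parses the two-character substring, e.g. "BB" -> 0xBB.
        bytes.push_back(static_cast<unsigned char>(std::stoi(hex.substr(i, 2), nullptr, 16)));
    }
    return bytes;
}

int main()
{
    for (unsigned char b : hex_string_to_bytes("0BB8"))
        std::cout << std::hex << static_cast<int>(b) << ' '; // prints: b b8
    return 0;
}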
While I was trying to understand your question, I realized that what you needed was more than a single variable. You needed a class, because you wished to have both a string that represents the hex code to be printed out and the number itself in the form of an unsigned 16-bit integer, which I deduced would be something like unsigned short int. So I created a class named hexset that does all this for you (I got the idea from bitset):
#include <iostream>
#include <string>
class hexset {
public:
hexset(int num) {
this->hexnum = (unsigned short int) num;
this->hexstring = hexset::to_string(num);
}
unsigned short int get_hexnum() {return this->hexnum;}
std::string get_hexstring() {return this->hexstring;}
private:
static std::string to_string(int decimal) {
int length = int_length(decimal);
std::string ret = "";
for (int i = (length > 1 ? int_length(decimal) - 1 : length); i >= 0; i--) {
ret = hex_arr[decimal%16]+ret;
decimal /= 16;
}
if (ret[0] == '0') {
ret = ret.substr(1,ret.length()-1);
}
return "0x"+ret;
}
static int int_length(int num) {
int ret = 1;
while (num > 10) {
num/=10;
++ret;
}
return ret;
}
static constexpr char hex_arr[16] = {'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'};
unsigned short int hexnum;
std::string hexstring;
};
constexpr char hexset::hex_arr[16];
int main() {
int number_from_file = 3000; // This number is in all forms technically, hex is just another way to represent this number.
hexset hex(number_from_file);
std::cout << hex.get_hexstring() << ' ' << hex.get_hexnum() << std::endl;
return 0;
}
I assume you'll probably want to do some operator overloading to make it so you can add and subtract from this number or assign new numbers or do any kind of mathematical or bit shift operation.
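As a rough sketch of that suggestion (mine, not part of the answer), non-member operators could work on the stored number and rebuild the hexset; this assumes the hexset class above with its int constructor and get_hexnum():

// Sketch only: arithmetic on the numeric value, then reconstruct so the string stays in sync.
hexset operator+(hexset lhs, hexset rhs) {
    return hexset(lhs.get_hexnum() + rhs.get_hexnum());
}
hexset operator-(hexset lhs, hexset rhs) {
    return hexset(lhs.get_hexnum() - rhs.get_hexnum());
}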

reading binary from a file gives negative number

Hey everyone, this may turn out to be a simple, stupid question, but it has been giving me headaches for a while now. I'm reading data from a Named Binary Tag file, and the code is working except when I try to read big-endian numbers. The code that gets an integer looks like this:
long NBTTypes::getInteger(istream &in, int num_bytes, bool isBigEndian)
{
    long result = 0;
    char buff[8];
    //get bytes
    readData(in, buff, num_bytes, isBigEndian);
    //convert to integer
    cout << "Converting bytes to integer..." << endl;
    result = buff[0];
    cout << "Result starts at " << result << endl;
    for(int i = 1; i < num_bytes; ++i)
    {
        result = (result << 8) | buff[i];
        cout << "Result is now " << result << endl;
    }
    cout << "Done." << endl;
    return result;
}
And the readData function:
void NBTTypes::readData(istream &in, char *buffer, unsigned long num_bytes, bool BE)
{
    char hold;
    //get data
    in.read(buffer, num_bytes);
    if(BE)
    {
        //convert to little-endian
        cout << "Converting to a little-endian number..." << endl;
        for(unsigned long i = 0; i < num_bytes / 2; ++i)
        {
            hold = buffer[i];
            buffer[i] = buffer[num_bytes - i - 1];
            buffer[num_bytes - i - 1] = hold;
        }
        cout << "Done." << endl;
    }
}
This code originally worked (gave correct positive values), but now for whatever reason the values I get are either over or underflowing. What am I missing?
Your byte order swapping is fine, however building the integer from the sequences of bytes is not.
First of all, you get the endianness wrong: the first byte you read in becomes the most significant byte, while it should be the other way around.
Then, when OR-ing in the characters from the array, be aware that they are promoted to an int, which, for a signed char, sets a lot of additional bits unless you mask them out.
Finally, when long is wider than num_bytes, you need to sign-extend the bits.
This code works:
union {
    long s;          // Signed result
    unsigned long u; // Use unsigned for safe bit-shifting
} result;
int i = num_bytes - 1;
if (buff[i] & 0x80)
    result.s = -1;   // sign-extend
else
    result.s = 0;
for (; i >= 0; --i)
    result.u = (result.u << 8) | (0xff & buff[i]);
return result.s;
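An alternative sketch (mine, not from the answer) that addresses the same three points without the union: accumulate into an unsigned long and sign-extend explicitly at the end.

#include <iostream>

long bytes_to_long_le(const unsigned char* buff, int num_bytes)
{
    unsigned long u = 0;
    for (int i = num_bytes - 1; i >= 0; --i)
        u = (u << 8) | buff[i]; // buff[0] is the least significant byte
    // Manual sign extension when the value occupies fewer bytes than a long.
    if (num_bytes < static_cast<int>(sizeof(long)) && (buff[num_bytes - 1] & 0x80))
        u |= ~0UL << (8 * num_bytes);
    return static_cast<long>(u);
}

int main()
{
    unsigned char neg_one[4] = { 0xFF, 0xFF, 0xFF, 0xFF };
    std::cout << bytes_to_long_le(neg_one, 4) << std::endl; // prints -1
    return 0;
}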

Convert integer to binary and store it in an integer array of specified size:c++

I want to convert an integer to a binary string and then store each bit of that string in an element of an integer array of a given size. I am sure that the input integer's binary representation won't exceed the size of the specified array. How can I do this in C++?
Pseudo code:
int value = ???? // assuming a 32 bit int
int i;
for (i = 0; i < 32; ++i) {
array[i] = (value >> i) & 1;
}
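A compilable version of the pseudo code above (the concrete value and variable names are mine):

#include <iostream>

int main()
{
    int value = 413523152; // assuming a 32 bit int
    int array[32];
    for (int i = 0; i < 32; ++i)
        array[i] = (value >> i) & 1; // array[0] holds the least significant bit
    for (int i = 31; i >= 0; --i)    // print most significant bit first
        std::cout << array[i];
    std::cout << std::endl;
    return 0;
}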
#include <climits>  // CHAR_BIT
#include <iostream>
#include <iterator> // std::begin, std::end

template<class output_iterator>
void convert_number_to_array_of_digits(const unsigned number,
                                       output_iterator first, output_iterator last)
{
    const unsigned number_bits = CHAR_BIT * sizeof(int);
    //extract bits one at a time
    for(unsigned i = 0; i < number_bits && first != last; ++i) {
        const unsigned shift_amount = number_bits - i - 1;
        const unsigned this_bit = (number >> shift_amount) & 1;
        *first = this_bit;
        ++first;
    }
    //pad the rest with zeros
    while(first != last) {
        *first = 0;
        ++first;
    }
}
int main() {
    int number = 413523152;
    int array[32];
    convert_number_to_array_of_digits(number, std::begin(array), std::end(array));
    for(int i = 0; i < 32; ++i)
        std::cout << array[i] << ' ';
}
Proof of compilation here
You could use C++'s bitset library, as follows.
#include <iostream>
#include <bitset>
int main()
{
    int N; // input number in base 10
    std::cin >> N;
    int O[32]; // The output array
    std::bitset<32> A = N; // A will hold the binary representation of N
    for(int i = 0, j = 31; i < 32; i++, j--)
    {
        // Assigning the bits one by one.
        O[i] = A[j];
    }
    return 0;
}
A couple of points to note here:
First, 32 in the bitset declaration statement tells the compiler that you want 32 bits to represent your number, so even if your number takes fewer bits to represent, the bitset variable will have 32 bits, possibly with many leading zeroes.
Second, bitset is a really flexible way of handling binary: you can give it either a string or a number as input, and you can use the bitset as an array or as a string. It's a really handy library.
You can print out the bitset variable A as
std::cout << A;
and see how it works.
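A small sketch of that flexibility (mine, not from the answer):

#include <bitset>
#include <iostream>
#include <string>

int main()
{
    std::bitset<8> from_number(42);         // built from a number: 00101010
    std::bitset<8> from_string("00101010"); // the same value, built from a string
    std::cout << from_number.to_string() << std::endl; // read back as a string
    std::cout << from_string.to_ulong() << std::endl;  // read back as a number: 42
    std::cout << from_number[1] << std::endl;          // indexed like an array: bit 1 is 1
    return 0;
}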
You can do it like this:
while (input != 0) {
    if (input & 1)
        result[index] = 1;
    else
        result[index] = 0;
    input >>= 1; // dividing by two
    index++;
}
As Mat mentioned above, an int is already a bit-vector (using bitwise operations, you can check each bit). So, you can simply try something like this:
#include <climits> // CHAR_BIT

// Note: This depends on the endianness of your machine
int x = 0xdeadbeef; // Your integer?
int arr[sizeof(int) * CHAR_BIT];
for(int i = 0; i < sizeof(int) * CHAR_BIT; ++i) {
    arr[i] = (x & (0x01 << i)) ? 1 : 0; // Take the i-th bit
}
Decimal to Binary: Size independent
Two ways: both store the binary representation into a dynamically allocated array bits (MSB to LSB).
First Method:
#include <limits.h> // include for CHAR_BIT
#include <stdlib.h> // include for calloc

int* binary(int dec){
    int* bits = (int*) calloc(sizeof(int) * CHAR_BIT, sizeof(int));
    if(bits == NULL) return NULL;
    int i = 0;
    // conversion
    int left = sizeof(int) * CHAR_BIT - 1;
    for(i = 0; left >= 0; left--, i++){
        bits[i] = !!(dec & ( 1u << left ));
    }
    return bits;
}
Second Method:
#include <limits.h> // include for CHAR_BIT
#include <stdlib.h> // include for calloc

int* binary(unsigned int num)
{
    unsigned int mask = 1u << ((sizeof(int) * CHAR_BIT) - 1);
    // mask = 1000 0000 0000 0000 ...
    int* bits = (int*) calloc(sizeof(int) * CHAR_BIT, sizeof(int));
    if(bits == NULL) return NULL;
    int i = 0;
    // conversion
    while(mask > 0){
        if((num & mask) == 0)
            bits[i] = 0;
        else
            bits[i] = 1;
        mask = mask >> 1; // Right Shift
        i++;
    }
    return bits;
}
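A usage sketch for either binary() function above (mine, not from the answer): print the bits and release the array afterwards.

#include <climits>
#include <cstdio>
#include <cstdlib>

int main()
{
    int* bits = binary(5); // one of the binary() functions above must be in scope
    if (bits == NULL) return 1;
    for (unsigned i = 0; i < sizeof(int) * CHAR_BIT; i++)
        std::printf("%d", bits[i]); // MSB first, e.g. ...00000101
    std::printf("\n");
    std::free(bits); // the caller owns the calloc'd array
    return 0;
}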
I know it doesn't add as many zeros as you might wish for positive numbers, but for negative binary numbers it works pretty well. I just wanted to post a solution for once :)
int BinToDec(int Value, int Padding = 8)
{
    int Bin = 0;
    for (int I = 1, Pos = 1; I < (Padding + 1); ++I, Pos *= 10)
    {
        Bin += ((Value >> (I - 1)) & 1) * Pos;
    }
    return Bin;
}
This is what I use; it also lets you give the number of bits that will be in the final vector, and fills any unused bits with leading 0s.
std::vector<int> to_binary(int num_to_convert_to_binary, int num_bits_in_out_vec)
{
    std::vector<int> r;
    // make binary vec of minimum size, LSB first (LSB at .begin() and MSB at .end())
    while (num_to_convert_to_binary > 0)
    {
        //cout << " top of loop" << endl;
        if (num_to_convert_to_binary % 2 == 0)
            r.push_back(0);
        else
            r.push_back(1);
        num_to_convert_to_binary = num_to_convert_to_binary / 2;
    }
    while ((int)r.size() < num_bits_in_out_vec)
        r.push_back(0);
    return r;
}

base32 conversion in C++

Does anybody know of a commonly used library for C++ that provides methods for encoding and decoding numbers from base 10 to base 32 and vice versa?
Thanks,
Stefano
[Updated] Apparently, the C++ std::setbase() IO manipulator and the normal << and >> IO operators only handle bases 8, 10, and 16, and are therefore useless for handling base 32.
So to solve your issue of converting
- strings with base 10/32 representations of numbers read from some input to integers in the program, and
- integers in the program to strings with base 10/32 representations to be output,
you will need to resort to other functions.
For converting C style strings containing base 2..36 representations to integers, you can use #include <cstdlib> and use the strtol(3) & Co. set of functions.
As for converting integers to strings with arbitrary base... I cannot find an easy answer. printf(3) style format strings only handle bases 8,10,16 AFAICS, just like std::setbase. Anyone?
Did you mean "base 10 to base 32", rather than integer to base32? The latter seems more likely and more useful; by default standard formatted I/O functions generate base 10 string format when dealing with integers.
For the base 32 to integer conversion the standard library strtol() function will do that. For the reciprocal, you don't need a library for something you can easily implement yourself (not everything is a lego brick).
Here's an example, not necessarily the most efficient, but simple;
#include <cstdlib> // strtol
#include <cstring>
#include <string>
long b32tol( std::string b32 )
{
return strtol( b32.c_str(), 0, 32 ) ;
}
std::string itob32( long i )
{
unsigned long u = *reinterpret_cast<unsigned long*>( &i ) ;
std::string b32 ;
do
{
int d = u % 32 ;
if( d < 10 )
{
b32.insert( 0, 1, '0' + d ) ;
}
else
{
b32.insert( 0, 1, 'a' + d - 10 ) ;
}
u /= 32 ;
} while( u > 0 );
return b32 ;
}
#include <iostream>
int main()
{
long i = 32*32*11 + 32*20 + 5 ; // BK5 in base 32
std::string b32 = itob32( i ) ;
long ii = b32tol( b32 ) ;
std::cout << i << std::endl ; // Original
std::cout << b32 << std::endl ; // Converted to b32
std::cout << ii << std::endl ; // Converted back
return 0 ;
}
In direct answer to the original (and now old) question, I don't know of any common library for encoding byte arrays in base32, or for decoding them again afterward. However, I was presented last week with a need to decode SHA1 hash values represented in base32 into their original byte arrays. Here's some C++ code (with some notable Windows/little endian artifacts) that I wrote to do just that, and to verify the results.
Note that in contrast with Clifford's code above, which, if I'm not mistaken, assumes the "base32hex" alphabet mentioned on RFC 4648, my code assumes the "base32" alphabet ("A-Z" and "2-7").
// This program illustrates how SHA1 hash values in base32 encoded form can be decoded
// and then re-encoded in base16.
#include "stdafx.h"
#include <string>
#include <vector>
#include <iostream>
#include <cassert>
using namespace std;
unsigned char Base16EncodeNibble( unsigned char value )
{
if( value >= 0 && value <= 9 )
return value + 48;
else if( value >= 10 && value <= 15 )
return (value-10) + 65;
else //assert(false);
{
cout << "Error: trying to convert value: " << value << endl;
}
return 42; // sentinel for error condition
}
void Base32DecodeBase16Encode(const string & input, string & output)
{
// Here's the base32 decoding:
// The "Base 32 Encoding" section of http://tools.ietf.org/html/rfc4648#page-8
// shows that every 8 bytes of base32 encoded data must be translated back into 5 bytes
// of original data during a decoding process. The following code does this.
int input_len = input.length();
assert( input_len == 32 );
const char * input_str = input.c_str();
int output_len = (input_len*5)/8;
assert( output_len == 20 );
// Because input strings are assumed to be SHA1 hash values in base32, it is also assumed
// that they will be 32 characters (and bytes in this case) in length, and so the output
// string should be 20 bytes in length.
unsigned char *output_str = new unsigned char[output_len];
char curr_char, temp_char;
long long temp_buffer = 0; //formerly: __int64 temp_buffer = 0;
for( int i=0; i<input_len; i++ )
{
curr_char = input_str[i];
if( curr_char >= 'A' && curr_char <= 'Z' )
temp_char = curr_char - 'A';
if( curr_char >= '2' && curr_char <= '7' )
temp_char = curr_char - '2' + 26;
if( temp_buffer )
temp_buffer <<= 5; //temp_buffer = (temp_buffer << 5);
temp_buffer |= temp_char;
// if 8 encoded characters have been decoded into the temp location,
// then copy them to the appropriate section of the final decoded location
if( (i>0) && !((i+1) % 8) )
{
unsigned char * source = reinterpret_cast<unsigned char*>(&temp_buffer);
//strncpy(output_str+(5*(((i+1)/8)-1)), source, 5);
int start_index = 5*(((i+1)/8)-1);
int copy_index = 4;
for( int x=start_index; x<(start_index+5); x++, copy_index-- )
output_str[x] = source[copy_index];
temp_buffer = 0;
// I could be mistaken, but I'm guessing that the necessity of copying
// in "reverse" order results from temp_buffer's little endian byte order.
}
}
// Here's the base16 encoding (for human-readable output and the chosen validation tests):
// The "Base 16 Encoding" section of http://tools.ietf.org/html/rfc4648#page-10
// shows that every byte original data must be encoded as two characters from the
// base16 alphabet - one charactor for the original byte's high nibble, and one for
// its low nibble.
unsigned char out_temp, chr_temp;
for( int y=0; y<output_len; y++ )
{
out_temp = Base16EncodeNibble( output_str[y] >> 4 ); //encode the high nibble
output.append( 1, static_cast<char>(out_temp) );
out_temp = Base16EncodeNibble( output_str[y] & 0xF ); //encode the low nibble
output.append( 1, static_cast<char>(out_temp) );
}
delete [] output_str;
}
int _tmain(int argc, _TCHAR* argv[])
{
//string input = "J3WEDSJDRMJHE2FUHERUR6YWLGE3USRH";
vector<string> input_b32_strings, output_b16_strings, expected_b16_strings;
input_b32_strings.push_back("J3WEDSJDRMJHE2FUHERUR6YWLGE3USRH");
expected_b16_strings.push_back("4EEC41C9238B127268B4392348FB165989BA4A27");
input_b32_strings.push_back("2HPUCIVW2EVBANIWCXOIQZX6N5NDIUSX");
expected_b16_strings.push_back("D1DF4122B6D12A10351615DC8866FE6F5A345257");
input_b32_strings.push_back("U4BDNCBAQFCPVDBL4FBG3AANGWVESI5J");
expected_b16_strings.push_back("A7023688208144FA8C2BE1426D800D35AA4923A9");
// Use the base conversion tool at http://darkfader.net/toolbox/convert/
// to verify that the above base32/base16 pairs are equivalent.
int num_input_strs = input_b32_strings.size();
for(int i=0; i<num_input_strs; i++)
{
string temp;
Base32DecodeBase16Encode(input_b32_strings[i], temp);
output_b16_strings.push_back(temp);
}
for(int j=0; j<num_input_strs; j++)
{
cout << input_b32_strings[j] << endl;
cout << output_b16_strings[j] << endl;
cout << expected_b16_strings[j] << endl;
if( output_b16_strings[j] != expected_b16_strings[j] )
{
cout << "Error in conversion for string " << j << endl;
}
}
return 0;
}
I'm not aware of any commonly-used library devoted to base32 encoding but Crypto++ includes a public domain base32 encoder and decoder.
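If you go the Crypto++ route, a hedged sketch of its pipeline style might look like the following; the header paths, the Base32Encoder/Base32Decoder filter names, and the constructor arguments are assumptions on my part, and Crypto++'s default base32 alphabet is not the RFC 4648 one, so check the library documentation:

#include <iostream>
#include <string>
#include <cryptopp/base32.h>  // assumed header for Base32Encoder / Base32Decoder
#include <cryptopp/filters.h> // StringSource, StringSink (assumed header)

int main()
{
    std::string data = "hello", encoded, decoded;

    // Encode: pump the input through a Base32Encoder into a string sink.
    CryptoPP::StringSource enc(data, true,
        new CryptoPP::Base32Encoder(new CryptoPP::StringSink(encoded)));

    // Decode: the reverse pipeline.
    CryptoPP::StringSource dec(encoded, true,
        new CryptoPP::Base32Decoder(new CryptoPP::StringSink(decoded)));

    std::cout << encoded << '\n' << decoded << '\n';
    return 0;
}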
I don't use C++, so correct me if I'm wrong. I wrote this code by translating it from C# to save my acquaintance the trouble. The original source I used to create these methods is in a different post here on Stack Overflow:
https://stackoverflow.com/a/10981113/13766753
That being said, here's my solution:
#include <iostream>
#include <string>  // std::string
#include <cstdlib> // abs
#include <math.h>
class Base32 {
public:
static std::string dict;
static std::string encode(int number) {
std::string result = "";
bool negative = false;
if (number < 0) {
negative = true;
}
number = abs(number);
do {
result = Base32::dict[number % 32] + result; // number is non-negative here, so % is enough
number /= 32;
} while(number > 0);
if (negative) {
result = "-" + result;
}
return result;
}
static int decode(std::string str) {
int result = 0;
int negative = 1;
if (str.rfind("-", 0) == 0) {
negative = -1;
str = str.substr(1);
}
for(char& letter : str) {
result += Base32::dict.find(letter);
result *= 32;
}
return result / 32 * negative;
}
};
std::string Base32::dict = "0123456789abcdefghijklmnopqrstuvwxyz";
int main() {
std::cout << Base32::encode(0) + "\n" << Base32::decode(Base32::encode(0)) << "\n";
return 0;
}