I want to read a MAC address from the command line and convert it to an array of uint8_t values to use in a struct, but I cannot get it to work. I have a vector of strings for the MAC address split on ":" and I am trying to use stringstream to convert them, with no luck. What am I missing?
int parseHex(const string &num){
    stringstream ss(num);
    ss << std::hex;
    int n;
    ss >> n;
    return n;
}
uint8_t tgt_mac[6] = {0, 0, 0, 0, 0, 0};
v = StringSplit( mac , ":" );
for( int j = 0 ; j < v.size() ; j++ ){
    tgt_mac[j] = parseHex(v.at(j));
}
I hate to answer this in this fashion, but sscanf() is probably the most succinct way to parse out a MAC address. It handles zero/non-zero padding, width checking, case folding, and all of that other stuff that no one likes to deal with. Anyway, here's my not so C++ version:
#include <cstdio>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

void
parse_mac(std::vector<uint8_t>& out, std::string const& in) {
    unsigned int bytes[6];
    if (std::sscanf(in.c_str(),
                    "%02x:%02x:%02x:%02x:%02x:%02x",
                    &bytes[0], &bytes[1], &bytes[2],
                    &bytes[3], &bytes[4], &bytes[5]) != 6)
    {
        throw std::runtime_error(in + std::string(" is an invalid MAC address"));
    }
    out.assign(&bytes[0], &bytes[6]);
}
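For what it's worth, a possible call site (my example values, not from the question):
std::vector<uint8_t> mac;
parse_mac(mac, "AA:BB:CC:DD:EE:11"); // mac now holds {0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0x11}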
Your problem may be in the output of the parsed data. The << operator decides how to display data based on the type passed to it, and uint8_t is likely being interpreted as a char. Make sure you cast the array values to ints when printing, or investigate in a debugger.
The sample program:
#include <cstdint>
#include <iostream>
#include <sstream>

int main()
{
    uint8_t tgt_mac[6] = {0};
    std::stringstream ss( "AA:BB:CC:DD:EE:11" );
    char trash;  // swallows the ':' separators
    for ( int i = 0; i < 6; i++ )
    {
        int foo;
        ss >> std::hex >> foo >> trash;
        tgt_mac[i] = foo;
        std::cout << std::hex << "Reading: " << foo << std::endl;
    }
std::cout << "As int array: " << std::hex
<< (int) tgt_mac[0]
<< ":"
<< (int) tgt_mac[1]
<< ":"
<< (int) tgt_mac[2]
<< ":"
<< (int) tgt_mac[3]
<< ":"
<< (int) tgt_mac[4]
<< ":"
<< (int) tgt_mac[5]
<< std::endl;
std::cout << "As unint8_t array: " << std::hex
<< tgt_mac[0]
<< ":"
<< tgt_mac[1]
<< ":"
<< tgt_mac[2]
<< ":"
<< tgt_mac[3]
<< ":"
<< tgt_mac[4]
<< ":"
<< tgt_mac[5]
<< std::endl;
Gives the following output (cygwin g++):
Reading: aa
Reading: bb
Reading: cc
Reading: dd
Reading: ee
Reading: 11
As int array: aa:bb:cc:dd:ee:11
As uint8_t array: ª:»:I:Y:î:◄
First, I want to point out that I think @Steven's answer is a very good one; indeed, I noticed the same thing: the values are correct, but the output looks awkward. This is due to ostream& operator<<( ostream&, unsigned char ) being used, since the uint8_t type you used is a typedef for unsigned char (as I found in the Linux man pages). Note that on VC++, the typedef isn't there, and you have to use unsigned __int8 instead (which will also route you to the char overload).
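To illustrate the workaround (a minimal sketch; the variable b is my own example):
uint8_t b = 0xAA;
std::cout << b << '\n';                   // prints the raw character with code 0xAA
std::cout << static_cast<int>(b) << '\n'; // prints 170 (or aa with std::hex)
std::cout << +b << '\n';                  // unary plus promotes to int, same effect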
Next, you can test your code like this (output-independent):
assert( uint8_t( parseHex( "00" ) ) == uint8_t(0) );
assert( uint8_t( parseHex( "01" ) ) == uint8_t(1) );
//...
assert( uint8_t( parseHex( "ff" ) ) == uint8_t(255) );
In addition to Steven's answer, I just want to point out the existence of the transform algorithm, which could still simplify your code.
for( int j = 0 ; j < v.size() ; j++ ){
    tgt_mac[j] = parseHex(v.at(j));
}
That loop can be written in one line:
std::transform( v.begin(), v.end(), tgt_mac, &parseHex );
(And I know that has nothing to do with the question...)
(See codepad.org for what it then looks like)
I think you are using the std::hex in the wrong place:
#include <sstream>
#include <iostream>

int main()
{
    std::string h("a5");
    std::stringstream s(h);
    int x;
    s >> std::hex >> x;
    std::cout << "X(" << x << ")\n";
}
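Applied to the parseHex from the question, the fix would look like this (a sketch, assuming <sstream> and <string> are included):
int parseHex(const std::string &num){
    std::stringstream ss(num);
    int n = 0;
    ss >> std::hex >> n; // std::hex must modify the extraction, not the insertion
    return n;
}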
So, like the title says, I want to be able to convert between bytes loaded into memory as char* and uints. I have a program that demos some functions that seem to do this, but I am unsure if it is fully compliant with the C++ standard. Is all the casting I am doing legal and well defined? Am I handling sign extension, masking, and truncation correctly? I plan to eventually deploy this code to a variety of different platforms, sometimes with drastically different architectures, and everything I have tried so far seems to imply that this is valid cross-platform code to serialize and deserialize my data, but I am more interested in what the standard says than in whether or not this works on my particular machines. Here's the small test program to demo the conversion functions:
#include <type_traits>
#include <iostream>
#include <iomanip>
#include <cstdint>
template<typename IntType>
IntType toUint( char byte ) {
    static_assert( std::is_integral_v<IntType>, "IntType must be an integral" );
    static_assert( std::is_unsigned_v<IntType>, "IntType must be unsigned" );
    return static_cast<IntType>( byte ) & 0xFF;
}
template<typename IntType>
void printAs( signed char* cString, const int arraySize )
{
    std::cout << "Values: [" << std::endl;
    for( int i = 0; i < arraySize; i++ )
    {
        std::cout << std::dec << std::setfill('0') <<
            std::setw(3) << toUint<IntType>( cString[i] ) <<
            ": " << "0x" << std::uppercase << std::setfill('0') <<
            std::setw(16) << std::hex << toUint<IntType>( cString[i] );
        if( i < (arraySize - 1) )
        {
            std::cout << ", ";
            std::cout << std::endl;
        }
    }
    std::cout << std::endl << "]" << std::endl;
}
template<typename IntType>
IntType cStringToUint( signed char* cString, const int arraySize )
{
    IntType myValue = 0;
    for( int i = 0; i < arraySize; i++ )
    {
        myValue <<= 8;
        myValue |= toUint<IntType>( cString[i] );
    }
    return myValue;
}
template<typename IntType>
void printAsHex( IntType myValue )
{
    std::cout << "0x" << std::uppercase << std::setfill('0') <<
        std::setw(16) << std::hex << myValue << std::endl;
}
int main()
{
    const int arraySize = 9;
    // assume Big Endian
    signed char cString[arraySize] = {-1,2,4,8,16,-32,64,127,-128};
    // convert each byte to a uint and print the value
    printAs<uint64_t>( cString, arraySize );
    // notice this trims leading MSB
    printAsHex( cStringToUint<uint64_t>( cString, arraySize ) );
}
Which gives the following output with my compiler:
Values: [
255: 0x00000000000000FF,
002: 0x0000000000000002,
004: 0x0000000000000004,
008: 0x0000000000000008,
016: 0x0000000000000010,
224: 0x00000000000000E0,
064: 0x0000000000000040,
127: 0x000000000000007F,
128: 0x0000000000000080
]
0x02040810E0407F80
So, is this well defined and specified? Can I rest assured that I should get this output every time? I've tried to be thorough, but I would appreciate some second opinions on this at least or, preferably, a citation of the standard on how casting from char to uint and promoting to a wider type is defined, along with the sign extension rules, if it is indeed well defined and specified. I really don't want to have to reach for Boost just to do this in a cross-platform way.
Also, feel free to assume that I will always be casting to a type of the same or wider width with this. Narrowing casts seem tricky, so I'm just ignoring them for now (I will probably eventually implement some kind of truncation similar to the static_cast<IntType>( byte ) & 0xFF; in this code, depending on the widths of the input and desired types).
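For what it's worth, a hedged sketch of the truncation idea described above (the name is mine, not the poster's): conversion to an unsigned type is already defined to wrap modulo 2^N, so the cast alone truncates:
#include <cstdint>
#include <type_traits>

template<typename IntType>
IntType truncateTo( uint64_t wide ) {
    static_assert( std::is_unsigned_v<IntType>, "IntType must be unsigned" );
    // unsigned narrowing is well defined: the result is wide mod 2^N
    return static_cast<IntType>( wide );
}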
I have a problem that I can neither solve on my own nor find an answer to anywhere. I have a file that contains such a string:
01000000d08c9ddf0115d1118c7a00c04
I would like to read the file in such a way that I end up with what I would write manually like this:
char fromFile[] =
"\x01\x00\x00\x00\xd0\x8c\x9d\xdf\x011\x5d\x11\x18\xc7\xa0\x0c\x04";
I would really appreciate any help.
I want to do it in C++ (ideally VC++).
Thank you!
#include <iostream>
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>
#include <cstdint>

int t194(void)
{
// imagine you have n pair of char, for simplicity,
// here n is 3 (you should recognize them)
char pair1[] = "01"; // note:
char pair2[] = "8c"; // initialize with 3 char c-style strings
char pair3[] = "c7"; //
{
// let us put these into a ram based stream, with spaces
std::stringstream ss;
ss << pair1 << " " << pair2 << " " << pair3;
// each pair can now be extracted into
// pre-declared int vars
int i1 = 0;
int i2 = 0;
int i3 = 0;
// use formatted extractor to convert
ss >> i1 >> i2 >> i3;
// show what happened (for debug only)
std::cout << "Confirm1:" << std::endl;
std::cout << "i1: " << i1 << std::endl;
std::cout << "i2: " << i2 << std::endl;
std::cout << "i3: " << i3 << std::endl << std::endl;
// output is:
// Confirm1:
// i1: 1
// i2: 8
// i3: 0
// Shucks, not correct.
// We know the default radix is base 10
// I hope you can see that the input radix is wrong,
// because c is not a decimal digit,
// the i2 and i3 conversions stops before the 'c'
}
// pre-declare
int i1 = 0;
int i2 = 0;
int i3 = 0;
{
// so we try again, with radix info added
std::stringstream ss;
ss << pair1 << " " << pair2 << " " << pair3;
// strings are already in hex, so we use them as is
ss >> std::hex // change radix to 16
>> i1 >> i2 >> i3;
// now show what happened
std::cout << "Confirm2:" << std::endl;
std::cout << "i1: " << i1 << std::endl;
std::cout << "i2: " << i2 << std::endl;
std::cout << "i3: " << i3 << std::endl << std::endl;
// output now:
// i1: 1
// i2: 140
// i3: 199
// not what you expected? Though correct,
// now we can see we have the wrong radix for output
// add output radix to cout stream
std::cout << std::hex // add radix info here!
<< "i1: " << i1 << std::endl
// Note: only need to do once for std::cout
<< "i2: " << i2 << std::endl
<< "i3: " << i3 << std::endl << std::endl
<< std::dec;
// output now looks correct, and easily comparable to input
// i1: 1
// i2: 8c
// i3: c7
// So: What next?
// read the entire string of hex input into a single string
// separate this into pairs of chars (perhaps using
// string::substr())
// put space separated pairs into stringstream ss
// extract hex values until ss.eof()
// probably should add error checks
// and, of course, figure out how to use a loop for these steps
//
// alternative to consider:
// read 1 char at a time, build a pairing, convert, repeat
}
//
// Eventually, you should get far enough to discover that the
// extracts I have done are integers, but you want to pack them
// into an array of binary bytes.
//
// You can go back, and recode to extract bytes (either
// unsigned char or uint8_t), which you might find interesting.
//
// Or ... because your input is hex, and the largest 2 char
// value will be 0xff, and this fits into a single byte, you
// can simply static_cast them (I use unsigned char)
unsigned char bin[] = {static_cast<unsigned char>(i1),
static_cast<unsigned char>(i2),
static_cast<unsigned char>(i3) };
// Now confirm by casting these back to ints to cout
std::cout << "Confirm4: "
<< std::hex << std::setw(2) << std::setfill('0')
<< static_cast<int>(bin[0]) << " "
<< static_cast<int>(bin[1]) << " "
<< static_cast<int>(bin[2]) << std::endl;
// you also might consider a vector (and i prefer uint8_t)
// because push_back operations does a lot of hidden work for you
std::vector<uint8_t> bytes;
bytes.push_back(static_cast<uint8_t>(i1));
bytes.push_back(static_cast<uint8_t>(i2));
bytes.push_back(static_cast<uint8_t>(i3));
// confirm
std::cout << "\nConfirm5: ";
for (size_t i=0; i<bytes.size(); ++i)
std::cout << std::hex << std::setw(2) << std::setfill(' ')
<< static_cast<int>(bytes[i]) << " ";
std::cout << std::endl;
Note: The cout (or ss) of bytes or char can be confusing, not always giving the result you might expect. My background is embedded software, and I have surprisingly small experience making stream i/o of bytes work. Just saying this tends to bias my work when dealing with stream i/o.
// other considerations:
//
// you might read 1 char at a time. this can simplify
// your loop, possibly easier to debug
// ... would you have to detect and remove eoln? i.e. '\n'
// ... how would you handle a bad input
// such as not hex char, odd char count in a line
//
// I would probably prefer to use getline(),
// it will read until eoln(), and discard the '\n'
// then in each string, loop char by char, creating char pairs, etc.
//
// Converting a vector<uint8_t> to char bytes[] can be an easier
// effort in some ways. A vector<> guarantees that all the values
// contained are 'packed' back-to-back, and contiguous in
// memory, just right for binary stream output
//
// vector.size() tells how many chars have been pushed
//
// NOTE: the formatted 'insert' operator ('<<') can not
// transfer binary data to a stream. You must use
// stream::write() for binary output.
//
std::stringstream ssOut;
// possible approach:
// 1 step reinterpret_cast
// - a binary block output requires "const char*"
const char* myBuff = reinterpret_cast<const char*>(&bytes.front());
ssOut.write(myBuff, bytes.size());
// block write puts binary info into stream
// confirm
std::cout << "\nConfirm6: ";
std::string s = ssOut.str(); // string with binary data
for (size_t i=0; i<s.size(); ++i)
{
// because binary data is _not_ signed data,
// we need to 'cancel' the sign bit
unsigned char ukar = static_cast<unsigned char>(s[i]);
// because formatted output would interpret some chars
// (like null, or \n), we cast to int
int intVal = static_cast<int>(ukar);
// cast does not generate code
// now the formatted 'insert' operator
// converts and displays what we want
std::cout << std::hex << std::setw(2) << std::setfill('0')
<< intVal << " ";
}
std::cout << std::endl;
//
//
return (0);
} // int t194(void)
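To make the "What next?" steps above concrete, here is one possible shape for the substr() loop the comments describe (a sketch, assuming the includes from the top of this answer; the function name is mine):
std::vector<uint8_t> hexLineToBytes(const std::string& line)
{
    std::vector<uint8_t> bytes;
    for (size_t i = 0; i + 1 < line.size(); i += 2)
    {
        std::stringstream ss(line.substr(i, 2)); // one pair of hex chars
        int value = 0;
        ss >> std::hex >> value;                 // extract with hex radix
        bytes.push_back(static_cast<uint8_t>(value));
    }
    return bytes;
}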
The below snippet should be helpful!
#include <fstream>
#include <string>
#include <vector>
#include <cstdlib>
#include <iterator>

std::ifstream input( "filePath", std::ios::binary );
// read the whole file into a string; std::string has substr(), which a
// std::vector<char> does not
std::string hex((std::istreambuf_iterator<char>(input)),
                 std::istreambuf_iterator<char>());
std::vector<char> bytes;
for (unsigned int i = 0; i < hex.size(); i += 2) {
    std::string byteString = hex.substr(i, 2);
    char byte = (char) strtol(byteString.c_str(), NULL, 16);
    bytes.push_back(byte);
}
char* byteArr = bytes.data();
The way I understand your question is that you want just the binary representation of the numbers, i.e. you want to remove the ASCII (or EBCDIC) part. Your output array will be half the length of the input array.
Here is some crude pseudo code.
For each input char c:
if (isdigit(c)) c -= '0';
else if (isxdigit(c)) c = tolower(c) - 'a' + 0xa; // handle both upper and lower case
Then, depending on the index of c in your input array:
if (!(index % 2)) output[outputindex] = (c << 4) & 0xf0; // high nibble first
else output[outputindex++] |= c & 0x0f; // then OR in the low nibble
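Fleshed out into real C++ (a sketch; the names are mine), the same nibble packing might look like:
#include <cctype>
#include <cstdint>
#include <string>
#include <vector>

std::vector<uint8_t> packNibbles(const std::string& in)
{
    std::vector<uint8_t> out((in.size() + 1) / 2, 0);
    for (size_t index = 0; index < in.size(); ++index)
    {
        unsigned char c = in[index];
        uint8_t nibble = std::isdigit(c) ? c - '0'
                                         : std::tolower(c) - 'a' + 0xa;
        if (!(index % 2)) out[index / 2] = (nibble << 4) & 0xf0; // high nibble first
        else              out[index / 2] |= nibble & 0x0f;       // then the low nibble
    }
    return out;
}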
Here is a function that takes a string as in your description, and outputs a string that has \x in front of each byte (each pair of digits).
#include <iostream>
#include <algorithm>
#include <iterator>
#include <string>

std::string convertHex(const std::string& str)
{
    std::string retVal;
    std::string hexPrefix = "\\x";
    if (!str.empty())
    {
        std::string::const_iterator it = str.begin();
        do
        {
            if (std::distance(it, str.end()) == 1)
            {
                retVal += hexPrefix + "0";
                retVal += *(it);
                ++it;
            }
            else
            {
                retVal += hexPrefix + std::string(it, it+2);
                it += 2;
            }
        } while (it != str.end());
    }
    return retVal;
}

using namespace std;
int main()
{
    cout << convertHex("01000000d08c9ddf0115d1118c7a00c04") << endl;
    cout << convertHex("015d");
}
Output:
\x01\x00\x00\x00\xd0\x8c\x9d\xdf\x01\x15\xd1\x11\x8c\x7a\x00\xc0\x04
\x01\x5d
Basically it is nothing more than a do-while loop. A string is built from each pair of characters encountered. If the number of characters left is 1 (meaning that there is only one digit), a "0" is added to the front of the digit.
I think I'd use a proxy class for reading and writing the data. Unfortunately, the code for the manipulators involved is just a little on the verbose side (to put it mildly).
#include <vector>
#include <algorithm>
#include <iterator>
#include <iostream>
#include <iomanip>
#include <string>
#include <sstream>

struct byte {
    unsigned char ch;
    friend std::istream &operator>>(std::istream &is, byte &b) {
        std::string temp;
        if (is >> std::setw(2) >> std::setprecision(2) >> temp)
            b.ch = std::stoi(temp, 0, 16);
        return is;
    }
    friend std::ostream &operator<<(std::ostream &os, byte const &b) {
        return os << "\\x" << std::setw(2) << std::setfill('0') << std::setprecision(2) << std::hex << (int)b.ch;
    }
};

int main() {
    std::istringstream input("01000000d08c9ddf115d1118c7a00c04");
    std::ostringstream result;
    std::istream_iterator<byte> in(input), end;
    std::ostream_iterator<byte> out(result);
    std::copy(in, end, out);
    std::cout << result.str();
}
I really do dislike how verbose the iomanipulators are, but other than that it seems pretty clean.
You can try a loop with fscanf:
unsigned char b;
fscanf(pFile, "%2hhx", &b); // %hhx matches an unsigned char
Edit:
#define MAX_LINE_SIZE 128
FILE* pFile = fopen(...);
char fromFile[MAX_LINE_SIZE] = {0};
unsigned char b = 0;
int currentIndex = 0;
while (fscanf (pFile, "%2hhx", &b) > 0 && currentIndex < MAX_LINE_SIZE)
    fromFile[currentIndex++] = b;
I'm following a college course about operating systems and we're learning how to convert from binary to hexadecimal, decimal to hexadecimal, etc. and today we just learned how signed/unsigned numbers are stored in memory using the two's complement (~number + 1).
We have a couple of exercises to do on paper and I would like to be able to verify my answers before submitting my work to the teacher. I wrote a C++ program for the first few exercises but now I'm stuck as to how I could verify my answer with the following problem:
char a, b;
short c;
a = -58;
c = -315;
b = a >> 3;
and we need to show the binary representation in memory of a, b and c.
I've done it on paper and it gives me the following results (all the binary representations in memory of the numbers after the two's complement):
a = 00111010 (it's a char, so 1 byte)
b = 00001000 (it's a char, so 1 byte)
c = 11111110 11000101 (it's a short, so 2 bytes)
Is there a way to verify my answer? Is there a standard way in C++ to show the binary representation in memory of a number, or do I have to code each step myself (calculate the two's complement and then convert to binary)? I know the latter wouldn't take so long but I'm curious as to if there is a standard way to do so.
The easiest way is probably to create an std::bitset representing the value, then stream that to cout.
#include <bitset>
...
char a = -58;
std::bitset<8> x(a);
std::cout << x << '\n';
short c = -315;
std::bitset<16> y(c);
std::cout << y << '\n';
Use on-the-fly conversion to std::bitset. No temporary variables, no loops, no functions, no macros.
Live On Coliru
#include <iostream>
#include <bitset>

int main() {
    int a = -58, b = a>>3, c = -315;
    std::cout << "a = " << std::bitset<8>(a) << std::endl;
    std::cout << "b = " << std::bitset<8>(b) << std::endl;
    std::cout << "c = " << std::bitset<16>(c) << std::endl;
}
Prints:
a = 11000110
b = 11111000
c = 1111111011000101
In C++20 you can use std::format to do this:
unsigned char a = -58;
std::cout << std::format("{:b}", a);
Output:
11000110
On older systems you can use the {fmt} library that std::format is based on. {fmt} also provides the print function, which makes this even easier and more efficient (godbolt):
unsigned char a = -58;
fmt::print("{:b}", a);
Disclaimer: I'm the author of {fmt} and C++20 std::format.
If you want to display the bit representation of any object, not just an integer, remember to reinterpret as a char array first, then you can print the contents of that array, as hex, or even as binary (via bitset):
#include <iostream>
#include <bitset>
#include <climits>

template<typename T>
void show_binrep(const T& a)
{
    const char* beg = reinterpret_cast<const char*>(&a);
    const char* end = beg + sizeof(a);
    while (beg != end)
        std::cout << std::bitset<CHAR_BIT>(*beg++) << ' ';
    std::cout << '\n';
}

int main()
{
    char a, b;
    short c;
    a = -58;
    c = -315;
    b = a >> 3;
    show_binrep(a);
    show_binrep(b);
    show_binrep(c);
    float f = 3.14;
    show_binrep(f);
}
Note that most common systems are little-endian, so the output of show_binrep(c) is not the 11111110 11000101 you expect, because that's not how it's stored in memory. If you're looking for the value representation in binary, then a simple cout << bitset<16>(c) works.
Is there a standard way in C++ to show the binary representation in memory of a number [...]?
No. There's no std::bin like std::hex or std::dec, but it's not hard to output a number in binary yourself:
You output the left-most bit by masking all the others, left-shift, and repeat that for all the bits you have.
(The number of bits in a type is sizeof(T) * CHAR_BIT.)
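A minimal sketch of that mask-and-shift loop (for an unsigned value; the function name is mine):
#include <iostream>
#include <climits>

void print_bits(unsigned int v)
{
    const unsigned int msb = 1u << (sizeof v * CHAR_BIT - 1); // the left-most bit
    for (unsigned i = 0; i < sizeof v * CHAR_BIT; ++i, v <<= 1)
        std::cout << ((v & msb) ? '1' : '0'); // mask all bits but the left-most
    std::cout << '\n';
}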
Similar to what is already posted, just using bit-shift and mask to get each bit; usable for any type, being a template. (CHAR_BIT from <climits> is the standard way to get the number of bits in a byte, used here.)
#include <iostream>
#include <climits>

template<typename T>
void printBin(const T& t){
    size_t nBytes = sizeof(T);
    char* rawPtr((char*)(&t));
    for(size_t byte = 0; byte < nBytes; byte++){
        for(size_t bit = 0; bit < CHAR_BIT; bit++){
            // note: prints the least significant bit of each byte first
            std::cout << (((rawPtr[byte]) >> bit) & 1);
        }
    }
    std::cout << std::endl;
}

int main(void){
    for(int i = 0; i < 50; i++){
        std::cout << i << ": ";
        printBin(i);
    }
}
Reusable function:
#include <sstream>
#include <bitset>
#include <string>
#include <cstdint>
#include <iostream>

template<typename T>
static std::string toBinaryString(const T& x)
{
    std::stringstream ss;
    ss << std::bitset<sizeof(T) * 8>(x);
    return ss.str();
}
Usage:
int main(){
    uint16_t x = 8;
    std::cout << toBinaryString(x);
}
This works for any number of bits:
#include <iostream>
#include <cmath> // in order to use the pow() function
using namespace std;

string show_binary(unsigned int u, int num_of_bits);

int main()
{
    cout << show_binary(128, 8) << endl;  // should print 10000000
    cout << show_binary(128, 5) << endl;  // should print 00000
    cout << show_binary(128, 10) << endl; // should print 0010000000
    return 0;
}

string show_binary(unsigned int u, int num_of_bits)
{
    string a = "";
    u &= (1u << num_of_bits) - 1;    // keep only the low num_of_bits bits
    int t = pow(2, num_of_bits - 1); // t is the value of the highest requested bit
    for (; t > 0; t = t / 2)         // t iterates down through the powers of 2
        if (u >= t) {                // check if the bit with value t is set in u
            u -= t;
            a += "1";                // if so, add a 1
        }
        else {
            a += "0";                // if not, add a 0
        }
    return a;                        // returns string
}
Using the std::bitset answers and convenience templates:
#include <iostream>
#include <bitset>
#include <climits>
#include <cstdint>

template<typename T>
struct BinaryForm {
    BinaryForm(const T& v) : _bs(v) {}
    const std::bitset<sizeof(T) * CHAR_BIT> _bs;
};

template<typename T>
inline std::ostream& operator<<(std::ostream& os, const BinaryForm<T>& bf) {
    return os << bf._bs;
}
Using it like this:
auto c = 'A';
std::cout << "c: " << c << " binary: " << BinaryForm{c} << std::endl;
unsigned x = 1234;
std::cout << "x: " << x << " binary: " << BinaryForm{x} << std::endl;
int64_t z { -1024 };
std::cout << "z: " << z << " binary: " << BinaryForm{z} << std::endl;
Generates output:
c: A binary: 01000001
x: 1234 binary: 00000000000000000000010011010010
z: -1024 binary: 1111111111111111111111111111111111111111111111111111110000000000
If you are using an older C++ version, you can use this snippet:
#include <iostream>
#include <string>
using namespace std;

template<typename T>
string toBinary(const T& t)
{
    string s = "";
    int n = sizeof(T) * 8;
    for (int i = n - 1; i >= 0; i--)
    {
        s += (t & (1 << i)) ? "1" : "0";
    }
    return s;
}

int main()
{
    char a, b;
    short c;
    a = -58;
    c = -315;
    b = a >> 3;
    cout << "a = " << a << " => " << toBinary(a) << endl;
    cout << "b = " << b << " => " << toBinary(b) << endl;
    cout << "c = " << c << " => " << toBinary(c) << endl;
}
a = => 11000110
b = => 11111000
c = -315 => 1111111011000101
I have had this problem when playing competitive coding games online. Here is a solution that is quick to implement and fairly intuitive. It also avoids outputting leading zeros and relying on <bitset>.
unsigned r = 315; // assumption: r holds the non-negative value to print
std::string s;
do {
    s = std::to_string(r & 1) + s; // prepend the lowest bit
} while ( r >>= 1 );
std::cout << s;
You should note, however, that this solution will increase your runtime, so if you are competing for optimization or not competing at all, you should use one of the other solutions on this page.
Here is a way to get at the binary representation of a number through its memory (beware: this pointer cast violates the strict aliasing rule):
unsigned int i = *(unsigned int*) &x;
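If you want the same result without the aliasing problem, std::memcpy is the well-defined route (a sketch, assuming x is a 4-byte object such as a float):
#include <cstring>

unsigned int i;
std::memcpy(&i, &x, sizeof i); // copies the object representation; no aliasing violation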
Is this what you're looking for?
std::cout << std::hex << val << std::endl;
I have this array: BYTE set[6] = { 0xA8, 0x12, 0x84, 0x03, 0x00, 0x00 }; and I need to insert the value int Value = 1200; into the last 4 bytes. Practically, I need to convert from int to hex and then write it inside the array...
Is this possible?
I already have the BitConverter::GetBytes function, but that's not enough.
Thank you,
To answer the original question: sure you can.
As long as your sizeof(int) == 4 and sizeof(BYTE) == 1.
But I'm not sure what you mean by "converting int to hex". If you want a hex string representation, you'll be much better off just using one of the standard methods of doing it.
For example, on the last line I use std::hex to print numbers as hex.
Here is a solution to what you've been asking for, and a little more (live example: http://codepad.org/rsmzngUL):
#include <iostream>
using namespace std;

int main() {
    const int value = 1200;
    unsigned char set[] = { 0xA8,0x12,0x84,0x03,0x00,0x00 };
    for (const unsigned char* c = set; c != set + sizeof(set); ++c) {
        cout << static_cast<int>(*c) << endl;
    }
    cout << endl << "Putting value into array:" << endl;
    *reinterpret_cast<int*>(&set[2]) = value;
    for (const unsigned char* c = set; c != set + sizeof(set); ++c) {
        cout << static_cast<int>(*c) << endl;
    }
    cout << endl << "Printing int's bytes one by one: " << endl;
    for (int byteNumber = 0; byteNumber != sizeof(int); ++byteNumber) {
        const unsigned char oneByte = reinterpret_cast<const unsigned char*>(&value)[byteNumber];
        cout << static_cast<int>(oneByte) << endl;
    }
    cout << endl << "Printing value as hex: " << hex << value << std::endl;
}
UPD: From the comments to your question:
1. If you just need to get the separate digits of the number into separate bytes, that's a different story.
2. Little vs. big endianness matters as well; I did not account for that in my answer.
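On point 2, a possible endianness-proof alternative to the reinterpret_cast line above (a sketch; it always stores little-endian, byte by byte):
for (int i = 0; i < 4; ++i)
    set[2 + i] = static_cast<unsigned char>((value >> (8 * i)) & 0xFF);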
Did you mean this?
#include <stdio.h>
#include <stdlib.h>
#define BYTE unsigned char

int main ( void )
{
    BYTE set[7] = { 0xA8,0x12,0x84,0x03,0x00,0x00,0x00 } ; /* one spare byte for sprintf's terminating NUL */
    sprintf ( (char*)&set[2] , "%d" , 1200 ) ; /* writes the ASCII digits "1200" plus a NUL */
    printf ( "\n%c%c%c%c", set[2],set[3],set[4],set[5] ) ;
    return 0 ;
}
Output:
1200
I'm using Cryptopp to generate a random string.
This is the code:
const unsigned int BLOCKSIZE = 16 * 8;
byte pcbScratch[ BLOCKSIZE ];
// Construction
// Using a ANSI approved Cipher
CryptoPP::AutoSeededX917RNG<CryptoPP::DES_EDE3> rng;
rng.GenerateBlock( pcbScratch, BLOCKSIZE );
// Output
std::cout << "The generated random block is:" << std::endl;
string str = "";
for( unsigned int i = 0; i < BLOCKSIZE; i++ )
{
    std::cout << "0x" << std::setbase(16) << std::setw(2) << std::setfill('0');
    std::cout << static_cast<unsigned int>( pcbScratch[ i ] ) << " ";
    str += pcbScratch[i];
}
std::cout << std::endl;
std::cout << str <<std::endl;
I've added a new variable to the code: string str = "";.
Then, in the for loop, I append each byte to the string.
But my output is dirty! I see only strange ASCII chars.
How can I build the string properly?
Thank you.
You will want to use some output encoding, e.g.
base64
hex
because what you are seeing is the raw binary data, interpreted as if it were text. Random characters are the consequence.
AFAICT (from a quick Google) you should be able to use something like this:
#include <base64.h>
string base64encoded;
StringSource(str, true, new Base64Encoder(new StringSink(base64encoded)));
Appending arbitrary bytes (chars) to the end of a string is going to result in it containing some non-printable characters:
http://en.wikipedia.org/wiki/Control_character
You don't mention what you wanted or expected. Did you want the string to be the same as what got sent to std::cout? If so, you can use a stringstream via #include <sstream>:
std::stringstream ss;
for( unsigned int i = 0; i < BLOCKSIZE; i++ )
{
    ss << "0x" << std::setbase(16) << std::setw(2) << std::setfill('0');
    ss << static_cast<unsigned int>(pcbScratch[i]);
}
str = ss.str();
You can also use Crypto++'s built in HexEncoder:
std::cout << "The generated random block is:" << std::endl;
string str = "0x";
StringSource ss(pcbScratch, BLOCKSIZE, true,
new HexEncoder(
new StringSink(str),
true, // uppercase
2, // grouping
" 0x" // separator
) // HexDecoder
); // StringSource
The StringSource 'owns' the HexEncoder, so there's no need to call delete.