I'm using Crypto++ to generate a random string.
This is the code:
const unsigned int BLOCKSIZE = 16 * 8;
byte pcbScratch[ BLOCKSIZE ];
// Construction
// Using an ANSI-approved cipher
CryptoPP::AutoSeededX917RNG<CryptoPP::DES_EDE3> rng;
rng.GenerateBlock( pcbScratch, BLOCKSIZE );
// Output
std::cout << "The generated random block is:" << std::endl;
string str = "";
for( unsigned int i = 0; i < BLOCKSIZE; i++ )
{
std::cout << "0x" << std::setbase(16) << std::setw(2) << std::setfill('0');
std::cout << static_cast<unsigned int>( pcbScratch[ i ] ) << " ";
str += pcbScratch[i];
}
std::cout << std::endl;
std::cout << str <<std::endl;
I added a new variable to the code: string str = "".
Then, inside the for loop, I append each result to the string.
But my output is garbage: I see only strange ASCII characters.
How can I build the string properly?
Thank you.
You will want to apply some output encoding, e.g.
base64
hex
because what you are seeing is the raw binary data interpreted as if it were text; the random-looking characters are the consequence.
AFAICT (from a quick search) you should be able to use something like this:
#include <base64.h>
string base64encoded;
StringSource(str, true, new Base64Encoder(new StringSink(base64encoded)));
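Put together with the code from the question, a fuller (untested) sketch might look like the following. The header names are my assumption for a typical Crypto++ install (they are sometimes prefixed with cryptopp/), and the buffer is declared as unsigned char rather than the library's byte typedef:
#include <iostream>
#include <string>
#include <osrng.h>     // AutoSeededX917RNG
#include <des.h>       // DES_EDE3
#include <base64.h>    // Base64Encoder
#include <filters.h>   // StringSource, StringSink
int main()
{
    const unsigned int BLOCKSIZE = 16 * 8;
    unsigned char pcbScratch[BLOCKSIZE];
    // Generate the random block exactly as in the question
    CryptoPP::AutoSeededX917RNG<CryptoPP::DES_EDE3> rng;
    rng.GenerateBlock(pcbScratch, BLOCKSIZE);
    // Encode the raw bytes into printable base64
    std::string base64encoded;
    CryptoPP::StringSource(pcbScratch, BLOCKSIZE, true,
        new CryptoPP::Base64Encoder(new CryptoPP::StringSink(base64encoded)));
    std::cout << base64encoded << std::endl;
    return 0;
}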
Appending arbitrary bytes (chars) to the end of a string is going to result in the string containing some non-printable characters:
http://en.wikipedia.org/wiki/Control_character
You don't mention what you wanted or expected. Did you want the string to be the same as what got sent to std::cout? If so, you can use a stringstream via #include <sstream>:
std::stringstream ss;
for( unsigned int i = 0; i < BLOCKSIZE; i++ )
{
ss << "0x" << std::setbase(16) << std::setw(2) << std::setfill('0');
ss << static_cast<unsigned int>(pcbScratch[i]);
}
str = ss.str();
You can also use Crypto++'s built in HexEncoder:
std::cout << "The generated random block is:" << std::endl;
string str = "0x";
StringSource ss(pcbScratch, BLOCKSIZE, true,
new HexEncoder(
new StringSink(str),
true, // uppercase
2, // grouping
" 0x" // separator
) // HexEncoder
); // StringSource
The StringSource 'owns' the HexEncoder, so there's no need to call delete.
Related
This C# code:
string code_verifier = "xe-V-ykFyCazK3jCWwqRCZHKAKJ0MqdZs8F6xenxjFE";
byte[] sha256verifier = sha256(code_verifier);
string code_challenge = base64urlencodeNoPadding(sha256verifier);
StringBuilder builder = new StringBuilder();
for (int i = 0; i < sha256verifier.Length; i++)
builder.Append(sha256verifier[i].ToString("x2"));
output("code_verifier: " + code_verifier);
output("builder: " + builder.ToString());
output("code_challenge: " + code_challenge);
Produces these results:
code_verifier: xe-V-ykFyCazK3jCWwqRCZHKAKJ0MqdZs8F6xenxjFE
builder: 8b6526951bf46153a9a276be579ee1070f86e0812fbde8b8c37a3e64c3368525
code_challenge: i2UmlRv0YVOpona-V57hBw-G4IEvvei4w3o-ZMM2hSU
I'm trying to do the same in C++ using Poco, here's my code:
std::string verifier_ = "xe-V-ykFyCazK3jCWwqRCZHKAKJ0MqdZs8F6xenxjFE";
std::string sha256verifier = sha256(verifier_);
std::stringstream ss;
Poco::Base64Encoder b64enc(ss, Poco::BASE64_URL_ENCODING || Poco::BASE64_NO_PADDING);
b64enc << sha256(sha256verifier);
std::string challenge = ss.str();
cout << "verifier: " << verifier_ << endl;
cout << "sha256verifier: " << sha256verifier << endl;
cout << "challenge: " << challenge << endl;
The result is:
verifier: xe-V-ykFyCazK3jCWwqRCZHKAKJ0MqdZs8F6xenxjFE
sha256verifier: 8b6526951bf46153a9a276be579ee1070f86e0812fbde8b8c37a3e64c3368525
challenge: YTIzY2U1YTYxY2IwMTU0ZmFhZjU1ZWY2ZDEyNGYzZjE3MjQzN2M1MTExNmRiZTY1ZDU1ZTc1NWY2ZjMyNjZi
The C# sha256() function returns a 32 element byte array which base64urlencodeNoPadding converts into a 32 character string.
The C++ sha256() function returns a 32 element hex encoded string of the same data which Poco::Base64Encoder turns into a 64 element string which bears no relation to the C# string.
How can I get the same results from C++ as I get from C# ?
This is my Poco sha256 function:
std::string sha256(std::string buffer) {
Poco::Crypto::RSAKey key(Poco::Crypto::RSAKey::KL_2048,
Poco::Crypto::RSAKey::EXP_LARGE);
Poco::Crypto::RSADigestEngine eng(key, "SHA256");
eng.update(buffer.c_str(), buffer.size());
const auto& sig = eng.digest(); // We just want the digest, unsigned.
return Poco::DigestEngine::digestToHex(sig);
}
There are 2 issues in your C++ solution:
you do SHA-256 twice - at sha256(verifier_) and sha256(sha256verifier)
Poco::DigestEngine::digestToHex converts the SHA-256 sum into a Hex string
Try something like this instead (untested):
std::string verifier_ = "xe-V-ykFyCazK3jCWwqRCZHKAKJ0MqdZs8F6xenxjFE";
auto sha256verifier = sha256(verifier_);
std::string challenge = toBase64(sha256verifier);
cout << "verifier: " << verifier_ << endl;
cout << "sha256verifier: " << toHex(sha256verifier) << endl;
cout << "challenge: " << challenge << endl;
std::vector<unsigned char> sha256(std::string const& buffer) {
Poco::Crypto::RSAKey key(Poco::Crypto::RSAKey::KL_2048, Poco::Crypto::RSAKey::EXP_LARGE);
Poco::Crypto::RSADigestEngine eng(key, "SHA256");
eng.update(buffer.c_str(), buffer.size());
return eng.digest();
}
std::string toBase64(std::vector<unsigned char> const& sig) {
std::stringstream ss;
Poco::Base64Encoder b64enc(ss, Poco::BASE64_URL_ENCODING | Poco::BASE64_NO_PADDING); // bitwise OR to combine the options
b64enc.write((const char*)sig.data(), sig.size());
b64enc.close();
return ss.str();
}
std::string toHex(std::vector<unsigned char> const& sig) {
return Poco::DigestEngine::digestToHex(sig);
}
The project I'm working on has a custom file format consisting of a header of a few different variables, followed by the pixel data. My colleagues have developed a GUI where processing, writing, reading and displaying this type of file format works fine.
But my problem is that, while I have assisted in writing the code that writes the data to disk, I cannot myself read this kind of file back and get satisfactory values. I am able to read the first variable back (a char array) but not the following value(s).
So the file format matches the following structure:
typedef struct {
char hxtLabel[8];
u64 hxtVersion;
int motorPositions[9];
int filePrefixLength;
char filePrefix[100];
..
} HxtBuffer;
In the code, I create an object of the above structure and then set these example values:
setLabel("MY_LABEL");
setFormatVersion(3);
setMotorPosition( 2109, 5438, 8767, 1234, 1022, 1033, 1044, 1055, 1066);
setFilePrefixLength(7);
setFilePrefix( string("prefix_"));
setDataTimeStamp( string("000000_000000"));
My code for opening the file:
// Open data file, binary mode, reading
ifstream datFile(aFileName.c_str(), ios::in | ios::binary);
if (!datFile.is_open()) {
cout << "readFile() ERROR: Failed to open file " << aFileName << endl;
return false;
}
// How large is the file?
datFile.seekg(0, datFile.end);
int length = datFile.tellg();
datFile.seekg(0, datFile.beg);
cout << "readFile() file " << setw(70) << aFileName << " is: " << setw(15) << length << " long\n";
// Allocate memory for buffer:
char * buffer = new char[length];
// Read data as one block:
datFile.read(buffer, length);
datFile.close();
/// Looking at the start of the buffer, I should be seeing "MY_LABEL"?
cout << "buffer: " << buffer << " " << *(buffer) << endl;
int* mSSX = reinterpret_cast<int*>(*(buffer+8));
int* mSSY = reinterpret_cast<int*>(&buffer+9);
int* mSSZ = reinterpret_cast<int*>(&buffer+10);
int* mSSROT = reinterpret_cast<int*>(&buffer+11);
int* mTimer = reinterpret_cast<int*>(&buffer+12);
int* mGALX = reinterpret_cast<int*>(&buffer+13);
int* mGALY = reinterpret_cast<int*>(&buffer+14);
int* mGALZ = reinterpret_cast<int*>(&buffer+15);
int* mGALROT = reinterpret_cast<int*>(&buffer+16);
int* filePrefixLength = reinterpret_cast<int*>(&buffer+17);
std::string filePrefix; std::string dataTimeStamp;
// Read file prefix character by character into stringstream object
std::stringstream ss;
char* cPointer = (char *)(buffer+18);
int k;
for(k = 0; k < *filePrefixLength; k++)
{
//read string
char c;
c = *cPointer;
ss << c;
cPointer++;
}
filePrefix = ss.str();
// Read timestamp character by character into stringstream object
std::stringstream timeStampStream;
/// Need not increment cPointer, already pointing # 1st char of timeStamp
for (int l= 0; l < 13; l++)
{
char c;
c = * cPointer;
timeStampStream << c;
}
dataTimeStamp = timeStampStream.str();
cout << 25 << endl;
cout << " mSSX: " << mSSX << " mSSY: " << mSSY << " mSSZ: " << mSSZ;
cout << " mSSROT: " << mSSROT << " mTimer: " << mTimer << " mGALX: " << mGALX;
cout << " mGALY: " << mGALY << " mGALZ: " << mGALZ << " mGALROT: " << mGALROT;
Finally, what I see is here below. I added the 25 just to double-check that not everything was coming out in hexadecimal. As you can see, I am able to see the label "MY_LABEL" as expected. But the 9 motorPositions all come out looking suspiciously like addresses and not values. The file prefix and the data timestamp (which should be strings, or at least characters) are just empty.
buffer: MY_LABEL M
25
mSSX: 0000000000000003 mSSY: 00000000001BF618 mSSZ: 00000000001BF620 mSSROT: 00000000001BF628 mTimer: 00000000001BF630 mGALX: 00000000001BF638 mGALY: 00000000001BF640 mGALZ: 00000000001BF648 mGALROT: 00000000001BF650filePrefix: dataTimeStamp:
I'm sure the solution can't be too complicated, but I've reached a stage where I'm just going around in circles and cannot make sense of things.
Many thanks for reading this somewhat long post.
-- Edit--
I might hit the maximum length allowed for a post, but just in case, here is the code that generates the data I'm trying to read back:
bool writePixelOutput(string aOutputPixelFileName) {
// Write pixel histograms out to binary file
ofstream pixelFile;
pixelFile.open(aOutputPixelFileName.c_str(), ios::binary | ios::out | ios::trunc);
if (!pixelFile.is_open()) {
LOG(gLogConfig, logERROR) << "Failed to open output file " << aOutputPixelFileName;
return false;
}
// Write binary file header
string label("MY_LABEL");
pixelFile.write(label.c_str(), label.length());
pixelFile.write((const char*)&mFormatVersion, sizeof(u64));
// Include File Prefix/Motor Positions/Data Time Stamp - if format version > 1
if (mFormatVersion > 1)
{
pixelFile.write((const char*)&mSSX, sizeof(mSSX));
pixelFile.write((const char*)&mSSY, sizeof(mSSY));
pixelFile.write((const char*)&mSSZ, sizeof(mSSZ));
pixelFile.write((const char*)&mSSROT, sizeof(mSSROT));
pixelFile.write((const char*)&mTimer, sizeof(mTimer));
pixelFile.write((const char*)&mGALX, sizeof(mGALX));
pixelFile.write((const char*)&mGALY, sizeof(mGALY));
pixelFile.write((const char*)&mGALZ, sizeof(mGALZ));
pixelFile.write((const char*)&mGALROT, sizeof(mGALROT));
// Determine length of mFilePrefix string
int filePrefixSize = (int)mFilePrefix.size();
// Write prefix length, followed by prefix itself
pixelFile.write((const char*)&filePrefixSize, sizeof(filePrefixSize));
size_t prefixLen = 0;
if (mFormatVersion == 2) prefixLen = mFilePrefix.size();
else prefixLen = 100;
pixelFile.write(mFilePrefix.c_str(), prefixLen);
pixelFile.write(mDataTimeStamp.c_str(), mDataTimeStamp.size());
}
// Continue writing header information that is common to both format versions
pixelFile.write((const char*)&mRows, sizeof(mRows));
pixelFile.write((const char*)&mCols, sizeof(mCols));
pixelFile.write((const char*)&mHistoBins, sizeof(mHistoBins));
// Write the actual data - taken out for brevity's sake
// ..
pixelFile.close();
LOG(gLogConfig, logINFO) << "Written output histogram binary file " << aOutputPixelFileName;
return true;
}
-- Edit 2 (11:32 09/12/2015) --
Thank you for all the help, I'm closer to solving the issue now. Going with the answer from muelleth, I try:
/// Read into char buffer
char * buffer = new char[length];
datFile.read(buffer, length);// length determined by ifstream.seekg()
/// Let's try HxtBuffer
HxtBuffer *input = new HxtBuffer;
cout << "sizeof HxtBuffer: " << sizeof *input << endl;
memcpy(input, buffer, length);
I can then display the different struct variables:
qDebug() << "Slice BUFFER label " << QString::fromStdString(input->hxtLabel);
qDebug() << "Slice BUFFER version " << QString::number(input->hxtVersion);
qDebug() << "Slice BUFFER hxtPrefixLength " << QString::number(input->filePrefixLength);
for (int i = 0; i < 9; i++)
{
qDebug() << i << QString::number(input->motorPositions[i]);
}
qDebug() << "Slice BUFFER filePrefix " << QString::fromStdString(input->filePrefix);
qDebug() << "Slice BUFFER dataTimeStamp " << QString::fromStdString(input->dataTimeStamp);
qDebug() << "Slice BUFFER nRows " << QString::number(input->nRows);
qDebug() << "Slice BUFFER nCols " << QString::number(input->nCols);
qDebug() << "Slice BUFFER nBins " << QString::number(input->nBins);
The output is then mostly as expected:
Slice BUFFER label "MY_LABEL"
Slice BUFFER version "3"
Slice BUFFER hxtPrefixLength "2"
0 "2109"
1 "5438"
...
7 "1055"
8 "1066"
Slice BUFFER filePrefix "-1"
Slice BUFFER dataTimeStamp "000000_000000P"
Slice BUFFER nRows "20480"
Slice BUFFER nCols "256000"
Slice BUFFER nBins "0"
EXCEPT dataTimeStamp, which is 13 chars long but displays 14 chars instead. The 3 variables that follow (nRows, nCols and nBins) are then incorrect. (They should be nRows=80, nCols=80, nBins=1000.) My guess is that the bits belonging to the 14th char of dataTimeStamp should really be read as part of nRows, which would then cascade on to produce the correct nCols and nBins.
I have separately verified (not shown here) using qDebug that what I'm writing into the file, really are the values I expect, and their individual sizes.
I personally would try to read exactly the number of bytes your struct occupies from the file, i.e. something like
int length = sizeof(HxtBuffer);
and then simply use memcpy to assign a local structure from the read buffer:
HxtBuffer input;
memcpy(&input, buffer, length);
You can then access your data e.g. like:
std::cout << "Data: " << input.hxtLabel << std::endl;
Why do you read into a buffer, instead of reading directly into the structure?
HxtBuffer data;
datFile.read(reinterpret_cast<char *>(&data), sizeof data);
if(!datFile || datFile.gcount()!=sizeof data)
throw io_exception();
// Can use data.
If you want to read into a character buffer, then your way of getting the data is just wrong. You probably want to do something like this:
char *buf_offset=buffer+8+sizeof(u64); // Skip label (8 chars) and version (int64)
int mSSX = *reinterpret_cast<int*>(buf_offset);
buf_offset+=sizeof(int);
int mSSY = *reinterpret_cast<int*>(buf_offset);
buf_offset+=sizeof(int);
int mSSZ = *reinterpret_cast<int*>(buf_offset);
/* etc. */
Or, a little better (provided you don't change the contents of the buffer).
int *ptr_motors=reinterpret_cast<int *>(buffer+8+sizeof(u64));
int &mSSX = ptr_motors[0];
int &mSSY = ptr_motors[1];
int &mSSZ = ptr_motors[2];
/* etc. */
Notice that I don't declare mSSX, mSSY etc. as pointers. Your code was printing them as addresses because you told the compiler that they were addresses (pointers).
I have a problem I can neither solve on my own nor find an answer to anywhere. I have a file that contains a string such as:
01000000d08c9ddf0115d1118c7a00c04
I would like to read the file in such a way that I end up with what I would otherwise write manually, like this:
char fromFile[] =
"\x01\x00\x00\x00\xd0\x8c\x9d\xdf\x011\x5d\x11\x18\xc7\xa0\x0c\x04";
I would really appreciate any help.
I want to do it in C++ (ideally VC++).
Thank You!
int t194(void)
{
// imagine you have n pair of char, for simplicity,
// here n is 3 (you should recognize them)
char pair1[] = "01"; // note:
char pair2[] = "8c"; // initialize with 3 char c-style strings
char pair3[] = "c7"; //
{
// let us put these into a ram based stream, with spaces
std::stringstream ss;
ss << pair1 << " " << pair2 << " " << pair3;
// each pair can now be extracted into
// pre-declared int vars
int i1 = 0;
int i2 = 0;
int i3 = 0;
// use formatted extractor to convert
ss >> i1 >> i2 >> i3;
// show what happened (for debug only)
std::cout << "Confirm1:" << std::endl;
std::cout << "i1: " << i1 << std::endl;
std::cout << "i2: " << i2 << std::endl;
std::cout << "i3: " << i3 << std::endl << std::endl;
// output is:
// Confirm1:
// i1: 1
// i2: 8
// i3: 0
// Shucks, not correct.
// We know the default radix is base 10
// I hope you can see that the input radix is wrong,
// because c is not a decimal digit,
// the i2 and i3 conversions stops before the 'c'
}
// pre-declare
int i1 = 0;
int i2 = 0;
int i3 = 0;
{
// so we try again, with radix info added
std::stringstream ss;
ss << pair1 << " " << pair2 << " " << pair3;
// strings are already in hex, so we use them as is
ss >> std::hex // change radix to 16
>> i1 >> i2 >> i3;
// now show what happened
std::cout << "Confirm2:" << std::endl;
std::cout << "i1: " << i1 << std::endl;
std::cout << "i2: " << i2 << std::endl;
std::cout << "i3: " << i3 << std::endl << std::endl;
// output now:
// i1: 1
// i2: 140
// i3: 199
// not what you expected? Though correct,
// now we can see we have the wrong radix for output
// add output radix to cout stream
std::cout << std::hex // add radix info here!
<< "i1: " << i1 << std::endl
// Note: only need to do once for std::cout
<< "i2: " << i2 << std::endl
<< "i3: " << i3 << std::endl << std::endl
<< std::dec;
// output now looks correct, and easily comparable to input
// i1: 1
// i2: 8c
// i3: c7
// So: What next?
// read the entire string of hex input into a single string
// separate this into pairs of chars (perhaps using
// string::substr())
// put space separated pairs into stringstream ss
// extract hex values until ss.eof()
// probably should add error checks
// and, of course, figure out how to use a loop for these steps
//
// alternative to consider:
// read 1 char at a time, build a pairing, convert, repeat
}
//
// Eventually, you should get far enough to discover that the
// extracts I have done are integers, but you want to pack them
// into an array of binary bytes.
//
// You can go back, and recode to extract bytes (either
// unsigned char or uint8_t), which you might find interesting.
//
// Or ... because your input is hex, and the largest 2 char
// value will be 0xff, and this fits into a single byte, you
// can simply static_cast them (I use unsigned char)
unsigned char bin[] = {static_cast<unsigned char>(i1),
static_cast<unsigned char>(i2),
static_cast<unsigned char>(i3) };
// Now confirm by casting these back to ints to cout
std::cout << "Confirm4: "
<< std::hex << std::setw(2) << std::setfill('0')
<< static_cast<int>(bin[0]) << " "
<< static_cast<int>(bin[1]) << " "
<< static_cast<int>(bin[2]) << std::endl;
// you also might consider a vector (and i prefer uint8_t)
// because push_back operations does a lot of hidden work for you
std::vector<uint8_t> bytes;
bytes.push_back(static_cast<uint8_t>(i1));
bytes.push_back(static_cast<uint8_t>(i2));
bytes.push_back(static_cast<uint8_t>(i3));
// confirm
std::cout << "\nConfirm5: ";
for (size_t i=0; i<bytes.size(); ++i)
std::cout << std::hex << std::setw(2) << std::setfill(' ')
<< static_cast<int>(bytes[i]) << " ";
std::cout << std::endl;
Note: The cout (or ss) of bytes or char can be confusing, not always giving the result you might expect. My background is embedded software, and I have surprisingly small experience making stream i/o of bytes work. Just saying this tends to bias my work when dealing with stream i/o.
// other considerations:
//
// you might read 1 char at a time. this can simplify
// your loop, possibly easier to debug
// ... would you have to detect and remove eoln? i.e. '\n'
// ... how would you handle a bad input
// such as not hex char, odd char count in a line
//
// I would probably prefer to use getline(),
// it will read until eoln(), and discard the '\n'
// then in each string, loop char by char, creating char pairs, etc.
//
// Converting a vector<uint8_t> to char bytes[] can be an easier
// effort in some ways. A vector<> guarantees that all the values
// contained are 'packed' back-to-back, and contiguous in
// memory, just right for binary stream output
//
// vector.size() tells how many chars have been pushed
//
// NOTE: the formatted 'insert' operator ('<<') can not
// transfer binary data to a stream. You must use
// stream::write() for binary output.
//
std::stringstream ssOut;
// possible approach:
// 1 step reinterpret_cast
// - a binary block output requires "const char*"
const char* myBuff = reinterpret_cast<const char*>(&bytes.front());
ssOut.write(myBuff, bytes.size());
// block write puts binary info into stream
// confirm
std::cout << "\nConfirm6: ";
std::string s = ssOut.str(); // string with binary data
for (size_t i=0; i<s.size(); ++i)
{
// because binary data is _not_ signed data,
// we need to 'cancel' the sign bit
unsigned char ukar = static_cast<unsigned char>(s[i]);
// because formatted output would interpret some chars
// (like null, or \n), we cast to int
int intVal = static_cast<int>(ukar);
// cast does not generate code
// now the formatted 'insert' operator
// converts and displays what we want
std::cout << std::hex << std::setw(2) << std::setfill('0')
<< intVal << " ";
}
std::cout << std::endl;
//
//
return (0);
} // int t194(void)
The below snippet should be helpful!
std::ifstream input( "filePath", std::ios::binary );
std::string hex((
std::istreambuf_iterator<char>(input)),
(std::istreambuf_iterator<char>()));
std::vector<char> bytes;
for (unsigned int i = 0; i < hex.size(); i += 2) {
std::string byteString = hex.substr(i, 2);
char byte = (char) strtol(byteString.c_str(), NULL, 16);
bytes.push_back(byte);
}
char* byteArr = bytes.data();
The way I understand your question is that you want just the binary representation of the numbers, i.e. remove the ASCII (or EBCDIC) part. Your output array will be half the length of the input array.
Here is some crude pseudo code.
For each input char c:
if (isdigit(c)) c -= '0';
else if (isxdigit(c)) c = c - 'a' + 0xa; // need to check for isupper/islower and use 'A' accordingly
Then, depending on the index of c in your input array:
if (index % 2 == 0) output[outputindex] = (c << 4) & 0xf0;
else output[outputindex++] |= c & 0x0f;
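For illustration, here is a rough C++ version of that pseudo code (a sketch only; it assumes the input contains nothing but valid hex digits and skips error handling, and the packHex/nibble names are just made up here):
#include <cctype>
#include <cstddef>
#include <string>
#include <vector>
// Convert one hex character to its 4-bit value (assumes a valid hex digit).
static unsigned char nibble(char c)
{
    if (std::isdigit(static_cast<unsigned char>(c)))
        return static_cast<unsigned char>(c - '0');
    return static_cast<unsigned char>(std::tolower(static_cast<unsigned char>(c)) - 'a' + 0xa);
}
// Pack each pair of hex characters into a single output byte.
std::vector<unsigned char> packHex(const std::string& in)
{
    std::vector<unsigned char> out;
    for (std::size_t i = 0; i + 1 < in.size(); i += 2)
        out.push_back(static_cast<unsigned char>((nibble(in[i]) << 4) | nibble(in[i + 1])));
    return out;
}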
Here is a function that takes a string as in your description, and outputs a string that has \x in front of each byte (pair of hex digits).
#include <iostream>
#include <algorithm>
#include <string>
std::string convertHex(const std::string& str)
{
std::string retVal;
std::string hexPrefix = "\\x";
if (!str.empty())
{
std::string::const_iterator it = str.begin();
do
{
if (std::distance(it, str.end()) == 1)
{
retVal += hexPrefix + "0";
retVal += *(it);
++it;
}
else
{
retVal += hexPrefix + std::string(it, it+2);
it += 2;
}
} while (it != str.end());
}
return retVal;
}
using namespace std;
int main()
{
cout << convertHex("01000000d08c9ddf0115d1118c7a00c04") << endl;
cout << convertHex("015d");
}
Output:
\x01\x00\x00\x00\xd0\x8c\x9d\xdf\x01\x15\xd1\x11\x8c\x7a\x00\xc0\x04
\x01\x5d
Basically it is nothing more than a do-while loop. A string is built from each pair of characters encountered. If the number of characters left is 1 (meaning that there is only one digit), a "0" is added to the front of the digit.
I think I'd use a proxy class for reading and writing the data. Unfortunately, the code for the manipulators involved is just a little on the verbose side (to put it mildly).
#include <vector>
#include <algorithm>
#include <iterator>
#include <iostream>
#include <iomanip>
#include <string>
#include <sstream>
struct byte {
unsigned char ch;
friend std::istream &operator>>(std::istream &is, byte &b) {
std::string temp;
if (is >> std::setw(2) >> std::setprecision(2) >> temp)
b.ch = std::stoi(temp, 0, 16);
return is;
}
friend std::ostream &operator<<(std::ostream &os, byte const &b) {
return os << "\\x" << std::setw(2) << std::setfill('0') << std::setprecision(2) << std::hex << (int)b.ch;
}
};
int main() {
std::istringstream input("01000000d08c9ddf115d1118c7a00c04");
std::ostringstream result;
std::istream_iterator<byte> in(input), end;
std::ostream_iterator<byte> out(result);
std::copy(in, end, out);
std::cout << result.str();
}
I do really dislike how verbose iomanipulators are, but other than that it seems pretty clean.
You can try a loop with fscanf
unsigned int b;
fscanf(pFile, "%2x", &b);
Edit:
#define MAX_LINE_SIZE 128
FILE* pFile = fopen(...);
char fromFile[MAX_LINE_SIZE] = {0};
unsigned int b = 0;
int currentIndex = 0;
while (fscanf(pFile, "%2x", &b) > 0 && currentIndex < MAX_LINE_SIZE)
fromFile[currentIndex++] = (char) b;
I am looking for a C++ fstream equivalent of the C fgets function. I tried the get function of fstream but did not get what I wanted. The get function does not extract the delimiter character, whereas fgets does. So I wrote code to insert this delimiter character myself, but it is giving strange behaviour. Please see my sample code below:
#include <stdio.h>
#include <fstream>
#include <iostream>
int main(int argc, char **argv)
{
char str[256];
int len = 10;
std::cout << "Using C fgets function" << std::endl;
FILE * file = fopen("C:\\cpp\\write.txt", "r");
if(file == NULL){
std::cout << " Error opening file" << std::endl;
}
int count = 0;
while(!feof(file)){
char *result = fgets(str, len, file);
std::cout << result << std::endl ;
count++;
}
std::cout << "\nCount = " << count << std::endl;
fclose(file);
std::fstream fp("C:\\cpp\\write.txt", std::ios_base::in);
int iter_count = 0;
while(!fp.eof() && iter_count < 10){
fp.get(str, len,'\n');
int count = fp.gcount();
std::cout << "\nCurrent Count = " << count << std::endl;
if(count == 0){
//only new line character encountered
//adding newline character
str[1] = '\0';
str[0] = '\n';
fp.ignore(1, '\n');
//std::cout << fp.get(); //ignore new line character from stream
}
else if(count != (len -1) ){
//adding newline character
str[count + 1] = '\0';
str[count ] = '\n';
//std::cout << fp.get(); //ignore new line character from stream
fp.ignore(1, '\n');
//std::cout << "Adding new line \n";
}
std::cout << str << std::endl;
std::cout << " Stream State : Good: " << fp.good() << " Fail: " << fp.fail() << std::endl;
iter_count++;
}
std::cout << "\nCount = " << iter_count << std::endl;
fp.close();
return 0;
}
The txt file that I am using is write.txt with following content:
This is a new lines.
Now writing second
line
DONE
If you look at my program, I am using the fgets function first and then the get function on the same file. In the case of the get function, the stream state goes bad.
Can anyone please point me out what is going wrong here?
UPDATED: I am now posting the simplest code which does not work at my end. If I don't care about the delimiter character for now and just read the entire file 10 characters at a time using getline:
void read_file_getline_no_insert(){
char str[256];
int len =10;
std::cout << "\nREAD_GETLINE_NO_INSERT FUNCITON\n" << std::endl;
std::fstream fp("C:\\cpp\\write.txt", std::ios_base::in);
int iter_count = 0;
while(!fp.eof() && iter_count < 10){
fp.getline(str, len,'\n');
int count = fp.gcount();
std::cout << "\nCurrent Count = " << count << std::endl;
std::cout << str << std::endl;
std::cout << " Stream State : Good: " << fp.good() << " Fail: " << fp.fail() << std::endl;
iter_count++;
}
std::cout << "\nCount = " << iter_count << std::endl;
fp.close();
}
int main(int argc, char **argv)
{
read_file_getline_no_insert();
return 0;
}
If we look at the output of the above code:
READ_GETLINE_NO_INSERT FUNCITON
Current Count = 9
This is a
Stream State : Good: 0 Fail: 1
Current Count = 0
Stream State : Good: 0 Fail: 1
You would see that the state of stream goes Bad and the fail bit is set. I am unable to understand this behavior.
Rgds
Sapan
std::getline() will read a string from a stream, until it encounters a delimiter (newline by default).
Unlike fgets(), std::getline() discards the delimiter. But, also unlike fgets(), it will read the whole line (available memory permitting) since it works with a std::string rather than a char *. That makes it somewhat easier to use in practice.
All types derived from std::istream (which is the base class for all input streams) also have a member function called getline() which works a little more like fgets() - accepting a char * and a buffer size. It still discards the delimiter though.
The C++-specific options are overloaded functions (i.e. available in more than one version) so you need to read documentation to decide which one is appropriate to your needs.
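To make the difference concrete, here is a minimal sketch of both overloads (assuming the write.txt used in the question):
#include <fstream>
#include <iostream>
#include <string>
int main()
{
    std::ifstream fp("C:\\cpp\\write.txt");
    // Free-function overload: the std::string grows to fit, so the whole
    // line is always read; the '\n' is consumed but not stored.
    std::string line;
    while (std::getline(fp, line))
        std::cout << line << '\n';
    // Member overload, closest to fgets(): fixed buffer plus size.
    // It sets failbit when a line does not fit in the buffer, which is
    // why the loop in the question stops with Fail: 1 after 9 characters.
    fp.clear();   // clear the eof/fail state before reusing the stream
    fp.seekg(0);
    char buf[10];
    fp.getline(buf, sizeof buf);   // reads at most 9 chars plus '\0'
    std::cout << buf << " (gcount = " << fp.gcount() << ")\n";
    return 0;
}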
I want to read a MAC address from the command line and convert it to an array of uint8_t values to use in a struct. I cannot get it to work. I have a vector of strings for the MAC address, split on ':', and I want to use a stringstream to convert them, with no luck. What am I missing?
int parseHex(const string &num){
stringstream ss(num);
ss << std::hex;
int n;
ss >> n;
return n;
}
uint8_t tgt_mac[6] = {0, 0, 0, 0, 0, 0};
v = StringSplit( mac , ":" );
for( int j = 0 ; j < v.size() ; j++ ){
tgt_mac[j] = parseHex(v.at(j));
}
I hate to answer this in this fashion, but sscanf() is probably the most succinct way to parse out a MAC address. It handles zero/non-zero padding, width checking, case folding, and all of that other stuff that no one likes to deal with. Anyway, here's my not so C++ version:
void
parse_mac(std::vector<uint8_t>& out, std::string const& in) {
unsigned int bytes[6];
if (std::sscanf(in.c_str(),
"%02x:%02x:%02x:%02x:%02x:%02x",
&bytes[0], &bytes[1], &bytes[2],
&bytes[3], &bytes[4], &bytes[5]) != 6)
{
throw std::runtime_error(in+std::string(" is an invalid MAC address"));
}
out.assign(&bytes[0], &bytes[6]);
}
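A quick usage sketch for that helper (the MAC string is just an example value, and parse_mac is the function defined above):
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <stdexcept>
#include <string>
#include <vector>
int main()
{
    std::vector<uint8_t> mac;
    try {
        parse_mac(mac, "AA:BB:CC:DD:EE:11");   // helper defined above
    } catch (const std::runtime_error& e) {
        std::fprintf(stderr, "%s\n", e.what());
        return 1;
    }
    // Print the parsed bytes back out, colon-separated
    for (std::size_t i = 0; i < mac.size(); ++i)
        std::printf("%02x%c", static_cast<unsigned>(mac[i]), i + 1 < mac.size() ? ':' : '\n');
    return 0;
}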
Your problem may be in the output of the parsed data. The "<<" operator makes decisions on how to display data based on the data type passed to it, and uint8_t may be getting interpreted as a char. Make sure you cast the array values to ints when printing, or investigate in a debugger.
The sample program:
uint8_t tgt_mac[6] = {0};
std::stringstream ss( "AA:BB:CC:DD:EE:11" );
char trash;
for ( int i = 0; i < 6; i++ )
{
int foo;
ss >> std::hex >> foo >> trash;
tgt_mac[i] = foo;
std::cout << std::hex << "Reading: " << foo << std::endl;
}
std::cout << "As int array: " << std::hex
<< (int) tgt_mac[0]
<< ":"
<< (int) tgt_mac[1]
<< ":"
<< (int) tgt_mac[2]
<< ":"
<< (int) tgt_mac[3]
<< ":"
<< (int) tgt_mac[4]
<< ":"
<< (int) tgt_mac[5]
<< std::endl;
std::cout << "As unint8_t array: " << std::hex
<< tgt_mac[0]
<< ":"
<< tgt_mac[1]
<< ":"
<< tgt_mac[2]
<< ":"
<< tgt_mac[3]
<< ":"
<< tgt_mac[4]
<< ":"
<< tgt_mac[5]
<< std::endl;
Gives the following output (cygwin g++):
Reading: aa
Reading: bb
Reading: cc
Reading: dd
Reading: ee
Reading: 11
As int array: aa:bb:cc:dd:ee:11
As unint8_t array: ª:»:I:Y:î:◄
First I want to point out that I think #Steven's answer is a very good one - indeed I noticed the same: the values are correct, but the output looks awkward. This is due to ostream& operator<<( ostream&, unsigned char ) being used, since the uint8_t type you used is a typedef for unsigned char (as I found in the linux man pages). Note that on VC++, the typedef isn't there, and you have to use unsigned __int8 instead (which will also route you to the char specialization).
Next, you can test your code like this (output-independent):
assert( uint8_t( parseHex( "00" ) ) == uint8_t(0) );
assert( uint8_t( parseHex( "01" ) ) == uint8_t(1) );
//...
assert( uint8_t( parseHex( "ff" ) ) == uint8_t(255) );
In addition to Steven's answer, I just want to point out the existence of the transform algorithm, which could still simplify your code.
for( int j = 0 ; j < v.size() ; j++ ){
tgt_mac[j] = parseHex(v.at(j));
}
can be written in one line:
std::transform( v.begin(), v.end(), tgt_mac, &parseHex );
(And I know that hasn't to do with the question...)
(See codepad.org for what it then looks like)
I think you are using the std::hex in the wrong place:
#include <sstream>
#include <iostream>
int main()
{
std::string h("a5");
std::stringstream s(h);
int x;
s >> std::hex >> x;
std::cout << "X(" << x << ")\n";
}