Not sure how else to put this, so I'll start off with a code snippet and its output:
uint32_t expires;
cout << "Expiration bytes: " << setfill('0') << hex
<< setw(2) << (unsigned short)rec[keyLen+4]
<< setw(2) << (unsigned short)rec[keyLen+5]
<< setw(2) << (unsigned short)rec[keyLen+6]
<< setw(2) << (unsigned short)rec[keyLen+7] << endl;
expires = ntohl(*(uint32_t*)&rec[keyLen+4]);
cout << "Expiration: " << (long)expires << endl;
cout << "Hex: " << hex << expires << endl;
Outputs:
Expiration bytes: 00000258
Expiration: 258
Hex: 258
I can confirm from other parts of the program that examining and outputting the hex representation of bytes works as expected, and that those are indeed the bytes in the byte stream (sent from another application).
Now, I would be able to understand a bit better if expiration just held some nonsense, because that would mean there's some egregious error (probably involving pointers). But this... this is clearly just spitting out the hex value as if it were a decimal, and that's plain wrong.
To make matters more confusing, this works at another point in the program:
fullSize = ntohs(*(uint16_t*)&buff[0]);
With a byte value of 0x0114, fullSize will contain the value 276.
So the question is, what the heck is going on here? How is it possible for an int to be wrong?
hex is sticky, so unless you reset it, cout will continue to output things in hex.
You can reset it by sending std::dec to the stream. Alternatively, you could build a more advanced mechanism that stores the original state and restores it afterwards (sketched below).
cout << "Expiration: " << dec << (long)expires << endl; will output decimal, otherwise the last setting (hex or dec) will still be in effect.
Since you never switch cout back to decimal output, all of your outputs are in hex, even the output of cout << "Expiration: " << (long)expires << endl;.
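For the "more advanced mechanism" mentioned above, here is a minimal RAII sketch (my own illustration, not from the original answers; Boost provides boost::io::ios_flags_saver for the same job):
// Saves the stream's formatting flags on construction and restores
// them on destruction, so hex can't leak out of the enclosing scope.
struct FlagsSaver {
    explicit FlagsSaver(std::ios_base & s) : stream(s), saved(s.flags()) {}
    ~FlagsSaver() { stream.flags(saved); } // restore flags on scope exit
    std::ios_base & stream;
    std::ios_base::fmtflags saved;
};
Used with the question's expires variable:
{
    FlagsSaver guard(cout);
    cout << hex << expires << endl; // hex only inside this scope
} // original flags restored here
cout << expires << endl; // decimal again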
I am working on a very basic program for my Fundamentals I class and I have everything 98% working as intended.
This program takes three assignment names and their grades, averages the grades, and outputs them in a table, but since assignmentName[] is printed on the same line as grade[], it pushes grade[] to the right depending on how many characters the user entered.
Screenshot of the problem
Here is the code I currently have written for the table:
cout << "___________________________\n";
cout << name << "'s Grade Chart\n";
cout << "---------------------------\n";
cout << setprecision(1) << fixed;
cout << "Grade for " << assignmentName[0] << setw(8) << grade[0] << endl;
cout << "Grade for " << assignmentName[1] << setw(8) << grade[1] << endl;
cout << "Grade for " << assignmentName[2] << setw(8) << grade[2] << endl;
cout << "\nYour average grade between those three assignments is: " << setw(1) << avg << endl;`
I commented, "Place another setw(N) where N is a bit bigger than the largest assignmentName before each << assignmentName."
But on second thought it's a bit more fun than that, so I figure a real answer is in order.
First, some reading materials:
Documentation on std::left and std::right
Documentation on std::max
And now on with the show!
First we need to know how big the largest assignment name is.
size_t max = 0;
for (const string & assn: assignmentName)
{
max = std::max(max, assn.length());
// You may need
//max = std::max(max, strlen(assn));
// if you've been forced to resort to barbarism and c-style strings
}
max++; // one extra character just in case we get a really long grade.
Sometimes this can get a lot neater. For example std::max_element can eliminate the need for the loop we used to get the maximum assignment name length. In this case we're looking for the size of the string, not the lexical order of the string, so I think the loop and std::max is a bit easier on the brain.
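For reference, the std::max_element version might look like this (a sketch, assuming <algorithm> is included and assignmentName is non-empty):
auto longest = std::max_element(std::begin(assignmentName), std::end(assignmentName),
    [](const string & a, const string & b)
    { return a.length() < b.length(); }); // compare by length, not lexically
size_t max = longest->length() + 1; // same +1 padding as the loop version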
And now to format: we print the names left-justified and the grades right-justified, with the names padded to max characters and the grades to 8.
cout << "Grade for " << std::left << setw(max) << assignmentName[0]
<< std::right << setw(8) << grade[0] << '\n'
<< "Grade for " << std::left << setw(max) << assignmentName[1]
<< std::right << setw(8) << grade[1] << '\n'
<< "Grade for " << std::left << setw(max) << assignmentName[2]
<< std::right << setw(8) << grade[2] << '\n';
Note it's now one big cout. This was done mostly for demonstration purposes and because I think it looks better. It doesn't really save you much, if anything, in processing time. What does save time is the lack of endls. endl is actually a very expensive operation because not only does it end a line, but it also flushes. It forces whatever has been buffered in the stream out to the underlying media, the console in this case. Computers are at their best when they can avoid actually going out of the computer until they really have to. Drawing to the screen is way more expensive than writing to RAM or a cache, so don't do it until you have to.
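If you do need the output to appear at a specific moment, say right before waiting for input, flush once and explicitly rather than on every line (a quick sketch):
for (int i = 0; i < 1000; ++i)
    cout << "line " << i << '\n'; // '\n' ends the line without flushing
cout << flush; // one flush, only when the output actually has to appear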
Instead of writing:
"Grade for " << assignmentName[x] << setw[y] << grade(z)
Write:
"Grade for " << setw[a] << assignmentName[x] << setw[y] << grade(z)
Where a is greater than x in each case.
Maybe that should fix it.
Your a should be something like 10 or 15 or something. I hope it works after that. Try it.
This is a simple question I think. I'm messing around with the Crypto++ lib and a sample which I found on SO. So I tried to cout the text/ASCII characters (instead of hex) at the "Dump cipher text" part.
This part (original):
//
// Dump Cipher Text
//
std::cout << "Cipher Text (" << ciphertext.size() << " bytes)" << std::endl;
for( int i = 0; i < ciphertext.size(); i++ ) {
std::cout << "0x" << std::hex << (0xFF & static_cast<byte>(ciphertext[i])) << " ";
}
std::cout << std::endl << std::endl;
And I've noticed something odd... the ciphertext.size() returns a different number of bytes after I tried to do this:
My edited part:
//
// Dump Cipher Text
//
std::cout << "Cipher Text (" << ciphertext.size() << " bytes)" << std::endl; // returns 16 bytes.
int num = ciphertext.size(); // num returns 16 here
for (int i = 0; i < ciphertext.size(); i++) {
std::cout << "0x" << std::hex << (0xFF & static_cast<byte>(ciphertext[i])) << " ";
}
std::cout << std::endl << std::endl;
std::cout << "Ciphertext size: " << ciphertext.size() << std::endl; // now it's 10 bytes?
std::cout << "num: " << num; // returns 10 bytes?! what the hell...
std::cout << std::endl << std::endl;
So I bumped into this because I tried to print the ASCII characters instead of the hex bytes, and it worked but I just can't understand why...
Because this:
std::cout << "To Text: ";
for (int x = 0; x < ciphertext.size(); x++)
{
std::cout << ciphertext[x];
}
Prints all the ASCII chars (instead of hex), but... since ciphertext.size() returns 10, it shouldn't print all the chars (because it was first defined as 16, not 10), and yet it prints them all perfectly. I'm really confused here. How can a variable redefine itself, even if I copied it into an int to make sure it doesn't get changed?
std::hex changes the base used to represent the numbers to 16 (hex). All the numbers inserted into std::cout after std::cout << std::hex are represented in base 16.
ciphertext.size() still returns 16 but, because of the std::hex previously sent to output, it is displayed in base 16 and its hex representation is, of course, 10.
Try this:
std::cout << std::dec << "Ciphertext size: " << ciphertext.size() << std::endl;
It sets the base for number representation back to 10, so the value returned by ciphertext.size() is displayed as 16 again.
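To see the stickiness in isolation (a minimal sketch):
std::cout << 16 << std::endl;             // prints 16
std::cout << std::hex << 16 << std::endl; // prints 10
std::cout << 16 << std::endl;             // still prints 10 -- hex is sticky
std::cout << std::dec << 16 << std::endl; // prints 16 again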
Encryption output is binary bytes with values ranging from 0 to 255. ASCII printable characters range from 32 to 126. Perhaps you can see the problem at this point.
Not all byte values are representable in ASCII, or even in extended ASCII or Unicode; that is why hexadecimal and Base64 are used for encrypted output.
See ASCII character values.
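As a reference point, a conventional two-digits-per-byte hex dump of the buffer could look like this (a sketch based on the question's loop; the setw/setfill calls, which need <iomanip>, are my addition):
for (size_t i = 0; i < ciphertext.size(); i++) {
    std::cout << std::hex << std::setw(2) << std::setfill('0')
              << (0xFF & static_cast<unsigned int>(ciphertext[i])) << ' ';
}
std::cout << std::dec << std::endl; // switch back to decimal when done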
I have the following code
cout << setfill('0') << setw(4) << hex << 100 << 100 << std::endl;
The output is:
006464
If I want every number to have width 4, I have to use
cout << setfill('0') << setw(4) << hex << 100 << setw(4) << 100 << std::endl;
But if I want to print every number with hex and setfill('0'), I only need to set setfill('0') and std::hex once.
Is this designed on purpose in C++? What is the intention?
Yes, it is on purpose. The stream operations are internally peppered with resets of the field width, as specified by the standard. I think there's no good answer as to why.
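If you want a width that behaves as if it were sticky, one workaround is a tiny wrapper type (a hypothetical helper of my own, not a standard facility; assumes <iomanip>):
struct Wide {
    int value;
    int width;
};
std::ostream & operator<<(std::ostream & os, Wide w) {
    return os << std::setw(w.width) << w.value; // re-apply the width every time
}
With it, cout << setfill('0') << hex << Wide{100, 4} << Wide{100, 4} << std::endl; prints 00640064.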
I did my googling for this thing, but haven't found the answer.
I want to find an analogue of plain C's output formatting. To be more specific, something which works like printf("%.3x").
Probably, it could be done using manipulators. However, the code
cout << showbase << setfill('0') << setw(5) << hex << 19 << endl;
gives me 00x13 instead of desired 0x013.
P.S. Sorry, I don't have the Boost library, so that is not a solution.
Utilizing internal:
cout << showbase << setw(5) << setfill('0') << internal << hex << 19 << endl; // prints 0x013
Or write the prefix yourself:
cout << "0x" << setfill('0') << setw(3) << hex << 19 << endl; // also prints 0x013
Note that setfill and hex alter the state of the stream for subsequent output as well, unlike setw which only affects the next output.
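A quick demonstration of that difference (a sketch):
cout << setfill('0') << hex;
cout << setw(3) << 19 << endl; // 013 -- the width applies to this output only
cout << 19 << endl;            // 13  -- width is gone, but fill and hex persist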
char buffer[40];
snprintf(buffer, sizeof(buffer), "%.3x", 19); // from <cstdio>; buffer now holds "013"
std::cout << buffer;
In C++ I need string representations of integers with leading zeroes, where the representation has 8 digits and no more than 8 digits, truncating digits on the right side if necessary. I thought I could do this using just ostringstream and iomanip.setw(), like this:
int num_1 = 3000;
ostringstream out_target;
out_target << setw(8) << setfill('0') << num_1;
cout << "field: " << out_target.str() << " vs input: " << num_1 << endl;
The output here is:
field: 00003000 vs input: 3000
Very nice! However if I try a bigger number, setw lets the output grow beyond 8 characters:
int num_2 = 2000000000;
ostringstream out_target;
out_target << setw(8) << setfill('0') << num_2;
cout << "field: " << out_target.str() << " vs input: " << num_2 << endl;
out_target.str("");
output:
field: 2000000000 vs input: 2000000000
The desired output is "20000000". There's nothing stopping me from using a second operation to take only the first 8 characters, but is field truncation truly missing from iomanip? Would the Boost formatting do what I need in one step?
I can't think of any way to truncate a numeric field like that. Perhaps it has not been implemented because it would change the value.
ostream::write() allows you to truncate a string buffer simply enough, as in this example...
int num_2 = 2000000000;
ostringstream out_target;
out_target << setw(8) << setfill('0') << num_2;
cout << "field: ";
cout.write(out_target.str().c_str(), 8);
cout << " vs input: " << num_2 << endl;
snprintf() is guaranteed (since C99) to write as many characters as fit and to null-terminate, so:
char buf[9]; // 8 digits plus the terminating '\0'
snprintf(buf, sizeof(buf), "%08d", num); // truncates to 8 characters and null-terminates
cout << buf << endl;
I am not sure why you want 2 billion to be the same as 20 million. It may make more sense to signal an error on truncation, like this:
if (snprintf(buf, sizeof(buf), "%08d", num) > 8) { // return value is the untruncated length
    throw std::runtime_error("oops"); // std::runtime_error is in <stdexcept>
}