I have two values, 0 and 30, and I need to store the binary representation of each in one byte. Like:
byte 0 = 00000000
byte 1 = 00011110
and then concatenate them into a string that will print the ASCII characters for 0 (NUL) and for 30 (Record Separator). So, not print "030", but something I can't really write here and that the console can't print properly either. I know those are not nice things to print.
I was doing like this:
string final_message = static_cast<unsigned char>(bitset<8>(0).to_ulong());
final_message += static_cast<unsigned char>((bitset<8>(answer.size())).to_ulong()); // where answer.size() = 30
cout << final_message << endl;
Not sure if it's right; I had never worked with bitset until now. I think it's right, but the server that receives my messages keeps telling me that the numbers are wrong. I'm pretty sure that the numbers I need are 0 and 30, in that order, so, since the only part I'm not sure about is those three lines, I'm putting this question here.
Are those three lines right? Is there a better way to do that?
A byte (or a char) holds a single 8-bit value, and the value is the same whether you "view" it in a binary format, in a decimal format, or as a character to be printed on the console. It's just the way you look at it.
See the following example. The first two variables, byte1 and byte2, are the ones referred to in your question. Unfortunately, you won't see much on the console.
Therefore I added another example, which illustrates three ways of viewing the same value, 65. Hope it helps.
#include <iostream>

int main() {
    char byte1 = 0b00000000;
    char byte2 = 0b00011110;
    std::cout << "byte1 as 'int value': " << (int)byte1 << "; and as character: " << byte1 << std::endl;
    std::cout << "byte2 as 'int value': " << (int)byte2 << "; and as character: " << byte2 << std::endl;

    char a1 = 65;
    char a2 = 'A';
    char a3 = 0b01000001;
    std::cout << "a1 as 'int value': " << (int)a1 << "; and as character: " << a1 << std::endl;
    std::cout << "a2 as 'int value': " << (int)a2 << "; and as character: " << a2 << std::endl;
    std::cout << "a3 as 'int value': " << (int)a3 << "; and as character: " << a3 << std::endl;
    return 0;
}
Output:
byte1 as 'int value': 0; and as character:
byte2 as 'int value': 30; and as character:
a1 as 'int value': 65; and as character: A
a2 as 'int value': 65; and as character: A
a3 as 'int value': 65; and as character: A
The line
string final_message = static_cast<unsigned char>(bitset<8>(0).to_ulong());
does not compile. And obviously, there is no need for bitset here; you are essentially just adding extra conversions along the way.
If I split the line above into two and use +=, the resulting string has a size of 2 and contains characters with values 0 and 30 (I have inspected it using the debugger).
So I don't know what your problem is, as it appears to do what you want...
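For reference, here is a minimal sketch of that two-line version (answer_size stands in for answer.size(), which isn't shown in the question):
#include <cstddef>
#include <iostream>
#include <string>

int main() {
    std::size_t answer_size = 30; // stand-in for answer.size()

    // No bitset needed: a char already holds the 8-bit value.
    std::string final_message;
    final_message += static_cast<char>(0);
    final_message += static_cast<char>(answer_size);

    std::cout << "size: " << final_message.size() << std::endl;   // 2
    std::cout << "values: " << (int)final_message[0] << " "
              << (int)final_message[1] << std::endl;              // 0 30
    return 0;
}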
Related
I'm currently writing some tests for an MD5 hash generating function. The functions returns an unsigned char*. I have a reference sample to compare to hard coded into the test. From my research it appears that memcmp is the correct way to go, however I am having issues with the results.
When printed to the terminal they match; however, memcmp is returning a non-zero result (no match).
Code sample:
unsigned char ref_digest[] = "d41d8cd98f00b204e9800998ecf8427e";
unsigned char *calculated_digest = md5_gen_ctx.get_digest();
std::cout << std::setfill('0') << std::setw(2) << std::hex << ref_digest << endl;
for(int i = 0; i < MD5_DIGEST_LENGTH; i++) {
std::cout << std::setfill('0') << std::setw(2) << std::hex << static_cast<int>(calculated_digest[i]);
}
cout << endl;
int compare = std::memcmp(calculated_digest, ref_digest , MD5_DIGEST_LENGTH);
cout << "Comparison result: " << compare << endl;
OUTPUT
2: Test timeout computed to be: 10000000
2: d41d8cd98f00b204e9800998ecf8427e
2: d41d8cd98f00b204e9800998ecf8427e
2: Comparison result: 70
Can anyone guide me as to what I am doing incorrectly here? I am wondering if there are issues with the definition of my reference hash. Is there a better way of managing this comparison for the test?
Cheers.
This is wrong:
unsigned char ref_digest[] = "d41d8cd98f00b204e9800998ecf8427e";
That is a string of 32 characters, when what you want is an array of 16 bytes. Note that two hexadecimal characters (4+4 bits) corresponds to one byte.
To fix it, you can use a pair of 64-bit integers:
uint64_t ref_digest[] = {htobe64(0xd41d8cd98f00b204), htobe64(0xe9800998ecf8427e)};
I used htobe64() to put the bytes in the correct order, e.g. 0xd4 needs to be the first byte.
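If you would rather keep the reference as raw bytes, here is a sketch of the same digest written out byte by byte (assuming MD5_DIGEST_LENGTH is 16):
// One byte per pair of hex digits in "d41d8cd98f00b204e9800998ecf8427e"
unsigned char ref_digest[16] = {
    0xd4, 0x1d, 0x8c, 0xd9, 0x8f, 0x00, 0xb2, 0x04,
    0xe9, 0x80, 0x09, 0x98, 0xec, 0xf8, 0x42, 0x7e
};
Then std::memcmp(calculated_digest, ref_digest, 16) compares 16 bytes against 16 bytes.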
I'm doing a college assignment: a conversion of hex numbers held in a stringstream. I have a big hex number (a private key), and I need to convert it to int to put it in a map<int,int>.
So when I run the code, the result of the conversion is the same for both hex values inserted, which is incorrect; the results should differ after conversion. I think it's a problem with integer sizes, because when I insert short hex values it works fine. As shown below, the hex number has 64 digits.
Any idea how to get it working?
#include <cstdint>
#include <iostream>
#include <sstream>

int main()
{
    unsigned int x;
    std::stringstream ss;

    ss << std::hex << "0x3B29786B4F7E78255E9F965456A6D989A4EC37BC4477A934C52F39ECFD574444";
    ss >> x;
    std::cout << "Saida" << x << std::endl;
    // output it as a signed type
    std::cout << "Result 1: " << static_cast<std::int64_t>(x) << std::endl;

    ss << std::hex << "0x3C29786A4F7E78255E9A965456A6D989A4EC37BC4477A934C52F39ECFD573344";
    ss >> x;
    std::cout << "Saida 2 " << x << std::endl;
    // output it as a signed type
    std::cout << "Result 2: " << static_cast<std::int64_t>(x) << std::endl;
}
Firstly, the HEX numbers in your examples do not fit into an unsigned int.
You should clear the stream before loading the second HEX number there.
...
std::cout << "Result 1: " << static_cast<std::int64_t>(x) << std::endl;
ss.clear();
ss << std::hex << "0x3C29786A4F7E78255E9A965456A6D989A4EC37BC4477A934C52F39ECFD573344";
ss >> x;
...
Each hexadecimal digit equates to 4 bits (0xf -> 1111b). Those hex strings are both 64 x 4 = 256 bits long. You're looking at a range error.
You need to process the input 16 characters at a time. Each character is 4 bits, so the first 16 characters will give you an unsigned 64-bit value (16 x 4 is 64).
Then you can put the first value in a vector or other container and move on to the next 16 characters. If you have questions about string manipulation, search this site for similar questions.
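For example, here is a minimal sketch along those lines, assuming the "0x" prefix has been stripped and the string length is a multiple of 16:
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // 64 hex digits = 256 bits; no single built-in integer type holds that.
    std::string key = "3B29786B4F7E78255E9F965456A6D989"
                      "A4EC37BC4477A934C52F39ECFD574444";

    // Parse 16 hex digits (64 bits) at a time into uint64_t chunks.
    std::vector<std::uint64_t> parts;
    for (std::size_t i = 0; i < key.size(); i += 16) {
        parts.push_back(std::stoull(key.substr(i, 16), nullptr, 16));
    }

    for (std::uint64_t p : parts) {
        std::cout << std::hex << p << std::endl;
    }
    return 0;
}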
I am trying to print these data types, but I get very strange output instead of what I expect.
#include <iostream>
using namespace std;
int main(void)
{
    char data1 = 0x11;
    int data2 = 0XFFFFEEEE;
    char data3 = 0x22;
    short data4 = 0xABCD;

    cout << data1 << endl;
    cout << data2 << endl;
    cout << data3 << endl;
    cout << data4 << endl;
}
Most likely you expect data1 and data3 to be printed as some kind of numbers. However, their data type is char, which is why C++ (or C) interprets them as characters, mapping 0x11 to the corresponding ASCII character (a control character), and similarly for 0x22, which maps to some other character (see an ASCII table).
If you want to print those characters as numbers, you need to convert them to int before printing them, like so (works in both C and C++):
cout << (int)data1 << endl;
Or more C++ style would be:
cout << static_cast<int>(data1) << endl;
If you want to display the numbers in hexadecimal, you need to change the default output base using the hex IO manipulator. Afterwards all output is done in hexadecimal. If you want to switch back to decimal output, use dec. See cppreference.com for details.
cout << hex << static_cast<int>(data1) << endl;
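Putting it together, a minimal sketch that prints all four variables in hexadecimal (the values of data2 and data4 depend on implementation-defined narrowing in the assignments, but on common platforms the output is as shown in the comments):
#include <iostream>
using namespace std;

int main() {
    char data1 = 0x11;
    int data2 = 0xFFFFEEEE;
    char data3 = 0x22;
    short data4 = 0xABCD;

    cout << hex;                              // switch the stream to hexadecimal output
    cout << static_cast<int>(data1) << endl;  // 11
    cout << data2 << endl;                    // ffffeeee
    cout << static_cast<int>(data3) << endl;  // 22
    cout << data4 << endl;                    // abcd
    return 0;
}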
The code asks for a positive integer, then the first output shows the corresponding ASCII character; the others convert the integer to its decimal, octal, and hexadecimal equivalents. I understand the logic of the code, but I don't understand the assignment c = code followed later by the assignment code = c. What happens in the background when we 'swap' the two variables?
#include <iostream>
#include <iomanip>
using namespace std;
int main() {
    unsigned char c = 0;
    unsigned int code = 0;

    cout << "\nPlease enter a decimal character code: ";
    cin >> code;
    c = code;
    cout << "\nThe corresponding character: " << c << endl;

    code = c;
    cout << "\nCharacter codes"
         << "\n decimal: " << setw(3) << dec << code
         << "\n octal: " << setw(3) << oct << code
         << "\n hexadecimal: " << setw(3) << hex << code
         << endl;

    return 0;
}
I could be wrong here so maybe someone else can weigh in, but I believe I know the answer.
If you assign a number to a char, printing that char prints the corresponding character. Since c is of type char, the line c = code converts the integer entered into a character. You can test this yourself by assigning any int to a char variable and printing it out.
The second assignment, code = c, seems to be completely unnecessary.
That's not a swap. c is assigned the same value as code and then this value is assigned back to code. The original value of code is lost.
We can see this because unsigned char c is (usually) much smaller than unsigned int code and some information may be lost stuffing the value in code into c.
For example, say code = 257. After c = code;, code is still 257 and c, assuming an 8-bit char, will be 1. After code = c;, both code and c will be 1; 257 has been lost.
Why is this being done? When given a char, operator<< will print out the character it encodes, completely ignoring the request to print as hex, dec, or oct. So
<< "\n decimal: " << setw(3) << dec << c
is wasted. Given an int, << will respect the modifiers, but if c and code had different values, you'd be comparing apples and Sasquatches.
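To make that concrete, here is a minimal sketch of the 257 example from above, assuming an 8-bit char:
#include <iostream>
using namespace std;

int main() {
    unsigned int code = 257;
    unsigned char c = code;   // narrowed: 257 % 256 == 1
    code = c;                 // the original 257 is gone

    cout << "c as character:  " << c << endl;            // the (unprintable) character with code 1
    cout << "code in decimal: " << dec << code << endl;  // 1
    cout << "code in hex:     " << hex << code << endl;  // 1
    return 0;
}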
I am confused by the output of the following code:
uint8_t x = 0, y = 0x4a;
std::stringstream ss;
std::string a = "4a";
ss << std::hex << a;
ss >> x;
std::cout << (int)x << " " << (int)y << std::endl;
std::cout << x << " " << y << std::endl;
std::cout << std::hex << (int)x << " " << (int)y << std::endl;
uint8_t z(x);
std::cout << z;
the output for the above is:
52 74
4 J
34 4a
4
and when we replace the first line with:
uint16_t x = 0, y = 0x4a;
the output turns into:
74 74
74 74
4a 4a
J
I think I understand what happens, but I don't understand why it happens or how I can prevent it or work around it. From my understanding, the std::hex modifier is somehow undermined because of the type of x; that may not be exactly true at a technical level, but it simply writes the first character it reads.
Background: The input is supposed to be a string of hexadecimal digits, each pair representing a byte (just like a bitmap, except as a string). I want to be able to read each byte and store it in a uint8_t, so I was experimenting with that when I came across this problem. I still can't determine the best method for this, so if you think what I'm doing is inefficient or unnecessary, I would appreciate knowing why. Thank you for reading.
ss >> x
is treating uint8_t x as an unsigned char. The ASCII value of '4' is 52 (decimal). It's reading the first character of the string "4a" into x as if x were a character. When you switch to uint16_t, it's treated as an unsigned short integer type. The same goes for y.
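One possible workaround (a sketch, not the only option) is to extract into a wider unsigned type and then narrow to uint8_t:
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::string a = "4a";
    std::stringstream ss;
    ss << std::hex << a;

    // Extract into a wider unsigned type; extracting directly into a
    // uint8_t would treat it as a character rather than a number.
    unsigned int tmp = 0;
    ss >> tmp;

    std::uint8_t x = static_cast<std::uint8_t>(tmp);
    std::cout << std::hex << static_cast<int>(x) << std::endl; // 4a
    return 0;
}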