I don't see an option for this in things like sprintf().
How would I convert the hex string "FF" to 255? Basically the reverse of a conversion using the %x format in sprintf?
I am assuming this is something simple I'm missing.
#include <cstdlib> // for strtol

char const* data = "FF";
int num = int(strtol(data, 0, 16)); // num == 255
Look up strtol and boost::lexical_cast for more details and options.
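If C++11 is available, std::stoi with an explicit base does the same job; a minimal sketch:
#include <iostream>
#include <string>

int main()
{
    int num = std::stoi("FF", nullptr, 16); // base 16; num == 255
    std::cout << num << std::endl;          // std::stoi throws std::invalid_argument on garbage input
}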
Use the %x format in sscanf!
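For example, a minimal sketch of that approach (note that %x wants an unsigned int):
#include <cstdio>

int main()
{
    unsigned int value = 0;
    if (std::sscanf("FF", "%x", &value) == 1) // %x parses hex text into an unsigned int
        std::printf("%u\n", value);           // prints 255
}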
The C++ way of doing it, with streams:
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::string hexvalue = "FF";
    int value;
    // Construct an input stringstream, initialized with hexvalue
    std::istringstream iss(hexvalue);
    // Set the stream in hex mode, then read the value, with error handling
    if (iss >> std::hex >> value) std::cout << value << std::endl;
    else std::cout << "Conversion failed" << std::endl;
}
The program prints 255.
You can't get (s)printf to convert 'F' to 255 without some black magic. Printf will convert a character to other representations, but won't change its value. This might show how character conversion works:
printf("Char %c is decimal %i (0x%X)\n", 'F', 'F', 'F');
printf("The high order bits are ignored: %d: %X -> %hhX -> %c\n",
0xFFFFFF46, 0xFFFFFF46, 0xFFFFFF46, 0xFFFFFF46);
produces
Char F is decimal 70 (0x46)
The high order bits are ignored: -186: FFFFFF46 -> 46 -> F
Yeah, I know you asked about sprintf, but that won't show you anything until you do another print.
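For instance, a minimal sketch of that two-step round trip (the buffer name is mine):
#include <cstdio>

int main()
{
    char buf[16];
    std::sprintf(buf, "%X", (unsigned)'F'); // buf now holds the text "46": the value of 'F', in hex
    std::printf("%s\n", buf);               // nothing is visible until this second print
}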
The idea is that each generic integer parameter to a printf is put on the stack (or in a register) by promotion. That means it is expanded to its largest generic size: bytes, characters, and shorts are converted to int by sign-extending or zero-padding. This keeps the parameter list on the stack in a sensible state. It's a nice convention, but it probably had its origin in the 16-bit word orientation of the stack on the PDP-11 (where it all started).
In the printf library (on the receiving end of the call), the code uses the format specifier to determine what part of the parameter (or all of it) is processed. So if the format is '%c', only 8 bits are used. Note that there may be some variation between systems in how the hex constants are promoted. But if a value greater than 255 is passed to a character conversion, the high-order bits are ignored.
How to send data in hex on SerialPort?
I used this function; I receive the "Yes, I can write to port!" message, but I do not receive the data I entered:
QByteArray send_data;

if (serialPort->isWritable())
{
    qDebug() << "Yes, I can write to port!";
    int size = sizeof(send_data);
    serialPort->write(send_data, size);
}

send_data += static_cast<char>(0xAA);
serialPort->write(send_data);
Data is transmitted in binary (essentially a sequence of 0s and 1s), no matter what. Showing data as hexadecimal rather than as a string of characters is just a presentation choice.
In the following example, you can see that the array string_c is initialized with the same string you are using in your code. Next, I print the data both as hex and as a string. You can see that the only difference is in the way I decided to print the data; the source data is the same in both cases.
#include <stdio.h>
#include <stdint.h>

void printCharInHexadecimal(const char* str, int len)
{
    for (int i = 0; i < len; ++i) {
        uint8_t val = str[i];
        char tbl[] = "0123456789ABCDEF";
        printf("0x");
        printf("%c", tbl[val / 16]);
        printf("%c", tbl[val % 16]);
        printf(" ");
    }
    printf("\n");
}

int main()
{
    char string_c[] = "Yes, i can write to port";
    // string printed in hex
    printCharInHexadecimal(string_c, 24);
    // same string printed as "text"
    printf("%s\n", string_c);
    return 0;
}
You can see the above code running here: https://onlinegdb.com/Y7fwaMTDoq
Note: I got the function printCharInHexadecimal from here: https://helloacm.com/the-c-function-to-print-a-char-array-string-in-hexadecimal/
As suspected, your use of sizeof is wrong. It is not returning the size of the contained data, it is returning a non-zero constant that is the size of a QByteArray object itself. Since that object was freshly constructed it should be empty, and any size you use in the first write other than zero will lead to undefined behavior. Use:
int size = (int)send_data.size();
Skip the first write entirely, and use the above for your second write.
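Putting both corrections together, a minimal sketch of one working arrangement (assuming serialPort points to an open, writable QSerialPort):
QByteArray send_data;
send_data += static_cast<char>(0xAA); // build the payload *before* writing

if (serialPort->isWritable())
{
    qDebug() << "Yes, I can write to port!";
    serialPort->write(send_data); // QIODevice::write(const QByteArray&) uses the data's real size
}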
You need to be clear about what you expect. 0xAA in your source code is simply an integer value using hex representation. It compiles to exactly the same code regardless of the source representation: 0xAA == 170 == 0252.
If you actually intended to output a string of characters at run time representing a value in hexadecimal, you need to convert that value from an integer to a string. For example:
char hexbyte[3];
sprintf(hexbyte, "%02X", 170);
serialPort->write(hexbyte);
will output the ASCII characters AA, whilst demonstrating the equivalence of 170 to 0xAA. That is, the hex notation in the source does not affect the value or how it is stored or represented in the compiled machine code.
I found an explanation of how to decode hex representations into decimal values, but only using Qt:
How to get decimal value of a unicode character in c++
As I am not using Qt, and cout << (int)c does not work (Edit: it actually does work if you use it properly!):
How to do the following:
I got the hex representation of two chars which were transmitted over a socket (just figured out how to get the hex representation, finally!), and combined they yield the following UTF-16 representation:
char c = u"\0b7f"
This shall be converted into its UTF-16 decimal value of 2943!
(see it in the UTF table: http://www.fileformat.info/info/unicode/char/0b7f/index.htm)
This should be absolutely elementary stuff, but as a designated Python developer compelled to use C++ for a project, I have been stuck on this issue for hours...
Use a wider character type (char is only 8 bits; you need at least 16), and also the correct syntax for a UTF-16 character literal. This works:
#include <iostream>

int main()
{
    char16_t c = u'\u0b7f';
    std::cout << (int)c << std::endl; // output is 2943 as expected
    return 0;
}
I will briefly explain what I want to do, and help is appreciated.
I have a hex number which is formatted as a 16-byte number like this:
std::string myhex = "00000000000000000000000000000FFD";
Then I want to convert it to int, which I think I do successfully using this:
// convert hex to int
unsigned int x = strtoul(myhex.c_str(), NULL, 16);
printf("x = %u\n", x); // prints 4093 as needed
Now I want to convert this integer back to hex, which I think I also do successfully using this:
// Convert int back to hex
char buff[50];
string hexval;
sprintf(buff,"%x",x);
hexval = buff;
cout << hexval.c_str(); // prints "ffd".
But my problem is that now I want to convert the "ffd" string above back to the format it was in before, i.e., a 16-byte number padded with zeros like this:
00000000000000000000000000000FFD
I want to convert the string itself, not only print it.
Any help on how to do this?
Also, corrections are welcome if anything I did above is wrong or not OK.
Preferably, I would like this to compile on Linux as well.
Use the 0 flag (prefix) for zero-padding and field width specification in a printf:
printf("%032X", x);
Use snprintf to store it in your string:
snprintf(buff, sizeof(buff), "%032X", x);
Or use asprintf (a GNU extension, available with glibc on Linux) to store it in a newly-allocated string, to be certain that the memory available for the string is sufficient (since it is allocated by asprintf):
char *as_string = NULL;
asprintf(&as_string, "%032X", x);
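If you would rather stay with C++ streams and std::string (no fixed-size buffers at all), a sketch of the same zero-padding using stream manipulators:
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    unsigned int x = 0xFFD;
    std::ostringstream oss;
    oss << std::uppercase << std::hex << std::setfill('0') << std::setw(32) << x;
    std::string padded = oss.str(); // "00000000000000000000000000000FFD"
    std::cout << padded << std::endl;
}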
I don't know if I have the correct title for this, so please correct me if I am wrong and I will change the title.
I have a string, for this example I will use:
"8ce4b16b"
I would like to shift the bits (I think) along 1 so the string would be:
"9df5c27c"
Any Ideas?
EDIT:
Just so you know, these strings are hex. So it will never reach z.
All I want to do is add one to each digit and progress one step through the alphabet, so a->b, f->g, etc.
If the digit is 9 there will be a condition to keep it as 9.
The output DOES NOT need to be hex.
Also, the string is only an example. It is part of an MD5 hash.
Transform a string? This sounds like a job for std::transform():
#include <algorithm>
#include <cassert>
#include <iterator>
#include <string>

char increment(char c)
{
    if ('9' == c)
    {
        return '9';
    }
    return ++c;
}

std::string increment_string(const std::string& in)
{
    std::string out;
    std::transform(in.begin(), in.end(), std::back_inserter(out), increment);
    return out;
}

int main()
{
    assert(increment_string("8ce4b16b") == "9df5c27c");
    assert(increment_string("ffffffff") == "gggggggg");
    assert(increment_string("89898989") == "99999999"); // N.B.: this is one of 2^8 strings that will return "99999999"
    assert(increment_string("99999999") == "99999999"); // This is one more. Mapping backwards is going to be tricky!
    return 1;
}
Any limits you wish to impose on the characters can be implemented in the increment() function, as demonstrated.
If, on the other hand, you wish to treat the string as a hexadecimal number and add 0x11111111 to it:
#include <cassert>
#include <sstream>

int main()
{
    std::istringstream iss("8ce4b16b");
    long int i;
    iss >> std::hex >> i;
    i += 0x11111111;
    std::ostringstream oss;
    oss << std::hex << i;
    assert(oss.str() == "9df5c27c");
    return 1;
}
No bits were shifted in the construction of this string.
It looks like you simply added 0x11111111 to the integer. But can you specify precisely what type your input has? And what should the result be when you add one to "f" or "9"?
That's not shifting the bits ... shifting a bit multiplies a word value by 2. You're simply incrementing each hex value by 1, and that can be done by adding 0x11111111 to your dword.
For instance, if you took your value 0x8ce4b16b (that would be treating the values you printed above as-if they were a 4-byte double-word in hexadecimal), shifting it by one bit, you would end up with 0x19C962D6.
But if you simply want to increment each nibble of your dword (each individual digit in a hex number represents 4 bits, or a nibble), you're going to have to add an offset of 0x1 to each nibble. Also, there is no value of G in a hex word ... you have the values 0->9, and then A->F, where F represents the base-10 value 15. Finally, when you add 0x1 to 0xF, you're going to wrap around to 0x0.
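To make the distinction concrete, a small sketch contrasting a real one-bit shift with the nibble-wise increment, using the values from this discussion:
#include <cinttypes>
#include <cstdint>
#include <cstdio>

int main()
{
    uint32_t v = 0x8ce4b16bu;
    std::printf("%" PRIX32 "\n", v << 1);          // 19C962D6: an actual one-bit shift (doubles the value)
    std::printf("%" PRIX32 "\n", v + 0x11111111u); // 9DF5C27C: adds 1 to every nibble (no carries for this input)
}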
Do you mean you want to increment each character in the string?
You can do that by iterating through the string and adding one to each character.
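A minimal sketch of that loop (without the '9' and wrap-around handling discussed above):
#include <iostream>
#include <string>

int main()
{
    std::string s = "8ce4b16b";
    for (char& ch : s)
        ++ch; // naive: no special-casing, so '9' would become ':' and 'f' would become 'g'
    std::cout << s << std::endl; // prints "9df5c27c" for this input
}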
I am reading a string of data, which may or may not contain Unicode characters, from an Oracle database into a C++ program. Is there any way to check whether the string extracted from the database contains Unicode (UTF-8) characters? If any Unicode characters are present, they should be converted into hexadecimal format and displayed.
There are two aspects to this question.
Distinguish UTF-8-encoded characters from ordinary ASCII characters.
UTF-8 encodes any code point higher than 127 as a series of two or more bytes. Values at 127 and lower remain untouched. The resultant bytes from the encoding are also higher than 127, so it is sufficient to check a byte's high bit to see whether it qualifies.
Display the encoded characters in hexadecimal.
C++ has std::hex to tell streams to format numeric values in hexadecimal. You can use std::showbase to make the output look pretty. A char isn't treated as numeric, though; streams will just print the character. You'll have to force the value to another numeric type, such as int. Beware of sign-extension, though.
Here's some code to demonstrate:
#include <iostream>

void print_characters(char const* s)
{
    std::cout << std::showbase << std::hex;
    for (char const* pc = s; *pc; ++pc) {
        if (*pc & 0x80)
            std::cout << (*pc & 0xff);
        else
            std::cout << *pc;
        std::cout << ' ';
    }
    std::cout << std::endl;
}
You could call it like this:
int main()
{
    char const* test = "ab\xef\xbb\xbfhu";
    print_characters(test);
    return 0;
}
Output on Solaris 10 with Sun C++ 5.8:
$ ./a.out
a b 0xef 0xbb 0xbf h u
The code detects UTF-8-encoded characters, but it makes no effort to decode them; you didn't mention needing to do that.
I used *pc & 0xff to convert the expression to an integral type and to mask out the sign-extended bits. Without that, the output on my computer was 0xffffffbb, for instance.
I would convert the string to UTF-32 (you can use something like UTF CPP for that - it is very easy), and then loop through the resulting string, detect code points (characters) that are above 0x7F and print them as hex.
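For example, a sketch along those lines using the header-only UTF8-CPP library (utf8.h, https://github.com/nemtrif/utfcpp); utf8::utf8to32 and its iterator-based signature are taken from that library's documentation, so treat the details as an assumption:
#include <cstdint>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

#include "utf8.h" // UTF8-CPP, header-only

int main()
{
    std::string input = "ab\xef\xbb\xbfhu"; // same test data as above
    std::vector<uint32_t> utf32;
    utf8::utf8to32(input.begin(), input.end(), std::back_inserter(utf32));

    std::cout << std::showbase << std::hex;
    for (uint32_t cp : utf32) {
        if (cp > 0x7F)
            std::cout << cp << ' ';       // non-ASCII code point, shown in hex
        else
            std::cout << char(cp) << ' '; // plain ASCII, shown as the character
    }
    std::cout << std::endl;
}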