I have char byte[0] = '1' (0x31) and byte[1] = 'C' (0x43).
I am using another buffer, char hex_buff[0], and I want it to hold the hex content hex_buff[0] = 0x1C (i.e. the combination of byte[0] and byte[1]).
I was using the code below, but I realized it is only valid for the hex digits 0-9:
char s_nibble1 = (byte[0]<< 4)& 0xf0;
char s_nibble2 = byte[1]& 0x0f;
hex_buff[0] = s_nibble1 | s_nibble2; // here I want to have 0x1C instead of 0x13
What keeps you from using strtol()?
char bytes[] = "1C";
char buff[1];
buff[0] = strtol(bytes, NULL, 16); /* Sets buff[0] to 0x1c aka 28. */
To add to this, per chux's comment: strtol() only operates on 0-terminated character arrays, which is not necessarily the case in the OP's question.
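If the two characters are not 0-terminated (e.g. they sit in the middle of a larger buffer), one option is to copy them into a small terminated scratch buffer first. A minimal sketch, assuming the input really is exactly two hex digits (the helper name is mine, not from the answer above):
#include <stdlib.h>
/* Hypothetical helper: copies the two digits into a 0-terminated
   scratch buffer so strtol() can be used safely. */
unsigned char pair_to_byte(char hi, char lo)
{
    char tmp[3] = { hi, lo, '\0' };
    return (unsigned char)strtol(tmp, NULL, 16);
}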
A possible way to do it, without depending on other character-manipulation functions:
char hex2byte(char *hs)
{
    char b = 0;
    char nibbles[2];
    int i;
    for (i = 0; i < 2; i++) {
        if ((hs[i] >= '0') && (hs[i] <= '9'))
            nibbles[i] = hs[i] - '0';
        else if ((hs[i] >= 'A') && (hs[i] <= 'F'))
            nibbles[i] = (hs[i] - 'A') + 10;
        else if ((hs[i] >= 'a') && (hs[i] <= 'f'))
            nibbles[i] = (hs[i] - 'a') + 10;
        else
            return 0; /* not a hex digit */
    }
    b = (nibbles[0] << 4) | nibbles[1];
    return b;
}
For example: hex2byte("a1") returns the byte 0xa1.
In your case, you should call the function as: hex_buff[0] = hex2byte(byte).
You are trying to get the nibble by masking out bits of the character code, rather than subtracting the actual base value. This is not going to work, because the range is disconnected: there is a gap between [0..9] and [A..F] in the encoding, so masking will fail.
You can fix this by adding a small helper function, and using it twice in your code:
int hexDigit(char c) {
    c = toupper(c); // Allow mixed-case letters
    switch(c) {
        case '0':
        case '1':
        case '2':
        case '3':
        case '4':
        case '5':
        case '6':
        case '7':
        case '8':
        case '9': return c - '0';
        case 'A':
        case 'B':
        case 'C':
        case 'D':
        case 'E':
        case 'F': return c - 'A' + 10;
        default:  break; // Report an error
    }
    return -1;
}
Now you can code your conversion like this:
int val = (hexDigit(byte[0]) << 4) | hexDigit(byte[1]);
It looks like you are trying to convert ASCII hex into internal representation.
There are many ways to do this, but the one I use most often for each nibble is:
int nibval(unsigned short x)
{
    if (('0' <= x) && ('9' >= x))
    {
        return x - '0';
    }
    if (('a' <= x) && ('f' >= x))
    {
        return x - ('a' - 10);
    }
    if (('A' <= x) && ('F' >= x))
    {
        return x - ('A' - 10);
    }
    // Invalid input
    return -1;
}
This uses an unsigned short parameter so that it works for single-byte characters as well as wchar_t characters.
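A possible way to combine two nibbles into a byte with this helper (my example, not part of the answer above):
int hexByte(const char *p) // p points at two hex characters, e.g. "1C"
{
    int hi = nibval(p[0]);
    int lo = nibval(p[1]);
    if (hi < 0 || lo < 0)
        return -1; // propagate the error
    return (hi << 4) | lo;
}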
poziomy = char;
pionowy = digit; (no problems with this one)
So I need to convert a char into a digit inside the function, but obviously I cannot do char = int, so I don't know how to pass the converted char on as a digit properly.
I guess I could write two functions, but maybe there is an easier way?
I thought of making a new variable poziomy_c, but I don't know how to pass it to Ruch_gracza().
int Convert_digit (int cyfra)
{
    switch (cyfra)
    {
        case 10: return 0;
        case 9:  return 1;
        case 8:  return 2;
        case 7:  return 3;
        case 6:  return 4;
        case 5:  return 5;
        case 4:  return 6;
        case 3:  return 7;
        case 2:  return 8;
        case 1:  return 9;
    }
    return -1; // avoid falling off the end when no case matches
}
int Convert_letter (char literka)
{
    switch (literka)
    {
        case 'A': return 0;
        case 'B': return 1;
        case 'C': return 2;
        case 'D': return 3;
        case 'E': return 4;
        case 'F': return 5;
        case 'G': return 6;
        case 'H': return 7;
        case 'I': return 8;
        case 'J': return 9;
    }
    return -1; // avoid falling off the end when no case matches
}
void Conwert(int &pionowy, char poziomy)
{
    pionowy = Convert_digit(pionowy);
    int poziomy_c;
    poziomy_c = Convert_letter(poziomy);
}
void Ruch_gracza1 (int plansza[10][10])
{
    int pionowy;
    char poziomy;
    // "enter the coordinates of the square holding the piece you want to move (vertical first, then horizontal)"
    cout << "wprowadz wspolrzedne pola na ktorym lezy pion który chcesz ruszyc ( w pionie , potem w poziomie)" << endl;
    cin >> pionowy >> poziomy;
    Conwert (pionowy, poziomy);
    cout << pionowy << endl;
    cout << poziomy << endl;
}
You can use char arithmetic to make this a whole lot easier. Since 'A' to 'Z' will be contiguous in ASCII/Unicode, you can do literka - 'A' to get how far literka is from A (which is what your switch is doing):
int Convert_letter (char literka) {
    if(!std::isalpha(literka)) { return literka; } // Not a letter
    return std::toupper(literka) - 'A';
}
Or if you want a more robust solution to cover even less common character encodings:
int Convert_letter (char literka) {
    if(!std::isalpha(literka)) { return literka; } // Not a letter
    std::string alphabet = "abcdefghijklmnopqrstuvwxyz";
    return std::distance(std::begin(alphabet),
                         std::find(std::begin(alphabet), std::end(alphabet), literka));
}
Convert_digit will look similar (except with std::isdigit instead of std::isalpha).
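For illustration, a possible Convert_digit in that spirit (my sketch; it assumes the question's mapping 10 -> 0, 9 -> 1, ..., 1 -> 9 and echoes out-of-range values back, mirroring Convert_letter):
int Convert_digit (int cyfra) {
    if(cyfra < 1 || cyfra > 10) { return cyfra; } // Not a valid board coordinate
    return 10 - cyfra; // 10 -> 0, 9 -> 1, ..., 1 -> 9
}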
You can do it like this:
char c = 'B';
int digit = c - 'A';
return digit;
You need some knowledge of the ASCII table and of data types in C++.
Simply put, a char is an integer, typically in the range -128 ... 127. If you declare a char variable named ch like this:
char ch = 'B';
C++ will understand that ch = 66 (look at the ASCII table), so we can apply arithmetic operators to ch just like an integer variable.
ch - 'A'; // result 1, because 'A' = 65
ch - 65;  // same result as ch - 'A'
Finally, you can write your function like this:
int functionChar2Int(char x){
return x - 'A';
}
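For example (my usage sketch, with a hypothetical variable name):
char ch = 'C';
int index = functionChar2Int(ch); // index == 2, because 'C' = 67 and 67 - 65 = 2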
I'm writing a function to convert an integer into a hexadecimal char * on Arduino, but I came across the problem of not being able to convert a String to a char *. Maybe if there were a way to allocate memory dynamically for a char *, I would not need the String class.
char *ToCharHEX(int x)
{
    String s;
    int y = 0;
    int z = 1;
    do
    {
        if (x > 16)
        {
            y = (x - (x % 16)) / 16;
            z = (x - (x % 16));
            x = x - (x - (x % 16));
        }
        else
        {
            y = x;
        }
        switch (y)
        {
            case 0:  s += "0"; continue;
            case 1:  s += "1"; continue;
            case 2:  s += "2"; continue;
            case 3:  s += "3"; continue;
            case 4:  s += "4"; continue;
            case 5:  s += "5"; continue;
            case 6:  s += "6"; continue;
            case 7:  s += "7"; continue;
            case 8:  s += "8"; continue;
            case 9:  s += "9"; continue;
            case 10: s += "A"; continue;
            case 11: s += "B"; continue;
            case 12: s += "C"; continue;
            case 13: s += "D"; continue;
            case 14: s += "E"; continue;
            case 15: s += "F"; continue;
        }
    } while (x > 16 || y * 16 == z);
    char *c;
    s.toCharArray(c, s.length());
    Serial.print(c);
    return c;
}
The toCharArray() function is not converting the string to a char array; Serial.print(c) prints nothing. I do not know what I can do.
Updated: Your Question re: String -> char* conversion:
String.toCharArray(char* buffer, int length) wants a character array buffer and the size of the buffer.
Specifically - your problems here are that:
char* c is a pointer that is never initialized.
length is supposed to be the size of the buffer. The string knows how long it is.
So, a better way to run this would be:
char c[20];
s.toCharArray(c, sizeof(c));
Alternatively, you could initialize c with malloc, but then you'd have to free it later. Using the stack for things like this saves you time and keeps things simple.
Reference: https://www.arduino.cc/en/Reference/StringToCharArray
The intent in your code:
This is basically a duplicate question of: https://stackoverflow.com/a/5703349/1068537
See Nathan's linked answer:
// using an int and a base (hexadecimal):
stringOne = String(45, HEX);
// prints "2d", which is the hexadecimal version of decimal 45:
Serial.println(stringOne);
Unless this code is needed for academic purposes, you should use the mechanisms provided by the standard libraries, and not reinvent the wheel.
String(int, HEX) returns the hex value of the integer you're looking to convert
Serial.print accepts String as an argument
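Putting those two together, a minimal sketch (my example; the function name printAsHex is mine, not from any library):
void printAsHex(int x)
{
    String s = String(x, HEX); // e.g. String(45, HEX) yields "2d"
    Serial.println(s);         // Serial.println accepts a String directly
}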
char* string2char(String& command){ // pass by reference: the returned pointer
                                    // must not outlive the String it points into
    if(command.length() != 0){
        char *p = const_cast<char*>(command.c_str());
        return p;
    }
    return nullptr; // empty string: nothing to point at
}
I am writing a small program to convert the hex representation of a string; it is a kata to improve my skills.
This is what I have come up with:
std::vector<int> decimal( std::string const & s )
{
    auto getint = [](char const k){
        switch(k){
            case 'f': return 15;
            case 'e': return 14;
            case 'd': return 13;
            case 'c': return 12;
            case 'b': return 11;
            case 'a': return 10;
            case '9': return 9;
            case '8': return 8;
            case '7': return 7;
            case '6': return 6;
            case '5': return 5;
            case '4': return 4;
            case '3': return 3;
            case '2': return 2;
            case '1': return 1;
            case '0': return 0;
        }
        return -1; // not a hex digit
    };
    std::vector<int> result;
    for( auto const & k : s )
    {
        result.push_back(getint(k));
    }
    return result;
}
I was wondering if there is another way to do this. I have considered using something like a std::map as well, but I am uncertain which one might be faster. If there is another way to do this, please add it.
Please keep in mind that I am doing this as a code kata to improve my skills and to learn.
Thanks in advance!
To start with, you can probably simplify your logic like so:
auto getint = [](char const k){
    if(k >= 'a' && k <= 'f') return (k - 'a' + 10);
    else if(k >= 'A' && k <= 'F') return (k - 'A' + 10);
    else if(k >= '0' && k <= '9') return (k - '0');
    else return -1;
};
Beyond that, there may exist a Standard Library function that does exactly this, which you might prefer depending on your specific needs.
For the decimal digits it's very easy to convert a character to its digit, as the C++ specification says that the digits must be consecutive in every encoding, with '0' the lowest and '9' the highest. That means you can convert a character to a number by just subtracting '0', as in k - '0'. There is no such requirement for the letters, though; in the most common encoding (ASCII) they happen to be consecutive as well, but that should not be counted on if you want to be portable.
You could also do it using e.g. std::transform and std::back_inserter, so no need for your own loop. Perhaps something like
std::transform(std::begin(s), std::end(s), std::back_inserter(result), getint);
In the getint function you could use e.g. std::isxdigit and std::isdigit to check if the character is a valid hexadecimal or decimal digit, respectively. You should probably be using e.g. std::tolower in case the hexadecimal digits are upper-case.
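Putting those suggestions together, one possible version (my sketch, not the only way to write it):
#include <algorithm>
#include <cctype>
#include <iterator>
#include <string>
#include <vector>

std::vector<int> decimal( std::string const & s )
{
    auto getint = [](unsigned char k){
        if (!std::isxdigit(k))
            return -1;                      // not a hex digit
        if (std::isdigit(k))
            return k - '0';                 // '0'..'9' are guaranteed consecutive
        return std::tolower(k) - 'a' + 10;  // 'a'..'f' (assumes an ASCII-like encoding)
    };
    std::vector<int> result;
    std::transform(std::begin(s), std::end(s), std::back_inserter(result), getint);
    return result;
}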
You can use strtol or strtoll to do most of the heavy lifting of converting from a base-16 string to an integer value.
Then convert back to a regular string using a stringstream object.
// parse hex string with strtol
long value = ::strtol(s.c_str(), nullptr, 16); //not shown - checking for errors. Read the manual page for more info
// convert value back to base-10 string
std::stringstream st;
st << value;
std::string result = st.str();
return result;
My code includes a loop that checks every character of a std::string and assigns it to a char variable, which fails its assertion when -1 >= c >= 255.
It is a method from a JSON parser class that is not mine:
static std::string UnescapeJSONString(const std::string& str)
{
    std::string s = "";
    for (int i = 0; i < str.length(); i++)
    {
        char c = str[i]; // << HERE FAILS WHEN 'É' CHARACTER
        if ((c == '\\') && (i + 1 < str.length()))
        {
            int skip_ahead = 1;
            unsigned int hex;
            std::string hex_str;
            switch (str[i+1])
            {
                case '"' : s.push_back('\"'); break;
                case '\\': s.push_back('\\'); break;
                case '/' : s.push_back('/'); break;
                case 't' : s.push_back('\t'); break;
                case 'n' : s.push_back('\n'); break;
                case 'r' : s.push_back('\r'); break;
                case 'b' : s.push_back('\b'); break;
                case 'f' : s.push_back('\f'); break;
                case 'u' : skip_ahead = 5;
                           hex_str = str.substr(i + 4, 2);
                           hex = (unsigned int)std::strtoul(hex_str.c_str(), nullptr, 16);
                           s.push_back((char)hex);
                           break;
                default: break;
            }
            i += skip_ahead;
        }
        else
            s.push_back(c);
    }
    return Trim(s);
}
How can I assign a Unicode value to a char? In this case the value is É, and the code is not ready to receive such characters.
This is included in a DLL library, and it is giving me this error:
The std::string doesn't use Unicode. This is evident from the fact that it has a c_str method that lets you get a plain char array from the std::string.
Answering your question, your test is wrong:
-1 >= c && c >= 255
It should be:
-1 <= c && c <= 255
But you can't get c anywhere near 255, since char is signed on your platform.
If you want to get 255 out of a char it needs to be
unsigned char
This will not let you reach -1, though.
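A quick way to see the actual ranges on a given platform (my illustration; the values in the comments assume an 8-bit char with the common signed default):
#include <climits>
#include <iostream>

int main()
{
    std::cout << "char: " << CHAR_MIN << " to " << CHAR_MAX << '\n'; // typically -128 to 127
    std::cout << "unsigned char: 0 to " << UCHAR_MAX << '\n';        // typically 0 to 255
}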
Read about char*'s here:
http://www.cplusplus.com/doc/tutorial/variables/
see char arrays here:
http://www.cplusplus.com/doc/tutorial/ntcs/
see std::string here:
http://www.cplusplus.com/reference/string/string/
I'm currently reverse engineering a network protocol, and I wrote a small decryption tool.
I used to define the bytes of the packet in an unsigned character array, like so:
unsigned char buff[] = "\x00\xFF\x0A" etc.
In order not to recompile the program multiple times per packet, I made a small GUI tool that gets the bytes in \xFF notation from a string. I did it the following way:
int length = int(stencString.length());
unsigned char *buff = new unsigned char[length+1];
memcpy(buff, stencString.c_str(), length+1);
When I call my function, it gives me a proper decryption when I hardcode the buffer using the prior method, but it gives me garbage followed by the rest of my string when I memcpy from the string to the array. The creepy part? They both have the same print output!
Here's how I'm using it:
http://pastie.org/private/kndfbaqgvmjiuwlounss9g
Here's kdxalgo.h (c) Luigi Auriemma:
http://pastie.org/private/7dzemmwyyqtngiamlxy8tw
Can someone point me in the right direction?
Thanks!
See what happens when you use the following for the hardcoded version of buff.
unsigned char buff[] =
"\\xd3\\x8c\\x38\\x6b\\x82\\x4c\\xe1\\x1e"
"\\x6b\\x7a\\xff\\x4c\\x9d\\x73\\xbe\\xab"
"\\x38\\xc7\\xc5\\xb8\\x71\\x8f\\xd5\\xbb"
"\\xfa\\xb9\\xf3\\x7a\\x43\\xdd\\x12\\x41"
"\\x4b\\x01\\xa2\\x59\\x74\\x60\\x1e\\xe0"
"\\x6d\\x68\\x26\\xfa\\x0a\\x63\\xa3\\x88";
I have a suspicion that it will produce the same output as you entering the following: \xd3\x8c\x38\x6b\x82\x4c\xe1\x1e\x6b\x7a\xff\x4c\x9d\x73\xbe\xab\x38\xc7\xc5\xb8\x71\x8f\xd5\xbb\xfa\xb9\xf3\x7a\x43\xdd\x12\x41\x4b\x01\xa2\x59\x74\x60\x1e\xe0\x6d\x68\x26\xfa\x0a\x63\xa3\x88.
The compiler automatically takes "\xd3" and converts it into the expected underlying binary representation. You need to have a method of converting the characters backslash, x, d, 3 into the same binary representation.
If you are certain that you will receive properly formatted input, then the answer isn't too hard:
unsigned char c2h(char ch)
{
    switch (ch)
    {
        case '0': return 0;
        case '1': return 1;
        case '2': return 2;
        case '3': return 3;
        case '4': return 4;
        case '5': return 5;
        case '6': return 6;
        case '7': return 7;
        case '8': return 8;
        case '9': return 9;
        case 'a': return 10;
        case 'b': return 11;
        case 'c': return 12;
        case 'd': return 13;
        case 'e': return 14;
        case 'f': return 15;
    }
    return 0; // unreachable with well-formed input; avoids falling off the end
}
std::string handle_hex(const std::string& str)
{
    std::string result;
    for (size_t index = 0; index < str.length(); index += 4) // each "\xd3" token is 4 chars
    {
        // str[index + 0] is '\\' and str[index + 1] is 'x'
        unsigned char ch = c2h(str[index + 2]) * 16 + c2h(str[index + 3]);
        result.push_back((char)ch);
    }
    return result;
}
Again, this assumes perfect formatting, so there is no error handling. I know that I'll lose some points for this answer because it's not the best way of doing this, but I want to make the algorithm as easy to understand as possible.
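Usage would look something like this (my example):
// The double backslashes make the string contain the literal characters \, x, d, 3, ...
std::string raw = handle_hex("\\xd3\\x8c\\x38"); // raw holds the three bytes 0xd3 0x8c 0x38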
The problem, as Jeffery points out, is that the compiler processes the \xd3 and generates a character with that value, but when you read \xd3 into a string you are actually reading 4 characters: \, x, d and 3.
You will need to read the string, and then parse it into valid contents. For a simple approach, you can change the format so that the input is a space separated sequence of characters encoded as 0xd3 (as this is really simple to parse):
std::string buffer;
std::string input( "0xd3 0x8c 0x38" ); // this would be read
std::istringstream in( input );
in >> std::hex;
std::copy( std::istream_iterator<int>( in ),
           std::istream_iterator<int>(),
           std::back_inserter( buffer ) );
Of course, there is no need to change the format, you can process it. For that you will only need to read one character at a time. When you encounter a \ then read the next character, if it is x then read the next two characters (say ch1 and ch2) and transform them into an integer value:
int value_of_hex( char ch ) {
    if (ch >= '0' && ch <= '9')
        return ch - '0';
    if (tolower(ch) >= 'a' && tolower(ch) <= 'f')
        return 10 + tolower(ch) - 'a';
    // error
    throw std::runtime_error( "Invalid input" );
}
value = value_of_hex( ch1 )*16 + value_of_hex( ch2 );
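For completeness, a sketch of that character-by-character loop (my illustration, building on value_of_hex; not part of the original answer):
std::string parse_escapes( const std::string& input ) {
    std::string buffer;
    for (size_t i = 0; i < input.size(); ++i) {
        if (input[i] == '\\' && i + 3 < input.size() && input[i + 1] == 'x') {
            // "\xd3" in the text becomes one byte in the output
            buffer.push_back( (char)(value_of_hex( input[i + 2] ) * 16
                                     + value_of_hex( input[i + 3] )) );
            i += 3; // skip the characters just consumed
        } else {
            buffer.push_back( input[i] ); // ordinary character: copy as-is
        }
    }
    return buffer;
}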