I'm trying to append a single 1 bit (the byte 0x80) to a string; however, it seems the char keeps getting converted to an int. I've tried typecasting it to an unsigned char, but I think it's still being converted.
string output = "abc";
output += 0x80;
for (char c : output) printf("%02x", c);
I keep getting 616263ffffff80, but I'm trying to get 61626380.
The problem is your printf. The %x conversion expects an unsigned int, but your c is a (signed) char. When passed to printf it is promoted to int, and for (char)0x80 that promoted value, read back as an unsigned int, happens to be 0xffffff80.
You can correct this in various ways:
Use unsigned value for your loop variable:
for (unsigned char c : output) printf("%02x", c);
Cast the value to unsigned char:
for (char c : output) printf("%02x", (unsigned char)c);
Use the appropriate format length modifier:
for (char c : output) printf("%02hhx", c);
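Putting it together, here is a minimal compilable sketch of the cast variant (an illustration only, assuming nothing beyond <cstdio> and <string>); it prints 61626380:
#include <cstdio>
#include <string>

int main()
{
    std::string output = "abc";
    output += static_cast<char>(0x80);          // append the 0x80 byte

    // Convert each char to unsigned char before printing so that the
    // integer promotion cannot sign-extend it.
    for (char c : output)
        std::printf("%02x", (unsigned char)c);  // prints 61626380
    std::printf("\n");
    return 0;
}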
I'm trying to display an integer on an LCD display. The way the LCD works is that you send it an 8-bit ASCII character and it displays that character.
The code I have so far is:
unsigned char text[17] = "ABCDEFGHIJKLMNOP";
int32_t n = 123456;
lcd.printInteger(text, n);
//-----------------------------------------
void LCD::printInteger(unsigned char headLine[17], int32_t number)
{
    //......
    int8_t str[17];
    itoa(number, (char*)str, 10);

    for(int i = 0; i < 16; i++)
    {
        if(str[i] == 0x0)
            break;

        this->sendCharacter(str[i]);
        _delay_ms(2);
    }
}

void LCD::sendCharacter(uint8_t character)
{
    //....
    *this->cOutputPort = character;
    //...
}
So if I try to display 123456 on the LCD, it actually displays -7616, which obviously is not the correct integer.
I know there is probably a problem because I convert the characters to signed int8_t and then output them as unsigned uint8_t, but I have to output them in unsigned format. I don't know how to convert the int32_t input integer into an ASCII string of uint8_t.
On your architecture, int is an int16_t, not int32_t. Thus, itoa treats 123456 as -7616, because:
123456 = 0x0001_E240
-7616 = 0xFFFF_E240
They are the same if you truncate them down to 16 bits, which is what your code is doing. Instead of using itoa, you have the following options:
calculate the ASCII representation yourself;
use ltoa(long value, char * buffer, int radix), if available, or
leverage s[n]printf if available.
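For the first option, a rough sketch of a hand-rolled conversion could look like the following (not from the original answer; int32ToAscii is a hypothetical helper, and the output buffer must hold at least 12 bytes):
#include <stdint.h>

// Illustrative helper: converts an int32_t to its decimal ASCII form,
// avoiding itoa entirely. 'out' must hold at least 12 bytes.
static void int32ToAscii(int32_t value, char out[12])
{
    char tmp[12];
    int len = 0;
    // Work on the magnitude; the int64_t cast avoids overflow for INT32_MIN.
    uint32_t mag = (value < 0) ? (uint32_t)(-(int64_t)value) : (uint32_t)value;

    do {                              // collect digits least-significant first
        tmp[len++] = (char)('0' + (mag % 10));
        mag /= 10;
    } while (mag != 0);

    int pos = 0;
    if (value < 0)
        out[pos++] = '-';
    while (len > 0)                   // reverse into the caller's buffer
        out[pos++] = tmp[--len];
    out[pos] = '\0';
}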
For the last option you can use the following, "mostly" portable code:
void LCD::printInteger(unsigned char headLine[17], int32_t number) {
    ...
    char str[17];
    if (sizeof(int) == sizeof(int32_t))
        snprintf(str, sizeof(str), "%d", number);
    else if (sizeof(long int) == sizeof(int32_t))
        snprintf(str, sizeof(str), "%ld", number);
    else if (sizeof(long long int) == sizeof(int32_t))
        snprintf(str, sizeof(str), "%lld", number);
    ...
}
If, and only if, your platform doesn't have snprintf, you can use sprintf and remove the 2nd argument (sizeof(str)). Your go-to function should always be the n variant, as it gives you one less bullet to shoot your foot with :)
Since you're compiling with a C++ compiler that is, I assume, at least half-decent, the above should do "the right thing" in a portable way, without emitting all the unnecessary code. The test conditions passed to if are compile-time constant expressions. Even some fairly old C compilers could deal with such properly.
Nitpick: Don't use int8_t where a char would do. itoa, s[n]printf, etc. expect char buffers, not int8_t buffers.
I don't know how to write code that converts a byte array to a char array in C++ (on an Arduino board) and publishes it over MQTT. I tried searching but I don't understand.
Example
byte Code[3] = {0x00, 0x01, 0x83};
char byteTochar[3];

for (int i = 0; i <= 2; i++) {
    Serial.printf("%d", Code[i]);
    Serial.println();
    client.publish("publish/data", byteTochar[i]);
}
Error message
converting to 'String' from initializer list would use explicit constructor 'String::String(unsigned char, unsigned char)'
Arduino actually does C; see the Arduino Playground: http://playground.arduino.cc/Main/Printf.
However, you can just use casting for each element:
char h = (char)Code[i];
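As a fuller sketch (an illustration only, assuming a PubSubClient-style client that is already connected and its publish(topic, payload, length) overload, so that the leading 0x00 byte is not treated as the end of a C string):
#include <PubSubClient.h>

// Illustrative sketch: assumes a connected PubSubClient instance named 'client'.
extern PubSubClient client;

void publishBytes()
{
    byte Code[3] = {0x00, 0x01, 0x83};
    char byteToChar[3];

    for (int i = 0; i < 3; i++) {
        byteToChar[i] = (char)Code[i];    // plain per-element cast
        Serial.print(Code[i], HEX);       // debug: print each byte in hex
        Serial.println();
    }

    // Publish the whole buffer at once; the explicit length is needed
    // because the first byte is 0x00.
    client.publish("publish/data",
                   (const uint8_t*)byteToChar,
                   sizeof(byteToChar));
}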
char c;
int array[10][10];

while( !plik.eof() )
{
    getline( plik, text );
    int string_l = text.length();
    character_controler = false;

    for(int i = 0; i < string_l; ++i)
    {
        c = text.at(i);
        if(c == ' ') continue;
        else if(character_controler == false)
        {
            array[0][nood] = 0;
            cout << "nood: " << nood << " next nood " << c << endl;
            array[1][nood] = atoi(c); // there is a problem
            character_controler = true;
        }
        else if(c == ',') character_controler = false;
    }
    ++nood;
}
I have no idea why atoi() doesn't work. The compiler error is:
invalid conversion from `char` to `const char*`
I need to convert c into int.
A char is already implicitly convertible to an int:
array[1][nood] = c;
But if you meant to convert the char '0' to the int 0, you'll have to take advantage of the fact that the C++ standard mandates that the digits are contiguous. From [lex.charset]:
In both the source and execution basic character sets, the value of each character after 0 in the above list of decimal digits shall be one greater than the value of the previous.
So you just have to subtract:
array[1][nood] = c - '0';
atoi() expects a const char*, i.e. a C string, but you're passing a single char. Hence the error: const char* is a pointer type, and a char cannot be converted to it.
Looks like you need to convert only one character to a numeric value, and in that case you can replace atoi(c) with c - '0', which will give you a number between 0 and 9. If your file contains hexadecimal digits, however, the logic gets a little more complicated, but not by much.
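For the hexadecimal case, a small sketch could look like this (hexDigitValue is a hypothetical helper, assuming an ASCII-like encoding where 'a'-'f' are contiguous):
#include <cctype>

// Illustrative helper: returns the value of a hex digit, or -1 if 'c'
// is not a hexadecimal digit.
int hexDigitValue(char c)
{
    if (c >= '0' && c <= '9')
        return c - '0';
    c = (char)std::tolower((unsigned char)c);
    if (c >= 'a' && c <= 'f')
        return 10 + (c - 'a');
    return -1;
}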
I've had some trouble with binary-to-printable-hex conversions. I've reached a way of writing the code that works on my system, but I need to know whether it is portable across systems (OS and hardware).
So this is my function (trying to construct a UUID from a piece of binary text):
int extractInfo( unsigned char * text )
{
    char h[3];
    int i;
    int ret;

    this->str.append( "urn:uuid:" );
    for( i = 56; i < 72; i++ )
    {
        ret = snprintf( h, 3, "%02x", text[i] );
        if( ret != 2 )
            return 1;

        this->str.append( h );
        if( i == 59 || i == 61 || i == 63 || i == 65 )
            this->str.append( "-" );
    }
    return 0;
}
I understood that, because of sign extension, my values are not printed correctly if I use char instead of unsigned char (C++ read binary file and convert to hex). Accepted and modified accordingly.
But I've encountered more variants of doing this: Conversion from binary file to hex in C, and I am really lost. In unwind's piece of code:
sprintf(hex, "%02x", (unsigned int) buffer[0] & 0xff);
I did not understand why, although the array is unsigned char (as defined in the code posted by the original asker), a cast to unsigned int is needed, and also a bitwise AND on the byte being converted...
So, since I did not understand the sign-extension issue very well, can you at least tell me whether the piece of code I wrote will work on all systems?
Since printf is not type-safe, it expects an argument of a specific size for each formatting specifier.
That's why you have to cast your character argument to unsigned int when you use a formatting specifier that expects an int-sized type.
The "%x" specifier requires an unsigned int.
I'm working on a homework assignment to print out the big- and little-endian representations of an int and a float. I'm having trouble with the little-endian conversion.
Here's my code:
void convertLitteE(string input)
{
    int theInt;
    stringstream stream(input);

    while(stream >> theInt)
    {
        float f = (float)theInt;
        printf("\n%d\n", theInt);
        printf("int: 0x");
        printLittle((char *) &theInt, sizeof(theInt));
        printf("\nfloat: 0x");
        printLittle((char *) &f, sizeof(f));
        printf("\n\n");
    }
}

void printLittle(char *p, int nBytes)
{
    for (int i = 0; i < nBytes; i++, p++)
    {
        printf("%02X", *p);
    }
}
When the input is 12, I get what I would expect.
output:
int: 0x0C000000
float: 0x00004041
But when the input is 1234:
output:
int: 0xFFFFFFD2040000
float: 0x0040FFFFFF9A44
but I would expect
int: 0xD2040000
float: 0x00409A44
When I step through the for loop I can see where there appears to be a garbage value, and then it prints all the Fs, but I don't know why. I've tried this so many different ways, but I can't get it to work.
Any help would be greatly appreciated.
Apparently on your system, char is a signed 8-bit type. Using unsigned 8-bit bytes, the 4-byte little-endian representation of 1234 would be 0xd2, 0x04, 0x00, 0x00. But when interpreted as a signed char on most systems, 0xd2 becomes -0x2e.
Then the call to printf promotes that char to an int with value -0x2e, and printf (which is not very type-safe) reads an unsigned int where you passed an int. This is undefined behavior, but on most systems it will be the same as a static_cast, so you get the value 0xFFFFFFD2 when trying to print the first byte.
If you stick to using unsigned char instead of char in these functions, you can avoid this particular problem.
(But as #jogojapan pointed out, this entire approach is not portable at all.)
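A minimal sketch of that fix, keeping the rest of the program as posted:
#include <cstdio>

// Take the bytes as unsigned char so the promotion to int cannot sign-extend.
void printLittle(const unsigned char *p, int nBytes)
{
    for (int i = 0; i < nBytes; i++, p++)
        printf("%02X", *p);     // each byte now prints as exactly two hex digits
}

// Call sites then cast to the matching pointer type, e.g.:
//   printLittle((const unsigned char *) &theInt, sizeof(theInt));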