Character array typecasting to integer - C++

I have a char array that holds the value 0x4010, and I want to get that value into an unsigned short variable.
I tried using atoi, but the resulting short is 0:
unsigned short cvtValue = (unsigned short) atoi(aclDta);
The character for 0x10 is DLE (data link escape); I suspect that is why it fails.
In decimal, 0x4010 is 16400.

You don't need to convert the data with atoi, just cast it:
unsigned short cvtValue = *(unsigned short *)aclDta;
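Note that this relies on aclDta being suitably aligned for an unsigned short and on the host byte order matching the data (strictly, it also bends the aliasing rules). A sketch of an alternative that avoids the alignment and aliasing issues, with the same byte-order caveat:
#include <cstring>

unsigned short cvtValue;
std::memcpy(&cvtValue, aclDta, sizeof cvtValue);  // copy 2 bytes; the value still depends on host byte order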

What you're asking doesn't quite make sense. 0x4010 interpreted as ASCII is '@' (0x40) followed by a 'data link escape' (0x10).
atoi, strtol, etc. are all about parsing ASCII strings that contain digits - '@' followed by DLE isn't a number.
What you really seem to want is to treat the 0x4010 bytes as a single short.
Here's a cheap way (cvtValue must start at 0, and the casts through unsigned char avoid sign extension):
unsigned short cvtValue = 0;
cvtValue |= ((unsigned short)(unsigned char)aclDta[0]) << 8;
cvtValue |= ((unsigned short)(unsigned char)aclDta[1]);

I'd comment, but apparently as a new user I can't. Anyway, antiduh's answer is the more correct one if you might ever port your application to platforms with different endianness.
const char *str = "01";
unsigned short val = *(unsigned short *)str;
On little-endian systems val == 0x3130. On big-endian systems val == 0x3031.

Related

memcpy unsigned char to int

I'm trying to read an int value from a file. The trick is that I don't know how many bytes the value occupies, so I first read the length octet, then try to read as many data bytes as the length octet tells me. The problem comes when I try to put the data octets into an int variable and eventually print it: if the first data octet is 0, only the one that comes after it is copied, so the int I read is wrong, since 0x00A2 is not the same as 0xA200. If I use ntohs or ntohl, then 0xA200 is incorrectly decoded as 0x00A2, so that does not solve the whole problem. I am using memcpy like this:
memcpy(&dst, (const void *)src, bytes2read)
where dst is int, src is unsigned char * and bytes2read is a size_t.
So what am I doing wrong? Thank you!
You cannot use memcpy to portably store bytes in an integer, because the order of bytes is not specified by the standard, not to mention possible padding bits. The portable way is to use bitwise operations and shifts:
unsigned char b, len;
unsigned int val = 0;
fdin >> len;                  // read the length octet
if (len > sizeof(val)) {      // ensure the value will fit into an unsigned int
    // process error: cannot fit in an int variable
    ...
}
while (len-- > 0) {           // store and shift one byte at a time
    val <<= 8;                // shift the previous value to leave room for the new byte
    fdin >> b;                // read it
    val |= b;                 // and store it
}
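The same shift-and-or idea works when the bytes are already in memory; a minimal sketch (the function name and buffer are made up for illustration):
#include <cstddef>
#include <cstdint>

// Assemble a big-endian integer of 'len' bytes (len <= 4) from 'bytes'.
uint32_t decode_be(const unsigned char *bytes, std::size_t len)
{
    uint32_t val = 0;
    for (std::size_t i = 0; i < len; ++i)
        val = (val << 8) | bytes[i];   // most significant byte first
    return val;
}
// Example: unsigned char src[] = {0x00, 0xA2};  decode_be(src, 2) == 0x00A2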

Char has incorrect number of bits

I'm currently working on a program in Eclipse that converts to and from base64. However, I've just noticed that char values seem to have 7 bits instead of the usual 8. For example, the character 'o' is shown represented in binary as 1101111 instead of 01101111, which effectively prevents me from completing my project, as I need a total of 24 bits to work with for the conversion. Is there any way to either append a 0 to the beginning of the value (I tried bit-shifting in both directions, but neither worked), or to prevent the issue altogether?
The code for the (incomplete/nonfunctional) offending method is as follows; let me know if more is required:
std::string Encoder::encode( char* src, unsigned char* dest)
{
    char ch0 = src[0];
    char ch1 = src[1];
    char ch2 = src[2];
    char sixBit1 = ch0 >> 1;
    dest[0] = ch2;
    dest[1] = ch1;
    dest[2] = ch0;
    dest[3] = '-';
}
char in C/C++ always has at least 8 bits (CHAR_BIT >= 8); what you're seeing is simply that the leading zero bit isn't printed. The real pitfall is that plain char is signed on most platforms (whether it is signed or unsigned is implementation-defined), so shifting and widening can sign-extend and give surprising results.
Try using unsigned char instead.
Either unsigned char or uint8_t from <stdint.h> should work. For maximum portability, uint_least8_t is guaranteed to exist.
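To illustrate the unsigned char advice, here is a minimal sketch (the function name is made up) of packing three input bytes into a 24-bit group and slicing it into four 6-bit values, which is the core of base64 encoding:
#include <cstdint>

// Pack 3 bytes into 24 bits, then extract four 6-bit values.
void encode_group(const unsigned char src[3], unsigned char sixbit[4])
{
    uint32_t group = (uint32_t(src[0]) << 16) |
                     (uint32_t(src[1]) << 8)  |
                      uint32_t(src[2]);
    sixbit[0] = (group >> 18) & 0x3F;
    sixbit[1] = (group >> 12) & 0x3F;
    sixbit[2] = (group >> 6)  & 0x3F;
    sixbit[3] =  group        & 0x3F;
}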

Displaying integer on an LCD

I'm trying to display an integer on an LCD display. The way the LCD works is that you send it an 8-bit ASCII character and it displays the character.
The code I have so far is:
unsigned char text[17] = "ABCDEFGHIJKLMNOP";
int32_t n = 123456;
lcd.printInteger(text, n);
//-----------------------------------------
void LCD::printInteger(unsigned char headLine[17], int32_t number)
{
    //......
    int8_t str[17];
    itoa(number, (char*)str, 10);
    for(int i = 0; i < 16; i++)
    {
        if(str[i] == 0x0)
            break;
        this->sendCharacter(str[i]);
        _delay_ms(2);
    }
}
void LCD::sendCharacter(uint8_t character)
{
    //....
    *this->cOutputPort = character;
    //...
}
So if I try to display 123456 on the LCD, it actually displays -7616, which obviously is not the correct integer.
I know there is probably a problem because I convert the characters to signed int8_t and then output them as unsigned uint8_t, but I have to output them in unsigned format. I don't know how to convert the int32_t input integer to an ASCII uint8_t string.
On your architecture, int is an int16_t, not int32_t. Thus, itoa treats 123456 as -7616, because:
123456 = 0x0001_E240
-7616 = 0xFFFF_E240
They are the same if you truncate them down to 16 bits - so that's what your code is doing. Instead of using itoa, you have the following options:
calculate the ASCII representation yourself;
use ltoa(long value, char * buffer, int radix), if available, or
leverage s[n]printf if available.
For the last option you can use the following, "mostly" portable code:
void LCD::printInteger(unsigned char headLine[17], int32_t number) {
    ...
    char str[17];
    if (sizeof(int) == sizeof(int32_t))
        snprintf(str, sizeof(str), "%d", number);
    else if (sizeof(long int) == sizeof(int32_t))
        snprintf(str, sizeof(str), "%ld", number);
    else if (sizeof(long long int) == sizeof(int32_t))
        snprintf(str, sizeof(str), "%lld", number);
    ...
}
If, and only if, your platform doesn't have snprintf, you can use sprintf and remove the 2nd argument (sizeof(str)). Your go-to function should always be the n variant, as it gives you one less bullet to shoot your foot with :)
Since you're compiling with a C++ compiler that is, I assume, at least half-decent, the above should do "the right thing" in a portable way, without emitting all the unnecessary code. The test conditions passed to if are compile-time constant expressions. Even some fairly old C compilers could deal with such properly.
Nitpick: Don't use int8_t where a char would do. itoa, s[n]printf, etc. expect char buffers, not int8_t buffers.
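For the first option (calculating the ASCII representation yourself), here is a minimal sketch, assuming base 10 and a caller-provided char buffer of at least 12 bytes; the function name is made up:
#include <stdint.h>

// Convert an int32_t to a decimal string. 'buf' must hold at least 12 chars.
void int32_to_ascii(int32_t value, char *buf)
{
    char tmp[12];
    int i = 0;
    uint32_t mag = (value < 0) ? 0u - (uint32_t)value : (uint32_t)value;
    do {
        tmp[i++] = '0' + (mag % 10);   // least significant digit first
        mag /= 10;
    } while (mag != 0);
    char *p = buf;
    if (value < 0)
        *p++ = '-';
    while (i > 0)
        *p++ = tmp[--i];               // reverse the digits into the output buffer
    *p = '\0';
}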

unsigned char concatenation

I am creating a C++ program for communication with a gripper on a serial port.
I have to send a buffer of type "unsigned char [8]", but of these 8 bytes, 4 are entered from the keyboard, and 2 are the CRC, calculated at the time.
So, how can I concatenate these pieces into a single 8-byte unsigned char buffer?
For example:
unsigned char buffer[8];
----
unsigned char DLEN[1]={0x05};
----
unsigned char CMD[1]={0x01};
----
unsigned char data[4]={0x00,0x01,0x20,0x41};
----
unsigned char CRC[2]={0xFF,0x41};
----
how can I get this buffer: {0x05,0x01,0x00,0x01,0x20,0x41,0xFF,0x41} that is the union of DLEN,CMD,data and CRC?
This:
buffer[0] = DLEN[0];
buffer[1] = CMD[0];
buffer[2] = data[0];
buffer[3] = data[1];
buffer[4] = data[2];
buffer[5] = data[3];
buffer[6] = CRC[0];
buffer[7] = CRC[1];
An alternative solution is this:
Start off with an unsigned char array of 8 characters.
When you need other methods to insert data into it, pass the relevant part by address, e.g. updateCRC(&buffer[6]), with the method taking an unsigned char pointer. Assuming you respect the respective sizes of the inputs, you get the best of both worlds: you handle the pieces as if they were separate buffers, without having to merge them into a single array afterwards.
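A minimal sketch of that idea (updateCRC is the hypothetical helper mentioned above; the extra parameters and the checksum shown are assumptions, not the gripper's real CRC):
#include <cstddef>
#include <cstdint>

// Writes a 2-byte CRC directly into the caller's buffer at 'out'.
void updateCRC(unsigned char *out, const unsigned char *msg, std::size_t len)
{
    uint16_t crc = 0;
    for (std::size_t i = 0; i < len; ++i)
        crc += msg[i];                 // placeholder; substitute the device's real CRC algorithm
    out[0] = (crc >> 8) & 0xFF;        // high byte
    out[1] =  crc       & 0xFF;        // low byte
}
// Usage: fill bytes 0..5 first, then let updateCRC write bytes 6 and 7 in place:
// updateCRC(&buffer[6], buffer, 6);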
You can use bit shifting, the << and >> operators, to split a multi-byte value into the appropriate bytes of the buffer when a field is held as a single integer rather than as separate bytes.
Something like buffer[6] = crc >> 8; buffer[7] = crc & 0xFF; for a CRC held in a single 16-bit variable.
Just make sure your buffer is cleared to all 0's first if you assemble fields with |=.
My version of hmjd's answer:
buffer[0] = DLEN[0];
buffer[1] = CMD[0];
std::copy(std::begin(data), std::end(data), buffer + sizeof DLEN + sizeof CMD);
std::copy(std::begin(CRC),  std::end(CRC),  buffer + sizeof DLEN + sizeof CMD + sizeof data);
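For completeness, a sketch that copies all four pieces the same way and keeps a running output pointer (requires <algorithm> and <iterator>):
#include <algorithm>
#include <iterator>

unsigned char *p = buffer;
p = std::copy(std::begin(DLEN), std::end(DLEN), p);
p = std::copy(std::begin(CMD),  std::end(CMD),  p);
p = std::copy(std::begin(data), std::end(data), p);
p = std::copy(std::begin(CRC),  std::end(CRC),  p);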

Deciphering unsigned char*

I have a process that listens to a UDP multicast broadcast and reads the data in as an unsigned char*.
I have a specification that indicates fields within this unsigned char*.
Fields are defined in the specification with a type and size.
Types are: uInt32, uInt64, unsigned int, and single byte string.
For the single byte string I can merely access the offset of the field in the unsigned char* and cast to a char, such as:
char character = (char)(data[1]);
For a single-byte uInt32 I've been doing the following, which also seems to work:
uint32_t integer = (uint32_t)(data[20]);
However, for multiple byte conversions I seem to be stuck.
How would I convert several bytes in a row (substring of data) to its corresponding datatype?
Also, is it safe to wrap data in a string (for use of substring functionality)? I am worried about losing information, since I'd have to cast unsigned char* to char*, like:
std::string wrapper((char*)(data),length); //Is this safe?
I tried something like this:
std::string wrapper((char*)(data),length); //Is this safe?
uint32_t integer = (uint32_t)(wrapper.substr(20,4).c_str()); //4 byte int
But it doesn't work.
Thoughts?
Update
I've tried the suggested bit shift:
void function(const unsigned char* data, size_t data_len)
{
    // From the specification: Field type: uInt32, Byte Length: 4
    // All integer fields are big endian.
    uint32_t integer = (data[0] << 24) | (data[1] << 16) | (data[2] << 8) | (data[3]);
}
This sadly gives me garbage (the same number for every call, from a callback).
I think you should be very explicit, and not just do "clever" tricks with casts and pointers. Instead, write a function like this:
uint32_t read_uint32_t(unsigned char **data)
{
const unsigned char *get = *data;
*data += 4;
return (get[0] << 24) | (get[1] << 16) | (get[2] << 8) | get[3];
}
This extracts a single uint32_t value from a buffer of unsigned char, and increases the buffer pointer to point at the next byte of data in the buffer.
This assumes big-endian data, you need to have a well-defined idea of the buffer's endian-mode in order to interpret it.
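A usage sketch with made-up bytes:
unsigned char buf[8] = { 0x00, 0x00, 0x00, 0x2A, 0x00, 0x00, 0x01, 0x00 };
unsigned char *p = buf;
uint32_t first  = read_uint32_t(&p);   // 42  (0x0000002A); p now points at buf + 4
uint32_t second = read_uint32_t(&p);   // 256 (0x00000100); p now points at buf + 8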
Depends on the byte ordering of the protocol, for big-endian or so called network byte order do:
uint32_t i = data[0] << 24 | data[1] << 16 | data[2] << 8 | data[3];
Without commenting on whether it's a good idea or not, the reason it doesn't work for you is that wrapper.substr(20,4).c_str() yields a pointer (const char *), not a uint32_t, so casting it to uint32_t just converts the pointer value. You would have to cast it to a pointer type and dereference it:
uint32_t integer = *(const uint32_t *)wrapper.substr(20,4).c_str();
Even then, the result still depends on the host's byte order (and, strictly, on alignment).
uint32_t integer = ntohl(*reinterpret_cast<const uint32_t*>(data + 20));
or (handles alignment issues):
uint32_t integer;
memcpy(&integer, data+20, sizeof integer);
integer = ntohl(integer);
The pointer way:
uint32_t n = *(uint32_t*)&data[20];
You will run into problems on different endian architectures though. The solution with bit shifts is better and consistent.
std::string wrapper((char*)(data),length); //Is this safe?
This should be safe since you specified the length of the data.
On the other hand if you did this:
std::string wrapper((char*)data);
The string length would be determined by wherever the first 0 byte occurs, and you would more than likely chop off some data.
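For example, with made-up bytes containing an embedded 0:
#include <string>

const unsigned char raw[] = { 0x41, 0x00, 0x42 };
std::string with_len((const char*)raw, 3);  // size() == 3; the 0x42 after the embedded 0 is kept
std::string no_len((const char*)raw);       // size() == 1; everything after the 0 is lost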