I have an 8-byte string of flags; some of them are booleans and some are chars. What I want is to access those flags by name in my code, like myStruct.value1.
I created a struct to match. I would expect to be able to copy the string into that struct, as both are 64 bits in total.
// destination
typedef struct myStruct_t {
uint8_t value1 : 8;
uint8_t value2 : 8;
uint16_t value3 : 16;
uint8_t value4 : 8;
uint8_t value5 : 1;
uint8_t value6 : 1;
uint8_t value7 : 1;
uint8_t value8 : 1;
uint8_t value9 : 1;
uint16_t value10 : 11;
uint8_t value11 : 8;
} myStruct_t;
// source
char buf[8] = "12345678";
// I read about strcpy and memcpy, but this doesn't work
memcpy(myStruct, buf, 8);
However, it does not work and I get the following error message:
error: cannot convert 'myStruct_t' to 'void*' for argument '1' to 'void* memcpy(void*, const void*, size_t)'
memcpy(myStruct, buf, 8);
^
memcpy expects its first two arguments to be pointers.
Arrays like your buf will implicitly decay to pointers, but your type myStruct_t will not.
myStruct_t myStruct;
memcpy(&myStruct, buf, 8);
// ^ produces a POINTER to myStruct
If I understand correctly what you are trying to do, I would first convert the 8 character buffer to binary. Then, you can extract substrings from it for the length of each of the values you want. Finally, you can convert the binary strings to their numerical values.
Also, you should make your char array size 9: you need an extra character for the null terminator. As written, the initializer won't compile in C++ (in C it is legal, but the array is not null-terminated).
Related
I am trying to receive a message of a TCP socket and store it in an uint8_t array.
The buffer I receive is 8 bytes long and contains 4 distinct values.
Byte 1: value 1, a uint8_t; bytes 2-3: value 2, a uint16_t; byte 4: value 3, a uint8_t; bytes 5-8: value 4, an unsigned long.
Endianness is big-endian.
int numBytes = 0;
uint8_t buff [8];
if ((numBytes = recv(sockfd, buff, 8, 0)) == -1)
{
perror("recv");
exit(1);
}
uint8_t *pt = buff;
printf("buff[0] = %u\n", *pt);
++pt;
printf("buff[1] = %u\n", *(uint16_t*)pt);
But the second printf prints out an unexpected value. Have I done something incorrectly to extract the two bytes or is something wrong with my print function?
You have 2 issues to take care of once your data has arrived in the buffer.
The first is obeying aliasing rules, which is achieved by only casting non-char pointers to char*, because char may alias anything. You should never cast char* to a non-char pointer type.
The second is obeying network byte ordering protocol whereby integers transmitted over a network are converted to network order before transfer and converted from network order after receipt. For this we generally use htons, htonl, ntohs and ntohl.
Something like this:
// declare receive buffer to be char, not uint8_t
char buff[8];
// receive chars in buff here ...
// now transfer and convert data
uint8_t a;
uint16_t b;
uint8_t c;
uint32_t d;
a = static_cast<uint8_t>(buff[0]);
// always cast the receiving type* to char*
// never cast char* to receiving type*
std::copy(buff + 1, buff + 3, (char*)&b);
// convert from network byte order to host order
b = ntohs(b); // short version (uint16_t)
c = static_cast<uint8_t>(buff[3]);
std::copy(buff + 4, buff + 8, (char*)&d);
d = ntohl(d); // long version (uint32_t)
Perhaps like this (big-endian)
uint8_t buff [8];
// ...
uint8_t val1 = buff[0];
uint16_t val2 = buff[1] * 256 + buff[2];
uint8_t val3 = buff[3];
unsigned long val4 = buff[4] * 16777216UL + buff[5] * 65536UL + buff[6] * 256UL + buff[7];
For example, 0x130ABF (hexadecimal) equals 1247935 (decimal),
So my byte array is
char buf[3] = {0x13 , 0x0A , 0xBF};
and I need to retrieve the decimal value from the byte array.
Below are my sample code:
#include<iostream>
using namespace std;
int main()
{
char buf[3] = {0x13 , 0x0A , 0xBF};
int number = buf[0]*0x10000 + buf[1]*0x100 + buf[2];
cout<<number<<endl;
return 0;
}
and the result is (wrong):
1247679
Unless I change the
char buf[3] = {0x13 , 0x0A , 0xBF};
to
int buf[3] = {0x13 , 0x0A , 0xBF};
then It will get correct result.
Unfortunately, I must set my array as char type, anyone know how to solve this ?
Define the array as:
unsigned char buf[3];
Remember that char could be signed.
UPDATE: To complete the answer: plain "char" may behave like either "signed char" or "unsigned char"; which one is implementation-defined, not fixed by the standard.
Array elements will be promoted to int before evaluation. So if your compiler treats char as signed, you get the following (assuming int is 32-bit):
int number = 19*0x10000 + 10*0x100 + (-65);
To avoid such effect you can declare your array as unsigned char arr[], or use masking plus shifts:
int number = ((buf[0] << 16) & 0xff0000)
| ((buf[1] << 8) & 0x00ff00)
| ((buf[2] << 0) & 0x0000ff);
Since your char array is signed, when you initialize the last element with 0xBF you are trying to assign 191 to it while the maximum it can store is 127: a narrowing conversion occurs... A workaround would be the following:
unsigned char buf[3] = { 0x13, 0x0A, 0xBF };
This will prevent the narrowing conversion. Your compiler should have given you a warning about it.
I have a char array which represents a GUID as bytes (not as chars) but I have to reverse half of the array.
That happened because I used sscanf to convert a GUID string into char array (which represents bytes) using:
sscanf(strguid,"%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-%02x%02x%02x%02x%02x%02x",
       &arr[0], &arr[1], &arr[2], &arr[3], ...., &arr[15]);
The array I have is for example:
2EC5D8AA85E74B5E872462155EAA9D51
and I have to reverse it so it will give the right GUID:
AAD8C52EE7855E4B872462155EAA9D51
What I tried is the following:
unsigned int temp;
memcpy(&temp,&arr[0],sizeof(char));
memcpy(&arr[0],&arr[3],sizeof(char));
memcpy(&arr[3], &temp, sizeof(char));
And so on. (The second with the third, the fifth with the sixth and the seventh with the eighth)
Is there an easier way to do that?
If I understand your problem correctly, you need to change the endianness of the first three members of the GUID struct:
typedef struct {
unsigned long Data1;
unsigned short Data2;
unsigned short Data3;
byte Data4[ 8 ];
} GUID;
You can try this (treating guid_ as a pointer to the raw 16 bytes):
std::reverse(guid_, guid_ + 4);
std::reverse(guid_ + 4, guid_ + 6);
std::reverse(guid_ + 6, guid_ + 8);
But I'd prefer changing the sscanf format like this:
const char *string_ = "AAD8C52E-E785-5E4B-8724-62155EAA9D51";
GUID guid_;
sscanf(string_, "%08lx-%04hx-%04hx-%02hhx%02hhx-%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx",
       &guid_.Data1, &guid_.Data2, &guid_.Data3,
       &guid_.Data4[0], &guid_.Data4[1], &guid_.Data4[2], &guid_.Data4[3], &guid_.Data4[4], &guid_.Data4[5], &guid_.Data4[6], &guid_.Data4[7]);
(Note the length modifiers: %lx matches unsigned long, %hx unsigned short, and %hhx a single byte.)
Be advised that you should check the input string length first, to avoid parsing a string that is shorter than expected.
I am creating a C++ program for communication with a gripper on a serial port.
I have to send a buffer of type "unsigned char [8]", but of these 8 bytes, 4 are entered from the keyboard, and 2 are the CRC, calculated at the time.
So, how can I concatenate several pieces in a single buffer of 8 bytes unsigned char?
For example:
unsigned char buffer[8];
----
unsigned char DLEN[1]={0x05};
----
unsigned char CMD[1]={0x01};
----
unsigned char data[4]={0x00,0x01,0x20,0x41};
----
unsigned char CRC[2]={0xFF,0x41};
----
how can I get this buffer: {0x05,0x01,0x00,0x01,0x20,0x41,0xFF,0x41} that is the union of DLEN,CMD,data and CRC?
This:
buffer[0] = DLEN[0];
buffer[1] = CMD[0];
buffer[2] = data[0];
buffer[3] = data[1];
buffer[4] = data[2];
buffer[5] = data[3];
buffer[6] = CRC[0];
buffer[7] = CRC[1];
An alternative solution is this:
Start off with an unsigned char array of 8 characters.
When you need to pass it off to other methods to have data inserted, pass the relevant slice by pointer, e.g. updateCRC(&buffer[6]), with the method signature taking an unsigned char pointer. Assuming each callee respects the size of its slice, you get the best of both worlds: the pieces can be handled as if they were separate buffers, and there is nothing to merge into a single array afterwards.
You can use the bit-shift operators, << and >>, to get the fields into the right places, but only if you assemble them into a single integer rather than an array, something like packed |= ((uint64_t)DLEN[0] << 56);
Just make sure your packed value is cleared to all 0's first.
My version of hmjd's answer:
buffer[0] = DLEN[0];
buffer[1] = CMD[0];
std::copy(std::begin(data), std::end(data), buffer + sizeof DLEN + sizeof CMD);
std::copy(std::begin(CRC),  std::end(CRC),  buffer + sizeof DLEN + sizeof CMD + sizeof data);
I have a process that listens to a UDP multicast broadcast and reads in the data as an unsigned char*.
I have a specification that indicates fields within this unsigned char*.
Fields are defined in the specification with a type and size.
Types are: uInt32, uInt64, unsigned int, and single byte string.
For the single byte string I can merely access the offset of the field in the unsigned char* and cast to a char, such as:
char character = (char)(data[1]);
Single byte uint32 i've been doing the following, which also seems to work:
uint32_t integer = (uint32_t)(data[20]);
However, for multiple byte conversions I seem to be stuck.
How would I convert several bytes in a row (substring of data) to its corresponding datatype?
Also, is it safe to wrap data in a string (for use of substring functionality)? I am worried about losing information, since I'd have to cast unsigned char* to char*, like:
std::string wrapper((char*)(data),length); //Is this safe?
I tried something like this:
std::string wrapper((char*)(data),length); //Is this safe?
uint32_t integer = (uint32_t)(wrapper.substr(20,4).c_str()); //4 byte int
But it doesn't work.
Thoughts?
Update
I've tried the suggested bit shift:
void function(const unsigned char* data, size_t data_len)
{
//From specifiction: Field type: uInt32 Byte Length: 4
//All integer fields are big endian.
uint32_t integer = (data[0] << 24) | (data[1] << 16) | (data[2] << 8) | (data[3]);
}
This sadly gives me garbage (the same number for every call, from a callback).
I think you should be very explicit, and not just do "clever" tricks with casts and pointers. Instead, write a function like this:
uint32_t read_uint32_t(unsigned char **data)
{
const unsigned char *get = *data;
*data += 4;
return ((uint32_t)get[0] << 24) | (get[1] << 16) | (get[2] << 8) | get[3];
}
This extracts a single uint32_t value from a buffer of unsigned char, and increases the buffer pointer to point at the next byte of data in the buffer.
This assumes big-endian data, you need to have a well-defined idea of the buffer's endian-mode in order to interpret it.
Depends on the byte ordering of the protocol, for big-endian or so called network byte order do:
uint32_t i = (uint32_t)data[0] << 24 | data[1] << 16 | data[2] << 8 | data[3];
Without commenting on whether it's a good idea or not, the reason it doesn't work for you is that the result of the cast is a pointer, (uint32_t *), not a (uint32_t) value. So if you do:
uint32_t *integer = (uint32_t *)(wrapper.substr(20,4).c_str());
and dereference it, it should work; note, though, that substr returns a temporary, so the pointer is only valid within the same full expression.
uint32_t integer = ntohl(*reinterpret_cast<const uint32_t*>(data + 20));
or (handles alignment issues):
uint32_t integer;
memcpy(&integer, data+20, sizeof integer);
integer = ntohl(integer);
The pointer way:
uint32_t n = *(uint32_t*)&data[20];
You will run into problems on different endian architectures, though, and this also violates alignment and strict-aliasing rules. The solution with bit shifts is better and consistent.
std::string wrapper((char*)(data),length); //Is this safe?
This should be safe since you specified the length of the data.
On the other hand if you did this:
std::string wrapper((char*)data);
The string length would be determined wherever the first 0 byte occurs, and you will more than likely chop off some data.