I am creating a C++ program for communication with a gripper on a serial port.
I have to send a buffer of type "unsigned char [8]", but of these 8 bytes, 4 are entered from the keyboard and 2 are the CRC, calculated at run time.
So, how can I concatenate these pieces into a single 8-byte unsigned char buffer?
For example:
unsigned char buffer[8];
unsigned char DLEN[1] = {0x05};
unsigned char CMD[1]  = {0x01};
unsigned char data[4] = {0x00,0x01,0x20,0x41};
unsigned char CRC[2]  = {0xFF,0x41};
How can I get this buffer: {0x05,0x01,0x00,0x01,0x20,0x41,0xFF,0x41}, i.e. the concatenation of DLEN, CMD, data and CRC?
This:
buffer[0] = DLEN[0];
buffer[1] = CMD[0];
buffer[2] = data[0];
buffer[3] = data[1];
buffer[4] = data[2];
buffer[5] = data[3];
buffer[6] = CRC[0];
buffer[7] = CRC[1];
An alternative solution is this:
Start off with an unsigned char array of 8 characters.
When you need to pass it off to other methods to have data inserted into it, pass the relevant part by pointer, like this: updateCRC(&buffer[6]), with the method signature taking an unsigned char pointer. Assuming you respect the respective sizes of the inputs, you get the best of both worlds: you handle the parts of the buffer as if they were separate arrays, yet there is nothing to merge into a single array afterwards.
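A minimal sketch of that idea, assuming the frame layout from the question; the body of updateCRC is a placeholder (a real version would compute the CRC over the first six bytes), but it shows how writing through the pointer fills the tail of the caller's buffer in place:

void updateCRC(unsigned char* crcOut)
{
    crcOut[0] = 0xFF;  // placeholder CRC high byte, lands in buffer[6]
    crcOut[1] = 0x41;  // placeholder CRC low byte, lands in buffer[7]
}

int main()
{
    unsigned char buffer[8] = {0x05, 0x01, 0x00, 0x01, 0x20, 0x41, 0x00, 0x00};
    updateCRC(&buffer[6]);  // fill in the two CRC bytes in place
}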
You can use bit shifting, the << and >> operators, if you prefer to pack the fields into a single integer rather than an array; shift each field left by a multiple of 8 bits to move it to the right byte position.
Something like packed |= (uint64_t)DLEN[0] << 56;
Just make sure your packed value is cleared to all 0's first.
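A sketch of that variant, under the assumption that you build the frame in a 64-bit integer first (big-endian layout shown) and only afterwards copy it out to the unsigned char buffer:

#include <cstdint>

int main()
{
    unsigned char DLEN = 0x05, CMD = 0x01;
    unsigned char data[4] = {0x00, 0x01, 0x20, 0x41};
    unsigned char CRC[2]  = {0xFF, 0x41};

    std::uint64_t packed = 0;                 // start from all zeros
    packed |= (std::uint64_t)DLEN    << 56;   // byte 0
    packed |= (std::uint64_t)CMD     << 48;   // byte 1
    packed |= (std::uint64_t)data[0] << 40;   // bytes 2..5
    packed |= (std::uint64_t)data[1] << 32;
    packed |= (std::uint64_t)data[2] << 24;
    packed |= (std::uint64_t)data[3] << 16;
    packed |= (std::uint64_t)CRC[0]  << 8;    // bytes 6..7
    packed |= (std::uint64_t)CRC[1];

    unsigned char buffer[8];
    for (int i = 0; i < 8; ++i)               // unpack explicitly so the result
        buffer[i] = (unsigned char)(packed >> (56 - 8 * i));  // is endian-independent
}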
My version of hmjd's answer (needs <algorithm> and <iterator>):
buffer[0] = DLEN[0];
buffer[1] = CMD[0];
std::copy(std::begin(data), std::end(data), buffer + sizeof DLEN + sizeof CMD);
std::copy(std::begin(CRC),  std::end(CRC),  buffer + sizeof DLEN + sizeof CMD + sizeof data);
I'm trying to get an int value from a file I read. The trick is that I don't know how many bytes this value occupies, so I first read the length octet, then try to read as many data bytes as the length octet tells me. The issue comes when I try to put the data octets into an int variable and eventually print it: if the first data octet is 0, only the one that comes after is copied, so the int I read is wrong, as 0x00A2 is not the same as 0xA200. If I use ntohs or ntohl, then 0xA200 is decoded wrongly as 0x00A2, so that does not solve the whole problem. I am using memcpy like this:
memcpy(&dst, src, bytes2read)
where dst is an int, src is an unsigned char *, and bytes2read is a size_t.
So what am I doing wrong? Thank you!
You cannot use memcpy to portably store bytes in an integer, because the order of bytes is not specified by the standard, not to mention possible padding bits. The portable way is to use bitwise operations and shifts:
unsigned char b, len;
unsigned int val = 0;

fdin >> len;                 // read the length octet
if (len > sizeof(val)) {     // ensure the value will fit into val
    // process error: cannot fit in an int variable
    ...
}
while (len-- > 0) {          // store and shift one byte at a time
    val <<= 8;               // shift the previous value to leave room for the new byte
    fdin >> b;               // read it
    val |= b;                // and store it
}
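The same decode also works on a byte buffer that has already been read from the file, which sidesteps the formatted-extraction quirks of operator>>; a self-contained sketch, with decode_be as a hypothetical helper name:

#include <cstdint>
#include <cstddef>
#include <cstdio>

std::uint32_t decode_be(const unsigned char* bytes, std::size_t len)
{
    std::uint32_t val = 0;
    for (std::size_t i = 0; i < len; ++i) {
        val <<= 8;        // make room for the next byte
        val |= bytes[i];  // append it as the new least significant byte
    }
    return val;
}

int main()
{
    const unsigned char raw[] = {0x02, 0x00, 0xA2};    // length octet, then data
    std::uint32_t value = decode_be(raw + 1, raw[0]);  // yields 0x00A2 == 162
    std::printf("%u\n", (unsigned)value);
}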
I have to copy the following structure to a char[] buffer.
struct AMG_ANGLES {
unsigned char bIsEnCrypted;
unsigned char bIsError;
unsigned short usErrorFlag;
unsigned char byteNumDABs;
unsigned short usBagId;
unsigned short usKvMa;
unsigned char byteDataType;
};
AMG_ANGLES struct_data;
struct_data.bIsEnCrypted = 1;
struct_data.bIsError = 1;
struct_data.usErrorFlag = 2;
struct_data.byteNumDABs = 1;
struct_data.usBagId =10;
struct_data.usKvMa=20;
struct_data.byteDataType = 1;
// here I am copying the structure to a char buffer
char sendbuf[sizeof(struct_data)] = "";
memcpy(sendbuf,(char*)&struct_data, sizeof(struct_data));
After the copy, the buffer seems to hold only the first two unsigned chars and the short (1, 1, 2), and its size is only 3 bytes; the remaining data was not copied.
Please help me see where I am going wrong.
I also tried the following:
memcpy(sendbuf+0, &struct_data.bIsEnCrypted, sizeof(struct_data.bIsEnCrypted));
memcpy(sendbuf+1, &struct_data.bIsError, sizeof(struct_data.bIsError));
memcpy(sendbuf+2, &struct_data.usErrorFlag, sizeof(struct_data.usErrorFlag));
memcpy(sendbuf+4, &struct_data.byteNumDABs, sizeof(struct_data.byteNumDABs));
memcpy(sendbuf+6, &struct_data.usBagId, sizeof(struct_data.usBagId));
memcpy(sendbuf+8, &struct_data.usKvMa, sizeof(struct_data.usKvMa));
memcpy(sendbuf+10, &struct_data.byteDataType, sizeof(struct_data.byteDataType));
I am getting the same result.
Your code is fine; your approach to determine whether the contents of the buffer are correct is flawed.
You have not told us how you have determined that the contents of the buffer are wrong, but from your description I suspect that you did something like printf( "%s\n", sendbuf ). Well, that won't work, because your buffer does not really contain characters, it contains binary data.
Specifically, your short usErrorFlag is two bytes long, and since the value you store in it is 2, this means that it will be stored in sendbuf in two consecutive bytes, one byte will have the value of 0x02 and the next byte will have the value of 0x00. (Assuming, from hints in your description, that your hardware is "Little Endian".) So, when you try to view the contents of your sendbuf as a string, printf() will stop printing as soon as it encounters the 0x00 byte.
So, your code is correct. Proceed with sending your sendbuf to your UDP socket.
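If you want to double-check what is actually in the buffer, a hex dump is more useful than treating it as a string; a small sketch (dump_hex is just an illustrative helper name):

#include <cstdio>
#include <cstddef>

void dump_hex(const char* buf, std::size_t len)
{
    for (std::size_t i = 0; i < len; ++i)
        std::printf("%02X ", (unsigned char)buf[i]);  // cast avoids sign extension
    std::printf("\n");
}

// e.g. dump_hex(sendbuf, sizeof(sendbuf));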
If I read "sendbuf" I immediately assume that you are sending data from one computer to another. These computers will have different compilers, the compilers will for example order their bytes in a different order. memcpy isn't going to work on all compilers.
I suggest you find where the contents of sendbuf is documented, and assign the individual bytes accordingly. For example
sendbuf[0] = struct_data.bIsEnCrypted;
sendbuf[1] = struct_data.bIsError;
sendbuf[2] = struct_data.usErrorFlag >> 8;
sendbuf[3] = struct_data.usErrorFlag & 0xff;
This makes your code independent of byte ordering, independent of struct padding, independent of reordering of items once you are not using a POD, and so on. In your case I would bet money that there is at least padding between byteNumDABs and usBagId, and at the end.
(Bytes 2 and 3 might be exactly the other way round; that's why you should find the spec for that data structure.)
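Putting that together for the AMG_ANGLES struct from the question, here is a sketch of a full serializer; the field order and the big-endian byte order are assumptions, since the actual wire layout has to come from the protocol specification:

#include <cstddef>

std::size_t serialize(const AMG_ANGLES& s, unsigned char* out)
{
    std::size_t i = 0;
    out[i++] = s.bIsEnCrypted;
    out[i++] = s.bIsError;
    out[i++] = (unsigned char)(s.usErrorFlag >> 8);   // high byte first
    out[i++] = (unsigned char)(s.usErrorFlag & 0xFF);
    out[i++] = s.byteNumDABs;
    out[i++] = (unsigned char)(s.usBagId >> 8);
    out[i++] = (unsigned char)(s.usBagId & 0xFF);
    out[i++] = (unsigned char)(s.usKvMa >> 8);
    out[i++] = (unsigned char)(s.usKvMa & 0xFF);
    out[i++] = s.byteDataType;
    return i;                                         // 10 bytes, no padding on the wire
}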
I wrote the following code, which casts a long long to a byte array.
BYTE Buffer[8 +32];
BYTE * Temp = reinterpret_cast<BYTE*> (&Size);
Buffer[0] = Temp[0]; Buffer[1] = Temp[1]; Buffer[2] = Temp[2]; Buffer[3] = Temp[3];
Buffer[4] = Temp[4]; Buffer[5] = Temp[5]; Buffer[6] = Temp[6]; Buffer[7] = Temp[7];
//The next 32 bytes (Buffer[8] to Buffer[39]) contains the file name.
WriteFile(hFile,Buffer,40,&dwWrite,NULL);
Now the question is: is it safe to cast an int64 directly into bytes? What are the possible bugs?
I am well aware of other, safer methods to do this (bit shifting, for example) but I want the code to be as fast as possible.
Thanks!
The problem is that you write to disk with the byte ordering of the current machine. If you read the file again on a machine with a different byte ordering you will run into big trouble.
That's why bit shifting would be the better way to do it.
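A sketch of the shift-based alternative: write the 64-bit value in a fixed (here little-endian) byte order so the file no longer depends on the machine that wrote it. put_u64_le is an illustrative name, and the usage line assumes Size is the long long from the snippet above:

#include <cstdint>

void put_u64_le(unsigned char* out, std::uint64_t v)
{
    for (int i = 0; i < 8; ++i)
        out[i] = (unsigned char)(v >> (8 * i));  // byte i carries bits 8*i .. 8*i+7
}

// e.g. put_u64_le(Buffer, (std::uint64_t)Size);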
I would like to set the first byte of an s_addr variable, which is just an unsigned long.
Is this possible, and if so, how?
It is not an array of bytes, so I can't access it like this:
struct in_addr addr;
addr.s_addr[0] = 1; // Set this byte to the number 1, or in hex: 0x01
EDIT:
It turns out that I needed the last (i.e. the 4th) byte and not the first. But thanks to your help I now have:
*((char *)&addr.s_addr + 3) = 1;
One ugly and possibly unsafe (but nevertheless widely practised) way is like this:
*(char *)&addr.s_addr = 42;
If you know that addr.s_addr is an unsigned long though, and if by "first" byte you mean "least significant" byte, then you can use bitwise operators as a much safer alternative, e.g.
addr.s_addr &= ~0xffUL; // clear previous contents of LS byte
addr.s_addr |= 0x01UL; // set LS byte to 1
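The same idea generalized, as a sketch: set byte number i (0 = least significant) of the value using only shifts and masks. This assumes s_addr really is an unsigned long, as the question says; on many platforms it is a 32-bit in_addr_t, in which case adjust the type accordingly:

void set_byte(unsigned long& word, unsigned i, unsigned char value)
{
    word &= ~(0xFFUL << (8 * i));             // clear byte i
    word |= (unsigned long)value << (8 * i);  // write the new byte
}

// e.g. set_byte(addr.s_addr, 3, 1);  // set byte 3 of the value to 1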
You take its address, interpret it as a pointer to a char and dereference it:
unsigned long ul;
*reinterpret_cast<char*>(&ul) = 0;
// ina is a struct sockaddr_in
char* address = inet_ntoa(ina.sin_addr);
*address = x;
or
*(char*)&addr.s_addr = x;
I have a process that listens to a UDP multicast broadcast and reads in the data as an unsigned char*.
I have a specification that indicates fields within this unsigned char*.
Fields are defined in the specification with a type and size.
Types are: uInt32, uInt64, unsigned int, and single byte string.
For the single byte string I can merely access the offset of the field in the unsigned char* and cast to a char, such as:
char character = (char)(data[1]);
For a single-byte uInt32 I've been doing the following, which also seems to work:
uint32_t integer = (uint32_t)(data[20]);
However, for multiple byte conversions I seem to be stuck.
How would I convert several bytes in a row (substring of data) to its corresponding datatype?
Also, is it safe to wrap data in a string (for use of substring functionality)? I am worried about losing information, since I'd have to cast unsigned char* to char*, like:
std::string wrapper((char*)(data),length); //Is this safe?
I tried something like this:
std::string wrapper((char*)(data),length); //Is this safe?
uint32_t integer = (uint32_t)(wrapper.substr(20,4).c_str()); //4 byte int
But it doesn't work.
Thoughts?
Update
I've tried the suggested bit shift:
void function(const unsigned char* data, size_t data_len)
{
    // From specification: Field type: uInt32, Byte Length: 4
    // All integer fields are big endian.
    uint32_t integer = (data[0] << 24) | (data[1] << 16) | (data[2] << 8) | (data[3]);
}
This sadly gives me garbage (the same number for every call, from a callback).
I think you should be very explicit, and not just do "clever" tricks with casts and pointers. Instead, write a function like this:
uint32_t read_uint32_t(unsigned char **data)
{
    const unsigned char *get = *data;
    *data += 4;
    return ((uint32_t)get[0] << 24) | ((uint32_t)get[1] << 16) | ((uint32_t)get[2] << 8) | (uint32_t)get[3];
}
This extracts a single uint32_t value from a buffer of unsigned char, and increases the buffer pointer to point at the next byte of data in the buffer.
This assumes big-endian data; you need a well-defined idea of the buffer's byte order in order to interpret it.
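A usage sketch built on that helper, assuming the buffer starts with two consecutive big-endian uInt32 fields; note that the cursor is advanced by read_uint32_t itself:

#include <cstdint>
#include <cstddef>

void parse(unsigned char* data, std::size_t data_len)
{
    if (data_len < 8)
        return;                                     // not enough bytes for two fields
    unsigned char* cursor = data;
    std::uint32_t first  = read_uint32_t(&cursor);  // consumes bytes 0..3
    std::uint32_t second = read_uint32_t(&cursor);  // consumes bytes 4..7
    (void)first; (void)second;                      // use the decoded values here
}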
Depends on the byte ordering of the protocol, for big-endian or so called network byte order do:
uint32_t i = (uint32_t)data[0] << 24 | (uint32_t)data[1] << 16 | (uint32_t)data[2] << 8 | data[3];
Without commenting on whether it's a good idea or not, the reason it doesn't work for you is that the result of wrapper.substr(20,4).c_str() is a pointer (which you would cast to uint32_t *), not a uint32_t. So if you do:
uint32_t * integer = (uint32_t *)(wrapper.substr(20,4).c_str());
it should work, but note that the pointer points into a temporary string, so dereference it right away.
uint32_t integer = ntohl(*reinterpret_cast<const uint32_t*>(data + 20));
or (handles alignment issues):
uint32_t integer;
memcpy(&integer, data+20, sizeof integer);
integer = ntohl(integer);
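That pattern is easy to wrap in a small helper; a sketch, with read_be32 as a hypothetical name (memcpy avoids misaligned access, ntohl fixes the byte order):

#include <cstdint>
#include <cstring>
#include <arpa/inet.h>  // ntohl on POSIX systems

std::uint32_t read_be32(const unsigned char* p)
{
    std::uint32_t v;
    std::memcpy(&v, p, sizeof v);  // copy 4 bytes regardless of alignment
    return ntohl(v);               // convert from network (big-endian) order
}

// e.g. uint32_t integer = read_be32(data + 20);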
The pointer way:
uint32_t n = *(uint32_t*)&data[20];
You will run into problems on different endian architectures though. The solution with bit shifts is better and consistent.
std::string wrapper((char*)(data),length); //Is this safe?
This should be safe since you specified the length of the data.
On the other hand if you did this:
std::string wrapper((char*)data);
The string length would be determined by wherever the first 0 byte occurs, and you would more than likely chop off some data.
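A small sketch showing the difference between the two constructors on data that contains a 0x00 byte:

#include <string>
#include <cassert>
#include <cstddef>

int main()
{
    const unsigned char data[] = {'a', 0x00, 'b', 'c'};
    const std::size_t length = sizeof data;

    std::string with_len((const char*)data, length);  // keeps all 4 bytes
    std::string no_len((const char*)data);            // stops at the 0x00 byte

    assert(with_len.size() == 4);
    assert(no_len.size() == 1);
}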