Overhead of wrapping data in a message - C++

Will wrapping a protobuf parameter in a separate message give any extra runtime overhead?
This:
message MyData {
optional uint32 data = 1;
}
message Container {
optional MyData data = 1;
}
vs.
message Container {
optional uint32 data = 1;
}
I'm only using the C++ implementation if that matters.
Extra wire overhead? Serialization overhead? Access overhead?

There is overhead for every embedded message, as explained in the protobuf encoding documentation.
The simple message below, with its int field set to 150, would be encoded as 08 96 01:
message Test1 {
required int32 a = 1;
}
Meanwhile, the encoding of a message with an embedded Test1 would look like 1a 03 08 96 01:
message Test3 {
required Test1 c = 3;
}
The documentation explains this as follows:
the last three bytes are exactly the same as our first example (08 96 01), and they're preceded by the number 3 – embedded messages are treated in exactly the same way as strings (wire type = 2).
So, in short, 1a 03 is added as overhead in Test3, since Test1 is another message type: 1a is the tag for field 3 with wire type 2, and 03 is the length of the embedded payload.

What's the best way to parse packet data in C++?

I'm porting a Delphi app to C++ and I'm having trouble finding the correct way to handle the cases of packet headers in C++.
The headers come as a DWORD in a UCHAR buffer, and I don't want to write something like: if (packet[0] == 0xFF && packet[1] == 0xFF ...);
Example of the working Delphi case statement:
case PDword(#Packet[6])^ of
$68FB0200:
begin
//Dword match correctly
End;
end;
Is there a way to do the same in C++, like the Delphi example?
I've already tried some methods, but the comparison always fails.
UCHAR *Packet ;
Packet = ParsePacket(buf->Buf, buf->Size);
// The ParsePacket Returns FB 00 00 00 78 00 01 F3 02 FD
DWORD Op1 = 0x01F302FD;
DWORD *p_dw = (DWORD*) Packet[6];
if (p_dw == Op1)
{
//Dword never match...
}
Yes, the portable C (and C++) way is:
DWORD const Op1 = 0x01F302FD;
if (0 == memcmp(Packet + 6, &Op1, sizeof Op1)) { ... }
Note that we haven't tried to access the bytes at Packet+6 as a 32-bit quantity... they're likely to be misaligned for that. We're just asking whether the same sequence of 4 bytes exists there as exist in Op1.
The obvious issue with DWORD *p_dw = (DWORD*) Packet[6] is that you take a value as an address. You probably meant DWORD *p_dw = (DWORD*)&(Packet[6]).
Note, however, that you run into additional trouble for two reasons:
First, the alignment could be wrong, such that the pointer is not a valid address to dereference as a DWORD. That is undefined behaviour.
Second, you depend on a specific endianness.
To overcome the first issue, you could use memcpy:
DWORD Op2 = 0;
memcpy(&Op2,Packet+6,sizeof(Op2));
To overcome both the first and the second issue, you'll need to assemble the bytes yourself (note that this variant reads them in big-endian order), e.g.:
DWORD Op2 = Packet[6]<<24 | Packet[7]<<16 | Packet[8]<<8 | Packet[9];

Zeroth byte disappears when getting a HID feature report using ioctl() in Linux

I would like to get a HID feature report from a device. Since the host system is Ubuntu 18.04, I followed this example (from line 125 to line 135). However, the loaded data is not complete. For example, the feature report is uint32_t 0xFFEEDDCC, but what I get is DD EE FF; the zeroth byte CC disappears. So I was wondering why the zeroth byte disappears and how to get the complete data.
Below is my code.
uchar buf[reportSize]; // reportSize = 5: 4 data bytes + 1 byte for the report ID
int fdevice = open(devicePath, O_RDWR);
// get feature report
buf[0] = reportID;
featureResults = ioctl(fdevice, HIDIOCGFEATURE(reportSize), buf); // featureResults = 3 but should be 4
if (featureResults < 0)
{
perror("HIDIOCGFEATURE");
}
else
{
for (int i = 0; i < featureResults; i++)
printf("%hhx ", buf[i]); // only show DD EE FF
puts("\n");
}
I tried with larger reportSize e.g. 256 as well but it still does not work. Besides, buf[-1], buf[featureResults] and buf[featureResults+1] are not the lost data.
Thank you very much.
According to the description on this page, it seems that HIDIOCGFEATURE(len) always skips the zeroth byte and starts from the first byte. So, I added a placeholder byte (LSB) to the HID feature report (firmware). That way, the placeholder is skipped and the data can be read correctly.
Any other solution would be very much appreciated.

Does VS Code have a memory viewer and/or a disassembler for C++ extension?

I am using Visual Studio Code (VS Code) for debugging my C++ program. I'd like to view the memory at a variable's address and also be able to view the assembly code of my program. I am looking around in VS Code and I am not seeing an option for such views. I checked the marketplace and I don't see anything out there.
Not sure if I'm just not looking in the right place, but do these features exist for VS Code?
When this question was first asked, neither the disassembly view nor the memory viewer were available.
In July of 2021, the disassembly view was released, which can be opened by clicking "Open Disassembly View" in the context menu of an editor. This is supported by the generic C++ debugger, and the LLDB debugger also has a "Toggle Disassembly" command which works quite well.
In February of 2022, the memory view was released in VS Code, which can be accessed by hovering on a variable in the "Variables" view. Support for this only exists in the LLDB C++ debugger for now.
At this time (Feb 2018), it seems that this feature still hasn't made it into VSCode. However it is possible to use the -exec command in the VSCode Debug Console to run GDB commands. See https://code.visualstudio.com/docs/languages/cpp#_gdb-lldb-and-mi-commands-gdblldb
The GDB examine command "x" displays memory in various formats. So in the VSCode Debug Console
-exec x/64b 0x74ae70
will display 64 bytes in hexadecimal from 0x74ae70. See https://sourceware.org/gdb/onlinedocs/gdb/Memory.html for more details.
At the time of writing (Jun 2020), it seems that this feature still doesn't exist in VS Code, link (and maybe the answer you are looking for). However, we are coders and we can make our own features ;). I am fairly new to C++, so this code might not be any good, but it works, and that is the important part. Here is the code for the memory view:
#include <iostream>
#include <string>
namespace mem
{
// Convert one byte (0-255) to a two-character uppercase hex string.
std::string IntToHexa(int num)
{
const char digits[] = "0123456789ABCDEF";
std::string output;
output += digits[(num >> 4) & 0xF]; // high nibble
output += digits[num & 0xF]; // low nibble
return output;
}
template<typename POINTER>
void MemView(POINTER pointer, int length = 10, int lines = 1)
{
unsigned char* ptr= (unsigned char*)pointer;
for(int x = 0; x < lines; x++)
{
std::cout << (void*)ptr << " ";
for(int i = 0; i < length; i++)
{
std::cout << IntToHexa((int)*ptr) << " ";
ptr++;
}
std::cout << std::endl;
}
}
}
So how do you use it, some may wonder. Call the MemView function: MemView(POINTER pointer, int length = 10, int lines = 1). The first parameter is a pointer (a non-function pointer). The second is how many bytes are shown in one row (10 by default). The last parameter is how many lines (each with the number of bytes set by the second parameter) get printed (1 by default). An example!
int main()
{
int a = 10;
int* ptr = &a;
mem::MemView(ptr);
}
Output: 0x7ffee0a7a6ec 0A 00 00 00 00 A7 A7 2E FE 7F
The memory is written in hexadecimal, which means that every pair is a byte. This is taken from the stack and is therefore in reversed order. Because an int is 4 bytes (0A 00 00 00) and it is in reversed order, you can see that (00 00 00 0A) has the value 10, which is the value of a. Another example!
int main()
{
int a = 10;
int* ptr = &a;
mem::MemView(ptr, 4, 2);
}
The output:
0x7ffee4bec6ec 0A 00 00 00
0x7ffee4bec6f0 00 C7 BE E4
Now, when the second parameter is 4, every row contains 4 bytes. There are 2 rows because the third parameter is set to 2. At the beginning of every row, a pointer to the first byte is shown. (The first byte in the next row follows the last byte in the previous row.) This works with almost every pointer type, for example int, char, double... However, it doesn't work with function pointers (try it and you will get an error).
It is coming, with a preview feature in VSCode 1.59 (Jul. 2021)
Preview feature: Disassembly View
Thanks to a large code contribution by the C++ team, we are happy to include a preview of a Disassembly View in this milestone.
The Disassembly view can be opened from an editor's context menu to show the disassembled source of the active stack frame, and it supports stepping through assembly instructions and setting breakpoints on individual instructions.
The Disassembly view is only available in an active debug session and when the underlying debug extension supports it.
As of today only the "C++" and "Mock Debug" extensions can feed the Disassembly view.
From a technical perspective VS Code's implementation of the Disassembly view now supports four more features of the Debug Adapter Protocol:
the disassembly request for providing the disassembled source for a memory location,
the instructionPointerReference property on stack frames,
the granularity property on the stepping requests,
instruction breakpoints and the setInstructionBreakpoints request.

Different behaviour of same code on different devices

I'm baffled. I have an identical program being uploaded to two different Arduino boards. It's in C++.
It's a much larger program, but I'm cutting it down to only the problematic part. Basically, I have a "host" Arduino and a "rover" Arduino communicating wirelessly. There are multiple rover units, but the problem is only happening on one of them. The rovers have motors that need to be calibrated, so I have static variables in my Motor namespace to hold those calibration values. To prevent having to change these values in the source code, recompile and reupload every time I want to calibrate it, I'm using the wireless system to allow the host to send calibration values to the rover at runtime.
Here's the problem: on one rover, the values aren't being updated if I call the ChangeSpeed method, but they do get updated if I modify the variables directly.
Let me stress that it works fine on four out of five rovers. The problem is happening on exactly one rover. The code being uploaded to each rover is identical.
The following code is causing a problem:
Motor.h:
namespace Motor
{
static unsigned char left_speed = 0;
static unsigned char right_speed = 0;
void ChangeSpeed(unsigned char, unsigned char);
}
Motor.cpp:
void Motor::ChangeSpeed(unsigned char l_speed, unsigned char r_speed)
{
left_speed = l_speed;
right_speed = r_speed;
soft.println("Change speed: " + String(left_speed) + ", " + String(right_speed));
}
Main.cpp:
void UpdateSpeedValuesBad(unsigned char l_speed, unsigned char r_speed)
{
Motor::ChangeSpeed(l_speed, r_speed);
soft.println("Motor write: " + String(l_speed) + ", " + String(r_speed));
}
void UpdateSpeedValuesGood(unsigned char l_speed, unsigned char r_speed)
{
Motor::left_speed = l_speed;
Motor::right_speed = r_speed;
soft.println("Motor write: " + String(l_speed) + ", " + String(r_speed));
}
void ReturnSpeedValues()
{
soft.println("Motor read: " + String(Motor::left_speed) + ", " + String(Motor::right_speed));
}
Case 1:
On the bad rover, the host invokes UpdateSpeedValuesBad(5, 5), and then invokes ReturnSpeedValues. The output is:
Change speed: 5, 5
Motor write: 5, 5
Motor read: 0, 0
Case 2:
On the bad rover, the host invokes UpdateSpeedValuesGood(5, 5), and then invokes ReturnSpeedValues. The output is:
Motor write: 5, 5
Motor read: 5, 5
Case 3:
On a good rover, the host invokes UpdateSpeedValuesBad(5, 5), and then invokes ReturnSpeedValues. The output is:
Change speed: 5, 5
Motor write: 5, 5
Motor read: 5, 5
Am I doing something fundamentally wrong? I come from a C# background so C++ is pretty alien to me. I have no idea if I'm doing something that has undefined behaviour.
Edit: If I shove everything into one single file, it works fine. It only fails once I split it up across a header file and a cpp file.
Main.cpp:
#include <SoftwareSerial.h>
SoftwareSerial soft(9, 10);
namespace Motor
{
static int left_speed = 0;
void ChangeSpeed(unsigned char);
}
void Motor::ChangeSpeed(unsigned char l_speed)
{
left_speed = l_speed;
soft.println("Change speed: " + String(left_speed));
}
void setup()
{
soft.begin(9600);
soft.println("Before: " + String(Motor::left_speed));
Motor::ChangeSpeed(5);
soft.println("Bad attempt: " + String(Motor::left_speed));
Motor::left_speed = 5;
soft.println("Good attempt: " + String(Motor::left_speed));
}
void loop()
{
}
Output:
Before: 0
Change speed: 5
Bad attempt: 5
Good attempt: 5
Edit 2: I dove into the assembly and found this for the bad case. It's using different memory addresses based on whether I call ChangeSpeed or I update the values directly. Anyone know why that would be? Is it a compiler bug or is it not guaranteed that the addresses will be the same?
000000a8 <setup>:
{
Motor::ChangeSpeed(5, 6);
a8: 85 e0 ldi r24, 0x05 ; 5
aa: 66 e0 ldi r22, 0x06 ; 6
ac: 0e 94 5f 00 call 0xbe ; 0xbe <_ZN5Motor11ChangeSpeedEhh>
Motor::left_speed = 5;
b0: 85 e0 ldi r24, 0x05 ; 5
b2: 80 93 00 01 sts 0x0100, r24
Motor::right_speed = 6;
b6: 86 e0 ldi r24, 0x06 ; 6
b8: 80 93 01 01 sts 0x0101, r24
}
bc: 08 95 ret
000000be <_ZN5Motor11ChangeSpeedEhh>:
void Motor::ChangeSpeed( unsigned char l_speed, unsigned char r_speed )
{
left_speed = l_speed;
be: 80 93 02 01 sts 0x0102, r24
right_speed = r_speed;
c2: 60 93 03 01 sts 0x0103, r22
c6: 08 95 ret
You should not make these variables static. A static global variable means the variable is local to the compilation unit (generally, the .cpp file that is being compiled) so if you have the static variable declared in a header file and include that header file in 3 different .cpp files that are compiled separately then you will have 3 independent versions of that variable, one for each .cpp file.
Instead, in the header file declare them as
namespace Motor {
extern unsigned char left_speed;
extern unsigned char right_speed;
void ChangeSpeed(unsigned char, unsigned char);
}
This tells the compiler that some file will provide a definition for these variables and to use that common shared definition.
Then, since the variables need to be defined exactly once (this is called the one definition rule) you should add the definition to Motor.cpp:
unsigned char Motor::left_speed = 0;
unsigned char Motor::right_speed = 0;
I chose Motor.cpp to hold the definition since this is where the definition of the ChangeSpeed function is.
In C++, the static keyword works very differently than in C#. It might be somewhat similar when used inside a class definition, but that is where the similarities end.
By declaring the variables static at namespace scope, you limit them to the current translation unit. In other words, by having static variables in your .h, you cause Motor.cpp and Main.cpp to have separate copies of those two variables.
Your case 1 modifies the Motor.cpp copies of those variables while outputting the ones from Main.cpp. Case 2 works only on the Main.cpp copies, so it works as expected. And if you shove everything into a single file, you just get one copy of those variables.
You should either:
Declare the variables as extern unsigned char left_speed, right_speed; in the header, and then define them as unsigned char Motor::left_speed = 0; (and likewise for right_speed) in one of the .cpp files; or
Declare the static variables in one of the .cpp files directly (e.g. Motor.cpp) and use functions to get their values, like you use one to set them.

Why are my TCP transfers corrupted on cygwin?

I am trying to debug why my TCP transfers are corrupted when sent from Cygwin. I see that only the first 24 bytes of each structure are showing up in my server program running on Centos. The 25th through 28th bytes are scrambled and all others after that are zeroed out. Going in the other direction, receiving from Centos on Cygwin, again only the first 24 bytes of each block are showing up in my server program running on Cygwin. The 25th through 40th bytes are scrambled and all others after that are zeroed out. I also see the issue when sending or receiving to/from localhost on Cygwin. For localhost, the first 34 bytes are correct and all after that are zeroed out.
The application I am working on works fine on Centos4 talking to Centos, and I am trying to port it to Cygwin. Valgrind reports no issues on Centos; I do not have Valgrind running on Cygwin. Both platforms are little-endian x86.
I've run Wireshark on the host Windows XP system under which Cygwin is running. When I sniff the packets with Wireshark they look perfect, for both sent packets from Cygwin and received packets to Cygwin.
Somehow, the data is corrupted between the level Wireshark looks at and the program itself.
The C++ code uses ::write(fd, buffer, size) and ::read(fd, buffer, size) to write and read the TCP packets where fd is a file descriptor for the socket that is opened between the client and server. This code works perfectly on Centos4 talking to Centos.
The strangest thing to me is that the packet sniffer shows the correct complete packet for all cases, yet the cygwin application never reads the complete packet or in the other direction, the Centos application never reads the complete packet.
Can anyone suggest how I might go about debugging this?
Here is some requested code:
size_t
read_buf(int fd, char *buf, size_t count, bool &eof, bool immediate)
{
if (count > SSIZE_MAX) {
throw;
}
size_t want = count;
size_t got = 0;
fd_set readFdSet;
int fdMaxPlus1 = fd + 1;
FD_ZERO(&readFdSet);
FD_SET(fd, &readFdSet);
while (got < want) {
errno = 0;
struct timeval timeVal;
const int timeoutSeconds = 60;
timeVal.tv_usec = 0;
timeVal.tv_sec = immediate ? 0 : timeoutSeconds;
int selectReturn = ::select(fdMaxPlus1, &readFdSet, NULL, NULL, &timeVal);
if (selectReturn < 0) {
throw;
}
if (selectReturn == 0 || !FD_ISSET(fd, &readFdSet)) {
throw;
}
errno = 0;
// Read buffer of length count.
ssize_t result = ::read(fd, buf, want - got);
if (result < 0) {
throw;
} else {
if (result != 0) {
// Not an error, increment the byte counter 'got' & the read pointer,
// buf.
got += result;
buf += result;
} else { // EOF because zero result from read.
eof = true;
break;
}
}
}
return got;
}
I've discovered more about this failure. The C++ class where the packet is being read into is laid out like this:
unsigned char _array[28];
long long _sequence;
unsigned char _type;
unsigned char _num;
short _size;
Apparently, the long long is getting scrambled with the four bytes that follow.
The C++ memory sent by Centos application, starting with _sequence, in hex, looks like this going to write():
_sequence: 45 44 35 44 33 34 43 45
_type: 05
_num: 33
_size: 02 71
Wireshark shows the memory laid out in network big-endian format like this in the packet:
_sequence: 45 43 34 33 44 35 44 45
_type: 05
_num: 33
_size: 71 02
But, after read() in the C++ cygwin little-endian application, it looks like this:
_sequence: 02 71 33 05 45 44 35 44
_type: 00
_num: 00
_size: 00 00
I'm stumped as to how this is occurring. It seems to be an issue with big-endian and little-endian, but the two platforms are both little-endian.
Here _array is 7 ints instead of 28 chars.
Complete memory dump at sender:
_array[0]: 70 a2 b7 cf
_array[1]: 9b 89 41 2c
_array[2]: aa e9 15 76
_array[3]: 9e 09 b6 e2
_array[4]: 85 49 08 81
_array[5]: bd d7 9b 1e
_array[6]: f2 52 df db
_sequence: 41 41 31 35 32 43 38 45
_type: 05
_num: 45
_size: 02 71
And at receipt:
_array[0]: 70 a2 b7 cf
_array[1]: 9b 89 41 2c
_array[2]: aa e9 15 76
_array[3]: 9e 09 b6 e2
_array[4]: 85 49 08 81
_array[5]: bd d7 9b 1e
_array[6]: f2 52 df db
_sequence: 02 71 45 05 41 41 31 35
_type: 0
_num: 0
_size: 0
Cygwin test result:
4
8
48
0x22be08
0x22be28
0x22be31
0x22be32
0x22be38
Centos test result:
4
8
40
0xbfffe010
0xbfffe02c
0xbfffe035
0xbfffe036
0xbfffe038
Now that you've shown the data, your problem is clear. You're not controlling the alignment of your struct, so the compiler is automatically putting the 8 byte field (the long long) on an 8 byte boundary (offset 32) from the start of the struct, leaving 4 bytes of padding.
Change the alignment to 1 byte and everything should resolve. Here's the snippet you need:
__attribute__ ((aligned (1))) __attribute__ ((packed))
I also suggest that you use the fixed-size types for structures being blitted across the network, e.g. uint8_t, uint32_t, uint64_t
Previous thoughts:
With TCP, you don't read and write packets. You read and write from a stream of bytes. Packets are used to carry these bytes, but boundaries are not preserved.
Your code looks like it deals with this reasonably well, you might want to update the wording of your question.
Hopefully final update :-)
Based on your latest update, Centos is packing your structures at the byte level whilst CygWin is not. This causes alignment problems. I'm not sure why the CygWin-to-CygWin case is having problems since the padding should be identical there but I can tell you how to fix the other case.
Using the code I gave earlier:
#include <stdio.h>
typedef struct {
unsigned char _array[28];
long long _sequence;
unsigned char _type;
unsigned char _num;
short _size;
} tType;
int main (void) {
tType t[2];
printf ("%d\n", sizeof(long));
printf ("%d\n", sizeof(long long));
printf ("%d\n", sizeof(tType));
printf ("%p\n", &(t[0]._array));
printf ("%p\n", &(t[0]._sequence));
printf ("%p\n", &(t[0]._num));
printf ("%p\n", &(t[0]._size));
printf ("%p\n", &(t[1]));
return 0;
}
If you don't want any padding, you have two choices. The first is to re-organise your structure to put the more restrictive types up front:
typedef struct {
long long _sequence;
short _size;
unsigned char _array[28];
unsigned char _type;
unsigned char _num;
} tType;
which gives you:
4
8
40
0x22cd42
0x22cd38
0x22cd5f
0x22cd40
0x22cd60
In other words, each structure is exactly 40 bytes (8 for sequence, 2 for size, 28 for array and 1 each for type and num). But this may not be possible if you want it in a specific order.
In that case, you can force the alignments to be on a byte level with:
typedef struct {
unsigned char _array[28];
long long _sequence;
unsigned char _type;
unsigned char _num;
short _size;
} __attribute__ ((aligned(1),packed)) tType;
The aligned(1) sets it to byte alignment but that won't affect much since objects don't like having their alignments reduced. To force that, you need to use packed as well.
Doing that gives you:
4
8
40
0x22cd3c
0x22cd58
0x22cd61
0x22cd62
0x22cd64
Earlier history, for posterity:
Well, since I wget and ftp huge files just fine from CygWin, my psychic debugging skills tell me it's more likely to be a problem with your code rather than the CygWin software.
In other words, regarding the sentence "the packets are corrupted between the level Wireshark looks at and the program itself", I'd be seriously looking towards the upper end of that scale rather than the lower end :-)
Usually, it's the case that you've assumed a read will get the whole packet that was sent rather than bits at a time but, without seeing the code in question, that's a pretty wild guess.
Make sure you're checking the return value from read to see how many bytes are actually being received. Beyond that, post the code responsible for the read so we can give a more in-depth analysis.
Based on your posted code, it looks okay. The only thing I can suggest is that you check that the buffers you're passing in are big enough and, even if they are, make sure you print them immediately after return in case some other piece of code is corrupting the data.
In fact, in re-reading your question more closely, I'm a little confused. You state you have the same problem with your server code on both Linux and CygWin yet say it's working on Centos.
My only advice at this point is to put debugging printf statements in that function you've shown, such as after the select and read calls to output the relevant variables, including got and buf after changing them, and also in every code path so you can see what it's doing. And also dump the entire structure byte-for-byte at the sending end.
This will hopefully show you immediately where the problem lies, especially since you seem to have data showing up in the wrong place.
And make sure your types are compatible at both ends. By that, I mean if long long is different sizes on the two platforms, your data will be misaligned.
Okay, checking alignments at both ends, compile and run this program on both systems:
#include <stdio.h>
typedef struct {
unsigned char _array[28];
long long _sequence;
unsigned char _type;
unsigned char _num;
short _size;
} tType;
int main (void) {
tType t[2];
printf ("%d\n", sizeof(long));
printf ("%d\n", sizeof(long long));
printf ("%d\n", sizeof(tType));
printf ("%p\n", &(t[0]._array));
printf ("%p\n", &(t[0]._sequence));
printf ("%p\n", &(t[0]._num));
printf ("%p\n", &(t[0]._size));
printf ("%p\n", &(t[1]));
return 0;
}
On my CygWin, I get:
4 long size
8 long long size
48 structure size
0x22cd30 _array start (size = 28, padded to 32)
0x22cd50 _sequence start (size = 8, padded to 9???)
0x22cd59 _type start (size = 1)
0x22cd5a _size start (size = 2, padded to 6 for long long alignment).
0x22cd60 next array element.
The only odd bit there is the padding before _type but that's certainly valid though unexpected.
Check the output from Centos to see if it's incompatible. However, your statement that CygWin-to-CygWin doesn't work is incongruous with that possibility, since the alignments and sizes would be compatible (unless your sending and receiving code is compiled differently).