Send a struct via UDP - C++

I would like to send a struct via a QUdpSocket.
I know I should use QDataStream and QByteArray, but I can't because the receiver will not use Qt.
I tried many things, but I never found anything that seems to do the job properly.
My struct will be:
typedef struct myStruct
{
    int nb_trame;
    std::vector<bool> vBool;
    std::vector<int> vInt;
    std::vector<float> vFloat;
} myStruct;
How do I proceed to do that properly?

The solution to this is called serialization:
Serialization is the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer, or transmitted across a network connection link) and reconstructed later in the same or another computer environment.
The following is a fully working example of how to serialize the mentioned struct using QDataStream:
// main.cpp
#include <limits>
#include <vector>

#include <QDataStream>
#include <QVector>

typedef struct myStruct
{
    int nb_trame;
    std::vector<bool> vBool;
    std::vector<int> vInt;
    std::vector<float> vFloat;

    void serialize(QDataStream &out) {
        out << nb_trame;
        out << QVector<bool>::fromStdVector(vBool);
        out << QVector<qint32>::fromStdVector(vInt);
        out << QVector<float>::fromStdVector(vFloat);
    }
} myStruct;

void fillData(myStruct &s) {
    s.nb_trame = 0x42;

    s.vBool.push_back(true);
    s.vBool.push_back(false);
    s.vBool.push_back(false);
    s.vBool.push_back(true);

    s.vInt.push_back(0xB0);
    s.vInt.push_back(0xB1);
    s.vInt.push_back(0xB2);
    s.vInt.push_back(0xB3);

    s.vFloat.push_back(std::numeric_limits<float>::min());
    s.vFloat.push_back(0.0);
    s.vFloat.push_back(std::numeric_limits<float>::max());
}

int main()
{
    myStruct s;
    fillData(s);

    QByteArray buf;
    QDataStream out(&buf, QIODevice::WriteOnly);
    s.serialize(out);
}
Then you can send buf with QUdpSocket::writeDatagram().
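For completeness, a minimal sketch of that send step, continuing the main() above (not part of the original answer; the address and port are made up, and QUdpSocket lives in the Qt Network module):

#include <QHostAddress>
#include <QUdpSocket>

    // ... after s.serialize(out) has filled buf:
    QUdpSocket socket;
    // Hypothetical receiver address and port; adjust to the actual peer.
    socket.writeDatagram(buf, QHostAddress("127.0.0.1"), 4242);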
How QDataStream serializes
If we replace
QByteArray buf;
QDataStream out(&buf, QIODevice::WriteOnly);
with
QFile file("file.dat");
file.open(QIODevice::WriteOnly);
QDataStream out(&file);
then the serialized data gets written to the file "file.dat" instead. This is the data the code above generates:
> hexdump -C file.dat
00000000 00 00 00 42 00 00 00 04 01 00 00 01 00 00 00 04 |...B............|
00000010 00 00 00 b0 00 00 00 b1 00 00 00 b2 00 00 00 b3 |................|
00000020 00 00 00 03 38 10 00 00 00 00 00 00 00 00 00 00 |....8...........|
00000030 00 00 00 00 47 ef ff ff e0 00 00 00 |....G.......|
The data starts with four bytes that represent the member nb_trame (00 00 00 42)
The next eight bytes are the serialized form of the vector vBool (00 00 00 04 01 00 00 01)
00 00 00 04 --> Number of entries in the vector
01 00 00 01 --> True, False, False, True
The next 20 bytes are for vInt (00 00 00 04 00 00 00 b0 00 00 00 b1 00 00 00 b2 00 00 00 b3)
00 00 00 04 --> Number of entries in the vector
00 00 00 b0 00 00 00 b1 00 00 00 b2 00 00 00 b3 --> 0xB0, 0xB1, 0xB2, 0xB3 (4 bytes per entry)
The next 28 bytes are for vFloat (00 00 00 03 38 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 47 ef ff ff e0 00 00 00)
00 00 00 03 --> Number of entries in the vector
38 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 47 ef ff ff e0 00 00 00 --> 1.1754943508222875e-38, 0.0, 3.4028234663852886e+38 (8 bytes per entry, because QDataStream writes floating-point values with double precision by default)
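Since the receiver in the question does not use Qt, here is a small sketch (not part of the original answer) of how the beginning of such a datagram could be parsed by hand, assuming exactly the layout described above: big-endian 32-bit values and one byte per bool. The vInt and vFloat parts follow the same pattern (with each float stored as an 8-byte double):

#include <cstdint>
#include <iostream>
#include <vector>

// Reads a big-endian 32-bit value starting at p.
static std::uint32_t read_be32(const unsigned char *p)
{
    return (std::uint32_t)p[0] << 24 | (std::uint32_t)p[1] << 16 |
           (std::uint32_t)p[2] << 8  | (std::uint32_t)p[3];
}

// Parses nb_trame and vBool from the start of a received datagram.
void parseStart(const unsigned char *data)
{
    std::int32_t nb_trame = (std::int32_t)read_be32(data);    // bytes 0-3
    std::uint32_t boolCount = read_be32(data + 4);             // bytes 4-7
    std::vector<bool> vBool;
    for (std::uint32_t i = 0; i < boolCount; ++i)
        vBool.push_back(data[8 + i] != 0);                     // one byte per bool
    std::cout << "nb_trame: " << nb_trame
              << ", vBool entries: " << vBool.size() << "\n";
}

int main()
{
    // The first 12 bytes from the hexdump above.
    const unsigned char datagram[] = { 0x00, 0x00, 0x00, 0x42,  0x00, 0x00, 0x00, 0x04,
                                       0x01, 0x00, 0x00, 0x01 };
    parseStart(datagram);   // prints: nb_trame: 66, vBool entries: 4
}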
Additional information (Original posting)
Serialization is not a trivial topic, but there are many libraries out there that can help you with it. In the end you have two options to choose from:
Define your own serialization format
Use an existing serialization format
Binary
Text based (e.g. JSON, XML)
Which one you choose depends highly on your needs and use cases. In general I would prefer established formats over self-brewed ones. Text-based formats are by nature less compact, so they need more space and therefore more bandwidth; this is something you should take into account when deciding on a format. On the other hand, text-based/human-readable formats have the advantage of being easier to debug, as you can open them in a text editor. And there are many more factors you should consider.
Serialization works because you do not rely on machine-dependent details. The only thing you have to take care of is that the serialized data is consistent and follows the defined format: you know exactly how the byte order is defined, where specific data is stored, and so on.
The idea is that the sender serializes the data, sends it over whatever channel is required, and the receiver deserializes the data again. The format in which the data is stored on each of the two sides doesn't matter.
+--------------------------------+              +--------------------------------+
|             Host A             |              |             Host B             |
|                                |              |                                |
|                                |              |                                |
|                                |              |                                |
|  +-------------------------+   |              |  +-------------------------+   |
|  |        Raw data         |   |              |  |        Raw data         |   |
|  | (Specific to platform A)|   |              |  | (Specific to platform B)|   |
|  +-------------------------+   |              |  +-------------------------+   |
|              |                 |              |              ^                 |
|              | serialize       |              |              | deserialize     |
|              v                 |              |              |                 |
|     +-----------------+        |   transmit   |     +-----------------+        |
|     | Serialized Data +---------------------------->| Serialized Data |        |
|     +-----------------+        |              |     +-----------------+        |
|                                |              |                                |
+--------------------------------+              +--------------------------------+

Related

Getting e_lfanew from a dll, yielding E8 and not F8?

I'm reading a DLL file into a buffer (pSrcData); from here I wanted to print e_lfanew:
bool readDll(const char* dllfile)
{
    BYTE* pSrcData;
    std::ifstream File(dllfile, std::ios::binary | std::ios::ate);
    auto FileSize = File.tellg();
    pSrcData = new BYTE[static_cast<UINT_PTR>(FileSize)];
    File.seekg(0, std::ios::beg);
    File.read(reinterpret_cast<char*>(pSrcData), FileSize);
    File.close();

    std::cout << std::hex << reinterpret_cast<IMAGE_DOS_HEADER*>(pSrcData)->e_lfanew;

    pOldNtHeader = reinterpret_cast<IMAGE_NT_HEADERS*>(pSrcData + reinterpret_cast<IMAGE_DOS_HEADER*>(pSrcData)->e_lfanew);
    return true;
}
Output: E8
Opening the DLL in HxD I get this (address 00000000 - 00000030):
4D 5A 90 00 03 00 00 00 04 00 00 00 FF FF 00 00
B8 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 F8 00 00 00
Meaning e_lfanew should be F8. However, I get E8 when running the code above. Can anyone see what I'm doing wrong?
Addition:
Getting e_magic works: std::cout << std::hex << reinterpret_cast<IMAGE_DOS_HEADER*>(pSrcData)->e_magic yields 5a4d, which in little endian translates to 4D 5A.
Sorry, I found that setting the configuration in Visual Studio 2019 to x86 Release gives e_lfanew as F9, while x86 Debug gives E8. I was comparing different debug/release versions.

Incomplete binary data between QSignal and QSlot

I have in my Qt code a function f1() that emits a signal with a char array of binary data as a parameter.
My problem is that the slot connected to this signal receives the array, but the data is incomplete: it only receives the data up to the first 0x00 byte.
I tried changing the char[] to char*, but that didn't help.
What can I do to receive the full data, including the 0x00 bytes?
connect(dataStream, &BaseConnection::GotPacket, this, &myClass::HandleNewPacket);

void f1()
{
    qDebug() << "Binary read = " << inStream.readRawData(logBuffer, static_cast<int>(frmIndex->dataSize));
    // logBuffer contains the following hexa bytes:
    // "10 10 01 30 00 00 30 00 00 00 01 00 D2 23 57 A5 38 A2 05 00 E8 03 00 00 6C E9 01 00 00 00 00 00 0B 00 00 00 00 00 00 00 A6 AF 01 00 00 00 00 00"
    Q_EMIT GotPacket(logBuffer, frmIndex->dataSize);
}

void myClass::HandleNewPacket(char p[LOG_BUFFER_SIZE], int size)
{
    // p contains the following hexa bytes: "10 10 01 30"
}
Thank you.
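For reference (this is not from the thread): truncation at the first 0x00 is what happens whenever the array is treated as a null-terminated string at some point. Wrapping the buffer in a QByteArray constructed with an explicit size avoids that, because QByteArray stores its length and keeps embedded 0x00 bytes, and it can be passed through signals and slots directly. A small self-contained illustration:

#include <QByteArray>
#include <QDebug>

int main()
{
    // First bytes of the logBuffer shown above.
    const char raw[] = { 0x10, 0x10, 0x01, 0x30, 0x00, 0x00, 0x30, 0x00 };

    // Constructing with an explicit size keeps the embedded 0x00 bytes ...
    QByteArray packet(raw, sizeof(raw));
    qDebug() << packet.size();           // 8

    // ... while the null-terminated constructor stops at the first 0x00.
    qDebug() << QByteArray(raw).size();  // 4
}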

couldn't write specific content into stringstream

I have some sample code that reads binary data from a file and then writes the content into a stringstream.
#include <sstream>
#include <cstdio>
#include <fstream>
#include <cstdlib>

std::stringstream *raw_data_buffer;

int main()
{
    std::ifstream is;
    is.open("1.raw", std::ios::binary);

    char *buf = (char *)malloc(40);
    is.read(buf, 40);
    for (int i = 0; i < 40; i++)
        printf("%02X ", buf[i]);
    printf("\n");

    raw_data_buffer = new std::stringstream("", std::ios_base::app | std::ios_base::out | std::ios_base::in | std::ios_base::binary);
    raw_data_buffer->write(buf, 40);

    const char *tmp = raw_data_buffer->str().c_str();
    for (int i = 0; i < 40; i++)
        printf("%02X ", tmp[i]);
    printf("\n");

    delete raw_data_buffer;
    return 0;
}
With a specific input file I have, the program doesn't function correctly. You could download the test file here.
So the problem is, I write the file content into raw_data_buffer and immediately read it back, and the content differs. The program's output is:
FFFFFFC0 65 59 01 00 00 00 00 00 00 00 00 00 00 00 00 FFFFFFE0 0A 40 00 00 00 00 00 FFFFFF80 08 40 00 00 00 00 00 70 FFFFFFA6 57 6E FFFFFFFF 7F 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 FFFFFFE0 0A 40 00 00 00 00 00 FFFFFF80 08 40 00 00 00 00 00 70 FFFFFFA6 57 6E FFFFFFFF 7F 00 00
The content FFFFFFC0 65 59 01 is overwritten with 0. Why so?
I suspect this is a symptom of undefined behavior from using deallocated memory. You're getting a copy of the string from the stringstream, but you're only grabbing a raw pointer to the internals of that temporary, which is then immediately destroyed (the link actually warns against this exact case).
const char* tmp = raw_data_buffer->str().c_str();
//                                 ^^^^^ returns a temporary that is destroyed
//                                       at the end of this statement
//          ^^^ now a dangling pointer
Any use of tmp would exhibit undefined behavior and could easily cause the problem you're seeing. Keep the result of str() in scope.
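A minimal sketch of that fix, dropping in for the tmp lines of the program above: keep the std::string returned by str() in a named variable so the buffer behind c_str() stays valid while it is read (the unsigned-char cast is an extra touch, not part of the original answer, so the %02X output is not sign-extended):

    std::string contents = raw_data_buffer->str();  // named copy, stays in scope
    const char *tmp = contents.c_str();             // valid as long as 'contents' lives

    for (int i = 0; i < 40; i++)
        printf("%02X ", static_cast<unsigned char>(tmp[i]));
    printf("\n");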

TAR file format issue

It is unclear to me what the correct .tar file format is, as I am seeing proper functionality in all three scenarios below.
Based on the .tar specification I have been working with, the magic field (ustar) is a null-terminated character string and the version field is an octal number with no trailing nulls.
However, I've reviewed several .tar files I found on my server and found different implementations of the magic and version fields, and all three of them seem to work properly, probably because the system ignores those fields.
See the three differing bytes between the words ustar and root in the following examples:
Scenario 1 (20 20 00):
000000F0 00 00 00 00 | 00 00 00 00 | 00 00 00 00 ............
000000FC 00 00 00 00 | 00 75 73 74 | 61 72 20 20 .....ustar
00000108 00 72 6F 6F | 74 00 00 00 | 00 00 00 00 .root.......
00000114 00 00 00 00 | 00 00 00 00 | 00 00 00 00 ............
Scenario 2 (00 20 20):
000000F0 00 00 00 00 | 00 00 00 00 | 00 00 00 00 ............
000000FC 00 00 00 00 | 00 75 73 74 | 61 72 00 20 .....ustar.
00000108 20 72 6F 6F | 74 00 00 00 | 00 00 00 00 root.......
00000114 00 00 00 00 | 00 00 00 00 | 00 00 00 00 ............
Scenario 3 (00 00 00):
000000F0 00 00 00 00 | 00 00 00 00 | 00 00 00 00 ............
000000FC 00 00 00 00 | 00 75 73 74 | 61 72 00 00 .....ustar..
00000108 00 72 6F 6F | 74 00 00 00 | 00 00 00 00 .root.......
00000114 00 00 00 00 | 00 00 00 00 | 00 00 00 00 ............
Which one is the correct format?
In my opinion none of your examples is the correct one, at least not for the POSIX format.
As you can read here:
/* tar Header Block, from POSIX 1003.1-1990. */

/* POSIX header */
struct posix_header
{                               /* byte offset */
    char name[100];             /*   0 */
    char mode[8];               /* 100 */
    char uid[8];                /* 108 */
    char gid[8];                /* 116 */
    char size[12];              /* 124 */
    char mtime[12];             /* 136 */
    char chksum[8];             /* 148 */
    char typeflag;              /* 156 */
    char linkname[100];         /* 157 */
    char magic[6];              /* 257 */
    char version[2];            /* 263 */
    char uname[32];             /* 265 */
    char gname[32];             /* 297 */
    char devmajor[8];           /* 329 */
    char devminor[8];           /* 337 */
    char prefix[155];           /* 345 */
};

#define TMAGIC   "ustar"        /* ustar and a null */
#define TMAGLEN  6
#define TVERSION "00"           /* 00 and no null */
#define TVERSLEN 2
The format of your first example (Scenario 1) seems to match the old GNU header format:
/* OLDGNU_MAGIC uses both magic and version fields, which are contiguous.
   Found in an archive, it indicates an old GNU header format, which will be
   hopefully become obsolescent. With OLDGNU_MAGIC, uname and gname are
   valid, though the header is not truly POSIX conforming. */
#define OLDGNU_MAGIC "ustar  "  /* 7 chars and a null */
In both your second and third examples (Scenario 2 and Scenario 3), the version field is set to an unexpected value (according to the above documentation, the correct value should be "00" in ASCII, i.e. 0x30 0x30 in hex), so this field is most likely ignored.
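To make the field layout concrete, here is a minimal sketch (not part of the original answer; the file name is hypothetical) that reads the magic and version fields at byte offset 257 and classifies the header accordingly:

#include <cstring>
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream tar("archive.tar", std::ios::binary);   // hypothetical file name

    char magic[6] = {};     // "ustar\0" for POSIX
    char version[2] = {};   // "00" for POSIX, " \0" for old GNU
    tar.seekg(257, std::ios::beg);                         // magic starts at offset 257
    tar.read(magic, 6);
    tar.read(version, 2);                                  // version follows at offset 263

    if (std::memcmp(magic, "ustar\0", 6) == 0 && version[0] == '0' && version[1] == '0')
        std::cout << "POSIX (ustar) header\n";
    else if (std::memcmp(magic, "ustar ", 6) == 0)         // magic+version hold "ustar  \0"
        std::cout << "old GNU header\n";
    else
        std::cout << "unknown or pre-POSIX header\n";
}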
On Fedora 18, if I execute this command:
tar --format=posix -cvf testPOSIX.tar test.txt
I get a POSIX tar file format with: ustar\0 (0x757374617200)
whereas if I execute this:
tar --format=gnu -cvf testGNU.tar test.txt
I get a GNU tar file format with: ustar 0x20 0x20 0x00 (0x7573746172202000) (old GNU format)
From /usr/share/magic file:
# POSIX tar archives
257 string ustar\0 POSIX tar archive
!:mime application/x-tar # encoding: posix
257 string ustar\040\040\0 GNU tar archive
!:mime application/x-tar # encoding: gnu
0x20 is 40 in octal.
I've also tried editing the hex code to:
00 20 20
and the tar still worked correctly; I extracted test.txt without a problem.
But when I tried editing the hex code to:
00 00 00
the tar was not recognized.
So my conclusion is that the correct format is:
20 20 00

Accessing specific binary information based on binary format documentation

I have a binary file and documentation of the format the information is stored in. I'm trying to write a simple program using C++ that pulls a specific piece of information from the file, but I'm missing something since the output isn't what I expect.
The documentation is as follows:
Half-word   Field Name      Type    Units     Range         Precision
10          Block Divider   INT*2   N/A       -1            N/A
11-12       Latitude        INT*4   Degrees   -90 to +90    0.001
There are obviously other items in the file, but for this case I'm just trying to get the Latitude value.
My code is:
#include <cstdlib>
#include <iostream>
#include <fstream>

using namespace std;

int main(int argc, char* argv[])
{
    char* dataFileLocation = "testfile.bin";
    ifstream dataFile(dataFileLocation, ios::in | ios::binary);

    if (dataFile.is_open())
    {
        char* buffer = new char[32768];
        dataFile.seekg(10, ios::beg);
        dataFile.read(buffer, 4);
        dataFile.close();
        cout << "value is " << (int)(buffer[0] & 255);
    }
}
The result of this is "value is 226", which is not in the allowed range.
I'm quite new to this, and here's what my intentions were when writing the above code:
Open file in binary mode
Seek to the 11th byte from the start of the file
Read in 4 bytes from that point
Close the file
Output those 4 bytes as an integer.
If someone could point out where I'm going wrong I'd sure appreciate it. I don't really understand the (buffer[0] & 255) part (took that from some example code) so layman's terms for that would be greatly appreciated.
Hex Dump of the first 100 bytes:
testfile.bin 98,402 bytes 11/16/2011 9:01:52
-0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -A -B -C -D -E -F
00000000- 00 5F 3B BF 00 00 C4 17 00 00 00 E2 2E E0 00 00 [._;.............]
00000001- 00 03 FF FF 00 00 94 70 FF FE 81 30 00 00 00 5F [.......p...0..._]
00000002- 00 02 00 00 00 00 00 00 3B BF 00 00 C4 17 3B BF [........;.....;.]
00000003- 00 00 C4 17 00 00 00 00 00 00 00 00 80 02 00 00 [................]
00000004- 00 05 00 0A 00 0F 00 14 00 19 00 1E 00 23 00 28 [.............#.(]
00000005- 00 2D 00 32 00 37 00 3C 00 41 00 46 00 00 00 00 [.-.2.7.<.A.F....]
00000006- 00 00 00 00 [.... ]
Since the documentation lists the field as an integer but shows the precision to be 0.001, I would assume that the actual value is the stored value multiplied by 0.001. The integer range would be -90000 to 90000.
The 4 bytes must be combined into a single integer. There are two ways to do this, big endian and little endian, and which you need depends on the machine that wrote the file. x86 PCs for example are little endian.
int little_endian = buffer[0] | buffer[1]<<8 | buffer[2]<<16 | buffer[3]<<24;
int big_endian = buffer[0]<<24 | buffer[1]<<16 | buffer[2]<<8 | buffer[3];
The &255 is used to remove the sign extension that occurs when you convert a signed char to a signed integer. Use unsigned char instead and you probably won't need it.
Edit: I think "half-word" refers to 2 bytes, so you'll need to skip 20 bytes instead of 10.
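Putting those pieces together, a minimal sketch (assuming big-endian storage and the 20-byte offset suggested in the edit; the multiplication by 0.001 applies the documented precision):

#include <cstdint>
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream dataFile("testfile.bin", std::ios::in | std::ios::binary);

    unsigned char buffer[4] = {};            // unsigned char avoids the &255 dance
    dataFile.seekg(20, std::ios::beg);       // half-words 11-12 -> byte offset 20
    dataFile.read(reinterpret_cast<char*>(buffer), 4);

    // Combine the four bytes big-endian into a signed 32-bit value.
    std::uint32_t u = (std::uint32_t)buffer[0] << 24 | (std::uint32_t)buffer[1] << 16 |
                      (std::uint32_t)buffer[2] << 8  | (std::uint32_t)buffer[3];
    std::int32_t raw = (std::int32_t)u;

    double latitude = raw * 0.001;           // documented precision of 0.001
    std::cout << "latitude: " << latitude << "\n";
}

As a sanity check against the hex dump above: bytes 18-19 (half-word 10) are FF FF, i.e. -1, matching the documented Block Divider, and bytes 20-23 are 00 00 94 70 = 38000, i.e. 38.000 degrees, which suggests the 20-byte offset and big-endian reading are right.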