Storing an integer in a QByteArray using only 4 bytes - C++

It takes 4 bytes to represent an integer. How can I store an int in a QByteArray so that it only takes 4 bytes?
QByteArray::number(..) converts the integer to string thus taking up more than 4 bytes.
QByteArray((const char*)&myInteger,sizeof(int)) also doesn't seem to work.

There are several ways to place an integer into a QByteArray, but the following is usually the cleanest:
QByteArray byteArray;
QDataStream stream(&byteArray, QIODevice::WriteOnly);
stream << myInteger;
This has the advantage of allowing you to write several integers (or other data types) to the byte array fairly conveniently. It also allows you to set the endianness of the data using QDataStream::setByteOrder.
Update
While the solution above will work, the method used by QDataStream to store integers can change in future versions of Qt. The simplest way to ensure that it always works is to explicitly set the version of the data format used by QDataStream:
QDataStream stream(&byteArray, QIODevice::WriteOnly);
stream.setVersion(QDataStream::Qt_5_10); // Or use earlier version
Alternatively, you can avoid using QDataStream altogether and use a QBuffer:
#include <QBuffer>
#include <QByteArray>
#include <QtEndian>
...
QByteArray byteArray;
QBuffer buffer(&byteArray);
buffer.open(QIODevice::WriteOnly);
myInteger = qToBigEndian(myInteger); // Or qToLittleEndian, if necessary.
buffer.write((char*)&myInteger, sizeof(qint32));
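Without Qt, the same fixed-width, endian-explicit packing can be sketched in standard C++. This is a minimal illustration of what qToBigEndian plus buffer.write() accomplishes; the function name packBigEndian is mine, not part of any library:

```cpp
#include <cstdint>
#include <vector>

// Pack a 32-bit integer into exactly 4 bytes, most significant byte
// first (big-endian), regardless of the host CPU's native byte order.
std::vector<unsigned char> packBigEndian(std::uint32_t value) {
    return {
        static_cast<unsigned char>((value >> 24) & 0xFF),
        static_cast<unsigned char>((value >> 16) & 0xFF),
        static_cast<unsigned char>((value >> 8) & 0xFF),
        static_cast<unsigned char>(value & 0xFF),
    };
}
```

Shifting instead of copying raw memory makes the output independent of the machine's endianness, which is the whole point of calling qToBigEndian before writing.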

@Primož Kralj did not get around to posting a solution for his second method, so here it is:
int myInt = 0xdeadbeef;
QByteArray qba(reinterpret_cast<const char *>(&myInt), sizeof(int));
qDebug("QByteArray has bytes %s", qPrintable(qba.toHex(' ')));
prints:
QByteArray has bytes ef be ad de
on an x64 machine.
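The same raw-copy technique works in standard C++ with memcpy, which avoids the cast entirely; the bytes come out in whatever order the CPU stores them (reversed on little-endian x64, as the output above shows). A minimal sketch with illustrative names:

```cpp
#include <array>
#include <cstdint>
#include <cstring>

// Copy the native in-memory representation of a 32-bit integer into a
// byte array. The byte order is the CPU's native order.
std::array<unsigned char, sizeof(std::uint32_t)> intToBytes(std::uint32_t v) {
    std::array<unsigned char, sizeof(std::uint32_t)> out{};
    std::memcpy(out.data(), &v, sizeof v);
    return out;
}
```

Because the byte order is platform-dependent, this representation is only safe to read back on a machine with the same endianness.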

Recently I faced the same problem with a small variation: I had to store a vector of unsigned short values in a QByteArray. The QDataStream trick did not work, for reasons unknown. So my solution is:
QVector<uint16_t> d = {1, 2, 3, 4, 5};
QByteArray dd((const char*)d.data(), d.size() * sizeof(uint16_t));
The way to get the vector back is:
QVector<uint16_t> D;
for (int i = 0; i < dd.size() / int(sizeof(uint16_t)); ++i) {
    D.push_back(*(const uint16_t*)(dd.data() + i * sizeof(uint16_t)));
}
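A standard-C++ sketch of the same round trip, using memcpy instead of pointer casts (memcpy also works when the buffer happens to be misaligned for uint16_t access); the function names are illustrative:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Serialize a vector of 16-bit values into raw bytes (native byte order).
std::vector<char> toBytes(const std::vector<std::uint16_t>& v) {
    std::vector<char> bytes(v.size() * sizeof(std::uint16_t));
    std::memcpy(bytes.data(), v.data(), bytes.size());
    return bytes;
}

// Recover the vector from the raw bytes.
std::vector<std::uint16_t> fromBytes(const std::vector<char>& bytes) {
    std::vector<std::uint16_t> v(bytes.size() / sizeof(std::uint16_t));
    std::memcpy(v.data(), bytes.data(), v.size() * sizeof(std::uint16_t));
    return v;
}
```

As with the QByteArray version, the bytes are in native order, so the data is only portable between machines of the same endianness.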

Related

How to append an enum represented by a series of hex values to QByteArray?

I have some enums which are represented by series of hex values in the following manner:
enum someEnum
{
    NameA = 0x2121,
    NameB = 0x2223,
    NameC = 0x2020
};
I want to append one of these enums to QByteArray in the following way:
QByteArray anArray;
anArray.append(NameA);
But this approach produces the warning
implicit conversion from 'int' to 'char' changes value from 8481 to 33.
In fact, even if I do the following:
anArray.append(static_cast<char>(NameA));
it only appends 0x21 (in decimal 33).
I also tried doing the following:
const char * t = reinterpret_cast<char*>(NameA);
anArray.append(t, sizeof(t));
but that leads to a segmentation fault.
I could of course do the following without any loss of value or crash or any other problem:
anArray.append(0x21);
anArray.append(0x21);
But I don't want that, I want to directly append the enum. Could you please suggest a correct way to do it?
Thanks a lot.
Probably you can use QDataStream:
QByteArray byteArray;
QDataStream dataStream(&byteArray, QIODevice::WriteOnly);
dataStream << NameA;
Sorry, but I do not have Qt available right now, so I could not test this.
Use the following code:
QByteArray byteArray;
// this will store the integer as a hex string, e.g. "2121"
byteArray.append(QByteArray::number(NameA, 16));
// this will store the integer as a base-10 string, e.g. "8481"
byteArray.append(QByteArray::number(NameA));
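Since every enumerator here fits in 16 bits, another option is to append exactly two raw bytes per value. A standard-C++ sketch (the helper name appendEnum is mine, and big-endian order is an arbitrary choice; adjust to whatever your protocol expects):

```cpp
#include <cstdint>
#include <vector>

enum someEnum
{
    NameA = 0x2121,
    NameB = 0x2223,
    NameC = 0x2020
};

// Append a 16-bit enumerator to a byte buffer, high byte first.
void appendEnum(std::vector<unsigned char>& buf, someEnum e) {
    std::uint16_t v = static_cast<std::uint16_t>(e);
    buf.push_back(static_cast<unsigned char>(v >> 8));
    buf.push_back(static_cast<unsigned char>(v & 0xFF));
}
```

This produces the two bytes 0x21 0x21 for NameA without any implicit narrowing warning, because the split into bytes is explicit.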

Qt QDataStream: operator>> for quint16 - I didn't get it at all

I have such code:
QByteArray portnoStr = "41034";
quint16 portno;
QDataStream stream(&portnoStr, QIODevice::ReadOnly);
stream >> portno;
std::cout << "portno: " << portno << "\n";
And, completely unexpectedly, it prints
portno: 13361
I look at the code of Qt (4x + 5x):
inline QDataStream &QDataStream::operator>>(quint16 &i)
{ return *this >> reinterpret_cast<qint16&>(i); }
Now I understand why it gives me this result,
but I cannot understand why QDataStream has such a strange implementation.
QDataStream is not meant for converting data from one type to another in order to display text. From the docs:
You can also use a data stream to read/write raw unencoded binary data. If you want a "parsing" input stream, see QTextStream.
The QDataStream class implements the serialization of C++'s basic data types, like char, short, int, char *, etc. Serialization of more complex data is accomplished by breaking up the data into primitive units.
A data stream cooperates closely with a QIODevice. A QIODevice represents an input/output medium one can read data from and write data to. The QFile class is an example of an I/O device.
You're using cout to print encoded binary data, which is interpreted as an integer. That data is meant for reading and writing to IO devices, not printing.
Regarding reinterpret_cast to a qint16: since QDataStream simply writes raw binary data, pretending an unsigned int is signed has no effect on the output to the data stream. This is just a cheap way of reusing code: the bits are ultimately written as bits, regardless of type. It's up to you to cast them back to the appropriate data type (quint16) when reading back out from the data stream.
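Concretely: QDataStream defaults to big-endian, so it reads the first two bytes of "41034" — the ASCII characters '4' (0x34) and '1' (0x31) — as the 16-bit value 0x3431 = 13361. A sketch of that interpretation in standard C++ (the function name is illustrative):

```cpp
#include <cstdint>

// Interpret the first two bytes of a buffer as a big-endian 16-bit
// integer, which is QDataStream's default behavior.
std::uint16_t readBigEndian16(const char* data) {
    return static_cast<std::uint16_t>(
        (static_cast<unsigned char>(data[0]) << 8) |
         static_cast<unsigned char>(data[1]));
}
```

The stream never parses the text "41034" as a number; it consumes two raw bytes, which is exactly what a binary serialization format should do.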

QSerialPort and sending bytes (with value > 127?)

I have:
QString hex = "0234301c4c49541d4741546f77617220a5a91e42411e43311e44332c30301e45332c30301e47737a74756b613742413303";
QByteArray test = QByteArray::fromHex(hex.toLatin1());
Now, I want to send it over the serial port:
serial = new QSerialPort(this);
serial->setPortName("ttyACM0");
serial->setBaudRate(QSerialPort::Baud9600);
serial->setDataBits(QSerialPort::Data8);
serial->setParity(QSerialPort::NoParity);
serial->setStopBits(QSerialPort::OneStop);
serial->setFlowControl(QSerialPort::NoFlowControl);
if (serial->open(QIODevice::ReadWrite))
{
    qDebug() << "Port is open!";
    if (serial->isWritable())
    {
        qDebug() << "Yes, i can write to port!";
    }
    serial->waitForBytesWritten(-1);
    serial->write(test.data());
    serial->flush(); // Port Error 12 (timed out???)
    serial->close();
}
I get no result. (Bytes with values below 127 seem to be sent, but not those above.)
My question: how do I convert this QByteArray so that all bytes are sent correctly?
(I tried to find the answer on Google, but without success; I'm a newbie.)
You almost certainly do not want to be calling the QSerialPort write overload that takes a const char*. Look at the docs for that overload:
Writes data from a zero-terminated string of 8-bit characters to the device.
But you are NOT writing a zero-terminated C-string, you're writing arbitrary binary data. So you need to call the write overload that takes a QByteArray directly, like this:
serial->write(test);
As stated in some of the other comments, the stuff about waitForBytesWritten before calling write doesn't make a lot of sense either, but your biggest issue is trying to treat your QByteArray of arbitrary data as a null-terminated C-String.
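The difference matters as soon as the data contains a zero byte: any zero-terminated-string path measures the data with strlen and stops at the first 0x00, silently truncating the payload. A minimal standard-C++ illustration (the payload bytes are made up for the example, including a deliberate embedded zero):

```cpp
#include <cstring>

// Illustrative binary payload with an embedded zero byte at index 2.
const char payload[] = { 0x02, 0x34, 0x00, 0x1c, 0x4c };

// std::strlen(payload) reports only 2, because it stops at the 0x00,
// even though the real payload is sizeof(payload) == 5 bytes.
// Any write overload that takes a bare const char* has the same blind
// spot, which is why binary data should be passed with an explicit
// length (or as a QByteArray, which carries its size).
```

Passing the QByteArray itself keeps the size information attached to the data, so nothing is truncated.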
There was an error in the CRC code (downloaded from some web portal):
it was cast to uint16_t instead of uint8_t.
Now it really works.
Thank you for the help!

The difference between QDataStream and QByteArray

QTemporaryFile tf;
tf.open ();
QDataStream tfbs (&tf);
tfbs << "hello\r\n" << "world!\r\n";
const int pos = int (tf.pos ());
QByteArray ba;
ba.append ("hello\r\n");
ba.append ("world!\r\n");
const int size = ba.size ();
Basically my question is, what am I doing wrong? Why is pos > size? Should I not be using << ? Should I not be using QDataStream?
Edit: Is there a way to configure QDataStream or QTemporaryFile so that the << operator doesn't prepend strings with 32bit lengths and store the null terminators in the file? Calling QDataStream::writeBytes when I just have a series of quoted strings and QStrings makes for very ugly code.
The answer is in the docs. I'm not going to go over QByteArray, as I believe it's fairly obvious that it is working as expected.
The QDataStream operator<<(char*) overload evaluates to the writeBytes() function.
Its documentation says:
Writes the length specifier len and the buffer s to the stream and
returns a reference to the stream. The len is serialized as a quint32,
followed by len bytes from s. Note that the data is not encoded.
So for "hello\r\n", I would expect the output to be:
0,0,0,8,'h','e','l','l','o','\r','\n',0
The 4-byte length, followed by the bytes from the string. The string-ending NULL is probably also being added to the end, which would account for the otherwise mysterious extra two bytes.
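That serialization rule can be sketched in standard C++ to make the byte count concrete: a big-endian 32-bit length that counts the terminating NUL, followed by the characters, followed by the NUL itself. This is a sketch of the documented format, not Qt's actual implementation:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Mimic QDataStream's char* serialization: a big-endian quint32 length
// (including the terminating NUL), then the bytes, then the NUL.
std::vector<unsigned char> writeBytesLike(const char* s) {
    std::uint32_t len = static_cast<std::uint32_t>(std::strlen(s)) + 1;
    std::vector<unsigned char> out = {
        static_cast<unsigned char>(len >> 24),
        static_cast<unsigned char>(len >> 16),
        static_cast<unsigned char>(len >> 8),
        static_cast<unsigned char>(len),
    };
    out.insert(out.end(), s, s + len); // the characters plus the '\0'
    return out;
}
```

For "hello\r\n" this produces 12 bytes: 4 for the length (8), 7 for the characters, and 1 for the NUL, matching the byte listing above.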
So I ended up writing my own helper class to serialize my data:
class QBinaryStream
{
public:
    QBinaryStream(QIODevice& iod) : m_iod(iod) {}

    QBinaryStream& operator<<(const char* data)
    {
        m_iod.write(data);
        return *this;
    }

    QBinaryStream& operator<<(const QString& data)
    {
        return operator<<(data.toUtf8());
    }

    QBinaryStream& operator<<(const QByteArray& data)
    {
        m_iod.write(data);
        return *this;
    }

private:
    QIODevice& m_iod;
};
Should I not be using QDataStream?
In your case maybe QTextStream or even QString would do.
The QTextStream class provides a convenient interface for reading and
writing text.
QTextStream can operate on a QIODevice, a QByteArray or a QString.
Using QTextStream's streaming operators, you can conveniently read and
write words, lines and numbers.
As for QByteArray, QString should be preferred to it whenever possible:
The QByteArray class provides an array of bytes.
QByteArray can be used to store both raw bytes (including '\0's) and
traditional 8-bit '\0'-terminated strings. Using QByteArray is much
more convenient than using const char *. Behind the scenes, it always
ensures that the data is followed by a '\0' terminator, and uses
implicit sharing (copy-on-write) to reduce memory usage and avoid
needless copying of data.
In addition to QByteArray, Qt also provides the QString class to store
string data. For most purposes, QString is the class you want to use.
It stores 16-bit Unicode characters, making it easy to store
non-ASCII/non-Latin-1 characters in your application. Furthermore,
QString is used throughout in the Qt API. The two main cases where
QByteArray is appropriate are when you need to store raw binary data,
and when memory conservation is critical (e.g., with Qt for Embedded
Linux).

Parsing binary data from file

and thank you in advance for your help!
I am in the process of learning C++. My first project is to write a parser for a binary-file format we use at my lab. I was able to get a parser working fairly easily in Matlab using "fread", and it looks like that may work for what I am trying to do in C++. But from what I've read, it seems that using an ifstream is the recommended way.
My question is two-fold. First, what, exactly, are the advantages of using ifstream over fread?
Second, how can I use ifstream to solve my problem? Here's what I'm trying to do. I have a binary file containing a structured set of ints, floats, and 64-bit ints. There are 8 data fields all told, and I'd like to read each into its own array.
The structure of the data is as follows, in repeated 288-byte blocks:
Bytes 0-3: int
Bytes 4-7: int
Bytes 8-11: float
Bytes 12-15: float
Bytes 16-19: float
Bytes 20-23: float
Bytes 24-31: int64
Bytes 32-287: 64x float
I am able to read the file into memory as a char * array, with the fstream read command:
char * buffer;
ifstream datafile (filename,ios::in|ios::binary|ios::ate);
datafile.read (buffer, filesize); // Filesize in bytes
So, from what I understand, I now have a pointer to an array called "buffer". If I were to call buffer[0], I should get a 1-byte memory address, right? (Instead, I'm getting a seg fault.)
What I now need to do really ought to be very simple. After executing the above ifstream code, I should have a fairly long buffer populated with a number of 1's and 0's. I just want to be able to read this stuff from memory, 32-bits at a time, casting as integers or floats depending on which 4-byte block I'm currently working on.
For example, if the binary file contained N 288-byte blocks of data, each array I extract should have N members each. (With the exception of the last array, which will have 64N members.)
Since I have the binary data in memory, I basically just want to read from buffer, one 32-bit number at a time, and place the resulting value in the appropriate array.
Lastly - can I access multiple array positions at a time, a la Matlab? (e.g. array(3:5) -> [1,2,1] for array = [3,4,1,2,1])
Firstly, the advantage of using iostreams, and in particular file streams, relates to resource management. Automatic file stream variables will be closed and cleaned up when they go out of scope, rather than having to manually clean them up with fclose. This is important if other code in the same scope can throw exceptions.
Secondly, one possible way to address this type of problem is to simply define the stream insertion and extraction operators in an appropriate manner. In this case, because you have a composite type, you need to help the compiler by telling it not to add padding bytes inside the type. The following code should work on gcc and microsoft compilers.
#pragma pack(1)
struct MyData
{
    int i0;
    int i1;
    float f0;
    float f1;
    float f2;
    float f3;
    uint64_t ui0;
    float f4[64];
};
#pragma pack()
std::istream& operator>>(std::istream& is, MyData& data) {
    is.read(reinterpret_cast<char*>(&data), sizeof(data));
    return is;
}
std::ostream& operator<<(std::ostream& os, const MyData& data) {
    os.write(reinterpret_cast<const char*>(&data), sizeof(data));
    return os;
}
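Whether the packing worked can be checked at compile time: the field sizes add up to the 288-byte file block (2*4 + 4*4 + 8 + 64*4 = 288), assuming 4-byte int and float, as on typical desktop platforms. A self-contained sketch of that check:

```cpp
#include <cstdint>

#pragma pack(push, 1)
struct MyData
{
    int i0;
    int i1;
    float f0, f1, f2, f3;
    std::uint64_t ui0;
    float f4[64];
};
#pragma pack(pop)

// 2*4 + 4*4 + 8 + 64*4 = 288 bytes per block, with no padding bytes.
static_assert(sizeof(MyData) == 288,
              "MyData must match the 288-byte file block exactly");
```

If the static_assert fires, the compiler added padding (or a type has an unexpected size), and reading whole structs from the file would silently misalign every field.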
char * buffer;
ifstream datafile (filename,ios::in|ios::binary|ios::ate);
datafile.read (buffer, filesize); // Filesize in bytes
You need to allocate the buffer before you read into it:
buffer = new char[filesize];
datafile.read(buffer, filesize);
As to the advantages of ifstream: it is a matter of abstraction. You can represent the contents of your file in a more convenient way. You then do not have to work with raw buffers; instead you can model the structure using classes and hide the details of how it is stored in the file, for instance by overloading the << operator.
You might perhaps look for serialization libraries for C++. Perhaps s11n might be useful.
This question shows how you can convert data from a buffer to a certain type. In general, you should prefer using a std::vector<char> as your buffer. This would then look like this:
#include <algorithm>
#include <fstream>
#include <iterator>
#include <vector>
int main() {
    std::ifstream input("your_file.dat", std::ios::binary);
    std::vector<char> buffer;
    std::copy(std::istreambuf_iterator<char>(input),
              std::istreambuf_iterator<char>(),
              std::back_inserter(buffer));
}
This code will read the entire file into your buffer. The next thing you'd want to do is write your data into valarrays (for the selection you want). A valarray is fixed in size, so you have to be able to calculate the required size of your array up front. This should do it for your format:
std::valarray<int> array1(buffer.size() / 288); // each 288-byte block yields one entry
Then you'd use a normal for-loop to insert the elements into your arrays:
for (std::size_t i = 0; i < buffer.size() / 288; i++) {
    array1[i] = *reinterpret_cast<int*>(&buffer[i * 288]);     // first field
    array2[i] = *reinterpret_cast<int*>(&buffer[i * 288 + 4]); // second field
}
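A memcpy-based sketch of the same extraction, which sidesteps the alignment and strict-aliasing pitfalls of casting into the middle of a char buffer (the helper name is illustrative):

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Read a 4-byte int that starts at byte offset `off` in the buffer.
// memcpy is well-defined even when the offset is not int-aligned.
int readIntAt(const std::vector<char>& buffer, std::size_t off) {
    int value;
    std::memcpy(&value, buffer.data() + off, sizeof value);
    return value;
}
```

The per-block loop then becomes readIntAt(buffer, i * 288) for the first field, readIntAt(buffer, i * 288 + 4) for the second, and so on for each offset in the layout.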
Note that this code assumes int is exactly 4 bytes; that holds on most common platforms today (including typical 64-bit systems), but it is not guaranteed by the standard. This question explains a bit about C++ and sizes of types.
The Matlab-style selection you describe can be achieved using valarray's slicing support.