I have this simple code that uses QtSerialPort:
char foo[] = {130,50,'\0'};
serial->write(foo);
The result on my serial line is {2, 50}. I think the biggest number that I can send is 127 (char goes from -128 to 127). How can I send numbers from 0 to 255? I tried to use unsigned char, but the write method doesn't accept it. The same problem also appears with QByteArray.
Thank you all.
The QIODevice interface takes plain char for sending, whose signedness is compiler dependent. See the documentation for details:
qint64 QIODevice::write(const char * data, qint64 maxSize)
Writes at most maxSize bytes of data from data to the device. Returns the number of bytes that were actually written, or -1 if an error occurred.
However, you should not be concerned much as long as you interpret the data properly on the other end. You can still send values greater than 128 as signed chars; they will simply come across as negative values, for instance 0xFF will be -1.
If you apply the same logic in reverse on the receiving end, there should be no problem.
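For instance, here is a minimal sketch of that round trip, assuming serial is an open QSerialPort* as in the question and receivingPort is a hypothetical open port on the other end of the link:
// Sender: 255 stored in a plain char typically becomes -1,
// but the byte that goes over the wire is still 0xFF.
unsigned char wanted = 255;
char sent = static_cast<char>(wanted);
serial->write(&sent, 1);

// Receiver: cast the char back to unsigned char to recover 0..255.
char received = 0;
receivingPort->read(&received, 1); // hypothetical QSerialPort* for the receiving side
unsigned char value = static_cast<unsigned char>(received); // 255 again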
However, this does not seem to be your issue, because you do not get the corresponding negative value for 130: you get it chopped to 7 bits. Make sure your connection uses 8 data bits.
You can set that explicitly after opening the port with this code:
QSerialPort serialPort;
QString serialPortName = "foo";
serialPort.setPortName(serialPortName);

QTextStream standardOutput(stdout); // assuming errors are reported on stdout

if (!serialPort.open(QIODevice::WriteOnly)) {
    standardOutput << QObject::tr("Failed to open port %1, error: %2").arg(serialPortName).arg(serialPort.errorString()) << endl;
    return 1;
}

if (!serialPort.setDataBits(QSerialPort::Data8)) {
    standardOutput << QObject::tr("Failed to set 8 data bits for port %1, error: %2").arg(serialPortName).arg(serialPort.errorString()) << endl;
    return 1;
}

// Other setup code here

char foo[] = {130, 50, '\0'};
serialPort.write(foo);
Make sure you've set the serial port to send 8 data bits, using QSerialPort::DataBits.
The fact that '130' is coming as '2' implies that the most significant bit is being truncated.
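As a quick sanity check: 130 is 0b10000010, and a 7-data-bit link drops the most significant bit, leaving 0b0000010, which is 2:
int truncated = 130 & 0x7F; // mask to 7 bits: 130 -> 2, exactly the value observed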
I'm using a measurement device which sends (binary) float values using a tcp socket with up to 70 kHz.
My goal is to read these values as fast as possible and use them in other parts of my program.
So far I've been able to extract the values one by one using a QTcpSocket and QDataStream.
First I create the socket and connect the stream to it:
mysock = new QTcpSocket(this);
mysock->connectToHost(ip, port);
QDataStream stream(mysock);
stream.setByteOrder(QDataStream::LittleEndian);
stream.setFloatingPointPrecision(QDataStream::SinglePrecision);
Then I read from the socket and extract the stream data into my float value:
while(true) // only for test purposes (don't stop reading)
    if (mysock->waitForReadyRead())
    {
        while (mysock->bytesAvailable() >= 6)
        {
            QByteArray a = mysock->read(6); // each value sent is 6 bytes long
            stream.skipRawData(2);          // first 2 bytes don't belong to the number
            float result;
            stream >> result;
            //qDebug() << result;
        }
    }
When I measure the iteration frequency of the while(true) loop I'm able to achieve about 30 kHz.
Reading multiple values per read, I can reach up to 70 kHz (not taking other calculations into account, which might slow me down).
My questions are:
If I read multiple values at once, how do I extract these values from the QDataStream? I need a 6-byte spacing with only 4 bytes containing the value.
Answer: In my case there are 2 bytes (trash) followed by a known number of values, for example 4 bytes for a float, 4 bytes for another float, and 2 bytes for a uint16.
stream >> trashuint16 >> resultfloat1 >> resultfloat2 >> resultuint16;
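The same statement with the declarations spelled out (the variable names are only illustrative; SinglePrecision must already be set on the stream, as in the question):
quint16 trashuint16;  // the 2 leading bytes that get discarded
float resultfloat1;   // 4 bytes, read as single precision
float resultfloat2;   // 4 bytes
quint16 resultuint16; // 2 bytes
stream >> trashuint16 >> resultfloat1 >> resultfloat2 >> resultuint16;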
Extension 1: I can configure my device to send different values of different types (int, float), which need to be written to different variables.
Answer: Same.
Is there a more efficient way to read many values from a QTcpSocket?
Answer: Answered in the comments.
Update (to answer some questions):
Max rate in bytes: 70 kHz x 6 bytes (for one value) = 420 kB/s (doesn't seem that much :))
Update 2
New question: When I start a transaction (using stream.startTransaction) I would like to know what's inside that stream in binary form.
I don't understand how QDataStream::startTransaction works. How many bytes will be read? What happens with the data I don't extract using >>?
I've tried the following:
if (mysock->waitForReadyRead())
{
    stream.startTransaction();
    char *c = new char[40];
    stream.readRawData(c, 40); // I want to know what's really inside
    QByteArray a(c);
    qDebug() << a << stream.status();
    if (!stream.commitTransaction())
        break;
}
Doing this again and again, I'll sometimes get status = -1 (read too much) and sometimes not. How do I get the "size" of the stream?
Your code has a couple of mistakes.
You are reading directly from the socket while at the same time using QDataStream on it. This can break things.
Also, your code assumes that your application will receive data in the same chunks as they were sent by the other end. You have no such guarantee! It may happen that you receive a chunk of data that ends in the middle of your frame. It works only by pure luck, or you are ignoring some bugs in your application.
This should go like this:
while (true)
    if (mysock->waitForReadyRead()) // IMO doing such a loop is a terrible approach,
                                    // but this is out of the scope of the question, so ignoring that
    {
        while (true)
        {
            stream.startTransaction();
            float result;
            qint32 somedata;
            stream >> somedata >> result; // I do not know the binary format your application is using
            if (!stream.commitTransaction())
                break;
            AddDataToModel(result, somedata);
        }
    }
Edit:
From comment:
Please correct me if I'm wrong, but if I want 2 bytes to be discarded I need to do "stream >> someint(2 byte) >> somefloat(4 byte)"? How can I handle many values in stream?
qint16 toBeDiscarded;
float value;
// note stream.setFloatingPointPrecision(QDataStream::SinglePrecision);
// is needed to read float as 32 bit floating point number
stream >> toBeDiscarded >> value;
ProcessValue(value);
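Putting the pieces together for the 6-byte frames from the question (2 discarded bytes followed by one single-precision float), a sketch of the read loop could look like this, with ProcessValue standing in for whatever consumes the value:
for (;;)
{
    stream.startTransaction();
    quint16 discarded; // the 2 bytes that do not belong to the number
    float result;      // requires SinglePrecision on the stream
    stream >> discarded >> result;
    if (!stream.commitTransaction())
        break;         // not enough data buffered yet; wait for the next readyRead
    ProcessValue(result);
}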
I'm writing something server-client related, and I have this code snippet here:
char serverReceiveBuf[65536];
client->read(serverReceiveBuf, client->bytesAvailable());
handleConnection(serverReceiveBuf);
that reads data whenever a readyRead() signal is emitted by the server. Using bytesAvailable() is fine when I test on my local network since there's no latency, but when I deploy the program I want to make sure the entire message is received before I handleConnection().
I was thinking of ways to do this, but read and write only accept chars, so the maximum message size indicator I can send in one char is 127. I want the maximum size to be 65536, but the only way I can think of doing that is have a size-of-size-of-message variable first.
I reworked the code to look like this:
char serverReceiveBuf[65536];
char messageSizeBuffer[512];
int messageSize = 0, i = 0; // max value of messageSize = 65536
client->read(messageSizeBuffer, 512);
while (i < 512 && (int)messageSizeBuffer[i] != 0){
    messageSize += (int)messageSizeBuffer[i];
    i++;
    // client will always send 512 bytes for the size of the message
    // if message size < 512 bytes, rest of buffer will be 0
}
client->read(serverReceiveBuf, messageSize);
handleConnection(serverReceiveBuf);
but I'd like a more elegant solution if one exists.
It is a very common technique when sending messages over a stream to send a fixed-sized header before the message payload. This header can include many different pieces of information, but it always includes the payload size. In the simplest case, you can send the message size encoded as a uint16_t for a maximum payload size of 65535 (or uint32_t if that's not sufficient). Just make sure you handle byte ordering with ntohs and htons.
uint16_t messageSize;
client->read((char*)&messageSize, sizeof(uint16_t));
messageSize = ntohs(messageSize);
client->read(serverReceiveBuf, messageSize);
handleConnection(serverReceiveBuf);
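Keep in mind that read() may return fewer bytes than requested, so the payload read should also loop until messageSize bytes have arrived. A sketch, assuming client is a QTcpSocket and a blocking-style wait is acceptable:
char serverReceiveBuf[65536];
qint64 received = 0;
while (received < messageSize) {
    if (client->bytesAvailable() == 0 && !client->waitForReadyRead())
        break; // timed out or disconnected
    qint64 n = client->read(serverReceiveBuf + received, messageSize - received);
    if (n <= 0)
        break;
    received += n;
}
handleConnection(serverReceiveBuf);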
read and write work with byte streams. It does not matter to them if the bytes are chars or any other form of data. You can send a 4-byte integer by casting its address to char* and sending 4 bytes. On the receiving end cast the 4 bytes back to an int. (If the machines are of different types you may also have endian issues, requiring the bytes to be rearranged into an int. See htonl and its cousins.)
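For example, a sketch of sending a 4-byte integer this way, assuming socket is a connected QTcpSocket (on Windows the byte-order helpers come from winsock2.h instead of arpa/inet.h):
#include <cstdint>
#include <arpa/inet.h> // htonl / ntohl

// Sender: convert to network byte order, then send the raw 4 bytes.
uint32_t value = 70000;
uint32_t wire = htonl(value);
socket->write(reinterpret_cast<const char*>(&wire), sizeof(wire));

// Receiver: once 4 bytes are available, read them back and convert to host order.
uint32_t wireIn = 0;
socket->read(reinterpret_cast<char*>(&wireIn), sizeof(wireIn));
uint32_t received = ntohl(wireIn);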
I have an Arduino board that is connected to a sensor. From the Arduino IDE serial monitor, I see the readings are mostly 160, 150, etc. The Arduino has a 10-bit ADC, so I assume the readings range from 0 to 1023.
I want to fetch those readings to my computer so that I can do further processing. It must be done this way up to this point. Now, I wrote a C++ program to read the serial port buffer with Windows APIs (DCB). The transfer speed of the serial port is set to 115200 on both the Arduino IDE and the C++ program.
I will describe my problem first: since I want to send the readings to my computer, I expect the data to look like the following:
124
154
342
232
...
But now it looks like
321
43
5
2
123
...
As shown, the data are split across reads and run together. I know this because I tried displaying each read wrapped in [], and the data are truly messed up.
The section of the code that does the serial port reading on the computer is here:
// Read
int n = 10;
char szBuff[10 + 1] = {0};
DWORD dwBytesRead = 0;
int i;
for (i = 0; i < 200; i++){
    if(!ReadFile(hSerial, szBuff, n, &dwBytesRead, NULL)){
        // error occurred. Report to user.
        printf("Cannot read.\n");
    }
    else{
        printf("%s\n", szBuff);
    }
}
The Arduino code that's doing the serial port sending is:
char buffer[10] = { 0 };
int analogIn = 0;
int A0_val = 0;

void setup() {
    Serial.begin(115200);
}

void loop() {
    A0_val = analogRead(analogIn);
    sprintf(buffer, "%d", A0_val);
    Serial.println(buffer);
}
I suspect that the messing up of the data is caused by the different sizes of the buffers used to transmit and receive data over the serial port. What would be a good buffer size, and is there an even better method to guarantee the successful transmission of valid data?
Thanks very much!
Your receiver code cannot assume a single read from the serial port will yield a complete line (i.e. the 2 or 3 digits followed by a '\n' that the Arduino continuously sends).
It is up to the receiver to assemble complete lines of text on reception, and only then try to interpret them as meaningful numbers.
Since the serial interface is extremely slow compared with your average PC's computing power, there is little point in reading more than one character at a time: literally millions of CPU cycles will be spent waiting for the next character, so you really don't need to react fast to the Arduino input.
Since in this particular case it will not hinder performance in the slightest, I find it more convenient to read one character at a time. That saves you the hassle of moving bits of strings around. At least it makes writing an educational example easier.
// return the next value received from the arduino as an integer
int read_arduino (HANDLE hSerial)
{
    char buffer[4];       // any value longer than 3 digits must come
                          // from a faulty transmission
                          // the 4th character is used for a terminating '\0'
    size_t buf_index = 0; // storage position of received characters
    for (;;)
    {
        char c;           // read one byte at a time
        DWORD n_read = 0; // ReadFile needs a byte-count variable for synchronous reads
        if (!ReadFile(
                hSerial,
                &c,       // 1 byte buffer
                1,        // of length 1
                &n_read,  // we will read exactly one byte or die trying,
                          // so further length checking is pointless
                NULL)){
            /*
             * This error means something is wrong with serial port config,
             * and I assume your port configuration is hard-coded,
             * so the code won't work unless you modify and recompile it.
             * No point in keeping the program running, then.
             */
            fprintf (stderr, "Dang! Messed up the serial port config AGAIN!");
            exit(-1);
        }
        else // our read succeeded. That's a start.
        {
            if (c == '\n') // we're done receiving a complete value
            {
                int result; // the decoded value we might return

                // check for buffer overflow
                if (buf_index == sizeof (buffer))
                {
                    // warn the user and discard the input
                    fprintf (stderr,
                             "Too many characters received, input flushed\n");
                }
                else // valid number of characters received
                {
                    // add a string terminator to the buffer
                    buffer[buf_index] = '\0';

                    // convert to integer
                    result = atoi (buffer);
                    if (result == 0)
                    {
                        /*
                         * assuming 0 is not a legit value returned by the arduino, this means the
                         * string contained something else than digits. It could happen in case
                         * of electrical problems on the line, typically if you plug/unplug the cable
                         * while the arduino is sending (or Mr Fluffy is busy gnawing at it).
                         */
                        fprintf (stderr, "Wrong value received: '%s'\n", buffer);
                    }
                    else // valid value decoded
                    {
                        // at last, return the coveted value
                        return result; // <-- this is the only exit point
                    }
                }
                // reset buffer index to prepare receiving the next line
                buf_index = 0;
            }
            else // character other than '\n' received
            {
                // store it as long as our buffer does not overflow
                if (buf_index < sizeof (buffer))
                {
                    buffer[buf_index++] = c;
                    /*
                     * if, for some reason, we receive more than the expected max number of
                     * characters, the input will be discarded until the next '\n' allows us
                     * to re-synchronize.
                     */
                }
            }
        }
    }
}
CAVEAT: this is just code off the top of my head. I might have left a few typos here and there, so don't expect it to run or even compile out of the box.
A couple of basic problems here. First, it is unlikely that the PC can reliably keep up with 115,200 baud data if you only read 10 bytes at a time with ReadFile. Try a slower baud rate and/or change the buffer size and number of bytes per read to something that will get around 20 milliseconds of data, or more.
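For a rough idea of what that means: at 115200 baud with 8N1 framing the link carries about 11,520 bytes per second, so 20 milliseconds corresponds to roughly 230 bytes; a buffer and per-read size of around 256 bytes would be a reasonable starting point.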
Second, after you read some data, put a NUL terminator at the end of it:
szBuff[dwBytesRead] = 0;
before you pass it to printf or any other C string code.
I'm trying to use the QSerialPort class for reading from and writing to a serial port.
Right now I'm using virtual COM ports implemented by the Eltima driver.
I can successfully send bytes like this:
QSerialPortInfo info = QSerialPortInfo("COM30");
QSerialPort serial;
serial.setPort(info);
serial.setBaudRate(57600);
serial.open(QIODevice::ReadWrite);
char arr[] = {0xAA, 0xBB, 0xCC, 0xDD};
serial.write(arr, 4);
I'm trying to read like this (I want to read just a single byte; this code is called by a timer, if data is ready to be read):
virtual uint8_t getByte(void)
{
    char arr[2] = {0};
    int8_t err = qPort.read(arr, 1);
    DEBUG_ASSERT(err != -1);
    if(! isNewByte() )
    {
        onReceiveFinished();
    }
    return arr[0];
}
If I send to a virtual port (i.e. to my program) any value less than 128, I get it right (as the debugger shows). However, if I try to send 128 or more, I get value-128 o_o (if I send 153, I get 25. Not -25 or 103).
That seems really odd to me.
Can anyone see where the mistake is?
My mistake was really stupid. QSerialPort is set to 7 data bits by default (which seems not very practical, actually), so every received byte had its MSB cut off (like subtracting 128).
Still, oddly enough, sending worked fine.
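For reference, a minimal sketch of the fix on the receiving port, forcing 8 data bits explicitly after opening (port name and baud rate taken from the question):
QSerialPort qPort;
qPort.setPortName("COM30");
if (qPort.open(QIODevice::ReadWrite)) {
    qPort.setBaudRate(QSerialPort::Baud57600);
    qPort.setDataBits(QSerialPort::Data8); // avoids the 7-bit truncation
    qPort.setParity(QSerialPort::NoParity);
    qPort.setStopBits(QSerialPort::OneStop);
}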
No. You are doing setBaudRate() wrong. It needs to be done after the port is opened.
I have a problem reading more than 2048 bytes from a QLocalSocket.
This is my server-side code:
clientConnection->flush(); // <-- clientConnection is a QLocalSocket
QByteArray block;
QDataStream out(&block, QIODevice::WriteOnly);
out.setVersion(QDataStream::Qt_5_0);
out << (quint16)message.size() << message; // <--- message is a QString
qint64 c = clientConnection->write(block);
clientConnection->waitForBytesWritten();
if(c == -1)
qDebug() << "ERROR:" << clientConnection->errorString();
clientConnection->flush();
And this is how I read the data in my client:
QDataStream in(sock); // <--- sock is a QLocalSocket
in.setVersion(QDataStream::Qt_5_0);
while(sock->bytesAvailable() < (int)sizeof(quint16)){
sock->waitForReadyRead();
}
in >> bytes_to_read; // <--- quint16
while(sock->bytesAvailable() < (int)bytes_to_read){
sock->waitForReadyRead();
}
in >> received_message;
The client code is connected to the readyRead signal, and it is disconnected after the first call to the slot.
Why am I able to read only 2048 bytes?
==EDIT==
After peppe's reply I updated my code. Here is how it looks now:
server side code:
clientConnection->flush();
QByteArray block;
QDataStream out(&block, QIODevice::WriteOnly);
out.setVersion(QDataStream::Qt_5_0);
out << (quint16)0;
out << message;
out.device()->seek(0);
out << (quint16)(block.size() - sizeof(quint16));
qDebug() << "Bytes client should read" << (quint16)(block.size() - sizeof(quint16));
qint64 c = clientConnection->write(block);
clientConnection->waitForBytesWritten();
client side code:
QDataStream in(sock);
in.setVersion(QDataStream::Qt_5_0);
while(sock->bytesAvailable() < sizeof(quint16)){
sock->waitForReadyRead();
}
quint16 btr;
in >> btr;
qDebug() << "Need to read" << btr << "and we have" << sock->bytesAvailable() << "in sock";
while(sock->bytesAvailable() < btr){
sock->waitForReadyRead();
}
in >> received_message;
qDebug() << received_message;
I'm still not able to read more data.
out.setVersion(QDataStream::Qt_5_0);
out << (quint16)message.size() << message; // <--- message is a QString
This is wrong. The serialized length of "message" will be message.size() * 2 + 4 bytes, as QString prepends its own length as a quint32, and each QString character is actually a UTF-16 code unit, so it requires 2 bytes. Expecting to read only message.size() bytes on the reader side will cause QDataStream to do a short read, which is undefined behaviour.
Please do check the size of "block" after those lines -- it'll be 2 + 4 + 2 * message.size() bytes. So you need to fix the math. You can safely assume it won't change, as the format of serialization of Qt datatypes is known and documented.
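As a concrete example: serializing QString("AB") with QDataStream produces 8 bytes -- a quint32 holding 4 (the UTF-16 byte count) followed by the code units 0x0041 and 0x0042 -- while message.size() would only be 2.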
I do recognize the "design pattern" of prepending the length, though. It probably comes from the fortune network example shipped with Qt. The notable difference there is that the example doesn't prepend the length of the string in UTF-16 code units (which is pointless, as it's not how it's going to be serialized) -- it prepends the length of the serialized QString. Look at what it does:
out << (quint16)0;
out << fortunes.at(qrand() % fortunes.size());
out.device()->seek(0);
out << (quint16)(block.size() - sizeof(quint16));
First it reserves some space in the output by writing a 0. Then it serializes a QString. Then it backtracks and overwrites the 0 with the length of the serialized QString -- which at this point is exactly block.size() minus the prepended integer stating the length (and we know that the serialized length of a quint16 is sizeof(quint16)).
To repeat myself, there are actually two reasons why that example was coded that way, and they're somewhat related:
QDataStream has no means to recover from short reads: all the data it needs to successfully decode an object must be available when you use the operator>> to deserialize the object. Therefore, you cannot use it before being sure that all data was received. Which brings us to:
TCP has no built in mechanism for separating data in "records". You can't just send some bytes followed by a "record marker" which tells the receiver that he has received all the data pertinent to a record. What TCP provides is a raw stream of bytes. Eventually, you can (half-)close the connection to signal the other peer that the transmission is over.
1+2 imply that you must use some other mechanism to know (on the receiver side) if you already have all the data you need or you must wait for some more. For instance, you can introduce in-band markers like \r\n (like IRC or - up to a certain degree - HTTP do).
The solution in the fortune example is prepending to the "actual" data (the serialized QString with the fortune message) the length, in bytes, of that data; then it sends the length (as a 16 bit integer) followed by the data itself.
The receiver first reads the length; then it reads up that many bytes, then it knows it can decode the fortune. If there's not enough data available (both for the length - i.e. you received less than 2 bytes - and the payload itself) the client simply does nothing and waits for more.
Note that:
the design isn't new: it's what most protocols do. In the "standard" TCP/IP stack, TCP, IP, Ethernet and so on all have a field in their "headers" which specifies the length of the payload (or of the whole "record");
the transmission of the "length" uses a 16-bit unsigned integer sent in a specific byte order: it's not memcpy()d into the buffer, but QDataStream is used on it to both store it and read it back. Although it may seem trivial, this actually completes the definition of the protocol you're using.
if QDataStream had been able to recover from short reads (f.i. by throwing an exception and leaving the data in the device), you would not have needed to send the length of the payload, since QDataStream already sends the length of the string (as a 32-bit unsigned big-endian integer) followed by the UTF-16 chars.