Serial Communication Timeout in Qt with Arduino - C++

I want to implement a timeout mechanism such that if the Arduino doesn't read the command within one second, it results in a timeout, the new command is discarded, and the program carries on normally.
Right now, however, the program hangs if a new command is sent while the old one is still executing.
This is the timeout section of my code:
QByteArray requestData = myRequest.toLocal8Bit();
serial.write(requestData);
if (serial.waitForBytesWritten(waitTime)) {
    if (serial.waitForReadyRead(myWaitTimeout)) {
        QByteArray responseData = serial.readAll();
        while (serial.waitForReadyRead(10))
            responseData += serial.readAll();
        QString response(responseData);
        emit this->response(response);
    } else {
        emit timeout(tr("Wait Read Request Timed Out %1")
                     .arg(QTime::currentTime().toString()));
    }
} else {
    emit timeout(tr("Wait Write Request Timed Out %1")
                 .arg(QTime::currentTime().toString()));
}
The timeout signal is connected to a slot that just prints the timeout message and does nothing else.
How can I fix this so that I achieve what I'm after?

You are using the blocking approach to transmit data via the serial port. Unless you are using threads, I don't see any possibility of sending additional data during execution of the previous loop.
BTW: your program will, for example, block indefinitely if the Arduino manages to keep sending something within 10 ms periods.
Add a couple of qDebug() << "I'm here"; lines to check where your code gets stuck; it is possible that you are blocking somewhere outside the code you pasted here. The alternative is to use a debugger.
What if the previous command you tried to send is still in the buffer? You'll end up filling the output buffer. Check with qDebug() how many bytes are in the output buffer before writing more data to it; the buffer should be empty (qint64 QIODevice::bytesToWrite() const).
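If the blocking approach is not a hard requirement, one way out is the asynchronous API, where nothing blocks and a timer enforces the one-second budget. A minimal sketch, assuming serial, a single-shot timeoutTimer and responseData are members; the slot names are illustrative, and message framing is protocol-specific:

// Hedged sketch: non-blocking QSerialPort with a QTimer-based timeout.
// Assumed set up once in the constructor (not shown):
//   timeoutTimer.setSingleShot(true);
//   connect(&serial, &QSerialPort::readyRead, this, &Worker::onReadyRead);
//   connect(&timeoutTimer, &QTimer::timeout, this, &Worker::onTimeout);
void Worker::sendCommand(const QString &myRequest)
{
    if (serial.bytesToWrite() > 0) {         // previous command still queued
        qDebug() << "Output buffer not empty, discarding new command";
        return;
    }
    responseData.clear();
    serial.write(myRequest.toLocal8Bit());
    timeoutTimer.start(1000);                // 1 s budget for the reply
}

void Worker::onReadyRead()                   // fires whenever bytes arrive
{
    timeoutTimer.stop();
    responseData += serial.readAll();
    // A real protocol would emit only once the message is complete.
    emit response(QString::fromLocal8Bit(responseData));
}

void Worker::onTimeout()
{
    emit timeout(tr("Request timed out %1").arg(QTime::currentTime().toString()));
}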

Related

The readMessageFromTCPServer slot is sometimes not triggered after receiving a message

I use Qt Creator 5.5.1 on Windows 7.
The compiler is VC 2010, 32-bit.
I have written a socket client. It connects well, but sometimes its readyRead() signal is not triggered after receiving a message from the server, so the readMessageFromTCPServer() slot is not called and the thread cannot run.
void MainWindow::on_pushBtn_LoadCfg_clicked()
{
    if (tcpClient == NULL)
    {
        tcpClient = new QTcpSocket;
        tcpClient->connectToHost(ui->txtIPServer->text(),
                                 ui->txtPortServer->text().toInt());
        QObject::connect(tcpClient, SIGNAL(readyRead()),
                         this, SLOT(readMessageFromTCPServer()));
    }
}
void MainWindow::readMessageFromTCPServer()
{
    std::string r = "start";
    QByteArray qba;
    qba = tcpClient->readAll();
    if (qba.contains(r.c_str()))
    {
        cout << "thread run";
    }
}
When I tried to debug this program, I put a breakpoint at the line Sleep(800), but sometimes this slot was not triggered after receiving a message from the socket server. I have checked that the socket is still connected, so why is the slot not triggered?
There are some errors in your code; as written, the slot will effectively be triggered only once.
Get rid of those Sleep calls. There is no good reason to use them if you are working in the main thread.
QByteArray qba = NULL; makes no sense. QByteArray qba; is fine.
while (1) means forever. You should break the loop at some point. Actually, you do not need this loop at all: move the code inside it out and remove the loop. When the readyRead() signal is emitted, it means that there is some data to be read. You can readAll() that data chunk, do your work, and return.
There is no guarantee that you will get your entire message in one round, so in some circumstances you may get "st" in one signal and "art" in another. You should therefore implement your own buffering mechanism for such situations, as in the sketch below. This tends to happen with big chunks of data; if you are sure that you are receiving very short packets, it is acceptable to rely on the internal buffer of QTcpSocket.
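A minimal sketch of such a buffering slot, assuming newline-terminated messages (the framing is an assumption; adapt it to your protocol) and a QByteArray buffer member:

// Hedged sketch: accumulate bytes and only act on complete messages.
// 'buffer' is an assumed QByteArray member of MainWindow.
void MainWindow::readMessageFromTCPServer()
{
    buffer += tcpClient->readAll();          // may hold 0, 1 or many messages
    int pos;
    while ((pos = buffer.indexOf('\n')) != -1) {
        QByteArray message = buffer.left(pos);
        buffer.remove(0, pos + 1);           // drop the message and its terminator
        if (message.contains("start"))
            qDebug() << "thread run";
    }
    // Anything left in 'buffer' is an incomplete message; keep it for next time.
}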

QTcpSocket data transfer stops when read buffer is full and does not resume when it frees up

I have a server-client Qt application where the client sends data packets to the server, and the server reads them at set time intervals. It happens that the client sends data faster than the server can read, thus filling all the memory on the server side. I am using QAbstractSocket::setReadBufferSize(size) to set a maximum read buffer size on the server side. When it fills up, the data transfer stops and data is buffered on the client side, which is what I want, but the problem is that when the server's QTcpSocket internal read buffer frees up (is no longer full), the data transfer between client and server does not resume.
I've tried to use QAbstractSocket::resume(), which seems to work, but the Qt 5.10 documentation says:
Continues data transfer on the socket. This method should only be used
after the socket has been set to pause upon notifications and a
notification has been received. The only notification currently
supported is QSslSocket::sslErrors(). Calling this method if the
socket is not paused results in undefined behavior.
I feel like I should not use that function in this situation, but is there any other solution? How do I know if the socket is paused? Why does the data transfer not continue automatically when QTcpSocket's internal read buffer is no longer full?
EDIT 1:
I have downloaded the Qt (5.10.0) sources and PDBs to debug this situation, and I can see that the internal function QAbstractSocket::readData() contains the line d->socketEngine->setReadNotificationEnabled(true), which re-enables the data transfer, but QAbstractSocket::readData() gets called only when the QTcpSocket internal read buffer is empty (qiodevice.cpp; QIODevicePrivate::read(); line 1176), and in my situation it is never empty, because I read from it only when it has enough data for a complete packet.
Shouldn't QAbstractSocket::readData() be called when the read buffer is no longer full rather than when it's completely empty? Or am I doing something wrong?
Found a workaround!
In the Qt 5.10 sources I can clearly see that the QTcpSocket internal read notification is disabled when the read buffer is full (qabstractsocket.cpp; bool QAbstractSocketPrivate::canReadNotification(); line 697), and that to re-enable read notifications you either need to read the whole buffer so it becomes empty, OR call QAbstractSocket::setReadBufferSize(newSize), which internally enables read notifications WHEN newSize is not 0 (unlimited) and not equal to oldSize (qabstractsocket.cpp; void QAbstractSocket::setReadBufferSize(qint64 size); line 2824).
Here's a short function for that:
QTcpSocket socket;
qint64 readBufferSize;                 // Current max read buffer size.
bool flag = false;                     // Flag for alternating the buffer size change.
bool isReadBufferLimitReached = false;

void App::CheckReadBufferLimitReached()
{
    if (readBufferSize <= socket.bytesAvailable())
        isReadBufferLimitReached = true;
    else if (isReadBufferLimitReached)
    {
        if (flag)
        {
            readBufferSize++;
            flag = !flag;
        }
        else
        {
            readBufferSize--;
            flag = !flag;
        }
        socket.setReadBufferSize(readBufferSize);
        isReadBufferLimitReached = false;
    }
}
In the function that reads data from the QTcpSocket at the set intervals, BEFORE reading data I call this function, which checks whether the read buffer is full and sets isReadBufferLimitReached if it is. Then I read the needed amount of data from the QTcpSocket, and AT THE END I call the function again; if the buffer was full before, it calls QAbstractSocket::setReadBufferSize(size) with a new buffer size, which re-enables the internal read notifications. Changing the read buffer size by +/-1 should be safe, because you read at least 1 byte from the socket. A sketch of that reader follows.
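Put together, the interval reader described above might look roughly like this; PACKET_SIZE, ProcessPacket() and the timer slot name are placeholders, not part of the original code:

// Hedged sketch of the periodic reader described above.
// PACKET_SIZE and ProcessPacket() stand in for the real protocol.
void App::OnReadTimer()
{
    CheckReadBufferLimitReached();           // sets isReadBufferLimitReached if full
    while (socket.bytesAvailable() >= PACKET_SIZE) {
        QByteArray packet = socket.read(PACKET_SIZE);
        ProcessPacket(packet);               // application-specific handling
    }
    CheckReadBufferLimitReached();           // re-enables notifications if needed
}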

Qt5 - ASSERT: "bytesTransferred == writeChunkBuffer.size()"

I've written a tool which uses QSerialPort to write to a serial device. It runs for a while until I get the following error message:
ASSERT: "bytesTransferred == writeChunkBuffer.size()" in file
qserialport_win.cpp, line 511
My sending function looks like this:
/**
 * @brief Send text to device
 * @param text
 * @return Success/Fail
 */
bool Serial::send(QString text)
{
    if (connectionStatus && qsp.isWritable()) {
        QByteArray buffer = text.toLatin1();
        if (buffer.size() != qsp.write(buffer))
            qDebug() << "Send does not work";
        qsp.flush();
        msleep(15);
        return true;
    } else {
        return false;
    }
}
If I understand it correctly, I write the text (which is around 20 chars) with flush to the device, wait 15 ms, and repeat afterwards. I don't really understand why I get this message.
// EDIT:
After some time I figured out what the problem was. I forgot to mention why I wait 15 ms: the device documentation says to wait after sending the data. The biggest problem related to sending data was that I ran QSerialPort in a separate thread, and by doing that I ran into trouble. I moved it to the main thread and used Qt's signal/slot design.
Without knowing the API you're using or how it works: there is no inherent reason why a write() method should transfer all the data supplied, especially when it returns a write count, which is a clear signal that it may not transfer everything in one go.
The only problem is the assertion itself. You should loop until the data is written, as sketched below, rather than assume it is all written in a single write. Nor should you sleep between writes in a vain attempt to outguess the device you're sending to; this is literally just a waste of time.
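A version of send() along those lines might look like the following; it is only a sketch, and the 100 ms per-chunk wait is an assumption:

// Hedged sketch: keep writing until the whole buffer has been accepted,
// instead of assuming a single write() moved everything.
bool Serial::send(const QString &text)
{
    QByteArray buffer = text.toLatin1();
    qint64 written = 0;
    while (written < buffer.size()) {
        qint64 n = qsp.write(buffer.constData() + written,
                             buffer.size() - written);
        if (n < 0)
            return false;                    // hard write error
        written += n;
        if (!qsp.waitForBytesWritten(100))   // 100 ms per chunk: an assumption
            return false;                    // timed out draining the buffer
    }
    return true;
}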

How would one avoid race conditions from multiple threads of a server sending data to a client? C++

I was following a tutorial on YouTube on building a chat program using Winsock and C++. Unfortunately, the tutorial never bothered to consider race conditions, and this causes many problems.
The tutorial had us open a new thread every time a new client connected to the chat server, which would handle receiving and processing data from that individual client.
void Server::ClientHandlerThread(int ID) // ID = the index in the SOCKET Connections array
{
    Packet PacketType;
    while (true)
    {
        if (!serverptr->GetPacketType(ID, PacketType)) // Get packet type
            break; // If there is an issue getting the packet type, exit this loop
        if (!serverptr->ProcessPacket(ID, PacketType)) // Process packet (packet type)
            break; // If there is an issue processing the packet, exit this loop
    }
    std::cout << "Lost connection to client ID: " << ID << std::endl;
}
When the client sends a message, the thread will process it and send it by first sending packet type, then sending the size of the message/packet, and finally sending the message.
bool Server::SendString(int ID, std::string & _string)
{
    if (!SendPacketType(ID, P_ChatMessage))
        return false;
    int bufferlength = _string.size();
    if (!SendInt(ID, bufferlength))
        return false;
    int RetnCheck = send(Connections[ID], _string.c_str(), bufferlength, NULL); // Send string buffer
    if (RetnCheck == SOCKET_ERROR)
        return false;
    return true;
}
The issue arises when two threads (two separate clients) try to send a message at the same time to the same ID (the same third client). One thread may send the client the int packet type, so the client is now prepared to receive an int, but then the second thread sends a string (because that thread assumes the client is waiting for its string). The client cannot process this correctly, which leaves the program unusable.
How would I solve this issue?
One solution I had: rather than allow each thread to execute server commands on its own, each thread would set an input value. The main server thread would loop through all the input values from each thread and execute the commands one by one.
However, I am unsure this won't have problems of its own. If a client sends multiple messages within a single server loop, only one of the messages will be sent (since the new message would overwrite the previous one). There are ways around this, such as arrays of inputs or faster loops, but it still poses a problem.
Another issue I thought of is that a client with a lower ID would always have their message sent first each loop. This isn't that big of a deal, but in a situation, say a trivia game, where two clients entered the correct answer in the same loop, the client with the lower ID would end up giving the answer "first" every time.
Thanks in advance.
If all I/O is being handled through a central server, a simple (but certainly not elegant) solution is to create a barrier around the I/O mechanism to each client. In the simplest case this can just be a mutex. Associate that barrier with each client, and any time someone wants to send that client something (a complete message), lock the barrier; unlock it when the complete message has been handled. That way only one thread can actually send something to a given client at a time. In C++11, see std::mutex; a sketch follows.
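A rough sketch, applying a per-client std::mutex to the SendString from the question; the sendMutexes array (parallel to Connections) is an assumed member, not part of the original code:

// Hedged sketch: serialize complete messages to one client with a mutex.
#include <mutex>

bool Server::SendString(int ID, std::string &_string)
{
    std::lock_guard<std::mutex> lock(sendMutexes[ID]); // one sender per client
    if (!SendPacketType(ID, P_ChatMessage))
        return false;
    int bufferlength = static_cast<int>(_string.size());
    if (!SendInt(ID, bufferlength))
        return false;
    if (send(Connections[ID], _string.c_str(), bufferlength, 0) == SOCKET_ERROR)
        return false;
    return true;        // type, length and body were sent without interleaving
}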

Emitting signal when bytes are received in serial port

I am trying to connect a signal and a slot in C++ using the Boost libraries. My code currently opens a file and reads data from it. However, I am trying to improve the code so that it can read and analyze data in real time from a serial port. What I would like is to have the analysis functions called only once there is data available on the serial port.
How would I go about doing this? I have done this in Qt before; however, I cannot use Qt's signals and slots here because this code does not use its moc tool.
Your OS (Linux) provides you with the following mechanism for dealing with the serial port.
You can set your serial port to noncanonical mode (by unsetting the ICANON flag in the termios structure). Then, if the MIN parameter in c_cc[] is at least 1 and TIME is zero, the read() function will block until new data arrives in the serial port input buffer (see the termios man page for details; a sketch of this setup follows the loop below). So you may run a separate thread responsible for getting the incoming serial data:
ssize_t count, bytesReceived = 0;
char myBuffer[1024];
while (1)
{
    // Note the extra parentheses: assign the result of read() first,
    // then compare it to zero.
    if ((count = read(portFD,
                      myBuffer + bytesReceived,
                      sizeof(myBuffer) - bytesReceived)) > 0)
    {
        /*
          Here we check the arrived bytes. If they can be processed as a
          complete message, you can alert the other thread in a way you
          choose, put them in some kind of queue, etc. The details depend
          greatly on the communication protocol being used. If there are
          not enough bytes to process, you just keep them in the buffer.
        */
        bytesReceived += count;
        if (MyProtocolMessageComplete(myBuffer, bytesReceived))
        {
            ProcessMyData(myBuffer, bytesReceived);
            AlertOtherThread(); // emit your 'signal' here
            bytesReceived = 0;  // going to wait for the next message
        }
    }
    else
    {
        // process read() error
    }
}
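For completeness, a sketch of the port setup described above; the device path and baud rate are assumptions, and error handling is kept minimal (cfmakeraw() clears ICANON among other flags):

// Hedged sketch: open a serial port in noncanonical, blocking-read mode.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int openSerialPort(const char *device)       // e.g. "/dev/ttyUSB0"
{
    int fd = open(device, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                          // noncanonical mode, ICANON unset
    tio.c_cc[VMIN]  = 1;                      // block until at least 1 byte
    tio.c_cc[VTIME] = 0;                      // no inter-byte timer
    cfsetispeed(&tio, B9600);                 // baud rate: an assumption
    cfsetospeed(&tio, B9600);
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}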
The main idea here is that the thread calling read() will be active only when new data arrives; the rest of the time the OS keeps the thread in a wait state, so it does not consume CPU time. It is up to you how to implement the actual signal part.
The example above uses the regular read() system call to get data from the port, but you can use the Boost class in the same manner: just use a synchronous read function and the result will be the same.
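For example, with Boost.Asio the same loop can be written around a synchronous read. A sketch, where the device path, baud rate and the helper functions from the snippet above are assumptions:

// Hedged sketch: the same blocking-read loop using Boost.Asio.
// Requires Boost >= 1.66 for io_context (io_service on older versions).
#include <boost/asio.hpp>

void readLoop()
{
    boost::asio::io_context io;
    boost::asio::serial_port port(io, "/dev/ttyUSB0");
    port.set_option(boost::asio::serial_port_base::baud_rate(9600));

    char myBuffer[1024];
    std::size_t bytesReceived = 0;
    for (;;) {
        boost::system::error_code ec;
        std::size_t count = port.read_some(
            boost::asio::buffer(myBuffer + bytesReceived,
                                sizeof(myBuffer) - bytesReceived), ec);
        if (ec)
            break;                            // process read error
        bytesReceived += count;
        if (MyProtocolMessageComplete(myBuffer, bytesReceived)) {
            ProcessMyData(myBuffer, bytesReceived);
            AlertOtherThread();               // emit your 'signal' here
            bytesReceived = 0;                // wait for the next message
        }
    }
}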