TIdTCPServer sharing of serial port - c++

Recently I modified a program that acts as a TCP server to share the traffic on a serial port connected to a device. Multiple clients connect and should have simultaneous access to the serial port.
The application is built with C++Builder, using TIdTCPServer in the server and TIdTCPClient in the client application.
Multiple clients need to connect and send commands to the serial port. The device responds immediately after a command is sent to it, per its protocol.
There is also a background thread that occasionally accesses the serial port and updates a cache of data held in the server's memory. The serial send/receive routines are protected by a mutex, so they can be called from both the TIdTCPServer's OnExecute event and the background thread.
I'm having difficulty getting the TIdTCPServer's OnExecute event to work without overlapping.
Ideally, the OnExecute event would run to completion before a request from another client is processed, so the requests would not overlap.
Here is the OnExecute event handler of the TIdTCPServer:
void __fastcall TfrmMain::IServerExecute(TIdContext *AContext)
{
    int i;
    int Len;
    TIdBytes TRB, TSB;
    unsigned char ARB[BUFFERLENGTH];
    int NumbSent, NumbReceived;

    // Read the command from the client. Send the length first then the actual data.
    Len = AContext->Connection->Socket->ReadLongInt();
    AContext->Connection->Socket->ReadBytes(TRB, Len, false);
    memset(ARB, 0, BUFFERLENGTH);
    for (i = 0; i < Len; i++) AOB[i] = TRB[i];
    NumbSent = Len;

    // Now send it out to the Serial port
    ProcessSerialMessage(AOB, Len, ARB, &NumbReceived, false);
    sending = false;
    TSB.Length = NumbReceived;
    for (i = 0; i < TSB.Length; i++) TSB[i] = ARB[i];
    AContext->Connection->Socket->Write(TSB.Length);
    AContext->Connection->Socket->Write(TSB);
    return;
}
Here is the routine for sending the data out over the serial port:
int ProcessSerialMessage(unsigned char *SendBuf, int NumbSBytes, unsigned char *ReceiveBuf, int *NumbRBytes, bool CalledFromThread)
{
    int RetValue;

    // MMutex is a global TMutex object.
    // The mutex is required because the background thread also updates the memory cache.
    MMutex->Acquire();

    // Now send the data out over the serial port and receive the reply.
    // These are standard serial port I/O routines and aren't shown here.
    rawsend(SendBuf, NumbSBytes);
    rawreceive(ReceiveBuf, NumbRBytes);
    RetValue = *NumbRBytes;

    MMutex->Release();
    return RetValue;
}

TIdTCPServer is a multi-threaded component. Each connected client runs in its own independent thread. The OnExecute event runs in those threads. So, it is your responsibility to make sure your OnExecute code is thread-safe, by serializing access to any shared resources.
You are using a mutex inside of ProcessSerialMessage(), so you are serializing access to the serial port (assuming your other background thread is also entering the same mutex). So that should be fine (although, I would suggest protecting the mutex locking/unlocking using a try..__finally block, or a local RAII-style class, to ensure the mutex is unlocked properly even if an exception is thrown).
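For illustration, here is a minimal RAII-style guard sketch. The class name and exact placement are mine, not from your code; it assumes the global MMutex TMutex object already used in ProcessSerialMessage():

class TMutexGuard
{
public:
    explicit TMutexGuard(TMutex *AMutex) : FMutex(AMutex) { FMutex->Acquire(); }
    ~TMutexGuard() { FMutex->Release(); } // runs even if an exception is thrown
private:
    TMutex *FMutex;
    TMutexGuard(const TMutexGuard &);            // non-copyable
    TMutexGuard &operator=(const TMutexGuard &);
};

int ProcessSerialMessage(unsigned char *SendBuf, int NumbSBytes, unsigned char *ReceiveBuf, int *NumbRBytes, bool CalledFromThread)
{
    TMutexGuard Guard(MMutex); // released automatically when the function exits
    rawsend(SendBuf, NumbSBytes);
    rawreceive(ReceiveBuf, NumbRBytes);
    return *NumbRBytes;
}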
However, one major issue I see with this code is that your AOB (and sending) variables are not declared locally in IServerExecute(), which means they must be shared variables accessed across threads (UPDATE: you have confirmed that in comments: "[AOB is] declared globally."). But AOB is not protected from concurrent access by multiple threads, which means that multiple clients are free to overwrite each other's inbound data while it is being sent to the serial port.
You are reading the serial port's response into local variables, and then using them to send back to the requesting clients. So there is no concurrency issue in that code.
I would suggest passing your TRB and TSB arrays directly to ProcessSerialMessage(). The two loops that copy bytes from one array to another are not really necessary, so you can eliminate the AOB and ARB variables from this code completely. That might be enough to solve your issue.
Try this:
void __fastcall TfrmMain::IServerExecute(TIdContext *AContext)
{
    TIdBytes TRB, TSB;
    int NumbSent, NumbReceived;

    // Read the command from the client. Send the length first then the actual data.
    NumbSent = AContext->Connection->Socket->ReadLongInt();
    AContext->Connection->Socket->ReadBytes(TRB, NumbSent, false);
    TSB.Length = BUFFERLENGTH;

    // Now send it out to the Serial port
    ProcessSerialMessage(&TRB[0], NumbSent, &TSB[0], &NumbReceived, false);
    AContext->Connection->Socket->Write(NumbReceived);
    AContext->Connection->Socket->Write(TSB, NumbReceived);
}

Related

How would one avoid race conditions from multiple threads of a server sending data to a client? C++

I was following a tutorial on YouTube on building a chat program using Winsock and C++. Unfortunately, the tutorial never bothered to consider race conditions, and this causes many problems.
The tutorial had us open a new thread every time a new client connected to the chat server, which would handle receiving and processing data from that individual client.
void Server::ClientHandlerThread(int ID) // ID = the index in the SOCKET Connections array
{
    Packet PacketType;
    while (true)
    {
        if (!serverptr->GetPacketType(ID, PacketType)) // Get packet type
            break; // If there is an issue getting the packet type, exit this loop
        if (!serverptr->ProcessPacket(ID, PacketType)) // Process packet (packet type)
            break; // If there is an issue processing the packet, exit this loop
    }
    std::cout << "Lost connection to client ID: " << ID << std::endl;
}
When the client sends a message, the thread will process it and send it by first sending packet type, then sending the size of the message/packet, and finally sending the message.
bool Server::SendString(int ID, std::string & _string)
{
    if (!SendPacketType(ID, P_ChatMessage))
        return false;
    int bufferlength = _string.size();
    if (!SendInt(ID, bufferlength))
        return false;
    int RetnCheck = send(Connections[ID], _string.c_str(), bufferlength, NULL); // Send string buffer
    if (RetnCheck == SOCKET_ERROR)
        return false;
    return true;
}
The issue arises when two threads (two separate clients) try to send a message at the same time to the same ID (the same third client). One thread may send the client the int packet type, so the client is now prepared to receive an int, but then the second thread sends a string instead (because it assumes the client is waiting for its packet type). The client cannot process this correctly, and the program becomes unusable.
How would I solve this issue?
One solution I had:
Rather than allow each thread to execute server commands on their own, they would set an input value. The main server thread would loop through all the input values from each thread and then execute the commands one by one.
However I am unsure this won't have problems of its own... If a client sends multiple messages in the time frame of a single server loop, only one of the messages will send (since the new message would over-write the previous message). Of course there are ways around this, such as arrays of input or faster loops, but it still poses a problem.
Another issue that I thought of was that a client with a lower ID would always end up having their message sent first each loop. This isn't that big of a deal but if there was a situation, say, a trivia game, where two clients entered the correct answer in the same loop then the client with the lower ID would end up saying the answer "first" every time.
Thanks in advance.
If all I/O is being handled through a central server, a simple (but certainly not elegant) solution is to create a barrier around the I/O mechanisms to each client. In the simplest case this can just be a mutex. Associate that barrier with each client and anytime someone wants to send that client something (a complete message), lock the barrier. Unlock it when the complete message is handled. That way only one client can actually send something to another client at a time. In C++11, see std::mutex.
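A rough sketch of that barrier, assuming C++11 and a fixed maximum number of connections (MAX_CONNECTIONS is a placeholder; Connections, SendPacketType, SendInt and P_ChatMessage come from your existing code):

#include <mutex>

std::mutex SendLocks[MAX_CONNECTIONS]; // one barrier per client slot

bool Server::SendString(int ID, std::string & _string)
{
    // Hold the barrier for the whole packet-type/length/payload sequence so
    // another thread cannot interleave its own message to the same client.
    std::lock_guard<std::mutex> lock(SendLocks[ID]);

    if (!SendPacketType(ID, P_ChatMessage))
        return false;
    int bufferlength = _string.size();
    if (!SendInt(ID, bufferlength))
        return false;
    return send(Connections[ID], _string.c_str(), bufferlength, 0) != SOCKET_ERROR;
}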

UnrealEngine4: Recv function would keep blocking when TCP server shutdown

I use a blocking FSocket on the client side connected to a TCP server. If there is no message from the server, the socket thread blocks in FSocket::Recv(). If the TCP server shuts down, the socket thread remains blocked in that function. With a blocking socket from the BSD socket API, the thread would return from recv() with an errno when the TCP server shuts down. So is this a defect in FSocket?
uint32 HRecvThread::Run()
{
    uint8* recv_buf = new uint8[RECV_BUF_SIZE];
    uint8* const recv_buf_head = recv_buf;
    int readLenSeq = 0;
    while (Started)
    {
        //if (TcpClient->Connected() && ClientSocket->GetConnectionState() != SCS_Connected)
        //{
        //    // server disconnected
        //    TcpClient->SetConnected(false);
        //    break;
        //}
        int32 bytesRead = 0;
        // because this is a blocking socket, the thread blocks in Recv if there is no message
        ClientSocket->Recv(recv_buf, readLenSeq, bytesRead);
        // ...
        // some logic of resolution for tcp msg bytes
        // ...
    }
    delete[] recv_buf;
    return 0;
}
As I expected, you are ignoring the return code, which presumably indicates success or failure, so you are looping indefinitely (not blocking) on an error or end of stream condition.
NB You should allocate the recv_buf on the stack, not dynamically. Don't use the heap when you don't have to.
There is a similar question on the forums in the UE4 C++ Programming section. Here is the discussion:
https://forums.unrealengine.com/showthread.php?111552-Recv-function-would-keep-blocking-when-TCP-server-shutdown
Long story short, in the UE4 Source, they ignore EWOULDBLOCK as an error. The code comments state that they do not view it as an error.
Also, there are several helper functions you should be using when opening the port and when polling the port (I assume you are polling since you are using blocking calls)
FSocket::Connect returns a bool, so make sure to check that return value.
FSocket::GetLastError returns the UE4-translated error code if an error occurred with the socket.
FSocket::HasPendingData will return a value that tells you whether it is safe to read from the socket.
FSocket::HasPendingConnection can check your connection state.
FSocket::GetConnectionState will tell you your active connection state.
Using these helper functions for error checking before making a call to FSocket::Recv will help you make sure you are in a good state before trying to read data. Also, it was noted in the forum posts that using the non-blocking code worked as expected. So, if you do not have a specific reason to use blocking code, just use the non-blocking implementation.
Also, as a final hint, using FSocket::Wait will block until your socket is in a desirable state of your choosing with a timeout, i.e. is readable or has data.
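Putting those helpers together, a rough sketch of an error-checked receive loop (names follow the code in the question; double-check the exact signatures against your engine version):

while (Started)
{
    // Block, with a timeout, until the socket is readable instead of relying on Recv() alone.
    if (!ClientSocket->Wait(ESocketWaitConditions::WaitForRead, FTimespan::FromSeconds(1)))
    {
        if (ClientSocket->GetConnectionState() != SCS_Connected)
            break;  // the server went away, leave the loop
        continue;   // timed out with no data, try again
    }

    int32 bytesRead = 0;
    if (!ClientSocket->Recv(recv_buf, RECV_BUF_SIZE, bytesRead) || bytesRead <= 0)
        break;      // Recv failed or the connection was closed

    // ... process bytesRead bytes from recv_buf ...
}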

Is it expected for poll() to take 40ms to return even though data will be available sooner?

I created a proxy server to handle CQL orders from website clients. The proxy listens for incoming connections and each connection is given a thread. The thread loops as long as the socket exists and dies on HUP. You may also stop the proxy, which will stop the threads by sending an event (See eventfd()) to each thread.
By itself, this already allows me to save a good 100ms because the proxy is local and connecting to a local service is much faster than a service on a remote computer... (even if the computer is local.)
However, I send orders and once in a while the proxy sees no incoming data (i.e. it calls read() on the socket which is setup as NONBLOCK and gets -1 in return and errno == EAGAIN.) When that happens, I call poll() to wait for additional data, the HUP, or a hit on the eventfd meaning I have to quit (i.e. 2 fds, the socket and the eventfd).
Somehow, more often than not, when I hit the poll() function call, it adds an extra 40ms to the time it takes for a message to go round trip. Although one would think this only happens on larger messages, it happens when I receive an order, which is less than 100 bytes! So the size should not be the culprit. I also changed the code to make sure I send the entire order from the client to the proxy in one write() and to avoid the poll() if at all possible (i.e. I call read() first, and poll() only if nothing is available.)
Note that I have no timeout in this case because there is nothing to check other than the incoming orders and the eventfd. So I would imagine that the timeout won't be a problem.
The code base is really big, but the client/server comes down to something like this (the sizes in the original are fully dynamic):
// Client
...
connect(socket);
...
write(socket, order, sizeof(order));
read(socket, result, sizeof(result));
// repeat for other orders, as required by client...

// Server
...
socket = accept(); // happens for each client
...
pthread_create(runner);
...

// Server thread (runner)
...
for(;;)
{
    int r(0);
    for(;;)
    {
        r += read(socket, order, sizeof(order));
        if(r >= sizeof(order))
        {
            break;
        }
        // wait for more data if not enough has been received yet
        poll(..."socket" + "eventfd"...); // <-- this will often take 40ms
        if(eventfd_happened)
        {
            // quit thread
            return;
        }
    }
    ...
    [work on order]
    ...
    write(socket, result, sizeof(result));
}
Note 1: I see the problem when I have a single client. So having multiple clients does not in itself cause the problem either.
Note 2: The client really uses BIO_connect(), BIO_read() and BIO_write() [from OpenSSL], but I doubt that would be a problem. I do not use any kind of encryption.
I don't see why you're using non-blocking I/O given you have a dedicated thread per socket. Just block in read(). Use SO_RCVTIMEO if you need an overall read timeout.
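A minimal sketch of that approach, reusing the socket/order names from the pseudo-code above and the same abbreviated style (the timeout value is arbitrary):

// Set an overall receive timeout once, right after accept().
struct timeval tv;
tv.tv_sec  = 5;   // example: give up after 5 seconds of silence
tv.tv_usec = 0;
setsockopt(socket, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

// Then just block in read() until a full order has arrived.
size_t received = 0;
while (received < sizeof(order))
{
    ssize_t n = read(socket, reinterpret_cast<char *>(&order) + received,
                     sizeof(order) - received);
    if (n > 0)
        received += n;  // got part (or all) of the order
    else if (n == 0)
        return;         // peer closed the connection: quit thread
    else if (errno == EAGAIN || errno == EWOULDBLOCK)
        return;         // receive timeout expired: quit thread
    else
        return;         // other error (check errno): quit thread
}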

UDP real time sending and receiving on Linux on command from control computer

I am currently working on a project written in C++ involving UDP real time connection. I receive UDP packets from a control computer containing commands to start/stop an infinite while loop that reads data from an IMU and sends that data to the control computer.
My problem is the following: at first I implemented an exit condition for the loop using recvfrom() and read(), but the control computer only sends a UDP packet every second, so the blocking call delayed the whole loop and made sending the data at the desired 5 ms interval impossible.
I tried to fix this by using fcntl(fd, F_SETFL, O_NONBLOCK); and using only read(), which actually works fine, but I am unsure whether this is a wise idea, since I am no longer checking for errors. Is there an elegant way to solve this problem? I thought about using Pthreads or something like that, but I have never worked with threads or parallel programming, so I would have to spend some time learning that.
I appreciate any advice on that problem you could give me.
Here is a code example:
//include
...
int main() {
    RNet cmd;               // RNet: struct that contains all the information of the UDP header and the command
    RNet* pCmd = &cmd;
    ssize_t b;
    int fd2;
    struct sockaddr_in snd; // sender is control computer
    socklen_t length;

    // further declaration of variables, connecting to socket, etc...
    ...
    fcntl(fd2, F_SETFL, O_NONBLOCK);

    while (1)
    {
        // read messages from control computer
        if ((b = read(fd2, pCmd, 19)) > 0) {
            memcpy(&cmd, pCmd, b);
        }

        // transmission
        while (cmd.CLout.MotionCommand == 1) // MotionCommand: 1 - send messages; 0 - do nothing
        {
            if (time_elapsed >= 5) // elapsed time in ms
            {
                // update sensor values
                ...
                // sendto()
                ...
                // update control time, timestamp, etc.
                ...
            }
            if (recvfrom(fd2, pCmd, (int)sizeof(pCmd), 0, (struct sockaddr*) &snd, &length) < 0) {
                perror("error receiving data");
                return 0;
            }
            // checking Control Model Command
            if ((b = read(fd2, pCmd, 19)) > 0) {
                memcpy(&cmd, pCmd, b);
            }
        }
    }
}
I really like the "blocking calls on multiple threads" design. It enables you to have distinct independent tasks, and you don't have to worry about how each task can disturb another. It can have some drawbacks but it is usually a good fit for many needs.
To do that, just use pthread_create to create a new thread for each task (you may keep the main thread for one task). In your case, you should have one thread to receive commands and another to send your data. You also need the receiving thread to notify the sending thread of the commands. For that, you can use a synchronization tool such as a mutex.
Overall, you should have your receiving thread blocking on recvfrom, and the sending thread waiting for a signal from the mutex (waiting for the mutex to be released, technically). When the receiving thread receives a start command, it signals the mutex and goes back to recvfrom (optionally, you can set a shared variable to provide more information to the other thread).
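A very rough sketch of that layout with POSIX threads, using a mutex plus a condition variable to carry the start/stop flag (all names are illustrative, and the recvfrom()/sendto() details from your code are elided):

#include <pthread.h>

// Shared command state, guarded by cmd_mutex.
static pthread_mutex_t cmd_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cmd_cond  = PTHREAD_COND_INITIALIZER;
static int motion_command = 0;  // 1 - send messages; 0 - do nothing

// Receiving thread: blocks in recvfrom(), then publishes the new command.
void* receiver_thread(void*)
{
    for (;;)
    {
        int new_command = 0;
        // ... recvfrom(fd2, pCmd, ...) blocks here; parse the packet into new_command ...
        pthread_mutex_lock(&cmd_mutex);
        motion_command = new_command;
        pthread_cond_signal(&cmd_cond);
        pthread_mutex_unlock(&cmd_mutex);
    }
    return 0;
}

// Sending thread: sleeps until a start command arrives, then streams the IMU data.
void* sender_thread(void*)
{
    for (;;)
    {
        pthread_mutex_lock(&cmd_mutex);
        while (motion_command == 0)
            pthread_cond_wait(&cmd_cond, &cmd_mutex); // no CPU used while waiting
        pthread_mutex_unlock(&cmd_mutex);

        // ... update sensor values and sendto() every 5 ms while motion_command stays 1 ...
    }
    return 0;
}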
As a side note, remember that UDP is 1-to-many, so this code will react to any packet sent to you (even from some random or malicious host). You may want to filter on the remote sockaddr after recvfrom, or use connect + recv; it depends on what you want.
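For the connect + recv variant, a short sketch (fd2, snd, pCmd and RNet are from your code; this assumes the control computer's address is already known and stored in snd):

// After connect(), the UDP socket only accepts datagrams whose source address
// matches 'snd'; packets from other hosts are silently dropped by the kernel.
if (connect(fd2, (struct sockaddr*) &snd, sizeof(snd)) < 0) {
    perror("connect");
    return 0;
}
// recv()/read() now needs no sender filtering, and send() can be used without
// passing the destination address each time.
ssize_t b = recv(fd2, pCmd, sizeof(RNet), 0);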

Emitting signal when bytes are received in serial port

I am trying to connect a signal and a slot in C++ using the boost libraries. My code currently opens a file and reads data from it. However, I am trying to improve the code so that it can read and analyze data in real time using a serial port. What I would like to do is have the analyze functions called only once there is data available in the serial port.
How would I go about doing this? I have done it in Qt before, however I cannot use signals and slots in Qt because this code does not use their moc tool.
Your OS (Linux) provides you with the following mechanism when dealing with the serial port.
You can put your serial port into noncanonical mode (by clearing the ICANON flag in the termios structure). Then, with MIN set to 1 and TIME set to 0 in c_cc[], the read() function will block until there is new data in the serial port input buffer (see the termios man page for details). So you can run a separate thread responsible for receiving the incoming serial data:
ssize_t count, bytesReceived = 0;
char myBuffer[1024];
while (1)
{
    if ((count = read(portFD,
                      myBuffer + bytesReceived,
                      sizeof(myBuffer) - bytesReceived)) > 0)
    {
        /*
            Here we check the arrived bytes. If they form a complete message,
            you can alert the other thread in whatever way you choose, put them
            into some kind of queue, etc. The details depend greatly on the
            communication protocol being used. If there are not enough bytes to
            process yet, you just keep them in the buffer.
        */
        bytesReceived += count;
        if (MyProtocolMessageComplete(myBuffer, bytesReceived))
        {
            ProcessMyData(myBuffer, bytesReceived);
            AlertOtherThread(); // emit your 'signal' here
            bytesReceived = 0;  // going to wait for the next message
        }
    }
    else
    {
        // process read() error
    }
}
The main idea here is that the thread calling read() is going to be active only when new data arrives. The rest of the time OS will keep this thread in wait state. Thus it will not consume CPU time. It is up to you how to implement the actual signal part.
The example above uses the regular read() system call to get data from the port, but you can use the Boost serial-port class in the same manner: just use a synchronous read function and the result will be the same.
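If you go the Boost route, a minimal sketch of the same pattern with a synchronous Boost.Asio serial_port read (the buffer size and the callback are placeholders, and MyProtocolMessageComplete stands for your own framing check, as above):

#include <boost/asio.hpp>
#include <cstddef>

void ReaderThread(boost::asio::serial_port& port,
                  void (*onMessage)(const char*, std::size_t))
{
    char buffer[1024];
    std::size_t bytesReceived = 0;
    for (;;)
    {
        boost::system::error_code ec;
        // Blocks until at least one byte arrives, just like read() above.
        std::size_t count = port.read_some(
            boost::asio::buffer(buffer + bytesReceived,
                                sizeof(buffer) - bytesReceived), ec);
        if (ec)
            break;                              // process/report the error
        bytesReceived += count;
        if (MyProtocolMessageComplete(buffer, bytesReceived))
        {
            onMessage(buffer, bytesReceived);   // this is the 'signal' part
            bytesReceived = 0;                  // wait for the next message
        }
    }
}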