C++ Boost Asio Serial Port Synchronous read_some Blocking Indefinitely?

I'm trying to implement my own synchronous, serial port "read_until" function but with a timeout. My implementation looks a bit like this in pseudo-code:
// Returns true if timed out, otherwise false.
bool MyReadUntil(string delim, int timeoutSecs)
{
    // Set up the timer.
    time start = now();
    time current = now();
    time deltaTime = current - start;
    // Keep appending to this string until the timer runs out.
    string readString = "";
    char cBuff[1024];
    boost::system::error_code ec;
    while (readString.find(delim) == string::npos)
    {
        // Update the elapsed time and return true if timed out.
        current = now();
        deltaTime = current - start;
        if (deltaTime >= timeoutSecs)
        {
            return true;
        }
        else
        {
            // This only works once.
            // NOTE: serialPort is a pointer to a boost::asio::serial_port.
            size_t n = serialPort->read_some(boost::asio::buffer(cBuff), ec);
            if (!ec)
            {
                // Append only the bytes actually read; cBuff is not null-terminated.
                readString.append(cBuff, n);
            }
        }
    }
    // If we break out of the while loop, there was no timeout.
    return false;
}
So what happens is that the read_some function only reads once on the first iteration of the loop, and then the next time it is called it blocks forever. I tried looking up the answer and have been searching for a while, but unfortunately the terms "read_some" and "async_read_some" are very closely related, and, seeing as how the asynchronous functionality is more commonly used, the latter dominates my search queries, making it hard to find an answer so far.
I would like to avoid putting in an async_read for this, because this is just part of a handshake implementation for a USB protocol. It doesn't need to be made more complicated by being asynchronous (there's nothing for it to do in the interim but wait anyway).
I think the issue has something to do with resetting the serial port on every read or something along those lines (I remember reading an article about the proper time to reset when iteratively reading from a serial port, but I can't find it again, unfortunately). In any case, I don't think it's anything earth-shatteringly complicated to fix, but I'm having difficulty finding the answer. Thanks in advance for your help.

Ok, so because of some suggestions I received, I found that only one byte of data was being read on that first loop, and then no more data was being read subsequently. This behavior, though, was not unwarranted, because Boost was doing exactly what it was supposed to be doing, which is
"block[ing] until one or more bytes of data has been read successfully"
The answer was not that an error was being thrown, but that I had assumed data was being transmitted when in fact nothing was coming over the wire (though I'm still not entirely clear on where the one lone byte came from on the first iteration of the read). I traced the issue back and found a hardware bug causing the problem, so once that was fixed and data was being transmitted properly, the read stopped blocking. Can't believe that slipped my mind, but thanks for the suggestions, everybody.
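For anyone who genuinely needs a timeout on a synchronous-style read: a blocking read_some can't honor one, since it blocks until at least one byte arrives. A common workaround is to issue the read asynchronously but drive the io_context from the same thread, so the caller still sees a synchronous call. A minimal sketch, assuming Boost 1.66+ for io_context::run_for (the helper name and the 50 ms polling slice are illustrative, not from the original code):

#include <boost/asio.hpp>
#include <chrono>
#include <string>

// Reads until `delim` appears in the stream or `timeout` expires.
// Returns true on timeout, false if the delimiter arrived in time.
bool ReadUntilWithTimeout(boost::asio::io_context& io,
                          boost::asio::serial_port& port,
                          const std::string& delim,
                          std::chrono::seconds timeout,
                          std::string& out)
{
    char buf[1024];
    const auto deadline = std::chrono::steady_clock::now() + timeout;

    while (out.find(delim) == std::string::npos)
    {
        if (std::chrono::steady_clock::now() >= deadline)
            return true; // timed out

        bool done = false;
        port.async_read_some(boost::asio::buffer(buf),
            [&](const boost::system::error_code& ec, std::size_t n) {
                done = true;
                if (!ec)
                    out.append(buf, n);
            });

        io.restart();
        // Run until the read completes or the 50 ms slice elapses.
        io.run_for(std::chrono::milliseconds(50));
        if (!done)
        {
            port.cancel(); // complete the pending read with operation_aborted
            io.run();      // drain the aborted handler before re-arming the read
        }
    }
    return false; // delimiter found before the deadline
}

The cancel()/run() pair ensures an abandoned read's handler is drained before the next iteration re-arms the read, so the call stays synchronous from the caller's point of view while the deadline can still fire.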

Related

I need help figuring out tcp sockets (clsocket)

I am having trouble figuring out sockets. I am just asking the server for data at a position (glm::i64vec4) and expecting a response, but the position gets way off by the time I get the response, and the data for that position reflects that (aka my voxel game makes a kinda cool-looking but useless mess).
It's probably just me not understanding sockets whatsoever, or maybe something weird with this library.
One thought I had was that it was maybe something to do with mismatched blocking and non-blocking modes on the server and client,
but when I switched the server to blocking (and put each client in a separate thread from the others and the accepting process) it did nothing.
If I'm doing something really stupid, please tell me; I know next to nothing about sockets.
Here is some code that probably looks horrible.
Server Code
std::deque<CActiveSocket*> clients;
CPassiveSocket socket;
socket.Initialize();
socket.SetNonblocking(); // I'm doing this so I don't need multiple threads for clients
socket.Listen("0.0.0.0", port);
while (1) {
    {
        CActiveSocket* c;
        if ((c = socket.Accept()) != NULL) {
            clients.emplace_back(c);
        }
    }
    for (CActiveSocket*& c : clients) {
        c->Receive(sizeof(glm::i64vec4));
        if (c->GetBytesReceived() == sizeof(glm::i64vec4)) {
            chkpkt chk;
            chk.pos = *(glm::i64vec4*)c->GetData();
            LOOP3D(chksize+2) {
                chk.data(i,j,k).val = chk.pos.y*chksize+j;
                chk.data(i,j,k).id = 0;
            }
            while (c->Send((uint8*)&chk, sizeof(chkpkt)) != sizeof(chkpkt)) {}
        }
    }
}
Client Code
// v is a glm::i64vec4
// fsock is set to Blocking
if (fsock.Send((uint8*)&v, sizeof(glm::i64vec4)))
    if (fsock.Receive(sizeof(chkpkt))) {
        tthread::lock_guard<tthread::fast_mutex> lock(wld->filemut);
        // I tried using the position I get back from the server to set this (instead of v),
        // but that made it to where nothing loaded.
        // I checked it and the chunk's position never lines up with what I sent.
        wld->ichks[v] = (*(chkpkt*)fsock.GetData()).data;
    }
Without your complete application code, it's unrealistic to offer corrections to any particular lines.
But it seems like you are using this library. It doesn't matter much if not, because most of the time in network programming, sockets' quirks make certain problems fairly universal. With that in mind, a few suggestions for the socket portion of your project:
It suffices to have BLOCKING sockets.
Most of the time a socket read has somewhat surprising behavior: it might not receive the requested number of bytes in a single call. Because of this, you need to call read repeatedly until the receiving buffer has been filled completely. For a complete and robust solution, you can refer to Stevens's readn routine ([Ref.1], page 122).
If you are using exactly the library mentioned above, you can see that your fsock.Receive eventually calls recv. And recv is just a variant of read [Ref.2], so the solutions for both are identical. This pattern might help:
while (fsock.Receive(sizeof(chkpkt)) > 0)
{
    // ...
}
Ref.1: https://mathcs.clarku.edu/~jbreecher/cs280/UNIX%20Network%20Programming(Volume1,3rd).pdf
Ref.2: https://man7.org/linux/man-pages/man2/recv.2.html#DESCRIPTION
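For illustration, here is a minimal readn-style helper over plain POSIX recv, a sketch of the pattern Stevens describes (the function name mirrors his; error handling is reduced to the essentials):

#include <sys/socket.h>
#include <sys/types.h>
#include <cstddef>

// Keep calling recv() until exactly `len` bytes have arrived (or error/EOF).
// Returns the number of bytes actually read, or -1 on error.
ssize_t readn(int fd, void* buf, size_t len)
{
    char* p = static_cast<char*>(buf);
    size_t remaining = len;
    while (remaining > 0)
    {
        ssize_t n = recv(fd, p, remaining, 0);
        if (n < 0)  return -1;  // error (a robust version would retry on EINTR)
        if (n == 0) break;      // peer closed the connection
        p += n;
        remaining -= n;
    }
    return static_cast<ssize_t>(len - remaining);
}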

c++ streaming udp data into a queue?

I am streaming data as a string over UDP, into a Socket class inside Unreal engine. This is threaded, and runs in the background.
My read function is:
float translate;

void FdataThread::ReceiveUDP()
{
    uint32 Size;
    TArray<uint8> ReceivedData;
    if (ReceiverSocket->HasPendingData(Size))
    {
        int32 Read = 0;
        ReceivedData.SetNumUninitialized(FMath::Min(Size, 65507u));
        ReceiverSocket->RecvFrom(ReceivedData.GetData(), ReceivedData.Num(), Read, *targetAddr);
    }
    FString str = FString(bytesRead, UTF8_TO_TCHAR((const UTF8CHAR *)ReceivedData));
    translate = FCString::Atof(*str);
}
I then call the translate variable from another class, on a Tick, or timer.
My test case sends an incrementing number from another application.
If I print this number from inside the above Read function, it looks as expected, counting up incrementally.
When I print it from the other thread, it is missing some of the numbers.
I believe this is because I call it on the Tick, so it misses out some data due to processing time.
My question is:
Is there a way to queue the incoming data, so that when I pull the value, it is the next incremental value and not the current one? What is the best way to go about this?
Thank you, please let me know if I have not been clear.
Is this the complete code? ReceivedData isn't used after it's filled with data from the socket. Instead, an (in this code) undefined variable 'buffer' is being used.
Also, it seems that the while loop could run multiple times, overwriting old data in the ReceivedData buffer. Add some debugging messages to see whether RecvFrom actually reads all bytes from the socket. I believe it reads only one 'packet'.
Finally, especially when you're using UDP sockets over the network, note that the UDP protocol isn't guaranteed to actually deliver its packets. However, I doubt this is causing your problems if you're using it on a single computer or a local network.
Your read loop doesn't make sense. You are reading and throwing away all datagrams but the last in any given sequence that happen to be in the socket receive buffer at the same time. The translate call should be inside the loop, and the loop should be while(true), or while (running), or similar.
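To make the queueing suggestion concrete: a minimal sketch of a thread-safe FIFO the receive thread can push into and the tick can drain from (the class name is illustrative; Unreal's own TQueue may be a better fit in-engine, but the idea is the same):

#include <mutex>
#include <optional>
#include <queue>

// Producer (the receive thread) pushes each parsed value; the consumer (the
// tick) pops values in arrival order, so no sample is skipped between ticks.
class FloatQueue
{
public:
    void Push(float v)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_values.push(v);
    }

    // Returns the oldest queued value, or std::nullopt if nothing is waiting.
    std::optional<float> Pop()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_values.empty())
            return std::nullopt;
        float v = m_values.front();
        m_values.pop();
        return v;
    }

private:
    std::mutex m_mutex;
    std::queue<float> m_values;
};

The tick handler can then pop until the queue is empty, or pop one value per tick, depending on whether you want to catch up or stay real-time.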

C/C++ recv() with timeout

Hoping someone can help me out. I want to implement some sort of timeout if my socket is not able to receive data in a certain amount of time. I've looked up ways online, but the examples I've found don't have their recv() in a while loop like mine; they typically just receive the whole buffer that is waiting. Maybe mine just isn't very efficient, and someone could point me in a better direction for receiving all the data.
The string that is to be received is not a fixed length, which is why I receive 1 byte at a time: I don't know how big the string might be. As you can see, my recv() receives data until it finds the End of Text character (ETX, \x03). The examples with select() I found would use select before calling receive, but should I be doing that for each go-around of my while loop? Or maybe call select() before I even enter the while loop?
Anyways, any help is appreciated.
string recv_data(int socket)
{
    bool endfound = false;
    char temp[1];
    string recvstring;
    // Receive 1 character at a time until the ETX (\x03) character is found.
    while (endfound == false)
    {
        ssize_t n = recv(socket, temp, sizeof(temp), 0);
        if (n < 0)
        {
            perror("error in recv data");
            break;
        }
        if (n == 0)
        {
            break; // peer closed the connection
        }
        if (memchr(temp, '\x03', 1) != NULL)
        {
            endfound = true;
        }
        recvstring += temp[0]; // append the single received character
        temp[0] = 0;
    }
    return formatting(recvstring);
}
You must put your socket into non-blocking mode, and use poll().
poll() technically works on regular blocking sockets too; however, in my experience there are various subtle differences in semantics and race conditions when using poll() with blocking sockets. For best portability I have always used non-blocking sockets together with poll(), plus careful inspection of the return value from recv() or read().
Some Google food for you: fcntl(), F_SETFL, O_NONBLOCK.
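A minimal sketch of that suggestion, assuming POSIX (the function name and buffer size are illustrative): put the socket into non-blocking mode, then gate each recv() behind poll() with a timeout.

#include <fcntl.h>
#include <poll.h>
#include <sys/socket.h>
#include <string>

// recv with a timeout via a non-blocking socket and poll().
// Returns false on timeout or error; appends received bytes to `out`.
bool recv_with_timeout(int sock, std::string& out, int timeout_ms)
{
    // Put the socket into non-blocking mode.
    int flags = fcntl(sock, F_GETFL, 0);
    fcntl(sock, F_SETFL, flags | O_NONBLOCK);

    struct pollfd pfd;
    pfd.fd = sock;
    pfd.events = POLLIN;

    int ready = poll(&pfd, 1, timeout_ms);
    if (ready <= 0)
        return false; // 0 = timed out, -1 = poll error

    char buf[256];
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    if (n <= 0)
        return false; // error, would-block, or connection closed

    out.append(buf, static_cast<size_t>(n));
    return true;
}

You'd call this in place of the raw recv() in the loop above, checking the return value each time around.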

pselect blocks even though data is available for read on socket

I'm experiencing an intermittent delay when reading from a POSIX socket (RHEL6 x86_64 C++ icpc). My code is designed such that a user can provide an absolute timespec deadline (vs. a relative timeout) to be used across multiple calls to recv. I call pselect to make sure that data is available for reading before attempting to call recv.
This typically works as expected (will wait for data but not exceed deadline, introducing no noticeable delay if data is available to recv). However, I have a user that can periodically (~50% of the time) get his application into a state where the select blocks for ~400-500 ms even though data is available on the socket. If I watch /proc/net/tcp, I can see that data is available in the RX queue and I can see the application slowly reading the data off the queue. If I skip the call to pselect and just call recv, the behavior is similar (but less delay overall indicating recv is also blocking unnecessarily). When the application gets into this state it stays this way (experiences consistent delay with each pselect/recv).
I spent several hours poking around here and on other sites. This is the closest similar issue I could find, but there was no resolution...
http://developerweb.net/viewtopic.php?id=7458
Has anyone run into this sort of behavior before? I'm at a loss for what to do. I've instrumented the code to validate that this is where the delay is happening. (Edit: We actually just validated that the entire method below was slow, not any particular system call.) It seems like a kernel/OS issue but I'm not sure where to look. Here's the code...
// protected
bool
Message::wait(int socket, const timespec & deadline) {
    // Bail if deadline not provided
    if (deadline.tv_sec == 0 && deadline.tv_nsec == 0) {
        return true;
    }

    // Make sure we haven't already exceeded deadline
    timespec currentTime;
    clock_gettime(CLOCK_REALTIME, &currentTime);
    if (VirtualClock::cmptime(currentTime, deadline) >= 0) {
        LOG_WARNING("Timed out waiting to receive data");
        m_timedOut = true;
        return false;
    }

    // Calculate receive timeout
    timespec timeout;
    memset(&timeout, 0, sizeof(timeout));
    timeout.tv_nsec = VirtualClock::nsecs(currentTime, deadline);
    VirtualClock::fixtime(timeout);

    // Wait for data
    fd_set descSet;
    FD_ZERO(&descSet);
    FD_SET(socket, &descSet);
    int result = pselect(socket + 1, &descSet, NULL, NULL, &timeout, NULL);
    if (result == -1) {
        m_error = errno;
        LOG_ERROR("Failed to wait for data: %d, %s",
                  m_error, strerror(m_error));
        return false;
    } else if (result == 0 || !FD_ISSET(socket, &descSet)) {
        LOG_WARNING("Timed out waiting to receive data");
        m_timedOut = true;
        return false;
    }
    return true;
}
VirtualClock is a time-related utility class just used here to compare/fix-up timespecs (i.e. not introducing any delays). I'd appreciate any insight on this behavior.
This was in fact not a problem with any system call. We used strace to diagnose and were seeing tons of calls to clock_gettime. Another (third) review of the calling code revealed a programming error resulting in the called code having a reference to corrupt stack data. This was facilitated by a flawed API design on my part resulting in corruption of the deadline.
I was allowing the user to pass in a reference to a ServerConfig class containing configuration (including data related to the deadline). My Server class was saving the reference instead of copying the object. The user created an instance of my Server class on the heap and passed in a reference to a ServerConfig on the stack (in a method), resulting in non-deterministic garbage in the configuration once the method exited and the ServerConfig went out of scope. This is older code, and I've since prevented this sort of thing from happening in other places after being burned, but this one slipped through.
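Distilled to its essentials, the bug pattern looked something like this (names are illustrative, not the actual code):

#include <cstdio>

struct ServerConfig { int timeoutSecs = 5; };

// The bug pattern: Server stores a reference instead of a copy.
class Server
{
public:
    explicit Server(const ServerConfig& cfg) : m_cfg(cfg) {}
    int Timeout() const { return m_cfg.timeoutSecs; }
private:
    const ServerConfig& m_cfg; // fix: store by value (ServerConfig m_cfg)
};

Server* makeServer()
{
    ServerConfig cfg;       // lives on the stack...
    return new Server(cfg); // ...and dies when this method returns
}

int main()
{
    Server* s = makeServer();
    std::printf("%d\n", s->Timeout()); // reads garbage: undefined behavior
    delete s;
}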
So lessons learned for me are: be careful with writing APIs that hang on to user-provided references, rethink premature optimization (the whole reason I was hanging onto a reference instead of just doing a copy), and look for stack corruption when you see non-deterministic behavior like this (something that I check for when I suspect builds are jacked up but didn't suspect this time). Also, strace is a great tool...I've seen others use it but now I'm comfortable using it myself.
Thanks for the comments and sorry for the false alarm.

C++ non blocking socket select send too slow?

I have a program that maintains a list of "streaming" sockets. These sockets are configured to be non-blocking sockets.
Currently, I have used a list to store these streaming sockets. I have some data that I need to send to all these streaming sockets hence I used the iterator to loop through this list of streaming sockets and calling the send_TCP_NB function below:
The issue is that my own program buffer, which stores the data before it goes to this send_TCP_NB function, slowly decreases in free size, indicating that sends are slower than the rate at which data is put into the buffer. Data is put into the program buffer at about 1000 items per second, and each item is quite small, about 100 bytes.
Hence, I am not sure whether my send_TCP_NB function is working efficiently or correctly.
int send_TCP_NB(int cs, char data[], int data_length) {
    bool sent = false;
    FD_ZERO(&write_flags); // initialize the writer socket set
    FD_SET(cs, &write_flags); // set the write notification for the socket based on the current state of the buffer
    int status;
    int err;
    struct timeval waitd; // set the time limit for waiting
    waitd.tv_sec = 0;
    waitd.tv_usec = 1000;
    err = select(cs+1, NULL, &write_flags, NULL, &waitd);
    if (err == 0)
    {
        // time limit expired
        printf("Time limit expired!\n");
        return 0; // send failed
    }
    else
    {
        while (!sent)
        {
            if (FD_ISSET(cs, &write_flags))
            {
                FD_CLR(cs, &write_flags);
                status = send(cs, data, data_length, 0);
                sent = true;
            }
        }
        int nError = WSAGetLastError();
        if (nError != WSAEWOULDBLOCK && nError != 0)
        {
            printf("Error sending non blocking data\n");
            return 0;
        }
        else
        {
            if (nError == WSAEWOULDBLOCK)
            {
                printf("%d\n", nError);
            }
            return 1;
        }
    }
}
One thing that would help is if you thought out exactly what this function is supposed to do. What it actually does is probably not what you wanted, and has some bad features.
The major features of what it does that I've noticed are:
Modify some global state
Wait (up to 1 millisecond) for the write buffer to have some empty space
Abort if the buffer is still full
Send 1 or more bytes on the socket (ignoring how much was sent)
If there was an error (including the send deciding it would have blocked despite the earlier check), obtain its value; otherwise, obtain a leftover, essentially random error value
Possibly print something to screen, depending on the value obtained
Return 0 or 1, depending on the error value.
Comments on these points:
Why is write_flags global?
Did you really intend to block in this function?
This is probably fine
Surely you care how much of the data was sent?
I do not see anything in the documentation that suggests that this will be zero if send succeeds
If you cleared up what the actual intent of this function was, it would probably be much easier to ensure that this function actually fulfills that intent.
That said
I have some data that I need to send to all these streaming sockets
What precisely is your need?
If your need is that the data must be sent before proceeding, then using a non-blocking write is inappropriate*, since you're going to have to wait until you can write the data anyways.
If your need is that the data must be sent sometime in the future, then your solution is missing a very critical piece: you need to create a buffer for each socket which holds the data that needs to be sent, and then you periodically need to invoke a function that checks the sockets to try writing whatever it can. If you spawn a new thread for this latter purpose, this is the sort of thing select is very useful for, since you can make that new thread block until it is able to write something. However, if you don't spawn a new thread and just periodically invoke a function from the main thread to check, then you don't need to bother. (just write what you can to everything, even if it's zero bytes)
*: At least, it is a very premature optimization. There are some edge cases where you could get slightly more performance by using the non-blocking writes intelligently, but if you don't understand what those edge cases are and how the non-blocking writes would help, then guessing at it is unlikely to get good results.
EDIT: as another answer implied, this is something the operating system is good at anyways. Rather than try to write your own code to manage this, if you find your socket buffers filling up, then make the system buffers larger. And if they're still filling up, you should really give serious thought to the idea that your program needs to block anyways, so that it stops sending data faster than the other end can handle it. i.e. just use ordinary blocking sends for all of your data.
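For what it's worth, enlarging the kernel send buffer is a one-liner via setsockopt (a sketch; the 1 MB value is arbitrary, and the char* cast matches the question's Winsock-style code):

int size = 1 << 20; // request a 1 MB kernel send buffer
if (setsockopt(cs, SOL_SOCKET, SO_SNDBUF, (const char*)&size, sizeof(size)) != 0)
    printf("setsockopt(SO_SNDBUF) failed\n");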
Some general advice:
Keep in mind you are multiplying data. So if you get 1 MB/s in, you output N MB/s with N clients. Are you sure your network card can take it? It gets worse with smaller packets, since you get more overall overhead. You may want to consider broadcasting.
You are using non-blocking sockets, but you busy-wait while they are not ready. If you want to be non-blocking, it's better to discard the packet immediately if the socket is not ready.
What would be better is to "select" more than one socket at once. Do everything that you are doing but for all the sockets that are available. You'll write to each "ready" socket, then repeat again while there are sockets that are not ready. This way, you'll proceed with the sockets that are available first, and then with some chance, the busy sockets will become themselves available.
The while (!sent) loop is useless and probably buggy. Since you are checking only one socket, FD_ISSET will always be true, and it is wrong to check FD_ISSET again after an FD_CLR.
Keep in mind that your OS has internal buffers for the sockets and that there are ways to extend them (not easy on Linux, though; to get large values you need to do some config as root).
There are some socket libraries that will probably work better than what you can implement in a reasonable time (boost::asio and zmq for the ones I know).
If you need to implement it yourself, (i.e. because for instance zmq has its own packet format), consider using a threadpool library.
EDIT:
Sleeping 1 millisecond is probably a bad idea. Your thread will probably get descheduled and it will take much more than that before you get some CPU time again.
This is just a horrible way to do things. The select serves no purpose but to waste time. If the send is non-blocking, it can mangle data on a partial send. If it's blocking, you still waste arbitrarily much time waiting for one receiver.
You need to pick a sensible I/O strategy. Here is one: Set all sockets non-blocking. When you need to send data to a socket, just call write. If all the data writes, lovely. If not, save the portion of data that wasn't sent for later and add the socket to your write set. When you have nothing else to do, call select. If you get a hit on any socket in your write set, write as many bytes as you can from what you saved. If you write all of them, remove that socket from the write set.
(If you need to write to a socket that's already in your write set, just append the data to the saved data to be sent. You may need to close the connection if too much data gets buffered.)
A better idea might be to use a library that already does all these things. Boost::asio is a good one.
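For illustration, a rough sketch of the strategy described above, using POSIX send()/select() and a per-socket backlog (all names are illustrative; real code would also close sockets on fatal errors):

#include <cerrno>
#include <map>
#include <string>
#include <sys/select.h>
#include <sys/socket.h>

// Each socket keeps its own backlog of bytes that didn't fit in the kernel buffer.
std::map<int, std::string> pending; // socket -> unsent bytes

void queue_send(int sock, const char* data, size_t len)
{
    std::string& backlog = pending[sock];
    if (backlog.empty())
    {
        // Try to write immediately; keep whatever doesn't fit.
        ssize_t n = send(sock, data, len, 0);
        if (n < 0)
        {
            if (errno != EWOULDBLOCK && errno != EAGAIN)
                return; // fatal error: a real program would close the socket here
            n = 0;      // would block: nothing was written
        }
        backlog.append(data + n, len - static_cast<size_t>(n));
    }
    else
    {
        backlog.append(data, len); // already backed up: just enqueue
    }
}

// Call after select() reports writable sockets; writes as much as each accepts.
void flush_ready(fd_set& writable)
{
    for (auto it = pending.begin(); it != pending.end(); )
    {
        std::string& backlog = it->second;
        if (FD_ISSET(it->first, &writable))
        {
            ssize_t n = send(it->first, backlog.data(), backlog.size(), 0);
            if (n > 0)
                backlog.erase(0, static_cast<size_t>(n));
        }
        if (backlog.empty())
            it = pending.erase(it);
        else
            ++it;
    }
}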
You are calling select() before calling send(). Do it the other way around. Call select() only if send() reports WSAEWOULDBLOCK, eg:
int send_TCP_NB(int cs, char data[], int data_length)
{
    int status;
    int err;
    struct timeval waitd;
    char *data_ptr = data;

    while (data_length > 0)
    {
        status = send(cs, data_ptr, data_length, 0);
        if (status > 0)
        {
            data_ptr += status;
            data_length -= status;
            continue;
        }

        err = WSAGetLastError();
        if (err != WSAEWOULDBLOCK)
        {
            printf("Error sending non blocking data\n");
            return 0; // send failed
        }

        FD_ZERO(&write_flags);
        FD_SET(cs, &write_flags); // set the write notification for the socket based on the current state of the buffer

        waitd.tv_sec = 0;
        waitd.tv_usec = 1000;

        status = select(cs+1, NULL, &write_flags, NULL, &waitd);
        if (status > 0)
            continue;

        if (status == 0)
            printf("Time limit expired!\n");
        else
            printf("Error waiting for time limit!\n");
        return 0; // send failed
    }
    return 1;
}