I have a TCP/IP socket set to non-blocking, but it blocks anyway. The socket is only referenced in one thread. This code works on Windows (with a few call substitutions) but not on Linux. The code looks like this (don't mind the C-style casts; this was written long ago. I also trimmed it a bit, so let me know if I accidentally removed a step; chances are I'm actually doing it. The actual code is on another computer, so I can't copy-paste):
// In the real code, these are class members. I'm not bonkers
int mSocket;
sockaddr_in mAddress;

void CreateSocket(
    unsigned int ipAddress,
    unsigned short port)
{
    // Omitting my error checking in this question for brevity because everything comes back valid
    mSocket = socket(AF_INET, SOCK_STREAM, 0); // Not -1
    int oldFlags = fcntl(mSocket, F_GETFL, 0); // Not -1
    fcntl(mSocket, F_SETFL, oldFlags | O_NONBLOCK); // Not -1
    mAddress.sin_family = AF_INET;
    mAddress.sin_addr.s_addr = ipAddress; // address is valid
    mAddress.sin_port = htons((u_short)port); // port is not 0 and allowed on firewall
    memset(mAddress.sin_zero, 0, sizeof(mAddress.sin_zero));
    // <Connect attempt loop starts here>
    connect(mSocket, (sockaddr*)&mAddress, sizeof(mAddress)); // Not -1 to exit loop
    // <Connect attempt loop ends here>
    // Connection is now successful ('connect' returned a value other than -1)
}
// ... Stuff happens ...
// ... Then this is called because 'select' call shows read data available ...
void AttemptReceive(
    MyReturnBufferTypeThatsNotImportant &returnedBytes)
{
    // Read socket
    const size_t bufferSize = 4096;
    char buffer[bufferSize];
    int result = 0;
    do {
        // Debugging code: sanity checks
        int socketFlags = fcntl(mSocket, F_GETFL, 0); // Not -1
        printf("result=%d\n", result);
        printf("O_NONBLOCK? %d\n", socketFlags & O_NONBLOCK); // Always prints "O_NONBLOCK? 2048"
        result = recv(mSocket, buffer, bufferSize, 0); // NEVER -1 or 0 after hundreds to thousands of calls, then suddenly blocks
        // ... Save off and package read data into user format for output to caller ...
    } while (result == bufferSize);
}
Because AttemptReceive is called in response to select, I believe the socket just happens to contain a number of bytes that is an exact multiple of the buffer size (4096). I've somewhat confirmed this with the printf statements, so it never blocks on the first pass through the loop. Every time this bug happens, the last two lines printed before the thread blocks are:
result=4096
O_NONBLOCK? 2048
Changing the recv line to recv(mSocket, buffer, bufferSize, MSG_DONTWAIT); actually "fixes" the issue (suddenly, recv occasionally returns -1 with errno EWOULDBLOCK/EAGAIN (both equal to each other on my OS)), but I'm afraid I'm just putting a band-aid on a gushing wound, so to speak. Any ideas?
P.S. the address is "localhost", but I don't think it matters.
Note: I'm using an old compiler (not by choice), g++ 4.4.7-23 from 2010. That may have something to do with the issue.
socket() automatically sets O_RDWR on the socket with my operating system and compiler, but it appears that O_RDWR had accidentally gotten unset on the socket in question at the start of the program (which somehow allowed it to read fine if there was data to read, but block otherwise). Fixing that bug caused the socket to stop blocking. Apparently, both O_RDWR and O_NONBLOCK are required to avoid sockets blocking, at least on my operating system and compiler.
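For anyone who hits something similar, here is a minimal diagnostic sketch (not my original code; the mSocket name is taken from the snippets above) that would have caught the bad flags:

    // Sketch: verify both the access mode and O_NONBLOCK on the socket.
    // O_ACCMODE masks out the access-mode bits (O_RDONLY/O_WRONLY/O_RDWR).
    int flags = fcntl(mSocket, F_GETFL, 0);
    if (flags == -1)
        perror("fcntl(F_GETFL)");
    else {
        if ((flags & O_ACCMODE) != O_RDWR)
            printf("warning: access mode is not O_RDWR\n");
        if (!(flags & O_NONBLOCK))
            printf("warning: O_NONBLOCK is not set\n");
    }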
Running in Docker on macOS, I have a simple server and client set up to measure how fast I can allocate data on the client and send it to the server. The tests are done over loopback (in the same Docker container). The message size for my tests was 1000000 bytes.
When I set SO_RCVBUF and SO_SNDBUF to their respective defaults, the performance halves.
SO_RCVBUF defaults to 65536 and SO_SNDBUF defaults to 1313280 (retrieved by calling getsockopt and dividing by 2).
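For reference, this is roughly how the defaults above were retrieved (sketch; the kernel reports back double the usable value, hence the division by 2):

    int val;
    socklen_t len = sizeof(val);
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, &val, &len) == 0)
        printf("SO_RCVBUF: %d reported, %d usable\n", val, val / 2);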
Tests:
When I test setting neither buffer size, I get about 7 Gb/s throughput.
When I set one buffer or the other to the default (or higher) I get 3.5 Gb/s.
When I set both buffer sizes to the default I get 2.5 Gb/s.
Server code: (cs is an accepted stream socket)
void tcp_rr(int cs, uint64_t& processed) {
    /* I remove this entire thing and performance improves */
    if (setsockopt(cs, SOL_SOCKET, SO_RCVBUF, &ENV.recv_buf, sizeof(ENV.recv_buf)) == -1) {
        perror("RCVBUF failure");
        return;
    }
    char *buf = (char *)malloc(ENV.msg_size);
    while (true) {
        int recved = 0;
        while (recved < ENV.msg_size) {
            int recvret = recv(cs, buf + recved, ENV.msg_size - recved, 0);
            if (recvret <= 0) {
                if (recvret < 0) {
                    perror("Recv error");
                }
                return;
            }
            processed += recvret;
            recved += recvret;
        }
    }
    free(buf);
}
Client code: (s is a connected stream socket)
void tcp_rr(int s, uint64_t& processed, BenchStats& stats) {
    /* I remove this entire thing and performance improves */
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &ENV.send_buf, sizeof(ENV.send_buf)) == -1) {
        perror("SNDBUF failure");
        return;
    }
    char *buf = (char *)malloc(ENV.msg_size);
    while (stats.elapsed_millis() < TEST_TIME_MILLIS) {
        int sent = 0;
        while (sent < ENV.msg_size) {
            int sendret = send(s, buf + sent, ENV.msg_size - sent, 0);
            if (sendret <= 0) {
                if (sendret < 0) {
                    perror("Send error");
                }
                return;
            }
            processed += sendret;
            sent += sendret;
        }
    }
    free(buf);
}
Zeroing in on SO_SNDBUF:
The default appears to be: net.ipv4.tcp_wmem = 4096 16384 4194304
If I setsockopt to 4194304 and then getsockopt (to see what's currently set), it returns 425984, roughly 10x less than I requested.
Additionally, it appears that setsockopt places a lock on buffer expansion (for send, the lock's name is SOCK_SNDBUF_LOCK, which prohibits adaptive expansion of the buffer). The question then is: why can't I request the full-size buffer?
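A minimal repro of the mismatch (sketch; s is a connected TCP socket):

    int req = 4194304;
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, &req, sizeof(req));

    int got;
    socklen_t len = sizeof(got);
    getsockopt(s, SOL_SOCKET, SO_SNDBUF, &got, &len);
    printf("requested %d, kernel reports %d\n", req, got); // reports 425984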
Clues for what is going on come from the kernel's handler for SO_SNDBUF (and SO_RCVBUF, but we'll focus on SO_SNDBUF below).
net/core/sock.c contains the implementations of the generic socket operations, including the SOL_SOCKET getsockopt and setsockopt handlers.
Examining what happens when we call setsockopt(s, SOL_SOCKET, SO_SNDBUF, ...):
case SO_SNDBUF:
    /* Don't error on this BSD doesn't and if you think
     * about it this is right. Otherwise apps have to
     * play 'guess the biggest size' games. RCVBUF/SNDBUF
     * are treated in BSD as hints
     */
    val = min_t(u32, val, sysctl_wmem_max);
set_sndbuf:
    sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
    sk->sk_sndbuf = max_t(int, val * 2, SOCK_MIN_SNDBUF);
    /* Wake up sending tasks if we upped the value. */
    sk->sk_write_space(sk);
    break;

case SO_SNDBUFFORCE:
    if (!capable(CAP_NET_ADMIN)) {
        ret = -EPERM;
        break;
    }
    goto set_sndbuf;
Some interesting things pop out.
First of all, we see that the maximum possible value is sysctl_wmem_max, a setting which is difficult to pin down within a Docker container. We know from the context above that it is likely 212992 (half of the value you read back after trying to set 4194304).
Secondly, we see SOCK_SNDBUF_LOCK being set. This setting is in my opinion not well documented in the man pages, but it appears to lock dynamic adjustment of the buffer size.
For example, in the function tcp_should_expand_sndbuf we get:
static bool tcp_should_expand_sndbuf(const struct sock *sk)
{
    const struct tcp_sock *tp = tcp_sk(sk);

    /* If the user specified a specific send buffer setting, do
     * not modify it.
     */
    if (sk->sk_userlocks & SOCK_SNDBUF_LOCK)
        return false;
    ...
So what is happening in your code? You attempt to set the max value as you understand it, but it is being truncated to something 10x smaller by the sysctl sysctl_wmem_max. This is then made far worse by the fact that setting the option locks the buffer to that smaller size. The strange part is that dynamic resizing apparently doesn't have the same restriction and can grow the buffer all the way to the max.
If you look at the first code snippet above, you see the SO_SNDBUFFORCE option. This disregards sysctl_wmem_max and lets you set essentially any buffer size, provided you have the right permissions.
It turns out that processes launched in generic Docker containers don't have CAP_NET_ADMIN, so to use this socket option you must run in --privileged mode. If you do, and you force the max size, you will see your benchmark deliver the same throughput as not setting the option at all and letting the buffer grow dynamically to the same size.
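A sketch of what that looks like (same socket s as above; fails with EPERM without CAP_NET_ADMIN, e.g. outside docker run --privileged):

    int req = 4194304;
    // SO_SNDBUFFORCE bypasses the sysctl_wmem_max cap entirely.
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUFFORCE, &req, sizeof(req)) == -1)
        perror("SO_SNDBUFFORCE");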
Yes, I understand this issue has been discussed many times.
And yes, I've seen and read these and other discussions:
1
2
3
and I still can't fix my code myself.
I am writing my own web server. In the loop below, it listens on a socket, accepts each new client, and stores it in a vector.
In my class I have this struct:
struct Connection
{
    int socket;
    std::chrono::system_clock::time_point tp;
    std::string request;
};
with the following data members:
std::mutex connected_clients_mux_;
std::vector<HttpServer::Connection> connected_clients_;
and the loop itself:
//...
bind(listen_socket_, (struct sockaddr *)&addr_, sizeof(addr_));
listen(listen_socket_, 4);
while (1) {
    connection_socket_ = accept(listen_socket_, NULL, NULL);
    //...
    Connection connection_;
    //...
    connected_clients_mux_.lock();
    this->connected_clients_.push_back(connection_);
    connected_clients_mux_.unlock();
}
It works: clients connect, and requests are sent and received.
But the problem is that if the connection is broken (^C on the client side), my program does not find out about it, even at this point:
void SendRespons(HttpServer::Connection socket_) {
    write(socket_.socket, (socket_.request + std::to_string(socket_.socket)).c_str(), 1024);
}
As the title of this question suggests, my app receives a SIGPIPE signal. Again, I have seen "solutions":
signal(SIGPIPE, &SigPipeHandler);

void SigPipeHandler(int s) {
    //printf("Caught SIGPIPE\n%d", s);
}
but it does not help. At the moment of the write we have the number of the socket being written to; is it possible to "remember" it and close that particular connection in the handler method?
my system:
Operating System: Ubuntu 20.04.2 LTS
Kernel: Linux 5.8.0-43-generic
g++ --version
g++ (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
As stated in the links you give, the solution is to ignore SIGPIPE and CHECK THE RETURN VALUE of the write calls. The latter is needed for correct operation (short writes) in all but the most trivial, unloaded cases anyway. Also, the fixed write size of 1024 that you are using is probably not what you want: if your response string is shorter, you'll send a bunch of random garbage along with it. You probably really want something like:
void SendRespons(HttpServer::Connection socket_) {
    auto data = socket_.request + std::to_string(socket_.socket);
    size_t sent = 0;
    while (sent < data.size()) {
        ssize_t len = write(socket_.socket, &data[sent], data.size() - sent);
        if (len < 0) {
            // There was an error -- might be EPIPE or EAGAIN or EINTR or even a few
            // other obscure corner cases. For EAGAIN or EINTR (which can only happen
            // if your program is set up to allow them), you probably want to try again.
            // Anything else, probably just close the socket and clean up.
            if (errno == EINTR)
                continue;
            close(socket_.socket);
            // should tell someone about it?
            break;
        }
        sent += len;
    }
}
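As for the signal itself, rather than installing a do-nothing handler, you can suppress it and rely on the error codes. Either of these works (MSG_NOSIGNAL is Linux-specific):

    #include <csignal>      // signal(), SIG_IGN
    #include <sys/socket.h> // send(), MSG_NOSIGNAL

    // Process-wide: a write() to a broken connection now fails with
    // errno == EPIPE instead of raising SIGPIPE.
    signal(SIGPIPE, SIG_IGN);

    // Per call: replace write() in the loop above with send() and
    // MSG_NOSIGNAL, which suppresses SIGPIPE for just this call.
    ssize_t len = send(socket_.socket, &data[sent], data.size() - sent, MSG_NOSIGNAL);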
I cannot understand how to properly use the SSL_shutdown function in OpenSSL. Similar questions have arisen several times in different places, but I couldn't find a solution that matches my situation exactly. I am using the package libssl-dev 1.0.1f-1ubuntu2.15 (the latest for now) under Ubuntu in VirtualBox.
I am working with a small legacy C++ wrapper over the OpenSSL library with non-blocking IO for server and client sockets. The wrapper seems to work fine, except in the following test case (I'm not providing the code of the unit test itself, because it contains a lot of code not related to the problem):
Initialize a server socket with self-signed certificate
Connect to that socket. SSL handshake completes successfully, except that I'm ignoring X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT return of SSL_get_verify_result for now.
Successfully send/receive some data through the connection. This step is optional and it doesn't affect the problem which follows. I mention it only to show that the connection is really established and set into correct state.
Try to shut down the SSL connection (server or client, doesn't matter which), which leads to an infinite wait on select.
All of the calls to SSL_read and SSL_write are synchronized, and the locking_callback is also set. After step 3 there are no other operations on the sockets except shutting down one of them.
In the code snippet below I omit all of the error processing and debugging code for clarity; none of the OpenSSL/POSIX calls fail (except in the cases where I left the error processing in place). I also provide the connect functions, in case this is important:
void OpenSslWrapper::ConnectToHost( ErrorCode& ec )
{
    ctx_ = SSL_CTX_new(SSLv23_client_method());
    SSL_CTX_load_verify_locations(ctx_, NULL, config_.verify_locations.c_str());
    if (config_.use_standard_verify_locations)
    {
        SSL_CTX_set_default_verify_paths(ctx_);
    }
    bio_ = BIO_new_ssl_connect(ctx_);
    BIO_get_ssl(bio_, &ssl_);
    SSL_set_mode(ssl_, SSL_MODE_AUTO_RETRY);
    std::string hostname = config_.address + ":" + to_string(config_.port);
    BIO_set_conn_hostname(bio_, hostname.c_str());
    BIO_set_nbio(bio_, 1);
    int res = 0;
    while ((res = BIO_do_connect(bio_)) <= 0)
    {
        BIO_get_fd(bio_, &fd_);
        if (!BIO_should_retry(bio_))
        { /* Never happens */ }
        WaitAfterError(res);
    }
    res = SSL_get_verify_result(ssl_);
    if (res != X509_V_OK && res != X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT)
    { /* Never happens */ }
    SSL_set_mode(ssl_, SSL_MODE_ENABLE_PARTIAL_WRITE);
    SSL_set_mode(ssl_, SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);
}
// config_.handle is a file descriptor obtained from the accept
// function; ctx_ is also set up in advance
void OpenSslWrapper::ConnectAsServer( ErrorCode& ec )
{
    ssl_ = SSL_new(ctx_);
    int flags = fcntl(config_.handle, F_GETFL, 0);
    flags |= O_NONBLOCK;
    fcntl(config_.handle, F_SETFL, flags);
    SSL_set_fd(ssl_, config_.handle);
    while (true)
    {
        int res = SSL_accept(ssl_);
        if (res > 0) { break; }
        if (!WaitAfterError(res).isSucceded())
        { /* never happens */ }
    }
    SSL_set_mode(ssl_, SSL_MODE_ENABLE_PARTIAL_WRITE);
    SSL_set_mode(ssl_, SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);
}
// The trouble is here
void OpenSslWrapper::Shutdown()
{
    // ...
    while (true)
    {
        int ret = SSL_shutdown(ssl_);
        if (ret > 0) { break; }
        else if (ret == 0) { continue; }
        else { WaitAfterError(ret); }
    }
    // ...
}

ErrorCode OpenSslWrapper::WaitAfterError(int res)
{
    int err = SSL_get_error(ssl_, res);
    switch (err)
    {
    case SSL_ERROR_WANT_READ:
        WaitForFd(fd_, k_WaitRead);
        return ErrorCode::Success;
    case SSL_ERROR_WANT_WRITE:
    case SSL_ERROR_WANT_CONNECT:
    case SSL_ERROR_WANT_ACCEPT:
        WaitForFd(fd_, k_WaitWrite);
        return ErrorCode::Success;
    default:
        return ErrorCode::Fail;
    }
}
WaitForFd is just a simple wrapper over the select, which waits on a given socket infinitely long on a specified FD_SET for read or write.
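Roughly, it is equivalent to this (reconstructed sketch; the signature and the k_WaitRead/k_WaitWrite constants are assumed from the calls above, and the real code is a bit more elaborate):

    ErrorCode OpenSslWrapper::WaitForFd(int fd, int mode)
    {
        fd_set set;
        FD_ZERO(&set);
        FD_SET(fd, &set);
        // Wait indefinitely for readability or writability.
        int res = select(fd + 1,
                         mode == k_WaitRead  ? &set : NULL,
                         mode == k_WaitWrite ? &set : NULL,
                         NULL, NULL);
        return res == 1 ? ErrorCode::Success : ErrorCode::Fail;
    }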
When the client calls Shutdown, the first call to SSL_shutdown returns 0. After the second call it returns -1 and SSL_get_error returns SSL_ERROR_WANT_READ, but selecting on the file descriptor for reading never returns. If I specify a timeout on select, SSL_shutdown keeps returning -1 and SSL_get_error keeps returning SSL_ERROR_WANT_READ; the loop never exits. After the first call to SSL_shutdown, the shutdown status is always SSL_SENT_SHUTDOWN.
It doesn't matter if I close a server or a client: both have the same behavior.
There's also a strange situation when I connect to some external host. The first call to SSL_shutdown returns 0, the second one -1 with SSL_ERROR_WANT_READ. Selecting on the socket finishes successfully, but when I call SSL_shutdown the next time, I again get -1, this time with error SSL_ERROR_SYSCALL and errno=0. As I read elsewhere, that is not a big deal, although it still seems strange and may be related, so I mention it here.
UPD: I ported the same code to Windows; the behavior didn't change.
P.S. I am sorry for mistakes in my English, I'd be grateful if someone corrects my language.
Currently I am trying to write serial port communication in VC++ to transfer data between a PC and a robot via an XBee transmitter. But after I write some commands to poll data from the robot, I don't receive anything back (the output of filesize is 0 in the code). Because my MATLAB interface works, the problem should be in the code, not the hardware or the communication. Would you please help me?
01/03/2014 update: I have updated my code. It still cannot receive any data from my robot (the output of read is 0). When I use cout<<&read in the while loop, I get "0041F01C1". I also don't know how to define the size of the buffer, because I don't know the size of the data I will receive. In the code I just give it an arbitrary size, 103. Please help me.
// This is the main DLL file.
#include "StdAfx.h"
#include <iostream>
#define WIN32_LEAN_AND_MEAN //for GetCommState command
#include "Windows.h"
#include <WinBase.h>
using namespace std;

int main() {
    char init[] = "";
    HANDLE serialHandle;

    // Open serial port
    serialHandle = CreateFile("\\\\.\\COM8", GENERIC_READ | GENERIC_WRITE, 0, 0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);

    // Do some basic settings
    DCB serialParams;
    DWORD read, written;
    serialParams.DCBlength = sizeof(serialParams);
    if ((GetCommState(serialHandle, &serialParams) == 0))
    {
        printf("Get configuration port has a problem.");
        return FALSE;
    }
    GetCommState(serialHandle, &serialParams);
    serialParams.BaudRate = CBR_57600;
    serialParams.ByteSize = 8;
    serialParams.StopBits = ONESTOPBIT;
    serialParams.Parity = NOPARITY;
    //set flow control="hardware"
    serialParams.fOutX = false;
    serialParams.fInX = false;
    serialParams.fOutxCtsFlow = true;
    serialParams.fOutxDsrFlow = true;
    serialParams.fDsrSensitivity = true;
    serialParams.fRtsControl = RTS_CONTROL_HANDSHAKE;
    serialParams.fDtrControl = DTR_CONTROL_HANDSHAKE;
    if (!SetCommState(serialHandle, &serialParams))
    {
        printf("Set configuration port has a problem.");
        return FALSE;
    }
    GetCommState(serialHandle, &serialParams);

    // Set timeouts
    COMMTIMEOUTS timeout = { 0 };
    timeout.ReadIntervalTimeout = 30;
    timeout.ReadTotalTimeoutConstant = 30;
    timeout.ReadTotalTimeoutMultiplier = 30;
    timeout.WriteTotalTimeoutConstant = 30;
    timeout.WriteTotalTimeoutMultiplier = 30;
    SetCommTimeouts(serialHandle, &timeout);
    if (!SetCommTimeouts(serialHandle, &timeout))
    {
        printf("Set configuration port has a problem.");
        return FALSE;
    }

    //write packet to poll data from robot
    WriteFile(serialHandle, ">*>p4", strlen(">*>p4"), &written, NULL);

    //check whether the data can be received
    char buffer[103];
    do {
        ReadFile(serialHandle, buffer, sizeof(buffer), &read, NULL);
        cout << read;
    } while (read != 0);
    //buffer[read]="\0";

    CloseHandle(serialHandle);
    return 0;
}
GetFileSize is documented not to be valid when used with a serial port handle. Use the ReadFile function to receive serial port data.
You should use strlen instead of sizeof here:
WriteFile(serialHandle,init,strlen(init),&written,NULL)
You would be even better off creating a function like this:
void write_to_robot(const char *msg)
{
    DWORD written;
    BOOL ok = WriteFile(serialHandle, msg, strlen(msg), &written, NULL)
              && (written == strlen(msg));
    if (!ok) printf("Could not send message '%s' to robot\n", msg);
}
But that's only the appetizer. The main trouble is, as MSDN says:
You cannot use the GetFileSize function with a handle of a nonseeking device such as a pipe or a communications device.
If you want to read from the port, you can simply use ReadFile until it returns zero bytes.
If you already know the max size of your robot's response, try reading that many characters.
Continue reading until the read reports an actual number of bytes read inferior to the size of the buffer. For instance:
#define MAX_ROBOT_ANSWER_LENGTH 1000 /* bytes */

const char * read_robot_response ()
{
    static char buffer[MAX_ROBOT_ANSWER_LENGTH];
    DWORD read;
    if (!ReadFile (serialHandle, buffer, sizeof(buffer), &read, NULL))
    {
        printf ("something wrong with the com port handle");
        exit (-1);
    }
    if (read == sizeof(buffer))
    {
        // the robot response is bigger than it should be
        printf ("this robot is overly talkative. Flushing input\n");
        // read the rest of the input so that the next answer will not be
        // polluted by leftovers of the previous one.
        do {
            ReadFile (serialHandle, buffer, sizeof(buffer), &read, NULL);
        } while (read != 0);
        // report error
        return "error: robot response exceeds maximal length";
    }
    else
    {
        // add a terminator to the string in case Mr Robot forgot to provide one
        buffer[read] = '\0';
        printf ("Mr Robot said '%s'\n", buffer);
        return buffer;
    }
}
This simplistic function returns a static variable, which will be overwritten each time you call read_robot_response.
Of course the proper way of doing things would be to use blocking I/Os instead of waiting one second and praying for the robot to answer in time, but that would require a lot more effort.
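For the record, the documented COMMTIMEOUTS semantics offer a middle ground without overlapped I/O. A sketch (untested, based on the MSDN description of these fields): with this configuration, ReadFile returns as soon as at least one byte is available, or gives up after the constant timeout if nothing arrives.

    COMMTIMEOUTS t = { 0 };
    t.ReadIntervalTimeout        = MAXDWORD;
    t.ReadTotalTimeoutMultiplier = MAXDWORD;
    t.ReadTotalTimeoutConstant   = 10000; // give up after 10 s with no data
    SetCommTimeouts(serialHandle, &t);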
If you feel adventurous, you can use overlapped I/O, as this lengthy MSDN article thoroughly explores.
EDIT: after looking at your code
// this reads at most 103 bytes of the answer, and does not display them
if (!ReadFile(serialHandle, buffer, sizeof(buffer), &read, NULL))
{
    printf("Reading data to port has a problem.");
    return FALSE;
}

// this could display the length of the remainder of the answer,
// provided it is more than 103 bytes long
do {
    ReadFile(serialHandle, buffer, sizeof(buffer), &read, NULL);
    cout << read;
} while (read != 0);
You are displaying nothing but the length of the response beyond the first 103 characters received.
This should do the trick:
#define BUFFER_LEN 1000
DWORD read;
char buffer[BUFFER_LEN];

do {
    if (!ReadFile(
        serialHandle,     // handle
        buffer,           // where to put your characters
        sizeof(buffer)-1, // max nr of chars to read, leaving room for a terminator
        &read,            // get the number of bytes actually read
        NULL))            // Yet another bloody stupid Microsoft parameter
    {
        // die if something went wrong
        printf("Reading data to port has a problem.");
        return FALSE;
    }

    // add a terminator after the last character read,
    // so as to have a null-terminated C string to display
    buffer[read] = '\0';

    // display what you actually read
    cout << buffer;
} while (read != 0);
I advised you to wrap the actual serial port accesses inside simpler functions for a reason. As I said before, Microsoft interfaces are a disaster: verbose, cumbersome, and only moderately consistent. Using them directly leads to awkward and obfuscated code.
Here, for instance, you seem to have gotten confused between read and buffer:
read holds the number of bytes actually read from the serial port.
buffer holds the actual data.
buffer is what you will want to display to see what the robot answered.
Also, you should have documentation for your robot stating which kinds of answers to expect. It would help to know how they are formatted, for instance whether or not they are null-terminated strings. That could make adding the string terminator unnecessary.
This is a followup to this question: How to wait for input from the serial port in the middle of a program
I am writing a program to control an Iridium modem that needs to wait for a response from the serial port in the middle of the program, in order to verify that the correct response was given. To accomplish this, a user recommended I use the select() call to wait for this input.
However, I have run into some difficulty with this approach. Initially, select() returned the value indicating a timeout on the response every time (even though the modem was sending back the correct responses, which I verified with another program running at the same time). Now, the program stops after one iteration, even with the correct response sent back from the modem.
// setting the file descriptor to the port
int fd = open(portName.c_str(), O_RDWR | O_NOCTTY | O_NDELAY);
if (fd == -1)
{
    /*
     * Could not open the port.
     */
    perror("open_port: Unable to open /dev/ttyS0 - ");
}
else
    fcntl(fd, F_SETFL, 0);

FILE *out = fopen(portName.c_str(), "w"); // sets the serial port
FILE *in = fopen(portName.c_str(), "r");

fd_set fds;
FD_ZERO(&fds);
FD_SET(fd, &fds);
struct timeval timeout = { 10, 0 }; /* 10 seconds */
//int ret = select(fd+1, &fds, NULL, NULL, &timeout);
/* ret == 0 means timeout, ret == 1 means descriptor is ready for reading,
   ret == -1 means error (check errno) */

char buf[100];
int i = 0;
while (i < (sizeof(messageArray)/sizeof(messageArray[0])))
{
    // creates a string with the AT command that writes to the module
    std::string line1("AT+SBDWT=");
    line1 += convertInt(messageArray[i].numChar);
    line1 += " ";
    line1 += convertInt(messageArray[i].packetNumber);
    line1 += " ";
    line1 += messageArray[i].data;
    line1 += std::string("\r\n");

    // creates a string with the AT command that initiates the SBD session
    std::string line2("AT+SBDI");
    line2 += std::string("\r\n");

    fputs(line1.c_str(), out); // sends to serial port
    //usleep(7000000);

    int ret = select(fd+1, &fds, NULL, NULL, &timeout);
    /* ret == 0 means timeout, ret == 1 means descriptor is ready for reading,
       ret == -1 means error (check errno) */
    if (ret == 1) {
        fgets(buf, sizeof(buf), in);
        // add code to check if response is correct
    }
    else if (ret == 0) {
        perror("timeout error ");
    }
    else if (ret == -1) {
        perror("some other error");
    }

    fputs(line2.c_str(), out); // sends to serial port
    //usleep(7000000); // Pauses between the addition of each packet.

    int ret2 = select(fd+1, &fds, NULL, NULL, &timeout);
    /* ret == 0 means timeout, ret == 1 means descriptor is ready for reading,
       ret == -1 means error (check errno) */
    if (ret2 == 0) {
        perror("timeout error ");
    }
    else if (ret2 == -1) {
        perror("some other error");
    }

    i++;
}
You aren't using the same file handle for read/write/select, which is somewhat strange.
You are not resetting your fd_sets, which are modified by select and would have all of your fds unset in the case of a timeout, making the next call timeout by default (as you are asking for no fds).
You are also using buffered I/O, which is bound to create headaches in this case. E.g., fgets waits for either EOF (which won't occur) or a newline, reading all the while. It will block until it gets its newline, so it may keep you hanging indefinitely if that never arrives.
It may also read more than it needs into the buffer, messing up your select read signal (you have data in the buffer, but select will time out, since there's nothing to read on the filehandle).
Bottom line is this:
use FD_SET in the loop to set/reset your fd sets, and also reset your timeout, since select may modify it
use a single handle for read/write/select instead of multiple handles, e.g. open the file with fopen(..., "w+") or open(..., O_RDWR)
if still using fopen, try disabling buffering with setvbuf and the _IONBF buffering option
otherwise, use open/read/write instead of fopen etc. (see the sketch below)
I will note that part of this was mentioned in this answer to your previous question.
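Putting those points together, the loop body would look something like this (sketch only: one unbuffered descriptor for everything, with the fd_set and timeout rebuilt on every iteration; line1 and portName are built as in your code):

    int fd = open(portName.c_str(), O_RDWR | O_NOCTTY); // one handle for read and write
    // ... configure the port, build line1/line2 as before ...

    write(fd, line1.c_str(), line1.size()); // raw write replaces fputs(out)

    fd_set fds;
    FD_ZERO(&fds);                       // rebuild the set every iteration:
    FD_SET(fd, &fds);                    // select clears fds that aren't ready
    struct timeval timeout = { 10, 0 };  // reset too: select may modify it

    int ret = select(fd + 1, &fds, NULL, NULL, &timeout);
    if (ret == 1) {
        char buf[100];
        ssize_t n = read(fd, buf, sizeof(buf) - 1); // raw read: no fgets buffering
        if (n > 0) {
            buf[n] = '\0';
            // check whether the response is correct
        }
    }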
You should perhaps use fflush() on your output file stream.