send() not reporting ENOTCONN when client has closed the connection (AS400) - c++

This is on an AS400 (IBM i, iSeries, et al).
I have a small Java program that I use to send test files to a server written in C++, which also runs on the IBM i. In the Java program I set the response timeout to, let's say, 5 seconds. In the server I sleep for a random 0 to 10 seconds before responding. When the Java program times out, it throws java.net.SocketTimeoutException, closes the socket with .close(), and exits. The server program just proceeds after its sleep and calls send(). The thing is, send() does not fail with -1 and set ENOTCONN. Why? Also, inet_ntop() on the socket still gives me the remote IP and port that connected to the server, as though the socket were still connected. Scratching my head.
EDIT: After being disappointed by poll(), I found that select() will report an error via FD_ISSET() when the error set is passed in. In my case, select() returns 3, indicating that all three conditions (read, write and error) are set for my one socket. You can't find out from select() alone what the error actually is; a way to retrieve it is sketched after the snippet.
fd_set read_set, write_set, error_set;
FD_ZERO(&read_set);
FD_ZERO(&write_set);
FD_ZERO(&error_set);
FD_SET(sock_fd, &read_set);
FD_SET(sock_fd, &write_set);
FD_SET(sock_fd, &error_set);

struct timeval timeout;
timeout.tv_sec = 10; // reset this on every new iteration.
timeout.tv_usec = 0;

int rc = select(sock_fd + 1, &read_set, &write_set, &error_set, &timeout);
CERR << "select() returned " << rc << endl;
if (rc >= 0) {
    if (FD_ISSET(sock_fd, &read_set)) {
        CERR << "ready to read" << endl;
    }
    if (FD_ISSET(sock_fd, &write_set)) {
        CERR << "ready to write" << endl;
    }
    if (FD_ISSET(sock_fd, &error_set)) {
        CERR << "has an error" << endl;
        CERR << "errno=" << errno << ", " << strerror(errno) << endl;
    }
}
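For completeness: select() does not set errno on behalf of the flagged descriptor, so printing errno there is misleading. The pending error can normally be retrieved with getsockopt(SO_ERROR). A minimal sketch continuing the snippet above, assuming POSIX-style sockets as on IBM i (the option-value cast and length type may differ slightly by platform):

int so_error = 0;
socklen_t optlen = sizeof(so_error);
// Fetch (and clear) the pending error on the socket itself, rather than
// relying on errno, which select() does not set for the descriptor.
if (getsockopt(sock_fd, SOL_SOCKET, SO_ERROR, &so_error, &optlen) == 0
        && so_error != 0) {
    CERR << "pending socket error=" << so_error
         << ", " << strerror(so_error) << endl;
}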

From man send:
ENOTCONN
The socket is not connected, and no target has been given.
In other words, your expectations are incorrect. ENOTCONN is for the case where you haven't connected the socket; it has nothing to do with the peer disconnecting. That case will eventually cause ECONNRESET, but not on the first such send(), because of TCP buffering.
Working as designed.
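If the server needs to notice the close before writing, the usual approach is to watch for readability: a recv() that returns 0 on a TCP socket means the peer performed an orderly shutdown. A minimal sketch, assuming POSIX sockets (peer_closed is a hypothetical helper, not part of the original code):

// Hypothetical helper: true if the peer has closed its end. Uses a
// zero-timeout select() plus MSG_PEEK so it neither blocks nor consumes data.
bool peer_closed(int fd)
{
    fd_set rset;
    FD_ZERO(&rset);
    FD_SET(fd, &rset);
    struct timeval tv = {0, 0};            // poll only, do not block
    if (select(fd + 1, &rset, NULL, NULL, &tv) <= 0)
        return false;                      // nothing readable (or select failed)
    char c;
    return recv(fd, &c, 1, MSG_PEEK) == 0; // 0 bytes == orderly FIN from peer
}

Even with this check, a send() issued right after the peer's close can still succeed; the failure (ECONNRESET, or EPIPE on a later write) only shows up once the RST has come back, as described above.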

Related

Boost ASIO ip tcp iostream expires_from_now doesn't result in an error when connection fails

We've come across a strange situation where our code reports that it has successfully opened a connection to an unreachable address. But it only happens when we actively reduce the time-out for a boost::asio::ip::tcp::iostream:
void Connect(std::string address, std::string port) {
    std::cout << "Waiting to connect: " << address << " at port " << port << "\n";
    boost::asio::ip::tcp::iostream s;
    s.expires_from_now(std::chrono::seconds(5));
    s.connect(address, port);
    if (!s) {
        std::cout << "Unable to connect: " << s.error().message() << "\n";
    } else {
        std::cout << "Success!" << "\n";
    }
}
With the above code, Connect("192.168.25.25", "1234"); will quickly report "Success!", even though the address is unreachable. If we remove the expires_from_now call, we instead get "Unable to connect: Connection timed out" after about 2 minutes, as we expect.
I would have expected that changing the timeout using expires_from_now would result in a time-out error state. We're using boost 1.68.
Is there any other way to find out that the time-out is reached, or perhaps more appropriately, whether the connection has been established?
The "Success" indication just means that nothing has gone wrong yet. It does not mean that the connect has succeeded. You chose not to wait to find out whether the connect succeeded or failed and to instead time out the wait for that result.
You cannot make a TCP connect operation fail early. The rules for the conditions under which a TCP connect attempt fails are part of the TCP specification and just setting a timeout won't change how long it takes the connection attempt to fail.
Querying for the remote_endpoint will however indicate an error condition if the connection has not been established, for example:
boost::system::error_code ec;
s.socket().remote_endpoint(ec);
if (ec) {
    std::cout << "Unable to connect\n";
}
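Putting it together, a hedged reworking of the Connect() from the question (same Boost 1.68 iostream API; the extra step is the remote_endpoint() probe shown just above):

void Connect(std::string address, std::string port) {
    boost::asio::ip::tcp::iostream s;
    s.expires_from_now(std::chrono::seconds(5));
    s.connect(address, port);
    // "!s" only reflects errors observed so far; probing the remote
    // endpoint tells us whether a connection actually exists yet.
    boost::system::error_code ec;
    s.socket().remote_endpoint(ec);
    if (!s) {
        std::cout << "Unable to connect: " << s.error().message() << "\n";
    } else if (ec) {
        std::cout << "Unable to connect: " << ec.message() << "\n";
    } else {
        std::cout << "Success!" << "\n";
    }
}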

How to interrupt accept() in a TCP/IP server?

I'm working on a vision application, which has two modes:
1) parameter setting
2) automatic
The problem is in 2), where my app waits for a signal via TCP/IP. The program freezes while accept() is being called. I want the GUI to offer the possibility of changing the mode; a mode change is signalled by another mechanism (a message_queue), so I want to interrupt the blocking accept().
Is there a simple way to interrupt accept()?
std::cout << "TCPIP " << std::endl;
client = accept(slisten, (struct sockaddr*)&clientinfo, &clientinfolen);
if (client != SOCKET_ERROR)
    cout << "client accepted: " << inet_ntoa(clientinfo.sin_addr) << ":"
         << ntohs(clientinfo.sin_port) << endl;

// recv returns the number of bytes received; buf contains the data received.
int rec = recv(client, buf, sizeof(buf), 0);
cout << "Message: " << rec << " bytes and the message " << buf << endl;
I read about select(), but I have no clue how to use it. Could anybody give me a hint on how to use, for example, select() in my code?
The solution is to call accept() only when there is an incoming connection request. You do that by polling on the listen socket, where you can also add other file descriptors, use a timeout etc.
You did not mention your platform. On Linux, see epoll(); on other UNIX systems, see poll()/select(); on Windows I don't know.
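As a minimal POSIX sketch of that idea, poll() can bound the wait on the listening socket so the loop gets a regular chance to check a mode flag (keep_running and the one-second timeout are illustrative, not part of the original code):

#include <poll.h>

struct pollfd pfd;
pfd.fd = slisten;
pfd.events = POLLIN;                  // a pending connection makes slisten readable

while (keep_running) {                // e.g. a flag the GUI/mode logic can clear
    int rc = poll(&pfd, 1, 1000);     // wake up at least once per second
    if (rc < 0)
        break;                        // error: handle as appropriate
    if (rc > 0 && (pfd.revents & POLLIN)) {
        // accept() will not block now; a connection request is queued
        client = accept(slisten, (struct sockaddr*)&clientinfo, &clientinfolen);
        // ... handle the client ...
    }
}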
A general way would be to use a local TCP connection by which the UI thread can interrupt the select() call. The general architecture would use:
- a dedicated thread waiting with select() on both slisten and the local TCP connection
- a TCP connection (a Unix domain socket on a Unix or Unix-like system, or 127.0.0.1 on Windows) between the UI thread and the waiting one
- various synchronizations/messages between both threads as required
Just declare that select() should read both slisten and the local socket. It will return as soon as either is ready, and you can tell which one became ready, as sketched below.
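A minimal POSIX sketch of that wake-up mechanism, using socketpair() for the local connection (on Windows you would substitute a 127.0.0.1 TCP pair, as noted above; wake_fds and the one-byte protocol are illustrative):

#include <sys/socket.h>
#include <sys/select.h>
#include <unistd.h>

int wake_fds[2];                       // [0]: waiting thread, [1]: UI thread
socketpair(AF_UNIX, SOCK_STREAM, 0, wake_fds);

// In the waiting thread:
fd_set rfds;
FD_ZERO(&rfds);
FD_SET(slisten, &rfds);
FD_SET(wake_fds[0], &rfds);
int maxfd = (slisten > wake_fds[0] ? slisten : wake_fds[0]);
if (select(maxfd + 1, &rfds, NULL, NULL, NULL) > 0) {
    if (FD_ISSET(wake_fds[0], &rfds)) {
        char cmd;
        read(wake_fds[0], &cmd, 1);    // UI thread asked us to stop/change mode
    }
    if (FD_ISSET(slisten, &rfds)) {
        // incoming connection: accept() can now be called without blocking
    }
}

// In the UI thread, to interrupt the wait:
// write(wake_fds[1], "x", 1);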
As you haven't specified your platform, and networking, especially async, is platform-specific, I suppose you need a cross-platform solution. Boost.Asio fits perfectly here: http://www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/reference/basic_socket_acceptor/async_accept/overload1.html
Example from the link:
void accept_handler(const boost::system::error_code& error)
{
    if (!error)
    {
        // Accept succeeded.
    }
}
...
boost::asio::ip::tcp::acceptor acceptor(io_service);
...
boost::asio::ip::tcp::socket socket(io_service);
acceptor.async_accept(socket, accept_handler);
If Boost is a problem, Asio can be a header-only lib and used w/o Boost: http://think-async.com/Asio/AsioAndBoostAsio.
One way would be to run select in a loop with a timeout.
Put slisten into nonblocking mode (this isn't strictly necessary but sometimes accept blocks even when select says otherwise) and then:
fd_set read_fds;
struct timeval timeout;
int select_status;
while (true) {
    // select() modifies the fd_set (and, on Linux, the timeout),
    // so reinitialize both on every iteration.
    FD_ZERO(&read_fds);
    FD_SET(slisten, &read_fds);
    timeout.tv_sec = 1; // 1s timeout
    timeout.tv_usec = 0;
    select_status = select(slisten + 1, &read_fds, NULL, NULL, &timeout);
    if (select_status == -1) {
        // ERROR: do something
    } else if (select_status > 0) {
        break; // we have data, we can accept now
    }
    // otherwise (select_status == 0): timeout, continue
}
client = accept(slisten, ...);
This allows you to check for signals or other conditions (such as your mode-change message) once per second. More info here:
http://man7.org/linux/man-pages/man2/select.2.html
and Windows version (pretty much the same):
https://msdn.microsoft.com/pl-pl/library/windows/desktop/ms740141(v=vs.85).aspx

Network Programming Issue - buffer will only send once to the server

I am trying to send a file to a server using socket programming. My server and client connect to each other successfully, and I expect the while loop below to go through the entire file and send it to the server. The issue I am having is that only the first chunk is sent, not the rest.
On the client side I have the following:
memset(szbuffer, 0, sizeof(szbuffer)); // Initialize the buffer to zero
int file_block_size;
// Loop while there are still contents in the file.
while ((file_block_size = fread(szbuffer, sizeof(char), 256, file)) > 0) {
    if (send(s, szbuffer, file_block_size, 0) < 0) {
        throw "Error: failed to send file";
        exit(1);
    }
    memset(szbuffer, 0, sizeof(szbuffer)); // Reset the buffer to zero
}
On the server side I have the following:
while (1)
{
    FD_SET(s, &readfds); // always check the listener
    if (!(outfds = select(infds, &readfds, NULL, NULL, tp))) {}
    else if (outfds == SOCKET_ERROR) throw "failure in Select";
    else if (FD_ISSET(s, &readfds)) cout << "got a connection request" << endl;

    // Found a connection request, try to accept.
    if ((s1 = accept(s, &ca.generic, &calen)) == INVALID_SOCKET)
        throw "Couldn't accept connection\n";

    // Connection request accepted.
    cout << "accepted connection from " << inet_ntoa(ca.ca_in.sin_addr) << ":"
         << hex << htons(ca.ca_in.sin_port) << endl;

    // Fill in szbuffer from the accepted request.
    while (szbuffer > 0) {
        if ((ibytesrecv = recv(s1, szbuffer, 256, 0)) == SOCKET_ERROR)
            throw "Receive error in server program\n";

        // Print receipt of successful message.
        cout << "This is the message from client: " << szbuffer << endl;
        File.open("test.txt", ofstream::out | ofstream::app);
        File << szbuffer;
        File.close();

        // Send the received message back to the client (echo it).
        ibufferlen = strlen(szbuffer);
        if ((ibytessent = send(s1, szbuffer, ibufferlen, 0)) == SOCKET_ERROR)
            throw "error in send in server program\n";
        else cout << "Echo message:" << szbuffer << endl;
    }
} // wait loop
} // try loop
The code above sets up the connection between the client and server, and that part works great: it sits in a constant while loop waiting to receive new requests. The issue is with my buffer. Once I send the first buffer over, the next one doesn't seem to go through. Does anyone know what I can do to make the server receive more than just one buffer? I've tried a while loop but had no luck.
Your code that sends the file to the server appears to send consecutive sections of the file correctly.
Your code that appears to have the intention of receiving the file from the client performs the following steps:
1) Wait for and accept a socket.
2) Read up to 256 bytes from the socket.
3) Write those bytes back to the socket.
At this point the code appears to go back to waiting for another connection, keeping the original connection open and, at least based on the code you posted, leaking the file descriptor.
So, the issue seems to be that the client and the server disagree on what should happen. The client tries to send the entire file and doesn't read from the socket. The server reads the first 256 bytes from the socket and writes them back to the client.
Of course, it's entirely possible that portions of the code not shown implement some of the missing pieces, but there's definitely a disconnect between what the sending side is doing and what the receiving side is doing.
buffer will only send once to the server
No, your server is only reading once from the client. You have to loop, just like the sending loop does.
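A minimal sketch of such a receive loop on the server side, using the Winsock names from the question (note that it writes exactly the bytes received rather than relying on strlen(), since the data need not be NUL-terminated):

char szbuffer[256];
int ibytesrecv;
// Keep reading until the client closes the connection (recv() returns 0).
while ((ibytesrecv = recv(s1, szbuffer, sizeof(szbuffer), 0)) > 0) {
    File.open("test.txt", ofstream::out | ofstream::app);
    File.write(szbuffer, ibytesrecv);  // write the exact byte count received
    File.close();
}
if (ibytesrecv == SOCKET_ERROR)
    throw "Receive error in server program\n";
closesocket(s1);                       // done with this client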

Improving port scanner performance

So I made a port scanner in C++ this morning and it seems to work all right; I'm just having one rather annoying issue with it: whenever I use it to scan an IP over the network, it takes a good 10-20 seconds PER port.
It seems like the connect() method is what's taking it so long.
Now aside from multi-threading, which I'm sure will speed up the process but not by much, how could I make this faster? Here is the section of code that does the scanning:
for (i = 0; i < a_size(port_no); i++)
{
    sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    target.sin_family = AF_INET;
    target.sin_port = htons(port_no[i]);
    target.sin_addr.s_addr = inet_addr(argv[1]);
    if (connect(sock, (SOCKADDR *)&target, sizeof(target)) != SOCKET_ERROR)
        cout << "Port: " << port_no[i] << " - open" << endl;
    else
        cout << "Port: " << port_no[i] << " - closed" << endl;
    closesocket(sock);
}
If you need more, let me know. Oh, also: I am using winsock2.h. Is it because of this that it's so slow?
When you call connect(2), the OS initiates the three-way handshake by sending a SYN packet to the other peer. If no response is received, it waits a bit and retransmits the SYN a few times. If there is still no response after a given timeout, the operation fails and connect(2) returns with the error code ETIMEDOUT.
Ordinarily, if a peer is up but not accepting TCP connections on a given port, it will reply to any SYN packets with a RST packet. This will cause connect(2) to fail much more quickly (one network round-trip time) with the error ECONNREFUSED. However, if the peer has a firewall set up, it'll just ignore your SYN packets and won't send those RST packets, which will cause connect(2) to take a long time to fail.
So, if you want to avoid waiting for that timeout for every port, you need to make multiple connection attempts in parallel. You can do this with multithreading (one synchronous connect(2) call per thread), but that doesn't scale well since threads take up a fair amount of resources.
The better method would be to use non-blocking sockets. To make a socket non-blocking, call fcntl(2) with the F_SETFL command and the O_NONBLOCK flag. Then connect(2) will return immediately with errno set to EINPROGRESS while the handshake continues in the background, at which point you can use select(2) or poll(2) and friends to monitor a large number of sockets at once.
Try creating an array of non-blocking sockets to queue up a bunch of connection attempts at once.
Read about it here
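A minimal POSIX sketch of a single non-blocking attempt (Windows would use ioctlsocket(FIONBIO) instead of fcntl(), as in the self-answer below; the one-second timeout is illustrative):

#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <errno.h>

int sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
fcntl(sock, F_SETFL, O_NONBLOCK);               // connect() now returns immediately

if (connect(sock, (struct sockaddr *)&target, sizeof(target)) < 0
        && errno == EINPROGRESS) {              // handshake continues in background
    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(sock, &wfds);
    struct timeval tv = {1, 0};                 // give the handshake up to 1 second
    if (select(sock + 1, NULL, &wfds, NULL, &tv) > 0) {
        int err = 0;
        socklen_t len = sizeof(err);
        getsockopt(sock, SOL_SOCKET, SO_ERROR, &err, &len);
        // err == 0: port open; err == ECONNREFUSED: port closed
    }
    // select() returning 0 means the attempt timed out (likely filtered)
}
close(sock);

Scanning many ports at once is then a matter of keeping an array of such sockets and passing them all to one select() or poll() call, as suggested above.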
I figured out a solution that works on Windows. First I added:
u_long on = 1;
timeval tv = {0, 1000}; // timeout value: 0 seconds, 1000 microseconds
fd_set fds;
Then I changed the code to look like this:
for (i = 0; i < a_size(port_no); i++)
{
    sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    ioctlsocket(sock, FIONBIO, &on);  // non-blocking: connect() returns at once
    target.sin_family = AF_INET;
    target.sin_port = htons(port_no[i]);
    target.sin_addr.s_addr = inet_addr(argv[1]);
    connect(sock, (SOCKADDR *)&target, sizeof(target));
    // select() modifies the set, so rebuild it for every socket.
    FD_ZERO(&fds);
    FD_SET(sock, &fds);
    err = select(sock + 1, &fds, &fds, &fds, &tv);
    if (err != SOCKET_ERROR && err != 0)
        cout << "Port: " << port_no[i] << " - open" << endl;
    else
        cout << "Port: " << port_no[i] << " - closed" << endl;
    closesocket(sock);
}
and it seems to function much faster now! I will do some work to optimize it and clean it up a bit, but thank you for your input, everyone who responded! :)

Why might bind() sometimes give EADDRINUSE when other side connects?

In my C++ application, I am using ::bind() on a UDP socket, but on rare occasions, after reconnecting due to a lost connection, I get errno EADDRINUSE, even after many retries. The other side of the UDP connection, which receives the data, reconnected fine and is waiting in select() for something to read.
I presume this means the local port is in use. If so, how might I be leaking the local port when the other side can connect to it fine? The real issue here is that the other side connected fine and is waiting, while this side is stuck on EADDRINUSE.
--Edit--
Here is a code snippet showing that I am already using SO_REUSEADDR on my TCP socket, just not on the UDP socket I am having the issue with:
// According to "Linux Socket Programming by Example" p. 319, we must call
// setsockopt() with the SO_REUSEADDR option BEFORE calling bind().
// Make the address reusable so we don't get the nasty message.
int so_reuseaddr = 1; // Enabled.
int reuseAddrResult
    = ::setsockopt(getTCPSocket(), SOL_SOCKET, SO_REUSEADDR, &so_reuseaddr,
                   sizeof(so_reuseaddr));
Here is my code to close the UDP socket when done:
void
disconnectUDP()
{
    if (::shutdown(getUDPSocket(), 2) < 0) {
        clog << "Warning: error during shutdown of data socket("
             << getUDPSocket() << "): " << strerror(errno) << '\n';
    }
    if (::close(getUDPSocket()) < 0 && !seenWarn) {
        clog << "Warning: error while closing data socket("
             << getUDPSocket() << "): " << strerror(errno) << '\n';
    }
}
Yes, that's normal. You need to set SO_REUSEADDR on the socket before you bind, e.g. on *nix:
int sock = socket(...);
int yes = 1;
setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));
If you have separate code that reconnects by creating a new socket, set the option on that one too. This is just the default behaviour of the OS: the port of a broken socket is kept defunct for a while.
[EDIT] This shouldn't apply to UDP connections. Maybe you should post the code you use to set up the socket.
In UDP there's no such thing as a lost connection, because there's no connection. You can lose sent packets, that's all.
Don't reconnect; simply reuse the existing fd.
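A minimal sketch of that pattern (local_addr, peer_addr and the buffer are placeholders): create and bind the UDP socket once, then keep using the same descriptor for every send:

// Create and bind the UDP socket once; reuse the same fd for the
// lifetime of the program instead of closing and re-binding.
int fd = socket(AF_INET, SOCK_DGRAM, 0);
int yes = 1;
setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)); // before bind()
bind(fd, (struct sockaddr *)&local_addr, sizeof(local_addr));

// If a send fails, just try again later on the same fd; there is no
// connection to re-establish and no reason to close and re-bind.
ssize_t n = sendto(fd, buf, buflen, 0,
                   (struct sockaddr *)&peer_addr, sizeof(peer_addr));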