C++/Qt: QTcpSocket won't write after reading

I am creating a network client application that sends requests to a server using a QTcpSocket and expects responses in return. No higher-level protocol (HTTP, etc.) is involved; they just exchange fairly simple custom strings.
In order to test, I have created a TCP server in Python that listens on a socket and logs the strings it receives and those it sends back.
I can send the first request OK and get the expected response. However, when I send the second request, it does not seem to get written to the network.
I have attached debug slots to the QTcpSocket's notification signals, such as bytesWritten(...), connected(), error(), stateChanged(...), etc. and I see the connection being established, the first request sent, the first response processed, the number of bytes written - it all adds up...
Only the second request never seems to get sent :-(
After attempting to send it, the socket sends an error(RemoteHostClosedError) signal followed by ClosingState and UnconnectedState state change signals.
Before I go any deeper into this, a couple of (probably really basic) questions:
do I need to "clear" the underlying socket in any way after reading ?
is it possible / probable that not reading all the data the server has sent me prevents me from writing ?
why does the server close the connection ? Does it always do that so quickly or could that be a sign that something is not right ? I tried setting LowDelay and KeepAlive socket options, but that didn't change anything. I've also checked the socket's state() and isValid() and they're good - although the latter also returns true when unconnected...
In an earlier version of the application, I closed and re-opened the connection before sending a request. This worked ok. I would prefer keeping the connection open though. Is that not a reasonable approach ? What is the 'canonical' way to implement TCP network communication ? Just read/write or re-open every time ?
Does the way I read from the socket have any impact on how I can write to it ? Most sample code uses readAll(...) to get all available data; I read piece by piece as I need it and write with << to a QTextStream (see the simplified sketch below)...
Could this possibly be a bug in the Qt event loop ? I have observed that the output in the Qt Creator console created with QDebug() << ... almost always gets cut short, i.e. just stops. Sometimes some more output is printed when I shut down the application.
This is with the latest Qt 5.4.1 on Mac OS X 10.8, but the issue also occurs on Windows 7.
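To make the QTextStream question above concrete, the write side currently looks roughly like this (a simplified sketch only; the identifiers and the trailing newline are illustrative, not my real code):

#include <QString>
#include <QTcpSocket>
#include <QTextStream>

// Simplified sketch of the write path; names are illustrative only.
void sendRequest(QTcpSocket *socket, const QString &request)
{
    QTextStream stream(socket);
    stream << request << "\n";  // terminate the request so a line-based server can read it
    stream.flush();             // QTextStream buffers internally; flush() hands the text to the socket
}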
Update after the first answer and comments:
The test server is dead simple and was taken from the official Python SocketServer.TCPServer Example:
import SocketServer

class MyTCPHandler(SocketServer.StreamRequestHandler):

    def handle(self):
        request = self.rfile.readline().strip()
        print "RX [%s]: %s" % (self.client_address[0], request)
        response = self.processRequest(request)
        print "TX [%s]: %s" % (self.client_address[0], response)
        self.wfile.write(response)

    def processRequest(self, message):
        if message == 'request type 01':
            return 'response type 01'
        elif message == 'request type 02':
            return 'response type 02'

if __name__ == "__main__":
    server = SocketServer.TCPServer(('localhost', 12345), MyTCPHandler)
    server.serve_forever()
The output I get is
RX [127.0.0.1]: request type 01
TX [127.0.0.1]: response type 01
Also, nothing happens when I re-send any message after this - which is not surprising as the socket was closed. Guess I'll have to figure out why it is closed...
Next update:
I've captured the network traffic using Wireshark and while all the network stuff doesn't really tell me a lot, I do see the first request and the response. Right after the client [ACK]nowledges the response, the server sends a Connection finish (FIN). I don't see the second request anywhere.
Last update:
I have posted a follow-up question at Python: SocketServer closes TCP connection unexpectedly.

Only the second request never seems to get sent :-(
I highly recommend running a program like Wireshark and seeing what packets are actually getting sent and received across the network. (As it is, you can't know for sure whether the bug is on the client side or the server side, and that is the first thing you need to figure out.)
do I need to "clear" the underlying socket in any way after reading ?
No.
is it possible / probable that not reading all the data the server has
sent me prevents me from writing ?
No.
why does the server close the connection ?
It's impossible to say without looking at the server's code.
Does it always do that so quickly or could that be a sign that
something is not right ?
Again, this would depend on how the server was written.
This worked ok. I would prefer keeping the connection open though. Is
that not a reasonable approach ?
Keeping the connection open is definitely a reasonable approach.
What is the 'canonical' way to implement TCP network communication ? Just read/write or re-open every time ?
Neither way is canonical; it depends on what you are attempting to accomplish.
Does the way I read from the socket have any impact on how I can write
to it ?
No.
Could this possibly be a bug in the Qt event loop ?
That's extremely unlikely. The Qt code has been used for years by tens of thousands of programs, so any bug that serious would almost certainly have been found and fixed long ago. It's much more likely that either there is a bug in your client, or a bug in your server, or a mismatch between how you expect some API call to behave and how it actually behaves.
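That said, keeping one QTcpSocket connected and reusing it for many request/response exchanges is perfectly normal. A rough sketch of that pattern (the newline framing and the class layout are assumptions, not taken from your code):

#include <QObject>
#include <QTcpSocket>

// Sketch of a client that connects once and reuses the socket for many
// request/response exchanges; adapt the framing to whatever your server expects.
class Client : public QObject
{
    Q_OBJECT
public:
    explicit Client(QObject *parent = nullptr) : QObject(parent)
    {
        connect(&m_socket, &QTcpSocket::readyRead, this, &Client::onReadyRead);
        m_socket.connectToHost("localhost", 12345);   // connect once, keep the connection open
    }

    void sendRequest(const QByteArray &request)
    {
        m_socket.write(request + "\n");               // one newline-terminated request (assumption)
    }

private slots:
    void onReadyRead()
    {
        while (m_socket.canReadLine())                // consume complete responses as they arrive
            handleResponse(m_socket.readLine().trimmed());
    }

private:
    void handleResponse(const QByteArray &response)
    {
        Q_UNUSED(response);
        // ... hand the response to the rest of the application ...
    }

    QTcpSocket m_socket;
};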

Related

how to keep UDP socket connection open between 2 hosts

I'm working on a simple chatroom based on C++ and UDP, and I'm using this as a base. Every time the client and server say "hello" to each other, both of them end their processes and nothing else. I'd like to keep the socket open after that exchange so I can keep sending data, but I haven't found a way to do so. How do I do such a thing? I haven't found much info on what I need, so any help is appreciated. Thanks in advance.
You don't need to send a pulse or a heartbeat to keep the socket open. The socket will remain open as long as the program is running or you call close on it.
You can wrap your send and receive in an infinite loop, but you should note that the example code you linked to is far too simple for a chat client: you will need to handle errors such as the underlying connection going offline (for example, the interface being disconnected or brought down, in which case the send and recv calls will return an error with an associated errno). You should look into using the select, poll and epoll system calls to detect errors and deal with them, as in the sketch below.
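A minimal receive loop built around select() might look roughly like this (a sketch only, assuming a plain POSIX UDP socket, sockfd, that has already been created and bound):

#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <cstdio>

// Sketch: wait for data with select(), notice errors, and keep looping until
// something goes wrong. sockfd is assumed to be an already-set-up UDP socket.
void receiveLoop(int sockfd)
{
    char buf[2048];
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(sockfd, &readfds);

        timeval timeout = {5, 0};                      // wake up every 5 seconds even when idle
        int ready = select(sockfd + 1, &readfds, nullptr, nullptr, &timeout);
        if (ready < 0) {
            std::perror("select");                     // e.g. the interface went down
            break;
        }
        if (ready == 0)
            continue;                                  // timeout: nothing to read yet

        ssize_t n = recv(sockfd, buf, sizeof(buf), 0);
        if (n < 0) {
            std::perror("recv");                       // inspect errno and decide how to recover
            break;
        }
        // ... handle the n received bytes in buf (e.g. display the chat message) ...
    }
}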

ZMQ - Client Server: Client is powered off unexpectedly, how server detects it?

Multiple clients are connected to a single ZMQ_PUSH socket. When a client is powered off unexpectedly, the server does not get an alert and keeps sending messages to it. Despite using ZMQ_OBLOCK and setting ZMQ_HWM to 5 (queue at most 5 messages), my server doesn't get an error until the client is reconnected and all the messages in the queue are received at once.
I recently ran into a similar problem when using ZMQ. We would cut power to interconnected systems, and the subscriber would be unable to reconnect automatically. It turns out that a heartbeat mechanism has recently (within the past year or so) been implemented over ZMTP, the underlying protocol used by ZMQ sockets.
If you are using ZMQ version 4.2.0 or greater, look into setting the ZMQ_HEARTBEAT_IVL and ZMQ_HEARTBEAT_TIMEOUT socket options (http://api.zeromq.org/4-2:zmq-setsockopt). These will set the interval between heartbeats (ZMQ_HEARTBEAT_IVL) and how long to wait for the reply until closing the connection (ZMQ_HEARTBEAT_TIMEOUT).
EDIT: You must set these socket options before connecting.
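For reference, with the plain libzmq C API that might look roughly like this (a sketch only; the interval, timeout and endpoint are placeholders, not recommendations):

#include <zmq.h>

// Sketch: enabling ZMTP heartbeats on a ZMQ_PUSH socket with libzmq >= 4.2.
int main()
{
    void *ctx  = zmq_ctx_new();
    void *push = zmq_socket(ctx, ZMQ_PUSH);

    int heartbeat_ivl     = 1000;   // send a PING every 1000 ms
    int heartbeat_timeout = 3000;   // drop the peer if no reply arrives within 3000 ms
    zmq_setsockopt(push, ZMQ_HEARTBEAT_IVL, &heartbeat_ivl, sizeof(heartbeat_ivl));
    zmq_setsockopt(push, ZMQ_HEARTBEAT_TIMEOUT, &heartbeat_timeout, sizeof(heartbeat_timeout));

    // Set the options *before* bind/connect, as noted above.
    zmq_bind(push, "tcp://*:5555");

    // ... send messages as usual ...
    zmq_close(push);
    zmq_ctx_term(ctx);
}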
There is nothing in zmq explicitly to detect the unexpected termination of a program at the other end of a socket, or the gratuitous and unexpected failure of a network connection.
There has been historical talk of adding some kind of underlying ping-pong are-you-still-alive internal messaging to zmq, but last time I looked (quite some time ago) it had been decided not to do this.
This does mean that crashes, network failures, etc. aren't necessarily handled very cleanly, and your application will not necessarily know what is going on or whether messages have been successfully sent. It is the Actor model, after all. As you're finding, your program may only eventually determine that something had previously gone wrong. Timeouts in ZMTP will spot the failure, and eventually the consequences bubble back up to your program.
To do anything better you'd have to layer something like a ping-pong on top yourself (eg have a separate socket just for that so that you can track the reachability of clients) but that then starts making it very hard to use the nice parts of ZMQ such as push / pull. Which is probably why the (excellent) zmq authors decided not to put it in themselves.
When faced with a similar problem I ended up writing my own transport library. I couldn't find one off the shelf that gave nice behaviour in the face of network failures, crashes, etc. It implemented CSP, not actor model, wasn't terribly fast (an inevitability), didn't do patterns in the zmq sense, but did mean that programs knew exactly where messages were at all times, and knew that clients were alive or unreachable at all times. The CSPness also meant message transfers were an execution rendezvous, so programs know what each other is doing too.

interface is down but netstat still shows the connection established? [duplicate]

When I'm using e.g. PuTTY and my connection gets lost (or when I do a manual ipconfig /release on Windows), it responds directly and notifies my connection was lost.
I want to create a Java program which monitors my Internet connection (to some reliable server), to log the date/times when my internet fails.
I tried using the Socket.isConnected() method but that will just forever return "true". How can I do this in Java?
Well, the best way to tell whether your connection is interrupted is to try to read from or write to the socket. If the operation fails, then you have lost your connection at some point.
So, all you need to do is to try reading at some interval, and if the read fails try reconnecting.
The important events for you will be when a read fails - you lost connection, and when a new socket is connected - you regained connection.
That way you can keep track of up time and down time.
Even though TCP/IP is "connection oriented" protocol, normally no data is sent over an idle connection. You can have a socket open for a year without a single bit sent over it by the IP stack. In order to notice that a connection is lost, you have to send some data on the application level.(*) You can try this out by unplugging the phone cable from your ADSL modem. All connections in your PC should stay up, unless the applications have some kind of application level keepalive mechanism.
So the only way to notice lost connection is to open TCP connection to some server and read some data from it. Maybe the most simple way could be to connect to some FTP server and fetch a small file - or directory listing - once in a while. I have never seen a generic server which was really meant to be used for this case, and owners of the FTP server may not like clients doing this.
(*) There is also a mechanism called TCP keepalive, but its probe timing usually comes from system-wide settings with a very long default (two hours), so it is not really practical if you want to notice loss of connection quickly.
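For what it's worth, turning keepalive on is a per-socket switch; in BSD-socket terms it is the one-liner sketched below (Java exposes the same switch as Socket.setKeepAlive(true)), even though the probe timing is still largely governed by the system-wide settings mentioned above.

#include <sys/socket.h>

// Sketch: enable TCP keepalive probes on one connected socket.
bool enableKeepAlive(int sockfd)
{
    int on = 1;
    return setsockopt(sockfd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) == 0;
}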
If the client disconnects properly, read() will return -1, readLine() returns null, and the other readXXX() methods throw EOFException. The only reliable way to detect a lost TCP connection is to write to it. Eventually this will throw an IOException 'connection reset', but it takes at least two writes due to buffering.
Why not use the isReachable() method of the java.net.InetAddress class?
How this works is JVM implementation specific but:
A typical implementation will use ICMP ECHO REQUESTs if the privilege can be obtained, otherwise it will try to establish a TCP connection on port 7 (Echo) of the destination host.
If you want to keep a connection open continually so you can see when it fails, you could connect to a server running the ECHO protocol yourself, rather than having isReachable() do it for you, and read and write data and wait for it to fail.
You might want to try looking at the socket timeout interval. With a short timeout (I believe the default is 'infinite timeout') then you might be able to trap an exception or something when the host becomes unreachable.
Okay so I finally got it working with
try
{
    Socket s = new Socket("stackoverflow.com", 80);
    DataOutputStream os = new DataOutputStream(s.getOutputStream());
    DataInputStream is = new DataInputStream(s.getInputStream());
    while (true)
    {
        os.writeBytes("GET /index.html HTTP/1.0\n\n");
        is.available();
        Thread.sleep(1000);
    }
}
catch (IOException e)
{
    System.out.println("connection probably lost");
    e.printStackTrace();
}
Not as clean as I hoped, but it doesn't work if I leave out the os.writeBytes() call.
You could ping a machine every few seconds, and this would be pretty accurate. Be careful that you don't DoS it.
Another alternative would be run a small server on a remote machine and keep a connection to it.
It's probably simpler to connect to yahoo/google or somewhere similar.
URL yahoo = new URL("http://www.yahoo.com/");
URLConnection yc = yahoo.openConnection();
int dataLen = yc.getContentLength();
Neil
The isConnected() method inside the Socket.java class is a little misleading. It does not tell you whether the socket is currently connected to a remote host (i.e. whether it is still open). Instead, it tells you whether the socket has ever been connected to a remote host. If the socket was able to connect to the remote host at all, this method returns true, even after that socket has been closed. To tell if a socket is currently open, you need to check that isConnected() returns true and isClosed() returns false.
For example:
boolean connected = socket.isConnected() && !socket.isClosed();

Forced server-side socket close without SO_LINGER > 0 can lose data, right?

I'm writing a cross-platform client application that uses sockets, written in C++. I'm having problems where the server is doing a hard close on the socket when it's done sending me info.
I've been reading other posts on this topic, and I'm not so much interested in the rights or wrongs of this approach, but it seems the server is either explicitly setting SO_LINGER=0, or that's the default behavior on that system (not sure; it's a Linux box).
I can see (in Wireshark) that the data was sent to me, followed within milliseconds by an RST, indicating a hard close by the server. I personally don't agree with this approach, as it should be up to the client to shut down the socket.
The server team are saying there's nothing wrong with that approach (doing a hard close rather than a shutdown); it's typical on servers to avoid accumulating TIME_WAIT sockets. On Windows my select() returns, indicating there's something to read (while I haven't read any of this "in transit" data yet).
However, because of the quick arrival of the RST, on Windows recv() returns -1 and I'm seeing a 10054 for the error code (connection reset by peer). This wouldn't be too bad if I could at least get the data that was sent, but it seems that once my client's socket stack sees the RST any unread bytes are no longer made available to me.
On Linux (client), there's no problem. It seems the TCP stack is behaving slightly differently, in that I can read the outstanding bytes before the RST is honoured. I'm having trouble convincing the server guys they have a bug, given that it works for a Linux client.
First off, am I correct? Is this a server-side issue? I can't see that the client end is doing anything wrong, so it must be right?
It seems the server team are adamant that they want to perform the close, and they don't want to have TIME_WAIT sockets, so I was going to push for them to add a SO_LINGER of, say, 2 seconds. Does that sound like it will solve my problem? From what I understand, this will stop the server from sending out an RST so soon after sending data, and should give me a chance to read the outstanding bytes.
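(For reference, what I'd be asking them to set is roughly the following, in BSD-socket terms; the 2-second value is just my suggestion, not something from their code.)

#include <sys/socket.h>

// Sketch: a positive SO_LINGER makes close() block for up to l_linger seconds
// while the stack tries to deliver queued data, rather than aborting immediately.
void setLinger(int sockfd, int seconds)
{
    linger lin;
    lin.l_onoff  = 1;        // linger on close()
    lin.l_linger = seconds;  // e.g. 2
    setsockopt(sockfd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));
}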
Found a definitive answer to my own question:
"...Upon reception of RST segment, the receiving side will immediately abort the connection. This statement has more implications than just meaning that you will not be able to receive or send any more data to/from this connection. It also implies that any unread data still in the TCP reception buffer will be lost..." It cites the book "TCP/IP Internetworking Volume II". I don't have that book, so I can only take his word for it. Doesn't seems to discard data on Linux, only Windows...
Olivier Langlois's blog
The side-effect of fiddling with SO_LINGER to force a reset is that all pending data is lost. The fact that you don't receive it is all the proof you need that the server team is wrong to do this.
RFC 793 cited below says 'this command [ABORT] causes all pending SENDs and RECEIVEs to be aborted, ... and a special RESET message to be sent to the TCP on the other side of the connection.' See also W.R. Stevens, TCP/IP Illustrated, Vol. 1, p. 287: 'Aborting a connection provides two features to the application: (1) any queued data is thrown away and the reset is sent immediately, and (2) the receiver of the RST can tell that the other end did an abort instead of a normal close'. There is similar wording, along with an extract from the BSD code that implements it, in Vol. 2.
The TIME_WAIT state only occurs on a socket which sends a FIN before it has received one: see RFC 793. So the server should be waiting for a FIN from the client, with a suitable timeout, rather than resetting. This will also permit the client to do connection pooling.
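Concretely, the server side could do something like this after writing its response (a rough sketch in BSD-socket terms; the timeout value is arbitrary):

#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

// Sketch: wait (with a timeout) for the client's FIN before closing, so the
// TIME_WAIT ends up on the client side and no data is thrown away by an RST.
void waitForClientCloseThenClose(int sockfd)
{
    timeval tv = {5, 0};                                         // "suitable timeout", e.g. 5 seconds
    setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    char buf[256];
    ssize_t n;
    while ((n = read(sockfd, buf, sizeof(buf))) > 0)
        ;                                                        // discard any trailing data from the client

    close(sockfd);                                               // n == 0 means the client sent its FIN
}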

"Specified network name is no longer available" in Httplistener

I have built a simple web service that simply uses HttpListener to receive and send requests. Occasionally, the service fails with "Specified network name is no longer available". It appears to be thrown when I write to the output buffer of the HttpListenerResponse.
Here is the error:
ListenerCallback() Error: The specified network name is no longer available at System.Net.HttpResponseStream.Write(Byte[] buffer, Int32 offset, Int32 size)
and here is the guilty portion of the code. responseString is the data being sent back to the client:
buffer = System.Text.Encoding.UTF8.GetBytes(responseString);
response.ContentLength64 = buffer.Length;
output = response.OutputStream;
output.Write(buffer, 0, buffer.Length);
It doesn't always seem to be a huge buffer; two examples are 3,816 bytes and 142,619 bytes, and these errors were thrown about 30 seconds apart. I would not think that my single client application would be overloading HttpListener; the client does occasionally send/receive data in bursts, with several exchanges happening one after another.
Google searches mostly show that this is a common IT problem in which this error appears when there are network issues; most of the help is directed toward sysadmins diagnosing a problem with an app rather than developers tracking down a bug. My app has been tested on different machines, networks, etc., and I don't think it's simply a network configuration problem.
What may be the cause of this problem?
I'm getting this too, when a ContentLength64 is specified and KeepAlive is false. It seems as though the client is inspecting the Content-Length header (which, by all possible accounts, is set correctly, since I get an exception with any other value) and then saying "Whelp I'm done KTHXBYE" and closing the connection a little bit before the underlying HttpListenerResponse stream was expecting it to. For now, I'm just catching the exception and moving on.
I've only gotten this particular exception once so far when using HttpListener.
It occurred when I resumed execution after my application had been standing on a breakpoint for a while.
Perhaps there is some sort of internal timeout involved? Your application sends data in bursts, which means it's probably completely inactive a lot of the time. Did the exception occur immediately after a period of inactivity?
Same problem here, but other threads suggest ignoring the Exception.
C# problem with HttpListener
Maybe that's not the right thing to do.
For me, I find that whenever the client closes the webpage before it has fully loaded, I get that exception. What I do is just add a try/catch block and print something when the exception happens. In other words, I just ignore the exception.
The problem occurs when you're trying to respond to an invalid request. Take a look at this. I found out that the only way to solve this problem is:
listener = new HttpListener();
listener.IgnoreWriteExceptions = true;
Just set IgnoreWriteExceptions to true after instantiating your listener and the errors are gone.
Update:
For a deeper explanation: the HTTP protocol is based on TCP, which works with streams to which each peer writes data. TCP is peer to peer, and each peer can close the connection. When the client sends a request to your HttpListener, there is a TCP handshake; the server then processes the data and responds to the client by writing into the connection's stream. If you try to write into a stream that has already been closed by the remote peer, the "Specified network name is no longer available" exception occurs.