I have a blocking client/server pair connected locally via Winsock. The client uses Firefox to retrieve data from websites, passing certain data along to the server for extra processing. The server always responds, and the processing can take anywhere from a tenth of a second to a few minutes. The client has no Winsock connection to anything but the server; all web data is retrieved to the hard drive via Firefox.
This setup works quite well until, seemingly at random, the client's recv returns -1 (SOCKET_ERROR) with error code 10054 (WSAECONNRESET). This means the server supposedly terminated the connection, yet the server is actually still blocked in recv as if nothing is wrong. The connection has failed this way as early as 5 minutes in, and after working for as long as about an hour and a half. The client sends about 10 different types of requests to the server, and the failure has occurred on a variety of them. The request rate is roughly constant, averaging perhaps 10-15 per minute. When the connection breaks, neither computer experiences internet problems and Remote Desktop does not disconnect.
Initially I suspected memory leaks, but after extensive debugging I am reasonably certain none remain. Firefox generates considerable HTTP traffic at times, so I wondered whether that could be exhausting available socket buffer space or something similar; it seems doubtful, but at this point I'm really not sure. So: could it be more memory leaks, a hidden buffer overrun, too much web traffic? What is causing my Winsock app to fail at random?
Sounds like a firewall at work.
Many firewalls are configured to terminate idle connections, i.e. open TCP sessions on which no data has been transferred for a while. This is especially likely for HTTP connections, which are typically not persistent.
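If an idle-timeout firewall is the culprit, enabling TCP keep-alive on the client socket is a common mitigation: the stack then emits periodic probes, so the connection never looks idle. A minimal Winsock sketch, assuming an already-connected socket s (the interval values are illustrative, not recommendations):

    // Sketch: turn on TCP keep-alive with explicit timings via SIO_KEEPALIVE_VALS.
    #include <winsock2.h>
    #include <mstcpip.h>   // tcp_keepalive, SIO_KEEPALIVE_VALS

    bool enableKeepAlive(SOCKET s)
    {
        tcp_keepalive ka;
        ka.onoff             = 1;          // enable keep-alive probes
        ka.keepalivetime     = 60 * 1000;  // first probe after 60 s idle (ms)
        ka.keepaliveinterval = 10 * 1000;  // re-probe every 10 s if unanswered (ms)
        DWORD bytesReturned = 0;
        return WSAIoctl(s, SIO_KEEPALIVE_VALS, &ka, sizeof(ka),
                        NULL, 0, &bytesReturned, NULL, NULL) == 0;
    }

Keep-alive probes also give you earlier detection of a dead peer: recv will fail promptly instead of blocking forever.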
I'm trying to get a starting point on understanding what could cause a socket stall, and would appreciate any insights you might have.
So, the server is a modern dual-socket Xeon (2 x 6 cores @ 3.5 GHz) running Windows Server 2012. In a single process there are 6 blocking TCP sockets with default options, each running on its own thread (no NUMA/core affinity is specified). Five of them are connected to the same remote server and receive very heavy loads (hundreds of thousands of small ~75-byte messages per second). The last socket is connected to a different server with a very light send/receive load, used for administrative messaging.
The problem I ran into was a 5-second stall on the admin messaging socket. Multiple send calls on the socket returned successfully, yet nothing was received from the remote server (a protocol ACK should arrive within milliseconds), nor was anything received BY the remote admin server, for 5 seconds. It was as if that socket simply turned off for a bit. After the 5-second stall passed, all of the ACKs came in a burst, and afterwards everything continued normally. During this, the other sockets were receiving much higher message volumes than normal, yet there was no indication of any interruption or stall; the data logs showed nothing unusual (light logging, maybe 500 msgs/sec).
From what I understand, the socket send call does not guarantee that data has gone out on the wire, only that the transfer to the TCP stack succeeded. So I'm trying to understand the different scenarios that could cause a 5-second stall on the admin socket. Is it possible that, due to the large amount of data being received, the TCP stack was essentially overwhelmed and prioritized the most heavily utilized sockets? What other situations could have caused this?
Thank you!
If the sockets are receiving hundreds of thousands of 75-byte messages per second, there is a possibility that the server is saturating some resource. Probably not bandwidth: 100K such messages per second amounts to only about 60 Mbps of payload (before TCP/IP overhead). But it could be CPU utilization.
You should use two tools to understand your problem:
perfmon, to see the utilization of CPU (user and privileged: https://technet.microsoft.com/en-us/library/aa173932(v=sql.80).aspx), memory, bandwidth, and disk queue length. You can also check the number of interrupts and context switches with perfmon.
A sniffer like Wireshark, to see whether, at the TCP level, data is being transmitted and responses received.
Something else I would do is log a timestamp right after the send call, and right before and after the read call, in the thread in charge of the admin socket. Maybe it is a coding problem.
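As a rough illustration of that instrumentation (the wrapper names and logging format are hypothetical):

    // Sketch: timestamp the admin socket's blocking send/recv calls so a stall
    // can be attributed to the send side, the wire, or the read side.
    #include <winsock2.h>
    #include <chrono>
    #include <cstdio>

    static long long nowMs()
    {
        using namespace std::chrono;
        return duration_cast<milliseconds>(
            steady_clock::now().time_since_epoch()).count();
    }

    int timedSend(SOCKET s, const char* buf, int len)
    {
        int n = send(s, buf, len, 0);
        std::printf("[%lld ms] send(%d bytes) returned %d\n", nowMs(), len, n);
        return n;
    }

    int timedRecv(SOCKET s, char* buf, int len)
    {
        std::printf("[%lld ms] recv waiting...\n", nowMs());
        int n = recv(s, buf, len, 0);
        std::printf("[%lld ms] recv returned %d\n", nowMs(), n);
        return n;
    }

If the 5 seconds elapse between "recv waiting" and "recv returned", the delay is in the network or the peer; if they elapse before your code even calls recv, it is a scheduling or coding problem on your side.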
The fact that send calls return successfully doesn't mean the data was immediately sent. In TCP, data is first stored in the send buffer, and from there the TCP stack sends it to the other end.
If your system is CPU bound (perfmon will show whether this is true), then you should pay attention to the comments written by @EJP; this is something that can happen when the machine is under heavy load. With the tools I mentioned, you can see whether the receive window on the admin socket is closed, or whether the socket read on the admin socket is simply taking a long time.
I have a C++ program using Winsock that connects to a server. Users will need to send data to this server periodically over a very long span of time (perhaps weeks without the need to reconnect).
I have found plenty of documentation on timeouts when establishing a connection, but I am trying to find out how long the connection lasts after it has been established. Does the connection last until either program is shut down? Can I connect, then wait two hours before sending something?
There's no explicit limit on connection lifetime (at least in TCP). The connection lasts until one of the following happens:
Either endpoint (application) shuts it down (the connection may actually linger in a half-closed state)
An intermediate entity decides to terminate the connection (a firewall, NAT, etc.)
In "real-world" internet connections are usually shut down forcibly after some period of time, especially if there's no data sent. Besides of this, depending on the protocol, some servers refuse to keep the connection open for indefinite time (such as http servers).
In conclusion: there's no generic way to discover the lifetime of a connection. You're completely in the hands of the firewalls, the proxies (if applicable), and the server's behavior.
Sending some data periodically (keep-alive messages) usually helps. It also lets you detect that the connection has been silently terminated.
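For example, a minimal application-level heartbeat might look like the sketch below (the one-byte ping message and the interval are assumptions; your protocol must define or tolerate such a message):

    // Sketch: send a tiny heartbeat whenever the connection has been idle,
    // both to keep middleboxes happy and to detect a silently dead peer.
    #include <winsock2.h>

    // Returns false when the peer is gone and the caller should reconnect.
    bool sendHeartbeat(SOCKET s)
    {
        const char ping = 0;   // hypothetical "ping" byte in your protocol
        return send(s, &ping, 1, 0) != SOCKET_ERROR;
    }

Call it from a timer whenever no real data has been sent for, say, a minute; a failure (e.g. WSAECONNRESET) tells you the connection was dropped and it is time to reconnect.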
I am writing a proxy server that proxies SSL connections, and it all works perfectly fine for normal traffic. However, when there is a large file transfer (anything over 20KB), like an email attachment, the connection is reset at the TCP level before the file finishes being written. I am using non-blocking IO and spawning a thread for each connection.
When a connection comes in I do the following:
Spawn a thread
Connect to the client (unencrypted) and read the CONNECT request (all other requests are ignored)
Create a secure connection (SSL using openssl api) to the server
Tell the client that we contacted the server (unencrypted)
Create a secure connection to the client, and start proxying data between the two, using a select loop to determine when reading and writing can occur (a rough sketch of this relay step follows the list)
Once the underlying sockets are closed, or there is an error, the connection is closed and the thread is terminated.
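For reference, the relay step in point 5 might look roughly like this, simplified here to blocking sockets (the question uses non-blocking IO, where SSL_ERROR_WANT_READ/WANT_WRITE must additionally be handled); all names are illustrative:

    // Sketch: shuttle bytes between two established OpenSSL sessions.
    #include <openssl/ssl.h>
    #include <winsock2.h>

    // Read one chunk from 'from' and write all of it to 'to'.
    // Returns false when either side closed or errored.
    static bool pump(SSL* from, SSL* to)
    {
        char buf[4096];
        int n = SSL_read(from, buf, sizeof(buf));
        if (n <= 0)
            return false;
        for (int off = 0; off < n; ) {       // SSL_write may take fewer bytes
            int w = SSL_write(to, buf + off, n - off);
            if (w <= 0)
                return false;
            off += w;
        }
        return true;
    }

    void relayLoop(SSL* sslClient, SOCKET fdClient, SSL* sslServer, SOCKET fdServer)
    {
        for (;;) {
            // Drain bytes OpenSSL already buffered; select() cannot see them.
            if (SSL_pending(sslClient)) { if (!pump(sslClient, sslServer)) return; continue; }
            if (SSL_pending(sslServer)) { if (!pump(sslServer, sslClient)) return; continue; }

            fd_set rd;
            FD_ZERO(&rd);
            FD_SET(fdClient, &rd);
            FD_SET(fdServer, &rd);
            if (select(0, &rd, NULL, NULL, NULL) == SOCKET_ERROR)
                return;
            if (FD_ISSET(fdClient, &rd) && !pump(sslClient, sslServer)) return;
            if (FD_ISSET(fdServer, &rd) && !pump(sslServer, sslClient)) return;
        }
    }

One classic bug in this pattern is treating a short SSL_write as a full one; bytes dropped mid-transfer would corrupt exactly the large uploads described here.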
Like I said, this works great for normal-sized data (regular web pages and other things) but fails as soon as a file is too large, with either an error code (depending on the web app being used) or an "Error: Connection Interrupted".
I have no idea what is causing the connection to close, whether it's something TCP-, HTTP-, or SSL-specific, and I can't find any information on it at all. In some browsers it will start to work if I put a sleep statement immediately after the SSL_write, but this seems to cause other issues in other browsers. The sleep doesn't have to be long, really just a delay. I currently have it set to 4ms per write and 2ms per read, and this fixes it completely in older Firefox, in Chrome with HTTP uploads, and in Opera.
Any leads would be appreciated, and let me know if you need any more information. Thanks in advance!
-Sam
If the web app thinks an uploaded file is too large, what does it do? If it's entitled to just close the connection, that will cause an ECONNRESET ('connection reset') at the sender. Whatever it does, as you're writing a proxy, and assuming there are no bugs in your code that are causing this, your mission is to mirror whatever happens on your upstream connection back down the downstream connection. In this case the answer is to do just what you're doing: close the upstream and downstream sockets. If you got an incoming close_notify from the server, do an orderly SSL close to the client; if you got ECONNRESET, just close the client socket directly, bypassing SSL.
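In OpenSSL terms, distinguishing those two cases on the upstream read might look like this sketch (names hypothetical):

    // Sketch: after SSL_read() on the upstream (server) side returns n <= 0,
    // mirror the kind of closure down to the client side.
    #include <openssl/ssl.h>
    #include <winsock2.h>

    void mirrorClose(SSL* sslUpstream, int n, SSL* sslClient, SOCKET clientSock)
    {
        if (SSL_get_error(sslUpstream, n) == SSL_ERROR_ZERO_RETURN) {
            // Orderly close_notify from the server: close the client cleanly.
            SSL_shutdown(sslClient);
        }
        // Otherwise it was an abortive close (e.g. connection reset): skip
        // SSL_shutdown so the client sees an abrupt close rather than a clean one.
        closesocket(clientSock);
    }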
System Background:
It's basically a client/server application. The server is an embedded device and the client is a Windows app developed in C++.
Issue: After a runtime of about a week, communication breaks between the client and server; because of this, the server is not able to connect back to the client and needs a restart to recover. It looks like the system is experiencing a socket re-connection problem, and the network also suffers intermittent failures. Suspected causes:
Abrupt termination at the remote end
Port locking
I'd like some suggestions on how to clean up the socket, or shut it down cleanly, so that re-connection happens properly. Are there other alternative solutions?
Thanks,
Hussain
It does not sound like you are in a position to easily write a stress-test app to reproduce this more quickly out of band, which is what I would normally suggest. A pragmatic solution might be to periodically restart the server and client at a time when you think the system is least busy, or when problems arise. This sounds like cheating, but many production systems I have been involved with take this approach to maximize system uptime.
My preferred solution here would be to abstract the server and client socket code (hopefully your design allows this to be done without too much work) and use it to implement client and server test apps that stress only the socket code, simulating a lot of normal socket traffic in a short space of time. This helps identify timing windows and edge cases that could cause problems over time, and might speed up the process of obtaining a debuggable repro. You can simulate network errors in your test code by periodically dropping the socket on the client or server.
A further step on the strategic front would be to ensure that you have good diagnostics in your socket handlers on both the client and server sides. Track socket opens and closes, with special focus on your socket error and reconnect paths, given that you know the network is unreliable. Make sure the logs are output sequentially with timestamps. Something as simple as this might quickly show you which errors or conditions trigger your problems, and you can verify that the logs are correct and complete using the test apps mentioned above.
One thing you might want to check is that you are not being hit by an inability to reuse addresses. Sometimes when a socket is closed, it cannot be immediately reused for a reconnect attempt because there is still residual activity at one or other end. You may be able to get around this (based on my Windows/Winsock experience) by experimenting with SO_REUSEADDR and SO_LINGER on your sockets. However, my first focus in your case would be on ensuring the socket code on client and server handles all errors and mainline cases correctly, before worrying about this.
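On the SO_LINGER point: a linger timeout of zero turns close into an abortive close (an RST on the wire), which frees the local port immediately instead of leaving it in TIME_WAIT; note that it also discards any unsent data, so it is a last resort rather than a default. A sketch, assuming a connected socket s:

    // Sketch: force an abortive close so the port skips TIME_WAIT.
    // Caution: any data still queued in the send buffer is discarded.
    linger lin;
    lin.l_onoff  = 1;   // enable linger behaviour
    lin.l_linger = 0;   // ...with a zero timeout => RST on close
    setsockopt(s, SOL_SOCKET, SO_LINGER, (const char*)&lin, sizeof(lin));
    closesocket(s);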
A common issue is that when a connection is dropped, the OS keeps it open in the TIME_WAIT state. If you then want to restart the server socket, it cannot reopen the same port directly, because as far as the OS is concerned the port is still in use.
To avoid that, set the SO_REUSEADDR option so that the OS allows a server socket to bind the port even while it is in the TIME_WAIT state.
Example:
    int optval = 1;
    // Set SO_REUSEADDR on the socket to true (1).
    // Note: Winsock's setsockopt takes the option value as a char pointer.
    setsockopt(s1, SOL_SOCKET, SO_REUSEADDR, (const char *)&optval, sizeof(optval));
I'm experiencing something similar with encrypted connections. I believe in my case it is because the client dropped the connection and reconnected within the 4-minute TIME_WAIT period. The initial connection is recycled (by the OS) and the server doesn't see the dropout. The SSL session is lost when the client loses the connection, so the client tries to re-authenticate, during what the server considers the middle of a conversation. The server then hangs up on the client. I think the server's SSL code considers this a man-in-the-middle attack, or just gets confused, and closes the connection.
I need a little help if someone's got a minute.
I've written a web server using IO completion ports, but I am having some trouble sending out large files. Web pages seem to load fine, but during large file transfers, WSASend() fails after a few minutes with the error "The specified network name is no longer available."
Right now, my server just closes the associated connection when any overlapped operation fails. Is this the right thing to do, or should I retry failed overlapped operations a few times before closing the socket? I am using TCP stream sockets.
(fixed) I am also receiving what seem like random 0-byte packets from WSARecv. I am not sure what to make of this, or whether the problem is related. (/fixed)
Thanks for any help
edit: now that the server properly handles connections, and has a much more comprehensive log, it seems like Len is right. The client is closing the connection for some reason.
The log:
Initializing Windows Sockets...
Forwarding port 80...
Starting server...
Waiting for incoming connections...
Socket 1128: Client connected.
Socket 1128: Request received
Socket 1128: Sent response
Socket 1128: Error 64: SendChunk() failed. //WSASend()
Socket 1128: Closing connection - GetQueueCompletionStatus == FALSE
So the question now is: why would the client close the connection? It takes anywhere from 2-5 minutes to happen. I have decreased the buffer size to 4098 bytes per send, and only send the next chunk when the previous one has completed.
Thanks again for any ideas on this.
p.s. I even just implemented a retry function so that it will retry a failed overlapped IO operation five times before giving up... still no luck =(
A zero-length result returned from recv indicates that the client on the other end has closed the connection.
That answers why your subsequent send to the client failed.
http://www.opengroup.org/onlinepubs/009695399/functions/recv.html
If no messages are available to be received and the peer has performed an orderly shutdown, recv() shall return 0.
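In a plain blocking loop the distinction looks roughly like this (with overlapped WSARecv, the equivalent signal is a completion with zero bytes transferred):

    // Sketch: tell an orderly peer shutdown (0) apart from an error (-1).
    #include <winsock2.h>
    #include <cstdio>

    void readLoop(SOCKET s)
    {
        char buf[4096];
        for (;;) {
            int n = recv(s, buf, sizeof(buf), 0);
            if (n > 0) {
                // ... process n bytes ...
            } else if (n == 0) {
                std::printf("peer closed the connection (orderly shutdown)\n");
                break;
            } else {
                std::printf("recv failed: %d\n", WSAGetLastError());
                break;
            }
        }
        closesocket(s);
    }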
Are you doing anything to impose some form of flow control on your data transmission?
If not, then you are probably exhausting resources, which is causing the send to fail.
For example, if you are simply issuing LOTS of WSASend() calls one after the other, rather than pacing them based on when they complete, then each one will use system resources (non-paged pool and/or locked pages, which count towards the 'locked pages limit'). You'll then likely fail eventually with ENOBUFS or similar errors.
What you need to do is build a flow control system that works off of the send completions so that you only ever have a known number of sends outstanding at a time.
See these questions for more detail:
Implement a good performing "to-send" queue with TCP
Limiting TCP sends with a "to-be-sent" queue and other design issues
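As a rough illustration of such a scheme (all names and the cap of 4 are illustrative, and error/cleanup paths are elided):

    // Sketch: cap overlapped WSASend() calls in flight per connection; extra
    // buffers wait in a queue and are issued from the completion handler.
    #include <winsock2.h>
    #include <deque>
    #include <vector>

    struct SendOp {
        WSAOVERLAPPED ov{};        // first member: recoverable from OVERLAPPED*
        std::vector<char> buf;     // owns the bytes until the send completes
    };

    struct Connection {
        SOCKET sock;
        int pendingSends = 0;
        std::deque<std::vector<char>> sendQueue;
        static const int kMaxPendingSends = 4;
    };

    void trySend(Connection& c)
    {
        while (c.pendingSends < Connection::kMaxPendingSends && !c.sendQueue.empty()) {
            SendOp* op = new SendOp();
            op->buf = std::move(c.sendQueue.front());
            c.sendQueue.pop_front();
            WSABUF wsaBuf = { (ULONG)op->buf.size(), op->buf.data() };
            DWORD sent = 0;
            if (WSASend(c.sock, &wsaBuf, 1, &sent, 0, &op->ov, NULL) == SOCKET_ERROR
                && WSAGetLastError() != WSA_IO_PENDING) {
                delete op;             // real failure: close the connection elsewhere
                return;
            }
            ++c.pendingSends;
        }
    }

    // Call when the IOCP loop dequeues this operation's completion.
    void onSendComplete(Connection& c, SendOp* op)
    {
        delete op;                     // safe now: the stack is done with the buffer
        --c.pendingSends;
        trySend(c);                    // keep the pipe full, but bounded
    }

Because each buffer stays alive until its completion dequeues, non-paged pool and locked-page usage stay bounded no matter how fast the application produces data.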
Finally figured it out.
from Rogers Internet Terms of Service:
Without limitation, you may not use (or allow anyone else to use) our Services to:
(xvi) operate a server in connection with the Services, including, without limitation, mail, news, file, gopher, telnet, chat, Web, or host configuration servers, multimedia streamers or multi-user interactive forums;
how lame is that? O_o
good news: server works fine =)
edit: called Rogers. They verified that they are cutting me off, and told me that I need a business account to run a web server.