How to tell libcurl to close idle connections after a period of time?

Is there a way to tell curl to close connections once they become idle for a certain period of time?

Idle connections remain in the connection cache until:
- the connection is reused
- the connection is killed because the cache needs the space
- the cache itself is shut down and destroyed
libcurl only reuses a connection for the first N seconds after it was put into the pool. The default is 118 seconds, and applications can change the timeout with CURLOPT_MAXAGE_CONN.
Connections older than the max age might remain in the pool longer, because the age is only checked on certain occasions, but a connection that has exceeded it is never considered for reuse.
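
As a minimal sketch of adjusting that window with the easy interface (the URL and the 30-second value are just placeholders; CURLOPT_MAXAGE_CONN requires libcurl 7.65.0 or later):

```cpp
#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        // Treat pooled connections older than 30 seconds as stale,
        // instead of the 118-second default.
        curl_easy_setopt(curl, CURLOPT_MAXAGE_CONN, 30L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}
```

Note that this controls when a pooled connection stops being eligible for reuse; it does not actively close the socket the moment the timeout passes.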

Related

Multiple TCP sockets, one stalled

I'm trying to get a starting point for understanding what could cause a socket stall, and I would appreciate any insights any of you might have.
So, the server is a modern dual-socket Xeon (2 × 6 cores @ 3.5 GHz) running Windows Server 2012. In a single process there are 6 blocking TCP sockets with default options, each running on its own thread (not NUMA/core pinned). 5 of them are connected to the same remote server and receive very heavy loads (hundreds of thousands of small ~75-byte msgs per second). The last socket is connected to a different server with a very light send/receive load for administrative messaging.
The problem I ran into was a 5-second stall on the admin messaging socket. Multiple send calls to the socket returned successfully, yet nothing was received from the remote server (a protocol ack should arrive within milliseconds), nor received by the remote admin server, for 5 seconds. It was as if that socket just turned off for a bit. After the 5-second stall passed, all of the acks came in a burst, and afterwards everything continued normally. During this, the other sockets were receiving much higher numbers of messages than normal, yet there was no indication of any interruption or stall, as the data logs showed nothing unusual (light logging, maybe 500 msgs/sec).
From what I understand, the socket send call does not guarantee that data has gone out on the wire, just that the transfer to the TCP stack was successful. So I'm trying to understand the different scenarios that could cause a 5-second stall on the admin socket. Is it possible that, due to the large amount of data being received, the TCP stack was essentially overwhelmed and prioritized the sockets that were most heavily utilized? What other situations could potentially have caused this?
Thank you!
If the sockets are receiving hundreds of thousands of 75-byte messages per second, there is a possibility that the server is at maximum capacity on some resource. Probably not bandwidth: at 100K messages per second the payload is only around 60 Mbps (100,000 × 75 bytes × 8 bits). But it could be CPU utilization.
You should use two tools to understand your problem:
- perfmon, to see utilization of CPU (user and privileged: https://technet.microsoft.com/en-us/library/aa173932(v=sql.80).aspx), memory, bandwidth, and disk queue length. You can also check the number of interrupts and context switches with perfmon.
- A sniffer like Wireshark, to see whether, at the TCP level, data is being transmitted and responses received.
Something else I would do is write a timestamp right after the send call, and right before and after the read call, in the thread in charge of the admin socket (see the sketch below). Maybe it is a coding problem.
The fact that send calls return successfully doesn't mean the data was immediately sent. In TCP, the data is stored in the send buffer, and from there the TCP stack sends it to the other end.
If your system is CPU bound (perfmon will tell you whether this is true), then you should pay attention to the comments written by @EJP; this is something that can happen when the machine is under heavy load. With the tools mentioned above, you can see whether the receive window on the admin socket is closed, or whether the socket read on the admin socket is simply taking a long time.
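
A minimal sketch of the timestamping idea, assuming a blocking request/ack exchange on the admin socket (the function and its callers are hypothetical):

```cpp
#include <chrono>
#include <cstdio>
#include <winsock2.h>   // link with ws2_32.lib

static long long now_ms() {
    using namespace std::chrono;
    return duration_cast<milliseconds>(
        steady_clock::now().time_since_epoch()).count();
}

// Hypothetical wrapper: time the send and the wait for the ack separately,
// so a 5-second gap can be attributed to either our code or the network.
int timed_exchange(SOCKET admin_sock, const char *req, int req_len,
                   char *ack_buf, int ack_buf_len) {
    long long t0 = now_ms();
    int sent = send(admin_sock, req, req_len, 0);
    long long t1 = now_ms();

    int received = recv(admin_sock, ack_buf, ack_buf_len, 0);
    long long t2 = now_ms();

    // If t1 - t0 is tiny but t2 - t1 is ~5000 ms, the stall happened while
    // waiting for the peer, not in the local send path.
    printf("send: %lld ms (ret %d), wait for ack: %lld ms (ret %d)\n",
           t1 - t0, sent, t2 - t1, received);
    return received;
}
```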

What happens to queued FastCGI requests when my server goes down?

I understand that FastCGI queues requests and acts on them one by one. I was wondering what would happen if there are multiple requests queued, and for some reason my server goes down. Will it still remember the requests and continue acting on them when the server springs back up or will I lose all those queued up requests?
You will lose the queued requests. They are held in memory, not on disk.
Unless you have documentation for your FastCGI application saying otherwise, I would assume that when the OS or hardware fails or shuts down, the requests in process will be lost. If you want to be certain, you can set up a test where some requests are queued, and then shut down or unplug as needed to simulate the situation you want to test.

How long does a winsock connection last?

I have a program in C++ using winsock that connects to a server. The users will need to send data to this server periodically over a very long span of time (perhaps weeks without the need to reconnect).
I have found plenty of documentation on timeouts when establishing a connection, but I am trying to find out how long the connection lasts after it has been established. Does the connection last until either program is shut down? Can I connect, then wait two hours to send something?
There's no explicit connection lifetime limitation (at least in TCP). The connection lasts until one of the following happens:
- Either endpoint (application) shuts it down (the connection may actually linger in a half-closed state)
- An intermediate entity decides to terminate it (such as a firewall or NAT)
In "real-world" internet connections are usually shut down forcibly after some period of time, especially if there's no data sent. Besides of this, depending on the protocol, some servers refuse to keep the connection open for indefinite time (such as http servers).
In conclusion: there's no generic way to discover the lifetime of the connection. You're completely in the hands of the firewalls, proxies (if applicable), and the server behalf.
Sending some data periodically (such as keep-alive messages) usually help. It also helps to detect that the connection has been silently terminated.
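
As one hedged example of such keep-alives at the TCP level, winsock lets you enable and tune keep-alive probes per socket with SIO_KEEPALIVE_VALS (the 30-second/5-second values below are just illustrative choices):

```cpp
#include <winsock2.h>
#include <mstcpip.h>    // tcp_keepalive, SIO_KEEPALIVE_VALS

// Enable TCP keep-alive on an established connection so that long idle
// periods are bridged by probes, and a silently dropped connection is
// detected instead of hanging forever.
bool enable_keepalive(SOCKET s) {
    tcp_keepalive ka;
    ka.onoff = 1;                     // turn keep-alive on
    ka.keepalivetime = 30 * 1000;     // first probe after 30 s of idle (ms)
    ka.keepaliveinterval = 5 * 1000;  // retry probes every 5 s (ms)

    DWORD bytes_returned = 0;
    return WSAIoctl(s, SIO_KEEPALIVE_VALS, &ka, sizeof(ka),
                    nullptr, 0, &bytes_returned, nullptr, nullptr) == 0;
}
```

An application-level heartbeat (a small message your protocol already allows) achieves the same goal, and also helps where intermediaries only reset their idle timers on real data.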

Process terminated, but its network resource remained

My program has a TCP server and always has several long-lived connections. Sometimes I close the program without closing all the connections, and when I then run netstat -ano on the command line, amazingly all the connections remain in the ESTABLISHED state, with a PID that doesn't exist in Task Manager! Restarting the network card doesn't help. The only solution is to log out/log in or restart the computer. Has anybody ever run into this problem?
These may be sockets in a 'half-closed' state.
They usually disappear after some timeout, which may be pretty long (from 5 to 30 minutes), depending on your system.
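
To avoid leaving such orphaned entries behind, the usual fix is to close every connection explicitly before the process exits. A minimal sketch, assuming the server keeps its client sockets somewhere it can walk them at shutdown:

```cpp
#include <winsock2.h>

// Gracefully close all tracked client sockets before exiting, so the TCP
// stack can complete the close handshake rather than leaving ESTABLISHED
// entries owned by a dead PID.
void close_all_connections(SOCKET *sockets, int count) {
    for (int i = 0; i < count; ++i) {
        shutdown(sockets[i], SD_BOTH);  // stop sending and receiving
        closesocket(sockets[i]);        // release the socket handle
    }
    WSACleanup();                       // balance the initial WSAStartup
}
```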

Can Winsock connections randomly fail?

I have a blocking client/server pair connected locally via Winsock. The client uses Firefox to retrieve data from websites, passing certain data along to the server for extra processing. The server always responds, and the processing can take anywhere from a tenth of a second to a few minutes. The client has no winsock connection to anything but the server; all web data is retrieved to the hard drive via Firefox.
This setup works quite well until, seemingly at random, the client's recv returns -1 (SOCKET_ERROR) with error code 10054 (WSAECONNRESET). This means the server supposedly terminated the connection, but the server is actually still waiting in recv as if nothing is wrong. The connection has failed this way as early as 5 minutes in, or after working for as long as about an hour and a half. The client sends about 10 different types of requests to the server, and the failure has occurred on a variety of them. The frequency of requests is roughly constant, averaging perhaps 10-15 a minute. When the connection breaks, neither computer experiences internet problems and Remote Desktop does not disconnect.
Initially I suspected memory leaks, but after extensive debugging I am reasonably certain none remain. Firefox is engaged in considerable HTTP traffic at times, so I thought maybe that could be filling the available socket buffer space or something; it seems doubtful, but at this point I'm really not sure. So, could it be more memory leaks, maybe a hidden buffer overrun, too much web traffic? What is causing my Winsock app to randomly fail?
Sounds like a firewall at work.
Many firewalls are configured to terminate idle connections (i.e. open TCP sessions on which no data has been transferred for a while), especially HTTP connections, which are typically not persistent.
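
On the client side, a hedged sketch of coping with such resets: detect WSAECONNRESET on recv and re-establish the session (the reconnect step is hypothetical and depends on your protocol):

```cpp
#include <winsock2.h>
#include <cstdio>

// If a firewall kills the idle session, the next recv fails with
// WSAECONNRESET even though the server saw nothing; detect it and recover.
int recv_with_reset_handling(SOCKET &s, char *buf, int len) {
    int n = recv(s, buf, len, 0);
    if (n == SOCKET_ERROR && WSAGetLastError() == WSAECONNRESET) {
        fprintf(stderr, "connection reset by peer, reconnecting\n");
        closesocket(s);
        // s = reconnect_to_server();  // hypothetical re-dial helper
    }
    return n;
}
```

Keeping the link non-idle (periodic heartbeats, as in the keep-alive sketch earlier) usually prevents the reset in the first place.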