I'm writing a server application that uses boost::asio and libssl via its boost::asio integration. On an incoming connection the handshake succeeds the first time, but after the connection is dropped and the client tries to connect again, the handshake fails with the error:
session id context uninitialized
A solution proposed elsewhere is to set the SSL_OP_NO_TICKET option when the SSL context is initialized. I'm using it as follows:
m_sslContext.set_options(SSL_OP_NO_TICKET);
At first this resolved the problem, but now the error appears again even though the option is still set. Does anybody have an idea what else can be done about this problem?
I found that when the problem arose, I still had an old connection to the same remote endpoint that was trying to connect again. Once I dropped the old connection properly, the problem was gone.
I got exactly the same error with client certificate verification enabled.
The solution was to create a separate ssl_context for every connection, unlike in the boost.asio examples.
One thing to note: the SSL stream shutdown never completed in my case; it would just hang indefinitely, perhaps because the client didn't implement it correctly.
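A hedged sketch of the per-connection context approach (assumes a reasonably recent Boost; the certificate paths and session-id string are placeholders):

```cpp
#include <memory>
#include <boost/asio/ssl.hpp>

namespace ssl = boost::asio::ssl;

// One fresh context per accepted connection, instead of a single shared
// context as in the boost.asio examples. Paths are placeholders.
std::unique_ptr<ssl::context> make_context() {
    auto ctx = std::make_unique<ssl::context>(ssl::context::tls_server);
    ctx->set_options(ssl::context::default_workarounds | SSL_OP_NO_TICKET);
    ctx->use_certificate_chain_file("server.crt");
    ctx->use_private_key_file("server.key", ssl::context::pem);
    // Alternative fix for the original error: keep one shared context but
    // give it a session-id context so resumed sessions can be matched:
    // SSL_CTX_set_session_id_context(ctx->native_handle(),
    //     reinterpret_cast<const unsigned char*>("myapp"), 5);
    return ctx;
}
```

Each `ssl::stream` is then constructed against its own context rather than a shared one.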
In my application (c++) I have a service exposed as:
rpc foo(stream Request) returns (Reply) {}
The issue is that when the server goes down (Ctrl-C), the stream on the client side keeps going; indeed,
grpc::ClientWriter::Write
doesn't return false. Using netstat I can confirm there is no connection between the client and the server (apart from a TIME_WAIT one that goes away after a while), yet the client keeps calling Write without errors.
Is there a way to see whether the underlying connection is still up instead of relying on the Write return value? I'm using gRPC version 1.12.
Update:
I discovered that the underlying channel goes into the IDLE state, but ClientWriter::Write still doesn't report an error; I don't know whether this is intended. During streaming I now try to reestablish the connection to the server whenever the channel state is not GRPC_CHANNEL_READY.
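A sketch of that reconnect check (the helper name and deadline handling are my own, not from the question or gRPC):

```cpp
#include <chrono>
#include <memory>
#include <grpcpp/grpcpp.h>

// Illustrative helper: kick an IDLE channel back into reconnecting and
// wait until it reaches READY again, up to a timeout.
bool EnsureReady(const std::shared_ptr<grpc::Channel>& channel,
                 std::chrono::seconds timeout) {
    // try_to_connect=true moves an IDLE channel back to CONNECTING.
    if (channel->GetState(/*try_to_connect=*/true) == GRPC_CHANNEL_READY)
        return true;
    return channel->WaitForConnected(
        std::chrono::system_clock::now() + timeout);
}
```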
This can happen in a few scenarios, but the most common element is a connection issue. We have KEEPALIVE support in gRPC to tackle exactly this problem. For C++, please refer to https://github.com/grpc/grpc/blob/master/doc/keepalive.md for how to set it up. Essentially, endpoints send pings at certain intervals and expect a reply within a certain timeframe.
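A sketch of the client-side channel arguments described in keepalive.md (the interval values here are illustrative, not recommendations):

```cpp
#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>

// Build a channel whose transport sends HTTP/2 keepalive pings, so a dead
// TCP connection is detected instead of Write silently succeeding.
std::shared_ptr<grpc::Channel> MakeChannel(const std::string& target) {
    grpc::ChannelArguments args;
    args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, 10000);           // ping every 10 s
    args.SetInt(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 5000);         // wait 5 s for the ack
    args.SetInt(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);  // ping even when idle
    return grpc::CreateCustomChannel(
        target, grpc::InsecureChannelCredentials(), args);
}
```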
I have a server running in async and a client running in sync.
The client and server do a handshake, and then the SSL handshake. The client sends a message to the server; the server reads the message (and I can print it out correctly) and then sends back a response with boost::asio::async_write. The response leaves the server and the client executes its reads with boost::asio::read(), but the client never returns from the read call. Eventually the request times out and throws an exception (request timed out).
Please note that without SSL everything works correctly; with SSL the scenario above unfolds. I have verified in Wireshark that both the TCP and SSL handshakes complete correctly. Also, when the client sends the first message with boost::asio::write(), the server can decrypt and read it perfectly (boost::asio::async_read), and the server sends back the response perfectly (boost::asio::async_write)... but the client for some reason never returns from reading! (Only in the SSL case; plain TCP works correctly.)
The Boost version is 1.48 ... and I am truly puzzled that TCP works fine while SSL does not (though, as per the scenario above, it seems to have nothing to do with the encryption itself). Is there something I have to do differently in Boost than I currently have?
The issue was that the header of one of the messages I was passing in had gone out of scope. Declaring a header on the stack of a function and passing it into an async send does NOT guarantee the header's memory survives until the operation completes. In the async case the header must have a more persistent scope (heap allocation, a member variable, etc.).
There are many questions about this issue, but none addresses my case specifically, and I have yet to find a valid explanation of the error itself:
The underlying connection was closed: The connection was closed unexpectedly
In our situation we are making a call to a 3rd-party API via SSL. On my local PC I can connect to that API, make a request, and get a response back, but on an IIS production server I get this error. The API uses OAuth to authenticate.
What exactly does it mean? Is the request leaving our server and being rejected by the remote server, or is it not even leaving our server because our system is preventing the request?
Some more information, in case anyone may know what the issue is:
No known changes to any networking, servers, routing, security (apparently)
No code changes recently
According to our own internal logging, the issue started as an occasional 403 Forbidden error, then we saw a number of "Cannot connect to remote server" errors. Eventually it failed with "The underlying connection was closed: The connection was closed unexpectedly."
Can someone please explain what the actual error means? If anyone has experienced this in a similar situation and can shed some light, that would be greatly appreciated.
The underlying connection was closed: The connection was closed unexpectedly
This just says that something (probably the remote end) closed the TCP connection which underlies the SSL connection. Usually an SSL alert should be sent back on SSL-related errors, but some stacks close the connection instead. It might also be that the peer does not expect SSL at all and closes the connection because the data looks invalid.
On my local PC I can connect to that API make a request and get a response back, but on an IIS Production server I get this error.
It is hard to say what the problem might be, but if this is not only the same API but also the same server, then the problem must be related to differences in the clients. These can be differences in supported ciphers, TLS versions, client certificates, etc. between machines. If it is not even the same server, you should first rule out a server-side problem by contacting the non-working server with the working client.
It is also a good idea to take a packet capture (Wireshark) and compare the handshakes.
More detailed problem analysis is only possible if you provide more details about the problem; see http://noxxi.de/howto/ssl-debugging.html#hdr2.2 for what information might be useful.
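Besides a packet capture, a quick way to compare the two client machines is to run the same TLS handshake by hand from each of them (the hostname below is a placeholder for the 3rd-party API endpoint):

```shell
# Shows negotiated protocol, cipher, and certificate chain from this machine;
# run on both the working PC and the IIS server and diff the output.
openssl s_client -connect api.example.com:443 -servername api.example.com </dev/null
```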
I'm currently trying to write a simple websocket server that sends the full client list to every connected client whenever somebody joins or leaves the websocket connection pool.
Currently I'm using (R)Lock and Unlock to guard concurrent access to the pool and avoid any interference between the connections.
Whenever I access the connection pool I lock it, for both reads and writes, but for some strange reason I get a broken pipe error ONLY when I stress the websocket by opening 100 concurrent connections at once and then ending them all.
From the looks of it the error occurs right after removing the client and broadcasting the new client list.
Can you figure out why it fails to send the connection pool to each client when somebody loses a connection? Keep in mind this only happens when I stress-test the websocket with a for loop that creates 100 connections; the failure occurs when they are all ended.
I added a note where it fails.
As a side note: if you have a better way to send the connection pool than looping through it and storing the UUIDs in a string array, feel free to mention that as well, but right now I'm mainly focused on debugging this problem, as I'd like to figure out where I'm failing.
Edit: Forgot to add source:
Websocket source: https://gist.github.com/anonymous/eaaf2e5430ed694bc424
Stress Test source: https://gist.github.com/anonymous/92ad79ffee1afdfd3382
It turns out the error only seems to be a problem when it is caught in broadcastMessage (refer to the websocket source).
As you can see, I caught the error from WriteMessage in the broadcastMessage function.
I'm not entirely sure why it isn't a problem when not catching it, but I'll create another question about that.
Thanks anyway to those who took the time to read about the problem I was having!
For those who are interested, here is the post https://stackoverflow.com/questions/26235760/golang-websocket-broken-pipe-error-only-when-catching-sending-message-to-clien
I am writing a proxy server that proxies SSL connections, and it all works perfectly fine for normal traffic. However, when there is a large file transfer (anything over 20 KB), such as an email attachment, the connection is reset at the TCP level before the file is finished being written. I am using non-blocking I/O and spawning a thread for each connection.
When a connection comes in I do the following:
Spawn a thread
Connect to the client (unencrypted) and read the CONNECT request (all other requests are ignored)
Create a secure connection (SSL using openssl api) to the server
Tell the client that we contacted the server (unencrypted)
Create secure connection to client, and start proxying data between the two using a select loop to determine when reading and writing can occur
Once the underlying sockets are closed, or there is an error, the connection is closed, and thread is terminated.
Like I said, this works great for normal-sized data (regular web pages and the like) but fails as soon as a file is too large, with either an error code (depending on the web app being used) or an "Error: Connection Interrupted".
I have no idea what is causing the connection to close, whether it's something TCP-, HTTP-, or SSL-specific, and I can't find any information on it at all. In some browsers it starts to work if I put a sleep statement immediately after the SSL_write, but this seems to cause other issues in other browsers. The sleep doesn't have to be long, really just a delay; I currently have it set to 4 ms per write and 2 ms per read, and this fixes it completely in older Firefox, in Chrome with HTTP uploads, and in Opera.
Any leads would be appreciated, and let me know if you need any more information. Thanks in advance!
-Sam
If the web app thinks an uploaded file is too large, what does it do? If it's entitled to just close the connection, that will cause an ECONNRESET ("connection reset") at the sender. Whatever it does, as you're writing a proxy, and assuming no bugs in your own code are causing this, your mission is to mirror whatever happens on the upstream connection back down the downstream connection. In this case the answer is to do just what you're doing: close the upstream and downstream sockets. If you get an incoming close_notify from the server, do an orderly SSL close to the client; if you get ECONNRESET, just close the client socket directly, bypassing SSL.
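With the OpenSSL API, that close-mirroring might look like this sketch (`up`, `down`, and `down_fd` are placeholders for the proxy's own connection state):

```c
#include <unistd.h>
#include <openssl/ssl.h>

/* Mirror the upstream close downstream. `read_ret` is the non-positive
 * return value from the failed SSL_read on the upstream connection. */
void mirror_close(SSL *up, SSL *down, int down_fd, int read_ret) {
    switch (SSL_get_error(up, read_ret)) {
    case SSL_ERROR_ZERO_RETURN:    /* upstream sent close_notify */
        SSL_shutdown(down);        /* orderly SSL close to the client */
        break;
    case SSL_ERROR_SYSCALL:        /* hard close, e.g. ECONNRESET */
    default:
        close(down_fd);            /* bypass SSL, close the socket */
        break;
    }
}
```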