SChannel TLS 1.3 mystery additional message - C++

A while ago I implemented a client and server using SChannel to encrypt communication. Recently I made the required switch from the SCHANNEL_CRED struct to the SCH_CREDENTIALS one so that TLS 1.3 is supported on Windows 11. However, I encountered a situation that my code didn't originally account for, and that I've since resolved but can't explain.
The negotiation flow is as follows:
I call InitializeSecurityContext on the client and get SEC_I_CONTINUE_NEEDED with some data to send to the server (264 bytes for example). This would be the client hello, cipher suites, and key share.
I call AcceptSecurityContext on the server and pass in the received data, getting SEC_I_CONTINUE_NEEDED with some data to send to the client (785 bytes for example). This would be the server hello, key agreement protocol, key share, and an indication that the server has finished.
I call InitializeSecurityContext on the client, pass in the received data, and get SEC_E_OK with some data to send to the server (80 bytes for example). This would be the client finished indication.
At this point I call AcceptSecurityContext on the server, pass in the received data, and would expect to get SEC_E_OK with no data to pass back to the client. Both sides have indicated that they've finished and, by all accounts I've read, the negotiation is complete. However, what actually happens is:
I call AcceptSecurityContext on the server and pass in the received data, getting SEC_E_OK with some data to send to the client (103 bytes for example). I don't know what this message could be.
My original implementation would fail at this point because, once a given side returned SEC_E_OK, I didn't expect the peer to provide it with any more negotiation messages. The client had already returned SEC_E_OK, and yet the server has more data to send it.
At this point I call InitializeSecurityContext on the client with the extra data and get SEC_E_OK with no more data to send to the server. Only now is the negotiation actually complete.
Can anyone explain what this additional message is?

I would have put this in as a comment, but my reputation isn't high enough. I can't tell you what the additional token represents in terms of the TLS protocol, but I can tell you that it's not specific to TLS 1.3 (I haven't done anything with 1.3, and my implementation already allows for this final token), and that it is documented:
SEC_E_OK (0x00000000L)
The function succeeded. The security context received from the client was accepted. If an output token was generated by the function, it must be sent to the client process.
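For what it's worth, here is a minimal sketch of a server-side accept loop that honors that documentation, i.e. it keeps sending the output token even when SEC_E_OK is returned. acceptStep(), sendToPeer(), and recvFromPeer() are hypothetical helpers (acceptStep() would wrap AcceptSecurityContext and return the status plus the bytes of the output SECBUFFER_TOKEN); real code also needs the usual SecBufferDesc setup and error handling:

#include <vector>
#include <windows.h>
#define SECURITY_WIN32
#include <security.h>

struct StepResult {
    SECURITY_STATUS status;      // SEC_E_OK, SEC_I_CONTINUE_NEEDED, ...
    std::vector<BYTE> token;     // output SECBUFFER_TOKEN, possibly empty
};

// Hypothetical helpers, not real SSPI calls:
StepResult acceptStep(const std::vector<BYTE>& input);
void sendToPeer(const std::vector<BYTE>& data);
std::vector<BYTE> recvFromPeer();

void serverHandshake()
{
    for (;;) {
        StepResult r = acceptStep(recvFromPeer());
        if (!r.token.empty())
            sendToPeer(r.token);         // SEC_E_OK can still carry a token
        if (r.status == SEC_E_OK)
            break;                       // done once that final token is sent
        if (r.status != SEC_I_CONTINUE_NEEDED)
            return;                      // handshake failed
    }
}

The client side mirrors this: even after InitializeSecurityContext has returned SEC_E_OK, it must be prepared to feed one more received token back in, as the question describes.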

Related

OpenSSL BIO and SSL_read

In our client/server application, we use TLS over TCP for messaging. After a while, a message shift occurs between the applications (messages are sent and received in the correct order at the beginning), i.e. the client sends the 1000th message to the server but receives the response to the 999th. The suspect is the client side, where we implement the TCP and TLS layers independently, i.e. we do not bind the TCP socket to the SSL object (via SSL_set_fd()) but use BIOs instead. When the client app gets a response from the server (we're pretty sure the message is processed correctly by the server, the client TCP layer receives it correctly, etc.), the message is forwarded to the SSL layer. The client app first writes the message to the BIO:
BIO_write (readBio, data, length);
Then in another function of SSL layer, the message is read using SSL_read():
res = SSL_read (ssl, buffer, length);
The read operation completes successfully, but my goal is to check whether there are further records to be read from the BIO. I considered using SSL_pending(), but it seems that this should be used to check whether there are still bytes in the SAME record. If our suspicions are correct, I would like to check whether there is another record in the BIO so that all messages are processed without delay. Can you help me with this topic? Thanks in advance.
SSL_pending() tells you if there are data from the current decrypted SSL record which have not yet been read by SSL_read(). BIO_pending() can be used to find out if there are already data in the BIO which have not been processed by the SSL layer. To find out if there are unread data at the socket level, use recv() with MSG_PEEK.
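To make that concrete, here is a minimal sketch of draining every complete record already buffered in the BIO after feeding it network data, under the question's setup (a memory BIO feeding the SSL object); onMessage() is a hypothetical application callback:

#include <openssl/bio.h>
#include <openssl/ssl.h>

void feedAndDrain(SSL* ssl, BIO* readBio, const char* data, int length,
                  void (*onMessage)(const char*, int))
{
    BIO_write(readBio, data, length);   // hand the ciphertext to the SSL layer

    char buffer[4096];
    for (;;) {
        int res = SSL_read(ssl, buffer, sizeof(buffer));
        if (res > 0) {
            onMessage(buffer, res);     // deliver decrypted data
            continue;                   // loop: there may be further records
        }
        if (SSL_get_error(ssl, res) == SSL_ERROR_WANT_READ) {
            // No complete record left. Any bytes still reported by
            // BIO_pending(readBio) belong to a partial record, so wait
            // for more data from the network.
            break;
        }
        break;                          // real code: handle other errors here
    }
}

The point is simply to keep calling SSL_read() until it reports SSL_ERROR_WANT_READ, rather than reading once per network packet.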

Check if server received data after timeout

I made a program that uses several REST APIs of Bitcoin exchanges, e.g. Bitstamp.
There is a function that allows me to do a trade: sell or buy Bitcoin for a specific price. Simplified, you have to call a URL with parameters like this:
https://www.bitstamp.net/api/trade?price=100&amount=1&type=sell
The server then answers in JSON. Example:
{"error":"","message":"Sold 1 BTC # 100$"}
If the trade was successful, my program continues. If it was not, it tries again (depending on the error message).
However, there is one problem. I'm using libcurl for the communication with the server and I set the CURLOPT_TIMEOUT to two seconds. It almost always works, but sometimes I get the following error:
Code #28: Operation timed out after 2000 milliseconds with 0 bytes received
When this happens, my program tries to trade again. But sometimes, despite the timeout, the trade was already made, which means it is done multiple times because my code tries again.
Can I somehow find out if the server at least received all the data? The thing is, if I increase CURLOPT_TIMEOUT to say 10 seconds, and the server does not answer, I have the same problem. So this is not a solution.
I do not know the details of Bitstamp, but here is how HTTP works. The client sends a request to a server and receives a response. In the response, details about success or failure are described (using HTTP status codes). However, if a timeout occurs, the client has no information about its request:
was it sent to the server;
did the server receive it;
if the server received the request, did it manage to process it;
maybe the server processed the request, but sending back the response failed due to network issues.
For that reason, one should not assume that the request was successful, and should resend the request. The problem you have described is certainly possible: the server received the request and processed it, but did not manage to send back the response. For such cases, other more complex protocols should be used; unfortunately, HTTP is not one of them because of its request-response nature.
Perhaps you should check if the given REST API gives some status for the transactions.
You are supposed to wait for the HTTP response to be a little more sure whether your request was successfully processed or not.
If you can access the file descriptor, you can call ioctl() with SIOCOUTQ (Linux) or FIONWRITE (BSD) -- I don't know the Windows equivalent -- to check for unacknowledged sent data at the socket level before totally aborting your connection (see the sketch below).
The problem is that it wouldn't be totally error-free either. Even though TCP is stateful at the transport level, HTTP is stateless at the application level. If your application needs transactional behavior (you are dealing with currency, after all, aren't you?), it should provide a means for that.
All that said, I think two seconds might be too little. If you need speed because of multiple operations or something like that, consider parallelizing your connections.
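A minimal sketch of the SIOCOUTQ check suggested above (Linux only; the constant lives in <linux/sockios.h>), assuming fd is the connected TCP socket underneath the HTTP connection:

#include <sys/ioctl.h>
#include <linux/sockios.h>   // SIOCOUTQ

// Returns the number of bytes in the send queue not yet acknowledged by
// the peer, or -1 on error. Zero means everything we sent reached the
// peer's TCP stack -- which still says nothing about whether the HTTP
// layer processed it.
int unackedBytes(int fd)
{
    int pending = 0;
    if (ioctl(fd, SIOCOUTQ, &pending) == -1)
        return -1;
    return pending;
}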

boost read() never returns even though write executed on server (SSL)

I have a server running in async and a client running in sync.
The client and server do a handshake, and then they do the SSL handshake. The client sends a message to the server; the server reads the message (and I can print it out correctly) and then sends back a response with boost::async_write. The response leaves the server and the read is executed on the client with boost::read(), but the client never returns from the read call. Eventually the request times out and throws an exception (request timed out).
The server is asynchronous and the client is synchronous.
Please note that without SSL everything works correctly, but with SSL the scenario above unfolds. I have verified in Wireshark that the handshake works correctly; both the SSL and TCP handshakes are fine. Plus, when the client sends the first message with boost::write(), the server can decrypt and read it perfectly (boost::read_async). The server sends back the message perfectly (boost::write_async)... but the client for some reason never returns from reading!! (ONLY in the SSL case; plain TCP works correctly.)
Boost version is 1.48... and I am truly puzzled how TCP can work fine while SSL does not (although, as per the scenario above, it has nothing to do with the encryption aspect). Is there something I have to do differently in boost than I currently have?
The issue was that the header of one of the messages I was passing in went out of scope. Declaring a header on the stack of a function and then passing it into an async send will NOT guarantee that the memory of that header has been consumed before the function returns. In the async case, the header has to have a more persistent scope (such as the heap, a member variable, etc.).
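To illustrate the bug described in this answer, here is a minimal sketch (the Header type and its values are made up; modern Boost.Asio spelling):

#include <cstdint>
#include <memory>
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>

using ssl_socket = boost::asio::ssl::stream<boost::asio::ip::tcp::socket>;

struct Header { std::uint32_t length; std::uint32_t type; };

// BROKEN: header is destroyed when sendBroken() returns, but the async
// write completes later and reads the dead stack memory.
void sendBroken(ssl_socket& sock)
{
    Header header{42, 1};
    boost::asio::async_write(sock, boost::asio::buffer(&header, sizeof header),
        [](boost::system::error_code, std::size_t) {});
}   // header's lifetime ends here, likely before the write has happened

// FIXED: the buffer lives as long as the async operation, because the
// completion handler keeps the shared_ptr alive.
void sendFixed(ssl_socket& sock)
{
    auto header = std::make_shared<Header>(Header{42, 1});
    boost::asio::async_write(sock,
        boost::asio::buffer(header.get(), sizeof *header),
        [header](boost::system::error_code, std::size_t) { /* header still alive */ });
}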

Sockets in Linux - how do I know the client has finished?

I am currently trying to implement my own webserver in C++ - not for productive use, but for learning.
I basically open a socket, listen, wait for a connection and open a new socket from which I read the data sent by the client. So far so good. But how do I know the client has finished sending data, and has not simply stopped sending temporarily for some other reason?
My current example: When the client sends a POST-request, it first sends the headers, then two times "\r\n" in a row and then the request body. Sometimes the body does not contain any data. So if the client is temporarily unable to send anything after it sent the headers - how do I know it is not yet finished with its request?
Does this solely depend on the used protocol (HTTP) and it is my task to find this out on the basis of the data I received, or is there something like an EOF for sockets?
If I cannot get the necessary information from the socket, how do I protect my program from faulty clients? (Which I guess I must do regardless, since it might be an attacker rather than a faulty client sending wrong data.) Is my only option to keep reading until either the request is complete by the definition of the protocol or a timeout (defined by me) is reached?
I hope this makes sense.
Btw: Please don't tell me to use some library - I want to learn the basics.
The protocol (HTTP) tells you when the client has stopped sending data. You can't get the info from the socket as the client will leave it open waiting for a response.
As you say, you must guard against errant clients not sending proper requests. Typically in the case of an incomplete request a timeout is applied to the read. If you haven't received anything in 30 seconds, say, then close the socket and ignore it.
For an HTTP POST, there should be a header (Content-Length) saying how many bytes to expect after the end of the headers. If it's a POST and there is no Content-Length, then reject it.
"Does this solely depend on the used protocol (HTTP) and it is my task to find this out on the basis of the data I received,"
Correct. You can find the HTTP spec via Google:
http://www.w3.org/Protocols/rfc2616/rfc2616.html
"or is there something like an EOF for sockets?"
There is, as a socket behaves just like a file... but that's not applicable here, because the client isn't closing the connection; you're sending the reply ON that same connection.
With text-based protocols like HTTP you are at the mercy of the client. Most well-formed POSTs will have a Content-Length, so you know how much data is coming. However, the client can just delay sending the data, or it may have had its Ethernet cable removed, or it may simply hang, in which case that socket sits there indefinitely. If the client disconnects nicely, you will get a socket-closed indication from recv() (a return value of 0).
Most well-designed servers therefore have a receive timeout: if the socket is idle for more than, say, 30 seconds, the server closes it, so resources are not leaked by misbehaving clients.
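Putting these answers together, a minimal sketch of such a read loop (parseContentLength() is a hypothetical helper that extracts the Content-Length value from the header block, returning 0 when the header is absent):

#include <string>
#include <sys/socket.h>
#include <sys/time.h>

bool readRequest(int fd, std::string& request,
                 size_t (*parseContentLength)(const std::string&))
{
    timeval tv{30, 0};   // give up if the client is idle for 30 seconds
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    char buf[4096];
    // 1. Read until the blank line that terminates the headers.
    while (request.find("\r\n\r\n") == std::string::npos) {
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n <= 0) return false;   // timeout, error, or client closed
        request.append(buf, n);
    }
    // 2. Read exactly Content-Length body bytes after the headers.
    size_t headerEnd = request.find("\r\n\r\n") + 4;
    size_t bodyLen = parseContentLength(request);
    while (request.size() - headerEnd < bodyLen) {
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n <= 0) return false;
        request.append(buf, n);
    }
    return true;
}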

Internal working of the recv() socket API

I am working on a TCP client/server application using C++. Third-party libraries are not allowed in this project.
The exchange between client and server uses a well-defined protocol format. Once the client receives a packet, it sends it for parsing. I have a protocol manager which takes care of the parsing activity.
I have the following doubt.
When data arrives at the client from the network, the OS buffers it until the application calls recv().
So if two messages, msg1 and msg2, arrive in the buffer, a call to recv() may return msg1+msg2 together.
This may result in failure of the parsing activity.
My queries:
1. Is the above-mentioned assumption correct?
2. If it is, how can I resolve this issue?
TCP emulates a stream, so in TCP there is no notion of messages. If you want messages, your application has to have some protocol to separate them.
UDP does have messages, so there it is possible to retrieve separate messages.
You can use an LV (Length-Value) protocol for your messages.
Encode the message length in the first (1-4) bytes, then put the message itself.
Something like this, for example: 14"Hello world\0"
In your server, when a client sends something, you'll have to recv() the first (1-4) bytes to get the length, then recv() that many bytes for the message.
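A minimal sketch of the receiving side of that length-value scheme, assuming a 4-byte length prefix in network byte order (the recvAll() loop matters because recv() can return fewer bytes than requested):

#include <cstdint>
#include <vector>
#include <arpa/inet.h>
#include <sys/socket.h>

// Keep calling recv() until exactly len bytes have arrived.
static bool recvAll(int fd, void* buf, size_t len)
{
    char* p = static_cast<char*>(buf);
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return false;   // error or peer closed the connection
        p += n;
        len -= n;
    }
    return true;
}

bool recvMessage(int fd, std::vector<char>& msg)
{
    uint32_t netLen;
    if (!recvAll(fd, &netLen, sizeof netLen)) return false;
    msg.resize(ntohl(netLen));      // prefix is in network byte order
    return recvAll(fd, msg.data(), msg.size());
}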