I have an SSL server and an SSL client, almost the same as the Boost SSL example.
Currently:
the server opens (listens), then the client connects and sends data to the server.
When the send is done, the client closes the socket with socket_.lowest_layer().close();
I want to change this to something like a half-close: closed for sending, still open for reading.
The client should signal boost::asio::error::eof before the half-close; the server then catches boost::asio::error::eof and closes the socket from its side.
Is there a better way to do this? Does Boost have a half-close?
TCP half-close (shutdown for output) cannot be used with SSL. SSL sends a close_notify, so as to enable truncation attacks to be detected, and this logically closes the SSL connection in both directions. If you just shutdown the underlying TCP socket yourself, SSL will consider that a truncation attack and make the SSL connection unusable.
I don't understand why you want to change to what you described. SSL already does essentially that itself with the close_notify.
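If what you want is an orderly close, do the SSL shutdown instead of closing the raw socket. A minimal sketch with Boost.Asio, assuming stream_ is a boost::asio::ssl::stream<tcp::socket> as in the Boost examples:

boost::system::error_code ec;
stream_.shutdown(ec);              // sends our close_notify and waits for the peer's
if (ec == boost::asio::error::eof) {
    // The peer closed the TCP connection after the close_notify exchange;
    // the SSL session itself shut down cleanly, so this is usually benign.
    ec.clear();
}
stream_.lowest_layer().close(ec);  // now close the underlying TCP socket

On the server, the client's close_notify surfaces as boost::asio::error::eof on the pending read, which is the cue to shut down and close that end.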
I am in the process of adding client/server UDP support to the thekogans stream library and have run into a problem on Windows. Here is what I am doing (a rough code sketch of the per-client socket setup follows the list):
server udp socket is bound to 0.0.0.0:8854.
server udp socket has IP_PKTINFO = true.
server udp socket has SO_REUSEADDR = true.
server udp socket starts an overlapped WSARecvMsg operation.
client binds to 0.0.0.0:0 and connects to 127.0.0.1:8854.
client sends a message using WSASend.
server socket receives the message and creates a new UDP socket with the following attributes:
SO_REUSEADDR = true
bind to address returned by IP_PKTINFO (127.0.0.1:8854).
connect to whatever address was returned by WSARecvMsg.
client and the new server UDP socket exchange a bunch of messages (using WSASend and WSARecv).
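In code, the per-client socket setup looks roughly like this (Winsock; WSAStartup, the overlapped plumbing, and error handling omitted; the names are illustrative, not the actual thekogans stream code):

#include <winsock2.h>
#include <ws2tcpip.h>

// New per-client socket, created after WSARecvMsg reports a datagram.
SOCKET s = WSASocketW(AF_INET, SOCK_DGRAM, IPPROTO_UDP, NULL, 0, WSA_FLAG_OVERLAPPED);

BOOL reuse = TRUE;
setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (const char *)&reuse, sizeof(reuse));

sockaddr_in local = {};                       // the address IP_PKTINFO reported
local.sin_family = AF_INET;
local.sin_port = htons(8854);
InetPtonA(AF_INET, "127.0.0.1", &local.sin_addr);
bind(s, (sockaddr *)&local, sizeof(local));

connect(s, (sockaddr *)&peer, sizeof(peer));  // peer: source address from WSARecvMsg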
Here is the behavior I am seeing:
the first connection between client and server works flawlessly.
I then have the client exit and restart.
all further packets from the restarted client are dropped.
if I set a timeout on the new server UDP socket (127.0.0.1:8854) and it times out and is closed, then the client can connect again. In other words, the scheme seems to work, but only for one client at a time: as long as the server has a concrete (not wildcard) socket bound to that port, no other client can send it messages.
Some more information that may be helpful: The server is async and uses IOCP. This code (using epoll and kqueue) works perfectly on Linux and OS X. I feel like I am missing some flag somewhere that winsock needs set but I can't seem to find it. I have tried googling various search terms but have hit a wall.
Any and all help would be greatly appreciated. Thank you.
I know that SOCKS 5 supports UDP and I have been over the structure of the packets that are sent/received in negotiating with a SOCKS proxy.
The one thing I am not clear on is the procedure for registering with a proxy to send/receive UDP packets.
Specifically, my biggest question is, "Is the connection to the SOCKS proxy that is used to negotiate a UDP associate relationship still made with TCP/IP?". In other words, "Do you end up using a TCP/IP socket to receive UDP packets routed through a SOCKS proxy?"
I would imagine that, if you used a TCP/IP connection to establish a pathway for UDP communication, you'd kind of be missing the whole point of establishing UDP communications. However, on the other hand, if the negotiation were made using UDP (and resulted in a UDP socket), then how would the relationship be terminated when your application is shutting down and no longer needs the proxy to "remember" you?
I have been all over the net looking for an example...but can't find anything. Any help (especially an example) would be appreciated.
Yes: the UDP ASSOCIATE request is negotiated over a TCP connection to the proxy, and only the datagrams themselves travel over UDP, to the relay address the proxy returns. From https://www.rfc-editor.org/rfc/rfc1928:
"A UDP-based client MUST send its datagrams to the UDP relay server at the UDP port indicated by BND.PORT in the reply to the UDP ASSOCIATE request"
but
"UDP association terminates when the TCP connection that the UDP ASSOCIATE request arrived on terminates."
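In practice the negotiation looks roughly like this (a sketch only; it assumes the SOCKS5 greeting/authentication on the TCP control connection is already done, and all error handling is omitted):

#include <sys/socket.h>

// UDP ASSOCIATE request over the TCP control connection (RFC 1928).
unsigned char req[10] = {
    0x05,        // VER: SOCKS5
    0x03,        // CMD: UDP ASSOCIATE
    0x00,        // RSV
    0x01,        // ATYP: IPv4
    0, 0, 0, 0,  // DST.ADDR: 0.0.0.0 (the address we will send datagrams from,
    0, 0         // DST.PORT: 0        or all zeros if not known yet)
};
send(tcp_fd, req, sizeof(req), 0);

unsigned char rep[10];
recv(tcp_fd, rep, sizeof(rep), 0);
// BND.ADDR/BND.PORT in rep[4..9] (for ATYP IPv4) is where your UDP datagrams
// must go, each one prefixed with the RFC 1928 UDP request header:
//   RSV(2) | FRAG(1) | ATYP(1) | DST.ADDR | DST.PORT | DATA

So, to the termination question: you keep the TCP control connection open for as long as you need the association, and closing it is exactly how the proxy "forgets" you.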
I actually tried using it once, but failed, because many "socks5" proxy implementations don't actually support the complete protocol.
So I'd suggest setting up a working test case first (find an app which supports socks5 UDP proxying, and a proxy where it actually works). Then any network sniffer will tell you how it really works (if it does).
I am writing a proxy server that proxies SSL connections, and it is all working perfectly fine for normal traffic. However, when there is a large file transfer (anything over 20KB), like an email attachment, the connection is reset at the TCP level before the file is finished being written. I am using non-blocking IO, and am spawning a thread for each connection.
When a connection comes in I do the following:
Spawn a thread
Connect to the client (unencrypted) and read the connect request (all other requests are ignored)
Create a secure connection (SSL using openssl api) to the server
Tell the client that we contacted the server (unencrypted)
Create a secure connection to the client, and start proxying data between the two, using a select loop to determine when reading and writing can occur (the relay loop is sketched after this list)
Once the underlying sockets are closed, or there is an error, the connection is closed, and thread is terminated.
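A minimal sketch of one direction of that relay loop (OpenSSL, non-blocking sockets; illustrative only, not the actual proxy code):

#include <openssl/ssl.h>

char buf[16384];
int n = SSL_read(ssl_in, buf, sizeof(buf));
if (n > 0) {
    int off = 0;
    while (off < n) {
        int w = SSL_write(ssl_out, buf + off, n - off);
        if (w > 0) { off += w; continue; }
        int err = SSL_get_error(ssl_out, w);
        if (err == SSL_ERROR_WANT_READ || err == SSL_ERROR_WANT_WRITE)
            continue;        // should really return to select() before retrying
        break;               // real error: tear down both directions
    }
}

One classic large-transfer bug in this kind of loop is mishandling SSL_ERROR_WANT_READ/SSL_ERROR_WANT_WRITE from SSL_write: the retry must pass the same buffer and length, or OpenSSL fails the connection once the transfer is big enough to fill the socket buffers.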
Like I said, this works great for normal-sized data (regular webpages and other things) but fails as soon as a file is too large, with either an error code (depending on the webapp being used) or an "Error: Connection Interrupted".
I have no idea what is causing the connection to close, whether it's something TCP, HTTP, or SSL specific, and I can't find any information on it at all. In some browsers it will start to work if I put a sleep statement immediately after the SSL_write, but this seems to cause other issues in other browsers. The sleep doesn't have to be long, really just a delay. I currently have it set to 4ms per write, and 2ms per read, and this fixes it completely in older firefox, chrome with HTTP uploads, and opera.
Any leads would be appreciated, and let me know if you need any more information. Thanks in advance!
-Sam
If the web-app thinks an uploaded file is too large, what does it do? If it's entitled to just close the connection, that will cause an ECONNRESET ('connection reset') at the sender. Whatever it does, as you're writing a proxy, and assuming there are no bugs in your code that are causing this, your mission is to mirror whatever happens on your upstream connection back down the downstream connection. In this case the answer is to just do what you're doing: close the upstream and downstream sockets. If you got an incoming close_notify from the server, do an orderly SSL close to the client; if you got ECONNRESET, just close the client socket directly, bypassing SSL.
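In code, mirroring the upstream outcome might look roughly like this (a sketch; the flag and descriptor names are mine, not the asker's):

#include <openssl/ssl.h>
#include <unistd.h>

if (upstream_sent_close_notify) {
    SSL_shutdown(client_ssl);   // orderly close: send close_notify to the client
    close(client_fd);
} else {                        // e.g. the upstream read failed with ECONNRESET
    close(client_fd);           // abortive close, bypassing SSL entirely
}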
My application connects as a client across an ethernet to a server process.
As the server is well known and will not change, both UDP and TCP are set up using
socket();
setsockopt(SO_REUSEADDR);
bind();
connect();
The connection protocol includes heartbeats sent both ways.
When I detect an error with the connection, e.g. a heartbeat timeout, I need to reset the connection.
Is it sufficient just to connect() to the NULL address and then re-connect() after a short pause, or should I close the socket and then reinitialise from scratch?
thanks
After a socket error you have to discard the one in hand and restart the setup with a new socket.
Winsock documentation, for example:
"When a connection between sockets is broken, the sockets should be discarded and recreated. When a problem develops on a connected socket, the application must discard and recreate the needed sockets in order to return to a stable point."
You have to close(2) the socket and re-do everything again. Why do you bind(2) on the client?
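A sketch of that recovery path for the TCP socket (assuming server_addr is the already-filled address of the well-known server, with error handling omitted):

#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

close(fd);                               // discard the broken socket

fd = socket(AF_INET, SOCK_STREAM, 0);    // rebuild from scratch
int yes = 1;
setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

// bind() is only needed if your protocol requires a fixed local port;
// otherwise skip it and let connect() pick an ephemeral one.
connect(fd, (struct sockaddr *)&server_addr, sizeof(server_addr));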
I am working on a project where a partner provides a service as a socket server, and I write client sockets to communicate with it. The communication is two-way: I send a request to the server and then receive a response from it.
The problem is that I send the data to the server but apparently the server cannot receive the data.
From my side I just use very simple implementation just like the example from http://www.linuxhowtos.org/C_C++/socket.htm
#include <sys/socket.h>

socket_connect();                               // create the socket and connect() to the server (details omitted)
construct_request_data();                       // build request_data / request_length
send(sockfd, request_data, request_length, 0);  // I set the flags to 0
// now the server should receive my request and send a response to me
recv(sockfd, response_data, response_length, 0);
socket_close();                                 // close() the socket
And it seems that the server socket is implemented with a "binding" to std::iostream, and it is a buffered stream (i.e. the socket send/recv is done in iostream::write/read).
server_socket_io >> receive_data;
server_socket_io << response_data;
Btw, I got a test client from my partner, and it is wrapped in an iostream as well. The test socket client can communicate with the server without problems, but it must do iostream::flush() after every socket send.
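For illustration, the test client does roughly this (the stream name is mine, not the partner's actual code):

server_socket_io << request_data;
server_socket_io.flush();   // without this, the bytes can sit in the stream buffer and never reach the socket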
But I want to keep it simple and not wrap my socket client in an iostream.
I wonder whether the buffered iostream is the cause of the problem: the data my client sends is very small, so perhaps it is still sitting in a buffer, waiting to be processed.
Or could the problem be on my side? How can I tell whether I really sent the data out? Does my client socket buffer the data too?
I tried a "bad" workaround with TCP_NODELAY, but it didn't help!
How can I solve the problem? From the client side, or the server side?
Should I close the socket after sending the request and before receiving the response, so that the data gets "flushed" and processed?
Or should I wrap my socket in an iostream and flush?
Or should the server socket use an "unbuffered" stream?
Thanks for any suggestions and advice!
Further to Jay's answer, you can try any network packet sniffer and check whether your packets are getting to the server or not. Have a look at wireshark or tcpdump.
Let's use "divide and conquer" to solve the problem.
First, does the server work?
From your code, look up the port number that your server is listening on.
Start your server program.
Run the following command line program to see if the server is really listening:
netstat -an -p tcp
It will produce a list of connections. You should see a connection on your selected port when the server is running. Stop the server and run the command again to ensure the port is no longer in use.
Once you've verified the server is listening try to connect to it using the following command:
telnet your-server-address-here your-port-number-here
telnet will print what your server sends to you on the screen and send what you type back to the server.
This should give you some good clues.
I had a similar issue once before. My problem was that I never 'accepted' a connection (TCP) on the server in order to create the stream between server and client. After I accepted the connection on the server side, everything worked as designed.
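That is, the missing step was roughly this (a sketch, not the asker's code):

#include <sys/socket.h>
#include <netinet/in.h>

// After bind() and listen(), the server must accept() each incoming TCP
// connection; reads and writes then use the accepted socket.
listen(listen_fd, SOMAXCONN);
struct sockaddr_in client_addr;
socklen_t len = sizeof(client_addr);
int conn_fd = accept(listen_fd, (struct sockaddr *)&client_addr, &len);
// use conn_fd for recv()/send(), not listen_fd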
You should check the firewall settings for both systems. They may not be passing along your data.