I want to pass an SSL socket (along with its SSL session) to another process. Is this possible?
In the non-SSL socket implementation, I use WSADuplicateSocket (a Windows API) to get the socket info and then send it to another process, which creates a duplicated socket.
How can I do this with an SSL socket? What information do I have to pass to the second process so it can create the duplicated socket and continue the SSL session from the first process? Once the socket is passed to the second process, the first process will close its socket handle.
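For reference, the standard WSADuplicateSocket flow for a plain socket looks roughly like this (a minimal sketch; error handling and the IPC used to transfer the WSAPROTOCOL_INFO blob between the processes are omitted):

    // Process A: fill a WSAPROTOCOL_INFO describing the socket for the
    // target process (dwTargetPid). The blob is plain data, so it can be
    // sent to process B over any IPC channel (pipe, shared memory, ...).
    WSAPROTOCOL_INFOW info;
    if (WSADuplicateSocketW(sock, dwTargetPid, &info) != 0) {
        // handle WSAGetLastError()
    }

    // Process B: rebuild a usable socket from the received blob.
    SOCKET dup = WSASocketW(FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO,
                            FROM_PROTOCOL_INFO, &info, 0,
                            WSA_FLAG_OVERLAPPED);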
No, it is not possible. A socket is an OS object, which is why you can duplicate the socket handle into another process. OpenSSL, on the other hand, is an application-level library that just sits on top of whatever connection framework you decide to use for the physical communication. So you cannot duplicate the SSL structures and state machine that are attached to the original socket, as they cannot be shared across process boundaries.
What I want to do is create a named pipe server that routes messages between connected peers. On Windows it seems that you first have to create a pipe, then connect it to a client, then read from the connected client pipe to get the message you want; at that point the handle is bound to that client and you have to create a new named pipe for the next one. Is there no way to easily multiplex all the clients into one handle so I don't have to read from each client separately? To write to the clients from the server you obviously have to use each client's handle. Maybe the server could close the connection every time it has processed a request, but that seems unnecessarily wasteful. I would rather avoid implementing my own named pipes with shared memory...
"you first have to create a pipe and then connect it to a client"
Not exactly. The server process creates the pipe, but the client connects on its own. Also, the client can try to connect and block if the server hasn't yet created the pipe.
"you read from the connected client pipe to get the message you want and then that handle is bound to that client". True. Doesn't stop you from immediately waiting for the next client.
"Is there no way to easily multiplex all the clients into one handle?". No, that would defeat the point of the HANDLE. That's the bit you need to demultiplex the clients.
What you seem to be missing is that you can set the number of pipe instances to PIPE_UNLIMITED_INSTANCES and read all of them using a shared LPOVERLAPPED_COMPLETION_ROUTINE callback. The callback tells you which HANDLE, and thus which pipe, has data available.
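A minimal sketch of that pattern (error handling and the overlapped ConnectNamedPipe step are omitted; the pipe name and OnRead are placeholders):

    #include <windows.h>

    struct PipeCtx {
        OVERLAPPED ov;   // must be first, so lpOverlapped casts back to PipeCtx
        HANDLE     hPipe;
        char       buf[4096];
    };

    // Shared completion routine: the OVERLAPPED pointer identifies the
    // pipe instance (and thus the client) the data arrived on.
    VOID CALLBACK OnRead(DWORD err, DWORD bytes, LPOVERLAPPED ov) {
        PipeCtx* ctx = reinterpret_cast<PipeCtx*>(ov);
        if (err == 0 && bytes > 0) {
            // ctx->hPipe tells you which client this is; handle ctx->buf...
            ReadFileEx(ctx->hPipe, ctx->buf, sizeof(ctx->buf), &ctx->ov, OnRead);
        }
    }

    // One instance per expected client; call again whenever a client connects.
    PipeCtx* NewInstance() {
        PipeCtx* ctx = new PipeCtx{};
        ctx->hPipe = CreateNamedPipeW(L"\\\\.\\pipe\\example",
            PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,
            PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
            PIPE_UNLIMITED_INSTANCES, 4096, 4096, 0, nullptr);
        // After an overlapped ConnectNamedPipe() completes for this instance,
        // post the first ReadFileEx(ctx->hPipe, ..., &ctx->ov, OnRead).
        return ctx;
    }

Keep in mind that completion routines only run while the thread that issued the I/O is in an alertable wait, e.g. SleepEx(INFINITE, TRUE).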
I'm currently trying to develop a server and some clients which communicate with each other through something like a proxy in the middle. The "proxy" has sockets open to every client and server on the system, which means I'm currently using threads to keep all the connections open. Every time a client decides to send a message, it uses its socket to the proxy and sends the message. The proxy then propagates the message to every other node using the respective socket.
As you can see, a node may be receiving messages (the proxy writing to its socket) at the same time that it wants to send messages (writing to the socket itself).
How do I guarantee that the content in the socket does not get overwritten? Do I have to use mutexes to lock access to the socket? What is a good practice for solving this problem?
Connections are bi-directional. Content going one way does not overwrite content going the other way. No mutex is needed for this.
Besides, you couldn't use a mutex anyway, as the two ends of the connection are in separate processes.
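To illustrate the point, here is a minimal sketch (assuming a connected POSIX socket fd): one thread reads while another writes, with no lock between the two directions.

    #include <sys/socket.h>
    #include <thread>

    void pump(int fd) {
        std::thread reader([fd] {
            char buf[4096];
            while (recv(fd, buf, sizeof(buf), 0) > 0) {
                // inbound direction: handle data from the peer...
            }
        });
        std::thread writer([fd] {
            const char msg[] = "hello";
            send(fd, msg, sizeof(msg) - 1, 0);  // outbound direction
        });
        reader.join();
        writer.join();
    }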
Running on a Linux 3.9 or later kernel, I have an application X which listens on a particular socket for connections. I want to write an unrelated application, Y, which tracks the number of attempts to connect to this socket, the source IP, and so on.
Is it possible in C++ (ideally through the Qt library) to share / monitor a socket already in use by an unrelated process? I found several StackOverflow questions which suggest forking to share the socket, but that's not possible in this case.
It is possible to transfer a file descriptor to another process, which behaves like a cross-process dup(2). See Can I open a socket and pass it to another process in Linux for details. But this needs to be done explicitly, i.e. one process sends the file descriptor and the other receives it. Thus the "unrelated" process must cooperate.
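For reference, a minimal sketch of the sending side (assuming an already-connected AF_UNIX socket uds; error handling omitted):

    #include <sys/socket.h>
    #include <cstring>

    // Send fd_to_send over the AF_UNIX socket uds as ancillary data.
    void send_fd(int uds, int fd_to_send) {
        char dummy = 'x';                       // must send at least one byte
        iovec iov{&dummy, 1};

        char cbuf[CMSG_SPACE(sizeof(int))] = {};
        msghdr msg = {};
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cbuf;
        msg.msg_controllen = sizeof(cbuf);

        cmsghdr* cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type  = SCM_RIGHTS;            // the kernel dup()s the fd
        cm->cmsg_len   = CMSG_LEN(sizeof(int));
        std::memcpy(CMSG_DATA(cm), &fd_to_send, sizeof(int));

        sendmsg(uds, &msg, 0);
    }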
But a listening socket cannot be used for monitoring. Such a socket can only accept a connection; it is not possible to see whether another process has accepted a connection on the same socket, no matter whether the socket is shared via fork, threading, or file descriptor passing.
Given the right permissions and OS, you can monitor the behavior of an application at the syscall level using the ptrace(2) or a similar interface. There you could see whether the application calls accept and what it returns. Or, as suggested in a comment, you can use packet capturing (tcpdump, raw sockets) to simply watch the traffic and deduce from a successful TCP handshake that some (unknown) process must have accepted the connection.
Dear Stackoverflowers,
I am researching networking a bit and I decided I'd like to create a small and simple networking library with Winsock. (I am using completion ports and overlapped I/O, though.)
As I researched a bit, I came to the following steps for a TCP listener (correct me if I am wrong):
Create a Listening Socket
Bind it to a port/IP
Listen on it
When a new connection is made, a separate socket is created for that connection.
The listener continues to listen; the specific connection is handled as needed.
EDIT: With a 'connection' from here on I mean communication between the server and distinct clients.
For a UDP listener, though, we need to use WSARecvFrom, which returns the sender's address in the lpFrom parameter. Now I was wondering the following:
Is it better to make one UDP socket listen for incoming datagrams on a specific port with WSARecvFrom and create new sockets for every specific connection? Or could I just use that one UDP socket itself with WSASendTo? Would that cause any performance penalty if one UDP socket is used for, say, 1000 connections? Or would it be the same, or even better, than creating/duplicating separate sockets for each incoming connection?
Note: if multiple sockets are needed, how would you handle sockets listening on the same port? Or could a client accept UDP from different ports?
Hope you guys can help!
P.S. Extra tips are always welcome!
Unlike TCP, UDP is connectionless, and as such you don't need to create separate sockets for each party. One UDP socket can handle everything. Bind it to a local IP/port and call WSARecvFrom() once, and when it reports data to your IOCP you can process the data as needed (in another thread if needed) and then call WSARecvFrom() again. Each time new data arrives, look at the reported lpFrom address to know the IP/port of the sender. And yes, you can use the same UDP socket for sending data back to each sender with WSASendTo() when needed.
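A minimal sketch of that pattern (assuming the socket is already bound and associated with the IOCP via CreateIoCompletionPort((HANDLE)s, iocp, 0, 0); error handling omitted, and names like RecvCtx and PostRecv are placeholders):

    #include <winsock2.h>

    struct RecvCtx {
        WSAOVERLAPPED ov;        // must be first: recovered from the IOCP
        WSABUF        wsabuf;
        char          buf[1500];
        sockaddr_in   from;      // filled in with the sender's IP/port
        INT           fromLen;
    };

    // Post (or re-post) an overlapped receive on the single UDP socket.
    void PostRecv(SOCKET s, RecvCtx* ctx) {
        ctx->wsabuf.buf = ctx->buf;
        ctx->wsabuf.len = sizeof(ctx->buf);
        ctx->fromLen = sizeof(ctx->from);
        DWORD flags = 0;
        WSARecvFrom(s, &ctx->wsabuf, 1, nullptr, &flags,
                    reinterpret_cast<sockaddr*>(&ctx->from), &ctx->fromLen,
                    &ctx->ov, nullptr);   // completion is queued to the IOCP
    }

    // Worker thread: one loop serves every peer through the same socket.
    void Worker(HANDLE iocp, SOCKET s) {
        DWORD bytes; ULONG_PTR key; LPOVERLAPPED pov;
        while (GetQueuedCompletionStatus(iocp, &bytes, &key, &pov, INFINITE)) {
            RecvCtx* ctx = reinterpret_cast<RecvCtx*>(pov);
            // ctx->from says who sent this datagram; reply with
            // WSASendTo(s, ..., (sockaddr*)&ctx->from, ctx->fromLen, ...).
            PostRecv(s, ctx);             // immediately re-arm the receive
        }
    }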
I am writing a server in Linux that is supposed to serve an API.
Initially, I wanted to make it multi-threaded on a single port, meaning that I'd have multiple threads working on the various requests received on that single port.
One of my friends told me that that is not how it is supposed to work. He told me that when a request is received, I first have to follow a handshake procedure, create a thread that listens on some other port dedicated to the request, and then redirect the requesting client to the new port.
Theoretically it's very interesting, but I could not find any information on how to implement the handshake and do the redirection. Can someone help?
If I'm not wrong in interpreting your responses: once I create a multithreaded server with a main thread that listens on a port and creates a new thread to handle each request, I'm essentially making it multithreaded on a single port?
Consider the scenario where I get a large number of requests every second. Isn't it true that every request on the port would then have to wait for the "current" request to complete? If not, how would the communication still work? Say a browser sends a request: the thread handling it has to first listen on the port, block it, process the request, respond, and then unblock it.
By that logic, even though I have "multithreads", I'm only ever using one thread at a time apart from the main thread, because the port is being blocked.
What your friend told you is similar to passive FTP: the client tells the server that it needs a connection, the server sends back a port number, and the client creates a data connection to that port.
But all you wanted to build is a multithreaded server. All you need is one server socket listening for and accepting connections on a given port. As soon as the automatic TCP handshake is finished, you'll get a new socket from the accept function; that socket will be used for communication with the client that has just connected. So now you only have to create a new thread, passing that client socket to the thread function. In your main server thread, you then call accept again in order to accept another connection.
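A minimal sketch of that accept loop (POSIX sockets, thread per connection; error handling omitted, and port 8080 is just an example):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <thread>

    void handle_client(int client) {
        // read/write on 'client' for this one connection...
        close(client);
    }

    int main() {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(8080);       // the one listening port
        bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(srv, SOMAXCONN);

        for (;;) {
            // accept returns a *new* socket for the client that just
            // connected; the listening socket keeps listening.
            int client = accept(srv, nullptr, nullptr);
            std::thread(handle_client, client).detach();
        }
    }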
TCP/IP does the handshake for you; if you can't think of any reason to do an application-level handshake, then your application doesn't need one.
An example of an application-specific handshake would be user authentication.
What your colleague is suggesting sounds like the way FTP works. This is not a good thing to do: the internet these days is built largely around protocols that use a single port, and having a separate command port is bad. One of the reasons is that stateful firewalls aren't designed for multi-port applications; they have to be extended for each individual application that does things this way.
Look at ASIO's tutorial on async TCP. There, one part accepts connections on TCP and spawns handlers that each communicate with a single client. That's how TCP servers usually work (including HTTP/web, the most common TCP protocol).
You may disregard the asynchronous parts of ASIO if you're set on creating a thread per connection; they don't apply to your question. (Going fully async with one worker thread per core is nice, but it might not integrate well with the rest of your environment.)