Handling multiple types of connections by a server - C++

I'm developing a server that allows clients to communicate with a hardware simulation, simulating the underlying connection type. When a client initially connects, the server doesn't know what type of connection to simulate. The client's first packet to the server requests the connection to be of a specific type.
How do servers usually handle connections of different types? I've only ever worked on servers where all connections were the same. I started off implementing three unrelated connection classes - an "unknown connection" class, and two others that simulate the other connection types. When the connection type is determined, the unknown connection creates a connection of the appropriate type, registers the connection with a registry, then passes off the socket handle to the new connection.
Is it more common to have a single connection implementing a state machine, each type represented by a state, which in turn contains another state machine to handle the type-specific state? Are there alternative designs worth considering?
[Update]
After starting to implement the suggestion made by codenheim, I realized some design factors that make it a less-than-attractive solution for my specific problem. The biggest issue is that, regardless of connection type, I need to wait for a connection to receive a hardware address before anything can be done with the connection. If I use the listen port to determine connection type, I have to repeat the logic for receiving the hardware address in each connection type. I also have to keep a list of connections in this state for each connection type, even though they are all essentially doing the same thing - waiting for a hardware address.
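For concreteness, a rough sketch of the kind of hand-off described above might look like the following; the class names (UnknownConnection, TypeAConnection, ConnectionRegistry), the one-byte type marker, and the registry interface are all hypothetical, not taken from the actual code.

    #include <cstddef>
    #include <memory>
    #include <utility>
    #include <vector>

    // Hypothetical base class: anything that owns a socket handle.
    class Connection {
    public:
        explicit Connection(int socketFd) : socketFd_(socketFd) {}
        virtual ~Connection() = default;
        virtual void onData(const char* data, std::size_t len) = 0;
    protected:
        int socketFd_;
    };

    // Hypothetical registry that owns the live, typed connections.
    class ConnectionRegistry {
    public:
        void add(std::unique_ptr<Connection> c) { connections_.push_back(std::move(c)); }
    private:
        std::vector<std::unique_ptr<Connection>> connections_;
    };

    class TypeAConnection : public Connection {
    public:
        using Connection::Connection;
        void onData(const char*, std::size_t) override { /* simulate type A */ }
    };

    class TypeBConnection : public Connection {
    public:
        using Connection::Connection;
        void onData(const char*, std::size_t) override { /* simulate type B */ }
    };

    // The "unknown" connection: waits for the first packet, then hands off.
    class UnknownConnection : public Connection {
    public:
        UnknownConnection(int socketFd, ConnectionRegistry& registry)
            : Connection(socketFd), registry_(registry) {}

        void onData(const char* data, std::size_t len) override {
            // The first byte of the first packet selects the simulated type
            // (purely illustrative framing).
            std::unique_ptr<Connection> typed;
            if (len > 0 && data[0] == 'A')
                typed = std::make_unique<TypeAConnection>(socketFd_);
            else
                typed = std::make_unique<TypeBConnection>(socketFd_);

            registry_.add(std::move(typed));
            // The UnknownConnection can now be destroyed; the socket handle
            // belongs to the typed connection from here on.
        }
    private:
        ConnectionRegistry& registry_;
    };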

The traditional practice is to separate each protocol on a unique port. This allows you to write modular protocol handlers that each bind to their own port and even register the ports by protocol name if the OS supports that (such as inetd and /etc/services on UNIX).
In your current design, your server has to handle the initial "knock" packet. If you don't like that particular aspect (and can't use distinct ports), you may consider using the approach of knockd (port knocking). knockd handles the initial knock sequence before opening the port up (with filter rules, but you could proxy the connection instead). The services being protected know nothing about it. But all this does is move the handshake from one server to another. With unique ports, you can do away with the handshake.
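As a rough illustration of the one-listener-per-protocol idea (the port numbers and handler names below are made up, and error handling is omitted):

    #include <netinet/in.h>
    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>

    // One listening socket per simulated connection type (ports are placeholders).
    static int makeListener(uint16_t port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(fd, SOMAXCONN);
        return fd;
    }

    static void handleTypeA(int clientFd) { /* protocol-A handler */ close(clientFd); }
    static void handleTypeB(int clientFd) { /* protocol-B handler */ close(clientFd); }

    int main() {
        int listenerA = makeListener(5001);
        int listenerB = makeListener(5002);

        pollfd fds[2] = {{listenerA, POLLIN, 0}, {listenerB, POLLIN, 0}};
        for (;;) {
            poll(fds, 2, -1);
            // The listener a connection arrives on tells you the protocol,
            // so no "what type are you?" handshake is needed.
            if (fds[0].revents & POLLIN) handleTypeA(accept(listenerA, nullptr, nullptr));
            if (fds[1].revents & POLLIN) handleTypeB(accept(listenerB, nullptr, nullptr));
        }
    }

The cost is one listening socket per protocol, but each handler stays protocol-specific and needs no "unknown" state.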

Related

Pinging by creating a new socket per peer

I created a small cross-platform app using Qt sockets in C++ (although this is not a C++ or Qt specific question).
The app has a small "ping" feature that tries to connect to a peer and asks for a small challenge (i.e. some custom data sent and some custom data replied) to see if it's alive.
I'm opening one socket per peer, so as soon as the ping starts we have several sockets in SYN_SENT.
Is this a proper way to implement a ping-like protocol with challenge? Am I wasting sockets? Is there a better way I should be doing this?
I'd say your options are:
An actual ping (using ICMP echo packets). This has low overhead, but only tells you whether the host is up. And it requires you to handle lost packets, timeouts, and retransmits.
A UDP-based protocol. This also has lower kernel overhead, but again you'll be responsible for timeouts, lost packets, and retransmits. It has the advantage of letting you positively confirm that your program is running on the peer, and it can be implemented with a single socket endpoint no matter how many peers you add. (You could also send to multiple peers at once with a broadcast if they are all on a local network, or with multicast, though that requires more complicated set-up.)
TCP socket as you're doing now. This is much easier to code, extremely reliable and will automatically provide a timeout (i.e. your connect will eventually fail if the peer doesn't respond). It lets you know positively that your peer is there and running your program. Although there is more kernel overhead to this, and you will use one socket endpoint on your host per peer system, I wouldn't call it a significant issue unless you think you'll be having thousands of peers.
So, in the end, you have to judge: If thousands of hosts will be participating and this pinging is going to happen frequently, you may be better off coding up a UDP solution. If the pinging is rare or you don't expect so many peers, I would go the TCP route. (And I wouldn't consider that a "waste of sockets" -- those advantages are why TCP is so commonly used.)
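To make the UDP option concrete, here is a minimal sketch using plain POSIX sockets; the peer addresses, port, and challenge payload are invented for the example, and real timeout/retry handling is still up to you.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>
    #include <cstdint>
    #include <string>
    #include <vector>

    int main() {
        // One UDP socket is enough, no matter how many peers we ping.
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        // Give recvfrom() a 2-second timeout so we don't block forever.
        timeval tv{2, 0};
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        std::vector<std::string> peers = {"192.168.1.10", "192.168.1.11"};  // example peers
        const uint16_t pingPort = 7777;      // made-up port the peer app listens on
        const char challenge[] = "PING:42";  // made-up challenge payload

        for (const auto& ip : peers) {
            sockaddr_in peer{};
            peer.sin_family = AF_INET;
            peer.sin_port = htons(pingPort);
            inet_pton(AF_INET, ip.c_str(), &peer.sin_addr);
            sendto(fd, challenge, sizeof(challenge), 0,
                   reinterpret_cast<sockaddr*>(&peer), sizeof(peer));
        }

        // Collect whatever replies arrive before the timeout; anything missing
        // is a peer that is down (or a lost datagram -- retries are up to you).
        char buf[128];
        sockaddr_in from{};
        socklen_t fromLen = sizeof(from);
        while (recvfrom(fd, buf, sizeof(buf), 0,
                        reinterpret_cast<sockaddr*>(&from), &fromLen) > 0) {
            char ipStr[INET_ADDRSTRLEN];
            inet_ntop(AF_INET, &from.sin_addr, ipStr, sizeof(ipStr));
            // Verify the reply matches the challenge, mark ipStr as alive, etc.
            fromLen = sizeof(from);
        }
        close(fd);
        return 0;
    }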
The technique described in the question doesn't really implement ping for the connection and doesn't test if the connection itself is alive. The technique only checks that the peer is listening for (and is responsive to) new connections...
What you're describing is more of an "is the server up?" test than a "keep-alive" ping.
If we're discussing "keep-alive" pings, then this technique will fail.
For example, if just the read or the write side of the connection is closed, you wouldn't know. Also, if the connection was closed improperly (e.g., due to an intermediary dropping the connection), this ping will not expose the issue.
Most importantly, for some network connections and protocols, you wouldn't be resetting the connection's timeout... so if your peer is checking for connection timeouts, this ping won't help.
For a "keep-alive" ping, I would recommend that you implement a protocol specific ping.
Make sure that the ping is performed within the existing (same) connection and never requires you to open a new connection.
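A minimal sketch of what such a protocol-specific keep-alive on the existing connection might look like; the PING/PONG wire format is purely illustrative, and a real client would use a receive timeout or non-blocking I/O rather than the blocking calls shown here.

    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <cstring>

    // Send an application-level ping over the *existing* connection and wait
    // for the matching reply. Returns false if the connection looks dead.
    bool keepAlivePing(int connectedFd) {
        const char ping[] = "PING\n";        // illustrative wire format
        if (send(connectedFd, ping, sizeof(ping) - 1, 0) !=
            static_cast<ssize_t>(sizeof(ping) - 1))
            return false;                    // write side is broken

        char reply[8] = {};
        ssize_t n = recv(connectedFd, reply, sizeof(reply) - 1, 0);
        if (n <= 0)
            return false;                    // peer closed or read side is broken
        return std::strncmp(reply, "PONG", 4) == 0;
    }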

Sharing sockets (WINSOCK) by sending them between two servers

I am trying to write a distributed server system (consisting of server 1 = "main" and server 2 = "replacement" for now). Don't mind some dirty methods; it's just to achieve the basic function a distributed server would achieve.
Currently I have both servers running via SO_REUSEADDR and a TCP socket (maybe UDP will solve my problem, but I want to try it with TCP first) on the same machine.
The main server establishes a connection with the replacement server, and clients connect to it.
Now what I want to achieve: the main server should send the socket of the connecting clients to the replacement server, so that in case the main server can't work anymore (timeout or whatever), the replacement server can work with the clients and send/recv with them.
The socket I send from main to the replacement server is the one I get with ClientSocket = ::accept(ListenSocket, NULL, NULL); where ClientSocket is a SOCKET (UINT_PTR).
The replacement server can't send/recv with the clients, even after the main server gets terminated midway.
Is that because each server, even though they run on the same port, needs to be connected to via a ::connect from the clients?
EDIT: If my theory is true, this should be solved by using UDP instead of TCP as well, no?
EDIT2: By "distributed server" I mean a server which, in case of a failure, will be replaced by another without the clients' tasks getting interrupted.
EDIT3: If there is a better and simpler way to do this, I'd like to know about that as well. I'm not very familiar with sockets and server communication as of now; that's how I came up with my idea to solve it.
You cannot send a socket between machines. A socket is an OS concept. It represents (in this case) a connection between the server and a client. This connection cannot be resumed on a different machine that has a different IP address because a TCP connection is defined to be established between a pair of addresses and a pair of ports.
The UINT_PTR on one machine means nothing to another machine. It is an opaque value. It is an OS handle.
UDP does not solve the problem, since the client needs to notice that the IP address it is communicating with has changed.
Even if you manage that, you have the problem that a server failure kills all data on that server. The other server cannot resume with the exact same data. This is a basic problem of distributed computing. You cannot keep the state of all three machines completely in sync.
Make the clients tolerate interruptions by retrying. Make the servers stateless and put all data into a database.
This is a very hard problem to solve. Seek off-the-shelf solutions.
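To illustrate the retry advice above, a bare-bones reconnect loop might look something like this (POSIX sockets for brevity; a Winsock version is analogous apart from WSAStartup/closesocket; the address, port, and backoff values are placeholders):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>

    // Keep trying to (re)connect to whichever server is currently up.
    int connectWithRetry(const char* serverIp, uint16_t port) {
        for (;;) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port);
            inet_pton(AF_INET, serverIp, &addr.sin_addr);

            if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0)
                return fd;               // connected -- resume the client's work

            close(fd);
            sleep(2);                    // crude fixed backoff before retrying
            // A real client would also try the replacement server's address here.
        }
    }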

Assigning a port number manually for each connection

I'm running a server (say on port 50000). Any new request is accepted, and a random port is assigned by the OS each time. I want to assign the port number manually instead of the system picking one randomly for me.
The main reason for this is that I'm trying to do some multicast thing based on port number. I'm planning to assign a few clients to the same port, the next group of clients to another port, and so on.
Any idea?
A TCP socket is identified by a tuple of client-side IP/Port and server-side IP/Port pairs. The server-side IP/Port is decided by calling bind() before listen(). The client IP/Port is decided explicitly by calling bind() before connect(), or implicitly by omitting bind() and letting connect() decide. When a connection is accepted by accept(), it is assigned the client-side IP/Port that made it and the server-side IP/Port that accepted it.
The only random option available here is on the client side. It can call connect() without a preceding bind(), or it can call bind() with a zero IP/Port. In either case, the OS chooses an appropriate network adapter and assigns its IP if not explicitly stated, and assigns a random available ephemeral port if not explicitly stated. Calling bind() allows the client to assign either or both of those values if desired. bind() is not typically used on the client side in most situations, but it is allowed when needed, e.g. when dealing with specific protocol requirements or firewall/router issues.
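For example, a client that wants a specific local port would bind() before connect(), roughly like this (the addresses and ports are placeholders, and error handling is omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        // Client side of the tuple: pick the local (client) port explicitly
        // instead of letting the OS choose an ephemeral one.
        sockaddr_in local{};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);   // let the OS pick the local IP
        local.sin_port = htons(40000);               // but force local port 40000
        bind(fd, reinterpret_cast<sockaddr*>(&local), sizeof(local));

        // Server side of the tuple: connect to the server on port 50000.
        sockaddr_in server{};
        server.sin_family = AF_INET;
        server.sin_port = htons(50000);
        inet_pton(AF_INET, "203.0.113.5", &server.sin_addr);  // example server IP
        connect(fd, reinterpret_cast<sockaddr*>(&server), sizeof(server));

        close(fd);
        return 0;
    }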
Tracking clients by Port alone is not good enough. You need to track the full tuple instead, or at least the client-side IP/port pair of the tuple. Clients from the same network would be using the same client IP but different Ports, but clients from different networks would be using different client IPs and could be using the same client Port, and that is perfectly OK. So using Port alone may find the wrong client from the wrong network. You need to take the client IP into account as well.
When the server accepts a connection, the server has no control over changing the values of the tuple. The OS needs the values to be predictable so it can route packets correctly. When you want to send a packet to a specific client, you need to know both client IP and Port.
If you want to have different server-side IP/Port values in the tuples of accepted connections, the only option is to open multiple listening sockets that are bound with the desired server-side values.

Multi-way inter-process communication

There are 10 processes on my machine and each should have the capability to communicate with the others.
Now the scenario is that all 10 processes should be listening so that any process can communicate with any other at any time. And when required, each should be able to pass a message to any of the other processes.
I am trying to code it with C++ and Unix TCP/UDP sockets. However, I don't understand how to structure it. Should I use UDP or TCP; which would be better? And how can a process listen and send data simultaneously?
I need help.
The decision of UDP vs TCP depends on your messages, whether or not they need to be reliably delivered, etc.
For pure TCP, each peer would have a TCP socket on which it accepts connections from other peers (and each accept would result in a new socket). This new socket is bidirectional and can be used for sending/receiving between the two peers. With this solution, you would need some sort of discovery mechanism.
For UDP, it's much the same except you don't need the accept socket. You still need some form of discovery mechanism.
The discovery mechanism could either be another peer with a well known (via configuration, etc) address, or possibly you could use UDP broadcast for the discovery mechanism.
In terms of ZeroMQ, which is at a slightly higher level than raw sockets, you would have a single ROUTER socket on which you listen and receive data, and one DEALER socket per peer on which you send data.
No matter the solution, you would likely need a thread for handling the network connections using poll() or something like that, and as messages are received you need another thread (or thread pool) for handling the messages.
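A bare-bones sketch of the pure-TCP variant: each peer listens on its own port and uses poll() to multiplex the accept socket and the per-peer connections. The port, buffer handling, and peer discovery are all simplified away here.

    #include <netinet/in.h>
    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstddef>
    #include <vector>

    int main() {
        // Each peer listens on its own well-known port (value is a placeholder).
        int listenFd = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        setsockopt(listenFd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(6000);
        bind(listenFd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(listenFd, SOMAXCONN);

        std::vector<pollfd> fds = {{listenFd, POLLIN, 0}};
        for (;;) {
            poll(fds.data(), fds.size(), -1);

            // New peer connecting to us: accept and start watching its socket.
            if (fds[0].revents & POLLIN)
                fds.push_back({accept(listenFd, nullptr, nullptr), POLLIN, 0});

            // Data from an already-connected peer: read it and hand the message
            // to whatever (worker thread, queue, ...) processes it.
            for (std::size_t i = 1; i < fds.size(); ++i) {
                if (fds[i].revents & POLLIN) {
                    char buf[1024];
                    ssize_t n = recv(fds[i].fd, buf, sizeof(buf), 0);
                    if (n <= 0) { close(fds[i].fd); fds[i].fd = -1; }  // peer went away
                    // else: dispatch buf[0..n) as a message
                }
            }
            // (Removing fd == -1 entries and connecting out to other peers is omitted.)
        }
    }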
You can run each process as a server and spawn 9 more threads to connect to the other processes as a client.
This question applies to any language, so the answer is not C++ related.
When given a choice, look for a library that makes communication easier (e.g. Apache Thrift).
About TCP/UDP: TCP is typically slower but more reliable, so by default, go for TCP, but there might be reasons for choosing UDP, like streaming, multicast/broadcast,... Reliability might not be an issue when all processes are on the same board, but you might want to communicate with external processes later on.
A threaded process can use the same socket for sending and receiving without locks.
Also, you need some kind of scheme to find out which port to send to in order to reach a given process, and with TCP you need to decide whether to keep connections open or connect every time you want to send.
What you want to do seems to be message passing.
Before trying to build it yourself, take a look at Boost.MPI.

Server/client chat

The idea I'm having is that clients can connect to a chat room on a server and communicate with each other. In the chat room you should also be able to target another user so that the two of you can talk with each other.
Now to the problem. I'm not sure which way is the easiest/best way to implement this. For the chat room, I thought that when a user writes something, the message is sent to the server and the server then echoes that message to the other clients. I'm not sure what other options I have.
What I'm most confused about is how I can make only two clients talk to each other. Either the server acts as a proxy and just forwards the messages to the other client, though this seems inefficient, or, the only alternative I can think of, the two clients establish a connection between each other. Which implementation is most common for achieving this?
I am using unix sockets with C++.
There are a few options for many-to-many chat and one-to-one chat. However, the only reasonably sane option for many-to-many chat is to do as you've said: send the message to the centralized server, and the server relays it to all other connected clients (or to those in the same "room"/"channel").
For one-to-one chat, I recommend that you follow the same exact model: it's just a special case of the many-to-many chat relay in which the message is sent by the server, as a proxy, to only one other connected client. This is simple, and it hides every client's IP address from every other.
However, if the one-to-one communication were to become more voluminous than chat (e.g., a file transfer), direct one-to-one communication may be appropriate. In this case, the server should relay the initiation of a direct, peer-to-peer communication channel to the remote user, probably exchanging IP addresses upon setup, and the clients would then connect directly to each other for their special-purpose direct communication (while usually, though optionally, remaining connected to the server).
Thus, one-to-one communication is normally proxied by the server just as in the general many-to-many case, and the inefficiency of that practice is superficial. Special-purpose one-to-one communication (file transfers, VoIP, etc.) is done over direct client-to-client connections, usually orchestrated at first by the server (i.e., to prepare each side for direct communication).
Implementation hint: the server is all TCP. Read about non-blocking sockets, the POSIX system call poll, and let the idea of message framing over TCP roll around in your head. You can then skip multithreading [and scalability] issues in the server code. The clients, besides speaking the same custom TCP protocol as your server, are up to you.
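As a starting point for the framing part of that hint, a common approach is a length prefix in front of every message; the helpers below are a sketch under that assumption, not a complete server.

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cstdint>
    #include <string>
    #include <vector>

    // Frame every chat message as: 4-byte big-endian length, then the payload.
    // This lets the receiver know where one message ends and the next begins,
    // since TCP itself is just a byte stream.
    bool sendMessage(int fd, const std::string& msg) {
        uint32_t len = htonl(static_cast<uint32_t>(msg.size()));
        if (send(fd, &len, sizeof(len), 0) != static_cast<ssize_t>(sizeof(len)))
            return false;
        return send(fd, msg.data(), msg.size(), 0) == static_cast<ssize_t>(msg.size());
    }

    // Blocking receive of one framed message (a real non-blocking server would
    // instead accumulate bytes per client and slice complete frames out of the buffer).
    bool recvMessage(int fd, std::string& out) {
        uint32_t len = 0;
        if (recv(fd, &len, sizeof(len), MSG_WAITALL) != static_cast<ssize_t>(sizeof(len)))
            return false;
        len = ntohl(len);
        std::vector<char> buf(len);
        if (recv(fd, buf.data(), len, MSG_WAITALL) != static_cast<ssize_t>(len))
            return false;
        out.assign(buf.begin(), buf.end());
        return true;
    }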
Well, you can implement a multi-threaded client/server. A single "server" relaying the messages is the right way to go (to preserve a global ordering of messages). Also think about leader election algorithms (for example the Bully algorithm, http://en.wikipedia.org/wiki/Bully_algorithm) in case your "server" goes down.
Another way to do this would be to use signals and event-driven programming.