Assign port number manually for each connection - C++

I'm running a server (say on port 50000). Any new request is accepted and a random port is assigned by the OS each time. I want to assign the port number manually instead of the system doing it randomly for me.
The main reason is that I'm trying to do some multicast-style grouping based on port number. I'm planning to assign a few clients to the same port, the next group of clients to another port, and so on.
Any idea?

A TCP socket is identified by a tuple of client-side IP/Port and server-side IP/Port pairs. The server-side IP/Port is decided by calling bind() before listen(). The client IP/Port is decided explicitly by calling bind() before connect(), or implicitly by omitting bind() and letting connect() decide. When a connection is accepted by accept(), it is assigned the client-side IP/Port that made it and the server-side IP/Port that accepted it.
The only random option available here is on the client side. It can call connect() without a preceding bind(), or it can call bind() with a zero IP/Port. In either case, the OS chooses an appropriate network adapter and assigns its IP if not explicitly stated, and assigns a random available ephemeral port if not explicitly stated. Calling bind() allows the client to assign either or both of those values if desired. bind() is not typically used on the client side, but it is allowed when dealing with specific protocol requirements or firewall/router issues.
Tracking clients by Port alone is not good enough. You need to track the full tuple instead, or at least the client-side IP/port pair of the tuple. Clients from the same network would be using the same client IP but different Ports, but clients from different networks would be using different client IPs and could be using the same client Port, and that is perfectly OK. So using Port alone may find the wrong client from the wrong network. You need to take the client IP into account as well.
When the server accepts a connection, the server has no control over changing the values of the tuple. The OS needs the values to be predictable so it can route packets correctly. When you want to send a packet to a specific client, you need to know both client IP and Port.
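As an illustration only, here is a small sketch (IPv4, POSIX sockets, error handling trimmed; the g_clients map and acceptOne() are made-up names for this example) of keying accepted clients on the full client IP:port pair rather than the port alone:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <map>
#include <string>

std::map<std::string, int> g_clients;   // "ip:port" -> socket descriptor

void acceptOne(int listenFd) {
    sockaddr_in peer{};
    socklen_t len = sizeof(peer);
    int fd = accept(listenFd, reinterpret_cast<sockaddr*>(&peer), &len);
    if (fd < 0) return;

    char ip[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof(ip));
    // Key on IP *and* port -- the port alone is not unique across networks.
    std::string key = std::string(ip) + ":" + std::to_string(ntohs(peer.sin_port));
    g_clients[key] = fd;
}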
If you want to have different server-side IP/Port values in the tuples of accepted connections, the only option is to open multiple listening sockets that are bound with the desired server-side values.
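If you do go that route, a minimal sketch (IPv4, wildcard address, minimal error handling; the port list is up to you) of opening one listening socket per desired server-side port:

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

// One listening socket per server-side port you want to appear in the tuple.
std::vector<int> listenOnPorts(const std::vector<unsigned short>& ports) {
    std::vector<int> fds;
    for (unsigned short port : ports) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0 &&
            listen(fd, SOMAXCONN) == 0) {
            fds.push_back(fd);   // connections accepted on this socket carry this server port
        } else {
            close(fd);
        }
    }
    return fds;
}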

Related

Can I call bind() and then connect() on the same socket descriptor?

Just a curious question about network socket programming in a Windows application with C/C++:
How can I tell the connect() function to use a particular source IP and source port values?
After a socket is created, the application calls connect() to a remote IP and port using a sockaddr structure.
The connect() function internally selects the source IP and port for the connection.
Rather than the system deciding the source IP and/or port for the connect(), I'd like it to be the application's responsibility to decide which source IP and/or port to bind to.
bind() claims a local port that is not already in use, typically so the socket can act as a server, while connect() targets a remote port that is already in use, so it can connect to it and talk to the server behind it.
As user stark said, you need to call bind() if you want to specify which interface/port combination to use. If a random available port number is fine, you can skip the bind() call, because a client doesn't necessarily have to have a fixed port number.
It is possible to ask the kernel for a specific source port before calling connect(), but if I may ask: why wouldn't you want the kernel to allocate source ports? As far as I know, choosing them yourself is not best practice.
How can I tell the connect() function to use a particular source IP and source port values?
Use the socket library's bind() function for that. Yes, you can call bind() before connect() for an outgoing socket. That is a perfectly legitimate operation for both UDP and TCP sockets.
Yes, you can. Indeed there's a reason to do so: if your routing policy would make the connection be established from an IP address other than the one you want to use, you can force a specific source IP on a multihomed/routing host by means of the bind(2) system call. Another use is to specify a fixed source port for the connection, but this is less common than the previous case.
But beware: you can select only one of the IP addresses already configured on the host, not any address you can imagine.
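For reference, a minimal sketch of that (BSD-style calls with placeholder addresses; on Windows the same bind()/connect() sequence applies once Winsock is initialized, and the function name here is made up for the example):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// srcIp must be an address configured on this host; srcPort 0 lets the OS
// pick an ephemeral source port while still forcing the source IP.
int connectFromSource(const char* srcIp, unsigned short srcPort,
                      const char* dstIp, unsigned short dstPort) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    sockaddr_in src{};
    src.sin_family = AF_INET;
    src.sin_port = htons(srcPort);
    inet_pton(AF_INET, srcIp, &src.sin_addr);

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(dstPort);
    inet_pton(AF_INET, dstIp, &dst.sin_addr);

    // bind() first to fix the source IP/port, then connect() as usual.
    if (bind(fd, reinterpret_cast<sockaddr*>(&src), sizeof(src)) != 0 ||
        connect(fd, reinterpret_cast<sockaddr*>(&dst), sizeof(dst)) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}

Calling it as connectFromSource("10.0.0.2", 0, "192.0.2.10", 80) would force the source address but leave the source port to the OS.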

Handling multiple types of connections by a server

I'm developing a server that allows clients to communicate to a hardware simulation, simulating the underlying connection type. When a client initially connects, the server doesn't know what type of connection to simulate. The client's first packet to the server requests the connection to be of a specific type.
How do servers usually handle connections of different types? I've only ever worked on servers where all connections were the same. I started off implementing three unrelated connection classes - an "unknown connection" class, and two others that simulate the other connection types. When the connection type is determined, the unknown connection creates a connection of the appropriate type, registers the connection with a registry, then passes off the socket handle to the new connection.
Is it more common to have a single connection implementing a state machine, each type represented by a state, which in turn contains another state machine to handle the type-specific state? Are there alternative designs worth considering?
[Update]
After starting to implement the suggestion made by codenheim, I realized some design factors that make it a less-than-attractive solution for my specific problem. The biggest issue is that, regardless of connection type, I need to wait for a connection to receive a hardware address before anything can be done with the connection. If I use the listen port to determine connection type, I have to repeat the logic for receiving the hardware address in each connection type. I also have to keep a list of connections in this state for each connection type, even though they are all essentially doing the same thing - waiting for a hardware address.
The traditional practice is to separate each protocol on a unique port. This allows you to write modular protocol handlers that each bind to their own port and even register the ports by protocol name if the OS supports that (such as inetd and /etc/services on UNIX).
In your current design, your server has to handle the initial "knock" packet. If you don't like that particular aspect (and can't use distinct ports), you may consider using the approach of knockd (port knocking). knockd handles the initial knock sequence before opening the port up (with filter rules, but you could proxy the connection instead). The services being protected know nothing about it. But all this does is move the handshake from one server to another. With unique ports, you can do away with the handshake.
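For what it's worth, a minimal sketch of the unique-port approach (POSIX select(); listenA and listenB are assumed to be already bound and listening, one per protocol, and the handler names are placeholders):

#include <sys/select.h>
#include <sys/socket.h>

void serveTwoProtocols(int listenA, int listenB) {
    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(listenA, &readable);
        FD_SET(listenB, &readable);
        int maxFd = (listenA > listenB ? listenA : listenB);
        if (select(maxFd + 1, &readable, nullptr, nullptr, nullptr) <= 0) continue;

        // The listening socket that became readable already tells you the
        // protocol, so no first-packet "knock" is needed.
        if (FD_ISSET(listenA, &readable)) {
            int fd = accept(listenA, nullptr, nullptr);
            (void)fd;   // hand fd to the protocol-A handler
        }
        if (FD_ISSET(listenB, &readable)) {
            int fd = accept(listenB, nullptr, nullptr);
            (void)fd;   // hand fd to the protocol-B handler
        }
    }
}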

How do I check if a TCP port is already being listened on?

I have a third-party library that acts as an HTTP server. I pass it an address and port, which it then uses to listen for incoming connections. This library listens in such a way that it doesn't get exclusive usage of the port and address it's bound to. As a result, I can listen on the same port multiple times.
I need to run multiple instances of this HTTP server in the same process. Each instance has a default port, but if that port isn't available, it should use the next available port. This is where my problem is; I can end up with two HTTP servers listening on the same port.
I cannot change the HTTP server's code and the HTTP server will not alert me if it cannot listen on the port I give it, so I have to be able to check if a port is already in use before starting each HTTP server. I have tried checking if a port is already being listened on by binding my own socket with SO_REUSEADDR set to FALSE and SO_EXCLUSIVEADDRUSE set to TRUE, but the bind and listen calls both succeed when an existing HTTP server is already listening on that port.
How is this HTTP server achieving this effect, and how can I accurately check if a port is being listened on in this manner?
The quick and dirty method would be to try to connect() to the port on localhost. If the connect() call succeeds, then you know the port is currently being listened on (by whoever received the connection). If the connect call fails (in particular with ECONNREFUSED) then you can be pretty sure that nobody is listening on that port.
Of course, there's a race condition here: Nothing is really stopping another program from swooping in and grabbing the port immediately after you ran the above test, but before you get around to binding to the port yourself. So you should take the result of the test as more of a hint than an absolute rule, and (hopefully) have some way of handling it if you later find out that the port is in use after all.
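A rough sketch of that check (IPv4 loopback, blocking connect(), POSIX-style calls; on Windows substitute closesocket(), and the function name is made up). Remember it is only a hint because of the race described above:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Returns true if something currently accepts connections on 127.0.0.1:port.
bool looksListenedOn(unsigned short port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    bool listening = (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0);
    close(fd);   // a refused connect suggests nothing is listening right now
    return listening;
}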
Use a port number of 0. The OS will pick a free port.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms740621(v=vs.85).aspx explains how the different options interact.
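If you can feed an already-chosen port to each server instance, a small sketch of the general mechanism: bind port 0, then read back the port the OS picked with getsockname(). (Whether this actually helps depends on whether you can hand the discovered port, or the socket itself, to the library; closing the probe socket before the server binds reopens the race.)

#include <netinet/in.h>
#include <sys/socket.h>

// Bind to port 0, then ask which port the OS actually chose.
unsigned short bindAnyFreePort(int fd) {
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = 0;                        // 0 = "pick a free port for me"
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) return 0;

    socklen_t len = sizeof(addr);
    getsockname(fd, reinterpret_cast<sockaddr*>(&addr), &len);
    return ntohs(addr.sin_port);              // the port the OS assigned
}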
You haven't given us nearly enough information to tell us exactly what's going on in your use case, but I can work through one arbitrary use case that would look like what you're seeing.
Let's say you're on Win 2003 or later, and your primary NIC is 10.0.0.1, and everything is running under the same user account.
The first instance of your app comes up, and your test code tries to bind 10.0.0.1:12345 with SO_EXCLUSIVEADDRUSE. Of course this works.
You close the socket, then tell the HTTP server to listen to port 12345. It binds 0.0.0.0:12345 with SO_REUSEADDR, which of course works.
Now a second instance of your app comes up, and your test code tries to bind 10.0.0.1:12345 with SO_EXCLUSIVEADDRUSE. According to the chart in the MSDN article, that works.
You close the socket, then tell the HTTP server to listen to port 12345. It binds 0.0.0.0:12345 with SO_REUSEADDR, which works.
If this is the problem, assuming you can't get the HTTP server to bind a specific address, you can solve things by using 0.0.0.0 in your test code. (Of course if it's one of the other hundreds of possible problems, that solution won't work.)
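Under that assumption, the test bind might look roughly like this (Winsock sketch with SO_EXCLUSIVEADDRUSE on the wildcard address; WSAStartup() is assumed to have been called already and the function name is made up):

#include <winsock2.h>
// link with Ws2_32.lib

// Probe whether anything is already bound to 0.0.0.0:port.
bool portLooksFree(unsigned short port) {
    SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
    if (s == INVALID_SOCKET) return false;

    BOOL exclusive = TRUE;
    setsockopt(s, SOL_SOCKET, SO_EXCLUSIVEADDRUSE,
               reinterpret_cast<const char*>(&exclusive), sizeof(exclusive));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);   // 0.0.0.0, to collide with the server's wildcard bind
    addr.sin_port = htons(port);

    bool looksFree = (bind(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0);
    closesocket(s);
    return looksFree;
}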
If you don't know what socket options, address, etc. the HTTP server is using, and don't have the source, just run it in the debugger and breakpoint the relevant calls.

Accepting Sockets Only From Specific IPs

My game server currently accepts connections from everyone. But how can I block specific IPs to protect against attacks? You know, in case someone wants to crash my server or something. I'm using the SFML library with C++.
With TCP, when your program (or the library you are using) calls accept(2), the second argument is an output which tells you the client's address.
With UDP there are no connections, but there is recvfrom(2), which just like accept(2), gives you the peer's address. So you can ignore the peers you don't like.
Or you can run your server behind some sort of firewall appliance and add rules there, or use iptables or similar as a software firewall on the host machine.
In SFML you have SocketTCP::Accept and SocketUDP::Receive, both of which will give you the peer's address if you pass an empty address as an argument.
I don't know of any specific method for blocking some IPs, but you can certainly reject a request (probably a connection request in your case) after determining the originating IP. Maintain a list of blocked (blacklisted) IPs, make it configurable for easier additions and deletions, and reject any request that comes from one of the blacklisted IPs.
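At the plain-sockets level (underneath SFML), that check could look like this sketch (IPv4, POSIX calls; g_blockedIps is a made-up stand-in for your configurable blacklist):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <set>
#include <string>

std::set<std::string> g_blockedIps = { "203.0.113.5" };   // example entries, load from config

// Accept, then immediately drop peers whose IP is blacklisted.
int acceptUnlessBlocked(int listenFd) {
    sockaddr_in peer{};
    socklen_t len = sizeof(peer);
    int fd = accept(listenFd, reinterpret_cast<sockaddr*>(&peer), &len);
    if (fd < 0) return -1;

    char ip[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof(ip));
    if (g_blockedIps.count(ip)) {
        close(fd);    // reject the connection right away
        return -1;
    }
    return fd;
}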
Under Windows, WinSock 2.x has an optional callback parameter for WSAAccept() that can be used to conditionally accept/reject connection requests. Client IP/Port is one of the available parameters.

TCP simultaneous open and self connect prevention

The TCP standard has a "simultaneous open" feature.
One implication of this feature is that a client trying to connect to a local port can, when that port is in the ephemeral range, occasionally connect to itself (see here).
So the client thinks it's connected to the server, while it is actually connected to itself. On the other side, the server cannot open its port, since that port has been occupied/stolen by the client.
I'm using RHEL 5.3, and my clients constantly try to connect to a local server.
Eventually a client connects to itself.
I want to prevent this situation. I see two possible solutions to the problem:
Don't use ephemeral ports for server ports; agree on an ephemeral port range and configure it on your machines (see ephemeral range).
Check the connection after connect(), as somebody proposed here.
What do you think?
How do you handle this issue?
P.S. 1
Besides the solution, which I'm obviously looking for, I'd also like you to share your real-life experience with the problem.
When I found the cause of the problem, I was "astonished" that people at my workplace were not familiar with it. Polling a server by connecting to it periodically is, IMHO, common practice, so how is it that the problem is not commonly known?
When I stumbled into this I was flabbergasted. I could figure out that the outgoing port number accidentally matches the incoming port number, but not why the TCP handshake (SYN, SYN-ACK, ACK) would succeed. Ask yourself: who is sending the ACK if there is nobody doing a listen() and accept()? The answer is the connecting socket itself: its SYN loops back to the very socket sitting in the SYN-SENT state, which is exactly TCP's simultaneous-open case, so the socket replies with SYN-ACK and completes the handshake with itself.
Both Linux and FreeBSD show this behavior.
Anyway, one solution is to stay out of the high range of port numbers for servers.
I noticed that Darwin side-steps this issue by not allowing the outgoing port to be the same as the destination port. They must have been bitten by this as well...
An easy way to show this effect is as follows:
while true
do
telnet 127.0.0.1 50000
done
And wait for a minute or so and you will be chatting with yourself...
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
hello?
hello?
Anyway, it makes good job interview material.
Bind the client socket to port 0 (so the system assigns a port), then check which port was assigned; if it matches the local server port, you already know the server is down and can skip connect().
For the server, you need to bind() the socket to its port anyway. Once a socket has been bound to an addr:port pair, that pair will no longer be handed out for implicit binding in connect().
No problem, no trouble.
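A rough sketch of that idea (IPv4 loopback, blocking calls, most error handling omitted; serverPort stands for whatever fixed port your local server uses, and the function name is made up):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Bind the client socket first; if the OS hands us the server's own port,
// the server cannot be up, so don't even try to connect (avoids self-connect).
int connectToLocalServer(unsigned short serverPort) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = 0;                               // let the OS assign a port
    bind(fd, reinterpret_cast<sockaddr*>(&local), sizeof(local));

    socklen_t len = sizeof(local);
    getsockname(fd, reinterpret_cast<sockaddr*>(&local), &len);
    if (ntohs(local.sin_port) == serverPort) {        // we got the server's port: it is down
        close(fd);
        return -1;
    }

    sockaddr_in srv{};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(serverPort);
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);
    if (connect(fd, reinterpret_cast<sockaddr*>(&srv), sizeof(srv)) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}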
Note that this solution is theoretical and I have not tested it myself. I've not experienced the problem before (or did not realize it), and hopefully I won't experience it in the future.
I'm assuming that you cannot edit either the client source code or the server source. Additionally, I'm assuming the real problem is that the server cannot start.
Launch the server from a starter application. If the target port that the server will bind to is being used by any process, create an RST (reset) packet using raw sockets.
The post below briefly describes what an RST packet is (taken from http://forum.soft32.com/linux/killing-socket-connection-cmdline-ftopict473059.html)
You have to look at a "raw socket" packet generator.
And you have to be superuser.
You probably need a network sniffer as well.
http://en.wikipedia.org/wiki/Raw_socket
http://kerneltrap.org/node/3072 - TCP RST attacks
http://search.cpan.org/dist/Net-RawIP/lib/Net/RawIP.pm - a Perl module
http://mixter.void.ru/rawip.html - raw IP in C
In the C version, you want a TH_RST packet.
RST is designed to handle the following case.
A and B establish a connection.
B reboots, and forgets about this.
A sends a packet to B to port X from port Y.
B sends a RST packet back, saying "what are you talking about? I don't
have a connection with you. Please close this connection down."
So you have to know/fake the IP address of B, and know both ports X
and Y. One of the ports will be the well known port number. The other
you have to find out. I think you also need to know the sequence
number.
Typically people do this with a sniffer. You could use a switch with a
packet mirroring function, or run a sniffer on either host A or B.
As a note, Comcast did this to disable P2P traffic.
http://www.eff.org/wp/packet-forgery-isps-report-comcast-affair
In our case we don't need to use a sniffer since we know the information below:
So you have to know/fake the IP address of B, and know both ports X
and Y
X = Y and B's IP address is localhost
The tutorial at http://mixter.void.ru/rawip.html describes how to use raw sockets.
NOTE that any other process on the system might also steal our target port from the ephemeral pool (e.g. Mozilla Firefox). This solution will not work for that type of connection, since X != Y and B's IP address is not localhost but something like 192.168.1.43 on eth0. In that case you might use netstat to retrieve X, Y and B's IP address and then craft the RST packet accordingly.
Hmm, that is an odd problem. If you have a client/server pair on the same machine, and it will always be on the same machine, perhaps shared memory, a Unix domain socket, or some other form of IPC is a better choice.
Another option would be to run the server on a fixed port and the client on a fixed source port. Say the server runs on 5000 and the client on 5001. You do have the issue of failing to bind either of these if something else is already bound to them.
You could run the server on an even port number and force the client to an odd port number. Pick a random number in the ephemeral range, OR it with 1, and then call bind() with that. If bind() fails with EADDRINUSE then pick a different odd port number and try again.
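A hedged sketch of the client half of that trick (IPv4; the 32768-61000 range is the usual Linux default and is an assumption here, and the server would bind an even port in the same way):

#include <cerrno>
#include <cstdlib>
#include <netinet/in.h>
#include <sys/socket.h>

// Bind the client to a random *odd* port so it can never collide with an even server port.
bool bindToOddEphemeralPort(int fd) {
    for (int attempts = 0; attempts < 100; ++attempts) {
        unsigned short port = static_cast<unsigned short>(32768 + rand() % (61000 - 32768));
        port |= 1;                                   // force the port to be odd

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0)
            return true;
        if (errno != EADDRINUSE)                     // unexpected failure: give up
            return false;
        // EADDRINUSE: pick a different odd port and retry
    }
    return false;
}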
This option isn't actually implemented in most TCPs. Do you have an actual problem?
That's an interesting issue! If you're mostly concerned that your server is running, you could always implement a heartbeat mechanism in the server itself to report status to another process. Or you could write a script to check and see if your server process is running.
If you're concerned more about the actual connection to the server being available, I'd suggest moving your client to a different machine. This way you can verify that your server at least has some network connectivity.
In my opinion, this is a bug in the TCP spec; listening sockets shouldn't be able to send unsolicited SYNs, and receiving a SYN (rather than a SYN+ACK) after you've sent one should be illegal and result in a reset, which would quickly let the client close the unluckily-chosen local port. But nobody asked for my opinion ;)
As you say, the obvious answer is not to listen in the ephemeral port range. Another solution, if you know you'll be connecting to a local machine, is to design your protocol so that the server sends the first message, and have a short timeout on the client side for receiving that message.
The actual problem you are having seems to be that while the server is down, something else can use the ephemeral port you expect for your server as the source port for an outgoing connection. The detail of how that happens is separate to the actual problem, and it can happen in ways other than the way you describe.
The solution to that problem is to set SO_REUSEADDR on the socket. That will let you create a server on a port that has a current outgoing connection.
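That looks roughly like this sketch (POSIX, IPv4; the point is simply setting SO_REUSEADDR before the bind()):

#include <netinet/in.h>
#include <sys/socket.h>

// Allow the server to bind its port even while an old outgoing connection still holds it.
bool bindServerPort(int fd, unsigned short port) {
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    return bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0;
}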
If you really care about that port number, you can use operating-system-specific methods to stop it being allocated as an ephemeral port.