I have to write a server that has to accept clients on the ports that they have specified.
Example: A wants to connect on port 1337, so the server listens on port 1337; B on 1992, so the server listens on 1992; and so on.
I don't know how to handle this.
Should I make a system like this:
- All clients connect on the same port: XXXX;
- The client's first packet specifies the port it wants;
- The server binds / listens / accepts on the new port;
- The server answers the client that it's OK;
- The client stops connecting on port XXXX and starts connecting on the new port.
I don't know if this kind of system is good, but I can't figure out how else to do it.
Thank you, Florian
You'll want some kind of master process or central table that keeps the room to port mappings. You'll need clients to either connect to the master process or to some kind of "entry room" to get connected in the first place. Then, when they move from room to room, just look up the port they need to connect to, and refer them to the next port. All the central information can be kept in a database, if desired.
You'll need to have at least one standard port open for clients to connect to if they're to communicate their requests about other ports. It doesn't have to be obviously open; it can just quietly accept UDP packets, for instance, as some SSH-hiding systems do. It can also work with TCP if you're willing to produce some kind of response that the new port is bound and ready.
In any case, listening on multiple ports is not especially tricky if you have the right framework. libevent is an example of how you might get started.
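For illustration, a minimal libevent sketch (the two port numbers are just the examples from the question, and error handling is omitted) could look like this:

// One event loop accepting connections on several ports at once.
#include <event2/event.h>
#include <event2/listener.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <stdio.h>

static void on_accept(struct evconnlistener *, evutil_socket_t fd,
                      struct sockaddr *, int, void *)
{
    printf("accepted a connection on fd %d\n", (int)fd);
    evutil_closesocket(fd);   // a real server would hand the fd to a bufferevent instead
}

int main()
{
    struct event_base *base = event_base_new();
    int ports[] = { 1337, 1992 };                       // example ports from the question

    for (int port : ports) {
        struct sockaddr_in sin;
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(port);
        evconnlistener_new_bind(base, on_accept, NULL,
                                LEV_OPT_CLOSE_ON_FREE | LEV_OPT_REUSEABLE, -1,
                                (struct sockaddr *)&sin, sizeof(sin));
    }

    event_base_dispatch(base);   // one loop services every listener
    return 0;
}

Every listener shares the same event_base, so adding another port is just another evconnlistener_new_bind() call.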
It does seem odd that you'd have a standard port open as well as an unknown number of dynamic ports, though. When do you close these down? Do they time-out eventually? Is the listen call only short-term by nature?
I want to write a simple program in C++ that uses a TCP socket to communicate with the same program on another computer in the LAN.
To create the TCP socket I could make the user type in the IP and the port to make the connection. But I also need to be able to auto-detect whether any computer in the local area network is also running the program.
My idea was:
When the program is auto-detecting available connections in the LAN, it will send a message via UDP to a specific port on all IPs, and meanwhile it will also keep listening on a port, waiting for an eventual answer.
When the program on the other computer is open for LAN connections, it will keep listening on a port in case another computer is trying to detect it, and will then send the response message (also via UDP) notifying the other side that a connection is possible.
The whole security system is another problem, for which I don't need an answer now.
// Client 1:
// Search for all ips in local network
// create udp socket
// send check message
// thread function listening for answers
// if device found then show it in the menu
// continue searching process
// Client 2 (host) :
// user enable lan connection
// create udp socket
// thread function listening for detection requests
// if request structure is right send back identification message
// continue listening for request
My question - Is there a more efficient or standard way to do something like that?
Testing whether another computer is listening on a given port is what hackers do all day to try to take over the world...
When writing software like you describe, though, you want to specify the IP and port information. A reason to search for and automatically find a device would be if you are implementing a printer, for example. In that case, as suggested by Hero, you could use broadcasting. However, in that case, you use UDP (because TCP does not support that feature).
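As a rough sketch of the discovery probe (POSIX sockets; the port 45454 and the message strings are arbitrary placeholders, not anything the question defines):

// Broadcast a probe and wait briefly for any answer on the LAN.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>
#include <stdio.h>

int main()
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(s, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));   // allow broadcast sends

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(45454);                          // placeholder discovery port
    dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);        // 255.255.255.255

    const char probe[] = "DISCOVER";
    sendto(s, probe, sizeof(probe), 0, (sockaddr *)&dst, sizeof(dst));

    timeval tv{2, 0};                                     // give peers two seconds to answer
    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    char buf[128];
    sockaddr_in from{};
    socklen_t len = sizeof(from);
    ssize_t n = recvfrom(s, buf, sizeof(buf) - 1, 0, (sockaddr *)&from, &len);
    if (n > 0) {
        buf[n] = '\0';
        printf("peer %s answered: %s\n", inet_ntoa(from.sin_addr), buf);
    }
    close(s);
    return 0;
}

The host side is the mirror image: bind a UDP socket to the same placeholder port, recvfrom() the probe, and sendto() a reply to whatever address recvfrom() reported.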
The software on one side must have a server, which in TCP parlance means a listen() call followed by an accept() until a connection materializes.
The client on the other side can then attempt a connect(). If the connect works, then the software on the other side is up and running.
If you need both sides to be able to attempt a connection, then both must implement the client and the server (which is doable: if you use ppoll() [or the old select()] you know which event is happening and can act on it, with no need for threads or fork()).
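A rough sketch of that single-process idea with plain poll() (the port and peer address are placeholders, error handling is omitted, and SOCK_NONBLOCK is Linux-specific):

// One process that both listens and tries to connect, multiplexed with poll().
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <poll.h>
#include <unistd.h>
#include <stdio.h>

int main()
{
    // Server half: listen on our own port.
    int lst = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in me{};
    me.sin_family = AF_INET;
    me.sin_port = htons(5555);                          // placeholder port
    me.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lst, (sockaddr *)&me, sizeof(me));
    listen(lst, 5);

    // Client half: non-blocking connect attempt to the peer.
    int cli = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
    sockaddr_in peer{};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5555);
    inet_pton(AF_INET, "192.168.1.50", &peer.sin_addr); // placeholder peer address
    connect(cli, (sockaddr *)&peer, sizeof(peer));      // usually returns with EINPROGRESS

    pollfd fds[2] = { { lst, POLLIN, 0 }, { cli, POLLOUT, 0 } };
    while (poll(fds, 2, -1) > 0) {
        if (fds[0].revents & POLLIN) {
            int conn = accept(lst, nullptr, nullptr);   // the peer reached us first
            printf("accepted incoming connection %d\n", conn);
        }
        if (fds[1].revents & POLLOUT) {
            // A real program would check SO_ERROR here to see whether connect() succeeded.
            printf("outgoing connect finished\n");
            fds[1].events = 0;                          // stop watching the client socket
        }
    }
    return 0;
}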
On my end, I wrote the eventdispatcher library to do all those things under the hood. I also want many computers to communicate with each other, so I have a form of RPC service I call communicatord. This service is at the same time a client and a server. It listens on a port and tries to connect to other systems. If the other system has a lower IP address, it is considered a server. Otherwise, it is viewed as a client and I disconnect after sending a GOSSIP message. That way the client (the larger IP address) can in turn connect to the server. This communicator service allows all my other services to communicate without having to re-implement the communication layer between computers over and over again.
I have a third-party library that acts as an HTTP server. I pass it an address and port, which it then uses to listen for incoming connections. This library listens in such a way that it doesn't receive exclusive usage of the port and address it's bound to. As a result, I can listen on the same port multiple times.
I need to run multiple instances of this HTTP server in the same process. Each instance has a default port, but if that port isn't available, it should use the next available port. This is where my problem is; I can end up with two HTTP servers listening on the same port.
I cannot change the HTTP server's code and the HTTP server will not alert me if it cannot listen on the port I give it, so I have to be able to check if a port is already in use before starting each HTTP server. I have tried checking if a port is already being listened on by binding my own socket with SO_REUSEADDR set to FALSE and SO_EXCLUSIVEADDRUSE set to TRUE, but the bind and listen calls both succeed when an existing HTTP server is already listening on that port.
How is this HTTP server achieving this effect, and how can I accurately check if a port is being listened on in this manner?
The quick and dirty method would be to try to connect() to the port on localhost. If the connect() call succeeds, then you know the port is currently being listened on (by whomever received the connection). If the connect call fails (in particular with ECONNREFUSED) then you can be pretty sure that nobody is listening on that port.
Of course, there's a race condition here: Nothing is really stopping another program from swooping in and grabbing the port immediately after you ran the above test, but before you get around to binding to the port yourself. So you should take the result of the test as more of a hint than an absolute rule, and (hopefully) have some way of handling it if you later find out that the port is in use after all.
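A sketch of that probe (POSIX sockets; the helper name is made up for illustration, and the Winsock version differs only in headers and closesocket()). As noted above, treat the result as a hint:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <errno.h>

// Returns true if something currently accepts connections on 127.0.0.1:port.
bool port_seems_listened_on(unsigned short port)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    int rc = connect(s, (sockaddr *)&addr, sizeof(addr));
    int saved_errno = errno;              // close() may overwrite errno
    close(s);
    if (rc == 0)
        return true;                      // someone accepted the connection
    return saved_errno != ECONNREFUSED;   // refused: almost certainly nobody is listening
}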
Use a port number of 0. The OS will pick a free port.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms740621(v=vs.85).aspx explains how the different options interact.
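For illustration, asking for port 0 and then reading back what the OS picked looks roughly like this (POSIX-style sketch; Winsock is equivalent apart from the headers and startup calls):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <stdio.h>

int main()
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(0);                     // 0 = let the OS choose a free port
    bind(s, (sockaddr *)&addr, sizeof(addr));

    socklen_t len = sizeof(addr);
    getsockname(s, (sockaddr *)&addr, &len);      // read back the assigned port
    printf("OS assigned port %u\n", ntohs(addr.sin_port));
    return 0;
}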
You haven't given us nearly enough information to tell us exactly what's going on in your use case, but I can work through one arbitrary use case that would look like what you're seeing.
Let's say you're on Win 2003 or later, and your primary NIC is 10.0.0.1, and everything is running under the same user account.
The first instance of your app comes up, and your test code tries to bind 10.0.0.1:12345 with SO_EXCLUSIVEADDRUSE. Of course this works.
You close the socket, then tell the HTTP server to listen to port 12345. It binds 0.0.0.0:12345 with SO_REUSEADDR, which of course works.
Now a second instance of your app comes up, and your test code tries to bind 10.0.0.1:12345 with SO_EXCLUSIVEADDRUSE. According to the chart in the MSDN article, that works.
You close the socket, then tell the HTTP server to listen to port 12345. It binds 0.0.0.0:12345 with SO_REUSEADDR, which works.
If this is the problem, assuming you can't get the HTTP server to bind a specific address, you can solve things by using 0.0.0.0 in your test code. (Of course if it's one of the other hundreds of possible problems, that solution won't work.)
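If that is what is happening, the probe could look roughly like this (Winsock sketch; link against ws2_32, the helper name is made up, and this only reproduces the wildcard-vs-wildcard case from the MSDN table):

#include <winsock2.h>
#include <ws2tcpip.h>

// Try an exclusive bind on 0.0.0.0:port; a failed bind means a wildcard listener already exists.
bool wildcard_port_is_free(unsigned short port)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
    BOOL exclusive = TRUE;
    setsockopt(s, SOL_SOCKET, SO_EXCLUSIVEADDRUSE,
               (const char *)&exclusive, sizeof(exclusive));

    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);    // 0.0.0.0, not a specific NIC address
    addr.sin_port = htons(port);

    bool free_port = (bind(s, (sockaddr *)&addr, sizeof(addr)) == 0);
    closesocket(s);
    WSACleanup();
    return free_port;
}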
If you don't know what socket options, address, etc. the HTTP server is using, and don't have the source, just run it in the debugger and breakpoint the relevant calls.
I have a C++ application which accepts TCP connections and then reads the traffic sent to it. It has worked very well until I moved it to a new machine. It seems like winsock never accepts the inbound tcp connection. In my code it never returns from the select statement. I can see using netstat/tcpview that the application is listening on port 14005.
I can connect to this port if I just telnet in locally. However, when someone tries to connect in via an outside IP address, the TCP 3-way handshake never finishes. I can see the inbound SYN packet in Wireshark. It is going to the correct port, 14005. However, my system never sends the SYN-ACK back. This is just something that winsock is supposed to handle, right? The machine does have multiple NIC cards, but I'm binding with INADDR_ANY so this shouldn't matter. Is there some way I can dig deeper to see why this handshake never takes place?
As for ways to dig deeper: nothing beyond Wireshark / tshark (which you already use); however, if you want to play with packets, look at scapy.
What happens if you reduce the headache: use only one NIC and network, put the client on the same network (i.e., no router or smart switch in between), and (as a last resort) disable unneeded network services?
I have both a client and server application using UDP port 25565.
Since only one application may bind itself to port 25565, does running these on the same machine mean that it is necessary for me to use two separate ports for transmitting data between the applications?
What I have in mind is the following -
Client -> 25565 -> Server
Client <- 25566 <- Server
Is this the only solution or is there another way of handling this?
Your server application opens a port and waits for clients to connect.
The client needs to know this port in advance so it can establish a connection to the desired service.
The client can use any available port to initiate this connection (better to use ports > 1024).
The server sees in the incoming packet which port the client is using, so it will send the answer to it. There is no need to specify it in your design.
After handshake the TCP/IP connection is then identified by these 4 values : server IP, server port, client IP, client port.
No other connection could have the same four values.
To answer your question: a TCP/IP connection is bi-directional; once established, the server can send data to the client and the other way around.
I would draw the scheme like this :
SERVER port 25565 <-> CLIENT port 25566 (or any other port)
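In code, the server side of that scheme is roughly the following (UDP sketch; the buffer size is arbitrary and error handling is omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main()
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in me{};
    me.sin_family = AF_INET;
    me.sin_addr.s_addr = htonl(INADDR_ANY);
    me.sin_port = htons(25565);                      // the one well-known port
    bind(s, (sockaddr *)&me, sizeof(me));

    char buf[512];
    for (;;) {
        sockaddr_in client{};
        socklen_t len = sizeof(client);
        ssize_t n = recvfrom(s, buf, sizeof(buf), 0, (sockaddr *)&client, &len);
        if (n < 0)
            continue;
        // 'client' now holds the sender's IP and ephemeral port;
        // replying to it needs no second well-known port.
        sendto(s, buf, n, 0, (sockaddr *)&client, len);
    }
}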
Well, no. Only the server needs to listen on port 25565; the client will just connect to that port. There is no reason to specify which port the client should 'use' to connect. Also, once the server has accepted the connection, the port can keep listening for other requests.
The whole point of separate UDP ports is to eliminate conflicts among applications listening to incoming packets. Changing one of these ports is probably the best solution.
However, if you really want both programs to listen on the same port you will need to use virtual network interfaces such as TUN/TAP (there is a Windows port). Then both applications will bind to the port with the same number but on different network interfaces.
The TCP standard has a "simultaneous open" feature.
The implication of the feature is that a client trying to connect to a local port, when the port is from the ephemeral range, can occasionally connect to itself (see here).
So the client thinks it's connected to the server, while it is actually connected to itself. On the other side, the server cannot open its server port, since it's occupied/stolen by the client.
I'm using RHEL 5.3 and my client constantly tries to connect to the local server.
Eventually the client connects to itself.
I want to prevent the situation. I see two possible solutions to the problem:
- Don't use ephemeral ports for server ports: agree on an ephemeral port range and configure it on your machines (see ephemeral range).
- Check connect() as somebody proposed here.
What do you think?
How do you handle the issue?
P.S. 1
Apart from the solution, which I'm obviously looking for, I'd like you to share your real-life experience with the problem.
When I found the cause of the problem, I was "astonished" that people at my workplace were not familiar with it. Polling a server by connecting to it periodically is IMHO common practice, so how is it that the problem is not commonly known?
When I stumbled into this I was flabbergasted. I could figure out that the outgoing
port number accidentally matches the incoming port number, but not why the TCP
handshake (SYN SYN-ACK ACK) would succeed (ask yourself: who is sending the ACK if
there is nobody doing a listen() and accept()???)
Both Linux and FreeBSD show this behavior.
Anyway, one solution is to stay out of the high range of port numbers for servers.
I noticed that Darwin side-steps this issue by not allowing the outgoing port
to be the same as the destination port. They must have been bitten by this as well...
An easy way to show this effect is as follows:
while true
do
telnet 127.0.0.1 50000
done
And wait for a minute or so and you will be chatting with yourself...
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
hello?
hello?
Anyway, it makes good job interview material.
Bind the client socket to port 0 (the system assigns a port), check the system-assigned port, and if it matches the local server port you already know the server is down and can skip connect().
For the server you need to bind() the socket to its port. Once an addr:port pair has a socket bound to it, it will no longer be used for implicit binding in connect().
No problem, no trouble.
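A sketch of that check (POSIX sockets; the function name is made up, and the server port is whatever value you poll):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Returns a connected socket, or -1 if the local server is down (or unreachable).
int connect_to_local_server(unsigned short server_port)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(0);                        // explicit bind, port chosen by the OS
    bind(s, (sockaddr *)&local, sizeof(local));

    socklen_t len = sizeof(local);
    getsockname(s, (sockaddr *)&local, &len);
    if (ntohs(local.sin_port) == server_port) {       // we were handed the server's port,
        close(s);                                     // so the server cannot be listening
        return -1;
    }

    sockaddr_in srv{};
    srv.sin_family = AF_INET;
    srv.sin_addr.s_addr = htonl(INADDR_LOOPBACK);     // 127.0.0.1
    srv.sin_port = htons(server_port);
    if (connect(s, (sockaddr *)&srv, sizeof(srv)) != 0) {
        close(s);
        return -1;
    }
    return s;
}

Because the socket is explicitly bound before connect(), the self-connect case cannot occur at all, which is the point about implicit binding made above.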
Note that this solution is theoretical and I have not tested it on my own. I've not experienced it before (or did not realize) and hopefully I won't experience it anymore.
I'm assuming that you can edit neither the client source code nor the server source. Additionally, I'm assuming that the real problem is that the server cannot start.
Launch the server with a starter application. If the target port that the server will bind to is being used by any process, create an RST (reset packet) by using raw sockets.
The post below briefly describes what an RST packet is (taken from http://forum.soft32.com/linux/killing-socket-connection-cmdline-ftopict473059.html)
You have to look at a "raw socket" packet generator.
And you have to be superuser.
You probably need a network sniffer as well.
http://en.wikipedia.org/wiki/Raw_socket
http://kerneltrap.org/node/3072 - TCP RST attacks
http://search.cpan.org/dist/Net-RawIP/lib/Net/RawIP.pm - a Perl module
http://mixter.void.ru/rawip.html - raw IP in C
In the C version, you want a TH_RST packet.
RST is designed to handle the following case.
A and B establish a connection.
B reboots, and forgets about this.
A sends a packet to B to port X from port Y.
B sends a RST packet back, saying "what are you talking about? I don't
have a connection with you. Please close this connection down."
So you have to know/fake the IP address of B, and know both ports X
and Y. One of the ports will be the well known port number. The other
you have to find out. I think you also need to know the sequence
number.
Typically people do this with a sniffer. You could use a switch with a
packet mirroring function, or run a sniffer on either host A or B.
As a note, Comcast did this to disable P2P traffic.
http://www.eff.org/wp/packet-forgery-isps-report-comcast-affair
In our case we don't need to use a sniffer since we know the information below:
So you have to know/fake the IP address of B, and know both ports X
and Y
X = Y and B's IP address is localhost
The tutorial at http://mixter.void.ru/rawip.html describes how to use raw sockets.
NOTE that any other process on the system (e.g. Mozilla Firefox) might also steal our target port from the ephemeral pool. This solution will not work on that type of connection, since X != Y and B's IP address is not localhost but something like 192.168.1.43 on eth0. In this case you might use netstat to retrieve X, Y and B's IP address and then create an RST packet accordingly.
Hmm, that is an odd problem. If you have a client / server on the same machine and it will always be on the same machine perhaps shared memory or a Unix domain socket or some other form of IPC is a better choice.
Other options would be to run the server on a fixed port and the client on a fixed source port. Say, the server runs on 5000 and the client runs on 5001. You do have the issue of binding to either of these if something else is bound to them.
You could run the server on an even port number and force the client to an odd port number. Pick a random number in the ephemeral range, OR it with 1, and then call bind() with that. If bind() fails with EADDRINUSE then pick a different odd port number and try again.
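A sketch of the odd-port client bind (the function name is made up, and the 32768-60767 range is just an example ephemeral range):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <errno.h>
#include <stdlib.h>

// Returns a socket bound to a random odd local port, ready for connect(), or -1.
int make_odd_port_client_socket()
{
    for (int attempt = 0; attempt < 100; ++attempt) {
        unsigned short port = (32768 + rand() % 28000) | 1;   // random odd port

        int s = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in local{};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(port);

        if (bind(s, (sockaddr *)&local, sizeof(local)) == 0)
            return s;                     // an even-numbered server port can never collide
        close(s);
        if (errno != EADDRINUSE)
            return -1;                    // some other error: give up
    }
    return -1;
}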
This option isn't actually implemented in most TCPs. Do you have an actual problem?
That's an interesting issue! If you're mostly concerned that your server is running, you could always implement a heartbeat mechanism in the server itself to report status to another process. Or you could write a script to check and see if your server process is running.
If you're concerned more about the actual connection to the server being available, I'd suggest moving your client to a different machine. This way you can verify that your server at least has some network connectivity.
In my opinion, this is a bug in the TCP spec; listening sockets shouldn't be able to send unsolicited SYNs, and receiving a SYN (rather than a SYN+ACK) after you've sent one should be illegal and result in a reset, which would quickly let the client close the unluckily-chosen local port. But nobody asked for my opinion ;)
As you say, the obvious answer is not to listen in the ephemeral port range. Another solution, if you know you'll be connecting to a local machine, is to design your protocol so that the server sends the first message, and have a short timeout on the client side for receiving that message.
The actual problem you are having seems to be that while the server is down, something else can use the ephemeral port you expect for your server as the source port for an outgoing connection. The detail of how that happens is separate to the actual problem, and it can happen in ways other than the way you describe.
The solution to that problem is to set SO_REUSEADDR on the socket. That will let you create a server on a port that has a current outgoing connection.
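A sketch of that (POSIX sockets; the helper name is made up):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Create a listener that can start even while an old connection still holds the port.
int make_listener(unsigned short port)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));  // must be set before bind()

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(s, (sockaddr *)&addr, sizeof(addr)) != 0) {
        close(s);
        return -1;
    }
    listen(s, 16);
    return s;
}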
If you really care about that port number, you can use operating-system-specific methods to stop it being allocated as an ephemeral port.