I have a server application with a listening socket opened on a specific IP port.
How can I restrict the socket so that it accepts incoming connections from only one specified IP address?
You'll have to either use some firewall software to restrict incoming requests to that port, or shut down accepted connections that you do not want to service (based on the socket address returned by accept).
There might be libraries out there that do that for you, but the socket API doesn't have anything to do it automatically.
When you accept a connection you can examine the sockaddr returned by accept to see whether it came from the right address. If not, you immediately close the connected socket that accept returned.
You have to accept the connection with accept(), then close it if you don't want it (perhaps sending an error response if your protocol supports this). This is good enough for most applications.
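Below is a minimal sketch of that accept-then-check approach (POSIX sockets, IPv4); the allowed address "203.0.113.5" is just a placeholder:

/* Accept connections and only keep the one coming from the allowed IP. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int serve_one_peer_only(int listen_fd, const char *allowed_ip)
{
    struct in_addr allowed;
    inet_pton(AF_INET, allowed_ip, &allowed);        /* e.g. "203.0.113.5" */

    for (;;) {
        struct sockaddr_in peer;
        socklen_t len = sizeof(peer);
        int fd = accept(listen_fd, (struct sockaddr *)&peer, &len);
        if (fd < 0)
            continue;                                /* or handle the error properly */

        if (peer.sin_addr.s_addr != allowed.s_addr) {
            close(fd);                               /* not the peer we want: drop it */
            continue;
        }
        return fd;                                   /* connection from the allowed IP */
    }
}

Note that the unwanted peer still completes the TCP handshake before being dropped; only a firewall can prevent that.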
Try libauth; it's a robust way of doing access control: http://linux.die.net/man/3/libauth
I want to write a TCP server and client application that has several different connections between them, where the client uses the same port number.
As far as I understand it, the server has a listener port, and when the client connects to it, I get a new socket for this new connection on the server side when I call
accept();
Right? So on the server side I can identify my connection with this new socket and send data through it.
Now for my understanding problem with the client side. There I get my socket when I call
socket(AF_INET, SOCK_STREAM, 0)
so I have only one socket. In the
connect()
I can specify the remote address and so on. So if I understand it correctly, I can use one socket to make several connects to different address/port pairs to create different connections. Right?
But how can I now see in the client from which logical connection I receive my data, or how can I send it, when 2 logical connections use the same local port at the client? On the server side I have 2 sockets when accept has been called twice, but what about the client side? For send and receive I have only one socket handle?
Or do I have to call socket() for each logical connection on the client?
I can specify the remote address and so on. So if I understand it correctly, I can use one socket to make several connects to different address/port pairs to create different connections. Right?
No. A socket is the combination of IP address plus port number.
Or do I have to call socket() for each logical connection on the client?
Yes.
It seems to me your confusion arises because you think for example that a certain port is used for SMTP connections and a certain port is used for HTTP connections.
Well, that port alone does NOT define a socket to the server. The IP address of the server changes as well.
As an example, consider the following scenario:
You want to connect to Stack Overflow:
Your PC – IP1+port 50500 ——– Stackoverflow IP2 + port 80 (standard http port)
That is the combination IP1+50500 = the socket on the client computer and IP2 + port 80 = destination socket on the Stackoverflow server.
Now you want to connect to gnu.org:
your PC – IP1+port 50501 ——– gnu.org IP3 + port 80 (standard http port)
The combination IP1+50501 = the socket on the client computer and IP3 + port 80 = destination socket on the gnu.org server.
Better check out Beej's Network Programming to learn more. It is a must-read for anyone working with sockets.
So if I understand it correctly, I can use one socket to make several connects to different address/port pairs to create different connections. Right?
No. A TCP socket can only be used once. When its connection has finished, or even if connect() just fails to make a connection, you must close the socket and create a new one if you want to make a new connection.
But how can I now see in the client from which logical connection I receive my data, or how can I send it, when 2 logical connections use the same local port at the client?
Every TCP connection will have its own unique socket allocated for it. It is your responsibility to keep track of them.
On the server side I have 2 sockets when accept has been called twice, but what about the client side?
The exact same thing happens on the client side, too. You need to create and connect a separate socket for every TCP connection you make. So, you will have a new pair of socket()/connect() calls for every connection.
For send and receive I have only one socket handle?
No, you will have a separate socket for each connection, just like on the server side.
Or do I have to call socket() for each logical connection on the client?
Yes, and connect(), too.
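To make this concrete, here is a rough client-side sketch (the server address 192.0.2.10 and the ports are made up): each logical connection gets its own socket() + connect() pair, and you send/receive on each connection through its own descriptor.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* One socket() + connect() per logical connection. */
static int open_connection(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in srv;
    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(port);
    inet_pton(AF_INET, ip, &srv.sin_addr);

    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    int conn_a = open_connection("192.0.2.10", 5000);   /* first logical connection */
    int conn_b = open_connection("192.0.2.10", 5001);   /* second logical connection */

    /* Each connection is identified by its own descriptor. */
    if (conn_a >= 0) send(conn_a, "hello A", 7, 0);
    if (conn_b >= 0) send(conn_b, "hello B", 7, 0);

    if (conn_a >= 0) close(conn_a);
    if (conn_b >= 0) close(conn_b);
    return 0;
}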
I will not talk about a specific programming language; rather, I will give a general answer that applies to all of them:
In networking, what you care about is the socket (IP + port). This should be unique, whether it is a server/client socket or a UDP/TCP socket.
For server sockets you must assign a port. For client sockets you usually do not assign a port explicitly; it is assigned by the operating system automatically. However, you can still assign a port to a client socket manually (e.g. in case some port numbers are blocked by a firewall).
In the server process:
you can get the server socket info and the connected client socket info
In the client process:
you can get the client socket info and the socket info of the server you want to connect to (of course you must know the server's socket info beforehand, otherwise how would you connect to it).
You can send/receive to/from client sockets. After the server gets the connected client socket, it can send/receive through it. The same goes for the client side: it can send/receive through its socket.
The "socket" abstraction is an unfortunate relic of past network stack design. It mixes two different sorts of objects.
A listening socket on the server has a port, and potentially an IP address of the local interface. However, this can also be 0.0.0.0 when listening on all interfaces.
A connected socket is associated with a TCP connection, and therefore has 4 parameters: {local IP, local port, remote IP, remote port}.
Now on the client side, you typically don't care about local IP or local port, so these are locally assigned on connect. And yes, these local parameters can in fact be reused for multiple connections. Only the 4-tuple of {local IP, local port, remote IP, remote port} needs to be unique. The OS will map that unique tuple to your SOCKET.
But since you need a new 4-tuple for every connection, it also follows you need a new SOCKET on both sides, for every connection, on both client and server.
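If you want to see the locally assigned half of that 4-tuple, you can ask the OS with getsockname() on an already connected descriptor; a small IPv4 sketch:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

/* Print the local IP:port the OS chose for a connected socket fd. */
void print_local_endpoint(int fd)
{
    struct sockaddr_in local;
    socklen_t len = sizeof(local);
    if (getsockname(fd, (struct sockaddr *)&local, &len) == 0) {
        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &local.sin_addr, ip, sizeof(ip));
        printf("local endpoint: %s:%u\n", ip, ntohs(local.sin_port));
    }
}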
I have a third party library that acts as a HTTP server. I pass it an address and port, which it then uses to listen for incoming connections. This library listens in such a way that it doesn't receive exclusive usage of the port and address it's bound to. As a result, I can listen on the same port multiple times.
I need to run multiple instances of this HTTP server in the same process. Each instance has a default port, but if that port isn't available, it should use the next available port. This is where my problem is; I can end up with two HTTP servers listening on the same port.
I cannot change the HTTP server's code and the HTTP server will not alert me if it cannot listen on the port I give it, so I have to be able to check if a port is already in use before starting each HTTP server. I have tried checking if a port is already being listened on by binding my own socket with SO_REUSEADDR set to FALSE and SO_EXCLUSIVEADDRUSE set to TRUE, but the bind and listen calls both succeed when an existing HTTP server is already listening on that port.
How is this HTTP server achieving this effect, and how can I accurately check if a port is being listened on in this manner?
The quick and dirty method would be to try to connect() to the port on localhost. If the connect() call succeeds, then you know the port is currently being listened on (by whoever received the connection). If the connect call fails (in particular with ECONNREFUSED) then you can be pretty sure that nobody is listening on that port.
Of course, there's a race condition here: Nothing is really stopping another program from swooping in and grabbing the port immediately after you ran the above test, but before you get around to binding to the port yourself. So you should take the result of the test as more of a hint than an absolute rule, and (hopefully) have some way of handling it if you later find out that the port is in use after all.
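A sketch of that probe (POSIX-style; on Windows you would call WSAStartup() first and use closesocket() and WSAGetLastError() with WSAECONNREFUSED instead):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* 1 = someone is listening on 127.0.0.1:port, 0 = refused (nobody listening), -1 = unknown. */
int probe_port(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    int result;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        result = 1;                     /* something accepted: port is being listened on */
    else if (errno == ECONNREFUSED)
        result = 0;                     /* actively refused: nobody listening */
    else
        result = -1;                    /* timeout, firewall, etc.: can't tell */

    close(fd);
    return result;
}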
Use a port number of 0. The OS will pick a free port.
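A small sketch of that idea: bind to port 0, then ask the OS with getsockname() which port it picked. You could then close this probe socket and hand the number to the HTTP server library, with the usual race-condition caveat that someone else might grab the port in between.

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Bind fd to an OS-chosen free port and return it in host byte order (0 on error). */
unsigned short bind_any_free_port(int fd)
{
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = 0;                                /* 0 = let the OS pick */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 0;

    socklen_t len = sizeof(addr);
    if (getsockname(fd, (struct sockaddr *)&addr, &len) < 0)
        return 0;
    return ntohs(addr.sin_port);
}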
http://msdn.microsoft.com/en-us/library/windows/desktop/ms740621(v=vs.85).aspx explains how the different options interact.
You haven't given us nearly enough information to tell us exactly what's going on in your use case, but I can work through one arbitrary use case that would look like what you're seeing.
Let's say you're on Win 2003 or later, and your primary NIC is 10.0.0.1, and everything is running under the same user account.
The first instance of your app comes up, and your test code tries to bind 10.0.0.1:12345 with SO_EXCLUSIVEADDRUSE. Of course this works.
You close the socket, then tell the HTTP server to listen to port 12345. It binds 0.0.0.0:12345 with SO_REUSEADDR, which of course works.
Now a second instance of your app comes up, and your test code tries to bind 10.0.0.1:12345 with SO_EXCLUSIVEADDRUSE. According to the chart in the MSDN article, that works.
You close the socket, then tell the HTTP server to listen to port 12345. It binds 0.0.0.0:12345 with SO_REUSEADDR, which works.
If this is the problem, assuming you can't get the HTTP server to bind a specific address, you can solve things by using 0.0.0.0 in your test code. (Of course if it's one of the other hundreds of possible problems, that solution won't work.)
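A hedged Winsock sketch of that test, assuming WSAStartup() has already been called: bind the wildcard address 0.0.0.0 with SO_EXCLUSIVEADDRUSE and treat a bind failure as "port taken".

#include <winsock2.h>
#include <string.h>

/* Returns 1 if 0.0.0.0:port could be bound exclusively (it looks free), else 0. */
int port_looks_free(unsigned short port)
{
    SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
    if (s == INVALID_SOCKET)
        return 0;

    BOOL exclusive = TRUE;
    setsockopt(s, SOL_SOCKET, SO_EXCLUSIVEADDRUSE,
               (const char *)&exclusive, sizeof(exclusive));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);         /* 0.0.0.0, not a specific NIC */
    addr.sin_port = htons(port);

    int ok = (bind(s, (struct sockaddr *)&addr, sizeof(addr)) == 0);
    closesocket(s);
    return ok;
}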
If you don't know what socket options, address, etc. the HTTP server is using, and don't have the source, just run it in the debugger and breakpoint the relevant calls.
I'm writing a client-server application and I need my server to find all clients on some network. I've already found some info here: Discovering clients on a wifi network, but I still don't understand how to implement this. Maybe somebody can say where I can find some code examples.
Thanks in advance.
P.S. Working in C++ on Windows.
Generally TCP/IP is used as the communication protocol between client and server. On the Windows platform the Winsock library is used to implement TCP/IP. The server binds and listens on a port for incoming connections, just like a web server such as Stack Overflow listens by default on port 80 and clients (browsers) connect to it.
Here is a link to get started, and here is a sample.
Normally all the clients connect to a server which listens on a well-defined port. There is only one server, hence its IP address and port are well known to all the clients, so they can connect to it.
In your case you want your server to have the ability to discover all the clients on the network. To achieve this the server needs to broadcast some message to the network. The clients will receive this message and will respond to the server that they are available at such-and-such an IP, and they can then connect to the server or provide additional information to it. Normally, instead of broadcast, multicast is used, which is a limited form of broadcast. All the clients and the server subscribe to the multicast group, which is a special kind of IP address. When the server sends a message to this multicast address, all the clients that are subscribers of this address will receive the message and can respond back. Here is a sample.
Edit: you can also use the Boost library to implement multicast: sender example, receiver example.
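A rough sketch of the broadcast variant (the discovery port 30000 and the "WHO_IS_THERE" probe are made up, and the clients are assumed to listen on that UDP port and reply to the sender; POSIX calls for brevity, the same flow works with Winsock):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

/* Broadcast a discovery probe and print every client that answers. */
int discover_clients(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    int yes = 1;
    setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

    struct timeval tv = { 2, 0 };                         /* don't wait forever for replies */
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    struct sockaddr_in bcast;
    memset(&bcast, 0, sizeof(bcast));
    bcast.sin_family = AF_INET;
    bcast.sin_port = htons(30000);                        /* made-up discovery port */
    bcast.sin_addr.s_addr = htonl(INADDR_BROADCAST);      /* 255.255.255.255 */

    const char probe[] = "WHO_IS_THERE";
    sendto(fd, probe, sizeof(probe), 0, (struct sockaddr *)&bcast, sizeof(bcast));

    for (;;) {
        char buf[256];
        struct sockaddr_in client;
        socklen_t len = sizeof(client);
        ssize_t n = recvfrom(fd, buf, sizeof(buf) - 1, 0, (struct sockaddr *)&client, &len);
        if (n < 0)
            break;                                        /* timeout or error: stop collecting */
        buf[n] = '\0';
        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &client.sin_addr, ip, sizeof(ip));
        printf("client %s replied: %s\n", ip, buf);       /* the reply reveals the client's IP */
    }

    close(fd);
    return 0;
}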
I'm studying C++ socket programming...
The server program binds to a socket and starts listening for connection requests... OK, now how can I list the IP addresses of the incoming requests?
I know I can get the IP addresses after accepting the connections, but let's say I don't want to accept a connection from a specific IP address...
On Windows only, you can use the conditional callback feature of WinSock2's WSAAccept() function to access client information before accepting a connection, and to even reject the connection before it is accepted.
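A hedged Winsock sketch of that approach (the allowed address 203.0.113.5 is a placeholder): the condition callback inspects the caller's address and returns CF_ACCEPT or CF_REJECT before the connection is accepted.

#include <winsock2.h>

/* Reject everything except one allowed IPv4 address. */
static int CALLBACK accept_condition(LPWSABUF caller_id, LPWSABUF caller_data,
                                     LPQOS sqos, LPQOS gqos,
                                     LPWSABUF callee_id, LPWSABUF callee_data,
                                     GROUP *g, DWORD_PTR context)
{
    struct sockaddr_in *peer = (struct sockaddr_in *)caller_id->buf;
    if (peer->sin_addr.s_addr == inet_addr("203.0.113.5"))   /* placeholder address */
        return CF_ACCEPT;
    return CF_REJECT;
}

/* Usage, given a listening SOCKET listen_sock (WSAStartup() assumed done):
   struct sockaddr_in peer;
   int len = sizeof(peer);
   SOCKET client = WSAAccept(listen_sock, (struct sockaddr *)&peer, &len,
                             accept_condition, (DWORD_PTR)0);
*/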
This can't be done in terms of the standard socket API. On all platforms I know, the system actually accepts the connection (i.e. responds with SYN+ACK TCP datagram) before the application has a chance to monitor the pending request.
For optimum performance, this would be solved by filtering in the network stack, but the details of doing that will depend on the operating system (this is not part of the socket interface and your application may generally not even have the rights to configure your network stack this way.)
The other opportunity is after the accept, by which time the connection has already been accepted at the TCP level (the handshake has completed).
I don't think you can do it at the intermediate stage where you would prefer to. That, however, would not be very different from doing it after accept anyway.
The TCP standard has a "simultaneous open" feature.
One implication of this feature is that a client trying to connect to a local port, when the port is from the ephemeral range, can occasionally connect to itself (see here).
So the client thinks it's connected to the server, while it is actually connected to itself. On the other side, the server cannot open its server port, since it has been occupied/stolen by the client.
I'm using RHEL 5.3 and my clients constantly try to connect to a local server.
Eventually a client connects to itself.
I want to prevent the situation. I see two possible solutions to the problem:
Don't use ephemeral ports for server ports.
Agree on an ephemeral port range and configure it on your machines (see ephemeral range).
Check connect() as somebody proposed here.
What do you think?
How do you handle the issue?
P.S. 1
Apart from the solution, which I am obviously looking for, I'd like you to share your real-life experience with the problem.
When I found the cause of the problem, I was "astonished" that people at my workplace were not familiar with it. Polling a server by connecting to it periodically is IMHO common practice, so how is it that the problem is not commonly known?
When I stumbled into this I was flabbergasted. I could figure out that the outgoing port number accidentally matches the incoming port number, but not why the TCP handshake (SYN SYN-ACK ACK) would succeed (ask yourself: who is sending the ACK if there is nobody doing a listen() and accept()???)
Both Linux and FreeBSD show this behavior.
Anyway, one solution is to stay out of the high range of port numbers for servers.
I noticed that Darwin side-steps this issue by not allowing the outgoing port to be the same as the destination port. They must have been bitten by this as well...
An easy way to show this effect is as follows:
while true
do
telnet 127.0.0.1 50000
done
And wait for a minute or so and you will be chatting with yourself...
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
hello?
hello?
Anyway, it makes good job interview material.
Bind the client socket to port 0 (the system assigns one), check the system-assigned port, and if it matches the local server port you already know the server is down and can skip connect().
For the server you need to bind() the socket to its port. Once a socket is bound to an addr:port pair, that pair will no longer be used for implicit binding in connect().
No problem, no trouble.
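A small sketch of that idea for an IPv4 loopback client (server_port is whatever well-known local port your server uses):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Connect to 127.0.0.1:server_port, but never to ourselves. Returns the fd or -1. */
int connect_local_no_self(unsigned short server_port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = 0;                               /* explicit bind, OS picks the port */
    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        close(fd);
        return -1;
    }

    socklen_t len = sizeof(local);
    getsockname(fd, (struct sockaddr *)&local, &len);
    if (ntohs(local.sin_port) == server_port) {       /* we got the server's port: it is down */
        close(fd);
        return -1;
    }

    struct sockaddr_in srv;
    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(server_port);
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);
    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}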
Note that this solution is theoretical and I have not tested it myself. I've not experienced this problem before (or did not realize it), and hopefully I won't experience it again.
I'm assuming that you can edit neither the client source code nor the server source code. Additionally, I'm assuming the real problem is the server, which cannot start.
Launch the server with a starter application. If the target port that the server will bind to is being used by any process, create an RST (reset packet) by using raw sockets.
The post below briefly describes what an RST packet is (taken from http://forum.soft32.com/linux/killing-socket-connection-cmdline-ftopict473059.html)
You have to look at a "raw socket" packet generator.
And you have to be superuser.
You probably need a network sniffer as well.
http://en.wikipedia.org/wiki/Raw_socket
http://kerneltrap.org/node/3072 - TCP RST attacks
http://search.cpan.org/dist/Net-RawIP/lib/Net/RawIP.pm - a Perl module
http://mixter.void.ru/rawip.html - raw IP in C
In the C version, you want a TH_RST packet.
RST is designed to handle the following case:
A and B establish a connection.
B reboots, and forgets about this.
A sends a packet to B to port X from port Y.
B sends a RST packet back, saying "what are you talking about? I don't have a connection with you. Please close this connection down."
So you have to know/fake the IP address of B, and know both ports X and Y. One of the ports will be the well-known port number. The other you have to find out. I think you also need to know the sequence number.
Typically people do this with a sniffer. You could use a switch with a packet mirroring function, or run a sniffer on either host A or B.
As a note, Comcast did this to disable P2P traffic.
http://www.eff.org/wp/packet-forgery-isps-report-comcast-affair
In our case we don't need to use a sniffer since we know the information below:
So you have to know/fake the IP address of B, and know both ports X and Y
X = Y and B's IP address is localhost
Tutorial on http://mixter.void.ru/rawip.html describes how to use Raw Sockets.
NOTE that any other process on the system (e.g. Mozilla Firefox) might also steal our target port from the ephemeral pool. This solution will not work for that type of connection, since X != Y and B's IP address is not localhost but something like 192.168.1.43 on eth0. In this case you might use netstat to retrieve X, Y, and B's IP address and then create an RST packet accordingly.
Hmm, that is an odd problem. If you have a client/server on the same machine, and it will always be on the same machine, perhaps shared memory, a Unix domain socket, or some other form of IPC is a better choice.
Other options would be to run the server on a fixed port and the client on a fixed source port. Say, the server runs on 5000 and the client runs on 5001. You do have the issue of binding to either of these if something else is bound to them.
You could run the server on an even port number and force the client to an odd port number. Pick a random number in the ephemeral range, OR it with 1, and then call bind() with that. If bind() fails with EADDRINUSE then pick a different odd port number and try again.
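A rough sketch of that even/odd split on the client side (the range boundaries are placeholders; the server would stick to even port numbers):

#include <errno.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

/* Bind fd to a random odd port in [low, high]; returns 0 on success, -1 on failure. */
int bind_odd_client_port(int fd, unsigned short low, unsigned short high)
{
    for (int attempts = 0; attempts < 100; attempts++) {
        unsigned short port = (unsigned short)((low + rand() % (high - low)) | 1);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return 0;                                 /* got an odd port, safe to connect() now */
        if (errno != EADDRINUSE)
            return -1;                                /* some other error: give up */
    }
    return -1;
}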
This option isn't actually implemented in most TCPs. Do you have an actual problem?
That's an interesting issue! If you're mostly concerned that your server is running, you could always implement a heartbeat mechanism in the server itself to report status to another process. Or you could write a script to check and see if your server process is running.
If you're concerned more about the actual connection to the server being available, I'd suggest moving your client to a different machine. This way you can verify that your server at least has some network connectivity.
In my opinion, this is a bug in the TCP spec; listening sockets shouldn't be able to send unsolicited SYNs, and receiving a SYN (rather than a SYN+ACK) after you've sent one should be illegal and result in a reset, which would quickly let the client close the unluckily-chosen local port. But nobody asked for my opinion ;)
As you say, the obvious answer is not to listen in the ephemeral port range. Another solution, if you know you'll be connecting to a local machine, is to design your protocol so that the server sends the first message, and have a short timeout on the client side for receiving that message.
The actual problem you are having seems to be that while the server is down, something else can use the ephemeral port you expect for your server as the source port for an outgoing connection. The detail of how that happens is separate to the actual problem, and it can happen in ways other than the way you describe.
The solution to that problem is to set SO_REUSEADDR on the socket. That will let you create a server on a port that has a current outgoing connection.
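For completeness, a minimal sketch of setting SO_REUSEADDR before bind():

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a listening socket on `port` even if old connections on that port still linger. */
int make_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    int yes = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));   /* set before bind() */

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 16) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}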
If you really care about that port number, you can use operating-system-specific methods to stop it from being allocated as an ephemeral port.