How to avoid re-use socket? [closed] - c++

There is a server, and there are clients. Clients connect to the server. The server's accept() function returns a socket for the connected client. But when the client socket becomes invalid, it can be reused. How do I prevent the server from reusing the same socket?
P.S.: To those who downvote, please note that I'm not asking about the server socket, I'm asking about client sockets.

This is just to clear up some misconceptions - I'm still voting for the question to be closed unless it's edited into something sensible.
But when the client socket becomes invalid, it can be reused
No, the socket file descriptor has to be closed if the TCP connection has shut down. A new socket could be allocated later for a new TCP connection, and receive a file descriptor with the same integer value, but it isn't the same socket.
The socket - this is not just a port, but also the address.
No, a socket is the handle your process uses to talk to the OS about a TCP connection, which is itself uniquely identified by the 4-tuple consisting of two ports and two addresses. See this answer, I'm not going to paste it all here.
If no connection, then the customer will not be aware of this [closing a socket]
If there is no connection, there's nothing to close.
If there is an existing TCP connection, and either the client or server close their socket, the other end will be notified, and the other end's socket will also become invalid (and should be closed in response).
For example, sockets have the SO_REUSEADDR and SO_REUSEPORT options. Why would they exist?
When you close a connection, you send a packet telling the other end of the connection that you've done so. Even after they've acknowledged this, you could receive other packets on the same connection, if they took a different network path. So, TCP keeps the closed connection around in TIME_WAIT state, preventing another connection starting on the same address:port tuple, for some arbitrary time until it's very unlikely to receive a packet that was really intended for the previous connection.
This TIME_WAIT duration is traditionally 4 minutes (twice the maximum segment lifetime), which is easily long enough that you could, for example, kill a server process and then restart it (at which point it will fail to bind to its address:port, because the closed connection is still using that address:port).
SO_REUSEADDR allows the server to replace the old TIME_WAIT connection with its new, live connection.
SO_REUSEPORT allows multiple sockets to bind to, and accept on, the same port for load-balancing purposes.
This is all documented, eg. in the man page, and neither option has anything to do with the socket file descriptor value.
As bolov said in a comment, the reason these are used by the server is that you actually care about the address:port bound by a server, because that's how you know where to reach it. The local port of a client connection is generally assigned from the ephemeral port range, and no-one cares what its value is except that it's unique at that moment in time.
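For illustration, a minimal sketch of that server-side usage (POSIX-style sockets; the port 8080 is just a placeholder and error handling is omitted):
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int make_listener()                        // hypothetical helper
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    // Let bind() succeed even if a previous instance of the server left
    // its address:port pair behind in TIME_WAIT.
    setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);    // placeholder port
    bind(lsock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(lsock, SOMAXCONN);
    return lsock;
}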

This could be made possible by writing the program so that, when the server does not accept a connection, the socket's file descriptor is released. But this can complicate things for the OS, and there may be scenarios in which the OS fails.
There are limits on the file descriptors for a single process, and on the file table and inode table that are available, since each file descriptor may ultimately refer to an individual inode.
But the maximum number of inodes is fixed at file system creation, limiting the maximum number of files the file system can hold. A typical allocation heuristic for inodes in a file system is one percent of total size.
As the inode table has a fixed size, releasing a file descriptor frees that particular inode, but done your way the inode table may dry up soon and crash the system.

Related

What exactly bind API do in Server program [closed]

What exactly does the bind API do in a server program?
I am very new to socket programming.
bind says: bind the socket to an IP address and port.
So what exactly happens if I give
argument 1 of bind = AF_INET, argument 2 = (sockaddr *)&hint (where hint is a struct sockaddr), and argument 3 = sizeof(hint)?
In short: bind() specifies the address & port on the local side of the connection. If you don't call bind(), the operating system will automatically assign you an available port number.
Each time an IP datagram containing TCP data is sent over the network, the datagram contains a 'local address', 'remote address', 'local port', and 'remote port'. This is the only information that IP has to figure out who ends up getting the packet.
So, both the client and the server port numbers need to be filled in before the connection can work. Data that is directed to the server needs a 'destination' port, so that the data can get sent to the appropriate program running on the server. Likewise, it needs a 'source' so that the server knows who to send data back to, and also so that if there are many connections from the same computer, the server can keep them separate by looking at the source port number.
Since the connection is initiated by the client program, the client program needs to know the server's port number before it can make a connection. For this reason, servers are placed on 'well-known' port numbers. For example, a telnet server is always on port 23. A http server is always on port 80.
The bind() API call assigns the 'local' port number. That is, the port number that is used as the 'source port' on outgoing datagrams, and the 'destination port' on incoming datagrams.
There is a detailed explanation here with an example.
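To add a concrete illustration of the "bind() is optional on the client side" point: in the sketch below (POSIX-style sockets, placeholder address 192.0.2.1 and port 80, error handling omitted) the client never calls bind(), and getsockname() shows the ephemeral local port the OS chose during connect():
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstdio>

void show_local_port()
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port   = htons(80);                       // placeholder server port
    inet_pton(AF_INET, "192.0.2.1", &server.sin_addr);   // placeholder address

    // No bind() here: the OS assigns an ephemeral local port during connect().
    connect(fd, reinterpret_cast<sockaddr*>(&server), sizeof(server));

    sockaddr_in local{};
    socklen_t len = sizeof(local);
    getsockname(fd, reinterpret_cast<sockaddr*>(&local), &len);
    std::printf("local port chosen by the OS: %d\n", (int)ntohs(local.sin_port));

    close(fd);
}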

Pinging by creating new sockets per each peer

I created a small cross-platform app using Qt sockets in C++ (although this is not a C++ or Qt specific question).
The app has a small "ping" feature that tries to connect to a peer and asks for a small challenge (i.e. some custom data sent and some custom data replied) to see if it's alive.
I'm opening one socket per peer, so as soon as the ping starts we have several sockets in SYN_SENT.
Is this a proper way to implement a ping-like protocol with a challenge? Am I wasting sockets? Is there a better way I should be doing this?
I'd say your options are:
An actual ping (using ICMP echo packets). This has low overhead, but only tells you whether the host is up. And it requires you to handle lost packets, timeouts, and retransmits.
A UDP-based protocol. This also has lower kernel overhead, but again you'll be responsible for setting up timeouts, handling lost packets, and retransmits. It has the advantage of allowing you to positively affirm that your program is running on the peer. It can be implemented with only a single socket endpoint no matter how many peers you add. (It is also possible that you could send to multiple peers at once with a broadcast if all are on a local network, or a multicast [complicated set-up required for that].)
TCP socket as you're doing now. This is much easier to code, extremely reliable and will automatically provide a timeout (i.e. your connect will eventually fail if the peer doesn't respond). It lets you know positively that your peer is there and running your program. Although there is more kernel overhead to this, and you will use one socket endpoint on your host per peer system, I wouldn't call it a significant issue unless you think you'll be having thousands of peers.
So, in the end, you have to judge: If thousands of hosts will be participating and this pinging is going to happen frequently, you may be better off coding up a UDP solution. If the pinging is rare or you don't expect so many peers, I would go the TCP route. (And I wouldn't consider that a "waste of sockets" -- those advantages are why TCP is so commonly used.)
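If you do go the TCP route, the check itself can be very small. A rough sketch (plain POSIX sockets rather than Qt, a blocking connect(), no challenge step, error handling trimmed):
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

// Returns true if a TCP connection to ip:port could be established.
bool peer_is_up(const char* ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return false;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    // A blocking connect(); it fails (eventually) if the peer is unreachable.
    bool up = connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0;
    close(fd);
    return up;
}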
The technique described in the question doesn't really implement ping for the connection and doesn't test if the connection itself is alive. The technique only checks that the peer is listening for (and is responsive to) new connections...
What you're describing is more of an "is the server up?" test than a "keep-alive" ping.
If we're discussing "keep-alive" pings, then this technique will fail.
For example, if just the read or the write aspect of the connection is closed, you wouldn't know. Also, if the connection was closed improperly (i.e., due to an intermediary dropping the connection), this ping will not expose the issue.
Most importantly, for some network connections and protocols, you wouldn't be resetting the connection's timeout... so if your peer is checking for connection timeouts, this ping won't help.
For a "keep-alive" ping, I would recommend that you implement a protocol specific ping.
Make sure that the ping is performed within the existing (same) connection and never requires you to open a new connection.
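As a sketch of what such a protocol-specific ping might look like over an already-connected socket (the "PING"/"PONG" messages here are invented for illustration; use whatever your own protocol defines):
#include <sys/types.h>
#include <sys/socket.h>
#include <cstring>

// Sends an application-level ping on an existing connection and waits
// for the peer's reply. Blocking I/O and no timeout handling, for brevity.
bool protocol_ping(int fd)
{
    const char ping[] = "PING\n";
    if (send(fd, ping, sizeof(ping) - 1, 0) != (ssize_t)(sizeof(ping) - 1))
        return false;

    char reply[8] = {};
    ssize_t n = recv(fd, reply, sizeof(reply) - 1, 0);
    return n > 0 && std::strncmp(reply, "PONG", 4) == 0;
}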

How to get the client IP address before accepting the connection in C++

I'm studying C++ socket programming...
The server program binds to a socket and starts listening for connection requests... OK, now how can I list the IP addresses of the pending connection requests?
I know I can get the IP addresses after accepting the connections, but let's say I don't want to accept a connection from a specific IP address...
On Windows only, you can use the conditional callback feature of WinSock2's WSAAccept() function to access client information before accepting a connection, and to even reject the connection before it is accepted.
This can't be done in terms of the standard socket API. On all platforms I know of, the system actually accepts the connection (i.e. responds with a SYN+ACK TCP segment) before the application has a chance to monitor the pending request.
For optimum performance, this would be solved by filtering in the network stack, but the details of doing that will depend on the operating system (this is not part of the socket interface and your application may generally not even have the rights to configure your network stack this way.)
The other opportunity is after the accept, by which time the connection has already been established (the final ACK of the handshake exchanged) at the TCP level.
I don't think you can do it in the middle phase, where you would prefer to. That, however, would not be very different from doing it after accept() anyway.

Winsock2: How to allow ONLY one client connection at a time by using listen's backlog in VC++

I want to allow only one connection at a time to my TCP server. Can you please tell me how to do this using listen() and its backlog length?
I am using the code given below, but when I launch two clients one after the other, both get connected. I am using VC++ with Winsock2.
listen(m_socket,-1);
Passing zero as the backlog is also not working.
Waiting for your reply.
Regards,
immi
If you can indeed limit your application to only use Winsock 2, you can use its conditional accept mechanism:
SOCKET sd = socket(...);
listen(sd, ...);
DWORD nTrue = 1;
setsockopt(sd, SOL_SOCKET, SO_CONDITIONAL_ACCEPT, (char*)&nTrue, sizeof(nTrue));
This changes the stack's behavior to not automatically send SYN-ACK replies to incoming SYN packets as long as connection backlog space is available. Instead, your program gets the signal that it should accept the connection as normal -- select(), WSAEventSelect(), WSAAsyncSelect()... -- then you call WSAAccept() instead of accept():
sockaddr_in sin;
int sinLen = sizeof(sin);
WSAAccept(sd, (sockaddr*)&sin, &sinLen, ConditionalAcceptChecker, 0);
You write the function ConditionalAcceptChecker() to look at the incoming connection info and decide whether to accept the connection. In your case, you can just return CF_REJECT as long as you're already processing a connection.
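A sketch of what that condition function might look like for the "one client at a time" case (g_busy here is an assumed flag that the rest of your program sets while a client is being served):
#include <winsock2.h>

extern bool g_busy;   // assumed: true while a client is already connected

int CALLBACK ConditionalAcceptChecker(
    LPWSABUF lpCallerId, LPWSABUF /*lpCallerData*/,
    LPQOS /*lpSQOS*/, LPQOS /*lpGQOS*/,
    LPWSABUF /*lpCalleeId*/, LPWSABUF /*lpCalleeData*/,
    GROUP* /*g*/, DWORD_PTR /*dwCallbackData*/)
{
    // lpCallerId->buf points at the connecting client's sockaddr, in case
    // you also want to filter on the remote address.
    return g_busy ? CF_REJECT : CF_ACCEPT;
}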
Again, beware that this mechanism is specific to Winsock 2. If you need portable behavior, the other posts' advice to close the listening socket while your program already has a connection is better.
You may set the backlog equal to 1, since this is the number of connections you want.
But AFAIK there's no hard guarantee on the queue size (this doc says it would be 1.5 * backlog on BSD, for example).
IMHO, you'd better control the number of connections manually by not accept()'ing connections beyond some limit.
I would say, only accept once. If you only want one client at a time on your server, you can also use just one thread to perform the handling. The backlog limits only the number of pending connections the system will queue for accepting (the queue is empty again after the first accept, so the next client gets into the backlog), not the number of established connections!
That's not what the listen backlog is for.
The listen backlog affects a queue that's used for pending connections, it allows the TCP stack to queue up pending connections for you to accept.
To do what you want to do you need to accept the one connection that you're allowing and then close the listening socket. Once you've finished with your single client you can recreate your listening socket and listen for a new connection. This will prevent more than a single client connecting to you but there will be no way for the client to know that you're actually running and accepting connections on a "one at a time" basis. All clients except the one that manages to connect will think you're just not there.
It's probably a better design to keep your listening socket open and accept all connections, but once you have your "one" active connection you simply accept and then either send an application-level message to the client telling it that you can't accept any more connections, OR, if you can't do that, simply close the new connection.
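A rough sketch of that second design (Winsock flavour, error handling omitted; g_haveClient is an assumed flag that the connection-handling code clears when the active client disconnects):
#include <winsock2.h>

extern bool g_haveClient;   // assumed: true while one client is being served

void accept_loop(SOCKET listener)
{
    for (;;)
    {
        sockaddr_in peer;
        int peerLen = sizeof(peer);
        SOCKET client = accept(listener, (sockaddr*)&peer, &peerLen);
        if (client == INVALID_SOCKET)
            continue;

        if (g_haveClient)
        {
            // Optionally send() an application-level "server busy" message
            // here before dropping the connection.
            closesocket(client);
            continue;
        }

        g_haveClient = true;
        // ... hand 'client' off to whatever serves the single connection.
    }
}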

How do I create a TCP server that will accept only one connection at a time? [closed]

I'm writing a client-server pair in C++ using Linux sockets. I want the server to listen for a connection, and while one client is connected the server should reject any other clients that try to connect.
I tried implementing this by setting the backlog parameter in the listen function to 0 and to 1 and neither one of those values seems to work. The first client connects as expected, but any subsequent clients just block while the first client finishes. What's really confusing to me is that they don't block on connecting to the server, they block on the first read.
I used the code here to get started writing my client and server. Does anyone know what I need to change to get the server to accept only one client connection, and drop any subsequent connection attempts?
When you accept a connection, a new socket gets created. The old one is still used to listen for future connections.
Since you want to only allow 1 connection at a time, you could just accept the connections, and then close the new accepted socket if you detect you are already processing another.
Is there a net difference that you are looking for compared to closing the new accepted socket right after the accept? The client will know as soon as it tries to use its socket (or right away if it is already waiting on the server with a read call) with a last error of: server actively closed the connection.
Just don't fork() after accept().
This pseudo-C-code will only accept one client at a time.
while(1) {
listen()
accept()
*do something with the connection*
close()
}
You could close your original socket that's listening for connections after accepting the first connection. I don't know if the socket class you're using will allow you to do that though.
Sounds like you need to implement it manually. Let a client connect, then send a disconnect message from the server to the client if there's already another client connected. If the client receives this message let it disconnect itself.
Since you want to only allow 1 connection at a time, you could just accept the connections, and then close the new accepted socket if you detect you are already processing another.
I think it should be the listening socket that gets closed.
When the first connection is established, you close the original listen socket.
And after that no more connections can be established.
After the first connection ends, you can create a new socket to listen again.
If you have control over the clients, you can make their sockets non-blocking. In that case connect() returns immediately with the error EINPROGRESS while the connection attempt continues in the background.
I'm still looking for how to change the socket to be non-blocking. If anybody knows how offhand, feel free to edit the answer.
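For what it's worth, on Linux a socket can be switched to non-blocking mode with fcntl(); a minimal sketch:
#include <fcntl.h>

// Puts an existing socket (or any file descriptor) into non-blocking mode.
bool set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
        return false;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK) != -1;
}
After this, connect() on the socket returns immediately, and completion of the handshake can be waited for with select() or poll().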
Let the listening socket die after accepting and starting a new connection. Then, when that connection is done, spin off a new listening socket.
You might have the socket option TCP_DEFER_ACCEPT set on your listening socket:
TCP_DEFER_ACCEPT (since Linux 2.4)
Allows a listener to be awakened only when data arrives on the socket. Takes an integer value (seconds); this can bound the maximum number of attempts TCP will make to complete the connection. This option should not be used in code intended to be portable.
I would assume it would lead to the effect you described: the connecting client doesn't block on the connect, but on the subsequent read. I'm not exactly sure what the option's default setting is, or what it should be set to in order to disable this behavior, but a value of zero is probably worth a try:
int opt = 0;
setsockopt(sock, IPPROTO_TCP, TCP_DEFER_ACCEPT, &opt, sizeof(opt));
As far as I can see, it is not possible to listen for exactly one connection.
TCP involves a 3-way handshake. After the first SYN packet is received, the kernel puts that "connection" in a wait queue, answers with a SYN/ACK, and waits for the final ACK. Once that is received, it moves the connection from the wait queue to the accept queue, where it can be picked up by the application using the accept() call. (For details have a look here.)
On Linux, the backlog argument only limits the size of the accept queue, but the kernel will still do the 3-way handshake magic. The client receives the SYN/ACK, answers with the final ACK, and considers the connection established.
Your only options are either shutting down the listening socket as soon as you have accepted the first connection (this might, however, leave connections that have already completed the handshake sitting in the queue), or actively accepting other connections and closing them immediately to notify the client.
The last option you have is the one you are already using: let the server queue your connections and process them one after the other. Your clients will block in that case.