I have a while loop that retries connecting to a device.
The problem is that on every retry it ends up using the same socket descriptor (closing it and opening it again). Is this safe?
while (retry)
    create socket
    read (using the socket created above)
    if read fails
        close socket and retry
I want a new socket fd to connect to the server and read again. Is reusing the same one safe after a read has failed?
If you actually close the socket, then I would suggest creating a new one, since the fd that described the socket is now invalid, and reusing that same file descriptor could cause errors.
Normally, though, it's better to close the old socket and create a new one.
Since you called bind or connect on the socket, you can't change its address.
You must close the socket and create a new one.
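A minimal sketch of that close-and-recreate pattern (the address, port, retry count, and buffer handling here are placeholders, not from the original question):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

// Hedged sketch: every attempt gets a brand-new socket; the failed one is closed first.
bool read_with_retry(const char *server_addr, unsigned short port, char *buf, size_t len)
{
    for (int attempt = 0; attempt < 5; ++attempt) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);   // fresh descriptor each try
        if (fd < 0)
            continue;

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, server_addr, &addr.sin_addr);

        if (connect(fd, (sockaddr *)&addr, sizeof(addr)) == 0 &&
            read(fd, buf, len) > 0) {
            close(fd);
            return true;                            // connected and read successfully
        }

        close(fd);                                  // discard the failed socket before retrying
    }
    return false;
}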
Related
I got ConnectEx/DisconnectEx working and tested that they work, but how do I reuse a socket with them? I saw "Reusing socket descriptor on connection failure", but that says to loop back and create the whole socket again, which I don't want. I just want to make a new socket once recv fails because the host is offline (I'm writing a TCP client).
Thank you.
I am using the Boost library to create a server application. Only one client is allowed at a time, so once async_accept(...) completes the acceptor is closed.
The only job of my server is to send data periodically to the client (if sending is enabled on the server; otherwise it "just sits there" until sending gets enabled). For this I have a Boost message queue: when a message arrives, send() is called on the socket.
My problem is that I cannot tell whether the client is still listening. Normally you would not care; by the next transmission the send would yield an error.
But in my case the acceptor is not open while a socket is open. If the socket goes into the CLOSE_WAIT state, I have to close it and reopen the acceptor so that the client can connect again.
Waiting until the next send is not an option either, since sending might be disabled, and then my server would be stuck.
Question:
How can I determine if a boost::asio::ip::tcp::socket is in a CLOSE_WAIT state?
Here is the code to do what Dmitry Poroh suggests:
// Define a gettable socket option for level SOL_SOCKET, option SO_ERROR.
typedef asio::detail::socket_option::integer<ASIO_OS_DEF(SOL_SOCKET), SO_ERROR> so_error;

so_error tmp;
your_socket.get_option(tmp);
int value = tmp.value();
// value now holds the socket's pending error code (0 means no error).
You can try to use ip::tcp::socket::get_option to get the error state with level SOL_SOCKET and option name SO_ERROR. I'm surprised that I have not found a ready-made Boost implementation for it. So you can try to meet the GettableSocketOption requirements and use ip::tcp::socket::get_option to fetch the socket error state.
I am trying to pass the file descriptor of an established Unix domain socket connection from process A to process B through another Unix domain socket connection, with no luck,
although a TCP socket fd is passed with no problem.
Is there a reason for this, or am I doing something wrong?
Both are passed through an ancillary message.
Thanks.
Socket file descriptors (just like regular file descriptors) have absolutely no meaning outside the process that properly created them. When you send an fd to another process, you are just sending a bunch of bytes - nothing more, nothing less.
The only way you can move a working fd from one process to another is to fork() the process containing the fd to be passed.
If you want some process to connect to a particular Unix socket, you should pass the socket's file system path to that process. The receiving process can then properly create a socket of its own and make the connection.
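A minimal sketch of that approach in the receiving process (the path is whatever file system name was handed over; error handling is kept short):

#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstring>

// Hedged sketch: create our own AF_UNIX socket and connect to the path we were given.
int connect_to_unix_path(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    if (connect(fd, (sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;   // a descriptor that is valid in this process
}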
I don't know why you didn't have problems passing a TCP socket fd. Perhaps if you post the relevant parts of your code the reason will be revealed.
I'm writing an application that is split into two parts for Mac OS X - a daemon and an agent. I'm using a standard unix socket to communicate between the daemon and the agents. That is, the socket is created with PF_UNIX and SOCK_STREAM.
When an agent is created (whenever a user logs in), one of the first things it does is connect to the socket. This seems to work perfectly for the first agent. However, when the second agent connects, the daemon experiences the following issue:
I'm using select() to check for data that can be read. The select() call succeeds and indicates that there is data to be read. However, when I call recv() it returns -1, and errno is set to 35 (EAGAIN, "Resource temporarily unavailable").
Now, I would expect this for a non-blocking socket, but I have triple-checked - I never set the socket to be non-blocking.
As far as I can tell, this only happens when a second agent connects to the same unix socket. If I limit myself to one daemon and one agent then everything seems to work perfectly. What could be causing this odd behaviour?
It sounds like you're trying to read from the wrong client fd. It's hard to tell without seeing your code, but your description points that way too.
So just in case, here's how it works. Your server ends up with three file descriptors: the socket it originally listens on, plus one file descriptor for each connected client. When there's something to read on the original socket, that means there's a new client; it sounds like you have this part right. Each connected client then gives you its own independent fd to read/write from. Calling select() will return when any of these is ready to read; you then have to check each fd in the readfds set returned by select() with FD_ISSET() to see whether it actually has data to read.
You can see a basic example of this type of code here.
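A rough sketch of that structure (listen_fd and client_fds are illustrative names; error handling and removal of closed clients are simplified):

#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

// Hedged sketch: one listening fd plus one fd per connected agent, each
// checked with FD_ISSET() after select() returns.
void serve(int listen_fd)
{
    std::vector<int> client_fds;
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listen_fd, &readfds);
        int maxfd = listen_fd;
        for (int fd : client_fds) {
            FD_SET(fd, &readfds);
            if (fd > maxfd) maxfd = fd;
        }

        if (select(maxfd + 1, &readfds, nullptr, nullptr, nullptr) < 0)
            continue;

        if (FD_ISSET(listen_fd, &readfds)) {        // new agent connecting
            int newfd = accept(listen_fd, nullptr, nullptr);
            if (newfd >= 0)
                client_fds.push_back(newfd);
        }

        for (int fd : client_fds) {                 // data from an already-connected agent
            if (FD_ISSET(fd, &readfds)) {
                char buf[512];
                ssize_t n = recv(fd, buf, sizeof(buf), 0);
                if (n <= 0) { /* agent closed or error: remove fd from client_fds */ }
            }
        }
    }
}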
I am having issues reading data from a socket. Supposedly, there is a server socket that is waiting for clients to connect. When I write a client to connect() to the server socket/port, it appears that I am connected. But when I try to read() data that the server is supposedly writing on the socket, the read() function hangs until the server app is stopped.
Why would a read() call ever hang if the socket is connected? I believe that I am never really connected to the socket/port, but I can't prove it, because the connect() call did not return an error. The read() call is not returning an error either; it is just never returning at all.
read() blocks until it receives some I/O (or an error).
As John & Whirl mentioned, the problem is almost certainly that the server hasn't sent any data for your read() call to return. Another easy thing to overlook when you're starting out with network programming is that the data transferred in a server's write() call is not always symmetrical with a client's read() call. Where the server may write("hello world"), your read() could easily return "hello world", "hello wo", "hel", or even just "h".
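A minimal sketch of coping with that asymmetry, assuming the reader knows how many bytes it expects (expected_len is purely illustrative):

#include <unistd.h>

// Hedged sketch: read() may return fewer bytes than the peer wrote in one call,
// so keep reading until the expected amount has arrived.
ssize_t read_fully(int fd, char *buf, size_t expected_len)
{
    size_t got = 0;
    while (got < expected_len) {
        ssize_t n = read(fd, buf + got, expected_len - got);
        if (n <= 0)               // 0 = peer closed the connection, -1 = error
            return n;
        got += static_cast<size_t>(n);
    }
    return static_cast<ssize_t>(got);
}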
Unless you explicitly changed your reader's socket to non-blocking mode, a call to read() will do exactly what you describe: it will block until some data is actually available to read.
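If you want to double-check that assumption, here is a quick sketch of inspecting the fd's flags with standard fcntl usage:

#include <fcntl.h>

// Hedged sketch: O_NONBLOCK is only present if someone set it explicitly.
bool is_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    return flags != -1 && (flags & O_NONBLOCK);
}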
You can also use netstat (I use it with -f inet) to figure out connections that have been made and see the status of your socket connection.
Your server is probably not writing data to the socket, so your reader just blocks waiting for data to appear on the socket.