I am using boost::asio for network programming and am running into timing issues, currently mostly on the client side.
The protocol begins with the server returning a date-time string, which the client reads. Up to that point it works fine. But I also want to be able to write commands to the server, which then processes them. To accomplish this I use the io_service.post() function as shown below.
io_service.post(boost::bind(...)); // the bound function calls async_write()
For some reason the write attempt happens before the initial client/server communication, when the socket has not been created yet, and I get a bad socket descriptor error.
Now the io_service's run method is indeed called in another thread.
When I place a sleep(2) call before the post() call, it works fine.
Is there a way to synchronize this, so that the socket is created before any posted calls are executed?
When creating the socket and establishing the connection with boost::asio, you can supply a handler to be called when these operations have either completed or failed. So you should trigger your "posted call" from the success handler; a sketch follows the list below.
Relevant methods and classes are:
boost::asio::ip::tcp::resolver::async_resolve(...)
boost::asio::ip::tcp::socket::async_connect(...)
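A minimal sketch of that flow is below (the class name Client, the placeholder HELLO command, and the host/port are illustrative assumptions, not part of your code): the first read and the first write are only started from the connect handler, so the socket is guaranteed to be open before anything is written.

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <string>

using boost::asio::ip::tcp;

class Client {
public:
    explicit Client(boost::asio::io_service& io)
        : resolver_(io), socket_(io) {}

    void start(const std::string& host, const std::string& port) {
        tcp::resolver::query query(host, port);
        resolver_.async_resolve(query,
            boost::bind(&Client::on_resolve, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::iterator));
    }

private:
    void on_resolve(const boost::system::error_code& ec,
                    tcp::resolver::iterator endpoints) {
        if (ec) { std::cerr << "resolve failed: " << ec.message() << "\n"; return; }
        socket_.async_connect(*endpoints,   // try the first resolved endpoint
            boost::bind(&Client::on_connect, this,
                        boost::asio::placeholders::error));
    }

    void on_connect(const boost::system::error_code& ec) {
        if (ec) { std::cerr << "connect failed: " << ec.message() << "\n"; return; }
        // Only from this point on does the socket exist and is it connected,
        // so this is where the protocol starts: read the date-time greeting,
        // then send the first command.
        boost::asio::async_read_until(socket_, greeting_, '\n',
            boost::bind(&Client::on_greeting, this,
                        boost::asio::placeholders::error));
    }

    void on_greeting(const boost::system::error_code& ec) {
        if (ec) return;
        command_ = "HELLO\n";   // placeholder command
        boost::asio::async_write(socket_, boost::asio::buffer(command_),
            boost::bind(&Client::on_write, this,
                        boost::asio::placeholders::error));
    }

    void on_write(const boost::system::error_code& ec) {
        if (!ec) std::cout << "command sent\n";
    }

    tcp::resolver resolver_;
    tcp::socket socket_;
    boost::asio::streambuf greeting_;
    std::string command_;
};

int main() {
    boost::asio::io_service io;
    Client client(io);
    client.start("example.com", "5000");   // placeholder host/port
    io.run();   // or run it in a separate thread, as in your setup
}

If other parts of the program still need to post writes at arbitrary times, they can queue the commands and have the connect handler (or a flag it sets) flush that queue once the connection is up.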
I think the link below will give you some help:
http://www.boost.org/doc/libs/1_42_0/doc/html/boost_asio/reference/io_service.html
When developing an async C++ gRPC server, how can I differentiate between the client being done with writing and the connection being broken?
I am streaming data from the client to the server, and once the client is done it calls WritesDone to let the server know it should finish storing the file. With a sync server I can differentiate between the client calling WritesDone and the connection being broken by calling context->IsCancelled(), but in async mode you cannot call IsCancelled() until you get the tag specified in AsyncNotifyWhenDone.
In both cases (WritesDone and the call ending) the Read tag is returned with ok set to false. However, the AsyncNotifyWhenDone tag, which would allow me to differentiate, arrives after the Read tag.
I will know after I try to call Finish (it will also return false), but I need to know before I call Finish, because my final processing might fail and I can no longer return the error once I have already called Finish.
There's no way to distinguish until the AsyncNotifyWhenDone tag returns. It may come after the Read, in which case you may need to buffer the read result until it does. In the sync API you can check IsCancelled() at any time (and you can also do that in the callback API, which should be available for general use soon).
First, I want to say that I'm new to Boost.Asio; I have seen a lot of examples, but there are still things I don't understand.
I want to create a server that will accept two clients (it will use two sockets). The first client will send messages to the server and the server will forward each message to the other client (yes, using a server for this is pointless, but that's not the point here; I want to understand how all this works). This continues until one of the clients closes.
So I created a server, the server waits for the clients, and then it must wait for the first client to send a message. And this is my question: what must I do after that?
I thought I need to read from the first socket and then write to the second, and so on, but how do I know whether the first client has written to the socket? Likewise, how do I know whether the second client has read from the second socket?
I don't need code, I just want to know the good way to do that.
Thanks a lot for reading!
When you perform async_read you specify a callback that is called whenever data has been read into the buffer (you also provide the buffer; check async_read's documentation). Likewise, you provide a callback for async_write so you know when your data has been sent. So, from the server's perspective, for the client that 'writes' you should do an async_read, and for the second client, which 'reads', you should do an async_write. With the proposed data flow client1->server->client2 there is no inherent way to know which client the server should read from and which one it should write to; it's up to you. You can, for example, treat the first connected client as the writer and the second as the reader; a relay sketch is shown below.
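As a rough sketch of that relay loop (the class name Relay, the buffer size and the minimal error handling are assumptions for illustration): the server keeps an async_read pending on the writing client's socket and, in the read handler, async_writes the received bytes to the other client's socket; when the write completes it issues the next read.

// Minimal relay sketch: read from one connected socket and forward each
// chunk to the other. The Relay object must stay alive while its
// asynchronous operations are pending.
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

using boost::asio::ip::tcp;

class Relay {
public:
    Relay(tcp::socket& reader, tcp::socket& writer)
        : reader_(reader), writer_(writer) {}

    void start() { do_read(); }

private:
    void do_read() {
        // Wait for client1 to write something.
        reader_.async_read_some(boost::asio::buffer(buffer_),
            boost::bind(&Relay::on_read, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

    void on_read(const boost::system::error_code& ec, std::size_t n) {
        if (ec) return;  // client1 closed the connection or an error occurred
        // Forward the received bytes to client2.
        boost::asio::async_write(writer_, boost::asio::buffer(buffer_, n),
            boost::bind(&Relay::on_write, this,
                        boost::asio::placeholders::error));
    }

    void on_write(const boost::system::error_code& ec) {
        if (!ec) do_read();  // delivered; go back to waiting for the next message
    }

    tcp::socket& reader_;
    tcp::socket& writer_;
    boost::array<char, 1024> buffer_;
};

The two sockets come from the server's two accept operations; which one plays the "reader" role and which the "writer" role is a decision you make when the clients connect.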
You might want to start with asio iostreams. It's a high-level, iostream-like abstraction on top of sockets.
P.S.: also, don't forget to call io_service.run() somewhere, because all the asio callbacks are executed within that loop.
I have a simple client application (using QWebSocket) that wants to connect to my server application (i.e. QWebSocketServer).
When I open a connection to a QWebSocketServer that is down/unavailable, my QWebSocket fires a "disconnected" signal after 30 seconds.
This is good as it helps me to understand that the server is down/unavailable so I can retry or warn the user about the problem.
If the link between the client and server fails, the same thing happens, i.e. writing to the QWebSocket (sendBinaryMessage) causes the disconnected signal to be fired after 30 seconds.
I would like to know what the default timeouts in QWebSocket are and how I can modify them.
Where can I find such information/documentation? The Qt documentation on web sockets does not mention this behaviour at all! Should I read the code or ...?!
Thanks in advance
I doubt that any of these timers are part of Qt; these timeouts exist as part of the underlying operating system's TCP/IP implementation. A socket waiting for a connection will eventually time out and go bad if the remote end does not respond. The same applies if sent data is not acknowledged after a reasonable amount of time.
Qt, however, does everything asynchronously and uses signals and slots to notify you when something has happened. This means that if you want to shorten a timeout, the simplest way is a QTimer that runs in parallel to your QAbstractSocket; if the timer times out before the socket signals its response, you can take appropriate action, as in the sketch below.
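Here is a minimal sketch of that idea (the URL, the 5-second value and the choice of abort() are assumptions for illustration, not Qt defaults; it needs the Qt WebSockets module, QT += websockets):

#include <QCoreApplication>
#include <QTimer>
#include <QUrl>
#include <QWebSocket>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QWebSocket socket;
    QTimer connectTimer;
    connectTimer.setSingleShot(true);

    // If the socket connects in time, stop the watchdog timer.
    QObject::connect(&socket, &QWebSocket::connected, &connectTimer, &QTimer::stop);

    // If the timer fires first, abort the attempt ourselves instead of
    // waiting ~30 seconds for the OS-level timeout to emit disconnected().
    QObject::connect(&connectTimer, &QTimer::timeout, &socket, [&socket]() {
        qWarning("Connection attempt timed out, aborting");
        socket.abort();
    });

    socket.open(QUrl(QStringLiteral("ws://example.com:1234"))); // hypothetical server
    connectTimer.start(5000); // our own 5-second limit

    return app.exec();
}

If the server answers in time, connected() stops the timer; otherwise the timer fires first and the code gives up without waiting for the long OS-level timeout.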
Failing that, there may be some socket options that allow you to set the various timeouts on your TCP connection to your liking.
From QWebSocket:
This class was modeled after QAbstractSocket.
QAbstractSocket in turn inherits from QIODevice.
The documentation of these classes has some information about timeouts. Specifically, you can see the default of 30 seconds pop up here and there.
Another place to look is QObject's documentation (QWebSocket inherits from it). Perhaps by overriding QObject's timer-related virtual functions you can get in between these mechanisms and change the timeout.
Sorry to not be of more help.
In every single Linux/Unix socket tutorial and example I have seen on the internet, the server-side code always involves an infinite loop that checks for a client connection.
Examples:
http://www.thegeekstuff.com/2011/12/c-socket-programming/
http://tldp.org/LDP/LG/issue74/tougher.html#3.2
Is there a more efficient way to structure the server-side code so that it does not involve an infinite loop, or to code the infinite loop in a way that takes up fewer system resources?
The infinite loop in those examples is already efficient. The call to accept() is a blocking call: the function does not return until a client connects to the server. Execution of the thread that called accept() is halted, and it does not consume any processing power.
Think of accept() as a call to join(), or like waiting on a mutex/lock/semaphore.
Of course, there are many other ways to handle incoming connections, but those other ways deal with the blocking nature of accept(). This function is difficult to cancel, so there exist non-blocking alternatives that allow the server to perform other actions while waiting for an incoming connection. One such alternative is using select(), as sketched below. Other alternatives are less portable, as they involve low-level operating system calls that signal the connection through a callback function, an event, or some other asynchronous mechanism handled by the operating system...
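A rough sketch of the select() variant (port number, backlog and timeout are arbitrary choices): the loop waits on the listening descriptor with select(), so it can also enforce a timeout or watch other descriptors, and only calls accept() once select() reports the listener as readable.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    // Ordinary listening socket on an arbitrary port.
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);
    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listener, 8);

    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listener, &readfds);

        timeval timeout{};          // wake up once per second even with no clients,
        timeout.tv_sec = 1;         // e.g. to check a shutdown flag

        int ready = select(listener + 1, &readfds, nullptr, nullptr, &timeout);
        if (ready < 0)
            break;                  // select() failed
        if (ready == 0)
            continue;               // timeout expired: do housekeeping, then wait again

        if (FD_ISSET(listener, &readfds)) {
            int client = accept(listener, nullptr, nullptr);  // will not block now
            if (client >= 0) {
                const char msg[] = "hello\n";
                write(client, msg, sizeof(msg) - 1);
                close(client);
            }
        }
    }

    close(listener);
    return 0;
}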
For C++ you could look into Boost.Asio. You could also look into asynchronous I/O functions. There is also SIGIO.
Of course, even when using these asynchronous methods, your main program still needs to sit in a loop, or the program will exit.
The infinite loop is there to maintain the server's running state, so when a client connection is accepted, the server won't quit immediately afterwards; instead, it goes back to listening for another client connection.
The accept() call is a blocking one - that is to say, it waits until a client connects. It does this in an extremely efficient way, using essentially zero system resources (until a connection is made, of course), because the operating system's network drivers trigger an event (or hardware interrupt) that wakes the listening thread up.
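For reference, a minimal version of the loop those tutorials use might look like this (port and reply text are arbitrary); accept() simply sleeps in the kernel until a client connects.

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);          // arbitrary port
    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listener, 8);

    for (;;) {                            // the "infinite loop" from the tutorials
        // Sleeps inside the kernel, consuming no CPU, until a client connects.
        int client = accept(listener, nullptr, nullptr);
        if (client < 0)
            continue;
        const char reply[] = "hello\n";
        write(client, reply, sizeof(reply) - 1);
        close(client);                    // then go straight back to accept()
    }
}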
Here's a good overview of what techniques are available - The C10K problem.
When you are implementing a server that listens for a potentially unbounded number of connections, there is IMO no way around some sort of infinite loop. Usually this is not a problem at all, because as long as your socket is not marked non-blocking, the call to accept() will block until a new connection arrives. Because of this blocking, no system resources are wasted.
Other libraries that provide something like an event-based system are ultimately implemented in the way described above.
In addition to what has already been posted, it's fairly easy to see what is going on with a debugger. You will be able to single-step through until you execute the accept() line, at which point the 'single-step' highlight will disappear and the app will run on - the next line is not reached. If you put a breakpoint on the next line, it will not fire until a client connects.
We need to follow best practice when writing client-server programs. The best guide I can recommend at this time is The C10K Problem. There are specific things we need to follow in this case: we can use select, poll, or epoll. Each has its own advantages and disadvantages.
If you are running your code on a recent kernel version, then I would recommend going with epoll; see the sketch below for a minimal sample program.
If you are using select, poll, or epoll, the call blocks until you get an event/trigger, so your server does not spin in an infinite loop consuming CPU time.
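A minimal epoll sketch (port, backlog and buffer size are arbitrary; a real server would also want proper error handling and non-blocking sockets): the listening socket and every accepted client are registered with epoll, and epoll_wait() blocks until one of them is ready.

#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);              // arbitrary port
    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listener, 128);

    int epfd = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = listener;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listener, &ev);   // watch the listening socket

    epoll_event events[64];
    for (;;) {
        // Blocks until at least one watched descriptor is ready; no busy loop.
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            if (fd == listener) {
                int client = accept(listener, nullptr, nullptr);
                if (client < 0)
                    continue;
                epoll_event cev{};
                cev.events = EPOLLIN;
                cev.data.fd = client;
                epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);  // watch the new client
            } else {
                char buf[1024];
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) {                                  // closed or error
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, nullptr);
                    close(fd);
                } else {
                    write(fd, buf, r);                         // simple echo
                }
            }
        }
    }
}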
In my personal experience, I feel epoll is the best way to go, as I observed that the load on my server machine with 80k ACTIVE connections was much lower compared with select and poll. The load average of my server machine was just 3.2 with 80k active connections :)
When testing with poll, I found my server's load average went up to 7.8 on reaching 30k active client connections :(.
I am using boost::asio::ip::udp::socket to communicate. I use socket.receive_from(...) to receive messages from clients. This is working fine for now, but I want to be able to shut down my server. Right now I am calling receive_from in a while-loop, which depends on a bool condition that I can set. However, this is pretty useless if I cannot force the thread to exit receive_from at regular intervals or on demand.
Is this even possible? I have tried googling, but found no clear answer. I have tried using socket.cancel() but this seems to have no effect.
Am I using the socket in the correct way?
There's no good way to do what you want using the synchronous receive_from method. You should use the asynchronous async_receive_from method if you desire timeouts and cancelability. There's a ticket on the Boost.Asio trac website describing this.
I answered a similar question recently that you might find useful as well.
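As a minimal sketch of the asynchronous approach (the Server class, the port and the ten-second run time are assumptions for illustration): the receive is always pending as an async_receive_from whose handler re-arms itself, and shutdown is done by closing the socket on the io_service thread, which completes the pending handler with operation_aborted.

#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <chrono>
#include <iostream>
#include <thread>

using boost::asio::ip::udp;

class Server {
public:
    explicit Server(boost::asio::io_service& io)
        : socket_(io, udp::endpoint(udp::v4(), 5000)) {  // arbitrary port
        start_receive();
    }

    void stop() {
        // Posted so the close happens on the io_service thread.
        socket_.get_io_service().post(boost::bind(&Server::do_stop, this));
    }

private:
    void start_receive() {
        socket_.async_receive_from(boost::asio::buffer(buffer_), sender_,
            boost::bind(&Server::on_receive, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

    void on_receive(const boost::system::error_code& ec, std::size_t n) {
        if (ec == boost::asio::error::operation_aborted)
            return;                       // we are shutting down
        if (!ec)
            std::cout << "got " << n << " bytes\n";
        start_receive();                  // re-arm for the next datagram
    }

    void do_stop() {
        boost::system::error_code ignored;
        socket_.close(ignored);           // cancels the pending async receive
    }

    udp::socket socket_;
    udp::endpoint sender_;
    boost::array<char, 1024> buffer_;
};

int main() {
    boost::asio::io_service io;
    Server server(io);

    std::thread runner([&io]() { io.run(); });

    std::this_thread::sleep_for(std::chrono::seconds(10));  // serve for a while
    server.stop();       // unblocks the pending async_receive_from
    runner.join();       // io.run() returns once no work remains
}

Note that socket.cancel() only affects pending asynchronous operations; it has no effect on a thread blocked inside the synchronous receive_from, which is why the asynchronous form is needed for clean shutdown.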