I have a simple Client application (using QWebSocket) that wants to connect to my server application (i.e. QWebSocketServer).
When I open a connection to a webSocketServer that is down/unavailable, my webSocket fires a "disconnected" signal after 30 seconds.
This is good as it helps me to understand that the server is down/unavailable so I can retry or warn the user about the problem.
The same thing happens if the link between the client and server fails: writing to the webSocket (sendBinaryMessage) causes the disconnected signal to be fired after 30 seconds.
I would like to know what the default timers in QWebSocket are and how I can modify them.
Where can I find such information/documentation? The Qt documentation on webSockets does not mention this behaviour at all! Should I read the code or ...?!
Thanks in advance
I doubt that any of these timers are part of Qt; these timers exist as part of the underlying operating system's implementation of TCP/IP. A socket waiting for a connection will eventually time out and go bad if the remote end does not respond. The same goes if sent data is not acknowledged after a reasonable amount of time.
Qt, however, does everything asynchronously and makes use of signals and slots to notify you when something has happened. This means that if you want to shorten a timeout, the simplest way to do so is to use a QTimer that runs in parallel to your QAbstractSocket; if the timer times out before the socket signals its response, you can then take appropriate action.
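For illustration, here is a minimal sketch of that idea, assuming a long-lived QWebSocket and an arbitrary 5-second budget (the function name and the numbers are mine, not Qt defaults):

```cpp
// Sketch: enforce our own connect timeout on a QWebSocket with a parallel QTimer.
#include <QWebSocket>
#include <QTimer>
#include <QUrl>

void connectWithTimeout(QWebSocket &socket, const QUrl &url, int msec = 5000)
{
    QTimer *timer = new QTimer(&socket);
    timer->setSingleShot(true);

    // If the socket connects in time, stop the watchdog.
    QObject::connect(&socket, &QWebSocket::connected, timer, &QTimer::stop);

    // If the watchdog fires first, abort the attempt ourselves instead of
    // waiting for the OS-level 30-second timeout.
    QObject::connect(timer, &QTimer::timeout, &socket, [&socket]() {
        socket.abort();   // triggers the usual error/disconnected signals
    });

    timer->start(msec);
    socket.open(url);
}
```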
Failing that, there may be some socket options that allow you to set the various timeouts on your TCP Connection to your liking.
From QWebSocket:
This class was modeled after QAbstractSocket.
QAbstractSocket in turn inherits from QIODevice.
The documentation of these classes has some information about timeouts.
Specifically you can see the default of 30 seconds pop up here and there.
Another place to look at is QObject's documentation (QWebSocket inherits it). Perhaps by overriding QObject's timer-related virtual functions you can get in between these mechanisms and change the timeout.
Sorry to not be of more help.
Please let me explain what my problem is:
I have a GUI application that has to connect to a remote server and stay connected until the user decides to quit the connection, or the server does. I wish to create the client connection mechanism in a separate thread. The client should be able to asynchronously receive data and, in an event-driven style, inform the main GUI thread about it. The thread should also be able to receive data from the GUI thread to be sent to the server.
I come from a low-level microcontroller background, where I would handle this task simply using interrupts, a while(1) loop and flags. The problem is that on a PC this would take too much processor time. I have watched and read a lot of tutorials about sockets and threads in Qt, but I still don't know what the best approach is and how to do it properly.
For now, I have a test server on a remote target that is able to receive connections from the Qt client I am trying to write. I have a class for my client in Qt that inherits from QThread, but then I read that this is not the best approach anymore.
I wish to create a client instance in a new thread (triggered from the GUI thread) that will run forever with exec(). Now I don't know how to handle, using signals, the incoming data from the server and the incoming commands from the main GUI thread. In general I would maybe know how to implement this at a low level, but I have read about the high-level functions Qt delivers for this, and I wish to use them.
I would really appreciate help in this matter. I tried searching, but haven't found any solid, working, up-to-date code examples. Could someone please explain how to create a client instance in a new thread that won't disconnect after sending/receiving some data, but instead stays connected and remains responsive to server calls and GUI thread calls in an event-driven style?
Maybe using the general Qt socket mechanism instead of a separate thread will be better for you. Sockets are very similar to MCU interrupts and simple to use. For your application's requirements they should be enough.
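As a rough sketch (the class, signal and slot names are mine, not from your code), an event-driven client that needs no extra thread could look like this:

```cpp
// Sketch: an event-driven TCP client that can live in the GUI thread.
#include <QObject>
#include <QTcpSocket>

class Client : public QObject
{
    Q_OBJECT
public:
    explicit Client(QObject *parent = nullptr) : QObject(parent)
    {
        connect(&m_socket, &QTcpSocket::connected,    this, &Client::connected);
        connect(&m_socket, &QTcpSocket::readyRead,    this, &Client::onReadyRead);
        connect(&m_socket, &QTcpSocket::disconnected, this, &Client::disconnected);
    }

public slots:
    void open(const QString &host, quint16 port) { m_socket.connectToHost(host, port); }
    void send(const QByteArray &data)            { m_socket.write(data); }

signals:
    void connected();
    void disconnected();
    void dataReceived(const QByteArray &data);   // connect this to your GUI

private slots:
    void onReadyRead() { emit dataReceived(m_socket.readAll()); }

private:
    QTcpSocket m_socket;
};
```

Because everything is driven by signals and slots, the connection stays open and responsive without any while(1) loop; if you later decide you really want it off the GUI thread, you can move the same object to a QThread with moveToThread().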
I have a custom-made hardware panel that I need to connect to.
Using QAbstractSocket class, I managed to connect to it and communicate with it.
But the issue is that if I unplug the ethernet cable after the connected state is reached, no update is made to the state.
So I need a way to "ping" it every X seconds to make sure the connection is not lost.
What would be the best way to do so?
For example, a second thread created once it's connected that will frequently "ping" the device?
This question is not related to the ping part but to the every X seconds part.
I don't know if that's the best way but here is what I used.
QTimer is your friend in this situation. You don't need a separate thread for this, as it is asynchronous (the Qt documentation strongly advises against using threads in this case; see here, for example).
I used QTimer to be able to trigger a specific method every interval of time. (you can set the QTimer as singleshot)
I used another QTimer, to wait for the reply, once the first timed out.
By using signals/slots etc., it was really easy to do.
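A rough sketch of that two-timer idea, assuming you have your own ping message and your own way of detecting the reply (all class names and intervals here are illustrative):

```cpp
// Sketch: one repeating timer sends a ping every X seconds, a single-shot
// timer waits for the reply and reports a lost connection if it times out.
#include <QObject>
#include <QTimer>

class PingWatchdog : public QObject
{
    Q_OBJECT
public:
    explicit PingWatchdog(QObject *parent = nullptr) : QObject(parent)
    {
        m_pingTimer.setInterval(5000);      // ping every 5 seconds
        m_replyTimer.setSingleShot(true);
        m_replyTimer.setInterval(2000);     // allow 2 seconds for a reply

        connect(&m_pingTimer, &QTimer::timeout, this, [this]() {
            emit sendPing();                // your code writes the ping here
            m_replyTimer.start();
        });
        connect(&m_replyTimer, &QTimer::timeout,
                this, &PingWatchdog::connectionLost);
        m_pingTimer.start();
    }

public slots:
    void replyReceived() { m_replyTimer.stop(); }   // call when the device answers

signals:
    void sendPing();
    void connectionLost();

private:
    QTimer m_pingTimer;
    QTimer m_replyTimer;
};
```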
In every tutorial and example I have seen on the internet for Linux/Unix sockets, the server-side code always involves an infinite loop that checks for client connections.
Example:
http://www.thegeekstuff.com/2011/12/c-socket-programming/
http://tldp.org/LDP/LG/issue74/tougher.html#3.2
Is there a more efficient way to structure the server-side code so that it does not involve an infinite loop, or to code the infinite loop in a way that takes up fewer system resources?
The infinite loop in those examples is already efficient. The call to accept() is a blocking call: the function does not return until there is a client connecting to the server. Code execution for the thread which called accept() is halted, and does not take any processing power.
Think of accept() as a call to join() or like a wait on a mutex/lock/semaphore.
Of course, there are many other ways to handle incoming connections, but those other ways deal with the blocking nature of accept(). This function is difficult to cancel, so there exist non-blocking alternatives which allow the server to perform other actions while waiting for an incoming connection. One such alternative is using select(). Other alternatives are less portable as they involve low-level operating system calls to signal the connection through a callback function, an event or any other asynchronous mechanism handled by the operating system...
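For reference, this is roughly what that canonical blocking loop looks like; the port number and the response are placeholders, and error handling is reduced to a minimum:

```cpp
// Sketch: the classic blocking accept() loop with POSIX sockets.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main()
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5000);
    bind(listener, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));
    listen(listener, SOMAXCONN);

    for (;;) {                                            // "infinite" loop, but...
        int client = accept(listener, nullptr, nullptr);  // ...this call sleeps until
        if (client < 0)                                   // a client connects, using
            continue;                                     // no CPU in the meantime
        const char msg[] = "hello\n";
        write(client, msg, sizeof(msg) - 1);
        close(client);
    }
}
```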
For C++ you could look into boost.asio. You could also look into e.g. asynchronous I/O functions. There is also SIGIO.
Of course, even when using these asynchronous methods, your main program still needs to sit in a loop, or the program will exit.
The infinite loop is there to maintain the server's running state, so when a client connection is accepted, the server won't quit immediately afterwards, instead it'll go back to listening for another client connection.
The accept() call is a blocking one - that is to say, it waits until an incoming connection arrives. It does this in an extremely efficient way, using zero system resources (until a connection is made, of course), by making use of the operating system's network drivers, which trigger an event (or hardware interrupt) that wakes the listening thread up.
Here's a good overview of what techniques are available - The C10K problem.
When you are implementing a server that listens for possibly infinite connections, there is, IMO, no way around some sort of infinite loop. Usually this is not a problem at all, because when your socket is not marked as non-blocking, the call to accept() will block until a new connection arrives. Due to this blocking, no system resources are wasted.
Other libraries that provide an event-based system are ultimately implemented in the way described above.
In addition to what has already been posted, it's fairly easy to see what is going on with a debugger. You will be able to single-step through until you execute the accept() line, upon which the 'single-step' highlight will disappear and the app will run on - the next line is not reached. If you put a breakpoint on the next line, it will not fire until a client connects.
We need to follow best practices when writing client-server programs. The best guide I can recommend at this time is The C10K Problem. There are specific things we need to follow in this case. We can go for select, poll, or epoll. Each has its own advantages and disadvantages.
If you are running your code on a recent kernel version, then I would recommend going for epoll. See a sample program to understand epoll.
If you are using select, poll, or epoll, then you will be blocked until you get an event/trigger, so your server will not spin in an infinite loop consuming CPU time.
In my personal experience, I feel epoll is the best way to go, as I observed that the load on my server machine with 80k ACTIVE connections was much lower compared with select and poll. The load average of my server machine was just 3.2 with 80k active connections :)
When testing with poll, I found my server's load average went up to 7.8 on reaching just 30k active client connections :(
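If it helps, here is a rough sketch of what such an epoll loop can look like (Linux only; the listening socket is assumed to be set up non-blocking elsewhere, and error handling is omitted):

```cpp
// Sketch: epoll-based accept/read loop that sleeps until something is ready.
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

void event_loop(int listener)
{
    int ep = epoll_create1(0);

    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = listener;
    epoll_ctl(ep, EPOLL_CTL_ADD, listener, &ev);

    epoll_event events[64];
    for (;;) {
        // Blocks until at least one fd is ready; no CPU is burned while waiting.
        int n = epoll_wait(ep, events, 64, -1);
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            if (fd == listener) {                              // new connection
                int client = accept(listener, nullptr, nullptr);
                epoll_event cev{};
                cev.events = EPOLLIN;
                cev.data.fd = client;
                epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
            } else {                                           // data from a client
                char buf[4096];
                ssize_t len = read(fd, buf, sizeof(buf));
                if (len <= 0) {                                // closed or error
                    epoll_ctl(ep, EPOLL_CTL_DEL, fd, nullptr);
                    close(fd);
                }
                // else: process buf[0..len)
            }
        }
    }
}
```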
I'm new to the Poco framework and not too good with C++, but I am learning. I have to create a server-client based application on Windows.
The problem I have now is that I need to repeatedly, minute by minute, send some data to the clients. I need to do this for the clients that have an active TCP connection with the server. I don't know how I can create an event, or something that is triggered in a thread and makes all the active threads send data to the clients.
My first idea is that I have to rewrite, or extend, the TCPServerDispatcher class. And I don't know how I can identify the active threads from the ThreadPool.
Do you have any ideas, or maybe suggestions, or a tutorial, something?
I can't figure out how to do it...
Hope somebody can give me an idea, or some code example. Thank you.
Can these server<->client threads not obtain the data for themselves? It would be fairly easy to add a 60-second timeout on a read() in each thread and send the data then. Maybe this would involve too many database connections?
Failing that, can you put the latest data in a lockable object and have the threads just lock, write and unlock the latest data on a timeout? Such a solution should really have a write timeout as well, to prevent a badly-behaved client causing its server thread to block while holding the lock. If it's not too large, I suppose the server<->client thread could make a copy of the data to send, but I'm not a great fan of copying, TBH.
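A minimal sketch of that "lockable latest data" idea, combined with the 60-second timeout, using standard C++ threads rather than Poco primitives (all names are illustrative):

```cpp
// Sketch: one shared "latest data" buffer the producer updates, and per-client
// threads that wake up at most once a minute, or sooner if new data arrives.
#include <condition_variable>
#include <mutex>
#include <string>
#include <chrono>
#include <cstdint>

struct LatestData {
    std::mutex mutex;
    std::condition_variable changed;
    std::string payload;
    std::uint64_t version = 0;

    void update(std::string data) {
        {
            std::lock_guard<std::mutex> lock(mutex);
            payload = std::move(data);
            ++version;
        }
        changed.notify_all();                  // wake every client thread
    }
};

// Body of one server<->client thread.
void clientLoop(LatestData &shared /*, client socket */)
{
    std::uint64_t lastSent = 0;
    for (;;) {
        std::string toSend;
        {
            std::unique_lock<std::mutex> lock(shared.mutex);
            // Wait up to 60 s, or until the producer bumps the version.
            shared.changed.wait_for(lock, std::chrono::seconds(60),
                                    [&] { return shared.version != lastSent; });
            lastSent = shared.version;
            toSend = shared.payload;           // copy while holding the lock
        }
        // send(toSend) on this client's socket, outside the lock,
        // so a slow client cannot block the others.
    }
}
```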
There are more complex ways of signalling the server<->client threads that new data is available. It is quite possible to signal each thread that new data is available and have them act upon it 'immediately'. This usually means the server<->client thread waiting on more than one signal. In general, the lower the latency, the more complex the solution :(
Rgds,
Martin
I have a game I am working on in C++ and OpenGL. I have made a threaded server that right now accepts clients (the game) and receives messages from them. Right now the game only sends messages. I want both the game and server to be able to send and receive, but I'm not sure the best way to go about it. I was considering using a thread for sending and one for receiving, both on the same socket. Right now the game runs in a single thread, and the server makes a separate thread for each client.
Looking for suggestions on how to go about it for the game as well as the server (unless your suggestion is the same for both). Any questions, feel free to ask :)
Thanks!
What you need to do is set up an outgoing queue of messages for each client. Say you have 2 clients connected to the server, one being serviced by thread A and the other by thread B. Thread A should do a WaitForMultipleObjects() on its socket and on a semaphore/mutex/condition variable for its queue. That way, if it gets something in its queue, it can wake up and send it out. If it gets a message from the client that it needs to give to client B, it will process that message and put it in thread B's outgoing queue.
This is a very simple synchronization scheme. If your game is very complex or massive, you will have to do something much more clever than this.
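Here is a portable sketch of such a per-client outgoing queue, using standard C++ synchronization instead of the Win32 WaitForMultipleObjects() call mentioned above (class and variable names are illustrative):

```cpp
// Sketch: per-client outgoing message queue guarded by a mutex and a
// condition variable; the client's sender thread sleeps until work arrives.
#include <condition_variable>
#include <deque>
#include <mutex>
#include <string>

class OutgoingQueue {
public:
    void push(std::string msg) {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push_back(std::move(msg));
        }
        m_ready.notify_one();
    }

    // Called by the client's sender thread; blocks until a message is queued.
    std::string pop() {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_ready.wait(lock, [this] { return !m_queue.empty(); });
        std::string msg = std::move(m_queue.front());
        m_queue.pop_front();
        return msg;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_ready;
    std::deque<std::string> m_queue;
};

// Thread A, servicing client A, forwards a message to client B like this:
//   queueForClientB.push(message);   // client B's thread wakes up and sends it
```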
Don't use threads in a game server. Many professional, AAA game servers are single-threaded - every one I've ever seen, in fact.
Consider using Boost.Asio, which implements this well with a C++ API (allowing many different approaches besides just asynchronous I/O). There are plenty of tutorials. However, for the absolute highest performance, you should probably not use threads.
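For example, a minimal single-threaded Boost.Asio acceptor might look roughly like this (assuming a reasonably recent Boost; the port number and the echo behaviour are placeholders, not from the original post):

```cpp
// Sketch: single-threaded asynchronous accept + echo with Boost.Asio.
#include <boost/asio.hpp>
#include <array>
#include <memory>

using boost::asio::ip::tcp;

void accept_loop(tcp::acceptor &acceptor)
{
    acceptor.async_accept(
        [&acceptor](boost::system::error_code ec, tcp::socket socket) {
            if (!ec) {
                auto s   = std::make_shared<tcp::socket>(std::move(socket));
                auto buf = std::make_shared<std::array<char, 1024>>();
                // Read some data and echo it back, all asynchronously.
                s->async_read_some(boost::asio::buffer(*buf),
                    [s, buf](boost::system::error_code ec, std::size_t n) {
                        if (!ec)
                            boost::asio::async_write(
                                *s, boost::asio::buffer(*buf, n),
                                [s, buf](boost::system::error_code, std::size_t) {});
                    });
            }
            accept_loop(acceptor);   // keep accepting; no thread per client
        });
}

int main()
{
    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 12345));
    accept_loop(acceptor);
    io.run();                        // everything happens on this one thread
}
```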