I am writing a wrapper around the C++ API of a programme, which needs to connect to a network. I want my own Connect() function to wait for 2 seconds or less, and continue if no connection is established. What I was thinking of is simply using Sleep(...) and checking again, but this doesn't work:
class MyWrapperClass
{
    IClient* client;

    bool Connect()
    {
        client->Connect();
        int i = 0;
        while (i++ < 20 && !client->IsConnected())
            Sleep(100); /* Sleep for 0.1 s to give the client time to connect (DOESN'T HAPPEN) */
        return client->IsConnected();
    }
};
I am assuming that this fails (i.e. no connection is established) because the thread as a whole stops, including the IClient::Connect() method. I have no access to this method, so I cannot verify whether this starts any other threads or anything.
Is there a better way to have a function wait for a short while without blocking anything?
Edit:
To complicate matters, consider the following: the programme has to be compiled with /clr as the API demands this (so std::thread cannot be used) AND IClient cannot be a managed class (i.e. IClient^ client = gcnew IClient() is not legal), as the class contains unmanaged stuff. Neither is in my power to alter, as it is demanded by the API.
As others pointed out you can't wait without blocking. Blocking IS the entire point of waiting.
I would look carefully at IClient and read any documentation to ensure there is no function that lets you do this asynchronously.
If you have no luck, then you are left with doing a loop with sleep in your code. If you can use C++11 then Chris gave a good suggestion. Otherwise you are left with whatever your OS gives you. On a POSIX system (Unix) you could try usleep() or nanosleep() to get a shorter sleep than sleep(); see http://linux.die.net/man/3/usleep.
If you want to connect with a timeout without switching to another thread you can use socket-level options for that. For example, on Linux you can put the socket in non-blocking mode and use select() with a timeout around connect(), or set send/receive timeouts with setsockopt() (SO_SNDTIMEO/SO_RCVTIMEO). You can try to find a way to pass a preconfigured socket to the library in question.
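If the library does accept a preconfigured socket, the usual portable pattern for a time-limited connect is a non-blocking socket plus select(). Below is a minimal POSIX-flavoured sketch (names and error handling are illustrative, not from the question's API); on Windows the same idea works with ioctlsocket(FIONBIO) and select().

#include <sys/socket.h>
#include <sys/select.h>
#include <fcntl.h>
#include <errno.h>

// Sketch only: time-limited connect() using a non-blocking socket and select().
bool ConnectWithTimeout(int sock, const sockaddr* addr, socklen_t len, int timeoutSec)
{
    // Switch to non-blocking so connect() returns immediately with EINPROGRESS.
    int flags = fcntl(sock, F_GETFL, 0);
    fcntl(sock, F_SETFL, flags | O_NONBLOCK);

    int rc = connect(sock, addr, len);
    if (rc == 0)
        return true;                  // connected immediately
    if (errno != EINPROGRESS)
        return false;                 // real failure

    // Wait until the socket becomes writable or the timeout expires.
    fd_set wset;
    FD_ZERO(&wset);
    FD_SET(sock, &wset);
    timeval tv = { timeoutSec, 0 };
    if (select(sock + 1, nullptr, &wset, nullptr, &tv) <= 0)
        return false;                 // timeout or select error

    // Writability alone is not enough: check the deferred connect result.
    int err = 0;
    socklen_t errLen = sizeof(err);
    getsockopt(sock, SOL_SOCKET, SO_ERROR, &err, &errLen);
    return err == 0;
}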
Related
I am utilizing the Berkeley sockets select function in the following way.
/* Windows and Linux typedefs/aliases/includes are made here, with WSA
   junk already taken care of. */
/** Check if a socket can receive data without waiting.
    \param socket The OS-level socket to check.
    \param to The timeout value. A nullptr value will block forever, and zero
    for each member of the value will cause it to return immediately.
    \return True if recv can be called on the socket without blocking. */
bool CanReceive(OSSocket& socket,
                const timeval* to)
{
    fd_set set;
    FD_ZERO(&set);
    FD_SET(socket, &set);
    /* select() may modify the timeval on Linux, so work on a local copy. */
    timeval toCopy;
    if (to)
        toCopy = *to;
    /* nfds must be the highest descriptor + 1 on POSIX; it is ignored on Windows. */
    int error = select((int)socket + 1, &set, 0, 0, to ? &toCopy : nullptr);
    if (error == -1)
        throw Err(); // will auto set from errno.
    else if (error == 0)
        return false;
    else
        return true;
}
I have written a class that will watch a container of sockets (wrapped up in another class) and add an ID to a separate container that stores info on what sockets are ready to be accessed. The map is an unordered_map.
while (m_running)
{
    for (auto& e : m_idMap)
    {
        auto id = e.first;
        auto socket = e.second;
        timeval timeout = ZeroTime; /* 0 sec, 0 micro */
        if (CanReceive(socket, &timeout) &&
            std::count(m_readyList.begin(), m_readyList.end(), id) == 0)
        {
            /* Only add ids for sockets that are not on the list already. */
            m_readyList.push_back(id);
        }
    }
}
As I'm sure many have noticed, this code runs insanely fast and gobbles up CPU like there is no tomorrow (40% CPU usage with only one socket in the map). My first solution was to have a smart waiting function that keeps the iterations per second to a set value. That seemed to be fine with some people. My question is this: How can I be notified when sockets are ready without using this method? Even if it might require a bunch of macro junk to keep it portable, that's fine. I can only think there might be some way to have the operating system watch it for me and send some sort of notification or event when the socket is ready. Just to be clear, I have chosen not to use .NET.
The loop runs in its own thread, sends notifications to other parts of the software when sockets are ready. The entire thing is multi threaded and every part of it (except this part) uses an event based notification system that eliminates the busy waiting problem. I understand that things become OS-dependent and limited in this area.
Edit: The sockets are run in BLOCKING mode (but select is given a zero timeout, and therefore will not block), and they are operated on in a dedicated thread.
Edit: The system performs great with the smart sleeping functions on it, but not as well as it could with some notification system in place (likely from the OS).
First, you must set the socket non-blocking if you don't want the sockets to block. The select function does not provide a guarantee that a subsequent operation will not block. It's just a status reporting function that tells you about the past and the present.
Second, the best way to do this varies from platform to platform. If you don't want to write lots of platform specific code, you really should use a library like Boost ASIO or libevent.
Third, you can call select on all the sockets at the same time with a timeout. The function will return immediately if any of the sockets are (or were) readable and, if not, will wait up to the timeout. When select returns, it will report whether it timed out or, if not, which sockets were readable.
This will still perform very poorly because of the large number of wait lists the process has to be put on just to be immediately removed from all of them as soon as a single socket is readable. But it's the best you can do with reasonable portability.
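For illustration, a rough sketch of that third option: one select() over all the sockets with a modest timeout, then FD_ISSET() to see which ones are readable. The std::vector container and the 100 ms timeout are assumptions, not taken from the question's code.

#include <vector>
#include <sys/select.h>

// Sketch: one select() call over all sockets with a timeout, instead of
// polling each socket individually with a zero timeout.
std::vector<int> ReadySockets(const std::vector<int>& sockets)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    int maxFd = -1;
    for (int s : sockets)
    {
        FD_SET(s, &readSet);
        if (s > maxFd)
            maxFd = s;
    }

    timeval timeout = { 0, 100000 }; // wait up to 100 ms instead of spinning
    std::vector<int> ready;
    int rc = select(maxFd + 1, &readSet, nullptr, nullptr, &timeout);
    if (rc > 0)
    {
        for (int s : sockets)
            if (FD_ISSET(s, &readSet))
                ready.push_back(s);
    }
    return ready;
}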
How can I be notified when sockets are ready without using this method?
That's what select() is for. The idea is that your call to select() should block until at least one of the sockets you passed in to it (via FD_SET()) is ready-for-read. After select() returns, you can find out which socket(s) are now ready-for-read (by calling FD_ISSET()) and call recv() on those sockets to get some data from them and handle it. After that you loop again, go back to sleep inside select() again, and repeat ad infinitum. In this way you handle all of your tasks as quickly as possible, while using the minimum amount of CPU cycles.
The entire thing is multi threaded and every part of it (except this part) uses an event based notification system that eliminates the busy waiting problem.
Note that if your thread is blocked inside of select() and you want it to wake up and do something right away (i.e. without relying on a timeout, which would be slow and inefficient), then you'll need some way to cause select() in that thread to return immediately. In my experience the most reliable way to do that is to create a pipe() or socketpair() and have the thread include one end of the file-descriptor-pair in its ready-for-read fd_set. Then when another thread wants to wake that thread up, it can do so simply by sending a byte on the other end of the pair. That will cause select() to return, the thread can then read the single byte (and throw it away), and then do whatever it is supposed to do after waking up.
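A condensed sketch of that wake-up technique, assuming POSIX socketpair(); the wakePair/WakeSelectThread names are illustrative.

#include <sys/socket.h>
#include <sys/select.h>
#include <unistd.h>

int wakePair[2]; // wakePair[0]: read end watched by select(), wakePair[1]: write end

void SetupWakePipe()
{
    socketpair(AF_UNIX, SOCK_STREAM, 0, wakePair);
}

// Called from another thread to force select() to return.
void WakeSelectThread()
{
    char b = 1;
    write(wakePair[1], &b, 1);
}

// Inside the select() thread: include the read end in the fd_set.
void WaitForActivity(int dataSocket)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(dataSocket, &readSet);
    FD_SET(wakePair[0], &readSet);
    int maxFd = (dataSocket > wakePair[0] ? dataSocket : wakePair[0]);

    select(maxFd + 1, &readSet, nullptr, nullptr, nullptr); // block until something is readable

    if (FD_ISSET(wakePair[0], &readSet))
    {
        char b;
        read(wakePair[0], &b, 1); // drain the wake-up byte, then handle the request
    }
    if (FD_ISSET(dataSocket, &readSet))
    {
        // recv() on dataSocket will not block now.
    }
}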
I am new to multi-threading. I am using C++ on Unix.
In the code below, runSearch() takes a long time and I want to be able to kill the search as soon as "cancel == true". The function cancelSearch is called by another thread.
What is the best way to solve this problem?
Thank you.
------------------This is the existing code-------------------------
struct SearchTask : public Runnable
{
    bool cancel = false;

    void cancelSearch()
    {
        cancel = true;
    }

    void run()
    {
        cancel = false;
        runSearch();
        if (cancel == true)
        {
            return;
        }
        //...more steps.
    }
};
EDIT: To make it more clear, say runSearch() takes 10 mins to run. After 1 min, cancel==true, then I want to exit out of run() immediately rather than waiting another 9 more mins for runSearch() to complete.
You'll need to keep checking the flag throughout the search operation. Something like this:
void run()
{
    cancel = false;
    while (!cancel)
    {
        runSearch();
        //do your thread stuff...
    }
}
You have mentioned that you cannot modify runSearch(). With pthreads there is pthread_cancel() (with pthread_setcancelstate() controlling whether cancellation is acted on), however I don't believe this is safe, especially with C++ code that expects RAII semantics.
Safe thread cancellation must be cooperative. The code that gets canceled must be aware of the cancellation and be able to clean up after itself. If the code is not designed to do this and is simply terminated then your program will probably exhibit undefined behavior.
For this reason C++'s std::thread does not offer any method of thread cancellation and instead the code must be written with explicit cancellation checks as other answers have shown.
Create a generic method that accepts an action/delegate. Have each step be something REALLY small and specific. Send the generic method a delegate/action of what you consider a "step". In the generic method, detect if cancel is true and return if so. Because the steps are small, if the search is cancelled it shouldn't take long for the thread to die.
That is the best advice I can give without any code of what the steps do.
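A minimal sketch of that idea (the Step alias and CancellableTask are illustrative, not part of the question's Runnable interface):

#include <atomic>
#include <functional>
#include <vector>

using Step = std::function<void()>;

struct CancellableTask
{
    std::atomic<bool> cancel{false};

    // Runs each small step, checking the cancel flag in between.
    void runSteps(const std::vector<Step>& steps)
    {
        for (const Step& step : steps)
        {
            if (cancel)
                return;    // stop between steps as soon as cancellation is requested
            step();
        }
    }
};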
Also note:
void run()
{
    cancel = false;
    runSearch();
    while (!cancel)
    {
        //do your thread stuff...
    }
}
won't work, because if what you are doing is not an iteration it will run the entire thing before checking for !cancel. Like I said, if you can add more details on what the steps do it would be easier to give you advice. When working with threads that you want to halt or kill, your best bet is to split your code into very small steps.
Basically you have to poll the cancel flag everywhere. There are other tricks you could use, but they are more platform-specific, like thread cancellation, or are not general enough like interrupts.
And cancel needs to be an atomic variable (like std::atomic<bool>, or just protect it with a mutex), otherwise the compiler might just cache the value in a register and never see the update coming from another thread.
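For example, a minimal sketch of the question's SearchTask with the flag made atomic (keeping the Runnable base from the question):

#include <atomic>

// Sketch: atomic cancel flag, so the write from cancelSearch() in one thread
// reliably becomes visible to run() in the other.
struct SearchTask : public Runnable
{
    std::atomic<bool> cancel{false};

    void cancelSearch()
    {
        cancel = true;
    }

    void run()
    {
        cancel = false;
        while (!cancel)
        {
            // ...do one small unit of search work, then the flag is re-checked...
        }
    }
};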
Reading the responses, they are right - just because you've called a blocking function in a thread doesn't mean it magically turns into a non-blocking call. The thread may not interrupt the rest of the program, but it still has to wait for the runSearch call to complete.
OK, so there are ways round this, but they're not necessarily safe to use.
You can kill a thread explicitly. On Windows you can use TerminateThread(), which will kill the thread's execution. Sounds good, right? Well, except that it is very dangerous to use - unless you know exactly what resources and calls are going on in the killed thread, you may find yourself with an app that refuses to work correctly next time round. If runSearch opens a DB connection, for example, the TerminateThread call will not close it. The same applies to memory, loaded DLLs, and everything they use. It's designed for killing totally unresponsive threads so you can close a program and restart it.
Given the above, and the very strong recommendation that you not use it, the next step is to run runSearch externally - if you run your blocking call in a separate process, then the process can be killed with a lot more certainty that you won't bugger everything else up. The process dies, clears up its memory, its heap, any loaded DLLs, everything. So inside your thread, call CreateProcess and wait on the handle. You'll need some form of IPC (probably best not to use shared memory, as it can be a nuisance to reset when you kill the process) to transfer the results back to your main app. If you need to stop this process, have it call ExitProcess (or exit on Linux) when told to shut down.
Note that these exit calls need to be called from inside the process, so you'll need to run a thread inside the process for your blocking call. You can terminate a process externally, but again, it's dangerous - not nearly as dangerous as killing a thread, but you can still trip up occasionally. (Use TerminateProcess or kill for this.)
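A rough Windows-only sketch of the child-process approach; the worker executable name and the fixed timeout (standing in for whatever cancellation signal you use) are assumptions, and error handling is omitted.

#include <windows.h>

// Sketch: run the blocking work in a child process and kill it if it exceeds a deadline.
bool RunSearchProcess()
{
    char cmdline[] = "search_worker.exe";   // hypothetical worker; CreateProcess may modify this buffer
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    if (!CreateProcessA(nullptr, cmdline, nullptr, nullptr, FALSE, 0,
                        nullptr, nullptr, &si, &pi))
        return false;

    // Wait up to 10 minutes; kill the child if it is still running after that.
    DWORD rc = WaitForSingleObject(pi.hProcess, 10 * 60 * 1000);
    if (rc == WAIT_TIMEOUT)
        TerminateProcess(pi.hProcess, 1);   // forceful, but the OS reclaims everything

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return rc == WAIT_OBJECT_0;
}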
I'm using a third party library which has a blocking function, that is, it won't return until it's done; I can set a timeout for that call.
Problem is, that function puts the library in a certain state. As soon as it enters that state, I need to do something from my own code. My first solution was to do that in a separate thread:
void LibraryWrapper::DoTheMagic()
{
    //...
    boost::thread EnteredFooStateNotifier( &LibraryWrapper::EnterFooState, this );
    ::LibraryBlockingFunction( timeout_ );
    //...
}

void LibraryWrapper::EnterFooState()
{
    ::Sleep( 50 ); // Ensure ::LibraryBlockingFunction is called first
    //Do the stuff
}
Quite nasty, isn't it? I had to put the Sleep call because ::LibraryBlockingFunction must definitely be called before the stuff I do below, or everything will fail. But waiting 50 milliseconds is quite a poor guarantee, and I can't wait more because this particular task needs to be done as fast as possible.
Isn't there a better way to do this? Consider that I don't have access to the Library's code. Boost solutions are welcome.
UPDATE: Like one of the answers says, the library API is ill-defined. I sent an e-mail to the developers explaining the problem and suggesting a solution (i.e. making the call non-blocking and sending an event to a registered callback notifying the state change). In the meantime, I set a timeout high enough to ensure stuff X is done, and set a delay high enough before doing the post-call work to ensure the library function was called. It's not deterministic, but works most of the time.
Would using boost future clarify this code? To use an example from the boost future documentation:
int calculate_the_answer_to_life_the_universe_and_everything()
{
    return 42;
}
boost::packaged_task<int> pt(calculate_the_answer_to_life_the_universe_and_everything);
boost::unique_future<int> fi=pt.get_future();
boost::thread task(boost::move(pt));
// In your example, now would be the time to do the post-call work.
fi.wait(); // wait for it to finish
Although you will still presumably need a bit of a delay in order to ensure that your function call has happened (this bit of your problem seems rather ill-defined - is there any way you can establish deterministically when it is safe to execute the post-call state change?).
The problem as I understand it is that you need to do this:
1. Enter a blocking call
2. After you have entered the blocking call but before it completes, you need to do something else
3. You need to have finished #2 before the blocking call returns
From a purely C++ standpoint, there's no way you can accomplish this in a deterministic way. That is, not without understanding the details of the library you're using.
But I noticed your timeout value. That might provide a loophole, maybe.
What if you:
1. Enter the blocking call with a timeout of zero, so that it returns immediately
2. Do your other stuff, either in the same thread or synchronized with the main thread. Perhaps using a barrier.
3. After #2 is verified to be done, enter the blocking call again, with the normal non-zero timeout.
This will only work if the library's state will change if you enter the blocking call with a zero timeout.
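A sketch of that sequence, reusing the names from the question; whether the library actually enters the desired state on a zero-timeout call is an assumption you would have to verify.

void LibraryWrapper::DoTheMagic()
{
    ::LibraryBlockingFunction( 0 );        // returns immediately, hopefully entering the state
    EnterFooState();                       // do the post-call work deterministically (no Sleep workaround)
    ::LibraryBlockingFunction( timeout_ ); // now make the real, blocking call
}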
Let's say that I have two libraries (A and B), and each has one function that listen on sockets. These functions use select() and they return some event immediately if the data has arrived, otherwise they wait for some time (timeout) and then return NULL:
A_event_t* A_wait_for_event(int timeout);
B_event_t* B_wait_for_event(int timeout);
Now, I use them in my program:
int main(int argc, char *argv[]) {
    // Init A
    // Init B
    // .. do some other initialization
    A_event_t *evA;
    B_event_t *evB;
    for (;;) {
        evA = A_wait_for_event(50);
        evB = B_wait_for_event(50);

        // do some work based on events
    }
}
Each library has its own sockets (e.g. udp socket) and it is not accessible from outside.
PROBLEM: This is not very efficient. If, for example, there are a lot of events waiting to be delivered by *B_wait_for_event*, they always have to wait until *A_wait_for_event* times out, which effectively limits the throughput of library B and my program.
Normally, one could use threads to separate processing, BUT what if processing of some event requires calling a function of the other library and vice versa. Example:
if (evA != 0 && evA == A_EVENT_1) {
    B_do_something();
}
if (evB != 0 && evB == B_EVENT_C) {
    A_do_something();
}
So, even if I could create two threads and separate the functionality from the libraries, these threads would have to exchange events among them (probably through a pipe). This would still limit performance, because one thread would be blocked by the *X_wait_for_event()* function, and it would not be possible to receive data immediately from the other thread.
How to solve this?
This solution may not be available depending on the libraries you're using, but the best solution is not to call functions in individual libraries that wait for events. Each library should support hooking into an external event loop. Then your application uses a single loop which contains a poll() or select() call that waits on all of the events that all of the libraries you use want to wait for.
glib's event loop is good for this because many libraries already know how to hook into it. But if you don't use something as elaborate as glib, the normal approach is this:
Loop forever:
    Start with an infinite timer and an empty set of file descriptors
    For each library you use:
        Call a setup function in the library which is allowed to add file descriptors to your set and/or shorten (but not lengthen) the timeout.
    Run poll()
    For each library you use:
        Call a dispatch function in the library that responds to any events that might have occurred when the poll() returned.
Yes, it's still possible for an earlier library to starve a later library, but it works in practice.
If the libraries you use don't support this kind of setup & dispatch interface, add it as a feature and contribute the code upstream!
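A sketch of that setup/dispatch pattern; the A_setup/A_dispatch and B_setup/B_dispatch hooks are hypothetical, they are what you would ask the library authors to add.

#include <poll.h>
#include <vector>

// Hypothetical hooks each library would need to expose:
void A_setup(std::vector<pollfd>& fds, int& timeoutMs);
void A_dispatch(const std::vector<pollfd>& fds);
void B_setup(std::vector<pollfd>& fds, int& timeoutMs);
void B_dispatch(const std::vector<pollfd>& fds);

// Sketch: one event loop driving both libraries through setup/dispatch hooks.
void RunEventLoop()
{
    for (;;)
    {
        std::vector<pollfd> fds;
        int timeoutMs = -1; // start with "wait forever"

        // Each library adds its descriptors and may only shorten the timeout.
        A_setup(fds, timeoutMs);
        B_setup(fds, timeoutMs);

        poll(fds.data(), fds.size(), timeoutMs);

        // Each library inspects the results and handles whatever is ready.
        A_dispatch(fds);
        B_dispatch(fds);
    }
}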
(I'm moving this to an answer since it's getting too long for a comment)
If you are in a situation where you're not allowed to call A_do_something in one thread while another thread is executing A_wait_for_event (and similarly for B), then I'm pretty sure you can't do anything efficient, and have to settle between various evils.
The most obvious improvement is to immediately take action upon getting an event, rather than trying to read from both: i.e. order your loop
Wait for an A event
Maybe do something in B
Wait for a B event
Maybe do something in A
Other mitigations you could do are
Try to predict whether an A event or a B event is more likely to come next, and wait on that first. (e.g. if they come in streaks, then after getting and handling an A event, you should go back to waiting for another A event)
Fiddle with the timeout values to strike a balance between spin loops and too much blocking. (maybe even adjust dynamically)
EDIT: You might check the APIs for your library; they might already offer a way to deal with the problem. For example, they might allow you to register callbacks for events, and get notifications of events through the callback, rather than polling wait_for_event.
Another thing is if you can create new file descriptors for the library to listen on. e.g. If you create a new pipe and hand one end to library A, then if thread #1 is waiting for an A event, thread #2 can write to the pipe to make an event happen, thus forcing #1 out of wait_for_event. With the ability to kick threads out of the wait_for_event functions at will, all sorts of new options become available.
A possible solution is to use two threads to wait for events, plus a boost::condition_variable in the "main" thread which "does something". A similar (though not identical) solution is here.
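A condensed sketch of that idea using Boost threads; the Event struct, the queue layout, and the loop functions are illustrative assumptions built around the A_wait_for_event/B_wait_for_event declarations from the question.

#include <boost/thread.hpp>
#include <deque>

// Sketch: two waiter threads feed a single queue; the main thread sleeps on a
// condition variable and wakes as soon as either library produces an event.
struct Event { bool fromA; void* payload; };

std::deque<Event> eventQueue;
boost::mutex queueMutex;
boost::condition_variable queueCond;

void WaitLoopA()
{
    for (;;)
    {
        A_event_t* ev = A_wait_for_event(50);
        if (ev)
        {
            boost::lock_guard<boost::mutex> lock(queueMutex);
            eventQueue.push_back({ true, ev });
            queueCond.notify_one();
        }
    }
}

// WaitLoopB is the same with B_wait_for_event and fromA = false.

void MainLoop()
{
    for (;;)
    {
        Event ev;
        {
            boost::unique_lock<boost::mutex> lock(queueMutex);
            while (eventQueue.empty())
                queueCond.wait(lock);   // sleeps; no busy polling here
            ev = eventQueue.front();
            eventQueue.pop_front();
        }
        // Handle ev here (calling A_do_something / B_do_something). Whether that
        // is safe while the waiter threads sit in X_wait_for_event depends on the libraries.
    }
}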
I am currently trying to use boost::asio for some simple TCP networking for the first time, and I already came across something I am not really sure how to deal with. As far as I understand, the io_service.run() method is basically a loop which runs until there is nothing left to do, which means it will run until I release my little server object. Since I already have some sort of main loop set up, I would rather update the networking loop manually from there just for the sake of simplicity, and I think io_service.poll() would do what I want, sort of like this:
void myApplication::update()
{
    myIoService.poll();

    //do other stuff
}
This seems to work, but I am still wondering if there is a drawback to this method, since it does not seem to be the common way to deal with boost::asio's io services. Is this a valid approach, or should I rather use io_service.run() in a non-blocking extra thread?
Using io_service::poll instead of io_service::run is perfectly acceptable. The difference is explained in the documentation
The poll() function may also be used to dispatch ready handlers, but without blocking.
Note that io_service::run will block if there's any work left in the queue
The work class is used to inform the io_service when work starts and finishes. This ensures that the io_service object's run() function will not exit while work is underway, and that it does exit when there is no unfinished work remaining.
whereas io_service::poll does not exhibit this behavior; it just invokes ready handlers. Also note that you will need to invoke io_service::reset before any subsequent invocation of io_service::run or io_service::poll.
A drawback is that you'll make a busy loop.
while (true) {
    myIoService.poll();
}
will use 100% CPU. myIoService.run() will use 0% CPU.
myIoService.run_one() might do what you want but it will block if there is nothing for it to do.
A loop like this lets you poll, doesn't busy-wait, and resets as needed. (I'm using the more recent io_context that replaced io_service.)
while (!exitCondition) {
    if (ioContext.stopped()) {
        ioContext.restart();
    }
    if (!ioContext.poll()) {
        if (stuffToDo) {
            doYourStuff();
        } else {
            std::this_thread::sleep_for(std::chrono::milliseconds(3));
        }
    }
}