I would like to have several threads that change the value of certain elements periodically. Say each has some kind of run method that changes the value and then sleeps for a certain number of milliseconds. I need to be able to change the interval a) right after the change and b) after the sleep. I also need a way to switch between single execution and repeated execution at timed intervals.
The problem with using a Timer is that I do not have the possibilities I have when using threads directly, such as naming them or using conditions.
Can anybody please give me a hint in the right direction?
I have a timer class which uses std::condition_variable::wait_until (I have also tried wait_for). I am using std::chrono::steady_clock to wait until a specific time in the future.
This is meant to be monotonic, but there has been a long-standing issue where the wait actually uses the system clock and fails to work correctly when the system time is changed.
It has been fixed in libstdc++ as discussed here: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=41861.
The issue is that this fix is still pretty new (~2019) and only available from GCC 10, while I have some cross compilers that are only up to GCC 8 or so.
I am wondering if there is a way to get this fix into my versions of GCC (I have quite a few cross compilers) - but this might prove difficult to maintain if I have to rebuild the cross compilers each time I update them.
So a better question might be: what is a solution to this issue until I can get all my tools up to GCC 10? How can I make my timer resistant to system time changes?
Updated notes
rustyx mentions that the glibc version needed is 2.30+ (more for my reference - use ldd --version to check that).
libstdc++ ChangeLog showing the relevant entry for the fix, supplied by Daniel Langr: https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-v3/ChangeLog-2019#L2093
The required libstdc++ patch, supplied by rustyx: https://gcc.gnu.org/git/?p=gcc.git&a=commit;h=ad4d1d21ad5c515ba90355d13b14cbb74262edd2
Create a data structure that contains a list of condition variables each with a use count protected by a mutex.
When a thread is about to block on a condition variable, first acquire the mutex and add the condition variable to the list (or bump its use count if it's already on the list).
When done blocking on the condition variable, have the thread again acquire the mutex that protects the list and decrement the use count of the condition variable it was blocked on. Remove the condition variable from the list if its use count drops to zero.
Have a dedicated thread to watch the system clock. If it detects a clock jump, acquire the mutex that protects the list of condition variables and broadcast every condition variable.
That's it. That solves the problem.
If necessary, you can also add a boolean to each entry in the list and set it to false when the entry is added. If the clock-watcher thread has broadcast the condition variable, have it set the boolean to true so the woken threads will know why they were woken.
If you wish, you can just add the condition variable to the list when it's created and remove it from the list when it's destroyed. This will result in broadcasting condition variables no threads are blocked on if the clock jumps, but that's harmless.
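A minimal sketch of what such a registry could look like (the class and member names here are just illustrative, and error handling is omitted):

#include <condition_variable>
#include <map>
#include <mutex>

// Registry of condition variables that may currently have waiters.
// The clock-watcher thread broadcasts every entry when it detects a time jump.
class CvRegistry {
public:
    void add(std::condition_variable* cv) {
        std::lock_guard<std::mutex> lk(m_);
        ++uses_[cv];                                   // insert with count 1, or bump the count
    }
    void remove(std::condition_variable* cv) {
        std::lock_guard<std::mutex> lk(m_);
        if (--uses_[cv] == 0) uses_.erase(cv);         // drop the entry when the last waiter leaves
    }
    void broadcast_all() {                             // called by the clock-watcher thread
        std::lock_guard<std::mutex> lk(m_);
        for (auto& entry : uses_) entry.first->notify_all();
    }
private:
    std::mutex m_;
    std::map<std::condition_variable*, int> uses_;     // condition variable -> use count
};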
Here are some implementation suggestions:
Use a dedicated thread to watch the clock. An easy thing to look at is the offset between wall time and the system's uptime clock.
One simple thing to do is to keep a count of the number of time jumps observed and increment it each time you sense a time jump. When you wait for a condition, you can use the following logic:
1. Note the number of time jumps.
2. Block on the condition.
3. When you wake up, recheck the condition.
4. If the condition isn't satisfied, check the number of time jumps.
5. If the counts from steps 1 and 4 don't match, handle it as a time-jump wakeup.
You can wrap this all up so that there's no ugliness in the calling code. It just becomes another possible return value from your version of wait_for.
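For illustration, a wrapper along those lines might look like this (the WaitResult enum, the time_jumps counter, and the function name are all made up for this sketch; the counter is assumed to be incremented by the clock-watcher thread):

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>

enum class WaitResult { Satisfied, TimeJump, TimedOutOrSpurious };

std::atomic<unsigned> time_jumps{0};   // bumped by the clock-watcher thread on every jump

template <class Pred>
WaitResult wait_for_jump_aware(std::condition_variable& cv,
                               std::unique_lock<std::mutex>& lk,
                               std::chrono::milliseconds timeout,
                               Pred pred)
{
    const unsigned jumps_before = time_jumps.load();   // 1. note the jump count
    cv.wait_for(lk, timeout);                          // 2. block (may also wake spuriously)
    if (pred())                                        // 3. recheck the condition
        return WaitResult::Satisfied;
    if (time_jumps.load() != jumps_before)             // 4./5. counts differ: a time jump woke us
        return WaitResult::TimeJump;
    return WaitResult::TimedOutOrSpurious;             // caller decides whether to wait again
}

The caller would typically loop, recomputing its deadline whenever TimeJump comes back.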
I have a loop in C++ that I would like to run for a few seconds. Although the amount of work per iteration varies from a few microseconds to seconds, it is OK to stop between iterations. It is high-performance code, so I would like to avoid calculating the time difference on each iteration:
while (!status.has_value())
{
    // do something

    // this adds extra delays that I would like to avoid
    if (duration_cast<seconds>(system_clock::now() - started).count() >= limit)
        status = CompletedBy::duration;
}
What I'm thinking is that maybe there is a way to schedule a signal and then stop the loop when it fires, instead of checking the time difference on every iteration.
BTW, the loop may exit before the signal.
I have done something similar, but in Java. The general idea is to use a separate thread to manage a sentinel value, making your loop look like...
okayToLoop = true;
// code to launch a thread that will wait N milliseconds and then negate okayToLoop
while (!status.has_value() && okayToLoop) {
    // loop code
}
The "cautionary note" is that many sleep() functions for threads employ "sleep at least" semantics, so if it is really important to only sleep N milliseconds, you'll need to address that in your thread implementation. But, this avoids constantly checking the duration for each iteration of the loop.
Note that this will also allow the current iteration of the loop to finish, before the sentinel value is checked. I have also implemented this approach where the "control thread" actually interrupts the thread on which the loop is executing, interrupting the iteration. When I've done this, I've actually put the loop into a worker thread.
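A minimal C++ sketch of this sentinel idea, assuming the status/CompletedBy types from the question and an illustrative okayToLoop flag (the watchdog thread is simplified; a real version would want a way to cancel its sleep early):

#include <atomic>
#include <chrono>
#include <optional>
#include <thread>

enum class CompletedBy { duration /* ... */ };      // stand-in for the question's enum

void run_with_deadline(std::optional<CompletedBy>& status, std::chrono::milliseconds limit)
{
    std::atomic<bool> okayToLoop{true};

    std::thread watchdog([&] {                      // waits, then clears the sentinel
        std::this_thread::sleep_for(limit);
        okayToLoop.store(false, std::memory_order_relaxed);
    });

    while (!status.has_value() && okayToLoop.load(std::memory_order_relaxed)) {
        // loop body: one unit of work per iteration
    }

    watchdog.join();   // note: if status is set early, this still waits out the sleep
}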
Any form of inter-thread communication is going to be way slower than a simple query of a high performance clock.
Now, steady_clock::now() might be too slow in the loop.
Using OS-specific APIs, bind your thread to a specific CPU and give it a ridiculously high priority. Or use rdtsc, after taking into account everything that can go wrong with it, and calculate the counter value you'd expect to see once (a) something has gone wrong or (b) you have passed the time threshold.
When that happens, check steady_clock::now(), see if you are close enough to being done, and if so finish. If not, calculate a new high performance clock target and loop again.
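A rough sketch of that rdtsc approach, assuming an x86 target with GCC or Clang (where __rdtsc() comes from <x86intrin.h>); the calibration here is deliberately crude and the 1 ms re-arm interval is arbitrary:

#include <chrono>
#include <thread>
#include <x86intrin.h>   // __rdtsc(); x86-specific

using steady = std::chrono::steady_clock;

// One-off, crude estimate of TSC ticks per nanosecond.
static double tsc_ticks_per_ns()
{
    const auto t0 = steady::now();
    const unsigned long long c0 = __rdtsc();
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    const unsigned long long c1 = __rdtsc();
    const auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(steady::now() - t0).count();
    return static_cast<double>(c1 - c0) / static_cast<double>(ns);
}

void run_for(std::chrono::seconds limit)
{
    const double ticks_per_ns = tsc_ticks_per_ns();
    const auto deadline = steady::now() + limit;
    const auto limit_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(limit).count();
    unsigned long long tsc_target = __rdtsc() + static_cast<unsigned long long>(ticks_per_ns * limit_ns);

    for (;;) {
        // ... one iteration of work ...
        if (__rdtsc() >= tsc_target) {                 // cheap check on every iteration
            if (steady::now() >= deadline)             // authoritative check, taken rarely
                break;
            // Not done yet: re-arm the cheap target ~1 ms out and keep looping.
            tsc_target = __rdtsc() + static_cast<unsigned long long>(ticks_per_ns * 1'000'000);
        }
    }
}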
I'm trying to figure out the best way to do this, but I'm getting a bit stuck in figuring out exactly what it is that I'm trying to do, so I'm going to explain what it is, what I'm thinking I want to do, and where I'm getting stuck.
I am working on a program that has a single array (an image, really) onto which a large number of objects can be placed each frame. Each object is completely independent of all other objects; the only dependency is the output, since it is possible for two of these objects to be placed at the same location on the array. I'm trying to increase the efficiency of placing the objects on the image so that I can place more objects. In order to do that, I want to thread the problem.
The first step that I have taken towards threading it involves simply mutex-protecting the array. All operations which place an object on the array call the same function, so I only have to put the mutex lock in one place. So far it is working, but it is not seeing the improvement I had hoped for. I am hypothesizing that this is because, most of the time, the limiting factor is the image write statement.
What I'm thinking I need to do next is to have multiple image buffers to write to, and to combine them when all of the operations are done. I should say that obscuration is not a problem; all that needs to be done is to simply add the pixel counts together. However, I'm struggling to figure out what mechanism I need to use in order to do this. I have looked at semaphores, but while I can see that they would limit the number of buffers, I can envision a situation in which two or more threads would be trying to write to the same buffer at the same time, potentially leading to inaccuracies.
I need a solution that does not involve any new non-standard libraries. I am more than willing to build the solution, but I would very much appreciate a few pointers in the right direction, as I'm currently just wandering around in the dark...
To help visualize this, imagine that I am told to place, say, balls at various locations on the image array. I am told to place the balls each frame, with a given brightness, location, and size. The exact location of the balls is dependent on the physics from the previous frame. All of the balls must be placed on a final image array, as quickly as they possibly can be. For the purpose of this example, if two balls are on top of each other, the brightness can simply be added together, thus there is no need to figure out if one is blocking the other. Also, no using GPU cards;-)
Pseudo-code would look like this (assuming that some logical object is given for location, brightness, and size; also assume that isValidPoint simply determines whether the point should be on the circle, given the location and radius of said circle):
global output_array[x_arrLimit * y_arrLimit]

void update_ball(int ball_num)
{
    calc_ball_location(ball_num, *location, *brightness, *size); // location, brightness, size all set inside function
    place_ball(location, brightness, size);
}

void place_ball(location, brightness, size)
{
    get_bounds(location, size, *xlims, *ylims);
    for (int x = xlims.min; x < xlims.max; x++)
    {
        for (int y = ylims.min; y < ylims.max; y++)
        {
            if (isValidPoint(location, size, x, y))
            {
                output_array(x, y) += brightness;
            }
        }
    }
}
The reason you're not seeing any speed up with the current design is that, with a single mutex for the entire buffer, you might as well not bother with threading, as all the objects have to be added serially anyway (unless there's significant processing being done to determine what to add, but it doesn't sound like that's the case). Depending on what it takes to "add an object to the buffer" (do you use scan-line algorithms, flood fill, or something else), you might consider having one mutex per row or a range of rows, or divide the image into rectangular tiles with one mutex per region or something. That would allow multiple threads to add to the image at the same time as long as they're not trying to update the same regions.
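As one concrete illustration of the per-region idea (the band count, dimensions, and names below are arbitrary, and a real version would lock once per band per object rather than per pixel):

#include <array>
#include <mutex>
#include <vector>

constexpr int kWidth  = 1024;
constexpr int kHeight = 1024;
constexpr int kBands  = 16;                            // one mutex per horizontal band of rows

std::vector<float> output_array(kWidth * kHeight);
std::array<std::mutex, kBands> band_mutex;

inline int band_of(int y) { return y * kBands / kHeight; }

// Add brightness to one pixel, locking only the band that owns its row,
// so threads working on different bands never contend with each other.
void add_pixel(int x, int y, float brightness)
{
    std::lock_guard<std::mutex> lk(band_mutex[band_of(y)]);
    output_array[y * kWidth + x] += brightness;
}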
OK, you have an image member in some object. Add the, no doubt complex, code to add other images/objects to it, manipulate it, whatever. Aggregate in all the other objects that may be involved, add a command enum to tell threads which operation to do, and an 'OnCompletion' event to call when done.
Queue it to a pool of threads hanging on the end of a producer-consumer queue. Some thread will get the *object, perform the operation on the image/set and then call the event (passing the completed *object as a parameter). In the event, you can do what you like, according to the needs of your app. Maybe you will add the processed images into a (thread-safe!) vector or other container, or queue them off to some other thread - whatever.
If the order of processing the images must be preserved, (eg. video stream), you could add an incrementing sequence-number to each object that is submitted to the pool, so enabling your 'OnComplete' handler to queue up 'later' images until all earlier ones have come in.
Since no two threads ever work on the same image, you need no locking while processing. The only locks you should (may) need are those internal to the queues, and they only lock for the time taken to push/pop object pointers to/from the queue - contention will be very rare.
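A bare-bones sketch of that producer-consumer arrangement (the Job/JobQueue names are invented for this example, and shutdown handling is left out):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// One unit of work: the image operation plus the 'OnCompletion' event to fire afterwards.
struct Job {
    std::function<void()> run;
    std::function<void()> on_complete;
};

class JobQueue {
public:
    void push(Job job) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(job)); }
        cv_.notify_one();
    }
    Job pop() {                                  // blocks until a job is available
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty(); });
        Job job = std::move(q_.front());
        q_.pop();
        return job;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Job> q_;
};

// Pool of workers: each thread pops a job, performs the operation, then fires the completion event.
void start_pool(JobQueue& queue, int thread_count, std::vector<std::thread>& pool)
{
    for (int i = 0; i < thread_count; ++i)
        pool.emplace_back([&queue] {
            for (;;) {                           // no shutdown path in this sketch
                Job job = queue.pop();
                job.run();
                job.on_complete();
            }
        });
}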
Please consider the following:
I have a queue of objects represented as an array.
I process them off the top of the array (at position 1) before calling arrayDeleteAt() to remove each item from the array.
I add new queue items at the top of the array using arrayAppend().
This works fine. However, I now wish to re-order the array immediately after adding an item.
I am concerned that if a thread is taking from the queue, it will find the queue order has changed between it taking the item at position 1 and it deleting the item at position 1 - because in that time an additional item has been added and the queue has been re-sorted. So I need to ensure my queue is thread-safe.
Is there any way of doing this using the cflock tag? Since my add and remove code are in different places in the code, the thread executing one bit of code would need to know that another thread is executing a specific other bit of code and halt until that other thread has stopped executing its code.
Or am I better off just raising a flag while the sorting is going on and preventing anything being taken from the array while the sort is in progress?
All this is happening in the APPLICATION scope on a CF 8 Enterprise server.
Thanks in advance for any help.
Ciaran
An exclusive CFLOCK should do what you want. You could just scope-lock APPLICATION, but that might be overly broad. Probably best to do it as a named lock. It won't matter where the different bits of code with the lock are located, as long as they're all using the same name.
This must be an easy question, but I can't find a proper answer to it.
I'm coding in Visual C++. I have a custom class 'Person' with an attribute 'height'. I want to call a class method Grow() that starts a timer which increments the 'height' attribute every 0.5 seconds.
I'll have a StopGrow() that stops the timer, and a Shrink() that decrements instead of increments.
I really need a little push on which timer to use and how to use it within the Grow() method. The other methods should be straightforward after knowing that.
That's my first question here so please be kind (and warn me if I'm doing it wrong :) Forgive my English, not my first language.
Do you really need to call the code every half second to recalculate a value? For most scenarios there is another way that is much simpler, faster, and just as effective.
Don't expose a height member, but use a method such as GetHeight(), which will calculate the height at the exact moment you need it.
Your Grow() method would set a base height value and start time and nothing else. Then, your GetHeight() method would subtract the starting time from the current time to calculate the height "right now", when you need it.
No timers needed!
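A sketch of what that could look like (the member names and growth rate are illustrative, and the value grows continuously; apply std::floor to the elapsed half-seconds if you want discrete 0.5 s steps):

#include <chrono>

class Person {
public:
    void Grow() {                                   // record a baseline and a start time, nothing else
        baseHeight_ = GetHeight();
        growStart_  = std::chrono::steady_clock::now();
        growing_    = true;
    }
    void StopGrow() {                               // freeze the current value
        baseHeight_ = GetHeight();
        growing_    = false;
    }
    double GetHeight() const {                      // computed only at the moment it is needed
        if (!growing_) return baseHeight_;
        const auto   elapsed     = std::chrono::steady_clock::now() - growStart_;
        const double halfSeconds = std::chrono::duration<double>(elapsed).count() / 0.5;
        return baseHeight_ + unitsPerHalfSecond_ * halfSeconds;
    }
private:
    double baseHeight_         = 0.0;
    double unitsPerHalfSecond_ = 1.0;               // growth per 0.5 s; arbitrary for this sketch
    bool   growing_            = false;
    std::chrono::steady_clock::time_point growStart_{};
};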
Since you're on Windows, the simplest solution is probably to use the GetTickCount() function supplied by Windows.
There isn't a good timer function in the C++ language with a precision guaranteed to be less than a second.
So instead, include the windows.h header and then call GetTickCount() to get a number of milliseconds. The next time you call it, you simply subtract the two values, and if the result is over 500, half a second has elapsed.
Alternatively, if you want to block the thread for half a second, use the Sleep(n) function, where n is the number of milliseconds you want the thread to sleep. (500 in your case)
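A tiny sketch of the polling variant (the helper name is made up):

#include <windows.h>

// Has at least half a second passed since 'start'?
// GetTickCount() wraps roughly every 49.7 days, but the unsigned
// subtraction below still gives the right answer across one wrap.
bool HalfSecondElapsed(DWORD start)
{
    return GetTickCount() - start >= 500;
}

// Usage:
//   DWORD start = GetTickCount();
//   ...
//   if (HalfSecondElapsed(start)) { /* grow by one step, reset start */ }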
You might want to take a look at CreateTimerQueue() and CreateTimerQueueTimer(). I've never personally used them, but they would probably fit the bill.
I currently spawn a thread that is responsible for doing timer-based operations. It calls WaitForSingleObject() on a manual-reset event with a 10 ms timeout. It keeps an internal collection of callbacks in the form of pointer-to-method and the objects that the callbacks are invoked for. This is all hidden behind a singleton that provides a scheduler interface that lets the caller schedule method calls on the objects, either after a timer expiration or regularly on an interval. It looks like the two functions that I mentioned should give you pretty much the same functionality... hmmm... might be time to revisit that scheduler code... ;-)
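For reference, a minimal sketch of how those two calls might be wired up for the Grow() case (the callback, parameter, and 500 ms period are illustrative):

#include <windows.h>

// Signature required for a timer-queue callback.
VOID CALLBACK GrowTick(PVOID param, BOOLEAN /*timerOrWaitFired*/)
{
    auto* height = static_cast<volatile LONG*>(param);
    InterlockedIncrement(height);                      // grow one unit per tick
}

bool StartGrowing(volatile LONG* height, HANDLE& timerQueue, HANDLE& timer)
{
    timerQueue = CreateTimerQueue();
    if (!timerQueue) return false;
    // First fire after 500 ms, then every 500 ms thereafter.
    return CreateTimerQueueTimer(&timer, timerQueue, GrowTick,
                                 (PVOID)height, 500, 500, WT_EXECUTEDEFAULT) != 0;
}

// To stop growing:
//   DeleteTimerQueueTimer(timerQueue, timer, INVALID_HANDLE_VALUE);
//   DeleteTimerQueue(timerQueue);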
Sleep() and the normal timer events run off a 10 ms clock.
For high-resolution timer events on Windows, use high-resolution timers.
Not an easy question at all! You have at least two possibilities:
create a thread that will execute a loop: sleep 0.5 s, increase height, sleep 0.5 s, increase height, and so on (a sketch of this option appears at the end of this answer);
invert the flow of control and pass it to some framework like Boost::Asio that will call your timer handler every 0.5 s.
In order to make the right decision you have to think about your whole application. Does it compute something (then maybe threads)? Does it interact with the user (then maybe event driven)? Each approach has some gotchas:
When you use threads you have to deal with locking, which can be tricky.
When you do event-driven stuff, you have to write asynchronous handlers, which can be tricky.
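A minimal sketch of the first (thread-based) option, assuming a plain integer height and ignoring the fact that StopGrow() may block for up to half a second while the worker finishes its current sleep:

#include <atomic>
#include <chrono>
#include <thread>

class Person {
public:
    ~Person() { StopGrow(); }

    void Grow()   { startTimer(+1); }
    void Shrink() { startTimer(-1); }

    void StopGrow() {                       // also stops shrinking
        running_ = false;
        if (worker_.joinable()) worker_.join();
    }

    int height() const { return height_.load(); }

private:
    void startTimer(int step) {
        StopGrow();                         // at most one timer thread at a time
        running_ = true;
        worker_ = std::thread([this, step] {
            while (running_) {
                std::this_thread::sleep_for(std::chrono::milliseconds(500));
                if (running_) height_ += step;
            }
        });
    }

    std::atomic<int>  height_{0};
    std::atomic<bool> running_{false};
    std::thread       worker_;
};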