Random scheduling of n non-overlapping events in a time interval - c++

Please forgive me if this is a well-known class of problem with a well-known solution. I've been searching but obviously not succeeding.
Assume I have n events that must occur in an interval (e.g., [0,1]). Each event is associated with a duration that is stochastically drawn from a predefined distribution. Assume that the interval is much larger than all the event durations combined: valid schedules always exist. The probabilities of event occurrence over the [0,1] interval are not uniform, but the events are independent as long as they are not overlapping.
What is an efficient and accurate (random) way to schedule these events?
Here's my current pseudocode:
lowerBound = min(interval)
upperBound = max(interval)
for n in numEvents {
    draw startTime
    draw endTime
    while (startTime is less than lowerBound || endTime exceeds upperBound) {
        draw startTime
        draw endTime
    }
    add event
    reset lowerBound and upperBound to define the largest remaining (event-free) interval
}
I think that by choosing the larger remaining interval in which to schedule the new event, I'm making the events overdispersed--more spaced out than they'd otherwise be. This nonetheless seems very efficient and probably extremely accurate when the number of events is small.
I'm using C++, although that's probably irrelevant.
I'd also greatly appreciate search terms, if you know what this kind of problem is called.
Context: Accuracy is most important to me here. This is not a time-intensive step in the overall program.

I'd just place each event randomly, checking for collisions, and if there's a collision wipe the slate clean and start over.
You say that the interval is much larger than the sum of the event durations, so collisions will be rare, so this method is quite fast in practice.
You say you want accurate results. I'm not sure what that means, but at a guess I'd say that all valid solutions should be equally probable; the current solution in the question doesn't satisfy that requirement, but this one does.
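If it helps, here is a rough C++ sketch of that rejection approach over [0,1]; drawStartTime() and drawDuration() are hypothetical stand-ins for your own distributions, so swap in whatever you actually use:

#include <random>
#include <vector>

struct Event { double start, end; };

// Stand-ins for your real distributions (hypothetical; replace with your own).
double drawStartTime(std::mt19937& rng) {
    std::uniform_real_distribution<double> d(0.0, 1.0);  // your non-uniform draw here
    return d(rng);
}
double drawDuration(std::mt19937& rng) {
    std::exponential_distribution<double> d(100.0);      // your duration draw here
    return d(rng);
}

bool overlaps(const Event& a, const Event& b) {
    return a.start < b.end && b.start < a.end;
}

// Rejection sampling: draw all n events independently; if any event leaves the
// interval or any pair overlaps, discard the whole schedule and start over.
std::vector<Event> scheduleEvents(int numEvents, std::mt19937& rng) {
    for (;;) {  // retries are rare when the interval is sparsely filled
        std::vector<Event> events;
        bool valid = true;
        for (int i = 0; i < numEvents && valid; ++i) {
            Event e;
            e.start = drawStartTime(rng);
            e.end = e.start + drawDuration(rng);
            if (e.end > 1.0) valid = false;  // spills past the interval
            for (std::size_t j = 0; j < events.size() && valid; ++j)
                if (overlaps(e, events[j])) valid = false;  // collision: wipe the slate
            if (valid) events.push_back(e);
        }
        if (valid) return events;  // accepted: the independent draws, conditioned only on validity
    }
}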


Switching an image at specific frequencies c++

I am currently developing a stimuli provider for the brain's visual cortex as part of a university project. The program is (preferably) to be written in C++, using Visual Studio and OpenCV. The way it is supposed to work is that the program creates a number of threads, according to the number of different frequencies, each running a timer for its respective frequency.
The code looks like this so far:
void timerThread(void *param) {
    t *args = (t*)param;
    int id = args->data1;
    float freq = args->data2;
    unsigned long period = round((double)1000 / (double)freq) - 1;
    while (true) {
        Sleep(period);
        show[id] = 1;
        Sleep(period);
        show[id] = 0;
    }
}
It seems to work okay for some of the frequencies, but others vary quite a lot in frame rate. I have tried looking into creating my own timing function, similar to what is done in Arduino's "blinkWithoutDelay" function, though this worked very badly. I have also tried the waitKey() function, which behaved much like the Sleep() function used now.
Any help would be greatly appreciated!
You should use a timer instead of "sleep" to fix this, as the loop may sometimes take more or less time to complete.
Restart the timer at the start of the loop and take its value right before the reset; this gives you the time it took for the loop to complete.
If this time is greater than the "period" value, it means you're late and need to execute right away (and even lower the period for the next loop).
Otherwise, if it's lower, you need to wait until it is greater.
I personally dislike sleep, and instead keep checking the timer until it's greater.
I suggest looking into "fixed timestep" code, such as the one below. You'll need to put this snippet of code on every thread with varying values for the period (ns) and put your code where "doUpdates()" is.
If you need a "timer" library, since I don't know OpenCV, I recommend SFML (SFML's timer docs).
The following code is from here:
long int start = 0, end = 0;
double delta = 0;
double ns = 1000000.0 / 60.0; // Syncs updates at 60 per second (59 - 61)
while (!quit) {
    start = timeAsMicro();
    delta += (double)(start - end) / ns; // You can skip dividing by ns here and do "delta >= ns" below instead
    end = start;
    while (delta >= 1.0) {
        doUpdates();
        delta -= 1.0;
    }
}
Please mind the fact that in this code, the timer is never reset.
(This may not be completely accurate but is the best assumption I can make to fix your problem given the code you've presented)
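The snippet above assumes a timeAsMicro() helper. If you need one, a minimal sketch using std::chrono (assuming a C++11 compiler is available) could be:

#include <chrono>

// Microseconds since an arbitrary fixed epoch; steady_clock is monotonic,
// so the value won't jump if the system clock is adjusted.
long long timeAsMicro() {
    using namespace std::chrono;
    return duration_cast<microseconds>(
        steady_clock::now().time_since_epoch()).count();
}

On a related note, initialising end to timeAsMicro() just before entering the loop avoids a huge first delta (and a burst of doUpdates() calls) on the very first pass.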

How would you implement this adaptive 'fudge factor' in a scheduler?

I have a scheduler, endlessly executing n actions. Each action is scheduled for x seconds into the future. When an action completes, it is re-scheduled for another x seconds into the future after its previously scheduled time. Every 1s, the scheduler "ticks", executing at most 25 actions which are due to fire. Actions may take a second or so to complete (though this value should be considered variable and unpredictable).
Say that x is 60 seconds. Due to the throttling of at most 25 actions being executed simultaneously, when n grows large, it is conceivable that the scheduler won't have time to execute all n actions within a 60 second window, and actions will be executed later and later as time goes on. This is undesirable, as it'll become true that there are actions to execute on every single tick and this increases load on my system. It's less important to me to keep x exactly constant than it is to keep load down.
So I wish to implement an adaptive "handicap", an automatically-applied fudge factor h, increasing it when a majority of actions are executed "late", and decreasing it (edging it back to its default of zero) when they're all seemingly and consistently on time. The scheduler would then be made to schedule actions for x+h seconds' time, rather than x.
At a high level, how would you approach this? How would you define "a majority of actions are executed 'late'" and how would you represent/detect it in C++03 code?
Better yet, is there an existing well-known approach that objectively "works" here?
To be clear, you are aiming to avoid sustained high load where there are tasks every tick, rather than aiming to minimise the scheduling delay. Correspondingly, the metric you should be looking at when considering the fudge factor is the load, not the lateness.
If you have full knowledge of the system (the number of tasks, their rescheduling intervals, the distribution of their execution times), you could in principle exactly solve for a handicap value that would give you a mean target load when busy, or would, say, only exceed the target load 10% of the time when busy, or so on. On the other hand, if this information is not available or predictable, you will need an adaptive approach.
The general theory for this sort of thing is control theory, which can get quite involved. Broadly, though, the heuristic is: if the load is less than the threshold and we have a positive handicap, reduce the handicap; if the load is over the threshold, increase the handicap.
The handicap should be proportional rather than additive: if, for example, we knew we were consistently 10% overloaded, then we'd be right on target if we applied a proportional delay of 10% to the scheduling of jobs. That is, we're looking to apply a handicap factor h such that jobs are scheduled at x*h seconds' time instead of x. A factor of 1 would correspond to no handicap.
When we're overloaded, but not maximally overloaded, the response is then linear in the log: log(h) = log(load) - log(load_target). So the simplest method would be:
load = get_current_load();
if (load>load_target) h = load/load_target;
else h = 1.0;
Unfortunately, there is a maximum measured load, and linearity breaks down there. The linear model can be extended to incorporate the accumulated deviation from the target load, and the rate of change of the load. This corresponds to the proportional-integral-derivative (PID) controller. As this is a noisy environment (there is variation in the action execution times), it might be wise to shy away from the derivative part of this model and stick with the proportional-integral (PI) part. When this model is discretized, we get an expression for log(h) that is proportional to the current (log) overload, plus a term that captures how badly we've been doing:
load = get_current_load();
deviation = load > load_target ? log(load/load_target) : 0;
accum += p1 * deviation;
log_h = p2 * deviation + accum;
h = log_h < 0 ? 1.0 : exp(log_h);
Except that the problem isn't symmetric: when we drop back below the load target, the accumulated error term stays high and keeps the handicap up. We can work around this by accumulating negative deviations as well, but limiting the accumulated error to be at least non-negative, so that a period of legitimately low load doesn't give us a free pass for later:
load = get_current_load();
if (load > 0) {
    deviation = log(load/load_target);
    accum += p1 * deviation;
    if (accum < 0) accum = 0;
    if (deviation < 0) deviation = 0;
}
else {
    accum = 0;
    deviation = 0;
}
log_h = p2 * deviation + accum;
h = log_h < 0 ? 1.0 : exp(log_h);
The value for p2 will be somewhere (roughly) between 0.5 and 0.9, to leave some room for the influence of the accumulated error. A good value for p1 will probably be around 0.3 to 0.5 times the reciprocal of the lag time, i.e. the number of steps it takes for a change in h to show up as a change in load; this can be estimated from the mean rescheduling time of the actions. You can play around with these parameters to get the sort of response you'd like, or you can make a more faithful mathematical model of your scheduling problem and then do maths to it! The parameters themselves can also be modified adaptively over time, based on the observed response to changes in load.
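To pull the fragments together, here is a rough, untested C++03 sketch of how I might package the controller; get_current_load(), load_target, p1 and p2 are the quantities discussed above:

#include <cmath>

// Proportional-integral handicap controller (sketch only).
class HandicapController {
public:
    HandicapController(double load_target, double p1, double p2)
        : load_target_(load_target), p1_(p1), p2_(p2), accum_(0.0) {}

    // Call once per tick with the measured load; returns the factor h,
    // so completed actions get rescheduled for x*h seconds instead of x.
    double update(double load) {
        double deviation = 0.0;
        if (load > 0.0) {
            deviation = std::log(load / load_target_);
            accum_ += p1_ * deviation;
            if (accum_ < 0.0) accum_ = 0.0;        // low load doesn't bank a free pass
            if (deviation < 0.0) deviation = 0.0;  // only penalise overload
        } else {
            accum_ = 0.0;
        }
        double log_h = p2_ * deviation + accum_;
        return log_h < 0.0 ? 1.0 : std::exp(log_h);
    }

private:
    double load_target_;
    double p1_, p2_;
    double accum_;
};

Usage per tick would then be something like double h = controller.update(get_current_load()); followed by rescheduling each completed action for x*h seconds ahead.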
(Warning, I haven't actually tried these fragments in a mock scheduler!)

Is event recording on a time-sensitive system possible?

Basic Question
Is there any way to do event recording and playback within a time-sensitive (framerate-independent) system?
Any help - including a simple "No sorry it is impossible" - would be greatly appreciated. I have spent almost 20 hours working on this over the past few weekends and am driving myself crazy.
Full Details
This is currently aimed at a game, but the libraries I'm writing are designed to be more general, and this concept applies to more than just my C++ coding.
I have some code that looks functionally similar to this... (it is written in C++0x but I'm taking some liberties to make it more compact)
void InputThread()
{
    InputAxisReturn AxisState[IA_SIZE];
    while (Continue)
    {
        Threading()->EventWait(InputEvent);
        Threading()->EventReset(InputEvent);
        pInput->GetChangedAxis(AxisState);
        //REF ALPHA
        if (AxisState[IA_GAMEPAD_0_X].Changed)
        {
            X_Axis = AxisState[IA_GAMEPAD_0_X].Value;
        }
    }
}
And I have a separate thread that looks like this...
//REF BETA
while (Continue)
{
    //Is there a message to process?
    StandardWindowsPeekMessageProcessing();
    //GetElapsedTime() returns a float of time in seconds since its last call
    UpdateAll(LoopTimer.GetElapsedTime());
}
Now I'd like to record input events for playback for testing and some limited replay functionality.
I can easily record the events with precision timing by simply inserting the following code where I marked //REF ALPHA
//REF ALPHA
EventRecordings.push_back(EventRecording(TimeSinceRecordingBegan, AxisState));
The real issue is playing these back. My LoopTimer is extremely high precision, using the high-performance counter (QueryPerformanceCounter). This means that it is nearly impossible to hit the same time difference using code like the below in place of //REF BETA:
// REF BETA
NextEvent = EventRecordings.back();
EventRecordings.pop_back();
Time TimeSincePlaybackBegan;
while (Continue)
{
    //Is there a message to process?
    StandardWindowsPeekMessageProcessing();
    //Did we reach the next event?
    if (TimeSincePlaybackBegan >= NextEvent.TimeSinceRecordingBegan)
    {
        if (NextEvent.AxisState[IA_GAMEPAD_0_X].Changed)
        {
            X_Axis = NextEvent.AxisState[IA_GAMEPAD_0_X].Value;
        }
        NextEvent = EventRecordings.back();
        EventRecordings.pop_back();
    }
    //GetElapsedTime() returns a float of time in seconds since its last call
    Time elapsed = LoopTimer.GetElapsedTime();
    UpdateAll(elapsed);
    TimeSincePlaybackBegan += elapsed;
}
The issue with this approach is that you will almost never hit the exact same time so you will have a few microseconds where the playback doesn't match the recording.
I also tried event snapping. Kind of a confusing term but basically if the TimeSincePlaybackBegan > NextEvent.TimeSinceRecordingBegan then TimeSincePlaybackBegan = NextEvent.TimeSinceRecordingBegan and ElapsedTime was altered to suit.
It had some interesting side effects which you would expect (like some slowdown) but it unfortunately still resulted in the playback de-synchronizing.
For some more background - and possibly a reason why my time snapping approach didn't work - I'm using BulletPhysics somewhere down that UpdateAll call. Kind of like this...
void Update(float diff)
{
    static const float m_FixedTimeStep = 0.005f;
    static const uint32 MaxSteps = 200;
    //Updates the fps
    cWorldInternal::Update(diff);
    if (diff > MaxSteps * m_FixedTimeStep)
    {
        Warning("cBulletWorld::Update() diff > MaxTimestep. Determinism will be lost");
    }
    pBulletWorld->stepSimulation(diff, MaxSteps, m_FixedTimeStep);
}
But I also tried pBulletWorld->stepSimulation(diff, 0, 0), which according to http://www.bulletphysics.org/mediawiki-1.5.8/index.php/Stepping_the_World should have done the trick, but still to no avail.
Answering my own question for anyone else who stumbles upon this.
Basically, if you want deterministic recording and playback you need to lock the frame rate. If the system cannot handle the frame rate, you must introduce slowdown or risk desynchronization.
After two weeks of additional research I've decided it is just not possible otherwise, due to floating-point inadequacies and the fact that floating-point arithmetic is not associative.
The only way to have a deterministic engine that relies on floating-point numbers is to have a stable, fixed frame rate. Any change in the frame rate will, over the long term, result in the playback becoming desynchronized.
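For anyone wanting a concrete starting point, here is a rough sketch of the fixed-step idea (my own illustration, not the engine code above): key recorded events to a step counter instead of a float timestamp, and always advance the simulation by the same diff. EventRecordings, StepIndex and ApplyAxisState are hypothetical stand-ins.

// Deterministic playback: the simulation always advances by kFixedStep, and
// recorded events are applied by step index, never by wall-clock comparison.
const float kFixedStep = 0.005f;   // must match the step used while recording
unsigned long long stepIndex = 0;

while (Continue)
{
    StandardWindowsPeekMessageProcessing();

    // Apply every event recorded for this exact step.
    while (!EventRecordings.empty() &&
           EventRecordings.back().StepIndex == stepIndex)
    {
        ApplyAxisState(EventRecordings.back().AxisState);
        EventRecordings.pop_back();
    }

    UpdateAll(kFixedStep);  // identical diff every step => identical float results
    ++stepIndex;

    // Optionally sleep here to pace playback at roughly real time; determinism
    // depends only on the fixed diff, not on the pacing.
}

The recording side then stamps each event with the step index at which it was captured, rather than with TimeSinceRecordingBegan.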

Limit iterations per time unit

Is there a way to limit iterations per time unit? For example, I have a loop like this:
for (int i = 0; i < 100000; i++)
{
// do stuff
}
I want to limit the loop above so there will be maximum of 30 iterations per second.
I would also like the iterations to be evenly positioned on the timeline, so not something like 30 iterations in the first 0.4 s followed by a 0.6 s wait.
Is that possible? It does not have to be completely precise (though the more precise it will be the better).
#FredOverflow My program is running very fast. It is sending data over wifi to another program which is not fast enough to handle them at the current rate. – Richard Knop
Then you should probably have the program you're sending data to send an acknowledgment when it's finished receiving the last chunk of data you sent then send the next chunk. Anything else will just cause you frustrations down the line as circumstances change.
Suppose you have a good Now() function returning milliseconds (GetTickCount() is a bad example: it's OS-specific and has poor precision):
for (int i = 0; i < 1000; i++) {
    DWORD have_to_sleep_until = Now() + EXPECTED_ITERATION_TIME_MS;
    // do stuff
    Sleep(max(0, have_to_sleep_until - Now()));
}
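If a compiler with C++11 is available, a portable sketch of the same idea that also keeps the iterations evenly spaced (no drift from the work done inside the loop) would be:

#include <chrono>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const auto step = std::chrono::milliseconds(1000 / 30);  // ~33 ms per iteration
    auto next = clock::now() + step;

    for (int i = 0; i < 100000; ++i) {
        // do stuff

        std::this_thread::sleep_until(next);  // wake at the scheduled instant, not "now + step"
        next += step;
    }
}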
You can check the elapsed time inside the loop, but that is not the usual solution. Computation time depends entirely on the performance of the machine and the algorithm, so people normally tune it during development (e.g. many game programmers require at least 25-30 frames per second for properly smooth animation).
The easiest way (on Windows) is to use QueryPerformanceCounter(). Some pseudo-code below.
QueryPerformanceFrequency(&freq)
timeWanted = 1000.0/30.0 // milliseconds per iteration for 30 iterations/sec
for i
    QueryPerf(count1)
    do stuff
    QueryPerf(count2)
    timeElapsed = (double)(count2 - count1) * 1e3 / (double)freq // time in milliseconds
    timeDiff = timeWanted - timeElapsed
    if (timeDiff > 0)
        QueryPerf(count3)
        QueryPerf(count4)
        while ((double)(count4 - count3) * 1e3 / (double)freq < timeDiff)
            QueryPerf(count4)
end for
EDIT: You must make sure that the 'do stuff' part takes less time than your target period, or else none of this matters. Also, instead of 1e3 for milliseconds you can go all the way to nanoseconds with 1e9 (if you want that much accuracy).
WARNING... this will eat your CPU but give you good 'software' timing. Do it in a separate thread (and only if you have more than one processor) so that any GUIs won't lock up. You can also put a conditional in there to stop the loop if this is a multi-threaded app.
#FredOverflow My program is running very fast. It is sending data over wifi to another program which is not fast enough to handle them at the current rate. – Richard Knop
What you might need is a buffer or queue at the receiver side. The thread that receives the messages from the client (e.g. through a socket) gets each message and puts it in the queue, and the actual consumer of the messages reads/pops from the queue. Of course you need concurrency control for the queue.
Besides the flow-control methods mentioned, if you also need to maintain an accurate, specific data-sending rate on the sender side, it can usually be done like this:
E.g. if you want to send at 10 Mbps, create a timer with a 1 ms interval so that it calls a predefined function every 1 ms. Then, in the timer handler, keep track of two static variables: 1) the time elapsed since you began sending data, and 2) how much data (in bytes) has been sent up to the last call. From these you can easily calculate how much data needs to be sent in the current call (or just sleep and wait for the next call).
This way you can stream data very stably with very little jitter, which is how video streaming is usually done. Of course it also depends on how accurate the timer is.
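As a rough illustration of that bookkeeping (sendBytes() and the 1 ms timer are hypothetical; hook this up to whatever timer facility you use):

#include <cstddef>

// Hypothetical send routine; sends n bytes over the wifi link.
void sendBytes(std::size_t n);

// Called by a periodic timer, e.g. every 1 ms. Tracks how many bytes the target
// rate says should have been sent by now and sends the shortfall.
void onTimerTick() {
    static const double targetBytesPerSec = 10.0e6 / 8.0;  // 10 Mbps
    static double elapsedSec = 0.0;   // time since sending began
    static double bytesSent  = 0.0;   // bytes sent so far

    elapsedSec += 0.001;              // the timer interval
    double deficit = targetBytesPerSec * elapsedSec - bytesSent;
    if (deficit >= 1.0) {
        sendBytes(static_cast<std::size_t>(deficit));
        bytesSent += static_cast<std::size_t>(deficit);
    }
}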

How can I set tens of thousands of tasks to each trigger at a different defined time?

I'm constructing a data visualisation system that visualises over 100,000 data points (visits to a website) across a time period. The time period (say 1 week) is then converted into simulation time (1 week = 2 minutes in simulation), and a task is performed on each and every piece of data at the specific time it happens in simulation time (the time each visit occurred during the week in real time). With me? =p
In other programming languages (eg. Java) I would simply set a timer for each datapoint. After each timer is complete it triggers a callback that allows me to display that datapoint in my app. I'm new to C++ and unfortunately it seems that timers with callbacks aren't built-in. Another method I would have done in ActionScript, for example, would be using custom events that are triggered after a specific timeframe. But then again I don't think C++ has support for custom events either.
In a nutshell: say I have 1000 pieces of data that span a 60 second period. Each piece of data has its own time in relation to that 60 second period. For example, one needs to trigger something at 1 second, another at 5 seconds, etc.
Am I going about this the right way, or is there a much easier way to do this?
Ps. I'm using Mac OS X, not Windows
I would not use timers to do that. Sounds like you have too many events and they may lie too close to each other. Performance and accuracy may be bad with timers.
A simulation is normally done like this:
You simply run a loop (iterations), and on every pass you add either a measured amount (for real time) or a constant amount (non-real-time) to your simulation time.
Then you manually check all your events and execute the ones that are due.
In your case it would help to have them sorted by execution time, so you don't have to loop through all of them on every iteration.
Time measurement can be done with the gettimeofday() C function for low accuracy, or there are better functions for higher accuracy, e.g. QueryPerformanceCounter() on Windows; I don't know the equivalent for Mac.
Just make a "timer" mechanism yourself, that's the best, fastest and most flexible way.
-> make an array of events (linked to each object event happens to) (std::vector in c++/STL)
-> sort the array on time (std::sort in c++/STL)
-> then just loop on the array and trigger the object action/method upon time inside a range.
Roughly that gives in C++:
// action upon data + data itself
class Object {
public:
    Object(Data d) : data(d) {}
    void Action() { display(data); }
    Data data;
};

// event time + the object that acts when the event fires
class Event {
public:
    Event(double t, Object o) : time(t), object(o) {}
    // useful for std::sort
    bool operator<(const Event& e) const { return time < e.time; }
    double time;
    Object object;
};

// init
std::vector<Event> myEvents;
myEvents.push_back(Event(1.0, Object(data0)));
//...
myEvents.push_back(Event(54.0, Object(data10000)));

// could be removed if the push_back() calls are guaranteed to be in the correct order
std::sort(myEvents.begin(), myEvents.end());

// the way you handle time... period is for some fuzziness/animation?
const double period = 0.5;
const double endTime = 60;
std::vector<Event>::iterator itLastFirstEvent = myEvents.begin();
for (double currtime = 0.0; currtime < endTime; currtime += 0.1)
{
    for (std::vector<Event>::iterator itEvent = itLastFirstEvent; itEvent != myEvents.end(); ++itEvent)
    {
        if (itEvent->time < currtime - period)
            itLastFirstEvent = itEvent; // already past: next outer loop can start here
        else if (itEvent->time < currtime + period)
            itEvent->object.Action(); // due now: action speaks louder than words
        else
            break; // as it's sorted, there won't be any more ticks this loop
    }
}
PS: About custom events, you might want to read up on delegates in C++ and function/method pointers.
If you are using native C++, you should look at the Timers section of the Windows API on the MSDN website. They should tell you exactly what you need to know.