Compute minimal schedule length for a set of tasks - list

duration(a,5).
duration(b,7).
duration(c,3).
prereqs(a,[]).
prereqs(b,[]).
prereqs(c,[b]).
?- len([a,b,c],Time).
Time = 10.
The question is: find the total time taken for all of the tasks to complete. The tasks all start at the same time, so task c takes the longest at 10 seconds, since it first has to wait for its prerequisite task b to finish (7 + 3 = 10).
I've been struggling on this question for a few days now and any help would be much appreciated.
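For illustration only, here is a rough sketch (in C++ rather than Prolog, with the hypothetical name finish and hard-coded maps standing in for the duration/2 and prereqs/2 facts) of the recursion that len/2 needs to encode: a task's finish time is its own duration plus the latest finish time among its prerequisites, and the schedule length is the maximum finish time over the listed tasks.
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Mirrors the duration/2 and prereqs/2 facts above.
std::map<std::string, int> duration = {{"a", 5}, {"b", 7}, {"c", 3}};
std::map<std::string, std::vector<std::string>> prereqs = {{"a", {}}, {"b", {}}, {"c", {"b"}}};

// A task finishes after its own duration plus the latest finish among its prerequisites.
int finish(const std::string& task) {
    int latestPrereq = 0;
    for (const std::string& p : prereqs[task])
        latestPrereq = std::max(latestPrereq, finish(p));
    return duration[task] + latestPrereq;
}

int main() {
    std::vector<std::string> tasks = {"a", "b", "c"};
    int total = 0;
    for (const std::string& t : tasks)
        total = std::max(total, finish(t));
    std::cout << "Time = " << total << "\n";   // prints 10
}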

Related

How to find finish times of processes in cplex

I have a machine/batch scheduling problem. The finish time of a batch is the variable "Z[b]". There are three machines (f represents the machines). If a machine f starts processing a specific batch b at time t, then X[f][b][t] equals 1.
The parameter "P[b]" is the processing time of batch b. I need to find the ending times of the batches, and tried the constraint below. t is the time range, for example 48 hours.
"forall(p in B) Z[p]-(sum(n in F)sum(a in 1..t-P[p]+1)(a+P[p])*X[n][p][a])==0 ;"
I have 3 machines, but this constraint only uses 2 machines at time 1. Also, the Z[p] values are not logical. How can I fix this?
Within CPLEX you have CP Optimizer, which is good at scheduling.
And to get the end of an interval, endOf(itvs) works fine.

Can the minimum time to schedule tasks be found without brute force?

If I have a list of integers representing the time it takes to complete each task, and I have x workers that can each work on only one task at a time until it is finished, can I find the minimum time it could possibly take in a best-case scenario? I do not need the exact permutation that makes up this minimum completion time, just the time.
For example, to make it simple, if I have a list [2, 4, 6] and I have 2 workers then if I start with 2 and 4 then when 2 finishes 6 will start meaning that it will take 8 seconds to complete all tasks. However if I start with 6 and 2 then when 2 finishes 4 will start and finish at the same time as 6, therefore the tasks only take 6 seconds if done in this order.
Is there a way of knowing that it will only take 6 seconds, with better than n! (brute force) complexity, that guarantees it is the minimum time possible? Thank you for any help in advance; please feel free to ask questions if I left out any details or you're confused!
edit: please help :(
edit 2: is it even possible? Anyone know?
In the case of a single worker, the total time required is simply the sum of all task times.
jobs = [ 2, 4, 6, etc... ]
time_required = SUM( jobs )
In the case of two workers, given a specific ordering of jobs, the total time required can be determined by assigning each task to whichever worker has the current lowest sum associated with it, then taking the highest sum across the workers:
define type worker = vector<time_t>
define type workers = min_priority_queue<worker> using worker.sum() # so the current worker.sum() (the sum of `time_t` values in `vector<time_t>`) is the priority-queue key.
define type task = int
jobs = [ 2, 4, 6, etc... ]
# Use two workers:
workers.add( new worker )
workers.add( new worker )
# Iterate once through each job:
foreach( task t in jobs ) {
minWorker = workers.getMinWorker() # priority queue "find-min" operation
minWorker.add( t )
}
# Determine which worker will work the longest time:
time_required = workers.getMaxWorker().sum() # priority queue "find-max" operation
Because this is an actual (feasible) assignment, the time_required is a point sample that lies between the lower and upper bounds - which isn't exactly what you're after, but because it can be computed cheaply it's a good starting point.
The above algorithm generalises to any number of workers just by adding more workers to the priority queue. Each job needs one find-min, one update, and one re-insertion into the heap, so with a heap-based priority queue the whole thing runs in O(n log x) time, where n is the number of jobs and x is the number of workers.
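Here is a rough C++ sketch of the pseudocode above (greedyMakespan is a hypothetical name, not a library function). It simplifies the bookkeeping by keeping only each worker's running total in a min-heap and tracking the largest total seen:
#include <algorithm>
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

// Greedy assignment: give the next job to the worker with the smallest load so far,
// and return the finish time of the busiest worker (the makespan of this assignment).
long long greedyMakespan(const std::vector<long long>& jobs, int workerCount) {
    std::priority_queue<long long, std::vector<long long>, std::greater<long long>> loads;
    for (int i = 0; i < workerCount; ++i) loads.push(0);      // every worker starts idle
    long long makespan = 0;
    for (long long job : jobs) {
        long long load = loads.top(); loads.pop();            // find-min: least busy worker
        load += job;
        makespan = std::max(makespan, load);
        loads.push(load);
    }
    return makespan;
}

int main() {
    std::vector<long long> jobs = {2, 4, 6};
    std::cout << greedyMakespan(jobs, 2) << "\n";             // 8: jobs taken in the given order
    std::sort(jobs.begin(), jobs.end(), std::greater<>());
    std::cout << greedyMakespan(jobs, 2) << "\n";             // 6: longest jobs first (LPT)
}
As the second call shows, sorting the jobs in descending order first (the classic longest-processing-time-first heuristic) usually gives a much better point sample, although it is still not guaranteed to be optimal for every input.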
As for computing bounds in less time than O(n!) time... that's tricky (at least for me, as it's been a few years since I last cracked-open my copy of CLRS).
A simple lower bound for x workers, for any ordering of jobs, is the largest single value in the job set.
A simple upper bound for x workers could be the sum of the largest 100 * (1/x)% of jobs (so for 2 workers it's the sum of the largest 50% of jobs, for 3 workers the largest 33%, for 4 workers the largest 25%, and so on). This requires you to sort the set first, which takes O(n log n) with a comparison sort.
jobs = [ 2, 4, 6, etc... ]
worker_count = 2
jobs.sortDescending() # O(n log n)
# if there's 50 jobs and 2 workers, then take the first 25 jobs and sum them...
# ...that's an upper_bound for the time required to complete all tasks by 2 workers, as it assumes that 1 unlucky worker will get all of the longest tasks
upper_bound = jobs.take( jobs.count / worker_count ).sum()

FIO runtime different than gettimeofday()

I am trying to measure the execution time of the FIO benchmark. Currently, I am doing so by wrapping the FIO call between gettimeofday() calls:
gettimeofday(&startFioFix, NULL);
FILE* process = popen("fio --name=randwrite --ioengine=posixaio rw=randwrite --size=100M --direct=1 --thread=1 --bs=4K", "r");
gettimeofday(&doneFioFix, NULL);
and calculate the elapsed time as:
double tstart = startFioFix.tv_sec + startFioFix.tv_usec / 1000000.;
double tend = doneFioFix.tv_sec + doneFioFix.tv_usec / 1000000.;
double telapsed = (tend - tstart);
Now, my questions are:
1. The telapsed time is different from (larger than) the runt reported by FIO. Can you please help me understand why? The difference can be seen in the FIO output:
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=posixaio, iodepth=1
fio-2.2.8
Starting 1 thread
randwrite: (groupid=0, jobs=1): err= 0: pid=3862: Tue Nov 1 18:07:50 2016
write: io=102400KB, bw=91674KB/s, iops=22918, runt= 1117msec
...
and the telapsed is:
telapsed: 1.76088 seconds
2. What is the actual time taken by the FIO execution:
a) the runt given by FIO, or
b) the elapsed time measured by gettimeofday()?
3. How does FIO measure its runt? (This is probably linked to question 1.)
PS: I have tried replacing gettimeofday() with std::chrono::high_resolution_clock::now(), but it behaves the same (by "the same", I mean it also gives a larger elapsed time than runt).
Thank you in advance, for your time and assistance.
A quick point: gettimeofday() on Linux uses a clock that doesn't necessarily tick at a constant interval and can even move backwards (see http://man7.org/linux/man-pages/man2/gettimeofday.2.html and https://stackoverflow.com/a/3527632/4513656 ) - this may make telapsed unreliable (or even negative).
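If all you need is an elapsed-time measurement that cannot jump, a sketch along these lines (using a monotonic clock and essentially the same popen call as in your snippet - note the --rw flag, which appears to be missing its leading dashes in the question) avoids those caveats:
#include <chrono>
#include <cstdio>

int main() {
    auto start = std::chrono::steady_clock::now();   // monotonic: never moves backwards
    FILE* process = popen("fio --name=randwrite --ioengine=posixaio --rw=randwrite --size=100M --direct=1 --thread=1 --bs=4K", "r");
    if (process)
        pclose(process);                              // wait for fio to finish before taking the end timestamp
    auto end = std::chrono::steady_clock::now();
    double telapsed = std::chrono::duration<double>(end - start).count();
    std::printf("telapsed: %f seconds\n", telapsed);
    return 0;
}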
Your gettimeofday/popen/gettimeofday measurement (telapsed) is going to be: the fio process start-up elapsed (i.e. fork+exec on Linux) + fio initialisation (e.g. thread creation because I see --thread, ioengine initialisation) + fio job elapsed (runt) + fio stopping elapsed + process stop elapsed. You are comparing this to just runt, which is a sub-component of telapsed. It is unlikely all the non-runt components are going to happen instantly (i.e. take up 0 usecs), so the expectation is that runt will be smaller than telapsed. Try running fio with --debug=all just to see all the things it does in addition to actually submitting I/O for the job.
This is difficult to answer because it depends on what you mean when you say "fio execution" and why (i.e. the question is hard to interpret in an unambiguous way). Are you interested in how long fio actually spent trying to submit I/O for a given job (runt)? Are you interested in how long it takes your system to start/stop a new process that just so happens to try and submit I/O for a given period (telapsed)? Are you interested in how much CPU time was spent submitting I/O (none of the above)? So because I'm confused I'll ask you some questions instead: what are you going to use the result for, and why?
Why not look at the source code? https://github.com/axboe/fio/blob/7a3b2fc3434985fa519db55e8f81734c24af274d/stat.c#L405 shows runt comes from ts->runtime[ddir]. You can see it is initialised by a call to set_epoch_time() ( https://github.com/axboe/fio/blob/6be06c46544c19e513ff80e7b841b1de688ffc66/backend.c#L1664 ) and is updated by update_runtime() ( https://github.com/axboe/fio/blob/6be06c46544c19e513ff80e7b841b1de688ffc66/backend.c#L371 ), which is called from thread_main().

Scheduling reset every 24 hours at midnight

I have a counter "numberOrders" and I want to reset it every day at midnight, to know how many orders I get in one day. What I have right now is this:
val system = akka.actor.ActorSystem("system")
system.scheduler.schedule(86400000 milliseconds, 0 milliseconds){(numberOrders = 0)}
This piece of code is inside a def which is called every time I get a new order, so what it does is reset numberOrders 24 hours after the first order, or after every order - I'm not really sure whether every new order schedules another reset 24 hours later - which is not what I want. I want to reset the variable every day at midnight. Any ideas? Thanks!
To expand on pushy's answer: since you might not always be sure when the application started, and you want to be sure the reset runs exactly at midnight, you can do the following:
val system = akka.actor.ActorSystem("system")
val wait = (24 hours).toMillis - (System.currentTimeMillis % (24 hours).toMillis) // milliseconds until the next UTC midnight
system.scheduler.schedule(Duration.apply(wait, MILLISECONDS), 24 hours, orderActor, ResetCounterMessage)
Might not be the tidiest of solutions but it does the job.
As schedule supports repeated executions, you could just set the interval parameter to 24 hours, the initial delay to the amount of time between now and midnight, and initiate the code at startup. You also seem to be creating a new ActorSystem every time you get an order right now; that does not seem quite right, and you would be rid of that as well.
Also, I would suggest using the schedule overload which sends messages to actors instead. This way the actor that processes the orders could keep count, and when it receives a ResetCounter message it would simply reset the counter. You could simply write:
system.scheduler.schedule(x seconds, 24 hours, orderActor, ResetCounterMessage)
when you start up your actor system initially, and be done with it.

Unexplained crash while polling systemtime type

I have a program that runs every 5 minutes when the stock market is open, which it does by running once, then entering the following function, which returns once 5 minutes has passed if the stock market is open.
What I don't understand is that after a period of time, usually about 18 or 19 hours, it crashes with a SIGSEGV error. I have no idea why, as it isn't writing to any memory - although I don't know much about the SYSTEMTIME type, so maybe that's it?
Anyway, any help you could give would be very much appreciated! Thanks in advance!!
void KillTimeUntilNextStockDataReleaseOnWeb()
{
    SYSTEMTIME tLocalTimeNow;
    cout<<"\n*****CHECKING IF RUN HAS JUST COMPLETED OR NOT*****\n";
    GetLocalTime(&tLocalTimeNow); //CHECK IF A RUN HAS JUST COMPLETED. IF SO, AWAIT NEXT 5 MINUTE MARK
    while((tLocalTimeNow.wMinute % 5)==0)
        GetLocalTime(&tLocalTimeNow);
    cout<<"\n*****AWAITING 5 MINUTE MARK TO UPDATE STOCK DATA*****\n";
    GetLocalTime(&tLocalTimeNow); //LOOP THROUGH THIS SECTION, CHECKING CURRENT TIME, UNTIL 5 MINUTE UPDATE. THEN PROCEED
    while((tLocalTimeNow.wMinute % 5)!=0)
        GetLocalTime(&tLocalTimeNow);
    cout<<"\n*****CHECKING IF MARKET IS OPEN*****\n";
    //CHECK IF STOCK MARKET IS EVEN OPEN. IF NOT, REPEAT
    GetLocalTime(&tLocalTimeNow);
    while((tLocalTimeNow.wHour < 8) || (tLocalTimeNow.wHour > 17))
        GetLocalTime(&tLocalTimeNow);
    cout<<"\n*****PROGRAM CONTINUING*****\n";
    return;
}
If you want to "wait for X seconds", then the Windows system call Sleep(x) will sleep for x milliseconds. Note however, if you sleep for, say, 300s, after some operation that took 3 seconds, that would mean you drift 3 seconds every 5minutes - it may not matter, but if it's critical that you keep the same timing all the time, you should figure out [based on time or some such function] how long it is to the next boundary, and then sleep that amount [possibly run a bit short and then add another check and sleep if you woke up early]. If "every five minutes" is more of an approximate thing, then 300s is fine.
There are other methods to wait for a given amount of time, but I suspect the above is sufficient.
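For illustration, a minimal sketch of that "sleep until the next boundary" idea, using the same local-time convention as the code above (SleepUntilNextFiveMinuteMark is just a hypothetical helper name):
#include <windows.h>

// Sleep until the next wall-clock 5-minute boundary instead of polling GetLocalTime in a loop.
void SleepUntilNextFiveMinuteMark()
{
    SYSTEMTIME now;
    GetLocalTime(&now);
    // Milliseconds already elapsed inside the current 5-minute interval.
    DWORD msIntoInterval = ((now.wMinute % 5) * 60 + now.wSecond) * 1000 + now.wMilliseconds;
    DWORD msToWait = 5 * 60 * 1000 - msIntoInterval;
    Sleep(msToWait);
    // Optionally re-check the time here and Sleep() again briefly if we woke up slightly early.
}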
Instead of using a busy loop, or even Sleep() in a loop, I would suggest using a Waitable Timer. That way, the calling thread can sleep effectively while it is waiting, while still providing a mechanism to "wake up" early if needed.
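A minimal sketch of the waitable-timer approach (error handling omitted; the five-minute relative due time is just an example):
#include <windows.h>

// Wait roughly five minutes using a waitable timer instead of a busy loop.
void WaitFiveMinutesWithWaitableTimer()
{
    HANDLE hTimer = CreateWaitableTimer(NULL, TRUE, NULL);    // manual-reset, unnamed timer
    LARGE_INTEGER dueTime;
    dueTime.QuadPart = -5LL * 60 * 10000000;                   // negative = relative time, in 100-ns units
    SetWaitableTimer(hTimer, &dueTime, 0, NULL, NULL, FALSE);  // one-shot, no completion routine
    WaitForSingleObject(hTimer, INFINITE);                      // the thread sleeps here until the timer fires
    CloseHandle(hTimer);
}
If you also need the early wake-up, you can wait on the timer handle together with an event handle via WaitForMultipleObjects and signal the event from another thread.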