Earliest deadline scheduling - scheduling

I want to implement earliest deadline scheduling in C, but I can't find the algorithm on the net.
I understand the example below: at time 0, both A1 and B1 arrive. Since A1 has the earliest deadline, it is scheduled first. When A1 completes, B1 is given the processor. At time 20, A2 arrives. Because A2 has an earlier deadline than B1, B1 is interrupted so that A2 can execute to completion. B1 is then resumed at time 30. At time 40, A3 arrives; however, B1 has an earlier deadline and is allowed to execute to completion at time 45. A3 is then given the processor and finishes at time 55. However, I can't come up with a solution. Please help me find an algorithm.
Thanks..
Image of the example
http://imageshack.us/photo/my-images/840/scheduling.png/

When a process finishes (and at the beginning), take the process with the lowest processTimeToDeadline - processTimeToExecute as the new current process.
When a new process arrives, replace the current process if and only if newProcessTimeToDeadline - newProcessTimeToExecute < currentProcessTimeToDeadline - currentProcessTimeStillNeededToExecute.
Note: if you do this with multiple CPUs, you get the multiprocessor scheduling problem, which is NP-complete.
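Here is a minimal sketch of that rule in C++ (the question asks for C, but the loop translates directly). The Process fields, the task set, and the deadlines are assumptions inferred from the example in the question; re-evaluating the choice once per time unit covers both the "process finishes" and "new process arrives" cases.

#include <cstdio>
#include <vector>

// Illustrative process descriptor; the fields are assumptions for this sketch.
struct Process {
    const char *name;
    int arrival;    // time at which the process becomes ready
    int deadline;   // absolute deadline
    int remaining;  // execution time still needed
};

// Among the ready, unfinished processes, pick the one with the smallest
// (timeToDeadline - remainingExecution); return -1 if none is ready.
int pickProcess(const std::vector<Process> &ps, int now) {
    int best = -1;
    for (size_t i = 0; i < ps.size(); ++i) {
        if (ps[i].arrival > now || ps[i].remaining == 0) continue;
        int slack = (ps[i].deadline - now) - ps[i].remaining;
        if (best < 0 || slack < (ps[best].deadline - now) - ps[best].remaining)
            best = (int)i;
    }
    return best;
}

int main() {
    // Task set guessed from the example: A needs 10 every 20, B needs 25 by time 50.
    std::vector<Process> ps = {
        {"A1", 0, 20, 10}, {"A2", 20, 40, 10}, {"A3", 40, 60, 10}, {"B1", 0, 50, 25},
    };
    for (int t = 0; t < 60; ++t) {
        int cur = pickProcess(ps, t);  // re-evaluated every time unit => preemptive
        if (cur >= 0) {
            --ps[cur].remaining;
            std::printf("t=%2d running %s\n", t, ps[cur].name);
        }
    }
}

With those guessed parameters, running it reproduces the trace from the question: A1 until 10, B1 until 20, A2 until 30, B1 until 45, A3 until 55.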

The previous answer describes an "Earliest Feasible Deadline First" (EFDF) scheduler, and it fits the image from the question perfectly.
An "Earliest Deadline First" (EDF) scheduler is simpler: the scheduler just runs the task with the earliest deadline. That is all.


Compute minimal schedule length for a set of tasks

duration(a,5).
duration(b,7).
duration(c,3).
prereqs(a,[]).
prereqs(b,[]).
prereqs(c,[b]).
?- len([a,b,c],Time).
Time = 10.
The question is: find the total time taken for all the tasks to complete. The tasks all start at the same time, so task c finishes last and takes 10 seconds in total, because its prerequisite task b has to complete first.
I've been struggling with this question for a few days now and any help would be much appreciated.
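The question asks for Prolog, but the computation itself is just a longest-path calculation over the prerequisite graph: a task's finish time is its own duration plus the latest finish time of its prerequisites, and the schedule length is the maximum finish time over all tasks. A sketch of that computation in C++, using the duration/2 and prereqs/2 facts from the question:

#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// duration/2 and prereqs/2 facts from the question.
std::map<std::string, int> duration = {{"a", 5}, {"b", 7}, {"c", 3}};
std::map<std::string, std::vector<std::string>> prereqs =
    {{"a", {}}, {"b", {}}, {"c", {"b"}}};

// Finish time of a task = latest finish time of its prerequisites + its own duration.
int finish(const std::string &task) {
    int start = 0;
    for (const std::string &p : prereqs[task])
        start = std::max(start, finish(p));
    return start + duration[task];
}

int main() {
    int total = 0;
    for (const auto &d : duration)          // schedule length = latest finish time
        total = std::max(total, finish(d.first));
    std::cout << "Time = " << total << std::endl;  // prints Time = 10
}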

Scheduling reset every 24 hours at midnight

I have a counter "numberOrders" and I want to reset it every day at midnight, to know how many orders I get in one day. What I have right now is this:
val system = akka.actor.ActorSystem("system")
system.scheduler.schedule(86400000 milliseconds, 0 milliseconds){(numberOrders = 0)}
This piece of code is inside a def which is called every time I get a new order, so what it does is reset numberOrders 24 hours after the first order, or after every order; I'm not really sure whether every new order schedules another reset 24 hours later, and either way that is not what I want. I want to reset the variable every day at midnight. Any ideas? Thanks!
To build further on pushy's answer: since you might not always be sure when the site started, and you want to be sure it runs exactly at midnight, you can do the following.
import scala.concurrent.duration._
val system = akka.actor.ActorSystem("system")
// milliseconds until the next midnight (UTC), so the first reset fires at midnight
val wait = (24 hours).toMillis - (System.currentTimeMillis % (24 hours).toMillis)
system.scheduler.schedule(Duration.apply(wait, MILLISECONDS), 24 hours, orderActor, ResetCounterMessage)
Might not be the tidiest of solutions but it does the job.
As schedule supports repeated executions, you could just set the interval parameter to 24 hours, set the initial delay to the amount of time between now and midnight, and run the code once at startup. You also seem to be creating a new ActorSystem every time you get an order, which does not seem right; you would be rid of that as well.
Also I would suggest using the schedule method which sends messages to actors instead. This way the actor that processes the order could keep count, and if it receives a ResetCounter message it would simply reset the counter. You could simply write:
system.scheduler.schedule(x seconds, 24 hours, orderActor, ResetCounterMessage)
when you start up your actor system initially, and be done with it.

Unexplained crash while polling systemtime type

I have a program that runs every 5 minutes when the stock market is open. It does this by running once and then entering the following function, which returns once 5 minutes have passed and the stock market is open.
What I don't understand is that after a period of time, usually about 18 or 19 hours, it crashes with a SIGSEGV error. I have no idea why, as it isn't writing to any memory - although I don't know much about the SYSTEMTIME type, so maybe that's it?
Anyway, any help you could give would be very much appreciated! Thanks in advance!!
void KillTimeUntilNextStockDataReleaseOnWeb()
{
    SYSTEMTIME tLocalTimeNow;

    cout << "\n*****CHECKING IF RUN HAS JUST COMPLETED OR NOT*****\n";
    GetLocalTime(&tLocalTimeNow); // CHECK IF A RUN HAS JUST COMPLETED. IF SO, AWAIT NEXT 5 MINUTE MARK
    while ((tLocalTimeNow.wMinute % 5) == 0)
        GetLocalTime(&tLocalTimeNow);

    cout << "\n*****AWAITING 5 MINUTE MARK TO UPDATE STOCK DATA*****\n";
    GetLocalTime(&tLocalTimeNow); // LOOP THROUGH THIS SECTION, CHECKING CURRENT TIME, UNTIL 5 MINUTE UPDATE. THEN PROCEED
    while ((tLocalTimeNow.wMinute % 5) != 0)
        GetLocalTime(&tLocalTimeNow);

    cout << "\n*****CHECKING IF MARKET IS OPEN*****\n";
    // CHECK IF STOCK MARKET IS EVEN OPEN. IF NOT, REPEAT
    GetLocalTime(&tLocalTimeNow);
    while ((tLocalTimeNow.wHour < 8) || (tLocalTimeNow.wHour > 17))
        GetLocalTime(&tLocalTimeNow);

    cout << "\n*****PROGRAM CONTINUING*****\n";
    return;
}
If you want to "wait for X seconds", then the Windows system call Sleep(x) will sleep for x milliseconds. Note however that if you sleep for, say, 300 s after some operation that took 3 seconds, you drift 3 seconds every 5 minutes. That may not matter, but if it's critical that you keep the same timing all the time, you should figure out [based on time or some such function] how long it is until the next boundary, and then sleep that amount [possibly run a bit short and then add another check and sleep if you woke up early]. If "every five minutes" is more of an approximate thing, then 300 s is fine.
There are other methods to wait for a given amount of time, but I suspect the above is sufficient.
Instead of using a busy loop, or even Sleep() in a loop, I would suggest using a Waitable Timer instead. That way, the calling thread can sleep effectively while it is waiting, while still providing a mechanism to "wake up" early if needed.
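As a rough sketch of what that could look like with the Windows waitable timer API (here the 5-minute period is counted from when the timer is started; aligning the first due time to an exact wall-clock mark would need to be computed from the current time first):

#include <windows.h>
#include <iostream>

int main()
{
    // Auto-reset (synchronization) timer: it resets after releasing one wait.
    HANDLE hTimer = CreateWaitableTimer(NULL, FALSE, NULL);
    if (!hTimer) return 1;

    LARGE_INTEGER dueTime;
    dueTime.QuadPart = -5LL * 60 * 10000000;  // relative due time in 100-ns units: 5 minutes

    // First expiry after 5 minutes, then every 5 minutes (period is in milliseconds).
    if (!SetWaitableTimer(hTimer, &dueTime, 5 * 60 * 1000, NULL, NULL, FALSE))
        return 1;

    for (;;) {
        WaitForSingleObject(hTimer, INFINITE);  // the thread sleeps here instead of spinning
        std::cout << "\n*****5 MINUTE MARK - UPDATE STOCK DATA*****\n";
    }
}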

How to calculate Average Waiting Time and average Turn-around time in SJF Scheduling?

In the SJF (Shortest Job First) scheduling method, how do you calculate the average waiting time and the average turn-around time?
Is the Gantt chart correct?
Gantt chart is wrong...
Process P3 arrived first, so it executes first. Since the burst time of P3 is 3 sec, by the time P3 completes, processes P2, P4, and P5 have arrived.
Among P2, P4, and P5, the shortest burst time is 1 sec, for P2, so P2 executes next, followed by P4 and P5. P1 is executed last.
The Gantt chart for this question will be:
| P3 | P2 | P4 | P5 | P1 |
1    4    5    7    11   14
Average waiting time = (0+2+2+3+3)/5 = 2
Average turnaround time = (3+3+4+7+6)/5 = 4.6
SJF comes in two types: i) non-preemptive SJF and ii) preemptive SJF.
I have re-arranged the processes according to Arrival time.
here is the non preemptive SJF
A.T= Arrival Time
B.T= Burst Time
C.T= Completion Time
T.T = Turnaround Time = C.T - A.T
W.T = Waiting Time = T.T - B.T
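A small C++ sketch that applies exactly these formulas for non-preemptive SJF. The arrival and burst times are made-up placeholder values, since the original process table is not included above; substitute your own.

#include <climits>
#include <cstdio>
#include <vector>

struct Proc { const char *name; int arrival, burst; };

int main()
{
    // Placeholder process set; substitute the arrival/burst values from your table.
    std::vector<Proc> ps = {
        {"P1", 3, 3}, {"P2", 1, 1}, {"P3", 0, 3}, {"P4", 2, 2}, {"P5", 2, 4},
    };

    std::vector<bool> done(ps.size(), false);
    size_t finished = 0;
    int now = 0;
    double totalTT = 0, totalWT = 0;

    // Non-preemptive SJF: among the processes that have arrived and are not yet
    // finished, run the one with the shortest burst time to completion.
    while (finished < ps.size()) {
        int pick = -1, nextArrival = INT_MAX;
        for (size_t i = 0; i < ps.size(); ++i) {
            if (done[i]) continue;
            if (ps[i].arrival > now) {
                if (ps[i].arrival < nextArrival) nextArrival = ps[i].arrival;
                continue;
            }
            if (pick < 0 || ps[i].burst < ps[pick].burst) pick = (int)i;
        }
        if (pick < 0) { now = nextArrival; continue; }  // CPU idle until the next arrival

        now += ps[pick].burst;             // C.T: completion time
        int tt = now - ps[pick].arrival;   // T.T = C.T - A.T
        int wt = tt - ps[pick].burst;      // W.T = T.T - B.T
        std::printf("%s: C.T=%d  T.T=%d  W.T=%d\n", ps[pick].name, now, tt, wt);
        totalTT += tt;
        totalWT += wt;
        done[pick] = true;
        ++finished;
    }
    std::printf("Average T.T = %.2f, Average W.T = %.2f\n",
                totalTT / ps.size(), totalWT / ps.size());
}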
Here is the preemptive SJF
Note: a process is preempted whenever a new process arrives. The scheduler then compares the remaining burst times and allocates the CPU to the process with the shortest one. If two processes have the same burst time, the one that arrived first is allocated first, just like FCFS.
It is wrong. The correct chart will be:
P3   P2   P4   P5   P1
0    3    4    6    10
as these are the correct differences.
Waiting time = (0+3+4+6+10)/5 = 4.6
Ref: http://www.it.uu.se/edu/course/homepage/oskomp/vt07/lectures/scheduling_algorithms/handout.pdf
The Gantt charts given by Hifzan and Raja are for FCFS algorithms.
With an SJF algorithm, processes can be interrupted. That is, a process doesn't necessarily execute straight through its given burst time.
| P3 | P2 | P4 | P3 | P5 | P1 | P5 |
1    2    3    5    7    8    11   14
P3 arrives at 1 ms, then is interrupted by P2 and P4 since they both have smaller burst times, and then P3 resumes. P5 starts executing next, then is interrupted by P1 since P1's burst time is smaller than P5's remaining time. You must note the arrival times and be careful; these problems can be trickier than they appear at first glance.
EDIT: This applies only to Preemptive SJF algorithms. A plain SJF algorithm is non-preemptive, meaning it does not interrupt a process.
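For the preemptive case (shortest remaining time first), a time-unit-by-time-unit simulation makes the interruptions visible. The arrival and burst values below are placeholders, not the ones from this question's table:

#include <cstdio>
#include <vector>

struct Proc { const char *name; int arrival, remaining; };

int main()
{
    // Placeholder values; substitute your own arrival and burst times.
    std::vector<Proc> ps = {
        {"P1", 8, 3}, {"P2", 2, 1}, {"P3", 1, 4}, {"P4", 3, 2}, {"P5", 7, 4},
    };
    int left = 0;
    for (const Proc &p : ps) left += p.remaining;

    // Preemptive SJF: at every time unit, run the arrived process with the least
    // remaining time, so a newly arrived shorter job preempts the running one.
    for (int t = 0; left > 0; ++t) {
        int pick = -1;
        for (size_t i = 0; i < ps.size(); ++i) {
            if (ps[i].arrival > t || ps[i].remaining == 0) continue;
            if (pick < 0 || ps[i].remaining < ps[pick].remaining) pick = (int)i;
        }
        if (pick < 0) continue;  // CPU idle, nothing has arrived yet
        std::printf("t=%2d  %s\n", t, ps[pick].name);
        --ps[pick].remaining;
        --left;
    }
}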

Need explanation for this boost::asio timer example

There is a line in the 3rd tutorial on Boost.Asio that shows how to renew a timer and yet prevent drift. The line is the following:
t->expires_at(t->expires_at() + boost::posix_time::seconds(1));
Maybe it's me, but I wasn't able to find documentation on the second usage of expires_at(), the one with no parameters. expires_at(x) sets the new expiration, cancelling any pending completion handlers. So presumably expires_at() returns what, the time of the last expiry? Then by adding one second, any time the handler itself consumed, say n ms, is in essence "subtracted" from the next expiry, since it is already accounted for? And what happens if the time it takes to run the handler is greater than 1 second in this example? Does it fire immediately?
expires_at() returns the time at which the timer is currently set to expire, so this line moves the timeout to 1 second after the previous expiry rather than 1 second from now. If that new expiry time is already in the past, the next async_wait completes immediately.
When you set the time with expires_at(x), the return value is the number of pending asynchronous waits that were cancelled: 0 means the timer had already expired (its handler was already invoked or queued), and a value greater than 0 is the number of waits that were cancelled.
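For context, this is roughly the repeating-timer pattern that line comes from (modeled on the Boost.Asio Timer.3 tutorial); rescheduling relative to the previous expiry rather than to "now" is what keeps the handler's own running time from accumulating as drift:

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <iostream>

void tick(const boost::system::error_code & /*ec*/,
          boost::asio::deadline_timer *t, int *count)
{
    if (*count >= 5) return;  // stop after a few repetitions
    std::cout << "tick " << (*count)++ << std::endl;

    // Reschedule relative to the previous expiry, not to "now", so the time
    // spent inside this handler does not accumulate as drift.
    t->expires_at(t->expires_at() + boost::posix_time::seconds(1));
    t->async_wait(boost::bind(tick, boost::asio::placeholders::error, t, count));
}

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer t(io, boost::posix_time::seconds(1));
    int count = 0;
    t.async_wait(boost::bind(tick, boost::asio::placeholders::error, &t, &count));
    io.run();
}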