How to calculate Average Waiting Time and Average Turnaround Time in SJF Scheduling?

In the SJF (Shortest Job First) scheduling method, how do you calculate the average waiting time and the average turnaround time?
Is the Gantt chart correct?

The Gantt chart is wrong.
P3 arrives first, so it executes first. Since P3's burst time is 3 sec, by the time it completes, processes P2, P4, and P5 have all arrived. Among P2, P4, and P5 the shortest burst time is 1 sec, for P2, so P2 executes next, then P4 and P5. P1 executes last.
The Gantt chart for this question will be:
| P3 | P2 | P4 | P5 | P1 |
1    4    5    7    11   14
Average waiting time = (0+2+2+3+3)/5 = 2
Average turnaround time = (3+3+4+7+6)/5 = 4.6
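These averages can be checked mechanically. Below is a short C++ sketch of that check; the arrival times (P3: 1, P2: 2, P4: 3, P5: 4, P1: 8) and burst times are inferred from the answers here, since the original table is not reproduced on this page.

#include <cstdio>

// Check of the averages above, using T.T = C.T - A.T and W.T = T.T - B.T.
int main() {
    // Order: P3, P2, P4, P5, P1 (execution order from the Gantt chart).
    int arrival[]    = { 1, 2, 3, 4, 8 };   // inferred, not from the original table
    int burst[]      = { 3, 1, 2, 4, 3 };
    int completion[] = { 4, 5, 7, 11, 14 }; // read off the Gantt chart
    double wt = 0, tat = 0;
    for (int i = 0; i < 5; ++i) {
        int t = completion[i] - arrival[i]; // turnaround time
        tat += t;
        wt += t - burst[i];                 // waiting time
    }
    std::printf("avg W.T = %.1f, avg T.T = %.1f\n", wt / 5, tat / 5);
    // Prints: avg W.T = 2.0, avg T.T = 4.6
    return 0;
}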

SJF comes in two types: i) non-preemptive SJF, ii) preemptive SJF.
I have re-arranged the processes according to arrival time.
Here is the non-preemptive SJF:
A.T = Arrival Time
B.T = Burst Time
C.T = Completion Time
T.T = Turnaround Time = C.T - A.T
W.T = Waiting Time = T.T - B.T
Here is the preemptive SJF:
Note: a running process is preempted whenever a new process arrives. The scheduler then compares burst times and allocates the CPU to the process with the shortest remaining burst time. If two processes have the same burst time, the one that arrived first is allocated first, just like FCFS.
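To illustrate the non-preemptive case in code, here is a minimal C++ sketch: at each step it runs the arrived, unfinished process with the shortest burst time (FCFS on ties, per the note above), then derives C.T, T.T, and W.T. The process set is the one from this question, with the same inferred arrival times as in the sketch above.

#include <cstdio>

int main() {
    const int N = 5;
    const char* name[] = { "P1", "P2", "P3", "P4", "P5" };
    int at[] = { 8, 2, 1, 3, 4 };  // arrival times (inferred)
    int bt[] = { 3, 1, 3, 2, 4 };  // burst times
    int ct[N];                     // completion times
    bool done[N] = { false };
    int time = 0, finished = 0;
    while (finished < N) {
        int pick = -1;
        for (int i = 0; i < N; ++i)
            if (!done[i] && at[i] <= time &&
                (pick < 0 || bt[i] < bt[pick] ||
                 (bt[i] == bt[pick] && at[i] < at[pick]))) // FCFS on ties
                pick = i;
        if (pick < 0) { ++time; continue; } // nothing has arrived yet: idle
        time += bt[pick];                   // non-preemptive: run to completion
        ct[pick] = time;
        done[pick] = true;
        ++finished;
    }
    for (int i = 0; i < N; ++i) {
        int tt = ct[i] - at[i];             // T.T = C.T - A.T
        std::printf("%s: C.T=%2d T.T=%d W.T=%d\n", name[i], ct[i], tt, tt - bt[i]);
    }
    return 0;
}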

The waiting times given in the earlier answers are wrong. Correct would be:
P3   P2   P4   P5   P1
0    3    4    6    10
(these are the waiting times if every process arrives at time 0)
Waiting time = (0+3+4+6+10)/5 = 4.6
Ref: http://www.it.uu.se/edu/course/homepage/oskomp/vt07/lectures/scheduling_algorithms/handout.pdf

The Gantt charts given by Hifzan and Raja are for FCFS algorithms.
With an SJF algorithm, processes can be interrupted; a process doesn't necessarily execute straight through its given burst time.
| P3 | P2 | P4 | P3 | P5 | P1 | P5 |
1    2    3    5    7    8    11   14
P3 arrives at 1 ms, is interrupted by P2 and P4 since they both have smaller burst times, and then resumes. P5 starts executing next, then is interrupted by P1 since P1's burst time is smaller than P5's. You must note the arrival times and be careful: these problems can be trickier than they appear at first glance.
EDIT: This applies only to Preemptive SJF algorithms. A plain SJF algorithm is non-preemptive, meaning it does not interrupt a process.
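For comparison, here is a sketch of the preemptive variant (shortest remaining time first), stepping one time unit at a time and printing each context switch. Ties on remaining time are broken in FCFS order here; the chart above breaks the ties at t=3 and t=8 the other way, which changes the interleaving but not the average waiting or turnaround times.

#include <cstdio>

int main() {
    const int N = 5;
    const char* name[] = { "P1", "P2", "P3", "P4", "P5" };
    int at[]  = { 8, 2, 1, 3, 4 };  // arrival times (inferred as before)
    int rem[] = { 3, 1, 3, 2, 4 };  // remaining burst time
    int finished = 0, last = -1;
    for (int t = 0; finished < N; ++t) {
        int pick = -1;
        for (int i = 0; i < N; ++i) // shortest remaining time, FCFS on ties
            if (rem[i] > 0 && at[i] <= t &&
                (pick < 0 || rem[i] < rem[pick] ||
                 (rem[i] == rem[pick] && at[i] < at[pick])))
                pick = i;
        if (pick >= 0 && pick != last)
            std::printf("t=%2d: run %s\n", t, name[pick]);
        last = pick;
        if (pick >= 0 && --rem[pick] == 0) ++finished;
    }
    // With FCFS tie-breaking, switches occur at t=1,2,3,5,7,11.
    return 0;
}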

Related

Does perf reset the counter after every sample?

In the Collecting Samples section of the perf documentation, we can specify the sampling rate with the -F option. According to my understanding, if -F 250 is used, perf will sample the counters 250 times per second on average. Does the counter get reset after each sample? For example, consider the two samples below (assume these are the only samples generated for the function f1):
sample S1 reporting c1 cycles in function f1
sample S2 reporting c2 cycles in function f1
Suppose I have to calculate approximately how many cycles are spent in all calls of function f1 (assume the application above contains only one call to f1). Is it right to say the answer is c1 + c2? (This is true only if the counters are reset after every sample.)
Thanks

WASAPI, Delays on m_AudioClient->Start()

The app captures sound from a microphone using WASAPI.
This code initializes m_AudioClient, which is of type IAudioClient*.
const LONG CAPTURE_CLIENT_LATENCY = 50 * 10000; // 50 ms, in 100-ns REFERENCE_TIME units
DWORD loopFlag = m_IsLoopback ? AUDCLNT_STREAMFLAGS_LOOPBACK : 0;
hr = m_AudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED
    , AUDCLNT_STREAMFLAGS_EVENTCALLBACK | AUDCLNT_STREAMFLAGS_NOPERSIST | loopFlag
    , CAPTURE_CLIENT_LATENCY, 0, m_WaveFormat->GetRawFormat(), NULL);
Then I use m_AudioClient->Start() and m_AudioClient->Stop() to pause or resume capturing.
Usually m_AudioClient->Start() takes 5-6 ms, but sometimes it takes about 150 ms, which is too much for the application.
If I call m_AudioClient->Start() once, then subsequent calls of m_AudioClient->Start() during the next 5 seconds will be fast, but after about 10-15 seconds the next call of m_AudioClient->Start() will take longer (150 ms). So it looks like it keeps some state for several seconds, and after that it needs to get back to that state, which takes some time.
On another machine these delays never happen; every call of m_AudioClient->Start() takes about 30 ms.
On a third machine the average duration of m_AudioClient->Start() is 140 ms, with peak values of about 1 s.
I run the same code on all 3 machines. The microphone is not exactly the same; in most cases it is "Microphone Array (Realtek High Definition Audio)".
Can somebody explain why these peak values for the duration of m_AudioClient->Start() happen, and how I can fix them?
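For reference, the latencies quoted above can be captured with a small timing wrapper. This is only a sketch: MeasureStartLatency is a hypothetical helper, and it assumes an IAudioClient* initialized as in the code above.

#include <audioclient.h>
#include <chrono>
#include <cstdio>

// Hypothetical helper: logs how long a single Start() call takes, so the
// 5-6 ms vs. 150 ms peaks can be correlated with what else is happening.
void MeasureStartLatency(IAudioClient* audioClient)
{
    using namespace std::chrono;
    const steady_clock::time_point t0 = steady_clock::now();
    const HRESULT hr = audioClient->Start();
    const steady_clock::time_point t1 = steady_clock::now();
    std::printf("Start() returned 0x%08lX after %lld ms\n",
                static_cast<unsigned long>(hr),
                static_cast<long long>(
                    duration_cast<milliseconds>(t1 - t0).count()));
}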

How would you implement this adaptive 'fudge factor' in a scheduler?

I have a scheduler, endlessly executing n actions. Each action is scheduled for x seconds into the future. When an action completes, it is re-scheduled for another x seconds into the future after its previously scheduled time. Every 1s, the scheduler "ticks", executing at most 25 actions which are due to fire. Actions may take a second or so to complete (though this value should be considered variable and unpredictable).
Say that x is 60 seconds. Due to the throttling of at most 25 actions being executed simultaneously, when n grows large, it is conceivable that the scheduler won't have time to execute all n actions within a 60 second window, and actions will be executed later and later as time goes on. This is undesirable, as it'll become true that there are actions to execute on every single tick and this increases load on my system. It's less important to me to keep x exactly constant than it is to keep load down.
So I wish to implement an adaptive "handicap", an automatically-applied fudge factor h, increasing it when a majority of actions are executed "late", and decreasing it (edging it back to its default of zero) when they're all seemingly and consistently on time. The scheduler would then be made to schedule actions for x+h seconds' time, rather than x.
At a high level, how would you approach this? How would you define "a majority of actions are executed 'late'" and how would you represent/detect it in C++03 code?
Better yet, is there an existing well-known approach that objectively "works" here?
To be clear, you are aiming to avoid sustained high load where there are tasks
every tick, rather than aiming to minimise the scheduling delay.
Correspondingly, the metric you should be looking at when considering the fudge
factor is the load, not the lateness.
If you have full knowledge of the system (the number of tasks, their
rescheduling intervals, the distribution of their execution times),
you could in principle exactly solve for a handicap value that would give you
a mean target load when busy, or would, say, only exceed the target load
10% of the time when busy, and so on.
On the other hand, if this information is not available or predictable,
you will need an adaptive approach.
The general theory for this sort of thing is control theory, which can get
quite involved. Broadly, though, the heuristic is: if the load is less than the
threshold and we have a positive handicap, reduce the handicap; if the load is
over the threshold, increase the handicap.
The handicap should be multiplicative, rather than additive: if, for example,
we knew we were consistently 10% overloaded, then we'd be right on target if we
applied a proportional delay of 10% to the scheduling of jobs. That is, we're
looking to apply a handicap factor h such that jobs are scheduled at x*h
seconds' time instead of x. A factor of 1 would correspond to no handicap.
When we're overloaded, but not maximally overloaded, the response then is linear
in the log: log(h) = log(load) - log(load_target). So the simplest method
would be:
// Simplest proportional controller: the handicap is the relative overload.
load = get_current_load();
if (load > load_target) h = load / load_target;
else h = 1.0;
Unfortunately, there is a maximum measured load, and linearity breaks down
here. The linear model can be extended to incorporate the accumulated
deviation from the target load, and the rate of change of the load.
This corresponds to the proportional-integral-derivative controller.
As this is a noisy environment (there is variation in the action
execution times), it might be wise to shy away from the derivative bit
of this model, and stick with the proportional-integral (PI) part.
When this model is discretized, we get an expression for log(h)
that is proportional to the current (log) overload, plus a term that
captures how badly we've been doing:
// Proportional-integral controller on the log of the load.
load = get_current_load();
deviation = load > load_target ? log(load / load_target) : 0; // proportional term
accum += p1 * deviation;                                      // integral term
log_h = p2 * deviation + accum;
h = log_h < 0 ? 1.0 : exp(log_h); // never schedule earlier than x
There is one asymmetry left, though: when we drop below the load target,
the accumulated error term stays high, because the deviation is clamped at
zero. We can work around this by accumulating negative deviations as well,
but limiting the accumulated error to be at least non-negative, so that a
period of legitimately low load doesn't give us a free pass for later:
load = get_current_load();
if (load > 0) {
    deviation = log(load / load_target);
    accum += p1 * deviation;          // negative deviations now drain the accumulator
    if (accum < 0) accum = 0;         // ...but never below zero
    if (deviation < 0) deviation = 0; // proportional term still only penalises overload
}
else {
    accum = 0;
    deviation = 0;
}
log_h = p2 * deviation + accum;
h = log_h < 0 ? 1.0 : exp(log_h);
The value for p2 will be somewhere (roughly) between 0.5 and 0.9,
to leave some room for the influence of the accumulated error.
A good value for p1 will probably be around 0.3 to 0.5 times
the reciprocal of the lag time: the number of steps it takes for a change
in h to present itself as a change in load. The lag can be estimated
by the mean rescheduling time of the actions.
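For example, with the x = 60 s rescheduling interval from the question and a
1 s tick, the lag is roughly 60 steps, suggesting a starting point for p1
somewhere around 0.3/60 ≈ 0.005 to 0.5/60 ≈ 0.008.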
You can play around with these parameters to get the sort of
response you'd like, or you can make a more faithful mathematical
model of your scheduling problem and then do maths to it!
The parameters themselves can also be modified adaptively over
time, based on the observed response to changes in load.
(Warning, I haven't actually tried these fragments in a mock scheduler!)
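Putting the fragments together, here is a self-contained C++03 sketch of the
controller. It is equally untested in a real scheduler; get_current_load (or
whatever feeds update()), the load target, and the gains are placeholders to
replace with your own.

#include <cmath>

// PI handicap controller: feed it the measured load once per tick,
// get back the factor h; schedule actions for x * h seconds instead of x.
class HandicapController {
public:
    HandicapController(double load_target, double p1, double p2)
        : load_target_(load_target), p1_(p1), p2_(p2), accum_(0.0) {}

    double update(double load) {
        double deviation;
        if (load > 0.0) {
            deviation = std::log(load / load_target_);
            accum_ += p1_ * deviation;            // integral term
            if (accum_ < 0.0) accum_ = 0.0;       // no credit for idle periods
            if (deviation < 0.0) deviation = 0.0; // only penalise overload
        } else {
            accum_ = 0.0;
            deviation = 0.0;
        }
        const double log_h = p2_ * deviation + accum_;
        return log_h < 0.0 ? 1.0 : std::exp(log_h); // h >= 1 always
    }

private:
    double load_target_;
    double p1_, p2_;
    double accum_;
};

A tick might then look like: compute the load (say, actions due this tick
divided by the 25-action cap), call update(load) to get h, and reschedule each
completed action for x * h seconds after its previously scheduled time.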

Fixed Round-Robin scheduling with two processes and a quantum of 1

Morning,
I'm using a fixed RR algorithm with a quantum of 1. P1 arrives at 0 and P5 arrives at 1. P1 has a burst time of 10 and P5 has a burst time of 5.
P1 executes from 0 to 1. P5 arrives at 1, but it goes to the back of the queue. Since there are only two processes at the start of 1, I believe P1 would execute from 1 to 2, P5 would wait one tick and first execute from 2 to 3.
Is this correct? If not, would P5 execute immediately from 1 to 2?
Thank you
Your understanding is correct: the OS prefers a recently ended process to a newly entered one when the end time of P1 equals the start time of P5. The following question may be useful: Special case scheduling
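A tiny simulation of this case (a sketch; the tie-break follows the rule just described, so a process whose quantum ends at instant t re-enters the queue ahead of a process arriving at that same instant):

#include <cstdio>
#include <queue>

struct Proc { const char* name; int arrival; int remaining; };

int main() {
    Proc procs[] = { { "P1", 0, 10 }, { "P5", 1, 5 } }; // sorted by arrival
    const int N = 2;
    int next = 0;
    std::queue<Proc*> q;
    for (int t = 0; next < N || !q.empty(); ++t) {
        // Admit arrivals first: a process arriving exactly when a quantum
        // ends queues up behind the process re-queued at the end of the
        // previous tick, which is the tie-break confirmed above.
        while (next < N && procs[next].arrival <= t) q.push(&procs[next++]);
        if (q.empty()) continue;                 // idle tick
        Proc* cur = q.front(); q.pop();
        std::printf("t=%2d..%2d: %s\n", t, t + 1, cur->name);
        if (--cur->remaining > 0) q.push(cur);   // quantum of 1 expired
    }
    return 0;
}

This prints P1 for t=0..1 and t=1..2, then P5 and P1 alternating, as described.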

Earliest deadline scheduling

I want to implement Earliest Deadline scheduling in C, but I can't find the algorithm on the net.
I understand the example below: at time 0, both A1 and B1 arrive. Since A1 has the earliest deadline, it is scheduled first. When A1 completes, B1 is given the processor. At time 20, A2 arrives. Because A2 has an earlier deadline than B1, B1 is interrupted so that A2 can execute to completion. B1 is then resumed at time 30. At time 40, A3 arrives. However, B1 has an earlier deadline and is allowed to execute to completion, at time 45. A3 is then given the processor and finishes at time 55. However, I can't come up with a solution. Please help me find an algorithm.
Thanks.
Image of the example
http://imageshack.us/photo/my-images/840/scheduling.png/
When a process finishes (and at the beginning), take the process with the lowest processTimeToDeadline - processTimeToExecute as the new current process.
When a new process arrives, replace the current process if and only if newProcessTimeToDeadline - newProcessTimeToExecute < currentProcessTimeToDeadline - currentProcessTimeStillNeededToExecute.
Note: if you do this with multiple CPUs, you get the multiprocessor scheduling problem, which is NP-complete.
The previous answer describes an "Earliest Feasible Deadline First" (EFDF) scheduler, and it fits the image from the question perfectly.
An "Earliest Deadline First" (EDF) scheduler is simpler: it just runs the task with the earliest deadline. That is all.