Morning,
I'm using a fixed RR (round-robin) algorithm with a quantum of 1. P1 arrives at 0 and P5 arrives at 1. P1 has a burst time of 10 and P5 has a burst time of 5.
P1 executes from 0 to 1. P5 arrives at 1, but it goes to the back of the queue. Since there are only two processes at time 1, I believe P1 would execute from 1 to 2, and P5 would wait one tick and first execute from 2 to 3.
Is this correct? If not, would P5 execute immediately from 1 to 2?
Thank you
Your understanding is correct: the OS prefers a recently ended process to a newly entered one when the end time of P1's quantum equals the arrival time of P5. The following question may be useful: Special case scheduling
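As a cross-check, here is a minimal round-robin simulation in Python (quantum = 1; the process list comes from the question and the tie-break rule from this answer). It prints P1 for 0-1, P1 for 1-2, then P5 for 2-3:

from collections import deque

# Processes from the question: (name, arrival, burst).
procs = sorted([("P1", 0, 10), ("P5", 1, 5)], key=lambda p: p[1])
remaining = {name: burst for name, _, burst in procs}
queue, t, i = deque(), 0, 0

def admit(now):
    """Enqueue every process that has arrived by time `now`."""
    global i
    while i < len(procs) and procs[i][1] <= now:
        queue.append(procs[i][0])
        i += 1

admit(0)
while queue or i < len(procs):
    if not queue:                # CPU idle until the next arrival
        t = procs[i][1]
        admit(t)
    name = queue.popleft()
    print(f"{name} runs from {t} to {t + 1}")
    t += 1                       # quantum = 1
    remaining[name] -= 1
    if remaining[name] > 0:      # re-queue the preempted process first,
        queue.append(name)       # ahead of any process arriving exactly at t
    admit(t)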
I have to design a system that adds jobs for workers and checks on them. There can be multiple workers, so multiple jobs can run simultaneously, and if the number of jobs exceeds the number of workers, the extra jobs are added to a queue.
My job is to design code that adds jobs and checks on jobs; some separate code will make calls to my functions.
So I mainly need to design 3 functions:
init(3); // initializes the number of workers (3 here)
add_job(timestamp, burst_time); // adds a job and returns the number of jobs in the system
jobs_active(timestamp); // returns the number of jobs in the system
Burst time is a concept used in OS scheduling; it serves the same purpose here, so I used it.
Burst time refers to the time required to complete the job.
The jobs_active function returns the same thing as add_job, without adding a new job.
You can assume only one operation is performed at a time.
What I'm not sure about is how I should simulate time in this question.
For example, in the following run, each function call would give a different answer based on the timestamp:
init(2) // initializes time = 0, sets the number of workers to 2
add_job(1, 4); // returns 1 //Add a job at time=1, burst time = 4, job would get finished at time=5 (1+4)
jobs_active(3); // returns 1 //checks jobs in system at time = 3, currently 1
jobs_active(5); // returns 0 //At time=5 job added at 1 ends here
add_job(6, 3); // returns 1 //Add a job at time=6, burst time = 3, job would get finished at time=9 (6+3)
add_job(7, 3); // returns 2 //Add a job at time=7, burst time = 3, job would get finished at time=10 (7+3)
jobs_active(8); // returns 2 //At time=8, both previous jobs are active
jobs_active(9); // returns 1 //At time=9, job added at 6 is finished
jobs_active(10); // returns 0 //At time=10 job added at 7 ends here
In the above example, when the job is added at time = 1 with burst time = 4, it ends at time = 5 (1 + 4).
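Not asked for in the post, but here is a minimal Python sketch of one way this could work, assuming timestamps passed to the functions never decrease. Workers are modeled as a min-heap of running jobs' finish times; overflow jobs wait in a FIFO queue, and a freed worker starts the next queued job at the moment it becomes free:

import heapq
from collections import deque

class JobSystem:
    def __init__(self, num_workers):
        self.num_workers = num_workers
        self.finish_times = []     # min-heap of running jobs' finish times
        self.pending = deque()     # burst times waiting for a free worker

    def _advance(self, timestamp):
        # Retire every job finished by `timestamp`; each freed worker
        # immediately starts the next pending job (starting at the moment
        # the worker became free, not at `timestamp`).
        while self.finish_times and self.finish_times[0] <= timestamp:
            freed_at = heapq.heappop(self.finish_times)
            if self.pending:
                heapq.heappush(self.finish_times, freed_at + self.pending.popleft())

    def add_job(self, timestamp, burst_time):
        self._advance(timestamp)
        if len(self.finish_times) < self.num_workers:
            heapq.heappush(self.finish_times, timestamp + burst_time)
        else:
            self.pending.append(burst_time)
        return len(self.finish_times) + len(self.pending)

    def jobs_active(self, timestamp):
        self._advance(timestamp)
        return len(self.finish_times) + len(self.pending)

Running the example from the question reproduces every return value:

s = JobSystem(2)
print(s.add_job(1, 4))    # 1
print(s.jobs_active(3))   # 1
print(s.jobs_active(5))   # 0
print(s.add_job(6, 3))    # 1
print(s.add_job(7, 3))    # 2
print(s.jobs_active(8))   # 2
print(s.jobs_active(9))   # 1
print(s.jobs_active(10))  # 0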
Let's say I have a frame that uses 2 copy queues, 1 graphics queue, and 1 compute queue, in this order:
1) Upload data from CPU to GPU using 1st copy queue at the beginning of the frame (mesh vertices and such). That will be ExecuteCommandLists on 1st copy queue then SignalFence.
2) Build a ray tracing acceleration structure on async compute queue. WaitFence to wait for data we just uploaded, then ExecuteCommandLists to build accel. structure, then SignalFence.
3) WaitFence on the graphics queue to wait for the AS build, then ExecuteCommandLists to render the frame, then issue another SignalFence.
4) WaitFence then ExecuteCommandLists on 2nd copy queue to perform data readback (GPU -> CPU), let's say to get terrain and physics back to the CPU. Then we call the final SignalFence for the frame.
Now, I want to have 3 frames buffered at all times to avoid CPU/GPU bubbles when no work is performed.
What would be the correct fence setup to achieve this?
So far I have implemented 2 variants, one of which should work (unless I'm completely wrong) but doesn't, and a second which works, but I'm not sure why. Please help me figure it out.
1) Have 2 fences (A and B) for all of the frames and queues:
For 1st frame:
CopyQueue1.ExecuteCommands();
CopyQueue1.SignalFence(A, 1);
AsyncComputeQueue.Wait(A, 1);
AsyncComputeQueue.ExecuteCommands();
AsyncComputeQueue.Signal(A, 2);
GraphicsQueue.Wait(A, 2);
GraphicsQueue.ExecuteCommands();
GraphicsQueue.Signal(A, 3);
CopyQueue2.Wait(A, 3);
CopyQueue2.ExecuteCommands();
CopyQueue2.Signal(B, 1);
Same thing for the next frames, except that the values for A and B are incremented: A takes 3, 4, 5 in frame 2 and 6, 7, 8 in frame 3, and B takes 2 and 3 in frames 2 and 3 respectively.
At the end of the render loop I perform a check to keep a maximum of 3 frames in flight:
if (CurrentFrameBValue - B.SignalledValue() >= 3)
{
StallCurrentCPUThread();
}
ReleaseCommandListsForThisFrame();
// GoToNextRenderLoop
This code has an issue: B is signaled very quickly, so I don't stall the CPU, proceed to reset the command lists for the corresponding frame, and get a debug-layer error saying I was resetting command lists while the GPU was still using them.
As I understand it, all work submitted to the GPU is guaranteed to be performed in submission order, so I expect the fences to advance as follows: A to 1, 2, 3, then B to 1, then A to 4, 5, 6, then B to 2, and so forth. Why is B signaled before all work for the frame is done?
2) The approach that doesn't emit errors: have 4 fences, one for each queue (A, B, C, D), and increment each fence's value by one every frame, as we did for B in case 1.
One reason I can see for the 1st case failing is that work on the GPU is not really done in the order I expect, and fence A can be signaled in an unpredictable order, messing up dependencies, while the 2nd case has a separate fence for each queue.
I should also note that I don't have dependencies between frames: CopyQueue1 does not depend on CopyQueue2 via fences; I ensure correctness by keeping no more than 3 frames in flight with the CPU stall shown above.
Any thoughts?
I believe the problem was in using 1 fence for 3 different queues. Let's look at case 1: Copy1(Frame 1) -> AsyncCompute(Frame 1) -> Graphics(Frame 1) -> Copy2(Frame 1), then Copy1(Frame 2) -> AsyncCompute(Frame 2) -> Graphics(Frame 2) -> Copy2(Frame 2), all with the same fence object but different values.
In my case, I believe, Copy1(Frame 2) completed before AsyncCompute(Frame 1) or even Graphics(Frame 1); it doesn't matter which, because the fence value it signals is higher than anything expected in frame 1. That messed up frame 1's dependencies and started Copy2(Frame 1) too early, which led to the frame finishing and the command lists being reset while async-compute and/or graphics work was actually still running.
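For completeness, here is how I understand the working variant 2 in the same pseudocode as above: per-queue fences A, B, C, D, each incremented once per frame N, so a fast Copy1 from frame N+1 can never satisfy a frame-N wait on another queue:

// Frame N (N = 1, 2, 3, ...), one fence per queue:
CopyQueue1.ExecuteCommands();
CopyQueue1.SignalFence(A, N);
AsyncComputeQueue.Wait(A, N);
AsyncComputeQueue.ExecuteCommands();
AsyncComputeQueue.Signal(B, N);
GraphicsQueue.Wait(B, N);
GraphicsQueue.ExecuteCommands();
GraphicsQueue.Signal(C, N);
CopyQueue2.Wait(C, N);
CopyQueue2.ExecuteCommands();
CopyQueue2.Signal(D, N);
// CPU-side throttle, as before, but on the last fence in the chain:
if (CurrentFrameDValue - D.SignalledValue() >= 3)
{
StallCurrentCPUThread();
}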
I have two Raspberry Pis. Suppose these two Pis are denoted A and B.
A and B are connected to each other over a socket.
On a particular event, A starts generating a value every second,
and on another event A stops generating those values.
B needs to read those values from A every 1 second over the socket,
so B has a while loop running.
What I have done is read the time on every while-loop iteration and check whether 1 second has elapsed. If 1 second has elapsed, I read the values from A.
Here is some pseudocode for this:
while True:
    on = read_from_A()
    if on:  # "on" tells me to start reading values from A
        current_time = time.time()
        if current_time - last_time == 1:
            read_values_from_A()
            last_time = current_time
    do_some_task()
With this approach I am not able to read values from A exactly every 1 second; B is missing some values from A.
Suppose A generates 360 values in 6 minutes;
B should be able to read all 360 of those values.
What approach should I use so that there is no data loss?
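One possible direction, sketched here as a suggestion rather than taken from the post: since A already pushes one value per second over the socket, B does not need to poll the clock at all. A blocking read loop lets the OS buffer anything A sent while B was busy, so no value is lost. A minimal Python sketch; the host/port and the newline framing are assumptions, since the post does not specify a wire format:

import socket

def handle_value(v):            # hypothetical stand-in for B's processing
    print(v.decode())

# Blocking reader on B. recv() blocks until data arrives, and the OS
# buffers whatever A sent in the meantime, so nothing is dropped.
with socket.create_connection(("192.168.1.10", 5000)) as conn:
    buf = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:           # A closed the socket: it stopped generating
            break
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            handle_value(line)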
In the SJF (Shortest Job First) scheduling method,
how do I calculate the average waiting time and average turnaround time?
Is the Gantt chart correct?
The Gantt chart is wrong...
The first process to arrive is P3, so it executes first. Since the burst time of P3 is 3 sec, by the time P3 completes, processes P2, P4, and P5 have arrived.
Among P2, P4, and P5, the shortest burst time is 1 sec, for P2, so P2 executes next, then P4 and P5. P1 is executed last.
The Gantt chart for this question will be:
| P3 | P2 | P4 | P5 | P1 |
1 4 5 7 11 14
Average waiting time = (0+2+2+3+3)/5 = 2
Average turnaround time = (3+3+4+7+6)/5 = 4.6
SJF is of two types: i) non-preemptive SJF and ii) preemptive SJF.
I have rearranged the processes according to arrival time.
Here is the non-preemptive SJF:
A.T= Arrival Time
B.T= Burst Time
C.T= Completion Time
T.T = Turnaround Time = C.T - A.T
W.T = Waiting Time = T.T - B.T
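As a quick sanity check of these formulas, here is a small Python snippet. The arrival and burst columns are reconstructed from the chart and averages in the first answer above (order: P3, P2, P4, P5, P1), since the original table image is not reproduced here:

# T.T = C.T - A.T and W.T = T.T - B.T, straight from the definitions above.
arrival    = [1, 2, 3, 4, 8]
burst      = [3, 1, 2, 4, 3]
completion = [4, 5, 7, 11, 14]

tt = [c - a for c, a in zip(completion, arrival)]   # [3, 3, 4, 7, 6]
wt = [t - b for t, b in zip(tt, burst)]             # [0, 2, 2, 3, 3]
print(sum(wt) / len(wt))   # 2.0
print(sum(tt) / len(tt))   # 4.6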
Here is the preemptive SJF
Note: the running process is preempted whenever a new process arrives. The scheduler then compares burst times and allocates the CPU to the process with the shortest remaining burst time. If two processes have the same burst time, the one that arrived first is allocated first, just like FCFS.
It is wrong.
The correct chart will be:
| P3 | P2 | P4 | P5 | P1 |
0 3 4 6 10
since these are the correct differences.
Average waiting time = (0+3+4+6+10)/5 = 4.6
Ref: http://www.it.uu.se/edu/course/homepage/oskomp/vt07/lectures/scheduling_algorithms/handout.pdf
The Gantt charts given by Hifzan and Raja are for FCFS algorithms.
With an SJF algorithm, processes can be interrupted. That is, a process doesn't necessarily execute straight through its given burst time.
| P3 | P2 | P4 | P3 | P5 | P1 | P5 |
1 2 3 5 7 8 11 14
P3 arrives at 1 ms, then is interrupted by P2 and P4 since they both have smaller burst times, and then P3 resumes. P5 starts executing next, then is interrupted by P1 since P1's burst time is smaller than P5's. You must note the arrival times and be careful; these problems can be trickier than they appear at first glance.
EDIT: This applies only to Preemptive SJF algorithms. A plain SJF algorithm is non-preemptive, meaning it does not interrupt a process.
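If it helps, a preemptive SJF (shortest-remaining-time-first) schedule can be generated mechanically. A minimal Python sketch; the example process table at the bottom is hypothetical, since the original table is an image and not reproduced here:

import heapq

def srtf(procs):
    # procs: list of (name, arrival, burst); returns (name, start, end) slices.
    procs = sorted(procs, key=lambda p: p[1])
    ready, slices, t, i = [], [], 0, 0
    while ready or i < len(procs):
        if not ready:                          # CPU idle until next arrival
            t = max(t, procs[i][1])
        while i < len(procs) and procs[i][1] <= t:
            name, arr, burst = procs[i]
            heapq.heappush(ready, (burst, arr, name))  # FCFS tie-break on arrival
            i += 1
        rem, arr, name = heapq.heappop(ready)
        nxt = procs[i][1] if i < len(procs) else float("inf")
        run = min(rem, nxt - t)                # run until done or next arrival
        if slices and slices[-1][0] == name and slices[-1][2] == t:
            slices[-1] = (name, slices[-1][1], t + run)   # merge contiguous runs
        else:
            slices.append((name, t, t + run))
        t += run
        if rem > run:                          # preempted: back into the pool
            heapq.heappush(ready, (rem - run, arr, name))
    return slices

# Hypothetical data:
for name, start, end in srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]):
    print(name, start, end)   # P1 0-2, P2 2-4, P3 4-5, P2 5-7, P4 7-11, P1 11-16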
I want to implement earliest-deadline scheduling in C, but I can't find the algorithm on the net.
I understand the example below: when time is 0, both A1 and B1 arrive. Since A1 has the earliest deadline, it is scheduled first. When A1 completes, B1 is given the processor. When time is 20, A2 arrives. Because A2 has an earlier deadline than B1, B1 is interrupted so that A2 can execute to completion. Then B1 is resumed at time 30. When time is 40, A3 arrives. However, B1 has an earlier deadline and is allowed to execute to completion at time 45. A3 is then given the processor and finishes at time 55. However, I can't come up with a solution. Please help me find an algorithm.
Thanks.
Image of the example
http://imageshack.us/photo/my-images/840/scheduling.png/
When a process finishes (and at the beginning), take the process with the lowest processTimeToDeadline - processTimeToExecute as the new current process.
When a new process arrives, replace the current process if and only if newProcessTimeToDeadline - newProcessTimeToExecute < currentProcessTimeToDeadline - currentProcessTimeStillNeededToExecute.
Note: if you do this with multiple CPUs, you get the multiprocessor scheduling problem, which is NP-complete.
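A small Python sketch of that replacement rule, using hypothetical process objects with a deadline and the remaining execution time as fields:

from dataclasses import dataclass

@dataclass
class Proc:                 # hypothetical fields matching the rule above
    name: str
    deadline: float         # absolute deadline
    remaining: float        # execution time still needed

def laxity(p, now):
    # processTimeToDeadline - processTimeStillNeededToExecute
    return (p.deadline - now) - p.remaining

def pick(current, new, now):
    # Replace the current process iff the newcomer's slack is strictly smaller.
    return new if laxity(new, now) < laxity(current, now) else current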
The previous answer describes an "Earliest Feasible Deadline First" (EFDF) scheduler, and it fits the image from the question perfectly.
An "Earliest Deadline First" (EDF) scheduler is simpler: it just runs the task with the earliest deadline. That is all.