Why is the average wait time of pre-emptive SJF guaranteed to be no larger than that of non-preemptive SJF scheduling? - scheduling

SJF = Shortest Job First, title wouldn't let me fit it
Wouldn't preemptive SJF scheduling make the average wait time of a process greater than if it were simply executed under a non-preemptive SJF algorithm? After all, you are continually context switching and forcing a process to wait longer to be completed.
I can't seem to understand why preemptive SJF (a.k.a. shortest-time-remaining-first, or STRF) is better than non-preemptive SJF in terms of average wait time per process.
Can someone explain this to me?
Thank you.

Suppose p1 (8 ms burst time) arrives in the queue at 0 ms. After p1 has executed for 1 ms, another process p2 arrives with a 4 ms burst time. The processor will stop executing p1 and start executing p2. Why? Because p1 still needs 7 ms to finish, while p2 needs only 4 ms.
I think it's clear why it is called "shortest-time-remaining-first" scheduling: it always chooses the process with the smallest amount of time remaining to execute.
As for your other question, why it is better, let's extend the scenario.
Process p1 --> burst time 8 ms, arrival time 0 ms,
Process p2 --> burst time 4 ms, arrival time 1 ms,
Process p3 --> burst time 9 ms, arrival time 2 ms,
Process p4 --> burst time 5 ms, arrival time 3 ms.
for preemptive SJF, the execution order is p1 (0-1), p2 (1-5), p4 (5-10), p1 again (10-17), p3 (17-26), so
average waiting time = [ (for p1)(10-1) + (for p2)(1-1) + (for p3)(17-2) + (for p4)(5-3) ] / 4 = 6.5 ms
for non-preemptive SJF the order is p1 (0-8), p2 (8-12), p4 (12-17), p3 (17-26), so
average waiting time = [ (for p1)(0) + (for p2)(8-1) + (for p3)(17-2) + (for p4)(12-3) ] / 4 = 7.75 ms
You can see why preemptive SJF is said to be better than non-preemptive SJF: the total time to finish all the processes is the same (26 ms either way), but the preemptive version runs short jobs first, so processes spend less time waiting on average.
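Not from the book, but here is a minimal C++ sketch (a simple per-millisecond simulation I used to double-check the arithmetic); it reproduces the 6.5 ms and 7.75 ms averages above.

#include <iostream>
#include <vector>

struct Proc { int id; int arrival; int burst; };

// Preemptive SJF (SRTF): advance one millisecond at a time and always run the
// arrived process with the least remaining time.
double srtfAverageWait(const std::vector<Proc>& procs) {
    std::vector<int> remaining;
    for (const Proc& p : procs) remaining.push_back(p.burst);
    int done = 0, time = 0, totalWait = 0;
    while (done < (int)procs.size()) {
        int pick = -1;
        for (int i = 0; i < (int)procs.size(); ++i)
            if (procs[i].arrival <= time && remaining[i] > 0 &&
                (pick == -1 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick == -1) { ++time; continue; }          // CPU idle
        --remaining[pick];
        ++time;
        if (remaining[pick] == 0) {
            ++done;
            totalWait += time - procs[pick].arrival - procs[pick].burst;  // wait = completion - arrival - burst
        }
    }
    return double(totalWait) / procs.size();
}

// Non-preemptive SJF: once a process starts, it runs to completion.
double sjfAverageWait(const std::vector<Proc>& procs) {
    std::vector<bool> finished(procs.size(), false);
    int done = 0, time = 0, totalWait = 0;
    while (done < (int)procs.size()) {
        int pick = -1;
        for (int i = 0; i < (int)procs.size(); ++i)
            if (!finished[i] && procs[i].arrival <= time &&
                (pick == -1 || procs[i].burst < procs[pick].burst))
                pick = i;
        if (pick == -1) { ++time; continue; }          // CPU idle
        totalWait += time - procs[pick].arrival;       // waited from arrival until start
        time += procs[pick].burst;
        finished[pick] = true;
        ++done;
    }
    return double(totalWait) / procs.size();
}

int main() {
    std::vector<Proc> procs = { {1, 0, 8}, {2, 1, 4}, {3, 2, 9}, {4, 3, 5} };
    std::cout << "preemptive SJF:     " << srtfAverageWait(procs) << " ms\n";  // 6.5
    std::cout << "non-preemptive SJF: " << sjfAverageWait(procs) << " ms\n";   // 7.75
}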
Reference: Operating System Concepts by Galvin, Silberschatz, Gagne (8th edition).

Related

Processing tasks in parallel in specific time frame without waiting for them to finish

This is a question about concurrency/parallelism and processes. I am not sure how to express it, so please forgive my ignorance.
It is not related to any specific language, although I'm using Rust lately.
The question is whether it is possible to launch processes in concurrent/parallel mode, without waiting for them to finish, within a specific time frame, even when the total time of the processes is more than the given time frame.
For example: let's say I have 100 HTTP requests that I want to launch in one second, separated by 10 ms each. Each request will take +/- 50 ms. I have a computer with 2 cores to run them.
In parallel that would be 100 tasks / 2 cores, 50 tasks each. The problem is that 50 tasks * 50 ms each is 2500 ms in total, so two and a half seconds to run the 100 tasks in parallel.
Would it be possible to launch all these tasks in 1s?
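As a rough illustration of the "launch without waiting" idea (in C++ rather than Rust, with a sleep standing in for the hypothetical HTTP request, so the numbers are only indicative): because each task spends almost all of its 50 ms waiting rather than computing, far more tasks than cores can be in flight at once.

#include <chrono>
#include <future>
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical stand-in for an HTTP request: it mostly waits (I/O), so it
// does not keep a CPU core busy for its full 50 ms.
int fakeRequest(int id) {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return id;
}

int main() {
    std::vector<std::future<int>> pending;
    auto t0 = std::chrono::steady_clock::now();

    // Fire one task every 10 ms without waiting for the previous ones.
    // std::async with launch::async spawns a thread per task here, which is
    // fine for a sketch since the tasks mostly sleep.
    for (int i = 0; i < 100; ++i) {
        pending.push_back(std::async(std::launch::async, fakeRequest, i));
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }

    // Only now do we wait for everything to finish.
    for (auto& f : pending) f.get();

    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - t0);
    std::cout << "all done after " << elapsed.count() << " ms\n";  // roughly 1050 ms
}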

How to decrease CPU usage of high resolution (10 micro second) precise timer?

I'm writing a timer for a complex communication application on Windows 10 with Qt5 and C++. I want to use at most 3 percent of the CPU, with microsecond resolution.
Initially I used QTimer (Qt5) in this app. It was fine, with low CPU usage and a developer-friendly interface, but it was not as precise as I need: it only accepts milliseconds as a parameter, while I need microseconds, and its accuracy did not even match that resolution in many real-world situations, such as under heavy CPU load. Sometimes the timer fires after 1 millisecond, sometimes after 15 milliseconds.
I searched for a solution for days, but in the end I found that Windows is not a real-time operating system (RTOS) and does not provide a high-resolution, precise timer.
So I wrote my own high-resolution precise timer that polls the CPU, implemented as a singleton class running in a separate thread. It works at 10 microsecond resolution.
But it consumes one logical core, which is equivalent to 6.25 percent on a Ryzen 2700.
For my application this CPU usage is unacceptable. How can I reduce it without giving up the high resolution?
This is the code that does the job:
void CsPreciseTimerThread::run()
{
    while (true)                                   // tight loop: never yields the CPU
    {
        QMutexLocker locker(&mMutex);              // protect mTimerList for this pass
        for (int i = 0; i < mTimerList.size(); i++)
        {
            CsPreciseTimerMiddleLayer* timer = mTimerList[i];
            int interval = timer->getInterval();
            if (timer->isActive() && timer->remainingTime() < 0)
            {
                timer->emitTimeout();              // fire the timer
                timer->resetTime();                // rearm for the next interval
            }
        }
    }
}
I tried lowering the priority of the timer thread, using these lines:
QThread::start(QThread::Priority::LowestPriority);
And this:
QThread::start(QThread::Priority::IdlePriority);
Those changes made the timer less precise, but CPU usage didn't decrease.
After that I tried forcing the current thread to sleep for a few microseconds inside the loop:
QThread::usleep(15);
As you might guess, the sleep call ruined the accuracy. Sometimes the timer sleeps much longer than expected, like 10 ms or 15 ms.
I'm going to reference Windows APIs directly instead of the Qt abstractions.
I don't think you want to lower your thread priority, I think you want to raise your thread priority and use the smallest amount of Sleep between polling to balance between latency and CPU overhead.
Two ideas:
In Windows Vista, they introduced the Multimedia Class Scheduler Service specifically so that they could move the Windows audio components out of kernel mode and run them in user mode without impacting pro-audio tools. That's probably going to be helpful to you - it's not precisely "real time" guaranteed, but it's meant for low-latency operations.
Going the classic way - raise your process and thread priority to high or critical, while using a reasonable sleep statement of a few milliseconds. That is, raise your thread priority to THREAD_PRIORITY_TIME_CRITICAL. Then do a very small Sleep after completion of the for loop. This sleep amount should be between 0..10 milliseconds. Some experimentation required, but I would sleep no more than half the time to the next expected timeout, with a max of 10ms. And when you are within N microseconds of your timer, you might need to just spin instead of yielding. Some experimentation is required. You can also experiment with raising your Process priority to REALTIME_PRIORITY_CLASS.
Be careful - a handful of runaway processes and threads at these higher priority levels that aren't sleeping can lock up the system.
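A rough, untested sketch of the second idea against the raw Windows APIs (the thresholds and the "sleep at most half the remaining time, capped at 10 ms" heuristic are starting points to tune, not fixed rules):

#include <windows.h>
#include <algorithm>
#include <cstdint>

// Call once from the timer thread: raise process/thread priority as suggested
// above. REALTIME_PRIORITY_CLASS is also worth experimenting with, but it can
// starve the rest of the system if this thread misbehaves.
void raiseTimerThreadPriority()
{
    SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
}

// Wait until a deadline given in QueryPerformanceCounter ticks: sleep coarsely
// while far away, yield when closer, and spin only for the last stretch.
// Note that Sleep() granularity is around 15 ms unless the system timer period
// has been lowered (timeBeginPeriod), so the thresholds below need tuning.
void waitUntil(int64_t deadlineQpc)
{
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);

    for (;;)
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        int64_t remainingTicks = deadlineQpc - now.QuadPart;
        if (remainingTicks <= 0)
            return;                                        // deadline reached

        int64_t remainingUs = remainingTicks * 1000000 / freq.QuadPart;
        if (remainingUs > 2000)
            Sleep((DWORD)std::min<int64_t>(remainingUs / 2000, 10)); // sleep at most half the wait, capped at 10 ms
        else if (remainingUs > 200)
            Sleep(0);                                      // give up the rest of the time slice
        // else: busy-spin for the last ~200 microseconds for accuracy
    }
}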

Calculate average wait & turn around times of a process

Hi, I am having trouble understanding exactly how wait and turnaround times are calculated. I am learning this in preparation for an exam, and I have a past paper for it, but the marking scheme does not explain where the values come from. Here is the question:
An operating system has two process priority levels, 1 and 2. Three new
processes, A B and C, arrive at the same time in that order with the
following characteristics
Process   Quanta Required   Priority
A         20                1
B         30                1
C         40                2
Calculate the average Waiting Time and average Turn-Around Time if:
a)normal Round Robin scheduling is used
b)a modified Round Robin scheduling which gives priority 1 processes a
double quantum and priority 2 processes a single quantum whenever they
are scheduled
c) What advantages or disadvantages has the algorithm described in (b)
compared to Priority Queue scheduling?
The answers only show calculations with no explanation to the best of my understanding:
Average Wait Time = Sum of Process Finish times / Number of Processes
Average Turn Around = Sum of Process Start Times / Number of Processes
Is this correct, or is there a better way to calculate this? Any help is much appreciated.
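For what it's worth, here is a sketch of how the two averages are usually computed (turnaround = finish time - arrival time, waiting = turnaround - run time), assuming all three jobs arrive at time 0 in the order A, B, C and that "normal" Round Robin gives each job one quantum per turn:

#include <algorithm>
#include <iostream>
#include <queue>
#include <vector>

struct Job { char name; int needed; int quantaPerTurn; };

// Round Robin with a configurable number of quanta per turn for each job.
// Assumes all jobs arrive at time 0 in the order given.
void simulate(std::vector<Job> jobs) {
    std::vector<int> remaining;
    std::vector<int> finish(jobs.size(), 0);
    std::queue<int> ready;
    for (int i = 0; i < (int)jobs.size(); ++i) { remaining.push_back(jobs[i].needed); ready.push(i); }

    int time = 0;
    while (!ready.empty()) {
        int i = ready.front(); ready.pop();
        int slice = std::min(jobs[i].quantaPerTurn, remaining[i]);
        time += slice;
        remaining[i] -= slice;
        if (remaining[i] > 0) ready.push(i);   // not finished: back of the queue
        else finish[i] = time;                 // finished: record completion time
    }

    double totalWait = 0, totalTurnaround = 0;
    for (int i = 0; i < (int)jobs.size(); ++i) {
        int turnaround = finish[i];            // arrival time is 0
        int wait = turnaround - jobs[i].needed;
        totalWait += wait; totalTurnaround += turnaround;
        std::cout << jobs[i].name << ": wait " << wait << ", turnaround " << turnaround << "\n";
    }
    std::cout << "average wait " << totalWait / jobs.size()
              << ", average turnaround " << totalTurnaround / jobs.size() << "\n";
}

int main() {
    // a) normal Round Robin: one quantum per turn for every job
    simulate({ {'A', 20, 1}, {'B', 30, 1}, {'C', 40, 1} });
    // b) modified: priority 1 jobs get a double quantum, priority 2 a single one
    simulate({ {'A', 20, 2}, {'B', 30, 2}, {'C', 40, 1} });
}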

How to calculate average turnaround time - Round Robin and FIFO scheduling?

Five processes begin their execution at (0, 0, 2, 3, 3) seconds and execute for (2, 2, 1, 2, 2) seconds. How do I calculate the average turnaround time if:
a) We use Round Robin (quantum 1 sec.)
b) We use FIFO scheduling?
I am not sure how to solve this; could you guys help me out?
Here is the link to the .png table:
table link
I suppose that your exercise is about scheduling tasks on a single processor. My understanding is therefore the following:
With FIFO, each task is scheduled in order of arrival and is executed until it is completed.
With RR, each scheduled task is executed for a quantum of time only, sharing the processor between all active processes.
In this case you obtain such a scheduling table:
The turnaround is the time between when a job is submitted and when it ends. In the first case (FIFO) I find 19 in total, thus 3.8 on average. In the second case I find 25 in total and 5 on average.
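A tiny sketch of that single-processor FIFO computation, which reproduces the 19 total / 3.8 average:

#include <iostream>
#include <vector>

// Single-processor FIFO: run each job to completion in arrival order and
// accumulate turnaround = completion time - arrival time.
int main() {
    std::vector<int> arrival = {0, 0, 2, 3, 3};
    std::vector<int> burst   = {2, 2, 1, 2, 2};

    int time = 0, total = 0;
    for (size_t i = 0; i < arrival.size(); ++i) {
        if (time < arrival[i]) time = arrival[i];   // CPU idles until the job arrives
        time += burst[i];                           // run the job to completion
        total += time - arrival[i];                 // this job's turnaround
    }
    std::cout << "total turnaround " << total                       // 19
              << ", average " << double(total) / arrival.size()     // 3.8
              << "\n";
}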
On your first try, you have processes running in parallel. This would assume 2 processors. But if 2 processors are available, Round Robin and FIFO give the same result, as there are always enough processors to serve the active processes (thus no waiting time). The total turnaround would be 9 and the average 1.8.

How would I implement a SJF and Round Robin scheduling simulator?

I have a vector of structs, with the structs looking like this:
struct myData{
    int ID;
    int arrivalTime;
    int burstTime;
};
After populating my vector with this data:
1 1 5
2 3 2
3 5 10
where each row is an individual struct's ID, arrivalTime, and burstTime, how would I use "for" or "while" loops to step through my vector's indices and calculate the data in a way that I could print something like this out:
Time 0 Processor is Idle
Time 1 Process 1 is running
Time 3 Process 2 is running
Time 5 Process 1 is running
Time 8 Process 3 is running
I know that SJF and RR scheduling are pretty similar, with the exception that RR has a time quantum so that no process can run longer than an arbitrary time limit before being pre-empted by another process. With that in mind, I think that after I implement SJF, RR will come easily with just a few modifications of the SJF algorithm.
The way I thought about implementing SJF is to sort the vector based on arrival times first, then if two or more vector indices have the same arrival time, sort it based on shortest burstTime first. After that, using
int currentTime = 0;
to keep track of how much time has passed, and
int i = 0;
to use as the index of my vector and to control a "while" loop, how would I implement an algorithm that allows me to print out my desired output shown above? I have a general idea of what needs to happen, but I can't seem to lay it all out in code in a way that works.
I know that whenever the currentTime is less than the next soonest arrivalTime, then that means the processor is idle and currentTime needs to be set to this arrivalTime.
If the vector[i+1].arrivalTime < currentTime + vector[i].burstTime, I need to set the vector[i].burstTime to vector[i+1].arrivalTime - currentTime, then set currentTime to vector[i+1].arrivalTime, then print out currentTime and the process ID
I know that these are simple mathematical operations to implement, but I can't think of how to lay it all out in a way that works the way I want it to. The way it loops around, and how sometimes a few processes have the same arrival times, throws me off. Do I need more variables to keep track of what is going on? Should I shift the arrival times of all the items in the vector every time a process is pre-empted and interrupted by a newer process with a shorter burst time? Any help in C++ code or even pseudo-code would be greatly appreciated. I feel like I am pretty solid on the concept of how SJF works but I'm just having trouble translating what I understand into code.
Thanks!
I know that SJF and RR scheduling are pretty similar, with the exception that RR has a time quantum so that no process can run longer than an arbitrary time limit before being pre-empted by another process.
I don't think that's right. At least that's not how I learned it. RR is closer to FCFS (first come, first served) than it is to SJF.
One way to implement SJF is to insert incoming jobs into the pending list based on their running time. The insert position is at the end if the new job's running time is longer than that of the job at the end; otherwise it is before the first job with a running time longer than the incoming job's. Scheduling is easy: remove the job at the head of the pending list and run that job to completion. A job with a long running time might never run if short jobs keep coming in and getting processed ahead of it.
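A minimal sketch of that sorted-insert idea (the names are placeholders, and it leaves out arrival times, which a full simulator would still have to respect):

#include <list>

struct Job { int id; int runningTime; };

// Pending list kept sorted by running time, shortest first.
std::list<Job> pending;

// Insert before the first job whose running time is longer than the new job's;
// if there is no such job, the new job goes at the end.
void addJob(const Job& job) {
    auto it = pending.begin();
    while (it != pending.end() && it->runningTime <= job.runningTime)
        ++it;
    pending.insert(it, job);
}

// Scheduling: remove the job at the head of the list and run it to completion.
Job nextJob() {
    Job job = pending.front();
    pending.pop_front();
    return job;
}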
One way to implement round robin is to use a FIFO, just like with FCFS. New jobs are added to the end of the queue. Scheduling is once again easy: Remove the job at the head of the queue and process it. So far, this is exactly what FCFS does. The two differ in that RR has a limit on how long a job can be run. If the job takes longer than some time quantum to finish, the job is run for only that amount of time and then it is added back to the end of the queue. Note that with this formulation, RR is equivalent to FCFS if the time quantum is longer than the running time of the longest running job.
I suppose you could insert those incomplete jobs back into the middle of the pending list as SJF does, but that doesn't seem very round-robinish to me, and the scheduling would be a good deal hairier. You couldn't use the "always run the job at the head" scheduling rule, because then all you would have is SJF, just made more complex.
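And a matching sketch of the round-robin FIFO with a time quantum (again just placeholder names, with arrival-time handling and statistics left out):

#include <algorithm>
#include <queue>

struct Job { int id; int remainingTime; };

std::queue<Job> readyQueue;      // plain FIFO, just like FCFS
const int quantum = 3;           // arbitrary time quantum

// Run the job at the head for at most one quantum; if it isn't finished,
// it goes to the back of the queue. Returns the CPU time actually used.
int runNext() {
    if (readyQueue.empty()) return 0;
    Job job = readyQueue.front();
    readyQueue.pop();
    int slice = std::min(quantum, job.remainingTime);
    job.remainingTime -= slice;
    if (job.remainingTime > 0)
        readyQueue.push(job);    // not done yet: back of the line
    return slice;
}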