This is something I found while studying an exercise whose result was already given, but something seems a bit off from what I've studied so far:
In the picture you can see the scheduling table for preemptive scheduling on 2 CPUs with 6 tasks, where each task is given the time it needs to finish and a priority.
And now the main question:
Shouldn't the remaining work time of task #1 be 4 when its work is continued on CPU #2 at t=7? The picture shows 6 of its 6 units still remaining, even though 6 of its original 10 units of work had already been completed starting from t=0, which means only 4 units should be left. Is the exercise wrong, or did I miss something one should know about multiprocessor scheduling?
(At first I thought it was because of the processor switch, but looking at task #4, that doesn't seem to be the case.)
Would really appreciate your opinion. Thanks.
So it turned out to be a mistake in the sheet: task #1 is supposed to have only 4 units of work left by t = 7.
Related
I currently have FreeRTOS working on my MicroZed board. I am using the Xilinx SDK as the software platform, and so far I have been able to create tasks and assign priorities.
I was just curious to know whether it would be possible to assign a fixed time to each of my tasks, so that, for example, after 100 milliseconds my scheduler would switch to the next task. So is it possible to set a fixed execution time for each of my tasks? As far as I checked, I could not find a method for this; if there is any way to implement it using the utilities of FreeRTOS, kindly let me know.
By default FreeRTOS will time slice tasks of equal priority, see http://www.freertos.org/a00110.html#configUSE_TIME_SLICING, but there is nothing to guarantee that each task gets an equal share of the CPU. For example, interrupts use an unknown amount of processing time during each time slice, and higher priority tasks can use part or all of a time slice.
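For reference, here is a minimal sketch of that default behaviour (the task names, priorities and stack sizes are assumptions): with configUSE_TIME_SLICING left at 1 in FreeRTOSConfig.h, two tasks created at the same priority are switched between on each tick interrupt, but neither is guaranteed an equal amount of useful work per slice.
#include "FreeRTOS.h"
#include "task.h"
/* Two identical worker tasks at the same priority; the kernel time-slices
   between them on every tick when configUSE_TIME_SLICING is 1 (the default). */
static void vWorkerTask(void *pvParameters) {
    (void) pvParameters;
    for (;;) {
        /* Do a chunk of work; the scheduler may switch tasks at any tick,
           and interrupts or higher priority tasks can eat into the slice. */
    }
}
int main(void) {
    xTaskCreate(vWorkerTask, "Worker1", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(vWorkerTask, "Worker2", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    vTaskStartScheduler();  /* should never return */
    for (;;);
}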
Question for you though - why would you want the behaviour you requested? Maybe if you said what you were trying to achieve, rather than ask if a feature existed, people would be able to make helpful suggestions.
I have built my first application using glibmm. I'm using a lot of threads as it does heavy processing. I have tried to follow the guidelines concerning multithreading, i.e. not doing any GUI updates from threads other than the one where g_main_loop is running.
I do a lot of graphics rendering in worker threads, but I always only update a PixBuf, which is later drawn by the widget's on_draw() from the main loop.
All was fine as long as the data I render was read from files. When I started streaming data from a server and rendering it at regular intervals, the problems started.
Every now and then, especially when executing multiple instances of my application simultaneously, I see that the main thread takes 100% CPU time. Running strace on the process shows that g_main_loop has ended up in an endless loop calling poll:
poll([{fd=3, events=POLLIN}, {fd=4, events=POLLIN}, {fd=10, events=POLLIN}, {fd=8, events=POLLIN}], 4, 100) = 1 ([{fd=10, revents=POLLIN}])
In /proc I get this for file descriptor 10: 10 -> socket:[1132750]
The poll always returns immediately because file descriptor 10 has something to offer. This goes on forever, so I assume the file descriptor is never read. The odd thing is that running 5 instances will almost always lead to all 5 ending up in the infinite poll loop after just a couple of minutes, while running only one instance usually works for more than 30 minutes.
Why is this happening and is there any way to debug this?
My mistake was that I called queue_draw() from one of my worker threads. Given that the function is called "queue", I assumed it would queue a redraw to be executed later by the g_main_loop. As it turned out, this was exactly what broke the g_main_loop. I wish the gtkmm reference manual had a little more detail about these multithreading restrictions.
My solution to the problem was adding a Glib::Dispatcher queueRedraw to my widget and connecting it to the queue_draw() function:
queueRedraw.connect(sigc::mem_fun(*this, &MyWidgetClass::queue_draw));
Calling queueRedraw() signals the main thread to call the queue_draw() function.
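In context, the widget ends up looking roughly like this (just a sketch; everything except Glib::Dispatcher, connect(), emit() and queue_draw() is an assumed name):
#include <gtkmm.h>
#include <thread>
class MyWidgetClass : public Gtk::DrawingArea {
public:
    MyWidgetClass() {
        // Connect in the GUI thread, before any worker emits the dispatcher.
        queueRedraw.connect(sigc::mem_fun(*this, &MyWidgetClass::queue_draw));
    }
    void start_worker() {
        worker = std::thread([this] {
            // ... render into the widget's PixBuf here ...
            // Do NOT call queue_draw() directly from this thread; emitting the
            // dispatcher runs the connected slot in the main loop's thread.
            queueRedraw.emit();
        });
    }
    ~MyWidgetClass() {
        if (worker.joinable())
            worker.join();
    }
private:
    Glib::Dispatcher queueRedraw;
    std::thread worker;
};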
I don't know if this is the best approach, but it solves the problem.
I wrote an MPI Fortran program that I need to run multiple times (for brevity, let's call this program P1). The minimum number of cores I can use to run a program is 512. The problem is that P1 scales best with 128 cores.
What I want to do is create another program (P2) on top of P1 that calls P1 4 times simultaneously, each call running on 128 cores.
Basically, I need to run 4 instances of the call simultaneously, each with a number of processes equal to the total number of processors divided by 4.
Do you think this is possible? My problem is that I don't know where to look to do this.
I am currently looking at MPI groups and communicators; am I on the right path to reach my goal?
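If I understand it correctly, the communicator approach would look roughly like this (a sketch; run_p1() is a hypothetical entry point standing in for P1's solver, which would have to communicate on the sub-communicator instead of MPI_COMM_WORLD):
#include <mpi.h>
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);      /* e.g. 512 */
    /* Ranks 0-127 get colour 0, 128-255 colour 1, and so on:
       four disjoint sub-communicators of 128 ranks each. */
    int colour = world_rank / (world_size / 4);
    MPI_Comm sub_comm;
    MPI_Comm_split(MPI_COMM_WORLD, colour, world_rank, &sub_comm);
    /* run_p1(sub_comm);   hypothetical: P1 does all of its communication
                           on sub_comm instead of MPI_COMM_WORLD */
    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}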
EDIT :
The system scheduler is LoadLeveler. When I submit a job I need to specify how many nodes I need. There are 16 cores per node, and the minimum number of nodes I can use is 32. In the batch script we also specify -np NBCORES, but if we do so, e.g. -np 128, the consumed (accounted) time is as if we were using all 512 cores (32 nodes), even though the job ran on only 128 cores.
I was able to do it thanks to your answers.
As I mentioned later (sorry for that), the scheduler is LoadLeveler:
If you have access to the subblock module, follow this, as Hristo Iliev mentioned: http://www.hpc.cineca.it/content/batch-scheduler-loadleveler-0#sub-block
If you don't, you can do a multistep job with no dependencies between the steps, so they will be executed simultaneously. It is a classic multistep job; you just have to remove any #dependency flags (in the case of LoadLeveler).
I have 2 projects. One is built with C++ Builder (no MFC), and the other one is VC++ MFC 11.
When I create a thread and run a loop -- let's say this loop adds one to a progress bar's position, from 1 to 100, using Sleep(10) -- it works, of course, in both C++ Builder and VC++ MFC.
Now, Sleep(10) should wait 10 milliseconds. OK. But the odd thing is that it only behaves that way if I have a media player, Winamp, or anything else that produces sound open. If I close all media players and other sound programs, my threads become slower than 10 milliseconds per step.
It takes about 50-100 ms per step. If I start playing any music, it works normally, as I expected.
I have no idea why this is happening. At first I thought I had made a mistake in the MFC app, but then why does the C++ Builder one also slow down?
And yes, I am positive it is sound related, because I even reformatted my Windows installation and disabled everything, and that is how I eventually narrowed it down to the sound issue.
Does my code need something?
Update:
Now, following the code, I found that I used Sleep(1) in places where I want to wait 1 millisecond. The reason is that I move an object from left to right; if I remove this sleep, the movement is not visible because it is too fast, so I have to use Sleep(1). With Sleep(1), if audio is on then it works; if audio is off then it is very slow.
for (int i = 0; i <= 500; i++) {
    // move the static text one pixel to the right per iteration
    theDialog->staticText->SetWindowsPosition(NULL, i, 20, 0, 0);
    Sleep(1); // pace the animation at roughly 1 ms per step
}
So, suggestions regarding this are really appreciated. What should I do?
I know this is the incorrect way; I should use something else that is proper and valid. But what exactly? Which function or class would help me move static text from one position to another smoothly?
Also, changing the thread priority has not helped.
Update 2:
Update 1 is another question :)
Sleep(10) will (as we know) wait for approximately 10 milliseconds. If there is a higher-priority thread that needs to run at that moment, the thread wakeup may be delayed. Multimedia threads probably run at Real-Time or High priority, so when you play sound, your thread's wakeup gets delayed.
Refer to Jeffrey Richter's comment in Programming Applications for Microsoft Windows (4th Ed.), section "Sleeping" in Chapter 7:
The system makes the thread not schedulable for approximately the
number of milliseconds specified. That's right—if you tell the system
you want to sleep for 100 milliseconds, you will sleep approximately
that long but possibly several seconds or minutes more. Remember that
Windows is not a real-time operating system. Your thread will probably
wake up at the right time, but whether it does depends on what else is
going on in the system.
Also, as per MSDN's Multimedia Class Scheduler Service (Windows):
MMCSS ensures that time-sensitive processing receives prioritized access to CPU resources.
As per the above documentation, you can also control the percentage of CPU resources guaranteed to low-priority tasks through a registry key.
Sleep(10) waits for at least 10 milliseconds. You have to write code to check how long you actually waited and if it's more than 10 milliseconds, handle that sanely in your code. Windows is not a real time operating system.
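A rough sketch of that idea (the helper name is mine): time each Sleep() with GetTickCount64() and react to the overshoot instead of assuming exactly 10 ms passed.
#include <windows.h>
#include <iostream>
void pacedLoop() {
    for (int i = 0; i < 100; ++i) {
        ULONGLONG before = GetTickCount64();
        Sleep(10);
        ULONGLONG elapsed = GetTickCount64() - before;   // often more than 10 ms
        if (elapsed > 10)
            std::cout << "slept " << elapsed << " ms instead of 10\n";
        // advance the progress bar by the time that really elapsed,
        // not by a fixed step per iteration
    }
}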
The minimum resolution for Sleep() timing is set system-wide with timeBeginPeriod() and timeEndPeriod(). For example, calling timeBeginPeriod(1) sets the minimum resolution to 1 ms. It may be that the audio programs set the resolution to 1 ms and restore it to something greater than 10 ms when they are done. I had a problem with a program that used Sleep(1): it only worked as intended while the XE2 IDE was running and would otherwise sleep for 12 ms. I solved the problem by calling timeBeginPeriod(1) directly at the beginning of my program.
See: http://msdn.microsoft.com/en-us/library/windows/desktop/dd757624%28v=vs.85%29.aspx
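A minimal sketch of that approach applied to the movement loop from the question (the function name is mine; link against winmm.lib):
#include <windows.h>
#include <mmsystem.h>          // timeBeginPeriod / timeEndPeriod
#pragma comment(lib, "winmm.lib")
void animateStaticText() {
    timeBeginPeriod(1);        // request 1 ms timer resolution system-wide
    for (int i = 0; i <= 500; i++) {
        // move the control here, e.g. the SetWindowsPosition call from the question
        Sleep(1);              // now wakes up close to every 1 ms,
                               // whether or not an audio player is running
    }
    timeEndPeriod(1);          // always pair with the matching timeBeginPeriod
}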
I have a vector of structs, with the structs looking like this:
struct myData{
int ID;
int arrivalTime;
int burstTime;
};
After populating my vector with this data:
1 1 5
2 3 2
3 5 10
where each row is an individual struct's ID, arrivalTime, and burstTime, how would I use "for" or "while" loops to step through my vector's indices and process the data so that I could print out something like this:
Time 0 Processor is Idle
Time 1 Process 1 is running
Time 3 Process 2 is running
Time 5 Process 1 is running
Time 8 Process 3 is running
I know that SJF and RR scheduling are pretty similar, with the exception that RR has a time quantum so that no process can run longer than an arbitrary time limit before being pre-empted by another process. With that in mind, I think that after I implement SJF, RR will come easily with just a few modifications of the SJF algorithm.
The way I thought about implementing SJF is to sort the vector based on arrival times first, then, if two or more vector entries have the same arrival time, sort those based on shortest burstTime first. After that, using
int currentTime = 0;
to keep track of how much time has passed, and
int i = 0;
to use as the index of my vector and to control a "while" loop, how would I implement an algorithm that allows me to print out my desired output shown above? I have a general idea of what needs to happen, but I can't seem to lay it all out in code in a way that works.
I know that whenever currentTime is less than the next soonest arrivalTime, the processor is idle and currentTime needs to be set to that arrivalTime.
If the vector[i+1].arrivalTime < currentTime + vector[i].burstTime, I need to set the vector[i].burstTime to vector[i+1].arrivalTime - currentTime, then set currentTime to vector[i+1].arrivalTime, then print out currentTime and the process ID
I know that these are simple mathematical operations to implement, but I can't think of how to lay it all out in a way that works the way I want it to. The way it loops around, and how sometimes a few processes have the same arrival times, throws me off. Do I need more variables to keep track of what is going on? Should I shift the arrival times of all the items in the vector every time a process is pre-empted and interrupted by a newer process with a shorter burst time? Any help in C++ code or even pseudo-code would be greatly appreciated. I feel like I am pretty solid on the concept of how SJF works, but I'm just having trouble translating what I understand into code.
Thanks!
I know that SJF and RR scheduling are pretty similar, with the exception that RR has a time quantum so that no process can run longer than an arbitrary time limit before being pre-empted by another process.
I don't think that's right. At least that's not how I learned it. RR is closer to FCFS (first come, first served) than it is to SJF.
One way to implement SJF is to insert incoming jobs into the pending list based on their running time. The insert position is at the end if the new job's running time is longer than that of the job at the end; otherwise it's before the first job with a running time longer than the incoming job's. Scheduling is easy: remove the job at the head of the pending list and run that job to completion. A job with a long running time might never be run if short jobs keep coming in and getting processed ahead of it.
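A rough sketch of that sorted-insert idea, reusing the myData struct from the question (the container choice and function names are mine):
#include <list>
#include <iostream>
struct myData {
    int ID;
    int arrivalTime;
    int burstTime;
};
// Insert before the first pending job whose burst time is longer.
void insertSJF(std::list<myData> &pending, const myData &job) {
    auto it = pending.begin();
    while (it != pending.end() && it->burstTime <= job.burstTime)
        ++it;
    pending.insert(it, job);
}
// Scheduling: take the job at the head and run it to completion.
void runNext(std::list<myData> &pending, int &currentTime) {
    if (pending.empty())
        return;
    myData job = pending.front();
    pending.pop_front();
    std::cout << "Time " << currentTime << " Process " << job.ID << " is running\n";
    currentTime += job.burstTime;
}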
One way to implement round robin is to use a FIFO, just like with FCFS. New jobs are added to the end of the queue. Scheduling is once again easy: Remove the job at the head of the queue and process it. So far, this is exactly what FCFS does. The two differ in that RR has a limit on how long a job can be run. If the job takes longer than some time quantum to finish, the job is run for only that amount of time and then it is added back to the end of the queue. Note that with this formulation, RR is equivalent to FCFS if the time quantum is longer than the running time of the longest running job.
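And a rough sketch of that FIFO-plus-quantum formulation (the simplified job record and names are mine):
#include <queue>
#include <algorithm>
#include <iostream>
struct Job {              // simplified job record for this sketch
    int ID;
    int remainingTime;
};
void roundRobin(std::queue<Job> readyQueue, int quantum) {
    int currentTime = 0;
    while (!readyQueue.empty()) {
        Job job = readyQueue.front();
        readyQueue.pop();
        std::cout << "Time " << currentTime << " Process " << job.ID << " is running\n";
        int slice = std::min(quantum, job.remainingTime);
        currentTime += slice;
        job.remainingTime -= slice;
        if (job.remainingTime > 0)
            readyQueue.push(job);   // unfinished: back to the end of the queue
    }
}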
I suppose you could insert those incomplete jobs back into the middle of the process list as SJF does, but that doesn't seem very round-robin-ish to me, and the scheduling would be a good deal hairier. You couldn't use the "always run the job at the head" scheduling rule, because then all you would have is SJF, just made more complex.