I just realized that, after learning a lot about various scheduling algorithms, how a context switch is done, and so on, one thing still isn't clear to me.
Take a uniprocessor system:
If process A is running and its time slice should end in 5 seconds, how does the scheduler or the operating system know to end it after 5 seconds? No part of the operating system can run while A is running. The scheduler is supposed to be monitoring it, but how can it do that if it cannot run? Does the operating system's scheduler write an ISR and have an interrupt generated every 5 seconds? Is this possible? Even if it is, it doesn't seem like a good way to implement it.
How exactly does a scheduler do this?
Does the operating system's scheduler write an ISR and have an interrupt generated every 5 seconds? Is this possible? Even if it is, it doesn't seem like a good way to implement it.
Yes, this is exactly how it works on a preemptive multitasking system (although on desktop systems the interval is usually more like 10 milliseconds).
Yes, there are other schemes, such as cooperative multitasking, where each process decides for itself when to yield.
Yes, normally there is some kind of timer interrupt that fires. The kernel can then run for a bit and switch process context if it needs to - normally that interrupt would fire an awful lot more often than just once every 5 seconds though. Why doesn't it seem like a good way to implement it?
Related
I want to refresh and have control over time interval changes. Most people just have an infinite loop constantly polling the time from time.h and wasting cycles. Is there a way to get clock updates without disturbing the system too much? I am using C/C++ and really want to learn how to do this manually using only Linux libraries. Most programs need some notion of time.
I want to be notified of system clock updates. I am trying to write a scientific app that responds in real time. Sleep() and things like that only let me specify a time delay starting from the execution of that statement. localtime() and the string-returning time functions from the C header only give me the specific time at which they were executed. By the time I use that value it is too late; too many nanoseconds have already elapsed.
Read the time(7) man page to understand how to use the system calls gettimeofday(2), setitimer(2), clock_gettime(2), timer_create(2), etc., and the library functions related to time (strftime, localtime, ...).
If you want to code an application receiving timer events, learn about timers and e.g. the SIGALRM signal. Read signal(7) first.
But you really should read e.g. Advanced Unix Programming and Advanced Linux Programming and understand what syscalls are.
You might want to use poll(2) for polling or for waiting.
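For instance, here is a minimal sketch (not a definitive implementation) of a periodic timer built from timer_create(2) and SIGALRM, per the suggestions above. The 100 ms interval is just an example value, and on older glibc you may need to link with -lrt:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks = 0;

static void on_alarm(int sig)
{
    (void)sig;
    ticks++;                     /* async-signal-safe: just bump a counter */
}

int main(void)
{
    signal(SIGALRM, on_alarm);

    struct sigevent sev;
    memset(&sev, 0, sizeof sev);
    sev.sigev_notify = SIGEV_SIGNAL;   /* deliver a signal on each expiry */
    sev.sigev_signo = SIGALRM;

    timer_t timer;
    timer_create(CLOCK_MONOTONIC, &sev, &timer);

    struct itimerspec its;
    memset(&its, 0, sizeof its);
    its.it_value.tv_nsec = 100 * 1000 * 1000;    /* first expiry: 100 ms */
    its.it_interval.tv_nsec = 100 * 1000 * 1000; /* then every 100 ms */
    timer_settime(timer, 0, &its, NULL);

    for (;;) {
        pause();                 /* block until the next SIGALRM arrives */
        printf("tick %d\n", (int)ticks);
    }
}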
The most basic approach that's also portable and compatible with most other tasks is select. It sleeps until a certain amount of time elapses or a file becomes ready for I/O, which gives you a way to update the task list before the next task occurs. If you don't need interruption capability, you can just use sleep (or usleep).
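A minimal sketch of that select-based approach, assuming a 100 ms timeout and stdin as the watched file (both purely illustrative choices):

#include <stdio.h>
#include <sys/select.h>

int main(void)
{
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(0, &rfds);                  /* watch stdin (fd 0) */

        struct timeval tv = { 0, 100000 }; /* 100 ms; reset each loop,
                                              since select() modifies it */
        int ready = select(1, &rfds, NULL, NULL, &tv);

        if (ready > 0)
            printf("input ready before the timeout\n"); /* a real program
                                                           would read it */
        else if (ready == 0)
            printf("timeout: run the next periodic task here\n");
        else
            break;                         /* error (possibly EINTR) */
    }
    return 0;
}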
I have a thread running on a Linux system which I need to execute at as accurate an interval as possible, e.g. once every 1 ms.
Currently this is done by creating a timer with
timerfd_create(CLOCK_MONOTONIC, 0)
and then passing the desired sleep time in a struct with
timerfd_settime(fd, 0, &itval, NULL);
A blocking read call is performed on this timer which halts thread execution and reports lost wakeup calls.
The problem is that at higher frequencies the system starts losing deadlines, even though CPU usage is below 10%. I think this is because the scheduler is not waking the thread often enough to check the blocking call. Is there a command I can use to tell the scheduler to wake the thread at certain intervals, as far as that is possible?
Busy-waiting is a bad option since the system handles many other tasks.
Thank you.
You need to get RT Linux*, and then increase the RT priority of the process that you want to wake up at regular intervals.
Other than that, I do not see problems in your code, and if your process is not getting blocked, it should work fine.
(*) RT Linux - an OS with some real-time scheduling patches applied.
One way to reduce scheduler latency is to run your process under a realtime scheduling policy such as SCHED_FIFO. See sched_setscheduler.
This will generally improve latency a lot, but there is still little guarantee. To further reduce latency spikes, you'll need to move to the realtime branch of Linux, or to a realtime OS such as VxWorks, RTEMS or QNX.
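For reference, a minimal sketch of switching the calling process to SCHED_FIFO; the priority value 50 is arbitrary, and this usually requires root or CAP_SYS_NICE:

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {  /* 0 = this process */
        perror("sched_setscheduler");  /* likely EPERM without privileges */
        return 1;
    }
    printf("now running under SCHED_FIFO\n");
    /* ... timing-sensitive work here ... */
    return 0;
}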
You won't be able to do what you want unless you run it on an actual "Real Time OS".
If this is Linux on an x86 system only, I would choose the HPET timer. I think all modern PCs have this hardware timer built in, and it is very, very accurate. It allows you to define a callback that will be called every millisecond, and in this callback you can do your calculations (if they are simple) or just trigger another thread's work using some synchronization object (a condition variable, for example).
Here is an example of how to use this timer: http://blog.fpmurphy.com/2009/07/linux-hpet-support.html
Along with other advice such as setting the scheduling class to SCHED_FIFO, you will need to use a Linux kernel compiled with a high enough tick rate that it can meet your deadline.
For example, a kernel compiled with CONFIG_HZ of 100 or 250 Hz (timer interrupts per second) can never respond to timer events faster than that.
You must also set your timer to be just a little bit faster than you actually need, because timers are allowed to expire beyond their requested time but never early; this will give you better results. If you need 1 ms, then I'd recommend asking for 999 us instead.
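Putting that together with the timerfd setup from the question, a minimal sketch (assuming a 999 us period and omitting error handling) might look like:

#include <stdint.h>
#include <stdio.h>
#include <sys/timerfd.h>
#include <unistd.h>

int main(void)
{
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);

    struct itimerspec itval = { 0 };
    itval.it_value.tv_nsec = 999000;     /* first expiry after 999 us */
    itval.it_interval.tv_nsec = 999000;  /* then every 999 us */
    timerfd_settime(fd, 0, &itval, NULL);

    for (;;) {
        uint64_t expirations;            /* how many intervals elapsed */
        if (read(fd, &expirations, sizeof expirations) != sizeof expirations)
            break;
        if (expirations > 1)             /* read() reports lost wakeups */
            fprintf(stderr, "missed %llu deadlines\n",
                    (unsigned long long)(expirations - 1));
        /* ... do the periodic work here ... */
    }
    return 0;
}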
I am working on a threaded application on Linux in C++ which attempts to be real time, doing an action on a heartbeat, or as close to it as possible.
In practice, I find the OS is swapping out my thread and causing delays of up to a tenth of a second while it is switched out, causing the heartbeat to be irregular.
Is there a way my thread can hint to the OS that now is a good time to context switch it out? I could make this call right after doing a heartbeat, and thus minimize the delay due to an ill timed context switch.
It is hard to say what the main problem is in your case, but it is most certainly not something that can be corrected with a call to sched_yield() or pthread_yield(). The only well-defined use for yielding, in Linux, is to allow a different ready thread to preempt the currently CPU-bound running thread at the same priority on the same CPU under SCHED_FIFO scheduling policy. Which is a poor design decision in almost all cases.
If you're serious about your goal of "attempting to be real-time" in Linux, then first of all, you should be using a real-time sched_setscheduler setting (SCHED_FIFO or SCHED_RR, FIFO preferred).
Second, get the full preemption patch for Linux (from kernel.org if your distro does not supply one). It will also give you the ability to reschedule device driver threads and to run your thread at a higher priority than, say, the hard disk or ethernet driver threads.
Third, see RTWiki and other resources for more hints on how to design and set up a real-time application.
This should be enough to get you under 10 microseconds response time, regardless of system load on any decent desktop system. I have an embedded system where I only squeeze out 60 us response idle and 150 us under heavy disk/system load, but it's still orders of magnitude faster than what you're describing.
You can tell the currently executing thread to pause execution with various calls such as yield.
Just telling the thread to pause is non-deterministic: 999 times out of 1000 it might provide good intervals, and one time it won't.
You will probably want to look at real-time scheduling for consistent results. This site http://www2.net.in.tum.de/~gregor/docs/pthread-scheduling.html seems to be a good starting spot for researching thread scheduling.
Use sched_yield.
And for threads there is pthread_yield: http://www.kernel.org/doc/man-pages/online/pages/man3/pthread_yield.3.html
I'm a bit confused by the question. If your program is just waiting on a periodic heartbeat and then doing some work, then the OS should know to schedule other things when you go back to waiting on the heartbeat.
You aren't spinning on a flag to get your "heartbeat" are you?
You are using a timer function such as setitimer(), right? RIGHT???
If not, then you are doing it all wrong.
You may need to specify a timer interval that is just a little shorter than what you really need. If you are using a real-time scheduler priority and a timer, your process will almost always be woken up on time.
I would say always on time, but Linux isn't a perfect real-time OS yet.
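For completeness, a minimal sketch of the setitimer() approach named above, with a purely illustrative 10 ms heartbeat:

#include <signal.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t beats = 0;

static void on_beat(int sig)
{
    (void)sig;
    beats++;                         /* async-signal-safe counter bump */
}

int main(void)
{
    signal(SIGALRM, on_beat);

    struct itimerval itv;
    memset(&itv, 0, sizeof itv);
    itv.it_value.tv_usec = 10000;    /* first beat after 10 ms */
    itv.it_interval.tv_usec = 10000; /* then every 10 ms */
    setitimer(ITIMER_REAL, &itv, NULL);

    for (;;) {
        pause();                     /* block until the next heartbeat */
        /* ... do the per-heartbeat work here ... */
    }
}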
I'm not too sure about Linux, but on Windows it's been explained that you can't ask the system not to interrupt you, for several reasons (first paragraph mostly). Off the top of my head, one of the reasons is hardware interrupts, which can occur at any time and over which you have no control.
EDIT: Someone just suggested the use of sched_yield, then deleted their answer. It will relinquish time for your whole process, though. You can also use sched_setscheduler to hint the kernel about what you need.
I've written a C++ library that does some seriously heavy CPU work (all of it math and calculations) and if left to its own devices, will easily consume 100% of all available CPU resources (it's also multithreaded to the number of available logical cores on the machine).
As such, I have a callback inside the main calculation loop that software using the library is supposed to call:
while (true)
{
    // do math here
    callback(percent_complete);
}
In the callback, the client calls Sleep(x) to slow down the thread.
Originally, the client-side code was a fixed Sleep(100) call, but this led to bad, unreliable performance because some machines finish the math faster than others, while the sleep is the same on all machines. So now the client checks the system time, and if more than 1 second has passed (which equals several iterations), it sleeps for half a second.
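For concreteness, the current client-side throttle might look roughly like this (a sketch assuming a Win32 client; the function and variable names are hypothetical):

#include <windows.h>

static ULONGLONG g_last_sleep_ms;   /* wall-clock time of the last sleep;
                                       0 simply triggers an early first nap */

void my_progress_callback(int percent_complete)
{
    (void)percent_complete;

    ULONGLONG now = GetTickCount64();       /* ms since boot */
    if (now - g_last_sleep_ms > 1000) {     /* >1 s of work accumulated */
        Sleep(500);                         /* yield for half a second */
        g_last_sleep_ms = GetTickCount64();
    }
}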
Is this an acceptable way of slowing down a thread? Should I be using a semaphore/mutex instead of Sleep() in order to maximize performance? Is sleeping x milliseconds for each 1 second of processing work fine or is there something wrong that I'm not noticing?
The reason I ask is that the machine still gets heavily bogged down even though taskman shows the process taking up ~10% of the CPU. I've already explored hard disk and memory contention to no avail, so now I'm wondering if the way I'm slowing down the thread is causing this problem.
Thanks!
Why don't you use a lower priority for the calculation threads? That will ensure other threads are scheduled while allowing your calculation threads to run as fast as possible if no other threads need to run.
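A minimal Win32 sketch of that idea (the thread function name and the stand-in workload are hypothetical):

#include <windows.h>
#include <stdio.h>

DWORD WINAPI calc_thread(LPVOID arg)
{
    (void)arg;

    /* Drop this thread below normal priority so interactive threads win. */
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);

    volatile double sum = 0.0;
    for (long i = 1; i < 100000000L; i++)
        sum += 1.0 / i;                  /* stand-in for the heavy math */
    return 0;
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, calc_thread, NULL, 0, NULL);
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    printf("done\n");
    return 0;
}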
What is wrong with the CPU at 100%? That's what you should strive for, not try to avoid. These math calculations are important, no? Unless you're trying to avoid hogging some other resource not explicitly managed by the OS (a mutex, the disk, etc) and used by the main thread, generally trying to slow your thread down is a bad idea. What about on multicore systems (which almost all systems will be, going forward)? You'd be slowing down a thread for absolutely no reason.
The OS has a concept of a thread quantum. It will take care of ensuring that no important thread on your system is starved. And, as I mentioned, on multicore systems spiking one thread on one CPU does not hurt performance for other threads on other cores at all.
I also see in another comment that this thread is doing a lot of disk I/O - these operations will already cause your thread to yield while it's waiting for the results, so the sleeps will do nothing.
In general, if you're calling Sleep(x), there is something wrong/lazy with your design, and if x == 0, you're opening yourself up to livelocks (the thread calling Sleep(0) can actually be rescheduled immediately, making it a no-op).
Sleep should be fine for throttling an app, which from your comments is what you're after. Perhaps you just need to be more precise about how long you sleep for.
The only software in which I use a feature like this is the BOINC client. I don't know what mechanism it uses, but it's open-source and multi-platform, so help yourself.
It has a configuration option ("limit CPU use to X%"). The way I'd expect to implement that is to use platform-dependent APIs like clock() or GetSystemTimes(), and compare processor time against elapsed wall clock time. Do a bit of real work, check whether you're over or under par, and if you're over par sleep for a while to get back under.
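A minimal sketch of that over/under-par scheme, assuming Win32 (the 50% target and 100 ms nap are illustrative, and note GetProcessTimes counts all threads of the process):

#include <windows.h>

static ULONGLONG process_cpu_ms(void)
{
    FILETIME created, exited, kernel, user;
    GetProcessTimes(GetCurrentProcess(), &created, &exited, &kernel, &user);

    ULARGE_INTEGER k, u;
    k.LowPart = kernel.dwLowDateTime;  k.HighPart = kernel.dwHighDateTime;
    u.LowPart = user.dwLowDateTime;    u.HighPart = user.dwHighDateTime;
    return (k.QuadPart + u.QuadPart) / 10000;   /* 100 ns units -> ms */
}

int main(void)
{
    const double target = 0.5;                  /* cap at 50% CPU */
    ULONGLONG wall_start = GetTickCount64();

    for (;;) {
        /* ... do a small slice of real work here ... */

        ULONGLONG wall = GetTickCount64() - wall_start;
        if (wall > 0 && (double)process_cpu_ms() / wall > target)
            Sleep(100);                         /* over par: back off */
    }
}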
The BOINC client plays nicely with priorities, and doesn't cause any performance issues for other apps even at 100% max CPU. The reason I use the throttle is that otherwise, the client runs the CPU flat-out all the time, and drives up the fan speed and noise. So I run it at the level where the fan stays quiet. With better cooling maybe I wouldn't need it :-)
Another, less elaborate, method could be to time one iteration and let the thread sleep for (x * t) milliseconds before the next iteration, where t is the millisecond time for one iteration and x is the chosen sleep-time fraction (between 0 and 1).
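A tiny sketch of that proportional scheme, again assuming Win32 and an illustrative x of 0.25:

#include <windows.h>

int main(void)
{
    const double x = 0.25;                       /* sleep 25% of work time */

    for (;;) {
        ULONGLONG start = GetTickCount64();
        /* ... one iteration of work here ... */
        ULONGLONG t = GetTickCount64() - start;  /* iteration time t in ms */

        Sleep((DWORD)(x * t));                   /* throttle proportionally */
    }
}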
Have a look at cpulimit. It sends SIGSTOP and SIGCONT as required to keep a process below a given CPU usage percentage.
Even still, WTF at "crazy complaints and outlandish reviews about your software killing PC performance". I'd be more likely to complain that your software was slow and not making the best use of my hardware, but I'm not your customer.
Edit: on Windows, SuspendThread() and ResumeThread() can probably produce similar behaviour.
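A minimal sketch of the SIGSTOP/SIGCONT idea: a supervisor process enforcing a rough 50% duty cycle on a target PID (the period is illustrative and error handling is omitted):

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    pid_t target = (pid_t)atoi(argv[1]);   /* pid of process to throttle */

    for (;;) {
        kill(target, SIGCONT);             /* let it run... */
        usleep(50 * 1000);                 /* ...for 50 ms */
        kill(target, SIGSTOP);             /* then freeze it... */
        usleep(50 * 1000);                 /* ...for 50 ms (~50% duty) */
    }
}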
I've been working on Win32 in C and C++ for a while. I code in Visual Studio. Most of the time I see the System Idle Process using more CPU than anything else. Is there a way to allocate more processor cycles to my program to run it faster? I understand there might be limitations from I/O; in those cases this question doesn't make any sense.
OR
Did I misunderstand the Task Manager numbers? I'm confused, please help me out.
And I want to do something in the program itself; btw, I will be happy if answers are specific to Windows.
Thanks in advance
~calvin
If your program is the only program that has something to do (i.e. it is not waiting for I/O), its thread will always be assigned to a processor core.
However, if you have a multi-core processor and a single-threaded program, the CPU usage of your process displayed in the task manager will always be limited to 100/Ncores.
For example, if you have a quad-core machine, your process will be at 25% (using one core), and the idle process at around 75%. You can only gain additional CPU power by dividing your tasks into chunks that can be worked on by separate threads, which will then run on the idle cores.
The idle process only "runs" when no other process needs to. If you want to use more CPU cycles, then use them.
If your program is idling, it isn't doing anything, i.e. there is nothing that could be done any faster. So the CPU is probably not the bottleneck in your case.
Are you maybe waiting for data coming from the disk or network?
In case your processor has multiple cores and your program uses only one core to its full extent, making your program multi-threaded could work.
In a multitasking/multithreading OS, the processors' time is split among threads.
If you want a specific thread to get a bigger time chunk, you can set its priority with the SetThreadPriority function, though it is not wise to do so.
Only special software should mess with those settings.
It's common for Windows applications to have a low CPU usage percentage (which we see in the task manager) because most of the time they just wait for messages.
Use threads to:
abstract away all the I/O waits.
assign work to all cores.
also, remove all sleep-wait states from the main thread.
Defer all I/O to a thread, so that wait states are confined within it. Keep the actual computations in the foreground thread, and use synchronization mechanisms that make the I/O slave thread wait for your main thread when communicating.
If your CPU is multi-core and your problem is parallelizable, create as many threads as you have cores, research the "set affinity" functions to assign them to the cores, and still keep a separate thread for all I/O.
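For example, a minimal Win32 sketch of pinning one worker per core with SetThreadAffinityMask (the worker's actual computation is omitted):

#include <windows.h>

DWORD WINAPI worker(LPVOID arg)
{
    int core = (int)(INT_PTR)arg;

    /* Restrict this thread to a single core via a one-bit affinity mask. */
    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)1 << core);

    /* ... compute this core's chunk of the problem here ... */
    return 0;
}

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);                     /* how many cores are there? */

    DWORD n = si.dwNumberOfProcessors;
    if (n > MAXIMUM_WAIT_OBJECTS)           /* WaitForMultipleObjects cap */
        n = MAXIMUM_WAIT_OBJECTS;

    HANDLE threads[MAXIMUM_WAIT_OBJECTS];
    for (DWORD i = 0; i < n; i++)
        threads[i] = CreateThread(NULL, 0, worker,
                                  (LPVOID)(INT_PTR)i, 0, NULL);

    WaitForMultipleObjects(n, threads, TRUE, INFINITE);
    return 0;
}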
Also pay attention not to wait in your main thread - usleep(1) doesn't send you into the background for 1 microsecond, but for "no less than..." that amount, and in practice that may mean anything between 1 ms and 100 ms, but hardly ever less, and never anything close to a microsecond.