Just curious: how does the Sleep() function (declared in windows.h) actually work? Not necessarily that particular implementation; any implementation will do. What I mean is: how is it implemented? How can it make code "stop" for a specific time? I'm also curious about how cin >> and similar calls actually work. What do they do exactly?
The only way I know to "block" something from continuing to run is with a while loop, but that takes a huge amount of processing power compared to what happens when you read from stdin (just compare a while (true) to a blocking read from stdin), so I'm guessing that isn't what they do.
The OS uses a mechanism called a scheduler to keep all of the threads or processes it's managing behaving nicely together.
Several times per second, the computer's hardware clock interrupts the CPU, which activates the OS's scheduler. The scheduler then looks at all the processes that want to run and decides which one gets the next time slice.
The criteria it uses depend on each process's state and how much CPU time it has had recently. If the current process has been using the CPU heavily, preventing other processes from making progress, the scheduler makes it wait and swaps in another process so that it can get some work done.
More often, though, most processes are going to be in a wait state. For instance, if a process is waiting for input from the console, the OS can look at the process's information and see which I/O ports it's waiting on. It can check those ports to see if they have any data for the process to work on. If they do, it can start the process up again; if there is no data, that process gets skipped over for the current time slice.
As for sleep(): any process can notify the OS that it would like to wait for a while. The scheduler is then activated even before a hardware interrupt (which is also what happens when a process tries to do a blocking read from a stream that has no data ready to be read), and the OS makes a note of what the process is waiting for. For a sleep, the process is waiting for an alarm to go off, or it may simply yield again each time it's restarted until the timer is up.
Since the OS only resumes processes after something causes it to preempt a running process, such as the process yielding or the hardware timer interrupt I mentioned, sleep() is not very accurate. How accurate depends on the OS and hardware, but it's usually on the order of one or more milliseconds.
If more accuracy is needed, or for very short waits, the only option is to use the busy-loop construct you mentioned.
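For illustration, here is a minimal sketch of such a busy wait using C++11's chrono facilities (the function name and the approach are mine, not from any particular OS):

#include <chrono>

// Sketch: spin until the requested duration has elapsed. This burns a full
// CPU core for the whole wait, but it can be accurate to well under a
// millisecond, unlike sleep().
void busy_wait(std::chrono::microseconds duration) {
    const auto deadline = std::chrono::steady_clock::now() + duration;
    while (std::chrono::steady_clock::now() < deadline) {
        // do nothing; just keep re-checking the clock
    }
}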
The operating system schedules how processes run (which processes are eligible to run, in what order, ...).
Sleep() probably issues a system call which tells the kernel “don't let me use the processor for x milliseconds”.
In short, Sleep() tells the OS to ignore the process/thread for a while.
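On POSIX systems the analogous call is nanosleep(2). A rough sketch of what a Sleep()-style wrapper boils down to (my_sleep_ms is a made-up name for illustration):

#include <time.h>

// Rough sketch of a Sleep()-style wrapper over the POSIX nanosleep syscall.
void my_sleep_ms(long ms) {
    timespec req;
    req.tv_sec  = ms / 1000;
    req.tv_nsec = (ms % 1000) * 1000000L;
    // The kernel marks the thread not-runnable until the timer fires.
    nanosleep(&req, nullptr);
}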
'cin' uses a ton of overloaded operators. The '>>' operator, which is normally the right bit-shift, is overloaded for pretty much every type of right-hand operand in C++. A separate function is provided for each one; it reads from the console and converts the input into whatever type of variable you have given it. For example:
std::istream& std::istream::operator>>(int& value);
That is roughly its shape: the operator is a member of the stream class and returns the stream itself, so that calls can be chained (cin >> a >> b). This function is called when you run cin >> on an integer variable.
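To make the mechanism concrete, here is a sketch of how you could overload >> for your own type, mirroring what the library does for int and friends (Point is a made-up type for illustration):

#include <iostream>

struct Point { int x, y; };

// Same pattern the standard library uses for built-in types: read from the
// stream, fill in the right-hand operand, and return the stream so calls chain.
std::istream& operator>>(std::istream& in, Point& p) {
    return in >> p.x >> p.y;
}

int main() {
    Point p;
    std::cin >> p;   // now works just like cin >> an int
    std::cout << p.x << ", " << p.y << '\n';
}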
The exact underlying implementation depends on the operating system.
The answer depends on the operating system, but generally speaking, the OS either schedules some other code to run in another thread, or, if it literally has nothing else to do, it makes the CPU wait until a hardware event occurs. The hardware event causes the CPU to jump to some code called an interrupt handler, which can then decide what code to run next.
If you are looking for a more controlled way of blocking a thread/process in a multi-threaded program, have a look at Semaphores, Mutexes, CriticalSections and Events. These are all techniques used to block a process or thread (without loading the CPU via a while construct).
They essentially work off a wait/signal idiom where the blocked thread waits until another thread signals it to start again. These (at least in Windows) can also take timeouts, thus providing functionality similar to Sleep().
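As a sketch, the Windows event version of that idiom looks roughly like this (error handling omitted; the 5000 ms timeout is an arbitrary example):

#include <windows.h>

int main() {
    // Auto-reset event, initially unsignalled.
    HANDLE ev = CreateEvent(nullptr, FALSE, FALSE, nullptr);

    // ... hand ev to another thread, which calls SetEvent(ev) when ready ...

    // Blocks without busy-waiting. The timeout gives Sleep()-like behaviour,
    // except the wait can end early when the event is signalled.
    DWORD result = WaitForSingleObject(ev, 5000);
    if (result == WAIT_TIMEOUT) {
        // woke because time ran out, not because of a signal
    }
    CloseHandle(ev);
}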
At a low level, the system has a routine called the "scheduler" that dispatches the instructions from all the running programs to the CPU(s), which actually run them. System calls like "Sleep" and "usleep" tell the scheduler to skip that thread or process for a fixed amount of time.
As for C++ streams, "cin" hides the actual file handle (stdin and stdout actually are such handles) you're accessing, and its ">>" operator hides the underlying calls that read and write. Since it's an interface, the implementation can be OS-specific, but conceptually it is still doing things like printf and scanf under the hood.
Related
Suppose I have a multi-threaded program in C++11, in which each thread controls the behavior of something displayed to the user.
I want to ensure that for every time period T during which any of the program's threads has run, each thread gets a chance to execute for at least time t, so that the display looks as if all threads are executing simultaneously. The idea is to have a mechanism for round-robin scheduling with time sharing, based on some information stored in each thread, forcing a thread to wait after its time slice is over instead of relying on the operating system's scheduler.
Preferably, I would also like to ensure that each thread is scheduled in real time.
In case there is no way other than relying on the operating system, is there any solution for Linux?
Is it possible to do this? How?
No, that's not possible in a cross-platform way with C++11 threads. How often and for how long a thread runs isn't up to the application; it's up to the operating system you're using.
However, there are functions with which you can tell the OS that a particular thread or process is especially important, and thereby influence the scheduling for your purposes.
You can acquire the platform-dependent thread handle in order to use OS functions:
native_handle_type std::thread::native_handle(); // (since C++11)
Returns the implementation-defined underlying thread handle.
To stress it again: this requires an implementation that is different for each platform!
Microsoft Windows
According to the Microsoft documentation:
SetThreadPriority function
Sets the priority value for the specified thread. This value, together with the priority class of the thread's process, determines the thread's base priority level.
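A sketch of combining this with std::thread (on MSVC, native_handle() yields the Win32 HANDLE; the chosen priority is just an example):

#include <thread>
#include <windows.h>

int main() {
    std::thread worker([] {
        // time-critical display work ...
    });
    // On MSVC, native_handle() is the underlying Win32 thread HANDLE, so it
    // can be passed straight to the WinAPI. Note this only raises scheduling
    // priority; it does not guarantee a fixed time slice.
    SetThreadPriority(worker.native_handle(), THREAD_PRIORITY_HIGHEST);
    worker.join();
}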
Linux/Unix
For Linux things are more difficult, because there are several different systems under which threads can be scheduled. Microsoft Windows uses a priority system, but on Linux that doesn't seem to be the default scheduling policy.
For more information, please take a look at this Stack Overflow question (it should be the same for std::thread because of this).
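For completeness, a sketch of the pthreads equivalent (SCHED_FIFO is one of the Linux real-time policies and usually requires elevated privileges; the priority value here is an arbitrary example):

#include <pthread.h>
#include <sched.h>
#include <thread>

int main() {
    std::thread worker([] {
        // work that should run under a real-time policy ...
    });
    sched_param sp{};
    sp.sched_priority = 10;  // valid range depends on the policy; arbitrary here
    // On libstdc++/Linux, native_handle() is the underlying pthread_t.
    pthread_setschedparam(worker.native_handle(), SCHED_FIFO, &sp);
    worker.join();
}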
I want to ensure that for every time period T during which one of the threads of the given program have run, each thread gets a chance to execute for at least time t, so that the display looks as if all threads are executing simultaneously.
You are using threads to make it seem as though different tasks are executing simultaneously. That is not recommended for the reasons stated in Arthur's answer, to which I really can't add anything.
If, instead of having long-lived threads each doing its own task, you can express the work as a queue of tasks that can be executed without mutual exclusion, then you can use a task queue with a thread pool dequeuing and executing the tasks.
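A minimal sketch of that pattern in C++11 (the names are mine; a production pool also needs a shutdown path and exception handling):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class TaskQueue {
    std::queue<std::function<void()>> tasks;
    std::mutex m;
    std::condition_variable cv;
public:
    void push(std::function<void()> task) {
        { std::lock_guard<std::mutex> lock(m); tasks.push(std::move(task)); }
        cv.notify_one();
    }
    std::function<void()> pop() {  // blocks until a task is available
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !tasks.empty(); });
        std::function<void()> task = std::move(tasks.front());
        tasks.pop();
        return task;
    }
};

int main() {
    TaskQueue queue;
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < std::thread::hardware_concurrency(); ++i)
        pool.emplace_back([&queue] {
            for (;;) queue.pop()();  // each worker runs tasks forever
        });
    queue.push([] { /* update one part of the display */ });
    for (std::thread& t : pool) t.join();  // never returns in this sketch
}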
If you cannot, you might want to look into wait-free data structures and algorithms. In a wait-free algorithm/data structure, every thread is guaranteed to complete its work in a finite (and even specified) number of steps. I can recommend the book The Art of Multiprocessor Programming, where this topic is discussed at length. The gist of it is: every lock-free algorithm/data structure can be modified to be wait-free by adding communication between threads, through which a thread that is about to do work makes sure that no other thread is starved/stalled. Basically, you prefer fairness over the total throughput of all threads. In my experience this is usually not a good compromise.
I'm working on an embedded Linux system (3.12.something), and our application, after some random amount of time, starts hogging the CPU. I've run strace on our application, and right when the problem happens, I see a lot of lines similar to this in the strace output:
[48530666] futex(0x485f78b8, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.009002>
I'm pretty sure this is the smoking gun I'm looking for and there is a race of some sort. However, I now need to figure out how to identify the place in the code that's trying to get this mutex. How can I do that? Our code is compiled with GCC and has debugging symbols in it.
My current thinking (that I haven't tried yet) is to print out a string to stdout and flush before trying to grab any mutex in our system, with the expectation that the string will print right before strace complains about getting the lock ... but there are a LOT of places in the code that would have to be instrumented like this.
EDIT: Another strange thing that I just realized is that our program doesn't start hogging the CPU until some random time has passed since it was run (5 minutes to 5 hours and anywhere in between). During that time, there are zero futex syscalls happening. Why do they suddenly start? From what I've read, I think maybe they are being used properly in userspace until something fails and falls back to making a futex() syscall...
Any suggestions?
If you perpetually and often lock a mutex for a short time from different threads, for example one protecting a global logger, you might cause a so-called thread convoy. The problem doesn't occur until two threads compete for the lock. The first gets the lock and holds it for a short time; then, when it needs the lock a second time, it gets preempted because the second thread is already waiting. The second one does the same. The time slice available to each thread is suddenly reduced to the time between two lock attempts, causing many context switches and a corresponding slowdown. Further, all but one thread are always blocked on the mutex, effectively disabling any parallel execution.
In order to fix this, make your threads cooperate instead of competing for resources. For the above example of a logger, consider e.g. a lock-free queue for the entries, or separate queues for each thread using thread-local storage.
Concerning the futex() calls: the idea is to poll an atomic flag and, after some number of spins, fall back to the actual OS mutex. The atomic flag is available without the expensive switch between user space and kernel space. For longer waits, using kernel preemption (via futex()) avoids hogging the CPU with polling. This explains why the program doesn't need any futex() calls in normal operation.
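A toy illustration of that two-phase idea (std::this_thread::yield() stands in here for the real futex wait; glibc parks the thread in the kernel instead):

#include <atomic>
#include <thread>

// Toy two-phase lock: spin briefly on an atomic flag in user space, then give
// up the CPU. A real implementation parks the thread with futex() instead of
// yielding, which is why futex syscalls only appear once locks are contended.
class SpinThenWaitLock {
    std::atomic_flag locked = ATOMIC_FLAG_INIT;
public:
    void lock() {
        int spins = 0;
        while (locked.test_and_set(std::memory_order_acquire)) {
            if (++spins > 1000)              // fast path failed...
                std::this_thread::yield();   // ...stop burning the CPU
        }
    }
    void unlock() { locked.clear(std::memory_order_release); }
};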
You basically need to generate a core file at that moment.
Then you can load the program plus the core in GDB and look at it:
man gcore
or
generate-core-file
During that time, there are zero futex syscalls happening. Why do they suddenly start?
This is due to the fact that an uncontended mutex, implemented via futex, doesn't make a system call; it does only an atomic operation, purely in user space. Only a contended lock is visible as a system call.
I've created a multi-threaded application using C++ and POSIX threads, in which I now need to block a thread (the main thread) until a boolean flag is set (becomes true).
I've found two ways to get this done.
Spinning through a loop without sleep.
while(!flag);
Spinning through a loop with sleep.
while(!flag){
    sleep(some_int);
}
If I should follow the first way, why do some people write code the second way? If the second way should be used, why should we make the current thread sleep, and what are the disadvantages of that approach?
The first option (a "busy wait") wastes an entire core for the duration of the wait, preventing other useful work being done and/or wasting energy.
The second option is less wasteful - your waiting thread uses very little CPU and allows other threads to run. But it is still wasteful to keep switching back to the thread to check the flag.
Far better than either would be to use a condition variable, which allows the waiting thread to block without consuming any resources until it is able to proceed.
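A sketch of the condition-variable version in C++11 (the POSIX pthread_cond_wait API follows the same pattern):

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool flag = false;

void wait_for_flag() {                   // replaces while(!flag) entirely
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return flag; });  // kernel blocks the thread; no CPU used
}

void set_flag() {
    { std::lock_guard<std::mutex> lock(m); flag = true; }
    cv.notify_one();                     // wakes the waiting thread promptly
}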
while(!flag); will cause your thread to use all of its allocated time checking the condition. This wastes a lot of CPU cycles checking something that has likely not changed.
Sleeping for a bit causes the thread to pause and give up the CPU to programs that actually need it.
You shouldn't do either though; you should use a threading library to create a flag object and call its wait function, so that the kernel will pause the thread until the flag is set.
The first way (just the plain while) wastes resources, specifically the processor time of your process.
When a thread is put to sleep, the OS may decide that the processor will be used for different tasks; this applies to systems with preemptive multitasking. In theory, if you had as many processors/cores as threads, there would not have to be any difference.
Whether a solution is good depends on the operating system used, and sometimes on the architecture the program is running on. You should consult your syscall reference to find out more about this.
While profiling my code to find out what is slow, I have 3 functions that are apparently taking forever; well, that's what Very Sleepy says.
These functions are:
ZwDelayExecution 20.460813 20.460813 19.987685 19.987685
MsgWaitForMultipleObjects 20.460813 20.460813 19.987685 19.987685
WaitForSingleObject 20.361805 20.361805 19.890967 19.890967
Can anybody tell me what these functions are, why they are taking so long, and how to fix them?
Thanks
Those functions are probably used to make threads 'sleep' in the Win32 API. They might also be used for thread synchronization, so check for that.
They are taking so much time because they are designed for exactly that: waiting.
The WaitForSingleObject function can wait for the following objects:
Change notification
Console input
Event
Memory resource notification
Mutex
Process
Semaphore
Thread
Waitable timer
So another possibility is that it is waiting for console user input.
ZwDelayExecution is an internal function of Windows. As can be seen below, it is used to implement the Sleep function. Here is the call stack for Sleep so you can see it with your own eyes:
0 ntdll.dll ZwDelayExecution
1 kernel32.dll SleepEx
2 kernel32.dll Sleep
It probably uses low-level kernel features to achieve this, which is why it can specify the delay with a granularity of 100 ns (actual accuracy still depends on the system timer).
MsgWaitForMultipleObjects has a goal similar to that of WaitForSingleObject.
Judging by the names, all 3 functions seem to block, so they take a long time because they are designed to do so; they shouldn't use any CPU while waiting.
One of the first steps should always be to check the documentation:
WaitForSingleObject:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms687032.aspx
Waits for an object such as a thread, process, or mutex.
MsgWaitForMultipleObjects:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms684242.aspx
Simply waits for multiple objects, just as WaitForSingleObject waits for one.
ZwDelayExecution:
There doesn't seem to be any documentation for ZwDelayExecution, but I think it is an internal method that gets called when you call Sleep.
Anyway, the names already reveal part of it: "Wait" and "Delay" functions are supposed to take time. If you want to reduce the waiting time, you have to find out what is calling these functions.
To give you an example:
If you start a new thread and then wait for it to finish in your main thread, you will call WaitForSingleObject one way or another in WinAPI programming. It doesn't even have to be you who starts the thread; it could be the runtime itself. The function will wait until the thread finishes. Therefore it will take time and block the program in WaitForSingleObject until the thread is done or a timeout occurs. This is nothing bad; it is intended behaviour.
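A sketch of exactly that situation (the worker body is a stand-in for real work):

#include <windows.h>

DWORD WINAPI Worker(LPVOID) {
    Sleep(2000);  // stand-in for two seconds of real work
    return 0;
}

int main() {
    HANDLE thread = CreateThread(nullptr, 0, Worker, nullptr, 0, nullptr);
    // The main thread blocks here for about two seconds. A profiler will
    // attribute that wall-clock time to WaitForSingleObject, even though
    // almost no CPU is consumed while waiting.
    WaitForSingleObject(thread, INFINITE);
    CloseHandle(thread);
}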
Before you start zooming in on these functions, you might first want to determine what kind of slowness your program is suffering from. It is pretty normal for a Windows program to have one or more threads spending most of their time in blocking functions.
You would first need to determine whether your actual critical thread is CPU bound. In that case you don't want to zoom in on the functions that take a lot of wall-clock time; you want to find the functions that take CPU time.
I don't have much experience with Very Sleepy, but IIRC it is a sampling profiler, and those are typically not so good at measuring CPU usage.
Only after you've determined that your program is not CPU bound should you zoom in on the functions that wait a lot.
From what I understand, you write your Linux daemon to listen for requests in an endless loop.
Something like..
int main() {
    while(1) {
        //do something...
    }
}
ref: http://www.thegeekstuff.com/2012/02/c-daemon-process/
I read that sleeping a program makes it go into waiting mode so it doesn't eat up resources.
1. If I want my daemon to check for a request every 1 second, would the following be resource-consuming?
int main() {
    while(1) {
        if (request) {
            //do something...
        }
        sleep(1);
    }
}
2. If I were to remove the sleep, does that mean the CPU consumption will go up to 100%?
3. Is it possible to run an endless loop without eating resources? Say, if it does nothing but loop, or just sleep(1).
Endless loops and CPU resources are a mystery to me.
Is it possible to run an endless loop without eating resources? Say..if it does nothing but just loops itself. Or just sleep(1).
There is a better option.
You can just use a semaphore, which remains blocked at the beginning of the loop, and signal the semaphore whenever you want the loop to execute.
Note that this will not eat any resources.
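A sketch with a POSIX semaphore (error checking omitted):

#include <semaphore.h>

sem_t requests;  // initialised once with sem_init(&requests, 0, 0)

void daemon_loop() {
    for (;;) {
        sem_wait(&requests);  // blocks here, consuming no CPU, until signalled
        // ... handle one request ...
    }
}

// Whoever receives a request (another thread, or even a signal handler,
// since sem_post is async-signal-safe) wakes the loop:
void on_request() {
    sem_post(&requests);
}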
The poll and select calls (mentioned by Basile Starynkevitch in a comment) or a semaphore (mentioned by Als in an answer) are the correct ways to wait for requests, depending on circumstances. On operating systems without poll or select, there should be something similar.
Neither sleep, YieldProcessor, nor sched_yield are proper ways to do this, for the following reasons.
YieldProcessor and sched_yield merely move the process to the end of the runnable queue but leave it runnable. The effect is that they allow other processes at the same or higher priority to execute, but, when those processes are done (or if there are none), then the process that called YieldProcessor or sched_yield continues to run. This causes two problems. One is that lower priority processes still will not run. Another is that this causes the processor to be always running, using energy. We would prefer the operating system to recognize when no process needs to be running and to put the processor into a low-power state.
sleep may permit this low-power state, but it plays a guessing game about how long it will be until the next request comes in, it wakes the processor repeatedly when there is no need, and it makes the process less responsive to requests, since the process will continue sleeping until the expiration of the requested time even if there is a request to be serviced.
The poll and select calls are designed for exactly this situation. They tell the operating system that this process wants to service a request coming in on one of its I/O channels but otherwise has no work to do. This allows the operating system to mark the process as not runnable and to put the processor in a low-power state if suitable.
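A sketch of the poll version for a daemon reading requests from a file descriptor (request_fd is assumed to be a socket or pipe set up elsewhere):

#include <poll.h>
#include <unistd.h>

void serve_requests(int request_fd) {
    pollfd pfd;
    pfd.fd = request_fd;
    pfd.events = POLLIN;
    for (;;) {
        // A timeout of -1 means "block forever": the OS marks the process
        // not-runnable until data arrives, so no CPU is consumed waiting.
        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
            char buf[512];
            ssize_t n = read(request_fd, buf, sizeof buf);
            if (n <= 0) break;  // EOF or error
            // ... handle the request in buf ...
        }
    }
}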
Using a semaphore provides the same behavior, except that the signal to wake the process comes from another process raising the semaphore instead of activity arising in an I/O channel. Semaphores are suitable when the signal to do some work arrives in this way; simply use whichever of poll or a semaphore is more appropriate for your situation.
The criticism that poll, select, or a semaphore causes a kernel-mode call is irrelevant, because the other methods also cause kernel-mode calls. A process cannot sleep on its own; it has to call the operating system to request it. Similarly, YieldProcessor and sched_yield make requests to the operating system.
The short answer is yes: removing the sleep gives 100% CPU. But the answer does depend on some additional details. The loop consumes all the CPU it can get, unless...
The loop body is trivial, and optimised away.
The loop contains a blocking operation (like a file or network operation). The link you provided suggests avoiding this, but it is often a good idea to block until something relevant happens.
EDIT : For your scenario, I support the suggestion made by #Als.
EDIT 2: I expect this answer has received a -1 because I claim blocking operations can actually be a good idea. [If you -1, you should leave a motivation in a comment so that we all may learn something.]
Current popular thinking is that non-blocking (event-based) IO is good and blocking is bad. This view is oversimplified, because it assumes that all software that performs IO can improve throughput by using non-blocking operations.
What? Am I really suggesting that using non-blocking IO can actually reduce throughput? Yes, it can. When a process serves a single activity, it is actually better to use blocking IO, because blocking IO only burns resources that have already been paid for in the existence of the process.
In contrast, non-blocking IO can carry a greater fixed overhead than simple blocking IO. If the process isn't able to supply additional IO that can be interleaved, then there is nothing gained by paying for the non-blocking setup. (In practice, the greatest cost of inappropriate non-blocking IO is simply the added code complexity. Beyond that, this topic is largely a thought exercise.)
Under blocking IO we rely upon the operating system to schedule those processes that can make progress. That's what the OS is designed to do.
Under non-blocking IO we have greater setup costs but can share the resources of the process and its threads between interleaved work. Non-blocking IO is therefore ideal for any process that serves multiple independent activities, such as a web server. The throughput gained vastly exceeds the fixed-cost overheads of non-blocking IO.