I will be writing an interface in C++ that controls a large CNC machine, and it will run on Windows. For safety reasons, I would like this process to run on its own core so that it can run in real time. If it shares a core with Windows, all of the Windows processes will be dealt with before my process is. Basically, how do I make sure either that my process is always at the front of the processing queue, or how do I designate a core to run my process and leave the other cores to handle Windows? Also, is there any way of verifying that my program is running in real time, i.e., that this core is processing my program while that core is doing nothing because we told our program not to run on it? Any input would be helpful.
There is no guarantee your process will be dealt with in real time. Windows does not do that. Since you mention safety, I will point out that if a lawsuit occurred, you would be in deep trouble: expert witnesses would testify that the design is inherently unsafe.
User control and display can be done in Windows but real time operations belong in dedicated hardware such as a PLC.
You can use SetThreadAffinityMask to restrict a thread to running on some subset of available processors/cores.
If you use SetThreadPriority and SetPriorityClass to raise a thread to real-time priority, only other threads running at real-time priority can interrupt it, effectively forcing other threads onto other cores (unless you raise two or more threads to real-time priority).
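A minimal sketch of how those calls fit together (the core index and error handling are illustrative only; note that REALTIME_PRIORITY_CLASS typically requires elevated privileges):

```cpp
#include <windows.h>
#include <iostream>

int main()
{
    // Raise the whole process to the real-time priority class.
    // Without sufficient privileges, Windows silently substitutes
    // HIGH_PRIORITY_CLASS instead.
    if (!SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS))
        std::cerr << "SetPriorityClass failed: " << GetLastError() << '\n';

    // Raise this thread to the highest priority within that class.
    if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL))
        std::cerr << "SetThreadPriority failed: " << GetLastError() << '\n';

    // Restrict this thread to logical core 1 (bit 1 of the mask).
    if (SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << 1) == 0)
        std::cerr << "SetThreadAffinityMask failed: " << GetLastError() << '\n';

    // ... time-critical work here ...
}
```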
As an alternative, Windows Embedded Compact is a real-time, priority-based OS that can make soft real-time guarantees (far better than Windows Vista/7). It's costly, but on par with other commercial RTOSes.
Related
If possible, I wish to allocate a logical core exclusively to a single process.
I am aware that Winbase.h contains Get/SetProcessAffinityMask and SetThreadAffinityMask.
I can get all running processes when my process starts and set their affinities to other logical cores. However, I do not want to have to check all processes periodically, for instance to deal with processes launched after mine has started.
Furthermore, there will be other processes that each need exclusive use of a specific logical core (no other process shall waste resources on that core). For instance, my process shall run on core 15 while another shall run only on core 14.
Is there a better and more permanent way to allocate specific logical cores to specific processes than the Get/SetProcessAffinityMask scheme described above?
Windows is not a real-time operating system. Windows is designed to do preemptive multitasking with isolated processes, like basically any other modern desktop OS. A process is not supposed to just lock every other process out of a particular core; therefore, there is no API to explicitly do so (at least I'm not aware of one). It's up to the OS scheduler to decide which threads get to run when and where. That's the whole idea. You can use thread priorities to tell the scheduler that certain threads should be given a chance to run over others. You can use affinity masks to tell the scheduler which cores a thread may be scheduled on. You can even set a preferred core for your thread. But you don't get to schedule threads yourself.
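For illustration, a short sketch of those scheduler hints (SetThreadIdealProcessor is the "preferred core" call; it is a hint the scheduler is free to ignore, not a reservation; the core indices are illustrative):

```cpp
#include <windows.h>

int main()
{
    HANDLE thread = GetCurrentThread();

    // Tell the scheduler this thread may only run on cores 2 and 3.
    SetThreadAffinityMask(thread, (DWORD_PTR(1) << 2) | (DWORD_PTR(1) << 3));

    // Hint that core 2 is preferred; the scheduler may still pick core 3.
    if (SetThreadIdealProcessor(thread, 2) == DWORD(-1)) {
        // the hint could not be set
    }
}
```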
Note that there's apparently a way to get something a bit like what you're looking for to work on Linux (see this question for more). I don't think similar possibilities exist on Windows. Yes you could try to hack together some solution based on a background task that continuously monitors and adjusts the priorities and affinity masks of all the threads in the system to approximate the desired behavior (like the person in the question linked by Ben Voigt above has apparently tried, and failed to achieve). But why would you want to do that? It goes completely against the very nature of everything an OS like Windows is designed to do. To me, what you are asking sounds a lot like what you're really looking for is a completely different kind of operating system, or maybe even no operating system at all. Boot the CPU straight into your own image and you get to drive all the cores in whatever way you fancy…
According to the video, which is from the Introductory Operating Systems course on Udacity:
For a user-level thread to actually execute, it must first be associated with a kernel-level thread. And then, the OS-level scheduler must schedule that kernel-level thread onto a CPU.
According to Remzi H. Arpaci-Dusseau's book (Operating Systems: Three Easy Pieces) in chapter 6:
We have described some key low-level mechanisms to implement CPU virtualization, a set of techniques which we collectively refer to as limited direct execution. The basic idea is straightforward: just run the program you want to run on the CPU, but first make sure to set up the hardware so as to limit what the process can do without OS assistance.

This general approach is taken in real life as well. For example, those of you who have children, or, at least, have heard of children, may be familiar with the concept of baby proofing a room: locking cabinets containing dangerous stuff and covering electrical sockets. When the room is thus readied, you can let your baby roam freely, secure in the knowledge that the most dangerous aspects of the room have been restricted.

In an analogous manner, the OS “baby proofs” the CPU, by first (during boot time) setting up the trap handlers and starting an interrupt timer, and then by only running processes in a restricted mode. By doing so, the OS can feel quite assured that processes can run efficiently, only requiring OS intervention to perform privileged operations or when they have monopolized the CPU for too long and thus need to be switched out.
The first quote states that a user-level thread has to be associated with a kernel-level thread in order to run, but according to the second quote, the OS directly executes the user-level thread, under strict control, without any such association.
Questions:
1) I'm a newbie Linux user. Which of the above approaches is more correct for the Linux operating system?
2) Yes, there are quite a few posts and links about multithreading models on the internet, but I can't find a single code example. Can you give a simple example in C of the One-to-One and Many-to-One models for Linux?
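No answer with code was posted in this thread, but as a hedged sketch: modern Linux's standard thread library (NPTL) already implements the one-to-one model, so a plain pthread_create call demonstrates it (compile with -pthread; the worker function name is arbitrary):

```cpp
// One-to-one model: NPTL maps each user thread created by pthread_create
// to exactly one kernel-level thread (visible under /proc/<pid>/task/).
#include <pthread.h>
#include <cstdio>

void* worker(void*)
{
    std::printf("worker runs on its own kernel-level thread\n");
    return nullptr;
}

int main()
{
    pthread_t t;
    pthread_create(&t, nullptr, worker, nullptr);
    pthread_join(t, nullptr);
    // A many-to-one model (many user threads multiplexed onto one kernel
    // thread) is not offered by NPTL; it requires a user-space threading or
    // coroutine library (e.g. GNU Pth) that does its own scheduling.
    return 0;
}
```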
I work in a lab and wrote a multithreaded computational program in C++11 using std::thread. Now I have an opportunity to run my program on a multi-CPU server.
Server:
Runs Ubuntu server
Has 40 Intel CPUs
I know nothing about multi-CPU programming. The first idea that comes to mind is to run 40 instances of the application and then glue their results together. That is possible, but I want to know more about my options.
If I compile my code on the server with its gcc compiler, will the resulting application take advantage of multiple CPUs?
If the answer to #1 is "it depends", how can I check?
Thank you!
If your program runs multithreaded, your OS should automatically take care that it uses the available CPUs.
Make sure to distribute the work across about as many threads as there are CPUs you can use. Make sure it is not just one thread doing the work while the other threads simply wait for it to terminate.
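A minimal C++11 sketch of that advice: query std::thread::hardware_concurrency() and hand each worker its own slice of the work (the amount of work and the slicing scheme here are illustrative only):

```cpp
#include <iostream>
#include <thread>
#include <vector>

int main()
{
    // May return 0 when the hardware cannot be queried; fall back to 1.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;
    std::cout << "hardware threads: " << n << '\n';

    const unsigned total = 1000;  // illustrative amount of work
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back([=] {
            // each worker gets its own contiguous slice of the work
            unsigned begin = i * total / n;
            unsigned end   = (i + 1) * total / n;
            for (unsigned item = begin; item < end; ++item) {
                // process item...
            }
        });

    for (auto& t : workers)
        t.join();
}
```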
Your question is not only about multithreading, but about multiple CPUs.
Basically the operating system will automatically spread out the threads over the cores. You don't need to do anything.
Since you are using C++11, you can call std::thread::get_id() to identify different threads, but you CANNOT identify the core you are using. Use pthreads directly plus "CPU affinity" for that.
You can google "CPU affinity" for more details on how to get control over it, if you want this kind of precision. You can identify the core as well as choose the core... You can start with this: http://man7.org/linux/man-pages/man3/pthread_setaffinity_np.3.html
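A minimal sketch based on that man page, pinning the calling thread to core 0 and then reading back the core it actually runs on (Linux-specific, hence the _np suffix; compile with -pthread):

```cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE  // for the CPU_* macros and sched_getcpu()
#endif
#include <pthread.h>
#include <sched.h>
#include <cstdio>

int main()
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);  // allow core 0 only

    int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (rc != 0)
        std::fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);

    // After pinning, this should print 0.
    std::printf("running on core %d\n", sched_getcpu());
    return 0;
}
```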
My professor casually mentioned that we should write multithreaded programs even when using a unicore processor; however, because of a lack of time, he did not elaborate.
I would like to know: what are the benefits of a multithreaded program on a unicore processor?
It won't be as significant as on a multi-core system, but it can still provide some benefits.
Mainly, the benefits you get come from the context switch that happens when the currently executing thread stalls. The executing thread may be waiting for anything, such as a hardware resource, a branch misprediction, or a data transfer after a cache miss.
At that point, a waiting thread can be executed to make use of this "waiting time". Of course, the context switch itself takes some time. Also, managing threads inside the code, rather than plain sequential computation, adds some complexity to your program. And, as has been said, some applications need to be multithreaded, so in some cases there is no escaping the context switch.
Some applications need to be multi-threaded. Multi-threading isn't just about improving performance by using more cores, it's also about performing multiple tasks at once.
Take Skype for example: the GUI needs to be able to accept the text you're entering, display it on the screen, listen for new messages coming from the user you're talking to, and display them. This wouldn't be a trivial task in a single-threaded application.
Even if there's only one core available, the OS thread scheduler will give you the illusion of parallelism.
Usually it is about not blocking. Running many threads on a single core still gives the illusion of concurrency. So you can have, say, a thread doing IO while another one does user interactions. The user interaction thread is not blocked while the other does IO, so the user is free to carry on interacting.
The benefits can take different forms.
One of the widely used examples is an application with a GUI that is supposed to perform some kind of computation. With a single thread, the user would have to wait for the result before doing anything else with the application, but if you run the computation in a separate thread, the user interface stays available during the computation. So a multithreaded program can emulate a multitasking environment even on a unicore system. That's one of the points.
As others have already mentioned, not blocking is one application. Another one is separation of logic for unrelated tasks that are to be executed simultaneously. Using threads for that leaves handling of scheduling these tasks to the OS.
However, note that it may also be possible to implement similar behavior using asynchronous operations in a single thread. Futures and boost::asio provide ways of doing non-blocking work without necessarily resorting to multiple threads.
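As a small sketch of the future-based style (std::async with std::launch::async may spawn a helper thread internally, but the calling code never manages a thread itself; the one-second sleep just simulates slow IO):

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main()
{
    // Kick off a slow, IO-like task without blocking the caller.
    std::future<int> result = std::async(std::launch::async, [] {
        std::this_thread::sleep_for(std::chrono::seconds(1));  // simulated IO
        return 42;
    });

    // The caller stays responsive: poll and do other work until ready.
    while (result.wait_for(std::chrono::milliseconds(200))
           != std::future_status::ready)
        std::cout << "still interacting with the user...\n";

    std::cout << "result: " << result.get() << '\n';
}
```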
I think it depends a bit on how exactly you design your threads and which logic is actually in the thread. Some benefits you can even get on a single core:
A thread can wrap a blocking or long-running call that you can't circumvent otherwise. Polling mechanisms exist for some operations, but not for all.
A thread can wrap an almost standalone part of your application that has virtually no interaction with other code. For example background polling for updates, monitoring some resource (e.g. free storage), checking internet connectivity. If you keep them in a separate thread you can keep the code relatively simple in its own 'runtime' without caring too much about the impact on the main program, the sole communication with the main logic is usually a single 'event'.
In some environments you might get more processing time. This mainly depends on how your OS scheduling system works, but if this allocates time per thread, the more threads you have the more your app will be scheduled.
Some long-term benefits:
Where it's not hard to do, you benefit when your hardware evolves. You never know what's going to happen; today your app runs on a single-core embedded device, tomorrow that device gets a quad core. Programming with threads from the beginning improves your future scalability.
One example is an environment where you can deterministically assign work to a thread, e.g. based on some hash, so that all related operations end up in the same thread. The advantage on a single core is small, but it's not hard to do since you need few synchronization primitives, so the overhead stays small.
That said, I think there are situations where it's very ill-advised:
As soon as the synchronization you need with other threads becomes complex (e.g. multiple locks, lots of critical sections, ...). It might still be that multithreading gives you a benefit once you effectively move to multiple CPUs, but the overhead is huge, both for your single core and for your programming time.
For instance, think about operations that block because of slow peripheral devices (hard disk access, etc.). While these are waiting, even a single core can do other things asynchronously.
In a lot of applications the bottleneck is not CPU processing power. So when the program flow is waiting for the completion of IO requests (user input, network/disk IO), for critical resources to become available, or for any sort of asynchronously triggered event, the CPU can be scheduled to do other work instead of just blocking.
In this case you don't necessarily need multiple threads that can actually run in parallel. Cooperative multitasking concepts like asynchronous IO, coroutines, or fibers come to mind.
If however the application's bottleneck is CPU processing power (constantly 100% CPU usage), then it makes sense to increase the number of CPUs available to the application. At that point it is easier to scale the application up to use more CPUs if it was designed to run in parallel upfront.
As far as I can see, one answer was not yet given:
You will have to write multithreaded applications in the future!
The average number of cores will keep doubling every 18 months for the foreseeable future. People have been taught single-threaded programming for 50 years, and now they are confronted with devices that have multiple cores. The programming style in a multithreaded environment differs significantly from single-threaded programming. This concerns low-level aspects like avoiding race conditions and proper synchronization, as well as high-level aspects like general algorithm design.
So in addition to the points already mentioned, it's also about writing future-proof software, scalability and the development of the skills that are required to achieve these goals.
I am new to this kind of programming and need your point of view.
I have to build an application but I can't get it to compute fast enough. I have already tried Intel TBB, and it is easy to use, but I have never used other libraries.
For multiprocessor programming, I am reading about OpenMP and Boost for the multithreading, but I don't know their pros and cons.
In C++, when is multithreaded programming advantageous compared to multiprocessor programming, and vice versa? Which is best suited to heavy computations or to launching many tasks...? What are their pros and cons when we build an application designed with them? And finally, which library is best to work with?
Multithreading means exactly that, running multiple threads. This can be done on a uni-processor system, or on a multi-processor system.
On a single-processor system, when running multiple threads, the actual observation of the computer doing multiple things at the same time (i.e., multi-tasking) is an illusion, because what's really happening under the hood is that there is a software scheduler performing time-slicing on the single CPU. So only a single task is happening at any given time, but the scheduler is switching between tasks fast enough so that you never notice that there are multiple processes, threads, etc., contending for the same CPU resource.
On a multi-processor system, the need for time-slicing is reduced. The time-slicing effect is still there, because a modern OS could have hundreds of threads contending for two or more processors, and there is typically never a 1-to-1 relationship between the number of threads and the number of processing cores available. So at some point, a thread will have to stop so that another thread can start on a CPU the two threads share. This is again handled by the OS's scheduler. That said, with a multiprocessor system, you can have two things happening at the same time, unlike with a uni-processor system.
In the end, the two paradigms are really somewhat orthogonal in the sense that you will need multithreading whenever you want two or more tasks running asynchronously, but because of time-slicing, you do not necessarily need a multi-processor system to accomplish that. If you are running multiple threads on a task that is highly parallel (e.g., numerically evaluating an integral), then yes, the more cores you can throw at the problem, the better. You won't necessarily need a 1-to-1 relationship between threads and processing cores, but at the same time, you don't want to spin off so many threads that you end up with tons of idle threads waiting to be scheduled on one of the available cores. On the other hand, if your parallel tasks require some sequential component, i.e., a thread must wait for the result from another thread before it can continue, then you may be able to run more threads with some type of barrier or synchronization method, so that the threads that need to be idle are not spinning away using CPU time, and only the threads that need to run contend for CPU resources.
There are a few important points that I believe should be added to the excellent answer by @Jason.
First, multithreading is not always an illusion, even on a single processor: there are operations that do not involve the processor. These are mainly I/O: disk, network, terminal, etc. The basic form of such an operation is blocking, or synchronous, i.e. your program waits until the operation is completed and then proceeds. While waiting, the CPU is switched to another process/thread.
If you have anything you can do during that time (e.g. background computation while waiting for user input, serving another request, etc.), you have basically two options:
use asynchronous I/O: you call a non-blocking I/O routine, providing it with a callback function that says "call this function when you are done". The call returns immediately and the I/O operation continues in the background. You go on with the other stuff.
use multithreading: you have a dedicated thread for each kind of task. While one waits on the blocking I/O call, the others go on (a minimal sketch of this option follows the pros and cons below).
Both approaches are difficult programming paradigms; each has its pros and cons.
with async I/O, the program's logic is less obvious and more difficult to follow and debug. However, you avoid thread-safety issues.
with threads, the challenge is to write thread-safe programs. Thread-safety faults are nasty bugs that are quite difficult to reproduce. Overuse of locking can actually degrade performance instead of improving it.
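To make the multithreading option above concrete, here is a minimal sketch where a dedicated thread owns the blocking I/O call while the main thread goes on with other work (the file name and doOtherWork are placeholders):

```cpp
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <thread>

void doOtherWork()
{
    // placeholder: background computation, user interaction, another request...
}

int main()
{
    std::string contents;

    // The dedicated thread owns the blocking I/O call.
    std::thread io([&contents] {
        std::ifstream file("input.txt");  // blocks on disk I/O
        contents.assign(std::istreambuf_iterator<char>(file),
                        std::istreambuf_iterator<char>());
    });

    doOtherWork();  // the main thread is not blocked meanwhile

    io.join();  // synchronize before touching 'contents'
    std::cout << "read " << contents.size() << " bytes\n";
}
```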
(coming to the multi-processing)
Multithreading became popular on Windows because manipulating processes is quite heavy on Windows (creating a process, context switching, etc.), as opposed to threads, which are much more lightweight (at least this was the case when I worked on Win2K).
On Linux/Unix, processes are much more lightweight. Also (AFAIK), threads on Linux are actually implemented internally as a kind of process, so there is no gain in context switching of threads vs. processes. However, with processes you need to use some form of IPC (inter-process communication), such as shared memory, pipes, or message queues.
On a lighter note, look at the SQLite FAQ, which declares "Threads are evil"! :)
To answer the first question:
The best approach is to just use multithreading techniques in your code until you get to the point where even that doesn't give you enough benefit. Assume the OS will handle delegation to multiple processors if they're available.
If you actually are working on a problem where multithreading isn't enough, even with multiple processors (or if you're running on an OS that isn't using its multiple processors), then you can worry about discovering how to get more power. Which might mean spawning processes across a network to other machines.
I haven't used TBB, but I have used IPP and found it to be efficient and well-designed. Boost is portable.
Just wanted to mention that the Flow-Based Programming ( http://www.jpaulmorrison.com/fbp ) paradigm is a naturally multiprogramming/multiprocessing approach to application development. It provides a consistent application view from high level to low level. The Java and C# implementations take advantage of all the processors on your machine, but the older C++ implementation only uses one processor. However, it could fairly easily be extended to use BOOST (or pthreads, I assume) by the use of locking on connections. I had started converting it to use fibers, but I'm not sure if there's any point in continuing on this route. :-) Feedback would be appreciated. BTW The Java and C# implementations can even intercommunicate using sockets.