Is there any possible way of disabling multi-core functionality on Windows and just using a single core, using C/C++? Is there any library that allows it?
My application accesses one of our chip modules used to communicate with the host. We suspect someone else is accessing this module and changing it. The strange thing is that it only happens on a multi-core system (Windows 7 64-bit); when you set Windows to use only a single core (How to disable a core), everything works great.
It sounds to me like Windows shouldn't allow any program to accomplish this programmatically, but I hope I'm mistaken.
EDIT: I'm not looking for suggestions regarding my threading skills. My problem is possibly more hardware- or firmware-related (maybe the 2nd core is also accessing my chip module).
2nd EDIT: This is NOT a software problem! I only want to know if it's possible to disable multi-core using C/C++. I don't seek threading advice, as I'm 100% sure the problem doesn't lie there.
3rd EDIT: The issue was SOLVED. The problem was with another process that my customer ran, which was accessing the same shared memory my application was accessing.
As I previously mentioned, there was no problem with my threads, and I never got the answer I was looking for: a simple yes or no regarding whether it is possible to disable one of the cores using C++.
Processor affinity was not helpful in my particular case.
I think a possible workaround is to set the affinity of your task's main thread to one core, then create threads with infinite loops for the other cores and give them the highest possible priority. But usually, if a program cannot run on multi-core hardware, something is wrong in the software.
Even on a single core, you can be interrupted at any point, so I suspect your problem is due to switching cores.
If you suspect the hardware driver isn't thread-safe, use the following to set interrupt affinity:
http://www.microsoft.com/whdc/system/sysperf/intpolicy.mspx
Unless your C/C++ library is using threading, I can't imagine how the number of cores could affect the runtime behavior. If your app does use threads, they will probably execute a bit differently on one core vs multiple cores, but if you can get an error on a multicore system you can probably get the same error on a single core system.
You can set processor affinity when launching your application. This locks your application to one processor. This may be a simple solution to the problem.
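For example, here is a minimal sketch of doing that programmatically from inside the process, assuming Windows and that core 0 is the one you want to keep. On Windows 7 and later you can also get the same effect without code changes by launching with start /affinity 1 MyApp.exe (MyApp.exe being your executable).

#include <windows.h>
#include <cstdio>

int main()
{
    // Bit 0 set: only logical processor 0 may run this process's threads.
    DWORD_PTR mask = 0x1;
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask))
    {
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    // ... the rest of the application now runs on core 0 only ...
    return 0;
}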
Multithreading is a beast. Even "foolproof" sometimes bites you in the ...uhm... "back".
When you're using threads, you should do it right, because the OS has only a few rules it has to follow and can otherwise schedule your threads as it pleases. IIRC you can set priorities etc., and in some OSes maybe express something like a preference for certain CPUs.
Even when you're using only one additional thread, you still have two, because the main app runs as well.
Maybe have a look at the debugging tools that specialize in thread debugging. Maybe they can help you with that.
As a start, I'd add some mutexes or synchronized sections around the code that accesses the module.
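As a rough illustration, here is a minimal sketch of serializing access with a std::mutex; ReadModule and WriteModule are hypothetical stand-ins for whatever calls actually touch the chip module.

#include <mutex>

// Hypothetical stand-ins for the real chip-module driver calls.
static int  g_fakeRegister = 0;
static void WriteModule(int value) { g_fakeRegister = value; }
static int  ReadModule()           { return g_fakeRegister; }

std::mutex g_moduleMutex;   // one mutex guards every access to the module

void SafeWriteModule(int value)
{
    std::lock_guard<std::mutex> lock(g_moduleMutex); // released when lock leaves scope
    WriteModule(value);
}

int SafeReadModule()
{
    std::lock_guard<std::mutex> lock(g_moduleMutex);
    return ReadModule();
}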
Related
If possible, I wish to allocate a logical core to a single process exclusively.
I am aware that Winbase.h contains Get/SetProcessAffinityMask and SetThreadAffinityMask.
I can get all running processes when my specific process is started and set their affinities to other logical cores; however, I do not want to check all processes periodically, for instance in order to deal with processes launched after my process has started.
Furthermore, there will be other processes that also need to use specific logical cores exclusively (no other process shall waste resources on that logical core). For instance, my process shall run on core 15, but another shall run only on core 14.
Is there a better and more permanent way to allocate specific logical cores to specific processes than the Get/SetProcessAffinityMask scheme mentioned above?
Windows is not a real-time operating system. Windows is designed to do preemptive multitasking with isolated processes, like basically any other modern desktop OS. A process is not supposed to just lock out every other process from a particular core, therefore, there is no API to explicitly do so (at least I'm not aware of one). It's up to the OS scheduler to decide which threads get to run when and where. That's the whole idea. You can use thread priorities to tell the scheduler that certain threads should be given a chance to run over others. You can use affinity masks to tell the scheduler which cores a thread can be scheduled to. You can even set a preferred core for your thread. But you don't get to schedule threads yourself.
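To make that concrete, here is a minimal sketch of what you can legitimately do from user mode, assuming Windows and core 15 as the target (as in your example). It constrains and prioritizes your own thread, but it does not and cannot keep other processes off that core.

#include <windows.h>
#include <cstdio>

int main()
{
    const DWORD core = 15;                       // assumed target core
    DWORD_PTR mask = DWORD_PTR(1) << core;       // affinity mask with only bit 15 set

    // Restrict this thread to core 15.
    if (SetThreadAffinityMask(GetCurrentThread(), mask) == 0)
        std::printf("SetThreadAffinityMask failed: %lu\n", GetLastError());

    // Hint (not a guarantee) that the scheduler should prefer this core.
    if (SetThreadIdealProcessor(GetCurrentThread(), core) == (DWORD)-1)
        std::printf("SetThreadIdealProcessor failed: %lu\n", GetLastError());

    // Optionally raise priority so normal-priority threads rarely preempt it.
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);

    // ... work ...
    return 0;
}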
Note that there's apparently a way to get something a bit like what you're looking for to work on Linux (see this question for more). I don't think similar possibilities exist on Windows. Yes you could try to hack together some solution based on a background task that continuously monitors and adjusts the priorities and affinity masks of all the threads in the system to approximate the desired behavior (like the person in the question linked by Ben Voigt above has apparently tried, and failed to achieve). But why would you want to do that? It goes completely against the very nature of everything an OS like Windows is designed to do. To me, what you are asking sounds a lot like what you're really looking for is a completely different kind of operating system, or maybe even no operating system at all. Boot the CPU straight into your own image and you get to drive all the cores in whatever way you fancy…
I work in a lab and wrote a multithreaded computational program in C++11 using std::thread. Now I have an opportunity to run my program on a multi-CPU server.
Server:
Runs Ubuntu server
Has 40 Intel CPUs
I know nothing about multi-CPU programming. The first idea that comes to mind is to run 40 instances of the application and then glue their results together. That is possible, but I want to know more about my options.
If I compile my code on the server with its gcc compiler, will the resulting application take advantage of multiple CPUs?
If the answer to #1 is "it depends", how can I check it?
Thank you!
If your program is multithreaded, the OS should automatically take care of using the CPUs available.
Make sure to distribute the work across roughly as many threads as there are CPUs available, and make sure it is not just one thread doing the work while the other threads simply wait for it to finish.
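A minimal sketch of that idea, using only standard C++11 (the summation loop is just a stand-in for your real computation):

#include <thread>
#include <vector>
#include <atomic>
#include <cstddef>
#include <cstdio>

int main()
{
    unsigned n = std::thread::hardware_concurrency(); // e.g. 40 on the server
    if (n == 0) n = 1;                                // may report 0 if unknown

    const std::size_t total = 40000000;               // amount of (dummy) work
    std::atomic<long long> sum{0};

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t)
    {
        workers.emplace_back([=, &sum] {
            std::size_t begin = total * t / n;        // this thread's share of the range
            std::size_t end   = total * (t + 1) / n;
            long long local = 0;
            for (std::size_t i = begin; i < end; ++i)
                local += static_cast<long long>(i);   // stand-in for real work
            sum += local;                             // one atomic update per thread
        });
    }
    for (auto& w : workers)
        w.join();

    std::printf("threads: %u, sum: %lld\n", n, sum.load());
    return 0;
}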
Your question is not only about multithreading, but also about multiple CPUs.
Basically the operating system will automatically spread out the threads over the cores. You don't need to do anything.
Since you are using C++11, you have std::thread::get_id(), which you can call to identify the different threads, but you CANNOT identify the core you are running on. Use pthreads directly plus "CPU affinity" for that.
You can google "CPU affinity" for more details on how to get control over it, if you want that kind of precision. You can identify the core as well as choose the core. You can start with this: http://man7.org/linux/man-pages/man3/pthread_setaffinity_np.3.html
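As a rough illustration, here is a minimal sketch that pins a worker thread to core 0, assuming GNU/Linux with g++ and libstdc++, where std::thread::native_handle() yields a pthread_t (compile with -std=c++11 -pthread):

#include <pthread.h>
#include <sched.h>
#include <thread>
#include <cstdio>

int main()
{
    std::thread worker([] {
        // ... computational work for this thread ...
    });

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);   // allow core 0 only

    // native_handle() is a pthread_t with libstdc++ on Linux.
    int rc = pthread_setaffinity_np(worker.native_handle(), sizeof(set), &set);
    if (rc != 0)
        std::printf("pthread_setaffinity_np failed: %d\n", rc);

    worker.join();
    return 0;
}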
I will be writing an interface in C++ that controls a large CNC machine, and it will run on Windows. For safety reasons, I would like this process to run on its own core so that it can run in real time. If it shares a core with Windows, all of the Windows processes will be dealt with before my process is. Basically, how do I make sure either that my process is always at the front of the processing queue, or that one core is designated to run my process while the other core handles Windows? Also, is there any way of verifying that my program is running in real time, i.e. this core is processing my program while that core is not doing anything because we told our program not to run on it? Any input would be helpful.
There is no guarantee your process will be dealt with in real time. Windows does not do that. Since you mention safety, I will mention that if a lawsuit occurred you would be in deep trouble. Expert witnesses would testify that the design is inherently unsafe.
User control and display can be done in Windows but real time operations belong in dedicated hardware such as a PLC.
You can use SetThreadAffinityMask to restrict a thread to running on some subset of available processors/cores.
If you use SetThreadPriority and SetPriorityClass to set a thread to real-time priority, only other threads running at real-time priority can interrupt it, effectively forcing other threads to run on other cores (unless you raise two or more threads to real-time priority).
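A minimal sketch combining the two calls, assuming the control code runs in the main thread and you want it on core 1. Raising the process to the real-time class typically requires administrative rights; without them, Windows silently falls back to HIGH_PRIORITY_CLASS.

#include <windows.h>
#include <cstdio>

int main()
{
    // Raise the whole process to the real-time priority class.
    if (!SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS))
        std::printf("SetPriorityClass failed: %lu\n", GetLastError());

    // Raise this thread to the highest priority within that class.
    if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL))
        std::printf("SetThreadPriority failed: %lu\n", GetLastError());

    // Keep this thread on logical core 1 only (mask 0x2).
    if (SetThreadAffinityMask(GetCurrentThread(), 0x2) == 0)
        std::printf("SetThreadAffinityMask failed: %lu\n", GetLastError());

    // ... time-critical control loop would go here ...
    return 0;
}

Even then this is only best-effort scheduling, not hard real time, which is why the dedicated-hardware advice above still stands.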
As an alternative, Windows Embedded Compact is a realtime priority-based OS that can make soft realtime guarantees (far better than Windows Vista/7). It's costly, but on par with other commercial RTOSes.
I am using Visual Studio 2012. I have a module where I have to read a huge set of files from the hard disk after traversing their corresponding paths through an XML file. For this I am doing
std::vector<std::thread> m_ThreadList;
In a while loop I am pushing back a new thread into this vector, something like
m_ThreadList.push_back(std::thread(&MyClass::Readfile, &MyClassObject, filepath,std::ref(polygon)));
My C++11 multithreading knowledge is limited. The question that I have here is: how do I create a thread on a specific core? I know of parallel_for and parallel_for_each in VS2012, which make optimum use of the cores. But is there a way to do this using standard C++11?
As pointed out in other comments, you cannot create a thread "on a specific core", as C++ has no knowledge of such architectural details. Moreover, in the majority of cases, the operating system will be able to manage the distribution of threads among cores/processors well enough.
That said, there exist cases in which forcing a specific distribution of threads among cores can be beneficial for performance. As an example, by forcing a thread to execute on one specific core it might be possible to minimise data movement between different processor caches (which can be critical for performance in certain memory-bound scenarios).
If you want to go down this road, you will have to look into platform-specific routines. E.g., on GNU/Linux with POSIX threads you will want pthread_setaffinity_np(), on FreeBSD cpuset_setaffinity(), on Windows SetThreadAffinityMask(), etc.
I have some relevant code snippets here if you are interested:
http://gitorious.org/piranhapp0x/mainline/blobs/master/src/thread_management.cpp
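For the Windows/VS2012 case in the question, a minimal sketch might look like the following. It assumes MSVC, where std::thread::native_handle() yields the Win32 thread HANDLE, and ReadFileWorker is a hypothetical stand-in for MyClass::Readfile.

#include <windows.h>
#include <thread>
#include <vector>

void ReadFileWorker(int index)    // hypothetical stand-in for MyClass::Readfile
{
    (void)index;
    // ... read and parse one file ...
}

int main()
{
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;

    std::vector<std::thread> threadList;
    for (int i = 0; i < 4; ++i)
    {
        threadList.emplace_back(ReadFileWorker, i);
        // Restrict thread i to core (i % cores). Purely an illustration;
        // the OS would normally spread the threads across cores by itself.
        DWORD_PTR mask = DWORD_PTR(1) << (i % cores);
        SetThreadAffinityMask(static_cast<HANDLE>(threadList.back().native_handle()), mask);
    }

    for (auto& t : threadList)
        t.join();
    return 0;
}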
I'm fairly certain that core affinity isn't included in std::thread. The assumption is that the OS is perfectly capable of making best possible use of the cores available. In all but the most extreme of cases you're not to going to beat the OS's decision, so the assumption is a fair one.
If you do go down that route, then you have to add some decision-making to your code to take account of the machine architecture, and to ensure that your decision is better than the OS's on every machine you run on. That takes a lot of effort! For starters, you'll want to limit the number of threads to match the number of cores on the computer. And you don't have any knowledge of what else is going on in the machine; the OS does!
Which is why thread pools exist. By default they tend to have as many threads as there are cores, automatically set up by the language runtime. AFAIK C++11 doesn't have one of those. So the one good thing you can do to get optimum performance is to find out how many cores there are and limit the number of threads you have to that number. Otherwise it's probably best just to trust the OS.
Joachim Pileborg's comment is well worth paying attention to, unless the work done by each thread outweighs the I/O overhead.
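To illustrate the "limit the thread count to the core count" advice, here is a minimal sketch in which a capped set of worker threads pull file paths from a shared list; ProcessFile and the file names are hypothetical placeholders.

#include <thread>
#include <vector>
#include <string>
#include <atomic>
#include <cstdio>

static void ProcessFile(const std::string& path)
{
    std::printf("processing %s\n", path.c_str()); // placeholder work
}

int main()
{
    std::vector<std::string> filePaths = { "a.xml", "b.xml", "c.xml", "d.xml" };

    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 2;               // fall back if unknown

    std::atomic<std::size_t> next{0};            // index of the next unprocessed file
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < workers; ++i)
    {
        pool.emplace_back([&] {
            // Each worker grabs the next unclaimed index until the list is exhausted.
            for (std::size_t idx = next++; idx < filePaths.size(); idx = next++)
                ProcessFile(filePaths[idx]);
        });
    }
    for (auto& t : pool)
        t.join();
    return 0;
}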
As a quick overview of threading in the context of dispatching threads to cores:
Most modern OSes make use of kernel-level threads, or a hybrid. With kernel-level threading, the OS "sees" all the threads in each process; in contrast to user-level threads (as employed in Java), where the OS sees a single process and has no knowledge of its threading. Because, with kernel-level threading, the OS can recognise the separate threads of a process and manages their dispatch onto a given core, there is the potential for true parallelism, where multiple threads of the same process run on different cores. You, as the programmer, have no control over this, however, when employing std::thread; the OS decides. With user-level threading, all the management of threads is done at the user level; in Java, a library manages the "dispatch". In the case of hybrid threading, kernel threading is used, where each kernel thread is actually a set of user-level threads.
I'm writing portable code for multicore machines, and I want kernel-level threads so the threads can use more than one CPU. After reading the QThread documentation in Qt Assistant I still haven't found any hints.
On Windows XP the multithreading example (Mandelbrot) from the Qt SDK used only one core, so I guessed that on XP only user-level threads are possible. I haven't tested that on Linux or OS X so far, since the full SDK isn't installed there.
EDIT: The example given in the SDK is stupid - it only uses one thread for the calculation, so the binding to only one core was misleading. Building a sample myself, I could use all cores, so on XP with MinGW/GCC Qt uses kernel-level threads.
So, what kind of threads are used by QThread? Is it possible to specify what kind of thread to use?
Multiple processes are also an option in combination with shared memory.
Edit
http://doc.qt.io/qt-4.8/thread-basics.html gives a nice introduction.
I don't know about Windows, but on Unix Qt uses pthreads. Qt doesn't expose an API for CPU affinity because it needs to be platform- and hardware-independent. The distribution of QThreads across CPUs is left to the OS scheduler; you can't hint it via some Qt API.
From QThread Class Reference:
A QThread represents a separate thread of control within the program; it shares data with all the other threads within the process but executes independently in the way that a separate program does on a multitasking operating system.
In your terms, it's a "kernel" thread.
Also, the conclusion that "only user-level threads are possible" on Windows XP is surely incorrect.
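If you want to convince yourself, here is a minimal sketch, assuming Qt 4.8 or later, that starts one QThread per reported core. Since each QThread maps to a native (kernel) thread, Task Manager or top should show all cores busy while it runs.

#include <QCoreApplication>
#include <QThread>
#include <QVector>

class Worker : public QThread
{
protected:
    void run()                       // executed in the new (native) thread
    {
        volatile double x = 0.0;
        for (long i = 0; i < 100000000L; ++i)
            x += 0.5;                // CPU-bound busy work
    }
};

int main(int argc, char* argv[])
{
    QCoreApplication app(argc, argv);

    int n = QThread::idealThreadCount();   // usually the number of logical cores
    QVector<Worker*> workers;
    for (int i = 0; i < n; ++i)
    {
        workers.append(new Worker);
        workers.last()->start();           // start() creates a native OS thread
    }
    for (int i = 0; i < n; ++i)
    {
        workers[i]->wait();
        delete workers[i];
    }
    return 0;
}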