Can a fiber created in thread A switch to another fiber created in thread B? To make the question more specific: some operating systems have fibers implemented natively (Windows fibers),
while others need to implement them in user code (using setjmp/longjmp on Linux, etc.).
Libcoro, for example, wraps this all up in a single API (on Windows it is just a wrapper around native fibers, on Linux it implements the switching itself, etc.).
So, if it's possible to migrate fibers between threads, can you give me an example usage on Windows (or Linux) in C/C++?
I found something about fiber migration in the Boost library documentation, but it isn't specific enough about its implementation and platform dependence. I still want to understand how to do it myself, using only Windows fibers for example (or Libcoro on Linux).
If it's not possible in a general way, why not?
I understand that fibers are meant to be used as lightweight threads for cooperative multitasking on a single thread; they have cheap context switching compared to regular threads, and they simplify programming.
An example usage is a system with several threads, each hosting several fibers that carry out some kind of work hierarchy on their parent thread (never leaving the parent thread).
Even though it's not the intended use, I still want to learn how to do it if it's possible in a general way, because I think I can optimize the workload on my job system by migrating fibers between threads.
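For reference, here is roughly the kind of cross-thread switch I have in mind, sketched against the raw Windows fiber API (untested; the globals and the thread-join ordering are just for illustration, and I'm assuming a suspended fiber may be resumed by a different thread as long as no two threads run it at the same time):

    #include <windows.h>
    #include <cstdio>

    // Hypothetical globals for this sketch.
    LPVOID g_workFiber   = nullptr;  // the fiber we want to migrate
    LPVOID g_returnFiber = nullptr;  // scheduler fiber to switch back to

    VOID CALLBACK WorkFiberProc(LPVOID)
    {
        std::printf("work fiber: first run on thread %lu\n", GetCurrentThreadId());
        SwitchToFiber(g_returnFiber);   // yield back to thread A's scheduler fiber
        // Resumed later - possibly on a different thread.
        std::printf("work fiber: resumed on thread %lu\n", GetCurrentThreadId());
        SwitchToFiber(g_returnFiber);   // yield back to thread B's scheduler fiber
    }

    DWORD WINAPI ThreadA(LPVOID)
    {
        g_returnFiber = ConvertThreadToFiber(nullptr);    // thread A becomes a fiber
        g_workFiber   = CreateFiber(0, WorkFiberProc, nullptr);
        SwitchToFiber(g_workFiber);                       // run the work fiber on thread A
        return 0;                                         // work fiber is now suspended
    }

    DWORD WINAPI ThreadB(LPVOID)
    {
        g_returnFiber = ConvertThreadToFiber(nullptr);    // thread B becomes a fiber
        SwitchToFiber(g_workFiber);                       // resume the SAME fiber on thread B
        return 0;
    }

    int main()
    {
        HANDLE a = CreateThread(nullptr, 0, ThreadA, nullptr, 0, nullptr);
        WaitForSingleObject(a, INFINITE);  // ensure the fiber is suspended before B touches it
        HANDLE b = CreateThread(nullptr, 0, ThreadB, nullptr, 0, nullptr);
        WaitForSingleObject(b, INFINITE);
        DeleteFiber(g_workFiber);
        CloseHandle(a);
        CloseHandle(b);
    }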
The mentioned boost.fiber uses boost.context (callcc/continuation) to implement context switching.
Up to boost-1.64, callcc was implemented in assembler only; boost-1.65 lets you choose between assembler, Windows Fibers (on Windows) or ucontext (on POSIX if available; an API deprecated by POSIX).
The assembler implementation is faster than the other two (two orders of magnitude compared to ucontext).
boost.fiber uses callcc to implement lightweight threads/fibers - the library provides fiber schedulers that allow migrating fibers between threads.
For instance, one of the provided schedulers steals fibers from other threads if its own run-queue runs out of work (fibers that are ready, i.e. that can be resumed).
(So you can choose Windows Fibers and still have them migrated between threads.)
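A minimal sketch of that setup, modeled loosely on the work_stealing example in the boost.fiber documentation (the fiber and thread counts are arbitrary):

    #include <boost/fiber/all.hpp>
    #include <boost/fiber/algo/work_stealing.hpp>
    #include <atomic>
    #include <iostream>
    #include <sstream>
    #include <thread>
    #include <vector>

    static std::atomic<int> pending{0};
    static boost::fibers::mutex done_mtx;
    static boost::fibers::condition_variable_any done_cv;

    // Every participating thread installs the work_stealing scheduler and then
    // blocks in a fiber-aware wait, so its scheduler can steal and run ready
    // fibers from the other threads' run-queues.
    void worker(std::uint32_t thread_count) {
        boost::fibers::use_scheduling_algorithm<boost::fibers::algo::work_stealing>(thread_count);
        std::unique_lock<boost::fibers::mutex> lk(done_mtx);
        done_cv.wait(lk, [] { return pending.load() == 0; });
    }

    int main() {
        const std::uint32_t thread_count = 4;   // main thread + 3 workers
        boost::fibers::use_scheduling_algorithm<boost::fibers::algo::work_stealing>(thread_count);

        const int fiber_count = 32;
        pending = fiber_count;
        for (int i = 0; i < fiber_count; ++i) {
            boost::fibers::fiber([i] {
                std::ostringstream os;          // a fiber may end up on any worker thread
                os << "fiber " << i << " ran on thread " << std::this_thread::get_id() << '\n';
                std::cout << os.str();
                if (--pending == 0) {
                    std::unique_lock<boost::fibers::mutex> lk(done_mtx);
                    done_cv.notify_all();       // wake the threads blocked below
                }
            }).detach();
        }

        std::vector<std::thread> workers;
        for (std::uint32_t t = 1; t < thread_count; ++t)
            workers.emplace_back(worker, thread_count);

        {   // main thread also participates until all fibers have finished
            std::unique_lock<boost::fibers::mutex> lk(done_mtx);
            done_cv.wait(lk, [] { return pending.load() == 0; });
        }
        for (auto& t : workers) t.join();
    }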
As far as I know,
In computer science, a thread of execution is the smallest sequence of
programmed instructions that can be managed independently by a
scheduler, which is typically a part of the operating system. The
implementation of threads and processes differs between operating
systems, but in most cases a thread is a component of a process.
Multiple threads can exist within one process, executing concurrently
and sharing resources such as memory, while different processes do not
share these resources. In particular, the threads of a process share
its executable code and the values of its variables at any given time.[1]
When I decided to write a multithreaded program in C++, I was faced with many choices, such as Boost threads, POSIX threads and std threads.
A simple search on the internet shows a performance measurement taken from the boost.org website here.
My question is a bit more basic and performance related as well.
Basically, why do they differ in performance? Why, for example, is thread type A faster than the others? They are written by highly professional programmers and run on powerful OSs, yet they offer different performance.
What makes them faster or slower?
The Boost documentation refers to the Fiber library, whose fibers are not actually threads. Creating what the library calls a fiber (essentially a user-space thread or coroutine, sometimes also referred to as a green thread) does not create a separate schedulable entity on the kernel side, so it can be much more efficient at creation time. Other things could be less efficient because I/O operations necessarily become much more involved under this model (because a fiber doing I/O should not block the operating system thread it runs on if other fibers could do work there).
Note that some of the coroutine implementations out there are well out of the conceptual limits of the de-facto GNU/Linux ABI and other POSIX-like operating systems, so they should be considered ugly hacks at best.
I have seen it said in some posts that to use multiple processor cores one should use the Boost thread library (i.e. multi-threading). Usually threads are not visible to the operating system. So how can we be sure that multi-threading will make use of multiple cores? Is there a difference between Java threads and Boost threads?
The operating system is also called a "supervisor" because it has access to everything. Since it is responsible for managing preemptive threads, it knows exactly how many a process has, and can inspect what they are doing at any time.
Java may add a layer of indirection (green threads) to make many threads look like one, depending on JVM and configuration. Boost does not do this, but instead only wraps the POSIX interface which usually communicates directly with the OS kernel.
Massively multithreaded applications may benefit from coalescing threads, so that the number of ready-to-run threads matches the number of logical CPU cores. Reducing everything to one thread may be going too far, though, and @Voo says that green threads are only a legacy technology. A good JVM should support true multithreading; check your configuration options. On the C++ side, there are libraries like Intel TBB and Apple GCD to help manage parallelism.
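For the "as many ready threads as cores" idea, a minimal C++ sketch (the fallback value and the empty worker body are placeholders):

    #include <iostream>
    #include <thread>
    #include <vector>

    int main() {
        // hardware_concurrency() may return 0 if the value is not computable.
        unsigned n = std::thread::hardware_concurrency();
        if (n == 0) n = 4;                       // arbitrary fallback
        std::cout << "spawning " << n << " workers\n";

        std::vector<std::thread> pool;
        for (unsigned i = 0; i < n; ++i)
            pool.emplace_back([] {
                // ... pull tasks from a shared work queue here ...
            });
        for (auto& t : pool)
            t.join();
    }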
I want to build a portable and efficient server in C++; it will have lots of clients trying to connect at the same time, so it must be able to handle requests in parallel.
I have been trying to find documentation, guides... etc. for multithreading. I have found a lot about POSIX Pthread, but almost nothing for GNU Pth (apart from the official manual in gnu.org).
So, can anyone explain to me the difference between POSIX Pthreads and GNU Pth? Please, I want the response not to be a copy of Wikipedia's contents (keep in mind that I'm an absolute newbie to multithreading). I want my server to be portable and efficient across all *nix-based systems, staying away from heavy fork()s.
Thanks for your help.
PS: I think it's better to ask this here: what about Windows? Are Pthreads or Pth an option there? If not, what is the API for that operating system?
Use Pthreads; it's much more widely used, so there is far more information and support available for it. I've never met anyone who actually uses GNU Pth. Or better yet, if you are using C++11 use std::thread, and if not, use boost::thread.
So, can anyone explain me the difference between POSIX Pthread and GNU Pth?
Pthreads is a cross-platform standard for pre-emptible multithreading, meaning (usually) the OS kernel manages the threads and the OS scheduler decides when each thread gets to run (if you have a single core only one thread can run at a time, if you have multiple cores multiple threads can run at a time). The OS scheduler could pause any thread at (almost) any time and let another thread run, so each thread gets a limited "time slice" and then other threads get to run.
GNU Pth is a non-preemptible user-space threading library, meaning the threads and which ones run at which time are decided in user-space not by the kernel. Some people say programs using non-preemptible threading libraries are easier to understand, because your thread won't get paused at arbitrary times for another thread to run.
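To make the preemptive (pthreads) side concrete, a minimal example - error handling omitted, the worker function is purely illustrative:

    #include <pthread.h>
    #include <cstdio>

    // Each pthread is a kernel-scheduled thread: the OS may preempt it at
    // (almost) any time and may run it on any available core.
    void* work(void* arg) {
        int id = *static_cast<int*>(arg);
        std::printf("thread %d doing work\n", id);
        return nullptr;
    }

    int main() {
        pthread_t threads[4];
        int ids[4];
        for (int i = 0; i < 4; ++i) {
            ids[i] = i;
            pthread_create(&threads[i], nullptr, work, &ids[i]);
        }
        for (int i = 0; i < 4; ++i)
            pthread_join(threads[i], nullptr);
        return 0;
    }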
I want my server to be portable and efficient between all *nix-based systems, keeping away of using heavy fork()s.
fork is not heavy on UNIX.
what about Windows? Are Pthreads or Pth an option there? If not, what is the API for that operating system?
There are pthreads APIs for Windows, but they're not native to the Windows OS. I don't know if GNU Pth works on Windows - I doubt it, unless you use Cygwin. Windows has its own Win32 thread model.
Using std::thread or boost::thread is portable to POSIX platforms and Windows, and makes certain parts of the API easier to use (specifically, locking and unlocking mutexes can be easily done in an exception safe way and condition variables are easier to use.)
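For example, the exception-safe locking and condition variables mentioned above look roughly like this with the standard library (a generic producer/consumer sketch, not tied to any particular code):

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>

    std::mutex mtx;
    std::condition_variable cv;
    std::queue<int> items;
    bool done = false;

    void producer() {
        for (int i = 0; i < 5; ++i) {
            {
                std::lock_guard<std::mutex> lock(mtx);  // unlocks automatically, even on exceptions
                items.push(i);
            }
            cv.notify_one();
        }
        {
            std::lock_guard<std::mutex> lock(mtx);
            done = true;
        }
        cv.notify_one();
    }

    void consumer() {
        std::unique_lock<std::mutex> lock(mtx);
        for (;;) {
            cv.wait(lock, [] { return !items.empty() || done; });
            while (!items.empty()) {
                std::cout << "got " << items.front() << '\n';
                items.pop();
            }
            if (done) break;
        }
    }

    int main() {
        std::thread p(producer), c(consumer);
        p.join();
        c.join();
    }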
Gnu PTH is for a very limited use case: you want to use a multi-threaded implementation paradigm but you don't want to use multiple CPUs or cores and you don't want to rely on any OS or kernel-level support. Since almost all general-purpose CPUs now have multiple cores, this use case is increasingly irrelevant.
Windows has a separate threading model from POSIX; if you want your application to be cross-platform it is best to use a cross-platform threading library such as boost::thread.
I think GNU Pth is meant for C in the first place. You can use it with C++ too, but C++ has its own threading anyway.
There are quite a few applications using Pth, such as low-level disc-burning tools (and hence GUI tools like K3b and Brasero depend on Pth); GnuPG also uses Pth, as do the Arch Linux package manager and some multimedia software.
On Windows it's always a bit complicated. Microsoft never got over the fact that C is the programming language from/for UNIX systems, and so suffers from NIH syndrome (Not Invented Here).
So they do a lot of things differently without gaining any advantage.
If you are writing an application that should run everywhere and is not low-level, use Qt with its QThread and QThreadPool (a small sketch follows below):
It's 100% the same on all operating systems.
You need much less code.
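A rough sketch of that with QThreadPool (the Task class is made up for illustration; QThreadPool's global instance defaults to one thread per logical core):

    #include <QCoreApplication>
    #include <QDebug>
    #include <QRunnable>
    #include <QThread>
    #include <QThreadPool>

    // A hypothetical work item; QThreadPool deletes it after run() returns
    // because autoDelete() is true by default.
    class Task : public QRunnable {
    public:
        explicit Task(int id) : id_(id) {}
        void run() override {
            qDebug() << "task" << id_ << "on thread" << QThread::currentThread();
        }
    private:
        int id_;
    };

    int main(int argc, char* argv[]) {
        QCoreApplication app(argc, argv);
        QThreadPool* pool = QThreadPool::globalInstance();  // sized to the core count by default
        for (int i = 0; i < 8; ++i)
            pool->start(new Task(i));
        pool->waitForDone();
        return 0;
    }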
If you are writing a "low-level" application, I recommend splitting it into backends and frontends, writing a separate backend for each OS, and using whichever library causes the fewest problems.
I know there are some threading libraries for C++, like Pthreads, Boost, etc. out there, but how do they work? There must be an implementation of the logic somewhere.
Let's say that I would like to write my own threading mechanism in C++, not using any library, how would I start? What should I have in mind when writing it?
You'd directly call the underlying API calls in the operating system. For example, CreateThread. Naturally, this is cumbersome and platform-specific, which is why we like to use portable C++ threading libraries...
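For example, on Windows the raw call looks roughly like this (a minimal sketch; the thread function is purely illustrative):

    #include <windows.h>
    #include <cstdio>

    // Thread entry point required by CreateThread: DWORD WINAPI fn(LPVOID).
    DWORD WINAPI ThreadProc(LPVOID param) {
        std::printf("hello from thread %lu\n", GetCurrentThreadId());
        return 0;
    }

    int main() {
        HANDLE h = CreateThread(nullptr,      // default security attributes
                                0,            // default stack size
                                ThreadProc,   // entry point
                                nullptr,      // argument passed to ThreadProc
                                0,            // run immediately
                                nullptr);     // thread id not needed
        if (h != nullptr) {
            WaitForSingleObject(h, INFINITE); // roughly the equivalent of join()
            CloseHandle(h);
        }
    }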
In C++98/03, there is no notion of a "thread", so the question cannot be answered within the language. In C++11, the answer is to use <thread>.
On the implementation side, threading is an operating system feature. The operating system already has to schedule multiple processes (i.e. separate programs), and a multi-threading OS adds to that the ability to schedule multiple threads within one process. At the very heart, the OS may or may not take advantage of having physically more than one CPU (though that also applies to simple multi-processing; and conversely you can schedule multiple threads on a single CPU). At the heart of the programming, you will need hardware support for synchronisation primitives like atomic reads/writes and atomic compare-and-swap to implement correct memory access. (This is not needed for multi-processing alone, because separate processes have distinct memory; although it will be needed by the OS itself if there are multiple physical CPUs in use.)
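As an illustration of those primitives, here is a tiny spinlock built on atomic compare-and-swap (a sketch only - real code should normally use std::mutex):

    #include <atomic>
    #include <thread>
    #include <vector>

    // A minimal test-and-set style spinlock built on atomic compare_exchange.
    class SpinLock {
        std::atomic<bool> locked{false};
    public:
        void lock() {
            bool expected = false;
            // Loop until we atomically flip false -> true.
            while (!locked.compare_exchange_weak(expected, true,
                                                 std::memory_order_acquire)) {
                expected = false;    // compare_exchange wrote the current value here
            }
        }
        void unlock() {
            locked.store(false, std::memory_order_release);
        }
    };

    int main() {
        SpinLock lock;
        long counter = 0;
        std::vector<std::thread> threads;
        for (int t = 0; t < 4; ++t)
            threads.emplace_back([&] {
                for (int i = 0; i < 100000; ++i) {
                    lock.lock();
                    ++counter;       // protected by the spinlock
                    lock.unlock();
                }
            });
        for (auto& t : threads) t.join();
        return counter == 400000 ? 0 : 1;  // all increments should be visible
    }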
Well, you need something which is able to run several threads.
If you are working on developing an operating system kernel on the bare metal, I think that current multi-core processors have only one core working after their power-on reset. Even the BIOS on most PCs probably keeps only one core working (and the other cores idle). So you'll need to write (assembly, non-portable) code to start the other cores.
And (as James reminded you), most of the time you are using some operating system kernel. For instance, on Linux (I don't know about Windows), threads are known by the kernel (because the tasks it is scheduling are threads) and they need to be initiated by the Linux clone(2) system call.
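A rough sketch of a raw clone(2) call (deliberately simplified flags; a real threading library like NPTL also passes CLONE_THREAD, CLONE_SIGHAND and more, and manages stacks and TLS properly):

    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE 1                  // needed for clone() with glibc
    #endif
    #include <sched.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <csignal>
    #include <cstdio>
    #include <cstdlib>

    // The child runs in the same address space as the parent (CLONE_VM),
    // which is the essential property of a thread.
    static int shared_value = 0;

    static int child_fn(void*) {
        shared_value = 42;                 // visible to the parent because memory is shared
        std::printf("child: pid=%d\n", getpid());
        return 0;
    }

    int main() {
        const size_t stack_size = 1024 * 1024;
        char* stack = static_cast<char*>(std::malloc(stack_size));
        if (!stack) return 1;

        // The stack grows downwards on common architectures, so pass the top.
        pid_t pid = clone(child_fn, stack + stack_size,
                          CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, nullptr);
        if (pid == -1) return 1;

        waitpid(pid, nullptr, 0);          // wait for the "thread" to finish
        std::printf("parent: shared_value=%d\n", shared_value);
        std::free(stack);
        return 0;
    }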
Often, kernel threads are quite heavy, and the system has a library (NPTL for Linux Posix threads) which may use fewer kernel threads than user threads (actually Linux NPTL is a 1:1 mapping between kernel and user threads, but on some other systems, like probably Solaris, things are different).
You can't write your own threading mechanism, unless you mean pseudo-threads like co-routines and not actual concurrently executing threads. This is because the fundamental thread mechanism is defined by the kernel and you can't change it nor implement your own. Any library you write must fall back, eventually, to the operating system.
I'm writing portable code for multicore machines and I want kernel-level threads so the threads can use more than one CPU. After reading the QThread documentation in Qt Assistant, I still haven't found any hints.
On Windows XP the multithreading example (Mandelbrot) from the Qt SDK used only one core. So I guessed that on XP only user-level threads are possible. I haven't tested this on Linux or OS X so far since the full SDK isn't installed there.
EDIT: The example given in the SDK is misleading - it only uses one thread for the calculation, so its binding to only one core proved nothing. Building a sample myself I could use all cores, so on XP with MinGW/GCC Qt uses kernel-level threads.
So, what kind of threads are used by QThread? Is it possible to specify what kind of thread to use?
Multiple processes are also an option in combination with shared memory.
Edit
http://doc.qt.io/qt-4.8/thread-basics.html gives a nice introduction.
I don't know about Windows, but on Unix it uses pthreads. Qt doesn't expose an API for CPU affinity because it needs to be platform- and hardware-independent. The QThread distribution across CPUs is left to the OS scheduler; you can't hint it via some Qt API.
From QThread Class Reference:
A QThread represents a separate thread of control within the program; it shares data with all the other threads within the process but executes independently in the way that a separate program does on a multitasking operating system.
In your terms, it's a "kernel" thread.
Also, the conclusion that "only user-level threads are possible" on Windows XP is surely incorrect.