Are there any thread-specific clocks in the C++ world?

I used to measure the time consumption of different threads with CLOCK_THREAD_CPUTIME_ID and clock_gettime.
But clock_gettime is a POSIX standard, so it won't work on other platforms like Windows when the code needs to be cross-platform.
I checked the C++ standard library and found steady_clock, system_clock and high_resolution_clock so far; none of these can clock a specific thread.
Did I miss anything? If so, what is it? If not, any advice?

You will have to abstract out the implementation of your thread timer. Under Windows you can use the GetThreadTimes function; under Linux (or other POSIX systems) use CLOCK_THREAD_CPUTIME_ID. I suppose that if you compile with MinGW under Windows, CLOCK_THREAD_CPUTIME_ID will also be available.
Also, for portability take a look at Boost's thread_clock.
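For illustration, a minimal sketch of that abstraction, assuming whole-nanosecond thread CPU time is enough; the function name threadCpuTimeNs is made up for this example and error handling is omitted:

#include <cstdint>

#ifdef _WIN32
#include <windows.h>

// Thread CPU time (kernel + user) in nanoseconds, Windows version.
std::int64_t threadCpuTimeNs() {
    FILETIME creation, exit, kernel, user;
    GetThreadTimes(GetCurrentThread(), &creation, &exit, &kernel, &user);
    ULARGE_INTEGER k, u;
    k.LowPart = kernel.dwLowDateTime;  k.HighPart = kernel.dwHighDateTime;
    u.LowPart = user.dwLowDateTime;    u.HighPart = user.dwHighDateTime;
    return static_cast<std::int64_t>(k.QuadPart + u.QuadPart) * 100; // FILETIME ticks are 100 ns
}
#else
#include <time.h>

// Thread CPU time in nanoseconds, POSIX version.
std::int64_t threadCpuTimeNs() {
    timespec ts;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
    return static_cast<std::int64_t>(ts.tv_sec) * 1000000000 + ts.tv_nsec;
}
#endif

If Boost is acceptable, boost::chrono::thread_clock::now() wraps essentially these same two calls behind a standard clock interface.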

The only "clock" to measure CPU time in the C++ standard is std::clock, which doesn't measure CPU time in Windows so it's still not portable, and anyway it's per process and not per thread.
If you want to measure thread CPU time you have to resort to non-portable platform-specific functions.

Related

Does std::chrono::system_clock::now() use mutex inside? [duplicate]

First, I'm assuming that calling any function of std::chrono is guaranteed to be thread-safe (no undefined behaviour or race conditions or anything dangerous if called from different threads). Am I correct?
Next, for example on Windows there is a well-known problem related to multi-core processors that forces some implementations of time-related systems to read the time from one specific core.
What I want to know is:
using std::chrono, is there any guarantee in the standard that this kind of problem shouldn't appear?
or is it implementation-defined?
or is there an explicit absence of guarantee, implying that on Windows you'd better always get the time from the same core?
Yes, calls to some_clock::now() from different threads should be thread safe.
As regards the specific issue you mention with QueryPerformanceCounter, it is just that the Windows API exposes a hardware issue on some platforms. Other OSes may or may not expose this hardware issue to user code.
As far as the C++ standard is concerned, if the clock claims to be a "steady clock" then it must never go backwards, so if there are two reads on the same thread, the second must never return a value earlier than the first, even if the OS switches the thread to a different processor.
For non-steady clocks (such as std::chrono::system_clock on many systems), there is no guarantee about this, since an external agent could change the clock arbitrarily anyway.
My implementation of the C++11 thread library (including the std::chrono stuff) takes care to ensure that the steady clocks are indeed steady. This imposes a cost over and above a raw call to QueryPerformanceCounter to ensure the synchronization, but it no longer pins the thread to CPU 0 (which it used to do). I would expect other implementations to have workarounds for this issue too.
The requirements for a steady clock are in 20.11.3 [time.clock.req] (C++11 standard)
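As a small illustration of the steady-clock guarantee described above (the static_assert and assert are only there to make the guarantees explicit):

#include <cassert>
#include <chrono>
#include <iostream>

int main() {
    // A steady clock advertises itself via is_steady.
    static_assert(std::chrono::steady_clock::is_steady, "steady_clock must be steady");

    auto t1 = std::chrono::steady_clock::now();
    auto t2 = std::chrono::steady_clock::now();
    assert(t2 >= t1);   // guaranteed: a steady clock never goes backwards, even across core switches

    // system_clock gives no such guarantee on many systems.
    std::cout << std::boolalpha
              << "system_clock is steady: " << std::chrono::system_clock::is_steady << '\n';
}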
I honestly believe that this question is fully answered by the following statement: there is no guarantee that an implementation will not have platform-specific bugs. It's all supposed to work perfectly, but sometimes, for one reason or another, it doesn't. Nobody can promise you that it will do what you want, but it is supposed to work.

GNU pth vs. pthread

I want to build a portable and efficient server in C++; it will have lots of clients trying to connect at the same time, so it must be able to handle requests in parallel.
I have been trying to find documentation, guides, etc. for multithreading. I have found a lot about POSIX Pthreads, but almost nothing about GNU Pth (apart from the official manual on gnu.org).
So, can anyone explain to me the difference between POSIX Pthreads and GNU Pth? Please don't make the response a copy of Wikipedia's contents (keep in mind that I'm an absolute newbie to multithreading). I want my server to be portable and efficient across all *nix-based systems, avoiding heavy fork()s.
Thanks for your help.
PS: I think it's better to ask this here: what about Windows? Are Pthreads or Pth an option there? If not, what is the API for that operating system?
Use Pthreads; it's much more widely used, so there is far more information and support available for it. I've never met anyone who actually uses GNU Pth. Or, better yet, if you are using C++11 use std::thread, and if not use boost::thread.
So, can anyone explain to me the difference between POSIX Pthreads and GNU Pth?
Pthreads is a cross-platform standard for pre-emptible multithreading, meaning (usually) the OS kernel manages the threads and the OS scheduler decides when each thread gets to run (if you have a single core only one thread can run at a time, if you have multiple cores multiple threads can run at a time). The OS scheduler could pause any thread at (almost) any time and let another thread run, so each thread gets a limited "time slice" and then other threads get to run.
GNU Pth is a non-preemptible user-space threading library, meaning the threads and which ones run at which time are decided in user-space not by the kernel. Some people say programs using non-preemptible threading libraries are easier to understand, because your thread won't get paused at arbitrary times for another thread to run.
I want my server to be portable and efficient across all *nix-based systems, avoiding heavy fork()s.
fork is not heavy on UNIX.
what about Windows? Are Pthreads or Pth an option there? If not, what is the API for that operating system?
There are pthreads APIs for Windows, but they're not native to the Windows OS. I don't know if GNU Pth works on Windows - I doubt it, unless you use Cygwin. Windows has its own Win32 thread model.
Using std::thread or boost::thread is portable to POSIX platforms and Windows, and makes certain parts of the API easier to use (specifically, locking and unlocking mutexes can be easily done in an exception safe way and condition variables are easier to use.)
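As a minimal sketch of the exception-safe locking and condition-variable usage mentioned above (C++11; the queue of ints is purely illustrative):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::queue<int> q;

void producer() {
    for (int i = 0; i < 3; ++i) {
        {
            std::lock_guard<std::mutex> lock(m);   // unlocked automatically, even if an exception is thrown
            q.push(i);
        }
        cv.notify_one();
    }
}

void consumer() {
    for (int received = 0; received < 3; ++received) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !q.empty(); });  // handles spurious wakeups
        std::cout << "got " << q.front() << '\n';
        q.pop();
    }
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}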
Gnu PTH is for a very limited use case: you want to use a multi-threaded implementation paradigm but you don't want to use multiple CPUs or cores and you don't want to rely on any OS or kernel-level support. Since almost all general-purpose CPUs now have multiple cores, this use case is increasingly irrelevant.
Windows has a separate threading model from POSIX; if you want your application to be cross-platform it is best to use a cross-platform threading library such as boost::thread.
I think GNU Pth is meant for C in the first place. You can use it from C++ too, but C++ has its own threading anyway.
Quite a few applications use Pth, such as low-level disc-burning tools (so GUI tools like K3b and Brasero depend on Pth); GnuPG also uses Pth, as do Arch Linux's package management and some multimedia software.
On Windows it's always a bit complicated. Microsoft never got over the fact that C is the programming language from/for UNIX systems, and so suffers from NIH syndrome (Not Invented Here).
So they do a lot of things differently without any advantage, just to be different.
If you are writing an application which should run everywhere and is not low-level, use Qt with its QThread and QThreadPool (see the sketch below):
It's 100% the same on all operating systems.
You need much less code.
If you are writing a "low-level" application, I recommend splitting it into a frontend and backends, writing a separate backend for each OS, and using whichever library causes the fewest problems.
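A minimal sketch of the QThreadPool approach mentioned above, assuming Qt 5 or later; the HelloTask class is made up for this example:

#include <QRunnable>
#include <QThreadPool>
#include <QDebug>

class HelloTask : public QRunnable {
    void run() override {
        qDebug() << "hello from a pool thread";
    }
};

int main() {
    // The pool takes ownership of the task (autoDelete() is true by default).
    QThreadPool::globalInstance()->start(new HelloTask);
    QThreadPool::globalInstance()->waitForDone();
}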

How to make a threading mechanism in C++?

I know there are some threading libraries for C++, like Pthread, Boost etc out there, but how are they working? There must be an implementation of the logic somewhere.
Let's say that I would like to write my own threading mechanism in C++, not using any library, how would I start? What should I have in mind when writing it?
You'd directly call the underlying API calls in the operating system. For example, CreateThread. Naturally, this is cumbersome and platform-specific, which is why we like to use portable C++ threading libraries...
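For example, the raw Win32 call looks roughly like this (Windows only; error handling omitted, and the worker function name is made up):

#include <windows.h>
#include <cstdio>

// The thread entry point must have this exact signature for CreateThread.
DWORD WINAPI worker(LPVOID /*param*/) {
    std::printf("hello from thread %lu\n", GetCurrentThreadId());
    return 0;
}

int main() {
    HANDLE h = CreateThread(nullptr, 0, worker, nullptr, 0, nullptr);
    if (h != nullptr) {
        WaitForSingleObject(h, INFINITE);  // the moral equivalent of join()
        CloseHandle(h);
    }
}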
In C++98/03, there is no notion of a "thread", so the question cannot be answered within the language. In C++11, the answer is to use <thread>.
On the implementation side, threading is an operating system feature. The operating system already has to schedule multiple processes (i.e. separate programs), and a multi-threading OS adds to that the ability to schedule multiple threads within one process. At the very heart, the OS may or may not take advantage of having physically more than one CPU (though that also applies to simple multi-processing; and conversely you can schedule multiple threads on a single CPU). At the heart of the programming, you will need hardware support for synchronisation primitives like atomic reads/writes and atomic compare-and-swap to implement correct memory access. (This is not needed for multi-processing alone, because separate processes have distinct memory; although it will be needed by the OS itself if there are multiple physical CPUs in use.)
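To make the compare-and-swap point concrete, here is a deliberately naive, illustrative spinlock built on that primitive using C++11 std::atomic (real code should normally just use std::mutex):

#include <atomic>
#include <iostream>
#include <thread>

class SpinLock {
    std::atomic<bool> locked{false};
public:
    void lock() {
        bool expected = false;
        // Atomically set locked to true, but only if it is currently false.
        while (!locked.compare_exchange_weak(expected, true)) {
            expected = false;   // compare_exchange overwrites expected on failure
        }
    }
    void unlock() { locked.store(false); }
};

int main() {
    SpinLock s;
    long counter = 0;
    auto work = [&] {
        for (int i = 0; i < 100000; ++i) { s.lock(); ++counter; s.unlock(); }
    };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    std::cout << counter << '\n';   // 200000, because the increments never interleave
}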
Well, you need something which is able to run several threads.
If you are working on developing an operating system kernel on the bare metal, I think current multi-core processors have only one core working after their power-on reset. Even the BIOS on most PCs probably keeps only one core working (and the other cores idle). So you'll need to write (assembly, non-portable) code to start the other cores.
And (as James reminded you), most of the time you are using some operating system kernel. For instance, on Linux (I don't know about Windows), threads are known by the kernel (because the tasks it is scheduling are threads) and they need to be initiated by the Linux clone(2) system call.
Often, kernel threads are quite heavy, and the system has a library (NPTL for Linux POSIX threads) which may use fewer kernel threads than user threads (actually Linux NPTL is a 1:1 mapping between kernel and user threads, but on some other systems, probably Solaris, things are different).
You can't write your own threading mechanism, unless you mean pseudo-threads like co-routines and not actual concurrently executing threads. This is because the fundamental thread mechanism is defined by the kernel and you can't change it nor implement your own. Any library you write must fall back, eventually, to the operating system.

Delaying for milliseconds in C++ cross-platform

I'm writing a multi-platform internal library in C++ that will eventually run on Windows, Linux, MacOS, and an ARM platform, and need a way to sleep for milliseconds at a time.
I have an accurate method for doing this on the ARM platform, but I'm not sure how to do this on the other platforms.
Is there a way to sleep with millisecond resolution on most platforms or do I have to special-case things for each platform?
For Linux and Mac OS X you can use usleep:
usleep(350 * 1000);
For Windows you can use Sleep:
Sleep(350);
EDIT: usleep() sleeps for microseconds, not milliseconds, so needs adjusting.
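One way to hide the difference is a thin wrapper around the two calls shown above (the name sleepMs is made up here); if C++11 is available, std::this_thread::sleep_for(std::chrono::milliseconds(350)) does the same thing portably:

#ifdef _WIN32
#include <windows.h>
void sleepMs(unsigned ms) { Sleep(ms); }            // Sleep takes milliseconds
#else
#include <unistd.h>
void sleepMs(unsigned ms) { usleep(ms * 1000); }    // usleep takes microseconds; portable for values under 1000 ms
#endif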
boost::this_thread::sleep()
usleep theoretically provides microsecond resolution, but it depends on the platform.
It seems to be obsolete on Windows, so you should use QueryPerformanceCounter there (or write your own compatibility layer).
P.S.: Building a program that depends on sleeps is often a way to disaster. Usually what the programmer really wants is to wait for some event to happen asynchronously. In this case you should look at the waitable objects available on the platform, like semaphores or mutexes or even good ol' file descriptors.
For a timer you could use boost::asio::deadline_timer, synchronously or asynchronously.
You could also look into boost::posix_time to adjust the timer precision between seconds, milliseconds, microseconds and nanoseconds.
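A minimal sketch of both uses (the 500 ms delay is just an example value; io_service is the Boost.Asio name of that era):

#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main() {
    boost::asio::io_service io;

    // Synchronous wait: blocks for 500 ms.
    boost::asio::deadline_timer t1(io, boost::posix_time::milliseconds(500));
    t1.wait();

    // Asynchronous wait: the handler runs when the timer expires.
    boost::asio::deadline_timer t2(io, boost::posix_time::milliseconds(500));
    t2.async_wait([](const boost::system::error_code&) {
        std::cout << "timer fired\n";
    });
    io.run();   // blocks until all asynchronous work is done
}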
Windows Sleep() does provide millisecond precision, but nowhere near millisecond accuracy. There is always jitter, especially with small values on a heavily loaded system. Similar problems are only to be expected on other non-real-time OSes. Even if the priority of the thread calling Sleep() is very high, a driver interrupt may introduce an extra delay at any time.
Rgds,
Martin

What is the official way to call a function (C/C++) about every 1/100 sec on Linux?

I have an asynchronous dataflow system written in C++. In a dataflow architecture, the application is a set of component instances which are initialized at startup and then communicate with each other using pre-defined messages. There is a component type called Pulsar, which provides a "clock signal" message to other components that connect to it (e.g. Delay). It fires a message (calls the dataflow dispatcher API) every X ms, where X is the value of the "frequency" parameter, which is given in ms.
In short, the task is just to call a function (method) every X ms. The question is: what's the best/official way to do it? Is there any pattern for it?
There are some methods I found:
Use SIGALRM. I think signalling isn't suited to this purpose. Besides, the resolution is 1 sec, which is too coarse.
Use a HW interrupt. I don't need that precision. Also, I'm wary of HW-related solutions (the server is compiled for several platforms, e.g. ARM).
Measure elapsed time, and usleep() until the next call. I'm not sure it's a good idea to make time-related system calls from 5 threads, each 10 times every second, but maybe I'm wrong.
Use real-time kernel functions. I don't know anything about them. Also, I don't need crystal-precise calls; it's not a nuclear reactor, and I can't install an RT kernel on some platforms (only a 2.6.x kernel is available).
Maybe the best answer would be a short, commented excerpt from an audio/video player's source code (which I can't find/understand by myself).
UPDATE (requested by @MSalters): The co-author of the DF project uses Mac OS X, so we should find a solution that works on most POSIX-compliant operating systems, not only on Linux. Maybe in the future there'll be a target device that uses BSD, or some restricted Linux.
If you do not need hard real-time guarantees, usleep should do the job. If you want hard real-time guarantees, then an interrupt-based or real-time-kernel-based function will be necessary.
To be honest, I think having to have a "pulsar" in what claims to be an asynchronous dataflow system is a design flaw. Either it is asynchronous or it has a synchronizing clock event.
If you have a component that needs a delay, have it request one, through boost::asio::deadline_timer::async_wait or any of the lower-level solutions (select() / epoll() / timer_create() / etc.). Either way, the most effective C++ solution is probably the Boost.Asio timers, since they will use whatever is most efficient on your Linux kernel version.
An alternative to the previously mentioned approaches is to use the Timer FD support in Linux Kernels 2.6.25+ (pretty much any distribution that's close to "current"). Timer FDs provide a bit more flexibility than the previous approaches.
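A sketch of the timer FD approach (Linux-only; the 10 ms period matches the 1/100 sec from the question, and the loop is just for demonstration). The descriptor can also be handed to select()/epoll() instead of being read directly:

#include <sys/timerfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);

    itimerspec spec = {};
    spec.it_value.tv_nsec    = 10 * 1000 * 1000;  // first expiration after 10 ms
    spec.it_interval.tv_nsec = 10 * 1000 * 1000;  // then every 10 ms
    timerfd_settime(fd, 0, &spec, nullptr);

    for (int i = 0; i < 5; ++i) {
        std::uint64_t expirations = 0;
        read(fd, &expirations, sizeof expirations);  // blocks until the timer fires
        std::printf("tick (%llu expiration(s) since last read)\n",
                    static_cast<unsigned long long>(expirations));
        // call the Pulsar's dispatch here
    }
    close(fd);
}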
Neglecting the question of design (which I think is an interesting question, but deserves its own thread)...
I would start off by designing an 'interrupt' idea, and using signals or some kernel function to interrupt every X usec. I would delay doing sleep-functions until the other ideas were too painful.