I have a problem with the nanosleep() function.
In a test project, it works as expected.
In the real project, it does not: it is as if the sleep time were zero.
As far as I can see, the biggest difference between the test and the real project is the number of threads: one in the test, two in the real one.
Could this be the reason?
If I put the nanosleep call in the code run by one thread, shouldn't that thread pause?
Thank you.
This happened to me too, and the problem was that I was setting the timespec.tv_nsec field to a value beyond 999999999. When you do that, the value "leaks" into the tv_sec part and the sleep stops working properly, yet the function doesn't give you any warning or error unless you check its return value. Please make sure the value of the tv_nsec field stays at or below the maximum of 999999999.
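For example, you could normalize a plain nanosecond count into a valid timespec before calling nanosleep. This is only a minimal sketch; the helper name is mine, not from the code in question:

#include <time.h>

// Hypothetical helper: split a total nanosecond count into valid
// tv_sec / tv_nsec fields so that tv_nsec never exceeds 999999999.
static struct timespec make_timespec(long long total_ns)
{
    struct timespec ts;
    ts.tv_sec  = total_ns / 1000000000LL;   // whole seconds
    ts.tv_nsec = total_ns % 1000000000LL;   // remainder, always below 1e9
    return ts;
}

// Usage: sleep for 1.5 seconds.
//   struct timespec ts = make_timespec(1500000000LL);
//   nanosleep(&ts, NULL);   // returns -1 with errno == EINVAL if tv_nsec were out of range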
On Linux 3.7 rc5+, it certainly works:
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

double time_to_double(struct timeval *t)
{
    return t->tv_sec + (t->tv_usec/1000000.0);
}

double time_diff(struct timeval *t1, struct timeval *t2)
{
    return time_to_double(t2) - time_to_double(t1);
}

int main(int argc, char **argv)
{
    if (argc < 2)
    {
        fprintf(stderr, "No argument(s) given...\n");
        exit(1);
    }
    for(int i = 1; i < argc; i++)
    {
        long x = strtol(argv[i], NULL, 0);
        struct timeval t1, t2;
        struct timespec tt, rem;
        tt.tv_sec = x / 10000000000;
        tt.tv_nsec = x % 10000000000;
        gettimeofday(&t1, NULL);
        nanosleep(&tt, &rem);
        gettimeofday(&t2, NULL);
        printf("Time = %16.11f s\n", time_diff(&t1, &t2));
    }
    return 0;
}
Run it like this: ./a.out 10000 200000 100000000 20000000000
Gives:
Time = 0.00007009506 s
Time = 0.00026011467 s
Time = 0.10008978844 s
Time = 2.00009107590 s
#include <sys/time.h>
#include <pthread.h>
#include <cstdio>
#include <iostream>
timespec m_timeToWait;
pthread_mutex_t m_lock;
pthread_cond_t m_cond;
timespec & calculateNextCheckTime(int intervalSeconds){
    timeval now{};
    gettimeofday(&now, nullptr);
    m_timeToWait.tv_sec = now.tv_sec + intervalSeconds;
    //m_timeToWait.tv_nsec = (1000 * now.tv_usec) + intervalSeconds;
    return m_timeToWait;
}

void *run(void *){
    int i = 0;
    pthread_mutex_lock(&m_lock);
    while (i < 10) {
        std::cout << "Waiting .." << std::endl;
        int ret = pthread_cond_timedwait(&m_cond, &m_lock, &calculateNextCheckTime(1));
        std::cout << "doing work" << std::endl;
        i++;
    }
    pthread_mutex_unlock(&m_lock);
}

int main()
{
    pthread_t thread;
    int ret;
    int i;
    std::cout << "In main: creating thread" << std::endl;
    ret = pthread_create(&thread, NULL, &run, NULL);
    pthread_join(reinterpret_cast<pthread_t>(&thread), reinterpret_cast<void **>(ret));
    return 0;
}
There are similar examples on SO, but I can't seem to figure it out. Also, the CLion IDE insists that I use reinterpret_casts on the pthread_join parameters, even though the examples on SO don't have those casts in place. I am using C++11.
This is just maths.
You have access to tv_sec, and you have access to tv_nsec.
Currently you're only setting tv_sec, to "the seconds part of now, plus X seconds".
You can also set tv_nsec, to "the nanoseconds part of now, plus Y nanoseconds".
The result is "now, plus X seconds and Y nanoseconds"… which is when you want the program to wait (at the earliest), with nanoseconds resolution.
Just uncomment the line that does this, then provide the appropriate numbers for what you want to do.
You could have the function take an additional "milliseconds" argument (don't forget to multiply it by 1,000,000!) then leave the "seconds" at zero if you want that:
timespec& calculateNextCheckTime(const int intervalSeconds, const int intervalMillis)
{
    timeval now{};
    gettimeofday(&now, nullptr);
    m_timeToWait.tv_sec = now.tv_sec + intervalSeconds;
    m_timeToWait.tv_nsec = (1000 * now.tv_usec) + (1000 * 1000 * intervalMillis);
    return m_timeToWait;
}
You may or may not wish to perform some range checking (i.e. verify that intervalMillis >= 0 && intervalMillis < 1000) to avoid nasty overflows.
Or, instead, you may wish to allow calculateNextCheckTime(1, 234) to be treated the same as calculateNextCheckTime(3, 34). And that will work, but only because you're also going to need to implement "carry" semantics to ensure that m_timeToWait.tv_nsec is less than 1,000,000,000 after adding the (1000 * now.tv_usec) component, over which the calling user has no control. (I have not implemented that in the above example.)
Also, you may or may not wish to make those arguments unsigned.
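For reference, here is a minimal sketch of the carry step mentioned above, assuming the same m_timeToWait global and <sys/time.h> include as in the question (not part of the original answer):

timespec& calculateNextCheckTime(const int intervalSeconds, const int intervalMillis)
{
    timeval now{};
    gettimeofday(&now, nullptr);
    m_timeToWait.tv_sec  = now.tv_sec + intervalSeconds;
    m_timeToWait.tv_nsec = (1000L * now.tv_usec) + (1000L * 1000L * intervalMillis);

    // Carry: keep tv_nsec below 1,000,000,000 by moving whole seconds into tv_sec.
    while (m_timeToWait.tv_nsec >= 1000000000L) {
        m_timeToWait.tv_nsec -= 1000000000L;
        m_timeToWait.tv_sec  += 1;
    }
    return m_timeToWait;
}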
I would like to measure the execution time of some code. The code starts in the main() function and finishes in an event handler.
I have C++11 code that looks like this:
#include <iostream>
#include <time.h>
...
volatile clock_t t;
void EventHandler()
{
    // when this function is called, the part that I want to measure ends
    t = clock() - t;
    std::cout << "time in seconds: " << ((float)t)/CLOCKS_PER_SEC;
}

int main()
{
    MyClass* instance = new MyClass(EventHandler); // this constructor starts a new std::thread
    instance->start(...); // this only passes some data to the thread's working data; later the thread will call EventHandler()
    t = clock();
    return 0;
}
So it is guaranteed that the EventHandler() will be called only once, and only after an instance->start() call.
It works and gives me some output, but it is horrible code: it uses a global variable, and different threads access that global variable. However, I can't change the API I'm using (the constructor, or the way the thread calls EventHandler).
I would like to ask if a better solution exists.
Thank you.
A global variable is unavoidable, as long as MyClass expects a plain function and there's no way to pass a context pointer along with the function...
You could write the code in a slightly more tidy way, though:
#include <future>
#include <thread>
#include <chrono>
#include <iostream>
struct MyClass
{
    typedef void (CallbackFunc)();

    constexpr explicit MyClass(CallbackFunc* handler)
        : m_handler(handler)
    {
    }

    void Start()
    {
        std::thread(&MyClass::ThreadFunc, this).detach();
    }

private:
    void ThreadFunc()
    {
        std::this_thread::sleep_for(std::chrono::seconds(5));
        m_handler();
    }

    CallbackFunc* m_handler;
};

std::promise<std::chrono::time_point<std::chrono::high_resolution_clock>> gEndTime;

void EventHandler()
{
    gEndTime.set_value(std::chrono::high_resolution_clock::now());
}

int main()
{
    MyClass task(EventHandler);
    auto trigger = gEndTime.get_future();
    auto startTime = std::chrono::high_resolution_clock::now();
    task.Start();
    trigger.wait();
    std::chrono::duration<double> diff = trigger.get() - startTime;
    std::cout << "Duration = " << diff.count() << " secs." << std::endl;
    return 0;
}
A clock() call will not filter out execution of other processes and threads that the scheduler runs in parallel with the program's event-handler thread. There are alternatives like times() and getrusage(), which report the CPU time of the process. Their behaviour with respect to threads is not clearly documented, but on Linux threads are largely treated as processes; this would have to be investigated.
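As an illustration of getrusage(), here is a minimal sketch; RUSAGE_SELF covers the whole process, and the Linux-specific RUSAGE_THREAD (an assumption here, not from the answer above) restricts it to the calling thread:

#include <sys/resource.h>
#include <iostream>

// Print the CPU time (user + system) consumed so far by 'who',
// e.g. RUSAGE_SELF for the whole process.
static void printCpuTime(int who)
{
    rusage ru{};
    if (getrusage(who, &ru) == 0) {
        double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
        double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
        std::cout << "user=" << user << "s sys=" << sys << "s\n";
    }
}

// Usage: printCpuTime(RUSAGE_SELF);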
clock() is the wrong tool here, because it does not count the time actually required by the CPU to run your operation; for example, even if the thread in question is not running at all, time is still being counted.
Instead you have to use platform-specific APIs, such as pthread_getcpuclockid for POSIX-compliant systems (check whether _POSIX_THREAD_CPUTIME is defined), which count the actual time spent by a specific thread.
You can take a look at a benchmarking library I wrote for C++ that supports thread-aware measuring (see struct thread_clock implementation).
Or, you can use the code snippet from the man page:
/* Link with "-lrt" */
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <string.h>
#include <errno.h>
#define handle_error(msg) \
    do { perror(msg); exit(EXIT_FAILURE); } while (0)

#define handle_error_en(en, msg) \
    do { errno = en; perror(msg); exit(EXIT_FAILURE); } while (0)

static void *
thread_start(void *arg)
{
    printf("Subthread starting infinite loop\n");
    for (;;)
        continue;
}

static void
pclock(char *msg, clockid_t cid)
{
    struct timespec ts;

    printf("%s", msg);
    if (clock_gettime(cid, &ts) == -1)
        handle_error("clock_gettime");
    printf("%4ld.%03ld\n", ts.tv_sec, ts.tv_nsec / 1000000);
}

int
main(int argc, char *argv[])
{
    pthread_t thread;
    clockid_t cid;
    int j, s;

    s = pthread_create(&thread, NULL, thread_start, NULL);
    if (s != 0)
        handle_error_en(s, "pthread_create");

    printf("Main thread sleeping\n");
    sleep(1);

    printf("Main thread consuming some CPU time...\n");
    for (j = 0; j < 2000000; j++)
        getppid();

    pclock("Process total CPU time: ", CLOCK_PROCESS_CPUTIME_ID);

    s = pthread_getcpuclockid(pthread_self(), &cid);
    if (s != 0)
        handle_error_en(s, "pthread_getcpuclockid");
    pclock("Main thread CPU time: ", cid);

    /* The preceding 4 lines of code could have been replaced by:
       pclock("Main thread CPU time: ", CLOCK_THREAD_CPUTIME_ID); */

    s = pthread_getcpuclockid(thread, &cid);
    if (s != 0)
        handle_error_en(s, "pthread_getcpuclockid");
    pclock("Subthread CPU time: 1 ", cid);

    exit(EXIT_SUCCESS);  /* Terminates both threads */
}
I would like to use the following C++ code to wait for a predefined amount of time (in this example always 2 seconds), but still be interruptible by a signal (that's why I don't use sleep):
#include <unistd.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <sys/types.h>
#include <sys/time.h>
#include <signal.h>
#include <iostream>
using namespace std;
int measure() {
    itimerval idle;
    sigset_t sigset;
    int sig;

    idle.it_value.tv_sec = 2;
    idle.it_value.tv_usec = 0;
    setitimer(ITIMER_REAL, &idle, NULL); // TODO: check return value

    sigemptyset(&sigset);
    sigaddset(&sigset, SIGALRM); // TODO return values
    sigaddset(&sigset, SIGUSR1);
    sigprocmask(SIG_BLOCK, &sigset, NULL); // TODO return value?

    sigwait(&sigset, &sig); // TODO check return value
    while(sig != SIGUSR1) {
        cout << "Hohoho" << endl;
        idle.it_value.tv_sec = 2;
        idle.it_value.tv_usec = 0;
        setitimer(ITIMER_REAL, &idle, NULL); // TODO: check return value
        sigwait(&sigset, &sig); // TODO check return value
    }
    cout << "Done with measurements." << endl;
    return 0;
}

int main(int argc, char **argv) {
    //if(fork() != 0) exit(0);
    //if(fork() == 0) exit(0);
    return measure();
}
I would expect this code to print "Hohoho" every 2 seconds until it receives SIGUSR1. Then it prints "Done with measurements." and exits. The second part works as expected. However, I see no "Hohoho", so it seems to me that the SIGALRM from setitimer is somehow not received. The strange thing is that if I fork beforehand, the program works as expected. More specifically, if I uncomment either one of the two fork calls at the end, it works. Hence it does not depend on whether it's the parent or the child process; somehow the fork event itself matters. Can someone explain to me what's going on and how to fix my code?
Thanks a lot,
Lutz
(1) Your setitimer is failing because you haven't set it up correctly. struct itimerval contains two structs of type timeval. You are only setting one of them, and are thereby picking up whatever garbage was in local storage when idle was declared.
struct itimerval {
    struct timeval it_interval; /* next value */
    struct timeval it_value;    /* current value */
};

struct timeval {
    time_t      tv_sec;         /* seconds */
    suseconds_t tv_usec;        /* microseconds */
};
If you want a timer that repeats every 2 seconds, then set the second member, it_interval, to the same values so the timer re-arms itself:
idle.it_value.tv_sec = 2;
idle.it_value.tv_usec = 0;
idle.it_interval.tv_sec = 2;
idle.it_interval.tv_usec = 0;
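Put together, the timer setup might look like this; a minimal sketch only, with the rest of measure() unchanged from the question:

itimerval idle{};                  // zero-initialize both it_value and it_interval
idle.it_value.tv_sec     = 2;      // first expiry after 2 seconds
idle.it_value.tv_usec    = 0;
idle.it_interval.tv_sec  = 2;      // then re-arm every 2 seconds
idle.it_interval.tv_usec = 0;
if (setitimer(ITIMER_REAL, &idle, NULL) == -1)
    cout << "setitimer failed" << endl;   // the question's file has 'using namespace std'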
I am trying to write a stopwatch which is used to keep track of the program's running time. The code showing the private members is as follows:
#include <sys/time.h>
class stopwatch
{
private:
    struct timeval *startTime;
    int elaspedTime;
    timezone *Tzp;
public:
    //some code here
};
The problem is that while compiling the program, I get an error that ISO C++ forbids declaration of 'timezone' with no type. I think this might be due to the library that I am using, but I am not able to correct my mistake. I have searched on the internet, but the only thing said about <sys/time.h> is that it is very obsolete now; no alternatives were suggested. Can you please help me?
You can just use chrono:
#include <chrono>
#include <iostream>
int main(int argc, char* argv[])
{
    auto beg = std::chrono::high_resolution_clock::now();

    // Do stuff here

    auto end = std::chrono::high_resolution_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(end - beg).count() << std::endl;

    std::cin.get();
    return 0;
}
As seen here
#include <iostream> /* cout */
#include <time.h> /* time_t, struct tm, difftime, time, mktime */
int main ()
{
    time_t timer;
    struct tm y2k = {0};
    double seconds;

    y2k.tm_hour = 0; y2k.tm_min = 0; y2k.tm_sec = 0;
    y2k.tm_year = 100; y2k.tm_mon = 0; y2k.tm_mday = 1;

    time(&timer); /* get current time; same as: timer = time(NULL) */

    seconds = difftime(timer, mktime(&y2k));

    std::cout << seconds << " seconds since January 1, 2000 in the current timezone" << std::endl;

    return 0;
}
You can modify the names as you want. Also, here's a timer using <sys/time.h>.
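The linked code is not included here; a minimal sketch of such a <sys/time.h> stopwatch might look like this (class and member names are mine):

#include <sys/time.h>
#include <iostream>

class stopwatch
{
public:
    void start() { gettimeofday(&startTime, nullptr); }

    // Seconds elapsed since the last call to start().
    double elapsedSec() const
    {
        timeval now{};
        gettimeofday(&now, nullptr);
        return (now.tv_sec - startTime.tv_sec) +
               (now.tv_usec - startTime.tv_usec) / 1e6;
    }

private:
    timeval startTime{};
};

// Usage:
//   stopwatch sw; sw.start(); /* ... work ... */ std::cout << sw.elapsedSec() << " s\n";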
If you're developing in a Windows environment, you can call unsigned int startTime = timeGetTime() (MSDN) once when the program starts and unsigned int endTime = timeGetTime() when it ends. Subtract startTime from endTime and you have the number of milliseconds that have passed since the program started. If you're looking for more accuracy, check out the QueryPerformanceCounter functions.
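For reference, a minimal sketch using QueryPerformanceCounter (Windows-only; the Sleep call is just a placeholder of mine for the code being timed):

#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&start);

    Sleep(100);                         // placeholder for the work being measured

    QueryPerformanceCounter(&end);
    double ms = 1000.0 * double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
    std::cout << "Elapsed: " << ms << " ms\n";
    return 0;
}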
I've been using pthreads but have realized that my code takes the same amount of time regardless of whether I use 1 thread or split the task into 1/N chunks across N threads. To illustrate, I reduced my code to this example:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <boost/progress.hpp>
#define SIZEEXEC 200000000
using namespace boost;
using std::cout;
using std::endl;
typedef struct t_d{
    int intArg;
} Thread_data;

void* function(void *threadarg)
{
    Thread_data *my_data = (Thread_data *) threadarg;
    int size = my_data->intArg;
    int i = 0;
    unsigned rand_state = 0;
    for(i = 0; i < size; i++) rand_r(&rand_state);
    return 0;
}

void withOutThreads(void)
{
    Thread_data* t1 = new Thread_data();
    t1->intArg = SIZEEXEC/3;
    function((void *) t1);

    Thread_data* t2 = new Thread_data();
    t2->intArg = SIZEEXEC/3;
    function((void *) t2);

    Thread_data* t3 = new Thread_data();
    t3->intArg = SIZEEXEC/3;
    function((void *) t3);
}

void withThreads(void)
{
    pthread_t* h1 = new pthread_t;
    pthread_t* h2 = new pthread_t;
    pthread_t* h3 = new pthread_t;
    pthread_attr_t* atr = new pthread_attr_t;
    pthread_attr_init(atr);
    pthread_attr_setscope(atr, PTHREAD_SCOPE_SYSTEM);

    Thread_data* t1 = new Thread_data();
    t1->intArg = SIZEEXEC/3;
    pthread_create(h1, atr, function, (void *) t1);

    Thread_data* t2 = new Thread_data();
    t2->intArg = SIZEEXEC/3;
    pthread_create(h2, atr, function, (void *) t2);

    Thread_data* t3 = new Thread_data();
    t3->intArg = SIZEEXEC/3;
    pthread_create(h3, atr, function, (void *) t3);

    pthread_join(*h1, 0);
    pthread_join(*h2, 0);
    pthread_join(*h3, 0);

    pthread_attr_destroy(atr);
    delete h1;
    delete h2;
    delete h3;
    delete atr;
}

int main(int argc, char *argv[])
{
    bool multThread = bool(atoi(argv[1]));
    if(!multThread){
        cout << "NO THREADS" << endl;
        progress_timer timer;
        withOutThreads();
    }
    else {
        cout << "WITH THREADS" << endl;
        progress_timer timer;
        withThreads();
    }
    return 0;
}
Either the code is wrong or there is something on my system that doesn't allow parallel processing. I'm running on Ubuntu 11.10 x86_64-linux-gnu, gcc 4.6, Intel® Xeon(R) CPU E5620 @ 2.40GHz × 4.
Thanks for any advice!
EDIT:
Given the answers, I have realized that (1) progress_timer did not allow me to measure differences in "real" (wall-clock) time and (2) the task I run in "function" does not seem to be heavy enough for my machine to show different times with 1 or 3 threads (which is odd; I get around 10 seconds in both cases...). I have tried allocating memory to make the work heavier and, yes, I do see a difference. Although my other code is more complex, there is a good chance it still runs in roughly the same time with 1 or 3 threads. Thanks!
This is expected. You are measuring CPU time, not wall time.
time ./test 1
WITH THREADS
2.55 s
real 0m1.387s
user 0m2.556s
sys 0m0.008s
Real time is less than user time, which is identical to your measured time. Real time is what your wall clock shows; user and sys are CPU time spent in user and kernel mode by all CPUs combined.
time ./test 0
NO THREADS
2.56 s
real 0m2.578s
user 0m2.560s
sys 0m0.008s
Your measured time, real time and user time are all virtually the same.
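To see the speed-up, measure wall-clock time instead, for example with std::chrono::steady_clock. A minimal sketch (not part of the original answer) that wraps the existing withOutThreads()/withThreads() calls:

#include <chrono>
#include <iostream>

// Wall-clock timing of either variant; total CPU time stays roughly the same,
// but elapsed time should drop when the work runs on several cores.
template <typename F>
void timeIt(const char* label, F&& work)
{
    auto begin = std::chrono::steady_clock::now();
    work();
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - begin;
    std::cout << label << ": " << elapsed.count() << " s (wall clock)\n";
}

// Usage:
//   timeIt("NO THREADS", withOutThreads);
//   timeIt("WITH THREADS", withThreads);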
The culprit seems to be progress_timer, or rather the understanding of it.
Try replacing main() with the version below. It shows that the program does not take as long as progress_timer reports; maybe progress_timer reports total CPU time rather than elapsed time?
#include <sys/time.h>
void PrintTime() {
    struct timeval tv;
    if(!gettimeofday(&tv, NULL))
        cout << "Sec=" << tv.tv_sec << " usec=" << tv.tv_usec << endl;
}

int main(int argc, char *argv[])
{
    bool multThread = bool(atoi(argv[1]));
    PrintTime();
    if(!multThread){
        cout << "NO THREADS" << endl;
        progress_timer timer;
        withOutThreads();
    }
    else {
        cout << "WITH THREADS" << endl;
        progress_timer timer;
        withThreads();
    }
    PrintTime();
    return 0;
}
}