I'm trying to figure out how to keep track of time in C++. I'm making
a program where an event happens every 3 seconds, for example printing out "hello".
Here's a C++11 example that uses this_thread::sleep_for() on a second thread so your program won't freeze:
#include <iostream>
#include <chrono>
#include <thread>

using namespace std;

void hello()
{
    while (true)
    {
        cout << "Hello" << endl;
        chrono::milliseconds duration(3000);
        this_thread::sleep_for(duration);
    }
}

int main()
{
    // start the hello thread
    thread help1(hello);

    // do other stuff in the main thread
    for (int i = 0; i < 10; i++)
    {
        cout << "Hello2" << endl;
        chrono::milliseconds duration(3000);
        this_thread::sleep_for(duration);
    }

    // wait for the other thread to finish; in this case it waits forever (while (true))
    help1.join();
}
You can use boost::timer to measure elapsed time in C++:
using boost::timer::cpu_timer;
using boost::timer::cpu_times;
using boost::timer::nanosecond_type;
...
nanosecond_type const three_seconds(3 * 1000000000LL);
cpu_timer timer;
cpu_times const elapsed_times(timer.elapsed());
nanosecond_type const elapsed(elapsed_times.system + elapsed_times.user);
if (elapsed >= three_seconds)
{
    // more than 3 seconds elapsed
}
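Note that system + user measures CPU time; for a wall-clock "every 3 seconds" check, the elapsed().wall field can be used instead. A minimal sketch for the original use case, assuming Boost.Timer is available and linked (the loop busy-waits purely for illustration):

#include <boost/timer/timer.hpp>
#include <iostream>

int main()
{
    using boost::timer::cpu_timer;
    using boost::timer::nanosecond_type;

    nanosecond_type const three_seconds(3 * 1000000000LL);
    cpu_timer timer;

    for (int events = 0; events < 5; )
    {
        if (timer.elapsed().wall >= three_seconds)   // wall-clock nanoseconds
        {
            std::cout << "hello\n";
            ++events;
            timer.start();   // restart the timer for the next interval
        }
    }
}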
It depends on your OS/compiler.
Case 1:
If you have C++11, then you can use what Chris suggested:
std::this_thread::sleep_for() // you need to include the <thread> header
Case 2:
If you are on the Windows platform, then you can also use something like:
#include <windows.h>

int main()
{
    // event 1
    Sleep(1000); // the argument is in milliseconds: 1 second = 1000 milliseconds
    // event 2
    return 0;
}
Case 3:
On the Linux platform you can simply use:
sleep(seconds); // declared in <unistd.h>; the argument is in seconds
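For the original every-3-seconds example, a minimal Linux sketch might look like this (sleep() is declared in <unistd.h>):

#include <unistd.h>   // sleep()
#include <iostream>

int main()
{
    while (true)
    {
        std::cout << "hello" << std::endl;
        sleep(3);   // block this thread for 3 seconds
    }
}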
Related
I need to delay a function by x amount of time. The problem is that I can't use sleep or any function that suspends execution (the function is a loop that contains other functions, so sleeping or suspending one would sleep or suspend them all).
Is there a way I could do it?
If you want to execute some specific code at a certain time interval and don't want to use threads (to be able to suspend), then you have to keep track of time and execute the specific code when the delay has been exceeded.
Example (pseudo):
timestamp = getTime();
while (true) {
    if (getTime() - timestamp > delay) {
        // main functionality
        // reset timer
        timestamp = getTime();
    }
    // the other functionality you mentioned
}
With this approach, you invoke a specific function at every time interval specified by delay. The other functions will be invoked on each iteration of the loop.
In other words, it makes no difference if you delay a function or execute it at specific time intervals.
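A minimal C++11 sketch of the same idea using <chrono> (the 3-second delay and the printed messages are placeholders):

#include <chrono>
#include <iostream>

int main()
{
    using clock = std::chrono::steady_clock;
    const auto delay = std::chrono::seconds(3);   // placeholder interval
    auto timestamp = clock::now();

    while (true)
    {
        if (clock::now() - timestamp > delay)
        {
            std::cout << "main functionality\n";   // the delayed work
            timestamp = clock::now();              // reset the timer
        }
        // ... the other (non-suspending) work of the loop goes here ...
    }
}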
Assuming that you need to run functions with their own arguments inside a loop, with a custom delay, and wait for them to finish before each iteration:
#include <cstdio>

void func_to_be_delayed(const int &idx = -1, const unsigned &ms = 0)
{
    printf("Delayed response[%d] by %u ms!\n", idx, ms);
}

#include <chrono>
#include <future>

template<typename T, typename... Ta>
void delay(const unsigned &ms_delay, T &func, Ta... args)
{
    // Busy-wait (spin) until the requested number of milliseconds has elapsed.
    std::chrono::time_point<std::chrono::high_resolution_clock> start = std::chrono::high_resolution_clock::now();
    double elapsed;
    do {
        std::chrono::time_point<std::chrono::high_resolution_clock> end = std::chrono::high_resolution_clock::now();
        elapsed = std::chrono::duration<double, std::milli>(end - start).count();
    } while (elapsed <= ms_delay);
    func(args...);
}
int main()
{
    func_to_be_delayed();

    const short iterations = 5;
    for (int i = iterations; i >= 0; --i)
    {
        auto i0 = std::async(std::launch::async, [i]{ delay((i+1)*1000, func_to_be_delayed, i, (i+1)*1000); });
        // Will arrive with a difference from the previous one
        auto i1 = std::async(std::launch::async, [i]{ delay(i*1000, func_to_be_delayed, i, i*1000); });
        func_to_be_delayed();
        // The loop will wait for all calls (the futures' destructors block until completion)
    }
}
Note: with the std::launch::async policy, this method will potentially spawn an additional thread on each call.
The standard solution is to implement an event loop.
If you use some library, framework, or system API, then most probably it already provides something to solve this kind of problem.
For example, Qt has QApplication, which provides this loop, and there is QTimer.
boost::asio has io_context, which provides an event loop in which a timer such as boost::asio::deadline_timer can be run.
You can also try to implement such an event loop yourself.
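A minimal hand-rolled sketch of such a loop (the TimerEvent type and the 10 ms polling interval are just illustrative choices, not a library API):

#include <chrono>
#include <functional>
#include <iostream>
#include <thread>
#include <vector>

// A toy event loop: callbacks are stored with a deadline and run when due.
struct TimerEvent
{
    std::chrono::steady_clock::time_point due;
    std::function<void()> callback;
};

int main()
{
    using namespace std::chrono;
    std::vector<TimerEvent> timers;
    timers.push_back({steady_clock::now() + seconds(5),
                      [] { std::cout << "boom\n"; }});

    while (!timers.empty())
    {
        auto now = steady_clock::now();
        for (auto it = timers.begin(); it != timers.end(); )
        {
            if (it->due <= now)
            {
                it->callback();
                it = timers.erase(it);
            }
            else
            {
                ++it;
            }
        }
        std::this_thread::sleep_for(milliseconds(10));   // avoid busy-waiting
    }
}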
Example with Boost:
#include <boost/asio.hpp>
#include <boost/date_time.hpp>
#include <exception>
#include <iostream>
#include <string>

void printTime(const std::string& label)
{
    auto timeLocal = boost::posix_time::second_clock::local_time();
    boost::posix_time::time_duration durObj = timeLocal.time_of_day();
    std::cout << label << " time = " << durObj << '\n';
}

int main() {
    boost::asio::io_context io_context;
    try {
        boost::asio::deadline_timer timer{io_context};
        timer.expires_from_now(boost::posix_time::seconds(5));
        timer.async_wait([](const boost::system::error_code& error){
            if (!error) {
                printTime("boom");
            } else {
                std::cerr << "Error: " << error << '\n';
            }
        });
        printTime("start");
        io_context.run();
    } catch (const std::exception& e) {
        std::cerr << e.what() << '\n';
    }
    return 0;
}
https://godbolt.org/z/nEbTvMhca
C++20 introduces coroutines, which could be a good solution too.
Let's say I have a foo() function. I want it to run for, say, 5 seconds; after that, it has to be cancelled and the rest of the program continues.
Code snippets:
int main() {
    // Blah blah
    foo(); // runs for 5 seconds only
    // after 5 seconds, execution continues here and finishes
}
References: after searching StackOverflow for a while, I found what I need, but written in Python: Timeout on a function call.
signal.h and unistd.h may be relevant.
This is possible with threads. Since C++20, it will be fairly simple:
{
    std::jthread t([](std::stop_token stoken) {
        while (!stoken.stop_requested()) {
            // do things that are not infinite, or are interruptible
        }
    });

    using namespace std::chrono_literals;
    std::this_thread::sleep_for(5s);
}   // jthread's destructor requests a stop and joins the thread
Note that many interactions with the operating system cause the calling thread to be "blocked". An example is the POSIX function listen, which waits for incoming connections. If the thread is blocked, it will not be able to proceed to the next iteration.
Unfortunately, the C++ standard doesn't specify whether such platform-specific calls should be interrupted by a request to stop. You need to use platform-specific methods to make sure that happens. Typically, signals can be configured to interrupt blocking system calls. In the case of listen, an option is to connect to the waiting socket.
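For waits inside your own code (as opposed to OS-level blocking calls), C++20's std::condition_variable_any has a wait overload that takes the stop_token, so a request to stop wakes the thread. A minimal sketch:

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int main()
{
    std::mutex m;
    std::condition_variable_any cv;

    std::jthread t([&](std::stop_token stoken) {
        std::unique_lock lock(m);
        // Returns when the predicate becomes true or a stop is requested.
        cv.wait(lock, stoken, [] { return false; });   // here: nothing to wait for except stop
        std::cout << "stop requested, worker exiting\n";
    });

    using namespace std::chrono_literals;
    std::this_thread::sleep_for(5s);
    // jthread's destructor requests a stop; the interruptible wait wakes up and the thread is joined.
}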
There is no way to do this uniformly in C++. There are ways to do it with some degree of success using OS-specific APIs, but it all becomes extremely cumbersome.
The basic idea you can use on *nix is a combination of the alarm() system call and the setjmp/longjmp C functions.
A (pseudo) code:
#include <csetjmp>
#include <csignal>
#include <unistd.h>   // alarm()

std::jmp_buf jump_buffer;

void alarm_handle(int)
{
    std::longjmp(jump_buffer, 1);
}

int main()
{
    std::signal(SIGALRM, alarm_handle);
    alarm(5);                          // deliver SIGALRM after 5 seconds

    if (setjmp(jump_buffer) == 0) {
        foo();                         // runs for at most 5 seconds
    } else {
        // if we are here, foo timed out; continue with the rest of the program
    }
}
This is all extremely fragile and shaky (e.g. long jumps do not play nicely with C++ object lifetimes), but if you know what you are doing it might work.
Perfectly standard C++11
#include <iostream>
#include <thread>   // std::this_thread::sleep_for
#include <chrono>   // std::chrono::seconds

using namespace std;

// stop flag
bool stopfoo;

// function to run until stopped
void foo()
{
    while (!stopfoo)
    {
        // replace with something useful
        std::this_thread::sleep_for(std::chrono::seconds(1));
        std::cout << "still working!\n";
    }
    std::cout << "stopped\n";
}
// function to call a stop after 5 seconds
void timer()
{
    std::this_thread::sleep_for(std::chrono::seconds(5));
    stopfoo = true;
}
int main()
{
    // initialize stop flag
    stopfoo = false;

    // start timer in its own thread
    std::thread t(timer);

    // start worker in main thread
    foo();

    // wait for the timer thread before exiting
    t.join();

    return 0;
}
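Alternatively, a std::atomic<bool> can serve as the stop flag without an explicit mutex; a minimal sketch of that variant (the rest of the code stays the same):

#include <atomic>

// drop-in replacement for the plain bool flag above
std::atomic<bool> stopfoo(false);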
Here is the same thing with a thread-safe stop flag (not really necessary, but good practice for more complex cases):
#include <iostream>
#include <thread>   // std::this_thread::sleep_for
#include <chrono>   // std::chrono::seconds
#include <mutex>

using namespace std;

class cFlagThreadSafe
{
public:
    void set()
    {
        lock_guard<mutex> l(myMtx);
        myFlag = true;
    }
    void unset()
    {
        lock_guard<mutex> l(myMtx);
        myFlag = false;
    }
    bool get()
    {
        lock_guard<mutex> l(myMtx);
        return myFlag;
    }
private:
    bool myFlag;
    mutex myMtx;
};
// stop flag
cFlagThreadSafe stopfoo;
// function to run until stopped
void foo()
{
    while (!stopfoo.get())
    {
        // replace with something useful
        this_thread::sleep_for(std::chrono::seconds(1));
        cout << "still working!\n";
    }
    cout << "stopped\n";
}
// function to call a stop after 5 seconds
void timer()
{
    this_thread::sleep_for(chrono::seconds(5));
    stopfoo.set();
}
int main()
{
    // initialize stop flag
    stopfoo.unset();

    // start timer in its own thread
    thread t(timer);

    // start worker in main thread
    foo();

    t.join();
    return 0;
}
And if it is OK to do everything in the main thread, things can be greatly simplified.
#include <iostream>
#include <thread>   // std::this_thread::sleep_for
#include <chrono>   // std::chrono::seconds

using namespace std;

void foo()
{
    auto t1 = chrono::steady_clock::now();
    while (chrono::duration_cast<chrono::seconds>(
               chrono::steady_clock::now() - t1).count() < 5)
    {
        // replace with something useful
        this_thread::sleep_for(std::chrono::seconds(1));
        cout << "still working!\n";
    }
    cout << "stopped\n";
}
int main()
{
    // start worker in main thread
    foo();
    return 0;
}
I would like to measure the execution time of some code. The code starts in the main() function and finishes in an event handler.
I have C++11 code that looks like this:
#include <iostream>
#include <time.h>
...
volatile clock_t t;

void EventHandler()
{
    // when this function is called, it is the end of the part that I want to measure
    t = clock() - t;
    std::cout << "time in seconds: " << ((float)t)/CLOCKS_PER_SEC;
}

int main()
{
    MyClass* instance = new MyClass(EventHandler); // this constructor starts a new std::thread
    instance->start(...); // this call only passes some data to the thread's working data; later the thread will call EventHandler()
    t = clock();
    return 0;
}
So it is guaranteed that the EventHandler() will be called only once, and only after an instance->start() call.
It works and gives me some output, but it is horrible code: it uses a global variable that is accessed from different threads. However, I can't change the API I'm using (the constructor, or the way the thread calls EventHandler).
I would like to ask if a better solution exists.
Thank you.
A global variable is unavoidable as long as MyClass expects a plain function and there's no way to pass a context pointer along with it...
You could write the code in a slightly more tidy way, though:
#include <future>
#include <thread>
#include <chrono>
#include <iostream>

struct MyClass
{
    typedef void (CallbackFunc)();

    constexpr explicit MyClass(CallbackFunc* handler)
        : m_handler(handler)
    {
    }

    void Start()
    {
        std::thread(&MyClass::ThreadFunc, this).detach();
    }

private:
    void ThreadFunc()
    {
        std::this_thread::sleep_for(std::chrono::seconds(5));
        m_handler();
    }

    CallbackFunc* m_handler;
};
std::promise<std::chrono::time_point<std::chrono::high_resolution_clock>> gEndTime;

void EventHandler()
{
    gEndTime.set_value(std::chrono::high_resolution_clock::now());
}

int main()
{
    MyClass task(EventHandler);
    auto trigger = gEndTime.get_future();

    auto startTime = std::chrono::high_resolution_clock::now();
    task.Start();
    trigger.wait();

    std::chrono::duration<double> diff = trigger.get() - startTime;
    std::cout << "Duration = " << diff.count() << " secs." << std::endl;
    return 0;
}
A clock() call will not filter out time used by other processes and threads that the scheduler runs in parallel with the program's event handler thread. There are alternatives like times() and getrusage(), which report the CPU time of the process. The thread behaviour of these calls is not clearly documented, but on Linux threads are treated much like processes, so it has to be investigated.
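For example, a minimal sketch with getrusage() for the whole process (POSIX; on Linux, RUSAGE_THREAD can be used for the calling thread only):

#include <sys/resource.h>
#include <cstdio>

int main()
{
    // Burn a little CPU so there is something to measure.
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 100000000UL; ++i) x += i;

    rusage usage{};
    if (getrusage(RUSAGE_SELF, &usage) == 0)
    {
        std::printf("user:   %ld.%06ld s\n",
                    (long)usage.ru_utime.tv_sec, (long)usage.ru_utime.tv_usec);
        std::printf("system: %ld.%06ld s\n",
                    (long)usage.ru_stime.tv_sec, (long)usage.ru_stime.tv_usec);
    }
    return 0;
}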
clock() is the wrong tool here, because it does not count only the time the CPU spends running your operation; for example, if the thread is not running at all, time is still counted.
Instead you have to use platform-specific APIs, such as pthread_getcpuclockid for POSIX-compliant systems (check whether _POSIX_THREAD_CPUTIME is defined), which count the actual time spent by a specific thread.
You can take a look at a benchmarking library I wrote for C++ that supports thread-aware measuring (see struct thread_clock implementation).
Or, you can use the code snippet from the man page:
/* Link with "-lrt" */
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <string.h>
#include <errno.h>
#define handle_error(msg) \
do { perror(msg); exit(EXIT_FAILURE); } while (0)
#define handle_error_en(en, msg) \
do { errno = en; perror(msg); exit(EXIT_FAILURE); } while (0)
static void *
thread_start(void *arg)
{
    printf("Subthread starting infinite loop\n");
    for (;;)
        continue;
}

static void
pclock(char *msg, clockid_t cid)
{
    struct timespec ts;

    printf("%s", msg);
    if (clock_gettime(cid, &ts) == -1)
        handle_error("clock_gettime");
    printf("%4ld.%03ld\n", ts.tv_sec, ts.tv_nsec / 1000000);
}

int
main(int argc, char *argv[])
{
    pthread_t thread;
    clockid_t cid;
    int j, s;

    s = pthread_create(&thread, NULL, thread_start, NULL);
    if (s != 0)
        handle_error_en(s, "pthread_create");

    printf("Main thread sleeping\n");
    sleep(1);

    printf("Main thread consuming some CPU time...\n");
    for (j = 0; j < 2000000; j++)
        getppid();

    pclock("Process total CPU time: ", CLOCK_PROCESS_CPUTIME_ID);

    s = pthread_getcpuclockid(pthread_self(), &cid);
    if (s != 0)
        handle_error_en(s, "pthread_getcpuclockid");
    pclock("Main thread CPU time: ", cid);

    /* The preceding 4 lines of code could have been replaced by:
       pclock("Main thread CPU time: ", CLOCK_THREAD_CPUTIME_ID); */

    s = pthread_getcpuclockid(thread, &cid);
    if (s != 0)
        handle_error_en(s, "pthread_getcpuclockid");
    pclock("Subthread CPU time: 1 ", cid);

    exit(EXIT_SUCCESS); /* Terminates both threads */
}
I have a console application that is intended to only run on windows. It is written in C++. Is there any way to wait 60 seconds (and show remaining time on screen) and then continue code flow?
I've tried different solutions from the internet, but none of them behaved as expected: either they don't work at all, or they don't display the time correctly.
// Please note that this is Windows-specific code
#include <iostream>
#include <Windows.h>

using namespace std;

int main()
{
    int counter = 60; // amount of seconds
    while (counter >= 1)
    {
        // the trailing spaces erase leftover digits when the count drops from 10 to 9
        cout << "\rTime remaining: " << counter << "  " << flush;
        Sleep(1000); // the argument is in milliseconds
        counter--;
    }
    cout << endl;
}
You can use the sleep() system call to sleep for 60 seconds.
You can follow this link for how to set a 60-second timer using system calls: Timer in C++ using system calls.
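For illustration, a portable sketch of the 60-second countdown using standard C++11 facilities instead of a platform sleep (the trailing spaces clear leftover digits when the count drops from two digits to one):

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    for (int remaining = 60; remaining > 0; --remaining)
    {
        // '\r' returns to the start of the line; the trailing spaces erase
        // leftover characters when the number gets shorter (e.g. 10 -> 9).
        std::cout << "\rTime remaining: " << remaining << "  " << std::flush;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    std::cout << "\rTime is up!        \n";
}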
It is possible to use Waitable Timer Objects with the period set to 1 second for this task. A possible implementation:
#include <windows.h>
#include <wchar.h>   // swprintf

VOID CALLBACK TimerAPCProc(
    __in_opt LPVOID /*lpArgToCompletionRoutine*/,
    __in     DWORD  /*dwTimerLowValue*/,
    __in     DWORD  /*dwTimerHighValue*/
    )
{
}

void CountDown(ULONG Seconds, COORD dwCursorPosition)
{
    if (HANDLE hTimer = CreateWaitableTimer(0, 0, 0))
    {
        static LARGE_INTEGER DueTime = { (ULONG)-1, -1 }; // just now
        ULONGLONG _t = GetTickCount64() + Seconds * 1000, t;

        if (SetWaitableTimer(hTimer, &DueTime, 1000, TimerAPCProc, 0, FALSE))
        {
            HANDLE hConsoleOutput = GetStdHandle(STD_OUTPUT_HANDLE);
            do
            {
                SleepEx(INFINITE, TRUE);   // alertable wait; woken by the timer APC

                t = GetTickCount64();
                if (t >= _t)
                {
                    break;
                }

                if (SetConsoleCursorPosition(hConsoleOutput, dwCursorPosition))
                {
                    WCHAR sz[8];
                    WriteConsoleW(hConsoleOutput,
                        sz, swprintf(sz, 8, L"%02u..", (ULONG)((_t - t) / 1000)), 0, 0);
                }
            } while (TRUE);
        }
        CloseHandle(hTimer);
    }
}
COORD dwCursorPosition = { };
CountDown(60, dwCursorPosition);
This might be of some help. It's not entirely clear what the question is, but this is a countdown timer from 10 seconds; you can change the seconds and add minutes as well as hours.
#include <iomanip>
#include <iostream>

using namespace std;

#ifdef _WIN32
#include <windows.h>
// Windows has Sleep() in milliseconds; map sleep(seconds) onto it
#define sleep(s) Sleep((s) * 1000)
#else
#include <unistd.h>
#endif

int main()
{
    for (int sec = 10; sec < 11; sec--)
    {
        cout << setw(2) << sec;
        cout.flush();
        sleep(1);
        cout << '\r';
        if (sec == 0)
        {
            cout << "boom" << endl;
        }
        if (sec < 1)
            break;
    }
}
In C++ you can use a countdown. Please go through the following logic, which will allow you to show the remaining time on the screen.
for (int min = m; min > 0; min--) // here m is the total number of minutes, as per your requirements
{
    for (int sec = 59; sec >= 0; sec--)
    {
        sleep(1); // you can assign any value to sleep() according to your requirements
        cout << "\r" << min << "\t" << sec << flush;
    }
}
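A runnable version of the same idea with standard C++ (the total minute count m here is a placeholder value):

#include <chrono>
#include <iomanip>
#include <iostream>
#include <thread>

int main()
{
    const int m = 1;   // placeholder: total minutes to count down
    for (int min = m - 1; min >= 0; --min)
    {
        for (int sec = 59; sec >= 0; --sec)
        {
            // print remaining time as MM:SS, overwriting the same line
            std::cout << '\r' << std::setw(2) << std::setfill('0') << min
                      << ':' << std::setw(2) << std::setfill('0') << sec
                      << std::flush;
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }
    std::cout << "\nDone\n";
}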
If you need more help with this, please follow the link here.
I hope it works; please let me know whether it works in your case or if you need any more help.
Thanks!
Suppose there are several boost strand shared_ptrs stored in a vector m_poStrands, and tJobType is the enum indicating the different types of job.
I found the time difference from posting a job on one strand (JOBA) to calling onJob on another strand (JOBB) is around 50 milliseconds.
I want to know if there is any way to reduce this time difference.
void postJob(tJobType oType, UINT8* pcBuffer, size_t iSize)
{
    //...
    m_poStrands[oType]->post(boost::bind(&onJob, this, oType, pcDestBuffer, iSize));
}

void onJob(tJobType oType, UINT8* pcBuffer, size_t iSize)
{
    if (oType == JOBA)
    {
        //....
        struct timeval sTV;
        gettimeofday(&sTV, 0);

        memcpy(pcDestBuffer, &sTV, sizeof(sTV));
        pcDestBuffer += sizeof(sTV);
        iSize += sizeof(sTV);

        memcpy(pcDestBuffer, pcBuffer, iSize);

        m_poStrands[JOBB]->post(boost::bind(&onJob, this, JOBB, pcDestBuffer, iSize));
    }
    else if (oType == JOBB)
    {
        // get the time from the buffer
        // and calculate the time diff
        struct timeval eTV;
        gettimeofday(&eTV, 0);
    }
}
Your latency is probably coming from the memcpys between your gettimeofday calls. Here's an example program I ran on my machine (2 GHz Core 2 Duo). I'm getting thousands of nanoseconds, so a few microseconds. I doubt that your system is running four orders of magnitude slower than mine; the worst I ever saw it run was 100 microseconds for one of the two tests. I tried to make the code as close to the posted code as possible.
#include <boost/asio.hpp>
#include <boost/chrono.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <iostream>

struct Test {
    boost::shared_ptr<boost::asio::strand>* strands;
    boost::chrono::high_resolution_clock::time_point start;
    int id;

    Test(int i, boost::shared_ptr<boost::asio::strand>* strnds)
        : strands(strnds),
          id(i)
    {
        strands[0]->post(boost::bind(&Test::callback, this, 0));
    }

    void callback(int i) {
        if (i == 0) {
            start = boost::chrono::high_resolution_clock::now();
            strands[1]->post(boost::bind(&Test::callback, this, 1));
        } else {
            boost::chrono::nanoseconds sec = boost::chrono::high_resolution_clock::now() - start;
            std::cout << "test " << id << " took " << sec.count() << " ns" << std::endl;
        }
    }
};

int main() {
    boost::asio::io_service io_service_;
    boost::shared_ptr<boost::asio::strand> strands[2];
    strands[0] = boost::shared_ptr<boost::asio::strand>(new boost::asio::strand(io_service_));
    strands[1] = boost::shared_ptr<boost::asio::strand>(new boost::asio::strand(io_service_));

    // Post the work before starting the threads so io_service::run()
    // does not return early for lack of work.
    Test test1(1, strands);
    Test test2(2, strands);

    boost::thread t1(boost::bind(&boost::asio::io_service::run, &io_service_));
    boost::thread t2(boost::bind(&boost::asio::io_service::run, &io_service_));

    t1.join();
    t2.join();
}