I have searched but can't find an equivalent of MATLAB's tic/toc functions to simply display on the console how long the program took to do its processing. (Ideally I would like to be able to put tic (start timer) and toc (stop timer) anywhere in the program.)
Any suggestions?
I found what I was looking for.
Include:
#include <ctime>
Then at the beginning:
time_t tstart, tend;
tstart = time(0);
And finally, before the end:
tend = time(0);
cout << "It took " << difftime(tend, tstart) << " second(s)." << endl;
Note that time(0) has a resolution of one second, so this is only suitable for timing longer runs.
You can also look at the Boost.Date_Time library, which may be more portable.
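For example, here is a minimal sketch (assuming Boost.Date_Time is available) of the same start/stop idea with microsecond resolution:
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    boost::posix_time::ptime tstart = boost::posix_time::microsec_clock::local_time();
    // ... processing to be timed ...
    boost::posix_time::ptime tend = boost::posix_time::microsec_clock::local_time();
    std::cout << "It took " << (tend - tstart).total_milliseconds() << " ms." << std::endl;
}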
If you are on Linux you can use the function
clock_gettime();
and on Windows try
QueryPerformanceCounter()
You can look up the specific implementation details of each. I don't know about other operating systems, and there are doubtless many other ways to achieve the same thing, but if you get no other responses these are a reasonable place to start.
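For instance, a minimal sketch of the Linux route, using clock_gettime() with CLOCK_MONOTONIC (a clock unaffected by wall-clock adjustments; on older glibc versions you may need to link with -lrt):
#include <cstdio>
#include <ctime>

int main()
{
    timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);   // start timer
    // ... processing to be timed ...
    clock_gettime(CLOCK_MONOTONIC, &t1);   // stop timer
    double seconds = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    std::printf("It took %.9f second(s).\n", seconds);
}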
By using std::chrono you can write a simple pair of functions that behaves like MATLAB's tic/toc:
#include <iostream>
#include <chrono>
#include <thread> // sleep_for, for testing only
void tic(int mode = 0)
{
    // Keep the start time between the tic() and toc() calls.
    static std::chrono::high_resolution_clock::time_point t_start;
    if (mode == 0) {
        t_start = std::chrono::high_resolution_clock::now();
    } else {
        auto t_end = std::chrono::high_resolution_clock::now();
        // duration<double> avoids assuming the clock ticks in nanoseconds.
        std::chrono::duration<double> elapsed = t_end - t_start;
        std::cout << "Elapsed time is " << elapsed.count() << " seconds\n";
    }
}
void toc() { tic(1); }

int main(int argc, char **argv)
{
    tic();
    // wait 5 seconds, just for testing
    std::chrono::seconds sleep_s(5);
    std::this_thread::sleep_for(sleep_s);
    toc();
    return 0;
}
Consider this code:
#include <iostream>
#include <vector>
#include <functional>
#include <map>
#include <atomic>
#include <memory>
#include <chrono>
#include <cstdint>
#include <cstdlib>
#include <thread>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/asio/high_resolution_timer.hpp>

static const uint32_t FREQUENCY = 5000; // Hz
static const uint32_t MKSEC_IN_SEC = 1000000;

std::chrono::microseconds timeout(MKSEC_IN_SEC / FREQUENCY);
boost::asio::io_service ioservice;
boost::asio::high_resolution_timer timer(ioservice);

static std::chrono::high_resolution_clock::time_point lastCallTime = std::chrono::high_resolution_clock::now();
static uint64_t deviationSum = 0;
static uint64_t deviationMin = 100000000;
static uint64_t deviationMax = 0;
static uint32_t counter = 0;

void timerCallback(const boost::system::error_code &err) {
    auto actualTimeout = std::chrono::high_resolution_clock::now() - lastCallTime;
    std::chrono::microseconds actualTimeoutMkSec =
        std::chrono::duration_cast<std::chrono::microseconds>(actualTimeout);
    long timeoutDeviation = actualTimeoutMkSec.count() - timeout.count();
    deviationSum += std::abs(timeoutDeviation);
    if (std::abs(timeoutDeviation) > deviationMax) {
        deviationMax = std::abs(timeoutDeviation);
    } else if (std::abs(timeoutDeviation) < deviationMin) {
        deviationMin = std::abs(timeoutDeviation);
    }
    ++counter;
    //std::cout << "Actual timeout: " << actualTimeoutMkSec.count() << "\t\tDeviation: " << timeoutDeviation << "\t\tCounter: " << counter << std::endl;
    timer.expires_from_now(timeout);
    timer.async_wait(timerCallback);
    lastCallTime = std::chrono::high_resolution_clock::now();
}

using namespace std::chrono_literals;

int main() {
    std::cout << "Frequency: " << FREQUENCY << " Hz" << std::endl;
    std::cout << "Callback should be called each: " << timeout.count() << " mkSec" << std::endl;
    std::cout << std::endl;

    ioservice.reset();
    timer.expires_from_now(timeout);
    timer.async_wait(timerCallback);
    lastCallTime = std::chrono::high_resolution_clock::now();

    auto thread = new std::thread([&] { ioservice.run(); });
    std::this_thread::sleep_for(1s);

    std::cout << std::endl << "Messages posted: " << counter << std::endl;
    std::cout << "Frequency deviation: " << FREQUENCY - counter << std::endl;
    std::cout << "Min timeout deviation: " << deviationMin << std::endl;
    std::cout << "Max timeout deviation: " << deviationMax << std::endl;
    std::cout << "Avg timeout deviation: " << deviationSum / counter << std::endl;
    return 0;
}
This runs a timer that calls timerCallback(..) periodically at the specified frequency; in this example the callback should fire 5000 times per second. If you play with the frequency, you will see that the actual (measured) call frequency differs from the desired one, and the higher the frequency, the larger the deviation. I did some measurements at different frequencies; here is a summary:
https://docs.google.com/spreadsheets/d/1SQtg2slNv-9VPdgS0RD4yKRnyDK1ijKrjVz7BBMSg24/edit?usp=sharing
When the desired frequency is 10000 Hz, the system misses about 10% (~1000) of the calls.
When the desired frequency is 100000 Hz, the system misses about 40% (~40000) of the calls.
Question: Is it possible to achieve better accuracy in a Linux/C++ environment? How? I need it to work without significant deviation at a frequency of 500000 Hz.
P.S. My first idea was that the body of the timerCallback(..) method itself causes the delay. I measured it: it stably takes less than 1 microsecond to execute, so it does not affect the process.
I have no experience with this problem myself, but I guess (as the references below explain) that the OS scheduler is interfering with your callback somehow. So you could try the real-time scheduler and raise the priority of your task; a minimal sketch of that idea follows the links below.
Hope this gives you a direction in which to find your answer.
Scheduler:
http://gumstix.8.x6.nabble.com/High-resolution-periodic-task-on-overo-td4968642.html
Priority:
https://linux.die.net/man/3/setpriority
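For example, here is a minimal sketch (my own illustration, assuming Linux and sufficient privileges such as root or CAP_SYS_NICE; the priority value is an arbitrary choice) of requesting the SCHED_FIFO real-time policy for the calling thread:
#include <cstdio>
#include <cstring>
#include <pthread.h>
#include <sched.h>

// Ask for the SCHED_FIFO real-time policy on the calling thread.
// Valid SCHED_FIFO priorities range from 1 (low) to 99 (high).
bool requestRealtimePriority(int priority)
{
    sched_param param{};
    param.sched_priority = priority;
    int rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
    if (rc != 0) {  // pthread functions return the error code directly
        std::fprintf(stderr, "pthread_setschedparam: %s\n", std::strerror(rc));
        return false;
    }
    return true;
}
You would call this at the start of main() (and inside the thread that runs ioservice.run()) before starting the timer.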
If you need to achieve one call per two-microsecond interval, you'd better anchor to absolute time positions rather than account for the time each request takes. You also run into the problem that the processing required in each timeslot may demand more CPU time than the slot allows.
If you have a multicore CPU, I'd divide the timeslot between the cores (in a multithreaded approach) so each core's slot is longer. Suppose you have your requirements on a four-core CPU: then each thread only has to execute one call per 8 usec, which is probably more affordable. In this case you use absolute timers (an absolute timer is one that waits until the wall clock reaches a specific absolute time, not one that waits a delay from the moment you called it) and offset them by the thread number times a 2 usec delay: with 4 cores you start thread #1 at time T, thread #2 at time T + 2 usec, thread #3 at time T + 4 usec, ... and thread #N at time T + 2*(N-1) usec. Each thread then reschedules itself for time oldT + 2 usec instead of doing some kind of nsleep(3) call. This way the processing time does not accumulate into the delay, which is most probably what you are experiencing; a sketch of the idea follows. The pthread library timers are all absolute-time timers, so you can use them. I think this is the only way you'll be capable of reaching such a hard spec (and be prepared to see how the battery suffers, assuming you're in an Android environment).
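As an illustration, here is a minimal sketch (assuming Linux; runAtFixedRate is a hypothetical helper of my own) of the absolute-timer loop described above, using clock_nanosleep() with TIMER_ABSTIME so that the work done inside the loop does not push later wake-ups back:
#include <ctime>

void runAtFixedRate(long periodNs, int iterations)
{
    timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);  // T: the starting instant
    for (int i = 0; i < iterations; ++i) {
        // Advance the target to the next absolute time slot.
        next.tv_nsec += periodNs;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            ++next.tv_sec;
        }
        // Sleep until the absolute instant 'next' on the monotonic clock,
        // instead of sleeping for a relative delay.
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
        // ... do the per-slot work here ...
    }
}
Each thread would run this loop with its own phase-shifted starting instant.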
NOTE
In this approach the external bus can become a bottleneck, so even if you get it working, it would probably be better to synchronize several machines with NTP (this can be done down to the usec level at the speed of actual GBit links) and use different processors running in parallel. As you don't describe anything about the process you have to repeat so densely, I cannot provide more specific help.
I want to add a delay so that one line will run and then, after a short delay, the second one will run. I'm fairly new to C++, so I'm not sure how I would do this. Ideally, in the code below, it would print "Loading...", wait at least 1-2 seconds, and then print "Loading..." again. Currently it prints both instantly instead of waiting.
cout << "Loading..." << endl;
// The delay would be between these two lines.
cout << "Loading..." << endl;
In C++11 you can use <thread> and <chrono> to do it:
#include <chrono>
#include <thread>
...
using namespace std::chrono_literals;
...
std::this_thread::sleep_for(2s);
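Applied to your example:
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using namespace std::chrono_literals;
    std::cout << "Loading..." << std::endl;
    std::this_thread::sleep_for(2s);  // pause for 2 seconds
    std::cout << "Loading..." << std::endl;
}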
To simulate a 'work-in-progress report', you might consider something like this (m_thread and m_work are members of the surrounding class, and work is the function the thread runs):
// start thread to do some work
m_thread = std::thread(work, std::ref(*this));

// work-in-progress report
std::cout << "\n\n ... " << std::flush;
for (int i = 0; i < 10; ++i)                    // for 10 seconds
{
    std::this_thread::sleep_for(1s);
    std::cout << (9 - i) << '_' << std::flush;  // count-down
}

m_work = false;   // command thread to end
m_thread.join();  // wait for it to end
With output:
... 9_8_7_6_5_4_3_2_1_0_
work abandoned after 10,175,240 us
Overview: the method work() did not 'finish' on its own; it received the command to abandon the operation and exited at the timeout (a successful test).
The code uses chrono and chrono_literals.
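For reference, here is a minimal self-contained sketch of the same idea (in this version m_work is a global std::atomic<bool> and work is a stand-in job; in the original they are members of the surrounding class):
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

using namespace std::chrono_literals;

std::atomic<bool> m_work{true};

void work()                          // stand-in for the real job
{
    while (m_work)                   // run until commanded to end
        std::this_thread::sleep_for(100ms);
}

int main()
{
    std::thread m_thread(work);      // start thread to do some work
    std::cout << "\n\n ... " << std::flush;
    for (int i = 0; i < 10; ++i)     // for 10 seconds
    {
        std::this_thread::sleep_for(1s);
        std::cout << (9 - i) << '_' << std::flush;  // count-down
    }
    m_work = false;                  // command thread to end
    m_thread.join();                 // wait for it to end
}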
On Windows:
#include <windows.h>
Sleep( sometime_in_millisecs ); // note the uppercase S
On Unix-based OSes:
#include <unistd.h>
unsigned int sleep(unsigned int seconds); // suspends execution for an interval in seconds
int usleep(useconds_t usec); // note usleep - suspends execution for microsecond intervals
You want the sleep(unsigned int seconds) function from unistd.h. Call this function between the cout statements.
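For example (POSIX only; on Windows use Sleep() from <windows.h>, which takes milliseconds):
#include <iostream>
#include <unistd.h>

int main()
{
    std::cout << "Loading..." << std::endl;
    sleep(2);  // suspend execution for 2 seconds
    std::cout << "Loading..." << std::endl;
}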
I've recently started trying C++. In the thing I'm making, I want a variable with the value of 20 to be decremented by 1 every second, but I also need the machine to wait for input from the user. I tried using for loops, but they won't proceed until the input is entered or until the variable runs out. I looked at clock, but it doesn't seem to fit my need, or maybe I just misunderstood its purpose.
Any suggestions?
As has already been suggested in the comments, threading is one way to do this. There is a nice self-contained example here (which I've borrowed from in the code below).
In the code below an asynchronous function is launched. Details on these here. This returns a future object that will contain the result once the job has finished.
In this case the job is listening to cin (typically the terminal input) and returns when some data is entered (i.e. when Enter is pressed).
In the meantime the while loop keeps running; it tracks how much time has passed, decrements the counter, and also returns if the asynchronous job finishes. It wasn't clear from your question whether this is exactly the behaviour you want, but it gives you the idea: it prints the value of the decremented variable, while the user can enter text, which is printed once the user presses Enter.
#include <iostream>
#include <thread>
#include <future>
#include <chrono>

int main() {
    // Enable standard literals such as 5ms and ""s.
    using namespace std::literals;

    // Execute a lambda asynchronously (waiting for user input).
    auto f = std::async(std::launch::async, [] {
        auto s = ""s;
        std::cin >> s; // returns when the user presses Enter
        return s;
    });

    // Continue execution in the main thread: run the countdown and timer.
    int countdown = 20;
    int countdownPrev = 0;
    std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();
    std::chrono::steady_clock::time_point end;
    double elapsed;
    while ((f.wait_for(5ms) != std::future_status::ready) && countdown >= 0) {
        end = std::chrono::steady_clock::now();
        elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - begin).count();
        countdown = 20 - (int)(elapsed / 1000);
        if (countdown != countdownPrev) {
            std::cout << "Counter now: " << std::fixed << countdown << std::endl;
            countdownPrev = countdown;
        }
    }

    if (countdown == -1) {
        std::cout << "Countdown elapsed" << std::endl;
        return -1;
    } else {
        std::cout << "Input was: " << f.get() << std::endl;
        return 0;
    }
}
P.S. To get this to work on my compiler I have to compile with g++ -pthread -std=c++14 file_name.cpp to correctly link the threading library and allow the use of C++14 features.
See the following code.
#include <future>
#include <iostream>
#include <ctime>
#include <chrono>
#include <thread>

int main()
{
    std::future<int> future = std::async(std::launch::deferred, []() {
        std::this_thread::sleep_for(std::chrono::seconds(5));
        return 100;
    });

    std::cout << "waiting...\n";
    clock_t start = clock();
    std::future_status status = future.wait_for(std::chrono::seconds(20));
    std::cout << "result is " << future.get() << std::endl;
    clock_t end = clock();
    std::cout << "Time Cost : " << (double)(end - start) / CLOCKS_PER_SEC << " seconds." << std::endl;
}
The execution result is very confusing. Yes, the main thread waits only about 5 seconds and then prints "100". But why does "Time Cost" show 0? The test environment is Cygwin with g++ 4.9.3.
I then tested it in VS2013; there the result is 25 seconds. Strange!
It doesn't show 0 on my machine but a very small value: 0.000156 s. But since it measures processor time, and your main thread does not consume any CPU (the wait is not an active loop), the result is almost 0.
clock() returns processor time spent. It doesn't have any guarantee of advancement whatsoever. If your CPU sleeps, the value returned by it will not be advanced. To measure intervals properly, use clocks from std::chrono, for example, std::chrono::steady_clock.
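For example, a minimal sketch of timing the snippet above with std::chrono::steady_clock, which counts wall-clock time even while the thread sleeps:
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    auto start = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::seconds(5));  // sleeping still counts as elapsed time
    auto end = std::chrono::steady_clock::now();
    std::chrono::duration<double> elapsed = end - start;
    std::cout << "Time Cost : " << elapsed.count() << " seconds." << std::endl;  // ~5 seconds
}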
I am making a command-line C++ application and I want the text to appear on a sort of timer, because there is a lot of text. I already know how to make the user press Enter, but I want it to be automatic. What would be the simplest way to do this?
Example Output:
Welcome to the Calculator Game!
(1 second later) Do you want to play (Yes or No)?
The easiest thing is just to use a sleep call, e.g. Sleep(milliseconds) on Windows or sleep(seconds) on POSIX. Most operating systems have various ways of doing timers as well.
Even better, if you are using C++11, use something like this:
#include <iostream>
#include <chrono>
#include <thread>

int main()
{
    std::cout << "Hello waiter" << std::endl;
    std::chrono::milliseconds dura(2000);
    std::this_thread::sleep_for(dura);
    std::cout << "Waited 2000 ms\n";
}
If you're not using C++11, you can busy-wait on clock() instead. Note that this spins the CPU for the whole delay, and that clock() counts in clock ticks rather than milliseconds, so the ticks must be scaled by CLOCKS_PER_SEC:
#include <time.h>

void sleep(unsigned int mseconds)
{
    clock_t goal = clock() + mseconds * CLOCKS_PER_SEC / 1000;
    while (goal > clock())
        ; // busy-wait until the goal tick count is reached
}
Docs here: http://en.cppreference.com/w/cpp/thread/sleep_for