I am preparing for a coding challenge where the fastest calculation of pi to the 10,000th digit wins. All calculations will be run on a Raspberry Pi 4 running Linux during the competition.
I want to know which code runs the fastest, so I know which function to submit.
So I wrote a little program named "lol" to try to establish a baseline around a known time.
// lol: an executable that just calls usleep()
#include <unistd.h>
int main(){
    usleep(100); // sleep for 100 microseconds
    return 0;
}
Then, to measure execution time, I wrote this:
#include <chrono>
#include <stdlib.h>
#include <iostream>
using namespace std::chrono;
using namespace std;
int main(int argc, char **argv){
// returns runtime in nanoseconds
// usage: run_time <program>
// caveats: I put the exe in /home/$USER/bin
//start timing
auto start = high_resolution_clock::now();
//executable being timed:
system(argv[1]);
// After function call
auto stop = high_resolution_clock::now();
auto duration = duration_cast<nanoseconds>(stop - start);
cout << argv[1] << " " << duration.count() << endl;
return 0;
}
My issue is that the measured run time seems wildly variable. Is this because I'm running in userspace and my system is also doing other things? Why am I not getting more consistent run times?
$ ./run_time lol
lol 13497886
$ ./run_time lol
lol 11175649
$ ./run_time lol
lol 3340143
$ ./run_time lol
lol 3364727
$ ./run_time lol
lol 3372376
$ ./run_time lol
lol 1981566
$ ./run_time lol
lol 3385961
Instead of executing a separate program, measure the completion of a function inside a single program:
auto start = high_resolution_clock::now();
//function being timed:
my_func();
// After function call
auto stop = high_resolution_clock::now();
You are already using the chrono header, so why usleep() when you can use std::this_thread::sleep_for?
https://en.cppreference.com/w/cpp/thread/sleep_for
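Putting both points together, here is a minimal self-contained sketch; my_func is just a stand-in that sleeps for 100 microseconds, mirroring your lol program, and I've used steady_clock, a monotonic clock better suited to interval measurement than high_resolution_clock:
#include <chrono>
#include <iostream>
#include <thread>

// Stand-in for the code being benchmarked.
void my_func(){
    std::this_thread::sleep_for(std::chrono::microseconds(100));
}

int main(){
    using namespace std::chrono;
    auto start = steady_clock::now();
    my_func();
    auto stop = steady_clock::now();
    // Report elapsed wall-clock time in nanoseconds.
    std::cout << duration_cast<nanoseconds>(stop - start).count() << " ns\n";
    return 0;
}
This avoids timing the shell and fork/exec overhead that system() adds on every run, which accounts for much of the variance you are seeing: your 100-microsecond sleep is being swamped by several milliseconds of process startup.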
The merit of this contest is not how you micro-optimize to save 1 ns. It's about choosing the right algorithm to calculate pi.
Let me start by saying that I am not a skilled C++ programmer; I have just begun that journey and am working in a Windows 10 environment using Code::Blocks as my IDE. I am starting to learn C++ by coding solutions to the Project Euler problems. I used these problems to start learning Python several years ago, so I have at least one Python solution for the first 50+ problems. In doing those problems in Python, I learned that as my skill improves, I can formulate better solutions. When learning Python, I used a trial-and-error approach to improving my code until I became aware of the Python tools for timing my code. Once I defined a convenient and consistent timing method, my coding improved dramatically. Since I am beginning this journey now with C++, I decided to be proactive and create a consistent method for timing code execution. To do this, I decided to use the C++ chrono library and defined two different approaches that seemingly should produce the same timing results. However, as the title of this question implies, they don't.
So here's my question: Why don't the following two approaches yield the same results?
Approach #1
This method reports a timing for the "do some stuff" segment of slightly over 3 seconds, less than the Code::Blocks execution-time report of 3.054 seconds. I certainly understand these differences based on the program flow, so this approach seems to give good results. The issue I have with this approach is that I need to copy and paste the timing code into each .cpp file I want to time, which seems sloppy.
Output using this method looks like the following:
Elapsed time in seconds : 3.00053 sec
Process returned 0 (0x0) execution time : 3.151 s
#include <iostream>
#include <chrono>
#include <unistd.h>
using namespace std;
using Clock = chrono::steady_clock;
using TimePoint = chrono::time_point<Clock>;
// Functions
TimePoint get_timestamp();
double get_duration(TimePoint, TimePoint);
int main()
{
//auto start = chrono::steady_clock::now();
auto start_t = get_timestamp();
//cout << "Start is of type " << typeid(start).name() << "\n";
// do some stuff
sleep(3);
auto end_t = get_timestamp();
//double tme_secs = chrono::duration_cast<chrono::nanoseconds>(end - start).count()/1000000000.0000;
double tme_secs = get_duration(start_t, end_t)/1000000000;
cout << "Elapsed time in seconds : " << tme_secs << " sec";
return 0;
}
TimePoint get_timestamp(){
return Clock::now();
}
double get_duration(TimePoint start, TimePoint end){
return chrono::duration_cast<chrono::nanoseconds>(end - start).count()*1.00000000;
}
Approach #2
In this approach, I attempted to create a ProcessTime class that could be included in files I want to time, providing a cleaner method. The problem with this approach is that I get a timing report in nanoseconds that does not reflect the process being timed. Here is my implementation of this approach.
Output using this method looks like the following:
Elapsed time: 1.1422e+06 seconds
Process returned 0 (0x0) execution time : 3.148 s
ProcessTime.h file
#ifndef PROCESSTIME_H_INCLUDED
#define PROCESSTIME_H_INCLUDED
#include <chrono>
using namespace std;
using Clock = chrono::steady_clock;
using TimePoint = chrono::time_point<Clock>;
class ProcessTime{
public:
ProcessTime();
double get_duration();
private:
TimePoint proc_start;
};
#endif // PROCESSTIME_H_INCLUDED
ProcessTime.cpp file
#include "ProcessTime.h"
#include <chrono>
using namespace std;
using Clock = chrono::steady_clock;
using TimePoint = chrono::time_point<Clock>;
ProcessTime::ProcessTime(){
TimePoint proc_start = Clock::now();
}
double ProcessTime::get_duration(){
TimePoint proc_end = Clock::now();
return chrono::duration_cast<chrono::nanoseconds>(proc_end - ProcessTime::proc_start).count()*1.00000000;
}
main.cpp file:
#include <iostream>
#include "ProcessTime.h"
#include <unistd.h>
using namespace std;
int main()
{
ProcessTime timer;
// Do some Stuff
sleep(3);
double tme_secs = timer.get_duration()/1000000000;
cout << "Elapsed time: " << tme_secs << " seconds";
return 0;
}
This is incorrect:
ProcessTime::ProcessTime(){
TimePoint proc_start = Clock::now();
}
You are setting a local variable named proc_start, and then the constructor ends. You did not set the actual member variable of ProcessTime.
One fix (and the preferred method) is to use the member-initialization list:
ProcessTime::ProcessTime() : proc_start(Clock::now()) {}
Or, if you did not know about the member-initialization list, the code would look like this, assigning the value in the constructor body:
ProcessTime::ProcessTime(){
proc_start = Clock::now();
}
I was using Dev-C++ for coding C++ (for competitive programming; I've been doing it for a couple of months), but after a while I decided to try doing it in VSCode (terrible idea, btw). Everything ended up working; however, when executing a program on the command prompt via Dev-C++, it showed both the execution time and the return value of the main function, like this:
Process exited after 4.962 seconds with return value 0
The problem is, when executing a C++ .exe file by normal means, these things are not shown, and I don't even know where they come from. Is there a program or a command that makes these show on the command prompt?
Edit: That comment solved my problem
"Execution time is not tracked by Windows. echo %errorlevel% prints the exit code."
A very rough example of achieving the requirement:
#include <iostream>
// to count time elapsed from beginning to end of the program
#include <chrono>
// for atexit() function
#include <cstdlib>
using namespace std::chrono;
// used globally; see the reason at the bottom of the answer
int i;
void onExit();
int main(void) {
int input;
// clock begins
steady_clock::time_point begin = steady_clock::now();
// --- SOME LONG PROGRAM ---
std::cin >> input;
// clock ends
steady_clock::time_point end = steady_clock::now();
// calculating the difference
long long diff = duration_cast<milliseconds>(end - begin).count();
// displaying the time
std::cout << "Process exited in " << diff / 1000.00 << 's' << std::endl;
// on exit of the program, this function will be executed
atexit(&onExit);
// supposing the return code is 5
i = 5;
return i;
}
// exit function to be executed before exit
void onExit() {
std::cout << "Exit code: " << i << std::endl;
}
This will output something like:
test // --- INPUT
Process exited in 4.383s
Exit code: 5
Notice that we've used the variable i globally, since the function passed to atexit() must return void and take no arguments; that is why i is not passed as a parameter to the user-defined function onExit().
The definition of atexit() is as follows:
extern "C++" int atexit (void (*func)(void)) noexcept;
I have a program whose execution time I want to calculate:
#include <iostream>
#include <boost/chrono.hpp>
using namespace std;
int main(int argc, char* const argv[])
{
boost::chrono::system_clock::time_point start = boost::chrono::system_clock::now();
// Instructions to burn time
boost::chrono::duration<double> sec = boost::chrono::system_clock::now() - start;
cout <<"---- time execution is " << sec.count() << ";";
return 0;
}
For example the result after one run:
---- time execution is 0.0223588
This result isn't very reliable, because CPU contention is included.
I had an idea to mitigate the CPU contention by performing many runs and taking their average.
The problem is:
How can I store the time value of the previous run?
Can we do that via a file?
How do I incrementally calculate the average after each run?
Your suggestions/pseudocode are welcome.
You may pass the running average via the command line using two arguments: the current average value and the number of iterations performed.
Then:
NewAverage = ((CurrentAverage*N) + CurrentValue) / (N+1);
where N is the number of iterations.
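A minimal sketch of that idea, using std::chrono instead of boost::chrono to keep it self-contained; the argument order and the printed format are just illustrative assumptions:
#include <chrono>
#include <cstdlib>
#include <iostream>

// Hypothetical usage: ./bench <current_average> <iterations_so_far>
int main(int argc, char* argv[])
{
    double current_average = (argc > 1) ? std::atof(argv[1]) : 0.0;
    long n = (argc > 2) ? std::atol(argv[2]) : 0;

    auto start = std::chrono::steady_clock::now();
    // Instructions to burn time
    auto end = std::chrono::steady_clock::now();
    double current_value = std::chrono::duration<double>(end - start).count();

    // NewAverage = ((CurrentAverage*N) + CurrentValue) / (N+1)
    double new_average = (current_average * n + current_value) / (n + 1);

    // Print the updated average and count, to be fed into the next run.
    std::cout << new_average << " " << (n + 1) << "\n";
    return 0;
}
You could also redirect that output to a file and read it back at the start of the next run, which answers your file question as well.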
I used gcc 4.8.1 (configured with ./configure --prefix=/usr/local) to compile the following code on Ubuntu 12.04, but when I ran it, it didn't work: it didn't stop to wait for the mutex. try_lock_for() returned false, and it output "hello world" immediately.
command: g++ -std=c++11 main.cpp -omain -pthread
When I used gcc 4.6 (installed via apt-get install g++) to compile it, it worked well: the program waited about ten seconds and then output "hello world".
#include <thread>
#include <iostream>
#include <chrono>
#include <mutex>
std::timed_mutex test_mutex;
void f()
{
test_mutex.try_lock_for(std::chrono::seconds(10));
std::cout << "hello world\n";
}
int main()
{
std::lock_guard<std::timed_mutex> l(test_mutex);
std::thread t(f);
t.join();
return 0;
}
If I am not mistaken, that is Bug 54562 - mutex and condition variable timers.
The reason for the bug is also mentioned:
This is because it uses the CLOCK_MONOTONIC clock (if available on the
platform) to calculate the absolute time when it needs to return,
which is incorrect as the POSIX pthread_mutex_timedlock() call uses
the CLOCK_REALTIME clock, and on my platform the monotonic clock is
way behind the real time clock.
However, this doesn't explain why you see the correct behavior on gcc 4.6. Perhaps _GLIBCXX_USE_CLOCK_MONOTONIC is not enabled there?
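To see the clock skew the bug report describes, a small sketch like this (plain POSIX clock_gettime, nothing gcc-specific) prints both clocks for comparison:
#include <ctime>
#include <iostream>

int main()
{
    timespec real_ts, mono_ts;
    clock_gettime(CLOCK_REALTIME, &real_ts);  // wall-clock time
    clock_gettime(CLOCK_MONOTONIC, &mono_ts); // time since an unspecified point, often boot

    std::cout << "CLOCK_REALTIME:  " << real_ts.tv_sec << " s\n";
    std::cout << "CLOCK_MONOTONIC: " << mono_ts.tv_sec << " s\n";
    // On an affected platform the monotonic value is far behind the real-time value,
    // so an absolute deadline computed from CLOCK_MONOTONIC but compared against
    // CLOCK_REALTIME has already expired, and try_lock_for() returns immediately.
    return 0;
}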
A possible workaround (YOUR_MUTEX is a placeholder for your std::timed_mutex; the loop needs the <thread> and <chrono> headers):
const int WAIT_PRECISION_MS = 10; // Change it to whatever you like
int TIME_TO_WAIT_MS = 2000; // Change it to whatever you like
int ms_waited = 0;
bool got_lock = false;
while (ms_waited < TIME_TO_WAIT_MS) {
std::this_thread::sleep_for(
std::chrono::milliseconds(WAIT_PRECISION_MS));
ms_waited += WAIT_PRECISION_MS;
got_lock = YOUR_MUTEX.try_lock();
if (got_lock) {
break;
}
}
The WAIT_PRECISION_MS constant tells the while loop how often to wake up and try to get the lock. It also determines how accurate your deadline is going to be, unless your precision is a factor of the deadline time.
For example:
deadline = 20, precision = 3: 3 is not a factor of 20 - the last iteration of the while loop will be when ms_waited is 18. It means that you are going to wait a total of 21ms and not 20ms.
deadline = 20, precision = 4: 4 is a factor of 20 - the last iteration of the while loop will be when ms_waited is 16. It means that you are going to wait exactly 20ms, as your deadline is defined.
I am writing a program that will be used on a Solaris machine. I need a way of keeping track of how many seconds have passed since the start of the program. I'm talking very simple here. For example, I would have an int seconds = 0; but how would I go about updating the seconds variable as each second passes?
It seems that some of the various time functions that I've looked at only work on Windows machines, so I'm just not sure.
Any suggestions would be appreciated.
Thanks for your time.
A very simple method:
#include <time.h>
time_t start = time(0);
double seconds_since_start = difftime( time(0), start);
The main drawback to this is that you have to poll for the updates. You'll need platform support or some other lib/framework to do this on an event basis.
Use std::chrono.
#include <chrono>
#include <iostream>
int main(int argc, char *argv[])
{
auto start_time = std::chrono::high_resolution_clock::now();
auto current_time = std::chrono::high_resolution_clock::now();
std::cout << "Program has been running for " << std::chrono::duration_cast<std::chrono::seconds>(current_time - start_time).count() << " seconds" << std::endl;
return 0;
}
If you only need a resolution of seconds, then std::steady_clock should be sufficient.
You are approaching it backwards. Instead of having a variable you have to worry about updating every second, just initialize a variable at program start with the current time, and then, whenever you need to know how many seconds have elapsed, subtract that initial time from the current time. Much less overhead that way, and no need to nurse some timing-related variable update.
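A minimal sketch of that idea, wrapping the start time in a small helper (the Stopwatch name is just illustrative):
#include <chrono>
#include <iostream>
#include <thread>

// Records the start time on construction; computes elapsed seconds on demand.
struct Stopwatch {
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    double elapsed_seconds() const {
        return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
    }
};

int main()
{
    Stopwatch watch;
    std::this_thread::sleep_for(std::chrono::seconds(2)); // stand-in for real work
    std::cout << "Running for " << watch.elapsed_seconds() << " seconds\n";
    return 0;
}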
#include <stdio.h>
#include <time.h>
#include <windows.h> // Windows-only: remove this (and system("pause")) on Solaris/Unix
using namespace std;
void wait ( int seconds );
int main ()
{
time_t start, end;
double diff;
time (&start); //useful call
for (int i=0;i<10;i++) //this loop is useless, just to pass some time.
{
printf ("%s\n", ctime(&start));
wait(1);
}
time (&end);//useful call
diff = difftime(end,start);//this will give you time spent between those two calls.
printf("difference in seconds=%f",diff); //convert secs as u like
system("pause");
return 0;
}
// busy-waits for the given number of seconds, burning CPU the whole time
void wait ( int seconds )
{
clock_t endwait;
endwait = clock () + seconds * CLOCKS_PER_SEC ;
while (clock() < endwait) {}
}
This should work fine on Solaris/Unix as well; just remove the Windows references (the windows.h include and the system("pause") call).
You just need to store the date/time when the application started. Whenever you need to display how long your program has been running, get the current date/time and subtract the application's start time.