Timestamp of time(0) at multiple places in a C++ program

Why is the timestamp of time(0) at multiple places in a C++ program the same value?
Ex:
#include <iostream>
#include <ctime>
using namespace std;

int main() {
    cout << time(0) << endl;
    cout << time(0) << endl;
    cout << time(0) << endl;
    cout << time(0) << endl;
}
All of the values above are the same. Is this because the program is executed at such a fast speed that the time values in the above example are all the same?
Could someone help me out? Thanks!

The resolution of the time() function (one second) isn't fine-grained enough to produce a different value for each call you make; the CPU executes all four calls well within a single second.
You might insert std::this_thread::sleep_for calls between them to check what timing resolution fits your needs on the hardware and OS you have at hand.
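For example, a minimal sketch (the one-second sleep is an arbitrary choice; anything at or above time()'s one-second resolution will do):

#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>

int main()
{
    std::cout << std::time(0) << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(1)); // longer than time()'s resolution
    std::cout << std::time(0) << std::endl; // now prints a larger value
}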

The time(0) function returns the current time in seconds.
The code you posted runs entirely within a single second, so every call prints the same value.
The following code outputs the current time continuously for 3 seconds.
If you run it, you will see a lot of numbers, but if you look closely you can see that the value changes three times.
#include <iostream>
#include <ctime>

int main()
{
    time_t s = std::time(0); // time_t is a 64-bit integer on Windows 10 64-bit
    time_t n;
    do {
        n = std::time(0);
        std::cout << n << " ";
    } while ((s + 3) > n); // repeat until 3 seconds have passed
    return 0;
}

Related

Why is there a big difference in measured elapsed time depending on where you measure?

My question is about how the elapsed time differs depending on where it is measured.
To find which part of my code accounts for the largest portion of the total elapsed time, I used a clock function.
source: calculating time elapsed in C++
First, I put the timing code at the start and end of the main function.
(Actually, there are some variable declarations, but I deleted them for readability.) I expected this to measure the total elapsed time.
int main() {
    using clock = std::chrono::system_clock;
    using sec = std::chrono::duration<double>;
    const auto before = clock::now();
    ...
    std::cin >> a >> b;
    lgstCommSubStr findingLCSS(a, b, numberofHT, cardi, SubsA);
    const sec duration = clock::now() - before;
    std::cout << "It took " << duration.count() << "s in main function" << std::endl;
    return 0;
}
Second, I put the timing code inside the class findingLCSS. This class finds the longest common substring between two strings; it is the class that actually runs my algorithm, and the work is done in its constructor. Therefore, when the object is constructed, it produces the longest-common-substring information. I think this elapsed time is the actual algorithm running time.
public:
    lgstCommSubStr(string a, string b, int numHT, int m, vector<int>** SA):
        strA(a), strB(b), hashTsize(numHT), SubstringsA(SA),
        primeNs(numHT), xs(numHT),
        A_hashValues(numHT), B_hashValues(numHT),
        av(numHT), bv(numHT), cardi(m)
    {
        using clock = std::chrono::system_clock;
        using sec = std::chrono::duration<double>;
        const auto before = clock::now();
        ...
        answer ans = binarySearch(a, b, numHT);
        std::cout << ans.i << " " << ans.j << " " << ans.length << "\n";
        const sec duration = clock::now() - before;
        std::cout << "It took " << duration.count() << "s in the class" << std::endl;
    }
The output is as below.
tool coolbox
1 1 3
It took 0.002992s in the class
It took 4.13945s in main function
It means 'tool' and 'coolbox' have a substring 'ool'
But I am confused by the big difference between the two times.
Since the first time is the total time and the second is the algorithm's running time, the difference should be the time spent on variable declarations.
But that looks weird, because declaring variables should take a very short time.
Is there a mistake in how I measure the elapsed time?
Please give me a hint for troubleshooting. Thank you for reading!
Taking a snapshot of the time before std::cin >> a >> b; leads to an inaccurate measurement as you're likely starting the clock before you type in the values for a and b. Generally you want to put your timing as close as possible to the thing you're actually measuring.
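For example, restructured (a sketch reusing the names from the question; the elided declarations are assumed to come earlier):

std::cin >> a >> b;               // read the input first...
const auto before = clock::now(); // ...then start the clock
lgstCommSubStr findingLCSS(a, b, numberofHT, cardi, SubsA);
const sec duration = clock::now() - before;
std::cout << "It took " << duration.count() << "s in main function" << std::endl;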

Not truly Random

For some reason the code I'm about to post below is not truly random... and I have used srand() to attempt to make it random. I don't know why it's acting weird...
#include <iostream>
#include <cstdlib> // rand, srand
#include <ctime>   // time
#include <vector>
#include "../Header Files/SinglePlayer.h"
using namespace std;

SinglePlayer::SinglePlayer()
{
}

int myRand(int low, int high)
{
    srand(time(NULL));
    return rand() % (high - low + 1) + low;
}
void SinglePlayer::startGame()
{
    cout << "Starting Single Player........." << endl;
    cout << "Starting out with two cards...." << endl;
    int randomCardStarterOnePlayer = myRand(0, 10);
    int randomCardStarterTwoPlayer = myRand(0, 10);
    int randomCardStarterOneAI = myRand(0, 10);
    int randomCardStarterTwoAI = myRand(0, 10);
    this->calculateRandomStarter(randomCardStarterOnePlayer,
                                 randomCardStarterTwoPlayer,
                                 randomCardStarterOneAI,
                                 randomCardStarterTwoAI);
    cout << "You Start out with " << amountPlayer << endl;
    cout << "Computer Starts out with " << amountAI << endl;
}

void SinglePlayer::calculateRandomStarter(int randomOnePlayer, int randomTwoPlayer, int randomOneAI, int randomTwoAI)
{
    amountPlayer = amountPlayer + randomOnePlayer + randomTwoPlayer;
    playerCards.push_back(randomOnePlayer);
    playerCards.push_back(randomTwoPlayer);
    amountAI = amountAI + randomOneAI + randomTwoAI;
    AICards.push_back(randomOneAI);
    AICards.push_back(randomTwoAI);
}

SinglePlayer::~SinglePlayer()
{
}
Outcome:
~~~~~~~~~~BLACKJACK~~~~~~~~~~~
Do you want to play single player, or multiplayer? (Enter 0 for single
player, 1 for multiplayer)
0
Starting Single Player.........
Starting out with two cards....
You Start out with 2
Computer Starts out with 2
You can see the player and computer start with the same number... and that always happens for some reason. I can't seem to spot the problem, please help.
time(NULL) returns the time in seconds, and because you set a new seed every time you generate a new number, you probably (in most cases) set the same seed every time.
Move:
srand(time(NULL));
to the start of main, or somewhere it will be called only once.
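For example (a sketch; your main() is not shown in the post, so the usage below is an assumption):

#include <cstdlib> // srand, rand
#include <ctime>   // time
#include "../Header Files/SinglePlayer.h"

int main()
{
    srand(time(NULL)); // seed exactly once for the whole program
    SinglePlayer game;
    game.startGame();
    return 0;
}

Then delete the srand(time(NULL)) line from myRand(), so the generator is never re-seeded.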
It sounds like time(NULL) in your code returns something constant and does not call std::time(NULL) as you may expect. If it did, rand() would be seeded properly and generate varying numbers.
Try printing the output of time(NULL) and check whether you actually get the number of seconds elapsed since the epoch. If not, make sure you include <ctime> and call the fully qualified srand(std::time(NULL)).

How do I properly use "time.h" to return a number greater than zero?

I am trying to calculate the time a certain function takes to run
#include <iostream>
#include <cstdlib>
#include <ctime> // the C++ form of "time.h"

int myFunction(int n)
{
    .............
}

int main()
{
    int n;
    clock_t start; // clock() returns clock_t, not time_t
    std::cout << "What number would you like to enter ";
    std::cout << std::endl;
    std::cin >> n;
    start = clock();
    std::cout << myFunction(n) << std::endl;
    std::cout << "Time it took: " << (clock() - start) / (double)(CLOCKS_PER_SEC / 1000) << std::endl;
    std::cout << std::endl;
    return 0;
}
This works fine in Xcode (getting numbers such as 4.2, 2.6, ...), but not on a Linux-based server, where I always get 0. Any ideas why that is and how to fix it?
The "tick" of clock may be more than 1/CLOCKS_PER_SEC seconds, for example it could be 10ms, 15.832761ms or 32 microseconds.
If the time consumed by your code is smaller than this "tick", then the time taken will appear to be zero.
There's no simple way to find out what that tick is, other than calling clock() repeatedly until the return value differs from the previous one. That may not be entirely reliable; if the clock tick is VERY small the result may not be accurate in that direction, but if the tick is fairly long you should be able to find it out.
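A rough sketch of that probing approach (with the reliability caveats just mentioned):

#include <ctime>
#include <iostream>

int main()
{
    clock_t first = clock();
    clock_t next;
    do {
        next = clock(); // spin until the reported value changes
    } while (next == first);
    // the difference is roughly one observable tick
    std::cout << "approx. tick: "
              << (next - first) * 1000.0 / CLOCKS_PER_SEC
              << " ms" << std::endl;
}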
For measuring very short times (a few milliseconds), and assuming the function is entirely CPU/memory-bound and not spending time waiting for file I/O or sending/receiving packets over a network, std::chrono can be used to measure the time. For extremely short times, the processor's time-stamp counter can also be used, although that can be quite tricky because it varies in speed depending on load and can have different values between different cores.
In my compiler project, I'm using this to measure the time of things:
This part goes into a header:
class TimeTraceImpl; // forward declaration; defined in the source file
extern bool timetrace; // global switch that turns timing on or off

class TimeTrace
{
public:
    TimeTrace(const char *func) : impl(0)
    {
        if (timetrace)
        {
            createImpl(func);
        }
    }
    ~TimeTrace() { if (impl) destroyImpl(); }
private:
    void createImpl(const char *func);
    void destroyImpl();
    TimeTraceImpl *impl;
};
This goes into a source file.
#include <chrono>
#include <cstdint>
#include <iomanip>
#include <iostream>
// (plus the header above, which declares TimeTrace)

class TimeTraceImpl
{
public:
    TimeTraceImpl(const char *func) : func(func)
    {
        start = std::chrono::steady_clock::now();
    }
    ~TimeTraceImpl()
    {
        end = std::chrono::steady_clock::now();
        uint64_t elapsed = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
        std::cerr << "Time for " << func << " "
                  << std::fixed << std::setprecision(3) << elapsed / 1000.0 << " ms" << std::endl;
    }
private:
    std::chrono::time_point<std::chrono::steady_clock> start, end;
    const char *func;
};

void TimeTrace::createImpl(const char *func)
{
    impl = new TimeTraceImpl(func);
}

void TimeTrace::destroyImpl()
{
    delete impl;
}
The reason for the rather complex pImpl implementation is that I don't want to burden the code with extra work when timing is turned off (timetrace is false).
Of course, the smallest actual tick of std::chrono also varies, but in most Linux implementations it will be nanoseconds or some small multiple thereof, so (much) better precision than clock().
The drawback is that it measures elapsed wall-clock time, not CPU usage. That is fine when the bottleneck is the CPU and memory, but not for things that depend on external hardware to perform something [unless you actually WANT that measurement].
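Usage is then just a matter of constructing a TimeTrace at the top of whatever scope you want to measure; the destructor prints the elapsed time when the scope exits. A sketch (the function name and the definition of the timetrace flag are illustrative assumptions, since they are not shown above):

bool timetrace = true; // global switch, enabled here for illustration

void compilePass()
{
    TimeTrace trace(__func__); // prints "Time for compilePass ... ms" on scope exit
    // ... work to be measured ...
}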

C++ timing events console bug?

I am trying to write a class that will be able to time events using QueryPerformanceCounter in C++.
The idea is that you create a timer object, give a function a time in double format, and it counts until that time has passed and then does stuff. This class will ideally be used for timing things in a game (having a timer that fires 60 times per second, for example).

When I compile this code, though, it just prints 0's to the console, seemingly forever. But I noticed a bug I can't understand: if I click on the scroll bar of the console window and hold it, the timer actually counts properly. If I enter 5.0, for example, then quickly click and hold the scroll bar for 5 seconds or longer, the program prints 'Done!!!' when I let go. So why doesn't it count properly when I just let it print the elapsed time to the console? Is there a glitch with printing to the console, or is something wrong with my timing code? Below is the code:
#include <iostream>
#include <iomanip>
#include "windows.h"
using namespace std;

int main()
{
    setprecision(10); // I tried to see if stream precision was the problem, but I don't think it is
    cout << "hello! lets time something..." << endl;
    bool timing = 0;      // a switch to turn the timer on and off
    LARGE_INTEGER T1, T2; // the timestamps to count
    LARGE_INTEGER freq;   // ticks per second, for converting stamp differences to seconds
    QueryPerformanceFrequency(&freq); // gets the frequency from the computer
    // mil.QuadPart = freq.QuadPart / 1000; // not used
    double ellapsedtime = 0, desiredtime; // enter a value to count up to, in seconds
    // if you entered 4.5 for example, then it should wait for 4.5 seconds
    cout << "enter the amount of time you would like to wait for in seconds (in double format.)!!" << endl;
    cin >> desiredtime;
    QueryPerformanceCounter(&T1); // gets the first stamp value
    timing = 1; // switches the timer on
    while (timing)
    {
        QueryPerformanceCounter(&T2); // gets another stamp value
        ellapsedtime += (T2.QuadPart - T1.QuadPart) / freq.QuadPart; // measures the difference between the two stamp
        // values and then divides by the frequency to get how many seconds have elapsed
        cout << ellapsedtime << endl;
        T1.QuadPart = T2.QuadPart; // assigns the value of the second stamp to the first one, so that we can measure the
        // difference between them again and again
        if (ellapsedtime >= desiredtime) // checks if the elapsed time is bigger than or equal to the desired time,
        // and if it is, prints done and turns the timer off
        {
            cout << "done!!!" << endl;
            timing = 0; // breaks the loop
        }
    }
    return 0;
}
You should store in ellapsedtime the number of microseconds elapsed since the first call to QueryPerformanceCounter, and you should not overwrite the first time stamp.
Working code:
// gets another stamp value
QueryPerformanceCounter(&T2);
// total ticks since the first stamp (T1 is never overwritten)
ellapsedtime = (double)(T2.QuadPart - T1.QuadPart);
cout << "number of ticks " << ellapsedtime << endl;
// convert ticks to microseconds
ellapsedtime *= 1000000.;
ellapsedtime /= freq.QuadPart;
cout << "number of microseconds " << ellapsedtime << endl;
// checks if the elapsed time is bigger than or equal to the desired time
if (ellapsedtime / 1000000. >= desiredtime) {
    cout << "done!!!" << endl;
    timing = 0; // breaks the loop
}
Be sure to read: Acquiring high-resolution time stamps

C++ clock measures time incorrectly

I have a program which reads two input files. The first file contains some random words, which are put into a BST and an AVL tree. The program then looks for the words listed in the second file, reports whether they exist in the trees, and writes an output file with the information gathered. While doing this, the program prints out the time spent finding each item. However, the program does not seem to be measuring the time spent at all.
BST* b = new BST();
AVLTree* t = new AVLTree();
string s;
ifstream in;
in.open(argv[1]);
while (!in.eof())
{
    in >> s;
    b->insert(s);
    t->insert(s);
}
ifstream q;
q.open(argv[2]);
ofstream out;
out.open(argv[3]);
int bstItem = 0;
int avlItem = 0;
float diff1 = 0;
float diff2 = 0;
clock_t t1, t1e, t2, t2e;
while (!q.eof())
{
    q >> s;
    t1 = clock();
    bstItem = b->findItem(s);
    t1e = clock();
    diff1 = (float)(t1e - t1) / CLOCKS_PER_SEC;
    t2 = clock();
    avlItem = t->findItem(s);
    t2e = clock();
    diff2 = (float)(t2e - t2) / CLOCKS_PER_SEC;
    if (avlItem == 0 && bstItem == 0)
        cout << "Query " << s << " not found in " << diff1 << " microseconds in BST, " << diff2 << " microseconds in AVL" << endl;
    else
        cout << "Query " << s << " found in " << diff1 << " microseconds in BST, " << diff2 << " microseconds in AVL" << endl;
    out << bstItem << " " << avlItem << " " << s << "\n";
}
The clock() value I get just before entering the while loop and just after finishing it is exactly the same. So it appears as if the program does not even run the while loop, and it prints 0. I know that this is not the case, since the program takes around 10 seconds to finish, as it should. Also, the output file contains correct results, so bad findItem() functions are not the explanation either.
I did a bit of research on Stack Overflow and saw that many people experience the same problem, but none of the answers I read solved it.
I solved my problem using a higher-resolution clock, though the clock resolution was not my problem: I used clock_gettime() from time.h. As far as I know, clock sources with a higher resolution than clock() are platform dependent, and this particular method is only available on Linux. I still haven't figured out why I wasn't able to obtain sensible results from clock(), but I suspect platform dependency again.
An important note: using clock_gettime() requires you to link in the POSIX real-time extensions when compiling the code.
So you should do:
g++ a.cpp b.cpp c.cpp -lrt -o myProg
where -lrt is the flag that links in the POSIX real-time extensions.
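A minimal sketch of that approach (the choice of CLOCK_MONOTONIC is mine for illustration; other clock IDs exist):

#include <ctime>
#include <iostream>

int main()
{
    timespec ts1, ts2;
    clock_gettime(CLOCK_MONOTONIC, &ts1);
    // ... code to measure ...
    clock_gettime(CLOCK_MONOTONIC, &ts2);
    double elapsed = (ts2.tv_sec - ts1.tv_sec)
                   + (ts2.tv_nsec - ts1.tv_nsec) / 1e9;
    std::cout << "elapsed: " << elapsed << " s" << std::endl;
}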
If (t1e - t1) is less than CLOCKS_PER_SEC, integer division truncates the result to 0. Cast CLOCKS_PER_SEC to float so the division happens in floating point:
diff1 = (t1e - t1)/((float)CLOCKS_PER_SEC);
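Alternatively, std::chrono::steady_clock sidesteps clock()'s coarse granularity entirely. A sketch around one of the lookups in the question:

#include <chrono>

// inside the query loop:
auto start = std::chrono::steady_clock::now();
bstItem = b->findItem(s);
auto stop = std::chrono::steady_clock::now();
double micros = std::chrono::duration<double, std::micro>(stop - start).count();
cout << "BST lookup took " << micros << " microseconds" << endl;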