I'm writing a simple wrapper around the Win32 FILETIME structure. boost::datetime has most of what I want, except I need whatever date type I end up using to interoperate with the Windows APIs without issues.
To that end, I've decided to write my own things for doing this -- most of the operations aren't all that complicated. I'm implementing the TimeSpan-like type at this point, but I'm unsure how I'd implement FileTimeToSystemTime. I could just use the system's built-in FileTimeToSystemTime function, except FileTimeToSystemTime cannot handle negative dates -- I need to be able to represent something like "-12 seconds".
How should something like this be implemented?
Billy3
The Windows SYSTEMTIME and FILETIME data types are intended to represent a particular date and time. They are not really suitable for representing time differences. Time differences are better off stored as a simple integer giving the number of units between two SYSTEMTIMEs or FILETIMEs. The unit might be seconds, or something smaller if you need more precision.
If you need to display a difference to users, simple division and modulus can be used to compute the components.
#include <sstream>
#include <string>

std::string PrintTimeDiff(int nSecDiff)
{
    std::ostringstream os;
    if (nSecDiff < 0)
    {
        os << "-";
        nSecDiff = -nSecDiff;
    }
    int nSeconds = nSecDiff % 60;
    nSecDiff /= 60;
    int nMinutes = nSecDiff % 60;
    nSecDiff /= 60;
    int nHours = nSecDiff % 24;
    int nDays = nSecDiff / 24;
    os << nDays << " Days " << nHours << ":" << nMinutes << ":" << nSeconds;
    return os.str();
}
Assuming you don't have a problem with the structure's components all being unsigned, you could take any negative timespan, make it positive, call FileTimeToSystemTime, and then (if the original input was negative) negate the components you pick out.
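For illustration, a minimal sketch of that idea, assuming the span is stored as a signed 64-bit count of 100-ns ticks; the SpanParts type and the day computation are my own additions, since the calendar fields of SYSTEMTIME (year, month) are not meaningful for spans:
#include <windows.h>
#include <cstdint>

struct SpanParts      // illustrative container for the decomposed span
{
    long long days;
    int hours, minutes, seconds, milliseconds;
};

SpanParts DecomposeSpan(int64_t ticks)          // ticks are 100-ns units, may be negative
{
    const bool negative = ticks < 0;
    const uint64_t magnitude = negative ? static_cast<uint64_t>(-ticks)
                                        : static_cast<uint64_t>(ticks);

    FILETIME ft;
    ft.dwLowDateTime  = static_cast<DWORD>(magnitude & 0xFFFFFFFFu);
    ft.dwHighDateTime = static_cast<DWORD>(magnitude >> 32);

    SYSTEMTIME st;
    FileTimeToSystemTime(&ft, &st);   // treats the magnitude as an offset from 1601-01-01

    // Whole days come straight from the tick count; the SYSTEMTIME
    // time-of-day fields give the remainder below one day.
    const long long days = static_cast<long long>(magnitude / (10000000ULL * 86400ULL));

    SpanParts p;
    p.days         = negative ? -days : days;
    p.hours        = negative ? -st.wHour         : st.wHour;
    p.minutes      = negative ? -st.wMinute       : st.wMinute;
    p.seconds      = negative ? -st.wSecond       : st.wSecond;
    p.milliseconds = negative ? -st.wMilliseconds : st.wMilliseconds;
    return p;
}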
I see a design problem here. A time span, the difference between two times, is the same whether you measure it with system time or with file time. Win32 FileTimeToSystemTime is right not to accept negative values, because a negative date makes no sense. A period of 2 seconds is a period of 2 seconds, no matter which time zone you use.
EDIT:
A second problem: SYSTEMTIME could nominally represent a time span, but doing so would be error-prone. For example, a month is not a usable unit when measuring time spans.
Related
How can I write a C++ function which takes a long long value representing a VMS timestamp and returns the corresponding time_t value, assuming the conversion yields a valid time_t? (I'll be parsing binary data sent over the network on a commodity CentOS server, if that makes any difference.)
I've had a look into a document titled "Why Is Wednesday November 17, 1858 The Base Time For VAX/VMS" but I don't think I can write a correct implementation without testing with actual data which I don't have at hand right now, unfortunately.
If I'm not mistaken, it should be simple arithmetic of this form:
time_t vmsTimeToTimeT(long long v) {
    return v/10'000'000 - OFFSET;
}
Could somebody tell me what value to put into OFFSET ?
Things I'm concerned about:
I don't want to be bitten by my local timezone
I don't want to be bitten by the 0.5 thing (afternoon vs midnight) in the definition of modified Julian date (though it should be helping me here; modified Julian epoch and Unix Epoch should differ by a multiple of 24 hours thanks to the definition)
I tried to compute it myself with help from Boost.DateTime, only to get a mysterious negative value...
#include <boost/date_time/gregorian/gregorian.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main() {
    boost::posix_time::ptime x(
        boost::gregorian::date(1858, boost::gregorian::Nov, 17),
        boost::posix_time::time_duration(0, 0, 0) );
    boost::posix_time::ptime y(
        boost::gregorian::date(1970, boost::gregorian::Jan, 1),
        boost::posix_time::time_duration(0, 0, 0) );
    std::cout << (y - x).total_seconds() << std::endl;
    std::cout << (y > x ? "y is after x" : "y is before x") << std::endl;
}
-788250496
y is after x
I used Boost 1.60 for it:
The current implementation supports dates in the range 1400-Jan-01 to 9999-Dec-31.
Update
Crap, sizeof(total_seconds()) was 4, despite what the documentation says.
So I got 3506716800 from
auto diff = y - x;
std::cout << diff.ticks() / diff.ticks_per_second() << std::endl;
which doesn't look too wrong but... who can assure this is really correct?
Wow, you guys make it all appear to be so difficult with libraries and all.
So you read up on November-17 1858 and found out that VMS stores the time as 100nS 'clunks' since that date. Right?
Unix times are Seconds (or microseconds) since 1-jan-1970. Right?
So all you need to do is subtract the OpenVMS time value 'offset' for 1-jan-1970 from the reported OpenVMS times and divide by 10,000,000 (seconds) or 10 (microseconds).
You only need to find that value once using a trivial OpenVMS program.
Below I did not even use a dedicated program, just used the OpenVMS interactive debugger running a random executable program:
$ run tmp/debug
DBG> set rad hex
DBG> dep/date 10000 = "01-JAN-1970 00:00:00" ! Local time
DBG> examin/quad 10000
TMP\main: 007C95674C3DA5C0
DBG> examin/quad/dec 10000
TMP\main: 35067168005400000
So there is your offset, both in HEX and DECIMAL, to use as you see fit.
In the simplest form you pre-divide the incoming OpenVMS time by 10,000,000 and subtract 3506716800 (decimal) to get Epoch seconds.
Be sure to keep the math, including the subtraction, in long long ints.
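A minimal sketch of that conversion, assuming the input is a signed 64-bit OpenVMS time in 100-ns units and the math (including the subtraction) stays in 64-bit integers:
#include <ctime>

time_t vmsTimeToTimeT(long long vms)
{
    const long long kOffsetSeconds = 3506716800LL;   // 17-Nov-1858 to 1-Jan-1970
    return static_cast<time_t>(vms / 10000000LL - kOffsetSeconds);
}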
hth,
Hein.
According to this:
https://www.timeanddate.com/date/durationresult.html?d1=17&m1=11&y1=1858&d2=1&m2=jan&y2=1970
you'd want 40587 days, times 86400 seconds, makes 3506716800 as the offset in your calculation.
Using this free open-source library which extends <chrono> to calendrical computations, I can confirm your figure of the offset in seconds:
#include "chrono_io.h"
#include "date.h"
#include <iostream>
int
main()
{
    using namespace date;
    using namespace std::chrono;
    using namespace std;
    seconds offset = sys_days{jan/1/1970} - sys_days{nov/17/1858};
    cout << offset << '\n';
}
Output:
3506716800s
I have a program that reads the current time from the system clock and saves it to a text file. I previously used the GetSystemTime function, which worked, but the times weren't completely consistent, e.g. one of the times is 32567.789 and the next is 32567.780, which is backwards in time.
I am using this program to save the time up to 10 times a second. I read that the GetSystemTimeAsFileTime function is more accurate. My question is, how do I convert my current code to use the GetSystemTimeAsFileTime function? I tried to use the FileTimeToSystemTime function but that had the same problems.
SYSTEMTIME st;
GetSystemTime(&st);
// Note: a WORD (16 bits) would overflow for times past 18:12:15, so use a wider type here.
int sec = (st.wHour * 3600) + (st.wMinute * 60) + st.wSecond; // convert to seconds in a day
lStr.Format(_T("%d %d.%d\n"), GetFrames(), sec, st.wMilliseconds);
std::wfstream myfile;
myfile.open("time.txt", std::ios::out | std::ios::in | std::ios::app);
if (myfile.is_open())
{
    myfile.write((LPCTSTR)lStr, lStr.GetLength());
    myfile.close();
}
else
{
    lStr.Format(_T("open file failed: %d"), WSAGetLastError());
}
EDIT: To add some more info, the code captures an image from a camera 10 times every second and saves the time each image was taken into a text file. When I subtract the 1st entry of the text file from the 2nd and so on (entry 2-1, 3-2, 4-3, etc.) I get this graph, where the x axis is the entry number and the y axis is the subtracted value.
All of them should be around the 0.12 mark, which most of them are. However, you can see that a lot of them vary and some even go negative. This isn't due to the camera, because the camera has its own internal clock and that has no variations. It has something to do with capturing the system time. What I want is the most accurate method to extract the system time with the highest resolution and as little noise as possible.
Edit 2: I have taken your suggestions on board and run the program again. This is the result:
As you can see it is a lot better than before but it is still not right. I find it strange that it seems to do it very incrementally. I also just plotted the times and this is the result, where x is the entry and y is the time:
Does anyone have any idea on what could be causing the time to go out every 30 frames or so?
First of all, you want to get the FILETIME as follows:
FILETIME fileTime;
GetSystemTimeAsFileTime(&fileTime);
// Or for higher precision, use
// GetSystemTimePreciseAsFileTime(&fileTime);
According to FILETIME's documentation,
It is not recommended that you add and subtract values from the FILETIME structure to obtain relative times. Instead, you should copy the low- and high-order parts of the file time to a ULARGE_INTEGER structure, perform 64-bit arithmetic on the QuadPart member, and copy the LowPart and HighPart members into the FILETIME structure.
So, what you should do next is:
ULARGE_INTEGER theTime;
theTime.LowPart = fileTime.dwLowDateTime;
theTime.HighPart = fileTime.dwHighDateTime;
__int64 fileTime64Bit = theTime.QuadPart;
And that's it. The fileTime64Bit variable now contains the time you're looking for.
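If you then want the same "seconds in the day" figure the original code printed, one sketch (using the 100-ns tick count from above; the variable names are just for illustration) is:
// fileTime64Bit is in 100-ns units since 1601-01-01 (UTC).
unsigned long long msSinceEpoch = fileTime64Bit / 10000;            // -> milliseconds
unsigned long long msInDay      = msSinceEpoch % (86400ULL * 1000ULL);
unsigned long long secInDay     = msInDay / 1000;
unsigned long long msFraction   = msInDay % 1000;
// e.g. lStr.Format(_T("%d %llu.%03llu\n"), GetFrames(), secInDay, msFraction);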
If you want to get a SYSTEMTIME object instead, you could just do the following:
SYSTEMTIME systemTime;
FileTimeToSystemTime(&fileTime, &systemTime);
Getting the system time out of Windows with decent accuracy is something that I've had fun with, too... I discovered that Javascript code running on Chrome seemed to produce more consistent timer results than I could with C++ code, so I went looking in the Chrome source. An interesting place to start is the comments at the top of time_win.cc in the Chrome source. The links given there to a Mozilla bug and a Dr. Dobb's article are also very interesting.
Based on the Mozilla and Chrome sources, and the above links, the code I generated for my own use is here. As you can see, it's a lot of code!
The basic idea is that getting the absolute current time is quite expensive. Windows does provide a high resolution timer that's cheap to access, but that only gives you a relative, not absolute time. What my code does is split the problem up into two parts:
1) Get the system time accurately. This is in CalibrateNow(). The basic technique is to call timeBeginPeriod(1) to get accurate times, then call GetSystemTimeAsFileTime() until the result changes, which means that the timeBeginPeriod() call has had an effect. This gives us an accurate system time, but is quite an expensive operation (and the timeBeginPeriod() call can affect other processes) so we don't want to do it each time we want a time. The code also calls QueryPerformanceCounter() to get the current high resolution timer value.
bool NeedCalibration = true;
LONGLONG CalibrationFreq = 0;
LONGLONG CalibrationCountBase = 0;
ULONGLONG CalibrationTimeBase = 0;

void CalibrateNow(void)
{
    // If the timer frequency is not known, try to get it
    if (CalibrationFreq == 0)
    {
        LARGE_INTEGER freq;
        if (::QueryPerformanceFrequency(&freq) == 0)
            CalibrationFreq = -1;
        else
            CalibrationFreq = freq.QuadPart;
    }
    if (CalibrationFreq > 0)
    {
        // Get the current system time, accurate to ~1ms
        FILETIME ft1, ft2;
        ::timeBeginPeriod(1);
        ::GetSystemTimeAsFileTime(&ft1);
        do
        {
            // Loop until the value changes, so that the timeBeginPeriod() call has had an effect
            ::GetSystemTimeAsFileTime(&ft2);
        }
        while (FileTimeToValue(ft1) == FileTimeToValue(ft2));
        ::timeEndPeriod(1);

        // Get the current timer value
        LARGE_INTEGER counter;
        ::QueryPerformanceCounter(&counter);

        // Save calibration values
        CalibrationCountBase = counter.QuadPart;
        CalibrationTimeBase = FileTimeToValue(ft2);
        NeedCalibration = false;
    }
}
2) When we want the current time, get the high resolution timer by calling QueryPerformanceCounter(), and use the change in that timer since the last CalibrateNow() call to work out an accurate "now". This is in Now() in my code. This also periodically calls CalibrateNow() to ensure that the system time doesn't go backwards, or drift out.
FILETIME GetNow(void)
{
    for (int i = 0; i < 4; i++)
    {
        // Calibrate if needed, and give up if this fails
        if (NeedCalibration)
            CalibrateNow();
        if (NeedCalibration)
            break;

        // Get the current timer value and use it to compute now
        FILETIME ft;
        ::GetSystemTimeAsFileTime(&ft);
        LARGE_INTEGER counter;
        ::QueryPerformanceCounter(&counter);
        LONGLONG elapsed = ((counter.QuadPart - CalibrationCountBase) * 10000000) / CalibrationFreq;
        ULONGLONG now = CalibrationTimeBase + elapsed;

        // Don't let time go back
        static ULONGLONG lastNow = 0;
        now = max(now, lastNow);
        lastNow = now;

        // Check for clock skew
        if (LONGABS(FileTimeToValue(ft) - now) > 2 * GetTimeIncrement())
        {
            NeedCalibration = true;
            lastNow = 0;
        }

        if (!NeedCalibration)
            return ValueToFileTime(now);
    }

    // Calibration has failed to stabilize, so just use the system time
    FILETIME ft;
    ::GetSystemTimeAsFileTime(&ft);
    return ft;
}
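(The code above uses a few small helpers that aren't shown. A minimal sketch of what they might look like, assuming FILETIME values are handled as 64-bit 100-ns tick counts and the clock tick is taken from GetSystemTimeAdjustment; these definitions are assumptions, not the original ones:)
inline ULONGLONG FileTimeToValue(const FILETIME &ft)
{
    ULARGE_INTEGER v;
    v.LowPart  = ft.dwLowDateTime;
    v.HighPart = ft.dwHighDateTime;
    return v.QuadPart;                      // 100-ns units since 1601
}

inline FILETIME ValueToFileTime(ULONGLONG value)
{
    ULARGE_INTEGER v;
    v.QuadPart = value;
    FILETIME ft;
    ft.dwLowDateTime  = v.LowPart;
    ft.dwHighDateTime = v.HighPart;
    return ft;
}

#define LONGABS(x) ((LONGLONG)(x) < 0 ? -(LONGLONG)(x) : (LONGLONG)(x))

inline ULONGLONG GetTimeIncrement(void)
{
    // System clock tick in 100-ns units (often ~156,250, i.e. 15.625 ms)
    DWORD adjustment = 0, increment = 0;
    BOOL  disabled   = FALSE;
    ::GetSystemTimeAdjustment(&adjustment, &increment, &disabled);
    return increment;
}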
It's all a bit hairy but works better than I had hoped. This also seems to work well as far back on Windows as I have tested (which was Windows XP).
I believe you are looking for the GetSystemTimePreciseAsFileTime() function, or even QueryPerformanceCounter(); in short, something that is guaranteed to produce monotonic values.
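By way of illustration, a minimal sketch of reading the precise system time as a 64-bit count of 100-ns ticks (GetSystemTimePreciseAsFileTime requires Windows 8 or later; if you need strictly monotonic values, QueryPerformanceCounter is still the safer choice, since the system clock can be adjusted):
#include <windows.h>

unsigned long long PreciseSystemTime100ns()   // helper name is illustrative
{
    FILETIME ft;
    GetSystemTimePreciseAsFileTime(&ft);
    ULARGE_INTEGER v;
    v.LowPart  = ft.dwLowDateTime;
    v.HighPart = ft.dwHighDateTime;
    return v.QuadPart;   // 100-ns intervals since January 1, 1601 (UTC)
}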
Following up from here
I am trying to see whether my data is 120 seconds old or not by looking at the timestamp of the data, so I have the small piece of code below in my library project, which uses the std::chrono package:
uint64_t now = duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
bool is_old = (120 * 1000 < (now - data_holder->getTimestamp()));
// some logging to print out above values
LOG4CXX_WARN(logger, "data logging, now: " << now << ", data holder timestamp: " << data_holder->getTimestamp() << ", is_old: " << is_old << ", difference: " << (now - data_holder->getTimestamp()));
In the above code, data_holder->getTimestamp() returns a uint64_t timestamp in milliseconds.
Now when I print out the now variable, I see 433425679; the data_holder->getTimestamp() value is 1437943796841, and the difference comes out as 18446742636199180454, as shown below in the logs:
2015-07-26 13:49:56,850 WARN 0x7fd050bc9700 simple_process - data logging, now: 433425679 , data holder timestamp: 1437943796841 , is_old: 1 , difference: 18446742636199180454
Now if I convert the data holder timestamp 1437943796841 using an epoch converter, I see this:
Your time zone: 7/26/2015, 1:49:56 PM
which is exactly the same as the timestamp shown in the logs (2015-07-26 13:49:56,850 WARN), so my data doesn't look 120 seconds old. If so, why am I seeing the is_old value as 1?
It looks like the data_holder->getTimestamp() value comes from the code below in our code base, and then we compare against it for the 120-second-old data check.
// is this the problem?
struct timeval val;
gettimeofday(&val, NULL);
uint64_t time_ms = uint64_t(val.tv_sec) * 1000 + val.tv_usec / 1000;
Now after carefully reading about the various clock implementations in C++, it looks like we should use the same clock to do the comparison.
Is my code above that calculates the data_holder->getTimestamp() value the problem? Since I am not using steady_clock there, the epoch is different, and that's why I see this issue?
Now my question is: what code should I use to fix this issue? Should I use steady_clock for the data_holder->getTimestamp() code as well? If yes, what's the right way to do it?
Also, the same code works fine on an Ubuntu 12 box but not on Ubuntu 14. Everything is statically linked. The Ubuntu 12 build is compiled on Ubuntu 12 with GCC 4.7.3, and the Ubuntu 14 build is compiled on Ubuntu 14 with GCC 4.8.2.
Use the same clock for both. If your timestamps need to maintain meaning across runs of your application, you must use system_clock, not steady_clock. If your timestamps only have meaning within a single run you can use steady_clock.
steady_clock is like a "stopwatch". You can time stuff with it, but you can't get the current time of day with it.
DataHolder::DataHolder()
    : timestamp_{system_clock::now()}
{}

system_clock::time_point
DataHolder::getTimestamp()
{
    return timestamp_;
}
bool is_old = minutes{2} < system_clock::now() - data_holder->getTimestamp();
In C++14 you can shorten this to:
bool is_old = 2min < system_clock::now() - data_holder->getTimestamp();
Do use <chrono>.
Don't use count() or time_since_epoch() (except for debugging purposes).
Don't use conversion factors such as 1000 or 120.
Violation of the guidelines above will turn compile-time errors into run-time errors. Compile-time errors are your friend. <chrono> catches many errors at compile-time. Once you escape the type-safety of <chrono> (e.g. by using count()), you are programming in the assembly language equivalent of time-keeping. And the space/time overhead of <chrono>'s type-safety system is zero.
You should definitely use the same time function for both.
I would recommend changing either the way the getTimestamp() value is created (e.g. by using chrono::system_clock) or the way you compare the timestamp.
The clean way would be to change it like this:
struct timeval val;
gettimeofday(&val, NULL);
uint64_t now = uint64_t(val.tv_sec) * 1000 + val.tv_usec / 1000;
bool is_old = (120 * 1000 < (now - data_holder->getTimestamp()));
Or the other way around
1. Change the way the getTimestamp() value is created:
long long time_ms = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
2. Adjust the compare function:
long long now = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
bool is_old = (120 * 1000 < (now - data_holder->getTimestamp()));
I have found a function to get milliseconds since the Mac was started:
U32 Platform::getRealMilliseconds()
{
    // Duration is a S32 value.
    // if negative, it is in microseconds.
    // if positive, it is in milliseconds.
    Duration durTime = AbsoluteToDuration(UpTime());
    U32 ret;
    if (durTime < 0)
        ret = durTime / -1000;
    else
        ret = durTime;
    return ret;
}
The problem is that after ~20 days AbsoluteToDuration returns INT_MAX all the time until the Mac is rebooted.
I have tried the method below; it worked, but it looks like gettimeofday takes more time and slows the game down a bit:
timeval tim;
gettimeofday(&tim, NULL);
U32 ret = ((tim.tv_sec) * 1000 + tim.tv_usec/1000.0) + 0.5;
Is there a better way to get the number of milliseconds elapsed since some epoch (preferably since the app started)?
Thanks!
Your real problem is that you are trying to fit an uptime-in-milliseconds value into a 32-bit integer. If you do that your value will always wrap back to zero (or saturate) in 49 days or less, no matter how you obtain the value.
One possible solution would be to track time values with a 64-bit integer instead; that way the day of reckoning gets postponed for a few hundred years and so you don't have to worry about the problem. Here's a MacOS/X implementation of that:
uint64_t GetTimeInMillisecondsSinceBoot()
{
    return UnsignedWideToUInt64(AbsoluteToNanoseconds(UpTime())) / 1000000;
}
... or if you don't want to return a 64-bit time value, the next-best thing would be to record the current time-in-milliseconds value when your program starts, and then always subtract that value from the values you return. That way things won't break until your own program has been running for at least 49 days, which I suppose is unlikely for a game.
uint32_t GetTimeInMillisecondsSinceProgramStart()
{
    static uint64_t _firstTimeMillis = GetTimeInMillisecondsSinceBoot();
    uint64_t nowMillis = GetTimeInMillisecondsSinceBoot();
    return (uint32_t)(nowMillis - _firstTimeMillis);
}
My preferred method is mach_absolute_time - see this tech note - I use the second method, i.e. mach_absolute_time to get time stamps and mach_timebase_info to get the constants needed to convert the difference between time stamps into an actual time value (with nanosecond resolution).
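For reference, a minimal sketch of that approach, assuming the timestamps come from mach_absolute_time(); the ElapsedMilliseconds helper name is just for illustration:
#include <mach/mach_time.h>
#include <cstdint>

uint64_t ElapsedMilliseconds(uint64_t startTicks, uint64_t endTicks)
{
    static mach_timebase_info_data_t tb = {0, 0};
    if (tb.denom == 0)
        mach_timebase_info(&tb);                     // fills numer/denom once
    uint64_t elapsedNs = (endTicks - startTicks) * tb.numer / tb.denom;
    return elapsedNs / 1000000;                      // ns -> ms
}

// Usage: uint64_t t0 = mach_absolute_time(); ... uint64_t t1 = mach_absolute_time();
//        uint64_t ms = ElapsedMilliseconds(t0, t1);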
I need some way in c++ to keep track of the number of milliseconds since program execution. And I need the precision to be in milliseconds. (In my googling, I've found lots of folks that said to include time.h and then multiply the output of time() by 1000 ... this won't work.)
clock has been suggested a number of times. This has two problems. First of all, it often doesn't have a resolution even close to a millisecond (10-20 ms is probably more common). Second, some implementations of it (e.g., Unix and similar) return CPU time, while others (E.g., Windows) return wall time.
You haven't really said whether you want wall time or CPU time, which makes it hard to give a really good answer. On Windows, you could use GetProcessTimes. That will give you the kernel and user CPU times directly. It will also tell you when the process was created, so if you want milliseconds of wall time since process creation, you can subtract the process creation time from the current time (GetSystemTime). QueryPerformanceCounter has also been mentioned. This has a few oddities of its own -- for example, in some implementations it retrieves time from the CPU's cycle counter, so its frequency varies when/if the CPU speed changes. Other implementations read from the motherboard's 1.024 MHz timer, which does not vary with the CPU speed (and the conditions under which each is used aren't entirely obvious).
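As a rough sketch of the GetProcessTimes route (the helper name is illustrative), assuming you want the combined user plus kernel CPU time of the current process in milliseconds:
#include <windows.h>

unsigned long long ProcessCpuTimeMs()
{
    FILETIME creation, exit, kernel, user;
    if (!GetProcessTimes(GetCurrentProcess(), &creation, &exit, &kernel, &user))
        return 0;   // call failed; real code should report the error

    // The kernel and user values are durations in 100-ns units, so plain
    // 64-bit arithmetic is enough here.
    ULARGE_INTEGER k, u;
    k.LowPart = kernel.dwLowDateTime;  k.HighPart = kernel.dwHighDateTime;
    u.LowPart = user.dwLowDateTime;    u.HighPart = user.dwHighDateTime;
    return (k.QuadPart + u.QuadPart) / 10000;    // 100-ns ticks -> ms
}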
On Unix, you can use gettimeofday to just get the wall time with (at least the possibility of) relatively high precision. If you want time for a process, you can use times or getrusage (the latter is newer and gives more complete information that may also be more precise).
Bottom line: as I said in my comment, there's no way to get what you want portably. Since you haven't said whether you want CPU time or wall time, even for a specific system, there's not one right answer. The one you've "accepted" (clock()) has the virtue of being available on essentially any system, but what it returns also varies just about the most widely.
See std::clock()
Include time.h, and then use the clock() function. It returns the number of clock ticks elapsed since the program was launched. Just divide it by CLOCKS_PER_SEC to obtain the number of seconds; you can then multiply by 1000 to obtain the number of milliseconds.
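A minimal sketch of that calculation (widening to long long before the multiply so a 32-bit clock_t doesn't overflow):
#include <ctime>

long long ElapsedMillis()
{
    // CPU time used by the program, in milliseconds (see the caveats
    // about clock() in the other answers).
    return static_cast<long long>(std::clock()) * 1000 / CLOCKS_PER_SEC;
}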
Here is a cross-platform solution. This code was used for some kind of benchmarking:
#ifdef WIN32
#include <windows.h>
LARGE_INTEGER g_llFrequency = {0};
BOOL g_bQueryResult = QueryPerformanceFrequency(&g_llFrequency);
#else
#include <sys/time.h>
#endif
//...
// Returns a time stamp in (approximately) microseconds on both branches.
long long osQueryPerfomance()
{
#ifdef WIN32
    LARGE_INTEGER llPerf = {0};
    QueryPerformanceCounter(&llPerf);
    return llPerf.QuadPart * 1000ll / (g_llFrequency.QuadPart / 1000ll);
#else
    struct timeval stTimeVal;
    gettimeofday(&stTimeVal, NULL);
    return stTimeVal.tv_sec * 1000000ll + stTimeVal.tv_usec;
#endif
}
The most portable way is to use the clock function. It usually reports the time that your program has been using the processor, or an approximation thereof. Note however the following:
The resolution is not very good for GNU systems. That's really a pity.
Take care to cast everything to double before doing divisions and assignments.
The counter is held as a 32-bit number on 32-bit GNU systems, which can be pretty annoying for long-running programs.
There are alternatives using "wall time" which give better resolution, both in Windows and Linux. But as the libc manual states: If you're trying to optimize your program or measure its efficiency, it's very useful to know how much processor time it uses. For that, calendar time and elapsed times are useless because a process may spend time waiting for I/O or for other processes to use the CPU.
Here is a C++0x solution, and an example of why clock() might not do what you think it does.
#include <chrono>
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <unistd.h>   // for sleep()

int main()
{
    // Note: std::chrono::monotonic_clock in the C++0x drafts was renamed
    // std::chrono::steady_clock in C++11.
    auto start1 = std::chrono::steady_clock::now();
    auto start2 = std::clock();

    sleep(1);
    for (int i = 0; i < 100000000; ++i);

    auto end1 = std::chrono::steady_clock::now();
    auto end2 = std::clock();

    auto delta1 = end1 - start1;
    auto delta2 = end2 - start2;

    std::cout << "chrono: " << std::chrono::duration_cast<std::chrono::duration<float>>(delta1).count() << std::endl;
    std::cout << "clock: " << static_cast<float>(delta2) / CLOCKS_PER_SEC << std::endl;
}
On my system this outputs:
chrono: 1.36839
clock: 0.36
You'll notice the clock() method is missing a second. An astute observer might also notice that clock() looks to have less resolution. On my system it's ticking by in 12 millisecond increments, terrible resolution.
If you are unable or unwilling to use C++0x, take a look at Boost.DateTime's ptime microsec_clock::universal_time().
This isn't C++ specific (nor portable), but you can do:
SYSTEMTIME systemDT;
In Windows.
From there, you can access each member of the systemDT struct.
You can record the time when the program started and compare the current time to the recorded time (systemDT versus systemDTtemp, for instance).
To refresh, you can call GetLocalTime(&systemDT);
To access each member, you would do systemDT.wHour, systemDT.wMinute, systemDT.wMilliseconds.
See the SYSTEMTIME documentation to get more information.
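A rough sketch of that approach, with hypothetical helper names; note that this simple version only looks at the time-of-day fields, so it breaks if the clock rolls past midnight:
#include <windows.h>

// Milliseconds elapsed since midnight, from the SYSTEMTIME fields.
static DWORD MillisOfDay(const SYSTEMTIME &st)
{
    return ((st.wHour * 60 + st.wMinute) * 60 + st.wSecond) * 1000 + st.wMilliseconds;
}

// Milliseconds elapsed since a previously recorded GetLocalTime() result.
DWORD MillisSince(const SYSTEMTIME &start)
{
    SYSTEMTIME now;
    GetLocalTime(&now);
    return MillisOfDay(now) - MillisOfDay(start);
}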
Do you want wall clock time, CPU time, or some other measurement? Also, what platform is this? There is no universally portable way to get more precision than time() and clock() give you, but...
on most Unix systems, you can use gettimeofday() and/or clock_gettime(), which give at least microsecond precision and access to a variety of timers (a minimal clock_gettime() sketch follows below);
I'm not nearly as familiar with Windows, but one of these functions probably does what you want.
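A minimal sketch of the clock_gettime() option, using CLOCK_MONOTONIC so the value never jumps backwards:
#include <time.h>
#include <stdint.h>

uint64_t MonotonicMillis(void)   // helper name is illustrative
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}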
You can try this code (taken from the Stockfish chess engine source code (GPL)):
#include <iostream>
#include <cstdio>
#include <cstdint>

#if !defined(_WIN32) && !defined(_WIN64) // Linux - Unix
#  include <sys/time.h>
typedef timeval sys_time_t;
inline void system_time(sys_time_t* t) {
    gettimeofday(t, NULL);
}
inline long long time_to_msec(const sys_time_t& t) {
    return t.tv_sec * 1000LL + t.tv_usec / 1000;
}
#else // Windows and MinGW
#  include <sys/timeb.h>
typedef _timeb sys_time_t;
inline void system_time(sys_time_t* t) { _ftime(t); }
inline long long time_to_msec(const sys_time_t& t) {
    return t.time * 1000LL + t.millitm;
}
#endif

struct Time {
    void restart() { system_time(&t); }
    uint64_t msec() const { return time_to_msec(t); }
    long long elapsed() const {
        return static_cast<long long>(current_time().msec() - time_to_msec(t));
    }
    static Time current_time() { Time t; t.restart(); return t; }
private:
    sys_time_t t;
};

int main() {
    sys_time_t t;
    system_time(&t);
    long long currentTimeMs = time_to_msec(t);
    std::cout << "currentTimeMs:" << currentTimeMs << std::endl;

    Time time = Time::current_time();
    for (int i = 0; i < 1000000; i++) {
        // Do something
    }
    long long e = time.elapsed();
    std::cout << "time elapsed:" << e << std::endl;
    getchar(); // wait for keyboard input
}