How to get the time in milliseconds in C++

In Java you can do this:
long now = (new Date()).getTime();
How can I do the same but in C++?

Because C++0x is awesome:
namespace sc = std::chrono;
auto time = sc::system_clock::now(); // get the current time
auto since_epoch = time.time_since_epoch(); // get the duration since epoch
// system_clock's representation and tick period are implementation-defined,
// but this duration_cast will do the right conversion either way
auto millis = sc::duration_cast<sc::milliseconds>(since_epoch);
long now = millis.count(); // just like Java's (new Date()).getTime()
This works with gcc 4.4+. Compile it with -std=c++0x. I don't know if VS2010 implements std::chrono yet.
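On a conforming C++11 compiler the same thing collapses into a single expression; here is a minimal sketch (the helper name millis_since_epoch is just for illustration):
#include <chrono>

long long millis_since_epoch() {
    using namespace std::chrono;
    // time_point -> duration since epoch -> milliseconds -> raw count
    return duration_cast<milliseconds>(
               system_clock::now().time_since_epoch()).count();
}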

There is no such method in standard C++ (in standard C++ there is only second accuracy, not milliseconds). You can do it in non-portable ways, but since you didn't specify, I will assume that you want a portable solution. Your best bet, I would say, is the Boost function microsec_clock::local_time().
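A sketch of what that looks like in practice (assuming Boost.Date_Time is installed; universal_time() is used here instead of local_time() so that subtracting a 1970-01-01 epoch ptime gives a time-zone-safe millisecond count):
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main() {
    namespace pt = boost::posix_time;
    // microsec_clock provides sub-second resolution
    pt::ptime now = pt::microsec_clock::universal_time();
    pt::ptime epoch(boost::gregorian::date(1970, 1, 1));
    long long ms = (now - epoch).total_milliseconds();
    std::cout << ms << " ms since the Unix epoch\n";
    return 0;
}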

I like to have a function called time_ms defined as follows:
#include <stdint.h> // for int64_t
// Used to measure intervals and absolute times
typedef int64_t msec_t;
// Get current time in milliseconds from the Epoch (Unix)
// or the time the system started (Windows).
msec_t time_ms(void);
The implementation below should work on Windows as well as on Unix-like systems.
#if defined(_WIN32)
#include <windows.h>
// timeGetTime() requires linking against winmm.lib
msec_t time_ms(void)
{
    return timeGetTime();
}
#else
#include <sys/time.h>
msec_t time_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (msec_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}
#endif
Note that the time returned by the Windows branch is milliseconds since the system started, while the time returned by the Unix branch is milliseconds since 1970. Thus, if you use this code, only rely on differences between times, not the absolute time itself.
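For instance, interval timing with this helper could look like the following sketch (do_work is a hypothetical placeholder for the code being measured):
#include <stdint.h>
#include <stdio.h>

typedef int64_t msec_t;
msec_t time_ms(void); // defined above

void do_work(void); // hypothetical workload to be timed

void measure(void)
{
    msec_t start = time_ms();
    do_work();
    msec_t elapsed = time_ms() - start; // a difference, so epoch-independent
    printf("do_work took %lld ms\n", (long long)elapsed);
}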

You can try this code (taken from the Stockfish chess engine source code (GPL)):
#include <iostream>
#include <cstdio> // for getchar()
#if !defined(_WIN32) && !defined(_WIN64) // Linux - Unix
#include <sys/time.h>
typedef timeval sys_time_t;
inline void system_time(sys_time_t* t) {
    gettimeofday(t, NULL);
}
inline long long time_to_msec(const sys_time_t& t) {
    return t.tv_sec * 1000LL + t.tv_usec / 1000;
}
#else // Windows and MinGW
#include <sys/timeb.h>
typedef _timeb sys_time_t;
inline void system_time(sys_time_t* t) { _ftime(t); }
inline long long time_to_msec(const sys_time_t& t) {
    return t.time * 1000LL + t.millitm;
}
#endif
int main() {
    sys_time_t t;
    system_time(&t);
    long long currentTimeMs = time_to_msec(t);
    std::cout << "currentTimeMs:" << currentTimeMs << std::endl;
    getchar(); // wait for keyboard input
    return 0;
}

Standard C++ does not have a time function with subsecond precision.
However, almost every operating system does, so you have to write OS-dependent code. A POSIX sketch follows the list below.
Win32:
GetSystemTime()
GetSystemTimeAsFileTime()
Unix/POSIX:
gettimeofday()
clock_gettime()
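For example, a minimal POSIX sketch with clock_gettime (CLOCK_REALTIME counts from the Unix epoch; older glibc versions may require linking with -lrt):
#include <time.h>  // clock_gettime, struct timespec (POSIX)
#include <stdio.h>

long long now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts); // wall-clock time since the epoch
    return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
}

int main(void) {
    printf("%lld ms since the epoch\n", now_ms());
    return 0;
}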

Boost has a useful library for doing this:
http://www.boost.org/doc/libs/1_43_0/doc/html/date_time.html
Use ptime microsec_clock::local_time() or ptime second_clock::local_time(), depending on the resolution you need.

Java:
package com.company;

public class Main {
    public static void main(String[] args) {
        System.out.println(System.currentTimeMillis());
    }
}
C++:
#include <stdio.h>
#include <windows.h>

__int64 currentTimeMillis() {
    FILETIME f;
    GetSystemTimeAsFileTime(&f);
    // FILETIME counts 100-ns ticks since 1601-01-01;
    // 116444736000000000 is the tick count from 1601 to the Unix epoch (1970)
    __int64 ticks = ((__int64)f.dwHighDateTime << 32) | (__int64)f.dwLowDateTime;
    return (ticks - 116444736000000000LL) / 10000; // 100-ns ticks -> milliseconds
}

int main() {
    printf("%lli\n", currentTimeMillis());
    return 0;
}

Related

The cpu_time obtained by proc_pid_rusage does not meet expectations on the macOS M1 chip

I need to calculate the CPU usage of a certain process on macOS (the target process is not directly related to the current process). I use the proc_pid_rusage API: I call it at regular intervals and take the difference between the ri_user_time and ri_system_time values of consecutive samples, from which I compute the CPU usage percentage.
On a macOS system with a non-M1 chip the results were in line with expectations (basically the same as what I saw in Activity Monitor), but recently I found that the value obtained on an M1-chip macOS system is far too small. For example, one of my processes that consumes 30+% CPU (according to Activity Monitor) is reported at less than 1%.
Here is demo code; you can create a new project and run it directly:
//
//  main.cpp
//  SimpleMonitor
//
//  Created by m1 on 2021/2/23.
//
#include <stdio.h>
#include <stdlib.h>
#include <libproc.h>
#include <stdint.h>
#include <iostream>
#include <thread>  // std::this_thread::sleep_for
#include <chrono>  // std::chrono::seconds

int main(int argc, const char * argv[]) {
    std::cout << "run simple monitor!\n";
    // TODO: change process id:
    int64_t pid = 12483;
    struct rusage_info_v4 ru;
    struct rusage_info_v4 ru2;
    if (proc_pid_rusage((pid_t)pid, RUSAGE_INFO_V4, (rusage_info_t *)&ru) != 0) {
        std::cout << "get cpu time fail\n";
        return 0;
    }
    std::cout << "getProcessPerformance, pid=" + std::to_string(pid)
                 + " ru.ri_user_time=" + std::to_string(ru.ri_user_time)
                 + " ru.ri_system_time=" + std::to_string(ru.ri_system_time)
              << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(10));
    if (proc_pid_rusage((pid_t)pid, RUSAGE_INFO_V4, (rusage_info_t *)&ru2) != 0) {
        std::cout << "get cpu time fail\n";
        return 0;
    }
    std::cout << "getProcessPerformance, pid=" + std::to_string(pid)
                 + " ru2.ri_user_time=" + std::to_string(ru2.ri_user_time)
                 + " ru2.ri_system_time=" + std::to_string(ru2.ri_system_time)
              << std::endl;
    int64_t cpu_time = ru2.ri_user_time - ru.ri_user_time
                     + ru2.ri_system_time - ru.ri_system_time;
    // percentage over the 10-second window, assuming the counters are nanoseconds:
    double cpu_usage = (double)cpu_time / 10 / 1000000000 * 100;
    std::cout << pid << " cpu usage: " << cpu_usage << std::endl;
}
I want to know whether there is a problem with my calculation method. If there is no problem, how can I handle the inaccurate results on M1-chip macOS systems?
You have to multiply the CPU time by a conversion constant. On Apple Silicon the ri_user_time and ri_system_time values are in mach timebase units rather than nanoseconds, so they must be scaled by the mach timebase ratio. Here are some snippets of code from a diff:
#include <mach/mach_time.h>

mach_timebase_info_data_t sTimebase;
mach_timebase_info(&sTimebase);
double timebase_to_ns = (double)sTimebase.numer / (double)sTimebase.denom;

syscpu.total = task_info.ptinfo.pti_total_system * timebase_to_ns / 1000000;
usercpu.total = task_info.ptinfo.pti_total_user * timebase_to_ns / 1000000;
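Applied to the demo above, the conversion might look like this (a sketch; it assumes the ri_* counters are in mach timebase units, which appears to be the case on Apple Silicon, while on Intel the timebase ratio is 1/1 so the raw numbers already behave like nanoseconds):
#include <mach/mach_time.h>
#include <stdint.h>

// Convert a raw ri_user_time / ri_system_time delta to nanoseconds.
static double raw_cpu_time_to_ns(uint64_t raw) {
    static mach_timebase_info_data_t tb = {0, 0};
    if (tb.denom == 0) mach_timebase_info(&tb); // e.g. 125/3 on M1, 1/1 on Intel
    return (double)raw * tb.numer / tb.denom;
}

// In the demo, the percentage over the 10-second window would then become:
//   double cpu_usage = raw_cpu_time_to_ns(cpu_time) / 10 / 1000000000 * 100;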

How to use a timer interrupt with GPIO to control the ADC sampling period on the BeagleBone Black in C++?

I'm a university student who needs to do ADC capturing on a BeagleBone Black.
Everything went really well at first: I can sample data from the ADC and even print a timestamp with each sample value. But when I checked the sampling period with an oscilloscope, watching the wave on GPIO P8_10 (driven through the "BeagleBoneBlack-GPIO" library), I realized that the sampling period is not stable at all.
I assume I am supposed to use a timer interrupt on the BeagleBone Black, but my skills are too basic to do that on my own.
Anyway: how can I build a timer-driven loop in C++ (via GPIO or otherwise) so the ADC is sampled with a steady, stable period such as 3 ms? One possible approach is sketched after my results below.
Below are the versions I am using, the code, and the current result.
-BeagleBone Black
-Debian GNU/Linux 8.11 (jessie)
-Linux 5.0.3-bone5
-ARMv7 Processor rev2 (v7l)
#include <stdio.h>
#include <sys/time.h>
#include <time.h>
#include <math.h>
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
#include <unistd.h>
#include "GPIO/GPIOManager.h"
#include "GPIO/GPIOConst.h"
using namespace std;

#define LIN0_PATH "/sys/bus/iio/devices/iio:device0/in_voltage"

int readAnalog(int number) {
    stringstream ss;
    ss << LIN0_PATH << number << "_raw";
    fstream fs;
    fs.open(ss.str().c_str(), fstream::in);
    fs >> number;
    fs.close();
    return number;
}

int main(int argc, char* argv[]) {
    GPIO::GPIOManager* gp = GPIO::GPIOManager::getInstance();
    int pin1 = GPIO::GPIOConst::getInstance()->getGpioByKey("P8_10");
    gp->setDirection(pin1, GPIO::OUTPUT);

    char buffer[26];
    int millisec;
    struct tm* tm_info;
    struct timeval tv;
    gettimeofday(&tv, NULL);
    millisec = lrint(tv.tv_usec / 1000.0); // round to nearest millisecond
    if (millisec >= 1000) {                // handle carry into the next second
        millisec -= 1000;
        tv.tv_sec++;
    }
    tm_info = localtime(&tv.tv_sec);
    strftime(buffer, 26, "%d/%m/%Y %H:%M:%S", tm_info);
    cout << "print date and time" << buffer << ":" << millisec << endl;

    float value[100];
    for (int j = 0; j < 100; j++) {
        gp->setValue(pin1, GPIO::HIGH);
        value[j] = readAnalog(0) * (1.8 / 4096); // raw ADC count -> volts
        gp->setValue(pin1, GPIO::LOW);
        usleep(300);
    }
    for (int i = 0; i < 100; i++) {
        cout << fixed;
        cout.precision(3);
        cout << i << ";" << value[i] << endl;
    }
    return 0;
}
And these are the commands to build and run my file:
g++ GPIO/GPIOConst.cpp GPIO/GPIOManager.cpp try.cpp
then
./a.out
and this is the result
print date and time10/04/2019 17:02:27:460
0;1.697
1;1.697
2;1.695
3;1.693
4;1.694
... (samples 5 through 97 omitted; every reading lies between 1.690 and 1.695)
98;1.693
99;1.694
And this is what I got from the oscilloscope: https://i.stack.imgur.com/FJSRe.jpg
It would be really great if anyone would be willing to give me some advice. And if anything concerns you, please feel free to ask me.
Best regards,
Peeranut Noonurak
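One common way to get a stable software-timed period on Linux is to sleep until an absolute deadline rather than for a relative interval, so scheduling jitter does not accumulate. Below is a minimal sketch of that idea (an illustration under stated assumptions, not a tested BeagleBone answer: read_adc is a hypothetical stand-in for the readAnalog(0) call above, and a POSIX system with clock_nanosleep is assumed):
#include <time.h>

int read_adc(void); // hypothetical stand-in for readAnalog(0)

void sample_100_at_3ms(void) {
    const long period_ns = 3 * 1000 * 1000; // 3 ms
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < 100; i++) {
        (void)read_adc();
        // advance an absolute deadline, so jitter does not accumulate
        next.tv_nsec += period_ns;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
For hard real-time guarantees on the BeagleBone, a kernel timer or the PRU would be more appropriate, but the absolute-deadline sleep already removes the drift that a plain usleep(300) loop accumulates.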

How to convert milliseconds time to FILETIME in C++

I have a time in seconds (e.g. 1505306792).
How do I convert this into a FILETIME?
Here is the code I have tried:
INT64 timer64 = 1505306792;
timer64 = timer64 * 1000 * 10000; // seconds -> 100-ns intervals
ULONGLONG xx = timer64;
FILETIME fileTime;
ULARGE_INTEGER uliTime;
uliTime.QuadPart = xx;
fileTime.dwHighDateTime = uliTime.HighPart;
fileTime.dwLowDateTime = uliTime.LowPart;
The resulting FILETIME comes out as 1648-09-13 15:34:00.
I am expecting the date to be 2017-09-13 12:46:31, which is what online converters give me.
Any idea how to solve this?
I have seen some answers using Boost, but it is not available in my project.
It's about adding 116444736000000000, the number of 100-nanosecond intervals between the FILETIME epoch (1601-01-01) and the Unix epoch (1970-01-01); see How To Convert a UNIX time_t to a Win32 FILETIME or SYSTEMTIME:
#include <windows.h>
#include <time.h>

void UnixTimeToFileTime(time_t t, LPFILETIME pft)
{
    // Note that LONGLONG is a 64-bit value
    LONGLONG ll;
    ll = Int32x32To64(t, 10000000) + 116444736000000000;
    pft->dwLowDateTime = (DWORD)ll;
    pft->dwHighDateTime = ll >> 32;
}
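For the value from the question, a quick check might look like this (a sketch; FileTimeToSystemTime is used only to print the result for verification):
#include <windows.h>
#include <stdio.h>
#include <time.h>

// UnixTimeToFileTime as defined above

int main() {
    FILETIME ft;
    UnixTimeToFileTime((time_t)1505306792, &ft);
    SYSTEMTIME st;
    FileTimeToSystemTime(&ft, &st); // should print the expected 2017-09-13 date (UTC)
    printf("%04u-%02u-%02u %02u:%02u:%02u\n",
           st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute, st.wSecond);
    return 0;
}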

Field tv_sec could not be resolved

I am trying to run the following code snippet taken from this simple example of a timer:
#include <sys/time.h>
#include <stdio.h>

int SetTimer(struct timeval &tv, time_t sec) {
    gettimeofday(&tv, NULL);
    tv.tv_sec += sec;
    return 1;
}

int CheckTimer(struct timeval &tv, time_t sec) {
    struct timeval ctv;
    gettimeofday(&ctv, NULL);
    if (ctv.tv_sec > tv.tv_sec) {
        gettimeofday(&tv, NULL);
        tv.tv_sec += sec;
        return 1;
    } else {
        return 0;
    }
}

int main() {
    struct timeval tv;
    SetTimer(tv, 5); // set up a delay timer
    printf("start counting.\n");
    while (1)
        if (CheckTimer(tv, 5) == 1)
            printf("Welcome to cc.byexamples.com\n");
    return 0;
}
I am getting the following error: field tv_sec could not be resolved.
I have searched the Web, but nobody seems to give a concrete answer.
I tried looking into sys/time.h and time.h, but the structure does not seem to be defined in either of them, even though it is used.
Am I missing a library? Since this example is rather old, has something changed so that this needs to be done differently? I would appreciate any insight.
PS: I am using Eclipse CDT Indigo under Ubuntu 11.10 with g++ 4.6.1.
Try putting this in your file: #define __USE_GNU
In the end it was an Eclipse problem: it was unable to index time.h. I solved it by following the most upvoted answer of another question: manually adding time.h to the C/C++ indexer.
I am currently using Eclipse CDT Juno and the problem doesn't seem to happen anymore. As a side comment, in Eclipse CDT Juno I couldn't find where to manually edit the C/C++ indexer settings.

How to profile each call to a function?

I want to profile my execution in a non-standard way. With gprof, Valgrind, OProfile and friends, for a given function I only get the mean of its execution time. What I would like is the standard deviation of that execution time.
Example:
void a()
    sleep ( rand() % 10 + 10 )
void b()
    sleep ( rand() % 14 + 2 )
main
    for (1 .. 100)
        a()
        b()
With standard tools, functions a and b will look the same. Do you know of any tool which could give me this result with an automated approach?
I already tested TAU, but so far without a really relevant result. I think there is a solution along these lines, but I am not confident enough with TAU. If anyone is a TAU expert: I would like to keep all the individual function execution times and do the math at the end, but I don't know how to specify that in TAU.
I want to profile C/C++ code, but if you have leads for other programming languages, I'm open.
A profiling tool is not magic, and you can roll your own for whatever purpose in a few lines.
Something like this, perhaps:
// code profile.cpp : Defines the entry point for the console application.
//
#include <windows.h>   // QueryPerformanceCounter / QueryPerformanceFrequency, Sleep
#include <cstdio>
#include <cstdlib>     // rand
#include <cmath>       // sqrt
#include <map>
#include <string>
#include <boost/accumulators/accumulators.hpp>
#include <boost/accumulators/statistics/stats.hpp>
#include <boost/accumulators/statistics/mean.hpp>
#include <boost/accumulators/statistics/variance.hpp>
#include <boost/accumulators/statistics/count.hpp>
using namespace std;
using namespace boost::accumulators;

class cProfile
{
public:
    // construct profiler for a particular scope
    // call at beginning of scope to be timed
    // pass unique name of scope
    cProfile( const char* name )
    {
        myName = string( name );
        QueryPerformanceCounter( (LARGE_INTEGER *)&myTimeStart );
    }
    // destructor - automatically called when scope ends
    ~cProfile();
    // constructor - produces report when called without parameters
    cProfile();
private:
    typedef accumulator_set<__int64, stats<tag::variance(lazy)> > acc_t;
    static map < string, acc_t > myMap;
    string myName;
    __int64 myTimeStart;
};

map < string, accumulator_set<__int64, stats<tag::variance(lazy)> > > cProfile::myMap;

cProfile::~cProfile()
{
    __int64 t = 0;
    QueryPerformanceCounter( (LARGE_INTEGER *)&t );
    t -= myTimeStart;
    map < string, acc_t >::iterator p = myMap.find( myName );
    if( p == myMap.end() ) {
        // this is the first time this scope has run
        acc_t acc;
        pair<string, acc_t> pr( myName, acc );
        p = myMap.insert( pr ).first;
    }
    // add the time of this run to the accumulator for this scope
    (p->second)( t );
}

// Generate profile report
cProfile::cProfile()
{
    __int64 f;
    QueryPerformanceFrequency( (LARGE_INTEGER *)&f );
    printf( "%20s Calls\tMean (secs)\tStdDev\n", "Scope" );
    for( map < string, acc_t >::iterator p = myMap.begin();
         p != myMap.end(); p++ )
    {
        double av = mean( p->second ) / f;
        double stdev = sqrt( (double) variance( p->second ) ) / f;
        printf( "%20s %d\t%f\t%f\n", p->first.c_str(),
                (int) boost::accumulators::count( p->second ), av, stdev );
    }
}

void a()
{
    cProfile profile( "a" );
    Sleep( rand() % 10 + 10 );
}

void b()
{
    cProfile profile( "b" );
    Sleep( rand() % 20 + 5 );
}

int main( int argc, char* argv[] )
{
    for( int k = 1; k <= 100; k++ ) {
        a();
        b();
    }
    cProfile profile_report; // default constructor prints the report
    return 0;
}
Which produces
Scope Calls Mean (secs) StdDev
a 100 0.014928 0.002827
b 100 0.015254 0.005671
Maybe this does not apply since it is gcc-specific, but it has saved me a couple of times at least. If you compile the code with the -finstrument-functions flag, then every entry and exit point of each function in a module compiled with this flag will be stubbed with calls to instrumentation functions. All you have to do is write an inline function that reads some high-precision counter (e.g. rdtsc on x86, though see this discussion) and append records [ func_addr, is_enter, timer_value ] to a large array from within the instrumentation functions. On exit, dump this array to a file and analyze it offline.
This is quite far from the "automated" approach you were probably looking for, but I hope it is of use. The sample below shows the behaviour of gcc when compiled with -finstrument-functions (e.g. gcc -finstrument-functions sample.c). If you do not include the flag, it will work "as normal".
#include <stdio.h>
#include <stdlib.h>

void __cyg_profile_func_enter(void *fn, void *call)
    __attribute__ ((no_instrument_function));
void __cyg_profile_func_exit(void *fn, void *call)
    __attribute__ ((no_instrument_function));

void __cyg_profile_func_enter(void *fn, void *call) {
    printf("Enter %p,%p\n", fn, call);
}
void __cyg_profile_func_exit(void *fn, void *call) {
    printf("Exit %p,%p\n", fn, call);
}

int foo(int i) {
    printf("inside foo\n");
    return i;
}

int main(int argc, char *argv[]) {
    printf("inside main 1\n");
    foo(123);
    printf("inside main 2\n");
    exit(0);
}
I think Apple's Shark profiling tool can generate the mean for each function. Of course, that only helps you on a Mac.
Actually, OProfile can profile functions in a call-graph view, which means the same callee routine is reported with separate statistics for each different caller.
Try the opreport command for the report.