GetTickCount values on Windows 10 - c++

I'm trying to use GetTickCount() from the Windows API to get the system uptime. I want to know how long the system has been running.
However, the value returned by GetTickCount is insanely high: this code gives me an uptime of over 500 hours.
The same goes for GetTickCount64().
Why is this value so high?
#include <windows.h>
#include <sstream>
using namespace std;

DWORD time_ms = GetTickCount(); // milliseconds since boot (32-bit, wraps after ~49.7 days)
DWORD seconds = (time_ms / 1000) % 60;
DWORD minutes = (time_ms / (1000 * 60)) % 60;
DWORD hours   = time_ms / (1000 * 60 * 60);
stringstream ss;
ss << "Hours: "    << hours
   << " Minutes: " << minutes
   << " Seconds: " << seconds;
MessageBoxA(0, ss.str().c_str(), "Uptime", 0);
As I run the program I can see that it is progressing correctly, but I cannot work out how to get the total uptime of my desktop.
Thanks
Edit:
I checked the uptime with "systeminfo" in CMD and found that the "System Boot Time" really was approximately 500 hours ago. So I shut the computer down and unplugged it from the mains, then booted, but the boot time still had this high value. However, restarting the computer made it reset, and now my code works.
EDIT 2:
This blog post, https://blogs.msdn.microsoft.com/oldnewthing/20141113-00/?p=43623, states that GetTickCount should be used for measuring time intervals rather than for what I'm trying to achieve. It seems I have to look at the registry.
EDIT 3:
After finding the right counter in the registry, it has the same value as GetTickCount and similar functions. It seems that shutting Windows down puts it into some sort of hibernation. I have not found any solution to this.

The documentation for GetTickCount describes the correct way to get the system up-time:
To obtain the time elapsed since the computer was started, retrieve the System Up Time counter in the performance data in the registry key HKEY_PERFORMANCE_DATA. The value returned is an 8-byte value. For more information, see Performance Counters.
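Rather than parsing the raw HKEY_PERFORMANCE_DATA blob yourself, the same counter can be read through the PDH helper API. A minimal sketch, assuming Vista or later (for PdhAddEnglishCounter) and linking against pdh.lib; error handling is abbreviated:

#include <windows.h>
#include <pdh.h>
#include <iostream>
#pragma comment(lib, "pdh.lib")

int main()
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    if (PdhOpenQueryA(nullptr, 0, &query) != ERROR_SUCCESS) return 1;
    // The English counter path sidesteps localized counter names.
    if (PdhAddEnglishCounterA(query, "\\System\\System Up Time", 0, &counter) != ERROR_SUCCESS) return 1;
    // Elapsed-time counters need only a single collection.
    if (PdhCollectQueryData(query) != ERROR_SUCCESS) return 1;
    PDH_FMT_COUNTERVALUE value;
    if (PdhGetFormattedCounterValue(counter, PDH_FMT_LARGE, nullptr, &value) == ERROR_SUCCESS)
        std::cout << "Uptime: " << value.largeValue << " seconds\n";
    PdhCloseQuery(query);
    return 0;
}

Note that, like the tick count, this counter is not robust across sleep/hibernate cycles (see the discussion further down).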

I just had the same problem. I solved it by:
using the restart command rather than shutting the computer down: this performs a true restart, not the "half-hibernation" that the shutdown command performs when "fast startup" is enabled;
disabling the "fast startup" option. This fast startup option leads to a GetTickCount value that is not reset at startup!
I guess a lot of old programs will misbehave again with Windows 10 and the fast startup option...

500 hours of uptime is not especially surprising. That's around 20 days, and modern systems are seldom shut down; typically they are suspended instead. I think your system really has been up for 500 hours.
Of course, you should be using GetTickCount64 to work with a 64-bit tick count. That avoids the 49-day wraparound that is a consequence of the 32-bit values returned by GetTickCount.
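Applied to the code in the question, a minimal sketch (GetTickCount64 requires Vista or later):

#include <windows.h>
#include <iostream>

int main()
{
    ULONGLONG ms = GetTickCount64(); // milliseconds since boot, no 49-day wrap
    std::cout << "Hours: "    << ms / (1000ULL * 60 * 60)
              << " Minutes: " << (ms / (1000ULL * 60)) % 60
              << " Seconds: " << (ms / 1000ULL) % 60 << std::endl;
}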

I'm trying to use GetTickCount() from the Windows API to get the system uptime. I want to know how long the system has been running.
In addition to what others have said, there is another alternative solution.
You can call NtQuerySystemInformation() from ntdll.dll, setting its SystemInformationClass parameter to SystemTimeOfDayInformation to retrieve a SYSTEM_TIMEOFDAY_INFORMATION structure containing BootTime and CurrentTime fields as LARGE_INTEGER values:
typedef struct _SYSTEM_TIMEOFDAY_INFORMATION {
    LARGE_INTEGER BootTime;
    LARGE_INTEGER CurrentTime;
    LARGE_INTEGER TimeZoneBias;
    ULONG         TimeZoneId;
    ULONG         Reserved;
    ULONGLONG     BootTimeBias;
    ULONGLONG     SleepTimeBias;
} SYSTEM_TIMEOFDAY_INFORMATION, *PSYSTEM_TIMEOFDAY_INFORMATION;
You can simply subtract BootTime from CurrentTime to get the elapsed time that Windows has been running (this is exactly how Task Manager calculates the "System Up Time").
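A minimal sketch of that call follows. Hedged assumptions: NtQuerySystemInformation is an undocumented API; the information-class value 3 (SystemTimeOfDayInformation) and the full structure layout above come from reverse-engineered headers, so verify them against your target before relying on this. The struct is renamed here to avoid colliding with the reduced definition in winternl.h:

#include <windows.h>
#include <winternl.h>
#include <iostream>

typedef struct _MY_SYSTEM_TIMEOFDAY_INFORMATION {
    LARGE_INTEGER BootTime;
    LARGE_INTEGER CurrentTime;
    LARGE_INTEGER TimeZoneBias;
    ULONG         TimeZoneId;
    ULONG         Reserved;
    ULONGLONG     BootTimeBias;
    ULONGLONG     SleepTimeBias;
} MY_SYSTEM_TIMEOFDAY_INFORMATION;

typedef NTSTATUS (WINAPI *NtQuerySystemInformation_t)(
    ULONG SystemInformationClass, PVOID SystemInformation,
    ULONG SystemInformationLength, PULONG ReturnLength);

int main()
{
    auto query = reinterpret_cast<NtQuerySystemInformation_t>(
        ::GetProcAddress(::GetModuleHandleW(L"ntdll.dll"), "NtQuerySystemInformation"));
    MY_SYSTEM_TIMEOFDAY_INFORMATION info = {};
    if (query && query(3 /* SystemTimeOfDayInformation */, &info, sizeof(info), nullptr) == 0) {
        // Both fields are 100-ns ticks since 1601-01-01 UTC (FILETIME units).
        ULONGLONG uptime = info.CurrentTime.QuadPart - info.BootTime.QuadPart;
        std::cout << "Uptime: " << uptime / 10000000ULL << " seconds\n";
    }
}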
How to get Windows boot time?
How does Task Manager compute Up Time, and why doesn’t it agree with GetTickCount?

Related

Monitor task CPU utilization in VxWorks while program is running

I'm running a VxWorks 6.9 OS embedded system and I need to see when I'm starving low-priority tasks. Ideally I'd like to have CPU utilization by task so I know what is eating up all my CPU time.
I know this is a built-in feature in many operating systems, but I have so far been unable to find it for VxWorks 6.9.
If I can't measure by task, I'd at least like to see what percentage of time the CPU is idle.
To that end I've been trying to make a lowest-priority task that runs the function below to measure idle time indirectly.
float Foo::IdleTime(Foo* f)
{
    bool inIdleTask;
    float timeIdle;
    float totalTime;
    float percentIdle;
    float startTime, returnTime;
    while (true)
    {
        startTime = _time(); // get time before measurement starts
        inIdleTask = true;
        timeIdle = 0;
        while (inIdleTask) // I have no clue how to detect when the task left and set this to false
        {
            timeIdle += (amount_of_time_for_inner_loop); // measure idle time
        }
        returnTime = _time(); // get time after you return to the IdleTime task
        totalTime = (returnTime - startTime);
        percentIdle = (timeIdle / totalTime) * 100; // calculate percentage of idle time
        // logic to report percentIdle
    }
}
The big problem with this concept is that I don't know how I would detect when this task is preempted by a higher-priority task.
If you are looking for a one-time measurement done during development, then spyLib is what you are looking for. Simply call spy from the command line to get a per-task CPU usage report in 10 s intervals. Call spyHelp to learn how to configure spy. (You might need to include spyLib in the kernel if it is not already there.)
If you want to go the extra mile, taskHookLib is what you need. Simply put, you hook a function to be called on every task switch. The call gives you the task IDs of the tasks going in and out of the CPU. You can either simply monitor the starvation of low-priority tasks, or take action and temporarily increase their priority. A sketch follows below.
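A sketch of such a hook, with hedged assumptions: kernel-mode code on VxWorks 6.x, where a task ID can be compared with the WIND_TCB pointer handed to the hook (verify this against your kernel version), and where tick-level accuracy is acceptable; idleTaskTid and startIdleMonitor are hypothetical names:

#include <vxWorks.h>
#include <taskLib.h>
#include <taskHookLib.h>
#include <tickLib.h>

static int   idleTaskTid;         /* TID of the low-priority task to watch */
static ULONG idleTicks    = 0;    /* ticks it has spent on the CPU         */
static ULONG switchInTick = 0;    /* tick count when it was switched in    */

static void mySwitchHook(WIND_TCB *pOldTcb, WIND_TCB *pNewTcb)
{
    if ((int)pNewTcb == idleTaskTid)            /* switched in  */
        switchInTick = tickGet();
    else if ((int)pOldTcb == idleTaskTid)       /* switched out */
        idleTicks += tickGet() - switchInTick;
}

STATUS startIdleMonitor(int tid)
{
    idleTaskTid = tid;
    return taskSwitchHookAdd((FUNCPTR)mySwitchHook);
}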
From experience, spy adds a little performance overhead, especially if stdout goes to slow I/O (e.g. a 9600 baud serial line), but it is fairly easy to use. Task hooking adds little to no overhead if you are not immediately printing the results to the terminal, but it takes a bit of programming to get running.
Another thing that might be of interest is Wind River's remote debugger. I haven't used that one personally; I imagine it would require setting up the Workbench and the target properly.

Busy Loop/Spinning sometimes takes too long under Windows

I'm using a Windows 7 PC to output voltages at a rate of 1 kHz. At first I simply ended each loop iteration with sleep_until(nextStartTime), however this proved unreliable, sometimes working fine and sometimes being off by up to 10 ms.
I found other answers here saying that a busy loop might be more accurate; however, mine for some reason also sometimes takes too long.
// assumed declarations, matching the comments in the original post
volatile int spinCount = 0;
chrono::high_resolution_clock::time_point nextStartTime, spinStart;

while (true) {
    doStuff(); //is quick enough
    logDelays();
    nextStartTime = chrono::high_resolution_clock::now() + chrono::milliseconds(1);
    spinStart = chrono::high_resolution_clock::now();
    while (chrono::duration_cast<chrono::microseconds>(nextStartTime -
           chrono::high_resolution_clock::now()).count() > 200) {
        spinCount++; //a volatile int
    }
    int spintime = chrono::duration_cast<chrono::microseconds>
        (chrono::high_resolution_clock::now() - spinStart).count();
    cout << "Spin Time micros :" << spintime << endl;
    if (spinCount > 100000000) {
        cout << "reset spincount" << endl;
        spinCount = 0;
    }
}
I was hoping that this would fix my issue; however, it produces the output:
Spin Time micros :9999
Spin Time micros :9999
...
I've been stuck on this problem for the last 5 hours and I'd be very thankful if somebody knows a solution.
According to the comments this code waits correctly:
auto start = std::chrono::high_resolution_clock::now();
const auto delay = std::chrono::milliseconds(1);
while (true) {
    doStuff(); //is quick enough
    logDelays();
    auto spinStart = std::chrono::high_resolution_clock::now();
    while (start + delay > std::chrono::high_resolution_clock::now()) {}
    int spintime = std::chrono::duration_cast<std::chrono::microseconds>
        (std::chrono::high_resolution_clock::now() - spinStart).count();
    std::cout << "Spin Time micros :" << spintime << std::endl;
    start += delay;
}
The important parts are the busy-wait while (start + delay > std::chrono::high_resolution_clock::now()) {} and start += delay;, which in combination make sure that delay amount of time is waited, even when outside factors (Windows Update keeping the system busy) disturb it. In case the loop takes longer than delay, the loop will be executed without waiting until it catches up (which may be never if doStuff is sufficiently slow).
Note that missing an update (due to the system being busy) and then sending two at once to catch up might not be the best way to handle the situation. You may want to check the current time inside doStuff and abort/restart the transmission if the timing is off by more than some acceptable amount.
On Windows I don't think it's possible to ever get such precise timing, because you cannot guarantee your thread is actually running at the time you desire. Even with low CPU usage and your thread set to real-time priority, it can still be interrupted (by hardware interrupts, as I understand it; I never fully investigated, but I've seen even a simple while(true) ++i; loop at real-time priority get interrupted and then moved between CPU cores). While such interrupts and switching are very quick for a real-time thread, they are still significant if you are trying to directly drive a signal without buffering.
Instead you really want to read and write buffers of digital samples (so at 1 kHz, each sample is 1 ms). You need to be sure to queue another buffer before the last one has completed, which constrains how small the buffers can be; but at 1 kHz, at real-time priority, if the code is simple and there is no other CPU contention, a single-sample buffer (1 ms) might even be possible, which is at worst 1 ms of extra latency over "immediate", but you would have to test. You then leave it to the hardware and its drivers to handle the precise timing (e.g. to make sure each output sample is "exactly" 1 ms, to whatever accuracy the vendor claims).
This basically means your code only has to be accurate to 1 ms in the worst case, rather than trying to pursue something far smaller than the OS really supports, such as microsecond accuracy.
As long as you are able to queue a new buffer before the hardware has used up the previous one, it will run at the desired frequency without issue (to use audio as an example again: while the tolerated latencies, and thus the buffers, are often much larger, if you overload the CPU you can still sometimes hear audible glitches where an application didn't queue up new raw audio in time).
With careful timing you might even be able to get down to a fraction of a millisecond by delaying the processing and queuing of the next sample as long as possible (e.g. if you need to reduce latency between input and output), but remember that the closer you cut it, the more you risk submitting it too late.

how to run Clock-gettime correctly in Vxworks to get accurate time

I am trying to measure the time taken by processes in a C++ program on Linux and VxWorks. I have noticed that clock_gettime(CLOCK_REALTIME, &timespec) is accurate enough (resolution about 1 ns) to do the job on many OSes. For portability I am using this function, running it on both VxWorks 6.2 and Linux 3.7.
I've tried to measure the time taken by a simple print:
#include <timers.h>
#include <iostream>
#include <cstdint>
#define BILLION 1000000000L
int main(){
    struct timespec start, end;
    uint32_t diff;
    for(int i=0; i<1000; i++){
        clock_gettime(CLOCK_REALTIME, &start);
        std::cout<<"Do stuff"<<std::endl;
        clock_gettime(CLOCK_REALTIME, &end);
        diff = BILLION*(end.tv_sec-start.tv_sec)+(end.tv_nsec-start.tv_nsec);
        std::cout<<diff<<std::endl;
    }
    return 0;
}
I compiled this on Linux and VxWorks. On Linux the results seemed logical (average 20 µs). But on VxWorks, I got a lot of zeros, then 5000000 ns, then a lot of zeros...
PS: on VxWorks, I ran this app on an ARM Cortex-A8, and the results seemed random.
Has anyone seen the same bug before?
In VxWorks, the clock resolution is defined by the system scheduler frequency. By default this is typically 60 Hz, but it may differ depending on the BSP, kernel configuration, or runtime configuration.
The VxWorks kernel configuration parameters SYS_CLK_RATE_MAX and SYS_CLK_RATE_MIN define the maximum and minimum values supported, and SYS_CLK_RATE defines the default rate, applied at boot.
The actual clock rate can be modified at runtime using sysClkRateSet, either within your code, or from the shell.
You can check the current rate by using sysClkRateGet.
Given that you are seeing either 0 or 5000000 ns (which is 5 ms), I would expect that your system clock rate is ~200 Hz.
To get greater resolution, you can increase the system clock rate. However, this may have undesired side effects, as this will increase the frequency of certain system operations.
A better method of timing code may be to use sysTimestamp which is typically driven from a high frequency timer, and can be used to perform high-res timing of short-lived activities.
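To illustrate the rate inspection and adjustment described above, a small sketch (sysClkRateSet may fail or have side effects depending on the BSP; the 1000 Hz value is only an example):

#include <vxWorks.h>
#include <sysLib.h>
#include <stdio.h>
#include <time.h>

void showClockRes(void)
{
    printf("system clock rate: %d Hz\n", sysClkRateGet());
    if (sysClkRateSet(1000) == OK)        /* 1 kHz -> 1 ms tick, if supported */
        printf("raised to: %d Hz\n", sysClkRateGet());
    struct timespec res;
    clock_getres(CLOCK_REALTIME, &res);   /* resolution now follows the tick */
    printf("CLOCK_REALTIME resolution: %ld ns\n", (long)res.tv_nsec);
}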
I think in VxWorks the default clock resolution is 16.66 ms (a 60 Hz tick), which you can query by calling the clock_getres() function. You can change the resolution by calling the sysClkRateSet() function (the maximum resolution supported is 200 µs, I guess, by passing 5000 as the argument to sysClkRateSet). You can then calculate the difference between two timestamps using difftime().

FIO runtime different than gettimeofday()

I am trying to measure the execution time of the FIO benchmark. Currently, I am doing so by wrapping the FIO call between gettimeofday() calls:
struct timeval startFioFix, doneFioFix;
gettimeofday(&startFioFix, NULL);
FILE* process = popen("fio --name=randwrite --ioengine=posixaio --rw=randwrite --size=100M --direct=1 --thread=1 --bs=4K", "r");
pclose(process); // wait for fio to finish; popen() itself returns immediately
gettimeofday(&doneFioFix, NULL);
and calculate the elapsed time as:
double tstart = startFioFix.tv_sec + startFioFix.tv_usec / 1000000.;
double tend = doneFioFix.tv_sec + doneFioFix.tv_usec / 1000000.;
double telapsed = (tend - tstart);
Now, the question(s) is
The telapsed time is different from (larger than) the runt reported by FIO. Can you please help me understand why? The fact can be seen in the FIO output:
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=posixaio, iodepth=1
fio-2.2.8
Starting 1 thread
randwrite: (groupid=0, jobs=1): err= 0: pid=3862: Tue Nov 1 18:07:50 2016
write: io=102400KB, bw=91674KB/s, iops=22918, runt= 1117msec
...
and the telapsed is:
telapsed: 1.76088 seconds
What is the actual time taken by the FIO execution:
a) the runt given by FIO, or
b) the elapsed time measured by gettimeofday()?
How does FIO measure its runt? (This question is probably linked to 1.)
PS: I have tried replacing gettimeofday() with std::chrono::high_resolution_clock::now(), but it behaves the same (by same, I mean it also gives a larger elapsed time than runt).
Thank you in advance, for your time and assistance.
A quick point: gettimeofday() on Linux uses a clock that doesn't necessarily tick at a constant interval and can even move backwards (see http://man7.org/linux/man-pages/man2/gettimeofday.2.html and https://stackoverflow.com/a/3527632/4513656) - this may make telapsed unreliable (or even negative).
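A minimal sketch of the alternative those links suggest: CLOCK_MONOTONIC is immune to wall-clock adjustments, so the computed difference cannot go negative.

#include <time.h>
#include <stdio.h>

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... run the workload being timed ... */
    clock_gettime(CLOCK_MONOTONIC, &end);
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("elapsed: %.9f s\n", elapsed);
    return 0;
}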
Your gettimeofday/popen/gettimeofday measurement (telapsed) is going to be: fio process start-up (i.e. fork+exec on Linux) + fio initialisation (e.g. thread creation, because I see --thread, and ioengine initialisation) + fio job elapsed (runt) + fio shutdown + process stop. You are comparing this to just runt, which is a sub-component of telapsed. It is unlikely that all the non-runt components take no time at all (i.e. 0 usecs), so the expectation is that runt will be smaller than telapsed. Try running fio with --debug=all just to see all the things it does in addition to actually submitting I/O for the job.
This is difficult to answer because it depends on what you mean by "fio execution" and why (i.e. the question is hard to interpret unambiguously). Are you interested in how long fio actually spent trying to submit I/O for a given job (runt)? Are you interested in how long it takes your system to start/stop a new process that just so happens to try to submit I/O for a given period (telapsed)? Are you interested in how much CPU time was spent submitting I/O (none of the above)? Because I'm confused, I'll ask you some questions instead: what are you going to use the result for and why?
Why not look at the source code? https://github.com/axboe/fio/blob/7a3b2fc3434985fa519db55e8f81734c24af274d/stat.c#L405 shows that runt comes from ts->runtime[ddir]. You can see it is initialised by a call to set_epoch_time() (https://github.com/axboe/fio/blob/6be06c46544c19e513ff80e7b841b1de688ffc66/backend.c#L1664) and updated by update_runtime() (https://github.com/axboe/fio/blob/6be06c46544c19e513ff80e7b841b1de688ffc66/backend.c#L371), which is called from thread_main().

Programmatically getting system boot up time in c++ (windows)

So quite simply, the question is how to get the system boot-up time on Windows with C/C++.
Searching for this hasn't got me any answer; I have only found a really hacky approach that reads a file timestamp (needless to say, I abandoned reading that halfway).
Another approach that I found was reading the Windows diagnostics logged events, which supposedly include the last boot-up time.
Does anyone know how to do this (hopefully without too many ugly hacks)?
GetTickCount64 "retrieves the number of milliseconds that have elapsed since the system was started."
Once you know how long the system has been running, it is simply a matter of subtracting this duration from the current time to determine when it was booted. For example, using the C++11 chrono library (supported by Visual C++ 2012):
auto uptime = std::chrono::milliseconds(GetTickCount64());
auto boot_time = std::chrono::system_clock::now() - uptime;
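To display the result, the time_point can be converted through the C time API (a short continuation of the two lines above; needs <chrono>, <ctime> and <iostream>):

std::time_t boot_time_t = std::chrono::system_clock::to_time_t(boot_time);
std::cout << "Booted: " << std::ctime(&boot_time_t); // local-time string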
You can also use WMI to get the precise time of boot. WMI is not for the faint of heart, but it will get you what you are looking for.
The information in question is on the Win32_OperatingSystem object under the LastBootUpTime property. You can examine other properties using WMI Tools.
Edit:
You can also get this information from the command line if you prefer.
wmic OS Get LastBootUpTime
As an example in C# it would look like the following (Using C++ it is rather verbose):
static void Main(string[] args)
{
    // Create a query for OS objects
    SelectQuery query = new SelectQuery("Win32_OperatingSystem", "Status=\"OK\"");
    // Initialize an object searcher with this query
    ManagementObjectSearcher searcher = new ManagementObjectSearcher(query);
    string dtString;
    // Get the resulting collection and loop through it
    foreach (ManagementObject envVar in searcher.Get())
        dtString = envVar["LastBootUpTime"].ToString();
}
The "System Up Time" performance counter on the "System" object is another source. It's available programmatically using the PDH Helper methods. It is, however, not robust to sleep/hibernate cycles so is probably not much better than GetTickCount()/GetTickCount64().
Reading the counter returns a 64-bit FILETIME value, the number of 100-NS ticks since the Windows Epoch (1601-01-01 00:00:00 UTC). You can also see the value the counter returns by reading the WMI table exposing the raw values used to compute this. (Read programmatically using COM, or grab the command line from wmic:)
wmic path Win32_PerfRawData_PerfOS_System get systemuptime
That query produces 132558992761256000 for me, corresponding to Saturday, January 23, 2021 6:14:36 PM UTC.
You can use the PerfFormattedData equivalent to get a floating point number of seconds, or read that from the command line in wmic or query the counter in PowerShell:
Get-Counter -Counter '\system\system up time'
This returns an uptime of 427.0152 seconds.
I also implemented each of the other 3 answers and have some observations that may help those trying to choose a method.
Using GetTickCount64 and subtracting from current time
The fastest method, clocking in at 0.112 ms.
Does not produce a unique/consistent value at the 100-ns resolution of its arguments, as it is dependent on clock ticks. Returned values are all within 1/64 of a second of each other.
Requires Vista or newer; XP's 32-bit counter rolls over at ~49 days and can't be used for this approach, so this method is out if your application/library must support older Windows versions.
Using WMI query of the LastBootUpTime field of Win32_OperatingSystem
Took 84 ms using COM, 202ms using wmic command line.
Produces a consistent value as a CIM_DATETIME string
WMI class requires Vista or newer.
Reading Event Log
The slowest method, taking 229 ms
Produces a consistent value in units of seconds (Unix time)
Works on Windows 2000 or newer.
As pointed out by Jonathan Gilbert in the comments, it is not guaranteed to produce a result.
The methods also produced different timestamps:
UpTime: 1558758098843 = 2019-05-25 04:21:38 UTC (sometimes :37)
WMI: 20190524222528.665400-420 = 2019-05-25 05:25:28 UTC
Event Log: 1558693023 = 2019-05-24 10:17:03 UTC
Conclusion:
The Event Log method is compatible with older Windows versions and produces a consistent timestamp in Unix time that's unaffected by sleep/hibernate cycles, but it is also the slowest. Given that this is unlikely to be run in a loop, this may be an acceptable performance impact. However, using this approach still requires handling the situation where the Event Log reaches capacity and deletes older messages, potentially using one of the other options as a backup.
C++ Boost used to use WMI LastBootUpTime but switched, in version 1.54, to checking the system event log, and apparently for a good reason:
ABI breaking: Changed bootstamp function in Windows to use EventLog service start time as system bootup time. Previously used LastBootupTime from WMI was unstable with time synchronization and hibernation and unusable in practice. If you really need to obtain pre Boost 1.54 behaviour define BOOST_INTERPROCESS_BOOTSTAMP_IS_LASTBOOTUPTIME from command line or detail/workaround.hpp.
Check out boost/interprocess/detail/win32_api.hpp, around line 2201, the implementation of the function inline bool get_last_bootup_time(std::string &stamp) for an example. (I'm looking at version 1.60, if you want to match line numbers.)
Just in case Boost ever dies somehow and my pointing you to Boost doesn't help (yeah right), the function you'll want is mainly ReadEventLogA, and the event ID to look for ("Event Log Started", according to the Boost comments) is apparently 6005.
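A minimal sketch of that approach, under the assumption (per the Boost comments above) that event ID 6005 in the System log marks each Event Log service start; reading backwards makes the first match the most recent boot:

#include <windows.h>
#include <iostream>
#include <vector>

int main()
{
    HANDLE log = ::OpenEventLogA(nullptr, "System");
    if (!log) return 1;
    std::vector<BYTE> buf(64 * 1024);
    DWORD read = 0, needed = 0, bootUnixTime = 0;
    while (!bootUnixTime &&
           ::ReadEventLogA(log, EVENTLOG_SEQUENTIAL_READ | EVENTLOG_BACKWARDS_READ,
                           0, buf.data(), (DWORD)buf.size(), &read, &needed)) {
        for (BYTE* p = buf.data(); p < buf.data() + read; ) {
            auto* rec = reinterpret_cast<EVENTLOGRECORD*>(p);
            if ((rec->EventID & 0xFFFF) == 6005) {          // Event Log started
                bootUnixTime = rec->TimeGenerated;          // seconds since 1970
                break;
            }
            p += rec->Length;                               // records are packed
        }
    }
    ::CloseEventLog(log);
    if (bootUnixTime)
        std::cout << "Boot (Unix time): " << bootUnixTime << "\n";
}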
I haven't played with this much, but I personally think the best way is probably going to be to query the start time of the "System" process. On Windows, the kernel allocates a process on startup for its own purposes (surprisingly, a quick Google search doesn't easily uncover what its actual purposes are, though I'm sure the information is out there). This process is called simply "System" in the Task Manager, and always has PID 4 on current Windows versions (apparently NT 4 and Windows 2000 may have used PID 8 for it). This process never exits as long as the system is running, and in my testing behaves like a full-fledged process as far as its metadata is concerned. From my testing, it looks like even non-elevated users can open a handle to PID 4, requesting PROCESS_QUERY_LIMITED_INFORMATION, and the resulting handle can be used with GetProcessTimes, which will fill in the lpCreationTime with the UTC timestamp of the time the process started. As far as I can tell, there isn't any meaningful way in which Windows is running before the System process is running, so this timestamp is pretty much exactly when Windows started up.
#include <iostream>
#include <iomanip>
#include <memory>
#include <type_traits>
#include <windows.h>
using namespace std;
int main()
{
    unique_ptr<remove_pointer<HANDLE>::type, decltype(&::CloseHandle)> hProcess(
        ::OpenProcess(
            PROCESS_QUERY_LIMITED_INFORMATION,
            FALSE, // bInheritHandle
            4),    // dwProcessId
        ::CloseHandle);
    FILETIME creationTimeStamp, exitTimeStamp, kernelTimeUsed, userTimeUsed;
    FILETIME creationTimeStampLocal;
    SYSTEMTIME creationTimeStampSystem;
    if (::GetProcessTimes(hProcess.get(), &creationTimeStamp, &exitTimeStamp, &kernelTimeUsed, &userTimeUsed)
        && ::FileTimeToLocalFileTime(&creationTimeStamp, &creationTimeStampLocal)
        && ::FileTimeToSystemTime(&creationTimeStampLocal, &creationTimeStampSystem))
    {
        __int64 ticks =
            ((__int64)creationTimeStampLocal.dwHighDateTime) << 32 |
            creationTimeStampLocal.dwLowDateTime;
        wios saved(NULL); // stash wcout's formatting state so it can be restored
        saved.copyfmt(wcout);
        wcout << setfill(L'0')
              << setw(4) << creationTimeStampSystem.wYear << L'-'
              << setw(2) << creationTimeStampSystem.wMonth << L'-'
              << setw(2) << creationTimeStampSystem.wDay
              << L' '
              << setw(2) << creationTimeStampSystem.wHour << L':'
              << setw(2) << creationTimeStampSystem.wMinute << L':'
              << setw(2) << creationTimeStampSystem.wSecond << L'.'
              << setw(7) << (ticks % 10000000)
              << endl;
        wcout.copyfmt(saved);
    }
}
Comparison for my current boot:
system_clock::now() - milliseconds(GetTickCount64()):
2020-07-18 17:36:41.3284297
2020-07-18 17:36:41.3209437
2020-07-18 17:36:41.3134106
2020-07-18 17:36:41.3225148
2020-07-18 17:36:41.3145312
(result varies from call to call because system_clock::now() and ::GetTickCount64() don't run at exactly the same time and don't have the same precision)
wmic OS Get LastBootUpTime
2020-07-18 17:36:41.512344
Event Log
No result because the event log entry doesn't exist at this time on my system (earliest event is from July 23)
GetProcessTimes on PID 4:
2020-07-18 17:36:48.0424863
It's a few seconds different from the other methods, but I can't think of any way in which it is wrong per se, because if the System process wasn't running yet, was the system actually booted?