I want to create a program that will slow down Windows' time. I will be using SetLocalTime() for this. However, when I run the program, my PC starts to micro-stutter and game performance drops, even though the process is barely using any CPU.
#include <iostream>
#include <Windows.h>
#include <thread>
#include <chrono>

using namespace std;

SYSTEMTIME st;
WORD hour;
WORD minute;
WORD second = 0;

int main()
{
    GetLocalTime(&st);
    hour = st.wHour;
    minute = st.wMinute;
    second = st.wSecond;
    for (;;)
    {
        for (int i = 0; i < 4; i++)
        {
            this_thread::sleep_for(chrono::milliseconds(500));
            st.wHour = hour;
            st.wMinute = minute;
            st.wSecond = second;
            SetLocalTime(&st);
        }
        second++;
        if (second == 60)
        {
            second = 0;
            minute++;
        }
        if (minute == 60)
        {
            minute = 0;
            hour++;
        }
    }
}
If you change the system clock, all the programs that use it for timing will also slow down.
From your comments, I gather that you want to time-scale an application that uses the system time. Since you haven't been more specific so far, I can only suggest a general approach.
Create a time manager class that, when your application starts, gets the current system time and stores it as the application's base time. Instead of calling GetLocalTime() or GetSystemTime(), call a method on your class that returns the current time adjusted by a time dilatation factor.
class TimeManager
{
private:
    SYSTEMTIME _BaseTime;
    double _TimeDilatation;
public:
    TimeManager();
    void SetTimeDilatation(double timeDilatation);
    void GetTime(LPSYSTEMTIME lpSystemTime);
};
// Constructor: grab the current local time and default to the normal flow of time.
TimeManager::TimeManager()
    : _TimeDilatation(1.0)
{
    GetLocalTime(&_BaseTime);
}
// Sets the time dilatation factor:
//   0.0 to 0.9  time will slow down
//   1.0         normal flow of time
//   1.1 and up  time will go faster
void TimeManager::SetTimeDilatation(double timeDilatation)
{
    _TimeDilatation = timeDilatation;
}
// Get the current time, taking the time dilatation factor into account.
void TimeManager::GetTime(LPSYSTEMTIME lpSystemTime)
{
    SYSTEMTIME resultingTime;
    SYSTEMTIME realTime;
    FILETIME ftime;
    ULARGE_INTEGER uliTime;
    __int64 lowerValue, higherValue, result;
    // Get the current local time
    GetLocalTime(&realTime);
    // Translate the base time into a large integer for subtraction
    SystemTimeToFileTime(&_BaseTime, &ftime);
    uliTime.LowPart = ftime.dwLowDateTime;
    uliTime.HighPart = ftime.dwHighDateTime;
    lowerValue = uliTime.QuadPart;
    // Translate the current time into a large integer for subtraction
    SystemTimeToFileTime(&realTime, &ftime);
    uliTime.LowPart = ftime.dwLowDateTime;
    uliTime.HighPart = ftime.dwHighDateTime;
    higherValue = uliTime.QuadPart;
    // Get the time difference and multiply it by the dilatation factor
    result = static_cast<__int64>((higherValue - lowerValue) * _TimeDilatation);
    // Apply the difference to the base time value
    result = lowerValue + result;
    // Convert the new time back into a SYSTEMTIME value
    uliTime.QuadPart = result;
    ftime.dwLowDateTime = uliTime.LowPart;
    ftime.dwHighDateTime = uliTime.HighPart;
    FileTimeToSystemTime(&ftime, &resultingTime);
    // Assign it to the pointer passed in parameter, and feel like a Time Lord.
    *lpSystemTime = resultingTime;
}
int main()
{
    TimeManager TM;
    TM.SetTimeDilatation(0.75); // time will pass at 75% of its normal speed
    for (;;)
    {
        SYSTEMTIME before, after;
        TM.GetTime(&before);
        // Do something that should take exactly one minute to process.
        TM.GetTime(&after);
        // Inspect the values of before and after: you'll see
        // that only 45 seconds have passed.
    }
}
Note that this is a general idea to push you in the right direction. I haven't compiled that code, so there may be an error or five. Feel free to point them out and I'll fix my post. I just didn't want to be too specific, since your question is a bit broad; this code may or may not help you depending on your use case. But that's generally how you slow down time without affecting the system time.
Consider the loop below. This is a simplified example of a problem I am trying to solve. I want to limit the number of times the doSomething function is called each second. Since the loop runs very fast, I thought I could use a rate limiter. Let's assume that I have found an appropriate value by running it with different values of x.
unsigned int incrementionRate = x;
unsigned int counter = 0;

while (true) {
    double seconds = getElapsedSeconds();
    print(seconds);
    counter = (counter + 1) % incrementionRate;
    if (counter == 0) {
        doSomething();
    }
}
I wonder if the number of calls to the doSomething function would be lower if I were running at a lower clock rate. In that case, I would like to limit the number of calls to doSomething to once per second. The second loop I have written is below.
float epsilon = 0.0001;

while (true) {
    double seconds = getElapsedSeconds();
    print(seconds);
    if (abs(seconds - floor(seconds)) <= epsilon) {
        doSomething();
    }
}
Would that do the trick for different clock rates, or are there still problems? Also, I would like to know if there is a better way of doing this. I have never worked with clock rates before and am trying to understand how the concerns differ when working with limited resources.
Note: Using sleep is not an option.
If I understand the issue properly, you could use a std::chrono::steady_clock time point that you just add a second to every time a second has passed.
Example:
#include <chrono>

auto end_time = std::chrono::steady_clock::now();

while (true) {
    // only call doSomething once a second
    if (end_time < std::chrono::steady_clock::now()) {
        doSomething();
        // set a new end time a second after the previous one
        end_time += std::chrono::seconds(1);
    }
    // do something else
}
Ted's answer is fine if you are really doing something else in the loop; if not, though, this results in a busy wait, which just consumes your CPU for nothing.
In such a case you should rather prefer letting your thread sleep:
std::chrono::milliseconds offset(200);
auto next = std::chrono::steady_clock::now();
for (;;)
{
    doSomething();
    next += offset;
    std::this_thread::sleep_until(next);
}
You'll need to include the <chrono> and <thread> headers for this.
I decided to go with a much simpler approach in the end: an adjustable time interval and just storing the latest update time, without introducing any new mechanism. Honestly, now I don't know why I couldn't think of it at first. Overthinking is a problem. :)
double lastUpdateTimestamp = 0;
const double updateInterval = 1.0;

while (true) {
    double seconds = getElapsedSeconds();
    print(seconds);
    if ((seconds - lastUpdateTimestamp) >= updateInterval) {
        doSomething();
        lastUpdateTimestamp = seconds;
    }
}
What is the best way in C++11 to implement a high-resolution timer that continuously checks for time in a loop, and executes some code after it passes a certain point in time? e.g. check what time it is in a loop from 9am onwards and execute some code exactly at 11am. I require the timing to be precise (i.e. no more than 1 microsecond after 9am).
I will be implementing this program on Linux CentOS 7.3, and have no issues with dedicating CPU resources to execute this task.
Instead of implementing this manually, you could use e.g. a systemd.timer. Make sure to specify the desired accuracy which can apparently be as precise as 1us.
a high-resolution timer that continuously checks for time in a loop,
First of all, you do not want to continuously check the time in a loop; that's extremely inefficient and simply unnecessary.
...executes some code after it passes a certain point in time?
Ok so you want to run some code at a given time in the future, as accurately as possible.
The simplest way is to simply start a background thread, compute how long until the target time (in the desired resolution) and then put the thread to sleep for that time period. When your thread wakes up, it executes the actual task. This should be accurate enough for the vast majority of needs.
The std::chrono library provides calls which make this easy:
System clock in std::chrono
High resolution clock in std::chrono
Here's a snippet of code which does what you want using the system clock (which makes it easier to set a wall clock time):
// c++ --std=c++11 ans.cpp -o ans -pthread
#include <thread>
#include <chrono>
#include <ctime>
#include <iostream>
#include <iomanip>

// do some busy work
int work(int count)
{
    int sum = 0;
    for (int i = 0; i < count; i++)
    {
        sum += i;
    }
    return sum;
}

std::chrono::system_clock::time_point make_scheduled_time(int yyyy, int mm, int dd, int HH, int MM, int SS)
{
    tm datetime = tm{};
    datetime.tm_year = yyyy - 1900; // Year since 1900
    datetime.tm_mon = mm - 1;       // Month since January
    datetime.tm_mday = dd;          // Day of the month [1-31]
    datetime.tm_hour = HH;          // Hour of the day [00-23]
    datetime.tm_min = MM;
    datetime.tm_sec = SS;
    time_t ttime_t = mktime(&datetime);
    std::chrono::system_clock::time_point scheduled = std::chrono::system_clock::from_time_t(ttime_t);
    return scheduled;
}

void do_work_at_scheduled_time()
{
    using period = std::chrono::system_clock::period;
    auto sched_start = make_scheduled_time(2019, 9, 17,  // date
                                           00, 14, 00);  // time
    // Wait until the scheduled time to actually do the work
    std::this_thread::sleep_until(sched_start);
    // Figure out how close to the scheduled time we actually awoke
    auto actual_start = std::chrono::system_clock::now();
    auto start_delta = actual_start - sched_start;
    float delta_ms = float(start_delta.count()) * period::num / period::den * 1e3f;
    std::cout << "worker: awoken within " << delta_ms << " ms" << std::endl;
    // Now do some actual work!
    int sum = work(12345);
}

int main()
{
    std::thread worker(do_work_at_scheduled_time);
    worker.join();
    return 0;
}
On my laptop, the typical latency is about 2-3ms. If you use the high_resolution_clock you should be able to get even better results.
There are other APIs you could use too, such as Boost, where you could use Asio to implement a high-resolution timeout.
I require the timing to be precise (i.e. no more than 1 microsecond after 9am).
Do you really need it to be accurate to the microsecond? Consider that at this resolution, you will also need to take into account all sorts of other factors, including system load, latency, clock jitter, and so on. Your code can start to execute at close to that time, but that's only part of the problem.
My suggestion would be to use timer_create(). This allows you to get notified by a signal at a given time. You can then implement your action in the signal handler.
In any case you should be aware that the accuracy of course depends on the system clock accuracy.
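As a rough sketch of what that might look like on Linux (the 5-second offset, the choice of SIGRTMIN, and the handler body are all illustrative, not part of the original suggestion):
#include <signal.h>
#include <time.h>
#include <unistd.h>

// One-shot POSIX timer that fires at an absolute CLOCK_REALTIME instant and
// delivers SIGRTMIN; the action runs in the signal handler.
// On older glibc you may need to link with -lrt.
static void on_timer(int, siginfo_t*, void*)
{
    // Keep real handlers short and async-signal-safe.
    const char msg[] = "timer fired\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
}

int main()
{
    struct sigaction sa{};
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = on_timer;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGRTMIN, &sa, nullptr);

    struct sigevent sev{};
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGRTMIN;

    timer_t timerid;
    timer_create(CLOCK_REALTIME, &sev, &timerid);

    // Fire 5 seconds from now; in practice you would compute the target
    // wall-clock time (e.g. 11:00) and put it in it_value.
    struct itimerspec its{};
    clock_gettime(CLOCK_REALTIME, &its.it_value);
    its.it_value.tv_sec += 5;
    timer_settime(timerid, TIMER_ABSTIME, &its, nullptr);

    pause(); // sleep until the signal arrives
    return 0;
}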
I have a function which has a factor that needs to be adjusted according to the load on the machine to consume exactly the wall time passed to the function. The factor can vary according to the load of the machine.
void execute_for_wallTime(int factor, int wallTime)
{
    double d = 0;
    for (int n = 0; n < factor; ++n)
        for (int m = 0; m < wallTime; ++m)
            d += d * n * m;
}
Is there a way to dynamically check the load on the machine and adjust the factor accordingly, in order to consume exactly the wall time passed to the function?
The wall time is read from a file and passed to this function. The values are in microseconds, e.g.:
73
21
44
According to the OP's comment:
#include <sys/time.h>

// Time in microseconds between the two timevals (might require a wider type for long intervals).
int deltaTime(struct timeval *tv1, struct timeval *tv2)
{
    return ((tv2->tv_sec - tv1->tv_sec) * 1000000) + tv2->tv_usec - tv1->tv_usec;
}

void execute_for_wallTime(int wallTime)
{
    struct timeval tvStart, tvNow;
    gettimeofday(&tvStart, NULL);
    double d = 0;
    for (int m = 0; ; ++m) {
        gettimeofday(&tvNow, NULL);
        if (deltaTime(&tvStart, &tvNow) >= wallTime) { // if wallTime is 1000 microseconds,
                                                       // this function returns after
                                                       // 1000 microseconds (and a
                                                       // little more due to overhead)
            return;
        }
        d += d * m;
    }
}
Now deal with wallTime by increasing or decreasing it in logic outside this function, depending on your performance calculations. This function simply runs for wallTime microseconds.
For C++ style, you can use std::chrono.
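For reference, here is a minimal sketch of the same busy-wait written with std::chrono (the signature mirrors the function above; assume wallTime is still in microseconds):
#include <chrono>

// Busy-loop until roughly wallTime microseconds of wall time have elapsed.
void execute_for_wallTime(int wallTime)
{
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + std::chrono::microseconds(wallTime);
    volatile double d = 0; // volatile so the filler work isn't optimized away
    for (int m = 0; clock::now() < deadline; ++m)
        d = d + d * m;
}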
I must comment that I would handle things differently, for example by calling nanosleep(). The operations make no sense unless, in actual code, you plan to substitute these "fillers" with actual operations; in that case you might consider threads and schedulers. Besides, the clock calls add overhead.
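For completeness, the sleeping variant could look roughly like this (again assuming the wall time is in microseconds; this yields the CPU instead of burning it):
#include <time.h>

// Sleep for wallTime microseconds instead of spinning.
void wait_for_wallTime(int wallTime)
{
    struct timespec ts;
    ts.tv_sec  = wallTime / 1000000;
    ts.tv_nsec = (wallTime % 1000000) * 1000L;
    nanosleep(&ts, NULL);
}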
Long story short, I'm recoding a very CPU-hungry app; I'm going to restructure it in an entirely different way and change a lot of its inner workings. I was looking for a good way to compare old and new results.
Say I start by changing how function foo() works:
I want to have the program run for, say, 60 seconds, and measure the % of CPU that function is using within the program's total CPU usage. If the program sits at a constant 25%, I want to know how much of that 25% is my function. Then I'll test again after changing the code, and have two good indicators of whether I made a real improvement.
I've tried Very Sleepy, but I can't get access to the functions I wanted; they do not show. I want to be able to see the % usage of the functions I CODED MYSELF that use the library's functions (SDL), yet it will only show me the SDL functions.
There are a few different ways, one of which is to simply add a high-precision timer call at the start and end of the function. Depending on the number of calls to your function, you can either accumulate the time, e.g.:
typedef type_of_time_source tt;

tt total = 0;

void my_func(....)
{
    tt time = gettime();
    ... lots of your code ...
    time = gettime() - time;
    total += time;
}
Or you can store the individual intervals, e.g.
tt array[LARGE_NUMBER];
int index = 0;
... same code as above ...
time = gettime() - time;
if (index >= LARGE_NUMBER) index = 0; // [or LARGE_NUMBER-1?]
array[index++] = time;
Of course, if your calls to SDL are in the middle of your function, you need to discount that time one way or another.
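If you want a concrete time source for the sketches above, std::chrono works well for this kind of ad-hoc instrumentation; here is a minimal version of the accumulating variant (my_func and report_total are placeholder names):
#include <chrono>
#include <iostream>

using clock_type = std::chrono::high_resolution_clock;
static clock_type::duration total{}; // accumulated time spent in my_func

void my_func(/* ... */)
{
    const auto start = clock_type::now();
    // ... lots of your code ...
    total += clock_type::now() - start;
}

void report_total()
{
    std::cout << "my_func total: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(total).count()
              << " ms" << std::endl;
}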
Another method would be to measure the individual timing of several functions:
enum {
    FUNCA,
    FUNCB,
    ....
    MAX_TIMINGS
};

struct timing_val
{
    tt start, end;
    char *name;
};

struct timing_val timing_values[MAX_TIMINGS];

#define START(f) do { timing_values[f].name = #f; timing_values[f].start = gettime(); } while (0)
#define END(f)   do { timing_values[f].end = gettime(); } while (0)

void report()
{
    for (int i = 0; i < MAX_TIMINGS; i++)
    {
        if (timing_values[i].start != 0 && timing_values[i].end != 0)
            cout << timing_values[i].name << " time = " <<
                timing_values[i].end - timing_values[i].start << endl;
    }
}
void big_function()
{
    START(FUNCA);
    funca();
    END(FUNCA);
    START(FUNCB);
    funcb();
    END(FUNCB);
    ...
    report();
}
I've certainly used all of these approaches, and as long as the functions being measured are reasonably long-running, they shouldn't add much overhead.
You can also measure several functions at once. For example, if we want to time the WHOLE function, we could just add "BIG_FUNCTION" to the enum list above and do this:
void big_function()
{
    START(BIG_FUNCTION);
    START(FUNCA);
    funca();
    END(FUNCA);
    START(FUNCB);
    funcb();
    END(FUNCB);
    ...
    END(BIG_FUNCTION);
    report();
}
First off, I found a lot of information on this topic, but no solutions that solved the issue unfortunately.
I'm simply trying to regulate my C++ program to run at 60 iterations per second. I've tried everything from GetClockTicks() to GetLocalTime() to help in the regulation but every single time I run the program on my Windows Server 2008 machine, it runs slower than on my local machine and I have no clue why!
I understand that "clock" based function calls return CPU time spend on the execution so I went to GetLocalTime and then tried to differentiate between the start time and the stop time then call Sleep((FPS / 1000) - millisecondExecutionTime)
My local machine is quite faster than the servers CPU so obviously the thought was that it was going off of CPU ticks, but that doesn't explain why the GetLocalTime doesn't work. I've been basing this method off of http://www.lazyfoo.net/SDL_tutorials/lesson14/index.php changing the get_ticks() with all of the time returning functions I could find on the web.
For example take this code:
#include <Windows.h>
#include <time.h>
#include <string>
#include <iostream>

using namespace std;

int main() {
    int tFps = 60;
    int counter = 0;
    SYSTEMTIME gStart, gEnd, start_time, end_time;
    GetLocalTime(&gStart);

    bool done = false;
    while (!done) {
        GetLocalTime(&start_time);
        Sleep(10);
        counter++;
        GetLocalTime(&end_time);

        int startTimeMilli = (start_time.wSecond * 1000 + start_time.wMilliseconds);
        int endTimeMilli = (end_time.wSecond * 1000 + end_time.wMilliseconds);
        int time_to_sleep = (1000 / tFps) - (endTimeMilli - startTimeMilli);

        if (counter > 240)
            done = true;
        if (time_to_sleep > 0)
            Sleep(time_to_sleep);
    }
    GetLocalTime(&gEnd);
    cout << "Total Time: " << (gEnd.wSecond * 1000 + gEnd.wMilliseconds) - (gStart.wSecond * 1000 + gStart.wMilliseconds) << endl;
    cin.get();
}
For this code snippet, run on my computer (3.06 GHz), I get a total time (ms) of 3856, whereas on my server (2.53 GHz) I get 6256. So it could potentially be the speed of the processor, though the ratio 2.53/3.06 is only .826797386, versus 3856/6271, which is .614893956.
I can't tell if the Sleep function is doing something drastically different than expected (though I don't see why it would), or if the problem is my method for getting the time, even though it should be wall-clock time (ms), not clock-cycle time. Any help would be greatly appreciated, thanks.
For one thing, Sleep's default resolution is the system timer interval - usually either 10 ms or about 15.6 ms, depending on the Windows edition. To get a resolution of, say, 1 ms, you have to issue a timeBeginPeriod(1), which reprograms the timer hardware to fire (roughly) once every millisecond.
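A minimal sketch of how that is typically wired up (timeBeginPeriod/timeEndPeriod come from winmm; the 240-frame loop is just for illustration):
#include <Windows.h>
#include <mmsystem.h> // timeBeginPeriod / timeEndPeriod
#pragma comment(lib, "winmm.lib")

int main()
{
    timeBeginPeriod(1); // request ~1 ms timer resolution
    for (int i = 0; i < 240; ++i)
    {
        // ... frame work ...
        Sleep(16); // now wakes close to 16 ms instead of 15-30 ms
    }
    timeEndPeriod(1); // always pair with timeBeginPeriod
    return 0;
}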
In your main loop you can do something like this:
int main()
{
    // Timers
    LONGLONG curTime = 0;
    LONGLONG nextTime = 0;
    int loops = 0; // frameskip counter (not fully implemented here; see note below)
    Timers::SWGameClock::GetInstance()->GetTime(&nextTime);
    while (true) {
        Timers::SWGameClock::GetInstance()->GetTime(&curTime);
        if (curTime > nextTime && loops <= MAX_FRAMESKIP) {
            nextTime += Timers::SWGameClock::GetInstance()->timeCount;
            // Business logic goes here and occurs based on the specified framerate
        }
    }
}
using this time library
include "stdafx.h"
LONGLONG cacheTime;
Timers::SWGameClock* Timers::SWGameClock::pInstance = NULL;
Timers::SWGameClock* Timers::SWGameClock::GetInstance ( ) {
if (pInstance == NULL) {
pInstance = new SWGameClock();
}
return pInstance;
}
Timers::SWGameClock::SWGameClock(void) {
this->Initialize ( );
}
void Timers::SWGameClock::GetTime ( LONGLONG * t ) {
// Use timeGetTime() if queryperformancecounter is not supported
if (!QueryPerformanceCounter( (LARGE_INTEGER *) t)) {
*t = timeGetTime();
}
cacheTime = *t;
}
LONGLONG Timers::SWGameClock::GetTimeElapsed ( void ) {
LONGLONG t;
// Use timeGetTime() if queryperformancecounter is not supported
if (!QueryPerformanceCounter( (LARGE_INTEGER *) &t )) {
t = timeGetTime();
}
return (t - cacheTime);
}
void Timers::SWGameClock::Initialize ( void ) {
if ( !QueryPerformanceFrequency((LARGE_INTEGER *) &this->frequency) ) {
this->frequency = 1000; // 1000ms to one second
}
this->timeCount = DWORD(this->frequency / TICKS_PER_SECOND);
}
Timers::SWGameClock::~SWGameClock(void)
{
}
with a header file that contains the following:
// Required for rendering stuff on time
#pragma once

#include <Windows.h> // for DWORD / LONGLONG

#define TICKS_PER_SECOND 60
#define MAX_FRAMESKIP 5

namespace Timers
{
    class SWGameClock
    {
    public:
        static SWGameClock* GetInstance();
        void Initialize(void);
        DWORD timeCount;
        void GetTime(LONGLONG* t);
        LONGLONG GetTimeElapsed(void);
        LONGLONG frequency;
        ~SWGameClock(void);
    protected:
        SWGameClock(void);
    private:
        static SWGameClock* pInstance;
    }; // SWGameClock
} // Timers
This will ensure that your code runs at 60FPS (or whatever you put in) though you can probably dump the MAX_FRAMESKIP as that's not truly implemented in this example!
You could try a WinMain function and use the SetTimer function with a regular message loop (you can also take advantage of the filter mechanism of GetMessage(...)), in which you test for the WM_TIMER message with the requested interval, and when your counter reaches the limit, do a PostQuitMessage(0) to terminate the message loop.
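A rough sketch of that approach (the 60 Hz interval and the 240-message limit mirror the code in the question; error handling is omitted):
#include <Windows.h>

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    // Thread timer (no window): WM_TIMER is posted to this thread's queue.
    const UINT_PTR timerId = SetTimer(NULL, 0, 1000 / 60, NULL);
    int counter = 0;

    MSG msg;
    // Filter so GetMessage only returns WM_TIMER (WM_QUIT is always delivered).
    while (GetMessage(&msg, NULL, WM_TIMER, WM_TIMER) > 0)
    {
        if (msg.message == WM_TIMER)
        {
            // ... per-frame work here ...
            if (++counter > 240)
                PostQuitMessage(0);
        }
    }
    KillTimer(NULL, timerId);
    return 0;
}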
For a duty cycle that fast, you can use a high-accuracy timer (like QueryPerformanceCounter) and a busy-wait loop.
If you had a much lower duty cycle, but still wanted precision, then you could Sleep for part of the time and then eat up the leftover time with a busy-wait loop.
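Sketched out, that hybrid might look roughly like this (the 2 ms margin left for spinning is an arbitrary choice):
#include <Windows.h>

// Wait until targetQpc (a QueryPerformanceCounter reading): sleep through most
// of the interval, then busy-wait the last couple of milliseconds for precision.
void wait_until_qpc(LONGLONG targetQpc)
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);

    LONGLONG msLeft = (targetQpc - now.QuadPart) * 1000 / freq.QuadPart;
    if (msLeft > 2)
        Sleep(static_cast<DWORD>(msLeft - 2)); // coarse wait

    do {
        QueryPerformanceCounter(&now); // fine-grained busy-wait
    } while (now.QuadPart < targetQpc);
}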
Another option is to use something like DirectX to sync yourself to the VSync interrupt (which is almost always 60 Hz). This can make a lot of sense if you're coding a game or a/v presentation.
Windows is not a real-time OS, so there will never be a perfect way to do something like this, as there's no guarantee your thread will be scheduled to run exactly when you need it to.
Note that in the remarks for Sleep, the actual amount of time will be at least one "tick" and possibly one whole "tick" longer than the delay you requested before the thread is scheduled to run again (and even then we have to assume the thread is scheduled). The "tick" can vary a lot depending on hardware and the version of Windows; it is commonly in the 10-15 ms range, and I've seen it as bad as 19 ms. For 60 Hz you need 16.666 ms per iteration, so this is obviously not nearly precise enough to give you what you need.
What about rendering (iterating) based on the time elapsed between the rendering of each frame? Consider creating a void render(double timePassed) function and rendering depending on the timePassed parameter, instead of putting the program to sleep.
Imagine, for example, you want to render a ball falling or bouncing. You would know its speed, acceleration, and all the other physics you need. Calculate the position of the ball based on timePassed and all the other physics parameters (speed, acceleration, etc.).
Or, if you prefer, you could just skip the render() function execution if the time passed is too small, instead of putting the program to sleep.
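A tiny sketch of the timePassed-based render() idea with a falling ball (the Ball structure and gravity constant are just illustrative):
#include <chrono>

struct Ball { double y = 0.0, velocity = 0.0; };

// Advance the simulation by however much real time passed since the last frame.
void render(Ball& ball, double timePassed)
{
    const double gravity = -9.81; // m/s^2, illustrative
    ball.velocity += gravity * timePassed;
    ball.y        += ball.velocity * timePassed;
    // ... draw the ball at ball.y ...
}

// Typical loop: measure the frame time and hand it to render().
void run(Ball& ball)
{
    auto last = std::chrono::steady_clock::now();
    for (;;)
    {
        auto now = std::chrono::steady_clock::now();
        std::chrono::duration<double> dt = now - last;
        last = now;
        render(ball, dt.count());
    }
}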