I am writing a program that simulates an activity, and I am wondering how to speed up time for the simulation. Say one hour in the real world equals one month in the program.
Thank you.
The program is actually similar to a restaurant simulation where you don't really know when customers come. Say we pick a random number (2-10) of customers every hour.
It depends on how the program gets the time now.
For example, if it calls the Linux system time(), just replace that with your own function (say, mytime) which returns sped-up times. Perhaps mytime calls time() and multiplies the elapsed time by whatever factor makes sense; for 1 hour = 1 month (30 days), that factor is 720. The origin should be treated as the moment the program begins:
#include <time.h>

time_t t0;                            // real (wall-clock) time at program start
time_t mytime (time_t *);

int main (void)
{
    t0 = time (NULL);                 // record the origin at program initialization
    ....
    for (;;)
    {
        time_t sim_time = mytime (NULL);
        // yada yada yada
        ...
    }
}

time_t mytime (time_t *unused)        // drop-in replacement for time()
{
    (void) unused;
    return 720 * (time (NULL) - t0);  // time since the program started,
                                      // magnified by 720, so one hour is one month
}
You just do it. You decide how many events take place in an hour of simulation time (e.g., if an event takes place once a second, then after 3600 simulated events you've simulated an hour). There's no need for your simulation to run in real time; you can run it as fast as you can calculate the relevant numbers.
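A minimal sketch of that idea, with simulation time kept as nothing more than a counter (processOneSecond() is just a placeholder for whatever work one simulated second involves):

void processOneSecond() { /* placeholder for the real per-second work */ }

int main() {
    // Simulation time is just a counter, advanced as fast as the machine can
    // compute; one loop iteration stands for one simulated second.
    long simulatedSeconds = 0;
    while (simulatedSeconds < 3600) {   // 3600 iterations = one simulated hour
        processOneSecond();
        ++simulatedSeconds;
    }
}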
It sounds like you are implementing a Discrete Event Simulation. You don't even need to have a free-running timer (no matter what scaling you may use) in such a situation. It's all driven by the events. You have a priority queue containing events, ordered by the event time. You have a processing loop which takes the event at the head of the queue, and advances the simulation time to the event time. You process the event, which may involve scheduling more events. (For example, the customerArrived event may cause a customerOrdersDinner event to be generated 2 minutes later.) You can easily simulate customers arriving using random().
The other answers I've read thus far are still assuming you need a continuous timer, which is usually not the most efficient way of simulating an event-driven system. You don't need to scale real time to simulation time, or have ticks. Let the events drive time!
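A minimal sketch of such an event loop (the Event struct, the minute-based time units, and the scheduling details are illustrative assumptions; only the customerArrived / customerOrdersDinner events and the 2-minute delay come from the description above):

#include <cstdlib>
#include <functional>
#include <queue>
#include <vector>

// Each event carries its firing time (in simulated minutes) and an action.
struct Event {
    double time;
    std::function<void()> action;
    bool operator>(const Event &other) const { return time > other.time; }
};

int main() {
    // Min-heap: the event with the smallest time is always on top.
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> events;
    double simTime = 0.0;

    // Each arrival schedules a customerOrdersDinner event 2 minutes later and
    // the next arrival at a random offset of 1-60 minutes.
    std::function<void()> customerArrived = [&]() {
        events.push({simTime + 2.0, [] { /* handle customerOrdersDinner */ }});
        events.push({simTime + 1 + std::rand() % 60, customerArrived});
    };
    events.push({double(1 + std::rand() % 60), customerArrived});

    while (!events.empty() && simTime < 24 * 60) {   // simulate one day
        Event e = events.top();
        events.pop();
        simTime = e.time;      // advance the clock straight to the event time
        e.action();            // process it; this may schedule further events
    }
}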
If the simulation is data dependent (like a stock market program), just speed up the rate at which the data is pumped in. If it is something that depends on time() calls, you will have to do something like wallyk's answer (assuming you have the source code).
If time in your simulation is discrete, one option is to structure your program so that something happens "every tick".
Once you do that, time in your program is arbitrarily fast.
Is there really a reason for a month of simulation time to correspond exactly to an hour of time in the real world? If yes, you can always process the number of ticks that correspond to a month, and then pause for the appropriate amount of time to let the hour of "real time" finish.
Of course, a key variable here is the granularity of your simulation, i.e. how many ticks correspond to a second of simulated time.
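For instance, a sketch of that pause-per-month variant, assuming one tick corresponds to one simulated second and a 30-day month (the function name is a placeholder):

#include <chrono>
#include <thread>

void advanceOneSimulatedSecond() { /* placeholder for the per-tick work */ }

int main() {
    const long ticksPerMonth = 30L * 24 * 3600;   // one tick = one simulated second

    for (;;) {
        auto monthStart = std::chrono::steady_clock::now();
        for (long tick = 0; tick < ticksPerMonth; ++tick)
            advanceOneSimulatedSecond();
        // Pause until a full real-world hour has passed, so 1 hour = 1 month.
        std::this_thread::sleep_until(monthStart + std::chrono::hours(1));
    }
}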
I'm writing a little game in C++ at the moment.
My game's while loop is always active, and inside this loop I have a condition that checks whether the player is shooting.
Now I face the following problem: after every shot fired there is a delay, this delay changes over time, and during the delay the player should still be able to move.
shoot
move
wait 700 ms
shoot again
At the moment I'm using Sleep(700). The problem is that I can't move during those 700 ms. I need something like a timer, so that movement keeps being processed during the 700 ms instead of the whole loop waiting for 700 ms.
This depends on how your hypothetical 'sleep' is implemented. There are a few things you should know, as it can be solved in a few ways.
You don't want to put your thread to sleep, because then everything halts, which is not what you want.
Plus, sleep may block for longer than you asked for. For example, if you sleep for 700 ms you may get more than that, which means that if you depend on accurate times you can get burned by this.
1) The first way would be to record the raw time inside the player. This is not the best approach, but it would work for a simple toy program: store the result of std::chrono::high_resolution_clock::now() (from <chrono>) inside the class at the time you fire. To check whether you can fire again, just compare the stored value to ...::now() and see if 700 ms have elapsed. You will have to read the documentation to work with it in milliseconds. (A minimal sketch is shown after this list.)
2) A better way would be to give your game a pulse via something called 'game ticks', which is the pulse to which your world moves forward. Then you can store the gametick that you fired on and do something similar to the above paragraph (except now you are just checking if currentGametick > lastFiredGametick + gametickUntilFiring).
For the gametick idea, you would make sure you do gametick++ every X milliseconds, and then run your world. A common value is somewhere between 10ms and 50ms.
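For approach 1), a minimal sketch of that timestamp check might look like the following (the Player type and the fixed 700 ms cooldown are placeholders; steady_clock is used because it never jumps backwards, though high_resolution_clock works too):

#include <chrono>

struct Player {
    std::chrono::steady_clock::time_point lastShot{};   // when we last fired

    bool canFire() const {
        // Has the 700 ms cooldown elapsed since the last shot?
        return std::chrono::steady_clock::now() - lastShot >= std::chrono::milliseconds(700);
    }

    void fire() {
        lastShot = std::chrono::steady_clock::now();     // record the shot time
        // ... spawn the projectile here ...
    }
};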
Your game loop would then look like
while (!exit) {
    readInput();                        // poll player input every pass
    if (ticker.shouldTick()) {          // has a full gametick elapsed?
        ticker.tick();
        world.tick(ticker.gametick);    // advance the game state by one tick
    }
    render();                           // keep drawing between gameticks
}
The above has the following advantages:
You only update the world every gametick
You keep rendering between gameticks, so you can have smooth animations since you will be rendering at a very high framerate
If you want to halt, just spin in a while loop until the amount of time has elapsed
This glosses over a significant amount of discussion, which you should definitely read up on if you are thinking of going the gametick route.
With whatever route you take, you probably need to read this.
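For the gametick route, the ticker used in the loop above could be as simple as this sketch (the Ticker class and the 50 ms tick length are purely illustrative, not part of any library):

#include <chrono>

// Hypothetical Ticker used in the loop above: shouldTick() reports whether a
// full tick interval has elapsed since the last tick() call.
class Ticker {
public:
    explicit Ticker(std::chrono::milliseconds tickLength = std::chrono::milliseconds(50))
        : tickLength(tickLength), lastTick(std::chrono::steady_clock::now()) {}

    bool shouldTick() const {
        return std::chrono::steady_clock::now() - lastTick >= tickLength;
    }

    void tick() {
        lastTick += tickLength;   // advance by a whole tick so timing error doesn't accumulate
        ++gametick;
    }

    long gametick = 0;            // the current game tick number

private:
    std::chrono::milliseconds tickLength;
    std::chrono::steady_clock::time_point lastTick;
};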
Qt 5.7 32-bit on Windows 10 64-bit
Long-period timer
The interval of a QTimer is given in milliseconds as a signed 32-bit integer, so the maximum interval that can be set is a little more than 24 days (2^31 / (1000*3600*24) ≈ 24.85 days).
I need a timer with intervals going far beyond this limit.
So my question is, which alternative do you recommend? std::chrono (C++11) seems unsuitable, as it does not have an event handler.
Alain
You could always create your own class which chains multiple QTimer intervals, each within the valid range, and just counts how many have elapsed.
Pretty simple problem. If you can only count to 10 and you need to count to 100 - just count to 10 ten times.
I would implement this in the following way:
Upon timer start, note the current time in milliseconds like this:
m_timerStartTime = QDateTime::currentMSecsSinceEpoch()
Then, I would start a timer at some large interval, such as 10 hours, and attach a handler function to the timer that simply compares the elapsed time against the total delay to see whether we are due:
if (QDateTime::currentMSecsSinceEpoch() - m_timerStartTime > WANTED_DELAY_TIME) {
    // Execute the timer payload
    // Stop the interval timer
}
This simple approach could be improved in several ways. For example, to keep the timer running even if application is stopped/restarted, simply save the timer start time in a setting or other persistent storage, and read it back in at application start up.
And to improve precision, simply change the interval from the timer handler function on the last iteration so that it fires exactly at the intended end time (instead of overshooting by up to one full 10-hour interval).
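Putting the pieces above together, a sketch of such a wrapper might look like this (the LongTimer class, its expired() signal, and the 10-hour re-check interval are placeholders; persistence across restarts is omitted):

#include <QDateTime>
#include <QObject>
#include <QTimer>

class LongTimer : public QObject {
    Q_OBJECT
public:
    explicit LongTimer(qint64 delayMs, QObject *parent = nullptr)
        : QObject(parent), m_delayMs(delayMs)
    {
        m_timerStartTime = QDateTime::currentMSecsSinceEpoch();   // note the start time
        connect(&m_timer, &QTimer::timeout, this, &LongTimer::check);
        m_timer.start(10 * 60 * 60 * 1000);                       // re-check every 10 hours
    }

signals:
    void expired();   // emitted once the full delay has elapsed

private slots:
    void check() {
        qint64 elapsed = QDateTime::currentMSecsSinceEpoch() - m_timerStartTime;
        if (elapsed >= m_delayMs) {
            m_timer.stop();
            emit expired();                                  // execute the timer payload
        } else if (m_delayMs - elapsed < m_timer.interval()) {
            m_timer.start(int(m_delayMs - elapsed));         // last lap: land exactly on time
        }
    }

private:
    QTimer m_timer;
    qint64 m_timerStartTime = 0;
    qint64 m_delayMs = 0;
};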
Everything I've found so far regarding timers is that it's, at best, available at a 1ms resolution. QTimer's docs claim that's the best it can provide.
I understand that OSes like Windows are not real-time OSes, but I still want to ask this question in hopes that someone knows something that could help.
So, I'm writing an app that requires a function to be called at a fairly precise but arbitrary interval, say 60 times/sec (full range: 59-61Hz). That means I need it to be called, on average, every ~16.67ms. This part of the design can't change.
The best timing source I currently have is vsync. When I go off of that, it's pretty good. It's not ideal, because the monitor's frequency is not exactly what I need to call this function at, but it can be somewhat compensated for.
The kicker is that the level of accuracy I'm after over this range is more or less available with timers, but not the level of precision I want. I can get a 16 ms timer to hit exactly 16 ms ~97% of the time. I can get a 17 ms timer to hit exactly 17 ms ~97% of the time. But no API exists to get me 16.67 ms?
Is what I'm looking for simply not possible?
Background: The project is called Phoenix. Essentially, it's a libretro frontend. Libretro "cores" are game console emulators encapsulated in individual shared libraries. The API function being called at a specific rate is retro_run(). Each call emulates a game frame and calls callbacks for audio, video and so on. In order to emulate at a console's native framerate, we must call retro_run() at exactly (or as close to) this rate, hence the timer.
You could write a loop that checks std::chrono::high_resolution_clock::now() and calls std::this_thread::yield() until the right time has elapsed. If the program needs to be responsive while this is going on, you should do it in a separate thread from the one running the main loop.
Some example code:
http://en.cppreference.com/w/cpp/thread/yield
An alternative is to use QElapsedTimer, whose clock type is PerformanceCounter on Windows. You will still need to check it from a loop, and you will probably still want to yield within that loop. Example code: http://doc.qt.io/qt-4.8/qelapsedtimer.html
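A sketch of the busy-wait idea (steady_clock is used here rather than high_resolution_clock because it is guaranteed monotonic):

#include <chrono>
#include <thread>

// Spin until the deadline, yielding so other threads can run in the meantime.
// Run this on its own thread if the rest of the program must stay responsive.
void waitUntil(std::chrono::steady_clock::time_point deadline) {
    while (std::chrono::steady_clock::now() < deadline)
        std::this_thread::yield();
}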
It is completely unnecessary to call retro_run at any highly controlled time in particular, as long as the average frame rate comes out right, and as long as your audio output buffers don't underflow.
First of all, you are likely to have to measure real time using an audio-output-based timer. Ultimately, each retro_run produces a chunk of audio. The audio buffer state after the chunk is added is your timing reference: if you run early, the buffer will be too full; if you run late, the buffer will be too empty.
This error measure can be fed into a PI controller, whose output gives you the desired delay until the next invocation of retro_run. This will automatically ensure that your average rate and phase are correct. Any systematic latencies in getting retro_run active will be integrated away, etc.
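A sketch of such a controller (the struct, its gains, and the buffer-fill error expressed in milliseconds of audio are assumptions to be tuned; nothing here comes from the libretro API):

// Error: how far the audio buffer fill level is from its target, expressed in
// milliseconds of audio (positive = too full, i.e. we are running early).
struct FrameDelayController {
    double kP = 0.1;                           // proportional gain (tune)
    double kI = 0.01;                          // integral gain (tune)
    double integral = 0.0;
    double nominalDelayMs = 1000.0 / 60.0;     // ~16.67 ms for a 60 Hz core

    // Returns the delay to wait before the next retro_run() call.
    double nextDelayMs(double bufferFillErrorMs) {
        integral += bufferFillErrorMs;
        return nominalDelayMs + kP * bufferFillErrorMs + kI * integral;
    }
};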
Secondly, you need a way of waking yourself up at the correct moment in time. Given a target time (in terms of a performance counter, for example) to call retro_run, you'll need a source of events that wake your code up so that you can compare the time and retro_run when necessary.
The simplest way of doing this would be to reimplement QCoreApplication::notify. You'll have a chance to retro_run prior to the delivery of every event, in every event loop, in every thread. Since system events might not otherwise come often enough, you'll also want to run a timer to provide a more dependable source of events. It doesn't matter what the events are: any kind of event is good for your purpose.
I'm not familiar with threading limitations of retro_run - perhaps you can run it in any one thread at a time. In such case, you'd want to run it on the next available thread in a pool, perhaps with the exception of the main thread. So, effectively, the events (including timer events) are used as energetically cheap sources of giving you execution context.
If you choose to have a thread dedicated to retro_run, it should be a high priority thread that simply blocks on a mutex. Whenever you're ready to run retro_run when a well-timed event comes, you unlock the mutex, and the thread should be scheduled right away, since it'll preempt most other threads - and certainly all threads in your process.
OTOH, on a low core count system, the high priority thread is likely to preempt the main (gui) thread, so you might as well invoke retro_run directly from whatever thread got the well-timed event.
It might of course turn out that using events from arbitrary threads to wake up the dedicated thread introduces too much worst-case latency or too much latency spread - this will be system-specific and you may wish to collect runtime statistics, switch threading and event source strategies on the fly, and stick with the best one. The choices are:
retro_run in a dedicated thread waiting on a mutex, unlock source being any thread with a well-timed event caught via notify,
retro_run in a dedicated thread waiting for a timer (or any other) event; events still caught via notify,
retro_run in a gui thread, unlock source being the events delivered to the gui thread, still caught via notify,
any of the above, but using timer events only - note that you don't care which timer events they are, they don't need to come from your timer,
as in #4, but selective to your timer only.
My implementation, based on Lorehead's answer. Times for all variables are in ms.
It of course needs a way to stop running, and I was also thinking about subtracting half the (running average) difference between timeElapsed and interval to make the average error ±n instead of +2n, where 2n is the average overshoot.
// Typical interval value: 1/60 s ~= 16.67 ms
void Looper::beginLoop( double interval ) {
    QElapsedTimer timer;
    int counter = 1;
    int printEvery = 240;
    int yieldCounter = 0;
    double timeElapsed = 0.0;

    forever {
        if( timeElapsed > interval ) {
            timer.start();
            counter++;

            if( counter % printEvery == 0 ) {
                qDebug() << "Yield() ran" << yieldCounter << "times";
                qDebug() << "timeElapsed =" << timeElapsed << "ms | interval =" << interval << "ms";
                qDebug() << "Difference:" << timeElapsed - interval << " -- " << ( ( timeElapsed - interval ) / interval ) * 100.0 << "%";
            }

            yieldCounter = 0;
            importantBlockingFunction();

            // Reset the frame timer
            timeElapsed = ( double )timer.nsecsElapsed() / 1000.0 / 1000.0;
        }

        timer.start();

        // Running this just once means massive overhead from calling timer.start() so many times so quickly
        for( int i = 0; i < 100; i++ ) {
            yieldCounter++;
            QThread::yieldCurrentThread();
        }

        timeElapsed += ( double )timer.nsecsElapsed() / 1000.0 / 1000.0;
    }
}
I have a simulation that I am trying to convert to "real time". I say "real time" because it's okay for performance to dip if needed (slowing down time for the observers/clients too). However, if there is a small number of objects, I want to limit the performance so that it runs at a steady frame rate (~100 FPS in this case).
I tried sleep() and Sleep() for Linux and Windows respectively, but they don't seem to be accurate enough, as the FPS really dips to a fraction of what I was aiming for. I suppose this scenario is common for games, especially online games, but I was not able to find any helpful material on the subject. What is the preferable way of frame limiting? Is there a sleep method that can guarantee it won't give up more time than what was specified?
Note: I'm running this on 2 different clusters (Linux and Windows) and all nodes only have built-in video. So I have to implement limiting on both platforms, and it shouldn't be video card based (if there is even such a thing). I also only need to implement the limiting on one thread/node, because there is already synchronization between nodes and the others would automatically be limited if one thread is properly limited.
Edit: some pseudo code that shows how I implemented the current limiter:
while (ProcessControlMessages())
{
    uint64 tStart = _context.GetTimeMs64();   // start of this frame
    SimulateFrame();
    uint64 newT = _context.GetTimeMs64();
    if (newT - tStart < DESIRED_FRAME_RATE_DURATION)
        this_thread::sleep_for(chrono::milliseconds(DESIRED_FRAME_RATE_DURATION - (newT - tStart)));
}
I was also thinking if I could do the limiting every N frames, where N is a fraction of the desired frame rate. I'll give it a try and report back.
For games, a frame limiter is usually inadequate. Instead, the methods that update the game state (in your case SimulateFrame()) are kept frame rate independent. E.g. if you want to move an object, then the actual offset is the object's speed multiplied by the last frame's duration. Similarly, you can do this for all kinds of calculations.
This approach has the advantage that the user gets the maximum frame rate while maintaining real-time behavior. However, you should watch out that the frame durations don't get too small (< 1 ms), as this could result in inaccurate calculations. In that case a small sleep with a fixed duration could help.
This is how games usually handle this problem. You have to check if your simulation is appropriate for this technique, too.
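A sketch of a frame-rate-independent update in that style (names are illustrative):

// The offset applied each frame is speed multiplied by the last frame's
// duration, so an object covers the same distance at 30 FPS or 1000 FPS.
void updatePosition(double &position, double speedUnitsPerSecond, double lastFrameSeconds) {
    position += speedUnitsPerSecond * lastFrameSeconds;
}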
Instead of having each frame sleep for long enough to fill out a full frame, have them sleep so as to average out. Keep a global or thread-owned time count: for each frame, maintain a "desired earliest end time", calculated from the previous desired earliest end time rather than from the current time:
uint64 tGoalEndTime = _context.GetTimeMs64() + DESIRED_FRAME_RATE_DURATION;

while (ProcessControlMessages())
{
    SimulateFrame();
    uint64 end = _context.GetTimeMs64();
    if (end < tGoalEndTime) {
        this_thread::sleep_for(chrono::milliseconds(tGoalEndTime - end));
        tGoalEndTime += DESIRED_FRAME_RATE_DURATION;
    } else {
        tGoalEndTime = end;   // we ran over, pretend we didn't and keep going
    }
}
Note: this uses your example's sleep_for because I wanted to show the minimum number of changes to enact it. sleep_until works better here.
The trick is that any frame that sleeps too long immediately causes the next few frames to rush to catch up.
Note: you cannot get timing within 2 ms (20% jitter at 100 fps) on modern consumer OSs. The scheduling quantum for threads on most consumer OSs is around 100 ms, so the instant you sleep, you may sleep for multiple quanta before it is your turn again. sleep_until may use an OS-specific technique to have less jitter, but you can't rely on it.
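For reference, a sketch of the same pacing loop written with sleep_until, as suggested above (ProcessControlMessages() and SimulateFrame() are the question's own functions, declared here only so the sketch stands alone):

#include <chrono>
#include <thread>

bool ProcessControlMessages();   // from the question
void SimulateFrame();            // from the question

void runPaced(std::chrono::milliseconds frameDuration) {
    auto goal = std::chrono::steady_clock::now() + frameDuration;
    while (ProcessControlMessages()) {
        SimulateFrame();
        auto now = std::chrono::steady_clock::now();
        if (now < goal) {
            std::this_thread::sleep_until(goal);   // pace this frame
            goal += frameDuration;
        } else {
            goal = now;   // we ran over; don't try to make up the lost time
        }
    }
}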
I would like to achieve determinism in my game engine, in order to be able to save and replay input sequences and to make networking easier.
My engine currently uses a variable timestep: every frame I calculate the time it took to update/draw the last one and pass it to my entities' update method. This makes 1000 FPS games seem as fast as 30 FPS games, but introduces nondeterministic behavior.
A solution could be fixing the game to 60 FPS, but that would make input feel more delayed and would forfeit the benefits of higher framerates.
So I've tried using a thread (which constantly calls update(1) and then sleeps for 16 ms) while drawing as fast as possible in the game loop. It kind of works, but it crashes often and my games become unplayable.
Is there a way to implement threading in my game loop to achieve determinism without having to rewrite all games that depend on the engine?
You should separate game frames from graphical frames. The graphical frames should only display the graphics, nothing else. For the replay it won't matter how many graphical frames your computer was able to execute, be it 30 per second or 1000 per second; the replaying computer will likely replay it with a different graphical frame rate anyway.
But you should indeed fix the game frames, e.g. to 100 gameframes per second. In the gameframe the game logic is executed: the stuff that is relevant for your game (and the replay).
Your game loop should execute graphical frames whenever no game frame is due, so if you fix your game at 100 gameframes per second, that is 0.01 seconds per gameframe. If your computer needed only 0.001 seconds to execute the logic of that gameframe, the remaining 0.009 seconds are left for rendering graphical frames.
This is a small but incomplete and not 100% accurate example:
uint16_t const GAME_FRAMERATE = 100;
uint16_t const SKIP_TICKS = 1000 / GAME_FRAMERATE;

Timer sinceLoopStarted = Timer();   // Millisecond timer starting at 0
unsigned long next_game_tick = sinceLoopStarted.getMilliseconds();

while (gameIsRunning)
{
    //! Game Frames
    while (sinceLoopStarted.getMilliseconds() > next_game_tick)
    {
        executeGamelogic();
        next_game_tick += SKIP_TICKS;
    }

    //! Graphical Frames
    render();
}
The following link contains very good and complete information about creating an accurate gameloop:
http://www.koonsolo.com/news/dewitters-gameloop/
To be deterministic across a network, you need a single point of truth, commonly called "the server". There is a saying in the game community that goes "the client is in the hands of the enemy". That's true. You cannot trust anything that is calculated on the client for a fair game.
If, for example, your game gets easier when for some reason your thread only updates 59 times a second instead of 60, people will find out. Maybe at the start they won't even be malicious; they just had their machines under full load at the time and your process didn't get to run 60 times a second.
Once you have a server (maybe even in-process, as a thread in single player) that does not care about graphics or update cycles and runs at its own speed, it's deterministic enough to at least get the same results for all players. It might still not be 100% deterministic, because the computer is not real time: even if you tell it to update at some fixed frequency, it might not, due to other processes on the computer taking too much load.
The server and clients need to communicate, so the server needs to send a copy of its state (for performance, maybe a delta from the last copy) to each client. The client can draw this copy at the best speed available.
If your game is crashing with the thread, maybe it's an option to actually put "the server" out of process and communicate via the network. That way you will find out pretty fast which variables would have needed locks, because once you move them to another project your client will no longer compile.
Separate game logic and graphics into different threads. The game logic thread should run at a constant speed (say, updating 60 times per second, or even higher if your logic isn't too complicated, to achieve smoother gameplay). Then your graphics thread should always draw the latest info provided by the logic thread, as fast as possible, to achieve high framerates.
In order to prevent partial data from being drawn, you should probably use some sort of double buffering, where the logic thread writes to one buffer, and the graphics thread reads from the other. Then switch the buffers every time the logic thread has done one update.
This should make sure you're always using the computer's graphics hardware to its fullest. Of course, this does mean you're putting constraints on the minimum cpu speed.
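A sketch of that double-buffering scheme (GameState and the buffer class are placeholders; a real engine might avoid the copy in latest() or use atomics instead of a mutex):

#include <array>
#include <mutex>

struct GameState { /* positions, animations, ... */ };   // placeholder

class StateBuffers {
public:
    // The logic thread writes into this buffer while the renderer reads the other.
    GameState &writeBuffer() { return buffers[writeIndex]; }

    // Called by the logic thread after it has finished one full update.
    void publish() {
        std::lock_guard<std::mutex> lock(swapMutex);
        writeIndex = 1 - writeIndex;              // swap the roles of the two buffers
    }

    // Called by the render thread: copies out the latest completed state.
    GameState latest() {
        std::lock_guard<std::mutex> lock(swapMutex);
        return buffers[1 - writeIndex];
    }

private:
    std::array<GameState, 2> buffers;
    int writeIndex = 0;
    std::mutex swapMutex;
};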
I don't know if this will help, but if I remember correctly, Doom stored your input sequences and used them to generate the AI behaviour and some other things. A demo lump in Doom is a series of numbers representing not the state of the game but your input. From that input the game can reconstruct what happened and thus achieve some kind of determinism... though I remember it going out of sync sometimes.