I'm making a simple meteor-and-rocket game in the console, and I want to increase the spawn rate of the meteors every five seconds. I have already tried the Sleep() function, but that of course blocks the whole application; a bare while loop has the same problem.
I will only post the Logic() function where the increase must happen, because the program
is about 100 lines and I didn't want to paste it all here. If you need more context, just ask and I will post everything.
void Logic() {
Sleep(5000); // TODO Increase meteors every Five seconds
nMeteors++;
}
I'm pretty stuck on this so it would be nice if someone could help me :)
There are mainly two ways to approach this problem. One would be to spawn a new thread and put the loop there; you can use the C++11 standard headers <thread> and <chrono>. Putting the thread to sleep for 5 seconds is as simple as std::this_thread::sleep_for(std::chrono::seconds{5});
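A minimal sketch of that thread approach, assuming the counter is shared with the game loop (the std::atomic is my addition for that reason):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<int> nMeteors{0}; // shared with the game loop, so keep it atomic

void spawnRateIncreaser()
{
    while (true) // a real game would check a running/shutdown flag here
    {
        std::this_thread::sleep_for(std::chrono::seconds{5});
        ++nMeteors;
    }
}

// at startup:
// std::thread t{spawnRateIncreaser};
// t.detach(); // or keep it joinable and signal it to stop on exit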
But dedicating an entire thread to such a trivial task is unnecessary. In a video game you usually have some sort of time-keeping variable anyway.
What you'd want to do is have a variable like std::chrono::time_point<std::chrono::steady_clock> previous_time = std::chrono::steady_clock::now(); (or simply auto previous_time = std::chrono::steady_clock::now();) outside of your loop. That gives you a reference point for where you are in time while your loop runs. Inside the loop, take the current time with auto current_time = std::chrono::steady_clock::now();. Now it's a simple matter of computing the difference between current_time and previous_time and checking whether 5 seconds have passed. If they have, increase your variable and don't forget to set previous_time = current_time; to update the reference point; if not, skip it and keep doing whatever else your main game loop needs to do.
To check if 5 seconds have passed, you do if (std::chrono::duration_cast<std::chrono::seconds>(current_time - previous_time).count() >= 5) { ... }.
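Putting it together, a minimal sketch of such a loop might look like this (the loop body is a placeholder for the rest of your game logic):

#include <chrono>

void gameLoop()
{
    int nMeteors = 0;
    auto previous_time = std::chrono::steady_clock::now();
    while (true)
    {
        auto current_time = std::chrono::steady_clock::now();
        if (std::chrono::duration_cast<std::chrono::seconds>(
                current_time - previous_time).count() >= 5)
        {
            ++nMeteors;                   // 5 seconds have passed
            previous_time = current_time; // reset the reference point
        }
        // ... input handling, the rest of Logic(), drawing, etc. ...
    }
}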
You can find a lot more info in the reference documentation for <chrono> and for <thread> (cppreference covers both). Plus, Google is your friend.
The typical way to write a game is to have an event loop.
The event loop polls various inputs for status, updates the state of the game, and then repeats. Some clever event loops even sleep for short periods and get notifications when inputs change or state has to be updated.
In your meteor-spawning code, keep a timestamp of when the last increase in spawn rate occurred. When you check whether a meteor should spawn and find that 5 seconds have passed since that point, update the spawn rate and record a new timestamp (possibly retroactively, and possibly in a loop, so that e.g. 10 or more seconds passing between checks is still handled correctly), as in the sketch below.
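A rough sketch of that bookkeeping (the names spawnRate and lastIncrease are illustrative, not from the question):

#include <chrono>

// Bump the spawn rate once per elapsed 5-second window, catching up in a
// loop if checks happen to be far apart.
void updateSpawnRate(int& spawnRate,
                     std::chrono::steady_clock::time_point& lastIncrease)
{
    using namespace std::chrono;
    const auto now = steady_clock::now();
    while (now - lastIncrease >= seconds(5))
    {
        ++spawnRate;
        lastIncrease += seconds(5); // retroactive update keeps the schedule exact
    }
}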
An alternative solution involving an extra thread of execution is possible, but not a good idea.
As an aside, most games want to support pausing; so you want to distinguish between wall-clock time and nominal game-play time.
One way you can do this is by making your value a function of elapsed time. For example:
#include <chrono>   // std::chrono::seconds
#include <ctime>    // std::time, std::difftime
#include <iostream> // std::cout
#include <thread>   // std::this_thread::sleep_for

// somewhere to store the beginning of the
// time period.
inline std::time_t& get_start_timer()
{
static std::time_t t{};
return t;
}
// Start a time period (resets meteors to zero)
inline void start_timer()
{
get_start_timer() = std::time(nullptr); // current time in seconds
}
// retrieve the current number of meteors
// as a function of time.
inline int nMeteors()
{
return int(std::difftime(std::time(nullptr), get_start_timer())) / 5;
}
int main()
{
start_timer();
for(;;)
{
std::this_thread::sleep_for(std::chrono::seconds(1));
std::cout << "meteors: " << nMeteors() << '\n';
}
}
Here is a similar version using the C++11 <chrono> library (note that the auto return-type deduction requires C++14):
#include <chrono>   // std::chrono::steady_clock
#include <iostream> // std::cout
#include <thread>   // std::this_thread::sleep_for

// somewhere to store the beginning of the
// time period.
inline auto& get_time_point()
{
static std::chrono::steady_clock::time_point tp{};
return tp;
}
// Start a time period (resets meteors to zero)
inline void start_timing()
{
get_time_point() = std::chrono::steady_clock::now(); // remember the starting time point
}
// retrieve the current number of meteors
// as a function of time.
inline auto nMeteors()
{
return std::chrono::duration_cast<std::chrono::seconds>(std::chrono::steady_clock::now() - get_time_point()).count() / 5;
}
int main()
{
start_timing();
for(;;)
{
std::this_thread::sleep_for(std::chrono::seconds(1));
std::cout << "meteors: " << nMeteors() << '\n';
}
}
I found this easier than using chrono. Open to feedback:
include "time.h"
main(){
int d;
time_t s,e;
time(&s);
time(&e);
d=e-s;
while(d<5){
cout<<d;
time(&e);
d=e-s;
}
}
What is the best way in C++11 to implement a high-resolution timer that continuously checks the time in a loop and executes some code once a certain point in time has passed? For example, check the time in a loop from 9am onwards and execute some code exactly at 11am. I require the timing to be precise (i.e. no more than 1 microsecond after the target time).
I will be implementing this program on Linux CentOS 7.3, and I have no issue with dedicating CPU resources to this task.
Instead of implementing this manually, you could use e.g. a systemd.timer. Make sure to specify the desired accuracy which can apparently be as precise as 1us.
a high-resolution timer that continuously checks for time in a loop,
First of all, you do not want to continuously check the time in a loop; that's extremely inefficient and simply unnecessary.
...executes some code after it passes a certain point in time?
Ok so you want to run some code at a given time in the future, as accurately as possible.
The simplest way is to simply start a background thread, compute how long until the target time (in the desired resolution) and then put the thread to sleep for that time period. When your thread wakes up, it executes the actual task. This should be accurate enough for the vast majority of needs.
The std::chrono library provides calls which make this easy:
System clock in std::chrono
High resolution clock in std::chrono
Here's a snippet of code which does what you want using the system clock (which makes it easier to set a wall clock time):
// c++ --std=c++11 ans.cpp -o ans -pthread
#include <chrono>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <thread>
// do some busy work
int work(int count)
{
int sum = 0;
for (int i = 0; i < count; i++)
{
sum += i;
}
return sum;
}
std::chrono::system_clock::time_point make_scheduled_time (int yyyy, int mm, int dd, int HH, int MM, int SS)
{
tm datetime = tm{};
datetime.tm_year = yyyy - 1900; // Year since 1900
datetime.tm_mon = mm - 1; // Month since January
datetime.tm_mday = dd; // Day of the month [1-31]
datetime.tm_hour = HH; // Hour of the day [00-23]
datetime.tm_min = MM;
datetime.tm_sec = SS;
time_t ttime_t = mktime(&datetime);
std::chrono::system_clock::time_point scheduled = std::chrono::system_clock::from_time_t(ttime_t);
return scheduled;
}
void do_work_at_scheduled_time()
{
using period = std::chrono::system_clock::period;
auto sched_start = make_scheduled_time(2019, 9, 17, // date
00, 14, 00); // time
// Wait until the scheduled time to actually do the work
std::this_thread::sleep_until(sched_start);
// Figure out how close to the scheduled time we actually awoke
auto actual_start = std::chrono::system_clock::now();
auto start_delta = actual_start - sched_start;
float delta_ms = float(start_delta.count())*period::num/period::den * 1e3f;
std::cout << "worker: awoken within " << delta_ms << " ms" << std::endl;
// Now do some actual work!
int sum = work(12345);
}
int main()
{
std::thread worker(do_work_at_scheduled_time);
worker.join();
return 0;
}
On my laptop, the typical latency is about 2-3ms. If you use the high_resolution_clock you should be able to get even better results.
There are other APIs you could use too, such as Boost.Asio, which provides high-resolution timers.
I require the timing to be precise (i.e. no more than 1 microsecond after 9am).
Do you really need it to be accurate to the microsecond? Consider that at this resolution, you will also need to take into account all sorts of other factors, including system load, latency, clock jitter, and so on. Your code can start to execute at close to that time, but that's only part of the problem.
My suggestion would be to use timer_create(). This allows you to get notified by a signal at a given time. You can then implement your action in the signal handler.
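A minimal sketch of that approach (Linux-specific; POSIX timers, so older glibc needs -lrt; the handler and flag names here are my own):

#include <iostream>
#include <signal.h> // sigevent, SIGRTMIN
#include <time.h>   // timer_create, timer_settime
#include <unistd.h> // pause

static volatile sig_atomic_t g_fired = 0;

void on_timer(int) { g_fired = 1; }

int main()
{
    signal(SIGRTMIN, on_timer);

    // Ask for SIGRTMIN to be delivered when the timer expires.
    sigevent sev{};
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGRTMIN;
    timer_t timerid;
    timer_create(CLOCK_MONOTONIC, &sev, &timerid);

    // One-shot timer: fire 2 seconds from now.
    itimerspec its{};
    its.it_value.tv_sec = 2;
    timer_settime(timerid, 0, &its, nullptr);

    while (!g_fired)
        pause(); // sleep until a signal arrives

    std::cout << "timer fired\n";
}

In production code you'd use sigaction rather than signal and keep the handler minimal, but the structure is the same.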
In any case you should be aware that the accuracy of course depends on the system clock accuracy.
I'm trying to implement a MIDI-like clocked sample player.
There is a timer which increments a pulse counter, and every 480 pulses make a quarter note, so the pulse period is 1041667 ns at 120 beats per minute.
The timer is not sleep-based and runs in a separate thread, but it seems like the delay time is inconsistent: the period between samples played in a test file fluctuates by +-20 ms (sometimes the period is fine and steady; I can't pin down what this effect depends on).
Audio backend influence is excluded: I've tried OpenAL as well as SDL_mixer.
void Timer_class::sleep_ns(uint64_t ns){
auto start = std::chrono::high_resolution_clock::now();
bool sleep = true;
while(sleep)
{
auto now = std::chrono::high_resolution_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(now - start);
if (elapsed.count() >= ns) {
TestTime = elapsed.count();
sleep = false;
//break;
}
}
}
void Timer_class::Runner(void){
// this running as thread
while(1){
sleep_ns(BPMns);
if (Run) Transport.IncPlaybackMarker(); // marker increment
if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()){ // check if timer have reached end, which is 480 pulses
Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
Player.PlayFile(1); // period of this event fluctuates severely
}
}
};
void Player_class::PlayFile(int FileNumber){
#ifdef AUDIO_SDL_MIXER
if(Mix_PlayChannel(-1, WaveData[FileNumber], 0)==-1) {
printf("Mix_PlayChannel: %s\n",Mix_GetError());
}
#endif // AUDIO_SDL_MIXER
}
Am I doing something wrong in terms of approach? Is there a better way to implement a timer of this kind?
Deviation higher than 4-5 ms is too much for audio.
I see a large error and a small error. The large error is that your code assumes that the main processing in Runner consistently takes zero time:
if (Run) Transport.IncPlaybackMarker(); // marker increment
if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()){ // check if timer have reached end, which is 480 pulses
Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
Player.PlayFile(1); // period of this event fluctuates severely
}
That is, you're "sleeping" for the time you want your loop iteration to take, and then you're doing processing on top of that.
The small error is presuming that you can represent your ideal loop iteration time with an integral number of nanoseconds. This error is so small that it doesn't really matter. However I amuse myself by showing people how they can get rid of this error too. :-)
First lets correct the small error by exactly representing the idealized loop iteration time:
#include <chrono>
#include <cstdint>
#include <ratio>

using quarterPeriod = std::ratio<1, 2>; // 1/2 second per quarter note at 120 BPM
using iterationPeriod = std::ratio_divide<quarterPeriod, std::ratio<480>>;
using iteration_time = std::chrono::duration<std::int64_t, iterationPeriod>;
I know nothing of music, but I'm guessing the above code is right because if you convert iteration_time{1} to nanoseconds, you get approximately 1041667ns. iteration_time{1} is intended to be the precise amount of time you want each iteration of your loop in Timer_class::Runner to take.
To correct the large error, you need to sleep until a time_point, as opposed to sleeping for a duration. Here's a generic utility to help you do that:
template <class Clock, class Duration>
void
delay_until(std::chrono::time_point<Clock, Duration> tp)
{
while (Clock::now() < tp)
;
}
Now if you code Timer_class::Runner to use delay_until instead of sleep_ns, I think you'll get better results:
void
Timer_class::Runner()
{
auto next_start = std::chrono::steady_clock::now() + iteration_time{1};
while (true)
{
if (Run) Transport.IncPlaybackMarker(); // marker increment
if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()){ // check if timer have reached end, which is 480 pulses
Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
Player.PlayFile(1);
}
delay_until(next_start);
next_start += iteration_time{1};
}
}
I ended up using Howard Hinnant's version of the delay, and reducing the buffer size in openal-soft; that's what made a huge difference. Fluctuation is now about +-5 ms for 1/16th notes at 120 BPM (125 ms period) and +-1 ms for quarter notes. Leaves a lot to be desired, but I guess it's okay.
This is my code using QueryPerformanceCounter as the timer.
//timer.h
class timer {
private:
...
public:
...
double get(); //returns elapsed time in seconds
void start();
};
//a.cpp
void loop() {
timer t;
double tick;
double diff; //surplus seconds
t.start();
while( running ) {
tick = t.get();
if( tick >= 1.0 - diff ) {
t.start();
//things that should be run exactly every second
...
}
Sleep( 880 );
}
}
Without Sleep this loop would spin indefinitely, calling t.get() every iteration, which causes high CPU usage. For that reason I make it sleep for about 880 milliseconds, so that it doesn't call t.get() when it isn't necessary.
As I said above, I'm currently using Sleep to do the trick, but what I'm worried about is the accuracy of Sleep. I've read somewhere that the actual time the program pauses may vary by 20 to 50 ms, which is the reason I set the parameter to 880. I want to reduce the CPU usage as much as possible; I want to, if possible, pause more than 990 milliseconds EDIT: and yet less than 1000 milliseconds between every loop. What would be the best way to go?
I don't get why you are calling t.start() twice (it resets the clock?), but I would like to propose a kind of solution to the Sleep inaccuracy. Let's look at the content of the while( running ) loop and follow the algorithm:
double future, remaining, sleep_precision = 0.05;
while (running) {
future = t.get() + 1.0;
things_that_should_be_run_exactly_every_second();
// the loop in case of spurious wakeup
for (;;) {
remaining = future - t.get();
if (remaining < sleep_precision) break;
Sleep(static_cast<DWORD>((remaining - sleep_precision) * 1000)); // t.get() is in seconds; Sleep takes milliseconds
}
// next, do the spin-lock for at most sleep_precision
while (t.get() < future);
}
The value of sleep_precision should be set empirically - OSes I know can't give you that.
Next, there are some alternatives to the sleeping mechanism that may better suit your needs: Is there an alternative for sleep() in C?
If you want to pause more than 990 milliseconds, write a sleep for 991 milliseconds. Your thread is guaranteed to be asleep for at least that long. It won't be less, but it could be multiples of 20-50 ms more (depending on the resolution of your OS's time slicing, and on the cost of context switching).
However, this will not give you something running "exactly every second". There is just no way to achieve that on a time-shared operating system. You'll have to program closer to the metal, or rely on an interrupt from a PPS source and just pray your OS lets you run your entire loop iteration in one shot. Or, I suppose, write something to run in kernel mode…?
I've implemented code to call a service API every 10 seconds using a C++ client. Most of the time I've noticed it is around 10 seconds, but occasionally I see an issue like the one below where it takes longer. I'm using a condition variable with wait_until. What's wrong with my implementation? Any ideas?
Here's the timing output:
currentDateTime()=2015-12-21.15:13:21
currentDateTime()=2015-12-21.15:13:57
And the code:
void client::runHeartbeat() {
std::unique_lock<std::mutex> locker(lock);
for (;;) {
// check the current time
auto now = std::chrono::system_clock::now();
/* Set a condition on the conditional variable to wake up the this thread.
This thread is woken up on 2 conditions:
1. After a timeout of now + interval when we want to send the next heartbeat
2. When the client is destroyed.
*/
shutdownHeartbeat.wait_until(locker, now + std::chrono::milliseconds(sleepMillis));
// After waking up we want to check if a sign-out has occurred.
if (m_heartbeatRunning) {
std::cout << "currentDateTime()=" << currentDateTime() << std::endl;
SendHeartbeat();
}
else {
break;
}
}
}
You might want to consider using the high_resolution_clock for your needs. system_clock is not guaranteed to have high resolution, so that may be part of the problem.
Note that its definition is implementation-dependent, so you might just get a typedef back onto system_clock on some compilers.
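One more thing worth noting about the question's code: the deadline is recomputed as now + interval on every iteration, so each wake-up's latency is added to the next period. A standalone sketch that pins the waits to a fixed schedule on steady_clock instead (here the condition variable is never notified; it only provides the timed wait):

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>

int main()
{
    std::mutex m;
    std::condition_variable cv;
    std::unique_lock<std::mutex> lk(m);

    const auto interval = std::chrono::seconds(10);
    auto next = std::chrono::steady_clock::now() + interval;
    for (int i = 0; i < 5; ++i)
    {
        // Re-wait on spurious wakeups; real code would also check a shutdown flag.
        while (cv.wait_until(lk, next) != std::cv_status::timeout)
            ;
        std::cout << "heartbeat " << i << '\n';
        next += interval; // advance the schedule; latency doesn't accumulate
    }
}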
I am relatively new to C++, so I don't have a huge amount of experience. I learned Python first, and I am trying to make an improved C++ version of a Python program I wrote. However, I want it to work in real time, so I need to set the speed of a while loop. I'm sure there is an answer, but I couldn't find it. I want comparable code to this:
rate(timeModifier * (1/dt))
This was the code I used in Python. I can set a variable dt to make calculations more precise, and timeModifier to double or triple the speed (1 sets it to real time). This means the program goes through the loop 1/dt times per second. I understand I can include time.h at the top, but I am too new to C++ to know how to adapt this to my needs.
You could write your own timer class:
#include <ctime>
class Timer {
private:
clock_t startTime;
public:
void start() {
startTime = clock();
}
unsigned long elapsedTime() {
return (unsigned long)((clock() - startTime) / CLOCKS_PER_SEC);
}
bool isTimeout(unsigned long seconds) {
return elapsedTime() >= seconds;
}
};
int main()
{
unsigned long dt = 10; //in seconds
Timer t;
t.start();
while(true)
{
if(t.elapsedTime() < dt)
{
//do something to pass time as a busy-wait or sleep
}
else
{
//do something else
t.start(); //reset the timer
}
}
}
Note that busy-waits are discouraged, since they hog the CPU. If you don't need to do anything in the waiting branch, use the Sleep function (Windows) or usleep (Linux). A C++11 alternative without the busy-wait is sketched below.
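If C++11 is available, here is a sketch of the same fixed-rate idea, using sleep_until so the period doesn't drift (the 100 ms dt is just an example):

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;
    const auto dt = std::chrono::milliseconds(100); // 1/dt = 10 steps per second
    auto next = clock::now() + dt;
    while (true)
    {
        // ... one step of the simulation ...
        std::cout << "tick\n";
        std::this_thread::sleep_until(next); // sleeps instead of hogging the CPU
        next += dt;                          // fixed schedule, no drift
    }
}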
You can't do it in the same manner in C++. You need to manually call some kind of sleep function in the calculation loop: Sleep on Windows, or usleep on *NIX.
It's been a while since I've done something like this, but the following should work:
#include <time.h>
time_t t2, t1 = time(NULL);
while(CONDITIONS)
{
t2 = time(NULL);
if(difftime(t2, t1) > timeModifier)
{
//DO the stuff!
t1 = time(NULL);
}
}
I should note, however, that this method only measures differences in whole seconds, since time() has one-second resolution.
If you need something more precise, use the clock() function, which returns the processor time consumed by the program in ticks, with CLOCKS_PER_SEC ticks per second; its resolution is typically much finer than one second.
Perhaps something like this:
#include <time.h>
clock_t t2, t1 = clock();
while(CONDITIONS)
{
t2 = clock();
if((t2-t1) > someTimeElapsed*timeModifier)
{
//DO the stuff!
t1 = clock();
}
}
Update:
You can even yield the CPU to other threads and processes by adding this after the end of the if statement:
else
{
usleep(10000); //sleep for ten milliseconds (chosen because of precision on clock())
}
Depending on the accuracy you need and your platform, you could use usleep. This allows you to set the pause time down to microseconds:
#include <unistd.h>
int usleep(useconds_t useconds);
Remember that your loop will always take longer than this because of the inherent processing time of the rest of the loop, but it's a start. For anything more accurate, you'd probably need to look at timer-based callbacks.
You should really create a new thread and have it do the timing so that it remains unaffected by the processing work done in the loop.
WARNING: Pseudo code... just to give you an idea of how to start.
Thread* tThread = CreateTimerThread(1000);
tThread->run();
while( conditionNotMet() )
{
tThread->waitForTimer();
doWork();
}
CreateTimerThread() should return the thread object you want, and run would be something like:
run()
{
while( false == shutdownLatch() )
{
Sleep( timeout );
pulseTimerEvent();
}
}
waitForTimer()
{
WaitForSingleObject( m_handle );
return;
}
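A concrete C++11 sketch of that pseudo code, using std::thread and a condition variable in place of WaitForSingleObject (all names here are illustrative):

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

class TimerThread {
    std::mutex m;
    std::condition_variable cv;
    bool ticked = false;
    bool shutdown = false;
    std::thread worker; // declared last so the other members exist first
public:
    explicit TimerThread(std::chrono::milliseconds period)
        : worker([this, period] {
              while (true) {
                  std::this_thread::sleep_for(period);
                  std::lock_guard<std::mutex> lk(m);
                  if (shutdown) return;
                  ticked = true;
                  cv.notify_one(); // the pulseTimerEvent() of the pseudo code
              }
          }) {}

    void waitForTimer() { // blocks until the next tick
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return ticked; });
        ticked = false;
    }

    ~TimerThread() {
        { std::lock_guard<std::mutex> lk(m); shutdown = true; }
        worker.join(); // returns after at most one more period
    }
};

Usage then mirrors the pseudo code: construct TimerThread t{std::chrono::milliseconds(1000)}; and call t.waitForTimer(); before each doWork().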
Under Windows you can use QueryPerformanceCounter; while polling the time (e.g. within another while loop), call Sleep(0) to allow other threads to continue operation.
Remember that Sleep is highly inaccurate. For full control just run a loop without operations, though you'll use 100% of a CPU core; to relax the strain on the CPU you can call Sleep(10), etc.