So I'm pretty new to programming from scratch; I've mainly used Unity for a few years up until now, so my general programming knowledge is pretty good. I started studying game development at a university after summer, though, where we began programming from scratch, and as a task we have to make a simple game in a 2D engine we made together in class.
So the game I decided to make was a copy of Bomberman, and I've gotten as far as making the bombs functional.
The problem I'm having is that I don't know how to properly add a timer that counts down to the moment the bomb explodes, so the player can avoid it.
I've tried SDL_Delay and _sleep, which both just pause the entire program, so that doesn't work. I've searched around for more options but haven't really understood how they work. If I could get some examples and links to pages that explain how to properly make something like this work (something easy and small hopefully :P), that would be highly appreciated!
Note that we are using SDL in the engine.
Thanks!
Typically, a game uses a loop, in which you read user input (you are probably using SDL_PollEvent for that), advance the game state for a short time period and draw the screen. This loop is typically called the game loop, render loop or main loop.
A simple, accurate and typical way to delay an event (such as a timed explosion) is to store the future time in a queue. Then, each time the game state advances, check the first (and therefore oldest) timestamp in the queue; if the current time is higher than the stored one, you know the event should now happen, so call the function that executes it without delay. Then remove the timestamp from the queue and check the next one, until only future events remain or the queue is empty.
If the event delay can vary, then you'll need to use a priority queue to always get the event that should fire next.
skypjack points out in the comments that this approach is problematic if you need to implement pausing the game. That can be solved by not measuring the wall clock, but instead using a separate simulation time that drifts from the wall clock while the game is paused. They also propose a simpler solution:
store a timeToEvent (to be elapsed) and decrement it, so that you detach the game time from the real one. Once it's <= 0, it's its time.
That approach is simpler, but has more overhead for checking the expiration of deadlines.
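To make the queue approach concrete, here's a minimal sketch using SDL_GetTicks for the current time (the TimedEvent type, the 3-second fuse and the function names are mine, purely for illustration):

#include <SDL.h>
#include <functional>
#include <queue>
#include <vector>

// A scheduled event: run `action` once the time `when` has passed.
struct TimedEvent {
    Uint32 when;                    // absolute time in ms, SDL_GetTicks scale
    std::function<void()> action;   // e.g. spawn the explosion
};

// Comparator so the earliest deadline sits on top of the queue.
struct Later {
    bool operator()(const TimedEvent& a, const TimedEvent& b) const {
        return a.when > b.when;
    }
};

std::priority_queue<TimedEvent, std::vector<TimedEvent>, Later> pending;

// Called when a bomb is placed: schedule the explosion 3 seconds from now.
void placeBomb() {
    pending.push({SDL_GetTicks() + 3000, [] { /* explode here */ }});
}

// Called once per game-loop iteration: fire everything that is due.
void fireDueEvents() {
    const Uint32 now = SDL_GetTicks();
    while (!pending.empty() && pending.top().when <= now) {
        pending.top().action();
        pending.pop();
    }
}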
Related
I'm writing a little game in C++ atm.
My game's while loop is always active, and in this loop
I have a condition for when the player is shooting.
Now I face the following problem:
After every shot fired there is a delay; this delay changes over time, and during the delay the player should still be able to move.
shoot
move
wait 700 ms
shoot again
Atm I'm using Sleep(700). The problem is that nothing can move during those 700 ms. I need something like a timer, so that movement keeps being executed during the 700 ms instead of the whole loop waiting for 700 ms.
This depends on how your hypothetical 'sleep' is implemented. There are a few things you should know, as this can be solved in a few ways.
You don't want to put your thread to sleep, because then everything halts, which is not what you want.
Plus, you may get more time than sleep promises. For example, if you sleep for 700 ms you may get more than that, which means that if you depend on accurate times you may get burned by this.
1) The first way would be to record the raw time inside the player. This is not the best approach, but it would work for a simple toy program: record the result of std::chrono::high_resolution_clock::now() (see #include <chrono> or the reference documentation) inside the class at the time you fire. To check whether you can fire again, compare the stored value against ...::now() and see whether 700 ms have elapsed. You will have to read the documentation to work with the result in milliseconds.
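A minimal sketch of that idea (class and member names are mine; I've used steady_clock rather than high_resolution_clock, since it's guaranteed not to jump backwards):

#include <chrono>

class Player {
    std::chrono::steady_clock::time_point lastShot{};  // time of the last shot
public:
    bool canFire() const {
        using namespace std::chrono;
        return steady_clock::now() - lastShot >= milliseconds(700);
    }
    void fire() {
        if (!canFire()) return;
        lastShot = std::chrono::steady_clock::now();
        // ...spawn the projectile here...
    }
};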
2) A better way would be to give your game a pulse via something called 'game ticks', which is the pulse to which your world moves forward. Then you can store the gametick that you fired on and do something similar to the above paragraph (except now you are just checking if currentGametick > lastFiredGametick + gametickUntilFiring).
For the gametick idea, you would make sure you do gametick++ every X milliseconds, and then run your world. A common value is somewhere between 10ms and 50ms.
Your game loop would then look like
while (!exit) {
    readInput();                     // poll input every iteration
    if (ticker.shouldTick()) {       // has enough real time passed for a tick?
        ticker.tick();               // advance the tick counter
        world.tick(ticker.gametick); // run one step of game logic
    }
    render();                        // draw as often as possible
}
The above has the following advantages:
You only update the world every gametick
You keep rendering between gameticks, so you can have smooth animations since you will be rendering at a very high framerate
If you want to halt, just spin in a while loop until the amount of time has elapsed
Now, this glosses over a significant amount of discussion, which you should definitely read up on if you are thinking of going the gametick route.
Whichever route you take, you probably need to read up on game timing anyway.
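For completeness, here's one possible shape for the ticker object used in the loop above (my own sketch using std::chrono; the 16 ms default is roughly 60 ticks per second):

#include <chrono>

// One possible ticker for the loop above.
struct Ticker {
    std::chrono::steady_clock::time_point nextTick;
    std::chrono::milliseconds interval;
    unsigned long gametick = 0;

    explicit Ticker(int ms = 16)
        : nextTick(std::chrono::steady_clock::now() + std::chrono::milliseconds(ms)),
          interval(ms) {}

    bool shouldTick() const { return std::chrono::steady_clock::now() >= nextTick; }

    void tick() {
        nextTick += interval;  // schedule from the previous deadline to avoid drift
        ++gametick;
    }
};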
If you've ever used XNA Game Studio 4, you are familiar with the Update method. By default, the code within it is processed 60 times per second. I have been struggling to recreate such an effect in C++.
I would like to create a method that only processes its code x times per second. Every way I've tried, it processes all at once, as loops do. I've tried for loops, while loops, goto, and everything processes all at once.
If anyone could please tell me how, and whether, I can achieve such a thing in C++, it would be much appreciated.
With your current level of knowledge this is as specific as I can get:
You can't do what you want with loops, fors, ifs and gotos, because we are no longer in the MS-DOS era.
You also can't have code running at precisely 60 frames per second.
On Windows a system application runs within something called an "event loop".
Typically, from within the event loop, most GUI frameworks call the "onIdle" event, which happens when an application is doing nothing.
You call update from within the onIdle event.
Your onIdle() function will look like this:
void onIdle() {
    currentFrameTime = getCurrentFrameTime();
    if ((currentFrameTime - lastFrameTime) < minUpdateDelay) {
        // Sleep for a small amount of time, using Sleep or anything similar.
        // The delay should be much smaller than minUpdateDelay.
        // Doing this reduces CPU load.
        sleepForSmallAmountOfTime();
        return;
    }
    update(currentFrameTime - lastFrameTime);
    lastFrameTime = currentFrameTime;
}
You will need to write your own update function; it should take the amount of time passed since the last frame. You also need to write a getCurrentFrameTime() function using GetTickCount, QueryPerformanceCounter, or some similar function.
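If you'd rather stay portable, something along these lines would also do (a sketch using std::chrono instead of the Windows APIs named above):

#include <chrono>

// Milliseconds elapsed since the first call; a portable stand-in for GetTickCount.
long long getCurrentFrameTime() {
    using namespace std::chrono;
    static const steady_clock::time_point start = steady_clock::now();
    return duration_cast<milliseconds>(steady_clock::now() - start).count();
}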
Alternatively, you could use system timers, but compared to the onIdle() event that is a bad idea if your app runs too slowly.
In short, there's a long road ahead of you.
You need to learn some (preferably cross-platform) GUI framework, learn how to create a window and the concept of an event loop (you can't do anything without it today), then write your own update() and get a basic idea of multithreaded programming and system events.
Good luck.
As you are familiar with XNA, I assume you are also familiar with Input and Draw. What you could do is assign independent threads to these three functions and have a timer to see if it's time to run a thread.
E.g. the input would probably trigger draw, and both draw and input would trigger the update method.
Another way to handle this is via message events. If you're using Windows, then look into the Windows message loop. This will make your input, update and draw events easier by executing them on events triggered by the OS.
So I was reading this article which contains 'Tips and Advice for Multithreaded Programming in SDL' - https://vilimpoc.org/research/portmonitorg/sdl-tips-and-tricks.html
It talks about SDL_PollEvent being inefficient as it can cause excessive CPU usage and so recommends using SDL_WaitEvent instead.
It shows an example of both loops, but I can't see how the latter would work with a game loop. Is it the case that SDL_WaitEvent should only be used by things which don't require constant updates, i.e. if you had a game running you would perform game logic each frame?
The only thing I can think it could be used for is a program like a paint program, where action is only required on user input.
Am I correct in thinking I should continue to use SDL_PollEvent for generic game programming?
If your game only updates/repaints on user input, then you could use SDL_WaitEvent. However, most games have animation/physics going on even when there is no user input. So I think SDL_PollEvent would be best for most games.
One case in which SDL_WaitEvent might be useful is if you have it in one thread and your animation/logic on another thread. That way even if SDL_WaitEvent waits for a long time, your game will continue painting/updating. (EDIT: This may not actually work. See Henrik's comment below)
As for SDL_PollEvent using 100% CPU, as the article indicated, you could mitigate that by adding a sleep in your loop when you detect that your game is running at more than the required frames per second.
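A minimal sketch of that mitigation (the 16 ms frame budget, i.e. roughly 60 FPS, is my own choice for illustration):

#include <SDL.h>

int main(int, char**) {
    SDL_Init(SDL_INIT_VIDEO);
    const Uint32 TARGET_FRAME_MS = 16;  // ~60 FPS

    bool running = true;
    while (running) {
        Uint32 frameStart = SDL_GetTicks();

        SDL_Event e;
        while (SDL_PollEvent(&e)) {     // drain all pending events
            if (e.type == SDL_QUIT)
                running = false;
        }

        // updateGame(); renderGame(); // per-frame work goes here

        Uint32 elapsed = SDL_GetTicks() - frameStart;
        if (elapsed < TARGET_FRAME_MS)
            SDL_Delay(TARGET_FRAME_MS - elapsed);  // hand the CPU back to the OS
    }

    SDL_Quit();
    return 0;
}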
If you don't need sub-frame precision in your input, and your game is constantly animating, then SDL_PollEvent is appropriate.
Sub-frame precision can be important, e.g. for games where the player might want very small increments in movement. Quickly tapping and releasing a key has unpredictable behavior if you use the classic lazy method of keydown meaning "velocity = 1" and keyup meaning "velocity = 0" and then only update position once per frame. If your tap happens to overlap with the frame render, you get one frame-duration of movement; if it does not, you get no movement. What you really want is an amount of movement smaller than the length of a frame, based on the timestamps at which the events occurred.
Unfortunately, SDL's events don't include the actual event timestamps from the operating system, only the timestamp of the SDL_PumpEvents call, and SDL_WaitEvent effectively polls at 10 ms intervals. So even with SDL_WaitEvent running in a separate thread, the most precision you'll get is 10 ms (you could perhaps approximate smaller by saying that if you get a keydown and a keyup in the same poll cycle, then it's ~5 ms).
So if you really want precision timing on your input, you might actually need to write your own version of SDL_WaitEventTimeout with a smaller SDL_Delay, and run that in a separate thread from your main game loop.
Further unfortunately, SDL_PumpEvents must be run on the thread that initialized the video subsystem (per https://wiki.libsdl.org/SDL_PumpEvents ), so the whole idea of running your input loop on another thread to get sub-frame timing is nixed by the SDL framework.
In conclusion, for SDL applications with animation there is no reason to use anything other than SDL_PollEvent. The best you can do for sub-framerate input precision is this: if you have time to burn between frames, you have the option of being precise during that time, but then each frame has a render-duration window in which your input loses precision, so you end up with a different kind of inconsistency.
In general, you should use SDL_WaitEvent rather than SDL_PollEvent to release the CPU to the operating system to handle other tasks, like processing user input. Failing to do so will manifest to your users as sluggish reaction to input, since busy-polling can cause a delay between when they enter a command and when your application processes the event. By using SDL_WaitEvent instead, the OS can post events to your application more quickly, which improves the perceived performance.
As a side benefit, users on battery-powered systems, like laptops and portable devices, should see slightly less battery usage, since the OS has the opportunity to reduce overall CPU usage: your game isn't using the CPU 100% of the time, only when an event actually occurs.
This is a very late response, I know. But this is the thread that tops a Google search on this, so it seems the place to add an alternative suggestion to dealing with this that some might find useful.
You could write your code using SDL_WaitEvent, so that, when your application is not actively animating anything, it'll block and hand the CPU back to the OS.
But then you can send a user-defined message to the queue from another thread (e.g. the game logic thread) to wake up the main rendering thread with that message. It then goes through the loop to render a frame, swaps buffers and returns to SDL_WaitEvent again, where another of these user-defined messages can be waiting to be picked up, telling it to loop once more.
This sort of structure might be good for an application (or game) where there's a "burst" of animation, but otherwise it's best for it to block and go idle (and save battery on laptops).
For example, a GUI where it animates when you open or close or move windows or hover over buttons, but it's otherwise static content most of the time.
(Or, for a game, though it's animating all the time in-game, it might not need to do that for the pause screen or the game menus. So, you could send the "SDL_ANIMATEEVENT" user-defined message during gameplay, but then, in the game menus and pause screen, just wait for mouse / keyboard events and actually allow the CPU to idle and cool down.)
Indeed, you could have self-triggering animation events: the rendering thread is woken up by an "SDL_ANIMATEEVENT" and one more frame of animation is done. But because the animation is not complete, the rendering thread itself posts an "SDL_ANIMATEEVENT" to its own queue, which will trigger it to wake up again when it reaches SDL_WaitEvent.
And another idea there is that SDL events can carry data too. So you could supply, say, an animation ID in "data1" and a "current frame" counter in "data2" with the event. So that when the thread picks up the "SDL_ANIMATEEVENT", the event itself tells it which animation to do and what frame we're currently on.
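A rough sketch of how such a user-defined event might be registered and posted with SDL (the "SDL_ANIMATEEVENT" name comes from the answer above; here I call it ANIMATE_EVENT, and the payload encoding and end condition are my own illustration):

#include <SDL.h>
#include <cstdint>

Uint32 ANIMATE_EVENT;  // our user-defined event type, registered at startup

// Post an animation request to the queue; this wakes up SDL_WaitEvent.
void pushAnimateEvent(int animationId, int currentFrame) {
    SDL_Event e;
    SDL_zero(e);
    e.type = ANIMATE_EVENT;
    e.user.data1 = reinterpret_cast<void*>(static_cast<intptr_t>(animationId));
    e.user.data2 = reinterpret_cast<void*>(static_cast<intptr_t>(currentFrame));
    SDL_PushEvent(&e);
}

int main(int, char**) {
    SDL_Init(SDL_INIT_VIDEO);
    ANIMATE_EVENT = SDL_RegisterEvents(1);   // reserve one custom event type

    SDL_Event e;
    while (SDL_WaitEvent(&e)) {              // blocks: ~0% CPU while idle
        if (e.type == SDL_QUIT) break;
        if (e.type == ANIMATE_EVENT) {
            int id    = static_cast<int>(reinterpret_cast<intptr_t>(e.user.data1));
            int frame = static_cast<int>(reinterpret_cast<intptr_t>(e.user.data2));
            // ...render frame `frame` of animation `id` here...
            bool finished = frame >= 60;     // placeholder end condition
            if (!finished)
                pushAnimateEvent(id, frame + 1);  // self-trigger the next frame
        }
    }
    SDL_Quit();
    return 0;
}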
This is a "best of both worlds" solution, I feel. It can behave like SDL_WaitEvent or SDL_PollEvent at the application's discretion by just sending messages to itself.
For a game, this might not be worth it, as you're updating frames constantly, so there's no big advantage (though even games could benefit from dropping to 0% CPU usage in the pause screen or in-game menus, to let the CPU cool down and use less laptop battery).
But for something like a GUI - which has more "burst-y" animation - then a mouse event can trigger an animation (e.g. opening a new window, which zooms or slides into view) that sends "SDL_ANIMATEEVENT" back to the queue. And it keeps doing that until the animation is complete, then falls back to normal SDL_WaitEvent behaviour again.
It's an idea that might fit what some people need, so I thought I'd float it here for general consumption.
You could initialise SDL and the window in the main thread, then create two more threads: one for updates (which just updates game state and variables as time passes) and one for rendering (which renders the surfaces accordingly).
Then, after all that is done, use SDL_WaitEvent in your main thread to manage SDL_Events. This way you ensure that events are managed in the same thread that called SDL_Init.
I have been using this method for a long time to make my games work on Windows and Linux, and have been able to successfully run the three threads at the same time as mentioned above.
I had to use a mutex to make sure that textures/surfaces can also be transformed/changed in the update thread, by pausing the render thread; the lock is only taken once every 60 frames, so it's not going to cause major performance issues.
This model works best for creating event-driven games, real-time games, or both.
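A bare-bones sketch of that layout (the thread bodies and shared state are placeholders; note that SDL's rendering calls are themselves picky about which thread they run on, so treat this as a structural outline rather than a drop-in):

#include <SDL.h>
#include <atomic>
#include <mutex>
#include <thread>

std::atomic<bool> running{true};
std::mutex stateMutex;  // guards whatever state update and render both touch

void updateLoop() {
    while (running) {
        {
            std::lock_guard<std::mutex> lock(stateMutex);
            // advance game state here
        }
        SDL_Delay(16);  // ~60 updates per second
    }
}

void renderLoop() {
    while (running) {
        std::lock_guard<std::mutex> lock(stateMutex);
        // draw the current state here
    }
}

int main(int, char**) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* win = SDL_CreateWindow("game", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);

    std::thread update(updateLoop);
    std::thread render(renderLoop);

    SDL_Event e;                    // events stay on the thread that called SDL_Init
    while (running && SDL_WaitEvent(&e))
        if (e.type == SDL_QUIT) running = false;

    update.join();
    render.join();
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}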
I would like to achieve determinism in my game engine, in order to be able to save and replay input sequences and to make networking easier.
My engine currently uses a variable timestep: every frame I calculate the time it took to update/draw the last one and pass it to my entities' update method. This makes 1000 FPS games seem as fast as 30 FPS games, but introduces nondeterministic behavior.
A solution could be fixing the game to 60 FPS, but that would make input more delayed and wouldn't reap the benefits of higher framerates.
So I've tried using a thread (which constantly calls update(1), then sleeps for 16 ms) and drawing as fast as possible in the game loop. It kind of works, but it crashes often and my games become unplayable.
Is there a way to implement threading in my game loop to achieve determinism without having to rewrite all games that depend on the engine?
You should separate game frames from graphical frames. The graphical frames should only display the graphics, nothing else. For the replay it won't matter how many graphical frames your computer was able to execute, be it 30 per second or 1000 per second; the replaying computer will likely replay it with a different graphical frame rate.
But you should indeed fix the gameframes, e.g. to 100 gameframes per second. In the gameframe, the game logic is executed: stuff that is relevant for your game (and the replay).
Your game loop should execute graphical frames whenever no game frame is necessary. So if you fix your game at 100 gameframes per second, that's 0.01 seconds per gameframe; if your computer only needed 0.001 seconds to execute the logic in the gameframe, the other 0.009 seconds are left for rendering graphical frames.
This is a small but incomplete and not 100% accurate example:
uint16_t const GAME_FRAMERATE = 100;
uint16_t const SKIP_TICKS = 1000 / GAME_FRAMERATE;

Timer sinceLoopStarted = Timer(); // Millisecond timer starting at 0
unsigned long next_game_tick = sinceLoopStarted.getMilliseconds();

while (gameIsRunning)
{
    //! Game frames: catch up on any logic steps that are due
    while (sinceLoopStarted.getMilliseconds() > next_game_tick)
    {
        executeGamelogic();
        next_game_tick += SKIP_TICKS;
    }

    //! Graphical frames: render once per loop iteration
    render();
}
The following link contains very good and complete information about creating an accurate gameloop:
http://www.koonsolo.com/news/dewitters-gameloop/
To be deterministic across a network, you need a single point of truth, commonly called "the server". There is a saying in the game community that goes "the client is in the hands of the enemy". That's true. You cannot trust anything that is calculated on the client for a fair game.
If, for example, your game gets easier when for some reason your thread only updates 59 times a second instead of 60, people will find out. Maybe at the start they won't even be malicious; they just had their machines under full load at the time and your process didn't get to run 60 times a second.
Once you have a server (maybe even in-process, as a thread in single player) that does not care about graphics or update cycles and runs at its own speed, it's deterministic enough to at least get the same results for all players. It might still not be 100% deterministic, given that the computer is not real-time: even if you tell it to update every $frequency, it might not, due to other processes on the computer taking too much load.
The server and the clients need to communicate, so the server needs to send a copy of its state (for performance, maybe a delta from the last copy) to each client. The client can draw this copy at the best speed available.
If your game is crashing with the thread, maybe it's an option to actually put "the server" out of process and communicate via the network; this way you will find out pretty quickly which variables would have needed locks, because if you just move them to another project, your client will no longer compile.
Separate game logic and graphics into different threads. The game logic thread should run at a constant speed (say, updating 60 times per second, or even higher if your logic isn't too complicated, to achieve smoother gameplay). Then your graphics thread should always draw the latest info provided by the logic thread, as fast as possible, to achieve high framerates.
In order to prevent partial data from being drawn, you should probably use some sort of double buffering, where the logic thread writes to one buffer, and the graphics thread reads from the other. Then switch the buffers every time the logic thread has done one update.
This should make sure you're always using the computer's graphics hardware to its fullest. Of course, this does mean you're putting constraints on the minimum cpu speed.
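One possible shape for that double buffering (a sketch; GameState and all names here are placeholders):

#include <mutex>
#include <utility>

struct GameState { /* positions, scores, ... */ };

class DoubleBuffer {
    GameState buffers[2];
    GameState* writeBuf = &buffers[0];  // logic thread writes here
    GameState* readBuf  = &buffers[1];  // graphics thread reads here
    std::mutex swapMutex;
public:
    GameState& forWriting() { return *writeBuf; }

    // Called by the logic thread after each completed update.
    void publish() {
        std::lock_guard<std::mutex> lock(swapMutex);
        std::swap(writeBuf, readBuf);
        *writeBuf = *readBuf;  // seed the next update with the published state
    }

    // Called by the graphics thread each frame.
    GameState snapshot() {
        std::lock_guard<std::mutex> lock(swapMutex);
        return *readBuf;
    }
};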
I don't know if this will help but, if I remember correctly, Doom stored your input sequences and used them to generate the AI behaviour and some other things. A demo lump in Doom would be a series of numbers representing not the state of the game, but your input. From that input the game would be able to reconstruct what happened and, thus, achieve some kind of determinism ... Though I remember it going out of sync sometimes.
I'm writing a game in C++ using Allegro 5. Allegro 5 has events which are stacked in an event queue (like mouse clicked, or timer ticked after 1/FPS time). So my question is: what should the logic of the main loop of my game look like? Or, since it's event based, can I implement it without a main loop?
Any ideas how real games do it? Links would be good.
I have no experience with Allegro, but when using SFML and rendering a game with OpenGL, I poll the event queue as part of my main loop myself. Something like the below pseudo-code (but more abstracted):
while (game_on)
{
    auto events = poll_occured_events();
    for_each(events, do_something_with_event);
    render_game();
}
Seems to work fine so far... I'd guess something similar is possible in Allegro. Event driven games are tricky, since you need to continually update the game.
You could (possibly) have the main loop in another thread and then synchronize between event thread and game thread...
I don't have any experience with Allegro either, but the logic would be the same.
(So called) real games also have game loops, but the difference is that they use threads which work in parallel, within different time intervals. For instance, there are different threads for physics calculations, AI, gameplay, sound, rendering... As user events usually concern gameplay, events are collected before it (as Max suggests) and consumed by the next frame (some games actually collect them over, for instance, 5 frames).
As a frame might get too long, all the events coming from the OS are collected by the game; for that reason these inputs are called buffered input. There is also another method, called unbuffered input, which doesn't work discretely: instead, you test the input state during the game loop, at the very instant it is queried.
If user input is very important and you don't want to lose any inputs at all, then you can use buffered input; otherwise, unbuffered. However, unbuffered input might be tricky, especially during debugging.
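To illustrate the difference (using SDL calls for consistency with the rest of this thread; Allegro has analogous functions, and the handler bodies are placeholders):

#include <SDL.h>

// Buffered input: every key press lands in the event queue, so none are missed.
void readBufferedInput() {
    SDL_Event e;
    while (SDL_PollEvent(&e)) {
        if (e.type == SDL_KEYDOWN && e.key.keysym.sym == SDLK_SPACE) {
            // fire weapon: even a very short tap is seen
        }
    }
}

// Unbuffered input: sample the current state at the instant it is queried.
void readUnbufferedInput() {
    SDL_PumpEvents();  // refresh the keyboard state array
    const Uint8* keys = SDL_GetKeyboardState(nullptr);
    if (keys[SDL_SCANCODE_LEFT]) {
        // move left: a tap that falls between two samples can be missed
    }
}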
Here are some links:
book excerpt game engine
Game Loops on iOS