Why is my UWP game slower in release than in debug mode? - c++

I'm trying to make a UWP game, and I came across a problem where my game is much slower in release mode than in debug mode.
My game draws a 3D view (Dungeon Master style) and has a UI part drawn over the 3D view. Because the 3D view can slow down to a small number of frames per second (FPS), I decided to always run the UI part at 60 FPS.
Here is what the main game loop looks like, in pseudocode:
Gameloop start
    Update game data
    Copy the current finished 3D view from buffer to screen
    Draw UI part
    3D view loop start
        If no more time to draw more textures on the 3D view, exit 3D view loop
        Draw one texture to 3D view buffer
    3D view loop end --> 3D view loop start
Gameloop end --> Gameloop start
Here are the actual update and render functions:
void Dungeons_of_NargothMain::Update()
{
    m_ritonTimer.startTimer(static_cast<int>(E_RITON_TIMER::UI));
    m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::LABY_FRAME_COUNT);

    if (m_sceneRenderer->m_numberTotalOfTexturesToDraw == 0 ||
        m_sceneRenderer->m_numberTotalOfTexturesToDraw <= m_sceneRenderer->m_numberOfTexturesDrawn)
    {
        m_sceneRenderer->m_numberTotalOfTexturesToDraw = 150000;
        m_sceneRenderer->m_numberOfTexturesDrawn = 0;
    }
}
// RENDER
bool Dungeons_of_NargothMain::Render()
{
    //********************************//
    //      Render UI part here       //
    //********************************//

    //**********************************//
    // Render 3D view to 960X540 screen //
    //**********************************//
    m_sceneRenderer->setRenderTargetTo960X540Screen();   // 3D view buffer screen

    bool screen960GotFullDrawn = false;
    bool stillenoughTimeLeft = true;
    while (stillenoughTimeLeft && (!screen960GotFullDrawn))
    {
        stillenoughTimeLeft = m_ritonTimer.enoughTimeForOneMoreTexture((int)E_RITON_TIMER::UI);
        screen960GotFullDrawn = m_sceneRenderer->renderNextTextureTo960X540Screen();
    }

    if (screen960GotFullDrawn)
        m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::LABY_FRAME_COUNT);

    return true;
}
I removed what is not essential.
Here is the timer part (RitonTimer):
#pragma once
#include "pch.h"
#include <wrl.h>
#include "RitonTimer.h"

Dungeons_of_Nargoth::RitonTimer::RitonTimer()
{
    initTimer();
    if (!QueryPerformanceCounter(&m_qpcGameStartTime))
    {
        throw ref new Platform::FailureException();
    }
}
void Dungeons_of_Nargoth::RitonTimer::startTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = 0;
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::resetTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
    m_frameCount[timerIndex] = 0;
}
void Dungeons_of_Nargoth::RitonTimer::frameCountPlusOne(int timerIndex)
{
    m_frameCount[timerIndex]++;
}

void Dungeons_of_Nargoth::RitonTimer::manageFramesPerSecond(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    if (m_qpcDeltaTime >= m_qpcFrequency.QuadPart)
    {
        m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
        m_frameCount[timerIndex] = 0;
        m_qpcStartTime[timerIndex] += m_qpcFrequency.QuadPart;
        if ((m_qpcStartTime[timerIndex] + m_qpcFrequency.QuadPart) < m_qpcNowTime.QuadPart)
            m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart - m_qpcFrequency.QuadPart;
    }
}
void Dungeons_of_Nargoth::RitonTimer::initTimer()
{
    if (!QueryPerformanceFrequency(&m_qpcFrequency))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcOneFrameTime = m_qpcFrequency.QuadPart / 60;
    m_qpc5PercentOfOneFrameTime  = m_qpcOneFrameTime / 20;
    m_qpc10PercentOfOneFrameTime = m_qpcOneFrameTime / 10;
    m_qpc95PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc5PercentOfOneFrameTime;
    m_qpc90PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc80PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc70PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc60PercentOfOneFrameTime = m_qpc70PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc50PercentOfOneFrameTime = m_qpc60PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc45PercentOfOneFrameTime = m_qpc50PercentOfOneFrameTime - m_qpc5PercentOfOneFrameTime;
}
bool Dungeons_of_Nargoth::RitonTimer::enoughTimeForOneMoreTexture(int timerIndex)
{
    while (!QueryPerformanceCounter(&m_qpcNowTime));

    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    if (m_qpcDeltaTime < m_qpc45PercentOfOneFrameTime)
        return true;
    else
        return false;
}
In debug mode the game's UI runs at 60 FPS, and the 3D view at about 1 FPS on my PC. But even there I'm not sure why I have to stop drawing textures at 45% of one frame time and call Present to get 60 FPS; if I wait longer, I only get 30 FPS. (This value is set in enoughTimeForOneMoreTexture() in RitonTimer.)
In release mode it drops dramatically, to something like 10 FPS for the UI part and 1 FPS for the 3D part. I've spent the last two days trying to find out why, without success.
I also have another small question: how do I tell Visual Studio that my game is actually a game and not an app? Or does Microsoft do the "switch" when I send my game to their store?
I have put my game on my OneDrive so everyone can download the source files, try to compile it, and see if you get the same results as me:
OneDrive link: https://1drv.ms/f/s!Aj7wxGmZTdftgZAZT5YAbLDxbtMNVg
Compile in either x64 Debug or x64 Release mode.
UPDATE:
I think I found the explanation for why my game is slower in release mode.
The CPU probably does not wait for a draw instruction to finish, but simply adds it to a list which is forwarded to the GPU at its own pace in a separate task (or maybe the GPU does that queuing itself). That would explain it all.
My plan was to draw the UI first and then draw as many textures from the 3D view as possible until 95% of a 1/60th-second frame time had passed, and then present it to the swap chain. The UI would always be at 60 FPS, and the 3D view would run as fast as the system allows (also at 60 FPS if it can all be drawn in 95% of the frame time).
This didn't work because it probably queued all the draw instructions my 3D view issued (I was testing with 150,000 big texture draw instructions for the 3D view) within one frame time, so of course the UI ended up as slow as the 3D view, or close to it.
That is also why, even in debug mode, I didn't get 60 FPS when I waited for 95% of a frame time; I had to stop at 45% of a frame time to get the 60 FPS I wanted for the UI.
I tested it with a lower value in release mode to verify that theory, and indeed I also get 60 FPS for the UI when I stop the drawing at only 15% of a frame time.
I thought it worked like this only in DirectX 12.
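One way to check this theory is to put a GPU event query right after the queued draw calls and see how much later the GPU actually reports completion. A minimal sketch, assuming a D3D11 device and immediate context like the ones the UWP DirectX template provides (device and context here stand in for those existing objects):

#include <d3d11.h>
#include <wrl/client.h>

// Sketch: 'device' and 'context' are assumed to be the app's existing
// ID3D11Device / ID3D11DeviceContext members.
Microsoft::WRL::ComPtr<ID3D11Query> query;
D3D11_QUERY_DESC desc = {};
desc.Query = D3D11_QUERY_EVENT;
device->CreateQuery(&desc, &query);

// ... issue the 3D view draw calls here ...

context->End(query.Get());   // marks this point in the queued command stream
while (context->GetData(query.Get(), nullptr, 0, 0) == S_FALSE)
{
    // The GPU has not finished the queued work yet, even though the Draw()
    // calls returned long ago on the CPU. Spinning here is for measurement
    // only; don't do this in a shipping frame loop.
}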

"How do i tell visual studio that my game is actually a game and not an app" - there's no difference, a game is an app.
I have your code running at 300-400 FPS now in debug mode.
Firstly I commented out your code that checks whether you've got time to render another texture. Don't do that. Everything the player sees should render within a single frame. If your frame is taking more than 16 ms (with a 60 FPS target), look for expensive operations, or calls that are made repeatedly and possibly add up to some unexpected cost. Look for code that does something repeatedly when it only needs to do it once per frame or per resize, etc.
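For example, a crude way to see where the CPU side of a frame goes is a scope timer around each pass (a sketch; the ScopeTimer name and the labelled sections are made up, and this only measures CPU submission time, not when the GPU finishes the work):

#include <windows.h>
#include <cstdio>

// Times a scope with QueryPerformanceCounter and prints the result in ms.
struct ScopeTimer
{
    const char* label;
    LARGE_INTEGER start, freq;
    ScopeTimer(const char* l) : label(l)
    {
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);
    }
    ~ScopeTimer()
    {
        LARGE_INTEGER end;
        QueryPerformanceCounter(&end);
        double ms = 1000.0 * (end.QuadPart - start.QuadPart) / freq.QuadPart;
        std::printf("%s: %.3f ms\n", label, ms);
    }
};

// Usage inside Render(), for example:
// { ScopeTimer t("UI pass");      /* render UI here */ }
// { ScopeTimer t("3D view pass"); /* render 3D view here */ }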
The real issue is that you were rendering very large textures, and a lot of them. You want to avoid overdraw (rendering a pixel where you've already rendered a pixel). A bit of overdraw is fine, and sometimes preferable to being pedantic. But you were drawing 1000x2000 textures over and over again, so you were absolutely killing the pixel shader; it simply can't render that many pixels. I didn't bother looking at the code that tries to control texture rendering based on the frame time remaining. For what you're trying to do, that's not helpful.
Inside your render method, comment out the while and if/else sections and use this to draw an array of your textures:
// set sprite dimensions
int w = 64, h = 64;

for (int y = 0; y < 16; y++)
{
    for (int x = 0; x < 16; x++)
    {
        m_sceneRenderer->renderNextTextureTo960X540Screen(x * 64, y * 64, w, h);
    }
}
and in RenderNextTextureToScreen(int x, int y, int w, int h) ..
m_squareBuffer.sizeX = w; // 1000;
m_squareBuffer.sizeY = h; // 2000;
m_squareBuffer.posX = x; // (float)(rand() % 1920);
m_squareBuffer.posY = y; // (float)(rand() % 1080);
See how this code renders much smaller textures; the textures are 64x64 and there's no overdraw.
Also be aware that the GPU isn't all-powerful. It can do a lot if you use it right, but if you just throw crazy operations at it, you can grind it to a halt, just like with the CPU. So try to render things that 'look normal', that you could imagine being in a game. You'll learn in time what's sensible and what isn't.
The most likely explanation for the code running slower in release mode is that your timing and rendering-limiter code was broken. It wasn't working properly because the 3D view was running at 1 FPS, so who knows what its behaviour was after that. With the changes I've made, the program runs faster in release mode, as expected. Your clock code now shows 600-1600 FPS in release mode for me.

Related

glReadPixels doesn't work for the first left click

I am working on an MFC app which is an MDI. One of the child frames uses OpenGL (mixing fixed-function and modern code), called the 3D view, and another child frame uses GDI, called the plan view. Both views use the same doc.
The 3D view has a function to detect whether the mouse cursor is over the rendered 3D model by reading pixels and checking their depth values.
The function is used for the WM_MOUSEMOVE and WM_LBUTTONDOWN events. Most of the time it works pretty well. But it fails when I move the cursor from the plan view (currently active) to the 3D view and left-click. The depth values read from the pixels (called from OnLButtonDown) are always all zeros, even though the cursor is over a model. No OpenGL error is reported. It only fails on the first mouse click, when the 3D view is not yet activated. After that, everything works well again.
The issue doesn't happen on all machines. It happens to me but not to another person with the same hardware as me. Could it be hardware related, or is it a code bug?
Things tried:
I tried to increase the pixel block size to something much bigger, but the depths are still all zero.
If I click on the title bar of the 3D view to activate it first, it works.
I tried to set the 3D view active and in the foreground in the OnLButtonDown method before reading pixels, but it still failed. (Btw, the 3D view should already be active before the OnLButtonDown handler, via another message handler fired by the left button down.)
I tried to invalidate the rect before reading pixels; that failed too.
The code is as below:
BOOL CMy3DView::IsOverModel(int x0, int y0, int &xM, int &yM, GLfloat &zWin, int width0, int height0)
{
    int width  = max(1, width0);
    int height = max(1, height0);

    CRect RectView;
    GetClientRect(&RectView);
    GLint realy = RectView.Height() - 1 - (GLint)y0;   /* OpenGL y coordinate position */

    std::vector<GLfloat> z(width * height);
    // Read the window z coordinates (the z values of the points in a rectangle of width*height)
    xM = max(0, x0 - (width - 1) / 2);
    yM = max(0, realy - (height - 1) / 2);
    glReadPixels(xM, yM, (GLsizei)width, (GLsizei)height, GL_DEPTH_COMPONENT, GL_FLOAT, &z[0]);
    OutputGlError(_T("glReadPixels"));

    /* check pixels along concentric, nested boxes around the central point */
    for (int k = 0; k <= (min(height, width) - 1) / 2; ++k) {
        for (int i = -k; i <= k; ++i) {
            xM = x0 + i;
            for (int j = -k; j <= k; ++j) {
                if (abs(i) == k || abs(j) == k) {
                    yM = realy + j;
                    zWin = z[(i + (width - 1) / 2) + width * (j + (height - 1) / 2)];
                    if (zWin < 1.0 - FLT_EPSILON) break;
                }
            }
            if (zWin < 1.0 - FLT_EPSILON) break;
        }
        if (zWin < 1.0 - FLT_EPSILON) break;
    }
    yM = RectView.Height() - 1 - yM;

    if (zWin > 1.0 - FLT_EPSILON || zWin < FLT_EPSILON) {   // z is the depth, between 0 and 1, i.e. between the near and far planes.
        xM = x0;
        yM = y0;
        return FALSE;
    }
    return TRUE;
}
Just found a solution for that: I call render(GetDC) before any processing in OnLButtonDown. Somehow it fixed the issue, though I don't think it should be necessary.
InvalidateRect won't fix the issue, since it only updates the view for the next WM_PAINT.
Weird, since it works on some machines without the fix. Still curious about the reason.
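For reference, roughly what that workaround looks like in the handler (a sketch only; the base class, m_hRC, RenderScene and the 9x9 search size are placeholders for this view's actual members and values):

void CMy3DView::OnLButtonDown(UINT nFlags, CPoint point)
{
    // Make sure this view's GL context is current and its framebuffer holds a
    // fresh frame before IsOverModel() reads the depth buffer with glReadPixels.
    CDC* pDC = GetDC();
    wglMakeCurrent(pDC->GetSafeHdc(), m_hRC);   // m_hRC: this view's HGLRC (assumed member)
    RenderScene();                              // placeholder for the view's render call

    int xM = 0, yM = 0;
    GLfloat zWin = 1.0f;
    if (IsOverModel(point.x, point.y, xM, yM, zWin, 9, 9))
    {
        // ... handle the hit ...
    }

    ReleaseDC(pDC);
    CView::OnLButtonDown(nFlags, point);
}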

SDL tilemap rendering quite slow

I'm using SDL to write a simulation that displays a pretty big tilemap (around 240*240 tiles). Since I'm quite new to the SDL library, I can't really tell whether the rather slow performance when rendering more than 50,000 tiles is actually normal. Every tile is visible at all times, each being around 4*4 px. Currently it iterates through a 2D array every frame and renders every single tile, which gives me about 40 FPS, too slow to put any game logic behind the system.
I tried to find some alternative approaches, like only updating changed tiles, but people always commented on how that's bad practice and that the renderer is supposed to be cleared every frame, and so on.
Here's a picture of the map.
So I basically wanted to ask whether there is any more performant approach than rendering every single tile every frame.
Edit: Here's the simple rendering method I'm using:
void World::DirtyBiomeDraw(Graphics *graphics) {
    if (_biomeTexture == NULL) {
        _biomeTexture = graphics->loadImage("assets/biome_sprites.png");
        printf("Biome texture loaded.\n");
    }

    for (int i = 0; i < globals::WORLD_WIDTH; i++) {
        for (int l = 0; l < globals::WORLD_HEIGHT; l++) {
            SDL_Rect srect;
            srect.h = globals::SPRITE_SIZE;
            srect.w = globals::SPRITE_SIZE;

            if (sites[l][i].biome > 0) {
                srect.y = 0;
                srect.x = (globals::SPRITE_SIZE * sites[l][i].biome) - globals::SPRITE_SIZE;
            }
            else {
                srect.y = globals::SPRITE_SIZE;
                srect.x = globals::SPRITE_SIZE * fabs(sites[l][i].biome);
            }

            SDL_Rect drect = {i * globals::SPRITE_SIZE * globals::SPRITE_SCALE,
                              l * globals::SPRITE_SIZE * globals::SPRITE_SCALE,
                              globals::SPRITE_SIZE * globals::SPRITE_SCALE,
                              globals::SPRITE_SIZE * globals::SPRITE_SCALE};

            graphics->blitOnRenderer(_biomeTexture, &srect, &drect);
        }
    }
}
In this context every tile is called a "site", because each one also stores information like moisture, temperature and so on.
Every site gets a biome assigned during the generation process; every biome is basically an ID. Every land biome has an ID higher than 0 and every water ID is 0 or lower.
This lets me put every biome sprite into the "biome_sprites.png" image, ordered by ID. All the land sprites are in the first row, while all the water tiles are in the second row. This way I don't have to assign a sprite to a biome manually, and the method can do it itself by multiplying the tile size (basically the width) by the biome.
Here's the biome ID table from my SDD/GDD and the actual spritesheet.
The blitOnRenderer method from the graphics class basically just runs an SDL_RenderCopy, blitting the texture onto the renderer.
void Graphics::blitOnRenderer(SDL_Texture *texture, SDL_Rect *sourceRectangle,
                              SDL_Rect *destinationRectangle) {
    SDL_RenderCopy(this->_renderer, texture, sourceRectangle, destinationRectangle);
}
In the game loop, RenderClear and RenderPresent get called every frame.
I really hope I explained it understandably. Ask anything you want; I'm the one asking you guys for help, so the least I can do is be cooperative :D
Poke the SDL2 devs for a multi-item version of SDL_RenderCopy() (similar to the existing SDL_RenderDrawLines()/SDL_RenderDrawPoints()/SDL_RenderDrawRects() functions) and/or batched SDL_Renderer backends.
Right now you're trying to slam at least 240*240 = 57,600 draw calls down the GPU's throat; you can usually only count on 1000-4000 draw calls in any given 16 milliseconds.
Alternatively, switch to OpenGL and do the batching yourself.
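A different option, if most of the map only changes occasionally, is to pre-render the tilemap into a single render-target texture and copy that one texture per frame. This is not the batching approach above, just a way to avoid re-submitting every tile each frame; a sketch, assuming the renderer supports render targets, with drawAllTilesOnce, mapWidthPx and mapHeightPx as placeholders:

// One-time (or whenever the map actually changes): draw all tiles into one texture.
SDL_Texture* mapTexture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                            SDL_TEXTUREACCESS_TARGET,
                                            mapWidthPx, mapHeightPx);
SDL_SetRenderTarget(renderer, mapTexture);
drawAllTilesOnce();                      // the existing per-tile loop goes here
SDL_SetRenderTarget(renderer, NULL);     // back to the default target (the window)

// Every frame: a single draw call instead of ~57,600.
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, mapTexture, NULL, NULL);
SDL_RenderPresent(renderer);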

SDL image disappears after 15 seconds

I'm learning SDL and I have a frustrating problem. Code is below.
Even though there is a loop that keeps the program alive, when I load an image and change the x value of the source rect to animate it, the loaded image disappears after exactly 15 seconds. This does not happen with static images, only with animations. I'm sure there is a simple thing I'm missing, but I can't see it.
void update(){
    rect1.x = 62 * int((SDL_GetTicks() / 100) % 12);
    /* 62 is the width of a frame, 12 is the number of frames */
}

void shark(){
    surface = IMG_Load("s1.png");
    if (surface != 0){
        texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);
    }

    rect1.y = 0;
    rect1.h = 90;
    rect1.w = 60;

    rect2.x = 0;
    rect2.y = 0;
    rect2.h = rect1.h + 30;   // enlarging the image
    rect2.w = rect1.w + 30;

    SDL_RenderCopy(renderer, texture, &rect1, &rect2);
}

void render(){
    SDL_SetRenderDrawColor(renderer, 0, 0, 100, 150);
    SDL_RenderPresent(renderer);
    SDL_RenderClear(renderer);
}
and in main
update();
shark();
render();
The SDL_image header is included and linked, and the DLL exists. Could the DLL be broken?
I left out the rest of the program to keep it simple. If this is not enough, I can post the whole thing.
Every time you call the shark function, it loads another copy of the texture. With that in a loop like you have it, you will run out of video memory quickly (unless you are calling SDL_DestroyTexture after every frame, which you have not indicated). At that point, you will no longer be able to load textures. Apparently this takes about fifteen seconds for you.
If you're going to use the same image over and over, just load it once, before your main loop.
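A minimal sketch of that restructuring (the running flag and the exact loop shape are assumptions; surface, texture and renderer mirror the snippet above):

// Load once, before the main loop.
surface = IMG_Load("s1.png");
if (surface != 0) {
    texture = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_FreeSurface(surface);
}

// Main loop: only update/copy/present, no loading.
while (running) {
    update();
    shark();      // now only sets the rects and calls SDL_RenderCopy
    render();
}

SDL_DestroyTexture(texture);   // free it once, after the loop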
About this line: int((SDL_GetTicks() / 100) % 12);
SDL_GetTicks() returns the number of milliseconds that have elapsed since the library initialized (https://wiki.libsdl.org/SDL_GetTicks). So you're updating with the TOTAL AMOUNT OF TIME since your application started, not the time since the last frame.
You're supposed to keep track of the last time and update the application with how much time has passed since the last update.
Uint32 currentTime=SDL_GetTicks();
int deltaTime = (int)( currentTime-lastTime );
lastTime=currentTime; //declared previously
update( deltaTime );
shark();
render();
Edit: Benjamin is right, the update line works fine.
Still, using the deltaTime is good advice. In a game, for instance, you won't use the total time since the beginning of the application; you'll probably need to keep your own counter of how much time has passed (since you started an animation).
But there's nothing wrong with that line for your program anyhow.

Accuracy (not precision) of std::chrono::high_resolution_clock and screen refresh rate

I'm using Visual Studio 2012 and would like to know about the accuracy of high_resolution_clock.
Basically I'm writing some code to display sound and images, but I need them to be very well synchronised, and the images must be tear-free. I'm using DirectX to give tear-free images and am timing screen refreshes using high_resolution_clock. The display claims to be 60 FPS; however, timing with high_resolution_clock gives a refresh rate of 60.035 FPS, averaged over 10,000 screen refreshes. Depending on which is correct, my audio will end up out by 0.5 ms after a second, which is around 2 s after an hour. I would expect any clock to be more accurate than that, more like 1 s of drift over a year, not over an hour.
Has anyone looked at this kind of thing before? Should I expect my sound card's clock to be different again?
Edit:
Here is my timing code. This while loop runs in my rendering thread. m_renderData is an array of structs containing the data needed for rendering my scene; it has one element per screen. For these tests I'm only running on one screen, so it has only one element.
while(!TestDestroy())
{
    for(size_t i=0; i<m_renderData.size(); ++i)
    {
        //work out where in the vsync cycle we are and increment the render cycle
        //as needed until we need to actually render
        D3DRASTER_STATUS rStatus;
        m_renderData[i].deviceD3D9->GetRasterStatus(0, &rStatus);
        if(m_renderData[i].renderStage==inVBlankRenderingComplete)
        {
            if(!rStatus.InVBlank)
                m_renderData[i].renderStage=notInVBlank;
        }
        else if(m_renderData[i].renderStage==notInVBlank)
        {
            if(rStatus.InVBlank)
                m_renderData[i].renderStage=inVBlankReadyToRender;
        }

        //check for missing the vsync for rendering
        bool timeOut=false;
        if(m_renderData[i].durations.size()>0)
        {
            double timeSinceLastRender=std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now()-m_renderData[i].durations.back()).count();
            if (timeSinceLastRender>expectedUpdatePeriod*1.2)
                timeOut=true;
        }

        if(m_renderData[i].renderStage==inVBlankReadyToRender || timeOut)
        {
            //We have reached the time to render
            //record the time and increment the number of renders that have been performed
            m_renderData[i].durations.push_back(std::chrono::high_resolution_clock::now());
            ++m_renderData[i].nRenders;

            //we calculate the fps using 10001 times - i.e. an interval of 10000 frames
            size_t fpsUpdatePeriod=10001;
            if(m_renderData[i].nRenders<fpsUpdatePeriod)
            {
                //if we don't have enough times then display a message
                m_renderData[i].fpsString = "FPS: Calculating";
            }
            else
            {
                //we have enough timing info, calculate the fps
                double meanFrameTime = std::chrono::duration_cast<std::chrono::microseconds>(m_renderData[i].durations.back()-*(m_renderData[i].durations.end()-fpsUpdatePeriod)).count()/double(fpsUpdatePeriod-1);
                double fps = 1000000.0/meanFrameTime;
                saveFps(fps);
            }

            //render to the back buffer for this screen
            renderToBackBuffer(i);

            //display the back buffer
            if(!TestDestroy())
                m_renderData[i].deviceD3D9->Present(NULL, NULL, NULL, NULL);

            //make sure we render to the correct back buffer next time
            m_renderData[i].bufferToRender--;

            //update the render cycle
            m_renderData[i].renderStage=inVBlankRenderingComplete;
        }
    }
}
Unfortunately, in VS 2012 and 2013 the chrono::high_resolution_clock is not using RDTSC. This is fixed for a future version of VS. See VS Connect. In the meantime, use QPC instead.
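For example, a hand-rolled clock on top of QueryPerformanceCounter that slots into the <chrono> duration math (a sketch kept to what VS2012 accepts; the qpc_clock name is made up):

#include <windows.h>
#include <chrono>

struct qpc_clock
{
    typedef long long                          rep;
    typedef std::nano                          period;
    typedef std::chrono::nanoseconds           duration;
    typedef std::chrono::time_point<qpc_clock> time_point;
    static const bool is_steady = true;

    static time_point now()
    {
        static const long long freq = frequency();
        LARGE_INTEGER counter;
        QueryPerformanceCounter(&counter);
        // Convert ticks to nanoseconds; split into seconds + remainder to avoid overflow.
        long long secs = counter.QuadPart / freq;
        long long rem  = counter.QuadPart % freq;
        return time_point(duration(secs * 1000000000LL + rem * 1000000000LL / freq));
    }

private:
    static long long frequency()
    {
        LARGE_INTEGER f;
        QueryPerformanceFrequency(&f);
        return f.QuadPart;
    }
};

// usage: qpc_clock::time_point t0 = qpc_clock::now();
//        std::chrono::nanoseconds dt = qpc_clock::now() - t0;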

Simulated time in a game loop using c++

I am building a 3D game from scratch in C++ using OpenGL and SDL on Linux, as a hobby and to learn more about this area of programming.
I'm wondering about the best way to simulate time while the game is running. Obviously I have a loop that looks something like:
void main_loop()
{
    while(!quit)
    {
        handle_events();
        DrawScene();
        ...
        SDL_Delay(time_left());
    }
}
I am using SDL_Delay and time_left() to maintain a framerate of about 33 FPS.
I had thought that I just need a few global variables like:
int current_hour = 0;
int current_mins = 0;
int num_days = 0;
Uint32 prev_ticks = 0;
Then a function like:
void handle_time()
{
    Uint32 current_ticks;
    Uint32 dticks;

    current_ticks = SDL_GetTicks();
    dticks = current_ticks - prev_ticks; // get difference since last time

    // if difference is greater than 30000 (half minute) increment game mins
    if(dticks >= 30000) {
        prev_ticks = current_ticks;
        current_mins++;
        if(current_mins >= 60) {
            current_mins = 0;
            current_hour++;
        }
        if(current_hour > 23) {
            current_hour = 0;
            num_days++;
        }
    }
}
and then call the handle_time() function in the main loop.
It compiles and runs (using printf to write the time to the console at the moment), but I am wondering if this is the best way to do it. Are there easier or more efficient ways?
I've mentioned this before in other game-related threads. As always, follow the suggestions by Glenn Fiedler in his Game Physics series.
What you want to do is use a constant timestep, which you get by accumulating time deltas. If you want 33 updates per second, your constant timestep should be 1/33 of a second. You could also call this the update frequency. You should also decouple the game logic from the rendering, as they don't belong together. You want to be able to use a low update frequency while rendering as fast as the machine allows. Here is some sample code:
running = true;
unsigned int t_accum=0, lt=0, ct=0;
while(running){
    while(SDL_PollEvent(&event)){
        switch(event.type){
        ...
        }
    }

    ct = SDL_GetTicks();
    t_accum += ct - lt;
    lt = ct;

    while(t_accum >= timestep){
        t += timestep; /* this is our actual time, in milliseconds. */
        t_accum -= timestep;
        for(std::vector<Entity>::iterator en = entities.begin(); en != entities.end(); ++en){
            integrate(en, (float)t * 0.001f, timestep);
        }
    }

    /* This should really be in a separate thread, synchronized with a mutex */
    std::vector<Entity> tmpEntities(entities.size());
    for(int i=0; i<entities.size(); ++i){
        float alpha = (float)t_accum / (float)timestep;
        tmpEntities[i] = interpolateState(entities[i].lastState, alpha, entities[i].currentState, 1.0f - alpha);
    }
    Render(tmpEntities);
}
This handles undersampling as well as oversampling. If you use integer arithmetic as done here, your game physics should be close to 100% deterministic, no matter how slow or fast the machine is. This is the advantage of increasing the time in fixed time intervals. The state used for rendering is calculated by interpolating between the previous and current states, where the leftover value inside the time accumulator is used as the interpolation factor. This ensures that the rendering is smooth, no matter how large the timestep is.
Other than the issues already pointed out (you should use a structure for the times and pass it to handle_time(), and your minute will get incremented every half minute), your solution is fine for keeping track of time running in the game.
However, for most game events that need to happen every so often, you should probably base them off the main game loop instead of actual time, so they happen in the same proportions at a different FPS; see the sketch after this answer.
One of Glenn's posts you will really want to read is Fix Your Timestep!. After looking up this link I noticed that Mads directed you to the same general place in his answer.
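A sketch of that idea, driving the in-game clock from the fixed update count instead of wall-clock milliseconds (the GameClock name and the "updates per game minute" rate are made up for illustration):

struct GameClock
{
    int updates_per_game_minute;   // e.g. 33 updates/s * 30 s = 990 to match the half-minute rate above
    int update_count;
    int minute, hour, day;

    void tick()                    // call once per fixed update, not per render
    {
        if (++update_count >= updates_per_game_minute)
        {
            update_count = 0;
            if (++minute >= 60) { minute = 0; ++hour; }
            if (hour > 23)      { hour = 0;  ++day;  }
        }
    }
};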
I am not a Linux developer, but you might want to have a look at using Timers instead of polling for the ticks.
http://linux.die.net/man/2/timer_create
EDIT:
SDL seems to support timers: SDL_SetTimer
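For completeness, a sketch of the callback style using SDL_AddTimer (check which SDL version you're on; the callback runs on SDL's timer thread, so it should only push an event for the main loop to handle, and SDL must be initialised with SDL_INIT_TIMER):

#include <SDL.h>
#include <string.h>

// Timer callback: runs on a separate thread, so keep it minimal and let the
// main loop do the real work. Returning 'interval' keeps the timer running.
Uint32 gameMinuteTick(Uint32 interval, void *param)
{
    SDL_Event ev;
    memset(&ev, 0, sizeof ev);
    ev.type = SDL_USEREVENT;       // handled in the main event loop
    SDL_PushEvent(&ev);
    return interval;
}

// Setup, e.g. after SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER):
//   SDL_TimerID id = SDL_AddTimer(30000, gameMinuteTick, NULL);  // every 30 s
// Teardown:
//   SDL_RemoveTimer(id);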