SDL image disappears after 15 seconds - c++

I'm learning SDL and I have a frustrating problem. Code is below.
Even though there is a loop that keeps the program alive, when I load an image and change the x value of the source rect to animate, the image that was loaded disappears after exactly 15 seconds. This does not happen with static images, only with animations. I'm sure there is a simple thing I'm missing, but I can't see it.
void update(){
    rect1.x = 62 * int( (SDL_GetTicks() / 100) % 12 );
    /* 62 is the width of a frame, 12 is the number of frames */
}
void shark(){
    surface = IMG_Load("s1.png");
    if (surface != 0){
        texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);
    }
    rect1.y = 0;
    rect1.h = 90;
    rect1.w = 60;
    rect2.x = 0;
    rect2.y = 0;
    rect2.h = rect1.h + 30; // enlarging the image
    rect2.w = rect1.w + 30;
    SDL_RenderCopy(renderer, texture, &rect1, &rect2);
}
void render(){
    SDL_SetRenderDrawColor(renderer, 0, 0, 100, 150);
    SDL_RenderPresent(renderer);
    SDL_RenderClear(renderer);
}
and in main
update();
shark();
render();
The SDL_image header is included, the library is linked, and the DLL exists. Could the DLL be broken?
I left out the rest of the program to keep it simple. If this is not enough, I can post the whole thing.

Every time you call the shark function, it loads another copy of the texture. With that in a loop like you have it, you will run out of video memory quickly (unless you are calling SDL_DestroyTexture after every frame, which you have not indicated), at which point you will no longer be able to load textures. Apparently this takes about fifteen seconds for you.
If you're going to use the same image over and over, then just load it once, before your main loop.
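A minimal sketch of that approach, assuming the same globals (renderer, texture) and rects as in the question, plus a quit flag for the main loop; shark() is assumed to keep only its rect setup and the SDL_RenderCopy call:
// Load once, before the main loop.
SDL_Surface* surface = IMG_Load("s1.png");
if (surface != 0){
    texture = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_FreeSurface(surface);
}

while (!quit){   // main loop
    update();    // advances rect1.x
    shark();     // now only sets up the rects and calls SDL_RenderCopy, no IMG_Load
    render();
}

SDL_DestroyTexture(texture); // clean up once, after the loop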

This line: int( (SDL_GetTicks() / 100) % 12 )
SDL_GetTicks() returns the number of milliseconds that have elapsed since the library initialized (https://wiki.libsdl.org/SDL_GetTicks). So you're updating with the TOTAL AMOUNT OF TIME since your application started, not the time since the last frame.
You're supposed to keep track of the last time and update the application with how much time has passed since the last update.
Uint32 currentTime=SDL_GetTicks();
int deltaTime = (int)( currentTime-lastTime );
lastTime=currentTime; //declared previously
update( deltaTime );
shark();
render();
Edit: Benjamin is right, the update line works fine.
Still, using deltaTime is good advice. In a game, for instance, you won't use the total time since the beginning of the application; you'll probably need to keep your own counter of how much time has passed (since you started an animation).
But there's nothing wrong with that line for your program anyhow.
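For example, building on the deltaTime snippet above, a per-animation counter might look like this (animTime is an illustrative name; the 100 ms per frame and 12 frames come from the original update()):
Uint32 animTime = 0; // reset to 0 whenever the animation (re)starts, e.g. on a key press

// inside the main loop, after computing deltaTime:
animTime += deltaTime;
rect1.x = 62 * ((animTime / 100) % 12); // same frame math as update(), but relative to the animation start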

Why is my UWP game slower in release than in debug mode?

I'm trying to make a UWP game, and I came across a problem where my game is much slower in release mode than it is in debug mode.
My game will draw a 3D view (Dungeon Master style) and will have a UI part that draws over the 3D view. Because the 3D view can slow down to a small number of frames per second (FPS), I decided to make my game run the UI part always at 60 FPS.
Here is what the main game loop looks like, in some pseudo code:
Gameloop start
    Update game data
    Copy actual finished 3D view from buffer to screen
    Draw UI part
    3D view loop start
        If no more time is left to draw more textures on the 3D view, exit 3D view loop
        Draw one texture to 3D view buffer
    3D view loop end --> 3D view loop start
Gameloop end --> Gameloop start
Here are the actual update and render functions:
void Dungeons_of_NargothMain::Update()
{
    m_ritonTimer.startTimer(static_cast<int>(E_RITON_TIMER::UI));
    m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::LABY_FRAME_COUNT);

    if (m_sceneRenderer->m_numberTotalOfTexturesToDraw == 0 ||
        m_sceneRenderer->m_numberTotalOfTexturesToDraw <= m_sceneRenderer->m_numberOfTexturesDrawn)
    {
        m_sceneRenderer->m_numberTotalOfTexturesToDraw = 150000;
        m_sceneRenderer->m_numberOfTexturesDrawn = 0;
    }
}

// RENDER
bool Dungeons_of_NargothMain::Render()
{
    //********************************//
    // Render UI part here            //
    //********************************//

    //**********************************//
    // Render 3D view to 960X540 screen //
    //**********************************//
    m_sceneRenderer->setRenderTargetTo960X540Screen(); // 3D view buffer screen

    bool screen960GotFullDrawn = false;
    bool stillenoughTimeLeft = true;
    while (stillenoughTimeLeft && (!screen960GotFullDrawn))
    {
        stillenoughTimeLeft = m_ritonTimer.enoughTimeForOneMoreTexture((int)E_RITON_TIMER::UI);
        screen960GotFullDrawn = m_sceneRenderer->renderNextTextureTo960X540Screen();
    }

    if (screen960GotFullDrawn)
        m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::LABY_FRAME_COUNT);

    return true;
}
I removed what is not essential.
Here is the timer part (RitonTimer):
#pragma once
#include "pch.h"
#include <wrl.h>
#include "RitonTimer.h"

Dungeons_of_Nargoth::RitonTimer::RitonTimer()
{
    initTimer();
    if (!QueryPerformanceCounter(&m_qpcGameStartTime))
    {
        throw ref new Platform::FailureException();
    }
}

void Dungeons_of_Nargoth::RitonTimer::startTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = 0;
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::resetTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::frameCountPlusOne(int timerIndex)
{
    m_frameCount[timerIndex]++;
}

void Dungeons_of_Nargoth::RitonTimer::manageFramesPerSecond(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    if (m_qpcDeltaTime >= m_qpcFrequency.QuadPart)
    {
        m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
        m_frameCount[timerIndex] = 0;
        m_qpcStartTime[timerIndex] += m_qpcFrequency.QuadPart;
        if ((m_qpcStartTime[timerIndex] + m_qpcFrequency.QuadPart) < m_qpcNowTime.QuadPart)
            m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart - m_qpcFrequency.QuadPart;
    }
}

void Dungeons_of_Nargoth::RitonTimer::initTimer()
{
    if (!QueryPerformanceFrequency(&m_qpcFrequency))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcOneFrameTime = m_qpcFrequency.QuadPart / 60;
    m_qpc5PercentOfOneFrameTime = m_qpcOneFrameTime / 20;
    m_qpc10PercentOfOneFrameTime = m_qpcOneFrameTime / 10;
    m_qpc95PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc5PercentOfOneFrameTime;
    m_qpc90PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc80PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc70PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc60PercentOfOneFrameTime = m_qpc70PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc50PercentOfOneFrameTime = m_qpc60PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc45PercentOfOneFrameTime = m_qpc50PercentOfOneFrameTime - m_qpc5PercentOfOneFrameTime;
}

bool Dungeons_of_Nargoth::RitonTimer::enoughTimeForOneMoreTexture(int timerIndex)
{
    while (!QueryPerformanceCounter(&m_qpcNowTime));
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    if (m_qpcDeltaTime < m_qpc45PercentOfOneFrameTime)
        return true;
    else
        return false;
}
In debug mode the game's UI works at 60 FPS, and the 3D view runs at about 1 FPS on my PC. But even there, I'm not sure why I have to stop my texture drawing at 45% of one frame time and call present to get the 60 FPS; if I wait longer I only get 30 FPS. (This value is set in "enoughTimeForOneMoreTexture()" in RitonTimer.)
In release mode it drops dramatically, to something like 10 FPS for the UI part and 1 FPS for the 3D part. I have tried to find out why for the last 2 days and didn't find it.
Also, I have another small question: how do I tell Visual Studio that my game is actually a game and not an app? Or does Microsoft do the "switch" when I send my game to their store?
Here I have put my game on my OneDrive so everyone can download the source files, try to compile it, and see if you get the same results as me:
OneDrive link: https://1drv.ms/f/s!Aj7wxGmZTdftgZAZT5YAbLDxbtMNVg
Compile in either x64 Debug or x64 Release mode.
UPDATE:
I think I found the explanation for why my game is slower in release mode.
The CPU is probably not waiting for the drawing instruction to be done, but simply adds it to a list which is forwarded to the GPU at its own pace in a separate task (or maybe the GPU does that caching itself). That would explain it all.
My plan was to draw the UI first and then to draw as many textures from the 3D view as possible until 95% of a 1/60th-second frame time had passed, and then present it to the swap chain. The UI would always be at 60 FPS and the 3D view would be as fast as the system allows (also at 60 FPS if it can all be drawn in 95% of the frame time).
This didn't work because it probably cached all the draw instructions my 3D view issued (I was testing with 150000 BIG texture draw instructions for the 3D view) in one frame time, and so of course the UI ended up as slow as the 3D view, or close to it.
That is also why, even in debug mode, I didn't get 60 FPS when I waited for 95% of a frame time; I had to stop at 45% of a frame time to get the 60 FPS I wanted for the UI.
I tested it with a lower value in release mode to verify that theory, and indeed I also get 60 FPS for the UI when I stop the drawing at only 15% of a frame time.
I thought it worked like this only in DirectX 12.
"How do i tell visual studio that my game is actually a game and not an app" - there's no difference, a game is an app.
I have your code running at 300-400 FPS now in debug mode.
Firstly, I commented out your code that checks whether you've got time to render another texture. Don't do that: everything the player sees should render within a single frame. If your frame is taking more than 16 ms (with a 60 FPS target), look for expensive operations, or for calls that are made repeatedly and possibly add up to some unexpected cost. Look for code that does something repeatedly when it only needs to do it once per frame or per resize, etc.
So the issue is that you were rendering very large textures, and a lot of them. You want to avoid overdraw (rendering a pixel where you've already rendered a pixel). You can have a bit of overdraw, and that's sometimes preferable to being pedantic. But you were drawing 1000x2000 textures over and over again, so you were absolutely killing the pixel shader; it simply can't render that many pixels. I didn't bother looking at the code that tries to control texture rendering based on the frame time remaining. For what you're trying to do, that's not helpful.
Inside your render method, comment out the while and if/else sections and use this to draw an array of your textures:
// set sprite dimensions
int w = 64, h = 64;
for (int y = 0; y < 16; y++)
{
    for (int x = 0; x < 16; x++)
    {
        m_sceneRenderer->renderNextTextureTo960X540Screen(x*64, y*64, w, h);
    }
}
and in RenderNextTextureToScreen(int x, int y, int w, int h) ..
m_squareBuffer.sizeX = w; // 1000;
m_squareBuffer.sizeY = h; // 2000;
m_squareBuffer.posX = x; // (float)(rand() % 1920);
m_squareBuffer.posY = y; // (float)(rand() % 1080);
See how this code renders much smaller textures; they are 64x64 and there's no overdraw.
And just be aware that the GPU isn't all-powerful. It can do a lot if you use it right, but if you just throw crazy operations at it, you can grind it to a halt, just like with the CPU. So try to render things that 'look normal', things you can imagine being in a game. You'll learn in time what's sensible and what isn't.
The most likely explanation for the code running slower in release mode is that your timing and rendering-limiter code was broken. It wasn't working properly because the 3D view was running at 1 FPS, so who knows what its behaviour was. With the changes I've made, the program seems to run faster in release mode, as expected. Your clock code is showing 600-1600 FPS in release mode now for me.

Time Step in PhysX

I'm trying to define a time step for the physics simulation in a PhysX application, such that the physics will run at the same speed on all machines. I wish for the physics to update at 60FPS, so each update should have a delta time of 1/60th of a second.
My application must use GLUT. Currently, my loop is set up as follows.
Idle Function:
void GLUTGame::Idle()
{
    newElapsedTime = glutGet(GLUT_ELAPSED_TIME);
    deltaTime = newElapsedTime - lastElapsedTime;
    lastElapsedTime = newElapsedTime;
    glutPostRedisplay();
}
The frame rate does not really matter in this case - it's only the speed at which my physics update that actually matters.
My render function contains the following:
void GLUTGame::Render()
{
    // Rendering Code
    simTimer += deltaTime;
    if (simTimer > m_fps)
    {
        m_scene->UpdatePhys(m_fps);
        simTimer = 0;
    }
}
Where:
Fl32 m_fps = 1.f/60.f
However, this results in some very slow updates, due to the fact that deltaTime appears to equal 0 on most loops (which shouldn't actually be possible...) I've tried moving my deltaTime calculations to the bottom of my rendering function, as I thought that maybe the idle callback was called too often, but this did not solve the issue. Any ideas what I'm doing wrong here?
From the OpenGL website, we find that glutGet(GLUT_ELAPSED_TIME) returns the number of milliseconds passed as an int. So, if you call your void GLUTGame::Idle() method about 2000 times per second, then the time passed after one such call is about 1000 * 1/2000 = 0.5 ms. Thus, with more than 2000 calls per second to void GLUTGame::Idle(), the difference between consecutive glutGet(GLUT_ELAPSED_TIME) values comes out as 0 due to integer truncation.
Likely you're adding very small numbers to larger ones and you get rounding errors.
Try this:
void GLUTGame::Idle()
{
    newElapsedTime = glutGet(GLUT_ELAPSED_TIME);
    timeDelta = newElapsedTime - lastElapsedTime;
    if (timeDelta < m_fps) return; // timeDelta is a whole number of ms, so this skips the redisplay whenever no full millisecond has passed yet
    lastElapsedTime = newElapsedTime;
    glutPostRedisplay();
}
You can do something similar in the other method if you want to.
I don't know anything about GLUT or PhysX, but here's how to have something execute at the same rate (using integers) no matter how fast the game runs:
if (currentTime - lastUpdateTime > msPerUpdate)
{
    DWORD msPassed = currentTime - lastUpdateTime;
    int updatesPassed = msPassed / msPerUpdate;
    for (int i = 0; i < updatesPassed; i++)
        UpdatePhysX(); // or whatever function you use
    lastUpdateTime = currentTime - msPassed + (updatesPassed * msPerUpdate);
}
Where currentTime is updated to timeGetTime() every run through the game loop, lastUpdateTime is the last time PhysX updated, and msPerUpdate is the number of milliseconds you assign to each update - 16 or 17 ms for 60 FPS.
If you want to support floating-point update factors (which is recommended for a physics application), then define float timeMultiplier and update it every frame like so: timeMultiplier = desiredFrameRate / (float)frameRate; - where frameRate is self-explanatory and desiredFrameRate is 60.0f if you want the physics updating at 60 FPS. To do this, you have to change UpdatePhysX to take a float parameter that it multiplies all update factors by.
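A minimal sketch of that idea, assuming UpdatePhysX is called once per rendered frame (the function signature, velocityPerUpdate, and OnFrame are illustrative names, not PhysX API):
const float desiredFrameRate = 60.0f;

// Hypothetical per-frame physics update that scales its step by the multiplier.
void UpdatePhysX(float timeMultiplier, float& position, float velocityPerUpdate)
{
    // velocityPerUpdate is how far the object should move in one nominal 60 FPS update
    position += velocityPerUpdate * timeMultiplier;
}

void OnFrame(float measuredFrameRate, float& position)
{
    // 1.0 at exactly 60 FPS, 0.5 at 120 FPS (half the step, taken twice as often)
    float timeMultiplier = desiredFrameRate / measuredFrameRate;
    UpdatePhysX(timeMultiplier, position, 0.1f);
}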

Start count from zero on each keypress

I have a program in which I am drawing images on the screen. The draw function here is called once per frame, and all my drawing code lives inside it.
I have written an image sequencer that returns the respective image from an index of images.
void draw()
{
    sequence.getFrameForTime(getCurrentElapsedTime()).draw(0,0); // getCurrentElapsedTime returns the time as a float and starts counting on application start
}
On a key press, I have to start the sequence from the first image [0] and then go on from there. So, every time I press a key, it has to start from [0], unlike the above code, which basically uses currentTime % numImages to get the frame (which is not the starting 0 position of the sequence).
I was thinking of writing my own timer that gets triggered every time I press the key, so that the time always starts from 0. But before doing that, I wanted to ask whether anybody has better/easier implementation ideas for this.
EDIT
Why didn't I just use a counter?
I have framerate adjustments in my ImageSequence as well.
Image getFrameAtPercent(float rate)
{
    float totalTime = sequence.size() / frameRate;
    float percent = time / totalTime;
    return setFrameAtPercent(percent);
}

int getFrameIndexAtPercent(float percent)
{
    if (percent < 0.0 || percent > 1.0) percent -= floor(percent);
    return MIN((int)(percent * sequence.size()), sequence.size() - 1);
}
void draw()
{
    sequence.getFrameForTime(counter++).draw(0,0);
}

void OnKeyPress(){ counter = 0; }
Is there a reason this won't suffice?
What you should do is increase a "currentFrame" as a float and convert it to an int to index your frame:
void draw()
{
    currentFrame += deltaTime * framesPerSecond; // deltaTime being the time between the current frame and your last frame
    if (currentFrame >= numImages)
        currentFrame -= numImages;
    sequence.getFrameAt((int)currentFrame).draw(0,0);
}

void OnKeyPress() { currentFrame = 0; }
This should gracefully handle machines with different framerates, and even changes of framerate on a single machine.
Also, you won't be skipping part of a frame when you loop over, as the remainder of the subtraction is kept.

Simulated time in a game loop using c++

I am building a 3D game from scratch in C++ using OpenGL and SDL on Linux, as a hobby and to learn more about this area of programming.
I'm wondering about the best way to simulate time while the game is running. Obviously I have a loop that looks something like:
void main_loop()
{
    while(!quit)
    {
        handle_events();
        DrawScene();
        ...
        SDL_Delay(time_left());
    }
}
I am using the SDL_Delay and time_left() to maintain a framerate of about 33 fps.
I had thought that I just need a few global variables like
int current_hour = 0;
int current_min = 0;
int num_days = 0;
Uint32 prev_ticks = 0;
Then a function like :
void handle_time()
{
    Uint32 current_ticks;
    Uint32 dticks;

    current_ticks = SDL_GetTicks();
    dticks = current_ticks - prev_ticks; // get difference since last time

    // if difference is greater than 30000 (half minute) increment game mins
    if(dticks >= 30000) {
        prev_ticks = current_ticks;
        current_mins++;
        if(current_mins >= 60) {
            current_mins = 0;
            current_hour++;
        }
        if(current_hour > 23) {
            current_hour = 0;
            num_days++;
        }
    }
}
and then call the handle_time() function in the main loop.
It compiles and runs (using printf to write the time to the console at the moment), but I am wondering if this is the best way to do it. Are there easier or more efficient ways?
I've mentioned this before in other game-related threads. As always, follow the suggestions by Glenn Fiedler in his Game Physics series.
What you want to do is to use a constant timestep which you get by accumulating time deltas. If you want 33 updates per second, then your constant timestep should be 1/33. You could also call this the update frequency. You should also decouple the game logic from the rendering as they don't belong together. You want to be able to use a low update frequency while rendering as fast as the machine allows. Here is some sample code:
running = true;
unsigned int t_accum = 0, lt = 0, ct = 0;
while(running){
    while(SDL_PollEvent(&event)){
        switch(event.type){
            ...
        }
    }
    ct = SDL_GetTicks();
    t_accum += ct - lt;
    lt = ct;
    while(t_accum >= timestep){
        t += timestep; /* this is our actual time, in milliseconds. */
        t_accum -= timestep;
        for(std::vector<Entity>::iterator en = entities.begin(); en != entities.end(); ++en){
            integrate(en, (float)t * 0.001f, timestep);
        }
    }
    /* This should really be in a separate thread, synchronized with a mutex */
    std::vector<Entity> tmpEntities(entities.size());
    for(int i = 0; i < entities.size(); ++i){
        float alpha = (float)t_accum / (float)timestep;
        tmpEntities[i] = interpolateState(entities[i].lastState, alpha, entities[i].currentState, 1.0f - alpha);
    }
    Render(tmpEntities);
}
This handles undersampling as well as oversampling. If you use integer arithmetic as done here, your game physics should be close to 100% deterministic, no matter how slow or fast the machine is. This is the advantage of increasing the time in fixed intervals. The state used for rendering is calculated by interpolating between the previous and current states, where the leftover value inside the time accumulator is used as the interpolation factor. This ensures that the rendering is smooth, no matter how large the timestep is.
Other than the issues already pointed out (you should use a structure for the times and pass it to handle_time(), and your minute will get incremented every half minute), your solution is fine for keeping track of time running in the game.
However, for most game events that need to happen every so often, you should probably base them off of the main game loop instead of actual time, so they happen in the same proportions at a different fps.
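For example, counting loop iterations rather than wall-clock time might look like the sketch below (the counter name, the 330-iteration interval, and the spawn_wandering_monster event are all made up for illustration):
int loop_count = 0;

void main_loop_iteration()
{
    loop_count++;
    if (loop_count % 330 == 0) {   /* roughly every 10 seconds at ~33 fps */
        spawn_wandering_monster(); /* hypothetical game event */
    }
}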
One of Glenn's posts you will really want to read is Fix Your Timestep!. After looking up this link I noticed that Mads directed you to the same general place in his answer.
I am not a Linux developer, but you might want to have a look at using timers instead of polling for ticks.
http://linux.die.net/man/2/timer_create
EDIT:
SDL seems to support timers: SDL_SetTimer
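A minimal sketch of the timer route, assuming SDL2's SDL_AddTimer (the SDL2 counterpart of SDL_SetTimer); the callback runs on SDL's timer thread and simply advances the same game-time globals from the question once per 30000 ms, mirroring the original handle_time() interval:
#include <SDL.h>

/* Called on SDL's timer thread; keep it short and be careful about thread safety. */
Uint32 advance_game_time(Uint32 interval, void *param)
{
    current_min++;                 /* globals from the question */
    if (current_min >= 60) { current_min = 0; current_hour++; }
    if (current_hour > 23) { current_hour = 0; num_days++; }
    return interval;               /* returning the interval keeps the timer running */
}

/* During init, after SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER): */
SDL_TimerID game_clock = SDL_AddTimer(30000, advance_game_time, NULL);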

How to programmatically move a window slowly, as if the user were doing it?

I am aware of the MoveWindow() and SetWindowPos() functions, and I know how to use them correctly. However, what I am trying to accomplish is to move a window slowly and smoothly, as if a user were dragging it.
I have yet to get this to work correctly. What I tried was getting the current coordinates with GetWindowRect() and then using the SetWindowPos and MoveWindow functions, incrementing right by 10 pixels each call.
Any ideas?
Here is what I had, besides all my definitions.
while(1)
{
    GetWindowRect(notepad, &window);
    Sleep(1000);
    SetWindowPos(
        notepad,
        HWND_TOPMOST,
        window.top - 10,
        window.right,
        400,
        400,
        TRUE
    );
}
If you want smooth animation, you'll need to make it time-based, and allow Windows to process messages in between movements. Set a timer, and respond to WM_TIMER notifications by moving the window a distance based on the elapsed time since your animation started. For natural-looking movement, don't use a linear function for determining the distance - instead, try something like Robert Harvey's suggested function.
Pseudocode:
//
// animate as a function of time - could use something else, but time is nice.
lengthInMS = 10*1000; // ten second animation length

StartAnimation(desiredPos)
{
    originalPos = GetWindowPos();
    startTime = GetTickCount();
    // omitted: hwnd, ID - you'll call SetTimer differently
    // based on whether or not you have a window of your own
    timerID = SetTimer(30, callback);
}

callback()
{
    elapsed = GetTickCount() - startTime;
    if ( elapsed >= lengthInMS )
    {
        // done - move to destination and stop animation timer.
        MoveWindow(desiredPos);
        KillTimer(timerID);
    }
    // convert elapsed time into a value between 0 and 1
    pos = elapsed / lengthInMS;
    // use Harvey's function to provide smooth movement between original
    // and desired position
    newPos.x = originalPos.x*(1-SmoothMoveELX(pos)) + desiredPos.x*SmoothMoveELX(pos);
    newPos.y = originalPos.y*(1-SmoothMoveELX(pos)) + desiredPos.y*SmoothMoveELX(pos);
    MoveWindow(newPos);
}
I found this code which should do what you want. It's in C#, but you should be able to adapt it:
Increment a variable between 0 and 1 (let's call it "inc" and make it global) using small increments (.03?) and use the function below to give a smooth motion.
The math goes like this:
currentx = x1*(1 - SmoothMoveELX(inc)) + x2*SmoothMoveELX(inc)
currenty = y1*(1 - SmoothMoveELX(inc)) + y2*SmoothMoveELX(inc)
Code:
public double SmoothMoveELX(double x)
{
    double PI = Math.Atan(1) * 4; // Atn(1) * 4 in the original snippet
    return (Math.Cos((1 - x) * PI) + 1) / 2;
}
http://www.vbforums.com/showthread.php?t=568889
A naturally-moving window would accelerate as it started moving, and decelerate as it stopped. The speed vs. time graph would look like a bell curve, or maybe the top of a triangle wave. The triangle wave would be easier to implement.
As you move the box, you steadily increase the number of pixels you move it by each time through the loop until you reach the halfway point between point A and point B, after which you steadily decrease the number of pixels you move it by. There is no special math involved; it is just addition and subtraction.
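A rough sketch of that triangle-wave stepping (the SlideWindow helper, step sizes, and Sleep interval are illustrative; note that this simple blocking loop doesn't pump messages, unlike the timer-based approach above):
#include <windows.h>

// Move a window horizontally from startX to endX, speeding up until the halfway
// point and slowing down afterwards (a rough triangle-wave speed profile).
void SlideWindow(HWND hwnd, int startX, int endX, int y, int width, int height)
{
    int x = startX;
    int step = 1;
    const int halfway = startX + (endX - startX) / 2;

    while (x < endX)
    {
        x += step;
        MoveWindow(hwnd, x, y, width, height, TRUE); // keep y and the size fixed, only x changes
        Sleep(15);                                   // roughly 60 updates per second

        if (x < halfway)
            step += 1;          // accelerate towards the halfway point
        else if (step > 1)
            step -= 1;          // decelerate, but never stop before arriving
    }
    MoveWindow(hwnd, endX, y, width, height, TRUE);  // snap to the exact destination
}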
If you are bored enough, you can do loopback VNC and drag the mouse yourself.
Now, as for why you would want to, I don't know.