this_thread::sleep_for / SDL rendering skips instructions - C++

I'm trying to make a sorting visualizer with SDL2. Everything works except one thing: the wait time.
The visualizer has a configurable delay, but when I set it to around 1 ms it skips some instructions.
Here is 10 ms vs 1 ms:
(video) 10ms delay
(video) 1ms delay
The videos show how the 1 ms delay doesn't actually finish sorting:
(picture) 1ms delay algorithm completion
I suspect the problem is the wait function I use. I'm trying to make this program multi-platform, so there are few portable options to choose from.
Here's a snippet of the code:
Selection Sort Code (Shown in videos):
void selectionSort(void)
{
    int minimum;
    // One by one, move the boundary of the unsorted subarray
    for (int i = 0; i < totalValue - 1; i++)
    {
        // Find the minimum element in the unsorted array
        minimum = i;
        for (int j = i + 1; j < totalValue; j++){
            if (randArray[j] < randArray[minimum]){
                minimum = j;
                lineColoration[j] = 2;
                render();
            }
        }
        lineColoration[i] = 1;
        // Swap the found minimum element with the first element
        swap(randArray[minimum], randArray[i]);
        this_thread::sleep_for(waitTime);
        render();
    }
}
Some variables need explanation:
totalValue is the number of values to be sorted (user input)
randArray is a vector that stores all the values
waitTime is the number of milliseconds to wait on each step (user input)
I've cut the code down and removed the other algorithms to make a reproducible example. Skipping the rendering and printing with cout instead seems to work, but I still can't pin down whether the issue is the rendering or the wait function:
#include <SDL2/SDL.h> // SDL2 core header; declares SDL_Window, SDL_Renderer, Uint32
#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <thread>
#include <vector>
#include <math.h>
SDL_Window* window;
SDL_Renderer* renderer;
using namespace std;
vector<int> randArray;
int totalValue = 100;
auto waitTime = 1ms;
vector<int> lineColoration;
int lineSize;
int lineHeight;
Uint32 ticks = 0;
void OrganizeVariables()
{
    randArray.clear();
    for(int i = 0; i < totalValue; i++)
        randArray.push_back(i + 1);
    auto rng = default_random_engine{};
    shuffle(begin(randArray), end(randArray), rng);
    lineColoration.assign(totalValue, 0);
}
int create_window(void)
{
    window = SDL_CreateWindow("Sorting Visualizer", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 1800, 900, SDL_WINDOW_SHOWN);
    return window != NULL;
}
int create_renderer(void)
{
    renderer = SDL_CreateRenderer(
        window, -1, SDL_RENDERER_PRESENTVSYNC); // Change SDL_RENDERER_PRESENTVSYNC to SDL_RENDERER_ACCELERATED
    return renderer != NULL;
}
int init(void)
{
    if(SDL_Init(SDL_INIT_VIDEO) != 0)
        goto bad_exit;
    if(create_window() == 0)
        goto quit_sdl;
    if(create_renderer() == 0)
        goto destroy_window;
    cout << "All safety checks passed successfully" << endl;
    return 1;
destroy_window:
    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
quit_sdl:
    SDL_Quit();
bad_exit:
    return 0;
}
void cleanup(void)
{
    SDL_DestroyWindow(window);
    SDL_Quit();
}
void render(void)
{
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
    SDL_RenderClear(renderer);
    // This is used to only render when 16ms hits (60fps); if true, will set the ticks variable to GetTicks() + 16
    if(SDL_GetTicks() > ticks) {
        for(int i = 0; i < totalValue - 1; i++) {
            // SDL_Rect image_pos = {i*4, 100, 3, randArray[i]*2};
            SDL_Rect fill_pos = {i * (1 + lineSize), 100, lineSize, randArray[i] * lineHeight};
            switch(lineColoration[i]) {
            case 0:
                SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
                break;
            case 1:
                SDL_SetRenderDrawColor(renderer, 255, 0, 0, 255);
                break;
            case 2:
                SDL_SetRenderDrawColor(renderer, 0, 255, 255, 255);
                break;
            default:
                cout << "Error, drawing color not defined, exiting...";
                cout << "Unknown Color ID: " << lineColoration[i];
                cleanup();
                abort();
                break;
            }
            SDL_RenderFillRect(renderer, &fill_pos);
        }
        SDL_RenderPresent(renderer);
        lineColoration.assign(totalValue, 0);
        ticks = SDL_GetTicks() + 16;
    }
}
void selectionSort(void)
{
    int minimum;
    // One by one, move the boundary of the unsorted subarray
    for (int i = 0; i < totalValue - 1; i++) {
        // Find the minimum element in the unsorted array
        minimum = i;
        for (int j = i + 1; j < totalValue; j++) {
            if (randArray[j] < randArray[minimum]) {
                minimum = j;
                lineColoration[j] = 2;
                render();
            }
        }
        lineColoration[i] = 1;
        // Swap the found minimum element with the first element
        swap(randArray[minimum], randArray[i]);
        this_thread::sleep_for(waitTime);
        render();
    }
}
int main(int argc, char** argv)
{
    // Rough estimate of screen size
    lineSize = 1100 / totalValue;
    lineHeight = 700 / totalValue;
    create_window();
    create_renderer();
    OrganizeVariables();
    selectionSort();
    this_thread::sleep_for(5000ms);
    cleanup();
}

The problem is the line ticks = SDL_GetTicks() + 16;: that 16 ms window is too long for a 1 ms wait, so the if(SDL_GetTicks() > ticks) condition is false most of the time.
If you use a 1 ms wait together with ticks = SDL_GetTicks() + 5, it will work.
In the selectionSort loop, if if(SDL_GetTicks() > ticks) skips the drawing in, say, the last eight iterations, the loop may well finish and leave some drawings pending.
It is not that the algorithm fails to complete; it finishes before SDL_GetTicks() reaches a value high enough to allow the final drawing.
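A minimal sketch of one fix, using the question's own globals: force a final, unconditional draw once the sort has finished.
selectionSort();
ticks = 0;   // guarantee the next SDL_GetTicks() > ticks check passes
render();    // the final frame now shows the fully sorted array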

The main problem is that you are dropping updates to the screen by making all rendering dependent on an if condition:
if(SDL_GetTicks() > ticks)
My tests have shown that only about every 70th call to the function render actually gets rendered. All other calls are filtered by this if condition.
This extremely high number is because you are calling the function render not only in your outer loop, but also in the inner loop. I see no reason why it should also be called in the inner loop. In my opinion, it should only be called in the outer loop.
If you only call it in the outer loop, then about every 16th call to the function is actually rendered.
However, this still means that the last call to the render function only has a 1 in 16 chance of being rendered. Therefore, it is not surprising that the last render of your program does not represent the last sorting step.
If you want to ensure that the last sorting step gets rendered, you could simply execute the rendering code once unconditionally, after the sorting has finished. However, this may not be the ideal solution, because I believe you should first make a more fundamental decision on how your program should behave:
In your question, you are using delays of 1ms between calls to render. This means that your program is designed to render 1000 frames per second. However, your monitor can probably only display about 60 frames per second (some gaming monitors can display more). In that case, every displayed frame lasts for at least 16.7 milliseconds.
Therefore, you must decide how you want your program to behave with regard to the monitor. You could make your program
sort faster than your monitor can display individual sorting steps, so that not all of the sorting steps are rendered, or
sort slower than your monitor can display individual sorting steps, so that all sorting steps are displayed by the monitor for at least one frame, possibly several frames, or
sort at exactly the same speed as your monitor can display, so that one sorting step is displayed for exactly one frame by the monitor.
Implementing #3 is the easiest of all. Because you have enabled VSYNC in the function call to SDL_CreateRenderer, SDL will automatically limit the number of renders to the display rate of your monitor. Therefore, you don't have to perform any additional waiting in your code and can remove the line
this_thread::sleep_for(waitTime);
from the function selectionSort. Also, since SDL knows better than you whether your monitor is ready for the next frame to be drawn, it does not seem appropriate that you try to limit the number of frames yourself. So you can remove the line
if(SDL_GetTicks() > ticks) {
and the corresponding closing brace from the function render.
On the other hand, it may be better to keep the if statement to prevent the massively high frame rates in case SDL doesn't limit them properly. In that case, the frame rate limiter should probably be set well above 60 fps, though (maybe 100-200 fps), to ensure that the frames are passed fast enough to SDL.
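For example (a sketch against the render function from the question; the 8 ms value is just one way to get roughly 125 fps):
// Keep the limiter as a safety net, but raise its cap well above 60 fps.
if (SDL_GetTicks() > ticks) {
    // ... draw and present as before ...
    ticks = SDL_GetTicks() + 8;   // ~125 fps window instead of + 16 (~60 fps)
}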
Implementing #1 is harder, as it actually requires you to select which sorting steps to render and which ones not to render. Therefore, in order to implement #1, you will probably need to keep the if statement mentioned above, so that rendering only occurs conditionally.
However, it does not seem meaningful to make the if statement dependent on the elapsed time since the last render, because while waiting, the sorting will continue at full speed, and it is therefore possible that all of the sorting will be completed with only one frame of rendering. You are currently preventing this from happening by slowing down the sort with the line
this_thread::sleep_for(waitTime);
in the function selectionSort. But this seems less like an ideal solution than a stopgap measure.
Instead of making the if condition dependent on time, it would be easier to make it dependent on the number of sorting steps since the last render. That way, you could, for example, program it in such a way that every 5th sorting step gets rendered. In that case, there would no longer be any need to additionally slow down the actual sorting, and your code would be simpler.
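A minimal sketch of that condition (the counter and threshold names are illustrative, not from the original code):
// Render every Nth sorting step instead of gating on elapsed time.
int stepsSinceRender = 0;       // sorting steps since the last render
const int stepsPerFrame = 5;    // render every 5th step

void renderEveryNthStep()
{
    if (++stepsSinceRender >= stepsPerFrame) {
        stepsSinceRender = 0;
        render();               // a render() without the SDL_GetTicks() check
    }
}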
As already described above, when implementing #1, you will also have to ensure that you do not drop the last rendering step, or that you at least render the last frame after the sorting is finished. Otherwise, the last frame will likely not display the completed sort, but rather an intermediate sorting step.
Implementing #2 is similar to implementing #1, except that you will have to use SDL_Delay (which is equivalent to this_thread::sleep_for) or SDL_AddTimer to determine when it is time to render the next sorting step.
Using SDL_AddTimer would require you to handle SDL Events. However, I would recommend that you do this anyway, because that way, you will also be able to handle SDL_QUIT events, so that you can close your program by closing the window. This would also make the line
this_thread::sleep_for( 5000ms );
at the end of your program unnecessary, because you could instead wait for the user to close the window, like this:
for (;;)
{
    SDL_Event event;
    SDL_WaitEvent( &event );
    if ( event.type == SDL_QUIT ) break;
}
However, it would probably be better if you restructured your entire program, so that you only have one message loop, which responds to both SDL Timer and SDL_QUIT events.
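A rough sketch of such a loop, with SDL_AddTimer pushing a user event for each step (the callback and the use of SDL_USEREVENT here are illustrative assumptions, not from the original answer; requires SDL_Init with SDL_INIT_TIMER):
Uint32 stepTimerCallback(Uint32 interval, void* /*param*/)
{
    SDL_Event event;
    SDL_zero(event);
    event.type = SDL_USEREVENT;   // signal: time for the next sorting step
    SDL_PushEvent(&event);
    return interval;              // keep the timer firing
}

void messageLoop()
{
    SDL_TimerID timer = SDL_AddTimer(16, stepTimerCallback, nullptr);
    for (;;)
    {
        SDL_Event event;
        SDL_WaitEvent(&event);
        if (event.type == SDL_QUIT)
            break;
        if (event.type == SDL_USEREVENT)
        {
            // advance the sort by one step and render it here
        }
    }
    SDL_RemoveTimer(timer);
}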

Related

Delay function makes openFrameworks window freeze until specified time passes

I have a network of vertices and I want to change their color every second. I tried using the Sleep() function and am currently using a different delay function, but both give the same result. Let's say I want to color 10 vertices red with a 1 second pause between each: when I start the project, the window seems to freeze for 10 seconds and then shows every vertex already colored red.
This is my update function.
void ofApp::update(){
    for (int i = 0; i < vertices.size(); i++) {
        ofColor red(255, 0, 0);
        vertices[i].setColor(red);
        delay(1);
    }
}
Here is the draw function:
void ofApp::draw(){
    for (int i = 0; i < vertices.size(); i++) {
        for (int j = 0; j < G[i].size(); j++) {
            ofDrawLine(vertices[i].getX(), vertices[i].getY(), vertices[G[i][j]].getX(), vertices[G[i][j]].getY());
        }
        vertices[i].drawBFS();
    }
}
void vertice::drawBFS() {
    ofNoFill();
    ofSetColor(_color);
    ofDrawCircle(_x, _y, 20);
    ofDrawBitmapString(_id, _x - 3, _y + 3);
    ofColor black(0, 0, 0);
    ofSetColor(black);
}
This is my delay() function
void ofApp::delay(int number_of_seconds) {
    // Converting time into milliseconds
    int milli_seconds = 1000 * number_of_seconds;
    // Storing the start time
    clock_t start_time = clock();
    // Looping until the required time has been achieved
    while (clock() < start_time + milli_seconds)
        ;
}
There is no bug in your code, just a misconception about waiting. So this is no definitive answer, just a hint in the right direction.
You probably have just one thread. One thread can do exactly one thing at a time. When you call delay, all this one thread is doing is checking the time over and over again until some time has passed.
During this time the thread can do nothing else (it cannot magically skip instructions or detect your intentions). So it cannot issue drawing commands or swap buffers to display your vertices on screen. That's the reason your application seems to freeze: 99.9% of the time it's checking whether the interval has passed. This also places a heavy load on the CPU.
The solution can be a bit tricky and requires threading. Usually you have a UI thread that regularly draws stuff, refreshes the display, maybe takes inputs and so on. This thread should never do heavy calculations, so the UI stays responsive. A second thread then manages heavier calculations or data updates.
If you want to run a task at an interval, you don't just loop until the time is over; instead you essentially "tell the OS" that the second thread should be inactive for a certain period. The OS will manage this far more efficiently than active waiting.
But that's quite a large topic, so I suggest you read up on multithreading. C++ has had a small thread library since C++11; it may be worth a look. A rough sketch of the idea follows.
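A minimal sketch using std::thread (the startColoring method is hypothetical, and real code would need a mutex around vertices, since the UI thread reads them while drawing):
#include <chrono>
#include <thread>

// Hypothetical worker: colours one vertex per second while the UI thread keeps drawing.
void ofApp::startColoring() {
    std::thread worker([this]() {
        for (std::size_t i = 0; i < vertices.size(); ++i) {
            vertices[i].setColor(ofColor(255, 0, 0));
            // "Tell the OS" to keep this thread inactive for one second.
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    });
    worker.detach(); // let it run independently of the UI thread
}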
// .h
float nextEventSeconds = 0;
// .cpp
void ofApp::draw(){
float now = ofGetElapsedTimef();
if(now > nextEventSeconds) {
// do something here that should only happen
// every 3 seconds
nextEventSeconds = now + 3;
}
}
Following this example I managed to solve my problem.

Update console without flickering - C++

I'm attempting to make a console side-scrolling shooter. I know this isn't the ideal medium for it, but I set myself a bit of a challenge.
The problem is that whenever it updates the frame, the entire console flickers. Is there any way to get around this?
I have used an array to hold all of the necessary characters to be output; here is my updateFrame function. Yes, I know system("cls") is lazy, but unless that's the cause of the problem I'm not fussed about it for this purpose.
void updateFrame()
{
    system("cls");
    updateBattleField();
    std::this_thread::sleep_for(std::chrono::milliseconds(33));
    for (int y = 0; y < MAX_Y; y++)
    {
        for (int x = 0; x < MAX_X; x++)
        {
            std::cout << battleField[x][y];
        }
        std::cout << std::endl;
    }
}
Ah, this brings back the good old days. I did similar things in high school :-)
You're going to run into performance problems. Console I/O, especially on Windows, is slow. Very, very slow (sometimes slower than writing to disk, even). In fact, you'll quickly be amazed at how much other work you can do without it affecting the latency of your game loop, since the I/O will tend to dominate everything else. So the golden rule is simply to minimize the amount of I/O you do, above all else.
First, I suggest getting rid of the system("cls") and replacing it with calls to the actual Win32 console subsystem functions that cls wraps (docs):
#define NOMINMAX
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
void cls()
{
    // Get the Win32 handle representing standard output.
    // This generally only has to be done once, so we make it static.
    static const HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    CONSOLE_SCREEN_BUFFER_INFO csbi;
    COORD topLeft = { 0, 0 };
    // std::cout uses a buffer to batch writes to the underlying console.
    // We need to flush that to the console because we're circumventing
    // std::cout entirely; after we clear the console, we don't want
    // stale buffered text to randomly be written out.
    std::cout.flush();
    // Figure out the current width and height of the console window
    if (!GetConsoleScreenBufferInfo(hOut, &csbi)) {
        // TODO: Handle failure!
        abort();
    }
    DWORD length = csbi.dwSize.X * csbi.dwSize.Y;
    DWORD written;
    // Flood-fill the console with spaces to clear it
    FillConsoleOutputCharacter(hOut, TEXT(' '), length, topLeft, &written);
    // Reset the attributes of every character to the default.
    // This clears all background colour formatting, if any.
    FillConsoleOutputAttribute(hOut, csbi.wAttributes, length, topLeft, &written);
    // Move the cursor back to the top left for the next sequence of writes
    SetConsoleCursorPosition(hOut, topLeft);
}
Indeed, instead of redrawing the entire "frame" every time, you're much better off drawing (or erasing, by overwriting with a space) individual characters at a time:
// x is the column, y is the row. The origin (0,0) is top-left.
void setCursorPosition(int x, int y)
{
static const HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
std::cout.flush();
COORD coord = { (SHORT)x, (SHORT)y };
SetConsoleCursorPosition(hOut, coord);
}
// Step through with a debugger, or insert sleeps, to see the effect.
setCursorPosition(10, 5);
std::cout << "CHEESE";
setCursorPosition(10, 5);
std::cout 'W';
setCursorPosition(10, 9);
std::cout << 'Z';
setCursorPosition(10, 5);
std::cout << " "; // Overwrite characters with spaces to "erase" them
std::cout.flush();
// VoilĂ , 'CHEESE' converted to 'WHEEZE', then all but the last 'E' erased
Note that this eliminates the flicker, too, since there's no longer any need to clear the screen completely before redrawing -- you can simply change what needs changing without doing an intermediate clear, so the previous frame is incrementally updated, persisting until it's completely up to date.
I suggest using a double-buffering technique: Have one buffer in memory that represents the "current" state of the console screen, initially populated with spaces. Then have another buffer that represents the "next" state of the screen. Your game update logic will modify the "next" state (exactly like it does with your battleField array right now). When it comes time to draw the frame, don't erase everything first. Instead, go through both buffers in parallel, and write out only the changes from the previous state (the "current" buffer at that point contains the previous state). Then, copy the "next" buffer into the "current" buffer to set up for your next frame.
char prevBattleField[MAX_X][MAX_Y];
std::memset((char*)prevBattleField, 0, MAX_X * MAX_Y);
// ...
for (int y = 0; y != MAX_Y; ++y)
{
    for (int x = 0; x != MAX_X; ++x)
    {
        if (battleField[x][y] == prevBattleField[x][y]) {
            continue;
        }
        setCursorPosition(x, y);
        std::cout << battleField[x][y];
    }
}
std::cout.flush();
std::memcpy((char*)prevBattleField, (char const*)battleField, MAX_X * MAX_Y);
You can even go one step further and batch runs of changes together into a single I/O call (which is significantly cheaper than many calls for individual character writes, but still proportionally more expensive the more characters are written).
// Note: This requires you to invert the dimensions of `battleField` (and
// `prevBattleField`) in order for rows of characters to be contiguous in memory.
for (int y = 0; y != MAX_Y; ++y)
{
    int runStart = -1;
    for (int x = 0; x != MAX_X; ++x)
    {
        if (battleField[y][x] == prevBattleField[y][x]) {
            if (runStart != -1) {
                setCursorPosition(runStart, y);
                std::cout.write(&battleField[y][runStart], x - runStart);
                runStart = -1;
            }
        }
        else if (runStart == -1) {
            runStart = x;
        }
    }
    if (runStart != -1) {
        setCursorPosition(runStart, y);
        std::cout.write(&battleField[y][runStart], MAX_X - runStart);
    }
}
std::cout.flush();
std::memcpy((char*)prevBattleField, (char const*)battleField, MAX_X * MAX_Y);
In theory, that will run a lot faster than the first loop; in practice, however, it probably won't make a difference, since std::cout is already buffering writes anyway. But it's a good example (and a common pattern that shows up a lot when there is no buffer in the underlying system), so I included it anyway.
Finally, note that you can reduce your sleep to 1 millisecond. Windows will often actually sleep longer, typically up to 15 ms, but it will prevent your CPU core from reaching 100% usage with a minimum of additional latency.
Note that this is not at all the way "real" games do things; they almost always clear the buffer and redraw everything every frame. They don't get flickering because they use the equivalent of a double buffer on the GPU, where the previous frame stays visible until the new frame has completely finished being drawn.
Bonus: You can change the colour to any of 8 different system colours, and the background too:
void setConsoleColour(unsigned short colour)
{
    static const HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    std::cout.flush();
    SetConsoleTextAttribute(hOut, colour);
}
// Example:
const unsigned short DARK_BLUE = FOREGROUND_BLUE;
const unsigned short BRIGHT_BLUE = FOREGROUND_BLUE | FOREGROUND_INTENSITY;
std::cout << "Hello ";
setConsoleColour(BRIGHT_BLUE);
std::cout << "world";
setConsoleColour(DARK_BLUE);
std::cout << "!" << std::endl;
system("cls") is the cause of your problem. For updating frame your program has to spawn another process and then load and execute another program. This is quite expensive.
cls clears your screen, which means for a small amount of the time (until control returns to your main process) it displays completely nothing. That's where flickering comes from.
You should use some library like ncurses which allows you to display the "scene", then move your cursor position to <0,0> without modifying anything on the screen and redisplay your scene "over" the old one. This way you'll avoid flickering, because your scene will always display something, without 'completely blank screen' step.
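A minimal sketch of that idea with ncurses (not from the original answer; build with -lncurses):
#include <ncurses.h>

int main()
{
    initscr();                        // enter curses mode
    curs_set(0);                      // hide the cursor
    for (int frame = 0; frame < 100; ++frame) {
        erase();                      // clear the virtual screen, not the terminal
        mvprintw(5, frame % 20, "#"); // draw the scene into the virtual screen
        refresh();                    // ncurses writes only the differences, so no flicker
        napms(33);                    // ~30 fps
    }
    endwin();                         // restore the terminal
    return 0;
}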
One method is to write the formatted data to a string (or buffer) and then block-write the buffer to the console.
Every function call has overhead, so try to get more done per call. For your output, this could mean a lot of text per output request.
For example:
static char buffer[2048];
char* p_next_write = &buffer[0];
for (int y = 0; y < MAX_Y; y++)
{
    for (int x = 0; x < MAX_X; x++)
    {
        *p_next_write++ = battleField[x][y];
    }
    *p_next_write++ = '\n';
}
*p_next_write = '\0'; // "Insurance" for C-style strings.
std::cout.write(&buffer[0], p_next_write - &buffer[0]);
I/O operations are expensive (execution-wise), so the best approach is to maximize the data per output request.
With the accepted answer, the rendering will still flicker if the updated area is big enough. Even if you animate a single horizontal line moving from top to bottom, most of the time you'll see it like this:
###########################
#####################
This happens because you see the previous frame in the process of being overwritten by a newer one. For complex scenes like video or 3D rendering this is barely acceptable. The proper way to do it is the double-buffering technique: draw all the "pixels" into an off-screen buffer and, when done, display it all at once. Fortunately, the Windows console supports this approach pretty well. Please see the full example of how to do the double buffering below:
#include <chrono>
#include <thread>
#include <Windows.h>
#include <vector>

const unsigned FPS = 25;
std::vector<char> frameData;
short cursor = 0;

// Get the initial console buffer.
auto firstBuffer = GetStdHandle(STD_OUTPUT_HANDLE);

// Create an additional buffer for switching.
auto secondBuffer = CreateConsoleScreenBuffer(
    GENERIC_READ | GENERIC_WRITE,
    FILE_SHARE_WRITE | FILE_SHARE_READ,
    nullptr,
    CONSOLE_TEXTMODE_BUFFER,
    nullptr);

// Assign switchable back buffer.
HANDLE backBuffer = secondBuffer;
bool bufferSwitch = true;

// Returns current window size in rows and columns.
COORD getScreenSize()
{
    CONSOLE_SCREEN_BUFFER_INFO bufferInfo;
    GetConsoleScreenBufferInfo(firstBuffer, &bufferInfo);
    const auto newScreenWidth = bufferInfo.srWindow.Right - bufferInfo.srWindow.Left + 1;
    const auto newscreenHeight = bufferInfo.srWindow.Bottom - bufferInfo.srWindow.Top + 1;
    return COORD{ static_cast<short>(newScreenWidth), static_cast<short>(newscreenHeight) };
}

// Switches back buffer as active.
void swapBuffers()
{
    WriteConsole(backBuffer, &frameData.front(), static_cast<short>(frameData.size()), nullptr, nullptr);
    SetConsoleActiveScreenBuffer(backBuffer);
    backBuffer = bufferSwitch ? firstBuffer : secondBuffer;
    bufferSwitch = !bufferSwitch;
    std::this_thread::sleep_for(std::chrono::milliseconds(1000 / FPS));
}

// Draw horizontal line moving from top to bottom.
void drawFrame(COORD screenSize)
{
    for (auto i = 0; i < screenSize.Y; i++)
    {
        for (auto j = 0; j < screenSize.X; j++)
            if (cursor == i)
                frameData[i * screenSize.X + j] = '#';
            else
                frameData[i * screenSize.X + j] = ' ';
    }
    cursor++;
    if (cursor >= screenSize.Y)
        cursor = 0;
}

int main()
{
    const auto screenSize = getScreenSize();
    SetConsoleScreenBufferSize(firstBuffer, screenSize);
    SetConsoleScreenBufferSize(secondBuffer, screenSize);
    frameData.resize(screenSize.X * screenSize.Y);

    // Main rendering loop:
    // 1. Draw frame to the back buffer.
    // 2. Set back buffer as active.
    while (true)
    {
        drawFrame(screenSize);
        swapBuffers();
    }
}
In this example, I went with a static FPS value for the sake of simplicity. You may also want to introduce some functionality that stabilizes the frame output frequency by counting the actual FPS. That would make your animation run smoothly, independent of the console throughput.

SDL image disappears after 15 seconds

I'm learning SDL and I have a frustrating problem. The code is below.
Even though there is a loop that keeps the program alive, when I load an image and change the x value of the source rect to animate it, the loaded image disappears after exactly 15 seconds. This does not happen with static images, only with animations. I'm sure there is something simple I'm missing, but I can't see it.
void update(){
    rect1.x = 62 * int ( (SDL_GetTicks() / 100) % 12);
    /* 62 is the width of a frame, 12 is the number of frames */
}
void shark(){
    surface = IMG_Load("s1.png");
    if (surface != 0){
        texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);
    }
    rect1.y = 0;
    rect1.h = 90;
    rect1.w = 60;
    rect2.x = 0;
    rect2.y = 0;
    rect2.h = rect1.h + 30; // enlarging the image
    rect2.w = rect1.w + 30;
    SDL_RenderCopy(renderer, texture, &rect1, &rect2);
}
void render(){
    SDL_SetRenderDrawColor(renderer, 0, 0, 100, 150);
    SDL_RenderPresent(renderer);
    SDL_RenderClear(renderer);
}
and in main
update();
shark();
render();
The SDL_image header is included and linked, and the DLL exists. Could the DLL be broken?
I left out the rest of the program to keep it simple. If this is not enough, I can post the whole thing.
Every time you call the shark function, it loads another copy of the texture. With that in a loop like you have it, you will quickly run out of video memory (unless you are calling SDL_DestroyTexture after every frame, which you have not indicated), at which point you will no longer be able to load textures. Apparently this takes about fifteen seconds for you.
If you're going to use the same image over and over, then just load it once, before your main loop, as sketched below.
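A minimal sketch of that change, reusing the question's globals (the loadShark helper name is illustrative):
SDL_Texture* texture = nullptr; // the question's texture global, filled only once

bool loadShark(){
    SDL_Surface* surface = IMG_Load("s1.png");
    if (surface == 0)
        return false;
    texture = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_FreeSurface(surface);
    return texture != 0;
}

void shark(){
    // no IMG_Load here any more; just draw the already-loaded texture
    SDL_RenderCopy(renderer, texture, &rect1, &rect2);
}

// In main: call loadShark() once before the loop, then update(); shark(); render();
// and call SDL_DestroyTexture(texture) once after the loop.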
Look at this line: int ( (SDL_GetTicks() / 100) % 12);
SDL_GetTicks() returns the number of milliseconds that have elapsed since the lib initialized (https://wiki.libsdl.org/SDL_GetTicks). So you're updating with the TOTAL AMOUNT OF TIME since your application started, not the time since the last frame.
You're supposed to keep track of the last time and update the application with how much time has passed since the last update.
Uint32 currentTime = SDL_GetTicks();
int deltaTime = (int)( currentTime - lastTime );
lastTime = currentTime; // declared previously
update( deltaTime );
shark();
render();
Edit: Benjamin is right, the update line works fine.
Still, using deltaTime is good advice. In a game, for instance, you won't use the total time since the beginning of the application; you'll probably need to keep your own counter of how much time has passed since you started an animation.
But there's nothing wrong with that line for your program anyhow.
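For example, a sketch of the question's update() timed from the animation's own start (animStart is illustrative, not from the original code):
Uint32 animStart = SDL_GetTicks(); // set this when the animation begins

void update(){
    Uint32 animElapsed = SDL_GetTicks() - animStart;
    rect1.x = 62 * int( (animElapsed / 100) % 12 ); // frame index within this animation
}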

Insert a delay in Processing sketch

I am attempting to insert a delay in a Processing sketch. I tried Thread.sleep(), but I guess it will not work because, as in Java, it prevents the rendering of the drawings.
Basically, I have to draw a triangle with delays between drawing the three sides.
How do I do that?
Processing programs can read the value of the computer's clock. The current second is read with the second() function, which returns values from 0 to 59. The current minute is read with the minute() function, which also returns values from 0 to 59. - Processing: A Programming Handbook
Other clock related functions : millis(), day(), month(), year().
Those numbers can be used to trigger events and calculate the passage of time, as in the following Processing sketch quoted from the aforementioned book:
// Uses millis() to start a line in motion three seconds
// after the program starts
int x = 0;
void setup() {
    size(100, 100);
}
void draw() {
    if (millis() > 3000) {
        x++;
        line(x, 0, x, 100);
    }
}
Here's an example of a triangle whose sides are drawn one after another at 3-second intervals (the triangle is reset every minute):
int i = second();
void draw () {
    background(255);
    beginShape();
    if (second() - i >= 3) {
        vertex(50, 0);
        vertex(99, 99);
    }
    if (second() - i >= 6) vertex(0, 99);
    if (second() - i >= 9) vertex(50, 0);
    endShape();
}
As #user2468700 suggests, use a time-keeping function. I like millis().
If you keep one value that tracks the time at certain intervals (updated manually) and compare it against the current time (updated continuously), you can check whether the manually updated timer has fallen behind the continuous one by more than a delay/wait value. If it has, update your data (the number of points to draw in this case) and finally the local stop-watch-like value.
Here's a basic commented example.
Rendering is separated from data updates to make it easier to understand.
//render related
PVector[] points = new PVector[]{new PVector(10,10),//a list of points
                                 new PVector(90,10),
                                 new PVector(90,90)};
int pointsToDraw = 0;//the number of points to draw on the screen
//time keeping related
int now;//keeps track of time only when we update, not continuously
int wait = 1000;//a delay value to check against

void setup(){
    now = millis();//update the 'stop-watch'
}
void draw(){
    //update
    if(millis() - now >= wait){//if the time since the last 'stop-watch' update exceeds the wait time
        if(pointsToDraw < points.length) pointsToDraw++;//if there are points left to render, increment the count
        now = millis();//update the 'stop-watch'
    }
    //render
    background(255);
    beginShape();
    for(int i = 0; i < pointsToDraw; i++) {
        vertex(points[i].x, points[i].y);
    }
    endShape(CLOSE);
}

Simulated time in a game loop using C++

I am building a 3D game from scratch in C++ using OpenGL and SDL on Linux, as a hobby and to learn more about this area of programming.
I am wondering about the best way to simulate time while the game is running. Obviously I have a loop that looks something like:
void main_loop()
{
    while(!quit)
    {
        handle_events();
        DrawScene();
        ...
        SDL_Delay(time_left());
    }
}
I am using SDL_Delay and time_left() to maintain a framerate of about 33 fps.
I had thought that I would just need a few global variables like:
int current_hour = 0;
int current_mins = 0;
int num_days = 0;
Uint32 prev_ticks = 0;
Then a function like:
void handle_time()
{
    Uint32 current_ticks;
    Uint32 dticks;
    current_ticks = SDL_GetTicks();
    dticks = current_ticks - prev_ticks; // get difference since last time
    // if difference is greater than 30000 (half minute) increment game mins
    if(dticks >= 30000) {
        prev_ticks = current_ticks;
        current_mins++;
        if(current_mins >= 60) {
            current_mins = 0;
            current_hour++;
        }
        if(current_hour > 23) {
            current_hour = 0;
            num_days++;
        }
    }
}
and then call the handle_time() function in the main loop.
It compiles and runs (using printf to write the time to the console at the moment), but I am wondering if this is the best way to do it. Are there easier or more efficient ways?
I've mentioned this before in other game-related threads. As always, follow the suggestions by Glenn Fiedler in his Game Physics series.
What you want to do is to use a constant timestep which you get by accumulating time deltas. If you want 33 updates per second, then your constant timestep should be 1/33. You could also call this the update frequency. You should also decouple the game logic from the rendering as they don't belong together. You want to be able to use a low update frequency while rendering as fast as the machine allows. Here is some sample code:
bool running = true;
SDL_Event event;
unsigned int t = 0, t_accum = 0, lt = 0, ct = 0;
const unsigned int timestep = 1000 / 33; /* fixed update interval in ms (~33 updates per second) */
while(running){
    while(SDL_PollEvent(&event)){
        switch(event.type){
            ...
        }
    }
    ct = SDL_GetTicks();
    t_accum += ct - lt;
    lt = ct;
    while(t_accum >= timestep){
        t += timestep; /* this is our actual time, in milliseconds. */
        t_accum -= timestep;
        for(std::vector<Entity>::iterator en = entities.begin(); en != entities.end(); ++en){
            integrate(en, (float)t * 0.001f, timestep);
        }
    }
    /* This should really be in a separate thread, synchronized with a mutex */
    std::vector<Entity> tmpEntities(entities.size());
    for(int i = 0; i < entities.size(); ++i){
        float alpha = (float)t_accum / (float)timestep;
        tmpEntities[i] = interpolateState(entities[i].lastState, alpha, entities[i].currentState, 1.0f - alpha);
    }
    Render(tmpEntities);
}
This handles undersampling as well as oversampling. If you use integer arithmetic as done here, your game physics should be close to 100% deterministic, no matter how slow or fast the machine is. This is the advantage of increasing the time in fixed intervals. The state used for rendering is calculated by interpolating between the previous and current states, where the leftover value inside the time accumulator is used as the interpolation factor. This ensures that the rendering is smooth, no matter how large the timestep is.
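For illustration, one possible shape for interpolateState with a position-only state (the State type and its fields are assumptions, matching the weight order used in the loop above):
struct State { float x, y, z; }; // hypothetical entity state

// Weighted blend of two states; the loop above passes wa = alpha, wb = 1 - alpha.
State interpolateState(const State& a, float wa, const State& b, float wb)
{
    return State{ a.x * wa + b.x * wb,
                  a.y * wa + b.y * wb,
                  a.z * wa + b.z * wb };
}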
Other than the issues already pointed out (you should use a structure for the times and pass it to handle_time(), and your minute will get incremented every half minute), your solution is fine for keeping track of time running in the game.
However, most game events that need to happen every so often should probably be based on the main game loop instead of actual time, so they will happen in the same proportions even at a different fps, as sketched below.
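For example, a loop-based variant of the clock from the question (a sketch; the constant assumes the ~33 fps loop, so a game minute still passes every real half minute):
unsigned long frame_count = 0;
const unsigned long FRAMES_PER_GAME_MIN = 33UL * 30UL; // ~30 s worth of frames at 33 fps

void handle_time_by_frames()
{
    if (++frame_count % FRAMES_PER_GAME_MIN == 0) {
        current_mins++; // then roll minutes into hours and days as in handle_time()
    }
}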
One of Glenn's posts you will really want to read is Fix Your Timestep!. After looking up this link I noticed that Mads directed you to the same general place in his answer.
I am not a Linux developer, but you might want to have a look at using timers instead of polling for the ticks:
http://linux.die.net/man/2/timer_create
EDIT:
SDL seems to support timers: SDL_SetTimer