I'm trying to make a sorting visualizer with SDL2. Everything works except one thing: the wait time.
The sorting visualizer has a delay that I can change to whatever I want, but when I set it to around 1 ms it skips some instructions.
Here is 10ms vs 1ms:
10ms delay
1ms delay
The video shows how the 1ms delay doesn't actually finish sorting:
Picture of 1ms delay algorithm completion.
I suspect the problem is the wait function I use. I'm trying to make this program multi-platform, so there are little to no options.
Here's a snippet of the code:
Selection Sort Code (Shown in videos):
void selectionSort(void)
{
    int minimum;
    // One by one, move the boundary of the unsorted subarray
    for (int i = 0; i < totalValue-1; i++)
    {
        // Find the minimum element in the unsorted part
        minimum = i;
        for (int j = i+1; j < totalValue; j++){
            if (randArray[j] < randArray[minimum]){
                minimum = j;
                lineColoration[j] = 2;
                render();
            }
        }
        lineColoration[i] = 1;
        // Swap the found minimum element with the first element
        swap(randArray[minimum], randArray[i]);
        this_thread::sleep_for(waitTime);
        render();
    }
}
Some variables need explanation:
totalValue is the number of values to be sorted (user input)
randArray is a vector that stores all the values
waitTime is the number of milliseconds to wait on each iteration (user input)
I've cut the code down and removed the other algorithms to make a reproducible example. Not rendering and using cout instead seems to work, but I still can't pin down whether the issue is the rendering or the wait function:
#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <thread>
#include <vector>
#include <math.h>
#include <SDL2/SDL.h> // or <SDL.h>, depending on your include paths; required for the SDL_* calls below
SDL_Window* window;
SDL_Renderer* renderer;
using namespace std;
vector<int> randArray;
int totalValue= 100;
auto waitTime= 1ms;
vector<int> lineColoration;
int lineSize;
int lineHeight;
Uint32 ticks= 0;
void OrganizeVariables()
{
    randArray.clear();
    for(int i= 0; i < totalValue; i++)
        randArray.push_back(i + 1);
    auto rng= default_random_engine{};
    shuffle(begin(randArray), end(randArray), rng);
    lineColoration.assign(totalValue, 0);
}
int create_window(void)
{
    window= SDL_CreateWindow("Sorting Visualizer", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 1800, 900, SDL_WINDOW_SHOWN);
    return window != NULL;
}
int create_renderer(void)
{
    renderer= SDL_CreateRenderer(
        window, -1, SDL_RENDERER_PRESENTVSYNC); // Change SDL_RENDERER_PRESENTVSYNC to SDL_RENDERER_ACCELERATED
    return renderer != NULL;
}
int init(void)
{
    if(SDL_Init(SDL_INIT_VIDEO) != 0)
        goto bad_exit;
    if(create_window() == 0)
        goto quit_sdl;
    if(create_renderer() == 0)
        goto destroy_window;
    cout << "All safety checks passed successfully" << endl;
    return 1;
destroy_window:
    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
quit_sdl:
    SDL_Quit();
bad_exit:
    return 0;
}
void cleanup(void)
{
    SDL_DestroyWindow(window);
    SDL_Quit();
}
void render(void)
{
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
    SDL_RenderClear(renderer);
    // Only render when 16 ms have passed since the last frame (~60fps);
    // if so, set the ticks variable to SDL_GetTicks() + 16
    if(SDL_GetTicks() > ticks) {
        for(int i= 0; i < totalValue - 1; i++) {
            // SDL_Rect image_pos = {i*4, 100, 3, randArray[i]*2};
            SDL_Rect fill_pos= {i * (1 + lineSize), 100, lineSize, randArray[i] * lineHeight};
            switch(lineColoration[i]) {
            case 0:
                SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
                break;
            case 1:
                SDL_SetRenderDrawColor(renderer, 255, 0, 0, 255);
                break;
            case 2:
                SDL_SetRenderDrawColor(renderer, 0, 255, 255, 255);
                break;
            default:
                cout << "Error, drawing color not defined, exiting...";
                cout << "Unknown Color ID: " << lineColoration[i];
                cleanup();
                abort();
                break;
            }
            SDL_RenderFillRect(renderer, &fill_pos);
        }
        SDL_RenderPresent(renderer);
        lineColoration.assign(totalValue, 0);
        ticks= SDL_GetTicks() + 16;
    }
}
void selectionSort(void)
{
    int minimum;
    // One by one, move the boundary of the unsorted subarray
    for (int i = 0; i < totalValue-1; i++) {
        // Find the minimum element in the unsorted part
        minimum = i;
        for (int j = i+1; j < totalValue; j++) {
            if (randArray[j] < randArray[minimum]) {
                minimum = j;
                lineColoration[j] = 2;
                render();
            }
        }
        lineColoration[i] = 1;
        // Swap the found minimum element with the first element
        swap(randArray[minimum], randArray[i]);
        this_thread::sleep_for(waitTime);
        render();
    }
}
int main(int argc, char** argv)
{
    // Rough estimate of screen size
    lineSize= 1100 / totalValue;
    lineHeight= 700 / totalValue;
    if(init() == 0)
        return 1;
    OrganizeVariables();
    selectionSort();
    this_thread::sleep_for(5000ms);
    cleanup();
}
The problem is the line ticks= SDL_GetTicks() + 16;: 16 ms is far longer than your 1 ms wait, so the if(SDL_GetTicks() > ticks) condition is false most of the time.
If you use a 1 ms wait together with ticks= SDL_GetTicks() + 5, it will work.
In the selectionSort loop, if the if(SDL_GetTicks() > ticks) check skips the drawing in, say, the last eight iterations, the loop may well finish and leave some drawings pending.
It is not that the algorithm doesn't complete; it finishes before ticks reaches a value high enough to allow the final drawing.
The main problem is that you are dropping updates to the screen by making all rendering dependent on an if condition:
if(SDL_GetTicks() > ticks)
My tests have shown that only about every 70th call to the function render actually gets rendered. All other calls are filtered by this if condition.
This extremely high number is because you are calling the function render not only in your outer loop, but also in the inner loop. I see no reason why it should also be called in the inner loop. In my opinion, it should only be called in the outer loop.
If you only call it in the outer loop, then about every 16th call to the function is actually rendered.
However, this still means that the last call to the render function only has a 1 in 16 chance of being rendered. Therefore, it is not surprising that the last render of your program does not represent the last sorting step.
If you want to ensure that the last sorting step gets rendered, you could simply execute the rendering code once unconditionally, after the sorting has finished. However, this may not be the ideal solution, because I believe you should first make a more fundamental decision on how your program should behave:
In your question, you are using delays of 1ms between calls to render. This means that your program is designed to render 1000 frames per second. However, your monitor can probably only display about 60 frames per second (some gaming monitors can display more). In that case, every displayed frame lasts for at least 16.7 milliseconds.
Therefore, you must decide how you want your program to behave with regard to the monitor. You could make your program
1. sort faster than your monitor can display individual sorting steps, so that not all of the sorting steps are rendered, or
2. sort slower than your monitor can display individual sorting steps, so that all sorting steps are displayed by the monitor for at least one frame, possibly several frames, or
3. sort at exactly the same speed as your monitor can display, so that each sorting step is displayed for exactly one frame.
Implementing #3 is the easiest of all. Because you have enabled VSYNC in the function call to SDL_CreateRenderer, SDL will automatically limit the number of renders to the display rate of your monitor. Therefore, you don't have to perform any additional waiting in your code and can remove the line
this_thread::sleep_for(waitTime);
from the function selectionSort. Also, since SDL knows better than you whether your monitor is ready for the next frame to be drawn, it does not seem appropriate that you try to limit the number of frames yourself. So you can remove the line
if(SDL_GetTicks() > ticks) {
and the corresponding closing brace from the function render.
On the other hand, it may be better to keep the if statement to prevent the massively high frame rates in case SDL doesn't limit them properly. In that case, the frame rate limiter should probably be set well above 60 fps, though (maybe 100-200 fps), to ensure that the frames are passed fast enough to SDL.
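For example, a minimal tweak along those lines might look like this (the 8 ms value is just a suggestion, corresponding to roughly 125 fps):
if(SDL_GetTicks() > ticks) {
    // ... draw and present as before ...
    ticks= SDL_GetTicks() + 8; // cap at roughly 125 fps instead of ~60
}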
Implementing #1 is harder, as it actually requires you to select which sorting steps to render and which ones not to render. Therefore, in order to implement #1, you will probably need to keep the if statement mentioned above, so that rendering only occurs conditionally.
However, it does not seem meaningful to make the if statement dependent on the elapsed time since the last render, because while waiting, the sorting will continue at full speed, and it is therefore possible that all of the sorting will be completed with only one frame of rendering. You are currently preventing this from happening by slowing down the sort with the line
this_thread::sleep_for(waitTime);
in the function selectionSort. However, this seems less like an ideal solution and more like a stopgap measure.
Instead of making the if condition dependent on time, it would be easier to make it dependent on the number of sorting steps since the last render. That way, you could, for example, program it so that every 5th sorting step gets rendered (see the sketch below). In that case, there would no longer be any need to additionally slow down the actual sorting, and your code would be simpler.
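As a sketch of that idea (STEPS_PER_FRAME and renderEveryNthStep are hypothetical names; render() here is assumed to draw unconditionally, i.e. with the time check removed):
const int STEPS_PER_FRAME = 5; // render every 5th sorting step
int stepsSinceRender = 0;

// call this once per sorting step instead of calling render() directly
void renderEveryNthStep(void)
{
    if (++stepsSinceRender >= STEPS_PER_FRAME) {
        stepsSinceRender = 0;
        render(); // the version without the time-based if condition
    }
}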
As already described above, when implementing #1, you will also have to ensure that you do not drop the last rendering step, or at least render one final frame after the sorting is finished. Otherwise, the last frame will likely not show the completed sort, but rather an intermediate sorting step.
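A minimal sketch of that guaranteed final render, using the variables from the question (resetting ticks so the time check is sure to pass):
selectionSort();
ticks= 0;   // force the SDL_GetTicks() > ticks check to pass
render();   // draw the fully sorted array one last time
this_thread::sleep_for(5000ms);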
Implementing #2 is similar to implementing #1, except that you will have to use SDL_Delay (which is equivalent to this_thread::sleep_for) or SDL_AddTimer to determine when it is time to render the next sorting step.
Using SDL_AddTimer would require you to handle SDL Events. However, I would recommend that you do this anyway, because that way, you will also be able to handle SDL_QUIT events, so that you can close your program by closing the window. This would also make the line
this_thread::sleep_for( 5000ms );
at the end of your program unnecessary, because you could instead wait for the user to close the window, like this:
for (;;)
{
    SDL_Event event;
    SDL_WaitEvent( &event );
    if ( event.type == SDL_QUIT ) break;
}
However, it would probably be better if you restructured your entire program, so that you only have one message loop, which responds to both SDL Timer and SDL_QUIT events.
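A rough sketch of such a combined loop, assuming SDL was initialized with SDL_INIT_TIMER; the callback only pushes a user event, and performing one sorting step per timer tick is an illustrative choice, not the only option:
Uint32 timerCallback(Uint32 interval, void* param)
{
    SDL_Event ev;
    SDL_zero(ev);
    ev.type = SDL_USEREVENT;
    SDL_PushEvent(&ev); // wake up the message loop
    return interval;    // keep the timer running
}

// in main, after initialization:
SDL_TimerID timer = SDL_AddTimer(16, timerCallback, NULL);
for (;;)
{
    SDL_Event event;
    SDL_WaitEvent(&event);
    if (event.type == SDL_QUIT) break;
    if (event.type == SDL_USEREVENT) {
        // advance one sorting step and render it here
    }
}
SDL_RemoveTimer(timer);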
I have a network of vertices and I want to change their color every second. I tried using the Sleep() function and am currently using a different delay function, but both give the same result. Let's say I want to color 10 vertices red with a 1 second pause between each: when I start the project, the window seems to freeze for 10 seconds and then shows every vertex already colored red.
This is my update function.
void ofApp::update(){
    for (int i = 0; i < vertices.size(); i++) {
        ofColor red(255, 0, 0);
        vertices[i].setColor(red);
        delay(1);
    }
}
Here is the draw function:
void ofApp::draw(){
    for (int i = 0; i < vertices.size(); i++) {
        for (int j = 0; j < G[i].size(); j++) {
            ofDrawLine(vertices[i].getX(), vertices[i].getY(), vertices[G[i][j]].getX(), vertices[G[i][j]].getY());
        }
        vertices[i].drawBFS();
    }
}

void vertice::drawBFS() {
    ofNoFill();
    ofSetColor(_color);
    ofDrawCircle(_x, _y, 20);
    ofDrawBitmapString(_id, _x - 3, _y + 3);
    ofColor black(0, 0, 0);
    ofSetColor(black);
}
This is my delay() function
void ofApp::delay(int number_of_seconds) {
    // Convert the time into milliseconds
    int milli_seconds = 1000 * number_of_seconds;
    // Store the start time
    clock_t start_time = clock();
    // Busy-loop until the required time has passed
    while (clock() < start_time + milli_seconds)
        ;
}
There is no bug in your code, just a misconception about waiting. So this is no definitive answer, just a hint in the right direction.
You probably have just one thread. One thread can do exactly one thing at a time. When you call delay, all this one thread does is check the time over and over again until some time has passed.
During this time the thread can do nothing else (it cannot magically skip instructions or detect your intentions). So it cannot issue drawing commands or swap buffers to display vertices on screen. That's the reason why your application seems to freeze: 99.9% of the time it's checking whether the interval has passed. This also places a heavy load on the CPU.
The solution can be a bit tricky and requires threading. Usually you have a UI thread that regularly draws stuff, refreshes the display, maybe takes inputs and so on. This thread should never do heavy calculations, to keep the UI responsive. A second thread then manages heavier calculations or updates data.
If you want to run a task in an interval, you don't just loop until the time is over; you essentially "tell the OS" that the second thread should be inactive for a certain period. The OS will manage this far more efficiently than active waiting.
But that's quite a large topic, so I suggest you read up on multithreading. C++ has had a small thread library since C++11, which may be worth a look.
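To illustrate, here is a minimal sketch of that split; the names colorWorker and coloredCount are made up for this example. A worker thread advances the state once per second and sleeps in between, while the UI thread keeps drawing every frame:
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<int> coloredCount{0}; // shared progress, read by draw()

void colorWorker(int vertexCount)
{
    for (int i = 0; i < vertexCount; ++i) {
        ++coloredCount; // one more vertex should now be drawn red
        std::this_thread::sleep_for(std::chrono::seconds(1)); // the OS puts this thread to sleep
    }
}

// started once, e.g. in ofApp::setup():
// std::thread(colorWorker, 10).detach();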
// .h
float nextEventSeconds = 0;

// .cpp
void ofApp::draw(){
    float now = ofGetElapsedTimef();
    if(now > nextEventSeconds) {
        // do something here that should only happen
        // every 3 seconds
        nextEventSeconds = now + 3;
    }
}
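Applied to the vertex-coloring problem above, the same pattern might look like this sketch (coloredCount is a hypothetical extra member in the header):
// .h
float nextEventSeconds = 0;
int coloredCount = 0;

// .cpp
void ofApp::update(){
    float now = ofGetElapsedTimef();
    if(now > nextEventSeconds && coloredCount < (int)vertices.size()) {
        vertices[coloredCount++].setColor(ofColor(255, 0, 0)); // color one more vertex
        nextEventSeconds = now + 1; // one vertex per second
    }
}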
Following this example I managed to solve my problem.
I have a program in which I am drawing images on the screen. The draw function here is called once per frame, and it contains all my drawing code.
I have written an image sequencer that returns the respective image from an index of images.
void draw()
{
    sequence.getFrameForTime(getCurrentElapsedTime()).draw(0,0); // getCurrentElapsedTime() returns the time as a float and starts counting on application start
}
On a key press, I want to start the sequence from the first image [0] and then go on from there. So, every time I press a key, it has to start from [0], unlike the code above, which basically uses currentTime % numImages to get the frame (which is not the starting position [0] of the sequence).
I was thinking of writing my own timer that is reset every time I press the key, so that the time always starts from 0. But before doing that, I wanted to ask if anybody has better/easier implementation ideas.
EDIT
Why didn't I just use a counter?
I have framerate adjustments in my ImageSequence as well.
Image getFrameAtPercent(float rate)
{
    float totalTime = sequence.size() / frameRate;
    float percent = time / totalTime;
    return setFrameAtPercent(percent);
}

int getFrameIndexAtPercent(float percent){
    if (percent < 0.0 || percent > 1.0) percent -= floor(percent);
    return MIN((int)(percent*sequence.size()), sequence.size()-1);
}
void draw()
{
    sequence.getFrameForTime(counter++).draw(0,0);
}

void OnKeyPress(){ counter = 0; }

Is there a reason this won't suffice?
What you should do is increase a "currentFrame" variable as a float and convert it to an int to index your frame:
void draw()
{
    currentFrame += deltaTime * framesPerSecond; // deltaTime: the time between the current frame and your last frame
    if(currentFrame >= numImages)
        currentFrame -= numImages;
    sequence.getFrameAt((int)currentFrame).draw(0,0);
}

void OnKeyPress() { currentFrame = 0; }

This should gracefully handle machines with different framerates and even changes of framerate on a single machine.
Also, you won't be skipping part of a frame when you loop around, as the remainder of the subtraction is kept.
Let's say I have 4 images and I want to use these 4 images to animate a character. The 4 images represent the character walking. I want the animation to repeat itself as long as I hold the key to move, but to stop as soon as I release it. It doesn't need to be SFML-specific if you don't know SFML; just basic theory would really help me.
Thank you.
You may want some simple kind of state machine. When the key is down (see sf::Input's IsKeyDown method), have the character in the "animated" state. When the key is not down, have the character in "not animated" state. Of course, you could always skip having this "state" and just do what I mention below (depending on exactly what you're doing).
Then, if the character is in the "animated" state, get the next "image" (see the next paragraph for more details on that). For example, if you have your images stored in a simple 4 element array, the next image would be at (currentIndex + 1) % ARRAY_SIZE. Depending on what you are doing, you may want to store your image frames in a more sophisticated data structure. If the character is not in the "animated" state, then you wouldn't do any updating here.
If your "4 images" are within the same image file, you can use the sf::Sprite's SetSubRect method to change the portion of the image displayed. If you actually have 4 different images, then you probably would need to use the sf::Sprite's SetImage method to switch the images out.
How would you enforce a framerate so that the animation doesn't happen too quickly?
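One way to put this together, including a simple throttle so the animation doesn't advance once per rendered frame, is sketched below. It uses SFML 2 names (sf::Keyboard::isKeyPressed, sf::Sprite::setTextureRect) rather than the older SFML 1 API mentioned above, and the frame layout and timing constants are made-up values:
#include <SFML/Graphics.hpp>

const int NUM_FRAMES = 4; // the 4 walking images, side by side in one texture
int currentIndex = 0;
sf::Clock frameClock;

void updateWalkAnimation(sf::Sprite& character, int frameWidth, int frameHeight)
{
    if (sf::Keyboard::isKeyPressed(sf::Keyboard::Right)) {    // "animated" state
        if (frameClock.getElapsedTime().asSeconds() > 0.1f) { // advance at most every 100 ms
            currentIndex = (currentIndex + 1) % NUM_FRAMES;
            character.setTextureRect(sf::IntRect(currentIndex * frameWidth, 0, frameWidth, frameHeight));
            frameClock.restart();
        }
    } else {
        currentIndex = 0; // "not animated" state: back to the standing frame
    }
}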
Please see my answer here: https://stackoverflow.com/a/52656103/3624674
You need to supply a duration per frame and use the total progress of the animation to step through to the correct frame.
In the Animation source file, do:
struct Frame {
    sf::IntRect rect; // the sub-rectangle of the texture to show for this frame
    double duration;  // in seconds
};

class Animation {
    std::vector<Frame> frames;
    double totalLength;
    double totalProgress;
    sf::Sprite *target;
public:
    Animation(sf::Sprite& target) {
        this->target = &target;
        totalLength = 0.0;
        totalProgress = 0.0;
    }
    void addFrame(Frame& frame) {
        totalLength += frame.duration;
        frames.push_back(std::move(frame));
    }
    void update(double elapsed) {
        // increase the total progress of the animation
        totalProgress += elapsed;
        // use this progress as a counter: the current frame is the one at which the counter drops to <= 0
        double progress = totalProgress;
        for(auto& frame : frames) {
            progress -= frame.duration;
            // when progress is <= 0 or we are on the last frame in the list, stop
            if (progress <= 0.0 || &frame == &frames.back())
            {
                target->setTextureRect(frame.rect);
                break; // we found our frame
            }
        }
    }
};
To stop when you release the key, simply animate only while the key is held:
if(isKeyPressed) {
    animation.update(elapsed);
}
To support multiple animations for different situations, have a boolean for each state:
bool isWalking, isJumping, isAttacking;
...
if(isJumping && !isWalking && !isAttacking) {
    jumpAnimation.update(elapsed);
} else if(isWalking && !isAttacking) {
    walkAnimation.update(elapsed);
} else if(isAttacking) {
    attackAnimation.update(elapsed);
}
...
// now check for keyboard presses
if(jumpkeyPressed) { isJumping = true; } else { isJumping = false; }
I am building a 3d game from scratch in C++ using OpenGL and SDL on linux as a hobby and to learn more about this area of programming.
I'm wondering about the best way to simulate time while the game is running. Obviously I have a loop that looks something like:
void main_loop()
{
    while(!quit)
    {
        handle_events();
        DrawScene();
        ...
        SDL_Delay(time_left());
    }
}
I am using the SDL_Delay and time_left() to maintain a framerate of about 33 fps.
I had thought that I would just need a few global variables, like:
int current_hour = 0;
int current_mins = 0;
int num_days = 0;
Uint32 prev_ticks = 0;
Then a function like:
void handle_time()
{
    Uint32 current_ticks;
    Uint32 dticks;
    current_ticks = SDL_GetTicks();
    dticks = current_ticks - prev_ticks; // get difference since last time
    // if difference is greater than 30000 (half minute) increment game mins
    if(dticks >= 30000) {
        prev_ticks = current_ticks;
        current_mins++;
        if(current_mins >= 60) {
            current_mins = 0;
            current_hour++;
        }
        if(current_hour > 23) {
            current_hour = 0;
            num_days++;
        }
    }
}
and then call the handle_time() function in the main loop.
It compiles and runs (using printf to write the time to the console at the moment), but I am wondering if this is the best way to do it. Are there easier or more efficient ways?
I've mentioned this before in other game-related threads. As always, follow the suggestions by Glenn Fiedler in his Game Physics series.
What you want to do is to use a constant timestep which you get by accumulating time deltas. If you want 33 updates per second, then your constant timestep should be 1/33. You could also call this the update frequency. You should also decouple the game logic from the rendering as they don't belong together. You want to be able to use a low update frequency while rendering as fast as the machine allows. Here is some sample code:
running = true;
unsigned int timestep = 1000 / 33; /* constant timestep in milliseconds, for 33 updates per second */
unsigned int t = 0, t_accum = 0, lt = 0, ct = 0;
while(running){
    while(SDL_PollEvent(&event)){
        switch(event.type){
            ...
        }
    }
    ct = SDL_GetTicks();
    t_accum += ct - lt;
    lt = ct;
    while(t_accum >= timestep){
        t += timestep; /* this is our actual time, in milliseconds. */
        t_accum -= timestep;
        for(std::vector<Entity>::iterator en = entities.begin(); en != entities.end(); ++en){
            integrate(en, (float)t * 0.001f, timestep);
        }
    }
    /* This should really be in a separate thread, synchronized with a mutex */
    std::vector<Entity> tmpEntities(entities.size());
    for(int i = 0; i < entities.size(); ++i){
        float alpha = (float)t_accum / (float)timestep;
        tmpEntities[i] = interpolateState(entities[i].lastState, alpha, entities[i].currentState, 1.0f - alpha);
    }
    Render(tmpEntities);
}
This handles undersampling as well as oversampling. If you use integer arithmetic as done here, your game physics should be close to 100% deterministic, no matter how slow or fast the machine is. This is the advantage of increasing the time in fixed intervals. The state used for rendering is calculated by interpolating between the previous and current states, where the leftover value in the time accumulator is used as the interpolation factor. This ensures that the rendering is smooth, no matter how large the timestep is.
Other than the issues already pointed out (you should use a structure for the times and pass it to handle_time(), and your minute will get incremented every half minute), your solution is fine for keeping track of the time running in the game.
However, for most game events that need to happen every so often, you should probably base them on the main game loop instead of actual time, so they happen in the same proportions at a different fps (see the sketch below).
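As a sketch of that idea (TICKS_PER_GAME_MINUTE is an arbitrary tuning constant, not something derived from your code):
const unsigned int TICKS_PER_GAME_MINUTE = 200; // tune to taste
unsigned int loop_count = 0;

// call this once per iteration of the main loop instead of handle_time()
void game_tick()
{
    if (++loop_count >= TICKS_PER_GAME_MINUTE) {
        loop_count = 0;
        current_mins++; // then apply the same roll-over logic as in handle_time()
    }
}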
One of Glenn's posts you will really want to read is Fix Your Timestep!. After looking up this link I noticed that Mads directed you to the same general place in his answer.
I am not a Linux developer, but you might want to have a look at using timers instead of polling for ticks.
http://linux.die.net/man/2/timer_create
EDIT:
SDL seems to support timers: SDL_SetTimer