I am writing an app that draws multiple figures and compares images (counting pixel values). The drawing-and-comparing thread doesn't display any image; the GUI is implemented in another thread.
I have found a very strange anomaly: my drawing-and-comparing thread works very slowly, but when I add an sf::Window before the main loop,
I get a 70x performance increase. Adding that line breaks my GUI, though (probably because I create the window in another thread)*.
I am looking for a way to increase performance without using sf::Window::create(...).
Full example:
#include <SFML/Graphics.hpp>
#include <cstdlib>
#include <iostream>
#include <vector>

int main()
{
    // sf::Window window(sf::VideoMode(200, 200), "SFML");
    std::vector<sf::CircleShape> circles_;
    for (int i = 0; i < 200; i++)
    {
        sf::CircleShape circle(rand() % 50 + 10, 20);
        circle.setFillColor(sf::Color(rand() % 256, rand() % 256, rand() % 256, 128));
        circle.setPosition(rand() % 100, rand() % 100);
        circles_.push_back(circle);
    }
    sf::RenderTexture generated_texture;
    generated_texture.create(100, 100);
    sf::Clock clock;
    unsigned int i = 0;
    while (i < 10)
    {
        for (auto &shape : circles_)
        {
            generated_texture.draw(shape);
        }
        i++;
    }
    double result = double(i) / clock.getElapsedTime().asSeconds();
    std::cout << "Result: " << result << " loops/sec";
    return 0;
}
*For simplification, let's assume that I don't have any GUI (in my app it is optional); I just want to run my app from the command line.
I opened an issue on SFML's GitHub and found out that the slowdown was caused by repeatedly activating and deactivating the OpenGL context, so in this case instead of:
sf::Window window(sf::VideoMode(200, 200), "SFML");
it should be:
sf::Context some_context;
Original issue link:
https://github.com/SFML/SFML/issues/1672
Full answer:
"When sf::RenderTexture is done drawing it tries to restore the state to how it was before the draw call. Since this was the "no context" state it will repeatedly activate and deactivate the context every iteration. This is standard behaviour for any OpenGL resource that can live on its own without necessarily having a window.
If you want to do anything rendering related it is recommended to always have some kind of context-owning-thing lying around. If you don't want a full sf::Window then an sf::Context will have to do.
Due to the nature of OpenGL, you will never get around the limitation of having some kind of window (whether it's visible or not) that itself owns a context. That's just the way the API designers designed it 25 years ago."
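Applied to the threaded setup described in the question, the fix means creating the sf::Context at the top of the drawing thread. A minimal sketch, assuming SFML 2.x (drawAndCompare is a hypothetical name):

#include <SFML/Graphics.hpp>
#include <thread>

void drawAndCompare() // hypothetical worker, as described in the question
{
    sf::Context context;       // owns an OpenGL context for this thread's whole lifetime,
                               // so sf::RenderTexture no longer activates/deactivates one per draw
    sf::RenderTexture texture;
    texture.create(100, 100);
    // ... draw the circles and count pixel values here ...
}

int main()
{
    std::thread worker(drawAndCompare); // the GUI (if any) stays on its own thread
    worker.join();
    return 0;
}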
Related
I'm trying to make a sorting visualizer with SDL2. Everything works except one thing: the wait time.
The sorting visualizer has a delay that I can change to whatever I want, but when I set it to around 1 ms it skips some instructions.
Here is 10 ms vs 1 ms:
[video: 10 ms delay]
[video: 1 ms delay]
The video shows how the 1 ms delay doesn't actually finish sorting:
[picture: 1 ms delay algorithm completion]
I suspect the problem is the wait function I use. I'm trying to make this program multi-platform, so there are few portable options.
Here's a snippet of the code:
Selection Sort Code (Shown in videos):
void selectionSort(void)
{
    int minimum;
    // One by one, move the boundary of the unsorted subarray
    for (int i = 0; i < totalValue - 1; i++)
    {
        // Find the minimum element in the unsorted part
        minimum = i;
        for (int j = i + 1; j < totalValue; j++){
            if (randArray[j] < randArray[minimum]){
                minimum = j;
                lineColoration[j] = 2;
                render();
            }
        }
        lineColoration[i] = 1;
        // Swap the found minimum element with the first element
        swap(randArray[minimum], randArray[i]);
        this_thread::sleep_for(waitTime);
        render();
    }
}
Some variables need explanation:
totalValue is the number of values to be sorted (user input)
randArray is a vector that stores all the values
waitTime is the number of milliseconds the computer will wait each time (user input)
I've cut the code down and removed the other algorithms to make a reproducible example. Not rendering and using cout instead seems to work, but I still can't pin down whether the issue is the render or the wait function:
#include <SDL2/SDL.h> // for SDL_Window, SDL_Renderer, and the SDL_* calls below
#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <thread>
#include <vector>
#include <math.h>
SDL_Window* window;
SDL_Renderer* renderer;
using namespace std;
vector<int> randArray;
int totalValue = 100;
auto waitTime = 1ms;
vector<int> lineColoration;
int lineSize;
int lineHeight;
Uint32 ticks = 0;
void OrganizeVariables()
{
    randArray.clear();
    for(int i = 0; i < totalValue; i++)
        randArray.push_back(i + 1);
    auto rng = default_random_engine{};
    shuffle(begin(randArray), end(randArray), rng);
    lineColoration.assign(totalValue, 0);
}

int create_window(void)
{
    window = SDL_CreateWindow("Sorting Visualizer", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 1800, 900, SDL_WINDOW_SHOWN);
    return window != NULL;
}

int create_renderer(void)
{
    renderer = SDL_CreateRenderer(
        window, -1, SDL_RENDERER_PRESENTVSYNC); // change SDL_RENDERER_PRESENTVSYNC to SDL_RENDERER_ACCELERATED
    return renderer != NULL;
}
int init(void)
{
    if(SDL_Init(SDL_INIT_VIDEO) != 0)
        goto bad_exit;
    if(create_window() == 0)
        goto quit_sdl;
    if(create_renderer() == 0)
        goto destroy_window;
    cout << "All safety checks passed successfully" << endl;
    return 1;

destroy_window:
    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
quit_sdl:
    SDL_Quit();
bad_exit:
    return 0;
}

void cleanup(void)
{
    SDL_DestroyWindow(window);
    SDL_Quit();
}
void render(void)
{
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
    SDL_RenderClear(renderer);
    // Only render when at least 16 ms (~60 fps) have passed since the last render;
    // when we do render, ticks is set to SDL_GetTicks() + 16 at the end of the block
    if(SDL_GetTicks() > ticks) {
        for(int i = 0; i < totalValue - 1; i++) {
            // SDL_Rect image_pos = {i*4, 100, 3, randArray[i]*2};
            SDL_Rect fill_pos = {i * (1 + lineSize), 100, lineSize, randArray[i] * lineHeight};
            switch(lineColoration[i]) {
            case 0:
                SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
                break;
            case 1:
                SDL_SetRenderDrawColor(renderer, 255, 0, 0, 255);
                break;
            case 2:
                SDL_SetRenderDrawColor(renderer, 0, 255, 255, 255);
                break;
            default:
                cout << "Error: drawing color not defined, exiting...";
                cout << "Unknown Color ID: " << lineColoration[i];
                cleanup();
                abort();
                break;
            }
            SDL_RenderFillRect(renderer, &fill_pos);
        }
        SDL_RenderPresent(renderer);
        lineColoration.assign(totalValue, 0);
        ticks = SDL_GetTicks() + 16;
    }
}
void selectionSort(void)
{
    int minimum;
    // One by one, move the boundary of the unsorted subarray
    for (int i = 0; i < totalValue - 1; i++) {
        // Find the minimum element in the unsorted part
        minimum = i;
        for (int j = i + 1; j < totalValue; j++) {
            if (randArray[j] < randArray[minimum]) {
                minimum = j;
                lineColoration[j] = 2;
                render();
            }
        }
        lineColoration[i] = 1;
        // Swap the found minimum element with the first element
        swap(randArray[minimum], randArray[i]);
        this_thread::sleep_for(waitTime);
        render();
    }
}
int main(int argc, char** argv)
{
    // Rough estimate of screen size
    lineSize = 1100 / totalValue;
    lineHeight = 700 / totalValue;
    create_window();
    create_renderer();
    OrganizeVariables();
    selectionSort();
    this_thread::sleep_for(5000ms);
    cleanup();
}
The problem is the line ticks= SDL_GetTicks() + 16;: 16 ms is too long a wait for a 1 ms delay, so the if(SDL_GetTicks() > ticks) condition is false most of the time.
If you use a 1 ms wait with ticks= SDL_GetTicks() + 5, it will work.
In the selectionSort loop, if the if(SDL_GetTicks() > ticks) check skips the drawing in, say, the last eight iterations, the loop may well finish and leave some renders pending.
The algorithm does complete; it simply finishes before ticks reaches a value high enough to allow the final drawing.
The main problem is that you are dropping updates to the screen by making all rendering dependent on an if condition:
if(SDL_GetTicks() > ticks)
My tests have shown that only about every 70th call to the function render actually gets rendered. All other calls are filtered by this if condition.
This extremely high number is because you are calling the function render not only in your outer loop, but also in the inner loop. I see no reason why it should also be called in the inner loop. In my opinion, it should only be called in the outer loop.
If you only call it in the outer loop, then about every 16th call to the function is actually rendered.
However, this still means that the last call to the render function only has a 1 in 16 chance of being rendered. Therefore, it is not surprising that the last render of your program does not represent the last sorting step.
If you want to ensure that the last sorting step gets rendered, you could simply execute the rendering code once unconditionally, after the sorting has finished. However, this may not be the ideal solution, because I believe you should first make a more fundamental decision on how your program should behave:
In your question, you are using delays of 1ms between calls to render. This means that your program is designed to render 1000 frames per second. However, your monitor can probably only display about 60 frames per second (some gaming monitors can display more). In that case, every displayed frame lasts for at least 16.7 milliseconds.
Therefore, you must decide how you want your program to behave with regard to the monitor. You could make your program
sort faster than your monitor can display individual sorting steps, so that not all of the sorting steps are rendered, or
sort slower than your monitor can display individual sorting steps, so that all sorting steps are displayed by the monitor for at least one frame, possibly several frames, or
sort at exactly the same speed as your monitor can display, so that one sorting step is displayed for exactly one frame by the monitor.
Implementing #3 is the easiest of all. Because you have enabled VSYNC in the function call to SDL_CreateRenderer, SDL will automatically limit the number of renders to the display rate of your monitor. Therefore, you don't have to perform any additional waiting in your code and can remove the line
this_thread::sleep_for(waitTime);
from the function selectionSort. Also, since SDL knows better than you whether your monitor is ready for the next frame to be drawn, it does not seem appropriate that you try to limit the number of frames yourself. So you can remove the line
if(SDL_GetTicks() > ticks) {
and the corresponding closing brace from the function render.
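With those two removals, render reduces to something like this sketch (the switch(lineColoration[i]) color selection is unchanged and elided here):

void render(void)
{
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
    SDL_RenderClear(renderer);
    for(int i = 0; i < totalValue - 1; i++) {
        SDL_Rect fill_pos = {i * (1 + lineSize), 100, lineSize, randArray[i] * lineHeight};
        // ... same switch(lineColoration[i]) color selection as above ...
        SDL_RenderFillRect(renderer, &fill_pos);
    }
    SDL_RenderPresent(renderer); // with SDL_RENDERER_PRESENTVSYNC, this call paces the loop to the monitor
    lineColoration.assign(totalValue, 0);
}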
On the other hand, it may be better to keep the if statement to prevent the massively high frame rates in case SDL doesn't limit them properly. In that case, the frame rate limiter should probably be set well above 60 fps, though (maybe 100-200 fps), to ensure that the frames are passed fast enough to SDL.
Implementing #1 is harder, as it actually requires you to select which sorting steps to render and which ones not to render. Therefore, in order to implement #1, you will probably need to keep the if statement mentioned above, so that rendering only occurs conditionally.
However, it does not seem meaningful to make the if statement dependent on elapsed time since the last render, because while waiting, the sorting will continue at full speed, and it is therefore possible that all of the sorting will be completed with only one frame of rendering. You are currently preventing this from happening by slowing down the sort with the line
this_thread::sleep_for(waitTime);
in the function selectionSort. But this does not seem like an ideal solution; it is rather a stopgap measure.
Instead of making the if condition dependent on time, it would be easier to make it dependent on the number of sorting steps since the last render. That way, you could, for example, program it such that every 5th sorting step gets rendered. In that case, there would be no need to additionally slow down the actual sorting, and your code would be simpler.
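A sketch of that idea, with a hypothetical stepsSinceRender counter and an arbitrary interval of 5 (render here being the version without the tick check):

int stepsSinceRender = 0; // hypothetical global: sorting steps since the last render

void renderEveryNthStep(void)
{
    if (++stepsSinceRender >= 5) { // only every 5th sorting step reaches the screen
        stepsSinceRender = 0;
        render();
    }
}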
As already described above, when implementing #1, you will also have to ensure that you do not drop the last rendering step, or that you at least render the last frame after the sorting is finished. Otherwise, the last frame will likely not display the completed sort, but rather an intermediate sorting step.
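For example (a sketch), one unconditional draw after the sort guarantees the completed state reaches the screen; renderFinal stands for a hypothetical variant of render without the skip condition:

selectionSort();
lineColoration.assign(totalValue, 0); // show the finished array in the default color
renderFinal();                        // hypothetical: render() with no tick/step check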
Implementing #2 is similar to implementing #1, except that you will have to use SDL_Delay (which is equivalent to this_thread::sleep_for) or SDL_AddTimer to determine when it is time to render the next sorting step.
Using SDL_AddTimer would require you to handle SDL Events. However, I would recommend that you do this anyway, because that way, you will also be able to handle SDL_QUIT events, so that you can close your program by closing the window. This would also make the line
this_thread::sleep_for( 5000ms );
at the end of your program unnecessary, because you could instead wait for the user to close the window, like this:
for (;;)
{
    SDL_Event event;
    SDL_WaitEvent( &event );
    if ( event.type == SDL_QUIT ) break;
}
However, it would probably be better if you restructured your entire program, so that you only have one message loop, which responds to both SDL Timer and SDL_QUIT events.
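A sketch of such a unified loop, assuming the sort is restructured into a hypothetical sortOneStep function that advances one step per call; an SDL timer pushes a user event every 20 ms (SDL_INIT_TIMER must be passed to SDL_Init for SDL_AddTimer to work):

Uint32 pushStepEvent(Uint32 interval, void *param)
{
    SDL_Event event;
    SDL_zero(event);
    event.type = SDL_USEREVENT;
    SDL_PushEvent(&event); // safe to call from the timer thread
    return interval;       // returning the interval keeps the timer running
}

void messageLoop(void)
{
    SDL_TimerID timer = SDL_AddTimer(20, pushStepEvent, NULL);
    for (;;) {
        SDL_Event event;
        SDL_WaitEvent(&event);
        if (event.type == SDL_QUIT)
            break;              // window closed: leave the loop, then clean up
        if (event.type == SDL_USEREVENT) {
            sortOneStep();      // hypothetical: advances the sort by one step
            render();
        }
    }
    SDL_RemoveTimer(timer);
}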
I'm trying to make a UWP game, and I came across a problem where my game is much slower in release mode than in debug mode.
My game draws a 3D view (Dungeon Master style) and has a UI part that draws over the 3D view. Because the 3D view can slow down to a small number of frames per second (FPS), I decided to make my game run the UI part always at 60 FPS.
Here is what the main game loop looks like, in some pseudocode:
Gameloop start
    Update game data
    Copy the finished 3D view from the buffer to the screen
    Draw the UI part
    3D view loop start
        If there is no more time to draw textures on the 3D view, exit the 3D view loop
        Draw one texture to the 3D view buffer
    3D view loop end --> 3D view loop start
Gameloop end --> Gameloop start
Here are the actual update and render functions:
void Dungeons_of_NargothMain::Update()
{
    m_ritonTimer.startTimer(static_cast<int>(E_RITON_TIMER::UI));
    m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::LABY_FRAME_COUNT);
    if (m_sceneRenderer->m_numberTotalOfTexturesToDraw == 0 ||
        m_sceneRenderer->m_numberTotalOfTexturesToDraw <= m_sceneRenderer->m_numberOfTexturesDrawn)
    {
        m_sceneRenderer->m_numberTotalOfTexturesToDraw = 150000;
        m_sceneRenderer->m_numberOfTexturesDrawn = 0;
    }
}

// RENDER
bool Dungeons_of_NargothMain::Render()
{
    //********************************//
    //      Render UI part here       //
    //********************************//

    //**********************************//
    // Render 3D view to 960X540 screen //
    //**********************************//
    m_sceneRenderer->setRenderTargetTo960X540Screen(); // 3D view buffer screen
    bool screen960GotFullDrawn = false;
    bool stillenoughTimeLeft = true;
    while (stillenoughTimeLeft && (!screen960GotFullDrawn))
    {
        stillenoughTimeLeft = m_ritonTimer.enoughTimeForOneMoreTexture((int)E_RITON_TIMER::UI);
        screen960GotFullDrawn = m_sceneRenderer->renderNextTextureTo960X540Screen();
    }
    if (screen960GotFullDrawn)
        m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::LABY_FRAME_COUNT);
    return true;
}
I removed what is not essential.
Here is the timer part (RitonTimer):
#pragma once
#include "pch.h"
#include <wrl.h>
#include "RitonTimer.h"

Dungeons_of_Nargoth::RitonTimer::RitonTimer()
{
    initTimer();
    if (!QueryPerformanceCounter(&m_qpcGameStartTime))
    {
        throw ref new Platform::FailureException();
    }
}

void Dungeons_of_Nargoth::RitonTimer::startTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = 0;
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::resetTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::frameCountPlusOne(int timerIndex)
{
    m_frameCount[timerIndex]++;
}

void Dungeons_of_Nargoth::RitonTimer::manageFramesPerSecond(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    if (m_qpcDeltaTime >= m_qpcFrequency.QuadPart)
    {
        m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
        m_frameCount[timerIndex] = 0;
        m_qpcStartTime[timerIndex] += m_qpcFrequency.QuadPart;
        if ((m_qpcStartTime[timerIndex] + m_qpcFrequency.QuadPart) < m_qpcNowTime.QuadPart)
            m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart - m_qpcFrequency.QuadPart;
    }
}

void Dungeons_of_Nargoth::RitonTimer::initTimer()
{
    if (!QueryPerformanceFrequency(&m_qpcFrequency))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcOneFrameTime = m_qpcFrequency.QuadPart / 60;
    m_qpc5PercentOfOneFrameTime = m_qpcOneFrameTime / 20;
    m_qpc10PercentOfOneFrameTime = m_qpcOneFrameTime / 10;
    m_qpc95PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc5PercentOfOneFrameTime;
    m_qpc90PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc80PercentOfOneFrameTime = m_qpcOneFrameTime - 2 * m_qpc10PercentOfOneFrameTime;
    m_qpc70PercentOfOneFrameTime = m_qpcOneFrameTime - 3 * m_qpc10PercentOfOneFrameTime;
    m_qpc60PercentOfOneFrameTime = m_qpc70PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc50PercentOfOneFrameTime = m_qpc60PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc45PercentOfOneFrameTime = m_qpc50PercentOfOneFrameTime - m_qpc5PercentOfOneFrameTime;
}

bool Dungeons_of_Nargoth::RitonTimer::enoughTimeForOneMoreTexture(int timerIndex)
{
    while (!QueryPerformanceCounter(&m_qpcNowTime));
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    if (m_qpcDeltaTime < m_qpc45PercentOfOneFrameTime)
        return true;
    else
        return false;
}
In debug mode the game's UI runs at 60 FPS, and the 3D view is about 1 FPS on my PC. But even there I'm not sure why I have to stop my texture drawing at 45% of one frame time and call present to get 60 FPS; if I wait longer I only get 30 FPS. (This value is set in enoughTimeForOneMoreTexture() in RitonTimer.)
In release mode it drops dramatically, to about 10 FPS for the UI part and 1 FPS for the 3D part. I have tried to find out why for the last 2 days and didn't find it.
Also, I have another small question: how do I tell Visual Studio that my game is actually a game and not an app? Or does Microsoft do the "switch" when I send my game to their store?
Here I have put my game on my OneDrive so everyone can download the source files, try to compile it, and see if you get the same results as me:
OneDrive link: https://1drv.ms/f/s!Aj7wxGmZTdftgZAZT5YAbLDxbtMNVg
Compile in either x64 Debug or x64 Release mode.
UPDATE:
I think I found the explanation for why my game is slower in release mode.
The CPU is probably not waiting for a drawing instruction to finish, but simply adds it to a list which is forwarded to the GPU at its own pace in a separate task (or maybe the GPU does that caching itself). That would explain it all.
My plan was to draw the UI first and then draw as many textures from the 3D view as possible until 95% of a 1/60th-second frame time had passed, and then present it to the swap chain. The UI would always be at 60 FPS, and the 3D view would be as fast as the system allows (also at 60 FPS if it can all be drawn in 95% of the frame time).
This didn't work because the driver probably queued all the draw instructions my 3D view issued (I was testing with 150000 big texture draw instructions for the 3D view) within one frame time, so of course the UI ended up as slow as the 3D view, or close to it.
That is also why, even in debug mode, I didn't get 60 FPS when I waited for 95% of a frame time; I had to stop at 45% of a frame time to get the 60 FPS I wanted for the UI.
I tested it with a lower value in release mode to verify that theory, and indeed I also get 60 FPS for the UI when I stop the drawing at only 15% of a frame time.
I thought it worked like this only in DirectX 12.
"How do i tell visual studio that my game is actually a game and not an app" - there's no difference, a game is an app.
I have your code running at 300-400 FPS now in debug mode.
Firstly, I commented out your code that checks whether you've got time to render another texture. Don't do that. Everything the player sees should render within a single frame. If your frame is taking more than 16 ms (with a 60 FPS target), look for expensive operations, or calls that are made repeatedly and possibly add up to some unexpected cost. Look for code that might be doing something repeatedly when it only needs to do it once per frame or per resize, etc.
So the issue is that you were rendering very large textures, and a lot of them. You want to avoid overdraw (rendering a pixel where you've already rendered a pixel). You can have a bit of overdraw, and that's sometimes preferable to being pedantic. But you were drawing 1000x2000 textures over and over again, so you were absolutely killing the pixel shader; it simply can't render that many pixels. I didn't bother looking at the code that tries to control texture rendering based on frame time remaining. For what you're trying to do, that's not helpful.
Inside your render method, comment out the while and if/else sections and use this to draw an array of your textures:
// set sprite dimensions
int w = 64, h = 64;
for (int y = 0; y < 16; y++)
{
    for (int x = 0; x < 16; x++)
    {
        m_sceneRenderer->renderNextTextureTo960X540Screen(x * 64, y * 64, w, h);
    }
}
and in RenderNextTextureToScreen(int x, int y, int w, int h) ..
m_squareBuffer.sizeX = w; // 1000;
m_squareBuffer.sizeY = h; // 2000;
m_squareBuffer.posX = x; // (float)(rand() % 1920);
m_squareBuffer.posY = y; // (float)(rand() % 1080);
See how this code renders much smaller textures: they are 64x64 and there's no overdraw.
And just be aware that the GPU isn't all-powerful. It can do a lot if you use it right, but if you throw crazy operations at it, you can grind it to a halt, just like the CPU. So try to render things that 'look normal', that you can imagine being in a game. You'll learn in time what's sensible and what isn't.
The most likely explanation for the code running slower in release mode is that your timing and rendering limiter code was broken. It wasn't working properly because the 3D view was running at 1 FPS, so who knows what its behaviour was. With the changes I've made, the program runs faster in release mode, as expected. Your clock code now shows 600-1600 FPS in release mode for me.
I'm trying to implement a Wa-Tor simulation where sharks eat fish, and I want to randomly spawn sharks. The program compiles, but I get "Setting Vertical Sync not supported".
I am working on Ubuntu 16.04. Before, I was working on something else and got the same error, but the window was still displayed; this time it is not. Any help?
EDIT: I have fixed up the code (I had one too many { in my loop), but now I am getting a "Segmentation Fault (core dumped)" error. I have changed my PNG to 8 bit, but that didn't help.
#include <SFML/Graphics.hpp>
#include <cstdlib> // rand, srand
#include <ctime>   // time
#include <vector>

int main()
{
    int n;
    int x;
    int y;
    sf::RenderWindow window(sf::VideoMode(800, 800), "SFML works!");
    // Set frame rate to 60 fps
    window.setFramerateLimit(60);
    srand(time(0));
    sf::Texture shark;
    shark.loadFromFile("image.png");
    std::vector<sf::Sprite> Fishes(n, sf::Sprite(shark));
    for (int n = 0; n < Fishes.size(); n++){
        Fishes[n].setOrigin(15, 15);
        Fishes[n].getPosition();
        Fishes[n].setPosition(x = rand() % 790 + 10, y = rand() % -10 - 50);
    }
    // run the program as long as the window is open
    while (window.isOpen())
    {
        // check all the window's events that were triggered since the last iteration of the loop
        sf::Event event;
        while (window.pollEvent(event))
        {
            // "close requested" event: we close the window
            if (event.type == sf::Event::Closed)
                window.close();
        }
        Fishes[n].setPosition(x, y += 1);
        Fishes[n].rotate(1);
        // clear the window with black color
        window.clear(sf::Color::Black);
        // draw everything here...
        // window.draw(...);
        window.draw(Fishes[n]);
        // end the current frame
        window.display();
    }
    return 0;
}
I would add a comment but I lack reputation. A segfault is caused by writing to or reading from illegal memory. In your case, I would try checking whether your image is being loaded properly.
I would also note that for loops with only one line in the body shouldn't use just one curly bracket; use both or none.
You are still missing a loop over your fishes in your rendering loop.
Your first loop declares its own loop variable n, which shadows the outer, uninitialized n; using that outer n afterwards results in undefined behavior. Fix it, probably by adding another for loop over the fishes where you use n the second time, in your rendering while-loop.
Specifically, when you come to this line:
Fishes[n].setPosition(x, y+=1);
your variable n is not any kind of loop variable. Even worse, it's totally random; you have not set any value. It's the int n; from the first line after main(). If you delete that line (the first after main), you will see what is wrong.
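A sketch of that extra loop, placed between window.clear and window.display inside the rendering loop (this also assumes n was given a real value, e.g. 10, before constructing the vector):

for (std::size_t i = 0; i < Fishes.size(); i++)
{
    Fishes[i].move(0, 1);   // drop straight down one pixel per frame
    Fishes[i].rotate(1);
    window.draw(Fishes[i]);
}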
I'm learning SDL and I have a frustrating problem. Code is below.
Even though there is a loop that keeps the program alive, when I load an image and change the x value of the source rect to animate it, the loaded image disappears after exactly 15 seconds. This does not happen with static images, only with animations. I'm sure there is a simple thing I'm missing, but I can't see it.
void update(){
    rect1.x = 62 * int((SDL_GetTicks() / 100) % 12);
    /* 62 is the width of a frame, 12 is the number of frames */
}

void shark(){
    surface = IMG_Load("s1.png");
    if (surface != 0){
        texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);
    }
    rect1.y = 0;
    rect1.h = 90;
    rect1.w = 60;
    rect2.x = 0;
    rect2.y = 0;
    rect2.h = rect1.h + 30; // enlarging the image
    rect2.w = rect1.w + 30;
    SDL_RenderCopy(renderer, texture, &rect1, &rect2);
}

void render(){
    SDL_SetRenderDrawColor(renderer, 0, 0, 100, 150);
    SDL_RenderPresent(renderer);
    SDL_RenderClear(renderer);
}
and in main
update();
shark();
render();
The SDL_image header is included and linked, and the DLL exists. Could the DLL be broken?
I left out rest of the program to keep it simple. If this is not enough, I can post the whole thing.
Every time you call the shark function, it loads another copy of the texture. With that in a loop like you have it, you will run out of video memory quickly (unless you are calling SDL_DestroyTexture after every frame, which you have not indicated), at which point you will no longer be able to load textures. Apparently this takes about fifteen seconds for you.
If you're going to use the same image over and over, then just load it once, before your main loop.
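A sketch of that split, with a hypothetical loadShark that runs once before the main loop; shark then only copies the already-created texture each frame:

void loadShark() // hypothetical name; call exactly once, before the main loop
{
    surface = IMG_Load("s1.png");
    if (surface != 0){
        texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);
    }
    rect1.y = 0; rect1.h = 90; rect1.w = 60; // source frame; rect1.x is animated in update()
    rect2.x = 0; rect2.y = 0;
    rect2.h = rect1.h + 30;                  // enlarged destination
    rect2.w = rect1.w + 30;
}

void shark() // call every frame; no loading or allocation happens here
{
    SDL_RenderCopy(renderer, texture, &rect1, &rect2);
}

A single SDL_DestroyTexture(texture) after the main loop then releases the texture.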
Consider this line: int ( (SDL_GetTicks() / 100) % 12);
SDL_GetTicks() returns the number of milliseconds that have elapsed since the library initialized (https://wiki.libsdl.org/SDL_GetTicks). So you're updating with the TOTAL amount of time since your application started, not the time since the last frame.
You're supposed to keep track of the last time and update the application with how much time has passed since the last update.
Uint32 currentTime = SDL_GetTicks();
int deltaTime = (int)(currentTime - lastTime);
lastTime = currentTime; // lastTime declared previously
update(deltaTime);
shark();
render();
Edit: Benjamin is right, the update line works fine.
Still, using deltaTime is good advice. In a game, for instance, you won't use the total time since the beginning of the application; you'll probably need to keep your own counter of how much time has passed (since you started an animation).
But there's nothing wrong with that line for your program anyhow.
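For instance (a sketch), an animation-local start time keeps the frame index relative to when the animation began rather than to application start:

Uint32 animationStart = SDL_GetTicks(); // reset this whenever a new animation begins

void update()
{
    Uint32 elapsed = SDL_GetTicks() - animationStart;
    rect1.x = 62 * int((elapsed / 100) % 12); // frame index counted from the animation start
}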
I am using SFML 2.1 in Code::Blocks and I can't figure out how to use vectors to make clones of my asteroid sprite. It keeps saying that asteroid_V hasn't been declared, and a warning box pops up saying the file is "using characters that are illegal in the selected coding" and that they "were changed to protect [me] from losing data".
The objective of this program is to continuously create asteroid sprites that spawn at random points above the screen before dropping straight down. There were other sprites and aspects in the program, but I removed them from this post to properly condense it. This appears to be the only problem after all.
int n;
int main()
{
    RenderWindow window;
    window.setFramerateLimit(30);
    RenderWindow mainMenu;
    srand(time(0));
    Texture asteroid_Pic;
    asteroid_Pic.loadFromFile("Asteroid.png");
    std::vector<sf::Sprite> asteroid(n, Sprite(asteroid_Pic));
    for (int i = 0; i < asteroid.size(); i++){
        asteroid[n].setOrigin(15, 15);
        asteroid[n].getPosition();
        asteroid[n].setPosition(x = rand() % 790 + 10, y = rand() % -10 - 50);
    }
    // run the program as long as the window is open
    while (window.isOpen())
    {
        // check all the window's events that were triggered since the last iteration of the loop
        Event event;
        while (window.pollEvent(event))
        {
            // "close requested" event: we close the window
            if (event.type == Event::Closed){
                window.close();
            }
            asteroid[n].setPosition(x, y += 1);
            asteroid[n].rotate(1);
            // clear the window with black color
            window.clear(Color::Black);
            // draw everything here...
            // window.draw(...);
            window.draw(player1);
            window.draw(asteroid[n]);
            // end the current frame
            window.display();
        }
        return 0;
    }
You have another while (window.isOpen()) inside your main loop. Your program enters the main loop and then never gets out of that inner loop; it never gets to draw even once.
You need to get rid of the inner while (window.isOpen()) loop and find another way.
Although the original question was about timers, you can find a basic explanation of a game loop here. You have to handle time in your loop if you want to do something (move sprites, create new in-game entities) based on time.
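One workable structure (a sketch, not the only way): each pass of the outer loop first drains pending events, then updates and draws every asteroid exactly once before displaying:

while (window.isOpen())
{
    // 1. Drain all pending events; nothing is drawn in this inner loop
    Event event;
    while (window.pollEvent(event))
    {
        if (event.type == Event::Closed)
            window.close();
    }

    // 2. Update and draw once per frame, outside the event loop
    window.clear(Color::Black);
    for (std::size_t i = 0; i < asteroid.size(); i++)
    {
        asteroid[i].move(0, 1);   // fall straight down
        asteroid[i].rotate(1);
        window.draw(asteroid[i]);
    }
    window.display();
}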