Flickering during sprite sheet animation - C++

I am animating my sprite, which looks like this:
I made a variable that increments by 64 every time I press W, since each frame of the sprite is 64 x 64. It works, but there is blinking. Here is my code (it is in the draw method, by the way):
if (sf::Keyboard::isKeyPressed(sf::Keyboard::W)){
    animator += 64;
}
else{
    animator = 0;
}
if (animator > 512){
    animator = 0;
}
playerSprite.setTextureRect(sf::IntRect(0, animator, 64, 64));
window.draw(playerSprite);
Any help would be appreciated, thanks.

You shouldn't implement the frame change this way: as written, the change depends on the framerate, not on the elapsed time.
You should keep a timer and change the frame every FRAME_DELAY interval.
For example, every 200 ms.
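A minimal sketch of that, reusing the question's animator and sprite, and assuming an sf::Clock that persists between frames (here called animClock, e.g. a class member); the 200 ms delay is just an example value:

sf::Clock animClock;                     // persists between frames (e.g. a class member)
const sf::Time frameDelay = sf::milliseconds(200);

// In the draw/update method:
if (sf::Keyboard::isKeyPressed(sf::Keyboard::W)){
    if (animClock.getElapsedTime() >= frameDelay){
        animator += 64;                  // advance to the next 64x64 frame
        if (animator > 512)
            animator = 0;                // wrap around after the last frame
        animClock.restart();
    }
}
else{
    animator = 0;                        // back to the idle frame
}
playerSprite.setTextureRect(sf::IntRect(0, animator, 64, 64));
window.draw(playerSprite);

This way the frame advances at most once every 200 ms, no matter how often the draw method runs.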

Related

Make Windows MFC Game Loop Faster

I am creating a billiards game and am having major problems with tunneling at high speeds. I figured using linear interpolation for animations would help quite a bit, but the problem persists. To see this, I drew a circle at each of the last few positions an object occupied. At the highest velocity the ball can travel, the path looks like this:
Surely these increments of advancement are much too large, even after using linear interpolation.
At each frame, every object's location is updated based on the amount of time since the window was last drawn. I noticed that the average time for the window to be redrawn is somewhere between 70 and 80 ms. I would really like this game to work at 60 FPS, so this is about 4 or 5 times longer than what I am looking for.
Is there a way to change how often the window is redrawn? Here is how I am currently redrawing the screen:
#include "pch.h"
#include "framework.h"
#include "ChildView.h"
#include "DoubleBufferDC.h"
const int FrameDuration = 16;
void CChildView::OnPaint()
{
    CPaintDC paintDC(this);       // device context for painting
    CDoubleBufferDC dc(&paintDC); // double-buffered device context to avoid flicker
    Graphics graphics(dc.m_hDC);  // Create GDI+ graphics context
    mGame.OnDraw(&graphics);
    if (mFirstDraw)
    {
        mFirstDraw = false;
        SetTimer(1, FrameDuration, nullptr);

        LARGE_INTEGER time, freq;
        QueryPerformanceCounter(&time);
        QueryPerformanceFrequency(&freq);

        mLastTime = time.QuadPart;
        mTimeFreq = double(freq.QuadPart);
    }

    LARGE_INTEGER time;
    QueryPerformanceCounter(&time);
    long long diff = time.QuadPart - mLastTime;
    double elapsed = double(diff) / mTimeFreq;
    mLastTime = time.QuadPart;
    mGame.Update(elapsed);
}
void CChildView::OnTimer(UINT_PTR nIDEvent)
{
    RedrawWindow(NULL, NULL, RDW_UPDATENOW);
    Invalidate();
    CWnd::OnTimer(nIDEvent);
}
EDIT: Upon request, here is how the actual drawing is done:
void CGame::OnDraw(Gdiplus::Graphics* graphics)
{
    // Draw the background
    graphics->DrawImage(mBackground.get(), 0, 0,
        mBackground->GetWidth(), mBackground->GetHeight());
    mTable->Draw(graphics);
    Pen pen(Color(255, 128, 0), 1);    // GDI+ color channels are bytes (0-255)
    Pen penW(Color(255, 255, 255), 1);
    this->mCue->Draw(graphics);
    for (shared_ptr<CBall> ball : this->mBalls)
    {
        ball->Draw(graphics);
    }
    for (shared_ptr<CBall> ball : this->mSunkenSolidBalls)
    {
        ball->Draw(graphics);
    }
    for (shared_ptr<CBall> ball : this->mSunkenStripedBalls)
    {
        ball->Draw(graphics);
    }
    this->mPowerBar->Draw(graphics);
}
CGame::OnDraw calls Draw on all of the game items, which draw themselves on the graphics object they receive as an argument.
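Not part of the original thread, but since tunneling is the core problem: a common remedy, shown here as a hedged sketch, is to split a long elapsed interval into fixed substeps, so collision checks run several times per redraw instead of once per 70-80 ms. MaxStep is an assumed value:

const double MaxStep = 1.0 / 120.0; // assumed maximum substep, in seconds

void UpdateWithSubsteps(CGame& game, double elapsed)
{
    // Advance the simulation in small fixed slices instead of one big jump,
    // so a fast-moving ball can never skip over a ball-sized distance
    // between two collision tests.
    while (elapsed > 0)
    {
        double step = (elapsed > MaxStep) ? MaxStep : elapsed;
        game.Update(step);
        elapsed -= step;
    }
}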

Why is my UWP game slower in release than in debug mode?

I'm trying to make a UWP game, and I came across a problem where my game is much slower in release mode than in debug mode.
My game draws a 3D view (Dungeon Master style) and has a UI part that draws over the 3D view. Because the 3D view can slow down to a low number of frames per second (FPS), I decided to have my game always run the UI part at 60 FPS.
Here is what the main game loop looks like, in pseudocode:
Gameloop start
    Update game data
    Copy the finished 3D view from its buffer to the screen
    Draw the UI part
    3D view loop start
        If there is no more time to draw textures on the 3D view, exit the 3D view loop
        Draw one texture to the 3D view buffer
    3D view loop end --> 3D view loop start
Gameloop end --> Gameloop start
Here are the actual update and render functions:
void Dungeons_of_NargothMain::Update()
{
    m_ritonTimer.startTimer(static_cast<int>(E_RITON_TIMER::UI));
    m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::LABY_FRAME_COUNT);

    if (m_sceneRenderer->m_numberTotalOfTexturesToDraw == 0 ||
        m_sceneRenderer->m_numberTotalOfTexturesToDraw <= m_sceneRenderer->m_numberOfTexturesDrawn)
    {
        m_sceneRenderer->m_numberTotalOfTexturesToDraw = 150000;
        m_sceneRenderer->m_numberOfTexturesDrawn = 0;
    }
}
// RENDER
bool Dungeons_of_NargothMain::Render()
{
    //********************************//
    //       Render UI part here      //
    //********************************//

    //**********************************//
    // Render 3D view to 960X540 screen //
    //**********************************//
    m_sceneRenderer->setRenderTargetTo960X540Screen(); // 3D view buffer screen

    bool screen960GotFullDrawn = false;
    bool stillEnoughTimeLeft = true;
    while (stillEnoughTimeLeft && (!screen960GotFullDrawn))
    {
        stillEnoughTimeLeft = m_ritonTimer.enoughTimeForOneMoreTexture((int)E_RITON_TIMER::UI);
        screen960GotFullDrawn = m_sceneRenderer->renderNextTextureTo960X540Screen();
    }
    if (screen960GotFullDrawn)
        m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::LABY_FRAME_COUNT);
    return true;
}
I removed what is not essential.
Here is the timer part (RitonTimer):
#pragma once
#include "pch.h"
#include <wrl.h>
#include "RitonTimer.h"

Dungeons_of_Nargoth::RitonTimer::RitonTimer()
{
    initTimer();
    if (!QueryPerformanceCounter(&m_qpcGameStartTime))
    {
        throw ref new Platform::FailureException();
    }
}

void Dungeons_of_Nargoth::RitonTimer::startTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = 0;
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::resetTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::frameCountPlusOne(int timerIndex)
{
    m_frameCount[timerIndex]++;
}

void Dungeons_of_Nargoth::RitonTimer::manageFramesPerSecond(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    if (m_qpcDeltaTime >= m_qpcFrequency.QuadPart)
    {
        m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
        m_frameCount[timerIndex] = 0;
        m_qpcStartTime[timerIndex] += m_qpcFrequency.QuadPart;
        if ((m_qpcStartTime[timerIndex] + m_qpcFrequency.QuadPart) < m_qpcNowTime.QuadPart)
            m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart - m_qpcFrequency.QuadPart;
    }
}

void Dungeons_of_Nargoth::RitonTimer::initTimer()
{
    if (!QueryPerformanceFrequency(&m_qpcFrequency))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcOneFrameTime = m_qpcFrequency.QuadPart / 60;
    m_qpc5PercentOfOneFrameTime = m_qpcOneFrameTime / 20;
    m_qpc10PercentOfOneFrameTime = m_qpcOneFrameTime / 10;
    m_qpc95PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc5PercentOfOneFrameTime;
    m_qpc90PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc80PercentOfOneFrameTime = m_qpcOneFrameTime - 2 * m_qpc10PercentOfOneFrameTime;
    m_qpc70PercentOfOneFrameTime = m_qpcOneFrameTime - 3 * m_qpc10PercentOfOneFrameTime;
    m_qpc60PercentOfOneFrameTime = m_qpc70PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc50PercentOfOneFrameTime = m_qpc60PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc45PercentOfOneFrameTime = m_qpc50PercentOfOneFrameTime - m_qpc5PercentOfOneFrameTime;
}

bool Dungeons_of_Nargoth::RitonTimer::enoughTimeForOneMoreTexture(int timerIndex)
{
    while (!QueryPerformanceCounter(&m_qpcNowTime));
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    return m_qpcDeltaTime < m_qpc45PercentOfOneFrameTime;
}
In debug mode the game's UI runs at 60 FPS, and the 3D view at about 1 FPS on my PC. But even there I'm not sure why I have to stop my texture drawing at 45% of one frame time and call Present to get the 60 FPS; if I wait longer, I only get 30 FPS. (This value is set in enoughTimeForOneMoreTexture() in RitonTimer.)
In release mode it drops dramatically, to something like 10 FPS for the UI part and 1 FPS for the 3D part. I have tried to find out why for the last 2 days, but didn't find it.
I also have another small question: how do I tell Visual Studio that my game is actually a game and not an app? Or does Microsoft do the "switch" when I send my game to their store?
Here I have put my game on my OneDrive so everyone can download the source files, try to compile it, and see if you get the same results as me:
OneDrive link: https://1drv.ms/f/s!Aj7wxGmZTdftgZAZT5YAbLDxbtMNVg
Compile in either x64 Debug or x64 Release mode.
UPDATE:
I think I found the explanation for why my game is slower in release mode.
The CPU is probably not waiting for a drawing instruction to finish; it simply adds it to a list that is forwarded to the GPU at its own pace in a separate task (or maybe the GPU does that caching itself). That would explain it all.
My plan was to draw the UI first and then draw as many textures from the 3D view as possible until 95% of a 1/60th-second frame time had passed, and then present it to the swap chain. The UI would always be at 60 FPS and the 3D view would be as fast as the system allows (also at 60 FPS if it can all be drawn in 95% of the frame time).
This didn't work because it probably cached all the draw instructions my 3D view issued (I was testing with 150000 big texture draw instructions for the 3D view) within one frame time, so of course the UI ended up as slow as the 3D view, or close to it.
That is also why, even in debug mode, I didn't get 60 FPS when I waited for 95% of a frame time; I had to stop at 45% of a frame time to get the 60 FPS I wanted for the UI.
I tested with a lower value in release mode to verify that theory, and indeed I also get 60 FPS for the UI when I stop the drawing at only 15% of a frame time.
I thought it worked like this only in DirectX 12.
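A minimal sketch (not from the original post) of how that buffering could be confirmed with a Direct3D 11 event query; device and context stand in for the app's existing ID3D11Device and ID3D11DeviceContext:

// The query completes only when the GPU has actually executed everything
// submitted before End(), so polling it shows how far behind the GPU is.
Microsoft::WRL::ComPtr<ID3D11Query> query;
D3D11_QUERY_DESC desc = {};
desc.Query = D3D11_QUERY_EVENT;
device->CreateQuery(&desc, &query);

context->End(query.Get()); // mark this point in the command stream
while (context->GetData(query.Get(), nullptr, 0, 0) == S_FALSE)
{
    // S_FALSE: the GPU is still working through the queued draw calls,
    // even though the CPU-side Draw() calls returned long ago.
}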
"How do i tell visual studio that my game is actually a game and not an app" - there's no difference, a game is an app.
I have your code running at 300-400 FPS now in debug mode.
Firstly, I commented out your code that checks whether you've got time to render another texture. Don't do that. Everything the player sees should be rendered within a single frame. If your frame is taking more than 16 ms (with a 60 FPS target), look for expensive operations, or calls that are made repeatedly and possibly add up to some unexpected cost. Look for code that does something repeatedly when it only needs to do it once per frame or per resize, etc.
So the issue is that you were rendering very large textures, and a lot of them. You want to avoid overdraw (rendering a pixel where you've already rendered a pixel). You can have a bit of overdraw, and that's sometimes preferable to being pedantic. But you were drawing 1000x2000 textures over and over again, so you were absolutely killing the pixel shader; it just can't render that many pixels. I didn't bother looking at the code that tries to control texture rendering based on the frame time remaining. For what you're trying to do, that's not helpful.
Inside your render method, comment out the while and if/else sections and use this to draw an array of your textures:
// set sprite dimensions
int w = 64, h = 64;
for (int y = 0; y < 16; y++)
{
    for (int x = 0; x < 16; x++)
    {
        m_sceneRenderer->renderNextTextureTo960X540Screen(x * 64, y * 64, w, h);
    }
}
and in renderNextTextureTo960X540Screen(int x, int y, int w, int h):
m_squareBuffer.sizeX = w; // 1000;
m_squareBuffer.sizeY = h; // 2000;
m_squareBuffer.posX = x; // (float)(rand() % 1920);
m_squareBuffer.posY = y; // (float)(rand() % 1080);
See how this code renders much smaller textures: the textures are 64x64 and there's no overdraw.
And just be aware that the GPU isn't all-powerful. It can do a lot if you use it right, but if you throw crazy operations at it, you can grind it to a halt, just like the CPU. So try to render things that 'look normal', that you can imagine being in a game. You'll learn in time what's sensible and what isn't.
The most likely explanation for the code running slower in release mode is that your timing and rendering limiter code was broken. It wasn't working properly because the 3D view was running at 1 FPS, so who knows what its behaviour was. With the changes I've made, the program runs faster in release mode, as expected. Your clock code is showing 600-1600 FPS in release mode now for me.

How to make an Smooth Movement in C++, SFML

I have a problem doing smooth movement in C++ with SFML.
Can someone suggest an algorithm to smoothly move a shape/sprite by 15 px (the sprite/shape is 15 px) when a button is pressed? Thanks.
Because many other programmers might want to know this, here it is; but next time, ask a more precise question.
Here's my way to make a smooth movement:
Take your initial movement, let's say (25, 15) pixels per second.
Make a loop like this:
void GameEngine::GameLoop()
{
    sf::Clock timer;
    sf::Time tickRate;
    sf::Vector2f movement(25.0, 15.0); // Your movement vector
    while (/* whatever */) {
        tickRate = timer.restart();
        // Move your shape depending on the time elapsed between two frames
        yourShape.move(movement * ((float)tickRate.asMilliseconds() / 1000));
        yourWindow.clear();
        yourWindow.draw(yourShape);
        yourWindow.display();
    }
}
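The question actually asked for a discrete, smooth 15 px step per key press rather than continuous movement. A minimal self-contained sketch of that variant; the 60 px/s speed and the Right arrow key are made-up choices for the example:

#include <SFML/Graphics.hpp>
#include <algorithm>

int main()
{
    sf::RenderWindow window(sf::VideoMode(320, 240), "Smooth step");
    sf::RectangleShape shape(sf::Vector2f(15.f, 15.f));
    const float speed = 60.f; // pixels per second (assumed value)
    float remaining = 0.f;    // distance still left to cover
    sf::Clock timer;

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
            // Each key press queues exactly one 15 px step
            if (event.type == sf::Event::KeyPressed &&
                event.key.code == sf::Keyboard::Right && remaining <= 0.f)
                remaining = 15.f;
        }

        // Advance a little each frame until the full 15 px is covered
        float dt = timer.restart().asSeconds();
        if (remaining > 0.f)
        {
            float step = std::min(remaining, speed * dt);
            shape.move(step, 0.f);
            remaining -= step;
        }

        window.clear();
        window.draw(shape);
        window.display();
    }
    return 0;
}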

SDL image disappears after 15 seconds

I'm learning SDL and I have a frustrating problem. The code is below.
Even though there is a loop that keeps the program alive, when I load an image and change the x value of the source rect to animate it, the loaded image disappears after exactly 15 seconds. This does not happen with static images, only with animations. I'm sure there is a simple thing I'm missing, but I can't see it.
void update(){
    rect1.x = 62 * int ( (SDL_GetTicks() / 100) % 12);
    /* 62 is the width of a frame, 12 is the number of frames */
}
void shark(){
    surface = IMG_Load("s1.png");
    if (surface != 0){
        texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);
    }
    rect1.y = 0;
    rect1.h = 90;
    rect1.w = 60;
    rect2.x = 0;
    rect2.y = 0;
    rect2.h = rect1.h + 30; // enlarging the image
    rect2.w = rect1.w + 30;
    SDL_RenderCopy(renderer, texture, &rect1, &rect2);
}
void render(){
    SDL_SetRenderDrawColor(renderer, 0, 0, 100, 150);
    SDL_RenderPresent(renderer);
    SDL_RenderClear(renderer);
}
and in main
update();
shark();
render();
The SDL_image header is included and linked, and the DLL exists. Could the DLL be broken?
I left out the rest of the program to keep it simple. If this is not enough, I can post the whole thing.
Every time you call the shark function, it loads another copy of the texture. With that in a loop like you have it, you will run out of video memory quickly (unless you are calling SDL_DestroyTexture after every frame, which you have not indicated), at which point you will no longer be able to load textures. Apparently this takes about fifteen seconds for you.
If you're going to use the same image over and over, then just load it once, before your main loop.
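For instance, a sketch of that restructuring, assuming a boolean named running controls the question's main loop:

SDL_Surface* surface = IMG_Load("s1.png"); // load ONCE, before the loop
if (surface != 0){
    texture = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_FreeSurface(surface);
}
while (running){
    update();                                      // advances rect1.x as before
    SDL_SetRenderDrawColor(renderer, 0, 0, 100, 150);
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, texture, &rect1, &rect2);
    SDL_RenderPresent(renderer);
}
SDL_DestroyTexture(texture);                       // clean up once at shutdown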
Regarding this line: int ( (SDL_GetTicks() / 100) % 12);
SDL_GetTicks() returns the number of milliseconds that have elapsed since the library initialized (https://wiki.libsdl.org/SDL_GetTicks). So you're updating with the TOTAL amount of time since your application started, not the time since the last frame.
You're supposed to keep track of the last time and update the application with how much time has passed since the last update.
Uint32 currentTime = SDL_GetTicks();
int deltaTime = (int)(currentTime - lastTime);
lastTime = currentTime; // declared previously
update(deltaTime);
shark();
render();
Edit: Benjamin is right, the update line works fine.
Still, using deltaTime is good advice. In a game, for instance, you won't use the total time since the beginning of the application; you'll probably need to keep your own counter of how much time has passed (since you started an animation).
But there's nothing wrong with that line for your program anyhow.
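For instance, a tiny sketch of such a counter (animTime is a made-up name), fed by the deltaTime from above:

Uint32 animTime = 0; // reset to 0 whenever the animation (re)starts

void update(int deltaTime){
    animTime += deltaTime;
    rect1.x = 62 * int ( (animTime / 100) % 12 ); // same frame math as before
}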

Scaling items and rendering

I am making a small game in C++11 with Qt. However, I am having some issues with scaling.
The background of my map is an image. Each pixel of that image represents a tile on which the protagonist can walk and on which enemies/healthpacks can be placed.
To set the size of a tile, I calculate the maximum size like so (where imageRows and imageCols are the numbers of pixels along the y- and x-axes of the background image):
QRect rec = QApplication::desktop()->screenGeometry();
int maxRows = rec.height() / imageRows;
int maxCols = rec.width() / imageCols;
if (maxRows < maxCols) {
    pixSize = maxRows;
} else {
    pixSize = maxCols;
}
Now that I have the size of a tile, I add the background-image to the scene (in GameScene ctor, extends from QGraphicsScene):
auto background = new QGraphicsPixmapItem();
background->setPixmap(QPixmap(":/images/map.png").scaledToWidth(imageCols * pixSize));
this->addItem(background);
Then for adding enemies (they extend from a QGraphicsPixMapItem):
Enemy *enemy = new Enemy();
enemy->setPixmap(QPixmap(":/images/enemy.png").scaledToWidth(pixSize));
scene->addItem(enemy);
This all works fine, except that on large maps images get scaled down once (to a height of, let's say, 2 pixels), and when zooming in on such an item it does not get any clearer but stays one big pixel. Here is an example: the left one is from a small map where pixSize is pretty big, the second one from a map where pixSize is pretty small.
So how should I solve this? In general, having a pixSize based on the screen resolution is not really useful, since the QGraphicsScene is resized to fit the QGraphicsView it is in, so in the end the view still determines how big the pixels show on the screen.
MyGraphicsView w;
w.setScene(gameScene);
w.fitInView(gameScene->sceneRect(), Qt::KeepAspectRatio);
I think you might want to look at the chip example from Qt (link to Qt5, but it also works for Qt4).
The thing that might help you is in the chip.cpp file, in the paint method:
const qreal lod = option->levelOfDetailFromTransform(painter->worldTransform());
where painter is simply a QPainter and option is of type QStyleOptionGraphicsItem. This quantity gives you a measure of the current zoom level of your QGraphicsView, and thus, as in the example, you can adjust what is drawn at which level, e.g.
if (lod < 0.2) {
    if (lod < 0.125) {
        painter->fillRect(QRectF(0, 0, 110, 70), fillColor);
        return;
    }
    QBrush b = painter->brush();
    painter->setBrush(fillColor);
    painter->drawRect(13, 13, 97, 57);
    painter->setBrush(b);
    return;
}
[...]
if (lod >= 2) {
    QFont font("Times", 10);
    font.setStyleStrategy(QFont::ForceOutline);
    painter->setFont(font);
    painter->save();
    painter->scale(0.1, 0.1);
    painter->drawText(170, 180, QString("Model: VSC-2000 (Very Small Chip) at %1x%2").arg(x).arg(y));
    painter->drawText(170, 200, QString("Serial number: DLWR-WEER-123L-ZZ33-SDSJ"));
    painter->drawText(170, 220, QString("Manufacturer: Chip Manufacturer"));
    painter->restore();
}
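Applied to the question's setup, a hedged sketch might look like the following; it assumes Enemy can derive from QGraphicsItem directly (instead of QGraphicsPixmapItem) and keep its full-resolution pixmap, so the view transform does the scaling and the item stays sharp when zoomed in:

#include <QGraphicsItem>
#include <QPainter>
#include <QPixmap>
#include <QStyleOptionGraphicsItem>

class Enemy : public QGraphicsItem
{
public:
    explicit Enemy(const QPixmap& pixmap) : mPixmap(pixmap) {}

    QRectF boundingRect() const override
    {
        return QRectF(0, 0, mPixmap.width(), mPixmap.height());
    }

    void paint(QPainter* painter, const QStyleOptionGraphicsItem* option,
               QWidget*) override
    {
        const qreal lod =
            option->levelOfDetailFromTransform(painter->worldTransform());
        if (lod < 0.2) {
            // Zoomed far out: a flat rectangle is detail enough.
            painter->fillRect(boundingRect(), Qt::darkRed);
            return;
        }
        // Zoomed in: draw the unscaled pixmap and let the view scale it,
        // instead of baking a tiny scaledToWidth() copy into the item.
        painter->drawPixmap(boundingRect().toRect(), mPixmap);
    }

private:
    QPixmap mPixmap;
};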
Does this help?