Spritesheet animation with scaled frames - C++

In order to create an animation in cocos2d-x 3.2 I do this:
SpriteFrameCache* cache = SpriteFrameCache::getInstance();
Vector<SpriteFrame*> animFrames(15);
for (int i = 1; i <= 7; ++i)
{
    SpriteFrame* frame = cache->getSpriteFrameByName(String::createWithFormat("%d.png", i)->getCString());
    animFrames.pushBack(frame);
}
// Note the float literal: plain 1 / animFrames.size() is integer division and yields a delay of 0.
auto animation = Animation::createWithSpriteFrames(animFrames, 1.0f / animFrames.size());
auto animate = Animate::create(animation);
pSprite->runAction(animate);
But now I need to scale some frames by -1 on the x-axis to create a mirrored image. SpriteFrame has no scale method, and I can't scale pSprite itself because only some of the frames should be mirrored. How can I solve this problem?

You have a pretty weird situation :)
You can schedule an update selector on the sprite and set flipX to true/false based on your desired conditions. That's my personal preference.
You can't hack SpriteFrame that way, but you can use RenderTexture (http://www.cocos2d-x.org/reference/native-cpp/V3.0alpha0/d9/ddc/classcocos2d_1_1_render_texture.html) to flip the desired sprites into a new texture, basically generating a new spritesheet on the fly. That said, it's a bad idea in practice.
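A minimal sketch of a variation on the first suggestion that avoids a manual update selector: interleave FlipX actions (the same visual effect as flipping per frame) with the animation in a Sequence. It assumes, purely for illustration, that frames 1-3 stay normal and frames 4-7 are the ones to mirror:
auto cache = SpriteFrameCache::getInstance();
Vector<SpriteFrame*> normalFrames, mirroredFrames;
for (int i = 1; i <= 3; ++i)
    normalFrames.pushBack(cache->getSpriteFrameByName(StringUtils::format("%d.png", i)));
for (int i = 4; i <= 7; ++i) // hypothetical split between normal and mirrored frames
    mirroredFrames.pushBack(cache->getSpriteFrameByName(StringUtils::format("%d.png", i)));

float delay = 1.0f / 7.0f; // keep the original per-frame timing
auto seq = Sequence::create(
    Animate::create(Animation::createWithSpriteFrames(normalFrames, delay)),
    FlipX::create(true),   // same visual effect as scaleX = -1
    Animate::create(Animation::createWithSpriteFrames(mirroredFrames, delay)),
    FlipX::create(false),  // restore orientation before the loop repeats
    nullptr);
pSprite->runAction(RepeatForever::create(seq));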

Why is my UWP game slower in release than in debug mode?

I'm trying to make a UWP game, and I came across a problem where my game is much slower in release mode than in debug mode.
My game draws a 3D view (Dungeon Master style) and has a UI part that draws over the 3D view. Because the 3D view can slow down to a low number of frames per second (FPS), I decided to have the UI part always run at 60 FPS.
Here is how the main game loop looks, in pseudocode:
Gameloop start
    Update game data
    Copy actual finished 3D view from buffer to screen
    Draw UI part
    3D view loop start
        If no more time to draw more textures on the 3D view, exit 3D view loop
        Draw one texture to 3D view buffer
    3D view loop end --> 3D view loop start
Gameloop end --> Gameloop start
Here are the actual update and render functions:
void Dungeons_of_NargothMain::Update()
{
    m_ritonTimer.startTimer(static_cast<int>(E_RITON_TIMER::UI));
    m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::LABY_FRAME_COUNT);

    if (m_sceneRenderer->m_numberTotalOfTexturesToDraw == 0 ||
        m_sceneRenderer->m_numberTotalOfTexturesToDraw <= m_sceneRenderer->m_numberOfTexturesDrawn)
    {
        m_sceneRenderer->m_numberTotalOfTexturesToDraw = 150000;
        m_sceneRenderer->m_numberOfTexturesDrawn = 0;
    }
}
// RENDER
bool Dungeons_of_NargothMain::Render()
{
    //********************************//
    //      Render UI part here       //
    //********************************//

    //**********************************//
    // Render 3D view to 960x540 screen //
    //**********************************//
    m_sceneRenderer->setRenderTargetTo960X540Screen(); // 3D view buffer screen

    bool screen960GotFullDrawn = false;
    bool stillEnoughTimeLeft = true;
    while (stillEnoughTimeLeft && (!screen960GotFullDrawn))
    {
        stillEnoughTimeLeft = m_ritonTimer.enoughTimeForOneMoreTexture((int)E_RITON_TIMER::UI);
        screen960GotFullDrawn = m_sceneRenderer->renderNextTextureTo960X540Screen();
    }
    if (screen960GotFullDrawn)
        m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::LABY_FRAME_COUNT);

    return true;
}
I removed what is not essential.
Here is the timer part (RitonTimer):
#pragma once
#include "pch.h"
#include <wrl.h>
#include "RitonTimer.h"

Dungeons_of_Nargoth::RitonTimer::RitonTimer()
{
    initTimer();
    if (!QueryPerformanceCounter(&m_qpcGameStartTime))
    {
        throw ref new Platform::FailureException();
    }
}

void Dungeons_of_Nargoth::RitonTimer::startTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = 0;
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::resetTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::frameCountPlusOne(int timerIndex)
{
    m_frameCount[timerIndex]++;
}

void Dungeons_of_Nargoth::RitonTimer::manageFramesPerSecond(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    if (m_qpcDeltaTime >= m_qpcFrequency.QuadPart)
    {
        m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
        m_frameCount[timerIndex] = 0;
        m_qpcStartTime[timerIndex] += m_qpcFrequency.QuadPart;
        if ((m_qpcStartTime[timerIndex] + m_qpcFrequency.QuadPart) < m_qpcNowTime.QuadPart)
            m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart - m_qpcFrequency.QuadPart;
    }
}

void Dungeons_of_Nargoth::RitonTimer::initTimer()
{
    if (!QueryPerformanceFrequency(&m_qpcFrequency))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcOneFrameTime = m_qpcFrequency.QuadPart / 60;
    m_qpc5PercentOfOneFrameTime = m_qpcOneFrameTime / 20;
    m_qpc10PercentOfOneFrameTime = m_qpcOneFrameTime / 10;
    m_qpc95PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc5PercentOfOneFrameTime;
    m_qpc90PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc80PercentOfOneFrameTime = m_qpcOneFrameTime - 2 * m_qpc10PercentOfOneFrameTime;
    m_qpc70PercentOfOneFrameTime = m_qpcOneFrameTime - 3 * m_qpc10PercentOfOneFrameTime;
    m_qpc60PercentOfOneFrameTime = m_qpc70PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc50PercentOfOneFrameTime = m_qpc60PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc45PercentOfOneFrameTime = m_qpc50PercentOfOneFrameTime - m_qpc5PercentOfOneFrameTime;
}

bool Dungeons_of_Nargoth::RitonTimer::enoughTimeForOneMoreTexture(int timerIndex)
{
    while (!QueryPerformanceCounter(&m_qpcNowTime));
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    return m_qpcDeltaTime < m_qpc45PercentOfOneFrameTime;
}
In debug mode the game's UI runs at 60 FPS, and the 3D view at about 1 FPS on my PC. But even there I'm not sure why I have to stop my texture drawing at 45% of one frame time and call Present to get 60 FPS; if I wait longer I only get 30 FPS. (This value is set in enoughTimeForOneMoreTexture() in RitonTimer.)
In release mode it drops dramatically, to around 10 FPS for the UI part and 1 FPS for the 3D part. I've spent the last two days trying to find out why, without success.
Also, I have another small question: how do I tell Visual Studio that my game is actually a game and not an app? Or does Microsoft do the "switch" when I send my game to their store?
I have put my game on my OneDrive so everyone can download the source files, try to compile it, and see if you get the same results as me:
OneDrive link: https://1drv.ms/f/s!Aj7wxGmZTdftgZAZT5YAbLDxbtMNVg
Compile in either x64 Debug or x64 Release mode.
UPDATE:
I think I found the explanation for why my game is slower in release mode.
The CPU is probably not waiting for a drawing instruction to finish, but simply adds it to a list which is forwarded to the GPU at its own pace in a separate task (or maybe the GPU does that caching itself). That would explain it all.
My plan was to draw the UI first and then draw as many textures from the 3D view as possible until 95% of a 1/60th-second frame time had passed, then present to the swap chain. The UI would always be at 60 FPS and the 3D view would be as fast as the system allows (also at 60 FPS if it can all be drawn within 95% of the frame time).
This didn't work because it probably queued all the draw instructions my 3D view issued (I was testing with 150,000 big texture draws for the 3D view) within one frame time, so of course the UI ended up as slow as the 3D view, or close to it.
That is also why even in debug mode I didn't get 60 FPS when I waited for 95% of a frame time; I had to stop at 45% of a frame time to get the 60 FPS I wanted for the UI.
I tested a lower value in release mode to verify that theory, and indeed I also get 60 FPS for the UI when I stop the draws at only 15% of a frame time.
I thought it worked like this only in DirectX 12.
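One way to verify that queuing theory directly is a D3D11 event query, which lets the CPU find out when the GPU has actually finished the submitted work. The sketch below assumes the usual device/context pair from the UWP DirectX template and is meant for measurement only, not production frame pacing:
// Sketch: detect real GPU completion with an event query.
D3D11_QUERY_DESC queryDesc = {};
queryDesc.Query = D3D11_QUERY_EVENT;
Microsoft::WRL::ComPtr<ID3D11Query> frameDone;
device->CreateQuery(&queryDesc, &frameDone);

// ... issue the frame's draw calls ...
context->End(frameDone.Get()); // mark this point in the command stream

// GetData returns S_FALSE until the GPU reaches the marker.
while (context->GetData(frameDone.Get(), nullptr, 0, 0) == S_FALSE)
{
    // GPU still draining the queue; timing this loop shows how far
    // ahead of the GPU the CPU was allowed to run.
}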
"How do i tell visual studio that my game is actually a game and not an app" - there's no difference, a game is an app.
I have your code running at 300-400 FPS now in debug mode.
Firstly, I commented out your code that checks whether you've got time to render another texture. Don't do that. Everything the player sees should render within a single frame. If your frame is taking more than 16 ms (with a 60 FPS target), look for expensive operations, or calls that are made repeatedly, possibly adding up to some unexpected cost. Look for code that does something repeatedly when it only needs to happen once per frame or per resize, etc.
So the issue is that you were rendering very large textures, and a lot of them. You want to avoid overdraw (rendering a pixel where you've already rendered a pixel). A bit of overdraw is fine, and sometimes preferable to being pedantic about it. But you were drawing 1000x2000 textures over and over again, so you were absolutely killing the pixel shader; it just can't fill that many pixels. I didn't bother looking at the code that tries to control texture rendering based on the frame time remaining. For what you're trying to do, it isn't helpful.
Inside your render method, comment out the while and if/else sections and use this to draw an array of your textures:
// set sprite dimensions
int w = 64, h = 64;
for (int y = 0; y < 16; y++)
{
    for (int x = 0; x < 16; x++)
    {
        m_sceneRenderer->renderNextTextureTo960X540Screen(x * 64, y * 64, w, h);
    }
}
and in RenderNextTextureToScreen(int x, int y, int w, int h) ..
m_squareBuffer.sizeX = w; // was 1000
m_squareBuffer.sizeY = h; // was 2000
m_squareBuffer.posX = x;  // was (float)(rand() % 1920)
m_squareBuffer.posY = y;  // was (float)(rand() % 1080)
See how this code renders much smaller textures: they're 64x64 and there's no overdraw.
And just be aware that the GPU isn't all powerful, it can do a lot if you use it right, but if you just throw crazy operations at it, you can grind it to a halt, just like with the CPU. So try to render things that 'look normal', that you can imagine being in a game. You'll learn in time what's sensible and what isn't.
The most likely explanation for the code running slower in release mode is that your timing and rendering-limiter code was broken. It wasn't working properly because the 3D view was running at 1 FPS, so who knows what its behaviour was. With the changes I've made, the program runs faster in release mode, as expected. Your clock code shows 600-1600 FPS in release mode for me now.

SDL tilemap rendering quite slow

I'm using SDL to write a simulation that displays quite a big tilemap (around 240*240 tiles). Since I'm quite new to the SDL library I can't really tell whether the pretty slow performance while rendering more than 50,000 tiles is actually normal. Every tile is visible at all times, each around 4*4 px. Currently it iterates through a 2D array every frame and renders every single tile, which gives me about 40 FPS, too slow to actually put any game logic behind the system.
I tried to find some alternative approaches, like only updating changed tiles, but people always commented that this is bad practice, that the renderer is supposed to be cleared every frame, and so on.
Here is a picture of the map.
So I basically wanted to ask whether there is any more performant approach than rendering every single tile every frame.
Edit: here's the simple rendering method I'm using:
void World::DirtyBiomeDraw(Graphics *graphics) {
    if (_biomeTexture == NULL) {
        _biomeTexture = graphics->loadImage("assets/biome_sprites.png");
        printf("Biome texture loaded.\n");
    }

    for (int i = 0; i < globals::WORLD_WIDTH; i++) {
        for (int l = 0; l < globals::WORLD_HEIGHT; l++) {
            SDL_Rect srect;
            srect.h = globals::SPRITE_SIZE;
            srect.w = globals::SPRITE_SIZE;
            if (sites[l][i].biome > 0) {
                srect.y = 0;
                srect.x = (globals::SPRITE_SIZE * sites[l][i].biome) - globals::SPRITE_SIZE;
            }
            else {
                srect.y = globals::SPRITE_SIZE;
                srect.x = globals::SPRITE_SIZE * fabs(sites[l][i].biome);
            }

            SDL_Rect drect = {i * globals::SPRITE_SIZE * globals::SPRITE_SCALE,
                              l * globals::SPRITE_SIZE * globals::SPRITE_SCALE,
                              globals::SPRITE_SIZE * globals::SPRITE_SCALE,
                              globals::SPRITE_SIZE * globals::SPRITE_SCALE};

            graphics->blitOnRenderer(_biomeTexture, &srect, &drect);
        }
    }
}
In this context every tile is called a "site", because the tiles also store information like moisture, temperature and so on.
Every site gets a biome assigned during the generation process; every biome is basically an ID, every land biome has an ID higher than 0, and every water ID is 0 or lower.
This allows me to put every biome sprite, ordered by ID, into the "biome_sprites.png" image. All the land sprites are in the first row, while all the water tiles are in the second row. This way I don't have to manually assign a sprite to a biome; the method can do it itself by multiplying the tile size (basically the width) by the biome.
Here's the biome ID table from my SDD/GDD and the actual spritesheet.
The blitOnRenderer method from the graphics class basically just runs SDL_RenderCopy, blitting the texture onto the renderer.
void Graphics::blitOnRenderer(SDL_Texture *texture, SDL_Rect *sourceRectangle,
                              SDL_Rect *destinationRectangle) {
    SDL_RenderCopy(this->_renderer, texture, sourceRectangle, destinationRectangle);
}
In the game loop, SDL_RenderClear and SDL_RenderPresent get called every frame.
I really hope I explained it understandably. Ask anything you want; I'm the one asking you guys for help, so the least I can do is be cooperative :D
Poke the SDL2 devs for a multi-item version of SDL_RenderCopy() (similar to the existing SDL_RenderDrawLines()/SDL_RenderDrawPoints()/SDL_RenderDrawRects() functions) and/or batched SDL_Renderer backends.
Right now you're trying to slam at least 240*240 = 57,600 draw calls down the GPU's throat; you can usually only count on 1000-4000 draw calls in any given 16 milliseconds.
Alternatively, switch to OpenGL and do the batching yourself.
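Another route that stays within plain SDL2, sketched below under the assumption that the map changes rarely: render all the tiles once into a target texture and blit that one texture per frame. Here renderer, mapWidthPx, mapHeightPx, and the drawAllTiles helper (standing in for the per-tile loop from the question) are illustrative names:
// One-time setup: an offscreen texture the size of the whole map.
SDL_Texture* mapCache = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                          SDL_TEXTUREACCESS_TARGET,
                                          mapWidthPx, mapHeightPx);

// Rebuild the cache only when tiles actually change: the expensive
// 57,600-copy pass runs here instead of every frame.
SDL_SetRenderTarget(renderer, mapCache);
drawAllTiles(renderer);              // hypothetical: the existing per-tile loop
SDL_SetRenderTarget(renderer, NULL); // back to the default target

// Per frame: a single draw call instead of one per tile.
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, mapCache, NULL, NULL);
SDL_RenderPresent(renderer);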

cocos2d-x v3 c++ Drop shadow cocos2d::Sprite

As far as I've found out, cocos doesn't offer simple filter handling the way AS3, for example, does.
My situation:
I want to add a realtime shadow to a cocos2d::Sprite.
For example I would like to do something like this (similar to AS3):
auto mySprite = Sprite::createWithSpriteFrameName("myCharacter.png");
DropShadowFilter* dropShadow = new DropShadowFilter();
dropShadow->distance = 0;
dropShadow->angle = 45;
dropShadow->color = 0x333333;
dropShadow->alpha = 1;
dropShadow->blurX = 10;
dropShadow->blurY = 10;
dropShadow->strength = 1;
dropShadow->quality = 15;
mySprite->addFilter(dropShadow);
This should add a shadow to my sprite and achieve a result like this:
Adobe Drop Shadow Example
Could you help me please?
There isn't any built in support for shadows on Sprites in Cocos2D-X.
The best option, performance-wise, would be to bake the shadows into your sprite images, instead of calculating and drawing them in code.
Another option is to sub-class Sprite and override the draw method so that you duplicate the sprite, apply your effects, and draw it below the original.
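A minimal sketch of that duplicate idea without sub-classing, just a tinted, offset copy added behind the original (the offset, opacity, and parentNode are illustrative, and there's no blur):
auto mySprite = Sprite::createWithSpriteFrameName("myCharacter.png");

auto shadow = Sprite::createWithSpriteFrameName("myCharacter.png");
shadow->setColor(Color3B::BLACK); // tint the copy down to a solid silhouette
shadow->setOpacity(96);           // roughly 38% alpha, tweak to taste
shadow->setPosition(mySprite->getPosition() + Vec2(6.0f, -6.0f));

// Lower z-order so the shadow draws beneath the character.
parentNode->addChild(shadow, 0);  // parentNode is whatever node holds the sprite
parentNode->addChild(mySprite, 1);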
One possible way to achieve that is with this snippet from this thread on the Cocos forum. I can't say that I completely follow what this code does with the GL transforms, but you can use this as a starting point to experiment.
void CMySprite::draw()
{
    // is_shadow is true if this sprite is to be considered a shadow sprite, false otherwise.
    if (is_shadow)
    {
        ccBlendFunc blend;
        // Change the default blending factors to this one.
        blend.src = GL_SRC_ALPHA;
        blend.dst = GL_ONE;
        setBlendFunc(blend);

        // Change the blending equation to this one in order to subtract the sprite's
        // values from the values already written in the frame buffer.
        glBlendEquationOES(GL_FUNC_REVERSE_SUBTRACT_OES);
    }

    CCSprite::draw();

    if (is_shadow)
    {
        // The default blending equation of cocos2d-x is GL_FUNC_ADD.
        glBlendEquationOES(GL_FUNC_ADD_OES);
    }
}

C++ - How to play animation in opposite direction Cocos2DX

I've got a PNG like this.
I also have this segment of code:
SpriteFrameCache::getInstance()->addSpriteFramesWithFile("walk.plist", "walk.png");
Vector<SpriteFrame*> animFrames;
animFrames.reserve(8);
char spriteFrameByName[MAX_WORD] = { 0 };
for (int index = 1; index <= 8; index++)
{
    sprintf(spriteFrameByName, "%d.png", index);
    auto frame = SpriteFrameCache::getInstance()->getSpriteFrameByName(spriteFrameByName);
    animFrames.pushBack(frame);
}
Animation* animation = Animation::createWithSpriteFrames(animFrames, time);
sprite->runAction(Animate::create(animation));
Now I want to flip this animation horizontally, so it looks like this.
Is there a way to do this in C++ code, without creating another PNG file?
Animation* animation = Animation::createWithSpriteFrames(animFrames, time);
sprite->runAction(Animate::create(animation));
sprite->setFlippedX(true); // setFlipX() is the older, deprecated name in v3
Horizontally flipping an image is equivalent to scaling it by -1 on the x-axis. I am not familiar with Cocos2DX, but multiplying the x scale of your image by -1 will flip it horizontally for you.
This answer might help you with scaling:
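A minimal sketch of that scale-based flip (not the linked answer; the anchor-point note is an added assumption, since scaling happens around the anchor and the centered default keeps the sprite in place):
sprite->setAnchorPoint(Vec2(0.5f, 0.5f)); // the default, shown for clarity
sprite->setScaleX(sprite->getScaleX() * -1.0f);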
I am not entirely sure whether the flip function handles the rotation of the object you want to flip. I believe it only changes the texture's direction, which may make things a bit more complex down the road if you ever need to compute which direction your character is facing in your game world.
You can rotate the entire sprite on the Y axis. By doing so, you make sure that everything faces the right direction, not just the texture. The following code produces the same visual by rotating the entire sprite:
sprite->setRotation3D(Vec3(0, 180, 0));

recalculate QwtScaleDiv before rendering

I've implemented a QwtPlot which scrolls across the screen as data is added in real time. Based on user input, an image of the plot is occasionally rendered to a file using QwtPlotRenderer. However, because the axis scrolls during normal operation, the QwtScaleDiv tick marks can look a little wonky at render time (they are right-aligned):
Is there some easy way in which I can recalculate the division prior to rendering so that the first label is on the far left and the last one is on the far right?
This isn't as difficult as it looked at first. Basically, all you need to do is temporarily replace the axis scale division.
// grab the current scale divisions so they can be restored afterwards
auto divX = this->axisScaleDiv(xBottom);
auto divY = this->axisScaleDiv(yLeft);

double ub = divX.upperBound();
double lb = divX.lowerBound();
int numTicks = 11; // 10 even divisions

// you can create minor/medium ticks if you want to, I didn't.
QList<double> majorTicks;
for (int i = 0; i < numTicks; ++i)
{
    majorTicks.push_back(lb + i * ((ub - lb) / (numTicks - 1)));
}

// set the scale to the newly created division
QwtScaleDiv renderDivX(divX.lowerBound(), divX.upperBound(),
                       QList<double>(), QList<double>(), majorTicks);
this->setAxisScaleDiv(xBottom, renderDivX);

// DO PLOT RENDERING
QwtPlotRenderer renderer;
renderer.renderDocument(...);

// RESTORE PREVIOUS STATE
this->setAxisScaleDiv(xBottom, divX);
this->setAxisScaleDiv(yLeft, divY);

// update the axes
this->updateAxes();