I'm using SDL to write a simulation that displays a pretty big tilemap (around 240*240 tiles). Since I'm quite new to the SDL library, I can't really tell whether the rather slow performance of rendering more than 50,000 tiles is actually normal. Every tile is visible at all times, each around 4*4 px. Currently the code iterates through a 2D array every frame and renders every single tile, which gives me about 40 fps, too slow to put any game logic behind the system.
I tried to find alternative approaches, like only redrawing tiles that changed, but people always commented that this is bad practice, that the renderer is supposed to be cleared every frame, and so on.
Here's a picture of the map.
So basically I wanted to ask whether there is a more performant approach than rendering every single tile every frame.
Edit: here's the simple rendering method I'm using:
void World::DirtyBiomeDraw(Graphics *graphics) {
    if (_biomeTexture == NULL) {
        _biomeTexture = graphics->loadImage("assets/biome_sprites.png");
        printf("Biome texture loaded.\n");
    }

    for (int i = 0; i < globals::WORLD_WIDTH; i++) {
        for (int l = 0; l < globals::WORLD_HEIGHT; l++) {
            SDL_Rect srect;
            srect.h = globals::SPRITE_SIZE;
            srect.w = globals::SPRITE_SIZE;

            if (sites[l][i].biome > 0) {
                // Land biomes (ID > 0) live in the first row of the sheet.
                srect.y = 0;
                srect.x = (globals::SPRITE_SIZE * sites[l][i].biome) - globals::SPRITE_SIZE;
            } else {
                // Water biomes (ID <= 0) live in the second row. biome is an
                // integer ID, so use abs() instead of fabs() and avoid the
                // int -> double -> int round trip.
                srect.y = globals::SPRITE_SIZE;
                srect.x = globals::SPRITE_SIZE * abs(sites[l][i].biome);
            }

            SDL_Rect drect = {i * globals::SPRITE_SIZE * globals::SPRITE_SCALE,
                              l * globals::SPRITE_SIZE * globals::SPRITE_SCALE,
                              globals::SPRITE_SIZE * globals::SPRITE_SCALE,
                              globals::SPRITE_SIZE * globals::SPRITE_SCALE};
            graphics->blitOnRenderer(_biomeTexture, &srect, &drect);
        }
    }
}
In this context every tile is called a "site", because the sites also store information like moisture, temperature and so on.
Every site gets a biome assigned during the generation process. Every biome is basically an ID: every land biome has an ID greater than 0, and every water ID is 0 or lower.
This allows me to put every biome sprite, ordered by ID, into the "biome_sprites.png" image. All the land sprites are in the first row, while all the water tiles are in the second row. This way I don't have to manually assign a sprite to a biome; the method works it out itself by multiplying the tile size (basically the width) by the biome ID.
Here's the biome ID table from my SDD/GDD and the actual spritesheet.
The blitOnRenderer method from the graphics class basically just runs an SDL_RenderCopy, blitting the texture onto the renderer:
void Graphics::blitOnRenderer(SDL_Texture *texture, SDL_Rect *sourceRectangle,
                              SDL_Rect *destinationRectangle) {
    SDL_RenderCopy(this->_renderer, texture, sourceRectangle, destinationRectangle);
}
In the game loop, RenderClear and RenderPresent are called every frame.
I really hope I explained it understandably. Ask anything you want; I'm the one asking you guys for help, so the least I can do is be cooperative :D
Poke the SDL2 devs for a multi-item version of SDL_RenderCopy() (similar to the existing SDL_RenderDrawLines()/SDL_RenderDrawPoints()/SDL_RenderDrawRects() functions) and/or batched SDL_Renderer backends.
Right now you're trying to slam at least 240*240 = 57,600 draw calls per frame down the GPU's throat; you can usually only count on 1,000-4,000 draw calls in any given 16 milliseconds.
Alternatively, switch to OpenGL and do the batching yourself.
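If the map itself rarely changes, you can also sidestep the problem with SDL's render targets: draw all 57,600 tiles into one big texture once, then issue a single SDL_RenderCopy() per frame. A rough sketch, assuming your renderer was created with SDL_RENDERER_TARGETTEXTURE and that Graphics exposes its SDL_Renderer through a renderer() accessor (both names are assumptions on my part, as is the _mapCache member):

// Sketch: cache the whole tilemap in a render-target texture.
// _mapCache is a hypothetical World member (SDL_Texture*, initialised to NULL).
void World::CachedBiomeDraw(Graphics *graphics) {
    const int w = globals::WORLD_WIDTH  * globals::SPRITE_SIZE * globals::SPRITE_SCALE;
    const int h = globals::WORLD_HEIGHT * globals::SPRITE_SIZE * globals::SPRITE_SCALE;

    if (_mapCache == NULL) {
        _mapCache = SDL_CreateTexture(graphics->renderer(), SDL_PIXELFORMAT_RGBA8888,
                                      SDL_TEXTUREACCESS_TARGET, w, h);
        SDL_SetRenderTarget(graphics->renderer(), _mapCache);
        DirtyBiomeDraw(graphics);                  // the 57,600 copies run once, not per frame
        SDL_SetRenderTarget(graphics->renderer(), NULL);
    }

    SDL_RenderCopy(graphics->renderer(), _mapCache, NULL, NULL);  // one draw call per frame
}

When a tile changes, set the render target again and redraw just that tile into the cache. At 4*4 px per tile the cached texture is only about 960*960 px, comfortably under common texture size limits.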
I am trying to build an autoclicker using C++ to beat a 2D video game in which the following situation appears:
The main character is in the center of the screen, the background is completely black, and enemies come from all directions. I want my program to be capable of clicking on enemies just as they appear on the screen.
What I came up with at first is that the enemies have a minimum size of 15px, so I tried doing a search every 15 pixels and analyzing whether any pixel differs from the background's RGB, using GetPixel(). It looks something like this:
COLORREF color;
int R, G, B;
HDC screen = GetDC(nullptr); // fetch the DC once; calling GetDC() per pixel leaks DCs
for(int i=0; i<SCREEN_SIZE_X; i+=15){ //These SCREEN_SIZE values are #defined with the ones of my screen
    for(int j=0; j<SCREEN_SIZE_Y; j+=15){
        //The following conditional excludes the center, which is the player's position
        if((i<PLAYER_MIN_EDGE_X or i>PLAYER_MAX_EDGE_X) and (j<PLAYER_MIN_EDGE_Y or j>PLAYER_MAX_EDGE_Y)){
            color = GetPixel(screen, i, j);
            R = GetRValue(color);
            G = GetGValue(color);
            B = GetBValue(color);
            if(R!=0 or G!=0 or B!=0) cout<<"Enemy Found"<<endl;
        }
    }
}
ReleaseDC(nullptr, screen);
It turns out that, as expected, the GetPixel() function is extremely slow, as it has to check about 4,000 pixels to cover just one screen scan. I was thinking about a way to solve this faster, and while looking at the keyboard I noticed the "Prt Scr" button, and realized that whatever that button does, it can almost instantly save the information of millions of pixels.
I am sure there is a proper, different technique for approaching this kind of problem.
What kind of theory or technique for pixel analysis should I investigate and read about so that this can be considered respectable code, actually work, and run much faster?
The GetPixel() routine is slow because it fetches the data from the video card (device) memory one pixel at a time. So to optimize your loop, you have to fetch the entire screen at once and put it into an array of pixels. Then you can iterate over that array much faster, because you'll be operating on data in your RAM (host memory).
For a better optimization, I also recommend clearing the pixels of your player (in the center of the screen) after fetching the screen into your pixel array. This way you can eliminate the if((i<PLAYER_MIN_EDGE_X or i>PLAYER_MAX_EDGE_X) and (j<PLAYER_MIN_EDGE_Y or j>PLAYER_MAX_EDGE_Y)) condition inside the loop.
CImage image;
// Save the screen DC into the image: grab the whole desktop with one BitBlt
// instead of one GetPixel() call per pixel.
int w = GetSystemMetrics(SM_CXSCREEN);
int h = GetSystemMetrics(SM_CYSCREEN);
image.Create(w, h, 32);
HDC hdcScreen = GetDC(nullptr);
BitBlt(image.GetDC(), 0, 0, w, h, hdcScreen, 0, 0, SRCCOPY);
image.ReleaseDC();
ReleaseDC(nullptr, hdcScreen);

int R, G, B;
BYTE *pRealData = (BYTE*)image.GetBits();
int pit = image.GetPitch();         // bytes per row (negative for bottom-up DIBs)
int bitCount = image.GetBPP() / 8;  // bytes per pixel, 4 for a 32-bit image
for (int i = 0; i < h; i++)
{
    for (int j = 0; j < w; j++)
    {
        // 32-bit DIBs store the channels in BGR(A) order
        B = *(pRealData + pit * i + j * bitCount);
        G = *(pRealData + pit * i + j * bitCount + 1);
        R = *(pRealData + pit * i + j * bitCount + 2);
    }
}
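To fold in the second suggestion, a continuation of the snippet above (reusing pRealData, pit, bitCount, w, h, plus the PLAYER_*_EDGE defines from the question): blank the player's rectangle once, and the scan loop needs no per-pixel exclusion test.

// Sketch: clear the player's pixels once so the hot loop has no branch.
// PLAYER_*_EDGE are the question's #defines; 15 is the minimum enemy size.
for (int y = PLAYER_MIN_EDGE_Y; y <= PLAYER_MAX_EDGE_Y; y++)
    memset(pRealData + pit * y + PLAYER_MIN_EDGE_X * bitCount, 0,
           (PLAYER_MAX_EDGE_X - PLAYER_MIN_EDGE_X + 1) * bitCount);

for (int y = 0; y < h; y += 15)
    for (int x = 0; x < w; x += 15)
    {
        BYTE *px = pRealData + pit * y + x * bitCount;
        if (px[0] != 0 || px[1] != 0 || px[2] != 0)   // any non-black BGR channel
            cout << "Enemy found at " << x << "," << y << endl;
    }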
I'm trying to make a UWP game, and I came across a problem where my game is much slower in release mode than it is in debug mode.
My game draws a 3D view (Dungeon Master style) and has a UI part that draws over the 3D view. Because the 3D view can slow down to a small number of frames per second (FPS), I decided to make my game always run the UI part at 60 FPS.
Here is how the main game loop looks, in some pseudocode:
Gameloop start
    Update game data
    Copy the finished 3D view from buffer to screen
    Draw UI part
    3D view loop start
        If there is no more time to draw textures on the 3D view, exit 3D view loop
        Draw one texture to the 3D view buffer
    3D view loop end --> 3D view loop start
Gameloop end --> Gameloop start
Here are the actual update and render functions:
void Dungeons_of_NargothMain::Update()
{
    m_ritonTimer.startTimer(static_cast<int>(E_RITON_TIMER::UI));
    m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::LABY_FRAME_COUNT);

    if (m_sceneRenderer->m_numberTotalOfTexturesToDraw == 0 ||
        m_sceneRenderer->m_numberTotalOfTexturesToDraw <= m_sceneRenderer->m_numberOfTexturesDrawn)
    {
        m_sceneRenderer->m_numberTotalOfTexturesToDraw = 150000;
        m_sceneRenderer->m_numberOfTexturesDrawn = 0;
    }
}
// RENDER
bool Dungeons_of_NargothMain::Render()
{
    //********************************//
    //      Render UI part here       //
    //********************************//

    //**********************************//
    // Render 3D view to 960x540 screen //
    //**********************************//
    m_sceneRenderer->setRenderTargetTo960X540Screen(); // 3D view buffer screen
    bool screen960GotFullDrawn = false;
    bool stillenoughTimeLeft = true;
    while (stillenoughTimeLeft && (!screen960GotFullDrawn))
    {
        stillenoughTimeLeft = m_ritonTimer.enoughTimeForOneMoreTexture((int)E_RITON_TIMER::UI);
        screen960GotFullDrawn = m_sceneRenderer->renderNextTextureTo960X540Screen();
    }
    if (screen960GotFullDrawn)
        m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::LABY_FRAME_COUNT);
    return true;
}
I removed what is not essential.
Here is the timer part (RitonTimer):
#pragma once
#include "pch.h"
#include <wrl.h>
#include "RitonTimer.h"

Dungeons_of_Nargoth::RitonTimer::RitonTimer()
{
    initTimer();
    if (!QueryPerformanceCounter(&m_qpcGameStartTime))
    {
        throw ref new Platform::FailureException();
    }
}

void Dungeons_of_Nargoth::RitonTimer::startTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = 0;
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::resetTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::frameCountPlusOne(int timerIndex)
{
    m_frameCount[timerIndex]++;
}

void Dungeons_of_Nargoth::RitonTimer::manageFramesPerSecond(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    if (m_qpcDeltaTime >= m_qpcFrequency.QuadPart)
    {
        m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
        m_frameCount[timerIndex] = 0;
        m_qpcStartTime[timerIndex] += m_qpcFrequency.QuadPart;
        if ((m_qpcStartTime[timerIndex] + m_qpcFrequency.QuadPart) < m_qpcNowTime.QuadPart)
            m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart - m_qpcFrequency.QuadPart;
    }
}

void Dungeons_of_Nargoth::RitonTimer::initTimer()
{
    if (!QueryPerformanceFrequency(&m_qpcFrequency))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcOneFrameTime = m_qpcFrequency.QuadPart / 60;
    m_qpc5PercentOfOneFrameTime = m_qpcOneFrameTime / 20;
    m_qpc10PercentOfOneFrameTime = m_qpcOneFrameTime / 10;
    m_qpc95PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc5PercentOfOneFrameTime;
    m_qpc90PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc80PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc70PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc60PercentOfOneFrameTime = m_qpc70PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc50PercentOfOneFrameTime = m_qpc60PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc45PercentOfOneFrameTime = m_qpc50PercentOfOneFrameTime - m_qpc5PercentOfOneFrameTime;
}

bool Dungeons_of_Nargoth::RitonTimer::enoughTimeForOneMoreTexture(int timerIndex)
{
    while (!QueryPerformanceCounter(&m_qpcNowTime))
        ;
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    return (m_qpcDeltaTime < m_qpc45PercentOfOneFrameTime);
}
In debug mode the game's UI works at 60 FPS, and the 3D view runs at about 1 FPS on my PC. But even there I'm not sure why I have to stop my texture drawing at 45% of one frame time and call Present to get the 60 FPS; if I wait longer I only get 30 FPS. (This value is set in enoughTimeForOneMoreTexture() in RitonTimer.)
In release mode it drops dramatically, to around 10 FPS for the UI part and 1 FPS for the 3D part. I've spent the last 2 days trying to find out why, without success.
Also a small side question: how do I tell Visual Studio that my game is actually a game and not an app? Or does Microsoft do the "switch" when I send my game to their store?
Here I have put my game on my OneDrive so everyone can download the source files, compile it, and see if you get the same results as me:
OneDrive link: https://1drv.ms/f/s!Aj7wxGmZTdftgZAZT5YAbLDxbtMNVg
Compile in either x64 Debug or x64 Release mode.
UPDATE:
I think I found the explanation for why my game is slower in release mode.
The CPU is probably not waiting for a draw instruction to be done, but simply adds it to a list which is forwarded to the GPU at its own pace in a separate task (or maybe the GPU does that caching itself). That would explain it all.
My plan was to draw the UI first and then draw as many textures from the 3D view as possible until 95% of a 1/60th-second frame time had passed, then present it to the swap chain. The UI would always be at 60 FPS and the 3D view would be as fast as the system allows (also at 60 FPS if it can all be drawn in 95% of the frame time).
This didn't work because the driver probably queued all the draw instructions my 3D view issued (I was testing with 150,000 BIG texture draw instructions per frame), and so of course the UI ended up as slow as the 3D view, or close to it.
That is also why, even in debug mode, I didn't get 60 FPS when I waited for 95% of a frame time; I had to stop at 45% of a frame time to get the 60 FPS I wanted for the UI.
I tested with a lower value in release mode to verify that theory, and indeed I also get 60 FPS for the UI when I stop the drawing at only 15% of a frame time.
I thought it worked like this only in DirectX 12.
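One way to actually verify this theory would be to make the CPU wait until the GPU has drained the queue before taking a timestamp. A minimal D3D11 sketch, assuming the m_d3dDevice / m_d3dContext members from the standard UWP DirectX template (not my actual project code):

// Sketch: block the CPU until all queued GPU work has completed (D3D11).
// Assumes m_d3dDevice / m_d3dContext as in the stock UWP DirectX template.
Microsoft::WRL::ComPtr<ID3D11Query> query;
D3D11_QUERY_DESC desc = {};
desc.Query = D3D11_QUERY_EVENT;      // signaled once all prior commands finish
m_d3dDevice->CreateQuery(&desc, &query);

m_d3dContext->End(query.Get());      // insert the event after the queued draws
m_d3dContext->Flush();               // submit the queue to the GPU now
BOOL done = FALSE;
while (m_d3dContext->GetData(query.Get(), &done, sizeof(done), 0) != S_OK)
{
    // spin until the GPU reaches the event; only for measuring, not for shipping
}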
"How do i tell visual studio that my game is actually a game and not an app" - there's no difference, a game is an app.
I have your code running at 300-400 FPS now in debug mode.
Firstly, I commented out your code that checks whether you've got time to render another texture. Don't do that. Everything the player sees should render within a single frame. If your frame is taking more than 16 ms (with a 60 fps target), look for expensive operations, or calls that are made repeatedly, possibly adding up to some unexpected cost. Look for code that does something repeatedly when it only needs to do it once per frame or per resize, etc.
So the issue is that you were rendering very large textures, and a lot of them. You want to avoid overdraw (rendering a pixel where you've already rendered a pixel). You can have a bit of overdraw, and that's sometimes preferable to being pedantic. But you were drawing 1000x2000 textures over and over again, so you were absolutely killing the pixel shader; it simply can't fill that many pixels. I didn't bother looking at the code that tries to control texture rendering based on frame time remaining. For what you're trying to do, that's not helpful.
Inside your render method, comment out the while and if/else sections and use this to draw an array of your textures:
// set sprite dimensions
int w = 64, h = 64;
for (int y = 0; y < 16; y++)
{
    for (int x = 0; x < 16; x++)
    {
        m_sceneRenderer->renderNextTextureTo960X540Screen(x * 64, y * 64, w, h);
    }
}
and in renderNextTextureTo960X540Screen(int x, int y, int w, int h):
m_squareBuffer.sizeX = w; // 1000;
m_squareBuffer.sizeY = h; // 2000;
m_squareBuffer.posX = x; // (float)(rand() % 1920);
m_squareBuffer.posY = y; // (float)(rand() % 1080);
See how this code renders much smaller textures: they are 64x64, and there's no overdraw.
And just be aware that the GPU isn't all-powerful. It can do a lot if you use it right, but if you throw crazy operations at it, you can grind it to a halt, just as with the CPU. So try to render things that 'look normal', that you can imagine being in a game. You'll learn in time what's sensible and what isn't.
The most likely explanation for the code running slower in release mode is that your timing and rendering limiter code was broken. It wasn't working properly, because the 3D view was running at 1 fps, so who knows what its behaviour was. With the changes I've made, the program runs faster in release mode, as expected. Your clock code is showing 600-1600 fps in release mode for me now.
Abstract
My ultimate goal is to use FLTK to take user inputs of pixels, display a generated maze (either my own, or one fetched from the website mentioned in the details), and then show the animated solution.
This is what I've managed so far:
https://giant.gfycat.com/VioletWelloffHatchetfish.webm
Details
I'm in my first C++/algorithms class of a bachelor's in CE.
As we've been learning about graphs, Dijkstra, etc. over the last weeks, I decided, after watching Computerphile's video about maze solving, to try to put the theory into "practice".
At first I wanted to output a maze from this site, http://hereandabove.com/maze/mazeorig.form.html, with the plotted solution. I chose to make walls and paths 1x1 pixel each, so the maze is easy to turn into a 2D vector and then into a graph.
This went well, and my program outputs a solved .png file, using Dijkstra to find the shortest path.
I then wanted to put the entire solution in an animated GIF.
This also works well. For each pixel it colors green/yellow, it passes an RGBA vector to a GIF library, and in the end I get an animated step-by-step solution.
For each RGBA vector passed to the GIF library, I also scale it up first, using this function:
// Both the buffer and resized buffer are member variables; for each plotted
// pixel in the path it updates 'buffer', and this function makes a larger
// version of it in 'resized_buffer'.
// HEIGHT and WIDTH are the original size.
// nHeight and nWidth are the new size.
bool Maze_IMG::resample(int nWidth, int nHeight)
{
    if (buffer.size() == 0) return false;

    resized_buffer.clear();
    for (int i = 0; i < nWidth * nHeight * 4; i++) resized_buffer.push_back(-1);

    double scaleWidth = (double)nWidth / (double)WIDTH;
    double scaleHeight = (double)nHeight / (double)HEIGHT;
    for (int cy = 0; cy < nHeight; cy++)
    {
        for (int cx = 0; cx < nWidth; cx++)
        {
            int pixel = (cy * (nWidth * 4)) + (cx * 4);
            int nearestMatch = (((int)(cy / scaleHeight) * (WIDTH * 4)) + ((int)(cx / scaleWidth) * 4));
            resized_buffer[pixel]     = buffer[nearestMatch];
            resized_buffer[pixel + 1] = buffer[nearestMatch + 1];
            resized_buffer[pixel + 2] = buffer[nearestMatch + 2];
            resized_buffer[pixel + 3] = buffer[nearestMatch + 3];
        }
    }
    return true;
}
Problems
The problem is that this takes a looong time when scaling up, even with "small" mazes at 50x50 pixels being scaled to, say, 300x300. I've spent a lot of time making the code as efficient and fast as possible, but after I added the scaling, runs that used to take 10 minutes now take hours.
In FLTK I use the Fl_Anim_Gif library to display animated GIFs, but it won't load the maze GIFs that have been scaled up (still troubleshooting this).
My real questions
Is it possible to improve the scaling function so that it doesn't take forever? Or is this a totally wrong approach?
Is it a stupid idea to try to display it as a GIF in FLTK? Would it be easier to draw it directly in FLTK, or should I rather display the images one after another in FLTK?
I'm just familiarizing myself with FLTK. Would it be easier to use something like Qt instead, and would that be more beneficial in the long run as far as learning a GUI library goes?
I'm mainly doing this for learning, and to start building some sort of portfolio for when I graduate. Is it beneficial at all to make a GUI for this, or is it a waste of time?
Any thoughts or input would be greatly appreciated.
Whatever graphics package you use, the performance will be similar; it depends on how you handle the internals. For instance (a sketch applying these points follows the list):
If you write to a buffer and blit it to the screen, it will be faster than writing to the screen directly.
If you only blit on the paint event, it will be faster than forcing an update every time the screen data changes.
If you preallocate the buffers, the system does not have to keep reallocating whenever the buffer space runs out.
Assuming the space is preallocated, it can be written to without clearing first. Every cell is going to be written to, so there is no need to clear, allocate, and reallocate.
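Applied to the resample() above, a sketch of the preallocation point (the member names are from the question; hoisting the source-row offset out of the inner loop is my addition):

// Sketch: same nearest-neighbour resample, but resized_buffer is sized once
// (no per-pixel push_back) and the source row offset is computed per row,
// not per pixel. std::copy (needs <algorithm>) keeps it independent of the
// buffer element type. Every element is overwritten, so no clearing is needed.
bool Maze_IMG::resample(int nWidth, int nHeight)
{
    if (buffer.empty()) return false;

    resized_buffer.resize((size_t)nWidth * nHeight * 4); // keeps its capacity across calls

    double scaleWidth  = (double)nWidth  / (double)WIDTH;
    double scaleHeight = (double)nHeight / (double)HEIGHT;
    for (int cy = 0; cy < nHeight; cy++)
    {
        int srcRow = (int)(cy / scaleHeight) * (WIDTH * 4); // once per output row
        int dstRow = cy * nWidth * 4;
        for (int cx = 0; cx < nWidth; cx++)
        {
            int src = srcRow + (int)(cx / scaleWidth) * 4;
            std::copy(&buffer[src], &buffer[src] + 4,       // one RGBA pixel
                      &resized_buffer[dstRow + cx * 4]);
        }
    }
    return true;
}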
I'm currently trying to combine the built-in PhysicsBody and TileMap classes in cocos2d-x to create the levels (walls) for my physics-based sidescroller. I have maps of 80*24 tiles, each tile 30*30 pixels big, and I need to assign a static, box-shaped physics body to each tile.
for (int x=0; x < 80; x++) //width of map
{
for (int y = 0; y < 24; y++) //height of map
{
auto spriteTile = wallLayer->getTileAt(Vec2(x,y));
if (spriteTile != NULL)
{
PhysicsBody* tilePhysics = PhysicsBody::createBox(Size(30.0f, 30.0f), PhysicsMaterial(1.0f, 1.0f, 0.0f));
tilePhysics->setDynamic(false); //static is good enough for walls
spriteTile->setPhysicsBody(tilePhysics);
}
}
}
The above code works, but it is very slow and brings performance down from 60 fps to around 20. Is there a less brute-force approach that creates the physics bodies more efficiently? Note: most of the map is blank, so I don't think the number of bodies/tiles is the main problem.
Any insight would be helpful, thanks.
First, check whether debug draw is on for physics, as it can reduce FPS considerably. Also, make sure you're measuring FPS on the device and not on the simulator, as it's not reliable there.
If that doesn't help, you'll have to employ some method of contour tracing to avoid creating a static body for each tile in your tile map. It's better to create one static body for each contour that you trace out of the tile map. That way you end up with far fewer physics bodies, and your performance won't be hurt as much.
One method of doing this is the so-called marching squares algorithm; another is the Moore neighborhood algorithm. A simpler run-merging sketch follows.
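If full contour tracing feels like overkill, even a simple run-merging pass goes a long way: collapse every horizontal run of adjacent wall tiles into a single wide box. A sketch along those lines, reusing the question's wallLayer, 80x24 map, and 30 px tiles (the anchor node and the position math are my assumptions and depend on your map's layout):

// Sketch: one static box per horizontal run of wall tiles, not one per tile.
for (int y = 0; y < 24; y++)          // height of map
{
    int runStart = -1;
    for (int x = 0; x <= 80; x++)     // <= so a run touching the right edge is flushed
    {
        bool solid = (x < 80) && (wallLayer->getTileAt(Vec2(x, y)) != nullptr);
        if (solid && runStart < 0)
        {
            runStart = x;             // a run of wall tiles begins
        }
        else if (!solid && runStart >= 0)
        {
            int runLen = x - runStart;    // the run [runStart, x) just ended
            auto body = PhysicsBody::createBox(Size(30.0f * runLen, 30.0f),
                                               PhysicsMaterial(1.0f, 1.0f, 0.0f));
            body->setDynamic(false);
            // Attach the body to an invisible node centred on the run; the
            // exact world position depends on your map's origin and y-axis.
            auto anchor = Node::create();
            anchor->setPosition(Vec2((runStart + runLen / 2.0f) * 30.0f,
                                     (24 - 1 - y + 0.5f) * 30.0f));
            anchor->setPhysicsBody(body);
            wallLayer->addChild(anchor);
            runStart = -1;
        }
    }
}

With mostly-blank maps this alone typically cuts the body count by an order of magnitude; a full contour tracer reduces it further.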
I'm currently trying to develop a game and I'm having some trouble with the map.
The map works the following way: there is a class named GMap, which contains a vector of tiles.
class GMap
{
private:
    std::vector<BTiles> TileList;
    ...
There will be a Load function in GMap which loads all the tiles from a txt file.
All the tiles have their own functions, like render, and their own variables, like ID and tile type.
I can easily render the tiles, but my problem is that, since the maps are kind of big and each tile is only 16x16 pixels, it takes a lot of them to fill the whole surface. And since there are so many of them, it takes way too long to load: 30-40 seconds for just a small part of them.
I still haven't written the code that actually reads the txt file (which will say how many tiles to load, their types, and their positions), so I have been using this code to test the tile rendering:
bool GMap::Load(char *File)
{
    int XRand;
    for (int i = 0; i < 1024; i++) // I need 1024 tiles to fill a screen of 512x512 pixels
    {
        BTiles NewTile;           // BTiles is the tile class
        XRand = rand() % 5;       // there are currently only 5 tile types; pick one randomly, just for testing
        NewTile.OnLoad(XRand, i); // sets Type = XRand and ID = i; Type picks the sprite, ID the position
        TileList.push_back(NewTile);
    }
    return true;
}
This is the tiles' OnLoad function:
bool BTiles::OnLoad(int BType, int BID)
{
    if ((BSurface = Surface::OnLoad("BTexture.png")) == false)
        return false;
    Type = BType;
    ID = BID;
    return true;
}
I can then print all of the tiles the following way:
void GMap::Render(SDL_Surface *MainSurface)
{
    for (int i = 0; i < TileList.size(); i++)
    {
        // Call a render function inside the tile class; MainSurface is the
        // primary surface I'm using to render images.
        TileList[i].OnRender(MainSurface);
    }
}
But my problem is in the Load function. It takes way too much time to load those 1024 tiles, and 1024 tiles are only a fraction of what a serious map will need. Besides, it won't even load them all: after the huge amount of time it takes to "load" the 1024 tiles, it only prints about half of them, so the screen isn't completely covered with tiles even though I "loaded" the right amount to fill it. I then increased the number from 1024 to 2048, hoping that would finish the screen, but it changed NOTHING. It's like it loads a certain amount and then just stops, or at least stops rendering.
If anyone wants to know how the rendering is done: I have a global function which does the work, and in the tile class I have this function:
void BTiles::OnRender(SDL_Surface *MSurface)
{
    // ID alone determines the position. M_WIDTH and M_HEIGHT are globals for
    // the screen size, currently both 512.
    int X = (ID * 16) % M_WIDTH;         // horizontal position
    int Y = ((ID * 16) / M_HEIGHT) * 16; // vertical position (row * tile height)
    // Render(onto the primary surface, using the images on BSurface, at
    // position X, Y; the last four arguments are the source rect in the tile
    // sheet: where the wanted tile starts on X and Y, 16 px wide, 16 px high)
    Surface::OnDraw(MSurface, BSurface, X, Y, (Type * 16) % M_WIDTH, (Type * 16) / M_HEIGHT, 16, 16);
}
I apologize that I didn't explain the last function properly, but I don't think my problem is there.
Anyway, if anyone needs more info on any part of the code, just ask.
Thank you!
I discovered the problem. Each tile had its own surface, and each one loaded the same image. That means I was generating 1024 surfaces and loading the image 1024 times. What I did to solve the problem was create a single surface in the map class, shared by all tiles.
So
bool BTiles::OnLoad(int BType, int BID)
{
    if ((BSurface = Surface::OnLoad("BTexture.png")) == false)
        return false;
    Type = BType;
    ID = BID;
    return true;
}
became
bool BTiles::OnLoad(int BType, int BID)
{
    Type = BType;
    ID = BID;
    return true;
}
In the map class I added MSurface, which loads the image containing all the tile blocks.
And then to render I do the following:
void GMap::Render(SDL_Surface *MainSurface)
{
    for (int i = 0; i < TileList.size(); i++)
    {
        TileList[i].OnRender(MainSurface, MSurface, 0, 0);
    }
}
MSurface is the surface that contains the image.
Each tile receives MSurface as an external surface, and that one surface holds all the tile images.
Therefore, instead of creating 1024 surfaces, I only create one. Now it takes 2 seconds to load far more than it did before. It also fixed my problem of not all tiles rendering.
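For completeness, here is roughly what the tile's OnRender became with the shared surface passed in. The exact signature is my reconstruction from the GMap::Render call above; I'm treating the two trailing ints as X/Y offsets.

// Sketch: the tile no longer owns a surface; it draws from the shared sheet.
// OffsetX/OffsetY correspond to the 0, 0 arguments passed by GMap::Render.
void BTiles::OnRender(SDL_Surface *MSurface, SDL_Surface *TileSheet, int OffsetX, int OffsetY)
{
    int X = OffsetX + (ID * 16) % M_WIDTH;
    int Y = OffsetY + ((ID * 16) / M_HEIGHT) * 16;
    Surface::OnDraw(MSurface, TileSheet, X, Y,
                    (Type * 16) % M_WIDTH, (Type * 16) / M_HEIGHT, 16, 16);
}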