Huge SDL_Textures - Out of Memory - C++

I'm programming a prototype of a tile-based game to learn SDL. My map is a 257x257 array of tiles, and each tile is drawn at 1x1 to 60x60 pixels depending on the zoom level. The SDL window has a resolution of 1024x768, so I can display anywhere from 18x13 up to 1024x768 tiles.
So far I have tried two approaches.
1st: Render from Tiles
// for (at worst) 1024x768 tiles:
SDL_Rect Tile;
SDL_SetRenderDrawColor(gRenderer, /* some color */, 255);
Tile = { Tile_size * x, Tile_size * y, Tile_size, Tile_size };
SDL_RenderFillRect(gRenderer, &Tile);
Con: it is far too time-consuming, and the game starts lagging when I try to scroll the map.
2nd: create a texture before the game starts
with: SDL_CreateRGBSurface, SDL_FillRect, SDL_CreateTextureFromSurface
Con: the texture would be (257x257 tiles) x (60x60 pixels per tile) x (4 bytes per pixel) ≈ 951 MB, and with multiple textures for different zoom steps that is far too big to handle.
I'd appreciate any tips to improve the performance.

The first example just draws a single filled rectangle... that can't be slow on its own; I'd have to see more to give a better answer.
In general you'll want to render only the tiles that are visible on the screen, not every tile on the map. With 60x60 tiles you can then get away with just the SDL 2D drawing functions. Once you add deeper zoom levels, I'm afraid you won't be able to keep that approach with 1x1-pixel tiles: you would be making a function call for every single pixel! A minimal sketch of the visible-only idea follows below.
So once you add different zoom levels you'll have to figure out how to get that on screen, and what it's supposed to mean to the player anyway :)
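A minimal sketch of that visible-only culling in SDL2/C++; camX/camY, tileSize, and the fixed green fill are hypothetical stand-ins for the asker's own camera and tile state:

#include <SDL.h>

// Render only the tiles that intersect the 1024x768 view.
void renderVisibleTiles(SDL_Renderer* gRenderer, int camX, int camY, int tileSize)
{
    const int MAP_W = 257, MAP_H = 257;
    const int firstX = camX / tileSize;
    const int firstY = camY / tileSize;
    const int lastX = SDL_min((camX + 1024) / tileSize + 1, MAP_W);
    const int lastY = SDL_min((camY + 768) / tileSize + 1, MAP_H);
    for (int y = firstY; y < lastY; ++y) {
        for (int x = firstX; x < lastX; ++x) {
            SDL_Rect r = { x * tileSize - camX, y * tileSize - camY, tileSize, tileSize };
            SDL_SetRenderDrawColor(gRenderer, 0, 128, 0, 255); // look up the tile's color here
            SDL_RenderFillRect(gRenderer, &r);
        }
    }
}

At 60x60 this is at most roughly 19x14 fill calls per frame instead of 257x257.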

Related

OpenGL 2d Game Rendering layers using Z coordinates

I'm in the process of converting the rendering of my top-down 2D game from the SDL2 draw functions to direct OpenGL, so that I can take advantage of batch rendering and shaders.
Currently, for drawing my various sprites, I use an array of layers, and each layer has an array of gameObjects. When rendering, I simply loop through the layers and all the gameObjects in each layer so that everything is drawn in the right order: layer 0 is background objects, layer 1 is the main level, and layer 2 is the foreground.
While designing the OpenGL code, I'm realizing that I now have the option to draw on the Z axis. So I'm wondering: does it make sense to get rid of my layer arrays and simply give each gameObject a layer value that represents a particular Z coordinate? I could have Z values 3, 2, and 1, and just let the OpenGL draw calls handle what goes on top of or in front of what.
Some of my textures WILL have transparent parts, so I'll need that to work as well.
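For illustration, a minimal sketch of the approach being asked about, with each quad's z taken from its layer value; drawQuadAtLayer is a hypothetical name, and a current compatibility-profile GL context with a depth buffer, glEnable(GL_DEPTH_TEST), and an orthographic projection covering the layer range are all assumed:

#include <GL/gl.h>

// Draw a quad at z = layer so the depth test, not draw order,
// decides what ends up in front (e.g. 0 = background, 2 = foreground).
void drawQuadAtLayer(float x, float y, float w, float h, int layer)
{
    const float z = (float)layer;
    glBegin(GL_QUADS);
    glVertex3f(x,     y,     z);
    glVertex3f(x + w, y,     z);
    glVertex3f(x + w, y + h, z);
    glVertex3f(x,     y + h, z);
    glEnd();
}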

Scaling and Performance

I've never been able to understand the best practice in this context. I usually want to ship my game with the minimum possible size, so wherever possible I scale graphics up. Suppose I have to draw a 1000x300 px yellow wall in my game. I usually just use a 3x3 px yellow image and stretch it in-game (using a nearest-neighbour filter). Is this the right approach?
Consider another situation. Suppose I wish to render rain in my game: basically 2x30 px blue-white gradient streaks, with at most 200 drops rendered at any time. Now, if I just ship a 2x6 px streak with the game and scale it at runtime, will it affect performance?
In short, how does scaling affect performance in OpenGL?
It seems you are asking whether scaling the textures will affect rendering performance.
For your wall example there is no difference in the vertex shader. In the fragment shader you sample the texture: the GPU multiplies the texture coordinate by the size of the texture, rounds the resulting coordinates, and grabs the corresponding pixel from the texture buffer. It doesn't matter how large the texture is; the operations are the same.
The same goes for the rain, except that linear filtering will fetch a few extra pixels and blend them according to how close the texture coordinate is to each of them.
Beyond that, think of the memory required to store the textures on the GPU: 9 pixels (36 bytes at 4 bytes per pixel) need a lot less memory than 300,000 pixels (about 1.2 MB).
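As an illustration of that lookup, a nearest-neighbour sample costs the same handful of operations regardless of texture size; sampleNearest is a hypothetical CPU-side sketch of what the GPU does per fragment:

#include <cstdint>
#include <cmath>

// Scale the coordinate by the texture size, round, fetch one texel.
uint32_t sampleNearest(const uint32_t* texels, int texW, int texH, float u, float v)
{
    int x = (int)std::lround(u * (texW - 1));
    int y = (int)std::lround(v * (texH - 1));
    return texels[y * texW + x]; // one fetch whether texW is 3 or 3000
}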
Let me give you a little advice. If your goal is to minimize app size, I would recommend generating a one-pixel white Texture and using it with different colors everywhere you can (e.g. a wall of solid green):
public static Texture createPixelTexture() {
    // draw one white pixel into a 1x1 RGBA8888 pixmap
    Pixmap pixmap = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
    pixmap.drawPixel(0, 0, Color.WHITE.toIntBits());
    // upload it to the GPU, then free the CPU-side pixmap
    Texture texture = new Texture(pixmap);
    pixmap.dispose();
    return texture;
}
Be aware that this method gives you an unmanaged texture: you have to recreate it whenever your app loses its GL context, and of course you have to call dispose() on it.
This is suitable when you just need to draw something monochrome. If you need gradients, you have another problem... In that situation you can still use the same one-pixel white texture with a custom fragment shader. But that path is for serious folks who love getting into trouble and then working their way out (sometimes very slowly), because managing different shaders (you'll need at least two, I'm sure) will complicate your drawing cycle, and you have to manage them somehow...
So, I just wanted to give you a pointer. Good luck!
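Since most of this compilation is about SDL, here is the same one-pixel trick sketched in SDL2/C++ rather than libGDX; createPixelTexture is a hypothetical helper, and error handling is omitted:

#include <SDL.h>

// Create a 1x1 white texture that can be tinted to any solid color at
// draw time with SDL_SetTextureColorMod, mirroring the libGDX version.
SDL_Texture* createPixelTexture(SDL_Renderer* renderer)
{
    SDL_Surface* s = SDL_CreateRGBSurfaceWithFormat(0, 1, 1, 32, SDL_PIXELFORMAT_RGBA8888);
    SDL_FillRect(s, NULL, SDL_MapRGBA(s->format, 255, 255, 255, 255));
    SDL_Texture* t = SDL_CreateTextureFromSurface(renderer, s);
    SDL_FreeSurface(s);
    return t;
}

// Usage: tint and stretch the pixel into a 1000x300 green wall.
// SDL_SetTextureColorMod(pixel, 0, 255, 0);
// SDL_Rect wall = { 0, 0, 1000, 300 };
// SDL_RenderCopy(renderer, pixel, NULL, &wall);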

(python) pygame opengl is much slower than blitting

I just implemented basic OpenGL rendering in my pygame application, thinking hardware acceleration would make the program run faster. Instead it is much, much slower.
It looks like the problem is the drawing function.
Here is my OpenGL drawing function:
def draw(self, screen):
    rect = self.texture.imagerect.copy()
    rect.x += self.xoffset
    rect.y += self.yoffset
    halfWidth = self.getWidth() / 2
    halfHeight = self.getHeight() / 2
    glEnable(GL_TEXTURE_2D)
    glBindTexture(GL_TEXTURE_2D, self.texture.getTexID())
    self.color.setGLColor()
    glPushMatrix()
    glTranslatef(rect.x, rect.y, 0)
    glRotatef(self.angle, 0, 0, 1)
    glBegin(GL_QUADS)
    glTexCoord2d(0, 0)
    glVertex2f(-halfWidth + self.pivot.x, -halfHeight + self.pivot.y)
    glTexCoord2d(0, 1)
    glVertex2f(-halfWidth + self.pivot.x, -halfHeight + self.getHeight() + self.pivot.y)
    glTexCoord2d(1, 1)
    glVertex2f(-halfWidth + self.getWidth() + self.pivot.x, -halfHeight + self.getHeight() + self.pivot.y)
    glTexCoord2d(1, 0)
    glVertex2f(-halfWidth + self.getWidth() + self.pivot.x, -halfHeight + self.pivot.y)
    glEnd()
    glPopMatrix()
What my profiler gives for the draw function:
ncalls tottime percall cumtime percall filename:lineno(function)
312792 20.395 0.000 34.637 0.000 image.py:61(draw)
The rest of my profiler output (expires in 1 month):
http://pastebin.com/ApfiCQzw
My source code:
https://bitbucket.org/claysmithr/warbots/src
Note: when I set it to not draw any tiles I get 60 fps! I also get 20 fps if I limit it to drawing only the tiles that appear on the screen, but this is still much slower than blitting.
Number of tiles I'm trying to draw (64x64 each): 15,625
Is there any way to test whether I am really hardware accelerated?
Should I just go back to blitting?
Edit: Does blitting automatically skip tiles that are not on the screen? That could be the reason why OpenGL is being so slow!
If I've understood correctly, you need to draw thousands of these textured quadrangles every frame. The slowdown comes from the multiple OpenGL calls you are making for each quadrangle - at least 16 in the code above.
What you need to do is draw in batches, many more primitives at a time.
To start with, can you merge tiles into bigger units? Any graphics card made in the last decade can handle 16K x 16K texture maps. The more quads you can draw without having to bind a new texture, the better.
Replace the glBegin .. glEnd blocks with vertex arrays. Even in the worst case of still drawing one quad at a time, you can replace the current 10 OpenGL calls with just 2, glVertexPointer and glTexCoordPointer.
Then start merging tile quads into bigger vertex arrays. Instead of having a glTranslatef, add the rect.x and rect.y values directly to each vertex.
The glRotatef is a problem if it really does have to be different for each tile. If it's limited to multiples of 90 degrees then you don't need it, instead just swap the texture coords around. For other values, work out how to use sin and cos to directly rotate the texture coords.
Once you've eliminated the per-tile translate and rotate, you can put all the computed quad vertex and texture coordinates into giant vertex arrays and draw the entire map with just a couple of calls; see the sketch below.
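A rough sketch of that batching in C-style OpenGL 1.x (the same calls exist under the same names in PyOpenGL); Batch, appendTile, and drawBatch are hypothetical names, and a current GL context with the tile atlas already bound is assumed:

#include <GL/gl.h>
#include <vector>

struct Batch {
    std::vector<GLfloat> verts;     // x,y per vertex
    std::vector<GLfloat> texCoords; // u,v per vertex
};

// Append one tile quad with pre-translated positions (no glTranslatef).
void appendTile(Batch& b, float x, float y, float size,
                float u0, float v0, float u1, float v1)
{
    const GLfloat v[] = { x, y, x + size, y, x + size, y + size, x, y + size };
    const GLfloat t[] = { u0, v0, u1, v0, u1, v1, u0, v1 };
    b.verts.insert(b.verts.end(), v, v + 8);
    b.texCoords.insert(b.texCoords.end(), t, t + 8);
}

// Draw every accumulated quad with a single glDrawArrays call.
void drawBatch(const Batch& b)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, b.verts.data());
    glTexCoordPointer(2, GL_FLOAT, 0, b.texCoords.data());
    glDrawArrays(GL_QUADS, 0, (GLsizei)(b.verts.size() / 2));
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}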
Hope this helps.
(For real hyper-performance you'd probably want to use GPU-side vertex buffer objects and shaders, but from your coding style I assume you want to stick with OpenGL 1/2.)

Diffusion between two tiles

I'm making my first thing with libgdx: a tiled 2D game. Say I have two types of tiles, blue and green. They are 32x32 pixel images, each covering one cell of the game field. I want to be able to create a transition between tiles such as the one on the right of the attached image. Blue and green don't mean all pixels in a tile are the same color; it just defines which texture a pixel comes from.
I'm not asking about an algorithm; I've already done it via canvas in JavaScript. I just need some direction on which classes/techniques/solutions to use specifically in libgdx.
So I need to take pixels from the blue texture and draw them over the green one. Is there a way to do this with a shader, or maybe by directly reading pixel values from the blue tile's texture?
Say I already have all my textures (with no transition sprites precalculated) loaded in a TextureAtlas. Which classes should I use next to get the desired effect?
Update: here is a rough example of my current code. My gameScreen.render() method looks simply like this:
batch.begin();
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        Sprite sprite = getFloorSpriteByCell(cells[x][y]);
        batch.draw(sprite, x * 32, y * 32);
    }
}
batch.end();
and in getFloorSpriteByCell() I just choose between some preloaded sprites; that's all, no fancy level editing in a fancy GUI thing.
I don't use tilemaps; I just need to draw some part of one texture on top of another during rendering.
Welp, I finished it, and here is how it looks (the original post shows before/after screenshots: without diffusion and with diffusion).
The code itself, I guess, is too large and project-specific to post here. Here are the steps (the LibGDX classes are the ones starting with a capital letter):
1. Pick the individual pixels of the image you are diffusing into and put them into a Pixmap (if you have your textures in a TextureAtlas, as I had, use the PixmapTextureAtlas class from this question on GoogleCode to get Pixmaps).
2. Draw these Pixmaps to Textures.
3. Then, as usual, draw all Textures with a SpriteBatch.
Hope this will be useful for someone. Contact me if you need the actual code; a rough sketch of the per-pixel step follows below.
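The author kept the LibGDX code to themselves, but the per-pixel step looks roughly like this when sketched with SDL2 surfaces in C++ (to stay consistent with the rest of this compilation); diffuseTiles and maskSelects are hypothetical names, and both surfaces are assumed to be 32-bit RGBA and the same size:

#include <SDL.h>

// Example mask: which pixels of the blue tile bleed into the green one.
static bool maskSelects(int x, int y) { return x < 16 && (x + y) % 2 == 0; }

// Copy the selected pixels of "blue" over "green" to build a transition tile.
void diffuseTiles(SDL_Surface* blue, SDL_Surface* green)
{
    SDL_LockSurface(blue);
    SDL_LockSurface(green);
    Uint32* src = (Uint32*)blue->pixels;
    Uint32* dst = (Uint32*)green->pixels;
    for (int y = 0; y < green->h; ++y)
        for (int x = 0; x < green->w; ++x)
            if (maskSelects(x, y))
                dst[y * (green->pitch / 4) + x] = src[y * (blue->pitch / 4) + x];
    SDL_UnlockSurface(green);
    SDL_UnlockSurface(blue);
}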

Scaling sprites in SDL

How can I scale sprites in SDL?
SDL doesn't provide scaling functionality directly, but there's an additional library called SDL_gfx which provides rotation and zooming capabilities. There's also another library called Sprig which provides similar features.
You can do scaling if you are drawing sprites from a texture with SDL_RenderCopy(), but I cannot guarantee antialiasing.
SDL_RenderCopy() takes 4 parameters:
a pointer to a renderer (where you are going to render),
a pointer to a texture (where you get the sprite from),
a pointer to the source rect (the area and position of the sprite on the texture),
and a pointer to the dest rect (the area and position on the renderer where you are going to draw).
You only need to modify your dest rect: for example, if you are rendering a 300x300 image and want it scaled, make your dest rect 150x150 or 72x72 or whatever size you want to scale to, as in the sketch below.
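A minimal sketch of that call, assuming renderer and texture already exist and the source image is 300x300:

// Draw the whole 300x300 texture into a 150x150 destination: half scale.
SDL_Rect src = { 0, 0, 300, 300 };
SDL_Rect dest = { 100, 100, 150, 150 };
SDL_RenderCopy(renderer, texture, &src, &dest);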
You haven't provided any code, so I'm going to assume you are using textures and an SDL_Renderer:
When using SDL_RenderCopy() the texture will be stretched to fit the destination SDL_Rect, so if you make the destination SDL_Rect larger or smaller you can perform a simple scaling of the texture.
https://wiki.libsdl.org/SDL_RenderCopy
The solution from Ibrahim CS works.
Let me expand on top of it and provide the code.
Another thing to note is to calculate the new (x, y) position, with the top-left as the origin, at which to render the scaled texture.
I do it like this:
// SDL_Texture is opaque, so query its size first
int w, h;
SDL_QueryTexture(texture, NULL, NULL, &w, &h);
// calculate a new top-left corner so the texture scales around its center
int new_x = (x + w / 2) - (int)(w / 2 * scale);
int new_y = (y + h / 2) - (int)(h / 2 * scale);
// form the new destination rect
SDL_Rect dest_rect = { new_x, new_y, (int)(w * scale), (int)(h * scale) };
// render
SDL_RenderCopy(renderer, texture, NULL, &dest_rect);
assuming that texture is an SDL_Texture*, renderer is an SDL_Renderer*, and scale is a float; the whole input texture is rendered into the destination rect.
If you use SFML instead, you get a very similar set of cross-platform capabilities, but the graphics are hardware accelerated and features such as scaling and rotation come for free, both in terms of needing no additional dependencies and in terms of taking no noticeable CPU time.
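For instance, a minimal SFML 2 sketch ("sprite.png" is a hypothetical asset path):

#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(1024, 768), "scaling demo");
    sf::Texture texture;
    texture.loadFromFile("sprite.png");
    sf::Sprite sprite(texture);
    sprite.setScale(2.f, 2.f); // drawn at twice the size, scaled on the GPU
    sprite.setRotation(45.f);  // rotation is equally free
    while (window.isOpen()) {
        sf::Event e;
        while (window.pollEvent(e))
            if (e.type == sf::Event::Closed)
                window.close();
        window.clear();
        window.draw(sprite);
        window.display();
    }
    return 0;
}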