OpenGL texture vs FBO - c++

I am pretty new to OpenGL and just want some quick advice. I want to draw a tiled background for a game, which I guess means drawing a whole bunch of sprite-like objects to the screen. I have about 48 columns by 30 rows, so 1440 tiles (the tiles change depending on the game, so I can't pre-render the entire grid).
Currently, on startup, I create six different FBOs (using the ofFbo class from OpenFrameworks) that act as six different tiles. I then draw these buffers, up to 1440 times, selecting one for each tile. So there are only ever six FBOs, just drawn a lot of times. (The buffers are drawn to on startup and are never changed once created.)
for (int x = 0; x < columns; x++) {
    for (int y = 0; y < rows; y++) {
        // Get tile type and rotation from the tile struct.
        tileNum = tile.form;
        rotNum = tile.rot;
        // Draw the image/texture/fbo stored in a std::vector,
        // selected by the tile type.
        tileSet->at(tileNum).draw(x * TILESIZE, y * TILESIZE, TILESIZE, TILESIZE);
    }
}
I think I am going about this the wrong way, and was wondering if anyone knew the best / optimal way to do this. Think something like an old-school 8-bit video game background. Here is an image of my work in progress.
The structures in the background are the sprites I'm talking about; the different pieces are the inner corner, the outer (concave) corner, the square fill, and the straight edge. Sorry for the messy question.

I don't really understand your question. A texture is a (usually 2-dimensional) image that can be applied to polygons, whereas an FBO (framebuffer object) is a kind of offscreen buffer that can be rendered into instead of the screen, usually used to render directly into textures. So I don't see where your question could be "textures vs FBOs" in any way, as they're quite orthogonal concepts, and using FBOs doesn't make any sense in your example. Maybe you're mixing up FBOs with VBOs (vertex buffer objects) or PBOs (pixel buffer objects)? But then your question is still quite ill-posed.
EDIT: If you're really using FBOs, and only because you think they magically make the texture they reference be stored in video memory, then rest assured that textures are always stored in video memory, and using an FBO in your case is completely useless.
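For reference, a minimal sketch of the two concepts side by side (plain GL calls, error checking omitted; width, height and pixels are placeholders):

// A texture: image data living in video memory.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height,
             0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// An FBO: a render target, holding no pixels of its own. Here it merely
// redirects rendering into the texture above.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
// ...anything drawn now lands in tex instead of the window...
glBindFramebuffer(GL_FRAMEBUFFER, 0);

If you only ever draw the tiles as-is and never render into them, the texture alone is all you need.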

For what you're doing, I would recommend you just use a standard OpenGL texture.
I think I understand what you're trying to do, but FBOs retain far more information than you need and will thus take much longer to render onto your destination (which is an FBO).

Related

Reducing OpenGL draw calls vs binding smaller textures

I'm making an isometric (2D) game with SFML. I handle the drawing order (depth) by sorting all drawables by their Y position and it works perfectly well.
The game uses an enormous amount of art assets, such that the NPC, monster and player graphics alone are contained in their own 4K texture atlas. It is logistically not possible for me to put everything into one atlas; the target devices would not be able to handle textures of that size. Please do not focus on WHY it's impossible, and understand that I simply MUST use separate files for my textures in this case.
This causes a problem. Let's say I have a level with 2 NPCs and 2 pillars. The NPCs are in NPCs.png and the pillars are in CastleLevel.png. Depending on where the NPCs move, the drawing order (hence the OpenGL texture binding order) can be different. Let's say the Y positions are sorted like this:
npc1, pillar1, npc2, pillar2
This would mean that OpenGL has to switch between the 2 textures twice. My question is, should I:
a) keep the texture atlases, OR
b) divide them all into smaller PNG files (1 PNG per NPC, 1 PNG per pillar, etc.)? Since the textures must be changed multiple times anyway, would it improve performance if OpenGL had to bind smaller textures instead?
Is it worth keeping the texture atlases because it will SOMETIMES reduce the number of draw calls?
Since the textures must be changed multiple times anyway, would it improve performance if OpenGL had to bind smaller textures instead?
Almost certainly not. The cost of a texture bind is fixed; it isn't based on the texture's size.
It would be better for you to either:
Properly batch your rendering. That is, when you say "draw NPC1", you don't actually draw it yet. You stick some data in an array, and later on, you execute "draw NPCs", which draws all of the NPCs you've buffered in one go (see the sketch after this list).
Use a bigger texture atlas, probably involving array textures. Each layer of the array texture would be one of the atlases you load. This way, you only ever bind one texture to render your scene.
Deal with it. 2D games aren't exactly stressful on the GPU or CPU. The overhead from the additional state changes will not be what knocks you down from 60FPS to 30FPS.
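To illustrate the first option, here is a rough sketch of a hand-rolled sprite batcher (not an SFML or GL API; it assumes a GL context with a VBO bound and vertex attributes already configured):

#include <vector>

struct Vertex { float x, y, u, v; };

struct SpriteBatch {
    GLuint texture = 0;
    std::vector<Vertex> vertices;

    // Queue a sprite; nothing is drawn yet unless the texture changes.
    void add(GLuint tex, float x, float y, float w, float h,
             float u0, float v0, float u1, float v1) {
        if (tex != texture) flush();   // a texture switch forces a flush
        texture = tex;
        vertices.insert(vertices.end(), {   // two triangles per sprite
            {x, y, u0, v0}, {x + w, y, u1, v0}, {x + w, y + h, u1, v1},
            {x, y, u0, v0}, {x + w, y + h, u1, v1}, {x, y + h, u0, v1},
        });
    }

    // Issue one draw call for everything queued so far.
    void flush() {
        if (vertices.empty()) return;
        glBindTexture(GL_TEXTURE_2D, texture);
        glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex),
                     vertices.data(), GL_STREAM_DRAW);
        glDrawArrays(GL_TRIANGLES, 0, (GLsizei)vertices.size());
        vertices.clear();
    }
};

With your Y-sorted list, npc1/pillar1/npc2/pillar2 still costs four flushes in the worst case, but any run of consecutive sprites sharing a texture collapses into a single draw call.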

Zoom window in OpenGL

I've implemented Game of Life using OpenGL buffers (as specified here: http://www.glprogramming.com/red/chapter14.html#name20). In this implementation each pixel is a cell in the game.
My program receives the initial state of the game (a 2D array). The array size, in my implementation, is the size of the window. This of course makes it "unplayable" if the array is 5x5 or some other small size.
At each iteration I'm reading the content of the framebuffer into a 2D array (its size is the window size):
glReadPixels(0, 0, win_x, win_y, GL_RGB, GL_UNSIGNED_BYTE, image);
Then I do the necessary steps to calculate the living and dead cells, and draw a rectangle that covers the whole window, using:
glRectf(0, 0, win_x, win_y);
I want to zoom (or enlarge) the window without affecting the correctness of my code. If I resize the window, then the framebuffer content won't fit inside image (the array). Is there a way of zooming the window (so that each pixel is drawn as several pixels) without affecting the framebuffer?
First, you seem to be learning OpenGL 2. I would suggest learning a newer version instead, as it is more powerful and efficient. A good tutorial can be found here: http://www.opengl-tutorial.org/
If I understand this correctly, you read in an initial state and draw it, then continuously read back the pixels on the screen, update the array based on the Game of Life logic, and draw it again? That seems overly complicated.
Reading the pixels back from the screen is unnecessary, and will cause complications if you try to enlarge the rects to more than a pixel.
I would say a good solution would be to keep a bit array (1 is an organism, 0 is not), possibly as a 2D array in memory, updating the logic at a fixed rate (say 30 updates per second for 30 FPS), then drawing all the rects to the screen: black for 1, white for 0, using glColor4f(r, g, b, a) tied to an if statement in a nested for loop.
Then, if you give your rects a negative z coordinate, you can "zoom in" using glTranslatef(x, y, z) triggered by a keyboard button.
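A rough sketch of that idea (legacy GL; cells, cellSize and zoom are illustrative names; I'm using glScalef for the zoom here, which works with the usual orthographic 2D setup, whereas the z-translate trick above requires a perspective projection):

#include <vector>

void drawGrid(const std::vector<std::vector<int>>& cells,
              float cellSize, float zoom) {
    glPushMatrix();
    glScalef(zoom, zoom, 1.0f);               // each cell covers zoom x zoom pixels
    for (size_t y = 0; y < cells.size(); ++y) {
        for (size_t x = 0; x < cells[y].size(); ++x) {
            if (cells[y][x])
                glColor3f(0.0f, 0.0f, 0.0f);  // organism: black
            else
                glColor3f(1.0f, 1.0f, 1.0f);  // empty: white
            glRectf(x * cellSize, y * cellSize,
                    (x + 1) * cellSize, (y + 1) * cellSize);
        }
    }
    glPopMatrix();
}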
Of course, in a newer version of OpenGL, vertex buffers would make the code much cleaner and more efficient.
You can't store your game state directly in the window framebuffer and then resize it for rendering, since what is stored in the framebuffer is by definition what is about to be rendered. (You could overwrite it, but then you lose your game state...) The simplest solution would be to just store the game state in an array (on the client side) and then update a texture based on that. Thus for each cell that was set, you would set a pixel in a texture to the appropriate color. Each frame, you then render a full-screen quad with that texture (with GL_NEAREST filtering).
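A sketch of that per-frame update (assuming stateTex was allocated once with glTexImage2D, and W, H and state are the grid size and the client-side byte array, one byte per cell):

glBindTexture(GL_TEXTURE_2D, stateTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);        // rows of W bytes, no padding
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                GL_RED, GL_UNSIGNED_BYTE, state.data());
// ...then draw a full-screen quad textured with stateTex; the window size
// no longer has to match the grid size.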
However, if you want to take advantage of your GPU, there are some tricks that could massively speed up the simulation by using a fragment shader to generate the texture. In this case you would actually have two textures that you ping-pong between: one containing the current game state, and the other containing the next game state. Each frame you would use your fragment shader (along with an FBO) to generate the next-state texture from the current-state texture. Afterwards, the two textures are swapped, making the next state become the current state. The current-state texture would then be rendered to the screen the same way as above.
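In outline, the ping-pong loop would look something like this (fbo, texA/texB, and the drawFullscreenQuad/lifeShader/displayShader helpers are all assumed here, not real API):

#include <utility>  // std::swap

GLuint curr = texA, next = texB;
while (running) {
    // Pass 1: compute the next state into `next` while sampling `curr`.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, next, 0);
    glBindTexture(GL_TEXTURE_2D, curr);
    drawFullscreenQuad(lifeShader);    // fragment shader applies the Life rule
    std::swap(curr, next);             // the next state becomes current

    // Pass 2: show the current state in the window.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, curr);
    drawFullscreenQuad(displayShader);
}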
I tried to give an overview of how you might be able to offload the computation onto the GPU, but if I was unclear anywhere just ask! For a more detailed explanation feel free to ask another question.

Mouse-picking using off-screen rendering?

I have a 3D scene with a lot of simple objects (there may be a huge number of them), so I think it's not a very good idea to use ray tracing for picking objects with the mouse.
I'd like to do something like this:
render all these objects into some OpenGL off-screen buffer, using a pointer to the current object instead of its color
render the same scene onto the screen, using real colors
when the user picks a point at (x, y) screen coordinates, I take the value from the off-screen buffer (at the corresponding position) and get a pointer to the object
Is it possible? If yes, what type of buffer can I choose for "drawing with pointers"?
I suppose you can render in two passes: first render the data you need for picking into a buffer or texture, and then in the second pass render the data to be displayed. I am not really familiar with OpenGL, but in DirectX you can do it like this: http://www.two-kings.de/tutorials/dxgraphics/dxgraphics16.html. You could then find a way to analyse the texture. Keep in mind that you are rendering the data twice, which will not necessarily double your render time (as you do not need to apply all your shaders and effects), but it will increase it quite a lot. Also, each frame you are essentially sending at least 2 MB of data (if you go for 1 byte per pixel on a 2K monitor) from GPU to CPU, and that grows if you have more than 256 objects on screen, since you would then need more than one byte per pixel.
Edit: Here is how to do the same with OpenGL, although I cannot verify that the tutorial is correct: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/ (there are also many more if you look around on Google).
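For the OpenGL side, the core of the technique looks roughly like this (pickFbo, objects and setPickColor are illustrative names; note that an index into your object list is safer than a raw pointer, since a pointer won't fit into 24 bits of color):

// Off-screen pass: draw each object with a unique color encoding its index.
glBindFramebuffer(GL_FRAMEBUFFER, pickFbo);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);          // 0xFFFFFF means "nothing hit"
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (size_t i = 0; i < objects.size(); ++i) {
    setPickColor(i & 0xFF, (i >> 8) & 0xFF, (i >> 16) & 0xFF);  // index -> RGB
    objects[i].draw();
}

// On click at window coordinates (x, y); GL's origin is bottom-left.
GLubyte px[3];
glReadPixels(x, windowHeight - y - 1, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, px);
size_t picked = px[0] | (px[1] << 8) | (px[2] << 16);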

Sprite Sheet With OpenGL and SDL

I've been working on a new game, and finally reached the point where I've started to code the motion of my main character, but I have a doubt about how to do that.
Previously, I made two games in Allegro, where sprite sheets are kind of easy to implement, because I establish the frame and position on the image and save every frame in a different bitmap. But I know that doing that with OpenGL is not necessary and costs a little bit more.
So I've been thinking about how to store my sprite sheet and use it in my program, and I have only one idea.
I load the image and turn it into a texture, and in the function that helps me animate, I simply grab a portion of the texture to draw, instead of storing every single frame as a separate texture in my program.
Is this the best way to do that?
Thanks in advance for the help.
You're on the right track.
Things to consider:
leave enough dead space around each sprite so that the video card does not blend in texels from adjacent sprites at small scales.
set texture min/mag filtering appropriately. GL_NEAREST is OK if you're going for the blocky look.
if you want to be fancy and save some texture memory, there's no reason that the sprites have to be laid out in a regular grid. Smaller sprites can be packed closer in the texture.
if your sprites are being rendered from 3D models, you could output normal & displacement maps from the model into another texture, then combine them in a fragment shader for some awesome lighting and self-shadowing.
You've got the right idea: if you have a bunch of sprites, it is much better to just stick them all in one big texture. Just draw your sprites as textured quads whose texture coordinates index into the frame of the sprite. You can do a few optimizations, but most of them revolve around trying to get the most out of your texture memory and packing the sprites closely together without blending issues.
I know that doing that with OpenGL is not necessary and costs a little bit more.
Why not? There are no real downsides to putting a lot of sprites into a single texture. All you need to do is change the texture coordinates to pick the region in question out of the texture.
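As a sketch, picking a frame out of a regular grid only takes a little arithmetic (all names illustrative):

struct UVRect { float u0, v0, u1, v1; };

// Texture coordinates for frame `i` of a sheet laid out as a regular grid.
UVRect frameUV(int i, int framesPerRow, int frameW, int frameH,
               int sheetW, int sheetH) {
    int col = i % framesPerRow;
    int row = i / framesPerRow;
    UVRect r;
    r.u0 = float(col * frameW) / sheetW;
    r.v0 = float(row * frameH) / sheetH;
    r.u1 = float((col + 1) * frameW) / sheetW;
    r.v1 = float((row + 1) * frameH) / sheetH;
    return r;   // apply these as the quad's texture coordinates
}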

Using Vertex Buffer Objects for a tile-based game and texture atlases

I'm creating a tile-based game in C# with OpenGL and I'm trying to optimize my code as best as possible.
I've read several articles and sections in books, and all come to the same conclusion (as you may know): that the use of VBOs greatly increases performance.
I'm not quite sure, however, how they work exactly.
My game will have tiles on the screen, some will change and some will stay the same. To use a VBO for this, I would need to add the coordinates of each tile to an array, correct?
Also, to texture these tiles, would I have to create a separate VBO for this?
I'm not quite sure what the code would look like for tiling these coordinates if I've got tiles that are animated and tiles that will be static on the screen.
Could anyone give me a quick rundown of this?
I plan on using a texture atlas of all of my tiles. I'm not sure where to begin to use this atlas for the textured tiles.
Would I need to compute the coordinates of the tile in the atlas to be applied? Is there any way I could simply use the coordinates of the atlas to apply a texture?
If anyone could clear up these questions it would be greatly appreciated. I could even possibly reimburse someone for their time & help if wanted.
Thanks,
Greg
OK, so let's split this into parts. You didn't specify which version of OpenGL you want to use - I'll assume GL 3.3.
VBO
Vertex buffer objects, when considered as an alternative to client vertex arrays, mostly save GPU bandwidth. A tile map is not really a lot of geometry. However, in recent GL versions vertex buffer objects are the only way of specifying vertices (which makes a lot of sense), so we cannot really talk about "increasing performance" here. If you mean "compared to deprecated vertex specification methods like immediate mode or client-side arrays", then yes, you'll get a performance boost, but you'd probably only feel it with 10k+ vertices per frame, I suppose.
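For completeness, a minimal GL 3.3 buffer setup looks like this (shown as C-style GL calls; your C# binding should mirror them one-to-one; tileVertices is an illustrative std::vector<float> of interleaved x, y, u, v data):

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, tileVertices.size() * sizeof(float),
             tileVertices.data(), GL_STATIC_DRAW);
// attribute 0: position (x, y); attribute 1: texcoord (u, v)
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float),
                      (void*)(2 * sizeof(float)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);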
Texture atlases
Texture atlases are indeed a nice feature to save on texture switching. However, on GL3 (and DX10)-enabled GPUs you can save yourself a LOT of the trouble characteristic of this technique, because a more modern and convenient approach is available. Check the GL reference docs for TEXTURE_2D_ARRAY - you'll like it. If GL3 cards are your target, forget texture atlases. If not, have a Google to see which older cards support texture arrays as an extension; I'm not familiar with the details.
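Creation is straightforward (W, H, layerCount and pixels are placeholders; each tile image becomes one layer, and all layers must share one size):

GLuint texArray;
glGenTextures(1, &texArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, W, H, layerCount,
             0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);   // allocate all layers
for (int layer = 0; layer < layerCount; ++layer)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                    W, H, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels[layer]);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);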
Rendering
So how to draw a tile map efficiently? Let's focus on the data. There are lots of tiles, and each tile has the following information:
grid position (x,y)
material (let's call it "material" not "texture" because as you said the image might be animated and change in time; the "material" would then be interpreted as "one texture or set of textures which change in time" or anything you want).
That should be all the "per-tile" data you'd need to send to the GPU. You want to render each tile as a quad or triangle strip, so you have two alternatives:
send 4 vertices (x,y),(x+w,y),(x+w,y+h),(x,y+h) instead of (x,y) per tile (see the sketch below this list),
use a geometry shader to calculate the 4 points along with texture coords for every 1 point sent.
Pick your favourite. Also note that this choice directly corresponds to what your VBO is going to contain - the latter solution would make it 4x smaller.
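A sketch of the first alternative's vertex data (names illustrative; draw with an index buffer or one triangle strip per tile):

#include <vector>

struct TileVertex {
    float x, y;       // position on the grid
    float u, v;       // texcoord within the tile's image
    float material;   // selects the texture array layer / material
};

void appendTile(std::vector<TileVertex>& out, float x, float y,
                float w, float h, float material) {
    out.push_back({x,     y,     0.0f, 0.0f, material});
    out.push_back({x + w, y,     1.0f, 0.0f, material});
    out.push_back({x + w, y + h, 1.0f, 1.0f, material});
    out.push_back({x,     y + h, 0.0f, 1.0f, material});
}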
For the material, you can pass it as a symbolic integer, and in your fragment shader - based on the current time (passed as a uniform variable) and the material ID for a given tile - you can decide on the texture ID from the texture array to use. In this way you can make a simple texture animation.
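An illustrative fragment shader for that idea (GLSL 330, embedded as a C++ raw string; the two-frames-per-material layout is just an example, not a fixed convention):

const char* tileFragSrc = R"(
    #version 330 core
    uniform sampler2DArray atlas;
    uniform float time;        // seconds, updated once per frame
    in vec2 vUV;
    flat in float vMaterial;   // per-tile material ID from the vertex data
    out vec4 fragColor;
    void main() {
        // Example: two animation frames per material, swapping twice a second.
        float frame = mod(floor(time * 2.0), 2.0);
        float layer = vMaterial * 2.0 + frame;
        fragColor = texture(atlas, vec3(vUV, layer));
    }
)";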