So I am making a fairly complex 2D game using modern OpenGL. Right now I am passing a VBO with the model matrix, texture coordinates, etc. for all of my sprites (it's an "Entity"-based game, so everything is essentially a sprite) to the shader and using glDrawElements to draw them. Everything is working fine: I can draw many thousands of transformed sprites, and I have a camera system working with zoom etc. However, I am using a single sampler2D uniform with my texture atlas. The problem is that I want to be able to use multiple texture atlases. How do other games/engines handle this?
The only thing I can think of is something like this:
I know I can have up to GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS textures bound, but in order to draw them I need to set a sampler2D uniform for each texture and pass them to the shader. Then, in the shader, I need to know which sampler2D to use when I draw, and I would somehow have to figure this out from the data in my VBO (I assume I would pass a texture ID or something for each vertex). Is this approach even possible using glDrawElements, or is there a better/saner way to do this? I realize that I could sort my sprites by texture atlas and use multiple glDrawElements calls, but the problem is that I need the sprites to be in a specific order for layering.
If this isn’t very clear please let me know so I can try and reword it. I am new to OpenGL so it’s hard for me to explain what I am trying to do when I don’t even know what I’m doing.
bind atlas 1
draw objects that use atlas 1
bind atlas 2
draw objects that use atlas 2
...
would be the simple way. Other methods tend to be over-complicated for a 2D game where performance isn't that important. When working with VBOs, you just need an index buffer for every atlas you use.
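A rough C++ sketch of that loop (the struct, buffer and function names here are placeholders, not from your code):

    // Assumes an OpenGL loader header (e.g. glad or GLEW) has been included.
    #include <vector>

    // One VAO/VBO holds every sprite's vertex data; each atlas gets its own
    // element (index) buffer referencing only the sprites drawn from that atlas.
    struct AtlasBatch {
        GLuint  texture;      // the atlas texture object
        GLuint  indexBuffer;  // element buffer for sprites using this atlas
        GLsizei indexCount;   // number of indices in that buffer
    };

    void drawBatches(GLuint vao, const std::vector<AtlasBatch>& batches)
    {
        glBindVertexArray(vao);
        glActiveTexture(GL_TEXTURE0);  // the single sampler2D uniform stays at unit 0
        for (const AtlasBatch& b : batches) {
            glBindTexture(GL_TEXTURE_2D, b.texture);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, b.indexBuffer);
            glDrawElements(GL_TRIANGLES, b.indexCount, GL_UNSIGNED_INT, nullptr);
        }
    }

If strict layering across atlases matters, the same structure still works if you split each layer into per-atlas runs and draw those runs in layer order, at the cost of a few extra draw calls.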
Using OpenGL, is it possible to apply a fragment shader to a specified region around a single vertex i.e. glPoint, rather than creating an array of quads and mapping a texture coordinate of a "duck" to each vertex?
I guess it would be more efficient as it would only require one vertex to be sent to the GPU per duck displayed, rather than the four vertices currently needed.
I have achieved a similar effect by using a geometry shader to build a quad around each vertex. Still, I am wondering if it is possible to achieve the same result without using a geometry shader.
I played around with gl_PointSize, but it is a limited feature, and I don't think it is the proper way to do it.
To summarize, I would like to know whether OpenGL allows filling a region around a single vertex using the fragment shader rather than expressly creating a quad. Is that possible?
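For reference, a minimal sketch of the gl_PointSize / point-sprite route (shaders shown here as C++ string literals; the texture and size names are made up): the vertex shader sets the point's pixel size, and the fragment shader uses gl_PointCoord as the texture coordinate within that square.

    // Requires glEnable(GL_PROGRAM_POINT_SIZE); the maximum size is capped by the
    // implementation (GL_POINT_SIZE_RANGE), which is why the feature feels limited.
    const char* pointVsSrc = R"(
        #version 330 core
        layout(location = 0) in vec2 position;
        uniform float spriteSize;            // desired size in pixels (made-up name)
        void main() {
            gl_Position  = vec4(position, 0.0, 1.0);
            gl_PointSize = spriteSize;       // rasterizer expands this vertex to a square
        }
    )";

    const char* pointFsSrc = R"(
        #version 330 core
        uniform sampler2D duckTexture;       // made-up name
        out vec4 fragColor;
        void main() {
            // gl_PointCoord runs 0..1 across the rasterized point square
            fragColor = texture(duckTexture, gl_PointCoord);
        }
    )";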
I'm trying to use modern OpenGL and shaders, instead of the immediate mode I have been using so far. I recently learned about VBOs and VAOs, and I'm still trying to get my head around them, but I know that a VBO takes an array of floats that are vertices, which it then passes to the GPU, etc.
What is the best way to draw multiple objects (which are all identical) but in different positions, using VBOs? Will I have to draw one, then modify the array passed in beforehand, and then draw it again, and modify and draw and modify and so on... for all blocks on the screen every frame? Or is there a better way?
I'm trying to achieve this: http://imgur.com/cBgJ0sK
Any help is appreciated - I don't want to learn bad (deprecated, old) immediate mode habits, when I could be learning a more modern way!
You should not modify the vertices in your program; that should be done in the shaders. For this, you will create a matrix that represents the transformation and use that matrix in the vertex shader.
The main idea is:
You create a VAO holding the information of your VBO (vertices, normals, texture coordinates, tangent information, etc.)
Then, for every different object, you generate a model matrix that holds the information of the position, orientation and scale (and other homogeneous transformations) and send that to your shader to apply the transformations.
The idea is that you bind your VAO just once and then draw all the different objects, sending only the information that changes (the model matrix, maybe textures) for each one.
To learn about how to use the model matrix, read tutorials like this:
http://ogldev.atspace.co.uk/www/tutorial06/tutorial06.html
There are even better ways to do this, but you can start from here.
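A minimal sketch of that draw loop, assuming a vertex shader that declares "uniform mat4 model" and using GLM just for the matrix math (all names here are illustrative):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>
    #include <vector>

    struct Object { glm::vec3 position; float angle; glm::vec3 scale; };

    // Bind the shared VAO once, then draw each object with its own model matrix.
    void drawAll(GLuint vao, GLuint program, GLsizei indexCount,
                 const std::vector<Object>& objects)
    {
        glUseProgram(program);
        glBindVertexArray(vao);
        GLint modelLoc = glGetUniformLocation(program, "model");
        for (const Object& obj : objects) {
            glm::mat4 model(1.0f);
            model = glm::translate(model, obj.position);
            model = glm::rotate(model, obj.angle, glm::vec3(0, 1, 0));
            model = glm::scale(model, obj.scale);
            glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
        }
    }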
Another thing that would be good for your case is instancing.
http://ogldev.atspace.co.uk/www/tutorial33/tutorial33.html
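A rough sketch of the instanced path (buffer names and attribute locations are assumptions, and "models" is taken to be a std::vector<glm::mat4> of per-object transforms): the model matrices go into a second VBO that is read as a per-instance attribute, so a single call draws every copy.

    // Upload one mat4 per object and mark those attributes as per-instance.
    // A mat4 attribute occupies four consecutive attribute locations (here 3..6).
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);  // assumed, created elsewhere
    glBufferData(GL_ARRAY_BUFFER, models.size() * sizeof(glm::mat4),
                 models.data(), GL_DYNAMIC_DRAW);

    for (int i = 0; i < 4; ++i) {
        glEnableVertexAttribArray(3 + i);
        glVertexAttribPointer(3 + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                              (void*)(sizeof(glm::vec4) * i));
        glVertexAttribDivisor(3 + i, 1);         // advance this attribute once per instance
    }

    // One draw call renders every instance; the per-instance mat4 (or gl_InstanceID)
    // tells the vertex shader which transform to apply.
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            nullptr, (GLsizei)models.size());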
Later, you can move on indirect drawing for even better performance. Later...
I am working on a painting app using the LibGDX framework, though this should be primarily OpenGL related.
Basically, I am looking for a way to prevent the sprites I use to draw from overlapping each other when they aren't fully opaque, as this creates a lot of unpleasant effects. Drawing the sprites at 1.0 alpha onto a texture and then drawing that texture back at the desired alpha gives the effect I want, but that method would involve constantly recreating the texture as the user is drawing, which is far too intensive to be viable.
From what I can see, the best option for me, in basic terms, is to sort of subtract one of these sprites from the other in the fragment shader. I am quite certain this route would work, but I cannot figure out how to get to the point where I can actually compare them in the fragment shader. Both will always use the same single texture, but they will be positioned in different spots. Is it at all possible to actually compare them like that, or is there a suitable alternative?
It's not actually possible to compare two textures that are applied to different geometry (sprites) in the fragment or vertex shader that way, because they will be rendered in separate shader invocations, at different points in time.
You could have two or more texture units to sample and subtract multiple textures, but they would have to be applied to the same vertices (sprites), which I think is not what you want.
A better approach would be to compute the proximity of the sprites before they are rendered. You could then either change their positions, or pass the proximity as a uniform value into the shaders, which could then be used to change the alpha of the fragment pixels for the sprites.
I'm creating a tile-based game in C# with OpenGL and I'm trying to optimize my code as best as possible.
I've read several articles and sections in books and all come to the same conclusion (as you may know) that use of VBOs greatly increases performance.
I'm not quite sure, however, how they work exactly.
My game will have tiles on the screen, some will change and some will stay the same. To use a VBO for this, I would need to add the coordinates of each tile to an array, correct?
Also, to texture these tiles, would I have to create a separate VBO?
I'm not quite sure what the code would look like for tiling these coordinates if I've got tiles that are animated and tiles that will be static on the screen.
Could anyone give me a quick rundown of this?
I plan on using a texture atlas of all of my tiles. I'm not sure where to begin to use this atlas for the textured tiles.
Would I need to compute the coordinates of the tile in the atlas to be applied? Is there any way I could simply use the coordinates of the atlas to apply a texture?
If anyone could clear up these questions it would be greatly appreciated. I could even possibly reimburse someone for their time & help if wanted.
Thanks,
Greg
OK, so let's split this into parts. You didn't specify which version of OpenGL you want to use - I'll assume GL 3.3.
VBO
Vertex buffer objects, when considered as an alternative to client-side vertex arrays, mostly save GPU bandwidth. A tile map is not really a lot of geometry. However, in recent GL versions vertex buffer objects are the only way of specifying vertices (which makes a lot of sense), so we cannot really talk about "increasing performance" here. If you mean "compared to deprecated vertex specification methods like immediate mode or client-side arrays", then yes, you'll get a performance boost, but you'd probably only feel it with 10k+ vertices per frame, I suppose.
Texture atlases
Texture atlases are indeed a nice feature for saving on texture switching. However, on GL3 (and DX10)-enabled GPUs you can save yourself a LOT of trouble characteristic of this technique, because a more modern and convenient approach is available. Check the GL reference docs for TEXTURE_2D_ARRAY - you'll like it. If GL3 cards are your target, forget texture atlases. If not, Google which older cards support texture arrays as an extension; I'm not familiar with the details.
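For example, creating such an array texture could look like this (glTexStorage3D needs GL 4.2 or ARB_texture_storage; on plain GL 3.3 you'd use glTexImage3D with a null pointer instead; tileW, tileH, layerCount and pixels are placeholders):

    // One array texture holding 'layerCount' tile images, each tileW x tileH.
    GLuint tiles;
    glGenTextures(1, &tiles);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tiles);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, tileW, tileH, layerCount);

    for (int layer = 0; layer < layerCount; ++layer) {
        // 'pixels[layer]' is assumed to point at tileW * tileH RGBA bytes.
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                        tileW, tileH, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels[layer]);
    }
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);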
Rendering
So how do you draw a tile map efficiently? Let's focus on the data. There are lots of tiles, and each tile has the following information:
grid position (x,y)
material (let's call it "material" rather than "texture", because, as you said, the image might be animated and change over time; the "material" would then be interpreted as "one texture or set of textures which change in time", or anything you want).
That should be all the "per-tile" data you'd need to send to the GPU. You want to render each tile as a quad or triangle strip, so you have two alternatives:
send 4 vertices (x,y),(x+w,y),(x+w,y+h),(x,y+h) instead of (x,y) per tile,
use a geometry shader to calculate the 4 points along with texture coords for every 1 point sent.
Pick your favourite. Also note that this directly corresponds to what your VBO is going to contain - the latter solution would make it 4x smaller.
For the material, you can pass it as a symbolic integer, and in your fragment shader - based on the current time (passed as a uniform variable) and the material ID for a given tile - you can decide which texture ID from the texture array to use. In this way you can make a simple texture animation.
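A minimal fragment-shader sketch of that idea, written here as a C++ string literal (the layer-numbering scheme, frame rate and uniform names are made up for illustration):

    // Picks a layer of the array texture from a per-tile material ID plus time,
    // giving a simple looping animation. The scheme assumes each material owns a
    // fixed run of consecutive layers in the array texture.
    const char* tileFsSrc = R"(
        #version 330 core
        uniform sampler2DArray tiles;
        uniform float time;                // seconds, passed from the application
        uniform int   framesPerMaterial;   // e.g. 4 animation frames per material
        flat in int   material;            // per-tile ID from the vertex data
        in vec2       uv;
        out vec4      color;
        void main() {
            int frame   = int(time * 8.0) % framesPerMaterial;   // ~8 fps animation
            float layer = float(material * framesPerMaterial + frame);
            color = texture(tiles, vec3(uv, layer));
        }
    )";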
First of all, I have very little knowledge of what shaders can do, and I am very interested in doing vertex lighting. I am attempting to use a 3D colormap which would be used to calculate the vertex color at that position in the world, and also to interpolate the color by using the nearby colors from the colormap.
I can't use typical OpenGL lighting because it's probably too slow and there are a lot of lights I need to render. I am going to "render" the lights into the colormap first, and then I could either manually map every drawn vertex to the corresponding color from the colormap...
...or I could somehow automate this process, so I wouldn't have to change the color values of the vertices myself; perhaps a shader could do this for me?
The question is: is this possible, and if it is, what do I need to know to make it happen?
Edit: Note that I also need to update the lightmap efficiently, without caring about the size of the lightmap, so the update should only touch the specific part of the lightmap I want to change.
It almost sounds like what you want to do is render the lights to your color map, then use your color map as a texture, but instead of decal mode set it to modulate mode, so it's multiplied with the existing color instead of just replacing it.
That is different in one way though: instead of just affecting the vertices, it'll map to the individual fragments (pixels, in essence).
Edit: What I had in mind wasn't a 3D texture -- it was a cube map. Basically, create a virtual cube surrounding everything in your "world". Create a 2D texture for each face of that cube. Render your coloring to the cube map. Then, to color a vertex you (virtually) extend a ray outward from the center, through the vertex, to the cube. The pixel you hit on the cube map gives you the color of lighting for that vertex.
Updating should be relatively efficient -- you have normal 2D textures for the top, bottom, front, etc., and you update them as needed.
If you can't use the fixed-function pipeline functionality, the best way to do per-vertex lighting is to do all the lighting calculations per vertex in the vertex shader; when you then pass the result on to the fragment shader, it will be correctly interpolated across the face.
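A bare-bones version of that per-vertex diffuse calculation, written as a C++ string literal (the uniform names and the single directional light are assumptions):

    // Computes a simple Lambert diffuse term per vertex; the resulting color is
    // interpolated across the triangle on its way to the fragment shader.
    const char* lightVsSrc = R"(
        #version 330 core
        layout(location = 0) in vec3 position;
        layout(location = 1) in vec3 normal;
        uniform mat4 mvp;            // combined model-view-projection matrix
        uniform mat3 normalMatrix;   // transforms normals into world space
        uniform vec3 lightDir;       // normalized direction towards the light
        uniform vec3 lightColor;
        out vec3 vertexColor;
        void main() {
            vec3 n = normalize(normalMatrix * normal);
            float diffuse = max(dot(n, lightDir), 0.0);
            vertexColor = lightColor * diffuse;
            gl_Position = mvp * vec4(position, 1.0);
        }
    )";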
Another way to deal with performance issues when using a lot of light sources is to use deferred rendering, as it will only do lighting calculations on the geometry that is actually visible.
That is possible, but it will not be efficient on current hardware.
You want to render the light volumes into a 3D texture. The rasterizer works on a 2D surface, so your volumes have to be split along one of the axes. The split can be done in one of the following ways:
Different draw calls for each split
Instanced draw, with layer selection based on gl_InstanceID (will require a geometry shader)
Branch in geometry shader directly from a single draw call
In order to implement it, I would suggest reading the GL 3 specification and examples. It's not going to be easy, nor will the result be fast enough for complex scenes.
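For the instanced variant, the core trick is a small geometry shader that routes each instance to one slice of the layered render target via gl_Layer (a sketch, assuming the 3D texture is attached with glFramebufferTexture and the vertex shader forwards gl_InstanceID):

    // Geometry shader: pass each triangle through unchanged, but emit it into the
    // 3D-texture slice selected by the instance ID forwarded from the vertex shader.
    const char* sliceGsSrc = R"(
        #version 330 core
        layout(triangles) in;
        layout(triangle_strip, max_vertices = 3) out;
        flat in int instanceId[];            // written by the vertex shader from gl_InstanceID
        void main() {
            for (int i = 0; i < 3; ++i) {
                gl_Layer    = instanceId[0]; // target slice of the layered attachment
                gl_Position = gl_in[i].gl_Position;
                EmitVertex();
            }
            EndPrimitive();
        }
    )";
    // Application side (sketch): attach the whole 3D texture as a layered target,
    // then issue one instanced draw with one instance per slice.
    //   glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, volumeTexture, 0);
    //   glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, sliceCount);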