I'm trying to implement transparency in OpenGL and from what I've read it's necessary to first render all opaque objects and then render the transparent objects in the correct order.
My issue is how to separate opaque from transparent objects in my scene graph so that I can render the opaque objects first. My scene graph consists of a bunch of nodes which can have entities attached to them, and each entity can be composed of several meshes with different materials, each with one or more textures.
If I load a scene into my graph, I need to know which materials are partially or completely transparent, which means I need to check whether the loaded textures have any alpha values smaller than 1. I'm currently using Assimp to load the models/scenes and SOIL to read the textures, and I haven't found any simple way to separate transparent materials from opaque ones.
This probably has a really simple solution, because I haven't found anyone else asking the same question, but I'm still starting out with OpenGL and I've been stuck on this for the past few hours.
How is transparency normally done, and how are opaque objects separated from partially or fully transparent ones so that they can be rendered first?
For most rendering methods, you don't strictly have to separate the opaque from the transparent objects. If you think about it, transparency (or opacity) is a continuous quality. In OpenGL, the alpha component is typically used to define opacity. Opaque objects have an alpha value of 1.0, but that is just one value in a continuous spectrum. Methods that correctly handle all alpha values will not suddenly fail just because the alpha value happens to be 1.0.
Putting it differently: is an object with an alpha value of 0.9 opaque? What if the alpha value is 0.99, can you justify treating it differently from 1.0? It is really all continuous, not a binary decision.
With that said, there are reasons why it's common to treat opaque objects differently. The main ones I can think of:
Since the non-opaque objects have to be sorted for the common simple transparency rendering methods, you can save work by sorting only the non-opaque objects. Sorting is not cheap, so you reduce processing time this way. For most of these methods, you would still get perfectly fine results by sorting all objects, just less efficiently.
Often, objects cannot be sorted perfectly, or doing so is at least not easy. An obvious problem is when objects overlap, but the challenges go beyond that case (see my answer here for a more in-depth illustration of problematic cases: Some questions about OpenGL transparency). In these cases, you get artifacts from incorrect sorting. By drawing opaque objects with depth testing enabled, you rule out such artifacts for those objects and reduce the overall occurrence of noticeable artifacts.
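To make the common approach behind these two points concrete, here is a minimal sketch of a frame (assuming an OpenGL loader is included; Object, drawObject and the container names are placeholders, not something from your scene graph):

    // Sketch: opaque pass with depth writes, then transparent pass sorted back-to-front.
    #include <algorithm>
    #include <vector>
    // assumes an OpenGL loader header (e.g. glad or GLEW) is included

    struct Object { float distanceToCamera; /* mesh, material, ... */ };
    void drawObject(const Object&); // assumed to issue the actual draw call

    void renderFrame(std::vector<Object>& opaqueObjects,
                     std::vector<Object>& transparentObjects)
    {
        // 1. Opaque objects: depth test and depth writes on, no blending, any order.
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
        for (const Object& o : opaqueObjects)
            drawObject(o);

        // 2. Transparent objects: sort back-to-front, blend, keep the depth test
        //    but disable depth writes so they don't occlude each other.
        std::sort(transparentObjects.begin(), transparentObjects.end(),
                  [](const Object& a, const Object& b) {
                      return a.distanceToCamera > b.distanceToCamera;
                  });
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);
        for (const Object& o : transparentObjects)
            drawObject(o);
        glDepthMask(GL_TRUE);
    }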
Your situation of not knowing which objects contain transparency seems somewhat unusual. In most cases, you know which objects are opaque because you're in control of the rendering, and the content. So having an attribute that specifies if an object is opaque normally comes pretty much for free.
If you really have no way to tell which objects are opaque, a couple of options come to mind. The first is to sort all objects and render them in order. Based on the explanation above, you could encounter performance or quality degradation, but it's worth trying.
There are methods that can render with transparency without any need for sorting, or separating opaque and transparent objects. A simple one that comes to mind is alpha-to-coverage. Particularly if you render with MSAA anyway, it results in no overhead. The downside is that the quality can be mediocre depending on the nature of your scene. But again, it's worth trying.
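Turning it on is just a piece of render state; a minimal sketch, assuming your default framebuffer (or FBO) was created with MSAA samples:

    // The fragment shader's output alpha is turned into a per-sample coverage
    // mask by the hardware, so no sorting or opaque/transparent split is needed.
    glEnable(GL_MULTISAMPLE);               // usually on by default when MSAA is available
    glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);  // alpha drives the coverage mask
    // ... draw everything, in any order, with depth test and depth writes on ...
    glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE);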
You can find a basic explanation of alpha-to-coverage, as well as some other simple transparency rendering methods, in my answer to this question: OpenGL ES2 Alpha test problems.
There are more advanced transparency rendering methods that partly rely on recent hardware features. Covering them goes beyond the scope of a post here (and also largely beyond my knowledge...), but you should be able to find material by searching for "order independent transparency".
Related
I am currently working on a graphics engine for a 2D game with OpenGL, but I'm having trouble implementing a viable solution for partial transparency, since opaque objects need to be drawn first and transparent objects back to front.
Specifically, I want to be able to pass render commands in an arbitrary order, and let the Engine handle the sorting.
My current approach is that I simply collect all vertices and textures of each transparent object in a sorted list, and at the end of each frame I draw them from this list.
Although I get correct results, this is obviously not a viable solution because of all the copying involved. With a few thousand partially transparent particles, this approach does not work at all due to its low performance.
I did not find any other way to implement this for 2D graphics, so my question is: what is the most common approach to this? Are there any sources where I can read more about this topic?
I use glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), by the way.
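For reference, the per-frame flow I have now looks roughly like this sketch (type and function names are made up for illustration):

    #include <map>
    #include <vector>
    // assumes an OpenGL loader header is included

    struct SpriteData { /* copied vertices, texture id, ... */ };
    void drawSprite(const SpriteData&);  // placeholder for the actual draw

    // keyed by depth, iterated largest-depth-first, i.e. back to front
    std::multimap<float, SpriteData, std::greater<float>> transparentQueue;

    void submitTransparent(float depth, const SpriteData& sprite)
    {
        transparentQueue.emplace(depth, sprite);  // copies the data -> expensive
    }

    void flushTransparentAtEndOfFrame()
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        for (const auto& entry : transparentQueue)
            drawSprite(entry.second);             // one draw call per entry
        transparentQueue.clear();
    }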
I've been reading up on how some OpenGL-based architectures manage their objects in an effort to create my own lightweight engine based on my application's specific needs (please no "why don't you just use this existing product" responses). One architecture I've been studying is Qt's Quick Scene Graph, and while it makes a lot of sense, I'm confused about something.
According to their documentation, opaque primitives are ordered front-to-back and non-opaque primitives are ordered back-to-front. The ordering is to do early z-killing by hopefully eliminating the need to process pixels that appear behind others. This seems to be a fairly common practice and I get it. It makes sense.
Their documentation also talks about how items that use the same Material can be batched together to reduce the number of state changes. That is, a shared shader program can be bound once and then multiple items rendered using the same shader. This also makes sense and I'm good with it.
What I don't get is how these two techniques work together. Say I have 3 different materials (let's just say they are all opaque for simplification) and 100 items that each use one of the 3 materials, then I could theoretically create 3 batches based off the materials. But what if my 100 items are at different depths in the scene? Would I then need to create more than 3 batches so that I can properly sort the items and render them front-to-back?
Based on what I've read of other engines, like Ogre 3D, both techniques seem to be used pretty regularly; I just don't understand how they are used together.
If you really have 3 materials, you can only batch objects that end up grouped together by the sorting. Sometimes the sorting can be relaxed for objects that do not overlap each other, to minimize the number of material switches.
The real "trick" behind all that how ever is to combine the materials. If the engine is able to create one single material out of the 3 source materials and use the shaders to properly apply the material settings to the different objects (mostly that is transforming the texture coordinates), everything can be batched and ordered at the same time. But if that is not possible the engine can't optimize it further and has to switch the material every now and then.
You don't have to merge every material in your scene. But if you can combine those materials that frequently alternate with each other, that alone can already improve performance a lot.
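One common way to reconcile batching with depth sorting is to sort draw commands by a key that combines a coarse depth value with a material id, so that draws using the same material end up adjacent whenever the depth order permits. A rough sketch (the key layout is just an example, not what Qt Quick or Ogre actually do):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct DrawCommand {
        uint32_t materialId;
        float    viewDepth;    // distance from the camera, filled in per frame
        // mesh handle, transform, ...
    };

    // Coarse depth bucket in the high bits, material id in the low bits:
    // the list stays roughly front-to-back, and within one depth bucket all
    // draws sharing a material become contiguous and can be merged into a batch.
    uint64_t makeSortKey(const DrawCommand& cmd, float maxDepth)
    {
        uint32_t bucket = static_cast<uint32_t>((cmd.viewDepth / maxDepth) * 1023.0f) & 0x3FF;
        return (static_cast<uint64_t>(bucket) << 32) | cmd.materialId;
    }

    void sortOpaqueForBatching(std::vector<DrawCommand>& cmds, float maxDepth)
    {
        std::sort(cmds.begin(), cmds.end(),
                  [maxDepth](const DrawCommand& a, const DrawCommand& b) {
                      return makeSortKey(a, maxDepth) < makeSortKey(b, maxDepth);
                  });
        // Then walk the sorted list and merge runs with the same materialId
        // into a single batch/draw call.
    }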
I'm currently working alongside a piece of software that generates game maps by taking several images and then tiling them into a game map. Right now I'm working with OpenGL to draw these maps. As you know, switching states in OpenGL and making multiple draw calls is costly. I've decided to implement a texture atlas system, which would allow me to draw the entire map in a single draw call with no state switching. However, I'm having a problem with implementing the texture atlas. Firstly, would it be better to store each TILE in the texture atlas, or the images themselves? Secondly, not all of the images are guaranteed to be square, or even powers of two. Do I pad them to the nearest power of two, a square, or both? Another thing that concerns me is that the images can get quite large, and I'm worried about exceeding the OpenGL size limitation for textures, which would force me to split the map up, ruining the entire concept.
Here's what I have so far, conceptually:
-Generate texture
-Bind texture
-Generate image large enough to hold textures (Take padding into account?)
-Sort textures?
-Upload subtexture to blank texture, store offsets
-Unbind texture
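In terms of actual GL calls, I imagine the steps above look roughly like this (image loading and proper rectangle packing omitted; Image is a placeholder type):

    #include <algorithm>
    #include <vector>
    // assumes an OpenGL loader header is included

    struct Image { int width, height; const unsigned char* pixels; };  // placeholder
    struct AtlasEntry { int x, y, width, height; };                    // offsets for UV remapping

    GLuint buildAtlas(int atlasSize, const std::vector<Image>& images,
                      std::vector<AtlasEntry>& outEntries)
    {
        GLuint atlas = 0;
        glGenTextures(1, &atlas);
        glBindTexture(GL_TEXTURE_2D, atlas);
        // allocate one large, empty texture
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, atlasSize, atlasSize, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        int penX = 0, penY = 0, rowHeight = 0;  // naive "shelf" packing
        for (const Image& img : images) {
            if (penX + img.width > atlasSize) { // start a new row
                penX = 0;
                penY += rowHeight;
                rowHeight = 0;
            }
            // copy the sub-image into the atlas and remember its offset
            glTexSubImage2D(GL_TEXTURE_2D, 0, penX, penY, img.width, img.height,
                            GL_RGBA, GL_UNSIGNED_BYTE, img.pixels);
            outEntries.push_back({penX, penY, img.width, img.height});
            penX += img.width;
            rowHeight = std::max(rowHeight, img.height);
        }
        glBindTexture(GL_TEXTURE_2D, 0);
        return atlas;
    }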
This is not so much a direct answer, but I can't really answer directly since you are asking many questions at once. I'll simply try to give you as much info as I can on the related subjects.
The following is a list of considerations for you, allowing you to rethink exactly what your priorities are and how you wish to execute them.
First of all, in my experience (!!), using texture arrays is much easier than using a texture atlas, and the performance is about equal. Texture arrays do what you would expect: in the shader you sample them with a layer index in addition to the usual texture coordinates, instead of with just a plain 2D sampler. The big drawback is that every texture in the array must have the same size; the advantages are easy indexing of subtextures and binding everything for one draw call.
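For reference, a minimal sketch of creating one (assuming width, height, layerCount and a pixels array are already available; every layer must share the same dimensions and format):

    // All layers share width/height/format; each source image becomes one layer.
    GLuint texArray = 0;
    glGenTextures(1, &texArray);
    glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
                 width, height, layerCount, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);       // allocate all layers
    for (int layer = 0; layer < layerCount; ++layer) {
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                        0, 0, layer, width, height, 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels[layer]);
    }
    // In GLSL: uniform sampler2DArray myTexArray;
    //          texture(myTexArray, vec3(uv, float(layer)));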
Second of all, always use powers of 2. I don't know whether some recent systems handle non-power-of-2 textures entirely without problems, but (again, in my experience) it is best to use powers of 2 everywhere. One of the problems I had with a 500*500 texture was black lines when drawing textured quads; these black lines were exactly the size needed to pad to the nearest power of two (12 pixels on x and y). So OpenGL can still create this problem for you even on recent hardware.
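If you do pad, the next power of two is a tiny helper (a sketch):

    #include <cstdint>

    // Smallest power of two >= v, for v > 0 (e.g. 500 -> 512), so a 500*500
    // image would be padded to 512*512 before upload.
    uint32_t nextPowerOfTwo(uint32_t v)
    {
        v--;
        v |= v >> 1;  v |= v >> 2;  v |= v >> 4;
        v |= v >> 8;  v |= v >> 16;
        return v + 1;
    }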
Third of all (is this even English?), concerning size: all your problems seem to involve images and textures. You might want to look at texture buffers; they allow large amounts of data to be streamed to your graphics card and are easier to update than textures (which allows for LOD map systems, for example). This is mostly useful if you use textures only for the data stored in their texels rather than for the colors directly.
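Setting one up looks roughly like this (a sketch; dataSizeInBytes and dataPointer stand in for whatever your data is):

    // A buffer texture: large linear data, read in shaders with texelFetch on a
    // samplerBuffer, and updated like any other buffer object.
    GLuint tbo = 0, tboTexture = 0;
    glGenBuffers(1, &tbo);
    glBindBuffer(GL_TEXTURE_BUFFER, tbo);
    glBufferData(GL_TEXTURE_BUFFER, dataSizeInBytes, dataPointer, GL_DYNAMIC_DRAW);

    glGenTextures(1, &tboTexture);
    glBindTexture(GL_TEXTURE_BUFFER, tboTexture);
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);  // expose the buffer as texels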
Finally, you might want to look at "texture splatting", which is a way to increase detail without increasing data. I don't know exactly what you are making, so I don't know whether you can use it, but it's easy and it's used in the game industry a lot. You create a set of tiling textures (rock, sand, grass, etc.) that you use everywhere, plus one big texture that keeps track of which smaller texture is applied where.
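A sketch of what the fragment shader for splatting typically looks like (texture names and varyings are made up; the GLSL is embedded here as a C++ string literal):

    // Splat map stores per-channel weights; a few tiling detail textures are
    // blended by those weights.
    const char* splatFragmentShader = R"(
        #version 330 core
        uniform sampler2D splatMap;   // RGBA weights covering the whole map
        uniform sampler2D grassTex;
        uniform sampler2D rockTex;
        uniform sampler2D sandTex;
        in vec2 mapUV;                // coordinates over the whole map
        in vec2 detailUV;             // tiled coordinates for the detail textures
        out vec4 fragColor;

        void main()
        {
            vec4 w = texture(splatMap, mapUV);
            vec3 color = texture(grassTex, detailUV).rgb * w.r
                       + texture(rockTex,  detailUV).rgb * w.g
                       + texture(sandTex,  detailUV).rgb * w.b;
            fragColor = vec4(color, 1.0);
        }
    )";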
I hope at least one of the things I wrote here will help you out,
Good luck!
PS: OpenGL texture size limits depend on the user's graphics card, so be careful with sizes greater than 2048*2048; even if your computer runs fine, others might have serious issues. Anything up to 1024*1024 is generally safe.
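You can also query the actual limit at runtime instead of guessing:

    GLint maxTextureSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
    // e.g. clamp the atlas dimensions to maxTextureSize, or fall back to
    // splitting the map if the requested size exceeds it.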
PPS: excuse any grammar mistakes, and ask for clarification if needed. Also, this is my first answer ever, so excuse my lack of protocol.
I am emulating an old client, and most of the textures are 128x128. Since I know that switching textures and issuing draw calls is expensive, could I get away with creating, at runtime, a very large texture atlas of all the tiny textures?
Then from there, could I bind the large texture and render via shaders with the texture offset in the atlas? What kind of performance hit would this have?
My second question involves sorting. The level files are broken up into little chunks of a BSP tree. They are very small, and there are often thousands per level. The current way, which is definitely slower, is that I render each group of textures in each BSP leaf that is in my frustum and PVS (Quake 3 style). What would be the ideal fix for this?
Should I run through each region I can see (back to front), group all of the visible triangles by texture, and then render them all at once? For some reason, I always feel this may be slower. Does it make more sense to sort first and render all at once, or to skip sorting and render each region at a time?
Yes, it should be possible to stuff textures into a texture atlas. Careful consideration would need to be given to issues like whether the textures are being tiled, and what you want to happen with mip-maps and filtering.
I'd probably suggest holding off on doing that until you've got the geometry sorted out - reducing the number of draw calls to a sane number may well be all you need to do. The cost of changing a texture isn't necessarily that bad on modern hardware.
As for rendering the level, on modern hardware it's more usual to render front to back, rather than back to front, in order to take advantage of the z-buffer and early culling (i.e. if you have a wall right in front of the camera, and you draw that first with z-buffering enabled, the hardware is fairly good at rejecting stuff you then try to draw behind it).
One possible approach would be to reprocess the BSP into a rougher structure, such as a simple grid (or maybe a quad- or octree). You don't even need to split polygons neatly; just have a bunch of sectors with a bounding box around them, frustum-cull the boxes, and loosely sort them front to back. You can keep a PVS with this approach too, but its usefulness may drop as your rendering chunks become larger and coarser.
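A sketch of what that coarser pass could look like (AABB, Frustum, Vec3 and the two helpers are placeholders, not from any particular engine):

    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct AABB { Vec3 min, max; };
    struct Frustum { /* six planes */ };

    struct Sector {
        AABB  bounds;
        float distanceToCamera = 0.0f;   // filled in each frame
        // per-texture index ranges for the geometry in this sector, ...
    };

    // assumed helpers, implementations not shown
    bool  intersectsFrustum(const AABB& box, const Frustum& frustum);
    float distanceTo(const AABB& box, const Vec3& cameraPos);

    void collectVisibleSectors(std::vector<Sector>& sectors,
                               const Frustum& frustum, const Vec3& cameraPos,
                               std::vector<Sector*>& outVisible)
    {
        outVisible.clear();
        for (Sector& s : sectors) {
            if (!intersectsFrustum(s.bounds, frustum))
                continue;                              // cull the whole sector
            s.distanceToCamera = distanceTo(s.bounds, cameraPos);
            outVisible.push_back(&s);
        }
        // loose front-to-back sort so the z-buffer can reject hidden fragments early
        std::sort(outVisible.begin(), outVisible.end(),
                  [](const Sector* a, const Sector* b) {
                      return a->distanceToCamera < b->distanceToCamera;
                  });
    }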
However, before doing any of this, I'd definitely recommend setting up some benchmark tests and recording performance data. You won't know for sure whether you're doing the right thing unless you actually analyse the performance. If you can pinpoint exactly which part performs worst, you just need to fix that.
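For the GPU side of those measurements, a timer query is a simple starting point (a sketch; renderScene is whatever code path you are measuring, and CPU timers alone can mislead because GL calls are asynchronous):

    // assumes an OpenGL 3.3+ context and loader
    GLuint query = 0;
    glGenQueries(1, &query);

    glBeginQuery(GL_TIME_ELAPSED, query);
    renderScene();                       // the code path being measured
    glEndQuery(GL_TIME_ELAPSED);

    GLuint64 elapsedNs = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsedNs);  // blocks until the result is ready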
Does anyone know of an implementation of vector fonts in directx?
If not, does anyone have a good starting place for this?
Or even any examples of a reader written in DirectX with basic zoom support.
Direct vector fonts don't work too well in D3D, because they require an intermediate texture to hold the rasterized data (verts or pixels) and a lot of extra work, so you need to approach them a little differently to get them working easily and efficiently (if you are performance constrained or care about performance).

What you should use instead is signed distance fields, à la Valve's improved alpha-tested advanced vector texture rendering (which incidentally references a paper on vector fonts, if you do want to go that way). Distance fields up-scale very well, though they are poor at down-scaling if your fonts are complex, and hard edges don't store too well either, but that can be fixed by using two channels to store the data. They also give you easy smoothing, bolding, outlining, glows and drop shadows.

The technique is heavily shader-reliant (it can be done in the fixed-function pipeline via alpha testing, but using smoothstep in the pixel shader gives a far better result with minimal overhead), yet it doesn't need anything beyond ps v1. See http://www.valvesoftware.com/publications.html for the paper, and the shaders in Valve's Source SDK for a complete implementation reference. (I incidentally just built a DX11-based text renderer using this; it works wonderfully, though I use a tool to brute-force my SDFs so I don't need to create them at runtime.)
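The heart of the pixel shader is just a smoothstep around the 0.5 iso-value of the distance field. A language-agnostic sketch of that math (written here as plain C++; the HLSL version is essentially the same two lines using the built-in smoothstep intrinsic):

    #include <algorithm>

    // 'dist' is the value sampled from the signed-distance-field texture
    // (0.5 = glyph edge); 'smoothing' is roughly one texel's worth of distance,
    // scaled with the on-screen size of the glyph.
    float smoothstepf(float edge0, float edge1, float x)
    {
        float t = std::clamp((x - edge0) / (edge1 - edge0), 0.0f, 1.0f);
        return t * t * (3.0f - 2.0f * t);
    }

    float sdfAlpha(float dist, float smoothing)
    {
        return smoothstepf(0.5f - smoothing, 0.5f + smoothing, dist);
    }

    // Outlines, glows and drop shadows come from the same idea: threshold at
    // offsets other than 0.5 and blend the results.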