I've recently started learning DirectX programming in C++. I have some experience with graphics programming in other languages, but I'm new to the DirectX scene.
Anyway, I wanted to ask a question about transparent textures. So far I've always used alpha testing, as that has met my needs, but I've recently begun to wonder how "proper" game engines manage to render such good-looking semi-transparent textures for things like plants and trees, which have smooth transparency.
Every time I've used alpha testing, the textures have ended up looking blocky and just plain bad. I'd love to be able to have smooth, semi-transparent textures which draw as I would expect.
My guess as to how this works would be to execute render calls in order, starting with things that are far away from the camera and moving closer. However, I can't really see how this works for pre-made models. For example, if you had a tree model where the leaves and trunk shared a model, how do you guarantee that the back leaves draw first, that the trunk draws correctly over them, and that the front leaves look correct over the trunk?
I tried the method above and also disabled z-buffering for transparent objects such as smoke particles. It sort of worked, but it looked messy and the effect appeared different depending on the viewing angle, so that didn't seem ideal.
So, in short: what methods do "proper" games use to correctly draw smooth alpha textures (which have a range of alpha values) into a 3D scene for things like foliage?
Thanks,
Michael.
At its most basic, ordered transparency is achieved with the painter's algorithm: draw from back to front.
The painter's algorithm falls apart where an object needs to be drawn both in front of and behind another object, or where a single object has multiple sub components that are transparent. We can't easily sort sub-components of a mesh relative to each other.
While it doesn't solve these problems, the z-buffer lets us optimize rendering. Most games use this slightly more complex algorithm as the basis of their rendering:
Render all opaque objects, sorted by material state or front to back to avoid overdraw.
Render all transparent objects, sorted back to front.
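A rough sketch of that pass structure in C++ (the DrawItem struct and the commented-out draw calls are placeholders, not any particular engine's API):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-draw record; a real engine would carry mesh/material handles.
struct DrawItem {
    float viewDepth;    // distance from the camera
    bool  transparent;
};

void RenderFrame(std::vector<DrawItem>& items)
{
    std::vector<DrawItem*> opaque, transparent;
    for (DrawItem& item : items)
        (item.transparent ? transparent : opaque).push_back(&item);

    // Opaque: front to back so the z-buffer rejects hidden pixels early.
    std::sort(opaque.begin(), opaque.end(),
              [](const DrawItem* a, const DrawItem* b) { return a->viewDepth < b->viewDepth; });

    // Transparent: back to front so alpha blending composites correctly.
    std::sort(transparent.begin(), transparent.end(),
              [](const DrawItem* a, const DrawItem* b) { return a->viewDepth > b->viewDepth; });

    // Submit opaque first with depth writes on, then transparent with depth
    // writes off but the depth test still enabled.
    // DrawAll(opaque); DrawAll(transparent);
}
```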
Games use a variety of techniques in combination to work around these sorting problems.
Split models into non-overlapping transparent sections. Oftentimes this is done implicitly, because a game's transparent parts will often use different materials than the rest of the model. You can also split models with multiple layers of transparency in such a way that each new model's layers do not overlap. For example, you could split a pine tree model radially into 5 sections.
This was more common in fixed function pipelines. Modern games simply try to avoid the problem.
Avoid semi-transparent parts in models. Use transparency only for anti-aliasing edges and where the transparent object can split the world cleanly into two separate groups of objects (windows or water planes, for example). Splitting the world like this and rendering those chunks front to back allows the anti-aliased edges to be drawn without causing obvious cut-outs on other transparent objects. The edges themselves tend to look good even if they overlap, as long as your alpha test is set higher than ~30%.
Semi-transparent objects are often rendered as particle effects; grass and smoke are the most common examples. The point list for the effect, or for a group of grass objects, is sorted each frame, which is a much simpler problem than sorting arbitrary sub-meshes. Many outdoor games have complex grass and foliage instancing systems that let them render individual leaves and blades properly sorted while avoiding most of the rendering overhead of doing it this way, but they strictly limit the types of objects involved.
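For a single smoke effect or grass patch, the per-frame sort can be as simple as ordering the instances by squared distance to the camera; this is only a sketch with made-up names:

```cpp
#include <algorithm>
#include <vector>

struct Particle {
    float x, y, z;
    // ... colour, size, rotation, etc. ...
};

// Sort back to front relative to the camera before (re)building the vertex buffer.
void SortParticles(std::vector<Particle>& particles, float camX, float camY, float camZ)
{
    auto distSq = [&](const Particle& p) {
        const float dx = p.x - camX, dy = p.y - camY, dz = p.z - camZ;
        return dx * dx + dy * dy + dz * dz;   // no sqrt needed just for ordering
    };
    std::sort(particles.begin(), particles.end(),
              [&](const Particle& a, const Particle& b) { return distSq(a) > distSq(b); });
}
```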
Many effects can be done in an order-independent way using additive or subtractive blending rather than alpha blending.
There are a couple of easy options if your smooth edges are still unacceptable. You can dither any parts of the model below 75% transparency. Or you can have the hardware do it for you without visible artifacts by using alpha-to-coverage, which causes the multisampling hardware to dither the edges across the MSAA samples. It won't give you a smooth gradient, but 4-16 levels of alpha are perfectly acceptable for anti-aliasing edges, and it's free if you already intend to use MSAA.
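If you are on Direct3D 10/11, alpha-to-coverage is just a flag on the blend state; the sketch below assumes D3D11 and omits error handling (in OpenGL the equivalent is glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE)).

```cpp
#include <d3d11.h>

// Create a blend state with alpha-to-coverage enabled (assumes an MSAA render target).
ID3D11BlendState* CreateAlphaToCoverageState(ID3D11Device* device)
{
    D3D11_BLEND_DESC desc = {};
    desc.AlphaToCoverageEnable                 = TRUE;   // MSAA hardware derives coverage from alpha
    desc.RenderTarget[0].BlendEnable           = FALSE;  // no conventional alpha blending needed
    desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlend             = D3D11_BLEND_ZERO;
    desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
    desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* state = nullptr;
    device->CreateBlendState(&desc, &state);
    return state;
}

// Usage, before drawing the foliage (depth writes can stay on):
//   float blendFactor[4] = { 0, 0, 0, 0 };
//   context->OMSetBlendState(alphaToCoverageState, blendFactor, 0xFFFFFFFF);
```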
There are a lot of caveats and special cases. If you have water, you will probably need to render any semi-transparent objects that intersect the water twice, using a stencil or depth test.
Moving the camera in and out of transparent objects is always problematic.
It is nearly impossible to render a complex semi-transparent object correctly, like an x-ray view of a building or a ghost. Many games simply render this type of object additively, but with modern hardware a variety of more complex schemes are possible.
More complex schemes
Depth peeling is a rendering method where you render multiple passes with different Z-clipping planes to composite the scene from back to front, regardless of draw order or which object contains the alpha. It is less expensive than you would expect, because many objects render into only one or two slices, but it is not perfect and many game developers find it too costly.
There are many other varieties of order-independent transparency. With a modern GPU and compute shaders we can render in a single pass into a buffer where each pixel holds a stack of candidate fragments, then sort and blend those stacks in a post-process, paying the extra cost only where a pixel actually has layers of transparency.
OIT is still mostly used only in special cases, such as 2.5D games like LittleBigPlanet, but I believe it may eventually become a core tool in game programming.
Related
Until now, I've been handling translucency by sorting my translucent triangles back to front. This works very well for quads, but I'd like to incorporate translucency in my models now.
I've thought of separating the translucent tris out and sorting them in the same way as my quads: sorting by their centroids, then streaming the results into an IBO just for them each frame. But with the number of triangles in a model, and the need to transform them on the CPU according to a table of bones, blend shapes, and some other things in my vertex shader... this doesn't seem like a good solution in performance or sanity.
My models are about 4K tris each, with maybe 20 in a scene at worst, and I'd really like to lean into a simple cute style that relies on translucency, which doesn't have to be physically accurate, or draw objects behind 4 or more layers of translucency.
What technique might work well for my situation, in 2021? I'm using OpenGL 3.3 but I'll use another version if new features exist for this.
As far as I know, there's no easy solution to what you want to do. However, there are some things that you can do:
Don't sort triangles at all, and only sort individual draw calls. This is the easiest solution and is utilised by most games that use translucent/transparent objects in the first place. Of course, this might not have the best looks (though it works well with convex shapes), but it will have very good performance; see the sketch after this list.
Order-independent transparency: There are some pixel-shader-based techniques for rendering transparent/translucent objects without having to sort anything at all. These techniques are usually approximations (there are also some exact algorithms) and tend to work best for things like smoke (where small errors aren't as noticeable), or when not many transparent/translucent triangles overlap each other anyway.
Compute Shaders: You can use a compute shader to transform your individual vertices according to your animation, and then use another compute shader to sort the triangles on the GPU (note that compute shaders require OpenGL 4.3 or newer). That is probably the most straightforward improvement over what you'd otherwise do on the CPU, and there are many examples of sorting on the GPU out there. But if you've never worked with compute shaders before, it might be a bit hard to wrap your head around their strengths and limitations at first.
Ray Tracing: This would probably be the most complex way to solve the problem, since you'll need specific hardware, corresponding data structures, and a few new shaders, and on top of that, even with modern hardware, ray tracing is quite costly (though probably still faster than sorting triangles on the CPU each frame). But it doesn't need your triangles to be sorted, and it will work perfectly well even when different translucent objects intersect or interleave each other.
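For the first option, a minimal sketch (hypothetical struct; a row-major view matrix is assumed): compute each object's depth in view space and sort whole draw calls back to front.

```cpp
#include <algorithm>
#include <vector>

// One draw call per translucent object; only the data needed for sorting is shown.
struct TranslucentDraw {
    float centerWorld[3];   // e.g. the bounding-sphere centre in world space
    // ... VAO, index count, material, ...
};

// 'view' is a row-major 4x4 view matrix; only its Z row (elements 8..11) is used.
void SortBackToFront(std::vector<TranslucentDraw>& draws, const float view[16])
{
    auto viewZ = [&](const TranslucentDraw& d) {
        return view[8] * d.centerWorld[0] + view[9] * d.centerWorld[1]
             + view[10] * d.centerWorld[2] + view[11];
    };
    // In a right-handed view space the camera looks down -Z, so the most
    // negative depth is the farthest away; draw those objects first.
    std::sort(draws.begin(), draws.end(),
              [&](const TranslucentDraw& a, const TranslucentDraw& b)
              { return viewZ(a) < viewZ(b); });
}
```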
I'm currently working on upgrading and restructuring an OpenGL render engine. The engine is used for visualising large scenes of architectural data (buildings with interiors), and the number of objects can become rather large. As is the case with any building, there are a lot of objects occluded by walls, and you naturally only see the objects that are in the same room as you, or the exterior if you are outside. This leaves a large number of objects that should be removed through occlusion culling and frustum culling.
At the same time there is a lot of repetitive geometry that can be batched into render batches, and also a lot of objects that can be drawn with instanced rendering.
The way I see it, it can be difficult to combine render batching and culling in an optimal fashion. If you batch too many objects into the same VBO, it's difficult to cull the objects on the CPU in order to skip rendering that batch. At the same time, if you skip culling on the CPU, a lot of objects will be processed by the GPU even though they are not visible. If you skip batching completely in order to cull more easily on the CPU, there will be an unwantedly high number of render calls.
I have done some research into existing techniques and theories on how these problems are solved in modern graphics, but I have not been able to find any concrete solution. One idea a colleague and I came up with was restricting batches to objects relatively close to each other, e.g. all chairs in a room, or everything within a radius of n metres. This could be simplified and optimized through the use of octrees.
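In rough code, the bucketing we had in mind looks something like this (names are purely illustrative):

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct ObjectRef { int id; float x, y, z; };

// Group object indices into square cells of 'cellSize' metres in the XZ plane;
// each non-empty cell would then become one render batch.
std::unordered_map<std::uint64_t, std::vector<int>>
BucketForBatching(const std::vector<ObjectRef>& objects, float cellSize)
{
    std::unordered_map<std::uint64_t, std::vector<int>> cells;
    for (std::size_t i = 0; i < objects.size(); ++i) {
        const auto cx = static_cast<std::int32_t>(std::floor(objects[i].x / cellSize));
        const auto cz = static_cast<std::int32_t>(std::floor(objects[i].z / cellSize));
        const std::uint64_t key =
            (static_cast<std::uint64_t>(static_cast<std::uint32_t>(cx)) << 32) |
             static_cast<std::uint64_t>(static_cast<std::uint32_t>(cz));
        cells[key].push_back(static_cast<int>(i));
    }
    return cells;
}
```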
Does anyone have any pointers to techniques used for scene management, culling, batching, etc. in state-of-the-art modern graphics engines?
There's lots of information about frustum and occlusion culling on the internet.
Most of it comes from game developers.
Here's a list of some articles that will get you started:
http://de.slideshare.net/guerrillagames/practical-occlusion-culling-in-killzone-3
http://de.slideshare.net/TiagoAlexSousa/secrets-of-cryengine-3-graphics-technology
http://de.slideshare.net/Umbra3/siggraph-2011-occlusion-culling-in-alan-wake
http://de.slideshare.net/Umbra3/visibility-optimization-for-games
http://de.slideshare.net/Umbra3/chen-silvennoinen-tatarchuk-polygon-soup-worlds-siggraph-2011-advances-in-realtime-rendering-course
http://de.slideshare.net/DICEStudio/culling-the-battlefield-data-oriented-design-in-practice
http://www.cse.chalmers.se/~uffe/vfc.pdf
My (pretty fast) renderer works similarly to this:
Collection: Send all props, which you want to render, to the renderer.
Frustum culling: The renderer culls the invisible props from the list, using multiple threads in parallel (a minimal sphere-vs-frustum test is sketched after these steps).
Occlusion culling: Now you could do occlusion culling on the CPU (I haven't implemented it yet, because I don't need it now). Detailed information on how to do it efficiently can be found in the Killzone and Crysis slides. One solution would be to read back the depth buffer of the previous frame from the GPU and then rasterize the bounding boxes of the objects on top of it to check if the object is visible.
Splitting: Since you now know which objects actually have to be rendered, because they are visible, you have to split them by mesh, because each mesh has a different material or texture (otherwise they would be combined into a single mesh).
Batching: Now you have a list of meshes to render. You can sort them:
by depth (this can be done on the prop level instead of the mesh level), to save fillrate (I don't recommend doing this if your fragment shaders are very simple).
by mesh (because there might be multiple instances of the same mesh and it would make it easy to add instancing).
by texture, because texture switches are very costly.
Rendering: Iterate through your partitioned meshes and render them.
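For the frustum-culling step, the core test per prop is just a bounding sphere against six planes. A sketch; extracting the planes from the view-projection matrix is assumed to happen elsewhere:

```cpp
// Plane in the form ax + by + cz + d = 0, with the normal (a, b, c) normalised
// and pointing towards the inside of the frustum.
struct Plane  { float a, b, c, d; };
struct Sphere { float x, y, z, radius; };

// A prop is culled if its bounding sphere lies entirely outside any frustum plane.
bool IsVisible(const Plane frustum[6], const Sphere& s)
{
    for (int i = 0; i < 6; ++i) {
        const float dist = frustum[i].a * s.x + frustum[i].b * s.y
                         + frustum[i].c * s.z + frustum[i].d;
        if (dist < -s.radius)
            return false;   // completely outside this plane
    }
    return true;            // inside or intersecting all six planes
}

// The multi-threaded version simply splits the prop list into ranges and runs
// this test on each range in parallel.
```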
And as "Full Frontal Nudity" already said: There's no perfect solution.
I've been using XNA for essentially all of my programming so far and would like to move on to OpenGL (along with SFML for I/O, creating the window, etc.) with C++. For starters I'd like to create a tile-based game, and I've mostly looked at LazyFoo's tutorials.
I just have two questions:
How should I draw the tiles? Should I use immediate mode, vertex arrays, VBOs, or something else? VBOs feel like overkill for this, but I'm not sure. It's very tempting to use immediate mode, but apparently it's deprecated. Maybe it's fine for this purpose, since it's 2D and only a bunch of quads.
I'd like a lot of different tiles, and thus all of my tiles will not fit into a single texture without making it massive. I've read that using bindTexture isn't very cheap and thus I should avoid as many calls as I can. I thought that maybe I could create a manager for my textures and stitch them all together into one big texture and bind that, but then the dimensions of that texture become an issue.
Don't use immediate mode! It's cumbersome to work with and has been removed from recent OpenGL versions. Use Vertex Arrays, ideally through VBOs. In the end they're much easier to use, believe me.
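A minimal sketch of the VBO route for a batch of tile quads (a GL function loader such as GLEW or glad and a simple shader with attributes 0 and 1 are assumed; in real code you would create the buffer once and only update it):

```cpp
#include <GL/glew.h>   // or glad, or whichever loader you use
#include <vector>

struct TileVertex { float x, y, u, v; };

// Upload all visible tile quads (two triangles each) into one VBO and issue a single draw call.
void DrawTiles(const std::vector<TileVertex>& vertices)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 vertices.size() * sizeof(TileVertex), vertices.data(), GL_DYNAMIC_DRAW);

    glEnableVertexAttribArray(0);   // position (x, y)
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(TileVertex), (void*)0);
    glEnableVertexAttribArray(1);   // texture coordinate (u, v)
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(TileVertex),
                          (void*)(2 * sizeof(float)));

    glDrawArrays(GL_TRIANGLES, 0, static_cast<GLsizei>(vertices.size()));

    glDeleteBuffers(1, &vbo);
}
```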
Regarding that switching of textures. We're talking about optimizing the texture switch patterns in very complex scenes. In your case it will hardly matter at all.
Update
Right now you're worrying about things without having even used them. That's worse than premature optimization. I suggest you first get a good grip on OpenGL, then start worrying about state-switch management.
With regards to the texture atlas: this is usually done by stitching textures together into groups of power-of-two-sized textures. For example, in a tile-based game you might have a particular tile set (say, tiles for an ice world) grouped together on 2 or 3 textures. When you want to render them, you determine which tiles are visible, then bind each texture once and render, from that texture, any of its tiles that are visible on screen.
This requires quite a lot of set-up time to get right; you need to keep information on each sub-texture of the atlas so you can find the right texture and render the appropriate region of it whenever a tile is referenced. You also need a good way of grouping rendering operations so that they occur while the appropriate texture is bound.
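The bookkeeping itself is small; something like this (layout and names are illustrative) gives you the UV rectangle of a tile inside a fixed-grid atlas:

```cpp
struct UvRect { float u0, v0, u1, v1; };

// UV rectangle of tile 'index' inside a square atlas made of fixed-size tiles
// laid out row by row.
UvRect TileUv(int index, int tileSize, int atlasSize)
{
    const int   tilesPerRow = atlasSize / tileSize;
    const int   col         = index % tilesPerRow;
    const int   row         = index / tilesPerRow;
    const float step        = static_cast<float>(tileSize) / atlasSize;

    UvRect r;
    r.u0 = col * step;
    r.v0 = row * step;
    r.u1 = r.u0 + step;
    r.v1 = r.v0 + step;
    return r;
}
```

In practice you may also want to inset each rectangle by half a texel, or pad the tiles, so texture filtering doesn't bleed neighbouring tiles into the edges.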
Like datenwolf said, I wouldn't focus too much on complicated texture systems early on; eager binding of textures will be plenty fast enough until you get further down the road.
If I were making a 3D engine, the answer to this question would be clear: I'd use the depth buffer instead of thinking about sorting all my polygons on my own.
However, this is a different situation with 2D, because here layers can be implemented easily without the help of OpenGL, and you could then even sort and move sprites within layers (which isn't possible in OpenGL, AFAIK).
(Why) should I use the OpenGL depth buffer instead of a C++ layer system running on the CPU?
How much slower would the depth buffer version be?
It is clear to me that making a layer system in C++ would impose practically no performance cost at all, as I have to iterate over the sprites for rendering in any case.
I would suggest doing it in software, since you probably want to use transparency on your sprites, and that implies rendering them from back to front. Also, sorting a couple of sprites shouldn't be that CPU-demanding.
Use both, if you can.
Depth information is nice for post-processing and stuff like 3D-glasses, so you shouldn't throw it away. These kinds of effects can be very nice for 2D games.
Also, if you draw your (opaque) layers front to back, you can save fill rate, because the Z-buffer can reject the hidden pixels for you (depth tests are faster than actual drawing).
Depth testing is usually almost free, especially when you have hierarchical Z information. Because of this and the fill-rate savings, using depth testing will probably be even faster.
On the other hand, software sorting is nice because you can then do front-to-back rendering for opaque sprites, and it's mandatory for getting alpha blending right (those sprites, of course, you draw back to front).
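In OpenGL terms the combination boils down to a handful of state changes per frame; a sketch, with the actual draw calls left as placeholders:

```cpp
#include <GL/glew.h>   // or any other GL loader/header

void DrawScene()
{
    // Opaque sprites: depth test and depth writes on, drawn roughly front to
    // back so the z-buffer rejects hidden pixels.
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    // DrawOpaqueSpritesFrontToBack();

    // Transparent sprites: keep the depth test so they still hide behind
    // opaque geometry, but stop writing depth and blend back to front.
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // DrawTransparentSpritesBackToFront();

    glDepthMask(GL_TRUE);   // restore so the next frame's depth clear works
}
```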
Direct answers:
allowing the GPU to use the depth buffer would let you dynamically adjust the draw order of things without any on-CPU shuffling, and would free you from having to assign things to different layers in situations where doing so is a bit of a fiction. For example, you could have effects like projectiles that come from the background towards and then in front of the player, without having to figure out which layer to assign them to all the time.
on the GPU, the use of a depth buffer would have no measurable cost, even if you're on an embedded chip, a plug-in card from more than a decade ago, or an integrated part; depth buffers are so fundamental to modern GPUs that they've been optimised down to costing nothing in practical terms.
However, I'd imagine you actually want to do it on the CPU, for the simple reason of treating transparency correctly. A depth buffer stores one depth per pixel, so if you draw a nearby transparent object and then attempt to draw something behind it, the thing behind won't be drawn even though it should be visible. In a 2D game it's likely that anti-aliasing will give your sprites partially transparent edges; if you submit drawing to the GPU in draw order, those partial transparencies will always be composited correctly. If you leave it to the z-buffer, you risk weird-looking fringing.
I'm working on a windowed Direct3D data-plotting application that needs to display multiple overlays on top of the data (similar to HUDs in games). Since there could be a large amount of data to plot, and not every overlay changes every time, I figured it wouldn't be a good idea to replot the vertices when only one overlay in the display changes.
This led me to the idea of rendering the textures and vertices of the overlays to multiple textures with transparent backgrounds, which could be overlaid in the render loop and updated independently (similar to layers in Photoshop).
Before I embark on changing a large portion of this program to render to textures as opposed to surfaces, I was just wondering if using textures is the best approach.
RTT works well; I used it in a game I did recently. Each scene ("scene" here meaning layer: "HUD" was a scene, "Main" was the main scene, etc.) was rendered onto a texture, then each texture was rendered onto a quad, sorted back to front (for alpha blending). I chose this over just rendering the scenes directly onto the back buffer because it allowed me to do post-processing.
For your caching purposes this seems to be the best way to go, but just be aware that the textures can eat memory quickly, and sometimes it's just better to render everything again, making sure you sort back to front.
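If you happen to be on Direct3D 11, the per-overlay target is just a texture created with both render-target and shader-resource bind flags; a sketch, with error handling reduced to early returns (on D3D9 the rough equivalent is a texture created with D3DUSAGE_RENDERTARGET):

```cpp
#include <windows.h>
#include <d3d11.h>

// Create a texture usable both as a render target (for the overlay pass) and
// as a shader resource (for compositing it onto the back buffer).
bool CreateOverlayTarget(ID3D11Device* device, UINT width, UINT height,
                         ID3D11Texture2D** tex,
                         ID3D11RenderTargetView** rtv,
                         ID3D11ShaderResourceView** srv)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    if (FAILED(device->CreateTexture2D(&desc, nullptr, tex)))         return false;
    if (FAILED(device->CreateRenderTargetView(*tex, nullptr, rtv)))   return false;
    if (FAILED(device->CreateShaderResourceView(*tex, nullptr, srv))) return false;
    return true;
}

// Per frame: bind the RTV with OMSetRenderTargets, clear it to transparent
// black, redraw only the overlays that changed, then composite the resulting
// textures onto the back buffer as quads, back to front.
```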
Render to texture will certainly work and could be a good route, but it is probably overkill. Modern 3D hardware is very fast, and I'd suggest you verify whether re-rendering when you need an update is really a performance issue before investing significant time in major changes to your program.
If performance is an issue your time might be better spent optimizing the code that renders your plot since that will benefit updates that involve changes to the data as well as those that just change an overlay. I'm a graphics programmer for games and generally with realtime 3D you want to focus your optimization efforts on your worst case (you have to redraw everything) rather than your best (only one overlay needs an update).
Rendering to texture render-target surfaces is a very good idea and can be used for a lot of things, e.g. optimization/caching. But beware of the blend operation with regular (straight) alpha, a*c1 + (1-a)*c2: if # is that ARGB blend, then l1#l2#l3 != l3#l1#l2, and the grouping matters too, so compositing layers into intermediate textures gives the wrong result. By using pre-multiplied alpha in all textures/layers, the blend operation becomes associative, which is exactly what layered compositing needs.
The ultimate reference is the Porter/Duff article "Compositing Digital Images" from 1984.
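A sketch of what "using pre-multiplied alpha" means in practice; the conversion happens once at load time, and the blend factors change accordingly:

```cpp
#include <cstdint>
#include <vector>

// Convert straight-alpha RGBA8 pixels to pre-multiplied alpha once, at load time.
void PremultiplyAlpha(std::vector<std::uint8_t>& rgba)
{
    for (std::size_t i = 0; i + 3 < rgba.size(); i += 4) {
        const unsigned a = rgba[i + 3];
        rgba[i + 0] = static_cast<std::uint8_t>(rgba[i + 0] * a / 255);
        rgba[i + 1] = static_cast<std::uint8_t>(rgba[i + 1] * a / 255);
        rgba[i + 2] = static_cast<std::uint8_t>(rgba[i + 2] * a / 255);
    }
}

// With pre-multiplied textures, the blend changes from
//   SrcBlend = SRC_ALPHA, DestBlend = INV_SRC_ALPHA   (straight alpha)
// to
//   SrcBlend = ONE,       DestBlend = INV_SRC_ALPHA   (pre-multiplied alpha)
// e.g. D3D11_BLEND_ONE / D3D11_BLEND_INV_SRC_ALPHA in Direct3D 11, or
// glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) in OpenGL.
```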