How should I do depth-independent blending? (C++)

I'm working on an OpenGL 3 renderer for a GUI toolkit called Gwen. I nearly have everything working, but I'm having some issues getting everything to blend correctly. I've sorted the triangles by which texture they use and packed them into a VBO, so with the Unit Test, it basically boils down into 3 layers: Filled Rects with no texture, Text, and the windows, buttons, etc that use a skin texture.
The Filled Rects are usually drawn on top of everything else and blended in, but the background behind everything is also a Filled Rect, so I can't count on that. There is a Z-value conflict if you draw them last (e.g. the windows have a textured shadow around the edges that turns black because the background fails the depth test) and a blending/Z-value conflict if you draw them first (e.g. some of the selection highlights get drawn on top instead of blending like they're supposed to).
I can't count on being able to identify any specific layer except the Filled Rects. The different layers have a mix of z-values, so I can't just draw them in a certain order to make things work. While writing this, I thought of a simple method of drawing the triangles sorted back to front, but it could mean lots of little draw calls, which I'm hoping to avoid. Is there some method that involves some voodoo magic blending that would let me keep my big batches of triangles?

You're drawing a GUI; batching shouldn't be your first priority for the simple fact that a GUI just doesn't do much. A GUI will almost never be your performance bottleneck. This smells of premature optimization; first, get it to work. Then, if it's too slow, make it work faster.
There is no simple mechanism for order-independent transparency. Your best bet is to just render things in the proper Z order.
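In practice that can still be only a handful of draw calls. Here is a minimal sketch of what "proper Z order" could look like for your GUI batches, assuming each batch records the z/layer value Gwen assigned to it and that a VAO and shader are already bound; the GuiBatch type and field names are hypothetical:

```cpp
#include <algorithm>
#include <vector>
// (assumes an OpenGL loader header such as glad or GLEW is already included)

// Hypothetical batch record: one range of the shared VBO per texture,
// tagged with the z value the GUI assigned to it.
struct GuiBatch {
    float z;            // smaller = further back
    unsigned texture;   // 0 = untextured filled rects
    int first, count;   // vertex range inside the VBO
};

void drawGui(std::vector<GuiBatch>& batches) {
    // Sort back to front so alpha blending composites correctly.
    std::sort(batches.begin(), batches.end(),
              [](const GuiBatch& a, const GuiBatch& b) { return a.z < b.z; });

    glDisable(GL_DEPTH_TEST);   // painter's order replaces the depth test
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (const GuiBatch& b : batches) {
        glBindTexture(GL_TEXTURE_2D, b.texture);
        glDrawArrays(GL_TRIANGLES, b.first, b.count);
    }
}
```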


How to XOR the colors under a shape? (SDL2)

I have an artsy side-project that is running slower than I want it to. Basically, I want to draw a bunch of shapes and colors such that they XOR the shapes and colors I've already drawn. The program makes things like seven black circles XORed onto the screen.
My method is quite slow: for each pixel, I loop through each circle to determine whether it should be XORed.
I can draw circles with SDL_gfx, but I can't seem to find a drawing mode that XORs. My current thought process is to use a blending mode that will at least tell me whether a specific pixel has been covered an odd or even number of times. However, creating an SDL_Texture that can be rendered to (SDL_TEXTUREACCESS_TARGET) makes it impossible to manipulate directly (SDL_TEXTUREACCESS_STREAMING).
The simple question is, how do I apply a black circle such that it XORs the pixels below it?
I don't think there is a way to do this with SDL_Renderer and still have reasonable performance. You would have to do the work in an SDL_Surface and upload it again.
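For what it's worth, a rough sketch of that surface route: XOR a filled circle straight into the SDL_Surface's pixels on the CPU, then upload the surface once per frame. It assumes a 32-bit format such as SDL_PIXELFORMAT_ARGB8888; the function name is mine:

```cpp
#include <SDL2/SDL.h>

// XOR a filled circle into a 32-bit surface by flipping the RGB bits of every
// pixel inside the circle (the alpha byte is left alone).
static void xor_circle(SDL_Surface* s, int cx, int cy, int r)
{
    if (SDL_MUSTLOCK(s)) SDL_LockSurface(s);
    for (int y = cy - r; y <= cy + r; ++y) {
        if (y < 0 || y >= s->h) continue;
        for (int x = cx - r; x <= cx + r; ++x) {
            if (x < 0 || x >= s->w) continue;
            int dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy > r * r) continue;   // outside the circle
            Uint32* p = (Uint32*)((Uint8*)s->pixels + y * s->pitch) + x;
            *p ^= 0x00FFFFFF;                          // flip RGB, keep alpha
        }
    }
    if (SDL_MUSTLOCK(s)) SDL_UnlockSurface(s);
}
// Afterwards, push the whole surface to the screen once per frame, e.g. with
// SDL_CreateTextureFromSurface (or SDL_UpdateTexture on a streaming texture).
```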
I wrote SDL_gpu to enable modern graphical effects with a similar style to SDL's built-in render API. This particular effect is trivial in GLSL if you've used it much. If you want to avoid custom shaders, this effect is probably possible with the expanded blend mode options that SDL_gpu has.

DirectX Transparent textures - Making plants look good

I've recently started learning DirectX programming in C++. I have some experience of graphics programming in other languages, but I am new to the DirectX scene.
Anyway, I wanted to ask a question about transparent textures. So far I've always used alpha testing, as that has met my needs, but I've recently begun to wonder how "proper" game engines manage to render such good-looking semi-transparent textures for things like plants and trees, which have smooth transparency.
Every time I've used alpha testing, the textures have ended up looking blocky and just plain bad. I'd love to be able to have smooth, semi-transparent textures which draw as I would expect.
My guess as to how this works would be to execute render calls in order, starting with things that are far away from the camera and moving closer. However, I can't really see how this works for pre-made models. For example, if you had a tree model where the leaves and trunk shared a model, how would you guarantee that the back leaves draw first, that the trunk draws correctly over them, and that the front leaves look correct over the trunk?
I had tried that method and had also disabled z-buffering for transparent objects such as smoke particles, and it sort of worked, but it looked messy and the effect appeared different depending on the viewing angle. So that didn't seem ideal.
So, in short, what methods do "proper" games use to correctly draw smooth alpha textures (which have a range of alpha values) into a 3D scene for things like foliage?
Thanks,
Michael.
Ordered transparency is accomplished most basically using the painter's algorithm.
The painter's algorithm falls apart where an object needs to be drawn both in front of and behind another object, or where a single object has multiple sub components that are transparent. We can't easily sort sub-components of a mesh relative to each other.
While it doesn't solve these problems, the z-buffer allows us to optimize rendering. Most games use this slightly more complex algorithm as the basis of their rendering:
Render all opaque objects, sorted by material state or front to back to avoid overdraw.
Render all transparent objects, sorted back to front (a sketch of this two-pass order follows below).
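Here is that sketch, kept API-agnostic; the Drawable type, draw() and the state-setting comments are placeholders for your engine's own code:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-object record; 'depth' is the distance from the camera.
struct Drawable {
    float depth;
    bool  transparent;
    // ... mesh / material handles ...
};

void renderScene(std::vector<Drawable*>& objects) {
    std::vector<Drawable*> opaque, transparent;
    for (Drawable* d : objects)
        (d->transparent ? transparent : opaque).push_back(d);

    // Pass 1: opaque, sorted front to back, depth write on, so hidden pixels are rejected early.
    std::sort(opaque.begin(), opaque.end(),
              [](const Drawable* a, const Drawable* b) { return a->depth < b->depth; });
    // setDepthWrite(true); setBlending(false);
    // for (Drawable* d : opaque) draw(d);

    // Pass 2: transparent, sorted back to front, depth test on but depth write off.
    std::sort(transparent.begin(), transparent.end(),
              [](const Drawable* a, const Drawable* b) { return a->depth > b->depth; });
    // setDepthWrite(false); setBlending(true);
    // for (Drawable* d : transparent) draw(d);
}
```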
Games use a variety of techniques in combination to avoid this problem.
Split models into non-overlapping transparent sections. Often this is done implicitly, because a game's transparent objects will often use different materials than the rest of the model. You can also split models with multiple layers of transparency in such a way that each new model's layers do not overlap. For example, you could split a pine tree model radially into 5 sections.
This was more common with fixed-function pipelines. Modern games simply try to avoid the problem.
Avoid semi-transparent parts in models. Use transparency only for anti-aliasing edges and where the transparent object can split the world cleanly into two separate groups of objects. (Windows or water planes for example). Splitting the world like this and rendering those chunks front to back allows our anti-aliased edges to be drawn without causing obvious cut-outs on other transparent objects. The edges themselves tend to look good even if they overlap as long as your alpha-test is set higher than ~30%.
Semi-transparent objects are often rendered as particle effects; grass and smoke are the most common examples. The point list for the effect or group of grass objects is sorted each frame, which is a much simpler problem than sorting arbitrary sub-meshes. Many outdoor games have complex grass and foliage instancing systems that let them render individual leaves and blades properly sorted while avoiding most of the rendering overhead, but they strictly limit the types of objects drawn this way.
Many effects can be done in an order-independent way using additive or subtractive blending rather than alpha blending.
There are a couple of easy options if your edges still aren't smooth enough. You can dither any parts of the model below 75% transparency. Or you can have the hardware do it for you without visible artifacts by using alpha-to-coverage, which makes the multisampling hardware dither the edges across its samples. It won't give you a smooth gradient, but the 4-16 levels of alpha are perfectly acceptable for anti-aliasing edges, and it is free if you already intend to use MSAA.
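If you happen to be on Direct3D 11, alpha-to-coverage is just a flag on the blend state. A minimal sketch, assuming an existing ID3D11Device and a multisampled render target (error handling omitted):

```cpp
#include <d3d11.h>

// Build a blend state with alpha-to-coverage enabled so the MSAA hardware
// derives a coverage mask from the pixel shader's output alpha.
ID3D11BlendState* createAlphaToCoverageState(ID3D11Device* device)
{
    D3D11_BLEND_DESC desc = {};
    desc.AlphaToCoverageEnable  = TRUE;    // the flag that enables the technique
    desc.IndependentBlendEnable = FALSE;
    desc.RenderTarget[0].BlendEnable = FALSE;   // no conventional alpha blending needed
    desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* state = nullptr;
    device->CreateBlendState(&desc, &state);
    // Bind it before drawing the foliage:
    // context->OMSetBlendState(state, nullptr, 0xffffffff);
    return state;
}
```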
There are a lot of caveats and special cases. If you have water you will probably need to render any semi-transparent objects that intersect the water twice using a stencil or depth test.
Moving the camera in and out of transparent objects is always problematic.
It is nearly impossible to render a complex semi-transparent object correctly, like an x-ray view of a building or a ghost. Many games simply render this type of object additively, but with modern hardware a variety of more complex schemes are possible.
More complex schemes
Depth peeling is a multi-pass rendering method: each pass uses the depth information from the previous pass to peel off the next layer of surfaces, so the scene can be composited correctly regardless of draw order or which object carries the alpha. It is less expensive than you would expect, because many objects only end up in one or two layers, but it is not free, and many game developers find it too costly.
There are many other varieties of order-independent transparency (OIT). With a modern GPU and compute, we can render in a single pass into a buffer where each pixel holds a stack of fragments, then sort and blend that stack in a post-process, paying the cost only where a pixel actually has layers of transparency.
OIT is still mostly used in special cases, like 2.5D games (such as LittleBigPlanet), but I believe it may eventually become a core tool in game programming.

Perfect filled triangle rendering algorithm?

Where can I get an algorithm to render filled triangles? Edit3: I can't use OpenGL to render it; I need a per-pixel algorithm for this.
My goal is to render a regular polygon from triangles, so if I use this triangle-filling algorithm, the edges from each triangle shouldn't overlap (or leave gaps between them), because that would result in rendering errors if I use, for example, XOR to render the pixels.
Therefore, the render quality should match OpenGL rendering: I should be able to define, for example, a circle with N vertices, and it should render correctly as a circle at any size; so it shouldn't use only integer coordinates like some triangle-filling algorithms do.
I would need the ability to control the triangle filling myself, so I could add my own logic for how each individual pixel is rendered. So I need the bare code behind the rendering, to have full control over it. It should be efficient enough to draw tens of thousands of triangles without waiting more than a second, perhaps. (I'm not sure how fast it can be at best, but I hope it won't take more than 10 seconds.)
Preferred language would be C++, but I can convert other languages to my needs.
If there are no free algorithms for this, where can I learn to build one myself, and how hard would that actually be? (me=math noob).
I added OpenGL tag since this is somehow related to it.
Edit2: I tried the algorithm here: http://joshbeam.com/articles/triangle_rasterization/ but it seems to be slightly broken. Here is a circle with 64 triangles rendered with it:
But if you zoom in, you can see the errors:
Explanation: there are 2 pixels overlapping the other triangle's color, which should not happen! (Otherwise transparency, XOR, and similar effects will produce bad rendering.)
It seems like the errors are more visible on smaller circles. This is not acceptable if I want to apply an XOR effect to the pixels.
What can I do to fix these, so it will fill it perfectly without overlapped pixels or gaps?
Edit4: I noticed that very small circles don't render very well. I realised this was because the coordinates were indeed converted to integers. How can I treat the coordinates as floats and make it render the circle precisely and perfectly, just like OpenGL does? Here is an example of how bad the small circles look:
Notice how perfect the OpenGL render is! THAT is what I want to achieve, without using OpenGL. NOTE: I don't just want to render a perfect circle, but any polygon shape.
There's always the half-space method.
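To make that concrete, here is a minimal sketch of a half-space rasterizer: it evaluates the three edge functions at every pixel center using float coordinates, and uses a top-left tie-break so that two triangles sharing an edge never both claim a pixel and never leave a gap. plotPixel is a hypothetical hook for your own per-pixel (e.g. XOR) logic, and coordinates are assumed to be screen pixels with y pointing down:

```cpp
#include <algorithm>
#include <cmath>
#include <utility>

// Placeholder for your own per-pixel logic (plain write, XOR, ...).
void plotPixel(int x, int y);

// Signed edge function: positive when p lies on the interior side of the
// directed edge a->b, once the triangle is normalized to positive area below.
static float edgeFn(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// With that normalization, a "top" edge is horizontal and points right, and a
// "left" edge points up the screen; those edges own pixel centers that land
// exactly on them, the other edges do not.
static bool topLeft(float ax, float ay, float bx, float by)
{
    return (ay == by && bx > ax) || (by < ay);
}

// Sample every pixel center (x + 0.5, y + 0.5) in the bounding box and fill it
// when all three edge functions agree.
void fillTriangle(float x0, float y0, float x1, float y1, float x2, float y2)
{
    if (edgeFn(x0, y0, x1, y1, x2, y2) < 0.0f) {        // normalize the winding
        std::swap(x1, x2); std::swap(y1, y2);
    }
    if (edgeFn(x0, y0, x1, y1, x2, y2) <= 0.0f) return; // degenerate triangle

    const bool tl0 = topLeft(x0, y0, x1, y1);
    const bool tl1 = topLeft(x1, y1, x2, y2);
    const bool tl2 = topLeft(x2, y2, x0, y0);

    const int minX = (int)std::floor(std::min({x0, x1, x2}));
    const int maxX = (int)std::ceil (std::max({x0, x1, x2}));
    const int minY = (int)std::floor(std::min({y0, y1, y2}));
    const int maxY = (int)std::ceil (std::max({y0, y1, y2}));

    for (int y = minY; y < maxY; ++y)
        for (int x = minX; x < maxX; ++x) {
            const float px = x + 0.5f, py = y + 0.5f;
            const float w0 = edgeFn(x0, y0, x1, y1, px, py);
            const float w1 = edgeFn(x1, y1, x2, y2, px, py);
            const float w2 = edgeFn(x2, y2, x0, y0, px, py);
            if ((w0 > 0 || (w0 == 0 && tl0)) &&
                (w1 > 0 || (w1 == 0 && tl1)) &&
                (w2 > 0 || (w2 == 0 && tl2)))
                plotPixel(x, y);
        }
}
```

Because the vertices stay floats and every pixel is decided by the same three edge equations, neighbouring triangles of your polygon fan get a consistent verdict along the shared edge, which is exactly what the scan-line code you linked loses when it rounds to integers.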
OpenGL uses the GPU to perform this job. This is accelerated in hardware and is called rasterization.
As far as I know, the hardware implementation is based on the scan-line algorithm.
This used to be done by creating the outline and then filling in the horizontal lines. See this link for more details: http://joshbeam.com/articles/triangle_rasterization/
Edit: I don't think this will produce the lone pixels you are after; there should be a pixel on every line.
Your problem looks a lot like the classic problem of triangles sharing the very same edge. The usual rule for triangles sharing an edge is that one triangle is allowed to claim the shared pixels while the other has to leave them blank.
When working with a graphics card, you usually get this behavior by applying a drawing order from left to right while also enabling a z-buffer test, or by testing whether the pixel has already been drawn: if a pixel with the very same z-value is already set, changing the pixel is not allowed.
In your example with the circles, the shared edge of neighboring circle segments is not computed exactly the same way for both triangles. You have to check whether the edges are calculated differently, and why.
Whenever you draw two different shapes and you see something like that, you can either fix your model (so the shapes share all the edge vertices), go for a z-buffer test, or use a color test.
You can also minimize the effect by drawing the edges into a sub-buffer that has a higher resolution and down-sampling it. Since this does not affect the whole area, it is more cost-effective in terms of space and time than down-sampling the whole scene.

OpenGL 2D game question

I want to make a game with Worms-like destructible terrain in 2D, using OpenGL.
What is the best approach for this?
Draw pixel by pixel? (Uh, not good?)
Have the world as a texture and manipulate it (is that possible?)
Thanks in advance
Thinking about the way Worms terrain looked, I came up with this idea. But I'm not sure how you would implement it in OpenGL. It's more of a layered 2D drawing approach. I'm posting the idea anyway. I've emulated the approach using Paint.NET.
First, you have a background sky layer.
And you have a terrain layer.
The terrain layer is masked so the top portion isn't drawn. Draw the terrain layer on top of the sky layer to form the scene.
Now for the main idea. Any time there is an explosion or other terrain-deforming event, you draw a circle or other shape on the terrain layer, using the terrain layer itself as a drawing mask (so only the part of the circle that overlaps existing terrain is drawn), to wipe out part of the terrain. Use a transparent/mask-color brush for the fill and some color similar to the terrain for the thick pen.
You can repeat this process to add more deformations. You could keep this layer in memory and add deformations as they occur or you could even render them in memory each frame if there aren't too many deformations to render.
I guess you'd better use texture-filled polygons with the correct mapping (a linear one that doesn't stretch the texture to use all the texels, but leaves the cropped areas out), and then reshape them as they get destroyed.
I'm assuming your problem will be to implement the collision between characters/weapons/terrain.
As long as you aren't doing this on OpenGL ES, you might be able to get away with using the stencil buffer to do per-pixel collision detection and have your terrain be a single modifiable texture.
This page will give an idea:
http://kometbomb.net/2007/07/11/hardware-accelerated-2d-collision-detection-in-opengl/
The way I imagine it is this:
a plane with the texture applied
a path (a vector of points/segments) used for ground collisions.
When something explodes, you do a boolean operation (rectangle minus circle) on the texture (revealing the background) and on the 'walkable' path.
What I'm trying to say is that you do a geometric boolean operation and use the result to update the texture (with an alpha mask or something) and to update the data structure you use to keep track of the walkable area (whichever that might be).
Split things up instead of relying only on GL draw methods.
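As a rough sketch of the "update the texture with an alpha mask" half of this idea, assuming the terrain lives in an RGBA texture with a CPU-side copy (the Terrain struct and names are hypothetical):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>
// (assumes an OpenGL loader header such as glad or GLEW is already included)

// Hypothetical terrain layer: an RGBA8 pixel buffer mirrored into a GL texture.
struct Terrain {
    int width = 0, height = 0;
    std::vector<uint8_t> rgba;   // width * height * 4 bytes, CPU copy
    unsigned glTexture = 0;      // the OpenGL texture it mirrors
};

// Punch a transparent hole where an explosion happened, then upload only the
// touched rectangle back to the GPU.
void explodeHole(Terrain& t, int cx, int cy, int radius)
{
    const int x0 = std::max(cx - radius, 0), x1 = std::min(cx + radius, t.width  - 1);
    const int y0 = std::max(cy - radius, 0), y1 = std::min(cy + radius, t.height - 1);
    if (x0 > x1 || y0 > y1) return;   // explosion entirely off the terrain

    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            const int dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy <= radius * radius)
                t.rgba[(y * t.width + x) * 4 + 3] = 0;   // alpha = 0: terrain gone here
        }

    // Re-upload just the dirty rectangle.
    glBindTexture(GL_TEXTURE_2D, t.glTexture);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, t.width);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x0, y0, x1 - x0 + 1, y1 - y0 + 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, &t.rgba[(y0 * t.width + x0) * 4]);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}
// Draw the terrain quad with blending (or alpha test) enabled and the holes
// will reveal the sky layer behind it.
```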
I think I would start by drawing the foreground into the stencil buffer so the stencil buffer is set to 1 bits anywhere there's foreground, and 0 elsewhere (where you want your sky to show).
Then to draw a frame, you draw your sky, enable the stencil buffer, and draw the foreground. For the initial frame (before any explosion has destroyed part of the foreground) the stencil buffer won't really be doing anything.
When you do have an explosion, however, you draw it to the stencil buffer (clearing the stencil buffer for that circle). Then you re-draw your data as before: draw the sky, enable the stencil buffer, and draw the foreground.
This lets you get the effect you want (the foreground disappears where desired) without having to modify the foreground texture at all. If you prefer not to use the stencil buffer, the alternative that seems obvious to me would be to enable blending, and just manipulate the alpha channel of your foreground texture -- set the alpha to 0 (transparent) where it's been affected by an explosion. IMO, the stencil buffer is a bit cleaner approach, but manipulating the alpha channel is pretty simple as well.
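A minimal sketch of the stencil approach described above, assuming a context created with a stencil buffer; drawSky, drawForeground and drawCircle are hypothetical helpers that issue your own geometry:

```cpp
// (assumes an OpenGL loader header such as glad or GLEW is already included)
void drawSky();
void drawForeground();
void drawCircle(float x, float y, float radius);

// One-time setup: mark every foreground pixel with stencil value 1.
void initStencilMask()
{
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);            // write 1 wherever we draw
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // stencil only, no color
    drawForeground();
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDisable(GL_STENCIL_TEST);
}

// On an explosion: punch a 0 into the stencil mask where the circle is.
void carveExplosion(float x, float y, float r)
{
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    drawCircle(x, y, r);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDisable(GL_STENCIL_TEST);
}

// Every frame: sky everywhere, foreground only where the stencil is still 1.
void drawFrame()
{
    drawSky();
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);               // read-only pass
    drawForeground();
    glDisable(GL_STENCIL_TEST);
}
```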
I think, but this is just a quick idea, that a good way might be to draw a Very Large Number of Lines.
I'm thinking that you represent the landscape as a bunch of line segments: for each column of the screen you have 0..n vertical lines that make up the ground:
12 789
0123 6789
0123456789
0123456789
In the above awesomeness, the column of "0"s makes up a single line, and so on. I didn't try to illustrate the case where a single pixel column has more than one line, since that's a bit hard in this coarse format.
I'm not sure this will be efficient, but it at least makes some sense since lines are an OpenGL primitive.
You can color and texture the lines by enabling texture-mapping and specifying the desired texture coordinates for each line segment.
Typically, the way I have seen it done is to have each entity be a textured quad, then update the texture for animation. For destructible terrain it might be best to break the terrain into tiles; then you only have to update the ones that have changed. Don't use glDrawPixels; it is probably the slowest approach possible (outside of reloading textures from disk every frame, though even that would be close).

2D OpenGL scene slows down with lots of overlapping shapes

I'm drawing 2D shapes with OpenGL. They aren't using that many polygons. I notice that I can have lots and lots of shapes as long as they don't overlap, but if I get a shape behind a shape behind a shape, and so on, it really starts lagging. I feel like I might be doing something wrong. Is this normal, and is there a way to fix it? (I can't skip rendering the covered shapes because I do alpha blending.) I also have CW backface culling enabled.
Thanks
Are your two cases (overlapping and non-overlapping) using the exact same set of shapes? Because if the overlapping case involves a total area of all your shapes that is larger than the first case, then it would be expected to be slower. If it's the same set of shapes that slows down if some of them overlap, then that would be very unusual and shouldn't happen on any standard hardware OpenGL implementation (what platform are you using?). Backface culling won't be causing any problem.
Whenever a shape is drawn, the GPU has to do some work for each pixel that it covers on the screen. If you draw the same shape 100 times in the same place, then that's 100-times the pixel work. Depth buffering can reduce some of the extra cost for opaque objects if you draw objects in depth-sorted order, but that trick can't work for things that use transparency.
When using transparency, it's the sum of the area of each rendered shape that matters. Not the amount of the screen that's covered after everything is rendered.
You need to order your shapes front-to-back if they are opaque. Then the depth test can quickly and easily reject each pixel.
Then, you need to order them back-to-front if they are transparent. Rendering transparency out-of-order is very slow.
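As a minimal sketch, the state setup for those two passes might look like this; drawShapes is a hypothetical helper, and it assumes your shapes carry z values and have already been split and sorted:

```cpp
#include <vector>
// (assumes an OpenGL loader header such as glad or GLEW is already included)

struct Shape;   // your shape type
void drawShapes(const std::vector<Shape*>& shapes);

void renderFrame(const std::vector<Shape*>& opaqueFrontToBack,
                 const std::vector<Shape*>& transparentBackToFront)
{
    // Opaque pass: depth test and depth write on, so covered pixels are rejected early.
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    drawShapes(opaqueFrontToBack);

    // Transparent pass: still test against the opaque depth, but don't write depth,
    // and blend back to front so the result composites correctly.
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawShapes(transparentBackToFront);
    glDepthMask(GL_TRUE);
}
```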
Edit: Hmm, I (somehow) missed the fact that this is 2D, despite the fact that the OP mentioned it repeatedly.