OpenGL: draw shapes (like polygons) with holes (any 2D shape as the hole)

I want to draw shapes with holes in OpenGL and GLFW3. How can I do this? I don't want to use gluTessBeginPolygon.
For example: a rectangle with a rectangular hole in it.

If the shape is always the same, then the simplest way is to change how you visualize this: it's not a polygon with a hole, it's 2 (or more) polygons with no holes. Draw those instead.
However, if the shape changes dynamically, calculating this triangulation in code is difficult.
If you can't do this because the hole shape is dynamic, you can use the stencil buffer to prevent OpenGL from drawing where the hole is: clear the stencil buffer, set the rendering state so that you write only to the stencil buffer, and render the hole. Then restore the normal rendering state, set the stencil test so that nothing is drawn where the stencil value is non-zero, and render the rectangle. Then go back to normal.
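A minimal sketch of that sequence, in fixed-function style; drawHole() and drawRectangle() are hypothetical placeholders for whatever geometry you actually render:

/* Pass 1: write 1s into the stencil where the hole is; disable
   color and depth writes so nothing visible is drawn. */
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawHole();

/* Pass 2: draw the outer shape only where the stencil is still 0. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glStencilFunc(GL_EQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawRectangle();

glDisable(GL_STENCIL_TEST);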
If you have a shape with lots of holes (like a chain-link fence) then instead of rendering zillions of vertices, you should use a texture with an alpha channel, and use alpha testing in your shader - use discard; on the transparent pixels so they don't render. The fixed-function version of this is GL_ALPHA_TEST.
If you have a formula to detect whether a pixel is in the hole, you can use discard; as well. Your shader can discard for any reason you like - it doesn't have to be based on the alpha channel of a texture.
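For example, a fragment shader combining both variants might look like this (the uniform and varying names here are illustrative, not from the question):

/* GLSL fragment shader, shown as a C string: alpha-tested texture
   plus an example of a formula-based discard. */
const char *fragmentSrc =
    "#version 120\n"
    "uniform sampler2D tex;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    vec4 c = texture2D(tex, uv);\n"
    "    if (c.a < 0.5) discard;      /* alpha-test style hole */\n"
    "    /* or: if (length(uv - vec2(0.5)) < 0.25) discard; */\n"
    "    gl_FragColor = c;\n"
    "}\n";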
What you cannot do is count the number of times you cross the polygon boundary when going from left to right, like a scanline renderer might. That's because OpenGL processes all pixels in parallel - not left-to-right.

Related

Drawing a Circle on a plane, Boolean Subtraction - OpenGL

I'm hoping to draw a plane in OpenGL, using C++, with a hole in the center, much like the green of a golf course for example.
I was wondering what the easiest way to achieve this is?
It's fairly simple to draw a circle and a plane (tutorials all over Google will show this, for those curious), but I was wondering if there is a boolean subtraction technique like you get when modelling in 3ds Max or similar software, where you create both objects and then take the intersection/union etc. to leave a new object/shape? In this case, subtracting the circle from the plane would create the hole.
Another way I thought of doing it is giving the circle alpha values and making it transparent, but then of course it still leaves the plane's surface visible anyway.
Any help or pointers in the right direction?
I would avoid messing around with transparency, blending mode, and the like. Just create a mesh with the shape you need and draw it. Remember OpenGL is for graphics, not modelling.
There are a couple of ways you could do this. The first is the one you already stated: draw the circle as transparent. The caveat is that you must draw the circle before you draw the plane, so that alpha blending blends the circle with the background; then, when you render the plane, the parts covered by the circle are discarded by the depth test.
The second method you could try is with texture mapping. You could create a texture that is basically a mask with everything set to opaque white except the circle portion which is set to have an alpha value of 0. In your shader you would then multiply your fragment color by this mask texture color so that the portions where the circle is located are now transparent.
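A sketch of what that mask shader could look like (maskTex and the hard-coded plane color are illustrative):

/* GLSL fragment shader, shown as a C string: multiply the plane's
   color by the mask texture, so texels with alpha = 0 become fully
   transparent. */
const char *maskFrag =
    "#version 120\n"
    "uniform sampler2D maskTex;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    vec4 mask = texture2D(maskTex, uv);\n"
    "    gl_FragColor = vec4(0.2, 0.6, 0.2, 1.0) * mask;\n"
    "}\n";

Blending (glEnable(GL_BLEND) with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)) has to be enabled when the plane is drawn for the transparent parts to actually show the background.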
Both of these methods would work with shapes other than a circle as well.
I suggest the stencil buffer. Mark the area where you want the hole by masking the color and depth buffers and drawing only into the stencil buffer; then unmask color and depth, stop writing to the stencil buffer, and draw your plane with a stencil function that tells OpenGL to discard all pixels where the stencil "markings" are.

How to clip texture with arbitrary shape?

I am rendering complex 3d objects. Here is a simple example with a sphere-like object:
Next I am applying a clipping plane to these objects and rendering a texture on this plane, giving the impression you are looking at the inside of the object, as if it was sliced. For example:
The problem is the jagged edge of the texture: it sticks out past the boundary of the surface. Here's another angle where you can see it sticking out. The surface and the texture both derive from the same source data, but the surface is smoothed and has a higher resolution than the texture.
What I want is to be able to somehow clip the texture, so that it never sticks out past the boundary of the surface. Also, I don't want to simply scale down the texture, since although this might prevent it from sticking outside, it would create interior gaps between the texture edge and the surface edge. I would rather the texture be a little too big and have it clipped so that it sits flush against the edge of the surface.
Here's where I am:
I figured the first step would be to define the intersection of the plane and the surface. So now I have that, as an ordered list of line segments. However, I'm not sure how to proceed with this info (or if this is even the best approach).
I've been reading up on stencil buffers. One approach might be to turn the intersection line into a 2d shape and draw this into a stencil buffer. Then apply this when drawing the texture. (Although I think it's a lot of work since the shapes can be complicated.)
I am wondering if I can somehow use the already drawn surface (in conjunction with a stencil buffer or some other technique) to somehow clip the texture -- without having to go through the extra trouble of deriving the intersection line, etc.
What's the best approach here? (Any online examples you can point me to would also be really helpful.)
If you're clipping convex objects and know the coordinates of the clipped points, you can create the polygonal "cap" yourself - just draw the clipped points in the proper order using GL_TRIANGLE_FAN, and that's it. This won't work with non-convex objects - those require a triangulation algorithm. You could use the GLU tessellators to triangulate polygons, but that can be tricky/difficult.
If the clipped area can be defined by a formula, you can write a shader that precisely clips pixels past a certain distance (i.e. if x^2 + y^2 + z^2 > r^2, discard the pixel).
You could also draw the back-facing faces with a shader that renders every back-facing pixel as if it were on the clip plane, using simple raytracing. That's complicated, and might be overkill in your case. Dead Rising used a similar technique in its game engine.
You can also use the stencil buffer.
Draw back-facing faces first with GL_INCR (glStencilOp(GL_KEEP, GL_INCR, GL_INCR)), then draw front-facing faces with GL_DECR (glStencilOp(GL_KEEP, GL_DECR, GL_DECR)). Then draw the texture only where the stencil is non-zero (glStencilFunc(GL_NOTEQUAL, 0, 0xff); glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);). If you have many overlapping shapes, however, you'll need to take special care of them.
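Spelled out as a sketch (drawObject() and drawCapQuad() are hypothetical placeholders; the clip plane is assumed to be enabled while the object is drawn, and the cap quad is assumed to lie on the clip plane):

glEnable(GL_STENCIL_TEST);
glClear(GL_STENCIL_BUFFER_BIT);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 0, 0xFF);

glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);                 /* render back faces only */
glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
drawObject();

glCullFace(GL_BACK);                  /* render front faces only */
glStencilOp(GL_KEEP, GL_DECR, GL_DECR);
drawObject();

/* Draw the textured cap only where the count is non-zero. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawCapQuad();

glDisable(GL_STENCIL_TEST);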
--edit--
However, I'm not sure how to proceed with this info (or if this is even the best approach).
Draw it as a triangle fan. For convex objects, that's all you need. For non-convex objects that won't work.
I've been reading up on stencil buffers. One approach might be to turn the intersection line into a 2d shape
No, it won't work like that. The region you want to fill with the texture needs to hold a specific stencil value; that's how stencil clipping works.
to somehow clip the texture
In OpenGL you have at least 6 user clip planes (GL_MAX_CLIP_PLANES is guaranteed to be at least 6). If you need more than that, you'll need the more advanced techniques - stencil, deriving the intersection line, shaders, or triangulation.
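For reference, a minimal fixed-function clip-plane setup looks like this (the plane equation here is an arbitrary example):

/* Keep everything with y >= 0, clip everything below. The four
   coefficients are the plane equation Ax + By + Cz + D >= 0,
   evaluated in eye space at the time of the glClipPlane call. */
GLdouble plane[4] = { 0.0, 1.0, 0.0, 0.0 };
glClipPlane(GL_CLIP_PLANE0, plane);
glEnable(GL_CLIP_PLANE0);
/* ... draw the object ... */
glDisable(GL_CLIP_PLANE0);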
Any online examples you can point me to would also be really helpful
Drawing Filled, Concave Polygons Using the Stencil Buffer

Reverse triangle lookup from affected pixels?

Assume I have a 3D triangle mesh and an OpenGL framebuffer to which I can render the mesh.
For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
The only way I could think of doing this is to individually render each triangle from the mesh, then go through each pixel in the framebuffer to determine if it was affected by the triangle (using the depth buffer or a user-defined fragment shader output variable). I would then have to clear the framebuffer and do the same for the next triangle.
Is there a more efficient way to do this?
I considered, for each fragment in the fragment shader, writing out a triangle identifier, but GLSL doesn't allow outputting a list of integers.
For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
You will not be able to do this for the entire scene: there's no structure that allows you to associate a "list" with every pixel.
You can get the list of primitives that affected a certain area using the selection buffer (see glRenderMode(GL_SELECT)).
You can get the scene's depth complexity using stencil buffer techniques.
If there are at most 8 triangles in total, you can get the list of triangles that affected every pixel using the stencil buffer: assign a unique (1 << n) stencil bit to each triangle, and OR it into the existing stencil buffer value for every stencil op, as sketched below.
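A sketch of that bit-per-triangle trick (at most 8 triangles with an 8-bit stencil buffer; drawTriangle(i), numTriangles, width and height are illustrative, and malloc needs <stdlib.h>):

glEnable(GL_STENCIL_TEST);
glDisable(GL_DEPTH_TEST);             /* occluded triangles must still write */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilMask(0xFF);
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glStencilFunc(GL_ALWAYS, 0xFF, 0xFF);
glStencilOp(GL_KEEP, GL_REPLACE, GL_REPLACE);

for (int i = 0; i < numTriangles; ++i) {
    glStencilMask(1u << i);           /* only triangle i's bit may change */
    drawTriangle(i);                  /* GL_REPLACE + write mask ORs the bit in */
}

/* Each pixel's stencil byte is now a bitmask of the triangles covering it. */
unsigned char *bits = malloc((size_t)width * height);
glReadPixels(0, 0, width, height, GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, bits);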
But to solve the generic case, you'll need your own rasterizer and LOTS of memory to store the per-pixel triangle lists. The problem is quite similar to a multi-layered depth buffer, after all.
Is there a more efficient way to do this?
Actually, yes, but it is not hardware accelerated and OpenGL has nothing to do with it. Store all rasterized triangles in an octree. For every pixel you want to test, launch a "ray" through that octree and collect the triangles the ray hits. This is really a collision-detection problem.
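Here is a CPU-side sketch of that idea without the octree (brute force over all triangles; the octree would only prune this loop). It uses the Moller-Trumbore intersection test, and all the types and names are illustrative:

#include <math.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 a, b, c; } Tri;

static Vec3 sub(Vec3 u, Vec3 v) { Vec3 r = { u.x - v.x, u.y - v.y, u.z - v.z }; return r; }
static Vec3 cross(Vec3 u, Vec3 v) {
    Vec3 r = { u.y*v.z - u.z*v.y, u.z*v.x - u.x*v.z, u.x*v.y - u.y*v.x };
    return r;
}
static float dot(Vec3 u, Vec3 v) { return u.x*v.x + u.y*v.y + u.z*v.z; }

/* Moller-Trumbore ray/triangle intersection; returns 1 on a hit
   in front of the ray origin. */
static int hits(Vec3 orig, Vec3 dir, Tri t) {
    Vec3 e1 = sub(t.b, t.a), e2 = sub(t.c, t.a);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (fabsf(det) < 1e-7f) return 0;          /* ray parallel to triangle */
    float inv = 1.0f / det;
    Vec3 s = sub(orig, t.a);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return 0;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return 0;
    return dot(e2, q) * inv >= 0.0f;
}

/* For one pixel: walk its eye ray and collect every triangle it
   crosses, occluded or not. */
int trianglesForPixel(Vec3 eye, Vec3 dir, const Tri *tris, int n, int *out) {
    int count = 0;
    for (int i = 0; i < n; ++i)
        if (hits(eye, dir, tris[i])) out[count++] = i;
    return count;
}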

OpenGL 2D game question

I want to make a game with Worms-like destructible terrain in 2D, using OpenGL.
What is the best approach for this?
Draw pixel by pixel? (Uh, not good?)
Have the world as a texture and manipulate it (is that possible?)
Thanks in advance
Thinking about the way Worms terrain looked, I came up with this idea. But I'm not sure how you would implement it in OpenGL. It's more of a layered 2D drawing approach. I'm posting the idea anyway. I've emulated the approach using Paint.NET.
First, you have a background sky layer.
And you have a terrain layer.
The terrain layer is masked so the top portion isn't drawn. Draw the terrain layer on top of the sky layer to form the scene.
Now for the main idea: any time there is an explosion or other terrain-deforming event, draw a circle or other shape onto the terrain layer, using the terrain layer itself as a drawing mask (so only the part of the shape that overlaps existing terrain is drawn), to wipe out part of the terrain. Use a transparent/mask-color brush for the fill and some color similar to the terrain for the thick pen.
You can repeat this process to add more deformations. You could keep this layer in memory and add deformations as they occur or you could even render them in memory each frame if there aren't too many deformations to render.
I guess you'd better use texture-filled polygons with the correct mapping (a linear one that doesn't stretch the texture to use all the texels, but leaves the cropped areas out), and then reshape those polygons as they get destroyed.
I'm assuming your problem will be to implement the collision between characters/weapons/terrain.
As long as you aren't doing this on OpenGL ES, you might be able to get away with using the stencil buffer to do per-pixel collision detection and have your terrain be a single modifiable texture.
This page will give an idea:
http://kometbomb.net/2007/07/11/hardware-accelerated-2d-collision-detection-in-opengl/
The way I imagine it is this:
a plane with the texture applied
a path (a vector of points/segments) used for ground collisions.
When something explodes, you do a boolean operation (rectangle minus circle) on the texture (revealing the background) and on the "walkable" path.
What I'm trying to say is: you do a geometric boolean operation and use the result to update the texture (with an alpha mask or something) and to update the data structure you use to keep track of the walkable area (whichever that might be).
Split things up, instead of relying only on GL draw methods.
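A sketch of the texture half of that idea, assuming you keep a CPU-side copy of the terrain's RGBA pixels (all names here are illustrative):

/* Zero the alpha inside the blast circle in the CPU copy, then
   upload just the dirty rectangle with glTexSubImage2D. */
void punchHole(GLuint tex, unsigned char *pix, int texW, int texH,
               int cx, int cy, int r) {
    int x0 = cx - r < 0 ? 0 : cx - r;
    int y0 = cy - r < 0 ? 0 : cy - r;
    int x1 = cx + r >= texW ? texW - 1 : cx + r;
    int y1 = cy + r >= texH ? texH - 1 : cy + r;

    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r)
                pix[4 * (y * texW + x) + 3] = 0;    /* alpha = 0 */

    glBindTexture(GL_TEXTURE_2D, tex);
    /* tell GL how to find the sub-rectangle inside the full image */
    glPixelStorei(GL_UNPACK_ROW_LENGTH, texW);
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, x0);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, y0);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x0, y0,
                    x1 - x0 + 1, y1 - y0 + 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, pix);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
}

Drawn with normal alpha blending (or an alpha test), the zeroed texels then let the background layer show through.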
I think I would start by drawing the foreground into the stencil buffer so the stencil buffer is set to 1 bits anywhere there's foreground, and 0 elsewhere (where you want your sky to show).
Then to draw a frame, you draw your sky, enable the stencil buffer, and draw the foreground. For the initial frame (before any explosion has destroyed part of the foreground) the stencil buffer won't really be doing anything.
When you do have an explosion, however, you draw it to the stencil buffer (clearing the stencil buffer for that circle). Then you re-draw your data as before: draw the sky, enable the stencil buffer, and draw the foreground.
This lets you get the effect you want (the foreground disappears where desired) without having to modify the foreground texture at all. If you prefer not to use the stencil buffer, the alternative that seems obvious to me would be to enable blending, and just manipulate the alpha channel of your foreground texture -- set the alpha to 0 (transparent) where it's been affected by an explosion. IMO, the stencil buffer is a bit cleaner approach, but manipulating the alpha channel is pretty simple as well.
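A sketch of the stencil variant just described; drawSky(), drawForeground() and drawExplosionCircle() are hypothetical placeholders, and the stencil buffer is assumed to persist between frames (i.e. it is never cleared):

/* Once at startup: set stencil to 1 wherever there is foreground. */
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawForeground();

/* When an explosion happens: zero the stencil inside the blast
   circle without touching the color buffer. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 0, 0xFF);
drawExplosionCircle();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

/* Every frame: sky everywhere, then foreground only where stencil == 1. */
glDisable(GL_STENCIL_TEST);
drawSky();
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawForeground();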
I think, but this is just a quick idea, that a good way might be to draw a Very Large Number of Lines.
I'm thinking that you represent the landscape as a bunch of line segments, for each column of the screen you have 0..n vertical lines, that make up the ground:
12 789
0123 6789
0123456789
0123456789
In the above awesomeness, the column of 0s makes up a single line, and so on. I didn't try to illustrate the case where a single pixel column has more than one line, since that's a bit hard in this coarse format.
I'm not sure this will be efficient, but it at least makes some sense since lines are an OpenGL primitive.
You can color and texture the lines by enabling texture-mapping and specifying the desired texture coordinates for each line segment.
Typically the way I have seen it done is to have each entity be a textured quad, then update the texture for animation. For destructible terrain it might be best to break the terrain into tiles, so you only have to update the ones that have changed. Don't use glDrawPixels - it is probably the slowest approach possible (outside of reloading textures from disk every frame, though it would be close).

opengl - blending with previous contents of framebuffer

I am rendering to a texture through a framebuffer object, and when I draw transparent primitives, the primitives are blended properly with other primitives drawn in that single draw step, but they are not blended properly with the previous contents of the framebuffer.
Is there a way to properly blend the contents of the texture with the new data coming in?
EDIT: More information requested; I will attempt to explain more clearly.
The blend mode I am using is GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA. (I believe that is the typical standard blend mode.)
I am creating an application that tracks mouse movement. It draws lines connecting the previous mouse position to the current mouse position, and as I do not want to draw the lines over again each frame, I figured I would draw to a texture, never clear the texture and then just draw a rectangle with that texture on it to display it.
This all works fine, except that when I draw shapes with alpha less than 1 onto the texture, it does not blend properly with the texture's previous contents. Let's say I have some black lines with alpha = .6 drawn onto the texture. A couple draw cycles later, I then draw a black circle with alpha = .4 over those lines. The lines "underneath" the circle are completely overwritten. Although the circle is not flat black (It blends properly with the white background) there are no "darker lines" underneath the circle as you would expect.
If I draw the lines and the circle in the same frame, however, they blend properly. My guess is that the texture just does not blend with its previous contents - it's as if it only blends with the glClearColor (which, in this case, is <1.0f, 1.0f, 1.0f, 1.0f>).
I think there are two possible problems here.
Remember that all of the overlay lines are blended twice here. Once when they are blended into the FBO texture, and again when the FBO texture is blended over the scene.
So the first possibility is that you don't have blending enabled when drawing one line over another in the FBO overlay. When you draw into an RGBA surface with blending off, the current alpha is simply written directly into the FBO overlay's alpha channel. Then later when you blend the whole FBO texture over the scene, that alpha makes your lines translucent. So if you have blending against "the world" but not between overlay elements, it is possible that no blending is happening.
Another related problem: when you blend one line over another in "standard" blend mode (src alpha, 1 - src alpha) in the FBO, the alpha channel of the "blended" part is going to contain a blend of the alphas of the two overlay elements. This is probably not what you want.
For example, if you draw two 50% alpha lines over each other in the overlay, then to get the equivalent effect when you blit the FBO, you need the FBO's alpha to be 75% - that is, 1 - (1 - 0.5) * (1 - 0.5), which is what would happen if you just drew two 50% alpha lines directly over your scene. But when you draw the two 50% lines into the FBO, you'll get 50% alpha in the FBO (a blend of 50% with 50%).
This brings up the final issue: by pre-mixing the lines with each other before you blend them over the world, you are changing the draw order. Whereas you might have had:
blend(blend(blend(background color, model), first line), second line);
now you will have
blend(blend(first line, second line), blend(background color, model)).
In other words, pre-mixing the overlay lines into an FBO changes the order of blending and thus changes the final look in a way you may not want.
First, the simple way to get around this: don't use an FBO. I realize this is a "go redesign your app" kind of answer, but using an FBO is not the cheapest thing, and modern GL cards are very good at drawing lines. So one option would be: instead of blending lines into an FBO, write the line geometry into a vertex buffer object (VBO). Simply extend the VBO a little bit each time. If you are drawing less than, say, 40,000 lines at a time, this will almost certainly be as fast as what you were doing before.
(One tip if you go this route: use glBufferSubData to write the lines in, not glMapBuffer - mapping can be expensive and doesn't work on sub-ranges on many drivers; better to just let the driver copy the few new vertices.)
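A sketch of that append-only VBO (MAX_LINES, the 2D (x, y) vertex layout, and the function names are illustrative assumptions):

#define MAX_LINES 40000

GLuint lineVbo;
int lineCount = 0;

void initLineBuffer(void) {
    glGenBuffers(1, &lineVbo);
    glBindBuffer(GL_ARRAY_BUFFER, lineVbo);
    /* Reserve space up front: 2 vertices of (x, y) floats per line. */
    glBufferData(GL_ARRAY_BUFFER, MAX_LINES * 4 * sizeof(float),
                 NULL, GL_DYNAMIC_DRAW);
}

/* Called on each mouse move: copy just the new segment in. */
void appendLine(float x0, float y0, float x1, float y1) {
    float v[4] = { x0, y0, x1, y1 };
    glBindBuffer(GL_ARRAY_BUFFER, lineVbo);
    glBufferSubData(GL_ARRAY_BUFFER, (GLintptr)lineCount * sizeof(v),
                    sizeof(v), v);
    ++lineCount;
}

/* Each frame, with the buffer bound and a 2-float position attribute
   set up: glDrawArrays(GL_LINES, 0, lineCount * 2); */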
If that isn't an option (for example, if you draw a mix of shape types or use a mix of GL state, such that "remembering" what you did is a lot more complex than just accumulating vertices) then you may want to change how you draw into the VBO.
Basically what you'll need to do is enable separate blending: initialize the overlay to black + 0% alpha (0,0,0,0) and blend the RGB channels with "standard blending" but the alpha channels additively. This still isn't quite correct for the alpha channel, but it's generally a lot closer - without it, over-drawn areas will be too transparent.
Then, when drawing the FBO, use "pre-multiplied" alpha, that is, (GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
Here's why that last step is needed: when you drew into the FBO, you already multiplied every draw call by its alpha channel (if blending was on). Since you are drawing over black, a green (0, 1, 0, 0.5) line is now dark green (0, 0.5, 0, 0.5). If you blend normally again, the alpha is reapplied and you'll have (0, 0.25, 0, 0.5). By using the FBO color as-is, you avoid the second alpha multiplication.
This is sometimes called "pre-multiplied" alpha because the alpha has already been multiplied into the RGB color. In this case you need it to get correct results, but in other cases programmers use it for speed (pre-multiplying removes one multiply per pixel when the blend op is performed).
Hope that helps! Getting blending right when the layers are not mixed in order gets really tricky, and separate blend isn't available on old hardware, so simply drawing the lines every time may be the path of least misery.
Clear the FBO with transparent black (0, 0, 0, 0), draw into it back-to-front with
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
and draw the FBO with
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
to get the exact result.
As Ben Supnik wrote, the FBO contains colour already multiplied with the alpha channel, so instead of doing that again with GL_SRC_ALPHA, it is drawn with GL_ONE. The destination colour is attenuated normally with GL_ONE_MINUS_SRC_ALPHA.
The reason for blending the alpha channel in the buffer this way is different:
The formula to combine transparency is
resultTr = sTr * dTr
(I use s and d because of the parallel to OpenGL's source and destination, but as you can see the order doesn't matter.)
Written with opacities (alpha values) this becomes
1 - rA = (1 - sA) * (1 - dA)
<=> rA = 1 - (1 - sA) * (1 - dA)
= 1 - 1 + sA + dA - sA * dA
= sA + (1 - sA) * dA
which is the same as the blend function (source and destination factors) (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) with the default blend equation GL_FUNC_ADD.
As an aside:
The above answers the specific problem from the question, but if you can easily choose the draw order it may in theory be better to draw premultiplied colour into the buffer front-to-back with
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
and otherwise use the same method.
My reasoning behind this is that the graphics card may be able to skip shader execution for regions that are already solid. I haven't tested this though, so it may make no difference in practice.
As Ben Supnik said, the best way to do this is to render the entire scene with separate blend functions for color and alpha. If you are using the classic non-premultiplied blend function, try glBlendFuncSeparateOES(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE) to render your scene to the FBO, and glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) to render the FBO to the screen.
It is not 100% accurate, but in most of the cases that will create no unexpected transparency.
Keep in mind that old hardware and some mobile devices (mostly OpenGL ES 1.x devices, like the original iPhone and 3G) do not support separate blend functions. :(