Can you render two quads with transparency at the same point? - opengl

I'm learning about how to use JOGL and OpenGL to render texture-mapped quads. I have a test program and a test quad, and I figured out how to enable GL_BLEND so that I can specify the alpha value of a vertex to make a quad with a sort of gradient... but now I want this to show through to another textured quad at the same position.
Drawing two quads with the same vertex locations didn't work; it only rendered the first quad. Is this possible, then, or will I basically need to construct a custom texture on the fly based on what I want and then draw one quad with that texture? I was really hoping to take advantage of blending in this case...

Have a look at which glDepthFunc you're using; perhaps you're using GL_LESS/GL_GREATER, and it could work if you switch to GL_LEQUAL/GL_GEQUAL.
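As a minimal sketch of that state change (GL_LESS, the default, rejects a second fragment drawn at exactly the same depth, whereas GL_LEQUAL lets it through):

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);   // the default GL_LESS would reject the second, coplanar quad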

It's difficult to make out from the question what exactly you're trying to achieve, but here's a try.
For transparency to work correctly in OpenGL you need to draw the polygons from the furthest to the nearest to the camera. If your scene is static, this is definitely something you can do. But if it's rotating and moving, then this is usually not feasible, since you'd have to sort the polygons for each and every frame.
More on this can be found in this FAQ page:
http://www.opengl.org/resources/faq/technical/transparency.htm
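As a rough illustration of that sorting step (a sketch only; Quad, center, and the camera array are made-up names, not anything from your code):

#include <algorithm>
#include <vector>

struct Quad { float center[3]; /* ...vertex/texture data... */ };

static float distSq(const Quad& q, const float cam[3]) {
    float dx = q.center[0] - cam[0];
    float dy = q.center[1] - cam[1];
    float dz = q.center[2] - cam[2];
    return dx * dx + dy * dy + dz * dz;
}

// Sort transparent quads back-to-front (farthest first) before drawing them.
void sortBackToFront(std::vector<Quad>& quads, const float cam[3]) {
    std::sort(quads.begin(), quads.end(),
              [&](const Quad& a, const Quad& b) { return distSq(a, cam) > distSq(b, cam); });
}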

For alpha blending, the renderer blends the transparent object with whatever colors are already behind it (from the camera's point of view) at the time the transparent object is rendered. If the transparent object is rendered first, there is nothing behind it to blend with. If it's rendered second, it will have something to blend with.
Try rendering your opaque quad first, then rendering your transparent quad second. Also, make sure your opaque quad is slightly behind your transparent quad (relative to the camera) so you don't get z-fighting.
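In rough GL calls, the ordering might look like this (a sketch; the draw*() helpers are placeholders for your own drawing code, and the glDepthMask() calls are a common extra precaution, not something from the answer above):

glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

drawOpaqueQuad();        // opaque geometry first
glDepthMask(GL_FALSE);   // optionally keep transparent fragments from writing depth
drawTransparentQuad();   // transparent geometry last, slightly nearer the camera
glDepthMask(GL_TRUE);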

Related

Get pixel behind the current pixel

I'm coding a program in C++ with GLUT, rendering a 3D model in a window.
I'm using glReadPixels to get the image of the scene displayed in the window.
And I would like to know how I can get, for a specific pixel (x, y), not directly its color but the color of the next object behind.
If I render a blue triangle, and a red triangle in front of it, glReadPixels gives me red colors from the red triangle.
I would like to know how I can get the colors from the blue triangle, the one I would get from glReadPixels if the red triangle wasn't here.
The default framebuffer only retains the topmost color. To get what you're suggesting would require a specific rendering pipeline.
For instance you could:
Create an offscreen framebuffer of the same dimensions as your target viewport
Render a depth-only pass to the offscreen framebuffer, storing the depth values in an attached texture
Re-render the scene with a special shader that only draws pixels whose post-transformation Z value is GREATER than the value in the previously recorded depth buffer
The final result of the last render should be the original scene with the top layer stripped off.
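A rough sketch of steps 1 and 2 using a framebuffer object with a depth texture attachment (width, height, and renderScene() are placeholders, and this assumes GL 3.x-era entry points):

GLuint fbo, depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE);   // depth-only pass: no color attachment needed
glReadBuffer(GL_NONE);

glClear(GL_DEPTH_BUFFER_BIT);
renderScene();           // pass 1: record the nearest depth per pixel

glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Pass 2 (step 3) would bind depthTex as a sampler and discard any fragment
// whose depth is at or in front of the stored value.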
Edit:
It would require only a small amount of new code to create the offscreen framebuffer and render a depth-only version of the scene to it, and you could use your existing rendering pipeline in combination with that to execute steps 1 and 2.
However, I can't think of any way you could then re-render the scene to get the information you want in step 3 without a shader, because it needs both the standard depth test and a test against the provided depth texture. That doesn't mean there isn't one, just that I'm not well versed enough in GL tricks to think of it.
I can think of other ways of trying to accomplish the same task for specific points on the screen by fiddling with the rendering system, but they're all far more convoluted than just writing a shader.

What is stereoscopic shader?

These days I am making some shaders in GLSL, such as Phong, Gouraud, and even a toon shader.
I have a question out of curiosity: I want to make a stereoscopic shader that uses two cameras, where the left camera takes the red light and the right camera takes the cyan light, and the two are then combined into one image, so it becomes a stereoscopic (anaglyph) shader. At least, that is my thinking.
Am I thinking about this the wrong way? I want to apply it to a 3D object that consists of 2D primitives.
You'll probably need to render the scene twice, once for the left eye and once for the right eye. You can then blend the 2 together.
One way would be to render each eye into a different texture-backed FBO, and then combine those 2 textures into 1 either using a custom shader or even using additive blending, if you can render each eye with the correct colors to begin with. (If the left eye is truly only the red channel and the right is only the green and blue channels, an additive blend should do the right thing, I think.)
If you want to create an anaglyph image, using OpenGL lights to color the scene is the wrong approach.
Either use the method described in the other answer, i.e. using FBOs to render the scene into textures and then combining the results in a shader, or simply draw them as two overlaid quads in glBlendFunc(GL_ONE, GL_ONE) mode with modulated colors. Or, in the case of a red-cyan anaglyph, you can use glColorMask to select which color channels are written to.
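A hedged sketch of the glColorMask route for red-cyan (renderScene() and the two camera setters are assumed placeholder functions):

// Left eye: write only the red channel.
glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_TRUE);
setLeftEyeCamera();
renderScene();

// Clear depth so the right eye's geometry isn't rejected by the left eye's depths.
glClear(GL_DEPTH_BUFFER_BIT);

// Right eye: write only the green and blue channels (cyan).
glColorMask(GL_FALSE, GL_TRUE, GL_TRUE, GL_TRUE);
setRightEyeCamera();
renderScene();

// Restore the full color mask afterwards.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);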

'Render to Texture' and multipass rendering

I'm implementing an algorithm for pencil rendering. First, I render the model using Phong shading to determine the intensity. Then I map a texture onto the rendered result.
I'm going to do multipass rendering with OpenGL and Cg shaders. Someone told me that I should try 'render to texture'. But I don't know how to use this method to get the effect I want. As I understand it, we first use this method to render the mesh, which gives us a 2D texture of the whole scene. Now that we have drawn the content to that framebuffer, we should render to the screen next, right? But how do I use the rendered texture and do post-processing on it? Can anybody show me some code or links about it?
I made this tutorial, it might help you : http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
However, using RTT is overkill for what you're trying to do, I think. If you need the fragment's intensity in the texture, well, you already have it in your shader, so there is no need to render it twice...
Maybe this could be useful ? http://www.ozone3d.net/demos_projects/toon-snow.php
render to a texture with Phong shading
Draw that texture to the screen again in a full screen textured quad, applying a shader that does your desired operation.
I'll assume you need clarification on RTT and using it.
Essentially, your screen is a framebuffer (very similar to a texture); it's a 2D image at the end of the day. The idea of RTT is to capture that 2D image. To do this, the best way is to use a framebuffer object (FBO) (Google "framebuffer object" and click on the first link). From here, you have a 2D picture of your scene (you should verify this by saving it to an image file and checking that it actually contains what you expect).
Once you have the image, you'll set up a 2D view and draw that image back onto the screen with an 800x600 quadrilateral or what-have-you. When drawing, you use a fragment program (shader), which transforms the brightness of the image into a greyscale value. You can output this, or you can use it as an offset to another, "pencil" texture.
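A minimal sketch of that two-pass flow (it assumes an FBO sceneFbo with a color texture sceneTex already set up, a post-processing program postProgram, and a drawFullScreenQuad() helper; all of these names are placeholders):

// Pass 1: render the Phong-shaded scene into the texture.
glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
glViewport(0, 0, texWidth, texHeight);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderSceneWithPhong();

// Pass 2: draw a full-screen quad to the default framebuffer, sampling the
// first pass and applying the pencil/greyscale shader.
// (Assumes postProgram's sampler uniform is set to texture unit 0.)
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, windowWidth, windowHeight);
glUseProgram(postProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, sceneTex);
drawFullScreenQuad();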

Rendering 3D Models With Textures That Have Alpha In OpenGL

So I'm trying to figure out the best way to render a 3D model in OpenGL when some of the textures applied to it have alpha channels.
When I have the depth buffer enabled and start drawing all the triangles in a 3D model, if it draws a triangle that is in front of another triangle in the model, it will simply not render the back triangle when it gets to it. The problem is that when the front triangle has alpha transparency, the triangle behind it should be visible through it, but the triangle behind is still not rendered.
Disabling the depth buffer eliminates that problem, but creates the obvious issue that if the triangle IS opaque, then it will still render triangles behind it on top if rendered after.
For example, I am trying to render a pine tree that is basically some cones stacked on top of each other that have a transparent base. The following picture shows the problem that arises when the depth buffer is enabled:
You can see how you can still see the outline of the transparent triangles.
The next picture shows what it looks like when the depth buffer is disabled.
Here you can see how some of the triangles on the back of the tree are being rendered in front of the rest of the tree.
Any ideas how to address this issue, and render the pine tree properly?
P.S. I am using shaders to render everything.
If you're not using any partial transparency (every alpha value is either 0 or 255), you can glEnable(GL_ALPHA_TEST) and that should help you. The problem is that if you render the top cone first, it deposits the whole quad into the z-buffer (even the transparent parts), so the lower branches underneath get z-rejected when it's their turn to be drawn. With alpha testing enabled, pixels that fail the alpha test (set with glAlphaFunc) are not written to the z-buffer.
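In fixed-function terms that would look roughly like this (a sketch; the 0.5 threshold is an arbitrary choice):

glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);   // discard fragments whose alpha is 0.5 or lower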
If you want to use partial transparency, you'll need to sort the order of rendering objects from back to front, or bottom to top in your case.
You'll need to leave z-buffer enabled as well.
[edit] Whoops, I don't believe those functions work when you're using shaders. In the shader case you want to use discard in the fragment shader if the alpha value is close to zero.
if(color.a < 0.01) {
discard;
} else {
outcolor = color;
}
You need to implement a two-pass algorithm.
The first pass renders only the back faces, while the second pass renders only the front faces.
In this way you don't need to order the triangles, but some artifacts may occur depending on whether your geometry is convex or not.
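A hedged sketch of that two-pass idea using face culling (drawTree() is a placeholder for drawing the model):

glEnable(GL_CULL_FACE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// Pass 1: cull front faces so only the back faces are drawn.
glCullFace(GL_FRONT);
drawTree();

// Pass 2: cull back faces so only the front faces are drawn on top.
glCullFace(GL_BACK);
drawTree();

glDisable(GL_CULL_FACE);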
I may be wrong, but this is because, with Direct3D's default settings, the back sides of triangles are not rendered when you render in 3D. With the Z-buffer off it draws them in order; with the Z-buffer on it doesn't draw the back sides of the triangles anymore.
It is possible to show both sides of the triangles, even with Z enabled; however, I suspect there is a reason it's normally enabled, such as speed.
Device->SetRenderState(D3DRS_CULLMODE, Value);
where Value can be one of:
D3DCULL_NONE - Shows both sides of triangle
D3DCULL_CW - Culls Front side of triangle
D3DCULL_CCW - Default state
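Since the question is about OpenGL, the rough equivalents would be something like this (a sketch, not part of the answer above):

// Show both sides of every triangle (no culling).
glDisable(GL_CULL_FACE);

// Or enable culling and choose which side gets discarded.
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);    // cull back faces (the usual choice)
glFrontFace(GL_CCW);    // counter-clockwise winding is front-facing by default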

OpenGL: glLogicOp() color filling trick with different coloring?

I am currently using glLogicOp() with a cube, which I render twice: once with glFrontFace(GL_CW) and then with glFrontFace(GL_CCW). This allows me to see which area of the other 3D object my cube is overlapping with.
But I want to change the negated color to something else, let's say a 50% transparent blue.
How can this be done? Sorry about the title; I don't know the name of this method.
--
Also, I am having a problem when my camera is inside the cube: I need to fill the whole screen with the negated coloring. Is there any other way than switching to 2D mode and drawing a quad with glLogicOp() enabled? There is also a chance of seeing buggy rendering when I am right at the edge of the cube surface; any ideas for preventing this reliably?
You should look into the "Carmack's reverse" algorithm and stencil shadow algorithms in general, as your problem is closely related to them (your cube being a shadow-volume-like object). You will not get away with glLogicOp() if you want colors other than black and white.
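A hedged sketch of how a stencil-volume approach could tint the overlap region with 50% transparent blue (drawCube(), renderScene(), and drawFullScreenQuad() are placeholders; this only illustrates the technique the answer points at, not a drop-in solution):

// Pass 1: render the scene normally so the depth buffer is filled.
glEnable(GL_DEPTH_TEST);
renderScene();

// Pass 2: count volume entries/exits in the stencil buffer using the
// depth-fail ("Carmack's reverse") variant, which also works when the
// camera is inside the cube.
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glEnable(GL_CULL_FACE);

glStencilFunc(GL_ALWAYS, 0, 0xFF);
glCullFace(GL_FRONT);                      // draw back faces only
glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);    // increment where the depth test fails
drawCube();
glCullFace(GL_BACK);                       // draw front faces only
glStencilOp(GL_KEEP, GL_DECR, GL_KEEP);    // decrement where the depth test fails
drawCube();

// Pass 3: tint every pixel marked by the stencil with 50% transparent blue.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDisable(GL_DEPTH_TEST);
glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(0.0f, 0.0f, 1.0f, 0.5f);
drawFullScreenQuad();

// Restore state.
glDisable(GL_BLEND);
glDisable(GL_STENCIL_TEST);
glDisable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);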