I am trying to create the Light Scattering effect using OpenGL.
I am following this tutorial.
At some point it says:
Switch to Orthogonal projection and blend the FBO with the framebuffer, activating the shader in order to generate the "God's ray" effect.
I don't understand what "blend the FBO with the framebuffer" means. I looked up "blending" and noticed that it's an OpenGL pipeline stage.
I was thinking that I should use the function glEnablei(GL_BLEND, fbo), but I don't know where I should call it.
To draw the mesh (the scene has one mesh and one light source) I use glDrawArrays(GL_TRIANGLES, 0, n_of_verteces).
Can someone help me?
Ugh, the instructions you cited are unnecessarily confusing. What they ask you to do is take the texture you've rendered your god rays to (using the FBO) and draw it on top of what's in the main (non-FBO) framebuffer, using a single, full-viewport textured quad. The instruction to "blend it" is asking you to enable blending (everything drawn thereafter will blend with what's been drawn in the steps before, including other blended stuff) and choose an appropriate blending function; (GL_ONE, GL_ONE) would be the obvious one for lighting effects.
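A minimal sketch of that final pass, assuming an orthographic projection covering the viewport is already set up, the god-ray shader is bound while drawing, and the texture name godRayTex is a placeholder for whatever your FBO rendered into:

// Composite the FBO's color texture over the main framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);     // back to the default framebuffer
glDisable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, godRayTex);  // placeholder: the texture the FBO rendered into
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);              // additive: the light accumulates over the scene
glBegin(GL_QUADS);                        // full-viewport quad in ortho coordinates (0..1 here)
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(1, 0);
glTexCoord2f(1, 1); glVertex2f(1, 1);
glTexCoord2f(0, 1); glVertex2f(0, 1);
glEnd();
glDisable(GL_BLEND);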
I'm trying to render a model in OpenGL. I'm on Day 4 of C++ and OpenGL (Yes, I have learned this quickly) and I'm at a bit of a stop with textures.
I'm having a bit of trouble making my texture's alpha work. In this image, I have a character from Spiral Knights. As you can see on the top of his head, there are those white portions.
I've got Blending enabled and my blend function set to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
What I'm assuming here, and this is why I ask this question, is that the texture transparency is working, but the triangles behind the texture are still showing.
How do I make those triangles invisible but still show my texture?
Thanks.
There are two important things to be done when using blending:
1. You must sort primitives back to front and render them in that order (order-independent transparency in depth-buffer-based renderers is still an ongoing research topic).
2. When using textures to control the alpha channel, you must either write a shader that passes the texture's alpha values down to the resulting fragment color, or, if you're using the fixed-function pipeline, use the GL_MODULATE texture env mode, or GL_DECAL with the primitive color alpha value set to 0, or GL_REPLACE.
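For the fixed-function route, the relevant state for point 2 looks roughly like this (a sketch, not a complete setup):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// Let the texture's alpha modulate the primitive's color and alpha:
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);  // full primitive alpha; the texture's alpha decides
// ...then draw the transparent primitives, sorted back to front (point 1).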
I'm making a 2D game using OpenGL. I recently tried implementing Framebuffer-objects, and I am having some problems regarding blending.
I'm creating an FBO (using GL_RGBA as format).
When I render to the FBO, I first clear it to fully transparent black, and disable GL_BLEND. I then draw my textures and then I enable GL_BLEND again.
When I'm drawing the FBO texture, I use GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA as the source and destination factors of the blending function, respectively. I am rendering it as a textured quad.
This does not work properly, as white pixels appear transparent. I have tried experimenting with different blend-function values, but all that I have tried have had issues. I do not fully understand how blending works, so it's hard for me to wrap my head around this. Maybe I'm missing something obvious here?
Here's an image of how it looks right now. There is supposed to be a glow around the button when it is being highlighted, but instead the pixels around it appear transparent: http://i.snag.gy/RnV4s.jpg
You can also see two boxes of text in the image. The top one is drawn normally, without an FBO. The textures are also rendered normally without an FBO, so I know that the problem lies within my framebuffer-code somewhere.
I have pasted my "RenderTarget" class to pastebin (I'm used to calling it a rendertarget instead of FBO): http://pastebin.com/dBXgjrUX
This is how I use it:
RT->Begin();
// draw stuff
RT->End();
RT->Draw();
Can someone help me? Let me know if you need any more info about my issue.
Edit:
Here are the properties of OpenGL that I set on startup:
// Initialize shaders
shaderManager.InitializeStockShaders();
// Set some OpenGL properties
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glShadeModel(GL_SMOOTH);
glAlphaFunc(GL_GREATER, 0.0f);
// Enables/disables
glEnable(GL_ALPHA_TEST);
glEnable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glDisable(GL_DITHER);
glDisable(GL_LIGHTING);
It's a bit difficult to tell what your problem is exactly, because you didn't provide source code. Alas, I see several potential troublemakers:
First, you said that you want to draw a glow around the button. I presume that all the buttons are drawn into the FBO, merging them into a UI overlay. A glow sounds like something you want to blend, so you probably also want blending enabled while drawing to the FBO.
Next, be aware of depth buffer issues. Blending and depth buffering have peculiar interactions. In your case I suggest disabling depth testing and depth writes while rendering to the FBO (or not attaching a depth buffer to the FBO at all). Draw the glowing button last, so that it won't block the other buttons from being drawn. Also, make sure that your glow comes out with a nonzero alpha value, otherwise it will blend as fully transparent; this is something you control in your shaders or texture environment (depending on what you use). A sketch of that state follows.
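Using the RT->Begin()/RT->End() calls from your posted usage, the suggested state while the FBO is bound would look roughly like this:

RT->Begin();                      // the FBO is now the render target
glDisable(GL_DEPTH_TEST);         // no depth testing inside the UI overlay
glDepthMask(GL_FALSE);            // and no depth writes either
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// (glBlendFuncSeparate can be used instead if the destination alpha comes out wrong)
// ... draw the buttons, the glowing one last ...
glDepthMask(GL_TRUE);
RT->End();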
Update 1:
Your FBO class doesn't properly ensure that textures attached to a bound framebuffer are not themselves bound as textures. It's easy to fix, though, by moving the attachment code into bind, where the textures are also unbound appropriately. See my edited pastebin http://pastebin.com/1uVT7VkR (I probably missed a few things).
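The principle, sketched with hypothetical member names (fboId and colorTexId are placeholders; your actual class layout differs):

void RenderTarget::Begin()
{
    // Never sample from a texture while it is attached to the bound FBO:
    glBindTexture(GL_TEXTURE_2D, 0);            // unbind the color texture first
    glBindFramebuffer(GL_FRAMEBUFFER, fboId);   // hypothetical member holding the FBO name
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTexId, 0);  // attach while it's not bound as a texture
}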
I'm implementing an algorithm about pencil rendering. First, I should render the model using Phong shading to determine the intensity. Then I should map the texture to the rendered result.
I'm going to do multipass rendering with OpenGL and Cg shaders. Someone told me that I should try 'render to texture', but I don't know how to use this method to get the effect I want. In my understanding, we should first use this method to render the mesh, which gives us a 2D texture of the whole scene. Now that we have drawn content to the framebuffer, we should render to the screen next, right? But how do I use the rendered texture and do some post-processing on it? Can anybody show me some code or links about it?
I made this tutorial, it might help you : http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
However, using RTT is overkill for what you're trying to do, I think. If you need the fragment's intensity in the texture, well, you already have it in your shader, so there is no need to render it twice...
Maybe this could be useful ? http://www.ozone3d.net/demos_projects/toon-snow.php
1. Render to a texture with Phong shading.
2. Draw that texture to the screen again on a full-screen textured quad, applying a shader that does your desired operation.
I'll assume you need clarification on RTT and using it.
Essentially, your screen is a framebuffer (very similar to a texture); it's a 2D image at the end of the day. The idea of RTT is to capture that 2D image. To do this, the best way is to use a framebuffer object (FBO) (Google "framebuffer object" and click on the first link). From here, you have a 2D picture of your scene (you should check that it actually is what you want by saving it to an image file).
Once you have the image, you'll set up a 2D view and draw that image back onto the screen with an 800x600 quadrilateral or what-have-you. When drawing, you use a fragment program (shader) that transforms the brightness of the image into a greyscale value. You can output this directly, or you can use it as an index into another, "pencil" texture.
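A minimal FBO setup for the two passes might look like this (the 800x600 size and drawSceneWithPhong() are placeholders; error checking omitted):

GLuint fbo, sceneTex;

// Color texture that will receive the Phong-shaded scene:
glGenTextures(1, &sceneTex);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 800, 600, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTex, 0);

// Pass 1: render the Phong-shaded scene into the texture.
glViewport(0, 0, 800, 600);
drawSceneWithPhong();             // placeholder for your existing scene rendering

// Pass 2: back to the screen; draw a full-screen quad sampling sceneTex,
// with your greyscale/pencil fragment program bound.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, sceneTex);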
I'm learning about how to use JOGL and OpenGL to render texture-mapped quads. I have a test program and a test quad, and I figured out how to enable GL_BLEND so that I can specify the alpha value of a vertex to make a quad with a sort of gradient... but now I want this to show through to another textured quad at the same position.
Drawing two quads with the same vertex locations didn't work; it only renders the first quad. Is this possible, then, or will I need to construct a custom texture on the fly based on what I want and then draw one quad with this texture? I was really hoping to take advantage of blending here...
Have a look at which glDepthFunc you're using; perhaps you're using GL_LESS/GL_GREATER, and it could work if you use GL_LEQUAL/GL_GEQUAL instead.
It's difficult to make out from the question what exactly you're trying to achieve, but here's a try.
For transparency to work correctly in OpenGL, you need to draw the polygons from the furthest to the nearest to the camera. If your scene is static, this is definitely something you can do. But if it's rotating and moving, then this is usually not feasible, since you'd have to sort the polygons for each and every frame (see the sketch after the FAQ link below).
More on this can be found in this FAQ page:
http://www.opengl.org/resources/faq/technical/transparency.htm
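If you do go the sorting route, a back-to-front sort can be as simple as this sketch (the Quad struct, its centerZ field, and drawQuad() are illustrative placeholders):

#include <algorithm>
#include <vector>

struct Quad { float centerZ; /* eye-space depth; plus vertices, texture, ... */ };
void drawQuad(const Quad&) { /* placeholder: issue the actual GL calls here */ }

void drawSorted(std::vector<Quad>& quads)
{
    // The camera looks down -z in eye space, so the most negative z is furthest away.
    std::sort(quads.begin(), quads.end(),
              [](const Quad& a, const Quad& b) { return a.centerZ < b.centerZ; });
    for (const Quad& q : quads)
        drawQuad(q);              // furthest first, nearest last
}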
For alpha blending, the renderer blends everything behind the current transparent object (from the camera's point of view) at the time the transparent object is rendered. If the transparent object is rendered first, there is nothing behind it to blend with; if it's rendered second, it will have something to blend with.
Try rendering your opaque quad first, then your transparent quad second. Plus, make sure your opaque quad is slightly behind your transparent quad (relative to the camera) so you don't get z-fighting.
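In code, that order might look like this (the two draw calls are placeholders):

// 1. Opaque geometry first, with depth writes on:
glDisable(GL_BLEND);
drawOpaqueQuad();                 // placeholder

// 2. Transparent geometry second, blended against what is already there:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);            // don't let the transparent quad occlude later draws
drawTransparentQuad();            // placeholder
glDepthMask(GL_TRUE);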
I am trying to create a simple ray tracer. I have a perspective view which shows the rays visibly for debugging purposes.
In my example screenshot below I have a single white sphere to be raytraced and a green sphere representing the eye.
Rays are drawn as lines with
glLineWidth(10.0f)
If a ray misses the sphere it is given color glColor4ub(100,100,100,100);
in my initialization code I have the following:
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_SRC_ALPHA);
You can see in the screen shot that for some reason, the rays passing between the perspective view point and the sphere are being color blended with the axis line behind the sphere, rather than with the sphere itself.
Here is a screenshot:
Can anyone explain what I am doing wrong here?
Thanks!!
Is it possible that you draw those rays before you draw the sphere?
Then, if the Z-buffer is enabled, the sphere's fragments simply won't be rendered, because those parts of the rays are closer. When you are drawing something semi-transparent (using blending), you have to watch the order you draw things in carefully.
In fact, I think you cannot use the Z-buffer in any sensible way together with the ray-tracing process; you'll have to track Z-order manually. While we are at it, OpenGL might not be the best API to visualize a ray-tracing process (it will likely be much slower than a pure software ray tracer).
You don't need the glAlphaFunc; disable it.
Light rays should be blended by adding to the buffer: glBlendFunc(GL_ONE, GL_ONE) (for premultiplied alpha, which you chose).
Turn off depth buffer writing (not testing) when rendering the rays: glDepthMask(GL_FALSE).
Render the rays last. A sketch of this state setup follows.
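Putting those four points together, the ray pass might look like this against your posted init code (drawRays() is a placeholder):

glDisable(GL_ALPHA_TEST);         // alpha test only discards fragments; not wanted here
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);      // additive: overlapping rays brighten each other
glDepthMask(GL_FALSE);            // rays still depth-test against the spheres, but don't write depth
glLineWidth(10.0f);
drawRays();                       // placeholder: draw the rays last, after all solid geometry
glDepthMask(GL_TRUE);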
AlphaTest is only for discarding fragments, not for blending them. Check the spec.
By using it, you are telling OpenGL to throw those pixels away instead of drawing them, so you won't get any transparent blending. The most common blending function is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); you can also check out the OpenGL Transparency FAQ.