I am developing an engine, and the way I am handling boundaries the player is not supposed to reach is to have actual polygons as these boundaries. Now, I am wondering how to "render" the polygon but have it be invisible.
My main question is: does OpenGL have a way to do this natively?
If not, what if I were to create a texture the way I usually load in textures, but have this texture simply be a single pixel? I could set the alpha channel of that pixel's color and then use an alpha mask as I normally do with masked textures.
Any advice?
My main question is: does OpenGL have a way to do this natively?
No!
OpenGL only draws nicely colored triangles, lines and points to a framebuffer, and that's it.
It is not a scene graph.
It is not a geometry library.
It is not a collision detection framework.
The question comments solved the problem you were having, but there's a way to do what you actually asked, so I'm putting it here anyway.
The question as I understood it is about rendering something in the depth buffer (and maybe stencil) but nowhere else. To achieve this, you simply have to use glColorMask like so before drawing the transparent polygons:
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
and restore the color mask afterwards.
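For completeness, a minimal sketch of the whole sequence (drawBoundaryPolygons() is a hypothetical helper that issues your boundary geometry):

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // disable all color writes
drawBoundaryPolygons();                              // geometry lands in the depth (and stencil) buffer only
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);     // restore color writes for normal rendering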
If you want to render the boundaries as transparent polygons, you could use the blending method given here: http://www.opengl.org/resources/faq/technical/transparency.htm
Is that what you were looking for?
I'm trying to render a model in OpenGL. I'm on Day 4 of C++ and OpenGL (Yes, I have learned this quickly) and I'm at a bit of a stop with textures.
I'm having a bit of trouble making my texture alpha work. In this image, I have this character from Spiral Knights. As you can see on the top of his head, there are those white portions.
I've got Blending enabled and my blend function set to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
What I'm assuming here, and this is why I ask this question, is that the texture transparency is working, but the triangles behind the texture are still showing.
How do I make those triangles invisible but still show my texture?
Thanks.
There are two important things to be done when using blending:
You must sort primitives back to front and render in that order (order independent transparency in depth buffer based renderers is still an ongoing research topic).
When using textures to control the alpha channel, you must either write a shader that passes the texture's alpha values down to the resulting fragment color, or – if you're using the fixed-function pipeline – use the GL_MODULATE texture env mode, or GL_DECAL with the primitive color alpha value set to 0, or GL_REPLACE.
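For the fixed-function route, a minimal sketch (assumes the texture is already bound and the geometry is drawn right after):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); // fragment alpha = texture alpha * primitive alpha
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);                           // primitive alpha 1.0, so the texture alpha passes through
// ... draw the textured triangles, sorted back to front ...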
The last few days I was reading a lot of articles about post-processing with bloom etc., and I was able to implement render-to-texture functionality with this texture running through a separate shader.
Now I have some questions regarding the whole thing.
Do I have to render both the scene and the texture put on a full-screen quad?
How does bloom, or any other post-processing (DOF, blur), work with this render-to-texture functionality? Or is this something completely different?
I don't really understand the concept of the back and front buffer and how to make use of this for post-processing.
I have read something about volumetric light rendering where they render the scene something like six times with different color settings. Isn't this quite inefficient? Or was my understanding there just incorrect?
Thanks to anyone who cares to explain these things to me ;)
Let me try to answer some of your questions:
Yes, you have to render both: first render the scene into the offscreen texture, then render that texture onto a full-screen quad so the post-processing shader can run over every pixel.
DOF is typically implemented by rendering a "blurriness" factor into an offscreen buffer, where a post-processing filter then uses this factor to blur certain pixels more than others (with some compensation for color-leaking between sharp and blurred objects). So yes, the basic idea is the same, render to a buffer, process it and then display it (with or without blending it on top of the original scene).
The back buffer is what you render stuff to (what the user will see on the next frame). All offscreen rendering is done to other rendertargets that you will create and use.
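To make that flow concrete, here is a rough sketch of the two passes (FBO-style calls; fbo and sceneTex are assumed to be created already, and drawScene()/drawFullscreenQuad() are hypothetical helpers):

glBindFramebuffer(GL_FRAMEBUFFER, fbo);                 // pass 1: render target is the offscreen texture
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, sceneTex, 0);
drawScene();                                            // the scene ends up in sceneTex
glBindFramebuffer(GL_FRAMEBUFFER, 0);                   // pass 2: back to the default back buffer
glBindTexture(GL_TEXTURE_2D, sceneTex);
drawFullscreenQuad();                                   // post-processing shader samples sceneTex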
I don't quite understand what you mean. Please provide a link to what you read so I can try to understand and perhaps explain it.
Suppose that:
you have the "luminance" for each rendered pixel in a single texture
this texture holds floating-point values that can be greater than 1.0
Now:
You do a blur pass (possibly a separable blur), considering only pixels with a value greater than 1.0, and put the blur result in another texture.
Finally:
In a last shader you do the final presentation to screen. You sample from both the "luminance" (clamped to 1.0) and the "blurred excess luminance" and add them, obtaining the so-called bloom effect.
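Written out in plain C++ for clarity (the real version would live in that final fragment shader), the per-pixel math of the last pass is just:

float bloomCombine(float luminance, float blurredExcess)
{
    float base = luminance < 1.0f ? luminance : 1.0f; // clamp to the displayable range
    return base + blurredExcess;                      // add the blurred overflow: the bloom
}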
Following problem: I have two textures, and I want to combine these two into a new texture. Thus, one texture is used as the background, the other will be overlaid. The overlay texture is initialized with glClearColor(1.0, 1.0, 1.0, 0.0). Objects are drawn onto the texture; these objects have alpha values.
Now blending between the two textures leaves a white border around the objects. The border comes from the fact that the background color in the second texture is white, doesn't it?
How can I use alpha blending where I do not have to think about the background color of the overlaying texture?
I solved the problem myself, but thanks a lot to all of you guys!
The problem was the following: to combine both textures I used glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), which does not work here because the colors in the overlay texture are effectively already pre-multiplied by alpha. Blending with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) works, as the source term now becomes:
1 * (src_alpha * src_color)
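In code, the working setup looks roughly like this (drawOverlayObjects() and drawOverlayQuad() are hypothetical helpers; note the overlay is cleared to transparent black, not white):

// 1) build the overlay texture
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);              // transparent black background
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // stored colors end up multiplied by alpha
drawOverlayObjects();
// 2) composite the overlay onto the background
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);       // source already contains src_alpha * src_color
drawOverlayQuad();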
How can I use alpha blending where I do not have to think about the background color of the overlaying texture?
You can't; your blend function incorporates the background color into it, because it may not actually be the "background". You render multiple objects to the texture, so the "background" color may in fact be a previously rendered object.
Your best bet is to minimize the impact. There's no particular need for the background color to be white. Just make it black. This won't make the artifacts go away; it will hopefully just make it less noticeable.
The simple fact is that blending in graphics cards simply isn't designed to be able to do the kinds of compositing you're doing. It works best when what you're blending with is opaque. Even if there are layers of transparency between the opaque surface and what you're rendering, it still works.
But if the background is actually transparent, with no fully opaque color, the math simply stops working. You will get artifacts; the question is how noticeable they will be.
If you have access to more advanced hardware, you could use some shader-based programmatic blending techniques. But these will have a performance impact.
I think you'll probably get better results with a black background, as Nicol Bolas pointed out, but you should double-check your blending functions because, as you point out, it SHOULD not matter:
1.0 * 0.0 + 0.734 * 1.0 = 0.734 (a white source with alpha 0.0 leaves the 0.734 destination untouched)
What I don't really get is why your base texture is fully transparent. Is that intended? Unless you blend the textures and then use them somewhere else, initializing to alpha = 1.0 is a better idea.
Make sure you disable depth writing before you draw the transparent texture (so one transparent texture can't "block" another, preventing part of it from being drawn). To do so, just call glDepthMask(GL_FALSE). Once you are done drawing transparent objects, call glDepthMask(GL_TRUE) to set depth writing back to normal.
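A typical frame then looks like this (the draw helpers are hypothetical):

drawOpaqueObjects();       // normal rendering, depth writes on
glDepthMask(GL_FALSE);     // keep depth testing, stop depth writing
drawTransparentObjects();  // sorted back to front
glDepthMask(GL_TRUE);      // restore depth writes for the next frame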
Two questions -
What is the best way to use a texture in OpenGL to fill the entire window?
I want to use glTexImage2D to take in an array of ints containing colour data; how would I go about doing this? (I've found a couple of pages of reference on glTexImage2D, but a tutorial on using it would be great.)
Clarification:
I have done texturing before. I simply need help on these two specific parts.
glTexImage2D just uploads texture data, nothing more. Once you have your texture, draw a texture-mapped quad the size of the screen and you will draw your texture's pixels to the screen.
An orthographic projection is usually used for this.
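Putting both parts together, a rough legacy-GL sketch (width, height and pixels, an array of 32-bit RGBA values, are assumed to exist already):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps, so use a non-mipmap filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);                  // upload the int array as RGBA bytes

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 1, 0, 1, -1, 1);        // orthographic projection over the unit square
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);                 // one quad covering the whole window
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(1, 0);
glTexCoord2f(1, 1); glVertex2f(1, 1);
glTexCoord2f(0, 1); glVertex2f(0, 1);
glEnd();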
NeHe provides tutorials for almost any OpenGL topic.
The first lesson on using textures is #6.
Also, you could just upload the pixels with glDrawPixels if you don't need to update too often.
There is a nice example from Nehe on how to use textures here:
I'm learning about how to use JOGL and OpenGL to render texture-mapped quads. I have a test program and a test quad, and I figured out how to enable GL_BLEND so that I can specify the alpha value of a vertex to make a quad with a sort of gradient... but now I want this to show through to another textured quad at the same position.
Drawing two quads with the same vertex locations didn't work; it only renders the first quad. Is this possible, then, or will I need to construct a custom texture on the fly based on what I want and then draw one quad with this texture? I was really hoping to take advantage of blending in this case...
Have a look at which glDepthFunc you're using; perhaps you're using GL_LESS/GL_GREATER, and it could work if you use GL_LEQUAL/GL_GEQUAL.
It's difficult to make out from the question what exactly you're trying to achieve, but here's a try.
For transparency to work correctly in OpenGL you need to draw the polygons from the furthest to the nearest to the camera. If your scene is static, this is definitely something you can do. But if it's rotating and moving, then this is usually not feasible, since you'd have to sort the polygons for each and every frame (see the sketch after the link below).
More on this can be found in this FAQ page:
http://www.opengl.org/resources/faq/technical/transparency.htm
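A minimal sketch of that per-frame sort (Obj, objects, camPos and the helpers are all hypothetical):

#include <algorithm>
// ...
std::sort(objects.begin(), objects.end(),
          [&](const Obj& a, const Obj& b) {
              // farthest first, so nearer transparent polygons blend over farther ones
              return distanceTo(a, camPos) > distanceTo(b, camPos);
          });
for (const Obj& o : objects)
    drawObject(o);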
For alpha blending, the renderer blends all colors behind the current transparent object (from the camera's point of view) at the time the transparent object is rendered. If the transparent object is rendered first, there is nothing behind it to blend with. If it's rendered second, it will have something to blend it with.
Try rendering your opaque quad first, then render your transparent quad second. Also, make sure your opaque quad is slightly behind your transparent quad (relative to the camera) so you don't get z-fighting.
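For the two-quad case from the question, the order alone looks like this (the draw helpers are hypothetical):

drawOpaqueQuad();                                   // first, so the blend has something behind it
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawTransparentQuad();                              // second, slightly nearer the camera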