OpenGL: Transparent texture issue

I'm having trouble with texture transparency in OpenGL. As you can see in the picture below, it doesn't quite work. It's worth noting that the black is actually the clear color I use to clear the screen.
I use the following code to implement blending:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Here's my fragment shader:
#version 330 core
in vec2 tex_coords;
out vec4 color;
uniform vec4 spritecolor;
uniform sampler2D image;
void main(void)
{
    color = spritecolor * texture(image, tex_coords);
}
Here is a screenshot of the scene in wireframe mode, in case it helps with the drawn vertices:
If anything else is needed, feel free to ask, I'll add it.

You have to do transparency sorting.
When a scene is drawn, the depth test (glDepthFunc) is usually set to GL_LESS. This causes a fragment to be drawn only when it is in front of everything drawn so far.
To draw transparent objects correctly, you have to draw the opaque objects first. The transparent objects have to be drawn afterwards, sorted by decreasing distance to the camera position.
Draw the transparent object with the greatest distance to the camera position first, and the transparent object with the least distance last.
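A minimal sketch of such a sort in C, assuming a hypothetical Object struct that stores a precomputed distance from the camera to the object's center (the draw call itself is left as a placeholder):
#include <stdlib.h>

typedef struct {
    float camera_distance; /* distance from the camera to the object's center */
    /* ... mesh handle, transform, ... */
} Object;

/* qsort comparator: larger distance first (back to front) */
static int farthest_first(const void *a, const void *b)
{
    float da = ((const Object *)a)->camera_distance;
    float db = ((const Object *)b)->camera_distance;
    return (da < db) - (da > db);
}

/* Each frame: draw the opaque objects, then the sorted transparent ones. */
void draw_transparent_objects(Object *objects, size_t count)
{
    qsort(objects, count, sizeof(Object), farthest_first);
    for (size_t i = 0; i < count; ++i) {
        /* draw_object(&objects[i]); */
    }
}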
See also the answers to the following questions:
OpenGL depth sorting
opengl z-sorting transparency
Fully transparent OpenGL model

Related

libgdx heightmap shader distortion artefact

I am currently making a game and I want to add a nice shader effect like water distortion.
I am rendering the scene to a FBO then apply a heightmap distortion shader on it.
The distortion is applied by the fragment shader.
normalMapPosition is the color vector at the current position of the normal map.
vec2 normalCoord = v_texCoord0;
vec4 normalMapPosition = 2.0 * texture2D(u_normals, v_texCoord0);
vec2 distortedCoord = normalCoord + (normalMapPosition.xz * 0.05);
I then render it to the screen and obtain the following result:
The problem is that there is a diagonal artefact traversing the whole image.
I think this is due to OpenGL treating the texture quad as two triangles.
Is there a nice way to handle this kind of issue?
I finally found the solution. The problem came from using the same FBO to draw the scene and then to render the shader, like:
batch.setShader(waterfallShaderProgram);
fbo.begin();
batch.begin();
// here is the problem: fbo is bound as the render target
// and sampled as a texture at the same time
batch.draw(fbo.getColorBufferTexture());
batch.end();
fbo.end();
The scene is rendered to the FBO fbo in order to apply other effects on top.
Introducing a new FBO fbo2 solved the issue.
batch.setShader(waterfallShaderProgram);
fbo2.begin();
batch.begin();
batch.draw(fbo.getColorBufferTexture());
batch.end();
fbo2.end();
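Sampling a texture that is attached to the currently bound framebuffer creates a feedback loop, for which OpenGL leaves the results undefined; rendering into a second FBO breaks that loop.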

Fully transparent torus in OpenGL

I have a torus rendered in OpenGL and can map a texture onto it; there is no problem as long as the texture is opaque. But it doesn't work when I make the color selectively transparent in the fragment shader. Or rather: it works, but only in some areas depending on the order of the triangles in the vertex buffer; see the difference along the outer equator.
The torus should be evenly covered by spots.
The source image will be a PNG; however, for now I work with a BMP, as it is easier to load (the texture loading function is part of a tutorial).
The image has white background and spots of different colors on top of it; it is not transparent.
The desired result is nearly transparent torus with spots. Both spots in the front and the back side must be visible.
The rendering will be done offline, so I don't require speed; I just need to generate image of torus from an image of its surface.
So far my code looks like this (it is a merge of two examples):
https://gist.github.com/juriad/ba66f7184d12c5a29e84
Fragment shader is:
#version 330 core
// Interpolated values from the vertex shaders
in vec2 UV;
// Ouput data
out vec4 color;
// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;
void main(){
    // Output color = color of the texture at the specified UV
    color.rgb = texture( myTextureSampler, UV ).rgb;
    if (color.r == 1.0 && color.g == 1.0 && color.b == 1.0) {
        color.a = 0.2;
    } else {
        color.a = 1.0;
    }
}
I know that the issue is related to order.
What I could do (but don't know what will work):
Add transparency to the input image (and find a code which loads such image).
Do something in vertex shader (see Fully transparent OpenGL model).
Sorting would solve my problem, but if I understand it correctly, I have to implement it myself. I would have to find the center of each triangle (easy), project it with my matrix, and compare z values.
Somehow change the blending and depth handling:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0f, 1.0f);
I need advice on how to continue.
Update, this nearly fixes the issue:
glDisable(GL_DEPTH_TEST);
//glDepthMask(GL_TRUE);
glDepthFunc(GL_NEVER);
//glDepthRange(0.0f, 1.0f);
I wanted to write that it doesn't distinguish spots in the front from those in the back, but then I realized they are nearly white, and blending with white doesn't make a difference.
The new image of torus with colorized texture is:
The remaining problems are:
the red spots are blue - this is related to the BMP loading function (doesn't matter)
as all the spots in the input image are the same size, the bigger spots (by appearance) should be on top and therefore saturated, not blended with the white body of the torus. It seems the order is the opposite of what it should be. If you compare this with the previous image, there the big spots were drawn correctly and the small ones (on the back side of the torus) were hidden.
How to fix the latter one?
The first problem was solved by disabling the depth test (see the update in the question).
The second problem was solved by manually sorting the array of all triangles. It works well even in real time for 20,000 triangles, which is more than sufficient for my purpose.
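For illustration, the per-triangle measure such a sort needs looks roughly like this (a hypothetical helper, not the code from the gist below): take the centroid of each triangle, measure its squared distance to the camera, sort the triangle array by that value largest first, and re-upload the index buffer whenever the view changes.
/* Squared distance from the camera to a triangle's centroid.
   v0, v1, v2 are world-space positions; eye is the camera position. */
float centroid_distance_sq(const float v0[3], const float v1[3],
                           const float v2[3], const float eye[3])
{
    float d, sum = 0.0f;
    for (int i = 0; i < 3; ++i) {
        d = (v0[i] + v1[i] + v2[i]) / 3.0f - eye[i];
        sum += d * d;
    }
    return sum; /* squared distance is enough for ordering */
}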
The resulting source code: https://gist.github.com/juriad/80b522c856dbd00d529c
It is based on and uses includes from OpenGL tutorial: http://www.opengl-tutorial.org/download/.

OpenGL Alpha Blending Issue, Blending ignored (maybe)?

EDIT + BETTER SOLUTION:
In case anyone happens to run into the problem I was running into, there are two solutions. One is the accepted solution, but it only applies if you are doing things the way I was. Let me explain what I was doing:
1.) Render star background to screen
2.) Render ships, then particles to the FBO
3.) Render FBO to screen
This problem, and therefore the solution to this problem, occurred in the first place because I was blending the FBO with the star background.
The real solution, which is supposedly also slightly faster, is to simply render the star background to the FBO, then render the FBO to the screen with blending disabled. Using this method, I do not need to mess with glBlendFuncSeparate...
1.) Render stars, then ships, then particles to FBO
2.) Render FBO to screen with blending disabled
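In code, the revised frame looks roughly like this (sketched with placeholder helpers, not my actual function names):
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
draw_stars();
draw_ships();
draw_particles();

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDisable(GL_BLEND); /* composite the FBO without blending */
draw_fullscreen_quad(fbo_texture);
glEnable(GL_BLEND);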
----------ORIGINAL QUESTION:----------
From what I understand of the issue, blending is being ignored somehow. The particle texture with alpha transparency completely overwrites the pixels below it.
I am creating a top-down game. The camera is slightly angled so that there is some feeling of depth. I am rendering my ships, then rendering the particles above them...
After creating the OpenGL context:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glCullFace(GL_BACK);
In the render loop, I do this:
glBindFramebuffer(GL_FRAMEBUFFER,ook->fbo);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
entitymgr_render(ook); //Render entities. Always 1.0 alpha textures.
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
//glBlendFunc(GL_SRC_ALPHA,GL_ONE); //This would enable additive blending for non-premult
particlemgr_render(ook); //Render particles. Likely always <1.0 alpha textures
//glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glBindFramebuffer(GL_FRAMEBUFFER,0);
If I run with the above code, I get results like this...
Particle tex:
Screenshot from OGL Profiler (Mac tool):
Screenshot of the FBO without any particle rendered:
Screenshot of the FBO with some particles rendered on top:
As you can see, the particle, despite having alpha transparency, doesn't blend with the ship rendered below. Instead, it just completely overwrites the pixels.
Final note: setting pixel transparency in the shader blends correctly - the ship appears below. And here is my shader:
#version 150
uniform sampler2D s_tex1;
uniform float v4_color;
in vec4 vertex;
in vec3 normal;
in vec2 texcoord;
out vec4 frag_color;
void main()
{
    frag_color = texture(s_tex1, texcoord) * v4_color;
    if (frag_color.a == 0.0) discard;
}
Let me know if there is anything I can provide.
It looks to me like the alpha channel is being rendered into the framebuffer as well, so when you draw the particles, the source alpha channel gets mixed with the destination alpha channel, which is not what you want here.
This is exactly why the glBlendFuncSeparate() function was created. Try this...
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
So, the alpha channel of your particles will be used to determine the colours of the final pixels, but the alpha channels of the source and destination will be added together.
My guess is that the FBO's rgb channels are being rendered correctly, but because it also has an alpha channel, and it is being drawn with blending enabled, the end result has incorrect transparency where the particle overlaps the spaceship.
Either use glBlendFuncSeparate (described here) to use different blend factors for the alpha channel when you're drawing the particles:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
Or turn off blending altogether when you draw your FBO onto the screen.
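As a sketch, the two options look like this (the draw helpers are placeholders for your own code; you would pick one option):
/* Option 1: while drawing particles into the FBO, blend the color
   channels normally but add the alpha channels together. */
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
draw_particles();

/* Option 2: composite the FBO to the screen with blending disabled,
   so its alpha channel is ignored. */
glDisable(GL_BLEND);
draw_fullscreen_quad(fbo_texture);
glEnable(GL_BLEND);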
In order to obtain texture transparency, in addition to:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
you should also ensure that:
when creating the particle tex with glTexImage2D, use as format GL_RGBA (or GL_LUMINANCE_ALPHA if you are using gray shaded textures)
when drawing the particle texture, after the glBindTexture command, call
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_BLEND);
Instead of GL_BLEND, you could use the correct texture functions as described in
the glTexEnv reference: http://www.opengl.org/sdk/docs/man2/xhtml/glTexEnv.xml
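For the first point, a minimal upload sketch, assuming pixels holds 8-bit RGBA data:
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);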

GLSL passing texture coordinates from vertex shader

What I'm trying to accomplish: Drawing the depth map of my scene on top of my scene (so that objects closer are darker, and further away are lighter)
Problem: I don't seem to understand how to pass the right texture coordinates from my vertex shader to my fragment shader.
So I created my FBO, and the texture that the depth map gets drawn to... not that I'm entirely sure what I was doing, but whatever, it works. I tested drawing the texture using the fixed functionality pipeline, and it looks just like it's supposed to (the depth map that is).
But trying to use it in my shaders just isn't working...
Here's the part from my render method that binds the texture:
glActiveTexture(GL_TEXTURE7);
glBindTexture(GL_TEXTURE_2D, depthTextureId);
glUniform1i(depthMapUniform, 7);
glUseProgram(shaderProgram);
look(); //updates my viewing matrix
box.render(); //renders box VBO
So... I think that's sort of right? Maybe? No clue why texture 7, that was just something that was in a tutorial I was checking...
And here's the important stuff from my vertex shader:
out vec4 ShadowCoord;
void main() {
gl_Position = PMatrix * (VMatrix * MMatrix) * gl_Vertex; //projection, view and model matrices
ShadowCoord = gl_MultiTexCoord0; //something I kept seeing in examples, was hoping it would work.
}
Aaand, fragment shader:
uniform sampler2D ShadowMap;
in vec4 ShadowCoord;
in vec3 Color; //passed from vertex shader, didn't include the code for it though. Just the vertex color.
out vec4 FragColor;
void main()
{
    FragColor = vec4(texture2D(ShadowMap, ShadowCoord.st).x * Color, 1.0);
}
Now the problem is that the coordinate that the fragment shader receives for the texture is always (0,0), or the bottom-left corner. I tried changing it to ShadowCoord = gl_MultiTexCoord7, because I figured maybe it had something to do with me putting the texture in slot number 7... but alas, the problem persisted. When the color of (0, 0) changes, so does the color of the entire scene, rather than being a change in color for only the appropriate pixel/fragment.
And that's what I'm hoping to get some insight on... how to pass the correct coordinates (I'd like for the corners of the texture to be the same coordinates as the corners of my screen). And yes, this is a beginners question... but I have been looking in the Orange Book, and the problem with it is that it's great on the GLSL side of things, but the OpenGL side of things is severely lacking in the examples that I could really use...
The input variable gl_MultiTexCoord0 (or 7) is the builtin per-vertex texture coordinate for the 0th (or 7th) texture coordinate, set by gl(Multi)TexCoord (when using immediate mode) or by glTexCoordPointer (when using arrays/VBOs).
But as your depth buffer is already in screen space, what you want is not a usual texture laid onto the object, but just the value in the texture for a specific pixel/fragment. So the vertex shader isn't involved in any way. Instead you just use the current fragment's screen space position as texture coordinate, that can be read in the fragment shader using gl_FragCoord. But keep in mind that this coordinate is in [0,w]x[0,h] and textures are accessed by normalized texture coordinates in [0,1]. So you have to divide the fragment's coordinate by the screen size:
uniform vec2 screenSize;
...
... texture2D(ShadowMap, gl_FragCoord.st/screenSize) ...
But you actually don't need two passes for this effect anyway, as you can just use the fragment's depth directly, without writing it into a texture. Instead of
texture2D(ShadowMap, gl_FragCoord.st/screenSize).x
you can just use
gl_FragCoord.z
which is nothing else than the fragment's depth value, that would have been written into the texture in the first pass. This way you completely spare the first depth-writing pass and the texture access in the second pass.
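Putting that together, the whole effect reduces to a one-pass fragment shader along these lines (a sketch, written as the C source string you would pass to glShaderSource; Color is the per-vertex color from the question):
/* One-pass depth shading: no FBO, no depth texture. */
const char *frag_src =
    "#version 150\n"
    "in vec3 Color;\n"
    "out vec4 FragColor;\n"
    "void main() {\n"
    "    /* gl_FragCoord.z is this fragment's depth in [0,1]:\n"
    "       small (dark) near the camera, large (light) far away */\n"
    "    FragColor = vec4(gl_FragCoord.z * Color, 1.0);\n"
    "}\n";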

opengl z-sorting transparency

I'm rendering PNGs on simple squares in OpenGL ES 2.0, but when I try to draw something behind a square I have already drawn, the transparent areas of the top square are rendered the same color as the background.
I am calling these at the start of every render call.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable (GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Your title is essentially the answer to your question!
Generally transparency is done by first rendering all opaque objects in the scene (letting the z-buffer figure out what's visible), then rendering all transparent objects from back to front.
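A typical frame then looks something like this (a sketch with placeholder draw calls; disabling depth writes during the transparent pass is a common refinement):
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
draw_opaque_objects(); /* the depth buffer resolves visibility */

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE); /* test against opaque depth, but don't write */
draw_transparent_back_to_front();
glDepthMask(GL_TRUE);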
Drew Hall gave you a good answer, but another option is to set glEnable(GL_ALPHA_TEST) with glAlphaFunc(GL_GREATER, 0.1f). This will prevent transparent pixels (in this case, ones with alpha < 0.1) from being rendered at all. That way they do not write into the Z buffer and other things can "show through". However, this only works on fully transparent objects. It also has rough edges wherever the 0.1 alpha edge is, and this can look bad for distant features where the pixels are large compared to the object.
Figured it out. You can discard in the fragment shader:
mediump vec4 basecolor = texture2D(sTexture, TexCoord);
if (basecolor.a == 0.0) {
    discard;
}
gl_FragColor = basecolor;