Fully transparent torus in OpenGL - C++

I have a torus rendered in OpenGL and I can map a texture onto it; there is no problem as long as the texture is opaque. But it doesn't work when I make the color selectively transparent in the fragment shader. Or rather: it works, but only in some areas, depending on the order of the triangles in the vertex buffer; see the difference along the outer equator.
The torus should be evenly covered by spots.
The source image will be a PNG; however, for now I work with a BMP as it is easier to load (the texture-loading function is part of a tutorial).
The image has a white background with spots of different colors on top of it; the image itself is not transparent.
The desired result is a nearly transparent torus with spots. Spots on both the front and the back side must be visible.
The rendering will be done offline, so I don't require speed; I just need to generate an image of the torus from an image of its surface.
So far my code looks like this (it is a merge of two examples):
https://gist.github.com/juriad/ba66f7184d12c5a29e84
The fragment shader is:
#version 330 core

// Interpolated values from the vertex shader
in vec2 UV;

// Output data
out vec4 color;

// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;

void main() {
    // Output color = color of the texture at the specified UV
    color.rgb = texture( myTextureSampler, UV ).rgb;
    // Make the white background nearly transparent and keep the spots opaque
    // (this relies on the background texels being exactly white).
    if (color.r == 1.0 && color.g == 1.0 && color.b == 1.0) {
        color.a = 0.2;
    } else {
        color.a = 1.0;
    }
}
I know that the issue is related to the drawing order.
What I could do (but I don't know which of these will work):
Add transparency to the input image (and find code which loads such an image).
Do something in the vertex shader (see Fully transparent OpenGL model).
Sorting would solve my problem, but if I understand it correctly, I have to implement it myself. I would have to find the center of each triangle (easy), project it with my matrix and compare the z values.
Somehow change blending and depth handling:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0f, 1.0f);
I need advice on how to continue.
Update: this nearly fixes the issue:
glDisable(GL_DEPTH_TEST);
//glDepthMask(GL_TRUE);
glDepthFunc(GL_NEVER);
//glDepthRange(0.0f, 1.0f);
I wanted to write that it doesn't distinguish spots in the front from those in the back, but then I realized they are nearly white and blending with white doesn't make a difference.
The new image of the torus with a colorized texture is:
The remaining problems are:
the red spots are blue; this is related to the function loading the BMP, which most likely swaps the R and B channels (doesn't matter)
since all the spots in the input image are of the same size, the spots that appear bigger are closer to the camera and should be drawn on top, i.e. saturated and not blended with the white body of the torus. It seems that the drawing order is the opposite of what it should be. If you compare it to the previous image, there the spots that appeared big were drawn correctly and the small ones (on the back side of the torus) were hidden.
How can I fix the latter?

The first problem was solved by disabling the depth test (see the update in the question).
The second problem was solved by manually sorting the array of all triangles. It works well even in real time for 20,000 triangles, which is more than sufficient for my purpose.
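The idea behind the sort is the one outlined in the question: compute each triangle's center, transform it into view space, and draw the farthest triangles first. A minimal sketch (not the exact code from the gist; it assumes GLM, which the opengl-tutorial.org code already uses, and a simple Triangle struct):
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

struct Triangle {
    glm::vec3 a, b, c;   // object-space vertices
    float depth;         // view-space depth, recomputed whenever the view changes
};

void sortBackToFront(std::vector<Triangle>& triangles, const glm::mat4& modelView) {
    for (Triangle& t : triangles) {
        glm::vec3 center = (t.a + t.b + t.c) / 3.0f;
        // In OpenGL view space the camera looks down -z, so more negative z is farther away.
        t.depth = (modelView * glm::vec4(center, 1.0f)).z;
    }
    // Most negative z (farthest) first, so nearer transparent triangles blend over it.
    std::sort(triangles.begin(), triangles.end(),
              [](const Triangle& l, const Triangle& r) { return l.depth < r.depth; });
}
After sorting, the vertex (and UV) buffers are refilled in the new triangle order before drawing.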
The resulting source code: https://gist.github.com/juriad/80b522c856dbd00d529c
It is based on and uses includes from OpenGL tutorial: http://www.opengl-tutorial.org/download/.

Related

How could I remove this colour interpolation artefact across a quad?

I've been reading up on a Vulkan tutorial online, here: https://vulkan-tutorial.com. This question should apply to any 3D rendering API, however.
In this lesson https://vulkan-tutorial.com/Vertex_buffers/Index_buffer, the tutorial had just covered using indexed rendering in order to reuse vertices when drawing the following simple two-triangle quad:
The four vertices were assigned red, green, blue and white colours as vertex attributes, and the fragment shader had those colours interpolated across the triangles as expected. This leads to an ugly visual artefact on the diagonal where the two triangles meet. As I understand it, the interpolation only happens within each triangle, so where the two triangles meet the interpolation doesn't cross the boundary.
How could you, generally in any rendering API, have the colours smoothly interpolated over all four corners for a nice colour-wheel effect, without this hard line?
This is correct output from a graphics API point of view. You can achieve your desired output (a color gradient) within the shader code: you basically need to interpolate the colors yourself. To get an idea of how to do this, here is a GLSL snippet from this answer:
uniform vec2 resolution;

void main(void)
{
    vec2 p = gl_FragCoord.xy / resolution.xy;
    float gray = 1.0 - p.x;
    float red = p.y;
    gl_FragColor = vec4(red, gray * red, gray * red, 1.0);
}
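For the quad in the question specifically, the same idea can be written as a plain bilinear blend of the four corner colours. A minimal sketch in C++ (the corner colour assignment and the function name are my own; the identical math, using mix(), goes into the fragment shader with the quad-local coordinate passed in as a varying):
#include <glm/glm.hpp>

// (u, v) is the position within the quad, from (0,0) at the bottom-left to (1,1) at the top-right.
glm::vec3 cornerGradient(float u, float v) {
    const glm::vec3 bottomLeft (1.0f, 0.0f, 0.0f);  // red
    const glm::vec3 bottomRight(0.0f, 1.0f, 0.0f);  // green
    const glm::vec3 topRight   (0.0f, 0.0f, 1.0f);  // blue
    const glm::vec3 topLeft    (1.0f, 1.0f, 1.0f);  // white
    glm::vec3 bottom = glm::mix(bottomLeft, bottomRight, u);
    glm::vec3 top    = glm::mix(topLeft, topRight, u);
    return glm::mix(bottom, top, v);
}
Because the blend depends only on (u, v) and not on how the quad is split into triangles, the diagonal seam disappears.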

OpenGL: Transparent texture issue

I have trouble with texture transparency in OpenGL. As you can see in the picture below, it doesn't quite work. It's worth noting that the black is actually the clear color I use to clear the screen.
I use the following code to implement blending:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Here's my fragment shader:
#version 330 core
in vec2 tex_coords;
out vec4 color;
uniform vec4 spritecolor;
uniform sampler2D image;
void main(void)
{
color = spritecolor * texture(image, tex_coords);
}
Here is a screenshot of the scene in wireframe mode, in case it helps with the drawn vertices:
If anything else is needed, feel free to ask, I'll add it.
You have to do transparency sorting.
If a scene is drawn, the depth test (glDepthFunc) is usually set to GL_LESS. This causes a fragment to be drawn only when it is in front of everything drawn so far.
To draw transparent objects correctly, you have to draw the opaque objects first. The transparent objects have to be drawn after that, sorted by reverse distance to the camera position.
Draw the transparent object with the largest distance to the camera position first, and the transparent object with the lowest distance to the camera position last.
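In OpenGL calls, that order looks roughly like the following sketch (drawOpaqueObjects() and drawTransparentObjectsBackToFront() are hypothetical placeholders for your own draw code):
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawOpaqueObjects();                   // opaque geometry first, writing depth as usual

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);                 // still test against the opaque depth, but don't write it
drawTransparentObjectsBackToFront();   // farthest transparent object first, nearest last
glDepthMask(GL_TRUE);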
See also the answers to the following questions:
OpenGL depth sorting
opengl z-sorting transparency
Fully transparent OpenGL model

Write to texture GLSL

I want to be able to (in fragment shader) add one texture to another. Right now I have projective texturing and want to expand on that.
Here is what I have so far :
I'm also drawing the view frustum along which the blue/gray test image is projected onto the geometry, which is in constant rotation.
My vertex shader:
ProjTexCoord = ProjectorMatrix * ModelTransform * raw_pos;
My Fragment Shader:
vec4 diffuse = texture(texture1, vs_st);
vec4 projTexColor = textureProj(texture2, ProjTexCoord);
vec4 shaded = diffuse; // max(intensity * diffuse, ambient); -- no shadows for now

if (ProjTexCoord[0] > 0.0 ||
    ProjTexCoord[1] > 0.0 ||
    ProjTexCoord[0] < ProjTexCoord[2] ||
    ProjTexCoord[1] < ProjTexCoord[2]) {
    diffuse = shaded;
} else if (dot(n, projector_aim) < 0) {
    diffuse = projTexColor;
} else {
    diffuse = shaded;
}
What I want to achieve:
When for example - the user presses a button, I want the blue/gray texture to be written to the gray texture on the sphere and rotate with it. Imagine it as sort of "taking a picture" or painting on top of the sphere so that the blue/gray texture spins with the sphere after a button is pressed.
As the fragment shader operates on each pixel it should be possible to copy pixel-by-pixel from one texture to the other, but I have no clue how, I might be googling for the wrong stuff.
How can I achieve this technically? What method is most versatile? Suggestions are very much appreciated, please let me know If more code is necessary.
Just to be clear, you'd like to bake decals into your sphere's grey texture.
The trouble with writing to the grey texture while drawing another object is that it's not one-to-one. You may be writing twice or more to the same texel, or a single fragment may need to write to many texels in your grey texture. It may sound attractive because you already have the coordinates of everything in the one place, but I wouldn't do this.
I'd start by creating a texture containing the object space position of each texel in your grey texture. This is key, so that when you click you can render to your grey texture (using an FBO) and know where each texel is in your current view or your projective texture's view. There may be edge cases where the same bit of texture appears on multiple triangles. You could do this by rendering your sphere to the grey texture using the texture coordinates as your vertex positions. You probably need a floating point texture for this, and the following image probably isn't the sphere's texture mapping, but it'll do for demonstration :P.
So when you click, you render a full screen quad to your grey texture with alpha blending enabled. Using the grey texture object space positions, each fragment computes the image space position within the blue texture's projection. Discard the fragments that are outside the texture and sample/blend in those that are inside.
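The click-time bake pass might look roughly like the following OpenGL sketch (greyTextureFBO, bakeProgram, fullscreenQuadVAO and the texture dimensions are hypothetical names for the pieces described above):
glBindFramebuffer(GL_FRAMEBUFFER, greyTextureFBO);   // render target is the sphere's grey texture
glViewport(0, 0, greyTextureWidth, greyTextureHeight);
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glUseProgram(bakeProgram);
// The fragment shader reads the object-space position of the current texel from the
// position texture, projects it with the projector matrix, discards fragments outside
// the projector frustum and samples the blue/gray decal for the rest.
glBindVertexArray(fullscreenQuadVAO);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindFramebuffer(GL_FRAMEBUFFER, 0);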
I think you are overcomplicating things.
Writes to textures inside classic shaders (i.e. not compute shaders) are only implemented on recent hardware and the very latest OpenGL versions and extensions.
It can be terribly slow if used incorrectly. It's very easy to introduce pipeline stalls and CPU-GPU sync points.
The pixel shader can become a terribly slow, unmaintainable mess of branches and texture fetches.
And all this mess will be executed for every single pixel, every single frame.
Solution: KISS.
Just update your texture on the CPU side (see the sketch after this list).
Write to the texture, replacing parts of it with the desired content.
The update only needs to be done once, and only when you need it. The data persists until you rewrite it (not even once per frame, but only once per change request).
The pixel shader stays dead simple: no branching, one texture fetch.
To get the target pixels, implement ray picking (you will need it anyway for any non-trivial interactive 3D graphics program).
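A minimal sketch of that CPU-side update (sphereTexture, the offsets and the decal size are placeholder names; the pixel data itself comes from wherever you prepare your decal):
std::vector<unsigned char> decalPixels(decalWidth * decalHeight * 4);   // RGBA bytes prepared on the CPU
// ... fill decalPixels with the desired content ...
glBindTexture(GL_TEXTURE_2D, sphereTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                offsetX, offsetY,              // target position in the texture, e.g. from ray picking
                decalWidth, decalHeight,
                GL_RGBA, GL_UNSIGNED_BYTE,
                decalPixels.data());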
P.S. "Everything should be made as simple as possible, but not simpler." Albert Einstein.

Texture lookup into rendered FBO is off by half a pixel

I have a scene that is rendered to a texture via an FBO, and I am sampling it from a fragment shader, drawing regions of it using primitives rather than drawing a full-screen quad: I'm conserving resources by only generating the fragments I'll need.
To test this, I am issuing the exact same geometry as my texture render, which means that the rasterization pattern produced should be exactly the same: when my fragment shader looks up its texture with the varying coordinate it was given, it should match up perfectly with the other values it was given.
Here's how I'm giving my fragment shader the coordinates to auto-texture the geometry with my fullscreen texture:
// Vertex shader
uniform mat4 proj_modelview_mat;
in vec2 in_pos;          // 2D vertex position (declaration implied by its usage below)
out vec2 f_sceneCoord;

void main(void) {
    gl_Position = proj_modelview_mat * vec4(in_pos, 0.0, 1.0);
    f_sceneCoord = (gl_Position.xy + vec2(1, 1)) * 0.5;
}
I'm working in 2D so I didn't concern myself with the perspective divide here. I just set the sceneCoord value using the clip-space position scaled back from [-1,1] to [0,1].
uniform sampler2D scene;
in vec2 f_sceneCoord;
//in vec4 gl_FragCoord;
in float f_alpha;
out vec4 out_fragColor;

void main(void) {
    //vec4 color = texelFetch(scene, ivec2(gl_FragCoord.xy - vec2(0.5, 0.5)), 0);
    vec4 color = texture(scene, f_sceneCoord);
    if (color.a == f_alpha) {
        out_fragColor = vec4(color.rgb, 1);
    } else {
        out_fragColor = vec4(1, 0, 0, 1);
    }
}
Notice I spit out a red fragment if my alphas don't match up. The texture render sets the alpha of each rendered object to a specific index so I know what matches up with what.
Sorry I don't have a picture to show, but it's very clear that my pixels are off by (0.5, 0.5): I get a thin, one-pixel red border around my objects, on their bottom and left sides, that pops in and out. It looks quite "transient". The giveaway is that it only shows up on the bottom and left sides of objects.
Notice I have a line commented out which uses texelFetch: that method works, and I no longer get red fragments showing up. However, I'd like to get this working right with texture and normalized texture coordinates, because I think more hardware will support that. Perhaps the real question is: is it possible to get this right without sending in my viewport resolution via a uniform? There's got to be a way to avoid that!
Update: I tried shifting the texture access by half a pixel, a quarter of a pixel, a hundredth of a pixel; it all made things worse and produced a solid border of wrong values all around the edges. It seems like my (gl_Position.xy + vec2(1,1)) * 0.5 trick sets the right values, but sampling is just slightly off somehow. This is quite strange... See the red fragments? When objects are in motion they shimmer in and out ever so slightly. It means the alpha values I set aren't matching up perfectly on those pixels.
It's not critical for me to get pixel perfect accuracy for that alpha-index-check for my actual application but this behavior is just not what I expected.
Well, first consider dropping that f_sceneCoord varying and just using gl_FragCoord / screenSize as the texture coordinate (you already have this in your example, but the -0.5 is rubbish), with screenSize being a uniform (maybe pre-divided). This should work almost exactly, because by default gl_FragCoord is at the pixel center (meaning i+0.5) and OpenGL returns exact texel values when sampling the texture at the texel center ((i+0.5)/textureSize).
This may still introduce very slight deviations from exact texel values (if any) due to finite precision and such. But then again, you will likely want to use a filtering mode of GL_NEAREST for such one-to-one texture-to-screen mappings anyway. Actually, your existing f_sceneCoord approach may already work well, and it may just be those small rounding issues, which GL_NEAREST prevents, that create your artefacts. But then again, you still don't need that f_sceneCoord thing.
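Setting that filter on the FBO's colour texture is a one-time state change on the C++ side (sceneTexture is a placeholder for the texture attached to your FBO):
glBindTexture(GL_TEXTURE_2D, sceneTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);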
EDIT: Regarding the portability of texelFetch: that function was introduced with GLSL 1.30 (~SM4/GL3/DX10 hardware, ~GeForce 8), I think. But that version is already required by the new in/out syntax you're using (in contrast to the old varying/attribute syntax). So if you're not going to change that, assuming texelFetch is available is absolutely no problem, and it might even be slightly faster than texture (which also requires GLSL 1.30, in contrast to the old texture2D) by circumventing filtering completely.
If you are working in perfect X,Y [0,1] with no rounding errors that's great... But sometimes - especially if working with polar coords, you might consider aligning your calculated coords to the texture 'grid'...
I use:
// align it to the nearest centered texel
curPt -= mod(curPt, (0.5 / vec2(imgW, imgH)));
works like a charm and I no longer get random rounding errors at the screen edges...

opengl z-sorting transparency

I'm rendering PNGs on simple squares in OpenGL ES 2.0, but when I try to draw something behind a square I have already drawn, the transparent areas of the top square are rendered in the same color as the background.
I am calling these at the start of every render call.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable (GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Your title is essentially the answer to your question!
Generally transparency is done by first rendering all opaque objects in the scene (letting the z-buffer figure out what's visible), then rendering all transparent objects from back to front.
Drew Hall gave you a good answer, but another option is to set glEnable(GL_ALPHA_TEST) with glAlphaFunc(GL_GREATER, 0.1f). This prevents nearly transparent pixels (in this case, ones with alpha < 0.1) from being rendered at all, so they do not write into the Z buffer and other things can "show through". However, this only works for pixels that are either fully opaque or fully transparent; it cannot represent partial transparency. It also has rough edges wherever the 0.1 alpha boundary falls, which can look bad for distant features where the pixels are large compared to the object. (Note that the fixed-function alpha test is not available in OpenGL ES 2.0; the discard approach below is its shader-based equivalent.)
Figured it out. You can discard in the fragment shader
mediump vec4 basecolor = texture2D(sTexture, TexCoord);
if (basecolor.a == 0.0){
discard;
}
gl_FragColor = basecolor;