For testing purposes I want to display the first slice of a 3D texture. However it seems that the other slices are also displayed.
My vertex shader:
#version 130
attribute vec4 position;
varying vec2 texcoord;

void main()
{
    gl_Position = position;
    texcoord = position.xy * vec2(0.5) + vec2(0.5);
}
The fragment shader:
#version 130
uniform sampler3D textures[1];
varying vec2 texcoord;

void main()
{
    gl_FragColor = texture3D(textures[0], vec3(texcoord.x, texcoord.y, 0.0));
}
I bind the texture from my C++ code using
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE, rows-1, cols-1, dep-1, 0, GL_RED, GL_UNSIGNED_SHORT, vol.data());
This is how it looks; the whole head should not be visible, only the upper portion of it.
When you use linear filtering (value GL_LINEAR for the GL_TEXTURE_MIN_FILTER and/or GL_TEXTURE_MAG_FILTER texture parameters), the value of your samples will generally be determined by the two slices closest to your 3rd texture coordinate.
If the size of your 3D texture in the 3rd dimension is d, the position of the first slice in texture coordinate space is 0.5 / d. This is easiest to understand if you picture each slice having a certain thickness in texture coordinate space. Since you have d slices, and the texture coordinate range is [0.0, 1.0], each slice has thickness 1.0 / d. Therefore, the first slice extends from 0.0 to 1.0 / d, and its center is at 0.5 / d.
When you use 0.0 for the 3rd texture coordinate, this is not the center of the first slice, and linear sampling comes into play. Since 0.0 is at the edge of the texture, the wrap modes become critical. The default wrap mode is GL_REPEAT, meaning that sampling at the edges acts as if the texture was repeated. With this setting, the neighbor of the first slice is the last slice, and texture coordinate 0.0 is exactly in the middle between the first slice and the last slice.
The consequence is that linear sampling with texture coordinate 0.0 will give you the average of the first slice and the last slice.
While GL_REPEAT is the default for the wrap modes, it is rarely what you want, unless your texture really contains a repeating pattern. It's certainly not the right setting in this case. What you need here is:
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
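For illustration, GL's linear filtering along a single axis can be simulated in a few lines to confirm what the two wrap modes return at texture coordinate 0.0 (a plain Python sketch, not OpenGL; the slice values are invented for the example):

```python
import math

def linear_sample_1d(texels, coord, wrap):
    """Simulate GL linear filtering along one axis.

    texels: per-slice values; coord: texture coordinate in [0, 1];
    wrap: 'repeat' or 'clamp_to_edge'.
    """
    d = len(texels)
    # Texel i is centered at (i + 0.5) / d in texture coordinate space.
    pos = coord * d - 0.5
    i0 = math.floor(pos)
    frac = pos - i0
    i1 = i0 + 1
    if wrap == 'repeat':
        i0 %= d           # neighbor of the first slice is the last slice
        i1 %= d
    else:  # clamp_to_edge
        i0 = min(max(i0, 0), d - 1)
        i1 = min(max(i1, 0), d - 1)
    return texels[i0] * (1 - frac) + texels[i1] * frac

slices = [10.0, 20.0, 30.0, 40.0]  # first slice = 10, last = 40

# With GL_REPEAT, sampling at 0.0 averages the first and last slice.
print(linear_sample_1d(slices, 0.0, 'repeat'))         # 25.0
# With GL_CLAMP_TO_EDGE, sampling at 0.0 returns the first slice.
print(linear_sample_1d(slices, 0.0, 'clamp_to_edge'))  # 10.0
# Sampling at the first slice's center, 0.5 / d, is safe with either mode.
print(linear_sample_1d(slices, 0.5 / 4, 'repeat'))     # 10.0
```

Sampling at 0.5 / d (the first slice's center) would also avoid the problem, but fixing the wrap mode is the robust solution.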
See the gif switching between RGB and the colormap:
The problem is that the two images are different.
I am drawing dots that are RGB white (1.0,1.0,1.0). The alpha channel controls pixel brightness, which creates the dot blur. That's what you see as the brighter image. Then I have a 2-pixel texture of black and white (0.0,0.0,0.0,1.0) (1.0,1.0,1.0,1.0) and in a fragment shader I do:
#version 330
precision highp float;

uniform sampler2D originalColor;
uniform sampler1D colorMap;

in vec2 uv;
out vec4 color;

void main()
{
    vec4 oldColor = texture(originalColor, uv);
    color = texture(colorMap, oldColor.a);
}
Very simply: take the originalColor texture fragment's alpha value, in the range 0 to 1, and translate it into a new color with the black-to-white colorMap texture. There should be no difference between the two images! Or... at least, that's my goal.
Here's my setup for the colormap texture
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &colormap_texture_id); // get texture id
glBindTexture(GL_TEXTURE_1D, colormap_texture_id);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // required: stop texture wrapping
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // required: scale texture with linear sampling
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA32F, colormapColors.size(), 0, GL_RGBA, GL_FLOAT, colormapColors.data()); // setup memory
Render loop:
GLuint textures[] = { textureIDs[currentTexture], colormap_texture_id };
glBindTextures(0, 2, textures);
colormapShader->use();
colormapShader->setUniform("originalColor", 0);
colormapShader->setUniform("colorMap", 1);
renderFullScreenQuad(colormapShader, "position", "texCoord");
I am using a 1D texture as a colormap because it seems that's the only way to store potentially 1000 to 2000 colormap entries in GPU memory. If there's a better way, let me know. I assume the problem is that the interpolation math between two pixels is not right for my purposes.
What should I do to get my expected results?
To make sure there are no shenanigans, I tried the following shader code:
color = texture(colorMap, oldColor.a); //incorrect results
color = texture(colorMap, (oldColor.r + oldColor.g + oldColor.b)/3); //incorrect
color = texture(colorMap, (oldColor.r + oldColor.g + oldColor.b + oldColor.a)/4); //incorrect
color = vec4(oldColor.a); //incorrect
color = oldColor; // CORRECT... obviously...
I think to be more accurate, you'd need to change:
color = texture(colorMap, oldColor.a);
to
color = texture(colorMap, oldColor.a * 0.5 + 0.25);
Or more generally
color = texture(colorMap, oldColor.a * (1.0 - (1.0 / texWidth)) + (0.5 / texWidth));
Normally you wouldn't notice the error; it's only because texWidth is so tiny here that the difference is significant.
The reason for this is because the texture is only going to start linear filtering from black to white after you pass the centre of the first texel (at 0.25 in your 2 texel wide texture). The interpolation is complete once you pass the centre of the last texel (at 0.75).
If you had a 1024-texel texture, like you mention you plan to end up with, then interpolation starts at 0.000488 (0.5 / 1024) and I doubt you'd notice the error.
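The remap formula is easy to sanity-check numerically (a Python sketch; the texture widths are the ones from the discussion above):

```python
def remap_to_texel_centers(t, tex_width):
    # Map t in [0, 1] onto [0.5/w, 1 - 0.5/w], i.e. onto the span
    # between the first and last texel centers.
    return t * (1.0 - 1.0 / tex_width) + 0.5 / tex_width

# For the 2-texel colormap: 0.0 -> 0.25 (center of texel 0) and
# 1.0 -> 0.75 (center of texel 1), exactly the constants above.
print(remap_to_texel_centers(0.0, 2))  # 0.25
print(remap_to_texel_centers(1.0, 2))  # 0.75
# For a 1024-texel colormap the correction is tiny: 0.5 / 1024.
print(remap_to_texel_centers(0.0, 1024))  # 0.00048828125
```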
TASK BACKGROUND
I am trying to implement SSAO after OGLDev Tutorial 45, which is based on a Tutorial by John Chapman. The OGLDev Tutorial uses a highly simplified method which samples random points in a radius around the fragment position and steps up the AO factor depending on how many of the sampled points have a depth greater than the actual surface depth stored at that location (the more positions around the fragment lie in front of it the greater the occlusion).
The 'engine' I use does not have deferred shading as modular as OGLDev's, but basically it first renders the whole screen's colors to a framebuffer with a texture attachment and a depth renderbuffer attachment. To compare depths, the fragment view space positions are rendered to another framebuffer with a texture attachment.
Those textures are then postprocessed by the SSAO shader and the result is drawn to a screen-filling quad.
Both textures draw fine to the quad on their own, and the shader input uniforms seem to be OK as well, which is why I haven't included any engine code.
The Fragment Shader is almost identical, as you can see below. I have included some comments that serve my personal understanding.
#version 330 core
in vec2 texCoord;
layout(location = 0) out vec4 outColor;
const int RANDOM_VECTOR_ARRAY_MAX_SIZE = 128; // reference uses 64
const float SAMPLE_RADIUS = 1.5f; // TODO: play with this value, reference uses 1.5
uniform sampler2D screenColorTexture; // the whole rendered screen
uniform sampler2D viewPosTexture; // interpolated vertex positions in view space
uniform mat4 projMat;
// we use a uniform buffer object for better performance
layout (std140) uniform RandomVectors
{
    vec3 randomVectors[RANDOM_VECTOR_ARRAY_MAX_SIZE];
};

void main()
{
    vec4 screenColor = texture(screenColorTexture, texCoord).rgba;
    vec3 viewPos = texture(viewPosTexture, texCoord).xyz;
    float AO = 0.0;
    // sample random points to compare depths around the view space position.
    // the more sampled points lie in front of the actual depth at the sampled position,
    // the higher the probability of the surface point being occluded.
    for (int i = 0; i < RANDOM_VECTOR_ARRAY_MAX_SIZE; ++i) {
        // take a random sample point.
        vec3 samplePos = viewPos + randomVectors[i];
        // project the sample point onto the near clipping plane
        // to find the depth value (i.e. actual surface geometry)
        // at the given view space position for which to compare depth
        vec4 offset = vec4(samplePos, 1.0);
        offset = projMat * offset;               // project onto near clipping plane
        offset.xy /= offset.w;                   // perform perspective divide
        offset.xy = offset.xy * 0.5 + vec2(0.5); // transform to [0,1] range
        float sampleActualSurfaceDepth = texture(viewPosTexture, offset.xy).z;
        // compare the depth of the random sampled point to the actual depth at the sampled xy position:
        // the function step(edge, value) returns 1 if value > edge, else 0.
        // thus if the random sampled point's depth is greater than (lies behind) the actual surface depth
        // at that point, the probability of occlusion increases.
        // note: if the actual depth at the sampled position is too far off from the depth at the fragment
        // position, i.e. the surface has a sharp ridge/crevice, it doesn't add to the occlusion, to avoid artifacts.
        if (abs(viewPos.z - sampleActualSurfaceDepth) < SAMPLE_RADIUS) {
            AO += step(sampleActualSurfaceDepth, samplePos.z);
        }
    }
    // normalize the ratio of sampled points lying behind the surface to a probability in [0,1].
    // the occlusion factor should make the color darker, not lighter, so we invert it.
    AO = 1.0 - AO / float(RANDOM_VECTOR_ARRAY_MAX_SIZE);
    ///
    outColor = screenColor + mix(vec4(0.2), vec4(pow(AO, 2.0)), 1.0);
    /*/
    outColor = vec4(viewPos, 1); // DEBUG: draw view space positions
    //*/
}
WHAT WORKS?
The fragment colors texture is correct.
The texture coordinates are those of a screen-filling quad to which we draw, transformed to [0, 1]. They yield results equivalent to vec2 texCoord = gl_FragCoord.xy / textureSize(screenColorTexture, 0);
The (perspective) projection matrix is the one the camera uses, and it works for that purpose. In any case, this doesn't seem to be the issue.
The random sample vector components are in range [-1, 1], as intended.
The fragment view space positions texture seems ok:
WHAT'S WRONG?
When I set the AO mixing factor at the bottom of the fragment shader to 0, it runs smoothly up to the fps cap (even though the calculations are still performed; at least I don't think the compiler optimizes them away :D ). But when the AO is mixed in, a frame takes up to 80 ms to draw (getting slower over time, as if buffers were filling up), and the result is really interesting and confusing:
Obviously the mapping seems far off, and the flickering noise seems very random, as if it corresponded directly to the random sample vectors.
I found it most interesting that the draw time increased massively only on the addition of the AO factor, not due to the occlusion calculation. Is there an issue in the draw buffers?
The issue appeared to be linked to the chosen texture types.
The texture with handle viewPosTexture needed to be explicitly defined with a float texture format such as GL_RGB16F or GL_RGBA32F, instead of plain GL_RGB. Interestingly, the separate textures were drawn fine; the issues arose only in combination.
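Why the format matters can be sketched without OpenGL: an unsized GL_RGB attachment stores unsigned-normalized 8-bit values, so anything a view-space position texture writes outside [0, 1] is clamped, and what remains is quantized. A simulation of that storage (plain Python; the sample values are invented):

```python
def store_unorm8(x):
    # Simulate writing a value into an 8-bit unsigned-normalized channel:
    # clamp to [0, 1], then quantize to 256 levels.
    clamped = min(max(x, 0.0), 1.0)
    return round(clamped * 255) / 255

# View-space positions typically have negative z and magnitudes beyond 1,
# so a unorm attachment destroys exactly the data SSAO needs:
print(store_unorm8(-3.7))  # 0.0  (every negative depth collapses to zero)
print(store_unorm8(2.5))   # 1.0  (everything beyond 1 saturates)
# A float format (GL_RGB16F / GL_RGBA32F) stores the values as-is.
```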
// generate screen color texture
// note: GL_NEAREST interpolation is ok since there is no subpixel sampling anyway
glGenTextures(1, &screenColorTexture);
glBindTexture(GL_TEXTURE_2D, screenColorTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, windowWidth, windowHeight, 0, GL_BGR, GL_UNSIGNED_BYTE, NULL);
// generate depth renderbuffer. without this, depth testing wont work.
// we use a renderbuffer since we wont have to sample this, opengl uses it directly.
glGenRenderbuffers(1, &screenDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, screenDepthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, windowWidth, windowHeight);
// generate vertex view space position texture
glGenTextures(1, &viewPosTexture);
glBindTexture(GL_TEXTURE_2D, viewPosTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, windowWidth, windowHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
The slow drawing might be caused by the GLSL mix function. Will investigate further on that.
The flickering was due to the regeneration and passing of new random vectors in each frame. Just passing enough random vectors once solves the issue. Otherwise it might help to blur the SSAO result.
Basically, the SSAO works now! What remains are more or less obvious bugs.
I did some calculations using the projected gl_Position and screen parameters, but the position seems distorted in polygons close to the camera. But when I use...
vec2 fragmentScreenCoordinates = vec2(gl_FragCoord.x / _ScreenParams.x, gl_FragCoord.y / _ScreenParams.y);
...I got pretty accurate xy results.
Pretty output gl_FragCoord.xy coordinates:
Calculating from projected vertices results in interpolated values all over the faces, which I cannot use for sampling screen aligned textures.
Ugly interpolated output from gl_Position:
Is there a way to produce this gl_FragCoord-like value in the vertex shader? I really want to calculate texture coordinates in the vertex shader for independent texture reads, manual depth tests, etc.
Or are there any Unity built-in values I can use here?
In the vertex shader, you set
gl_Position = ...
This must be in clip space, i.e. before the perspective divide to normalized device coordinates. That's because OpenGL performs clipping and interpolation in this 4D homogeneous space.
Since you just want the value at each vertex, and nothing interpolated, you can perform the divide right away (and can even leave it out when using an orthographic projection)...
vec3 ndc = gl_Position.xyz / gl_Position.w; //perspective divide/normalize
vec2 viewportCoord = ndc.xy * 0.5 + 0.5; //ndc is -1 to 1 in GL. scale for 0 to 1
vec2 viewportPixelCoord = viewportCoord * viewportSize;
Here, viewportCoord is the equivalent of your fragmentScreenCoordinates, assuming the viewport covers the window.
Note: as @derhass points out, this will fail if geometry intersects the w = 0 plane, i.e. when a visible triangle's vertex is behind the camera.
[EDIT]
The comments discuss using the coordinates for 1:1 nearest-neighbour pixel lookups. As @AndonM.Coleman says, adjusting the coordinates will work, but it's easier and faster to use nearest-neighbour filtering. You can also use texelFetch, which bypasses filtering altogether:
Snap the coordinates:
vec2 sampleCoord = (floor(viewportPixelCoord) + 0.5) / textureSize(mySampler, 0);
vec4 colour = texture(mySampler, sampleCoord);
Nearest-neighbour filtering (not sure how to set this in unity3d):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //magnification is the important one here
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
Use texelFetch:
vec4 colour = texelFetch(mySampler, ivec2(viewportPixelCoord), 0);
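The divide-and-scale math above is plain arithmetic and can be checked without a GPU (a Python sketch; the clip-space points and viewport size are invented for the example):

```python
def clip_to_viewport(clip, viewport_size):
    """clip: (x, y, z, w) in clip space; viewport_size: (width, height)."""
    x, y, z, w = clip
    ndc_x, ndc_y = x / w, y / w                     # perspective divide
    vp_x = (ndc_x * 0.5 + 0.5) * viewport_size[0]   # [-1, 1] -> [0, width]
    vp_y = (ndc_y * 0.5 + 0.5) * viewport_size[1]   # [-1, 1] -> [0, height]
    return vp_x, vp_y

# A point at the view center (x = y = 0) lands mid-viewport.
print(clip_to_viewport((0.0, 0.0, 0.5, 1.0), (800, 600)))   # (400.0, 300.0)
# ndc.x = 1 maps to the right edge, ndc.y = -1 to the bottom.
print(clip_to_viewport((2.0, -2.0, 0.5, 2.0), (800, 600)))  # (800.0, 0.0)
```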
I want to apply a uniform checkerboard texture to a cylinder surface of height h and semi-radii (a, b).
I've implemented this shader:
Vertex shader:
varying vec2 texture_coordinate;

const float twopi = 6.283185307;
const float pi = 3.141592654;
const float ra = 1.5;
const float rb = 1.0;

void main()
{
    // Transforming The Vertex
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    // GLSL has no atan2; the two-argument atan returns values in (-pi, pi]
    float theta = (atan(rb * gl_Vertex.y, ra * gl_Vertex.x) + pi * 0.5) / pi;
    // Passing The Texture Coordinate Of Texture Unit 0 To The Fragment Shader
    texture_coordinate = vec2(theta, -(-gl_Vertex.z + 0.5));
}
Fragment shader:
varying vec2 texture_coordinate;
uniform sampler2D my_color_texture;

void main()
{
    // Sampling The Texture And Passing It To The Frame Buffer
    gl_FragColor = texture2D(my_color_texture, texture_coordinate);
}
while on client side I've specified the following options:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
My texture is a 3768x1200 checkerboard. I would like the texture to be applied so the checkerboard stays uniform (squares without stretching), but I get the correct aspect ratio only on the less curved part of the surface; on the more curved parts the tiles are stretched.
I would like to understand how to apply the texture without distorting and stretching it, perhaps by repeating the texture instead of stretching it.
I also see strange flickering on the borders of the texture, where the two edges meet (it can be seen in the second image). How can I solve that?
You can modify the texture coordinates to "shrink" it on an object a bit. What you can't do is to parametrize the texture coordinates to scale non-linearly.
So, the options are:
Quantize the sampling, modifying texture coordinates to better accommodate the non-circularity (dynamic, but quality is low with low-poly tessellation; it's the simplest solution to implement, though).
Use the fragment shader to scale texture coordinates non-linearly (possibly a bit more complicated, but dynamic, and giving quite good results depending on the texture size, the filtering used, and the texture contents(!)).
Modify the texture (a static solution: it will work only for a given Ra/Rb ratio; however, the quality will be the best possible).
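For the second option, the non-linear scaling essentially reparametrizes theta by arc length along the elliptical cross-section, so that equal texture-coordinate steps cover equal distances on the surface. A rough numerical sketch of that idea (Python; the function names, midpoint-rule integration, and step count are my own illustrative choices, not from the original answer):

```python
import math

def ellipse_arc_length(ra, rb, theta0, theta1, steps=10000):
    """Numerically integrate arc length of (ra*cos t, rb*sin t)."""
    total = 0.0
    dt = (theta1 - theta0) / steps
    for i in range(steps):
        t = theta0 + (i + 0.5) * dt
        # |d/dt (ra cos t, rb sin t)| = sqrt(ra^2 sin^2 t + rb^2 cos^2 t)
        total += math.sqrt((ra * math.sin(t))**2 + (rb * math.cos(t))**2) * dt
    return total

def arc_length_texcoord(ra, rb, theta):
    # Fraction of the full perimeter covered up to theta: equal increments
    # of this value correspond to equal distances along the cross-section,
    # so checkerboard squares keep a uniform width.
    return (ellipse_arc_length(ra, rb, -math.pi, theta)
            / ellipse_arc_length(ra, rb, -math.pi, math.pi))

# Sanity check: for a circle (ra == rb) the remap is just linear in theta.
print(arc_length_texcoord(1.0, 1.0, 0.0))  # ≈ 0.5
```

In practice this remap could be baked into a small 1D lookup texture or approximated in the vertex/fragment shader rather than integrated per fragment.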
As to the flickering on the borders, you have to generate mipmaps for your textures.
Let me know if you need more information.
I am doing a simple pixelate shader in GLSL.
Everything is working as expected except for this border artifact that I see at pixelation borders.
The code is:
precision mediump float;

uniform sampler2D Texture0;
uniform int pixelCount;
varying vec2 fTexCoord;

void main(void)
{
    float pixelWidth = 1.0 / float(pixelCount);
    float x = floor(fTexCoord.x / pixelWidth) * pixelWidth + pixelWidth / 2.0;
    float y = floor(fTexCoord.y / pixelWidth) * pixelWidth + pixelWidth / 2.0;
    gl_FragColor = texture2D(Texture0, vec2(x, y));
}
Please see the attached image.
I am clueless as to why this is happening.
Please help me with this...
I think your problem is related to texture interpolation.
Are you using GL_NEAREST for your texture samplers? (This makes the sampler use point sampling instead of interpolation.)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
Also you should probably not use mipmaps.
It would also be useful to know what the actual texture image looks like and what the geometry looks like (I assume you are rendering a single quad).
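As an aside, the snapping math from the question's shader is easy to verify in isolation: every coordinate inside one output "pixel" maps to that cell's centre (a Python sketch; the pixelCount value is arbitrary):

```python
import math

def snap_to_pixel_center(coord, pixel_count):
    # Same math as the shader: floor(coord / w) * w + w / 2.
    w = 1.0 / pixel_count
    return math.floor(coord / w) * w + w / 2.0

# With 10 cells, every coordinate in [0.0, 0.1) maps to the first
# cell's centre at 0.05; 0.10 falls into the next cell.
print(snap_to_pixel_center(0.00, 10))   # 0.05
print(snap_to_pixel_center(0.099, 10))  # 0.05
print(snap_to_pixel_center(0.10, 10))   # ≈ 0.15 (next cell)
```

So the math itself is sound; the border artifact comes from linear filtering blending two source texels when the snapped centre lands between them, which is exactly what GL_NEAREST avoids.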