I'm using deferred rendering in my application and I'm trying to create a texture that will contain both the depth and the stencil.
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
???, GL_FLOAT, 0);
Now, what format enum does OpenGL want for this particular texture? I tried a couple and got errors for all of them.
Also, what is the correct GLSL syntax to access the depth and stencil parts of the texture? I understand that depth textures are usually declared as uniform sampler2DShadow. But do I do
float depth = texture(depthstenciltex,uv).r;// <- first bit ? all 32 bit ? 24 bit ?
float stencil = texture(depthstenciltex,uv).a;
Now what format enum does OpenGL want for this particular texture?
The problem you are running into is that depth+stencil is a totally oddball combination of data. The first 24 bits (depth) are fixed-point and the remaining 8 bits (stencil) are an unsigned integer. This requires a special packed data type: GL_UNSIGNED_INT_24_8
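Plugged into the allocation from the question, that looks like this (same width and height as before, no initial data):
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
             GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL); // packed 24-bit depth + 8-bit stencil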
Also, what is the correct GLSL syntax to access the depth and stencil part of the texture? I understand that depth textures are usually uniform sampler2DShadow.
You will actually never be able to sample both of those things through the same sampler uniform, and here is why:
OpenGL Shading Language 4.50 Specification - 8.9 Texture Functions - p. 158
For depth/stencil textures, the sampler type should match the component being accessed as set through the OpenGL API. When the depth/stencil texture mode is set to GL_DEPTH_COMPONENT, a floating-point sampler type should be used. When the depth/stencil texture mode is set to GL_STENCIL_INDEX, an unsigned integer sampler type should be used. Doing a texture lookup with an unsupported combination will return undefined values.
This means that if you want to use both the depth and the stencil in a shader, you are going to have to use texture views (OpenGL 4.2+) and bind those views to two different texture image units (each view has a different state for GL_DEPTH_STENCIL_TEXTURE_MODE). Both of these things together mean you are going to need at least an OpenGL 4.4 implementation.
Fragment shader that samples depth and stencil:
#version 440
// Sampling the stencil index of a depth+stencil texture became core in OpenGL 4.4
layout (binding=0) uniform sampler2D depth_tex;
layout (binding=1) uniform usampler2D stencil_tex;
in vec2 uv;
void main (void) {
float depth   = texture (depth_tex,   uv).r; // depth in [0,1]
uint  stencil = texture (stencil_tex, uv).r; // stencil index
}
Create a stencil view texture:
// Alternate view of the image data in `depth_stencil_texture`
GLuint stencil_view;
glGenTextures (1, &stencil_view);
glTextureView (stencil_view, GL_TEXTURE_2D, depth_stencil_tex,
GL_DEPTH24_STENCIL8, 0, 1, 0, 1);
// ^^^ This requires `depth_stencil_tex` be allocated using `glTexStorage2D (...)`
// to satisfy `GL_TEXTURE_IMMUTABLE_FORMAT` == `GL_TRUE`
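For reference, an immutable allocation of depth_stencil_tex that satisfies that requirement could look like this (a sketch, reusing the width/height from the question):
// Immutable storage is required for any texture used as the origin of a texture view
glBindTexture (GL_TEXTURE_2D, depth_stencil_tex);
glTexStorage2D (GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);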
OpenGL state setup for this shader:
// Texture Image Unit 0 will treat it as a depth texture
glActiveTexture (GL_TEXTURE0);
glBindTexture (GL_TEXTURE_2D, depth_stencil_tex);
glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_DEPTH_COMPONENT);
// Texture Image Unit 1 will treat the stencil view of depth_stencil_tex accordingly
glActiveTexture (GL_TEXTURE1);
glBindTexture (GL_TEXTURE_2D, stencil_view);
glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_STENCIL_INDEX);
Never mind, found it:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, w, h, 0, GL_DEPTH_STENCIL,
             GL_UNSIGNED_INT_24_8, 0);
GL_UNSIGNED_INT_24_8 was what I was missing.
Usage in GLSL (330):
uniform sampler2D depthstenciltex;
...
float depth = texture(depthstenciltex, uv).r; // accesses the first 24 bits (depth),
                                              // normalized to [0, 1]
I've found a handful of similar problems posted around the web, and it would appear that I'm already doing what the solutions suggest.
To summarize the problem: despite the compute shader running and no errors being reported, no change is made to the texture it is supposedly writing to.
Here is the compute shader code. It was intended to do something else, but for the sake of troubleshooting it simply fills the output texture with ones.
#version 430 core
layout(local_size_x = 4, local_size_y = 4, local_size_z = 4) in;
layout(r32f) uniform readonly image3D inputDensityField;
layout(r32f) uniform writeonly image3D outputDensityField;
uniform vec4 paintColor;
uniform vec3 paintPoint;
uniform float paintRadius;
uniform float paintDensity;
void main()
{
ivec3 cellIndex = ivec3(gl_GlobalInvocationID);
imageStore(outputDensityField, cellIndex, vec4(1.0, 1.0, 1.0, 1.0));
}
I'm binding the textures to the compute shader like so.
s32 uniformID = glGetUniformLocation(programID, name);
u32 bindIndex = 0; // 1 for the other texture.
glUseProgram(programID);
glUniform1i(uniformID, bindIndex);
glUseProgram(0);
The dispatch looks something like this.
glUseProgram(programID);
glBindImageTexture(0, inputTexID, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);
glDispatchCompute(groupDim.x, groupDim.y, groupDim.z);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
glUseProgram(0);
Inspecting through RenderDoc does not reveal any errors. The textures seem to have been bound correctly, although they are both displayed in RenderDoc as outputs which I would assume is an error on RenderDoc's part?
Whichever texture was the output of the last glDispatchCompute will later be sampled in a fragment shader.
[Images: order of operations and the textures as listed in RenderDoc]
The red squares are test fills made with glTexSubImage3D, again for troubleshooting purposes.
I've made sure that I'm passing the correct texture format.
[Image: the texture format as shown in RenderDoc]
Additionally, I'm using glDebugMessageCallback, which usually catches all errors, so I would assume there's no problem with the creation code.
Apologies if the information provided is a bit incoherent. Showing everything would make for a very long post, and I'm unsure which parts are most relevant to show.
I've found a solution! Apparently, in the case of a 3D texture, you need to pass GL_TRUE for layered in glBindImageTexture.
https://www.khronos.org/opengl/wiki/Image_Load_Store
Image bindings can be layered or non-layered, which is determined by layered. If layered is GL_TRUE, then texture must be an Array Texture (of some type), a Cubemap Texture, or a 3D Texture. If a layered image is being bound, then the entire mipmap level specified by level is bound.
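Applied to the dispatch code above, the only change is the layered argument (keeping the question's variable names):
// For a 3D texture the image must be bound layered (GL_TRUE); otherwise only a
// single slice is bound and the image3D writes from the shader are lost.
glBindImageTexture(0, inputTexID, 0, GL_TRUE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_R32F);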
TASK BACKGROUND
I am trying to implement SSAO following OGLDev Tutorial 45, which is based on a tutorial by John Chapman. The OGLDev tutorial uses a highly simplified method that samples random points in a radius around the fragment position and steps up the AO factor depending on how many of the sampled points have a depth greater than the actual surface depth stored at that location (the more positions around the fragment lie in front of it, the greater the occlusion).
The 'engine' I use does not have deferred shading as modular as OGLDev's, but basically it first renders the whole screen's colors to a framebuffer with a texture attachment and a depth renderbuffer attachment. To compare the depths, the fragment view space positions are rendered to another framebuffer with a texture attachment.
Those textures are then postprocessed by the SSAO shader and the result is drawn to a screen-filling quad.
Both textures draw fine to the quad on their own, and the shader input uniforms seem to be OK as well, which is why I haven't included any engine code.
The Fragment Shader is almost identical, as you can see below. I have included some comments that serve my personal understanding.
#version 330 core
in vec2 texCoord;
layout(location = 0) out vec4 outColor;
const int RANDOM_VECTOR_ARRAY_MAX_SIZE = 128; // reference uses 64
const float SAMPLE_RADIUS = 1.5f; // TODO: play with this value, reference uses 1.5
uniform sampler2D screenColorTexture; // the whole rendered screen
uniform sampler2D viewPosTexture; // interpolated vertex positions in view space
uniform mat4 projMat;
// we use a uniform buffer object for better performance
layout (std140) uniform RandomVectors
{
vec3 randomVectors[RANDOM_VECTOR_ARRAY_MAX_SIZE];
};
void main()
{
vec4 screenColor = texture(screenColorTexture, texCoord).rgba;
vec3 viewPos = texture(viewPosTexture, texCoord).xyz;
float AO = 0.0;
// sample random points to compare depths around the view space position.
// the more sampled points lie in front of the actual depth at the sampled position,
// the higher the probability of the surface point to be occluded.
for (int i = 0; i < RANDOM_VECTOR_ARRAY_MAX_SIZE; ++i) {
// take a random sample point.
vec3 samplePos = viewPos + randomVectors[i];
// project sample point onto near clipping plane
// to find the depth value (i.e. actual surface geometry)
// at the given view space position for which to compare depth
vec4 offset = vec4(samplePos, 1.0);
offset = projMat * offset; // project onto near clipping plane
offset.xy /= offset.w; // perform perspective divide
offset.xy = offset.xy * 0.5 + vec2(0.5); // transform to [0,1] range
float sampleActualSurfaceDepth = texture(viewPosTexture, offset.xy).z;
// compare depth of random sampled point to actual depth at sampled xy position:
// the function step(edge, value) returns 1 if value > edge, else 0
// thus if the random sampled point's depth is greater than (i.e. lies behind) the actual surface depth at that point,
// the probability of occlusion increases.
// note: if the actual depth at the sampled position is too far off from the depth at the fragment position,
// i.e. the surface has a sharp ridge/crevice, it doesn't add to the occlusion, to avoid artifacts.
if (abs(viewPos.z - sampleActualSurfaceDepth) < SAMPLE_RADIUS) {
AO += step(sampleActualSurfaceDepth, samplePos.z);
}
}
// normalize the ratio of sampled points lying behind the surface to a probability in [0,1]
// the occlusion factor should make the color darker, not lighter, so we invert it.
AO = 1.0 - AO / float(RANDOM_VECTOR_ARRAY_MAX_SIZE);
///
outColor = screenColor + mix(vec4(0.2), vec4(pow(AO, 2.0)), 1.0);
/*/
outColor = vec4(viewPos, 1); // DEBUG: draw view space positions
//*/
}
WHAT WORKS?
The fragment colors texture is correct.
The texture coordinates are those of a screen-filling quad to which we draw, transformed to [0, 1]. They yield results equivalent to vec2 texCoord = gl_FragCoord.xy / textureSize(screenColorTexture, 0);
The (perspective) projection matrix is the one the camera uses, and it works for that purpose. In any case, this doesn't seem to be the issue.
The random sample vector components are in range [-1, 1], as intended.
The fragment view space positions texture seems OK.
WHAT'S WRONG?
When I set the AO mixing factor at the bottom of the fragment shader to 0, it runs smoothly at the FPS cap (even though the calculations are still performed; at least I guess the compiler won't optimize that away :D). But when the AO is mixed in, a frame takes up to 80 ms to draw (getting slower over time, as if the buffers were filling up), and the result is really interesting and confusing:
Obviously the mapping seems far off, and the flickering noise seems very random, as if it corresponded directly to the random sample vectors.
I found it most interesting that the draw time increased massively only upon adding the AO factor, not due to the occlusion calculation. Is there an issue with the draw buffers?
The issue appeared to be linked to the chosen texture types.
The texture with handle viewPosTexture needed to be explicitly defined with a float texture format, GL_RGB16F or GL_RGBA32F, instead of just GL_RGB. Interestingly, the separate textures were drawn fine on their own; the issues arose only in combination.
// generate screen color texture
// note: GL_NEAREST interpolation is ok since there is no subpixel sampling anyway
glGenTextures(1, &screenColorTexture);
glBindTexture(GL_TEXTURE_2D, screenColorTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, windowWidth, windowHeight, 0, GL_BGR, GL_UNSIGNED_BYTE, NULL);
// generate depth renderbuffer. without this, depth testing won't work.
// we use a renderbuffer since we won't have to sample it; OpenGL uses it directly.
glGenRenderbuffers(1, &screenDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, screenDepthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, windowWidth, windowHeight);
// generate vertex view space position texture
glGenTextures(1, &viewPosTexture);
glBindTexture(GL_TEXTURE_2D, viewPosTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, windowWidth, windowHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
The slow drawing might be caused by the GLSL mix function. I will investigate that further.
The flickering was due to regenerating and passing new random vectors every frame. Passing enough random vectors just once solves the issue. Otherwise, it might help to blur the SSAO result.
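For completeness, a rough sketch of what a one-time upload of the sample kernel could look like (the buffer and program names here are made up; note that std140 pads each vec3 array element to 16 bytes, so it is safest to upload vec4s):
#include <cstdlib>
#include <vector>
// one-time generation and upload of the random sample kernel (hypothetical names)
const int kKernelSize = 128; // matches RANDOM_VECTOR_ARRAY_MAX_SIZE in the shader
std::vector<float> kernel(kKernelSize * 4);
for (int i = 0; i < kKernelSize; ++i) {
    kernel[i * 4 + 0] = rand() / float(RAND_MAX) * 2.0f - 1.0f; // x in [-1, 1]
    kernel[i * 4 + 1] = rand() / float(RAND_MAX) * 2.0f - 1.0f; // y in [-1, 1]
    kernel[i * 4 + 2] = rand() / float(RAND_MAX) * 2.0f - 1.0f; // z in [-1, 1]
    kernel[i * 4 + 3] = 0.0f;                                   // std140 padding
}
GLuint randomVectorsUBO;
glGenBuffers(1, &randomVectorsUBO);
glBindBuffer(GL_UNIFORM_BUFFER, randomVectorsUBO);
glBufferData(GL_UNIFORM_BUFFER, kernel.size() * sizeof(float), kernel.data(), GL_STATIC_DRAW);
// associate the shader's RandomVectors block with binding point 0 and bind the buffer there
GLuint blockIndex = glGetUniformBlockIndex(ssaoProgram, "RandomVectors");
glUniformBlockBinding(ssaoProgram, blockIndex, 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, randomVectorsUBO);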
Basically, the SSAO works now! What remains are more or less apparent bugs.
I am using freeglut, GLEW and DevIL to render a textured teapot using a vertex and fragment shader. This is all working fine in OpenGL 2.0 and GLSL 1.2 on Ubuntu 14.04.
Now, I want to apply a bump map to the teapot. My lecturer evidently doesn't brew his own tea, and so doesn't know they're supposed to be smooth. Anyway, I found a nice-looking tutorial on old-school bump mapping that includes a fragment shader that begins:
uniform sampler2D DecalTex; //The texture
uniform sampler2D BumpTex; //The bump-map
What they don't mention is how to pass two textures to the shader in the first place.
Previously I
//OpenGL cpp file
glBindTexture(GL_TEXTURE_2D, textureHandle);
//Vertex shader
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
//Fragment shader
gl_FragColor = color * texture2D(DecalTex,gl_TexCoord[0].xy);
so now I
//OpenGL cpp file
glBindTexture(GL_TEXTURE_2D, textureHandle);
glBindTexture(GL_TEXTURE_2D, bumpHandle);
//Vertex shader
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
gl_TexCoord[1] = gl_TextureMatrix[1] * gl_MultiTexCoord1;
//Fragment shader
gl_FragColor = color * texture2D(BumpTex,gl_TexCoord[0].xy);
//no bump logic yet, just testing I can use texture 1 instead of texture 0
but this doesn't work. The texture disappears completely (effectively the teapot is white). I've tried GL_TEXTURE_2D_ARRAY, glActiveTexture and a few other likely-seeming but fruitless options.
After sifting through the usual mixed bag of references to OpenGL and GLSL new and old, I've come to the conclusion that I probably need glGetUniformLocation. How exactly do I use this in the OpenGL cpp file to pass the already-populated texture handles to the fragment shader?
How to pass an array of textures with different sizes to GLSL?
Passing Multiple Textures from OpenGL to GLSL shader
Multiple textures in GLSL - only one works
(This is homework so please answer with minimal code fragments (if at all). Thanks!)
Failing that, does anyone have a tea cosy mesh?
It is very simple, really. All you need is to bind the sampler to some texture unit with glUniform1i. So for your code sample, assuming the two uniform samplers:
uniform sampler2D DecalTex; // The texture (we'll bind to texture unit 0)
uniform sampler2D BumpTex; // The bump-map (we'll bind to texture unit 1)
In your initialization code:
// Get the uniform variables location. You've probably already done that before...
decalTexLocation = glGetUniformLocation(shader_program, "DecalTex");
bumpTexLocation = glGetUniformLocation(shader_program, "BumpTex");
// Then bind the uniform samplers to texture units:
glUseProgram(shader_program);
glUniform1i(decalTexLocation, 0);
glUniform1i(bumpTexLocation, 1);
OK, shader uniforms set, now we render. To do so, you will need the usual glBindTexture plus glActiveTexture:
glActiveTexture(GL_TEXTURE0 + 0); // Texture unit 0
glBindTexture(GL_TEXTURE_2D, decalTexHandle);
glActiveTexture(GL_TEXTURE0 + 1); // Texture unit 1
glBindTexture(GL_TEXTURE_2D, bumpHandle);
// Done! Now you render normally.
And in the shader, you will use the textures samplers just like you already do:
vec4 a = texture2D(DecalTex, tc);
vec4 b = texture2D(BumpTex, tc);
Note: For techniques like bump-mapping, you only need one set of texture coordinates, since the textures are the same, only containing different data. So you should probably pass texture coordinates as a vertex attribute.
instead of using:
glUniform1i(decalTexLocation, 0);
glUniform1i(bumpTexLocation, 1);
in your code,
you can have:
layout(binding=0) uniform sampler2D DecalTex;
// The texture (we'll bind to texture unit 0)
layout(binding=1) uniform sampler2D BumpTex;
// The bump-map (we'll bind to texture unit 1)
in your shader. That also means you don't have to query for the location.
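Worth noting: the binding layout qualifier on samplers requires GLSL 4.20 (or the ARB_shading_language_420pack extension), so the shader needs a matching version directive, e.g.:
#version 420
layout(binding = 0) uniform sampler2D DecalTex; // texture unit 0
layout(binding = 1) uniform sampler2D BumpTex;  // texture unit 1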
I know how to load the texture
std::unique_ptr<glimg::ImageSet> pImgSet(glimg::loaders::dds::LoadFromFile("test.dds"));
GLuint tex = glimg::CreateTexture(pImgSet.get(), 0);
But how do I get this texture into my shader?
GL Image - Unofficial OpenGL SDK
Bind the texture to a texture unit, e.g. unit 0:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
Add a sampler2D uniform to your shader:
uniform sampler2D myTexture;
Set the uniform to the number of the texture unit, as an integer:
glUseProgram(program);
GLint location = glGetUniformLocation(program, "myTexture");
glUniform1i(location, 0);
In the shader, use texture2D to sample it, e.g.:
gl_FragColor = texture2D(myTexture, texCoords);
The key thing to know is that sampler2D uniforms can be set as integers; setting it to 1 means to use the texture bound to GL_TEXTURE1, and so on. The uniform's value defaults to 0, and the active texture unit defaults to GL_TEXTURE0, so if you use only one texture unit, you don't even need to set the uniform.
I'm writing a refraction shader that takes into account two surfaces.
As such, I'm using FBO's to render the depth and normals to texture, and a cubemap to represent the environment.
I need to use the values of the normals stored in the texture to fetch values from the cubemap in order to get the refraction normal of the back surface.
The cubemap works perfectly as long as I don't try to access it from a vector whose value has been retrieved from a texture.
Here is a minimal fragment shader that fails; the color stays desperately black.
I'm sure that the call to texture2D returns non-zero values: if I try to display the texture color (representing the normals) contained in direction, I get a perfectly colored model. No matter what kind of operations I do with the "direction" vector, it keeps failing.
uniform samplerCube cubemap;
uniform sampler2D normalTexture;
uniform vec2 viewportSize;
void main()
{
vec3 direction = texture2D(normalTexture, gl_FragCoord.xy/viewportSize).xyz;
// direction = vec3(1., 0., 0) + direction; // fails as well!!
vec4 color = textureCube(cubemap, direction);
gl_FragColor = color;
}
Here are the values of the vector "direction" displayed as color, just as proof that they're not null!
And here is the result of the above shader (just the teapot).
While this code works perfectly:
uniform samplerCube cubemap;
uniform vec2 viewportSize;
varying vec3 T1;
void main()
{
vec4 color = textureCube(cubemap, T1);
gl_FragColor = color;
}
I can't think of any reason why my color would stay black whenever I access the sampler cube values!
Just for the sake of completeness, even though my cubemap works, here are the parameters used to set it up:
glGenTextures(1, &mTextureId);
glEnable(GL_TEXTURE_CUBE_MAP);
glBindTexture(GL_TEXTURE_CUBE_MAP, mTextureId);
// Set parameters
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
Unless I've missed something important somewhere, I'm thinking it might possibly be a driver bug.
I don't have a dedicated graphics card; I'm using the Intel Core i5 processor's integrated graphics.
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
Any idea why this might be occurring, or do you have a workaround?
Edit: Here is how my shader class binds the textures
4 textures to bind
Bind texture 3 on texture unit unit 0
Bind to shader uniform: 327680
Bind texture 4 on texture unit unit 1
Bind to shader uniform: 262144
Bind texture 5 on texture unit unit 2
Bind to shader uniform: 393216
Bind texture 9 on texture unit unit 3
Bind to shader uniform: 196608
Textures 3 and 4 are depth, 5 is the normal map, 9 is the cubemap.
And the code that does the binding:
void Shader::bindTextures() {
dinf << m_textures.size() << " textures to bind" << endl;
int texture_slot_index = 0;
for (auto it = m_textures.begin(); it != m_textures.end(); it++) {
dinf << "Bind texture " << it->first<< " on texture unit unit "
<< texture_slot_index << std::endl;
glActiveTexture(GL_TEXTURE0 + texture_slot_index);
glBindTexture(GL_TEXTURE_2D, it->first);
// Binds to the shader
dinf << "Bind to shader uniform: " << it->second << endl;
glUniform1i(it->second, texture_slot_index);
texture_slot_index++;
}
// Make sure that the texture unit which is left active is the number 0
glActiveTexture(GL_TEXTURE0);
}
m_textures is a map of texture ids to uniform ids.
You don't appear to be using separate texture units for the normal map and cubemap. Everything is defaulting to texture unit 0. You need something like:
uniform sampler2D norm_tex;
uniform samplerCube cube_tex;
in the shader. The texture lookups should just use the 'overloaded' texture function when using (3.2+) core profile. With (3.3+) you can also use sampler objects.
Generate and bind the textures to separate texture units:
... generate 'norm_tex' and 'cube_tex' ...
glActiveTexture(GL_TEXTURE0);
... bind 'norm_tex' and set parameters ...
glActiveTexture(GL_TEXTURE1);
... bind 'cube_tex' and set parameters ...
... glUseProgram(prog); ...
glUniform1i(glGetUniformLocation(prog, "norm_map"), 0);
glUniform1i(glGetUniformLocation(prog, "cube_map"), 1);
I figured it out, quite a stupid thing really.
I forgot to change my shader function to bind cubemaps as GL_TEXTURE_CUBE_MAP; everything was bound as GL_TEXTURE_2D!
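For anyone hitting the same thing, here is a sketch of what the fixed loop could look like if the map also stored each texture's target (the target field is an assumption on top of the original code):
// sketch: m_textures now maps texture id -> { uniform location, GLenum target }
void Shader::bindTextures() {
    int texture_slot_index = 0;
    for (auto it = m_textures.begin(); it != m_textures.end(); it++) {
        glActiveTexture(GL_TEXTURE0 + texture_slot_index);
        // bind with the right target: GL_TEXTURE_CUBE_MAP for the cubemap,
        // GL_TEXTURE_2D for the depth and normal textures
        glBindTexture(it->second.target, it->first);
        glUniform1i(it->second.location, texture_slot_index);
        texture_slot_index++;
    }
    // make sure that the texture unit which is left active is number 0
    glActiveTexture(GL_TEXTURE0);
}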
Thanks anyway!