OpenGL: Reading from a texture unit currently bound to a framebuffer

I've encountered an issue trying to read data from a texture unit currently attached to the draw framebuffer. The errors disappear as long as I use a glTextureBarrier between the draw calls. However, I'm trying to reduce the number of draw calls, so this is a suboptimal solution. The OpenGL 4.5 specification (section 9.3, Feedback Loops Between Textures and the Framebuffer) states that
The mechanisms for attaching textures to a framebuffer object do not prevent a
one- or two-dimensional texture level, a face of a cube map texture level, or a
layer of a two-dimensional array or three-dimensional texture from being attached to the draw framebuffer while the same texture is bound to a texture unit. While this condition holds, texturing operations accessing that image will produce undefined results, as described at the end of section 8.14.
...
Specifically, the values of rendered fragments are undefined if any shader stage fetches texels and the same texels are written via fragment shader outputs, even if the reads and writes are not in the same draw call, unless any of the following exceptions apply:
The reads and writes are from/to disjoint sets of texels (after accounting for texture filtering rules).
There is only a single read and write of each texel, and the read is in
the fragment shader invocation that writes the same texel (e.g. using texelFetch2D(sampler, ivec2(gl_FragCoord.xy), 0);).
If a texel has been written, then in order to safely read the result a texel fetch must be in a subsequent draw call separated by the command void TextureBarrier( void );
TextureBarrier will guarantee that writes have completed and caches have
been invalidated before subsequent draw calls are executed.
I do this to 4 other textures without problems. For those textures I only do one read and one write of the same texel, so they are covered by the second exception. The texture causing the issue requires some filtering, so I need more than one read, from different texels, before writing. I figured this could be allowed through the first exception, so I put the data in an array texture, alternating which layer was read and which was written. The idea was that this would create a setting where the reads and writes go to disjoint sets of texels. It didn't help.
I also tried doing a glTextureBarrier only every other draw call to see whether it was the third draw call that was causing the issue. This gave different (still wrong) results from when I used barriers all the time and from when I didn't.
The draw calls each draw a single point at (0, 0, 0) that is expanded to a fullscreen quad in a geometry shader.
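For reference, a geometry shader along these lines can expand a single point into a fullscreen quad (a minimal sketch of the technique, not the exact shader used here):
#version 450
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
out vec2 TexCoord;
void main()
{
    // Emit the four corners of a fullscreen quad directly in clip space.
    gl_Position = vec4(-1.0, -1.0, 0.0, 1.0); TexCoord = vec2(0.0, 0.0); EmitVertex();
    gl_Position = vec4( 1.0, -1.0, 0.0, 1.0); TexCoord = vec2(1.0, 0.0); EmitVertex();
    gl_Position = vec4(-1.0,  1.0, 0.0, 1.0); TexCoord = vec2(0.0, 1.0); EmitVertex();
    gl_Position = vec4( 1.0,  1.0, 0.0, 1.0); TexCoord = vec2(1.0, 1.0); EmitVertex();
    EndPrimitive();
}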
Update
I am raytracing through volume data.
Framebuffer Setup
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, rayPositionTexture, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, rayDirectionTexture, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, IORTexture, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT3, resultTexture, 0);
glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT4, rayOffsetTexture, 0, 0);
glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT5, rayOffsetTexture, 0, 1);
Shader
...
uniform sampler2D RayPositionTexture;
uniform sampler2D RayDirectionTexture;
uniform sampler2DArray RayOffsetTexture;
uniform sampler2D IORTexture;
uniform sampler2D ColorTexture;
...
flat in uint offsetIndex;
layout(location = 0) out vec4 outRayPosition;
layout(location = 1) out vec4 outRayDirection;
layout(location = 2) out float outIOR;
layout(location = 3) out vec4 outColor;
layout(location = 4) out vec4 outRayOffsetA;
layout(location = 5) out vec4 outRayOffsetB;
...
vec3 filteredOffset(ivec2 Pixel)
{
    vec3 Offset = vec3(0.0);
    float TotalWeight = 0.0;
    for (int i = -KernelRadius; i <= KernelRadius; i++)
    {
        for (int j = -KernelRadius; j <= KernelRadius; j++)
        {
            ivec2 SampleOffset = ivec2(i, j);
            ivec3 SampleLocation = ivec3(Pixel + SampleOffset, offsetIndex);
            vec3 Sample = texelFetch(RayOffsetTexture, SampleLocation, 0).xyz;
            float Weight = KernelRadius > 0 ? gauss(SampleOffset) : 1.0f;
            Offset += Sample * Weight;
            TotalWeight += Weight;
        }
    }
    Offset = Offset / TotalWeight;
    return Offset;
}
...
//if (offsetIndex == 1)
outRayOffsetA = vec4(RayDirection.xyz - RayDirectionNew, 0.0);
//else
outRayOffsetB = vec4(RayDirection.xyz - RayDirectionNew, 0.0);
outIOR = IOR;
} else {
// if (offsetIndex == 1)
outRayOffsetA = vec4(0.0);
// else
outRayOffsetB = vec4(0.0);
outIOR = PreviousIOR;
//imageStore(VolumeBackTexture, Pixel, vec4(1.0));
}
outColor = Color;
...
Draw calls
GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3, GL_COLOR_ATTACHMENT4, GL_COLOR_ATTACHMENT5 };
glDrawBuffers(6, drawBuffers);
// Run shader
glBindVertexArray(pointVAO);
float distance = 0.0f;
for (int i = 0; distance < 1.73205080757f; i++)
{
glTextureBarrier();
glDrawArrays(GL_POINTS, i, 1);
distance += fUnitStep;
}
glBindVertexArray(0);
The above without the texture barrier gives the same (wrong) results as
glBindVertexArray(pointVAO);
glDrawArrays(GL_POINTS, 0, int(std::ceil(1.73205080757f / fUnitStep)));
glBindVertexArray(0);
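For reference, the layer ping-pong described above was driven per draw call roughly like this (a sketch of the idea only, not the actual project code; uOffsetReadLayer is a hypothetical uniform standing in for the offsetIndex input):
// Sketch: alternate which layer of rayOffsetTexture is written each draw,
// and tell the shader to read from the other one.
GLenum drawBuffersEven[6] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2,
                              GL_COLOR_ATTACHMENT3, GL_COLOR_ATTACHMENT4, GL_NONE };
GLenum drawBuffersOdd[6]  = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2,
                              GL_COLOR_ATTACHMENT3, GL_NONE, GL_COLOR_ATTACHMENT5 };
GLint readLayerLoc = glGetUniformLocation(program, "uOffsetReadLayer"); // hypothetical uniform
float distance = 0.0f;
for (int i = 0; distance < 1.73205080757f; i++)
{
    bool even = (i % 2) == 0;
    glDrawBuffers(6, even ? drawBuffersEven : drawBuffersOdd); // write layer 0 or layer 1
    glUniform1ui(readLayerLoc, even ? 1u : 0u);                // read the layer not being written
    glDrawArrays(GL_POINTS, i, 1);
    distance += fUnitStep;
}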

To answer this: the TextureBarrier is required here because I wanted to read texels that had just been written. That is only safe once the previous draw call has completed and the texture caches have been invalidated, which is exactly what TextureBarrier guarantees.

Related

Deferred Rendering not Displaying the GBuffer Textures

I'm trying to implement deferred rendering in an engine I'm developing as a personal learning project, and I cannot figure out what I'm doing wrong when rendering all the textures in the GBuffer to check whether the implementation is okay.
The thing is that I currently have a framebuffer with 3 color attachments for the different textures of the GBuffer (color, normal and position), which I initialize as follows:
glCreateFramebuffers(1, &id);
glBindFramebuffer(GL_FRAMEBUFFER, id);
std::vector<uint> textures;
textures.resize(3);
glCreateTextures(GL_TEXTURE_2D, 3, textures.data());
for(size_t i = 0; i < 3; ++i)
{
    glBindTexture(GL_TEXTURE_2D, textures[i]);
    if(i == 0) // For Color Buffer
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    else
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, textures[i], 0);
}
GLenum color_buffers[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers((GLsizei)textures.size(), color_buffers);
uint depth_texture;
glCreateTextures(GL_TEXTURE_2D, 1, &depth_texture);
glBindTexture(GL_TEXTURE_2D, depth_texture);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depth_texture, 0);
bool fbo_status = glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;
ASSERT(fbo_status, "Framebuffer Incompleted!");
glBindFramebuffer(GL_FRAMEBUFFER, 0);
This is not reporting any errors, and it seems to work since the framebuffer of the forward renderer renders properly. Then, when rendering, I run the following code after binding the framebuffer and clearing the color and depth buffers:
camera_buffer->Bind();
camera_buffer->SetData("ViewProjection", glm::value_ptr(viewproj_mat));
camera_buffer->SetData("CamPosition", glm::value_ptr(glm::vec4(view_position, 0.0f)));
camera_buffer->Unbind();
for(Entity& entity : scene_entities)
{
shader->Bind();
Texture* texture = entity.GetTexture();
BindTexture(0, texture);
shader->SetUniformMat4("u_Model", entity.transform);
shader->SetUniformInt("u_Albedo", 0);
shader->SetUniformVec4("u_Material.AlbedoColor", entity.AlbedoColor);
shader->SetUniformFloat("u_Material.Smoothness", entity.Smoothness);
glBindVertexArray(entity.VertexArray);
glDrawElements(GL_TRIANGLES, entity.VertexArray.index_buffer.count, GL_UNSIGNED_INT, nullptr);
// Shader, VArray and Textures Unbindings
}
So with this code I manage to render the 3 created textures using the ImGui::Image function, switching the texture index between 0, 1 and 2 as follows:
ImGui::Image((ImTextureID)(fbo->textures[0]), viewport_size, ImVec2(0, 1), ImVec2(1, 0));
Now, the color texture (at index 0) works perfectly, as the following image shows:
But when rendering the normal and position textures (indices 1 and 2), I get no result:
Does anybody see what I'm doing wrong? I've been at this for hours and I cannot see it. I ran this through RenderDoc and couldn't spot anything wrong; the textures displayed in RenderDoc are the same as in the engine.
The vertex shader I use when rendering the entities is the following:
layout(location = 0) in vec3 a_Position;
layout(location = 1) in vec2 a_TexCoord;
layout(location = 2) in vec3 a_Normal;
out IBlock
{
vec2 TexCoord;
vec3 FragPos;
vec3 Normal;
} v_VertexData;
layout(std140, binding = 0) uniform ub_CameraData
{
mat4 ViewProjection;
vec3 CamPosition;
};
uniform mat4 u_ViewProjection = mat4(1.0);
uniform mat4 u_Model = mat4(1.0);
void main()
{
vec4 world_pos = u_Model * vec4(a_Position, 1.0);
v_VertexData.TexCoord = a_TexCoord;
v_VertexData.FragPos = world_pos.xyz;
v_VertexData.Normal = transpose(inverse(mat3(u_Model))) * a_Normal;
gl_Position = ViewProjection * u_Model * vec4(a_Position, 1.0);
}
And the fragment shader is the following; they are both pretty simple:
layout(location = 0) out vec4 gBuff_Color;
layout(location = 1) out vec3 gBuff_Normal;
layout(location = 2) out vec3 gBuff_Position;
in IBlock
{
vec2 TexCoord;
vec3 FragPos;
vec3 Normal;
} v_VertexData;
struct Material
{
float Smoothness;
vec4 AlbedoColor;
};
uniform Material u_Material = Material(1.0, vec4(1.0));
uniform sampler2D u_Albedo, u_Normal;
void main()
{
gBuff_Color = texture(u_Albedo, v_VertexData.TexCoord) * u_Material.AlbedoColor;
gBuff_Normal = normalize(v_VertexData.Normal);
gBuff_Position = v_VertexData.FragPos;
}
It is not clear from the question what exactly is happening here, as a lot of GL state - both at the time of rendering into the G-buffer and at the time the G-buffer texture is rendered for visualization - is simply unknown. However, from the images given in the question, one cannot conclude that the actual color output for attachments 1 and 2 is not working.
One issue which comes to mind is alpha blending. The per-fragment operations that run after the fragment shader always work with RGBA values - although the value of the A channel only matters if you enabled blending and use a blend function which somehow depends on the source alpha.
If you declare a custom fragment shader output as float, vec2 or vec3, the remaining components have undefined values (an undefined value, not undefined behavior). This is not a problem unless some other operation depends on those values.
What we also have here is a GL_RGBA16F output format (which is the right choice, because none of the 3-component RGB formats are required as color-renderable by the spec).
What might happen here is either:
Alpha blending is already turned on during rendering into the g-buffer. The fragment shader's alpha output happens to be zero, so that it appears as 100% transparent and the contents of the texture are not changed.
Alpha blending is not used during rendering into the g-buffer, so the correct contents end up in the texture; the alpha channel just happens to end up all zeros. The texture might then be visualized with alpha blending enabled, ending up as a 100% transparent view.
If it is the first option, turn off blending when rendering into the g-buffer. It would not work with deferred shading anyway. You might still run into the second option then.
If it is the second option, there is no issue at all - the lighting passes which follow will read the data they need (and ultimately, you will want to put useful information into the alpha channel so as not to waste it and to be able to reduce the number of attachments). It is just your visualization (which I assume is for debug purposes only) that is wrong. You can try to fix the visualization.
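A minimal sketch of the first fix, assuming blending is simply toggled around the G-buffer pass (the FBO handle name is illustrative):
// Sketch: keep blending off while filling the G-buffer.
glBindFramebuffer(GL_FRAMEBUFFER, gbuffer_fbo);  // hypothetical handle name
glDisable(GL_BLEND);
// ... render scene_entities into the G-buffer ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Re-enable blending afterwards only where later passes actually need it.
On the shader side, declaring the G-buffer outputs as vec4 and writing an explicit 1.0 into the alpha channel would also make the debug visualization independent of the blend state.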
As a side note: storing the world space position in the G-buffer is a huge waste of bandwidth. All you need to reconstruct the world space position is the depth value and the inverses of your view and projection matrices. Also, storing world space position in GL_RGB16F will very easily run into precision issues if you move your camera away from the world space origin.
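As a rough illustration of that reconstruction (a sketch only; u_DepthTex and u_InvViewProjection are assumed names, and this assumes the default [0,1] depth range):
// Reconstruct world-space position from the depth buffer in a lighting pass.
uniform sampler2D u_DepthTex;          // depth attachment sampled as a texture
uniform mat4 u_InvViewProjection;      // inverse(projection * view)
in vec2 v_TexCoord;

vec3 ReconstructWorldPos()
{
    float depth = texture(u_DepthTex, v_TexCoord).r;              // window-space depth in [0,1]
    vec4 ndc = vec4(vec3(v_TexCoord, depth) * 2.0 - 1.0, 1.0);    // back to NDC in [-1,1]
    vec4 world = u_InvViewProjection * ndc;                       // unproject
    return world.xyz / world.w;                                   // perspective divide
}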

Cube mapping does not work correctly using OpenGL/GLSL

I'm seeing strange behaviour with the cube mapping technique: all my fragment shaders return black, so the result is a black screen.
Here's the situation:
I have a scene composed only of a simple cube mesh (the skybox) and a trackball camera.
Now let's examine the resources (the six separate textures):
Here are the image details:
So as you can see, these textures need to be loaded in GL_RGB format.
Now, for the initialization part, let's take a look at the client C++ texture loading code (I use the DevIL library to load my images):
glGenTextures(1, &this->m_Handle);
glBindTexture(this->m_Target, this->m_Handle);
{
    glTexParameteri(this->m_Target, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(this->m_Target, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(this->m_Target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(this->m_Target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(this->m_Target, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    {
        for (uint32_t idx = 0; idx < this->m_SourceFileList.size(); idx++)
        {
            ilLoadImage((const wchar_t*)this->m_SourceFileList[idx].c_str()); // IMAGE LOADING
            {
                uint32_t width = ilGetInteger(IL_IMAGE_WIDTH);
                uint32_t height = ilGetInteger(IL_IMAGE_HEIGHT);
                uint32_t bpp = ilGetInteger(IL_IMAGE_BPP);
                {
                    char *pPixelData = (char*)ilGetData();
                    glTexImage2D(this->m_TargetList[idx], 0, GL_RGB8,
                                 width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pPixelData);
                }
            }
        }
    }
}
glBindTexture(this->m_Target, 0);
Concerning the main loop, here's the information I send to the shader program (texture and matrix data):
//TEXTURE DATA
scene::type::MaterialPtr pSkyBoxMaterial = pBatch->GetMaterial();
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_CUBE_MAP,
pSkyBoxMaterial->GetDiffuseTexture()->GetHandle());
this->SetUniform("CubeMapSampler", 1);
//MATRIX DATA
this->SetUniform("ModelViewProjMatrix", pBatch->GetModelViewProjMatrix());
As you can see, the cube map texture is bound to texture unit 1.
Now here's the vertex shader:
#version 400
/*
** Vertex attributes.
*/
layout (location = 0) in vec3 VertexPosition;
/*
** Uniform matrix buffer.
*/
uniform mat4 ModelViewProjMatrix;
/*
** Outputs.
*/
out vec3 TexCoords;
/*
** Vertex shader entry point.
*/
void main(void)
{
TexCoords = VertexPosition;
gl_Position = ModelViewProjMatrix * vec4(VertexPosition, 1.0f);
}
And finally the fragment shader:
#version 400
/*
** Output color value.
*/
layout (location = 0) out vec4 FragColor;
/*
** Vertex inputs.
*/
in vec3 TexCoords;
/*
** Texture uniform.
*/
uniform samplerCube CubeMapSampler;
/*
** Fragment shader entry point.
*/
void main(void)
{
vec4 finalColor = texture(CubeMapSampler, TexCoords);
FragColor = finalColor;
}
So the program compiles, and when executed it shows the following result:
I want to point out that I use the NVIDIA Nsight debugger, and I first want to show that the cube map is correctly loaded on the GPU:
As you can see, and as written in the code above, my texture is an RGB texture bound to texture unit 1, and its type is GL_TEXTURE_CUBE_MAP. So up to here everything seems correct!
And if I replace in the fragment shader the line:
vec4 finalColor = texture(CubeMapSampler, TexCoords);
By the line:
vec4 finalColor = vec4(TexCoords, 1.0f);
I get the following result (rendering the model-space vertex coordinates directly as color), without wireframe:
And the same result with wireframe:
I also want to point out that the line:
std::cout << glGetError() << std::endl;
always returns 0, so I have no errors!
So I think these last two pictures show that the matrix information is correct and the vertex coordinates are correct too (moreover, I use a trackball camera, and when I move around the scene I can recognize the cube's shape). So for me, all the information I receive in my shader program is correct except for the cube sampler! I think the problem comes from this sampler. However, as you can see above, the cube map seems to be loaded correctly.
I am really stuck on this. I don't understand why all the fragment shaders return a #000000 color (I also tried using the RGBA format, but the result is the same).

Why is texelFetch always returning 0 for a 1D single channel texture?

I'm using a 1D texture to store single-channel integer data which needs to be accessed by the fragment shader. Coming from the application, the integer data type is GLubyte and needs to be accessed as an unsigned integer in the shader. Here is how the texture is created (note that there are other texture units being bound after, which I'm hoping are unrelated to the problem):
GLuint mTexture[2];
std::vector<GLubyte> data;
///... populate with 289 elements, all with value of 1
glActiveTexture(GL_TEXTURE0);
{
glGenTextures(1, &mTexture[0]);
glBindTexture(GL_TEXTURE_1D, mTexture[0]);
{
glTexImage1D(GL_TEXTURE_1D, 0, GL_R8UI, data.size(), 0,
GL_RED_INTEGER, GL_UNSIGNED_BYTE, &data[0]);
}
glBindTexture(GL_TEXTURE_1D, 0);
}
glActiveTexture(GL_TEXTURE1);
{
//Setup the other texture using mTexture[1]
}
The fragment shader looks like this:
#version 420 core
smooth in vec2 tc;
out vec4 color;
layout (binding = 0) uniform usampler1D buffer;
layout (binding = 1) uniform sampler2DArray sampler;
uniform float spacing;
void main()
{
vec3 pos;
pos.x = tc.x;
pos.y = tc.y;
if (texelFetch(buffer, 0, 0).r == 1)
pos.z = 3.0;
else
pos.z = 0.0;
color = texture(sampler, pos);
}
The value returned from texelFetch in this example basically dictates which texture layer to use from the 2D array for the final output color. I want it to return the value 1, but it always returns 0 and hits the else clause in the fragment shader. Using NVIDIA's Nsight tool, I can see the texture does contain the value 1, 289 times:

How to achieve depth value invariance between two passes?

I have two geometry passes. In the first pass, I write the fragment's depth value to a float texture with glBlendEquation(GL_MIN), similar to dual depth peeling. In the second pass I use it for early depth testing in the fragment shader.
However, for some fragments the depth test fails unless I slightly offset the min depth value (eps below):
Setting up the texture:
glBindTexture(GL_TEXTURE_2D, offscreenDepthMapTextureId);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RG32F, screenWidth, screenHeight);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, offscreenDepthMapFboId);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
offscreenDepthMapTextureId, 0);
Note that the texture is used as a color attachment, not the depth attachment. I have disabled GL_DEPTH_TEST in both passes, since I can't use it in the final pass.
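For context, the min-blending described for the first pass would be set up roughly like this (a sketch based on the description above, not code from the question):
// Sketch: let blending keep the per-pixel minimum written by the first pass.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, offscreenDepthMapFboId);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);   // initialize to the far plane
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendEquation(GL_MIN);                 // destination keeps min(src, dst) per component
// ... draw the first geometry pass ...
glBlendEquation(GL_FUNC_ADD);            // restore the default equation
glDisable(GL_BLEND);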
Vertex shader, used in both passes:
#version 430
layout(location = 0) in vec4 position;
uniform mat4 mvp;
invariant gl_Position;
void main()
{
gl_Position = mvp * position;
}
First pass fragment shader, with offscreenDepthMapFboId bound, so it writes the depth as color to the texture. Blending makes sure that only the min value ends up in the red component.
#version 430
out vec4 outputColor;
void main()
{
outputColor.rg = vec2(gl_FragCoord.z, -gl_FragCoord.z);
}
Second pass, writing to the default framebuffer. The texture is used as depthTex.
#version 430
out vec4 outputColor;
uniform sampler2D depthTex;
void main()
{
vec2 zwMinMax = texelFetch(depthTex, ivec2(gl_FragCoord.xy), 0).rg;
float zwMin = zwMinMax.r;
float zwMax = -zwMinMax.g;
float eps = 0;// doesn't work
//float eps = 0.0000001; // works
if (gl_FragCoord.z > zwMin + eps)
discard;
outputColor = vec4(1, 0, 0, 1);
}
The invariant qualifier in the vertex shader didn't help. Using eps = 0.0000001 seems like a crude workaround as I can't be sure that this particular value will always work. How do I get the two shaders to produce the exact same gl_FragCoord.z? Or is the depth texture the problem? Wrong format? Are there any conversions happening that I'm not aware of?
Have you tried glDepthFunc(GL_EQUAL) instead of your in-shader comparison approach?
Are you reading and writing to the same depth texture in the same pass?
I'm not sure whether gl_FragCoord's format and precision are correlated with those of the depth buffer. It might also be a driver glitch, but glDepthFunc(GL_EQUAL) should work as expected.
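If a real depth attachment were available in the second pass, that suggestion would look roughly like this (a sketch of the idea, not code from the question):
// Sketch: second pass relying on an equality depth test instead of discard.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_EQUAL);    // only fragments whose depth matches the stored value pass
glDepthMask(GL_FALSE);    // the depth buffer already holds the final values
// ... second pass draw calls ...
glDepthFunc(GL_LESS);     // restore defaults
glDepthMask(GL_TRUE);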
Do not use gl_FragCoord.z as the depth value.
In vertex shader:
out float depth;
void main()
{
...
depth = ((gl_DepthRange.diff * (gl_Position.z / gl_Position.w)) +
gl_DepthRange.near + gl_DepthRange.far) / 2.0;
}
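Presumably both passes would then use this interpolated depth varying instead of gl_FragCoord.z; a minimal sketch of the second-pass fragment shader under that assumption (not code from the answer):
#version 430
in float depth;                     // the custom depth computed in the vertex shader above
out vec4 outputColor;
uniform sampler2D depthTex;
void main()
{
    // Compare the same computed quantity that the first pass wrote out.
    float zwMin = texelFetch(depthTex, ivec2(gl_FragCoord.xy), 0).r;
    if (depth > zwMin)
        discard;
    outputColor = vec4(1, 0, 0, 1);
}
The first pass would correspondingly write vec2(depth, -depth) to its color output instead of gl_FragCoord.z.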

opengl 3d texture issue

I'm trying to use a 3D texture in OpenGL to implement volume rendering. Each voxel has an RGBA colour value and is currently rendered as a screen-facing quad (for testing purposes). I just can't seem to get the sampler to give me a colour value in the shader. The quads always end up black. When I change the shader to generate a colour (based on the xyz coords), it works fine. I'm loading the texture with the following code:
glGenTextures(1, &tex3D);
glBindTexture(GL_TEXTURE_3D, tex3D);
unsigned int colours[8];
colours[0] = Colour::AsBytes<unsigned int>(Colour::Blue);
colours[1] = Colour::AsBytes<unsigned int>(Colour::Red);
colours[2] = Colour::AsBytes<unsigned int>(Colour::Green);
colours[3] = Colour::AsBytes<unsigned int>(Colour::Magenta);
colours[4] = Colour::AsBytes<unsigned int>(Colour::Cyan);
colours[5] = Colour::AsBytes<unsigned int>(Colour::Yellow);
colours[6] = Colour::AsBytes<unsigned int>(Colour::White);
colours[7] = Colour::AsBytes<unsigned int>(Colour::Black);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, 2, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, colours);
The colours array contains the correct data, i.e. the first four bytes have values 0, 0, 255, 255 for blue. Before rendering I bind the texture to the 2nd texture unit like so:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_3D, tex3D);
And render with the following code:
shaders["DVR"]->Use();
shaders["DVR"]->Uniforms["volTex"].SetValue(1);
shaders["DVR"]->Uniforms["World"].SetValue(Mat4(vl_one));
shaders["DVR"]->Uniforms["viewProj"].SetValue(cam->GetViewTransform() * cam->GetProjectionMatrix());
QuadDrawer::DrawQuads(8);
I have used these classes for setting shader params before and they work fine. The quaddrawer draws eight instanced quads. The vertex shader code looks like this:
#version 330
layout(location = 0) in vec2 position;
layout(location = 1) in vec2 texCoord;
uniform sampler3D volTex;
ivec3 size = ivec3(2, 2, 2);
uniform mat4 World;
uniform mat4 viewProj;
smooth out vec4 colour;
void main()
{
vec3 texCoord3D;
int num = gl_InstanceID;
texCoord3D.x = num % size.x;
texCoord3D.y = (num / size.x) % size.y;
texCoord3D.z = (num / (size.x * size.y));
texCoord3D /= size;
texCoord3D *= 2.0;
texCoord3D -= 1.0;
colour = texture(volTex, texCoord3D);
//colour = vec4(texCoord3D, 1.0);
gl_Position = viewProj * World * vec4(texCoord3D, 1.0) + (vec4(position.x, position.y, 0.0, 0.0) * 0.05);
}
Uncommenting the line where I set the colour value equal to the texcoord works fine and makes the quads coloured. The fragment shader is simply:
#version 330
smooth in vec4 colour;
out vec4 outColour;
void main()
{
outColour = colour;
}
So my question is: what am I doing wrong? Why is the sampler not getting any colour values from the 3D texture?
[EDIT]
Figured it out but can't self answer (new user):
As soon as I posted this I figured it out, so I'll put the answer up to help anyone else (it's not specifically a 3D texture issue, and I've fallen afoul of it before, d'oh!). I didn't generate mipmaps for the texture, and the default magnification/minification filters weren't set to either GL_LINEAR or GL_NEAREST. Boom! No textures. The same thing happens with 2D textures.
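In code, the fix boils down to something like this right after the glTexImage3D call (either use non-mipmapped filters or actually generate the mipmaps):
// Option 1: filters that do not require mipmaps.
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Option 2: keep the default mipmapped minification filter and generate mipmaps.
// glGenerateMipmap(GL_TEXTURE_3D);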