Sampling a GL_TEXTURE_3D in the Fragment Shader

I have a GL_TEXTURE_3D of size 16x16x6. It has been populated with floats in a compute shader, and I am trying to sample it in the fragment shader.
To make it available to the fragment shader, I have this code just before the draw call:
//Set the active texture and bind it
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_3D, textureID);
//Add the sampler uniform
glUniform1i(glGetUniformLocation(textureID, "TextureSampler"), 0);
Then in the fragment shader itself:
uniform sampler3D TextureSampler;
Then to test that the texture has come through correctly:
vec4 test = texture(TextureSampler, ivec3(0,0,0));
color = vec4(1.0,0.0,0.0,1.0) * test.x; //or test.y, test.z, test.w
Based on this, it looks like every texel's value is (0.0, 0.0, 0.0, 1.0).
I can't work out why this is. At coords (0,0,0) I would expect the value to be either (16.0, 0.0, 0.0, 0.0) or (16.0, 16.0, 16.0, 16.0), based on what I set them to in the compute shader.
P.S. It might be worth noting that the values are being written to the texture correctly; I check in between the compute and fragment shader calls using glGetTexImage().
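For comparison, here is a minimal sketch of how this binding pattern is usually written; it is not the asker's code, and programID stands in for whatever program object the fragment shader was linked into. Note that glGetUniformLocation takes the program object rather than the texture ID:
//Sketch only: programID is assumed to be the linked shader program
glUseProgram(programID);
//Select texture unit 0 and bind the 3D texture to it
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_3D, textureID);
//The uniform location is queried from the program object, not the texture
GLint samplerLoc = glGetUniformLocation(programID, "TextureSampler");
glUniform1i(samplerLoc, 0); //the sampler reads from texture unit 0
In the fragment shader, an exact texel would then be read with texelFetch(TextureSampler, ivec3(0, 0, 0), 0), since texture() expects normalized vec3 coordinates in the 0..1 range.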

Related

Render from fbo texture to another within same fbo

I'm trying to set up deferred rendering, and have successfully managed to output data to the various g-buffer textures (position, normal, albedo, specular).
I am now attempting to sample from the albedo texture to a 5th colour attachment in the same fbo (for the purposes of further potential post process sampling), by rendering a full screen quad with simple texture coordinates.
I have checked that the vertex/texcoord data is good via Nsight, and confirmed that the shader can "see" the texture to sample from, but all that I see in the target texture is the clear colour when I examine it in the Nsight debugger.
At the moment, the shader is basically just a simple pass through shader:
vertex shader:
#version 430
in vec3 MSVertex;
in vec2 MSTexCoord;
out xferBlock
{
    vec3 VSVertex;
    vec2 VSTexCoord;
} outdata;

void main()
{
    outdata.VSVertex = MSVertex;
    outdata.VSTexCoord = MSTexCoord;
    gl_Position = vec4(MSVertex, 1.0);
}
fragment shader:
#version 430
layout (location = 0) uniform sampler2D colourMap;
layout (location = 0) out vec4 colour;
in xferBlock
{
    vec3 VSVertex;
    vec2 VSTexCoord;
} indata;

void main()
{
    colour = texture(colourMap, indata.VSTexCoord).rgba;
}
As you can see, there is nothing fancy about the shader.
The gl code is as follows:
//bind frame buffer for writing to texture #5
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glDrawBuffer(GL_COLOR_ATTACHMENT4); //5 textures total
glClear(GL_COLOR_BUFFER_BIT);
//activate shader
glUseProgram(second_stage_program);
//bind texture
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, fbo_buffers[2]); //third attachment: albedo
//bind and draw fs quad (two array buffers: vertices, and texture coordinates)
glBindVertexArray(quad_vao);
glDrawArrays(GL_TRIANGLES,0,6);
I'm trying to work out what is preventing rendering to the texture. I'm using a core context with OpenGL v4.3.
I've tried outputting a single white colour for all fragments, generating a colour from the texture coordinates (colour = vec4(indata.VSTexCoord, 1.0, 1.0);), and sampling the texture itself, as you see in the shader code, but nothing changes the resulting texture, which just shows the clear colour.
What am I doing wrong?

OpenGL: Trying to render 2 textures, but only 1 renders?

I am trying to render 2 textures in OpenGL 3.
I created two arrays of vertices of GLfloat type, generated and bound the buffers, etc.
Note: the texture loading function is working fine; I have already loaded a texture before. Now I just need 2 textures rendered at the same time.
Then I load my textures like this:
GLuint grass = texture.loadTexture("grass.bmp");
GLuint grassLoc = glGetUniformLocation(programID, "grassSampler");
glUniform1i(grassLoc, 0);
GLuint crate = texture.loadTexture("crate.bmp");
GLuint crateLoc = glGetUniformLocation(programID, "crateSampler");
glUniform1i(crateLoc, 1);
This is how I draw them:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, grass);
glDrawArrays(GL_TRIANGLES, 0, 6);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, crate);
glDrawArrays(GL_TRIANGLES, 2, 6);
Vertex shader:
#version 330 core
layout(location = 0) in vec3 grassPosition;
layout(location = 1) in vec2 grassUvPosition;
layout(location = 2) in vec3 cratePosition;
layout(location = 3) in vec2 crateUvPosition;
out vec2 grassUV;
out vec2 crateUV;
uniform mat4 MVP;
void main(){
    gl_Position = MVP * vec4(grassPosition, 1);
    gl_Position = MVP * vec4(cratePosition, 1);
    grassUV = grassUvPosition;
    crateUV = crateUvPosition;
}
Fragment shader:
#version 330 core
in vec2 grassUV;
in vec2 crateUV;
out vec3 grassColor;
out vec3 crateColor;
uniform sampler2D grassSampler;
uniform sampler2D crateSampler;
void main(){
    crateColor = texture(grassSampler, grassUV).rgb;
    grassColor = texture(crateSampler, crateUV).rgb;
}
Can anyone see what I am doing wrong?
EDIT:
I am trying to render 2 different textures on 2 different VAOs
You're kinda doing everything wrong; it's hard to pick out one thing.
Your shaders look like they're trying to take two positions and two texture coordinates, presumably generate two triangles, then sample from two textures and write colors to two different images.
That's not how it works. Unless you use a geometry shader (and please do not take that as an endorsement), your call to glDrawArrays(GL_TRIANGLES, 0, 6); will render exactly 2 triangles, no matter what your VS or FS says.
A vertex has only one position. Writing to gl_Position twice will simply overwrite the previous value, just like writing to any variable twice in C++ would. And the number of triangles to be rendered is defined by the number of vertices. A vertex shader cannot create vertices. It can't even destroy them (though, through gl_CullDistance, it can potentially cull whole primitives).
It is not clear what you mean by "I just need 2 textures rendered at the same time." Or more to the point, what "at the same time" refers to. I don't know what your code ought to be trying to do.
Given the data your vertex shader expects, it looks like you have two separate sets of triangles, with their own positions and texture coordinates. You want to render one set of triangles with one texture, then render another set with a different texture.
So... do that. Instead of having your VAOs send 2 positions and 2 texture coordinates, send just one. Your VS should also take one position/texcoord, and your FS should similarly take a single texture and write to a single output. The difference will be determined by what VAO is currently active and which texture is bound to texture unit 0 at the time you issue the render call.
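As a rough sketch of that approach (programID, grassVAO, and crateVAO are placeholder names, not taken from the question), the shader pair shrinks to a single attribute set and a single sampler, and the draw code just rebinds the VAO and the texture on unit 0 between calls:
Vertex shader (one position, one texture coordinate):
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 uvIn;
out vec2 UV;
uniform mat4 MVP;
void main(){
    gl_Position = MVP * vec4(position, 1.0);
    UV = uvIn;
}
Fragment shader (one sampler, one output):
#version 330 core
in vec2 UV;
out vec3 color;
uniform sampler2D textureSampler;
void main(){
    color = texture(textureSampler, UV).rgb;
}
Draw calls (same program and same sampler uniform for both draws; only the bound VAO and the texture on unit 0 change):
glUseProgram(programID);
glUniform1i(glGetUniformLocation(programID, "textureSampler"), 0);
glActiveTexture(GL_TEXTURE0);

glBindTexture(GL_TEXTURE_2D, grass);
glBindVertexArray(grassVAO);
glDrawArrays(GL_TRIANGLES, 0, 6);

glBindTexture(GL_TEXTURE_2D, crate);
glBindVertexArray(crateVAO);
glDrawArrays(GL_TRIANGLES, 0, 6);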
If you truly intend to write to different output images, the way your FS suggests, then change FBOs between rendering as well.
If however, your goal is to have the same triangle use two textures with two mappings, writing separate results to two images, you can do that too. The difference is that you only provide a single position, and textures must be bound to both texture units 0 and 1 when you issue your rendering command.
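A sketch of that second variant, under the assumption that the target FBO has two colour attachments selected with glDrawBuffers: the fragment shader keeps both samplers and routes each result to its own output location.
Fragment shader (two samplers, two colour outputs):
#version 330 core
in vec2 grassUV;
in vec2 crateUV;
layout(location = 0) out vec3 grassColor; //written to the first draw buffer
layout(location = 1) out vec3 crateColor; //written to the second draw buffer
uniform sampler2D grassSampler; //texture bound to unit 0
uniform sampler2D crateSampler; //texture bound to unit 1
void main(){
    grassColor = texture(grassSampler, grassUV).rgb;
    crateColor = texture(crateSampler, crateUV).rgb;
}
Selecting the two draw buffers on the C++ side:
GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);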

Purpose of uniform while using multiple textures

I am trying to understand this code:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture1"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture2"), 1);
This is the related shader code:
#version 330 core
...
uniform sampler2D ourTexture1;
uniform sampler2D ourTexture2;
void main()
{
    color = mix(texture(ourTexture1, TexCoord), texture(ourTexture2, TexCoord), 0.2);
}
So, as far as I understand, after activating GL_TEXTURE0 we bind texture1 to it. My understanding is that this binds texture1 to the first sampler2D. The part I don't understand is why we need the glUniform call.
It's an indirection. You choose the texture that is input at location GL_TEXTURE0, then you tell the uniform in your shader to fetch its texture from that same location. It's kind of like this (apologies for the diagram).
The first row is texture unit locations and the second row is shader uniform locations. You may want to bind texture unit 4 to shader sampler 2, for example.
(DatenWolf will be along in a moment to correct me :).
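As a concrete sketch of that indirection, reusing the ourTexture2 sampler from the question (someTexture is a placeholder texture object): bind a texture to unit 4, then point the sampler uniform at unit 4.
glActiveTexture(GL_TEXTURE4);              //select texture unit 4
glBindTexture(GL_TEXTURE_2D, someTexture); //attach a texture to unit 4
//Tell the sampler uniform which unit to read from (not which texture)
glUseProgram(ourShader.Program);
glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture2"), 4);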

How can I pass multiple textures to a single shader?

I am using freeglut, GLEW and DevIL to render a textured teapot using a vertex and fragment shader. This is all working fine in OpenGL 2.0 and GLSL 1.2 on Ubuntu 14.04.
Now, I want to apply a bump map to the teapot. My lecturer evidently doesn't brew his own tea, and so doesn't know they're supposed to be smooth. Anyway, I found a nice-looking tutorial on old-school bump mapping that includes a fragment shader that begins:
uniform sampler2D DecalTex; //The texture
uniform sampler2D BumpTex; //The bump-map
What they don't mention is how to pass two textures to the shader in the first place.
Previously I
//OpenGL cpp file
glBindTexture(GL_TEXTURE_2D, textureHandle);
//Vertex shader
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
//Fragment shader
gl_FragColor = color * texture2D(DecalTex,gl_TexCoord[0].xy);
so now I
//OpenGL cpp file
glBindTexture(GL_TEXTURE_2D, textureHandle);
glBindTexture(GL_TEXTURE_2D, bumpHandle);
//Vertex shader
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
gl_TexCoord[1] = gl_TextureMatrix[1] * gl_MultiTexCoord1;
//Fragment shader
gl_FragColor = color * texture2D(BumpTex,gl_TexCoord[0].xy);
//no bump logic yet, just testing I can use texture 1 instead of texture 0
but this doesn't work. The texture disappears completely (effectively the teapot is white). I've tried GL_TEXTURE_2D_ARRAY, glActiveTexture and a few other likely-seeming but fruitless options.
After sifting through the usual mixed bag of references to OpenGL and GLSL new and old, I've come to the conclusion that I probably need glGetUniformLocation. How exactly do I use this in the OpenGL cpp file to pass the already-populated texture handles to the fragment shader?
How to pass an array of textures with different sizes to GLSL?
Passing Multiple Textures from OpenGL to GLSL shader
Multiple textures in GLSL - only one works
(This is homework so please answer with minimal code fragments (if at all). Thanks!)
Failing that, does anyone have a tea cosy mesh?
It is very simple, really. All you need is to bind the sampler to some texture unit with glUniform1i. So for your code sample, assuming the two uniform samplers:
uniform sampler2D DecalTex; // The texture (we'll bind to texture unit 0)
uniform sampler2D BumpTex; // The bump-map (we'll bind to texture unit 1)
In your initialization code:
// Get the uniform variables location. You've probably already done that before...
decalTexLocation = glGetUniformLocation(shader_program, "DecalTex");
bumpTexLocation = glGetUniformLocation(shader_program, "BumpTex");
// Then bind the uniform samplers to texture units:
glUseProgram(shader_program);
glUniform1i(decalTexLocation, 0);
glUniform1i(bumpTexLocation, 1);
OK, shader uniforms set, now we render. To do so, you will need the usual glBindTexture plus glActiveTexture:
glActiveTexture(GL_TEXTURE0 + 0); // Texture unit 0
glBindTexture(GL_TEXTURE_2D, decalTexHandle);
glActiveTexture(GL_TEXTURE0 + 1); // Texture unit 1
glBindTexture(GL_TEXTURE_2D, bumpHandle);
// Done! Now you render normally.
And in the shader, you will use the textures samplers just like you already do:
vec4 a = texture2D(DecalTex, tc);
vec4 b = texture2D(BumpTex, tc);
Note: For techniques like bump-mapping, you only need one set of texture coordinates, since both textures use the same mapping and merely contain different data. So you should probably pass the texture coordinates as a vertex attribute.
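A minimal sketch of that note in GLSL 1.20 style (TexCoordIn is a hypothetical attribute name, fed with glVertexAttribPointer on the C++ side); the single varying tc is then used with both samplers exactly as in the fragment shader lines above:
#version 120
attribute vec2 TexCoordIn; //one shared texture coordinate per vertex
varying vec2 tc;           //passed on to the fragment shader
void main()
{
    tc = TexCoordIn;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}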
Instead of using:
glUniform1i(decalTexLocation, 0);
glUniform1i(bumpTexLocation, 1);
in your code,
you can have:
layout(binding=0) uniform sampler2D DecalTex; // The texture (we'll bind to texture unit 0)
layout(binding=1) uniform sampler2D BumpTex;  // The bump-map (we'll bind to texture unit 1)
in your shader. That also mean you don't have to query for the location.

OpenGL render to depth texture - red color?

I'm trying to implement shadow mapping, so I'm rendering to a depth texture. When I don't bind the framebuffer that contains the depth texture and use the default framebuffer instead, it outputs to the screen. However, instead of white for distant fragments, they're red.
This is my fragment shader:
#version 330
out float fragmentdepth;
uniform sampler2D inputTex;
void main(){
    fragmentdepth = gl_FragCoord.z;
}
Is this a problem? Shadow mapping isn't working, and I'd like to rule this out as the source of the problem.
Depth is a scalar value, and a scalar is interpreted by OpenGL as a single-component texture. A single-component value means that only the .x or .r component (whichever you use) is nonzero, so when it is displayed in a colour buffer only the red channel carries the depth, which is why distant fragments appear red rather than white.
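If the goal is only to preview the depth map on screen, one sketch (assuming a separate full-screen pass that samples the depth texture through a sampler2D, here reusing the inputTex name from the question and a hypothetical uv varying) is to replicate the single channel across RGB so it reads as grayscale:
#version 330
in vec2 uv;                 //texture coordinate from the full-screen quad
out vec4 colour;
uniform sampler2D inputTex; //the depth texture
void main(){
    float d = texture(inputTex, uv).r; //depth is stored in the red channel only
    colour = vec4(d, d, d, 1.0);       //splat it across RGB for a grayscale preview
}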