Issue with using Sampler2D with a Shader

! I am aware that there are easier ways to draw to the screen than this, but I need to do it this specific way !
I am drawing to a texture through an FBO.
I am then using a shader to redraw it onto the screen, passing the texture as a sampler2D and having the fragment shader set gl_FragColor to the sampler's color at the fragment's position.
The issue I am having is that the display buffer (which the shader draws the FBO's texture to) is only solid blue, even though I draw a white square on a blue background to the FBO.
My Code:
Main Render Loop:
while (!Display.isCloseRequested()) {
    // Draw a blue background and a white square onto the FBO
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboID);
    glColor3f(0, 0, 1);
    drawQuad(0, 0, WIDTH, HEIGHT);
    glColor3f(1, 1, 1);
    drawQuad(0, 0, 50, 50);

    // Try to draw the texture from the previous FBO onto the display
    // buffer with a fragment shader; however, it only draws a blue
    // background, no white square.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texID);
    glUseProgram(shaderProgram);
    glUniform1i(glGetUniformLocation(shaderProgram, "tex"), 0);
    glUniform1f(glGetUniformLocation(shaderProgram, "width"), WIDTH);
    glUniform1f(glGetUniformLocation(shaderProgram, "height"), HEIGHT);
    glBegin(GL_QUADS); {
        glVertex2f(0, 0);
        glVertex2f(0, HEIGHT);
        glVertex2f(WIDTH, HEIGHT);
        glVertex2f(WIDTH, 0);
    } glEnd();
    glUseProgram(0);
    Display.update();
}
My Shader:
uniform sampler2D tex;
uniform float width;
uniform float height;
void main() {
    vec4 color = texture2D(tex, gl_FragCoord.xy / vec2(width, height));
    gl_FragColor = color;
}
Init method for the fbo:
texID=glGenTextures();
fboID=glGenFramebuffersEXT();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboID);
glBindTexture(GL_TEXTURE_2D, texID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, WIDTH, HEIGHT, 0, GL_RGBA, GL_INT, (java.nio.ByteBuffer) null);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,GL_COLOR_ATTACHMENT0_EXT,GL_TEXTURE_2D, texID, 0);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
I think one reason this could be happening is that I am not correctly passing the sampler2D to the shader, but I am not sure how to fix it.
If anyone could tell me why this is happening or how to fix it (or even just point me in the right direction), it would be much appreciated!
(Sorry for any bad English!)

You have transposed your calls to glUniform1i (...) and glUseProgram (...).
At the time that glUniform1i(glGetUniformLocation(shaderProgram, "tex"), 0); is executed, the currently active program is 0 and that should in fact be generating the following error:
GL_INVALID_OPERATION is generated if there is no current program object.
If you swap the two lines mentioned above, that should fix your problem. In the future you should make a point of checking glGetError (...) when something does not work.
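For example, a minimal error-checking helper (sketched here in C-style GL; LWJGL exposes the same glGetError function) could look like this:
#include <GL/gl.h>
#include <stdio.h>
// Drain the GL error queue and report everything pending.
void checkGLError(const char* where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error 0x%04X at %s\n", err, where);
}
Calling checkGLError("glUniform1i") right after the uniform upload would have reported the GL_INVALID_OPERATION immediately.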
UPDATE: Now that the original problem is fixed, you are not setting the texture minification filter correctly.
To set enumerants, you must use glTexParameteri (...). glTexParameterf (...) will interpret the value passed as a floating-point number, while GL_LINEAR (0x2601) has type GLenum (a 32-bit unsigned integer). Fortunately, all core OpenGL enums only use the lower 16 bits, so they can be expressed precisely as 32-bit floats (which can represent all integers up to 2^24), but you can see why you would not want to convert an integer constant to a float.
Therefore, the following line:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Needs to be:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
UPDATE 2: You must use normalized texture coordinates when using texture2D (...).
gl_FragCoord.xy has the range [0, width] and [0, height], which is outside the normalized range for any framebuffer larger than 1x1. To fix that, you must divide by width and height.
texelFetch (...) allows you to use unnormalized coordinates, but it requires GLSL 1.30 and you would have to cast gl_FragCoord.xy to ivec2 to use it. It may also prove inconvenient if you ever draw into an FBO that has different dimensions than your window.
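For reference, a minimal GLSL 1.30 sketch of the texelFetch variant (reusing the tex uniform from above) might look like:
#version 130
uniform sampler2D tex;
out vec4 fragColor;
void main() {
    // texelFetch takes integer texel coordinates plus an explicit mipmap
    // level, so no division by width/height is required.
    fragColor = texelFetch(tex, ivec2(gl_FragCoord.xy), 0);
}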

Related

Send Depthbuffer and TextureBuffer to same Fragment OpenGL3

It might be a stupid and trivial question, but since I'm kinda new to OpenGL, I don't understand how to send the color buffer and depth buffer coming from a single FBO to the fragment shader.
My FBO is generated with one color texture and one depth texture:
glGenTextures(1, &texture_color);
glBindTexture(GL_TEXTURE_2D, texture_color);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glGenTextures(1, &texture_depth);
glBindTexture(GL_TEXTURE_2D, texture_depth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
and in my fragment shader I would like
uniform sampler2D colorTexture;
uniform sampler2D depthTexture;
to be filled with both the color and the depth texture...
I can send one or the other with glBindTexture (and it works well), but I can't manage to send both at the same time. Does somebody know how to do it?
Thanks!
OK, I might not be a giant, since the solution is easy and I was just doing it wrong.
I was trying to use glActiveTexture(GL_DEPTH) followed by a binding to my texture (glBindTexture(GL_TEXTURE_2D, visibilityFBO.getDepthTexture());), thinking it was normal to use GL_DEPTH.
But I was wrong, and it works using:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, visibilityFBO.getColorTexture());
glUniform1i(glGetUniformLocation(visibilityPassShader.Program, "positionTexture"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, visibilityFBO.getDepthTexture());
glUniform1i(glGetUniformLocation(visibilityPassShader.Program, "depthTexture"), 1);
It's the same as before, but the depth attachment is treated as a regular texture, since I used glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0); to produce my depth texture. So I just have to activate the next texture unit and bind it to be able to use it.
In the fragment shader, it is still:
uniform sampler2D positionTexture;
uniform sampler2D depthTexture;
In that order.
Using OpenGL 4.2 or later (the binding layout qualifier requires GLSL 4.20), you can write the following in the fragment shader:
layout (binding = 0) uniform sampler2D positionTexture;
layout (binding = 1) uniform sampler2D depthTexture;
and skip the glUniform1i calls in the application code.
Hope it can help someone!

OpenGL - Unable to write to frame buffer (or read it properly)

So I've been trying to get shadow mapping to work, but I was unsuccessful, and I am now trying to simply write anything to the frame buffer and then render it to a quad as a texture. I've been looking at this small piece of code for 10 hours, so I thought it might finally be time to ask for some help. Do let me know if you need any more information or if something is unclear.
I started by following this tutorial. After completing it two times, and straight copy-pasting it a third time, I gave up and started to look elsewhere for information. My current code is a bit of a mess, but I have tried to extract what is crucial.
Here is how I set up my FBO with a TEXTURE2D (_shadowMap and _shadowMapFBO are private variables in the class acting as the main program):
glGenTextures(1, &_shadowMap);
glBindTexture(GL_TEXTURE_2D, _shadowMap);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, SHADOW_WIDTH, SHADOW_HEIGHT, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glGenFramebuffers(1, &_shadowMapFBO);
glBindFramebuffer(GL_FRAMEBUFFER, _shadowMapFBO);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _shadowMap, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
{
    std::cout << "Frame buffer failed" << std::endl;
    system("pause");
    exit(1);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
I then do the first render pass in the main loop:
glViewport(0, 0, SHADOW_WIDTH, SHADOW_HEIGHT);
glBindFramebuffer(GL_FRAMEBUFFER, _shadowMapFBO);
glClear(GL_DEPTH_BUFFER_BIT);
_shadowShader.bind();
glBindVertexArray(_cubeVAO);
glDrawElements(GL_TRIANGLES, _numberOfIndicesCube, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(_planeVAO);
glDrawElements(GL_TRIANGLES, _numberOfIndicesPlane, GL_UNSIGNED_SHORT, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDrawBuffer(GL_BACK);
glReadBuffer(GL_BACK);
Initially I wanted to create a shadow map, but at this point I'm simply trying to write an explicit value to the entire texture attached to the frame buffer, using the following shader pair (the one used by _shadowShader):
Vertex
#version 450
in layout(location=0) vec3 position;
uniform mat4 mvp;
void main()
{
    gl_Position = mvp * vec4(position, 1.0f);
}
Fragment
#version 450
out float depth;
void main()
{
    depth = 0.0f;
}
I then finish up by trying to display the texture on a quad:
glViewport(0, 0, _screenWidth, _screenHeight);
glClearColor(0.7f, 0.5f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
_testShader.bind();
glBindTexture(GL_TEXTURE_2D, _shadowMap);
glBindVertexArray(_quadVAO);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
Here is the fragment shader for the quad:
#version 450
in vec2 textureCoordinates;
uniform sampler2D _shadowMap;
out vec4 color;
void main()
{
    float depth = texture2D(_shadowMap, textureCoordinates).r;
    color = texture2D(_shadowMap, textureCoordinates);
}
But to my continuing frustration, the quad appears white although I output 0.0f (black) from the fragment shader... I know the quad is able to display a texture, as I've been able to render a normal texture displaying a .jpg onto it. So, this is where I'm at now; any ideas, pointers, thoughts, motivational words, etc.?
EDIT I: So after some more time I have at least figured out that the texture is loaded correctly and that I'm able to read from it. I did this by actually loading an image onto it during initialization and then not writing to it. The quad then displays the texture correctly. So there must be something wrong with how I'm writing to it.
EDIT II: So I got it working; I sat down with a TA and copy-pasted the code (yeah, I know... doesn't get much better than that) and it now works. Thanks Bart :)
Your shadow pass fragment shader is just wrong:
#version 450
out float depth;
void main()
{
    depth = 0.0f;
}
This just declares a single-channel color output, which will be written to the color attachment (which you don't have). Since you want to write to the depth attachment of your FBO, you have to use the builtin gl_FragDepth output variable. If you don't write it, the GL will write the linearly interpolated window-space depth value to the depth buffer.
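If you really wanted to force a constant depth of 0.0, a minimal sketch of the corrected shadow-pass fragment shader would be:
#version 450
void main()
{
    // gl_FragDepth is the builtin depth output; writing it overrides the
    // fixed-function window-space depth for this fragment.
    gl_FragDepth = 0.0f;
}
For an actual shadow pass you would normally leave main() empty and let the GL write the interpolated depth on its own.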
Note that you typically clear the depth buffer to 1.0, and with a typical perspective projection your objects' depth values end up very close to 1.0 (say 0.99-something), so it is very likely that a result converted to just 8-bit grayscale will look all white.

OpenGL fragment shader in a texture

I have a simple RGBA texture that I will project on a rectangle/quad in OpenGL.
However, I want to do some operations on the RGB pixels of that texture; e.g., I want the displayed color to be some function of the RGB pixels of the original image.
My questions are: can I apply a fragment shader to a 2D texture? And if I can, how do I access the RGB values of the original texture image?
Any help would be appreciated.
You can certainly do this. Render the quad as usual and send the texture to the fragment shader as a sampler2D uniform.
Inside the fragment shader, you can then call either texture or texelFetch: texture samples your texture through its filter settings, while texelFetch looks up a single texel at an integer coordinate without performing any filtering or interpolation.
Here is an example of what your fragment shader might look like:
Fragment shader
#version 330 core
uniform vec2 resolution;
uniform sampler2D myTexture;
out vec3 color;
void main()
{
    vec2 pos = gl_FragCoord.xy / resolution.xy;
    pos.y = 1.0 - pos.y; // pos is already normalized, so flip with 1.0, not resolution.y
    color = texture(myTexture, pos).xyz;
}
And then on the application side:
Initialization
glGenTextures(1, &m_textureID);
glBindTexture(GL_TEXTURE_2D, m_textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_imageWidth, m_imageHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, pImage);
m_textureUniformID = glGetUniformLocation(m_programID, "myTexture");
Render loop
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_textureID);
glUniform1i(m_textureUniformID, 0);

OpenGL & GLSL: Greyscale texture not showing

I'm trying to show a greyscale texture on the screen. I create my texture via
glGenTextures(1, &heightMap);
glBindTexture(GL_TEXTURE_2D, heightMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, colorData);
colorData is a float[512*512] with values between 0.0 and 1.0.
When rendering, I use:
glBindTexture(GL_TEXTURE_2D, heightMap);
glUniform1i(shader.GetUniformLocation("textureSampler"), 0);
shader.GetUniformLocation is a function of a library we use at university. It is essentially the same as glGetUniformLocation(shader, "textureSampler"), so don't be confused by it.
I render two triangles via triangle strip. My fragment shader is:
#version 330
layout(location = 0) out vec4 frag_color;
in vec2 texCoords;
uniform sampler2D textureSampler;
void main()
{
    frag_color = vec4(texture(textureSampler, texCoords).r, 0, 0, 1);
}
I know the triangles are rendered correctly (e.g. if I use vec4(1.0, 0, 0, 1) for frag_color, I get a completely red screen). However, with the line above, I only get a completely black screen. Every texture value seems to be 0.0.
Does anyone have an idea, what I have done wrong? Are there mistakes in that few lines of code or are these completely correct and the error is somewhere else?
As one of the comments below says, setting glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); and glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); solves the problem. :)
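For completeness, here is a sketch of the full texture setup with the filters in place. Without them, the default minification filter is GL_NEAREST_MIPMAP_LINEAR, and since no mipmaps were uploaded the texture is incomplete and samples as black:
glGenTextures(1, &heightMap);
glBindTexture(GL_TEXTURE_2D, heightMap);
// Required: the texture has no mipmaps, so a non-mipmap filter must be set.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, colorData);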

GLSL sampler2DRect and single channel (GL_RED) data

I have pixel map data: 1 channel, 8 bits per pixel.
I have the pixel map's width and height.
I'm trying to submit the pixmap data to a fragment shader.
I'm using OpenGL 3 with a VAO and VBO.
My setup:
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_RECTANGLE, texture);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RED, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, data);
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, ..., vertices, GL_STATIC_DRAW);
...create program...
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "image"), 0);
glDrawArrays(GL_TRIANGLE_FAN, 0, ...);
And fragment shader:
uniform sampler2DRect image;
varying vec2 varying_texcoord;
void main() {
    vec4 sample = texture2DRect(image, varying_texcoord);
    gl_FragColor = vec4(1.0, 0.0, 0.0, sample.a);
}
gl_FragColor should paint pixels lighter or darker red depending on the sample.a value; however, it seems that sample.a is always 1.0, since I'm getting pure red #ff0000.
I think the problem is in glTexImage2D, isn't it?
Please assume program and data are valid.
I believe the answer you are looking for is in the documentation for glTexImage2D. It says:
"GL_RED:
Each element is a single red component. The GL converts it to floating point and assembles it into an RGBA element by attaching 0 for green and blue, and 1 for alpha. Each component is then multiplied by the signed scale factor GL_c_SCALE, added to the signed bias GL_c_BIAS, and clamped to the range [0,1]."
You're asking for the .a component, which is always 1. You need to use the component the texture actually contains data in, which is .r.
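In other words, the last line of the fragment shader should read:
// sample.r carries the GL_RED data; sample.a is the constant 1.0 filler.
gl_FragColor = vec4(1.0, 0.0, 0.0, sample.r);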
Hope this helps!
The question is very old but the answer could help others.
You need to use true pixel coordinates if you use sampler2DRect.
I assume you copy the incoming texture coordinates in the vertex shader to your varying_texcoord.
If you do so, you need to compute the coordinates:
vec2 coords = varying_texcoord * vec2(textureSize(image));
Then pass the coordinates:
vec4 sample = texture(image, coords);
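Putting the two answers together, a corrected fragment shader could look like this (assuming a GLSL version of 1.40 or later, where texture and textureSize accept rectangle samplers):
uniform sampler2DRect image;
varying vec2 varying_texcoord;
void main() {
    // sampler2DRect is addressed in texels rather than [0,1], so scale
    // the normalized coordinates up by the texture's dimensions.
    vec2 coords = varying_texcoord * vec2(textureSize(image));
    vec4 sample = texture(image, coords);
    gl_FragColor = vec4(1.0, 0.0, 0.0, sample.r);
}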