I'm trying to show a greyscale texture on the screen. I create my texture via
glGenTextures(1, &heightMap);
glBindTexture(GL_TEXTURE_2D, heightMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, colorData);
colorData is a float[512*512] with values between 0.0 and 1.0.
When rendering, I use:
glBindTexture(GL_TEXTURE_2D, heightMap);
glUniform1i(shader.GetUniformLocation("textureSampler"), 0);
shader.GetUniformLocation is a function of a library we use at university. It is essentially the same as glGetUniformLocation(shader, "textureSampler"), so don't be confused by it.
I render two triangles via triangle strip. My fragment shader is:
#version 330
layout(location = 0) out vec4 frag_color;
in vec2 texCoords;
uniform sampler2D textureSampler;
void main()
{
frag_color = vec4(texture(textureSampler, texCoords).r, 0, 0, 1);
}
I know the triangles are rendered correctly (e.g. if I use vec4(1.0, 0, 0, 1) for frag_color, I get a completely red screen). However with the line above, I only get a completely black screen. Every texture value seems to be 0.0.
Does anyone have an idea what I have done wrong? Are there mistakes in these few lines of code, or are they correct and the error is somewhere else?
As one of the comments below says, setting glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); and glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); solves the problem. :)
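For reference, a minimal sketch of the fixed texture setup: the default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, so without mipmaps the texture is incomplete and every lookup returns black.
glGenTextures(1, &heightMap);
glBindTexture(GL_TEXTURE_2D, heightMap);
// No mipmaps are uploaded, so use a non-mipmapping minification filter;
// otherwise the texture stays incomplete and samples as 0.0.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, colorData);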
I am trying to implement a color picking algorithm: I render my entities with a unique color to a separate framebuffer, and then query that framebuffer at the pixel under the mouse (using glReadPixels) to select the entity beneath the cursor.
This works fine on my integrated Intel HD Graphics 4600; I am able to read back the exact value I sent to the GPU.
However, when running the application on my Nvidia GTX 860M, the results are not consistent. A few random pixels have the original color, but most of them have a slightly altered color.
The same thing happens on another computer that has a Geforce 8600 GT.
I tried running it using NSight Graphics, which allows me to view the memory of the Textures.
The arrow points to the correct pixel color. I would expect every pixel to have the same color.
I use this union to create a unique color...
union u_picking_color {
struct s_entity *ep;
glm::vec3 color;
};
then I query the Framebuffer like this:
struct s_entity *Renderer::GetEntity(int x, int y) {
union u_picking_color picking_color = { 0 };
m_picking_fbo.SetReadTarget();
glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, &picking_color.color);
return picking_color.ep;
}
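A hedged usage sketch (windowHeight, mouseX, mouseY and the renderer instance are hypothetical names; the y flip assumes the windowing system reports mouse coordinates with a top-left origin, while glReadPixels uses a bottom-left origin):
// flip mouse y into OpenGL's bottom-left window coordinates
int readY = windowHeight - mouseY - 1;
struct s_entity *picked = renderer.GetEntity(mouseX, readY);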
here are the API calls used to create the Framebuffer:
glGenFramebuffers(1, &m_frameBufferId);
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBufferId);
glGenTextures(1, &m_colorTextureId);
glBindTexture(GL_TEXTURE_2D, m_colorTextureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_colorTextureId, 0);
glGenTextures(1, &m_depthTextureId);
glBindTexture(GL_TEXTURE_2D, m_depthTextureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTextureId, 0);
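Not part of the original code, but a minimal completeness check after attaching both textures can catch setup errors early (a sketch):
// hypothetical sanity check, not in the original code
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    std::cerr << "Picking framebuffer is incomplete" << std::endl;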
Also, here are the shaders used to draw on the framebuffer. They do not change the color. They only pass it through the pipeline.
Vertex:
#version 330 core
layout(location = 0) in vec3 v_Position;
//layout(location = 1) in vec3 v_Normal;
//layout(location = 2) in vec4 v_Color;
layout(location = 3) in vec3 v_PickingColor;
out vec4 color;
uniform mat4 u_ViewProjection;
void main()
{
color = vec4(v_PickingColor, 1);
gl_Position = u_ViewProjection * vec4(v_Position.x, v_Position.y, v_Position.z, 1.0f);
}
Fragment:
#version 330 core
out vec4 fragColor;
in vec4 color;
void main()
{
fragColor = color;
}
I tried disabling GL_BLEND and GL_DITHER with no success.
It turns out that even though I was using the same colour for every vertex, I should have used the 'flat' interpolation qualifier in my shaders to prevent interpolation and avoid precision errors.
Vertex Shader:
flat out vec4 color;
Fragment Shader:
flat in vec4 color;
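A minimal sketch of the corrected pass-through pair with the flat qualifier applied (same shaders as above, only the interpolation qualifier changes):
#version 330 core
layout(location = 0) in vec3 v_Position;
layout(location = 3) in vec3 v_PickingColor;
flat out vec4 color; // the provoking vertex's value is used verbatim, no interpolation
uniform mat4 u_ViewProjection;
void main()
{
    color = vec4(v_PickingColor, 1);
    gl_Position = u_ViewProjection * vec4(v_Position, 1.0);
}
#version 330 core
flat in vec4 color;
out vec4 fragColor;
void main()
{
    fragColor = color;
}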
I have a program that takes the following texture:
which is generated via FreeType2. Basically, it's creating a texture atlas for every character that I've requested to be drawn. As you can see, the characters are bright and clear. In fact, you can see that the top-leftmost pixel of the lowercase 'i' has a value of 71 (out of 255) or 0.7098 when I inspect the texture in RenderDoc.
Next, the engine blits letters onto a Framebuffer Object. This is done via textured quads. The vertex shader:
#version 330
layout(location=0) in vec2 inVertexPosition;
layout(location=1) in vec2 inTexelCoords;
layout(location=2) in float inDepth;
out vec2 texelCoords;
out float depth;
void main()
{
gl_Position = vec4(inVertexPosition.x,-inVertexPosition.y, 0.0, 1.0);
texelCoords = vec2(inTexelCoords.x,1-inTexelCoords.y);
depth = inDepth;
}
And the frag shader:
#version 330
layout(location=0) out vec4 frag_colour;
in vec2 texelCoords;
in float depth;
uniform sampler2D uTexture;
uniform vec4 uTextColor;
void main()
{
vec4 c = texture(uTexture,texelCoords);
frag_colour = uTextColor * c.r;
gl_FragDepth = depth;
}
As you can see, it sets the pixel color to be a factor of the red channel.
However, when I view the contents of the FBO via RenderDoc (saved out to a file here), you see this:
If you look at this without transparency (just a second layer added underneath in Gimp to illustrate better):
You can see that the text is a little faded compared to what it was before. If you look at the top-leftmost pixel of the lowercase 'i', it's now a value of 50.2, or for a range of 0-1 it's 0.50196 (via RenderDoc).
Next, when the FBO is finally put onto the screen via another textured quad it fades even more. First here's the vertex shader:
#version 330
layout(location=0) in vec2 inVertexPosition;
layout(location=1) in vec2 inTexelCoords;
varying vec2 texelCoords;
void main()
{
gl_Position = vec4(inVertexPosition.x,-inVertexPosition.y, 0.0, 1.0);
texelCoords = vec2(inTexelCoords.x,1-inTexelCoords.y);
}
and the fragment shader:
#version 330
precision highp float;
layout(location=0) out vec4 frag_colour;
varying vec2 texelCoords;
uniform sampler2D uTexture;
void main()
{
vec4 c = texture(uTexture,texelCoords);
frag_colour = c;
}
The results, as I said, are more faded than before:
original:
gimp background for clarity:
now that pixel has a value of 25.1 or 0.05139.
What is causing this fading after every render?
I think it's important to note that the brighter areas don't fade.
My Framebuffer creation code
glGenFramebuffers(1, &m_framebuffer);
glGenTextures(1, &m_fboColorAttachment);
glGenTextures(1, &m_fboAdditionalInfo);
glGenTextures(1, &m_fboDepthStencil);
glCall(glBindFramebuffer,GL_FRAMEBUFFER, m_framebuffer);
/* setup color output 0 */
glBindTexture(GL_TEXTURE_2D, m_fboColorAttachment);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glCall(glTexImage2D, GL_TEXTURE_2D, 0, GL_RGBA8, screenDimensionsX, screenDimensionsY, 0, GL_BGRA, GL_UNSIGNED_BYTE, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_fboColorAttachment, 0);
glBindTexture(GL_TEXTURE_2D, 0);
/* setup color output 1 */
glBindTexture(GL_TEXTURE_2D, m_fboAdditionalInfo);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glCall(glTexImage2D, GL_TEXTURE_2D, 0, GL_R32UI, screenDimensionsX, screenDimensionsY, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, m_fboAdditionalInfo, 0);
glBindTexture(GL_TEXTURE_2D, 0);
/* setup depth and stencil */
glBindTexture(GL_TEXTURE_2D, m_fboDepthStencil);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glCall(glTexImage2D, GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, screenDimensionsX, screenDimensionsY, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, m_fboDepthStencil, 0);
glBindTexture(GL_TEXTURE_2D, 0);
The initial (red) texture creation:
glActiveTexture(GL_TEXTURE0);
glGenTextures(1,&textureData.texture);
glBindTexture(GL_TEXTURE_2D,textureData.texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 500, 500, 0, GL_RED, GL_UNSIGNED_BYTE, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
My blending is done as
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
has a value of 71 (out of 255) or 0.7098
No idea what you even mean here. 71/255 would be 0.278, and 0.7098 normalized would be 181 out of 255. It looks like your "out of 255" values are really percentages, out of 100%.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
You use that color value from the red channel also as alpha, so when you have blending enabled you end up with 0.7098 * 0.7098 = 0.5038. Since the result will be rounded to the nearest representable value, we actually end up with 181/255 * 181/255 = 0.50382, rounded to 128/255.
it's now a value of 50.2, or for a range of 0-1 it's 0.50196
128/255 is 0.50196078....
So the solution is to disable blending for all steps where you don't need it, or, if you do need it, to set useful alpha values.
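For example, a hedged sketch of the second option for the glyph pass: put the coverage from the atlas' red channel into the alpha channel only, so the blend does not darken the RGB a second time (this replaces the frag_colour line; it is not the original shader code):
vec4 c = texture(uTexture, texelCoords);
// coverage goes into alpha; RGB stays at full intensity, so
// GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA applies the fade exactly once
frag_colour = vec4(uTextColor.rgb, uTextColor.a * c.r);
For the final pass that only copies the FBO to the screen, simply calling glDisable(GL_BLEND) before that draw covers the "disable blending where you don't need it" option.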
Side note:
now that pixel has a value of 25.1 or 0.05139.
No idea what this means; 25.1 does not relate to 0.05139 in any obvious way, so you have evidently switched the meaning of those values again.
So I've been trying to get shadow mapping to work, but I was unsuccessful, and I am now trying to simply write anything to the frame buffer and then render it to a quad as a texture. I've been looking at this small piece of code for 10 hours, so I thought it might finally be time to ask for some help. Do let me know if you need any more information or if something is unclear.
I started by following this tutorial. After completing it two times, and straight copy-pasting a third time, I gave up and started to look elsewhere for information. My current code is a bit of a mess, but I have tried to extract what is crucial.
Here is how I set up my FBO with a TEXTURE2D (_shadowMap and _shadowMapFBO are private variables in the class acting as the main program):
glGenTextures(1, &_shadowMap);
glBindTexture(GL_TEXTURE_2D, _shadowMap);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, SHADOW_WIDTH, SHADOW_HEIGHT, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glGenFramebuffers(1, &_shadowMapFBO);
glBindFramebuffer(GL_FRAMEBUFFER, _shadowMapFBO);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _shadowMap, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
{
std::cout << "Frame buffer failed" << std::endl;
system("pause");
exit(1);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
I then do the first render pass in the main loop:
glViewport(0, 0, SHADOW_WIDTH, SHADOW_HEIGHT);
glBindFramebuffer(GL_FRAMEBUFFER, _shadowMapFBO);
glClear(GL_DEPTH_BUFFER_BIT);
_shadowShader.bind();
glBindVertexArray(_cubeVAO);
glDrawElements(GL_TRIANGLES, _numberOfIndicesCube, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(_planeVAO);
glDrawElements(GL_TRIANGLES, _numberOfIndicesPlane, GL_UNSIGNED_SHORT, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDrawBuffer(GL_BACK);
glReadBuffer(GL_BACK);
Initially I wanted to create a shadow map, but at this point I'm simply trying to write an explicit value to the entire texture attached to the frame buffer using the following shader pair (the one used by _shadowShader)
Vertex
#version 450
in layout(location=0) vec3 position;
uniform mat4 mvp;
void main()
{
gl_Position = mvp * vec4(position, 1.0f);
}
Fragment
#version 450
out float depth;
void main()
{
depth = 0.0f;
}
I then finish up by trying to display the texture on a quad:
glViewport(0, 0, _screenWidth, _screenHeight);
glClearColor(0.7f, 0.5f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
_testShader.bind();
glBindTexture(GL_TEXTURE_2D, _shadowMap);
glBindVertexArray(_quadVAO);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
Here is the fragment shader for the quad:
#version 450
in vec2 textureCoordinates;
uniform sampler2D _shadowMap;
out vec4 color;
void main()
{
float depth = texture2D(_shadowMap, textureCoordinates).r;
color = texture2D(_shadowMap, textureCoordinates);
}
But to my continuing frustration, the quad appears white although I output 0.0f (black) from the fragment shader... I know the quad is able to display a texture, as I've been able to render a normal .jpg texture onto it. So, this is where I'm at now; any ideas, pointers, thoughts, motivational words, etc.?
EDIT I: So after some more time I have at least figured out that the texture is loaded correctly and that I'm able to read from it. I did this by actually loading an image onto it during initialization and then not writing to it. The quad then displays the texture correctly. So there must be something wrong with how I'm writing to it.
EDIT II: So I got it working; I sat down with a TA and copy pasted the code (yeah, I know...doesn't get much better than that) and it now works. Thanks Bart :)
Your shadow pass fragment shader is just wrong:
#version 450
out float depth;
void main()
{
depth = 0.0f;
}
This just declares a single-channel color output, which will be written to the color attachment (which you don't have). Since you want to write to the depth attachment of your FBO, you have to use the built-in gl_FragDepth output variable. If you don't write it, the GL will write the linearly interpolated window-space depth value to the depth buffer.
Note that, typically, you clear the depth buffer to 1.0, and with a typical perspective projection your objects end up with depth values very close to 1.0, say 0.99-something, so it is very likely that converting the result to just 8-bit grayscale will look all white.
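For illustration, a minimal sketch of a shadow-pass fragment shader that writes an explicit value into the depth attachment via gl_FragDepth (assuming an explicit constant really is wanted; for an ordinary shadow map an empty main() suffices, since the fixed-function depth is written automatically):
#version 450
void main()
{
    // Written to the GL_DEPTH_ATTACHMENT texture; without this line
    // the linearly interpolated window-space depth is stored instead.
    gl_FragDepth = 0.0;
}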
I have a simple RGBA texture that I will project on a rectangle/quad in OpenGL.
However, I want to do some operations in the rgb pixels of that texture, e.g., I want the displayed color to be some function of the RGB pixels of the original image.
My questions are: can I apply a fragment shader to a 2D texture? And if I can, How do I access the rgb value of the original texture image?
Any help would be appreciated.
You can certainly do this. Render the quad as usual and send the texture to the fragment shader as a sampler2D uniform.
Inside the fragment shader, you can then make calls to either texture or texelFetch. texture samples your texture using normalized coordinates and the configured filtering, while texelFetch looks up a single texel by integer coordinates without any filtering or interpolation.
Here is an example of what your fragment shader might look like:
Fragment shader
#version 330 core
uniform vec2 resolution;
uniform sampler2D myTexture;
out vec3 color;
void main()
{
vec2 pos = gl_FragCoord.xy / resolution.xy;
pos.y = 1.0 - pos.y;
color = texture(myTexture, pos).xyz;
}
And then on the application side:
Initialization
glGenTextures(1, &m_textureID);
glBindTexture(GL_TEXTURE_2D, m_textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_imageWidth, m_imageHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, pImage);
m_textureUniformID = glGetUniformLocation(m_programID, "myTexture");
Render loop
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_textureID);
glUniform1i(m_textureUniformID, 0);
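If you prefer the texelFetch route mentioned above, a hedged variant of the fragment shader could look like this (it assumes the texture and the viewport have the same pixel size, and it omits the vertical flip from the example above):
#version 330 core
uniform sampler2D myTexture;
out vec3 color;
void main()
{
    // gl_FragCoord.xy is the window-space pixel center, so truncating it
    // gives integer texel coordinates; lookup is from level 0, unfiltered.
    ivec2 texel = ivec2(gl_FragCoord.xy);
    color = texelFetch(myTexture, texel, 0).rgb;
}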
A few days ago I applied the latest Windows 8.1 update to my laptop. When restarting as part of the process I got a bluescreen, and the same happened when I tried to restart manually. In the end I had to do the partial reset you are offered (it removes anything Windows recognizes as an app but leaves your data intact), because I could not lose the data.
Before that incident my code for displaying textured and animated models worked, but afterwards I got new errors from my GLSL compiler because of deprecated keywords. When those were fixed, my program would not show textures and instead displayed everything black.
I have 2 older projects using the same GLSL code and they have the same problem (although they did not have any deprecated keywords in their shaders).
The code worked like this ~2 hours before I did the update.
Initialising:
void TestWindow::initialize()
{
initSkybox();
initVAO();
initVBO();
m_program = new QOpenGLShaderProgram(this);
m_program->addShaderFromSourceFile(QOpenGLShader::Vertex ,"../Qt_Flyff_1/Simple_VertexShader.vert");
m_program->addShaderFromSourceFile(QOpenGLShader::Fragment ,"../Qt_Flyff_1/Simple_FragmentShader.frag");
m_program->link();
qDebug("log: %s",m_program->log().toStdString().c_str());
m_posAttr = m_program->attributeLocation("position");
m_texPos = m_program->attributeLocation("texcoord");
m_colorUniform = m_program->uniformLocation("color");
m_matrixUniform = m_program->uniformLocation("matrix");
int asdf=m_program->uniformLocation("tex");
GLuint tex;
glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
QImage img("../skybox01_big.png");
glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA, img.width(), img.height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, img.bits());
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glEnable(GL_DEPTH_TEST);
// glEnable(GL_CULL_FACE);
glClearColor(0.5,0.5,0.5,1.0);
m_program->setUniformValue(asdf,0);
}
Vertex Shader:
#version 400 core
uniform mediump mat4 matrix;
uniform mediump vec3 color;
in vec4 position;
in vec2 texcoord;
out vec3 Color;
out vec2 Texcoord;
void main()
{
Color = color;
Texcoord = texcoord;
gl_Position = matrix*position;
}
Fragment Shader:
#version 400 core
in vec3 Color;
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main()
{
outColor = texture(tex, Texcoord) * vec4(Color, 1.0);
}
I've also compared my code to various tutorials and could not find any difference.
When simply using "outColor = vec4(Color,1.0)" my model is completely white as expected, and when displaying the texture coordinates as color I also get the expected results.
In case it matters, my laptop has a GeForce GT 740M.
OK, I found the solution.
What I didn't know is that there is a difference between the internalformat and format parameters of glTexImage2D.
The format specifies what the data you send to the GPU looks like, while the internalformat is what the GPU uses internally, and it allows fewer formats. My texture was stored as BGRA in memory, but OpenGL does not allow that as an internal format.
Basically, I had to change:
glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA, img.width(), img.height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, img.bits());
to
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.width(), img.height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, img.bits());
(3rd parameter from GL_BGRA to GL_RGBA)
Info on what types can be used for internalformat can be found here:
https://www.opengl.org/sdk/docs/man/html/glTexImage2D.xhtml
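As a side note (not from the original answer), a sized internal format such as GL_RGBA8 is usually preferred over the unsized GL_RGBA; a hedged variant of the same call would be:
// GL_RGBA8 is a sized internal format; GL_BGRA still describes the source data layout
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, img.width(), img.height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, img.bits());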