Something strange with my texture in a fragment shader - OpenGL

Here is my fragment shader:
#version 330 core
in vec2 param_uv;
uniform sampler2D uniform_texturetoto;
out vec3 color;
void main()
{
color = texture( uniform_texturetoto, param_uv ).rgb;
}
and here is a piece of my main C++ code:
GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texture->w, texture->h, 0, GL_RGB, GL_UNSIGNED_BYTE, my_texture_pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
...
glBindTexture(GL_TEXTURE_2D, textureID);
My problem is that everything works fine, and that should not be the case: uniform_texturetoto is never set from my main C++ program. I can replace uniform_texturetoto with any other name and the program still works!
My question is... why?
Thanks

I think the GLSL spec has the answer. From the 4.50 spec, section 4.3.5:
All uniform variables are read-only and are initialized externally either at link time or through the API. The link-time initial value is either the value of the variable's initializer, if present, or 0 if no initializer is present.
I think that's why your code works: with no initializer and no API call, the sampler uniform is 0, so it samples from texture unit 0, which is exactly where your glBindTexture call put the texture.
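If you want the binding to be explicit instead of relying on that zero default, a minimal sketch looks like this (programID stands in for your linked program object, which the question doesn't show):
glActiveTexture(GL_TEXTURE0);             // select texture unit 0
glBindTexture(GL_TEXTURE_2D, textureID);  // bind the texture to unit 0
glUseProgram(programID);
GLint loc = glGetUniformLocation(programID, "uniform_texturetoto");
glUniform1i(loc, 0);                      // point the sampler at unit 0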


uniform array of sampler2D only getting 1 texture [duplicate]

I would like my fragment shader to take multiple textures passed in as uniform sampler2D u_Textures[3]. Each vertex in the vertex buffer ends with one value indicating which texture to sample from (I call it index). I am trying to render multiple textures in the same draw call, but the program shows the same texture for every index I give it.
My fragment shader code:
#version 450 core
layout(location = 0) out vec4 out_Color;
in vec2 v_TexCoord;
in float v_texIndex;
uniform sampler2D u_Textures[3];
void main()
{
int ind = int(v_texIndex);
out_Color = texture(u_Textures[ind], v_TexCoord);
}
This is how I access u_Textures to populate it:
unsigned int loc1 = glGetUniformLocation(sh.getRendererID(), "u_Textures");
GLfloat values[3] = { 0.0f, 1.0f, 2.0f };
glUniform1fv(loc1, 3, values);
This is how I load in the textures in memory from my 'Texture' class:
glGenTextures(1, &m_RendererID);
glBindTexture(GL_TEXTURE_2D, m_RendererID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_Width, m_Height, 0, GL_RGBA, GL_UNSIGNED_BYTE, m_LocalBuffer);
and how I bind the texture:
void Texture::Bind(int slot) const {
glActiveTexture(slot + GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_RendererID);
}
I created 2 textures and bound them to different slots (1 and 2), and I try to draw 2 squares, one with each texture.
Texture tex1(path1);
Texture tex2(path2);
tex1.Bind(1);
tex2.Bind(2);
However, no matter how I change the texture index or how I bind the textures, I get the same texture on both squares.
I should mention that the line int ind = int(v_texIndex); works correctly and receives the right value.
What could be wrong here?
I am trying to render multiple textures in the same drawcall
Well... you can't.
The index used in an array of samplers must be a dynamically uniform expression. If the expression results in different values within the same draw call, then it's not dynamically uniform. And thus, you cannot use it as an index.
The layer index for array textures can be non-uniform. But the index into arrays of samplers cannot.
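If the index has to vary per fragment, one option this answer points at is a single array texture, whose layer coordinate may be non-uniform. A sketch of the fragment shader under that approach (u_TextureArray is an illustrative name, not from the question):
#version 450 core
layout(location = 0) out vec4 out_Color;
in vec2 v_TexCoord;
in float v_texIndex;
uniform sampler2DArray u_TextureArray; // one texture object holding all layers
void main()
{
    // The third coordinate selects the layer; unlike a sampler array index,
    // it may differ between fragments in the same draw call.
    out_Color = texture(u_TextureArray, vec3(v_TexCoord, v_texIndex));
}
On the C++ side the layers would be uploaded with glTexImage3D (or glTexStorage3D) on the GL_TEXTURE_2D_ARRAY target, which requires all layers to share one size and format.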

Rendering a framebuffer to texture won't work

I am trying to create a depth map in OpenGL, and for some reason the framebuffer won't write to the texture. I tried multiple things to fix it, but it doesn't seem to work.
Here is what I did:
This is the generation of the framebuffer and the texture:
glGenFramebuffers(1, &_DepthMapFBO);
glBindFramebuffer(GL_FRAMEBUFFER, _DepthMapFBO);
glGenTextures(1, &_DepthMapTex);
glBindTexture(GL_TEXTURE_2D, _DepthMapTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, DEPTH_TEXTURE_WIDTH, DEPTH_TEXTURE_HEIGHT, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
//glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _DepthMapTex, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, _DepthMapTex, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
(In the above, I tried replacing glTexImage2D with glTexStorage2D and replacing glFramebufferTexture with glFramebufferTexture2D; neither worked.)
Then (there is code before this, but I checked that it works fine) I rendered to the framebuffer as follows:
_DepthProgram.Use();
// Setting uniforms here
glViewport(0, 0, DEPTH_TEXTURE_WIDTH, DEPTH_TEXTURE_HEIGHT);
glBindFramebuffer(GL_FRAMEBUFFER, _DepthMapFBO);
glClear(GL_DEPTH_BUFFER_BIT);
// Rendering the scene here
glBindFramebuffer(GL_FRAMEBUFFER, 0);
These are the shaders I use:
(Vertex)
#version 410 core
layout(location = 0) in vec3 Position;
uniform mat4 LightSpaceMatrix;
uniform mat4 Model;
void main()
{
gl_Position = LightSpaceMatrix * Model * vec4(Position, 1.0);
}
(Fragment)
#version 410 core
void main()
{
// gl_FragDepth = 0.0;
}
I tried rendering the texture onto a plane and it came out totally white (I checked; there were supposed to be other values). As you can see in the fragment shader, I also tried writing to gl_FragDepth explicitly, but it didn't change the texture; it stayed white (1.0).
I looked all over the internet, including learnopengl.com, and everybody seems to be doing the same as me. Did I miss something?
(I double-checked that I am rendering the texture onto the plane correctly: when I replaced the glBindTexture call with one for another texture, that texture showed up.)
If I am missing some information please let me know.
EDIT: By the way, I don't get any error with glGetError.
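(One diagnostic glGetError does not cover is framebuffer completeness. A minimal check along these lines, run while the FBO is still bound, would report attachment problems; the message text is just illustrative:)
glBindFramebuffer(GL_FRAMEBUFFER, _DepthMapFBO);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    printf("FBO incomplete: 0x%x\n", status); // e.g. GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT
glBindFramebuffer(GL_FRAMEBUFFER, 0);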

OpenGL texturing doesn't work

A few days ago I applied the latest Windows 8.1 update to my laptop. When restarting as part of the process I got a bluescreen, and the same happened when I tried to restart manually. In the end I had to do the partial reset that is offered (it removes anything Windows recognizes as an app but leaves your data intact), because I could not afford to lose the data.
Before that incident my code for displaying textured and animated models worked, but afterwards I got new errors from my GLSL compiler because of deprecated keywords. Once those were fixed, my program wouldn't show textures and instead displayed everything black.
I have 2 older projects using the same GLSL code and they have the same problem (although they did not have any deprecated keywords in their shaders).
The code below worked as-is about two hours before I did the update.
Initialising:
void TestWindow::initialize()
{
initSkybox();
initVAO();
initVBO();
m_program = new QOpenGLShaderProgram(this);
m_program->addShaderFromSourceFile(QOpenGLShader::Vertex ,"../Qt_Flyff_1/Simple_VertexShader.vert");
m_program->addShaderFromSourceFile(QOpenGLShader::Fragment ,"../Qt_Flyff_1/Simple_FragmentShader.frag");
m_program->link();
qDebug("log: %s",m_program->log().toStdString().c_str());
m_posAttr = m_program->attributeLocation("position");
m_texPos = m_program->attributeLocation("texcoord");
m_colorUniform = m_program->uniformLocation("color");
m_matrixUniform = m_program->uniformLocation("matrix");
int asdf=m_program->uniformLocation("tex");
GLuint tex;
glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
QImage img("../skybox01_big.png");
glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA, img.width(), img.height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, img.bits());
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glEnable(GL_DEPTH_TEST);
// glEnable(GL_CULL_FACE);
glClearColor(0.5,0.5,0.5,1.0);
m_program->setUniformValue(asdf,0);
}
Vertex Shader:
#version 400 core
uniform mediump mat4 matrix;
uniform mediump vec3 color;
in vec4 position;
in vec2 texcoord;
out vec3 Color;
out vec2 Texcoord;
void main()
{
Color = color;
Texcoord = texcoord;
gl_Position = matrix*position;
}
Fragment Shader:
#version 400 core
in vec3 Color;
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main()
{
outColor = texture(tex, Texcoord) * vec4(Color, 1.0);
}
I've also compared my code to various tutorials and could not find any difference.
When simply using "outColor = vec4(Color, 1.0)" my model is completely white as expected, and when displaying the texture coordinates as color I also get the expected results.
In case it matters, my laptop has a GeForce GT 740M.
OK, I found the solution.
What I didn't know is that there is a difference between internalformat and format in glTexImage2D.
The format parameter describes the layout of the pixel data you send to the GPU, while internalformat is the format the GPU uses to store the texture internally, and it accepts fewer options. My texture was stored as BGRA in memory, but OpenGL does not allow GL_BGRA as an internalformat.
Basically, I had to change:
glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA, img.width(), img.height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, img.bits());
to
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.width(), img.height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, img.bits());
(the 3rd parameter, internalformat, from GL_BGRA to GL_RGBA)
Info on what types can be used for internalformat can be found here:
https://www.opengl.org/sdk/docs/man/html/glTexImage2D.xhtml
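As a side note (not from the original answer): in a core profile it is common to request a sized internal format such as GL_RGBA8, which pins the precision to 8 bits per channel instead of leaving it to the driver. The same upload would then be:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, img.width(), img.height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, img.bits());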

OpenGL & GLSL: Greyscale texture not showing

I'm trying to show a greyscale texture on the screen. I create my texture via
glGenTextures(1, &heightMap);
glBindTexture(GL_TEXTURE_2D, heightMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, colorData);
colorData is a float[512*512] with values between 0.0 and 1.0.
When rendering, I use:
glBindTexture(GL_TEXTURE_2D, heightMap);
glUniform1i(shader.GetUniformLocation("textureSampler"), 0);
shader.GetUniformLocation is a function of a library we use at university. It is essentially the same as glGetUniformLocation(shader, "textureSampler"), so don't be confused by it.
I render two triangles as a triangle strip. My fragment shader is:
#version 330
layout(location = 0) out vec4 frag_color;
in vec2 texCoords;
uniform sampler2D textureSampler;
void main()
{
frag_color = vec4(texture(textureSampler, texCoords).r, 0, 0, 1);
}
I know the triangles are rendered correctly (e.g. if I use vec4(1.0, 0, 0, 1) for frag_color, I get a completely red screen). However, with the texture lookup above, I get a completely black screen; every texture value seems to be 0.0.
Does anyone have an idea what I have done wrong? Are there mistakes in these few lines of code, or are they correct and the error is somewhere else?
As one of the comments below says, setting glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); and glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); solves the problem. The default minification filter is GL_NEAREST_MIPMAP_LINEAR, so without mipmap levels the texture is incomplete and every lookup returns black. :)
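For reference, a sketch of the full texture setup with the filters in place (same names as in the question):
glGenTextures(1, &heightMap);
glBindTexture(GL_TEXTURE_2D, heightMap);
// The default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR; with no
// mipmap levels uploaded, the texture is incomplete and samples as black.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, colorData);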

OpenGL depth buffer to texture (for various image sizes)

I'm having a problem with the depth buffer: I want to copy it into a texture, but it doesn't seem to work.
So, here's the piece of code I execute after rendering the objects:
glGenTextures(1, (GLuint*)&_depthTexture);
glBindTexture(GL_TEXTURE_2D, _depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
const pair<int, int> &img_size = getImageSize();
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, img_size.first, img_size.second, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, img_size.first, img_size.second);
glClear( GL_DEPTH_BUFFER_BIT );
The thing is (I'm on OpenGL 3.2+), the image used for rendering can have various sizes; most of the time it won't be 2^i by 2^j. Is that a problem?
Also, the other part of the problem might be in the fragment shader used afterwards:
#version 140
uniform sampler2D depthTexture;
uniform ivec2 screenSize;
out vec4 outColor;
void main()
{
vec2 depthCoord = gl_FragCoord.xy / screenSize.xy;
float d = texture2D(depthTexture, depthCoord).x;
outColor = vec4(d, d, d, 1.0);
}
After that, when I render some shapes a second time, I want to use the previous depth (the depth texture) for some effects.
But seriously... can anyone just show me a piece of code that gets the depth buffer into a texture? I don't care whether it's rendering to the texture or extracting the texture after the rendering! As long as I have a texture with the depth values to do the second pass, that's what is important!
http://www.joeforte.net/projects/soft-particles/
This might be a good solution! At least it's the full code, so all the different parts are there.
You may need a glReadBuffer call. If your context is double-buffered, that would be glReadBuffer( GL_BACK ).
Also, try GL_DEPTH_COMPONENT24, since a 32-bit depth buffer would be unusual, I think.
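A minimal sketch of the copy-after-render approach with those two suggestions applied (names reused from the question; texture creation and parameters as before):
// Non-power-of-two sizes are fine here; NPOT textures have been core since OpenGL 2.0.
glBindTexture(GL_TEXTURE_2D, _depthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, img_size.first, img_size.second,
             0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
// After drawing the scene, with the default framebuffer still bound:
glReadBuffer(GL_BACK); // explicit read buffer for a double-buffered context
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, img_size.first, img_size.second);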