Trouble with imageStore() (OpenGL 4.3)

I'm trying to output some data from compute shader to a texture, but imageStore() seems to do nothing. Here's the shader:
#version 430
layout(rgba32f) uniform image2D image;
layout (local_size_x = 1, local_size_y = 1) in;

void main() {
    imageStore(image, ivec2(gl_GlobalInvocationID.xy), vec4(0.0f, 1.0f, 1.0f, 1.0f));
}
and the application code is here:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
glUseProgram(program->GetName());
glUniform1i(program->GetUniformLocation("image"), 0);
glDispatchCompute(WIDTH, HEIGHT, 1);
Then a full-screen quad is rendered with that texture, but currently it only shows some random old data from video memory. Any idea what could be wrong?
EDIT:
This is how I display the texture:
// This comes right after the previous block of code
glUseProgram(drawProgram->GetName());
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(drawProgram->GetUniformLocation("sampler"), 0);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 6);
glfwSwapBuffers();
and the drawProgram consists of:
#version 430
#extension GL_ARB_explicit_attrib_location : require
layout(location = 0) in vec2 position;
out vec2 uvCoord;
void main() {
    gl_Position = vec4(position.x, position.y, 0.0f, 1.0f);
    uvCoord = position;
}
and:
#version 430
in vec2 uvCoord;
out vec4 color;
uniform sampler2D sampler;
void main() {
    vec2 uv = (uvCoord + vec2(1.0f)) / 2.0f;
    uv.y = 1.0f - uv.y;
    color = texture(sampler, uv);
    //color = vec4(uv.x, uv.y, 0.0f, 1.0f);
}
The commented-out last line in the fragment shader produces this output: [image: render output]
The vertex array object (vao) has one buffer with 6 2D vertices:
-1.0, -1.0
1.0, -1.0
1.0, 1.0
1.0, 1.0
-1.0, 1.0
-1.0, -1.0

"This is how I display the texture:"
That's not good enough. I don't see a call to glMemoryBarrier, so there's no guarantee that your code actually works.
Remember: writes to images via Image Load/Store are not memory coherent. They require explicit user synchronization before they become visible. If you want to use an image you have stored to as a texture later, there must be an explicit glMemoryBarrier call after the rendering command that writes to it, but before the rendering command that samples from it as a texture.
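In the code above, that means inserting the barrier between the dispatch and the draw (a minimal sketch):

glDispatchCompute(WIDTH, HEIGHT, 1);
// make the imageStore() writes visible to later texture fetches
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);
// ... then draw the full-screen quad that samples tex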
"Why that is a problem, I don't know"
Because desktop OpenGL is not OpenGL ES.
The last three parameters only describe the arrangement of the pixel data you're giving OpenGL. They change nothing about how OpenGL stores the data. In ES, they do, but that's only because ES doesn't do format conversions.
In desktop OpenGL, it is perfectly legal to upload floating-point data to a normalized integer texture; OpenGL is expected to convert the data as best it can. ES doesn't do conversions, so it has to change the internal format (the third parameter) to match the data.
Desktop GL does not. If you want a specific image format, you ask for it. Desktop GL gives you what you ask for, and only what you ask for.
Always use sized internal formats.
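Applied to the code in the question, that means requesting the sized format explicitly:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);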

GL_RGBA is not a sized internal format, so you cannot know which actual format you will get. Most often, it is resolved to GL_RGBA8 by OpenGL.
In your case, the GL_FLOAT parameter only describes the pixel data you upload to the texture, not how the texture is stored.
Read table 2 here to know what you can set as the internal texture format.

Okay, I found the solution. The problem lies here:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);
this line doesn't specify the size of the internal format (GL_RGBA). When I supplied GL_RGBA32F, it started working. Why that is a problem, I don't know (hopefully somebody will be able to explain).

Related

Deferred Rendering not Displaying the GBuffer Textures

I'm trying to implement deferred rendering within an engine I'm developing as a personal learning project, and I cannot understand what I'm doing wrong when it comes to rendering all the textures in the GBuffer to check whether the implementation is okay.
The thing is that I currently have a framebuffer with 3 color attachments for the different textures of the GBuffer (color, normal and position), which I initialize as follows:
glCreateFramebuffers(1, &id);
glBindFramebuffer(GL_FRAMEBUFFER, id);
std::vector<uint> textures;
textures.resize(3);
glCreateTextures(GL_TEXTURE_2D, 3, textures.data());
for(size_t i = 0; i < 3; ++i)
{
    glBindTexture(GL_TEXTURE_2D, textures[i]);
    if(i == 0) // For Color Buffer
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    else
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, textures[i], 0);
}
GLenum color_buffers[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers((GLsizei)textures.size(), color_buffers);
uint depth_texture;
glCreateTextures(GL_TEXTURE_2D, 1, &depth_texture);
glBindTexture(GL_TEXTURE_2D, depth_texture);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depth_texture, 0);
bool fbo_status = glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;
ASSERT(fbo_status, "Framebuffer Incompleted!");
glBindFramebuffer(GL_FRAMEBUFFER, 0);
This is not reporting any errors, and it seems to work since the framebuffer of the forward renderer renders properly. Then, when rendering, I run the following code after binding the framebuffer and clearing the color and depth buffers:
camera_buffer->Bind();
camera_buffer->SetData("ViewProjection", glm::value_ptr(viewproj_mat));
camera_buffer->SetData("CamPosition", glm::value_ptr(glm::vec4(view_position, 0.0f)));
camera_buffer->Unbind();
for(Entity& entity : scene_entities)
{
    shader->Bind();
    Texture* texture = entity.GetTexture();
    BindTexture(0, texture);
    shader->SetUniformMat4("u_Model", entity.transform);
    shader->SetUniformInt("u_Albedo", 0);
    shader->SetUniformVec4("u_Material.AlbedoColor", entity.AlbedoColor);
    shader->SetUniformFloat("u_Material.Smoothness", entity.Smoothness);
    glBindVertexArray(entity.VertexArray);
    glDrawElements(GL_TRIANGLES, entity.VertexArray.index_buffer.count, GL_UNSIGNED_INT, nullptr);
    // Shader, VArray and Textures unbindings
}
So with this code I manage to render the 3 created textures using the ImGui::Image function, switching the texture index between 0, 1, and 2 like this:
ImGui::Image((ImTextureID)(fbo->textures[0]), viewport_size, ImVec2(0, 1), ImVec2(1, 0));
Now, the color texture (at index 0) works perfectly, as the first image shows.
But when rendering the normal and position textures (indices 1 and 2), I get no result:
Does anybody see what I'm doing wrong? I've been at this for hours and hours and I cannot see it. I ran it through RenderDoc and couldn't see anything wrong; the textures displayed in RenderDoc are the same as in the engine.
The vertex shader I use when rendering the entities is the following:
layout(location = 0) in vec3 a_Position;
layout(location = 1) in vec2 a_TexCoord;
layout(location = 2) in vec3 a_Normal;
out IBlock
{
    vec2 TexCoord;
    vec3 FragPos;
    vec3 Normal;
} v_VertexData;

layout(std140, binding = 0) uniform ub_CameraData
{
    mat4 ViewProjection;
    vec3 CamPosition;
};

uniform mat4 u_ViewProjection = mat4(1.0);
uniform mat4 u_Model = mat4(1.0);

void main()
{
    vec4 world_pos = u_Model * vec4(a_Position, 1.0);
    v_VertexData.TexCoord = a_TexCoord;
    v_VertexData.FragPos = world_pos.xyz;
    v_VertexData.Normal = transpose(inverse(mat3(u_Model))) * a_Normal;
    gl_Position = ViewProjection * u_Model * vec4(a_Position, 1.0);
}
And the fragment shader is the following; they are both pretty simple:
layout(location = 0) out vec4 gBuff_Color;
layout(location = 1) out vec3 gBuff_Normal;
layout(location = 2) out vec3 gBuff_Position;
in IBlock
{
    vec2 TexCoord;
    vec3 FragPos;
    vec3 Normal;
} v_VertexData;

struct Material
{
    float Smoothness;
    vec4 AlbedoColor;
};

uniform Material u_Material = Material(1.0, vec4(1.0));
uniform sampler2D u_Albedo, u_Normal;

void main()
{
    gBuff_Color = texture(u_Albedo, v_VertexData.TexCoord) * u_Material.AlbedoColor;
    gBuff_Normal = normalize(v_VertexData.Normal);
    gBuff_Position = v_VertexData.FragPos;
}
It is not clear from the question what exactly might be happening here, as a lot of GL state - both at the time of rendering to the g-buffer and at the time the g-buffer texture is rendered for visualization - is simply unknown. However, from the images given in the question, one cannot conclude that the actual color output for attachments 1 and 2 is not working.
One issue which comes to mind is alpha blending. The per-fragment operations applied after the fragment shader always work with RGBA values - although the value of the A channel only matters if you enabled blending and use a blend function which somehow depends on the source alpha.
If you declare a custom fragment shader output as float, vec2, or vec3, the remaining components stay undefined (undefined value, not undefined behavior). This does not pose a problem unless some other operation you do depends on those values.
What we also have here is a GL_RGBA16F output format (which is the right choice, because none of the 3-component RGB formats are required to be color-renderable by the spec).
What might happen here is either:
Alpha blending is already turned on during rendering into the g-buffer. The fragment shader's alpha output happens to be zero, so that it appears as 100% transparent and the contents of the texture are not changed.
Alpha blending is not used during rendering into the g-buffer, so the correct contents end up in the texture; the alpha channel just happens to end up all zeros. The texture might then be visualized with alpha blending enabled, resulting in a 100% transparent view.
If it is the first option, turn off blending when rendering into the g-buffer. It would not work with deferred shading anyway. You might still run into the second option then.
If it is the second option, there is no issue at all - the lighting passes which follow will read the data they need (and ultimately, you will want to put useful information into the alpha channel so as not to waste it and to be able to reduce the number of attachments). It is just your visualization (which I assume is for debug purposes only) that is wrong. You can try to fix the visualization.
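If you want the debug view to be robust either way, one option (a sketch, not the only fix) is to declare the g-buffer outputs as vec4 and write an explicit alpha of 1.0:

layout(location = 1) out vec4 gBuff_Normal;
layout(location = 2) out vec4 gBuff_Position;
...
// inside main(): alpha is now defined, so a blending visualizer shows the data
gBuff_Normal = vec4(normalize(v_VertexData.Normal), 1.0);
gBuff_Position = vec4(v_VertexData.FragPos, 1.0);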
As a side note: storing the world-space position in the G-Buffer is a huge waste of bandwidth. All you need to reconstruct the world-space position is the depth value and the inverses of your view and projection matrices. Also, storing world-space position in GL_RGBA16F will very easily run into precision issues if you move your camera away from the world-space origin.
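For illustration, a hedged sketch of that reconstruction (the invViewProj parameter and the depth-to-NDC mapping are assumptions that depend on your projection and depth-range conventions):

// GLSL sketch: rebuild the world-space position from a depth-buffer sample
vec3 ReconstructWorldPos(vec2 uv, float depth, mat4 invViewProj)
{
    vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0); // default GL [0,1] depth range
    vec4 world = invViewProj * ndc;
    return world.xyz / world.w;
}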

Port from OpenGL to GLES 2.0

I have used https://github.com/akrinke/Font-Stash.git for some desktop applications. Now I want to use it on a Raspberry Pi, which uses GLES2. I looked into the code and the only part that doesn't work on GLES is the flush_draw function:
glBindTexture(GL_TEXTURE_2D, texture->id);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, VERT_STRIDE, texture->verts);
glTexCoordPointer(2, GL_FLOAT, VERT_STRIDE, texture->verts+2);
glDrawArrays(GL_TRIANGLES, 0, texture->nverts);
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
I'm trying to port it to GLES like this:
glBindTexture(GL_TEXTURE_2D, texture->id);
glEnable(GL_TEXTURE_2D);
GLint position_index = get_attrib(stash->program, "position");
glEnableVertexAttribArray(position_index);
glVertexAttribPointer (position_index, 2, GL_FLOAT, GL_FALSE, VERT_STRIDE, texture->verts);
GLint texture_coord_index = get_attrib(stash->program, "texCoord");
glEnableVertexAttribArray(texture_coord_index);
glVertexAttribPointer (texture_coord_index, 2, GL_FLOAT, GL_FALSE, VERT_STRIDE, texture->verts + 2);
GLint texture_index = get_uniform(stash->program, "texture");
glUniform1i(texture_index, 0);
glDrawArrays(GL_TRIANGLES, 0, texture->nverts);
glDisable(GL_TEXTURE_2D);
with this vertex shader:
attribute vec4 position;
attribute vec2 texCoord;
varying vec2 texCoordVar;
void main() {
    gl_Position = position;
    texCoordVar = texCoord;
}
and this fragment shader:
precision mediump float; // set default precision for floats to medium
uniform sampler2D texture; // shader texture uniform
varying vec2 texCoordVar; // fragment texture coordinate varying
void main() {
    // sample the texture at the interpolated texture coordinate
    // and write it to gl_FragColor
    gl_FragColor = texture2D(texture, texCoordVar);
}
but I get nothing on screen.
Can anybody show me what's wrong with my code?
You should set up transformations in your vertex shader. The best way to port a fixed-function OpenGL app is to write vertex and fragment shaders that replicate the fixed pipeline, with the transformations set as uniforms that you update every time a transform changes.
glEnable(GL_TEXTURE_2D) is not valid GLES2, by the way. Also, you're not doing any manipulation of the position in your vertex shader, so unless the coordinates are guaranteed to sit within the frustum and you're just passing them through to the rasterizer, you are leaving it to luck whether they end up in the frustum. Are you sure you've accounted for everything the fixed-function pipe used to handle regarding transforms?
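As a sketch of that advice, a GLES2 vertex shader with the transform as a uniform might look like this (u_mvp is a name introduced here, not part of Font-Stash; compute it on the CPU, e.g. as an orthographic projection replicating the old glOrtho state, and upload it with glUniformMatrix4fv whenever it changes):

attribute vec4 position;
attribute vec2 texCoord;
uniform mat4 u_mvp; // modelview * projection, replicating the fixed-function transform
varying vec2 texCoordVar;
void main() {
    gl_Position = u_mvp * position;
    texCoordVar = texCoord;
}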

OpenGL Textures are all black in 3.3 - but work in 3.1

I'm currently working on a simple 3D scene in OpenGL 3.3, but when trying to texture the objects, all of them are textured entirely black. However, if I change the context version to 3.1, it has no problem rendering the textures correctly over the models.
I'm not sure whether this suggests I'm using deprecated functionality, but I'm struggling to see where the problem could be.
Setting up the texture
(load texture from file)
...
glGenTextures(1, &TexID); // Create The Texture ( CHANGE )
glBindTexture(GL_TEXTURE_2D, TexID);
glTexImage2D(GL_TEXTURE_2D, 0, texture_bpp / 8, texture_width, texture_height, 0, texture_type, GL_UNSIGNED_BYTE, texture_imageData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
...
Binding the Texture to Render
// mLocation is the layout location in the shader, from glGetUniformLocation
// mTextureUnit is the specified texture unit to load into. Currently using 0.
// mTextureID is the ID of the loaded texture, as generated above.
glActiveTexture( GL_TEXTURE0 + mData.mTextureUnit );
glBindTexture( GL_TEXTURE_2D, mData.mTextureID );
glUniform1i( mLocation, mData.mTextureUnit );
Fragment Shader
uniform sampler2D diffusemap;
in vec2 passUV;
out vec3 outColour;
...
outColour = texture( diffusemap, passUV ).rgb;
All textures being used are power of 2, square sizes.
Images showing the problem.
GL3.1: http://i.imgur.com/NUgj6vA.png
GL3.3: http://i.imgur.com/oOc0jcd.png
Vertex Shader
#version 330 core
uniform mat4 p;
uniform mat4 v;
uniform mat4 m;
in vec3 vertex;
in vec3 normal;
in vec2 uv;
out vec3 passVertex;
out vec3 passNormal;
out vec2 passUV;
void main( void )
{
    gl_Position = p * v * m * vec4( vertex, 1.0 );
    passVertex = vec3( m * vec4( vertex, 1.0 ) );
    passNormal = vec3( m * vec4( normal, 1.0 ) );
    passUV = uv;
}
In the line:
glTexImage2D(GL_TEXTURE_2D, 0, texture_bpp / 8, texture_width, texture_height, 0, texture_type, GL_UNSIGNED_BYTE, texture_imageData);
The assumption that (texture_bpp / 8) will yield the correct format value is incorrect. It should be one of the GLenum values that specify the internal format, such as GL_RGBA.
Correcting it to GL_RGB (or whichever format matches the internal format of the texture file) fixes the issue entirely and works on both GL 3.3 and GL 3.1:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texture_width, texture_height, 0, texture_type, GL_UNSIGNED_BYTE, texture_imageData);
For the sake of completeness, the internal format of a texture should be an enumerator. And one of the sized enumerators, not one of the unsized ones. Please stop using GL_RGB when you can use GL_RGB8.
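That is, prefer the sized variant:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, texture_width, texture_height, 0, texture_type, GL_UNSIGNED_BYTE, texture_imageData);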
The answer correctly identifies the issue, but it would be helpful to have it explained why the previous assumption works on 3.1 and not on 3.3.
The ability to use a number in the range [1, 4] was deprecated in OpenGL 3.0 and removed in OpenGL 3.1. However, at that time there wasn't a way to say "give me the actual core profile of OpenGL version 3.1"; the WGL/GLX_CONTEXT_CORE_PROFILE_BIT_ARB bits didn't exist. Therefore, when you got a 3.1 context, it was perfectly legal for an implementation to export the ARB_compatibility extension, which still allowed all of the removed functionality.
In 3.2, the ability to explicitly select a profile was added to OpenGL, at which point an implementation would not expose ARB_compatibility in a core profile. That's why your code works when you ask for 3.1 (the implementation is free to give you 3.1 compatibility), but not when you ask for a 3.3 core profile.
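For example, with GLFW (an assumption; the asker's windowing code isn't shown), the difference comes down to which context you request:

// Explicitly request a 3.3 core profile; a plain 3.1 request may still
// expose ARB_compatibility and accept the removed functionality.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);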

texturing using texelFetch()

When I pass non-max values into the texture buffer, the rendered geometry is drawn with colors at max values. I found this issue while using the glTexBuffer() API.
E.g., let's assume my texture data is GLubyte; when I pass any value less than 255, the color is the same as when drawn with 255, instead of a mixture of black and that color.
I tried on AMD and NVIDIA cards, but the results are the same.
Can you tell me where I could be going wrong?
I am copying my code here:
Vert shader:
in vec2 a_position;
uniform float offset_x;
void main()
{
    gl_Position = vec4(a_position.x + offset_x, a_position.y, 1.0, 1.0);
}
Frag shader:
out vec4 Color;
uniform isamplerBuffer sampler;
uniform int index;
void main()
{
    Color = texelFetch(sampler, index);
}
Code:
GLubyte arr[]={128,5,250};
glGenBuffers(1,&bufferid);
glBindBuffer(GL_TEXTURE_BUFFER,bufferid);
glBufferData(GL_TEXTURE_BUFFER,sizeof(arr),arr,GL_STATIC_DRAW);
glBindBuffer(GL_TEXTURE_BUFFER,0);
glGenTextures(1, &buffer_texture);
glBindTexture(GL_TEXTURE_BUFFER, buffer_texture);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, bufferid);
glUniform1f(glGetUniformLocation(shader_data.psId,"offset_x"),0.0f);
glUniform1i(glGetUniformLocation(shader_data.psId,"sampler"),0);
glUniform1i(glGetUniformLocation(shader_data.psId,"index"),0);
glGenBuffers(1,&bufferid1);
glBindBuffer(GL_ARRAY_BUFFER,bufferid1);
glBufferData(GL_ARRAY_BUFFER,sizeof(vertices4),vertices4,GL_STATIC_DRAW);
attr_vertex = glGetAttribLocation(shader_data.psId, "a_position");
glVertexAttribPointer(attr_vertex, 2 , GL_FLOAT, GL_FALSE ,0, 0);
glEnableVertexAttribArray(attr_vertex);
glDrawArrays(GL_TRIANGLE_FAN,0,4);
glUniform1i(glGetUniformLocation(shader_data.psId,"index"),1);
glVertexAttribPointer(attr_vertex, 2 , GL_FLOAT, GL_FALSE ,0,(void *)(32) );
glDrawArrays(GL_TRIANGLE_FAN,0,4);
glUniform1i(glGetUniformLocation(shader_data.psId,"index"),2);
glVertexAttribPointer(attr_vertex, 2 , GL_FLOAT, GL_FALSE ,0,(void *)(64) );
glDrawArrays(GL_TRIANGLE_FAN,0,4);
In this case it draws all three squares in a dark red color.
uniform isamplerBuffer sampler;
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, bufferid);
There's your problem: they don't match.
You created the texture's storage as unsigned 8-bit integers, which are normalized to floats upon reading. But you told the shader that you were giving it signed 8-bit integers which will be read as integers, not floats.
You confused OpenGL by being inconsistent. Mismatching sampler types with texture formats yields undefined behavior.
That should be a samplerBuffer, not an isamplerBuffer.
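A minimal corrected fragment shader, matching the GL_R8 storage from the question:

out vec4 Color;
uniform samplerBuffer sampler; // matches the normalized GL_R8 format
uniform int index;
void main()
{
    // texelFetch on a samplerBuffer returns normalized floats in [0, 1];
    // e.g. the 128 stored in the buffer reads back as roughly 0.5
    Color = texelFetch(sampler, index);
}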

GLSL and FBOs - glActiveTexture doesn't work?

I'm trying to write a simple shader which would add together textures attached to FBOs. There is no problem with the FBO initialization and such (I've tested it). The problem, I believe, is with glActiveTexture(GL_TEXTURE0); it doesn't seem to be doing anything. Here is my frag shader (the shader is definitely being called - I've tested that by putting gl_FragColor = vec4(0,1,0,1); in it):
uniform sampler2D Texture0;
uniform sampler2D Texture1;
varying vec2 vTexCoord;
void main()
{
    vec4 texel0 = texture2D(Texture0, gl_TexCoord[0].st);
    vec4 vec = texel0;
    gl_FragColor = texel0;
}
And in the C++ code I have:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, iFrameBufferAccumulation);
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
// (Render something - this works fine; it renders into the iTextureImgAccumulation texture attached to GL_COLOR_ATTACHMENT0_EXT)
glClear (GL_COLOR_BUFFER_BIT );
glEnable(GL_TEXTURE_RECTANGLE_NV);
glActiveTexture(GL_TEXTURE0);
glBindTexture( GL_TEXTURE_RECTANGLE_NV, iTextureImgAccumulation ); // Bind our frame buffer texture
xShader.setUniform1i("Texture0", 0);
glLoadIdentity(); // Load the Identity Matrix to reset our drawing locations
glTranslatef(0.0f, 0.0f, -2.0f);
xShader.bind();
glBegin(GL_QUADS);
    glTexCoord2f(0, OPT.m_nHeight);
    glVertex3f(-1, -1, 0);
    glTexCoord2f(OPT.m_nWidth, OPT.m_nHeight);
    glVertex3f(1, -1, 0);
    glTexCoord2f(OPT.m_nWidth, 0);
    glVertex3f(1, 1, 0);
    glTexCoord2f(0, 0);
    glVertex3f(-1, 1, 0);
glEnd();
glBindTexture(GL_TEXTURE_RECTANGLE_NV, 0); // unbind with 0, not NULL, for a GLuint
xShader.unbind();
Result: a black screen when displaying the second texture and using the shader (without the shader it's fine). I'm aware that this shader shouldn't do much, but it doesn't even display the first texture.
I'm in the middle of testing things, but the idea is that after rendering to the first texture, I would add the first texture to the second one. To do this, I imagine that this fragment shader would work:
uniform sampler2D Texture0;
uniform sampler2D Texture1;
varying vec2 vTexCoord;
void main()
{
    vec4 texel0 = texture2D(Texture0, gl_TexCoord[0].st);
    vec4 texel1 = texture2D(Texture1, gl_TexCoord[0].st);
    vec4 vec = texel0 + texel1;
    vec.w = 1.0;
    gl_FragColor = vec;
}
And the whole idea is that in a loop, tex2 = tex2 + tex1 (would it be possible to sample tex2 in this shader while rendering to GL_COLOR_ATTACHMENT1_EXT, which is attached to tex2?).
I've tested calling xShader.bind() both before and after initializing the uniform variables. In both cases: black screen.
Anyway, for the moment I'm pretty sure there is some problem with the initialization of the samplers for the textures (maybe because they are attached to an FBO?).
I've checked the rest and it works fine.
Also, another stupid problem:
How can I render a texture to the whole screen?
I've tried something like the following, but it doesn't work (I have to translate the quad a bit):
glViewport(0,0 , OPT.m_nWidth, OPT.m_nHeight);
glBindTexture( GL_TEXTURE_RECTANGLE_NV, iTextureImg/*iTextureImgAccumulation*/ ); // Bind our frame buffer texture
glBegin(GL_QUADS);
    glTexCoord2f(0, OPT.m_nHeight);
    glVertex3f(-1, -1, 0);
    glTexCoord2f(OPT.m_nWidth, OPT.m_nHeight);
    glVertex3f(1, -1, 0);
    glTexCoord2f(OPT.m_nWidth, 0);
    glVertex3f(1, 1, 0);
    glTexCoord2f(0, 0);
    glVertex3f(-1, 1, 0);
glEnd();
It doesn't work with glVertex2f either.
Edit: I've checked, and I can initialize other uniform variables; only the textures are problematic.
I've changed the order, but it still doesn't work. :( The other uniform values are working well, and I've displayed the texture I want to pass to the shader - it looks fine. But for some unknown reason the texture sampler isn't initialized in the fragment shader. Maybe it has something to do with the texture being created as glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_RGB16F /*GL_FLOAT_R32_NV*/, OPT.m_nWidth, OPT.m_nHeight, 0, GL_RED, GL_FLOAT, NULL); (it's not GL_TEXTURE_2D)?
It's not clear what your xShader.bind() does; I guess it calls glUseProgram(...). But uniform variables (the sampler index in your case) should be set up after glUseProgram(...) is called, in this order:
glUseProgram(your_shaders); // probably your xShader.bind() does this
GLuint sampler_idx = 0;
GLint location = glGetUniformLocation(your_shaders, "Texture0");
if(location != -1)
    glUniform1i(location, sampler_idx);
else
    error("can't get uniform location");
glActiveTexture(GL_TEXTURE0 + sampler_idx);
glBindTexture(GL_TEXTURE_2D, iTextureImg);
and yes, you can render to an FBO texture and then use it in a shader in another pass:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, your_fbo_id);
// render to FBO there
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
then use your FBO texture the same way as you use regular textures.
glActiveTexture(GL_TEXTURE0);
glBindTexture( GL_TEXTURE_RECTANGLE_NV, iTextureImgAccumulation ); // Bind our frame buffer texture
xShader.setUniform1i("Texture0", 0);
This is a rectangle texture.
uniform sampler2D Texture0;
This is a 2D texture. They are not the same thing. The sampler type must match the texture type. You need to use a sampler2DRect, assuming your version of GLSL supports that.
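A minimal sketch of the matching declaration (assuming the pre-GLSL-1.40 style used in the question, where rectangle samplers come from ARB_texture_rectangle and are sampled with texture2DRect and unnormalized pixel coordinates):

#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect Texture0;
void main()
{
    // rectangle textures take unnormalized (pixel) coordinates, matching
    // the glTexCoord2f(OPT.m_nWidth, OPT.m_nHeight) calls above
    gl_FragColor = texture2DRect(Texture0, gl_TexCoord[0].st);
}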