GLSL fragment shader: rendering an object ID - OpenGL

How do I properly render an integer ID of an object to an integer texture buffer?
Say I have a texture2D with internal format GL_LUMINANCE16, and I attach it as a color attachment to my FBO.
When rendering an object, I pass an integer ID to the shader and would like to render this ID into my integer texture.
The fragment shader output is of type vec4, however.
How do I properly transform my ID into a four-component float, avoiding conversion inaccuracies, so that in the end the integer value in my integer texture target corresponds to the integer ID I wanted to render?

I still don't think that there is a clear answer here. So here is how I made it work through a 2D texture:
// First, create a frame buffer:
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// Then generate your texture and define an unsigned int array:
glGenTextures(1, &textureid);
glBindTexture(GL_TEXTURE_2D, textureid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, w, h, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// Attach it to the frame buffer object:
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, textureid, 0);
// Before rendering
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
GLenum buffers[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 }; // Assumes attachment 0 is the regular color render target; attachment 1 is reserved for the element IDs.
glDrawBuffers(2, buffers);
// Clear and render here
glFlush(); // Flush after just in case
glBindFramebuffer(GL_FRAMEBUFFER, 0);
On the GLSL side the fragment shader should have (4.3 core profile code here):
layout(location = 0) out vec4 colorOut; // The first element in 'buffers' so location 0
layout(location = 1) out uvec4 elementID; // The second element in 'buffers' so location 1. unsigned int vector as color
// ...
void main()
{
//...
elementID = uvec4( elementid, 0, 0, 0 ); // Write the element ID as an integer to the red channel.
}
You can read the values on the host side:
unsigned int* ids = new unsigned int[ w*h ]; // Caller owns this buffer; delete[] it when done
glBindTexture(GL_TEXTURE_2D, textureid);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, ids);
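If you only need the ID under a single pixel (e.g. for mouse picking), reading back one texel with glReadPixels is cheaper than fetching the whole texture. A sketch, assuming the FBO from above is still alive and x and y are framebuffer coordinates:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT1); // the integer attachment
unsigned int id = 0;
glReadPixels(x, y, 1, 1, GL_RED_INTEGER, GL_UNSIGNED_INT, &id);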

There are several problems with your question.
First, GL_LUMINANCE16 is not an "integer texture." It is a texture that contains normalized unsigned integer values. It uses integers to represent floats on the range [0, 1]. If you want to store actual integers, you must use an actual integer image format.
Second, you cannot render to a luminance texture; luminance formats are not color-renderable. If you actually want to render to a single-channel texture, you must use a color-renderable single-channel image format. So instead of GL_LUMINANCE16, you use GL_R16UI, which is a 16-bit single-channel unsigned integer image format.
Now that you have this set up correctly, it's pretty trivial. Define a uint fragment shader output and have your fragment shader write your uint value to it. This uint could come from the vertex shader or from a uniform; however you want to do it.
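A minimal sketch of what that shader looks like, assuming the ID arrives as a uniform (the names here are illustrative):
#version 330
uniform uint objectID; // set per object by the application
layout(location = 0) out uint fragID; // bound to the GL_R16UI attachment
void main()
{
    fragID = objectID;
}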
Obviously you'll also need to attach your texture or renderbuffer to an FBO, but I'm fairly sure you know that.
One final thing: don't use the phrase "texture buffer" unless you mean an actual buffer texture (a buffer object accessed through a texture). Otherwise, it gets confusing.

I suppose that your integer ID is an identifier for a set of primitives; indeed, it can be defined as a shader uniform:
uniform int vertedObjectId;
Once you render, you want to store the result of fragment processing into an integer texture. Note that integer textures must be sampled with integer samplers (isampler2D), which return integer vectors (e.g. ivec4).
This texture can be attached to a framebuffer object (be aware of framebuffer completeness). The framebuffer attachment can be bound to an integer output variable:
out int fragmentObjectId;
void main() {
fragmentObjectId = vertedObjectId;
}
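When you later read this texture in another shader, remember the integer sampler; a minimal sketch (names illustrative):
uniform isampler2D objectIdTexture;
in vec2 uv;
// ...
int id = texture(objectIdTexture, uv).r;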
You need support for a few extensions (EXT_texture_integer and EXT_gpu_shader4), or a recent enough OpenGL version; integer textures are core since OpenGL 3.0.
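Regarding the framebuffer-completeness warning above, a sketch of the attachment and the check (names illustrative):
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, idTexture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; // handle the error here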

Related

OpenGL, render to texture with floating point color without clipping value

I am not really sure what the English name for what I am trying to do is; please tell me if you know.
In order to run some physically based lighting calculations, I need to write floating-point data to a texture using one OpenGL shader, and read this data again in another OpenGL shader, but the data I want to store may be less than 0 or above 1.
To do this, I set up a framebuffer to render to this texture as follows (this is C++):
//Set up the light map we will use for lighting calculation
glGenFramebuffers(1, &light_Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, light_Framebuffer);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);//Needed for light blending (true additive)
glGenTextures(1, &light_texture);
glBindTexture(GL_TEXTURE_2D, light_texture);
//Initialize empty, and at the size of the internal screen
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_FLOAT, 0);
//No interpolation, I want pixelation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//Now the light framebuffer renders to the texture we will use to calculate dynamic lighting
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, light_texture, 0);
GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers);//Color attachment 0 as before
Notice that I use type GL_FLOAT and not GL_UNSIGNED_BYTE; according to this discussion, a floating-point-type texture should not be clipped between 0 and 1.
Now, just to test that this is true, I simply set the color somewhere outside this range in the fragment shader which creates this texture:
#version 400 core
void main()
{
gl_FragColor = vec4(2.0,-2.0,2.0,2.0);
}
After rendering to this texture, I send this texture to the program which should use it like any other texture:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, light_texture );//This is the texture I rendered to
glUniform1i(surf_lightTex_ID , 1);//This is the ID in the main display program
Again, just to check that this is working, I have replaced the fragment shader with one which tests whether the colors were saved.
#version 400 core
uniform sampler2D lightSampler;
void main()
{
color = vec4(0,0,0,1);
if (texture(lightSampler,fragment_pos_uv).r>1.0)
color.r=1;
if (texture(lightSampler,fragment_pos_uv).g<0.0)
color.g=1;
}
If everything worked, everything should turn yellow, but needless to say this only gives me a black screen. So I tried the following:
#version 400 core
uniform sampler2D lightSampler;
void main()
{
color = vec4(0,0,0,1);
if (texture(lightSampler,fragment_pos_uv).r==1.0)
color.r=1;
if (texture(lightSampler,fragment_pos_uv).g==0.0)
color.g=1;
}
And I got the following (screenshot omitted):
The parts which are green are in shadow in the testing scene; never mind them. The main point is that all the channels of light_texture get clipped to between 0 and 1, which they should not do. I am not sure whether the data is saved correctly and only clipped when I read it, or whether it is clipped to [0, 1] when saving.
So, my question is: is there some way to read and write an OpenGL texture such that the stored data may be above 1 or below 0?
Also, no, I cannot fix the problem by using a 32-bit integer per channel and applying a sigmoid function before saving and its inverse after reading the data; that would break alpha blending.
The type and format arguments of glTexImage2D only specify the format of the source image data; they do not affect the internal format of the texture. You must request a specific sized internal format, e.g. GL_RGBA32F:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
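If you are on OpenGL 4.2 or later, glTexStorage2D sidesteps the format/type confusion entirely, since it takes only a sized internal format and performs no pixel transfer (a sketch):
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, w, h);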

OpenGL ping pong feedback texture not completely clearing itself. Trail is left behind

The goal:
Effectively read and write to the same texture, like how Shadertoy does their buffers.
The setup:
I have a basic feedback system with 2 textures, each connected to a framebuffer. As I render to frame buffer 1, I bind texture 2 for sampling in the shader. Then, as I render to frame buffer 2, I bind texture 1 for sampling, and repeat. Finally, I output texture 1 to the whole screen with the default frame buffer and a separate shader.
The issue:
This almost works as intended as I'm able to read from the texture in the shader and also output to it, creating the desired feedback loop.
The problem is that the frame buffers do not seem to clear completely to black.
To test, I made a simple trailing effect.
In shadertoy, the trail completely disappears as intended:
Live in shadertoy
But in my app, the trail begins to disappear, but leaves a small amount behind:
My thoughts are that I'm not clearing the frame buffers correctly, or that I am not using GLFW's double buffering correctly in this instance. I've tried every combination of clearing the framebuffers, but I must be missing something here.
The code:
Here is the trailing effect shader with a moving circle (Same as above images)
#version 330
precision highp float;
uniform sampler2D samplerA; // Texture sampler
uniform float uTime; // current execution time
uniform vec2 uResolution; // resolution of window
void main()
{
vec2 uv = gl_FragCoord.xy / uResolution.xy; // Coordinates from 0 - 1
vec3 tex = texture(samplerA, uv).xyz;// Read ping pong texture that we are writing to
vec2 pos = .3*vec2(cos(uTime), sin(uTime)); // Circle position (circular motion around screen)
vec3 c = mix(vec3(1.), vec3(0), step(.0, length(uv - pos)-.07)); // Circle color
tex = mix(c, tex, .981); // Replace some circle color with the texture color
gl_FragColor = vec4(tex, 1.0); // Output to texture
}
Frame buffer and texture creation:
// -- Generate frame buffer 1 --
glGenFramebuffers(1, &frameBuffer1);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer1);
// Generate texture 1
glGenTextures(1, &texture1);
// Bind the newly created texture
glBindTexture(GL_TEXTURE_2D, texture1);
// Create an empty image
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1920, 1080, 0, GL_RGBA, GL_FLOAT, 0);
// Nearest filtering, for sampling
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// Attach output texture to frame buffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture1, 0);
// -- Generate frame buffer 2 --
glGenFramebuffers(1, &frameBuffer2);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer2);
// Generate texture 2
glGenTextures(1, &texture2);
// Bind the newly created texture
glBindTexture(GL_TEXTURE_2D, texture2);
// Create an empty image
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1920, 1080, 0, GL_RGBA, GL_FLOAT, 0);
// Nearest filtering, for sampling
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// Attach texture 2 to frame buffer 2
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture2, 0);
Main loop:
while(programIsRunning){
// Draw scene twice, once to frame buffer 1 and once to frame buffer 2
for (int i = 0; i < 2; i++)
{
// Start trailing effect shader program
glUseProgram(program);
glViewport(0, 0, platform.windowWidth(), platform.windowHeight());
// Write to frame buffer 1
if (i == 0)
{
// Bind and clear frame buffer 1
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer1);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Bind texture 2 for sampler
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, texture2);
glUniform1i(uniforms.samplerA, 0);
}
else // Write to frame buffer 2
{
// Bind and clear frame buffer 2
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer2);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Bind texture 1 for sampler
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, texture1);
glUniform1i(uniforms.samplerA, 0);
}
// Draw fullscreen geometry into the currently bound frame buffer
glDrawArrays(GL_TRIANGLES, 0, 6);
}
// Start screen shader program
glUseProgram(screenProgram);
// Bind default frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glViewport(0, 0, platform.windowWidth(), platform.windowHeight());
// Bind texture 1 for sampler (binding texture 2 should be the same?)
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, texture1);
glUniform1i(uniforms.samplerA, 0);
// Draw final rectangle to screen
glDrawArrays(GL_TRIANGLES, 0, 6);
// Swap glfw buffers
glfwSwapBuffers(platform.window());
}
If this is an issue with clearing I would really like to know why. Changing which frame buffer gets cleared doesn't seem to change anything.
I will keep experimenting in the meantime.
Thank you!
The problem is that you are creating a texture with too little precision, so your exponential moving average computation never discretizes all the way down to zero.
In your call to:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1920, 1080, 0, GL_RGBA, GL_FLOAT, 0);
you are using the unsized internal format GL_RGBA (third argument), which will very likely ultimately result in the GL_RGBA8 internal format actually being used. So, all channels will have a precision of 8 bits.
You probably believed that using GL_FLOAT as the argument for the type parameter results in a 32-bit floating-point texture being allocated: it does not. The type parameter only tells OpenGL how to interpret your data (the last parameter of the function) when/if you actually specify data to be uploaded. You pass 0/NULL, so the type parameter does not influence the call at all, as there is no memory to be interpreted as float values.
So, your texture will have a precision of 8 bits per channel and therefore each channel can hold at most 256 different values.
Given that in your posted image the residual trail has an RGB value of 24 in each channel, we can do the math on how OpenGL arrives at this value and why it won't get any lower:
First, let's do one more round of your exponential moving average between (0, 0, 0) and (24, 24, 24)/255 with your factor of 0.981:
d = (24, 24, 24)/255 * 0.981
With infinite precision, each component of d would be 0.09232941176.
Now, let's see which representable value within [0, 255] this comes closest to: 0.09232941176 * 255 = 23.5439999988.
Correctly rounded to the nearest representable value in the [0, 255] discretization, this is 24 again. And that's where it stays.
In order to fix this, you need to use a higher-precision internal texture format, such as GL_RGBA32F (which is actually what ShaderToy itself uses).
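Concretely, only the internal format argument of the allocation shown above needs to change (GL_RGBA16F would likely also give the average enough precision to decay fully):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 1920, 1080, 0, GL_RGBA, GL_FLOAT, 0);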

OpenGL Compute Shader - glDispatchCompute() does not run

I'm currently working with a compute shader in OpenGL, and my goal is to render from one texture onto another texture with some modifications. However, the compute shader does not seem to have any effect on the textures at all.
After creating the compute shader, I do the following:
//Use the compute shader program
(*shaderPtr).useProgram();
//Get the uniform location for a uniform called "sourceTex"
//Then connect it to texture-unit 0
GLuint location = glGetUniformLocation((*shaderPtr).program, "sourceTex");
glUniform1i(location, 0);
//Bind buffers and call compute shader
this->bindAndCompute(bufferA, bufferB);
The bindAndCompute() function looks like this and its purpose is to ready the two buffers to be accessed by the compute shader and then run the compute shader.
bindAndCompute(GLuint sourceBuffer, GLuint targetBuffer){
glBindImageTexture(
0, //Always bind to slot 0
sourceBuffer,
0,
GL_FALSE,
0,
GL_READ_ONLY, //Only read from this texture
GL_RGB16F
);
glBindImageTexture(
1, //Always bind to slot 1
targetBuffer,
0,
GL_FALSE,
0,
GL_WRITE_ONLY, //Only write to this texture
GL_RGB16F
);
//this->height is currently 960
glDispatchCompute(1, this->height, 1); //Call upon shader
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
}
And finally, here is the compute shader. I currently only try to set it so that it makes the second texture completely white.
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable
layout (rgba16, binding=0) uniform image2D sourceTex; //Textures bound to 0 and 1 resp. that are used to
layout (rgba16, binding=1) uniform image2D targetTex; //acquire texture and save changes made to texture
layout (local_size_x=960 , local_size_y=1 , local_size_z=1) in; //Local work-group size
void main(){
vec4 result; //Vec4 to store the value to be written
ivec2 pxlPos = ivec2(gl_GlobalInvocationID.xy); //Get pixel position
/*
result = imageLoad(sourceTex, pxlPos);
...
*/
imageStore(targetTex, pxlPos, vec4(1.0f)); //Write white to texture
}
Now, bufferB starts out empty. When I run this I expect bufferB to become completely white. However, after this code runs, bufferB remains empty. My conclusion is that either
A: The compute shader does not write to the texture
B: glDispatchCompute() is not run at all
However, I get no errors and the shader compiles as it should. I have checked that I bind the texture correctly when rendering: I bound bufferA, whose contents I already know, then ran bindAndCompute(bufferA, bufferA) to turn bufferA white. However, bufferA was unaltered. So I have not been able to figure out why my compute shader has no effect. If anyone has any ideas on what I can try, it would be appreciated.
End note: This has been my first question asked on this site. I've tried to present only relevant information but I still feel like maybe it became too much text anyway. If there is feedback on how to improve the structure of the question that is welcome as well.
---------------------------------------------------------------------
EDIT:
The textures I pass in as sourceBuffer and targetBuffer are defined as follows:
glGenTextures(1, buffer);
glBindTexture(GL_TEXTURE_2D, *buffer);
glTexImage2D(
GL_TEXTURE_2D,
0,
GL_RGBA16F, //Internal format
this->width,
this->height,
0,
GL_RGBA, //Format read
GL_FLOAT, //Type of values in read format
NULL //source
);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
The image format of the images you bind doesn't match the image format declared in the shader. You bind an RGB16F texture (48 bits per texel), but state in the shader that it is of rgba16 format (64 bits per texel).
Formats have to match according to the rules given here. Assuming that you allocated the texture in OpenGL, this means that the total size of each texel has to match. Also note that 3-channel textures are (apart from some rather strange exceptions) not supported by image load/store.
As a side note: the shader will execute and write if the texel sizes match, but what you write might be garbage, because your textures are in 16-bit floating-point format (GL_RGBA16F) while you tell the shader that they are in 16-bit unsigned normalized format (rgba16). Although this doesn't directly matter for the compute shader itself, it does matter if you read back the texture, access it through a sampler, or write data > 1.0f or < 0.0f into it. If you want 16-bit floats, use rgba16f in the compute shader.
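Putting both fixes together for the GL_RGBA16F textures shown in the edit, a sketch of the matching host and shader declarations:
// Host side: the image unit format now matches the texture's actual internal format
glBindImageTexture(0, sourceBuffer, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA16F);
glBindImageTexture(1, targetBuffer, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA16F);
// Shader side: declare the matching float image format
layout (rgba16f, binding=0) uniform image2D sourceTex;
layout (rgba16f, binding=1) uniform image2D targetTex;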

What's wrong with my framebuffer?

I'm creating a framebuffer object to be my gbuffer for deferred shading. I mainly learned from http://ogldev.atspace.co.uk/, and modified it to be a little more... sane. Here's the source code where I create the framebuffer:
/* Create the FBO */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* Create the gbuffer textures */
glGenTextures(GBUFFER_NUM_TEXTURES, tex);
/* Create the color buffer */
glBindTexture(GL_TEXTURE_2D, tex[GBUFFER_COLOR]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex[GBUFFER_COLOR], 0);
/* Create the normal buffer */
glBindTexture(GL_TEXTURE_2D, tex[GBUFFER_NORMAL]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG16F, width, height, 0, GL_RG, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, tex[GBUFFER_NORMAL], 0);
/* Create the depth-stencil buffer */
glBindTexture(GL_TEXTURE_2D, tex[GBUFFER_DEPTH_STENCIL]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0, GL_DEPTH_STENCIL, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, tex[GBUFFER_DEPTH_STENCIL], 0);
GLenum drawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, drawBuffers);
glReadBuffer(GL_NONE);
/* Check for errors */
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
{
error("In GBuffer::init():\n");
errormore("Failed to create Framebuffer, status: 0x%x\n", status);
fbo = 0;
return;
}
// restore default FBO
glBindFramebuffer(GL_FRAMEBUFFER, 0);
When I run this, however, status comes back as GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. If it's not clear, I'm trying to create 3 gbuffer textures:
a 32-bit RGBA color buffer (I'd use 24-bit but I'm scared of alignment penalties),
a 32-bit RG normal buffer (each component using a 16-bit float, but I might get away with a signed short?)
a 24-bit Depth buffer packed with an 8-bit Stencil buffer
(total of 96 bits, or 12 bytes)
Possible problem areas that I can see might be using GL_FLOAT for the normal buffer and GL_FLOAT for the depth-stencil buffer. I'd imagine GL_HALF_FLOAT would be more appropriate for normals, but that's not on the list of types I can use with glTexImage2D. Similarly, I have no idea what type is most appropriate for a depth-stencil buffer.
What am I doing wrong?
Your use of GL_FLOAT is mostly irrelevant, since no pixel transfer actually happens.
You can supply anything you want there as long as it is a meaningful data type. While no pixel transfer happens when you pass NULL for data, GL still validates the pixel transfer data type against the set of valid types and will raise an error if you do something wrong. To that end, if it raises an error, the texture will be incomplete and thus cannot be used as an FBO attachment.
Here is where the problem lies: GL_FLOAT is not a meaningful data type for pixel transfer into a packed GL_DEPTH_STENCIL image format. It expects a packed data type such as GL_UNSIGNED_INT_24_8, or something really exotic like GL_FLOAT_32_UNSIGNED_INT_24_8_REV (a 64-bit packed floating-point depth + stencil format).
In any event, there are actually two components that need to be packed into your data type. GL_FLOAT can only describe one of the two components, because floating-point stencil buffers are meaningless.
By the way, this whole confusing mess about pixel transfer data types can be completely avoided if you use something like glTexStorage2D (...) to only allocate storage for the texture. glTexImage2D (...) does double duty, allocating storage for a texture LOD and providing a mechanism to initialize it with data. You really do not care about the latter part if you are drawing into the texture with an FBO, since that is the only place it gets any data from.
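So, for the depth-stencil allocation above, either hand glTexImage2D a packed type it accepts, or allocate immutable storage; a sketch of both options:
// Option 1: a packed transfer type that matches GL_DEPTH24_STENCIL8
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);
// Option 2 (GL 4.2+): allocate storage only; no pixel transfer validation at all
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);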

Using RGB10_A2_UI format in glRenderBufferStorage()

I am using FBO and rendering to a texture. Here's my code :
GLuint FramebufferName = 0;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
GLuint renderedTexture;
glGenTextures(1, &renderedTexture);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB10_A2UI,256,256,0,GL_RGBA_INTEGER,GL_UNSIGNED_INT_10_10_10_2,0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
GLuint color_buffer;
glGenRenderbuffers(1, &color_buffer);
glBindRenderbuffer(GL_RENDERBUFFER, color_buffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB10_A2UI, 256, 256);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color_buffer);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,renderedTexture, 0);
GLenum DrawBuffers[2] = {GL_COLOR_ATTACHMENT0};
glDrawBuffers(1, DrawBuffers);
GLuint textureId;
LoadImage(textureId); // Loads texture data into textureId
Render(); // Renders textureId onto FramebufferName
glBindFramebuffer(GL_FRAMEBUFFER, 0); // Bind default FBO
glBindTexture(GL_TEXTURE_2D, renderedTexture); //Render using renderedTexture
glDrawArrays (GL_TRIANGLE_FAN,0, 4);
The output is incorrect; the image is not rendered correctly. If I use format GL_RGBA instead of GL_RGB10_A2UI, everything works fine. The FBO is GL_FRAMEBUFFER_COMPLETE, so no issues there. Am I doing something wrong here?
My fragment shader for GL_RGB10_A2UI is :
in vec2 texcoord;
uniform usampler2D basetexture;
out vec4 Color;
void main(void)
{
uvec4 IntColor = texture(basetexture, texcoord);
Color = vec4(IntColor.rgb, 1023.0) / 1023.0;
}
For GL_RGBA I am not doing the normalization in the shader.
If I use format GL_RGBA instead of GL_RGB10_A2UI everything goes fine.
If that's true, then it means your shader is not writing integers.
We've discussed this before, but you don't really seem to understand something. An integer texture is a very different thing from a floating-point texture or a normalized integer texture.
There is no circumstance where a GL_RGBA8 texture and a GL_RGB10_A2UI texture would both work with the same code. The same shader cannot read from a texture that could be either a normalized or an integral texture. The same shader cannot write to a buffer that could be either normalized or integral. The same pixel transfer code cannot write to or read from an image that could be either normalized or integral. They are in every way different entities, which require different parameters to access them.
Furthermore, even if a shader could write to either one, what would it be writing? Integer textures take integers; if you attempt to stick a floating-point value in the range [0, 1] into an integer, it will come out as either 0 or 1. And if you try to put an integer into a normalized [0, 1] texture, you will get 0 if your integer was zero, and 1 otherwise. So whatever your fragment shader is doing is very confused.
Odds are very good that you really should be using GL_RGB10_A2, despite your belief that you really did mean to use GL_RGB10_A2UI. If you really meant to be writing integers, your shader would be writing integers and not floats, and therefore your shader would not have "worked" with GL_RGBA8.
However, if you really, truly want to use an unsigned integral texture, and you really, truly understand what that means and how it is different from GL_RGB10_A2, then here are the things you have to do:
Any fragment shader that intends to write to an integer texture must write to an integer output variable. And the signed/unsigned state of that output must match the destination image's signed/unsigned format. So for your GL_RGB10_A2UI, an unsigned integer format, you must be writing to a uvec4. You should be writing integers to this value, not floating-point values.
Any shader that intends to read from an integer texture must use an integer sampler uniform. And that sampler uniform must match the signed/unsigned format of the image. So for your GL_RGB10_A2UI, you must use an unsigned integer sampler: usampler2D (which your fragment shader does use).
Pixel transfer operations must explicitly use the _INTEGER pixel transfer formats.
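For example, a fragment shader writing to the GL_RGB10_A2UI attachment would look something like this (a sketch; the values are illustrative and must fit in the 10/10/10/2 bit ranges):
#version 330
out uvec4 Color;
void main(void)
{
    Color = uvec4(1023u, 512u, 0u, 3u); // raw unsigned integers, not normalized floats
}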