OpenGL Compute Shader - glDispatchCompute() does not run - c++

I'm currently working with a compute shader in OpenGL, and my goal is to render from one texture onto another texture with some modifications. However, my compute shader does not seem to have any effect on the textures at all.
After creating the compute shader, I do the following:
//Use the compute shader program
shaderPtr->useProgram();
//Get the uniform location for a uniform called "sourceTex"
//Then connect it to texture-unit 0
GLint location = glGetUniformLocation(shaderPtr->program, "sourceTex");
glUniform1i(location, 0);
//Bind buffers and call compute shader
this->bindAndCompute(bufferA, bufferB);
The bindAndCompute() function looks like this. Its purpose is to make the two buffers accessible to the compute shader and then run the compute shader.
void bindAndCompute(GLuint sourceBuffer, GLuint targetBuffer){
    glBindImageTexture(
        0,              //Always bind to slot 0
        sourceBuffer,
        0,
        GL_FALSE,
        0,
        GL_READ_ONLY,   //Only read from this texture
        GL_RGB16F
    );
    glBindImageTexture(
        1,              //Always bind to slot 1
        targetBuffer,
        0,
        GL_FALSE,
        0,
        GL_WRITE_ONLY,  //Only write to this texture
        GL_RGB16F
    );

    //this->height is currently 960
    glDispatchCompute(1, this->height, 1);  //Call upon shader
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
}
And finally, here is the compute shader. I currently only try to set it so that it makes the second texture completely white.
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable
layout (rgba16, binding=0) uniform image2D sourceTex; //Textures bound to 0 and 1 resp. that are used to
layout (rgba16, binding=1) uniform image2D targetTex; //acquire texture and save changes made to texture
layout (local_size_x=960 , local_size_y=1 , local_size_z=1) in; //Local work-group size
void main(){
    vec4 result;                                    //Vec4 to store the value to be written
    ivec2 pxlPos = ivec2(gl_GlobalInvocationID.xy); //Get pxl-pos
    /*
    result = imageLoad(sourceTex, pxlPos);
    ...
    */
    imageStore(targetTex, pxlPos, vec4(1.0f));      //Write white to texture
}
Now, when I start, bufferB is empty. When I run this, I expect bufferB to become completely white. However, after this code runs, bufferB remains empty. My conclusion is that either:
A: The compute shader does not write to the texture
B: glDispatchCompute() is not run at all
However, I get no errors and the shader compiles as it should. I have also checked that I bind the texture correctly when rendering: I took bufferA, whose contents I already know, and ran bindAndCompute(bufferA, bufferA) to turn bufferA white. However, bufferA remained unaltered. So I have not been able to figure out why my compute shader has no effect. If anyone has any ideas on what I can try, it would be appreciated.
End note: This is my first question on this site. I've tried to present only the relevant information, but I still feel like it may have become too much text anyway. Feedback on how to improve the structure of the question is welcome as well.
---------------------------------------------------------------------
EDIT:
The textures I pass in as sourceBuffer and targetBuffer are created as follows:
glGenTextures(1, buffer);
glBindTexture(GL_TEXTURE_2D, *buffer);
glTexImage2D(
    GL_TEXTURE_2D,
    0,
    GL_RGBA16F,     //Internal format
    this->width,
    this->height,
    0,
    GL_RGBA,        //Format read
    GL_FLOAT,       //Type of values in read format
    NULL            //source
);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

The image format of the images you bind doesn't match the image format declared in the shader. You bind an RGB16F texture (48 bits per texel), but state in the shader that it is of rgba16 format (64 bits per texel).
Formats have to match according to the rules given here. Assuming that you allocated the texture in OpenGL, this means that the total size of each texel has to match. Also note that 3-channel textures are (barring some rather strange exceptions) not supported by image load/store.
As a side note: the shader will execute and write once the texel sizes match, but what you write might be garbage, because your textures are in a 16-bit floating-point format (GL_RGBA16F) while you tell the shader that they are in a 16-bit unsigned normalized format (rgba16). Although this doesn't directly matter for the compute shader, it does matter if you read back the texture, access it through a sampler, or write data > 1.0f or < 0.0f into it. If you want 16-bit floats, use rgba16f in the compute shader.
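A minimal sketch of what the matching setup could look like, assuming the textures keep the GL_RGBA16F internal format from the edit (the glBindImageTexture calls in bindAndCompute and the shader layout change together):
//Corrected bindings (sketch): the format passed to glBindImageTexture
//matches the texture's actual internal format (GL_RGBA16F)...
glBindImageTexture(0, sourceBuffer, 0, GL_FALSE, 0, GL_READ_ONLY,  GL_RGBA16F);
glBindImageTexture(1, targetBuffer, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA16F);

//...and the compute shader declares the same 16-bit float format:
//  layout (rgba16f, binding = 0) uniform readonly  image2D sourceTex;
//  layout (rgba16f, binding = 1) uniform writeonly image2D targetTex;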

Related

OpenGL, render to texture with floating point color without clipping value

I am not really sure what the English name for what I am trying to do is; please tell me if you know.
In order to run some physically based lighting calculations, I need to write floating point data to a texture using one OpenGL shader and read this data back in another OpenGL shader, but the data I want to store may be less than 0 or greater than 1.
To do this, I set up a framebuffer to render to this texture as follows (this is C++):
//Set up the light map we will use for lighting calculation
glGenFramebuffers(1, &light_Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, light_Framebuffer);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);//Needed for light blending (true additive)
glGenTextures(1, &light_texture);
glBindTexture(GL_TEXTURE_2D, light_texture);
//Initialize empty, and at the size of the internal screen
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_FLOAT, 0);
//No interpolation, I want pixelation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//Now the light framebuffer renders to the texture we will use to calculate dynamic lighting
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, light_texture, 0);
GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers);//Color attachment 0 as before
Notice that I use type GL_FLOAT and not GL_UNSIGNED_BYTE; according to this discussion, a floating point texture should not be clipped between 0 and 1.
Now, just to test that this is true, I simply set the color somewhere outside this range in the fragment shader which creates this texture:
#version 400 core
void main()
{
    gl_FragColor = vec4(2.0,-2.0,2.0,2.0);
}
After rendering to this texture, I send this texture to the program which should use it like any other texture:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, light_texture );//This is the texture I rendered too
glUniform1i(surf_lightTex_ID , 1);//This is the ID in the main display program
Again, just to check that this is working, I have replaced the fragment shader with one which tests that the colors have been saved:
#version 400 core
uniform sampler2D lightSampler;
void main()
{
    color = vec4(0,0,0,1);
    if (texture(lightSampler,fragment_pos_uv).r>1.0)
        color.r=1;
    if (texture(lightSampler,fragment_pos_uv).g<0.0)
        color.g=1;
}
If everything worked, everything should turn yellow, but needless to say this only gives me a black screen. So I tried the following:
#version 400 core
uniform sampler2D lightSampler;
void main()
{
    color = vec4(0,0,0,1);
    if (texture(lightSampler,fragment_pos_uv).r==1.0)
        color.r=1;
    if (texture(lightSampler,fragment_pos_uv).g==0.0)
        color.g=1;
}
And I got this:
The parts which are green are in shadow in the testing scene; never mind them. The main point is that all the channels of light_texture get clipped to between 0 and 1, which they should not do. I am not sure if the data is saved correctly and only clipped when I read it, or if the data is clipped to the 0 to 1 range when saving.
So, my question is: is there some way to read from and write to an OpenGL texture such that the data stored may be above 1 or below 0?
Also, no, I cannot fix the problem by using a 32-bit integer per channel and applying a sigmoid function before saving and its inverse after reading the data; that would break alpha blending.
The type and format arguments of glTexImage2D only specify the format of the source image data; they do not affect the internal format of the texture. You must use a sized floating-point internal format, e.g. GL_RGBA32F:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
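As a rough sketch of how the allocation from the question could look with a floating-point internal format (GL_RGBA16F would also do if half precision is enough; the variable names follow the question):
//Allocate the light map with a sized floating-point internal format,
//so values outside [0, 1] are stored without clamping.
glBindTexture(GL_TEXTURE_2D, light_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F,   //sized float internal format
             w, h, 0,
             GL_RGBA, GL_FLOAT, 0);          //source format/type (no data uploaded)
With that change, the test shader that writes vec4(2.0,-2.0,2.0,2.0) should come back unclamped, and the yellow test from the question should pass.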

OpenGL array textures not rendering at all

I'm currently working to convert a project from using a texture atlas to an array texture, but for the life of me I can't get it working.
Some notes about my environment:
I'm using OpenGL 3.3 core context with GLSL version 3.30
The textures are all 128x128 and rendered perfectly fine when using an atlas (barring the edge artifacts which convinced me to switch)
Problems I believe I've ruled out:
Resolution issues - 128x128, being a power of two, should be fine
Texture loading (it works perfectly as it did before)
Incomplete textures (mipmap issues) - I've gone through the common issues regarding mipmaps and I don't believe OpenGL should be expecting them
Here's my code for creating the array texture:
public void createTextureArray() {
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    int handle = glGenTextures();
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D_ARRAY, handle);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, Texture.SIZE);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);

    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, Texture.SIZE, Texture.SIZE, textures.size());

    try {
        int layer = 0;
        for (Texture tex : textures.values()) {
            // Next few lines are just for loading the texture. They've been ruled out as the issue.
            PNGDecoder decoder = new PNGDecoder(ImageHelper.asInputStream(tex.getImage()));
            ByteBuffer buffer = BufferUtils.createByteBuffer(decoder.getWidth() * decoder.getHeight() * 4);
            decoder.decode(buffer, decoder.getWidth() * 4, PNGDecoder.Format.RGBA);
            buffer.flip();

            glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer, decoder.getWidth(), decoder.getHeight(), 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, buffer);

            tex.setLayer(layer);
            layer++;
        }
    } catch (IOException ex) {
        ex.printStackTrace();
        System.err.println("Failed to create/load texture array");
        System.exit(-1);
    }
}
The code for creating the VAO/VBO:
private static int prepareVbo(int handle, FloatBuffer vbo) {
    IntBuffer vaoHandle = BufferUtils.createIntBuffer(1);
    glGenVertexArrays(vaoHandle);
    glBindVertexArray(vaoHandle.get());
    glBindBuffer(GL_ARRAY_BUFFER, handle);
    glBufferData(GL_ARRAY_BUFFER, vbo, GL_STATIC_DRAW);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D_ARRAY, GraphicsMain.TEXTURE_REGISTRY.atlasHandle);

    glEnableVertexAttribArray(positionAttrIndex);
    glEnableVertexAttribArray(texCoordAttrIndex);
    glVertexAttribPointer(positionAttrIndex, 3, GL_FLOAT, false, 24, 0);
    glVertexAttribPointer(texCoordAttrIndex, 3, GL_FLOAT, false, 24, 12);

    glBindVertexArray(0);
    vaoHandle.rewind();
    return vaoHandle.get();
}
Fragment shader:
#version 330 core
uniform sampler2DArray texArray;
varying vec3 texCoord;
void main() {
    gl_FragColor = texture(texArray, texCoord);
}
(texCoord is working fine; it's being passed from the vertex shader correctly.)
I'm about out of ideas, so being still somewhat new to modern OpenGL, I'd like to know if there's anything glaringly wrong with my code.
Some considerations:
you no longer need power-of-two textures
be sure that every layer has the same number of levels/mipmaps, as the wiki says
the first four glTexParameteri calls affect whatever is bound to GL_TEXTURE_2D_ARRAY at that moment, so you want to move them after glBindTexture (see the sketch after this list)
how do you specify how many textures you want to create with glGenTextures()? If a more specific method is available, use it
GL_UNPACK_ROW_LENGTH, if greater than 0, defines the number of pixels in a row. I suppose then that Texture.SIZE is not really the texture size but the dimension of one side (128 in your case). Anyway, you don't need to set it; you can skip it
set GL_UNPACK_ALIGNMENT to 4 only if your row length is a multiple of it. Most of the time people set it to 1 before loading a texture to avoid any trouble and then set it back to 4 once done
the last argument of glTexStorage3D is expected to be the number of layers; make sure textures.size() returns that rather than the texture size (128x128)
the glActiveTexture and glBindTexture calls inside prepareVbo are useless; texture bindings are not part of the VAO state
don't use varying in GLSL; it's deprecated. Switch to plain in/out variables
you may want to take inspiration from this sample
use sampler objects; they give you more flexibility
use Debug Output if available, otherwise glGetError(); some errors are silent and won't show up in the rendered output
you called it prepareVbo, but you initialize both the VAO and the VBO in it
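As a rough sketch of the corrected creation order (shown here in C++ for brevity; the GL call order is the same in LWJGL, and layerCount / pixels are placeholders for the decoded images): generate and bind the array texture first, then set its parameters, then allocate storage and upload each layer.
GLuint handle;
glGenTextures(1, &handle);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, handle);        //bind BEFORE setting parameters

glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);             //safe default for tightly packed RGBA data

glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 128, 128, layerCount);
for (int layer = 0; layer < layerCount; ++layer) {
    //pixels[layer] stands in for the decoded 128x128 RGBA bytes of one image
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                    128, 128, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels[layer]);
}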

Using GL_RGB10_A2UI format in glRenderbufferStorage()

I am using an FBO and rendering to a texture. Here's my code:
GLuint FramebufferName = 0;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
GLuint renderedTexture;
glGenTextures(1, &renderedTexture);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB10_A2UI,256,256,0,GL_RGBA_INTEGER,GL_UNSIGNED_INT_10_10_10_2,0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
GLuint color_buffer;
glGenRenderbuffers(1, &color_buffer);
glBindRenderbuffer(GL_RENDERBUFFER, color_buffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB10_A2UI, 256, 256);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color_buffer);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,renderedTexture, 0);
GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0};
glDrawBuffers(1, DrawBuffers);
GLuint textureId;
LoadImage(textureId); // Loads texture data into textureId
Render(); // Renders textureId onto FramebufferName
glBindFramebuffer(GL_FRAMEBUFFER, 0); // Bind default FBO
glBindTexture(GL_TEXTURE_2D, renderedTexture); //Render using renderedTexture
glDrawArrays (GL_TRIANGLE_FAN,0, 4);
The output is incorrect. The image is not rendered correctly. If I use format GL_RGBA instead of GL_RGB10_A2UI, everything goes fine. The FBO is GL_FRAMEBUFFER_COMPLETE, so no issues there. Am I doing something wrong here?
My fragment shader for GL_RGB10_A2UI is :
in vec2 texcoord;
uniform usampler2D basetexture;
out vec4 Color;
void main(void)
{
    uvec4 IntColor = texture(basetexture, texcoord);
    Color = vec4(IntColor.rgb, 1023.0) / 1023.0;
}
For GL_RGBA I am not doing normalization in shader.
If I use format GL_RGBA instead of GL_RGB10_A2UI everything goes fine.
If that's true, then it means your shader is not writing integers.
We've discussed this before, but you don't really seem to understand something. An integer texture is a very different thing from a floating-point texture or a normalized integer texture.
There is no circumstance where a GL_RGBA8 texture and a GL_RGB10_A2UI texture would both work with the same code. The same shader cannot read from a texture that could be either normalized or integral. The same shader cannot write to a buffer that could be either normalized or integral. The same pixel transfer code cannot write to or read from an image that could be either normalized or integral. They are in every way different entities, which require different parameters to access them.
Furthermore, even if a shader could write to either one, what would it be writing? Integer textures take integers; if you attempt to stick a floating-point value on the range [0, 1] into an integer, it will either come out as 0 or 1. And if you try to put an integer in the [0, 1] range, you will get 0 if your integer was zero, and 1 otherwise. So whatever your fragment shader is doing is very confused.
Odds are very good that you really should be using GL_RGB10_A2, despite your belief that you really did mean to use GL_RGB10_A2UI. If you really meant to be writing integers, your shader would be writing integers and not floats, and therefore your shader would not have "worked" with GL_RGBA8.
However, if you really, truly want to use an unsigned integral texture, and you really, truly understand what that means and how it is different from GL_RGB10_A2, then here are the things you have to do:
Any fragment shader that intends to write to an integer texture must write to an integer output variable. And the signed/unsigned state of that output must match the destination image's signed/unsigned format. So for your GL_RGB10_A2UI, an unsigned integer format, you must be writing to a uvec4. You should be writing integers to this value, not floating-point values.
Any shader that intends to read from an integer texture must use an integer sampler uniform. And that sampler uniform must match the signed/unsigned format of the image. So for your GL_RGB10_A2UI, you must use an unsigned integer sampler (usampler2D).
Pixel transfer operations must explicitly use the _INTEGER pixel transfer formats, as in the sketch below.
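A minimal sketch of those three points for a GL_RGB10_A2UI target (the GLSL is shown as string literals; the names and #version are illustrative, not taken from the question):
//Writing side: the fragment shader outputs raw unsigned integers, not floats.
const char* writeFrag = R"(
    #version 330 core
    out uvec4 outColor;                         //unsigned integer output for an unsigned integer attachment
    void main() {
        outColor = uvec4(512u, 256u, 128u, 2u); //values in [0, 1023] / [0, 3]
    }
)";

//Reading side: sample with an unsigned integer sampler and normalize manually if needed.
const char* readFrag = R"(
    #version 330 core
    uniform usampler2D basetexture;
    in vec2 texcoord;
    out vec4 Color;
    void main() {
        uvec4 v = texture(basetexture, texcoord);
        Color = vec4(vec3(v.rgb) / 1023.0, 1.0);
    }
)";

//Pixel transfers use the _INTEGER client formats, as in the question:
//glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2UI, 256, 256, 0,
//             GL_RGBA_INTEGER, GL_UNSIGNED_INT_10_10_10_2, 0);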

using glTexture in a shader

I'm using OpenGL, and what I want to do is render my scene to a texture and then store that texture so that I can pass it into my fragment shader to be used to render something else.
I created a texture using glGenTexture() and attached it to a framebuffer, then rendered to that framebuffer after binding it with glBindFramebuffer(). I then set the framebuffer binding back to 0 to render to the screen again, but now I'm stuck.
In my fragment shader I have a uniform sampler2D 'myTexture' that I want to use to access that texture. How do I go about doing this?
For .jpg/png images that I found online I just used the following code:
glUseProgram(Shader->GetProgramID());
GLint ID = glGetUniformLocation(Shader->GetProgramID(), "Map");
glUniform1i(ID, 0);
glActiveTexture(GL_TEXTURE0);
MapImage->Bind();
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
However this code doesn't work for the glTexture I created. Specifically I can't call myTexture->Bind() in the same way.
If you truly have "a uniform sampler2D 'myTexture'" in your shader, why are you getting the uniform location for "Map"? You should be getting the uniform location for "myTexture"; that's the sampler's name, after all.
So I created a texture using glGenTexture() and attached it to a frame buffer
So in other words you did this:
glActiveTexture(GL_TEXTURE0);
GLuint myTexture;
glGenTextures(1, &myTexture);
glBindTexture(GL_TEXTURE_2D, myTexture);
// very important and often missed by newbies:
// A framebuffer attachment must be initialized but not bound
// when using the framebuffer, so it must be unbound after
// initializing
glTexImage2D(…, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(…);
glFramebufferTexture2D(…, myTexture, …);
Okay, so myTexture is the name handle for the texture. What do we have to do? Unbind the FBO, so that we can use the attached texture as an image source:
glBindFramebuffer(…, 0);
glActiveTexture(GL_TEXTURE0 + n);
glBindTexture(GL_TEXTURE_2D, myTexture);
glUseProgram(…);
glUniform1i(texture_sampler_location, n);
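For completeness, a small sketch of the sampling side; the sampler name, GLSL version, and the uv/program/n variables are illustrative:
//GLSL side: the sampler name must match what you query with glGetUniformLocation.
const char* frag = R"(
    #version 330 core
    uniform sampler2D myTexture;   //bound to texture unit n via glUniform1i
    in vec2 uv;
    out vec4 fragColor;
    void main() {
        fragColor = texture(myTexture, uv);
    }
)";

//C++ side: query the location by the same name, then point it at unit n.
GLint loc = glGetUniformLocation(program, "myTexture");
glUniform1i(loc, n);   //n = the unit used in glActiveTexture(GL_TEXTURE0 + n)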

glsl fragmentshader render objectID

How do I properly render an integer ID of an object to an integer texture buffer?
Say I have a texture2D with internal format GL_LUMINANCE16, and I attach it as a color attachment to my FBO.
When rendering an object, I pass an integer ID to the shader and would like to render this ID into my integer texture.
The fragment shader output is of type vec4, however.
How do I properly transform my ID into a four-component float and avoid conversion inaccuracies, such that in the end the integer value in my integer texture target corresponds to the integer ID I wanted to render?
I still don't think that there is a clear answer here. So here is how I made it work through a 2D texture:
// First, create a frame buffer:
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// Then generate your texture and define an unsigned int array:
glGenTextures(1, &textureid);
glBindTexture(GL_TEXTURE_2D, textureid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, w, h, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// Attach it to the frame buffer object:
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, textureid, 0);
// Before rendering
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
GLuint buffers[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 }; // this assumes there is another texture used as the render target; color attachment 1 is reserved for the element ids.
glDrawBuffers(2, buffers);
// Clear and render here
glFlush(); // Flush after just in case
glBindFramebuffer(GL_FRAMEBUFFER, 0);
On the GLSL side the fragment shader should have (4.3 core profile code here):
layout(location = 0) out vec4 colorOut; // The first element in 'buffers' so location 0
layout(location = 1) out uvec4 elementID; // The second element in 'buffers' so location 1. unsigned int vector as color
// ...
void main()
{
    //...
    elementID = uvec4( elementid, 0, 0, 0 ); // Write the element id as integer to the red channel.
}
You can read the values on the host side:
unsigned int* ids = new unsigned int[ w*h ];
glBindTexture(GL_TEXTURE_2D, textureid);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, ids);
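If only one pixel is needed (e.g. for mouse picking), a glReadPixels-based variant could look roughly like this, assuming the fbo and attachment layout from the code above:
//Read back the object ID under the cursor from color attachment 1.
GLuint pickedID = 0;
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT1);                   //the integer attachment
glReadPixels(mouseX, mouseY, 1, 1,                    //window coordinates, origin bottom-left
             GL_RED_INTEGER, GL_UNSIGNED_INT, &pickedID);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);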
There are several problems with your question.
First, GL_LUMINANCE16 is not an "integer texture." It is a texture that contains normalized unsigned integer values. It uses integers to represent floats on the range [0, 1]. If you want to store actual integers, you must use an actual integer image format.
Second, you cannot render to a luminance texture; luminance formats are not color-renderable. If you actually want to render to a single-channel texture, you must use a single-channel image format. So instead of GL_LUMINANCE16, you use GL_R16UI, which is a 16-bit single-channel unsigned integral image format.
Now that you have this set up correctly, it's pretty trivial. Define a uint fragment shader output and have your fragment shader write your uint value to it. This uint could come from the vertex shader or from a uniform; however you want to do it.
Obviously you'll also need to attach your texture or renderbuffer to an FBO, but I'm fairly sure you know that.
One final thing: don't use the phrase "texture buffer" unless you mean one of these. Otherwise, it gets confusing.
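A minimal sketch of the GL_R16UI setup described above, assuming an OpenGL 3.3+ context (all names are illustrative):
//Create a 16-bit single-channel unsigned integer texture and attach it to the FBO.
GLuint idTex;
glGenTextures(1, &idTex);
glBindTexture(GL_TEXTURE_2D, idTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, width, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_SHORT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);   //no filtering on integer textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, idTex, 0);

//Matching fragment shader output: a plain uint, fed here from a uniform.
const char* idFrag = R"(
    #version 330 core
    uniform uint objectID;
    out uint fragID;          //written to the GL_R16UI attachment
    void main() { fragID = objectID; }
)";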
I suppose that your integer ID is an identifier for a set of primitives; indeed, it can be defined as a shader uniform:
uniform int vertedObjectId;
Once you render, you want to store the fragment processing into an integer texture. Note that integer textures shall be sampled with integer samplers (isampler2D), which return integer vectors (i.e. ivec4).
This texture can be attached to a framebuffer object (be aware of framebuffer completeness). The framebuffer attachment can be bound to an integer output variable:
out int fragmentObjectId;
void main() {
    fragmentObjectId = vertedObjectId;
}
You need a few extensions (e.g. EXT_texture_integer), or a sufficiently recent OpenGL version (3.0 or later).
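If the ID texture is later read in another shader rather than on the CPU, the sampling side could look roughly like this (sampler name, GLSL version, and the highlighted ID are illustrative):
//Reading object IDs back with an integer sampler in a later pass.
const char* readIdsFrag = R"(
    #version 330 core
    uniform isampler2D idSampler;   //integer sampler for a signed integer texture (use usampler2D for unsigned)
    in vec2 uv;
    out vec4 color;
    void main() {
        int id = texture(idSampler, uv).r;   //filtering must be NEAREST for integer textures
        color = (id == 42) ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);   //highlight one object
    }
)";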