error using image2D in compute shader - opengl

I want to do some calculations with a compute shader and write the results into an image. Here is the tutorial I wanted to follow (I use OpenSceneGraph in my project).
But the shader code does not compile. In the tutorial the image is passed as a plain uniform, but after some research I found that it should be declared with a layout qualifier. I changed that part in my code, but it still does not work, and I get an error where I use the image.
Here is my simple compute shader:
#version 430
#define TILE_SIZE 1
layout(local_size_x = TILE_SIZE, local_size_y = TILE_SIZE, local_size_z = 1) in;
layout (binding = 1, rgba32f) writeonly uniform image2D targetTex;
void main() {
imageStore(targetTex, gl_GlobalInvocationID.xy, vec4(1, 0, 1, 0));
}
And my error message:
error C1115: unable to find compatible overloaded function "imageStore(struct image2D1x32_bindless, uvec2, vec4)"
It seems like the format of the image is wrong, but I have no idea where my mistake is.
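Note that the error message is actually complaining about the coordinate type, not the format: the imageStore overload for an image2D takes signed ivec2 coordinates, while gl_GlobalInvocationID is a uvec3, so passing its .xy swizzle hands the compiler a uvec2 it cannot convert. A minimal sketch of the likely fix, assuming that mismatch is the only problem, is an explicit cast:
void main() {
    // imageStore expects ivec2 coordinates for an image2D; gl_GlobalInvocationID is unsigned
    ivec2 coords = ivec2(gl_GlobalInvocationID.xy);
    imageStore(targetTex, coords, vec4(1, 0, 1, 0));
}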

Related

OpenGL Compute Shader: Writing to texture seemingly does nothing

I've found a handful of similar problems posted around the web, and it would appear that I'm already doing what the solutions suggest.
To summarize the problem: despite the compute shader running and no errors being present, no change is being made to the texture it's supposedly writing to.
Here is the compute shader code. It was intended to do something else, but for the sake of troubleshooting it simply fills the output texture with ones.
#version 430 core
layout(local_size_x = 4, local_size_y = 4, local_size_z = 4) in;
layout(r32f) uniform readonly image3D inputDensityField;
layout(r32f) uniform writeonly image3D outputDensityField;
uniform vec4 paintColor;
uniform vec3 paintPoint;
uniform float paintRadius;
uniform float paintDensity;
void main()
{
ivec3 cellIndex = ivec3(gl_GlobalInvocationID);
imageStore(outputDensityField, cellIndex, vec4(1.0, 1.0, 1.0, 1.0));
}
I'm binding the textures to the compute shader like so.
s32 uniformID = glGetUniformLocation(programID, name);
u32 bindIndex = 0; // 1 for the other texture.
glUseProgram(programID);
glUniform1i(uniformID, bindIndex);
glUseProgram(0);
The dispatch looks something like this.
glUseProgram(programID);
glBindImageTexture(0, inputTexID, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);
glDispatchCompute(groupDim.x, groupDim.y, groupDim.z);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
glUseProgram(0);
Inspecting through RenderDoc does not reveal any errors. The textures seem to have been bound correctly, although they are both displayed in RenderDoc as outputs, which I would assume is an error on RenderDoc's part?
Whichever texture was the output of the last glDispatchCompute will later be sampled in a fragment shader.
[Image: order of operations]
[Image: the textures listed in RenderDoc]
The red squares are test fills made with glTexSubImage3D. Again for troubleshooting purposes.
I've made sure that I'm passing the correct texture format.
[Image: example in RenderDoc]
Additionally, I'm using glDebugMessageCallback, which usually catches all errors, so I assume there's no problem with the creation code.
Apologies if the information provided is a bit incoherent. Showing everything would make a very long post and I'm unsure which parts are the most relevant to show.
I've found a solution! Apparently, in the case of a 3D texture, you need to pass GL_TRUE for layered in glBindImageTexture.
https://www.khronos.org/opengl/wiki/Image_Load_Store
Image bindings can be layered or non-layered, which is determined by layered. If layered is GL_TRUE, then texture must be an Array Texture (of some type), a Cubemap Texture, or a 3D Texture. If a layered image is being bound, then the entire mipmap level specified by level is bound.
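Applied to the dispatch code above, that means passing GL_TRUE as the layered argument when binding the 3D textures. A sketch, assuming the same texture IDs and formats as in the question:
glUseProgram(programID);
// 3D textures must be bound as layered images so the whole mipmap level is accessible
glBindImageTexture(0, inputTexID, 0, GL_TRUE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, outputTexID, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_R32F);
glDispatchCompute(groupDim.x, groupDim.y, groupDim.z);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
glUseProgram(0);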

OpenGL compute shader not putting value in uniform buffer

I'm trying to make a compute shader for computing texture samples. For now I just want it to sample a single point in a texture, and later on I will try to have it do so for an array of vectors. While testing this, I found that the shader doesn't seem to be setting the value I'm using as the output.
Shader:
#version 430
layout(local_size_x = 1) in;
layout(std430, binding = 1) buffer samplePoints {
vec2 pos;
};
layout(std430, binding = 2) buffer samples {
float samp;
};
uniform sampler2D tex;
void main() {
//samp = texture(tex, pos).r;
samp = 7.0f;
}
Setting samp to 7 is a test. I run the shader with PyOpenGL, the relevant part being:
shader = GL.glCreateShader(GL.GL_COMPUTE_SHADER)
GL.glShaderSource(shader, open("test.glsl").read())
GL.glCompileShader(shader)
program = GL.glCreateProgram()
GL.glAttachShader(program, shader)
GL.glLinkProgram(program)
points, samples = GL.glGenBuffers(2)
GL.glBindBuffer(GL.GL_UNIFORM_BUFFER, points)
GL.glBufferData(GL.GL_UNIFORM_BUFFER, 8, b"\0\0\0\0\0\0\0\0", GL.GL_STATIC_DRAW)
GL.glBindBuffer(GL.GL_UNIFORM_BUFFER, samples)
GL.glBufferData(GL.GL_UNIFORM_BUFFER, 4, b"\0\0\0\0", GL.GL_STATIC_DRAW)
GL.glUseProgram(program)
GL.glBindBufferBase(GL.GL_UNIFORM_BUFFER, 1, points)
GL.glBindBufferBase(GL.GL_UNIFORM_BUFFER, 2, samples)
GL.glDispatchCompute(1, 1, 1)
GL.glBindBuffer(GL.GL_UNIFORM_BUFFER, samples)
a = GL.glGetBufferSubData(GL.GL_UNIFORM_BUFFER, 0, 4).view("<f4")
print(a)
This just prints the float made from the 4 bytes I placed in the samples buffer earlier, which is 0 in this case. I've omitted various bits of error checking, none of which report any errors along the way.
Where am I going wrong?
That looks like a storage buffer, not a uniform buffer, so wouldn't you need to use GL_SHADER_STORAGE_BUFFER? Also, you need to call glMemoryBarrier before reading the contents back.
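A sketch of how the host code might look with the storage-buffer target and a barrier, assuming the same buffer and program objects as above:
GL.glBindBuffer(GL.GL_SHADER_STORAGE_BUFFER, points)
GL.glBufferData(GL.GL_SHADER_STORAGE_BUFFER, 8, b"\0\0\0\0\0\0\0\0", GL.GL_STATIC_DRAW)
GL.glBindBuffer(GL.GL_SHADER_STORAGE_BUFFER, samples)
GL.glBufferData(GL.GL_SHADER_STORAGE_BUFFER, 4, b"\0\0\0\0", GL.GL_STATIC_DRAW)
GL.glUseProgram(program)
GL.glBindBufferBase(GL.GL_SHADER_STORAGE_BUFFER, 1, points)   # binding = 1 in the shader
GL.glBindBufferBase(GL.GL_SHADER_STORAGE_BUFFER, 2, samples)  # binding = 2 in the shader
GL.glDispatchCompute(1, 1, 1)
GL.glMemoryBarrier(GL.GL_BUFFER_UPDATE_BARRIER_BIT)  # make the shader's writes visible to glGetBufferSubData
GL.glBindBuffer(GL.GL_SHADER_STORAGE_BUFFER, samples)
a = GL.glGetBufferSubData(GL.GL_SHADER_STORAGE_BUFFER, 0, 4).view("<f4")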

What is wrong with my compute shader array indexing?

I'm currently having a problem with my compute shader failing to get an element at a certain index of an input array.
I've read the buffers manually using NVidia NSight and the data seems to be uploaded properly; the problem seems to be with the indexing.
It's supposed to be drawing voxels on a grid. Take this case as an example (what is supposed to be drawn is highlighted in red, while blue is what I actually get):
And here is the SSBO buffer capture in NSight transposed:
This is the compute shader I'm currently using:
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img_output;
layout(std430) buffer;
layout(binding = 0) buffer Input0 {
ivec2 mapSize;
};
layout(binding = 1) buffer Input1 {
bool mapGrid[];
};
void main() {
// base pixel colour for image
vec4 pixel = vec4(1, 1, 1, 1);
// get index in global work group i.e x,y position
ivec2 pixel_coords = ivec2(gl_GlobalInvocationID.xy);
vec2 normalizedPixCoords = vec2(gl_GlobalInvocationID.xy) / gl_NumWorkGroups.xy;
ivec2 voxel = ivec2(int(normalizedPixCoords.x * mapSize.x), int(normalizedPixCoords.y * mapSize.y));
float distanceFromMiddle = length(normalizedPixCoords - vec2(0.5, 0.5));
pixel = vec4(0, 0, mapGrid[voxel.x * mapSize.x + voxel.y], 1); // <--- Where I'm having the problem
// I index the voxels the same exact way on the CPU code and it works fine
// output to a specific pixel in the image
//imageStore(img_output, pixel_coords, pixel * vec4(vignettecolor, 1) * imageLoad(img_output, pixel_coords));
imageStore(img_output, pixel_coords, pixel);
}
NSight doc file: https://ufile.io/wmrcy1l4
I was able to fix the problem by completely ditching SSBOs and using a texture buffer. It turns out the problem was that OpenGL treated each value in the bool array as a 4-byte value, so indexing stepped 4 bytes instead of one for each element.
Based on this post: Shader storage buffer object with bytes
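A rough sketch of that texture-buffer route, assuming the grid is uploaded as one byte per cell (the names mapBuffer, mapTex, mapWidth, mapHeight and gridBytes are illustrative, not from the original code):
// Host side: back a GL_R8UI buffer texture with one byte per voxel.
GLuint mapBuffer, mapTex;
glGenBuffers(1, &mapBuffer);
glBindBuffer(GL_TEXTURE_BUFFER, mapBuffer);
glBufferData(GL_TEXTURE_BUFFER, mapWidth * mapHeight, gridBytes, GL_STATIC_DRAW);
glGenTextures(1, &mapTex);
glBindTexture(GL_TEXTURE_BUFFER, mapTex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8UI, mapBuffer);
// Shader side: uniform usamplerBuffer mapGrid;
//              float v = float(texelFetch(mapGrid, voxel.x * mapSize.x + voxel.y).r);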

OpenGL Invalid Operation

I'm having an issue loading/assigning interleaved vertex data in OpenGL.
I keep getting an INVALID_OPERATION when setting the second attribute.
EDIT: Turns out this only happens on Mac. On Windows, I don't get an INVALID_OPERATION error. I have updated the code below to what it looks like now; it still errors out on Mac.
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
GL.VertexAttribPointer(shader.GetAttribLocation("position"), 3, VertexAttribPointerType.Float, false, _vertexStride, 0);
REngine.CheckGLError();
GL.VertexAttribPointer(shader.GetAttribLocation("normal"), 3, VertexAttribPointerType.Float, false, _vertexStride, 12);
REngine.CheckGLError();
GL.VertexAttribPointer(shader.GetAttribLocation("texcoord"), 2, VertexAttribPointerType.Float, false, _vertexStride, 24);
REngine.CheckGLError();
GL.EnableVertexAttribArray(shader.GetAttribLocation("position"));
REngine.CheckGLError();
GL.EnableVertexAttribArray(shader.GetAttribLocation("normal"));
REngine.CheckGLError();
GL.EnableVertexAttribArray(shader.GetAttribLocation("texcoord"));
REngine.CheckGLError();
Any idea why? Others seem to do it and it works great, but I can't seem to get it to work.
Here is my GLSL for this:
layout(location=0) in vec3 position;
layout(location=1) in vec3 normal;
layout(location=2) in vec2 texcoord;
out vec4 out_position;
out vec4 out_normal;
out vec2 out_texcoord;
void main() {
out_normal = vec4(normal,1.0f);
out_position = vec4(position,1.0f);
out_texcoord = texcoord;
}
and the frag:
out vec4 color;
void main()
{
color = vec4(1.0f,1.0f,1.0f,1.0f);
}
EDIT
Turns out I had stale glErrors in the queue from earlier in the pipeline. I checked earlier and had a bum call to glEnableClientState which isn't supported on Mac using the 4.2 context. I removed it as it wasn't necessary anymore with a full shader approach. This fixed the error and my glorious white mesh was displayed.
Only active attributes have a location. Your normal attribute is not active, as it is not used (the fact that you forward it to out_normal is irrelevant, as out_normal is not used). glGetAttribLocation will return -1 for it, but the attribute index for glVertexAttribPointer is a GLuint, and (GLuint)-1 is far outside the range of allowed attribute indices. You should get the same error for texcoord, too.
Please also note that using sizeof(float) as the size parameter for glVertexAttribPointer is wrong, too. That parameter is the number of components of the attribute vector (1, 2, 3, or 4), not a number of bytes.
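In raw GL terms, one defensive pattern (a sketch, not the original code; programID and vertexStride stand in for whatever the engine uses) is to query the location once and skip attributes that came back as -1 instead of feeding them to glVertexAttribPointer:
GLint loc = glGetAttribLocation(programID, "normal");
if (loc >= 0) {  // -1 means the attribute is inactive or was optimized away
    glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, vertexStride, (const void*)12);
    glEnableVertexAttribArray(loc);
}
Alternatively, since the vertex shader already declares explicit layout(location = N) qualifiers, the fixed indices 0, 1, and 2 can be used directly instead of querying them at all.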

opengl 3d texture issue

I'm trying to use a 3D texture in OpenGL to implement volume rendering. Each voxel has an RGBA colour value and is currently rendered as a screen-facing quad (for testing purposes). I just can't seem to get the sampler to give me a colour value in the shader: the quads always end up black. When I change the shader to generate a colour (based on the xyz coords) it works fine. I'm loading the texture with the following code:
glGenTextures(1, &tex3D);
glBindTexture(GL_TEXTURE_3D, tex3D);
unsigned int colours[8];
colours[0] = Colour::AsBytes<unsigned int>(Colour::Blue);
colours[1] = Colour::AsBytes<unsigned int>(Colour::Red);
colours[2] = Colour::AsBytes<unsigned int>(Colour::Green);
colours[3] = Colour::AsBytes<unsigned int>(Colour::Magenta);
colours[4] = Colour::AsBytes<unsigned int>(Colour::Cyan);
colours[5] = Colour::AsBytes<unsigned int>(Colour::Yellow);
colours[6] = Colour::AsBytes<unsigned int>(Colour::White);
colours[7] = Colour::AsBytes<unsigned int>(Colour::Black);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, 2, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, colours);
The colours array contains the correct data, i.e. the first four bytes have values 0, 0, 255, 255 for blue. Before rendering I bind the texture to the 2nd texture unit like so:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_3D, tex3D);
And render with the following code:
shaders["DVR"]->Use();
shaders["DVR"]->Uniforms["volTex"].SetValue(1);
shaders["DVR"]->Uniforms["World"].SetValue(Mat4(vl_one));
shaders["DVR"]->Uniforms["viewProj"].SetValue(cam->GetViewTransform() * cam->GetProjectionMatrix());
QuadDrawer::DrawQuads(8);
I have used these classes for setting shader params before and they work fine. The QuadDrawer draws eight instanced quads. The vertex shader code looks like this:
#version 330
layout(location = 0) in vec2 position;
layout(location = 1) in vec2 texCoord;
uniform sampler3D volTex;
ivec3 size = ivec3(2, 2, 2);
uniform mat4 World;
uniform mat4 viewProj;
smooth out vec4 colour;
void main()
{
vec3 texCoord3D;
int num = gl_InstanceID;
texCoord3D.x = num % size.x;
texCoord3D.y = (num / size.x) % size.y;
texCoord3D.z = (num / (size.x * size.y));
texCoord3D /= size;
texCoord3D *= 2.0;
texCoord3D -= 1.0;
colour = texture(volTex, texCoord3D);
//colour = vec4(texCoord3D, 1.0);
gl_Position = viewProj * World * vec4(texCoord3D, 1.0) + (vec4(position.x, position.y, 0.0, 0.0) * 0.05);
}
Uncommenting the line where I set the colour value equal to the texcoord works fine and makes the quads coloured. The fragment shader is simply:
#version 330
smooth in vec4 colour;
out vec4 outColour;
void main()
{
outColour = colour;
}
So my question is, what am I doing wrong, why is the sampler not getting any colour values from the 3d texture?
[EDIT]
Figured it out but can't self-answer (new user):
As soon as I posted this I figured it out, so I'll put the answer up to help anyone else (it's not specifically a 3D texture issue, and I've also fallen afoul of it before, d'oh!). I didn't generate mipmaps for the texture, and the magnification/minification filters weren't set to either GL_LINEAR or GL_NEAREST, so with the default mipmapping minification filter the texture was incomplete. Boom! No textures. The same thing happens with 2D textures.
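For reference, a sketch of the fix right after the glTexImage3D call above: either switch to non-mipmapped filtering or actually generate the mipmaps so the texture is complete.
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// or keep the default mipmapping minification filter and build the mip chain instead:
// glGenerateMipmap(GL_TEXTURE_3D);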