Writing a simple compute shader in OpenGL to understand how it works, I can't manage to obtain the result I want.
I want to pass my compute shader an array of structures, colorStruct, to colour an output texture.
I would like a red image when "wantedColor" = 0 in my compute shader, a green image when "wantedColor" = 1, and blue when it is 2.
But I actually get only red when "wantedColor" = 1, 2, or 3, and black when "wantedColor" > 2...
If someone has an idea... or maybe I have misunderstood how compute shader inputs work?
Thank you for your help. Here is the interesting part of my code.
My compute shader:
#version 430 compatibility

layout(std430, binding = 4) buffer Couleureuh
{
    vec3 Coul[3]; // array of structures
};

layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img_output;

void main() {
    // base pixel colour for the image
    vec4 pixel = vec4(0.0, 0.0, 0.0, 1.0);
    // get index in global work group, i.e. x,y position
    ivec2 pixel_coords = ivec2(gl_GlobalInvocationID.xy);
    ivec2 dims = imageSize(img_output);

    int colorWanted = 0;
    pixel = vec4(Coul[colorWanted], 1.0);

    // output to a specific pixel in the image
    imageStore(img_output, pixel_coords, pixel);
}
Compute shader and SSBO initialization:
GLuint structBuffer;
glGenBuffers(1, &structBuffer);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, structBuffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, 3 * sizeof(colorStruct), NULL, GL_STATIC_DRAW);

GLint bufMask = GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT; // invalidate makes a big difference when re-writing

colorStruct *coul;
coul = (colorStruct *) glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, 3 * sizeof(colorStruct), bufMask);

coul[0].r = 1.0f;
coul[0].g = 0.0f;
coul[0].b = 0.0f;

coul[1].r = 0.0f;
coul[1].g = 1.0f;
coul[1].b = 0.0f;

coul[2].r = 0.0f;
coul[2].g = 0.0f;
coul[2].b = 1.0f;

glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 4, structBuffer);

m_out_texture.bindImage();

// Launch compute shader
m_shader.use();
glDispatchCompute(m_tex_w, m_tex_h, 1);

// Prevent sampling before all writes to the image are done
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
vec3s are always 16-byte aligned. As such, when they appear in an array, they behave like vec4s, even with std430 layout.
Never use vec3 in interface blocks. Use either an array of floats (accessing the three components individually) or an array of vec4 (with one unused component).
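For illustration, here is a minimal sketch of that fix on both sides; the question never shows the definition of colorStruct, so the padded CPU layout below is an assumption:

// CPU side (sketch): pad each element to 16 bytes so it matches
// the 16-byte array stride the shader sees.
struct colorStruct {
    float r, g, b;
    float pad; // explicit padding; sizeof(colorStruct) == 16
};

And on the GLSL side, declare the array as vec4 and use only the rgb part:

layout(std430, binding = 4) buffer Couleureuh
{
    vec4 Coul[3];
};
// ...
pixel = vec4(Coul[colorWanted].rgb, 1.0);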
Related
I'm currently having a problem with my compute shader failing to properly fetch an element at a certain index of an input array.
I've inspected the buffers manually using NVIDIA Nsight and the data seems to be uploaded properly; the problem seems to be with indexing.
It's supposed to be drawing voxels on a grid. Take this case as an example (what is supposed to be drawn is highlighted in red, while blue is what I am getting):
And here is the SSBO buffer capture in Nsight, transposed:
This is the compute shader I'm currently using:
#version 430

layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img_output;

layout(std430) buffer;

layout(binding = 0) buffer Input0 {
    ivec2 mapSize;
};

layout(binding = 1) buffer Input1 {
    bool mapGrid[];
};

void main() {
    // base pixel colour for image
    vec4 pixel = vec4(1, 1, 1, 1);
    // get index in global work group, i.e. x,y position
    ivec2 pixel_coords = ivec2(gl_GlobalInvocationID.xy);
    vec2 normalizedPixCoords = vec2(gl_GlobalInvocationID.xy) / gl_NumWorkGroups.xy;
    ivec2 voxel = ivec2(int(normalizedPixCoords.x * mapSize.x), int(normalizedPixCoords.y * mapSize.y));
    float distanceFromMiddle = length(normalizedPixCoords - vec2(0.5, 0.5));

    pixel = vec4(0, 0, mapGrid[voxel.x * mapSize.x + voxel.y], 1); // <--- Where I'm having the problem
    // I index the voxels the exact same way in the CPU code and it works fine

    // output to a specific pixel in the image
    //imageStore(img_output, pixel_coords, pixel * vec4(vignettecolor, 1) * imageLoad(img_output, pixel_coords));
    imageStore(img_output, pixel_coords, pixel);
}
Nsight doc file: https://ufile.io/wmrcy1l4
I was able to fix the problem by completely ditching SSBOs and using a buffer texture instead. It turns out the problem was that OpenGL treats each bool element as a 4-byte value, so it steps 4 bytes per index instead of the 1 byte my CPU-side array used.
Based on this post: Shader storage buffer object with bytes
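If keeping the SSBO is preferable, a minimal sketch of an alternative fix would be to make that 4-byte stride explicit on both sides; this assumes the CPU side uploads 32-bit values (e.g. an array of int32 flags) instead of 1-byte bools:

// std430 gives bool a 4-byte array stride, so use an explicitly
// 32-bit element type and upload matching 32-bit values from the CPU.
layout(binding = 1) buffer Input1 {
    uint mapGrid[]; // 0 = empty, 1 = filled
};

The lookup in main() would then read float(mapGrid[voxel.x * mapSize.x + voxel.y]).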
I am trying to use a GLSL shader with p5.js to create a simulation like the Game of Life. To do that I want to create a shader which takes a texture as a uniform and draws a new texture based on that previous texture. In the next iteration this new texture is used as the uniform, which should allow me to create a simulation following the idea exposed here. I am experienced with p5.js but I'm completely new to shader programming, so I'm probably missing something.
For now my code is as straightforward as possible:
In the preload() function, I create a texture using the createImage() function and set some pixels to white and the others to black.
In the setup() function I use this texture to run the shader a first time to create a new texture. I also set a timer to run the shader at regular intervals and draw the result in a buffer.
In the draw() function I draw the buffer on the canvas.
To keep things simple, I keep the canvas and the texture the same size.
My issue is that at some point the y coordinates in my code seem to get inverted, and I don't understand why. My understanding is that my code should show a still image, but each time I run the shader the image is inverted. Here is what I mean:
I am not sure if my issue comes from how I use GLSL, how I use p5, or a mix of both. Can someone explain to me where this weird y inversion comes from?
Here is my minimal reproducible example (which is also in the p5 editor here):
The sketch file:
const sketch = (p5) => {
  const D = 100;

  let initialTexture;
  let shader;
  let graphics;

  p5.preload = () => {
    // Create the initial image
    initialTexture = p5.createImage(D, D);
    initialTexture.loadPixels();
    for (let i = 0; i < initialTexture.width; i++) {
      for (let j = 0; j < initialTexture.height; j++) {
        const alive = i === j || i === 10 || j === 40;
        const color = p5.color(250, 250, 250, alive ? 250 : 0);
        initialTexture.set(i, j, color);
      }
    }
    initialTexture.updatePixels();

    // Initialize the shader
    shader = p5.loadShader('uniform.vert', 'test.frag');
  };

  p5.setup = () => {
    const canvas = p5.createCanvas(D, D, p5.WEBGL);
    canvas.parent('canvasDiv');

    // Create the buffer the shader will draw on
    graphics = p5.createGraphics(D, D, p5.WEBGL);
    graphics.shader(shader);

    /*
     * Initial step to set up the initial texture
     */
    // Used to normalize the frag coordinates
    shader.setUniform('u_resolution', [p5.width, p5.height]);
    // First state of the simulation
    shader.setUniform('u_texture', initialTexture);
    graphics.rect(0, 0, p5.width, p5.height);

    // Call the shader each time interval
    setInterval(updateSimulation, 1009);
  };

  const updateSimulation = () => {
    // Use the previous state as a texture
    shader.setUniform('u_texture', graphics);
    graphics.rect(0, 0, p5.width, p5.height);
  };

  p5.draw = () => {
    p5.background(0);
    // Use the buffer on the canvas
    p5.image(graphics, -p5.width / 2, -p5.height / 2);
  };
};

new p5(sketch);
The fragment shader, which for now only takes the colour of the texture and reuses it (I tried using st instead of uv, to no avail):
precision highp float;

uniform vec2 u_resolution;
uniform sampler2D u_texture;

// grab texcoords from vert shader
varying vec2 vTexCoord;

void main() {
    // Normalize the position between 0 and 1
    vec2 st = gl_FragCoord.xy / u_resolution.xy;
    // Get the texture coordinate from the vertex shader
    vec2 uv = vTexCoord;
    // Get the color at the texture coordinate
    vec4 c = texture2D(u_texture, uv);
    // Reuse the same color
    gl_FragColor = c;
}
And the vertex shader, which I took from an example and does nothing except pass the coordinates along:
/*
 * vert file and comments from adam ferriss
 * https://github.com/aferriss/p5jsShaderExamples
 * with additional comments from Louise Lessel
 */
precision highp float;

// This “vec3 aPosition” is a built-in shader functionality. You must keep that naming.
// It automatically gets the position of every vertex on your canvas
attribute vec3 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;

// We always must do at least one thing in the vertex shader:
// tell the pixel where on the screen it lives:
void main() {
    // copy the texcoords
    vTexCoord = aTexCoord;
    // copy the position data into a vec4, using 1.0 as the w component
    vec4 positionVec4 = vec4(aPosition, 1.0);
    positionVec4.xy = positionVec4.xy * 2.0 - 1.0;
    // Send the vertex information on to the fragment shader
    // this is done automatically, as long as you put it into the built-in variable “gl_Position”
    gl_Position = positionVec4;
}
Long story short: the texture coordinates for a rectangle or a plane drawn with p5.js are (0, 0) in the bottom left and (1, 1) in the top right, whereas the coordinate system for sampling values from a texture is (0, 0) in the top left and (1, 1) in the bottom right. You can verify this by commenting out your colour sampling code in your fragment shader and using the following:
float val = (uv.x + uv.y) / 2.0;
gl_FragColor = vec4(val, val, val, 1.0);
As you can see by the resulting image:
The value (0 + 0) / 2 results in black in the lower left, and (1 + 1) / 2 results in white in the upper right.
So, to sample the correct portion of the texture you just need to flip the y component of the uv vector:
texture2D(u_texture, vec2(uv.x, 1.0 - uv.y));
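Here is a full working version of the sketch with that fix applied: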
const sketch = (p5) => {
  const D = 200;

  let initialTexture;
  let shader;
  let graphics;

  p5.preload = () => {
    // This doesn't actually need to go in preload
    // Create the initial image
    initialTexture = p5.createImage(D, D);
    initialTexture.loadPixels();
    for (let i = 0; i < initialTexture.width; i++) {
      for (let j = 0; j < initialTexture.height; j++) {
        // draw a big checkerboard
        const alive = (p5.round(i / 10) + p5.round(j / 10)) % 2 == 0;
        const color = alive ? p5.color('white') : p5.color(150, p5.map(j, 0, D, 50, 200), p5.map(i, 0, D, 50, 200));
        initialTexture.set(i, j, color);
      }
    }
    initialTexture.updatePixels();
  };

  p5.setup = () => {
    const canvas = p5.createCanvas(D, D, p5.WEBGL);

    // Create the buffer the shader will draw on
    graphics = p5.createGraphics(D, D, p5.WEBGL);
    // Initialize the shader
    shader = graphics.createShader(vert, frag);
    graphics.shader(shader);

    /*
     * Initial step to set up the initial texture
     */
    // Used to normalize the frag coordinates
    shader.setUniform('u_resolution', [p5.width, p5.height]);
    // First state of the simulation
    shader.setUniform('u_texture', initialTexture);
    graphics.rect(0, 0, p5.width, p5.height);

    // Call the shader each time interval
    setInterval(updateSimulation, 100);
  };

  const updateSimulation = () => {
    // Use the previous state as a texture
    shader.setUniform('u_texture', graphics);
    graphics.rect(0, 0, p5.width, p5.height);
  };

  p5.draw = () => {
    p5.background(0);
    // Use the buffer on the canvas
    p5.texture(graphics);
    p5.rect(-p5.width / 2, -p5.height / 2, p5.width, p5.height);
  };

  const frag = `
precision highp float;

uniform vec2 u_resolution;
uniform sampler2D u_texture;

// grab texcoords from vert shader
varying vec2 vTexCoord;
varying vec2 vPos;

void main() {
    // Get the texture coordinate from the vertex shader
    vec2 uv = vTexCoord;
    gl_FragColor = texture2D(u_texture, vec2(uv.x, 1.0 - uv.y));

    //// For debugging uv coordinate orientation
    // float val = (uv.x + uv.y) / 2.0;
    // gl_FragColor = vec4(val, val, val, 1.0);
}
`;

  const vert = `
/*
 * vert file and comments from adam ferriss
 * https://github.com/aferriss/p5jsShaderExamples
 * with additional comments from Louise Lessel
 */
precision highp float;

// This “vec3 aPosition” is a built-in shader functionality. You must keep that naming.
// It automatically gets the position of every vertex on your canvas
attribute vec3 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;

// We always must do at least one thing in the vertex shader:
// tell the pixel where on the screen it lives:
void main() {
    // copy the texcoords
    vTexCoord = aTexCoord;
    // copy the position data into a vec4, using 1.0 as the w component
    vec4 positionVec4 = vec4(aPosition, 1.0);
    // This maps positions 0..1 to -1..1
    positionVec4.xy = positionVec4.xy * 2.0 - 1.0;
    // Send the vertex information on to the fragment shader
    // this is done automatically, as long as you put it into the built-in variable “gl_Position”
    gl_Position = positionVec4;
}`;
};

new p5(sketch);
<script src="https://cdn.jsdelivr.net/npm/p5@1.3.1/lib/p5.js"></script>
I am following this tutorial https://github.com/mattdesl/lwjgl-basics/wiki/LibGDX-Meshes-Lesson-1 on rendering meshes. It works fine as a desktop application, but deployed to HTML5 it's all black and spams:
[.WebGL-000001FC218B3370]GL ERROR :GL_INVALID_OPERATION : glDrawArrays: attempt to access out of range vertices in attribute 0
Why does it not work? I am not using an array in the shader.
I'm using this simple shader, which is just supposed to render the position and colour of a vertex:
Vertex shader
//our attributes
attribute vec2 a_position;
attribute vec4 a_color;
//our camera matrix
uniform mat4 u_projTrans;
//send the color out to the fragment shader
varying vec4 vColor;
void main() {
    vColor = a_color;
    gl_Position = u_projTrans * vec4(a_position.xy, 0.0, 1.0);
}
Fragment shader
#ifdef GL_ES
precision mediump float;
#endif
//input from vertex shader
varying vec4 vColor;
void main() {
    gl_FragColor = vColor;
}
Rendering like this:
triangle.mesh.render(shaderProgram, GL20.GL_TRIANGLES, 0, 18 / NUM_COMPONENTS);
Edit
My vertex specification
public static final int MAX_TRIS = 1;
public static final int MAX_VERTS = MAX_TRIS * 3;
// ...
protected float[] verts = new float[MAX_VERTS * NUM_COMPONENTS];
mesh = new Mesh(true, MAX_VERTS, 0,
new VertexAttribute(VertexAttributes.Usage.Position, 2, "a_position"),
new VertexAttribute(VertexAttributes.Usage.ColorPacked, 4, "a_color"));
float c = color.toFloatBits();
idx = 0;
verts[idx++] = coordinates[0].x;
verts[idx++] = coordinates[0].y;
verts[idx++] = c;
//top left vertex
verts[idx++] = coordinates[1].x;
verts[idx++] = coordinates[1].y;
verts[idx++] = c;
//bottom right vertex
verts[idx++] = coordinates[2].x;
verts[idx++] = coordinates[2].y;
verts[idx++] = c;
mesh.setVertices(verts);
My draw call
public void render() {
    Gdx.gl.glDepthMask(false);
    Gdx.gl.glEnable(GL20.GL_BLEND);

    shaderProgram.begin(); // shaderProgram contains the vertex and fragment shaders
    shaderProgram.setUniformMatrix("u_projTrans", world.renderer.getCam().combined);
    for (Triangle triangle : triangles) {
        triangle.mesh.render(shaderProgram, GL20.GL_TRIANGLES, 0, 18 / NUM_COMPONENTS);
    }
    shaderProgram.end();

    Gdx.gl.glDepthMask(true);
}
The error message
[.WebGL-000001FC218B3370]GL ERROR :GL_INVALID_OPERATION : glDrawArrays: attempt to access out of range vertices in attribute 0
means that there are not enough vertices in the vertex array buffer.
The 3rd parameter, count, specifies more vertices than the buffer actually contains.
Since the vertex buffer holds 3 vertices, count has to be 3, i.e. 9 / NUM_COMPONENTS: each vertex occupies NUM_COMPONENTS = 3 floats (2 position components plus 1 packed colour), so the 9-element verts array describes 3 vertices, while 18 / NUM_COMPONENTS requests 6:
triangle.mesh.render(shaderProgram, GL20.GL_TRIANGLES, 0, 18 / NUM_COMPONENTS); // wrong: requests 6 vertices
triangle.mesh.render(shaderProgram, GL20.GL_TRIANGLES, 0, 9 / NUM_COMPONENTS);  // correct: 3 vertices
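A slightly more robust variant (a sketch; it assumes the verts array holds exactly the data passed to mesh.setVertices) derives the count from the array length instead of hard-coding it:

// NUM_COMPONENTS == 3 here: 2 position floats + 1 packed-colour float per vertex
int vertexCount = verts.length / NUM_COMPONENTS; // 9 / 3 == 3
triangle.mesh.render(shaderProgram, GL20.GL_TRIANGLES, 0, vertexCount);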
To draw the power spectral density of a signal (which is very similar to a heatmap), I use this vertex shader program. It receives the power value at each vertex, takes the logarithm to show the result in dB, normalizes it within the range of the colormap array, and assigns a colour to the vertex.
#version 130

uniform float max_val;
uniform float min_val;
uniform int height;

attribute float val; // power-spectral-density value assigned to each vertex

// colormap values
const float r[512] = float[]( /* red values come here */ );
const float g[512] = float[]( /* green values come here */ );
const float b[512] = float[]( /* blue values come here */ );

void main() {
    // set vertex position based on its ID
    int x = gl_VertexID / height;
    int y = gl_VertexID - x * height;
    gl_Position = gl_ModelViewProjectionMatrix * vec4(x, y, -1.0, 1.0);

    float e = log(max_val / min_val);
    float d = log(val / min_val);

    // set color
    int idx = int(d * (512 - 1) / e); // find normalized index that falls in range [0, 512)
    gl_FrontColor = vec4(r[idx], g[idx], b[idx], 1.0); // set color
}
The corresponding C++ code is here:
QOpenGLShaderProgram glsl_program;
// initialization code is omitted
glsl_program.bind();
glsl_program.setUniformValue(vshader_max_uniform, max_val);
glsl_program.setUniformValue(vshader_min_uniform, min_val);
glsl_program.setUniformValue(vshader_height_uniform, max_colormap_height);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, 0, colormap); // colormap is a vector holding the power value at each vertex
glDrawElements(GL_TRIANGLE_STRIP, vertices_length, GL_UNSIGNED_INT, nullptr); // vertices_length is the size of colormap
glDisableVertexAttribArray(0);
glsl_program.release();
This program runs fast enough on Linux, but on Windows it is very slow and takes a lot of CPU time. If I change this line of GLSL:
// int idx = int(d * (512 - 1) / e);
int idx = 0;
then the app runs fast on Windows too, so it has to be a problem with the GLSL code.
How should I fix it?
What you're doing there belongs in the fragment shader, not the vertex shader, and you should submit both the colour lookup table and the spectral density data as textures. Although vertex setup is not that expensive, it comes with a certain overhead, and in general you want to cover as many pixels as possible with the fewest vertices.
Also, remember the logarithm calculation rules (e.g. log(a/b) = log(a) - log(b)) and avoid doing calculations that are uniform over the whole draw call; precalculate those on the host.
/* vertex shader */
#version 130

varying vec2 pos;

void main() {
    // Set the vertex position based on its ID. To fill the viewport we
    // need just the three corners of a right triangle of width and
    // height 2 in texture space.
    pos.x = float(gl_VertexID % 2) * 2.0;
    pos.y = float(gl_VertexID / 2) * 2.0;
    // map [0, 1] to clip space [-1, 1]; screen placement is
    // controlled using glViewport/glScissor
    gl_Position = vec4(2.0 * pos - 1.0, 0.0, 1.0);
}
/* fragment shader */
#version 130

varying vec2 pos;

uniform sampler2D values;
uniform sampler1D colors;

uniform float log_min;
uniform float log_max;

void main() {
    float val = texture2D(values, pos).x;
    float e = log_max - log_min;
    float d = (log(val) - log_min) / e;
    gl_FragColor = vec4(texture1D(colors, d).rgb, 1.0); // set color
}
In later versions of GLSL some keywords have changed: varyings are declared using in and out instead of varying, and the texture access functions have been unified into a single texture() that covers all sampler types.
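For reference, the fragment shader above might look like this in a newer dialect (a sketch targeting #version 330; names unchanged):

#version 330

in vec2 pos;
out vec4 fragColor;

uniform sampler2D values;
uniform sampler1D colors;

uniform float log_min;
uniform float log_max;

void main() {
    // texture() replaces texture2D()/texture1D() for all sampler types
    float d = (log(texture(values, pos).x) - log_min) / (log_max - log_min);
    fragColor = vec4(texture(colors, d).rgb, 1.0);
}

And the host-side code that drives the draw call: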
glsl_program.bind();
glsl_program.setUniformValue(vshader_log_max_uniform, log(max_val));
glsl_program.setUniformValue(vshader_log_min_uniform, log(min_val));

// specify where to draw in window pixel coordinates
glEnable(GL_SCISSOR_TEST);
glViewport(x, y, width, height);
glScissor(x, y, width, height);

glBindTexture(GL_TEXTURE_2D, values_texture);
glTexSubImage2D(GL_TEXTURE_2D, ..., spectral_density_data);

glDrawArrays(GL_TRIANGLES, 0, 3);

glsl_program.release();
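The answer assumes the colormap already lives in a 1D texture. A one-time upload could look roughly like this (a sketch; colors_texture is a hypothetical handle and r, g, b are the 512-entry tables from the question):

GLuint colors_texture;
glGenTextures(1, &colors_texture);
glBindTexture(GL_TEXTURE_1D, colors_texture);

// interleave the three channel tables into RGB triplets
float colormap[512 * 3];
for (int i = 0; i < 512; ++i) {
    colormap[3 * i + 0] = r[i];
    colormap[3 * i + 1] = g[i];
    colormap[3 * i + 2] = b[i];
}

glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB32F, 512, 0, GL_RGB, GL_FLOAT, colormap);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // also makes the texture complete without mipmaps
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);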
I'm trying to use a 3D texture in OpenGL to implement volume rendering. Each voxel has an RGBA colour value and is currently rendered as a screen-facing quad (for testing purposes). I just can't seem to get the sampler to give me a colour value in the shader; the quads always end up black. When I change the shader to generate a colour based on the xyz coords, it works fine. I'm loading the texture with the following code:
glGenTextures(1, &tex3D);
glBindTexture(GL_TEXTURE_3D, tex3D);
unsigned int colours[8];
colours[0] = Colour::AsBytes<unsigned int>(Colour::Blue);
colours[1] = Colour::AsBytes<unsigned int>(Colour::Red);
colours[2] = Colour::AsBytes<unsigned int>(Colour::Green);
colours[3] = Colour::AsBytes<unsigned int>(Colour::Magenta);
colours[4] = Colour::AsBytes<unsigned int>(Colour::Cyan);
colours[5] = Colour::AsBytes<unsigned int>(Colour::Yellow);
colours[6] = Colour::AsBytes<unsigned int>(Colour::White);
colours[7] = Colour::AsBytes<unsigned int>(Colour::Black);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, 2, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, colours);
The colours array contains the correct data, i.e. the first four bytes have values 0, 0, 255, 255 for blue. Before rendering I bind the texture to the 2nd texture unit like so:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_3D, tex3D);
And render with the following code:
shaders["DVR"]->Use();
shaders["DVR"]->Uniforms["volTex"].SetValue(1);
shaders["DVR"]->Uniforms["World"].SetValue(Mat4(vl_one));
shaders["DVR"]->Uniforms["viewProj"].SetValue(cam->GetViewTransform() * cam->GetProjectionMatrix());
QuadDrawer::DrawQuads(8);
I have used these classes for setting shader params before and they work fine. The QuadDrawer draws eight instanced quads. The vertex shader code looks like this:
#version 330

layout(location = 0) in vec2 position;
layout(location = 1) in vec2 texCoord;

uniform sampler3D volTex;
ivec3 size = ivec3(2, 2, 2);

uniform mat4 World;
uniform mat4 viewProj;

smooth out vec4 colour;

void main()
{
    vec3 texCoord3D;
    int num = gl_InstanceID;
    texCoord3D.x = num % size.x;
    texCoord3D.y = (num / size.x) % size.y;
    texCoord3D.z = (num / (size.x * size.y));
    texCoord3D /= size;
    texCoord3D *= 2.0;
    texCoord3D -= 1.0;
    colour = texture(volTex, texCoord3D);
    //colour = vec4(texCoord3D, 1.0);
    gl_Position = viewProj * World * vec4(texCoord3D, 1.0) + (vec4(position.x, position.y, 0.0, 0.0) * 0.05);
}
Uncommenting the line where I set the colour value equal to the texcoord works fine and makes the quads coloured. The fragment shader is simply:
#version 330

smooth in vec4 colour;
out vec4 outColour;

void main()
{
    outColour = colour;
}
So my question is: what am I doing wrong? Why is the sampler not getting any colour values from the 3D texture?
[EDIT]
Figured it out but can't self answer (new user):
As soon as I posted this I figured it out, so I'll put the answer up to help anyone else (it's not specifically a 3D-texture issue, and I've fallen afoul of it before, d'oh!). I didn't generate mipmaps for the texture, and the default magnification/minification filters weren't set to either GL_LINEAR or GL_NEAREST. Boom! No textures. The same thing happens with 2D textures.
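For anyone hitting the same wall, the fix boils down to something like this (a sketch using the tex3D handle from the question):

// The default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, which makes
// the texture incomplete (sampled as black) when no mipmap levels exist.
// Either switch to a non-mipmap filter...
glBindTexture(GL_TEXTURE_3D, tex3D);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// ...or generate the mipmap chain after uploading the data:
// glGenerateMipmap(GL_TEXTURE_3D);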