Tri-linear interpolation in compute shader - opengl

I want to ray cast a line through an image3D grid. Now, if I hit a voxel I would like to have the tri-linear interpolated value of the neighbouring 8 voxels.
Is that even possible in a compute shader? I know that with sampler2D, bi-linear interpolation is supported directly in hardware.
Of course, I could write the interpolation code manually, but that would throw away the performance advantage of hardware filtering.

Cast the ray through a sampler3D instead of an image3D grid; with the texture's filter set to GL_LINEAR, the hardware does the tri-linear interpolation for you.
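For instance, a minimal compute-shader sketch, assuming the texture is bound as a sampler3D with GL_LINEAR filtering (names and workgroup size are illustrative):
#version 430 core
layout(local_size_x = 4, local_size_y = 4, local_size_z = 4) in;
uniform sampler3D volume; // GL_LINEAR filtering gives hardware tri-linear interpolation

void main()
{
    // Normalized sample position for this invocation (texel centers).
    vec3 p = (vec3(gl_GlobalInvocationID) + 0.5) / vec3(textureSize(volume, 0));
    // textureLod performs the tri-linear fetch in hardware:
    float v = textureLod(volume, p, 0.0).r;
    // ... use v in the ray casting ...
}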

Related

How to loop over every pixel in a 3D texture/buffer without using compute shaders

I understand how you would do this with a 2D buffer. Just draw two triangles that make a quad fully encompassing the 2D buffer space. That way, when the fragment shader runs, it runs for every pixel in the buffer.
Question: How would this work for a 3D buffer?
You could draw a pair of triangles for each cross-section of the 3D buffer. However, with a texture of size 1x1x256, that would mean drawing 256 slices x 2 triangles = 512 triangles just to iterate over all of the pixels. I know this is an extreme case and there are ways of optimizing this solution. However, I feel like there is a more elegant solution that I am missing.
What I am trying to do: I am trying to make a 3D fluid solver that iterates through each of the pixels of the 3D texture and computes its velocity, density, etc. I am trying to do this via the fragment shader because I am using OpenGL 3.0, which does not support compute shaders.
#version 330 core
out vec4 FragColor;
uniform sampler3D volume;
void main()
{
    // compute the fluid density, velocity, and center of mass here
    float density      = 0.0;       // placeholder
    vec2  velocity     = vec2(0.0); // placeholder
    float centerOfMass = 0.0;       // placeholder

    // output the values to the 3D buffer in different color channels:
    FragColor = vec4(density, velocity.xy, centerOfMass);
}
At some point in the fragment shader, you're going to write some statement of the form:
vec4 value = texture(my_texture, TexCoords);
Where TexCoords is the location in my_texture that maps to some particular value in the source texture. But that mapping is entirely up to you. Nobody's making you use gl_FragCoord.xy / textureSize(my_texture, 0). You could just as easily use vec3(gl_FragCoord.x, Y_value, gl_FragCoord.y) / textureSize(my_texture, 0), which puts the Y component of the fragment location in the Z dimension of the texture. Y_value in this case is a value passed in from outside that tells the shader which vertical slice of the 3D texture to use.
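A minimal sketch of that mapping, assuming Y_value arrives as a uniform (names are illustrative):
#version 330 core
uniform sampler3D my_texture;
uniform float Y_value; // which vertical slice of the 3D texture to use
out vec4 FragColor;

void main()
{
    // Put the fragment's Y in the texture's Z dimension, as described above.
    vec3 TexCoords = vec3(gl_FragCoord.x, Y_value, gl_FragCoord.y)
                     / vec3(textureSize(my_texture, 0));
    FragColor = texture(my_texture, TexCoords);
}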
Of course, whatever mapping you use to fetch the data must also be used when you write the data. If you're writing via fragment shader outputs, that poses a problem. A 3D texture can only be attached to an FBO as either a single 2D slice or as a layered set of 2D slices, with these slices always being along the Z dimension of the image. So even if you try to read in slices along the Y dimension, it has to be written in Z slices. So you'd be moving around the location of the data, which makes this non-viable.
If you're using image load/store, then you have no problem. You can just write to the appropriate texel (indeed, you can read from it as an image using integer coordinates, so there's no need to divide by the texture's size).
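A hedged sketch of the image load/store path (requires GL 4.2+ or ARB_shader_image_load_store; the rgba32f format and binding point are assumptions):
#version 420 core
layout(binding = 0, rgba32f) uniform image3D volume;
uniform int Y_value; // same slice-selection idea as above

void main()
{
    // Integer texel coordinates: no division by the texture size needed.
    ivec3 texel = ivec3(int(gl_FragCoord.x), Y_value, int(gl_FragCoord.y));
    vec4 value = imageLoad(volume, texel);
    imageStore(volume, texel, value); // write back to the exact same texel
}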

Is it possible to use floating point texture with p5.js + GLSL?

I am working on a physarum simulation with p5.js and GLSL.
I use one shader to calculate the particle positions and another for trail diffusion and decay.
I set a p5.Graphics buffer as a texture uniform in the particle shader. As far as I understand, it is backed by a Uint8ClampedArray, limited to the value range 0-255. That is not precise enough for the particle simulation, and the particle positions quantize onto a visible grid.
Is it possible to set a floating point texture (something like a Float32Array) as a sampler2D uniform variable using p5.js?

Vertex to Pixel Shader TEXCOORD interpolation precision issues

I think I'm experiencing precision issues in the pixel shader when reading the texcoords that have been interpolated from the vertex shader.
My scene consists of some very large triangles (edges up to 5000 units long, with texcoords ranging from 0 to 5000 units, so the texture is tiled about 5000 times), and I have a camera looking very closely at one of those triangles (the camera might be so close that its viewport covers only a couple of meters of the large triangle). When I pan the camera along the plane of the triangle, the texture lags and jumps. My suspicion is a lack of precision in the interpolated texcoords.
Is there a way to increase the precision of the texcoords interpolation?
My first thought was to store texcoord u in double precision in the xy components, and texcoord v in the zw components. But I guess that will not work, since the shader interpolation assumes there are 4 separate components of single precision, and not 2 components of double precision?
If there is no solution on the shader side, I guess I'll just have to tessellate the triangles into finer pieces? I'd hate to do that just for this issue, though. Any ideas?
EDIT: The problem is also visible when printing texcoords as colors on the screen, without any actual texture sampling at all.
You're right, it looks like a precision problem. If your card supports it, you can indeed use double-precision floats for interpolation. Just declare the variables as dvec2 and it should work.
The shader interpolation does not assume 4 separate single-precision components. On recent cards, each scalar (i.e., each component of a vec) is interpolated separately as a float (or a double). Older cards, which could only interpolate vec4s, also worked with full floats (but those probably don't support doubles).

Specific coordinate output in glsl fragment shaders?

Is there a way to set the color of a specific pixel by its coordinate, instead of writing to the predetermined fragment position via gl_FragColor?
I'm currently trying to implement the Mean Shift algorithm via shaders. My input is a black and white texture, where white dots represent points to be clustered and black represents no-data.
After calculating the weighted average of all point positions in the neighborhood, I have to set the pixel in the resulting position to a new color that represents a cluster.
For example, if I look at an 18x18 neighborhood centered on the pixel corresponding to fragcoord and find 3 white pixels:
Fragcoord = 30,33
Pixel 1: coordinate (30,33)
Pixel 2: coordinate (27,33)
Pixel 3: coordinate (30,30)
After calculating the average of their positions, I'll have (29,32). Is there a way to set the pixel at 29,32 to a different color, in a shader unit that has a different fragcoord (for example, 30,33)?
Something like gl_FragColor(vec2(29,32)) = vec4(1.0,1.0,1.0,1.0); ?
As Christian said, it's not possible; if they're available to you, a compute framework or image load/store is your best option to switch to.
If you must use GLSL without image load/store, you do have an option: if your image has n pixels total, then send n vertices to the vertex shader as points; in the vertex shader, read from the texture based on gl_VertexID (core since GLSL 1.30... if you have 1.40+ you should probably use instancing and gl_InstanceID instead), and position the point so that when it reaches the fragment shader, it covers exactly the pixel you want. Then just have the fragment shader output white no matter what.
It's a hack, but it may work fine if you have no other options.
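A rough sketch of that vertex-shader hack, one point per pixel (the uniform names are assumptions):
#version 330 core
uniform sampler2D src;       // the black-and-white input image
uniform vec2 viewport_size;  // target framebuffer size in pixels

void main()
{
    // Recover this vertex's source texel from gl_VertexID.
    ivec2 size  = textureSize(src, 0);
    ivec2 texel = ivec2(gl_VertexID % size.x, gl_VertexID / size.x);
    vec4 data   = texelFetch(src, texel, 0);

    // ... compute the target pixel (e.g. the mean-shift position) from data ...
    vec2 target = vec2(texel); // placeholder: writes back to the same pixel

    // Position the point so it covers exactly that pixel (pixel center -> NDC).
    gl_Position = vec4((target + 0.5) / viewport_size * 2.0 - 1.0, 0.0, 1.0);
}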
No, that's not possible. A fragment shader is invoked for a specific fragment at a specific position and can only output the values for this particular fragment (or discard the whole fragment), which then get written into the framebuffer at exactly that pre-determined fragment position.
What you can do is not write your outputs to the framebuffer at all, but into some other storage, either an arbitrary image (using image load/store) or a shader storage buffer. But those two features require quite modern hardware capabilities (GL 4+ hardware). And in this case you could also do the whole thing using a proper compute shader in the first place (or an actual computing framework like CUDA or OpenCL, if you don't need any other OpenGL functionality).
Another way that also works on older hardware would be to do your stuff in the vertex shader instead of the fragment shader. This way you can just compute the vertex's clip space position (that then turns into the fragment position) accordingly. When using the geometry shader instead of the vertex shader you can even scatter data (compute more than one output for a single input) or discard stuff.
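For instance, a minimal geometry-shader sketch of that scatter idea: several output points for one input point, or none at all to discard it (the offsets are arbitrary):
#version 330 core
layout(points) in;
layout(points, max_vertices = 4) out;

void main()
{
    // Emit more than one output for a single input (scatter);
    // emitting nothing here would discard the input entirely.
    for (int i = 0; i < 4; ++i) {
        gl_Position = gl_in[0].gl_Position + vec4(0.1 * float(i), 0.0, 0.0, 0.0);
        EmitVertex();
        EndPrimitive();
    }
}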

Can someone explain how this code transforms something from per vertex lighting to per pixel?

In a tutorial there was a diffuse value calculation of the type
float diffuse_value = max(dot(vertex_normal, vertex_light_position), 0.0);
..on the vertex shader.
That was supposed to be making per vertex lighting if later on the fragment shader..
gl_FragColor = gl_Color * diffuse_value;
Then, when he moved the first line to the fragment shader (appropriately outputting vertex_normal and vertex_light_position from the vertex shader), it is supposed to turn the method into "per pixel shading".
How is that so? The first method appears to be doing the diffuse_value calculation every pixel anyway!
diffuse_value in the first case is computed in the vertex shader. So it's only done per vertex.
After the vertex shader outputs values, the rasterizer takes those values (three per triangle, one per vertex, for each interpolated value) and interpolates them (in a perspective-correct manner) to produce different values for each pixel. As it happens, interpolating vectors like the normal and the light-direction vector this way is not quite proper, because interpolation loses their normalized property. Many implementations will actually normalize the vectors first thing in the fragment shader.
But it's worse to interpolate the dot product of the two vectors (which is what the per-vertex lighting effectively does). Say, for example, that your normal is N=+Z for all your vertices, and L=norm(Z-X) on one vertex and L=norm(Z+X) on another.
N.L = 1/sqrt(2) for both vertices.
Interpolating that gives you flat lighting, whereas interpolating N and L separately and renormalizing gives you the result you'd expect: lighting that peaks exactly in the middle of the polygon (because the interpolation of norm(Z-X) and norm(Z+X), once normalized, gives exactly Z).
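A minimal sketch of the per-pixel version, in the same legacy GLSL style as the tutorial and using the varying names from the question:
// fragment shader
varying vec3 vertex_normal;
varying vec3 vertex_light_position;

void main()
{
    // Renormalize after interpolation, then do the diffuse dot per fragment.
    float diffuse_value = max(dot(normalize(vertex_normal),
                                  normalize(vertex_light_position)), 0.0);
    gl_FragColor = gl_Color * diffuse_value;
}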
Well ... Code in a vertex shader is only evaluated per-vertex, with the input values of that vertex.
But when moved to a fragment shader, it is evaluated per-fragment, i.e. per pixel, with input values appropriately interpolated between vertices.
At least, that is my understanding; I'm quite rusty with shader programming, though.
If diffuse_value is computed in the vertex shader, that means it is computed per vertex. Then it is linearly interpolated over the pixels of the triangle and fed into the pixel shader. (If you don't have per-pixel normals, that's all you can do.) Then, in the pixel shader, the polygon color (interpolated too) is modulated by that diffuse_value.