GLSL shaders to model physarum - glsl

I'm trying to write some GLSL fragment shaders to model the Physarum slime mold. The simulation works as follows:
- initiate a 2D array of intensities, all zeroed to start with
- initiate a certain number of agents at arbitrary positions
- for each iteration:
  - move each agent to its next position according to some movement function (the movement function isn't important for this question)
  - increase the intensity at each agent's new position by 1
  - dampen all the intensities by a constant factor
Simulating this and treating the intensity values as a grayscale color yields interesting visuals like those in this demo.
I'm not sure how to model this correctly with textures and fragment shaders though.
To move each agent to its next position in a fragment shader, I need a texture where each pixel is an agent, and the R and G color values are the agent's x and y position.
However, to increase the intensity at each agent's new position by 1, I need to be able to query, for each position, "is there an agent there?", which I can't do with the model above.
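For what it's worth, the damping step at least seems straightforward to express as a full-screen pass over a ping-ponged pair of intensity textures. A minimal sketch (u_trail, u_damping and v_uv are just names I made up):

#version 330 core
// Damping pass: read last frame's intensity texture and write it back,
// scaled by a constant factor, into the other texture of a ping-pong
// FBO pair.
uniform sampler2D u_trail;   // previous frame's intensities
uniform float u_damping;     // e.g. 0.95
in vec2 v_uv;                // full-screen quad texture coordinates
out vec4 fragColor;

void main() {
    fragColor = texture(u_trail, v_uv) * u_damping;
}

It's the deposit step that doesn't fit this model, since each fragment can only write to its own pixel.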
Thanks for any help!

Related

OpenGL - tex coord in fragment shader outside of specified range

I'm trying to draw a rectangle with a texture in OpenGL. I'm simply trying to render an entire .jpg image, so I specify the texture coordinates as [0, 0] to [1, 1] in the vertex buffer. I expect all the interpolated texture coordinates in the fragment shader to be between [0, 0] and [1, 1]. However, depending on where the texture is drawn, I sometimes get a texture coordinate that is less than 0 (I know this is the case because I output red from the fragment shader if the tex coord is less than 0).
How come I get an interpolated value outside of the specified range? I currently visualize vertices/fragments like the following image (https://learnopengl.com/Advanced-OpenGL/Anti-Aliasing):
If I imagine a rectangle instead: if the pixel sample is inside the rectangle, then the interpolated texture coord must be at least 0, since the very left of the rectangle represents 0, right? So how do I end up with a value less than 0?
Edit: after some basic testing, it looks like the fragment shader is called if a shape simply intersects that pixel, not only if the pixel's sample point is inside the shape. I tested this by placing the start of the rectangle slightly before and slightly after the middle of a pixel: when slightly before the middle of the pixel, I don't get a negative value, but if I place it slightly after the middle, then I do get a negative value. This contradicts what the website I linked to said - perhaps it's driver-dependent?
Edit: the previous test I did was with multisampling on. If I turn multisampling off, then even if the shape is past the middle, I don't get a negative value...
Turns out I just needed to keep reading the article I linked:
This is where multisampling becomes interesting. We determined that 2 subsamples were covered by the triangle so the next step is to determine a color for this specific pixel. Our initial guess would be that we run the fragment shader for each covered subsample and later average the colors of each subsample per pixel. In this case we'd run the fragment shader twice on the interpolated vertex data at each subsample and store the resulting color in those sample points. This is (fortunately) not how it works, because this basically means we need to run a lot more fragment shaders than without multisampling, drastically reducing performance.
How MSAA really works is that the fragment shader is only run once per pixel (for each primitive) regardless of how many subsamples the triangle covers. The fragment shader is run with the vertex data interpolated to the center of the pixel and the resulting color is then stored inside each of the covered subsamples. Once the color buffer's subsamples are filled with all the colors of the primitives we've rendered, all these colors are then averaged per pixel resulting in a single color per pixel. Because only two of the 4 samples were covered in the previous image, the color of the pixel was averaged with the triangle's color and the color stored at the other 2 sample points (in this case: the clear color) resulting in a light blue-ish color.
So I was getting a negative value because the fragment shader was being run for a pixel that had at least one of its sub-sample points covered by the shape, but the shape started slightly past the mid-point of the pixel; since "the fragment shader is run with the vertex data interpolated to the center of the pixel", the interpolation (really an extrapolation) produced a negative value.
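As an aside: if the goal is to avoid the out-of-range values rather than just explain them, GLSL (1.30+) has the centroid interpolation qualifier, which makes the attribute get interpolated at a covered position inside the primitive instead of at the pixel center. A minimal sketch (texCoord and tex are placeholder names):

#version 130
// "centroid in" keeps the interpolated value inside the range set at
// the vertices; the matching vertex shader output must be declared
// "centroid out vec2 texCoord;"
centroid in vec2 texCoord;
uniform sampler2D tex;
out vec4 fragColor;

void main() {
    fragColor = texture(tex, texCoord);
}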

How to compute vector transformations on the CPU?

I've been trying to transform my vertices outside the main graphics pipeline; I need them on the CPU. As simple as it seemed at first, I have spent a significant amount of time trying to implement this and simply failed. I have been trying to figure out the error in my method, but it just seems correct to me.
I have my world, camera and projection matrices (the ones I am using in the main graphics pipeline to render objects) working. I use the same matrices to transform the vectors with the function XMVector4Transform(). I set the w component of my vector to 1, but when I transform my vertices, instead of getting normalized outputs (between -1 and 1, while the 3D model is inside the screen space), I get values that are outside the screen, even though the same matrix transformations in the shader render the model inside the screen space.
After some digging I found that I supposedly need the function XMVector4Normalize() to normalize the coords. After using it the results were normalized, but there is still a major offset between the CPU-computed vertices and those computed in the shader, and the offset grows as I move the objects toward the edges.
https://i.stack.imgur.com/bhnpB.png
In the above screenshot, the wireframe is rendered using the CPU-computed vertices and the solidly shaded version is rendered by the main pipeline. The offset I mentioned can be clearly observed.
PS: I am rendering the CPU-computed verts just to test...
DirectX::XMVECTOR v1;
// load the vertex position into the vector, with w = 1 for a point
v1.m128_f32[0] = pMesh[i].GetVertices()[j].x;
v1.m128_f32[1] = pMesh[i].GetVertices()[j].y;
v1.m128_f32[2] = pMesh[i].GetVertices()[j].z;
v1.m128_f32[3] = 1.f;
projectedVectors[i].verts.emplace_back(XMVECTOR());
// apply the view and projection transforms
v1 = XMVector4Transform(XMVector4Transform(v1, *mView), *mProj);
v1 = XMVector4Normalize(v1); // <- the culprit, see the answer below
The result you get from this transform is in so-called clip space. This is an artificial 4D space where the w component denotes the common denominator for all other components. Therefore, instead of normalizing, you want to divide x, y and z by w (the so-called perspective divide); that is what produces the normalized device coordinates in [-1, 1]. (DirectXMath also provides XMVector3TransformCoord(), which applies the perspective divide for you.)
Btw, instead of assigning the components of the vector to the register fields yourself, you could also use XMLoadFloat4(). This would take care of everything, using whichever instruction sets are available.

How to fix a point on the surface of a 3D model created by texture mapping in OpenGL?

Let's see an image first:
The model in the image is created by texture mapping. When the mouse is clicked on the screen, I want to place a fixed point on the surface of the model. What's more, as the model rotates, the fixed point should stay on the surface of the model.
My question is:
How can I place the fixed point on the surface of the model?
How can I get the coordinate (x, y, z) of the fixed point?
My thought is as follows:
use the gluUnProject function to get two points when the mouse is clicked on the screen. One point is on the near clip plane and another is on the far one.
concatenate the two points to form a line.
iterate over points on the line from step 2 and use glReadPixels to get the pixel value of the iterated points. If the values jump from zero to nonzero or from nonzero to zero (the pixel value of the background is zero), the surface points are found.
This is my thought, but it seems that it does not work! Can anyone give me some advice? Thank you!
The model in the image is created by texture mapping.
No, it's not. First and foremost there is no model at all. What you have there is a 3D dataset of voxels and then you have a volume rasterizer that "shoots" rays through the dataset, integrates them up and for each ray produces a color and opacity value.
This process is not(!!!) texture mapping. Texture mapping is when you draw a "solid" primitive and, for each fragment (a fragment is what eventually becomes a pixel), determine a single location in the texture data set and sample it. But a volume raycaster like you have there performs a whole ray integration, effectively sampling many voxels from the whole dataset into a single pixel. That's a completely different way of creating a color-opacity value.
My question is:
How can I place the fixed point on the surface of the model?
You can't, because the dataset you have there does not have a "fixed" surface point. You have to define some segmentation operation that decides which position along the ray counts as "this is the surface". The simplest method would be a threshold cutoff function.
How can I get the coordinate (x, y, z) of the fixed point?
Your best bet would be modifying the volume raycasting code, changing it from an integrator into a segmentizer. Assume that you want to use the threshold method.
Your typical volume rasterizer works like this (usually implemented in a shader):
vec4 color = vec4(0.0);  // "output" is a reserved word in GLSL, so the accumulator is named color
for( vec3 pos = start
   ; length(pos - start) <= length(end - start)
   ; pos += voxel_grid_increment ) {
    vec4 t = texture3D(voxeldata, pos);
    /* integrate t into color */
}
The integration step merges the incoming color and opacity of the texture voxel t into the accumulated color+opacity. There are several methods to do this.
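One common choice (a sketch, not the only method; it assumes t holds non-premultiplied color in .rgb and opacity in .a) is front-to-back "over" compositing, which would replace the /* integrate */ comment in the loop above:

// front-to-back "over" compositing: the accumulated opacity
// attenuates every voxel behind what has already been absorbed
color.rgb += (1.0 - color.a) * t.a * t.rgb;
color.a   += (1.0 - color.a) * t.a;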
You'd change this into a shader that simply stops that loop at a given cutoff threshold and emits the position of that voxel:
vec3 surface_pos = vec3(0.0);  // renamed from "output", which is reserved in GLSL
for( vec3 pos = start
   ; length(pos - start) <= length(end - start)
   ; pos += voxel_grid_increment ) {
    float t = texture3D(voxeldata, pos).r;
    if( t > threshold ) {
        surface_pos = pos;
        break;
    }
}
/* write surface_pos to the fragment's RGB output */
The result of that would be a picture encoding the determined voxel position in its pixels' RGB values. Use a 16-bit-per-channel texture format (or a half- or single-precision float format) and you've got enough resolution to address anything within the limits of what typical GPUs can address in a 3D texture.
You'll want to do this off-screen using an FBO.
Another viable approach is taking the regular voxel raycaster and, at the threshold position, modifying the depth value output for the particular fragment. The drawback of this method is that depth output modification trashes performance, so you won't want to do this if framerates matter. The benefit is that you could then in fact use glReadPixels on the depth buffer and gluUnProject the depth value at where your mouse pointer is.
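Sketched out, that variant could look like this (clip_from_volume is an assumed uniform mapping volume coordinates to clip space, and surface_pos is the hit found by the threshold loop above):

uniform mat4 clip_from_volume;  // volume coordinates -> clip space (an assumption)

void write_hit_depth(vec3 surface_pos) {
    vec4 clip = clip_from_volume * vec4(surface_pos, 1.0);
    // perspective divide, then map NDC z from [-1, 1] into the default
    // [0, 1] window depth range; writing gl_FragDepth is what disables
    // early depth testing and costs performance
    gl_FragDepth = (clip.z / clip.w) * 0.5 + 0.5;
}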
My thought is as follows:
use the gluUnProject function to get two points when the mouse is clicked on the screen. One point is on the near clip plane and another is on the far one.
concatenate the two points to form a line.
iterate over points on the line from step 2 and use glReadPixels to get the pixel value of the iterated points. If the values jump from zero to nonzero or from nonzero to zero (the pixel value of the background is zero), the surface points are found.
That's not going to work, for the simple reason that glReadPixels sees exactly what you see. You cannot "select" the depth at which glReadPixels reads the pixels, because there's no depth information left in the picture. glReadPixels just sees what you see: a flat image, as it's shown in the window. You'll have to iterate over the voxel data, and you can't do this post-hoc; you'll have to implement or modify a volume rasterizer to extract the information you need.
I am not going to write a full implementation of what you need here. Also, you could just search the web and find quite a lot of info on this subject. But what you are looking for is called "decals". Nvidia also presented a technique called "texture bombing". In a nutshell, you draw a planar (or enclosing volume) geometry to project the decal texture onto the model. The actual process is a little bit more complex, as you can see from the examples.

What's the proper way to draw a large model in OpenGL?

I'm trying to draw a large grid block using OpenGL (for example: 114x112x21 cells).
As far as I know, each cell should be drawn as 6 faces (12 triangles), each face containing 4 vertices, and each of the vertices has position, normal, and color vectors (each of these is 3*sizeof(GLfloat)).
These values will be passed to VRAM in VBO(s). I did some calculations for the example mentioned and found that it would cost upwards of 200 MB to store this data (268,128 cells × 24 vertices × 36 bytes per vertex ≈ 230 MB). I'm not sure if this is right, but if it is, I think it's way too much VRAM for a single model.
I'm sure there are more efficient ways to do this, and if anyone can point me in the right direction I would be very thankful.
EDIT: I may have been unclear about the nature of the cells. They do NOT have uniform dimensions that can be scaled/translated to produce other cells or other faces of the same cell. Almost every cell has different dimensions on each face (these are predefined).
Also let me note that the colors are per cell and are based on an algorithmic scale of different values (depending on what the user wants to visualize). So if the user chooses a value (one per cell) to be visualized, colors are calculated on a scale and used to color the cells.
As @BDL suggested in his answer, I'll probably use a geometry shader to calculate per-face normals.
There are several things that can be done:
First of all, each vertex position (except the ones on the sides) is shared among 8 cells.
If you need per-face normals, in which case a position would require several normals, calculate them in a geometry shader instead of storing them in the VBO.
If each cell has a constant color, store it in a 3D texture and sample that texture in the fragment shader.
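A minimal fragment shader sketch of the 3D-texture idea (cellColors, gridSize and cellCoord are assumed names; the texture holds one texel per cell):

#version 330 core
uniform sampler3D cellColors;  // one texel per cell, e.g. GL_RGBA8
uniform vec3 gridSize;         // e.g. vec3(114.0, 112.0, 21.0)
in vec3 cellCoord;             // the cell's integer index, passed from the vertex shader
out vec4 fragColor;

void main() {
    // +0.5 samples the texel center; dividing by the grid size maps
    // the cell index into normalized texture coordinates
    fragColor = texture(cellColors, (cellCoord + 0.5) / gridSize);
}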
For more hints you would have to provide more details on the cells and on what you want to achieve.
There are a few tricks you could do.
To start with, you could use instancing per cube. You then have per-vertex positions and normals for a single cell, plus a single position and color per cell.
You can actually eliminate the cell positions by deriving it from the instance id, by reversing the formula id = z * width * height + y * width + x.
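Reversing that formula in the vertex shader could look like this (a sketch; the constants match the example grid and would normally come in as uniforms):

// recover the cell coordinate from the instance id by inverting
// id = z * width * height + y * width + x
const int WIDTH  = 114;
const int HEIGHT = 112;

ivec3 cellFromInstance(int id) {
    int x = id % WIDTH;
    int y = (id / WIDTH) % HEIGHT;
    int z = id / (WIDTH * HEIGHT);
    return ivec3(x, y, z);
}

// in main(): ivec3 cell = cellFromInstance(gl_InstanceID);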
Furthermore, using a float per component is probably overkill for your colors; you may want to use a smaller format such as GL_RGBA8.
Applying that to your example (268,128 cells), we get a buffer size of approximately 1 MiB (268,128 cells × 4 bytes of color ≈ 1 MiB; the per-vertex data is only stored for a single cell).
Note that this assumes you want a single color for your entire cell. If you want a color per vertex, or per vertex per face, you can do so by using a 1D texture and indexing by instance and vertex id.
The biggest part of your data is going to be color, though, unless there is a constant pattern. If you still want floats per component and per-face, per-vertex colors, it is going to take ~73 MiB on color alone.
You can use instanced rendering. It renders the same vertex data with the same shader multiple times in just one draw call. Here is a link to the wiki (external): https://en.wikipedia.org/wiki/Geometry_instancing

Specific coordinate output in glsl fragment shaders?

Is there a way to set the color of a specific pixel by its coordinate, instead of only writing to the fragment's own predetermined coordinate via gl_FragColor?
I'm currently trying to implement the Mean Shift algorithm via shaders. My input is a black and white texture, where white dots represent points to be clustered and black represents no-data.
After calculating the weighted average of all point positions in the neighborhood, I have to set the pixel in the resulting position to a new color that represents a cluster.
For example, if I look at a neighborhood of 18x18 centered on the pixel related to the fragcoord and find 3 white pixels:
Fragcoord = 30,33
Pixel 1: coordinate (30,33)
Pixel 2: coordinate (27,33)
Pixel 3: coordinate (30,30)
After calculating the average of their positions, I'll have (29,32). Is there a way to set the pixel at 29,32 to a different color, in a shader unit that has a different fragcoord (for example, 30,33)?
Something like gl_FragColor(vec2(29,32)) = vec4(1.0,1.0,1.0,1.0); ?
As Christian said (in the answer below), it's not possible; and if you can use them, a compute framework or image load/store is your best option to switch to.
If you must use GLSL without image load/store, you do have an option: if your image has n pixels total, send n vertices to the vertex shader as points. In the vertex shader, read from the texture based on gl_VertexID (available since GLSL 1.30; if you have 1.40+ you should probably use instancing and gl_InstanceID instead), and position the point so that when it reaches the fragment shader it covers exactly the pixel you want. Then just have the pixel shader output white no matter what.
It's a hack, but it may work fine if you have no other options.
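A sketch of that vertex shader (assuming GLSL 1.30+ for gl_VertexID; u_data, u_size and meanShiftTarget are made-up names, with meanShiftTarget standing in for the neighborhood-averaging step; the paired fragment shader just outputs white):

#version 130
uniform sampler2D u_data;  // the black-and-white input image
uniform ivec2 u_size;      // its dimensions; draw u_size.x * u_size.y GL_POINTS

vec2 meanShiftTarget(ivec2 src) {
    // placeholder: the real version would use texelFetch(u_data, ..., 0)
    // to average the white-pixel positions around src
    return vec2(src);
}

void main() {
    // one point per input pixel
    ivec2 src = ivec2(gl_VertexID % u_size.x, gl_VertexID / u_size.x);
    vec2 target = meanShiftTarget(src);
    // map the target pixel coordinate to clip space so the point
    // rasterizes exactly over that pixel (+0.5 lands on the pixel center)
    gl_Position = vec4((target + 0.5) / vec2(u_size) * 2.0 - 1.0, 0.0, 1.0);
}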
No, that's not possible. A fragment shader is invoked for a specific fragment at a specific position and can only output the values for this particular fragment (or discard the whole fragment), which then get written into the framebuffer at exactly that pre-determined fragment position.
What you can do is not write your outputs to the framebuffer at all, but into some other storage, either an arbitrary image (using image load/store) or a shader storage buffer. But those two features require quite modern hardware capabilities (GL 4+ hardware). And in this case you could also do the whole thing using a proper compute shader in the first place (or an actual computing framework like CUDA or OpenCL, if you don't need any other OpenGL functionality).
Another way that also works on older hardware would be to do your stuff in the vertex shader instead of the fragment shader. This way you can just compute the vertex's clip space position (that then turns into the fragment position) accordingly. When using the geometry shader instead of the vertex shader you can even scatter data (compute more than one output for a single input) or discard stuff.