Specific coordinate output in GLSL fragment shaders?

Is there a way to set the color of a specific pixel by its coordinate, instead of writing to the predetermined position that gl_FragColor outputs to?
I'm currently trying to implement the Mean Shift algorithm via shaders. My input is a black and white texture, where white dots represent points to be clustered and black represents no-data.
After calculating the weighted average of all point positions in the neighborhood, I have to set the pixel in the resulting position to a new color that represents a cluster.
For example, if I look at an 18x18 neighborhood centered on the pixel corresponding to fragcoord and find 3 white pixels:
Fragcoord = 30,33
Pixel 1: coordinate (30,33)
Pixel 2: coordinate (27,33)
Pixel 3: coordinate (30,30)
After calculating the average of their positions, I'll have (29,32). Is there a way to set the pixel at 29,32 to a different color, in a shader unit that has a different fragcoord (for example, 30,33)?
Something like gl_FragColor(vec2(29,32)) = vec4(1.0,1.0,1.0,1.0); ?

As Christian said, it's not possible; if they are available to you, a compute framework or image load/store is the best option to switch to.
If you must use GLSL without image load/store, you do have an option: if your image has n pixels total, then send n vertices to the vertex shader as points; in the vertex shader, read from the texture based on your gl_VertexID (core since GLSL 1.30; if you have 1.40+ you should probably use instancing and gl_InstanceID instead), and position the point so that when it goes to the fragment shader, it covers exactly the pixel you want. Then just have the pixel shader output white no matter what.
It's a hack, but it may work fine if you have no other options.
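For reference, here is a minimal sketch of that hack in the vertex shader (GLSL 1.30; the names uSrcTex and uImageSize, and the mean-shift step itself, are placeholders, not anything from the question). The matching fragment shader just outputs vec4(1.0) unconditionally, and the application draws width*height points with glDrawArrays(GL_POINTS, ...).

#version 130
// "Scatter via points" sketch; assumes GL_PROGRAM_POINT_SIZE is enabled.
uniform sampler2D uSrcTex;   // black/white input image (assumed name)
uniform ivec2     uImageSize;

void main()
{
    // Which source pixel does this point correspond to?
    ivec2 src = ivec2(gl_VertexID % uImageSize.x, gl_VertexID / uImageSize.x);

    // ... inspect the neighborhood of 'src' with texelFetch(uSrcTex, ..., 0)
    //     and compute the target pixel 'dst' (e.g. the mean-shift average) ...
    ivec2 dst = src; // placeholder

    // Place the point so it rasterizes exactly over pixel 'dst':
    // pixel centers sit at (dst + 0.5)/size in [0,1], remapped to clip space.
    vec2 ndc = (vec2(dst) + 0.5) / vec2(uImageSize) * 2.0 - 1.0;
    gl_Position  = vec4(ndc, 0.0, 1.0);
    gl_PointSize = 1.0;
}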

No, that's not possible. A fragment shader is invoked for a specific fragment at a specific position and can only output the values for this particular fragment (or discard the whole fragment), which then get written into the framebuffer at exactly that pre-determined fragment position.
What you can do is not write your outputs to the framebuffer at all, but into some other storage, either an arbitrary image (using image load/store) or a shader storage buffer. But those two features require quite modern hardware capabilities (GL 4+ hardware). And in this case you could also do the whole thing using a proper compute shader in the first place (or an actual computing framework like CUDA or OpenCL, if you don't need any other OpenGL functionality).
Another way that also works on older hardware would be to do your stuff in the vertex shader instead of the fragment shader. This way you can just compute the vertex's clip space position (that then turns into the fragment position) accordingly. When using the geometry shader instead of the vertex shader you can even scatter data (compute more than one output for a single input) or discard stuff.
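If image load/store is available, the scatter itself is straightforward; here is a hedged sketch (GL 4.2-style syntax, with uClusterImage as an assumed image binding set up via glBindImageTexture):

#version 420
// Fragment shader sketch: scatter a result to an arbitrary pixel via image store.
// Note that writes to the same texel from different fragments are unordered.
layout(rgba8) writeonly uniform image2D uClusterImage; // assumed binding name
out vec4 fragColor;

void main()
{
    // ... compute the mean-shift target position for this fragment ...
    ivec2 target = ivec2(29, 32); // placeholder; would be computed per fragment

    imageStore(uClusterImage, target, vec4(1.0)); // write wherever you like
    fragColor = vec4(0.0);                        // the framebuffer output is unused here
}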

Related

How to loop over every pixel in a 3D texture/buffer without using compute shaders

I understand how you would do this with a 2D buffer. Just draw two triangles that make a quad which fully encompasses the 2D buffer space. That way, when the fragment shader runs, it runs for all the pixels in the buffer.
Question: How would this work for a 3D buffer?
You could just draw a quad (two triangles) for each cross-section of the 3D buffer. However, if you had a texture that was 1x1x256, that would mean drawing 2 triangles for each of the 256 slices, i.e. 256*2 triangles, to iterate over all of the pixels. I know this is an extreme case and there are ways of optimizing this solution. However, I feel like there is a more elegant solution that I am missing.
What I am trying to do: I am trying to make a 3D fluid solver that iterates through each of the pixels of the 3D texture and computes its velocity, density, etc. I am trying to do this via the fragment shader because I am using OpenGL 3.0, which does not support compute shaders.
#version 330 core
out vec4 FragColor;
uniform sampler3D volume;
void main()
{
    // compute the fluid density, velocity, and center of mass from 'volume'
    float density      = 0.0;        // placeholders for the actual computation
    vec2  velocity     = vec2(0.0);
    float centerOfMass = 0.0;
    // output the values to the 3D buffer through different color channels:
    FragColor = vec4(density, velocity, centerOfMass);
}
At some point in the fragment shader, you're going to write some statement of the form:
vec4 value = texture(my_texture, TexCoords);
Where TexCoords is the location in my_texture that maps to some particular value in the source texture. But... that mapping is entirely up to you. Nobody's making you use gl_FragCoord.xy / textureSize(my_texture, 0). You could just as easily use vec3(gl_FragCoord.x, Y_value, gl_FragCoord.y) / textureSize(my_texture, 0), which puts the Y component of the fragment location in the Z dimension of the texture. Y_value in this case is a value passed in from the outside that tells which vertical slice of the 3D texture to use.
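As a hedged sketch of that remapping (Y_value here is just an assumed uniform, as described above):

#version 330 core
uniform sampler3D my_texture;
uniform float Y_value;   // which vertical slice (in texels), passed in by the application
out vec4 FragColor;

void main()
{
    // Swizzle the fragment position so it walks the X/Z plane of the volume,
    // with Y_value selecting the vertical slice.
    vec3 coords = vec3(gl_FragCoord.x, Y_value, gl_FragCoord.y)
                  / vec3(textureSize(my_texture, 0));
    vec4 value = texture(my_texture, coords);
    FragColor = value; // ... or use 'value' in the fluid update ...
}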
Of course, whatever mapping you use to fetch the data must also be used when you write the data. If you're writing via fragment shader outputs, that poses a problem. A 3D texture can only be attached to an FBO as either a single 2D slice or as a layered set of 2D slices, with these slices always being along the Z dimension of the image. So even if you try to read in slices along the Y dimension, it has to be written in Z slices. So you'd be moving around the location of the data, which makes this non-viable.
If you're using image load/store, then you have no problem. You can just write to the appropriate texel (indeed, you can read from it as an image using integer coordinates, so there's no need to divide by the texture's size).
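A minimal sketch of that image load/store variant (GL 4.2-style; uVolume and uZSlice are assumed names, bound and set by the application):

#version 420
layout(rgba16f) uniform image3D uVolume; // the 3D buffer, bound with glBindImageTexture
uniform int uZSlice;                     // or derive the texel however you like

void main()
{
    // Integer texel coordinates: no normalization, and any read/write mapping works.
    ivec3 texel = ivec3(ivec2(gl_FragCoord.xy), uZSlice);
    vec4 value = imageLoad(uVolume, texel);
    // ... update density / velocity / center of mass in 'value' ...
    imageStore(uVolume, texel, value);
}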

Approach for writing a GLSL fragment shader with a solid color per triangle/face

I have vertex and triangle data which contains a color for each triangle (face), not for each vertex; i.e., a single vertex is shared by multiple faces, with each face potentially a different color.
How should I approach this problem in GLSL to obtain a solid color assignment for each face being rendered? Calculating and assigning a "vertex color" buffer by averaging the colors of a vertex's neighboring polys is easy enough, but this of course produces a blurry result where the colors are interpolated in the fragment shader.
What I really need are color values that aren't interpolated at all; I'll have about 40k triangles shaded with approximately 15 possible solid colors once this is working as intended.
While you maybe could do this in high-end GLSL, the right way to do solid shading is to make unique vertices for every triangle. This is a trivial loop: for every vertex, count how many triangles share it; that's how often you have to replicate it. Make sure the loop that does this is O(n). Then just set each duplicated vertex's color or normal to that of its triangle, again in one straight loop. Do not bother optimizing for shared colors; it is not worth it.
Edit much later, because this is a popular answer:
To do flat per-face shading you can interpolate the vertex position in world or view space. Then in the fragment shader compute the screen-space derivatives of this variable with dFdx and dFdy (ddx/ddy in HLSL). Take the cross product of those two vectors and normalize it: you've got a flat normal! No mesh changes or per-vertex data needed at all.
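A minimal GLSL sketch of that derivative trick (vWorldPos is an assumed interpolated position passed from the vertex shader; the lighting is just for illustration):

#version 330 core
in vec3 vWorldPos;   // interpolated world- or view-space position
out vec4 FragColor;

void main()
{
    // The derivatives lie in the triangle's plane, so their cross product
    // is a face normal that is constant across the whole triangle.
    vec3 faceNormal = normalize(cross(dFdx(vWorldPos), dFdy(vWorldPos)));

    // Example: simple flat lighting with a fixed light direction.
    float ndotl = max(dot(faceNormal, normalize(vec3(0.5, 1.0, 0.3))), 0.0);
    FragColor = vec4(vec3(ndotl), 1.0);
}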
OpenGL does not have "per-face" attributes. See:
How can I specify per-face colors when using indexed vertex arrays in OpenGL 3.x?
Here are a few possible options I see:
Ditch the index arrays and use separate vertices for each face like starmole suggested
Create an index array for each color used. Use materials instead of vertex colors and change the material after drawing the triangles from the index array for each color.
If the geometry allows it, you can make sure the last vertex specified by the index array has the correct vertex color for the face, and then use GL_FLAT shading (or the flat interpolation qualifier in GLSL), so the fragment shader uses only that last (provoking) vertex's color; see the sketch after this list.
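A hedged sketch of that last option using the flat qualifier (the attribute and uniform names aPosition, aColor and uMVP are placeholders):

// Vertex shader
#version 330 core
in vec3 aPosition;
in vec3 aColor;          // the color stored on the face's provoking (last) vertex
flat out vec3 vColor;    // 'flat' disables interpolation
uniform mat4 uMVP;

void main()
{
    vColor = aColor;
    gl_Position = uMVP * vec4(aPosition, 1.0);
}

// Fragment shader
#version 330 core
flat in vec3 vColor;     // the whole triangle gets the provoking vertex's color
out vec4 FragColor;

void main()
{
    FragColor = vec4(vColor, 1.0);
}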
In addition to the other answers, you could maybe employ the gl_PrimitiveID variable, which has been available as a fragment shader input since GLSL 1.50 (OpenGL 3.2) and is incremented implicitly for each primitive. You could then use it to look up the color (either from a buffer texture holding the ~40k per-triangle colors, or from indices into a 15-entry color map, or via some direct computation from the primitive ID). But don't ask me about the performance of this approach.
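That lookup could be as small as this sketch (GLSL 3.30+; uFaceColors is an assumed buffer texture with one RGBA entry per triangle, in draw order):

#version 330 core
uniform samplerBuffer uFaceColors; // one color per triangle
out vec4 FragColor;

void main()
{
    // gl_PrimitiveID counts primitives within the current draw call.
    FragColor = texelFetch(uFaceColors, gl_PrimitiveID);
}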

Vertex shader vs Fragment Shader [duplicate]

This question already has answers here: What are Vertex and Pixel shaders? (closed as a duplicate)
I've read some tutorials regarding Cg, yet one thing is not quite clear to me.
What exactly is the difference between vertex and fragment shaders?
And for what situations is one better suited than the other?
A fragment shader is the same as a pixel shader.
One main difference is that a vertex shader can manipulate the attributes of vertices, which are the corner points of your polygons.
The fragment shader, on the other hand, takes care of how the pixels between the vertices look. Its inputs are interpolated between the defined vertices following specific rules.
For example: if you want your polygon to be completely red, you would define all vertices red. If you want specific effects like a gradient between the vertices, you have to do that in the fragment shader (a minimal sketch follows at the end of this answer).
Put another way:
The vertex shader is part of the early steps in the graphic pipeline, somewhere between model coordinate transformation and polygon clipping I think. At that point, nothing is really done yet.
However, the fragment/pixel shader is part of the rasterization step, where the image is calculated and the pixels between the vertices are filled in or "coloured".
Just read about the graphics pipeline here and everything will reveal itself:
http://en.wikipedia.org/wiki/Graphics_pipeline
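To make the red-polygon/gradient example above concrete, here is a minimal pair of shaders (names like aPosition, aColor and uMVP are just placeholders for this sketch):

// Vertex shader: runs once per vertex and forwards the per-vertex color.
#version 330 core
in vec3 aPosition;
in vec3 aColor;
out vec3 vColor;        // interpolated across the triangle by the rasterizer
uniform mat4 uMVP;

void main()
{
    vColor = aColor;
    gl_Position = uMVP * vec4(aPosition, 1.0);
}

// Fragment shader: runs once per covered pixel; vColor arrives already
// interpolated, which is exactly what produces a gradient between
// differently colored vertices.
#version 330 core
in vec3 vColor;
out vec4 FragColor;

void main()
{
    FragColor = vec4(vColor, 1.0);
}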
The vertex shader runs on every vertex, while the fragment shader runs on every pixel (fragment). The fragment shader is applied after the vertex shader in the GPU pipeline.
Nvidia Cg Tutorial:
Vertex transformation is the first processing stage in the graphics hardware pipeline. Vertex transformation performs a sequence of math operations on each vertex. These operations include transforming the vertex position into a screen position for use by the rasterizer, generating texture coordinates for texturing, and lighting the vertex to determine its color.
The results of rasterization are a set of pixel locations as well as a set of fragments. There is no relationship between the number of vertices a primitive has and the number of fragments that are generated when it is rasterized. For example, a triangle made up of just three vertices could take up the entire screen, and therefore generate millions of fragments!
Earlier, we told you to think of a fragment as a pixel if you did not know precisely what a fragment was. At this point, however, the distinction between a fragment and a pixel becomes important. The term pixel is short for "picture element." A pixel represents the contents of the frame buffer at a specific location, such as the color, depth, and any other values associated with that location. A fragment is the state required potentially to update a particular pixel.
The term "fragment" is used because rasterization breaks up each geometric primitive, such as a triangle, into pixel-sized fragments for each pixel that the primitive covers. A fragment has an associated pixel location, a depth value, and a set of interpolated parameters such as a color, a secondary (specular) color, and one or more texture coordinate sets. These various interpolated parameters are derived from the transformed vertices that make up the particular geometric primitive used to generate the fragments. You can think of a fragment as a "potential pixel." If a fragment passes the various rasterization tests (in the raster operations stage, which is described shortly), the fragment updates a pixel in the frame buffer.
Vertex shaders and fragment shaders are both features of 3D implementations that do not use fixed-function pipeline rendering. In any 3D rendering, vertex shaders are applied before fragment/pixel shaders.
The vertex shader operates on each vertex. If you have a fixed polygon mesh and you want to deform it in a shader, you have to implement that in the vertex shader; i.e., any physical change in vertex appearance can be done in vertex shaders.
The fragment shader takes the output from the vertex shader and associates colors, the depth value of a pixel, etc. After these operations the fragment is sent to the framebuffer for display on the screen.
Some operations, such as lighting calculations, can be performed in the vertex shader as well as the fragment shader, but the fragment shader generally provides a better (smoother) result than the vertex shader.
In rendering images via 3D hardware you typically have a mesh (points, polygons, lines) defined by vertices. To manipulate vertices individually, typically for motions in a model or waves in an ocean, you can use vertex shaders. These vertices can have a static color or a color assigned by textures; to manipulate the resulting colors you use fragment shaders. At the end of the pipeline, when the view goes to the screen, you also use fragment shaders.

C++ shader optimization question

Could someone explain to me the basics of how pixel and vertex shaders interact?
The obvious part is that vertex shaders receive basic vertex properties and then pass some of them on to the actual pixel shader.
But how does the actual vertex->pixel transition happen? I know that all types of pipelines include the rasterizer stage, which is capable of interpolating the vertex parameters and can apply textures based on the given texture coordinates.
As far as I understand, those are also interpolated (not quite sure about this; I've heard something about complex UV derivative math, but I assume we can say they are interpolated).
So, here are some "targeted" questions.
How does the pixel shader operate? The pixel shader obviously does some work "per pixel", but because of the non-obvious vertex->pixel transition, this raises some questions.
Can I assume that if I evaluate a matrix-vector product once in my pixel shader, it will be evaluated only once when the image is rasterized? Or would it be better to evaluate everything that's possible in the vertex shader and then pass it to the pixel shader?
Also, if someone could point articles / abstracts on this topic, I would really appreciate that.
Thank you.
UPDATE
I thought it actually doesn't matter, because the interaction should be pretty much the same everywhere. I'm developing visualization applications and games for desktops, using HLSL / GLSL / Nvidia Cg for shaders and mostly C++ as the base language.
The vertex shader is executed once for every vertex. It allows you to transform the vertex from world space coordinates (or whichever other coordinate system it might be in) into screenspace coordinates.
That is, if you have a triangle, each vertex is transformed, so it ends up with a position on the screen.
And given these positions, the rasterizer determines which pixels are covered by the triangle spanned by those three vertices.
And then, for each pixel inside the triangle, the pixel shader is invoked. The output from the vertex shader is usually interpolated for each pixel, so pixels close to vertex v0 will receive values very close to those computed by the vertex shader for v0.
And this means that everything you do in the pixel shader is executed once per pixel covered by the primitive being rasterized.
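A small sketch of that guideline, moving a matrix-vector product out of the pixel shader and into the vertex shader (GLSL here; uNormalMatrix, uMVP and the attribute names are assumptions for the example):

// Vertex shader: the matrix-vector product happens once per vertex.
#version 330 core
in vec3 aPosition;
in vec3 aNormal;
uniform mat4 uMVP;
uniform mat3 uNormalMatrix;
out vec3 vNormal;        // interpolated for free by the rasterizer

void main()
{
    vNormal = uNormalMatrix * aNormal;       // per vertex, not per pixel
    gl_Position = uMVP * vec4(aPosition, 1.0);
}

// Fragment shader: everything here runs once per covered pixel,
// so only keep the work that truly varies per pixel.
#version 330 core
in vec3 vNormal;
out vec4 FragColor;

void main()
{
    float ndotl = max(dot(normalize(vNormal), normalize(vec3(0.4, 1.0, 0.2))), 0.0);
    FragColor = vec4(vec3(ndotl), 1.0);
}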

Can I use a vertex shader to display a models normals?

I'm currently using a VBO for the texture coordinates, normals and the vertices of a (3DS) model I'm drawing with "glDrawArrays(GL_TRIANGLES, ...);". For debugging I want to (temporarily) show the normals when drawing my model. Do I have to use immediate mode to draw each line from vert to vert+normal -OR- stuff another VBO with vert and vert+normal to draw all the normals… -OR- is there a way for the vertex shader to use the vertex and normal data already passed in when drawing the model to compute the V+N used when drawing the normals?
No, it is not possible to draw additional lines from a vertex shader.
A vertex shader is not about creating geometry, it is about doing per vertex computation. Using vertex shaders, when you say glDrawArrays(GL_TRIANGLES,0,3), this is what specifies exactly what you will draw, i.e. 1 triangle. Once processing reaches the vertex shader, you can only alter the properties of the vertices of that triangle, not modify in any way, shape or form, the topology and/or count of the geometry.
What you're looking for is what OpenGL 3.2 defines as a geometry shader, which allows a shader to output arbitrary geometry counts/topologies. Note however that this is only supported from OpenGL 3.2 on, which not many cards/drivers support right now (it's been out for a few months now).
However, I must point out that showing normals (in most engines that support some kind of debugging) is usually done with the traditional line rendering, with an additional vertex buffer that gets filled in with the proper positions (P, P+C*N) for each mesh position, where C is a constant that represents the length you want to use to show the normals. It is not that complex to write...
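If geometry shaders are available, a hedged sketch of that route looks like this (GLSL 1.50; it assumes the mesh is drawn a second time as GL_POINTS, that the vertex shader outputs a view-space position in gl_Position and a view-space normal in vNormal, and that uProjection and uNormalLength are uniforms supplied by the application):

#version 150
layout(points) in;
layout(line_strip, max_vertices = 2) out;

in vec3 vNormal[];           // view-space normal from the vertex shader
uniform mat4  uProjection;
uniform float uNormalLength; // the constant C from the answer above

void main()
{
    vec4 p = gl_in[0].gl_Position;   // view-space position (P)

    gl_Position = uProjection * p;
    EmitVertex();

    gl_Position = uProjection * (p + vec4(normalize(vNormal[0]) * uNormalLength, 0.0));
    EmitVertex();                    // P + C*N

    EndPrimitive();
}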
You could approximate this by drawing the geometry twice. Once draw it as you normally would. The second time, draw the geometry as GL_POINTS, and attach a vertex shader which offsets each vertex position by the vertex normal.
This would result in your model having a set of points floating over the surface. Each point would show the direction of the normal from the vertex it corresponds to.
This isn't perfect, but might be sufficient, depending on what it is you're hoping to use it for.
UPDATE: AHA! And if you pass in a constant scaling factor to the vertex shader, and have your application interpolate that factor between 0 and 1 as time goes by, your points rendered by the vertex shader will animate over time, starting at the vertex they apply to, and then floating off in the direction of its normal.
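A hedged sketch of that vertex shader (uScale is the application-supplied factor animated between 0 and 1; the attribute names are placeholders):

#version 150
in vec3 aPosition;
in vec3 aNormal;
uniform mat4  uMVP;
uniform float uScale;   // animate from 0.0 to 1.0 over time in the application

void main()
{
    // Push each point away from the surface along its normal.
    vec3 displaced = aPosition + aNormal * uScale;
    gl_Position  = uMVP * vec4(displaced, 1.0);
    gl_PointSize = 3.0; // remember to glEnable(GL_PROGRAM_POINT_SIZE)
}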
It's probably possible to get more or less the right effect with a cleverly written vertex shader, but it'd be a lot of work. Since this is for debugging purposes anyway, it seems better to just draw a few lines; the performance hit will not be severe.