Is it possible to get data from shaders? (C++)

What I am trying to do is get the position of each vertex after translation, rotation, and scaling, along with the normal direction after the same transforms, and then pass those values back to my C++ app. Is that possible?

Yes, it is possible, but the best method depends on the OpenGL version/profile available. The most elegant solution is transform feedback: https://www.opengl.org/wiki/Transform_Feedback
If you don't have transform feedback you'll have to write the information into the framebuffer (ideally an off-screen framebuffer object) for readback (this will of course not give you a viewable "image", just color-coded information). Render in GL_POINTS mode, use gl_VertexID to set the fragment position, pass the transformed data to the fragment shader, and have the fragment shader write it into the right color channels.
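For illustration, a minimal transform feedback sketch might look like the following. It assumes a GL 3.0+ context with a loader header already included, and a program object whose attached (not yet linked) vertex shader writes out variables named outPosition and outNormal; those names, the function name, and vertexCount are made up for the example, and a VAO with the model's attributes is assumed to be bound before drawing.

    #include <vector>

    // Sketch: capture transformed positions/normals with transform feedback.
    std::vector<GLfloat> captureTransformedGeometry(GLuint program, GLsizei vertexCount)
    {
        // 1. Declare which varyings to capture, *before* linking the program.
        const char* varyings[] = { "outPosition", "outNormal" };
        glTransformFeedbackVaryings(program, 2, varyings, GL_INTERLEAVED_ATTRIBS);
        glLinkProgram(program);

        // 2. A buffer large enough for vec4 position + vec3 normal per vertex.
        GLuint tfBuffer;
        glGenBuffers(1, &tfBuffer);
        glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
        glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER,
                     vertexCount * 7 * sizeof(GLfloat), nullptr, GL_STATIC_READ);
        glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);

        // 3. Run only the vertex stage; no rasterization is needed.
        glUseProgram(program);
        glEnable(GL_RASTERIZER_DISCARD);
        glBeginTransformFeedback(GL_POINTS);
        glDrawArrays(GL_POINTS, 0, vertexCount);
        glEndTransformFeedback();
        glDisable(GL_RASTERIZER_DISCARD);

        // 4. Read the captured data back into the C++ application.
        std::vector<GLfloat> results(vertexCount * 7);
        glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0,
                           results.size() * sizeof(GLfloat), results.data());
        glDeleteBuffers(1, &tfBuffer);
        return results;
    }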

Related

Read back data stored in a texture

I have a texture which is updated by a fragment shader that calculates point positions.
What is a good way to read it back so it can be drawn as primitives?
If you want to draw using the data from the texture, reading it back to host memory is wasteful and slow (for reference, though, you could use glGetTexImage or glReadPixels).
Instead, you can draw primitives without providing vertex positions and read them from your texture in the vertex shader (bound as a sampler and using texelFetch for example).
The coordinates for texel fetch can come from a per-vertex attribute (just like regular texture coordinates), or you can use gl_VertexID to calculate them implicitly.
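As a small illustration (not taken from the original answers), a vertex shader along those lines could be embedded in the C++ application like this; the sampler name and the use of textureSize to derive texel coordinates are assumptions for the example:

    // Sketch: a vertex shader that takes its position from a texture rather
    // than from a vertex attribute, indexed via gl_VertexID (GLSL 3.30 here).
    const char* positionsFromTextureVS = R"glsl(
        #version 330 core
        uniform sampler2D uPositions;   // texture written by the earlier pass
        void main()
        {
            ivec2 size  = textureSize(uPositions, 0);
            ivec2 texel = ivec2(gl_VertexID % size.x, gl_VertexID / size.x);
            vec4 pos    = texelFetch(uPositions, texel, 0);
            gl_Position = pos;          // or transform by your MVP matrix first
        }
    )glsl";
    // Then draw without a position attribute, e.g.
    // glDrawArrays(GL_POINTS, 0, textureWidth * textureHeight);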
As @ColonelThirtyTwo said, you can also use transform feedback. Instead of doing the computation in the fragment shader and storing it in your texture, you do it in a vertex shader. The varying variables that would normally be interpolated to the fragment shader are packed and saved in a buffer, still on the GPU.

Specific coordinate output in glsl fragment shaders?

Is there a way to set the color of a specific pixel by its coordinate, instead of writing to the predetermined coordinate implied by gl_FragColor?
I'm currently trying to implement the Mean Shift algorithm via shaders. My input is a black and white texture, where white dots represent points to be clustered and black represents no-data.
After calculating the weighted average of all point positions in the neighborhood, I have to set the pixel in the resulting position to a new color that represents a cluster.
For example, if I look at an 18x18 neighborhood centered on the pixel corresponding to gl_FragCoord and find 3 white pixels:
Fragcoord = 30,33
Pixel 1: coordinate (30,33)
Pixel 2: coordinate (27,33)
Pixel 3: coordinate (30,30)
After calculating the average of their positions, I'll have (29,32). Is there a way to set the pixel at 29,32 to a different color, in a shader unit that has a different fragcoord (for example, 30,33)?
Something like gl_FragColor(vec2(29,32)) = vec4(1.0,1.0,1.0,1.0); ?
As Christian said, it's not possible; if you can use it, a compute framework or image load/store is your best option to switch to.
If you must use GLSL without image load/store, you do have an option: if your image has n pixels total, send n vertices to the vertex shader as points. In the vertex shader, read from the texture based on gl_VertexID (core since GLSL 1.30, earlier via EXT_gpu_shader4; if you have 1.40+ you could also use instancing and gl_InstanceID), and position the point so that when it reaches the fragment shader it covers exactly the pixel you want. Then just have the fragment shader output white no matter what.
It's a hack, but it may work fine if you have no other options.
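A rough sketch of the vertex-shader half of that hack could look like this; the image-size uniform and the placeholder for the mean-shift computation are my own assumptions, not part of the original answer:

    // Sketch: position a GL_POINTS vertex so it lands exactly on a chosen pixel.
    const char* scatterVS = R"glsl(
        #version 330 core
        uniform sampler2D uInput;    // black/white input image
        uniform vec2 uImageSize;     // render-target size in pixels
        void main()
        {
            // This invocation's own texel, derived from the vertex index.
            ivec2 self = ivec2(gl_VertexID % int(uImageSize.x),
                               gl_VertexID / int(uImageSize.x));
            // ...compute the mean-shift target from the 18x18 neighborhood here...
            vec2 targetPixel = vec2(self);                 // placeholder

            // Map the target pixel's center to normalized device coordinates.
            vec2 ndc = (targetPixel + 0.5) / uImageSize * 2.0 - 1.0;
            gl_Position  = vec4(ndc, 0.0, 1.0);
            gl_PointSize = 1.0;      // requires GL_PROGRAM_POINT_SIZE if changed
        }
    )glsl";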
No, that's not possible. A fragment shader is invoked for a specific fragment at a specific position and can only output the values for that particular fragment (or discard it entirely), which then get written into the framebuffer at exactly that pre-determined fragment position.
What you can do is not write your outputs to the framebuffer at all, but into some other storage, either an arbitrary image (using image load/store) or a shader storage buffer. But those two features require quite modern hardware (GL 4+). And in that case you could also do the whole thing with a proper compute shader in the first place (or an actual computing framework like CUDA or OpenCL, if you don't need any other OpenGL functionality).
Another way that also works on older hardware would be to do your computation in the vertex shader instead of the fragment shader. That way you can compute the vertex's clip-space position (which then determines the fragment position) however you like. When using a geometry shader on top of the vertex shader you can even scatter data (compute more than one output for a single input) or discard inputs; see the sketch below.
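For completeness, a skeleton of such a scattering geometry shader might look like this (purely illustrative, emitting two points per input point):

    // Sketch: a geometry shader that scatters one input point to two outputs;
    // emitting nothing at all would amount to discarding the input.
    const char* scatterGS = R"glsl(
        #version 330 core
        layout(points) in;
        layout(points, max_vertices = 2) out;
        void main()
        {
            // First output: the original position.
            gl_Position = gl_in[0].gl_Position;
            EmitVertex();
            EndPrimitive();
            // Second output: an arbitrarily computed second target.
            gl_Position = gl_in[0].gl_Position + vec4(0.1, 0.0, 0.0, 0.0);
            EmitVertex();
            EndPrimitive();
        }
    )glsl";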

Calculate normals for plane inside fragment shader

I have a situation where I need to do lighting. I don't have a vertex shader, so I can't interpolate normals into my fragment shader. I also have no ability to pass in a normal map. Can I generate normals entirely in the fragment shader, based, for example, on the fragment coordinates? The geometry is always planar in my case.
And to extend on what I am trying to do:
I am using the NV_path_rendering extension, which allows rendering pure vector graphics on the GPU. The problem is that only the fragment stage is programmable, which basically means I can't use a vertex shader with NV_path_rendering paths.
Since your shapes are flat and NV_path_rendering requires the compatibility profile, you can pass the normal through one of the built-in varyings gl_Color or gl_SecondaryColor.
The extension description says that there is some kind of interpolation:
Interpolation of per-vertex data (section 3.6.1). Path primitives have neither conventional vertices nor per-vertex data. Instead fragments generate interpolated per-fragment colors, texture coordinate sets, and fog coordinates as a linear function of object-space or eye-space path coordinate's or using the current color, texture coordinate set, or fog coordinate state directly.
http://developer.download.nvidia.com/assets/gamedev/files/GL_NV_path_rendering.txt
Here's a method which "sets the normal as the face normal", without knowing anything about vertex normals (as I understand it).
https://stackoverflow.com/a/17532576/738675
I have a three.js demo working here:
http://meetar.github.io/three.js-normal-map-0/index6.html
My implementation is getting vertex position data from the vertex shader, but it sounds like you're able to get that through other means.
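The linked answer essentially reconstructs the face normal from screen-space derivatives of an interpolated position. A minimal sketch of that idea (vPosition here stands for whatever eye- or world-space position your fragment stage has available; the names are illustrative):

    // Sketch: derive a flat, per-face normal in the fragment shader.
    const char* faceNormalFS = R"glsl(
        #version 330 core
        in vec3 vPosition;      // interpolated eye- or world-space position
        out vec4 fragColor;
        void main()
        {
            // dFdx/dFdy give the position change between neighboring fragments;
            // their cross product is perpendicular to the (planar) surface.
            vec3 n = normalize(cross(dFdx(vPosition), dFdy(vPosition)));
            fragColor = vec4(n * 0.5 + 0.5, 1.0);   // visualize the normal
        }
    )glsl";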

C++ shader optimization question

Could someone explain to me the basics of pixel and vertex shader interaction?
The obvious part is that vertex shaders receive basic vertex properties and then pass some of them on to the actual pixel shader.
But how does the actual vertex-to-pixel transition happen? I know that all pipelines include the rasterizer stage, which is capable of interpolating the vertex parameters and can apply textures based on the texture coordinates.
And as far as I understand those are also interpolated (not quite sure about this point; I've heard something about complex UV derivative math, but I assume we can say that they are interpolated).
So, here are some "targeted" questions.
How does the pixel shader operate? I mean the pixel shader obviously does some actions "per pixel", but because of the non-obvious vertex-to-pixel transition this raises some questions.
Can I assume that if I write a matrix-vector product once in my pixel shader, it will only be evaluated once when the image is rasterized? Or would it be better to evaluate everything that's possible in my vertex shader and then pass the results to the pixel shader?
Also, if someone could point articles / abstracts on this topic, I would really appreciate that.
Thank you.
UPDATE
I thought it didn't really matter, because the interaction should be pretty much the same everywhere. I'm developing visualization applications and games for desktop platforms, using HLSL / GLSL / NVIDIA Cg for shaders and mostly C++ as the base language.
The vertex shader is executed once for every vertex. It allows you to transform the vertex from world-space coordinates (or whichever other coordinate system it might be in) into screen-space coordinates.
That is, if you have a triangle, each vertex is transformed, so it ends up with a position on the screen.
And given these positions, the rasterizer determines which pixels are covered by the triangle spanned by those three vertices.
And then, for each pixel inside the triangle, the pixel shader is invoked. The output from the vertex shader is usually interpolated for each pixel, so pixels close to vertex v0 will receive values very close to those computed by the vertex shader for v0.
And this means that everything you do in the pixel shader is executed once per pixel covered by the primitive being rasterized.
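To make the last point concrete, here is an illustrative GLSL pair (the names are made up) that hoists a matrix-vector product out of the pixel shader: because the product is linear in the vertex data, computing it once per vertex and letting the rasterizer interpolate the result gives the same values as recomputing it per pixel, at a fraction of the cost:

    // Sketch: evaluate the matrix product once per vertex...
    const char* liftedVS = R"glsl(
        #version 330 core
        layout(location = 0) in vec3 aPosition;
        layout(location = 1) in vec3 aNormal;
        uniform mat4 uModelViewProjection;
        uniform mat3 uNormalMatrix;
        out vec3 vNormal;
        void main()
        {
            vNormal     = uNormalMatrix * aNormal;            // per vertex
            gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
        }
    )glsl";

    // ...and merely consume the interpolated result per pixel.
    const char* liftedFS = R"glsl(
        #version 330 core
        in vec3 vNormal;
        out vec4 fragColor;
        void main()
        {
            fragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
        }
    )glsl";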

Can I use a vertex shader to display a models normals?

I'm currently using a VBO for the texture coordinates, normals and the vertices of a (3DS) model I'm drawing with "glDrawArrays(GL_TRIANGLES, ...);". For debugging I want to (temporarily) show the normals when drawing my model. Do I have to use immediate mode to draw each line from vert to vert+normal -OR- stuff another VBO with vert and vert+normal to draw all the normals… -OR- is there a way for the vertex shader to use the vertex and normal data already passed in when drawing the model to compute the V+N used when drawing the normals?
No, it is not possible to draw additional lines from a vertex shader.
A vertex shader is not about creating geometry; it is about doing per-vertex computation. With vertex shaders, when you call glDrawArrays(GL_TRIANGLES, 0, 3), that call specifies exactly what you will draw, i.e. one triangle. Once processing reaches the vertex shader, you can only alter the properties of the vertices of that triangle, not modify in any way, shape or form the topology and/or count of the geometry.
What you're looking for is what OpenGL 3.2 defines as a geometry shader, which allows you to output arbitrary geometry count/topology from a shader. Note however that geometry shaders require OpenGL 3.2, which not many cards/drivers support right now (it's been out for a few months now).
However, I must point out that showing normals (in most engines that support some kind of debugging) is usually done with the traditional line rendering, with an additional vertex buffer that gets filled in with the proper positions (P, P+C*N) for each mesh position, where C is a constant that represents the length you want to use to show the normals. It is not that complex to write...
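A minimal CPU-side sketch of filling such a debug buffer (assuming flat float arrays of per-vertex positions and normals, as you would already have for the model's VBO; the function name is made up):

    #include <vector>

    // Sketch: build a line list (P, P + C*N) for every vertex of the mesh.
    // 'positions' and 'normals' hold 3 floats per vertex; C is the desired
    // visual length of the normals.
    std::vector<float> buildNormalLines(const std::vector<float>& positions,
                                        const std::vector<float>& normals,
                                        float C)
    {
        std::vector<float> lines;
        lines.reserve(positions.size() * 2);
        for (std::size_t i = 0; i + 2 < positions.size(); i += 3)
        {
            // Line start: the vertex position P.
            lines.push_back(positions[i]);
            lines.push_back(positions[i + 1]);
            lines.push_back(positions[i + 2]);
            // Line end: P + C * N.
            lines.push_back(positions[i]     + C * normals[i]);
            lines.push_back(positions[i + 1] + C * normals[i + 1]);
            lines.push_back(positions[i + 2] + C * normals[i + 2]);
        }
        return lines;  // upload with glBufferData and draw with GL_LINES
    }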
You could approximate this by drawing the geometry twice. Once draw it as you normally would. The second time, draw the geometry as GL_POINTS, and attach a vertex shader which offsets each vertex position by the vertex normal.
This would result in your model having a set of points floating over the surface. Each point would show the direction of the normal from the vertex it corresponds to.
This isn't perfect, but might be sufficient, depending on what it is you're hoping to use it for.
UPDATE: AHA! And if you pass in a constant scaling factor to the vertex shader, and have your application interpolate that factor between 0 and 1 as time goes by, your points rendered by the vertex shader will animate over time, starting at the vertex they apply to, and then floating off in the direction of its normal.
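A vertex shader for that second GL_POINTS pass could be as small as the following sketch; the uniform and attribute names are illustrative, and uScale is the application-driven factor mentioned above:

    // Sketch: offset each vertex along its normal; animate uScale from 0 to 1
    // in the application to make the points drift away from the surface.
    const char* normalPointsVS = R"glsl(
        #version 330 core
        layout(location = 0) in vec3 aPosition;
        layout(location = 1) in vec3 aNormal;
        uniform mat4 uModelViewProjection;
        uniform float uScale;
        void main()
        {
            vec3 displaced = aPosition + aNormal * uScale;
            gl_Position = uModelViewProjection * vec4(displaced, 1.0);
        }
    )glsl";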
It's probably possible to get more or less the right effect with a cleverly written vertex shader, but it'd be a lot of work. Since this is for debugging purposes anyway, it seems better to just draw a few lines; the performance hit will not be severe.