Is it possible to implement interpolation that is of a higher order than linear when passing data from the vertex to the fragment shader? Ideally I would like some form of quadratic interpolation, but that would require access to vertices beyond the corners of the face being interpolated across.
The short answer is: no.
I do not think there is native support for anything other than linear interpolation when it comes to attributes passed from the vertex shader to the fragment shader.
However, you could use a trick to get non-linear interpolation: use a geometry shader and insert interpolated vertices in between. Or, if you want to have some particular distribution of values along the interpolated line, you can use a predefined 1D texture that contains the interpolation curve and sample it in the fragment shader.
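A minimal sketch of the 1D-texture idea, assuming GLSL 3.30 and illustrative names (uCurve and vT are not from the question): the vertex shader still writes a linearly interpolated parameter, and the fragment shader remaps it through the curve stored in the texture.

```glsl
#version 330 core
uniform sampler1D uCurve;  // assumed: 1D texture encoding the desired interpolation curve
in float vT;               // linearly interpolated parameter written by the vertex shader
out vec4 fragColor;

void main() {
    // Remap the hardware's linear parameter through the user-defined curve,
    // turning linear interpolation into an arbitrary one.
    float t = texture(uCurve, vT).r;
    fragColor = vec4(vec3(t), 1.0);  // placeholder: use t however your shading needs it
}
```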
I am learning to make a graphical engine with OpenGL. I wanted to know: should repetitive operations be moved from the fragment shader to the vertex shader, since from what I understood the vertex shader is only run once per vertex?
For instance, when normalizing a vector for the light direction: since the light is the same for every vertex, should that be done in the vertex shader instead of being calculated for every pixel? Is there a particular reason to keep it in the fragment shader?
If the calculation is exactly the same: yes, it should usually be more efficient to do it in the vertex shader than the fragment shader. Some situations where it might not be more efficient:
when drawing geometry that results in fewer shaded pixels than transformable vertices -- either due to dense geometry or extreme discards/occlusion. If this is the case, usually you would want to address it by switching to lower level-of-detail geometry or smarter geometry culling.
when doing the calculation in the vertex shader requires you to send more data to the fragment shader in order to use the calculation's results. Sending more data can be slower because it requires more memory manipulation and because the rasterizer needs to interpolate more "varying" values across each polygon.
For light calculations, specifically, be mindful that moving calculations from the fragment shader to the vertex shader can affect the quality of your rendering. Particularly, normalized direction vectors at each vertex can become shorter after "varying" interpolation, which can slightly darken triangle interiors if used directly without renormalization. And, of course, moving the entire lighting calculation to the vertex shader has even more drastic effects.
But how visible these effects are depends on the frequency of textures, the resolution of geometry, the size on screen, how far away the lights are, etc. -- in some cases, the quality/performance tradeoff may make sense.
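As a hedged illustration of that tradeoff, here is a minimal GLSL 3.30 sketch (all names such as uLightPosEye are made up): the per-vertex work happens once in the vertex shader, and the fragment shader only renormalizes the interpolated direction, which addresses the shortening mentioned above.

```glsl
// --- vertex shader ---
#version 330 core
layout(location = 0) in vec3 aPosition;
uniform mat4 uModelView;
uniform mat4 uProjection;
uniform vec3 uLightPosEye;   // assumed: light position in eye space
out vec3 vLightDir;          // "varying" interpolated across each triangle

void main() {
    vec4 posEye = uModelView * vec4(aPosition, 1.0);
    vLightDir   = normalize(uLightPosEye - posEye.xyz);  // computed per vertex, not per pixel
    gl_Position = uProjection * posEye;
}

// --- fragment shader ---
#version 330 core
in vec3 vLightDir;
out vec4 fragColor;

void main() {
    // Renormalize: the interpolated vector can be shorter than unit length.
    vec3 l = normalize(vLightDir);
    fragColor = vec4(vec3(max(l.z, 0.0)), 1.0);  // placeholder shading term
}
```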
I would like the normal vector to be the same for all fragments on a single face of my mesh.
Due to the way my engine works I cannot enable provoking vertices. Don't bring them up in your reply, I've looked into it already.
I'd like all fragments to take the three values of that face and average them without weighting, interpolation, etc.
To clarify:
I want a variable output from the vertex shader to the fragment shader with strict averaging, no interpolation. Is there some qualifier or technique I could use in OpenGL to achieve this?
I would be even happier if I could just get the values from each vertex and interpolate them myself; I have some awesome shader ideas if I can!
Thanks.
khronos.org/opengl/wiki/Type_Qualifier_(GLSL)#Interpolation_qualifiers
I have a situation where I need to do light shading. I don't have a vertex shader, so I can't interpolate normals into my fragment shader. Also, I have no ability to pass in a normal map. Can I generate normals completely in the fragment shader, based, for example, on fragment coordinates? The geometry is always planar in my case.
And to extend on what I am trying to do:
I am using the NV_path_rendering extension which allows rendering pure vector graphics on GPU. The problem is that only the fragment stage is accessible via shader which basically means - I can't use a vertex shader with NV_Path objects.
Since your shapes are flat and NV_path_rendering requires the compatibility profile, you can pass the normal through one of the built-in varyings gl_Color or gl_SecondaryColor.
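A hedged sketch of that approach, assuming a compatibility-profile fragment shader (GLSL 1.20) and that the application encodes the normal into the current color (mapped from [-1,1] to [0,1]) before covering the path:

```glsl
#version 120
void main() {
    // Unpack the normal that the application encoded into the current color.
    vec3 n = normalize(gl_Color.rgb * 2.0 - 1.0);
    vec3 l = normalize(vec3(0.3, 0.5, 0.8));   // illustrative fixed light direction
    float diffuse = max(dot(n, l), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}
```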
The extension description says that there is some kind of interpolation:
Interpolation of per-vertex data (section 3.6.1). Path primitives have neither conventional vertices nor per-vertex data. Instead fragments generate interpolated per-fragment colors, texture coordinate sets, and fog coordinates as a linear function of object-space or eye-space path coordinate's or using the current color, texture coordinate set, or fog coordinate state directly.
http://developer.download.nvidia.com/assets/gamedev/files/GL_NV_path_rendering.txt
Here's a method which "sets the normal as the face normal", without knowing anything about vertex normals (as I understand it).
https://stackoverflow.com/a/17532576/738675
I have a three.js demo working here:
http://meetar.github.io/three.js-normal-map-0/index6.html
My implementation is getting vertex position data from the vertex shader, but it sounds like you're able to get that through other means.
I want to write a fragment shader to render an object with lighting, but without using gl_Normal, so I must calculate the normal myself.
I think I could use the functions dFdx and dFdy to find two tangent vectors and then get the normal with the cross product of those.
But I don't know which parameter to send to those functions.
I think I could use the functions dFdx and dFdy to find two tangent vectors and then get the normal with the cross product of those.
If you did that, you would only get face normals. And if you're doing faceted rendering, that'd be fine. And the "parameter to send to those functions" would be the fragment's position, in whatever space it is you're doing your lighting in. So obviously your vertex shader will need to compute that and pass it to the fragment shader.
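For that faceted case, a minimal sketch of the derivative trick (GLSL 3.30; vEyePos is an assumed varying carrying the fragment position in whatever space you light in):

```glsl
#version 330 core
in vec3 vEyePos;    // assumed: fragment position passed down from the vertex shader
out vec4 fragColor;

void main() {
    // Screen-space tangent vectors; their cross product is the face normal.
    vec3 dx = dFdx(vEyePos);
    vec3 dy = dFdy(vEyePos);
    vec3 n  = normalize(cross(dx, dy));   // may need flipping depending on winding/window origin
    fragColor = vec4(n * 0.5 + 0.5, 1.0); // placeholder: visualize the normal
}
```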
For the rest of this post, I'll assume that you're not doing faceted rendering. That you want smooth normals to approximate a smooth surface.
The whole point of such normals is that they represent the actual surface that your polygonal mesh is approximating. So if you have a sphere, the normal at each vertex position should always point directly away from the sphere's center, no matter how many vertices you have.
You cannot magic such normals into being; you have to compute them based either on the actual surface or via a heuristic. The heuristic method requires looking at the triangles around the current one. And fragment shaders don't have access to that information.
Everyone uses vertex normals; it's standard practice. There are even special vertex attribute formats to minimize the size of such normals (GL_INT_2_10_10_10_REV being the most prominent). So just do it right.
I'm working on a Minecraft-like engine as a hobby project to see how far the concept of voxel terrains can be pushed on modern hardware and OpenGL >= 3. So, all my geometry consists of quads, or squares to be precise.
I've built a raycaster to estimate ambient occlusion, and use the technique of "bent normals" to do the lighting. So my normals aren't perpendicular to the quad, nor do they have unit length; rather, they point roughly towards the space where least occlusion is happening, and are shorter when the quad receives less light. The advantage of this technique is that it just requires a one-time calculation of the occlusion, and is essentially free at render time.
However, I run into trouble when I try to assign different normals to different vertices of the same quad in order to get smooth lighting. Because the quad is split up into triangles, and linear interpolation happens over each triangle, the result of the interpolation clearly shows the presence of the triangles as ugly diagonal artifacts.
The problem is that OpenGL uses barycentric interpolation over each triangle, which is a weighted sum over 3 out of the 4 corners. Ideally, I'd like to use bilinear interpolation, where all 4 corners are being used in computing the result.
I can think of some workarounds:
Stuff the normals into a 2x2 RGB texture, and let the texture processor do the bilinear interpolation. This happens at the cost of a texture lookup in the fragment shader. I'd also need to pack all these mini-textures into larger ones for efficiency.
Use vertex attributes to attach all 4 normals to each vertex. Also attach some [0..1] coefficients to each vertex, much like texture coordinates, and do the bilinear interpolation in the fragment shader. This happens at the cost of passing 4 normals to the shader instead of just 1.
I think both these techniques can be made to work, but they strike me as kludges for something that should be much simpler. Maybe I could transform the normals somehow, so that OpenGL's interpolation would give a result that does not depend on the particular triangulation used.
(Note that the problem is not specific to normals; it is equally applicable to colours or any other value that needs to be smoothly interpolated across a quad.)
Any ideas how else to approach this problem? If not, which of the two techniques above would be best?
As you clearly understand, the triangle interpolation that GL does is not what you want, so the normal data can't come directly from the per-vertex data.
I'm afraid the solutions you're envisioning are about the best you can achieve. And no matter which one you pick, you'll need to pass [0..1] coefficients down from the vertex shader to the fragment shader (including for the 2x2 textures; you need them as texture coordinates).
There are some tricks you can do to somewhat simplify the process, though.
Using the vertex ID can help you find which vertex "corner" to pass from the vertex shader to the fragment shader (the [0..1] values). A simple bit test on the lowest 2 bits can tell you which corner to pass down, without any extra vertex data input. If you are packing the data into textures, you still need to pass an identifier to locate the right block inside the texture, so this may be moot.
If you use 2x2 textures to do the interpolation, there are (were?) some gotchas. Some texture interpolators don't necessarily give a high-precision interpolation if the source is low precision to begin with. This may require you to change the texture data type to something of higher precision to avoid banding artifacts.
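For what it's worth, here is a hedged sketch of the second workaround combined with the vertex-ID trick (GLSL 3.30; the attribute names and the four-vertices-per-quad layout are assumptions about the engine, not facts from the question):

```glsl
// --- vertex shader ---
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aN00;   // quad normal at corner (0,0)
layout(location = 2) in vec3 aN10;   // quad normal at corner (1,0)
layout(location = 3) in vec3 aN01;   // quad normal at corner (0,1)
layout(location = 4) in vec3 aN11;   // quad normal at corner (1,1)
uniform mat4 uMVP;
out vec2 vCorner;                    // [0..1] coefficients, interpolated linearly
flat out vec3 fN00;
flat out vec3 fN10;
flat out vec3 fN01;
flat out vec3 fN11;

void main() {
    // Assumes vertices are emitted four per quad in a fixed corner order,
    // so the two lowest bits of gl_VertexID identify the corner.
    int corner = gl_VertexID & 3;
    vCorner = vec2(corner & 1, corner >> 1);
    fN00 = aN00;
    fN10 = aN10;
    fN01 = aN01;
    fN11 = aN11;
    gl_Position = uMVP * vec4(aPosition, 1.0);
}

// --- fragment shader ---
#version 330 core
in vec2 vCorner;
flat in vec3 fN00;
flat in vec3 fN10;
flat in vec3 fN01;
flat in vec3 fN11;
out vec4 fragColor;

void main() {
    // Bilinear blend over the quad, independent of how it was triangulated.
    vec3 bottom = mix(fN00, fN10, vCorner.x);
    vec3 top    = mix(fN01, fN11, vCorner.x);
    vec3 n      = mix(bottom, top, vCorner.y);   // bent normal: keep its length, don't normalize
    fragColor   = vec4(n * 0.5 + 0.5, 1.0);      // placeholder: visualize the result
}
```

The flat qualifier is harmless here because every vertex of a quad carries the same four normals, so it does not matter which vertex provokes.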
Well... since you're using the bent-normals technique, the best way to improve the result is to pre-tessellate the mesh and recompute the occlusion on the more highly tessellated mesh.
Another way would be some tricks within the pixel shader... one possibility is to do the texture interpolation on your own (instead of using the built-in interpolator) in the pixel shader, which could help you a lot. And you're not limited to bilinear interpolation; you could do better, e.g. bicubic interpolation ;)
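A hedged sketch of that manual-filtering idea (GLSL 3.30; uNormals, vBlockOrigin and vCorner are assumed names): fetch the four texels yourself with texelFetch and blend them with whatever weighting you like, here a smoothstep'd parameter instead of a plain linear one.

```glsl
#version 330 core
uniform sampler2D uNormals;   // assumed: texture containing the per-quad 2x2 normal blocks
in vec2 vCorner;              // [0..1] coordinates within the quad, from the vertex shader
flat in ivec2 vBlockOrigin;   // assumed: texel coordinates of this quad's 2x2 block
out vec4 fragColor;

void main() {
    // Fetch the four corner normals directly, bypassing the hardware filter.
    vec3 n00 = texelFetch(uNormals, vBlockOrigin + ivec2(0, 0), 0).rgb;
    vec3 n10 = texelFetch(uNormals, vBlockOrigin + ivec2(1, 0), 0).rgb;
    vec3 n01 = texelFetch(uNormals, vBlockOrigin + ivec2(0, 1), 0).rgb;
    vec3 n11 = texelFetch(uNormals, vBlockOrigin + ivec2(1, 1), 0).rgb;

    // Any weighting you like; smoothstep gives a softer falloff than plain bilinear.
    vec2 w = smoothstep(0.0, 1.0, vCorner);
    vec3 n = mix(mix(n00, n10, w.x), mix(n01, n11, w.x), w.y);
    fragColor = vec4(n * 0.5 + 0.5, 1.0);   // placeholder: visualize the result
}
```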