I've followed the tessellation tutorial on Rastertek's website and have managed to get it working.
I'm just wondering if it's possible to modify a subdivision's vertices in one of the shader files.
The result I'm looking for is to divide a plane up into subdivisions and then manipulate those subdivisions to create waves. Is that possible, or have I completely missed something? :)
Found out how to do it.
Modifications can be made after the vertex position has been calculated in the domain shader.
:D
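In case it helps anyone else, here is a minimal GLSL sketch of the same idea in the tessellation evaluation shader (OpenGL's counterpart to the D3D domain shader), assuming a pass-through control shader. The uniform names and wave parameters are placeholders, not anything from the Rastertek tutorial:

#version 400 core
layout(triangles, equal_spacing, ccw) in;

uniform mat4  u_mvp;   // assumed model-view-projection matrix
uniform float u_time;  // assumed time uniform driving the wave animation

void main()
{
    // Barycentric interpolation of the patch corners gives the new subdivided vertex
    vec4 p = gl_TessCoord.x * gl_in[0].gl_Position
           + gl_TessCoord.y * gl_in[1].gl_Position
           + gl_TessCoord.z * gl_in[2].gl_Position;

    // Now that the vertex position is known it can be modified, e.g. with a simple wave
    p.y += 0.25 * sin(p.x * 2.0 + u_time) * cos(p.z * 2.0 + u_time);

    gl_Position = u_mvp * p;
}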
I feel like I'm grasping at straws researching this right now!
My goal is to write a simple water shader. The plan is to use tessellation to implement dynamic LODs and apply a height map based on fractal noise (ref this paper). Where I'm stumbling is figuring out where the height map is supposed to be applied. It seems like it should happen in the vertex shader, but the vertex shader runs before the tessellation stages.
So I'm looking to displace the vertices in the tessellation evaluation shader (OpenGL) using noise. Is that the best way to go?
For the noise, I'm planning on feeding the vertex positions into the noise function.
It's confusing to me because so far I haven't found any examples on the web on the matter. I see people sampling a texture in the tessellation shader, but I don't have a texture, only noise. I've also seen someone mention using a geometry shader to displace vertices. What's the widely accepted procedure here?
I'm also wondering about the performance impact, and whether I should instead generate a noise texture and sample/interpolate that.
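For what it's worth, here's a rough GLSL sketch of displacing in the tessellation evaluation shader with procedural noise, assuming a pass-through control shader and a quad patch; the hash-based value noise, octave count, and uniform names are stand-ins for whatever fractal noise you actually use:

#version 400 core
layout(quads, fractional_even_spacing, ccw) in;

uniform mat4  u_mvp;        // assumed model-view-projection matrix
uniform float u_amplitude;  // assumed overall wave height

// Cheap hash-based 2D value noise, standing in for your preferred fractal noise
float hash(vec2 p)
{
    return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}

float valueNoise(vec2 p)
{
    vec2 i = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);  // smooth fade curve
    return mix(mix(hash(i),                  hash(i + vec2(1.0, 0.0)), u.x),
               mix(hash(i + vec2(0.0, 1.0)), hash(i + vec2(1.0, 1.0)), u.x),
               u.y);
}

float fbm(vec2 p)
{
    float sum = 0.0;
    float amp = 0.5;
    for (int i = 0; i < 4; ++i)  // four octaves of fractal noise
    {
        sum += amp * valueNoise(p);
        p   *= 2.0;
        amp *= 0.5;
    }
    return sum;
}

void main()
{
    // Bilinearly interpolate the four patch corners to get the subdivided vertex
    vec4 a = mix(gl_in[0].gl_Position, gl_in[1].gl_Position, gl_TessCoord.x);
    vec4 b = mix(gl_in[3].gl_Position, gl_in[2].gl_Position, gl_TessCoord.x);
    vec4 p = mix(a, b, gl_TessCoord.y);

    // Feed the vertex position into the noise and displace along the up axis
    p.y += u_amplitude * fbm(p.xz);

    gl_Position = u_mvp * p;
}

On the performance question: evaluating several octaves per tessellated vertex adds up quickly, so baking the noise into a tiling texture and sampling it in the evaluation shader is often the cheaper route; the displacement happens in the same place, you just replace fbm() with a texture() lookup.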
Is there a way to pass an array of vertex data from the CPU straight to a geometry shader?
If I want to build a sphere inside a GS, I'd rather have a precalculated array of points that I can access from within the shader, instead of generating them on the fly every frame.
Maybe there's a way to access a vertex buffer from a shader that I don't know about?
I'm using Panda3D but pure OpenGL explanations are more than welcome as well!
Thanks in advance
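Pure-OpenGL take, with the caveat that I haven't checked how Panda3D wraps it: you can't attach a second vertex buffer directly to the geometry shader stage, but any stage can read from a uniform block (UBO), a texture buffer, or a shader storage buffer, so you can upload the precalculated points once and index into them every frame. A minimal sketch using a uniform block; the block layout, 64-point limit, and uniform names are placeholders:

#version 330 core
layout(points) in;                               // one input point = one sphere centre
layout(triangle_strip, max_vertices = 64) out;

// Precalculated points uploaded once from the CPU into a uniform buffer object
layout(std140) uniform SpherePoints
{
    vec4 spherePoints[64];                       // unit-sphere strip positions, w unused
};

uniform mat4 u_mvp;        // assumed model-view-projection matrix
uniform int  u_pointCount; // how many of the 64 slots are actually filled

void main()
{
    vec4 center = gl_in[0].gl_Position;          // sphere centre comes from the input point

    // Walk the stored strip instead of recomputing the sphere every frame
    for (int i = 0; i < u_pointCount && i < 64; ++i)
    {
        gl_Position = u_mvp * vec4(center.xyz + spherePoints[i].xyz, 1.0);
        EmitVertex();
    }
    EndPrimitive();
}

On the CPU side you fill the buffer once with glBufferData and attach it via glGetUniformBlockIndex / glUniformBlockBinding / glBindBufferBase; if you need more data than a UBO comfortably holds, a texture buffer object or SSBO looks much the same from the shader's point of view. Panda3D presumably exposes an equivalent shader-input mechanism, but check its docs for the exact call.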
I would like to have the normal vector of the fragments on a single face in my mesh be the same for all fragments.
Due to the way my engine works I cannot enable provoking vertices. Don't bring them up in your reply, I've looked into it already.
I'd like all fragments to take the three values of that face and average them without weighting, interpolation, etc.
To clarify:
I want a variable output from the vertex shader to the fragment shader with strict averaging, no interpolation. Is there some qualifier or technique I could use in OpenGL to achieve this?
I would be even happier if I could just get the values from each vertex and interpolate them myself; I have some awesome shader ideas if I can!
Thanks.
khronos.org/opengl/wiki/Type_Qualifier_(GLSL)#Interpolation_qualifiers
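If a flat-qualified output alone doesn't fit (it always takes the value from the provoking vertex), one way to get exactly the unweighted average you describe is to route the triangle through a geometry shader, average the three vertex normals there, and write the result to a flat output; since all three emitted vertices carry the same value, it no longer matters which one is the provoking vertex. A rough sketch, with vNormal assumed to come from your vertex shader:

#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 vNormal[];            // per-vertex normals from the vertex shader
flat out vec3 faceNormal;     // same unweighted average for every fragment of the face

void main()
{
    // Strict average of the three vertex normals, no weighting or interpolation
    vec3 avg = (vNormal[0] + vNormal[1] + vNormal[2]) / 3.0;

    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        faceNormal  = avg;    // every emitted vertex carries the same value
        EmitVertex();
    }
    EndPrimitive();
}

The fragment shader then declares flat in vec3 faceNormal; and uses it directly. The same trick also covers your "even happier" case: emit three separate flat vec3 outputs holding the three raw vertex normals and combine them however you like in the fragment shader.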
The first issue is how to get from a single light source to using multiple light sources, without using more than one fragment shader.
My instinct is that each pass through the shader calculations needs the light source's coordinates, and maybe some color information, and that we can just run the calculations in a loop for n light sources.
How do I pass the multiple lights into the shader program? Do I use an array of uniforms? My guess would be to pass in an array of uniforms with the coordinates of each light source, specify how many light sources there are, and set a maximum value.
Can I call getter or setter methods on the shader program, instead of just manipulating the globals?
I'm using this tutorial and the libGDX implementation to learn how to do this:
https://gist.github.com/mattdesl/4653464
There are many methods for handling multiple light sources. I'll point out the 3 most commonly used.
1) Specify each light source in an array of uniform structures. The light calculations are done in a shader loop over all active lights, accumulating the result into a single vertex color or fragment color, depending on whether shading is done per vertex or per fragment (this is how fixed-function OpenGL calculated multiple lights); a sketch follows below this list.
2) Multipass rendering with a single light source enabled per pass; in the simplest form the passes can be composited with additive blending (srcFactor=ONE, dstFactor=ONE). Don't forget to change the depth func after the first pass from GL_LESS to GL_EQUAL, or simply use GL_LEQUAL for all passes.
3) Many environment lighting algorithms mimic multiple light sources by assuming they are at infinite distance from your scene. The simplest renderer would store light intensities in an environment texture (preferably a cubemap); the shader's job is then to sample this texture several times in directions around the surface normal, with some random angular offsets.
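A minimal GLSL sketch of option 1; the struct fields, MAX_LIGHTS value, and uniform names are just placeholders for whatever your engine uses:

#version 330 core

// Hypothetical light struct; extend with attenuation, spot params, etc. as needed
struct Light
{
    vec3 position;
    vec3 color;
};

const int MAX_LIGHTS = 8;            // hard upper bound baked into the shader
uniform Light u_lights[MAX_LIGHTS];
uniform int   u_lightCount;          // how many entries are actually set

in vec3 vWorldPos;                   // from the vertex shader
in vec3 vNormal;

out vec4 fragColor;

void main()
{
    vec3 n = normalize(vNormal);
    vec3 result = vec3(0.0);

    // Accumulate a simple diffuse term from every active light
    for (int i = 0; i < u_lightCount && i < MAX_LIGHTS; ++i)
    {
        vec3 toLight = normalize(u_lights[i].position - vWorldPos);
        result += u_lights[i].color * max(dot(n, toLight), 0.0);
    }

    fragColor = vec4(result, 1.0);
}

From the CPU side each element is addressed by name, e.g. "u_lights[0].position" and "u_lights[0].color"; in libGDX that maps to ShaderProgram.setUniformf with those same strings (and setUniformi for u_lightCount), if I remember its API correctly.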
When I light my human model in OpenGL using face normals, it is very apparent where each face is on the model. The lighting becomes considerably smoother using vertex normals, but the faceting is still noticeable. Is there any technique available to smooth organic models without adding additional vertices (e.g. subdivision)?
If you are seeing individual faces with per-vertex normals, you might have forgotten to enable smooth shading:
glShadeModel(GL_SMOOTH);
If that doesn't help, make sure all vertices with the same position also have the same normal (Blender should do this automatically, however). It's also possible that your model simply has too few polygons. If that's the case, you either have to use more polygons or do what basszero said: create a normal map and use it together with a GLSL shader.
If you've got access to a higher-resolution model/mesh, or perhaps a decent 3D modeling tool that can create a high-res model via subdivision or other fancy methods, bake a normal map from it and use GLSL per-pixel lighting + normal maps.
UPDATE:
http://www.blender.org/development/release-logs/blender-246/render-baking/
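For reference, a bare-bones sketch of the per-pixel-lighting-plus-normal-map idea; it assumes the light direction has already been transformed into tangent space in the vertex shader, and all names are placeholders:

#version 330 core

uniform sampler2D u_normalMap;       // tangent-space normal map, e.g. baked in Blender
uniform vec3 u_lightDirTangent;      // direction from surface toward the light, in tangent space

in vec2 vTexCoord;                   // from the vertex shader
out vec4 fragColor;

void main()
{
    // Unpack the tangent-space normal from [0,1] to [-1,1]
    vec3 n = normalize(texture(u_normalMap, vTexCoord).rgb * 2.0 - 1.0);

    // Per-pixel diffuse term; the normal varies per texel, so the faceting disappears
    float diffuse = max(dot(n, normalize(u_lightDirTangent)), 0.0);

    fragColor = vec4(vec3(diffuse), 1.0);
}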
Yes, there are many different techniques. One is to first calculate a normal for each face (which you are probably already using for the vertices in that face) and then use these face normals to calculate new, averaged normals for each vertex.
Here's one where you calculate the average of five adjacent triangles.
Cheers!