How to apply displacement mapping with tessellation? - opengl

I'm feeling like I'm grasping at straws right now researching this!
My goal is to write a simple water shader. The plan is to use tessellation to implement dynamic LODs and to apply a height map based on fractal noise (ref this paper). Where I am stumbling is where the height map is supposed to be applied. It seems like it should be applied in the vertex shader, but the vertex shader precedes the tessellation stages.
So I am looking to displace the vertices in the tessellation evaluation shader (OpenGL) using noise; is that the best way to go?
For the noise, I am planning on feeding the vertex positions into the noise function.
It is confusing to me because so far I have not found any examples on the web on the matter. I see people sampling a texture in the tessellation shaders, but I don't have a texture, only noise. I've also seen someone mention using a geometry shader to displace vertices. What's the widely accepted procedure here?
I'm also wondering about the performance impact, and whether I should instead generate a noise texture and sample/interpolate that.
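For concreteness, what I have in mind would look roughly like the evaluation shader below; the value-noise function is just a cheap stand-in for real fractal noise, and all the uniform names are made up:

#version 400 core

layout(triangles, equal_spacing, ccw) in;

in vec3 tcPosition[];                 // patch corner positions from the control shader
out vec3 teWorldPos;

uniform mat4  uModelViewProjection;   // placeholder name
uniform float uHeightScale;           // placeholder name

// Cheap hash-based value noise standing in for real fractal noise;
// sum a few octaves of this for an fBm-style height field.
float hash(vec2 p) { return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453); }
float valueNoise(vec2 p)
{
    vec2 i = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);
    return mix(mix(hash(i),                  hash(i + vec2(1.0, 0.0)), u.x),
               mix(hash(i + vec2(0.0, 1.0)), hash(i + vec2(1.0, 1.0)), u.x), u.y);
}

void main()
{
    // Interpolate the patch corners with the barycentric tessellation coordinate.
    vec3 p = gl_TessCoord.x * tcPosition[0]
           + gl_TessCoord.y * tcPosition[1]
           + gl_TessCoord.z * tcPosition[2];

    // Displace the tessellator-generated vertex along the up axis,
    // using the vertex position itself as the noise input.
    p.y += uHeightScale * valueNoise(p.xz);

    teWorldPos  = p;
    gl_Position = uModelViewProjection * vec4(p, 1.0);
}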

Related

How to disable weighted interpolation GLSL? (how to just average the vertex values)

I would like the normal vector to be the same for all fragments on a single face of my mesh.
Due to the way my engine works I cannot use provoking vertices. Don't bring them up in your reply, I've looked into it already.
I'd like all fragments to take the three values of that face and average them without weighting, interpolation, etc.
To clarify:
I want a variable output from the vertex shader to the fragment shader with strict averaging, no interpolation. Is there some qualifier or technique I could use in OpenGL to achieve this?
I would be even happier if I could just get the values from each vertex and interpolate them myself; I have some awesome shader ideas if I can!
Thanks.
khronos.org/opengl/wiki/Type_Qualifier_(GLSL)#Interpolation_qualifiers
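Not a full answer, but if a geometry shader is an option, one workaround is to forward all three per-vertex values to every vertex of the triangle as flat outputs; since every vertex then carries the same three values, the provoking-vertex choice stops mattering and the fragment shader can average (or blend) them itself. A rough sketch, with vColor standing in for whatever per-vertex value you are averaging:

#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 vColor[];        // per-vertex value coming out of the vertex shader

// Every emitted vertex carries all three values, so 'flat'
// gives the same result no matter which vertex is the provoking one.
flat out vec3 gValue0;
flat out vec3 gValue1;
flat out vec3 gValue2;

void main()
{
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        gValue0 = vColor[0];
        gValue1 = vColor[1];
        gValue2 = vColor[2];
        EmitVertex();
    }
    EndPrimitive();
}

The fragment shader can then compute (gValue0 + gValue1 + gValue2) / 3.0, or weight them however it likes.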

openGL simple 2d light

I am making a simple pixel-art top-down game, and I want to add some simple lights, but I don't know the best way to do it. This image is an example of the kind of light I want to achieve.
http://imgur.com/a/PpYiR
When I googled this task, I only found solutions for this kind of light:
https://www.youtube.com/watch?v=mVlYsGOkkyM
But I need to increase the brightness of part of a texture when the light source is near. How can I do this if I am drawing textures with GL_QUADS and no UVs?
OK, my response may not totally answer your question, but it will lead you down the right path.
It appears you are using immediate mode; this is now deprecated, and changing to VBOs (vertex buffer objects) will make your life easier.
The lighting in the picture appears to be hand drawn. You cannot create that style of lighting exactly with even the best algorithm.
You really have two options to solve your problem, and both of them will require texture coordinates and shaders.
You could go with lightmaps, which multiply a pre-generated texture over the texture of a quad. This is extremely fast, but requires some sort of tool to generate the lightmaps, which might be a bit over your head at the moment.
Instead, learn shader based lighting. Many tutorials exist for 3d lighting but the principles remain the same for 2D.
Some Googling will get you the resources you need to implement shaders.
A basic distance-based lighting algorithm will look like this:
gl_FragColor = textureColor * (1.0 / distance(light_position, world_position));
It scales the color of the texel by the inverse of its distance from the light position. There are tutorials that go into more depth on this.
If you want to make the lighting look "retro" like in the first image, you can downsample the colors in a post-processing step.
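A slightly fuller fragment-shader version of that distance-based idea might look like the sketch below; it assumes you have added texture coordinates as discussed, and the uniform names and linear-falloff radius are just placeholders to tweak:

#version 330 core

in vec2 vTexCoord;        // requires texture coordinates, as mentioned above
in vec2 vWorldPos;

out vec4 fragColor;

uniform sampler2D uTexture;
uniform vec2  uLightPosition;     // light position in the same space as vWorldPos
uniform vec3  uLightColor;
uniform float uLightRadius;       // distance at which the light has fully faded out

void main()
{
    vec4 texel = texture(uTexture, vTexCoord);

    // Simple linear falloff clamped to [0, 1]; swap in 1/d or 1/d^2 if preferred.
    float d = distance(uLightPosition, vWorldPos);
    float attenuation = clamp(1.0 - d / uLightRadius, 0.0, 1.0);

    fragColor = vec4(texel.rgb * uLightColor * attenuation, texel.a);
}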

How hard would it be to write this shader? Multipoint shadow lighting

So, I have a simple project right now. Basically it's just a bunch of cuboids that are all axis-aligned... so it has really simple geometry.
Anyway, I am considering adding a better shader to it. Currently I am using the "flat shader" that is a stock shader in GLShaderManager; it colors everything with a flat color. However, I would love it if I could build a shader like the following.
Basically I want a shader that has an array of point lights at various positions with varying intensities.
Probably defined like this.
struct Light {
    float x;
    float y;
    float z;
    float intensity;
};
Light Lighting[20];
And basically, based on the level geometry and lights, I would love to simulate basic lighting and shadows; it would also be cool to have a circle under the player (like the player is actually there).
How hard would this be to make? How would I pass it my level geometry and light array? (Note: even though each cuboid is its own QUADS batch, it will be easy to make any kind of variable that stores the data.)
I am using GLEW, GLTools, GLShaderManager, GLBatch, Visual Studio 2010, and probably whatever "GSHL" is.
Please let me know how complicated a shader like this would be. Also, if a shader that works like this is easy to find online, could you link it?
Also, what is the difference between the two types of shaders (vertex and fragment)?
I would say it's relatively simple, but the thing about modern GL is that the initial learning curve is quite steep. At first it seems like you have to roll up your sleeves and learn how to do everything (essentially true) but later, it starts to seem like it made things easier than ever before with much more predictable behaviors since you're in the driver's seat.
One of the first things you want to learn is how to pass data from the CPU to the GPU. For values which don't vary on a per-vertex or per-fragment basis, such as your light positions and intensities, you want uniforms. Check out examples using the glUniform* functions to see how to do this. This will let you experiment, passing values from the CPU side to the GPU side and seeing how they affect the shader, which accelerates your learning.
After that, it's worth learning how direct lighting is computed for a ray bouncing off a surface with Phong shading, separating the ambient, diffuse, and specular terms.
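As a rough illustration of that per-light ambient/diffuse/specular split, here is a sketch of a fragment shader using a uniform array shaped like the Light struct from the question (reworked to use a vec3 position; every other name and constant is a placeholder):

#version 330 core

struct Light {
    vec3  position;
    float intensity;
};

#define MAX_LIGHTS 20
uniform Light uLights[MAX_LIGHTS];
uniform int   uLightCount;

uniform vec3 uCameraPosition;
uniform vec3 uSurfaceColor;

in vec3 vWorldPos;     // interpolated from the vertex shader
in vec3 vNormal;

out vec4 fragColor;

void main()
{
    vec3 N = normalize(vNormal);
    vec3 V = normalize(uCameraPosition - vWorldPos);

    vec3 color = 0.1 * uSurfaceColor;                 // ambient term

    for (int i = 0; i < uLightCount; ++i) {
        vec3 L = normalize(uLights[i].position - vWorldPos);
        vec3 R = reflect(-L, N);

        float diffuse  = max(dot(N, L), 0.0);
        float specular = pow(max(dot(R, V), 0.0), 32.0);

        color += uLights[i].intensity * (diffuse * uSurfaceColor + specular * vec3(1.0));
    }

    fragColor = vec4(color, 1.0);
}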
Later you might even want to store this light data into an environment map. That'll give you the ability to use as many lights as you want without affecting the speed of the shader.
About vertex vs. fragment shaders, vertex shaders compute things on a vertex-by-vertex basis, including data for the fragment shader to then use. The fragment shader is kind of like a pixel shader (in HLSL, it's actually just called a 'pixel shader'). It deals with rasterizing what's in between those vertices and is operating on a pixel-by-pixel basis (however with some potential overdraw). Often for lighting, the real heart of the logic will be in the fragment shader, while the vertex shader serves as an intermediary step to compute all the relevant values for the fragment shader to interpolate and use. The vertex shader is part of the 3D geometry pipeline, while the fragment shader is part of 2D rasterization.
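To make that division of labour concrete, the matching vertex shader for the sketch above would do little more than transform the vertex and hand the fragment shader the values it needs to interpolate (attribute and matrix names are again made up):

#version 330 core

layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;

uniform mat4 uModel;
uniform mat4 uViewProjection;
uniform mat3 uNormalMatrix;

out vec3 vWorldPos;    // the fragment shader interpolates these across the triangle
out vec3 vNormal;

void main()
{
    vec4 worldPos = uModel * vec4(aPosition, 1.0);
    vWorldPos   = worldPos.xyz;
    vNormal     = uNormalMatrix * aNormal;
    gl_Position = uViewProjection * worldPos;
}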
It shouldn't take too long or be too hard to get the hang of this, but you want to approach it slowly and in baby steps. There's a lot of setup work involved in establishing a lighting/shading pipeline with the precise characteristics you want, and for the final version you want to plan ahead a little. So it's good to set up a separate scrap project and experiment away to figure out how things work.

Opengl terrain and tessellation shaders

Two questions:
How do modern games set up their terrain vertices? Do they load a height-map image into a texture and then use it to set each vertex position, or do they just use 3D software (like Blender) to create a file that contains the vertices and then read it into a VBO? Please correct me if my understanding is wrong.
How important are tessellation shaders to this process? Do they just save performance or do they also change the viewer's scene?
The two most common approaches I have seen are heightmaps, in which the RGB channels store the surface normal and the alpha channel stores the height, and procedural terrain generation using a method such as Perlin noise, which uses a random function and samples the surrounding vertices to smooth out the heights.
Tessellation shaders are primarily used to decrease workload by simplifying far-away meshes where you would not notice the extra detail. They do change the viewer's scene, but in a way that tries not to be noticed.
Generally, vertex heights are generated procedurally in shaders.
In computer graphics, "procedurally" means via some mathematical algorithm; Perlin noise is one common method for this kind of procedural generation. There are also strategies that keep the height map small and derive varied heights from it procedurally, since the height map is a texture and therefore consumes bandwidth.
Tessellation shaders are used for adaptive tessellation. You can think of it as a kind of level-of-detail mechanism. The smoothness of the terrain depends on how many triangles are used to represent a patch. Based on the distance of a patch from the camera, the tessellation level can be decided on the fly, generating more triangles for patches close to the viewer. This is a way to improve detail on the terrain, and because everything happens on the GPU it is extremely efficient.
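A rough sketch of that distance-based decision in a tessellation control shader (the distance thresholds and uniform names are invented for the example):

#version 400 core

layout(vertices = 3) out;

in  vec3 vPosition[];     // from the vertex shader
out vec3 tcPosition[];

uniform vec3 uCameraPosition;

// Pick more subdivisions for patches near the camera, fewer far away.
float levelForDistance(float d)
{
    if (d < 10.0)  return 16.0;
    if (d < 50.0)  return 8.0;
    if (d < 100.0) return 4.0;
    return 1.0;
}

void main()
{
    tcPosition[gl_InvocationID] = vPosition[gl_InvocationID];

    if (gl_InvocationID == 0) {
        vec3 center = (vPosition[0] + vPosition[1] + vPosition[2]) / 3.0;
        float level = levelForDistance(distance(uCameraPosition, center));

        gl_TessLevelInner[0] = level;
        gl_TessLevelOuter[0] = level;
        gl_TessLevelOuter[1] = level;
        gl_TessLevelOuter[2] = level;
    }
}

In practice each outer level is usually derived from its own edge (for example, the distance to the edge midpoint) so that neighbouring patches agree on shared edges and no cracks appear.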
Before tessellation shaders were accessible, there were algorithms like ROAM that did adaptive tessellation on the CPU.
Have a look at the http://vterrain.org/ project; you will find the state of the art in terrain techniques collected there.

Difference between tessellation shaders and Geometry shaders

I'm trying to develop a high-level understanding of the graphics pipeline. One thing that doesn't make much sense to me is why the geometry shader exists. Both the tessellation and geometry shaders seem to do the same thing to me. Can someone explain what the geometry shader does differently from the tessellation shaders that justifies its existence?
The tessellation shaders are for variable subdivision. An important part is adjacency information, so you can do smoothing correctly and not wind up with gaps. You could do some limited subdivision with a geometry shader, but that's not really what it's for.
Geometry shaders operate per-primitive. For example, if you need to do stuff for each triangle (such as this), do it in a geometry shader. I've heard of shadow volume extrusion being done. There's also "conservative rasterization" where you might extend triangle borders so every intersected pixel gets a fragment. Examples are pretty application specific.
Yes, they can also generate more geometry than the input but they do not scale well. They work great if you want to draw particles and turn points into very simple geometry. I've implemented marching cubes a number of times using geometry shaders too. Works great with transform feedback to save the resulting mesh.
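For instance, a point-to-billboard expansion of the kind used for particles is only a few lines of geometry shader; the size and matrix names below are placeholders, and the incoming points are assumed to already be in view space:

#version 330 core

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4  uProjection;
uniform float uHalfSize;      // half the particle's view-space size

out vec2 gTexCoord;

void main()
{
    // Expand each incoming point (assumed to be in view space) into a quad.
    vec4 center = gl_in[0].gl_Position;

    vec2 offsets[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                             vec2(-1.0,  1.0), vec2(1.0,  1.0));
    for (int i = 0; i < 4; ++i) {
        gTexCoord   = offsets[i] * 0.5 + 0.5;
        gl_Position = uProjection * (center + vec4(offsets[i] * uHalfSize, 0.0, 0.0));
        EmitVertex();
    }
    EndPrimitive();
}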
Transform feedback has also been used with the geometry shader to do more compute operations. One particularly useful mechanism is that it does stream compaction for you (packs its varying amount of output tightly so there are no gaps in the resulting array).
The other very important thing a geometry shader provides is routing to layered render targets (texture arrays, faces of a cube, multiple viewports), something which must be done per-primitive. For example you can render cube shadow maps for point lights in a single pass by duplicating and projecting geometry 6 times to each of the cube's faces.
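A trimmed-down sketch of that single-pass cube-map idea, duplicating each triangle once per face and routing it with gl_Layer (the per-face view-projection matrices are assumed to be supplied as a uniform array):

#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;   // 3 vertices x 6 cube faces

uniform mat4 uFaceViewProjection[6];    // one view-projection matrix per cube face

in vec3 vWorldPos[];                    // world-space positions from the vertex shader

void main()
{
    for (int face = 0; face < 6; ++face) {
        for (int i = 0; i < 3; ++i) {
            gl_Layer    = face;         // route this copy to the corresponding cube face
            gl_Position = uFaceViewProjection[face] * vec4(vWorldPos[i], 1.0);
            EmitVertex();
        }
        EndPrimitive();
    }
}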
Not exactly a complete answer but hopefully gives the gist of the differences.
See Also:
http://rastergrid.com/blog/2010/09/history-of-hardware-tessellation/