I was wondering how I could take each of my sphere's vertices stored in my buffer and calculate the normal for each of them separately. Is it possible to do this in OpenGL?
OpenGL is typically used to render meshes, not compute them. And normals are part of a mesh's data. It is usually the responsibility of the builder of the mesh to supply normals.
In the case of a sphere, normals are dead simple to compute exactly. For a given vertex position P on a sphere whose center is C, the normal is norm(P - C), where norm normalizes the vector.
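For example, a minimal CPU-side sketch using GLM (assumed; the function name and containers are made up) that you could run while building the mesh, before uploading the data to your buffer:

#include <glm/glm.hpp>
#include <vector>

// One normal per sphere vertex: the normalized direction from the sphere's
// center C to the vertex position P.
std::vector<glm::vec3> computeSphereNormals(const std::vector<glm::vec3>& positions,
                                            const glm::vec3& center)
{
    std::vector<glm::vec3> normals;
    normals.reserve(positions.size());
    for (const glm::vec3& p : positions)
        normals.push_back(glm::normalize(p - center));
    return normals;
}

You would then upload these normals alongside the positions (interleaved or in a second buffer).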
I am trying to calculate a normal map from subdivisions of a mesh. There are two meshes: a UV-unwrapped base mesh, which contains quads and triangles, and a subdivision mesh that contains only quads.
Suppose I have a quad with all the coordinates of its vertices, both in object space and in UV space (the quad is not flat), the quad's face normal, and a pixel with its position in UV space.
Can I calculate the TBN matrix for the given quad and write colors to the pixel, and if so, is it different for quads?
I ask this because I couldn't find any examples of calculating a TBN matrix for quads, only for triangles.
Before answering your question, let me start by explaining what the tangents and bitangents that you need actually are.
Let's forget about triangles, quads, or polygons for a minute. We just have a surface (given in whatever representation) and a parameterization in the form of texture coordinates that are defined at every point on the surface. We could then define the surface as xyz = s(uv): uv are some 2D texture coordinates and the function s turns these texture coordinates into 3D world positions xyz. Now, the tangent is the direction in which the u-coordinate increases, i.e. it is the derivative of the 3D position with respect to the u-coordinate: T = d s(uv) / du. Similarly, the bitangent is the derivative with respect to the v-coordinate. The normal is a vector that is perpendicular to both of them and usually points outwards. Remember that the three vectors are usually different at every point on the surface.
Now let's move on to discrete computer graphics, where we approximate our continuous surface s with a polygon mesh. The problem is that there is no way to get the exact tangents and bitangents anymore; we simply lost too much information in our discrete approximation. So there are three common ways to approximate the tangents anyway:
Store the vectors with the model (this is usually not done).
Estimate the vectors at the vertices and interpolate them in the faces.
Calculate the vectors for each face separately. This will give you a discontinuous tangent space, which produces artifacts when the dihedral angle between two neighboring faces is too big. Still, this is apparently what most people are doing. And it is apparently also what you want to do.
Let's focus on the third method. For triangles, this is especially simple because the texture coordinates are interpolated linearly (barycentric interpolation) across the triangle. Hence, the derivatives are all constant (it's just a linear function). This is why you can calculate tangents/bitangents per triangle.
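For reference, this is the usual per-triangle calculation, sketched with GLM (assumed; the helper name is made up):

#include <glm/glm.hpp>

// Constant tangent/bitangent of a triangle with positions p0..p2 and texture
// coordinates uv0..uv2, assuming barycentric interpolation as described above.
void triangleTangents(const glm::vec3& p0, const glm::vec3& p1, const glm::vec3& p2,
                      const glm::vec2& uv0, const glm::vec2& uv1, const glm::vec2& uv2,
                      glm::vec3& tangent, glm::vec3& bitangent)
{
    glm::vec3 e1 = p1 - p0;    // position deltas
    glm::vec3 e2 = p2 - p0;
    glm::vec2 d1 = uv1 - uv0;  // texture-coordinate deltas
    glm::vec2 d2 = uv2 - uv0;

    float r = 1.0f / (d1.x * d2.y - d1.y * d2.x); // 1 / determinant of the uv deltas
    tangent   = (e1 * d2.y - e2 * d1.y) * r;      // d xyz / du
    bitangent = (e2 * d1.x - e1 * d2.x) * r;      // d xyz / dv
}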
For quads, this is not so simple. First, you must agree on a way to interpolate positions and texture coordinates from the vertices of the quad to its inside. Oftentimes, bilinear interpolation is used. However, this is not a linear interpolation, i.e. the tangents and bitangents will not be constant anymore. They are only constant in special cases (if the quad is planar and its image in uv space is a parallelogram). In general, these assumptions do not hold and you end up with different tangents/bitangents/normals at every point on the quad.
One way to calculate the required derivatives is by introducing an auxiliary coordinate system. Let's define a coordinate system st, where the first corner of the quad has coordinates (0, 0) and the diagonally opposite corner has (1, 1) (the other corners have (0, 1) and (1, 0)). These are actually our interpolation coordinates. Therefore, given an arbitrary interpolation scheme, it is relatively simple to calculate the derivatives d xyz / d st and d uv / d st. The first one will be a 3x2 matrix and the second one will be a 2x2 matrix (these matrices are called Jacobians of the interpolation). Then, given these matrices, you can calculate:
d xyz / d uv = (d xyz / d st) * (d st / d uv) = (d xyz / d st) * (d uv / d st)^-1
This will give you a 3x2 matrix where the first column is the tangent and the second column is the bitangent.
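A sketch of that calculation for a bilinearly interpolated quad, using GLM (assumed; the corner ordering and names are illustrative), evaluated at interpolation coordinates (s, t):

#include <glm/glm.hpp>

// Tangent (d xyz / du) and bitangent (d xyz / dv) of a bilinear quad at (s, t).
// Corners are ordered P00, P10, P11, P01, i.e. st = (0,0), (1,0), (1,1), (0,1).
void quadTangents(const glm::vec3 P[4], const glm::vec2 UV[4], float s, float t,
                  glm::vec3& tangent, glm::vec3& bitangent)
{
    // Jacobian d xyz / d st, stored as its two columns.
    glm::vec3 dPds = (1.0f - t) * (P[1] - P[0]) + t * (P[2] - P[3]);
    glm::vec3 dPdt = (1.0f - s) * (P[3] - P[0]) + s * (P[2] - P[1]);

    // Jacobian d uv / d st, stored as its two columns.
    glm::vec2 dUVds = (1.0f - t) * (UV[1] - UV[0]) + t * (UV[2] - UV[3]);
    glm::vec2 dUVdt = (1.0f - s) * (UV[3] - UV[0]) + s * (UV[2] - UV[1]);

    // d xyz / d uv = (d xyz / d st) * (d uv / d st)^-1, with the 2x2 inverse
    // written out by hand. The two columns are the tangent and the bitangent.
    float det = dUVds.x * dUVdt.y - dUVdt.x * dUVds.y;
    tangent   = ( dPds * dUVdt.y - dPdt * dUVds.y) / det;
    bitangent = (-dPds * dUVdt.x + dPdt * dUVds.x) / det;
}

As noted above, the result genuinely depends on (s, t) unless the quad is planar and maps to a parallelogram in uv space.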
I have a texture of the earth which I want to map onto a sphere.
As it is a unit sphere and the model itself has no texture coordinates, the easiest thing I could think of is to just calculate spherical coordinates for each vertex and use them as texture coordinates.
textureCoordinatesVarying = vec2(atan(modelPositionVarying.y, modelPositionVarying.x)/(2*M_PI)+.5, acos(modelPositionVarying.z/length(modelPositionVarying.xyz))/M_PI);
When doing this in the fragment shader, this works fine, as I calculate the texture coordinates from the (interpolated) vertex positions.
But when I do this in the vertex shader, which I also would do if the model itself has texture coordinates, I get the result as shown in the image below. The vertices are shown as points and a texture coordinate (u) lower than 0.5 is red while all others are blue.
So it looks like the texture coordinates (u) of two adjacent red/blue vertices have values of (almost) 1.0 and 0.0. The varying is then smoothly interpolated and therefore yields values somewhere between 0.0 and 1.0. This of course is wrong, because the value should be either 1.0 or 0.0, but nothing in between.
Is there a way to work with spherical coordinates as texture coordinates without getting those effects shown above? (if possible, without changing the model)
This is a common problem. A seam where you want the texture coordinate to wrap around from 1.0 back to 0.0 requires the mesh to handle it explicitly. To do this, the mesh must duplicate every vertex along the seam. One of the duplicates gets a texture coordinate of 0.0 and is connected to the vertices coming from the right (in your example). The other gets a texture coordinate of 1.0 and is connected to the vertices coming from the left (in your example).
This is a mesh problem, and it is best to solve it in the mesh itself. The same position needs two different texture coordinates, so you must duplicate the position in question.
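For illustration, here is how a generated latitude/longitude sphere (a generic mesh with made-up names, not your model; GLM assumed) duplicates the seam so that the same positions exist once with u = 0.0 and once with u = 1.0:

#include <glm/glm.hpp>
#include <vector>
#include <cmath>

struct Vertex { glm::vec3 position; glm::vec2 uv; };

// The inner loop runs to 'slices' inclusive, so the first and last columns of
// vertices share positions but carry u = 0.0 and u = 1.0 respectively.
std::vector<Vertex> makeSphereVertices(int stacks, int slices)
{
    const float pi = 3.14159265358979f;
    std::vector<Vertex> vertices;
    for (int i = 0; i <= stacks; ++i) {
        float v = float(i) / float(stacks);   // 0..1 from pole to pole
        float phi = v * pi;                   // polar angle
        for (int j = 0; j <= slices; ++j) {   // note <=: the seam column is emitted twice
            float u = float(j) / float(slices);
            float theta = u * 2.0f * pi;      // azimuth
            glm::vec3 p(std::sin(phi) * std::cos(theta),
                        std::sin(phi) * std::sin(theta),
                        std::cos(phi));
            vertices.push_back({p, glm::vec2(u, v)});
        }
    }
    return vertices;
}

The index buffer then simply connects column j to column j + 1; no triangle ever wraps from u near 1.0 back to u near 0.0.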
Alternatively, you could have the fragment shader generate the texture coordinate from an interpolated vertex normal. Of course, this is more computationally expensive, as it requires doing a conversion from a direction to a pair of angles (and then to the [0, 1] texture coordinate range).
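For reference, that per-fragment conversion is essentially your formula evaluated per fragment. Here it is sketched as a plain C++ function with GLM (assumed); in practice the same expression would live in the fragment shader:

#include <glm/glm.hpp>
#include <cmath>

// Converts a direction (e.g. the interpolated normal of a unit sphere, not
// necessarily of unit length after interpolation) into [0, 1] spherical
// texture coordinates.
glm::vec2 directionToUV(const glm::vec3& dir)
{
    const float pi = 3.14159265358979f;
    glm::vec3 d = glm::normalize(dir);
    float u = std::atan2(d.y, d.x) / (2.0f * pi) + 0.5f;
    float v = std::acos(d.z) / pi;
    return glm::vec2(u, v);
}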
I'm able to convert my existing height map into a normal map by sampling the surrounding pixels, as in this question: Generating a normal map from a height map?, except that I'm doing it on the CPU.
I have a sphere that I want to normal map in object space. How do I apply the above normal map to the normals of my sphere's vertices?
Normal maps do not modify vertex normals. They are used for detail at a finer scale than the vertices.
In your fragment shader, look up the normal map at the fragment's texture coordinates and use the result to modify (or, for an object-space map, replace) the fragment's interpolated normal.
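Whatever space the map is in, the sampled texel first has to be unpacked from the [0, 1] texture range back to a signed vector. For an object-space map the unpacked vector is used directly as the shading normal; a tangent-space map additionally needs the TBN transform described in the next answer. A minimal unpacking sketch with GLM (assumed; the helper name is made up):

#include <glm/glm.hpp>

// Unpacks a normal stored in a texture (components in [0, 1]) back into a
// unit vector with components in [-1, 1].
glm::vec3 decodeNormal(const glm::vec3& texel)
{
    return glm::normalize(texel * 2.0f - 1.0f);
}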
As stated before, normal mapping is done per pixel, so applying it to your sphere's vertices would not work.
In your fragment shader you have to supply/calculate a tangent and a bitangent along with the mesh's normal vectors.
Then you can use a 3x3 matrix built from the tangent, bitangent and normal vectors, together with the normal vector you read from the normal map, to calculate the new normal vector.
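A sketch of that calculation with GLM (assumed; names are illustrative), mirroring what the fragment shader does:

#include <glm/glm.hpp>

// Transforms a tangent-space normal sampled from a normal map into the space
// of the interpolated tangent T, bitangent B and normal N (e.g. world space).
glm::vec3 applyNormalMap(const glm::vec3& T, const glm::vec3& B, const glm::vec3& N,
                         const glm::vec3& texel)              // raw texel in [0, 1]
{
    glm::vec3 tangentNormal = texel * 2.0f - 1.0f;            // unpack to [-1, 1]
    glm::mat3 TBN(glm::normalize(T), glm::normalize(B), glm::normalize(N)); // columns
    return glm::normalize(TBN * tangentNormal);
}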
There's a great tutorial on this topic here:
http://ogldev.atspace.co.uk/www/tutorial26/tutorial26.html
I can't seem to understand the OpenGL pipeline process from a vertex to a pixel.
Can anyone tell me how important vertex normals are in these two shading techniques? As far as I know, in Gouraud shading, lighting is calculated at each vertex and the resulting color is then interpolated across the polygon between the vertices (is this done in the fragment operations, before rasterizing?), while Phong shading consists of first interpolating the vertex normals and then calculating the illumination for each of them.
Another thing: when bump mapping is applied to, let's say, a plane (2 triangles) with a brick texture as the diffuse map and its respective bump map, all of this with Gouraud shading.
Bump mapping consists of altering the normals by a gradient depending on a bump map. But which normals does it alter, and when (in the fragment shader?), if there are only 4 normals (4 vertices = plane) and all 4 are the same? In Gouraud shading you interpolate the color of each vertex after the illumination calculation, but this calculation is done after altering the normals.
How does the lighting work?
Vertex normals are absolutely essential for both Gouraud and Phong shading.
In Gouraud shading the lighting is calculated per vertex and then interpolated across the triangle.
In Phong shading the normal is interpolated across the triangle and then the calculation is done per-pixel/fragment.
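To make the difference concrete, here is a simple Lambertian diffuse term written both ways, sketched as plain C++ functions with GLM (assumed; names are made up). In a real renderer the Gouraud version runs in the vertex shader and lets the rasterizer interpolate the color, while the Phong version runs in the fragment shader on the interpolated normal:

#include <glm/glm.hpp>

// Shared lighting term: Lambertian diffuse. lightDir points towards the light
// and is assumed to be normalized.
glm::vec3 diffuse(const glm::vec3& normal, const glm::vec3& lightDir,
                  const glm::vec3& albedo)
{
    float nDotL = glm::max(glm::dot(glm::normalize(normal), lightDir), 0.0f);
    return albedo * nDotL;
}

// Gouraud: light each vertex, then interpolate the resulting COLOR with the
// barycentric weights b (b.x + b.y + b.z == 1).
glm::vec3 gouraud(const glm::vec3 n[3], const glm::vec3& b,
                  const glm::vec3& lightDir, const glm::vec3& albedo)
{
    return b.x * diffuse(n[0], lightDir, albedo)
         + b.y * diffuse(n[1], lightDir, albedo)
         + b.z * diffuse(n[2], lightDir, albedo);
}

// Phong: interpolate the NORMAL with the barycentric weights, then light once
// per fragment.
glm::vec3 phong(const glm::vec3 n[3], const glm::vec3& b,
                const glm::vec3& lightDir, const glm::vec3& albedo)
{
    glm::vec3 normal = b.x * n[0] + b.y * n[1] + b.z * n[2];
    return diffuse(normal, lightDir, albedo);
}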
Bump mapping refers to a range of different techniques. When doing normal mapping (probably the most common variety these days) the normal, bitangent (often erroneously called binormal) and tangent are calculated per vertex to build a basis matrix. This basis matrix is then interpolated across the triangle. The normal retrieved from the normal map is then transformed by this basis matrix, and the lighting is performed per pixel.
There are extensions to the normal mapping technique above that allow bumps to hide other bumps behind them. This is, usually, performed by storing a height map along with the normal map and then ray marching through the height map to find parts that are being obscured. This technique is called Relief Mapping.
There are other, older forms such as DUDV bump mapping (which was implemented in DirectX 6 as environment-mapped bump mapping, or EMBM).
You also have emboss bump mapping, which was a really early way of doing bump mapping.
Edit: In answer to your comment, emboss bump mapping CAN be performed on Gouraud-shaded triangles. Other forms of bump mapping are, necessarily, per-pixel (due to the fact that they work by modifying the surface normals on a per-pixel, or at least per-texel, basis). I wouldn't be surprised if there were other methods that can be performed with per-vertex lighting, but I can't think of any off the top of my head. The results will look pretty rubbish compared to doing it on a per-pixel basis, though.
Re: tangents and bitangents are actually quite simple once you get your head round them (took me years though, tbh ;)). Any 3D coordinate frame can be defined by a set of vectors that form an orthogonal basis. By setting up the normal, tangent and bitangent per vertex you are merely setting up the coordinate frame at each vertex. From this you have the ability to transform a world- or object-space vector into the triangle's own coordinate frame. From there you can simply transform a light vector (or position) into the coordinate frame of a given pixel on the surface of the triangle. This then means that the normals in the normal map don't need to be stored in the object's space, and hence, as those triangles move around (when being animated, for example), the normals are already handled in their own local space.
Normal mapping, one of the techniques used to simulate bumpy surfaces, basically perturbs the per-pixel normals before you compute the lighting equation for that pixel.
For example, one way to implement it requires you to interpolate the surface normal and binormal (two of the tangent-space basis vectors) and compute the third per pixel (2 + 1 vectors, which together form the tangent basis). This technique also requires you to interpolate the light vector. With those 3 vectors (2 interpolated + 1 computed), the tangent-space basis, you have a way to transform the light vector from object space into tangent space. This is because the 3 vectors can be arranged as a 3x3 matrix which can be used to change the basis of your light direction vector.
Then it is simply a matter of using that tangent-space light vector to compute the lighting equation per pixel, which in its most basic form is a dot product between the tangent-space light vector and the normal fetched from the normal map (your bump texture).
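The essence of that, sketched with GLM (assumed; names are illustrative), mirroring what the shaders would do:

#include <glm/glm.hpp>

// Basic tangent-space bump lighting: bring the light direction into tangent
// space using the (assumed orthonormal) T, B, N basis, then dot it with the
// normal fetched from the normal map.
float bumpDiffuse(const glm::vec3& T, const glm::vec3& B, const glm::vec3& N,
                  const glm::vec3& lightDir,  // same space as T, B, N; normalized
                  const glm::vec3& texel)     // raw normal-map texel in [0, 1]
{
    glm::mat3 TBN(T, B, N);                                   // tangent -> object/world space
    glm::vec3 tangentLight = glm::transpose(TBN) * lightDir;  // inverse of an orthonormal basis
    glm::vec3 mapNormal = glm::normalize(texel * 2.0f - 1.0f);
    return glm::max(glm::dot(glm::normalize(tangentLight), mapNormal), 0.0f);
}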
This is how a normal map looks (each component of the normal is stored in a channel of the texture and is already in tangent space).
This is just one way to do it; you could also compute things in view space, but the approach above is easier to understand.
Old bump mapping was way simpler and was also kind of a fake effect.
All bump mapping techniques operate at the pixel level, as they perturb, in one way or another, how the surface is rendered. Even old emboss bump mapping did some computation per pixel.
EDIT: I added a few more clarifications; when I have some spare minutes I will try to add some math and examples, although there are great resources out there that explain this in great detail.
First of all, you don't need to understand the whole graphics pipeline to write a simple shader :). But, of course, you should know what's going on. You could read the graphics pipeline chapter in Real-Time Rendering, 3rd edition (Akenine-Möller, Haines, Hoffman). What you describe is per-vertex and per-fragment lighting. For both calculations the vertex normals are part of the equation. For the bump mapping shader you alter the interpolated normals. So after rasterization you have fragments where the missing data has to be calculated to determine the final pixel color.
I can't understand glNormal3f. I know that it is for 'normalizing' the 'normals' of a vertex... or something like that, but I can't understand what the 'normal' of a vertex is.
Can you explain that function to me? I can't understand what 'normal' means in OpenGL...
The "normal" of a vertex is the vector which is "perpendicular" to the vertex. In mathematics "normal" is a generalization of "perpendicular". For a polygon, this "normal vector" is perpendicular to the polygon and is the same for all of its vertices. One reason you might assign different normal vectors to each vertex of a polygon is if you are covering a curved surface with very small triangles. In this case, you don't want the normal vectors of the three vertices of the triangle to all be the same.
Now, what is this normal vector used for? The typical application is in lighting calculations when lighting is enabled in OpenGL. The normal vector helps determine whether light from a light source hits a surface and at what angle a light ray strikes it. This can then be used to determine, for instance, how brightly the surface is lit or whether it shows a specular highlight.
A call to glNormal sets the current normal vector, which is applied to the vertices emitted afterwards. A vertex normal is usually calculated as the normalized average of the normals of the faces incident to the vertex. A face's normal is a vector perpendicular to the plane described by the face.
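For illustration, a sketch of that averaging for an indexed triangle mesh, using GLM (assumed; the helper name is made up):

#include <glm/glm.hpp>
#include <cstddef>
#include <vector>

// One normal per vertex: the normalized sum of the (area-weighted) normals of
// all triangles that reference that vertex.
std::vector<glm::vec3> computeVertexNormals(const std::vector<glm::vec3>& positions,
                                            const std::vector<unsigned int>& indices)
{
    std::vector<glm::vec3> normals(positions.size(), glm::vec3(0.0f));
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        unsigned int i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];
        // Cross product of two edges: perpendicular to the triangle's plane.
        glm::vec3 faceNormal = glm::cross(positions[i1] - positions[i0],
                                          positions[i2] - positions[i0]);
        normals[i0] += faceNormal;
        normals[i1] += faceNormal;
        normals[i2] += faceNormal;
    }
    for (glm::vec3& n : normals)
        n = glm::normalize(n);
    return normals;
}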
This function is deprecated and you really should pick up a good tutorial or book.
See also Vertex Normal and the associated entries.
If you should not use glVertex* and the associated glNormal* functions, what should you use? Shaders and VBOs. Have a look at this question.