I'm working on the following shader, which translates (on y), rotates, and repeats (tiles) a texture:
uniform sampler2D texture;
uniform vec2 resolution;
varying vec4 vertColor;
varying vec4 vertTexCoord;
uniform float rotation;
uniform float yTranslation;
void main() {
vec2 repeat = vec2(2, 2);
vec2 coord = vertTexCoord.st;
coord.y += yTranslation;
float sin_factor = sin(rotation);
float cos_factor = cos(rotation);
coord += vec2(0.5);
coord = coord * mat2(cos_factor, sin_factor, -sin_factor, cos_factor) * 0.3;
coord -= vec2(0.5);
coord = vec2(mod(coord.x * repeat.x, 1.0f), mod(coord.y * repeat.y, 1.0f));
gl_FragColor = texture2D(texture, coord) * vertColor;
}
Current behavior
Desired behavior
I want the texture to always rotate around the center, no matter how far it has been translated.
Simply swapping the order of the steps results in weird behavior. What am I missing?
Your problem statement in the question is really the wrong one. Your additional comment (to a now-deleted answer):
I have a boat that always stays in the center of the screen, the water texture (controlled by this shader) under it moves to make it look like the boat is moving. The movement of the water texture is controlled by rotation (for steering) and yTranslation (for how far the boat has moved forwards/backwards)
makes it clear that you're asking for a different thing, and the approach described in the question is simply not going to solve your problem.
When your boat moves and rotates, it basically travels along a curve (and you want the inverse of that curve to travel through texture space). But your two parameters rotation and yTranslation have only two degrees of freedom and cannot describe that curve. Your problem needs at least another parameter xTranslation - so in the end, you need a 2D vector describing the position of your boat plus an angle describing the rotation. And you need to properly accumulate this data at each simulation step (see the sketch after this list):
update the rotation accordingly
calculate the 2D vector your ship is heading toward, as defined by the current rotation
scale it according to the velocity of the movement
accumulate it onto the position vector.
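A minimal sketch of that per-step accumulation (CPU side, written here in GLSL-style syntax; steering, speed and dt are assumed inputs from your game logic, and the sign conventions depend on how you define rotation):
// per simulation step
rotation += steering * dt;                          // 1. update the rotation
vec2 heading = vec2(sin(rotation), cos(rotation));  // 2. direction the ship is heading toward
heading *= speed * dt;                              // 3. scale by the movement velocity
position += heading;                                // 4. accumulate onto the position vector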
Then, your shader simply has to
1. translate the texcoords by position (or -position, whatever you store)
2. Rotate around the pivot point (which is constant and only depends on how you laid out your texture space)
As a side note, regarding this line:
coord = vec2(mod(coord.x * repeat.x, 1.0f), mod(coord.y * repeat.y, 1.0f));
that's a waste of GPU ALU cycles - the TMUs will already do the mod for you if you use the GL_REPEAT wrap mode.
However, what you now have here is rotation, scaling and translation, so just use a single matrix for the whole texcoord transformation - the accumulation of the 2D position that I talked about earlier can nicely be done with the matrix representation. It will also remove the sin and cos from your shader, which are another big waste right now.
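For example, the fragment shader could then reduce to something like this (a sketch only; texMatrix is an assumed uniform holding the combined translation, rotation and scale, rebuilt on the CPU each frame from the accumulated position and angle):
uniform sampler2D texture;
uniform mat3 texMatrix; // accumulated translation + rotation + scale, built on the CPU
varying vec4 vertColor;
varying vec4 vertTexCoord;
void main() {
vec2 coord = (texMatrix * vec3(vertTexCoord.st, 1.0)).xy;
gl_FragColor = texture2D(texture, coord) * vertColor; // GL_REPEAT handles the tiling
}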
The background:
I am writing some terrain visualiser and I am trying to decouple the rendering from the terrain generation.
At the moment, the generator returns some array of triangles and colours, and these are bound in OpenGL by the rendering code (using OpenTK).
So far I have a very simple shader which handles the rotation of the sphere.
The problem:
I would like the application to be able to display the results either as a 3D object, or as a 2D projection of the sphere (let's assume Mercator for simplicity).
I had thought this would be simple: compile an alternative shader for such cases. So, I have a vertex shader which almost works:
precision highp float;
uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;
in vec3 in_position;
in vec3 in_normal;
in vec3 base_colour;
out vec3 normal;
out vec3 colour2;
vec3 fromSphere(in vec3 cart)
{
vec3 spherical;
spherical.x = atan(cart.x, cart.y) / 6; // azimuthal angle (longitude)
float xy = sqrt(cart.x * cart.x + cart.y * cart.y);
spherical.y = atan(xy, cart.z) / 4; // polar angle (colatitude)
spherical.z = -1.0 + (spherical.x * spherical.x) * 0.1; // slight curvature so stray edges land behind the image
return spherical;
}
void main(void)
{
normal = vec3(0,0,1);
normal = (modelview_matrix * vec4(in_normal, 0)).xyz;
colour2 = base_colour;
//gl_Position = projection_matrix * modelview_matrix * vec4(fromSphere(in_position), 1);
gl_Position = vec4(fromSphere(in_position), 1);
}
However, it has a couple of obvious issues (see images below)
Saw-tooth pattern where triangle crosses the cut meridian
Polar region is not well defined
3D case (Typical shader):
2D case (above shader)
Both of these seem to reduce to the statement "A triangle in 3-dimensional space is not always even a single polygon on the projection". (... and this is before any discussion about whether great circle segments from the sphere are expected to be lines after projection ...).
(The 1+x^2 term in z is already a hack to make it a little better - it ensures the projection is not flat, so that any stray edges (i.e. ones that straddle the cut meridian) are safely behind the image.)
The question: Is what I want to achieve possible with a VertexShader / FragmentShader approach? If not, what's the alternative? I think I can re-write the application side to pre-transform the points (and cull / add extra polygons where needed) but it will need to know where the cut line for the projection is — and I feel that this information is analogous to the modelViewMatrix in the 3D case... which means taking this logic out of the shader seems a step backwards.
Thanks!
I'm trying to implement normal mapping, using a simple cube that I created. I followed this tutorial https://learnopengl.com/Advanced-Lighting/Normal-Mapping but I can't really get how normal mapping should be done when drawing 3D objects, since the tutorial uses a 2D object.
In particular, my cube seems almost correctly lighted, but there's something I think is not working as it should. I'm using a geometry shader that outputs green vector normals and red vector tangents, to help me out. Here I post three screenshots of my work.
Directly lighted
Side lighted
Here I actually tried calculating my normals and tangents in a different way (quite wrong).
In the first image I calculate my cube's normals and tangents one face at a time. This seems to work for that face, but if I rotate my cube I think the lighting on the adjacent face is wrong. As you can see in the second image, it's not totally absent.
In the third image, I tried summing all normals and tangents per vertex, as I think it should be done, but the result seems quite wrong, since there is too little lighting.
In the end, my question is how I should calculate normals and tangents.
Should I do per-face calculations, or sum the vectors per vertex across all the faces that share the vertex, or something else?
EDIT --
I'm passing the normal and tangent to the vertex shader and setting up my TBN matrix. But as you can see in the first image, drawing my cube face by face, it seems that the faces adjacent to the one I'm looking at directly (which is well lighted) are not correctly lighted, and I don't know why. I thought that I wasn't correctly calculating my 'per face' normal and tangent, and that calculating a normal and tangent that take the whole object into account could be the right way.
If it's right to calculate the normal and tangent as shown in the second image (green normal, red tangent) to set up the TBN matrix, why does the right face seem not well lighted?
EDIT 2 --
Vertex shader:
void main(){
texture_coordinates = textcoord;
fragment_position = vec3(model * vec4(position,1.0));
mat3 normalMatrix = transpose(inverse(mat3(model)));
vec3 T = normalize(normalMatrix * tangent);
vec3 N = normalize(normalMatrix * normal);
T = normalize(T - dot(T, N) * N);
vec3 B = cross(N, T);
mat3 TBN = transpose(mat3(T,B,N));
view_position = TBN * viewPos; // camera position
light_position = TBN * lightPos; // light position
fragment_position = TBN * fragment_position;
gl_Position = projection * view * model * vec4(position,1.0);
}
In the VS I set up my TBN matrix and transform all light, fragment and view vectors to tangent space; this way I won't have to do any other calculation in the fragment shader.
Fragment shader:
void main() {
vec3 Normal = texture(TextSamplerNormals,texture_coordinates).rgb; // extract normal
Normal = normalize(Normal * 2.0 - 1.0); // correct range
material_color = texture2D(TextSampler,texture_coordinates.st); // diffuse map
vec3 I_amb = AmbientLight.color * AmbientLight.intensity;
vec3 lightDir = normalize(light_position - fragment_position);
vec3 I_dif = vec3(0,0,0);
float DiffusiveFactor = max(dot(lightDir,Normal),0.0);
vec3 I_spe = vec3(0,0,0);
float SpecularFactor = 0.0;
if (DiffusiveFactor>0.0) {
I_dif = DiffusiveLight.color * DiffusiveLight.intensity * DiffusiveFactor;
vec3 vertex_to_eye = normalize(view_position - fragment_position);
vec3 light_reflect = reflect(-lightDir,Normal);
light_reflect = normalize(light_reflect);
SpecularFactor = pow(max(dot(vertex_to_eye,light_reflect),0.0),SpecularLight.power);
if (SpecularFactor>0.0) {
I_spe = DiffusiveLight.color * SpecularLight.intensity * SpecularFactor;
}
}
color = vec4(material_color.rgb * (I_amb + I_dif + I_spe),material_color.a);
}
Handling discontinuity vs continuity
You are thinking about this the wrong way.
Depending on the use case, your normal map may be continuous or discontinuous. For example, in your cube, imagine if each face had a different surface type; then the normals would be different depending on which face you are currently on.
Which normal you use is determined by the texture itself and not by any blending in the fragment.
The actual algorithm (sketched in code after this list) is
Load rgb values of normal
Convert to -1 to 1 range
Rotate by the model matrix
Use new value in shading calculations
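In GLSL, a minimal sketch of those four steps could look like this (normalMap, vUV and normalMatrix are assumed names, not taken from your code):
uniform sampler2D normalMap; // assumed name for the normal-map sampler
uniform mat3 normalMatrix; // rotation part of the model matrix (assumed name)
vec3 fetchNormal(vec2 vUV) {
vec3 n = texture(normalMap, vUV).rgb; // 1. load the rgb values of the normal
n = n * 2.0 - 1.0; // 2. convert to the -1 to 1 range
n = normalize(normalMatrix * n); // 3. rotate by the model (normal) matrix
return n; // 4. use this value in the shading calculations
}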
If you want continuous normals, then you need to make sure that the charts of texture space that you use agree at their boundaries, i.e. that the limits of the texture coordinates map to matching normals.
Mathematically that means: let U and V be regions of R^2 that are mapped onto your shape S, and let f be the mapping from texture coordinates to the normal field N. Then for sequences {x_n} in U and {y_n} in V:
if lim S(x_n) = lim S(y_n), then lim f(x_n) = lim f(y_n).
In plain English, if the coordinates in your charts map to positions that are close on the shape, then the normals they map to should also be close in normal space.
TL;DR: do not blend in the fragment shader. This is something that should be done by the normal map itself when it's baked, not by you when rendering.
Handling the tangent space
You have two options. Option 1: you pass the tangent T and the normal N to the shader, in which case the binormal B is T x N and the basis {T, N, B} gives you the true space where the normal-map normals need to be expressed.
Assume that in tangent space x is side, y is forward and z is up. Your transformed normal then becomes x*B + y*T + z*N.
If you do not pass the tangent, you must first create an arbitrary vector that is orthogonal to the normal, then use this as the tangent.
(Note: N is the model normal, while (x, y, z) is the normal-map normal.)
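A hedged sketch of both cases (all names here are assumptions; n is the normal-map value after the conversion to the -1..1 range):
// Case 1: tangent T and normal N are passed to the shader
vec3 B = cross(T, N); // binormal, B = T x N
vec3 shadingNormal = normalize(n.x * B + n.y * T + n.z * N);
// Case 2: no tangent available - derive an arbitrary vector orthogonal to N and use it as T
vec3 axis = abs(N.x) < 0.9 ? vec3(1.0, 0.0, 0.0) : vec3(0.0, 1.0, 0.0); // pick an axis not parallel to N
vec3 T2 = normalize(cross(axis, N)); // orthogonal to N by construction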
I'm attempting to implement shadow mapping into my deferred rendering pipeline, but I'm running into a few issues actually generating the shadow map, then shadowing the pixels – pixels that I believe should be shadowed simply aren't.
I have a single directional light, which is the 'sun' in my engine. I have deferred rendering set up for lighting, which works properly thus far. I render the scene again into a depth-only FBO for the shadow map, using the following code to generate the view matrix:
glm::vec3 position = r->getCamera()->getCameraPosition(); // position of level camera
glm::vec3 lightDir = this->sun->getDirection(); // sun direction vector
glm::mat4 depthProjectionMatrix = glm::ortho<float>(-10,10,-10,10,-10,20); // ortho projection
glm::mat4 depthViewMatrix = glm::lookAt(position + (lightDir * 20.f / 2.f), -lightDir, glm::vec3(0,1,0));
glm::mat4 lightSpaceMatrix = depthProjectionMatrix * depthViewMatrix;
Then, in my lighting shader, I use the following code to determine whether a pixel is in shadow or not:
// lightSpaceMatrix is the same as above, FragWorldPos is the world position of the texel
vec4 FragPosLightSpace = lightSpaceMatrix * vec4(FragWorldPos, 1.0f);
// multiply non-ambient light values by ShadowCalculation(FragPosLightSpace)
// ... do more stuff ...
float ShadowCalculation(vec4 fragPosLightSpace) {
// perform perspective divide
vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
// vec3 projCoords = fragPosLightSpace.xyz;
// Transform to [0,1] range
projCoords = projCoords * 0.5 + 0.5;
// Get closest depth value from light's perspective (using [0,1] range fragPosLight as coords)
float closestDepth = texture(gSunShadowMap, projCoords.xy).r;
// Get depth of current fragment from light's perspective
float currentDepth = projCoords.z;
// Check whether current frag pos is in shadow
float bias = 0.005;
float shadow = (currentDepth - bias) > closestDepth ? 1.0 : 0.0;
// Ensure that Z value is no larger than 1
if(projCoords.z > 1.0) {
shadow = 0.0;
}
return shadow;
}
However, that doesn't really get me what I'm after. Here's a screenshot of the output after shadowing, as well as the shadow map half-assedly converted to an image in Photoshop:
Render output
Shadow Map
The directional light is the only light in my shader, and the shadow map seems to be rendered pretty close to correctly, since the perspective/direction roughly match. However, what I don't understand is why none of the teapots actually end up casting a shadow on the others.
I'd appreciate any pointers on what I might be doing wrong. I think my issue lies either in the calculation of that light space matrix (I'm not sure how to properly calculate it, given a moving camera, such that the stuff that's in view is covered), or in the way I determine whether the texel the deferred renderer is shading is in shadow or not. (FWIW, I determine the world position from the depth buffer, but I've proven that this calculation is working correctly.)
Thanks for any help.
Debugging shadow problems can be tricky. Let's start with a few points:
If you look at your render closely, you will actually see a shadow on one of the pots in the top left corner.
Try rotating your sun, this usually helps to see if there are any problems with the light transform matrix. From your output, it seems the sun is very horizontal and might not cast shadows on this setup. (another angle might show more shadows)
It appears as though you are calculating the matrix correctly, but try shrinking your maximum depth in glm::ortho(-10,10,-10,10,-10,20) to tightly fit your scene. If the depth is too large, you will lose precision and shadow will have artifacts.
To further narrow down where the problem is coming from, try outputting the result of your shadow map lookup from here:
closestDepth = texture(gSunShadowMap, projCoords.xy).r
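For example, a minimal debug output could look like this (FragColor stands in for whatever output your lighting pass actually writes to):
float closestDepth = texture(gSunShadowMap, projCoords.xy).r; // same lookup as above
FragColor = vec4(vec3(closestDepth), 1.0); // grey-scale view of the projected shadow map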
If the shadow map is being projected correctly, then you know you have a problem in your depth comparisons. Hope this helps!
These days I am reading the book Learning Modern 3D Graphics Programming by Jason L. McKesson. Basically it is a book about OpenGL 3.3, and I am now at chapter 4, which is about orthographic and perspective views.
At the end of the chapter, under the "Further Study" section, he suggests trying a few things, like implementing a variable eye point (at the beginning he used (0, 0, 0) in camera space for simplicity) and an arbitrary perspective plane location.
He says I am going to need to offset the X, Y camera-space positions of the vertices by E_x and E_y respectively.
I cannot understand this passage: how am I supposed to use a variable eye point by modifying only the X, Y offsets?
Edit: could it be something like this?
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
smooth out vec4 theColor;
uniform vec2 offset;
uniform vec2 E;
uniform float zNear;
uniform float zFar;
uniform float frustumScale;
void main()
{
vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
vec4 clipPos;
clipPos.xy = cameraPos.xy * frustumScale + vec4(E.x, E.y, 0.0, 0.0);
clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
clipPos.z += 2 * zNear * zFar / (zNear - zFar);
clipPos.w = cameraPos.z / (-E.z);
gl_Position = clipPos;
theColor = color;
}
Edit 2: thanks Boris, your picture helped a lot :) especially because:
it makes clear what you previously stated about thinking of E as the projection plane position and not the eye point position
it underlines that the size of the projection plane must always be [-1, 1], a passage that I read in the book without fully understanding what it meant
Just out of curiosity, why do you mention multiplying after subtracting? Is it for the same reason the book gives, that is, the aspect ratio? Because everything logically pushes me to do exactly the opposite, that is, first the translation (-2) and then the multiplication (/5). Or maybe by "scaling" the book refers to the reshape function?
Here, we are interested in computing a transformation from Camera Coordinates (CC) to Normalized Device Coordinates (NDC).
Think of E as the position of the projection plane in Camera Coordinates, instead of the position of the eye point relative to the projection plane. In Camera Coordinates, the eye point is by definition located at the origin, at least in my interpretation of what "Camera Coordinates" means: a coordinate frame centered on the point you look at the scene from. (You can mathematically define a perspective transformation centered anywhere, but then your input space is not the camera space, imho. This is what the World->Camera transformation is for, as you will see in chapter 6.)
Summary:
you are in camera space, hence your eye point is located at (0,0,0)
you are looking toward the negative Z-axis
your projection plane is parallel to the xOy plane, with a size of [-1, 1] in both directions
This is the picture (each tick is 0.5 units):
In this picture, you can see that the projection plane (the bottom side of the gray trapezoid) is centered at (0, 0, -1), with a size of [-1, 1] in both the X and Y directions.
Now, what is asked is, instead of choosing (0, 0, -1) for the center of this plane, to choose an arbitrary (E.x, E.y, E.z) position (assume E.z is negative). But the plane still has to be parallel to the xOy plane and keep the same size.
You can see that the E.xy components play a very different role than E.z, which is why E.xy will be involved in a subtraction, while E.z will be involved in a division. This is easy to see with an example:
assume zNear = -E.z (not necessarily the case, but you can in fact always change frustumScale to have an equivalent perspective satisfying this)
consider the point E (which is the center of the projection plane).
What is its coordinate in NDC space? It is (0, 0, -1) by definition. What you've done is subtract E.xy, but divide by -E.z.
Your code captures this idea, but some things are still wrong:
First, you defined uniform vec2 E; instead of uniform vec3 E; (just a typo, not a big deal)
The line clipPos.xy = ...; is vec2 arithmetic. Hence, you can only multiply by scalar values (i.e., a float) or add/subtract vec2 values. So vec4(E.x, E.y, 0.0, 0.0) is of the incorrect type; you should use E.xy instead, which has the correct type, vec2.
You should in fact subtract E.xy instead of adding it. This is easy to see in my example above.
Finally, things are more subtle ;-)
I made a picture to illustrate the modifications:
Each tick is 1 unit in this picture. Top left is your Camera Coordinate space, with zNear, zFar, and two possible projection planes displayed. The blue one is the one used in the explanation and shader so far, and the red one is the one you now want to use. The colored areas correspond to what should be visible on your final screen, i.e. what should end up in the cube [-1,1]^3 in NDC space. Hence, if you use the blue projection plane, you want to obtain the space in the top right, and if you use the red projection plane, you want to obtain the space at the bottom. To do this, you can observe that you need to perform the scaling and translation in NDC space, i.e. after the perspective division! (I think what is written in the book is either incorrect or interprets the question differently.)
Hence you want to do, in Euclidean coordinates (i.e., not homogeneous coordinates, without the W coordinate):
clipPosEuclideanRed.xy = clipPosEuclideanBlue.xy * (-E.z) - E.xy;
clipPosEuclideanRed.z = clipPosEuclideanBlue.z;
However, because you are in homogeneous coordinates, these values are in fact:
clipPosEuclidean.xyz = clipPos.xyz / clipPos.w; // with clipPos.w = -cameraPos.z;
Hence, you have to compensate by writing:
clipPosRed.xy = clipPosBlue.xy * (-E.z) - E.xy * (-cameraPos.z);
clipPosRed.z = clipPosBlue.z;
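As a quick sanity check (my addition, assuming frustumScale = 1, so that the blue plane is the [-1, 1] square at z = -1): for the point P = E we get clipPosEuclideanBlue.xy = E.xy / (-E.z), and therefore clipPosEuclideanRed.xy = (E.xy / (-E.z)) * (-E.z) - E.xy = 0, i.e. the center of the red projection plane ends up in the center of the screen, as expected.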
So my solution to this problem would be to add only one line:
void main()
{
vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
vec4 clipPos;
clipPos.xy = cameraPos.xy * frustumScale;
// only add this line
clipPos.xy = - clipPos.xy * E.z + E.xy * cameraPos.z;
clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
clipPos.z += 2 * zNear * zFar / (zNear - zFar);
clipPos.w = -cameraPos.z;
gl_Position = clipPos;
theColor = color;
}
I'm generating terrain in an OpenGL geometry shader and am having trouble calculating normals for lighting. I'm generating the terrain dynamically each frame with a Perlin noise function implemented in the geometry shader. Because of this, I need an efficient way to calculate per-vertex normals based on the noise function (no texture or anything). I am able to take the cross product of two sides to get face normals, but since they are generated dynamically with the geometry I cannot then go back and smooth the face normals into vertex normals. How can I get vertex normals on the fly using just the noise function that generates the height of my terrain on the y axis (the height being between -1 and 1)? I believe I have to sample the noise function four times for each vertex, but I tried something like the following and it didn't work...
vec3 xP1 = vertex + vec3(1.0, 0.0, 0.0);
vec3 xN1 = vertex + vec3(-1.0, 0.0, 0.0);
vec3 zP1 = vertex + vec3(0.0, 0.0, 1.0);
vec3 zN1 = vertex + vec3(0.0, 0.0, -1.0);
float sx = snoise(xP1) - snoise(xN1);
float sz = snoise(zP1) - snoise(zN1);
vec3 n = vec3(-sx, 1.0, sz);
normalize(n);
return n;
The above actually generated lighting that moved around like perlin noise! So any advice for how I can get the per-vertex normals correctly?
The normal is the vector perpendicular to the tangent (the tangent being the slope). The slope of a function is its derivative; for n dimensions, its n partial derivatives. So you sample the noise around a center point P, and at P ± (δx, 0) and P ± (0, δy), with δx, δy chosen to be as small as possible but large enough for numerical stability. This yields the tangents in each direction. Then you take the cross product of them, normalize the result, and you have the normal at P.
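A minimal GLSL sketch of this (assuming snoise here is a 2D noise function returning the terrain height, with d as the sampling offset):
vec3 terrainNormal(vec2 p, float d) {
// central differences of the height field along x and z
float hx1 = snoise(p + vec2(d, 0.0));
float hx0 = snoise(p - vec2(d, 0.0));
float hz1 = snoise(p + vec2(0.0, d));
float hz0 = snoise(p - vec2(0.0, d));
vec3 tangentX = vec3(2.0 * d, hx1 - hx0, 0.0); // tangent along x
vec3 tangentZ = vec3(0.0, hz1 - hz0, 2.0 * d); // tangent along z
return normalize(cross(tangentZ, tangentX)); // cross product of the tangents, pointing up (+y)
}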
You didn't say exactly how you were actually generating the positions, so I'm going to assume that you're using the Perlin noise to generate height values in a heightmap. So, for any position X, Y in the heightmap, you use a 2D noise function to generate the Z value.
So, let's assume that your position is computed as follows:
vec3 CalcPosition(in vec2 loc) {
float height = MyNoiseFunc2D(loc);
return vec3(loc, height);
}
This generates a 3D position. But in what space is this position in? That's the question.
Most noise functions expect loc to be two values on some particular floating-point range. How good your noise function is will determine what range you can pass values in. Now, if your model space 2D positions are not guaranteed to be within the noise function's range, then you need to transform them to that range, do the computations, and then transform it back to model space.
In so doing, you now have a 3D position. The transform for the X and Y values is simple (the reverse of the transform to the noise function's space), but what of the Z? Here, you have to apply some kind of scale to the height. The noise function will return a number on the range [0, 1), so you need to scale this range to the same model space that your X and Y values are going to. This is typically done by picking a maximum height and scaling the position appropriately. Therefore, our revised calc position looks something like this:
vec3 CalcPosition(in vec2 modelLoc, const in mat3 modelToNoise, const in mat4 noiseToModel)
{
vec2 loc = (modelToNoise * vec3(modelLoc, 1.0)).xy; // back to 2D after the affine transform
float height = MyNoiseFunc2D(loc);
vec4 modelPos = noiseToModel * vec4(loc, height, 1.0);
return modelPos.xyz;
}
The two matrices transform to the noise function's space, and then transform back. Your actual code could use less complicated structures, depending on your use case, but a full affine transformation is simple to describe.
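For illustration only (these names and ranges are assumptions, not from the question): if the terrain patch spans [minXY, maxXY] in model space, the noise function expects input in [0, 1], and heights are scaled by maxHeight, the two transforms could be built like this (GLSL matrix constructors are column-major):
uniform vec2 minXY; // assumed: model-space extents of the terrain patch
uniform vec2 maxXY;
uniform float maxHeight; // assumed: vertical scale of the terrain
vec2 scale = 1.0 / (maxXY - minXY); // model units -> noise units
mat3 modelToNoise = mat3(
scale.x, 0.0, 0.0, // column 0
0.0, scale.y, 0.0, // column 1
-minXY.x * scale.x, -minXY.y * scale.y, 1.0); // column 2: translation
mat4 noiseToModel = mat4(
maxXY.x - minXY.x, 0.0, 0.0, 0.0, // column 0
0.0, maxXY.y - minXY.y, 0.0, 0.0, // column 1
0.0, 0.0, maxHeight, 0.0, // column 2: height scale
minXY.x, minXY.y, 0.0, 1.0); // column 3: translation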
OK, now that we have established that, what you need to keep in mind is this: nothing makes sense unless you know what space it is in. Your normal, your positions, nothing matters until you establish what space it is in.
This function returns positions in model space. We need to calculate normals in model space. To do that, we need 3 positions: the current position of the vertex, and two positions that are slightly offset from the current position. The positions we get must be in model space, or our normal will not be.
Therefore, we need to have the following function:
void CalcDeltas(in vec2 modelLoc, const in mat3 modelToNoise, const in mat4 noiseToModel, out vec3 modelXOffset, out vec3 modelYOffset)
{
vec2 loc = (modelToNoise * vec3(modelLoc, 1.0)).xy;
vec2 xOffsetLoc = loc + vec2(delta, 0.0);
vec2 yOffsetLoc = loc + vec2(0.0, delta);
float xOffsetHeight = MyNoiseFunc2D(xOffsetLoc);
float yOffsetHeight = MyNoiseFunc2D(yOffsetLoc);
modelXOffset = (noiseToModel * vec4(xOffsetLoc, xOffsetHeight, 1.0)).xyz;
modelYOffset = (noiseToModel * vec4(yOffsetLoc, yOffsetHeight, 1.0)).xyz;
}
Obviously, you can merge these two functions into one.
The delta value is a small offset in the space of the noise texture's input. The size of this offset depends on your noise function; it needs to be big enough to return a height that is significantly different from the one returned by the actual current position. But it needs to be small enough that you aren't pulling from random parts of the noise distribution.
You should get to know your noise function.
Now that you have the three positions (the current position, the x-offset, and the y-offset) in model space, you can compute the vertex normal in model space:
vec3 modelXGrad = modelXOffset - modelPosition;
vec3 modelYGrad = modelYOffset - modelPosition;
vec3 modelNormal = normalize(cross(modelXGrad, modelYGrad));
From here, do the usual things. But never forget to keep track of the spaces of your various vectors.
Oh, and one more thing: this should be done in the vertex shader. There's no reason to do this in a geometry shader, since none of the computations affect other vertices. Let the GPU's parallelism work for you.