I'm trying to draw an element that always faces the camera. I've read some articles about billboards in shaders; the problem is that I need to compute the rotation outside of the shaders and for different objects (circles, squares, meshes, …).
So I'm trying to compute only the model's rotation (not a matrix; something similar to Transform.LookAt in the Unity engine), but I don't know how. Here is what I've got:
// Camera data
mat4 viewMatrix = camera->getMatrix();
vec3 up = {viewMatrix[0][1], viewMatrix[1][1], viewMatrix[2][1]};
// Compute useful data.
vec3 look = normalize(camera->getPosition() - model->getPosition());
vec3 right = cross(up, look);
vec3 up2 = cross(look, right);
// Default: Compute matrix.
mat4 transform;
transform[0] = vec4(right, 0);
transform[1] = vec4(up2, 0);
transform[2] = vec4(look, 0);
transform[3] = vec4(position, 1);
// What I want : Compute rotation from look, right, up data and remove "Default: Compute matrix" part.
// ???
My framework computes the model's matrix from its attributes (position, scale, rotation), so I can't override it; I just want to compute the rotation.
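For what it's worth, here is a minimal GLM sketch of one way to collapse that basis into a rotation value (a quaternion, or Euler angles if that is what your framework stores); the setRotation call at the end is hypothetical, since I don't know your framework's API:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

glm::quat billboardRotation(const glm::mat4& viewMatrix,
                            const glm::vec3& cameraPos,
                            const glm::vec3& modelPos)
{
    // Same basis construction as above.
    glm::vec3 up    = glm::vec3(viewMatrix[0][1], viewMatrix[1][1], viewMatrix[2][1]);
    glm::vec3 look  = glm::normalize(cameraPos - modelPos);
    glm::vec3 right = glm::normalize(glm::cross(up, look));
    glm::vec3 up2   = glm::cross(look, right);

    // A pure rotation matrix is just these basis vectors as columns, so the
    // rotation "value" is the quaternion equivalent of that 3x3 matrix.
    return glm::quat_cast(glm::mat3(right, up2, look));
}

// Hypothetical usage:
//   glm::quat q = billboardRotation(camera->getMatrix(), camera->getPosition(), model->getPosition());
//   model->setRotation(q);   // or glm::eulerAngles(q) if Euler angles are expected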
I'm trying to implement normal mapping, using a simple cube that I created. I followed this tutorial https://learnopengl.com/Advanced-Lighting/Normal-Mapping but I can't really work out how normal mapping should be done when drawing 3D objects, since the tutorial uses a 2D object.
In particular, my cube seems almost correctly lit, but there's something I think is not working the way it should. I'm using a geometry shader that outputs the normals as green vectors and the tangents as red vectors, to help me out. Here I post three screenshots of my work.
Directly lighted
Side lighted
Here I actually tried calculating my normals and tangents in a different way (quite wrong).
In the first image I calculate my cube's normals and tangents one face at a time. This seems to work for that face, but if I rotate my cube I think the lighting on the adjacent face is wrong. As you can see in the second image, it's not totally absent.
In the third image I tried summing all normals and tangents per vertex, as I think it should be done, but the result seems quite wrong, since there is too little lighting.
In the end, my question is how I should calculate normals and tangents.
Should I do the calculations per face, or sum the vectors per vertex across all the faces they belong to, or something else?
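For reference, a minimal sketch of the "sum per vertex" variant in C++ with GLM, assuming an indexed triangle mesh (the Vertex layout is made up for illustration). Note that for a cube with hard edges you usually would not average across faces at all; instead you duplicate the vertices per face so each face keeps its own normal.
#include <vector>
#include <glm/glm.hpp>

struct Vertex {                       // hypothetical layout
    glm::vec3 pos;
    glm::vec2 uv;
    glm::vec3 normal  = glm::vec3(0.0f);
    glm::vec3 tangent = glm::vec3(0.0f);
};

// Accumulate area-weighted face normals/tangents into the vertices, then normalize.
void buildSmoothTangentSpace(std::vector<Vertex>& verts, const std::vector<unsigned>& indices)
{
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        Vertex& v0 = verts[indices[i]];
        Vertex& v1 = verts[indices[i + 1]];
        Vertex& v2 = verts[indices[i + 2]];

        glm::vec3 e1 = v1.pos - v0.pos, e2 = v2.pos - v0.pos;
        glm::vec2 d1 = v1.uv  - v0.uv,  d2 = v2.uv  - v0.uv;

        glm::vec3 faceNormal = glm::cross(e1, e2);          // length ~ triangle area
        float det = d1.x * d2.y - d2.x * d1.y;              // guard against degenerate UVs
        glm::vec3 faceTangent = (det != 0.0f) ? (e1 * d2.y - e2 * d1.y) / det
                                              : glm::vec3(0.0f);

        v0.normal  += faceNormal;  v1.normal  += faceNormal;  v2.normal  += faceNormal;
        v0.tangent += faceTangent; v1.tangent += faceTangent; v2.tangent += faceTangent;
    }
    for (Vertex& v : verts) {
        v.normal = glm::normalize(v.normal);
        // Gram-Schmidt: keep the tangent perpendicular to the averaged normal.
        v.tangent = glm::normalize(v.tangent - v.normal * glm::dot(v.normal, v.tangent));
    }
}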
EDIT --
I'm passing the normal and tangent to the vertex shader and setting up my TBN matrix. But as you can see in the first image, drawing my cube face by face, the faces adjacent to the one I'm looking at directly (which is well lit) are not correctly lit, and I don't know why. I thought that I wasn't correctly calculating my per-face normal and tangent, and that calculating a normal and tangent that takes the whole object into account could be the right way.
If it's right to calculate the normal and tangent as shown in the second image (green normal, red tangent) to set up the TBN matrix, why does the right face not seem to be lit properly?
EDIT 2 --
Vertex shader:
void main(){
    texture_coordinates = textcoord;
    fragment_position = vec3(model * vec4(position, 1.0));
    mat3 normalMatrix = transpose(inverse(mat3(model)));
    vec3 T = normalize(normalMatrix * tangent);
    vec3 N = normalize(normalMatrix * normal);
    T = normalize(T - dot(T, N) * N);
    vec3 B = cross(N, T);
    mat3 TBN = transpose(mat3(T, B, N));
    view_position = TBN * viewPos;       // camera position
    light_position = TBN * lightPos;     // light position
    fragment_position = TBN * fragment_position;
    gl_Position = projection * view * model * vec4(position, 1.0);
}
In the vertex shader I set up my TBN matrix and transform all the light, fragment and view vectors to tangent space; doing so, I won't have to do any other calculation in the fragment shader.
Fragment shader:
void main() {
    vec3 Normal = texture(TextSamplerNormals, texture_coordinates).rgb; // extract normal
    Normal = normalize(Normal * 2.0 - 1.0);                             // correct range
    material_color = texture2D(TextSampler, texture_coordinates.st);    // diffuse map
    vec3 I_amb = AmbientLight.color * AmbientLight.intensity;
    vec3 lightDir = normalize(light_position - fragment_position);
    vec3 I_dif = vec3(0, 0, 0);
    float DiffusiveFactor = max(dot(lightDir, Normal), 0.0);
    vec3 I_spe = vec3(0, 0, 0);
    float SpecularFactor = 0.0;
    if (DiffusiveFactor > 0.0) {
        I_dif = DiffusiveLight.color * DiffusiveLight.intensity * DiffusiveFactor;
        vec3 vertex_to_eye = normalize(view_position - fragment_position);
        vec3 light_reflect = reflect(-lightDir, Normal);
        light_reflect = normalize(light_reflect);
        SpecularFactor = pow(max(dot(vertex_to_eye, light_reflect), 0.0), SpecularLight.power);
        if (SpecularFactor > 0.0) {
            I_spe = DiffusiveLight.color * SpecularLight.intensity * SpecularFactor;
        }
    }
    color = vec4(material_color.rgb * (I_amb + I_dif + I_spe), material_color.a);
}
Handling discontinuity vs continuity
You are thinking about this the wrong way.
Depending on the use case, your normal map may be continuous or discontinuous. For example, with your cube, imagine each face had a different surface type; then the normals would differ depending on which face you are currently on.
Which normal you use is determined by the texture itself, not by any blending in the fragment shader.
The actual algorithm is (a sketch in code follows this list):
Load the RGB values of the normal
Convert them to the [-1, 1] range
Rotate by the model matrix
Use the new value in the shading calculations
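The same four steps written out on the CPU with GLM; the math is exactly what a fragment shader would do. I use the inverse-transpose of the model matrix for the rotation step (as the asker's shader does) so it also behaves under non-uniform scaling; names are illustrative.
#include <glm/glm.hpp>

// rgb is the texel fetched from the normal map, each component in [0, 1].
glm::vec3 shadingNormal(const glm::vec3& rgb, const glm::mat4& model)
{
    glm::vec3 n = rgb * 2.0f - 1.0f;                              // steps 1-2: load and remap to [-1, 1]
    glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(model)));
    return glm::normalize(normalMatrix * n);                      // step 3: rotate into world space
}
// step 4: feed the result into the usual lighting equations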
If you want continuous normals, then you need to make sure that the charts in texture space that you use agree at their limits.
Mathematically, that means: if U and V are regions of R^2 that are mapped onto your shape (call that mapping S) and onto its normal field N (call that mapping f), it should be that
if lim S(x_1, x_2) = lim S(y_1, y_2), where {x_1, x_2} ⊂ U and {y_1, y_2} ⊂ V, then lim f(x_1, x_2) = lim f(y_1, y_2).
In plain English: if the coordinates in your charts map to positions that are close on the shape, then the normals they map to should also be close in normal space.
TL;DR: do not blend in the fragment shader. This is something that should be done by the normal map itself when it is baked, not by you when rendering.
Handling the tangent space
You have two options. Option 1: you pass the tangent T and the normal N to the shader, in which case the binormal B is T × N and the basis {T, N, B} gives you the true space where normals need to be expressed.
Assume that in tangent space x is side, y is forward and z is up. Your transformed normal then becomes xB + yT + zN.
If you do not pass the tangent, you must first create an arbitrary vector that is orthogonal to the normal, then use this as the tangent (a sketch follows below).
(Note: N is the model normal, whereas (x, y, z) is the normal-map normal.)
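A minimal GLM sketch of that fallback (the helper-axis trick just avoids the degenerate case where the guess is parallel to the normal). This frame is arbitrary: it will not match the UV layout, so it only makes sense when the normal map does not rely on a consistent tangent direction.
#include <cmath>
#include <glm/glm.hpp>

// Builds an arbitrary orthonormal tangent/bitangent pair for a unit normal n.
void makeTangentFrame(const glm::vec3& n, glm::vec3& t, glm::vec3& b)
{
    glm::vec3 helper = (std::fabs(n.x) < 0.9f) ? glm::vec3(1, 0, 0)   // any axis not parallel to n
                                               : glm::vec3(0, 1, 0);
    t = glm::normalize(glm::cross(helper, n));
    b = glm::cross(n, t);
}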
I'm creating a basic OpenGL scene and I have a problem with manipulating my objects. Each one has a different transformation matrix, and there's also a modelview/translation/scaling matrix for the whole scene.
How do I bind this data to my objects before executing the calculations in the vertex shader? I've read about gl(Push|Pop)Matrix(), but those functions are deprecated from what I understand.
A bit of my code. Position from vertex shader:
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
And C++ function to display objects:
// Clear etc...
mat4 lookAt = glm::lookAt();
glLoadMatrixf(&lookAt[0][0]);
mat4 combined = lookAt * (mat4) sceneTranslation * (mat4) sceneScale;
glLoadMatrixf(&combined[0][0]);
mat4 objectTransform(1.0);
// Transformations...
// No idea if it works, but objects are affected by camera position but not individually scaled, moved etc.
GLuint gl_ModelViewMatrix = glGetUniformLocation(shaderprogram, "gl_ModelViewMatrix");
glUniformMatrix4fv(gl_ModelViewMatrix, 1, GL_FALSE, &objectTransform[0][0]);
// For example
glutSolidCube(1.0);
glutSwapBuffers();
Well, you don't have to use glLoadMatrix and the other built-in matrix functions, because that can be even more difficult than handling your own matrices.
A simple camera example without controls, i.e. a static camera:
glm::mat4x4 view_matrix = glm::lookAt(
cameraPosition,
cameraPosition+directionVector, //or the focus point the camera is pointing to
upVector);
It returns a 4x4 matrix, this is the view matrix.
glm::mat4x4 projection_matrix =
glm::perspective(60.0f, float(screenWidth)/float(screenHeight), 1.0f, 1000.0f);
This is the projection matrix.
So now you have the view and projection matrices, and you can send them to the shader:
gShader->bindShader();
gShader->sendUniform4x4("view_matrix",glm::value_ptr(view_matrix));
gShader->sendUniform4x4("projection_matrix",glm::value_ptr(projection_matrix));
bindShader() is simply glUseProgram(shaderprog);
The uniform upload function is:
void sendUniform4x4(const string& name, const float* matrix, bool transpose = false)
{
    GLuint location = getUniformLocation(name);
    glUniformMatrix4fv(location, 1, transpose, matrix);
}
Your model matrix is individual for each of your objects:
glm::mat4x4 model_matrix = glm::mat4(1); //this is the identity matrix, so it's static
model_matrix= glm::rotate(model_matrix,
rotationdegree,
vec3(axis)); //same as opengl function.
This creates a model matrix, and you can send it to your shader too:
gShader->bindShader();
gShader->sendUniform4x4("model_matrix",glm::value_ptr(model_matrix));
glm::value_ptr(...) returns a pointer to the matrix's data, laid out as a flat array of floats that glUniformMatrix4fv can consume.
In your shader code, don't use gl_ModelViewMatrix and gl_ProjectionMatrix;
the matrices are sent via uniforms.
uniform mat4 projection_matrix;
uniform mat4 view_matrix;
uniform mat4 model_matrix;
void main(){
    gl_Position = projection_matrix * view_matrix * model_matrix * gl_Vertex;
    //I wrote gl_Vertex because of the glutSolidTeapot.
}
I've never used this built-in mesh function, so I don't know exactly how it works; assuming it sends the vertices to the shader in immediate mode, use gl_Vertex.
If you create your own meshes, use a VBO with glVertexAttribPointer and glDrawArrays/glDrawElements (a sketch follows below).
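A rough sketch of that VBO path for a single triangle, assuming a loader such as GLEW supplies the GL entry points and the vertex shader declares its position attribute at location 0:
#include <GL/glew.h>

GLuint createTriangleVbo()
{
    const float verts[] = { -0.5f, -0.5f, 0.0f,
                             0.5f, -0.5f, 0.0f,
                             0.0f,  0.5f, 0.0f };
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    return vbo;
}

void drawTriangle(GLuint vbo)
{
    // Bind the shader and send the uniforms before calling this.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);                                  // location 0 = position
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableVertexAttribArray(0);
}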
Don't forget to bind the shader before sending uniforms.
So with a complete example:
glm::mat4x4 view_matrix = glm::lookAt(glm::vec3(2,4,2), glm::vec3(-1,-1,-1), glm::vec3(0,1,0));
glm::mat4x4 projection_matrix =
glm::perspective(60.0f, float(screenWidth)/float(screenHeight), 1.0f, 10.0f);
glm::mat4x4 model_matrix= glm::mat4(1); //this remains unchanged
glm::mat4x4 teapot_model_matrix= glm::rotate(model_matrix, //this is the teapots model matrix, apply transformations to this
45,
glm::vec3(1,1,1));
teapot_model_matrix = glm::scale(teapot_model_matrix, vec3(2,2,2));
gShader->bindShader();
gShader->sendUniform4x4("model_matrix",glm::value_ptr(model_matrix));
gShader->sendUniform4x4("view_matrix",glm::value_ptr(view_matrix));
gShader->sendUniform4x4("projection_matrix",glm::value_ptr(projection_matrix));
glutSolidCube(1.0); //the argument is the edge length of the cube (0.0 would draw nothing)
glutSwapBuffers();
///////////////////////////////////
in your shader:
uniform mat4 projection_matrix; //these are the matrices you've sent
uniform mat4 view_matrix;
uniform mat4 model_matrix;
void main(){
    gl_Position = projection_matrix * view_matrix * model_matrix * vec4(gl_Vertex.xyz, 1);
}
Now you should have a camera positioned at (2,4,2), focused on (-1,-1,-1), and the up vector pointing up. :)
A teapot is rotated by 45 degrees around the (1,1,1) vector, and scaled by 2 in every direction.
After changing a model matrix, send it to the shader; if you have more objects to render, send the corresponding matrix before drawing each one if you want different transformations applied to each mesh.
A pseudocode for this looks like:
camera.lookat(camerapostion,focuspoint,updirection); //sets the view
camera.project(fov,aspect ratio,near plane, far plane) //and projection matrix
camera.sendviewmatrixtoshader;
camera.sendprojectionmatrixtoshader;
obj1.rotate(45 degrees, 1,1,1); //these functions should transform the model matrix of the object. Make sure each one has its own.
obj1.sendmodelmatrixtoshader;
obj2.scale(2,1,1);
obj2.sendmodelmatrixtoshader;
If it doesn't work, try it with a vertex buffer and a simple triangle or cube you create yourself.
You should use a math library; I recommend GLM. It has matrix functions just like OpenGL's and uses column-major matrices, so you can calculate your own and apply them to your objects.
First, you should have a matrix class for your scene which calculates your view matrix and projection matrix (glm::lookAt and glm::perspective). They work the same as in OpenGL. You can send them as uniforms to the vertex shader.
For the objects, you calculate your own matrices and send them as the model matrix to the shader(s).
In the shader or on the CPU you calculate the vp matrix:
vp = proj * view.
You send your individual model matrices to the shader and calculate the final position:
gl_Position = vp * m * vec4(vertex.xyz, 1);
MODEL MATRIX
With GLM you can easily calculate and transform your matrices. You create a simple identity matrix:
glm::mat4x4(1) //identity
You can translate, rotate and scale it:
glm::scale
glm::rotate
glm::translate
They work like the immediate-mode OpenGL functions.
After you have your matrix, send it via a uniform.
MORE MODEL MATRIX
shader->senduniform("proj", camera.projectionmatrix);
shader->senduniform("view", camera.viewmatrix);
glm::mat4 model(1);
obj1.modelmatrix = glm::translate(model,vec3(1,2,1));
shader->senduniform("model", obj1.modelmatrix);
objectloader.render(obj1);
obj2.modelmatrix = glm::rotate(model,obj2.degrees,vec3(obj2.rotationaxis));
shader->senduniform("model", obj2.modelmatrix);
objectloader.render(obj2);
This is just one way to do it. You can write a class for push/pop-style matrix handling and automate the method above like this (a rough sketch of such a class follows the pseudocode):
obj1.rotate(degrees,vec3(axis)); //this calculates the obj1.modelmatrix for example rotating the identity matrix.
obj1.translate(vec3(x,y,z))//more transform
obj1.render();
//continue with object 2
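A rough C++/GLM sketch of what such a class could look like; SceneObject and its methods are made up, and the sendUniform4x4 call at the bottom mirrors the wrapper shown earlier:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

struct SceneObject {
    glm::mat4 modelmatrix = glm::mat4(1.0f);   // starts as the identity

    void rotate(float degrees, const glm::vec3& axis) {
        modelmatrix = glm::rotate(modelmatrix, glm::radians(degrees), axis);
    }
    void translate(const glm::vec3& offset) {
        modelmatrix = glm::translate(modelmatrix, offset);
    }
    const float* data() const { return glm::value_ptr(modelmatrix); }
};

// Usage, mirroring the pseudocode above:
//   obj1.rotate(45.0f, glm::vec3(1, 1, 1));
//   obj1.translate(glm::vec3(x, y, z));
//   gShader->sendUniform4x4("model_matrix", obj1.data());
//   ... draw obj1, then continue with obj2 ...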
VIEW MATRIX
The view matrix is almost the same as a model matrix. Use it to control the camera, the global "model matrix": it transforms your scene globally, while you keep individual model matrices for your objects.
In my camera class I calculate it with glm::lookAt (the same as in OpenGL), then send it via a uniform to all the shaders I use.
Then when I render something I can manipulate its model matrix, rotating or scaling it, but the view matrix is global.
If you want a static object, you don't have to use a model matrix for it; you can calculate the position with only:
gl_Position = projmatrix*viewmatrix*staticobjectvertex;
GLOBAL MODEL MATRIX
You can have a global model matrix too.
Use it like
renderer.globmodel.rotate(axis,degree);
renderer.globmodel.scale(x,y,z);
Send it as uniform too, and apply it after the objects' model matrix.
(I've used it to render ocean reflections to texture.)
To sum up:
create a global view(camera) matrix
create a model matrix for each of your scenes, meshes or objects
transform the objects' matrices individually
send the projection, model and view matrices via uniforms to the shader
calculate the final position: proj * view * model * vertex
move your objects, and move your camera
I'm not saying there isn't a better way to do this, but it works well for me.
PS: if you'd like some camera class tuts I have a pretty good one;).
I'm trying to implement shadow mapping in my game. The shaders you see below result in correctly drawn shadows for the game's map, but all the models walking around on the map are completely black.
Here's a screenshot:
I suspect the problem lies with the calculation of the world position. The gl_Vertex is not transformed in any way. Because the map is generated with absolute world coordinates, the transformation with the light matrix results in a correct relative position that can be used to perform the shadow mapping.
However, my 3D models are all very close to the origin. So if their coordinates are plainly transformed using the light matrix, they will most likely never be lit.
I'm wondering if this could be the case, and if so, how I could fix it.
Here's my vertex shader:
#version 120
uniform mat4x4 LightMatrixValue;
varying vec4 shadowMapPosition;
varying vec3 worldPos;
void main(void)
{
    vec4 modelPos = gl_Vertex;
    worldPos = modelPos.xyz / modelPos.w;
    vec4 lightPos = LightMatrixValue * modelPos;
    shadowMapPosition = 0.5 * (lightPos.xyzw + lightPos.wwww);
    gl_Position = ftransform();
}
And the fragment shader:
varying vec4 shadowMapPosition;
varying vec3 worldPos;
uniform sampler2D depthMap;
uniform vec4 LightPosition;
void main(void)
{
    vec4 textureColour = gl_Color;
    vec3 realShadowMapPosition = shadowMapPosition.xyz / shadowMapPosition.w;
    float depthSm = texture2D(depthMap, realShadowMapPosition.xy).r;
    if (depthSm < realShadowMapPosition.z - 0.0001)
    {
        textureColour = vec4(0, 0, 0, 1);
    }
    gl_FragColor = textureColour;
}
I write this here because it will not fit above; I hope it will solve your problem.
For rendering, on the one hand you have your models with their model matrices to position them; on the other hand you have your view and projection matrices that transform your models into screen space.
For creating your shadow map (the simplest approach, and I guess the one you have chosen), you render the scene from the view of the light source, so you apply the view and projection matrices of your light source, which map the x, y and z values to screen space. The x and y values give the position in the image; the z value is used for the depth-buffer test and is the colour you write to your colour buffer (which you later use as the shadow map).
For the final rendering you load this shadow map and the light's view and projection matrices into your shader. For the display in the scene you apply the camera's view and projection matrices to the vertices; for the shadow-map lookup you apply the light's view and projection matrices to the vertex (as you did when rendering the shadow map). When you apply the light's view and projection matrices to the vertex you get the same mapping as in the shadow-map pass; now you only need to transform your x and y coordinates into texture coordinates, look up the stored z value, and compare it with the calculated one.
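As a minimal CPU-side sketch of that light-matrix setup with GLM (the light position, target, clip planes and uniform name are all placeholders; recent GLM versions take radians):
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Builds the light's view-projection matrix and uploads it to the given program.
void sendLightMatrix(GLuint program, const glm::vec3& lightPos, const glm::vec3& lightTarget)
{
    glm::mat4 lightView = glm::lookAt(lightPos, lightTarget, glm::vec3(0, 1, 0));
    glm::mat4 lightProj = glm::perspective(glm::radians(90.0f), 1.0f, 1.0f, 100.0f);
    glm::mat4 matLightViewProjection = lightProj * lightView;

    // Pass 1: render the scene with this matrix from the light to fill the depth map.
    // Pass 2: render normally and send the same matrix, so the shader can compute
    //         shadowMapPosition = matLightViewProjection * modelWorldPos;
    glUniformMatrix4fv(glGetUniformLocation(program, "matLightViewProjection"),
                       1, GL_FALSE, glm::value_ptr(matLightViewProjection));
}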
Transforming the model's world position into screen space (from the light's point of view)
This part is often done in the vertex or geometry shader:
shadowMapPosition = matLightViewProjection * modelWorldPos;
This is often done in the fragment shader:
shadowMapPosition = shadowMapPosition / shadowMapPosition.w ;
// Add an offset to prevent self-shadowing and moiré pattern
shadowMapPosition.z += 0.0005;
//lookup the stored z value
float distanceFromLight = texture2D(depthMap, shadowMapPosition.xy).z;
Now just compare the distanceFromLight with the shadowMapPosition.z to see if the object is in shadow or not.
So in your second pass you do the shadow-mapping steps again, except that you don't draw the data you calculate; you compare it to the data you calculated in the previous pass.
I am switching my shaders away from relying on using the OpenGL fixed function pipeline. Previously, OpenGL automatically transformed the lights for me, but now I need to do it myself.
I have the following GLSL code in my vertex shader:
//compute all of the light vectors. we want normalized vectors
//from the vertex towards the light
for(int i=0;i < NUM_LIGHTS; i++)
{
//if w is 0, it's a directional light, it is a vector
//pointing from the world towards the light source
if(u_lightPos[i].w == 0)
{
vec3 transformedLight = normalize(vec3(??? * u_lightPos[i]));
v_lightDir[i] = transformedLight;
}
else
{
//this is a positional light, transform it by the view*projection matrix
vec4 transformedLight = u_vp_matrix * u_lightPos[i];
//subtract the vertex from the light to get a vector from the vertex to the light
v_lightDir[i] = normalize(vec3(transformedLight - gl_Position));
}
}
What do I replace the ??? with in the line vec3 transformedLight = normalize(vec3(??? * u_lightPos[i]));?
I already have the following matrices as inputs:
uniform mat4 u_mvp_matrix;//model matrix*view matrix*projection matrix
uniform mat4 u_vp_matrix;//view matrix*projection matrix
uniform mat3 u_normal_matrix;//normal matrix for this model
I don't think it's any of these. It's obviously not the mvp matrix or the normal matrix, because those are specific to the current model, and I'm trying to convert to view coordinates. I don't think it's the vp matrix either though, because that includes translations and I don't want to translate a vector.
Can I compute the matrix that I need from what is currently given to the shader, and if so, what do I need to compute? If not, what matrix do I need to add and how should it be computed?
You can use the same view_projection matrix. Yes, that matrix does include a translation, but because the light vector has w = 0 the matrix multiplication will not apply it.
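A tiny GLM check of that point, showing that a matrix's translation column simply drops out for a w = 0 vector:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void translationIgnoresDirections()
{
    glm::mat4 view = glm::translate(glm::mat4(1.0f), glm::vec3(3.0f, 0.0f, 0.0f));

    glm::vec4 direction(0.0f, 0.0f, -1.0f, 0.0f); // w = 0: a direction
    glm::vec4 point(0.0f, 0.0f, -1.0f, 1.0f);     // w = 1: a point

    glm::vec4 d = view * direction; // (0, 0, -1, 0) -- translation ignored
    glm::vec4 p = view * point;     // (3, 0, -1, 1) -- translation applied
}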
You are supposed to transform the light position/direction into view space on the CPU, not in a shader. It's a uniform; it's done once per frame per light. It doesn't change, so there's no need to have every vertex compute it. It's a waste of time.
I want to adjust the colors depending on their xyz position in the world.
I tried this in my fragment shader:
varying vec4 verpos;
void main(){
vec4 c;
c.x = verpos.x;
c.y = verpos.y;
c.z = verpos.z;
c.w = 1.0;
gl_FragColor = c;
}
But it seems that the colors change depending on my camera angle/position. How do I make the coordinates independent of my camera position/angle?
Here's my vertex shader:
varying vec4 verpos;
void main(){
gl_Position = ftransform();
verpos = gl_ModelViewMatrix*gl_Vertex;
}
Edit 2: changed the title; I want world coords, not screen coords!
Edit 3: added my full code.
In vertex shader you have gl_Vertex (or something else if you don't use fixed pipeline) which is the position of a vertex in model coordinates. Multiply the model matrix by gl_Vertex and you'll get the vertex position in world coordinates. Assign this to a varying variable, and then read its value in fragment shader and you'll get the position of the fragment in world coordinates.
Now the problem in this is that you don't necessarily have any model matrix if you use the default modelview matrix of OpenGL, which is a combination of both model and view matrices. I usually solve this problem by having two separate matrices instead of just one modelview matrix:
model matrix (maps model coordinates to world coordinates), and
view matrix (maps world coordinates to camera coordinates).
So just pass two different matrices to your vertex shader separately. You can do this by defining
uniform mat4 view_matrix;
uniform mat4 model_matrix;
at the beginning of your vertex shader. Then, instead of ftransform(), say:
gl_Position = gl_ProjectionMatrix * view_matrix * model_matrix * gl_Vertex;
In the main program you must write values to both of these new matrices. First, to get the view matrix, do the camera transformations with glLoadIdentity(), glTranslate(), glRotate() or gluLookAt() or whatever you prefer, as you would normally do, but then call glGetFloatv(GL_MODELVIEW_MATRIX, &array); in order to get the matrix data into an array. And secondly, in a similar way, to get the model matrix, also call glLoadIdentity(); and do the object transformations with glTranslate(), glRotate(), glScale() etc. and finally call glGetFloatv(GL_MODELVIEW_MATRIX, &array); to get the matrix data out of OpenGL, so you can send it to your vertex shader. Especially note that you need to call glLoadIdentity() before beginning to transform the object. Normally you would first transform the camera and then transform the object, which would result in one matrix that does both the view and model functions. But because you're using separate matrices you need to reset the matrix after the camera transformations with glLoadIdentity().
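A rough sketch of that read-back, with placeholder camera/object transforms and a placeholder program handle:
#include <GL/glew.h>
#include <GL/glu.h>

void uploadViewAndModelMatrices(GLuint program)
{
    GLfloat viewMat[16], modelMat[16];

    // View matrix: do only the camera transform, then read it back.
    glLoadIdentity();
    gluLookAt(2.0, 4.0, 2.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);
    glGetFloatv(GL_MODELVIEW_MATRIX, viewMat);

    // Model matrix: reset first, then do only the object transform.
    glLoadIdentity();
    glTranslatef(1.0f, 0.0f, 0.0f);
    glRotatef(45.0f, 0.0f, 1.0f, 0.0f);
    glGetFloatv(GL_MODELVIEW_MATRIX, modelMat);

    // Hand both to the shader (uniform names match the snippet above).
    glUniformMatrix4fv(glGetUniformLocation(program, "view_matrix"), 1, GL_FALSE, viewMat);
    glUniformMatrix4fv(glGetUniformLocation(program, "model_matrix"), 1, GL_FALSE, modelMat);
}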
gl_FragCoord contains the pixel coordinates, not world coordinates.
Or you could just divide the z coordinate by the w coordinate, which essentially undoes the perspective projection, giving you your original world coordinates.
i.e.
depth = gl_FragCoord.z / gl_FragCoord.w;
Of course, this will only work for non-clipped coordinates..
But who cares about clipped ones anyway?
You need to pass the World/Model matrix as a uniform to the vertex shader, then multiply it by the vertex position and send the result as a varying to the fragment shader:
/*Vertex Shader*/
layout (location = 0) in vec3 Position;
uniform mat4 World;
uniform mat4 WVP;
//World FragPos
out vec4 FragPos;
void main()
{
    FragPos = World * vec4(Position, 1.0);
    gl_Position = WVP * vec4(Position, 1.0);
}
/*Fragment Shader*/
layout (location = 0) out vec4 Color;
...
in vec4 FragPos;
void main()
{
    Color = FragPos;
}
The easiest way is to pass the world-position down from the vertex shader via a varying variable.
However, if you really must reconstruct it from gl_FragCoord, the only way to do this is to invert all the steps that led to the gl_FragCoord coordinates. Consult the OpenGL specs, if you really have to do this, because a deep understanding of the transformations will be necessary.
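If you do go that route, note that GLM already wraps the inverse chain (viewport → NDC → clip → eye) in glm::unProject; a rough sketch, assuming you have the view and projection matrices and the viewport size available on the CPU side:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

// win is (x, y, depth) in window coordinates; view/projection are whatever you rendered with.
glm::vec3 windowToWorld(const glm::vec3& win,
                        const glm::mat4& view,
                        const glm::mat4& projection,
                        float width, float height)
{
    // Passing only the view matrix (no model matrix) makes unProject stop at
    // world space instead of going all the way back to object space.
    return glm::unProject(win, view, projection, glm::vec4(0, 0, width, height));
}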