OpenGL screen coordinates

I can successfully manipulate 3D objects on the screen in OpenGL.
To add a 2d effect, I thought I could simply turn off the matrix multiplication in the vertex shader (or give the identity matrix) and then the "Vertices" I provide would be screen coordinates.
But two simple triangles (a square from 0,0 to 100,100) refuse to display. I tried various depths, and this same code works fine if I give it a rotation matrix.
Any ideas?
static const char gVertexShader[] =
"attribute vec3 coord3d;\n"
"uniform mat4 mvp;\n"
"void main() {\n"
"gl_Position = mvp*vec4(coord3d,1.0);\n"
"}\n";
->
static const char gVertexShader[] =
"attribute vec3 coord3d;\n"
"uniform mat4 mvp;\n"
"void main() {\n"
"gl_Position = vec4(coord3d,1.0);\n"
"}\n";
EDIT: I was unable to get anything to show using the identity matrix as a transformation, but I was able to do so using this one:
glm::mat4 view = glm::lookAt(glm::vec3(0.0, 0.0, -5), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0.0, 1.0, 0.0));
glm::mat4 pers = glm::perspective(.78f, 1.0f*screenWidth/screenHeight, 0.1f, 10.0f);
xform = pers * view * glm::mat4(1.0f);
You'd have to adjust the -5 to fully fill the screen...

The gl_Position output of the vertex shader expects clip-space coordinates, not window space. The clip-space coords are first transformed to normalized device coordinates (the perspective divide) and finally converted to window-space coords by the viewport transform.
If you want to work directly with window-space coords for your vertices, you can simply use the inverse of the viewport transform as the projection matrix (clip space is identical to normalized device space when you work with orthogonal projections, so you don't need to care about the divide).
In NDC, (-1, -1) is the bottom-left corner and (1, 1) the top-right one, so it is quite easy to see that all you need is a scale and a translation. You don't even need a full-blown matrix for that; these transforms nicely end up as multiply-add operations GPUs can handle very efficiently.
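That inverse viewport transform can be sketched in plain C++ (a hypothetical helper, assuming the viewport is set with glViewport(0, 0, width, height) and covers the whole window):

```cpp
#include <cassert>
#include <cmath>

// Window-space pixels -> normalized device coordinates.
// Inverse of the viewport transform: one multiply-add per component.
struct Vec2 { float x, y; };

Vec2 windowToNdc(Vec2 p, float width, float height) {
    return { p.x * (2.0f / width)  - 1.0f,    // [0, width]  -> [-1, 1]
             p.y * (2.0f / height) - 1.0f };  // [0, height] -> [-1, 1]
}
```

Feeding such coordinates into gl_Position with z = 0.0 and w = 1.0 lets you address vertices directly in pixels (note that window space has its origin in the bottom-left corner, not the top-left).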

Related

Model View Projection - How to move objects in a world

I have a 3D world with a cube inside it. I understand that the cube's vertices are defined as coordinates about the cube's "centre of mass".
I want to place the cube away from me, to the right and up above, so I want to place it at (1,1,1). I don't have any 'camera' code at the moment, so assume my viewing position is (0,0,0).
What do I need to translate to move the cube to a new location?
I am confused by the View and Model matrices. I thought View was the camera view/position and Model was the position of the vertices in the 3D world.
My code below works as I expect for x and y positions but z is negative going into the screen (further away) and positive moving towards/out of the screen. I would like z to be positive the further away an object is from the viewpoint.
What am I doing wrong?
glm::mat4 Projection = glm::perspective( 45.0f, g_AspectRatio, 0.1f, 100.0f );
glm::mat4 View = glm::mat4(1.0);
glm::mat4 Model = glm::mat4(1.0);
// Move the object within the 3D space
Model = glm::translate( Model, glm::vec3( m_X, m_Y, m_Z ) );
glm::mat4 MVP;
MVP = Projection * View * Model;
MVP is passed into the transform uniform.
My shader:
char *vs3DShader =
"#version 140\n"
"#extension GL_ARB_explicit_attrib_location : enable\n"
"layout (location = 0) in vec3 Position;"
"layout (location = 1) in vec4 color;"
"layout (location = 2) in vec3 normal;"
"out vec4 frag_color;"
"uniform mat4 transform;"
"void main()"
"{"
" gl_Position = transform * vec4( Position.x, Position.y, Position.z, 1.0 );"
" // Pass vertex color to fragment shader.. \n"
" frag_color = color;"
"}"
;
My code below works as I expect for x and y positions but z is negative going into the screen (further away) and positive moving towards/out of the screen. I would like z to be positive the further away an object is from the viewpoint.
You can achieve this by setting up a view matrix, which mirrors the Z-Axis:
glm::mat4 View = glm::scale(glm::mat4(1.0), glm::vec3(1.0f, 1.0f, -1.0f));
The view matrix transforms from world space to view space. Your view matrix is the identity matrix:
glm::mat4 View = glm::mat4(1.0);
This means the world space is equal to the view space. Note that in general a view matrix is set by glm::lookAt.
The Z-axis is the cross product of the X-axis and the Y-axis. This means that if the X-axis points to the right and the Y-axis points up, then in a right-handed system the Z-axis points out of the viewport.
See Understanding OpenGL’s Matrices
The coordinates I described above are, in general, the view coordinates of a right-handed system, because glm::perspective sets up a right-handed projection matrix, which transforms these coordinates to clip space in such a way that the z-axis is inverted and becomes the depth.
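A tiny sanity check of that mirroring (made-up values, plain C++ standing in for the glm::scale call):

```cpp
#include <cassert>

// Applying the Z-mirroring view matrix, i.e.
// glm::scale(glm::mat4(1.0), glm::vec3(1.0f, 1.0f, -1.0f)),
// to a world-space point: identity on X and Y, sign flip on Z.
struct Vec3 { float x, y, z; };

Vec3 mirrorZView(Vec3 world) {
    return { world.x, world.y, -world.z };
}
```

A point at world-space z = +5 ends up at view-space z = -5, which a right-handed projection treats as "in front of the camera", so larger positive world z now means farther away, as desired.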

Phong Illumination in OpenGL website

I was reading through the following Phong illumination shader, which is hosted on opengl.org:
Phong Illumination in Opengl.org
The vertex and fragment shaders were as follows:
vertex shader:
varying vec3 N;
varying vec3 v;
void main(void)
{
v = vec3(gl_ModelViewMatrix * gl_Vertex);
N = normalize(gl_NormalMatrix * gl_Normal);
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
fragment shader:
varying vec3 N;
varying vec3 v;
void main (void)
{
vec3 L = normalize(gl_LightSource[0].position.xyz - v);
vec3 E = normalize(-v); // we are in Eye Coordinates, so EyePos is (0,0,0)
vec3 R = normalize(-reflect(L,N));
//calculate Ambient Term:
vec4 Iamb = gl_FrontLightProduct[0].ambient;
//calculate Diffuse Term:
vec4 Idiff = gl_FrontLightProduct[0].diffuse * max(dot(N,L), 0.0);
Idiff = clamp(Idiff, 0.0, 1.0);
// calculate Specular Term:
vec4 Ispec = gl_FrontLightProduct[0].specular
* pow(max(dot(R,E),0.0),0.3*gl_FrontMaterial.shininess);
Ispec = clamp(Ispec, 0.0, 1.0);
// write Total Color:
gl_FragColor = gl_FrontLightModelProduct.sceneColor + Iamb + Idiff + Ispec;
}
I was wondering about the way he calculates the viewer vector, v. By multiplying the vertex position with gl_ModelViewMatrix, the result will be in view space (and view coordinates are usually rotated compared to world coordinates).
So, we cannot simply subtract the light position from v to calculate the L vector, because they are not in the same coordinates system. Also, the result of dot product between L and N won't be correct because their coordinates are not the same. Am I right about it?
So, we cannot simply subtract the light position from v to calculate
the L vector, because they are not in the same coordinates system.
Also, the result of dot product between L and N won't be correct
because their coordinates are not the same. Am I right about it?
No.
The gl_LightSource[0].position.xyz is not the value you set GL_POSITION to. The GL will automatically multiply the position by the current GL_MODELVIEW matrix at the time of the glLight() call. Lighting calculations are done completely in eye space in fixed-function GL. So both V and N have to be transformed to eye space, and gl_LightSource[].position will already be transformed to eye-space, so the code is correct and is actually not mixing different coordinate spaces.
The code you are using relies on deprecated functionality, using lots of the old fixed-function features of the GL, including that particular issue. In modern GL, those builtin uniforms and attributes do not exist, and you have to define your own - and you can interpret them as you like.
You could of course ignore that convention and still use a different coordinate space for the lighting calculation with the builtins, interpreting gl_LightSource[].position differently by simply choosing some other matrix when setting a position. (Typically, the light's world-space position is set while the GL_MODELVIEW matrix contains only the view transformation, so that the eye-space light position of a world-stationary light source emerges, but you can do whatever you like.) However, the code as presented is meant to work as a "drop-in" replacement for the fixed-function pipeline, so it interprets those builtin uniforms and attributes the same way the fixed-function pipeline did.
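In modern GL you would reproduce that step yourself: transform the light's world-space position by the view matrix on the CPU (mirroring what the fixed-function pipeline did to GL_POSITION at glLight() time) and upload the result as a uniform. A minimal sketch with a hand-rolled column-major matrix and a translation-only view matrix (all names and values here are hypothetical):

```cpp
#include <cassert>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[16]; };  // column-major, as in OpenGL and GLM

// 4x4 matrix times column vector.
Vec4 mul(const Mat4& a, const Vec4& v) {
    return { a.m[0]*v.x + a.m[4]*v.y + a.m[8]*v.z  + a.m[12]*v.w,
             a.m[1]*v.x + a.m[5]*v.y + a.m[9]*v.z  + a.m[13]*v.w,
             a.m[2]*v.x + a.m[6]*v.y + a.m[10]*v.z + a.m[14]*v.w,
             a.m[3]*v.x + a.m[7]*v.y + a.m[11]*v.z + a.m[15]*v.w };
}

// View matrix for a camera at (0, 0, 5): it moves the world by (0, 0, -5).
Mat4 viewMatrix() {
    return {{ 1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,-5,1 }};
}

// Eye-space light position, to be uploaded as a uniform once per frame.
Vec4 lightPosEye(Vec4 lightPosWorld) {
    return mul(viewMatrix(), lightPosWorld);
}
```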

Opengl Vertices Camera orientation

I am working on a 3D project using vertices. I started it with a simple gluLookAt in order to have a first-person camera moving in an environment, which I use this way:
gluLookAt(_position.x,_position.y,_position.z,
_target.x,_target.y,_target.z,
0,0,1);
Everything was working fine; I was calculating my target according to the position of the mouse and two angles (theta and phi). My project then moved on to using vertices for performance reasons, so I had to use the same camera for these new objects. In order to do this I used the GLM library this way:
glm::mat4 Projection = glm::perspective(90.0f, 800.0f / 600.0f, 0.1f, 100.f);
glm::mat4 View = glm::lookAt(
glm::vec3(position.x,position.y,position.z),
glm::vec3(target.x,target.y,target.z),
glm::vec3(0,0,1)
);
glm::mat4 Model = glm::mat4(1.0f);
// Our ModelViewProjection : multiplication of our 3 matrices
glm::mat4 MVP = Projection * View * Model;
GLuint MatrixID = glGetUniformLocation(this->shaderProgram, "MVP");
Here is the shader I use:
const GLchar* default_vertexSource =
"#version 150 core\n"
"in vec2 position;"
"in vec3 color;"
"out vec3 Color;"
"uniform vec3 translation;"
"uniform mat4 rotation;"
"uniform mat4 MVP;"
"void main() {"
" Color = color;"
" gl_Position = MVP*rotation*vec4(position.x + translation.x, position.y + translation.y, 0.0 + translation.z, 1.0);"
"}";
What happens is that my object's coordinate reference is not the same as my camera's: it is drawn above it on the current x/z plane, whereas it should be facing the camera on the x/y plane.
From my point of view it seems that you are misunderstanding how translation in OpenGL works.
glm::lookAt returns one modelview matrix: how the objects in the scene are translated relative to the camera. In OpenGL you do not "move" the camera; you move all the objects around the camera.
So if you have two objects at 0 0 0 (the origin of your world coordinate system) and your camera is at
eye(0,0,-3), center(0,0,0), up(0,1,0), and you want to move one object to the left and one object to the right, you need a matrix stack from glm:
std::stack<glm::mat4> glm_ProjectionMatrix;
Here you can push the modelview from your camera as the top element.
For the object's movement to the left side, you can just use glm::translate and upload this mat4 as the modelview matrix to your shader (do not render the scene yet).
Then reset the top matrix to the camera's modelview (either pop(), if you made a copy of the top element before, or just rebuild it using glm::lookAt), and glm::translate in the other direction.
In your vertex shader you then need no vec3 translation or mat4 rotation uniform.
You just say
gl_Position = perspective * modelview * vec4(position, 1);
This will place the two objects according to their given translations.
If you want to move some object around, just update the modelview matrix for that one object.
http://www.songho.ca/opengl/gl_transform.html
I hope you can understand my answer.
The basics of movement (rotation and translation) in OpenGL are matrix multiplications. You do not need to add a translation vec to your modelview in the shader; GLM can do all of this in your C++ code.
Math:
Modelview is a 4*4 matrix
If your object should not be rotated or translated, the modelview equals the identity.
If you want to rotate the object, you multiply the modelview matrix by the correct 4*4 rotation matrix (see homogeneous coordinates).
If you want to translate the vertex, you multiply the modelview by the correct 4*4 translation matrix.
Let's say v = (x, y, z, w) is your vertex,
T is a translation and R is a rotation;
a valid operation may look like this:
Modelview * rotation * translation * v = v'
v' is the new position of the point in your 3D coordinate system
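A concrete (made-up) instance of that order of operations, in 2D for brevity: in a product like Modelview * rotation * translation * v, the factor written right-most is applied to the vertex first.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// T: move a point by (tx, ty).
Vec2 translate(Vec2 v, float tx, float ty) {
    return { v.x + tx, v.y + ty };
}

// R: rotate a point about the origin by the given angle (counter-clockwise).
Vec2 rotate(Vec2 v, float radians) {
    float c = std::cos(radians), s = std::sin(radians);
    return { c * v.x - s * v.y,
             s * v.x + c * v.y };
}
```

With v = (1, 0): translating by (1, 0) first gives (2, 0), and rotating that by 90 degrees gives (0, 2), i.e. rotate(translate(v, ...), ...) matches R * T * v.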
You can find some example code here: http://incentivelabs.de/Sourcecode/ (see "Teil 13" or later). If you understand German, you can find my tutorials on OpenGL with C++ on YouTube under the channel name "incentivelabs".
Edit:
If I want to move or rotate an object using C++ with GLM:
glm_ProjectionMatrix.top() = camera.getProjectionMat();
glUniformMatrix4fv(uniformLocations["projection"], 1, false,glm::value_ptr(glm_ProjectionMatrix.top()));
glm_ModelViewMatrix.top() = camera.getModelViewMat();
glm_ModelViewMatrix.top() = glm::translate(
glm_ModelViewMatrix.top(),
glm::vec3(0.0f, 0.0f, -Translate));
glUniformMatrix4fv(uniformLocations["modelview"], 1, false,glm::value_ptr(glm_ModelViewMatrix.top()));
In the GLSL vertex Shader:
gl_Position = projection * modelview * vec4(vertexPos,1);
vertexPos is an attribute (vec3 for the position)
This code moves all vertices drawn after the upload of the modelview and projection matrices to the shader by the same translation.

Rotate matrix of single model on its own axis

I currently have 5 models displayed on a screen. The following is my vertex shader for translating the models individually, so that I can get them to move in different directions:
#version 330
layout (location = 0) Position;
uniform mat4 MVP;
uniform vec3 Translation;
uniform mat4 Rotate;
void main()
{
gl_Position = MVP * * Rotate * vec4(Position + Translation, 1.0); // Correct?
}
And to position/move my models individually within the render loop:
//MODEL ONE
glUniform3f(loc, 0.0f, 4.0f, 0.0f); // loc is "Translate"
glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(rotationMatrix)); // loc is "Rotate"
_model1.render();
Also, I do have a glm::mat4 rotateMatrix() that returns a rotation matrix, but when I multiply it with the other matrices within the render loop, the whole scene (minus the camera) rotates to the set angle.
UPDATE
How would I be able to apply my rotation to the models independently of the world, on their own axes? The problem now is that the model rotates, but around 0,0,0 of the world and not its own position.
There are a couple of syntax errors in your vertex shader:
No type for the Position variable. Looks from the context like it should be a vec3.
Two * signs after MVP.
I assume those were just accidents while copying the code, and you actually have a vertex shader that compiles.
To apply the rotation described by the Rotate matrix before the translation from the Translation vector, you should be able to simply change the order in the vertex shader:
vec4 rotatedVec = Rotate * vec4(Position, 1.0);
gl_Position = MVP * vec4(rotatedVec.xyz + Translation, 1.0);
The whole thing would look simpler if you defined Rotate as a 3x3 matrix, which is sufficient for a rotation.
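Regarding the UPDATE: to rotate a model about its own position p rather than the world origin, conjugate the rotation with translations, i.e. T(p) * R * T(-p). A minimal 2D sketch of that idea (hypothetical helper, plain C++ rather than GLM):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// T(pivot) * R(angle) * T(-pivot) applied to v:
// move the pivot to the origin, rotate, move back.
Vec2 rotateAboutPivot(Vec2 v, Vec2 pivot, float radians) {
    float c = std::cos(radians), s = std::sin(radians);
    float dx = v.x - pivot.x, dy = v.y - pivot.y;  // T(-pivot)
    return { pivot.x + c * dx - s * dy,            // R, then T(+pivot)
             pivot.y + s * dx + c * dy };
}
```

The pivot itself stays fixed under the rotation, which is exactly the "rotate in place" behavior asked for; in GLM the 3D analogue is glm::translate(p) * rotationMatrix * glm::translate(-p).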

GLSL: How to get pixel x,y,z world position?

I want to adjust the colors depending on which xyz position they are in the world.
I tried this in my fragment shader:
varying vec4 verpos;
void main(){
vec4 c;
c.x = verpos.x;
c.y = verpos.y;
c.z = verpos.z;
c.w = 1.0;
gl_FragColor = c;
}
but it seems that the colors change depending on my camera angle/position. How do I make the coords independent of my camera position/angle?
Here's my vertex shader:
varying vec4 verpos;
void main(){
gl_Position = ftransform();
verpos = gl_ModelViewMatrix*gl_Vertex;
}
Edit2: changed title, so i want world coords, not screen coords!
Edit3: added my full code
In vertex shader you have gl_Vertex (or something else if you don't use fixed pipeline) which is the position of a vertex in model coordinates. Multiply the model matrix by gl_Vertex and you'll get the vertex position in world coordinates. Assign this to a varying variable, and then read its value in fragment shader and you'll get the position of the fragment in world coordinates.
Now the problem in this is that you don't necessarily have any model matrix if you use the default modelview matrix of OpenGL, which is a combination of both model and view matrices. I usually solve this problem by having two separate matrices instead of just one modelview matrix:
model matrix (maps model coordinates to world coordinates), and
view matrix (maps world coordinates to camera coordinates).
So just pass two different matrices to your vertex shader separately. You can do this by defining
uniform mat4 view_matrix;
uniform mat4 model_matrix;
In the beginning of your vertex shader. And then instead of ftransform(), say:
gl_Position = gl_ProjectionMatrix * view_matrix * model_matrix * gl_Vertex;
In the main program you must fill in both of these new matrices.
First, to get the view matrix: do the camera transformations with glLoadIdentity(), glTranslate(), glRotate() or gluLookAt(), whichever you prefer, as you normally would, then call glGetFloatv(GL_MODELVIEW_MATRIX, &array); to read the matrix data into an array.
Secondly, in a similar way, to get the model matrix: call glLoadIdentity(); again, do the object transformations with glTranslate(), glRotate(), glScale() etc., and finally call glGetFloatv(GL_MODELVIEW_MATRIX, &array); to get the matrix data out of OpenGL, so you can send it to your vertex shader.
Note especially that you need to call glLoadIdentity() before beginning to transform the object. Normally you would first transform the camera and then the object, which results in one matrix that serves both the view and model functions; because you're using separate matrices, you need to reset the matrix after the camera transformations with glLoadIdentity().
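To see why the split matters for this question: the world-space position produced by the model matrix does not depend on the camera at all. A toy sketch with translation-only transforms standing in for the full matrices (all values made up):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Model matrix (translation only): model coords -> world coords.
Vec3 applyModel(Vec3 v, Vec3 objectPos) {
    return { v.x + objectPos.x, v.y + objectPos.y, v.z + objectPos.z };
}

// View matrix (translation only): world coords -> eye coords.
Vec3 applyView(Vec3 world, Vec3 cameraPos) {
    return { world.x - cameraPos.x, world.y - cameraPos.y, world.z - cameraPos.z };
}
```

The varying handed to the fragment shader should be the result of the model transform alone; only gl_Position continues on through the view and projection transforms.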
gl_FragCoord holds the pixel coordinates, not world coordinates.
Or you could just divide the z coordinate by the w coordinate, which essentially undoes the perspective projection; this gives you an eye-space depth value rather than a full world position:
i.e.
depth = gl_FragCoord.z / gl_FragCoord.w;
Of course, this will only work for non-clipped coordinates.
But who cares about clipped ones anyway?
You need to pass the World/Model matrix as a uniform to the vertex shader, and then multiply it by the vertex position and send it as a varying to the fragment shader:
/*Vertex Shader*/
layout (location = 0) in vec3 Position;
uniform mat4 World;
uniform mat4 WVP;
//World FragPos
out vec4 FragPos;
void main()
{
FragPos = World * vec4(Position, 1.0);
gl_Position = WVP * vec4(Position, 1.0);
}
/*Fragment Shader*/
layout (location = 0) out vec4 Color;
...
in vec4 FragPos;
void main()
{
Color = FragPos;
}
The easiest way is to pass the world-position down from the vertex shader via a varying variable.
However, if you really must reconstruct it from gl_FragCoord, the only way to do this is to invert all the steps that led to the gl_FragCoord coordinates. Consult the OpenGL specs, if you really have to do this, because a deep understanding of the transformations will be necessary.