OpenGL Rotation with vertices not working

I am trying to rotate vertices with a shader. Here is the code of my vertex shader:
"#version 150 core\n"
"in vec2 position;"
"in vec3 color;"
"out vec3 Color;"
"uniform mat4 rotation;"
"void main() {"
" Color = color;"
" gl_Position = rotation*vec4(position, 0.0, 2.0);"
"}";
I am using it with a quaternion. Here is the code producing the matrix and uploading it to the shader:
glm::quat rotation(x,0.0,0.0,0.5);
x+=0.001;
ctm = glm::mat4_cast(rotation);
GLint matrix_loc;
// get the location of the uniform in the shader program
matrix_loc = glGetUniformLocation(shaderProgram, "rotation");
if (matrix_loc == -1)
std::cout << "pointer for rotation of shader not found" << matrix_loc << std::endl;
// upload the local matrix to the shader:
glUniformMatrix4fv(matrix_loc, 1, GL_FALSE, glm::value_ptr(ctm));
But when it rotates, the object gets bigger and bigger. I know I don't need to call glGetUniformLocation on every iteration of my loop, but this is just test code. As far as I know, glUniformMatrix4fv is supposed to make the rotation happen. After these calls I simply draw my vertex array.

Given that it's still drawing, rotation in the shader is probably a valid matrix. If it were an issue with the uniform, it would probably be all zeroes and nothing would draw.
As @genpfault says, ctm needs to be initialized:
ctm = glm::mat4_cast(rotation);
See: Converting glm quaternion to rotation matrix and using it with opengl
Also, shouldn't the 2.0 in vec4(position, 0.0, 2.0) be a 1.0?
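The growing size itself comes from the quaternion: glm::quat(x, 0.0, 0.0, 0.5) stops being a unit quaternion as soon as x changes, and mat4_cast of a non-unit quaternion picks up a scale factor. A minimal sketch of both fixes, assuming the same GLM setup as in the question:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>  // glm::quat, glm::normalize, glm::mat4_cast
#include <glm/gtc/type_ptr.hpp>    // glm::value_ptr
// Normalize so the resulting matrix is a pure rotation, with no scale:
glm::quat rotation = glm::normalize(glm::quat(x, 0.0f, 0.0f, 0.5f));
glm::mat4 ctm = glm::mat4_cast(rotation);
glUniformMatrix4fv(matrix_loc, 1, GL_FALSE, glm::value_ptr(ctm));
And in the shader, use w = 1.0 so the perspective divide leaves the position untouched:
gl_Position = rotation * vec4(position, 0.0, 1.0);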

Vertex shaders in and out?

I've got two shaders like this:
const char* vertexShaderData =
"#version 450 \n"
"in vec3 vp;"
"in vec3 color;\n"
"out vec3 Color;\n"
"void main(){"
"Color=color;"
"gl_Position = vec4(vp, 1.0);"
"}";
const char* fragShaderData =
"#version 410\n"
"uniform vec4 incolor;\n"
"in vec3 Color;"
"out vec4 outColor;"
"void main(){"
"outColor = vec4(Color, 1.0);"
"}";
I understand that each shader is called for each vertex.
Where do the in parameters in my vertexShaderData get their values? At no point in the code do I specify what vp is or what color is. In the second shader, I get that the in value comes from the first shader's out value. But where do those initial ins come from?
About the out value of fragShaderData: how is this value used? In other words, how does OpenGL know that this is an RGB color value, and that it should paint the triangle with this color?
For the vertex shader, you can use glGetAttribLocation in C++ to query the driver-assigned location, or set it manually in GLSL like this: layout (location = 0) in vec3 vp;. Then you upload the data in C++ like this:
// (Vertex buffer must be bound at this point)
glEnableVertexAttribArray( a ); // 'a' would be 0 if you did the latter
glVertexAttribPointer( a, 3, GL_FLOAT, GL_FALSE, sizeof( Vertex ), nullptr ); // stride: size of one of your vertex structs
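For context, here is a minimal sketch of the buffer setup that feeds those in variables; the interleaved Vertex struct and the names vbo/shaderProgram are illustrative, not from the question:
#include <cstddef> // offsetof
struct Vertex { float pos[3]; float color[3]; }; // one interleaved vertex
Vertex triangle[3] = {
    { { -0.5f, -0.5f, 0.0f }, { 1.0f, 0.0f, 0.0f } },
    { {  0.5f, -0.5f, 0.0f }, { 0.0f, 1.0f, 0.0f } },
    { {  0.0f,  0.5f, 0.0f }, { 0.0f, 0.0f, 1.0f } },
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);
// Wire each region of the buffer to a shader 'in' variable by location:
GLint vpLoc    = glGetAttribLocation(shaderProgram, "vp");
GLint colorLoc = glGetAttribLocation(shaderProgram, "color");
glEnableVertexAttribArray(vpLoc);
glVertexAttribPointer(vpLoc, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, pos));
glEnableVertexAttribArray(colorLoc);
glVertexAttribPointer(colorLoc, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, color));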
For the fragment shader,
'in' variables must match the vertex shader's 'out' variables, as in your sample code: out vec3 Color; -> in vec3 Color;
gl_Position determines where on screen the fragment ends up; outColor is what gets written there.
You feed the data to the vertex shader from your OpenGL calls on the CPU side. Once you have compiled and linked the program (vertex shader + fragment shader), you submit whatever vertices you want.
Unlike the vertex shader, the fragment shader runs once for EVERY pixel inside the triangle you are rendering. outColor is a vec4 (R,G,B,A) that goes to your framebuffer. As for the color: in theory, this is abstract to OpenGL. The channels are called RGBA for convenience; you can even access the same data as XYZW (it's an alias for RGBA). OpenGL simply writes numbers to the framebuffer you choose (according to the rules of color attachments, etc.). You happen to have four channels that the monitor uses to output RGB (with A used for transparency). In other words, you can write GL programs whose triangles output one or two channels, depending on your needs, and those channels can mean anything you want: for example, you can interpolate a YUV image, or just a UV plane (two channels). If you send those straight to the monitor the colors won't be right, since the monitor expects RGB, but the OpenGL concept is broader than RGB. It interpolates numbers for every pixel inside the triangle. That's it.
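For instance, here is a minimal sketch of a fragment shader that outputs a two-channel UV plane instead of RGBA; the two-channel render target it would be drawn into is an assumption, not something from the question:
#version 150
in vec2 interpolatedUV;
out vec2 outUV; // lands in a two-channel color attachment, e.g. GL_RG16F
void main() {
    outUV = interpolatedUV;
}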

Trying to understand OpenGL shaders for input vertices [duplicate]

This question already has an answer here:
Frequency of shader invocations in rendering commands
(1 answer)
Closed 7 years ago.
Here is my code to begin with; this is the vertex shader:
"#version 400\n"
"layout(location = 0) in vec2 vp;"
"layout(location = 1) in vec2 tex;"
"out vec2 texCoord;"
"void main () {"
" gl_Position = vec4 (vp, 0.0f, 1.0f);"
" texCoord = tex; "
"}";
Quite common and basic.
So basically, what I am trying to understand is: does the vertex shader run for every vertex attribute separately? Or does it only run for one attribute individually?
As far as I have understood, if I give it as input the vertices for a triangle:
//    x      y     U    V
{ -0.5, -0.5,  0.0, 0.0,
   0.5, -0.5,  1.0, 0.0,
   0.0,  0.0,  0.5, 1.0 };
Does that mean that the vertex shader will run for both of the vertex attributes and produce each individual fragment within the area of both "triangles" (one per attribute), resulting in xy coordinates for each attribute?
Or does the vertex shader only run for gl_Position, to produce the xy coordinates for the area of the first attribute, i.e. vp?
The entire shader program runs once per vertex, so in this case it runs three times. It doesn't work per-attribute: each invocation sees all of that vertex's attributes (here vp and tex) at once.
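To make that concrete, here is a sketch of how such interleaved x,y,U,V data would be wired up and drawn, assuming a VBO is already created and bound:
// Interleaved x,y,U,V per vertex; 3 vertices -> 3 vertex shader invocations.
float verts[] = {
    -0.5f, -0.5f,  0.0f, 0.0f,
     0.5f, -0.5f,  1.0f, 0.0f,
     0.0f,  0.0f,  0.5f, 1.0f,
};
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
GLsizei stride = 4 * sizeof(float);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, stride, (void*)0);                    // vp  (location = 0)
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void*)(2 * sizeof(float)));  // tex (location = 1)
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glDrawArrays(GL_TRIANGLES, 0, 3); // runs the vertex shader once per vertex: 3 times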

What does in vec and out vec mean?

In GLSL I don't understand what the "in" and "out" variables are; what do they mean?
Here is a sample of my code that I copied from a tutorial.
// Shader sources
const GLchar* vertexSource =
"#version 150 core\n"
"in vec2 position;"
"in vec3 color;"
"out vec3 Color;"
"void main() {"
" Color = color;"
" gl_Position = vec4(position, 0.0, 1.0);"
"}";
const GLchar* fragmentSource =
"#version 150 core\n"
"in vec3 Color;"
"out vec4 outColor;"
"void main() {"
" outColor = vec4(Color, 1.0);"
"}";
Variables declared in and out at "file" scope like that refer to stage input/output.
In a vertex shader, a variable declared in is a vertex attribute and is matched by an integer location to a vertex attribute pointer in OpenGL.
In a fragment shader, a variable declared in should match, by name, an output from the vertex shader (same name, but out).
In a fragment shader, a variable declared out is a color output and has a corresponding color attachment in the framebuffer you are drawing to.
In your vertex shader, you have two vertex attributes (position and color) used to compute the interpolated input in the fragment shader (Color). The fragment shader writes the interpolated color to the color buffer attachment identified by outColor.
It is impossible to tell from your shader code alone which vertex attributes position and color are associated with, and which color buffer outColor is associated with. Those must be set in GL code through calls like glBindAttribLocation (...) and glBindFragDataLocation (...) prior to linking.
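A minimal sketch of those calls, assuming program is a program object with both shaders attached and not yet linked:
// Fix the bindings before linking:
glBindAttribLocation(program, 0, "position");   // vertex attribute 0
glBindAttribLocation(program, 1, "color");      // vertex attribute 1
glBindFragDataLocation(program, 0, "outColor"); // color attachment 0
glLinkProgram(program);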

OpenGL screen coordinates

I can successfully manipulate 3d objects on the screen in openGL.
To add a 2d effect, I thought I could simply turn off the matrix multiplication in the vertex shader (or give the identity matrix) and then the "Vertices" I provide would be screen coordinates.
But two simple triangles refuse to display (a square: 0,0,100,100; I tried various depths), while this same code works fine if I give it a rotation matrix.
Any ideas?
static const char gVertexShader[] =
"attribute vec3 coord3d;\n"
"uniform mat4 mvp;\n"
"void main() {\n"
"gl_Position = mvp*vec4(coord3d,1.0);\n"
"}\n";
->
static const char gVertexShader[] =
"attribute vec3 coord3d;\n"
"uniform mat4 mvp;\n"
"void main() {\n"
"gl_Position = vec4(coord3d,1.0);\n"
"}\n";
EDIT: I was unable to get anything to show using the identity matrix as a transformation, but I was able to do so using this one:
glm::mat4 view = glm::lookAt(glm::vec3(0.0, 0.0, -5), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0.0, 1.0, 0.0));
glm::mat4 pers = glm::perspective(.78f, 1.0f*screenWidth/screenHeight, 0.1f, 10.0f);
xform = pers * view * glm::mat4(1.0f);
You'd have to adjust the -5 to fully fill the screen...
The gl_Position output of the vertex shader expects clip-space coordinates, not window space. The clip-space coords are first transformed to normalized device coordinates and finally converted to window-space coords using the viewport transform.
If you want to work directly with window-space coords for your vertices, you can simply use the inverse of the viewport transform as the projection matrix (clip space is identical to normalized device space when you work with orthographic projections, so you don't need to care about that).
In NDC, (-1, -1) is the bottom-left corner and (1, 1) the top-right one, so it is quite easy to see that all you need is a scale and a translation. You don't even need a full-blown matrix for that; these transforms end up as multiply-add operations GPUs handle very efficiently.
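One way to express that inverse viewport transform with GLM, as a sketch assuming pixel coordinates with the origin at the bottom left:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::ortho
// Maps pixel coords (0,0)..(w,h) to NDC (-1,-1)..(1,1): just a scale and a translation.
glm::mat4 pixelToNdc = glm::ortho(0.0f, 1.0f*screenWidth, 0.0f, 1.0f*screenHeight);
// Upload it as the existing 'mvp' uniform and pass your vertices in pixels.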

How to achieve flat shading with light calculated at centroids?

I'd like to write a GLSL shader program for a per-face shading. My first attempt uses the flat interpolation qualifier with provoking vertices. I use the flat interpolation for both normal and position vertex attributes which gives me the desired old-school effect of solid-painted surfaces.
Although the rendering looks correct, the shader program doesn't actually do the right job:
The light calculation is still performed on a per-fragment basis (in the fragment shader),
The position vector is taken from the provoking vertex, not the triangle's centroid (right?).
Is it possible to apply the illumination equation once, to the triangle's centroid, and then use the calculated color value for the whole primitive? How to do that?
Use a geometry shader whose input is a triangle and whose output is a triangle. Pass normals and positions to it from the vertex shader, calculate the centroid yourself (by averaging the positions), and do the lighting, passing the output color as an output variable to the fragment shader, which just reads it in and writes it out.
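A sketch of such a geometry shader; the variable names and the eye-space inputs vPositionEc/vNormalEc are illustrative, assuming the vertex shader passes them through:
#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in vec3 vPositionEc[]; // eye-space position from the vertex shader
in vec3 vNormalEc[];   // eye-space normal from the vertex shader
out vec3 gColor;       // flat result, read unchanged by the fragment shader
uniform vec3 lightPosEc;
uniform vec3 lightColor;
void main() {
    // Light once, at the centroid, instead of per fragment:
    vec3 centroidEc = (vPositionEc[0] + vPositionEc[1] + vPositionEc[2]) / 3.0;
    vec3 n = normalize(vNormalEc[0] + vNormalEc[1] + vNormalEc[2]);
    vec3 l = normalize(lightPosEc - centroidEc);
    vec3 color = lightColor * max(dot(n, l), 0.0);
    for (int i = 0; i < 3; ++i) {
        gColor = color;
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}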
Another simple approach is to compute the (screen-space) face normal in the fragment shader, using the derivatives of the screen-space position. It is very simple to implement and performs well.
I have written an example of it here (requires a WebGL-capable browser):
Vertex:
attribute vec3 vertex;
uniform mat4 _mvProj;
uniform mat4 _mv;
varying vec3 fragVertexEc;
void main(void) {
gl_Position = _mvProj * vec4(vertex, 1.0);
fragVertexEc = (_mv * vec4(vertex, 1.0)).xyz;
}
Fragment:
#ifdef GL_ES
precision highp float;
#endif
#extension GL_OES_standard_derivatives : enable
varying vec3 fragVertexEc;
const vec3 lightPosEc = vec3(0,0,10);
const vec3 lightColor = vec3(1.0,1.0,1.0);
void main()
{
vec3 X = dFdx(fragVertexEc);
vec3 Y = dFdy(fragVertexEc);
vec3 normal=normalize(cross(X,Y));
vec3 lightDirection = normalize(lightPosEc - fragVertexEc);
float light = max(0.0, dot(lightDirection, normal));
// gl_FragColor = vec4(normal, 1.0); // debug: visualize the normal instead
gl_FragColor = vec4(lightColor * light, 1.0);
}