Trying to understand OpenGL shaders for input vertices [duplicate] - c++

This question already has an answer here:
Frequency of shader invocations in rendering commands
(1 answer)
Closed 7 years ago.
Here is my code to begin with; this is the vertex shader:
"#version 400\n"
"layout(location = 0) in vec2 vp;"
"layout(location = 1) in vec2 tex;"
"out vec2 texCoord;"
"void main () {"
" gl_Position = vec4 (vp, 0.0f, 1.0f);"
" texCoord = tex; "
"}";
Quite common and basic.
So basically, what I am trying to understand is: does the vertex shader run for every vertex attribute separately, or does it only run for one attribute individually?
As far as I have understood, if I give it as input the vertices for a triangle:
//    x     y     U    V
{ -0.5, -0.5,  0.0, 0.0,
   0.5, -0.5,  1.0, 0.0,
   0.0,  0.0,  0.5, 1.0 };
Does that mean the vertex shader will run for both of the vertex attributes, producing each individual fragment that lies within the covered area (the samples), so that xy coordinates are generated for each attribute separately?
Or does the vertex shader only run for gl_Position, producing the xy coordinates only for the area of the first attribute, i.e. vp?

The entire shader program runs once per vertex, so in this case it runs three times. It does not run per-attribute; each invocation receives all of that vertex's attributes (here, one vp and one tex) at once.
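To make that concrete, here is a minimal C++ sketch (assuming a VAO is bound and the interleaved x/y/U/V data above has already been uploaded to the bound GL_ARRAY_BUFFER); each of the three invocations pulls one vp and one tex pair from the same vertex:
GLsizei stride = 4 * sizeof(GLfloat);              // x, y, U, V per vertex
glEnableVertexAttribArray(0);                      // location 0 = vp
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, stride, (void*)0);
glEnableVertexAttribArray(1);                      // location 1 = tex
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void*)(2 * sizeof(GLfloat)));
glDrawArrays(GL_TRIANGLES, 0, 3);                  // 3 vertices -> 3 vertex shader runs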

Related

Fragment shader not creating gradient like light in OpenGL GLSL

I am trying to understand how to manipulate my renderings with shaders. I haven't changed the projection matrix of the scene, but I draw a triangle with vertices = {-0.5, -0.5}, {0.5, -0.5}, {0, 0.5}. I then pass a vec2 position of a "light" into a uniform of my fragment shader; I want it to essentially shine onto my triangle from the top right of the triangle (lightPos = (0.5, 0.5)).
Here is a very bad drawing of where everything is located.
And this is what I aim to have in my triangle (kind of; it doesn't need to be white-to-blue, it just needs to be brighter near the light and darker further away).
Here is the shader
#version 460 core
in vec3 oPos;
out vec4 fragColor;

uniform vec3 u_outputColor;
uniform vec2 u_lightPosition;

void main() {
    float intensity = 1 / length(oPos.xy - u_lightPosition);
    vec4 col = vec4(u_outputColor, 1.0f);
    fragColor = col * intensity;
}
Here is the basic code for compiling the shader (most of it is abstracted away, so it is fairly simple):
/* Test data for shader program. */
program.init("passthrough.vert", "passthrough.frag");
program.setUniformVec3("u_outputColor", 0.3f, 0.3f, 0.8f);
program.setUniformVec2("u_lightPosition", 0.5f, 0.5f);
GLfloat vertices[9] = { -0.5f, -0.5f, 0.0f,   0.0f, 0.5f, 0.0f,   0.5f, -0.5f, 0.0f };
Here is the vertex shader:
#version 460 core
layout (location = 0) in vec3 aPos;
out vec3 oPos;

void main() {
    gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);
}
Every single test I have run to see why I can't get this to work seems to show that if there is a slight color change, it changes the entire triangle to a different shade. All tests show a triangle of ONE color across the entire thing; no gradient at all. I want the triangle to be a gradient that is brighter near the light and darker further from it. This is driving me crazy because I have been stuck on such a simple thing for 3 hours now, and any code I write seems to modify all 3 vertices at once, as if they were in the exact same spot. I wrote the math out and I strongly feel this should work. Any help is very appreciated.
EDIT
The triangle after the solution fixed my issue:
Try this for your vertex shader:
#version 460 core
layout (location = 0) in vec3 aPos;
out vec3 oPos;

void main() {
    oPos.xyz = aPos.xyz; // ADD THIS LINE
    gl_Position = vec4(aPos.xyz, 1.0);
}
Your version never writes to oPos, so the fragment shader receives either (a) a random value or, in your case, (b) vec3(0, 0, 0). Since your color calculation is based on:
float intensity = 1 / length(oPos.xy - u_lightPosition);
This is basically the same as
float intensity = 1 / length(-1*u_lightPosition);
So the color only depends on the light position.
You can debug and verify this by setting your fragment color to oPos:
vec4 col = vec4(oPos.xy, oPos.z + 0.5, 1.0f);
If oPos were set correctly in the vertex shader, this line in the fragment shader would show you an RGB ramp. If oPos is not set correctly, you'll see 50% blue.
Always check for errors and logs returned from OpenGL. It should have emitted a warning about this that would have sent you straight to the problem.
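For example, a minimal sketch of pulling the compile log (assuming <cstdio> is available and shader is your handle from glCreateShader/glCompileShader):
GLint ok = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
if (ok != GL_TRUE) {
    GLchar log[1024];
    glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
    fprintf(stderr, "shader compile log:\n%s\n", log);
}
The same pattern works for linking, with glGetProgramiv(GL_LINK_STATUS) and glGetProgramInfoLog.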
Also, keep an eye on clipping: here the vertices have a z of 0, which with w = 1.0 is safely inside the [-1, 1] clip volume, so the triangle survives.

Vertex shaders in and out?

I've got two shaders like this:
const char* vertexShaderData =
    "#version 450\n"
    "in vec3 vp;\n"
    "in vec3 color;\n"
    "out vec3 Color;\n"
    "void main() {\n"
    "    Color = color;\n"
    "    gl_Position = vec4(vp, 1.0);\n"
    "}";
const char* fragShaderData =
    "#version 410\n"
    "uniform vec4 incolor;\n"
    "in vec3 Color;\n"
    "out vec4 outColor;\n"
    "void main() {\n"
    "    outColor = vec4(Color, 1.0);\n"
    "}";
I understand that each shader is called for each vertex.
Where do the in parameters in my vertexShaderData get their values? At no point in the code do I specify what vp is or what color is. In the second shader, I get that the in value comes from the first shader's out value. But where do those initial ins come from?
About the out value of the fragShaderData: How is this value used? In other words, how does OpenGL know that this is an RGB color value and know to paint the triangle with this color?
For the vertex shader,
you can use glGetAttribLocation in C++ to query the driver-assigned location, or set it manually in GLSL like this: layout (location = 0) in vec3 vp;. Then you upload the data in C++ like this:
// (Vertex buffer must be bound at this point)
glEnableVertexAttribArray( a ); // 'a' would be 0 if you used the layout qualifier
glVertexAttribPointer( a, 3, GL_FLOAT, GL_FALSE, sizeof( Vertex ), nullptr ); // stride = size of one vertex
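As a concrete (hypothetical) layout for the two inputs above, with <cstddef> for offsetof and program being your linked program handle:
// Hypothetical interleaved layout feeding 'in vec3 vp' and 'in vec3 color':
struct Vertex {
    GLfloat pos[3];
    GLfloat color[3];
};
GLint vpLoc    = glGetAttribLocation(program, "vp");
GLint colorLoc = glGetAttribLocation(program, "color");
glEnableVertexAttribArray(vpLoc);
glVertexAttribPointer(vpLoc, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, pos));
glEnableVertexAttribArray(colorLoc);
glVertexAttribPointer(colorLoc, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, color));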
For the fragment shader,
'in' variables must match the vertex shader's 'out' variables, as in your sample code: out vec3 Color; -> in vec3 Color;
gl_Position controls where outColor is painted.
You feed the data to the vertex shader from your OpenGL calls (on the CPU side). Once you have compiled and linked the program (vertex shader + fragment shader), you feed it the vertices you want.
Unlike the vertex shader, this fragment shader will run once for EVERY pixel inside the triangle you are rendering. The outColor is a vec4 (R, G, B, A) that "goes to your framebuffer". As for the color: in theory, this is abstract to OpenGL. The channels are called RGBA for convenience; you can even access the same data as XYZW (it's an alias for RGBA). OpenGL just writes NUMBERS to the framebuffer you choose (according to the rules of color attachments, etc.). You end up with 4 channels that, by convention, the monitor displays as RGB (with A used for transparency). In other words, you can use GL programs to draw triangles that output 1 or 2 channels, depending on your needs, and those channels can mean anything you want. For example, you can interpolate a YUV image or a UV plane (2 channels). If you send those straight to the monitor, the colors won't look right, since the monitor expects RGB, but the OpenGL concept is broader than RGB: it interpolates numbers for every pixel inside the triangle. That's it.
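To make the "channels are just numbers" point concrete, here is a hedged C++ sketch of a two-channel render target (w, h, tex, and fbo are assumed sizes/handles you already have); a fragment shader writing to it would declare out vec2 instead of out vec4:
// Sketch: a 2-channel float color attachment; the numbers written to it
// need not mean "red" and "green" - they could be a UV plane, for example.
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG16F, w, h, 0, GL_RG, GL_FLOAT, nullptr);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);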

Vertex Shader for a Particle System

I'm working on a simple particle system in OpenGL; so far I've written two fragment shaders to update velocities and positions in response to my mouse, and they seem to work! I've looked at those two textures and they both seem to respond properly (going from random noise to an orderly structure in response to my mouse).
However, I'm having issues with how to draw the particles. I'm rather new to vertex shaders (having previously only used fragment shaders); it's my understanding that the usual way is a vertex shader like this:
uniform sampler2DRect tex;
varying vec4 cur;

void main() {
    gl_FrontColor = gl_Color;
    cur = texture2DRect(tex, gl_Vertex.xy);
    vec2 pos = cur.xy;
    gl_Position = gl_ModelViewProjectionMatrix * vec4(pos, 0., 1.);
}
This would transform the coordinates to the proper place according to the values in the position buffer. However, I'm getting GL errors when I run this saying it can't be compiled; after some research, it seems that gl_ModelViewProjectionMatrix is deprecated.
What would be the proper way to do this now that the model view matrix is deprecated? I'm not trying to do anything fancy with perspective, I just need a plain orthogonal view of the texture.
thanks!
What version of GLSL are you using (I don't see any #version directive)? Yes, I think gl_ModelViewProjectionMatrix really is deprecated. However, if you want to keep using it, maybe this could help. By the way, the varying qualifier is quite old too; I would rather use the in and out qualifiers, as they make your shader code more readable.
The 'proper' way of doing this is to create your own matrices, model and view (use the glm library, for example), multiply them, and then pass the result as a uniform to your shader. A tutorial with an example can be found here.
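A minimal glm sketch of that idea, using an orthographic projection since you don't need perspective (shaderProgram and the uniform name mvp are illustrative; the shader would then do gl_Position = mvp * vec4(pos, 0.0, 1.0);):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::ortho
#include <glm/gtc/type_ptr.hpp>         // glm::value_ptr

// Plain orthographic MVP: identity model/view, no perspective.
glm::mat4 projection = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
glm::mat4 view       = glm::mat4(1.0f);
glm::mat4 model      = glm::mat4(1.0f);
glm::mat4 mvp        = projection * view * model;

GLint loc = glGetUniformLocation(shaderProgram, "mvp");
glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));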
Here is the vertex shader I used for displaying a texture (fullscreen quad):
#version 430

layout(location = 0) in vec2 vPosition;
layout(location = 1) in vec2 vUV;

out vec2 uv;

void main()
{
    gl_Position = vec4(vPosition, 1.0, 1.0);
    uv = vUV;
}
fragment shader:
#version 430

in vec2 uv;
out vec4 final_color;

uniform sampler2D tex;

void main()
{
    final_color = texture(tex, uv).rgba;
}
and here are my coordinates (mine are static, but you can change them and update the buffer; the shader can stay the same):
// Quad vertices - z coord omitted, because it will always be 1
float pos[] = {
    -1.0,  1.0,
     1.0,  1.0,
    -1.0, -1.0,
     1.0, -1.0
};
float uv[] = {
    0.0, 1.0,
    1.0, 1.0,
    0.0, 0.0,
    1.0, 0.0
};
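To actually draw with those arrays, a sketch (vao is an assumed handle whose buffers feed pos to location 0 and uv to location 1); the vertex order above is already a triangle strip:
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);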
Maybe you could also try turning off the depth test before executing this shader: glDisable(GL_DEPTH_TEST);

OpenGL Rotation with vertices not working

I am trying to rotate vertices with a shader; here is my shader code:
"#version 150 core\n"
"in vec2 position;"
"in vec3 color;"
"out vec3 Color;"
"uniform mat4 rotation;"
"void main() {"
" Color = color;"
" gl_Position = rotation*vec4(position, 0.0, 2.0);"
"}";
I am using it with a quat; here is the code that produces the matrix and uploads it to the shader:
glm::quat rotation(x, 0.0, 0.0, 0.5);
x += 0.001;
ctm = glm::mat4_cast(rotation);

GLint matrix_loc;
// get the uniform's location from the shader
matrix_loc = glGetUniformLocation(shaderProgram, "rotation");
if (matrix_loc == -1)
    std::cout << "pointer for rotation of shader not found" << matrix_loc << std::endl;
// put local data in the shader:
glUniformMatrix4fv(matrix_loc, 1, GL_FALSE, glm::value_ptr(ctm));
But when it rotates, the object gets bigger and bigger. I know I don't need to call glGetUniformLocation every iteration of my loop, but this is test code. glUniformMatrix4fv is supposed to make the rotation happen, as far as I know. After these calls I simply draw my vertex array.
Given that it's still drawing, rotation in the shader is probably a valid matrix. If it were an issue with the uniform, it would probably be all zeroes and nothing would draw.
As #genpfault says, ctm needs to be initialized:
ctm = glm::mat4_cast(rotation);
See: Converting glm quaternion to rotation matrix and using it with opengl
Also, shouldn't the 2.0 in vec4(position, 0.0, 2.0) be a 1.0?
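There is also a likely cause for the growing object: glm::mat4_cast only yields a pure rotation for a unit quaternion, and glm::quat rotation(x, 0.0, 0.0, 0.5) drifts away from unit length as x grows, which adds scale. A hedged sketch of one way to keep it a pure rotation (angle x in radians, rotating around Z):
#include <glm/gtc/quaternion.hpp> // glm::angleAxis, glm::normalize, glm::mat4_cast

// Build the quaternion from an angle and a unit axis, and normalize
// defensively so mat4_cast never introduces scale.
glm::quat rotation = glm::angleAxis(x, glm::vec3(0.0f, 0.0f, 1.0f));
x += 0.001f;
glm::mat4 ctm = glm::mat4_cast(glm::normalize(rotation));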

How to get flat normals on a cube

I am using OpenGL without the deprecated features and my light calculation is done on fragment shader. So, I am doing smooth shading.
My problem, is that when I am drawing a cube, I need flat normals. By flat normals I mean that every fragment generated in a face has the same normal.
My solution to this so far is to generate different vertices for each face, so instead of having 8 vertices I now have 24 (6*4).
But replicating the vertices seems wrong to me. Is there a better way to get flat normals?
Update: I am using OpenGL version 3.3.0; I do not have support for OpenGL 4 yet.
If you do the lighting in camera-space, you can use dFdx/dFdy to calculate the normal of the face from the camera-space position of the vertex.
So the fragment shader would look a little like this.
varying vec3 v_PositionCS; // position of the vertex in camera/eye-space (passed in from the vertex shader)

void main()
{
    // Calculate the face normal in camera space
    vec3 normalCs = normalize(cross(dFdx(v_PositionCS), dFdy(v_PositionCS)));

    // Perform lighting
    ...
    ...
}
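The matching vertex shader just forwards the camera/eye-space position; a sketch in the same old-style GLSL (the matrix and attribute names are illustrative):
// Sketch: compute and forward the eye-space position that the
// fragment shader differentiates with dFdx/dFdy.
uniform mat4 u_modelView;
uniform mat4 u_projection;
attribute vec3 a_position;
varying vec3 v_PositionCS;

void main()
{
    vec4 posCS = u_modelView * vec4(a_position, 1.0);
    v_PositionCS = posCS.xyz;
    gl_Position = u_projection * posCS;
}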
Since a geometry shader can "see" all three vertices of a triangle at once, you can use a geometry shader to calculate the normals and send them to your fragment shader. This way, you don't have to duplicate vertices.
// Geometry Shader
#version 330

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

out vec3 gNormal;

// You will need to pass your untransformed positions in from the vertex shader
in vec3 vPosition[];

uniform mat3 normalMatrix;

void main()
{
    vec3 side2 = vPosition[2] - vPosition[0];
    vec3 side0 = vPosition[1] - vPosition[0];
    vec3 facetNormal = normalize(normalMatrix * cross(side0, side2));

    gNormal = facetNormal;
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();

    gNormal = facetNormal;
    gl_Position = gl_in[1].gl_Position;
    EmitVertex();

    gNormal = facetNormal;
    gl_Position = gl_in[2].gl_Position;
    EmitVertex();

    EndPrimitive();
}
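The corresponding vertex shader could look like this sketch; it forwards the untransformed position that the geometry shader reads (the mvp uniform name is illustrative):
#version 330

layout(location = 0) in vec3 aPosition;
uniform mat4 mvp;
out vec3 vPosition; // matches 'in vec3 vPosition[]' in the geometry shader

void main()
{
    vPosition = aPosition;
    gl_Position = mvp * vec4(aPosition, 1.0);
}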
Another option would be to pass the MV matrix and the unrotated axis-aligned coordinate to the fragment shader:
attribute vec3 aCoord;
varying vec3 vCoord;
uniform mat4 MVP;

void main() {
    vCoord = aCoord;
    gl_Position = MVP * vec4(aCoord, 1.0);
}
In the fragment shader, one can then identify the normal by finding the dominant axis of vCoord, setting that component to 1.0 (or -1.0) and the other coordinates to zero; that is the normal, which then has to be rotated by the MV matrix. A sketch of that follows.
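A hedged fragment shader sketch of that dominant-axis pick (it assumes the cube is axis-aligned and centered at the origin, and GLSL 1.20+ for the mat3(mat4) constructor):
// Sketch: pick the dominant axis of the interpolated, unrotated coordinate,
// turn it into an axis-aligned unit normal, then rotate it into eye space.
varying vec3 vCoord;
uniform mat4 MV;

void main()
{
    vec3 a = abs(vCoord);
    vec3 n;
    if (a.x > a.y && a.x > a.z) n = vec3(sign(vCoord.x), 0.0, 0.0);
    else if (a.y > a.z)         n = vec3(0.0, sign(vCoord.y), 0.0);
    else                        n = vec3(0.0, 0.0, sign(vCoord.z));
    vec3 normalEye = normalize(mat3(MV) * n);
    gl_FragColor = vec4(normalEye * 0.5 + 0.5, 1.0); // debug: visualize the normal
}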