I'm a beginner at GLSL. I was reading some vertex shader code and I don't understand this part:
out vec3 position;
...
gl_Position=gl_ModelViewProjectionMatrix*gl_Vertex;
position=vec3(gl_ModelViewMatrix*gl_Vertex);
What are the differences between gl_ModelViewProjectionMatrix and gl_ModelViewMatrix?
What are the differences between gl_Position and position?
As you might suspect, gl_ModelViewProjectionMatrix is gl_ModelViewMatrix with the addition of the projection -- that is, the perspective camera distortion.
gl_Position is a predefined variable meaning "the projected result of this vertex shader" (all vertex shaders are required to assign a value to gl_Position), while position is an extra programmer-defined output that comes along for the ride (why it's there is impossible to say from this snippet; it depends on the rest of the shader).
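For illustration, here is one common way such an extra value ends up being used; the fragment shader below is hypothetical and not part of the original code:
out vec3 position;                      // eye-space position, interpolated per fragment
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;   // clip space, consumed by the rasterizer
    position = vec3(gl_ModelViewMatrix * gl_Vertex);          // eye space, kept for your own use
}
// hypothetical fragment shader: shade by distance from the camera
in vec3 position;
void main()
{
    float eyeDist = length(position);               // eye space puts the camera at the origin
    gl_FragColor = vec4(vec3(eyeDist / 10.0), 1.0); // 10.0 is an arbitrary scale factor
}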
In my already existing and functioning vertex shader I have something like this:
layout (location = 0) in vec3 aPos;
uniform mat4 projection;
void main(){
    gl_Position = projection * vec4(aPos, 1.0);
}
aPos is a vec3 position attribute, and the projection uniform is used to obtain screen-space coordinates.
In the examples I've seen, rotation is implemented by computing a rotation matrix in application code (which I've already done) and multiplying the result by vec4(aPos, 1.0). However, unlike those examples, I already have a projection uniform inside my shader that is being multiplied by vec4(aPos, 1.0).
My question is: What do I need to do to apply the rotation inside the vertex shader?
Do I create another uniform for the rotation result and multiply that one by both the projection and vec4(aPos, 1.0)?
How do I do this?
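In other words, is the idea something like this? (rotation here would be a new, hypothetical mat4 uniform, computed on the CPU and uploaded the same way as projection)
layout (location = 0) in vec3 aPos;
uniform mat4 projection;
uniform mat4 rotation;   // hypothetical: the rotation computed in application code
void main(){
    gl_Position = projection * rotation * vec4(aPos, 1.0);
}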
I'm rendering light spheres for a deferred renderer and I'm in the process of switching to instancing for better performance. I have the following vertex shader:
in vec3 position;
uniform int test_index;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix[256];
void main(void) {
    gl_Position = projectionMatrix * viewMatrix * modelMatrix[test_index] * vec4(position, 1.0);
}
I upload the matrices to the shader with
val location = glGetUniformLocation(programId, "modelMatrix[$i]")
glUniformMatrix4fv(location, false, buf)
When I use the int uniform as the index (hardcoded to 0 for debugging purposes), the sphere disappears, except when I clip into geometry (in which case it renders as a white circle). The same happens when I use gl_InstanceID as my index.
Weirdly, I noticed that the problem also occurs when I pass an int from the vertex to the fragment shader and use it there for something completely different, regardless of what I use as the index.
The problem disappears instantly and rendering is completely fine when I hardcode modelMatrix[0] in the shader instead of modelMatrix[test_index].
I've got a different shader (for skeletal animation) which uploads a mat4 uniform array the exact same way, also being indexed with an int, but I've got no such problems there...
I don't really know what to make of this, so any advice on how I can debug this is much appreciated. I'm using OpenGL 3.3 with Kotlin + LWJGL.
Edit: This probably has nothing to do with the uniform. The following also does not work:
int i = 0;
gl_Position = projectionMatrix * viewMatrix * modelMatrix[i] * vec4(position, 1.0f);
OpenGL has a limit on how many uniform components a shader stage can use (the same applies to attributes, but that's not the problem here). An array of 256 mat4s is 4096 components, which very likely exceeds that limit: the minimum GL_MAX_VERTEX_UNIFORM_COMPONENTS an implementation must support is only 1024.
The reason the code only breaks when using the int uniform is that GLSL compilers do a lot of optimization under the hood, for example removing unused uniforms. So if you hardcode the array index in the shader, the compiler will notice that only a single matrix is ever used and may remove all the others.
When you need more uniforms than OpenGL allows for, you have to use a uniform buffer object (UBO) or a shader storage buffer object (SSBO). Note that SSBOs require OpenGL 4.3, so on 3.3 a UBO is the way to go.
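For example, a minimal sketch of moving the array into a uniform block (assuming a GL 3.3 core context; the block would be backed by a buffer bound with glBindBufferBase, and 256 mat4s is 16 KB, which just fits the guaranteed minimum GL_MAX_UNIFORM_BLOCK_SIZE):
#version 330 core
layout (std140) uniform ModelMatrices {   // "ModelMatrices" is an illustrative name
    mat4 modelMatrix[256];                // std140: each mat4 occupies 4 * vec4 = 64 bytes
};
in vec3 position;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
void main(void) {
    gl_Position = projectionMatrix * viewMatrix * modelMatrix[gl_InstanceID] * vec4(position, 1.0);
}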
I'm studying OpenGL and I've got a little 3D scene with some objects. In the GLSL vertex shader I multiply the vertices by matrices like this:
vertexPos= viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
gl_Position = vertexPos;
vertexPos is a vec4 varying variable and I pass it to the fragment shader.
Here is how the scene renders normally:
normal render
But then I want to do a debug render. In the fragment shader I write:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0);
vertexPos is multiplied by all the matrices, including the perspective matrix, so I assumed I would get a smooth gradient from the center of the screen to the right edge, because the coordinates are mapped into the -1 to 1 square. But it looks like they are in screen space without the perspective deformation applied. Here is what I see:
(don't look at the red line and the light source; they use a different shader)
debug render
If I divide it by about 15, it looks like this:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0)/15.0;
divided by 15
Can someone please explain to me why the coordinates aren't homogeneous, yet the scene still renders correctly with perspective distortion?
P.S. If I try to use gl_Position in the fragment shader instead of vertexPos, it doesn't work.
A so-called perspective division is applied to gl_Position after it's computed in a vertex shader:
gl_Position.xyz /= gl_Position.w;
But it doesn't happen to your varyings unless you do it manually. Thus, you need to add
vertexPos.xyz /= vertexPos.w;
at the end of your vertex shader. Make sure to do it after you copy the value to gl_Position; otherwise the division would be applied twice.
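So the vertex shader ends up looking something like this (a minimal sketch, assuming the matrices are uniforms with the names used in the question):
uniform mat4 viewMatrix, worldMatrix, modelMatrix;
varying vec4 vertexPos;
void main()
{
    vertexPos = viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
    gl_Position = vertexPos;        // the GPU still performs its own divide on this copy
    vertexPos.xyz /= vertexPos.w;   // manual divide, applied only to the varying
}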
VC++ 2010, OpenGL, GLSL, SDL
I am moving over to shaders and have run into a problem that originally occurred while working with the fixed-function OpenGL pipeline: the light seems to point in whatever direction my camera faces. In the fixed-function pipeline it was just the specular highlight, which was fixable with:
glLightModelf(GL_LIGHT_MODEL_LOCAL_VIEWER, 1.0f);
Here are the two shaders:
Vertex
varying vec3 lightDir,normal;
void main()
{
    normal = normalize(gl_NormalMatrix * gl_Normal);
    lightDir = normalize(vec3(gl_LightSource[0].position));
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}
Fragment
varying vec3 lightDir,normal;
uniform sampler2D tex;
void main()
{
    vec3 ct, cf;
    vec4 texel;
    float intensity, at, af;
    intensity = max(dot(lightDir, normalize(normal)), 0.0);
    cf = intensity * (gl_FrontMaterial.diffuse).rgb +
         gl_FrontMaterial.ambient.rgb;
    af = gl_FrontMaterial.diffuse.a;
    texel = texture2D(tex, gl_TexCoord[0].st);
    ct = texel.rgb;
    at = texel.a;
    gl_FragColor = vec4(ct * cf, at * af);
}
Any help would be much appreciated!
The question is: What coordinate system (reference frame) do you want the lights to be in? Probably "the world".
OpenGL's fixed-function pipeline, however, has no notion of world coordinates, because it uses a modelview matrix, which transforms directly from model (object) coordinates to eye (camera) coordinates. In order to have “fixed” lights, you could do one of these:
The classic OpenGL approach is to, every frame, set the modelview matrix to the view transform only (that is, the transform out of the coordinate system you want to specify your light positions in) and then call glLight to set the position (glLight is specified to apply the current modelview matrix to the position you pass in).
Since you are using shaders, you could also have separate model and view matrices and have your shader apply both (rather than using ftransform) to vertices, but only the view matrix to lights. However, this means more per-vertex matrix operations and is probably not an especially good idea unless you are looking for clarity rather than performance.
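A minimal sketch of that second approach (every uniform name here is illustrative, and the light is assumed to be a point light whose position is supplied in world coordinates):
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform vec3 lightPosWorld;          // light position in world space
varying vec3 lightDir, normal;
void main()
{
    vec4 eyePos = viewMatrix * modelMatrix * gl_Vertex;
    // For rigid transforms the upper-left 3x3 of the modelview works as a normal matrix.
    normal = normalize(mat3(viewMatrix * modelMatrix) * gl_Normal);
    // Only the view matrix is applied to the light, so it stays fixed in the world.
    vec3 lightPosEye = vec3(viewMatrix * vec4(lightPosWorld, 1.0));
    lightDir = normalize(lightPosEye - eyePos.xyz);
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = projectionMatrix * eyePos;
}
The fragment shader from the question can stay as it is, since it only reads the lightDir and normal varyings.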
I'm having some problems writing a simple pass-through geometry shader for points. I figured it should be something like this:
#version 330
precision highp float;
layout (points) in;
layout (points) out;
void main(void)
{
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();
    EndPrimitive();
}
I have a bunch of points displayed on screen when I don't specify a geometry shader, but when I try to link this shader to my shader program, no points show up and no error is reported.
I'm using C# and OpenTK, but I don't think that is the problem.
Edit: People requested the other shaders; I did test them without the geometry shader and they worked fine.
Vertex shader:
void main()
{
    gl_FrontColor = gl_Color;
    gl_Position = ftransform();
}
Fragment shader:
void main()
{
    gl_FragColor = gl_Color;
}
I'm not that sure (I have no real experience with geometry shaders), but don't you have to specify the maximum number of output vertices? In your case it's just one, so try
layout (points, max_vertices=1) out;
Perhaps the shader compiles successfully because the number of vertices could still be specified through the API (at least in compatibility profiles, I think).
EDIT: You use the built-in varying gl_FrontColor (and read gl_Color in the fragment shader), but in the geometry shader you don't propagate it to the fragment shader (it doesn't get propagated automatically).
This brings us to another problem: you mix new syntax (like gl_in) with old, deprecated syntax (like ftransform and the built-in color varyings). That's probably not a good idea, and in this case it is a problem, as gl_in has no gl_Color or gl_FrontColor member if I remember correctly. So the best thing would be to use your own color variable as an out variable of the vertex and geometry shaders and as an in variable of the geometry and fragment shaders (but remember that the in has to be an array in the geometry shader).
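Putting both points together, a minimal sketch of a consistent pass-through in pure 3.30 syntax (the attribute locations, the mvp uniform, and all variable names here are illustrative and would have to match what your OpenTK code supplies):
// vertex shader
#version 330
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec4 aColor;
uniform mat4 mvp;              // your own matrix, since ftransform() is deprecated
out vec4 vColor;
void main()
{
    vColor = aColor;
    gl_Position = mvp * vec4(aPos, 1.0);
}
// geometry shader
#version 330
layout (points) in;
layout (points, max_vertices = 1) out;
in vec4 vColor[];              // geometry shader inputs are arrays
out vec4 gColor;
void main()
{
    gColor = vColor[0];
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();
    EndPrimitive();
}
// fragment shader
#version 330
in vec4 gColor;
out vec4 fragColor;
void main()
{
    fragColor = gColor;
}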