I’m reading OpenGL Programming Guide, Ninth Edition.
I’m having trouble with this:
#version 420 core
uniform mat4 model_matrix;
…
layout (location = 0) in vec4 position;
…
void main(void)
{
vec4 pos = (model_matrix * (position * vec4(1.0, 1.0, 1.0, 1.0)));
…
}
So, what's the point of multiplying "position" by vec4(1.0, 1.0, 1.0, 1.0)? The result will be "position" with or without the vec4(1.0, 1.0, 1.0, 1.0) factor.
I checked the Red Book and there is no specific reason given there. The multiplication is just a per-component scale of the vertex positions, for the case where that vector is different from Vector4.One and you want to apply a scale; if it is just a unit vector, there is no need for it and you can simply remove the redundancy.
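To make the point concrete, here is a minimal sketch (the non-unit scale factors are made up purely for illustration):
#version 420 core
uniform mat4 model_matrix;
layout (location = 0) in vec4 position;
void main(void)
{
    // Multiplying by an all-ones vector is the identity, so these two are equivalent:
    vec4 pos_with_multiply = model_matrix * (position * vec4(1.0, 1.0, 1.0, 1.0));
    vec4 pos_without       = model_matrix * position;
    // The pattern only does something when the vector is not all ones,
    // e.g. a non-uniform scale (factors chosen arbitrarily for illustration):
    vec4 pos_scaled = model_matrix * (position * vec4(2.0, 0.5, 1.0, 1.0));
    gl_Position = pos_without;
}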
I'm working on a simple particle system in OpenGL; so far I've written two fragment shaders to update velocities and positions in response to my mouse, and they seem to work! I've looked at those two textures and they both seem to respond properly (going from random noise to an orderly structure in response to my mouse).
However, I'm having issues with how to draw the particles. I'm rather new to vertex shaders (having previously only used fragment shaders); it's my understanding that the usual way is a vertex shader like this:
uniform sampler2DRect tex;
varying vec4 cur;
void main() {
gl_FrontColor = gl_Color;
cur = texture2DRect(tex, gl_Vertex.xy);
vec2 pos = cur.xy;
gl_Position = gl_ModelViewProjectionMatrix * vec4(pos, 0., 1.);
}
This would transform the coordinates to the proper place according to the values in the position buffer. However, I'm getting GL errors when I run it, saying it can't be compiled; after some research, it seems that gl_ModelViewProjectionMatrix is deprecated.
What would be the proper way to do this now that the model view matrix is deprecated? I'm not trying to do anything fancy with perspective, I just need a plain orthogonal view of the texture.
thanks!
What version of GLSL are you using? (I don't see any #version directive.) Yes, gl_ModelViewProjectionMatrix really is deprecated; however, if you want to keep using it, maybe this could help. By the way, the varying qualifier is quite old too; I would rather use the in and out qualifiers, which make your shader code more readable.
The 'proper' way of doing this is to create your own matrices, model and view (use the glm library, for example), multiply them, and then pass the result as a uniform to your shader. A tutorial with an example can be found here.
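As a sketch, your particle vertex shader could look something like this with an explicit matrix uniform. The names mvp and in_position are my assumptions, and you would compute and upload mvp yourself in the application (e.g. projection * view built with glm):
#version 330
uniform mat4 mvp;            // projection * view (* model), uploaded from the application
uniform sampler2DRect tex;   // your position texture
in vec2 in_position;         // per-particle texture lookup coordinate (assumed attribute)
out vec4 cur;                // 'out' replaces the old 'varying'
void main() {
    cur = texture(tex, in_position);            // texture() replaces texture2DRect()
    gl_Position = mvp * vec4(cur.xy, 0.0, 1.0);
}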
Here is the vertex shader I used for displaying a texture (fullscreen quad):
#version 430
layout(location = 0) in vec2 vPosition;
layout(location = 1) in vec2 vUV;
out vec2 uv;
void main()
{
gl_Position = vec4(vPosition,1.0,1.0);
uv = vUV;
}
fragment shader:
#version 430
in vec2 uv;
out vec4 final_color;
uniform sampler2D tex;
void main()
{
final_color = texture(tex, uv).rgba;
}
And here are my coordinates (mine are static, but you can change them and update the buffer; the shader can stay the same):
// Quad vertices - z coordinate omitted, because it will always be 1.0
float pos[] = {
-1.0, 1.0,
1.0, 1.0,
-1.0, -1.0,
1.0, -1.0
};
float uv[] = {
0.0, 1.0,
1.0, 1.0,
0.0, 0.0,
1.0, 0.0
};
Maybe you could also try turning off the depth test before executing this shader: glDisable(GL_DEPTH_TEST);
I am building a parametric 3D modeler with OBJ export.
I am really puzzled.
I changed my GPU last night and now there are cracks between the vertices; I can see what is behind. My old card was an NVIDIA GTX 275 and the new one is an NVIDIA GTX 960. I didn't change anything in the code, shaders or otherwise. I use only flat colors (not textures).
On the image below, the black lines that cut the pentagon into triangles shouldn't be there.
It seems to be purely an OpenGL problem: when I export the model and look at it in Blender, the faces are contiguous and there are no duplicate vertices.
The shader code is quite simple:
_VERTEX2 = """
#version 330
#extension GL_ARB_explicit_uniform_location : enable
layout(location = 0 ) in vec3 position;
layout(location = 1 ) in vec4 color;
layout(location = 2 ) in vec3 normal;
varying vec4 baseColor;
// uniform mat4 proj;
layout(location = 0) uniform mat4 view;
layout(location = 4) uniform mat4 proj;
varying vec3 fragVertexEc;
void main(void) {
gl_Position = proj * view * vec4(position, 1.0);
fragVertexEc = (view * vec4(position, 1.0)).xyz;
baseColor = color;
}
"""
_FRAGMENT2 = """
#version 330
#extension GL_OES_standard_derivatives : enable
varying vec3 fragVertexEc;
varying vec4 baseColor;
const vec3 lightPosEc = vec3(0,0,10);
const vec3 lightColor = vec3(1.0,1.0,1.0);
void main()
{
vec3 X = dFdx(fragVertexEc);
vec3 Y = dFdy(fragVertexEc);
vec3 normal=normalize(cross(X,Y));
vec3 lightDirection = normalize(lightPosEc - fragVertexEc);
float light = max(0.0, dot(lightDirection, normal));
gl_FragColor = vec4(normal, 1.0);
gl_FragColor = vec4(baseColor.xyz * light, baseColor.w);
}
"""
The rendering code is also quite simple:
def draw(self, view_transform, proj, transform):
self.shader.use()
gl_wrap.glBindVertexArray(self.vao)
try:
self.vbo.bind()
view_transform = view_transform * transform
GL.glUniformMatrix4fv(0, 1, False, (ctypes.c_float*16)(*view_transform.toList()))
GL.glUniformMatrix4fv(4, 1, False, (ctypes.c_float*16)(*proj.toList()))
GL.glEnableVertexAttribArray(self.shader.attrib['position'])
GL.glEnableVertexAttribArray(self.shader.attrib['color'])
GL.glEnableVertexAttribArray(self.shader.attrib['normal'])
STRIDE = 40
GL.glVertexAttribPointer(
self.shader.attrib['position'], len(Vector._fields), GL.GL_FLOAT,
False, STRIDE, self.vbo)
GL.glVertexAttribPointer(
self.shader.attrib['color'], len(Color._fields), GL.GL_FLOAT,
False, STRIDE, self.vbo+12)
GL.glVertexAttribPointer(
self.shader.attrib['normal'], len(Vector._fields), GL.GL_FLOAT,
False, STRIDE, self.vbo+28)
GL.glDrawElements(
GL.GL_TRIANGLES,
len(self.glindices),
self.index_type,
self.glindices)
finally:
self.vbo.unbind()
gl_wrap.glBindVertexArray(0)
self.shader.unuse()
A simple quad has the following data sent to OpenGL:
vertices (flat array of pos + rgba + normal):
[5.0, -3.061616997868383e-16, 5.0, 0.898, 0.0, 0.0, 1.0, 0.0, 1.0, 6.123233995736766e-17,
-5.0, 3.061616997868383e-16, -5.0, 0.898, 0.0, 0.0, 1.0, 0.0, 1.0, 6.123233995736766e-17,
-5.0, -3.061616997868383e-16, 5.0, 0.898, 0.0, 0.0, 1.0, 0.0, 1.0, 6.123233995736766e-17,
5.0, 3.061616997868383e-16, -5.0, 0.898, 0.0, 0.0, 1.0, 0.0, 1.0, 6.123233995736766e-17]
indices: [0, 1, 2, 0, 3, 1]
OK, got it!
The strange display is due to glEnable(GL_POLYGON_SMOOTH).
I had completely forgotten about it, as my old card wasn't doing anything with this setting; at least nothing visible.
But the new one has the correct behaviour: it tries to anti-alias each triangle independently, hence the black lines.
Many thanks to @Jerem, who helped me rule out the other possibilities.
...and have it actually work. I get the principle: you write a vertex program, something like this, say:
attribute vec3 v_pos;
attribute vec4 v_color;
attribute vec2 v_uv;
attribute vec3 v_rotation; // [angle, x, y]
uniform mat4 modelview_mat;
uniform mat4 projection_mat;
varying vec4 frag_color;
varying vec2 uv_vec;
void main (void) {
mat4 trans_in = mat4(
1.0, 0.0, 0.0, 50.0, // <--- Transformation matrix
0.0, 1.0, 0.0, 50.0,
0.0, 0.0, 1.0, 50.0,
0.0, 0.0, 0.0, 1.0
);
vec4 pos = trans_in * vec4(v_pos,1.0); // <--- apply to input
// Mark a vertex using color to prove a transformation is actually happening...
if (v_rotation[0] > 10.0) {
frag_color = vec4(1.0, 0.0, 0.0, 1.0);
gl_Position = projection_mat * vec4(pos[0], pos[1], 1.0, 1.0);
}
// And leave all the other vertices untouched.
else {
frag_color = v_color;
gl_Position = projection_mat * vec4(v_pos, 1.0); // <--- Untransformed output
}
uv_vec = v_uv; // <--- Pass UV to fragment program
}
The problem is, this doesn't actually work.
After applying the matrix transformation trans_in * v_pos, I expect a point [1, 2, 3] to become [51, 52, 53, 1].
...but it doesn't. In fact, it renders this:
(i.e. no transformation of the point location; pos = trans_in * v_pos == vec4(v_pos, 1.0)!!! O_o)
Notice the red marked vertices that prove that I am actually setting the gl_Position for them; indeed, if I do this:
gl_Position = projection_mat * vec4(1.0, 1.0, 1.0, 1.0);
Each of those red points jumps down to the bottom corner, as you would expect.
I've also tried various 3x3 matrix multiplications, and it seems that while the scale operations work, and to some extent the rotation operations work, I cannot for the life of me get any 2D translation to happen; the matrix multiplication just seems to do nothing.
What am I doing wrong?
You got the matrix order wrong. GLSL uses column-major order, so each row in your initializer becomes a column of the matrix. This reflects the same convention that was used with the (now deprecated) GL matrix stack. It is also consistent with the transpose parameter of the glUniformMatrix*() calls, which has to be set to GL_FALSE for column-major input (where the translation part occupies elements m[12], m[13], and m[14] of the 1D array).
Your matrix actually only alters the w component of your vector, which you then ignore, so it does not have any visible effect.
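A sketch of the fix, rewriting the initializer from the question so the translation ends up in the fourth column:
// each group of four values is one COLUMN of the matrix,
// so the translation belongs in the fourth group:
mat4 trans_in = mat4(
    1.0,  0.0,  0.0,  0.0,   // column 0
    0.0,  1.0,  0.0,  0.0,   // column 1
    0.0,  0.0,  1.0,  0.0,   // column 2
    50.0, 50.0, 50.0, 1.0    // column 3: the translation
);
vec4 pos = trans_in * vec4(v_pos, 1.0);  // [1, 2, 3] now becomes [51, 52, 53, 1]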
I'm working on some basic lighting in my application, and am unable to get a simple light to work (so far..).
Here's the vertex shader:
#version 150 core
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
uniform mat4 pvmMatrix;
uniform mat3 normalMatrix;
in vec3 in_Position;
in vec2 in_Texture;
in vec3 in_Normal;
out vec2 textureCoord;
out vec4 pass_Color;
uniform LightSources {
vec4 ambient = vec4(0.5, 0.5, 0.5, 1.0);
vec4 diffuse = vec4(0.5, 0.5, 0.5, 1.0);
vec4 specular = vec4(0.5, 0.5, 0.5, 1.0);
vec4 position = vec4(1.5, 7, 0.5, 1.0);
vec4 direction = vec4(0.0, -1.0, 0.0, 1.0);
} lightSources;
struct Material {
vec4 ambient;
vec4 diffuse;
vec4 specular;
float shininess;
};
Material mymaterial = Material(
vec4(1.0, 0.8, 0.8, 1.0),
vec4(1.0, 0.8, 0.8, 1.0),
vec4(1.0, 0.8, 0.8, 1.0),
0.995
);
void main() {
gl_Position = pvmMatrix * vec4(in_Position, 1.0);
textureCoord = in_Texture;
vec3 normalDirection = normalize(normalMatrix * in_Normal);
vec3 lightDirection = normalize(vec3(lightSources.direction));
vec3 diffuseReflection = vec3(lightSources.diffuse) * vec3(mymaterial.diffuse) * max(0.0, dot(normalDirection, lightDirection));
/*
float bug = 0.0;
bvec3 result = equal( diffuseReflection, vec3(0.0, 0.0, 0.0) );
if(result[0] && result[1] && result[2]) bug = 1.0;
diffuseReflection.x += bug;
*/
pass_Color = vec4(diffuseReflection, 1.0);
}
And here's the fragment shader:
#version 150 core
uniform sampler2D texture;
in vec4 pass_Color;
in vec2 textureCoord;
void main() {
vec4 out_Color = texture2D(texture, textureCoord);
gl_FragColor = pass_Color;
//gl_FragColor = out_Color;
}
I'm rendering a textured wolf to the screen as a test. If I change the fragment shader to use out_Color, I see the wolf rendered properly. If I use pass_Color, I see nothing on the screen.
This is what the screen looks like when I use out_Color in the fragment shader:
I know the diffuseReflection vector is full of 0s, from uncommenting this code in the vertex shader:
...
/*
float bug = 0.0;
bvec3 result = equal( diffuseReflection, vec3(0.0, 0.0, 0.0) );
if(result[0] && result[1] && result[2]) bug = 1.0;
diffuseReflection.x += bug;
*/
...
This will make the x component of the diffuseReflection vector 1.0, which turns the wolf red.
Does anyone see anything obvious I'm doing wrong here?
As suggested in the comments, try debugging incrementally. I see a number of ways your shader could be wrong. Maybe your normalMatrix isn't being passed properly? Maybe your in_Normal isn't mapped to the appropriate input? Maybe when you cast lightSources.direction to a vec3, the compiler does something funky? Maybe your shader isn't running at all, even though you think it is? Maybe you have geometry or tessellation stages and data isn't passing through them correctly?
No one really has a chance of answering this definitively. To me it looks fine, but again, any of the factors above could be at play, and probably more.
To debug this, you need to break it down. As suggested in the comments, try rendering the normals. Then, you should try rendering the light direction. Then render your n dot l term. Then multiply by your material parameters, then by your texture. Somewhere along the way you'll figure out the problem. As an additional tip, change your clear color to something other than black so that any black-rendered objects stand out.
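For instance, a throwaway debug fragment shader that renders the normal as a color might look like this (a sketch; pass_Normal is a hypothetical varying you would add to the vertex shader, e.g. pass_Normal = normalDirection):
#version 150 core
in vec3 pass_Normal;   // hypothetical debug varying added in the vertex shader
out vec4 out_Color;
void main() {
    // remap the normal from [-1, 1] to [0, 1] so all directions are visible
    out_Color = vec4(pass_Normal * 0.5 + 0.5, 1.0);
}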
It's worth noting that the above advice (break it down) applies to all debugging, not just shaders. As I see it, you haven't done that here.
After deciding to try programming in modern OpenGL, I've left the fixed-function pipeline behind, and I'm not entirely sure how to get the same functionality I had before.
I'm trying to texture-map quads with pixel-perfect size, matching the texture size. For example, a 128x128 texture maps to a quad 128x128 in size.
This is my vertex shader.
#version 110
uniform float xpos;
uniform float ypos;
uniform float tw; // texture width in pixels
uniform float th; // texture height in pixels
attribute vec4 position;
varying vec2 texcoord;
void main()
{
mat4 projectionMatrix = mat4( 2.0/600.0, 0.0, 0.0, -1.0,
0.0, 2.0/800.0, 0.0, -1.0,
0.0, 0.0, -1.0, 0.0,
0.0, 0.0, 0.0, 1.0);
gl_Position = position * projectionMatrix;
texcoord = (gl_Position.xy);
}
This is my fragment shader:
#version 110
uniform float fade_factor;
uniform sampler2D textures[1];
varying vec2 texcoord;
void main()
{
gl_FragColor = texture2D(textures[0], texcoord);
}
My vertex data is as follows, where w and h are the width and height of the texture:
[
0, 0,
w, 0,
w, h,
0, h
]
I load a 128x128 texture and with these shaders I see the image repeated 4 times: http://i.stack.imgur.com/UY7Ts.jpg
Can anyone offer advice on the correct way to translate and scale given the tw, th, xpos, and ypos uniforms?
There's a problem with this:
mat4 projectionMatrix = mat4( 2.0/600.0, 0.0, 0.0, -1.0,
0.0, 2.0/800.0, 0.0, -1.0,
0.0, 0.0, -1.0, 0.0,
0.0, 0.0, 0.0, 1.0);
gl_Position = position * projectionMatrix;
Transformation matrices compose right to left, i.e. you should multiply in the opposite order (matrix times vector). Also, you normally don't specify a projection matrix in the shader; you pass it in as a uniform. Old-style OpenGL provides ready-to-use built-in uniforms for the projection and modelview matrices, and in OpenGL 3 core you can reuse those uniform names to stay compatible.
// predefined by OpenGL version < 3 core:
#if __VERSION__ < 140
uniform mat4 gl_ProjectionMatrix;
uniform mat4 gl_ModelViewMatrix;
uniform mat4 gl_ModelViewProjectionMatrix; // premultiplied gl_ProjectionMatrix * gl_ModelViewMatrix
uniform mat3 gl_NormalMatrix; // inverse transpose of the modelview, needed for transforming normals
attribute vec4 gl_Vertex;
varying vec4 gl_TexCoord[];
#endif
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Next, you must understand that texture coordinates don't address texture pixels (texels); rather, the texture should be understood as an interpolating function over the given sampling points. Texture coordinates 0 and 1 don't hit the centers of the edge texels, but lie exactly on the wraparound boundary, which blurs. As long as your quad's on-screen size exactly matches the texture dimensions, this is fine. But as soon as you want to show just a subimage, things get interesting. (I leave the exact mapping as an exercise for the reader; hint: the solution contains the terms 0.5/dimension and (dimension - 1)/dimension.)
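Following that hint, one possible mapping looks like this (a sketch, not a definitive answer; it assumes dim holds the texture size in pixels, e.g. vec2(tw, th) from the question, and texcoord01 is the quad corner position normalized to [0, 1]):
// map 0 to the center of the first texel and 1 to the center of the
// last texel: (0.5 + x * (dim - 1)) / dim
vec2 texel_center_map(vec2 texcoord01, vec2 dim)
{
    return (vec2(0.5) + texcoord01 * (dim - vec2(1.0))) / dim;
}
// hypothetical usage in the vertex shader, with positions given in pixels:
// texcoord = texel_center_map(position.xy / vec2(tw, th), vec2(tw, th));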