Non-looping animations in a GLSL shader

It's fairly trivial to animate a looping value in GLSL.
uniform float u_time;
out vec4 fragColor;

void main() {
    float val = sin(u_time);
    fragColor = vec4(val);
}
A graph of the value of val over time looks like this:
But what if we only wanted to do an animation one, two, or three times? Or arbitrarily trigger some animation when, for example, a user clicks the screen?
So that our graph of val would look like this:
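One common way to get there is to drive the animation from an event timestamp rather than the raw clock. A minimal sketch, assuming the application sets a hypothetical u_clickTime uniform (in the same units as u_time) whenever the user clicks:

uniform float u_time;
uniform float u_clickTime; // hypothetical: set from the application on each click
out vec4 fragColor;

void main() {
    // Time since the click, clamped so the animation plays exactly once
    float t = clamp(u_time - u_clickTime, 0.0, 1.0);
    float val = sin(t * 3.14159); // rises and falls once, then stays at 0.0
    fragColor = vec4(val);
}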

Related

OpenGL cubemap reflections in view space are wrong

I am following this tutorial and I managed to add a cube map to my scene. Then I tried to add reflections to my object; unlike the tutorial, I wrote my GLSL code in view space. However, the reflections seem a bit off. They always reflect the same side whatever angle you are facing: in my case, you always see a rock on the reflected object, but the rock is only on one side of my cube map.
Here is a video showing the effect:
I tried with other shaped objects, like a cube, and the effect is the same. I also found this book, which shows an example of view-space reflections, and it seems I am doing something similar to it, but it still won't produce the desired effect.
My vertex shader code:
#version 330 core

layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 aTexCoord;

uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;

out vec2 TexCoord;
out vec3 aNormal;
out vec3 FragPos;

void main()
{
    aNormal = mat3(transpose(inverse(View * Model))) * normal;
    FragPos = vec3(View * Model * vec4(aPos, 1.0));
    gl_Position = Projection * vec4(FragPos, 1.0);
    TexCoord = aTexCoord;
}
My fragment shader code:
#version 330 core

out vec4 FragColor;

in vec3 FragPos;
in vec3 aNormal;

uniform samplerCube skybox;

void main() {
    vec3 I = normalize(FragPos);
    vec3 R = reflect(I, normalize(aNormal));
    FragColor = vec4(texture(skybox, R).rgb, 1.0);
}
Since you do the computations in the fragment shader in view space, the reflected vector (R) is a vector in view space, too. The cubemap (skybox), however, represents a map of the environment in world space.
You have to transform R from view space to world space. That can be done with the inverse view matrix; the inverse matrix can be computed by the GLSL built-in function inverse:
#version 330 core

out vec4 FragColor;

in vec3 FragPos;
in vec3 aNormal;

uniform samplerCube skybox;
uniform mat4 View;

void main() {
    vec3 I = normalize(FragPos);
    vec3 viewR = reflect(I, normalize(aNormal));
    vec3 worldR = inverse(mat3(View)) * viewR;
    FragColor = vec4(texture(skybox, worldR).rgb, 1.0);
}
Note, the view matrix transforms from world space to view space, thus the inverse view matrix transforms from view space to world space. See also Invertible matrix.
This is a late answer, but I just wanted to give some additional information on why it behaves like this.
Imagine your reflective object is simply a 6-sided cube. Each face can be thought of as a mirror. Because you are in view space, every coordinate of that mirror plane that is visible from your viewpoint has a negative z value. Let us look at the point directly at the center. This vector looks like (0, 0, -z), and because the side of the cube is like a mirror, it will get reflected directly back to you as (0, 0, +z). So you end up sampling from GL_TEXTURE_CUBE_MAP_POSITIVE_Z of your cube map.
In shader code it looks like:
vec3 V = normalize(-frag_pos_view_space); // vector from fragment to view point (0,0,0) in view space
vec3 R = reflect(-V, N); // invert V because reflect expects incident vector
vec3 color = texture(skybox, R).xyz;
Now, let us move to the other side of the cube and look at that mirror plane. In view space, the coordinate you are looking at is still (0, 0, -z) at the center; it will be reflected around the normal and come back to you, so the reflected vector again looks like (0, 0, +z). This means that even if you are at the other side of your cube, you will sample the same face of your cube map.
So what you have to do is go back into world space using the inverse of your view matrix. If, in addition, you rendered the skybox itself by applying a rotation, you will also have to transform your reflected vector with the inverse of the model matrix that you used to transform the skybox; otherwise the reflections will still be wrong.
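In code, that extra step might look roughly like this (a sketch; skyboxModel stands for whatever matrix was used to rotate the skybox, which is an assumption about the setup):

vec3 worldR = inverse(mat3(View)) * viewR;           // view space -> world space
vec3 skyR   = inverse(mat3(skyboxModel)) * worldR;   // undo the skybox's own rotation
vec3 color  = texture(skybox, skyR).rgb;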

How to take a circle out of a shader?

I'm working on a game using GLSL shaders. I'm using Go with the library Pixel; it's a 2D game and there's no "camera" (I've had people suggest using a second camera to achieve this).
My current shader is just a basic grayscale shader:
#version 330 core

in vec2 vTexCoords;
out vec4 fragColor;

uniform vec4 uTexBounds;
uniform sampler2D uTexture;

void main() {
    // Get our current screen coordinate
    vec2 t = (vTexCoords - uTexBounds.xy) / uTexBounds.zw;

    // Sum our 3 color channels
    float sum = texture(uTexture, t).r;
    sum += texture(uTexture, t).g;
    sum += texture(uTexture, t).b;

    // Divide by 3, and set the output to the result
    vec4 color = vec4(sum/3, sum/3, sum/3, 1.0);
    fragColor = color;
}
I want to take a circle out of the shader's effect to show the true color of objects inside it, almost like light is shining on them.
This is an example of what I'm trying to achieve:
I can't really figure out what to search for to find a Shadertoy example or something that does this, but I've seen something similar before, so I'm pretty sure it's possible.
To restate: I basically just want to remove part of the shader's effect.
I'm not sure if using shaders is the best way to approach this; if there's another way, please let me know and I will rework the question.
You can easily extend this to use any arbitrary position as the "light."
Declare uniforms to store the light's current location and a radius.
If the squared distance from the given location to the current pixel is less than the squared radius, return the current color.
Otherwise, return its greyscale.
uniform vec2 light_location;  // circle centre, in the same normalized space as t
uniform float radius;

// ... inside main(), after computing t as in the original shader:
vec2 displacement = t - light_location;
float distanceSq = displacement.x * displacement.x + displacement.y * displacement.y;
float radiusSq = radius * radius;

if (distanceSq < radiusSq) {
    fragColor = texture(uTexture, t);
} else {
    float sum = texture(uTexture, t).r;
    sum += texture(uTexture, t).g;
    sum += texture(uTexture, t).b;
    float grey = sum / 3.0;
    fragColor = vec4(grey, grey, grey, 1.0);
}
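Folded into the original grayscale shader, the whole thing might look roughly like this (a sketch; uLightPos and uRadius are hypothetical uniforms you would set from the Go/Pixel side, given in the same normalized coordinate space as t):

#version 330 core

in vec2 vTexCoords;
out vec4 fragColor;

uniform vec4 uTexBounds;
uniform sampler2D uTexture;
uniform vec2 uLightPos;  // hypothetical: circle center in normalized [0,1] coordinates
uniform float uRadius;   // hypothetical: circle radius in the same units

void main() {
    // Normalized screen coordinate, as in the original shader
    vec2 t = (vTexCoords - uTexBounds.xy) / uTexBounds.zw;
    vec4 texColor = texture(uTexture, t);

    vec2 d = t - uLightPos;
    if (dot(d, d) < uRadius * uRadius) {
        // Inside the circle: keep the original color
        fragColor = texColor;
    } else {
        // Outside: average the channels to grayscale
        float grey = (texColor.r + texColor.g + texColor.b) / 3.0;
        fragColor = vec4(grey, grey, grey, 1.0);
    }
}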

Parts of my floor plane disappear

I'm having a problem rendering. The object in question is a large plane consisting of two triangles. It should cover most of the area of the window, but parts of it disappear and reappear as the camera moves and turns (I never see the whole plane, though).
Note that the missing parts are NOT whole triangles.
I have messed around with the camera to find out where this is coming from, but I haven't found anything.
I haven't added view frustum culling yet.
I'm really stuck, as I have no idea which part of my code I even need to look at to solve this. Searches mainly turn up questions about whole triangles missing; that's not what's happening here.
Any pointers to what the cause of the problem may be?
Edit:
I downscaled the plane and added another texture that's better suited for testing.
Now I have found this behaviour:
This looks like I expect it to
If I move forward a bit more, this happens
It looks like the geometry behind the camera is flipped and rendered even though it should be invisible?
Edit 2:
My vertex and fragment shaders:
#version 330

in vec3 position;
in vec2 textureCoords;
out vec4 pass_textureCoords;

uniform mat4 MVP;

void main() {
    gl_Position = MVP * vec4(position, 1);
    pass_textureCoords = vec4(textureCoords/gl_Position.w, 0, 1/gl_Position.w);
    gl_Position = gl_Position/gl_Position.w;
}
#version 330

in vec4 pass_textureCoords;
out vec4 fragColor;

uniform sampler2D textureSampler;

void main()
{
    fragColor = texture(textureSampler, pass_textureCoords.xy/pass_textureCoords.w);
}
Many drivers do not handle big triangles that cross the z-plane very well: depending on your precision settings and the driver's internals, these triangles may generate invalid coordinates outside of the supported numerical range.
To make sure this is not the issue, try to manually tessellate the floor in a few more divisions, instead of only having two triangles for the whole floor.
Doing so is quite straightforward. You'd have something like this pseudocode:
division_size_x = width / max_x_divisions
division_size_y = height / max_y_divisions

for i from 0 to max_x_divisions:
    for j from 0 to max_y_divisions:
        vertex0 = { i * division_size_x, j * division_size_y }
        vertex1 = { (i+1) * division_size_x, j * division_size_y }
        vertex2 = { (i+1) * division_size_x, (j+1) * division_size_y }
        vertex3 = { i * division_size_x, (j+1) * division_size_y }
        OutputTriangle(vertex0, vertex1, vertex2)
        OutputTriangle(vertex2, vertex3, vertex0)
Apparently there is an error in my matrices that caused problems with vertices that are behind the camera. I deleted all the divisions by w in my shaders and did gl_Position = -gl_Position (initially just to test something), and it works now.
I still need to figure out the exact problem, but it is working for now.
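For reference, a version of the shaders without the manual divisions by w might look roughly like this (a sketch; the GPU performs clipping and the perspective divide itself, which is exactly what goes wrong for vertices behind the camera when you divide manually):

// vertex shader
#version 330
in vec3 position;
in vec2 textureCoords;
out vec2 pass_textureCoords;
uniform mat4 MVP;
void main() {
    gl_Position = MVP * vec4(position, 1.0);  // leave the w component alone
    pass_textureCoords = textureCoords;       // interpolated perspective-correctly by the rasterizer
}

// fragment shader
#version 330
in vec2 pass_textureCoords;
out vec4 fragColor;
uniform sampler2D textureSampler;
void main() {
    fragColor = texture(textureSampler, pass_textureCoords);
}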

Why does GLSL lighting code shift the light spot with the camera?

I am trying to make a custom light shader and was trying a lot of different things over time.
Some of the solutions I found work better, others worse. For this question I'm using the solution which worked best so far.
My problem is that if I move the "camera" around, the light position seems to move around, too. This solution has only very slight but noticeable movement in it, and the light position seems to be above where it should be.
Default OpenGL lighting (without any shaders) works fine (steady light positions), but I need the shader for multitexturing, and I'm planning on using portions of it for lighting effects once it's working.
Vertex Source:
varying vec3 vlp, vn;

void main(void)
{
    gl_Position = ftransform();
    vn = normalize(gl_NormalMatrix * -gl_Normal);
    vlp = normalize(vec3(gl_LightSource[0].position.xyz) - vec3(gl_ModelViewMatrix * -gl_Vertex));
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
Fragment Source:
uniform sampler2D baseTexture;
uniform sampler2D teamTexture;
uniform vec4 teamColor;

varying vec3 vlp, vn;

void main(void)
{
    vec4 newColor = texture2D(teamTexture, vec2(gl_TexCoord[0]));
    newColor = newColor * teamColor;
    float teamBlend = newColor.a;

    // mixing the textures and colorizing them. this works, I tested it w/o lighting!
    vec4 outColor = mix(texture2D(baseTexture, vec2(gl_TexCoord[0])), newColor, teamBlend);

    // apply lighting
    outColor *= max(dot(vn, vlp), 0.0);
    outColor.a = texture2D(baseTexture, vec2(gl_TexCoord[0])).a;

    gl_FragColor = outColor;
}
What am I doing wrong?
I can't be certain any of these are the problem, but they could cause one.
First, you need to normalize vn and vlp in the fragment shader (by the way, try to use more descriptive variable names; viewLightPosition is a lot easier to understand than vlp). I know you normalized them in the vertex shader, but the interpolation across the triangle will denormalize them.
Second, this isn't particularly wrong so much as redundant: vec3(gl_LightSource[0].position.xyz). The position.xyz is already a vec3, since the swizzle mask (.xyz) only has 3 components, so you don't need to construct a vec3 from it again.
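For the first point, re-normalizing in the fragment shader might look like this (a small sketch using the poster's variable names):

vec3 n = normalize(vn);   // re-normalize after interpolation
vec3 l = normalize(vlp);
outColor *= max(dot(n, l), 0.0);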

Odd effect with GLSL normals

As a problem somewhat similar to one I had and posted about before, I'm trying to get normals to display correctly in my GLSL app.
For the purposes of my explanation, I'm using the ninjaHead.obj model provided with RenderMonkey for testing (you can grab it here). In the preview window in RenderMonkey, everything looks great:
and the vertex and fragment code generated, respectively, is:
Vertex:
uniform vec4 view_position;

varying vec3 vNormal;
varying vec3 vViewVec;

void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

    // World-space lighting
    vNormal = gl_Normal;
    vViewVec = view_position.xyz - gl_Vertex.xyz;
}
Fragment:
uniform vec4 color;

varying vec3 vNormal;
varying vec3 vViewVec;

void main(void)
{
    float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
    gl_FragColor = v * color;
}
I based my GLSL code on this but I'm not quite getting the expected results...
My vertex shader code:
uniform mat4 P;
uniform mat4 modelRotationMatrix;
uniform mat4 modelScaleMatrix;
uniform mat4 modelTranslationMatrix;
uniform vec3 cameraPosition;

varying vec4 vNormal;
varying vec4 vViewVec;

void main()
{
    vec4 pos = gl_ProjectionMatrix * P * modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
    gl_Position = pos;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;

    vec4 normal4 = vec4(gl_Normal.x, gl_Normal.y, gl_Normal.z, 0);

    // World-space lighting
    vNormal = normal4 * modelRotationMatrix;

    vec4 tempCameraPos = vec4(cameraPosition.x, cameraPosition.y, cameraPosition.z, 0);
    //vViewVec = cameraPosition.xyz - pos.xyz;
    vViewVec = tempCameraPos - pos;
}
My fragment shader code:
varying vec4 vNormal;
varying vec4 vViewVec;

void main()
{
    //gl_FragColor = gl_Color;
    float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
    gl_FragColor = v * gl_Color;
}
However my render produces this...
Does anyone know what might be causing this and/or how to make it work?
EDIT
In response to kvark's comments, here is the model rendered without any normal/lighting calculations to show all triangles being rendered.
And here is the model shaded with the normals used as colors. I believe the problem has been found! Now the question is why it is being rendered like this, and how to solve it? Suggestions are welcome!
SOLUTION
Well everyone, the problem has been solved! Thanks to kvark for all his helpful insight, which has definitely helped my programming practice, but I'm afraid the answer comes from me being a MASSIVE tit... I had an error in the display() function of my code that set the glNormalPointer offset to a random value. It used to be this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, getNormalsBufferObject());
But should have been this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, 0);
So I guess this is a lesson. NEVER mindlessly Ctrl+C and Ctrl+V code to save time on a Friday afternoon AND... When you're sure the part of the code you're looking at is right, the problem is probably somewhere else!
What is your P matrix? (I suppose it's a world->camera view transform).
vNormal = normal4*modelRotationMatrix: why did you change the order of the arguments? Doing that, you are multiplying the normal by the inverse rotation, which you don't really want. Use the standard order instead (modelRotationMatrix * normal4).
vViewVec = tempCameraPos - pos. This is entirely incorrect. pos is your vertex in homogeneous clip space, while tempCameraPos is in world space (I suppose). You need the result in the same space as your normal (world space), so use the world-space vertex position (modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex) in this equation.
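In code, that last point might look roughly like this (a sketch using the poster's own uniform and variable names):

vec4 worldPos = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
vViewVec = vec4(cameraPosition - worldPos.xyz, 0.0);  // both ends of the vector are now in world space, like the normal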
You seem to be mixing GL versions a bit: you are passing the matrices manually via uniforms, but using the fixed-function built-ins to pass vertex attributes. Hm. Anyway...
I sincerely don't like what you're doing to your normals. Have a look:
vec4 normal4 = vec4(gl_Normal.x,gl_Normal.y,gl_Normal.z,0);
vNormal = normal4*modelRotationMatrix;
A normal only stores directional data, so why use a vec4 for it? I believe it's more elegant to just use a vec3. Furthermore, look what happens next: you multiply the normal by the 4x4 model rotation matrix... And additionally, your normal's fourth coordinate is equal to 0, so it's not a correct vector in homogeneous coordinates. I'm not sure that's the main problem here, but I wouldn't be surprised if that multiplication gave you rubbish.
The standard way to transform normals is to multiply a vec3 by the 3x3 submatrix of the model-view matrix (since you're only interested in the orientation, not the translation). To be precise, the most correct approach is to use the inverse transpose of that 3x3 submatrix (this becomes important when you have scaling). In old OpenGL versions you had it precalculated as gl_NormalMatrix.
So instead of the above, you should use something like:

// (...)
varying vec3 vNormal;
// (...)

mat3 normalMatrix = transpose(inverse(mat3(modelRotationMatrix)));
// or, if you don't need scaling, this one should work too:
mat3 normalMatrix = mat3(modelRotationMatrix);

vNormal = normalMatrix * gl_Normal;
That's certainly one thing to fix in your code - I hope it solves your problem.