Flip upside down vertex shader (GLES) - glsl

Given the following vertex shader, what is the simplest, most efficient, and fastest way to flip the coordinates upside down, so that the fragment shader will produce an upside-down image?
attribute vec4 a_position;
attribute vec2 a_texcoord;
varying vec2 v_texcoord;

void main()
{
    v_texcoord = a_texcoord.st;
    gl_Position = a_position;
}

Just flip v_texcoord. So e.g.
v_texcoord = a_texcoord.st * vec2(1.0, -1.0);
Or, I guess:
v_texcoord = vec2(a_texcoord.s, 1.0 - a_texcoord.t);
Depending on what exactly you want to happen to the range of .t.
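Putting it together, the full vertex shader with the second variant (which keeps .t inside the [0, 1] range) would look like:

```glsl
attribute vec4 a_position;
attribute vec2 a_texcoord;
varying vec2 v_texcoord;

void main()
{
    // Mirror .t about the middle of the [0, 1] range so the
    // sampled image comes out upside down.
    v_texcoord = vec2(a_texcoord.s, 1.0 - a_texcoord.t);
    gl_Position = a_position;
}
```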

Related

Rotate sampler2D texture using fragment shader

As per my previous question here, what if I want to rotate a sampler2D texture inside the fragment shader?
In that question I rotated the texture inside the vertex shader:
#version 120

attribute vec3 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;

void main()
{
    const float w = 1.57;
    mat3 A = mat3(cos(w), -sin(w), 0.0,
                  sin(w),  cos(w), 0.0,
                  0.0,     0.0,    1.0);
    gl_Position = vec4(A * a_position, 1.0);
    v_texCoord = a_texCoord;
}
but my fragment shader applies a heavy modification that was designed for a clockwise-rotated texture, so with the rotation done in the vertex shader, a horizontal effect ends up being applied to vertical coordinates by the fragment shader.
Is it possible to rotate the sampler2D before applying the modification?
You cannot rotate a sampler2D, but you can rotate the texture coordinates:
#version 120

attribute vec3 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;

void main()
{
    const float w = 1.57;
    mat2 uvRotate = mat2(cos(w), -sin(w),
                         sin(w),  cos(w));
    gl_Position = vec4(a_position, 1.0);
    v_texCoord = uvRotate * a_texCoord;
}
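Note that the mat2 above rotates the coordinates about the origin (0, 0), so part of the rotated texture can land outside the [0, 1] range. If you want to rotate about the centre of the texture instead, a common variant (a sketch, in the same style as the shader above) is:

```glsl
#version 120

attribute vec3 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;

void main()
{
    const float w = 1.57;
    mat2 uvRotate = mat2(cos(w), -sin(w),
                         sin(w),  cos(w));
    gl_Position = vec4(a_position, 1.0);
    // Shift the pivot to the texture centre, rotate, shift back.
    v_texCoord = uvRotate * (a_texCoord - vec2(0.5)) + vec2(0.5);
}
```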

How to make a retro/neon/glow effect using shaders?

Let's say the concept is to create a map consisting of cubes with a neon aesthetic, such as:
Currently I have this vertex shader:
// Uniforms
uniform mat4 u_projection;
uniform mat4 u_view;
uniform mat4 u_model;

// Vertex attributes
in vec3 a_position;
in vec3 a_normal;
in vec2 a_texture;

vec3 u_light_direction = vec3(1.0, 2.0, 3.0);

// Vertex shader outputs
out vec2 v_texture;
out float v_intensity;

void main()
{
    vec3 normal = normalize((u_model * vec4(a_normal, 0.0)).xyz);
    vec3 light_dir = normalize(u_light_direction);
    v_intensity = max(0.0, dot(normal, light_dir));
    v_texture = a_texture;
    gl_Position = u_projection * u_view * u_model * vec4(a_position, 1.0);
}
And this pixel shader:
in float v_intensity;
in vec2 v_texture;
uniform sampler2D u_texture;
out vec4 fragColor;

void main()
{
    fragColor = texture(u_texture, v_texture) * vec4(v_intensity, v_intensity, v_intensity, 1.0);
}
How would I use this to create a neon effect such as in the example for 3D cubes? The cubes are simply models with a mesh/material. The only change would be to set the material color to black and the outlines to a bright pink or blue (maybe with a glow).
Any help is appreciated. :)
You'd normally implement this as a post-processing effect. First render the scene with bright, saturated colours into a texture, then apply a bloom effect when drawing that texture to the screen.
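As a rough sketch of that second pass, assuming the scene has already been rendered into a texture bound as u_scene (the uniform names and the tiny fixed-size blur are illustrative, not a complete bloom implementation):

```glsl
#version 330 core

in vec2 v_texture;
out vec4 fragColor;

uniform sampler2D u_scene;   // the scene, rendered to a texture
uniform vec2 u_texelSize;    // 1.0 / texture resolution

void main()
{
    // Blur the neighbourhood; bright, saturated pixels bleed outward.
    vec3 bloom = vec3(0.0);
    for (int x = -1; x <= 1; ++x)
        for (int y = -1; y <= 1; ++y)
        {
            vec3 c = texture(u_scene, v_texture + vec2(x, y) * u_texelSize).rgb;
            // Bright pass: only colours above a threshold contribute.
            bloom += max(c - vec3(0.8), vec3(0.0));
        }
    bloom /= 9.0;

    vec3 base = texture(u_scene, v_texture).rgb;
    fragColor = vec4(base + bloom, 1.0);
}
```

In practice you would downsample and blur with a much wider separable Gaussian kernel, but the structure (bright pass, blur, additive combine) is the same.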

Why do my specular highlights show up so strongly on polygon edges?

I have a simple application that draws a sphere with a single directional light. I'm creating the sphere by starting with an octahedron and subdividing each triangle into 4 smaller triangles.
With just diffuse lighting, the sphere looks very smooth. However, when I add specular highlights, the edges of the triangles show up fairly strongly. Here are some examples:
Diffuse only:
Diffuse and Specular:
I believe that the normals are being interpolated correctly. Looking at just the normals, I get this:
In fact, if I switch to a flat shading, where the normals are per-polygon instead of per-vertex, I get this:
In my vertex shader, I'm multiplying the model's normals by the transpose inverse modelview matrix:
#version 330 core

layout (location = 0) in vec4 vPosition;
layout (location = 1) in vec3 vNormal;
layout (location = 2) in vec2 vTexCoord;

out vec3 fNormal;
out vec2 fTexCoord;

uniform mat4 transInvModelView;
uniform mat4 ModelViewMatrix;
uniform mat4 ProjectionMatrix;

void main()
{
    fNormal = vec3(transInvModelView * vec4(vNormal, 0.0));
    fTexCoord = vTexCoord;
    gl_Position = ProjectionMatrix * ModelViewMatrix * vPosition;
}
and in the fragment shader, I'm calculating the specular highlights as follows:
#version 330 core

in vec3 fNormal;
in vec2 fTexCoord;
out vec4 color;

uniform sampler2D tex;
uniform vec4 lightColor;     // RGB, assumes multiplied by light intensity
uniform vec3 lightDirection; // normalized, assumes directional light, lambertian lighting
uniform float specularIntensity;
uniform float specularShininess;
uniform vec3 halfVector;     // Halfway between eye and light
uniform vec4 objectColor;

void main()
{
    vec4 texColor = objectColor;
    float specular = max(dot(halfVector, fNormal), 0.0);
    float diffuse = max(dot(lightDirection, fNormal), 0.0);
    if (diffuse == 0.0)
    {
        specular = 0.0;
    }
    else
    {
        specular = pow(specular, specularShininess) * specularIntensity;
    }
    color = texColor * diffuse * lightColor + min(specular * lightColor, vec4(1.0));
}
I was a little confused about how to calculate the halfVector. I'm doing it on the CPU and passing it in as a uniform. It's calculated like this:
vec3 lightDirection(1.0, 1.0, 1.0);
lightDirection = normalize(lightDirection);
vec3 eyeDirection(0.0, 0.0, 1.0);
eyeDirection = normalize(eyeDirection);
vec3 halfVector = lightDirection + eyeDirection;
halfVector = normalize(halfVector);
glUniform3fv(halfVectorLoc, 1, &halfVector[0]);
Is that the correct formulation for the halfVector? Or does it need to be done in the shaders as well?
Interpolating normals across a face can (and almost always will) result in a shortening of the normal. That's why the highlight is darker in the center of a face and brighter at corners and edges. If you interpolate normals, just re-normalize the normal in the fragment shader:
fNormal = normalize(fNormal);
Btw, you cannot precompute the half vector as it is view dependent (that's the whole point of specular lighting). In your current scenario, the highlight will not change when you just move the camera (keeping the direction).
One way to do this in the shader is to pass an additional uniform for the eye position and then calculate the view direction as eyePosition - vertexPosition. Then continue as you did on the CPU.
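A sketch of that fragment-shader version, assuming you add an eyePosition uniform and pass the view-space vertex position in a new varying fPosition (both names are illustrative, not from the question's code):

```glsl
// Fragment shader excerpt; fPosition/eyePosition are assumed additions.
in vec3 fNormal;
in vec3 fPosition;            // view-space position from the vertex shader
uniform vec3 eyePosition;
uniform vec3 lightDirection;  // normalized

void main()
{
    vec3 n = normalize(fNormal);  // re-normalize after interpolation
    vec3 viewDir = normalize(eyePosition - fPosition);
    vec3 halfVector = normalize(lightDirection + viewDir);
    float specular = max(dot(halfVector, n), 0.0);
    // ... continue with the pow()/diffuse logic as before.
}
```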

cube map implementation using GLSL

I'm currently working with some shader code, and parts of it confuse me.
It uses the incoming gl_Vertex to calculate an eye vector, then a reflection vector, and finally passes that to the fragment shader, where a texel is fetched with textureCube. My question is: is only one pixel produced per gl_Vertex? Where does the interpolation happen in these shaders?
vertex shader:
uniform vec4 eyepos;
varying vec3 reflectvec;
void main(void)
{
    vec4 pos = normalize(gl_ModelViewMatrix * gl_Vertex);
    pos = pos / pos.w;
    vec3 eyevec = normalize(eyepos.xyz - pos.xyz);
    vec3 norm = normalize(gl_NormalMatrix * gl_Normal);
    reflectvec = reflect(-eyevec, norm);
    gl_Position = ftransform();
}
frag shader:
uniform samplerCube cubemap;
varying vec3 reflectvec;
void main(void)
{
    vec4 texcolor = textureCube(cubemap, reflectvec);
    gl_FragColor = texcolor;
}
where is the interpolation happened to those shaders?
The fragment shader is executed for every fragment, and a pixel consists of at least one fragment. Between the vertex and the fragment shader, the varyings passed to the fragment shader are interpolated barycentrically across each triangle.

Phong-specular lighting in glsl (lwjgl)

I'm currently trying to make specular lighting on a sphere using GLSL and the Phong model.
This is what my fragment shader looks like:
#version 120

uniform vec4 color;
uniform vec3 sunPosition;
uniform mat4 normalMatrix;
uniform mat4 modelViewMatrix;
uniform float shininess;
// uniform vec4 lightSpecular;
// uniform vec4 materialSpecular;
varying vec3 viewSpaceNormal;
varying vec3 viewSpacePosition;

vec4 calculateSpecular(vec3 l, vec3 n, vec3 v, vec4 specularLight, vec4 materialSpecular)
{
    vec3 r = -l + 2*(n*l)*n;
    return specularLight * materialSpecular * pow(max(0, dot(r, v)), shininess);
}

void main()
{
    vec3 normal = normalize(viewSpaceNormal);
    vec3 viewSpacePosition = (modelViewMatrix * vec4(gl_FragCoord.x, gl_FragCoord.y, gl_FragCoord.z, 1.0)).xyz;
    vec4 specular = calculateSpecular(sunPosition, normal, viewSpacePosition, vec4(0.3,0.3,0.3,0.3), vec4(0.3,0.3,0.3,0.3));
    gl_FragColor = color + specular;
}
The sunPosition is not moving and is set to the value (2.0f, 3.0f, -1.0f).
The problem is that the image looks nothing like it's supposed to if the specular calculations were correct.
This is how it looks like:
http://i.imgur.com/Na2C6.png
The reason I don't have any ambient/emissive/diffuse lighting in this code is that I want to get the specular part working first.
Thankful for any help!
Edit:
@Darcy Rayner
That indeed helped a lot, though something still seems to be not quite right...
The current code looks like this:
Vertex Shader:
viewSpacePosition = (modelViewMatrix * gl_Vertex).xyz;
viewSpaceSunPosition = (modelViewMatrix * vec4(sunPosition, 1)).xyz;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
viewSpaceNormal = (normalMatrix * vec4(gl_Position.xyz, 0.0)).xyz;
Fragment Shader:
vec4 calculateSpecular(vec3 l, vec3 n, vec3 v, vec4 specularLight, vec4 materialSpecular)
{
    vec3 r = -l + 2*(n*l)*n;
    return specularLight * materialSpecular * pow(max(0, dot(r, v)), shininess);
}

void main()
{
    vec3 normal = normalize(viewSpaceNormal);
    vec3 viewSpacePosition = normalize(viewSpacePosition);
    vec3 viewSpaceSunPosition = normalize(viewSpaceSunPosition);
    vec4 specular = calculateSpecular(viewSpaceSunPosition, normal, viewSpacePosition, vec4(0.7,0.7,0.7,1.0), vec4(0.6,0.6,0.6,1.0));
    gl_FragColor = color + specular;
}
And the sphere looks like this:
-->Picture-link<--
with the sun position: sunPosition = new Vector(12.0f, 15.0f, -1.0f);
Try not using gl_FragCoord, as it is stored in screen coordinates (and I don't think transforming it by the modelViewMatrix will get it back to view coordinates). The easiest thing to do is set viewSpacePosition in your vertex shader as:
// Change gl_Vertex to whatever attribute you are using.
viewSpacePosition = (modelViewMatrix * gl_Vertex).xyz;
This should give you viewSpacePosition in view coordinates (i.e. before projection is applied). You can then go ahead and normalise viewSpacePosition in the fragment shader. I'm not sure if you are storing the sun vector in world coordinates, but you will probably want to transform it into view space and then normalise it as well. Give it a go and see what happens; these things tend to be very error prone.
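One way the "transform it into view space, then normalise" step can be sketched (variable names follow the question; treating -viewSpacePosition as the view direction is an assumption about the intended Phong setup, since in view space the camera sits at the origin):

```glsl
// Vertex shader sketch: bring both position and sun into view space.
viewSpacePosition    = (modelViewMatrix * gl_Vertex).xyz;
viewSpaceSunPosition = (modelViewMatrix * vec4(sunPosition, 1.0)).xyz;

// Fragment shader sketch: build *directions*, not normalised positions.
vec3 l = normalize(viewSpaceSunPosition - viewSpacePosition); // to the light
vec3 v = normalize(-viewSpacePosition);                       // to the camera
```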