I have a diffuse lighting shader which seems to work when the object is not rotating. However, when I apply a rotation transform, the light seems to rotate along with the object. It's as if the object and the light stay still while the camera moves around the object.
Here's my vertex shader code:
#version 110
uniform mat4 projectionMatrix;
uniform mat4 modelviewMatrix;
uniform vec3 lightSource;
attribute vec3 vertex;
attribute vec3 normal;
varying vec2 texcoord;
void main() {
    gl_Position = projectionMatrix * modelviewMatrix * vec4( vertex, 1.0 );
    vec3 N = gl_NormalMatrix * normalize( normal );
    vec4 V = modelviewMatrix * vec4( vertex, 1.0 );
    vec3 L = normalize( lightSource - V.xyz );
    float NdotL = max( 0.0, dot( N, L ) );
    gl_FrontColor = vec4( gl_Color.xyz * NdotL, 1.0 );
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
and here's the code that does the rotation:
scene.LoadIdentity();
scene.Translate( 0.0f, -5.0f, -20.0f );
scene.Rotate( angle, 0.0f, 1.0f, 0.0f );
object->Draw();
I send the eye-space light position through a glUniform3f call inside the object->Draw() function. The light position itself is static, and it is uploaded as:
glm::vec4 lightPos( light.x, light.y, light.z, 1.0 );
glm::vec4 lightEyePos = modelviewMatrix * lightPos;
glUniform3f( uniforms.lightSource, lightEyePos.x, lightEyePos.y, lightEyePos.z );
What's wrong with this approach?
Edit: The glm::lookAt code
Scene scene;
scene.LoadMatrix( projection );
scene.SetMatrixMode( Scene::Modelview );
scene.LoadIdentity();
scene.SetViewMatrix( glm::lookAt( glm::vec3( 0.0f, 0.0f, 0.0f ), glm::vec3( 0.0f, -5.0f, -20.0f ), glm::vec3( 0.0f, 1.0f, 0.0f ) ) );
The SetViewMatrix code:
void Scene::SetViewMatrix( const glm::mat4 &matrix ) {
    viewMatrix = matrix;
    TransformMatrix( matrix );
}
Then I changed the matrix I multiply the light position by from the modelviewMatrix to the viewMatrix:
glm::vec4 lightPos( light.x, light.y, light.z, 1.0 );
glm::vec4 lightEyePos = viewMatrix * lightPos;
glUniform3f( uniforms.lightSource, lightEyePos.x, lightEyePos.y, lightEyePos.z );
Statement 1: The light position is static.
Statement 2: lightEyePos = modelviewMatrix * lightPos;
These two claims are inconsistent. If your light position is supposed to be static, you shouldn't be applying a rotated model matrix to it.
If your lightPos is defined in world coordinates, then you should multiply it by the viewMatrix, not the modelviewMatrix. The modelviewMatrix includes the model matrix, which contains the model's rotation (and you don't want to apply that to a fixed light source).
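For illustration, a minimal sketch of that separation (variable names such as eye, center, up, angle and lightWorldPos are assumptions, not from the question's code):

glm::mat4 view  = glm::lookAt( eye, center, up );
glm::mat4 model = glm::rotate( glm::mat4( 1.0f ), angle, glm::vec3( 0.0f, 1.0f, 0.0f ) );

glm::mat4 modelview = view * model;           // transforms the object's vertices
glm::vec4 lightEyePos = view * lightWorldPos; // the static light gets the view matrix only
glUniform3f( uniforms.lightSource, lightEyePos.x, lightEyePos.y, lightEyePos.z );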
Related
I got an orthographic camera working; however, I wanted to implement a perspective camera so I can do some parallax effects later down the line. I am having some issues implementing it: the depth does not seem to be working correctly. I am rotating a 2D image along the x-axis to simulate it lying somewhat flat, so I can see the projection matrix working. It is still rendering as an orthographic projection, though.
Here is some of my code:
CameraPersp::CameraPersp() :
    _camPos(0.0f, 0.0f, 0.0f), _modelMatrix(1.0f), _viewMatrix(1.0f), _projectionMatrix(1.0f)
{
}
A function called init sets up the matrix variables:
void CameraPersp::init(int screenWidth, int screenHeight)
{
    _screenHeight = screenHeight;
    _screenWidth = screenWidth;
    _modelMatrix = glm::translate(_modelMatrix, glm::vec3(0.0f, 0.0f, 0.0f));
    _modelMatrix = glm::rotate(_modelMatrix, glm::radians(-55.0f), glm::vec3(1.0f, 0.0f, 0.0f));
    _viewMatrix = glm::translate(_viewMatrix, glm::vec3(0.0f, 0.0f, -3.0f));
    _projectionMatrix = glm::perspective(glm::radians(45.0f), static_cast<float>(_screenWidth) / _screenHeight, 0.1f, 100.0f);
}
Initializing a texture to be loaded with x, y, z, width, height, src:
_sprites.back()->init(-0.5f, -0.5f, 0.0f, 1.0f, 1.0f, "src/content/sprites/DungeonCrawlStoneSoupFull/monster/deep_elf_death_mage.png");
Sending the matrices to the vertex shader:
GLint mLocation = _colorProgram.getUniformLocation("M");
glm::mat4 mMatrix = _camera.getMMatrix();
//glUniformMatrix4fv(mLocation, 1, GL_FALSE, &(mMatrix[0][0]));
glUniformMatrix4fv(mLocation, 1, GL_FALSE, glm::value_ptr(mMatrix));
GLint vLocation = _colorProgram.getUniformLocation("V");
glm::mat4 vMatrix = _camera.getVMatrix();
//glUniformMatrix4fv(vLocation, 1, GL_FALSE, &(vMatrix[0][0]));
glUniformMatrix4fv(vLocation, 1, GL_FALSE, glm::value_ptr(vMatrix));
GLint pLocation = _colorProgram.getUniformLocation("P");
glm::mat4 pMatrix = _camera.getPMatrix();
//glUniformMatrix4fv(pLocation, 1, GL_FALSE, &(pMatrix[0][0]));
glUniformMatrix4fv(pLocation, 1, GL_FALSE, glm::value_ptr(pMatrix));
Here is my vertex shader:
#version 460
//The vertex shader operates on each vertex
//input data from VBO. Each vertex is 2 floats
in vec3 vertexPosition;
in vec4 vertexColor;
in vec2 vertexUV;
out vec3 fragPosition;
out vec4 fragColor;
out vec2 fragUV;
//uniform mat4 MVP;
uniform mat4 M;
uniform mat4 V;
uniform mat4 P;
void main() {
    //Set the x,y position on the screen
    //gl_Position.xy = vertexPosition;
    gl_Position = M * V * P * vec4(vertexPosition, 1.0);
    //the z position is zero since we are 2d
    //gl_Position.z = 0.0;
    //indicate that the coordinates are normalized
    gl_Position.w = 1.0;
    fragPosition = vertexPosition;
    fragColor = vertexColor;
    // opengl needs to flip the coordinates
    fragUV = vec2(vertexUV.x, 1.0 - vertexUV.y);
}
I can see the image "squish" a little because it is still rendering the perspective as orthographic. If I remove the rotation on the x-axis, it is no longer squished, because it isn't lying down at all. Any thoughts on what I am doing wrong? I can supply more info upon request, but I think I put in most of the meat of things.
Picture:
You shouldn't modify gl_Position.w:
gl_Position = M * V * P * vec4(vertexPosition, 1.0);
//indicate that the coordinates are normalized  <- not true
gl_Position.w = 1.0; // now the perspective divisor is lost, so the projection isn't correct
Note that the multiplication order is also suspect: with GLSL's column-vector convention, the projection should be applied last, i.e. P * V * M.
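A minimal corrected main(), as a sketch based on the shader above: apply the projection last and leave the w component produced by the projection untouched.

void main() {
    // projection applied last; do not overwrite gl_Position.w afterwards
    gl_Position = P * V * M * vec4(vertexPosition, 1.0);
    fragPosition = vertexPosition;
    fragColor = vertexColor;
    // flip the texture coordinate vertically for OpenGL
    fragUV = vec2(vertexUV.x, 1.0 - vertexUV.y);
}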
The world is made of spheres. Since drawing a sphere in OpenGL takes a lot of triangles, I thought it would be faster to represent a sphere with a point and a radius, then use billboarding in OpenGL to draw it. However, the approach I took causes adjacent spheres to stop touching when the view rotates.
Here's an example:
There are two spheres:
Sphere 1: position (0, 0, -3), radius 0.5
Sphere 2: position (-1, 0, -3), radius 0.5
The projection matrix is defined using:
glm::perspective(glm::radians(120.0f), 1.0f, 1.0f, 100.0f);
Image 1: When there is no rotation, it looks as expected.
Image 2: When there is rotation, the billboarding responds to the camera as expected, but the spheres no longer touch. If they were actual spheres next to each other, you would expect them to touch.
What I have tried:
I tried GL_POINTS, but they didn't work as well because they didn't seem to handle the depth test correctly for me.
I tried a geometry shader that creates a quad, both before and after the projection matrix was applied.
Here's the code I have now that created the images:
Vertex Shader
#version 460
layout(location = 0) in vec3 position;
layout(location = 1) in float radius;
out float radius_vs;
void main()
{
    gl_Position = vec4(position, 1.0);
    radius_vs = radius;
}
Geometry Shader
#version 460
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
layout(location = 2) uniform mat4 view_mat;
layout(location = 3) uniform mat4 projection_mat;
in float radius_vs[];
out vec2 bounds;
void main()
{
    // camera right and up axes in world space (rows of the view matrix's rotation part), scaled by the radius
    vec3 x_dir = vec3(view_mat[0][0], view_mat[1][0], view_mat[2][0]) * radius_vs[0];
    vec3 y_dir = vec3(view_mat[0][1], view_mat[1][1], view_mat[2][1]) * radius_vs[0];
    mat4 fmat = projection_mat * view_mat;
    gl_Position = fmat * vec4(gl_in[0].gl_Position.xyz - x_dir - y_dir, 1.0f);
    bounds = vec2(-1.0f, -1.0f);
    EmitVertex();
    gl_Position = fmat * vec4(gl_in[0].gl_Position.xyz - x_dir + y_dir, 1.0f);
    bounds = vec2(-1.0f, 1.0f);
    EmitVertex();
    gl_Position = fmat * vec4(gl_in[0].gl_Position.xyz + x_dir - y_dir, 1.0f);
    bounds = vec2(1.0f, -1.0f);
    EmitVertex();
    gl_Position = fmat * vec4(gl_in[0].gl_Position.xyz + x_dir + y_dir, 1.0f);
    bounds = vec2(1.0f, 1.0f);
    EmitVertex();
    EndPrimitive();
}
Fragment Shader
#version 460
out vec4 colorOut;
in vec2 bounds;
void main()
{
    vec2 circCoord = bounds;
    if (dot(circCoord, circCoord) > 1.0)
    {
        discard;
    }
    colorOut = vec4(1.0f, 1.0f, 0.0f, 1.0);
}
I am working on a 3D project in DirectX11, and am currently implementing different lights using Frank Luna's 3D Game Programming with DirectX 11 book alongside my existing code.
Currently, I am developing a spotlight which should follow the camera's position and look in the same direction; however, the position being lit is moving oddly. When the position changes, the direction vector of the light seems to track in the (+x, +y, 0) direction. This is best explained with a picture.
It looks here like they are lit properly, and if the camera stays where it is, the spotlight can be moved around as you'd expect, and it tracks the camera direction.
In this image, I've moved the camera closer to the boxes, along the z axis, and the light spot should just get smaller on the nearest box, but it's instead tracking upwards.
This is the code where the spotlight struct is set up to be passed into the constant buffer; these are all of the values in the struct, aside from a float used as padding at the end:
cb.spotLight = SpotLight();
cb.spotLight.ambient = XMFLOAT4(0.5f, 0.5f, 0.5f, 1.0f);
cb.spotLight.specular = XMFLOAT4(0.5, 0.5, 0.5, 10.0);
cb.spotLight.diffuse = XMFLOAT4(0.5, 0.5, 0.5, 1.0);
cb.spotLight.attenuation = XMFLOAT3(1, 1, 1);
cb.spotLight.range = 15;
XMVECTOR cameraP = XMLoadFloat3(&cameraPos);
XMVECTOR s = XMVectorReplicate(cb.spotLight.range);
XMVECTOR l = XMLoadFloat3(&camera.getForwards());
XMVECTOR lookat = XMVectorMultiplyAdd(s, l, cameraP);
XMStoreFloat3(&cb.spotLight.direction, XMVector3Normalize(lookat - XMVectorSet(cameraPos.x, cameraPos.y, cameraPos.z, 1.0f)));
cb.spotLight.position = cameraPos;
cb.spotLight.spot = 96;
Here is the function being used to calculate the ambient, diffuse and specular values of the spotlight in the shader:
void calculateSpotLight(Material mat, SpotLight light, float3 position, float3 normal, float3 toEye,
                         out float4 ambient, out float4 diffuse, out float4 specular)
{
    ambient = float4(0, 0, 0, 0);
    specular = float4(0, 0, 0, 0);
    diffuse = float4(0, 0, 0, 0);
    float3 lightV = light.position - position;
    float distance = length(lightV);
    if (distance > light.range)
    {
        return;
    }
    lightV /= distance;
    ambient = mat.ambient * light.ambient;
    float diffuseFact = dot(lightV, normal);
    [flatten]
    if (diffuseFact > 0.0f)
    {
        float3 vect = reflect(-lightV, normal);
        float specularFact = pow(max(dot(vect, toEye), 0.0f), mat.specular.w);
        diffuse = diffuseFact * mat.diffuse * light.diffuse;
        specular = specularFact * mat.specular * light.specular;
    }
    float spot = pow(max(dot(-lightV, float3(-light.direction.x, -light.direction.y, light.direction.z)), 0.0f), light.spot);
    float attenuation = spot / dot(light.attenuation, float3(1.0f, distance, distance*distance));
    ambient *= spot;
    diffuse *= attenuation;
    specular *= attenuation;
}
And for completeness' sake, the vertex shader and the relevant section of the pixel shader.
VS_OUTPUT VS( float4 Pos : POSITION, float3 NormalL : NORMAL, float2 TexC : TEXCOORD )
{
    VS_OUTPUT output = (VS_OUTPUT)0;
    output.Pos = mul( Pos, World );
    //Get normalised vector to camera position in world coordinates
    output.PosW = normalize(eyePos - output.Pos.xyz);
    output.Pos = mul( output.Pos, View );
    output.Pos = mul( output.Pos, Projection );
    //Getting normalised surface normal
    float3 normalW = mul(float4(NormalL, 0.0f), World).xyz;
    normalW = normalize(normalW);
    output.Norm = normalW;
    output.TexC = TexC;
    return output;
}
float4 PS( VS_OUTPUT input ) : SV_Target
{
    input.Norm = normalize(input.Norm);
    Material newMat;
    newMat.ambient = material.ambient;
    newMat.diffuse = texCol;
    newMat.specular = specCol;
    float4 ambient = (0.0f, 0.0f, 0.0f, 0.0f);
    float4 specular = (0.0f, 0.0f, 0.0f, 0.0f);
    float4 diffuse = (0.0f, 0.0f, 0.0f, 0.0f);
    float4 amb, spec, diff;
    calculateSpotLight(newMat, spotLight, input.PosW, input.Norm, input.PosW, amb, diff, spec);
    ambient += amb;
    specular += spec;
    diffuse += diff;
    //Other light types
    float4 colour;
    colour = ambient + specular + diffuse;
    colour.a = material.diffuse.a;
    return colour;
}
Where did I go wrong?
The third argument, input.PosW, is incorrect here. You must use the position in world space. input.PosW is a normalized vector; it doesn't make any sense to subtract a normalized direction vector from the light position.
You have
calculateSpotLight(newMat, spotLight, input.PosW, input.Norm, input.PosW, amb, diff, spec);
You need the world-space position instead (input.Pos in world space, not projection space):
calculateSpotLight(newMat, spotLight, input.Pos, input.Norm, input.PosW, amb, diff, spec);
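As a sketch of one way to arrange that (PosWorld is a new interpolant added to VS_OUTPUT for illustration; it is not in the original code):

// Vertex shader: keep the world-space position and the to-eye vector separate
output.PosWorld = mul( Pos, World ).xyz;           // world-space position
output.PosW = normalize(eyePos - output.PosWorld); // normalized to-eye vector

// Pixel shader: pass the world-space position as the third argument
calculateSpotLight(newMat, spotLight, input.PosWorld, input.Norm, input.PosW, amb, diff, spec);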
I'm having some issues with my specular lighting. I have ambient and diffuse working, but I am now looking at specular to try to make a nice-looking model.
I have the following in my vertex shader:
#version 330
layout (location = 0) in vec3 Position;
layout (location = 1) in vec3 Normal;
out vec4 Colour0;
// Transforms
uniform mat4 gModelToWorldTransform;
uniform mat4 gWorldToViewToProjectionTransform;
// Ambient light parameters
uniform vec3 gAmbientLightIntensity;
// Directional light parameters
uniform vec3 gDirectionalLightIntensity;
uniform vec3 gDirectionalLightDirection;
// Material constants
uniform float gKa;
uniform float gKd;
uniform float gKs;
uniform float gKsStrength;
void main()
{
    // Transform the vertex from local space to homogeneous clip space
    vec4 vertexPositionInModelSpace = vec4(Position, 1);
    vec4 vertexInWorldSpace = gModelToWorldTransform * vertexPositionInModelSpace;
    vec4 vertexInHomogeneousClipSpace = gWorldToViewToProjectionTransform * vertexInWorldSpace;
    gl_Position = vertexInHomogeneousClipSpace;
    // Calculate the directional light intensity at the vertex
    // Find the normal in world space and normalise it
    vec3 normalInWorldSpace = (gModelToWorldTransform * vec4(Normal, 0.0)).xyz;
    normalInWorldSpace = normalize(normalInWorldSpace);
    // Calculate the ambient light intensity at the vertex
    // Ia = Ka * ambientLightIntensity
    vec4 ambientLightIntensity = gKa * vec4(gAmbientLightIntensity, 1.0);
    // Setup the light direction and normalise it
    vec3 lightDirection = normalize(-gDirectionalLightDirection);
    //lightDirection = normalize(gDirectionalLightDirection);
    // Id = kd * lightIntensity * N.L
    // Calculate N.L
    float diffuseFactor = dot(normalInWorldSpace, lightDirection);
    diffuseFactor = clamp(diffuseFactor, 0.0, 1.0);
    // N.L * light source colour * intensity
    vec4 diffuseLightIntensity = gKd * vec4(gDirectionalLightIntensity, 1.0f) * diffuseFactor;
    vec3 lightReflect = normalize(reflect(gDirectionalLightDirection, Normal));
    //Calculate the specular light intensity at the vertex
    float specularFactor = dot(normalInWorldSpace, lightReflect);
    specularFactor = pow(specularFactor, gKsStrength);
    vec4 specularLightIntensity = gKs * vec4(gDirectionalLightIntensity, 1.0f) * specularFactor;
    // Final vertex colour is the product of the vertex colour
    // and the total light intensity at the vertex
    vec4 colour = vec4(0.0, 1.0, 0.0, 1.0);
    Colour0 = colour * (ambientLightIntensity + diffuseLightIntensity + specularLightIntensity);
}
Then in my main.cpp I have some code to try to get this working together. The specular light is definitely doing something; only, rather than making the model look shiny, it seems to intensify the light to the point where I can't see any details.
I create the following variables:
// Lighting uniforms location
GLuint gAmbientLightIntensityLoc;
GLuint gDirectionalLightIntensityLoc;
GLuint gDirectionalLightDirectionLoc;
GLuint gSpecularLightIntensityLoc;
// Materials uniform location
GLuint gKaLoc;
GLuint gKdLoc;
GLuint gKsLoc;
GLuint gKsStrengthLoc;
I then set my variables like so in the renderSceneCallBack() function, which is called in the main:
// Set the material properties
glUniform1f(gKaLoc, 0.2f);
glUniform1f(gKdLoc, 0.9f);
glUniform1f(gKsLoc, 0.5f);
glUniform1f(gKsStrengthLoc, 0.5f);
I then create an initLights() function which I would like to handle all lighting; this is also called in the main:
static void initLights()
{
    // Setup the ambient light
    vec3 ambientLightIntensity = vec3(0.2f, 0.2f, 0.2f);
    glUniform3fv(gAmbientLightIntensityLoc, 1, &ambientLightIntensity[0]);
    // Setup the directional light
    vec3 directionalLightDirection = vec3(0.0f, 0.0f, -1.0f);
    normalize(directionalLightDirection);
    glUniform3fv(gDirectionalLightDirectionLoc, 1, &directionalLightDirection[0]);
    vec3 directionalLightIntensity = vec3(0.8f, 0.8f, 0.8f);
    glUniform3fv(gDirectionalLightIntensityLoc, 1, &directionalLightIntensity[0]);
    // Setup the specular light
    vec3 specularLightIntensity = vec3(0.5f, 0.5f, 0.5f);
    glUniform3fv(gSpecularLightIntensityLoc, 1, &specularLightIntensity[0]);
}
Can anyone see what I might be doing wrong? I could have some of the calculations wrong and just not see it. Both the ambient and diffuse lighting are working correctly. This photo illustrates what's currently happening: ambient on the left, diffuse in the middle, and specular with strength set to 30 on the right.
Answer
I forgot to retrieve this uniform location in the main:
gKsStrengthLoc = glGetUniformLocation(shaderProgram, "gKsStrength");
//assert(gDirectionalLightDirectionLoc != 0xFFFFFFFF);
Everything works now using the selected answer.
Your value for gKsStrength looks way too small:
glUniform1f(gKsStrengthLoc, 0.5f);
This value controls how shiny the object is, with higher values making it more shiny. This makes sense if you look at the calculation in the shader:
specularFactor = pow(specularFactor, gKsStrength);
The larger the exponent, the faster the value drops off, which means that the specular highlight becomes narrower.
Typical values might be something like 10.0f for moderately shiny, 30.0f for quite shiny, and even higher for very shiny materials.
With your value of 0.5f, you get a very wide specular "highlight". Your value for the specular intensity is also fairly high (0.5), so the highlight will cover most of the object and saturate the colors over large parts of it.
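In concrete terms, a sketch with illustrative values (assuming the uniform locations from the question; the numbers are not from the original post):

glUniform1f(gKsLoc, 0.3f);          // specular reflectance, toned down slightly
glUniform1f(gKsStrengthLoc, 30.0f); // shininess exponent: higher = narrower highlight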
I'm trying to implement the paper "Single-Pass Wireframe Rendering", which seems pretty simple, but it isn't giving me what I'd expect as far as thick, dark values.
The paper didn't give the exact code to figure out the altitudes, so I did it as I saw fit. The code should project the three vertices into viewport space, get their "altitudes", and send them to the fragment shader.
The fragment shader determines the distance to the closest edge and generates an edgeIntensity. I'm not sure what I'm supposed to do with this value, but since it's supposed to scale between [0,1], I multiply the inverse against my outgoing color, but it's just very weak.
I had a few questions that I'm not sure are addressed in the paper. First, should the altitudes be calculated in 2D instead of 3D? Second, they cite DirectX features, and DirectX has a different viewport-space z-range, correct? Does that matter? I'm premultiplying the outgoing altitude distances by the w-value of the viewport-space coordinates, as they recommend, to correct for perspective projection.
Image: trying to correct for perspective projection
Image: no correction (not premultiplying by the w-value)
The uncorrected image clearly fails to account for perspective on the sides facing away, while the perspective-corrected one has very weak values.
Can anyone see what's wrong with my code or how to go about debugging it from here?
my vertex code in GLSL...
float altitude(in vec3 a, in vec3 b, in vec3 c) { // for an ABC triangle
    vec3 ba = a - b;
    vec3 bc = c - b;
    vec3 ba_onto_bc = dot(ba,bc) * bc;
    return(length(ba - ba_onto_bc));
}
in vec3 vertex; // incoming vertex
in vec3 v2; // first neighbor (CCW)
in vec3 v3; // second neighbor (CCW)
in vec4 color;
in vec3 normal;
varying vec3 worldPos;
varying vec3 worldNormal;
varying vec3 altitudes;
uniform mat4 objToWorld;
uniform mat4 cameraPV;
uniform mat4 normalToWorld;
void main() {
    worldPos = (objToWorld * vec4(vertex,1.0)).xyz;
    worldNormal = (normalToWorld * vec4(normal,1.0)).xyz;
    //worldNormal = normal;
    gl_Position = cameraPV * objToWorld * vec4(vertex,1.0);
    // also put the neighboring polygons in viewport space
    vec4 vv1 = gl_Position;
    vec4 vv2 = cameraPV * objToWorld * vec4(v2,1.0);
    vec4 vv3 = cameraPV * objToWorld * vec4(v3,1.0);
    altitudes = vec3(vv1.w * altitude(vv1.xyz,vv2.xyz,vv3.xyz),
                     vv2.w * altitude(vv2.xyz,vv3.xyz,vv1.xyz),
                     vv3.w * altitude(vv3.xyz,vv1.xyz,vv2.xyz));
    gl_FrontColor = color;
}
and my fragment code...
varying vec3 worldPos;
varying vec3 worldNormal;
varying vec3 altitudes;
uniform vec3 cameraPos;
uniform vec3 lightDir;
uniform vec4 singleColor;
uniform float isSingleColor;
void main() {
    // determine frag distance to closest edge
    float d = min(min(altitudes.x, altitudes.y), altitudes.z);
    float edgeIntensity = exp2(-2.0*d*d);
    vec3 L = lightDir;
    vec3 V = normalize(cameraPos - worldPos);
    vec3 N = normalize(worldNormal);
    vec3 H = normalize(L+V);
    //vec4 color = singleColor;
    vec4 color = isSingleColor*singleColor + (1.0-isSingleColor)*gl_Color;
    //vec4 color = gl_Color;
    float amb = 0.6;
    vec4 ambient = color * amb;
    vec4 diffuse = color * (1.0 - amb) * max(dot(L, N), 0.0);
    vec4 specular = vec4(0.0);
    gl_FragColor = (edgeIntensity * vec4(0.0)) + ((1.0-edgeIntensity) * vec4(ambient + diffuse + specular));
}
I have implemented swine's idea, and the result is perfect. Here is my screenshot:
struct MYBUFFEREDVERTEX {
    float x, y, z;
    float nx, ny, nz;
    float u, v;
    float bx, by, bz;
};
const MYBUFFEREDVERTEX g_vertex_buffer_data[] = {
    -1.0f, -1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,

    1.0f, -1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 1.0f, 0.0f,

    -1.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
    0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,

    1.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
    1.0f, 1.0f,
    1.0f, 0.0f, 0.0f,
};
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
vertex shader:
#ifdef GL_ES
// Set default precision to medium
precision mediump int;
precision mediump float;
#endif
uniform mat4 u_mvp_matrix;
uniform vec3 u_light_direction;
attribute vec3 a_position;
attribute vec3 a_normal;
attribute vec2 a_texcoord;
attribute vec3 a_barycentric;
varying vec2 v_texcoord;
varying float v_light_intensity;
varying vec3 v_barycentric;
void main()
{
    // Calculate vertex position in screen space
    gl_Position = u_mvp_matrix * vec4(a_position, 1.0);
    // calculate light intensity, range of 0.3 ~ 1.0
    v_light_intensity = max(dot(u_light_direction, a_normal), 0.3);
    // Pass texture coordinate to fragment shader
    v_texcoord = a_texcoord;
    // Pass barycentric coordinates to fragment shader
    v_barycentric = a_barycentric;
}
fragment shader:
#ifdef GL_ES
// Set default precision to medium
precision mediump int;
precision mediump float;
#endif
uniform sampler2D u_texture;
varying vec2 v_texcoord;
varying float v_light_intensity;
varying vec3 v_barycentric;
void main()
{
    float min_dist = min(min(v_barycentric.x, v_barycentric.y), v_barycentric.z);
    float edgeIntensity = 1.0 - step(0.005, min_dist);
    // Set diffuse color from texture
    vec4 diffuse = texture2D(u_texture, v_texcoord) * vec4(vec3(v_light_intensity), 1.0);
    gl_FragColor = edgeIntensity * vec4(0.0, 1.0, 1.0, 1.0) + (1.0 - edgeIntensity) * diffuse;
}
First, your function altitude() is flawed: ba_onto_bc is calculated incorrectly because bc is not unit length. Either normalize bc, or divide ba_onto_bc by dot(bc, bc), which is the length squared (that way you save calculating the square root).
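Applied to the function from the question, the corrected projection would look like this (using the division form, which avoids the square root):

float altitude(in vec3 a, in vec3 b, in vec3 c) { // for an ABC triangle
    vec3 ba = a - b;
    vec3 bc = c - b;
    // project ba onto bc properly: scale by dot(ba, bc) / |bc|^2
    vec3 ba_onto_bc = (dot(ba, bc) / dot(bc, bc)) * bc;
    return length(ba - ba_onto_bc);
}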
The altitudes should be calculated in 2D if you want constant-thickness edges, or in 3D if you want perspective-correct edges.
It would be much easier to just use barycentric coordinates as a separate vertex attribute (i.e. vertex 0 of the triangle gets (1, 0, 0), the second vertex (0, 1, 0), and the last vertex (0, 0, 1)). In the fragment shader you would take the minimum of the three and use step() or smoothstep() to compute the edge-ness, as sketched after the next paragraph.
That would only require one attribute instead of the current two, and it would also eliminate the need to calculate altitudes in the vertex shader (although that may still be useful if you want to prescale the barycentric coordinates so you get uniformly thick lines; calculate it offline in that case). It should also work pretty much instantly, so it would be a good starting point for getting to the desired behavior.
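A rough sketch of the smoothstep() variant, using the variable names from the implementation above (the 0.003/0.008 thresholds are illustrative and control the line width and edge softness):

float min_dist = min(min(v_barycentric.x, v_barycentric.y), v_barycentric.z);
float edgeIntensity = 1.0 - smoothstep(0.003, 0.008, min_dist);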