Drawing circular splats from vertex position - OpenGL

I want to draw circular splats; however, my only data is the vertex position. So I draw the points in C++ and they appear at the right position. My point size is set to 20 pixels, so the points are big enough to discard some pixels to get a circle.
I thought I could send the vertex's pixel position from the vertex shader to the fragment shader and then calculate the distance between the fragment position and the vertex position. This results in circular splats, but most of them end up hidden. What am I doing wrong?
Cg vertex shader:
void main(
    float4 pvec: POSITION,
    uniform float4x4 modelView,
    uniform float4x4 modelViewIT,
    uniform float4x4 modelViewProj,
    uniform float2 wsize,
    uniform float near,
    uniform float top,
    uniform float bottom,
    out float4 pout: POSITION,
    out float4 color: COLOR,
    out float2 vpos: TEXCOORD0
)
{
    // position of point correctly projected
    pout = mul(modelViewProj, pvec);
    vpos = float2((pout.xy * 0.5 + 0.5) * wsize);
    color = float4(0.0, 0.0, 1.0, 1.0);
}
Cg fragment shader:
void main(
    float4 col : COLOR,
    float2 wpos : WPOS,
    float2 vpos: TEXCOORD0,
    uniform float2 unproj_scale,
    uniform float2 unproj_offset,
    uniform float f_near,
    uniform float zb_scale,
    uniform float zb_offset,
    uniform float epsilon,
    out float4 colorout : COLOR
)
{
    float xDiff = abs(vpos.x - wpos.x);
    float yDiff = abs(vpos.y - wpos.y);
    if ((xDiff * xDiff + yDiff * yDiff) > 20.0) {
        col.r = 1.0;
    }
    colorout = col;
}

When calculating the position in screen space in the vertex shader, remember that the graphics pipeline performs the so-called perspective division to convert a point from clip space (x, y, z, w) to normalized device coordinates (x/w, y/w, z/w, 1). Only after that division is the position in the range [-1, 1], and only then is your transformation to screen space correct.
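For example, a minimal sketch of the corrected math in the question's Cg vertex shader (assuming wsize holds the viewport size in pixels, as above) would be:
pout = mul(modelViewProj, pvec);
float2 ndc = pout.xy / pout.w;      // manual perspective division: clip space -> NDC in [-1,1]
vpos = (ndc * 0.5 + 0.5) * wsize;   // NDC -> window-space pixel position
The distance test in the fragment shader then compares two positions that are both in window space.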

Related

How do I align the raytraced spheres from my fragment shader with GL_POINTS?

I have a very simple shader program that takes in a bunch of position data as GL_POINTS, which generate screen-aligned squares of fragments as usual, with a size depending on depth. In the fragment shader I wanted to draw a very simple ray-traced sphere for each one, with just the simple shadow on the side of the sphere facing away from the light. I went to this shadertoy to try to figure it out on my own. I used the sphIntersect function for ray-sphere intersection, and sphNormal to get the normal vectors on the sphere for lighting. The problem is that the spheres do not align with the squares of fragments, causing them to be cut off. This is because I am not sure how to match the projections of the spheres and the vertex positions so that they line up. Can I have an explanation of how to do this?
Here is a picture for reference.
Here are my vertex and fragment shaders for reference:
//vertex shader:
#version 460
layout(location = 0) in vec4 position; // position of each point in space
layout(location = 1) in vec4 color; //color of each point in space
layout(location = 2) uniform mat4 view_matrix; // projection * camera matrix
layout(location = 6) uniform mat4 cam_matrix; //just the camera matrix
out vec4 col; // color of vertex
out vec4 posi; // position of vertex
void main() {
vec4 p = view_matrix * vec4(position.xyz, 1.0);
gl_PointSize = clamp(1024.0 * position.w / p.z, 0.0, 4000.0);
gl_Position = p;
col = color;
posi = cam_matrix * position;
}
//fragment shader:
#version 460
in vec4 col; // color of vertex associated with this fragment
in vec4 posi; // position of the vertex associated with this fragment relative to camera
out vec4 f_color;
layout (depth_less) out float gl_FragDepth;
float sphIntersect( in vec3 ro, in vec3 rd, in vec4 sph )
{
vec3 oc = ro - sph.xyz;
float b = dot( oc, rd );
float c = dot( oc, oc ) - sph.w*sph.w;
float h = b*b - c;
if( h<0.0 ) return -1.0;
return -b - sqrt( h );
}
vec3 sphNormal( in vec3 pos, in vec4 sph )
{
return normalize(pos-sph.xyz);
}
void main() {
vec4 c = clamp(col, 0.0, 1.0);
vec2 p = ((2.0*gl_FragCoord.xy)-vec2(1920.0, 1080.0)) / 2.0;
vec3 ro = vec3(0.0, 0.0, -960.0 );
vec3 rd = normalize(vec3(p.x, p.y,960.0));
vec3 lig = normalize(vec3(0.6,0.3,0.1));
vec4 k = vec4(posi.x, posi.y, -posi.z, 2.0*posi.w);
float t = sphIntersect(ro, rd, k);
vec3 ps = ro + (t * rd);
vec3 nor = sphNormal(ps, k);
if(t < 0.0) c = vec4(1.0);
else c.xyz *= clamp(dot(nor,lig), 0.0, 1.0);
f_color = c;
gl_FragDepth = t * 0.0001;
}
Looks like you have many spheres so I would do this:
Input data
I would have a VBO containing x, y, z, r describing your spheres. You will also need your view transform (uniform) so that a ray direction and start position can be computed for each fragment. Something like my vertex shader here:
Reflection and refraction impossible without recursive ray tracing?
Create a BBOX in the geometry shader and convert your POINT to a QUAD or POLYGON
Note that you have to account for perspective. If you are not familiar with geometry shaders, see:
rendring cubics in GLSL
where I emit a sequence of OBBs from input lines...
In the fragment shader, raytrace the sphere
You have to compute the intersection between the sphere and the ray, choose the closer intersection, and compute its depth and normal (for lighting). In case of no intersection you have to discard the fragment (discard;)!
From what I can see in your images, your QUADs do not correspond to your spheres, hence the clipping. You also do not discard fragments with no intersection, so you overwrite already-rendered content around the last rendered spheres with the background color, leaving only a single sphere per QUAD regardless of how many spheres are really there...
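As a minimal sketch of that last point, applied to the question's fragment shader (variable names as posted above), the background write can be replaced by a discard:
float t = sphIntersect(ro, rd, k);
if (t < 0.0) discard;                    // no ray/sphere hit: keep whatever was already rendered here
vec3 ps = ro + t * rd;
vec3 nor = sphNormal(ps, k);
c.xyz *= clamp(dot(nor, lig), 0.0, 1.0); // shade only actual hits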
To create a ray direction from screen space that matches a perspective projection matrix, the following formula can be used:
vec3 rd = normalize(vec3(((2.0 / screenWidth) * gl_FragCoord.xy) - vec2(aspectRatio, 1.0), -proj_matrix[1][1]));
The value of 2.0 / screenWidth can be pre-computed, or OpenGL's built-in uniforms can be used.
To get a bounding box or other shape for your spheres, it is very important to use camera-facing shapes, and not camera-plane-facing shapes. Use the following process where position is the incoming VBO position data, and the w-component of position is the radius:
vec4 p = vec4((cam_matrix * vec4(position.xyz, 1.0)).xyz, position.w); // sphere center in view space, w = radius
o.vpos = p;
float l2 = dot(p.xyz, p.xyz);  // squared distance from camera to sphere center
float r2 = p.w * p.w;          // squared radius
float k = 1.0 - (r2/l2);
float radius = p.w * sqrt(k);  // radius of the visible silhouette circle
if(l2 < r2) {
    // camera is inside the sphere: fall back to a billboard just in front of the camera
    p = vec4(0.0, 0.0, -p.w * 0.49, p.w);
    radius = p.w;
    k = 0.0;
}
// basis vectors spanning the camera-facing silhouette disc
vec3 hx = radius * normalize(vec3(-p.z, 0.0, p.x));
vec3 hy = radius * normalize(vec3(-p.x * p.y, p.z * p.z + p.x * p.x, -p.z * p.y));
p.xyz *= k; // move the billboard center onto the silhouette plane
Then use hx and hy as basis vectors for any 2D shape that you want the billboard to be shaped like for the vertices. Don't forget later to multiply each vertex by a perspective matrix to get the final position of each vertex. Here is a visualization of the billboarding on desmos using a hexagon shape: https://www.desmos.com/calculator/yeeew6tqwx
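As an illustration, a minimal geometry-shader sketch (assuming p, hx and hy from the snippet above are forwarded through a hypothetical interface block VS_OUT, and proj_matrix is the perspective matrix) could expand each point into such a camera-facing quad:
#version 460
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
in VS_OUT { vec4 p; vec3 hx; vec3 hy; } gs_in[]; // center (xyz, w = radius) and the two basis vectors
uniform mat4 proj_matrix;                        // perspective projection
void main() {
    vec3 c  = gs_in[0].p.xyz;
    vec3 hx = gs_in[0].hx;
    vec3 hy = gs_in[0].hy;
    // emit the four corners of the camera-facing quad
    gl_Position = proj_matrix * vec4(c - hx - hy, 1.0); EmitVertex();
    gl_Position = proj_matrix * vec4(c + hx - hy, 1.0); EmitVertex();
    gl_Position = proj_matrix * vec4(c - hx + hy, 1.0); EmitVertex();
    gl_Position = proj_matrix * vec4(c + hx + hy, 1.0); EmitVertex();
    EndPrimitive();
}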

GLSL: Fade 2D grid based on distance from camera

I am currently trying to draw a 2D grid on a single quad using only shaders. I am using SFML as the graphics library and sf::View to control the camera. So far I have been able to draw an anti-aliased multi level grid. The first level (blue) outlines a chunk and the second level (grey) outlines the tiles within a chunk.
I would now like to fade grid levels based on the distance from the camera. For example, the chunk grid should fade in as the camera zooms in. The same should be done for the tile grid after the chunk grid has been completely faded in.
I am not sure how this could be implemented as I am still new to OpenGL and GLSL. If anybody has any pointers on how this functionality can be implemented, please let me know.
Vertex Shader
#version 130
out vec2 texCoords;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    texCoords = (gl_TextureMatrix[0] * gl_MultiTexCoord0).xy;
}
Fragment Shader
#version 130
uniform vec2 chunkSize = vec2(64.0, 64.0);
uniform vec2 tileSize = vec2(16.0, 16.0);
uniform vec3 chunkBorderColor = vec3(0.0, 0.0, 1.0);
uniform vec3 tileBorderColor = vec3(0.5, 0.5, 0.5);
uniform bool drawGrid = true;
in vec2 texCoords;
void main() {
    vec2 uv = texCoords.xy * chunkSize;
    vec3 color = vec3(1.0, 1.0, 1.0);
    if(drawGrid) {
        float aa = length(fwidth(uv));
        vec2 halfChunkSize = chunkSize / 2.0;
        vec2 halfTileSize = tileSize / 2.0;
        vec2 a = abs(mod(uv - halfChunkSize, chunkSize) - halfChunkSize);
        vec2 b = abs(mod(uv - halfTileSize, tileSize) - halfTileSize);
        color = mix(
            color,
            tileBorderColor,
            smoothstep(aa, .0, min(b.x, b.y))
        );
        color = mix(
            color,
            chunkBorderColor,
            smoothstep(aa, .0, min(a.x, a.y))
        );
    }
    gl_FragColor.rgb = color;
    gl_FragColor.a = 1.0;
}
You need to split the multiplication in your vertex shader into two parts:
// have a variable to be interpolated per fragment
out vec2 vertex_coordinate;
...
{
    // this will store the coordinates of the vertex
    // before it is projected (i.e. its "world" coordinates)
    vec4 world_vertex = gl_ModelViewMatrix * gl_Vertex;
    vertex_coordinate = world_vertex.xy;
    // get your projected vertex position as before
    gl_Position = gl_ProjectionMatrix * world_vertex;
    ...
}
Then in the fragment shader you change the color based on the world vertex coordinate and the camera position:
in vec2 vertex_coordinate;
// have to update this value, every time your camera changes its position
uniform vec2 camera_world_position = vec2(64.0, 64.0);
...
{
    ...
    // calculate the distance from the fragment in world coordinates to the camera
    float fade_factor = length(camera_world_position - vertex_coordinate);
    // make it 1 near the camera and 0 once it is more than 100 units away
    fade_factor = clamp(1.0 - fade_factor / 100.0, 0.0, 1.0);
    // update your final color with this factor
    gl_FragColor.rgb = color * fade_factor;
    ...
}
The second way to do it is to use the projected coordinate's w. I personally prefer to calculate the distance in units of space. I did not test this code and it might have some trivial syntax errors, but if you understand the idea, you can apply it in any other way.
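A rough sketch of that second, w-based alternative (assuming a perspective projection, where the clip-space w grows with distance) could pass the w component along instead of a world position:
// vertex shader: forward the clip-space w
out float clip_w;
...
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
clip_w = gl_Position.w;
// fragment shader: fade out over ~100 units, as in the example above
in float clip_w;
...
float fade_factor = clamp(1.0 - clip_w / 100.0, 0.0, 1.0);
gl_FragColor.rgb = color * fade_factor;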

Directional shadow mapping in deferred shading

I'm implementing directional shadow mapping in deferred shading.
First, I render a depth map from the light's view (orthographic projection).
Result:
I intend to do VSM, so the above buffer is R32G32, storing depth and depth * depth.
Then, for a full-screen shadow shading pass (after the lighting pass), I write the following pixel shader:
#version 330
in vec2 texCoord; // screen coordinate
out vec3 fragColor; // output color on the screen
uniform mat4 lightViewProjMat; // lightView * lightProjection (ortho)
uniform sampler2D sceneTexture; // lit scene with one directional light
uniform sampler2D shadowMapTexture;
uniform sampler2D scenePosTexture; // store fragment's 3D position
void main() {
    vec3 fragPos = texture(scenePosTexture, texCoord).xyz; // get 3D position of pixel
    vec4 fragPosLightSpace = lightViewProjMat * vec4(fragPos, 1.0); // project it to light-space view: lightView * lightProjection
    // projective texture mapping
    vec3 coord = fragPosLightSpace.xyz / fragPosLightSpace.w;
    coord = coord * 0.5 + 0.5;
    float lightViewDepth; // depth value in the depth buffer - the maximum depth that light can see
    float currentDepth;   // depth of screen pixel, maybe not visible to the light, that's how shadow mapping works
    vec2 moments;         // depth and depth * depth for later variance shadow mapping
    moments = texture(shadowMapTexture, coord.xy).xy;
    lightViewDepth = moments.x;
    currentDepth = fragPosLightSpace.z;
    float lit_factor = 0;
    if (currentDepth <= lightViewDepth)
        lit_factor = 1; // pixel is visible to the light
    else
        lit_factor = 0; // the light doesn't see this pixel
    // I don't do VSM yet, just want to see black or full-color pixels
    fragColor = texture(sceneTexture, texCoord).rgb * lit_factor;
}
The rendered result is a black screen, but if I hard-code lit_factor to be 1, the result is:
Basically, that's what the sceneTexture looks like.
So I think either my depth value is wrong (which is unlikely) or my projection (the light-space projection in the above shader / projective texture mapping) is wrong. Could you validate it for me?
My shadow map generation code is:
// vertex shader
#version 330 compatibility
uniform mat4 lightViewMat; // lightView
uniform mat4 lightViewProjMat; // lightView * lightProj
in vec3 in_vertex;
out float depth;
void main() {
    vec4 vert = vec4(in_vertex, 1.0);
    depth = (lightViewMat * vert).z / (500 * 0.2); // 500 is far value, this line tunes the depth precision
    gl_Position = lightViewProjMat * vert;
}
// pixel shader
#version 330
in float depth;
out vec2 out_depth;
void main() {
out_depth = vec2(depth, depth * depth);
}
The z component of the fragment shader's built-in variable gl_FragCoord contains the depth value in the range [0.0, 1.0]. This is the value you should store in the depth map:
out_depth = vec2(gl_FragCoord.z, depth * depth);
After the calculation
vec3 fragPos = texture(scenePosTexture, texCoord).xyz; // get 3D position of pixel
vec4 fragPosLightSpace = lightViewProjMat * vec4(fragPos, 1.0); // project it to light-space view: lightView * lightProjection
vec3 ndc_coord = fragPosLightSpace.xyz / fragPosLightSpace.w;
the variable ndc_coord contains a normalized device coordinate, where all components are in the range [-1.0, 1.0].
The z component of the normalized device coordinate can be converted to the depth value (if the depth range is [0.0, 1.0]) by
float currentDepth = ndc_coord.z * 0.5 + 0.5;
This value can be compared to the value from the depth map, because currentDepth and lightViewDepth are calculated with the same view matrix and projection matrix:
moments = texture(shadowMapTexture, coord.xy).xy;
lightViewDepth = moments.x;
if (currentDepth <= lightViewDepth)
lit_factor = 1; // pixel is visible to the light
else
lit_factor = 0; // the light doesn't see this pixel
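Putting those pieces together, the comparison in the shading pass (assuming the shadow map now stores gl_FragCoord.z in its first channel) becomes roughly:
vec3 fragPos = texture(scenePosTexture, texCoord).xyz;
vec4 fragPosLightSpace = lightViewProjMat * vec4(fragPos, 1.0);
vec3 ndc_coord = fragPosLightSpace.xyz / fragPosLightSpace.w;  // all components in [-1,1]
vec2 coord = ndc_coord.xy * 0.5 + 0.5;                         // shadow-map texture coordinates
float currentDepth = ndc_coord.z * 0.5 + 0.5;                  // same [0,1] range as gl_FragCoord.z
float lightViewDepth = texture(shadowMapTexture, coord).x;
float lit_factor = (currentDepth <= lightViewDepth) ? 1.0 : 0.0;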
This is the depth you store in the shadow map:
depth = (lightViewMat * vert).z / (500 * 0.2);
This is the depth you compare the read back value to:
vec4 fragPosLightSpace = lightViewProjMat * vec4(fragPos, 1.0);
currentDepth = fragPosLightSpace.z;
If fragPos is in world space then I assume lightViewMat * vert == fragPos. You are compressing depth by dividing by 500 * 0.2, but that does not equal fragPosLightSpace.z.
Hint: write out the value of currentDepth in one channel and the value from the shadow map in another channel; you can then compare them visually, or in RenderDoc or similar.
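For example, a quick (hypothetical) debug output in the shading pass above could pack the two depths into separate channels:
// temporarily replace the lit output so both depths can be inspected side by side
fragColor = vec3(currentDepth, lightViewDepth, 0.0);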

Analysis of a shader in VR

I would like to create a shader like that one, which takes world coordinates and creates waves. I would like to analyse the video and know the steps that are required.
I'm not looking for code, I'm just looking for ideas on how to implement that using GLSL or HLSL or any other language.
Here is a low-quality, low-fps GIF in case the link breaks.
Here is the fragment shader:
#version 330 core
// Interpolated values from the vertex shaders
in vec2 UV;
in vec3 Position_worldspace;
in vec3 Normal_cameraspace;
in vec3 EyeDirection_cameraspace;
in vec3 LightDirection_cameraspace;
// highlight effect
in float pixel_z; // fragment z coordinate in [LCS]
uniform float animz; // highlight animation z coordinate [GCS]
// Output data
out vec4 color;
vec3 c;
// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;
uniform mat4 MV;
uniform vec3 LightPosition_worldspace;
void main(){
// Light emission properties
// You probably want to put them as uniforms
vec3 LightColor = vec3(1,1,1);
float LightPower = 50.0f;
// Material properties
vec3 MaterialDiffuseColor = texture( myTextureSampler, UV ).rgb;
vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor;
vec3 MaterialSpecularColor = vec3(0.3,0.3,0.3);
// Distance to the light
float distance = length( LightPosition_worldspace - Position_worldspace );
// Normal of the computed fragment, in camera space
vec3 n = normalize( Normal_cameraspace );
// Direction of the light (from the fragment to the light)
vec3 l = normalize( LightDirection_cameraspace );
// Cosine of the angle between the normal and the light direction,
// clamped above 0
// - light is at the vertical of the triangle -> 1
// - light is perpendicular to the triangle -> 0
// - light is behind the triangle -> 0
float cosTheta = clamp( dot( n,l ), 0,1 );
// Eye vector (towards the camera)
vec3 E = normalize(EyeDirection_cameraspace);
// Direction in which the triangle reflects the light
vec3 R = reflect(-l,n);
// Cosine of the angle between the Eye vector and the Reflect vector,
// clamped to 0
// - Looking into the reflection -> 1
// - Looking elsewhere -> < 1
float cosAlpha = clamp( dot( E,R ), 0,1 );
c =
// Ambient : simulates indirect lighting
MaterialAmbientColor +
// Diffuse : "color" of the object
MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) +
// Specular : reflective highlight, like a mirror
MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance);
float z;
z=abs(pixel_z-animz); // distance to animated z coordinate
z*=1.5; // scale to change highlight width
if (z<1.0)
{
z*=0.5*3.1415926535897932384626433832795; // z=<0,M_PI/2> 0 in the middle
z=0.5*cos(z);
c+=vec3(0.0,z,z);
}
color=vec4(c,1.0);
}
Here is the vertex shader:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexNormal_modelspace;
// Output data ; will be interpolated for each fragment.
out vec2 UV;
out vec3 Position_worldspace;
out vec3 Normal_cameraspace;
out vec3 EyeDirection_cameraspace;
out vec3 LightDirection_cameraspace;
out float pixel_z; // fragment z coordinate in [LCS]
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
uniform mat4 V;
uniform mat4 M;
uniform vec3 LightPosition_worldspace;
void main(){
pixel_z=vertexPosition_modelspace.z;
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
// Position of the vertex, in worldspace : M * position
Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz;
// Vector that goes from the vertex to the camera, in camera space.
// In camera space, the camera is at the origin (0,0,0).
vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz;
EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;
// Vector that goes from the vertex to the light, in camera space. M is omitted because it's identity.
vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz;
LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;
// Normal of the vertex, in camera space
Normal_cameraspace = ( V * M * vec4(vertexNormal_modelspace,0)).xyz; // Only correct if ModelMatrix does not scale the model ! Use its inverse transpose if not.
// UV of the vertex. No special space for this one.
UV = vertexUV;
}
There are 2 approaches I can think of for this:
3D reconstruction based
You need to reconstruct the 3D scene from motion (not an easy task, and not my cup of tea). Then you simply apply modulation to the selected mesh's texture based on its u,v texture-mapping coordinates and the animation time.
Describing such a topic will not fit in an SO answer, so you should google some CV books/papers on the subject instead.
Image processing based
You simply segment the image based on color continuity/homogeneity, i.e. you group neighboring pixels that have similar color and intensity (region growing). When done, try to fake a 3D surface reconstruction based on intensity gradients, similar to this:
Turn any 2D image into 3D printable sculpture with code
and after that create a u,v mapping where one axis is depth.
When done, just apply your sine-wave effect modulation to the color.
I would divide this into 2 stages: the 1st pass does the segmentation (I would choose the CPU side for this) and the 2nd renders the effect (on the GPU).
As this is a form of augmented reality, you should also read this:
Augment reality like zookazam
BTW, what is done in that video is neither of the above options. They most likely already have the mesh for that car in vector form and use silhouette matching to obtain its orientation in the image ... and then render as usual ... so it would not work for any object in the scene, only for that car ... Something like this:
How to get the transformation matrix of a 3d model to object in a 2d image
[Edit1] GLSL highlight effect
I took this example:
complete GL+GLSL+VAO/VBO C++ example
And added the highlight to it like this:
On the CPU side I added an animz variable
It determines the z coordinate in the object's local coordinate system (LCS) where the highlight is actually placed, and I animate it in a timer between the min and max z values of the rendered mesh (a cube) +/- some margin, so the highlight does not teleport at once from one side of the object to the other...
// global
float animz=-1.0;
// in timer
animz+=0.05; if (animz>1.5) animz=-1.5; // my object z = <-1,+1> 0.5 is margin
// render
id=glGetUniformLocation(prog_id,"animz"); glUniform1f(id,animz);
Vertex shader
I just take the vertex z coordinate and pass it, without transformation, to the fragment shader:
out float pixel_z; // fragment z coordinate in [LCS]
pixel_z=pos.z;
Fragment shader
After computing the target color c (by standard rendering), I compute the distance between pixel_z and animz, and if it is small I modulate c with a sine wave that depends on the distance.
// highlight effect
float z;
z=abs(pixel_z-animz); // distance to animated z coordinate
z*=1.5; // scale to change highlight width
if (z<1.0)
{
z*=0.5*3.1415926535897932384626433832795; // z=<0,M_PI/2> 0 in the middle
z=0.5*cos(z);
c+=vec3(0.0,z,z);
}
Here are the full GLSL shaders...
Vertex:
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location = 0) in vec3 pos;
layout(location = 2) in vec3 nor;
layout(location = 3) in vec3 col;
layout(location = 0) uniform mat4 m_model; // model matrix
layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
layout(location =32) uniform mat4 m_view; // inverse of camera matrix
layout(location =48) uniform mat4 m_proj; // projection matrix
out vec3 pixel_pos; // fragment position [GCS]
out vec3 pixel_col; // fragment surface color
out vec3 pixel_nor; // fragment surface normal [GCS]
// highlight effect
out float pixel_z; // fragment z coordinate in [LCS]
void main()
{
pixel_z=pos.z;
pixel_col=col;
pixel_pos=(m_model*vec4(pos,1)).xyz;
pixel_nor=(m_normal*vec4(nor,1)).xyz;
gl_Position=m_proj*m_view*m_model*vec4(pos,1);
}
Fragment:
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
in vec3 pixel_pos; // fragment position [GCS]
in vec3 pixel_col; // fragment surface color
in vec3 pixel_nor; // fragment surface normal [GCS]
out vec4 col;
// highlight effect
in float pixel_z; // fragment z coordinate in [LCS]
uniform float animz; // highlight animation z coordinate [GCS]
void main()
{
// standard rendering
float li;
vec3 c,lt_dir;
lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
li=dot(pixel_nor,lt_dir);
if (li<0.0) li=0.0;
c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
// highlight effect
float z;
z=abs(pixel_z-animz); // distance to animated z coordinate
z*=1.5; // scale to change highlight width
if (z<1.0)
{
z*=0.5*3.1415926535897932384626433832795; // z=<0,M_PI/2> 0 in the middle
z=0.5*cos(z);
c+=vec3(0.0,z,z);
}
col=vec4(c,1.0);
}
And preview:
This approach requires neither textures nor u,v mapping.
[Edit2] highlight with start point
There are many ways to implement this. I chose the distance from the start point as the highlight parameter, so the highlight grows from the point in all directions. Here is a preview for two different touch-point locations:
The bold white cross marks the location of the touch point, rendered as a visual check. Here is the code:
Vertex:
// Vertex
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location = 0) in vec3 pos;
layout(location = 2) in vec3 nor;
layout(location = 3) in vec3 col;
layout(location = 0) uniform mat4 m_model; // model matrix
layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
layout(location =32) uniform mat4 m_view; // inverse of camera matrix
layout(location =48) uniform mat4 m_proj; // projection matrix
out vec3 LCS_pos; // fragment position [LCS]
out vec3 pixel_pos; // fragment position [GCS]
out vec3 pixel_col; // fragment surface color
out vec3 pixel_nor; // fragment surface normal [GCS]
void main()
{
LCS_pos=pos;
pixel_col=col;
pixel_pos=(m_model*vec4(pos,1)).xyz;
pixel_nor=(m_normal*vec4(nor,1)).xyz;
gl_Position=m_proj*m_view*m_model*vec4(pos,1);
}
Fragment:
// Fragment
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
in vec3 LCS_pos; // fragment position [LCS]
in vec3 pixel_pos; // fragment position [GCS]
in vec3 pixel_col; // fragment surface color
in vec3 pixel_nor; // fragment surface normal [GCS]
out vec4 col;
// highlight effect
uniform vec3 touch; // highlight start point [GCS]
uniform float animt; // animation parameter <0,1> or -1 for off
uniform float size; // highlight size
void main()
{
// standard rendering
float li;
vec3 c,lt_dir;
lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
li=dot(pixel_nor,lt_dir);
if (li<0.0) li=0.0;
c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
// highlight effect
float t=length(LCS_pos-touch)/size; // distance from start point
if (t<=animt)
{
t*=0.5*3.1415926535897932384626433832795; // z=<0,M_PI/2> 0 in the middle
t=0.75*cos(t);
c+=vec3(0.0,t,t);
}
col=vec4(c,1.0);
}
You control this with uniforms:
uniform vec3 touch; // highlight start point [GCS]
uniform float animt; // animation parameter <0,1> or -1 for off
uniform float size; // max distance of any point of object from touch point
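A hypothetical CPU-side sketch, following the same glGetUniformLocation/glUniform pattern as the animz example above (tx, ty, tz being the picked touch point, animt kept as a global like animz), might drive these like this:
// on a touch/pick event: set the highlight start point and size
id = glGetUniformLocation(prog_id, "touch"); glUniform3f(id, tx, ty, tz);
id = glGetUniformLocation(prog_id, "size");  glUniform1f(id, 2.0f);  // chosen highlight radius
// in the timer: grow the highlight from 0 to 1, then switch it off
animt += 0.02f; if (animt > 1.0f) animt = -1.0f;
id = glGetUniformLocation(prog_id, "animt"); glUniform1f(id, animt);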

Elliptical gradient rotation in GLSL

I implemented a basic elliptical gradient in GLSL and it is working fine. However, I have failed at rotating the gradient. My code is below:
vertex shader
uniform mat4 camera;
uniform mat4 model;
in vec3 vert; // coordinates of vertex
in vec2 vertTexCoord; //pseudo texture coordinates, used for calculating relative fragment position
out vec2 fragTexCoord;
void main() {
    fragTexCoord = vertTexCoord; // pass to fragment shader
    gl_Position = camera * model * vec4(vert, 1); // apply transformations
}
fragment shader
uniform vec2 gradientCenter; //center of gradient
uniform vec2 gradientDimensions; //how far gradient goes in right and up direction respectively
uniform vec2 gradientDirection; //rotation of gradient, not used..yet
in vec2 fragTexCoord;
out vec4 finalColor;
void main() {
    vec2 gradient = gradientCenter - fragTexCoord; // gradient itself
    gradient.x = gradient.x * (1.0 / gradientDimensions.x); // relative scale in the right direction, currently the X axis
    gradient.y = gradient.y * (1.0 / gradientDimensions.y); // relative scale in the up direction, currently the Y axis
    float distanceFromLight = length(gradient); // length determines output color
    finalColor = mix(vec4(1.0, 0.0, 0.0, 1.0), vec4(0.0, 0.0, 1.0, 1.0), distanceFromLight * 2); // mixing red and blue, placeholder colors
}
To better illustrate: the upper picture is what I have, and it is working; the lower picture is my goal. How can I improve my code to allow elliptical gradient manipulation as shown in the lower picture?
I assume that gradientDirection is the normalized direction of the first principal axis. Then you can calculate the coordinates in the local system with the dot product:
vec2 secondaryPrincipal = vec2(gradientDirection.y, -gradientDirection.x);
vec2 gradient = gradientCenter - fragTexCoord; //gradient itself
vec2 localGradient = vec2(dot(gradient, gradientDirection) * (1.0 / gradientDimensions.x),
                          dot(gradient, secondaryPrincipal) * (1.0 / gradientDimensions.y));
float distanceFromLight = length(localGradient);
//...
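Putting it together, a minimal sketch of the full rotated-gradient fragment shader (assuming gradientDirection is supplied as a normalized uniform, e.g. vec2(cos(angle), sin(angle)) computed on the CPU) might be:
uniform vec2 gradientCenter;     // center of gradient
uniform vec2 gradientDimensions; // extents along the two principal axes
uniform vec2 gradientDirection;  // normalized first principal axis
in vec2 fragTexCoord;
out vec4 finalColor;
void main() {
    vec2 secondaryPrincipal = vec2(gradientDirection.y, -gradientDirection.x);
    vec2 gradient = gradientCenter - fragTexCoord;
    vec2 localGradient = vec2(dot(gradient, gradientDirection) / gradientDimensions.x,
                              dot(gradient, secondaryPrincipal) / gradientDimensions.y);
    float distanceFromLight = length(localGradient);
    finalColor = mix(vec4(1.0, 0.0, 0.0, 1.0), vec4(0.0, 0.0, 1.0, 1.0), distanceFromLight * 2.0);
}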