I'm having difficulty understanding the math between the different shader stages.
In the fragment shader from the light's perspective I basically write out the fragment depth as an RGB color:
#version 330

out vec4 shader_fragmentColor;

void main()
{
    shader_fragmentColor = vec4(gl_FragCoord.z, gl_FragCoord.z, gl_FragCoord.z, 1);
    //shader_fragmentColor = vec4(1, 0.5, 0.5, 1);
}
When rendering the scene using the above shader, it displays the scene in an all-white color. I suppose that's because gl_FragCoord.z is bigger than 1; hopefully it's not maxed out at 1. But we can leave that question alone for now.
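(An aside on that supposition: gl_FragCoord.z is window-space depth and always lies in [0, 1] under the default glDepthRange; the scene looks white because a perspective projection packs most depth values very close to 1.0. A sketch for visualizing it, assuming hypothetical u_near/u_far uniforms that match the light's projection:)

```glsl
// Hypothetical helper: map nonlinear window-space depth back to a
// linear 0..1 range for display. u_near and u_far are assumed
// uniforms matching the near/far planes of the light's projection.
uniform float u_near;
uniform float u_far;

float linearizeDepth(float z)
{
    float ndcZ = z * 2.0 - 1.0;                   // window [0,1] -> NDC [-1,1]
    float eyeZ = (2.0 * u_near * u_far) /
                 (u_far + u_near - ndcZ * (u_far - u_near));
    return (eyeZ - u_near) / (u_far - u_near);    // remap eye depth to [0,1]
}
```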
In the geometry shader from the camera's perspective I basically turn all points into quads and write out the probably "incorrect" texture position used to look up into the lightTexture. The math here is the question. I'm also a bit unsure whether the interpolated value will be correct in the next shader stage.
#version 330
#extension GL_EXT_geometry_shader4 : enable

uniform mat4 p1_modelM;
uniform mat4 p1_cameraPV;
uniform mat4 p1_lightPV;

out vec4 shader_lightTexturePosition;

void main()
{
    float s = 10.00;

    vec4 llCorner = vec4(-s, -s, 0.0, 0.0);
    vec4 llWorldPosition = ((p1_modelM * llCorner) + gl_in[0].gl_Position);
    gl_Position = p1_cameraPV * llWorldPosition;
    shader_lightTexturePosition = p1_lightPV * llWorldPosition;
    EmitVertex();

    vec4 rlCorner = vec4(+s, -s, 0.0, 0.0);
    vec4 rlWorldPosition = ((p1_modelM * rlCorner) + gl_in[0].gl_Position);
    gl_Position = p1_cameraPV * rlWorldPosition;
    shader_lightTexturePosition = p1_lightPV * rlWorldPosition;
    EmitVertex();

    vec4 luCorner = vec4(-s, +s, 0.0, 0.0);
    vec4 luWorldPosition = ((p1_modelM * luCorner) + gl_in[0].gl_Position);
    gl_Position = p1_cameraPV * luWorldPosition;
    shader_lightTexturePosition = p1_lightPV * luWorldPosition;
    EmitVertex();

    vec4 ruCorner = vec4(+s, +s, 0.0, 0.0);
    vec4 ruWorldPosition = ((p1_modelM * ruCorner) + gl_in[0].gl_Position);
    gl_Position = p1_cameraPV * ruWorldPosition;
    shader_lightTexturePosition = p1_lightPV * ruWorldPosition;
    EmitVertex();

    EndPrimitive();
}
In the fragment shader from the camera's perspective I basically look up in the lightTexture what color would be shown from the light's perspective and write out the same color.
#version 330

uniform sampler2D p1_lightTexture;

in vec4 shader_lightTexturePosition;
out vec4 shader_fragmentColor;

void main()
{
    vec4 lightTexel = texture(p1_lightTexture, shader_lightTexturePosition.xy);
    shader_fragmentColor = lightTexel;
    /*
    if(lightTexel.x < shader_lightTexturePosition.z)
        shader_fragmentColor = vec4(1, 0, 0, 1);
    else
        shader_fragmentColor = vec4(0, 1, 0, 1);
    */
    //shader_fragmentColor = vec4(1, 1, 1, 1);
}
When rendering from the camera's perspective I see the scene drawn as it should be, but with incorrect, repeating texture coordinates applied. The repeating texture is probably caused by the texture coordinate being outside the 0-to-1 range.
I've tried several things but still fail to understand what the math should be. Some of the commented-out code, and one example I'm unsure of, is:
shader_lightTexturePosition = normalize(p1_lightPV * llWorldPosition) / 2 + vec4(0.5, 0.5, 0.5, 0.5);
for the lower-left corner, with similar code for the other corners.
From the solution I expect the scene to be rendered from the camera's perspective with exactly the same colors as from the light's perspective, give or take some precision error.
I figured out the texture-mapping bit myself; the depth-value bit is still a bit strange.
Convert the screen-projected coordinates to normalized device coordinates, then add 1 and divide by 2:
vec4 textureNormalizedCoords(vec4 screenProjected)
{
    vec3 normalizedDeviceCoords = (screenProjected.xyz / screenProjected.w);
    return vec4((normalizedDeviceCoords.xy + 1.0) / 2.0, screenProjected.z * 0.005, 1 / screenProjected.w);
}

void main()
{
    float s = 10.00;

    vec4 llCorner = vec4(-s, -s, 0.0, 0.0);
    vec4 llWorldPosition = ((p1_modelM * llCorner) + gl_in[0].gl_Position);
    gl_Position = p1_cameraPV * llWorldPosition;
    shader_lightTextureCoords = textureNormalizedCoords(p1_lightPV * llWorldPosition);
    EmitVertex();
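(For the still-strange depth component: the conventional mapping applies the same +1-then-halve treatment to NDC z as to x and y, assuming the default glDepthRange of [0, 1], rather than an ad-hoc scale like 0.005. A sketch of the variant, not the original author's code:)

```glsl
// Sketch: full NDC -> texture-space mapping, including depth.
// With the default glDepthRange(0, 1), window-space depth is
// (ndc.z + 1) / 2, so a comparison against the depth stored in the
// light texture uses the same range on both sides.
vec4 textureNormalizedCoords(vec4 screenProjected)
{
    vec3 ndc = screenProjected.xyz / screenProjected.w;
    return vec4((ndc.xy + 1.0) / 2.0,   // texture coordinates in [0,1]
                (ndc.z + 1.0) / 2.0,    // depth in [0,1]
                1.0);
}
```

Note that dividing by w per vertex and letting the rasterizer interpolate the result linearly is only an approximation for a perspective light; the more common pattern is to pass the clip-space position through unchanged and do the divide per fragment.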
Related
I am trying to implement a geometry shader for line thickness using OpenGL 4.3.
I followed the accepted answer and other solutions given on Stack Overflow, but according to the screenshot it is wrong. Is there a proper way to get the line's normal in screen space? It looks correct in the first frame, but the moment I move the mouse the camera changes and the offset direction is no longer correct. The shader is updated with the camera matrix in the render loop.
GLSL Geometry shader to replace glLineWidth
Vertex shader
#version 330 core

layout (location = 0) in vec3 aPos;

uniform mat4 projection_view_model;

void main()
{
    gl_Position = projection_view_model * vec4(aPos, 1.0);
}
Fragment shader
#version 330 core

//resources:
//https://stackoverflow.com/questions/6017176/gllinestipple-deprecated-in-opengl-3-1

out vec4 FragColor;

uniform vec4 uniform_fragment_color;

void main()
{
    FragColor = uniform_fragment_color;
}
Geometry shader
#version 330 core

layout (lines) in;
layout (triangle_strip, max_vertices = 4) out;

uniform float u_thickness;
uniform vec2 u_viewportSize;

in gl_PerVertex
{
    vec4 gl_Position;
    //float gl_PointSize;
    //float gl_ClipDistance[];
} gl_in[];

void main()
{
    //https://stackoverflow.com/questions/54686818/glsl-geometry-shader-to-replace-gllinewidth
    vec4 p1 = gl_in[0].gl_Position;
    vec4 p2 = gl_in[1].gl_Position;

    vec2 dir = normalize((p2.xy - p1.xy) * u_viewportSize);
    vec2 offset = vec2(-dir.y, dir.x) * u_thickness * 100 / u_viewportSize;

    gl_Position = p1 + vec4(offset.xy * p1.w, 0.0, 0.0);
    EmitVertex();
    gl_Position = p1 - vec4(offset.xy * p1.w, 0.0, 0.0);
    EmitVertex();
    gl_Position = p2 + vec4(offset.xy * p2.w, 0.0, 0.0);
    EmitVertex();
    gl_Position = p2 - vec4(offset.xy * p2.w, 0.0, 0.0);
    EmitVertex();

    EndPrimitive();
}
To get the direction of the line in normalized device space, the x and y components of the clip-space coordinates must be divided by the w component (perspective divide). Change
vec2 dir = normalize((p2.xy - p1.xy) * u_viewportSize);
to
vec2 dir = normalize((p2.xy / p2.w - p1.xy / p1.w) * u_viewportSize);
The problem
I'm trying to make shaders for my game. In the fragment shaders, I don't want to set the color of the fragment but add to it. How do I do so? I'm new to this, so sorry for any mistakes.
The code
varying vec3 vN;
varying vec3 v;
varying vec4 color;

#define MAX_LIGHTS 1

void main (void)
{
    vec3 N = normalize(vN);
    vec4 finalColor = vec4(0.0, 0.0, 0.0, 0.0);

    for (int i = 0; i < MAX_LIGHTS; i++)
    {
        vec3 L = normalize(gl_LightSource[i].position.xyz - v);
        vec3 E = normalize(-v); // we are in Eye Coordinates, so EyePos is (0,0,0)
        vec3 R = normalize(-reflect(L, N));

        vec4 Iamb = gl_LightSource[i].ambient;

        vec4 Idiff = gl_LightSource[i].diffuse * max(dot(N, L), 0.0);
        Idiff = clamp(Idiff, 0.0, 1.0);

        vec4 Ispec = gl_LightSource[i].specular * pow(max(dot(R, E), 0.0), 0.3 * gl_FrontMaterial.shininess);
        Ispec = clamp(Ispec, 0.0, 1.0);

        finalColor += Iamb + Idiff + Ispec;
    }

    gl_FragColor = color * finalColor;
}
Thanks in advance!
You cannot read the color of the fragment you're writing to. In simple scenarios (like yours) what you can do is enable blending and set the blending functions to achieve additive blending.
For more complex logic you'd create a framebuffer object, attach a texture to it, render your input (e.g. the scene) to that texture, then switch to another framebuffer (or the default "screen" one) and render with it; that way you can sample your scene from the texture and add the lighting on top. Read how to do this in detail on WebGLFundamentals.com.
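The simple additive-blending route is set up on the host side, not in the shader; a minimal sketch in C, assuming a current OpenGL context:

```c
/* Sketch: additive blending. With these factors each fragment's
 * output is added to the color already in the framebuffer instead
 * of replacing it. Assumes GL headers and a current context. */
#include <GL/gl.h>

void enable_additive_blending(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);   /* dst = src * 1 + dst * 1 */
}
```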
I'm designing a sprite class, and I would like to display only a color if no texture is loaded.
Here is my vertex shader:
#version 330 core

layout (location = 0) in vec4 vertex; // <vec2 pos, vec2 tex>

out vec2 vs_tex_coords;

uniform mat4 model;
uniform mat4 projection;

void main()
{
    vs_tex_coords = vertex.zw;
    gl_Position = projection * model * vec4(vertex.xy, 0.0, 1.0);
}
And the fragment shader :
#version 330 core

in vec2 vs_tex_coords;
out vec4 fs_color;

uniform sampler2D image;
uniform vec3 sprite_color;

void main()
{
    fs_color = vec4(sprite_color, 1.0) * texture(image, vs_tex_coords);
}
My problem is that if I don't bind a texture, it displays only a black sprite. I think the problem is that the texture function in my fragment shader returns 0 and screws up the whole formula.
Is there a way to know whether the sampler2D is uninitialized or null, and just return sprite_color in that case?
A sampler cannot be "empty". A valid texture must be bound to the texture units referenced by each sampler in order for rendering to have well-defined behavior.
But that doesn't mean you have to read from the texture that's bound there. It's perfectly valid to use a uniform value to tell the shader whether to read from the texture or not.
But you still have to bind a simple, 1x1 texture there. Indeed, you can use textureSize on the sampler; if it is a 1x1 texture, then don't bother to read from it. Note that this might be slower than using a uniform.
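A sketch of that textureSize approach, treating a bound 1x1 dummy texture as "no texture" (uTex, uFallbackColor, and vUV are assumed names, not from the original code):

```glsl
#version 330 core

in vec2 vUV;
out vec4 fragColor;

uniform sampler2D uTex;        // real texture, or a 1x1 dummy
uniform vec3 uFallbackColor;   // color to show when no texture is loaded

void main()
{
    // textureSize returns the dimensions of mip level 0.
    if (textureSize(uTex, 0) == ivec2(1, 1))
        fragColor = vec4(uFallbackColor, 1.0);   // dummy bound: flat color
    else
        fragColor = texture(uTex, vUV);          // real texture bound
}
```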
Below are the two versions, with and without an if... else... conditional statement. The conditional statement avoids having to sample the texture when it isn't used.
The uniform int textureSample is set to 1 to show the texture or 0 to show the color. Both uniform variables are normally set by the program, not the shader.
uniform int textureSample = 1;
uniform vec3 color = vec3(1.0, 1.0, 0.0);

void main() { // without if... else...
    // ...
    vec3 materialDiffuseColor = textureSample * texture(textureSampler, fragmentTexture).rgb - (textureSample - 1) * color;
    // ...
}

void main() { // with if... else...
    // ...
    if (textureSample == 1) { // 1 if texture, 0 if color
        vec3 materialDiffuseColor = texture(textureSampler, fragmentTexture).rgb;
        vec3 materialAmbientColor = vec3(0.5, 0.5, 0.5) * materialDiffuseColor;
        vec3 materialSpecularColor = vec3(0.3, 0.3, 0.3);
        gl_FragColor = vec4(brightness *
            (materialAmbientColor +
             materialDiffuseColor * lightPowerColor * cosTheta / distanceLight2 +
             materialSpecularColor * lightPowerColor * pow(cosAlpha, 10000.0) / distanceLight2), 1.0);
    }
    else {
        vec3 materialDiffuseColor = color;
        vec3 materialAmbientColor = vec3(0.5, 0.5, 0.5) * materialDiffuseColor;
        vec3 materialSpecularColor = vec3(0.3, 0.3, 0.3);
        gl_FragColor = vec4(brightness *
            (materialAmbientColor +
             materialDiffuseColor * lightPowerColor * cosTheta / distanceLight2 +
             materialSpecularColor * lightPowerColor * pow(cosAlpha, 10000.0) / distanceLight2), 1.0);
    }
    // ...
}
I'd check the length of the RGB value of the diffuse texture. This won't work on a specular map, though:

vec3 texDiffuseCol = texture2D(diffuseTex, TexCoord).rgb;

if (length(texDiffuseCol) == 0.0)
{
    // Texture not present
}
else
{
    // Texture present
}
I'm trying to do point-source directional lighting in OpenGL using my textbook's examples. I'm showing a rectangle centered at the origin and doing the lighting computations in the shader. The rectangle appears, but it is black even when I try to put colored lights on it. Normals for the rectangle are all (0, 1.0, 0). I'm not doing any non-uniform scaling, so the regular model-view matrix should also transform the normals.
I have code that sets the light parameters (as uniforms) and material parameters (also as uniforms) for the shader. There is no per-vertex color information.
void InitMaterial()
{
    color material_ambient = color(1.0, 0.0, 1.0);
    color material_diffuse = color(1.0, 0.8, 0.0);
    color material_specular = color(1.0, 0.8, 0.0);
    float material_shininess = 100.0;

    // set uniforms for current program
    glUniform3fv(glGetUniformLocation(Programs[lightingType], "materialAmbient"), 1, material_ambient);
    glUniform3fv(glGetUniformLocation(Programs[lightingType], "materialDiffuse"), 1, material_diffuse);
    glUniform3fv(glGetUniformLocation(Programs[lightingType], "materialSpecular"), 1, material_specular);
    glUniform1f(glGetUniformLocation(Programs[lightingType], "shininess"), material_shininess);
}
For the lights:
void InitLight()
{
    // need light direction and light position
    point4 light_position = point4(0.0, 0.0, -1.0, 0.0);
    color light_ambient = color(0.2, 0.2, 0.2);
    color light_diffuse = color(1.0, 1.0, 1.0);
    color light_specular = color(1.0, 1.0, 1.0);

    glUniform3fv(glGetUniformLocation(Programs[lightingType], "lightPosition"), 1, light_position);
    glUniform3fv(glGetUniformLocation(Programs[lightingType], "lightAmbient"), 1, light_ambient);
    glUniform3fv(glGetUniformLocation(Programs[lightingType], "lightDiffuse"), 1, light_diffuse);
    glUniform3fv(glGetUniformLocation(Programs[lightingType], "lightSpecular"), 1, light_specular);
}
The fragment shader is a simple pass-through shader that sets the color to the one input from the vertex shader. Here is the vertex shader:
#version 150

in vec4 vPosition;
in vec3 vNormal;

out vec4 color;

uniform vec4 materialAmbient, materialDiffuse, materialSpecular;
uniform vec4 lightAmbient, lightDiffuse, lightSpecular;
uniform float shininess;
uniform mat4 modelView;
uniform vec4 lightPosition;
uniform mat4 projection;

void main()
{
    // Transform vertex position into eye coordinates
    vec3 pos = (modelView * vPosition).xyz;

    vec3 L = normalize(lightPosition.xyz - pos);
    vec3 E = normalize(-pos);
    vec3 H = normalize(L + E);

    // Transform vertex normal into eye coordinates
    vec3 N = normalize(modelView * vec4(vNormal, 0.0)).xyz;

    // Compute terms in the illumination equation
    vec4 ambient = materialAmbient * lightAmbient;

    float Kd = max(dot(L, N), 0.0);
    vec4 diffuse = Kd * materialDiffuse * lightDiffuse;

    float Ks = pow(max(dot(N, H), 0.0), shininess);
    vec4 specular = Ks * materialSpecular * lightSpecular;

    if (dot(L, N) < 0.0) specular = vec4(0.0, 0.0, 0.0, 1.0);

    gl_Position = projection * modelView * vPosition;

    color = ambient + diffuse + specular;
    color.a = 1.0;
}
OK, it's working now. The solution was to replace glUniform3fv with glUniform4fv, I guess because the GLSL counterpart is a vec4 instead of a vec3. I thought it would be able to recognize this and simply append a 1.0, but no.
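For reference, the corrected call shape for the light position (same count/value signature, only the component count changes):

```c
/* A vec4 uniform must be fed with glUniform4fv: per the spec,
 * calling glUniform3fv on a vec4 location generates
 * GL_INVALID_OPERATION and the uniform keeps its previous value,
 * which is why the lighting came out black. light_position is
 * assumed to hold 4 floats, as in the question's point4 type. */
GLfloat light_position[4] = {0.0f, 0.0f, -1.0f, 0.0f};
glUniform4fv(glGetUniformLocation(Programs[lightingType], "lightPosition"),
             1, light_position);
```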
I want to shade the quad with checkers:
f(P) = (floor(Px) + floor(Py)) mod 2
My quad is:
glBegin(GL_QUADS);
    glVertex3f(0, 0, 0.0);
    glVertex3f(4, 0, 0.0);
    glVertex3f(4, 4, 0.0);
    glVertex3f(0, 4, 0.0);
glEnd();
The vertex shader file:
varying float factor;
float x, y;

void main() {
    x = floor(gl_Position.x);
    y = floor(gl_Position.y);
    factor = mod((x + y), 2.0);
}
And the fragment shader file is:
varying float factor;

void main() {
    gl_FragColor = vec4(factor, factor, factor, 1.0);
}
But I'm getting this:
It seems that the mod function doesn't work, or maybe something else...
Any help?
It is better to calculate this effect in the fragment shader, something like this:
vertex program =>
varying vec2 texCoord;

void main(void)
{
    gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0);
    gl_Position = sign(gl_Position);
    texCoord = (vec2(gl_Position.x, gl_Position.y) + vec2(1.0)) / vec2(2.0);
}
fragment program =>
#extension GL_EXT_gpu_shader4 : enable

uniform sampler2D Texture0;
varying vec2 texCoord;

void main(void)
{
    ivec2 size = textureSize2D(Texture0, 0);
    float total = floor(texCoord.x * float(size.x)) +
                  floor(texCoord.y * float(size.y));
    bool isEven = mod(total, 2.0) == 0.0;
    vec4 col1 = vec4(0.0, 0.0, 0.0, 1.0);
    vec4 col2 = vec4(1.0, 1.0, 1.0, 1.0);
    gl_FragColor = (isEven) ? col1 : col2;
}
Output =>
Good luck!
Try this function in your fragment shader:
vec3 checker(in float u, in float v)
{
    float checkSize = 2.0;
    float fmodResult = mod(floor(checkSize * u) + floor(checkSize * v), 2.0);
    float fin = max(sign(fmodResult), 0.0);
    return vec3(fin, fin, fin);
}
Then in main you can call it using :
vec3 check = checker(fs_vertex_texture.x, fs_vertex_texture.y);
And simply pass x and y you are getting from vertex shader. All you have to do after that is to include it when calculating your vFragColor.
Keep in mind that you can change the checker size simply by modifying the checkSize value.
What your code does is calculate the factor four times (once for each vertex, since it's vertex-shader code), then interpolate those values (because they're written into a varying variable), and then output that variable as the color in the fragment shader.
So it doesn't work that way. You need to do that calculation directly in the fragment shader. You can get the fragment position using the gl_FragCoord built-in variable in the fragment shader.
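A minimal fragment shader along those lines (a sketch: v_pos is an assumed varying carrying the untransformed quad position from the vertex shader; gl_FragCoord.xy would give window pixels instead, making each checker cell one pixel wide unless scaled):

```glsl
// Sketch: compute the checker factor per fragment, not per vertex.
varying vec2 v_pos;   // assumed: object-space xy passed from the vertex shader

void main()
{
    // f(P) = (floor(Px) + floor(Py)) mod 2, evaluated per fragment
    float factor = mod(floor(v_pos.x) + floor(v_pos.y), 2.0);
    gl_FragColor = vec4(vec3(factor), 1.0);
}
```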
May I suggest the following:
float result = mod(dot(vec2(1.0), step(vec2(0.5), fract(v_uv * u_repeat))), 2.0);
v_uv is a vec2 of UV values,
u_repeat is a vec2 of how many times the pattern should be repeated for each axis.
result is 0 or 1; you can use it in the mix function to provide colors, for example:
gl_FragColor = mix(vec4(1.0, 1.0, 1.0, 1.0), vec4(0.0, 0.0, 0.0, 1.0), result);
Another nice way to do it is by just tiling a known pattern (zooming out). Assuming that you have a square canvas:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord / iResolution.xy;
    uv -= 0.5; // moving the coordinate system to the middle of the screen

    // Output to screen
    fragColor = vec4(vec3(step(uv.x * uv.y, 0.)), 1.);
}
Code above gives you this kind of pattern.
The code below, by zooming 4.5 times and taking the fractional part, repeats the pattern 4.5 times, resulting in 9 squares per row.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fract(fragCoord / iResolution.xy * 4.5);
    uv -= 0.5; // moving the coordinate system to the middle of the screen

    // Output to screen
    fragColor = vec4(vec3(step(uv.x * uv.y, 0.)), 1.);
}