The various examples of directional lights I've found are too varied for me to form a coherent picture of what's supposed to happen; some examples use matrices with unexplained contents, and others just use the vertex normal and the light direction.
I've been attempting to write a shader based on what made the most sense to me, but currently it leaves the scene either fully lit or fully unlit, depending on the light direction.
In my fragment shader:
float diffuseFactor = dot(vertexNormal, -lightDirection);
vec4 diffuseColor = lightColor * diffuseFactor;
fragColor = color * diffuseColor;
So am I way off? Do I need to pass in more information (e.g. the modelViewMatrix?) to achieve the desired result?
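For reference, a common shape for a directional light's diffuse term looks roughly like the sketch below; it reuses the names from the snippet above, and assumes the normal has already been transformed into the same space as the light direction (e.g. with the normal matrix) and that lightDirection points from the light into the scene:

vec3 n = normalize(vertexNormal);             // renormalize after interpolation
vec3 l = normalize(-lightDirection);          // direction from the surface toward the light
float diffuseFactor = max(dot(n, l), 0.0);    // clamp so back-facing surfaces get zero light, not negative
vec4 diffuseColor = lightColor * diffuseFactor;
fragColor = color * diffuseColor;

Without the clamp, and without both vectors being in the same space, the dot product can swing fully negative or fully positive, which is consistent with a scene that ends up entirely lit or entirely unlit.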
While debugging my graphics engine I try to color everything at a certain distance in the direction of the camera. This works for static meshes, but animated meshes that use animationMatrix have small patches of fragments that either don't get colored, or change color when they shouldn't. As I see it, gl_Position.z either doesn't gauge the depth correctly when I apply animations, or it can't be used the way I intend.
Vertex Shader
out float ClipSpacePosZ;

vec4 worldPosition = model * animationMatrix * vec4(coord3d, 1.0);
gl_Position = view_projection * worldPosition;
ClipSpacePosZ = gl_Position.z;
Fragment Shader
in float ClipSpacePosZ;

if (ClipSpacePosZ > a) {
    color.r = 1.0;
}
Thought 1
I had a similar problem with the fragment world position before, where I'd try to color by the fragment's world position and there would be similar artifacts. The solution was to divide the fragment position by the w component:
out vec3 fragPos;
fragPos = worldPosition.xyz / worldPosition.w;
I've tried similar ideas with ClipSpacePosZ. Maybe the solution is that I'm missing some division by w, but I can't seem to find anything that works.
Thought 2
Could the depth of my fragments be incorrect, so that they only appear to be displayed at the correct position?
Thought 3
gl_Position might do more things that I'm not aware of.
Thought 4
This only happens for animated meshes, so there could be something wrong in my animation matrix. But I don't understand why the position from gl_Position would seemingly be correct while coloring by the depth wouldn't be.
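For what it's worth, here is a minimal sketch of the divide-by-w idea from Thought 1 applied to the depth value instead of the interpolated ClipSpacePosZ; the variable names follow the question, and whether this removes the artifacts is an assumption, not something verified here:

// Fragment shader sketch: two alternatives to interpolating gl_Position.z directly.

// Option 1: pass the full clip-space position and do the divide per fragment.
in vec4 ClipSpacePos;                     // vertex shader: ClipSpacePos = gl_Position;
float ndcZ = ClipSpacePos.z / ClipSpacePos.w;

// Option 2: use the built-in window-space depth, which is already divided by w.
float windowZ = gl_FragCoord.z;           // in [0, 1]

if (ndcZ > a) {                           // 'a' is the question's own threshold
    color.r = 1.0;
}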
I've seen shader code using these two approaches, but I don't understand the difference between them, between a texture and a fragment.
As far as I know, a fragment is a pixel, so what is a texture?
Some code uses:
vec2 uv = gl_FragCoord.xy / rectSize.xy;
vec4 bkg_color = texture2D(CC_Texture0, uv);
while other code uses:
vec4 bkg_color = texture2D(CC_Texture0, v_texCoord);
with v_texCoord = a_texCoord;
Both work, except that the first way displays an inverted image.
In your second example, 'v_texCoord' looks like a pre-calculated texture coordinate that is passed to the Fragment Shader from a Vertex Attribute, versus the 'uv' coordinate calculated within the Fragment Shader in the first example.
You can base texture coordinates on whatever you like, so long as you give the texture2D sampler normalised coordinates; it's all about your use case and what you want to display from a texture.
Perhaps there is such a use-case difference here, which is why they give different visual outputs.
For more information about how texture coordinates work I recommend this question's answer: How do opengl texture coordinates work?
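If the only practical difference between the two snippets above is the inversion, it usually comes from gl_FragCoord having its origin at the bottom-left of the framebuffer. A small sketch of the first approach with the y coordinate flipped (assuming the quad covers the full rectSize target):

vec2 uv = gl_FragCoord.xy / rectSize.xy;
uv.y = 1.0 - uv.y;                           // gl_FragCoord's origin is bottom-left, so flip y
vec4 bkg_color = texture2D(CC_Texture0, uv);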
I am working with an old game format and am trying to finish up a rendering project. I am working on lighting and am trying to get a multiply blend mode going in my shader.
The lighting is provided as per-vertex values which are then interpolated over the triangle.
I have taken a screenshot of the unlit scene, and one of just the lightmaps, and combined them in Photoshop with a multiply layer. It produces exactly what I want.
I also want to factor in the ambient light, which acts kind of like the 'Opacity' setting on the Photoshop layer.
I have tried just multiplying them, which works great, but again, I want to be able to control the amount of the lightmaps. I tried mix, which just blended the lightmaps and the textures, but not as a multiply blend.
Here are the images. The first is the diffuse, the second is the lightmap and the third is them combined at 50% opacity with the lightmaps.
http://imgur.com/Zwg9IZr,6hq0t0p,7hR88I2#0
So, my question is: how do I multiply-blend these with this sort of ambient-light "opacity" factor? Again, I have tried a direct mix, but it's more of an overlay than a multiply blend.
My GLSL fragment source:
#version 120

uniform sampler2D texture;

varying vec2 outtexture;
varying vec4 colorout;

void main(void)
{
    float ambient = 1.0;
    vec4 textureColor = texture2D(texture, outtexture);
    vec4 lighting = colorout;

    // here is where I want to blend, with the ambient light dictating the contribution from each
    gl_FragColor = textureColor * lighting;
}
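One way to get a Photoshop-style multiply layer at partial opacity is to blend the lightmap toward white by the opacity before multiplying. A sketch, where lightmapOpacity is a stand-in for the ambient/opacity control mentioned above (the name is an assumption):

// multiply at partial opacity:
// result = texture * ((1 - opacity) + opacity * lightmap) = texture * mix(white, lightmap, opacity)
float lightmapOpacity = 0.5;   // 0.0 = plain texture, 1.0 = full multiply by the lightmap
gl_FragColor = textureColor * mix(vec4(1.0), lighting, lightmapOpacity);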
I'm trying to write the functionality for point-light shadowing. I got spotlight shadowing working first, but when I switched to point-light shadowing (using a cube map, rendering depth from 6 POVs, etc.) I started getting a checkered pattern of light (with NO shadows). Does anyone have any intuition as to why this might be?
Here's a screenshot:
(Note that if you look closely, you can clearly see that a cubemap is being rendered with the front face of the cube just to the right of the triangle)
And here's my render-pass fragment shader GLSL code (if you want to see the rest, I can post that as well, but I figured this is the important bit):
Note: I'm using deferred lighting, so the vertex shader of this pass is just a quad, and the pos_tex, norm_tex, and col_tex are from the geometry buffer generated in a previous pass.
#version 330 core
in vec2 UV;
out vec3 color;
uniform vec3 lightPosVec; //Non rotation affine
uniform sampler2D pos_tex;
uniform sampler2D col_tex;
uniform sampler2D norm_tex;
uniform samplerCubeShadow shadow_tex;
uniform sampler2D accum_tex; //the tex being drawn to
void main()
{
    vec3 lightToFragVec = texture(pos_tex, UV).xyz - lightPosVec;
    vec3 l2fabs = abs(lightToFragVec);
    float greatestMagnitudeOfLightToFragVec = max(l2fabs.x, max(l2fabs.y, l2fabs.z));
    float lightToFragDepth = ((100.0 + 0.1) / (100.0 - 0.1) - (2.0 * 100.0 * 0.1) / (100.0 - 0.1) / greatestMagnitudeOfLightToFragVec + 1.0) * 0.5;
    float shadowed = texture(shadow_tex, vec4(normalize(lightToFragVec), lightToFragDepth));
    color =
        texture(accum_tex, UV).xyz +                                             // current value
        texture(col_tex, UV).xyz                                                 // frag color
        * dot(lightPosVec - texture(pos_tex, UV).xyz, texture(norm_tex, UV).xyz) // dot product of angle of incidence of light to normal
        * (4.0 + 15.0 * shadowed)                                                // shadow amplification
        / max(0.000001, dot(lightToFragVec, lightToFragVec));                    // distance cutoff
}
Thank you!
EDIT:
Ok, so I've been toying around with it, and now the texture seems to be random...
So this makes me think that the depth cube is just full of noise? But that seems unlikely, because for the texture() function with a samplerCubeShadow to return 1.0, the value at that point must be EXACTLY the depth value at that fragment... right? Also, I have it set up to control the position of the light with WASD plus up/down, and the pattern moves with the light (when I back up, it gets bigger/dimmer). Which means the values MUST be changing dynamically to match the actual distance? Oh man, I'm confused...
EDIT 2
Ok, sorry, this question is out of control, but I figured I'd continue to document my progress for the sake of anyone in the future with similar problems.
Anyway, I've now figured out that I get exactly the same result no matter what I put as the 4th component of the 4D vector passed to texture() with the shadow cubemap. I can even put in a constant and get the exact same result. How is that possible?
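For reference, when GL_TEXTURE_COMPARE_MODE is GL_COMPARE_REF_TO_TEXTURE, the fourth component of the lookup vector is the reference depth that gets compared against the stored depth. Conceptually (ignoring filtering/PCF, and not real driver internals) the lookup behaves roughly like this sketch:

// storedDepth: the depth written when rendering that cube face
// referenceDepth: the 4th component passed to texture(shadow_tex, vec4(dir, referenceDepth))
float shadowLookup(float storedDepth, float referenceDepth)
{
    return (referenceDepth <= storedDepth) ? 1.0 : 0.0;   // with GL_LEQUAL: 1.0 = lit, 0.0 = shadowed
}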
EDIT 3
Darn. Turns out the error had nothing to do with anything I've just said. See answer below for details :(
Darn. Turns out I'm just a fool. I was simply rendering things in the wrong order; nothing was wrong with the shader code after all. So the checkerboard was just noise in the textures. And the reason the noise still rendered is that I was using GL_TEXTURE_COMPARE_FUNC GL_LEQUAL (as I should be) and the noise was either well below 1 or well above 1, so it didn't matter where I moved the light.
I know using a very simple vertex shader like
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
varying vec4 vColor;
void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vColor = aVertexColor;
}
and a very simple fragment shader like
precision mediump float;
varying vec4 vColor;
void main(void) {
    gl_FragColor = vColor;
}
to draw a triangle with red, blue, and green vertices will end up producing a triangle like this:
My questions are:
Do calculations for interpolating fragment colors belonging to one triangle (or a primitive) happen in parallel on GPU?
What are the algorithm and also hardware support for interpolating fragment colors inside the triangle?
The interpolation happens in the step that produces the inputs for the fragment processor.
The algorithm is very simple: each vertex value (such as the color) is interpolated across the triangle according to the fragment's position within it.
Yes, absolutely.
Triangle color interpolation is part of the fixed-function pipeline (it's actually part of the rasterization step, which happens before fragment processing), so it is carried out entirely in hardware with probably all video cards. The equations for interpolating vertex data can be found e.g. in OpenGL 4.3 specification, section 14.6.1 (pp. 405-406). The algorithm defines barycentric coordinates for the triangle and uses them to interpolate between the vertices.
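As a rough illustration, the interpolation described above amounts to a barycentric weighted sum of the vertex values (perspective-correct interpolation additionally weights by 1/w, which is omitted here). This is only a sketch of the math the rasterizer performs, not code you would write in a shader:

// bary holds the fragment's barycentric coordinates, with bary.x + bary.y + bary.z == 1.0
vec4 interpolateColor(vec4 colorA, vec4 colorB, vec4 colorC, vec3 bary)
{
    return bary.x * colorA + bary.y * colorB + bary.z * colorC;
}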
Besides the answers given here, I wanted to add that there doesn't have to be dedicated fixed-function hardware for the interpolations. Modern GPUs tend to use "pull-model interpolation", where the interpolation is actually done via the shader units.
I recommend reading Fabian Giesen's blog article about the topic (and the whole series of articles about the graphics pipeline in general).
On the first question: although there are parallel units on the GPU, it depends on the size of the triangle under consideration. For most GPUs, drawing happens on a tile-by-tile basis, and depending on the "screen" size of the triangle, if it falls completely within one tile it will be processed entirely by a single tile processor. If it spans multiple tiles, it can be processed in parallel by different units.
The second question is answered by other posters before me.