RenderMonkey / GLSL camera & viewpos

How do I make the vec3 viewpos follow the values of the camera? Presumably they should be the same but I don't know how to access the camera's position values.

If I understood you correctly, you want to know the position of the camera in world space.
If so, it's quite simple:
1. Right-click your effect in your workspace and then select: Add Variable > Float > Predefined > vViewPosition.
2. Define it in your shader like this:
uniform vec4 vViewPosition;
3. Use it however you like. Example vertex shader:
uniform vec4 vViewPosition;
void main( void )
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    float dist = distance(gl_Vertex.xyz, vViewPosition.xyz);
}
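For instance, a sketch that pairs it with a fragment shader for simple distance fog. Note this is only exact if the model matrix is identity, since gl_Vertex is in object space while vViewPosition is in world space, and the 100.0 fade distance is an arbitrary assumption:
uniform vec4 vViewPosition;
varying float dist;
void main( void )
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    dist = distance(gl_Vertex.xyz, vViewPosition.xyz); // pass the distance on to the fragment shader
}
And the matching fragment shader:
varying float dist;
void main( void )
{
    float fog = clamp(dist / 100.0, 0.0, 1.0); // fade out over 100 units (arbitrary)
    gl_FragColor = mix(vec4(1.0), vec4(0.5, 0.5, 0.5, 1.0), fog);
}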
I hope that helped.

Related

How to pass shader COLOR to ALBEDO

ALBEDO is vec3 and COLOR is vec4. I need to pass COLOR to ALBEDO in Godot. This shader works with shader_type canvas_item but not on a spatial material.
shader_type spatial;
uniform float amp = 0.1;
uniform vec4 tint_color = vec4(0.0, 0.5,0.99, 1);
uniform sampler2D iChannel0;
void fragment ()
{
    vec2 uv = FRAGCOORD.xy / (1.0/VIEWPORT_SIZE).xy; // (1.0/SCREEN_PIXEL_SIZE) for shader_type canvas_item
    vec2 p = uv +
        (vec2(.5)-texture(iChannel0, uv*0.3+vec2(TIME*0.05, TIME*0.025)).xy)*amp +
        (vec2(.5)-texture(iChannel0, uv*0.3-vec2(-TIME*0.005, TIME*0.0125)).xy)*amp;
    vec4 a = texture(iChannel0, p)*tint_color;
    ALBEDO = a.xyz; // the w channel is not important; this works with shader_type canvas_item, but on a 3D spatial material the effect does not show. What's the problem?
}
For ALPHA and ALBEDO: ALBEDO = a.xyz; is correct. For a.w, you would usually do this: ALPHA = a.w;. However, in this case it appears that a.w is always 1, so there is no point.
I'll pick on the rest of the code. Keep in mind that I do not know what it should look like, nor do I know the texture used for the sampler2D (I'm guessing a seamless noise texture).
Check your render mode. Being from ShaderToy, there is a chance you want render_mode unshaded;, which will make lights not affect the material. See Render Modes.
For ease of use, you can use hints. In particular, write the tint color like this:
uniform vec4 tint_color: hint_color = vec4(0.0, 0.5,0.99, 1);
That way Godot gives you a color picker in the shader parameters. See Uniforms.
You could also use hint_range(0, 1) for amp. However, I'm not sure about that.
Double check your coordinates. I suspect this FRAGCOORD.xy / (1.0/VIEWPORT_SIZE).xy should be SCREEN_UV (or UV, if it should stay with the object that has the material).
Was the original like this:
vec2 i_resolution = 1.0/SCREEN_PIXEL_SIZE;
vec2 uv = FRAGCOORD.xy/i_resolution;
As I said in the prior answer, 1.0 / SCREEN_PIXEL_SIZE is VIEWPORT_SIZE. Replace it. We have:
vec2 i_resolution = VIEWPORT_SIZE;
vec2 uv = FRAGCOORD.xy/i_resolution;
Inline:
vec2 uv = FRAGCOORD.xy/VIEWPORT_SIZE;
As I said in the prior answer, FRAGCOORD.xy/VIEWPORT_SIZE is SCREEN_UV (or UV if you don't want the material to depend on the position on screen). Replace it. We have:
vec2 uv = SCREEN_UV;
Even if that is not what you want, it is good for testing.
Try moving the camera. Is that what you want? No? Then try vec2 uv = UV; instead. In fact, at that point a separate variable is hard to justify.
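Putting those suggestions together, a minimal sketch of the corrected spatial shader (assuming a seamless noise texture in iChannel0, and that the effect should stick to the object, hence UV):
shader_type spatial;
render_mode unshaded; // ShaderToy-style effects usually don't want lighting

uniform float amp : hint_range(0, 1) = 0.1;
uniform vec4 tint_color : hint_color = vec4(0.0, 0.5, 0.99, 1);
uniform sampler2D iChannel0;

void fragment ()
{
    vec2 uv = UV; // or SCREEN_UV if the effect should depend on the position on screen
    vec2 p = uv +
        (vec2(.5)-texture(iChannel0, uv*0.3+vec2(TIME*0.05, TIME*0.025)).xy)*amp +
        (vec2(.5)-texture(iChannel0, uv*0.3-vec2(-TIME*0.005, TIME*0.0125)).xy)*amp;
    vec4 a = texture(iChannel0, p)*tint_color;
    ALBEDO = a.xyz;
}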

Explanation of working principle of OpenGL [closed]

I'm trying to understand how coding in OpenGL works. I found this code on the internet and I want to understand it clearly.
For my vertex shader I have:
Vertex
uniform vec3 fvLightPosition;
varying vec2 Texcoord;
varying vec2 Texcoordcut;
varying vec3 ViewDirection;
varying vec3 LightDirection;
uniform mat4 extra;
attribute vec3 rm_Binormal;
attribute vec3 rm_Tangent;
uniform float fSinTime0_X;
uniform float fCosTime0_X;
void main( void )
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex * extra;
    Texcoord = gl_MultiTexCoord0.xy;
    Texcoordcut = gl_MultiTexCoord0.xy;
    vec4 fvObjectPosition = gl_ModelViewMatrix * gl_Vertex;
    vec3 rotationLight = vec3(fCosTime0_X, 0, fSinTime0_X);
    ViewDirection = -fvObjectPosition.xyz;
    LightDirection = (-rotationLight) * (gl_NormalMatrix);
}
And for my fragment shader (I painted a white area on the texture to create a hole in it):
uniform vec4 fvAmbient;
uniform vec4 fvSpecular;
uniform vec4 fvDiffuse;
uniform float fSpecularPower;
uniform sampler2D baseMap;
uniform sampler2D bumpMap;
varying vec2 Texcoord;
varying vec2 Texcoordcut;
varying vec3 ViewDirection;
varying vec3 LightDirection;
void main( void )
{
    vec3 fvLightDirection = normalize( LightDirection );
    vec3 fvNormal = normalize( ( texture2D( bumpMap, Texcoord ).xyz * 2.0 ) - 1.0 );
    float fNDotL = dot( fvNormal, fvLightDirection );
    vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection );
    vec3 fvViewDirection = normalize( ViewDirection );
    float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) );
    vec4 fvBaseColor = texture2D( baseMap, Texcoord );
    vec4 fvTotalAmbient = fvAmbient * fvBaseColor;
    vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor;
    vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) );
    if (fvBaseColor == vec4(1,1,1,1)) {
        discard;
    } else {
        gl_FragColor = ( fvTotalDiffuse + fvTotalSpecular );
    }
}
Can somebody explain to me in detail what everything does? I understand the basic idea of it, but often not why it is needed, or what happens when you use other variables. What I see is that the light around the teapot appears and disappears over time. How is this linked with the cosine and sine variables? What if I want the light to come from above and go to the bottom of the teapot?
Also,
What do these lines mean?
vec4 fvObjectPosition = gl_ModelViewMatrix * gl_Vertex;
And why is there a minus before the variable?
ViewDirection = - fvObjectPosition.xyz;
Why do we use a negative rotationLight?
LightDirection = (-rotationLight ) * (gl_NormalMatrix);
Why do they use * 2.0 - 1.0 when calculating the normal vector? Isn't that also possible with Normal = normalize( gl_NormalMatrix * gl_Normal );?
vec3 fvNormal = normalize( ( texture2D( bumpMap, Texcoord ).xyz * 2.0 ) - 1.0 );
Too lazy to fully analyze the code without the proper context of what you are sending to the shaders ... but your subquestions are easy enough:
What do these lines mean? vec4 fvObjectPosition = gl_ModelViewMatrix * gl_Vertex;
This converts gl_Vertex (polygon edge points) from the object/model coordinate system to the camera coordinate system. In other words, it applies all the rotations and translations of your vertices. The z axis is the camera view axis, pointing into or out of the screen, and the x,y axes are the same as the screen's. No projections/clippings/clampings are applied yet!!! The resulting point is stored in the fvObjectPosition 4D vector (x,y,z,w). I strongly recommend you read Understanding 4x4 homogenous transform matrices; the sub-links there are also worth looking into.
And why is there a minus before the variable? ViewDirection = - fvObjectPosition.xyz;
Most likely because you need the direction from the surface to the camera, so direction_from_surface = camera_pos - surface_pos. As your surface_pos is already in the camera coordinate system, the camera position in those coordinates is (0,0,0), so the result is direction_from_surface = (0,0,0) - surface_pos = -surface_pos. Or you got a negative-z-axis view direction (depends on the format of your matrices). It is hard to determine without background info.
Why do we use a negative rotationLight? LightDirection = (-rotationLight ) * (gl_NormalMatrix);
Most likely for the same reasons as bullet 2.
Why do they use * 2.0 - 1.0 when calculating the normal vector?
The shader uses normal/bump mapping, which means you have a texture with normal vectors encoded as RGB. As RGB textures are clamped to the range <0,1> and normal vector coordinates are in the range <-1,+1>, you just need to rescale the texel, so:
RGB*2.0 is in range <0,2>
RGB*2.0-1.0 is in range <-1,+1>
This gives you the normal vector in the polygon's coordinate system (tangent space), so you need to convert it to the coordinate system your equations work in, usually global world space or camera space. The normalize is not necessary if your normal/bump map is already normalized. Normal textures have distinctive colors ...
flat surface has normal=(0.0,0.0,+1.0) so in RGB it would be (0.5,0.5,1.0)
That is the common bluish/magenta color often seen in textures (see the link above).
But yes, you can use Normal = normalize( gl_NormalMatrix * gl_Normal );
But that will eliminate the bump/normal map and you will get just flat surfaces instead. Something like this:
GL+GLSL normal shading complete example (C++)
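As an aside, the question's vertex shader declares rm_Tangent and rm_Binormal but never uses them. Here is a sketch of the usual tangent-basis (TBN) construction they were presumably meant for, working in camera space; this is an assumption about the intent, not code from the question:
attribute vec3 rm_Tangent;
attribute vec3 rm_Binormal;
varying mat3 TBN;
void main( void )
{
    // build the tangent-space basis in camera space
    vec3 t = normalize( gl_NormalMatrix * rm_Tangent );
    vec3 b = normalize( gl_NormalMatrix * rm_Binormal );
    vec3 n = normalize( gl_NormalMatrix * gl_Normal );
    TBN = mat3( t, b, n ); // columns: tangent, binormal, normal
}
The fragment shader can then take the decoded texel into camera space with:
vec3 fvNormal = normalize( TBN * ( texture2D( bumpMap, Texcoord ).xyz * 2.0 - 1.0 ) );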
Light
vec3(fCosTime0_X,0, fSinTime0_X) looks like the light direction. This one rotates around the y axis. If you want to change the light direction to something else, just make it a uniform and pass it directly to the shader instead of fCosTime0_X, fSinTime0_X.
How is this correctly linked with the cosine and sine variables?
You can send data to a shader uniform variable via the glUniform functions. For example: in your vertex shader you have 2 float values, so you would call glUniform1f twice, each time with a different location and a different value.
Or you can pack the two floats into one vec2 variable like so:
uniform vec2 fSinValues; and fill them with glUniform2f(location, sinVal, cosVal);
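On the shader side, that packed variant would look something like this (a sketch; x holding the sine and y the cosine is just the convention assumed here):
uniform vec2 fSinValues; // x = sin(time), y = cos(time), updated each frame by the application
void main( void )
{
    vec3 rotationLight = vec3( fSinValues.y, 0.0, fSinValues.x ); // same rotation as before
    // ... rest of the vertex shader unchanged ...
}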
What if I want the light to come from above and go to the bottom of the teapot?
If you want your light to rotate in a different plane, just pass the sin and cos values to different coordinate components, right here: vec3 rotationLight = vec3(fCosTime0_X, fSinTime0_X, 0);
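Alternatively, as suggested above, make the light direction a uniform and drive it from the application. The question's shader already declares uniform vec3 fvLightPosition but never uses it, so it could take over that role; a sketch:
uniform vec3 fvLightPosition; // e.g. (0.0, 1.0, 0.0) for a light directly above, set via glUniform3f
void main( void )
{
    // ... position and texture coordinates as before ...
    LightDirection = ( -normalize( fvLightPosition ) ) * ( gl_NormalMatrix ); // same form as the original line
}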

Why does this Phong shader work?

I recently wrote a Phong shader in GLSL as part of a school assignment. I started with tutorials, then played around with the code until I got it working. It works perfectly fine as far as I can tell, but there's one line in particular I wrote where I don't understand why it does work.
The vertex shader:
#version 330
layout (location = 0) in vec3 Position; // Vertex position
layout (location = 1) in vec3 Normal; // Vertex normal
out vec3 Norm;
out vec3 Pos;
out vec3 LightDir;
uniform mat3 NormalMatrix; // ModelView matrix without the translation component, and inverted
uniform mat4 MVP; // ModelViewProjection Matrix
uniform mat4 ModelView; // ModelView matrix
uniform vec3 light_pos; // Position of the light
void main()
{
    Norm = normalize(NormalMatrix * Normal);
    Pos = Position;
    LightDir = NormalMatrix * (light_pos - Position);
    gl_Position = MVP * vec4(Position, 1.0);
}
The fragment shader:
#version 330
in vec3 Norm;
in vec3 Pos;
in vec3 LightDir;
layout (location = 0) out vec4 FragColor;
uniform mat3 NormalMatrix;
uniform mat4 ModelView;
void main()
{
    vec3 normalDirCameraCoords = normalize(Norm);
    vec3 vertexPosLocalCoords = normalize(Pos);
    vec3 lightDirCameraCoords = normalize(LightDir);
    float dist = max(length(LightDir), 1.0);
    float intensity = max(dot(normalDirCameraCoords, lightDirCameraCoords), 0.0) / pow(dist, 1.001);
    vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords);
    float intSpec = max(dot(h, normalDirCameraCoords), 0.0);
    vec4 spec = vec4(0.9, 0.9, 0.9, 1.0) * (pow(intSpec, 100) / pow(dist, 1.2));
    FragColor = max((intensity * vec4(0.7, 0.7, 0.7, 1.0)) + spec, vec4(0.07, 0.07, 0.07, 1.0));
}
So I'm doing the method where you calculate the half vector between the light vector and the camera vector, then dot it with the normal. That's all good. However, I do two things that are strange.
Normally, everything is done in eye coordinates. However, Position, which I pass from the vertex shader to the fragment shader, is in local coordinates.
This is the part that baffles me. On the line vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords); I'm subtracting the vertex position in local coordinates from the light vector in camera coordinates. This seems utterly wrong.
In short, I understand what this code is supposed to be doing, and how the half vector method of phong shading works.
But why does this code work?
EDIT: The starter code we were provided is open source, so you can download the completed project and look at it directly if you'd like. The project is for VS 2012 on Windows (you'll need to set up GLEW, GLM, and freeGLUT), and should work on GCC with no code changes (maybe a change or two to the makefile library paths).
Note that in the source files, "light_pos" is called "gem_pos", as our light source is the little gem you move around with WSADXC. Press M to get Phong with multiple lights.
The reason this works is happenstance, but it's interesting to see why it still works.
Phong shading is three techniques in one
With Phong shading, we have three terms: specular, diffuse, and ambient; these three terms represent the three techniques used in Phong shading.
None of these terms strictly requires a particular vector space; you can make Phong shading work in world, local, or camera space as long as you are consistent. Eye space is usually used for lighting, as it is easier to work with and the conversions are simple.
But what if you are at the origin? Now you are subtracting zero; it's easy to see that there's no difference between any of the vector spaces at the origin. By coincidence, at the origin, it doesn't matter what vector space you are in; it'll work.
vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords);
Notice that it's basically subtracting 0; this is the only time local coordinates are used, and it's used in the one place where they can do the least damage. Since the object is at the origin, all its vertices should be at or very close to the origin as well. At the origin the approximation is exact; all vector spaces converge. Very close to the origin, it's very close to exact; even if we used exact reals it would be a very small divergence, but we don't use exact reals, we use floats, compounding the issue.
Basically, you got lucky; this wouldn't work if the object weren't at the origin. Try moving it and see!
Also, you aren't using Phong shading; you are using Blinn-Phong shading (that's the name for the replacement of reflect() with a half vector, just for reference).
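For reference, a sketch of the same Blinn-Phong shader done consistently in eye space, assuming light_pos is supplied in eye coordinates (the distance falloff and color constants are kept from the original):
The vertex shader:
#version 330
layout (location = 0) in vec3 Position;
layout (location = 1) in vec3 Normal;
out vec3 Norm;
out vec3 PosEye;
out vec3 LightDir;
uniform mat3 NormalMatrix;
uniform mat4 MVP;
uniform mat4 ModelView;
uniform vec3 light_pos; // assumed already in eye space
void main()
{
    Norm = normalize(NormalMatrix * Normal);
    PosEye = vec3(ModelView * vec4(Position, 1.0)); // eye-space vertex position
    LightDir = light_pos - PosEye;                  // eye-space light vector
    gl_Position = MVP * vec4(Position, 1.0);
}
The fragment shader:
#version 330
in vec3 Norm;
in vec3 PosEye;
in vec3 LightDir;
layout (location = 0) out vec4 FragColor;
void main()
{
    vec3 n = normalize(Norm);
    vec3 l = normalize(LightDir);
    vec3 v = normalize(-PosEye); // the camera sits at the origin in eye space
    vec3 h = normalize(l + v);   // Blinn-Phong half vector
    float dist = max(length(LightDir), 1.0);
    float diff = max(dot(n, l), 0.0) / pow(dist, 1.001);
    float spec = pow(max(dot(n, h), 0.0), 100.0) / pow(dist, 1.2);
    FragColor = max(diff * vec4(0.7, 0.7, 0.7, 1.0) + spec * vec4(0.9, 0.9, 0.9, 1.0), vec4(0.07, 0.07, 0.07, 1.0));
}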

Displacing Vertices in the Vertex shader

My code creates a grid of lots of vertices and then displaces them by a random noise value in height in the vertex shader. Moving around using
glTranslated(-camera.x, -camera.y, -camera.z);
works fine, but that would mean you could eventually reach the edge of the grid.
I thought about sending the camera position to the shader, and letting the whole displacement get offset by it. Hence the final vertex shader code:
uniform vec3 camera_position;
varying vec4 position;
void main()
{
    vec3 tmp;
    tmp = gl_Vertex.xyz;
    tmp -= camera_position;
    gl_Vertex.y = snoise(tmp.xz / NOISE_FREQUENCY) * NOISE_SCALING;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    position = gl_Position;
}
EDIT: I fixed a flaw; the vertices themselves should not get horizontally displaced.
For some reason though, when I run this and change camera position, the screen flickers but nothing moves. Vertices are displaced by the noise value correctly, and coloring and everything works, but the displacement doesn't move.
For reference, here is the git repository: https://github.com/Orpheon/Synthworld
What am I doing wrong?
PS: "Flickering" is wrong. It's as if with some positions it doesn't draw anything, and others it draws the normal scene from position 0, so if I move without stopping it flickers. I can stop at a spot where it stays black though, or at one where it stays normal.
gl_Position = ftransform();
That's what you're doing wrong. ftransform does not implicitly take gl_Vertex as a parameter. It simply does exactly what it's supposed to: perform transformation of the vertex position exactly as though this were the fixed-function pipeline. So it uses the value that gl_Vertex had initially.
You need to do proper transformations on this:
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
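Note also that gl_Vertex is a vertex attribute, and attributes are read-only in standard GLSL, so assigning to gl_Vertex.y as in the question may not even compile on strict drivers. A minimal sketch that works on a local copy instead (using the question's snoise and constants):
vec4 displaced = gl_Vertex; // attributes are read-only, so copy first
displaced.y = snoise(displaced.xz / NOISE_FREQUENCY) * NOISE_SCALING;
gl_Position = gl_ModelViewProjectionMatrix * displaced;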
I managed to fix this bug by using glUniform3fv:
main.c excerpt:
// Sync the camera position variable
GLint camera_pos_ptr;
float f[] = {camera.x, camera.y, camera.z};
camera_pos_ptr = glGetUniformLocation(shader_program, "camera_position");
glUniform3fv(camera_pos_ptr, 1, f);
// draw the display list
glCallList(grid);
Vertex shader:
uniform vec3 camera_position;
varying vec4 position;
void main()
{
    vec4 newpos;
    newpos = gl_Vertex;
    newpos.y = (snoise( (newpos.xz + camera_position.xz) / NOISE_FREQUENCY ) * NOISE_SCALING) - camera_position.y;
    gl_Position = gl_ModelViewProjectionMatrix * newpos;
    position = gl_Position;
}
If someone wants to see the complete code, here is the git repository again: https://github.com/Orpheon/Synthworld

Odd effect with GLSL normals

This is somewhat similar to a problem I had and posted about before: I'm trying to get normals to display correctly in my GLSL app.
For the purposes of my explanation, I'm using the ninjaHead.obj model provided with RenderMonkey (you can grab it here). Now in the preview window in RenderMonkey, everything looks great:
and the vertex and fragment code generated respectively is:
Vertex:
uniform vec4 view_position;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    // World-space lighting
    vNormal = gl_Normal;
    vViewVec = view_position.xyz - gl_Vertex.xyz;
}
Fragment:
uniform vec4 color;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
    float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
    gl_FragColor = v * color;
}
I based my GLSL code on this but I'm not quite getting the expected results...
My vertex shader code:
uniform mat4 P;
uniform mat4 modelRotationMatrix;
uniform mat4 modelScaleMatrix;
uniform mat4 modelTranslationMatrix;
uniform vec3 cameraPosition;
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
    vec4 pos = gl_ProjectionMatrix * P * modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
    gl_Position = pos;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
    vec4 normal4 = vec4(gl_Normal.x, gl_Normal.y, gl_Normal.z, 0);
    // World-space lighting
    vNormal = normal4*modelRotationMatrix;
    vec4 tempCameraPos = vec4(cameraPosition.x, cameraPosition.y, cameraPosition.z, 0);
    //vViewVec = cameraPosition.xyz - pos.xyz;
    vViewVec = tempCameraPos - pos;
}
My fragment shader code:
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
    //gl_FragColor = gl_Color;
    float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
    gl_FragColor = v * gl_Color;
}
However my render produces this...
Does anyone know what might be causing this and/or how to make it work?
EDIT
In response to kvark's comments, here is the model rendered without any normal/lighting calculations to show all triangles being rendered.
And here is the model shaded with the normals used as colors. I believe the problem has been found! Now the question is why it is being rendered like this, and how to solve it. Suggestions are welcome!
SOLUTION
Well everyone, the problem has been solved! Thanks to kvark for all his helpful insight, which has definitely helped my programming practice, but I'm afraid the answer comes down to me being a MASSIVE tit... I had an error in the display() function of my code that set the glNormalPointer offset to a random value. It used to be this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, getNormalsBufferObject());
But should have been this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, 0);
So I guess this is a lesson. NEVER mindlessly Ctrl+C and Ctrl+V code to save time on a Friday afternoon AND... When you're sure the part of the code you're looking at is right, the problem is probably somewhere else!
What is your P matrix? (I suppose it's a world->camera view transform).
vNormal = normal4*modelRotationMatrix; Why did you change the order of arguments? Doing that, you are multiplying the normal by the inverse rotation, which you don't really want. Use the standard order instead (modelRotationMatrix * normal4).
vViewVec = tempCameraPos - pos. This is entirely incorrect. pos is your vertex in homogeneous clip space, while tempCameraPos is in world space (I suppose). You need the result in the same space as your normal (world space), so use the world-space vertex position (modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex) for this equation; see the sketch after the next point.
You seem to be mixing GL versions a bit? You are passing the matrices manually via uniforms, but use the fixed-function pipeline to pass vertex attributes. Hm. Anyway...
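A minimal sketch of points 2 and 3 combined, assuming cameraPosition is in world space:
vec4 worldPos = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
gl_Position = gl_ProjectionMatrix * P * worldPos;    // assuming P really is the world->camera view transform
vNormal = modelRotationMatrix * normal4;             // standard argument order (point 2)
vViewVec = vec4(cameraPosition - worldPos.xyz, 0.0); // world-space view vector (point 3)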
I sincerely don't like what you're doing to your normals. Have a look:
vec4 normal4 = vec4(gl_Normal.x,gl_Normal.y,gl_Normal.z,0);
vNormal = normal4*modelRotationMatrix;
A normal only stores directional data; why use a vec4 for it? I believe it's more elegant to just use vec3. Furthermore, look what happens next: you multiply the normal by the 4x4 model rotation matrix... And additionally your normal's fourth coordinate is equal to 0, so it's not a correct vector in homogeneous coordinates. I'm not sure that's the main problem here, but I wouldn't be surprised if that multiplication gave you rubbish.
The standard way to transform normals is to multiply a vec3 by the 3x3 submatrix of the model-view matrix (since you're only interested in the orientation, not the translation). To be precise, the most correct approach is to use the inverse transpose of that 3x3 submatrix (this becomes important when you have scaling). In old OpenGL versions you had it precalculated as gl_NormalMatrix.
So instead of the above, you should use something like
// (...)
varying vec3 vNormal;
// (...)
mat3 normalMatrix = transpose(inverse(mat3(modelRotationMatrix))); // note: inverse() needs GLSL 1.40 or newer
// or if you don't need scaling, this one should work too-
mat3 normalMatrix = mat3(modelRotationMatrix);
vNormal = normalMatrix * gl_Normal; // standard order, as in point 2 above
That's certainly one thing to fix in your code - I hope it solves your problem.