Model position in Cesium shader - glsl

I'm learning CesiumJS shaders.
I want to color my model based on the z-value of its position, but I don't know which variable represents the model position in the shader (PointPrimitiveCollectionVS.glsl):
attribute vec4 positionHighAndSize;
attribute vec4 positionLowAndOutline;

// the position is stored as high and low parts for precision (relative-to-eye rendering)
vec3 positionHigh = positionHighAndSize.xyz;
vec3 positionLow = positionLowAndOutline.xyz;
// translate relative to the eye, then transform into eye coordinates (EC)
vec4 p = czm_translateRelativeToEye(positionHigh, positionLow);
vec4 positionEC = czm_modelViewRelativeToEye * p;
Could someone tell me how to get the model position?
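For what it's worth, here is a minimal sketch of one way to recover it, assuming Cesium's usual relative-to-eye scheme in which the two attributes are a double-precision split of the world-space (ECEF) position; the v_worldZ varying is hypothetical:

varying float v_worldZ; // hypothetical varying for coloring in the fragment shader
...
// The high/low attributes are a precision split of a single world-space
// position, so summing them recovers it in single precision:
vec3 worldPosition = positionHigh + positionLow;
v_worldZ = worldPosition.z;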

Related

openGL cubemap reflections in view space are wrong

I am following this tutorial and managed to add a cube map to my scene. Then I tried to add reflections to my object; unlike the tutorial, I wrote my GLSL code in view space. However, the reflections seem a bit off. They always reflect the same side no matter what angle you are facing: in my case, you always see a rock on the reflected object, but the rock is only on one side of my cube map.
Here is a video showing the effect:
I tried with other shaped objects, like a cube, and the effect is the same. I also found this book, which shows an example of view-space reflections, and it seems I am doing something similar to it, but it still won't produce the desired effect.
My vertex shader code:
#version 330 core

layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 aTexCoord;

uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;

out vec2 TexCoord;
out vec3 aNormal;
out vec3 FragPos;

void main()
{
    aNormal = mat3(transpose(inverse(View * Model))) * normal;
    FragPos = vec3(View * Model * vec4(aPos, 1.0));
    gl_Position = Projection * vec4(FragPos, 1.0);
    TexCoord = aTexCoord;
}
My fragment shader code:
#version 330 core

out vec4 FragColor;

in vec3 FragPos;
in vec3 aNormal;

uniform samplerCube skybox;

void main(){
    vec3 I = normalize(FragPos);
    vec3 R = reflect(I, normalize(aNormal));
    FragColor = vec4(texture(skybox, R).rgb, 1.0);
}
Since you do the computations in the fragment shader in view space, the reflected vector (R) is in view space, too. The cubemap (skybox) represents a map of the environment in world space.
You have to transform R from view space to world space. That can be done with the inverse view matrix, which can be computed by the GLSL built-in function inverse:
#version 330 core

out vec4 FragColor;

in vec3 FragPos;
in vec3 aNormal;

uniform samplerCube skybox;
uniform mat4 View;

void main() {
    vec3 I = normalize(FragPos);
    vec3 viewR = reflect(I, normalize(aNormal));
    vec3 worldR = inverse(mat3(View)) * viewR;
    FragColor = vec4(texture(skybox, worldR).rgb, 1.0);
}
Note, the view matrix transforms from world space to view space, thus the inverse view matrix transforms from view space to world space. See also Invertible matrix.
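Since inverse is evaluated here for every fragment, a cheaper variant is to compute the rotation part of the inverse view matrix once on the CPU and upload it as a uniform; the invViewRot name below is hypothetical:

uniform mat3 invViewRot; // hypothetical: inverse of mat3(View), computed once per frame on the CPU
...
vec3 worldR = invViewRot * viewR;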
This is a late answer but I just wanted to give additional information why it is behaving like this.
Imagine your reflective object is simply a 6-sided cube. Each face can be thought of as a mirror. Now, because you are in view space, every coordinate of a mirror plane that is visible from your viewpoint has a negative z value. Let us look at the point directly at the center. This vector looks like (0, 0, -z), and because the side of the cube is like a mirror, it gets reflected directly back to you as (0, 0, +z). So you end up sampling from GL_TEXTURE_CUBE_MAP_POSITIVE_Z of your cube map.
In shader code it looks like:
vec3 V = normalize(-frag_pos_view_space); // vector from fragment to view point (0,0,0) in view space
vec3 R = reflect(-V, N); // invert V because reflect expects incident vector
vec3 color = texture(skybox, R).xyz;
Now, let us move to the other side of the cube and look at that mirror plane. In view space, the point at the center you are looking at is still (0, 0, -z); it gets reflected around the normal back to you, so the reflected vector again looks like (0, 0, +z). This means that even on the other side of your cube you will sample the same face of your cube map.
So what you have to do is go back into world space using the inverse of your view matrix. If, in addition, you rendered the skybox itself with a rotation applied, you will also have to transform your reflected vector with the inverse of the model matrix you used to transform the skybox, otherwise the reflections will still be wrong. A sketch of both steps follows.
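Assuming a hypothetical SkyboxModel matrix was used to rotate the skybox, the two corrections might look like:

vec3 worldR = inverse(mat3(View)) * viewR; // view space -> world space
vec3 skyboxR = inverse(mat3(SkyboxModel)) * worldR; // undo the skybox rotation (SkyboxModel is hypothetical)
vec3 color = texture(skybox, skyboxR).rgb;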

How to get the room (or screen) coordinates from inside a Gamemaker Studio 2 shader?

I'm mostly new to writing shaders, and this might not be a great place to start, but I'm trying to make a shader to make an object "sparkle." I won't get into the specifics on how it's supposed to look, but to make it I need a value that changes with the object's position in the room (or on the screen, as the camera is fixed). I've tried v_vTexcoord, in_Position, gl_Position, and others without the intended result. If I've used them wrong or missed something, I wouldn't be surprised, but any advice is helpful.
I don't think they'll be helpful but here's my vertex shader:
//it's mostly the same as the default
// Simple passthrough vertex shader
//
attribute vec3 in_Position;     // (x,y,z)
//attribute vec3 in_Normal;     // (x,y,z) unused in this shader.
attribute vec4 in_Colour;       // (r,g,b,a)
attribute vec2 in_TextureCoord; // (u,v)

varying vec2 v_vTexcoord;
varying vec4 v_vColour;
varying vec3 v_inpos;

void main()
{
    vec4 object_space_pos = vec4( in_Position.x, in_Position.y, in_Position.z, 1.0);
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;

    v_vColour = in_Colour;
    v_vTexcoord = in_TextureCoord;
    v_inpos = /*this is the variable that i'd like to set to the x,y*/;
}
and my fragment shader:
//
//
//
varying vec2 v_vTexcoord; //x is <1, >.5
varying vec4 v_vColour;
varying vec3 v_inpos;

uniform float pixelH; //unused, but left in for accuracy
uniform float pixelW; //see above

void main() //WHEN AN ERROR HAPPENS, THE SHADER JUST WON'T DO ANYTHING AT ALL.
{
    vec2 offset = vec2 (pixelW, pixelH);
    gl_FragColor = v_vColour * texture2D( gm_BaseTexture, v_vTexcoord );

    /* i am planning on just testing different math until something works, but i can't
    vec3 test = vec3 (v_inpos.x, v_inpos.x, v_inpos.x); //find the values i need to test it
    test.x = mod(test.x,.08);
    test.x = test.x - 4;
    test.x = abs(test.x);
    while (test.x > 1.0){
        test.x = test.x/10;
    }
    test = vec3 (test.x, test.x, test.x);
    gl_FragColor.a = test.x;
    */
    //everything above doesn't cause an error when uncommented, i think
    //if (v_inpos.x == 0.0){gl_FragColor.rgb = vec3 (1,0,0);}
    //if (v_inpos.x > 1) {gl_FragColor.rgb = vec3 (0,1,0);}
    //if (v_inpos.x < 1) {gl_FragColor.rgb = vec3 (0,0,1);}
}
If this question doesn't make sense, I'll try to clarify in the comments.
If you want to get a position in world space, then you have to transform the vertex coordinate from model space to world space.
This can be done by the model (world) matrix (gm_Matrices[MATRIX_WORLD]). See game-maker-studio-2 - Matrices.
e.g.:
vec4 object_space_pos = vec4(in_Position.xyz, 1.0);
vec4 world_space_pos = gm_Matrices[MATRIX_WORLD] * object_space_pos;
Note, the Cartesian world space position can then be obtained by:
(See also Swizzling)
vec3 pos = world_space_pos.xyz;
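Wired into the vertex shader from the question, that might look like this sketch (v_inpos is the varying the question already declares):

void main()
{
    vec4 object_space_pos = vec4(in_Position.xyz, 1.0);
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;

    v_vColour = in_Colour;
    v_vTexcoord = in_TextureCoord;

    // room (world) coordinates, interpolated per fragment
    v_inpos = (gm_Matrices[MATRIX_WORLD] * object_space_pos).xyz;
}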

How to fix incorrect Blinn-Phong lighting

I am trying to implement Blinn-Phong shading for a single light source within a Vulkan shader but I am getting a result which is not what I expect.
The output is shown below:
The light position should be behind and to the right of the camera, which is correctly represented on the tori but not on the circle. I do not expect a point of high intensity in the middle of the circle.
The light position is at coordinates (10, 10, 10).
The point of high intensity in the middle of the circle is at (0, 0, 0).
Vertex shader:
#version 450
#extension GL_ARB_separate_shader_objects : enable

layout(binding = 0) uniform MVP {
    mat4 model;
    mat4 view;
    mat4 proj;
} mvp;

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inColor;
layout(location = 2) in vec2 inTexCoord;
layout(location = 3) in vec3 inNormal;

layout(location = 0) out vec3 fragColor;
layout(location = 1) out vec2 fragTexCoord;
layout(location = 2) out vec3 Normal;
layout(location = 3) out vec3 FragPos;
layout(location = 4) out vec3 viewPos;

void main() {
    gl_Position = mvp.proj * mvp.view * mvp.model * vec4(inPosition, 1.0);
    fragColor = inColor;
    fragTexCoord = inTexCoord;
    Normal = inNormal;
    FragPos = inPosition;
    viewPos = vec3(mvp.view[3][0], mvp.view[3][1], mvp.view[3][2]);
}
Fragment shader:
#version 450
#extension GL_ARB_separate_shader_objects : enable

layout(binding = 1) uniform sampler2D texSampler;

layout(binding = 2) uniform LightUBO {
    vec3 position;
    vec3 color;
} Light;

layout(location = 0) in vec3 fragColor;
layout(location = 1) in vec2 fragTexCoord;
layout(location = 2) in vec3 Normal;
layout(location = 3) in vec3 FragPos;
layout(location = 4) in vec3 viewPos;

layout(location = 0) out vec4 outColor;

void main() {
    vec3 color = texture(texSampler, fragTexCoord).rgb;

    // ambient
    vec3 ambient = 0.2 * color;

    // diffuse
    vec3 lightDir = normalize(Light.position - FragPos);
    vec3 normal = normalize(Normal);
    float diff = max(dot(lightDir, normal), 0.0);
    vec3 diffuse = diff * color;

    // specular
    vec3 viewDir = normalize(viewPos - FragPos);
    vec3 reflectDir = reflect(-lightDir, normal);
    float spec = 0.0;
    vec3 halfwayDir = normalize(lightDir + viewDir);
    spec = pow(max(dot(normal, halfwayDir), 0.0), 32.0);
    vec3 specular = vec3(0.25) * spec;

    outColor = vec4(ambient + diffuse + specular, 1.0);
}
Note:
I am trying to port the shaders from this tutorial to Vulkan.
This would seem to simply be a question of using the right coordinate system. Since some vital information is missing from your question, I will have to make a few assumptions. First of all, based on the fact that you have a model matrix and apparently have multiple objects in your scene, I will assume that your world space and object space are not the same in general. Furthermore, I will assume that your model matrix transforms from object space to world space, your view matrix transforms from world space to view space and your proj matrix transforms from view space to clip space. I will also assume that your inPosition and inNormal attributes are in object space coordinates.
Based on all of this, your viewPos is just taking the last column of the view matrix, which will not contain the camera position in world space. Neither will the last row. The view matrix transforms from world space to view space. Its last column corresponds to the vector pointing to the world space origin as seen from the perspective of the camera. Your FragPos and Normal will be in object space. And, based on what you said in your question, your light positions are in world space. So in the end, you're just mashing together coordinates that are all relative to completely different coordinate systems. For example:
vec3 lightDir = normalize(Light.position - FragPos);
Here, you're subtracting an object space position from a world space position, which will yield a completely meaningless result. This meaningless result is then normalized and dotted with an object-space direction:
float diff = max(dot(lightDir, normal), 0.0);
Also, even if viewPos were the world-space camera position, this
vec3 viewDir = normalize(viewPos - FragPos);
would still be meaningless since FragPos is given in object-space coordinates.
Operations on coordinate vectors only make sense if all the vectors involved are relative to the same coordinate system. It doesn't really matter so much which coordinate system you choose. But you have to pick one. Make sure all your vectors are actually relative to that coordinate system, e.g., world space. If some vectors do not already happen to be in that coordinate system, you will have to transform them into that coordinate system. Only once all your vectors are in the same coordinate system, your shading computations will be meaningful…
To get the viewPos, you could take the last column of the inverse view matrix (if you happened to already have that somewhere for some reason), or simply pass the camera position as an additional uniform. Also, rather than multiply the model view and projection matrices again and again, once for every single vertex, consider just passing a combined model-view-projection matrix to the shader…
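A minimal sketch of the vertex-shader side under that advice, assuming world space is the chosen coordinate system; the camPosWorld uniform is hypothetical:

// world-space position and normal (the normal matrix handles non-uniform scale;
// in practice it is better computed once on the CPU)
FragPos = vec3(mvp.model * vec4(inPosition, 1.0));
Normal = mat3(transpose(inverse(mvp.model))) * inNormal;
viewPos = camPosWorld; // hypothetical uniform: the camera's world-space position

With these, Light.position (given in world space) can be combined with FragPos, Normal, and viewPos meaningfully.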
Apart from that: Note that you will most likely only want to have a specular component if the surface is actually oriented towards the light.
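In the fragment shader above, that could be as simple as gating the specular term on the diffuse one:

float spec = (diff > 0.0) ? pow(max(dot(normal, halfwayDir), 0.0), 32.0) : 0.0;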

Multitarget rendering in two different spaces in one shader

I am writing a program that works with a model in world space. For my purposes, I want the shader to produce two textures:
In world space, for postprocessing.
In screen space (as a UV unwrap), for saving the result of the shader's work in the current scene into a new texture for the object.
I can't figure out how to do this in one shader with multi-target rendering.
In the vertex shader I set position for both spaces:
layout (location = 0) in vec3 position;
out vec4 scrn_Position;
...
gl_Position = projection * view * model * vec4(position,1.0);
scrn_Position = model * vec4(position,1.0);
In the fragment shader I set two outputs:
in vec4 scrn_Position;
...
layout (location = 0) out vec4 wrld_Color;
layout (location = 1) out vec4 scrn_Color;
But how do I work with scrn_Position in the fragment shader? Is it possible to do this in one pass?

How to use a directional light in a blinn shader instead of point light?

So I am using a Blinn shader program on some of my models, and I want it to use a DIRECTIONAL light as opposed to a point light. I started out with this original code, which uses a point light:
VERTEX:
varying vec3 Half;
varying vec3 Light;
varying vec3 Normal;
varying vec4 Ambient;

void main()
{
    // Vertex location in modelview coordinates
    vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex);
    Light = vec3(gl_LightSource[0].position) - P;
    Normal = gl_NormalMatrix * gl_Normal;
    Half = gl_LightSource[0].halfVector.xyz;
    Ambient = gl_FrontMaterial.emission + gl_FrontLightProduct[0].ambient + gl_LightModel.ambient * gl_FrontMaterial.ambient;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
FRAGMENT:
varying vec3 Half;
varying vec3 Light;
varying vec3 Normal;
varying vec4 Ambient;

uniform sampler2D tex;

vec4 blinn()
{
    vec3 N = normalize(Normal);
    vec3 L = normalize(Light);
    vec4 color = Ambient;
    float Id = dot(L, N);
    if (Id > 0.0)
    {
        color += Id * gl_FrontLightProduct[0].diffuse;
        vec3 H = normalize(Half);
        float Is = dot(H, L); // Specular is cosine of reflected and view vectors
        if (Is > 0.0) color += pow(Is, gl_FrontMaterial.shininess) * gl_FrontLightProduct[0].specular;
    }
    return color;
}

void main()
{
    gl_FragColor = blinn() * texture2D(tex, gl_TexCoord[0].xy);
}
However, as stated above, instead of a point light I want a directional light, such that no matter where the model is in the scene, the direction of the light is the same. So I make the following changes:
Instead of:
varying vec3 Light;
and
vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex);
Light = vec3(gl_LightSource[0].position) - P;
I get rid of the above lines of code and instead in my fragment shader have:
uniform vec4 lightDir;
and
vec3 L = normalize(lightDir.xyz);
I pass the direction of the light as a uniform from outside my shader program, and this works well: the model is lit from a single direction no matter its location in the world! HOWEVER, the lighting now changes dramatically and unrealistically depending on the user's view, which makes sense since I got rid of the "- P" in the light calculation from the original code.
I've already tried adding that back (by moving lightDir to the vertex shader and passing it along again in a varying), and it just doesn't fix the problem. I'm afraid I just don't understand this well enough to figure it out. I understand that the "- P" for the Light vec3 is necessary to make the specular reflection work, but I don't know how to make it work for a directional light. How do I take the original code above and make it treat the light as a directional light as opposed to a point light?
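For reference, a sketch of one common way to rebuild the specular term for a directional light, keeping every vector in eye space. It assumes lightDir is supplied already rotated into eye space (e.g. multiplied by the upper 3x3 of the view matrix on the CPU); the View varying is hypothetical and replaces the precomputed Half, and the sketch uses the usual N·H form of Blinn's specular:

VERTEX:
varying vec3 View; // hypothetical: vector from the vertex toward the eye
...
View = -vec3(gl_ModelViewMatrix * gl_Vertex); // the eye sits at the origin in eye space

FRAGMENT:
varying vec3 View;
uniform vec4 lightDir; // directional light, assumed already in eye space
...
vec3 N = normalize(Normal);
vec3 L = normalize(lightDir.xyz); // constant direction for every fragment
vec3 V = normalize(View);
vec3 H = normalize(L + V); // per-fragment half vector replaces gl_LightSource[0].halfVector
float Id = max(dot(L, N), 0.0);
float Is = (Id > 0.0) ? pow(max(dot(N, H), 0.0), gl_FrontMaterial.shininess) : 0.0;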