Simple procedural skybox - OpenGL

As part of an attempt to generate a very simple looking sky, I've created a skybox (basically a cube going from (-1, -1, -1) to (1, 1, 1)), which is drawn after all of my geometry and forced to the back via the following simple vertex shader:
#version 330

layout(location = 0) in vec4 position;
layout(location = 1) in vec4 normal;

out Data
{
    vec4 eyespace_position;
    vec4 eyespace_normal;
    vec4 worldspace_position;
    vec4 raw_position;
} vtx_data;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    // Strip the translation so the skybox stays centered on the camera.
    mat4 view_without_translation = view;
    view_without_translation[3][0] = 0.0f;
    view_without_translation[3][1] = 0.0f;
    view_without_translation[3][2] = 0.0f;

    vtx_data.raw_position = position;
    vtx_data.worldspace_position = model * position;
    vtx_data.eyespace_position = view_without_translation * vtx_data.worldspace_position;

    // The .xyww swizzle makes z/w == 1.0, pushing the skybox to the far plane.
    gl_Position = (projection * vtx_data.eyespace_position).xyww;
}
From this, I'm trying to have my sky display as a very simple gradient from a deep blue at the top to a lighter blue at the horizon.
Obviously, simply mixing my two colors based on the Y coordinate of each fragment is going to look very bad: the fact that you're looking at a box and not a dome is immediately clear, as seen here:
Note the fairly visible "corners" at the top left and top right of the box.
Instinctively, I was thinking that the obvious fix would be to normalize the position of each fragment to get a position on a unit sphere, then take the Y coordinate of that. I thought that would result in a value that would be constant for a given "altitude", if that makes sense. Like this:
#version 330

in Data
{
    vec4 eyespace_position;
    vec4 eyespace_normal;
    vec4 worldspace_position;
    vec4 raw_position;
} vtx_data;

out vec4 outputColor;

const vec4 skytop = vec4(0.0f, 0.0f, 1.0f, 1.0f);
const vec4 skyhorizon = vec4(0.3294f, 0.92157f, 1.0f, 1.0f);

void main()
{
    vec4 pointOnSphere = normalize(vtx_data.worldspace_position);
    float a = pointOnSphere.y;
    outputColor = mix(skyhorizon, skytop, a);
}
The result, however, is much the same as the first screenshot (I can post it if necessary, but since it's visually similar to the first, I'm skipping it to keep this question shorter).
After some random fiddling (cargo cult programming, I know :/), I realized that this works:
void main()
{
    vec3 pointOnSphere = normalize(vtx_data.worldspace_position.xyz);
    float a = pointOnSphere.y;
    outputColor = mix(skyhorizon, skytop, a);
}
The only difference is that I normalize the position without its W component.
And here's the working result: (the difference is subtle in screenshots but quite noticeable in motion)
So, finally, my question: why does this work when the previous version fails? I must be misunderstanding something extremely basic about homogeneous coordinates, but my brain just isn't clicking right now!

GLSL's normalize does not handle homogeneous coordinates per se: it treats the argument as a plain vector in R^4 and divides by its 4D length, W component included. This is in general not what you want. However, if vtx_data.worldspace_position.w == 0, then normalizing the vec4 would produce the same result as normalizing its xyz part.
Also, vec3 pointOnSphere = normalize(vtx_data.worldspace_position); shouldn't even compile, because the right-hand side has type vec4, so the left-hand side would need to be vec4 as well; the .xyz swizzle in your working version is what makes the types (and the math) line up.
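To make the difference concrete, here is a minimal illustration (my own sketch, not from the original post), assuming w == 1, which is what transforming a point position by the model matrix normally yields:

// Hypothetical values: a fragment looking straight up.
vec4 p  = vec4(0.0, 1.0, 0.0, 1.0);
vec4 n4 = normalize(p);      // 4D length = sqrt(0 + 1 + 0 + 1) = sqrt(2), so n4.y ~= 0.707
vec3 n3 = normalize(p.xyz);  // 3D length = 1, so n3.y == 1.0, the intended "altitude"

Because the 4D normalize also counts W in the length, the result no longer lies on the unit sphere in XYZ, so the Y value is not constant for a given viewing altitude and the box silhouette stays visible.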

Related

Is there a reason this shader is drawing weird?

I have two linear squares, one behind the other. The front one has transparent parts so you can see some of the back square. I only want the back one to be visible within the first square and not outside of it. For my project the back square is far bigger than the front square (I know I could just change the back square's vertices). Anyway, I'm super confused why what I thought should work doesn't. Here's the code for my vertex shader.
#version 330 core

layout(location = 0) in vec4 position;
layout(location = 1) in vec2 textCoord;
layout(location = 2) in vec4 color;
layout(location = 3) in float tIndex;

uniform mat4 u_MVP;
uniform float maxSquareHeight;
uniform float maxSquareWidth;

out vec2 v_textCoord;
out float v_tIndex;
flat out vec4 v_color;

void main()
{
    if(tIndex == 1.0f && ((abs(position.x) > maxSquareWidth) || (abs(position.y) > maxSquareHeight))){
        v_color = vec4(0.0f, 0.0f, 1.0f, 0.5f);
    }
    // else{
    //     v_color = color;
    // }
    .......
}
As of now it is exactly how I want it. However, I initially thought the commented-out part would work, but it ends up not showing the back square at all. I'm still new to OpenGL, but from what I looked up about the flat keyword, I assumed the shader would draw two separate colors depending on the back square's position. Thanks for any answers!

Inverted geometry gBuffer positions for perspective. Orthographic is ok?

I have a deferred renderer which appears to work correctly: depth, colour and shading come out correctly. However, the position buffer is fine for orthographic, while the geometry appears 'inverted' (or depth-disabled) when using a perspective projection.
I am getting the following buffer outputs for orthographic.
With the final 'shaded' image currently looking correct.
However when I am using a perspective projection I get the following buffers coming out...
And the final image is fine, although I don't incorporate any position buffer information at the moment (N.B. only doing 'headlight' shading for now).
While the final image appears correct, the depth buffer appears to be ignored for my position buffer (there is no glDisable(GL_DEPTH_TEST) in the code).
The depth and normal buffers look OK to me; it's only the 'position' buffer which appears to be ignoring the depth. The render pipeline is exactly the same for ortho and perspective, with the only difference being the projection matrix.
I use glm::ortho and glm::perspective, and I calculate my near/far clipping distances on the fly based on the scene AABB. For orthographic my near/far is 1 and 11.4734 respectively, and for perspective it is 11.0875 and 22.5609. The width and height values are the same; the fov is 45 for the perspective projection.
I do have these calls before drawing any geometry...
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Which I use for compositing different layers as part of the render pipeline.
Am I doing anything wrong here? or am I misunderstanding something?
Here are my shaders...
Vertex shader of gBuffer...
#version 430 core

layout (std140) uniform MatrixPV
{
    mat4 P;
    mat4 V;
};

layout(location = 0) in vec3 InPoint;
layout(location = 1) in vec3 InNormal;
layout(location = 2) in vec2 InUV;

uniform mat4 M;

out vec4 Position;
out vec3 Normal;
out vec2 UV;

void main()
{
    mat4 VM = V * M;
    gl_Position = P * VM * vec4(InPoint, 1.0);
    Position = P * VM * vec4(InPoint, 1.0);
    Normal = mat3(M) * InNormal;
    UV = InUV;
}
Fragment shader of gBuffer...
#version 430 core

layout(location = 0) out vec4 gBufferPicker;
layout(location = 1) out vec4 gBufferPosition;
layout(location = 2) out vec4 gBufferNormal;
layout(location = 3) out vec4 gBufferDiffuse;

in vec3 Normal;
in vec4 Position;

vec4 Diffuse();

uniform vec4 PickerColour;

void main()
{
    gBufferPosition = Position;
    gBufferNormal = vec4(Normal.xyz, 1.0);
    gBufferPicker = PickerColour;
    gBufferDiffuse = Diffuse();
}
And here is the 'second pass' shader to visualise the position buffer...
#version 430 core

uniform sampler2D debugBufferPosition;

in vec2 UV;
out vec4 frag;

void main()
{
    vec3 val = texture(debugBufferPosition, UV).xyz;
    frag = vec4(val.xyz, 1.0);
}
I haven't used the position buffer data yet, and I know I can reconstruct positions without storing them in another buffer; however, the positions are useful to me for other reasons, and I would like to know why they are coming out as they are for perspective.
What you actually write to the position buffer is the clip-space coordinate:
Position = P * VM * vec4(InPoint, 1.0);
The clip-space coordinate is a homogeneous coordinate and is transformed to the normalized device coordinate (which is a Cartesian coordinate) by the perspective divide:
ndc = gl_Position.xyz / gl_Position.w;
With an orthographic projection the w component is 1, but with a perspective projection the w component contains a value which depends on the z component (depth) of the (Cartesian) view-space coordinate.
I recommend storing the normalized device coordinate in the position buffer rather than the clip-space coordinate, e.g.:
gBufferPosition = vec4(Position.xyz / Position.w, 1.0);
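If you want Cartesian positions for later lighting passes, another option (not from the original answer, just a common alternative) is to write the view-space position to the buffer before the projection is applied, so no divide is needed at all:

// Hypothetical variant of the gBuffer vertex shader above.
mat4 VM = V * M;
vec4 viewPos = VM * vec4(InPoint, 1.0);  // Cartesian view-space position, w == 1
gl_Position  = P * viewPos;              // clip space, used only for rasterization
Position     = viewPos;                  // what gets stored in gBufferPosition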

How can I texture with vertex position coordinates? OpenGL, C++

I want to texture my terrain without predetermined texture coordinates; I want to determine the coordinates in the vertex or fragment shader using the vertex position coordinates. I currently use the position's xz coordinates (up = (0,1,0)), but if I have, for example, a wall that stands at 90 degrees to the ground, the texture comes out like this:
How can I transform these position coordinates so that the texture works on walls as well?
Here's my vertex shader:
#version 430

in layout(location=0) vec3 position;
in layout(location=1) vec2 textCoord;
in layout(location=2) vec3 normal;

out vec3 pos;
out vec2 text;
out vec3 norm;

uniform mat4 transformation;

void main()
{
    gl_Position = transformation * vec4(position, 1.0);
    norm = normal;
    pos = position;
    text = position.xz;
}
And here's my fragment shader:
#version 430

in vec3 pos;
in vec2 text;
in vec3 norm;

//uniform sampler2D textures[3];
layout(binding=3) uniform sampler2D texture_1;
layout(binding=4) uniform sampler2D texture_2;
layout(binding=5) uniform sampler2D texture_3;

vec3 lightPosition = vec3(-200, 700, 50);
vec3 lightAmbient = vec3(0,0,0);
vec3 lightDiffuse = vec3(1,1,1);
vec3 lightSpecular = vec3(1,1,1);

out vec4 fragColor;
vec4 theColor;

void main()
{
    vec3 unNormPos = pos;
    vec3 lightVector = normalize(lightPosition) - normalize(pos);
    //lightVector = normalize(lightVector);
    float cosTheta = clamp(dot(normalize(lightVector), normalize(norm)), 0.5, 1.0);

    if(pos.y <= 120){
        fragColor = texture2D(texture_2, text*0.05) * cosTheta;
    }
    if(pos.y > 120 && pos.y < 150){
        fragColor = (texture2D(texture_2, text*0.05) * (1 - (pos.y-120)/29) + texture2D(texture_3, text*0.05) * ((pos.y-120)/29)) * cosTheta;
    }
    if(pos.y >= 150)
    {
        fragColor = texture2D(texture_3, text*0.05) * cosTheta;
    }
}
EDIT: (Fons)
text = 0.05 * (position.xz + vec2(0,position.y));
text = 0.05 * (position.xz + vec2(position.y,position.y));
Now the walls work but the terrain doesn't.
The problem is actually a very difficult one, since you cannot devise a formula for the texture coordinates that displays vertical walls correctly, using only the xyz coordinates.
To visualize this, imagine a hill next to a piece of flat land. Since the path going over the hill is longer than the one going over the flat piece of land, the texture should wrap more times on the hill than on the flat piece of land. In the image below, the texture wraps 5 times on the hill and 4 times on the flat piece.
If the texture coordinates are (0,0) on the left, should they be (4,0) or (5,0) on the right? Since both answers are valid, this proves that there is no function that calculates correct texture coordinates based purely on the xyz coordinates. :(
However, your problems might be solved with different methods:
The walls can be corrected by generating them independently from the terrain, and assigning correct texture coordinates to them. It actually makes more sense not to incorporate those in your terrain.
You can add more detail to the sides of steep hills with normal maps, textures of higher resolution, or a combination of different textures. There might be a better solution that I don't know about.
Edit: Triplanar mapping will solve your problem!
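For reference, here is a minimal triplanar-mapping sketch (my own illustration, not part of the original answer), reusing the pos, norm and texture_2 names from the fragment shader above. It samples three planar projections and blends them by the absolute components of the surface normal, so near-vertical walls are textured by the XY/ZY projections instead of the stretched XZ one:

// Hypothetical fragment-shader snippet: triplanar blend of a single texture.
vec3 blend = abs(normalize(norm));
blend /= (blend.x + blend.y + blend.z);            // make the weights sum to 1

vec4 xProj = texture(texture_2, pos.zy * 0.05);    // projection along the X axis
vec4 yProj = texture(texture_2, pos.xz * 0.05);    // projection along the Y axis (flat ground)
vec4 zProj = texture(texture_2, pos.xy * 0.05);    // projection along the Z axis

vec4 triplanarColor = xProj * blend.x + yProj * blend.y + zProj * blend.z;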
Try:
text = position.xz + vec2(0, position.y);
Also, I recommend applying the 0.05 scale factor in the vertex shader instead of the fragment shader. The final code would be:
text = 0.05 * (position.xz + vec2(0, position.y));

OpenGL Simple Shading, Artifacts

I've been trying to implement a simple light/shading system, a simple Phong lighting model without the specular term, to be precise. It basically works, except it has some (in my opinion) nasty artifacts.
My first thought was that this might be a problem with the texture mipmaps, but disabling them didn't help. My next best guess is a shader issue, but I can't seem to find the error.
Has anybody ever experienced a similar issue, or an idea on how to solve this?
Image of the artifacts
Vertex shader:
#version 330 core
// Vertex shader
layout(location = 0) in vec3 vpos;
layout(location = 1) in vec2 vuv;
layout(location = 2) in vec3 vnormal;

out vec2 uv;        // UV coordinates
out vec3 normal;    // Normal in camera space
out vec3 pos;       // Position in camera space
out vec3 light[3];  // Vertex -> light vector in camera space

uniform mat4 mv;    // View * Model matrix
uniform mat4 mvp;   // Proj * View * Model matrix
uniform mat3 nm;    // Normal matrix for transforming normals into c-space

void main() {
    // Pass uv coordinates
    uv = vuv;

    // Adjust normals
    normal = nm * vnormal;

    // Calculation of vertex in camera space
    pos = (mv * vec4(vpos, 1.0)).xyz;

    // Vector vertex -> light in camera space
    light[0] = (mv * vec4(0.0, 0.3, 0.0, 1.0)).xyz - pos;
    light[1] = (mv * vec4(-6.0, 0.3, 0.0, 1.0)).xyz - pos;
    light[2] = (mv * vec4(0.0, 0.3, 4.8, 1.0)).xyz - pos;

    // Pass position after projection transformation
    gl_Position = mvp * vec4(vpos, 1.0);
}
Fragment shader:
#version 330 core
// Fragment shader
layout(location = 0) out vec3 color;

in vec2 uv;        // UV coordinates
in vec3 normal;    // Normal in camera space
in vec3 pos;       // Position in camera space
in vec3 light[3];  // Vertex -> light vector in camera space

uniform sampler2D tex;
uniform float flicker;

void main() {
    vec3 n = normalize(normal);

    // Ambient
    color = 0.05 * texture(tex, uv).rgb;

    // Diffuse lights
    for (int i = 0; i < 3; i++) {
        vec3 l = normalize(light[i]);
        float cosTheta = clamp(dot(n, l), 0.0, 1.0);
        float dist = length(light[i]);
        color += 0.6 * texture(tex, uv).rgb * cosTheta / pow(dist, 2.0);
    }
}
As the first comment says, it looks like your color computation is using insufficient precision. Try using mediump or highp floats.
Additionally, taking length(light[i]) and then squaring it with pow is quite inefficient, and could also be a source of the observed banding; you should use dot(light[i], light[i]) instead.
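As a concrete sketch of that suggestion (my wording, applied to the diffuse loop above):

// Squared distance without the square root that length() performs.
float dist2 = dot(light[i], light[i]);
color += 0.6 * texture(tex, uv).rgb * cosTheta / dist2;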
So I found information about my problem, described as "gradient banding", also discussed here. The problem appears to lie in the nature of my textures: since both the plain "white" texture and the real texture are mostly grey/white, there are effectively only 256 levels of grey available with 8 bits per color channel.
The solution would be to implement post-processing dithering or to use better textures.
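A minimal dithering sketch (my own illustration, not from the post), appended at the end of the fragment shader above: a per-pixel noise offset of less than one 8-bit step breaks up the visible bands before the value is quantized by the framebuffer:

// Hypothetical screen-space dither: +/- half of one 8-bit quantization step.
float noise = fract(sin(dot(gl_FragCoord.xy, vec2(12.9898, 78.233))) * 43758.5453);
color += (noise - 0.5) / 255.0;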

OpenGL point light moving when camera rotates

I have a point light in my scene. I thought it worked correctly until I tested it with the camera looking at the lit object from different angles and found that the lit area moves on the mesh (in my case a simple plane). I'm using a typical ADS Phong lighting approach. I transform the light position into camera space on the client side, and then transform the interpolated vertex in the vertex shader with the model-view matrix.
My vertex shader looks like this:
#version 420

layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvs;
layout(location = 2) in vec3 normal;

uniform mat4 MVP_MATRIX;
uniform mat4 MODEL_VIEW_MATRIX;
uniform mat4 VIEW_MATRIX;
uniform mat3 NORMAL_MATRIX;
uniform vec4 DIFFUSE_COLOR;

//======= OUTS ============//
out smooth vec2 uvsOut;
out flat vec4 diffuseOut;
out vec3 Position;
out smooth vec3 Normal;

out gl_PerVertex
{
    vec4 gl_Position;
};

void main()
{
    uvsOut = uvs;
    diffuseOut = DIFFUSE_COLOR;
    Normal = normal;
    Position = vec3(MODEL_VIEW_MATRIX * position);
    gl_Position = MVP_MATRIX * position;
}
The fragment shader:
//==================== Uniforms ===============================
struct LightInfo{
    vec4 Lp;  // light position
    vec3 Li;  // light intensity
    vec3 Lc;  // light color
    int  Lt;  // light type
};

const int MAX_LIGHTS = 5;
uniform LightInfo lights[1];

// material props:
uniform vec3 KD;
uniform vec3 KA;
uniform vec3 KS;
uniform float SHININESS;
uniform int num_lights;

//// ADS lighting method:
vec3 pointlightType(int lightIndex, vec3 position, vec3 normal) {
    vec3 n = normalize(normal);
    vec4 lMVPos = lights[0].Lp;
    vec3 s = normalize(vec3(lMVPos.xyz) - position); // surface to light
    vec3 v = normalize(vec3(-position));             // surface to eye
    vec3 r = normalize(-reflect(s, n));
    vec3 h = normalize(v + s);

    float sDotN = max(0.0, dot(s, n));

    vec3 diff = KD * lights[0].Lc * sDotN;
    diff = clamp(diff, 0.0, 1.0);

    vec3 spec = vec3(0, 0, 0);
    if (sDotN > 0.0) {
        spec = KS * pow(max(0.0, dot(n, h)), SHININESS);
        spec = clamp(spec, 0.0, 1.0);
    }
    return lights[0].Li * (spec + diff);
}
I have studied a lot of tutorials, but none of them gives a thorough explanation of the whole process when it comes to transform spaces. I suspect it has something to do with the camera space into which I transform the light and vertex positions. In my case the view matrix is created with
glm::lookAt()
which always negates the "eye" vector, so the view matrix in my shaders has a negated translation part. Is it supposed to be like that? Can someone give a detailed explanation of how it is done the right way in the programmable pipeline? My shaders are implemented based on the book "OpenGL 4.0 Shading Language Cookbook". The author also seems to use camera space, but it doesn't work right, unless that is the way it should work...
I just moved the calculations into world space. Now the point light stays in place. But how do I achieve the same using camera space?
I nailed down the bug and it was a pretty stupid one, but it may be helpful to others who are not too "math friendly". My light position in the shaders is defined with a vec3; on the client side it is represented with a vec4, and I was effectively setting the .w component of that vec4 to zero each time before transforming it with the view matrix. Doing so, I believe, meant the light position vector wasn't getting transformed correctly, and all the light position problems in the shader stem from this. The solution is to keep the w component of the light position vector always equal to 1.
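To make that concrete, here is a minimal sketch (my own illustration; lightPosWorld is a hypothetical name, and VIEW_MATRIX is the uniform from the vertex shader above). With w == 0 the translation column of the view matrix is ignored, so the light behaves like a direction that rotates with the camera instead of a fixed point:

// Hypothetical snippet: transforming a point light position into camera space.
vec4 lightEyeCorrect = VIEW_MATRIX * vec4(lightPosWorld, 1.0); // w == 1: rotation + translation, correct
vec4 lightEyeBroken  = VIEW_MATRIX * vec4(lightPosWorld, 0.0); // w == 0: translation dropped, the bug above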