Is there a reason this shader is drawing weird? - glsl

I have two squares, one behind the other. The front one has transparent parts so you can see some of the back square. I only want the back one to be visible within the first square, not outside of it. For my project the back square is far bigger than the front square (I know I could just change the back square's vertices). Anyway, I'm confused about why what I thought should work doesn't. Here's the code for my vertex shader.
#version 330 core

layout(location = 0) in vec4 position;
layout(location = 1) in vec2 textCoord;
layout(location = 2) in vec4 color;
layout(location = 3) in float tIndex;

uniform mat4 u_MVP;
uniform float maxSquareHeight;
uniform float maxSquareWidth;

out vec2 v_textCoord;
out float v_tIndex;
flat out vec4 v_color;

void main()
{
    if(tIndex == 1.0f && ((abs(position.x) > maxSquareWidth) || (abs(position.y) > maxSquareHeight))){
        v_color = vec4(0.0f, 0.0f, 1.0f, 0.5f);
    }
    // else{
    //     v_color = color;
    // }
    .......
};
As it is now, it does exactly what I want. However, I initially thought the commented-out part would work, but with it the back square doesn't show at all. I'm still new to OpenGL, but from what I looked up about the flat keyword I assumed the shader would draw two separate colors depending on the back square's position. Thanks for any answers!
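(Editorial aside, not part of the original post: for reference, a flat output is not interpolated; every fragment of a triangle receives the value written by the provoking vertex, usually the last one, so a per-vertex branch like the one above can only ever give each whole triangle a single color.)

flat out vec4 v_color;    // one value per triangle, taken from the provoking (last) vertex
out vec4 v_colorSmooth;   // by contrast, a non-flat output is interpolated across the triangle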

Related

OpenGL GLSL bug in which the model goes invisible if I use the GLSL texture function with different parameters

I want to replicate a game. The goal of the game is to create a path between any two squares that have the same color.
Here is the game: www.mypuzzle.org/3d-logic-2
The cube has 6 faces. Each face has 3x3 squares.
The cube has different square types: empty squares (which reflect the environment), wall squares (which you can't color), and start/finish squares (which have a black square in the middle but are otherwise colored).
I'm close to finishing my project but I'm stuck on a bug. I used C++, SFML, OpenGL and GLM.
The problem is in the shaders.
Vertex shader:
#version 330 core

layout (location = 0) in vec3 vPosition;
layout (location = 1) in vec3 vColor;
layout (location = 2) in vec2 vTexCoord;
layout (location = 3) in float vType;
layout (location = 4) in vec3 vNormal;

out vec3 Position;
out vec3 Color;
out vec2 TexCoord;
out float Type;
out vec3 Normal;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * model * vec4(vPosition, 1.0f);
    Position = vec3(model * vec4(vPosition, 1.0f));
    Color = vColor;
    TexCoord = vTexCoord;
    Type = vType;
    Normal = mat3(transpose(inverse(model))) * vNormal;
}
Fragment shader:
#version 330 core

in vec3 Color;
in vec3 Normal;
in vec3 Position;
in vec2 TexCoord;
in float Type;

out vec4 color;

uniform samplerCube skyboxTexture;
uniform sampler2D faceTexture;
uniform sampler2D centerTexture;

void main()
{
    color = vec4(0.0, 0.0, 0.0, 1.0);
    if(Type == 0.0)
    {
        vec3 I = normalize(Position);
        vec3 R = reflect(I, normalize(Normal));
        if(texture(faceTexture, TexCoord) == vec4(1.0, 1.0, 1.0, 1.0))
            color = mix(texture(skyboxTexture, R), vec4(1.0, 1.0, 1.0, 1.0), 0.3);
    }
    else if(Type == 1.0)
    {
        if(texture(centerTexture, TexCoord) == vec4(1.0, 1.0, 1.0, 1.0))
            color = vec4(Color, 1.0);
    }
    else if(Type == -1.0)
    {
        color = vec4(0.0, 0.0, 0.0, 1.0);
    }
    else if(Type == 2.0)
    {
        if(texture(faceTexture, TexCoord) == vec4(1.0, 1.0, 1.0, 1.0))
            color = mix(vec4(Color, 1.0), vec4(1.0, 1.0, 1.0, 1.0), 0.5);
    }
}

/*
Type ==  0 ---> blank square (reflects light)
Type ==  1 ---> start/finish square
Type == -1 ---> wall square
Type ==  2 ---> colored square that was once a black square
*/
In the fragment shader I draw the pixels of a square that has a certain type, so the shader only enters one of the four ifs for each square. The program works fine if I only use the GLSL texture function with the same texture. If I use this function twice with different textures, in two different ifs, my model goes invisible. Why is that happening?
https://postimg.org/image/lximpl0bz/
https://postimg.org/image/5dzvqz2r7/
The red square is of type 1. I modified code in the Type == 0 if and then my model went invisible.
A texture sampler in OpenGL should only be accessed in (at least) dynamically uniform control flow. This basically means that all invocations of a shader must execute the same code path. If this is not the case, then no automatic gradients are available and mipmapping or anisotropic filtering will fail.
In your program this problem happens exactly when you try to use multiple textures. One solution might be not to use anything that requires gradients. There are also a number of other options: for example, packing all textures together into a texture atlas and just selecting the appropriate uv-coordinates in the shader, or drawing each quad separately and providing the type through a uniform variable.
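One more option, shown here as a minimal sketch (this is not the original poster's code; faceSample, centerSample and skySample are made-up names): hoist the texture() calls out of the branches so they execute in uniform control flow, and only branch on the already-fetched results.

void main()
{
    color = vec4(0.0, 0.0, 0.0, 1.0);   // Type == -1 keeps this default black

    // Sample every texture unconditionally so gradients stay valid for all invocations.
    vec3 I = normalize(Position);
    vec3 R = reflect(I, normalize(Normal));
    vec4 faceSample   = texture(faceTexture, TexCoord);
    vec4 centerSample = texture(centerTexture, TexCoord);
    vec4 skySample    = texture(skyboxTexture, R);

    if(Type == 0.0)
    {
        if(faceSample == vec4(1.0))
            color = mix(skySample, vec4(1.0), 0.3);
    }
    else if(Type == 1.0)
    {
        if(centerSample == vec4(1.0))
            color = vec4(Color, 1.0);
    }
    else if(Type == 2.0)
    {
        if(faceSample == vec4(1.0))
            color = mix(vec4(Color, 1.0), vec4(1.0), 0.5);
    }
}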

How can I texture with vertex position coordinates? OpenGL, C++

I want to texture my terrain without predetermined texture coordinates. I want to determine the coordinates in the vertex or fragment shader using the vertex position coordinates. Right now I use the position's xz coordinates (with up = (0,1,0)), but if I have, for example, a wall that is at 90 degrees to the ground, the texture ends up like this:
How can I transform these coordinates so they work well?
Here's my vertex shader:
#version 430

in layout(location=0) vec3 position;
in layout(location=1) vec2 textCoord;
in layout(location=2) vec3 normal;

out vec3 pos;
out vec2 text;
out vec3 norm;

uniform mat4 transformation;

void main()
{
    gl_Position = transformation * vec4(position, 1.0);
    norm = normal;
    pos = position;
    text = position.xz;
}
And here's my fragment shader:
#version 430

in vec3 pos;
in vec2 text;
in vec3 norm;

//uniform sampler2D textures[3];
layout(binding=3) uniform sampler2D texture_1;
layout(binding=4) uniform sampler2D texture_2;
layout(binding=5) uniform sampler2D texture_3;

vec3 lightPosition = vec3(-200, 700, 50);
vec3 lightAmbient = vec3(0,0,0);
vec3 lightDiffuse = vec3(1,1,1);
vec3 lightSpecular = vec3(1,1,1);

out vec4 fragColor;
vec4 theColor;

void main()
{
    vec3 unNormPos = pos;
    vec3 lightVector = normalize(lightPosition) - normalize(pos);
    //lightVector = normalize(lightVector);
    float cosTheta = clamp(dot(normalize(lightVector), normalize(norm)), 0.5, 1.0);

    if(pos.y <= 120){
        fragColor = texture(texture_2, text*0.05) * cosTheta;
    }
    if(pos.y > 120 && pos.y < 150){
        fragColor = (texture(texture_2, text*0.05) * (1 - (pos.y-120)/29) + texture(texture_3, text*0.05) * ((pos.y-120)/29)) * cosTheta;
    }
    if(pos.y >= 150)
    {
        fragColor = texture(texture_3, text*0.05) * cosTheta;
    }
}
EDIT: (Fons)
text = 0.05 * (position.xz + vec2(0,position.y));
text = 0.05 * (position.xz + vec2(position.y,position.y));
Now the wall works but the terrain doesn't.
The problem is actually a very difficult one, since you cannot devise a formula for the texture coordinates that displays vertical walls correctly, using only the xyz coordinates.
To visualize this, imagine a hill next to a piece of flat land. Since the path going over the hill is longer than the path going over the flat land, the texture should wrap more times on the hill than on the flat piece of land. In the image below, the texture wraps 5 times on the hill and 4 times on the flat piece.
If the texture coordinates are (0,0) on the left, should they be (4,0) or (5,0) on the right? Since both answers are valid, this proves that there is no function that calculates correct texture coordinates based purely on the xyz coordinates. :(
However, your problems might be solved with different methods:
The walls can be corrected by generating them independently from the terrain and assigning correct texture coordinates to them. It actually makes more sense not to incorporate them in your terrain.
You can add more detail to the sides of steep hills with normal maps, textures of higher resolution, or a combination of different textures. There might be a better solution that I don't know about.
Edit: Triplanar mapping will solve your problem!
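For reference, a minimal sketch of triplanar mapping in the fragment shader (terrainTexture is a placeholder sampler, not one of the original texture_1/2/3; the 0.05 scale matches the question's code): sample the texture along all three axes and blend the results by the surface normal, so steep walls get a proper projection too.

vec3 blending = abs(normalize(norm));
blending /= (blending.x + blending.y + blending.z);   // make the weights sum to 1

vec4 xProj = texture(terrainTexture, pos.yz * 0.05);  // projection along the X axis
vec4 yProj = texture(terrainTexture, pos.xz * 0.05);  // projection along the Y axis (the old behaviour)
vec4 zProj = texture(terrainTexture, pos.xy * 0.05);  // projection along the Z axis

vec4 triplanarColor = xProj * blending.x + yProj * blending.y + zProj * blending.z;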
Try:
text = position.xz + vec2(0, position.y);
Also, I recommend applying the *0.05 scale factor in the vertex shader instead of the fragment shader. The final code would be:
text = 0.05 * (position.xz + vec2(0, position.y));

OpenGL Simple Shading, Artifacts

I've been trying to implement a simple light/shading system, a simple Phong lighting system without specular highlights to be precise. It basically works, except it has some (in my opinion) nasty artifacts.
My first thought was that maybe this is a problem with the texture mipmaps, but disabling them didn't help. My next best guess would be a shader issue, but I can't seem to find the error.
Has anybody ever experienced a similar issue, or an idea on how to solve this?
Image of the artifacts
Vertex shader:
#version 330 core
// Vertex shader

layout(location = 0) in vec3 vpos;
layout(location = 1) in vec2 vuv;
layout(location = 2) in vec3 vnormal;

out vec2 uv;       // UV coordinates
out vec3 normal;   // Normal in camera space
out vec3 pos;      // Position in camera space
out vec3 light[3]; // Vertex -> light vector in camera space

uniform mat4 mv;   // View * model matrix
uniform mat4 mvp;  // Proj * View * Model matrix
uniform mat3 nm;   // Normal matrix for transforming normals into c-space

void main() {
    // Pass uv coordinates
    uv = vuv;

    // Adjust normals
    normal = nm * vnormal;

    // Calculation of vertex in camera space
    pos = (mv * vec4(vpos, 1.0)).xyz;

    // Vector vertex -> light in camera space
    light[0] = (mv * vec4(0.0,0.3,0.0,1.0)).xyz - pos;
    light[1] = (mv * vec4(-6.0,0.3,0.0,1.0)).xyz - pos;
    light[2] = (mv * vec4(0.0,0.3,4.8,1.0)).xyz - pos;

    // Pass position after projection transformation
    gl_Position = mvp * vec4(vpos, 1.0);
}
Fragment shader:
#version 330 core
// Fragment shader

layout(location = 0) out vec3 color;

in vec2 uv;        // UV coordinates
in vec3 normal;    // Normal in camera space
in vec3 pos;       // Position in camera space
in vec3 light[3];  // Vertex -> light vector in camera space

uniform sampler2D tex;
uniform float flicker;

void main() {
    vec3 n = normalize(normal);

    // Ambient
    color = 0.05 * texture(tex, uv).rgb;

    // Diffuse lights
    for (int i = 0; i < 3; i++) {
        vec3 l = normalize(light[i]);
        float cosTheta = clamp(dot(n, l), 0.0, 1.0);
        float dist = length(light[i]);
        color += 0.6 * texture(tex, uv).rgb * cosTheta / pow(dist, 2.0);
    }
}
As the first comment says, it looks like your color computation is using insufficient precision. Try using mediump or highp floats.
Additionally, computing length(light[i]) and then squaring it with pow is quite inefficient, and could also be a source of the observed banding; you should use dot(light[i], light[i]) instead.
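A minimal sketch of that change inside the loop (same names as in the fragment shader above):

for (int i = 0; i < 3; i++) {
    vec3 l = normalize(light[i]);
    float cosTheta = clamp(dot(n, l), 0.0, 1.0);
    float distSq = dot(light[i], light[i]);   // squared distance, no sqrt or pow needed
    color += 0.6 * texture(tex, uv).rgb * cosTheta / distSq;
}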
So I found information about my problem, described as "gradient banding", also discussed here. The problem appears to lie in the nature of my textures: both the plain "white" texture and the real texture are mostly grey/white, and there are effectively only 256 levels of grey when using 8 bits per color channel.
The solution would be to implement post-processing dithering or to use better textures.
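For illustration, a minimal sketch of in-shader dithering (the rand helper is an ad-hoc screen-space hash, not part of the original code): adding less than one 8-bit step of noise per fragment is usually enough to break up the visible bands.

float rand(vec2 co) {
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}

// at the end of main(), after the lighting sum:
color += (rand(gl_FragCoord.xy) - 0.5) / 255.0;   // +-half an 8-bit step of noise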

DirectX FVF-like GLSL Shaders

Could someone assist me or point me in the right direction on implementing the basic FVFs from DirectX in GLSL code? I completely understand how to create a program, apply VBOs and all that, but I'm having great difficulty with the actual creation of the shaders. Namely:
transformed+lit (x,y,color,specular,tu,tv)
lit (x,y,z,color,specular,tu,tv)
unlit (x,y,z,nx,ny,nz,tu,tv) [material/lights]
With this, I'd be given enough to implement far more interesting shaders.
So, I'm not asking for a mechanism to deal with FVFs. I'm simply asking for the shader code, given the proper streams. I understand that the unlit and lit versions rely on passing in matrices, and I completely understand the concept. I'm just having trouble finding shader examples showing these concepts.
Okay. If you're having trouble finding working shaders, here is an example (honestly, you can find one in any OpenGL book).
This shader program will use your object's world matrix and the camera's matrices to transform vertices, then map one texture onto the pixels and light them with one directional light, according to the material properties and light direction.
Vertex shader:
#version 330

// Vertex input layout
in vec3 inPosition;
in vec3 inNormal;
in vec4 inVertexCol;
in vec2 inTexcoord;
in vec3 inTangent;
in vec3 inBitangent;

// Output (read by the fragment shader)
struct PSIn
{
    vec3 position;
    vec3 normal;
    vec4 vertexColor;
    vec2 texcoord;
    vec3 tangent;
    vec3 bitangent;
};
out PSIn psin;

// Uniform buffers
layout(std140)
uniform CameraBuffer
{
    mat4 mtxView;
    mat4 mtxProj;
    vec3 cameraPosition;
};

layout(std140)
uniform ObjectBuffer
{
    mat4 mtxWorld;
};

void main()
{
    // transform position
    vec4 pos = vec4(inPosition, 1.0f);
    pos = mtxWorld * pos;
    psin.position = pos.xyz;   // world-space position, needed for lighting
    pos = mtxView * pos;
    pos = mtxProj * pos;
    gl_Position = pos;

    // just pass through the other attributes
    psin.normal = inNormal;
    psin.tangent = inTangent;
    psin.bitangent = inBitangent;
    psin.texcoord = inTexcoord;
    psin.vertexColor = inVertexCol;
}
And fragment shader:
#version 330

// Input (must match the vertex shader's PSIn output)
struct PSIn
{
    vec3 position;
    vec3 normal;
    vec4 vertexColor;
    vec2 texcoord;
    vec3 tangent;
    vec3 bitangent;
};
in PSIn psin;

// Output
out vec4 fragColor;

// Uniforms
uniform sampler2D sampler0;

layout(std140)
uniform CameraBuffer
{
    mat4 mtxView;
    mat4 mtxProj;
    vec3 cameraPosition;
};

layout(std140)
uniform ObjectBuffer
{
    mat4 mtxWorld;
};

layout(std140)
uniform LightBuffer
{
    vec3 lightDirection;
};

struct Material
{
    float Ka; // ambient quotient
    float Kd; // diffuse quotient
    float Ks; // specular quotient
    float A;  // shininess
};

layout(std140)
uniform MaterialBuffer
{
    Material material;
};

// function to calculate pixel lighting
float Lit( Material material, vec3 pos, vec3 nor, vec3 lit, vec3 eye )
{
    vec3 V = normalize( eye - pos );
    vec3 R = reflect( lit, nor );

    float Ia = material.Ka;
    float Id = material.Kd * clamp( dot(nor, -lit), 0.0f, 1.0f );
    float Is = material.Ks * pow( clamp(dot(R, V), 0.0f, 1.0f), material.A );

    return Ia + Id + Is;
}

void main()
{
    vec3 nnormal = normalize(psin.normal);
    vec3 ntangent = normalize(psin.tangent);
    vec3 nbitangent = normalize(psin.bitangent);

    vec4 outColor = texture(sampler0, psin.texcoord); // texture mapping
    outColor *= Lit( material, psin.position, nnormal, lightDirection, cameraPosition ); // lighting
    outColor.w = 1.0f;

    fragColor = outColor;
}
If you don't want texturing, just don't sample the texture; set outColor to the vertex color instead.
If you don't need lighting, just comment out the Lit() call.
Edit:
For 2D objects you can still use the same program, but much of the functionality will be redundant. You can strip out:
camera
light
material
all of the vertex attributes except inPosition and inTexcoord (and maybe also inVertexCol, if you need vertices to have color), plus all of the code related to the unneeded attributes
inPosition can be a vec2
you will need to pass an orthographic projection matrix instead of a perspective one
you can even strip out the matrices and pass a vertex buffer with positions in pixels. See my answer here about how to transform those pixel positions to screen-space positions (a minimal sketch of that conversion follows below). You can do it either in C/C++ code or in GLSL/HLSL.
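A minimal sketch of that pixel-to-clip-space conversion in a vertex shader (viewportSize and inPixelPos are assumed names, not taken from the answer above):

#version 330

uniform vec2 viewportSize;   // viewport width and height in pixels
in vec2 inPixelPos;          // vertex position in pixels, origin at the top-left corner

void main()
{
    vec2 ndc = inPixelPos / viewportSize * 2.0 - 1.0;   // map [0, size] to [-1, 1]
    gl_Position = vec4(ndc.x, -ndc.y, 0.0, 1.0);        // flip Y so +Y points down, as in pixel coordinates
}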
Hope it helps somehow.
Intro
You haven't specified which OpenGL/GLSL version you're targeting, so I'll assume it is at least OpenGL 3.
One of the main advantages of the programmable pipeline, compared with the fixed-function pipeline, is fully customizable vertex input. I'm not quite sure it is a good idea to introduce constraints such as a fixed vertex format. For what? (You will find the modern approach in the paragraph "Another way" of my post.)
But, if you really want to emulate fixed-function...
I think you'll need to have a vertex shader for each vertex format you have, or somehow generate vertex shaders on the fly. Or maybe even do this for all of the shader stages.
For example, for an x, y, color, tu, tv input you will have a vertex shader such as:
in vec2 inPosition;
in vec4 inCol;
in vec2 inTexcoord;

void main()
{
    ...
}
As OpenGL 3 no longer gives you fixed-function transforms, lighting and materials, you must implement them yourself:
You must pass matrices for the transformations
For a lit shader you must pass additional variables, such as the light direction
For a material shader you must have the material properties as input
Typically, in a shader, you do this with uniforms or uniform blocks:
layout(std140)
uniform CameraBuffer
{
    mat4 mtxView;
    mat4 mtxProj;
    vec3 cameraPosition;
};

layout(std140)
uniform ObjectBuffer
{
    mat4 mtxWorld;
};

layout(std140)
uniform LightBuffer
{
    vec3 lightDirection;
};

struct Material
{
    float Ka;
    float Kd;
    float Ks;
    float A;
};

layout(std140)
uniform MaterialBuffer
{
    Material material;
};
Probably you can somehow combine all of the shaders with different formats, uniforms, etc. into one big ubershader with branching.
Another way
You can stick to the modern approach and just allow the user to declare the vertex format he wants (the format he used in his shader). Just implement a concept similar to IDirect3DDevice9::CreateVertexDeclaration or ID3D11Device::CreateInputLayout: you will make use of glVertexAttribPointer() and, probably, VAOs. This way you can also abstract out the vertex layout in an API-independent way.
The main ideas are:
the user passes an array of structures that describes the format in an API-independent way to your function (this struct can be similar to D3DVERTEXELEMENT9 or D3D11_INPUT_ELEMENT_DESC)
that function interprets the array's elements one by one and builds some kind of internal info that describes the format in an API-specific way (such as IDirect3DVertexDeclaration9 for D3D9, ID3D11InputLayout for D3D11, or a custom struct or VAO for OpenGL)
when it's time to set the vertex format, you just use this info
P.S. If you need ideas on how to properly implement lighting and materials in GLSL (I mean the algorithms here), you'd be better off picking up a book or some online tutorials than asking here. Or just Google "GLSL lighting".
You can find interesting these links:
Good resources for learning modern OpenGL (3.0 or later)?
OpenGL documentation
Select Books on OpenGL and 3D Graphics Coding
Happy coding!

simple procedural skybox

As part of an attempt to generate a very simple looking sky, I've created a skybox (basically a cube going from (-1, -1, -1) to (1, 1, 1)), which is drawn after all of my geometry and forced to the back via the following simple vertex shader:
#version 330

layout(location = 0) in vec4 position;
layout(location = 1) in vec4 normal;

out Data
{
    vec4 eyespace_position;
    vec4 eyespace_normal;
    vec4 worldspace_position;
    vec4 raw_position;
} vtx_data;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    mat4 view_without_translation = view;
    view_without_translation[3][0] = 0.0f;
    view_without_translation[3][1] = 0.0f;
    view_without_translation[3][2] = 0.0f;

    vtx_data.raw_position = position;
    vtx_data.worldspace_position = model * position;
    vtx_data.eyespace_position = view_without_translation * vtx_data.worldspace_position;

    gl_Position = (projection * vtx_data.eyespace_position).xyww;
}
From this, I'm trying to have my sky display as a very simple gradient from a deep blue at the top to a lighter blue at the horizon.
Obviously, simply mixing my two colors based on the Y coordinate of each fragment is going to look very bad: the fact that you're looking at a box and not a dome is immediately clear, as seen here:
Note the fairly visible "corners" at the top left and top right of the box.
Instinctively, I was thinking that the obvious fix would be to normalize the position of each fragment to get a position on a unit sphere, then take the Y coordinate of that. I thought that would result in a value that is constant for a given "altitude", if that makes sense. Like this:
#version 330

in Data
{
    vec4 eyespace_position;
    vec4 eyespace_normal;
    vec4 worldspace_position;
    vec4 raw_position;
} vtx_data;

out vec4 outputColor;

const vec4 skytop = vec4(0.0f, 0.0f, 1.0f, 1.0f);
const vec4 skyhorizon = vec4(0.3294f, 0.92157f, 1.0f, 1.0f);

void main()
{
    vec4 pointOnSphere = normalize(vtx_data.worldspace_position);
    float a = pointOnSphere.y;
    outputColor = mix(skyhorizon, skytop, a);
}
The result, however, is much the same as the first screenshot (I can post it if necessary, but since it's visually similar to the first, I'm skipping it to keep this question short).
After some random fiddling (cargo cult programming, I know :/), I realized that this works:
void main()
{
    vec3 pointOnSphere = normalize(vtx_data.worldspace_position.xyz);
    float a = pointOnSphere.y;
    outputColor = mix(skyhorizon, skytop, a);
}
The only difference is that I normalize the position without its W component.
And here's the working result: (the difference is subtle in screenshots but quite noticeable in motion)
So, finally, my question: why does this work when the previous version fails? I must be misunderstanding something extremely basic about homogeneous coordinates, but my brain just isn't clicking right now!
GLSL normalize does not handle homogeneous coordinates per se. It interprets the coordinate as belonging to R^4. This is in general not what you want. However, if vtx_data.worldspace_position.w == 0, then the normalize should produce the same result.
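A small worked example of the difference (illustrative numbers, not taken from the scene above):

vec4 p  = vec4(0.0, 1.0, 0.0, 1.0);   // a point straight "up" with w = 1
vec4 n4 = normalize(p);               // 4D length is sqrt(2), so n4.y is about 0.707
vec3 n3 = normalize(p.xyz);           // n3.y == 1.0, which is what the gradient expects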
I don't know what vec3 pointOnSphere = normalize(vtx_data.worldspace_position); means because the left side should have type vec4 also.