So I've been learning some OpenGL. It's a lot to take in and I'm just a beginner, but I don't understand the "layout qualifier" in GLSL.
So something like this:
layout (location = 0) in vec3 position;
in a simple vertex shader such as this:
#version 330 core
layout (location = 0) in vec3 position; // The position variable has attribute position 0
out vec4 vertexColor; // Specify a color output to the fragment shader
void main()
{
    gl_Position = vec4(position, 1.0); // See how we directly give a vec3 to vec4's constructor
    vertexColor = vec4(0.5f, 0.0f, 0.0f, 1.0f); // Set the output variable to a dark-red color
}
I understand the out vec4 (since that's going to the fragment shader), and the vertexColor output makes sense to me.
But maybe I'm not understanding "position" and what exactly it means in this context? Does anyone care to explain? The OpenGL wiki honestly didn't help me.
Or maybe I'm misunderstanding what a vertex shader does (I'm still, of course, a little unsure of the pipeline). From my understanding, vertex specification is the first thing we do, correct? We define the vertices (and indices if needed) and store that setup in a VAO.
So the vertex shader operates on each individual vertex? (I hope), because that's how I understood it.
You're right, the vertex shader is executed for each vertex.
A vertex is composed of one or several attributes (position, normal, texture coordinates, etc.).
On the CPU side, when you create your VAO, you describe each attribute by saying "this data in this buffer will be attribute 0, the data next to it will be attribute 1, etc.". Note that the VAO only stores this description of which attribute lives where; the actual vertex data is stored in VBOs.
In the vertex shader, the line with layout and position is just saying "take attribute 0 and put it in a variable called position" (the location is the number of the attribute).
If the two steps are done correctly, you should end up with a position in the variable named position :) Is it clearer?
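To make the CPU-side half concrete, here is a minimal sketch of describing attribute 0 so it matches layout (location = 0) in vec3 position; (the names vao, vbo and vertices are assumptions for illustration, not code from the question):
// Assumed setup: 'vao'/'vbo' created with glGenVertexArrays/glGenBuffers,
// 'vertices' is a tightly packed array of xyz floats.
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// "Attribute 0 is 3 floats per vertex, starting at offset 0 of the bound VBO."
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glBindVertexArray(0);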
Related
I want to incorporate a custom attribute that varies per vertex. In this case it is assigned to location=4 ... but nothing happens: the other four attributes vary properly, but not that one. At the bottom, I added a test to produce a specific color if it encounters the value '1' (which I know exists in the buffer, because I queried the buffer earlier). Attribute 4 is stuck at the first value of its array and never moves.
Am I missing a setting? (Something that needs to be enabled maybe?) Or does OpenGL only vary a handful of attributes and nothing else?
#version 330 //for openGL 3.3
//uniform variables stay constant for the whole glDraw call
uniform mat4 ProjViewModelMatrix;
uniform vec4 DefaultColor; //x=-1 signifies no default color
//non-uniform variables get fed per vertex from the buffers
layout (location=0) in vec3 coords; //feeding from attribute=0 of the main code
layout (location=1) in vec4 color; //per vertex color, feeding from attribute=1 of the main code
layout (location=2) in vec3 normals; //per vertex normals
layout (location=3) in vec2 UVcoord; //texture coordinates
layout (location=4) in int vertexTexUnit;//per vertex texture unit index
//Output
out vec4 thisColor;
out vec2 vertexUVcoord;
flat out int TexUnitIdx;
void main ()
{
    vertexUVcoord = UVcoord;
    TexUnitIdx = vertexTexUnit;
    if (DefaultColor.x==-1) {thisColor = color;} //If no default color is set, use per vertex colors
    else {thisColor = DefaultColor;}
    gl_Position = ProjViewModelMatrix * vec4(coords,1.0); //This outputs the position to the graphics card.
    //TESTING
    if (vertexTexUnit==1) thisColor=vec4(1,1,0,1); //Never receives value of 1, but the buffer does contain such values
}
Because the vertexTexUnit attribute is an integer, you must use glVertexAttribIPointer() instead of glVertexAttribPointer().
You can use vertex attributes for whatever you want. OpenGL doesn't know or care what you're using them for.
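For reference, a minimal sketch of the CPU-side call for attribute 4 (stride and offsetOfTexUnit are hypothetical placeholders, since the question's buffer layout isn't shown):
// glVertexAttribPointer would convert the integers to floats, so the
// shader-side 'in int' would read garbage; the 'I' variant keeps them as ints.
glVertexAttribIPointer(4, 1, GL_INT, stride, (const void*)offsetOfTexUnit);
glEnableVertexAttribArray(4);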
This is how it should look. It uses the same vertices/UV coordinates that are used for DX11 and OpenGL. This scene was rendered in DirectX 10.
This is how it looks in DirectX 11 and OpenGL.
I don't know how this can happen. I am using the same code on top for both DX10 and DX11, and they both handle things very similarly. Do you have an idea what the problem may be and how to fix it?
I can send code if needed.
Also using another texture.
Changed the transparent part of the texture to red.
Fragment Shader GLSL
#version 330 core
in vec2 UV;
in vec3 Color;
uniform sampler2D Diffuse;
void main()
{
    //color = texture2D( Diffuse, UV ).rgb;
    gl_FragColor = texture2D( Diffuse, UV );
    //gl_FragColor = vec4(Color,1);
}
Vertex Shader GLSL
#version 330 core
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexColor;
layout(location = 3) in vec3 vertexNormal;
uniform mat4 Projection;
uniform mat4 View;
uniform mat4 World;
out vec2 UV;
out vec3 Color;
void main()
{
    mat4 MVP = Projection * View * World;
    gl_Position = MVP * vec4(vertexPosition,1);
    UV = vertexUV;
    Color = vertexColor;
}
Briefly: it looks like you are using back-face culling (which is good), and the faces on the other side of your model have the wrong winding. You can confirm that this is the problem by turning back-face culling off (OpenGL: glDisable(GL_CULL_FACE)).
The real fix (if this is the problem) is to get the face winding right; the usual convention is counter-clockwise for front faces. How you do that depends on where the model comes from: if you generate it yourself, correct the winding in your model-generation routine. Model files exported from 3D modeling software usually have correct face winding already.
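For quick testing, a minimal sketch of the OpenGL state involved (the counter-clockwise front-face convention shown here is an assumption about the model):
glEnable(GL_CULL_FACE);      // cull hidden faces
glCullFace(GL_BACK);         // discard back-facing triangles
glFrontFace(GL_CCW);         // treat counter-clockwise triangles as front-facing
// glDisable(GL_CULL_FACE);  // temporarily disable culling to check whether it is the culprit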
This is just a guess, but are you telling the system the correct number of polygons to draw? Calls like glBufferData() take the size in bytes of the data, not the number of vertices or polygons. (Maybe they should have named the parameter numBytes instead of size?) Also, the size has to contain the size of all the data. If you have color, normals, texture coordinates and vertices all interleaved, it needs to include the size of all of that.
This is made more confusing by the fact that glDrawElements() and related calls take a count of elements instead: glDrawElements() wants the number of indices and glDrawArrays() the number of vertices. The argument is named count, but it's not obvious that it's an index/vertex count rather than a polygon count or a byte size.
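A minimal sketch contrasting the two units (Vertex, vertexCount, indexCount, vertices and indices are hypothetical names, and the corresponding buffers are assumed to be bound):
// glBufferData wants a size in BYTES covering the whole interleaved block.
glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex), vertices, GL_STATIC_DRAW);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLuint), indices, GL_STATIC_DRAW);
// glDrawElements wants a COUNT of indices, not bytes and not triangles.
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);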
I found the error.
The reason is that I forgot to set the Texture SamplerState to Wrap/Repeat.
It was set to clamp, so the UV coordinates were clamped to 1.
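(In case anyone hits the same symptom on the OpenGL side: the equivalent fix there would be the texture wrap mode, sketched below, where textureId is a placeholder.)
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);  // repeat instead of clamping U
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);  // repeat instead of clamping V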
A few things that you could try :
Is the depth test enabled? It seems that the inner faces of the polygons on the 'other' side are being rendered over the polygons that are closer to the viewpoint. This could happen if the depth test is disabled; enable it just in case (see the sketch below).
Is lighting enabled? If so, turn it off. Some flashes of white seem to appear in the rotating image; that could be because of incorrect normals ...
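A minimal sketch of enabling the depth test (this assumes the context was created with a depth buffer):
glEnable(GL_DEPTH_TEST);                              // reject fragments behind what is already drawn
glDepthFunc(GL_LESS);                                 // the default comparison, shown for clarity
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // clear depth as well as color every frame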
HTH
Since the vertex shader is run once per vertex (which for a triangle means three times), how does the varying variable get computed for every fragment if it's assigned (as in the example) only three times?
Fragment shader:
precision mediump float;
varying vec4 v_Color;
void main() {
    gl_FragColor = v_Color;
}
Vertex shader:
attribute vec4 a_Position;
attribute vec4 a_Color;
varying vec4 v_Color;
void main() {
    v_Color = a_Color;
    gl_Position = a_Position;
}
So the question is: how does the system behind this know how to compute the variable v_Color at every fragment, when this shader assigns v_Color only three times (once per vertex of a triangle)?
All outputs of the vertex shader are per vertex. When you set v_Color in the vertex shader, it sets it on the current vertex. When the fragment shader runs, it reads the v_Color value for each vertex in the primitive and interpolates between them based on the fragment's location.
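Conceptually, the rasterizer computes a weighted average of the three per-vertex values for every fragment it generates. A rough sketch of the idea (barycentric weights; real rasterizers also apply perspective correction, which is omitted here):
// w0, w1, w2: the fragment's barycentric weights inside the triangle (they sum to 1.0,
//             and each one grows as the fragment gets closer to the matching vertex).
// c0, c1, c2: the v_Color values written by the three vertex shader invocations.
float r = w0 * c0.r + w1 * c1.r + w2 * c2.r;   // the same average is taken for g, b and a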
First of all, it is a mistake to assume that the vertex shader is run once per-vertex. Using indexed rendering, primitive assembly can usually access the post T&L cache (result of previous vertex shader invocations) based on vertex index to eliminate evaluating a vertex more than once. However, new things such as geometry shaders can easily cause this to break down.
As for how the fragment shader gets its value, that is generally done during rasterization. Those per-vertex attributes are interpolated along the surface of the primitive (triangle in this case) based on the fragment's distance relative to the vertices that were used to build the primitive. In DX11 interpolation can be deferred until the fragment shader itself runs (called "pull-model" interpolation), but traditionally this is something that happens during rasterization.
I can't seem to find any information on the Web about fixing shadow casting for objects whose textures have alpha != 1.
Is there any way to implement something like a "per-fragment depth test" rather than a "per-vertex" one, so I could just keep a fragment out of the shadow map if the sampled texel is transparent? In theory, it could also make shadow mapping more accurate.
EDIT
Well, maybe the idea I gave above was a terrible one; all I want is to tell the shaders that if a texel has alpha < 1, there's no need to shadow things behind that texel. I guess the depth texture requires only vertex information, and that's why every shadow-mapping tutorial has a minimal vertex shader and an empty fragment shader, and why nothing happens when I try to do something in the fragment shader.
Anyway, what is the main idea of fixing shadow casting by partly-transparent objects?
EDIT2
I've modified my shaders and now it discards every fragment if at least one has transparency o_O. So those objects now don't cast any shadows at all (but opaque ones do)... Please have a look at the shaders:
// Vertex Shader
uniform mat4 orthoView;
in vec4 in_Position;
in vec2 in_TextureCoord;
out vec2 TC;
void main(void) {
    TC = in_TextureCoord;
    gl_Position = orthoView * in_Position;
}
//Fragment Shader
uniform sampler2D texture;
in vec2 TC;
void main(void) {
    vec4 texel = texture2D(texture, TC);
    if (texel.a < 0.4)
        discard;
}
And it's strange because I use the same trick with the same textures in my other shaders and it works... any ideas?
If you use discard in the fragment shader, then no depth information will be recorded for that fragment. So in your fragment shader, simply add a test to see whether the texture is transparent, and if so discard that fragment.
I am learning WebGL, and would like to do the following:
Create a 3D quad with a square hole in it using a fragment shader.
It looks like I need to set gl_FragColor based on gl_FragCoord appropriately.
So should I:
a) Convert gl_FragCoord from window coordinates to model coordinates, do the appropriate geometry check, and set color.
OR
b) Somehow pass the hole information from the vertex shader to the fragment shader. Maybe use a texture coordinate. I am not clear on this part.
I am fuzzy about implementing either of the above, so I'd appreciate some coding hints on either.
My background is that of an OpenGL old timer who has not kept up with the new shading language paradigm, and is now trying to catch up...
Edit (27/03/2011):
I have been able to successfully implement the above based on the tex coord hint. I've written up this example at the link below:
Quads with holes - example
The easiest way would be with texture coords. Simply supply the coords as an extra attribute array, then pass them through to the fragment shader using a varying. The shaders should contain something like:
vertex:
attribute vec3 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
void main(){
    vTexCoord=aTexCoord;
    .......
}
fragment:
varying vec2 vTexCoord;
void main(){
    if(vTexCoord.x>{lower x limit} && vTexCoord.x<{upper x limit} && vTexCoord.y>{lower y limit} && vTexCoord.y<{upper y limit}){
        discard; //this tells the GPU to discard this fragment entirely
    }
    .....
}