GLSL, how are interpolateAtSample/Offset supposed to work? - opengl

I have had this question for a long time now; I can't get a simple multisample example running.
This is the original code; these are the vertex shader (vs) and the fragment shader (fs), which I report here for convenience:
// Vertex shader
layout(location = POSITION) in vec2 Position;
layout(location = TEXCOORD) in vec2 Texcoord;

out vert
{
    vec2 Texcoord;
} Vert;

void main()
{
    Vert.Texcoord = Texcoord;
    gl_Position = MVP * vec4(Position, 0.0, 1.0);
}

// Fragment shader
in vert
{
    vec2 Texcoord;
} Vert;

layout(location = FRAG_COLOR, index = 0) out vec4 Color;

void main()
{
    Color = texture(Diffuse, interpolateAtSample(Vert.Texcoord, gl_SampleID));
}
When I run it, I get the following error:
Shader status invalid: 0(20) : error C5229: Argument 1 for interpolateAtSample must have no component selection
What? I understand the spec says:
Built-in interpolation functions are available to compute an interpolated value of a fragment shader input variable at a shader-specified (x,y) location. A separate (x,y) location may be used for each invocation of the built-in function, and those locations may differ from the default (x,y) location used to produce the default value of the input.
float interpolateAtCentroid(float interpolant);
vec2 interpolateAtCentroid(vec2 interpolant);
vec3 interpolateAtCentroid(vec3 interpolant);
vec4 interpolateAtCentroid(vec4 interpolant);
float interpolateAtSample(float interpolant, int sample);
vec2 interpolateAtSample(vec2 interpolant, int sample);
vec3 interpolateAtSample(vec3 interpolant, int sample);
vec4 interpolateAtSample(vec4 interpolant, int sample);
float interpolateAtOffset(float interpolant, vec2 offset);
vec2 interpolateAtOffset(vec2 interpolant, vec2 offset);
vec3 interpolateAtOffset(vec3 interpolant, vec2 offset);
vec4 interpolateAtOffset(vec4 interpolant, vec2 offset);
The function interpolateAtCentroid() will return the value of the input varying <interpolant> sampled at a location inside both the pixel and the primitive being processed. The value obtained would be the same value assigned to the input variable if declared with the "centroid" qualifier.

The function interpolateAtSample() will return the value of the input varying <interpolant> at the location of the sample numbered <sample>. If multisample buffers are not available, the input varying will be evaluated at the center of the pixel. If the sample number given by <sample> does not exist, the position used to interpolate the input varying is undefined.

The function interpolateAtOffset() will return the value of the input varying <interpolant> sampled at an offset from the center of the pixel specified by <offset>. The two floating-point components of <offset> give the offset in pixels in the x and y directions, respectively. An offset of (0,0) identifies the center of the pixel. The range and granularity of offsets supported by this function is implementation-dependent.

For all of the interpolation functions, <interpolant> must be an input variable or an element of an input variable declared as an array. Component selection operators (e.g., ".xy") may not be used when specifying <interpolant>. If <interpolant> is declared with a "flat" or "centroid" qualifier, the qualifier will have no effect on the interpolated value. If <interpolant> is declared with the "noperspective" qualifier, the interpolated value will be computed without perspective correction.
But I still don't get why. The interpolant has no component selection, and it is an input variable coming from a bound vertex attribute in the vertex shader.
Or does it interpret the .Texcoord of Vert.Texcoord as a component selection operator?
I also get the same error in another sample, but on interpolateAtOffset.
What is wrong? I mean, I want to interpolate the input at the sample given by gl_SampleID (or at the given offset); I don't know how else those functions are supposed to be used.
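For reference, here is a minimal fragment shader sketch of the usage the quoted text describes, with the interpolant declared as a plain input rather than as an interface-block member. The #version line and the Diffuse sampler declaration are assumptions for this sketch, not taken from the original sample:

#version 400 core

uniform sampler2D Diffuse; // assumed sampler name, matching the snippet above

// Plain input variable: no interface block, no component selection.
in vec2 Texcoord;

layout(location = 0) out vec4 Color;

void main()
{
    // Evaluate the varying at the position of sample gl_SampleID
    // instead of at the default fragment location.
    Color = texture(Diffuse, interpolateAtSample(Texcoord, gl_SampleID));

    // interpolateAtOffset works the same way, but takes a pixel offset
    // from the fragment center instead of a sample index, e.g.:
    // Color = texture(Diffuse, interpolateAtOffset(Texcoord, vec2(0.25, 0.25)));
}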

Related

GLSL - per-fragment lighting

I have started implementing lighting with several light sources. All the manuals I have seen ignore the distance between the light source and the object (for example https://learnopengl.com/Lighting/Basic-Lighting). So I wrote my own shader, but I'm not sure about its correctness. Please analyze this shader and tell me what is wrong or incorrect in it. I will be very grateful for any help! Below I give the shader itself, and the results of its work for different values of n and k.
Fragment shader:
#version 130

precision mediump float; // Set the default precision to medium. We don't need as high of a
                         // precision in the fragment shader.

#define MAX_LAMPS_COUNT 8 // Max lamps count.

uniform vec3 u_LampsPos[MAX_LAMPS_COUNT]; // The position of lamps in eye space.
uniform vec3 u_LampsColors[MAX_LAMPS_COUNT];
uniform vec3 u_AmbientColor = vec3(1, 1, 1);
uniform sampler2D u_TextureUnit;
uniform float u_DiffuseIntensivity = 12;
uniform float ambientStrength = 0.1;
uniform int u_LampsCount;

varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal;   // Interpolated normal for this fragment.
varying vec2 v_Texture;  // Texture coordinates.

// The entry point for our fragment shader.
void main() {
    float n = 2;
    float k = 2;
    float finalDiffuse = 0;
    vec3 finalColor = vec3(0, 0, 0);
    for (int i = 0; i < u_LampsCount; i++) {
        // Will be used for attenuation.
        float distance = length(u_LampsPos[i] - v_Position);
        // Get a lighting direction vector from the light to the vertex.
        vec3 lightVector = normalize(u_LampsPos[i] - v_Position);
        // Calculate the dot product of the light vector and vertex normal. If the normal
        // and light vector are pointing in the same direction then it will get max illumination.
        float diffuse = max(dot(v_Normal, lightVector), 0.1);
        // Add attenuation.
        diffuse = diffuse / (1 + pow(distance, n));
        // Calculate final diffuse for fragment
        finalDiffuse += diffuse;
        // Calculate final light color
        finalColor += u_LampsColors[i] / (1 + pow(distance, k));
    }
    finalColor /= u_LampsCount;
    vec3 ambient = ambientStrength * u_AmbientColor;
    vec3 diffuse = finalDiffuse * finalColor * u_DiffuseIntensivity;
    gl_FragColor = vec4(ambient + diffuse, 1) * texture2D(u_TextureUnit, v_Texture);
}
Vertex shader:
#version 130

uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix;  // A constant representing the combined model/view matrix.

attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal;   // Per-vertex normal information we will pass in.
attribute vec2 a_Texture;  // Per-vertex texture information we will pass in.

varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal;   // This will be passed into the fragment shader.
varying vec2 v_Texture;  // This will be passed into the fragment shader.

void main() {
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);
    // Pass through the texture.
    v_Texture = a_Texture;
    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}
(Result screenshots for n=2 k=2, n=1 k=3, n=3 k=1, and n=3 k=3.)
And if my shader is correct, then what should I call these parameters (n, k)?
By "correct" I assume you mean: is the code working as well as it should? These lighting calculations are not by any means physically accurate. Unless you are going for full compatibility with old devices, I would recommend you use a higher GLSL version, which allows you to use in and out and some other useful GLSL features. The current version is 450 and you are still using 130. The vertex shader looks OK, as it is only passing values through to the fragment shader.
As for the fragment shader, there is one optimisation you could make.
The calculation u_LampsPos[i] - v_Position doesn't have to be repeated twice. Do it once, and take both the length and the normalized direction from that single result, as sketched below.
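A sketch of that change in the loop, with everything else kept as in the question:

    for (int i = 0; i < u_LampsCount; i++) {
        // Compute the lamp-to-fragment vector once and reuse it for both
        // the distance (attenuation) and the light direction.
        vec3 toLamp = u_LampsPos[i] - v_Position;
        float distance = length(toLamp);
        vec3 lightVector = toLamp / distance; // equivalent to normalize(toLamp)
        float diffuse = max(dot(v_Normal, lightVector), 0.1);
        diffuse = diffuse / (1.0 + pow(distance, n));
        finalDiffuse += diffuse;
        finalColor += u_LampsColors[i] / (1.0 + pow(distance, k));
    }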
The code is quite small, so there is not much that can go wrong GLSL-wise. However, I was wondering why you wrote finalColor /= u_LampsCount; that line didn't make sense to me.
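As for moving to a newer GLSL version, the fragment shader declarations would change roughly as follows. This is only a sketch for #version 330 with the names kept from the question; the body of main stays largely the same, apart from texture2D becoming texture and gl_FragColor becoming the declared output:

#version 330 core

// 'varying' becomes 'in' in the fragment shader.
in vec3 v_Position;
in vec3 v_Normal;
in vec2 v_Texture;

// gl_FragColor is replaced by an explicitly declared output variable.
out vec4 fragColor;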

Is glVertexAttribPointer used only for vertex positions, UVs, colors, and normals? Nothing else?

I want to incorporate a custom attribute that varies per vertex. In this case it is assigned to location=4 ... but nothing happens: the other four attributes vary properly, except that one. At the bottom, I added a test to produce a specific color if it encounters the value '1' (which I know exists in the buffer, because I queried the buffer earlier). Attribute 4 is stuck at the first value of its array and never moves.
Am I missing a setting (something to be enabled, maybe)? Or is it that OpenGL only varies a handful of attributes and nothing else?
#version 330 //for openGL 3.3

//uniform variables stay constant for the whole glDraw call
uniform mat4 ProjViewModelMatrix;
uniform vec4 DefaultColor; //x=-1 signifies no default color

//non-uniform variables get fed per vertex from the buffers
layout (location=0) in vec3 coords;       //feeding from attribute=0 of the main code
layout (location=1) in vec4 color;        //per vertex color, feeding from attribute=1 of the main code
layout (location=2) in vec3 normals;      //per vertex normals
layout (location=3) in vec2 UVcoord;      //texture coordinates
layout (location=4) in int vertexTexUnit; //per vertex texture unit index

//Output
out vec4 thisColor;
out vec2 vertexUVcoord;
flat out int TexUnitIdx;

void main ()
{
    vertexUVcoord = UVcoord;
    TexUnitIdx = vertexTexUnit;
    if (DefaultColor.x == -1) {thisColor = color;} //If no default color is set, use per vertex colors
    else {thisColor = DefaultColor;}
    gl_Position = ProjViewModelMatrix * vec4(coords, 1.0); //This outputs the position to the graphics card.

    //TESTING
    if (vertexTexUnit == 1) thisColor = vec4(1,1,0,1); //Never receives value of 1, but the buffer does contain such values
}
Because the vertexTexUnit attribute is an integer, you must use glVertexAttribIPointer() instead of glVertexAttribPointer().
You can use vertex attributes for whatever you want. OpenGL doesn't know or care what you're using them for.
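For illustration, the setup for attribute 4 could look roughly like this; the stride and offset are placeholders, since the question does not show the buffer layout on the C side:

// Integer attribute: use the I variant so the values reach the shader as
// integers instead of being converted to floats.
glEnableVertexAttribArray(4);
glVertexAttribIPointer(4, 1, GL_INT, stride, (const void *)texUnitOffset);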

GLSL variable turns object black

I currently have this vertex shader:
#version 130

in vec3 position;
in vec2 textureCoords;
in vec3 normal;

out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;

uniform vec3 lightPosition;

void main(void){
    surfaceNormal = normal;
    toLightVector = vec3(1, 100, 1);
    gl_Position = ftransform(); //Yes, I use the fixed-function pipeline. Please don't kill me.
    pass_textureCoords = textureCoords;
}
This works fine, but then when I add this:
#version 130

in vec3 position;
in vec2 textureCoords;
in vec3 normal;

out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;

uniform vec3 lightPosition;

void main(void){
    vec3 pos = position; //Completely USELESS added line
    surfaceNormal = normal;
    toLightVector = vec3(1, 100, 1);
    gl_Position = ftransform();
    pass_textureCoords = textureCoords;
}
The whole object just turns black (or green sometimes, but you get the idea: it isn't working).
(Screenshots of the expected and actual behaviour omitted. The terrain and water are rendered without any shaders, hence they are unchanged.)
It's as if the variable "position" is poisonous: if I use it anywhere, even for something useless, my shader simply does not work correctly.
Why could this be happening, and how can I fix it?
You're running into problems because you use both the fixed function position attribute in your vertex shader, and a generic attribute bound to location 0. This is not valid.
While you're not using the fixed function gl_Vertex attribute explicitly, the usage is implied here:
gl_Position = ftransform();
This line is mostly equivalent to this, with some additional precision guarantees:
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
But then you are also using a generic attribute for the position:
in vec3 position;
...
vec3 pos = position;
If you assign location 0 for this generic attribute, the result is an invalid shader program. I found the following on page 94 of the OpenGL 3.3 Compatibility Profile spec, in section "2.14.3 Vertex Attributes":
LinkProgram will also fail if the vertex shaders used in the program object contain assignments (not removed during pre-processing) to an attribute variable bound to generic attribute zero and to the conventional vertex position (gl_Vertex).
To avoid this problem, the ideal approach is of course to move away from using fixed function attributes, and adopt the OpenGL Core Profile. If that is outside the scope of the work you can tackle, you need to at least avoid assigning location 0 to any of your generic vertex attributes. You can either:
Use a location > 0 for all attributes when you set the location of the generic attributes with glBindAttribLocation() (see the sketch after this list).
Do not set the location at all, and use glGetAttribLocation() to get the automatically assigned locations after you link the program.
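A minimal sketch of both options; the program handle and the attribute name "position" are placeholders, not taken from the question:

// Option 1: bind the generic attribute to a non-zero location before
// linking, leaving location 0 to the fixed-function gl_Vertex.
glBindAttribLocation(program, 1, "position");
glLinkProgram(program);

// Option 2 (instead of the above): link without binding a location, then
// query the location the linker assigned automatically.
// glLinkProgram(program);
// GLint positionLoc = glGetAttribLocation(program, "position");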

glsl vertex shader glGetUniformLocation fails

I want to set a uniform Vector in my Vertex Shader.
int loc = glGetUniformLocation(shader, "LightPos");
if (loc != -1)
{
    //do Stuff
}
The problem is that loc is -1 all the time. I tried it with a variable from the fragment shader, and that actually worked. The vertex shader:
uniform vec3 LightPos;

varying vec2 UVCoord;
varying float LightIntensity;

void main()
{
    UVCoord = gl_MultiTexCoord0.st;
    gl_Position = ftransform();
    vec3 Normal = normalize(gl_NormalMatrix * gl_Normal);
    LightIntensity = max(dot(normalize(vec3(0, -10, 0)), Normal), 0.0);
}
The Fragment Shader:
uniform sampler2D tex1;

varying vec2 UVCoord;
varying float LightIntensity;

void main()
{
    vec3 Color = vec3(texture2D(tex1, UVCoord));
    gl_FragColor = vec4(Color * LightIntensity, 1.0);
}
Does anybody have an idea what I am doing wrong?
Unfortunately, you misunderstood how glGetUniformLocation (...) and uniform location assignment in general work.
Locations are only assigned after your shaders are compiled and linked. This is a two-phase operation that effectively identifies only the used inputs and outputs between all stages of a GLSL program (vertex, fragment, geometry, tessellation). Because LightPos is not used in your vertex shader (or anywhere else, for that matter), it is not assigned a location when your program is linked. It simply ceases to exist.
This is where the term active uniform comes from. And glGetUniformLocation (...) only returns the location of active uniforms.
Name
glGetUniformLocation — Returns the location of a uniform variable
[...]
Description
glGetUniformLocation returns an integer that represents the location of a specific uniform variable within a program object. name must be a null terminated string that contains no white space. name must be an active uniform variable name in program that is not a structure, an array of structures, or a subcomponent of a vector or a matrix. This function returns -1 if name does not correspond to an active uniform variable in program, if name starts with the reserved prefix "gl_", or if name is associated with an atomic counter or a named uniform block.
You don't actually use LightPos in the shader, so the optimiser didn't allocate any registers for it.
The compiler is free to optimize uniforms and attributes out if they are not used.
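To illustrate, here is a sketch of the vertex shader with LightPos actually contributing to the result, so it remains an active uniform and glGetUniformLocation can find it. The specific direction computation is only an example, not taken from the question:

uniform vec3 LightPos;

varying vec2 UVCoord;
varying float LightIntensity;

void main()
{
    UVCoord = gl_MultiTexCoord0.st;
    gl_Position = ftransform();
    vec3 Normal = normalize(gl_NormalMatrix * gl_Normal);
    // LightPos now participates in the computation, so the linker keeps it
    // as an active uniform and assigns it a location.
    vec3 LightDir = normalize(LightPos - vec3(gl_ModelViewMatrix * gl_Vertex));
    LightIntensity = max(dot(LightDir, Normal), 0.0);
}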

Can I pack both floats and ints into the same array buffer?

...because the floats seem to be coming out fine, but there's something wrong with the ints.
Essentially I have a struct called "BlockInstance" which holds a vec3 and an int. I've got an array of these BlockInstances which I buffer like so (translating from C# to C for clarity):
glBindBuffer(GL_ARRAY_BUFFER, bufferHandle);
glBufferData(GL_ARRAY_BUFFER, sizeof(BlockInstance)*numBlocks, blockData, GL_DYNAMIC_DRAW);
glVertexAttribPointer(3,3,GL_FLOAT,false,16,0);
glVertexAttribPointer(4,1,GL_INT,false,16,12);
glVertexAttribDivisor(3,1);
glVertexAttribDivisor(4,1);
And my vertex shader looks like this:
#version 330

layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;
layout (location = 2) in vec3 Normal;
layout (location = 3) in vec3 Translation;
layout (location = 4) in int TexIndex;

uniform mat4 ProjectionMatrix;

out vec2 TexCoord0;

void main()
{
    mat4 trans = mat4(
        1,0,0,0,
        0,1,0,0,
        0,0,1,0,
        Translation.x,Translation.y,Translation.z,1);
    gl_Position = ProjectionMatrix * trans * vec4(Position, 1.0);
    TexCoord0 = vec2(TexCoord.x+TexIndex,TexCoord.y)/16;
}
When I replace TexIndex on the last line of my GLSL shader with a constant like 0, 1, or 2, my textures come out fine, but if I leave it like it is, they come out all mangled, so there must be something wrong with the number, right? But I don't know what it's coming out as so it's hard to debug.
I've looked at my array of BlockInstances, and they're all set to 1,2, or 19 so I don't think my input is wrong...
What else could it be?
Note that I'm using a sprite map texture where each of the tiles is 16x16 px but my TexCoords are in the range 0-1, so I add a whole number to it to choose which tile, and then divide it by 16 (the map is also 16x16 tiles) to put it back into the proper range. The idea is I'll replace that last line with
TexCoord0 = vec2(TexCoord.x+(TexIndex%16),TexCoord.y+(TexIndex/16))/16;
-- GLSL does integer math, right? An int divided by an int will come out as a whole number?
If I try this:
TexCoord0 = vec2(TexCoord.x+(TexIndex%16),TexCoord.y)/16;
The texture looks fine, but it's not using the right sprite. (Looks to be using the first sprite)
If I do this:
TexCoord0 = vec2(TexCoord.x+(TexIndex%16),TexCoord.y+(TexIndex/16))/16;
It comes out all white. This leads me to believe that TexIndex is coming out to be a very large number (bigger than 256 anyway) and that it's probably a multiple of 16.
layout (location = 4) in int TexIndex;
There's your problem.
glVertexAttribPointer is used to send data that will be converted to floating-point values. It's used to feed floating-point attributes. Passing integers is possible, but those integers are converted to floats, because that's what glVertexAttribPointer is for.
What you need is glVertexAttribIPointer (notice the I). This is used for providing signed and unsigned integer data.
So if you declare a vertex shader input as a float or some non-prefixed vec, you use glVertexAttribPointer to feed it. If you declare the input as int, uint, ivec or uvec, then you use glVertexAttribIPointer.
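Applied to the instance buffer from the question (the same interleaved layout: a vec3 at offset 0 and an int at offset 12, with a stride of 16 bytes), the setup would look roughly like this:

glBindBuffer(GL_ARRAY_BUFFER, bufferHandle);
glBufferData(GL_ARRAY_BUFFER, sizeof(BlockInstance) * numBlocks, blockData, GL_DYNAMIC_DRAW);

// Translation is a float attribute, so glVertexAttribPointer is correct here.
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 16, (const void *)0);

// TexIndex is an integer attribute, so the I variant must be used to keep
// the values as integers instead of converting them to floats.
glVertexAttribIPointer(4, 1, GL_INT, 16, (const void *)12);

glVertexAttribDivisor(3, 1);
glVertexAttribDivisor(4, 1);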