Strange dynamic iteration error in OpenGL shader

I'm trying to turn a float array into a vec2 array, since a float array is the only kind of array I can pass from Java code to the shader. For example, I'm turning the float 3.8 into vec2(3, 8). But that's pretty irrelevant. The real problem is that the shader stops working when I add a certain line inside the for loop. Here's my code:
attribute vec4 a_color;
attribute vec3 a_position;
attribute vec2 a_texCoord0;
uniform float a_tileCoords[144];
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoord0;
varying vec2 v_tileCoords[144];
int length = 144;
void main(){
    v_color = a_color;
    v_texCoord0 = a_texCoord0;
    for(int i = 0; i < length; i++){
        int x = int(a_tileCoords[i]);
        int y = int((a_tileCoords[i] - int(a_tileCoords[i])) * 10);
        v_tileCoords[i] = vec2(x,y);
    }
    gl_Position = u_projTrans * vec4(a_position + vec3(0, 1, 0), 1.);
}
It's about this line:
v_tileCoords[i] = vec2(x,y);
When I remove that line from the for loop, the shader works again (the texture gets drawn). But of course I want this line to work. So my question to you is: what could I have done wrong, and is what I want to achieve even possible?

The maximum number of varying vectors can be queried with GL_MAX_VARYING_VECTORS; it limits the number of vectors you can pass to the next shader stage.
I'm very sure you exceeded that limit: varying vec2 v_tileCoords[144] alone needs roughly 144 varying vectors, far more than typical hardware provides.
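If you want to see the actual limit on your device, you can query it at runtime. A minimal LibGDX sketch (the attribute names in your shader suggest LibGDX; on a desktop GL context this enum is only guaranteed with ARB_ES2_compatibility):
// Log the driver's limit on varying vectors per shader stage.
// Uses com.badlogic.gdx.Gdx, com.badlogic.gdx.graphics.GL20 and com.badlogic.gdx.utils.BufferUtils.
java.nio.IntBuffer maxVaryings = BufferUtils.newIntBuffer(16);
Gdx.gl20.glGetIntegerv(GL20.GL_MAX_VARYING_VECTORS, maxVaryings);
Gdx.app.log("GL", "GL_MAX_VARYING_VECTORS = " + maxVaryings.get(0));
Whatever the value is, it will be far below the roughly 144 vectors your varying array requires.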
See also the following questions:
WebGL shaders: maximum number of varying variables
Max Varying Vectors and Floats for GLES 2.0

Related

How to get the room (or screen) coordinates from inside a Gamemaker Studio 2 shader?

I'm mostly new to writing shaders, and this might not be a great place to start, but I'm trying to make a shader that makes an object "sparkle." I won't get into the specifics of how it's supposed to look, but to make it I need a value that changes with the object's position in the room (or on the screen, as the camera is fixed). I've tried v_vTexcoord, in_Position, gl_Position, and others without the intended result. If I've used them wrong or missed something, I wouldn't be surprised, but any advice is helpful.
I don't think they'll be helpful, but here's my vertex shader:
//it's mostly the same as the default
// Simple passthrough vertex shader
//
attribute vec3 in_Position; // (x,y,z)
//attribute vec3 in_Normal; // (x,y,z) unused in this shader.
attribute vec4 in_Colour; // (r,g,b,a)
attribute vec2 in_TextureCoord; // (u,v)
varying vec2 v_vTexcoord;
varying vec4 v_vColour;
varying vec3 v_inpos;
void main()
{
    vec4 object_space_pos = vec4( in_Position.x, in_Position.y, in_Position.z, 1.0);
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;
    v_vColour = in_Colour;
    v_vTexcoord = in_TextureCoord;
    v_inpos = /*this is the variable that i'd like to set to the x,y*/;
}
and my fragment shader:
//
//
//
varying vec2 v_vTexcoord; //x is <1, >.5
varying vec4 v_vColour;
varying vec3 v_inpos;
uniform float pixelH; //unused, but left in for accuracy
uniform float pixelW; //see above
void main() //WHEN AN ERROR HAPPENS, THE SHADER JUST WON'T DO ANYTHING AT ALL.
{
    vec2 offset = vec2 (pixelW, pixelH);
    gl_FragColor = v_vColour * texture2D( gm_BaseTexture, v_vTexcoord );
    /* i am planning on just testing different math until something works, but i can't
    vec3 test = vec3 (v_inpos.x, v_inpos.x, v_inpos.x); //find the values i need to test it
    test.x = mod(test.x,.08);
    test.x = test.x - 4;
    test.x = abs(test.x);
    while (test.x > 1.0){
        test.x = test.x/10;
    }
    test = vec3 (test.x, test.x, test.x);
    gl_FragColor.a = test.x;
    */
    //everything above doesn't cause an error when uncommented, i think
    //if (v_inpos.x == 0.0){gl_FragColor.rgb = vec3 (1,0,0);}
    //if (v_inpos.x > 1) {gl_FragColor.rgb = vec3 (0,1,0);}
    //if (v_inpos.x < 1) {gl_FragColor.rgb = vec3 (0,0,1);}
}
If this question doesn't make sense, I'll try to clarify in the comments.
If you want to get a position in world space, then you have to transform the vertex coordinate from model space to world space.
This can be done with the model (world) matrix (gm_Matrices[MATRIX_WORLD]). See game-maker-studio-2 - Matrices.
e.g.:
vec4 object_space_pos = vec4(in_Position.xyz, 1.0);
vec4 world_space_pos = gm_Matrices[MATRIX_WORLD] * object_space_pos;
Now the Cartesian world-space position can be obtained by:
(See also Swizzling)
vec3 pos = world_space_pos.xyz;
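Putting it together, the vertex shader from the question could feed the varying like this (just a sketch, relying on the built-in matrices mentioned above):
void main()
{
    vec4 object_space_pos = vec4(in_Position.xyz, 1.0);
    vec4 world_space_pos = gm_Matrices[MATRIX_WORLD] * object_space_pos; // model -> world (room) space
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;
    v_vColour = in_Colour;
    v_vTexcoord = in_TextureCoord;
    v_inpos = world_space_pos.xyz; // room x, y (and z) for the fragment shader
}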

GLSL - per-fragment lighting

I've started implementing lighting with several light sources. All the tutorials I have seen ignore the distance between the light source and the object (for example https://learnopengl.com/Lighting/Basic-Lighting). So I wrote my own shader, but I'm not sure it is correct. Please take a look at it and tell me what is wrong or incorrect in it. I will be very grateful for any help! Below is the shader itself, and the results of its work for different values of n and k.
Fragment shader:
#version 130
precision mediump float; // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
#define MAX_LAMPS_COUNT 8 // Max lamps count.
uniform vec3 u_LampsPos[MAX_LAMPS_COUNT]; // The position of lamps in eye space.
uniform vec3 u_LampsColors[MAX_LAMPS_COUNT];
uniform vec3 u_AmbientColor = vec3(1, 1, 1);
uniform sampler2D u_TextureUnit;
uniform float u_DiffuseIntensivity = 12;
uniform float ambientStrength = 0.1;
uniform int u_LampsCount;
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_Texture; // Texture coordinates.
// The entry point for our fragment shader.
void main() {
    float n = 2;
    float k = 2;
    float finalDiffuse = 0;
    vec3 finalColor = vec3(0, 0, 0);
    for (int i = 0; i < u_LampsCount; i++) {
        // Will be used for attenuation.
        float distance = length(u_LampsPos[i] - v_Position);
        // Get a lighting direction vector from the light to the vertex.
        vec3 lightVector = normalize(u_LampsPos[i] - v_Position);
        // Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
        // pointing in the same direction then it will get max illumination.
        float diffuse = max(dot(v_Normal, lightVector), 0.1);
        // Add attenuation.
        diffuse = diffuse / (1 + pow(distance, n));
        // Calculate final diffuse for fragment
        finalDiffuse += diffuse;
        // Calculate final light color
        finalColor += u_LampsColors[i] / (1 + pow(distance, k));
    }
    finalColor /= u_LampsCount;
    vec3 ambient = ambientStrength * u_AmbientColor;
    vec3 diffuse = finalDiffuse * finalColor * u_DiffuseIntensivity;
    gl_FragColor = vec4(ambient + diffuse, 1) * texture2D(u_TextureUnit, v_Texture);
}
Vertex shader:
#version 130
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_Texture; // Per-vertex texture information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_Texture; // This will be passed into the fragment shader.
void main() {
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);
    // Pass through the texture.
    v_Texture = a_Texture;
    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}
[Result screenshots for n=2 k=2, n=1 k=3, n=3 k=1, n=3 k=3]
And if my shader is correct, what are these parameters (n, k) called?
By "correct" I assume you mean is the code working as well as it should. These lighting calculations are not by any means physically accurate. Unless you are going for full compatibility with old devices, I would recommend you use a higher glsl version which allows you to use in and out and some other useful glsl features. The current is version 450 and you are still using 130. The vertex shader looks ok, as it is only passing through values to the fragment shader.
As for the fragment shader, there is one optimisation you could make.
The calculation u_LampsPos[i] - v_Position doesn't have to be done twice. Do it once and apply length and normalize to that single result.
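For example, the start of the loop body could look like this (a sketch of just that change):
// Compute the fragment-to-lamp vector once and reuse it.
vec3 toLamp = u_LampsPos[i] - v_Position;
float distance = length(toLamp);
vec3 lightVector = toLamp / distance; // same direction as normalize(toLamp)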
The code is quite small, so there is not much that can go wrong GLSL-wise. However, I was wondering why you do finalColor /= u_LampsCount; that didn't make sense to me.

GLSL shader uniform locations can't be found

I'm working on a vertex skinning shader, and for some reason my program can't find the uniform locations.
Vertex shader code:
#version 330
const int MAX_JOINTS = 30;
const int MAX_WEIGHTS = 3;
in vec3 position;
in vec2 textureCoords;
in vec3 normal;
in ivec3 boneIndices;
in vec3 weights;
out vec4 fragPos;
out vec3 n;
out vec2 texCoords;
out vec4 mcolor;
uniform mat4 modelMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 normalMatrix;
uniform mat4[MAX_JOINTS] boneTransforms;
void main() {
    vec4 totalLocalPos = vec4(0.0);
    vec4 totalNormal = vec4(0.0);
    for(int i = 0; i < 3; i++){
        mat4 boneTransform = boneTransforms[boneIndices[i]];
        vec4 posePosition = boneTransform * vec4(position, 1);
        totalLocalPos += posePosition * weights[i];
        vec4 worldNormal = boneTransform * vec4(normal, 1);
        totalNormal += worldNormal * weights[i];
    }
    texCoords = textureCoords;
    fragPos = modelMatrix * vec4(position,1);
    n = totalNormal.xyz;
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * totalLocalPos;
}
The boneTransforms uniform doesn't seem to be set correctly; if I query the active uniforms with
GLint uniforms;
glGetProgramiv(shaderProgramID, GL_ACTIVE_UNIFORMS, &uniforms);
for (int i = 0; i < uniforms; i++){
    int name_len = -1, num = -1;
    GLenum type = GL_ZERO;
    char name[100];
    glGetActiveUniform(shaderProgramID, GLuint(i), sizeof(name) - 1,
                       &name_len, &num, &type, name);
    name[name_len] = 0;
}
I always get zero. However, if I just write gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position,1), I get the expected result (correct rendering without any vertex skinning), so the other transforms appear to be working despite the query telling me they don't exist?
EDIT: This is sometimes the case; other times I get the model at position (0,0,0) but otherwise rendered correctly.
I have read about the compiler stripping unused/inactive uniforms, but if I use boneTransforms to calculate totalLocalPos and use that for gl_Position, the uniform should be active.
I try to set the uniform with
vector<glm::mat4> boneTransforms = model.getBoneTransforms();
int location = glGetUniformLocation(shaderProgramID, "boneTransforms");
glUniformMatrix4fv(location, boneTransforms.size(), false, (GLfloat*)&boneTransforms);
location is always -1.
Is there something wrong with how I try to set this particular uniform, or is the mistake in the shader code?
EDIT 2: I just noticed that the behaviour of my shader changes when I add or remove objects (which use a different shader) from the scene. I don't know what to make of that.
EDIT 3: If I remove all other meshes from my scene, the shader crashes with an access violation. As long as one other object is being rendered, there are currently no crashes.
Another EDIT: Apparently accessing the weights variable crashes my shader.
I was reading this quick tutorial about vertex skinning shaders (khronos). It uses a slightly older version of GLSL, but it does mention that when you multiply the MVP matrix (model-view-projection, or in your case the projection-view-model matrix) with the vec4 holding the total position and store the result in gl_Position, the w component may not always be 1. To be safe, they recommend doing the following instead, and I'll use your code as the example to fix this possible problem.
Change this:
gl_Position = projectionMatrix * viewMatrix * modelMatrix * totalLocalPos;
To this:
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(totalLocalPos.xyz, 1.0);
See if this helps you out. I don't know whether this is the cause of your problem, but from what you have shown, your shader appears to be okay other than that.
I also saw that the way you push the mat4 array into the shader is a bit off. glUniformMatrix4fv expects, in order: the uniform location returned by glGetUniformLocation, the number of matrices you are uploading (boneTransforms.size() in your case), whether to transpose them (I would write GL_FALSE rather than false, although that shouldn't make a difference), and finally a pointer to the first float of the matrix data. Casting &boneTransforms gives you the address of the std::vector object itself, not of its elements; the last argument should look more like this:
&boneTransforms[0][0]

OpenGL pass packed texture coordinates to the shader

I need to do some heavy optimization in my game, and I was thinking: since we can pass a vec4 color to the shader packed into only one float, is there a way to pass the vec2 texture coordinates the same way? For example, in the following code, can I pass the two texture coordinates (they're always 1 and 0) as only one element, the same way it's done for the color element?
public void point(float x,float y,float z,float size,float color){
    float hsize=size/2;
    if(index>vertices.length-7)return;
    vertices[index++]=x-hsize;
    vertices[index++]=y-hsize;
    vertices[index++]=color;
    vertices[index++]=0;
    vertices[index++]=0;
    vertices[index++]=x+hsize;
    vertices[index++]=y-hsize;
    vertices[index++]=color;
    vertices[index++]=1;
    vertices[index++]=0;
    vertices[index++]=x+hsize;
    vertices[index++]=y+hsize;
    vertices[index++]=color;
    vertices[index++]=1;
    vertices[index++]=1;
    vertices[index++]=x-hsize;
    vertices[index++]=y+hsize;
    vertices[index++]=color;
    vertices[index++]=0;
    vertices[index++]=1;
    num++;
}
{
    mesh=new Mesh(false,COUNT*20, indices.length,
        new VertexAttribute(Usage.Position, 2,"a_position"),
        new VertexAttribute(Usage.ColorPacked, 4,"a_color"),
        new VertexAttribute(Usage.TextureCoordinates, 2,"a_texCoord0")
    );
}
Fragment shader:
varying vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
void main() {
    vec4 color = v_color * texture2D(u_texture, v_texCoords);
    gl_FragColor = color;
}
Vertex shader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
    v_color = a_color;
    v_texCoords = a_texCoord0;
    gl_Position = u_projTrans * a_position;
}
I know this will not make a big difference in performance, but I am willing to spend some extra hours here and there making these small optimizations in order to make my game run faster.
I haven't tried this, but I think it will work. Simply use Usage.ColorPacked for your texture coordinates. I don't think you can send anything smaller than 4 bytes, so you may as well use the already defined packed color format. You will only be saving one float (4 bytes) per vertex. You can put your coordinates into the first two elements and ignore the other two.
mesh = new Mesh(false,COUNT*20, indices.length,
    new VertexAttribute(Usage.Position, 2,"a_position"),
    new VertexAttribute(Usage.ColorPacked, 4,"a_color"),
    new VertexAttribute(Usage.ColorPacked, 4,"a_texCoord0")
);
I don't think the usage parameter of VertexAttribute is actually used for anything in Libgdx. If you look at the source, it only checks whether the usage is ColorPacked or not, and from that decides whether to use a single byte per component or four bytes.
And in your vertex shader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec4 a_texCoord0; //note using vec4
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
    v_color = a_color;
    v_texCoords = a_texCoord0.xy; //using the first two elements
    gl_Position = u_projTrans * a_position;
}
To translate the texture coordinate correctly:
final Color texCoordColor = new Color(0, 0, 0, 0);
//....
texCoordColor.r = texCoord.x;
texCoordColor.g = texCoord.y;
float packedTexCoord = texCoordColor.toFloatBits();
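Inside the point() method from the question, the packing could then look something like this (a sketch; the static Color.toFloatBits overload does the same encoding as the instance call above, and the helper name is just illustrative):
// Pack (u, v) into one float the same way the colour is packed.
// The blue and alpha channels are simply left unused.
private static float packTexCoord(float u, float v){
    return Color.toFloatBits(u, v, 0f, 0f);
}
and in point():
vertices[index++] = packTexCoord(1f, 0f); // one packed float instead of two separate floats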

GLSL uniform access causing segfault in program

I have a program set up with deferred rendering. I am in the process of removing my position texture in favour of reconstructing positions from depth. I have done this before with no trouble, but now, for some reason, I am getting a segfault when trying to access matrices I pass in through uniforms!
My fragment shader (vertex shader irrelevant):
#version 430 core
layout(location = 0) uniform sampler2D depth;
layout(location = 1) uniform sampler2D diffuse;
layout(location = 2) uniform sampler2D normal;
layout(location = 3) uniform sampler2D specular;
layout(location = 4) uniform mat4 view_mat;
layout(location = 5) uniform mat4 inv_view_proj_mat;
layout(std140) uniform light_data{
    // position etc., works fine
} light;
in vec2 uv_f;
vec3 reconstruct_pos(){
    float z = texture(depth, uv_f).r;
    vec4 pos = vec4(uv_f * 2.0 - 1.0, z * 2.0 - 1.0, 1.0);
    //pos = inv_view_proj_mat * pos; //un-commenting this line causes the segfault
    return pos.xyz / pos.w;
}
layout(location = 3) out vec4 lit; // location 3 is the lighting texture
void main(){
    vec3 pos = reconstruct_pos();
    lit = vec4(0.0, 1.0, 1.0, 1.0); // just fill the screen with light blue
}
As you can see, the line causing this segfault is marked in the reconstruct_pos() function.
Why is this causing a segfault? I have checked the data within the application, and it is correct.
EDIT:
The code I use to update my matrix uniforms:
// bind program
glUniformMatrix4fv(4, 1, GL_FALSE, &view_mat[0][0]);
glUniformMatrix4fv(5, 1, GL_FALSE, &inv_view_proj_mat[0][0]);
// do draw calls
The problem was my call to glBindBufferBase when allocating my light buffer. Now that I have corrected the arguments I am passing, everything works fine with no segfaults.
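For anyone hitting the same thing, a correct uniform-block binding looks roughly like this; the question doesn't show the original call, so the binding index and buffer name here are only illustrative:
// Tie the light_data block to binding point 0 and bind the buffer object there.
GLuint blockIndex = glGetUniformBlockIndex(program, "light_data");
glUniformBlockBinding(program, blockIndex, 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, lightBuffer); // target, binding index, buffer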
Now the next question is: Why are all of my uniform locations reporting to be -1 O_o
Maybe it's the default location, who knows.
glUniformMatrix4fv() expects the input data to be 16 contiguous floats per matrix in column-major order (i.e. a flat float array[16];). Passing a pointer that does not actually point at matrix data laid out that way, for example a pointer to a container object rather than to its first element, can cause a segfault or a program malfunction.