Can't give Matrix to Shader as uniform - opengl

I'm currently implementing matrices in my engine. With the standard glTranslate and glRotate calls and then ftransform() in the shader it works; done manually, it does not.
How I give the matrix to the shader:
public static void loadMatrix(int location, Matrix4f matrix)
{
    FloatBuffer buffer = BufferUtils.createFloatBuffer(16);
    matrix.store(buffer);
    buffer.flip();
    glUniformMatrix4(location, false, buffer);
}
Sending viewMatrix:
shaderEngine.loadMatrix(glGetUniformLocation(shaderEngine.standard, "viewMatrix"), camera.viewMatrix);
shaderEngine.loadMatrix(glGetUniformLocation(shaderEngine.obj, "viewMatrix"), camera.viewMatrix);
System.out.println(camera.viewMatrix.toString());
In the shader I get it with:
uniform mat4 viewMatrix;
and in the shader's main I set the fragment color:
gl_FragColor = vec4(viewMatrix[0][3] / -256,0,0,1);
which is black (so viewMatrix[0][3] == 0), while my matrix output in Java looks like this:
1.0 0.0 0.0 -128.0
0.0 1.0 0.0 -22.75
0.0 0.0 1.0 -128.0
0.0 0.0 0.0 1.0

Your confusion seems to come from how array subscripts address the elements of a matrix in GLSL. The first subscript is the column and the second is the row.
Thus, unless you transpose your matrix, column 1 row 4 == 0.0.
If you transpose your matrix or swap the subscripts, you will get -128.0.
That second parameter in the call to glUniformMatrix4 (...) allows you to transpose the matrix before GLSL gets its hands on it, by the way. This will allow you to treat everything as row-major if that is more natural to you.
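For illustration, here is the same idea in terms of the raw C call that LWJGL's glUniformMatrix4(location, transpose, buffer) wraps; the row-major array just mirrors the printout from the question, and load_view_matrix is a made-up helper name:
/* Sketch using the raw C API, assuming a loader such as GLEW provides the
 * GL 2.0 entry points. The array is written row-major, translation in the
 * last column, exactly as the matrix prints in the question above. Passing
 * GL_TRUE as the transpose argument makes GL flip it on upload, so the shader
 * then sees the translation in the fourth *column*, i.e. viewMatrix[3][0] == -128.0. */
#include <GL/glew.h>

void load_view_matrix(GLint location)
{
    const GLfloat view[16] = {
        1.0f, 0.0f, 0.0f, -128.0f,
        0.0f, 1.0f, 0.0f,  -22.75f,
        0.0f, 0.0f, 1.0f, -128.0f,
        0.0f, 0.0f, 0.0f,    1.0f
    };
    glUniformMatrix4fv(location, 1, GL_TRUE, view);
}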

My problem was that I was giving the matrices to the shader while the currently bound program was 0.
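In raw C terms the fix is just a matter of ordering. A minimal sketch (program stands in for the shader handle, e.g. shaderEngine.standard above; a loader such as GLEW is assumed for the GL 2.0 entry points):
#include <GL/glew.h>

/* glUniform* calls always affect the program that is currently in use,
 * so bind the program before uploading its uniforms. */
void upload_view_matrix(GLuint program, const GLfloat *view /* 16 floats, column-major */)
{
    glUseProgram(program);                       /* must come first */
    GLint loc = glGetUniformLocation(program, "viewMatrix");
    glUniformMatrix4fv(loc, 1, GL_FALSE, view);  /* now targets "program" */
}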

Related

Why does the golang gomobile basic example set a 3-float size for a vec4 attribute?

The golang gomobile basic example [1] uses VertexAttribPointer to set 3 floats per vertex. However, the vertex shader attribute type is vec4. Shouldn't it be vec3? Why?
Within render loop:
glctx.VertexAttribPointer(position, coordsPerVertex, gl.FLOAT, false, 0, 0)
Triangle data:
var triangleData = f32.Bytes(binary.LittleEndian,
0.0, 0.4, 0.0, // top left
0.0, 0.0, 0.0, // bottom left
0.4, 0.0, 0.0, // bottom right
)
Constant declaration:
const (
coordsPerVertex = 3
vertexCount = 3
)
In vertex shader:
attribute vec4 position;
[1] gomobile basic example: https://github.com/golang/mobile/blob/master/example/basic/main.go
Vertex attributes are conceptually always 4 component vectors. There is no requirement that the number of components you use in the shader and the one you set up for the attribute pointer have to match. If your array has more components than your shader consumes, the additional components are just ignored. If your array supplies less components, the attribute is filled to a vector of the form (0,0,0,1) (which makes sense for homogeneous position vectors as well as RGBA colors).
In the usual case you want w=1 for every input position anyway, so there is no need to store that in an array. But you usually need the full 4D form when applying the transformation matrices (or even when directly forwarding the value as gl_Position). So your shader could conceptually do
in vec3 pos;
gl_Position=vec4(pos,1);
but that would be equivalent to just writing
in vec4 pos;
gl_Position=pos;

How can I add different color overlays to each iteration of the texture in this glsl shader?

So I'm working on this shader right now, and the goal is to have a series of camera-based tiles, each overlaid with a different color (think Andy Warhol). I've got the tiles working (side issue: the tiles on the ends are currently being cut off by ~50%), but I want to add a color filter to each iteration. I'm looking for the cleanest possible way of doing this. Any ideas?
frag shader:
#define N 3.0 // number of columns
#define M 3. // number of rows
#import "GPUImageWarholFilter.h"
NSString *const kGPUImageWarholFragmentShaderString = SHADER_STRING
(
precision highp float;
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
    vec4 color = texture2D(inputImageTexture, vec2(fract(textureCoordinate.x * N) / (M / N), fract(textureCoordinate.y * M) / (M / N)));
    gl_FragColor = color;
}
);
Let's say your texture is getting applied to a quad and your vertices are:
float quad[] = {
    0.0, 0.0,
    1.0, 0.0,
    0.0, 1.0, // First triangle
    1.0, 0.0,
    1.0, 1.0,
    0.0, 1.0  // Second triangle
};
You can specify texture coordinates which correspond to each of these vertices and describe the relationship in your vertex array object.
Now, if you quadruple the number of vertices, you'll have access to midpoints between the extents of the quad (creating four sub-quads), and you can associate (0,0) -> (1,1) texture coordinates with each of these subquads. The effect would be a rendering of the full texture in each subquad.
You could then do math on your vertices in your vertex shader to calculate a color component to assign to each subquad. The color component would be passed to your fragment shader. Use uniforms to specify number of rows and columns then define the color component based on which cell you've calculated the current vertex to be in.
You could also try to compute everything in the fragment shader, but I think it could get expensive pretty fast. Just a hunch though.
You could also try the GL_REPEAT texture parameter, but you'll have less control of the outcome: https://open.gl/textures
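To make the vertex-quadrupling suggestion a bit more concrete, here is a rough sketch in plain C of a slightly simpler variant: instead of computing the cell colour in the vertex shader from uniforms, it bakes a per-cell tint into a vertex attribute while building the sub-quad geometry. build_tinted_grid and the interleaved x, y, u, v, r, g, b layout are illustrative choices, not GPUImage API:
#include <stdlib.h>

/* Builds cols * rows sub-quads, each covering its own cell but carrying
 * full-range (0..1) texture coordinates plus a flat per-cell tint colour.
 * Layout per vertex: x, y, u, v, r, g, b (7 floats). Caller frees. */
float *build_tinted_grid(int cols, int rows, const float *cell_rgb /* cols*rows*3 */)
{
    const int floats_per_vertex = 7;
    const int verts_per_cell = 6;  /* two triangles per cell */
    float *data = malloc(sizeof(float) * cols * rows * verts_per_cell * floats_per_vertex);
    float *p = data;

    for (int j = 0; j < rows; ++j) {
        for (int i = 0; i < cols; ++i) {
            float x0 = (float)i / cols, x1 = (float)(i + 1) / cols;
            float y0 = (float)j / rows, y1 = (float)(j + 1) / rows;
            const float *rgb = cell_rgb + (j * cols + i) * 3;

            /* each corner pairs a cell-local position with a 0..1 texcoord */
            float corners[6][4] = {
                { x0, y0, 0.0f, 0.0f }, { x1, y0, 1.0f, 0.0f }, { x0, y1, 0.0f, 1.0f },
                { x1, y0, 1.0f, 0.0f }, { x1, y1, 1.0f, 1.0f }, { x0, y1, 0.0f, 1.0f }
            };
            for (int v = 0; v < verts_per_cell; ++v) {
                *p++ = corners[v][0]; *p++ = corners[v][1];   /* position  */
                *p++ = corners[v][2]; *p++ = corners[v][3];   /* texcoord  */
                *p++ = rgb[0]; *p++ = rgb[1]; *p++ = rgb[2];  /* cell tint */
            }
        }
    }
    return data;
}
The vertex shader would forward the tint as a varying, and the fragment shader would multiply it with the sampled texture colour.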

glscalef effect on normal vectors

So I have a .3ds mesh that I imported into my project, and it's quite large, so I wanted to scale it down in size using glScalef. However, the rendering algorithm I use in my shader makes use of normal vector values, and so after scaling, my rendering algorithm no longer works exactly as it should. So how do I remedy this? Is there a glScalef for normals as well?
Normal vectors are transformed by the transposed inverse of the modelview matrix. However, a second constraint is that normals must be of unit length, and scaling the modelview changes that. So in your shader, you should apply a normalization step:
#version 330
uniform mat4x4 MV;
uniform mat4x4 P;
uniform mat4x4 N; // = transpose(inverse(MV))
in vec3 vertex_pos;    // vertex position attribute
in vec3 vertex_normal; // vertex normal attribute
out vec3 trf_normal;
void main()
{
    // w = 0: a normal is a direction and must not pick up any translation
    trf_normal  = normalize( (N * vec4(vertex_normal, 0.0)).xyz );
    gl_Position = P * MV * vec4(vertex_pos, 1.0);
}
Note that "normalization" is the process of turning a vector into a collinear vector of unit length, and has nothing to do with the concept of surface normals.
To transform normals, you need to multiply them by the inverse transpose of the transformation matrix. There are several explanations of why this is the case; the best one is probably here.
Normals are transformed by MV^IT, the inverse transpose of the modelview matrix. Read the Red Book; it explains everything.

Why isn't this orthographic vertex shader producing the correct answer?

My issue is that I have a (working) orthographic vertex and fragment shader pair that allows me to specify the center X and Y of a sprite via 'translateX' and 'translateY' uniforms being passed in. I multiply by a projectionMatrix that is hardcoded and works great. Everything works as far as orthographic operation goes. My incoming geometry to this shader is based around 0, 0, 0 as its center point.
I now want to figure out what that center point (0, 0, 0 in local coordinate space) becomes after the translations. I need to know this information in the fragment shader. I assumed that I could create a vector at 0, 0, 0 and then apply the same translations. However, I'm NOT getting the correct answer.
My question: what am I doing wrong, and how can I even debug what's going on? I know that the value being computed must be wrong, but I have no insight into what it is. (My platform is Xcode 4.2 on OS X, developing for OpenGL ES 2.0 on iOS.)
Here's my vertex shader:
// Vertex Shader for pixel-accurate rendering
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;
uniform float translateX;
uniform float translateY;
// Set up orthographic projection
// this is for 640 x 960
mat4 projectionMatrix = mat4( 2.0/960.0, 0.0, 0.0, -1.0,
0.0, 2.0/640.0, 0.0, -1.0,
0.0, 0.0, -1.0, 0.0,
0.0, 0.0, 0.0, 1.0);
void main()
{
// Set position
gl_Position = a_position;
// Translate by the uniforms for offsetting
gl_Position.x += translateX;
gl_Position.y += translateY;
// Translate
gl_Position *= projectionMatrix;
// Do all the same translations to a vector with origin at 0,0,0
vec4 toPass = vec4(0, 0, 0, 1); // initialize. doesn't matter if w is 1 or 0
toPass.x += translateX;
toPass.y += translateY;
toPass *= projectionMatrix;
// this SHOULD pass the computed value to my fragment shader.
// unfortunately, whatever value is sent, isn't right.
//v_translatedOrigin = toPass;
// instead, I use this as a workaround, since I do know the correct values for my
// situation. of course this is hardcoded and is terrible.
v_translatedOrigin = vec4(500.0, 200.0, 0.0, 0.0);
}
EDIT: In response to my orthographic matrix being wrong: according to the orthographic projection matrix Wikipedia gives, my -1's look right, because in my case the 4th element of my matrix should be -((right+left)/(right-left)), which with right = 960 and left = 0 is -(960/960) = -1.
EDIT: I've possibly uncovered the root issue here - what do you think?
Why does your ortho matrix have -1's in the bottom of each column? Those should be zeros. Granted, that should not affect anything.
I'm more concerned about this:
gl_Position *= projectionMatrix;
What does that mean? Matrix multiplication is not commutative; M * a is not the same as a * M. So which side do you expect gl_Position to be multiplied on?
Oddly, the GLSL spec does not say (I filed a bug report on this). So you should go with what is guaranteed to work:
gl_Position = projectionMatrix * gl_Position;
Also, you should use proper vectorized code. You should have one translate uniform, which is a vec2. Then you can just do gl_Position.xy = a_position.xy + translate;. You'll have to fill in the Z and W with constants (gl_Position.zw = vec2(0, 1);).
Matrices in GLSL are column major. The first four values are the first column of the matrix, not the first row. You are multiplying with a transposed ortho matrix.
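For reference, here is the question's ortho matrix written out in column-major memory order (a sketch as a plain C array; the same ordering applies to the GLSL mat4 constructor and to glUniformMatrix4fv with the transpose flag set to GL_FALSE):
/* Each group of four values is one COLUMN, so the translation (-1, -1, 0)
 * ends up in the last group rather than at the end of the first two rows. */
const float ortho[16] = {
    2.0f / 960.0f, 0.0f,          0.0f,  0.0f,   /* column 0 */
    0.0f,          2.0f / 640.0f, 0.0f,  0.0f,   /* column 1 */
    0.0f,          0.0f,         -1.0f,  0.0f,   /* column 2 */
   -1.0f,         -1.0f,          0.0f,  1.0f    /* column 3 */
};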
I have to echo Nicol Bolas's sentiment. Two wrongs happening to make things work is frustrating, but doesn't make them any less wrong. The fact that things are showing up where you expect is likely because the translation portion of your matrix is 0, 0, 0.
The equation you posted is correct, but the notation is row major, and OpenGL is column major.
I run afoul of this stuff every new project I start. This site is a really good resource that helped me keep these things straight. They've got another page on projection matrices.
If you're not sure if your orthographic projection is correct (right now it isn't), try plugging the same values into glOrtho, and reading the values back out of GL_PROJECTION.
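For what it's worth, that check needs a desktop/compatibility context (OpenGL ES 2.0 has neither glOrtho nor GL_PROJECTION_MATRIX), but as an offline sanity check it looks roughly like this sketch:
#include <stdio.h>
#include <GL/gl.h>

/* Prints the projection matrix GL builds for the same 0..960 x 0..640 range.
 * Values are returned in column-major order; they are printed as rows here
 * purely for readability. */
void print_reference_ortho(void)
{
    GLfloat m[16];

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 960.0, 0.0, 640.0, -1.0, 1.0);  /* left, right, bottom, top, near, far */
    glGetFloatv(GL_PROJECTION_MATRIX, m);

    for (int row = 0; row < 4; ++row)
        printf("% .6f % .6f % .6f % .6f\n", m[row], m[row + 4], m[row + 8], m[row + 12]);
}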

Cocoa and OpenGL, How do I set a GLSL vertex attribute using an array?

I'm fairly new to OpenGL, and I seem to be experiencing some difficulties. I've written a simple shader in GLSL, that is supposed to transform vertices by given joint matrices, allowing simple skeletal animation. Each vertex has a maximum of two bone influences (stored as the x and y components of a Vec2), indices and corresponding weights that are associated with an array of transformation matrices, and are specified as "Attribute variables" in my shader, then set using the "glVertexAttribPointer" function.
Here's where the problem arises... I've managed to set the "Uniform Variable" array of matrices properly; when I check those values in the shader, all of them are imported correctly and they contain the correct data. However, when I attempt to set the joint indices variable, the vertices are multiplied by arbitrary transformation matrices! They jump to seemingly random positions in space (which are different every time); from this I am assuming that the indices are set incorrectly and my shader is reading past the end of my joint matrix array into the following memory. I'm not exactly sure why, because upon reading all of the information I could find on the subject, I was surprised to see the same (or at least very similar) code in their examples, and it seemed to work for them.
I have attempted to solve this problem for quite some time now and it's really beginning to get on my nerves... I know that the matrices are correct, and when I manually change the index value in the shader to an arbitrary integer, it reads the correct matrix values and works the way it should, transforming all the vertices by that matrix. But when I try to use the code I wrote to set the attribute variables, it does not seem to work.
The code I am using to set the variables is as follows...
// this works properly...
GLuint boneMatLoc = glGetUniformLocation([[[obj material] shader] programID], "boneMatrices");
glUniformMatrix4fv( boneMatLoc, matCount, GL_TRUE, currentBoneMatrices );
GLfloat testBoneIndices[8] = {1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0};
// this however, does not...
GLuint boneIndexLoc = glGetAttribLocation([[[obj material] shader] programID], "boneIndices");
glEnableVertexAttribArray( boneIndexLoc );
glVertexAttribPointer( boneIndexLoc, 2, GL_FLOAT, GL_FALSE, 0, testBoneIndices );
And my vertex shader looks like this...
// this shader is supposed to transform the vertices by a skeleton, a maximum
// of two bones per vertex with varying weights...
uniform mat4 boneMatrices[32]; // matrices for the bones
attribute vec2 boneIndices; // x for the first bone, y for the second
//attribute vec2 boneWeight; // the blend weights between the two bones
void main(void)
{
    gl_TexCoord[0] = gl_MultiTexCoord0; // just set up the texture coordinates...
    vec4 vertexPos1 = 1.0 * boneMatrices[ int(boneIndices.x) ] * gl_Vertex;
    //vec4 vertexPos2 = 0.5 * boneMatrices[ int(boneIndices.y) ] * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * (vertexPos1);
}
This is really beginning to frustrate me, and any and all help will be appreciated,
-Andrew Gotow
OK, I've figured it out. OpenGL draws triangles with the glDrawArrays function by reading every 9 values as a triangle (3 vertices with 3 components each). Because of this, vertices are repeated between triangles, so if two adjacent triangles share a vertex it comes up twice in the array. So my cube, which I originally thought had 8 vertices, actually has 36!
Six sides, two triangles per side, three vertices per triangle: it all multiplies out to a total of 36 independent vertices instead of 8 shared ones.
The entire problem was an issue with specifying too few values. As soon as I extended my test array to include 36 values, it worked perfectly.
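Put differently, every enabled attribute array has to supply one value per drawn vertex. A small sketch of the sizing (plain C with a GLEW-style loader assumed; the counts are for a cube drawn as 12 independent triangles and the attribute setup mirrors the code in the question):
#include <GL/glew.h>

enum { VERTEX_COUNT = 36 };  /* 6 faces * 2 triangles * 3 vertices */

/* one vec2 of bone indices per *drawn* vertex: 36 * 2 floats, not 8 * 2 */
static GLfloat boneIndexData[VERTEX_COUNT * 2];

void bind_bone_indices(GLuint boneIndexLoc)
{
    glEnableVertexAttribArray(boneIndexLoc);
    glVertexAttribPointer(boneIndexLoc, 2, GL_FLOAT, GL_FALSE, 0, boneIndexData);
    /* a later glDrawArrays(GL_TRIANGLES, 0, VERTEX_COUNT) reads all 36 entries */
}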