How to initialize an array inside a struct in GLSL

I'm trying to initialize an array inside a struct, as follows:
struct myStruct {
    vec3 data[20] = vec3[20](vec3(1, 1, 1), vec3( 1, -1, 1), vec3(-1, -1, 1), vec3(-1, 1, 1),
                             vec3(1, 1, -1), vec3( 1, -1, -1), vec3(-1, -1, -1), vec3(-1, 1, -1),
                             vec3(1, 1, 0), vec3( 1, -1, 0), vec3(-1, -1, 0), vec3(-1, 1, 0),
                             vec3(1, 0, 1), vec3(-1, 0, 1), vec3( 1, 0, -1), vec3(-1, 0, -1),
                             vec3(0, 1, 1), vec3( 0, -1, 1), vec3( 0, -1, -1), vec3( 0, 1, -1));
};
But I get this error:
ERROR: 0:84: '=' : syntax error: syntax error
Is it possible to do that?

struct starts a type specification and not a variable declaration. You have to declare a variable and use a struct constructor (see Data Type (GLSL) - Struct constructors):
struct myStruct {
    vec3 data[20];
};
myStruct myVar = myStruct( vec3[20]( vec3(1, 1, 1), ..... ) );
See GLSL Specification - 4.1.8 Structures
User-defined types can be created by aggregating other already defined types into a structure using the struct keyword. For example,
struct light {
    float intensity;
    vec3 position;
} lightVar;
Structures can be initialized at declaration time using constructors, as discussed in section 5.4.3 “Structure Constructors”.
See GLSL Specification - 5.4.3 Structure Constructors
Once a structure is defined, and its type is given a name, a constructor is available with the same name to construct instances of that structure. For example:
struct light {
    float intensity;
    vec3 position;
};
light lightVar = light(3.0, vec3(1.0, 2.0, 3.0));
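Putting it together for the original 20-element array, a minimal sketch of the working declaration looks like this (array constructors require GLSL 1.20 or later):
struct myStruct {
    vec3 data[20];
};
// declare a variable of the struct type and initialize it with the
// struct constructor, passing the whole array via an array constructor
const myStruct myVar = myStruct(vec3[20](
    vec3(1, 1, 1), vec3( 1, -1, 1), vec3(-1, -1, 1), vec3(-1, 1, 1),
    vec3(1, 1, -1), vec3( 1, -1, -1), vec3(-1, -1, -1), vec3(-1, 1, -1),
    vec3(1, 1, 0), vec3( 1, -1, 0), vec3(-1, -1, 0), vec3(-1, 1, 0),
    vec3(1, 0, 1), vec3(-1, 0, 1), vec3( 1, 0, -1), vec3(-1, 0, -1),
    vec3(0, 1, 1), vec3( 0, -1, 1), vec3( 0, -1, -1), vec3( 0, 1, -1)));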

Related

Perspective projection turns cube into weird TV-shaped cuboid

This is my perspective projection matrix code:
inline m4
Projection(float WidthOverHeight, float FOV)
{
    float Near = 1.0f;
    float Far = 100.0f;
    float f = 1.0f / (float)tan(DegToRad(FOV / 2.0f));
    float fn = 1.0f / (Near - Far);
    float a = f / WidthOverHeight;
    float b = f;
    float c = Far * fn;
    float d = Near * Far * fn;
    m4 Result =
    {
        {{a, 0, 0, 0},
         {0, b, 0, 0},
         {0, 0, c, -1},
         {0, 0, d, 0}}
    };
    return Result;
}
And here is the main code:
m4 Project = Projection(ar, 90);
m4 Move = {};
CreateMat4(&Move,
           1, 0, 0, 0,
           0, 1, 0, 0,
           0, 0, 1, -2,
           0, 0, 0, 1);
m4 Rotate = Rotation(Scale);
Scale += 0.01f;
m4 FinalTransformation = Project * Move * Rotate;
SetShaderUniformMat4("Project", FinalTransformation, ShaderProgram);
Here are some pictures of the cube rotating.
In the shader code I just multiply the transformation by the position (with the transformation being on the left).
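Presumably the relevant line is something like the following sketch (the uniform and attribute names are assumptions):
uniform mat4 Transform;
attribute vec3 Position;
void main(){
    gl_Position = Transform * vec4(Position, 1.0); //transformation on the left
}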
I am not sure if it's helpful, but here is the rotation code:
float c = cos(Angle);
float s = sin(Angle);
m4 R =
{
    {{ c, 0, s, 0},
     { 0, 1, 0, 0},
     {-s, 0, c, 0},
     { 0, 0, 0, 1}}
};
return R;
I tried multiplying the matrices in the shader code instead of on the C++ side, but then everything disappeared.
OpenGL matrices are stored in column-major order. You have to read the columns from left to right. For example, the 1st column of the matrix R is { c, 0, s, 0}, the 2nd one is { 0, 1, 0, 0}, the 3rd is {-s, 0, c, 0} and the 4th is { 0, 0, 0, 1}. The lines in your code are actually columns, not rows.
Therefore you need to transpose your projection matrix (Project) and translation matrix (Move).
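For illustration, here is the same translation written directly in GLSL, where a mat4 constructor is filled column by column (a sketch, not your m4 type):
mat4 Move = mat4(1.0, 0.0, 0.0, 0.0,   // column 0
                 0.0, 1.0, 0.0, 0.0,   // column 1
                 0.0, 0.0, 1.0, 0.0,   // column 2
                 0.0, 0.0, -2.0, 1.0); // column 3 holds the translation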

How to ensure that the vector in homogeneous coordinates is still a vector after transformation

I performed an MVP transformation on the vertices of the model. In theory, I must apply the inverse transpose matrix of the MVP transformation to the normal.
This is the derivation process:
(A, B, C) is the normal of the plane where the point (x, y, z) lies
For a vector such as (x0, y0, z0), the homogeneous representation is (x0, y0, z0, 0). After a transformation it should still be a vector, like (x1, y1, z1, 0). This requires that the last row of the 4x4 transformation matrix is all 0 except for the element in the last column; otherwise the result becomes (x1, y1, z1, n) after the transformation.
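To make that concrete, here is a minimal GLSL-style sketch (the helper name is hypothetical); with w = 0 the translation column is ignored, and the result keeps w = 0 exactly when the first three entries of the bottom row are 0:
// GLSL matrices are indexed M[column][row], so the bottom row is
// M[0][3], M[1][3], M[2][3], M[3][3], and
// (M * vec4(v, 0.0)).w == M[0][3]*v.x + M[1][3]*v.y + M[2][3]*v.z
vec3 transformDirection(mat4 M, vec3 v){
    return (M * vec4(v, 0.0)).xyz; //w = 0 discards the translation column
}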
In fact, my MVP transformation matrix does not satisfy this property once it undergoes the inverse-transpose transformation.
Code:
Mat<4, 4> View(const Vec3& pos){
    Mat<4, 4> pan{1, 0, 0, -pos.x,
                  0, 1, 0, -pos.y,
                  0, 0, 1, -pos.z,
                  0, 0, 0, 1};
    Vec3 v = Cross(camera.lookAt, camera.upDirection).Normalize();
    Mat<4, 4> rotate{v.x, v.y, v.z, 0,
                     camera.upDirection.x, camera.upDirection.y, camera.upDirection.z, 0,
                     -camera.lookAt.x, -camera.lookAt.y, -camera.lookAt.z, 0,
                     0, 0, 0, 1};
    return rotate * pan;
}
Mat<4, 4> Projection(double near, double far, double fov, double aspectRatio){
    double angle = fov * PI / 180;
    double t = -near * tan(angle / 2);
    double b = -t;
    double r = t * aspectRatio;
    double l = -r;
    Mat<4, 4> zoom{2 / (r - l), 0, 0, 0,
                   0, 2 / (t - b), 0, 0,
                   0, 0, 2 / (near - far), 0,
                   0, 0, 0, 1};
    Mat<4, 4> pan{1, 0, 0, -(l + r) / 2,
                  0, 1, 0, -(t + b) / 2,
                  0, 0, 1, -(near + far) / 2,
                  0, 0, 0, 1};
    Mat<4, 4> extrusion{near, 0, 0, 0,
                        0, near, 0, 0,
                        0, 0, near + far, -near * far,
                        0, 0, 1, 0};
    Mat<4, 4> ret = zoom * pan * extrusion;
    return ret;
}
Mat<4, 4> modelMatrix = Mat<4, 4>::identity();
Mat<4, 4> viewMatrix = View(camera.position);
Mat<4, 4> projectionMatrix = Projection(-0.1, -50, camera.fov, camera.aspectRatio);
Mat<4, 4> mvp = projectionMatrix * viewMatrix * modelMatrix;
Mat<4, 4> mvpInverseTranspose = mvp.Inverse().Transpose();
Mat<4, 4> modelMatrix = Mat<4, 4>::identity();
Mat<4, 4> viewMatrix = View(camera.position);
Mat<4, 4> projectionMatrix = Projection(-0.1, -50, camera.fov, camera.aspectRatio);
Mat<4, 4> mvp = projectionMatrix * viewMatrix * modelMatrix;
Mat<4, 4> mvpInverseTranspose = mvp.Inverse().Transpose();
mvp:
-2.29032 0 0.763441 -2.68032e-16
0 -2.41421 0 0
-0.317495 0 -0.952486 2.97455
0.316228 0 0.948683 -3.16228
mvpInverseTranspose:
-0.392957 0 0.130986 0
0 -0.414214 0 0
-4.99 0 -14.97 -4.99
-4.69377 0 -14.0813 -5.01
I think I see the problem now. The lighting should be calculated in world space, so I only need to apply the inverse transpose of the model transformation (not the full MVP) to the normal.
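In shader terms, a minimal sketch of that fix could look as follows (the uniform names are assumptions; the inverse transpose is computed on the CPU from the model matrix alone):
uniform mat4 mvp;
uniform mat4 modelInvTranspose; //transpose(inverse(model)), model matrix only
attribute vec3 position;
attribute vec3 normal;
varying vec3 worldNormal;
void main(){
    //w = 0 keeps the normal a direction; lighting then happens in world space
    worldNormal = normalize((modelInvTranspose * vec4(normal, 0.0)).xyz);
    gl_Position = mvp * vec4(position, 1.0);
}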

Map two colours to two other colours in GLSL

I'm trying to take a noise pattern which consists of black and white (and grey where there is a smooth transition between the two), and I am trying to map it to two different colours, but I'm having trouble figuring out how to do this.
I can easily replace the white or black with a simple if statement, but the gradient areas where the white and black are mixed remain a mix of white and black, which makes sense. So I need to actually map the colours to the new colours, but I have no idea how I'm supposed to go about this.
There are a couple of easy ways.
The inflexible way: use mix
gl_FragColor = mix(color0, color1, noise);
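For example, a minimal fragment shader along those lines might look like this (the uniform and varying names are assumptions):
precision mediump float;
varying float v_noise;  //0.0 where the source is black, 1.0 where it is white
uniform vec4 u_color0;  //replacement for black
uniform vec4 u_color1;  //replacement for white
void main(){
    //linearly blend the two replacement colors by the noise value
    gl_FragColor = mix(u_color0, u_color1, v_noise);
}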
The more flexible way: use a ramp texture
float u = (noise * (rampTextureWidth - 1.0) + 0.5) / rampTextureWidth;
gl_FragColor = texture2D(rampTexture, vec2(u, 0.5));
The (rampTextureWidth - 1.0) and + 0.5 terms map noise values of 0.0 and 1.0 to the centers of the first and last texels, so the ends of the ramp are sampled exactly. Using ramp textures handles any number of colors, whereas mix only handles two.
const vs = `
attribute vec4 position;
attribute float noise;
uniform mat4 u_matrix;
varying float v_noise;
void main() {
  gl_Position = u_matrix * position;
  v_noise = noise;
}
`;
const fs = `
precision highp float;
varying float v_noise;
uniform sampler2D rampTexture;
uniform float rampTextureWidth;
void main() {
  float u = (v_noise * (rampTextureWidth - 1.0) + 0.5) / rampTextureWidth;
  gl_FragColor = texture2D(rampTexture, vec2(u, 0.5));
}
`;
"use strict";
const m4 = twgl.m4;
const gl = document.querySelector("canvas").getContext("webgl");
// compiles shaders, links program, looks up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
/*
6------7
/| /|
/ | / |
2------3 |
| | | |
| 4---|--5
| / | /
|/ |/
0------1
*/
const arrays = {
  position: [
    -1, -1, -1,
     1, -1, -1,
    -1,  1, -1,
     1,  1, -1,
    -1, -1,  1,
     1, -1,  1,
    -1,  1,  1,
     1,  1,  1,
  ],
  noise: {
    numComponents: 1,
    data: [
      1, 0.5, 0.2, 0.3, 0.9, 0.1, 0.7, 1,
    ],
  },
  indices: [
    0, 2, 1,  1, 2, 3,
    1, 3, 5,  5, 3, 7,
    5, 7, 4,  4, 7, 6,
    4, 6, 0,  0, 6, 2,
    2, 6, 3,  6, 7, 3,
    0, 1, 4,  4, 1, 5,
  ],
};
const bufferInfo = twgl.createBufferInfoFromArrays(gl, arrays);
const red     = [255,   0,   0, 255];
const yellow  = [255, 255,   0, 255];
const blue    = [  0,   0, 255, 255];
const green   = [  0, 255,   0, 255];
const cyan    = [  0, 255, 255, 255];
const magenta = [255,   0, 255, 255];
function makeTexture(gl, name, colors) {
  const width = colors.length / 4;
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA,
                width, 1, 0,
                gl.RGBA, gl.UNSIGNED_BYTE,
                new Uint8Array(colors));
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  return {
    name,
    texture,
    width,
  };
}
const textures = [
  makeTexture(gl, 'one color',
              [...red]),
  makeTexture(gl, 'two colors',
              [...red, ...yellow]),
  makeTexture(gl, 'three colors',
              [...blue, ...red, ...yellow]),
  makeTexture(gl, 'six colors',
              [...green, ...red, ...blue, ...yellow, ...cyan, ...magenta]),
];
const infoElem = document.querySelector('#info');
function render(time) {
  time *= 0.001;
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
  gl.enable(gl.DEPTH_TEST);
  gl.enable(gl.CULL_FACE);
  // draw cube
  const fov = 30 * Math.PI / 180;
  const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
  const zNear = 0.5;
  const zFar = 40;
  const projection = m4.perspective(fov, aspect, zNear, zFar);
  const eye = [1, 4, -7];
  const target = [0, 0, 0];
  const up = [0, 1, 0];
  const camera = m4.lookAt(eye, target, up);
  const view = m4.inverse(camera);
  const viewProjection = m4.multiply(projection, view);
  const world = m4.rotationY(time);
  gl.useProgram(programInfo.program);
  const tex = textures[time / 2 % textures.length | 0];
  infoElem.textContent = tex.name;
  // calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
  twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
  // calls gl.uniformXXX, gl.activeTexture, gl.bindTexture
  twgl.setUniformsAndBindTextures(programInfo, {
    u_matrix: m4.multiply(viewProjection, world),
    rampTexture: tex.texture,
    rampTextureWidth: tex.width,
  });
  // calls gl.drawArrays or gl.drawElements
  twgl.drawBufferInfo(gl, bufferInfo);
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
<div id="info"></div>

Do I need Bind Pose Bone Transformation for my mesh Animation?

I have a Hand mesh which I want to animate.
I have the Skeleton which can be hierarchically animated.
My mesh is also weighted in Blender, so each vertex has 4 associated bones affecting it.
When I apply the animation of my Skeleton to the mesh, the hierarchy is applied correctly (so the hierarchy of the mesh matches the hierarchy of the Skeleton).
So far so good. Now the question:
The fingers look stretched (it's as if the fingers were smashed by a heavy door). Why?
Note: I didn't apply the bind pose bone transformation matrix explicitly, but I read about it and I believe its functionality is already there, in the hierarchical transformation I have for my Skeleton.
If you need more clarification of the steps, please ask.
vector<glm::mat4> Posture1Hand::HierarchyApplied(HandSkltn HNDSKs){
    vector<glm::mat4> Matrices;
    Matrices.resize(HNDSKs.GetLimbNum());
    //non-hierarchical matrices
    for (unsigned int i = 0; i < Matrices.size(); i++){
        Matrices[i] = newPose[i].getModelMatSkltn(HNDSKs.GetLimb(i).getLwCenter());
    }
    for (unsigned int i = 0; i < Matrices.size(); i++){
        vector<Limb*> childeren = HNDSKs.GetLimb(i).getChildren();
        for (unsigned int j = 0; j < childeren.size(); j++){
            Matrices[childeren[j]->getId()] = Matrices[i] * Matrices[childeren[j]->getId()];
        }
    }
    return Matrices;
}
Here is my getModelMatSkltn method.
inline glm::mat4 getModelMatSkltn(const glm::vec3& RotationCentre) const{ //to apply the rotation on the whole hierarchy
    glm::mat4 posMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
    posMatrix = glm::translate(posMatrix, newPos);
    glm::mat4 trMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
    glm::mat4 OriginTranslate = glm::translate(trMatrix, -RotationCentre);
    glm::mat4 InverseTranslate = glm::translate(trMatrix, RotationCentre);
    glm::mat4 rotXMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
    rotXMatrix = glm::rotate(rotXMatrix, glm::radians(newRot.x), glm::vec3(1, 0, 0));
    glm::mat4 rotYMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
    rotYMatrix = glm::rotate(rotYMatrix, glm::radians(newRot.y), glm::vec3(0, 1, 0));
    glm::mat4 rotZMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
    rotZMatrix = glm::rotate(rotZMatrix, glm::radians(newRot.z), glm::vec3(0, 0, 1));
    glm::mat4 scaleMatric = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
    scaleMatric = glm::scale(scaleMatric, newScale);
    glm::mat4 rotMatrix = rotZMatrix * rotYMatrix * rotXMatrix;
    rotMatrix = InverseTranslate * rotMatrix * OriginTranslate;
    return posMatrix * rotMatrix * scaleMatric;
}
And this is how I send 20 transformation matrices (one for each of the 20 joints in the hand) to the GPU:
void GLShader::Update(const vector<glm::mat4> trMat, const GLCamera& camera){
    vector<glm::mat4> MVP; MVP.resize(trMat.size());
    for (unsigned int i = 0; i < trMat.size(); i++){
        MVP[i] = camera.getViewProjection() * trMat[i];
    }
    glUniformMatrix4fv(newUniform[TRANSFORM_U], trMat.size(), GL_FALSE, &MVP[0][0][0]); //4x4 floating-point values
}
I guess one should be familiar with how the vertex position is calculated in the shader in order to answer this question, so I am including part of my vertex shader too.
attribute vec3 position;
attribute vec2 texCoord;
attribute vec4 weight;
attribute vec4 weightInd;
uniform mat4 transform[20]; //uniform array for the 20 joints in my skeleton
void main(){
    mat4 WMat; //weighted matrix
    float w;
    int Index;
    for (int i = 0; i < 4; i++){
        Index = int(weightInd[i]);
        w = weight[i];
        WMat += w * transform[Index];
    }
    gl_Position = WMat * vec4(position, 1.0);
}

OpenGL - reconstruct position from depth in VS

I am trying to reconstruct position from a depth texture in the vertex shader. Usually this is done in the pixel shader, but for some reason I need it in the VS to transform some geometry.
So, my approach:
1) I calculate the view frustum corners in view space.
I use the NDC values below as input. Those values are transformed via Inverse(view * proj) to put them into world space and then transformed via the view matrix.
//GL - Left Handed - need to "swap" front and back Z coordinate
MyMath::Vector4 cornersVector4[] =
{
    //front
    MyMath::Vector4(-1, -1,  1, 1), //A
    MyMath::Vector4( 1, -1,  1, 1), //B
    MyMath::Vector4( 1,  1,  1, 1), //C
    MyMath::Vector4(-1,  1,  1, 1), //D
    //back
    MyMath::Vector4(-1, -1, -1, 1), //E
    MyMath::Vector4( 1, -1, -1, 1), //F
    MyMath::Vector4( 1,  1, -1, 1), //G
    MyMath::Vector4(-1,  1, -1, 1), //H
};
If I print debug output, it seems correct (the camera position is at distance zNear from the near plane, and the far plane is far enough away).
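For reference, the unprojection in step 1 could be sketched like this in GLSL terms (invViewProj and view stand for your matrices; note the perspective divide after multiplying by the inverse matrix):
//unproject an NDC corner into world space, then bring it into view space
vec3 cornerToViewSpace(vec4 ndcCorner, mat4 invViewProj, mat4 view){
    vec4 world = invViewProj * ndcCorner;
    world /= world.w;          //perspective divide after the inverse transform
    return (view * world).xyz; //world space -> view space
}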
2) I post the values to the shader.
3) In the shader I do this:
vec3 _cornerPos0 = cornerPos0.xyz * mat3(viewInv);
vec3 _cornerPos1 = cornerPos1.xyz * mat3(viewInv);
vec3 _cornerPos2 = cornerPos2.xyz * mat3(viewInv);
vec3 _cornerPos3 = cornerPos3.xyz * mat3(viewInv);
float x = (TEXCOORD1.x / 100.0); //TEXCOORD1.x = <0, 100>
float y = (TEXCOORD1.y / 100.0); //TEXCOORD1.y = <0, 100>
vec3 ray = mix(mix(_cornerPos0, _cornerPos1, x),
               mix(_cornerPos2, _cornerPos3, x),
               y);
float depth = texture2D(depthTexture, vec2(x, y)).r;
//depth is created in the draw pass before with: depth = vertexViewPos.z / farClipPlane;
vec3 reconstructed_posWS = camPos + (depth * ray);
But if I do this and translate my geometry from [0,0,0] to reconstructed_posWS, only part of the screen is covered. What can be incorrect?
PS: some calculations are redundant (transforming to a space and then back again), but speed is not a concern at the moment.