Is it possible to run code once per draw call in WebGL? - glsl

I'm taking a course in WebGL at NTNU. I'm currently exploring what the shaders do and how I can use them.
In one of our examples we compute a projection matrix, set it as a uniform in the vertex shader, and then make a draw call. I wanted to try doing this matrix computation in a shader instead.
This means I have to put the code somewhere other than the main() function of the vertex shader, since that one is invoked many times per draw call.
Vertex shader:
uniform vec3 camRotation;
attribute vec3 position;
void main() {
    // I want this code to run only once per draw call
    float rX = camRotation[0];
    float rY = camRotation[1];
    float rZ = camRotation[2];
    mat4 camMatrix = mat4(
        cos(rY) * cos(rZ), cos(rZ) * sin(rX) * sin(rY) - cos(rX) * sin(rZ), sin(rX) * sin(rZ) + cos(rX) * cos(rZ) * sin(rY), 0, //
        cos(rY) * sin(rZ), cos(rX) * cos(rZ) + sin(rX) * sin(rY) * sin(rZ), cos(rX) * sin(rY) * sin(rZ) - cos(rZ) * sin(rX), 0, //
        -sin(rY), cos(rY) * sin(rX), cos(rX) * cos(rY), 0, //
        0, 0, 0, 1
    );
    // End of code in question
    gl_Position = camMatrix * vec4(position, 1);
    gl_PointSize = 5.0;
}
Is it possible? Am I a fool for trying?

AFAIK, there's no way to do that. You should compute camMatrix in your JS code and pass it to the shader via a uniform:
uniform mat4 camMatrix;
attribute vec3 position;
void main() {
    gl_Position = camMatrix * vec4(position, 1);
    gl_PointSize = 5.0;
}
Now you need to compute the matrix in JS:
// assuming that program is your compiled shader program and
// gl is your WebGL context.
const cos = Math.cos;
const sin = Math.sin;
// note: the second argument (transpose) must be false in WebGL
gl.uniformMatrix4fv(gl.getUniformLocation(program, 'camMatrix'), false, [
    cos(rY) * cos(rZ), cos(rZ) * sin(rX) * sin(rY) - cos(rX) * sin(rZ), sin(rX) * sin(rZ) + cos(rX) * cos(rZ) * sin(rY), 0,
    cos(rY) * sin(rZ), cos(rX) * cos(rZ) + sin(rX) * sin(rY) * sin(rZ), cos(rX) * sin(rY) * sin(rZ) - cos(rZ) * sin(rX), 0,
    -sin(rY), cos(rY) * sin(rX), cos(rX) * cos(rY), 0,
    0, 0, 0, 1
]);

No, it's not possible; the whole concept of shaders is that they are vectorizable so they can run in parallel. Even if you could, there wouldn't be much gain, since the GPU's speed advantage is (among other things) inherently based on its ability to do computations in parallel. That aside, you usually have a combined view-projection matrix that remains static during all draw calls (of a frame) and a model/world matrix attached to each object you're drawing.
The projection matrix does what its name implies: it projects the points in either a perspective or an orthographic manner (you can think of this as the lens of your camera).
The view matrix is a transform that translates/rotates that projection (camera position and orientation), while the per-object world/model matrix contains the transformations (translation, rotation and scale) of the individual object.
In your shader you then transform your vertex position to world space using the per-object model/world matrix, and finally to clip space using the premultiplied ViewProjection matrix:
gl_Position = matViewProjection * (matWorld * vPosition)
Since you're drawing points, depending on your use case you could reduce the world matrix to just a translation vector.
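A minimal GLSL sketch of that setup (the uniform names here are illustrative, not from the course example): the view-projection product is computed once per frame on the CPU, and only the per-object world matrix changes between draw calls.
// Hypothetical uniform names; matViewProjection = projection * view is
// premultiplied once per frame on the CPU, matWorld is set per object.
uniform mat4 matViewProjection;
uniform mat4 matWorld;
attribute vec3 position;
void main() {
    // model space -> world space -> clip space
    gl_Position = matViewProjection * (matWorld * vec4(position, 1.0));
    gl_PointSize = 5.0;
}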

Normal Mapping Issues

I'm attempting to implement normal mapping into my glsl shaders for the first time. I've written an ObjLoader that calculates the Tangents and Bitangents - I then pass the relevant information to my shaders (I'll show code in a bit). However, when I run the program, my models end up looking like this:
Looks great, I know, but not quite what I am trying to achieve!
I understand that I should be simply calculating direction vectors and not moving the vertices - but it seems somewhere down the line I end up making that mistake.
I am unsure if I am making the mistake when reading my .obj file and calculating the tangent/bitangent vectors, or if the mistake is happening within my Vertex/Fragment Shader.
Now for my code:
In my ObjLoader - when I come across a face, I calculate the deltaPositions and deltaUv vectors for all three vertices of the face - and then calculate the tangent and bitangent vectors:
I then organize the collected vertex data to construct my list of indices, and in that process I restructure the tangent and bitangent vectors to respect the newly constructed index list.
Lastly, I perform orthogonalization and calculate the final bitangent vector.
After binding the VAO, VBO and IBO, and passing all the relevant information, my shader calculations are as follows:
Vertex Shader:
void main()
{
    // Output position of the vertex, in clip space
    gl_Position = MVP * vec4(pos, 1.0);
    // Position of the vertex, in world space
    v_Position = (M * vec4(pos, 0.0)).xyz;
    vec4 bitan = V * M * vec4(bitangent, 0.0);
    vec4 tang = V * M * vec4(tangent, 0.0);
    vec4 norm = vec4(normal, 0.0);
    mat3 TBN = transpose(mat3(tang.xyz, bitan.xyz, norm.xyz));
    // Vector that goes from the vertex to the camera, in camera space
    vec3 vPos_cameraspace = (V * M * vec4(pos, 1.0)).xyz;
    camdir_cameraspace = normalize(-vPos_cameraspace);
    // Vector that goes from the vertex to the light, in camera space
    vec3 lighPos_cameraspace = (V * vec4(lightPos_worldspace, 0.0)).xyz;
    lightdir_cameraspace = normalize((lighPos_cameraspace - vPos_cameraspace));
    v_TexCoord = texcoord;
    lightdir_tangentspace = TBN * lightdir_cameraspace;
    camdir_tangentspace = TBN * camdir_cameraspace;
}
Fragment Shader:
void main()
{
    // Light Emission Properties
    vec3 LightColor = (CalcDirectionalLight()).xyz;
    float LightPower = 20.0;
    // Cutting out texture 'black' areas of texture
    vec4 tempcolor = texture(AlbedoTexture, v_TexCoord);
    if (tempcolor.a < 0.5)
        discard;
    // Material Properties
    vec3 MaterialDiffuseColor = tempcolor.rgb;
    vec3 MaterialAmbientColor = material.ambient * MaterialDiffuseColor;
    vec3 MaterialSpecularColor = vec3(0, 0, 0);
    // Local normal, in tangent space
    vec3 TextureNormal_tangentspace = normalize(texture(NormalTexture, v_TexCoord)).rgb;
    TextureNormal_tangentspace = (TextureNormal_tangentspace * 2.0) - 1.0;
    // Distance to the light
    float distance = length(lightPos_worldspace - v_Position);
    // Normal of computed fragment, in camera space
    vec3 n = TextureNormal_tangentspace;
    // Direction of light (from the fragment)
    vec3 l = normalize(TextureNormal_tangentspace);
    // Find angle between normal and light
    float cosTheta = clamp(dot(n, l), 0, 1);
    // Eye Vector (towards the camera)
    vec3 E = normalize(camdir_tangentspace);
    // Direction in which the triangle reflects the light
    vec3 R = reflect(-l, n);
    // Find angle between eye vector and reflect vector
    float cosAlpha = clamp(dot(E, R), 0, 1);
    color =
        MaterialAmbientColor +
        MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance * distance) +
        MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha, 5) / (distance * distance);
}
I can spot one obvious mistake in your code. TBN is generated from the bitangent, tangent and normal. While the bitangent and tangent are transformed from model space to view space, the normal is not transformed. That does not make any sense. All three vectors have to be related to the same coordinate system:
vec4 bitan = V * M * vec4(bitangent, 0.0);
vec4 tang = V * M * vec4(tangent, 0.0);
vec4 norm = V * M * vec4(normal, 0.0);
mat3 TBN = transpose(mat3(tang.xyz, bitan.xyz, norm.xyz));

Vertex shader doesn't work well with cloned objects

I'm using OpenGL to create a sphere (approximation):
I'm "inflating" the triangle to create an eight of a sphere:
I'm then drawing that octant four times, and each time rotating the model transoformation by 90°, to achieve a hemisphere:
Code related to drawing calls:
for (int i = 0; i < 4; i++) {
    model_trans = glm::rotate(model_trans, glm::radians(i * 90.0f), glm::vec3(0.0f, 0.0f, 1.0f));
    glUniformMatrix4fv(uniform_model, 1, GL_FALSE, glm::value_ptr(model_trans));
    glDrawElementsBaseVertex(GL_TRIANGLES, sizeof(sphere_indices) / sizeof(sphere_indices[0]),
                             GL_UNSIGNED_INT, 0, (sizeof(grid_vertices)) / (ATTR_COUNT * sizeof(GLfloat)));
}
My goal is to color each vertex based on the angle of its projection onto the XY-plane. Since I'm normalizing the values, the resulting projection should behave like a trigonometric circle, the x value being the cosine of the angle with the positive end of the x-axis. And because the cosine is a continuous function, my sphere should have continuous color, too. However, it is not so:
Is this issue caused by cloning the object? That's the only thing I can think of, but it shouldn't matter, since the vertex shader only receives individual vertices. Speaking of which, here is my vertex shader:
#version 150 core
in vec3 position;
/* flat : the color will be sourced from the provoking vertex. */
flat out vec3 Color;
/* transformation matrices */
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main() {
    gl_Position = projection * view * model * vec4(position, 1.0);
    vec3 vector_proj = vec3(position.x, position.y, 0.0);
    normalize(vector_proj);
    /* Addition and division only for mapping range [-1, +1] to [0, 1] */
    float cosine = (vector_proj.x + 1) / 2;
    Color = vec3(cosine);
}
You want to calculate the color associated with the vertex from the world coordinates of the vertex position.
position is the model coordinate, not the world coordinate. You have to apply the model matrix to position to transform it from model space to world space before calculating vector_proj:
vec4 world_pos = model * vec4(position, 1.0);
vec3 vector_proj = vec3( world_pos.xy, 0.0 );
The parameter to normalize is not an in-out parameter. It is an input parameter; the normalized result is returned by the function:
vector_proj = normalize(vector_proj);
You can simplify the code as follows:
void main()
{
    vec4 world_pos = model * vec4(position, 1.0);
    gl_Position = projection * view * world_pos;
    vec2 vector_proj = normalize(world_pos.xy);
    /* Addition and division only for mapping range [-1, +1] to [0, 1] */
    float cosine = (vector_proj.x + 1) / 2;
    Color = vec3(cosine);
}

What values should I send to normalized view matrix so a tilemap scrolling only spans a tile?

This is the code that produces the projection, view and model matrices that get sent to the shader:
GL.glEnable(GL.GL_BLEND)
GL.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA)
arguments['texture'].bind()
arguments['shader'].bind()
arguments['shader'].uniformi('u_Texture', arguments['texture'].slot)
proj = glm.ortho(0.0, float(arguments['screenWidth']), 0.0, float(arguments['screenHeight']), -1.0, 1.0)
arguments['cameraXOffset'] = (float(arguments['cameraXOffset']) / 32) / float(arguments['screenWidth'])
arguments['cameraYOffset'] = (- float(arguments['cameraYOffset']) / 32) / float(arguments['screenHeight'])
print('{}, {}'.format(arguments['cameraXOffset'], arguments['cameraYOffset']))
view = glm.translate(glm.mat4(1.0), glm.vec3(float(arguments['cameraXOffset']), float(arguments['cameraYOffset']), 0.0))
model = glm.translate(glm.mat4(1.0), glm.vec3(0.0, 0.0, 0.0))
arguments['shader'].uniform_matrixf('u_Proj', proj)
arguments['shader'].uniform_matrixf('u_View', view)
arguments['shader'].uniform_matrixf('u_Model', model)
The projection matrix goes from 0.0 to screen width, and from 0.0 to screen height. That allows me to use the actual width in pixels of the tiles (32x32) when determining the vertex floats. Also, when the user presses the wasd keys, the camera accumulates offsets that span the width or height of a tile (always 32). Unfortunately, to reflect that offset in the view matrix, it seems that I need to normalize it, and I can't figure out how to do it so a single movement in any cardinal direction spans a single tile and nothing more. It constantly accumulates an error, so at the end of the map in any direction it shows a band of background (white in this case, for now).
This is the most important part that determines how much it will scroll with the given camera offsets:
arguments['cameraXOffset'] = (float(arguments['cameraXOffset']) / 32) / float(arguments['screenWidth'])
arguments['cameraYOffset'] = (- float(arguments['cameraYOffset']) / 32) / float(arguments['screenHeight'])
Can any of you figure out if that "normalization" for the sake of the view matrix is correct? Or is this a rounding issue? In that case, could I solve it somehow?
Vertex shader:
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec4 color;
layout(location = 2) in vec2 texCoord;
out vec4 v_Color;
out vec2 v_TexCoord;
uniform mat4 u_Proj;
uniform mat4 u_View;
uniform mat4 u_Model;
void main()
{
    gl_Position = u_Model * u_View * u_Proj * vec4(position, 1.0);
    v_TexCoord = texCoord;
    v_Color = color;
}
FINAL VERSION:
Solved. As mentioned in the comments, I had to change this line in the vertex shader:
gl_Position = u_Model * u_View * u_Proj * vec4(position, 1.0);
to:
gl_Position = u_Proj * u_View * u_Model * vec4(position, 1.0);
The final version of the code, which finally allows the user to scroll over exactly one tile:
arguments['texture'].bind()
arguments['shader'].bind()
arguments['shader'].uniformi('u_Texture', arguments['texture'].slot)
proj = glm.ortho(0.0, float(arguments['screenWidth']), 0.0, float(arguments['screenHeight']), -1.0, 1.0)
arguments['cameraXOffset'] = (float(arguments['cameraXOffset']) / 32) / arguments['screenWidth']
arguments['cameraYOffset'] = (float(-arguments['cameraYOffset']) / 32) / arguments['screenHeight']
view = glm.translate(glm.mat4(1.0), glm.vec3(float(arguments['cameraXOffset']), float(arguments['cameraYOffset']), 0.0))
model = glm.translate(glm.mat4(1.0), glm.vec3(0.0, 0.0, 0.0))
arguments['shader'].uniform_matrixf('u_Proj', proj)
arguments['shader'].uniform_matrixf('u_View', view)
arguments['shader'].uniform_matrixf('u_Model', model)
You have to flip the order of the matrices when you transform the vertex coordinate to the clip space coordinate:
gl_Position = u_Proj * u_View * u_Model * vec4(position, 1.0);
See GLSL Programming/Vector and Matrix Operations:
Furthermore, the *-operator can be used for matrix-vector products of the corresponding dimension, e.g.:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = m * v; // = vec2(1. * 10. + 3. * 20., 2. * 10. + 4. * 20.)
Note that the vector has to be multiplied by the matrix from the right.
If a vector is multiplied by a matrix from the left, the result corresponds to multiplying a column vector by the transposed matrix from the right:
vec2 v = vec2(10., 20.);
mat2 m = mat2(1., 2., 3., 4.);
vec2 w = v * m; // = vec2(1. * 10. + 2. * 20., 3. * 10. + 4. * 20.)
This also applies to the matrix multiplication itself: in a chain of concatenated matrices, the first matrix that has to be applied to the vector must be the right-most one, and the last matrix the left-most.
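A short sketch of that order using the uniform names from the shader above, equivalent to the corrected gl_Position line but written out step by step:
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 u_Proj;
uniform mat4 u_View;
uniform mat4 u_Model;
void main()
{
    // The right-most matrix (u_Model) is applied to the vertex first,
    // then u_View, then u_Proj.
    vec4 world = u_Model * vec4(position, 1.0);
    vec4 eye = u_View * world;
    gl_Position = u_Proj * eye; // same as u_Proj * u_View * u_Model * vec4(position, 1.0)
}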

Shadow Map Positioning and Resolution

I'm currently learning C++ and OpenGL and was wondering if anyone could walk me through what exactly is happening in the code below. It currently calculates the positioning and resolution of a shadow map within a 3D environment.
The code currently works; I'm just looking to get a grasp on things.
//Vertex Shader Essentials.
Position = ProjectionMatrix * ViewMatrix * WorldMatrix * vec4 (VertexPosition, 1);
Normal = (ViewMatrix * WorldMatrix * vec4 (VertexNormal, 0)).xyz;
EyeSpaceLightPosition = ViewMatrix * LightPosition;
EyeSpacePosition = ViewMatrix * WorldMatrix * vec4 (VertexPosition, 1);
STCoords = VertexST;
//What is this block of code currently doing?
ShadowCoord = ProjectionMatrix * ShadowMatrix * WorldMatrix * vec4 (VertexPosition, 1);
ShadowCoord = ShadowCoord / ShadowCoord.w;
ShadowCoord = (ShadowCoord + vec4 (1.0, 1.0, 1.0, 1.0)) * vec4 (1.0/2.0, 1.0/2.0, 1.0/2.0, 1.0);
//Alters the Shadow Map Resolution.
// Please Note - c is a slider that I control in the program execution.
float rounding = (c + 2.1) * 100.0;
ShadowCoord.x = (floor (ShadowCoord.x * rounding)) / rounding;
ShadowCoord.y = (floor (ShadowCoord.y * rounding)) / rounding;
ShadowCoord.z = (floor (ShadowCoord.z * rounding)) / rounding;
gl_Position = Position;
ShadowCoord = ProjectionMatrix * ShadowMatrix * WorldMatrix * vec4 (VertexPosition, 1);
This calculates the position of the vertex in the light's clip space. What you're recomputing is what the Position = ProjectionMatrix * ViewMatrix * WorldMatrix * vec4 (VertexPosition, 1); line must have produced back when you were rendering to the shadow buffer.
ShadowCoord = ShadowCoord / ShadowCoord.w;
This applies the perspective divide, figuring out where your shadow coordinate should fall on the light's view plane.
Think about it like this: from the light's point of view the coordinate at (1, 1, 1) should appear on the same spot as the one at (2, 2, 2). For both of those you should sample the same 2d location on the depth buffer. Dividing by w achieves that.
ShadowCoord = (ShadowCoord + vec4 (1.0, 1.0, 1.0, 1.0)) * vec4 (1.0/2.0, 1.0/2.0, 1.0/2.0, 1.0);
This also is about sampling at the right spot. The projection above has the thing in the centre of the light's view — the thing at e.g. (0, 0, 1) — end up at (0, 0). But (0, 0) is the bottom left of the light map, not the centre. This line ensures that the lightmap is taken to cover the area from (-1, -1) across to (1, 1) in the light's projection space.
... so, in total, the code is about mapping from 3d vectors that describe the vector from the light to the point in the light's space, to 2d vectors that describe where the point falls on the light's view plane — the plane that was rendered to produce the depth map.
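The [-1, 1] to [0, 1] remap is often folded into a constant bias matrix instead. A sketch of that equivalent formulation, reusing the names from the shader above (BiasMatrix is a new name introduced here; only the xyz components matter for the depth-map lookup):
// The same remap expressed as a constant matrix. The GLSL mat4 constructor
// is column-major, so the last four values are the fourth column: a
// translation by (0.5, 0.5, 0.5).
const mat4 BiasMatrix = mat4 (
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0);
ShadowCoord = BiasMatrix * ProjectionMatrix * ShadowMatrix * WorldMatrix * vec4 (VertexPosition, 1);
// Because the 0.5 translation is scaled by w, dividing by w afterwards still
// lands the xyz components in the [0, 1] range used to sample the depth map.
ShadowCoord = ShadowCoord / ShadowCoord.w;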

GLM matrix multiplication and OpenGL GLSL

I have code similar to the one in this question:
some opengl and glm explanation
I have a combined matrix that I pass as a single uniform
//C++
mat4 combinedMatrix = projection * view * model;
//GLSL doesn't work
out_position = combinedMatrix * vec4(vertex, 1.0);
It doesn't work. But if I do all the multiplication in the shader, passing in each individual matrix, and write
//GLSL works
out_position = projection * view * model * vec4(vertex, 1.0);
It works.
I can't see anything wrong with my matrices in the C++ code.
The following works too
//C++
mat4 combinedMatrix = projection * view * model;
vec4 p = combinedMatrix * v;
//pass in vertex p as a vec4
//GLSL works
out_position = vertex
I think the problem could be in the matrix multiplication you do in your code.
How is the following multiplication performed?
mat4 combinedMatrix = projection * view * model
It looks quite odd to me; matrix multiplication cannot be done in this way unless I am totally wrong.
This is the way I perform it:
for (i = 0; i < 4; i++) {
    tmp.m[i][0] = (srcA->m[i][0] * srcB->m[0][0]) +
                  (srcA->m[i][1] * srcB->m[1][0]) +
                  (srcA->m[i][2] * srcB->m[2][0]) +
                  (srcA->m[i][3] * srcB->m[3][0]);
    tmp.m[i][1] = (srcA->m[i][0] * srcB->m[0][1]) +
                  (srcA->m[i][1] * srcB->m[1][1]) +
                  (srcA->m[i][2] * srcB->m[2][1]) +
                  (srcA->m[i][3] * srcB->m[3][1]);
    tmp.m[i][2] = (srcA->m[i][0] * srcB->m[0][2]) +
                  (srcA->m[i][1] * srcB->m[1][2]) +
                  (srcA->m[i][2] * srcB->m[2][2]) +
                  (srcA->m[i][3] * srcB->m[3][2]);
    tmp.m[i][3] = (srcA->m[i][0] * srcB->m[0][3]) +
                  (srcA->m[i][1] * srcB->m[1][3]) +
                  (srcA->m[i][2] * srcB->m[2][3]) +
                  (srcA->m[i][3] * srcB->m[3][3]);
}
memcpy(result, &tmp, sizeof(PATRIA_Matrix));
I may be wrong about this, but I am quite sure you should follow this path.
The way I see your example, it looks to me like a pointer multiplication (though I don't have the specifics of your mat4 matrix class/struct).