I am fairly new to OpenGL and to 3D graphics in general, and this is my first divergence from tutorials to do something on my own. I want to create billboard textures that always face the camera. (I plan to use them as position indicators for light sources and so on.)
After some research, I found this website that offers a fairly simple solution for my problem.
Simple Billboarding
I followed their steps and everything worked as expected except one thing: my scaling parameter is ignored, and the quad I draw the texture on is not scaled.
Here is my very simple shader code.
#version 400 core
in vec3 position;
in vec2 textureCoordinates;
out vec2 pass_textureCoordinates;
uniform mat4 transformationMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
void main(void){
    mat4 modelView = viewMatrix * transformationMatrix;
    modelView[0][0] = 1;
    modelView[0][1] = 0;
    modelView[0][2] = 0;
    modelView[1][0] = 0;
    modelView[1][1] = 1;
    modelView[1][2] = 0;
    modelView[2][0] = 0;
    modelView[2][1] = 0;
    modelView[2][2] = 1;
    gl_Position = projectionMatrix * modelView * vec4(position, 1.0);
    pass_textureCoordinates = textureCoordinates;
}
And here is the code that generates my viewMatrix and transformationMatrix:
public static Matrix4f createTransformationMatrix(BillboardEntity billboardEntity) {
    Vector3f translation = billboardEntity.getTranslation();
    float rx = billboardEntity.getRotationX();
    float ry = billboardEntity.getRotationY();
    float rz = billboardEntity.getRotationZ();
    float scale = billboardEntity.getScale();
    Matrix4f matrix = new Matrix4f();
    matrix.setIdentity();
    Matrix4f.translate(translation, matrix, matrix);
    Matrix4f.rotate((float) Math.toRadians(rx), new Vector3f(1, 0, 0), matrix, matrix);
    Matrix4f.rotate((float) Math.toRadians(ry), new Vector3f(0, 1, 0), matrix, matrix);
    Matrix4f.rotate((float) Math.toRadians(rz), new Vector3f(0, 0, 1), matrix, matrix);
    Matrix4f.scale(new Vector3f(scale, scale, scale), matrix, matrix);
    return matrix;
}

public static Matrix4f createViewMatrix(Camera camera) {
    Matrix4f viewMatrix = new Matrix4f();
    viewMatrix.setIdentity();
    Matrix4f.rotate((float) Math.toRadians(camera.getPitch()), new Vector3f(1, 0, 0), viewMatrix, viewMatrix);
    Matrix4f.rotate((float) Math.toRadians(camera.getYaw()), new Vector3f(0, 1, 0), viewMatrix, viewMatrix);
    Vector3f cameraPos = camera.getPosition();
    Vector3f negativeCameraPos = new Vector3f(-cameraPos.x, -cameraPos.y, -cameraPos.z);
    Matrix4f.translate(negativeCameraPos, viewMatrix, viewMatrix);
    return viewMatrix;
}
I should probably mention that I am using Java and LWJGL in my project. Any idea why the scale is being ignored? Also, is this a good approach to making billboard textures?
I found this post on Google, so I would like to share my solution (if you don't mind).
If you set the upper-left 3x3 part of a 4x4 transform matrix to an identity matrix (ones on the diagonal, zeros elsewhere), you wipe out not only the rotation but also the scale, because both are stored there.
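To see this concretely, here is a small GLM sketch of my own (not from the original post): build a matrix containing both a rotation and a non-uniform scale, and note that the length of each column of the upper-left 3x3 recovers the corresponding scale factor:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 S = glm::scale(glm::mat4(1.0f), glm::vec3(2.0f, 3.0f, 1.0f));
glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(30.0f), glm::vec3(0.0f, 0.0f, 1.0f));
glm::mat4 M = R * S; // the upper-left 3x3 of M holds rotation and scale combined

float sx = glm::length(glm::vec3(M[0])); // == 2.0: column length recovers the x scale
float sy = glm::length(glm::vec3(M[1])); // == 3.0: column length recovers the y scale

Overwriting that 3x3 with plain ones and zeros therefore throws the scale away along with the rotation.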
To fix it, in this particular example, you can use:
void main(){
    mat4 modelView = viewMatrix * transformationMatrix;
    modelView[0][0] = length(vec3(transformationMatrix[0]));
    modelView[0][1] = 0;
    modelView[0][2] = 0;
    modelView[1][0] = 0;
    modelView[1][1] = length(vec3(transformationMatrix[1]));
    modelView[1][2] = 0;
    modelView[2][0] = 0;
    modelView[2][1] = 0;
    modelView[2][2] = 1;
    gl_Position = projectionMatrix * modelView * vec4(position, 1.0);
    pass_textureCoordinates = textureCoordinates;
}
It will retain the scaling information from the transform (or model) matrix. I tested it and it works well with any scale set on the CPU side (in .cpp).
However, be aware that length() is relatively expensive: in GLM, for example, it is implemented as sqrt(dot(v, v)), and sqrt adds up quickly if you want MANY billboards in your scene (grass, clouds, particles, etc.). So what you can do instead is pass the scale in through a uniform, which is much cheaper:
uniform bool uIsSpherical;
uniform vec2 uSpriteScale; // Using a uniform instead
void main(){
    ...
    modelView[0][0] = uSpriteScale.x;
    modelView[0][1] = 0;
    modelView[0][2] = 0;
    if (uIsSpherical)
    {
        modelView[1][0] = 0;
        modelView[1][1] = uSpriteScale.y;
        modelView[1][2] = 0;
    }
    else
    {
        modelView[1][1] *= uSpriteScale.y;
    }
    modelView[2][0] = 0;
    modelView[2][1] = 0;
    modelView[2][2] = 1;
    ...
}
The if condition on the second column (modelView[1][...]) is for a cylindrical billboard, suitable for trees in the far distance or other kinds of impostors, since you don't want them to tilt backwards when seen from above, e.g. from an airplane.
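For completeness, the CPU side that feeds those uniforms might look like this sketch (C++ with GLM; program and transformationMatrix are placeholders for your own objects):

// Extract the per-axis scale from the model matrix once per object...
float sx = glm::length(glm::vec3(transformationMatrix[0]));
float sy = glm::length(glm::vec3(transformationMatrix[1]));
// ...and upload it, so the vertex shader pays no sqrt at all.
glUniform2f(glGetUniformLocation(program, "uSpriteScale"), sx, sy);
glUniform1i(glGetUniformLocation(program, "uIsSpherical"), 1); // 0 = cylindrical

This way the sqrt runs once per object per frame instead of once per vertex.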
Now, the problem with doing the billboarding in the shader is that during shadow render passes the CPU side still uses the fixed transform matrix, so the shadow won't follow the rotated quad; it still "sees" it as stationary. The solution, I've been told, is to use a shadow billboard shader and do the same calculations there. Apparently it's not worth storing or caching the results, because GPUs are much faster at redoing the same math than at fetching it from memory, since GPU memory is optimized for throughput, not latency. I have yet to test that, though. Just keep in mind that by default, shadows won't rotate with the billboard.
Related
I have been using OpenGL together with GLM, Glad, and GLFW to create a 2D game. I want to achieve a simple 2D rotation, presumably around the Z axis since nothing is 3D. The problem is that when I create a simple model matrix from a rotation matrix multiplied with a translation and scaling matrix, the rotation looks 3D when the primitive is rendered: the square is stretched and its sides are no longer the same length. Is there a way to avoid this stretching so that the square keeps its proportions while it rotates?
Vertex Shader:
//shader vertex
#version 430 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec2 aTexCoord;
//uniform mat4 transform;
uniform mat4 model;
out vec2 TexCoord;
void main()
{
    gl_Position = model * vec4(aPos, 1.0);
    TexCoord = aTexCoord;
}
I have a function that iterates through vectors of matrices to handle large batches of objects. The matrices are first initialized to glm::mat4(1.0f).
void Trans::moveBatch(std::vector<glm::vec2>& speed, std::vector<float>& rot)
{
    for (unsigned int i = 0; i < speed.size(); i++)
    {
        batchRotator[i] = glm::rotate(batchRotator[i], glm::radians(rot[i]), glm::vec3(0.0f, 0.0f, 1.0f));
        batchMover[i] = glm::translate(batchMover[i], glm::vec3(speed[i].x, speed[i].y, 0.0f));
        batchBox[i].x += speed[i].x;
        batchBox[i].y += speed[i].y;
        batchBox[i].z += speed[i].x;
        batchBox[i].w += speed[i].y;
    }
}
I then multiply my matrices and send that as the model matrix into the shader.
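Presumably that multiplication looks something like the sketch below (shaderProgram is a placeholder); note the order: with batchMover[i] * batchRotator[i], the rotation is applied first, in local space, and the translation afterwards:

#include <glm/gtc/type_ptr.hpp>

glm::mat4 model = batchMover[i] * batchRotator[i]; // rotate locally, then move
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "model"),
                   1, GL_FALSE, glm::value_ptr(model));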
Thank you very much! I was able to use glm::ortho to create a projection matrix, and that solved my problem once I set the world coordinates to a 1920x1080 range, matching the window's aspect ratio.
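For reference, a minimal sketch of that fix, assuming a 1920x1080 window:

#include <glm/gtc/matrix_transform.hpp>

// Map world coordinates 0..1920 x 0..1080 to clip space. Because this range
// matches the window's 16:9 aspect ratio, clip space is no longer stretched
// and a rotating square keeps its proportions.
glm::mat4 projection = glm::ortho(0.0f, 1920.0f, 0.0f, 1080.0f, -1.0f, 1.0f);

The shader then multiplies it in front of the model matrix: gl_Position = projection * model * vec4(aPos, 1.0).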
I have a model I'm trying to move through the air in OpenGL with GLSL and, ultimately, have it spin as it flies. I started off just trying to do a static rotation. Here's an example of the result:
The gray track at the bottom is on the floor. The little white blocks all over the place represent an explosion chunk model and are supposed to shoot up and bounce on the floor.
Without rotation, if the model matrix is just an identity, everything works perfectly.
When introducing rotation, it looks like they move based on their rotation. That means that some of them, when coming to a stop, rest in the air instead of on the floor. (That slightly flatter white block on the gray line next to the red square is not the same as the other little ones. Placeholders!)
I'm using glm for all the math. Here are the relevant lines of code, in order of execution. This particular model is rendered instanced so each entity's position and model matrix get uploaded through the uniform buffer.
Object creation:
// should result in a model rotated around the Y axis
auto quat = glm::normalize(glm::angleAxis(RandomAngle, glm::vec3(0.0, 1.0, 0.0)));
myModelMatrix = glm::toMat4(quat);
Vertex shader:
struct Instance
{
    vec4 position;
    mat4 model;
};
layout(std140) uniform RenderInstances
{
    Instance instance[500];
} instances;
// The shared camera UBO mentioned in the text; its declaration was not in the
// original snippet, so this layout is an assumption.
layout(std140) uniform Camera
{
    mat4 projection;
    mat4 view;
} camera;
layout(location = 1) in vec4 modelPos;
layout(location = 2) in vec4 modelColor;
layout(location = 3) out vec4 fragColor;
void main()
{
    fragColor = vec4(modelColor.r, modelColor.g, modelColor.b, 1);
    vec4 pos = instances.instance[gl_InstanceID].position + modelPos;
    gl_Position = camera.projection * camera.view * instances.instance[gl_InstanceID].model * pos;
}
I don't know where I went wrong. I do know that if I make the model matrix do a simple translation, that works as expected, so at least the uniform buffer works. The camera is also a uniform buffer shared across all shaders, and that works fine. Any comments on the shader itself are also welcome. Learning!
The translation of each vertex to its final destination was happening before the rotation. That is what I hadn't realized, even though I know rotations should come before translations.
Here's the shader code:
void main()
{
    fragColor = vec4(modelColor.r, modelColor.g, modelColor.b, 1);
    vec4 pos = instances.instance[gl_InstanceID].position + modelPos;
    gl_Position = camera.projection * camera.view * instances.instance[gl_InstanceID].model * pos;
}
Due to the associative nature of matrix multiplication, this can also be:
gl_Position = (projection * (view * (model * pos)));
Even though the multiplication happens left to right, the transformations happen right to left.
This is the old code to generate the model matrix:
renderc.ModelMatrix = glm::toMat4(glm::normalize(animc.Rotation));
This results in the rotation being applied while the model is away from the origin, because the instance position has already been added into pos by the time the model matrix is applied in the gl_Position line.
This is now the code that generates the model matrix:
renderc.ModelMatrix = glm::translate(pos);
renderc.ModelMatrix *= glm::toMat4(glm::normalize(animc.Rotation));
renderc.ModelMatrix *= glm::translate(-pos);
Translate to the origin (-pos), rotate, then translate back (+pos).
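A quick sanity check of that ordering (my own sketch): with the translate-rotate-translate sandwich, the pivot point itself must map back to itself.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

glm::vec3 pos(5.0f, 0.0f, 0.0f);
glm::quat q = glm::angleAxis(glm::radians(45.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 model = glm::translate(glm::mat4(1.0f), pos)
                * glm::mat4_cast(q)
                * glm::translate(glm::mat4(1.0f), -pos);

// Reading right to left: move the pivot to the origin, rotate, move back.
glm::vec4 check = model * glm::vec4(pos, 1.0f); // == (5, 0, 0, 1): pivot fixed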
I tried to implement a motion-blur post-processing effect as described in GPU Gems 3, Chapter 27, but I am encountering issues: the blur jitters when I move the camera and does not work as expected.
This is my fragment shader:
varying vec3 pos;
varying vec3 normal;
varying vec2 tex;
varying vec3 tangent;
uniform mat4 matrix;
uniform mat4 VPmatrix;
uniform mat4 matrixPrev;
uniform mat4 VPmatrixPrev;
uniform sampler2D diffuseTexture;
uniform sampler2D zTexture;
void main() {
    vec2 texCoord = tex;
    float zOverW = texture2D(zTexture, texCoord).r;
    vec4 H = vec4(texCoord.x * 2.0 - 1.0, (1.0 - texCoord.y) * 2.0 - 1.0, zOverW, 1.0);
    mat4 inv1 = inverse(matrix);
    mat4 inv2 = inverse(VPmatrix);
    vec4 D = H * (inv2 * inv1);
    vec4 worldPos = D / D.w;
    mat4 prev = matrixPrev * VPmatrixPrev;
    vec4 previousPos = worldPos * prev;
    previousPos /= previousPos.w;
    vec2 velocity = vec2((H.x - previousPos.x) / 2.0, (H.y - previousPos.y) / 2.0);
    vec3 color = vec3(texture2D(diffuseTexture, texCoord));
    for(int i = 0; i < 16; i++) {
        texCoord += velocity;
        vec3 color2 = vec3(texture2D(diffuseTexture, texCoord));
        color += color2;
    }
    color /= 16.0;
    gl_FragColor = vec4(color, 1.0);
}
The uniforms matrix and VPmatrix are the ModelView and Projection matrices, retrieved as follows:
float matrix[16];
float VPmatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrix);
glGetFloatv(GL_PROJECTION_MATRIX, VPmatrix);
The uniforms matrixPrev and VPmatrixPrev are the previous frame's ModelView and Projection matrices, copied after rendering as follows (in the code below, matrixPrev and VPmatrixPrev are global variables):
for(int i = 0; i < 16; i++) {
    matrixPrev[i] = matrix[i];
    VPmatrixPrev[i] = VPmatrix[i];
}
All four matrices are passed to the shader as follows:
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrix"), 16, GL_FALSE, matrix);
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "VPmatrix"), 16, GL_FALSE, VPmatrix);
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrixPrev"), 16, GL_FALSE, matrixPrev);
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "VPmatrixPrev"), 16, GL_FALSE, VPmatrixPrev);
In the shader, the uniform zTexture is a texture containing the depth values of the framebuffer. (Not sure if they are divided by w.)
I hoped the shader would work, but what I get instead is that when I rotate the camera, the blur jitters rapidly even with subtle rotations. I tried rendering zTexture and got a grayscale image, so it seems alright. I also tried outputting H.xyz and previousPos.xyz as the fragment color: H.xyz produces a colored screen, and previousPos.xyz produces the same colored screen, except that when the camera rotates the colors seem to invert. So I suspect something is wrong with extracting the world position from depth.
Am I missing something here? Any help would be greatly appreciated.
Forget my previous answer; it is a matrix error:
the matrix multiplications are in the wrong order, or else you must send the transposed matrices (which explains why only the rotation was taken into account while the translation/scale components were messed up):
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrix"), 16, GL_FALSE, matrix);
becomes
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrix"), 1, GL_TRUE, matrix);
(note that 16 was not the matrix size but the matrix count, so it should be just 1 here)
Another note: you should compute the inverse matrices and the projection*view product in your main application, not in the pixel shader: there the work is done once per pixel, when it only needs to be done once per frame!
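In GLM terms, the per-frame CPU work would look something like this sketch (it assumes you keep view and projection as glm::mat4 instead of fetching them with glGetFloatv; invViewProj and prevViewProj are my own names):

// Once per frame, not per pixel: combine and invert on the CPU.
glm::mat4 viewProj    = projection * view;
glm::mat4 invViewProj = glm::inverse(viewProj);

// GLM matrices are column-major, so transpose stays GL_FALSE.
glUniformMatrix4fv(glGetUniformLocation(postShader, "invViewProj"), 1, GL_FALSE, glm::value_ptr(invViewProj));
glUniformMatrix4fv(glGetUniformLocation(postShader, "prevViewProj"), 1, GL_FALSE, glm::value_ptr(prevViewProj));

prevViewProj = viewProj; // saved for next frame's reprojection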
Post-post note: Google "RuinIsland_GLSL_Demo.zip": it contains many good GLSL samples and helped me solve this issue.
I'm sorry I don't have an answer but just a clue:
I have EXACTLY the same problem as yours.
I guess I used the same material Google points to and, like you, see the blur misbehave only when the camera rotates.
However, I have a clue (the same as yours, in fact): I think the GLSL shaders floating around the net assume the depth texture contains z/w, but, like me, you used a genuine depth texture filled by the fixed pipeline.
So you only have z and are missing w at the very first step.
Since texture2D(zTexture, texCoord).r contains only z, we are missing the computation that yields zOverW.
In the end we are stuck halfway between window space and clip space.
I found this: https://www.opengl.org/wiki/Compute_eye_space_from_window_space#From_NDC_to_clip
but my perspective projection matrix does not meet the requirements; still, perhaps it will help you.
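As a cross-check, GLM ships the complete window-space-to-world unprojection, which can serve as a reference for what the shader has to do; a sketch (px, py, depth, the matrices and the screen size are assumptions):

#include <glm/gtc/matrix_transform.hpp>

// Window-space position: pixel coordinates plus the [0,1] depth-texture value.
glm::vec3 win(px, py, depth);
glm::vec4 viewport(0.0f, 0.0f, screenWidth, screenHeight);
// unProject undoes the viewport transform, the perspective divide (the missing
// w) and the projection and view matrices, yielding a world-space position.
glm::vec3 worldPos = glm::unProject(win, view, projection, viewport);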
I'm trying to draw an element that always faces the camera. I've read some articles about billboarding in shaders; the problem is that I need to compute the rotation outside the shaders, and for different kinds of objects (circles, squares, meshes, …).
So I'm trying to compute only the model's rotation (not a matrix; something similar to Transform.LookAt in the Unity engine), but I don't know how. Here is what I've got:
// Camera data
mat4 viewMatrix = camera->getMatrix();
vec3 up = {viewMatrix[0][1], viewMatrix[1][1], viewMatrix[2][1]};
// Compute useful data.
vec3 look = normalize(camera->getPosition() - model->getPosition());
vec3 right = cross(up, look);
vec3 up2 = cross(look, right);
// Default: Compute matrix.
mat4 transform;
transform[0] = vec4(right, 0);
transform[1] = vec4(up2, 0);
transform[2] = vec4(look, 0);
transform[3] = vec4(position, 1);
// What I want : Compute rotation from look, right, up data and remove "Default: Compute matrix" part.
// ???
My framework computes the model's matrix from its attributes (position, scale, rotation), so I can't override the matrix; I just want to compute the rotation.
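If the framework's rotation attribute is a quaternion or Euler angles, one option (a sketch assuming GLM, which the code above resembles) is to pack the basis built above into a 3x3 matrix and convert it:

#include <glm/gtc/quaternion.hpp>

// Normalize in case the cross products are not unit length, then build the
// rotation matrix whose columns are the basis from the code above.
glm::mat3 basis(glm::normalize(right), glm::normalize(up2), glm::normalize(look));
glm::quat rotation = glm::quat_cast(basis); // the rotation alone, no matrix override
glm::vec3 euler = glm::eulerAngles(rotation); // or pitch/yaw/roll in radians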
I've been learning OpenGL 3+ from various online resources and recently got confused about transformation (model) matrices. As far as I know, the proper order of multiplication is translationMatrix * rotationMatrix * scaleMatrix. If I understand correctly, the multiplication applies backwards, so the scale is applied first, then the rotation, and lastly the translation.
I have a Transform class which stores the position, scale and origin as 2d vectors and rotation as a float. The method for calculating transformation matrix looks like this:
glm::mat4 Transform::getTransformationMatrix() const
{
    glm::mat4 result = glm::mat4(1.0f);
    result = glm::translate(result, glm::vec3(position, 0.0f));
    result = glm::translate(result, glm::vec3(origin, 0.0f));
    result = glm::rotate(result, rotation, glm::vec3(0, 0, 1));
    result = glm::translate(result, glm::vec3(-origin, 0.0f));
    result = glm::scale(result, glm::vec3(scale, 0.0f));
    return result;
}
Here's the vertex shader code:
#version 330 core
layout(location = 0) in vec2 position;
uniform mat4 modelMatrix;
uniform mat4 projectionMatrix;
void main()
{
    gl_Position = projectionMatrix * modelMatrix * vec4(position, 0.0, 1.0);
}
As you can see, I first translate and then rotate and scale, which is the opposite of what I have learnt. At first I had it the "proper" way (scale, rotate, translate), but then everything rotated around the initial position with a huge radius instead of around the translated position, which is not what I want (I am making a 2D game with sprites). I don't understand why it works this way. Can someone explain? Do I have to keep separate methods for the transform matrix calculations? And does it work the same in 3D space?