Here is my question; I will list the details to make it clear:
I am writing a program drawing squares in 2D using instancing.
My camera direction is (0,0,-1), camera up is (0,1,0), camera position is (0,0,3), and the camera position changes when I press some keys.
What I want is that when I zoom in (the camera moves closer to the square), the square's size (on the screen) does not change. So in my shader:
#version 330 core
layout(location = 0) in vec2 squareVertices;
layout(location = 1) in vec4 xysc;
out vec4 particlecolor;
uniform mat4 VP;
void main()
{
float particleSize = xysc.z;
float color = xysc.w;
gl_Position = VP* vec4(xysc.x, xysc.y, 2.0, 1.0) + vec4(squareVertices.x*particleSize,squareVertices.y*particleSize,0,0);
particlecolor = vec4(1.0f * color , 1.0f * (1-color), 0.0f, 0.5f);
}
Please notice that, in order to keep the squares' size unchanged, what I do is:
1. transform the center of the square first
VP * vec4(xysc.x, xysc.y, 2.0, 1.0)
2. then compute one of the four corners (x,y,z,1) of the square
+ vec4(squareVertices.x*particleSize,squareVertices.y*particleSize,0,0);
instead of:
gl_Position = VP* (vec4(xysc.x, xysc.y, 2.0, 1.0) + vec4(squareVertices.x*particleSize,squareVertices.y*particleSize,0,0));
However, when I move the camera closer to the z=0 plane, the squares' size grows unexpectedly. Where is the problem? I can provide demo code if necessary.
It sounds like you use a perspective projection, and the formula you use in steps 1 and 2 won't work because VP * vec4 will in the general case result in a vec4(x,y,z,w) with w != 1. Adding a vec4(a,b,0,0) to that will just get you vec3((x+a)/w, (y+b)/w, z) after the perspective divide, while you seem to want vec3(x/w + a, y/w + b, z). So the correct approach is to scale a and b by w and add them before the divide: vec4(x + a*w, y + b*w, z, w).
Note that when you move your camera closer to the geometry, the effective w value approaches zero, so (x+a)/w becomes greater than x/w + a, resulting in your geometry getting bigger.
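For reference, a minimal sketch of that fix applied to the vertex shader above (same inputs and uniforms, only main() changes):
void main()
{
float particleSize = xysc.z;
float color = xysc.w;
// Transform only the square's centre; under a perspective projection this
// produces a clip-space position whose w is generally not 1.
vec4 centre = VP * vec4(xysc.x, xysc.y, 2.0, 1.0);
// Scale the corner offset by centre.w so the perspective divide cancels it
// and the on-screen size stays constant while zooming.
gl_Position = centre + vec4(squareVertices * particleSize * centre.w, 0.0, 0.0);
particlecolor = vec4(color, 1.0 - color, 0.0, 0.5);
}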
I am using this code to generate sphere vertices and texture coordinates, but as you can see in the image, when I rotate the sphere I can see a dark band.
for (int i = 0; i <= stacks; ++i)
{
float s = (float)i / (float) stacks;
float theta = s * 2 * glm::pi<float>();
for (int j = 0; j <= slices; ++j)
{
float sl = (float)j / (float) slices;
float phi = sl * (glm::pi<float>());
const float x = cos(theta) * sin(phi);
const float y = sin(theta) * sin(phi);
const float z = cos(phi);
sphere_vertices.push_back(radius * glm::vec3(x, y, z));
sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
}
}
// get the indices
for (int i = 0; i < stacks * slices + slices; ++i)
{
sphere_indices.push_back(i);
sphere_indices.push_back(i + slices + 1);
sphere_indices.push_back(i + slices);
sphere_indices.push_back(i + slices + 1);
sphere_indices.push_back(i);
sphere_indices.push_back(i + 1);
}
I can't figure out a way to make it right, whatever texture coordinates I use.
Hmm... If I use another image, then the mapping is different (and worse!)
vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aTexCoord;
out vec4 vertexColor;
out vec2 TexCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
vertexColor = vec4(0.5, 0.2, 0.5, 1.0);
TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}
fragment shader:
#version 330 core
out vec4 FragColor;
in vec4 vertexColor;
in vec2 TexCoord;
uniform sampler2D sphere_texture;
void main()
{
FragColor = texture(sphere_texture, TexCoord);
}
I am not using any lighting conditions.
If I use FragColor = vec4(TexCoord.x, TexCoord.y, 0.0f, 1.0f); in fragment shader (for debugging purposes) , I am receiving a nice sphere.
I am using this as texture:
That image of the tennis ball that you linked reveals the problem. I'm glad you ultimately provided it.
Your image is a four-channel PNG with transparency (an alpha channel). There are transparent pixels all around the outside of the yellow part of the ball that have (R,G,B,A) = (0, 0, 0, 0), so if you're ignoring the A channel then (R, G, B) will be (0, 0, 0) = black.
Here are just the Red, Green, and Blue (RGB) channels:
And here is just the Alpha (A) channel.
The important thing to notice is that the circle of the ball does not fill the square. There is a significant margin of 53 pixels of black from the extent of the ball to the edge of the texture. We can calculate the radius of the ball from this. Half the width is 1000 pixels, of which 53 pixels are not used. The ball's radius is 1000-53, which is 947 pixels. Or about 94.7% of the distance from the center to the edge of the texture. The remaining 5.3% of the distance is black.
Side note: I also notice that your ball doesn't quite reach 100% opacity. The yellow part of the ball has an alpha channel value of 254 (of 255), meaning it is 99.6% opaque. The white lines and the shiny hot spot do actually reach 100% opacity, giving it sort of a Death Star look. ;)
To fix your problem, there is an intuitive approach (which may not quite work), a suggested approach that will, and a different mapping that avoids the issue altogether. Here are a few things you can do:
Intuitive Solution:
This won't quite get you 100% there.
1) Resize the ball to fill the texture. Use image editing software to enlarge the ball to fill the texture, or to trim off the black pixels. For one, this will make more efficient use of pixels, but it will also ensure that there are useful pixels being sampled at the boundary. You'll probably want to expand the image to be slightly larger than 100%; I'll explain why below.
2) Remap your texture coordinates to only extend to 94.7% of the radius of the ball. (Similar to approach 1, but doesn't require image editing). This just uses coordinates that actually correspond to the image you provided. Your x and y coordinates need to be scaled about the center of the image and reduced to about 94.7%.
x2 = 0.5 + (x - 0.5) * 0.947;
y2 = 0.5 + (y - 0.5) * 0.947;
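If you prefer not to touch the generation code, a minimal sketch of the same remap done in your existing fragment shader (0.947 is the measured ratio from above):
void main()
{
// Shrink the coordinates about the texture centre so they never reach
// the black margin around the ball.
vec2 remapped = vec2(0.5) + (TexCoord - vec2(0.5)) * 0.947;
FragColor = texture(sphere_texture, remapped);
}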
Suggested Solution:
This will ensure no more black.
3) Fill the "black" portion of your ball texture with a less objectionable colour - probably the colour that is at the circumference of the tennis ball. This ensures that any texels that are sampled at exactly the edge of the ball won't be linearly combined with black to produce an unsightly dark-but-not-quite-black band, which is almost the problem you have right now anyway. You can do this in two ways. A) Image editing software. Remove the transparency from your image and matte it against a dark yellow colour. B) Use the shader to detect pixels that are outside the image and replace them with a border colour (this is clever, but probably more trouble than it's worth.)
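For option B, a rough sketch in your existing fragment shader; ballEdgeColour is a hypothetical uniform that you would set to the colour at the ball's rim:
uniform vec4 ballEdgeColour; // hypothetical: the dark yellow at the ball's circumference
void main()
{
// Texels farther than 94.7% of the half-width from the centre lie in the
// transparent black margin, so substitute the border colour there.
if (length(TexCoord - vec2(0.5)) > 0.5 * 0.947)
FragColor = ballEdgeColour;
else
FragColor = texture(sphere_texture, TexCoord);
}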
Different Texture Coordinates
The last thing you can do is avoid this degenerate texture mapping coordinate problem altogether. At the equator, you're not really sure which pixels to sample: the black (transparent) pixels or the coloured pixels of the ball. The discrete nature of square pixels is fighting against the polar nature of your texture map. You'll never find the exact colour you need near the edge to produce a continuous, seamless map. Instead, you can use a different coordinate system. I hope you're not attached to how that ball looks, because let me introduce you to the equirectangular projection. It's the same projection that you can naively use to map the globe of the Earth to the typical rectangular map of the world you're likely familiar with, where the north and south poles get all the distortion but the equatorial regions look pretty good.
Here's your image mapped to equirectangular coordinates:
Notice that black bar at the bottom...we're onto something! That black bar is actually exactly what appears around the equator of your ball with your current texture mapping coordinate system. But with this coordinate system, you can see easily that if we just remapped the ball to fill the square we'd completely eliminate any transparent pixels at all.
It may be inconvenient to work in this coordinate system, but you can transform your image in Photoshop using Filter > Distort > Polar Coordinates... > Polar to Rectangular.
Sigismondo's answer already suggests how to adjust your texture mapping coordinates to do this.
And finally, here's a texture that is both enlarged to fill the texture space, and remapped to equirectangular coordinates. No black bars, minimal distortion. But you'll have to use Sigismondo's texture mapping coordinates. Again, this may not be for you, especially if you're attached to the idea of the direct projection for your texture (i.e.: if you don't want to manipulate your tennis ball image and you want to use that projection.) But if you're willing to remap your data, you can rest easy that all the black pixels will be gone!
Good luck! Feel free to ask for clarifications.
I cannot test it since the code is incomplete, but from a rough look I have spotted this problem:
sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
The texture coordinates should not be evaluated from x and y, which are:
const float x = cos(theta) * sin(phi);
const float y = sin(theta) * sin(phi);
but from the angles theta/phi, or equivalently from the stacks/slices parameters. This could work better (untested):
sphere_texcoords.push_back(glm::vec2(s,sl));
where s and sl are already defined as:
float s = (float)i / (float) stacks;
float sl = (float)j / (float) slices;
Furthermore, in your code you are using the first and the last "slices" of the sphere just like the rest... Shouldn't they be treated differently? This seems quite odd to me, but I don't know whether your implementation is just a simpler one that works fine.
Compare with this explanation, for example: http://www.songho.ca/opengl/gl_sphere.html
I've been working on a deferred renderer to do lighting with, and it works quite well, albeit using a position buffer in my G-buffer. Lighting is done in world space.
I have tried to implement an algorithm to recreate the world space positions from the depth buffer and the texture coordinates, but with no luck.
My vertex shader is nothing particularly special, but this is the part of my fragment shader in which I (attempt to) calculate the world space position:
// Inverse projection matrix
uniform mat4 projMatrixInv;
// Inverse view matrix
uniform mat4 viewMatrixInv;
// texture position from vertex shader
in vec2 TexCoord;
... other uniforms ...
void main() {
// Recalculate the fragment position from the depth buffer
float Depth = texture(gDepth, TexCoord).x;
vec3 FragWorldPos = WorldPosFromDepth(Depth);
... fun lighting code ...
}
// Linearizes a Z buffer value
float CalcLinearZ(float depth) {
const float zFar = 100.0;
const float zNear = 0.1;
// bias it from [0, 1] to [-1, 1]
float linear = zNear / (zFar - depth * (zFar - zNear)) * zFar;
return (linear * 2.0) - 1.0;
}
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
float ViewZ = CalcLinearZ(depth);
// Get clip space
vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, ViewZ, 1);
// Clip space -> View space
vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;
// Perspective division
viewSpacePosition /= viewSpacePosition.w;
// View space -> World space
vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;
return worldSpacePosition.xyz;
}
I still have my position buffer, and I sample it to compare it against the calculated position later, so everything should be black:
vec3 actualPosition = texture(gPosition, TexCoord).rgb;
vec3 difference = abs(FragWorldPos - actualPosition);
FragColour = vec4(difference, 0.0);
However, what I get is nowhere near the expected result, and of course, lighting doesn't work:
(Try to ignore the blur around the boxes, I was messing around with something else at the time.)
What could cause these issues, and how could I get the position reconstruction from depth working successfully? Thanks.
You are on the right track, but you have not applied the transformations in the correct order.
A quick recap of what you need to accomplish here might help:
1. Given texture coordinates [0,1] and depth [0,1], calculate the clip-space position
   - Do not linearize the depth buffer
   - Output: w = 1.0 and x,y,z = [-w,w]
2. Transform from clip-space to view-space (reverse projection)
   - Use the inverse projection matrix
   - Perform the perspective divide
3. Transform from view-space to world-space (reverse viewing transform)
   - Use the inverse view matrix
The following changes should accomplish that:
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
float z = depth * 2.0 - 1.0;
vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, z, 1.0);
vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;
// Perspective division
viewSpacePosition /= viewSpacePosition.w;
vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;
return worldSpacePosition.xyz;
}
I would also reconsider the name ViewZ used for the result of CalcLinearZ (...), though; it is very misleading, since that function returns a linearized depth value, not a view-space Z.
If you want to render impostor geometry (say, a sphere), then the standard practice is to draw it using two triangles (e.g., by passing one vertex and making a triangle strip with a geometry shader).
This is nice because it allows the extent of the billboard to be set fairly simply: you compute the actual world space positions directly.
Geometry shaders can alternatively output point primitives, and I don't see a reason why they shouldn't. The only issue is finding some way to scale gl_PointSize so that you get that effect.
The only precedent I could find were this question (whose answer I am unsure is correct) and this question (which is unanswered).
It's worth noting that it's fairly simple to scale the point correctly with distance (by doing gl_PointSize = constant/length(gl_Position)), but this isn't controllable; you can't say, for example: I want this point to look like it is two world units across.
So: anyone know how to do this?
A straightforward idea is to transform a point at the top and a point at the bottom of the particle into screen space and find the distance between them. This cancels very nicely, and it's pretty simple to work with just the y coordinate.
The billboard is screen aligned, and view matrices generally don't scale, so the particle size in world space is the same as eye space. That just leaves the projection to get to NDC, the divide by w and scaling by the viewport size.
A typical projection matrix, P, might look something like this...
[ +1.2990 +0.0000 +0.0000 +0.0000 ]
[ +0.0000 +1.7321 +0.0000 +0.0000 ]
[ +0.0000 +0.0000 -1.0002 -0.0020 ]
[ +0.0000 +0.0000 -1.0000 +0.0000 ]
Starting with y_eye, a y coordinate in eye space, the image space coordinate y_image is obtained in pixels:
y_image = (vpHeight / 2) * (P[1][1] * y_eye / w_clip + 1)
Plugging in the radius above/below the billboard and subtracting, the constant terms cancel:
pixelSize = (vpHeight / 2) * P[1][1] * ((y_eye + radius) - (y_eye - radius)) / w_clip
which simplifies to: pixelSize = vpHeight * P[1][1] * radius / w_clip
For a perspective projection, P[1][1] = 1 / tan(fov_y / 2). w_clip is gl_Position.w, which is also -z_eye (from the -1 in the perspective matrix). To guarantee your point covers every pixel you want, this may need an additional small constant.
Side note: A sphere on a billboard will look OK in the middle of the screen. If you have a large field of view perspective projection, a true sphere should warp as it approaches the edges of the screen. You could implicitly raycast the virtual sphere for each pixel in the billboard to get a correct result, but the billboard boundary will need to be adjusted accordingly. Quick google results: 1 2 3 4
[EDIT]
Well, since I bothered to test this I'll throw my shaders here too...
Vertex:
#version 150
in vec4 osVert;
uniform mat4 projectionMat;
uniform mat4 modelviewMat;
uniform vec2 viewportSize;
flat out vec2 centre;
flat out float radiusPixels;
const float radius = 1.0;
void main()
{
gl_Position = projectionMat * modelviewMat * osVert;
centre = (0.5 * gl_Position.xy/gl_Position.w + 0.5) * viewportSize;
gl_PointSize = viewportSize.y * projectionMat[1][1] * radius / gl_Position.w;
radiusPixels = gl_PointSize / 2.0;
}
Fragment:
#version 150
flat in vec2 centre;
flat in float radiusPixels;
out vec4 fragColour;
void main()
{
vec2 coord = (gl_FragCoord.xy - centre) / radiusPixels;
float l = length(coord);
if (l > 1.0)
discard;
vec3 pos = vec3(coord, sqrt(1.0-l*l));
fragColour = vec4(vec3(pos.z), 1.0);
}
(Note the visible gap at the bottom right is incorrect as described above)
These days I am reading the Learning Modern 3D Graphics Programming book by Jason L. McKesson. Basically it is a book about OpenGL 3.3, and I am now at chapter 4, which is about orthographic and perspective views.
At the end of the chapter, under the "Further Study" section, he suggests trying a few things, like implementing a variable eye point (he used (0, 0, 0) in camera space at the beginning for simplicity) and an arbitrary perspective plane location.
He says I am going to need to offset the X, Y camera-space positions of the vertices by E_x and E_y respectively.
I cannot understand this passage: how am I supposed to use a variable eye point by modifying only the X and Y offsets?
Edit: could it be something like this?
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
smooth out vec4 theColor;
uniform vec2 offset;
uniform vec2 E;
uniform float zNear;
uniform float zFar;
uniform float frustumScale;
void main()
{
vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
vec4 clipPos;
clipPos.xy = cameraPos.xy * frustumScale + vec4(E.x, E.y, 0.0, 0.0);
clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
clipPos.z += 2 * zNear * zFar / (zNear - zFar);
clipPos.w = cameraPos.z / (-E.z);
gl_Position = clipPos;
theColor = color;
}
Edit2: thanks Boris, your picture helped a lot :) especially because:
it makes clear what you previously stated about thinking of E as the projection plane position and not the eye point position
it underlines that the size of the projection plane must always be [-1, 1], a passage that I read in the book without fully understanding what it meant
Just out of curiosity, why do you mention multiplying after subtracting? Is it for the same reason the book gives, that is, the aspect ratio? Everything logically pushes me to do exactly the opposite, that is, first the translation (-2) and then the multiplication (/5)... Or maybe with the term "scaling" the book refers to the reshape function?
Here, we are interested in computing a transformation from Camera Coordinates (CC) to Normalized Device Coordinates (NDC).
Think of E as the position of the projection plane in Camera Coordinates, instead of the position of the eye point according to the projection plane. In Camera Coordinates, the eye point is by definition located at the origin, at least in my interpretation of what "Camera Coordinate" means: a coordinate frame centered from where you look at the scene. (You can mathematically define a perspective transformation centered from anywhere, but this means your input space is not the camera space, imho. This is what the World->Camera transformation is for, as you will see in chapter 6)
Summary:
you are in camera space, hence your eye point is located at (0,0,0)
you are looking toward the negative Z-axis
your projection plane is parallel to the xOy plane, with a size of [-1,1] in both directions
This is the picture here (each tick is 0.5 unit):
In this picture, you can see that the projection plane (bottom side of the gray trapezoid) is centered in (0,0,-1), with a size of [-1,1] in both X and Y direction.
Now, what is asked is, instead of choosing (0,0,-1) for the center of this plane, to choose an arbitrary (E.x, E.y, E.z) position (assume E.z is negative). But the plane still has to be parallel to the xOy plane and keep the same size.
You can see that the E.xy components play a very different role than E.z, which is why E.xy will be involved in a subtraction, while E.z will be involved in a division. This is easy to see with an example:
assume zNear = -E.z (not necessarily the case, but you can in fact always change frustumScale to have an equivalent perspective satisfying this)
consider the point E (which is the center of the projection plane).
What is its coordinate in NDC space? It is (0,0,-1) by definition. To get there you have subtracted E.xy, but divided by -E.z.
Your code captures this idea, but some things are still wrong:
First, you defined uniform vec2 E; instead of uniform vec3 E; (just a typo, not a big deal)
The line clipPos.xy = ...; is vec2 arithmetic. Hence, you can only multiply by scalar values (i.e., a float), or add/subtract vec2 values. So vec4(E.x, E.y, 0.0, 0.0) is of the incorrect type; you should use E.xy instead, which has the correct type vec2.
You should in fact subtract E.xy instead of adding it. This is easy to see in my example above.
Finally, things are more subtle ;-)
I made a picture to illustrate the modifications:
Each tick is 1 unit in this picture. Top left is your Camera Coordinate Space, with zNear, zFar, and two possible projection planes displayed. In blue is the one used in the explanation and shader here, and the red one is the one you now want to use. The colored areas correspond to what should be visible on your final screen, i.e. what should be in the cube [-1,1]^3 in NDC Space. Hence, if you use the blue projection plane, you want to obtain the space in the top right, and if you use the red projection plane, you want to obtain the space at the bottom. To do this, you can observe that you need to perform the scaling and translation in NDC space, i.e. after the perspective division! (I think what is written in the book is either incorrect, or interprets the question differently.)
Hence, in Euclidean coordinates (i.e., not homogeneous coordinates, so without the W coordinate), you want to do:
clipPosEuclideanRed.xy = clipPosEuclideanBlue.xy * (-E.z) - E.xy;
clipPosEuclideanRed.z = clipPosEuclideanBlue.z;
However, because you are in homogeneous coordinates, these values are in fact:
clipPosEuclidean.xyz = clipPos.xyz / clipPos.w; // with clipPos.w = -cameraPos.z;
Hence, you have to compensate by writing:
clipPosRed.xy = clipPosBlue.xy * (-E.z) - E.xy * (-cameraPos.z);
clipPosRed.z = clipPosBlue.z;
So my solution to this problem would be to add only one line:
void main()
{
vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
vec4 clipPos;
clipPos.xy = cameraPos.xy * frustumScale;
// only add this line
clipPos.xy = - clipPos.xy * E.z + E.xy * cameraPos.z;
clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
clipPos.z += 2 * zNear * zFar / (zNear - zFar);
clipPos.w = -cameraPos.z;
gl_Position = clipPos;
theColor = color;
}
Following this tutorial here
I have managed to create a cylindrical billboard (it utilizes a geometry shader which takes points and produces quads). The problem is that when I move the camera so that it's higher than the billboard (using gluLookAt), the billboard does not rotate to truly face the camera (as is expected of a cylindrical billboard).
How do I make it into spherical?
If anyone is interested, here is the slightly modified geometry shader code:
#version 330
//based on a great tutorial at http://ogldev.atspace.co.uk/www/tutorial27/tutorial27.html
layout (points) in;
layout (triangle_strip) out;
layout (max_vertices = 4) out;
uniform mat4 mvp;
uniform vec3 cameraPos;
out vec2 texCoord;
void main(){
vec3 pos = gl_in[0].gl_Position.xyz;
pos /= gl_in[0].gl_Position.w; //normalized device coordinates
vec3 toCamera = normalize(cameraPos - pos);
vec3 up = vec3(0,1,0);
vec3 right = normalize(cross(up, toCamera)); //right-handed coordinate system
//vec3 right = cross(toCamera, up); //left-handed coordinate system
pos -= (right*0.5);
gl_Position = mvp*vec4(pos,1.0);
texCoord = vec2(0,0);
EmitVertex();
pos.y += 1.0;
gl_Position = mvp*vec4(pos,1.0);
texCoord = vec2(0,1);
EmitVertex();
pos.y -= 1.0;
pos += right;
gl_Position = mvp*vec4(pos,1.0);
texCoord = vec2(1,0);
EmitVertex();
pos.y += 1.0;
gl_Position = mvp*vec4(pos,1.0);
texCoord = vec2(1,1);
EmitVertex();
}
EDIT:
As I said before, I have tried the approach of setting the 3×3 submatrix to identity. I might have explained the behaviour wrong, but this GIF should show it better:
In the picture above, the camera is rotated with the billboard (red) using the identity submatrix approach.
The billboard, however, should not move through the surface (white); it should maintain its position correctly and always stay on one side of the surface, which does not happen.
An alternative way to create billboards is to throw the geometry shader away and do it manually like this:
Vector3 DiffCamera = Billboard.position - Camera.position;
Vector3 UpVector = new Vector3(0.0f, 1.0f, 0.0f);
Vector3 CrossA = DiffCamera.cross(UpVector).normalize(); // (Step A)
Vector3 CrossB = DiffCamera.cross(CrossA).normalize(); // (Step B)
// now you can use CrossA and CrossB and the billboard position to calculate the positions of the edges of the billboard-rectangle
// like this
Vector3 Pos1 = Billboard.position + CrossA + CrossB;
Vector3 Pos2 = Billboard.position - CrossA + CrossB;
Vector3 Pos3 = Billboard.position + CrossA - CrossB;
Vector3 Pos4 = Billboard.position - CrossA - CrossB;
In Step A we calculate the cross product because we want the horizontally aligned direction of the billboard.
In Step B we do the same for the vertical direction.
Do this for every billboard in the scene.
Or better, as a geometry shader (just a sketch):
vec3 pos = gl_in[0].gl_Position.xyz;
pos /= gl_in[0].gl_Position.w; //normalized device coordinates
vec3 toCamera = normalize(cameraPos - pos);
vec3 up = vec3(0,1,0);
vec3 CrossA = normalize(cross(up, toCamera));
vec3 CrossB = normalize(cross(CrossA, toCamera));
// set coordinates of the 4 points
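Filling in that last comment, a possible way to emit the four corners, following the same pattern (mvp, texCoord) as the geometry shader in the question; scale CrossA/CrossB beforehand if you want a different billboard size:
gl_Position = mvp * vec4(pos - CrossA - CrossB, 1.0);
texCoord = vec2(0, 0);
EmitVertex();
gl_Position = mvp * vec4(pos - CrossA + CrossB, 1.0);
texCoord = vec2(0, 1);
EmitVertex();
gl_Position = mvp * vec4(pos + CrossA - CrossB, 1.0);
texCoord = vec2(1, 0);
EmitVertex();
gl_Position = mvp * vec4(pos + CrossA + CrossB, 1.0);
texCoord = vec2(1, 1);
EmitVertex();
EndPrimitive();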
Just reset the top left 3×3 subpart of the modelview matrix to identity, leaving the 4th column and row as they are, i.e.:
1 0 0 …
0 1 0 …
0 0 1 …
… … … …
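For example, a sketch of doing this in a vertex shader, assuming the modelview and projection matrices are available as uniforms named modelview and projection:
#version 330
layout (location = 0) in vec3 aPos;
uniform mat4 modelview; // assumed uniform names
uniform mat4 projection;
void main()
{
mat4 billboardMV = modelview;
// Overwrite the rotation/scale part with identity so the quad stays
// screen-aligned; the 4th column (translation) and 4th row are untouched.
billboardMV[0] = vec4(1.0, 0.0, 0.0, modelview[0].w);
billboardMV[1] = vec4(0.0, 1.0, 0.0, modelview[1].w);
billboardMV[2] = vec4(0.0, 0.0, 1.0, modelview[2].w);
gl_Position = projection * billboardMV * vec4(aPos, 1.0);
}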
UPDATE World space axis following billboards
The key insight into efficiently implementing aligned billboards is to realize
how they work in view space. By definition the normal vector of a billboard in
view space is Z = (0, 0, 1). This leaves only one free parameter, namely the
rotation of the billboard around this axis. In a view aligned billboard the
billboard right and up axes are merely forced to be view X and Y. This is what
setting the upper left 3×3 of the modelview matrix to identity does.
Now when we want the billboard to be aligned to a certain axis within the scene yet still face the viewer, the only parameter we can vary is the billboard's rotation. For this we do the following:
In world space we choose an axis that should be the up axis of the billboard.
Note that if the viewing axis is parallel to the billboard up axis the following
steps become singular, i.e. the rotation of the billboard is undefined. You have
to deal with this in some way, which I leave undefined here.
This chosen axis we bring into view space. Now an axis is the same kind of thing as a normal, i.e. a direction, so we transform it the same way as we do with normals: by the inverse transpose of the modelview matrix. Note that since we defined the axis in world space, we actually need to use the inverse transpose of the world-to-view transformation matrix.
The transformed major axis of the billboard is now in view space. The next step is to orthogonalize it to the viewing direction; for this you use the Gram-Schmidt method. Now we have the Z and the Y columns of the billboard transform. That leaves the X column, which we get by taking the cross product of the Z and Y columns.
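A sketch of those steps in GLSL; worldToView and billboardAxisWorld are placeholder names for whatever you already have:
uniform mat4 worldToView; // placeholder: world -> view transform
uniform vec3 billboardAxisWorld; // placeholder: chosen up axis in world space
mat3 axisAlignedBillboardBasis()
{
// Directions transform by the inverse transpose of the world-to-view matrix.
vec3 axisView = normalize(mat3(transpose(inverse(worldToView))) * billboardAxisWorld);
// The billboard normal in view space is fixed.
vec3 Z = vec3(0.0, 0.0, 1.0);
// Gram-Schmidt: remove the component of the axis along Z to get the Y column.
vec3 Y = normalize(axisView - dot(axisView, Z) * Z);
// The remaining X column completes the (right-handed) basis.
vec3 X = cross(Y, Z);
return mat3(X, Y, Z);
}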
In case anyone wonders how I solved this.
I based my solution on Quonux's answer; the only problem with it was that the billboard would rotate very fast when the camera is right above it (when the up vector is almost parallel to the camera look vector). This strange behaviour is a result of using a cross product to find the right vector: when the camera hovers over the top of the billboard, the cross product changes its sign, and so does the right vector's direction. That explains the rotation that happens.
So all I needed was to find a right vector using some other way.
As I knew the camera's rotation angles (both horizontal and vertical), I decided to use them to find a right vector:
rotatedRight = Vector4.Transform(unRotatedRight, Matrix4.CreateRotationY((-alpha)));
and the geometry shader:
...
uniform vec3 rotRight;
uniform vec3 cameraPos;
out vec2 texCoord;
void main(){
vec3 pos = gl_in[0].gl_Position.xyz;
pos /= gl_in[0].gl_Position.w; //normalized device coordinates
vec3 toCamera = normalize(cameraPos - pos);
vec3 CrossA = rotRight;
... (Continues as Quonux's code)