GLSL Shader - change 'camera' position - opengl

I'm trying to create a kind of 'camera' object with OpenGL. By changing its values, you can zoom in/out and move the camera around (imagine a 2D world seen from above). The camera is controlled by the variables center.x, center.y, and center.z.
attribute vec2 in_Position;
attribute vec4 in_Color;
attribute vec2 in_TexCoords;
uniform vec2 uf_Projection;
uniform vec3 center;
varying vec4 var_Color;
varying vec2 var_TexCoords;
void main() {
    var_Color = in_Color;
    var_TexCoords = in_TexCoords;
    gl_Position = vec4(in_Position.x / uf_Projection.x - center.x,
                       in_Position.y / -uf_Projection.y + center.y,
                       0, center.z);
}
I'm using uniform vec3 center to manipulate the camera location. (I feel it should be called an attribute, but I don't know for sure; I only know how to manipulate uniform values.)
uf_Projection holds half the screen width and height. This was already the case in the code I forked, and I can only assume it's there to make sure the values in gl_Position are normalized?
Entering values for e.g. center.x does move the camera correctly. However, it does not match the location at which certain things appear to be rendered.
In addition to the general question of how bad this code is, I'm asking these concrete questions:
What is in_Position supposed to be? I've seen several code examples use it, but no-one explains it. It's not explicitly defined either; which values does it take?
What values is gl_Position supposed to take? uf_Projection seems to normalize the values, but when adding values (more than 2000) at center.x, it still works (correctly moved the screen).
Is this the correct way to create a kind of "camera" effect? Or is there a better way? (the idea is that things that aren't on the screen, don't have to get rendered)

The questions you ask can only be answered if one considers the bigger picture. In this case, this means we should have a look at the vertex shader and the typical coordinate transformations which are used for rendering.
The purpose of the vertex shader is to calculate a clip space position for each vertex of the object(s) to be drawn.
In this context, an object is just a sequence of geometrical primitives like points, lines or triangles, each specified by some vertices.
These vertices typically specify some position with respect to some completely user-defined coordinate frame of reference. The space those vertex positions are defined in is typically called object space.
Now the vertex shader's job is to transform from object space to clip space in some mathematical or algorithmic way. Typically, these transformation rules also implicitly or explicitly contain some "virtual camera", so that the object is transformed as if observed by said camera.
However, what rules are used, and how they are described, and which inputs are needed is completely free.
What is in_Position supposed to be? I've seen several code examples use it, but no-one explains it. It's not explicitly defined either; which values does it take?
So in_Position in your case is just some attribute (meaning it is a value which is specified per vertex). The "meaning" of this attribute depends solely on how it is used. Since you are using it as input for some coordinate transformation, we could interpret it as the object space position of the vertex. In your case, that is a 2D object space. The values it "takes" are completely up to you.
What values is gl_Position supposed to take? uf_Projection seems to normalize the values, but when adding values (more than 2000) at center.x, it still works (correctly moved the screen).
gl_Position is the clip space position of the vertex. Now clip space is a bit hard to describe. The "normalization" you see here has to do with the fact that there is another space, normalized device coordinates (NDC). In the GL, the convention for NDC is that the viewing volume is represented by the -1 <= x,y,z <= 1 cube.
So if x_ndc is -1, the object will appear at the left border of your viewport, x_ndc = 1 at the right border, y_ndc = -1 at the bottom border, and so on. You also have clipping in z, so objects which are too far away from, or too near to, the hypothetical camera position will not be visible either. (Note that the near clipping plane also excludes everything which is behind the observer.)
The rule to transform from clip space to NDC is to divide the clip space x, y and z values by the clip space w value.
The rationale for this is that clip space is a so-called projective space, and the clip space coordinates are homogeneous coordinates. It would be far too much to explain the theory behind this in a Stack Overflow answer.
But what this means is that by setting gl_Position.w to center.z, the GL will later effectively divide gl_Position.xyz by center.z to reach NDC coordinates. Such a division basically creates the perspective effect that points which are farther away appear closer together.
It is unclear to me if this is exactly what you want. Your current solution has the effect that increasing center.z will increase the object space range that is mapped to the viewing volume, so it does give a zoom effect. Let's consider the x coordinate:
x_ndc = (in_Position.x / uf_Projection.x - center.x) / center.z
      = in_Position.x / (uf_Projection.x * center.z) - center.x / center.z
To put it the other way around, the object space x range you can see on the screen will be the inverse transformation applied to x_ndc=-1 and x_ndc=1:
x_obj = (x_ndc + center.x / center.z) * (uf_Projection.x * center.z)
      = x_ndc * uf_Projection.x * center.z + center.x * uf_Projection.x
      = uf_Projection.x * (x_ndc * center.z + center.x)
So basically, the visible object space range will be uf_Projection.xy * (center.xy +- center.z).
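If what you actually want is a plain 2D camera where center.xy is an object-space point to look at and a separate factor controls the zoom, it can be easier to keep w at 1.0 and do everything with a subtraction and a scale. The following is only a sketch of that alternative, not a drop-in replacement: here the uniform center is an object-space position and the hypothetical uniform zoom takes over the role of center.z, which changes the meaning of the values you upload.
attribute vec2 in_Position;
attribute vec4 in_Color;
attribute vec2 in_TexCoords;
uniform vec2 uf_Projection;   // half the viewport size, as before
uniform vec2 center;          // object-space point to show in the middle of the screen
uniform float zoom;           // larger values show more of the world
varying vec4 var_Color;
varying vec2 var_TexCoords;
void main() {
    var_Color = in_Color;
    var_TexCoords = in_TexCoords;
    vec2 ndc = (in_Position - center) / (uf_Projection * zoom);
    gl_Position = vec4(ndc.x, -ndc.y, 0.0, 1.0);   // keep the y flip of the original shader
}
With this formulation, the visible object space range is simply center.xy +- uf_Projection.xy * zoom.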
Is this the correct way to create a kind of "camera" effect? Or is there a better way? (the idea is that things that aren't on the screen, don't have to get rendered)
Conceptually, the steps are right. Usually, one uses transformation matrices to define the necessary steps. But in your case, directly applying the transformations as a few multiplications and additions is even more efficient (but less flexible).
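For completeness, the matrix-based variant alluded to above would look roughly like this (a sketch; the uniform name u_ViewProjection is made up, and the color/texcoord plumbing is omitted). The matrix would be built on the CPU from the camera translation and zoom/scale:
attribute vec2 in_Position;
uniform mat4 u_ViewProjection;  // combined camera + projection transform, built on the CPU
void main() {
    gl_Position = u_ViewProjection * vec4(in_Position, 0.0, 1.0);
}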
I'm using uniform vec3 center to manipulate the camera location. (I'm feeling it should be called an attribute, but I don't know for sure.)
Actually, using a uniform for this is the right thing to do. Attributes are for values which change per vertex. Uniforms are for values which are constant during the draw call (hence are "uniform" for all shader invocations they are accessed by). Your camera specification should be the same for each vertex you are processing. Only the vertex position varies between vertices, so that each vertex ends up at a different point with respect to some fixed camera location (and parameters).

Related

Normal mapping without using Tangent/Bitangent vectors

Unfortunately, many tutorials describe the TBN matrix as a de facto must for any type of normal mapping without going into much detail on why that's the case, which confused me in one particular scenario.
Let's assume I need to apply bump/normal mapping to a simple quad on screen, which could later be transformed by its normal matrix.
If the quad's surface normal in "rest position", before any transformation, is pointing exactly in the positive-z direction (OpenGL), isn't it sufficient to just transform the vector you read from the normal texture map with the model matrix?
vec3 bumpnormal = texture2D(texture, Coord.xy).xyz; // fetch the normal stored in the map
bumpnormal = mat3(model) * bumpnormal;              // assuming no scaling occurred
I do understand how things would change if we were computing the bumpnormal on a cube without taking into account how different faces with the same texture coordinates actually have different orientations, which leads me to the next question.
Assuming that an entire model uses only a single normal map texture, without any repetition of said texture coordinates in different parts of the model, is it possible to save those 6 floats of the tangent/bitangent vectors stored for each vertex, and the computation of the TBN matrix altogether, while getting the same results by simply transforming the bumpnormal with the model matrix?
If that's the case, why isn't it the preferred solution?
If the quad's surface normal in "rest position", before any transformation, is pointing exactly in the positive-z direction (OpenGL), isn't it sufficient to just transform the vector you read from the normal texture map with the model matrix?
No.
Let's say the value you get from the normal map is (1, 0, 0). So that means the normal in the map points right.
So... where is that exactly? Or more to the point, what space are we in when we say "right"?
Now, you might immediately think that right is just +X in model space. But the thing is, it isn't. Why?
Because of your texture coordinates.
If your model-space matrix performs a 90 degree rotation, clockwise, around the model-space Z axis, and you transform your normal by that matrix, then the normal you get should go from (1, 0, 0) to (0, -1, 0). That is what is expected.
But if you have a square facing +Z, and you rotate it by 90 degrees around the Z axis, should that not produce the same result as rotating the texture coordinates? After all, it's the texture coordinates that define what U and V mean relative to model space.
If the top-right texture coordinate of your square is (1, 1), and the bottom left is (0, 0), then "right" in texture space means "right" in model space. But if you change the mapping, so that (1, 1) is at the bottom-right and (0, 0) is at the top-left, then "right" in texture space has become "down" (-Y) in model space.
If you ignore the texture coordinates, the mapping from the model space positions to locations on the texture, then your (1, 0, 0) normal will still be pointing "right" in model space. But your texture mapping says that it should be pointing down (0, -1, 0) in model space. Just like it would have if you rotated model space itself.
With a tangent-space normal map, normals stored in the texture are relative to how the texture is mapped onto a surface. Defining a mapping from model space into the tangent space (the space of the texture's mapping) is what the TBN matrix is for.
This gets more complicated as the mapping between the object and the normals gets more complex. You could fake it for the case of a quad, but for a general figure, it needs to be algorithmic. The mapping is not constant, after all. It involves stretching and skewing as different triangles use different texture coordinates.
Now, there are object-space normal maps, which generate normals that are explicitly in model space. These avoid the need for a tangent-space basis matrix. But it intimately ties a normal map to the object it is used with. You can't even do basic texture coordinate animation, let alone allow a normal map to be used with two separate objects. And they're pretty much unworkable if you're doing bone-weight skinning, since triangles often change sizes.
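For contrast, the conventional tangent-space path the tutorials describe boils down to something like the following pair of shaders. This is only a sketch under a few assumptions: the per-vertex attribute names in_Tangent/in_Bitangent and the sampler name normalMap are made up, and lighting is replaced by a placeholder output.
// vertex shader
attribute vec3 in_Tangent;
attribute vec3 in_Bitangent;
varying mat3 var_TBN;
void main()
{
    vec3 T = normalize(gl_NormalMatrix * in_Tangent);
    vec3 B = normalize(gl_NormalMatrix * in_Bitangent);
    vec3 N = normalize(gl_NormalMatrix * gl_Normal);
    var_TBN = mat3(T, B, N);            // tangent space -> eye space
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}

// fragment shader
uniform sampler2D normalMap;
varying mat3 var_TBN;
void main()
{
    vec3 n = texture2D(normalMap, gl_TexCoord[0].xy).xyz * 2.0 - 1.0; // unpack from [0,1]
    vec3 eyeNormal = normalize(var_TBN * n);
    gl_FragColor = vec4(eyeNormal * 0.5 + 0.5, 1.0);                  // placeholder: visualize the normal
}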
http://www.thetenthplanet.de/archives/1180
vec3 perturb_normal( vec3 N, vec3 V, vec2 texcoord )
{
    // assume N, the interpolated vertex normal and
    // V, the view vector (vertex to eye)
    vec3 map = texture2D( mapBump, texcoord ).xyz;
#ifdef WITH_NORMALMAP_UNSIGNED
    map = map * 255./127. - 128./127.;
#endif
#ifdef WITH_NORMALMAP_2CHANNEL
    map.z = sqrt( 1. - dot( map.xy, map.xy ) );
#endif
#ifdef WITH_NORMALMAP_GREEN_UP
    map.y = -map.y;
#endif
    mat3 TBN = cotangent_frame( N, -V, texcoord );
    return normalize( TBN * map );
}
Basically I think you are describing this method. Which I agree is superior in most respects. It makes later calculations much more clean instead of devolving into a mess of space transformation.
Instead of calculating everything into the space of the tangents you just find what the correct world space normal is. That's what I am using in my projects and I am very happy I found this method.
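For reference, a call site for the perturb_normal() function quoted above might look roughly like this. It is only a sketch: the varying and uniform names are made up, the interpolated position/normal must be in the same space as eyePos, and the quoted function (with its mapBump sampler and cotangent_frame helper) is assumed to be part of the same fragment shader.
varying vec3 var_Normal;     // interpolated vertex normal
varying vec3 var_Position;   // fragment position, same space as eyePos
varying vec2 var_TexCoord;
uniform vec3 eyePos;         // camera position, passed in by the application

void main()
{
    vec3 N = normalize(var_Normal);
    vec3 V = normalize(eyePos - var_Position);   // view vector: fragment towards the eye
    vec3 n = perturb_normal(N, V, var_TexCoord); // function quoted above
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);     // placeholder: visualize the perturbed normal
}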

GLSL, change glPosition.z to create a flat change in depth buffer?

I am drawing a stack of decals on a quad: same geometry, different textures. Z-fighting is the obvious result. I cannot control the rendering order or use glPolygonOffset due to batched rendering, so I adjust the depth values inside the vertex shader.
gl_Position = uMVPMatrix * pos;
gl_Position.z += aDepthLayer * uMinStep * gl_Position.w;
gl_Position holds clip coordinates. That means a change in z will move a vertex along its view ray and bring it to the front or push it to the back. For normalized device coordinates, the clip coords get divided by gl_Position.w (= -z_eye for a standard perspective projection). As a result the depth buffer does not have a linear distribution and has higher resolution towards the near plane. By premultiplying by gl_Position.w, that should be cancelled out and I should be able to apply a flat amount (uMinStep) in NDC.
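Writing the premultiplication out (same names as in the shader) shows why the offset should end up constant after the perspective division:
z_ndc' = (gl_Position.z + aDepthLayer * uMinStep * gl_Position.w) / gl_Position.w
       = gl_Position.z / gl_Position.w + aDepthLayer * uMinStep
       = z_ndc + aDepthLayer * uMinStep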
That minimum step should be something like 1/(2^GL_DEPTH_BITS - 1). Or, since NDC space goes from -1.0 to 1.0, it might have to be twice that amount. However, it does not work with these values. The minStep is roughly 0.00000006, but it does not bring a texture to the front. Nor does doubling that value. If I drop a zero (scale by 10), it works. (Yay, that's something!)
But it does not work evenly along the frustum. A value that brings a texture in front of another while the quad is close to the near plane does not necessarily do the same when the quad is close to the far plane. The same effect happens when I make the frustum deeper. I would expect that behaviour if I were changing eye coordinates, because of the nonlinear z-buffer distribution. But it seems that premultiplying by gl_Position.w is not enough to counter that.
Am I missing some part of the transformations that happen to clip coords? Do I need to use a different formula in general? Do I have to include the depth range [0,1] somehow?
Could the different behaviour along the frustum be a result of nonlinear floating point precision instead of nonlinear z-Buffer distribution? So maybe the calculation is correct, but the minStep just cannot be handled correctly by floats at some point in the pipeline?
The general question: How do I calculate a z-Shift for gl_Position (clip coordinates) that will create a fixed change in the depth buffer later? How can I make sure that the z-Shift will bring one texture in front of another no matter where in the frustum the quad is placed?
Some material:
OpenGL depth buffer faq
https://www.opengl.org/archives/resources/faq/technical/depthbuffer.htm
The same, with more readable formulas (but watch out for some typos)
https://www.opengl.org/wiki/Depth_Buffer_Precision
Calculation from eye coords to z-buffer. Most of that already happens when I multiply by the projection matrix.
http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html
Explanation about the elements in the projection matrix that turn into the A and B parts in most depth buffer calculation formulas.
http://www.songho.ca/opengl/gl_projectionmatrix.html

How to translate the projected object in screen in opengl

I've rendered a 3D object and its 2D projection in the image is correct. However, now I want to shift the 2D projected object by some pixels. How do I achieve that?
Note that simply translating the 3D object doesn't work, because under perspective projection the 2D projected object could change. My goal is to just shift the 2D object in the image without changing its shape and size.
If you're using the programmable pipeline, you can apply the translation after you applied the projection transformation.
The only thing you have to be careful about is that the transformed coordinates after applying the projection matrix have a w coordinate that will be used for the perspective division. To make the additional translation amount constant in screen space, you'll have to multiply it by w. The key fragments of the vertex shader would look like this:
in vec4 Position;
uniform mat4 ModelViewProjMat;
uniform vec2 TranslationOffset;
void main() {
    gl_Position = ModelViewProjMat * Position;
    gl_Position.xy += TranslationOffset * gl_Position.w;
}
After the perspective division by w, this will result in a fixed offset.
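Writing out the perspective division makes it clear why the multiplication by w gives a fixed offset in normalized device coordinates (and therefore on screen):
x_ndc' = (x_clip + TranslationOffset.x * w_clip) / w_clip
       = x_clip / w_clip + TranslationOffset.x
       = x_ndc + TranslationOffset.x
Since NDC spans 2 units across the viewport, a TranslationOffset.x of 2.0 / (viewport width in pixels) corresponds to a shift of one pixel.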
Another possibility that works with both the programmable and fixed pipeline is that you shift the viewport. Say if the window size is vpWidth times vpHeight, and the offset you want to apply is (xOffset, yOffset), you can set the viewport to:
glViewport(xOffset, yOffset, vpWidth, vpHeight);
One caveat here is that the geometry will still be clipped by the same view volume, but only be shifted by the viewport transform after clipping was applied. If the geometry would fit completely inside the original viewport, this will work fine. But if the geometry would have been clipped originally, it will still be clipped with the same planes, even though it might actually be inside the window after the shift is applied.
As an addition to Reto Koradi's answer: you don't need shaders, and you don't need to modify the viewport you use (which has the clipping issues mentioned in that answer). You can simply modify the projection matrix by pre-multiplying some translation (which in effect will be applied last, after the projective transformation):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glTranslatef(x, y, z); // <- this one was added
glFrustum(...); // or gluPerspective(...), or whatever you use
glFrustum and gluPerspective multiply the current matrix with the projection matrix they build, which is why one typically loads identity first. However, it doesn't necessarily have to be identity, and this is one of the rare cases where one should load something else.
Since you want to shift in pixels, but that transformation is applied in clip space, you need some unit conversion. Since clip space is just the homogeneous representation of normalized device space, where the view volume is [-1,1] in all 3 dimensions (so the viewport is 2x2 units big in that space), you can use the following:
glTranslatef(x * 2.0f/viewport_width, y * 2.0f/viewport_height, 0.0f);
to shift the output by (x,y) pixels.
Note that while I wrote this for fixed-function GL, the math will of course work with shaders as well, and you can simply modify the projection matrix used by the shader in the same way.

Getting depth from Float texture in post process

I'm having a bit of trouble with a depth value that I'm storing in a float texture (or rather, I don't understand the values). Essentially I am creating a deferred renderer, and in one of the passes I am storing the depth in the alpha component of a floating point render target. The code for that shader looks something like this.
Define the clip position as a varying
varying vec4 clipPos;
...
In the vertex shader assign the position
clipPos = gl_Position;
Now in the fragment shader I store the depth:
gl_FragColor.w = clipPos.z / clipPos.w;
This by and large works. When I access this render target in any subsequent shader, I can get the depth, e.g. something like this:
float depth = depthMap.w;
Am I right to assume that 0.0 is right in front of the camera and 1.0 is in the distance? I ask because I am doing some fog calculations based on this, but they don't seem to be correct.
fogFactor = smoothstep( fogNear, fogFar, depth );
fogNear and fogFar are uniforms I send to the shader. When fogNear is set to 0, I would have expected a smooth transition of fog from right in front of the camera to the draw distance. However, this is what I see:
When I set fogNear to 0.995, I get something more like what I'm expecting:
Is that correct? It just doesn't seem right to me. (The scale of the geometry is not unusually small or large, and neither are the camera near and far planes; all the values are pretty reasonable.)
There are two issues with your approach:
You assume the depth is in the range [0,1], but what you use is clipPos.z / clipPos.w, which is the NDC z coordinate in the range [-1,1]. You might be better off directly writing the window space z coordinate to your depth texture, which is in [0,1] and is simply gl_FragCoord.z.
The more serious issue is that you assume a linear depth mapping. However, that is not the case. The NDC and window space z values are not a linear representation of the distance to the camera plane, so it is not surprising that everything you see in the screenshot is very close to 1. Typically, fog calculations are done in eye space. However, since you only need the z coordinate here, you could simply store the clip space w coordinate - typically, that is just -z_eye (look at the last row of your projection matrix). The resulting value will not be in any normalized range, but in the [near,far] range of your projection matrix - and specifying fog distances in eye space units (which are normally identical to world space units) is more intuitive anyway.
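A minimal sketch of that suggestion, assuming a standard perspective projection (so clip-space w equals -z_eye); the depthMap/texCoord names and the placeholder color write are made up:
// vertex shader of the geometry pass
varying float eyeDepth;
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    eyeDepth = gl_Position.w;                  // == -z_eye for a standard perspective matrix
}

// fragment shader of the geometry pass
varying float eyeDepth;
void main()
{
    gl_FragColor.rgb = vec3(0.0);              // placeholder for whatever this pass normally writes
    gl_FragColor.a = eyeDepth;                 // linear eye-space depth in the alpha channel
}

// later pass: fog with distances given in eye-space units
float depth = texture2D(depthMap, texCoord).a;
float fogFactor = smoothstep(fogNear, fogFar, depth);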

How to transform back-facing vertices in GLSL when creating a shadow volume

I'm writing a game using OpenGL and I am trying to implement shadow volumes.
I want to construct the shadow volume of a model on the GPU via a vertex shader. To that end, I represent the model with a VBO where:
Vertices are duplicated such that each triangle has its own unique three vertices
Each vertex has the normal of its triangle
For reasons I'm not going to get into, I was actually doing the above two points anyway, so I'm not too worried about the vertex duplication
Degenerate triangles are added to form quads inside the edges between each pair of "regular" triangles
Using this model format, inside the vertex shader I am able to find vertices that are part of triangles that face away from the light and move them back to form the shadow volume.
What I have left to figure out is what transformation exactly I should apply to the back-facing vertices.
I am able to detect when a vertex is facing away from the light, but I am unsure what transformation I should apply to it. This is what my vertex shader looks like so far:
uniform vec3 lightDir; // Parallel light.
// On the CPU this is represented in world
// space. After setting the camera with
// gluLookAt, the light vector is multiplied by
// the inverse of the modelview matrix to get
// it into eye space (I think that's what I'm
// working in :P ) before getting passed to
// this shader.
void main()
{
    vec3 eyeNormal = normalize(gl_NormalMatrix * gl_Normal);
    vec3 realLightDir = normalize(lightDir);
    float dotprod = dot(eyeNormal, realLightDir);

    if (dotprod <= 0.0)
    {
        // Facing away from the light
        // Need to translate the vertex along the light vector to
        // stretch the model into a shadow volume

        //---------------------------------//
        // This is where I'm getting stuck //
        //---------------------------------//

        // All I know is that I'll probably turn realLightDir into a
        // vec4

        gl_Position = ???;
    }
    else
    {
        gl_Position = ftransform();
    }
}
I've tried simply setting gl_Position to ftransform() - (vec4(realLightDir, 1.0) * someConstant), but this caused some kind of depth-testing bugs (some faces seemed to be visible behind others when I rendered the volume with colour), and someConstant didn't seem to affect how far the back faces were extended.
Update - Jan. 22
Just wanted to clear up questions about what space I'm probably in. I must say that keeping track of what space I'm in is the greatest source of my shader headaches.
When rendering the scene, I first set up the camera using gluLookAt. The camera may be fixed or it may move around; it should not matter. I then use translation functions like glTranslated to position my model(s).
In the program (i.e. on the CPU) I represent the light vector in world space (three floats). I've found during development that to get this light vector in the right space of my shader I had to multiply it by the inverse of the modelview matrix after setting the camera and before positioning the models. So, my program code is like this:
Position camera (gluLookAt)
Take light vector, which is in world space, and multiply it by the inverse of the current modelview matrix and pass it to the shader
Transformations to position models
Drawing of models
Does this make anything clearer?
The ftransform result is in clip space, so this is not the space you want to apply realLightDir in. I'm not sure which space your light is in (your comment confuses me), but what is sure is that you want to add vectors that are in the same space.
On the CPU this is represented in world
space. After setting the camera with
gluLookAt, the light vector is multiplied by
the inverse of the modelview matrix to get
it into eye space (I think that's what I'm
working in :P ) before getting passed to
this shader.
Multiplying a vector by the inverse of the modelview matrix brings the vector from view space to model space. So you're saying your light vector, which is in world space, has a view-to-model transform applied to it. That makes little sense to me.
We have 4 spaces:
model space: the space your gl_Vertex is defined in.
world space: a space that GL does not care about in general; it represents an arbitrary space in which to locate the models. It's usually what the 3D engine works in (it maps to our general understanding of world coordinates).
view space: the space that corresponds to the viewer's frame of reference. (0,0,0) is where the viewer is, looking down Z. Obtained by multiplying gl_Vertex by the modelview matrix.
clip space: the magic space that the projection matrix brings us into. The result of ftransform is in this space (and so is gl_ModelViewProjectionMatrix * gl_Vertex).
Can you clarify exactly which space your light direction is in ?
What you need to do, however, is perform the light vector addition in either model, world or view space: bring all parts of your operation into the same space. E.g. for model space, just compute the light direction in model space on the CPU, and do:
vec3 vertexTemp = gl_Vertex.xyz + lightDirInModelSpace * someConst;
then you can bring that new vertex position in clip space with
gl_Position = gl_ModelViewProjectionMatrix * vec4(vertexTemp, 1.0);
One last thing: don't try to apply vector additions in clip space. It won't generally do what you think it should, because at that point you are necessarily dealing with homogeneous coordinates with a non-uniform w.
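Putting this together with the back-face test from the question, a model-space version of the shader might look roughly like this. It is only a sketch: lightDirModel and extrusionDistance are assumed uniforms (the light direction converted to model space on the CPU), and the question's convention that dot(normal, lightDir) <= 0 means "facing away from the light" is kept, so flip the sign of the extrusion if your lightDir points the other way.
uniform vec3 lightDirModel;       // parallel light direction in model space (pointing towards the light)
uniform float extrusionDistance;  // how far to push back-facing vertices

void main()
{
    vec4 pos = gl_Vertex;
    float dotprod = dot(normalize(gl_Normal), normalize(lightDirModel));

    if (dotprod <= 0.0)
    {
        // Facing away from the light: extrude away from it to form the volume's sides and back cap
        pos.xyz -= normalize(lightDirModel) * extrusionDistance;
    }

    gl_Position = gl_ModelViewProjectionMatrix * pos;
}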