These days I am reading the book Learning Modern 3D Graphics Programming by Jason L. McKesson. It is basically a book about OpenGL 3.3, and I am now at chapter 4, which is about orthographic and perspective projection.
At the end of the chapter, in the "Further Study" section, he suggests trying a few things, like implementing a variable eye point (at the beginning he used (0, 0, 0) in camera space for simplicity) and an arbitrary perspective plane location.
He says I am going to need to offset the X, Y camera-space positions of the vertices by E_x and E_y respectively.
I cannot understand this passage: how am I supposed to implement a variable eye point by modifying only the X and Y offsets?
Edit: could it be something like this?
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
smooth out vec4 theColor;
uniform vec2 offset;
uniform vec2 E;
uniform float zNear;
uniform float zFar;
uniform float frustumScale;
void main()
{
    vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
    vec4 clipPos;
    clipPos.xy = cameraPos.xy * frustumScale + vec4(E.x, E.y, 0.0, 0.0);
    clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
    clipPos.z += 2 * zNear * zFar / (zNear - zFar);
    clipPos.w = cameraPos.z / (-E.z);
    gl_Position = clipPos;
    theColor = color;
}
Edit 2: thanks Boris, your picture helped a lot :) especially because:
it makes clear what you previously stated about thinking of E as the projection plane position and not the eye point position
it underlines that the size of the projection plane must always be [-1, 1], a passage that I read in the book without fully understanding what it meant
Just out of curiosity, why do you mention multiplying after subtracting? Is it for the same reason the book gives, that is, the aspect ratio? Because everything logically pushes me to do exactly the opposite, that is, translation first (-2) and then multiplication (/5). Or maybe by the term "scaling" the book refers to the reshape function?
Here, we are interested in computing a transformation from Camera Coordinates (CC) to Normalized Device Coordinates (NDC).
Think of E as the position of the projection plane in Camera Coordinates, rather than the position of the eye point relative to the projection plane. In Camera Coordinates, the eye point is by definition located at the origin, at least in my interpretation of what "Camera Coordinates" means: a coordinate frame centered on the point from which you look at the scene. (You can mathematically define a perspective transformation centered anywhere, but then your input space is not the camera space, imho. This is what the World->Camera transformation is for, as you will see in chapter 6.)
Summary:
you are in camera space, hence your eye point is located at (0,0,0)
you are looking toward the negative Z-axis
your projection plane is parallel to the xOy plane, with a size of [-1,1] in both directions
This is the picture here (each tick is 0.5 unit):
In this picture, you can see that the projection plane (the bottom side of the gray trapezoid) is centered at (0,0,-1), with a size of [-1,1] in both the X and Y directions.
Now, what is asked is, instead of choosing (0,0,-1) for the center of this plane, to choose an arbitrary position (E.x, E.y, E.z) (assume E.z is negative). But the plane still has to be parallel to the xOy plane and keep the same size.
You can see that the E.xy components play a very different role than E.z, which is why E.xy will be involved in a subtraction, while E.z will be involved in a division. This is easy to see with an example:
assume zNear = -E.z (not necessarily the case, but you can in fact always change frustumScale to have an equivalent perspective satisfying this)
consider the point E (which is the center of the projection plane).
What is its coordinate in NDC space? It is (0,0,-1) by definition. So what you have done is subtract E.xy, but divide by -E.z.
Your code gets this idea, but a few things are still wrong:
First, you defined uniform vec2 E; instead of uniform vec3 E; (just a typo, not a big deal)
The line clipPos.xy = ...; is vec2 arithmetic, so you can only multiply by scalar values (i.e., a float) or add/subtract vec2 values. vec4(E.x, E.y, 0.0, 0.0) is therefore of the wrong type; you should use E.xy instead, which has the correct type vec2.
You should in fact subtract E.xy instead of adding it. This is easy to see in my example above.
Finally, things are more subtle ;-)
I made a picture to illustrate the modifications:
Each tick is 1 unit in this picture. The top left is your Camera Coordinate space, with zNear, zFar and two possible projection planes displayed. The blue one is the plane used in the explanation and the shader so far, and the red one is the plane you now want to use. The colored areas correspond to what should be visible on your final screen, i.e. what should end up in the cube [-1,1]^3 of NDC space. Hence, if you use the blue projection plane, you want to obtain the space at the top right, and if you use the red projection plane, you want to obtain the space at the bottom. To do this, you can observe that you need to perform the scaling and translation in NDC space, i.e. after the perspective division! (I think what is written in the book is either incorrect, or interprets the question differently.)
Hence you want to do, in Euclidean coordinates (i.e., not homogeneous coordinates, without the W coordinate):
clipPosEuclideanRed.xy = clipPosEuclideanBlue.xy * (-E.z) - E.xy;
clipPosEuclideanRed.z = clipPosEuclideanBlue.z;
However, because you are in homogeneous coordinates, these values are in fact:
clipPosEuclidean.xyz = clipPos.xyz / clipPos.w; // with clipPos.w = -cameraPos.z;
Hence, you have to compensate by writing:
clipPosRed.xy = clipPosBlue.xy * (-E.z) - E.xy * (-cameraPos.z);
clipPosRed.z = clipPosBlue.z;
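As a quick sanity check (assuming frustumScale = 1 and taking the vertex at cameraPos = E, the center of the red plane):
clipPosRed.xy = E.xy * (-E.z) - E.xy * (-E.z) = (0, 0)
and after dividing by clipPos.w = -E.z the NDC coordinate is (0, 0), exactly where the center of the projection plane should land.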
So my solution to this problem would be to add only one line:
void main()
{
    vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
    vec4 clipPos;
    clipPos.xy = cameraPos.xy * frustumScale;
    // only add this line
    clipPos.xy = - clipPos.xy * E.z + E.xy * cameraPos.z;
    clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
    clipPos.z += 2 * zNear * zFar / (zNear - zFar);
    clipPos.w = -cameraPos.z;
    gl_Position = clipPos;
    theColor = color;
}
Here is my question; I will lay it out to make it clear:
I am writing a program drawing squares in 2D using instancing.
My camera direction is (0,0,-1), camera up is (0,1,0), and camera position is (0,0,3); the camera position changes when I press some keys.
What I want is that, when I zoom in (the camera moves closer to the square), the square's size (on the screen) won't change. So in my shader:
#version 330 core
layout(location = 0) in vec2 squareVertices;
layout(location = 1) in vec4 xysc;
out vec4 particlecolor;
uniform mat4 VP;
void main()
{
    float particleSize = xysc.z;
    float color = xysc.w;
    gl_Position = VP* vec4(xysc.x, xysc.y, 2.0, 1.0) + vec4(squareVertices.x*particleSize,squareVertices.y*particleSize,0,0);
    particlecolor = vec4(1.0f * color , 1.0f * (1-color), 0.0f, 0.5f);
}
Please notice that, in order to keep the squares' size unchanged, what I do is:
1. transform the center of the square first
VP * vec4(xysc.x, xysc.y, 2.0, 1.0)
2. then compute one of the four corners (x,y,z,1) of the square
+ vec4(squareVertices.x*particleSize,squareVertices.y*particleSize,0,0);
instead of:
gl_Position = VP* (vec4(xysc.x, xysc.y, 2.0, 1.0) + vec4(squareVertices.x*particleSize,squareVertices.y*particleSize,0,0));
However, when I move the camera closer to the z=0 plane, the squares' size grows unexpectedly. Where is the problem? I can provide demo code if necessary.
It sounds like you use a perspective projection, and the formula you use in steps 1 and 2 won't work, because VP * vec4 will in the general case result in a vec4(x,y,z,w) with a w value != 1. Adding a vec4(a,b,0,0) to that will just get you vec3((x+a)/w, (y+b)/w, z) after the perspective divide, while you seem to want vec3(x/w + a, y/w + b, z). So the correct approach is to scale a and b by w and add that before the divide: vec4(x + a*w, y + b*w, z, w).
Note that when you move your camera closer to the geometry, the effective w value will approach zero, so (x+a)/w will grow larger than x/w + a, resulting in your geometry getting bigger.
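Applied to the question's vertex shader, the corrected main() might look roughly like this (a sketch only, reusing the same attributes and uniforms as above and not tested against the original program):
void main()
{
    float particleSize = xysc.z;
    float color = xysc.w;
    vec4 center = VP * vec4(xysc.x, xysc.y, 2.0, 1.0);          // clip-space center of the square; w is generally != 1
    vec2 corner = squareVertices * particleSize;                 // constant screen-size offset for this corner
    gl_Position = center + vec4(corner * center.w, 0.0, 0.0);   // scale the offset by w so it survives the perspective divide
    particlecolor = vec4(1.0f * color, 1.0f * (1 - color), 0.0f, 0.5f);
}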
I'm working on the following shader that
translates (on y)
rotates
repeats (tiles)
a texture:
uniform sampler2D texture;
uniform vec2 resolution;
varying vec4 vertColor;
varying vec4 vertTexCoord;
uniform float rotation;
uniform float yTranslation;
void main() {
    vec2 repeat = vec2(2, 2);
    vec2 coord = vertTexCoord.st;
    coord.y += yTranslation;
    float sin_factor = sin(rotation);
    float cos_factor = cos(rotation);
    coord += vec2(0.5);
    coord = coord * mat2(cos_factor, sin_factor, -sin_factor, cos_factor) * 0.3;
    coord -= vec2(0.5);
    coord = vec2(mod(coord.x * repeat.x, 1.0f), mod(coord.y * repeat.y, 1.0f));
    gl_FragColor = texture2D(texture, coord) * vertColor;
}
Current behavior
Desired behavior
I want the texture to always rotate around the center, no matter how far it has been translated.
Simply swapping the order of the steps results in weird behavior. What am I missing?
Your problem statement in the question is really wrong. Your additional comment (to a now-deleted answer):
I have a boat that always stays in the center of the screen, the water texture (controlled by this shader) under it moves to make it look like the boat is moving. The movement of the water texture is controlled by rotation (for steering) and yTranslation (for how far the boat has moved forwards/backwards)
makes it clear that you're asking for a different thing, and the approach described in the question is simply not going to solve your problem.
When your boat moves and rotates, it will basically travel along a curve (and you want the inverse of that curve to travel through texture space). But your two degrees of freedom, rotation and yTranslation, are not capable of describing that curve. Your problem needs at least another parameter, xTranslation; so in the end, you need a 2D vector describing the position of your boat plus an angle describing the rotation. And you need to properly accumulate this data at each simulation step (see the sketch after this list):
update the rotation accordingly
calculate the 2D vector your ship is heading to, as defined by the current rotation
scale it according to the velocity of the movement
accumulate it onto the position vector.
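A minimal sketch of that per-step accumulation, written as a GLSL-style function for consistency with the rest of the code here (in a real program this lives in your application code, and the names are purely illustrative):
vec2 boatPosition = vec2(0.0);   // accumulated 2D position
float boatRotation = 0.0;        // accumulated heading angle

void simulationStep(float steering, float speed, float dt)
{
    boatRotation += steering * dt;                               // update the rotation
    vec2 heading = vec2(sin(boatRotation), cos(boatRotation));   // heading vector defined by the current rotation
    boatPosition += heading * speed * dt;                        // scale by the velocity and accumulate onto the position
}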
Then, your shader simply has to:
1. translate the texcoords by position (or -position, whichever you store)
2. rotate around the pivot point (which is constant and only depends on how you laid out your texture space)
coord = vec2(mod(coord.x * repeat.x, 1.0f), mod(coord.y * repeat.y, 1.0f));
that's a waste of GPU ALU cycles; the TMUs will already do the mod for you if you use the GL_REPEAT wrap mode.
However, what you now have here is rotation, scaling and translation, so just use a single matrix for the whole texcoord transformation - the accumulation of the 2D position that I talked about earlier can nicely be done with the matrix representations. It will also remove the sin and cos from your shader, which is another huge waste right now.
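A sketch of what the fragment shader could then collapse to; uTexMatrix is an assumed uniform holding the combined translate/rotate-around-pivot/scale transform, built and accumulated on the CPU each frame:
uniform sampler2D texture;
uniform mat3 uTexMatrix;   // combined translation * rotation-around-pivot * scale
varying vec4 vertColor;
varying vec4 vertTexCoord;

void main() {
    // one matrix multiply replaces the translate/rotate/scale chain;
    // GL_REPEAT on the texture handles the tiling, so no mod() is needed
    vec2 coord = (uTexMatrix * vec3(vertTexCoord.st, 1.0)).xy;
    gl_FragColor = texture2D(texture, coord) * vertColor;
}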
I'm developing a game that consists of 2 stages, one of these has an orthographic projection, and the other stage has a perspective projection.
Currently when we go between modes we fade to black, and then come back in the new camera mode.
How would I go about smoothly transitioning between the two?
There are probably a handful of ways of accomplishing this; the two I found that seemed like they would work best were:
Lerping all the matrix elements from one matrix to the other. Apparently this works pretty well, all things considered. I don't believe this transition will appear linear, though; you could give it an easing function instead of doing the interpolation linearly (see the sketch after this list).
A dolly zoom on the perspective matrix, going to/from a near-zero field of view. You would pop from the orthographic matrix to the near-zero perspective matrix and lerp the FOV out to your target, probably heavily tweaking the near/far planes as you go. In reverse, you would lerp to zero and then pop to the orthographic matrix. The idea behind this is that things appear flatter with a lower FOV, and that an FOV of 0 is essentially an orthographic projection. This is more complex, but it can also be tweaked a whole lot more.
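A minimal vertex-shader sketch of the first option, the element-wise blend between the two projection matrices; uOrthoMatrix, uPerspMatrix and uBlend are assumed uniform names, and uBlend would be driven by your easing function:
#version 330
uniform mat4 uModelView;
uniform mat4 uOrthoMatrix;   // orthographic projection
uniform mat4 uPerspMatrix;   // perspective projection
uniform float uBlend;        // 0 = fully orthographic, 1 = fully perspective
layout(location = 0) in vec4 inPosition;

void main()
{
    // blend every matrix element; the result is not a 'real' projection mid-transition,
    // but it gives a usable morph between the two
    mat4 proj = (1.0 - uBlend) * uOrthoMatrix + uBlend * uPerspMatrix;
    gl_Position = proj * uModelView * inPosition;
}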
If you have access to a programmable pipeline (a.k.a. shaders), you can do the transition in the vertex shader. I have found that this works very well and does not introduce artifacts. Here's a GLSL code snippet:
#version 150
uniform mat4 uModelMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;
uniform float uNearClipPlane = 1.0;
uniform vec2 uPerspToOrtho = vec2( 0.0 );
in vec4 inPosition;
void main( void )
{
    // Calculate view space position.
    vec4 view = uViewMatrix * uModelMatrix * inPosition;
    // Scale x&y to 'undo' perspective projection.
    view.x = mix( view.x, view.x * ( -view.z / uNearClipPlane ), uPerspToOrtho.x );
    view.y = mix( view.y, view.y * ( -view.z / uNearClipPlane ), uPerspToOrtho.y );
    // Output clip space coordinate.
    gl_Position = uProjectionMatrix * view;
}
In the code, uPerspToOrtho is a vec2 (e.g. a float2) that contains a value in the range [0..1]. When set to 0, your coordinates will use perspective projection (assuming your projection matrix is a perspective one). When set to 1, your coordinates will behave as if projected by an orthographic projection matrix. You can do this separately for the X- and Y-axes.
'uNearClipPlane' is the near plane distance, which is the value you used to create the perspective projection matrix.
When converting this to HLSL, you may need to use view.z instead of -view.z, but I could be wrong.
I hope you find this useful.
Edit: instead of passing in the near clip plane distance, you can also extract it from the projection matrix. For OpenGL, this is how:
float zNear = 2.0 * uProjectionMatrix[3][2] / ( 2.0 * uProjectionMatrix[2][2] - 2.0 );
Edit 2: you can optimize the code by doing the scaling on x and y at the same time:
view.xy = mix( view.xy, view.xy * ( -view.z / uNearClipPlane ), uPerspToOrtho.xy );
To get rid of the division, you could multiply by the inverse near plane distance:
uniform float uInvNearClipPlane; // = 1.0 / zNear
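The optimized line from Edit 2 would then presumably become:
view.xy = mix( view.xy, view.xy * ( -view.z * uInvNearClipPlane ), uPerspToOrtho.xy );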
I managed to do this without the explicit use of matrices. I used Java so the syntax is different but comparable. One of the things I used was this mix() function. It returns value1 when factor is 1 and value2 when factor is 0, and has a linear transition for every value in between.
private double mix(double value1, double value2, double factor)
{
    return (value1 * factor) + (value2 * (1 - factor));
}
When I call this function, I use value1 for perspective and value2 for orthographic, like so: mix(focalLength/voxel.z, orthoZoom, factor)
When determining your focal length and orthographic zoom factor, it is helpful to know that anything at distance focalLength/orthoZoom away from the camera will project to the same point throughout the transition: at that distance the perspective scale focalLength/voxel.z equals orthoZoom, so mix() returns the same value for every factor.
Hope this helps. You can download my program to see how it looks at https://github.com/npetrangelo/3rd-Dimension/releases.
I have been trying to get variance shadow mapping to work in my WebGL application, but I seem to be having an issue that I could use some help with. In short, my shadows seem to vary over a much smaller distance than in the examples I have seen out there. I.e., the shadow range is from 0 to 500 units, but the shadow is black 5 units away and almost non-existent 10 units away. The examples I am following are based on these two links:
VSM from Florian Boesch
VSM from Fabian Sanglard
In both of those examples, the authors are using a spot light perspective projection to map the variance values to a floating point texture. In my engine, I have so far tried to use the same logic, except that I am using a directional light and an orthographic projection. I tried both techniques and the result seems to always be the same for me. I'm not sure if it's because I'm using an orthographic matrix to do the projection - I suspect it might be. Here is a picture of the problem:
Notice how the box is only a few units away from the circle, but the shadow is much darker even though the shadow camera's range is 0.1 to 500 units.
In the light shadow pass my code looks like this:
// viewMatrix is a uniform of the inverse world matrix of the camera
// vWorldPosition is the varying vec4 of the vertex position x world matrix
vec3 lightPos = (viewMatrix * vWorldPosition).xyz;
float depth = clamp(length(lightPos) / 40.0, 0.0, 1.0);
float moment1 = depth;
float moment2 = depth * depth;
// Adjusting moments (this is sort of bias per pixel) using partial derivative
float dx = dFdx(depth);
float dy = dFdy(depth);
moment2 += pow(depth, 2.0) + 0.25 * (dx * dx + dy * dy) ;
gl_FragColor = vec4(moment1, moment2, 0.0, 1.0);
Then in my shadow pass:
// lightViewMatrix is the light camera's inverse world matrix
// vertWorldPosition is the attribute position x world matrix
vec3 lightViewPos = (lightViewMatrix * vertWorldPosition).xyz;
float lightDepth2 = clamp(length(lightViewPos) / 40.0, 0.0, 1.0);
float illuminated = vsm( shadowMap[i], shadowCoord.xy, lightDepth2, shadowBias[i] );
shadowColor = shadowColor * illuminated;
Firstly, should I be doing anything differently with the orthographic projection? (It's probably not this, but I don't know what else it might be, as it happens with both techniques above :( ) If not, what might I be able to do to get a more even spread of the shadow?
Many thanks
If you want to render an imposter geometry (say like a sphere), then the standard practice is to draw it using two triangles (say by passing one vertex and making a triangle strip with a geometry shader).
This is nice because it allows the extent of the billboard to be set fairly simply: you compute the actual world space positions directly.
Geometry shaders can alternatively output point primitives, and I don't see a reason why they shouldn't. The only issue is finding some way to scale gl_PointSize so that you get that effect.
The only precedent I could find were this question (whose answer I am unsure is correct) and this question (which is unanswered).
It's worth noting that it's fairly simple to scale the point correctly with distance (by doing gl_PointSize = constant / length(gl_Position)), but this isn't controllable; you can't say, for example, "I want this point to look like it is two world units across."
So: anyone know how to do this?
A straightforward idea is to transform a point at the top and the bottom of the particle into screen space and find the distance. This cancels very nicely, and it's pretty simple to work with just the y coordinate.
The billboard is screen aligned, and view matrices generally don't scale, so the particle size in world space is the same as eye space. That just leaves the projection to get to NDC, the divide by w and scaling by the viewport size.
A typical projection matrix, P, might look something like this...
[ +1.2990 +0.0000 +0.0000 +0.0000 ]
[ +0.0000 +1.7321 +0.0000 +0.0000 ]
[ +0.0000 +0.0000 -1.0002 -0.0020 ]
[ +0.0000 +0.0000 -1.0000 +0.0000 ]
Starting with y_eye, a y coordinate in eye space, the image space coordinate y_image is obtained in pixels:
y_image = (0.5 * y_clip / w_clip + 0.5) * vpHeight, where y_clip = P[1][1] * y_eye and w_clip = -z_eye
Plugging in the radius above/below the billboard (y_eye + radius and y_eye - radius) and subtracting, the constant terms cancel to:
pixelSize = vpHeight * P[1][1] * radius / w_clip
For a perspective projection, P[1][1] = 1 / tan(fov_y / 2). w_clip is gl_Position.w, which is also -z_eye (from the -1 in the perspective matrix). To guarantee your point covers every pixel you want, this may need an additional small constant.
Side note: A sphere on a billboard will look OK in the middle of the screen. If you have a large field of view perspective projection, a true sphere should warp as it approaches the edges of the screen. You could implicitly raycast the virtual sphere for each pixel in the billboard to get a correct result, but the billboard boundary will need to be adjusted accordingly. Quick google results: 1 2 3 4
[EDIT]
Well, since I bothered to test this I'll throw my shaders here too...
Vertex:
#version 150
in vec4 osVert;
uniform mat4 projectionMat;
uniform mat4 modelviewMat;
uniform vec2 viewportSize;
flat out vec2 centre;
flat out float radiusPixels;
const float radius = 1.0;
void main()
{
    gl_Position = projectionMat * modelviewMat * osVert;
    centre = (0.5 * gl_Position.xy / gl_Position.w + 0.5) * viewportSize;
    gl_PointSize = viewportSize.y * projectionMat[1][1] * radius / gl_Position.w;
    radiusPixels = gl_PointSize / 2.0;
}
Fragment:
#version 150
flat in vec2 centre;
flat in float radiusPixels;
out vec4 fragColour;
void main()
{
    vec2 coord = (gl_FragCoord.xy - centre) / radiusPixels;
    float l = length(coord);
    if (l > 1.0)
        discard;
    vec3 pos = vec3(coord, sqrt(1.0 - l*l));
    fragColour = vec4(vec3(pos.z), 1.0);
}
(Note the visible gap at the bottom right is incorrect as described above)