Geometry Shader to output a line that’s a quarter of the screen width - opengl

The goal is to select an object on screen and then draw a small coordinate system on this object as an overlay. But I want this coordinate system to always be the same size no matter how far away the object is from the camera.
I start with a position in world space that I pass to the vertex shader as a uniform. I then do:
gl_Position = uViewProjection * vec4(uPosition, 1.0);
This gets passed to the geometry shader.
What I would like to do now is let the geometry shader draw a line segment going from gl_Position in the direction of the projected x-axis.
That by itself is no problem:
I just do:
vec4 x = uViewProjection * vec4(1.0, 0.0, 0.0, 0.0);
Now I add this vector x to gl_Position and draw the line.
This works, but the line gets smaller or bigger depending on how far the camera is from the position. I think that is because gl_Position has not yet been divided by gl_Position.w?
But I would like the line to always be a quarter of the screen size in length.
I know the perspective divide happens right before the fragment shader, but I think I have to apply it myself beforehand in some way in order to achieve my goal.
What am I doing wrong? What am I missing?
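For reference, a minimal sketch of one way to get a fixed on-screen length in the geometry shader: do the perspective divide by hand, measure the quarter-screen length in pixels, and re-attach the original w before emitting. The uViewport uniform holding the viewport size in pixels is an assumption and not part of the question:

#version 330 core
layout(points) in;
layout(line_strip, max_vertices = 2) out;

uniform mat4 uViewProjection;
uniform vec2 uViewport; // viewport size in pixels (assumed uniform)

void main()
{
    // Clip-space anchor from the vertex shader, plus a second point offset
    // by one world unit along the x-axis, projected the same way.
    vec4 clipCenter = gl_in[0].gl_Position;
    vec4 clipAxis   = clipCenter + uViewProjection * vec4(1.0, 0.0, 0.0, 0.0);

    // Manual perspective divide so lengths can be measured on screen.
    vec2 ndcCenter = clipCenter.xy / clipCenter.w;
    vec2 ndcAxis   = clipAxis.xy / clipAxis.w;

    // Direction of the projected x-axis in pixels (so the aspect ratio does
    // not distort it), rescaled to a quarter of the screen width.
    vec2 dirPx  = normalize((ndcAxis - ndcCenter) * uViewport * 0.5);
    vec2 endNdc = ndcCenter + dirPx * (0.25 * uViewport.x) * 2.0 / uViewport;

    gl_Position = clipCenter;
    EmitVertex();

    // Multiply the NDC endpoint back by the original w (and keep z and w)
    // so the fixed screen-space length survives the hardware perspective divide.
    gl_Position = vec4(endNdc * clipCenter.w, clipCenter.zw);
    EmitVertex();
    EndPrimitive();
}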

Related

How to get homogeneous screen space coordinates in openGL

I'm studying OpenGL and I've got a little 3D scene with some objects. In the GLSL vertex shader I multiply the vertices by matrices like this:
vertexPos= viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
gl_Position = vertexPos;
vertexPos is a vec4 varying variable and I pass it to the fragment shader.
Here is how the scene renders normally:
[image: normal render]
But then I want to do a debug render. I write in the fragment shader:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0);
vertexPos is multiplied by all the matrices, including the perspective matrix, so I assumed I would get a smooth gradient from the center of the screen to the right edge, because the coordinates are mapped into the -1 to 1 square. But it looks like they are in screen space with no perspective deformation applied. Here is what I see:
(don't look at the red line and the light source, they use a different shader)
[image: debug render]
If I divide it by about 15 it looks like this:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0)/15.0;
[image: divided by 15]
Can someone please explain to me why the coordinates aren't homogeneous, yet the scene still renders correctly with perspective distortion?
P.S. If I try to use gl_Position in the fragment shader instead of vertexPos, it doesn't work.
A so-called perspective division is applied to gl_Position after it's computed in a vertex shader:
gl_Position.xyz /= gl_Position.w;
But it doesn't happen to your varyings unless you do it manually. Thus, you need to add
vertexPos.xyz /= vertexPos.w;
at the end of your vertex shader. Make sure to do it after you copy the value to gl_Position; you don't want the division applied twice.
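Putting it together, a sketch of the adjusted vertex shader (legacy GLSL; assuming the matrices are uniforms, since they are not declared in the question):

uniform mat4 viewMatrix;
uniform mat4 worldMatrix;
uniform mat4 modelMatrix;
varying vec4 vertexPos;

void main() {
    vertexPos   = viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
    gl_Position = vertexPos;
    // Divide only the varying; gl_Position is divided by the hardware
    // after the vertex shader runs.
    vertexPos.xyz /= vertexPos.w;
}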

Pixel perfect text rendering in perspective projection

I'm trying to render text as textured quads in perspective projection (I do not want to use an ortho projection), and I'm struggling with pixel alignment.
The setup is simple: I have text with an alignment anchor point in 3D, I change its model transformation into a billboard transformation, and I calculate a scale (by triangle similarity) so the text is always the same size. Since the geometry of the text quads is constructed with world units corresponding to pixels, the resulting text indeed seems to stay the same size no matter the camera orientation or anchor point offset.
Vector3d dist = camera.getPosition();
dist.sub(translation);
double pxFov = camera.getFOV() / camera.getScreenWidth();
double scale = Math.sin(pxFov) / Math.sin((Math.PI / 2) - pxFov)
        * dist.length() * camera.getAspectRatio();
V.getRotationScale(R);
R.transpose();
M.setIdentity();
M.setRotation(R);
M.setScale(scale);
M.setTranslation(translation);
Where V is the 4x4 camera view matrix, R is a temporary 3x3 matrix, and M is the 4x4 model transformation matrix used for the MVP calculation.
I found a supposed solution, but it only slightly changed the behaviour of the rendered text instead of fixing the problem.
When using the vertex shader
void main () {
    vec2 view = vec2(1280, 720);
    vec4 cpos = MVP * vec4(position, 1.0f);
    vec2 p = floor(cpos.xy * view * 0.5 / cpos.w);
    p += 0.5; // does not influence result
    cpos.xy = p * (2.0 / view * cpos.w);
    gl_Position = cpos;
}
the text renders sharp in some places and blurred in others.
With the simple vertex shader
void main () {
    gl_Position = MVP * vec4(position, 1.0f);
}
the text is either completely blurred or completely sharp.
It seems logical to get the vertex positions into viewport space, round them there, and move them back, but something seems to be missing.
EDIT: Explanation of how I'm calculating the scale factor.
Here you can see I'm building a right-angle triangle from the red, green and blue lines. Knowing the length of the red line (the camera-to-text-anchor distance), the angle between red and blue ((fov/2)/(screen width/2)), and that the angle between red and green is a right angle, I can use the law of sines to calculate the length of the green line, which is also the scale one texel needs in order to keep the same size in the current projection.
The scale of the text seems to be correct no matter the camera/text orientation/position (the desired 8 pixels). It is possible the scale is just slightly wrong and that results in the blur effect, but I fail to see how.
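As an aside, since sin(x) / sin(π/2 − x) = tan(x), the scale computed above reduces to

scale = tan(FOV / screenWidth) * dist.length() * aspectRatio

which is the same law-of-sines result written more compactly.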
It seems to me that due to numerical issues you're suffering from z-fighting.
Perhaps you can add a small value to the text's z-coordinate to separate it just a bit from its background.
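A minimal sketch of that idea in the vertex shader (the bias value is an arbitrary assumption you would tune for your depth range):

#version 330 core
uniform mat4 MVP;
in vec3 position;

void main() {
    vec4 cpos = MVP * vec4(position, 1.0);
    // Pull the text slightly toward the camera; scaling the bias by w keeps
    // it roughly constant in NDC after the perspective divide.
    cpos.z -= 0.001 * cpos.w;
    gl_Position = cpos;
}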
Another solution is to avoid the text scale-unscale calculation.
You may calculate the pixel anchor point by doing the perspective maths on the CPU side for every text, and then display it with an orthographic projection.

OpenGL GL_Position 3D to 2D Screen Space

I am trying to understand the maths associated with converting a 3D point into a 2D screen position. I understand that the process involves moving from object space -> world space -> eye space -> clip space -> NDC space -> viewport space (the final 2D position on screen).
Vertex shader:
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0); // => clip space
Fragment shader (pseudocode):
// assuming gl_Position is received as a vec4 input variable
vec2 gl_Position_ndc = (gl_Position.xy / gl_Position.w) / 2.0 + 0.5;
(gl_Position_ndc -> gl_FragColor) after the perspective division and conversion to normalized device coordinate space.
Does this perspective divide and NDC conversion happen automatically in the fragment shader to the gl_Position received from the vertex shader, as described above?
Yes, division by w is automatic after you output the clip-space vertex in the vertex shader.
It happens before rasterization, and therefore before the fragment shader runs. Now, one interesting quirk is that in window-space (gl_FragCoord) in a fragment shader, w = 1/clip_w. If you try to do this divide again using gl_FragCoord, you actually undo the perspective division and things will get weird.
There are reasons you might want to divide by 1/clip.w, but this is not one of them.
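For illustration, a small fragment shader sketch of that quirk (the output variable name is arbitrary):

#version 330 core
out vec4 fragColor;

void main() {
    // gl_FragCoord.w holds 1/w_clip, so this recovers the clip-space w
    // that was divided out during rasterization.
    float clipW = 1.0 / gl_FragCoord.w;
    // Dividing gl_FragCoord.xy by gl_FragCoord.w again would not give NDC;
    // it would multiply the window coordinates by clipW instead.
    fragColor = vec4(vec3(clipW * 0.05), 1.0);
}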

Skew an image using openGL shaders

To lighten the workload on my artist, I'm working on making a bare-bones skeletal animation system for images. I've conceptualized how all the parts work. But to make it animate how I want, I need to be able to skew an image in real time. (If I can't do that, I still know how to make it work, but it'd be a lot prettier if I could skew images to add perspective.)
I'm using SFML, which doesn't support image skewing. I've been told I'd need to use OpenGL shaders on the image, but all the advice I got on that was "learn GLSL". So I figured maybe someone over here could help me out a bit.
Does anyone know where I should start with this? Or how to do it?
Basically, I want to be able to deform an image to give it perspective (as shown in the following mockup).
The images on the left are being skewed at the top and bottom. The images on the right are being skewed at the left and right.
An example of how to skew a texture in GLSL would be the following (poorly, poorly optimized) shader. Ideally, you would want to precompute your transform matrix inside your regular program and pass it in as a uniform so that you aren't recomputing the transform on every run through the shader. If you still wanted to compute the transform in the shader, pass the skew factors in as uniforms instead. Otherwise, you'll have to open the shader and edit it every time you want to change the skew factor.
This is for a screen-aligned quad as well.
Vert
attribute vec3 aVertexPosition;
varying vec2 texCoord;

void main(void){
    // Set regular texture coords
    texCoord = ((vec2(aVertexPosition.xy) + 1.0) / 2.0);

    // How much we want to skew each axis by
    float xSkew = 0.0;
    float ySkew = 0.0;

    // Create a transform that will skew our texture coords
    mat3 trans = mat3(
        1.0,        tan(xSkew), 0.0,
        tan(ySkew), 1.0,        0.0,
        0.0,        0.0,        1.0
    );

    // Apply the transform to our tex coords
    texCoord = (trans * (vec3(texCoord.xy, 0.0))).xy;

    // Set vertex position
    gl_Position = (vec4(aVertexPosition, 1.0));
}
Frag
precision highp float;
uniform sampler2D sceneTexture;
varying vec2 texCoord;
void main(void){
    gl_FragColor = texture2D(sceneTexture, texCoord);
}
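A sketch of the uniform-based variant mentioned above (the uSkewX/uSkewY uniform names are made up for illustration):

attribute vec3 aVertexPosition;
uniform float uSkewX;
uniform float uSkewY;
varying vec2 texCoord;

void main(void){
    texCoord = (vec2(aVertexPosition.xy) + 1.0) / 2.0;
    // Same skew matrix as above, but driven by uniforms so the factors can
    // be changed from the application without editing the shader.
    mat3 trans = mat3(
        1.0,         tan(uSkewX), 0.0,
        tan(uSkewY), 1.0,         0.0,
        0.0,         0.0,         1.0
    );
    texCoord = (trans * vec3(texCoord, 0.0)).xy;
    gl_Position = vec4(aVertexPosition, 1.0);
}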
This ended up being significantly simpler than I thought it would be. SFML has a VertexArray class that allows drawing custom quads without requiring the use of OpenGL directly.
The code I ended up going with is as follows (for anyone else who runs into this problem):
sf::Texture texture;
texture.loadFromFile("texture.png");
sf::Vector2u size = texture.getSize();
sf::VertexArray box(sf::Quads, 4);
box[0].position = sf::Vector2f(0, 0); // top left corner position
box[1].position = sf::Vector2f(0, 100); // bottom left corner position
box[2].position = sf::Vector2f(100, 100); // bottom right corner position
box[3].position = sf::Vector2f(100, 0); // top right corner position
box[0].texCoords = sf::Vector2f(0,0);
box[1].texCoords = sf::Vector2f(0,size.y-1);
box[2].texCoords = sf::Vector2f(size.x-1,size.y-1);
box[3].texCoords = sf::Vector2f(size.x-1,0);
To draw it, you call the following code wherever you usually tell your window to draw stuff.
window.draw(box, &texture);
If you want to skew the image, you just change the positions of the corners. Works great. With this information, you should be able to create a custom drawable class. You'll have to write a bit of code (set_position, rotate, skew, etc), but you just need to change the position of the corners and draw.

What's wrong with this shader for a centered zooming effect in Orthographic projection?

I've created a basic orthographic shader that displays sprites from textures. It works great.
I've added a "zoom" factor to it to allow the sprite to scale to become larger or smaller. Assuming that the texture is anchored with its origin in the "lower left", what it does is shrink towards that origin point, or expand from it towards the upper right. What I actually want is to shrink or expand "in place" to stay centered.
So, one way of achieving that would be to figure out how many pixels I'll shrink or expand by, and compensate. I'm not quite sure how I'd do that, and I also know that's not the best way. I fooled around with the order of my translates and scales, thinking I could scale first and then place, but I just got various bad results. I can't wrap my head around a way to solve the issue.
Here's my shader:
// Set up orthographic projection (960 x 640)
mat4 projectionMatrix = mat4( 2.0/960.0, 0.0,       0.0, -1.0,
                              0.0,       2.0/640.0, 0.0, -1.0,
                              0.0,       0.0,      -1.0,  0.0,
                              0.0,       0.0,       0.0,  1.0);
void main()
{
    // Set position
    gl_Position = a_position;

    // Translate by the uniforms for offsetting
    gl_Position.x += translateX;
    gl_Position.y += translateY;

    // Apply our (pre-computed) zoom factor to the X and Y of our matrix
    projectionMatrix[0][0] *= zoomFactorX;
    projectionMatrix[1][1] *= zoomFactorY;

    // Translate
    gl_Position *= projectionMatrix;

    // Pass right along to the frag shader
    v_texCoord = a_texCoord;
}
mat4 projectionMatrix =
Matrices in GLSL are constructed column-wise. For a mat4, the first 4 values are the first column, then the next 4 values are the second column and so on.
You transposed your matrix.
Also, what are those -1's for?
For the rest of your question, scaling is not something the projection matrix should be dealing with. Not the kind of scaling you're talking about. Scales should be applied to the positions before you multiply them with the projection matrix. Just like for 3D objects.
You didn't post what your sprite's vertex data is, so there's no way to know for sure. But the way it ought to work is that the vertex positions for the sprite should be centered at the sprite's center (which is wherever you define it to be).
So if you have a 16x24 sprite, and you want the center of the sprite to be offset 8 pixels right and 8 pixels up, then your sprite rectangle should be (-8, -8) to (8, 16) (from a bottom-left coordinate system).
Then, if you scale it, it will scale around the center of the sprite's coordinate system.
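To make that concrete, here is a rough sketch along those lines: the projection matrix written column by column, and the zoom applied to the sprite-local position before projecting. The u_translate/u_zoom uniform names and the pixel-space input are assumptions for illustration, not from the question:

attribute vec4 a_position;   // sprite-local position, centered on the sprite's own origin
attribute vec2 a_texCoord;
uniform vec2 u_translate;    // where to place the sprite's center, in pixels
uniform vec2 u_zoom;         // per-axis zoom factor
varying vec2 v_texCoord;

// 960 x 640 orthographic projection, written column by column.
mat4 projectionMatrix = mat4( 2.0/960.0, 0.0,       0.0, 0.0,   // column 0
                              0.0,       2.0/640.0, 0.0, 0.0,   // column 1
                              0.0,       0.0,      -1.0, 0.0,   // column 2
                             -1.0,      -1.0,       0.0, 1.0);  // column 3

void main()
{
    // Scale around the sprite's own center first, then translate into place.
    vec2 pos = a_position.xy * u_zoom + u_translate;
    gl_Position = projectionMatrix * vec4(pos, a_position.z, 1.0);
    v_texCoord = a_texCoord;
}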