Render objects with vertices and normals - OpenGL

What's the easiest way to render an object when its vertices and normals are given?

Draw the vertices themselves as points (position, normal, colour), then draw a line from each vertex to vertex + normal (assuming the normal is unit length) and give it a colour, so each normal shows up as a short coloured line.
For normal mapping, colour RGB = normal XYZ * 0.5 + (0.5, 0.5, 0.5).
Halving the normal and then adding (0.5, 0.5, 0.5) effectively remaps each component from [-1, 1] to [0, 1], so no colour component is ever negative.
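For the colour encoding in particular, a minimal GLSL sketch of the idea might look like the following; the attribute, uniform and varying names are illustrative assumptions, not from the original answer.
// Vertex shader (sketch): pass the normal through to the fragment stage.
attribute vec3 a_position;
attribute vec3 a_normal;
uniform mat4 u_mvp;
varying vec3 v_normal;

void main()
{
    v_normal = a_normal;
    gl_Position = u_mvp * vec4(a_position, 1.0);
}

// Fragment shader (sketch): remap the unit normal from [-1, 1] to [0, 1]
// and output it directly as a colour.
precision mediump float;
varying vec3 v_normal;

void main()
{
    vec3 n = normalize(v_normal);
    gl_FragColor = vec4(n * 0.5 + vec3(0.5), 1.0);
}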

Related

How does the coordinate system work for 3D textures in OpenGL?

I am attempting to write to and read from a 3D texture, but it seems my mapping is wrong. I have used RenderDoc to check the textures and they look OK.
A random layer of this volumetric texture shows just some blue to denote absence and some green values to denote presence.
The coordinates I calculate when I write to each layer are calculated in the vertex shader as:
pos.x = (2.f*pos.x-width+2)/(width-2);
pos.y = (2.f*pos.y-depth+2)/(depth-2);
pos.z -= level;
pos.z *= 1.f/voxel_size;
gl_Position = pos;
Since the texture itself looks OK, these coordinates seem to achieve my goal.
It's important to note that right now voxel_size is 1 and the scale of the texture is supposed to be 1 to 1 with the scene dimensions. In essence, each pixel in the texture represents a 1x1x1 voxel in the scene.
Next I attempt to fetch the texture values as follows:
vec3 pos = vertexPos;
pos.x = (2.f*pos.x-width+2)/(width-2);
pos.y = (2.f*pos.y-depth+2)/(depth-2);
pos.z *= 1.f/(4*16);
outColor = texture(voxel_map, pos);
Where vertexPos is the global vertex position in the scene. The z coordinate may be completely wrong (I am not sure whether I am supposed to normalize the depth component or not), but that is not the only issue. Looking at the final result, there is a horizontal scale problem. Since each texel represents a voxel, the color of a cube should always be a single fixed color, but as you can see I am getting multiple colors for a single cube on the top faces. So my horizontal scale is wrong.
What am I doing wrong when fetching the texels from the texture?
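For reference, OpenGL samples a 3D texture with normalized coordinates: all three components passed to texture() on a sampler3D are expected in [0, 1], with (0, 0, 0) at the first texel of the first layer. A hedged sketch of mapping a world-space position into that range, assuming the volume covers the scene one texel per world unit as described above (the volume_origin and volume_size uniforms are illustrative assumptions):
uniform sampler3D voxel_map;
uniform vec3 volume_origin;   // world-space corner that maps to texture coordinate (0, 0, 0)
uniform vec3 volume_size;     // world-space extent of the volume on each axis

vec4 sampleVoxel(vec3 worldPos)
{
    // texture() on a sampler3D expects x, y and z all normalized to [0, 1].
    vec3 texCoord = (worldPos - volume_origin) / volume_size;
    return texture(voxel_map, texCoord);
}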

Glow effect on simple rectangle in OpenGL ES

I would like to create a glow effect on a rectangle:
I don't really know where to start with the fragment shader.
Actually, I would like to achieve this effect on shapes in general (circles, polygons, rectangles). There is no real border color; the edges are just blurry.
One of the ways:
If you have a rectangle defined by 4 lines (4 points) and a model matrix, multiply the 4 points by the model matrix and send them as a uniform to the fragment shader. In the vertex shader, create another varying for the position, which is the input position multiplied by the model matrix only. A glow radius must also be sent as a uniform.
Now in the fragment shader, write code for each of the point pairs representing a line and compute the distance to it. If the distance is smaller than the radius, it contributes a color scale for the border. The sum of all 4 contributions is then used as the border color factor:
scale += 1.0-(clamp(currentDistanceToLeftBorder/radius, .0, 1.0));
scale += 1.0-(clamp(currentDistanceToTopBorder/radius, .0, 1.0));
scale += 1.0-(clamp(currentDistanceToRightBorder/radius, .0, 1.0));
scale += 1.0-(clamp(currentDistanceToBottomBorder/radius, .0, 1.0));
Then mix the colors:
color = mix(defaultColor, borderColor, clamp(scale, .0, 1.0));
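Putting those pieces together, a minimal fragment-shader sketch might look like the following. The point-to-segment distance helper and the uniform/varying names are assumptions for illustration, not from the original answer; the corners are assumed to be passed in already multiplied by the model matrix, as described above.
precision mediump float;

uniform vec2  corners[4];     // rectangle corners, pre-multiplied by the model matrix
uniform float radius;         // glow radius, in the same units as the corners
uniform vec4  defaultColor;   // fill color
uniform vec4  borderColor;    // glow color

varying vec2  v_worldPos;     // input position * model matrix, from the vertex shader

// Distance from point p to the segment a-b.
float distToSegment(vec2 p, vec2 a, vec2 b)
{
    vec2 ab = b - a;
    float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
    return length(p - (a + t * ab));
}

void main()
{
    float scale = 0.0;
    scale += 1.0 - clamp(distToSegment(v_worldPos, corners[0], corners[1]) / radius, 0.0, 1.0); // left
    scale += 1.0 - clamp(distToSegment(v_worldPos, corners[1], corners[2]) / radius, 0.0, 1.0); // top
    scale += 1.0 - clamp(distToSegment(v_worldPos, corners[2], corners[3]) / radius, 0.0, 1.0); // right
    scale += 1.0 - clamp(distToSegment(v_worldPos, corners[3], corners[0]) / radius, 0.0, 1.0); // bottom
    gl_FragColor = mix(defaultColor, borderColor, clamp(scale, 0.0, 1.0));
}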

Reconstructed position from depth - How to handle precision issues?

In my deferred renderer, I've managed to successfully reconstruct my fragment position from the depth buffer.... mostly. By comparing my results to the position stored in an extra buffer, I've noticed that I'm getting a lot of popping far away from the screen. Here's a screenshot of what I'm seeing:
The green and yellow parts at the top are just the skybox, where the position buffer contains (0, 0, 0) but the reconstruction algorithm interprets it as a normal fragment with depth = 0.0 (or 1.0?).
The scene is rendered using fragColor = vec4(0.5 + (reconstPos - bufferPos.xyz), 1.0);, so anywhere that the resulting fragment is exactly (0.5, 0.5, 0.5) is where the reconstruction and the buffer have the exact same value. Imprecision towards the back of the depth buffer is to be expected, but that magenta and blue seems a bit strange.
This is how I reconstruct the position from the depth buffer:
vec3 reconstructPositionWithMat(vec2 texCoord)
{
    float depth = texture2D(depthBuffer, texCoord).x;
    depth = (depth * 2.0) - 1.0;
    vec2 ndc = (texCoord * 2.0) - 1.0;
    vec4 pos = vec4(ndc, depth, 1.0);
    pos = matInvProj * pos;
    return vec3(pos.xyz / pos.w);
}
Where texCoord = gl_FragCoord.xy / textureSize(colorBuffer, 0);, and matInvProj is the inverse of the projection matrix used to render the gbuffer.
Right now my position buffer is GL_RGBA32F (since it's only for testing accuracy, I don't care as much about bandwidth and memory waste), and my depth buffer is GL_DEPTH24_STENCIL8 (I got similar results from GL_DEPTH_COMPONENT32, and yes, I do need the stencil buffer).
My znear is 0.01f, and zfar is 1000.0f. I'm rendering a single quad as my ground which is 2000.0f x 2000.0f large (I wanted it to be big enough that it would clip with the far plane).
Is this level of imprecision considered acceptable? What are some ways that people have gotten around this problem? Is there something wrong with how I reconstruct the view/eye-space position?
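For reference, a minimal sketch of how the comparison pass described above could fit together, assuming a GLSL 330 core shader; the positionBuffer sampler name is an assumption for illustration, while the other names follow the snippets in the question.
#version 330 core

uniform sampler2D depthBuffer;
uniform sampler2D colorBuffer;
uniform sampler2D positionBuffer;   // GL_RGBA32F attachment holding the reference positions
uniform mat4 matInvProj;            // inverse of the projection matrix used for the gbuffer

out vec4 fragColor;

vec3 reconstructPositionWithMat(vec2 texCoord)
{
    float depth = texture(depthBuffer, texCoord).x;
    depth = (depth * 2.0) - 1.0;                     // depth buffer value to NDC z
    vec2 ndc = (texCoord * 2.0) - 1.0;               // texture coordinates to NDC x/y
    vec4 pos = matInvProj * vec4(ndc, depth, 1.0);
    return pos.xyz / pos.w;                          // view/eye-space position
}

void main()
{
    vec2 texCoord = gl_FragCoord.xy / vec2(textureSize(colorBuffer, 0));
    vec3 reconstPos = reconstructPositionWithMat(texCoord);
    vec4 bufferPos  = texture(positionBuffer, texCoord);
    // Mid-grey (0.5, 0.5, 0.5) means the reconstruction and the buffer agree exactly.
    fragColor = vec4(0.5 + (reconstPos - bufferPos.xyz), 1.0);
}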

Bias matrix in shadow mapping confuse

I am confused about the bias matrix in shadow mapping. According to this question: bias matrix in shadow mapping, the bias matrix is used to scale down and translate to [0..1] in x and [0..1] in y. So I imagine that if we don't use the bias matrix, the texture would only be filled with a quarter of the scene? Is that true? Or is there some magic here?
Not entirely, but the result is the same. As the answer from the question you linked says, after the w divide your coordinates are in NDC space, i.e. in the range [-1, 1] (x, y and z). Now, when you're sampling from a texture, the coordinates you give should be in 'texture space', and OpenGL defines that space to be in the range [0, 1] (at least for 2D textures), with x=0, y=0 being the bottom left of the texture and x=1, y=1 the top right.
This means that when you go to sample from your rendered depth texture, you have to transform your calculated texture coordinates from [-1, 1] to [0, 1]. If you don't do this, the texture will be fine, but only a quarter of your coordinates will fall in the range you actually want to sample from.
You don't want to bias the objects to be rendered to the depth texture, as OpenGL will transform the coordinates from NDC to window coordinates (the window being your texture in this case, use glViewport for the correct transformation) for you.
To apply the bias to your texture coordinates you can use a texture bias matrix, and multiply it by your projection matrix, so the shaders don't have to worry about it. The post you linked already gave that matrix:
const GLdouble bias[16] = {
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0};
Provided your matrices are column major, this matrix transforms [-1, 1] to [0, 1]: it first multiplies by 0.5 and then adds 0.5. If your matrices are row major, simply transpose the matrix and you're good to go.
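As a hedged sketch of how that could look on the GLSL side (the uniform names, the light matrices and the camera MVP are assumptions for illustration):
attribute vec3 a_position;

uniform mat4 cameraMVP;         // regular camera transform for the visible pass
uniform mat4 lightProjection;   // projection matrix used when rendering the depth texture
uniform mat4 lightView;         // view matrix of the light
uniform mat4 model;

varying vec4 v_shadowCoord;

// Column-major bias matrix, same values as the array above.
const mat4 biasMatrix = mat4(
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0);

void main()
{
    gl_Position = cameraMVP * vec4(a_position, 1.0);
    // The bias is folded into the matrix chain, so after the perspective divide
    // the shadow coordinates are already in [0, 1] and can sample the depth texture directly.
    v_shadowCoord = biasMatrix * lightProjection * lightView * model * vec4(a_position, 1.0);
}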
Hope this helped.

What's wrong with this shader for a centered zooming effect in orthographic projection?

I've created a basic orthographic shader that displays sprites from textures. It works great.
I've added a "zoom" factor to it to allow the sprite to scale to become larger or smaller. Assuming that the texture is anchored with its origin in the "lower left", what it does is shrink towards that origin point, or expand from it towards the upper right. What I actually want is to shrink or expand "in place" to stay centered.
So, one way of achieving that would be to figure out how many pixels I'll shrink or expand by, and compensate. I'm not quite sure how I'd do that, and I also know that's not the best way. I fooled with the order of my translates and scales, thinking I could scale first and then place, but I just got various bad results. I can't wrap my head around a way to solve the issue.
Here's my shader:
// Set up orthographic projection (960 x 640)
mat4 projectionMatrix = mat4( 2.0/960.0, 0.0,       0.0, -1.0,
                              0.0,       2.0/640.0, 0.0, -1.0,
                              0.0,       0.0,      -1.0,  0.0,
                              0.0,       0.0,       0.0,  1.0);
void main()
{
    // Set position
    gl_Position = a_position;
    // Translate by the uniforms for offsetting
    gl_Position.x += translateX;
    gl_Position.y += translateY;
    // Apply our (pre-computed) zoom factor to the X and Y of our matrix
    projectionMatrix[0][0] *= zoomFactorX;
    projectionMatrix[1][1] *= zoomFactorY;
    // Translate
    gl_Position *= projectionMatrix;
    // Pass right along to the frag shader
    v_texCoord = a_texCoord;
}
mat4 projectionMatrix =
Matrices in GLSL are constructed column-wise. For a mat4, the first 4 values are the first column, then the next 4 values are the second column and so on.
You transposed your matrix.
Also, what are those -1's for?
For the rest of your question, scaling is not something the projection matrix should be dealing with. Not the kind of scaling you're talking about. Scales should be applied to the positions before you multiply them with the projection matrix. Just like for 3D objects.
You didn't post what your sprite's vertex data is, so there's no way to know for sure. But the way it ought to work is that the vertex positions for the sprite should be centered at the sprite's center (which is wherever you define it to be).
So if you have a 16x24 sprite, and you want the center of the sprite to be offset 8 pixels right and 8 pixels up, then your sprite rectangle should be (-8, -8) to (8, 16) (from a bottom-left coordinate system).
Then, if you scale it, it will scale around the center of the sprite's coordinate system.
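A hedged vertex-shader sketch of that approach is below. The uniform and attribute names follow the question; the centered a_position data and the column-major projection matrix (equivalent to the one in the question, but laid out correctly) are assumptions based on the advice above.
// Column-major orthographic projection for a 960 x 640 viewport:
// each group of four values is one column, so the translation sits in the last column.
const mat4 projectionMatrix = mat4(
    2.0/960.0, 0.0,       0.0,  0.0,
    0.0,       2.0/640.0, 0.0,  0.0,
    0.0,       0.0,      -1.0,  0.0,
   -1.0,      -1.0,       0.0,  1.0);

attribute vec4 a_position;   // sprite vertices centered on the sprite's own origin
attribute vec2 a_texCoord;

uniform float translateX;
uniform float translateY;
uniform float zoomFactorX;
uniform float zoomFactorY;

varying vec2 v_texCoord;

void main()
{
    vec4 pos = a_position;
    // Scale around the sprite's center first...
    pos.x *= zoomFactorX;
    pos.y *= zoomFactorY;
    // ...then place the sprite in the scene...
    pos.x += translateX;
    pos.y += translateY;
    // ...and only then apply the projection.
    gl_Position = projectionMatrix * pos;
    v_texCoord = a_texCoord;
}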