Simple, most basic SSR in GLSL - OpenGL

I'm experimenting with implementing "as simple as possible" SSR in GLSL. Any chance someone could please help me set up an extremely basic SSR shader?
I do not need (for now) any roughness/metalness calculations, no Fresnel, no fade-in/out effect, nothing fancy - I just want the simplest setup that I can understand, learn from, and maybe later improve upon.
I have 4 source textures: Color, Position, Normal, and a copy of the previous final frame's image (Reflection).
- attributes.position is the position texture (FLOAT16F), in world space
- attributes.normal is the normal texture (FLOAT16F), in world space
- sys_CameraPosition is the eye's position in world space
- rd is supposed to be the reflection direction in world space
- texture(SOURCE_2, projectedCoord.xy) is the position texture at the current reflection-dir endpoint
vec3 rd = normalize(reflect(attributes.position - sys_CameraPosition, attributes.normal));
vec2 uvs;
vec4 projectedCoord;
for (int i = 0; i < 10; i++)
{
    // Calculate screen-space position from the ray's current world position:
    projectedCoord = sys_ProjectionMatrix * sys_ViewMatrix * vec4(attributes.position + rd, 1.0);
    projectedCoord.xy /= projectedCoord.w;
    projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
    // this bit is tripping me up
    if (distance(texture(SOURCE_2, projectedCoord.xy).xyz, attributes.position + rd) > 0.1)
    {
        rd += rd; // no hit yet: double the march vector and try again
    }
    else
    {
        uvs = projectedCoord.xy;
        break; // hit: keep these UVs
    }
}
out_color += texture(SOURCE_REFL, uvs).rgb;
Is this even possible using world-space coordinates? When I multiply the first pass's outputs by the view matrix as well as the model matrix, my lighting calculations fall apart, because they are also done in world space...
Unfortunately there are no basic SSR tutorials on the internet that explain just the SSR part and nothing else, so I thought I'd give it a shot here; I really can't seem to get my head around this...
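To check my understanding, here is the projection step of the loop sketched in Python (plain row-major 4x4 lists and a toy stand-in projection, not real engine code), plus what the rd += rd stepping does to the sample spacing:

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def project_to_uv(proj_view, world_pos):
    """Clip-space projection, perspective divide, then remap NDC xy to [0, 1] UVs."""
    x, y, z, w = mat_vec(proj_view, [world_pos[0], world_pos[1], world_pos[2], 1.0])
    return (x / w * 0.5 + 0.5, y / w * 0.5 + 0.5)

# Toy matrix that just copies z into w - a stand-in for a real projection:
proj_view = [[1, 0, 0, 0],
             [0, 1, 0, 0],
             [0, 0, 1, 0],
             [0, 0, 1, 0]]

# A point at z = 2, offset x = 1, lands at NDC x = 0.5, i.e. u = 0.75:
u, v = project_to_uv(proj_view, (1.0, 0.0, 2.0))

# "rd += rd" doubles the march vector on every miss, so the samples sit at
# 1, 2, 4, 8, ... times the initial step - very coarse after a few iterations:
steps = []
rd = 1.0
for _ in range(4):
    steps.append(rd)
    rd += rd
```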

Related

Deferred Rendering Shadows Strange Behaviour (almost working, not acne)

I have a simple deferred rendering setup and I'm trying to get depth map shadows to work. I know that I have a simple error in my code because I can see the shadows working, but they seem to move and lose resolution when I move my camera. This is not a Shadow Acne Problem. The shadow is shifted entirely. I have also seen most of the similar questions here, but none solve my problem.
In my final shader, I have a texture for world space positions of pixels, and the depth map rendered from the light source, as well as the light source's model-view-projection matrix. The steps I take are:
Get worldspace position of pixel from pre-rendered pass.
Multiply worldspace position with the light source's model-view-projection matrix. I am using orthogonal projection (a directional light).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-0.0025, 0.0025, -0.0025, 0.0025, -0.1, 0.1);
glGetFloatv(GL_PROJECTION_MATRIX,m_lightViewProjectionMatrix);
I place the directional light at a fixed distance around my hero. Here is how I get my light's modelview matrix:
CSRTTransform lightTrans = CSRTTransform();
CVector camOffset = CVector(m_vPlanet[0].m_vLightPos);
camOffset.Normalize();
camOffset *= 0.007;
camOffset += hero.GetLocation();
CVector LPN = m_vLightPos * (-1); LPN.Normalize();
CQuaternion lRot;
lRot = CQuaternion(CVector(0,1,0), asin(LPN | CVector(-1,0,0)));
lightTrans.m_qRotate = lRot;
lightTrans.m_vTranslate = camOffset;
m_worldToLightViewMatrix = lightTrans.BuildViewMatrix();
And the final light mvp matrix is:
CMatrix4 lightMVPMatrix = m_lightViewProjectionMatrix * m_worldToLightViewMatrix;
m_shHDR.SetUniformMatrix("LightMVPMatrix", lightMVPMatrix);
For now I am only rendering the shadow casters in my shadow pass. As you can see, the shadow pass seems fine: my hero is centered in the frame and rotated correctly. It is worth noting that these two matrices are passed to the hero's vertex shader, and they seem to be correct because the hero is rendered correctly in the shadow pass (shown here with more contrast for visibility).
https://lh3.googleusercontent.com/-pxBZ5jnmlfM/V0SBT75yB1I/AAAAAAAABEY/007j_toVO7M41iyEiEnJgvr7K1m5GSceQCCo/s1024/shadow_pass.jpg
And finally, in my deferred shader I do:
vec4 projectedWorldPos = LightMVPMatrix * vec4(worldPos, 1.0);
vec3 projCoords = projectedWorldPos.xyz; // / projectedWorldPos.w;
projCoords = (projCoords * 0.5) + 0.5;
float calc_depth = projCoords.z;
float tD = texture2DRect(shadowPass, vec2(projCoords.x * 1280.0, projCoords.y * 720.0)).r;
return (calc_depth < tD);
I have the /w division commented out because I'm using an orthographic projection; including it produces the same result anyway.
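To sanity-check the math, here is the same transform chain sketched in Python (a toy glOrtho-style matrix and a made-up point, not my actual scene values). With an orthographic projection, w stays 1, which is why the /w divide can be skipped:

```python
def ortho(l, r, b, t, n, f):
    """glOrtho-style orthographic matrix, as a list of rows."""
    return [[2/(r-l), 0, 0, -(r+l)/(r-l)],
            [0, 2/(t-b), 0, -(t+b)/(t-b)],
            [0, 0, -2/(f-n), -(f+n)/(f-n)],
            [0, 0, 0, 1]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# Same bounds as the glOrtho call above:
light_mvp = ortho(-0.0025, 0.0025, -0.0025, 0.0025, -0.1, 0.1)

# A made-up point, already in the light's view space for this sketch:
x, y, z, w = mat_vec(light_mvp, [0.001, -0.001, 0.05, 1.0])
proj = [c * 0.5 + 0.5 for c in (x, y, z)]  # NDC -> [0, 1] range
# proj[0], proj[1] index the shadow map; proj[2] compares against the stored depth
```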
https://lh3.googleusercontent.com/-b2i7AD_Nnf0/V0SGPyfelsI/AAAAAAAABFE/mvWnhcdQSbsU3l8sd0974jWDA94r6PkxACCo/s1024/render3.jpg
In certain positions for my hero (close to initial position) (1), the shadow looks fine. But as soon as I move the hero, the shadow moves incorrectly (2), and the further away I move, the shadow starts to lose resolution (3).
It is worth noting all my textures and passes are the same size (1280x720). I'm using an NVidia graphics card. The problem seems to be matrix related, but as mentioned, the shadow pass renders OK so I'm not sure what's going on...

Cascaded Shadow maps not quite right

Ok. So, I've been messing around with shadows in my game engine for the last week. I've mostly implemented cascading shadow maps (CSM), but I'm having a bit of a problem with shadowing that I just can't seem to solve.
The only light in this scene is a directional light (sun), pointing {-0.1 -0.25 -0.65}. I calculate 4 sets of frustum bounds for the four splits of my CSMs with this code:
// each projection matrix calculated with same near plane, different far
Frustum make_worldFrustum(const glm::mat4& _invProjView) {
    Frustum fr; glm::vec4 temp;
    temp = _invProjView * glm::vec4(-1, -1, -1, 1);
    fr.xyz = glm::vec3(temp) / temp.w;
    temp = _invProjView * glm::vec4(-1, -1, 1, 1);
    fr.xyZ = glm::vec3(temp) / temp.w;
    // ...etc, 6 more times for the NDC cube
    return fr;
}
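For clarity, the same corner unprojection sketched in Python (an identity matrix stands in for the real inverse proj-view; names are illustrative): push the 8 NDC cube corners through the inverse matrix and divide by w.

```python
def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def frustum_corners_world(inv_proj_view):
    """World-space corners of the frustum: unproject the 8 NDC cube corners."""
    corners = []
    for z in (-1, 1):
        for y in (-1, 1):
            for x in (-1, 1):
                cx, cy, cz, cw = mat_vec(inv_proj_view, [x, y, z, 1])
                corners.append((cx / cw, cy / cw, cz / cw))
    return corners

# With an identity inverse matrix, the corners are just the NDC cube itself:
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
corners = frustum_corners_world(identity)
```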
For the light, I get a view matrix like this:
glm::mat4 viewMat = glm::lookAt(cam.pos, cam.pos + lightDir, {0,0,1});
I then create each ortho matrix from the bounds of each frustum:
lightMatVec.clear();
for (auto& frus : cam.frusVec) {
    glm::vec3 arr[8] {
        glm::vec3(viewMat * glm::vec4(frus.xyz, 1)),
        glm::vec3(viewMat * glm::vec4(frus.xyZ, 1)),
        // etc...
    };
    glm::vec3 minO = {INFINITY, INFINITY, INFINITY};
    glm::vec3 maxO = {-INFINITY, -INFINITY, -INFINITY};
    for (auto& vec : arr) {
        minO = glm::min(minO, vec);
        maxO = glm::max(maxO, vec);
    }
    glm::mat4 projMat = glm::ortho(minO.x, maxO.x, minO.y, maxO.y, minO.z, maxO.z);
    lightMatVec.push_back(projMat * viewMat);
}
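In other words, the ortho fit is just an axis-aligned min/max over the split's corners once they are in light view space. A Python sketch with made-up corner values, just to show the bound computation:

```python
def light_space_bounds(corners):
    """Per-axis min/max over corners already transformed by the light view matrix."""
    mins = [min(c[i] for c in corners) for i in range(3)]
    maxs = [max(c[i] for c in corners) for i in range(3)]
    return mins, maxs

# Eight made-up light-space corners of one split:
corners = [(-3, 1, -5), (4, -2, 7), (0, 6, 1), (-1, -4, 2),
           (2, 3, -1), (5, 0, 4), (-2, 2, 6), (1, -3, -4)]
mins, maxs = light_space_bounds(corners)
# glm::ortho(mins[0], maxs[0], mins[1], maxs[1], mins[2], maxs[2]) would follow
```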
I have a 4 layer TEXTURE_2D_ARRAY bound to 4 framebuffers that I draw the scene into with a very simple vertex shader (frag disabled or punchthrough alpha).
I then draw the final scene. The vertex shader outputs four shadow texcoords:
out vec3 slShadcrd[4];
// stuff
for (int i = 0; i < 4; i++) {
    vec4 sc = WorldBlock.skylMatArr[i] * vec4(world_pos, 1);
    slShadcrd[i] = sc.xyz / sc.w * 0.5f + 0.5f;
}
And a fragment shader, which determines the split to use with:
int csmIndex = 0;
for (uint i = 0u; i < CameraBlock.csmCnt; i++) {
    if (-view_pos.z > CameraBlock.csmSplits[i]) csmIndex++;
    else break;
}
And samples the shadow map array with this function:
float sample_shadow(vec3 _sc, int _csmIndex, sampler2DArrayShadow _tex) {
    // texture() on a shadow sampler already returns a float comparison result
    return texture(_tex, vec4(_sc.xy, _csmIndex, _sc.z));
}
And, this is the scene I get (with each split slightly tinted and the 4 depth layers overlayed):
Great! Looks good.
But, if I turn the camera slightly to the right:
Then shadows start disappearing (and depending on the angle, appearing where they shouldn't be).
I have GL_DEPTH_CLAMP enabled, so that isn't the issue. I'm culling front faces, but turning that off doesn't make a difference to this issue.
What am I missing? I feel like it's an issue with one of my projections, but they all look right to me. Thanks!
EDIT:
All four of the light's frustums drawn. They are all there, but only z is changing relative to the camera (see comment below):
EDIT:
Probably more useful, this is how the frustums look when I only update them once, when the camera is at (0,0,0) and pointing forwards (0,1,0). Also I drew them with depth testing this time.
IMPORTANT EDIT:
It seems that this issue is directly related to the light's view matrix, currently:
glm::mat4 viewMat = glm::lookAt(cam.pos, cam.pos + lightDir, {0,0,1});
Changing the values for eye and target seems to affect the buggered shadows. But I don't know what I should actually be setting this to? Should be easy for someone with a better understanding than me :D
Solved it! It was indeed an issue with the light's view matrix! All I had to do was replace camPos with the centre point of each frustum! Meaning that each split's light matrix needed a different view matrix. So I just create each view matrix like this...
glm::mat4 viewMat = glm::lookAt(frusCentre, frusCentre+lightDir, {0,0,1});
And get frusCentre simply...
glm::vec3 calc_frusCentre(const Frustum& _frus) {
    glm::vec3 min(INFINITY, INFINITY, INFINITY);
    glm::vec3 max(-INFINITY, -INFINITY, -INFINITY);
    for (auto& vec : {_frus.xyz, _frus.xyZ, _frus.xYz, _frus.xYZ,
                      _frus.Xyz, _frus.XyZ, _frus.XYz, _frus.XYZ}) {
        min = glm::min(min, vec);
        max = glm::max(max, vec);
    }
    return (min + max) / 2.f;
}
And bam! Everything works spectacularly!
EDIT (Last one!):
What I had was not quite right. The view matrix should actually be:
glm::lookAt(frusCentre-lightDir, frusCentre, {0,0,1});
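In Python terms, the relationship between eye and target is simply this (illustrative values; the point is that the eye sits one lightDir behind the split's centre, so the view axis is exactly the light direction):

```python
def look_at_args(frus_centre, light_dir):
    """Eye/target pair for a per-split light view matrix, as in the fix above."""
    eye = tuple(c - d for c, d in zip(frus_centre, light_dir))
    target = frus_centre  # eye + light_dir == centre
    return eye, target

# Made-up split centre; the sun direction from the question:
eye, target = look_at_args((10.0, 5.0, 2.0), (-0.1, -0.25, -0.65))
forward = tuple(t - e for e, t in zip(eye, target))  # should equal light_dir
```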

Plotting data accurately (without lines): texture, ortho projection, or something else?

I have some scatter-graph data that I want to plot accurately. The actual data is a grid of points.
If I simply put all the graph data into a VBO and draw it as points, I see lines between my data points.
I believe this is to do with the conversion into screen space, so I need to apply a projection matrix.
I'm sure I want an ortho projection, but currently I seem to have a bug in my ortho matrix generation:
void OrthoMatrix(float[] matrix, float left, float right, float top, float bottom, float near, float far)
{
    float r_l = right - left;
    float t_b = top - bottom;
    float f_n = far - near;
    float tx = -(right + left) / (right - left);
    float ty = -(top + bottom) / (top - bottom);
    float tz = -(far + near) / (far - near);
    matrix[0] = 2.0f / r_l;
    matrix[1] = 0.0f;
    matrix[2] = 0.0f;
    matrix[3] = tx;
    matrix[4] = 0.0f;
    matrix[5] = 2.0f / t_b;
    matrix[6] = 0.0f;
    matrix[7] = ty;
    matrix[8] = 0.0f;
    matrix[9] = 0.0f;
    matrix[10] = 2.0f / f_n;
    matrix[11] = tz;
    matrix[12] = 0.0f;
    matrix[13] = 0.0f;
    matrix[14] = 0.0f;
    matrix[15] = 1.0f;
}
But I can't work it out. My shader is:
gl_Position = projectionMatrix * vec4(position, 0, 1.0);
and my data is plotted like this (subset):
-15.9 -9.6
-15.8 -9.6
-15.7 -9.6
-15.6 -9.6
-16.4 -9.5
-16.3 -9.5
-16.2 -9.5
-16.1 -9.5
-16 -9.5
-15.9 -9.5
The image on the left is correct (but with lines) and the image on the right is with ortho:
A colleague has suggested first putting the data into a bitmap that conforms to the plot region and then loading that. Which to some degree makes sense, but it seems like a step backwards, particularly as the image is still "stretched" and in reality we are just filling in those gaps.
EDIT
I tried a frustum projection using glm.net and that works exactly how I want it to.
The frustum function seems to take similar parameters to ortho... I think it's time I went and read up a bit more about projection matrices!
If anyone can explain the strange image I got (the ortho one), that would be fantastic.
EDIT 2
Now that I am adding zooming I am seeing the same lines. I will have to either draw them as quads or map the point size to the zoom level.
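EDIT 3
For my own reference, here is a Python sketch of the standard column-major ortho layout that glLoadMatrixf/glUniformMatrix expect when given an untransposed array. Note the translation terms land at indices 12-14, not 3/7/11 as in my code above, and the z scale is negative:

```python
def ortho_column_major(left, right, bottom, top, near, far):
    """Standard OpenGL orthographic matrix as a flat 16-float column-major array."""
    m = [0.0] * 16
    m[0]  = 2.0 / (right - left)
    m[5]  = 2.0 / (top - bottom)
    m[10] = -2.0 / (far - near)
    m[12] = -(right + left) / (right - left)   # translation in column 4
    m[13] = -(top + bottom) / (top - bottom)
    m[14] = -(far + near) / (far - near)
    m[15] = 1.0
    return m

# Symmetric unit bounds map the [-1, 1] cube to itself (with z flipped):
m = ortho_column_major(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0)
```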
The "lines" you see are actually an artifact of your data sample locations being coerced onto the pixel grid, which of course involves a rounding step. OpenGL assumes pixel centers to be at x+0.5 in viewport coordinates (i.e. after the NDC-to-viewport mapping).
The only way to get rid of these lines is to increase the resolution at which points are mapped into viewport space. But your viewport only contains so many pixels, so what can you do? The answer is "supersampling" or "multisampling", i.e. have several samples per pixel, where the coverage of these sample points modulates the rasterization weighting. If you fear implementing this yourself, fear not: while implementing it "by hand" is certainly possible, it's not efficient. Most (if not all) modern GPUs come with some kind of support for multisampling, usually advertised as "antialiasing".
So that's what you should do: create an OpenGL context with full-screen antialiasing support, enable multisampling, and watch the grid-frequency beat artifacts (aliasing) vanish.
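You can see the beat pattern numerically with a tiny sketch (Python, with an illustrative spacing of 1.3 pixels per sample): the rounded pixel positions have an uneven gap pattern (1, 2, 1, 1, ...), and the "2" gaps are the empty columns that read as lines.

```python
# Points spaced 1.3 px apart in viewport space, snapped to the pixel grid:
pixel_x = [round(i * 1.3) for i in range(100)]
gaps = [b - a for a, b in zip(pixel_x, pixel_x[1:])]
# The beat between the 1.3 px data pitch and the 1 px grid shows up as
# a mix of 1-pixel and 2-pixel gaps between consecutive plotted columns.
```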

OpenGL 440 - controlling line thickness in the fragment shader

I am drawing a 3D spherical grid in opengl using a VBO of vertex points and GL_LINES. What I want to achieve is to have one line - the zenith - to be brighter than the rest.
I store x, y, z coords and normals, and figured I might be able to use the texture coordinates to "tag" locations where, at creation, the y coordinate is 0. Like so:
if (round(y) == 0.0f) {
    _varray[nr].tex[0] = -1.0; // setting the s variable (s,t texcoord), passed in with the VBO
}
Now in the fragment shader I receive this value and do:
if (vs_st[0] == -1) {
    diffuse = gridColor * 2.f;
} else {
    diffuse = gridColor;
}
And the result looks kind of awful:
Print Screen
I realize that this is probably due to the fragment shader interpolating between two points. Can you guys think of a good way to identify the zenith line and make it brighter? I'd rather avoid using geometry shaders...
The solution was this:
if (round(y) == 0.0f) _varray[nr].tex[0] = -2; // set an arbitrary number
And then do not set that variable anywhere else! Then in the fragment shader:
if (floor(vs_st[0]) == -2) {
    diffuse = gridColor * 2.f;
} else {
    diffuse = gridColor;
}
Don't know how neat that is, but it works.
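To convince myself why floor() behaves here, a little Python sketch of the interpolation (illustrative values only): both endpoints of a zenith segment carry -2, so the value is constant along it, while an edge leaving the line sweeps through (-2, 0) and floor() only yields -2 on the half nearer the tagged vertex.

```python
import math

def lerp(a, b, t):
    """Linear interpolation, like the rasterizer's varying interpolation."""
    return a + (b - a) * t

# Along the tagged line: both ends are -2, so the value is constant:
on_line = [lerp(-2.0, -2.0, t / 10) for t in range(11)]
# Leaving the line: -2 at the tagged end, 0 (the default) at the other:
off_line = [lerp(-2.0, 0.0, t / 10) for t in range(11)]

on_tagged  = [math.floor(v) == -2 for v in on_line]   # all True
off_tagged = [math.floor(v) == -2 for v in off_line]  # True only while v < -1
```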

How to texture a random convex quad in OpenGL

Alright, so I started looking up tutorials on OpenGL for the sake of making a Minecraft mod. I still don't know too much about it, because I figured I shouldn't have to for the small modification I want to make, but this is giving me such a headache. All I want to do is properly map a texture to an irregular convex quad.
Like this:
I went into OpenGL to figure out how to do this before trying code in the game. I've read that I need to do a perspective-correct transformation, and I've gotten it to work for trapezoids, but for the life of me I can't figure out how to do it when neither pair of edges is parallel. I've looked at this: http://www.imagemagick.org/Usage/scripts/perspective_transform, but I really don't have a clue where the "8 constants" this guy is talking about come from, or whether it will even help me. I've also been told to do calculations with matrices, but I've got no idea how much of that OpenGL does or doesn't take care of.
I've looked at several posts regarding this, and the answer I need is somewhere in them, but I can't make heads or tails of 'em. I can never find a post that tells me what arguments I'm supposed to pass to glTexCoord4f() in order to get the perspective-correct transform.
If you're thinking of the "Getting to know the Q coordinate" page as a solution to my problem, I'm afraid I've already looked at it, and it has failed me.
Is there something I'm missing? I feel like this should be a lot easier. I find it hard to believe that OpenGL, with all its bells and whistles, would have nothing for making something other than a rectangle.
So, I hope I haven't pissed you off too much with my cluelessness, and thanks in advance.
EDIT: I think I need to make clear that I know OpenGL does the perspective transform for you when your view of the quad is not orthogonal. I'd know to just change the z coordinates or my FOV. I'm looking to smoothly texture a non-rectangular quadrilateral, not put a rectangular shape in a certain FOV.
OpenGL will do a perspective-correct transform for you. I believe you're simply facing the issue of quad vs. triangle interpolation. The difference between affine and perspective-correct transforms is related to the geometry being in 3D, where the interpolation in image space is non-linear. Think of looking down a road: the evenly spaced lines appear more frequent in the distance. Anyway, back to triangles vs. quads...
Here are some related posts:
How to do bilinear interpolation of normals over a quad?
Low polygon cone - smooth shading at the tip
https://gamedev.stackexchange.com/questions/66312/quads-vs-triangles
Applying color to single vertices in a quad in opengl
An answer to this one provides a possible solution, but it's not simple:
The usual approach to solve this, is by performing the interpolation "manually" in a fragment shader, that takes into account the target topology, in your case a quad. Or in short you have to perform barycentric interpolation not based on a triangle but on a quad. You might also want to apply perspective correction.
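To make the "8 constants" from the question concrete: they are the coefficients of the projective mapping (homography) u' = (a*x + b*y + c) / (g*x + h*y + 1), v' = (d*x + e*y + f) / (g*x + h*y + 1) from the unit square to the quad. Here is a stdlib-only Python sketch (the example quad and all names are mine, not from the question) that solves for them from the four corner correspondences:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def quad_homography(quad):
    """Projective map from the unit square to `quad`, corners in the order
    (0,0) -> quad[0], (1,0) -> quad[1], (1,1) -> quad[2], (0,1) -> quad[3]."""
    src = [(0, 0), (1, 0), (1, 1), (0, 1)]
    A, b = [], []
    for (x, y), (u, v) in zip(src, quad):
        # One row per output coordinate, 8 unknowns a..h total:
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    ca, cb, cc, cd, ce, cf, cg, ch = solve(A, b)
    def apply(x, y):
        w = cg * x + ch * y + 1.0
        return ((ca * x + cb * y + cc) / w, (cd * x + ce * y + cf) / w)
    return apply

# An example trapezoid: the mapping must hit all four corners exactly.
warp = quad_homography([(0, 0), (4, 0), (3, 2), (1, 2)])
```

With the fixed-function pipeline, the usual trick is then to feed glTexCoord4f(u*q, v*q, 0, q) per vertex, with q proportional to the projective denominator at that corner - this is the generic projective-texturing trick, not anything Minecraft-specific.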
The first thing you should know is that nothing is easy with OpenGL. It's a very complex state machine with a lot of quirks and a poor interface for developers.
That said, I think you're confusing a lot of different things. To draw a textured rectangle with perspective correction, you simply draw a textured rectangle in 3D space after setting the projection matrix appropriately.
First, you need to set up the projection you want. From your description, you need to create a perspective projection. In OpenGL, you usually have two main matrices you're concerned with - projection and model-view. The projection matrix is sort of like your "camera".
How you do the above depends on whether you're using Legacy OpenGL (less than version 3.0) or Core Profile (modern, 3.0 or greater) OpenGL. This page describes 2 ways to do it, depending on which you're using.
void BuildPerspProjMat(float *m, float fov, float aspect, float znear, float zfar)
{
    // PI_OVER_360 is assumed to be defined elsewhere as (pi / 360.0)
    float xymax = znear * tan(fov * PI_OVER_360);
    float ymin = -xymax;
    float xmin = -xymax;
    float width = xymax - xmin;
    float height = xymax - ymin;
    float depth = zfar - znear;
    float q = -(zfar + znear) / depth;
    float qn = -2 * (zfar * znear) / depth;
    float w = 2 * znear / width;
    w = w / aspect;
    float h = 2 * znear / height;
    // Column-major layout, one column per line:
    m[0] = w;  m[1] = 0;  m[2] = 0;   m[3] = 0;
    m[4] = 0;  m[5] = h;  m[6] = 0;   m[7] = 0;
    m[8] = 0;  m[9] = 0;  m[10] = q;  m[11] = -1;
    m[12] = 0; m[13] = 0; m[14] = qn; m[15] = 0;
}
and here is how to use it in OpenGL 1.x / OpenGL 2.x code:
float m[16] = {0};
float fov=60.0f; // in degrees
float aspect=1.3333f;
float znear=1.0f;
float zfar=1000.0f;
BuildPerspProjMat(m, fov, aspect, znear, zfar);
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(m);
// okay we can switch back to modelview mode
// for all other matrices
glMatrixMode(GL_MODELVIEW);
With OpenGL 3.0+ core-profile code, we must use GLSL shaders and uniform variables to pass and use the transformation matrices:
float m[16] = {0};
float fov = 60.0f; // in degrees
float aspect = 1.3333f;
float znear = 1.0f;
float zfar = 1000.0f;
BuildPerspProjMat(m, fov, aspect, znear, zfar);
glUseProgram(shaderId);
// glUniformMatrix4fv takes a uniform location, not a name string:
glUniformMatrix4fv(glGetUniformLocation(shaderId, "projMat"), 1, GL_FALSE, m);
RenderObject();
glUseProgram(0);
Since I've not used Minecraft, I don't know whether it gives you a projection matrix to use or if you have the other information to construct it.