I tried to implement a motion-blur post-processing effect as described in GPU Gems 3, Chapter 27, but I am running into problems: the blur jitters when I move the camera and does not work as expected.
This is my fragment shader:
varying vec3 pos;
varying vec3 normal;
varying vec2 tex;
varying vec3 tangent;
uniform mat4 matrix;
uniform mat4 VPmatrix;
uniform mat4 matrixPrev;
uniform mat4 VPmatrixPrev;
uniform sampler2D diffuseTexture;
uniform sampler2D zTexture;
void main() {
    vec2 texCoord = tex;
    float zOverW = texture2D(zTexture, texCoord).r;
    vec4 H = vec4(texCoord.x * 2.0 - 1.0, (1 - texCoord.y) * 2.0 - 1.0, zOverW, 1.0);
    mat4 inv1 = inverse(matrix);
    mat4 inv2 = inverse(VPmatrix);
    vec4 D = H*(inv2*inv1);
    vec4 worldPos = D/D.w;
    mat4 prev = matrixPrev*VPmatrixPrev;
    vec4 previousPos = worldPos*prev;
    previousPos /= previousPos.w;
    vec2 velocity = vec2((H.x-previousPos.x)/2.0, (H.y-previousPos.y)/2.0);
    vec3 color = vec3(texture2D(diffuseTexture, texCoord));
    for(int i = 0; i < 16; i++) {
        texCoord += velocity;
        vec3 color2 = vec3(texture2D(diffuseTexture, texCoord));
        color += color2;
    }
    color /= 16;
    gl_FragColor = vec4(color, 1.0);
}
The uniforms matrix and VPmatrix are the ModelView and Projection matrices, obtained as follows:
float matrix[16];
float VPmatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrix);
glGetFloatv(GL_PROJECTION_MATRIX, VPmatrix);
The uniforms matrixPrev and VPmatrixPrev are the previous frame's ModelView and Projection matrices, copied as follows after rendering (in the code below, matrixPrev and VPmatrixPrev are global variables):
for(int i = 0; i < 16; i++) {
    matrixPrev[i] = matrix[i];
    VPmatrixPrev[i] = VPmatrix[i];
}
All four matrices are passed to the shader as follows:
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrix"), 16, GL_FALSE, matrix);
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "VPmatrix"), 16, GL_FALSE, VPmatrix);
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrixPrev"), 16, GL_FALSE, matrixPrev);
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "VPmatrixPrev"), 16, GL_FALSE, VPmatrixPrev);
In the shader, the uniform zTexture is a texture containing the depth values of the framebuffer. (I am not sure whether they are divided by w.)
I hoped the shader would work, but what I get instead is that when I rotate the camera, the blur jitters really fast even with subtle rotations. I tried rendering the zTexture, and the result is a grayscale image, so that part seems fine. I also tried setting the fragment color to H.xyz and to previousPos.xyz: H.xyz produces a colored screen, and previousPos.xyz produces the same colored screen, except that when the camera rotates the colors seem to invert. So I suspect there is something wrong with extracting the world position from depth.
Am I missing something here? Any help would be greatly appreciated.
Forget my previous answer; it is a matrix error:
Your matrix multiplications are in the wrong order, or else you must send the transposed matrices (which explains why only the rotation was taken into account while the translation/scale components were messed up):
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrix"), 16, GL_FALSE, matrix);
becomes
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrix"), 1, GL_TRUE, matrix);
(Note that the second argument is not the matrix size but the matrix count, so it should be 1 here.)
Another note: you should compute the inverse matrix and the projection*view product in your main application, not in the pixel shader: there it is done once per pixel, but it only needs to be done once per frame!
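For example, that per-frame work could be pulled out of the shader along these lines (a sketch assuming GLM is available; the function and uniform names invViewProj/prevViewProj are illustrative, not the ones from the original code):
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>   // glm::value_ptr

static glm::mat4 prevViewProj(1.0f);   // previous frame's view-projection

void updateMotionBlurMatrices(GLuint postShader)
{
    glm::mat4 view, proj;
    glGetFloatv(GL_MODELVIEW_MATRIX,  glm::value_ptr(view));
    glGetFloatv(GL_PROJECTION_MATRIX, glm::value_ptr(proj));

    glm::mat4 viewProj    = proj * view;             // current view-projection
    glm::mat4 invViewProj = glm::inverse(viewProj);  // inverse, computed once per frame on the CPU

    glUseProgram(postShader);
    glUniformMatrix4fv(glGetUniformLocation(postShader, "invViewProj"),  1, GL_FALSE, glm::value_ptr(invViewProj));
    glUniformMatrix4fv(glGetUniformLocation(postShader, "prevViewProj"), 1, GL_FALSE, glm::value_ptr(prevViewProj));

    prevViewProj = viewProj;   // remember this frame's view-projection for the next frame
}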
P.S.: Google "RuinIsland_GLSL_Demo.zip": it contains many good GLSL samples and helped me solve this issue.
I'm sorry I don't have an answer but just a clue:
I have EXACTLY the same problem as yours.
I guess I used the same material Google points to, and, like you, the blur only appears when the camera rotates.
However, I have a clue (the same as yours, in fact): I think it is because the GLSL shaders found around the net assume the depth texture contains z/w, but, like me, you used a genuine depth texture filled by the fixed-function pipeline.
So you only have z, and you are missing w in the very first step.
Since texture2D(zTexture, texCoord).r contains only z, the computation to get zOverW is missing.
In the end we are stuck halfway between window space and clip space.
I found this: https://www.opengl.org/wiki/Compute_eye_space_from_window_space#From_NDC_to_clip
but my perspective projection matrix does not meet the requirements; perhaps it will help you.
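For what it's worth, the usual way around this is to treat the texture value as window-space depth and un-project it yourself; a GLSL sketch (assuming the default glDepthRange of [0, 1] and an invViewProj uniform holding the inverse view-projection matrix):
// zTexture holds plain window-space depth in [0,1], not z/w.
float depth = texture2D(zTexture, texCoord).r;
// Back to normalized device coordinates, [-1,1] on all three axes.
vec4 ndc = vec4(texCoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
// Un-project and divide by w to recover the world-space position.
vec4 worldPos = invViewProj * ndc;
worldPos /= worldPos.w;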
I have been using OpenGL together with GLM, Glad, and GLFW to create a 2D game. I want to achieve a simple 2D rotation, which would be around the Z-axis since nothing is 3D. The problem is that when I build a simple model matrix from a rotation matrix multiplied with a translation and scaling matrix, the rotation looks 3D when the primitive is rendered: the square is stretched and its sides are no longer the same length. Is there a way to avoid this stretch so that the square keeps its proportions while it rotates?
Vertex Shader:
//shader vertex
#version 430 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec2 aTexCoord;
//uniform mat4 transform;
uniform mat4 model;
out vec2 TexCoord;
void main()
{
gl_Position = model * vec4(aPos, 1.0);
TexCoord = aTexCoord;
}
I have a function that iterates through vectors of matrices to handle large batches of objects. The matrices are first initialized to glm::mat4(1.0f).
void Trans::moveBatch(std::vector <glm::vec2>& speed, std::vector <float>& rot)
{
    for (unsigned int i = 0; i < speed.size(); i++)
    {
        batchRotator[i] = glm::rotate(batchRotator[i], glm::radians(rot[i]), glm::vec3(0.0f, 0.0f, 1.0f));
        batchMover[i] = glm::translate(batchMover[i], glm::vec3(speed[i].x, speed[i].y, 0.0f));
        batchBox[i].x += speed[i].x;
        batchBox[i].y += speed[i].y;
        batchBox[i].z += speed[i].x;
        batchBox[i].w += speed[i].y;
    }
}
I then multiply my matrices and send that as the model matrix into the shader.
Thank you very much! I was able to use glm::ortho to create a projection matrix, and that solved my problem once I set the world coordinates up for 1920x1080, the correct aspect ratio. Without a projection matrix, clip space maps [-1, 1] to the full window on both axes, so on a non-square viewport a rotated square gets stretched.
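In code, the fix amounts to something like this (a sketch; shaderProgram and scaler are illustrative names, and a 1920x1080 window is assumed):
// Orthographic projection matching the window's real aspect ratio.
glm::mat4 projection = glm::ortho(0.0f, 1920.0f, 0.0f, 1080.0f, -1.0f, 1.0f);
// model = translate * rotate * scale, as before.
glm::mat4 model = batchMover[i] * batchRotator[i] * scaler;
glm::mat4 mvp = projection * model;
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "model"), 1, GL_FALSE, glm::value_ptr(mvp));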
I am trying to implement a Streak shader, which is described here:
http://www.chrisoat.com/papers/Oat-SteerableStreakFilter.pdf
Short explanation: the shader samples points along a 1D kernel in a given direction; the kernel size grows exponentially in each step, and color values are weighted based on distance to the sampled point and summed. The result is a smooth tail/smear/light-streak effect in that direction. Here is the frag shader:
precision highp float;
uniform sampler2D u_texture;
varying vec2 v_texCoord;
uniform float u_Pass;
const float kernelSize = 4.0;
const float atten = 0.95;
vec4 streak(in float pass, in vec2 texCoord, in vec2 dir, in vec2 pixelStep) {
    float kernelStep = pow(kernelSize, pass - 1.0);
    vec4 color = vec4(0.0);
    for(int i = 0; i < 4; i++) {
        float sampleNum = float(i);
        float weight = pow(atten, kernelStep * sampleNum);
        vec2 sampleTexCoord = texCoord + ((sampleNum * kernelStep) * (dir * pixelStep));
        vec4 texColor = texture2D(u_texture, sampleTexCoord) * weight;
        color += texColor;
    }
    return color;
}
void main() {
    vec2 iResolution = vec2(512.0, 512.0);
    vec2 pixelStep = vec2(1.0, 1.0) / iResolution.xy;
    vec2 dir = vec2(1.0, 0.0);
    float pass = u_Pass;
    vec4 streakColor = streak(pass, v_texCoord, dir, pixelStep);
    gl_FragColor = vec4(streakColor.rgb, 1.0);
}
It was going to be used for a starfield type of effect. And here is the implementation on ShaderToy which works fine:
https://www.shadertoy.com/view/ll2BRG
(Note: disregard the first shader in Buffer A; it just filters out the dim colors in the input texture to emulate a star field, since AFAIK ShaderToy doesn't allow uploading custom textures.)
But when I use the same shader in my own code and render using ping-pong FrameBuffers, it looks different. Here is my own implementation ported over to WebGL:
https://jsfiddle.net/1b68eLdr/87755/
I basically create two 512x512 buffers, ping-pong the shader 4 times while increasing the kernel size at each iteration according to the algorithm, and render the final iteration to the screen.
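Roughly, the driver loop does this (a desktop-GL style sketch of what the JSFiddle does; fbo, tex, streakProgram and drawFullScreenQuad are illustrative names):
int src = 0, dst = 1;
glUseProgram(streakProgram);
for (int pass = 1; pass <= 4; ++pass) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);
    glBindTexture(GL_TEXTURE_2D, tex[src]);   // previous pass (pass 1 binds the star-field input instead)
    glUniform1f(glGetUniformLocation(streakProgram, "u_Pass"), (float)pass);
    drawFullScreenQuad();                     // runs the streak fragment shader
    int tmp = src; src = dst; dst = tmp;      // swap ping-pong targets
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, tex[src]);       // result of the 4th pass
drawFullScreenQuad();                         // draw it to the screen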
The problem is visible banding, and my streaks/tails seem to lose brightness a lot faster. (Note: the image is somewhat inaccurate; the lengths of the streaks are the same/correct, it's the color values that are wrong.)
I have been struggling with this for a while in desktop OpenGL / LWJGL, so I ported it over to WebGL/JavaScript and uploaded it to JSFiddle in the hope that someone can spot the problem. I suspect it's either the texture coordinates or the FrameBuffer configuration, since the shaders are exactly the same.
The reason it works on ShaderToy is that ShaderToy uses a floating-point render target.
Simply use gl.FLOAT as the type of your framebuffer texture and the issue is fixed (I could verify it with the said modification on your JSFiddle).
So do this in your createBackingTexture():
// Just request the extension (MUST be done).
gl.getExtension('OES_texture_float');
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this._width, this._height, 0, gl.RGBA, gl.FLOAT, null);
I'm following the tutorial by John Chapman (http://john-chapman-graphics.blogspot.nl/2013/01/ssao-tutorial.html) to implement SSAO in a deferred renderer. The input buffers to the SSAO shaders are:
World-space positions with linearized depth as w-component.
World-space normal vectors
Noise 4x4 texture
I'll first list the complete shader and then briefly walk through the steps:
#version 330 core
in VS_OUT {
vec2 TexCoords;
} fs_in;
uniform sampler2D texPosDepth;
uniform sampler2D texNormalSpec;
uniform sampler2D texNoise;
uniform vec3 samples[64];
uniform mat4 projection;
uniform mat4 view;
uniform mat3 viewNormal; // transpose(inverse(mat3(view)))
const vec2 noiseScale = vec2(800.0f/4.0f, 600.0f/4.0f);
const float radius = 5.0;
void main( void )
{
    float linearDepth = texture(texPosDepth, fs_in.TexCoords).w;
    // Fragment's view space position and normal
    vec3 fragPos_World = texture(texPosDepth, fs_in.TexCoords).xyz;
    vec3 origin = vec3(view * vec4(fragPos_World, 1.0));
    vec3 normal = texture(texNormalSpec, fs_in.TexCoords).xyz;
    normal = normalize(normal * 2.0 - 1.0);
    normal = normalize(viewNormal * normal); // Normal from world to view-space
    // Use change-of-basis matrix to reorient sample kernel around origin's normal
    vec3 rvec = texture(texNoise, fs_in.TexCoords * noiseScale).xyz;
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 tbn = mat3(tangent, bitangent, normal);
    // Loop through the sample kernel
    float occlusion = 0.0;
    for(int i = 0; i < 64; ++i)
    {
        // get sample position
        vec3 sample = tbn * samples[i]; // From tangent to view-space
        sample = sample * radius + origin;
        // project sample position (to sample texture) (to get position on screen/texture)
        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset;
        offset.xy /= offset.w;
        offset.xy = offset.xy * 0.5 + 0.5;
        // get sample depth
        float sampleDepth = texture(texPosDepth, offset.xy).w;
        // range check & accumulate
        // float rangeCheck = abs(origin.z - sampleDepth) < radius ? 1.0 : 0.0;
        occlusion += (sampleDepth <= sample.z ? 1.0 : 0.0);
    }
    occlusion = 1.0 - (occlusion / 64.0f);
    gl_FragColor = vec4(vec3(occlusion), 1.0);
}
The result, however, is not pleasing. The occlusion buffer is mostly all white and doesn't show any occlusion, but if I move really close to an object I can see some weird noise-like results, as you can see below:
This is obviously not correct. I've done a fair share of debugging and believe all the relevant variables are correctly passed around (they all visualize as colors). I do the calculations in view-space.
I'll briefly walk through the steps (and choices) I've taken in case any of you figure something goes wrong in one of the steps.
view-space positions/normals
John Chapman retrieves the view-space position using a view ray and a linearized depth value. Since I use a deferred renderer that already has the world-space positions per fragment I simply take those and multiply them with the view matrix to get them to view-space.
I take a similar approach for the normal vectors. I take the world-space normal vectors from a buffer texture, transform them to [-1,1] range and multiply them with transpose(inverse(mat3(..))) of view matrix.
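On the CPU side that is just the normal matrix of the view matrix, matching the comment next to the viewNormal uniform above (ssaoShader is an illustrative name):
glm::mat3 viewNormal = glm::transpose(glm::inverse(glm::mat3(view)));
glUniformMatrix3fv(glGetUniformLocation(ssaoShader, "viewNormal"), 1, GL_FALSE, glm::value_ptr(viewNormal));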
The view-space position and normals are visualized as below:
This looks correct to me.
Orient hemisphere around normal
The steps to create the tbn matrix are the same as described in John Chapman's tutorial. I create the noise texture as follows:
std::vector<glm::vec3> ssaoNoise;
for (GLuint i = 0; i < noise_size; i++)
{
    glm::vec3 noise(randomFloats(generator) * 2.0 - 1.0, randomFloats(generator) * 2.0 - 1.0, 0.0f);
    noise = glm::normalize(noise);
    ssaoNoise.push_back(noise);
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, 4, 4, 0, GL_RGB, GL_FLOAT, &ssaoNoise[0]);
I can visualize the noise in the fragment shader so that seems to work.
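For reference, the parameters that go with that upload look like this (the 4x4 texture is tiled across the whole screen via noiseScale, so it relies on GL_REPEAT wrapping; noiseTexture is an illustrative name):
glBindTexture(GL_TEXTURE_2D, noiseTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, 4, 4, 0, GL_RGB, GL_FLOAT, &ssaoNoise[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);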
sample depths
I transform all samples from tangent to view-space (the samples are random within [-1,1] on the xy-axes and [0,1] on the z-axis) and translate them to the fragment's current view-space position (origin).
I then sample from the linearized depth buffer (which I visualize below when looking close to an object):
and finally compare the sampled depth values to the current fragment's depth value and add up the occlusion values. Note that I do not perform a range check, since I don't believe that is the cause of this behavior and I'd rather keep it as minimal as possible for now.
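For reference, the samples uniform is filled with a hemisphere kernel generated along the same lines as the noise above (a sketch; names are illustrative):
std::vector<glm::vec3> ssaoKernel;
for (GLuint i = 0; i < 64; i++)
{
    // xy in [-1,1], z in [0,1]: a hemisphere oriented along +Z in tangent space.
    glm::vec3 sample(randomFloats(generator) * 2.0 - 1.0,
                     randomFloats(generator) * 2.0 - 1.0,
                     randomFloats(generator));
    sample = glm::normalize(sample);
    sample *= randomFloats(generator);   // distribute samples within the hemisphere, not only on its surface
    ssaoKernel.push_back(sample);
}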
I don't know what is causing this behavior. I believe it is somewhere in sampling the depth values. As far as I can tell I am working in the right coordinate system, linearized depth values are in view-space as well and all variables are set somewhat properly.
I would like to draw a line between two vertices, but I don't know the two points in advance.
What I'm trying to do to achieve this is upload a vertex buffer with two vertices at (0, 0, 0). Then, before the draw call, i.e. before glDrawArrays(GL_LINES, 0, 1), I try to translate the two vertices with separate model matrices. I'm using a single shader program. To explain it more: it's as if I know the fromVertex and the toVertex and I want to draw a line between them. I want to apply model matrix 1 to translate vertex 1 from (0, 0, 0) to fromVertex, and model matrix 2 to translate vertex 2 from (0, 0, 0) to toVertex.
uniform mat4 projection;
uniform mat4 camera;
uniform mat4 model;
in vec3 vert;
in vec2 vertTexCoord;
in vec3 vertColor;
out vec2 fragTexCoord;
out vec3 fragColor;
void main() {
// Pass the tex coord straight through to the fragment shader
fragTexCoord = vertTexCoord;
fragColor = vertColor;
// Apply all matrix transformations to vert
gl_Position = projection * camera * model * vec4(vert, 1);
}
I can't translate this into code, because I can't think of a way to set the model matrix uniform in the vertex shader to two different values when drawing a single line. I've been thinking about using a scale and/or rotate operation to achieve this, but I'm guessing there must be an easier way to do it.
I was trying to solve this without using glBufferData in the render loop. Just using that and uploading the vertices solves the problem trivially.
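For anyone wondering, this boils down to something like the following per frame (a sketch; lineVbo is an illustrative name, fromVertex/toVertex are the two endpoints, and the model uniform can stay the identity):
GLfloat verts[] = { fromVertex.x, fromVertex.y, fromVertex.z,
                    toVertex.x,   toVertex.y,   toVertex.z };
glBindBuffer(GL_ARRAY_BUFFER, lineVbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_DYNAMIC_DRAW);  // re-upload the two endpoints
glDrawArrays(GL_LINES, 0, 2);                                          // two vertices make one line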
Are point sprites the best choice to build a particle system?
Are point sprites present in the newer versions of OpenGL and in the drivers of the latest graphics cards? Or should I do it using VBOs and GLSL?
Point sprites are indeed well suited for particle systems. But they don't have anything to do with VBOs and GLSL, meaning they are a completely orthogonal feature. No matter if you use point sprites or not, you always have to use VBOs for uploading the geometry, be they just points, pre-made sprites or whatever, and you always have to put this geometry through a set of shaders (in modern OpenGL of course).
That being said, point sprites are very well supported in modern OpenGL, just not as automatically as with the old fixed-function approach. What is not supported are the point attenuation features that let you scale a point's size based on its distance to the camera; you have to do this manually inside the vertex shader. In the same way you have to do the texturing of the point manually in an appropriate fragment shader, using the special input variable gl_PointCoord (which tells you where inside the point's [0,1] square the current fragment is). For example, a basic point sprite pipeline could look this way:
...
glPointSize(whatever); //specify size of points in pixels
glDrawArrays(GL_POINTS, 0, count); //draw the points
vertex shader:
uniform mat4 mvp;
layout(location = 0) in vec4 position;
void main()
{
gl_Position = mvp * position;
}
fragment shader:
uniform sampler2D tex;
layout(location = 0) out vec4 color;
void main()
{
color = texture(tex, gl_PointCoord);
}
And that's all. Of course those shaders just do the most basic drawing of textured sprites, but are a starting point for further features. For example to compute the sprite's size based on its distance to the camera (maybe in order to give it a fixed world-space size), you have to glEnable(GL_PROGRAM_POINT_SIZE) and write to the special output variable gl_PointSize in the vertex shader:
uniform mat4 modelview;
uniform mat4 projection;
uniform vec2 screenSize;
uniform float spriteSize;
layout(location = 0) in vec4 position;
void main()
{
vec4 eyePos = modelview * position;
vec4 projVoxel = projection * vec4(spriteSize,spriteSize,eyePos.z,eyePos.w);
vec2 projSize = screenSize * projVoxel.xy / projVoxel.w;
gl_PointSize = 0.25 * (projSize.x+projSize.y);
gl_Position = projection * eyePos;
}
This would make all point sprites have the same world-space size (and thus a different screen-space size in pixels).
But point sprites, while still being perfectly supported in modern OpenGL, have their disadvantages. One of the biggest is their clipping behaviour. Points are clipped at their center coordinate (because clipping is done before rasterization and thus before the point gets "enlarged"). So if the center of the point is outside of the screen, the rest of it that might still reach into the viewing area is not shown; at worst, once the point is half-way out of the screen, it will suddenly disappear. This is however only noticeable (or annoying) if the point sprites are too large. If they are very small particles that don't cover much more than a few pixels each anyway, then this won't be much of a problem, and I would still regard particle systems the canonical use-case for point sprites; just don't use them for large billboards.
But if this is a problem, then modern OpenGL offers many other ways to implement point sprites, apart from the naive way of pre-building all the sprites as individual quads on the CPU. You can still render them just as a buffer full of points (and thus in the way they are likely to come out of your GPU-based particle engine). To actually generate the quad geometry then, you can use the geometry shader, which lets you generate a quad from a single point. First you do only the modelview transformation inside the vertex shader:
uniform mat4 modelview;
layout(location = 0) in vec4 position;
void main()
{
gl_Position = modelview * position;
}
Then the geometry shader does the rest of the work. It combines the point position with the 4 corners of a generic [0,1]-quad and completes the transformation into clip-space:
const vec2 corners[4] = vec2[](
    vec2(0.0, 1.0), vec2(0.0, 0.0), vec2(1.0, 1.0), vec2(1.0, 0.0));
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
uniform mat4 projection;
uniform float spriteSize;
out vec2 texCoord;
void main()
{
    for(int i=0; i<4; ++i)
    {
        vec4 eyePos = gl_in[0].gl_Position;                 //start with point position
        eyePos.xy += spriteSize * (corners[i] - vec2(0.5)); //add corner position
        gl_Position = projection * eyePos;                  //complete transformation
        texCoord = corners[i];                              //use corner as texCoord
        EmitVertex();
    }
}
In the fragment shader you would then of course use the custom texCoord varying instead of gl_PointCoord for texturing, since we're no longer drawing actual points.
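For example, a matching fragment shader would simply be:
uniform sampler2D tex;
in vec2 texCoord;
layout(location = 0) out vec4 color;
void main()
{
    // Same as before, but sampling with the interpolated quad coordinate.
    color = texture(tex, texCoord);
}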
Or another possibility (and maybe faster, since I remember geometry shaders having a reputation for being slow) would be to use instanced rendering. This way you have an additional VBO containing the vertices of just a single generic 2D quad (i.e. the [0,1]-square) and your good old VBO containing just the point positions. What you then do is draw this single quad multiple times (instanced), while sourcing the individual instances' positions from the point VBO:
glVertexAttribPointer(0, ...points...);
glVertexAttribPointer(1, ...quad...);
glVertexAttribDivisor(0, 1); //advance only once per instance
...
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, count); //draw #count quads
And in the vertex shader you then assemble the per-point position with the actual corner/quad-position (which is also the texture coordinate of that vertex):
uniform mat4 modelview;
uniform mat4 projection;
uniform float spriteSize;
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 corner;
out vec2 texCoord;
void main()
{
vec4 eyePos = modelview * position; //transform to eye-space
eyePos.xy += spriteSize * (corner - vec2(0.5)); //add corner position
gl_Position = projection * eyePos; //complete transformation
texCoord = corner;
}
This achieves the same as the geometry shader based approach: properly clipped point sprites with a consistent world-space size. If you actually want to mimic the screen-space pixel size of actual point sprites, you need to put some more computational effort into it. But this is left as an exercise, and it would be quite the opposite of the world-to-screen transformation from the point sprite shader.