Using simple shaders I've found a way to create gradients.
Here's the result of my work:
http://goo.gl/A7pY01 (slightly updated after the OpenGL ES 2.0 Shader - 2D Radial Gradient in Polygon question)
It's nice, but I still need to display this gradient pattern on each face of my meshes, or on a billboard face, just as if it were a texture.
The GLSL built-in variable gl_FragCoord holds window-space coordinates. Could someone explain how to translate these into face-related coordinates so I can draw my pattern there?
Okay. A little surfing on Stack Overflow gave me this topic: OpenGL: How to render perfect rectangular gradient?
Here is the key line: gl_FragColor = mix(color0, color1, uv.x + uv.y - 2.0 * uv.x * uv.y);
Of course we cannot translate window-space coordinates into anything "face-related", but we can use the UV coordinates of a face. So I thought: what if we have a square face with UV coordinates spanning the full texture (0,0; 0,1; 1,0; 1,1)? Then the center of the face is at 0.5,0.5, and that can be the center of my round gradient.
So my fragment shader code is:
vec2 u_c = vec2(0.5, 0.5);                    // gradient center in UV space
float distanceFromCenter = length(uv - u_c);  // 0.0 at the center, ~0.7 at the corners
gl_FragColor = mix(vec4(1.0, 0.5, 1.0, 1.0), vec4(0.0, 0.0, 0.0, 1.0), distanceFromCenter * 2.0);
Vertex shader:
gl_Position = _mvProj * vec4(vertex, 1.0);
uv = uv1;
Of course, we need to supply correct UV coordinates, but the idea should be clear.
Here's an example:
http://goo.gl/A7pY01
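For reference, here is the whole pair as a self-contained sketch (GLSL ES 2.0 style; the names vertex, uv1 and _mvProj come from the snippets above, everything else is an assumption):

Vertex shader:
attribute vec3 vertex;
attribute vec2 uv1;
uniform mat4 _mvProj;
varying vec2 uv;

void main(void)
{
    gl_Position = _mvProj * vec4(vertex, 1.0);
    uv = uv1;
}

Fragment shader:
precision mediump float;
varying vec2 uv;

void main(void)
{
    vec2 u_c = vec2(0.5, 0.5);   // gradient center in UV space
    float d = length(uv - u_c);
    // clamp so the corners (d > 0.5) don't extrapolate past the outer color
    gl_FragColor = mix(vec4(1.0, 0.5, 1.0, 1.0),
                       vec4(0.0, 0.0, 0.0, 1.0),
                       clamp(d * 2.0, 0.0, 1.0));
}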
I'm trying to simulate reflection on a plane with render-to-texture.
My only problem is how to adjust the texture coordinates correctly to the current view.
In the shader I multiply the texture coordinates by a rotation matrix.
The rotation matrix is set up with:
glm::vec3 v1 = glm::vec3(0.0f, 1.0f, 0.0f);
glm::vec3 v2 = glm::vec3(camlocation[0], camlocation[1], 0.0f);
if (glm::length(v2) != 0.0f)
{
    v2 = glm::normalize(v2);
}
float alpha = glm::angle(v1, v2);
texturematrix = glm::mat4(1.0f);
texturematrix = glm::translate(texturematrix, glm::vec3(0.5f, 0.5f, 0.0f));
texturematrix = glm::rotate(texturematrix, alpha, glm::vec3(0.0f, 0.0f, 1.0f));
texturematrix = glm::translate(texturematrix, glm::vec3(-0.5f, -0.5f, 0.0f));
I don't know if this is the right way, but the reflection looks wrong.
Edit:
Step 1: I bind a framebuffer and my reflection texture and render my model, a teapot for example.
In the shader I invert the Z position.
Step 2: I bind the texture again and draw the plane. In the shader I use:
vec4 texcoord = texturematrix*vec4(VertexIn.texcoord,1.0,1.0);
vec4 firsttex = texture(reflectionMap,texcoord.xy);
Step 3: I draw the real model:
vec4 texcoord = texturematrix*vec4(VertexIn.texcoord,1.0,1.0);
OK, one mistake was the third coordinate: it must be 0.0.
Now it looks better, but still wrong. I have to add the current eye direction to the camera-location angle.
http://fs1.directupload.net/images/141207/temp/7wk8lvms.png
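One hedged guess about why it still looks wrong: glm::angle returns an unsigned angle, so cameras on opposite sides of the Y axis produce the same rotation. A sketch of a signed angle around Z, reusing the names from the snippet above:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Signed angle between +Y and the camera's XY location; atan(x, y)
// (GLM's two-argument atan, i.e. atan2) keeps the left/right sign
// that glm::angle discards.
float alpha = glm::atan(camlocation[0], camlocation[1]);

texturematrix = glm::mat4(1.0f);
texturematrix = glm::translate(texturematrix, glm::vec3(0.5f, 0.5f, 0.0f));
texturematrix = glm::rotate(texturematrix, alpha, glm::vec3(0.0f, 0.0f, 1.0f)); // angle in radians (GLM >= 0.9.6)
texturematrix = glm::translate(texturematrix, glm::vec3(-0.5f, -0.5f, 0.0f));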
How would I render a 2D radial field in OpenGL? I know I can render it pixel by pixel, but I'm wondering if there are more efficient solutions. I don't mind if it requires OpenGL 3+ functionality.
How familiar are you with shaders? I'm thinking an easy-ish answer would be to render a quad and then write a fragment shader that colors the quad based on how far each pixel is from the center.
Pseudocode:
vertex shader:
vec2 center = vec2((x1+x2)/2,(y1+y2)/2); //pass this to the fragment shader
fragment shader:
float dist = distance(pos, center); // "pos" is the interpolated position of the fragment; it's passed in from the vertex shader
//Now that we have the distance between each fragment and the center, we can do all kinds of stuff:
gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0 - dist); //Assuming you're drawing a unit square, this makes each pixel's alpha fade smoothly from 1 (right next to the center) toward 0 (at the edge of the square)
gl_FragColor = vec4(dist, dist, dist, 1.0); //Vary each pixel's color from black at the center to white outward
//etc, etc
Let me know if you need more detail
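To make that concrete, a minimal sketch of the pair (quadMin/quadMax are hypothetical uniforms holding the quad's corners; everything else follows the pseudocode):

Vertex shader:
attribute vec2 in_Position;   // quad corner, drawn directly in clip space here
uniform vec2 quadMin;         // hypothetical: lower-left corner (x1, y1)
uniform vec2 quadMax;         // hypothetical: upper-right corner (x2, y2)
varying vec2 pos;
varying vec2 center;

void main(void)
{
    pos = in_Position;
    center = 0.5 * (quadMin + quadMax);   // midpoint of the quad, as in the pseudocode
    gl_Position = vec4(in_Position, 0.0, 1.0);
}

Fragment shader:
varying vec2 pos;
varying vec2 center;

void main(void)
{
    float dist = distance(pos, center);
    gl_FragColor = vec4(dist, dist, dist, 1.0);  // black at the center, brighter outward
}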
I've read nearly all the ocean animation topics (programming and math alike), and I finally decided to render my ocean with Gerstner waves plus reflection, refraction and caustics.
Well, my reflection works with a flat plane and vertical-only displacement, but Gerstner waves displace the x,z coordinates too, and my reflection texture coordinates go out of range when the camera is below a certain height or the viewing angle changes.
(The closer the ocean surface is, the more the texture wraps.)
So, my shader code:
Gerstner Wave:
vec3 calcWave(vec2 X, float t, float A, vec2 K, float L)
{
    vec3 wave;
    float k = 2.0 * pi / L;   // wavenumber; pi and g are assumed to be defined elsewhere (constant / uniform)
    float W = sqrt(g * k);    // deep-water dispersion relation
    wave.xz = -((K / k) * A * sin(dot(K, X) - W * t));
    wave.y = A * cos(dot(K, X) - W * t) / 2.0 - A / 2.0;
    // I do it this way so the max wave amplitude is always
    // lower than the plane of the reflection, so I can't see below it
    return wave;
}
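For context, one hedged way calcWave might be summed in the vertex shader; the amplitudes, directions and wavelengths below are made-up placeholders, and the helper builds K with |K| = 2*pi/L so that K/k inside calcWave is a unit direction:

vec3 wave(vec2 X, float t, float A, vec2 dir, float L)
{
    vec2 K = normalize(dir) * (2.0 * pi / L);  // wavevector consistent with k = 2*pi/L above
    return calcWave(X, t, A, K, L);
}

vec3 displace(vec2 X, float t)
{
    vec3 d = vec3(0.0);
    d += wave(X, t, 0.8, vec2( 1.0,  0.3), 20.0);  // placeholder parameters
    d += wave(X, t, 0.3, vec2(-0.7,  1.0),  7.0);
    d += wave(X, t, 0.1, vec2( 0.2, -1.0),  3.0);
    return d;
}
// displaced = gridPosition + displace(gridPosition.xz, time);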
Texture Coordinates
VERTEX SHADER:
undisplaced_world_Vertex.xzw=world_Vertex.xzw;
//this is before the wave calculation, so it contains simple grid coordinates without any displacement
undisplaced_world_Vertex.y = waterLevel;
FRAGMENT SHADER:
vec4 screenCoord = mvp_matrix * undisplaced_world_Vertex;
vec2 projCoord = (screenCoord.xy / screenCoord.w + 1.0) / 2.0;
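As an aside, this projection is usually computed once per vertex and interpolated, rather than redone per fragment; a sketch with the same names:

// VERTEX SHADER
varying vec4 screenCoord;
// ...
screenCoord = mvp_matrix * undisplaced_world_Vertex;

// FRAGMENT SHADER
varying vec4 screenCoord;
// perspective divide per fragment keeps the result perspective-correct
vec2 projCoord = (screenCoord.xy / screenCoord.w + 1.0) / 2.0;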
I've just read the reflection part of the Deep-Water Animation and Rendering article, but I have no idea how to implement it.
My question is how to project the texture coordinates of the reflection texture so that they meet my two expectations:
the texture coordinates always stay in range, or wrap only minimally at the edge of the screen
from no angle can I see "under" the texture (or only a very little :P)
Also, the coordinates will be displaced by the normals.
Edit:
I'm having trouble understanding this part:
vec3 R = ReflectLeave(viewerDir,normal);
float4 projR = float4(Rh,0)*ReflViewProjTM;
float2 reflUV = (projR.xy / projR.z) * float2(0.5,-0.5)+float2(0.5,0.5);
half4 refl = tex2D(reflTex,reflUV);
:/
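For what it's worth, my reading of that snippet transcribed into GLSL. This is an interpretation, not a verified implementation: Rh is presumably the reflection vector R, ReflViewProjTM the view-projection matrix of the mirrored "reflection camera", and the divide by .z rather than .w is kept from the original:

vec3 R = reflect(viewerDir, normal);            // reflected view direction ("ReflectLeave")
vec4 projR = reflViewProjTM * vec4(R, 0.0);     // project with the reflection camera
                                                // (HLSL's row-vector v*M becomes M*v here)
vec2 reflUV = (projR.xy / projR.z) * vec2(0.5, -0.5) + vec2(0.5, 0.5); // bias to [0,1], flip Y
vec4 refl = texture2D(reflTex, reflUV);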
I want to display a flat fullscreen texture as if it were spherical, by transforming it in a post-process shader.
I figure I have to apply a projection matrix to the texture coordinates in the shader.
I found this website: http://www.songho.ca/opengl/gl_projectionmatrix.html which taught me a lot about the inner workings of the projection matrix.
But how do I apply it? I thought I would have to multiply the texture coordinate by the third row of the projection matrix, with a calculated z value added to make it spherical. My efforts don't show any result though.
EDIT: I see the same issue here: http://lists.openscenegraph.org/pipermail/osg-users-openscenegraph.org/2008-April/009765.html
I think after you multiply the texture coords by the projection matrix you have to do a perspective division to move from 3D to 2D (since the texture is 2D). This is the same as with shadow mapping.
// in fragment shader:
vec4 proj = uniformModelViewProjMatrix * tex_coords;
proj.xyz /= proj.w;        // perspective divide: clip space -> NDC in [-1,1]
proj.xyz += vec3(1.0);     // shift to [0,2]
proj.xyz *= 0.5;           // scale to [0,1] for the texture lookup
vec4 col = texture2D(sampler, proj.xy);
or look at http://www.ozone3d.net/tutorials/glsl_texturing_p08.php (for texture2DProj)
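If you go the texture2DProj route, the [0,1] scale-and-bias has to be baked into the matrix (the usual shadow-mapping bias trick), because texture2DProj performs the perspective divide itself. A sketch, reusing the names above:

// Column-major constructor: the last column is the translation.
// After texture2DProj's divide by w this yields 0.5 * ndc + 0.5, i.e. [0,1].
mat4 biasMatrix = mat4(0.5, 0.0, 0.0, 0.0,
                       0.0, 0.5, 0.0, 0.0,
                       0.0, 0.0, 0.5, 0.0,
                       0.5, 0.5, 0.5, 1.0);
vec4 proj = biasMatrix * uniformModelViewProjMatrix * tex_coords;
vec4 col = texture2DProj(sampler, proj);  // divides proj.xy by proj.w internally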
I'm trying to implement Screen Space Ambient Occlusion (SSAO) based on the R5 demo found here: http://blog.nextrevision.com/?p=76
In fact, I'm trying to adapt their "SSAO - Linear" shader to fit into my own little engine.
1) I calculate view-space surface normals and linear depth values.
I store them in an RGBA texture using the following shader:
Vertex:
varNormalVS = normalize(vec3(vmtInvTranspMatrix * vertexNormal));  // view-space normal
depth = (modelViewMatrix * vertexPosition).z;            // view-space z (negative in front of the camera)
depth = (-depth - nearPlane) / (farPlane - nearPlane);   // remap to linear [0,1]
gl_Position = pvmtMatrix * vertexPosition;
Fragment:
gl_FragColor = vec4(varNormalVS.x, varNormalVS.y, varNormalVS.z, depth);
For my linear depth calculation I referred to: http://www.gamerendering.com/2008/09/28/linear-depth-texture/
Is it correct?
The texture seems to be correct, but maybe it is not?
2) The actual SSAO implementation:
As mentioned above, the original can be found here: http://blog.nextrevision.com/?p=76
or, quicker, on pastebin: http://pastebin.com/KaGEYexK
In contrast to the original, I use only two input textures, since one of my textures stores both: normals as RGB and linear depth as alpha.
My second texture, the random-normal texture, looks like this:
http://www.gamerendering.com/wp-content/uploads/noise.png
I use almost exactly the same implementation, but my results are wrong.
Before going into detail, I want to clear up some questions first:
1) The SSAO shader uses projectionMatrix and its inverse.
Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or wrong?
2) Having a combined normal-and-depth texture instead of two separate ones.
In my opinion this is the biggest difference between the R5 implementation and my attempt. I think this should not be a big problem; however, the differing depth textures are the most likely source of problems.
Please note that R5_clipRange looks like this:
vec4 R5_clipRange = vec4(nearPlane, farPlane, nearPlane * farPlane, farPlane - nearPlane);
Original:
float GetDistance (in vec2 texCoord)
{
    //return texture2D(R5_texture0, texCoord).r * R5_clipRange.w;
    // The dot() below unpacks a depth value that R5 packed across all four
    // RGBA bytes of the texture; the result is then scaled by
    // R5_clipRange.w = farPlane - nearPlane.
    const vec4 bitSh = vec4(1.0 / 16777216.0, 1.0 / 65535.0, 1.0 / 256.0, 1.0);
    return dot(texture2D(R5_texture0, texCoord), bitSh) * R5_clipRange.w;
}
I have to admit I do not understand the code snippet. My depth is stored in the alpha channel of my texture, and I thought it should be enough to just do this:
return texture2D(texSampler0, texCoord).a * R5_clipRange.w;
Correct or wrong?
Your normal texture seems wrong. My guess is that your vmtInvTranspMatrix is a model-view matrix; however, it should be the model-view-projection matrix (note that you need screen-space normals, not view-space normals). The depth calculation is correct.
I've implemented SSAO once, and my normal texture looked like this (note there is no blue in it).
1) The SSAO shader uses projectionMatrix and its inverse.
Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or wrong?
If you mean the second pass, where you render a quad to compute the actual SSAO: yes, you can avoid the multiplication by the orthographic projection matrix altogether. If you render a screen quad with [x,y] coordinates ranging from -1 to 1, you can use a really simple vertex shader:
in vec2 in_Position;   // quad corners, ranging over [-1,1]
out vec2 texcoord;

const vec2 madd = vec2(0.5, 0.5);

void main(void)
{
    gl_Position = vec4(in_Position, -1.0, 1.0);
    texcoord = in_Position.xy * madd + madd;   // map [-1,1] to [0,1]
}
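The matching fragment shader can then sample the combined texture directly at that texcoord. A small sanity-check sketch (normalDepthTex is a hypothetical name standing in for whatever the combined normal+depth texture is bound as):

in vec2 texcoord;                  // written by the vertex shader above
uniform sampler2D normalDepthTex;  // hypothetical: normals in RGB, linear depth in A
out vec4 fragColor;

void main(void)
{
    vec4 nd = texture(normalDepthTex, texcoord);
    vec3 normalVS = nd.rgb;        // normal as stored in the first pass
    float linearDepth = nd.a;      // linear [0,1] depth from the first pass
    fragColor = vec4(vec3(linearDepth), 1.0);  // visualize depth to verify the G-buffer
}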
2) Having a combined normal-and-depth texture instead of two separate ones.
Nah, that won't cause problems. It's a common practice to do so.