How to implement 2D raycasting light effect in GLSL - opengl

This was originally asked by #sydd here. I was curious about it, so I tried to code it, but the question was closed/deleted before I could answer, so here it is.
Question: How to reproduce/implement this 2D ray casting lighting effect in GLSL?
The effect itself casts rays from the mouse position in every direction, accumulating the background map's alpha and colors, which affect the light strength of the pixels.
So the input should be:
mouse position
background RGBA map texture

Background map
OK, I created a test RGBA map as two images: one containing the RGB (on the left) and a second with the alpha channel (on the right), so you can see them both. Of course they are combined to form a single RGBA texture.
I blurred them both a bit to obtain better visual effects on the edges.
Ray casting
As this should run in GLSL, we need to cast the rays somewhere. I decided to do it in the fragment shader. So the algorithm is like this:
On the GL side, pass the uniforms needed by the shaders. Here that means the mouse position as a texture coordinate, the max resolution of the texture, and the light transmission strength.
On the GL side, draw a quad covering the whole screen with the background texture (no blending).
In the vertex shader, just pass through the texture and fragment coordinates needed.
In the fragment shader, for each fragment:
cast a ray from the mouse position to the actual fragment position (in texture coordinates)
accumulate/integrate the light properties along the ray's travel
stop if the light strength is near zero or the target fragment position is reached.
Vertex shader
// Vertex
#version 420 core
layout(location=0) in vec2 pos; // glVertex2f <-1,+1>
layout(location=8) in vec2 txr; // glTexCoord2f Unit0 <0,1>
out smooth vec2 t1; // texture end point <0,1>
void main()
{
    t1=txr;
    gl_Position=vec4(pos,0.0,1.0);
}
Fragment shader
// Fragment
#version 420 core
uniform float transmit=0.99;// light transmission coefficient <0,1>
uniform int txrsiz=512; // max texture size [pixels]
uniform sampler2D txrmap; // texture unit for light map
uniform vec2 t0; // texture start point (mouse position) <0,1>
in smooth vec2 t1; // texture end point, direction <0,1>
out vec4 col;
void main()
{
    int i;
    vec2 t,dt;
    vec4 c0,c1;
    dt=normalize(t1-t0)/float(txrsiz);
    c0=vec4(1.0,1.0,1.0,1.0); // light ray strength
    t=t0;
    if (dot(t1-t,dt)>0.0)
        for (i=0;i<txrsiz;i++)
        {
            c1=texture(txrmap,t);
            c0.rgb*=((c1.a)*(c1.rgb))+((1.0f-c1.a)*transmit);
            if (dot(t1-t,dt)<=0.000f) break;
            if (c0.r+c0.g+c0.b<=0.001f) break;
            t+=dt;
        }
    col=0.90*c0+0.10*texture(txrmap,t1); // render with ambient light
    //col=c0; // render without ambient light
}
And finally the result:
Animated 256 colors GIF:
The colors in the GIF are slightly distorted due to 8-bit truncation. Also, if the animation stops, refresh the page or open it in a decent gfx viewer instead.

Related

Project cubemap to 2D texture

I'd like to debug my render to cubemap function by projecting the whole thing to a 2D texture just like this one:
On my render from texture shader I've only got the UV texture coordinates available (ranging from (0,0) to (1,1)). How can I project the cubemap to the screen in a single draw call?
You can do this by rendering 6 quads and using 3D texture coordinates (s,t,p) pointing to each vertex of the cube, i.e. the 8 variations of (+/-1,+/-1,+/-1).
The 2D UV coordinates (s,t), i.e. the 4 variations of (0/1,0/1), are not usable for the whole CUBE_MAP, only for its individual sides.
Look for txr_skybox in Normal mapping gone horribly wrong to see how CUBE_MAP is used in the fragment shader.
PS: in OpenGL the texture coordinates are called s,t,p,q instead of u,v,w,...
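A minimal sketch of that 6-quad idea (hypothetical code, not from the linked answer): each face's quad carries a vec3 texture coordinate set to that face's four cube corners, and the fragment shader just samples the cubemap with the interpolated direction.
// Vertex (sketch) - one quad per cube face, txr holds that face's cube corners (+/-1,+/-1,+/-1)
#version 420 core
layout(location=0) in vec2 pos; // quad position on screen <-1,+1>
layout(location=8) in vec3 txr; // cube corner direction for this vertex
out vec3 dir;
void main()
{
    dir=txr;
    gl_Position=vec4(pos,0.0,1.0);
}
// Fragment (sketch) - sample the cubemap with the interpolated direction
#version 420 core
uniform samplerCube txrcube; // hypothetical cubemap texture unit
in vec3 dir;
out vec4 col;
void main()
{
    col=texture(txrcube,dir);
}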
Here is a related QA:
rendering cube map layout, understanding glTexCoord3f parameters
My answer is essentially the same as the accepted one, but I have used this very technique to debug my depth cubemap (used for shadow casting) in my current project, so I thought I would include a working sample of the fragment shader code I used.
Unfolding cubemap
This is supposed to be rendered to a rectangle drawn directly on top of the screen with aspect ratio 3/4, with s,t going from (0,0) in the lower-left corner to (1,1) in the upper-right corner.
Note that in this case the cubemap I use is inverted, that is, objects on the +(x,y,z) side of the cubemap origin are rendered to -(x,y,z), and the direction I chose as up for the top/bottom quads is completely arbitrary; so to get this example to work you may need to change some signs or swap s and t in a few places. Also note that I only read one channel here, as it is a depth map:
Fragment shader code for a quad-map as the one in the question:
//Should work in most other versions
#version 400 core
uniform samplerCube dynamic_texture;
out vec4 out_color;
in vec2 ST;
void main()
{
    // In this example I use a depth map with only 1 channel, but the projection should work with a colored cubemap too, just replace this with a vec3 or vec4
    float debth=0;
    vec2 localST=ST;
    // Scale tex coordinates such that each quad has local coordinates from 0,0 to 1,1
    localST.t = mod(localST.t*3,1);
    localST.s = mod(localST.s*4,1);
    // Due to the way my depth cubemap is rendered, objects on the -x,y,z side are projected to the positive x,y,z side
    // Inside where top/bottom is to be drawn?
    if (ST.s*4>1 && ST.s*4<2)
    {
        // Bottom (-y) quad
        if (ST.t*3.f < 1)
        {
            vec3 dir=vec3(localST.s*2-1,1,localST.t*2-1); // Get lower y texture, which is projected to the +y part of my cubemap
            debth = texture( dynamic_texture, dir ).r;
        }
        // Top (+y) quad
        else if (ST.t*3.f > 2)
        {
            vec3 dir=vec3(localST.s*2-1,-1,-localST.t*2+1); // Due to the (arbitrary) direction I chose as up in my depth view matrix, I here multiply the latter coordinate with -1
            debth = texture( dynamic_texture, dir ).r;
        }
        else // Front (-z) quad
        {
            vec3 dir=vec3(localST.s*2-1,-localST.t*2+1,1);
            debth = texture( dynamic_texture, dir ).r;
        }
    }
    // If not, only these ranges should be drawn
    else if (ST.t*3.f > 1 && ST.t*3 < 2)
    {
        if (ST.x*4.f < 1) // Left (-x) quad
        {
            vec3 dir=vec3(-1,-localST.t*2+1,localST.s*2-1);
            debth = texture( dynamic_texture, dir ).r;
        }
        else if (ST.x*4.f < 3) // Right (+x) quad (front was done above)
        {
            vec3 dir=vec3(1,-localST.t*2+1,-localST.s*2+1);
            debth = texture( dynamic_texture, dir ).r;
        }
        else // Back (+z) quad
        {
            vec3 dir=vec3(-localST.s*2+1,-localST.t*2+1,-1);
            debth = texture( dynamic_texture, dir ).r;
        }
    }
    else // Top/bottom columns, but outside where we need to put something
    {
        discard; // No need to add fancy semi-transparent borders for quads, this is just for debugging purposes after all
    }
    out_color = vec4(vec3(debth),1);
}
Here is a screenshot of this technique used to render my depth-map in the lower-right corner of the screen (rendering with a point-light source placed at the very center of an empty room with no other objects than the walls and the player character):
Equirectangular projection
I must, however, say that I prefer using an equirectangular projection for debugging cubemaps, as it doesn't have any holes in it; and, luckily, these are even easier to make than unfolded cubemaps: just use a fragment shader like this (still with s,t going from (0,0) to (1,1), lower-left to upper-right corner), but this time with aspect ratio 1/2:
//Should work in most other versions
#version 400 core
uniform samplerCube dynamic_texture;
out vec4 out_color;
in vec2 ST;
void main()
{
    float phi=ST.s*3.1415*2;
    float theta=(-ST.t+0.5)*3.1415;
    vec3 dir = vec3(cos(phi)*cos(theta),sin(theta),sin(phi)*cos(theta));
    // In this example I use a depth map with only 1 channel, but the projection should work with a colored cubemap too
    float debth = texture( dynamic_texture, dir ).r;
    out_color = vec4(vec3(debth),1);
}
Here is a screenshot where an equirectangular projection is used to display my depth-map in the lower-right corner:

Add radial gradient texture to each white part of another texture in shader

Recently, I read an article about a sun shader (XNA Sun Shader) and decided to implement it using OpenGL ES 2.0. But I ran into a problem with the shader:
I have two textures, one of them is fire gradient texture:
And the other one is a texture whose white parts must be colored by the first texture:
So, I'm going to have a result like the one below (do not pay attention to the fact that the result texture is rendered on a sphere mesh):
I really hope that somebody knows how to implement this shader.
You can first sample the original texture; if the color is white, then sample the gradient texture.
uniform sampler2D Texture0; // original texture
uniform sampler2D Texture1; // gradient texture
varying vec2 texCoord;
void main(void)
{
    gl_FragColor = texture2D( Texture0, texCoord );
    // If the color in the original texture is white,
    // use the color from the gradient texture.
    if (gl_FragColor == vec4(1.0, 1.0, 1.0, 1.0)) {
        gl_FragColor = texture2D( Texture1, texCoord );
    }
}
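A note of mine, not part of the original answer: an exact comparison against pure white can fail once the texture is filtered, mipmapped, or compressed, so a small threshold is usually safer. A hedged variant of the same fragment shader (the 0.99 threshold is an arbitrary choice):
precision mediump float; // default float precision required by GLES 2.0 fragment shaders
uniform sampler2D Texture0; // original texture
uniform sampler2D Texture1; // gradient texture
varying vec2 texCoord;
void main(void)
{
    vec4 base = texture2D( Texture0, texCoord );
    // Treat anything close enough to white as white.
    if (all(greaterThan(base.rgb, vec3(0.99)))) {
        gl_FragColor = texture2D( Texture1, texCoord );
    } else {
        gl_FragColor = base;
    }
}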

deriving screen-space coordinates in glsl shader

I'm trying to write a simple application for baking a texture from a paint buffer. Right now I have a mesh, a mesh texture, and a paint texture. When I render the mesh, the mesh shader looks up the mesh texture and then, based on the screen position of the fragment, looks up the paint texture value. I then composite the paint lookup with the mesh lookup.
Here's a screenshot with nothing in the paint buffer and just the mesh texture.
Here's a screenshot with something in the paint buffer composited over the mesh texture.
So that all works great, but I'd like to bake the paint texture into my mesh texture. Right now I send the mesh's UVs down as the position, with an ortho projection set to (0,1)x(0,1), so I'm actually doing everything in texture space. The mesh texture lookup also uses that position. The problem I'm having, though, is computing the screen-space position of the fragment under the original projection to figure out where to sample the paint texture. I'm passing the bake shader my original camera projection matrices and the object position so the vertex shader can send the fragment shader the device-normalized position of the fragment (again, under my original camera projection) to do the lookup, but it's coming out funny.
Here's what the bake texture is generating if I render half the output using the paint texture and screen position I've derived.
I would expect that block line to be right down the middle.
Am I calculating the screen position incorrectly in my vertex shader? Or am I going about this in a fundamentally wrong way?
// vertex shader
uniform mat4 orthoPV;
uniform mat4 cameraPV;
uniform mat4 objToWorld;
varying vec2 uv;
varying vec2 screenPos;
void main() {
    uv = gl_Vertex.xy;
    screenPos = 0.5 * (vec2(1,1) + (cameraPV * objToWorld * vec4(gl_MultiTexCoord0.xyz,1)).xy);
    screenPos = gl_MultiTexCoord0.xy;
    gl_Position = orthoPV * gl_Vertex;
    gl_FrontColor = vec4(1,0,0,1);
}
// fragment shader
uniform sampler2D meshTexture;
uniform sampler2D paintTexture;
varying vec2 uv;
varying vec2 screenPos;
void main() {
    gl_FragColor = texture2D(meshTexture, uv);
    if (screenPos.x > .5)
        gl_FragColor = texture2D(paintTexture, uv);
}
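No answer is included here, but one plausible cause (my guess, not a confirmed fix): the clip-space result of cameraPV is used without the perspective divide, and the next line overwrites screenPos with the texture coordinate anyway. A hedged sketch of the vertex shader with the divide applied:
// vertex shader (sketch)
uniform mat4 orthoPV;
uniform mat4 cameraPV;
uniform mat4 objToWorld;
varying vec2 uv;
varying vec2 screenPos;
void main() {
    uv = gl_Vertex.xy;
    // clip-space position of the original object-space vertex under the camera projection
    vec4 clip = cameraPV * objToWorld * vec4(gl_MultiTexCoord0.xyz, 1.0);
    // perspective divide, then remap NDC <-1,+1> to <0,1> for the paint-texture lookup
    screenPos = 0.5 * (clip.xy / clip.w) + 0.5;
    gl_Position = orthoPV * gl_Vertex;
    gl_FrontColor = vec4(1, 0, 0, 1);
}
Strictly speaking, the divide should happen per fragment (pass clip as a varying vec4 and divide in the fragment shader) to keep the interpolation perspective-correct, but per vertex is often close enough for a first test.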

How to render a radial field in OpenGL?

How would I render a 2D radial field in OpenGL? I know I can render it pixel by pixel but I'm wondering if there are more efficient solutions? I don't mind if it requires OpenGL3+ functionality.
How familiar are you with shaders? I'm thinking an easy-ish answer would be to render a quad and then write a fragment shader to color the quad based on how far each pixel is from the center.
Pseudocode:
vertex shader:
    vec2 center = vec2((x1+x2)/2,(y1+y2)/2); // pass this to the fragment shader
fragment shader:
    float dist = distance(pos,center); // "pos" is the interpolated position of the fragment. It's passed in from the vertex shader
    // Now that we have the distance between each fragment and the center, we can do all kinds of stuff:
    gl_FragColor = vec4(1,1,1,dist); // Assuming you're drawing a unit square, this will make each pixel's transparency smoothly vary from 1 (right next to the center) to 0 (at the edge of the square)
    gl_FragColor = vec4(dist, dist, dist, 1.0); // Vary each pixel's color from white to black
    // etc, etc
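A fleshed-out version of that pseudocode (a hypothetical sketch, assuming a single full-viewport quad with corners at <-1,+1> and the radial field centered on the quad):
// vertex shader
#version 330 core
layout(location=0) in vec2 pos; // quad corners <-1,+1>
out vec2 local;                 // quad-local coordinate <0,1>
void main()
{
    local = 0.5*pos + 0.5;
    gl_Position = vec4(pos, 0.0, 1.0);
}
// fragment shader
#version 330 core
in vec2 local;
out vec4 frag_color;
void main()
{
    float dist = distance(local, vec2(0.5));    // distance of this fragment from the quad center
    frag_color = vec4(1.0, 1.0, 1.0, dist);     // alpha grows with distance from the center (as in the pseudocode above)
    //frag_color = vec4(vec3(dist), 1.0);       // or: vary the color with distance instead
}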
Let me know if you need more detail

OpenGL shadow map issue

I implemented a fairly simple shadow map. I have a simple obj imported plane as ground and a bunch of trees.
I have a weird shadow on the plane which I think is the plane's self-shadow. I am not sure what code to post; if it would help, please tell me and I'll do so.
First image, camera view of the scene. The weird textured lowpoly sphere is just for reference of the light position.
Second image, the depth texture stored in the framebuffer; I calculated the shadow coordinates from the light's perspective with it. Since I can't post more than 2 links, I'll leave this one out.
Third image, depth texture with a better view of the plane projecting the shadow from a different light position above the whole scene.
Later edit: the second picture http://i41.tinypic.com/23h3wqf.jpg (depth texture of the first picture)
I tried some fixes: adding glCullFace(GL_BACK) before drawing the ground in the first pass removes it from the depth texture, but it still appears in the final render (like in the first picture, the back part of the ground). I also tried adding CullFace in the second pass, still showing the shadow on the ground, and tried all combinations of front and back facing. Can it be because of the values in the orthographic projection?
Shadow fragment shader:
#version 330 core
layout(location = 0) out vec3 color;
in vec2 texcoord;
in vec4 ShadowCoord;
uniform sampler2D textura1;
uniform sampler2D textura2;
uniform sampler2D textura_depth;
uniform int has_alpha;
void main(){
    vec3 tex1 = texture(textura1, texcoord).xyz;
    vec3 tex2 = texture(textura2, texcoord).xyz;
    if (has_alpha>0.5) if ((tex2.r<0.1) && (tex2.g<0.1) && (tex2.b<0.1)) discard;
    // Z value of depth texture from pass 1
    float hartaDepth=texture( textura_depth,(ShadowCoord.xy/ShadowCoord.w)).z;
    float shadowValue=1.0;
    if (hartaDepth < ShadowCoord.z-0.005)
        shadowValue=0.5;
    color = shadowValue * tex1;
}
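The question is left open here, but a frequent culprit for this kind of ground self-shadowing is shadow acne, and a slope-scaled bias often handles it better than the fixed 0.005 offset. A hypothetical variant of the shader above (it assumes the world-space normal and the direction toward the light are passed in as extra varyings, which the original shader does not do):
#version 330 core
layout(location = 0) out vec3 color;
in vec2 texcoord;
in vec4 ShadowCoord;
in vec3 N; // hypothetical: world-space surface normal from the vertex shader
in vec3 L; // hypothetical: direction from the surface toward the light
uniform sampler2D textura1;
uniform sampler2D textura_depth;
void main(){
    vec3 tex1 = texture(textura1, texcoord).xyz;
    float hartaDepth = texture(textura_depth, ShadowCoord.xy/ShadowCoord.w).z;
    // Slope-scaled bias: surfaces nearly parallel to the light direction need a larger offset.
    float bias = clamp(0.005 * tan(acos(clamp(dot(normalize(N), normalize(L)), 0.0, 1.0))), 0.0, 0.01);
    float shadowValue = (hartaDepth < ShadowCoord.z - bias) ? 0.5 : 1.0;
    color = shadowValue * tex1;
}
Rendering the depth pass with front-face culling (instead of back-face) is another common way to keep the acne off surfaces that face the light.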