I'd like to debug my render-to-cubemap function by projecting the whole thing to a 2D texture, just like this one:
In my render-from-texture shader I only have the UV texture coordinates available (ranging from (0,0) to (1,1)). How can I project the cubemap to the screen in a single draw call?
You can do this by rendering 6 quads and using 3D texture coordinates (s,t,p) pointing to the cube's vertices, i.e. the 8 variations of (+/-1,+/-1,+/-1).
2D UV coordinates (s,t), i.e. the 4 variations of (0/1,0/1), are not usable for the whole CUBE_MAP, only for its individual sides.
Look at txr_skybox in
Normal mapping gone horribly wrong
to see how a CUBE_MAP is used in a fragment shader.
PS: in OpenGL the texture coordinates are called s,t,p,q instead of u,v,w,...
Here is a related QA:
rendering cube map layout, understanding glTexCoord3f parameters
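For illustration, here is a minimal sketch of that idea (the attribute locations and the uniform name txr_cube are my assumptions, not taken from the linked answers): each corner of each quad carries the matching cube corner as its 3D texture coordinate, and the fragment shader samples the cube map with the interpolated direction:
// Vertex shader sketch
#version 330 core
layout(location=0) in vec2 pos;       // quad corner on screen, NDC <-1,+1>
layout(location=1) in vec3 cubeCoord; // matching cube corner, one of the 8 variations of (+/-1,+/-1,+/-1)
out vec3 stp;                         // 3D texture coordinate (s,t,p)
void main()
{
    stp = cubeCoord;
    gl_Position = vec4(pos, 0.0, 1.0);
}
// Fragment shader sketch
#version 330 core
uniform samplerCube txr_cube;
in vec3 stp;
out vec4 col;
void main()
{
    col = texture(txr_cube, stp); // the interpolated direction selects the face and texel
}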
My answer is essentially the same as the accepted one, but I have used this very technique to debug my depth cubemap (used for shadow casting) in my current project, so I thought I would include a working sample of the fragment shader code I used.
Unfolding cubemap
This is supposed to be rendered to a rectangle with aspect ratio 3/4 directly on the screen, with s,t going from (0,0) in the lower-left corner to (1,1) in the upper-right corner.
Note that in this case the cubemap I use is inverted, that is, objects on the +(x,y,z) side of the cubemap origin are rendered to -(x,y,z), and the direction I chose as up for the top/bottom quads is completely arbitrary; so to get this example to work you may need to change some signs or swap s and t a few times. Also note that I only read one channel here, as it is a depth map:
Fragment shader code for an unfolded cube map like the one in the question:
//Should work in most other versions
#version 400 core

uniform samplerCube dynamic_texture;

out vec4 out_color;
in vec2 ST;

void main()
{
    //In this example I use a depth map with only 1 channel, but the projection
    //should work with a colored cubemap too, just replace this with a vec3 or vec4
    float depth = 0.0;

    //Scale texture coordinates such that each quad has local coordinates from 0,0 to 1,1
    vec2 localST = ST;
    localST.t = mod(localST.t * 3.0, 1.0);
    localST.s = mod(localST.s * 4.0, 1.0);

    //Due to the way my depth cubemap is rendered, objects on the -x,y,z side are projected to the positive x,y,z side

    //Are we inside the column where the top/bottom quads are drawn?
    if (ST.s * 4.0 > 1.0 && ST.s * 4.0 < 2.0)
    {
        //Bottom (-y) quad
        if (ST.t * 3.0 < 1.0)
        {
            //Get the lower y texture, which is projected to the +y part of my cubemap
            vec3 dir = vec3(localST.s * 2.0 - 1.0, 1.0, localST.t * 2.0 - 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
        //Top (+y) quad
        else if (ST.t * 3.0 > 2.0)
        {
            //Due to the (arbitrary) direction I chose as up in my depth view matrix, I multiply the last coordinate by -1
            vec3 dir = vec3(localST.s * 2.0 - 1.0, -1.0, -localST.t * 2.0 + 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
        else //Front (-z) quad
        {
            vec3 dir = vec3(localST.s * 2.0 - 1.0, -localST.t * 2.0 + 1.0, 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
    }
    //Otherwise only the middle row of quads should be drawn
    else if (ST.t * 3.0 > 1.0 && ST.t * 3.0 < 2.0)
    {
        if (ST.s * 4.0 < 1.0) //Left (-x) quad
        {
            vec3 dir = vec3(-1.0, -localST.t * 2.0 + 1.0, localST.s * 2.0 - 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
        else if (ST.s * 4.0 < 3.0) //Right (+x) quad (front was done above)
        {
            vec3 dir = vec3(1.0, -localST.t * 2.0 + 1.0, -localST.s * 2.0 + 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
        else //Back (+z) quad
        {
            vec3 dir = vec3(-localST.s * 2.0 + 1.0, -localST.t * 2.0 + 1.0, -1.0);
            depth = texture(dynamic_texture, dir).r;
        }
    }
    else //Top/bottom column, but outside the quads
    {
        //No need for fancy semi-transparent borders between the quads, this is just for debugging purposes after all
        discard;
    }

    out_color = vec4(vec3(depth), 1.0);
}
Here is a screenshot of this technique used to render my depth map in the lower-right corner of the screen (rendered with a point light source placed at the very center of an empty room, with no other objects than the walls and the player character):
Equirectangular projection
I must, however, say that I prefer using an equirectangular projection for debugging cubemaps, as it doesn't have any holes in it. Luckily, these are even easier to make than unfolded cubemaps: just use a fragment shader like the following (still with s,t going from (0,0) to (1,1) from the lower-left to the upper-right corner), but this time with aspect ratio 1/2:
//Should work in most other versions
#version 400 core

uniform samplerCube dynamic_texture;

out vec4 out_color;
in vec2 ST;

void main()
{
    //Convert the texture coordinate to spherical angles, then to a direction vector
    float phi = ST.s * 3.1415 * 2.0;
    float theta = (-ST.t + 0.5) * 3.1415;
    vec3 dir = vec3(cos(phi) * cos(theta), sin(theta), sin(phi) * cos(theta));

    //In this example I use a depth map with only 1 channel, but the projection should work with a colored cubemap too
    float depth = texture(dynamic_texture, dir).r;
    out_color = vec4(vec3(depth), 1.0);
}
Here is a screenshot where an equirectangular projection is used to display my depth map in the lower-right corner:
I want to have OpenGL output with a spherical projection to make a 360° video.
I now have the cube map faces, generated with 6 perspective cameras.
I need something like this:
How can I get this output?
Any ideas?
It depends on the exact projection you are expected to use. For a simple spherical projection, you render a quad into your destination texture with the following fragment shader:
uniform samplerCube tex;
in vec2 texcoord;
out vec4 OUT;
void main() {
    //texcoord holds (longitude, latitude) in radians; convert it to a direction vector
    vec3 d = vec3(
        cos(texcoord[0]) * cos(texcoord[1]),
        sin(texcoord[0]) * cos(texcoord[1]),
        sin(texcoord[1])
    );
    OUT = texture(tex, d);
}
texcoord should vary between (-tau/2, -tau/4) in the bottom-left corner and (tau/2, tau/4) in the top-right corner.
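For example, here is a minimal vertex shader sketch producing that range from a fullscreen quad (it assumes the quad's positions span (-1,-1) to (1,1); the attribute name pos is my own):
#version 330 core
layout(location=0) in vec2 pos; // fullscreen quad corner, NDC <-1,+1>
out vec2 texcoord;              // (longitude, latitude) in radians
const float TAU = 6.28318530718;
void main()
{
    texcoord = vec2(pos.x * TAU / 2.0, pos.y * TAU / 4.0); // maps to (-tau/2..tau/2, -tau/4..tau/4)
    gl_Position = vec4(pos, 0.0, 1.0);
}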
This was originally asked by @sydd here. I was curious about it, so I tried to code it, but it was closed/deleted before I could answer, so here it is.
Question: How to reproduce/implement this 2D ray casting lighting effect in GLSL?
The effect casts rays from the mouse position in every direction, accumulating the background map's alpha and colors, which affect the strength of the light reaching each pixel.
So the input should be:
mouse position
background RGBA map texture
Background map
OK, I created a test RGBA map as 2 images: one containing RGB (on the left) and a second with the alpha channel (on the right), so you can see them both. Of course, they are combined to form a single RGBA texture.
I blurred them both a bit to obtain better visual effects at the edges.
Ray casting
As this should run in GLSL, we need to cast the rays somewhere. I decided to do it in the fragment shader. So the algorithm is like this:
On the GL side, pass the uniforms needed by the shaders: the mouse position as a texture coordinate, the max resolution of the texture, and the light transmission strength.
On the GL side, draw a quad covering the whole screen with the background texture (no blending).
In the vertex shader, just pass through the texture and fragment coordinates.
In the fragment shader, for each fragment:
cast a ray from the mouse position to the actual fragment position (in texture coordinates)
accumulate/integrate the light properties during the ray's travel
stop if the light strength is near zero or the target fragment position is reached.
// Vertex
#version 420 core
layout(location=0) in vec2 pos; // glVertex2f <-1,+1>
layout(location=8) in vec2 txr; // glTexCoord2f Unit0 <0,1>
out smooth vec2 t1;             // texture end point <0,1>
void main()
{
    t1 = txr;
    gl_Position = vec4(pos, 0.0, 1.0);
}
// Fragment
#version 420 core
uniform float transmit = 0.99; // light transmission coefficient <0,1>
uniform int txrsiz = 512;      // max texture size [pixels]
uniform sampler2D txrmap;      // texture unit for light map
uniform vec2 t0;               // texture start point (mouse position) <0,1>
in smooth vec2 t1;             // texture end point, direction <0,1>
out vec4 col;
void main()
{
    int i;
    vec2 t, dt;
    vec4 c0, c1;
    dt = normalize(t1 - t0) / float(txrsiz); // one texel step towards the target fragment
    c0 = vec4(1.0, 1.0, 1.0, 1.0);           // light ray strength
    t = t0;
    if (dot(t1 - t, dt) > 0.0)
        for (i = 0; i < txrsiz; i++)
        {
            c1 = texture(txrmap, t); // texture2D() was removed from core profiles
            c0.rgb *= ((c1.a) * (c1.rgb)) + ((1.0 - c1.a) * transmit);
            if (dot(t1 - t, dt) <= 0.000) break;    // target fragment reached
            if (c0.r + c0.g + c0.b <= 0.001) break; // light fully absorbed
            t += dt;
        }
    col = 0.90 * c0 + 0.10 * texture(txrmap, t1); // render with ambient light
    //col = c0;                                   // render without ambient light
}
And finally, the result:
Animated 256-color GIF:
The colors in the GIF are slightly distorted due to 8-bit truncation. Also, if the animation stops, refresh the page or open it in a decent image viewer instead.
I have a circle in 3D space (red in the image) with normals (white).
The circle is drawn as a line strip.
The problem: I need to draw only those pixels whose normals point towards the camera (the angle between the normal and the camera vector is < 90°), using discard in the fragment shader code. Like backface culling, but for lines.
The red part of the circle is what I need to draw, and the black part is what I need to discard in the fragment shader.
A good example is the 3ds Max rotation gizmo, where the back sides of the lines are hidden:
So, in the fragment shader I have:
if(condition)
discard;
Help me come up with this condition. It would be good to consider both orthographic and perspective cameras.
Well, you already described your condition:
(angle between normal and camera vector is < 90)
You have to forward your normals to the fragment shader (don't forget to re-normalize them in the FS; the interpolation will change the length). And you need the viewing vector in the same space as your normals, so you might transform the normals to eye space, or use world space, or even transform the view direction/camera location into object space. Since the condition angle(N,V) >= 90 (degrees) is the same as cos(angle(N,V)) <= 0 (assuming normalized vectors), you can simply use the dot product:
if (dot(N,V) <= 0)
discard;
UPDATE:
As you pointed out in the comments, you have the "classical" GL matrices available. So it makes sense to do this transformation in eye space. In the vertex shader, you put
in vec4 vertex;   // object space position
in vec3 normal;   // object space normal direction

out vec3 normal_eyespace;
out vec3 vertex_eyespace;

uniform mat3 normalMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

void main()
{
    normal_eyespace = normalize(normalMatrix * normal);
    vec4 v = modelViewMatrix * vertex;
    vertex_eyespace = v.xyz;
    gl_Position = projectionMatrix * v;
}
and in the fragment shader, you can simply do
in vec3 normal_eyespace;
in vec3 vertex_eyespace;
void main()
{
    //in eye space the camera sits at the origin, so the view vector is just the negated fragment position
    if (dot(normalize(normal_eyespace), normalize(-vertex_eyespace)) <= 0)
        discard;
    // ...
}
Note: this code assumes modern GLSL with in/out instead of attribute/varying qualifiers. I also assume no builtin attributes. But that code should be easily adaptable to older GL.
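For instance, a sketch of the same vertex shader adapted to older GLSL (assuming GLSL 1.20 with the builtin attributes and matrices available) might look like:
#version 120
varying vec3 normal_eyespace;
varying vec3 vertex_eyespace;
void main()
{
    normal_eyespace = normalize(gl_NormalMatrix * gl_Normal);
    vec4 v = gl_ModelViewMatrix * gl_Vertex; // eye space position
    vertex_eyespace = v.xyz;
    gl_Position = gl_ProjectionMatrix * v;
}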
I'm using GLSL to draw sprites from a sprite-sheet. I'm using jME 3, yet there are only small differences, and only with regards to deprecated functions.
The most important part of drawing a sprite from a sprite sheet is to draw only a subset/range of pixels, for example the range from (100, 0) to (200, 100). In the following test case sprite-sheet, and using the previous bounds, only the green part of the sprite-sheet would be drawn.
This is what I have so far:
Definition:
MaterialDef Solid Color {
    //This is the list of user-defined variables to be used in the shader
    MaterialParameters {
        Vector4 Color
        Texture2D ColorMap
    }
    Technique {
        VertexShader GLSL100:   Shaders/tc_s1.vert
        FragmentShader GLSL100: Shaders/tc_s1.frag
        WorldParameters {
            WorldViewProjectionMatrix
        }
    }
}
.vert file:
uniform mat4 g_WorldViewProjectionMatrix;
attribute vec3 inPosition;
attribute vec4 inTexCoord;

varying vec4 texture_coordinate;

void main(){
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    texture_coordinate = vec4(inTexCoord);
}
.frag:
uniform vec4 m_Color;
uniform sampler2D m_ColorMap;

varying vec4 texture_coordinate;

void main(){
    vec4 color = vec4(m_Color);
    vec4 tex = texture2D(m_ColorMap, texture_coordinate);
    color *= tex;
    gl_FragColor = color;
}
In jME 3, inTexCoord refers to gl_MultiTexCoord0, and inPosition refers to gl_Vertex.
As you can see, I tried to give the texture_coordinate a vec4 type, rather than a vec2, so as to be able to reference its p and q values (texture_coordinate.p and texture_coordinate.q). Modifying them only resulted in different hues.
m_Color refers to the color, inputted by the user, and serves the purpose of altering the hue. In this case, it should be disregarded.
So far, the shader works as expected and the texture displays correctly.
I've been using resources and tutorials from NeHe (http://nehe.gamedev.net/article/glsl_an_introduction/25007/) and Lighthouse3D (http://www.lighthouse3d.com/tutorials/glsl-tutorial/simple-texture/).
Which functions/values should I alter to get the desired effect of displaying only part of the texture?
Generally, if you want to display only part of a texture, you change the texture coordinates associated with each vertex. Since you don't show the code that tells OpenGL about your vertices, I'm not sure what to suggest. But if you're using the older deprecated functions, instead of doing this:
// Lower Left of triangle
glTexCoord2f(0,0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(1,0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(1,1);
glVertex3f(x2,y2,z2);
You could do this:
// Lower Left of triangle
glTexCoord2f(1.0 / 3.0, 0.0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(2.0 / 3.0, 0.0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(2.0 / 3.0, 1.0);
glVertex3f(x2,y2,z2);
If you're using VBOs, then you need to modify your array of texture coordinates to access the appropriate section of your texture in a similar manner.
For the sampler2D the texture coordinates are normalized so that the leftmost and bottom-most coordinates are 0, and the rightmost and topmost are 1. So for your example of a 300-pixel-wide texture, the green section would be between 1/3rd and 2/3rds the width of the texture.
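Alternatively, you could keep the full-range coordinates on the mesh and remap them in the fragment shader. Here is a sketch under that assumption (the m_SubMin/m_SubMax material parameters are hypothetical names, not existing jME parameters; you would have to declare them as Vector2 entries in MaterialParameters):
uniform vec4 m_Color;
uniform sampler2D m_ColorMap;
uniform vec2 m_SubMin; // e.g. vec2(1.0/3.0, 0.0) selects the green section
uniform vec2 m_SubMax; // e.g. vec2(2.0/3.0, 1.0)
varying vec4 texture_coordinate;
void main(){
    //remap <0,1> coordinates into the chosen sub-rectangle of the sheet
    vec2 uv = mix(m_SubMin, m_SubMax, texture_coordinate.st);
    gl_FragColor = m_Color * texture2D(m_ColorMap, uv);
}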
I implemented a fairly simple shadow map. I have a simple plane, imported as an OBJ, as the ground, and a bunch of trees.
I have a weird shadow on the plane, which I think is the plane's self-shadow. I am not sure what code to post; if it would help, please tell me and I'll do so.
First image: the camera view of the scene. The weirdly textured low-poly sphere is just a reference for the light position.
Second image: the depth texture stored in the framebuffer, from which I calculated the shadow coords from the light's perspective. Since I can't post more than 2 links, I'll leave this one out.
Third image: the depth texture with a better view of the plane projecting the shadow, from a different light position above the whole scene.
LE: the second picture http://i41.tinypic.com/23h3wqf.jpg (depth texture of the first picture)
I tried some fixes: adding glCullFace(GL_BACK) before drawing the ground in the first pass removes it from the depth texture, but the shadow still appears in the final render (like in the first picture, on the back part of the ground). I also tried adding cull face in the second pass, and all combinations of front and back facing, but the shadow still shows on the ground. Could it be because of the values in the orthographic projection?
Shadow fragment shader:
#version 330 core
layout(location = 0) out vec3 color;

in vec2 texcoord;
in vec4 ShadowCoord;

uniform sampler2D textura1;
uniform sampler2D textura2;
uniform sampler2D textura_depth;
uniform int has_alpha;

void main(){
    vec3 tex1 = texture(textura1, texcoord).xyz;
    vec3 tex2 = texture(textura2, texcoord).xyz;

    //alpha test: treat near-black texels of the second texture as transparent
    if(has_alpha > 0)
        if((tex2.r < 0.1) && (tex2.g < 0.1) && (tex2.b < 0.1))
            discard;

    //Z value of the depth texture from pass 1
    float hartaDepth = texture(textura_depth, (ShadowCoord.xy / ShadowCoord.w)).z;

    float shadowValue = 1.0;
    if(hartaDepth < ShadowCoord.z - 0.005) //small depth bias to reduce shadow acne
        shadowValue = 0.5;

    color = shadowValue * tex1;
}