I want to have OpenGL output with a spherical projection, to make a 360° video.
Right now I have cube map faces, generated with 6 perspective cameras.
I need something like this:
How can I get this output?
Any ideas?
It depends on the exact projection you are expected to use. For a simple spherical projection you render a quad into your destination texture with the following fragment shader:
#version 330 core

uniform samplerCube tex;

in vec2 texcoord;
out vec4 OUT;

void main() {
    vec3 d = vec3(
        cos(texcoord[0]) * cos(texcoord[1]),
        sin(texcoord[0]) * cos(texcoord[1]),
        sin(texcoord[1])
    );
    OUT = texture(tex, d);
}
texcoord should vary from (-τ/2, -τ/4) in the bottom-left corner to (τ/2, τ/4) in the top-right corner, where τ = 2π.
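As a minimal sketch of how those coordinates can be fed in (the attribute name pos is my assumption, not part of the answer), a fullscreen-quad vertex shader could scale its corners accordingly:

#version 330 core

// Fullscreen quad with corners at (-1,-1)..(1,1); "pos" is a hypothetical attribute name.
in vec2 pos;
out vec2 texcoord;

const float TAU = 6.28318530718;

void main() {
    // Map x from [-1,1] to [-tau/2, tau/2] (longitude) and y to [-tau/4, tau/4] (latitude).
    texcoord = pos * vec2(TAU / 2.0, TAU / 4.0);
    gl_Position = vec4(pos, 0.0, 1.0);
}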
I'd like to debug my render-to-cubemap function by projecting the whole thing to a 2D texture, just like this one:
In my render-from-texture shader I only have the UV texture coordinates available (ranging from (0,0) to (1,1)). How can I project the cubemap to the screen in a single draw call?
You can do this by rendering 6 quads and using 3D texture coordinates (s,t,p) pointing to each vertex of the cube, i.e. the 8 variations of (±1, ±1, ±1).
The 2D UV coordinates (s,t), the 4 variations of (0/1, 0/1), are not usable to address the whole CUBE_MAP, only its individual sides.
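A minimal sketch of the idea (all attribute and variable names here are hypothetical, not from the answer): each unfolded-face quad carries its 2D screen placement and the 3D cube-corner direction, and interpolation does the rest:

// Vertex shader: two attributes per corner of each face quad.
#version 330 core
in vec2 quad_pos;  // where this corner goes on screen
in vec3 corner;    // one of the 8 cube corners (+/-1, +/-1, +/-1)
out vec3 stp;      // 3D texture coordinate (s,t,p)
void main() {
    stp = corner;
    gl_Position = vec4(quad_pos, 0.0, 1.0);
}

// Fragment shader: sample the cube map with the interpolated direction.
#version 330 core
uniform samplerCube cube_tex;
in vec3 stp;
out vec4 frag;
void main() {
    frag = texture(cube_tex, stp);
}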
Look for txr_skybox in the answer to Normal mapping gone horribly wrong to see how the CUBE_MAP is used in a fragment shader.
PS: in OpenGL the texture coordinates are called s,t,p,q instead of u,v,w.
Here is a related QA:
rendering cube map layout, understanding glTexCoord3f parameters
My answer is essentially the same as the accepted one, but I have used this very technique to debug my depth cubemap (used for shadow casting) in my current project, so I thought I would include a working sample of the fragment shader code I used.
Unfolding cubemap
This is meant to be rendered to a rectangle directly on top of the screen, with a 4:3 width-to-height aspect ratio (four face columns by three face rows) and with s,t going from (0,0) in the lower-left corner to (1,1) in the upper-right corner.
Note that the cubemap I use here is inverted: objects on the +(x,y,z) side of the cubemap origin are rendered to -(x,y,z). Also, the directions I chose as "up" for the top/bottom quads are completely arbitrary, so to get this example to work you may need to flip some signs or swap s and t in places. Finally, note that I only read one channel here, as it is a depth map:
Fragment shader code for an unfolded cubemap like the one in the question:
// Should work in most other versions as well.
#version 400 core

uniform samplerCube dynamic_texture;

out vec4 out_color;
in vec2 ST;

void main()
{
    // In this example I use a depth map with only 1 channel, but the projection
    // works with a colored cubemap too; just replace the float with a vec3 or vec4.
    float depth = 0.0;
    vec2 localST = ST;

    // Scale the texture coordinates so each quad has local coordinates from 0,0 to 1,1.
    localST.t = mod(localST.t * 3.0, 1.0);
    localST.s = mod(localST.s * 4.0, 1.0);

    // Due to the way my depth cubemap is rendered, objects on the -x,y,z side
    // are projected to the positive x,y,z side.

    // Are we inside the column where the top/bottom quads are drawn?
    if (ST.s * 4.0 > 1.0 && ST.s * 4.0 < 2.0)
    {
        // Bottom (-y) quad
        if (ST.t * 3.0 < 1.0)
        {
            // Get the lower-y texture, which is projected to the +y part of my cubemap.
            vec3 dir = vec3(localST.s * 2.0 - 1.0, 1.0, localST.t * 2.0 - 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
        // Top (+y) quad
        else if (ST.t * 3.0 > 2.0)
        {
            // Due to the (arbitrary) direction I chose as up in my depth view
            // matrix, I multiply the latter coordinate by -1 here.
            vec3 dir = vec3(localST.s * 2.0 - 1.0, -1.0, -localST.t * 2.0 + 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
        else // Front (-z) quad
        {
            vec3 dir = vec3(localST.s * 2.0 - 1.0, -localST.t * 2.0 + 1.0, 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
    }
    // If not, only the middle row should be drawn.
    else if (ST.t * 3.0 > 1.0 && ST.t * 3.0 < 2.0)
    {
        if (ST.s * 4.0 < 1.0) // Left (-x) quad
        {
            vec3 dir = vec3(-1.0, -localST.t * 2.0 + 1.0, localST.s * 2.0 - 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
        else if (ST.s * 4.0 < 3.0) // Right (+x) quad (front was done above)
        {
            vec3 dir = vec3(1.0, -localST.t * 2.0 + 1.0, -localST.s * 2.0 + 1.0);
            depth = texture(dynamic_texture, dir).r;
        }
        else // Back (+z) quad
        {
            vec3 dir = vec3(-localST.s * 2.0 + 1.0, -localST.t * 2.0 + 1.0, -1.0);
            depth = texture(dynamic_texture, dir).r;
        }
    }
    else // In the top/bottom rows, but outside the quads we need to draw.
    {
        // No need for fancy semi-transparent borders between the quads;
        // this is just for debugging purposes after all.
        discard;
    }

    out_color = vec4(vec3(depth), 1.0);
}
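For reference, here is a minimal vertex shader that could drive this debug quad; the attribute name in_pos and the corner-to-ST mapping are my assumptions, not part of the original answer:

#version 400 core
// Quad corners in normalized device coordinates, e.g. (-1,-1)..(1,1),
// scaled/offset on the CPU side to place the debug rectangle on screen.
in vec2 in_pos;
out vec2 ST;
void main()
{
    // Map the corners to ST in [0,1]^2, with (0,0) at the lower-left.
    ST = in_pos * 0.5 + 0.5;
    gl_Position = vec4(in_pos, 0.0, 1.0);
}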
Here is a screenshot of this technique used to render my depth-map in the lower-right corner of the screen (rendering with a point-light source placed at the very center of an empty room with no other objects than the walls and the player character):
Equirectangular projection
I must say, however, that I prefer using an equirectangular projection for debugging cubemaps, as it doesn't have any holes in it. Luckily, these are even easier to make than unfolded cubemaps: just use a fragment shader like the following (still with s,t going from (0,0) in the lower-left to (1,1) in the upper-right corner), but this time rendered with a 2:1 width-to-height aspect ratio:
// Should work in most other versions as well.
#version 400 core

uniform samplerCube dynamic_texture;

out vec4 out_color;
in vec2 ST;

const float PI = 3.14159265;

void main()
{
    // Longitude and latitude of the direction to sample.
    float phi = ST.s * 2.0 * PI;
    float theta = (-ST.t + 0.5) * PI;
    vec3 dir = vec3(cos(phi) * cos(theta), sin(theta), sin(phi) * cos(theta));

    // In this example I use a depth map with only 1 channel, but the
    // projection works with a colored cubemap too.
    float depth = texture(dynamic_texture, dir).r;
    out_color = vec4(vec3(depth), 1.0);
}
Here is a screenshot where an equirectangular projection is used to display my depth-map in the lower-right corner:
I'm doing some OpenGL work in Java (LWJGL) for a project, part of which includes importing 3D models in OBJ format. Everything looks OK until I try to displace vertices; then the models break up and you can see right through them.
Here is Suzanne from Blender, UV-mapped with a completely black texture (for visibility's sake). In the fragment shader I'm adding some white colour to the fragment, depending on the angle between its normal and the world's up vector:
So far so good. But when I apply a small Y displacement to the same vertices, I expect to see the faces 'stretch' upward. Instead, this happens:
Vertex shader:
#version 150

// The uniform and the out variables below are required by the fragment
// shader and were missing from the snippet as posted.
uniform mat4 transform;

in vec3 position;
in vec2 texCoords;
in vec3 normal;

out vec2 texcoordOut;
out float cosTheta;

void main()
{
    vec3 vertPosModel = position;
    texcoordOut = texCoords;
    cosTheta = dot(vec3(0.0, 1.0, 0.0), normal);
    if (cosTheta > 0.0 && cosTheta < 1.0)
        vertPosModel += vec3(0.0, 0.15, 0.0);
    gl_Position = transform * vec4(vertPosModel, 1.0);
}
Fragment shader:
#version 150

uniform sampler2D objTexture;

in vec2 texcoordOut;
in float cosTheta;

out vec4 fragColor;

void main()
{
    fragColor = vec4(texture(objTexture, texcoordOut.st).rgb, 1.0) + vec4(cosTheta);
}
So, your algorithm offsets a vertex's position based on a property derived from the vertex's normal. This will only produce a connected mesh if your mesh is completely smooth; that is, where two triangles meet, the normals of the shared vertices must be the same in both triangles.
If the model has discontinuous normals over the surface, then breaks can appear anywhere the normal stops being continuous; that is, the edges with a normal discontinuity may become disconnected.
I'm pretty sure that Blender3D can generate a smooth version of Suzanne. So... did you generate a smooth mesh? Or is it faceted?
I have a cube map set up, and what I want to do next is to mark/show separately which areas/segments of the textures on each face of the cube map are being rendered (depending on the camera).
For example, here is my basic vertex shader:
#version 400

in vec3 vp;
uniform mat4 P, V;
out vec3 texcoords;

void main () {
    texcoords = vp;
    gl_Position = P * V * vec4 (vp, 1.0);
}
and here is my basic fragment shader:
#version 400

in vec3 texcoords;
uniform samplerCube cube_texture;
out vec4 frag_colour;

void main () {
    frag_colour = texture (cube_texture, texcoords);
}
I now want to display the 6 textures of the cubemap unfolded, and color-overlay, for each texture, the areas the camera is looking at.
For example, if my camera is viewing the intersection of the left and back sides of the cube, I want a separate display where I can see the 6 side textures unfolded, with the viewed area of each texture highlighted (half of the loaded texture for the left wall and half of the loaded texture for the back wall).
Can I get some pointers on how to implement this? Or does someone have similar code in OpenGL?
Thanks :)
I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll notice that a face pointing directly at the camera is completely lit, and as you rotate the model the lighting changes, since the face that faces the camera changes. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think it works by somehow assigning colors per face in the shader. If the angle between the face normal and the camera is zero, the face is fully lit (according to the color of the face); otherwise it is lit proportionally to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBO's. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, fragment shaders work on vertices. A quick search revealed questions like this. But I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core

uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;

in vec3 in_position; // The vertex position
in vec3 in_normal;   // The computed vertex normal
in vec4 in_color;    // The vertex color

out vec4 color;      // The vertex color (pass-through)

void main(void)
{
    gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);

    // Compute the vertex's normal in camera space.
    vec3 normal_cameraspace = normalize((view_matrix * model_matrix * vec4(in_normal, 0)).xyz);

    // Vector from the vertex (in camera space) to the camera (which is at the origin).
    vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);

    // Compute the cosine of the angle between the two vectors.
    float cosTheta = clamp(dot(normal_cameraspace, cameraVector), 0, 1);

    // The 0.3 ambient coefficient creates a nice-looking shading effect.
    // Also, we shouldn't modify the alpha channel value.
    color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core

in vec4 color;
out vec4 out_frag_color;

void main(void)
{
    out_frag_color = color;
}
The uncanny results with the unit cube:
It looks like the effect is a simple lighting effect with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute and duplicate the vertex position data for faces that don't share the same normal. For example, a cube would have 24 vertices instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal.
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach because it is simple. You can calculate the normals ahead of time in your program and then use them to compute the face colors in your vertex shader.
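For completeness, here is a minimal sketch of the geometry-shader variant (the second approach in the list above); the interface names view_pos and face_normal are my own assumptions, not from the answer:

#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 view_pos[];   // per-vertex camera-space positions from the vertex shader
out vec3 face_normal; // the same normal is emitted for all three vertices

void main()
{
    // Per-face normal from the cross product of the triangle's two edge vectors.
    vec3 n = normalize(cross(view_pos[1] - view_pos[0],
                             view_pos[2] - view_pos[0]));
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        face_normal = n;
        EmitVertex();
    }
    EndPrimitive();
}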
This is simple flat shading. Instead of using per-vertex normals, you can evaluate a per-face normal with this GLSL snippet:
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
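Put together, a self-contained fragment shader might look like this; the names lightPos1, diffColor1, objectColor and the FragPos input are assumptions carried over from (or added to) the snippets above:

#version 330 core

in vec3 FragPos;         // world-space position from the vertex shader
uniform vec3 lightPos1;  // hypothetical point-light position
uniform vec3 diffColor1; // hypothetical light color
uniform vec3 objectColor;

out vec4 fragColor;

void main()
{
    // Screen-space derivatives of the position give two in-plane tangent
    // vectors; their cross product is the per-face normal.
    vec3 norm = normalize(cross(dFdx(FragPos), dFdy(FragPos)));

    // Simple Lambertian diffuse term.
    vec3 lightDir1 = normalize(lightPos1 - FragPos);
    float diff1 = max(dot(norm, lightDir1), 0.0);
    vec3 diffuse = diff1 * diffColor1;

    fragColor = vec4(diffuse * objectColor, 1.0);
}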
I have a 3D object in my scene, and the fragment shader for this object receives a texture that has the same size as the screen. I want to take the coordinates of the current fragment and look up the color at the same position in that image. Is this possible? Can someone point me in the right direction?
The fragment shader has a built-in variable called gl_FragCoord that supplies the window (pixel) coordinates of the current fragment. Divide it by the width and height of the viewport to get texture coordinates for the lookup. Here's a short example:
uniform vec2 resolution;
uniform sampler2D backbuffer;

void main( void ) {
    vec2 position = gl_FragCoord.xy / resolution.xy;
    vec4 color = texture2D(backbuffer, position);
    // ... do something with it ...
}
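On desktop GLSL 1.30 or later you can skip the resolution uniform and query the texture size directly (this variant is not from the answer, and it assumes the texture matches the viewport size, as stated in the question):

#version 330 core
uniform sampler2D backbuffer;
out vec4 fragColor;
void main() {
    // textureSize returns the dimensions of mip level 0 in texels.
    vec2 position = gl_FragCoord.xy / vec2(textureSize(backbuffer, 0));
    fragColor = texture(backbuffer, position);
}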
For a complete working example, try this in a WebGL-capable browser:
http://glslsandbox.com/e#375.15