OpenGL cubemap face order & sampling issue - c++

I have a renderer based on SDL2 and OpenGL (3.3 core profile), which gives me expected results with regards to transformations and texture(2D)ing.
However, when I try to display a skybox using a cubemap created from these textures (though I've tried others too), there are two steps in the process that I cannot explain and that no tutorial or example I have encountered seems to need:
1. The top/bottom faces have to be swapped upon uploading, i.e. the top one is uploaded as GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, and the bottom one as GL_TEXTURE_CUBE_MAP_POSITIVE_Y;
2. When sampling the cubemap, I have to invert the vertex positions not only along y, but also along z.
Without this, I'm getting the following result:
(N.B. the left-bottom-far vertex was scaled by .8 to clarify that my coordinate system is the right way around)
The image files are named correctly.
The cube is the only draw I'm performing.
If I remove [the indices for] any of the sides, I get the expected results (i.e. no swapping / mirroring there).
I seem to be getting the same results with my integrated and dedicated GPUs.
My OpenGL constants, from a glLoadGen (originally) generated header:
#define GL_TEXTURE_CUBE_MAP_NEGATIVE_X 0x8516
#define GL_TEXTURE_CUBE_MAP_NEGATIVE_Y 0x8518
#define GL_TEXTURE_CUBE_MAP_NEGATIVE_Z 0x851A
#define GL_TEXTURE_CUBE_MAP_POSITIVE_X 0x8515
#define GL_TEXTURE_CUBE_MAP_POSITIVE_Y 0x8517
#define GL_TEXTURE_CUBE_MAP_POSITIVE_Z 0x8519
The texture uploading code (much the same as LearnOpenGL's tutorial):
GLuint name;
glGenTextures(1, &name);
glBindTexture(GL_TEXTURE_CUBE_MAP, name);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
GLint target = GL_TEXTURE_CUBE_MAP_POSITIVE_X;
for (uint8_t i = 0; i < 6; ++i)
{
glTexImage2D(target + i, 0, GL_RGB8, width, height, 0, GL_RGB,
GL_UNSIGNED_BYTE, pixelData[i]);
}
Vertex shader:
#version 330
precision mediump float;
uniform mat4 uModelViewProjection;
in vec3 aPosition;
out vec3 vTexCoord;
void main()
{
vec4 position = uModelViewProjection * vec4(aPosition, 1.f);
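// .xyww makes the final depth w / w = 1.0, so the skybox always lands on the far plane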
gl_Position = position.xyww;
vTexCoord = aPosition;
}
Fragment shader:
#version 330
precision mediump float;
uniform samplerCube uTexture0;
in vec3 vTexCoord;
out vec4 FragColor;
void main()
{
FragColor = texture(uTexture0, vTexCoord);
// using textureCube() yields a compile error asking for #extension GL_NV_shadow_samplers_cube : enable, but even with that, the issue persists.
}
Mesh setup (semi-pseudo-code):
// 4----5
// /| /|
// 6----7 |
// | | | |
// | 0--|-1
// |/ |/
// 2----3
VertexType vertices[8] = {
Vector3(-1.f, -1.f, -1.f) * .8f, // debug coordinate system
Vector3(1.f, -1.f, -1.f),
Vector3(-1.f, -1.f, 1.f),
Vector3(1.f, -1.f, 1.f),
Vector3(-1.f, 1.f, -1.f),
Vector3(1.f, 1.f, -1.f),
Vector3(-1.f, 1.f, 1.f),
Vector3(1.f, 1.f, 1.f),
};
uint16_t indices[] = {
4, 0, 5,
0, 1, 5,
6, 2, 4,
2, 0, 4,
7, 3, 6,
3, 2, 6,
5, 1, 7,
1, 3, 7,
0, 2, 1,
2, 3, 1,
5, 7, 4,
7, 6, 4,
};
// create buffers & upload data
Rendering (pseudo-code):
// -clear color & depth buffers;
// -set the model transform to a translation of -10 units along z;
// view transform is identity; projection is perspective with .25
// radians vertical FOV, zNear of .1, zFar of 100.; viewport is full screen
// -set shader program;
// -bind texture (same name, same target as upon uploading);
// -enable backface culling only (no depth test / write);
// -draw the cube
// -glFlush() and swap buffers;
What on earth can be causing the two issues described above?

The issue is caused by the mapping of the (s, t, r) texture coordinates to the cubemap faces:
OpenGL 4.6 API Core Profile Specification, 8.13 Cube Map Texture Selection, page 253:
When a cube map texture is sampled, the (s, t, r) texture coordinates are treated as a direction vector (rx, ry, rz) emanating from the center of a cube. The q coordinate is ignored. At texture application time, the interpolated per-fragment direction vector selects one of the cube map face’s two-dimensional images based on the largest magnitude coordinate direction (the major axis direction). If two or more coordinates have the identical magnitude, the implementation may define the rule to disambiguate this situation. The rule must be deterministic and depend only on (rx, ry, rz). The target column in table 8.19 explains how the major axis direction maps to the two-dimensional image of a particular cube map target.
Using the sc, tc, and ma determined by the major axis direction as specified in table 8.19, an updated (s, t) is calculated as follows:
s = 1/2 (sc / |ma| + 1)
t = 1/2 (tc / |ma| + 1)
Major Axis Direction| Target |sc |tc |ma |
--------------------+---------------------------+---+---+---+
+rx |TEXTURE_CUBE_MAP_POSITIVE_X|−rz|−ry| rx|
−rx |TEXTURE_CUBE_MAP_NEGATIVE_X| rz|−ry| rx|
+ry |TEXTURE_CUBE_MAP_POSITIVE_Y| rx| rz| ry|
−ry |TEXTURE_CUBE_MAP_NEGATIVE_Y| rx|−rz| ry|
+rz |TEXTURE_CUBE_MAP_POSITIVE_Z| rx|−ry| rz|
−rz |TEXTURE_CUBE_MAP_NEGATIVE_Z|−rx|−ry| rz|
--------------------+---------------------------+---+---+---+
Table 8.19: Selection of cube map images based on major axis direction of texture
coordinates
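A quick worked example from the table: for a direction (rx, ry, rz) = (0.1, 0.1, 1), the major axis is +rz, so the face is TEXTURE_CUBE_MAP_POSITIVE_Z with sc = rx, tc = −ry, ma = rz. That gives s = 1/2 (0.1 / 1 + 1) = 0.55 and t = 1/2 (−0.1 / 1 + 1) = 0.45, i.e. t decreases as ry increases: the t axis runs top-down across the face image, opposite to OpenGL's usual bottom-up texture convention.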
The rotation can be achieved either by rotating the 6 cubemap images before uploading them, or by rotating the texture coordinates.
If the cubemap is used as an environment map in a scene and the texture coordinates are derived from a direction vector, then it makes sense to rotate the images. If the cubemap is wrapped onto a mesh, then the texture coordinates can simply be specified in the right manner.

The previous answer's reasoning from the quoted spec text is wrong.
What is going on is that the quoted text, if you look carefully at the math, requires the cubemap's images to have a top-down orientation and to be arranged in a left-handed coordinate system with +Y up. That means sky at +Y, and if you're facing +Z, -X should be on your left and +X on your right. This was apparently inherited from RenderMan, where cube maps first appeared.
The coordinates of the cube you are rendering as the skybox, which will be used to sample the cubemap, are in OpenGL's coordinate system, which is right-handed. They must be transformed to the cubemap's left-handed system before sampling, which is done by simply scaling the z coordinate by -1. Failure to do that means the scene will be a mirror image of what it should be, a very common failing in the samples I've looked at.
The OP's upside-down images are because they had the standard OpenGL bottom-up orientation.
If you're using Vulkan, that has a left-handed system too, but with Y down. So to correctly render the cubemap on Vulkan you still need to transform the skybox cube's coordinates, in this case by rotating them 180° around the X axis. Fail to do that and you'll have upside-down images.
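A minimal sketch of that fix in the OP's vertex shader, assuming the face images are uploaded in their natural top-down order and nothing else changes: negate z when deriving the sample direction.
// Skybox vertex shader sketch: convert the right-handed cube position to the
// cubemap's left-handed space by negating z before using it as a direction.
#version 330
uniform mat4 uModelViewProjection;
in vec3 aPosition;
out vec3 vTexCoord;
void main()
{
vec4 position = uModelViewProjection * vec4(aPosition, 1.);
gl_Position = position.xyww;
vTexCoord = vec3(aPosition.xy, -aPosition.z);
}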

Related

Why does mipmapping not work on my 3D texture? (opengl)

So I am creating a terrain and for texturing, I want to use a 3D texture (depth 3) which holds 3 images (512x512) on each z-layer, so that I would be able to use GPU interpolation between these layers based on just one factor: 0/3 = image 1, 1/3 = image 2, 2/3 = image 3, and every value in between interpolates with the next level (cyclic).
This works perfectly as long as I don't enable mip maps on this 3D texture. When I do enable it, my terrain gets the same one image all over unless I come closer, as if the images have shifted from being z-layers to being mip-map layers.
I don't understand this, can someone tell me what I'm doing wrong?
This is where I generate the texture:
glGenTextures(1, &m_textureId);
glBindTexture(GL_TEXTURE_3D, m_textureId);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB, 512, 512, 3, 0, GL_BGR, GL_UNSIGNED_BYTE, 0);
This is the step I perform for every Z:
glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, Z, 512, 512, 1, GL_BGR, GL_UNSIGNED_BYTE, imageData);
After this, I do:
glGenerateMipmap(GL_TEXTURE_3D);
In the shader, I define the texture as:
uniform sampler3D tGround;
and simply sample it with:
texture(tGround, vec3(texcoord, f));
where texcoord is a 2D coordinate and f is the layer we need, simply based on height at this moment.
There is a way to do something like what you want, but it does require work. And you can't use a 3D texture to do it.
You have to use Array Textures instead. The usual way to think of a 2D array texture is as a bundle of 2D textures of the same size. But you can also think of it as a 3D texture where each mipmap level has the same number of Z layers. However, there's also the issue where there is no blending between array layers.
Since you want blending, you will need to synthesize it. But that's easy enough with shaders:
uniform sampler2DArray arrayTex; // sampler declaration added for completeness

vec4 ArrayTextureBlend(in vec3 texCoord)
{
float frac = fract(texCoord.z);
texCoord.z = floor(texCoord.z);
vec4 top = texture(arrayTex, texCoord);
vec4 bottom = texture(arrayTex, texCoord + vec3(0, 0, 1));
return mix(top, bottom, frac); // Linearly interpolate top and bottom.
}
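For completeness, a sketch of the matching allocation on the C++ side, assuming the same three 512x512 BGR images as in the question (layerData is a hypothetical array of pointers to each layer's pixel data):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// Allocate level 0 with 3 layers, then upload each layer separately.
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGB8, 512, 512, 3, 0, GL_BGR, GL_UNSIGNED_BYTE, NULL);
for (int z = 0; z < 3; ++z)
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, z, 512, 512, 1, GL_BGR, GL_UNSIGNED_BYTE, layerData[z]);
// Mipmaps of an array texture are built per layer, so the layers no longer
// collapse into mip levels the way the 3D texture's depth slices did.
glGenerateMipmap(GL_TEXTURE_2D_ARRAY);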

OpenGL connecting a skydome and a flat ground plane

I am building a simple city with OpenGL and GLUT, I created a textured skydome and now I would like to connect that with a flat plane to give an appearance of the horizon. To give relative size, the skydome is 3.0 in radius with depth mask turned off, and it only has the camera rotation applied and sits over the camera. A building is about 30.0 in size, and I am looking at it from y=500.0 down.
I have a ground plane that is 1000x1000, I am texturing with a 1024x1024 resolution texture that looks good up close when I am against the ground. My texture is loaded with GL_REPEAT with texture coordinate of 1000 to repeat it 1000 times.
Connecting the skydome with the flat ground plane is where I am having some issues. I will list a number of things I have tried.
Issues:
1) When I rotate my heading, because of the square shape of the plane, I see an edge, as in the attached picture, instead of a flat horizon.
2) I have tried a circular ground plane instead, but I get a curve horizon, that becomes more curvy when I fly up.
3) To avoid the black gap between the infinite skydome and my limited-size flat plane, I set a limit on how far up I can fly, and shift the skydome slightly down as I go up, so I don't see the gap when I am up high. Are there other methods to fade the plane into the skydome and take care of the gap, given that the gap varies in size at different locations (i.e. a circle circumscribing a square)? I tried to apply a fog color of the horizon, but I get a purple haze over white ground.
4) If I attached the ground as the bottom lid of the skydome hemisphere, then it looks weird when I zoom in and out, it looks like the textured ground is sliding and disconnected with my building.
5) I have tried to draw the infinitely large plane using the vanishing point concept by setting w=0. Rendering infinitely large plane
The horizon does look flat, but texturing properly seems difficult, so I am stuck with a single color.
6) I disable lighting for the skydome. If I enable lighting for my ground plane, then at certain pitch angles my plane looks black while my sky is still completely lit, which looks unnatural.
7) If I make my plane larger, like 10000x10000, then the horizon looks seemingly flat, but if I press an arrow key to adjust my heading, the horizon shakes for a couple of seconds before stabilizing. What causes this, and how can I prevent it? A related question: tiling and texturing a 1000x1000 ground plane versus 10000x10000 does not seem to affect my frame rate. Why is that? Wouldn't more tiling mean more work?
8) I read about a math-based approach that figures out the clipping rectangle to draw the horizon, but I wonder if there are simpler approaches: http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/a-super-simple-method-for-creating-infinite-sce-r2769
Most threads I read regarding horizon would say, use a skybox, use a skydome, but I haven't come across a specific tutorial that talks about merging skydome with a large ground plane nicely. A pointer to such a tutorial would be great. Feel free to answer any parts of the question by indicating the number, I didn't want to break them up because they are all related. Thanks.
Here is some relevant code on my setup:
void Display()
{
// Clear frame buffer and depth buffer
glClearColor (0.0,0.0,0.0,1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
camera.Update();
GLfloat accumulated_camera_rotation_matrix[16];
GetAccumulatedRotationMatrix(accumulated_camera_rotation_matrix);
SkyDome_Draw(accumulated_camera_rotation_matrix);
FlatGroundPlane_Draw();
// draw buildings
// swap buffers when GLUT_DOUBLE double buffering is enabled
glutSwapBuffers();
}
void SkyDome_Draw(GLfloat (&accumulated_camera_rotation_matrix)[16])
{
glPushMatrix();
glLoadIdentity();
glDepthMask(GL_FALSE);
glDisable(GL_LIGHTING);
glMultMatrixf(accumulated_camera_rotation_matrix);
// 3.0f is the radius of the skydome
// If we offset by 0.5f in camera.ground_plane_y_offset, we can offset by another 1.5f
// at skydome_sky_celing_y_offset of 500. 500 is our max allowable altitude
glTranslatef( 0, -camera.ground_plane_y_offset - camera.GetCameraPosition().y / camera.skydome_sky_celing_y_offset / 1.5f, 0);
skyDome->Draw();
glEnable(GL_LIGHTING);
glDepthMask(GL_TRUE);
glEnable(GL_CULL_FACE);
glPopMatrix();
}
void GetAccumulatedRotationMatrix(GLfloat (&accumulated_rotation_matrix)[16])
{
glGetFloatv(GL_MODELVIEW_MATRIX, accumulated_rotation_matrix);
// zero out the translation, which is in elements 12, 13, 14
accumulated_rotation_matrix[12] = 0;
accumulated_rotation_matrix[13] = 0;
accumulated_rotation_matrix[14] = 0;
}
GLfloat GROUND_PLANE_WIDTH = 1000.0f;
void FlatGroundPlane_Draw(void)
{
glEnable(GL_TEXTURE_2D);
glBindTexture( GL_TEXTURE_2D, concreteTextureId);
glBegin(GL_QUADS);
glNormal3f(0, 1, 0);
glTexCoord2d(0, 0);
// repeat 1000 times for a plane 1000 times in width
GLfloat textCoord = GROUND_PLANE_WIDTH;
glVertex3f( -GROUND_PLANE_WIDTH, 0, -GROUND_PLANE_WIDTH);
// go beyond 1 for texture coordinate so it repeats
glTexCoord2d(0, textCoord);
glVertex3f( -GROUND_PLANE_WIDTH, 0, GROUND_PLANE_WIDTH);
glTexCoord2d(textCoord, textCoord);
glVertex3f( GROUND_PLANE_WIDTH, 0, GROUND_PLANE_WIDTH);
glTexCoord2d(textCoord, 0);
glVertex3f( GROUND_PLANE_WIDTH, 0, -GROUND_PLANE_WIDTH);
glEnd();
glDisable(GL_TEXTURE_2D);
}
void Init()
{
concreteTextureId = modelParser->LoadTiledTextureFromFile(concreteTexturePath);
}
GLuint ModelParser::LoadTiledTextureFromFile(string texturePath)
{
RGBImage image; // wrapping 2-d array of data
image.LoadData(texturePath);
GLuint texture_id;
UploadTiledTexture(texture_id, image);
image.ReleaseData();
return texture_id;
}
void ModelParser::UploadTiledTexture(unsigned int &iTexture, const RGBImage &img)
{
glGenTextures(1, &iTexture); // create the texture
glBindTexture(GL_TEXTURE_2D, iTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// the texture would wrap over at the edges (repeat)
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, img.Width(), img.Height(), GL_RGB, GL_UNSIGNED_BYTE, img.Data());
}
Try using a randomized heightmap rather than using a flat plane. Not only will this look more realistic, it will make the edge of the ground plane invisible due to the changes in elevation. You can also try adding in some vertex fog, to blur the area where the skybox and ground plane meet. That's roughly what I did here.
A lot of 3D rendering relies on tricks to make things look realistic. If you look at most games, they have either a whole bunch of foreground objects that obscure the horizon, or they have "mountains" in the distance (a la heightmaps) that also obscure the horizon.
Another idea is to map your ground plane onto a sphere, so that it curves down like the earth does. That might make the horizon look more earthlike. This is similar to what you did with the circular ground plane.
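As a sketch of the vertex-fog idea in the question's fixed-function setup (the color and distances here are assumptions to tune, not values from the original scene):
// Fade distant ground geometry toward a horizon color so the plane's edge
// dissolves before it becomes visible.
GLfloat horizonColor[4] = { 0.7f, 0.75f, 0.85f, 1.0f }; // assumed haze color
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_LINEAR);
glFogfv(GL_FOG_COLOR, horizonColor);
glFogf(GL_FOG_START, 500.0f); // start fading well before the plane's edge
glFogf(GL_FOG_END, 950.0f);   // fully fogged just inside the 1000-unit edge
glHint(GL_FOG_HINT, GL_NICEST);
// Draw the ground plane with fog enabled, then glDisable(GL_FOG) before the skydome.
Matching GL_FOG_COLOR to the skydome's horizon band should help avoid the purple haze mentioned in issue 3.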

Cubemap shadow mapping not working

I'm attempting to create omnidirectional/point lighting in OpenGL version 3.3. I've searched around on the internet and this site, but so far I have not been able to accomplish this. From my understanding, I am supposed to
Generate a framebuffer using depth component
Generate a cubemap and bind it to said framebuffer
Draw to the individual faces of the cubemap, as referenced by the GL_TEXTURE_CUBE_MAP_* enums
Draw the scene normally, and compare the depth value of the fragments against those in the cubemap
Now, I've read that it is better to use distances from the light to the fragment, rather than to store the fragment depth, as it allows for easier cubemap look up (something about not needing to check each individual texture?)
My current issue is that the light that comes out is actually in a sphere, and does not generate shadows. Another issue is that the framebuffer complains of not being complete, although I was under the impression that a framebuffer does not need a renderbuffer if it renders to a texture.
Here is my framebuffer and cube map initialization:
framebuffer = 0;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glGenTextures(1, &shadowTexture);
glBindTexture(GL_TEXTURE_CUBE_MAP, shadowTexture);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
for(int i = 0; i < 6; i++){
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i , 0,GL_DEPTH_COMPONENT16, 800, 800, 0,GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}
glDrawBuffer(GL_NONE);
Shadow Vertex Shader
void main(){
gl_Position = depthMVP * M* vec4(position,1);
pos =(M * vec4(position,1)).xyz;
}
Shadow Fragment Shader
void main(){
fragmentDepth = distance(lightPos, pos);
}
Vertex Shader (unrelated bits cut out)
uniform mat4 depthMVP;
void main() {
PositionWorldSpace = (M * vec4(position,1.0)).xyz;
gl_Position = MVP * vec4(position, 1.0 );
ShadowCoord = depthMVP * M* vec4(position, 1.0);
}
Fragment Shader (unrelated code cut)
uniform samplerCube shadowMap;
void main(){
float bias = 0.005;
float visibility = 1;
if(texture(shadowMap, ShadowCoord.xyz).x < distance(lightPos, PositionWorldSpace)-bias)
visibility = 0.1;
}
Now, as you are probably thinking: what is depthMVP? The depth projection matrix is currently an orthographic projection with the range [-10, 10] in each direction.
Well they are defined like so:
glm::mat4 depthMVP = depthProjectionMatrix* ??? *i->getModelMatrix();
The issue here is that I don't know what the ??? value is supposed to be. It used to be the camera matrix, however I am unsure if that is what it is supposed to be.
Then the draw code is done for the sides of the cubemap like so:
for(int loop = 0; loop < 6; loop++){
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X+loop, shadowTexture,0);
glClear( GL_DEPTH_BUFFER_BIT);
for(auto i: models){
glUniformMatrix4fv(modelPos, 1, GL_FALSE, glm::value_ptr(i->getModelMatrix()));
glm::mat4 depthMVP = depthProjectionMatrix*???*i->getModelMatrix();
glUniformMatrix4fv(glGetUniformLocation(shadowProgram, "depthMVP"),1, GL_FALSE, glm::value_ptr(depthMVP));
glBindVertexArray(i->vao);
glDrawElements(GL_TRIANGLES, i->triangles, GL_UNSIGNED_INT,0);
}
}
Finally the scene gets drawn normally (I'll spare you the details). Before the calls to draw onto the cubemap I set the framebuffer to the one that I generated earlier, and change the viewport to 800 by 800. I change the framebuffer back to 0 and reset the viewport to 800 by 600 before I do normal drawing. Any help on this subject will be greatly appreciated.
Update 1
After some tweaking and bug fixing, this is the result I get. I fixed an error with the depthMVP not working; what I am drawing here is the distance that is stored in the cubemap.
http://imgur.com/JekOMvf
Basically what happens is it draws the same one-sided projection on each side. This makes sense, since we use the same view matrix for each side, but I am not sure what sort of view matrix I am supposed to use. I think they are supposed to be lookAt() matrices that are positioned at the center and look out in the cube map side's direction. However, the question that arises is how I am supposed to use these multiple projections in my main draw call.
Update 2
I've gone ahead and created these matrices, however I am unsure of how valid they are (they were ripped from a website for DX cubemaps, so I inverted the Z coord).
case 1://Negative X
sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(-1,0,0),glm::vec3(0,-1,0));
break;
case 3://Negative Y
sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,-1,0),glm::vec3(0,0,-1));
break;
case 5://Negative Z
sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,0,-1),glm::vec3(0,-1,0));
break;
case 0://Positive X
sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(1,0,0),glm::vec3(0,-1,0));
break;
case 2://Positive Y
sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,1,0),glm::vec3(0,0,1));
break;
case 4://Positive Z
sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,0,1),glm::vec3(0,-1,0));
break;
The question still stands: what am I supposed to translate the depthMVP view portion by, given that these are 6 individual matrices? Here is a screenshot of what it currently looks like, with the same frag shader (i.e. actually rendering shadows) http://i.imgur.com/HsOSG5v.png
As you can see the shadows seem fine, however the positioning is obviously an issue. The view matrix that I used to generate this was just an inverse translation of the position of the camera (as the lookAt() function would do).
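For reference, a common way to assemble the per-face matrix is to center each lookAt() on the light by appending a translation by -lightPos, then compose per model (the names face, lightPos and models are assumptions here, not taken from the code above):
// Sketch: world -> light space -> face orientation -> clip space.
glm::mat4 lightTranslate = glm::translate(glm::mat4(1.0f), -lightPos);
for(auto model : models){
glm::mat4 depthMVP = depthProjectionMatrix * sideViews[face] * lightTranslate * model->getModelMatrix();
// upload depthMVP and draw as before
}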
Update 3
Code, as it currently stands:
Shadow Vertex
void main(){
gl_Position = depthMVP * vec4(position,1);
pos =(M * vec4(position,1)).xyz;
}
Shadow Fragment
void main(){
fragmentDepth = distance(lightPos, pos);
}
Main Vertex
void main(){
PositionWorldSpace = (M*vec4(position, 1)).xyz;
ShadowCoord = vec4(PositionWorldSpace - lightPos, 1);
}
Main Frag
void main(){
float texDist = texture(shadowMap, ShadowCoord.xyz/ShadowCoord.w).x;
float dist = distance(lightPos, PositionWorldSpace);
if(texDist < distance(lightPos, PositionWorldSpace))
visibility = 0.1;
outColor = vec3(texDist);//This is to visualize the depth maps
}
The perspective matrix being used
glm::mat4 depthProjectionMatrix = glm::perspective(90.f, 1.f, 1.f, 50.f);
Everything is currently working, sort of. The data that the texture stores (i.e. the distance) seems to be stored in a weird manner. It seems like it is normalized, as all values are between 0 and 1. Also, there is a 1x1x1 area around the viewer that does not have a projection, but this is due to the frustum and I think will be easy to fix (like offsetting the cameras back .5 into the center).
If you leave the fragment depth to OpenGL to determine you can take advantage of hardware hierarchical Z optimizations. Basically, if you ever write to gl_FragDepth in a fragment shader (without using the newfangled conservative depth GLSL extension) it prevents hardware optimizations called hierarchical Z. Hi-Z, for short, is a technique where rasterization for some primitives can be skipped on the basis that the depth values for the entire primitive lies behind values already in the depth buffer. But it only works if your shader never writes an arbitrary value to gl_FragDepth.
If instead of writing a fragment's distance from the light to your cube map, you stick with traditional depth you should theoretically get higher throughput (as occluded primitives can be skipped) when writing your shadow maps.
Then, in your fragment shader where you sample your depth cube map, you would convert the distance values into depth values by using a snippet of code like this (where f and n are the far and near plane distances you used when creating your depth cube map):
float VectorToDepthValue(vec3 Vec)
{
vec3 AbsVec = abs(Vec);
float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));
const float f = 2048.0;
const float n = 1.0;
float NormZComp = (f+n) / (f-n) - (2*f*n)/(f-n)/LocalZcomp;
return (NormZComp + 1.0) * 0.5;
}
Code borrowed from SO question: Omnidirectional shadow mapping with depth cubemap
So applying that extra bit of code to your shader, it would work out to something like this:
void main () {
float shadowDepth = texture(shadowMap, ShadowCoord.xyz/ShadowCoord.w).x;
float testDepth = VectorToDepthValue(lightPos - PositionWorldSpace);
if (shadowDepth < testDepth)
visibility = 0.1;
}

Omnidirectional shadow mapping with depth cubemap

I'm working with omnidirectional point lights. I already implemented shadow mapping using a cubemap texture as the color attachment of 6 framebuffers, encoding the light-to-fragment distance in each pixel of it.
Now I would like, if this is possible, to change my implementation this way:
1) attach a depth cubemap texture to the depth buffer of my framebuffers, instead of colors.
2) render depth only, do not write color in this pass
3) in the main pass, read the depth from the cubemap texture, convert it to a distance, and check whether the current fragment is occluded by the light or not.
My problem comes when converting back a depth value from the cubemap into a distance. I use the light-to-fragment vector (in world space) to fetch my depth value in the cubemap. At this point, I don't know which of the six faces is being used, nor what 2D texture coordinates match the depth value I'm reading. Then how can I convert that depth value to a distance?
Here are snippets of my code to illustrate:
Depth texture:
glGenTextures(1, &TextureHandle);
glBindTexture(GL_TEXTURE_CUBE_MAP, TextureHandle);
for (int i = 0; i < 6; ++i)
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT,
Width, Height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Framebuffers construction:
for (int i = 0; i < 6; ++i)
{
glGenFramebuffers(1, &FBO->FrameBufferID);
glBindFramebuffer(GL_FRAMEBUFFER, FBO->FrameBufferID);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, TextureHandle, 0);
glDrawBuffer(GL_NONE);
}
The piece of fragment shader I'm trying to write to achieve my code:
float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
ShadowVec = DepthValueToDistance(ShadowVec);
if (ShadowVec * ShadowVec > dot(VertToLightWS, VertToLightWS))
return 1.0;
return 0.0;
}
The DepthValueToDistance function being my actual problem.
So, the solution was to convert the light-to-fragment vector to a depth value, instead of converting the depth read from the cubemap into a distance.
Here is the modified shader code:
float VectorToDepthValue(vec3 Vec)
{
vec3 AbsVec = abs(Vec);
float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));
const float f = 2048.0;
const float n = 1.0;
float NormZComp = (f+n) / (f-n) - (2*f*n)/(f-n)/LocalZcomp;
return (NormZComp + 1.0) * 0.5;
}
float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
if (ShadowVec + 0.0001 > VectorToDepthValue(VertToLightWS))
return 1.0;
return 0.0;
}
Explanation of VectorToDepthValue(vec3 Vec):
LocalZComp corresponds to what would be the Z-component of the given Vec in the matching frustum of the cubemap. It's actually the largest component of Vec (for instance, if Vec.y is the biggest component, we will look either on the Y+ or the Y- face of the cubemap).
If you look at this wikipedia article, you will understand the math just after (I kept it in a formal form for understanding), which simply converts LocalZComp into a normalized Z value (in [-1, 1]) and then maps it into [0, 1], the actual range for depth buffer values (assuming you didn't change it). n and f are the near and far values of the frustums used to generate the cubemap.
ComputeShadowFactor then just compares the depth value from the cubemap with the depth value computed from the fragment-to-light vector (named VertToLightWS here), adds a small depth bias (which was missing in the question), and returns 1 if the fragment is not occluded by the light.
I would like to add more details regarding the derivation.
Let V be the light-to-fragment direction vector.
As Benlitz already said, the Z value in the respective cube side frustum/"eye space" can be calculated by taking the max of the absolute values of V's components.
Z = max(abs(V.x),abs(V.y),abs(V.z))
Then, to be precise, we should negate Z because in OpenGL, the negative Z-axis points into the screen/view frustum.
Now we want to get the depth buffer "compatible" value of that -Z.
Looking at the OpenGL perspective matrix...
http://www.songho.ca/opengl/files/gl_projectionmatrix_eq16.png
http://i.stack.imgur.com/mN7ke.png (backup link)
...we see that, for any homogeneous vector multiplied with that matrix, the resulting z value is completely independent of the vector's x and y components.
So we can simply multiply this matrix with the homogeneous vector (0,0,-Z,1) and we get the vector (components):
x = 0
y = 0
z = (-Z * -(f+n) / (f-n)) + (-2*f*n / (f-n))
w = Z
Then we need to do the perspective divide, so we divide z by w (Z) which gives us:
z' = (f+n) / (f-n) - 2*f*n / (Z* (f-n))
This z' is in OpenGL's normalized device coordinate (NDC) range [-1,1] and needs to be transformed into a depth buffer compatible range of [0,1]:
z_depth_buffer_compatible = (z' + 1.0) * 0.5
Further notes:
It might make sense to upload the results of (f+n), (f-n) and (f*n) as shader uniforms to save computation.
V needs to be in world space, since the shadow cube map is normally axis-aligned in world space; thus the max(abs(V.x), abs(V.y), abs(V.z)) part only works if V is a world-space direction vector.
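A sketch of that uniform precomputation, with assumed names (program is the main shader program handle; uFPlusN, uFMinusN and uFTimesN are not from the original code):
// C++ side: compute the frustum terms once per light instead of per fragment.
const float n = 1.0f, f = 2048.0f; // near/far used when rendering the cubemap
glUniform1f(glGetUniformLocation(program, "uFPlusN"), f + n);
glUniform1f(glGetUniformLocation(program, "uFMinusN"), f - n);
glUniform1f(glGetUniformLocation(program, "uFTimesN"), f * n);
// In the shader, NormZComp then becomes:
// float NormZComp = uFPlusN / uFMinusN - 2.0 * uFTimesN / uFMinusN / LocalZcomp;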

What are the texture coordinates for a cube in OpenGL?

I have a cube defined as:
float vertices[] = { -width, -height, -depth, // 0
width, -height, -depth, // 1
width, height, -depth, // 2
-width, height, -depth, // 3
-width, -height, depth, // 4
width, -height, depth, // 5
width, height, depth, // 6
-width, height, depth // 7
};
and I have a 128x128 image which I simply want painted on each of the 6 faces of the cube and nothing else. So what are the texture coordinates? I need the actual values.
This is the drawing code:
// Counter-clockwise winding.
gl.glFrontFace(GL10.GL_CCW);
// Enable face culling.
gl.glEnable(GL10.GL_CULL_FACE);
// What faces to remove with the face culling.
gl.glCullFace(GL10.GL_BACK);
// Enabled the vertices buffer for writing and to be used during
// rendering.
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// Specifies the location and data format of an array of vertex
// coordinates to use when rendering.
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVerticesBuffer);
// Bind the texture according to the set texture filter
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[filter]);
gl.glEnable(GL10.GL_TEXTURE_2D);
// Enable the texture state
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Point to our buffers
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
// Set flat color
gl.glColor4f(red, green, blue, alpha);
gl.glDrawElements(GL10.GL_TRIANGLES, mNumOfIndices,
GL10.GL_UNSIGNED_SHORT, mIndicesBuffer);
// ALL the DRAWING IS DONE NOW
// Disable the vertices buffer.
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Disable face culling.
gl.glDisable(GL10.GL_CULL_FACE);
This is the index array:
short indices[] = { 0, 2, 1,
0, 3, 2,
1,2,6,
6,5,1,
4,5,6,
6,7,4,
2,3,6,
6,3,7,
0,7,3,
0,4,7,
0,1,5,
0,5,4
};
I am not sure if the index array is needed to find the tex coordinates. Note that the cube vertex array I gave is the most efficient representation of a cube using an index array. The cube draws perfectly, but not the textures: only one side shows the correct picture, and the other sides are messed up. I used the methods described in various online tutorials on textures, but it does not work.
What you are looking for is a cube map. In OpenGL, you can define six textures at once (representing the six sides of a cube) and map them using 3D texture coordinates instead of the common 2D texture coordinates. For a simple cube, the texture coordinates would be the same as the vertices' respective normals. (If you will only be texturing plain cubes in this manner, you can consolidate normals and texture coordinates in your vertex shader, too!) Cube maps are much simpler than trying to apply the same texture to repeating quads (extra unnecessary drawing steps).
GLuint mHandle;
glGenTextures(1, &mHandle); // create your texture normally
glBindTexture(GL_TEXTURE_CUBE_MAP, mHandle); // bind first, so the parameters below affect this texture
// Note the target being used instead of GL_TEXTURE_2D!
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Now, load in your six distinct images. They need to be the same dimensions!
// Notice the targets being specified: the six sides of the cube map.
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGBA, width, height, 0,
format, GL_UNSIGNED_BYTE, data1);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, GL_RGBA, width, height, 0,
format, GL_UNSIGNED_BYTE, data2);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, GL_RGBA, width, height, 0,
format, GL_UNSIGNED_BYTE, data3);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, GL_RGBA, width, height, 0,
format, GL_UNSIGNED_BYTE, data4);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, GL_RGBA, width, height, 0,
format, GL_UNSIGNED_BYTE, data5);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, GL_RGBA, width, height, 0,
format, GL_UNSIGNED_BYTE, data6);
glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// And of course, after you are all done using the textures...
glDeleteTextures(1, &mHandle);
When specifying your texture coordinates, you will then use sets of 3 coordinates instead of sets of 2. In a simple cube, you point to the 8 corners using normalized vectors. If N = 1.0 / sqrt(3.0) then one corner would be N, N, N; another would be N, N, -N; etc.
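A sketch of those eight direction-style coordinates, matching the vertex order of the question's array (the normalization is optional, since cube map lookups only use the vector's direction):
// 3D texture coordinates for the 8 shared cube corners, one per vertex.
const float N = 1.0f / sqrtf(3.0f); // needs <math.h>
float cubeTexCoords[] = {
-N, -N, -N,   N, -N, -N,   N,  N, -N,  -N,  N, -N, // corners 0-3
-N, -N,  N,   N, -N,  N,   N,  N,  N,  -N,  N,  N, // corners 4-7
};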
You need to define which orientation you want on each face (and that will change which texture coordinates are put on each vertex)
You need to duplicate the vertex positions as the same cube corner will have different texture coordinates depending on which face it is part of
If you want the full texture on each face, then the texture coordinates are (0, 0) (0, 1) (1, 1) (1, 0). How you map them to the specific vertices (the 24 of them, 4 per face) depends on the orientation you want.
For me, it's easier to consider your vertices as width = x, height = y and depth = z.
Then it's a simple matter of getting the 6 faces.
float vertices[] = { -x, -y, -z, // 0
x, -y, -z, // 1
x, y, -z, // 2
-x, y, -z, // 3
-x, -y, z, // 4
x, -y, z, // 5
x, y, z, // 6
-x, y, z// 7
};
For example, the front face of your cube will have a positive depth (this cube's center is at (0,0,0) given the vertices you listed). Since there are 8 points and 4 of them have a positive depth, your front face is 4,5,6,7, going counter-clockwise from -x,-y to -x,y.
Ok, so your back face is all negative depth or -z so it's simply 0,1,2,3.
See the picture? Your left face is all negative width or -x so 0,3,4,7 and your right face is positive x so 1,2,5,6.
I'll let you figure out the top and bottom of the cube.
Your vertex array only directly describes 2 sides of a cube, but for argument's sake, say vertices[0] to vertices[3] describe one side; then your texture coordinates may be:
float texCoords[] = { 0.0, 0.0, //bottom left of texture
1.0, 0.0, //bottom right " "
1.0, 1.0, //top right " "
0.0, 1.0 //top left " "
};
You can use those coordinates for texturing each subsequent side with the entire texture.
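Putting the last two answers together, a sketch of one duplicated face (the front face, corners 4,5,6,7 of the original array) carrying the full texture; the other five faces follow the same pattern with their own four corners:
// Front face (+z), duplicated out of the shared-corner cube so that it can
// own its texture coordinates. Winding is counter-clockwise seen from outside.
float frontFaceVertices[] = {
-width, -height, depth, // bottom-left  -> texcoord (0, 0)
width, -height, depth,  // bottom-right -> texcoord (1, 0)
width, height, depth,   // top-right    -> texcoord (1, 1)
-width, height, depth,  // top-left     -> texcoord (0, 1)
};
float frontFaceTexCoords[] = {
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
};
short frontFaceIndices[] = { 0, 1, 2, 0, 2, 3 }; // two CCW triangles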
To render a skybox (cubemap), the below shader works for me:
Cubemap vertex shader:
attribute vec4 a_position;
varying vec3 v_cubemapTexture;
vec3 texture_pos;
uniform vec3 u_cubeCenterPt;
uniform mat4 mvp;
void main(void)
{
gl_Position = mvp * a_position;
texture_pos = vec3(a_position.x - u_cubeCenterPt.x, a_position.y - u_cubeCenterPt.y, a_position.z - u_cubeCenterPt.z);
v_cubemapTexture = normalize(texture_pos.xyz);
}
Cubemap fragment shader:
precision highp float;
varying vec3 v_cubemapTexture;
uniform samplerCube cubeMapTextureSample;
void main(void)
{
gl_FragColor = textureCube(cubeMapTextureSample, v_cubemapTexture);
}
Hope it is useful...