I am using OpenGL 3.3 and deferred shading.
When I set anisotropy values for my samplers between frames, the next frame crashes at glClear.
Here's how I set my anisotropy values:
bool OpenGLRenderer::SetAnisotropicFiltering(const float newAnisoLevel)
{
if (newAnisoLevel < 0.0f || newAnisoLevel > GetMaxAnisotropicFiltering())
return false;
mCurrentAnisotropy = newAnisoLevel;
// the sampler used for geometry pass
GLCALL(glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy));
// the sampler used in shading pass
GLCALL(glSamplerParameterf(mGBuffer.mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy));
return true;
}
The geometry pass uses diffuse / normal textures, and its sampler and texture units are set up like this:
GLCALL(glUseProgram(mGeometryProgram.mProgramHandle));
GLCALL(glGenSamplers(1, &mTextureSampler));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT));
GLCALL(glUniform1i(glGetUniformLocation(mGeometryProgram.mProgramHandle, "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE));
GLCALL(glUniform1i(glGetUniformLocation(mGeometryProgram.mProgramHandle, "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL));
GLCALL(glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler));
GLCALL(glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler));
GLCALL(glUseProgram(0));
The shading pass has the following textures for lighting calculations:
GLCALL(glUseProgram(shadingProgramID));
GLCALL(glGenSamplers(1, &mTextureSampler));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE));
GLCALL(glUniform1i(glGetUniformLocation(shadingProgramID, "unifPositionTexture"), GBuffer::GBUFFER_TEXTURE_POSITION));
GLCALL(glUniform1i(glGetUniformLocation(shadingProgramID, "unifNormalTexture"), GBuffer::GBUFFER_TEXTURE_NORMAL));
GLCALL(glUniform1i(glGetUniformLocation(shadingProgramID, "unifDiffuseTexture"), GBuffer::GBUFFER_TEXTURE_DIFFUSE));
GLCALL(glBindSampler(GBuffer::GBUFFER_TEXTURE_POSITION, mTextureSampler));
GLCALL(glBindSampler(GBuffer::GBUFFER_TEXTURE_NORMAL, mTextureSampler));
GLCALL(glBindSampler(GBuffer::GBUFFER_TEXTURE_DIFFUSE, mTextureSampler));
GLCALL(glUseProgram(0));
And then on the next frame it crashes immediately on the glClear call during the geometry pass:
void OpenGLRenderer::GeometryPass(const RenderQueue& renderQueue)
{
GLCALL(glUseProgram(mGeometryProgram.mProgramHandle));
GLCALL(glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mGBuffer.mFramebuffer));
GLCALL(glDepthMask(GL_TRUE));
GLCALL(glEnable(GL_DEPTH_TEST));
// clear GBuffer fbo
GLCALL(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)); // <----- crash!
// both containers are assumed to be sorted by MeshID ascending
auto meshIterator = mMeshes.begin();
for (const Renderable& renderable : renderQueue)
{
// lots of draw code.....
}
GLCALL(glDisable(GL_DEPTH_TEST));
GLCALL(glDepthMask(GL_FALSE));
GLCALL(glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0));
GLCALL(glUseProgram(0));
}
What could be the issue here?
Your range validation is wrong. The minimum acceptable value for anisotropy is 1.0f. A value of 1.0f (the default) means off (isotropic).
To be honest, rather than returning false and doing nothing when the requested anisotropy is outside the acceptable range, I would consider clamping the value to [1.0, MAX]. You can always find out later that your request was out of range by checking mCurrentAnisotropy after the function returns. This is useful if you store the anisotropy level as an option in a configuration file and the hardware changes: 16.0 is almost universally the maximum these days, but some really old hardware only supports 8.0. You can still return false, report a warning or whatever, but I personally interpret a request for more anisotropy than the implementation supports to mean: "I want the highest anisotropy possible."
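A minimal sketch of that clamped variant, reusing the members and GLCALL macro from the function above (GetMaxAnisotropicFiltering() is assumed to return the value queried from GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT; std::min/std::max need <algorithm>):
bool OpenGLRenderer::SetAnisotropicFiltering(const float newAnisoLevel)
{
    // clamp the request into the valid range [1.0, implementation maximum]
    mCurrentAnisotropy = std::max(1.0f, std::min(newAnisoLevel, GetMaxAnisotropicFiltering()));
    // apply to both samplers, as before
    GLCALL(glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy));
    GLCALL(glSamplerParameterf(mGBuffer.mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy));
    // still report whether the request was honoured exactly
    return mCurrentAnisotropy == newAnisoLevel;
}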
I am not really sure what the English name for what I am trying to do is, please tell me if you know.
In order to run some physically based lighting calculations, I need to write floating-point data to a texture in one OpenGL shader and read this data back in another OpenGL shader, but the data I want to store may be less than 0 or greater than 1.
To do this, I set up a framebuffer that renders to this texture as follows (this is C++):
//Set up the light map we will use for lighting calculation
glGenFramebuffers(1, &light_Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, light_Framebuffer);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);//Needed for light blending (true additive)
glGenTextures(1, &light_texture);
glBindTexture(GL_TEXTURE_2D, light_texture);
//Initialize empty, and at the size of the internal screen
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_FLOAT, 0);
//No interpolation, I want pixelation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//Now the light framebuffer renders to the texture we will use to calculate dynamic lighting
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, light_texture, 0);
GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers);//Color attachment 0 as before
Notice that I use the type GL_FLOAT and not GL_UNSIGNED_BYTE; according to this discussion, a floating-point texture should not be clamped between 0 and 1.
Now, just to test that this is true, I simply set the color somewhere outside this range in the fragment shader which creates this texture:
#version 400 core
void main()
{
gl_FragColor = vec4(2.0,-2.0,2.0,2.0);
}
After rendering to this texture, I send this texture to the program which should use it like any other texture:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, light_texture );//This is the texture I rendered too
glUniform1i(surf_lightTex_ID , 1);//This is the ID in the main display program
Again, just to check that this is working I have replaced the fragment shader with one which tests that the colors have been saved.
#version 400 core
uniform sampler2D lightSampler;
void main()
{
color = vec4(0,0,0,1);
if (texture(lightSampler,fragment_pos_uv).r>1.0)
color.r=1;
if (texture(lightSampler,fragment_pos_uv).g<0.0)
color.g=1;
}
If everything worked, the screen should turn yellow, but needless to say this only gives me a black screen. So I tried the following:
#version 400 core
uniform sampler2D lightSampler;
void main()
{
color = vec4(0,0,0,1);
if (texture(lightSampler,fragment_pos_uv).r==1.0)
color.r=1;
if (texture(lightSampler,fragment_pos_uv).g==0.0)
color.g=1;
}
And I got this:
The parts which are green are in shadow in the testing scene, never mind them; the main point is that all the channels of light_texture get clamped to between 0 and 1, which they should not be. I am not sure whether the data is saved correctly and only clamped when I read it, or whether it is already clamped to [0, 1] when it is saved.
So, my question is: is there some way to read from and write to an OpenGL texture such that the stored data may be above 1 or below 0?
Also, no, I cannot fix the problem by using a 32-bit integer per channel and applying a sigmoid function before saving and its inverse after reading the data; that would break alpha blending.
The format and type arguments of glTexImage2D only specify the format of the source image data; they do not affect the internal format of the texture. You must use a sized floating-point internal format, e.g. GL_RGBA32F:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
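For reference, a minimal version of the texture setup from the question with the sized internal format applied (light_texture, w and h are the names used above; GL_RGBA16F would also work if half-float precision is enough):
glBindTexture(GL_TEXTURE_2D, light_texture);
// sized floating-point internal format: values outside [0, 1] are stored as-is
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, light_texture, 0);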
In our own 3D application I am loading multiple textures using the DevIL library. When attempting to load a texture, I call ilutRenderer( ILUT_OPENGL );, which in turn performs the following function calls:
ILboolean ilutGLInit()
{
// Use PROXY_TEXTURE_2D with glTexImage2D() to test more accurately...
glGetIntegerv(GL_MAX_TEXTURE_SIZE, (GLint*)&MaxTexW);
glGetIntegerv(GL_MAX_TEXTURE_SIZE, (GLint*)&MaxTexH);
if (MaxTexW == 0 || MaxTexH == 0)
MaxTexW = MaxTexH = 256; // Trying this because of the VooDoo series of cards...
// Should we really be setting all this ourselves? Seems too much like a glu(t) approach...
glEnable(GL_TEXTURE_2D);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
#ifdef _MSC_VER
if (IsExtensionSupported("GL_ARB_texture_compression") &&
IsExtensionSupported("GL_EXT_texture_compression_s3tc")) {
ilGLCompressed2D = (ILGLCOMPRESSEDTEXIMAGE2DARBPROC)
wglGetProcAddress("glCompressedTexImage2DARB");
}
#endif
if (IsExtensionSupported("GL_ARB_texture_cube_map"))
HasCubemapHardware = IL_TRUE;
return IL_TRUE;
}
ilutRenderer( ILUT_OPENGL ); needs to be called only once (for each newly created window), but while experimenting I called the same function multiple times (one call for each loaded texture).
When the function was called multiple times, the loaded OpenGL textures looked poorer in quality than when it was called once. (I have multiple textures, and most of them end up with poorer quality; I'm not sure about the first image.)
This stumped me, since from my perspective that call does not do anything special. Why can't it tolerate being called multiple times?
Well, I started to filter out which functions can be called multiple times and which cannot, and concluded that it was glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); that triggered the odd behavior. So I wrapped that call like this:
if( !bInitDone )
{
bInitDone = TRUE;
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
After that, ilutRenderer( ILUT_OPENGL ) can be called as many times as needed.
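An alternative sketch, if modifying DevIL's own source is not desirable, is to guard the call on the application side instead (the function name is hypothetical; this assumes single-threaded initialization, as in the snippets above):
void InitDevILOnce()
{
    static bool bInitDone = false; // function-local flag instead of a global
    if (!bInitDone)
    {
        bInitDone = true;
        ilInit();
        ilutRenderer(ILUT_OPENGL);
    }
}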
I have also tried to centralize the OpenGL initialization like this:
...
InitGL( OnWindowReady )
...
void OnWindowReady( void )
{
ilInit();
ilutRenderer( ILUT_OPENGL );
}
... may be some rendering code ...
void LoadModel( const wchar_t* file )
{
... load texture 1 ...
... load texture 2 ...
}
But the textures still appear corrupted. Maybe the window needs to be rendered at least once before the textures are loaded, but I want to know which OpenGL functions actually determine whether a texture ends up corrupted or looking fine.
I have an NVIDIA Quadro K2100M, driver version 375.86.
Is this a display driver bug?
How do you typically report bugs to NVIDIA?
I've created an array of 2D textures and initialized it with glTexImage3D. Then I attached individual layers of the array to color attachments with glFramebufferTextureLayer. Framebuffer creation doesn't raise an error, and everything seems fine until the draw call happens.
When shader tries to access color attachment the following message appears:
OpenGL Debug Output message : Source : API; Type : ERROR; Severity : HIGH;
GL_INVALID_OPERATION error generated. <location> is invalid.
The shaders write to the attached layers through outputs with location qualifiers:
layout (location = 0) out vec3 WorldPosOut;
layout (location = 1) out vec3 DiffuseOut;
layout (location = 2) out vec3 NormalOut;
layout (location = 3) out vec3 TexCoordOut;
The documentation says that glFramebufferTextureLayer works just like glFramebufferTexture2D except for the layer parameter, so can I use location qualifiers with a texture array, or does some other way exist?
I finally managed to bind a texture array as a color buffer. It is hard to find useful information on this topic, so here are the instructions:
№1. You need to create a texture array and initialize it properly:
glGenTextures(1, &arrayBuffer);
glBindTexture(GL_TEXTURE_2D_ARRAY, arrayBuffer);
// allocate storage for every mipmap level (each level is half the size of the previous one)
for (int mip = 0; mip < mipLevelCount; ++mip) {
glTexImage3D(GL_TEXTURE_2D_ARRAY, mip, internalFormat,
std::max(1, ImageWidth >> mip), std::max(1, ImageHeight >> mip),
layerCount, 0, GL_RGB, GL_UNSIGNED_INT, 0);
}
// the sampler state and the mipmap range only need to be set once
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, textureFilter);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, textureFilter);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAX_LEVEL, mipLevelCount - 1);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Keep in mind that setting texture parameters like the MIN/MAG filters and the BASE/MAX mipmap levels is important. OpenGL sets the maximum mipmap level to 1000 by default, and if you don't provide that whole chain of mipmaps you end up with an incomplete texture and nothing but a black screen.
№2. Don't forget to bind arrayBuffer to the GL_TEXTURE_2D_ARRAY target before attaching the layers to the color buffers:
glBindTexture(GL_TEXTURE_2D_ARRAY, arrayBuffer);
for (unsigned int i = 0; i < NUMBER_OF_TEXTURES; i++) {
glFramebufferTextureLayer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, arrayBuffer, 0, i);
}
Don't forget to reset the GL_TEXTURE_2D_ARRAY binding to 0 with glBindTexture afterwards, or the texture can get modified outside of the initialization code.
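That reset is a single call once the layers are attached:
glBindTexture(GL_TEXTURE_2D_ARRAY, 0);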
№3. Since the internalFormat of each image in the array must stay the same, I recommend creating a separate texture for the depth/stencil buffer:
glGenTextures(1, &m_depthTexture);
...
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH32F_STENCIL8, WindowWidth,
WindowHeight, 0, GL_DEPTH_STENCIL, GL_FLOAT_32_UNSIGNED_INT_24_8_REV, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
GL_TEXTURE_2D, m_depthTexture, 0);
Don't forget to set up the draw-buffer index for each color attachment:
for (int i = 0; i < GBUFFER_NUM_TEXTURES; ++i)
DrawBuffers[i] = GL_COLOR_ATTACHMENT0 + i; //Sets appropriate indices for each color buffer
glDrawBuffers(ARRAY_SIZE_IN_ELEMENTS(DrawBuffers), DrawBuffers);
In shaders you can use layout(location = n) qualifiers to specify the color buffer.
OpenGL 3 note (NVIDIA): glFramebufferTextureLayer has been available since OpenGL 3.2 (Core profile), but on NVIDIA GPUs the drivers will force the OpenGL version to 4.5, so you should specify the exact version of OpenGL if you care about compatibility. I use SDL2 in my application, so I use the following calls to set the OpenGL version:
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
Results of the deferred shading:
The Problem
I have been trying to implement shadows in OpenGL for some time. I have finally gotten it to a semi-working state: the shadow appears, but it covers the scene in strange places [i.e. it is not relative to the light].
To further explain the GIF above: as I move the light source further away from the scene (to the left), the shadow stretches further. Why? If anything, it should show more of the scene.
Update: I messed around with the light's position and am now getting this (confusing) result:
Depth Map
Here it is:
The Code
Because this is a difficult issue to pinpoint, I will post a large chunk of the code I am using in this application.
The Framebuffer and Depth Texture - The first thing I needed was a framebuffer to record the depth values of all the drawn objects, and then I needed to dump these values into a depth texture (the shadow map):
// Create Framebuffer
FramebufferName = 0;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
// Create and Load Depth Texture
glGenTextures(1, &depthTexture);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexImage2D(GL_TEXTURE_2D, 0,GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
//Attach Texture To Framebuffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
//Check for errors
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
Falcon::Debug::error("ShadowBuffer [Framebuffer] could not be initialized.");
Rendering The Scene - First I do the shadow pass, which just runs through some basic shaders and outputs to the framebuffer, and then I do a second, regular pass that actually draws the scene and does the GLSL shadow-map sampling:
//Clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//Select Main Shader
normalShader->useShader();
//Bind + Update + Draw
/* Render Shadows */
shadowShader->useShader();
glBindFramebuffer(GL_FRAMEBUFFER, Shadows::framebuffer());
//Viewport
glViewport(0,0,640,480);
//GLM Matrix Definitions
glm::mat4 shadow_matrix_view;
glm::mat4 shadow_matrix_projection;
//View And Projection Calculations
shadow_matrix_view = glm::lookAt(glm::vec3(light.x,light.y,light.z), glm::vec3(0,0,0), glm::vec3(0,1,0));
shadow_matrix_projection = glm::perspective(45.0f, 1.0f, 0.1f, 1000.0f);
//Calculate MVP(s)
glm::mat4 shadow_depth_mvp = shadow_matrix_projection * shadow_matrix_view * glm::mat4(1.0);
glm::mat4 shadow_depth_bias = glm::mat4(0.5,0,0,0,0,0.5,0,0,0,0,0.5,0,0.5,0.5,0.5,1) * shadow_depth_mvp;
//Send Data To The GPU
glUniformMatrix4fv(glGetUniformLocation(shadowShader->getShader(),"depth_matrix"), 1, GL_FALSE, &shadow_depth_mvp[0][0]);
glUniformMatrix4fv(glGetUniformLocation(normalShader->getShader(),"depth_matrix_bias"), 1, GL_FALSE, &shadow_depth_bias[0][0]);
renderScene();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
/* Clear */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
/* Shader */
normalShader->useShader();
/* Shadow-map */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Shadows::shadowmap());
glUniform1f(glGetUniformLocation(normalShader->getShader(),"shadowMap"),0);
/* Render Scene */
glViewport(0,0,640,480);
renderScene();
Fragment Shader - This is where I calculate the final color to be output and do the depth texture / shadow-map sampling. It could be where I am going wrong:
//Shadows
uniform sampler2DShadow shadowMap;
in vec4 shadowCoord;
void main()
{
//Lighting Calculations...
//Shadow Sampling:
float visibility = 1.0;
if (texture(shadowMap, shadowCoord.xyz) < shadowCoord.z){
visibility = 0.1;
}
//Final Output
outColor = finalColor * visibility;
}
Edits
<1> AMD Hardware Issue - It was also suggested that this could be an issue with the GPU, but I find this hard to believe given that it's a Radeon HD 6670. Would it be worth putting in an NVIDIA card to test this theory?
<2> Suggested Changes - I made some of the changes suggested in the comments and answers:
Firstly, I changed the light's perspective projection to an orthographic one, which gave me the accuracy I needed in the shadow map so that I can now see the depth clearly (i.e. it's not all white). In addition, it removes the need for the perspective division, so I am using 3-dimensional coordinates for this test. Below is a screenshot:
Secondly, I changed my texture sampling to this: visibility = texture(shadowMap, shadowCoord.xyz); which now always returns 0, so I cannot see the scene at all because it is considered ENTIRELY shadowed.
Thirdly and finally, I swapped GL_LEQUAL for GL_LESS as suggested, and no changes occurred.
There is something fundamentally wrong with your shader:
uniform sampler2DShadow shadowMap; // NOTE: Shadow samplers perform comparison !!
...
if (texture(shadowMap, shadowCoord.xyz) < shadowCoord.z)
You have texture compare vs. reference enabled. That means that the 3rd texture coordinate is going to be compared by the texture (...) function and the returned value is going to be the result of the test function (GL_LEQUAL in this case).
In other words, texture (...) will return either 0.0 (fail) or 1.0 (pass) by comparing the looked up depth at shadowCoord.xy to the value of shadowCoord.z. You are doing this test twice.
Consider using this altered code instead:
float visibility = texture(shadowMap, shadowCoord.xyz);
That is not going to produce quite the results you want because your comparison function is GL_LEQUAL, but it is a start. Consider changing the comparison function to GL_LESS to get an exact functional match.
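On the C++ side that is a one-line change to the depth texture's parameters in the framebuffer setup code above:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LESS);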
I am having trouble figuring out how to use glTexImage3D correctly when the width, height and depth are different values.
I thought that, in order to avoid nested news on a triple pointer and then having to be very careful to delete everything, boost::multi_array could be very useful, so I used it.
boost::multi_array<GLubyte, 3> texture3DVolume;
texture3DVolume.resize(boost::extents[textureSizeX][textureSizeY][textureSizeZ]);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &(this->textureID));
glBindTexture(GL_TEXTURE_3D, this->textureID);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glEnable(GL_TEXTURE_3D);
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE, this->textureSizeX,this->textureSizeY, this->textureSizeZ, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, texture3DVolume.data());
int textureSizeX=512;
int textureSizeY=512;
int textureSizeZ=511;
With 511 I get a skewed texture. The texture coordinates are generated in a very simple shader; for every vertex (the vertices of a [-0.5,0.5]x[-0.5,0.5]x[-0.5,0.5] cube) I use:
// vertex shader...
texture_coordinate = (v.xyz+vec3(1.0))*0.5;
// then in the fragment shader
gl_FragColor = texture(my_color_texture, texture_coordinate);
But my result is skewed. I really don't know whether the problem is in how the data is laid out by boost::multi_array or something else...
My result is the following:
As you can see, the spheres are skewed, and this happens because width == height but depth differs. I would like to better understand whether this is a problem with how the data is laid out and/or with strides.
If I set depth=512 I get the correctly proportioned cube.
I thought that, in order to avoid nested news on a triple pointer and then having to be very careful to delete everything, boost::multi_array could be very useful, so I used it.
Either approach is wrong. A three-stage new-ed array is actually a tree: an array of pointers to arrays of pointers to values. What you want is a flat region of data. And don't bother with multidimensional array classes, because you don't know how they arrange the data internally.
Use a simple 1-dimensional array of length width * height * depth. Also make sure you set the right GL_UNPACK_ALIGNMENT, usually 1.
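A sketch of that flat layout (using std::vector from <vector> in place of boost::multi_array; the voxel at (x, y, z) goes at index x + width * (y + height * z), which is the order glTexImage3D reads the data: x varies fastest, then y, then z):
std::vector<GLubyte> texels(static_cast<size_t>(textureSizeX) * textureSizeY * textureSizeZ);
// fill texels[x + textureSizeX * (y + textureSizeY * z)] here...
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are tightly packed
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE, textureSizeX, textureSizeY, textureSizeZ,
             0, GL_LUMINANCE, GL_UNSIGNED_BYTE, texels.data());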