OpenGL Skybox visible borders - c++

I have my skybox showing:
But the borders of the box are visible, which I don't want. I have already searched the internet, and everything I found said that GL_CLAMP_TO_EDGE should fix it, but I am still seeing the borders.
This is what I used for the texture loading:
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
Can anyone tell me what I am doing wrong?
EDIT:
The strange thing is that the borders only show at the top of the skybox, i.e. where a side face touches the roof of the box.
Here is an image of it:

I finally found the solution. It was a sneaky mistake in the texture itself: there is a thin black border around the texture, which you can barely see unless you zoom in. I removed the border and it worked.
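As an aside, separate from the texture fix above: when seams show up exactly on cube-face edges, filtering across face boundaries can also be the culprit. Assuming an OpenGL 3.2+ context (the question does not say which version is used), seamless cube-map filtering can be enabled globally:
// Assumes GL 3.2+ / ARB_seamless_cube_map: lets sampling filter across adjacent
// cube faces instead of clamping each face independently.
glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);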

It's a texture-coordinate floating-point error. If you use shaders, you can clamp the coordinates to a strict [0.0f, 1.0f] range. I can't say whether there is a fix using only OpenGL API calls, but shaders can handle it. Here is an example using HLSL 2.0 (NVIDIA Cg) for a post-screen shader:
float g_fInverseViewportWidth: InverseViewportWidth;
float g_fInverseViewportHeight: InverseViewportHeight;

struct VS_OUTPUT {
    float4 Pos: POSITION;
    float2 texCoord: TEXCOORD0;
};

VS_OUTPUT vs_main(float4 Pos: POSITION){
    VS_OUTPUT Out;
    // Clean up inaccuracies
    Pos.xy = sign(Pos.xy);
    Out.Pos = float4(Pos.xy, 0, 1);
    // Image-space
    Out.texCoord.x = 0.5 * (1 + Pos.x + g_fInverseViewportWidth);
    Out.texCoord.y = 0.5 * (1 - Pos.y + g_fInverseViewportHeight);
    return Out;
}
Here the sign routine is used to get strict [0, 1] texture coordinates. There is also a sign function in GLSL that you can use. sign returns the sign of a vector or scalar, i.e. -1 for a negative and 1 for a positive value, so the texture coordinates passed to the vertex shader must be specified as -1 for 0 and 1 for 1. You can then use these formulas for the actual texture coordinate computation:
Out.texCoord.x = 0.5 * (1 + Pos.x + g_fInverseViewportWidth);
Out.texCoord.y = 0.5 * (1 - Pos.y + g_fInverseViewportHeight);
Here you can see the one-texel-wide inaccuracy in the texture:
Now with the modified shader:

Related

OpenGL, render to texture with floating point color without clipping value

I am not really sure what the English name for what I am trying to do is; please tell me if you know.
In order to run some physically based lighting calculations, I need to write floating-point data to a texture using one OpenGL shader and read this data again in another OpenGL shader, but the data I want to store may be less than 0 or greater than 1.
To do this, I set up a framebuffer that renders to this texture as follows (this is C++):
//Set up the light map we will use for lighting calculation
glGenFramebuffers(1, &light_Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, light_Framebuffer);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);//Needed for light blending (true additive)
glGenTextures(1, &light_texture);
glBindTexture(GL_TEXTURE_2D, light_texture);
//Initialize empty, and at the size of the internal screen
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_FLOAT, 0);
//No interpolation, I want pixelation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//Now the light framebuffer renders to the texture we will use to calculate dynamic lighting
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, light_texture, 0);
GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers);//Color attachment 0 as before
Notice that I use the type GL_FLOAT and not GL_UNSIGNED_BYTE; according to this discussion, a floating-point texture should not be clipped between 0 and 1.
Now, just to test that this is true, I simply set the color somewhere outside this range in the fragment shader which creates this texture:
#version 400 core
void main()
{
gl_FragColor = vec4(2.0,-2.0,2.0,2.0);
}
After rendering to this texture, I send this texture to the program which should use it like any other texture:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, light_texture );//This is the texture I rendered too
glUniform1i(surf_lightTex_ID , 1);//This is the ID in the main display program
Again, just to check that this is working, I replaced the fragment shader with one which tests whether the colors have been saved:
#version 400 core
uniform sampler2D lightSampler;
in vec2 fragment_pos_uv;
out vec4 color;
void main()
{
    color = vec4(0, 0, 0, 1);
    if (texture(lightSampler, fragment_pos_uv).r > 1.0)
        color.r = 1;
    if (texture(lightSampler, fragment_pos_uv).g < 0.0)
        color.g = 1;
}
If everything worked, everything should turn yellow, but needless to say this only gives me a black screen. So I tried the following:
#version 400 core
uniform sampler2D lightSampler;
in vec2 fragment_pos_uv;
out vec4 color;
void main()
{
    color = vec4(0, 0, 0, 1);
    if (texture(lightSampler, fragment_pos_uv).r == 1.0)
        color.r = 1;
    if (texture(lightSampler, fragment_pos_uv).g == 0.0)
        color.g = 1;
}
And I got this:
The parts that are green are in shadow in the test scene; never mind them. The main point is that all the channels of light_texture get clipped to between 0 and 1, which they should not be. I am not sure whether the data is saved correctly and only clipped when I read it, or whether it is already clipped to [0, 1] when saving.
So, my question is: is there some way to read from and write to an OpenGL texture such that the stored data may be above 1 or below 0?
Also, no, I cannot work around the problem by using a 32-bit integer per channel and applying a sigmoid function before saving and its inverse after reading the data; that would break alpha blending.
The type and format arguments of glTexImage2D only specify the format of the source image data; they do not affect the internal format of the texture. You must request a floating-point internal format, e.g. GL_RGBA32F:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
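If an OpenGL 4.2+ context (or ARB_texture_storage) can be assumed, which the question does not state, the same allocation can also be done with immutable storage; this makes the internal format explicit and sidesteps the format/type arguments entirely. A minimal sketch:
// Sketch assuming GL 4.2+ / ARB_texture_storage: allocate one mip level with an explicit float format.
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, w, h);
The format and type are then only supplied when pixel data is actually uploaded, e.g. via glTexSubImage2D.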

Problem at Shadows Calculation with ShadowMap Rendering

I'm having a little trouble implementing shadow mapping in the engine I'm writing. I'm following LearnOpenGL's tutorial to do so, and it more or less "works", but something is wrong, as if something in the shadow map were inverted. Check the next gifs: gif1, gif2
In those gifs there is a simple scene with a directional light (which uses an orthographic frustum for the shadow calculations, to ease my life) that has to cast shadows. At the right there is a little window showing the "shadow map scene": the scene rendered from the light's point of view with depth values only.
Now, about the code: it pretty much follows the guidelines from the mentioned tutorial. I have a ModuleRenderer, and I first create the framebuffers with the textures they need:
glGenTextures(1, &depthMapTexture);
glBindTexture(GL_TEXTURE_2D, depthMapTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, App->window->GetWindowWidth(), App->window->GetWindowHeight(), 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glGenFramebuffers(1, &depthbufferFBO);
glBindFramebuffer(GL_FRAMEBUFFER, depthbufferFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthMapTexture, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
Then, in the ModuleRenderer's post-update, I do the two render passes and draw the FBOs:
// --- Shadows Buffer (Render 1st Pass) ---
glBindFramebuffer(GL_FRAMEBUFFER, depthbufferFBO);
SendShaderUniforms(shadowsShader->ID, true);
DrawRenderMeshes(true);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// --- Standard Buffer (Render 2nd Pass) ---
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
SendShaderUniforms(defaultShader->ID, false);
DrawRenderMeshes(false);
// --- Draw Lights ---
std::vector<ComponentLight*>::iterator LightIterator = m_LightsVec.begin();
for (; LightIterator != m_LightsVec.end(); ++LightIterator)
    (*LightIterator)->Draw();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// -- Draw framebuffer textures ---
DrawFramebuffer(depth_quadVAO, depthMapTexture, true);
DrawFramebuffer(quadVAO, rendertexture, false);
The DrawRenderMeshes() function basically gets the list of meshes to draw and the shader it has to use, and sends all the needed uniforms. It's too big a function to paste here, but for a normal mesh it picks a shader called Standard and sends everything that shader needs. For the shadow map, it sends the texture attached to the depth FBO:
glUniform1i(glGetUniformLocation(shader, "u_ShadowMap"), 4);
glActiveTexture(GL_TEXTURE0 + 4);
glBindTexture(GL_TEXTURE_2D, depthMapTexture);
In the standard shader's vertex stage, I just pass the light-space uniform (the light frustum's projection × view matrix) to calculate the fragment position in light space (the following is done in the vertex shader's main):
v_FragPos = vec3(u_Model * vec4(a_Position, 1.0));
v_FragPos_InLightSpace = u_LightSpace * vec4(v_FragPos, 1.0);
v_FragPos_InLightSpace.z = (1.0 - v_FragPos_InLightSpace.z);
gl_Position = u_Proj * u_View * vec4(v_FragPos, 1.0);
And in the fragment shader, I use that value to calculate the fragment's shadowing (the light's diffuse + specular values are multiplied by the result of this shadowing function):
float ShadowCalculation()
{
    vec3 projCoords = v_FragPos_InLightSpace.xyz / v_FragPos_InLightSpace.w;
    projCoords = projCoords * 0.5 + 0.5;
    float closeDepth = texture(u_ShadowMap, projCoords.xy).z;
    float currDept = projCoords.z;
    float shadow = currDept > closeDepth ? 1.0 : 0.0;
    return (1.0 - shadow);
}
Again, I'm not sure what can be wrong, but I can guess that something is somehow inverted? Not sure... If anyone can think of something, please let me know; I would appreciate it a lot. Thanks :)
Note: for the first render pass, in which the whole scene is rendered with depth values only, I use a very simple shader that just places objects at their positions with the usual transform (in the vertex shader):
gl_Position = u_Proj * u_View * u_Model * vec4(a_Position, 1.0);
And the fragment shader doesn't do anything; it has an empty main(), since that is equivalent to what we want for the shadow pass:
gl_FragDepth = gl_FragCoord.z;

GL_INVALID_OPERATION when attempting to sample cubemap texture

I'm working on shadow casting using this lovely tutorial. The process is: we render the scene to a framebuffer with a cubemap attached to hold the depth values. Then we pass this cubemap to a fragment shader, which samples it and reads the depth values from there.
I took a slight deviation from the tutorial in that instead of using a geometry shader to render the entire cubemap at once, I instead render the scene six times to get the same effect - largely because my current shader system doesn't support geometry shaders and for now I'm not too concerned about the performance hit.
The depth cubemap is being drawn to fine, here's a screenshot from gDEBugger:
Everything seems to be in order here.
However, I'm having issues in my fragment shader when I attempt to sample this cubemap. After the call to glDrawArrays, a call to glGetError returns GL_INVALID_OPERATION, and as best as I can tell it's coming from here (the offending line is marked with a comment):
struct PointLight
{
    vec3 Position;
    float ConstantRolloff;
    float LinearRolloff;
    float QuadraticRolloff;
    vec4 Color;
    samplerCube DepthMap;
    float FarPlane;
};

uniform PointLight PointLights[NUM_POINT_LIGHTS];

[...]

float CalculateShadow(int lindex)
{
    // Calculate vector between fragment and light
    vec3 fragToLight = FragPos - PointLights[lindex].Position;
    // Sample from the depth map (Comment this out and everything works fine!)
    float closestDepth = texture(PointLights[lindex].DepthMap, vec3(1.0, 1.0, 1.0)).r;
    // Transform to original value
    closestDepth *= PointLights[lindex].FarPlane;
    // Get current depth
    float currDepth = length(fragToLight);
    // Test for shadow
    float bias = 0.05;
    float shadow = currDepth - bias > closestDepth ? 1.0 : 0.0;
    return shadow;
}
Commenting out the aforementioned line seems to make everything work fine - so I'm assuming it's the call to the texture sampler that's causing issues. I saw that this can be attributed to using two textures of different types in the same texture unit - but according to gDEBugger this isn't the case:
Texture 16 is the depth cube map.
In case it's relevant, here's how I'm setting up the FBO: (called only once)
// Generate frame buffer
glGenFramebuffers(1, &depthMapFrameBuffer);
// Generate depth maps
glGenTextures(1, &depthMap);
// Set up textures
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthMap);
for (int i = 0; i < 6; ++i)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT,
                 ShadowmapSize, ShadowmapSize, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
// Set texture parameters
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
// Attach cubemap to FBO
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFrameBuffer);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthMap, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ERROR_LOG("PointLight created an incomplete frame buffer!\n");
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Here's how I'm drawing with it: (called every frame)
// Set up viewport
glViewport(0, 0, ShadowmapSize, ShadowmapSize);
// Bind frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFrameBuffer);
// Clear depth buffer
glClear(GL_DEPTH_BUFFER_BIT);
// Render scene
for (int i = 0; i < 6; ++i)
{
    sh->SetUniform("ShadowMatrix", lightSpaceTransforms[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, depthMap, 0);
    Space()->Get<Renderer>()->RenderScene(sh);
}
// Unbind frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
And here's how I'm binding it before drawing:
std::stringstream ssD;
ssD << "PointLights[" << i << "].DepthMap";
glActiveTexture(GL_TEXTURE4 + i);
glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap()); // just returns the ID of the light's depth map
shader->SetUniform(ssD.str().c_str(), i + 4); // just a wrapper around glSetUniform1i
Thank you for reading, and please let me know if I can supply more information!
This is an old post, but I think it may be useful for other people coming from search.
Your problem is here:
glActiveTexture(GL_TEXTURE4 + i);
glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap());
This replacement should fix the problem:
glActiveTexture(GL_TEXTURE4 + i);
glUniform1i(glGetUniformLocation(programId, "cubeMapUniformName"), 4 + i);
glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap());
It sets the texture unit number (the plain index 4 + i, not the GL_TEXTURE4 + i enum value) for the shader's sampler uniform.
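Put together with the loop from the question, the per-light binding would then look roughly like this (a sketch reusing the question's names; programId stands for the shader program handle and numPointLights for the light count, both assumed here):
for (int i = 0; i < numPointLights; ++i)
{
    std::stringstream ssD;
    ssD << "PointLights[" << i << "].DepthMap";
    // Bind the light's depth cube map to texture unit 4 + i ...
    glActiveTexture(GL_TEXTURE4 + i);
    glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap());
    // ... and point the sampler uniform at that unit (plain index, not the GL_TEXTUREn enum).
    glUniform1i(glGetUniformLocation(programId, ssD.str().c_str()), 4 + i);
}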

OpenGL FreeType: weird texture

After I have initialized the library and loaded the texture I get http://postimg.org/image/4tzkq4uhl.
But when I added this line to the texture code:
std::vector<unsigned char> buffer(w * h, 0);
I get http://postimg.org/image/kqycmumvt.
Why is this happening when I add that specific line, and why does it seem like the letter is multiplied? I have searched FreeType examples and tutorials, and I saw that some of them modify the buffer array, but I didn't really understand that, so if you can explain it to me, I may be able to handle this better.
Texture Load:
Texture::Texture(FT_GlyphSlot slot) {
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    int w = slot->bitmap.width;
    int h = slot->bitmap.rows;
    // When I remove this line, the black rectangle below the letter reappears.
    std::vector<unsigned char> buffer(w * h, 0);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, slot->bitmap.width, slot->bitmap.rows, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, slot->bitmap.buffer);
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
}
Fragment Shader:
#version 330
in vec2 uv;
in vec4 tColor;
uniform sampler2D tex;
out vec4 color;
void main () {
    color = vec4(tColor.rgb, texture(tex, uv).a);
}
You're specifying GL_LUMINANCE_ALPHA for the format of the data you pass to glTexImage2D(). Based on the corresponding FreeType documentation I found here:
http://www.freetype.org/freetype2/docs/reference/ft2-basic_types.html#FT_Pixel_Mode
There is no FT_Pixel_Mode value specifying that the data in slot->bitmap.buffer is in fact luminance-alpha. GL_LUMINANCE_ALPHA is a format with 2 bytes per pixel, where the first byte is used for R, G, and B when the data is used to specify an RGBA image, and the second byte is used for A.
Based on the data you're showing, slot->bitmap.pixel_mode is most likely FT_PIXEL_MODE_GRAY, which means that the bitmap data is 1 byte per pixel. In this case, you need to use GL_ALPHA for the format:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, slot->bitmap.width, slot->bitmap.rows, 0,
GL_ALPHA, GL_UNSIGNED_BYTE, slot->bitmap.buffer);
If the pixel_mode is something other than FT_PIXEL_MODE_GRAY, you'll have to adjust the format accordingly, or potentially create a copy of the data if it's a format that is not supported by glTexImage2D().
The reason you get garbage if you specify GL_LUMINANCE_ALPHA instead of GL_ALPHA is that it reads twice as much data as is contained in the data you pass in. The content of the data that is read beyond the allocated bitmap data is undefined, and may well change depending on what other variables you declare/allocate.
If you want to use texture formats that are still supported in the core profile instead of the deprecated GL_LUMINANCE_ALPHA or GL_ALPHA, you can use GL_R8 instead. Since this format has only one component, instead of the four in GL_RGBA, this will also use 75% less texture memory:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, slot->bitmap.width, slot->bitmap.rows, 0,
GL_RED, GL_UNSIGNED_BYTE, slot->bitmap.buffer);
This will also require a slight change in the shader to read the r component instead of the a component:
color = vec4(tColor.rgb, texture(tex, uv).r);
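Alternatively, if you would rather keep the shader reading the .a component, another option (assuming an OpenGL 3.3+ context, which matches the #version 330 shader above) is to stay with GL_R8 storage and set a texture swizzle so the single red channel is returned as alpha:
// Return constant 1.0 for RGB and the stored red channel as alpha, so
// texture(tex, uv).a keeps working with a GL_R8 texture.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_ONE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_ONE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_ONE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_RED);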
Solved it. I added the following to my code and it works well.
GLubyte* data = new GLubyte[2 * w * h];
for (int y = 0; y < slot->bitmap.rows; y++)
{
    for (int x = 0; x < slot->bitmap.width; x++)
    {
        data[2 * (x + y * w)] = 255;
        data[2 * (x + y * w) + 1] = slot->bitmap.buffer[x + slot->bitmap.width * y];
    }
}
I don't know what happened with that particular line I added, but now it works.
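For completeness, the upload call is not shown above; presumably the glTexImage2D from the question now takes this buffer with a two-bytes-per-pixel format, along these lines (an assumption based on the {luminance, alpha} layout built in the loop):
// Each pixel is now {255, glyph value}, i.e. luminance + alpha.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, data);
delete[] data; // GL copies the data during the upload, so the buffer can be freed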

Setting GL_TEXTURE_MAX_ANISOTROPY_EXT causes crash on next frame

I am using OpenGL 3.3 and deferred shading.
When I set anisotropy values for my samplers between frames, the next frame crashes at glClear.
Here's how I set my anisotropy values:
bool OpenGLRenderer::SetAnisotropicFiltering(const float newAnisoLevel)
{
    if (newAnisoLevel < 0.0f || newAnisoLevel > GetMaxAnisotropicFiltering())
        return false;

    mCurrentAnisotropy = newAnisoLevel;

    // the sampler used for geometry pass
    GLCALL(glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy));
    // the sampler used in shading pass
    GLCALL(glSamplerParameterf(mGBuffer.mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy));

    return true;
}
The geometry pass samples the diffuse/normal textures, and its sampler is set up like this:
GLCALL(glUseProgram(mGeometryProgram.mProgramHandle));
GLCALL(glGenSamplers(1, &mTextureSampler));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT));
GLCALL(glUniform1i(glGetUniformLocation(mGeometryProgram.mProgramHandle, "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE));
GLCALL(glUniform1i(glGetUniformLocation(mGeometryProgram.mProgramHandle, "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL));
GLCALL(glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler));
GLCALL(glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler));
GLCALL(glUseProgram(0));
The shading pass has the following textures for lighting calculations:
GLCALL(glUseProgram(shadingProgramID));
GLCALL(glGenSamplers(1, &mTextureSampler));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE));
GLCALL(glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE));
GLCALL(glUniform1i(glGetUniformLocation(shadingProgramID, "unifPositionTexture"), GBuffer::GBUFFER_TEXTURE_POSITION));
GLCALL(glUniform1i(glGetUniformLocation(shadingProgramID, "unifNormalTexture"), GBuffer::GBUFFER_TEXTURE_NORMAL));
GLCALL(glUniform1i(glGetUniformLocation(shadingProgramID, "unifDiffuseTexture"), GBuffer::GBUFFER_TEXTURE_DIFFUSE));
GLCALL(glBindSampler(GBuffer::GBUFFER_TEXTURE_POSITION, mTextureSampler));
GLCALL(glBindSampler(GBuffer::GBUFFER_TEXTURE_NORMAL, mTextureSampler));
GLCALL(glBindSampler(GBuffer::GBUFFER_TEXTURE_DIFFUSE, mTextureSampler));
GLCALL(glUseProgram(0));
And then, on the next frame, it crashes immediately at the glClear call in the geometry pass:
void OpenGLRenderer::GeometryPass(const RenderQueue& renderQueue)
{
    GLCALL(glUseProgram(mGeometryProgram.mProgramHandle));
    GLCALL(glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mGBuffer.mFramebuffer));
    GLCALL(glDepthMask(GL_TRUE));
    GLCALL(glEnable(GL_DEPTH_TEST));

    // clear GBuffer fbo
    GLCALL(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)); // <----- crash!

    // both containers are assumed to be sorted by MeshID ascending
    auto meshIterator = mMeshes.begin();
    for (const Renderable& renderable : renderQueue)
    {
        // lots of draw code.....
    }

    GLCALL(glDisable(GL_DEPTH_TEST));
    GLCALL(glDepthMask(GL_FALSE));
    GLCALL(glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0));
    GLCALL(glUseProgram(0));
}
What could be the issue here?
Your range validation is wrong. The minimum acceptable value for anisotropy is 1.0f. A value of 1.0f (the default) means off (isotropic).
To be honest, rather than returning false and doing nothing else when you set anisotropy above or below the acceptable range, I would consider clamping the values to [1.0, MAX]. You can always find out later on that your request was unacceptable by checking the value of mCurrentAnisotropy after the function returns. This is useful if you store the anisotropy level as an option in a configuration file and the hardware changes. Though 16.0 is almost universally the maximum these days, some really old hardware only supports 8.0. You can still return false, report a warning or whatever, but I personally always interpret a request for a level of anisotropy too high for the implementation to support to mean: "I want the highest anisotropy possible."
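A minimal sketch of that clamping approach, reusing the names from the question (GLCALL, GetMaxAnisotropicFiltering and the member samplers are assumed to behave as in the original code; <algorithm> is needed for std::min/std::max):
bool OpenGLRenderer::SetAnisotropicFiltering(const float newAnisoLevel)
{
    // Clamp to the legal range [1.0, max] instead of rejecting the request outright.
    mCurrentAnisotropy = std::max(1.0f, std::min(newAnisoLevel, GetMaxAnisotropicFiltering()));
    GLCALL(glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy));
    GLCALL(glSamplerParameterf(mGBuffer.mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy));
    // The caller can still detect an out-of-range request by comparing against mCurrentAnisotropy.
    return mCurrentAnisotropy == newAnisoLevel;
}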