How To Render To Multiple Textures With OpenGL? - c++

This is my understanding of the basic steps for rendering to multiple textures:
1) Get the shader's sampler locations and bind them to texture units
m_uihDiffuseMap = glGetUniformLocation( m_iShaderProgramHandle, "diffuseMap" );
if( m_uihDiffuseMap != -1 )
glUniform1i( m_uihDiffuseMap, 0 );
m_uihNormalMap = glGetUniformLocation( m_iShaderProgramHandle, "normalMap" );
if( m_uihNormalMap != -1 )
glUniform1i( m_uihNormalMap, 1 );
2) Bind to what you want to render to
glBindFramebuffer( GL_FRAMEBUFFER, m_uifboHandle );
//diffuse texture binding
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, m_uiTextureHandle1, 0);
//normal texture binding (GL_COLOR_ATTACHMENT0+1 is GL_COLOR_ATTACHMENT1)
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0+1, m_uiTextureHandle2, 0);
3) Clear the buffer & specify what buffers you want to draw to
glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, buffers);
4) Set your shader program for rendering
glUseProgram( m_uiShaderProgramHandle );
5) Pass variables to shader like our 2 different textures
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, uihDiffuseMap );
glActiveTexture( GL_TEXTURE0+1 ); //i.e. GL_TEXTURE1
glBindTexture( GL_TEXTURE_2D, uihNormalMap );
6) Do render call things
//Draw stuff
7) Set things back to default in case other render procedures rely on different state
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
glUseProgram( 0 );
------------------------------FRAGMENT SHADER-----------------------------------
In the fragment shader, you have to output the two results like this, right?
#version 330
in vec2 vTexCoordVary;
uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
out vec4 fragColor[2];
void main( void )
{
fragColor[0] = texture( diffuseMap, vTexCoordVary );
fragColor[1] = texture( normalMap, vTexCoordVary );
}
I've double checked:
-My diffuse texture and normal texture are loaded fine. If I pass my normal texture as TEXTURE0, it shows up.
-fragColor[0] comes out just fine. But when I display fragColor[1] to the screen, I get the same result as the first one. As a test case I also hardcoded fragColor[1] to return solid grey inside the shader, and that worked.
So my assumption is that somehow, when I pass my textures to the shader, it treats "normalMap" as "diffuseMap"? That's the only explanation I have for why fragColor[0] and [1] give the same result.

Yes, as it stands, the information about how your samplers map to texture units is missing, causing both to refer to the diffuse map. Use glUniform1i() to bind the index of the correct texture unit to each sampler uniform.

It looks like step 1 and step 4 are in the wrong order. This is in step 1:
if( m_uihDiffuseMap != -1 )
glUniform1i( m_uihDiffuseMap, 0 );
if( m_uihNormalMap != -1 )
glUniform1i( m_uihNormalMap, 1 );
and this is in step 4:
glUseProgram( m_uiShaderProgramHandle );
glUniform*() calls apply to the currently active program. So glUseProgram() must be called before the glUniform1i() calls.
It might also be a good idea to specifically set the location of the out variable:
layout(location = 0) out vec4 fragColor[2];
I don't think this is causing your problem, but I don't see anything in the spec saying that the linker assigns locations starting at 0 if they are not explicitly specified.
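Putting the two fixes together, steps 1 and 4 collapse into something like this (a minimal sketch reusing the handle names from the question):
glUseProgram( m_uiShaderProgramHandle );                // program must be current first
m_uihDiffuseMap = glGetUniformLocation( m_uiShaderProgramHandle, "diffuseMap" );
if( m_uihDiffuseMap != -1 )
    glUniform1i( m_uihDiffuseMap, 0 );                  // sampler "diffuseMap" reads texture unit 0
m_uihNormalMap = glGetUniformLocation( m_uiShaderProgramHandle, "normalMap" );
if( m_uihNormalMap != -1 )
    glUniform1i( m_uihNormalMap, 1 );                   // sampler "normalMap" reads texture unit 1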

In case someone stumbles upon this via a search engine like I did: my problem with writing to multiple textures was a typo. My framebuffer initialization code looked like this:
glGenFramebuffers(1, &renderer.gbufferFboId);
glBindFramebuffer(GL_FRAMEBUFFER, renderer.gbufferFboId);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, renderer.gbufferDepthTexId, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderer.gbufferColorPositionId, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, renderer.gbufferColorColorId, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, renderer.gbufferColorNormalId, 0);
opengl::fbo_check_current_status();
const GLenum drawBuffers[]
{
GL_COLOR_ATTACHMENT0,
GL_COLOR_ATTACHMENT1,
GL_COLOR_ATTACHMENT2
};
glDrawBuffers(1, drawBuffers);
Can you spot the error? It's a small typo in the glDrawBuffers call. The count should, of course, match the number of buffers:
glDrawBuffers(3, drawBuffers);
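As a side note (my addition, not from the original post), deriving the count from the array itself makes this kind of typo impossible, e.g. with C++17:
glDrawBuffers(static_cast<GLsizei>(std::size(drawBuffers)), drawBuffers); // std::size from <iterator>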

Related

Texture is rendered blank but glGetTexImage works perfectly

I've written some code that adds a UI overlay to an existing OpenGL application.
Unfortunately, I am not proficient in OpenGL but I do know that it somehow always manages to fail on some device.
Some backstory:
The general pipeline is:
Application renders -> UI is rendered to an FBO -> FBO is blitted to obtain an RGBA texture -> texture is drawn on top of the application scene.
So far so good. Then I encountered an issue on Intel cards on Ubuntu 16.04 where the texture broke between context switches (the UI is rendered using a QOpenGLContext; the application uses a raw OpenGL context managed by OGRE, but the QOpenGLContext is set to share resources with it). I worked around that by checking whether sharing works (create a texture in one context and check whether its content is correct in the other) and, if it doesn't, reading the content back while still in context B and uploading it again in context A.
However, on Ubuntu 18.04 on the same machine, that check actually passes for some reason: the texture content is still correct in the other context when retrieved with glGetTexImage.
Now here's the problem:
It's not being rendered. I get just the application scene without anything on top, but if I manually enable the workaround of downloading the texture and re-uploading it to a texture that was created in the application context, it works.
How can it be that the texture's content is alright but it won't show unless I grab it using glGetTexImage and re-upload it to a texture created in the other context using glTexImage2D?
There must be some state that is invalid and that gets set correctly when using glTexImage2D.
Here's the code after the UI is rendered:
if (!checked_can_share_texture_)
{
qopengl_wrapper_->drawInvisibleTestOverlay();
}
qopengl_wrapper_->finishRender();
if ( !can_share_texture_ || !checked_can_share_texture_ )
{
glBindTexture( GL_TEXTURE_2D, qopengl_wrapper_->texture());
glGetTexImage( GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixel_data_ );
}
glBindTexture( GL_TEXTURE_2D, 0 );
qopengl_wrapper_->doneCurrent(); // Makes the applications context current again
glDisable( GL_DEPTH_TEST );
glDisable( GL_CULL_FACE );
glDisable( GL_LIGHTING );
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
glUseProgram(shader_program_);
glUniform1i(glGetUniformLocation(shader_program_, "tex"), 0);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer_object_);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), nullptr);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
glActiveTexture( GL_TEXTURE0 );
glEnable( GL_TEXTURE_2D );
if ( can_share_texture_ )
{
glBindTexture( GL_TEXTURE_2D, qopengl_wrapper_->texture());
if ( !checked_can_share_texture_)
{
const int count = qopengl_wrapper_->size().width() * qopengl_wrapper_->size().height() * 4;
const int thresh = std::ceil(count / 100.f);
unsigned char content[count];
glGetTexImage( GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, content);
int wrong = 0;
// can_share_texture_ = false; // Bypassing the actual check will make it work
for (int i = 0; i < count; ++i) {
if (content[i] == pixel_data_[i]) continue;
if (++wrong < thresh) continue;
can_share_texture_ = false;
LOG(
"OverlayManager: Looks like texture sharing isn't working on your system. Falling back to texture copying." );
// If we can't share textures, we have to generate one
glActiveTexture( GL_TEXTURE0 );
glGenTextures( 1, &texture_ );
glBindTexture( GL_TEXTURE_2D, texture_ );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
break;
}
if (can_share_texture_)
{
delete pixel_data_;
pixel_data_ = nullptr;
LOG("Texture sharing seems supported. Count: %d", count);
}
checked_can_share_texture_ = true;
}
}
else
{
glBindTexture( GL_TEXTURE_2D, texture_ );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, qopengl_wrapper_->size().width(), qopengl_wrapper_->size().height(), 0,
GL_RGBA, GL_UNSIGNED_BYTE, pixel_data_ );
}
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glUseProgram(0);
Vertex Shader
#version 130
in vec3 pos;
in vec2 coord;
out vec2 texCoord;
void main()
{
gl_Position = vec4(pos, 1.0); //Just output the incoming vertex
texCoord = coord;
}
Fragment Shader
#version 130
uniform sampler2D tex;
in vec2 texCoord;
void main()
{
gl_FragColor = texture(tex, texCoord);
}
TL;DR: The texture isn't rendered (completely transparent), but if I copy it to memory using glGetTexImage it looks fine, and if I copy that back into a texture created in the application context, it renders fine.
Graphics card is an Intel UHD 620 with Mesa version 18.2.8.
Edit: In case it wasn't clear, I'm copying the texture while the application's context is current, not the texture's original context; that is the same context in which the working texture is created. So, if sharing didn't work, I shouldn't get the correct content at that point.

deferred rendering - Renderbuffer vs Texture

So, I've been reading about this, and I still haven't found a conclusion. Some examples use textures as their render targets, some people use renderbuffers, and some use both!
For example, using just textures:
// Create the gbuffer textures
glGenTextures(ARRAY_SIZE_IN_ELEMENTS(m_textures), m_textures);
glGenTextures(1, &m_depthTexture);
for (unsigned int i = 0 ; i < ARRAY_SIZE_IN_ELEMENTS(m_textures) ; i++) {
glBindTexture(GL_TEXTURE_2D, m_textures[i]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0);
}
both:
glGenRenderbuffersEXT ( 1, &m_diffuseRT );
glBindRenderbufferEXT ( GL_RENDERBUFFER_EXT, m_diffuseRT );
glRenderbufferStorageEXT ( GL_RENDERBUFFER_EXT, GL_RGBA, m_width, m_height );
glFramebufferRenderbufferEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, m_diffuseRT );
glGenTextures ( 1, &m_diffuseTexture );
glBindTexture ( GL_TEXTURE_2D, m_diffuseTexture );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
// Attach the texture to the FBO
glFramebufferTexture2DEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, m_diffuseTexture, 0 );
What's the difference? What's the point of creating a texture, a renderbuffer, and then assigning one to the other? After you successfully supply a texture with an image, it has its memory allocated, so why does one need to bind it to a renderbuffer?
Why would one use textures or renderbuffers? What would be the advantages?
I've read that you cannot read from a renderbuffer, only a texture. What's the use of it, then?
EDIT:
So, my current code for a GBuffer is this:
enum class GBufferTextureType
{
Depth = 0,
Position,
Diffuse,
Normal,
TexCoord
};
.
.
.
glGenFramebuffers ( 1, &OpenGLID );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
Delete();
return false;
}
glBindFramebuffer ( GL_FRAMEBUFFER, OpenGLID );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
Delete();
return false;
}
uint32_t TextureGLIDs[5];
glGenTextures ( 5, TextureGLIDs );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
Delete();
return false;
}
// Create the depth texture
glBindTexture ( GL_TEXTURE_2D, TextureGLIDs[ ( int ) GBufferTextureType::Depth] );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, In_Dimensions.x, In_Dimensions.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, TextureGLIDs[ ( int ) GBufferTextureType::Depth], 0 );
// Create the color textures
for ( unsigned cont = 1; cont < 5; ++cont )
{
glBindTexture ( GL_TEXTURE_2D, TextureGLIDs[cont] );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGB32F, In_Dimensions.x, In_Dimensions.y, 0, GL_RGB, GL_FLOAT, NULL );
glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + cont, GL_TEXTURE_2D, TextureGLIDs[cont], 0 );
}
// Specify draw buffers
GLenum DrawBuffers[4];
for ( unsigned cont = 0; cont < 4; ++cont )
DrawBuffers[cont] = GL_COLOR_ATTACHMENT0 + cont;
glDrawBuffers ( 4, DrawBuffers );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
Delete();
return false;
}
GLenum Status = glCheckFramebufferStatus ( GL_FRAMEBUFFER );
if ( Status != GL_FRAMEBUFFER_COMPLETE )
{
Delete();
return false;
}
Dimensions = In_Dimensions;
// Unbind
glBindFramebuffer ( GL_FRAMEBUFFER, 0 );
Is this the way to go?
I still have to write the corresponding shaders...
What's the point of creating a texture, a renderbuffer, and then assigning one to the other?
That's not what's happening. But that's OK, because that second example code is errant nonsense. The glFramebufferTexture2DEXT call overrides the binding made by glFramebufferRenderbufferEXT; the renderbuffer is never actually used after it is created.
If you found that code online somewhere, I strongly advise you to disregard anything that source told you about OpenGL development. Though I would advise that anyway, since it uses the "EXT" extension functions in 2016, almost a decade after core FBOs became available.
I've read that you cannot read from a renderbuffer, only a texture. What's the use of it, then?
That is entirely the point of them: you use a renderbuffer for images that you don't want to read from. That's not useful for deferred rendering, since you really do want to read from them.
But imagine if you're generating a reflection image of a scene, which you will later use as a texture in your main scene. Well, to render the reflection scene, you need a depth buffer. But you're not going to read from that depth buffer (not as a texture, at any rate); you need a depth buffer for depth testing. But the only image you're going to read from after is the color image.
So you would make the depth buffer a renderbuffer. That tells the implementation that the image can be put into whatever storage is most efficient for use as a depth buffer, without having to worry about read-back performance. This may or may not have a performance impact. But at the very least, it won't be any slower than using a texture.
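For the reflection example above, the setup might look roughly like this (a sketch with made-up handle names and placeholder dimensions, not code from the question):
GLuint reflectionFbo, reflectionColorTex, reflectionDepthRbo; // hypothetical names
glGenFramebuffers(1, &reflectionFbo);
glBindFramebuffer(GL_FRAMEBUFFER, reflectionFbo);
// Color attachment as a texture, because it will be sampled later.
glGenTextures(1, &reflectionColorTex);
glBindTexture(GL_TEXTURE_2D, reflectionColorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, reflectionColorTex, 0);
// Depth attachment as a renderbuffer, because it is only needed for depth testing.
glGenRenderbuffers(1, &reflectionDepthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, reflectionDepthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, reflectionDepthRbo);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; // handle the incomplete framebuffer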
Most rendering scenarios need a depth and/or stencil buffer, though it is rare that you would ever need to sample the data stored in the stencil buffer from a shader.
It would be impossible to do depth/stencil tests if your framebuffer did not have a location to store these data and any render pass that uses these fragment tests requires a framebuffer with the appropriate images attached.
If you are not going to use the depth/stencil buffer data in a shader, a renderbuffer will happily satisfy storage requirements for fixed-function fragment tests. Renderbuffers have fewer format restrictions than textures do, particularly if we detour this discussion to multisampling.
D3D10 introduced support for multisampled color textures but omitted multisampled depth textures; D3D10.1 later fixed that problem and GL3.0 was finalized after D3D10's initial design oversight was corrected.
This pre-GL3 / pre-D3D10.1 design manifests itself in GL as a multisampled framebuffer object that allows either texture or renderbuffer color attachments, but forces you to use a renderbuffer for the depth attachment.
Renderbuffers are ultimately the lowest common denominator for storage; they will get you through tough jams on feature-limited hardware. You can actually blit the data stored in a renderbuffer into a texture in some situations where you could not draw directly into the texture.
To that end, you can resolve a multisampled renderbuffer into a single-sampled texture by blitting from one framebuffer to another. This is implicit multisampling, and it would allow you to use the anti-aliased results of a previous render pass with a standard texture lookup. Unfortunately, it is thoroughly useless for anti-aliasing in deferred shading; you need explicit multisample resolve for that.
Nonetheless, it is incorrect to say that a renderbuffer is not readable; it is readable in every sense of the word, but since your goal is deferred shading, reading it would require additional GL commands to copy the data into a texture.
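The blit-based resolve mentioned above looks roughly like this (a sketch; msaaFbo and resolveFbo are hypothetical framebuffers of the same size, the second one with a single-sampled texture attached):
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);    // source: multisampled renderbuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo); // destination: single-sampled texture
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);  // samples are resolved during the blit
glBindFramebuffer(GL_FRAMEBUFFER, 0);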

OpenGL Shadow Glitch

The Problem
I have been trying to implement shadows in OpenGL for some time. I have finally gotten it to a semi-working state, in that the shadow appears but covers the scene in strange places [i.e. it is not relative to the light].
To further explain the above gif: as I move the light source further away from the scene (to the left), the shadow stretches further. Why? If anything, it should show more of the scene.
Update - I messed around with the light's position and am now getting this result (confusing):
Depth Map
Here it is:
The Code
Because this is a difficult issue to pinpoint, I will post a large chunk of the code I am using in this application.
The Framebuffer and Depth Texture - The first thing I needed was a framebuffer to record the depth values of all the drawn objects, and then to dump these values into a depth texture (the shadow map):
// Create Framebuffer
FramebufferName = 0;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
// Create and Load Depth Texture
glGenTextures(1, &depthTexture);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexImage2D(GL_TEXTURE_2D, 0,GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
//Attach Texture To Framebuffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
//Check for errors
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
Falcon::Debug::error("ShadowBuffer [Framebuffer] could not be initialized.");
Rendering The Scene - First I do the shadow pass, which just runs through some basic shaders and outputs to the framebuffer, and then I do a second, regular pass that actually draws the scene and does the GLSL shadow-map sampling:
//Clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//Select Main Shader
normalShader->useShader();
//Bind + Update + Draw
/* Render Shadows */
shadowShader->useShader();
glBindFramebuffer(GL_FRAMEBUFFER, Shadows::framebuffer());
//Viewport
glViewport(0,0,640,480);
//GLM Matrix Definitions
glm::mat4 shadow_matrix_view;
glm::mat4 shadow_matrix_projection;
//View And Projection Calculations
shadow_matrix_view = glm::lookAt(glm::vec3(light.x,light.y,light.z), glm::vec3(0,0,0), glm::vec3(0,1,0));
shadow_matrix_projection = glm::perspective(45.0f, 1.0f, 0.1f, 1000.0f);
//Calculate MVP(s)
glm::mat4 shadow_depth_mvp = shadow_matrix_projection * shadow_matrix_view * glm::mat4(1.0);
glm::mat4 shadow_depth_bias = glm::mat4(0.5,0,0,0,0,0.5,0,0,0,0,0.5,0,0.5,0.5,0.5,1) * shadow_depth_mvp;
//Send Data To The GPU
glUniformMatrix4fv(glGetUniformLocation(shadowShader->getShader(),"depth_matrix"), 1, GL_FALSE, &shadow_depth_mvp[0][0]);
glUniformMatrix4fv(glGetUniformLocation(normalShader->getShader(),"depth_matrix_bias"), 1, GL_FALSE, &shadow_depth_bias[0][0]);
renderScene();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
/* Clear */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
/* Shader */
normalShader->useShader();
/* Shadow-map */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Shadows::shadowmap());
glUniform1f(glGetUniformLocation(normalShader->getShader(),"shadowMap"),0);
/* Render Scene */
glViewport(0,0,640,480);
renderScene();
Fragment Shader - This is where I calculate the final color to be output and do the depth-texture / shadow-map sampling. It could be the source of my mistake:
//Shadows
uniform sampler2DShadow shadowMap;
in vec4 shadowCoord;
void main()
{
//Lighting Calculations...
//Shadow Sampling:
float visibility = 1.0;
if (texture(shadowMap, shadowCoord.xyz) < shadowCoord.z){
visibility = 0.1;
}
//Final Output
outColor = finalColor * visibility;
}
Edits
<1> AMD Hardware Issue - It was also suggested that this could be an issue with the GPU, but I find this hard to believe given that it's a Radeon HD 6670. Would it be worth putting in an Nvidia card to test this theory?
<2> Suggest Changes - I made some suggested changes from the comments and answers:
Firstly, I changed the light's projection from perspective to orthographic, which gave me the accuracy I needed in the shadow map so that I can now see the depth clearly (i.e. it's not all white). In addition, it removes the need for the perspective division, so I am using 3-dimensional coordinates for the test. Below is a screenshot:
Secondly, I changed my texture sampling to this: visibility = texture(shadowMap, shadowCoord.xyz); which now always returns 0, and thus I cannot see the scene, as it is considered ENTIRELY shadowed.
Thirdly and finally, I made the swap from GL_LEQUAL to GL_LESS as suggested, and no changes occurred.
There is something fundamentally wrong with your shader:
uniform sampler2DShadow shadowMap; // NOTE: Shadow samplers perform comparison !!
...
if (texture(shadowMap, shadowCoord.xyz) < shadowCoord.z)
You have texture compare vs. reference enabled. That means that the third texture coordinate is going to be compared by the texture(...) function, and the returned value is going to be the result of that comparison (using GL_LEQUAL in this case).
In other words, texture(...) will return either 0.0 (fail) or 1.0 (pass) by comparing the depth looked up at shadowCoord.xy to the value of shadowCoord.z. You are doing this test twice.
Consider using this altered code instead:
float visibility = texture(shadowMap, shadowCoord.xyz);
That is not going to produce quite the results you want because your comparison function is GL_LEQUAL, but it is a start. Consider changing the comparison function to GL_LESS to get an exact functional match.
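On the application side, that GL_LESS match would come from the texture's compare parameters; a sketch of the two relevant lines from the question's depth-texture setup:
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LESS ); // was GL_LEQUAL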

using glTexture in a shader

I'm using OpenGL, and what I want to do is render my scene to a texture and then store that texture so that I can pass it into my fragment shader to be used when rendering something else.
I created a texture using glGenTextures() and attached it to a framebuffer, then rendered into that framebuffer after binding it with glBindFramebuffer(). I then set the framebuffer binding back to 0 to render to the screen again, but now I'm stuck.
In my fragment shader I have a uniform sampler2D 'myTexture' that I want this texture to be bound to. How do I go about doing this?
For .jpg/png images that I found online I just used the following code:
glUseProgram(Shader->GetProgramID());
GLint ID = glGetUniformLocation(
Shader->GetProgramID(), "Map");
glUniform1i(ID, 0);
glActiveTexture(GL_TEXTURE0);
MapImage->Bind();
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
However, this code doesn't work for the texture I created with glGenTextures(). Specifically, I can't call myTexture->Bind() in the same way.
If you truly have "a uniform sampler 2D 'myTexture'" in your shader, why are you getting the uniform location for "Map"? You should be getting the uniform location for "myTexture"; that's the sampler's name, after all.
So I created a texture using glGenTexture() and attached it to a frame buffer
So in other words you did this:
glActiveTexture(GL_TEXTURE0);
GLuint myTexture;
glGenTextures(1, &myTexture);
glBindTexture(GL_TEXTURE_2D, myTexture);
// very important and often missed by newbies:
// A framebuffer attachment must be initialized but not bound
// when using the framebuffer, so it must be unbound after
// initializing
glTexImage2D(…, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(…);
glFramebufferTexture2D(…, myTexture, …);
Okay, so myTexture is the name (handle) of the texture. What do we have to do? Unbind the FBO so that we can use the attached texture as an image source:
glBindFramebuffer(…, 0);
glActiveTexture(GL_TEXTURE0 + n);
glBindTexture(GL_TEXTURE_2D, myTexture);
glUseProgram(…);
glUniform1i(texture_sampler_location, n);
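Filled in with concrete (hypothetical) names and placeholder dimensions, the whole round trip might look like this sketch:
// Setup: create the texture and attach it to an FBO.
GLuint myTexture = 0, fbo = 0;
glGenTextures(1, &myTexture);
glBindTexture(GL_TEXTURE_2D, myTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0); // unbind before rendering to the FBO

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, myTexture, 0);

// Pass 1: render the scene into the texture.
// ... draw calls ...

// Pass 2: sample that texture while drawing to the default framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(shaderProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, myTexture);
glUniform1i(glGetUniformLocation(shaderProgram, "myTexture"), 0); // texture unit 0
// ... draw calls ...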

Problems outputting gl_PrimitiveID to custom frame buffer object (FBO)

I have a very basic fragment shader, and I want to output 'gl_PrimitiveID' to a framebuffer object (FBO) which I have defined. Below is my fragment shader:
#version 150
uniform vec4 colorConst;
out vec4 fragColor;
out uvec4 triID;
void main(void)
{
fragColor = colorConst;
triID.r = uint(gl_PrimitiveID);
}
I setup my FBO like this:
GLuint renderbufId0;
GLuint renderbufId1;
GLuint depthbufId;
GLuint framebufId;
// generate render and frame buffer objects
glGenRenderbuffers( 1, &renderbufId0 );
glGenRenderbuffers( 1, &renderbufId1 );
glGenRenderbuffers( 1, &depthbufId );
glGenFramebuffers ( 1, &framebufId );
// setup first renderbuffer (fragColor)
glBindRenderbuffer(GL_RENDERBUFFER, renderbufId0);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, gViewWidth, gViewHeight);
// setup second renderbuffer (triID)
glBindRenderbuffer(GL_RENDERBUFFER, renderbufId1);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB32UI, gViewWidth, gViewHeight);
// setup depth buffer
glBindRenderbuffer(GL_RENDERBUFFER, depthbufId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, gViewWidth, gViewHeight);
// setup framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, framebufId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbufId0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_RENDERBUFFER, renderbufId1);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthbufId );
// check if everything went well
GLenum stat = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(stat != GL_FRAMEBUFFER_COMPLETE) { exit(0); }
// setup color attachments
const GLenum att[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, att);
// render mesh
RenderMyMesh()
// copy second color attachment (triID) to local buffer
glReadBuffer(GL_COLOR_ATTACHMENT1);
glReadPixels(0, 0, gViewWidth, gViewHeight, GL_RED, GL_UNSIGNED_INT, data);
For some reason, glReadPixels gives me a 'GL_INVALID_OPERATION' error. However, if I change the internal format of renderbufId1 from 'GL_RGB32UI' to 'GL_RGB' and use 'GL_FLOAT' in glReadPixels instead of 'GL_UNSIGNED_INT', then everything works fine. Does anyone know why I am getting the 'GL_INVALID_OPERATION' error and how I can solve it?
Is there an alternative way of outputting 'gl_PrimitiveID'?
PS: The reason I want to output 'gl_PrimitiveID' like this is explained here: Picking triangles in OpenGL core profile when using glDrawElements
glReadPixels(0, 0, gViewWidth, gViewHeight, GL_RED, GL_UNSIGNED_INT, data);
As stated on the OpenGL Wiki, you need to use GL_RED_INTEGER when transferring true integer data. Otherwise, OpenGL will try to apply floating-point conversion to it.
BTW, make sure you're using glBindFragDataLocation to set up which buffers those fragment shader outputs go to. Alternatively, you can set the locations explicitly in the shader if you're using GLSL 3.30 or above.
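Putting both points together, a sketch of the fix (programId is a hypothetical handle for the shader program):
// Route the shader outputs to the color attachments before linking.
glBindFragDataLocation(programId, 0, "fragColor"); // -> GL_COLOR_ATTACHMENT0
glBindFragDataLocation(programId, 1, "triID");     // -> GL_COLOR_ATTACHMENT1
glLinkProgram(programId);
// ...
// Read back the integer attachment with an *_INTEGER pixel format.
glReadBuffer(GL_COLOR_ATTACHMENT1);
glReadPixels(0, 0, gViewWidth, gViewHeight, GL_RED_INTEGER, GL_UNSIGNED_INT, data);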