Offscreen rendering OpenGL 4.5 multisample FBO - C++

I'm referencing OpenGL Superbible 6 in my code.
First I simply wanted to implement object picking in my 3D scene. I eventually decided to use framebuffer objects and got that working, but then I realized I also had to deal with polygon edge aliasing, so I rewrote my code again to make use of GL_TEXTURE_2D_MULTISAMPLE.
Here is the initialization code for the framebuffer:
void window_glview::init_framebuffer()
{
    //CREATE FRAMEBUFFER OBJECT
    GLenum gl_error=glGetError();

    glGenTextures(1,&texture_id_framebuffer_color);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_color);
    glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,ANTIALIASING_SAMPLES,GL_RGBA8,client_area.right,client_area.bottom,GL_TRUE);

    glGenTextures(1,&texture_id_framebuffer_objectid);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_objectid);
    glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,ANTIALIASING_SAMPLES,GL_RGBA8,client_area.right,client_area.bottom,GL_TRUE);

    glGenTextures(1,&texture_id_framebuffer_depth);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_depth);
    glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,ANTIALIASING_SAMPLES,GL_DEPTH_COMPONENT32,client_area.right,client_area.bottom,GL_TRUE);
    gl_error=glGetError();

    glGenFramebuffers(1,&buffer_id_framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER,buffer_id_framebuffer);
    gl_error=glGetError();

    glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
    glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_objectid,0);
    glFramebufferTexture(GL_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,texture_id_framebuffer_depth,0);

    GLenum draw_buffers[] =
    {
        GL_COLOR_ATTACHMENT0,
        GL_COLOR_ATTACHMENT1
    };
    glDrawBuffers(2,draw_buffers);

    GLenum status=glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if(status!=GL_FRAMEBUFFER_COMPLETE)
        MessageBox(0,L"Failed to create framebuffer object",0,0);

    glBindFramebuffer(GL_FRAMEBUFFER,0);
}
It's pretty similar to most of the listings you'll find on the internet on the same topic.
Now here is my drawing code:
void window_glview::paint()
{
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);

    //DRAW TO CUSTOM FRAMEBUFFER
    glBindFramebuffer(GL_FRAMEBUFFER,buffer_id_framebuffer);
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
    glLineWidth(1.0);
    draw_viewport();
    viewport_object_count=0;
    draw_lights();
    glLineWidth(1.5);
    for (unsigned short i=0;i<mesh_count;i++)
    {
        draw_mesh(mesh_table[i],GL_TRIANGLES,false);
    }

    //DRAW TO DEFAULT
    glBindFramebuffer(GL_FRAMEBUFFER,0);

    //USE TEXTURE FROM FRAMEBUFFER COLOR_ATTACHMENT0
    glUseProgram(program_id_screen_render);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_color);

    //HERE IS A QUAD DRAWING PROCESS
    glBindBuffer(GL_ARRAY_BUFFER,buffer_id_screen_quad);
    glVertexAttribPointer(0,4,GL_FLOAT,GL_FALSE,24,0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_QUADS,0,4);

    SwapBuffers(hDC);
}
The vertex shader is simple:
#version 450
layout(location=0) in vec4 _pos;
void main(void)
{
    gl_Position=_pos;
}
The fragment shader is written to resolve the multisample texture:
#version 450
uniform sampler2DMS screen_texture;
layout(location=0) out vec4 out_color;
void main(void)
{
    ivec2 coord=ivec2(gl_FragCoord.xy);
    vec4 result=vec4(0.0);
    int i;
    for (i=0;i<4;i++)
    {
        result=max(result,texelFetch(screen_texture,coord,i));
    }
    out_color=result;
}
I end up with a black screen. If I change out_color to something like out_color=vec4(1.0,0.0,0.0,1.0) I get a red screen.
What could be going wrong?
In my framebuffer initialization function, when I pass GL_DEPTH_COMPONENT to glTexStorage2DMultisample I get an error. I decided to pass GL_DEPTH_COMPONENT16 instead and it works. Why is that?
Should I rather use a RENDERBUFFER for some of this, and if so, how can I read its contents back into a texture?

The texture with id texture_id_framebuffer_color, which is the texture you use for your final rendering, is not attached to the FBO while you render to the FBO:
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_objectid,0);
Only one texture can be attached to a given attachment point at a time. So when you specify a second texture to be attached to COLOR_ATTACHMENT0, the first one is automatically detached.
If you want to have two attachments, they will need to use different attachment points:
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT1,texture_id_framebuffer_objectid,0);
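With both attachments in place, an alternative to sampling the sampler2DMS texture in the screen pass is to resolve the multisample FBO with glBlitFramebuffer; the same blit also works if the attachments are multisample renderbuffers, which is one way to get a renderbuffer's contents into a texture (blit into a texture-backed, single-sample FBO). A minimal sketch, reusing the member names from the question:
// Resolve color attachment 0 of the multisample FBO into the default framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, buffer_id_framebuffer);
glReadBuffer(GL_COLOR_ATTACHMENT0);                  // pick which attachment to resolve
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);           // default framebuffer as destination
glBlitFramebuffer(0, 0, client_area.right, client_area.bottom,
                  0, 0, client_area.right, client_area.bottom,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);  // the multisample resolve happens here
glBindFramebuffer(GL_FRAMEBUFFER, 0);
SwapBuffers(hDC);
Note that when the source is multisampled, the source and destination rectangles must have identical sizes.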

Related

Aliasing artifacts during transparent MSAA FBO resolve

I have the following rendering flow:
Clear MSAA x8 FBO color attachment (RGBA) with {0.0f,0.0f,0.0f,0.0f}
Issue single draw call (draw rectangular shape).
Bind Resolve FBO (just single RGBA color attachment)
Blit MSAA FBO into Resolve FBO.
What I noticed is that when I clear the MSAA FBO to all zeros, the result still has aliasing artifacts along the diagonal edges.
But if I clear with {0.0f,0.0f,0.0f,1.0f}, the anti-aliasing looks OK.
To verify that the problem is with the alpha channel, I attached a color texture with GL_RGB format instead of GL_RGBA to the resolve FBO (the same as resolving into the screen framebuffer, which doesn't show this issue), and in that case the problem doesn't appear.
So my question is: why does this happen when the MSAA render target is cleared with zero alpha?
And how can I solve it, other than clearing the alpha to one? Why do I need it cleared to zero? Because this is an intermediate pass whose results are used later in alpha compositing.
Here are the important parts of the code. I omit the pure OpenGL init logic for the textures and FBO setup, as it's abstracted behind a C++ API that is well tested in production. Both FBOs use texture attachments of RGBA format.
Render loop:
const float clearColorZero[4] = { 0.0f,0.0f,0.0f,0.0f };
glClearNamedFramebufferfv(msaaFBO, GL_COLOR, 0, clearColorZero);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFBO);
glViewport(0, 0,width,height);
//Bind shader program and mesh:
BindProgram(mainProg);
BindMesh(mesh);
glProgramUniformMatrix4fv(mainProg, 0, 1, GL_FALSE,glm::value_ptr(matrixMVP));
Draw(mesh);//calls glDrawArrays
//Resolve MSAA
glBlitNamedFramebuffer(
msaaFBO,
resolveFBO,
0, 0, width, height,
0, 0, width, height,
GL_COLOR_BUFFER_BIT,GL_NEAREST);
Simple fragment shader:
#version 450 core
out vec4 o_color;
void main()
{
    o_color = vec4(1.0, 228.0/255.0, 123.0/255.0, 1.0);
}
Blit results with MSAA FBO alpha cleared to zero:
Blit results with MSAA FBO alpha cleared to one:
PS: The same happens if I resolve the MSAA manually inside a shader:
#version 450 core
#define NUM_MSAA_SAMPLES 8
layout(binding = 0) uniform sampler2DMS colorMap;
out vec4 o_color;
void main()
{
    vec4 color = texelFetch(colorMap, ivec2(gl_FragCoord.xy), 0);
    for(int i = 1; i < NUM_MSAA_SAMPLES; ++i)
    {
        vec4 samplePoint = texelFetch(colorMap, ivec2(gl_FragCoord.xy), i);
        color += samplePoint;
    }
    color /= float(NUM_MSAA_SAMPLES);
    o_color = color;
}
Here is a magnified version of the screenshots:
MSAA FBO cleared to zero:
MSAA FBO cleared to one:
I also can't see any changes even if I disable blending with glDisable(GL_BLEND).
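One plausible explanation (an assumption, not stated in the post): after the resolve, each edge texel holds color that has already been averaged against the transparent black clear, i.e. it is effectively premultiplied by its coverage, and its alpha equals that coverage. If the intermediate result is later composited with ordinary straight-alpha blending, the coverage is applied twice, which looks like aliasing; clearing alpha to one (or using an RGB target) hides this because the resolved alpha stays 1. One way to keep the alpha cleared to zero is to composite the resolved texture as premultiplied alpha; a minimal sketch, where DrawFullscreenQuad is an assumed helper, not from the post:
// Later compositing pass (assumed): draw the resolved texture over the scene.
// Resolved edge texels are effectively premultiplied by coverage, so blend
// them as premultiplied alpha rather than straight alpha.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);   // premultiplied-alpha compositing
// (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) would apply the coverage a second time
DrawFullscreenQuad(resolveTexture);            // assumed helper, not from the post
glDisable(GL_BLEND);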

OpenGL multiple texture with multiple shader programs

I am trying to make a scene in OpenGL to simulate the earth from space. I have two spheres right now, one for the earth and another, slightly bigger one for the clouds. The earth and cloud sphere objects have their own shader programs to keep it simple. The earth shader program takes 4 textures (day, night, specmap and normalmap) and the cloud shader program takes 2 textures (cloudmap and normalmap). I have an object class which has a render function, and in that function I use this logic:
//bind the current object's texture
for (GLuint i = 0; i < texIDs.size(); i++){
    glActiveTexture(GL_TEXTURE0 + i);
    if (cubemap)
        glBindTexture(GL_TEXTURE_CUBE_MAP, texIDs[i]);
    else
        glBindTexture(GL_TEXTURE_2D, texIDs[i]);
}
if (samplers.size()){
    for (GLuint i = 0; i < samplers.size(); i++){
        glUniform1i(glGetUniformLocation(program, samplers[i]), i);
    }
}
It starts from the 0th texture unit and binds N textures to N texture units, starting from GL_TEXTURE0. Then it sets the sampler uniforms in the shader program to 0 through N. The sampler names are provided by me while loading the textures:
void Object::loadTexture(const char* filename, const GLchar* sampler){
    int texID;
    texID = SOIL_load_OGL_texture(filename, SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS | SOIL_FLAG_TEXTURE_REPEATS);
    if(texID == 0){
        cerr << "SOIL error: " << SOIL_last_result();
    }
    cout << filename << " Tex ID: " << texID << endl;
    texIDs.push_back(texID);
    samplers.push_back(sampler);
    //glBindTexture(GL_TEXTURE_2D, texID);
}
When I do this, all the textures on the first sphere (earth) get loaded successfully, but on the second sphere I get no textures at all, just a black sphere. My question is: how should I manage multiple textures and samplers if I'm using a different shader program for each object?
From what I see, you are binding every texture to its own separate texture unit, and that is wrong: what if you have 100 objects and each has 4 textures? I strongly doubt that you have 400 texture units at your disposal. A texture ID (name) is not a texture unit.
I render space bodies like this. The first pass renders the astro body geometry, and I reserve specific texture units for specific tasks:
// texture units:
// 0 - texture0 map 2D rgba (surface)
// 1 - texture1 map 2D rgba (clouds blend)
// 2 - normal map 2D xyz (normal/bump mapping)
// 3 - specular map 2D i (reflection shininess)
// 4 - light map 2D rgb rgb (night lights)
// 5 - environment/skybox cube map 3D rgb
See the shader in that link (it was written for a solar system visualization too). You bind only the textures for a single body before each render of it (after you bind the shader). Do not change the texture unit meanings (how would the shader know which texture is what if you did?). The second render pass adds the atmospheres: no textures are used, it is just a single transparent quad covering the whole screen. Hopefully this gives some insight into your task.
[edit1] example of multitexturing
// init shader once per render all geometries
GLint prog_id; // shader program ID;
GLint txrskybox; // global skybox environment cube map
GLint id;
glUseProgram(prog_id);
id=glGetUniformLocation(prog_id,"txr_texture0"); glUniform1i(id,0); //uniform sampler2D txr_texture0;
id=glGetUniformLocation(prog_id,"txr_texture1"); glUniform1i(id,1); //uniform sampler2D txr_texture1;
id=glGetUniformLocation(prog_id,"txr_normal"); glUniform1i(id,2); //uniform sampler2D txr_normal;
id=glGetUniformLocation(prog_id,"txr_specular"); glUniform1i(id,3); //uniform sampler2D txr_specular;
id=glGetUniformLocation(prog_id,"txr_light"); glUniform1i(id,4); //uniform sampler2D txr_light;
id=glGetUniformLocation(prog_id,"txr_skybox"); glUniform1i(id,5); //uniform samplerCube txr_skybox;
// add here all uniforms you need ...
glActiveTexture(GL_TEXTURE0+5); glEnable(GL_TEXTURE_CUBE_MAP); glBindTexture(GL_TEXTURE_CUBE_MAP,txrskybox);
for (int i=0;i<all_objects;i++)
{
    // add here all uniforms you need ...
    // pass textures once per any object render
    // obj::(GLint) txr0,txr1,txrnor,txrspec,txrlight; // object local textures
    glActiveTexture(GL_TEXTURE0+0); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txr0);
    glActiveTexture(GL_TEXTURE0+1); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txr1);
    glActiveTexture(GL_TEXTURE0+2); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txrnor);
    glActiveTexture(GL_TEXTURE0+3); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txrspec);
    glActiveTexture(GL_TEXTURE0+4); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txrlight);
    // here render the geometry of obj[i]
}
// unbind textures and shaders
glActiveTexture(GL_TEXTURE0+5); glBindTexture(GL_TEXTURE_CUBE_MAP,0); glDisable(GL_TEXTURE_CUBE_MAP);
glActiveTexture(GL_TEXTURE0+4); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0+3); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0+2); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0+1); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0+0); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D); // unit0 at last so it stays active ...
glUseProgram(0);
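To connect this back to the question's two-program setup: set each program's sampler uniforms once after linking, then per frame bind the program and bind that sphere's textures to the agreed units before drawing it. A rough sketch, where the sampler names and texture variables are placeholders (the question doesn't show its shader code):
// GLuint earthProgram, cloudProgram;                                // linked programs from the question's setup
// GLuint earthDayTex, earthNightTex, earthSpecTex, earthNormalTex;  // placeholder texture IDs
// GLuint cloudMapTex, cloudNormalTex;

// One-time setup, per program (sampler names are assumptions):
glUseProgram(earthProgram);
glUniform1i(glGetUniformLocation(earthProgram, "dayMap"),    0);
glUniform1i(glGetUniformLocation(earthProgram, "nightMap"),  1);
glUniform1i(glGetUniformLocation(earthProgram, "specMap"),   2);
glUniform1i(glGetUniformLocation(earthProgram, "normalMap"), 3);
glUseProgram(cloudProgram);
glUniform1i(glGetUniformLocation(cloudProgram, "cloudMap"),  0);
glUniform1i(glGetUniformLocation(cloudProgram, "normalMap"), 1);

// Per frame: bind the program first, then that object's textures to the fixed units.
glUseProgram(earthProgram);
glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, earthDayTex);
glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, earthNightTex);
glActiveTexture(GL_TEXTURE2); glBindTexture(GL_TEXTURE_2D, earthSpecTex);
glActiveTexture(GL_TEXTURE3); glBindTexture(GL_TEXTURE_2D, earthNormalTex);
// ... draw the earth sphere ...

glUseProgram(cloudProgram);
glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, cloudMapTex);
glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, cloudNormalTex);
// ... draw the cloud sphere ...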

GLSL, combining 2D and 3D textures

I am trying to blend a 3D texture with a 2D one to make a terrain. The 3D texture has moss, sand, snow and the like, interpolated to enhance the illusion of height. The 2D texture currently only has an orange line across it, meant to be a "road". This is my fragment shader:
# version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
    vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
    vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
    // Yes, I am aware I am only returning the 2D texture value
    // However this is for testing purposes only
    // Doing gl_FragColor = diffuse3D + diffuse2D;
    // Or any other operation returns the 3D texture only
    gl_FragColor = diffuse2D;
}
And this is my drawing call:
void Terrain::Draw() {
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(glm::vec3), &v[0].x);
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, sizeof(glm::vec3), &n[0].x);

    s.enable(); // simple glUseProgram call within my Shader object

    glClientActiveTexture(GL_TEXTURE0);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, id_texture);
    s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
    glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);

    glClientActiveTexture(GL_TEXTURE1);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, id_texture_road);
    s.setSampler("roadTexture",1); // Same as above
    glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);

    glPushMatrix();
    glScalef(scalex,scaley,scalez);
    glDrawElements(GL_TRIANGLES, sizei, GL_UNSIGNED_INT, index);
    glPopMatrix();

    s.disable(); // glUseProgram(0)

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisable(GL_TEXTURE_3D);
    glDisable(GL_TEXTURE_2D);
}
Here is the code for my setSampler() method:
void Shader::setSampler(std::string name, GLint value)
{
    GLuint loc = glGetUniformLocation(program, name.c_str());
    if (loc>0)
    {
        glUniform1i(loc, value);
    }
}
The result is a solid black color over the whole terrain. I have sadly been unable to find information on sampler3D, but the diffuse3D variable in my fragment shader does compute to the correct texture, and my texture coordinates for the 2D texture are being correctly sent to the fragment shader (I know this because I used them to color the terrain for testing and got a smooth gradient from green to red, which is what you would expect using only the first 2 coordinates). I also checked the values passed to my setSampler() method and I do get the 0 and 1 values, and the 1 and 2 locations corresponding to them.
All of the help I can find on this issue is in the vicinity of the advice provided here, which I have already implemented.
Can anybody assist?
EDIT: So, just for kicks, I swapped my texture units so the 2D texture became unit 0 and the 3D became unit 1. Now only the 2D texture is rendered. But my texture units are passed correctly (at least in appearance) to the shader. Any clues?
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
Let's pretend that this wasn't using shaders. Let's pretend you were just writing a function in C++ that returns a value.
int FuncName(int val1, int val2)
{
    int test1 = Compute(val1);
    int test2 = Compute(val2);
    return test2;
}
What will this function return? Obviously, it returns Compute(val2), completely ignoring the value of test1. It won't magically combine test1 and test2. They're separate values, and therefore, they remain separate unless you explicitly combine them.
Just like your fragment shader.
Shaders aren't magic; they're programming. They only do what you tell them to. So if you say, "get a value from a texture and then don't do anything with it", it will dutifully do exactly that. Though odds are good that the compiler will optimize out the texture fetch entirely.
If you want a "blend" of two textures, you must blend them. You must fetch from each texture, then use both values to compute a new color.
How exactly you do that depends entirely on you. Maybe your 2D texture has some alpha that represents how much of the 2D texture to show. I don't know; you didn't describe what your texture looks like or how exactly you plan to show the road in some places and not in others.
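For example, if the road texture's alpha channel marked where the road should appear (just an assumption; the question doesn't describe the texture's contents), the last line of the fragment shader could become a blend of the two fetches, in GLSL:
// Hypothetical blend: use the 2D texture's alpha as the mix factor.
gl_FragColor = mix(diffuse3D, diffuse2D, diffuse2D.a);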
The reason you get a black color is simply that you don't set the sampler uniform variables properly.
# version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
    vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
    vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
    gl_FragColor = diffuse2D;
}
What this shader is doing is looking up the value of 'roadTexture' and displaying it. Unfortunately, it has no clue which texture unit 'roadTexture' is currently bound to, and thus will access texture unit 0, where your 3D texture is bound - so you're trying to access a 3D texture with 2D texcoords, which may well return all black. You'll need to query the uniform locations of your textures with glGetUniformLocation and then set them to the correct texture units (0 and 1, respectively) with glUniform1i.
EDIT: Also, you're using deprecated functionality, so your shader version directive should be changed to #version 420 compatibility - the default is core.
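A minimal host-side sketch of that, assuming `program` is the linked program object (as in the question's Shader class) and the 3D/2D textures stay bound to units 0 and 1:
glUseProgram(program);                     // uniforms are per-program state; bind the program first
GLint loc3D = glGetUniformLocation(program, "mainTexture");
GLint loc2D = glGetUniformLocation(program, "roadTexture");
if (loc3D != -1) glUniform1i(loc3D, 0);    // 0 is a valid location; only -1 means "not found"
if (loc2D != -1) glUniform1i(loc2D, 1);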
You need to call glEnableClientState(GL_TEXTURE_COORD_ARRAY); again after you have made the second texture unit active with glClientActiveTexture(GL_TEXTURE1);
From http://www.opengl.org/sdk/docs/man2/xhtml/glEnableClientState.xml:
enabling and disabling GL_TEXTURE_COORD_ARRAY affects the active client texture unit.
Just solved this problem. Apparently you still need glActiveTexture() in addition to glClientActiveTexture(). This is the code that worked, for anyone who runs into the same problem:
glClientActiveTexture(GL_TEXTURE0);
glActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
glClientActiveTexture(GL_TEXTURE1);
glActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture",1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
// Drawing Calls
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glActiveTexture(GL_TEXTURE0);
Thanks for reading.

OpenGL, Shader Model 3.3 Texturing: Black Textures?

I've been banging my head against this for hours now; I'm sure it's something simple, but I just can't get a result. I've had to edit this code down a bit because I've built a little library to encapsulate the OpenGL calls, but the following is an accurate description of the state of affairs.
I'm using the following vertex shader:
#version 330
in vec4 position;
in vec2 uv;
out vec2 varying_uv;
void main(void)
{
    gl_Position = position;
    varying_uv = uv;
}
And the following fragment shader:
#version 330
in vec2 varying_uv;
uniform sampler2D base_texture;
out vec4 fragment_colour;
void main(void)
{
    fragment_colour = texture2D(base_texture, varying_uv);
}
Both shaders compile and the program links without issue.
In my init section, I load a single texture like so:
// Check for errors.
kt::kits::open_gl::Core<QString>::throw_on_error();

// Load an image.
QImage image("G:/test_image.png");
image = image.convertToFormat(QImage::Format_RGB888);

if(!image.isNull())
{
    // Load up a single texture.
    glGenTextures(1, &Texture);
    glBindTexture(GL_TEXTURE_2D, Texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, image.width(), image.height(), 0, GL_RGB, GL_UNSIGNED_BYTE, image.constBits());
    glBindTexture(GL_TEXTURE_2D, 0);
}

// Check for errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
You'll observe that I'm using Qt to load the texture. The calls to ::throw_on_error() check for errors in OpenGL (by calling Error()), and throw an exception if one occurs. No OpenGL errors occur in this code, and the image loaded using Qt is valid.
Drawing is performed as follows:
// Clear previous.
glClear(GL_COLOR_BUFFER_BIT |
GL_DEPTH_BUFFER_BIT |
GL_STENCIL_BUFFER_BIT);
// Use our program.
glUseProgram(GLProgram);
// Bind the vertex array.
glBindVertexArray(GLVertexArray);
/* ------------------ Setting active texture here ------------------- */
// Tell the shader which textures are which.
kt::kits::open_gl::gl_int tAddr = glGetUniformLocation(GLProgram, "base_texture");
glUniform1i(tAddr, 0);
// Activate the texture Texture(0) as texture 0.
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, Texture);
/* ------------------------------------------------------------------ */
// Draw vertex array as triangles.
glDrawArrays(GL_TRIANGLES, 0, 4);
glBindVertexArray(0);
glUseProgram(0);
// Detect errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
Similarly, no OpenGL errors occur, and a triangle is drawn to screen. However, it looks like this:
It occurred to me that the problem may be related to my texture coordinates. So, I rendered the following image using s as the 'red' component and t as the 'green' component:
The texture coordinates appear correct, yet I'm still getting the black triangle of doom. What am I doing wrong?
I think it could be due to an incomplete initialization of your texture object.
Try initializing the texture MIN and MAG filters:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Moreover, I would suggest checking the size of the texture. If it is not a power of 2, then you have to set the wrapping mode to CLAMP_TO_EDGE:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
Black textures are very often due to this issue; it's a common problem.
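Put together with the loading code from the question, a sketch of a setup that avoids the incomplete-texture case (the default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, so a texture without mipmaps is incomplete and samples as black):
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, image.width(), image.height(), 0, GL_RGB, GL_UNSIGNED_BYTE, image.constBits());
// Either generate mipmaps so the default min filter can work...
glGenerateMipmap(GL_TEXTURE_2D);
// ...or switch to filters that don't need them:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);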
Ciao
In your fragment shader you're writing to a self-defined output:
fragment_colour = texture2D(base_texture, varying_uv);
Since that's not gl_FragColor or gl_FragData[…], did you properly set the designated fragment data location?
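For reference, in GLSL 3.30 a single user-defined out variable is normally assigned to color number 0 automatically, so this is usually not the culprit, but it can be pinned down explicitly, either in the shader with layout(location = 0) out vec4 fragment_colour; or from the application before linking. A sketch of the latter, assuming the shader objects are still attached to GLProgram so it can be re-linked:
// Bind the user-defined fragment output to draw buffer 0, then (re)link the program.
glBindFragDataLocation(GLProgram, 0, "fragment_colour");
glLinkProgram(GLProgram);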

Texture rendering and VBO's [OpenGL/SDL/C++]

So, I've been working on a little game project for a bit and I've hit a snag that's annoying me to no end. I load an obj file which then gets rendered after being put into a VBO. This part works fine, no problemo. However, I've been trying to get it to render the accompanying texture with the supplied UVs, with no success. Currently, I just get a matte green colouration on my model. Upon investigating it in GDE, I've seen that the texture gets loaded fine and occupies the GL_TEXTURE0 unit, so that's not the issue. I believe it may be my binding, but I have no idea why it would fail...
void Model_Man::render_models()
{
    for(int x=0; x<models.size(); x++)
    {
        if(models.at(x).visible==true)
        {
            glBindBuffer(GL_ARRAY_BUFFER,models.at(x).t_buff);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,models.at(x).i_buff);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT,0,0);

            glClientActiveTexture(GL_TEXTURE0);
            glTexCoordPointer(2,GL_FLOAT,0,&models.at(x).uvs[0]);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);

            glActiveTexture(GL_TEXTURE0);
            int tex_loc = glGetUniformLocation(models.at(x).shaderid,"color_texture");
            glUniform1i(tex_loc,GL_TEXTURE0);
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, models.at(x).mats.at(0).texid);

            c_render.use_program(models.at(x).shaderid);
            glDrawElements(GL_TRIANGLES,models.at(x).f_index.size()*3,GL_UNSIGNED_INT,0);
            c_render.use_program();

            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisable(GL_TEXTURE_2D);
        }
    }
}
And my shader files...
Shader.frag
uniform sampler2D color_texture;
void main() {
    // Set the output color of our current pixel
    gl_FragColor = texture2D(color_texture, gl_TexCoord[0].st);
}
Shader.vert
void main() {
    gl_TexCoord[0] = gl_MultiTexCoord0;
    // Set the position of the current vertex
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
And yes, I know I'm currently being horribly inefficient with my render loop :P, but I'm already planning on refactoring it; I am just attempting to get this single model to draw correctly with everything I'm aiming to do. I have no clue why it wouldn't render with the texture correctly applied - unless it's because I need to interleave my arrays, but I'm still supplying it with UV data, so I don't see why it fails.
The call that sets the sampler uniform should not pass GL_TEXTURE0, but the texture unit index 0.
Indeed:
glUniform1i(location, 0)
To set up a sampler uniform, do:
glUseProgram(progId);
// ...
glActiveTexture(GL_TEXTURE0 + texUnit);
glBindTexture(GL_TEXTURE_2D, texId);
glUniform1i(location, texUnit);
The main concept is that uniform variables are shader program state (they are maintained until you re-link the program or reset the uniform value). Without binding a program, glUniform1i will fail, since there is no shader program on which it can set the uniform value!
As general advice, call glGetError after each OpenGL call to detect these conditions. Most of those calls can be removed by the preprocessor in a release build.
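A minimal sketch of that kind of check, wrapped in a macro so it compiles away in release builds (GL_CHECK is just an example name, not an existing API):
#include <cstdio>   // assumes an OpenGL header (GLenum, glGetError) is already included

#ifndef NDEBUG
    #define GL_CHECK(call)                                                   \
        do {                                                                 \
            call;                                                            \
            for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )          \
                std::fprintf(stderr, "GL error 0x%04X after %s (%s:%d)\n",   \
                             err, #call, __FILE__, __LINE__);                \
        } while (0)
#else
    #define GL_CHECK(call) call
#endif

// Usage, e.g. inside render_models():
// GL_CHECK(glUniform1i(tex_loc, 0));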
Well, it turned out that the big issue was that while I was binding a texture, I wasn't actually setting it up in a way that made it understood as the one in use. Calling glClientActiveTexture(GL_TEXTURE0 + texUnit); in combination with glActiveTexture() ended up being the final solution.