OpenGL - blend two textures on the same object - c++

I want to apply two textures on the same object (actually just a 2D rectangle) in order to blend them. I thought I would achieve that by simply calling glDrawElements with the first texture, then binding the other texture and calling glDrawElements a second time. Like this:
//create vertex buffer, frame buffer, depth buffer, texture sampler, build and bind model
//...
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ZERO);
glBlendEquation(GL_FUNC_ADD);
// Clear the screen
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Bind our texture in Texture Unit 0
GLuint textureID;
//create or load texture
//...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
// Set our sampler to use Texture Unit 0
glUniform1i(textureSampler, 0);
// Draw the triangles !
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);
//second draw call
GLuint textureID2;
//create or load texture
//...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID2);
// Set our sampler to use Texture Unit 0
glUniform1i(textureSampler, 0);
// Draw the triangles !
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);
Unfortunately, the 2nd texture is not drawn at all and I only see the first texture. If I call glClear between the two draw calls, it correctly draws the 2nd texture.
Any pointers? How can I force OpenGL to draw on the second call?

As an alternative to the approach you have followed so far, I would like to suggest using two texture samplers within your GLSL shader and performing the blending there. This way, you would be done with just one draw call, thus reducing CPU/GPU interaction. To do so, just define two texture samplers in your shader, like
layout(binding = 0) uniform sampler2D texture_0;
layout(binding = 1) uniform sampler2D texture_1;
Alternatively, you can use a sampler array:
layout(binding = 0) uniform sampler2DArray textures;
In your application, set up the textures and samplers using
enum Sampler_Unit{BASE_COLOR_S = GL_TEXTURE0 + 0, NORMAL_S = GL_TEXTURE0 + 1};
glActiveTexture(Sampler_Unit::BASE_COLOR_S);
glBindTexture(GL_TEXTURE_2D, textureBuffer1);
glTexStorage2D( ....)
glActiveTexture(Sampler_Unit::NORMAL_S);
glBindTexture(GL_TEXTURE_2D, textureBuffer2);
glTexStorage2D( ....)
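The blending itself then happens in the fragment shader. A minimal sketch, assuming the two-sampler variant above and a texture coordinate input called uv (the variable names and the constant blend factor are placeholders, not taken from the question):
#version 420
layout(binding = 0) uniform sampler2D texture_0;
layout(binding = 1) uniform sampler2D texture_1;
in vec2 uv;          // texture coordinate from the vertex shader (assumed name)
out vec4 fragColor;
void main()
{
    vec4 c0 = texture(texture_0, uv);
    vec4 c1 = texture(texture_1, uv);
    // combine both samples; replace the constant with c1.a or a uniform as needed
    fragColor = mix(c0, c1, 0.5);
}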

Thanks to @tkausl for the tip.
I had depth testing enabled during the initialization phase.
// Enable depth test
glEnable(GL_DEPTH_TEST);
// Accept fragment if it closer to the camera than the former one
glDepthFunc(GL_LESS);
The option needs to be disabled in my case, for the blend operation to work.
//make sure to disable depth test
glDisable(GL_DEPTH_TEST);
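A side note, not part of the original fix: if you need depth testing for the rest of the scene, you can instead keep it enabled and relax the comparison, so that the second draw of the same, coplanar rectangle is not rejected:
glEnable(GL_DEPTH_TEST);
// GL_LEQUAL lets fragments with equal depth through, so drawing the
// same rectangle a second time still passes the depth test
glDepthFunc(GL_LEQUAL);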

Related

FBO and Textures

I have attached two textures to an FBO. The first texture will be used to show a depth map. The second texture will show the object in a normal way.
If I do this, it works well and shows me the depth map.
GLuint atach0 = GL_DEPTH_ATTACHMENT;
glBindFramebuffer(GL_FRAMEBUFFER,fboBuffer);
glDrawBuffers(1,&atach0);
glClear(GL_DEPTH_BUFFER_BIT);
glViewport(0.0,0.0,640,480);
LProjection = glm::ortho(-10.0f,10.0f,-10.0f,10.0f,-500.0f,500.0f);
LView = glm::lookAt(glm::vec3(0.0f,15.0f,0.000001f),glm::vec3(0.0f,0.0f,0.0f),glm::vec3(0.0f,1.0f,0.0f));
LViewProjection = LProjection * LView;
glEnable(GL_DEPTH_TEST);
...CUBE3D
glBindFramebuffer(GL_FRAMEBUFFER,0);
glUniform1i(uniforTEX,0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,GL_DEPTH_MAP);
...DRAW-DEPTH-MAP!
glBindTexture(GL_TEXTURE_2D,0);
But if I do this, it only shows a white screen and the depth map is no longer visible.
GLuint atach0 = GL_DEPTH_ATTACHMENT;
glBindFramebuffer(GL_FRAMEBUFFER,fboBuffer);
glDrawBuffers(1,&atach0);
glClear(GL_DEPTH_BUFFER_BIT);
glViewport(auxrecX,auxrecY,auxrcAn,auxrcAl);
LProjection = glm::ortho(-10.0f,10.0f,-10.0f,10.0f,-500.0f,500.0f);
LView = glm::lookAt(glm::vec3(0.0f,15.0f,0.000001f),glm::vec3(0.0f,0.0f,0.0f),glm::vec3(0.0f,1.0f,0.0f));
LViewProjection = LProjection * LView;
glEnable(GL_DEPTH_TEST);
...CUBE3D
glBindFramebuffer(GL_FRAMEBUFFER,0);
GLuint atach1 = GL_COLOR_ATTACHMENT0;
glBindFramebuffer(GL_FRAMEBUFFER,fboBuffer);
glDrawBuffers(1,&atach1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0,0,640,480);
Projection = glm::perspective(45.0f,1.333333f,0.01f,1000.0f);
View = glm::lookAt(glm::vec3(0.0f,0.0f,3.0),glm::vec3(0.0f,0.0f,-15.0f),glm::vec3(0.0f,1.0f,0.0f));
ViewProjection = Projection * View;
glEnable(GL_DEPTH_TEST);
...CUBE3D
glBindFramebuffer(GL_FRAMEBUFFER,0);
glUniform1i(uniforTEX,0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,GL_DEPTH_MAP);
...DRAWS NOTHING!
glBindTexture(GL_TEXTURE_2D,0);
I need to do it in that order. Because the first "pass" is for the use of the depth map in a shadow map. What am I doing wrong?
GL_DEPTH_ATTACHMENT is not a valid argument for glDrawBuffers. glDrawBuffers specifies the list of color buffers into which the outputs of the fragment shader will be written.
When you do
GLuint atach0 = GL_DEPTH_ATTACHMENT;
glBindFramebuffer(GL_FRAMEBUFFER,fboBuffer);
glDrawBuffers(1,&atach0);
then you'll get a GL_INVALID_ENUM error.
If you don't want to write to the depth map, then disable the depth test or use glDepthMask. The depth buffer attachment can't be switched on and off, and it can't be used somehow like a color buffer.
Further
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
will clear the depth buffer in any case.
I think there is a basic misunderstanding of how the framebuffers for a shadow map have to be set up. You need a framebuffer for the shadow map only, and this framebuffer has to have a depth buffer only.
You have to do it somehow like this:
GLuint fboShadow; // framebuffer with depth buffer only (shadow map)
GLuint toShadowDepth; // texture which is the depth buffer of `fboShadow`
glBindFramebuffer(GL_FRAMEBUFFER, fboShadow);
GLuint atach0 = GL_NONE;
glDrawBuffers(1, &atach0);
glClear(GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glViewport(auxrecX, auxrecY, auxrcAn, auxrcAl);
// draw the shadow map to the depth buffer only
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glViewport(0, 0, 640, 480);
glUniform1i(uniforTEX, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, toShadowDepth);
// draw the geometry to the default framebuffer by using the depth map
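For completeness, a depth-only framebuffer of the kind described above could be created roughly like this (the size and names are assumptions, not taken from the question):
GLuint toShadowDepth = 0;
glGenTextures(1, &toShadowDepth);
glBindTexture(GL_TEXTURE_2D, toShadowDepth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
GLuint fboShadow = 0;
glGenFramebuffers(1, &fboShadow);
glBindFramebuffer(GL_FRAMEBUFFER, fboShadow);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, toShadowDepth, 0);
glDrawBuffer(GL_NONE);   // no color attachment, so no color output
glReadBuffer(GL_NONE);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; // handle the error
glBindFramebuffer(GL_FRAMEBUFFER, 0);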

Writing and reading from the same texture for an iterative DE solver on OpenGL

I am trying to write a fluid simulator that requires iteratively solving some differential equations (the Lattice-Boltzmann Method), and I want it to be a real-time graphical visualisation using OpenGL. I ran into a problem. I use a shader to perform the relevant calculations on the GPU. What I want is to pass the texture describing the state of the system at time t into the shader, have the shader perform the calculation and return the state of the system at time t+dt, render that texture on a quad, and then pass the texture back into the shader. However, I found that I cannot read from and write to the same texture at the same time. But I am sure I have seen implementations of such calculations on the GPU. How do they work around it? I think I saw a few discussions on ways of working around the fact that OpenGL can't read and write the same texture, but I could not quite understand them and adapt them to my case. To render to texture I use: glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, renderedTexture, 0);
Here is my rendering routine:
do{
//count frames
frame_counter++;
// Render to our framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
glViewport(0,0,windowWidth,windowHeight); // Render on the whole framebuffer, complete from the lower left corner to the upper right
// Clear the screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Use our shader
glUseProgram(programID);
// Bind our texture in Texture Unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderTexture);
glUniform1i(TextureID, 0);
printf("Inv Width: %f", (float)1.0/windowWidth);
//Pass inverse widths (put outside of the cycle in future)
glUniform1f(invWidthID, (float)1.0/windowWidth);
glUniform1f(invHeightID, (float)1.0/windowHeight);
// 1rst attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangles !
glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles
glDisableVertexAttribArray(0);
// Render to the screen
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Render on the whole framebuffer, complete from the lower left corner to the upper right
glViewport(0,0,windowWidth,windowHeight);
// Clear the screen
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Use our shader
glUseProgram(quad_programID);
// Bind our texture in Texture Unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
// Set our "renderedTexture" sampler to user Texture Unit 0
glUniform1i(texID, 0);
glUniform1f(timeID, (float)(glfwGetTime()*10.0f) );
// 1rst attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangles !
glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles
glDisableVertexAttribArray(0);
glReadBuffer(GL_BACK);
glBindTexture(GL_TEXTURE_2D, sourceTexture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, windowWidth, windowHeight, 0);
// Swap buffers
glfwSwapBuffers(window);
glfwPollEvents();
}
What happens now is that when I render to the framebuffer, the texture I get as an input is empty, I think. But when I render the same texture on screen, it successfully renders what I expect.
Okay, I think I've managed to figure something out. Instead of rendering to a framebuffer, what I can do is use glCopyTexImage2D to copy whatever got rendered on the screen into a texture. Now, however, I have another issue: I can't work out whether glCopyTexImage2D will work with a framebuffer. It works with on-screen rendering, but I am failing to get it to work when I am rendering to a framebuffer. I'm not sure if this is even possible in the first place. I made a separate question on this:
Does glCopyTexImage2D work when rendering offscreen?
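A note that is not part of the original post: the usual workaround for the read/write restriction described above is ping-pong rendering - keep two textures, each attached to its own framebuffer, read from one while writing to the other, and swap the roles every iteration. A rough sketch, assuming the two textures and framebuffers have already been created:
bool running = true;            // loop condition (placeholder)
GLuint tex[2] = {0, 0};         // two textures, each attached to its own FBO (created elsewhere)
GLuint fbo[2] = {0, 0};
int src = 0, dst = 1;
while (running)
{
    // update pass: read the state at time t from tex[src], write t+dt into tex[dst]
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex[src]);
    // ... draw the full-screen quad with the update shader ...

    // display pass: show the freshly written state on screen
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, tex[dst]);
    // ... draw the quad with the display shader, then swap buffers ...

    // the texture just written becomes the next iteration's input
    int tmp = src; src = dst; dst = tmp;
}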

OpenGL multiple texture with multiple shader programs

I am trying to do a scene in OpenGL to simulate earth from space. I have two spheres right now, one for earth, and another slightly bigger one for clouds. The earth and cloud sphere objects have their own shader programs to keep it simple. The earth shader program takes 4 textures (day, night, specmap and normalmap) and the cloud shader program takes 2 textures (cloudmap and normalmap). I have an object class which has a render function, and in that function I use this logic:
//bind the current object's texture
for (GLuint i = 0; i < texIDs.size(); i++){
glActiveTexture(GL_TEXTURE0 + i);
if (cubemap)
glBindTexture(GL_TEXTURE_CUBE_MAP, texIDs[i]);
else
glBindTexture(GL_TEXTURE_2D, texIDs[i]);
}
if (samplers.size()){
for (GLuint i = 0; i < samplers.size(); i++){
glUniform1i(glGetUniformLocation(program, samplers[i]), i);
}
}
It starts from the 0th texture unit and binds N textures to N texture units starting from GL_TEXTURE0. Then it binds the samplers from 0 to N in the shader program. The samplers are provided by me while loading the textures:
void Object::loadTexture(const char* filename, const GLchar* sampler){
int texID;
texID = SOIL_load_OGL_texture(filename, SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS | SOIL_FLAG_TEXTURE_REPEATS);
if(texID == 0){
cerr << "SOIL error: " << SOIL_last_result();
}
cout << filename << " Tex ID: " << texID << endl;
texIDs.push_back(texID);
samplers.push_back(sampler);
//glBindTexture(GL_TEXTURE_2D, texID);
}
When I do this, all the textures on the first sphere (earth) get loaded successfully, but on the second sphere I get no textures and I just get a black sphere. My query is: how should I manage multiple textures and samplers if I'm using different shader programs for each object?
From what I see, you are binding all textures to separate texture units.
That is wrong.
What if you have 100 objects and each has 4 textures ...
I strongly doubt that you have 400 texture units at your disposal.
A texture ID (name) is not a texture unit ...
I render space bodies like this:
First pass renders the astro body geometry
I have specific texture units for specific tasks
// texture units:
// 0 - texture0 map 2D rgba (surface)
// 1 - texture1 map 2D rgba (clouds blend)
// 2 - normal map 2D xyz (normal/bump mapping)
// 3 - specular map 2D i (reflection shininess)
// 4 - light map 2D rgb rgb (night lights)
// 5 - environment/skybox cube map 3D rgb
see the shader in that link (it was written for the solar system visualization too)...
You bind only the textures for a single body before each render of it
(after you bind the shader).
Do not change the texture unit meanings (how would the shader know which texture is what if you did?).
Second render pass adds atmospheres.
No textures are used; it is just a single transparent quad covering the whole screen.
Here are some insights into your task.
[edit1] example of multitexturing
// init shader once per render all geometries
GLint prog_id; // shader program ID;
GLint txrskybox; // global skybox environment cube map
GLint id;
glUseProgram(prog_id);
id=glGetUniformLocation(prog_id,"txr_texture0"); glUniform1i(id,0); //uniform sampler2D txr_texture0;
id=glGetUniformLocation(prog_id,"txr_texture1"); glUniform1i(id,1); //uniform sampler2D txr_texture1;
id=glGetUniformLocation(prog_id,"txr_normal"); glUniform1i(id,2); //uniform sampler2D txr_normal;
id=glGetUniformLocation(prog_id,"txr_specular"); glUniform1i(id,3); //uniform sampler2D txr_specular;
id=glGetUniformLocation(prog_id,"txr_light"); glUniform1i(id,4); //uniform sampler2D txr_light;
id=glGetUniformLocation(prog_id,"txr_skybox"); glUniform1i(id,5); //uniform samplerCube txr_skybox;
// add here all uniforms you need ...
glActiveTexture(GL_TEXTURE0+5); glEnable(GL_TEXTURE_CUBE_MAP); glBindTexture(GL_TEXTURE_CUBE_MAP,txrskybox);
for (i=0;i<all_objects;i++)
{
// add here all uniforms you need ...
// pass textures once per any object render
// obj::(GLint) txr0,txr1,txrnor,txrspec,txrlight; // object local textures
glActiveTexture(GL_TEXTURE0+0); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txr0);
glActiveTexture(GL_TEXTURE0+1); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txr1);
glActiveTexture(GL_TEXTURE0+2); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txrnor);
glActiveTexture(GL_TEXTURE0+3); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txrspec);
glActiveTexture(GL_TEXTURE0+4); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txrlight);
// here render the geometry of obj[i]
}
// unbind textures and shaders
glActiveTexture(GL_TEXTURE0+5); glBindTexture(GL_TEXTURE_CUBE_MAP,0); glDisable(GL_TEXTURE_CUBE_MAP);
glActiveTexture(GL_TEXTURE0+4); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0+3); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0+2); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0+1); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0+0); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D); // unit0 at last so it stays active ...
glUseProgram(0);

Deferred Rendering Skybox OpenGL

I've just implemented deferred rendering and am having trouble getting my skybox working. I try rendering my skybox at the very end of my rendering loop and all I get is a black screen. Here's the rendering loop:
//binds the fbo
gBuffer.Bind();
//the shader that writes info to gbuffer
geometryPass.Bind();
glDepthMask(GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
//draw geometry
geometryPass.SetUniform("model", transform.GetModel());
geometryPass.SetUniform("mvp", camera.GetViewProjection() * transform.GetModel());
mesh3.Draw();
geometryPass.SetUniform("model", transform2.GetModel());
geometryPass.SetUniform("mvp", camera.GetViewProjection() * transform2.GetModel());
sphere.Draw();
glDepthMask(GL_FALSE);
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT);
//shader that calculates lighting
pointLightPass.Bind();
pointLightPass.SetUniform("cameraPos", camera.GetTransform().GetPosition());
for (int i = 0; i < 2; i++)
{
pointLightPass.SetUniformPointLight("light", pointLights[i]);
pointLightPass.SetUniform("mvp", glm::mat4(1.0f));
//skybox.GetCubeMap()->Bind(9);
quad.Draw();
}
//draw skybox
glEnable(GL_DEPTH_TEST);
skybox.Render(camera);
window.Update();
window.SwapBuffers();
The following is the skybox's render function
glCullFace(GL_FRONT);
glDepthFunc(GL_LEQUAL);
m_transform.SetPosition(camera.GetTransform().GetPosition());
m_shader->Bind();
m_shader->SetUniform("mvp", camera.GetViewProjection() * m_transform.GetModel());
m_shader->SetUniform("cubeMap", 0);
m_cubeMap->Bind(0);
m_cubeMesh->Draw();
glDepthFunc(GL_LESS);
glCullFace(GL_BACK);
And here is the skybox's vertex shader:
layout (location = 0) in vec3 position;
out vec3 TexCoord;
uniform mat4 mvp;
void main()
{
vec4 pos = mvp * vec4(position, 1.0);
gl_Position = pos.xyww;
TexCoord = position;
}
The skybox's fragment shader just sets the output color to texture(cubeMap, TexCoord).
As you can see from the vertex shader, I'm setting the position's z component to w so that it will always have a depth of 1. I am also setting the depth function to GL_LEQUAL so that it won't automatically fail the depth test. Shouldn't this only draw the skybox in places where other objects weren't already drawn? Why does it result in a black screen?
I know I have set up the skybox correctly because if I just draw the skybox by itself it shows up just fine.
I can briefly see for a split second the geometry that should be drawn before the skybox is drawn on top of everything.
Since you're using double buffering, seeing different things must be due to a different frame being drawn. The depth buffer in the default framebuffer isn't being cleared, which I believe is the cause of the temporal instability at least.
In your case, you want the default depth buffer to be the same as the GBuffer when you draw the skybox. A quick way to achieve this is with glBlitFramebuffer, also avoiding the need to clear it:
glBindFramebuffer(GL_READ_FRAMEBUFFER, gbuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(..., GL_DEPTH_BUFFER_BIT, ...);
Now to explain the black screen when the skybox fills the screen. Without the depth test, of course the skybox just draws. With the depth test, the skybox still draws on the first frame, but shortly after, the second frame clears only the colour buffer. The depth buffer still contains stale skybox values, so it does not get re-drawn for this frame and you're left with black...
However, your geometry pass draws without depth testing enabled, so this should still be visible even if the skybox isn't. Also, this would only happen with GL_LESS, and you have GL_LEQUAL. And you have glDepthMask set to false, which means nothing should write to the default depth buffer in your code. This points to the depth buffer containing other values, perhaps uninitialized, but in my experience it's initially zero. Also, this still happens when the skybox doesn't fill the screen and is drawn as a cube away from the camera, which blows away that argument. Now, perhaps if the geometry failed to draw in the second frame, that would explain it. For that matter, blatant driver bugs would too, but I'm not seeing any problems in the given code.
TLDR: Many unexplained things, so I tried it myself and can't reproduce your problem...
Here's a quick example based on your code and it works fine for me...
(green sphere is the geometry, red cube is the skybox)
gl_Position = pos:
Note the yellow from additive blending even if the skybox is drawn over the top. I would have thought you'd be seeing this too.
gl_Position = pos.xyww:
Now for the code...
//I haven't enabled back face culling, but that shouldn't affect anything
//binds the fbo
fbo.bind();
//the shader that writes info to gbuffer
//geometryPass.Bind(); //fixed pipeline for now
glDepthMask(GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
glColor3f(0,1,0);
fly.uploadCamera(); //glLoadMatrixf
sphere.draw();
glDepthMask(GL_FALSE);
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
fbo.unbind(); //glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT);
//shader that calculates lighting
drawtex.use();
//pointLightPass.SetUniform("cameraPos", camera.GetTransform().GetPosition());
drawtex.set("tex", *(Texture2D*)fbo.colour[0]);
for (int i = 0; i < 2; i++)
{
//pointLightPass.SetUniformPointLight("light", pointLights[i]);
//pointLightPass.SetUniform("mvp", glm::mat4(1.0f));
//skybox.GetCubeMap()->Bind(9);
drawtex.set("modelviewMat", mat44::identity());
quad.draw();
}
drawtex.unuse();
//draw skybox
glEnable(GL_DEPTH_TEST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, fbo.size.x, fbo.size.y, 0, 0, fbo.size.x, fbo.size.y, GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
//glCullFace(GL_FRONT);
glDepthFunc(GL_LEQUAL);
//m_transform.SetPosition(camera.GetTransform().GetPosition());
skybox.use();
skybox.set("mvp", fly.camera.getProjection() * fly.camera.getInverse() * mat44::translate(1,0,0));
//m_shader->SetUniform("mvp", camera.GetViewProjection() * m_transform.GetModel());
//m_shader->SetUniform("cubeMap", 0);
//m_cubeMap->Bind(0);
cube.draw();
skybox.unuse();
glDepthFunc(GL_LESS);
//glCullFace(GL_BACK);
//window.Update();
//window.SwapBuffers();

GLSL, combining 2D and 3D textures

I am trying to blend a 3D texture with a 2D one to make a terrain. The 3D texture has moss, sand, snow and the like, interpolated to enhance the illusion of heights. The 2D texture currently only has an orange line across meant to be a "road". This is my fragment shader:
# version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
// Yes, I am aware I am only returning the 2D texture value
// However this is for testing purposes only
// Doing gl_FragColor = diffuse3D + diffuse2D;
// Or any other operation returns the 3D texture only
gl_FragColor = diffuse2D;
}
And this is my drawing call:
void Terrain::Draw() {
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(glm::vec3), &v[0].x);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(glm::vec3), &n[0].x);
s.enable(); // simple glUseProgram call within my Shader object
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_3D);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
glClientActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture",1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
glPushMatrix();
glScalef(scalex,scaley,scalez);
glDrawElements(GL_TRIANGLES, sizei, GL_UNSIGNED_INT, index);
glPopMatrix();
s.disable(); // glUseProgram(0)
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_3D);
glDisable(GL_TEXTURE_2D);
}
Here is the code for my setSampler() method:
void Shader::setSampler(std::string name, GLint value)
{
GLuint loc = glGetUniformLocation(program, name.c_str());
if (loc>0)
{
glUniform1i(loc, value);
}
}
The result is a solid black color over the whole terrain. I have sadly been unable to find information on sampler3D, but the diffuse3D variable in my fragment shader does compute to the correct texture, and my texture coordinates for the 2D texture are being correctly sent to the fragment shader (I know this because I used them to color the terrain for testing and got a smooth gradient from green to red, which is what you would expect using only the first 2 coordinates). I also checked the values passed to my setSampler() method and I do get 0 and 1, and the locations 1 and 2 corresponding to them.
All of the help I can find on this issue is in the vicinity of the advice provided here (which I have already implemented).
Can anybody assist?
EDIT: So, just for kicks, I swapped my texture units so the 2D texture became unit 0 and the 3D became unit 1. Now only the 2D texture is rendered. But my texture units are passed correctly (at least in appearance) to the shader. Any clues?
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
Let's pretend that this wasn't using shaders. Let's pretend you were just writing a function in C++ that returns a value.
int FuncName(int val1, int val2)
{
int test1 = Compute(val1);
int test2 = Compute(val2);
return test2;
}
What will this function return? Obviously, it returns Compute(val2), completely ignoring the value of test1. It won't magically combine test1 and test2. They're separate values, and therefore, they remain separate unless you explicitly combine them.
Just like your fragment shader.
Shaders aren't magic; they're programming. They only do what you tell them to. So if you say, "get a value from a texture and then don't do anything with it", it will dutifully do exactly that. Though odds are good that the compiler will optimize out the texture fetch entirely.
If you want a "blend" of two textures, you must blend them. You must fetch from each texture, then use both values to compute a new color.
How exactly you do that depends entirely on you. Maybe your 2D texture has some alpha that represents how much of the 2D texture to show. I don't know; you didn't describe what your texture looks like or how exactly you plan to show the road in some places and not in others.
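For instance, if the road texture's alpha channel marked where the road is (an assumption; the question doesn't say how the road coverage is encoded), the combination could look like this:
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
// show the road where its alpha is high, the terrain elsewhere
gl_FragColor = mix(diffuse3D, diffuse2D, diffuse2D.a);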
The reason you get a black color is simply that you don't set the proper uniform variables.
# version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
}
What this shader is doing is looking up the value of 'roadTexture' and displaying it. Unfortunately, it has no clue which texture unit 'roadTexture' is currently bound to, and thus will access texture unit 0, where your 3D texture is bound - so you're trying to access a 3D texture with 2D texcoords, which may well return all black. You'll need to query the uniform locations of your textures with glGetUniformLocation and then set them to the correct texture units (0/1, respectively) with glUniform1i.
EDIT: Also, you're using deprecated functionality, so your shader version directive should be changed to #version 420 compatibility - the default is core.
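Concretely, that amounts to something like the following. Note that glGetUniformLocation returns -1 (not 0) when a name is not found, so a check of the form loc > 0, as in the question's setSampler(), silently skips any uniform that happens to land at location 0:
// after glUseProgram(program)
GLint loc = glGetUniformLocation(program, "mainTexture");
if (loc != -1) glUniform1i(loc, 0);   // 3D texture bound to texture unit 0
loc = glGetUniformLocation(program, "roadTexture");
if (loc != -1) glUniform1i(loc, 1);   // 2D road texture bound to texture unit 1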
You need to call glEnableClientState(GL_TEXTURE_COORD_ARRAY); again after you have made the second texture unit active with glClientActiveTexture(GL_TEXTURE1);
from http://www.opengl.org/sdk/docs/man2/xhtml/glEnableClientState.xml
enabling and disabling GL_TEXTURE_COORD_ARRAY affects the active client texture unit.
Just solved this problem. Apparently you still need glActiveTexture() in addition to glClientActiveTexture(). This was the code that worked, for anyone who gets the same problem:
glClientActiveTexture(GL_TEXTURE0);
glActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
glClientActiveTexture(GL_TEXTURE1);
glActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture",1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
// Drawing Calls
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glActiveTexture(GL_TEXTURE0);
Thanks for reading.