I'm going a bit nuts over this, since I can't tell what's wrong and what isn't. Either I've vastly misunderstood something, or there is a bug in the code or in the driver. I'm running this on an AMD Radeon 5850 with the latest Catalyst beta drivers as of last week.
OK, I started on an OIT rendering implementation and wanted to use an array of structs stored in a shader storage buffer object. The indices in it were advancing through memory all wrong, and I pretty much assumed it was a driver bug, since support for such things was only added recently and, yeah, it's a beta driver.
So I stepped back a notch and used GLSL images backed by texture buffer objects instead, which I figure have been supported for a while now.
That still didn't behave correctly, so I created a simple test project, fumbled around a bit, and now I think I've pinpointed where the problem is.
OK! First I initialize the buffer and texture.
//Clearcolor and Cleardepth setup, disabling of depth test, compile and link shaderprogram etc.
...
//
GLuint tbo, tex; // handles from glGen*, so GLuint rather than GLint
GLsizeiptr datasize = resolution.x * resolution.y * 4 * sizeof(GLfloat);
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, datasize, NULL, GL_DYNAMIC_COPY);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tex); // see EDIT below: should be tbo, not tex
glBindTexture(GL_TEXTURE_BUFFER, 0);
glBindImageTexture(2, tex, 0, GL_TRUE, 0, GL_READ_WRITE, GL_RGBA32F);
The rendering loop is then: update and draw, update and draw, and so on, with a delay in between so that I have time to see what the update does.
The update is like this...
glm::ivec2 resolution; //Using GLM
resolution.x = (int)(iResolution.x + .5f);
resolution.y = (int)(iResolution.y + .5f);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
void *ptr = glMapBuffer(GL_TEXTURE_BUFFER, GL_WRITE_ONLY);
color *c = (color*)ptr; //color is a simple struct containing 4 GLfloats.
for (int i = 0; i < resolution.x*resolution.y; ++i)
{
c[i].r = c[i].g = c[i].b = c[i].a = 1.0f;
}
glUnmapBuffer(GL_TEXTURE_BUFFER);
c = NULL; ptr = NULL;
glBindBuffer(GL_TEXTURE_BUFFER, 0);
And the draw is like this...
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
ShaderProgram->Use(); //Simple shader program class
quad->Draw(GL_TRIANGLES); //Simple mesh class containing triangles (vertices) and colors
glFinish();
glMemoryBarrier(GL_ALL_BARRIER_BITS);
I just put some memory barriers around to be extra safe; they shouldn't hurt anything but performance, right? Well, the outcome was the same with or without the barriers anyway, so ... :)
The shader program is a simple pass-through vertex shader plus the fragment shader that does the actual testing.
Vertex shader
#version 430
in vec3 in_vertex;
void main(void)
{
gl_Position = vec4(in_vertex, 1.0);
}
Fragment shader (I guess coherent and memoryBarrier() aren't really needed here, since I synchronize on the CPU between draws/fragment shader executions... but do they hurt?)
#version 430
uniform vec2 iResolution;
layout(binding = 2, rgba32f) coherent uniform imageBuffer colorMap;
out vec4 FragColor;
void main(void)
{
ivec2 res = ivec2(int(iResolution.x + 0.5), int(iResolution.y + 0.5));
ivec2 pos = ivec2(int(gl_FragCoord.x + 0.5), int(gl_FragCoord.y + 0.5));
int pixelpos = pos.y * res.x + pos.x;
memoryBarrier();
vec4 prevPixel = imageLoad(colorMap, pixelpos);
vec4 green = vec4(0.0, 1.0, 0.0, 0.0);
imageStore(colorMap, pixelpos, green);
FragColor = prevPixel;
}
Expectation: A white screen! Since I'm writing "white" to the whole buffer between every draw, even though the shader writes green to the image after each load.
Result: The first frame is green, the rest are black. Part of me suspects there's a white frame that flashes by too fast to be seen, or some vsync thing that tears it, but is this really the place for that kind of logic? :P
Well, then I tried a new thing and moved the update block (where I write "white" to the whole buffer) into the init instead.
Expectation: A white first frame, followed by a green screen.
Result: Oh yes, it's green all right! Though the first frame sometimes shows white/green artifacts and is sometimes pure green. That's probably due to (lack of) vsync or something; I haven't checked it out. Still, I think I got the result I was looking for.
The conclusion I draw from this is that something is wrong in my update.
Does it unhook the buffer from the texture reference or something? If so, isn't it weird that the first frame is OK? It's only after the first imageStore command (well, the first frame) that the texture goes all black - the bind()-map()-unmap()-bind(0) sequence works the first time, but not afterwards.
My picture of glMapBuffer is that it copies the buffer data from GPU to CPU memory, lets you alter it, and that Unmap copies it back. Just now it occurred to me that maybe it doesn't copy both ways, but only one? Could it be that GL_WRITE_ONLY should be GL_READ_WRITE? Well, I've tried both. If one of them were correct, wouldn't my screen always be white in "test 1" when using it?
ARGH, what am I doing wrong?
EDIT:
Well, I still don't know... Obviously glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tex); should be glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);, but tbo and tex presumably had the same value, since they were generated in the same order. That's why it worked in this implementation.
I have solved it though, in a manner I'm not very happy with since I really think that the above should work. On the other hand, the new solution is probably a bit better performance-wise.
Instead of using glMapBuffer(), I switched to keeping a copy of the tbo's memory on the CPU, using glBufferSubData() and glGetBufferSubData() to move the data between CPU and GPU. This works, so I'll just continue with that solution.
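A minimal sketch of that workaround, reusing tbo and datasize from the setup above (cpuCopy is an assumed name for the CPU-side mirror):
#include <vector>
std::vector<color> cpuCopy(resolution.x * resolution.y);
for (color &px : cpuCopy) { px.r = px.g = px.b = px.a = 1.0f; } // write "white"
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferSubData(GL_TEXTURE_BUFFER, 0, datasize, cpuCopy.data()); // upload the whole range
glBindBuffer(GL_TEXTURE_BUFFER, 0);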
But, yeah, the question still stands - Why doesn't glMapBuffer() work with my texture buffer objects?
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tex);
should be
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);
Perhaps something else is wrong as well, but this stands out.
https://www.opengl.org/wiki/Buffer_Texture
You can skip to the TL;DR at the bottom for the conclusion. I've preferred to provide as much information as I can, to help narrow the question down further.
I've been having an issue with a heat haze effect I've been working on.
This is the sort of effect I was thinking of, but since this is a rather generalized system, it would apply to any so-called screen-space refraction:
The haze effect itself is not where my issue lies, as it is just a distortion of sampling coordinates; rather, it's with what is sampled. My first approach was to render the distortions to another render target. This method was fairly successful, but it has a major downfall that's easy to foresee if you've dealt with screen-space textures before: because of the offset to the sampling coordinate, if an object is in front of the refractor, its edges will be taken into the refraction calculation.
As you can see, it looks fine when all the geometry is either the environment (no depth test) or behind the refractor. And here, with a cube closer than the refractor, there is an effect I'll call bleeding of the closer geometry.
Relevant shader code for reference:
/* transparency.frag */
layout (location = 0) out vec4 out_color; // frag color
layout (location = 1) out vec4 bright; // used for bloom effect
layout (location = 2) out vec4 deform; // deform buffer
[...]
void main(void) {
[...]
vec2 n = __sample_noise_texture_with_time__{};
deform = vec4(n * .1, 0, 1);
out_color = vec4(0, 0, 0, .0);
bright = vec4(0.0, 0.0, 0.0, .9);
}
/* post_process.frag */
in vec2 texel;
uniform sampler2D screen_t;
uniform sampler2D depth_t;
uniform sampler2D bright_t;
uniform sampler2D deform_t;
[...]
void main(void) {
[...]
vec3 noise_sample = texture(deform_t, texel).xyz;
vec2 texel_c = texel + noise_sample.xy;
[sample screen and bloom with texel_c, gamma correct, output to color buffer]
}
To try to combat this, I tried a technique that involves comparing depth components. To do this, I made the transparent object write its fragment depth to the z component of my deform buffer, like so:
/* transparency.frag */
[...]
deform = vec4(n * .1, gl_FragCoord.z, 1);
[...]
Then, to determine what is in front of what, I do a quick check in the post-processing shader:
[...]
float dist = texture(depth_t, texel_c).x;
float dist1 = noise_sample.z; // what i wrote to the deform buffer z
if (dist + .01 < dist1) { /* do something like draw debug */ }
[...]
This worked somewhat, but broke down as I moved away, even if I linearized the depth values and compared the distances.
EDIT 3: added better screenshots for the depth test phase
(In yellow: where it's sampling something that's in front. I couldn't be bothered to make it render the polygons as well, so I drew them in.)
(And here, demonstrating it partially failing the depth comparison test from further away.)
I also had some 'fun' with another technique, where I passed the color buffer directly to the transparency shader and had it output the sample to its color output. In theory, if the scene is Z-sorted, this should produce the desired result. I'll let you be the judge of that.
(I have a few guesses as to what the emerging patterns are, since they resemble GPU rasterisation patterns, but that's not very relevant, since that 'solution' was more of a desperation effort than anything.)
TL;DR and formal question: I've tried a few techniques based on my knowledge and haven't been able to find much literature on the subject. So my question is: how do you realize such effects as heat haze/distortion (ones that do not cover the whole screen, might I add), and is there literature on the subject? For reference on the sort of effect I'm after, see my Overwatch screenshot and all other similar effects in the game.
I thought I would also mention, for completeness' sake, that I'm running OpenGL 4.5 (on Windows) with most shaders being version 4.00, and am working with a custom engine.
EDIT: If you want information about the software side of the engine, feel free to ask. I didn't include any because I didn't deem it relevant; however, I'd be glad to provide specs and code snippets, as well as more shaders, on demand.
EDIT 2: I thought I'd also mention that this could be achieved with a second render pass and a clipping plane; however, that would be costly and feels unnecessary since the viewpoint is the same. It might be that this is the only solution, but I don't believe so.
Thanks in advance for your answers!
I think the issue is that you are trying to distort something that's behind an occluding object, and that information is no longer available, because the object in front has overwritten the color value there. So you can't distort in information from a color buffer that no longer exists.
You are trying to solve it by depth testing and skipping the pixels that belong to an object closer to the camera than your transparent heat object, but this causes the edge to leak into the distortion. And even if you get the edge skipped, an object right behind the transparent object but occluded by the cube in front still won't distort in, because its color information is not available.
Additional Render Pass
As you mention, an additional render pass with a clipping plane is certainly one solution to this problem.
Multiple render targets
Another, similar solution would be to use multiple render targets: render the depth of the transparent object beforehand, test for fragments that are behind it, and render those to a second color buffer. Later, use this buffer for the distortion instead of the full color buffer. You could also consider deferred shading.
Here is a code snippet of how you would set up multiple render targets.
//create your fbo
GLuint fboID;
glGenFramebuffers(1, &fboID);
glBindFramebuffer(GL_FRAMEBUFFER, fboID);
//create the rbo for depth
GLuint rboID;
glGenRenderbuffers(1, &rboID);
glBindRenderbuffer(GL_RENDERBUFFER, rboID); // takes the id itself, not a pointer
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboID);
//create two color textures (one for distort)
GLuint colorTexture, distortcolorTexture;
glGenTextures(1, &colorTexture);
glGenTextures(1, &distortcolorTexture);
glBindTexture(GL_TEXTURE_2D, colorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, distortcolorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
//attach both textures
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, colorTexture, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, distortcolorTexture, 0);
//specify both the draw buffers
GLenum drawBuffers[2] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, drawBuffers);
First render the transparent object's depth. Then, in your fragment shader for the other objects:
//compute color with your lighting...
//write color to colortexture
gl_FragData[0] = color;
//check if fragment behind your transparent object
if( depth >= tObjDepth )
{
//write color to distortcolortexture
gl_FragData[1] = color;
}
Finally, use the distortcolortexture in your distort shader.
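In the post-process shader, the distorted lookup would then read from the new attachment instead of the full color buffer; a hedged one-liner, with the distort_t sampler name assumed:
uniform sampler2D distort_t; // bound to distortcolorTexture
[...]
vec3 refracted = texture(distort_t, texel_c).rgb; // instead of sampling screen_t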
Depth test for a matrix of pixels instead of a single pixel
I think the edge is leaking because you don't distort just a single pixel but more of a matrix of pixels. Perhaps you could check the max depth over a small matrix (e.g. 3x3 pixels centered on the current pixel) and discard the distortion if that fails the depth test, as sketched below. (Note: this still won't distort in objects behind the occluding object, which you might want distorted in.)
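A minimal GLSL sketch of that neighborhood test, reusing depth_t, texel, texel_c and noise_sample from the question (the 3x3 radius and 0.01 bias are arbitrary assumptions):
vec2 texelSize = 1.0 / vec2(textureSize(depth_t, 0));
float tDepth = noise_sample.z; // refractor depth written to the deform buffer
bool occluded = false;
for (int y = -1; y <= 1; ++y)
    for (int x = -1; x <= 1; ++x) {
        float d = texture(depth_t, texel_c + vec2(x, y) * texelSize).x;
        if (d + 0.01 < tDepth) occluded = true; // something sits in front of the refractor
    }
vec2 final_texel = occluded ? texel : texel_c; // fall back to the undistorted sample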
I have implemented CPU code that copies a projected texture onto a larger texture on a 3D object - 'decal baking', if you will - but now I need to implement it on the GPU. To do this, I hope to use a compute shader, as it's quite difficult to add an FBO in my current setup.
Example image from my current implementation
This question is more about how to use Compute shaders but for anyone interested, the idea is based on an answer I got from user jozxyqk, seen here: https://stackoverflow.com/a/27124029/2579996
The texture that is written to is called _texture in my code, whilst the projected one is _textureProj.
Simple compute shader
const char *csSrc[] = {
"#version 440\n",
"layout (binding = 0, rgba32f) uniform image2D destTex;\
layout (local_size_x = 16, local_size_y = 16) in;\
void main() {\
ivec2 storePos = ivec2(gl_GlobalInvocationID.xy);\
imageStore(destTex, storePos, vec4(0.0,0.0,1.0,1.0));\
}"
};
As you can see, for now I only want the texture filled with some arbitrary (blue) color.
Update function
void updateTex(){
glUseProgram(_computeShader);
const GLint location = glGetUniformLocation(_computeShader, "destTex");
if (location == -1){
printf("Could not locate uniform location for texture in CS");
}
// bind texture
glUniform1i(location, 0);
glBindImageTexture(0, *_texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
// ^ the second param dereferences to the texture id - that I'm sure of.
glDispatchCompute(_texture->width() / 16, _texture->height() / 16, 1);
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
glUseProgram(0);
printOpenGLError(); // reports no errors.
}
Problem
If I call updateTex() outside of my main program object, I see zero effect, whereas if I call it within that object's scope, like so:
glUseProgram(_id); // vert, frag shader pipe
updateTex();
// Pass uniforms to shader
// bind _textureProj & _texture (the latter is the one I'm trying to update)
glUseProgram(0);
Then upon rendering I see this:
QUESTION:
I realise that calling the update method within the main program object's scope is not the proper way of doing it; however, it's the only way to get any visual results. It seems to me that what happens is that it pretty much bypasses the fragment shader and draws directly to screen space...
What can I do to get this working properly? (my main focus is to be able to write anything to the texture & update)
Please let me know if more code needs posting.
I believe in this case an FBO would be easier and faster, and would recommend that instead. But the question itself is still quite valid.
I'm surprised to see a sphere, given you're writing blue to the entire texture (minus any edge bits if the texture size is not a multiple of 16). I guess this is from code elsewhere.
Anyway, it seems your main problem is being able to write to the texture from a compute shader outside the setup code for regular rendering. I suspect this is related to how you bind your destTex image. I'm not sure what your TexUnit and activate methods do, but to bind a GL texture to an image unit, do this:
int imageUnitIndex = 0; //something unique
int uniformLocation = glGetUniformLocation(...);
glUniform1i(uniformLocation, imageUnitIndex); //program must be active
glBindImageTexture(imageUnitIndex, textureHandle, ...);
see:
https://www.opengl.org/sdk/docs/man/html/glBindImageTexture.xhtml
https://www.opengl.org/wiki/Image_Load_Store#Images_in_the_context
Lastly, as you're writing through an image2D, GL_SHADER_IMAGE_ACCESS_BARRIER_BIT is the barrier to use; GL_SHADER_STORAGE_BARRIER_BIT is for shader storage buffer objects.
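Putting the binding advice together, a hedged sketch of updateTex() (texHandle is an assumed name for the GLuint behind _texture; the combined barrier bits cover both later image loads and later sampler reads):
void updateTex(GLuint computeProgram, GLuint texHandle, int width, int height) {
    const GLint imageUnit = 0; // any unique image unit index
    glUseProgram(computeProgram); // program must be active for glUniform*
    GLint loc = glGetUniformLocation(computeProgram, "destTex");
    glUniform1i(loc, imageUnit); // point the image uniform at the image unit
    glBindImageTexture(imageUnit, texHandle, 0, GL_FALSE, 0,
                       GL_WRITE_ONLY, GL_RGBA32F); // bind the texture to that unit
    glDispatchCompute(width / 16, height / 16, 1); // matches local_size 16x16
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT |
                    GL_TEXTURE_FETCH_BARRIER_BIT); // cover image loads and texture() reads
    glUseProgram(0);
}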
So I have a very complicated R^4 -> R^4 function, which I need to evaluate for a lot of input glm::vec4s in real time, so I want to do it on the GPU, for all vec4s in parallel.
What I figured is that I would create a GL_RGBA32F texture at 1920x1 resolution (1920 is enough for my purposes), copy my input data into the texture, then draw a line so that the rasterizer spawns a fragment for each of my vec4s. Then I would either write the results back to the texture using image load/store, or render them to a 1920x1 framebuffer and read them from there.
The problem is that for some reason OpenGL can't read my GL_RGBA32F texture.
Here is my code:
Setting up the texture (currently loaded with dummy data):
glm::vec4 texturedata[1920];
for (unsigned int i = 0; i < 1920; i++)
{
texturedata[i] = glm::vec4(1.0f, 1.0f, 1.0f, 1.0f);
}
glGenTextures(1, &datatexture);
glBindTexture(GL_TEXTURE_2D, datatexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 1920, 1, 0, GL_RGBA, GL_FLOAT, texturedata);
Before each rendering:
glUseProgram(mprogram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, datatexture);
glBindVertexArray(rasterizertriggervao);
glUniform1i(glGetUniformLocation(mprogram, "datatexture"), 0);
glDrawArrays(GL_LINES, 0, 2);
The rasterizertriggervao is 2 floats: -1, 1, and the vertex shader draws a nice line through the middle of my screen from that.
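For reference, a hedged sketch of what that implied vertex shader might look like (attribute name assumed; the two x values -1 and 1 span the screen at y = 0):
#version 430
layout(location = 0) in float xpos; // the two floats in rasterizertriggervao
void main()
{
    gl_Position = vec4(xpos, 0.0, 0.0, 1.0); // horizontal line through screen center
}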
Fragment shader:
layout(binding = 0) uniform sampler2D datatexture;
out vec4 x;
void main()
{
x = vec4( (texture(datatexture, vec2(gl_FragCoord.x/1920.0, 0.0))).x, 0.0f, 0.0f, 1.0f );
}
So this should draw a nice red line through the middle of my screen. It draws a black one. The rasterizer ran all 1920x1 fragments, and the texture is correctly copied to the GPU (I have Nvidia Nsight installed, which lets me debug on the GPU and inspect the contents of textures directly, and I checked: the texture is full of 1.0f).
However for some reason the sampling doesn't work.
I know that there are better ways to do GPGPU but this thing has to fit into a much bigger program nicely, and this is the way I need it to work, through textures :)
You seem not to set the texture filter modes for your texture. GL's defaults are (unfortunately) to use mip-mapping, but your texture is not mipmap-complete, so sampling from it will not work. You should add glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST) and probably also glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST).
As Andon M. Coleman already pointed out in the comments, you are not sampling the texture at the correct location. You should use vec2(gl_FragCoord.x/1920.0 + 0.5/1920.0, 0.5) in your case. I also agree with his suggestion to use texelFetch() directly, since you can derive an integer texel index straight from gl_FragCoord.x.
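A hedged sketch of the texelFetch() variant (integer texel coordinates, no filtering or normalization involved, so the filter-mode and half-texel concerns above disappear):
layout(binding = 0) uniform sampler2D datatexture;
out vec4 x;
void main()
{
    // gl_FragCoord.x is at pixel centers (0.5, 1.5, ...), so truncation yields the column index
    vec4 data = texelFetch(datatexture, ivec2(int(gl_FragCoord.x), 0), 0);
    x = vec4(data.x, 0.0, 0.0, 1.0);
}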
I am trying to blend a 3D texture with a 2D one to make a terrain. The 3D texture has moss, sand, snow and the like, interpolated to enhance the illusion of heights. The 2D texture currently only has an orange line across it, meant to be a "road". This is my fragment shader:
#version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
// Yes, I am aware I am only returning the 2D texture value
// However this is for testing purposes only
// Doing gl_FragColor = diffuse3D + diffuse2D;
// Or any other operation returns the 3D texture only
gl_FragColor = diffuse2D;
}
And this is my drawing call:
void Terrain::Draw() {
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(glm::vec3), &v[0].x);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(glm::vec3), &n[0].x);
s.enable(); // simple glUseProgram call within my Shader object
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_3D);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
glClientActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture",1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
glPushMatrix();
glScalef(scalex,scaley,scalez);
glDrawElements(GL_TRIANGLES, sizei, GL_UNSIGNED_INT, index);
glPopMatrix();
s.disable(); // glUseProgram(0)
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_3D);
glDisable(GL_TEXTURE_2D);
}
Here is the code for my setSampler() method:
void Shader::setSampler(std::string name, GLint value)
{
GLint loc = glGetUniformLocation(program, name.c_str()); // returns -1 if not found
if (loc >= 0) // valid locations start at 0
{
glUniform1i(loc, value);
}
}
The result is a solid black color over the whole terrain. I have sadly been unable to find much information on sampler3D, but the diffuse3D variable in my fragment shader does compute to the correct texture, and my texture coordinates for the 2D texture are being correctly sent to the fragment shader (I know this because I used them to color the terrain for testing and got a smooth gradient from green to red, which is what you would expect using only the first two coordinates). I also checked the values passed to my setSampler() method, and I do get the 0 and 1 unit values, with the 1 and 2 locations corresponding to them.
All of the help I can find on this issue is in the vicinity of the advice provided here, which I have already implemented.
Can anybody assist?
EDIT: So, just for kicks, I swapped my texture units so the 2D texture became unit 0 and the 3D became unit 1. Now only the 2D texture is rendered. But my texture units are passed correctly (at least in appearance) to the shader. Any clues?
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
Let's pretend that this wasn't using shaders. Let's pretend you were just writing a function in C++ that returns a value.
int FuncName(int val1, int val2)
{
int test1 = Compute(val1);
int test2 = Compute(val2);
return test2;
}
What will this function return? Obviously, it returns Compute(val2), completely ignoring the value of test1. It won't magically combine test1 and test2. They're separate values, and therefore, they remain separate unless you explicitly combine them.
Just like your fragment shader.
Shaders aren't magic; they're programming. They only do what you tell them to. So if you say, "get a value from a texture and then don't do anything with it", it will dutifully do exactly that. Though odds are good that the compiler will optimize out the texture fetch entirely.
If you want a "blend" of two textures, you must blend them. You must fetch from each texture, then use both values to compute a new color.
How exactly you do that depends entirely on you. Maybe your 2D texture has some alpha that represents how much of the 2D texture to show. I don't know; you didn't describe what your texture looks like or how exactly you plan to show the road in some places and not in others.
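For illustration only, a hedged sketch assuming the road texture's alpha marks road coverage (1 on the road, 0 elsewhere):
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = mix(diffuse3D, diffuse2D, diffuse2D.a); // alpha picks the road where present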
The reason you get a black color is simply that you don't set the uniform variables properly.
#version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
}
What this shader does is look up the value of 'roadTexture' and display it. Unfortunately, it has no clue which texture unit 'roadTexture' is currently bound to, and thus will access texture unit 0, where your 3D texture is bound - so you're trying to access a 3D texture with 2D texcoords, which may well return all black. You'll need to query the uniform locations of your textures with glGetUniformLocation and then set them to the correct texture units (0/1, respectively) with glUniform1i.
EDIT: Also, you're using deprecated functionality, so your shader version directive should be changed to #version 420 compatibility - the default is core.
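Concretely, that means something like this after linking (programId is an assumed name; the program must be in use when calling glUniform*):
glUseProgram(programId);
glUniform1i(glGetUniformLocation(programId, "mainTexture"), 0); // 3D texture on unit 0
glUniform1i(glGetUniformLocation(programId, "roadTexture"), 1); // 2D road texture on unit 1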
You need to call glEnableClientState(GL_TEXTURE_COORD_ARRAY); again after you have made the second texture unit active with glClientActiveTexture(GL_TEXTURE1);
from http://www.opengl.org/sdk/docs/man2/xhtml/glEnableClientState.xml
enabling and disabling GL_TEXTURE_COORD_ARRAY affects the active client texture unit.
Just solved this problem. Apparently you still need glActiveTexture() in addition to glClientActiveTexture(). This is the code that worked, for anyone who runs into the same problem:
glClientActiveTexture(GL_TEXTURE0);
glActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
glClientActiveTexture(GL_TEXTURE1);
glActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture",1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
// Drawing Calls
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY); // disables the array on unit 1, the active client unit
glClientActiveTexture(GL_TEXTURE0);
glDisableClientState(GL_TEXTURE_COORD_ARRAY); // and again for unit 0
glActiveTexture(GL_TEXTURE0);
Thanks for reading.
So, I've been working on a little game project for a bit and I've hit a snag that's annoying me to no end. I load an obj file which then gets rendered after being put into a VBO. This part works fine, no problemo. However, I've been trying to get it to render the accompanying texture with the supplied UVs, with no success. Currently, I just get a matte green colouration on my model. Upon investigating it in GDE, I've seen that the texture gets loaded fine and occupies the GL_TEXTURE0 unit, so that's not the issue. I believe it may be my binding, but I have no idea why it would fail...
void Model_Man::render_models()
{
for(int x=0; x<models.size(); x++)
{
if(models.at(x).visible==true)
{
glBindBuffer(GL_ARRAY_BUFFER,models.at(x).t_buff);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,models.at(x).i_buff);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT,0,0);
glClientActiveTexture(GL_TEXTURE0);
glTexCoordPointer(2,GL_FLOAT,0,&models.at(x).uvs[0]);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glActiveTexture(GL_TEXTURE0);
int tex_loc = glGetUniformLocation(models.at(x).shaderid,"color_texture");
glUniform1i(tex_loc,GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, models.at(x).mats.at(0).texid);
c_render.use_program(models.at(x).shaderid);
glDrawElements(GL_TRIANGLES,models.at(x).f_index.size()*3,GL_UNSIGNED_INT,0);
c_render.use_program();
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
}
}
}
And my shader files...
Shader.frag
uniform sampler2D color_texture;
void main() {
// Set the output color of our current pixel
gl_FragColor = texture2D(color_texture, gl_TexCoord[0].st);
}
Shader.vert
void main() {
gl_TexCoord[0] = gl_MultiTexCoord0;
// Set the position of the current vertex
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
And yes, I know I'm currently being horribly inefficient with my render loop :P but I'm already planning on refactoring it; right now I'm just attempting to get this single model to draw correctly. I have no clue why it isn't rendering with the texture applied - unless it's because I need to interleave my arrays, but I'm still supplying it with UV data, so I don't see why it fails.
The call that sets the sampler uniform should not pass GL_TEXTURE0, but actually 0.
Indeed:
glUniform1i(location, 0)
For setting up a sampler uniform do:
glUseProgram(progId);
// ...
glActiveTexture(GL_TEXTURE0 + texUnit);
glBindTexture(GL_TEXTURE_2D, texId); // glBindTexture needs a target as well
glUniform1i(location, texUnit); // and glUniform1i needs the uniform location
The main concept is that uniform variables are shader program state (maintained until you re-link the program or reset the uniform value). Without a program bound, glUniform1i will fail, since there is no shader program on which it could set the uniform value!
As a general piece of advice, call glGetError after each OpenGL call to detect these conditions. Most of those calls can be removed by the preprocessor in release builds.
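For instance, a hedged sketch of such a check that compiles away in release builds (the macro name is mine):
#include <cstdio>
#ifndef NDEBUG
#define GL_CHECK(call) do { \
        call; \
        GLenum err = glGetError(); \
        if (err != GL_NO_ERROR) \
            fprintf(stderr, "GL error 0x%x at %s:%d\n", err, __FILE__, __LINE__); \
    } while (0)
#else
#define GL_CHECK(call) call
#endif
// Usage: GL_CHECK(glUniform1i(location, 0));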
Well, it turned out that the big issue was that while I was binding a texture, I wasn't actually setting it up in a way that marked it as in use. Calling glClientActiveTexture(GL_TEXTURE0 + texUnit); in combination with glActiveTexture() ended up being the final solution.