imageStore() doesn't work on AMD hardware (OpenGL 4.2)

I tried this code on Nvidia hardware without any problem, but on AMD the imageStore() function doesn't seem to do anything (no GL error is thrown, though; I checked).
Shader:
#extension GL_EXT_shader_image_load_store : require
layout(size4x32) uniform image2D A;
void main(void){
vec4 color = vec4(0.111, 0.222, 0.333, 0.444); // note: "output" is a reserved word in GLSL, so use another name
imageStore(A, ivec2(gl_FragCoord.xy - vec2(0.5, 0.5)), color);
}
Calling Program:
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "A"), id);
glBindImageTexture(id, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
//Bind the fbo associated with the texture to run a shader per pixel
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, width, height);
glDrawBuffer(GL_NONE); //Forbid gl_FragColor to be modified
//Render a quad
draw();
//Then read the texture...
As suggested in another thread by Nicol Bolas (Trouble with imageStore() (OpenGL 4.3)), I tried adding some barriers to ensure that the memory is written before I read back the texture, but nothing changed; the texture that imageStore is supposed to write to is not modified.
void main(void){
vec4 color = vec4(0.111, 0.222, 0.333, 0.444);
memoryBarrier();
imageStore(A, ivec2(gl_FragCoord.xy), color);
memoryBarrier();
}
In the main program:
...
draw();
glMemoryBarrierEXT(GL_ALL_BARRIER_BITS);
...
On the other hand, if I remove glDrawBuffer(GL_NONE) and simply output my value using gl_FragColor, it works as usual:
void main(void){
gl_FragColor = vec4(0.111, 0.222 , 0.333, 0.444);
}
but I really need to do it with imageStore, since I want to use scatter writes.
I also tried imageLoad and had no problem with it. What is happening with this imageStore function?
Any ideas?

I probably had the same problem with an AMD card as you. I then looked at the source code of the OpenGL Sample Pack at http://www.g-truc.net/project-0026.html#menu
While studying the source code related to imageStore, I found that adding the following two lines for the texture made my code work on AMD (the code is in Java):
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST);
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_NEAREST);
If this does not work for you, compare your code with the OpenGL Sample Pack to find further differences.
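For reference, a minimal C sketch of the same fix in the asker's context (assuming, as in the question, a single-level GL_TEXTURE_2D; the default GL_NEAREST_MIPMAP_LINEAR minification filter leaves a mipmap-less texture incomplete until the filter is changed):
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // default filter expects mipmaps
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);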

Related

Compute Shader write to texture

I have implemented CPU code that copies a projected texture to a larger texture on a 3D object ('decal baking', if you will), but now I need to implement it on the GPU. To do this I hope to use a compute shader, as it's quite difficult to add an FBO in my current setup.
Example image from my current implementation
This question is more about how to use compute shaders, but for anyone interested, the idea is based on an answer I got from user jozxyqk, seen here: https://stackoverflow.com/a/27124029/2579996
The texture being written to is called _texture in my code, whilst the projected one is _textureProj.
Simple compute shader
const char *csSrc[] = {
"#version 440\n",
"layout (binding = 0, rgba32f) uniform image2D destTex;\
layout (local_size_x = 16, local_size_y = 16) in;\
void main() {\
ivec2 storePos = ivec2(gl_GlobalInvocationID.xy);\
imageStore(destTex, storePos, vec4(0.0,0.0,1.0,1.0));\
}"
};
As you can see, I currently only want the texture updated to some arbitrary (blue) color.
Update function
void updateTex(){
glUseProgram(_computeShader);
const GLint location = glGetUniformLocation(_computeShader, "destTex");
if (location == -1){
printf("Could not locate uniform location for texture in CS");
}
// bind texture
glUniform1i(location, 0);
glBindImageTexture(0, *_texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
// ^ the second param is the GLint id - that I'm sure of.
glDispatchCompute(_texture->width() / 16, _texture->height() / 16, 1);
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
glUseProgram(0);
printOpenGLError(); // reports no errors.
}
Problem
If I call updateTex() outside of my main program object I see zero effect, whereas if I call it within its scope, like so:
glUseProgram(_id); // vert, frag shader pipe
updateTex();
// Pass uniforms to shader
// bind _textureProj & _texture (latter is the one im trying to update)
glUseProgram(0);
Then upon rendering I see this:
QUESTION:
I realise that calling the update method within the main program object's scope is not the proper way of doing it; however, it's the only way to get any visual results. It seems to me that what happens is that it pretty much eliminates the fragment shader and draws to screen space...
What can I do to get this working properly? (My main focus is to be able to write anything to the texture and update it.)
Please let me know if more code needs posting.
I believe in this case an FBO would be easier and faster, and would recommend that instead. But the question itself is still quite valid.
I'm surprised to see a sphere, given that you're writing blue to the entire texture (minus any edge bits if the texture size is not a multiple of 16). I guess this comes from code elsewhere.
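As an aside, a common way to cover those edge bits is to round the workgroup count up instead of truncating. A small sketch, assuming width and height are the texture's dimensions:
GLuint groupsX = (width + 15) / 16; // ceil(width / 16)
GLuint groupsY = (height + 15) / 16; // ceil(height / 16)
glDispatchCompute(groupsX, groupsY, 1);
Out-of-bounds image stores are defined to have no effect, so the overshoot is harmless.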
Anyway, it seems your main problem is being able to write to the texture from a compute shader outside the setup code for regular rendering. I suspect this is related to how you bind your destTex image. I'm not sure what your TexUnit and activate methods do, but to bind a GL texture to an image unit, do this:
int imageUnitIndex = 0; //something unique
int uniformLocation = glGetUniformLocation(...);
glUniform1i(uniformLocation, imageUnitIndex); //program must be active
glBindImageTexture(imageUnitIndex, textureHandle, ...);
see:
https://www.opengl.org/sdk/docs/man/html/glBindImageTexture.xhtml
https://www.opengl.org/wiki/Image_Load_Store#Images_in_the_context
Lastly, since you're using image2D, GL_SHADER_IMAGE_ACCESS_BARRIER_BIT is the barrier to use; GL_SHADER_STORAGE_BARRIER_BIT is for shader storage buffer objects.
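Putting that together, a minimal sketch of the whole update, reusing names from the question (treat it as a sketch under those assumptions, not a drop-in replacement):
glUseProgram(_computeShader);
GLint location = glGetUniformLocation(_computeShader, "destTex");
glUniform1i(location, 0); // image unit 0; the program must be active here
glBindImageTexture(0, textureHandle, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F); // textureHandle: the raw GL name of _texture
glDispatchCompute(texWidth / 16, texHeight / 16, 1);
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT); // make the image writes visible to later image accesses
glUseProgram(0);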

offscreen rendering opengl 4.5 multisample FBO

I'm referencing OpenGL Superbible 6 in my code.
First I simply wanted to implement object picking in my 3D scene. Eventually I decided to use framebuffer objects and succeeded, and then I realized I also needed to solve the problem of polygon edge aliasing, so I've rewritten my code again to make use of GL_TEXTURE_2D_MULTISAMPLE.
Here is the initialization code for the framebuffer:
void window_glview::init_framebuffer()
{
//CREATE FRAMEBUFFER OBJECT
GLenum gl_error=glGetError();
glGenTextures(1,&texture_id_framebuffer_color);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_color);
glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,ANTIALIASING_SAMPLES,GL_RGBA8,client_area.right,client_area.bottom,GL_TRUE);
glGenTextures(1,&texture_id_framebuffer_objectid);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_objectid);
glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,ANTIALIASING_SAMPLES,GL_RGBA8,client_area.right,client_area.bottom,GL_TRUE);
glGenTextures(1,&texture_id_framebuffer_depth);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_depth);
glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,ANTIALIASING_SAMPLES,GL_DEPTH_COMPONENT32,client_area.right,client_area.bottom,GL_TRUE);
gl_error=glGetError();
glGenFramebuffers(1,&buffer_id_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER,buffer_id_framebuffer);
gl_error=glGetError();
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_objectid,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,texture_id_framebuffer_depth,0);
GLenum draw_buffers[] =
{
GL_COLOR_ATTACHMENT0,
GL_COLOR_ATTACHMENT1
};
glDrawBuffers(2,draw_buffers);
GLenum status=glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status!=GL_FRAMEBUFFER_COMPLETE)
MessageBox(0,L"Failed to create framebuffer object",0,0);
glBindFramebuffer(GL_FRAMEBUFFER,0);
}
It's pretty similar to most internet listings on the same topic.
Now here is my drawing code:
void window_glview::paint()
{
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
//DRAW TO CUSTOM FRAMEBUFFER
glBindFramebuffer(GL_FRAMEBUFFER,buffer_id_framebuffer);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glLineWidth(1.0);
draw_viewport();
viewport_object_count=0;
draw_lights();
glLineWidth(1.5);
for (unsigned short i=0;i<mesh_count;i++)
{
draw_mesh(mesh_table[i],GL_TRIANGLES,false);
}
//DRAW TO DEFAULT
glBindFramebuffer(GL_FRAMEBUFFER,0);
//USE TEXTURE FROM FRAMEBUFFER COLOR_ATTACHMENT0
glUseProgram(program_id_screen_render);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_color);
//HERE IS A QUAD DRAWING PROCESS
glBindBuffer(GL_ARRAY_BUFFER,buffer_id_screen_quad);
glVertexAttribPointer(0,4,GL_FLOAT,GL_FALSE,24,0);
glEnableVertexAttribArray(0);
glDrawArrays(GL_QUADS,0,4);
SwapBuffers(hDC);
}
The vertex shader is simple:
#version 450
layout(location=0) in vec4 _pos;
void main(void)
{
gl_Position=_pos;
}
The fragment shader is written with the purpose of resolving the multisamples:
#version 450
uniform sampler2DMS screen_texture;
layout(location=0) out vec4 out_color;
void main(void)
{
ivec2 coord=ivec2(gl_FragCoord.xy);
vec4 result=vec4(0.0);
int i;
for (i=0;i<4;i++)
{
result=max(result,texelFetch(screen_texture,coord,i));
}
out_color=result;
}
I end up with a black screen. If I change out_color to something like out_color=vec4(1.0,0.0,0.0,1.0), I get a red screen.
What could go wrong?
In my framebuffer initializer, when I pass GL_DEPTH_COMPONENT to glTexStorage2DMultisample I get an error. I decided to pass GL_DEPTH_COMPONENT16 instead and it works. Why is that?
Would it be better to use a RENDERBUFFER for some purpose, and if so, how can I read it back into a texture?
The texture with id texture_id_framebuffer_color, which is the texture you use for your final rendering, is not attached to the FBO while you render to the FBO:
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_objectid,0);
Only one texture can be attached to a given attachment point at a time. So when you specify a second texture to be attached to GL_COLOR_ATTACHMENT0, the first one automatically gets detached.
If you want to have two attachments, they will need to use different attachment points:
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT1,texture_id_framebuffer_objectid,0);
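Note that with both attachments listed in the question's glDrawBuffers call, the fragment shaders used for scene rendering also need one output per draw buffer. A minimal GLSL sketch (the output names here are hypothetical):
layout(location = 0) out vec4 out_color; // routed to GL_COLOR_ATTACHMENT0
layout(location = 1) out vec4 out_objectid; // routed to GL_COLOR_ATTACHMENT1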

Reading and updating texture buffer in OpenGL/GLSL 4.3

I'm going a bit nuts over this, since I don't really get what is wrong and what isn't. There must either be something that I've vastly misunderstood or there is some kind of bug, either in the code or in the driver. I'm running this on an AMD Radeon 5850 with the latest Catalyst beta drivers as of last week.
OK, I began doing an OIT-rendering implementation and wanted to use a struct array saved in a shader storage buffer object. Well, the indices in that one were moving forward in memory all wrong, and I pretty much assumed it was a driver bug - since they just recently started supporting such things, plus, yeah, it's a beta driver.
Therefore I moved back a notch and used GLSL images backed by texture buffer objects instead, which I guess have been supported since a while back.
Still, it wasn't behaving correctly. So I created a simple test project, fumbled around a bit, and now I think I've pinpointed where the problem is.
OK! First I initialize the buffer and texture.
//Clearcolor and Cleardepth setup, disabling of depth test, compile and link shaderprogram etc.
...
//
GLuint tbo, tex; // the glGen* functions take GLuint handles
GLsizeiptr datasize = resolution.x * resolution.y * 4 * sizeof(GLfloat);
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, datasize, NULL, GL_DYNAMIC_COPY);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tex);
glBindTexture(GL_TEXTURE_BUFFER, 0);
glBindImageTexture(2, tex, 0, GL_TRUE, 0, GL_READ_WRITE, GL_RGBA32F);
Then the rendering loop is: update and draw, update and draw... with a delay in between so that I have time to see what the update does.
The update looks like this...
ivec2 resolution; //Using GLM
resolution.x = (GLuint)(iResolution.x + .5f);
resolution.y = (GLuint)(iResolution.y + .5f);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
void *ptr = glMapBuffer(GL_TEXTURE_BUFFER, GL_WRITE_ONLY);
color *c = (color*)ptr; //color is a simple struct containing 4 GLfloats.
for (int i = 0; i < resolution.x*resolution.y; ++i)
{
c[i].r = c[i].g = c[i].b = c[i].a = 1.0f;
}
glUnmapBuffer(GL_TEXTURE_BUFFER); c = (color*)(ptr = NULL);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
And the draw is like this...
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
ShaderProgram->Use(); //Simple shader program class
quad->Draw(GL_TRIANGLES); //Simple mesh class containing triangles (vertices) and colors
glFinish();
glMemoryBarrier(GL_ALL_BARRIER_BITS);
I just put some memory barriers around it to be extra sure; they shouldn't hurt anything but performance, right? Well, the outcome was the same with or without the barriers anyway, so ... :)
The Shader program is a simple pass-through vertex shader and the fragment shader that's doing the testing.
Vertex shader
#version 430
in vec3 in_vertex;
void main(void)
{
gl_Position = vec4(in_vertex, 1.0);
}
Fragment shader (I guess coherent & memoryBarrier() aren't really needed here, since I do the writes on the CPU in between draws/fragment shader executions... but does it hurt?)
#version 430
uniform vec2 iResolution;
layout(binding = 2, rgba32f) coherent uniform imageBuffer colorMap;
out vec4 FragColor;
void main(void)
{
ivec2 res = ivec2(int(iResolution.x + 0.5), int(iResolution.y + 0.5));
ivec2 pos = ivec2(int(gl_FragCoord.x + 0.5), int(gl_FragCoord.y + 0.5));
int pixelpos = pos.y * res.x + pos.x;
memoryBarrier();
vec4 prevPixel = imageLoad(colorMap, pixelpos);
vec4 green = vec4(0.0, 1.0, 0.0, 0.0);
imageStore(colorMap, pixelpos, green);
FragColor = prevPixel;
}
Expectation: a white screen! Since I'm writing "white" to the whole buffer between every draw, even though I'm writing green to the image after the load in the actual shader.
Result: the first frame is green, the rest are black. Some part of me thinks that there is a white frame that's too fast to be seen, or some vsync thing that tears it, but is this a place for logic? :P
Well, then I tried a new thing and moved the update block (where I'm writing "white" to the whole buffer) into the init instead.
Expectation: a white first frame, followed by a green screen.
Result: oh yes, it's green all right! Although the first frame shows some artifacts of white/green, sometimes it's only green. This is probably due to (lack of) vsync or something; I haven't checked that out. Still, I think I got the result I was looking for.
The conclusion I can draw out of this is that there is something wrong in my update.
Does it unhook the buffer from the texture reference or something? In that case, isn't it weird that the first frame is OK? It's only after the first imageStore-command (well, the first frame) that the texture goes all black - the "bind()-map()-unmap()-bind(0)" works the first time, but not afterwards.
My picture of glMapBuffer is that it copies the buffer data from GPU to CPU memory, lets you alter it, and Unmap copies it back. Well, just now I thought that maybe it doesn't copy the buffer from GPU to CPU and back, but only one way? Could it be that GL_WRITE_ONLY should be changed to GL_READ_WRITE? Well, I've tried both. If one of them were correct, wouldn't my screen always be white in "test 1" when using that one?
ARGH, what am I doing wrong?
EDIT:
Well, I still don't know... Obviously glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tex); should be glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);, but I think tbo and tex had the same value since they were generated in the same order. Therefore it worked in this implementation.
I have solved it though, in a manner I'm not very happy with since I really think that the above should work. On the other hand, the new solution is probably a bit better performance-wise.
Instead of using glMapBuffer(), I switched to keeping a copy of the TBO memory on the CPU, using glBufferSubData() and glGetBufferSubData() to send the data between CPU and GPU. This works, so I'll just continue with that solution.
But, yeah, the question still stands - Why doesn't glMapBuffer() work with my texture buffer objects?
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tex);
should be
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);
Perhaps there is something else wrong as well, but this stands out.
https://www.opengl.org/wiki/Buffer_Texture
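For clarity, here is the question's init sequence with that one-line fix applied (a sketch; everything else is unchanged from the question):
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, datasize, NULL, GL_DYNAMIC_COPY);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo); // attach the buffer object, not the texture, to the buffer texture
glBindTexture(GL_TEXTURE_BUFFER, 0);
glBindImageTexture(2, tex, 0, GL_TRUE, 0, GL_READ_WRITE, GL_RGBA32F);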

GLSL, combining 2D and 3D textures

I am trying to blend a 3D texture with a 2D one to make a terrain. The 3D texture has moss, sand, snow and the like, interpolated to enhance the illusion of heights. The 2D texture currently only has an orange line across it, meant to be a "road". This is my fragment shader:
# version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
// Yes, I am aware I am only returning the 2D texture value
// However this is for testing purposes only
// Doing gl_FragColor = diffuse3D + diffuse2D;
// Or any other operation returns the 3D texture only
gl_FragColor = diffuse2D;
}
And this is my drawing call:
void Terrain::Draw() {
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(glm::vec3), &v[0].x);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(glm::vec3), &n[0].x);
s.enable(); // simple glUseProgram call within my Shader object
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_3D);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
glClientActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture",1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
glPushMatrix();
glScalef(scalex,scaley,scalez);
glDrawElements(GL_TRIANGLES, sizei, GL_UNSIGNED_INT, index);
glPopMatrix();
s.disable(); // glUseProgram(0)
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_3D);
glDisable(GL_TEXTURE_2D);
}
Here is the code for my setSampler() method:
void Shader::setSampler(std::string name, GLint value)
{
GLuint loc = glGetUniformLocation(program, name.c_str());
if (loc>0)
{
glUniform1i(loc, value);
}
}
The result is a solid black color over the whole terrain. I have sadly been unable to find information on sampler3D, but the diffuse3D variable in my fragment shader does compute to the correct texture, and my texture coordinates for the 2D texture are being correctly sent to the fragment shader (I know this because I used them to color the terrain for testing and got a smooth gradient from green to red, what you would expect using only the first 2 coordinates). I also checked the values passed to my setSampler() method: I do get the values 0 and 1, with the locations 1 and 2 corresponding to them.
All of the help I can find on this issue is in the vicinity of the advice provided here, which I have already implemented.
Can anybody assist?
EDIT: So, just for kicks, I swapped my texture units so the 2D texture became unit 0 and the 3D one became unit 1. Now only the 2D texture is rendered. But my texture units are passed correctly (at least in appearance) to the shader. Any clues?
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
Let's pretend that this wasn't using shaders. Let's pretend you were just writing a function in C++ that returns a value.
int FuncName(int val1, int val2)
{
int test1 = Compute(val1);
int test2 = Compute(val2);
return test2;
}
What will this function return? Obviously, it returns Compute(val2), completely ignoring the value of test1. It won't magically combine test1 and test2. They're separate values, and therefore, they remain separate unless you explicitly combine them.
Just like your fragment shader.
Shaders aren't magic; they're programming. They only do what you tell them to. So if you say, "get a value from a texture and then don't do anything with it", it will dutifully do exactly that. Though odds are good that the compiler will optimize out the texture fetch entirely.
If you want a "blend" of two textures, you must blend them. You must fetch from each texture, then use both values to compute a new color.
How exactly you do that depends entirely on you. Maybe your 2D texture has some alpha that represents how much of the 2D texture to show. I don't know; you didn't describe what your texture looks like or how exactly you plan to show the road in some places and not in others.
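For instance, if the road texture's alpha channel marked where the road should appear (an assumption; the question doesn't say what the texture contains), the blend could look like this sketch:
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = mix(diffuse3D, diffuse2D, diffuse2D.a); // road where alpha is high, terrain elsewhere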
The reason you get a black color is simply that you don't set the proper uniform variables.
# version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
}
What this shader is doing is looking up the value of 'roadTexture' and displaying it. Unfortunately, it has no clue which texture unit 'roadTexture' is currently bound to, and thus will access texture unit 0, where your 3D texture is bound - so you're trying to access a 3D texture with 2D texcoords, which may well return all black. You'll need to query the uniform locations of your textures with glGetUniformLocation and then set them to the correct texture units (0/1, respectively) with glUniform1i.
EDIT: Also, you're using deprecated functionality, so your shader version directive should be changed to #version 420 compatibility - the default is core.
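In other words, something along these lines (a sketch; programId stands for whatever handle the asker's Shader object wraps):
glUseProgram(programId); // uniforms are program state, so the program must be active
glUniform1i(glGetUniformLocation(programId, "mainTexture"), 0); // sampler3D reads texture unit 0
glUniform1i(glGetUniformLocation(programId, "roadTexture"), 1); // sampler2D reads texture unit 1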
You need to call glEnableClientState(GL_TEXTURE_COORD_ARRAY); again after you have made the second texture unit active with glClientActiveTexture(GL_TEXTURE1);
from http://www.opengl.org/sdk/docs/man2/xhtml/glEnableClientState.xml
enabling and disabling GL_TEXTURE_COORD_ARRAY affects the active client texture unit.
Just solved this problem. Apparently you still need glActiveTexture() in addition to glClientActiveTexture(). This is the code that worked, for anyone who gets the same problem:
glClientActiveTexture(GL_TEXTURE0);
glActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
glClientActiveTexture(GL_TEXTURE1);
glActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture",1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
// Drawing Calls
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glActiveTexture(GL_TEXTURE0);
Thanks for reading.

Texture rendering and VBOs [OpenGL/SDL/C++]

So, I've been working on a little game project for a bit and I've hit a snag that's annoying me to no end. I load an obj file which then gets rendered after being put into a VBO. This part works fine, no problemo. However, I've been trying to get it to render the accompanying texture with the supplied UVs, with no success. Currently, I just get a matte green colouration on my model. Upon investigating it in GDE, I've seen that the texture gets loaded fine and occupies the GL_TEXTURE0 unit, so that's not the issue. I believe it may be my binding, but I have no idea why it would fail...
void Model_Man::render_models()
{
for(int x=0; x<models.size(); x++)
{
if(models.at(x).visible==true)
{
glBindBuffer(GL_ARRAY_BUFFER,models.at(x).t_buff);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,models.at(x).i_buff);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT,0,0);
glClientActiveTexture(GL_TEXTURE0);
glTexCoordPointer(2,GL_FLOAT,0,&models.at(x).uvs[0]);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glActiveTexture(GL_TEXTURE0);
int tex_loc = glGetUniformLocation(models.at(x).shaderid,"color_texture");
glUniform1i(tex_loc,GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, models.at(x).mats.at(0).texid);
c_render.use_program(models.at(x).shaderid);
glDrawElements(GL_TRIANGLES,models.at(x).f_index.size()*3,GL_UNSIGNED_INT,0);
c_render.use_program();
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
}
}
}
And my shader files...
Shader.frag
uniform sampler2D color_texture;
void main() {
// Set the output color of our current pixel
gl_FragColor = texture2D(color_texture, gl_TexCoord[0].st);
}
Shader.vert
void main() {
gl_TexCoord[0] = gl_MultiTexCoord0;
// Set the position of the current vertex
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
And yes, I know I'm currently being horribly inefficient with my render loop :P but I'm already planning on refactoring it; I am just attempting to get this single model to draw correctly with everything I'm aiming to do. I have no clue why it wouldn't render with the texture correctly applied - unless it's because I need to interleave my arrays, but I'm still supplying it with UV data, so I don't see why it fails.
The call that sets the sampler uniform should not pass GL_TEXTURE0, but actually 0.
Indeed:
glUniform1i(location, 0)
For setting up a sampler uniform do:
glUseProgram(progId);
// ...
glActiveTexture(GL_TEXTURE0 + texUnit);
glBindTexture(GL_TEXTURE_2D, texId); // glBindTexture takes a target and the texture name
glUniform1i(location, texUnit); // glUniform1i takes the uniform location and the unit index
The main concept is that uniform variables are shader program state (a value is maintained until you re-link the program or reset the uniform). Without binding a program, glUniform1i will fail, since there's no shader program on which it can set the uniform value!
As general advice, call glGetError after each OpenGL call to detect these conditions. Most of those calls can be removed by the preprocessor in release builds.
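For example, a small helper in that spirit (a sketch; GL_CHECK is a hypothetical name, stripped down to the bare call when NDEBUG is defined):
#include <stdio.h>
#ifdef NDEBUG
#define GL_CHECK(call) call
#else
#define GL_CHECK(call) do { \
    call; \
    GLenum err_ = glGetError(); \
    if (err_ != GL_NO_ERROR) \
        fprintf(stderr, "GL error 0x%04X after %s (%s:%d)\n", err_, #call, __FILE__, __LINE__); \
} while (0)
#endif
// Usage: GL_CHECK(glUniform1i(location, 0));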
Well, I found out that the big issue was that while I was binding a texture, I wasn't actually setting it up in a way that made it understood as being in use. Using glClientActiveTexture(GL_TEXTURE0 + texUnit); in combination with glActiveTexture() ended up being the final solution.