Texture rendering and VBOs [OpenGL/SDL/C++]

So, I've been working on a little game project for a bit and I've hit a snag that's annoying me to no end. I load an obj file which then gets rendered after being put into a VBO. This part works fine, no problemo. However, I've been trying to get it to render the accompanying texture with the supplied UVs, with no success. Currently, I just get a matte green colouration on my model. Upon investigating it in GDE, I've seen that the texture gets loaded fine and occupies the GL_TEXTURE0 unit, so that's not the issue. I believe it may be my binding, but I have no idea why it would fail...
void Model_Man::render_models()
{
    for(int x=0; x<models.size(); x++)
    {
        if(models.at(x).visible==true)
        {
            glBindBuffer(GL_ARRAY_BUFFER, models.at(x).t_buff);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, models.at(x).i_buff);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, 0);
            glClientActiveTexture(GL_TEXTURE0);
            glTexCoordPointer(2, GL_FLOAT, 0, &models.at(x).uvs[0]);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glActiveTexture(GL_TEXTURE0);
            int tex_loc = glGetUniformLocation(models.at(x).shaderid, "color_texture");
            glUniform1i(tex_loc, GL_TEXTURE0);
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, models.at(x).mats.at(0).texid);
            c_render.use_program(models.at(x).shaderid);
            glDrawElements(GL_TRIANGLES, models.at(x).f_index.size()*3, GL_UNSIGNED_INT, 0);
            c_render.use_program();
            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisable(GL_TEXTURE_2D);
        }
    }
}
And my shader files...
Shader.frag
uniform sampler2D color_texture;
void main() {
    // Set the output color of our current pixel
    gl_FragColor = texture2D(color_texture, gl_TexCoord[0].st);
}
Shader.vert
void main() {
    gl_TexCoord[0] = gl_MultiTexCoord0;
    // Set the position of the current vertex
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
And yes, I know I'm currently being horribly inefficient with my render loop :P but I'm already planning on refactoring it; right now I'm just attempting to get this single model to draw correctly. I have no clue why it wouldn't be rendering with the texture correctly applied - unless it's because I need to interleave my arrays, but I'm still supplying it with UV data, so I don't see why it fails.

The call that sets the sampler uniform should not pass GL_TEXTURE0, but the texture unit index, which here is actually 0.
Indeed:
glUniform1i(location, 0)
For setting up a sampler uniform, do:
glUseProgram(progId);
// ...
glActiveTexture(GL_TEXTURE0 + texUnit);
glBindTexture(GL_TEXTURE_2D, texId);
glUniform1i(location, texUnit);
The main concept is that uniform variables are shader program state (they are maintained until you re-link the program or reset the uniform value). Without a program bound, glUniform1i will fail, since there is no shader program on which it can set the uniform value!
As a general piece of advice, call glGetError after each OpenGL call to detect these conditions. Most of those calls can be removed by the preprocessor in release builds.
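For example, a minimal debug-only wrapper might look like this (a sketch; GL_CHECK is a hypothetical helper, not part of OpenGL):
#ifndef NDEBUG
#define GL_CHECK(call) \
    do { \
        call; \
        GLenum err = glGetError(); \
        if (err != GL_NO_ERROR) \
            fprintf(stderr, "GL error 0x%04X at %s:%d: %s\n", err, __FILE__, __LINE__, #call); \
    } while (0)
#else
#define GL_CHECK(call) call
#endif
// Usage: GL_CHECK(glUniform1i(tex_loc, 0));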

Well, it turned out that the big issue was that while I was binding a texture, I wasn't actually setting it up in a way that marked it as the one in use. Calling glClientActiveTexture(GL_TEXTURE0 + texUnit) in combination with glActiveTexture() ended up being the final solution.
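Put together, the fixed binding sequence might look roughly like this (a sketch based on the render loop above; texUnit is illustrative):
int texUnit = 0;
// Server-side state: select the texture unit, then bind the texture to it.
glActiveTexture(GL_TEXTURE0 + texUnit);
glBindTexture(GL_TEXTURE_2D, models.at(x).mats.at(0).texid);
// Client-side state: texcoord arrays belong to the active *client* texture unit.
// (Note: while a GL_ARRAY_BUFFER is bound, the pointer argument is treated as
// an offset into that buffer, not a client-memory address.)
glClientActiveTexture(GL_TEXTURE0 + texUnit);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, &models.at(x).uvs[0]);
// Uniforms are program state: bind the program first, and pass the unit
// index (0), not the GL_TEXTURE0 enum.
c_render.use_program(models.at(x).shaderid);
int tex_loc = glGetUniformLocation(models.at(x).shaderid, "color_texture");
glUniform1i(tex_loc, texUnit);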

Related

Collision with two fragment shaders at the same time

My intention was, that the less space an object takes up on the screen, the less bright the object should appear.
This is fragment shader fs_1
#version 450 core
layout (binding = 3, offset = 0) uniform atomic_uint area;
void main(void) {
    atomicCounterIncrement(area);
}
That's the second fragment shader, fs_2
#version 450 core
layout (binding = 3) uniform area_block {
    uint counter_value;
};
out vec4 color;
layout(location = 4) uniform float max_area;
void main(void) {
    float brightness = clamp(float(counter_value) / max_area, 0.0, 1.0);
    color = vec4(brightness, brightness, brightness, 1.0);
}
I then attached the shaders to the program object. Now comes the problematic part. I wonder if this is even acceptable or a complete mess. I created a named buffer and bound it to the GL_ATOMIC_COUNTER_BUFFER target, then bound it to binding point 3. I reset the counter to zero. After that, my intention was to reuse the atomic counter buffer in the second fragment shader, so I bound it to the GL_UNIFORM_BUFFER target. Last, I pass the maximum expected area to the second shader, in order to calculate the brightness.
glUseProgram(program);
GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, buf);
glBufferData(GL_ATOMIC_COUNTER_BUFFER, 16 * sizeof(GLuint), NULL, GL_DYNAMIC_COPY);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 3, buf);
const GLuint zero = 0;
glBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 2 * sizeof(GLuint), sizeof(GLuint), &zero);
glBindBufferBase(GL_UNIFORM_BUFFER, 3, buf);
glUniform1f(4, info.windowHeight * info.windowWidth);//max_area
As it is, it doesn't seem to work. I suppose I also need to insert glColorMask somewhere, in order to turn off the color output of the first fragment shader. Furthermore, I think I have to do something with glMemoryBarrier. Or is that not necessary? Have I called the functions in the wrong order?
I found no real references to this on the internet, and no sample code on how to accomplish it. I'd be thankful for any answers.
Edit:
In addition, I got an error message which makes the problem obvious:
glLinkProgram failed to link a GLSL program with the following program info log: 'Fragment shader(s) failed to link.
Fragment link error: INVALID_OPERATION.
ERROR: 0:8: error(#248) Function already has a body: main
ERROR: error(#273) 1 compilation errors. No code generated
I soon suspected that this might be the problem: both fragment shaders have a main body. So how could I fix that? Could I leave both main bodies and instead create a second program object?
I got the design to work, eventually!
I've followed the great advice in the comments. To sum up: I created two program objects with the same vertex shader attached to both of them. After that, I used program (with fragment shader fs_1) as follows:
glUseProgram(program);
glDrawArrays(GL_TRIANGLES, 0, 3);
For some reason, I didn't have to call glMemoryBarrier or glColorMask.
I then made the following calls in my render-loop:
virtual void render(double currentTime) {
    glUseProgram(program1);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
There, I used the second program object, program1 (with fragment shader fs_2). I suppose it works because there needs to be a vertex shader in each program object, not just in one of them. I drew the triangle the first time without any color output, just to calculate the area, and then a second time, using the "counted area" to calculate the brightness of the triangle.
I also realised that the shaders are only executed when you issue a draw call.
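For reference, the two-program setup described above might be created roughly like so (a sketch; compileShader and the *Source strings are hypothetical helpers, not from the question):
// One vertex shader shared by both programs; fs_1 counts covered fragments,
// fs_2 reads the count and outputs the brightness.
GLuint vs  = compileShader(GL_VERTEX_SHADER,   vsSource);
GLuint fs1 = compileShader(GL_FRAGMENT_SHADER, fs1Source);
GLuint fs2 = compileShader(GL_FRAGMENT_SHADER, fs2Source);

GLuint program = glCreateProgram();   // area-counting pass
glAttachShader(program, vs);
glAttachShader(program, fs1);
glLinkProgram(program);

GLuint program1 = glCreateProgram();  // brightness pass
glAttachShader(program1, vs);
glAttachShader(program1, fs2);
glLinkProgram(program1);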

Compute Shader write to texture

I have implemented CPU code that copies a projected texture to a larger texture on a 3D object ('decal baking', if you will), but now I need to implement it on the GPU. To do this I hope to use a compute shader, as it's quite difficult to add an FBO in my current setup.
Example image from my current implementation
This question is more about how to use Compute shaders but for anyone interested, the idea is based on an answer I got from user jozxyqk, seen here: https://stackoverflow.com/a/27124029/2579996
The texture being written to is called _texture in my code, whilst the projected one is _textureProj.
Simple compute shader
const char *csSrc[] = {
    "#version 440\n",
    "layout (binding = 0, rgba32f) uniform image2D destTex;\
    layout (local_size_x = 16, local_size_y = 16) in;\
    void main() {\
        ivec2 storePos = ivec2(gl_GlobalInvocationID.xy);\
        imageStore(destTex, storePos, vec4(0.0, 0.0, 1.0, 1.0));\
    }"
};
As you see I currently only want to have the texture updated to some arbitrary (blue) color.
Update function
void updateTex(){
    glUseProgram(_computeShader);
    const GLint location = glGetUniformLocation(_computeShader, "destTex");
    if (location == -1){
        printf("Could not locate uniform location for texture in CS");
    }
    // bind texture
    glUniform1i(location, 0);
    glBindImageTexture(0, *_texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
    // ^the second param is the GLint id - of that I'm sure.
    glDispatchCompute(_texture->width() / 16, _texture->height() / 16, 1);
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
    glUseProgram(0);
    printOpenGLError(); // reports no errors.
}
Problem
If I call updateTex() outside of my main program object I see zero effect, whereas if I call it within its scope, like so:
glUseProgram(_id); // vert, frag shader pipe
updateTex();
// Pass uniforms to shader
// bind _textureProj & _texture (latter is the one im trying to update)
glUseProgram(0);
Then upon rendering I see this:
QUESTION:
I realise that calling the update method within the main program object's scope is not the proper way of doing it; however, it's the only way to get any visual results. It seems to me that what happens is that it pretty much eliminates the fragment shader and draws to screen space...
What can I do to get this working properly? (my main focus is to be able to write anything to the texture & update)
Please let me know if more code needs posting.
I believe in this case an FBO would be easier and faster, and would recommend that instead. But the question itself is still quite valid.
I'm surprised to see a sphere, given you're writing blue to the entire texture (minus any edge bits if the texture size is not a multiple of 16). I guess this is from code elsewhere.
Anyway, it seems your main problem is being able to write to the texture from a compute shader outside the setup code for regular rendering. I suspect this is related to how you bind your destTex image. I'm not sure what your TexUnit and activate methods do, but to bind a GL texture to an image unit, do this:
int imageUnitIndex = 0; //something unique
int uniformLocation = glGetUniformLocation(...);
glUniform1i(uniformLocation, imageUnitIndex); //program must be active
glBindImageTexture(imageUnitIndex, textureHandle, ...);
see:
https://www.opengl.org/sdk/docs/man/html/glBindImageTexture.xhtml
https://www.opengl.org/wiki/Image_Load_Store#Images_in_the_context
Lastly, since you're using image2D, GL_SHADER_IMAGE_ACCESS_BARRIER_BIT is the barrier to use. GL_SHADER_STORAGE_BARRIER_BIT is for shader storage buffer objects.
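Applied to the question's updateTex, the corrected ordering might look like this (a sketch; texHandle stands for whatever GL texture id your _texture object wraps):
void updateTex(){
    glUseProgram(_computeShader);
    // The image uniform takes an image *unit* index, set while the program is bound...
    const GLint location = glGetUniformLocation(_computeShader, "destTex");
    glUniform1i(location, 0);
    // ...and the texture is bound to that same image unit.
    glBindImageTexture(0, texHandle, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
    glDispatchCompute(_texture->width() / 16, _texture->height() / 16, 1);
    // image2D writes are ordered by the image-access barrier, not the storage-buffer one.
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
    glUseProgram(0);
}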

Reading and updating texture buffer in OpenGL/GLSL 4.3

I'm going a bit nuts on this, since I don't really get what is wrong and what isn't. There must either be something that I've vastly misunderstood or there is some kind of bug, either in the code or in the driver. I'm running this on an AMD Radeon 5850 with the latest Catalyst beta drivers as of last week.
OK, I began doing an OIT-rendering implementation and wanted to use a struct array stored in a shader storage buffer object. Well, the indices in that one were moving forward in memory all wrong, and I pretty much assumed that it was a driver bug, since they just recently started supporting such a thing, plus, yeah, it's a beta driver.
Therefore I moved back a notch and used glsl-images from texture buffer objects instead, which I guess had been supported since at least a while back.
Still wasn't behaving correctly. So I created a simple test project and fumbled around a bit and now I think I've just pinched where the thing is.
OK! First I initialize the buffer and texture.
//Clearcolor and Cleardepth setup, disabling of depth test, compile and link shaderprogram etc.
...
//
GLuint tbo, tex;
datasize = resolution.x * resolution.y * 4 * sizeof(GLfloat);
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, datasize, NULL, GL_DYNAMIC_COPY);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tex);
glBindTexture(GL_TEXTURE_BUFFER, 0);
glBindImageTexture(2, tex, 0, GL_TRUE, 0, GL_READ_WRITE, GL_RGBA32F);
Then the rendering loop is: update and draw, update and draw... with a delay in between so that I have time to see what the update does.
The update is like this...
ivec2 resolution; //Using GLM
resolution.x = (GLuint)(iResolution.x + .5f);
resolution.y = (GLuint)(iResolution.y + .5f);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
void *ptr = glMapBuffer(GL_TEXTURE_BUFFER, GL_WRITE_ONLY);
color *c = (color*)ptr; //color is a simple struct containing 4 GLfloats.
for (int i = 0; i < resolution.x*resolution.y; ++i)
{
    c[i].r = c[i].g = c[i].b = c[i].a = 1.0f;
}
glUnmapBuffer(GL_TEXTURE_BUFFER); c = (color*)(ptr = NULL);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
And the draw is like this...
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
ShaderProgram->Use(); //Simple shader program class
quad->Draw(GL_TRIANGLES); //Simple mesh class containing triangles (vertices) and colors
glFinish();
glMemoryBarrier(GL_ALL_BARRIER_BITS);
I just put some memory barriers around to be extra sure; they shouldn't harm anything but performance, right? Well, the outcome was the same with or without the barriers anyway, so... :)
The Shader program is a simple pass-through vertex shader and the fragment shader that's doing the testing.
Vertex shader
#version 430
in vec3 in_vertex;
void main(void)
{
    gl_Position = vec4(in_vertex, 1.0);
}
Fragment shader (I guess coherent & memoryBarrier() aren't really needed here, since I do the writes on the CPU in between draw calls/fragment shader executions... but does it do any harm?)
#version 430
uniform vec2 iResolution;
layout(binding = 2, rgba32f) coherent uniform imageBuffer colorMap;
out vec4 FragColor;
void main(void)
{
    ivec2 res = ivec2(int(iResolution.x + 0.5), int(iResolution.y + 0.5));
    ivec2 pos = ivec2(int(gl_FragCoord.x + 0.5), int(gl_FragCoord.y + 0.5));
    int pixelpos = pos.y * res.x + pos.x;
    memoryBarrier();
    vec4 prevPixel = imageLoad(colorMap, pixelpos);
    vec4 green = vec4(0.0, 1.0, 0.0, 0.0);
    imageStore(colorMap, pixelpos, green);
    FragColor = prevPixel;
}
Expectation: A white screen! After all, I'm writing "white" to the whole buffer between every draw, even if I'm writing green to the image after the load in the actual shader.
Result: The first frame is green, the rest are black. Some part of me thinks that there is a white frame that's too fast to be seen, or some vsync thing that tears it, but is this a place for logic? :P
Well, then I tried a new thing and moved the update block (where I'm writing "white" to the whole buffer) to the init instead.
Expectation: A white first frame, followed by a green screen.
Result: Oh yes, it's green alright! Even though the first frame shows some artifacts of white/green, sometimes only green. This is probably due to (lack of) vsync or something; I haven't checked that out. Still, I think I got the result I was looking for.
The conclusion I can draw out of this is that there is something wrong in my update.
Does it unhook the buffer from the texture reference or something? In that case, isn't it weird that the first frame is OK? It's only after the first imageStore command (well, the first frame) that the texture goes all black; the "bind()-map()-unmap()-bind(0)" works the first time, but not afterwards.
My picture of glMapBuffer is that it copies the buffer data from GPU to CPU memory, lets you alter it, and that Unmap copies it back. Well, just now I thought that maybe it doesn't copy the buffer from GPU to CPU and back, but only one way? Could it be the GL_WRITE_ONLY, which should be changed to GL_READ_WRITE? Well, I've tried both. If one of them were the correct one, wouldn't my screen always be white in "test 1" when using it?
ARGH, what am I doing wrong?
EDIT:
Well, I still don't know... Obviously glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tex); should be glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);, but I think tbo and tex had the same value since they were generated in the same order. Therefore it worked in this implementation.
I have solved it though, in a manner I'm not very happy with since I really think that the above should work. On the other hand, the new solution is probably a bit better performance-wise.
Instead of using glMapBuffer(), I switched to keeping a copy of the tbo memory on the CPU, using glBufferSubData() and glGetBufferSubData() to send the data between CPU and GPU. This worked, so I'll just continue with that solution.
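That workaround might look roughly like this (a sketch; cpuCopy is an assumed client-side array of datasize bytes):
// Upload the whole CPU copy instead of mapping the buffer...
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferSubData(GL_TEXTURE_BUFFER, 0, datasize, cpuCopy);
// ...and read the results back the same way after drawing.
glGetBufferSubData(GL_TEXTURE_BUFFER, 0, datasize, cpuCopy);
glBindBuffer(GL_TEXTURE_BUFFER, 0);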
But, yeah, the question still stands - Why doesn't glMapBuffer() work with my texture buffer objects?
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tex);
should be
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);
Perhaps there is something else wrong, but this stands out.
https://www.opengl.org/wiki/Buffer_Texture

GLSL, combining 2D and 3D textures

I am trying to blend a 3D texture with a 2D one to make a terrain. The 3D texture has moss, sand, snow and the like, interpolated to enhance the illusion of heights. The 2D texture currently only has an orange line across it, meant to be a "road". This is my fragment shader:
#version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
    vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
    vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
    // Yes, I am aware I am only returning the 2D texture value
    // However this is for testing purposes only
    // Doing gl_FragColor = diffuse3D + diffuse2D;
    // Or any other operation returns the 3D texture only
    gl_FragColor = diffuse2D;
}
And this is my drawing call:
void Terrain::Draw() {
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(glm::vec3), &v[0].x);
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, sizeof(glm::vec3), &n[0].x);
    s.enable(); // simple glUseProgram call within my Shader object
    glClientActiveTexture(GL_TEXTURE0);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, id_texture);
    s.setSampler("mainTexture", 0); // Calls to glGetUniformLocation and glUniform1i
    glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
    glClientActiveTexture(GL_TEXTURE1);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, id_texture_road);
    s.setSampler("roadTexture", 1); // Same as above
    glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
    glPushMatrix();
    glScalef(scalex, scaley, scalez);
    glDrawElements(GL_TRIANGLES, sizei, GL_UNSIGNED_INT, index);
    glPopMatrix();
    s.disable(); // glUseProgram(0)
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisable(GL_TEXTURE_3D);
    glDisable(GL_TEXTURE_2D);
}
Here is the code for my setSampler() method:
void Shader::setSampler(std::string name, GLint value)
{
    GLint loc = glGetUniformLocation(program, name.c_str());
    if (loc >= 0) // note: 0 is a valid location; only -1 signals failure
    {
        glUniform1i(loc, value);
    }
}
The result is a solid black color over the whole terrain. I have sadly been unable to find information on sampler3D, but the diffuse3D variable in my fragment shader does compute to the correct texture, and my texture coordinates for the 2D texture are being correctly sent to the fragment shader (I know this because I used them to color the terrain for testing and got a smooth gradient from green to red, which is what you would expect using only the first two coordinates). I also checked the values passed to my setSampler() method, and I do get the 0 and 1 sampler values, with the 1 and 2 uniform locations corresponding to them.
All of the help I can find on this issue is in the vicinity of the advice provided here, which I have already implemented.
Can anybody assist?
EDIT: So, just for kicks, I swapped my texture units so the 2D texture became unit 0 and the 3D became unit 1. Now only the 2D texture is rendered. But my texture units are passed correctly (at least in appearance) to the shader. Any clues?
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
Let's pretend that this wasn't using shaders. Let's pretend you were just writing a function in C++ that returns a value.
int FuncName(int val1, int val2)
{
    int test1 = Compute(val1);
    int test2 = Compute(val2);
    return test2;
}
What will this function return? Obviously, it returns Compute(val2), completely ignoring the value of test1. It won't magically combine test1 and test2. They're separate values, and therefore, they remain separate unless you explicitly combine them.
Just like your fragment shader.
Shaders aren't magic; they're programming. They only do what you tell them to. So if you say, "get a value from a texture and then don't do anything with it", it will dutifully do exactly that. Though odds are good that the compiler will optimize out the texture fetch entirely.
If you want a "blend" of two textures, you must blend them. You must fetch from each texture, then use both values to compute a new color.
How exactly you do that depends entirely on you. Maybe your 2D texture has some alpha that represents how much of the 2D texture to show. I don't know; you didn't describe what your texture looks like or how exactly you plan to show the road in some places and not in others.
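For example, if the road texture's alpha channel marked where the road should appear, the blend could be a single mix (a sketch, assuming that alpha convention):
vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
// Show the road where its alpha is high, the terrain elsewhere.
gl_FragColor = mix(diffuse3D, diffuse2D, diffuse2D.a);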
The reason you get a black color is simply that you don't set the uniform variables properly.
#version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;
void main() {
    vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
    vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
    gl_FragColor = diffuse2D;
}
What this shader is doing is looking up the value of 'roadTexture' and displaying it. Unfortunately, it has no clue which texture unit 'roadTexture' is currently bound to, and thus will access texture unit 0, where your 3D texture is bound - so you're trying to access a 3D texture with 2D texcoords, which may well return all black. You'll need to query the uniform locations of your textures with glGetUniformLocation and then set them to the correct texture units (0/1, respectively) with glUniform1i.
EDIT: Also, you're using deprecated functionality, so your shader version directive should be changed to #version 420 compatibility - the default is core.
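Concretely, with the program bound, that amounts to something like this (a sketch using the question's uniform names):
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "mainTexture"), 0); // 3D texture on unit 0
glUniform1i(glGetUniformLocation(program, "roadTexture"), 1); // 2D texture on unit 1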
You need to call glEnableClientState(GL_TEXTURE_COORD_ARRAY) again after you have made the second texture unit active with glClientActiveTexture(GL_TEXTURE1).
From http://www.opengl.org/sdk/docs/man2/xhtml/glEnableClientState.xml:
"Enabling and disabling GL_TEXTURE_COORD_ARRAY affects the active client texture unit."
Just solved this problem. Apparently you still need glActiveTexture() in addition to glClientActiveTexture(). This is the code that worked, for anyone who gets the same problem:
glClientActiveTexture(GL_TEXTURE0);
glActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture",0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);
glClientActiveTexture(GL_TEXTURE1);
glActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture",1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);
// Drawing Calls
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glActiveTexture(GL_TEXTURE0);
Thanks for reading.

imageStore() doesn't work on AMD hardware (OpenGL 4.2)

I tried this code on Nvidia hardware without any problem, but on AMD the imageStore() function doesn't seem to do anything (no GL error is thrown, though; I checked).
Shader:
#extension GL_EXT_shader_image_load_store : require
layout(size4x32) uniform image2D A;
void main(void){
    vec4 result = vec4(0.111, 0.222, 0.333, 0.444); // 'output' is a reserved word in GLSL, so use another name
    imageStore(A, ivec2(gl_FragCoord.xy - vec2(0.5, 0.5)), result);
}
Calling Program:
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "A"), id);
glBindImageTexture(id, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
//Bind the fbo associated with the texture to run a shader per pixel
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, width, height);
glDrawBuffer(GL_NONE); //Forbid gl_FragColor to be modified
//Render a quad
draw();
//Then read the texture...
As suggested in another thread by Nicol Bolas (Trouble with imageStore() (OpenGL 4.3)), I tried to add some barriers to ensure that the memory is written before I read back the texture, but there was no change; the texture that imageStore is supposed to write to is not modified.
void main(void){
    vec4 result = vec4(0.111, 0.222, 0.333, 0.444);
    memoryBarrier();
    imageStore(A, ivec2(gl_FragCoord.xy), result);
    memoryBarrier();
}
In the main program:
...
draw();
glMemoryBarrierEXT(GL_ALL_BARRIER_BITS);
...
On the other hand, if I remove glDrawBuffer(GL_NONE) and simply output my value using gl_FragColor, it works as usual:
void main(void){
gl_FragColor = vec4(0.111, 0.222 , 0.333, 0.444);
}
but I really need to do it with imageStore since I want to use scatter writes.
I also tried to use imageLoad and didn't have any problem. What is happening with this imageStore function?
Any ideas?
I probably had the same problem with an AMD card as you. I then looked at the source code of the OpenGL Samples Pack at http://www.g-truc.net/project-0026.html#menu
Studying the source code related to imageStore, I found that adding the two following lines for the texture made my code work on AMD (the code is in Java):
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST);
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_NEAREST);
If it does not work for you, just compare your code with the OpenGL Samples Pack to find more.
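In plain C/C++ OpenGL the equivalent is (a sketch; assumes the texture is currently bound to GL_TEXTURE_2D):
// With the default GL_NEAREST_MIPMAP_LINEAR min filter and only level 0
// allocated, the texture is mipmap-incomplete, and image access to an
// incomplete texture can silently fail.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);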