Asynchronous texture upload with Qt and OpenGL (C++)

I'm writing a small video player using QOpenGLWidget. At the moment I'm struggling to get asynchronous texture upload working. In an earlier version of my code I wait for a "next frame" signal, upon which the frame is read from the hard drive, uploaded to the GPU and then rendered. Now I want to make this asynchronous using a ring buffer on the GPU: a separate thread uploads the next N textures, and the main thread takes one of these textures, displays it and invalidates it. As a first step I wrote a class to upload a single texture, which I want to use from my QOpenGLWidget. I created shared contexts between my class and the QOpenGLWidget.
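To make the intended design concrete, here is a rough sketch of the ring-buffer bookkeeping I have in mind (hypothetical names, not my actual code):

// Hypothetical sketch of the ring-buffer bookkeeping: the loader thread
// fills slots with uploaded textures, the GUI thread consumes them.
#include <qopengl.h>
#include <QMutex>
#include <QWaitCondition>
#include <array>

struct TextureRing {
    static const int N = 4;
    std::array<GLuint, N> textures; // texture names, shared between contexts
    int head = 0;                   // oldest filled slot
    int count = 0;                  // number of filled slots
    QMutex mutex;
    QWaitCondition notFull, notEmpty;

    int acquireForUpload() {        // loader thread: wait for a free slot
        QMutexLocker lock(&mutex);
        while (count == N) notFull.wait(&mutex);
        return (head + count) % N;
    }
    void finishUpload() {           // loader thread: publish the filled slot
        QMutexLocker lock(&mutex);
        ++count;
        notEmpty.wakeOne();
    }
    int acquireForDisplay() {       // GUI thread: wait for a filled slot
        QMutexLocker lock(&mutex);
        while (count == 0) notEmpty.wait(&mutex);
        return head;
    }
    void invalidate() {             // GUI thread: free the displayed slot
        QMutexLocker lock(&mutex);
        head = (head + 1) % N;
        --count;
        notFull.wakeOne();
    }
};

My widget currently looks like this: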
class GLWidget : public QOpenGLWidget, protected QOpenGLFunctions;

void GLWidget::paintGL() {
    QOpenGLVertexArrayObject::Binder vaoBinder(&m_vao);
    m_program->bind();

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    glFrontFace(GL_CCW);

    m_program->setUniformValue("textureSamplerRed", 0);
    m_program->setUniformValue("textureSamplerGreen", 1);
    m_program->setUniformValue("textureSamplerBlue", 2);
    glUniformMatrix4fv(m_matMVP_Loc, 1, GL_FALSE, &m_MVP[0][0]);

    m_vertice_indices_Vbo.bind();
    m_vertices_Vbo.bind();
    m_texture_coordinates_Vbo.bind();

    glDrawElements(
        GL_TRIANGLE_STRIP,                    // mode
        m_videoFrameTriangles_indices.size(), // count
        GL_UNSIGNED_INT,                      // type
        (void*)0                              // element array buffer offset
    );

    m_program->release();
}
I wait for GLWidget::initializeGL() to finish, then emit a signal which is connected to the initialization of my texture loading class:
class TextureLoader2 : public QObject, protected QOpenGLFunctions;

void TextureLoader2::initialize(QOpenGLContext *context)
{
    // sharing the OpenGL context with GLWidget
    m_context.setFormat(context->format()); // need this?
    m_context.setShareContext(context);
    m_context.create();
    m_context.makeCurrent(context->surface());
    m_surface = context->surface();
}
And here is how I load a new frame:
void TextureLoader2::loadNextFrame(const int frameIdx)
{
    QElapsedTimer timer;
    timer.start();
    bool is_current = m_context.makeCurrent(m_surface);

    // some code which reads the frame from disk and sends it to the GPU.
    // srcR is a pointer to the data for red; the upload for G and B is similar
    if(!m_texture_Rdata)
    {
        m_texture_Rdata = std::make_shared<QOpenGLTexture>(QOpenGLTexture::Target2D);
        m_texture_Rdata->create();
        m_texture_Rdata->setSize(m_frameWidth, m_frameHeight);
        m_texture_Rdata->setFormat(QOpenGLTexture::R8_UNorm);
        m_texture_Rdata->allocateStorage(QOpenGLTexture::Red, QOpenGLTexture::UInt8);
        m_texture_Rdata->setData(QOpenGLTexture::Red, QOpenGLTexture::UInt8, srcR);
        // Set filtering modes for texture minification and magnification
        m_texture_Rdata->setMinificationFilter(QOpenGLTexture::Nearest);
        m_texture_Rdata->setMagnificationFilter(QOpenGLTexture::Linear);
        m_texture_Rdata->setWrapMode(QOpenGLTexture::ClampToBorder);
    }
    else
    {
        m_texture_Rdata->setData(QOpenGLTexture::Red, QOpenGLTexture::UInt8, srcR);
    }

    // these are QOpenGLTextures
    m_texture_Rdata->bind(0);
    m_texture_Gdata->bind(1);
    m_texture_Bdata->bind(2);
    emit frameUploaded();
}
Unfortunately, my QOpenGLWidget displays nothing, and I don't know how to proceed.
I know that the code for reading and sending the texture to the GPU is working, since if I leave out the line
bool is_current = m_context.makeCurrent(m_surface);
my whole window (not just the frame containing the QOpenGLWidget) is overwritten, displaying the texture.
I've been searching quite a bit, but I couldn't find any simple working example code for what I want to do. I hope someone has an idea what the issue might be. I've seen people mention using a QOffscreenSurface, or a second hidden widget with a similar but different context. Maybe I have to use one of those?
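For what it's worth, here is roughly what I imagine the QOffscreenSurface variant would look like (an untested sketch; the key change is that the loader gets its own surface instead of borrowing the widget's, and m_offscreenSurface is a hypothetical member):

// Untested sketch: the loader thread gets its own QOffscreenSurface, so its
// makeCurrent() no longer competes with the GUI thread for the widget's surface.
#include <QOffscreenSurface>

void TextureLoader2::initialize(QOpenGLContext *context)
{
    m_context.setFormat(context->format());
    m_context.setShareContext(context);
    m_context.create();

    m_offscreenSurface = new QOffscreenSurface; // hypothetical member
    m_offscreenSurface->setFormat(context->format());
    m_offscreenSurface->create(); // on some platforms this must run on the GUI thread
}

void TextureLoader2::loadNextFrame(const int frameIdx)
{
    m_context.makeCurrent(m_offscreenSurface);
    // ... upload the textures as before ...
    glFlush(); // make the upload visible to the sharing (widget) context
    m_context.doneCurrent();
    emit frameUploaded();
}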
My fragment shader:
#version 330 core

// Interpolated values from the vertex shaders
in vec2 fragmentUV;

// Output data
out vec4 color_0;

// Values that stay constant for the whole mesh.
uniform sampler2D textureSamplerRed;
uniform sampler2D textureSamplerGreen;
uniform sampler2D textureSamplerBlue;

void main(){
    vec3 myColor;
    myColor.r = texture( textureSamplerRed,   fragmentUV ).r;
    myColor.g = texture( textureSamplerGreen, fragmentUV ).r;
    myColor.b = texture( textureSamplerBlue,  fragmentUV ).r;
    color_0 = vec4(myColor, 1.0f);
}

Related

Drawing a monochrome 2D array in OpenGL

I need to use OpenGL for a very specific purpose. I've got an array of floats of size [SIZE][SIZE] (it's always square) that represents a 2D image. Drawing is just an extra here, since I've been doing it with third-party programs by outputting the array to a text file, but I would like to give the option of doing it in the program itself.
This array is constantly updated in a loop, as it's supposed to represent the values of a simulated field, the details of which are quite irrelevant; the important point is that the value of each element is going to be a float between -1 and 1. Now, I would like to just draw this array as a 2D image (in real time), every N steps of the main loop. I tried using the pixel drawing tool of X11 (I'm doing this on Linux) and drawing the array by just looping over it pixel by pixel on a SIZE x SIZE window, but this was very slow and took much more time than the simulation itself. I've been looking into OpenGL, and from what I've read the ideal solution would be to reinterpret my array as a 2D texture and then draw it on a quad. Apparently, to use bare OpenGL I would have to adapt my code to work within the main loop of the OpenGL drawing, which is a bit impractical, so if the same can be done in GLFW, I'm happy with it.
The image to draw is always square, and its orientation is completely irrelevant, it doesn't matter if it's drawn mirrored, upside down, transposed, etc, as it's supposed to be completely isotropic.
The main backbone of the program follows this scheme:
#include <iostream>
#include <GLFW/glfw3.h>
using namespace std;

int main(int argc, char** argv)
{
    if (GFX) // GFX is a bool, only draw stuff if it's 1 (its value doesn't change)
    {
        // Initialize GLFW
    }
    float field[2*SIZE][SIZE] = {0}; // This is the array to print (only the first SIZE * SIZE components)
    for (int i = 0; i < totalTime; i++)
    {
        for (int x = 0; x < SIZE; x++)
        {
            for (int y = 0; y < SIZE; y++)
            {
                // Each position of the array is updated here
            }
        }
        if (GFX)
        {
            // The drawing should be done here
        }
    }
    return 0;
}
I've tried some code snippets and modified some other samples I've found around, but haven't been able to make it work: either they have to call a GL loop that breaks my own simulation loop, or they just print a pixel in the centre.
So my main question is how to make a texture out of the first SIZE X SIZE components of field, and then draw it on a QUAD.
Thanks!
The simplest approach for a rookie is to use the old API without shaders. To make that work you simply encode your data into a 1D linear array of floats in the range <0.0,1.0>, which can be done from <-1,+1> pretty fast on the CPU side with a single for loop like this:
for (i=0;i<size*size;i++) data[i]=0.5*(data[i]+1.0);
I do not use GLUT nor code for your platform, so I stick just to the rendering:
//---------------------------------------------------------------------------
const int size=512;        // data resolution
const int size2=size*size;
float data[size2];         // your float size*size data
GLuint txrid=-1;           // GL texture ID
//---------------------------------------------------------------------------
void init() // this must be called once (after GL is initialized)
{
    int i;
    // generate float data
    Randomize();
    for (i=0;i<size2;i++) data[i]=Random();
    // create texture (generate the texture name first, then configure it)
    glGenTextures(1,&txrid);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D,txrid);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_MODULATE);
    glDisable(GL_TEXTURE_2D);
}
//---------------------------------------------------------------------------
void exit() // this must be called once (before GL is uninitialized)
{
    // release texture
    glDeleteTextures(1,&txrid);
}
//---------------------------------------------------------------------------
void gl_draw()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // bind texture
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D,txrid);
    // copy your actual data into it
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, size, size, 0, GL_LUMINANCE, GL_FLOAT, data);
    // render single textured QUAD
    glColor3f(1.0,1.0,1.0);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0,0.0); glVertex2f(-1.0,-1.0);
    glTexCoord2f(1.0,0.0); glVertex2f(+1.0,-1.0);
    glTexCoord2f(1.0,1.0); glVertex2f(+1.0,+1.0);
    glTexCoord2f(0.0,1.0); glVertex2f(-1.0,+1.0);
    glEnd();
    // unbind texture (so it does not mess with other rendering)
    glBindTexture(GL_TEXTURE_2D,0);
    glDisable(GL_TEXTURE_2D);
    glFlush();
    SwapBuffers(hdc); // ignore this, GLUT should handle it on its own
}
//---------------------------------------------------------------------------
Here is a preview: [screenshot of the random grayscale data rendered on a quad]
In order to make this work you need to call init() at the start of your app, after GLUT creates the GL context, and exit() at the app's end, before GLUT closes the GL context. gl_draw() will render your data, so it must be called in the drawing event of GLUT.
In case you do not want to do the range conversion to <0,1> on the CPU side, you can move it to shaders (a very simple vertex and fragment shader), but I got the feeling you're a rookie and shaders would simply be too much for you to start with. If you really want to go that way, see:
complete GL+GLSL+VAO/VBO C++ example
It also covers the GL initialization without GLUT, but on Windows ...
Now some notes on the program above:
I used the GL_LUMINANCE32F_ARB texture format extension
It's a 32-bit floating-point texture format that is not clamped, so your data stays as is. It should be present on all modern gfx HW. I did this to ease the transition to shaders later on, where you can operate on your raw data directly ...
size
In the original GL specification the texture size should be a power of 2, so 16, 32, 64, 128, 256, 512, ... If not, you need to use the rectangle texture extension, but that has been native in gfx HW for years now, so no need to change anything. But on Linux and Mac there might be problems with the GL implementation, so if something does not work, try a power-of-2 size (just in case)...
Also do not get too crazy with size, as gfx cards have limits; usually 2048 is a safe limit for low-end stuff. If you need more, then do a mosaic of more QUADs/textures.
GL_CLAMP_TO_EDGE
This is also an extension (now native to HW), so your texture coordinates go from 0 to 1 instead of from 0+pixel/2 to 1-pixel/2 ...
However, none of these are GL 1.0 stuff, so you need to add extensions to your app (if GLUT, or whatever you use, does not already). All of these are just tokens/constants, not function calls, so in case the compiler complains it should be enough to:
#include <gl\glext.h>
after gl.h is included, or add the defines directly instead:
#define GL_CLAMP_TO_EDGE 0x812F
#define GL_LUMINANCE32F_ARB 0x8818
btw. your code does not look like a GLUT app (but I might be wrong, as I do not use it); see this for example:
simple GLUT app example
Your header suggests GLFW3, which is something entirely different from GLUT (unless it's derived from it), so maybe you should edit the tags and OP to match what you really have/use.
Now the shaders:
If you generate your data in the <-1,+1> range:
for (i=0;i<size2;i++) data[i]=(2.0*Random())-1.0;
And use these shaders:
Vertex:
// Vertex
#version 400 core
layout(location = 0) in vec2 pos; // position
layout(location = 8) in vec2 tex; // texture
out vec2 vpos;
out vec2 vtex;
void main()
{
    vpos = pos;
    vtex = tex;
    gl_Position = vec4(pos, 0.0, 1.0);
}
Fragment:
// Fragment
#version 400 core
uniform sampler2D txr;
in vec2 vpos; // position
in vec2 vtex; // texture
out vec4 col;
void main()
{
    vec4 c;
    c = texture(txr, vtex);
    c = (c + 1.0) * 0.5;
    col = c;
}
Then the result is the same (apart from the faster conversion on the GPU side). However, you need to convert the GL_QUADS into a VAO/VBO (unless an nVidia card is used, but even then you definitely should use VBO/VAO).
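For reference, a minimal sketch of that conversion: the same fullscreen quad as two triangles in a VAO/VBO, with attribute locations matching the vertex shader above (function names here are just placeholders):

// Minimal sketch: fullscreen quad as VAO/VBO, matching the shader's
// attribute locations (0 = pos, 8 = tex). Two triangles replace GL_QUADS.
GLuint vao, vbo;
const float quad[] = {
//  pos.x  pos.y   tex.s  tex.t
    -1.0f, -1.0f,  0.0f,  0.0f,
    +1.0f, -1.0f,  1.0f,  0.0f,
    +1.0f, +1.0f,  1.0f,  1.0f,
    -1.0f, -1.0f,  0.0f,  0.0f,
    +1.0f, +1.0f,  1.0f,  1.0f,
    -1.0f, +1.0f,  0.0f,  1.0f,
};

void init_vao()
{
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0); // pos
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(8); // tex
    glVertexAttribPointer(8, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));
    glBindVertexArray(0);
}

void draw_vao() // call with the shader program bound and the texture in unit 0
{
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    glBindVertexArray(0);
}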

Compute Shader write to texture

I have implemented CPU code that copies a projected texture to a larger texture on a 3D object, 'decal baking' if you will, but now I need to implement it on the GPU. To do this I hope to use a compute shader, as it's quite difficult to add an FBO in my current setup.
Example image from my current implementation
This question is more about how to use Compute shaders but for anyone interested, the idea is based on an answer I got from user jozxyqk, seen here: https://stackoverflow.com/a/27124029/2579996
The texture that is written to is called _texture in my code, whilst the projected one is _textureProj.
Simple compute shader
const char *csSrc[] = {
    "#version 440\n",
    "layout (binding = 0, rgba32f) uniform image2D destTex;\
    layout (local_size_x = 16, local_size_y = 16) in;\
    void main() {\
        ivec2 storePos = ivec2(gl_GlobalInvocationID.xy);\
        imageStore(destTex, storePos, vec4(0.0, 0.0, 1.0, 1.0));\
    }"
};
As you see I currently only want to have the texture updated to some arbitrary (blue) color.
Update function
void updateTex(){
    glUseProgram(_computeShader);
    const GLint location = glGetUniformLocation(_computeShader, "destTex");
    if (location == -1){
        printf("Could not locate uniform location for texture in CS");
    }
    // bind texture
    glUniform1i(location, 0);
    glBindImageTexture(0, *_texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
    // ^ the second param is the GLint id - that I'm sure of.
    glDispatchCompute(_texture->width() / 16, _texture->height() / 16, 1);
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
    glUseProgram(0);
    printOpenGLError(); // reports no errors.
}
Problem
If I call updateTex() outside of my main program object I see zero effect, whereas if I call it within its scope, like so:
glUseProgram(_id); // vert, frag shader pipe
updateTex();
// Pass uniforms to shader
// bind _textureProj & _texture (the latter is the one I'm trying to update)
glUseProgram(0);
Then upon rendering I see this:
QUESTION:
I realise that calling the update method within the main program object's scope is not the proper way of doing it; however, it's the only way to get any visual results. It seems to me that what happens is that it pretty much eliminates the fragment shader and draws to screen space...
What can I do to get this working properly? (My main focus is to be able to write anything to the texture & update it.)
Please let me know if more code needs posting.
I believe in this case an FBO would be easier and faster, and would recommend that instead. But the question itself is still quite valid.
I'm surprised to see a sphere, given that you're writing blue to the entire texture (minus any edge bits if the texture size is not a multiple of 16). I guess this comes from code elsewhere.
Anyway, it seems your main problem is being able to write to the texture from a compute shader outside the setup code for regular rendering. I suspect this is related to how you bind your destTex image. I'm not sure what your TexUnit and activate methods do, but to bind a GL texture to an image unit, do this:
int imageUnitIndex = 0; //something unique
int uniformLocation = glGetUniformLocation(...);
glUniform1i(uniformLocation, imageUnitIndex); //program must be active
glBindImageTexture(imageUnitIndex, textureHandle, ...);
see:
https://www.opengl.org/sdk/docs/man/html/glBindImageTexture.xhtml
https://www.opengl.org/wiki/Image_Load_Store#Images_in_the_context
Lastly, as you're using image2D, GL_SHADER_IMAGE_ACCESS_BARRIER_BIT is the barrier to use. GL_SHADER_STORAGE_BARRIER_BIT is for storage buffer objects.
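Putting those pieces together, updateTex() could look roughly like this (a sketch keeping the question's names; *_texture is assumed to yield the GL texture handle, as in the original code):

// Sketch of updateTex() with explicit image-unit binding and the
// image-access barrier; names are taken from the question's code.
void updateTex()
{
    const int imageUnit = 0; // must match "binding = 0" in the shader

    glUseProgram(_computeShader);
    const GLint location = glGetUniformLocation(_computeShader, "destTex");
    glUniform1i(location, imageUnit); // program must be active here

    glBindImageTexture(imageUnit, *_texture, 0, GL_FALSE, 0,
                       GL_WRITE_ONLY, GL_RGBA32F);
    glDispatchCompute(_texture->width() / 16, _texture->height() / 16, 1);

    // image2D writes are fenced by the image-access barrier,
    // not the storage-buffer one
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
    glUseProgram(0);
}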

offscreen rendering opengl 4.5 multisample FBO

I'm referencing OpenGL Superbible 6 in my code.
First I simply wanted to implement object picking in my 3D scene. I eventually decided to use framebuffer objects and succeeded, but then I understood that I also had to solve the problem of polygon edge aliasing, so I rewrote my code again to make use of GL_TEXTURE_2D_MULTISAMPLE.
Here is the initialization code for the framebuffer:
void window_glview::init_framebuffer()
{
    // CREATE FRAMEBUFFER OBJECT
    GLenum gl_error = glGetError();

    glGenTextures(1, &texture_id_framebuffer_color);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, texture_id_framebuffer_color);
    glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, ANTIALIASING_SAMPLES, GL_RGBA8,
                              client_area.right, client_area.bottom, GL_TRUE);

    glGenTextures(1, &texture_id_framebuffer_objectid);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, texture_id_framebuffer_objectid);
    glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, ANTIALIASING_SAMPLES, GL_RGBA8,
                              client_area.right, client_area.bottom, GL_TRUE);

    glGenTextures(1, &texture_id_framebuffer_depth);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, texture_id_framebuffer_depth);
    glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, ANTIALIASING_SAMPLES, GL_DEPTH_COMPONENT32,
                              client_area.right, client_area.bottom, GL_TRUE);
    gl_error = glGetError();

    glGenFramebuffers(1, &buffer_id_framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, buffer_id_framebuffer);
    gl_error = glGetError();

    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texture_id_framebuffer_color, 0);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texture_id_framebuffer_objectid, 0);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, texture_id_framebuffer_depth, 0);

    GLenum draw_buffers[] =
    {
        GL_COLOR_ATTACHMENT0,
        GL_COLOR_ATTACHMENT1
    };
    glDrawBuffers(2, draw_buffers);

    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE)
        MessageBox(0, L"Failed to create framebuffer object", 0, 0);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
It's pretty similar to most of the internet listings on the same topic.
Now here is my drawing code:
void window_glview::paint()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // DRAW TO CUSTOM FRAMEBUFFER
    glBindFramebuffer(GL_FRAMEBUFFER, buffer_id_framebuffer);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLineWidth(1.0);
    draw_viewport();
    viewport_object_count = 0;
    draw_lights();
    glLineWidth(1.5);
    for (unsigned short i = 0; i < mesh_count; i++)
    {
        draw_mesh(mesh_table[i], GL_TRIANGLES, false);
    }

    // DRAW TO DEFAULT
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    // USE TEXTURE FROM FRAMEBUFFER COLOR_ATTACHMENT0
    glUseProgram(program_id_screen_render);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, texture_id_framebuffer_color);

    // HERE IS A QUAD DRAWING PROCESS
    glBindBuffer(GL_ARRAY_BUFFER, buffer_id_screen_quad);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 24, 0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_QUADS, 0, 4);

    SwapBuffers(hDC);
}
The vertex shader is simple:
#version 450
layout(location = 0) in vec4 _pos;
void main(void)
{
    gl_Position = _pos;
}
The fragment shader is written with the purpose of interpreting the multisamples:
#version 450
uniform sampler2DMS screen_texture;
layout(location = 0) out vec4 out_color;
void main(void)
{
    ivec2 coord = ivec2(gl_FragCoord.xy);
    vec4 result = vec4(0.0);
    for (int i = 0; i < 4; i++)
    {
        result = max(result, texelFetch(screen_texture, coord, i));
    }
    out_color = result;
}
I end up with a black screen. If I change out_color to something like out_color = vec4(1.0, 0.0, 0.0, 1.0), I get a red screen.
What could be going wrong?
In my initializer function for the framebuffer, when I pass GL_DEPTH_COMPONENT to glTexStorage2DMultisample I get an error; I decided to pass GL_DEPTH_COMPONENT16 instead and it works. Why is that?
Would I be better off using a RENDERBUFFER for some purpose, and if so, how can I read it back into a texture?
The texture with id texture_id_framebuffer_color, which is the texture you use for your final rendering, is not attached to the FBO while you render to the FBO:
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_objectid,0);
Only one texture can be attached to a given attachment point at a time. So when you specify a second texture to be attached to COLOR_ATTACHMENT0, the first one automatically gets un-attached.
If you want to have two attachments, they will need to use different attachment points:
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT1,texture_id_framebuffer_objectid,0);
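Note that glDrawBuffers(2, draw_buffers) in the setup code already routes fragment outputs 0 and 1 to these two attachments, so the fragment shader used for the scene pass (not shown in the question) needs a matching output per attachment, along these lines (the output names here are placeholders):

// Hypothetical outputs for the scene-pass fragment shader: one per
// color attachment listed in glDrawBuffers().
layout(location = 0) out vec4 out_color;    // goes to GL_COLOR_ATTACHMENT0
layout(location = 1) out vec4 out_objectid; // goes to GL_COLOR_ATTACHMENT1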

imageStore() doesn't work on AMD hardware (OpenGL 4.2)

I tried this code on Nvidia hardware without any problem, but on AMD the imageStore() function doesn't seem to do anything (no GL error is thrown though, I checked).
Shader:
#extension GL_EXT_shader_image_load_store : require
layout(size4x32) uniform image2D A;
void main(void){
    vec4 output = vec4(0.111, 0.222, 0.333, 0.444);
    imageStore(A, ivec2(gl_FragCoord.xy - vec2(0.5, 0.5)), output);
}
Calling Program:
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "A"), id);
glBindImageTexture(id, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);

// Bind the fbo associated with the texture to run a shader per pixel
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, width, height);
glDrawBuffer(GL_NONE); // Forbid gl_FragColor to be modified

// Render a quad
draw();

// Then read the texture...
As suggested in another thread by Nicol Bolas (Trouble with imageStore() (OpenGL 4.3)), I tried to add some barriers to ensure that the memory is written before I read back the texture, but there is no change; the texture that imageStore is supposed to write to is not modified.
void main(void){
    vec4 output = vec4(0.111, 0.222, 0.333, 0.444);
    memoryBarrier();
    imageStore(A, ivec2(gl_FragCoord.xy), output);
    memoryBarrier();
}
In the main program:
...
draw();
glMemoryBarrierEXT(GL_ALL_BARRIER_BITS);
...
On the other hand, if I remove glDrawBuffer(GL_NONE) to simply output my value using gl_FragColor, it works as usual:
void main(void){
    gl_FragColor = vec4(0.111, 0.222, 0.333, 0.444);
}
but I really need to do it with imageStore since I want to use scatter writes.
I also tried to use imageLoad and didn't have any problem. What is happening with this imageStore function?
Any ideas?
I probably had the same problem with an AMD card as you. I then looked at the source code of the OpenGL Sample Pack at: http://www.g-truc.net/project-0026.html#menu
In studying the source code related to imageStore, I found that adding the two following lines for the texture made my code work on AMD (the code is in Java):
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST);
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_NEAREST);
If it does not work for you, just compare your code with the OpenGL Sample Pack to find out more.
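In plain C++/OpenGL the same fix looks like this; my reading (an assumption, not stated in the original answer) is that without mipmaps the default GL_NEAREST_MIPMAP_LINEAR minification filter leaves the texture mipmap-incomplete, which stricter drivers reject for image access:

// The same fix in C++: use non-mipmapped filters so the texture is
// complete before it is bound with glBindImageTexture().
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);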

Texture rendering and VBOs [OpenGL/SDL/C++]

So, I've been working on a little game project for a bit and I've hit a snag that's annoying me to no end. I load an obj file which then gets rendered after being put into a VBO. This part works fine, no problemo. However, I've been trying to get it to render the accompanying texture with the supplied UVs, with no success. Currently, I just get a matte green colouration on my model. Upon investigating it in GDE, I've seen that the texture gets loaded fine and occupies the GL_TEXTURE0 unit, so that's not the issue. I believe it may be my binding, but I have no idea why it would fail...
void Model_Man::render_models()
{
    for(int x = 0; x < models.size(); x++)
    {
        if(models.at(x).visible == true)
        {
            glBindBuffer(GL_ARRAY_BUFFER, models.at(x).t_buff);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, models.at(x).i_buff);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, 0);
            glClientActiveTexture(GL_TEXTURE0);
            glTexCoordPointer(2, GL_FLOAT, 0, &models.at(x).uvs[0]);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glActiveTexture(GL_TEXTURE0);
            int tex_loc = glGetUniformLocation(models.at(x).shaderid, "color_texture");
            glUniform1i(tex_loc, GL_TEXTURE0);
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, models.at(x).mats.at(0).texid);
            c_render.use_program(models.at(x).shaderid);
            glDrawElements(GL_TRIANGLES, models.at(x).f_index.size()*3, GL_UNSIGNED_INT, 0);
            c_render.use_program();
            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisable(GL_TEXTURE_2D);
        }
    }
}
And my shader files...
Shader.frag
uniform sampler2D color_texture;
void main() {
    // Set the output color of our current pixel
    gl_FragColor = texture2D(color_texture, gl_TexCoord[0].st);
}
Shader.vert
void main() {
    gl_TexCoord[0] = gl_MultiTexCoord0;
    // Set the position of the current vertex
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
And yes, I know I'm currently being horribly inefficient with my render loop :P but I'm already planning on refactoring it; I am just attempting to get this single model to draw correctly with everything I'm aiming to do. I have no clue why it wouldn't be rendering with the texture correctly applied - unless it's because I need to interleave my arrays, but I'm still supplying it with UV data, so I don't see why it fails.
The call that sets the sampler uniform should not pass GL_TEXTURE0, but rather 0.
Indeed:
glUniform1i(location, 0)
For setting up a sampler uniform, do:
glUseProgram(progId);
// ...
glActiveTexture(GL_TEXTURE0 + texUnit);
glBindTexture(GL_TEXTURE_2D, texId);
glUniform1i(location, texUnit);
The main concept is that uniform variables are shader program state (they are maintained until you re-link the program or reset the uniform value). Without binding a program, glUniform1i will fail, since there's no shader program on which it can set the uniform value!
As a general advice, call glGetError after each OpenGL call to detect these conditions. Most of those calls can be removed by the preprocessor in release builds.
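For instance, a minimal helper along those lines (a sketch; the macro name is arbitrary):

#include <cstdio>

// Minimal glGetError wrapper; expands to nothing in release builds.
#ifdef NDEBUG
  #define GL_CHECK() ((void)0)
#else
  #define GL_CHECK()                                              \
      do {                                                        \
          GLenum err = glGetError();                              \
          if (err != GL_NO_ERROR)                                 \
              std::fprintf(stderr, "GL error 0x%04X at %s:%d\n",  \
                           err, __FILE__, __LINE__);              \
      } while (0)
#endif

Sprinkle GL_CHECK(); after suspicious GL calls while debugging.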
Well, I found out that the big issue was that while I was binding a texture, I wasn't actually setting it in a way that made it understood as being in use. Setting glClientActiveTexture(GL_TEXTURE0 + texUnit); in combination with glActiveTexture() ended up being the final solution.
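In code, the working sequence looks roughly like this (sketched from the description above, for texture unit 0; not verbatim from my project):

// Rough sketch of the corrected binding order for texture unit 0.
int texUnit = 0;
glActiveTexture(GL_TEXTURE0 + texUnit);       // server-side unit for the sampler
glClientActiveTexture(GL_TEXTURE0 + texUnit); // client-side unit for glTexCoordPointer
glBindTexture(GL_TEXTURE_2D, models.at(x).mats.at(0).texid);
c_render.use_program(models.at(x).shaderid);  // bind the program first...
glUniform1i(tex_loc, texUnit);                // ...then set the sampler to the unit index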