GLSL, combining 2D and 3D textures

I am trying to blend a 3D texture with a 2D one to make a terrain. The 3D texture has moss, sand, snow and the like, interpolated to enhance the illusion of heights. The 2D texture currently only has an orange line across meant to be a "road". This is my fragment shader:
#version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;

void main() {
    vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
    vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
    // Yes, I am aware I am only returning the 2D texture value
    // However this is for testing purposes only
    // Doing gl_FragColor = diffuse3D + diffuse2D;
    // Or any other operation returns the 3D texture only
    gl_FragColor = diffuse2D;
}
And this is my drawing call:
void Terrain::Draw() {
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(glm::vec3), &v[0].x);
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, sizeof(glm::vec3), &n[0].x);

    s.enable(); // simple glUseProgram call within my Shader object

    glClientActiveTexture(GL_TEXTURE0);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, id_texture);
    s.setSampler("mainTexture", 0); // Calls to glGetUniformLocation and glUniform1i
    glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);

    glClientActiveTexture(GL_TEXTURE1);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, id_texture_road);
    s.setSampler("roadTexture", 1); // Same as above
    glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);

    glPushMatrix();
    glScalef(scalex, scaley, scalez);
    glDrawElements(GL_TRIANGLES, sizei, GL_UNSIGNED_INT, index);
    glPopMatrix();

    s.disable(); // glUseProgram(0)

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisable(GL_TEXTURE_3D);
    glDisable(GL_TEXTURE_2D);
}
Here is the code for my setSampler() method:
void Shader::setSampler(std::string name, GLint value)
{
    GLuint loc = glGetUniformLocation(program, name.c_str());
    if (loc > 0)
    {
        glUniform1i(loc, value);
    }
}
The result is a solid black color upon the whole terrain. I have sadly been unable to find information on sampler3D, but the diffuse3D variable in my fragment shader does compute to the correct texture, and my texture coordinates for the 2D texture are being correctly sent to the fragment shader (I know this because I used them to color the terrain for testing and got a smooth gradient from green to red, which is what you would expect using only the first two coordinates). I also checked the values passed to my setSampler() method and I do get 0 and 1, with the locations 1 and 2 corresponding to them.
All of the help I can find on this issue is in the vicinity of the advice provided here, which I have already implemented.
Can anybody assist?
EDIT: So, just for kicks, I swapped my texture units so the 2D texture became unit 0 and the 3D texture became unit 1. Now only the 2D texture is rendered. But my texture units are passed correctly (at least in appearance) to the shader. Any clues?

vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
gl_FragColor = diffuse2D;
Let's pretend that this wasn't using shaders. Let's pretend you were just writing a function in C++ that returns a value.
int FuncName(int val1, int val2)
{
    int test1 = Compute(val1);
    int test2 = Compute(val2);
    return test2;
}
What will this function return? Obviously, it returns Compute(val2), completely ignoring the value of test1. It won't magically combine test1 and test2. They're separate values, and therefore, they remain separate unless you explicitly combine them.
Just like your fragment shader.
Shaders aren't magic; they're programming. They only do what you tell them to. So if you say, "get a value from a texture and then don't do anything with it", it will dutifully do exactly that. Though odds are good that the compiler will optimize out the texture fetch entirely.
If you want a "blend" of two textures, you must blend them. You must fetch from each texture, then use both values to compute a new color.
How exactly you do that depends entirely on you. Maybe your 2D texture has some alpha that represents how much of the 2D texture to show. I don't know; you didn't describe what your texture looks like or how exactly you plan to show the road in some places and not in others.
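For illustration only, here is a minimal sketch of one possible blend, assuming the road image carries an alpha channel that marks where the road should appear (if it does not, you could derive a blend factor from its color instead):
#version 420 compatibility
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;

void main() {
    vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
    vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
    // Show the road where its alpha is 1, the terrain where it is 0.
    gl_FragColor = mix(diffuse3D, diffuse2D, diffuse2D.a);
}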

The reason you get a black color is simply that you don't set the proper uniform variables.
#version 420
uniform sampler3D mainTexture;
uniform sampler2D roadTexture;

void main() {
    vec4 diffuse3D = texture3D(mainTexture, gl_TexCoord[0].stp);
    vec4 diffuse2D = texture2D(roadTexture, gl_TexCoord[1].st);
    gl_FragColor = diffuse2D;
}
What this shader is doing is looking up the value of 'roadTexture' and displaying it. Unfortunately, it has no clue which texture unit 'roadTexture' is currently bound to, and thus will access texture unit 0, where your 3D texture is bound - so you're trying to access a 3D texture with 2D texcoords, which may well return all black. You'll need to query the uniform locations of your textures with glGetUniformLocation and then set them to the correct texture units (0 and 1, respectively) with glUniform1i.
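For example, a minimal sketch of that setup, assuming program is your linked shader object (glUniform1i affects the currently bound program, so call it after glUseProgram):
glUseProgram(program);

GLint mainLoc = glGetUniformLocation(program, "mainTexture");
GLint roadLoc = glGetUniformLocation(program, "roadTexture");

if (mainLoc >= 0) glUniform1i(mainLoc, 0); // mainTexture samples texture unit 0
if (roadLoc >= 0) glUniform1i(roadLoc, 1); // roadTexture samples texture unit 1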
EDIT: Also, you're using deprecated functionality, so your shader version directive should be changed to #version 420 compatibility - the default is core.

You need to call glEnableClientState(GL_TEXTURE_COORD_ARRAY); again after you have made the second texture unit active with glClientActiveTexture(GL_TEXTURE1);
From http://www.opengl.org/sdk/docs/man2/xhtml/glEnableClientState.xml:
enabling and disabling GL_TEXTURE_COORD_ARRAY affects the active client texture unit.

Just solved this problem. Apparently you still need glActiveTexture() in addition to glClientActiveTexture(). This was the code that worked, for anyone who gets the same problem:
glClientActiveTexture(GL_TEXTURE0);
glActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_3D, id_texture);
s.setSampler("mainTexture", 0); // Calls to glGetUniformLocation and glUniform1i
glTexCoordPointer(3, GL_FLOAT, sizeof(glm::vec3), &t[0].x);

glClientActiveTexture(GL_TEXTURE1);
glActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, id_texture_road);
s.setSampler("roadTexture", 1); // Same as above
glTexCoordPointer(2, GL_FLOAT, sizeof(glm::vec2), &t2[0].x);

// Drawing Calls

glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glClientActiveTexture(GL_TEXTURE0);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glActiveTexture(GL_TEXTURE0);
Thanks for reading.

Related

OpenGL - blend two textures on the same object

I want to apply two textures on the same object (actually just a 2D rectangle) in order to blend them. I thought I would achieve that by simply calling glDrawElements with the first texture, then binding the other texture and calling glDrawElements a second time. Like this:
//create vertex buffer, frame buffer, depth buffer, texture sampler, build and bind model
//...
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ZERO);
glBlendEquation(GL_FUNC_ADD);
// Clear the screen
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Bind our texture in Texture Unit 0
GLuint textureID;
//create or load texture
//...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
// Set our sampler to use Texture Unit 0
glUniform1i(textureSampler, 0);
// Draw the triangles !
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);
//second draw call
GLuint textureID2;
//create or load texture
//...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID2);
// Set our sampler to use Texture Unit 0
glUniform1i(textureSampler, 0);
// Draw the triangles !
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);
Unfortunately, the 2nd texture is not drawn at all and I only see the first texture. If I call glClear between the two draw calls, it correctly draws the 2nd texture.
Any pointers? How can I force OpenGL to draw on the second call?
As an alternative to the approach you have followed so far, I would like to suggest using two texture samplers within your GLSL shader and performing the blending there. This way, you would be done with just one draw call, thus reducing CPU/GPU interaction. To do so, just define two texture samplers in your shader like
layout(binding = 0) uniform sampler2D texture_0;
layout(binding = 1) uniform sampler2D texture_1;
Alternatively, you can use an array texture:
layout(binding = 0) uniform sampler2DArray textures;
In your application, set up the textures and samplers using
enum Sampler_Unit{BASE_COLOR_S = GL_TEXTURE0 + 0, NORMAL_S = GL_TEXTURE0 + 2};
glActiveTexture(Sampler_Unit::BASE_COLOR_S);
glBindTexture(GL_TEXTURE_2D, textureBuffer1);
glTexStorage2D( ....)
glActiveTexture(Sampler_Unit::NORMAL_S);
glBindTexture(GL_TEXTURE_2D, textureBuffer2);
glTexStorage2D( ....)
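The blend itself then happens in the fragment shader. Here is a minimal sketch, assuming texCoord is the interpolated UV from your vertex shader and using a simple 50/50 mix (the weights are up to you):
#version 420

layout(binding = 0) uniform sampler2D texture_0;
layout(binding = 1) uniform sampler2D texture_1;

in vec2 texCoord;
layout(location = 0) out vec4 out_color;

void main() {
    vec4 c0 = texture(texture_0, texCoord);
    vec4 c1 = texture(texture_1, texCoord);
    out_color = mix(c0, c1, 0.5); // equal-weight blend of the two textures
}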
Thanks to @tkausl for the tip.
I had depth testing enabled during the initialization phase.
// Enable depth test
glEnable(GL_DEPTH_TEST);
// Accept fragment if it is closer to the camera than the former one
glDepthFunc(GL_LESS);
The option needs to be disabled in my case, for the blend operation to work.
//make sure to disable depth test
glDisable(GL_DEPTH_TEST);

OpenGL: How can I pass multiple textures to a shader with one variable?

I loaded the material and texture information from an OBJ file based on this code (https://github.com/Bly7/OBJ-Loader). Now I want to load Sponza and render it. However, there are 10 or more textures, and even if all of them are passed to the shader, each one still needs to correspond to the appropriate vertices. Looking at the result of loading the OBJ file, textures are assigned to each part.
Example.
Mesh 0 : Pillar
position0(x, y, z), position1(x, y, z), uv0(x, y) ...
diffuse texture : tex0.png
Mesh 1 : Wall
position0(x, y, z), position1(x, y, z), uv0(x, y) ...
diffuse texture : tex1.png
.
.
.
The textures are kept in an array, and each has a corresponding mesh index. My idea is that, when passing vertex information to the shader, it should work if I split the data up per mesh and pass the matching texture at the same time. However, I'm not sure whether this idea is correct, and the several methods I've tried don't work.
This is the current simple code.
main.cpp :
do {
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture[i]); // texture[] : Array of textures loaded from obj file.
    glUniform1i(glGetUniformLocation(shaderID, "myTex"), 0);

    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, vertex_position);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

    glEnableVertexAttribArray(1);
    glBindBuffer(GL_ARRAY_BUFFER, texCoord);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, element);
    glDrawElements(GL_TRIANGLES, element_indices.size(), GL_UNSIGNED_INT, (void*)0);
} while (glfwWindowShouldClose(window) == 0);
Fragment shader :
layout(location = 0) out vec4 out_color;

// from vertex shader
in vec2 texCoord;

uniform sampler2D myTex;

void main() {
    out_color = texture(myTex, texCoord);
}
I want the "i" in the code above to correspond to the mesh index from the loader. Please let me know if my idea is wrong or if there is another way.
Since your model has only one texture per mesh, I can suggest this simple code:
do {
    glActiveTexture(GL_TEXTURE0);
    glUniform1i(glGetUniformLocation(shaderID, "myTex"), 0);

    for (unsigned int i = 0; i < mesh_count; i++) {
        glcall(glBindTexture(type, texture[i]));
        // Bind vertex and index (or element) buffer and setup vertex attribute pointers
        // Draw mesh
    }
} while (window_open);
The code is mostly self-explanatory. It first activates texture unit 0, then for every mesh it binds the texture, the vertex buffer, and the index (or element) buffer, and does any preparation needed to draw the mesh. Then it issues the draw call.
Note that this is a very basic example. Most models would look weird with this code. I would recommend this tutorial from LearnOpenGL, which explains this more broadly but in an easy way.
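To make the per-mesh part concrete, here is a sketch of that loop body; vertex_position[i], texCoord[i], element[i] and element_counts[i] are hypothetical per-mesh buffer handles and index counts (your loader's actual containers will differ):
for (unsigned int i = 0; i < mesh_count; i++) {
    // Each mesh samples its own diffuse texture through the single "myTex" sampler.
    glBindTexture(GL_TEXTURE_2D, texture[i]);

    // Per-mesh vertex data (hypothetical per-mesh buffer objects).
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, vertex_position[i]);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

    glEnableVertexAttribArray(1);
    glBindBuffer(GL_ARRAY_BUFFER, texCoord[i]);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

    // Per-mesh indices and draw call.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, element[i]);
    glDrawElements(GL_TRIANGLES, element_counts[i], GL_UNSIGNED_INT, (void*)0);
}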

Multiple images of same mesh without duplicate triangle transfers

I take multiple images of the same mesh using OpenGL, GLEW and GLFW. The mesh (triangles) doesn't change in each shot, only the ModelViewMatrix does.
Here's the important code of my mainloop:
for (int i = 0; i < number_of_images; i++) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* set GL_MODELVIEW matrix depending on i */

    glBegin(GL_TRIANGLES);
    for (Triangle &t : mesh) {
        for (Point &p : t) {
            glVertex3f(p.x, p.y, p.z);
        }
    }
    glEnd();

    glReadPixels(/*...*/); // get picture and store it somewhere
    glfwSwapBuffers();
}
As you can see, I set/transfer the triangle vertices for each shot I want to take. Is there a solution in which I only need to transfer them once? My mesh is quite large, so this transfer takes quite some time.
In the year 2016 you must not use glBegin/glEnd. No way. Use Vertex Array Objects instead, and use custom vertex and/or geometry shaders to reposition and modify your vertex data. Using these techniques, you will upload your data to the GPU once, and then you'll be able to draw the same mesh with various transformations.
Here is an outline of how your code may look:
// 1. Initialization.

// Object handles:
GLuint vao;
GLuint verticesVbo;

// Generate and bind vertex array object.
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

// Generate a buffer object.
glGenBuffers(1, &verticesVbo);

// Enable vertex attribute number 0, which
// corresponds to vertex coordinates in older OpenGL versions.
const GLuint ATTRIBINDEX_VERTEX = 0;
glEnableVertexAttribArray(ATTRIBINDEX_VERTEX);

// Bind buffer object.
glBindBuffer(GL_ARRAY_BUFFER, verticesVbo);

// Mesh geometry. In your actual code you probably will generate
// or load these data instead of hard-coding.
// This is an example of a single triangle.
GLfloat vertices[] = {
    0.0f, 0.0f, -9.0f,
    0.0f, 0.1f, -9.0f,
    1.0f, 1.0f, -9.0f
};

// Determine vertex data format.
glVertexAttribPointer(ATTRIBINDEX_VERTEX, 3, GL_FLOAT, GL_FALSE, 0, 0);

// Pass actual data to the GPU.
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*3*3, vertices, GL_STATIC_DRAW);

// Initialization complete - unbinding objects.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);

// 2. Draw calls.
while(/* draw calls are needed */) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindVertexArray(vao);

    // Set transformation matrix and/or other
    // transformation parameters here using glUniform* calls.

    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(0); // Unbinding just as an example in case if some other code will bind something else later.
}
And a vertex shader may look like this:
#version 330 core

layout(location = 0) in vec3 vertex_pos;

uniform mat4 viewProjectionMatrix; // Assuming you set this before glDrawArrays.

void main(void) {
    gl_Position = viewProjectionMatrix * vec4(vertex_pos, 1.0f);
}
Also take a look at this page for a good modern accelerated graphics book.
@BDL already commented that you should abandon the immediate mode drawing calls (glBegin … glEnd) and switch to Vertex Array drawing (glDrawElements, glDrawArrays) that fetch their data from Vertex Buffer Objects (VBOs). @Sergey mentioned Vertex Array Objects in his answer, but those are actually state containers for VBOs.
A very important thing you have to understand – and the way you asked your question, it's apparently something you're not aware of yet – is that OpenGL does not deal with "meshes", "scenes" or the like. OpenGL is just a drawing API. It draws points… lines… and triangles… one at a time… with no connection between them whatsoever. That's it. So when you show multiple views of the "same" thing, you must draw it several times. There's no way around this.
Most recent versions of OpenGL support multiple viewport rendering, but it still takes a geometry shader to multiply the geometry into several pieces to be drawn.
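Tying this back to the first answer's sketch: with the geometry uploaded once, the per-image loop from the question reduces to updating a uniform and redrawing. In this sketch, program is assumed to be your linked shader program containing the vertex shader above, and computeViewProjectionFor() is a hypothetical helper that fills a column-major 4x4 matrix for view i:
GLint viewProjLoc = glGetUniformLocation(program, "viewProjectionMatrix");
glUseProgram(program);

for (int i = 0; i < number_of_images; i++) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindVertexArray(vao);

    // Only the transformation changes per shot; the vertex data stays on the GPU.
    GLfloat viewProjection[16];
    computeViewProjectionFor(i, viewProjection); // hypothetical helper
    glUniformMatrix4fv(viewProjLoc, 1, GL_FALSE, viewProjection);

    glDrawArrays(GL_TRIANGLES, 0, 3);

    glReadPixels(/*...*/); // get picture and store it somewhere, as before
    glfwSwapBuffers();
    glBindVertexArray(0);
}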

OpenGL, Shader Model 3.3 Texturing: Black Textures?

I've been banging my head against this for hours now, I'm sure it's something simple, but I just can't get a result. I've had to edit this code down a bit because I've built a little library to encapsulate the OpenGL calls, but the following is an accurate description of the state of affairs.
I'm using the following vertex shader:
#version 330

in vec4 position;
in vec2 uv;
out vec2 varying_uv;

void main(void)
{
    gl_Position = position;
    varying_uv = uv;
}
And the following fragment shader:
#version 330

in vec2 varying_uv;
uniform sampler2D base_texture;
out vec4 fragment_colour;

void main(void)
{
    fragment_colour = texture2D(base_texture, varying_uv);
}
Both shaders compile and the program links without issue.
In my init section, I load a single texture like so:
// Check for errors.
kt::kits::open_gl::Core<QString>::throw_on_error();

// Load an image.
QImage image("G:/test_image.png");
image = image.convertToFormat(QImage::Format_RGB888);

if(!image.isNull())
{
    // Load up a single texture.
    glGenTextures(1, &Texture);
    glBindTexture(GL_TEXTURE_2D, Texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, image.width(), image.height(), 0, GL_RGB, GL_UNSIGNED_BYTE, image.constBits());
    glBindTexture(GL_TEXTURE_2D, 0);
}

// Check for errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
You'll observe that I'm using Qt to load the texture. The calls to ::throw_on_error() check for errors in OpenGL (by calling Error()), and throw an exception if one occurs. No OpenGL errors occur in this code, and the image loaded using Qt is valid.
Drawing is performed as follows:
// Clear previous.
glClear(GL_COLOR_BUFFER_BIT |
        GL_DEPTH_BUFFER_BIT |
        GL_STENCIL_BUFFER_BIT);

// Use our program.
glUseProgram(GLProgram);

// Bind the vertex array.
glBindVertexArray(GLVertexArray);

/* ------------------ Setting active texture here ------------------- */
// Tell the shader which textures are which.
kt::kits::open_gl::gl_int tAddr = glGetUniformLocation(GLProgram, "base_texture");
glUniform1i(tAddr, 0);

// Activate the texture Texture(0) as texture 0.
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, Texture);
/* ------------------------------------------------------------------ */

// Draw vertex array as triangles.
glDrawArrays(GL_TRIANGLES, 0, 4);
glBindVertexArray(0);
glUseProgram(0);

// Detect errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
Similarly, no OpenGL errors occur, and a triangle is drawn to screen. However, it looks like this:
It occurred to me the problem may be related to my texture coordinates. So, I rendered the following image using s as the 'red' component, and t as the 'green' component:
The texture coordinates appear correct, yet I'm still receiving the black triangle of doom. What am I doing wrong?
I think it could be due to an incomplete initialization of your texture object. The default minification filter uses mipmaps (GL_NEAREST_MIPMAP_LINEAR), so if no mipmap levels are uploaded or generated the texture is incomplete and sampling it returns black.
Try initializing the texture MIN and MAG filters:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Moreover, I would suggest checking the size of the texture. If it is not a power of 2, then you have to set the wrapping mode to CLAMP_TO_EDGE:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
Black textures are often due to this issue; it is a very common problem.
Ciao
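As an aside (not from the answer above): rather than switching the MIN filter away from mipmapping, you could keep the default filter and generate the mipmap chain right after the upload in the init code, for example:
glBindTexture(GL_TEXTURE_2D, Texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, image.width(), image.height(), 0, GL_RGB, GL_UNSIGNED_BYTE, image.constBits());
glGenerateMipmap(GL_TEXTURE_2D); // makes the texture mipmap-complete for the default MIN filter
glBindTexture(GL_TEXTURE_2D, 0);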
In your fragment shader you're writing to a self-defined target:
fragment_colour = texture2D(base_texture, varying_uv);
If that's not to be gl_FragColor or gl_FragData[…], did you properly set the designated fragment data location?
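For reference, a minimal sketch of setting it, assuming GLProgram is the program object from the drawing code above; the binding has to happen before the program is linked (with only one out variable, most drivers assign it color number 0 automatically, so this may not be the culprit):
// Must be done before linking the program:
glBindFragDataLocation(GLProgram, 0, "fragment_colour");
glLinkProgram(GLProgram);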

Texture rendering and VBO's [OpenGL/SDL/C++]

So, I've been working on a little game project for a bit and I've hit a snag that's annoying me to no end. I load an obj file which then gets rendered after being put into a VBO. This part works fine, no problemo. However, I've been trying to get it to render the accompanying texture with the supplied UVs with no success. Currently, I just get a matte green colouration on my model. Upon investigating it in GDE, I've seen that texture gets loaded fine and occupies the GL_TEXTURE0 unit, so that's not the issue. I believe it may be my binding but I have no idea why this would fail...
void Model_Man::render_models()
{
    for(int x = 0; x < models.size(); x++)
    {
        if(models.at(x).visible == true)
        {
            glBindBuffer(GL_ARRAY_BUFFER, models.at(x).t_buff);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, models.at(x).i_buff);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, 0);

            glClientActiveTexture(GL_TEXTURE0);
            glTexCoordPointer(2, GL_FLOAT, 0, &models.at(x).uvs[0]);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);

            glActiveTexture(GL_TEXTURE0);
            int tex_loc = glGetUniformLocation(models.at(x).shaderid, "color_texture");
            glUniform1i(tex_loc, GL_TEXTURE0);
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, models.at(x).mats.at(0).texid);

            c_render.use_program(models.at(x).shaderid);
            glDrawElements(GL_TRIANGLES, models.at(x).f_index.size()*3, GL_UNSIGNED_INT, 0);
            c_render.use_program();

            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisable(GL_TEXTURE_2D);
        }
    }
}
And my shader files...
Shader.frag
uniform sampler2D color_texture;

void main() {
    // Set the output color of our current pixel
    gl_FragColor = texture2D(color_texture, gl_TexCoord[0].st);
}
Shader.vert
void main() {
    gl_TexCoord[0] = gl_MultiTexCoord0;
    // Set the position of the current vertex
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
And yes, I know I'm currently being horribly inefficient with my render loop :P but I'm already planning on refactoring it; I am just attempting to get this single model to draw correctly with everything I'm aiming to do. I have no clue why it wouldn't be rendering with the texture correctly applied - unless it's because I need to interleave my arrays, but I'm still supplying it with UV data, so I don't see why it fails.
The call that sets the sampler uniform should not pass GL_TEXTURE0, but actually 0.
Indeed:
glUniform1i(location, 0)
For setting up a sampler uniform, do:
glUseProgram(progId);
// ...
glActiveTexture(GL_TEXTURE0 + texUnit);
glBindTexture(GL_TEXTURE_2D, texId);
glUniform1i(location, texUnit);
The main concept is that uniform variables are part of the shader program state (they are maintained until you re-link the program or reset the uniform value). Without binding a program, glUniform1i will fail, since there is no shader program on which it can set the uniform value!
As a general piece of advice, call glGetError after each OpenGL call to detect these conditions. Most of those calls can be removed by the preprocessor in release builds.
Well, it turned out that the big issue was that while I was binding a texture, I wasn't actually setting it up in a way that made it understood as being in use. Setting glClientActiveTexture(GL_TEXTURE0 + texUnit); in combination with glActiveTexture(); ended up being the final solution.
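A minimal sketch of that combination for a single unit, folded into the render loop above (texUnit is 0 here, matching the color_texture sampler being set to 0; the uniform is set after the program is made current, as the previous answer explains):
int texUnit = 0;

glClientActiveTexture(GL_TEXTURE0 + texUnit); // selects the unit for the texcoord array
glTexCoordPointer(2, GL_FLOAT, 0, &models.at(x).uvs[0]);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

glActiveTexture(GL_TEXTURE0 + texUnit);       // selects the unit the texture is bound to
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, models.at(x).mats.at(0).texid);

c_render.use_program(models.at(x).shaderid);
glUniform1i(glGetUniformLocation(models.at(x).shaderid, "color_texture"), texUnit);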