How to properly align NPOT textures with odd row pixel width? - c++

I am using LUT files stored in .cube files for image post processing.
The LUT.cube file layout may look like this:
TITLE
LUT_3D_SIZE 2
1.000000 1.000000 1.000000 -> maps a black input pixel to a white output pixel
1.000000 0.000000 0.000000
0.000000 1.000000 0.000000
0.000000 0.000000 1.000000
1.000000 1.000000 0.000000
1.000000 0.000000 1.000000
0.000000 1.000000 1.000000
0.000000 0.000000 0.000000 -> maps a white input pixel to a black output pixel
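For illustration, a minimal loader for this layout could look roughly like the sketch below (a hypothetical parser, not the loader actually used; it also skips DOMAIN_MIN/MAX lines, which some .cube files contain):

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch: read LUT_3D_SIZE and the RGB triples into a flat float vector.
bool loadCubeLUT(const std::string& path, std::vector<float>& data, int& size)
{
    std::ifstream file(path);
    if (!file) return false;
    size = 0;
    std::string line;
    while (std::getline(file, line))
    {
        if (line.empty() || line[0] == '#') continue;
        std::istringstream iss(line);
        std::string token;
        iss >> token;
        if (token == "TITLE" || token == "DOMAIN_MIN" || token == "DOMAIN_MAX") continue;
        if (token == "LUT_3D_SIZE") { iss >> size; continue; }
        // Anything else is assumed to be an "r g b" data line.
        float g = 0.0f, b = 0.0f;
        data.push_back(std::stof(token));
        iss >> g >> b;
        data.push_back(g);
        data.push_back(b);
    }
    // A valid 3D LUT holds size^3 RGB triples.
    return size > 0 && data.size() == static_cast<size_t>(size) * size * size * 3;
}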
I load the raw data from the file into a data vector (this part is 100% correct), then upload it to a GL_TEXTURE_3D texture, bind it with glActiveTexture(GL_TEXTURE0_ARB + unit); and sample it in the fragment shader as a sampler3D through the GLSL texture(...) function, which gives me interpolation for free.
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_3D, texID);
constexpr int MIPMAP_LEVEL = 0; // I do not need any mipmapping for now.
//glPixelStorei(GL_UNPACK_ALIGNMENT, 1); -> should I use this???
glTexImage3D(
GL_TEXTURE_3D,
MIPMAP_LEVEL,
GL_RGB,
size.x, // Size of the LUT 2, 33 etc., x == y == z.
size.y,
size.z,
0,
GL_RGB,
GL_FLOAT, // I assume this is correct given how the data is stored in the LUT .cube file.
data
);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAX_LEVEL, MIPMAP_LEVEL);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_3D, 0);
For simplicity, let's say the input image is restricted to RGB8 or RGB16 format.
This works OK when LUT_3D_SIZE is 2 (I suppose any POT size would work), producing the expected constant color transformation output.
However, these LUT files can have an NPOT LUT_3D_SIZE, typically an odd number like 33, and that is where I start to get problems and non-deterministic output (the output texture varies on every post-process run; I assume this is because the misaligned texture is filled with some random data).
How should I address this problem?
I guess I could use glPixelStorei(GL_PACK/UNPACK_ALIGNMENT, x); to compensate for the odd pixel row width (33), but I would like to understand the math behind it instead of trying to pick some random alignment that magically works for me. Also, I am not sure this is the real problem I am facing, so...
For clarification, I am on desktop GL with 3.3+ available, on an Nvidia card.

So, long story short: the problem was that something somewhere in the rest of the code was setting GL_UNPACK_ALIGNMENT to 8 (found using glGetIntegerv(GL_UNPACK_ALIGNMENT, &align)), which does not divide the actual row byte size evenly, so the texture was malformed.
The actual row byte size is 33 * 3 * sizeof(GLfloat) (width * 3 (RGB) * 4 bytes) = 396 bytes, which is divisible by 1, 2 and also 4 (so any of these alignments is valid if set by an image loader), but not by 8. In the end it turned out that the upload works with all of the settings 1, 2 and 4, so I must have missed something (or forgotten to recompile, or whatever) when I earlier saw problems with GL_UNPACK_ALIGNMENT set to 1; it is a valid option and now works properly, the same as the values 2 and 4.
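For completeness, a defensive version of the upload that pins the unpack state down explicitly could look like this (a sketch; restoring the previous value afterwards is optional, but it keeps whatever the image loader elsewhere in the code expects):

// Row size for a 33^3 RGB float LUT: 33 texels * 3 channels * 4 bytes = 396 bytes.
// 396 is divisible by 1, 2 and 4 but not by 8, so an alignment of 8 corrupts the upload.
GLint previousAlignment = 4;
glGetIntegerv(GL_UNPACK_ALIGNMENT, &previousAlignment);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // 1 is always safe for tightly packed client data

glBindTexture(GL_TEXTURE_3D, texID);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB, size.x, size.y, size.z,
             0, GL_RGB, GL_FLOAT, data);
glBindTexture(GL_TEXTURE_3D, 0);

glPixelStorei(GL_UNPACK_ALIGNMENT, previousAlignment); // restore the previous state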

Related

OpenGL, render to texture with floating point color without clipping value

I am not really sure what the English name for what I am trying to do is; please tell me if you know.
In order to run some physically based lighting calculations, I need to write floating-point data to a texture using one OpenGL shader and read this data back in another OpenGL shader, but the data I want to store may be less than 0 or above 1.
To do this, I set up a framebuffer that renders to this texture as follows (this is C++):
//Set up the light map we will use for lighting calculation
glGenFramebuffers(1, &light_Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, light_Framebuffer);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);//Needed for light blending (true additive)
glGenTextures(1, &light_texture);
glBindTexture(GL_TEXTURE_2D, light_texture);
//Initialize empty, and at the size of the internal screen
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_FLOAT, 0);
//No interpolation, I want pixelation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//Now the light framebuffer renders to the texture we will use to calculate dynamic lighting
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, light_texture, 0);
GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers);//Color attachment 0 as before
Notice that I use type GL_FLOAT and not GL_UNSIGNED_BYTE; according to this discussion, a floating-point texture should not be clipped between 0 and 1.
Now, just to test that this is true, I simply set the color somewhere outside this range in the fragment shader which creates this texture:
#version 400 core
out vec4 fragColor; // gl_FragColor is not available in the core profile, so use an explicit output
void main()
{
    fragColor = vec4(2.0, -2.0, 2.0, 2.0);
}
After rendering to this texture, I send this texture to the program which should use it like any other texture:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, light_texture );//This is the texture I rendered to
glUniform1i(surf_lightTex_ID , 1);//This is the ID in the main display program
Again, just to check that this is working I have replaced the fragment shader with one which tests that the colors have been saved.
#version 400 core
uniform sampler2D lightSampler;
in vec2 fragment_pos_uv; // UV passed from the vertex shader
out vec4 color;
void main()
{
    color = vec4(0, 0, 0, 1);
    if (texture(lightSampler, fragment_pos_uv).r > 1.0)
        color.r = 1.0;
    if (texture(lightSampler, fragment_pos_uv).g < 0.0)
        color.g = 1.0;
}
If everything worked, everything should turn yellow, but needless to say this only gives me a black screen. So I tried the following:
#version 400 core
uniform sampler2D lightSampler;
in vec2 fragment_pos_uv; // UV passed from the vertex shader
out vec4 color;
void main()
{
    color = vec4(0, 0, 0, 1);
    if (texture(lightSampler, fragment_pos_uv).r == 1.0)
        color.r = 1.0;
    if (texture(lightSampler, fragment_pos_uv).g == 0.0)
        color.g = 1.0;
}
And I got
The parts which are green are in shadow in the testing scene, never mind them; the main point is that all the channels of light_texture get clamped to between 0 and 1, which they should not be. I am not sure if the data is saved correctly and only clipped when I read it, or if the data is clipped to [0, 1] when it is saved.
So, my question is: is there some way to read and write an OpenGL texture such that the stored data may be above 1 or below 0?
Also, no, I cannot fix the problem by using a 32-bit integer per channel and applying a sigmoid function before saving and its inverse after reading the data; that would break alpha blending.
The type and format arguments of glTexImage2D only specify the format of the source image data; they do not affect the internal format of the texture. You must request a specific (sized) internal format, e.g. GL_RGBA32F:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
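A minimal version of the texture setup with the sized internal format, plus a completeness check, might look like this (a sketch; variable names follow the question, and GL_FLOAT is used for the source type purely for symmetry, since no pixel data is actually uploaded):

glGenTextures(1, &light_texture);
glBindTexture(GL_TEXTURE_2D, light_texture);
// Sized float internal format: values outside [0, 1] are stored as-is instead of being clamped.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, light_texture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // Handle the error; GL_RGBA32F is color-renderable on any GL 3.0+ implementation.
}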

OpenGL - Draw object with image texture from .obj/.mtl

I'm attempting to create a cube with an image texture using OpenGL with Google Cardboard for Android. My code is largely based on Google's demo code here. I can render the black, texture-less cube just fine, but the texture is what has me stuck.
FILES
Here is the .obj file:
mtllib 9c9ab3c3-ea26-4524-88f5-a524b6bb6057.mtl
g Mesh1 Model
usemtl Tile_Hexagon_White
v 2.16011 0 -2.27458
vt -7.08697 8.39536
vn 0 -1 -0
v 1.16011 0 -1.27458
vt -3.80613 4.70441
v 1.16011 0 -2.27458
vt -3.80613 8.39536
f 1/1/1 2/2/1 3/3/1
v 2.16011 0 -1.27458
vt -7.08697 4.70441
f 2/2/1 1/1/1 4/4/1
vt 7.46254 0
vn 1 0 -0
v 2.16011 1 -1.27458
vt 4.1817 3.69094
vt 4.1817 0
f 1/5/2 5/6/2 4/7/2
v 2.16011 1 -2.27458
vt 7.46254 3.69094
f 5/6/2 1/5/2 6/8/2
vt -7.08697 0
vn 0 0 -1
v 1.16011 1 -2.27458
vt -3.80613 3.69094
vt -7.08697 3.69094
f 1/9/3 7/10/3 6/11/3
vt -3.80613 0
f 7/10/3 1/9/3 3/12/3
vt -4.1817 0
vn -1 0 -0
vt -7.46254 3.69094
vt -7.46254 0
f 2/13/4 7/14/4 3/15/4
v 1.16011 1 -1.27458
vt -4.1817 3.69094
f 7/14/4 2/13/4 8/16/4
vt 3.80613 0
vn 0 0 1
vt 7.08697 3.69094
vt 3.80613 3.69094
f 2/17/5 5/18/5 8/19/5
vt 7.08697 0
f 5/18/5 2/17/5 4/20/5
vt 7.08697 4.70441
vn 0 1 -0
vt 3.80613 8.39536
vt 3.80613 4.70441
f 5/21/6 7/22/6 8/23/6
vt 7.08697 8.39536
f 7/22/6 5/21/6 6/24/6
And here is the .mtl file:
newmtl Tile_Hexagon_White
Ka 0.000000 0.000000 0.000000
Kd 0.835294 0.807843 0.800000
Ks 0.330000 0.330000 0.330000
map_Kd 9c9ab3c3-ea26-4524-88f5-a524b6bb6057/Tile_Hexagon_White.jpg
newmtl ForegroundColor
Ka 0.000000 0.000000 0.000000
Kd 0.000000 0.000000 0.000000
Ks 0.330000 0.330000 0.330000
The texture is just a repeating hexagonal pattern that I want to cover the entire cube:
CODE
Load the OBJ file and separate the data into arrays.
I have Vertices_[108], UV_[72], and Indices_[36]. I am confident that the program is functioning properly at this point, but I can provide the values if necessary.
After that, I load the image:
GLuint texture_id;
glGenTextures(1, &texture_id);
imageNum = texture_id;
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, imageNum);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
/*I don't really know how this block of code works but it's supposed to Load an image from the asset manager.
* env - the JNIEnv* environment
* java_asset_mgr - AAssetManager for fetching files
* path - the path and name of my texture
*/
jclass bitmap_factory_class =
env->FindClass("android/graphics/BitmapFactory");
jclass asset_manager_class =
env->FindClass("android/content/res/AssetManager");
jclass gl_utils_class = env->FindClass("android/opengl/GLUtils");
jmethodID decode_stream_method = env->GetStaticMethodID(
bitmap_factory_class, "decodeStream",
"(Ljava/io/InputStream;)Landroid/graphics/Bitmap;");
jmethodID open_method = env->GetMethodID(
asset_manager_class, "open", "(Ljava/lang/String;)Ljava/io/InputStream;");
jmethodID tex_image_2d_method = env->GetStaticMethodID(
gl_utils_class, "texImage2D", "(IILandroid/graphics/Bitmap;I)V");
jstring j_path = env->NewStringUTF(path.c_str());
RunAtEndOfScope cleanup_j_path([&] {
if (j_path) {
env->DeleteLocalRef(j_path);
}
});
jobject image_stream =
env->CallObjectMethod(java_asset_mgr, open_method, j_path);
jobject image_obj = env->CallStaticObjectMethod(
bitmap_factory_class, decode_stream_method, image_stream);
env->CallStaticVoidMethod(gl_utils_class, tex_image_2d_method, GL_TEXTURE_2D, 0,
image_obj, 0);
glGenerateMipmap(GL_TEXTURE_2D);
Draw:
//obj_program has a vertex shader and fragment shader attached.
glUseProgram(obj_program_);
std::array<float, 16> target_array = modelview_projection_target_.ToGlArray();
glUniformMatrix4fv(obj_modelview_projection_param_, 1, GL_FALSE,
target_array.data());
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, imageNum);
//DRAW
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, Vertices_.data());
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, Uv_.data());
glDrawElements(GL_TRIANGLES, Indices_.size(), GL_UNSIGNED_SHORT,
Indices_.data());
RESULT
I end up with a cube that is oddly textured and I am unsure what is causing it.
I am sure there will be questions so I will do my best to answer and provide any information that will help!
The issue was that my texture needed to be repeated: the vt coordinates in the .obj file go outside the [0, 1] range, so the texture has to tile across each face. These two lines were changed from
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
to
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
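For context on why this fixes it: the vt coordinates in the .obj file reach values like -7.08697 and 8.39536, far outside [0, 1]. A rough sketch of how the two wrap modes treat such a coordinate (per axis, ignoring filtering details):

#include <algorithm>
#include <cmath>

// GL_REPEAT keeps only the fractional part, so the pattern tiles across the face.
float repeatWrap(float u)  { return u - std::floor(u); }

// GL_CLAMP_TO_EDGE pins everything outside [0, 1] to the edge texel,
// which is why the out-of-range coordinates produced the odd-looking result.
float clampToEdge(float u) { return std::clamp(u, 0.0f, 1.0f); }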

OpenGL Compute Shader - glDispatchCompute() does not run

I'm currently working with a compute shader in OpenGL and my goal is to render from one texture onto another texture with some modifications. However, it does not seem like my compute shader has any effect on the textures at all.
After creating a compute shader I do the following
//Use the compute shader program
(*shaderPtr).useProgram();
//Get the uniform location for a uniform called "sourceTex"
//Then connect it to texture-unit 0
GLuint location = glGetUniformLocation((*shaderPtr).program, "sourceTex");
glUniform1i(location, 0);
//Bind buffers and call compute shader
this->bindAndCompute(bufferA, bufferB);
The bindAndCompute() function looks like this and its purpose is to ready the two buffers to be accessed by the compute shader and then run the compute shader.
void bindAndCompute(GLuint sourceBuffer, GLuint targetBuffer){
glBindImageTexture(
0, //Always bind to slot 0
sourceBuffer,
0,
GL_FALSE,
0,
GL_READ_ONLY, //Only read from this texture
GL_RGB16F
);
glBindImageTexture(
1, //Always bind to slot 1
targetBuffer,
0,
GL_FALSE,
0,
GL_WRITE_ONLY, //Only write to this texture
GL_RGB16F
);
//this->height is currently 960
glDispatchCompute(1, this->height, 1); //Call upon shader
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
}
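(For scale: the work-group size declared in the shader below is (960, 1, 1), and glDispatchCompute(1, this->height, 1) launches 1 × height × 1 groups, so the grid is 960 × height invocations in total, one per texel of a 960-texel-wide image; the dispatch size itself is therefore plausible.)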
And finally, here is the compute shader. For now it only tries to make the second texture completely white.
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable
layout (rgba16, binding=0) uniform image2D sourceTex; //Textures bound to 0 and 1 resp. that are used to
layout (rgba16, binding=1) uniform image2D targetTex; //acquire texture and save changes made to texture
layout (local_size_x=960 , local_size_y=1 , local_size_z=1) in; //Local work-group size
void main(){
vec4 result; //Vec4 to store the value to be written
ivec2 pxlPos = ivec2(gl_GlobalInvocationID.xy); //Get pxl-pos
/*
result = imageLoad(sourceTex, pxlPos);
...
*/
imageStore(targetTex, pxlPos, vec4(1.0f)); //Write white to texture
}
Now, bufferB starts out empty. When I run this, I expect bufferB to become completely white. However, after this code runs, bufferB remains empty. My conclusion is that either
A: The compute shader does not write to the texture
B: glDispatchCompute() is not run at all
However, I get no errors and the shader compiles as it should. I have checked that I bind the texture correctly when rendering: I bind bufferA, whose contents I already know, then run bindAndCompute(bufferA, bufferA) to turn bufferA white. However, bufferA is unaltered. So I have not been able to figure out why my compute shader has no effect. If anyone has any ideas on what I can try, it would be appreciated.
End note: This has been my first question asked on this site. I've tried to present only relevant information but I still feel like maybe it became too much text anyway. If there is feedback on how to improve the structure of the question that is welcome as well.
---------------------------------------------------------------------
EDIT:
The textures I send in as sourceBuffer and targetBuffer are defined as follows:
glGenTextures(1, buffer);
glBindTexture(GL_TEXTURE_2D, *buffer);
glTexImage2D(
GL_TEXTURE_2D,
0,
GL_RGBA16F, //Internal format
this->width,
this->height,
0,
GL_RGBA, //Format read
GL_FLOAT, //Type of values in read format
NULL //source
);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
The image format of the images you bind doesn't match the image format in the shader. You bind an RGB16F (48 bits per texel) texture, but state in the shader that it is of rgba16 format (64 bits per texel).
Formats have to match according to the rules given here. Assuming that you allocated the texture in OpenGL, this means that the total size of each texel has to match. Also note that 3-channel textures are (barring some rather strange exceptions) not supported by image load/store.
As a side note: the shader will execute and write if the texture format sizes match, but what you write might be garbage, because your textures are in 16-bit floating-point format (GL_RGBA16F) while you tell the shader that they are in 16-bit unsigned normalized format (rgba16). Although this doesn't directly matter inside the compute shader, it does matter if you read back the texture, access it through a sampler, or write data > 1.0f or < 0.0f into it. If you want 16-bit floats, use rgba16f in the compute shader.
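A matching pair, assuming 16-bit float storage is what is actually wanted, could look roughly like this (a sketch; the allocation line mirrors the question's own glTexImage2D call):

// C++ side: use the same sized internal format for allocation and for the image bindings.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glBindImageTexture(0, sourceBuffer, 0, GL_FALSE, 0, GL_READ_ONLY,  GL_RGBA16F);
glBindImageTexture(1, targetBuffer, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA16F);

// GLSL side: declare the images with the matching rgba16f layout qualifier.
// layout (rgba16f, binding = 0) uniform readonly  image2D sourceTex;
// layout (rgba16f, binding = 1) uniform writeonly image2D targetTex;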

OpenGL Skybox visible borders

I have my skybox showing:
But the edges of the box are visible as borders, which I don't want. I have already searched the internet and everything says GL_CLAMP_TO_EDGE should fix it, but I am still seeing the borders.
This is what I used for the texture loading:
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
Can anyone tell me what I am doing wrong?
EDIT:
The strange thing is that the borders only show at the top of the skybox, i.e. where a skybox face touches the roof of the box.
Here is an image of it:
I finally found the solution. It is a sneaky mistake in the texture itself: there is a black border around the texture, but you can barely see it unless you zoom in. So I removed the borders and it worked.
It is a texture coordinate floating-point error. If you use shaders, you can clamp the coordinates to a strict [0.0f, 1.0f] range. I cannot say whether there is a solution using plain OpenGL API calls, but shaders can handle this. Here is an example using HLSL 2.0 (NVIDIA Cg) for a post-screen shader:
float g_fInverseViewportWidth: InverseViewportWidth;
float g_fInverseViewportHeight: InverseViewportHeight;
struct VS_OUTPUT {
float4 Pos: POSITION;
float2 texCoord: TEXCOORD0;
};
VS_OUTPUT vs_main(float4 Pos: POSITION){
VS_OUTPUT Out;
// Clean up inaccuracies
Pos.xy = sign(Pos.xy);
Out.Pos = float4(Pos.xy, 0, 1);
// Image-space
Out.texCoord.x = 0.5 * (1 + Pos.x + g_fInverseViewportWidth);
Out.texCoord.y = 0.5 * (1 - Pos.y + g_fInverseViewportHeight);
return Out;
}
The sign routine is used here to force the texture coordinates to an exact [0, 1] specification. There is also a sign routine in GLSL that you may use. sign retrieves the sign of a vector or scalar, meaning -1 for negative and 1 for positive values; so the texture coordinate passed to the vertex shader must be specified as -1 for 0 and 1 for 1, and then you can use formulae like these for the actual texture coordinates:
Out.texCoord.x = 0.5 * (1 + Pos.x + g_fInverseViewportWidth);
Out.texCoord.y = 0.5 * (1 - Pos.y + g_fInverseViewportHeight);
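Since the answer notes that GLSL has the same sign routine, a rough GLSL translation of the vertex shader could look like this (a sketch, kept as a C++ source string; the uniform names simply mirror the HLSL above):

const char* cleanTexCoordVS = R"GLSL(
#version 330 core
layout(location = 0) in vec2 Pos;
uniform float g_fInverseViewportWidth;
uniform float g_fInverseViewportHeight;
out vec2 texCoord;
void main()
{
    vec2 p = sign(Pos); // snap to exactly -1 or +1
    gl_Position = vec4(p, 0.0, 1.0);
    texCoord.x = 0.5 * (1.0 + p.x + g_fInverseViewportWidth);
    texCoord.y = 0.5 * (1.0 - p.y + g_fInverseViewportHeight);
}
)GLSL";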
Here you can see the one-texel-wide inaccuracy in the texture:
Now with modified shader:

OpenGL textures beginner question - 1D texture creation?

EDIT
Ok, I added some changes to my texture rendering, and I'm now at a point where it doesn't look how I want it, but before I try to change anything I just want to be sure I'm on the right path. The problem I'm trying to fix is: I have 180000 vertices. Each of them can belong to one of 190 "classes". Each class can have a different color assigned at a different time. So I'm trying to create a texture with 190 colors, and for each of the 180000 vertices have a texture coordinate pointing to the corresponding class. Some code:
self.bufferTextureIndex = glGenBuffersARB(1)
glBindBufferARB(GL_ARRAY_BUFFER_ARB, self.bufferTextureIndex)
glBufferDataARB(GL_ARRAY_BUFFER_ARB, ADT.arrayByteCount(textureIndexes), ADT.voidDataPointer(textureIndexes), GL_STATIC_DRAW_ARB)
self.texture = glGenTextures(1)
glBindTexture(GL_TEXTURE_1D, self.texture)
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 190, 0, GL_RGB , GL_FLOAT, textureArray)
So textureIndexes is an array of floats in [0..1]. len(textureIndexes) is the number of vertices I'm using (180000). For the texture, textureArray contains 190 * 3 floats corresponding to the RGB of the colors I want for each class.
The drawing part:
glEnableClientState(GL_TEXTURE_COORD_ARRAY)
glBindBufferARB(GL_ARRAY_BUFFER_ARB, self.bufferTextureIndex)
glTexCoordPointer(1, GL_FLOAT, 0, None);
glBindTexture(GL_TEXTURE_1D, self.texture)
glEnable(GL_TEXTURE_1D)
if type == GL_POINTS:
    glDrawArrays(GL_POINTS, 0, len(self.vertices) / 3)
else:
    glDrawElements(GL_TRIANGLES, len(self.triangles), GL_UNSIGNED_SHORT, ADT.voidDataPointer(self.triangles))
So does this approach seem right? The result isn't what I'm expecting, but that might be due to the color codification I chose, and I can look further into that if the main approach is sound. I think it is most likely that the indexes are built wrong. To build them I have a file with a number between 0 and 190 giving the class of each vertex. So my index building so far was just: read the index, then divide it by 190 for each vertex to get a number in [0..1].
EDIT2
So I took your advice and used (index + 0.5) / 190 to generate my indices. I'm printing the length and values of the index array: it's 60000 and all values are numbers between 0 and 1, mostly between 0.3 and 0.95. But still all my vertices are the same color. So the only thing I haven't checked is the 1D texture generation. Maybe here is where I got it wrong:
i = 0
while i < 30:
    textureArray.append([1, 0, 0])
    i = i + 1
while i < 60:
    textureArray.append([1, 1, 0])
    i = i + 1
while i < 90:
    textureArray.append([1, 1, 1])
    i = i + 1
while i < 120:
    textureArray.append([0, 1, 1])
    i = i + 1
while i < 150:
    textureArray.append([0, 0, 1])
    i = i + 1
while i < 190:
    i = i + 1
    textureArray.append([0, 0, 0])
This is how I generate my texture array. It will not be the actual solution; it is just for testing. So my texture should be roughly 1/6 red, 1/6 yellow, and so on. The texture generation is as above:
self.texture = glGenTextures(1)
glBindTexture(GL_TEXTURE_1D, self.texture)
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT)
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_T, GL_REPEAT)
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 190, 0, GL_RGB , GL_FLOAT, textureArray)
Is this texture generation correct? Because even though my index range is as I mentioned before, all my vertices have the color of the very first color in the texture. The funny thing is that if I omit the glBindBufferARB(GL_ARRAY_BUFFER_ARB, self.bufferTextureIndex) from the drawing and let the normals take the place of the texture indices, I do get some different colors, but textureIndexes seems to always point to the very first color of my texture. I've uploaded two sample files with the actual values of textureIndices and normals. No idea why the first one doesn't work while the second one seems to work (I can't really verify whether they are the correct colors, but at least they are different).
http://www.megafileupload.com/en/file/315895/textureIndices-txt.html
http://www.megafileupload.com/en/file/315894/normalsTest-txt.html
EDIT3
So now my indices seems to work. Here is a sample image:
http://i.stack.imgur.com/yvlV3.png
However, the odd part is that, as you can see, the borders are not defined properly. Could this be influenced in any way by some of the parameters I pass to the texture, or should I triple-check my textureIndex creation?
At the moment you use your normals as texture coordinates, because self.bufferNormals was bound when calling glTexCoordPointer. 1D textures aren't just per-vertex colors; they are accessed with per-vertex texture coordinates, like 2D textures, otherwise they would be a useless substitute for per-vertex colors. Read some introductory material on OpenGL texturing, or texturing in general, if you don't understand that.
As said above, definitely not.
EDIT: According to your newest question (the one with the screenshot), keep in mind that when the vertices of a single triangle have different texture coordinates (in your case they would belong to different classes, which I suppose shouldn't happen), the texCoords are interpolated across the triangle and then used to fetch the texture color. So you have to make sure all vertices of a triangle have the same texCoord if you don't want this to happen (which I suppose you don't). That means you have to duplicate vertices along the "material class" borders. Perhaps you could get a quick and dirty solution by just setting glShadeModel(GL_FLAT), but that would also flatten lighting, and it is then not determined which class a border triangle belongs to.
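One detail worth spelling out, since it is what the (index + 0.5) / 190 coordinate from EDIT2 relies on: with GL_NEAREST filtering, texel i of an N-texel 1D texture covers the coordinate range [i/N, (i+1)/N), so its center lies at (i + 0.5) / N. A coordinate of exactly i/N sits on a texel boundary and can fall into the neighbouring texel through floating-point rounding, which is why sampling at texel centers is the safe choice for a lookup texture like this.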