Passing array data through textures in OpenGL - C++

So I'm trying to pass a bunch of vectors to the fragment shader, and apparently I should do it with a 1D texture. But when I try to access the passed vectors, the values are not what I expect.
How should I index the texture() function?
Passing the texture:
std::vector<vec3> triangles;
//triangles is already filled by this point
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_1D, texture);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB16F, Object::triangles.size(), 0, GL_RGB, GL_FLOAT, &Object::triangles[0]);
GLint textureLoc = glGetUniformLocation( getId(), "triangles" );
glUniform1f(textureLoc, 0);
setUniform((int)Object::triangles.size(), "triCount");
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
//draw a rectangle from -1,-1 to 1,1
fragment shader code:
uniform sampler1D triangles;
uniform int triCount;
struct Triangle{
    vec3 a, b, c;
    vec3 normal;
};
void main(){
    for(int i = 0; i < triCount; i++){ // for each triangle
        Triangle triangle;
        // set the points of the triangle
        triangle.a = vec3(texture(triangles, i));
        triangle.b = vec3(texture(triangles, i++));
        triangle.c = vec3(texture(triangles, i++));
        // set the normal vector of the triangle
        triangle.normal = vec3(texture(triangles, i++));
        // then I do stuff with the current triangle and return a color
    }
}
The array contains the 3 points and the normal vector of each of a number of triangles; that's why I read from the texture this way.
Edit:
glGetTexImage confirmed that the passed texture is correct.

When using texture, the texture coordinates are floating-point values in the range [0.0, 1.0]. Use texelFetch to perform a lookup of a single texel from the texture with integral texture coordinates in the range [0, width):
triangle.a = texelFetch(triangles, i*4, 0).xyz;
triangle.b = texelFetch(triangles, i*4+1, 0).xyz;
triangle.c = texelFetch(triangles, i*4+2, 0).xyz;
triangle.normal = texelFetch(triangles, i*4+3, 0).xyz;
Be aware that the computation of the texel indices in your shader code is incorrect: i++ increments the loop counter itself, so the reads overlap and whole triangles are skipped. Note also that a sampler uniform has to be set with glUniform1i rather than glUniform1f.
Alternatively, you can calculate the texture coordinate by dividing the index by the width of the texture. The size of a texture can be obtained with textureSize:
float width = float(textureSize(triangles, 0));
triangle.a = texture(triangles, (float(i*4)+0.5) / width).xyz;
triangle.b = texture(triangles, (float(i*4)+1.5) / width).xyz;
triangle.c = texture(triangles, (float(i*4)+2.5) / width).xyz;
triangle.normal = texture(triangles, (float(i*4)+3.5) / width).xyz;
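The addressing math above can be sanity-checked without a GL context. This is a minimal C++ sketch of the two lookup schemes; the helper names texelIndex and texelCenter are illustrative, not part of any API:

```cpp
#include <cassert>

// Index of the texel holding point p (0 = a, 1 = b, 2 = c, 3 = normal)
// of triangle i, matching the texelFetch layout above.
inline int texelIndex(int i, int p) {
    return i * 4 + p;
}

// Normalized coordinate that samples the center of texel k in a 1D
// texture of `width` texels (for the texture() variant above).
inline float texelCenter(int k, int width) {
    return (static_cast<float>(k) + 0.5f) / static_cast<float>(width);
}
```

Sampling at the texel center (the + 0.5 offset) matters: with GL_NEAREST filtering, a coordinate exactly on a texel boundary can round to the neighboring texel.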

Related

Using Texture Atlas as Texture Array in OpenGL

I'm now building a voxel game. In the beginning I used a texture atlas that stores all voxel textures, and it worked fine. After that I decided to use greedy meshing in my game, so the texture atlas is not useful anymore. I read some articles which said I should use a texture array instead. I then tried to use the texture-array technique for texturing, but the result I got was all black in my game. So what am I missing?
This is my texture atlas (600 x 600)
Here is my Texture2DArray, I use this class to read and save a texture array
Texture2DArray::Texture2DArray() : Internal_Format(GL_RGBA8), Image_Format(GL_RGBA), Wrap_S(GL_REPEAT), Wrap_T(GL_REPEAT), Wrap_R(GL_REPEAT), Filter_Min(GL_NEAREST), Filter_Max(GL_NEAREST), Width(0), Height(0)
{
glGenTextures(1, &this->ID);
}
void Texture2DArray::Generate(GLuint width, GLuint height, unsigned char* data)
{
    this->Width = width;
    this->Height = height;
    glBindTexture(GL_TEXTURE_2D_ARRAY, this->ID);
    // I cannot decide what the texture array depth (layer count) should be (I put 1 here)
    // Can anyone explain how to decide the layer count here?
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 1, this->Internal_Format, this->Width, this->Height, 0, 1, this->Image_Format, GL_UNSIGNED_BYTE, data);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, this->Wrap_S);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, this->Wrap_T);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_R, this->Wrap_R);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, this->Filter_Min);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, this->Filter_Max);
    // unbind this texture before creating another texture
    glBindTexture(GL_TEXTURE_2D_ARRAY, 0);
}
void Texture2DArray::Bind() const
{
glBindTexture(GL_TEXTURE_2D_ARRAY, this->ID);
}
Here is my Fragment Shader
#version 330 core
uniform sampler2DArray ourTexture;
in vec2 texCoord;
out vec4 FragColor;
void main(){
    // 1 (the layer number) is just for testing
    FragColor = texture(ourTexture, vec3(texCoord, 1));
}
Here is my Vertex Shader
#version 330 core
layout (location = 0) in vec3 inPos;
layout (location = 1) in vec2 inTexCoord;
out vec2 texCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main(){
    gl_Position = projection * view * vec4(inPos, 1.0f);
    texCoord = inTexCoord;
}
This my rendering result
EDIT 1:
I figured out that a texture atlas doesn't work as a texture array directly, because it is a grid, so OpenGL cannot decide where each tile begins. So I created a vertical strip texture (18 x 72) and tried again, but it is still all black everywhere.
I have double-checked that the texture is bound before it is used.
When the three-dimensional texture image is specified, the depth parameter has to be the number of images to be stored in the array (e.g. imageCount). The width and height parameters are the width and height of one tile (e.g. tileW, tileH). The level argument should be 0 and the border parameter has to be 0; see glTexImage3D. glTexImage3D creates the data store for the texture image: the memory required for the texture is reserved on the GPU. It is possible to pass a pointer to the image data at this point, but it is not necessary.
If all the tiles are stored in a vertical atlas, then the image data can be set directly:
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, this->Internal_Format,
tileW, tileH, imageCount, 0,
this->Image_Format, GL_UNSIGNED_BYTE, data);
If the tiles are in a 16x16 atlas, then they have to be extracted from the atlas and each subimage has to be set separately in the texture array (data[i] is the image data of one tile). First create the texture image:
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, this->Internal_Format,
tileW, tileH, imageCount, 0,
this->Image_Format, GL_UNSIGNED_BYTE, nullptr);
After that, use glTexSubImage3D to copy the texture data into the data store of the texture object. glTexSubImage3D uses the existing data store and copies the data, e.g.:
for (int i = 0; i < imageCount; ++i)
{
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                    0, 0, i,
                    tileW, tileH, 1,
                    this->Image_Format, GL_UNSIGNED_BYTE, data[i]);
}
An algorithm to extract the tiles and specify the texture image may look as follows:
#include <algorithm> // std::copy
#include <vector> // std::vector
unsigned char* data = ...; // 16x16 texture atlas image data
int tileW = ...; // number of pixels in a row of 1 tile
int tileH = ...; // number of pixels in a column of 1 tile
int channels = 4; // 4 for RGBA
int tilesX = 16;
int tilesY = 16;
int imageCount = tilesX * tilesY;
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, this->Internal_Format,
tileW, tileH, imageCount, 0,
this->Image_Format, GL_UNSIGNED_BYTE, nullptr);
std::vector<unsigned char> tile(tileW * tileH * channels);
int tileSizeX = tileW * channels;
int rowLen = tilesX * tileSizeX;
for (int iy = 0; iy < tilesY; ++iy)
{
    for (int ix = 0; ix < tilesX; ++ix)
    {
        unsigned char *ptr = data + iy*tileH*rowLen + ix*tileSizeX;
        for (int row = 0; row < tileH; ++row)
            std::copy(ptr + row*rowLen, ptr + row*rowLen + tileSizeX,
                      tile.begin() + row*tileSizeX);
        int i = iy * tilesX + ix;
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                        0, 0, i,
                        tileW, tileH, 1,
                        this->Image_Format, GL_UNSIGNED_BYTE, tile.data());
    }
}
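The tile addressing can be exercised on the CPU without any GL calls. The following sketch isolates the pointer arithmetic for one tile (the helper name extractTile is illustrative): tile (ix, iy) starts at byte offset iy*tileH*rowLen + ix*tileSizeX, since each tile row spans tileH pixel rows of the atlas.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Extract tile (ix, iy) from a tilesX-wide atlas into a contiguous
// buffer, mirroring the copy loop used before glTexSubImage3D.
std::vector<unsigned char> extractTile(const std::vector<unsigned char>& atlas,
                                       int tilesX, int tileW, int tileH,
                                       int channels, int ix, int iy)
{
    const int tileSizeX = tileW * channels;   // bytes per tile row
    const int rowLen = tilesX * tileSizeX;    // bytes per atlas pixel row
    std::vector<unsigned char> tile(tileW * tileH * channels);
    const unsigned char* ptr = atlas.data() + iy * tileH * rowLen + ix * tileSizeX;
    for (int row = 0; row < tileH; ++row)
        std::copy(ptr + row * rowLen, ptr + row * rowLen + tileSizeX,
                  tile.begin() + row * tileSizeX);
    return tile;
}
```

A quick way to validate this is a tiny synthetic atlas where every pixel of a tile carries the tile's index, so each extracted tile must be uniform.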

Store array of floats in texture & access the floats from the shader using texture coordinates

Edit: Removed a lot of clutter and rephrased the question.
I have stored an array of floats in a texture using:
float simpleArray2D[4] = { 10.0f, 20.0f, 30.0f, 400.0f };
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 2, 2, 0, GL_RGB, GL_FLOAT, &simpleArray2D);
How do I access specific elements from the float array in the shader?
Specific fragment shader code showing what I've done to test it so far, displaying a green color when the value is the specified one (10.0f in this case), and red if it's not.
vec2 textureCoordinates = vec2(0.0f, 0.0f);
float testValueFloat = float(texture(floatArraySampler, textureCoordinates));
outColor = testValueFloat >= 10.0f ? vec4(0,1,0,1) : vec4(1,0,0,1); //Showed green
//outColor = testValueFloat >= 10.1f ? vec4(0,1,0,1) : vec4(1,0,0,1); //Showed red
In GLSL you can use texelFetch to get a texel from a texture by integral coordinates.
This means the texels of the texture can be addressed similarly to the elements of an array, by index:
ivec2 ij = ivec2(0, 0);
float testValueFloat = texelFetch(floatArraySampler, ij, 0).r;
But note, the array consists of 4 elements.
float simpleArray2D[4] = { 10.0f, 20.0f, 30.0f, 400.0f };
So the texture can be a 2x2 texture with one color channel (GL_RED)
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 2, 2, 0, GL_RED, GL_FLOAT, &simpleArray2D);
or a 1x1 texture with 4 color channels (GL_RGBA)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 1, 1, 0, GL_RGBA, GL_FLOAT, &simpleArray2D);
but it can't be a 2x2 RGBA texture, because then the array would need 16 elements.
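Assuming the usual row-major upload order of glTexImage2D (the first array element fills the first texel of the first row), the flat-index-to-texel mapping for the 2x2 single-channel layout can be sketched like this (helper names are illustrative):

```cpp
#include <cassert>

// For a width x height single-channel texture filled row by row from a
// flat array, element k lands in texel (texelX(k), texelY(k)).
inline int texelX(int k, int width) {
    return k % width;
}
inline int texelY(int k, int width) {
    return k / width;
}
```

So for the array { 10.0f, 20.0f, 30.0f, 400.0f } in a 2x2 GL_R32F texture, 30.0f (index 2) would be fetched with texelFetch(floatArraySampler, ivec2(0, 1), 0).r.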

glReadPixels on an offscreen framebuffer in OpenGL

I generate a point cloud in my program, and now I want to be able to click on a point of this point cloud rendered to my screen using OpenGL.
In order to do so, I used the trick of giving each pixel in an offscreen render a colour based on its index in the VBO. I use the same camera for my offscreen render and my onscreen render so they move together, and when I click, I read back the offscreen render to retrieve the position in the VBO and thus the point I clicked on. That is the theory, but when I click I only get (0,0,0). I believe that means my FBO is not rendered correctly, but I'm not sure whether it is that or whether the problem comes from somewhere else...
So here are the steps. clicFBO is the FBO I'm using for offscreen render, and clicTextureColorBuf is the texture in which I write in the FBO
glGenFramebuffers(1, &clicFBO);
glBindFramebuffer(GL_FRAMEBUFFER, clicFBO);
glGenTextures(1, &clicTextureColorBuf);
glBindTexture(GL_TEXTURE_2D, clicTextureColorBuf);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, clicTextureColorBuf, 0);
GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers);
After that, I wrote a shader that gives each point a color based on its index in the VBO...
std::vector<cv::Point3f> reconstruction3D; //Will contain the position of my points
std::vector<float> indicesPointsVBO; //Will contain the indexes of each point
for (int i = 0; i < pts3d.size(); ++i) {
    reconstruction3D.push_back(pts3d[i].pt3d);
    colors3D.push_back(pt_tmp);
    indicesPointsVBO.push_back((float)i / (float)pts3d.size());
}
GLuint clicVAO, clicVBO[2];
glGenVertexArrays(1, &clicVAO);
glGenBuffers(2, &clicVBO[0]);
glBindVertexArray(clicVAO);
glBindBuffer(GL_ARRAY_BUFFER, clicVBO[0]);
glBufferData(GL_ARRAY_BUFFER, reconstruction3D.size() * sizeof(cv::Point3f), &reconstruction3D[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (GLvoid*)0);
glEnable(GL_PROGRAM_POINT_SIZE);
glBindBuffer(GL_ARRAY_BUFFER, clicVBO[1]);
glBufferData(GL_ARRAY_BUFFER, indicesPointsVBO.size() * sizeof(float), &indicesPointsVBO[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, 0, (GLvoid*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
and the vertex shader:
layout (location = 0) in vec3 pos;
layout (location = 1) in float col;
out float Col;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform int pointSize;
void main()
{
    gl_PointSize = pointSize;
    gl_Position = projection * view * model * vec4(pos, 1.0);
    Col = col;
}
And the Fragment:
#version 330 core
layout(location = 0) out vec4 FragColor;
in float Col;
void main()
{
    FragColor = vec4(Col, Col, Col, 1.0);
}
And this is how I render this texture:
glm::mat4 view = camera.GetViewMatrix();
glm::mat4 projection = glm::perspective(glm::radians(camera.Zoom), (float)SCR_WIDTH / (float)SCR_HEIGHT, 1.0f, 100.0f);
glBindFramebuffer(GL_FRAMEBUFFER, clicFBO);
clicShader.use();
glDisable(GL_DEPTH_TEST);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
clicShader.setMat4("projection", projection);
clicShader.setMat4("view", view);
model = glm::mat4();
clicShader.setMat4("model", model);
clicShader.setInt("pointSize", pointSize);
glBindVertexArray(clicVAO);
glDrawArrays(GL_POINTS, 0, (GLsizei)reconstruction3D.size());
glBindFramebuffer(GL_FRAMEBUFFER, 0);
And then, when I click, I use this piece of code:
glBindFramebuffer(GL_FRAMEBUFFER, clicFBO);
glReadBuffer(GL_COLOR_ATTACHMENT0);
int width = 11, height = 11;
std::array<GLfloat, 363> arry{ 1 };
glReadPixels(Xpos - 5, Ypos - 5, width, height, GL_RGB, GL_UNSIGNED_BYTE, &arry);
for (int i = 0; i < 363; i += 3) { // it's 3 times the same number anyway for each pixel
    std::cout << arry[i] << " "; // it gives me only 0's
}
std::cout << std::endl << std::endl;
glBindFramebuffer(GL_FRAMEBUFFER, clicFBO);
I know the error might be really stupid but I still have some problems with how OpenGL works.
I put what I thought was necessary to understand the problem (without extending too much), but if you need more code, I can write it too.
I know this is not a question you can answer with yes or no, and it's more like debugging my program, but since I really can't find where the problem comes from, I'm looking for someone who can explain to me what I did wrong. I don't necessarily seek the solution itself, but clues that could help me understand where my error is...
Using a framebuffer object (FBO) to store an "object identifier" is a good method. But you also want to see the objects, right? Then you must also render to the default framebuffer (let me call it "defFB"; it is not an FBO).
Because you need to render to two different targets, you need one of these techniques:
Draw the objects twice (e.g. with two glDrawArrays calls): once to the FBO and a second time to the defFB.
Draw to two images of one FBO at once and later blit one of them (the one with the colors) to the defFB.
For the first technique you may use a texture attached to an FBO (as you currently do), or you can use a "renderbuffer" and draw to it.
The second approach needs a second "out" in the fragment shader:
layout(location = 0) out vec3 color; //GL_COLOR_ATTACHMENT0
layout(location = 1) out vec3 objID; //GL_COLOR_ATTACHMENT1
and setting the two attachments with glDrawBuffers.
For the blit part, read this answer.
Note that both "out" have the same format, vec3 in this example.
A flaw in your code is that you set an RGB texture format and also use this format with glReadPixels, but your "out" in the FS is a vec4 instead of a vec3.
More concerns are:
Check the completeness of the FBO with glCheckFramebufferStatus.
A "depth attachment" on the FBO may be needed, even if it will not be used for reading.
Disabling the depth test will put all elements into the frame; your point-picking will select the last one drawn, not the nearest.
I found the problem.
There were 2 mistakes in my code:
The first one is that in OpenGL there is a Y inversion between window coordinates and the framebuffer. So in order to pick the right point, you have to flip Y using the size of the viewport. I did it like this:
GLint m_viewport[4];
glGetIntegerv(GL_VIEWPORT, m_viewport);
int YposTMP = m_viewport[3] - Ypos - 1;
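That window-to-framebuffer Y conversion can be sketched as a standalone helper (plain C++, no GL needed; the name flipY is illustrative):

```cpp
#include <cassert>

// Convert a window-space mouse Y (origin at the top-left) to
// framebuffer Y (origin at the bottom-left), as in the snippet above.
inline int flipY(int viewportHeight, int yPos) {
    return viewportHeight - yPos - 1;
}
```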
The second one is the use of
glReadPixels(Xpos - 2, Ypos - 2, width, height, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);
The 6th parameter must be GL_FLOAT, since the data I'm reading back are floats.
Thanks all!
Best regards,
R.S

rgba arrays to OpenGL texture

For the gui for my game, I have a custom texture object that stores the rgba data for a texture. Each GUI element registered by my game adds to the final GUI texture, and then that texture is overlayed onto the framebuffer after post-processing.
I'm having trouble converting my Texture object to an OpenGL texture.
First I create a 1D int array that goes rgbargbargba... etc.
public int[] toIntArray(){
    int[] colors = new int[(width*height)*4];
    int i = 0;
    for(int y = 0; y < height; ++y){
        for(int x = 0; x < width; ++x){
            colors[i] = r[x][y];
            colors[i+1] = g[x][y];
            colors[i+2] = b[x][y];
            colors[i+3] = a[x][y];
            i += 4;
        }
    }
    return colors;
}
Where r, g, b, and a are jagged int arrays from 0 to 255. Next I create the int buffer and the texture.
id = glGenTextures();
glBindTexture(GL_TEXTURE_2D, id);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
IntBuffer iBuffer = BufferUtils.createIntBuffer(((width * height)*4));
int[] data = toIntArray();
iBuffer.put(data);
iBuffer.rewind();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_INT, iBuffer);
glBindTexture(GL_TEXTURE_2D, 0);
After that I add a 50x50 red square into the upper left of the texture, and bind the texture to the framebuffer shader and render the fullscreen rect that displays my framebuffer.
frameBuffer.unbind(window.getWidth(), window.getHeight());
postShaderProgram.bind();
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, guiManager.texture()); // this gets the texture id that was created
postShaderProgram.setUniform("gui_texture", 1);
mesh.render();
postShaderProgram.unbind();
And then in my fragment shader, I try displaying the GUI:
#version 330
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D texFramebuffer;
uniform sampler2D gui_texture;
void main()
{
outColor = texture(gui_texture, Texcoord);
}
But all it outputs is a black window!
I added a red 50x50 rectangle into the upper left corner and verified that it exists, but for some reason it isn't showing in the final output.
That gives me reason to believe that I'm not converting my texture into an OpenGL texture with glTexImage2D correctly.
Can you see anything I'm doing wrong?
Update 1:
Here I saw them doing a similar thing using a float array, so I tried converting my 0-255 values to a 0-1 float array and passing it as the image data, like so:
public float[] toFloatArray(){
    float[] colors = new float[(width*height)*4];
    int i = 0;
    for(int y = 0; y < height; ++y){
        for(int x = 0; x < width; ++x){
            colors[i] = ((r[x][y] * 1.0f) / 255);
            colors[i+1] = ((g[x][y] * 1.0f) / 255);
            colors[i+2] = ((b[x][y] * 1.0f) / 255);
            colors[i+3] = ((a[x][y] * 1.0f) / 255);
            i += 4;
        }
    }
    return colors;
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, toFloatArray());
And it works!
I'm going to leave the question open however as I want to learn why the int buffer wasn't working :)
When you specified GL_UNSIGNED_INT as the type of the "host" data, OpenGL expected 32 bits allocated for each color component. Since OpenGL maps output colors in the default framebuffer to the range [0.0f, 1.0f], it takes your input color values (in the range [0, 255]) and divides all of them by the maximum value of an unsigned 32-bit integer (about 4.3 billion) to get the final color displayed on screen. As an exercise, using your original code, set the "clear" color of the screen to white, and see that a black rectangle is drawn on screen.
You have two options. The first is to scale the color values to the range implied by GL_UNSIGNED_INT, which means multiplying each one by Math.pow(2, 24), and trusting that the integer overflow of multiplying by that value will behave correctly (since Java doesn't have unsigned integer types).
The other, far safer option is to store each 0-255 value in a byte[] object (do not use char: char is 1 byte in C/C++/OpenGL, but 2 bytes in Java) and specify the type of the elements as GL_UNSIGNED_BYTE.
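The scaling described above can be sketched numerically (in C++ here, though the question is Java; the arithmetic is identical). A "full brightness" byte value of 255, interpreted as a normalized unsigned 32-bit component, becomes roughly 6e-8, which is visually black:

```cpp
#include <cassert>
#include <cstdint>

// Normalized value the GPU derives from an unsigned 32-bit component:
// v / (2^32 - 1).
inline double normalizeU32(std::uint32_t v) {
    return static_cast<double>(v) / 4294967295.0;
}
```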

What are the texture coordinates for a cube in OpenGL?

I have a cube defined as:
float vertices[] = { -width, -height, -depth, // 0
width, -height, -depth, // 1
width, height, -depth, // 2
-width, height, -depth, // 3
-width, -height, depth, // 4
width, -height, depth, // 5
width, height, depth, // 6
-width, height, depth // 7
};
and I have a 128x128 image which I simply want painted on each of the 6 faces of the cube and nothing else. So what are the texture coordinates? I need the actual values.
This is the drawing code:
// Counter-clockwise winding.
gl.glFrontFace(GL10.GL_CCW);
// Enable face culling.
gl.glEnable(GL10.GL_CULL_FACE);
// What faces to remove with the face culling.
gl.glCullFace(GL10.GL_BACK);
// Enabled the vertices buffer for writing and to be used during
// rendering.
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// Specifies the location and data format of an array of vertex
// coordinates to use when rendering.
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVerticesBuffer);
// Bind the texture according to the set texture filter
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[filter]);
gl.glEnable(GL10.GL_TEXTURE_2D);
// Enable the texture state
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Point to our buffers
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
// Set flat color
gl.glColor4f(red, green, blue, alpha);
gl.glDrawElements(GL10.GL_TRIANGLES, mNumOfIndices,
GL10.GL_UNSIGNED_SHORT, mIndicesBuffer);
// ALL the DRAWING IS DONE NOW
// Disable the vertices buffer.
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Disable face culling.
gl.glDisable(GL10.GL_CULL_FACE);
This is the index array:
short indices[] = { 0, 2, 1,
0, 3, 2,
1,2,6,
6,5,1,
4,5,6,
6,7,4,
2,3,6,
6,3,7,
0,7,3,
0,4,7,
0,1,5,
0,5,4
};
I am not sure if the index array is needed to find the texture coordinates. Note that the cube vertex array I gave is the most compact representation of a cube using an index array. The cube draws perfectly, but not the textures: only one side shows the correct picture, while the other sides are messed up. I used the methods described in various online tutorials on texturing, but it does not work.
What you are looking for is a cube map. In OpenGL, you can define six textures at once (representing the six sides of a cube) and map them using 3D texture coordinates instead of the common 2D texture coordinates. For a simple cube, the texture coordinates would be the same as the vertices' respective normals. (If you will only be texturing plain cubes in this manner, you can consolidate normals and texture coordinates in your vertex shader, too!) Cube maps are much simpler than trying to apply the same texture to repeated quads (extra unnecessary drawing steps).
GLuint mHandle;
glGenTextures(1, &mHandle); // create your texture normally
// Note the target being used instead of GL_TEXTURE_2D!
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_CUBE_MAP, mHandle);
// Now, load in your six distinct images. They need to be the same dimensions!
// Notice the targets being specified: the six sides of the cube map.
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGBA, width, height, 0,
format, GL_UNSIGNED_BYTE, data1);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, GL_RGBA, width, height, 0,
format, GL_UNSIGNED_BYTE, data2);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, GL_RGBA, width, height, 0,
format, GL_UNSIGNED_BYTE, data3);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, GL_RGBA, width, height, 0,
format, GL_UNSIGNED_BYTE, data4);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, GL_RGBA, width, height, 0,
format, GL_UNSIGNED_BYTE, data5);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, GL_RGBA, width, height, 0,
format, GL_UNSIGNED_BYTE, data6);
glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// And of course, after you are all done using the textures...
glDeleteTextures(1, &mHandle);
When specifying your texture coordinates, you will then use sets of 3 coordinates instead of sets of 2. In a simple cube, you point to the 8 corners using normalized vectors. If N = 1.0 / sqrt(3.0), then one corner would be (N, N, N), another (N, N, -N), etc.
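A numeric sketch of those corner directions (the Vec3 type and cornerDirection helper are illustrative): normalizing a corner (±1, ±1, ±1) gives every component magnitude 1/sqrt(3), so the result is a unit vector.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Cube-map direction for a unit-cube corner: the normalized corner
// position. sx, sy, sz are the signs (+1 or -1) of the corner.
inline Vec3 cornerDirection(float sx, float sy, float sz) {
    const float n = 1.0f / std::sqrt(3.0f);
    return { sx * n, sy * n, sz * n };
}
```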
You need to define which orientation you want on each face (and that will change which texture coordinates are put on each vertex).
You need to duplicate the vertex positions, as the same cube corner will have different texture coordinates depending on which face it is part of.
If you want the full texture on each face, then the texture coordinates are (0, 0), (0, 1), (1, 1), (1, 0). How you map them to the specific vertices (24 of them, 4 per face) depends on the orientation you want.
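The 4-vertices-per-face layout can be sketched as follows (the function name is illustrative; how each quad's corners pair with the duplicated positions still depends on the per-face orientation you choose):

```cpp
#include <cassert>
#include <vector>

// One full-texture quad per face: 4 vertices * 2 floats, repeated for
// all 6 faces, giving 24 vertices / 48 floats in total.
std::vector<float> fullFaceTexCoords() {
    static const float quad[8] = { 0.0f, 0.0f,   // bottom left
                                   0.0f, 1.0f,   // top left
                                   1.0f, 1.0f,   // top right
                                   1.0f, 0.0f }; // bottom right
    std::vector<float> tc;
    for (int face = 0; face < 6; ++face)
        tc.insert(tc.end(), quad, quad + 8);
    return tc;
}
```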
For me, it's easier to consider your verticies as width = x, height = y and depth = z.
Then it's a simple matter of getting the 6 faces.
float vertices[] = { -x, -y, -z, // 0
x, -y, -z, // 1
x, y, -z, // 2
-x, y, -z, // 3
-x, -y, z, // 4
x, -y, z, // 5
x, y, z, // 6
-x, y, z// 7
};
For example, the front face of your cube will have a positive depth (this cube's center is at (0,0,0) given the vertices you provided). Since there are 8 points and 4 of them have a positive depth, your front face is 4,5,6,7, going anticlockwise from (-x,-y) to (-x,y).
OK, so your back face is all negative depth, or -z, so it's simply 0,1,2,3.
See the picture? Your left face is all negative width, or -x, so 0,3,4,7, and your right face is positive x, so 1,2,5,6.
I'll let you figure out the top and bottom of the cube.
Your vertex array only describes 2 sides of a cube, but for argument's sake, say vertices[0] - vertices[3] describe one side; then your texture coordinates may be:
float texCoords[] = { 0.0, 0.0, //bottom left of texture
1.0, 0.0, //bottom right " "
1.0, 1.0, //top right " "
0.0, 1.0 //top left " "
};
You can use those coordinates for texturing each subsequent side with the entire texture.
To render a skybox (cube map), the shaders below work for me:
Cubemap vertex shader:
attribute vec4 a_position;
varying vec3 v_cubemapTexture;
vec3 texture_pos;
uniform vec3 u_cubeCenterPt;
uniform mat4 mvp;
void main(void)
{
    gl_Position = mvp * a_position;
    texture_pos = vec3(a_position.x - u_cubeCenterPt.x, a_position.y - u_cubeCenterPt.y, a_position.z - u_cubeCenterPt.z);
    v_cubemapTexture = normalize(texture_pos.xyz);
}
Cubemap fragment shader:
precision highp float;
varying vec3 v_cubemapTexture;
uniform samplerCube cubeMapTextureSample;
void main(void)
{
    gl_FragColor = textureCube(cubeMapTextureSample, v_cubemapTexture);
}
Hope it is useful...