I am working on a Minecraft-ish game and have been working a little more with VBOs. However, when drawing multiple faces in a single VBO I seem to have a bit of an issue.
Here is my vbo-generation code:
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, verts);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, verts * 9 * sizeof(GLfloat), NULL, GL_STATIC_DRAW);
void* ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_READ_WRITE);
GLfloat* model = (GLfloat*)ptr;
GLfloat* tex = ((GLfloat*)ptr) + verts * 6;
GLfloat* color = ((GLfloat*)ptr) + verts * 3;
int p = 0;
int k = p * 3;
for (int mcy = 0; mcy < 5; mcy++) {
    for (int mcx = 0; mcx < 5; mcx++) {
        double addonX = mcx * 32.0;
        double addonY = mcy * 32.0;
        int addonx = mcx * 32;
        int addony = mcy * 32;
        if (!(hill.get(addonX, addonY)*400.0 > 100 && hill.get(32 + addonX, addonY)*400.0 > 100 && hill.get(addonX, 32 + addonY)*400.0 > 100 && hill.get(32 + addonX, 32 + addonY)*400.0 > 100)) {
            draw = true;
            int biome1 = BiomeToColor(GetBiome(x, y, addonX, addonY), hill.get(addonX, addonY)*400.0);
            int biome2 = BiomeToColor(GetBiome(x, y, 32 + addonX, addonY), hill.get(32 + addonX, addonY)*400.0);
            int biome3 = BiomeToColor(GetBiome(x, y, addonX, 32 + addonY), hill.get(addonX, 32 + addonY)*400.0);
            int biome4 = BiomeToColor(GetBiome(x, y, 32 + addonX, 32 + addonY), hill.get(32 + addonY, 32 + addonY)*400.0);
            model[k] = addonx + 32;
            model[k + 1] = addony;
            model[k + 2] = hill.get(addonX + 32, addonY)*400.0;
            color[k] = BiomeColors[biome2].r;
            color[k + 1] = BiomeColors[biome2].g;
            color[k + 2] = BiomeColors[biome2].b;
            p++;
            k = p * 3;
            model[k] = addonx + 32;
            model[k + 1] = addony + 32;
            model[k + 2] = hill.get(addonX + 32, addonY + 32)*400.0;
            color[k] = BiomeColors[biome4].r;
            color[k + 1] = BiomeColors[biome4].g;
            color[k + 2] = BiomeColors[biome4].b;
            p++;
            k = p * 3;
            model[k] = addonx;
            model[k + 1] = addony + 32;
            model[k + 2] = hill.get(addonX, addonY + 32)*400.0;
            color[k] = BiomeColors[biome3].r;
            color[k + 1] = BiomeColors[biome3].g;
            color[k + 2] = BiomeColors[biome3].b;
            p++;
            k = p * 3;
            model[k] = addony;
            model[k + 1] = addony;
            model[k + 2] = hill.get(addonX, addonY)*400.0;
            color[k] = BiomeColors[biome1].r;
            color[k + 1] = BiomeColors[biome1].g;
            color[k + 2] = BiomeColors[biome1].b;
            p++;
            k = p * 3;
        }
    }
}
glUnmapBuffer(GL_ARRAY_BUFFER);
glBindBuffer(GL_ARRAY_BUFFER, 0);
And here's the code I use to draw the VBO:
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexPointer(3, GL_FLOAT, 0, 0);
glTexCoordPointer(3, GL_FLOAT, 0, (char*)NULL + verts * 6 * sizeof(GLfloat));
glColorPointer(3, GL_FLOAT, 0, (char*)NULL + verts * 3 * sizeof(GLfloat));
glDrawArrays(GL_QUADS, 0, VBO);
glBindBuffer(GL_ARRAY_BUFFER, 0);
Here's the result I want (using a single quad in every VBO):
unfortunately I'm still new, so you have to click this link :/
And here is the result I get with multiple quads in every VBO:
image
So why do I want to draw multiple quads in a single VBO?
One word: performance. If you compare the two images, the thing that really pops out (well, except for the bug in the second image) is the framerate counter. I want to make this game into a big thing, so every fps matters to me.
EDIT:
Omg, I'm so stupid:
model[k] = addony;
A very simple mistake, but so devastating. It just proves how small things can break the game.
It all works now.
glDrawArrays(GL_QUADS, 0, VBO);
There are a few problems with this call:
The third parameter of glDrawArrays is the count of the vertices you are drawing, so what you are actually saying is:
"Draw quads from my buffer, starting at 0, until VBO, and then stop."
What you should be saying is:
"Draw quads from my buffer, starting at 0, until the buffer length, and then stop."
so now it looks like this:
glDrawArrays(GL_QUADS, 0, verts);
'VBO' in your code is the ID of the buffer that you want to use.
Think of it like a pointer whose value you know, or rather a user with an ID.
GL_QUADS is not good; use GL_TRIANGLES. There are many problems with GL_QUADS later, especially on mobile phones and on other platforms, and laying your data out as triangles is much, much nicer.
You shouldn't be drawing with GL_QUADS, for multiple reasons; the sketch below shows the conversion.
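As a minimal illustration (not your exact data): one quad's four corners become two triangles that share a diagonal, so each cell is emitted as 6 indices instead of 4 vertices:
// v0..v3 are a cell's corners in the same counterclockwise order a quad would use.
const GLuint quadToTriangles[6] = {
    0, 1, 2, // first triangle
    0, 2, 3  // second triangle, sharing the v0-v2 diagonal
};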
Why are you not using VAOs? Are you using an older version of OpenGL that doesn't have VAOs? Otherwise I would suggest using a VAO here on top of the VBO so you don't need to bind the pointers for each draw call.
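A minimal sketch of that, assuming a shader-based pipeline where position and color are generic attributes at locations 0 and 1 (the locations are my assumption; the offsets match the buffer layout in the question):
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glEnableVertexAttribArray(0); // position, location 0 assumed
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(1); // color, location 1 assumed
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)(verts * 3 * sizeof(GLfloat)));
glBindVertexArray(0);
// At draw time no pointers need rebinding:
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, verts); // GL_TRIANGLES assuming the data was rebuilt as triangles
glBindVertexArray(0);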
glBindBuffer(GL_ARRAY_BUFFER, verts);
What you are doing here is binding a VBO whose ID is 'verts' as the current VBO. Since 'verts' is your vertex count, not a buffer ID, this bind (and the unbind right after it) does nothing useful.
'So why do I want to draw multiple quads in a single vbo? One word: performance'
Have you tried drawing multiple quads using instancing?
That is, sending a model matrix for each of the shapes so that you modify their positions and shapes in the shader and not in the buffer. This way you can draw one VBO over and over again, each time slightly transformed, with a single draw call.
Here is a good tutorial on instancing:
http://learnopengl.com/#!Advanced-OpenGL/Instancing
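A rough sketch of the idea, assuming GL 3.3+ for glVertexAttribDivisor and using placeholder names (instanceOffsetVBO, quadCount): the quad's vertices live in one VBO, and a second buffer holds one offset per instance:
glBindBuffer(GL_ARRAY_BUFFER, instanceOffsetVBO); // one vec3 per quad (placeholder name)
glEnableVertexAttribArray(2);                     // location 2 assumed free
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glVertexAttribDivisor(2, 1);                      // advance once per instance, not per vertex
glDrawArraysInstanced(GL_TRIANGLES, 0, 6, quadCount); // 6 vertices drawn quadCount times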
Just out of curiosity, why did you decide to use:
glMapBuffer(GL_ARRAY_BUFFER, GL_READ_WRITE);
instead of buffering your data in the glBufferData call?
If you need to buffer the data later you can use glBufferSubData; a sketch follows.
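A minimal sketch of that pattern, matching the layout in the question (allData and newColors are placeholder names):
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, verts * 9 * sizeof(GLfloat), allData, GL_STATIC_DRAW); // upload everything at creation
// ... later, overwrite just the color block without touching the rest:
glBufferSubData(GL_ARRAY_BUFFER, verts * 3 * sizeof(GLfloat), verts * 3 * sizeof(GLfloat), newColors);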
Honestly though I think your performance problems stem from a range of factors.
I would personally use glBufferData instead of mapping the buffer, and when I need to update it at run time rather than at load time I would use glBufferSubData.
I would upload the colors to the shader and draw multiples of the SAME VBO again and again with a different model matrix and colors, allowing me to instance it.
However, you shouldn't need to do that.
What I would recommend is building the data up as triangles with colors and drawing the whole ground as one mesh, which you seem to have tried to do. Your problem was most likely caused by the glDrawArrays length being set to the ID of a VBO.
In this case, however, I would build a VBO using glBufferData with the size of a chunk, then use glBufferSubData for each of the quads with colors etc., and once I am done I would draw it multiple times alongside different chunks, as sketched below.
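Sketching that allocate-then-fill chunk pattern (chunkVBO, quadsPerChunk, quadStride and quadData are placeholder names):
glBindBuffer(GL_ARRAY_BUFFER, chunkVBO);
glBufferData(GL_ARRAY_BUFFER, quadsPerChunk * quadStride, NULL, GL_STATIC_DRAW); // allocate only, no data yet
for (int q = 0; q < quadsPerChunk; ++q)
    glBufferSubData(GL_ARRAY_BUFFER, q * quadStride, quadStride, quadData[q]); // fill per quad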
I think it would be of use to you to study more OpenGL theory.
Related
I'm trying to build the command array, but I keep getting "broken" mesh draws. Here is the struct I try to populate:
My vertices/indices are stored in buffers as:
QVector<QVector3D> mVertex; // all meshes in 1 vector
QVector<unsigned int> mIndices; // all meshes in 1 vector
int mIndicesCount = mIndices.size(); // this is per mesh accessible
int mVertexCount = mVertex.size(); // this is per mesh accessible
The loop:
int skip = 0;
int offset = 0;
for (size_t u = 0; u < jobSize; ++u) {
    DrawElementsIndirectCommand *cmd = &dstCmds[u];
    cmd->count = mNodeList[u]->mIndicesCount;
    cmd->instanceCount = 1;
    cmd->firstIndex = skip;
    cmd->baseVertex = offset;
    cmd->baseInstance = 1;
    skip += (mNodeList[u]->mIndicesCount * sizeof(unsigned int));
    offset += (mNodeList[u]->mVertexCount / sizeof(unsigned int));
}
Does anyone see any errors here? I'm lost.
Also tried this :
skip += (mNodeList[u]->mIndicesCount / sizeof(unsigned int));
offset += (mNodeList[u]->mVertexCount);
based on: OpenGL glMultiDrawElementsIndirect with Interleaved Buffers
EDIT 2
I could not get it to work with the suggestions in the comments, or I did something wrong... Here is the main code responsible for building the buffers & commands.
P.S. This exercise is about trying to follow AZDO:
https://github.com/nvMcJohn/apitest/blob/master/src/solutions/untexturedobjects/gl/bufferstorage.cpp
int jobSize = mNodeList.size();
QVector<QVector3D> mVertex;
QVector<QVector3D> mNormals;
QVector<unsigned int> mIndices;
for (auto &node : mNodeList) {
    mVertex.append(node->mVertex);
    mNormals.append(node->mVertexNormal);
    mIndices.append(node->mIndices);
}
glBindVertexArray(m_varray);
glBindBuffer(GL_ARRAY_BUFFER, m_vb);
glBufferData(GL_ARRAY_BUFFER, mVertex.size() * sizeof(QVector3D), &mVertex[0], GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_ib);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, mIndices.size() * sizeof(unsigned int), &mIndices[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
mShader->enableAttributeArray("In_v3Pos");
mShader->setAttributeBuffer("In_v3Pos", GL_FLOAT, 0, 3, sizeof(QVector3D));
glBindBuffer(GL_ARRAY_BUFFER, m_vn);
glBufferData(GL_ARRAY_BUFFER, mNormals.size() * sizeof(QVector3D), &mNormals[0], GL_STATIC_DRAW);
mShader->enableAttributeArray("In_v3Color");
mShader->setAttributeBuffer("In_v3Color", GL_FLOAT, 0, 3, sizeof(QVector3D));
const GLbitfield mapFlags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT;
const GLbitfield createFlags = mapFlags | GL_DYNAMIC_STORAGE_BIT;
mCommands.Destroy();
mCommands.Create(BufferStorage::PersistentlyMappedBuffer, GL_DRAW_INDIRECT_BUFFER, 3 * jobSize, createFlags, mapFlags);
mTransformBuffer.Destroy();
mTransformBuffer.Create(BufferStorage::PersistentlyMappedBuffer, GL_SHADER_STORAGE_BUFFER, 3 * jobSize, createFlags, mapFlags);
glBindVertexArray(0);
DrawElementsIndirectCommand *dstCmds = mCommands.Reserve(jobSize);
int skip = 0;
int offset = 0;
for (size_t u = 0; u < jobSize; ++u) {
    DrawElementsIndirectCommand *cmd = &dstCmds[u];
    cmd->count = mNodeList[u]->mIndicesCount;
    cmd->instanceCount = 1;
    cmd->firstIndex = skip * sizeof(unsigned int);
    cmd->baseVertex = offset;
    cmd->baseInstance = 0;
    skip += mNodeList[u]->mIndicesCount;
    offset += mNodeList[u]->mVertexCount;
}
I'm trying to interpolate a triangle with the help of vertex coordinates.
a
|\
| \
| \
| \
b|_ _ _ \c
I'm interpolating the vertices in this order: (b,a), (a,c) and (c,b).
Here a, b and c are 3-dimensional coordinates with a color value.
a = (x1,y1,z1,c1);
b = (x2,y2,z2,c2);
c = (x3,y3,z3,c3);
Structure used to hold the interpolated values:
struct pointsInterpolateStruct {
    QList<double> x, y, z;
    QList<double> r, g, b, clr;
    void clear() {
        x.clear();
        y.clear();
        z.clear();
        r.clear();
        g.clear();
        b.clear();
        clr.clear();
    }
};
Interpolation Code:
QList<double> x,y,z,clrs;
The above lists are used to read the values from a file which contains the coordinates of a, b and c.
/**
void interpolate();
#param1 ipts is an object for the point interpolation struct which holds the x,y,z and color
#param2 idx1 is the point A
#param3 idx2 is the point B
#return returns the interpolated values after filling the struct pointsInterpolateStruct
*/
void GLThread::interpolate(pointsInterpolateStruct *ipts, int idx1, int idx2) {
    int ipStep = 5; // number of points needed between the 2 points
    double delX, imX, iX, delY, imY, iY, delZ, imZ, iZ, delClr, imC, iC;
    delX = imX = iX = delY = imY = iY = delZ = imZ = iZ = delClr = imC = iC = 0;
    delX = (x.at(idx2) - x.at(idx1));
    imX = x.at(idx1);
    iX = (delX / (ipStep + 1));
    delY = (y.at(idx2) - y.at(idx1));
    imY = y.at(idx1);
    iY = (delY / (ipStep + 1));
    delZ = (z.at(idx2) - z.at(idx1));
    imZ = z.at(idx1);
    iZ = (delZ / (ipStep + 1));
    delClr = (clrs.at(idx2) - clrs.at(idx1));
    imC = clrs.at(idx1);
    iC = (delClr / (ipStep + 1));
    ipts->clear();
    int i = 0;
    while (i <= ipStep) {
        ipts->x.append((imX + iX * i));
        ipts->y.append((imY + iY * i));
        ipts->z.append((imZ + iZ * i));
        ipts->clr.append((imC + iC * i));
        i++;
    }
}
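For clarity, a hypothetical usage sketch of the edge order described above; note that interpolate() clears the struct at the start of each call, so each result has to be consumed or copied before the next call (the index assignments are my assumption):
pointsInterpolateStruct pts;
// assuming indices 0, 1, 2 hold the points a, b and c loaded from the file:
interpolate(&pts, 1, 0); // edge (b, a); consume pts here
interpolate(&pts, 0, 2); // edge (a, c); consume pts here
interpolate(&pts, 2, 1); // edge (c, b); consume pts here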
Visualization of these interpolated points using OpenGL:
All the points are filled into vertex and color buffers and I'm drawing them using the code below. Visualization is very fast even for larger point counts.
void GLWidget::drawInterpolatedTriangle(void) {
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColorPointer(3, GL_FLOAT, 0, clr);
    glVertexPointer(3, GL_FLOAT, 0, vrt);
    glPushMatrix();
    glDrawArrays(GL_POLYGON, 0, vrtCnt);
    glPopMatrix();
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
}
Now everything works fine and I'm getting the desired output. But when I try to do the same for 'n' triangles (say n = 40,000), the application crashes even when I call this function in a QThread, and I found that this method is not efficient, as it takes a lot of computation time.
Please suggest a better way to do this process so that I can achieve better results at good performance.
Output image :
Interpolated Triangle (point view)
Mesh View
Polygon View
After examining the memory used by the application, I found that a large amount of unwanted data was being kept in the lists and arrays in my program (i.e., the lists x, y, z, r, g, b and clrs in pointsInterpolateStruct were never cleared). I now clear all the unwanted/unused data immediately, and I have tried running the application with more triangles. Now I can achieve much better performance. I didn't change anything in the visualization process.
I'm trying to render a map, but unfortunately, only the underside is rendered.
I guess I'm doing something wrong while setting up the vertex and index buffers.
This is the part I initialize the vertex and index buffers:
// Initialize vertices and indices
SimpleVertex* vertices = new SimpleVertex[(dimension + 1) * (dimension + 1)];
WORD* indices = new WORD[dimension * dimension * 6];
for (WORD i = 0; i < dimension + 1; ++i)
{
    for (WORD j = 0; j < dimension + 1; ++j)
    {
        vertices[i * (dimension + 1) + j].Pos = XMFLOAT3(i, rand() % 2, j);
        vertices[i * (dimension + 1) + j].Color = XMFLOAT4(rand() % 2, rand() % 2, rand() % 2, 1.0f);
    }
}
for (WORD i = 0; i < dimension; i++)
{
    for (WORD j = 0; j < dimension; j++)
    {
        indices[(i * dimension + j) * 6] = (WORD)(i * (dimension + 1) + j);
        indices[(i * dimension + j) * 6 + 2] = (WORD)(i * (dimension + 1) + j + 1);
        indices[(i * dimension + j) * 6 + 1] = (WORD)((i + 1) * (dimension + 1) + j + 1);
        indices[(i * dimension + j) * 6 + 3] = (WORD)(i * (dimension + 1) + j);
        indices[(i * dimension + j) * 6 + 5] = (WORD)((i + 1) * (dimension + 1) + j + 1);
        indices[(i * dimension + j) * 6 + 4] = (WORD)((i + 1) * (dimension + 1) + j);
    }
}
// Create vertex buffer
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(SimpleVertex)* (dimension + 1) * (dimension + 1);
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = 0;
D3D11_SUBRESOURCE_DATA InitData;
ZeroMemory(&InitData, sizeof(InitData));
InitData.pSysMem = vertices;
hr = g_pd3dDevice->CreateBuffer(&bd, &InitData, &g_pVertexBuffer);
delete[] vertices;
if (FAILED(hr))
return hr;
// Set vertex buffer
UINT stride = sizeof(SimpleVertex);
UINT offset = 0;
g_pImmediateContext->IASetVertexBuffers(0, 1, &g_pVertexBuffer, &stride, &offset);
// Create indices buffer
bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(WORD)* dimension * dimension * 6;
bd.BindFlags = D3D11_BIND_INDEX_BUFFER;
bd.CPUAccessFlags = 0;
InitData.pSysMem = indices;
hr = g_pd3dDevice->CreateBuffer(&bd, &InitData, &g_pIndexBuffer);
delete[] indices;
if (FAILED(hr))
return hr;
Excuse my bad English :(. Thank you for reading!
The first thing that occurred to me is that you may be declaring your vertices in the wrong order. If your Direct3D context expects vertices to be counterclockwise, and yours are defined in clockwise order, "backface culling" will make your polygons invisible unless viewed from the other side.
Specifically, D3D11_RASTERIZER_DESC::FrontCounterClockwise sets the direction. (see http://msdn.microsoft.com/en-us/library/windows/desktop/ff476198%28v=vs.85%29.aspx)
In the code where you set up your rasterizer description, try setting CullMode = D3D11_CULL_NONE; if the terrain appears, then this was your problem. A minimal test follows.
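This is a sketch of that test, reusing the g_pd3dDevice, g_pImmediateContext and hr names from the question; the other rasterizer fields are illustrative defaults:
D3D11_RASTERIZER_DESC rd;
ZeroMemory(&rd, sizeof(rd));
rd.FillMode = D3D11_FILL_SOLID;
rd.CullMode = D3D11_CULL_NONE; // draw both sides for the test
rd.DepthClipEnable = TRUE;     // keep the usual depth clipping behavior
ID3D11RasterizerState* rs = nullptr;
hr = g_pd3dDevice->CreateRasterizerState(&rd, &rs);
if (SUCCEEDED(hr))
{
    g_pImmediateContext->RSSetState(rs);
    rs->Release(); // the context holds its own reference
}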
Most likely, the face culling wasn't set up properly.
In theory (thanks Google for providing links ;) ):
Face culling
Winding order
In practice:
You decide in which order to put your vertices within triangles (in reality you are manipulating indices, as your buffers are indexed): clockwise or counterclockwise.
Having made decision #1, you now decide which faces must be considered "front":
D3D11_RASTERIZER_DESC rd = {};
rd.FrontCounterClockwise = true; // counterclockwise are front
and you decide which faces the rasterizer must cull: back ones, front ones, or none:
rd.CullMode = D3D11_CULL_BACK; // back faced primitives will be stripped out
// during rasterization
// (clockwise ones in our example)
So, you can either change your geometry winding and/or DirectX winding option and/or DirectX culling option.
Note: By default, DirectX 11 uses false and D3D11_CULL_BACK for the parameters above. So it considers clockwise primitives front-facing, and culls counterclockwise ones, which are considered back-facing.
Note: To better understand culling, draw a triangle on both sides of a piece of paper as if it were the same triangle viewed from different sides. Put indices near each vertex (the same on both sides of the paper). Draw a circular arrow showing the winding order. Compare it with your mesh. Then it will be obvious which winding order and culling you must use.
Sources:
MSDN DirectX Reference pages:
D3D11_RASTERIZER_DESC
D3D11_CULL_MODE
ID3D11Device::CreateRasterizerState()
ID3D11DeviceContext::RSSetState()
I'm generating a terrain from a .bmp file, as a very early precursor for a strategy game. In my code I load the BMP file as an OpenGL texture, then use a double loop to generate coordinates (x, y, redChannel). Then I create indices by again double looping and generating the triangles for a square between (x, y) and (x+1, y+1). However, when I run the code, I end up with an extra triangle going from the end of one line to the beginning of the next line, which I cannot seem to solve. This only happens when I use varied heights and a sufficiently large map, or at least it is not visible otherwise.
This is the code:
void Map::setupVertices(GLsizei* &sizeP, GLint* &vertexArray, GLubyte* &colorArray) {
    //textureNum is the identifier generated by glGenTextures
    GLuint textureNum = loadMap("heightmap.bmp");
    //Bind the texture again, and extract the needed data
    glBindTexture(GL_TEXTURE_2D, textureNum);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
    GLint i = height * width;
    GLubyte* imageData = new GLubyte[i + 1];
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_UNSIGNED_BYTE, &imageData[0]);
    //Set up variables: counter (used for counting vertices)
    //vertexArray: pointer to address for storing the vertices. Size: 3 ints per point, width*height points total
    //colorArray: pointer to address for storing the color data. 3 bytes per point.
    int counter = 0;
    vertexArray = new GLint[height * width * 3];
    colorArray = new GLubyte[height * width * 3];
    srand(time(NULL));
    //Loop through rows
    for (int y = 0; y < height; y++) {
        //Loop along the line
        for (int x = 0; x < width; x++) {
            //Add vertices: x, y, redChannel
            //Add color data: the common color.
            colorArray[counter] = imageData[x + y * width];
            vertexArray[counter++] = x;
            colorArray[counter] = imageData[x + y * width];
            vertexArray[counter++] = y;
            colorArray[counter] = imageData[x + y * width]; //(float) (rand() % 255);
            vertexArray[counter++] = (float)imageData[x + y * width] / 255 * maxHeight;
        }
    }
    //"Return" total vertex count
    sizeP = new GLsizei(counter);
}
void Map::setupIndices(GLsizei* &sizeP, GLuint* &indexArray) {
    //Pointer to location for storing indices. Size: 2 triangles per square, 3 points per triangle, width*height triangles
    indexArray = new GLuint[width * height * 2 * 3];
    int counter = 0;
    //Loop through rows, don't go to top row (because those triangles are to the row below)
    for (int y = 0; y < height - 1; y++) {
        //Loop along the line, don't go to last point (those are connected to second last point)
        for (int x = 0; x < width - 1; x++) {
            //
            // TL___TR
            // | / |
            // LL___LR
            int lowerLeft = x + width * y;
            int lowerRight = lowerLeft + 1;
            int topLeft = lowerLeft + width + 1;
            int topRight = topLeft + 1;
            indexArray[counter++] = lowerLeft;
            indexArray[counter++] = lowerRight;
            indexArray[counter++] = topLeft;
            indexArray[counter++] = topLeft;
            indexArray[counter++] = lowerRight;
            indexArray[counter++] = topRight;
        }
    }
    //"Return" the amount of indices
    sizeP = new GLsizei(counter);
}
I eventually draw this with this code:
void drawGL() {
    glPushMatrix();
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_INT, 0, mapHeight);
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(3, GL_UNSIGNED_BYTE, 0, mapcolor);
    if (totalIndices != 0x00000000) {
        glDrawElements(GL_TRIANGLES, *totalIndices, GL_UNSIGNED_INT, indices);
    }
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glPopMatrix();
}
Here's a picture of the result:
http://s22.postimg.org/k2qoru3kx/open_GLtriangles.gif
And with only blue lines and black background.
http://s21.postimg.org/5yw8sz5mv/triangle_Error_Blue_Line.gif
There also appears to be one of these going in the other direction as well, at the very edge right, but I'm supposing for now that it may be related to the same issue.
I'd simplify this part:
int lowerLeft = x + width * y;
int lowerRight = (x + 1) + width * y;
int topLeft = x + width * (y + 1);
int topRight = (x + 1) + width * (y + 1);
The problem looks like topLeft has an extra + 1 when it should only have the + width.
This causes the "top" vertices to both be shifted along by one column. You might not notice the offsets within the grid and, as you pointed out, they're not visible until the height changes.
Also, returning new GLsizei(counter) seems a bit roundabout. Why not just pass in GLsizei& counter?
These might be worth a look too. You can save a fair bit of data by using strip primitives for many procedural objects (a sketch follows the links):
Generate a plane with triangle strips
triangle-strip-for-grids-a-construction
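For instance, a hypothetical strip builder for the same row-major grid as in the question (index = x + width * y), stitching rows together with two repeated indices so the degenerate triangles between rows are invisible:
std::vector<GLuint> strip; // assumes #include <vector>
for (int y = 0; y < height - 1; ++y) {
    for (int x = 0; x < width; ++x) {
        strip.push_back(x + width * (y + 1)); // vertex from the upper row
        strip.push_back(x + width * y);       // vertex from the lower row
    }
    if (y < height - 2) { // repeat two indices to stitch this row to the next
        strip.push_back((width - 1) + width * y);
        strip.push_back(width * (y + 2));
    }
}
// Drawn with: glDrawElements(GL_TRIANGLE_STRIP, strip.size(), GL_UNSIGNED_INT, strip.data());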
Currently, I'm able to load in a static sized texture which I have created. In this case it's 512 x 512.
This code is from the header:
#define TEXTURE_WIDTH 512
#define TEXTURE_HEIGHT 512
GLubyte textureArray[TEXTURE_HEIGHT][TEXTURE_WIDTH][4];
Here's the usage of glTexImage2D:
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGBA,
TEXTURE_WIDTH, TEXTURE_HEIGHT,
0, GL_RGBA, GL_UNSIGNED_BYTE, textureArray);
And here's how I'm populating the array (rough example, not exact copy from my code):
for (int i = 0; i < getTexturePixelCount(); i++)
{
    textureArray[column][row][0] = (GLubyte)pixelValue1;
    textureArray[column][row][1] = (GLubyte)pixelValue2;
    textureArray[column][row][2] = (GLubyte)pixelValue3;
    textureArray[column][row][3] = (GLubyte)pixelValue4;
}
How do I change that so that there's no need for TEXTURE_WIDTH and TEXTURE_HEIGHT? Perhaps I could use a pointer style array and dynamically allocate the memory...
Edit:
I think I see the problem: in C++ it can't really be done directly. The workaround, as pointed out by Budric, is to use a single-dimensional array whose size is all 3 dimensions multiplied together:
GLbyte *array = new GLbyte[xMax * yMax * zMax];
And to access, for example, the element at x/y/z of 1/2/3, you'd need to do:
GLbyte byte = array[(1 * yMax + 2) * zMax + 3];
However, the problem is, I don't think the glTexImage2D function supports this. Can anyone think of a workaround that would work with this OpenGL function?
Edit 2:
Attention OpenGL developers: this can be overcome by using a single-dimensional array of pixels, laid out row by row with the color channels interleaved within each pixel:
[0]: row 0, column 0, channel 0 ... n, then the remaining columns of row 0, then rows 1 ... n
...so there is no need to use a 3-dimensional array. In this case I've had to use this workaround, as dynamically sized 3-dimensional arrays are apparently not really possible in C++.
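In code form, that flattened access looks like this (the names width, channels, texture and value are illustrative):
// pixel (x, y), channel c, with `channels` values interleaved per pixel:
texture[(y * width + x) * channels + c] = value;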
OK, since this took me ages to figure out, here it is:
My task was to implement the example from the OpenGL Red Book (9-1, p373, 5th Ed.) with a dynamic texture array.
The example uses:
static GLubyte checkImage[checkImageHeight][checkImageWidth][4];
Trying to allocate a 3-dimensional array, as you would guess, won't do the job. Something like this does NOT work, because the rows are allocated separately and the memory is not contiguous, which glTexImage2D requires:
GLubyte*** checkImage;
checkImage = new GLubyte**[HEIGHT];
for (int i = 0; i < HEIGHT; ++i)
{
    checkImage[i] = new GLubyte*[WIDTH];
    for (int j = 0; j < WIDTH; ++j)
        checkImage[i][j] = new GLubyte[DEPTH];
}
You have to use a one dimensional array:
unsigned int depth = 4;
GLubyte *checkImage = new GLubyte[height * width * depth];
You can access the elements using these loops:
for (unsigned int ix = 0; ix < height; ++ix)
{
    for (unsigned int iy = 0; iy < width; ++iy)
    {
        int c = (((ix&0x8) == 0) ^ ((iy&0x8)) == 0) * 255;
        checkImage[ix * width * depth + iy * depth + 0] = c;   //red
        checkImage[ix * width * depth + iy * depth + 1] = c;   //green
        checkImage[ix * width * depth + iy * depth + 2] = c;   //blue
        checkImage[ix * width * depth + iy * depth + 3] = 255; //alpha
    }
}
Don't forget to delete it properly:
delete [] checkImage;
Hope this helps...
You can use
int width = 1024;
int height = 1024;
GLubyte * texture = new GLubyte[4*width*height];
...
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    width, height,
    0, GL_RGBA, GL_UNSIGNED_BYTE, texture);
delete [] texture; //remove the no-longer-needed local copy of the texture
However, you still need to specify the width and height to OpenGL in the glTexImage2D call. This call copies the texture data, and that data is then managed by OpenGL. You can delete, resize, or change your original texture array all you want and it won't make a difference to the texture you specified to OpenGL.
Edit:
C/C++ deals only with 1-dimensional arrays. The fact that you can do texture[a][b] is hidden and converted by the compiler at compile time. The compiler must know the number of columns and will do texture[a * cols + b].
Use a class to hide the allocation and the access to the texture.
For academic purposes, if you really want dynamic multi-dimensional arrays, the following should work:
int rows = 16, cols = 16;
char * storage = new char[rows * cols];
char ** accessor2D = new char *[rows];
for (int i = 0; i < rows; i++)
{
    accessor2D[i] = storage + i * cols;
}
accessor2D[5][5] = 2;
assert(storage[5*cols + 5] == accessor2D[5][5]);
delete [] accessor2D;
delete [] storage;
Notice that in all these cases I'm using 1D arrays; they are just arrays of pointers, and arrays of pointers to pointers. There's memory overhead to this. Also, this is done for a 2D array without color components; for 3D, dereferencing gets really messy. Don't use this in your code.
You could always wrap it up in a class. If you are loading the image from a file, you get the height and width out with the rest of the data (how else could you use the file?), so you could store them in a class that wraps the file loading instead of using preprocessor defines. Something like:
class ImageLoader
{
...
ImageLoader(const char* filename, ...);
...
int GetHeight();
int GetWidth();
void* GetDataPointer();
...
};
Even better, you could hide the calls to glTexImage2D in there with it.
class GLImageLoader
{
...
GLImageLoader(const char* filename, ...);
...
GLuint LoadToTexture2D(); // returns texture id
...
};