Why does loading vertices from a .dae file make my program slow? - c++

I've written a bunch of functions to load a COLLADA (.dae) document, but the problem is that the OpenGL GLUT (console) window responds slowly to keyboard input. I've used only string.h, stdlib.h, and fstream.h, and of course gl/glut.h. My program's main function is:
void LoadModel()
{
    COLLADA ca;
    double digits[3];
    ca.OpenFile(fname);
    ca.EnterLibGeo();   // get the position of <library_geometries>
    ca.GetFloats();     // search for the <float_array> from start to end, and save the positions in the file
    ca.GetAttributes("count", Attrib); // same as COLLADA DOM's function, but it's mine
    int run = atoi(Attrib); // convert the count attribute, which is a string in the file, to an integer
    glBegin(GL_TRIANGLES);
    for (int i = 0; i <= run; i++)
    {
        MakeFloats(digits); // converts string digits to floating-point values, using the start/end positions that GetFloats() stored in variables
        glVertex3f(digits[0], digits[1], digits[2]);
    }
    glEnd();
    glFlush();
}
This application searches for tags without loading the whole file's contents into memory. LoadModel() is called by void display(), so whenever I use GLUT's keyboard function, the vertex data is reloaded from the file. This is fine for small .dae files, but large .dae files make my program respond slowly, because it redraws the vertices by re-reading the file every frame. Is this the right way to load models?

You are reading in the file each and every time you render the mesh; don't do that.
Instead read the file once and keep the model in memory (perhaps preprocessed a bit to ease rendering).
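For example, a minimal sketch of the load-once approach; Vertex, gModel, and LoadModelOnce are hypothetical names standing in for your COLLADA parsing code:

#include <vector>
#include <GL/glut.h>

struct Vertex { float x, y, z; };   // hypothetical vertex type
std::vector<Vertex> gModel;         // filled once at startup, reused every frame

void LoadModelOnce(const char* fname)   // call once at startup, never from display()
{
    // ... parse the .dae with your COLLADA class and push each vertex ...
    // gModel.push_back(Vertex{dx, dy, dz});
}

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
    for (const Vertex& v : gModel)  // no disk I/O here, just memory reads
        glVertex3f(v.x, v.y, v.z);
    glEnd();
    glFlush();
}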
The VBO method of loading the mesh, based on your example, is:
COLLADA ca;
double digits[3];
ca.OpenFile(fname);
ca.EnterLibGeo();   // get the position of <library_geometries>
ca.GetFloats();     // search for the <float_array> from start to end, and save the positions in the file
ca.GetAttributes("count", Attrib); // same as COLLADA DOM's function, but it's mine
int run = atoi(Attrib); // convert the count attribute, which is a string in the file, to an integer

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, (run + 1)*3*sizeof(float), 0, GL_STATIC_DRAW); // i runs from 0 to run inclusive, so run+1 vertices
do {
    float* ptr = (float*)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
    for (int i = 0; i <= run; i++)
    {
        MakeFloats(digits); // converts string digits to floating-point values, using the positions stored by GetFloats()
        ptr[i*3 + 0] = (float)digits[0];    // narrow double -> float explicitly;
        ptr[i*3 + 1] = (float)digits[1];    // a memcpy from a double[] into a float
        ptr[i*3 + 2] = (float)digits[2];    // buffer would copy the wrong bytes
    }
} while (!glUnmapBuffer(GL_ARRAY_BUFFER)); // if the buffer got corrupted, remap and fill again
Then you can bind that buffer and draw it with glDrawArrays.
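For instance, a minimal fixed-function draw sketch, assuming the buffer holds tightly packed XYZ floats (vbo and run come from the snippet above):

// Draw the VBO each frame; no file access or buffer uploads happen here.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);  // 3 floats per vertex, tightly packed
glDrawArrays(GL_TRIANGLES, 0, run + 1);     // run + 1 vertices were written above
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);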

Disk I/O is relatively slow, and it is more than likely the slowness you are seeing. You should remove any unnecessary work from your draw function: load your file once at startup, then keep the data in memory. If you load different files based on key presses, either load all of them up front or load each one once, on demand.
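If the on-demand route fits better, here is a hedged sketch of a lazy cache; Model and LoadDae are hypothetical placeholders for your own types and parsing code:

#include <map>
#include <string>
#include <vector>

struct Model { std::vector<float> positions; }; // hypothetical parsed-model type

Model LoadDae(const std::string& path);         // your existing parsing code

// Returns a cached model, parsing the file only the first time it is requested.
const Model& GetModel(const std::string& path)
{
    static std::map<std::string, Model> cache;
    auto it = cache.find(path);
    if (it == cache.end())
        it = cache.emplace(path, LoadDae(path)).first;  // parse once, keep forever
    return it->second;
}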

Related

OpenGL, glMapNamedBuffer takes a long time

I've been writing an OpenGL program that generates vertices on the GPU using compute shaders. The problem is that I need to read back, on the CPU, the number of vertices written to a buffer by one compute shader dispatch, so that I can allocate a buffer of the right size for the next dispatch to fill with vertices.
/*
* Stage 1- Populate the 3d texture with voxel values
*/
_EvaluateVoxels.Use();
glActiveTexture(GL_TEXTURE0);
GLPrintErrors("glActiveTexture(GL_TEXTURE0);");
glBindTexture(GL_TEXTURE_3D, _RandomSeedTexture);
glBindImageTexture(2, _VoxelValuesTexture, 0, GL_TRUE, NULL, GL_READ_WRITE, GL_R32F);
_EvaluateVoxels.SetVec3("CellSize", voxelCubeDims);
SetMetaBalls(metaballs);
_EvaluateVoxels.SetVec3("StartPos", chunkPosLL);
glDispatchCompute(voxelDim.x + 1, voxelDim.y + 1, voxelDim.z + 1);
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
/*
* Stage 2 - Calculate the marching cube's case for each cube of 8 voxels,
* listing those that contain polygons and counting the no of vertices that will be produced
*/
_GetNonEmptyVoxels.Use();
_GetNonEmptyVoxels.SetFloat("IsoLevel", isoValue);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 3, _IntermediateDataSSBO);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, _AtomicCountersBuffer);
glDispatchCompute(voxelDim.x, voxelDim.y, voxelDim.z);
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT | GL_ATOMIC_COUNTER_BARRIER_BIT);
//printStage2(_IntermediateDataSSBO, true);
_StopWatch.StopTimer("stage2");
_StopWatch.StartTimer("getvertexcounter");
// this line takes a long time
unsigned int* vals = (unsigned int*)glMapNamedBuffer(_AtomicCountersBuffer, GL_READ_WRITE);
unsigned int vertex_counter = vals[1];
unsigned int index_counter = vals[0];
vals[0] = 0;
vals[1] = 0;
glUnmapNamedBuffer(_AtomicCountersBuffer);
The image below shows the time in milliseconds that each stage of the code takes to run; "timer Evaluate" refers to the method as a whole, i.e. the sum total of the previous stages. "getvertexcounter" refers only to the mapping, reading, and unmapping of a buffer containing the number of vertices. Please see the code for more detail.
I've found this to be by far the slowest stage in the process, and I gather it has something to do with the asynchronous nature of the communication between OpenGL and the GPU and the need to synchronise data that was written by the compute shader so it can be read by the CPU. My question is this: is this delay avoidable? I don't think the overall approach is flawed, because I know that someone else has implemented the algorithm in a similar way, albeit using DirectX (I think).
You can find my code at https://github.com/JimMarshall35/Marching-cubes-cpp/tree/main/MarchingCubes . The code in question is in the file ComputeShaderMarcher.cpp, in the method unsigned int ComputeShaderMarcher::GenerateMesh(const glm::vec3& chunkPosLL, const glm::vec3& chunkDim, const glm::ivec3& voxelDim, float isoValue, GLuint VBO).
In order to access data from a buffer that you have had OpenGL write some data to, the CPU must halt execution until the GPU has actually written that data. Whatever process you use to access this data (glMapBufferRange, glGetBufferSubData, etc), that process must halt until the GPU has finished generating the data.
So don't try to access GPU-generated data until you're sure the GPU has actually generated it (or you have absolutely nothing better to do on the CPU than wait). Use fence sync objects to test whether the GPU has finished executing past a certain point.
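A hedged sketch of that fence approach, based on the snippet above (resetting the counters inside the map is carried over from the question's code):

// After the dispatch that writes the atomic counters, drop a fence into the stream.
glMemoryBarrier(GL_ATOMIC_COUNTER_BARRIER_BIT);
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush();  // make sure the commands (and the fence) are actually submitted

// ... do other useful CPU work here instead of stalling ...

// Poll (or wait with a timeout) until the GPU has passed the fence.
GLenum status = glClientWaitSync(fence, 0, 0 /* timeout in ns; 0 = just poll */);
if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED)
{
    unsigned int* vals = (unsigned int*)glMapNamedBuffer(_AtomicCountersBuffer, GL_READ_WRITE);
    unsigned int vertex_counter = vals[1];
    unsigned int index_counter  = vals[0];
    vals[0] = 0;
    vals[1] = 0;
    glUnmapNamedBuffer(_AtomicCountersBuffer); // the map no longer stalls on the GPU
}
glDeleteSync(fence);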

Opengl Pixel Co-ordinate manipulation

I've been looking through the internet for a while now, trying to find out if I can DIRECTLY manipulate the pixels that make up a triangle in a mesh, not the vertex coordinates.
When a mesh is formed in OpenGL, the independent triangles formed by the vertex coordinates are each filled with pixels that give them color.
Those pixels are what I'm trying to manipulate. So far, every tutorial only shows how to alter vertex coordinates; even in the fragment shader parts of GLSL tutorials I'm not finding anything about the pixels directly. I'm shown texture and vertex coordinates, but no direct pixel manipulation.
So far, what I know is that each vertex is assigned some color value, all the per-pixel processing happens during execution, and you see the results.
So can pixels be directly altered in OpenGL for each triangle, or what would you recommend? I've heard it might be possible in OpenCV, but that stuff is about textures.
If I get it right, you have a high-poly mesh and want to simplify it by creating a normal map for the faces of a lower-poly version...
I've never done this, but I would attack the problem like this:
create a UV mapping of the high-poly mesh
create the low-poly mesh
so you need to merge smaller adjacent faces into bigger ones, merging only faces that are not too angled relative to the starting face (the absolute dot product between their normals is above some threshold)... You also need to remember the original mesh of each merged face.
for each merged face, render its normals
so render the merged face's original polygons to a texture, but use the UVs as 2D vertex coordinates and output the actual triangle normal as the color.
This will copy the normals into the normal map at the correct positions. Do not use any depth buffering, blending, lighting, or anything similar. Also, the 2D view must be scaled and translated so the UV mapping covers your texture (no perspective). Do not forget that the normal map (if an RGB float format is used) is clamped, so you should first normalize the normal and then convert it to the range <0,1>, for example:
n = 0.5 * (vec3(1.0,1.0,1.0) + normalize(n));
read back the rendered texture
now it should hold the whole normal map. In case you do not have render-to-texture available (older Intel HD), you can render to the screen instead and then just use glReadPixels.
As you want to save this to an image, here is a small VCL example of saving to a 24-bit BMP:
//---------------------------------------------------------------------------
void screenshot(int xs,int ys) // xs,ys is GL screen resolution
{
    // just in case your environment does not know basic programming datatypes
    typedef unsigned __int8  BYTE;
    typedef unsigned __int16 WORD;
    typedef unsigned __int32 DWORD;

    xs&=0xFFFFFFFC; // crop down resolution to be divisible by 4
    ys&=0xFFFFFFFC; // in order to keep glReadPixels from crashing on some implementations
    BYTE *dat,zero[4]={0,0,0,0};
    int hnd,x,y,a,align,xs3=3*xs;

    // allocate memory for pixel data
    dat=new BYTE[xs3*ys];
    if (dat==NULL) return;

    // copy GL screen to dat
    glReadPixels(0,0,xs,ys,GL_BGR,GL_UNSIGNED_BYTE,dat);
    glFinish();

    // BMP header structure
    #pragma pack(push,1)
    struct _hdr
    {
        char ID[2];
        DWORD size;
        WORD reserved1[2];
        DWORD offset;
        DWORD reserved2;
        DWORD width,height;
        WORD planes;
        WORD bits;
        DWORD compression;
        DWORD imagesize;
        DWORD xresolution,yresolution;
        DWORD ncolors;
        DWORD importantcolors;
    } hdr;
    #pragma pack(pop)

    // BMP header extracted from an uncompressed 24 bit BMP
    const BYTE bmp24[sizeof(hdr)]={0x42,0x4D,0xE6,0x71,0xB,0x0,0x0,0x0,0x0,0x0,0x36,0x0,0x0,0x0,0x28,0x0,0x0,0x0,0xF4,0x1,0x0,0x0,0xF4,0x1,0x0,0x0,0x1,0x0,0x18,0x0,0x0,0x0,0x0,0x0,0xB0,0x71,0xB,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0};

    // init hdr with 24 bit BMP header
    for (x=0;x<sizeof(hdr);x++) ((BYTE*)(&hdr))[x]=bmp24[x];

    // update hdr stuff with our image properties
    align=0; // (4-(xs3&3))&3;
    hdr.size=sizeof(hdr)+(ys*(xs3+align));
    hdr.width=xs;
    hdr.height=ys;
    hdr.imagesize=ys*xs3;

    // save BMP file (using VCL file functions; exchange them with whatever you got)
    hnd=FileCreate("screenshot.bmp");   // create screenshot image file (binary)
    if (hnd!=-1)                        // if file created
    {
        FileWrite(hnd,&hdr,sizeof(hdr));    // write bmp header
        for (a=0,y=0;y<ys;y++,a+=xs3)       // loop through all scan lines
        {
            FileWrite(hnd,&dat[a],xs3);     // write scan line pixel data
            if (align)                      // write scan line align zeropad if needed
                FileWrite(hnd,zero,align);
        }
        FileClose(hnd);                     // close file
    }

    // cleanup before exit
    delete[] dat;   // release dat
}
//---------------------------------------------------------------------------
The only things used from VCL are the binary file access routines, so just swap them with whatever you have at your disposal. Now you can open this BMP in whatever image software you like and convert it to whatever format you want, like PNG, without the need to encode it yourself.
The BMP header structure was taken from this Q&A:
Importing BMP file in Turboc++ issue:BMP file is not being displayed properly in the output screen
Also beware of using char/int instead of BYTE/WORD/DWORD; it usually leads to data corruption for tasks like this if you do not know what you are doing.
You can do the same with color if the mesh is textured. That way the normal map and the color map would have the same UV mapping, even if the original mesh uses more than a single texture.

DirectX using multiple Render Targets as input to each other

I have a fairly simple DirectX 11 framework setup that I want to use for various 2D simulations. I am currently trying to implement the 2D Wave Equation on the GPU. It requires I keep the grid state of the simulation at 2 previous timesteps in order to compute the new one.
How I went about it was this - I have a class called FrameBuffer, which has the following public methods:
bool Initialize(D3DGraphicsObject* graphicsObject, int width, int height);
void BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const;
void EndRender() const;
// Return a pointer to the underlying texture resource
const ID3D11ShaderResourceView* GetTextureResource() const;
In my main draw loop I have an array of 3 of these buffers. Every loop I use the textures from the previous 2 buffers as inputs to the next frame buffer and I also draw any user input to change the simulation state. I then draw the result.
int nextStep = simStep+1;
if (nextStep > 2)
    nextStep = 0;

mFrameArray[nextStep]->BeginRender(0.0f,0.0f,0.0f,1.0f);
{
    mGraphicsObj->SetZBufferState(false);
    mQuad->GetRenderer()->RenderBuffers(d3dGraphicsObj->GetDeviceContext());
    ID3D11ShaderResourceView* texArray[2] = { mFrameArray[simStep]->GetTextureResource(),
                                              mFrameArray[prevStep]->GetTextureResource() };
    result = mWaveShader->Render(d3dGraphicsObj, mQuad->GetRenderer()->GetIndexCount(), texArray);
    if (!result)
        return false;

    // perform any extra input
    I_InputSystem *inputSystem = ServiceProvider::Instance().GetInputSystem();
    if (inputSystem->IsMouseLeftDown()) {
        int x,y;
        inputSystem->GetMousePos(x,y);
        int width,height;
        mGraphicsObj->GetScreenDimensions(width,height);
        float xPos = MapValue((float)x,0.0f,(float)width,-1.0f,1.0f);
        float yPos = MapValue((float)y,0.0f,(float)height,-1.0f,1.0f);
        mColorQuad->mTransform.position = Vector3f(xPos,-yPos,0);
        result = mColorQuad->Render(&viewMatrix,&orthoMatrix);
        if (!result)
            return false;
    }
    mGraphicsObj->SetZBufferState(true);
}
mFrameArray[nextStep]->EndRender();

prevStep = simStep;
simStep = nextStep;

ID3D11ShaderResourceView* currTexture = mFrameArray[nextStep]->GetTextureResource();

// Render texture to screen
mGraphicsObj->SetZBufferState(false);
mQuad->SetTexture(currTexture);
result = mQuad->Render(&viewMatrix,&orthoMatrix);
if (!result)
    return false;
mGraphicsObj->SetZBufferState(true);
The problem is that nothing is happening. Whatever I draw appears on the screen (I draw using a small quad), but no part of the simulation actually runs. I can provide the shader code if required, but I am certain it works, since I've implemented this before on the CPU using the same algorithm. I'm just not certain how D3D render targets work and whether I'm simply drawing wrong every frame.
EDIT 1:
Here is the code for the begin and end render functions of the frame buffers:
void D3DFrameBuffer::BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const {
    ID3D11DeviceContext *context = pD3dGraphicsObject->GetDeviceContext();
    context->OMSetRenderTargets(1, &(mRenderTargetView._Myptr), pD3dGraphicsObject->GetDepthStencilView());
    float color[4];
    // Setup the color to clear the buffer to.
    color[0] = clearRed;
    color[1] = clearGreen;
    color[2] = clearBlue;
    color[3] = clearAlpha;
    // Clear the back buffer.
    context->ClearRenderTargetView(mRenderTargetView.get(), color);
    // Clear the depth buffer.
    context->ClearDepthStencilView(pD3dGraphicsObject->GetDepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);
}

void D3DFrameBuffer::EndRender() const {
    pD3dGraphicsObject->SetBackBufferRenderTarget();
}
Edit 2: OK, after I set up the DirectX debug layer I saw that I was using an SRV as a render target while it was still bound to the pixel stage in one of the shaders. I fixed that by setting the shader resources to NULL after I render with the wave shader, but the problem still persists: nothing actually gets run or updated. I took the render target code from here and slightly modified it, if it's any help: http://rastertek.com/dx11tut22.html
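For reference, that unbinding step usually looks like the following sketch (the slot indices are an assumption; use whichever slots the wave shader binds):

// After the draw that samples the previous-frame textures, clear those SRV
// slots so the same textures can be bound as render targets next frame.
ID3D11ShaderResourceView* nullSRVs[2] = { nullptr, nullptr };
d3dGraphicsObj->GetDeviceContext()->PSSetShaderResources(0, 2, nullSRVs);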
Okay, as I understand it, you need multipass rendering to a texture.
Basically you do it like I've described here: link
You create textures with both the D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_RENDER_TARGET bind flags.
You create render target views from those textures.
You set the first texture as input (*SetShaderResources()) and the second texture as output (OMSetRenderTargets()).
You Draw().
Then you bind the second texture as input and the third as output.
Draw().
etc.
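A hedged sketch of that double-bind texture creation (the format, dimensions, and variable names are assumptions):

// Texture that can be both rendered to and sampled from (ping-pong buffer).
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = width;
desc.Height           = height;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R32G32B32A32_FLOAT; // assumption: float state per texel
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;

ID3D11Texture2D*          tex = nullptr;
ID3D11RenderTargetView*   rtv = nullptr;
ID3D11ShaderResourceView* srv = nullptr;
HRESULT hr = device->CreateTexture2D(&desc, nullptr, &tex);
if (SUCCEEDED(hr)) hr = device->CreateRenderTargetView(tex, nullptr, &rtv);
if (SUCCEEDED(hr)) hr = device->CreateShaderResourceView(tex, nullptr, &srv);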
Additional advice:
If your target GPU is capable of writing to UAVs from non-compute shaders, you can use that. It is much simpler and less error prone.
If your target GPU is suitable, consider using a compute shader. It is a pleasure.
Don't forget to enable DirectX debug layer. Sometimes we make obvious errors and debug output can point to them.
Use graphics debugger to review your textures after each draw call.
Edit 1:
As I see it, you call BeginRender and OMSetRenderTargets only once, so all rendering goes into mRenderTargetView. But what you need is to interleave:
SetSRV(texture1);
SetRT(texture2);
Draw();
SetSRV(texture2);
SetRT(texture3);
Draw();
SetSRV(texture3);
SetRT(backBuffer);
Draw();
Also, we don't know what mRenderTargetView is yet.
So, before
result = mColorQuad->Render(&viewMatrix,&orthoMatrix);
there must be an OMSetRenderTargets call somewhere.
It is probably better to review your Begin()/End() design, to make the resource binding more clearly visible.
Happy coding! =)

How can I generate a screenshot when glReadPixels is empty?

I have a program which runs in a window using OpenGL (VS2012 with freeglut 2.8.1). Basically, at every time step (run via a call to glutPostRedisplay from my glutIdleFunc hook) I call my own draw function, followed by a call to glFlush to display the result. Then I call my own screenShot function, which uses glReadPixels to dump the pixels to a TGA file.
The problem with this setup is that the files are empty when the window gets minimised; that is to say, the output from glReadPixels is empty. How can I avoid this?
Here is a copy of the screenShot function I am using (I am not the copyright holder):
//////////////////////////////////////////////////
// Grab the OpenGL screen and save it as a .tga //
// Copyright (C) Marius Andra 2001 //
// http://cone3d.gz.ee EMAIL: cone3d#hot.ee //
//////////////////////////////////////////////////
// (modified by me a little)
int screenShot(int const num)
{
    typedef unsigned char uchar;
    // we will store the image data here
    uchar *pixels;
    // the thingy we use to write files
    FILE * shot;
    // we get the width/height of the screen into this array
    int screenStats[4];
    // get the width/height of the window
    glGetIntegerv(GL_VIEWPORT, screenStats);
    // generate an array large enough to hold the pixel data
    // (width*height*bytesPerPixel)
    pixels = new unsigned char[screenStats[2]*screenStats[3]*3];
    // read in the pixel data; TGA's pixels are BGR aligned
    glReadPixels(0, 0, screenStats[2], screenStats[3], GL_BGR, // 0x80E0 == GL_BGR
                 GL_UNSIGNED_BYTE, pixels);
    // open the file for writing. If unsuccessful, return 1
    std::string filename = kScreenShotFileNamePrefix + Function::Num2Str(num) + ".tga";
    shot=fopen(filename.c_str(), "wb");
    if (shot == NULL)
        return 1;
    // this is the tga header; it must be at the beginning of
    // every (uncompressed) .tga
    uchar TGAheader[12]={0,0,2,0,0,0,0,0,0,0,0,0};
    // the header that is used to get the dimensions of the .tga
    // header[1]*256+header[0] - width
    // header[3]*256+header[2] - height
    // header[4] - bits per pixel
    // header[5] - ?
    uchar header[6]={((int)(screenStats[2]%256)),
                     ((int)(screenStats[2]/256)),
                     ((int)(screenStats[3]%256)),
                     ((int)(screenStats[3]/256)),24,0};
    // write out the TGA header
    fwrite(TGAheader, sizeof(uchar), 12, shot);
    // write out the header
    fwrite(header, sizeof(uchar), 6, shot);
    // write the pixels
    fwrite(pixels, sizeof(uchar),
           screenStats[2]*screenStats[3]*3, shot);
    // close the file
    fclose(shot);
    // free the memory
    delete [] pixels;
    // return success
    return 0;
}
So how can I print the screenshot to a TGA file regardless of whether Windows decides to actually display the content on the monitor?
Note: because I am trying to keep a visual record of the progress of a simulation, I need to print every frame, regardless of whether it is being rendered. I realise that last statement is a bit of a contradiction, since I need to render the frame in order to produce the screen grab. To rephrase: I need glReadPixels (or some alternative function) to produce the updated state of my program at every step so that I can print it to a file, regardless of whether Windows chooses to display it.
Sounds like you're running afoul of the pixel ownership problem.
Render to an FBO and use glReadPixels() to slurp images out of that instead of the front buffer.
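A minimal sketch of the FBO route, assuming a fixed width/height, a depth renderbuffer, and OpenGL 3.0 or the framebuffer-object extension (width, height, pixels, and draw() are placeholders for your own code):

// Create an offscreen framebuffer whose pixels we always own,
// regardless of window visibility.
GLuint fbo, colorTex, depthRb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

// Each frame: draw into the FBO, read it back, then return to the window's framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
draw();                                 // your existing draw function
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer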
I would suggest keeping the last rendered frame stored in memory and updating that memory's contents whenever an update is called and there is actual pixel data in the new render. Alternatively, you could perhaps use the accumulation buffer, though I can't quite recall how it stores older frames (it may update so fast that it ends up storing no render data either).
Another solution might be to use a shader to manually render each frame and write the result to a file

display list doesn't work the proper way

I had already coded the display list, and when I opened it again the next day, it was gone.
Nice, I thought; I've wasted hours for nothing.
And the next problem is that I can't get it to work anymore.
The display list actually works, but not how it should: the textures are stretched somehow.
I get my world from a text file; each platform is defined by a start value of x, an end value of x,
the y value of the platform's bottom, u and v (which I don't use), and a filter for choosing the texture.
setupworld is the function that reads from the text file and writes my variables into a structure.
Oh, and numblocks is the number of platforms to display.
void setupworld()
{
    float xstart,xend,ystart,u,v;
    unsigned int filter;
    FILE *filein;
    char oneline[255];
    filein=fopen("data/world.txt","rt");
    readstr(filein,oneline);
    sscanf(oneline, "Anzahl %d\n",&numblocks);
    for (int loop=0;loop<numblocks; loop++)
    {
        readstr(filein,oneline);
        sscanf(oneline,"%f %f %f %f %f %d",&xstart,&xend,&ystart,&u,&v,&filter);
        block.data[loop].xstart=xstart;
        block.data[loop].xend=xend;
        block.data[loop].ystart=ystart;
        block.data[loop].u=u;
        block.data[loop].v=v;
        block.data[loop].filter=filter;
    }
    fclose(filein);
    return;
}
BuildLists() creates my display list, but first loads the PNG files and the world, since they influence the display list. I had to rewrite this part of the code and I just don't know where I made the mistake.
The first loop is for creating the platforms, and the second one is for the blocks; each platform consists of a number of 2x2 blocks simply placed next to each other.
GLvoid BuildLists()
{
    texture[0]=LoadPNG("data/rock_gnd.png");
    texture[1]=LoadPNG("data/rock_wall2.png");
    texture[2]=LoadPNG("data/pilz_test.png");
    setupworld();
    quad[0]=glGenLists(numblocks);
    for(int loop=0;loop<numblocks;loop++)
    {
        GLfloat xstart,xend,ystart,u,v;
        xstart=block.data[loop].xstart;
        xend=block.data[loop].xend;
        ystart=block.data[loop].ystart;
        u=block.data[loop].u;
        v=block.data[loop].v;
        GLuint filter=block.data[loop].filter;
        GLfloat blocks=(xend-xstart)/2.0f;
        glNewList(quad[loop],GL_COMPILE);
        glBindTexture(GL_TEXTURE_2D, texture[filter]);
        for(int y=0;y<blocks;y++)
        {
            glBegin(GL_QUADS);
            glTexCoord2f(0.0f,0.0f);
            glVertex3f(xstart,ystart,-1.0f);
            glTexCoord2f(1.0f,0.0f);
            glVertex3f(xstart+((y+1)*2.0f),ystart,-1.0f);
            glTexCoord2f(1.0f,1.0f);
            glVertex3f(xstart+((y+1)*2.0f),ystart+2.0f,-1.0f);
            glTexCoord2f(0.0f,1.0f);
            glVertex3f(xstart,ystart+2.0f,-1.0f);
            glEnd();
        }
        glEndList();
        quad[loop+1]=quad[loop]+1;
    }
}
The display list is compiled during initialisation, just before enabling 2D textures.
This is how I call it in my actual code:
int DrawWorld(GLvoid)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    GLfloat camtrans=-xpos;
    glTranslatef(camtrans,0,0);
    glPushMatrix();
    for(int i=0;i<numblocks;i++)
    {
        glCallList(quad[i]);
    }
    glPopMatrix();
    return TRUE;
}
So this is it. I think the mistake is in the BuildLists() function, but I'm not sure anymore.
Here is the link to my screenshot; as you can see, the textures look weird for some reason:
http://www.grenzlandzocker.de/test.png
In your code:
quad[0]=glGenLists(numblocks);
creates numblocks display lists, but you only get the first display list id, and it is stored in quad[0]. Later, you use:
glNewList(quad[loop],GL_COMPILE);
where quad[loop] is undefined for loop != 0. You can instead use:
glNewList(quad[0] + loop,GL_COMPILE);
Because the ids for display lists are contiguous, starting with the value of quad[0]. You also need to modify your rendering code to:
glCallList(quad[0] + i);
For the same reason ...
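Alternatively, a small sketch of filling the whole quad[] array up front, right after glGenLists, which also replaces the quad[loop+1]=quad[loop]+1; bookkeeping inside the loop:

// glGenLists(numblocks) reserves numblocks contiguous ids and returns the first one.
GLuint base = glGenLists(numblocks);
for (int loop = 0; loop < numblocks; loop++)
    quad[loop] = base + loop;   // now glNewList(quad[loop], ...) and
                                // glCallList(quad[i]) work as written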