OpenGL: GL_TEXTURE_1D trouble - opengl

I'm trying to add a simple repeating shaded gradient (think laminated layers) along the Z axis of arbitrary, highly tessellated objects (STL imports).
It's mostly working, but I get some really weird faces where the texture is not parallel to Z but rotated and scaled differently. These faces are usually (possibly always) oriented vertically, and some are quite large. I would have thought vertical faces would be the easiest to texture, so I'm currently stumped.
Here's the init:
glBindTexture(GL_TEXTURE_1D, _d->textures[0]);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); //GL_DECAL);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, _d->sideImage.width(), 0,
GL_BGRA_EXT, GL_UNSIGNED_BYTE, _d->sideImage.bits());
And in use:
glEnable(GL_TEXTURE_1D);
foreach(const STLTriangle& t, _model.triangles())
{
    glBegin(GL_TRIANGLES);
    glNormal3f(t.normal[0], t.normal[1], t.normal[2]);
    glTexCoord1f(fmod(t.v[0][2], 1.0)); glVertex3f(t.v[0][0], t.v[0][1], t.v[0][2]);
    glTexCoord1f(fmod(t.v[1][2], 1.0)); glVertex3f(t.v[1][0], t.v[1][1], t.v[1][2]);
    glTexCoord1f(fmod(t.v[2][2], 1.0)); glVertex3f(t.v[2][0], t.v[2][1], t.v[2][2]);
    glEnd();
}
glDisable(GL_TEXTURE_1D);
Does anyone see something wrong in how I'm going about this?

Okay, I've solved it. The fmod() was a clue.
Instead of:
glTexCoord1f(t.v[0][2]);
I needed to normalize Z across the part from 0 to 1 (or rather from 0 to 1 * textureScaleFactor):
glTexCoord1f((t.v[0][2] - bounds.min[2]) / range);
I apply a scaling factor to range to get the texture density I want.
fmod() was really wrong. If a face has minimum and maximum vertex Z values of 0 and 1, you get the full texture across it. But if they are 0 and 1.1, the 1.1 wraps to 0.1, and only a tenth of the texture gets stretched across the face. Oops. I'm amazed it worked on so many of the other faces. That's what I get for not taking a break and getting some sleep. :)
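Putting that together, here is a minimal sketch of the corrected per-vertex calls (bounds, range and textureScaleFactor are the names used above; exactly how they are stored is an assumption on my part):
// Normalize Z across the part's bounding box, then scale for the desired layer density.
const float range = (bounds.max[2] - bounds.min[2]) / textureScaleFactor;
glTexCoord1f((t.v[0][2] - bounds.min[2]) / range); glVertex3f(t.v[0][0], t.v[0][1], t.v[0][2]);
glTexCoord1f((t.v[1][2] - bounds.min[2]) / range); glVertex3f(t.v[1][0], t.v[1][1], t.v[1][2]);
glTexCoord1f((t.v[2][2] - bounds.min[2]) / range); glVertex3f(t.v[2][0], t.v[2][1], t.v[2][2]);
With GL_REPEAT set as the wrap mode, values above 1.0 simply repeat the gradient, which is exactly the laminated-layer look.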
At least at some point (with fmod() removed) I was getting an aliasing effect that made more appear to be wrong than there really was. So I also switched the texture to mipmaps. From:
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, _d->sideImage.width(), 0,
GL_BGRA_EXT, GL_UNSIGNED_BYTE, _d->sideImage.bits());
To:
gluBuild1DMipmaps(GL_TEXTURE_1D, GL_RGB, _d->sideImage.width(), GL_BGRA_EXT,
GL_UNSIGNED_BYTE, _d->sideImage.bits());
Even with this, it's still aliasing at distance. Any suggestions on how to handle that? I'll play with the mipmap levels and see where that gets me...
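One thing worth double-checking (my note, not part of the original post): mipmap levels are only sampled when GL_TEXTURE_MIN_FILTER is one of the *_MIPMAP_* modes, and the init code above leaves it at plain GL_LINEAR. Something along these lines, optionally with anisotropic filtering where the extension is present, usually helps aliasing at a distance:
glBindTexture(GL_TEXTURE_1D, _d->textures[0]);
// Actually sample the mipmap chain built by gluBuild1DMipmaps.
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// Optional: anisotropic filtering (requires EXT_texture_filter_anisotropic; extension check omitted here).
GLfloat maxAniso = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);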

Related

Lots of big textures are slow. Best way to speed up?

I am writing a terrain engine that has a lot of large textures draped over a heightmap. These textures are generated by rendering a bunch of stuff and then using the glCopyTexSubImage2D command a few times.
All fine and well (on both the speed and quality fronts), but when I make a lot of them (more than 45 textures of about 1 Mpixel each) my framerate takes a nosedive from 60 to about 2. My first thought was that something (GPU or CPU) has to be pulling the full million pixels of each texture every time it is rendered, which would definitely slow things down, so I tried what I thought was the solution: implementing mipmapping.
So I pull all the RGB values into an array and pass it right back into gluBuild2DMipmaps. (Isn't this wasteful? I ask for some data and then give it right back. Is there a better way to do this with what I have? See below.)
Now the mid-distance textures look terrible and bland, and I am no better off on the speed front.
Is there some way to get more detail on the farther-out textures while speeding up my rendering? Bear in mind that I am using freeglut and so am rather limited to OpenGL 2.
[EDIT: some code samples]
The generation of the texture:
//Only executes once as then the texture is defined.
if (TextureNumber == -1)
{
    glGenTextures (1, &TextureNumber);
    glBindTexture(GL_TEXTURE_2D, TextureNumber);
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR );
    // Note: only GL_NEAREST or GL_LINEAR are valid magnification filters;
    // a *_MIPMAP_* value here raises GL_INVALID_ENUM.
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    // ... Other not directly related stuff
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, TEXTURE_SIZE, TEXTURE_SIZE, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
}
//The texture is built up a little bit at a time, over a number of calls.
// Do some rendering
// ...
glBindTexture(GL_TEXTURE_2D, TextureNumber);
//And copy it into the big texture
glCopyTexSubImage2D (GL_TEXTURE_2D, 0, texX * _patch_size, texY * _patch_size, 0, 0, _patch_size, _patch_size);
Finally, this is run once:
unsigned char* dat = new unsigned char [TEXTURE_SIZE*TEXTURE_SIZE*3];
glBindTexture(GL_TEXTURE_2D, TextureNumber);
glGetTexImage(GL_TEXTURE_2D,0,GL_RGB,GL_UNSIGNED_BYTE,dat);
gluBuild2DMipmaps(GL_TEXTURE_2D,3,TEXTURE_SIZE,TEXTURE_SIZE,GL_RGB,GL_UNSIGNED_BYTE,dat);
finishedTexture = true;
delete[] dat;
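One possible way to avoid the read-back entirely (my suggestion, not something from the original question): OpenGL 1.4's automatic mipmap generation rebuilds the lower levels whenever the base level changes, including via glCopyTexSubImage2D, so the mipmaps stay current without ever pulling the pixels back to the CPU. Note, though, that rebuilding on every partial copy has its own cost.
// Set once, right after creating the texture; mipmaps are then regenerated
// automatically every time level 0 is modified (e.g. by glCopyTexSubImage2D).
glBindTexture(GL_TEXTURE_2D, TextureNumber);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);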
The rendering:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glColor3f(1,1,1);
glVertexPointer( 3, GL_FLOAT, 0, VertexData);
glTexCoordPointer(2, GL_FLOAT, 0, TextureData);
glBindTexture(GL_TEXTURE_2D, TextureNumber);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glDrawElements( GL_TRIANGLES, //mode
numTri[detail], //count, ie. how many indices
GL_UNSIGNED_INT, //type of the index array
TriangleData[detail]);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
The first way to speed up that code is to get rid of all the functions that are 20 years old and convert it to shaders. The fixed pipeline can be very suboptimal on modern hardware, and constantly re-sending data to the GPU is also probably killing the performance.
Bear in mind that I am using freeglut and so am rather limited to opengl 2.
No, that's not true. Freeglut is concerned mostly with window and context creation, and you can still use GLLoad or GLEW to get OGL 3.x or 4.x functions.
A quick list of things I see:
Fixed-pipeline state is used (glColor).
No VBOs are used, combined with the deprecated glVertexPointer (see the sketch below).
Perhaps an FBO should be used to fill the textures initially?
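As a rough illustration of the VBO point from the list above (a sketch only; numVerts and the buffer names are placeholders I've introduced, not the original code): upload VertexData, TextureData and the index array to GPU buffers once, then draw from those buffers every frame instead of re-sending client-side arrays.
// One-time setup: copy the arrays into buffer objects (OpenGL 1.5+).
GLuint vbo[2], ibo;
glGenBuffers(2, vbo);
glGenBuffers(1, &ibo);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, numVerts * 3 * sizeof(GLfloat), VertexData, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, numVerts * 2 * sizeof(GLfloat), TextureData, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, numTri[detail] * sizeof(GLuint), TriangleData[detail], GL_STATIC_DRAW);

// Per frame: the pointer arguments become byte offsets into the bound buffers.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glTexCoordPointer(2, GL_FLOAT, 0, (void*)0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBindTexture(GL_TEXTURE_2D, TextureNumber);
glDrawElements(GL_TRIANGLES, numTri[detail], GL_UNSIGNED_INT, (void*)0);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);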

Textures Stopped Working Correctly

Recently my textures went crazy. The last two textures I tried to map appeared as in the picture below. I want them to appear as in the first picture, but no matter what I did, they insist on appearing as in the latter one. Please ignore the text; it has nothing to do with the texture.
I am using GLUT for my OpenGL windowing, and the GLM obj loader's TGA reader. I have used the reader many times before with no problem. It just stopped working for my last two attempts to load a texture. The related code is below:
Texture onScreenTexture;
if (LoadTGA(&onScreenTexture, "back.tga"))
{
    glGenTextures(1, &onScreenTexture.texID);
    glBindTexture(GL_TEXTURE_2D, onScreenTexture.texID);
    glTexImage2D(GL_TEXTURE_2D, 0, onScreenTexture.bpp / 8, onScreenTexture.width,
                 onScreenTexture.height, 0, onScreenTexture.type, GL_UNSIGNED_BYTE,
                 onScreenTexture.imageData);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    if (onScreenTexture.imageData)
    {
        free(onScreenTexture.imageData);
    }
}
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, onScreenTexture.texID);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glBegin(GL_QUADS);
glTexCoord2f(0,0); glVertex2f(10.0, 10.0);
glTexCoord2f(0,1); glVertex2f(260, 10.0);
glTexCoord2f(1,1); glVertex2f(260, 110);
glTexCoord2f(1,0); glVertex2f(10.0, 110);
glEnd();
glDisable(GL_TEXTURE_2D);
This has nothing to do with the width/height ratio (though you do seem to be rendering the texture rotated 90 degrees, which causes some additional stretching); it is about the packing of rows of pixels. That is apparent from the diagonal pattern, which indicates a progressive alignment issue, and from the coloured stripes, which show that the RGB data is aligned differently on each line.
In your case, you're loading a TGA, which has no row padding, but passing it to GL, which by default expects rows of pixels to be padded to a multiple of 4 bytes.
Your working textures are probably either 32-bit rather than 24-bit, or a multiple of 4 pixels wide; either of those gives a natural alignment.
Possible fixes for this are:
Change the dimensions of your texture, such that there will be no padding.
Change the loading of your texture, such that the padding is consistent with what GL expects
Tell GL how your texture is packed, using (for example) glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
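A minimal sketch of the third option, using the variable names from the loading code above (the upload call otherwise mirrors what is already there):
// Tell GL the rows in the TGA data are tightly packed (the default alignment is 4 bytes).
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, onScreenTexture.bpp / 8, onScreenTexture.width,
             onScreenTexture.height, 0, onScreenTexture.type, GL_UNSIGNED_BYTE,
             onScreenTexture.imageData);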

OpenGL CubeMap Texture Dimensions

After struggling all weekend, I finally have a sphere reflecting its environment in OpenGL. It almost looks good. The problem is that certain features don't line up. I can't find much information on the topic of OpenGL sphere mapping, outside of a two-page section in the Red Book and a few scattered, mostly unfinished forum topics. Not sure it's necessary, but I included the code where I load the textures. After trial and error, I found that square dimensions of 512 x 512 give the best results, but they still aren't perfect (and the dimensions have to be powers of two, or it crashes). Does anyone know of any strategies to get textures that line up more correctly?
void loadCubemapTextures(){
    glGenTextures(1, &cubemap);
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubemap);
    // Note: the parameters belong to the bound cube-map texture, so the
    // target must be GL_TEXTURE_CUBE_MAP here, not GL_TEXTURE_2D.
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glDrawBuffer(GL_AUX1);
    glReadBuffer(GL_AUX1);
    glMatrixMode(GL_MODELVIEW);
    //Generate cube map textures, one face per pass
    for(int i = 0; i < 6; i++){
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        //Viewing Transformation
        gluLookAt(pos[0], pos[1], pos[2],
                  view_pos[i][0], view_pos[i][1], view_pos[i][2],
                  top[i][0], top[i][1], top[i][2]);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(90.0, 1.0, cubemapRadius + 1, 200.0);
        glMatrixMode(GL_MODELVIEW);
        render();
        //Copy the framebuffer into the i-th cube-map face
        glCopyTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB, 0, 0, 256, 256, 0);
    }
}
Your walls lack enough features to make their orientation and location clear, but I would bet those are the issue.
The values you have in view_pos and top are what define those. Among the things that look suspicious:
the black at the bottom of the sphere comes from the face that captured the front view. It should be at the top (i.e. it's either flipped or rotated 180 degrees).
likewise, the black that comes from the back of the view (in the center of the sphere) looks turned by a quarter of a circle (the black is on the left rather than on the top).
the reflection of the map sphere looks weird, but the image lacks the resolution to figure out exactly what is going on there.
Anyway, to fix it, you can either make sure your values match what a proper cube-map definition expects (i.e. check the specification), or just do a bunch of trial and error after adding enough geometry to make the orientation of each face obvious.
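For reference, the usual per-face look directions and up vectors for rendering a cube map are sketched below (dir and up are names I've introduced, indexed to match GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, with the look-at point being pos plus the direction):
// Face order: +X, -X, +Y, -Y, +Z, -Z
const GLdouble dir[6][3] = {
    { 1, 0, 0}, {-1, 0, 0},
    { 0, 1, 0}, { 0,-1, 0},
    { 0, 0, 1}, { 0, 0,-1}
};
const GLdouble up[6][3] = {
    { 0,-1, 0}, { 0,-1, 0},
    { 0, 0, 1}, { 0, 0,-1},
    { 0,-1, 0}, { 0,-1, 0}
};
// Inside the loop:
// gluLookAt(pos[0], pos[1], pos[2],
//           pos[0] + dir[i][0], pos[1] + dir[i][1], pos[2] + dir[i][2],
//           up[i][0], up[i][1], up[i][2]);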
The problem had nothing to do with the coordinates, thanks for the input though. I had forgotten to set the viewport: a simple call to glViewport inside the for-loop fixed it for me.

OpenGL texture randomly not shown

I have got a very, very strange problem in my C++ OpenGL application.
I simply load a texture and apply it to a quadric:
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
Then
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
gluQuadricDrawStyle(quad,GLU_FILL);
gluQuadricTexture(quad,GL_TRUE);
gluCylinder(quad,1,0,2,20,1);
glDisable(GL_TEXTURE_2D);
Now: it works perfectly 9 times out of ten, but sometimes the texture isn't shown (the quadric stays white).
The image is correctly loaded, so the problem should be with OpenGL. I have tried several different images too. glGetError always returns GL_NO_ERROR.
Any idea? It is driving me crazy...
Found it :) It was the GLint texture member that wasn't handled correctly in the copy constructor.
However, I still don't understand why it worked sometimes...
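A guess at why it was intermittent (the class below is hypothetical, not the original code): if an object that owns a texture id is copied with the compiler-generated copy constructor, both copies end up holding the same id; when one of them is destroyed and deletes the texture, the other is left with a dangling id. Whether anything still renders after that depends on whether the driver has since reused the id, which is why it can appear to work most of the time.
class TexturedThing {
public:
    TexturedThing() : tex(0) { /* glGenTextures(1, &tex); upload the image ... */ }
    ~TexturedThing() { glDeleteTextures(1, &tex); }
    // Without a real copy constructor, the default one copies 'tex',
    // and the first destructor to run deletes the texture for both objects.
    TexturedThing(const TexturedThing& other) : tex(0) {
        // e.g. generate a new name and re-upload, or share via reference counting
    }
private:
    GLuint tex;
};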
The code you are using looks valid. Have you ...
tried using a simple quad instead of the quadric
made sure that image is filled correctly
verified that tex is not altered somewhere else
made sure that no other program is using OpenGL at the same time
restarted your computer ;)

Removing OpenGL texture artifacts

I am creating a simple OpenGL application which obviously includes some 3D objects and textures. My problem, however, is that artifacts appear on every texture; they come in the form of triangles along the edges.
I have noticed that the problem disappears as soon as I move the viewpoint closer: the texture then renders perfectly. Therefore I suspect it has something to do either with the mipmapping or with the z-buffer.
Please note that all texture-coordinates are loaded from a .3ds-file and all of them are verified to be within the range of 0-1.
Here is a picture of my problem:
Picture 1
The textures are loaded like this:
//Texture parameters
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
//Define the 2d texture (base level)
glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, array);
//Create 2d mipmaps (this re-specifies level 0 as well, so the glTexImage2D call above is redundant)
gluBuild2DMipmaps(GL_TEXTURE_2D, 4, width, height, GL_RGBA, GL_UNSIGNED_BYTE, array);
When I was programming with DirectX, the near-plane/far-plane distance ratio caused artifacts at the edges.
In my case, if the near plane was 1 unit away from the 'camera' and the far plane was 10000 units away, the ratio was 1/10000 and it created problems. If I set the near plane to 10 or 100, the ratio became larger, and that solved the jagged-edges problem.
I don't know if/how this applies in OpenGL, but you might want to check it out.
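The same idea does apply in OpenGL (my note, not part of the original answer): depth-buffer precision is dominated by the near-plane distance, so pushing the near plane out is usually the first thing to try when distant geometry shows z-fighting. A sketch, assuming a gluPerspective projection (the field-of-view and aspect values here are placeholders):
// Before: a tiny near plane concentrates almost all depth precision right next to the camera.
// gluPerspective(60.0, aspect, 0.1, 10000.0);
// After: a larger near plane spreads precision across the visible range.
gluPerspective(60.0, aspect, 10.0, 10000.0);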