Texture on cube side with OpenGL - C++

Hello, I want to use a texture on a cube (created by glutSolidCube()). How can I define where the texture is displayed, for example on the "front side" of the cube?
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filterMode);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filterMode);
glColor4f(0.8,0.7,0.11,1.0);
glPushMatrix();
glScalef(4, 1.2, 1.5);
glTranslatef( 0, 0.025, 0);
glutSolidCube(0.1);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
Thanks.

Not possible, since glutSolidCube() only generates vertices and normals, not texture coordinates.
However, there are workarounds.

Unfortunately, texturing with glutSolidCube is impossible, as it doesn't emit texture coordinates. What I'd suggest is a tutorial that explains the process. It's a bit outdated, but NeHe's texturing tutorial has code showing how to draw a cube by hand, and the code is commented to explain which side is which.
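As an illustration of what glutSolidCube() can't do for you, here is a minimal immediate-mode sketch of texturing just the front face by hand. It assumes the texture is already bound and enabled, and matches the glutSolidCube(0.1) size from the question:
// Front face of a cube of side 0.1 centered at the origin
const float h = 0.05f; // half the side length
glBegin(GL_QUADS);
glNormal3f(0.0f, 0.0f, 1.0f);
glTexCoord2f(0.0f, 0.0f); glVertex3f(-h, -h, h);
glTexCoord2f(1.0f, 0.0f); glVertex3f( h, -h, h);
glTexCoord2f(1.0f, 1.0f); glVertex3f( h,  h, h);
glTexCoord2f(0.0f, 1.0f); glVertex3f(-h,  h, h);
glEnd();
The other five faces follow the same pattern with different normals and vertex positions, which is exactly what the NeHe tutorial walks through.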

Related

OpenGL Alternative for Drawing Textures with Immediate Mode?

I'm writing a simple graphics engine that draws textures on the screen using OpenGL and C++. I draw the textures using the source code below; the drawing is done in a method of a "Sprite" class that I wrote, which is called from the main scene's game loop.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, m_textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glColor3f(1.0f, 1.0f, 1.0f);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(m_pos.x, m_pos.y);
glTexCoord2f(1.0f, 0.0f);
glVertex2f(m_pos.x + m_size.x, m_pos.y);
glTexCoord2f(1.0f, 1.0f);
glVertex2f(m_pos.x + m_size.x, m_pos.y + m_size.y);
glTexCoord2f(0.0f, 1.0f);
glVertex2f(m_pos.x, m_pos.y + m_size.y);
glEnd();
glDisable(GL_TEXTURE_2D);
m_textureID is the ID of a texture that was already loaded with OpenGL, m_pos is a vector storing the sprite's x and y position, and m_size stores the sprite's size.
The reason I'm posting this is that I heard from somebody familiar with OpenGL that this is the wrong way to draw many different textures on the screen. Apparently, wrapping the glVertex calls in glBegin(GL_QUADS) and glEnd() can be slow when there are many graphics on screen at once, and the correct way to do it is with vertex pointers.
Can anyone give me any pointers (no pun intended) on how to speed up this technique in the implementation I described above? Is there anything else I may be doing wrong? (I'm relatively new to OpenGL and graphics programming.)
Thanks in advance!
You are using immediate mode, which isn't used much (if at all) in serious 3D graphics programming these days. In fact, on mobile devices with OpenGL ES (1.1 and 2.0) it isn't even available.
At the very least, you should use vertex arrays, which are set up using glVertexPointer, glNormalPointer, glColorPointer and glTexCoordPointer, and then drawn using glDrawArrays or glDrawElements.
The next step is to push your vertex data into GPU memory using vertex buffer objects (VBOs) via glGenBuffers, glBindBuffer and glBufferData.
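As a rough sketch of the vertex-array route for the quad above (m_pos and m_size come from the question; everything else is illustrative):
// Four corners of the sprite quad: (x, y) positions and matching (u, v) texcoords
GLfloat verts[8] = {
    m_pos.x,            m_pos.y,
    m_pos.x + m_size.x, m_pos.y,
    m_pos.x + m_size.x, m_pos.y + m_size.y,
    m_pos.x,            m_pos.y + m_size.y
};
GLfloat texcoords[8] = { 0,0, 1,0, 1,1, 0,1 };
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glDrawArrays(GL_QUADS, 0, 4); // one quad, four vertices
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);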

Scale Bitmapfont clearly with openGL (interpolation)?

I load a bitmap font (as a PNG image) in my OpenGL application to render characters at a fixed size. That works. But if I want to scale some glyphs to a smaller size, it doesn't look great. Is there a way, without using pregenerated mipmaps (I have a large set of characters and need stepless sizes), to scale them more smoothly? Some kind of interpolation, or something?
At the moment I use something like this (C/C++ on Mac OS X):
glPushMatrix();
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBindTexture(GL_TEXTURE_2D, texture->getID());
glScalef(0.7f, 0.7f, 1.0f); // scale the glyphs down here
{draw vertices & set texcoords}
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
glPopMatrix();
Any suggestion?
Did you try using linear filtering on your textures?
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Set these after binding the texture.
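If plain GL_LINEAR still isn't smooth enough when shrinking glyphs, a mipmapped minification filter usually looks much better. The question rules out pregenerated mipmaps, but the driver can build the chain automatically at upload time, which still allows stepless scaling. A minimal sketch, assuming OpenGL 1.4+ and that this runs before the glyph atlas is uploaded with glTexImage2D:
// Ask the driver to build the mipmap chain when the texture is uploaded...
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
// ...then blend between mipmap levels when glyphs are drawn smaller
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);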

What am I doing wrong that I can't render an image/texture properly in OpenGL?

I'm trying to render an image to the window. Super simple (or so I thought). No matter what I do, I always get this purple square. Some tidbits of my code:
// load image from file, called one time on load:
glClearColor (0.0, 0.0, 0.0, 0.0);
glShadeModel(GL_FLAT);
glEnable(GL_DEPTH_TEST);
RgbImage theTexMap( filename );
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, theTexMap.GetNumCols(), theTexMap.GetNumRows(), 0, GL_RGB, GL_UNSIGNED_BYTE, theTexMap.ImageData() );
// render quad and texture, called inside glutDisplayFunc callback
glEnable(GL_TEXTURE_2D);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex3f(-50.0, -50.0, 0.0);
glTexCoord2f(0.0, 1.0); glVertex3f(-50.0, 50.0, 0.0);
glTexCoord2f(1.0, 1.0); glVertex3f(50.0, 50.0, 0.0);
glTexCoord2f(1.0, 0.0); glVertex3f(50.0, -50.0, 0.0);
glEnd();
glFlush();
glDisable(GL_TEXTURE_2D);
I'm cutting out a lot of code because I'm attempting to extend an example from a third-party library (ARToolkit). There's a lot more code in there than needs displaying here.
Any ideas for a mistake I might be making that would display the quad, but not render the texture/image?
Rebind your texture object in your glutDisplayFunc() callback, Just In Case™.
Also, I'm slightly leery of the GL_RGBA8. Try GL_RGBA. Probably superstition on my part though.
Dunno without trying it myself, but it seems a bit strange that you're using GL_RGBA8 for internal format and GL_RGB for pixel format.
Personally, unless I'm actually using lighting on a texture, I'll also do a glDisable(GL_LIGHTING) for textured objects. Not sure if that will help, but I know I've run into surprises with light and texture combinations.
A few ideas:
Shouldn't you call glEnable(GL_TEXTURE_2D) before you configure and upload the texture?
Did you make sure that texture dimensions are powers of 2?
If you want to use GL_LINEAR magnification/minification functions, you will probably want to generate mipmaps from your original texture.
Did you try using GL_REPLACE rather than GL_MODULATE?
I don't see you passing any color in with your vertices, and GL_MODULATE will modulate the texture with whatever the current color happens to be.
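Two quick ways to rule that out, sketched against the code above:
// Option 1: ignore the current color entirely when texturing
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
// Option 2: keep GL_MODULATE but modulate with white, which is a no-op
glColor3f(1.0f, 1.0f, 1.0f);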
Your texture only specifies MIP level 0, and not the other MIP levels, so the result may be undefined if it isn't rendering at MIP level 0. The quick fix is:
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
It's an easy mistake. A good reference:
http://www.opengl.org/wiki/Common_Mistakes
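Putting the format and mipmap suggestions together, the upload could look like the sketch below. The RgbImage accessors are taken from the question; that its data really is tightly packed RGB is an assumption.
glBindTexture(GL_TEXTURE_2D, textureID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // tightly packed RGB rows may not be 4-byte aligned
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE); // build all MIP levels on upload
// Internal format and pixel format now both describe RGB data
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, theTexMap.GetNumCols(), theTexMap.GetNumRows(), 0, GL_RGB, GL_UNSIGNED_BYTE, theTexMap.ImageData());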

OpenGL CubeMap Texture Dimensions

After struggling all weekend, I finally have a sphere reflecting its environment in OpenGL. It almost looks good. The problem is that certain features don't line up. I can't find much information on OpenGL sphere mapping beyond a two-page section in the Red Book and a few scattered, mostly unfinished forum topics. Not sure it's necessary, but I've included the code where I load the textures. After trial and error, I found that square dimensions up to 512 get the best results, but they still aren't perfect (and the dimensions have to be powers of two, or it crashes). Does anyone know of any strategies to get textures that line up more correctly?
void loadCubemapTextures(){
    glGenTextures(1, &cubemap);
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubemap);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glDrawBuffer(GL_AUX1);
    glReadBuffer(GL_AUX1);
    glMatrixMode(GL_MODELVIEW);
    //Generate cube map textures
    for(int i = 0; i < 6; i++){
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        //Viewing transformation for this cube face
        gluLookAt(pos[0], pos[1], pos[2],
                  view_pos[i][0], view_pos[i][1], view_pos[i][2],
                  top[i][0], top[i][1], top[i][2]);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(90.0, 1.0, cubemapRadius + 1, 200.0);
        glMatrixMode(GL_MODELVIEW);
        render();
        glCopyTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB, 0, 0, 256, 256, 0);
    }
}
Your walls lack enough features to make their orientation and location clear, but I would bet those are the issue.
The values you have in view_pos and top are what define those. Among the things that look suspicious:
the black at the bottom of the sphere comes from the face that captured the front view. It should be at the top (i.e. it's either flipped or rotated 180 degrees).
likewise, the black that comes from the back of the view (in the center of the sphere) looks rotated by a quarter turn (the black is on the left rather than on top).
the reflection of the map sphere looks weird. It lacks resolution to figure out exactly what this is.
Anyway, to fix it, you can either make sure your values match what a proper cube-map definition expects (i.e. check the specification), or just do a bunch of trial and error after adding enough geometry to make the orientation of each face obvious.
The problem had nothing to do with coordinates, thanks for the input though. I had forgotten to set the viewport... a simple call to glViewport inside the for-loop fixed it for me.
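In other words, something along these lines at the top of the per-face loop (256 matches the region glCopyTexImage2D reads back above):
glViewport(0, 0, 256, 256); // render into a 256x256 region so the copy grabs the whole face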

Simple OpenGL texture map not working?

I'm trying to figure out texture mapping in OpenGL and I can't get a simple example to work.
The polygon is being drawn, though it's not textured but just a solid color. Also the bitmap is being loaded correctly into sprite1[] as I was successfully using glDrawPixels up til now.
I use glGenTextures to get my tex name, but I notice it doesn't change texName1; this GLuint is whatever I initialize it to, even after the call to glGenTextures...
I have enabled GL_TEXTURE_2D.
Heres the code:
GLuint texName1 = 0;
glGenTextures(1, &texName1);
glBindTexture(GL_TEXTURE_2D, texName1);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA_EXT, sprite1[18], sprite1[22], 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, &sprite1[54]);
glColor3f(1, 1, 0);
glBindTexture(GL_TEXTURE_2D, texName1);
glBegin(GL_QUADS);
glTexCoord2f (0.0, 0.0);
glVertex3f (0.0, 0.0, -5.0f);
glTexCoord2f (1.0, 0.0);
glVertex3f (.5, 0.0, -5.0f);
glTexCoord2f (1.0, 1.0);
glVertex3f (.5, .5, -5.0f);
glTexCoord2f (0.0, 1.0);
glVertex3f (0.0, .5, -5.0f);
glEnd();
UPDATE:
I'm at a loss. Here's everything I've tried:
Turns out I was initializing my texture before OpenGL was initialized. The texture is initialized (glGenTextures -> glTexImage2D) in a class constructor and drawn (glBegin -> glEnd) in a member function called every frame. glGenTextures appears to be working correctly now, and I'm getting a name of 1.
I tried every possible combination of GL_RGBA8 and GL_BGRA_EXT (GL_BGRA doesn't work on my system; I need the _EXT), and I even removed the alpha channel from the bitmap and tried all combinations of GL_RGB, GL_BGR_EXT, and so on. No luck.
Tried procedurally creating a bitmap and using that
Made sure GL_COLOR_MATERIAL isn't enabled.
Changed bitmap size to 32x32.
Tried glTexEnvi instead of glTexEnvf.
In addition to mentat's note that you might have a problem with non-power-of-two texture dimensions, you mention that texture name generation isn't changing the name.
That sounds as if you're calling glGenTextures() too early, i.e. before initializing OpenGL. If you're not, then I suggest adding code just after the call to glGenTextures() that checks the OpenGL error state by calling glGetError().
In your comments, you say your bitmap is 29x20 pixels. AFAIK, to generate a valid texture, OpenGL requires that the image size be a power of 2 in each dimension. It doesn't need to be square; a rectangle is fine. You can overcome this by using OpenGL extensions like GL_ARB_texture_rectangle.
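A trivial check, if you want to verify your image dimensions in code:
// True if n is a power of two (16, 32, 64, ...); both texture dimensions should pass
bool isPowerOfTwo(int n) { return n > 0 && (n & (n - 1)) == 0; }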
I'll put this here as I had the same issue and found another post explaining the issue.
The iPhone does support GL_BGRA (GL_EXT_BGRA), but seemingly only as an input format and not as an internal format. So if you change the glTexImage2D call to have an internal format of GL_RGBA, then it works:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, &sprite1[54]);
I hope this helps someone else that stumbles upon this post.
Some random ideas:
GL_COLOR_MATERIAL might be enabled
change "glTexEnvf" to "glTexEnvi" and see if that helps
if texName1 is 0 after glGenTextures you might not have an active OpenGL context
For error checking, I recommend writing a small function that prints readable output for the most common results from glGetError, and using it to find the line that creates the error. Another possibility is to use something like GLIntercept, BuGLe or gDEBugger.
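A minimal sketch of such a helper (the function name and message set are just one way to do it):
#include <cstdio>
// Print every pending GL error with a readable name, tagged with a caller-supplied label
void checkGLError(const char* where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR) {
        const char* msg = "unknown error";
        switch (err) {
        case GL_INVALID_ENUM:      msg = "GL_INVALID_ENUM"; break;
        case GL_INVALID_VALUE:     msg = "GL_INVALID_VALUE"; break;
        case GL_INVALID_OPERATION: msg = "GL_INVALID_OPERATION"; break;
        case GL_OUT_OF_MEMORY:     msg = "GL_OUT_OF_MEMORY"; break;
        }
        fprintf(stderr, "GL error at %s: %s\n", where, msg);
    }
}
Call it after suspect lines, e.g. checkGLError("glGenTextures").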
My OpenGL is rusty, but I remember having the same problems with glTexImage2D. Finally I managed to make it work, but I always had more luck with gluBuild2DMipmaps, so I ended up with
gluBuild2DMipmaps(GL_TEXTURE_2D, type, i.width, i.height, type, GL_UNSIGNED_BYTE, i.data);
which replaced
glTexImage2D(GL_TEXTURE_2D, 0, type, i.width, i.height, 0, type, GL_UNSIGNED_BYTE, i.data);
I found the problem. My call to glEnable was glEnable(GL_BLEND | GL_TEXTURE_2D). Using glGetError, I saw I was getting GL_INVALID_ENUM for this call, so I moved GL_TEXTURE_2D into its own glEnable call and bingo. I guess OR-ing flags together isn't allowed for glEnable?
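Right: glEnable() takes a single capability enum, not a bitmask, so each capability needs its own call:
// GL capabilities are enum values, not bit flags, so enable them one at a time
glEnable(GL_BLEND);
glEnable(GL_TEXTURE_2D);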
The first thing I'd check is the color material setting, as mentioned by ShadowIce, then check your texture file to ensure it's a reasonable size (e.g. 256x256) and an RGB bitmap. If the file has even a slight problem, it WILL NOT render correctly, no matter what you try.
Then I'd stop trying to debug that code in isolation and instead look at what differs from the tutorial on the NeHe website.
NeHe is always a good place to check if you're trying to do stuff in OpenGL. Textures are probably the hardest thing to get right, and they only get more difficult as the rest of your GL skills increase.