I'm working with SDL and OpenGL, creating a fairly simple application. I created a basic text rendering function, which maps a generated texture onto a quad for each character. This texture is rendered from a bitmap of each character.
The bitmap is fairly small, about 800x16 pixels. It works absolutely fine on my desktop and laptop, both in and out of a VM (and on both Windows and Linux).
Now, I'm trying it on another computer, and the text becomes all garbled - it appears as though the computer can't handle a very basic thing like this. To see if it was due to the OS, I installed VirtualBox and tested it in the VM - but the result is even worse! Instead of rendering anything (albeit garbled), it just renders a plain white box.
Why is this occurring, and is there any way to solve it?
Some code - how I initialize the texture:
glGenTextures(1, &fontRef);
glBindTexture(GL_TEXTURE_2D, fontRef);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, FNT_IMG_W, FNT_IMG_H, 0,
             GL_RGB, GL_UNSIGNED_BYTE, MY_FONT);
Above, MY_FONT is an unsigned char array (the raw image dump from GIMP). When I draw a character:
GLfloat ix = c * (GLfloat) FNT_CHAR_W;
// We just map each corner of the texture to a new vertex.
glTexCoord2d(ix, FNT_CHAR_H); glVertex3d(x, y, 0);
glTexCoord2d(ix + FNT_CHAR_W, FNT_CHAR_H); glVertex3d(x + iCharW, y, 0);
glTexCoord2d(ix + FNT_CHAR_W, 0); glVertex3d(x + iCharW, y + iCharH, 0);
glTexCoord2d(ix, 0); glVertex3d(x, y + iCharH, 0);
That sounds to me as if the graphics card of the machine you are working on only supports power-of-two textures (e.g. 16, 32, 64, ...). An 800x16 texture certainly would not work on such a card.
You can check for the GL_ARB_texture_non_power_of_two extension (for example by searching the string returned by glGetString(GL_EXTENSIONS)) to see whether the card supports it.
Or use GLEW to do that check for you.
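For example, a minimal sketch of that check (assuming a GL context is already current and glewInit() has been called):
#include <string.h>
#include <GL/glew.h>

/* Returns nonzero if non-power-of-two textures are supported. */
static int hasNpotSupport(void)
{
    /* GLEW exposes the extension as a boolean flag... */
    if (GLEW_ARB_texture_non_power_of_two)
        return 1;

    /* ...or search the extension string directly (legacy GL way). */
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, "GL_ARB_texture_non_power_of_two") != NULL;
}
If this returns 0, the 800x16 font bitmap would have to be padded to power-of-two dimensions (e.g. 1024x16) before glTexImage2D, with the texture coordinates adjusted accordingly.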
Related
We are developing a C++ plug-in within an OpenGL application. The application will call a "render" method on our plug-in as necessary. While rendering our textures, we noticed that sometimes some of the textures are drawn completely white even though they are created with valid data. It appears to be random which texture is affected and when. While investigating what could cause some of the textures to render white, I noticed that simply trying to retrieve the size of a texture (even for the ones that render correctly) doesn't work. Here is the simple code to create the texture and retrieve its size:
GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageWidth, imageHeight, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, imageDdata);
// try to lookup the size of the texture
int textureWidth = 0, textureHeight = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &textureWidth);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &textureHeight);
glBindTexture(GL_TEXTURE_2D, 0);
The image width and height input to glTexImage2D are 1536 x 1536; however, the values returned from glGetTexLevelParameter are 16384 x 256. In fact, any width and height I pass to glTexImage2D results in an output of 16384 x 256. If I pass a width and height of 64 x 64, I still get back 16384 x 256.
I am using the same simple texture load/render code in another standalone test application and it works correctly all the time. However, I get these white textures when I use the code within this larger application. I have also verified that glGetError() returns 0.
I am assuming the containing application is setting some OpenGL state that is causing problems when we try to render our textures. Do you have any suggestions for things to check that could cause these white textures OR invalid texture dimensions?
Update
Both my test application that renders correctly and the integrated application that doesn't render correctly are running within a VM on Windows 7 with Accelerated 3D Graphics enabled. Here is the VM environment:
CentOS 7.0
OpenGL 2.1 Mesa 9.2.5
Did you check that you have a valid OpenGL context active when this code is called? The values you get back may be uninitialized garbage left in the variables; those values don't get modified if glGetTexLevelParameter fails for some reason. Note that glGetError may return GL_NO_ERROR if there's no OpenGL context active.
To check whether there is an OpenGL context current, use wglGetCurrentContext (Windows), glXGetCurrentContext (X11 / GLX) or CGLGetCurrentContext (Mac OS X / CGL) to query the active OpenGL context; if none is active, all of these functions return NULL.
Just FYI: you should use GLint when retrieving integer values from OpenGL. The reason is that the OpenGL types have very specific sizes, which may differ from the primitive C types of the same name. For example, a C unsigned int may be anywhere from 16 to 64 bits in size, while an OpenGL GLuint is always fixed at 32 bits.
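For illustration, a minimal sketch of both points on Windows (WGL assumed; substitute the GLX or CGL call on other platforms, and textureId is the handle from the question's code):
#include <windows.h>
#include <GL/gl.h>

// GLint rather than plain int, as noted above.
GLint textureWidth = 0, textureHeight = 0;

if (wglGetCurrentContext() == NULL)
{
    // No OpenGL context is current on this thread: every gl* call is undefined,
    // glGetError() keeps reporting GL_NO_ERROR, and the two variables above
    // simply keep their initial values.
}
else
{
    glBindTexture(GL_TEXTURE_2D, textureId);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &textureWidth);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &textureHeight);
}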
https://www.opengl.org/sdk/docs/man/docbook4/xhtml/glGetTexLevelParameter.xml
glGetTexLevelParameter returns the texture level parameters for the texture bound on the active texture unit.
Try glActiveTexture on all the units and see whether you are getting default values.
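Something like the following diagnostic loop could show which unit actually holds a texture of the expected size (a sketch; glActiveTexture is core since GL 1.3, and GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is the GL 2.0 limit):
#include <stdio.h>

GLint maxUnits = 0, previousUnit = 0;
glGetIntegerv(GL_ACTIVE_TEXTURE, &previousUnit);
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxUnits);
for (GLint i = 0; i < maxUnits; ++i)
{
    GLint bound = 0, w = 0, h = 0;
    glActiveTexture(GL_TEXTURE0 + i);
    glGetIntegerv(GL_TEXTURE_BINDING_2D, &bound);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
    printf("unit %d: texture %d, level 0 is %dx%d\n", (int)i, (int)bound, (int)w, (int)h);
}
glActiveTexture((GLenum)previousUnit);  // restore the previously active unit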
I have a simple OpenGL question. I'm currently trying to learn texturing, and here is the part I'm confused about:
void initTextures()
{
    GLuint gTextureSphere;
    int width, height, channels = 1;
    unsigned char* textureMapData = SOIL_load_image("res/texturemap.jpg", &width, &height, 0, SOIL_LOAD_RGB);
    //texture map
    glGenTextures(1,&gTextureSphere);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D,gTextureSphere);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureMapData);
    SOIL_free_image_data(textureMapData);
    glUniform1i(glGetUniformLocation(gProgramSphere, "normalTexture"), 0);
    ////////////////////////
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
I think the code above reads my image "texturemap.jpg" with the SOIL_load_image function and stores it in the textureMapData variable. Now, I want to know what the purpose of the following 4 lines is. I mean, I have already read the data. Am I putting this data into the gTextureSphere variable with these 4 lines? I guess that is not possible, since gTextureSphere is a GLuint variable. Could anyone explain this to me?
Now, I want to know what the purpose of the following 4 lines is.
So far the texture data has only been loaded into the address space of the program. But OpenGL, the renderer API, does not "magically" learn about the availability of that data. Let's break it down:
First we generate an OpenGL handle, the name by which we and OpenGL refer to a particular texture object. The generated handle will be stored in the variable gTextureSphere.
glGenTextures(1,&gTextureSphere);
OpenGL has several "plugs", called texture units, into which texture objects can be "connected". This tells OpenGL that the following operations should happen on texture unit 0 (GL_TEXTURE0):
glActiveTexture(GL_TEXTURE0);
Next we make a connection between the just-selected texture unit and the texture object that we and OpenGL agreed to call by the value contained in the variable gTextureSphere.
glBindTexture(GL_TEXTURE_2D,gTextureSphere);
Now that OpenGL knows that we're talking about texture unit 0 and a certain texture plugged into it, we can tell it to do things with that texture object, for example copy in the image data that was read from the file and decoded into a buffer:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureMapData);
At this point OpenGL has a texture object with a working copy of the image data; we can now safely free the buffer we decoded the image file into, since OpenGL has its own copy.
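For illustration, a minimal sketch of how the texture might then be used at draw time (assuming gTextureSphere is kept around in a persistent variable rather than the local it is in initTextures, and that gProgramSphere is the linked shader program from the question):
glUseProgram(gProgramSphere);                  // the sampler uniform belongs to this program
glActiveTexture(GL_TEXTURE0);                  // select texture unit 0 again
glBindTexture(GL_TEXTURE_2D, gTextureSphere);  // plug the texture object into that unit
glUniform1i(glGetUniformLocation(gProgramSphere, "normalTexture"), 0);  // the sampler reads unit 0
// ... issue the draw call for the sphere here ...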
The setup is a little complicated so I will try my best to detail it.
First, I am attempting to use OpenCL/OpenGL interop. The code works when the interop cl::ImageGL is not used, so the basics are there. The project uses 3 main OpenGL contexts (it took a lot to get the OpenCL context to open in that setup). I am using QGLWidgets for the contexts. There is a hidden QGLWidget created first, and the other 2 share the hidden one's context. Each of the 2 QGLWidgets runs in its own thread. The hidden QGLWidget is transferred to a thread where the OpenCL context is created.
QGLFormat qglFormat;
qglFormat.setVersion(3, 3);
qglFormat.setProfile(QGLFormat::CoreProfile);
m_hiddenGl=new GLHiddenWidget(qglFormat);
m_hiddenGl->setVisible(false);
m_view1=new GLWidget(qglFormat, m_hiddenGl);
m_view2=new GLWidget(qglFormat, m_hiddenGl);
...
QThread *processThread=m_process.qThread();
m_hiddenGl->doneCurrent();
m_hiddenGl->context()->moveToThread(processThread);
GLWidget is a custom class that launches its own thread and moves the context; GLHiddenWidget is again a custom class, but it basically just overrides all the functions needed to keep makeCurrent from being called by the main thread.
Inside the process thread, the following is run at startup:
m_hiddenGl->makeCurrent();
hdc=wglGetCurrentDC();
glHandle=wglGetCurrentContext();
cl_context_properties clContextProps[]={
    CL_CONTEXT_PLATFORM, (cl_context_properties)m_openCLPlatform(),
    CL_WGL_HDC_KHR, (intptr_t) hdc,
    CL_GL_CONTEXT_KHR, (intptr_t) glHandle, 0
};
m_openCLContext=cl::Context(m_openCLDevice, clContextProps, NULL, NULL, &error);
All of this succeeds. From this point, several kernels are executed sequentially on incoming data (images). All of the kernels run without errors; however, the one kernel that uses an OpenGL texture to write out data fails to write anything. When using an OpenCL cl::Image2d instead, it works fine (produces the correct output), even if the OpenCL context is created as an interop context.
The OpenGL texture is created after all of the OpenGL contexts are created and after the OpenCL context is created (also in the same thread as the OpenCL context). At the beginning of the processThread the texture is generated by the hidden QGLWidget with glGenTextures. Then all of the kernels are created, along with other OpenCL buffers and images. Right before the kernel is executed, the following is done:
if(initBuffer) //runs only if buffer size is changed, always runs first time
{
    m_hiddenGl->makeCurrent();
    glBindTexture(GL_TEXTURE_2D, m_texture);
    //I have attempted to put data in, result is always black from kernel
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, 0);
    glFinish();
    //m_flags is CL_MEM_READ_WRITE
    m_imageGL=cl::ImageGL(m_openCLContext, m_flags, GL_TEXTURE_2D, 0, m_texture, &error);
}
And here is a simplified kernel:
__kernel void kernel1(__read_only image2d_t src1, __read_only image2d_t src2, __write_only image2d_t dst)
{
    int2 coord=(int2)(get_global_id(0), get_global_id(1));
    uint4 value=255;
    write_imageui(dst, coord, convert_uint4(value));
}
Even if I do not display the texture the image is still black. The image is read back from openCL and saved to the hard disk, when using cl::Image2d or cl::ImageGL. With cl::Image2d it is correct with cl::ImageGL it is black.
I've had the same problem. Using write_imagef() instead of write_imageui() in the OpenCL kernel solved it (don't forget to divide the RGBA values by 255.0!), without changing the pixel format of the OpenGL texture (you may not have the opportunity to do that, especially if it's an existing texture created somewhere else in a large app).
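For instance, a sketch of the simplified kernel from the question rewritten that way (the GL_RGBA / GL_UNSIGNED_BYTE texture is then sampled by OpenCL as a normalized image):
__kernel void kernel1(__read_only image2d_t src1, __read_only image2d_t src2, __write_only image2d_t dst)
{
    int2 coord = (int2)(get_global_id(0), get_global_id(1));
    // 255 in the integer version becomes 255/255.0 = 1.0 here.
    float4 value = (float4)(1.0f, 1.0f, 1.0f, 1.0f);
    write_imagef(dst, coord, value);
}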
You'd have to post more code for us to find the error.
In any case, I've written an example program which does something very similar. It uses OpenCL to calculate a mandelbrot fractal then draws it with OpenGL inside a QGLWidget. The source is here: https://github.com/kylelutz/compute/blob/master/example/mandelbrot.cpp.
Hope that helps.
Well, as noted above in my comment, I found that my problem was related to the definition of the texture for OpenGL. Since I used write_imageui in the OpenCL kernel, the OpenGL texture had to be defined as follows:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, NULL);
Since then I have found the time to create an example program that uses both OpenGL/OpenCL interop and threaded QGLWidgets. You can read my comments on it at http://www.krazer.com/?p=109 and/or you can get the source from github.com
I have got a very, very strange problem in my C++ OpenGL application.
I simply load a texture and apply it to a quadric:
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
Then
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
gluQuadricDrawStyle(quad,GLU_FILL);
gluQuadricTexture(quad,GL_TRUE);
gluCylinder(quad,1,0,2,20,1);
glDisable(GL_TEXTURE_2D);
Now, it works perfectly nine times out of ten, but sometimes the texture isn't shown (the quadric stays white).
The image is correctly loaded, so the problem should be with OpenGL. I have tried with several different images too. Always GL_NO_ERROR.
Any idea ? It is driving me crazy...
Found :) It was the GLint texture member that wasn't correctly reallocated in the copy constructor.
However, I still don't understand why it worked sometimes...
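For anyone hitting the same thing, a hypothetical sketch of that kind of bug (the class and member names here are made up, not the original code):
#include <GL/gl.h>

class TexturedThing
{
public:
    TexturedThing() : tex(0) { /* glGenTextures(1, &tex) happens later */ }

    TexturedThing(const TexturedThing &other)
    {
        // Bug: 'tex' is never assigned here, so the copy holds an
        // indeterminate handle. Depending on the garbage value it may bind
        // the real texture, texture 0, or a nonexistent name, which would
        // explain why it "worked" sometimes and stayed white otherwise.
    }

private:
    GLuint tex;  // the texture member the self-answer refers to
};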
The code you are using seems valid. Have you ...
tried to use a simple quad instead of the quadric (a minimal sketch follows this list)
assured that image is filled correctly
verified that tex is not altered somewhere else
assured that no other programs are using OpenGL at the same time
restarted your computer ;)
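For the first point, a minimal debug sketch of such a quad, using the tex handle from the question:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();
glDisable(GL_TEXTURE_2D);
If the quad shows the texture every time while the quadric does not, the problem is in the quadric setup rather than the texture itself.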
I have created a PNG image in Photoshop with transparency, and I have loaded it into an OpenGL program. I have bound it to a texture, but in the program the picture looks blurry and I'm not sure why.
(Screenshots: http://img685.imageshack.us/img685/9130/upload2.png and http://img695.imageshack.us/img695/2424/upload1e.png)
Loading Code
// Texture loading object
nv::Image title;
// Return true on success
if(title.loadImageFromFile("test.png"))
{
    glGenTextures(1, &titleTex);
    glBindTexture(GL_TEXTURE_2D, titleTex);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexImage2D(GL_TEXTURE_2D, 0, title.getInternalFormat(), title.getWidth(), title.getHeight(), 0, title.getFormat(), title.getType(), title.getLevel(0));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0f);
}
else
    MessageBox(NULL, "Failed to load texture", "End of the world", MB_OK | MB_ICONINFORMATION);
Display Code
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, titleTex);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTranslatef(-800, 0, 0.0);
glColor3f(1,1,1);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex2f(0,0);
glTexCoord2f(0.0, 1.0); glVertex2f(0,600);
glTexCoord2f(1.0, 1.0); glVertex2f(1600,600);
glTexCoord2f(1.0, 0.0); glVertex2f(1600,0);
glEnd();
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
EDIT: I don't think I'm stretching it; each pixel should be two units in the coordinate system:
int width=800, height=600;
int left = -400-(width-400);
int right = 400+(width-400);
int top = 400+(height-400);
int bottom = -400-(height-400);
gluOrtho2D(left,right,bottom,top);
OpenGL will (normally) require that the texture itself have a size that's a power of 2, so what's (probably) happening is that your texture is being scaled to a size where the dimensions are a power of 2, then it's being scaled back to the original size -- in the process of being scaled twice, you're losing some quality.
You apparently just want to display your bitmap without any scaling, without wrapping it to the surface of another object, or anything like that (i.e., any of the things textures are intended for). That being the case, I'd just display it as a bitmap, not a texture (e.g. see glRasterPos2i and glBitmap).
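A minimal sketch of that approach, using the accessors from the question's loading code (note that glBitmap itself only handles 1-bit masks, so for a full-colour image the corresponding call is glDrawPixels):
glRasterPos2i(0, 0);  // lower-left corner of the image, in the current projection's coordinates
glDrawPixels(title.getWidth(), title.getHeight(),
             title.getFormat(), title.getType(), title.getLevel(0));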
Why would you want to use mipmapping and anisotropic filtering for a static image on your start screen in the first place? It seems unlikely the image will be rotated (what anisotropic filtering is for) or will have to be resized many times really fast (what mipmapping is for).
If the texture is being stretched: try using GL_NEAREST for your GL_TEXTURE_MAG_FILTER; in this case it could give better results (GL_LINEAR is more accurate, but tends to blur).
If the texture is minified: same thing, try using GL_NEAREST_MIPMAP_NEAREST, or even better, try using no mipmaps and GL_NEAREST (or GL_LINEAR, whichever gives you the best result).
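For example, the loading code could drop the GL_GENERATE_MIPMAP and anisotropy lines and use plain filters (a sketch):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  // or GL_LINEAR
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);  // or GL_LINEAR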
I'd suggest that you make the PNG have power-of-two dimensions, e.g. 1024x1024 (600 doesn't fit into 512), and place the part you want drawn in the upper-left corner, still at the resolution for the screen. Then rescale the texcoords to 800.0/1024.0 and 600.0/1024.0 to get the right part from the texture. I believe what is going on is that glTexImage2D can sometimes handle a width and height that are not a power of 2, but may then scale the input image and thus filter it.
An example of handling this can be viewed here (an iPhone / OpenGL ES project that grabs a non-power-of-2 part of the screen, draws it into a 512x512 texture, and rescales the GL_TEXTURE matrix, line 123, instead of doing a manual rescale of the texcoords).
Here is another post mentioning this method.
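For reference, a minimal sketch of the texcoord rescaling (assuming the 800x600 image has been copied into the top-left corner of a 1024x1024 buffer before glTexImage2D, and keeping the question's vertex layout):
const GLfloat u = 800.0f / 1024.0f;  // right edge of the useful region
const GLfloat v = 600.0f / 1024.0f;  // bottom edge of the useful region, in the row order as uploaded

glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0, 0);
glTexCoord2f(0.0f, v);    glVertex2f(0, 600);
glTexCoord2f(u,    v);    glVertex2f(1600, 600);
glTexCoord2f(u,    0);    glVertex2f(1600, 0);
glEnd();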
Here is a hypothesis:
Your window is 800x600 (and maybe your framebuffer too), but your client area is not, because of the window decoration on the sides.
So your framebuffer gets resized when being blitted to the client area of your window. Can you check your window creation code?
Beware of the exact size of the client area of your window. Double-check that it is what you expect it to be.
Beware of pixel alignment rules. You might need to add 0.5 to your x/y coordinates to hit the pixel centers. A description for DirectX can be found here; OpenGL rules may be different, though.