Loading textures in C++ OpenGL using DevIL

Using C++, I'm trying to load a texture into OpenGL using DevIL. After scrounging around for different code segments, I have a bit of code done (shown below), but it doesn't seem to work completely.
Loading a texture (Part of a Texture2D class):
void Texture2D::LoadTexture(const char *file_name)
{
    unsigned int image_ID;

    ilInit();
    iluInit();
    ilutInit();
    ilutRenderer(ILUT_OPENGL);

    image_ID = ilutGLLoadImage((char*)file_name);

    sheet.texture_ID = image_ID;
    sheet.width = ilGetInteger(IL_IMAGE_WIDTH);
    sheet.height = ilGetInteger(IL_IMAGE_HEIGHT);
}
This compiles and works fine. I do realise that I should only do the ilInit(), iluInit(), and ilutInit() once, but if I remove those lines the program instantly breaks upon loading any image (it compiles fine, but errors at runtime).
Displaying the texture in OpenGL (Part of the same class):
void Texture2D::Draw()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glPushMatrix();

    u = v = 0;

    // this is the origin point (the position of the button)
    VectorXYZ point_TL; // Top Left
    VectorXYZ point_BL; // Bottom Left
    VectorXYZ point_BR; // Bottom Right
    VectorXYZ point_TR; // Top Right

    /* For the sake of simplicity, I've removed the code calculating the 4 points of the Quad. Assume that they are found correctly. */

    glColor3f(1, 1, 1);

    // bind the appropriate texture frame
    glBindTexture(GL_TEXTURE_2D, sheet.texture_ID);

    // draw the image as a quad the size of the first loaded image
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex3f(point_TL.x, point_TL.y, point_TL.z); // Top Left
    glTexCoord2f(0, 1);
    glVertex3f(point_BL.x, point_BL.y, point_BL.z); // Bottom Left
    glTexCoord2f(1, 1);
    glVertex3f(point_BR.x, point_BR.y, point_BR.z); // Bottom Right
    glTexCoord2f(1, 0);
    glVertex3f(point_TR.x, point_TR.y, point_TR.z); // Top Right
    glEnd();

    glPopMatrix();
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
}
Currently, the quad shows up, but it's completely white (the background colour it's given). The image I'm loading exists and is loaded fine (verified using the loaded size values).
Another few things I should note:
1) I am using a depth buffer. I've heard this doesn't go well with GL_BLEND?
2) I would really like to use the ilutGLLoadImage function.
3) I appreciate example code, as I'm a newbie to openGL and DevIL as a whole.

Yes, you do have a problem: there might be issues with ilutGLLoadImage() itself.
Try doing things manually, as sketched below:
Load the image using ilLoadImage
Generate the OpenGL texture handle using the glGenTextures
Upload the image data to OpenGL with glTexImage2D and ilGetData()
See this link for a working solution
http://r3dux.org/tag/ilutglloadimage/
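For reference, here is a minimal sketch of that manual path, adapted to the Texture2D class from the question (the sheet members are assumed to be as in the question, and error handling is kept to a minimum):
void Texture2D::LoadTexture(const char *file_name)
{
    // DevIL only needs to be initialised once, before the first load.
    static bool devil_initialised = false;
    if (!devil_initialised)
    {
        ilInit();
        devil_initialised = true;
    }

    // Load the image with DevIL itself instead of ilutGLLoadImage().
    ILuint image_ID;
    ilGenImages(1, &image_ID);
    ilBindImage(image_ID);
    if (!ilLoadImage((char*)file_name))
    {
        ilDeleteImages(1, &image_ID);
        return; // loading failed
    }

    // Force a known layout so glTexImage2D gets what it expects.
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
    sheet.width  = ilGetInteger(IL_IMAGE_WIDTH);
    sheet.height = ilGetInteger(IL_IMAGE_HEIGHT);

    // Create the GL texture and upload the pixel data.
    GLuint texture_ID;
    glGenTextures(1, &texture_ID);
    glBindTexture(GL_TEXTURE_2D, texture_ID);
    // Without these, the default min filter expects mipmaps,
    // which is a classic cause of all-white textures.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 sheet.width, sheet.height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());
    sheet.texture_ID = texture_ID;

    // The DevIL image is no longer needed once the data is in GL.
    ilDeleteImages(1, &image_ID);
}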
I know this solution seems "a little" complicated, but nobody knows how much time you would otherwise spend fighting a bug hidden deep inside DevIL.
Another way of fixing things: check your GL texture setup code. Anything wrong in the filtering setup can be a reason for GL_INVALID_OPERATION.
We've run into the "white texture" issue many times while programming on old ATI cards.
Oh! The biggest guess: non-power-of-two textures. Is your texture file 2^N by 2^N, or something different?
To use non-power-of-two textures you have to use GL extensions such as GL_ARB_texture_non_power_of_two.
And the other one: are you using the textures in the same thread or in another one? Remember that glGenTextures() and glBindTexture()/glBegin()/glEnd() should be called from the same thread (or at least with the same GL context current).

Related

dll injection: drawing simple game overlay with opengl

I'm trying to draw a custom opengl overlay (steam does that for example) in a 3d desktop game. This overlay should basically be able to show the status of some variables which the user can affect by pressing some keys. Think about it like a game trainer.
The goal is in the first place to draw a few primitives at a specific point on the screen. Later I want to have a little nice looking "gui" component in the game window.
The game uses the "SwapBuffers" method from the GDI32.dll.
Currently I'm able to inject a custom DLL file into the game and hook the "SwapBuffers" method.
My first idea was to insert the drawing of the overlay into that function. This could be done by switching the 3d drawing mode from the game into 2d, then draw the 2d overlay on the screen and switch it back again, like this:
//SwapBuffers_HOOK (HDC)
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
//"OVERLAY"
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex2f(0, 0);
glVertex2f(0.5f, 0);
glVertex2f(0.5f, 0.5f);
glVertex2f(0.0f, 0.5f);
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
SwapBuffers_OLD(HDC);
However, this does not have any effect on the game at all.
Is my approach correct and reasonable (also considering my 3d to 2d switching code)?
I would like to know what the best way is to design and display a custom overlay in the hooked function. (should I use something like windows forms or should I assemble my component with opengl functions - lines, quads
...?)
Is the SwapBuffers method the best place to draw my overlay?
Any hint, source code or tutorial on something similar is appreciated too.
The game by the way is counterstrike 1.6 and I don't intend to cheat online.
Thanks.
EDIT:
I managed to draw a simple rectangle into the game's window by using a new OpenGL context, as proposed by 'derHass'. Here is what I did:
//1. At the beginning of the hooked gdiSwapBuffers(HDC hdc) method, save the old context
GLboolean gdiSwapBuffersHOOKED(HDC hdc) {
    HGLRC oldContext = wglGetCurrentContext();

    //2. If the new context has not already been created, create it
    //(we need the "hdc" parameter for the current window, so the initialization
    //process happens in this method - does anyone have a better solution?)
    //Then make the new context current.
    if (!contextCreated) {
        thisContext = wglCreateContext(hdc);
        wglMakeCurrent(hdc, thisContext);
        initContext();
    }
    else {
        wglMakeCurrent(hdc, thisContext);
    }

    //Draw the quad in the new context and switch back to the old one.
    drawContext();
    wglMakeCurrent(hdc, oldContext);
    return gdiSwapBuffersOLD(hdc);
}

GLvoid drawContext() {
    glColor3f(1.0f, 0, 0);
    glBegin(GL_QUADS);
    glVertex2f(0, 190.0f);
    glVertex2f(100.0f, 190.0f);
    glVertex2f(100.0f, 290.0f);
    glVertex2f(0, 290.0f);
    glEnd();
}

GLvoid initContext() {
    contextCreated = true;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 1.0);
}
Here is the result:
cs overlay example
It is still very simple but I will try to add some more details, text etc. to it.
Thanks.
If the game is using OpenGL, then hooking into SwapBuffers is the way to go, in principle. In theory, there might be several different drawables, and you might have to decide in your swap buffer function which one(s) are the right ones to modify.
There are a couple of issues with this kind of OpenGL interception, though:
1) OpenGL is a state machine. The application might have modified any GL state variable there is. The code you provided is far from complete enough to guarantee that something is drawn. For example, if the application happens to have shaders enabled, all your matrix setup might have no effect, and what really appears on the screen depends on the shaders.
2) If depth testing is on, your fragments might lie behind what was already drawn. If polygon culling is on, your primitive might be wound incorrectly for the current culling mode. If the color masks are set to GL_FALSE or the draw buffer is not set to where you expect it, nothing will appear.
3) Also note that your attempt to "reset" the matrices is wrong. You seem to assume that the current matrix mode is GL_MODELVIEW, but this doesn't have to be the case. It could just as well be GL_PROJECTION or GL_TEXTURE. You also apply glOrtho to the current projection matrix without loading identity first, so this alone is a good reason for nothing to appear on the screen.
4) As OpenGL is a state machine, you also must restore all the state you touched. You already try this with the matrix stack push/pop, but you failed, for example, to restore the exact matrix mode. As you have seen in point 1), a lot more state changes will be required, so restoring it all will be more complex. Since you use legacy OpenGL, glPushAttrib() might come in handy here (see the sketch after this list).
5) SwapBuffers is not a GL function, but part of the operating system's API. It gets a drawable as parameter, and only indirectly refers to any GL context. It might be called while another GL context is bound to the thread, or with none at all. If you want to play it safe, you'll also have to intercept the GL context creation functions as well as MakeCurrent. In the worst (though very unlikely) case, the application has the GL context bound to another thread while it is calling SwapBuffers, so there is no chance for you in the hooked function to get at the context.
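To illustrate point 4), here is a rough sketch of the kind of save/restore bracket the overlay drawing could be wrapped in (legacy GL only; drawOverlay() is a hypothetical placeholder for your own drawing code):
void drawOverlayWithStateSaved(void)
{
    // Save as much fixed-function state as glPushAttrib covers, including
    // enables, depth/colour masks and the current matrix mode.
    glPushAttrib(GL_ALL_ATTRIB_BITS);

    // Set up a known 2D projection without clobbering the game's matrices.
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    // Make sure the overlay is not discarded by leftover state.
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    // If the game uses shaders, program 0 would also have to be bound here
    // (glUseProgram, loaded via wglGetProcAddress); glPushAttrib does not cover that.

    drawOverlay();   // hypothetical: your own quads / lines / text

    // Restore the matrices and everything glPushAttrib saved.
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glPopAttrib();
}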
Putting this all together opens up another alternative: You can create your own GL context, bind it temporarily during the hooked SwapBuffers call and restore the original binding again. That way, you don't interfere with the GL state of the application at all. You still can augment the image content the application has rendered, since the framebuffer is part of the drawable, not the GL context. Doing so might have a negative impact on performance, but it might be so small that you never would even notice it.
Since you want to do this only for a single specific application, another approach would be to find out the minimal state changes which are necessary by observing what GL state the application actually set during the SwapBuffers call. A tool like apitrace can help you with that.

Display a quad perpendicular to the screen

When drawing a quad, it vanishes when rotation brings it into a position perpendicular to the screen. Ideally what I'd like to see is (b), but I get nothing.
Is there something wrong with my code? (Warning: old OpenGL code follows.)
void draw_rect(double vector[4][3], int rgb[3], double transp)
{
    GLint is_depth, is_blend, blend_src, blend_dst;

    glGetIntegerv(GL_DEPTH_WRITEMASK, &is_depth);
    glGetIntegerv(GL_BLEND, &is_blend);
    glGetIntegerv(GL_BLEND_SRC, &blend_src);
    glGetIntegerv(GL_BLEND_DST, &blend_dst);

    glEnable(GL_BLEND);
    glDepthMask(0);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // code to set the color ...

    glBegin(GL_POLYGON);
    glVertex3dv(&vector[0][0]);
    glVertex3dv(&vector[1][0]);
    glVertex3dv(&vector[2][0]);
    glVertex3dv(&vector[3][0]);
    glEnd();

    if (!is_blend) { glDisable(GL_BLEND); }
    glDepthMask(is_depth);
    glBlendFunc(blend_src, blend_dst);
}
A quad (assuming its vertices are coplanar, as in this case) is by definition infinitely thin. It is correct behavior for it to be invisible when perpendicular to the camera.
The "correct" solution is to make a box rather than a single quad.
See Drawing cube 3D using Opengl for an example using a cube. You'll need to tweak the vertex positions to make the cube smaller along one dimension (probably Z), but it'll give you the effect that you're looking for.
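Here is a minimal immediate-mode sketch of that idea, in the style of the code from the question (draw_thin_box and its half-extent parameters are made up for illustration; pass a small hz to get a plate that stays visible edge-on):
// Draw an axis-aligned box of half-extents (hx, hy, hz); a small hz
// (e.g. 0.01) gives a "thick quad" that does not vanish when edge-on.
void draw_thin_box(double hx, double hy, double hz)
{
    const double v[8][3] = {
        {-hx,-hy,-hz}, { hx,-hy,-hz}, { hx, hy,-hz}, {-hx, hy,-hz},
        {-hx,-hy, hz}, { hx,-hy, hz}, { hx, hy, hz}, {-hx, hy, hz}
    };
    // Each face as four indices into v, wound CCW as seen from outside.
    const int f[6][4] = {
        {0,3,2,1}, {4,5,6,7}, {0,1,5,4},
        {2,3,7,6}, {1,2,6,5}, {0,4,7,3}
    };

    glBegin(GL_QUADS);
    for (int i = 0; i < 6; ++i)
        for (int j = 0; j < 4; ++j)
            glVertex3dv(v[f[i][j]]);
    glEnd();
}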
Also, stop using the fixed function stuff (glVertex, etc.). It's been deprecated for years. Shaders aren't that difficult, and examples are easy to find via your favorite search engine.
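If you want to try them, here is roughly what a minimal shader pair involves (GLSL 1.20, so it still works with old-style vertex submission; it needs a GL 2.0+ context, and compile/link error checking is omitted):
// Minimal GLSL 1.20 shaders: transform with the built-in matrix, flat colour.
static const char *vs_src =
    "#version 120\n"
    "void main() {\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";
static const char *fs_src =
    "#version 120\n"
    "void main() {\n"
    "    gl_FragColor = vec4(1.0, 1.0, 1.0, 0.5);\n"
    "}\n";

GLuint build_program(void)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);                 // check GL_COMPILE_STATUS in real code

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);                 // check GL_LINK_STATUS in real code
    return prog;
}
// Usage: glUseProgram(build_program()); then draw as before.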
Try making it a line of some definite width when the quad is perpendicular to the screen.

Opengl surface rendering issue

I just started loading some obj files and render it with opengl. When I render these meshes I get this result (see pictures).
I think it's some kind of depth problem, but I can't figure it out by myself.
These are the parameters for rendering:
// Dark blue background
glClearColor(0.0f, 0.0f, 0.4f, 0.0f);
// Enable depth test
glEnable( GL_DEPTH_TEST );
// Cull triangles which normal is not towards the camera
glEnable(GL_CULL_FACE);
I used this tutorial code as a template. https://code.google.com/p/opengl-tutorial-org/source/browse/#hg%2Ftutorial08_basic_shading
The problem is simple: you are culling FRONT or BACK faces,
and the object file stores its faces with either CCW (counter-clockwise) or CW (clockwise) winding, i.e. written in one direction or the other.
Your OpenGL code expects the opposite winding, so it culls the surfaces you are actually looking at, treating them as back faces.
To check whether this is the cause, just take out the glEnable(GL_CULL_FACE);
that alone seems to be what produces the problem.
Additionally you can use glCullFace(ENUM); where ENUM has to be GL_FRONT or GL_BACK.
If in both cases (GL_FRONT and GL_BACK) you still only see a partial mesh, that is either a problem in your code that interprets the .obj, or the .obj itself does not use a consistent winding (a mix of CCW and CW).
I am actually unsure what you mean; however, glEnable(GL_CULL_FACE); followed by glCullFace(GL_BACK); will cull, i.e. remove, the back faces of the object. This greatly reduces the work done while rendering, and only makes a visible difference if you are inside or "behind" the object.
Also, have you tried glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); before your render code?
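For completeness, a minimal sketch of a consistent culling setup (this assumes the mesh is exported with CCW front faces, which is OpenGL's default; drawMesh() is a placeholder for your existing draw call):
// One-time render state setup, before drawing the mesh.
glClearColor(0.0f, 0.0f, 0.4f, 0.0f);
glEnable(GL_DEPTH_TEST);

glEnable(GL_CULL_FACE);   // skip faces pointing away from the camera
glFrontFace(GL_CCW);      // OpenGL's default; most .obj exporters emit CCW faces
glCullFace(GL_BACK);      // cull back faces, keep front faces

// Every frame: clear both colour and depth, then draw.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// drawMesh();            // placeholder for your existing draw call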

Color coded picking problem in OpenGL

I am making a game, actually a very basic replica of Minecraft, for a class project of mine. I'm stuck in the picking process right now, which would enable me to destroy and create blocks in the game environment.
I've been trying to use OpenGL's own picking mode without any success, and building my own ray picker using math libraries seems too large a task for a project of this size. So, I've decided to use the color coded picking method, which consists of rendering every pickable object in a different color, then getting the color at the mouse position and using it to identify the picked object.
My current interface is just a 3D rendering of many boxes stacked, creating a terrain-like structure. Since I've done no texture mapping yet, all the boxes are shades of grey (lighting enabled).
Now, time for some actual code:
This is the initialization part, enabling texturing, lighting etc.
glEnable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glShadeModel(GL_SMOOTH);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
When a mouse button is clicked, I try to get the color at the mouse cursor's position (always the middle of the window, actually) by:
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glDisable(GL_DITHER);
glDisable(GL_LIGHT0);
glDisable(GL_LIGHT1);
renderColors();
GLubyte pixels[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, (void *)pixels);
glEnable(GL_TEXTURE_2D);
glEnable(GL_LIGHTING);
glEnable(GL_DITHER);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
Problem is, the disables do not work and I always get the RGB values of different shades of grey in my pixels array.
What could be the problem?
Perhaps you forgot to clear the color buffer and disable the depth test, so all your picking colors are z-fighting with the already rendered scene or are not rendered at all (if the depth test is GL_LESS). Try adding a buffer swap right after your renderColors() call and see what actually gets rendered by it.
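A rough sketch of how the picking pass could look with those fixes applied (the block loop, the DrawBlock() helper and the id encoding are illustrative assumptions, not your actual code):
// Render every pickable block in a unique flat colour, then read one pixel back.
int PickBlockAt(int x, int y, int window_height)
{
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_DITHER);

    // Start from a clean frame so the normally rendered scene cannot interfere.
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Hypothetical loop: draw each block with its id encoded in the RGB channels.
    for (int id = 0; id < num_blocks; ++id)
    {
        int code = id + 1; // reserve 0 (the clear colour) for "nothing picked"
        glColor3ub((code >> 16) & 0xFF, (code >> 8) & 0xFF, code & 0xFF);
        DrawBlock(id);     // hypothetical helper that issues the block's quads
    }

    // glReadPixels uses a bottom-left origin, so flip the mouse y coordinate.
    GLubyte pixel[3];
    glReadPixels(x, window_height - 1 - y, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);

    glEnable(GL_DITHER);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_LIGHTING);

    // Decode the id (-1 means the background was hit). Do not swap buffers in
    // between; redraw the real frame afterwards.
    int picked = (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
    return picked - 1;
}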

OpenGL Frame Buffer Object for rendering to textures, renders weirdly

I'm using python but OpenGL is pretty much done exactly the same way as in any other language.
The problem is that when I try to render a texture or a line to a texture by means of a frame buffer object, it is rendered upside down and too small, in the bottom left corner. Very weird. I have these pictures to demonstrate:
This is how it looks,
www.godofgod.co.uk/my_files/Incorrect_operation.png
This is how it did look when I was using pygame instead. Pygame is too slow, I've learnt. My game would be unplayable without OpenGL's speed. Ignore the curved corners. I haven't implemented those in OpenGL yet. I need to solve this issue first.
www.godofgod.co.uk/my_files/Correct_operation.png
I'm not using depth.
What could cause this erratic behaviour? Here's the code, which you may find useful:
def texture_to_texture(target,surface,offset): #Target is an object of a class which contains texture data. This texture should be the target. Surface is the same but is the texture which should be drawn onto the target. offset is the offset where the surface texture will be drawn on the target texture.
    #This will create the textures if not already. It will create textures from image data or block colour. Seems to work fine as direct rendering of textures to the screen works brilliantly.
    if target.texture == None:
        create_texture(target)
    if surface.texture == None:
        create_texture(surface)

    frame_buffer = glGenFramebuffersEXT(1)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frame_buffer)
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, target.texture, 0) #target.texture is the texture id from the object

    glPushAttrib(GL_VIEWPORT_BIT)
    glViewport(0,0,target.surface_size[0],target.surface_size[1])

    draw_texture(surface.texture,offset,surface.surface_size,[float(c)/255.0 for c in surface.colour]) #The last part changes the 0-255 colours to 0-1. The textures when drawn appear to have the correct colour. Don't worry about that.

    glPopAttrib()
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0)
    glDeleteFramebuffersEXT(1, [int(frame_buffer)]) #Requires the sequence of the integer conversion of the ctype variable, meaning [int(frame_buffer)] is the odd required way to pass the frame buffer id to the function.
This function may also be useful,
def draw_texture(texture,offset,size,c):
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity() #Loads model matrix

    glColor4fv(c)
    glBegin(GL_QUADS)
    glVertex2i(*offset) #Top Left
    glVertex2i(offset[0],offset[1] + size[1]) #Bottom Left
    glVertex2i(offset[0] + size[0],offset[1] + size[1]) #Bottom Right
    glVertex2i(offset[0] + size[0],offset[1]) #Top Right
    glEnd()

    glColor4fv((1,1,1,1))
    glBindTexture(GL_TEXTURE_2D, texture)
    glBegin(GL_QUADS)
    glTexCoord2f(0.0, 0.0)
    glVertex2i(*offset) #Top Left
    glTexCoord2f(0.0, 1.0)
    glVertex2i(offset[0],offset[1] + size[1]) #Bottom Left
    glTexCoord2f(1.0, 1.0)
    glVertex2i(offset[0] + size[0],offset[1] + size[1]) #Bottom Right
    glTexCoord2f(1.0, 0.0)
    glVertex2i(offset[0] + size[0],offset[1]) #Top Right
    glEnd()
You don't show your projection matrix, so I'll assume it's identity too.
OpenGL framebuffer origin is bottom left, not top left.
The size issue is more difficult to explain. What is your projection matrix, after all?
Also, you don't show how you use the texture, and I'm not sure what we're looking at in your "incorrect" image.
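For comparison, this is the kind of projection/viewport pairing a render-to-texture pass usually needs (written here as C-style GL calls, which map one-to-one to PyOpenGL; target_width/target_height stand for your target.surface_size):
/* Before drawing into the FBO: make the projection match the target texture,
 * with the origin at the top-left so the result is not upside down or tiny. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frame_buffer);
glPushAttrib(GL_VIEWPORT_BIT);
glViewport(0, 0, target_width, target_height);              /* placeholder size */

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, target_width, target_height, 0.0, -1.0, 1.0);  /* pixel coords, y down */

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

/* ... draw the quad with glVertex2i pixel coordinates here ... */

/* Afterwards restore the projection and the window-sized viewport,
 * and rebind the default framebuffer. */
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopAttrib();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);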
Some unrelated comments:
Creating a framebuffer each frame is not the right way to go about it (see the sketch below).
Come to think of it, why use a framebuffer object at all? It seems the only thing you're after is blending into the frame buffer; glEnable(GL_BLEND) does that just fine.
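A small sketch of what reusing a single FBO could look like (again C-style GL calls with the same EXT entry points as in your code; the function names are placeholders):
/* Create the FBO once and keep its id around instead of generating and
 * deleting one per draw call. */
static GLuint shared_fbo = 0;

void begin_render_to(GLuint target_texture)
{
    if (shared_fbo == 0)
        glGenFramebuffersEXT(1, &shared_fbo);

    /* Bind the shared FBO and attach the current target texture. */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, shared_fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, target_texture, 0);
}

void end_render_to(void)
{
    /* Just unbind; no glDeleteFramebuffersEXT every frame. */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}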