I'm trying to draw a custom OpenGL overlay (Steam does that, for example) in a 3D desktop game.
This overlay should basically be able to show the status of some variables which the user
can affect by pressing some keys. Think of it like a game trainer.
The goal is first of all to draw a few primitives at a specific point on the screen. Later I want to have a nice-looking little "gui" component in the game window.
The game uses the "SwapBuffers" method from the GDI32.dll.
Currently I'm able to inject a custom DLL file into the game and hook the "SwapBuffers" method.
My first idea was to insert the drawing of the overlay into that function. This could be done by switching the game's 3D drawing mode to 2D, drawing the 2D overlay on the screen, and switching back again, like this:
//SwapBuffers_HOOK (HDC)
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
//"OVERLAY"
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex2f(0, 0);
glVertex2f(0.5f, 0);
glVertex2f(0.5f, 0.5f);
glVertex2f(0.0f, 0.5f);
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
SwapBuffers_OLD(HDC);
However, this does not have any effect on the game at all.
Is my approach correct and reasonable (also considering my 3D-to-2D switching code)?
I would like to know the best way to design and display a custom overlay in the hooked function. (Should I use something like Windows Forms, or should I assemble my component from OpenGL primitives - lines, quads, ...?)
Is the SwapBuffers method the best place to draw my overlay?
Any hint, source code or tutorial for something similar is appreciated too.
The game, by the way, is Counter-Strike 1.6, and I don't intend to cheat online.
Thanks.
EDIT:
I managed to draw a simple rectangle into the game's window by using a new OpenGL context, as proposed by 'derHass'. Here is what I did:
//1. At the beginning of the hooked gdiSwapBuffers(HDC hdc) method save the old DC and context
GLboolean gdiSwapBuffersHOOKED(HDC hdc) {
    HDC oldDC = wglGetCurrentDC();
    HGLRC oldContext = wglGetCurrentContext();
    //2. If the new context has not already been created - create it
    //(we need the "hdc" parameter for the current window, so the initialization
    //process happens in this method - does anyone have a better solution?)
    //Then make the new context current.
    if (!contextCreated) {
        thisContext = wglCreateContext(hdc);
        wglMakeCurrent(hdc, thisContext);
        initContext();
    }
    else {
        wglMakeCurrent(hdc, thisContext);
    }
    //Draw the quad in the new context and switch back to the old one.
    //(Restore with the saved DC, not "hdc" - they are usually the same here,
    //but the old context might have been current on a different drawable.)
    drawContext();
    wglMakeCurrent(oldDC, oldContext);
    return gdiSwapBuffersOLD(hdc);
}
GLvoid drawContext() {
    glColor3f(1.0f, 0, 0);
    glBegin(GL_QUADS);
    glVertex2f(0, 190.0f);
    glVertex2f(100.0f, 190.0f);
    glVertex2f(100.0f, 290.0f);
    glVertex2f(0, 290.0f);
    glEnd();
}
GLvoid initContext() {
    contextCreated = true;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 1.0);
}
Here is the result:
[screenshot: cs overlay example]
It is still very simple but I will try to add some more details, text etc. to it.
Thanks.
If the game is using OpenGL, then hooking into SwapBuffers is the way to go, in principle. In theory, there might be several different drawables, and you might have to decide in your swap buffer function which one(s) are the right ones to modify.
There are a couple of issues with this kind of OpenGL interception, though:
1. OpenGL is a state machine. The application might have modified any GL state variable there is. The code you provided is far from complete enough to guarantee that something is drawn. For example, if the application happens to have shaders enabled, all your matrix setup might be without effect, and what really appears on the screen would depend on the shaders.
If depth testing is on, your fragments might lie behind what was already drawn. If polygon culling is on, your primitive might be incorrectly wound for the current culling mode. If the color masks are set to GL_FALSE, or the draw buffer is not set to where you expect it, nothing will appear.
2. Note that your attempt to "reset" the matrices is also wrong. You seem to assume that the current matrix mode is GL_MODELVIEW, but this doesn't have to be the case. It could just as well be GL_PROJECTION or GL_TEXTURE. You also apply glOrtho to the current projection matrix without loading identity first, so this alone is a good reason for nothing to appear on the screen.
3. As OpenGL is a state machine, you also must restore all the state you touched. You already try this with the matrix stack push/pop, but you failed, for example, to restore the exact matrix mode. As you have seen in 1, a lot more state changes will be required, so restoring them will be more complex. Since you use legacy OpenGL, glPushAttrib() might come in handy here (see the sketch after this list).
4. SwapBuffers is not a GL function, but part of the operating system's API. It gets a drawable as parameter and refers to a GL context only indirectly. It might be called while another GL context is bound to the thread, or with none at all. If you want to play it safe, you'll also have to intercept the GL context creation function as well as MakeCurrent. In the worst (though very unlikely) case, the application has the GL context bound to another thread while it is calling SwapBuffers, so there is no chance for you to get at the context in the hooked function.
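To illustrate points 1 to 3, here is a minimal sketch (not a complete solution) of forcing the state the overlay needs and restoring it afterwards. It assumes a legacy compatibility context; the glUseProgram lines only apply if the game's context is GL 2.0+ and the entry point has been loaded via wglGetProcAddress - on a pure 1.x context they can be dropped:
GLint prevMatrixMode = GL_MODELVIEW, prevProgram = 0;
glGetIntegerv(GL_MATRIX_MODE, &prevMatrixMode);    // glPushAttrib without GL_TRANSFORM_BIT won't save this
glGetIntegerv(GL_CURRENT_PROGRAM, &prevProgram);   // the current program is not saved by glPushAttrib at all
glPushAttrib(GL_ENABLE_BIT | GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glUseProgram(0);                                   // issue 1: active shaders would bypass the matrix setup
glDisable(GL_DEPTH_TEST);                          // issue 1: the scene's depth buffer must not hide the overlay
glDisable(GL_CULL_FACE);                           // issue 1: winding order no longer matters
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);   // issue 1: make sure all channels are written

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();                                  // issue 2: load identity before glOrtho
glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

// ... draw the overlay primitives here ...

glPopMatrix();                                     // restore the modelview matrix
glMatrixMode(GL_PROJECTION);
glPopMatrix();                                     // restore the projection matrix
glPopAttrib();
glMatrixMode((GLenum)prevMatrixMode);              // issue 3: restore the exact matrix mode
glUseProgram((GLuint)prevProgram);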
Putting this all together opens up another alternative: you can create your own GL context, bind it temporarily during the hooked SwapBuffers call, and restore the original binding again. That way, you don't interfere with the application's GL state at all. You can still augment the image content the application has rendered, since the framebuffer is part of the drawable, not of the GL context. Doing so might have a negative impact on performance, but it may be so small that you never even notice it.
Since you want to do this only for a single specific application, another approach would be to find out the minimal state changes which are necessary by observing what GL state the application actually set during the SwapBuffers call. A tool like apitrace can help you with that.
Related
I am using Xcode with GLUT, OpenGL and C++, and I am trying to import and draw a model. I have used an .obj to .h file conversion, and this is a small part of the header so you can see the structure.
unsigned int M4GunNumVerts = 37812;
GLfloat M4GunVerts [] = {
// f 1/1/1 1582/2/1 4733/3/1
{0.266494348503772, 0.0252334302709736, -0.000725898139236535},
{0.265592372987502, 0.0157389511523397, -0.000725898139236535},
{0.264890836474847, 0.0182004476109518, -0.00775888079925833},
I have tried to draw this in my main with this code.
glVertexPointer(3, GL_FLOAT, 0, M4GunVerts);
glNormalPointer(GL_FLOAT, 0, M4GunNormals);
glTexCoordPointer(2, GL_FLOAT, 0, M4GunTexCoords);
glDrawArrays(GL_TRIANGLES, 0, M4GunNumVerts);
When I run it, I can't see the model. I have set up a GLUT window and drew a triangle to check that shapes were being drawn, and the triangle showed up. I don't know how to fix this so I can see the model.
Here is the reshape function:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glViewport(0, 0, w, h);
gluPerspective(45, ratio, 0.01, 1000);
glMatrixMode(GL_MODELVIEW);
Can anybody help?
Either you:
do not enable the arrays correctly (see the sketch after this list),
and/or your camera looks in the wrong direction relative to your object,
and/or your camera is inside the object (while GL_CULL_FACE is enabled),
and/or glColor is set wrongly or not at all (you cannot see black on black),
and/or you forgot to bind the texture (while texturing is enabled),
and/or your glTexCoord values are wrong (while texturing is enabled),
and/or your lights are set up wrongly (while lighting is enabled),
and/or your perspective is wrong with respect to object distance and size (view angle, znear, zfar),
or the object is planar and you are looking at its thin side ...
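For the first point, a minimal sketch of what "enabling the arrays correctly" means for the question's data (the M4Gun* arrays from the generated header are assumed to be in scope):
glEnableClientState(GL_VERTEX_ARRAY);              // without these, the gl*Pointer calls are ignored
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

glVertexPointer(3, GL_FLOAT, 0, M4GunVerts);
glNormalPointer(GL_FLOAT, 0, M4GunNormals);
glTexCoordPointer(2, GL_FLOAT, 0, M4GunTexCoords);
glDrawArrays(GL_TRIANGLES, 0, M4GunNumVerts);

glDisableClientState(GL_TEXTURE_COORD_ARRAY);      // leave client state clean for other draw code
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);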
Some hints:
Try to rotate the camera around the scene (ideally with some keys like the arrows, so you can see if the object is behind you... or with mouse dragging...). Also try to move your camera forward/backward (keys work too, but my favorite is the mouse wheel).
Use glBegin()/glVertex()/glEnd() first to avoid problems with wrongly enabled arrays, and start with lighting, textures and GL_CULL_FACE disabled (have them all off!). When you can see your model, enable them incrementally so you see what is wrong. After all is OK, then try the arrays (see the sketch below).
If your object seems inside out while GL_CULL_FACE is enabled, then your winding rule (CW/CCW) is wrong; this is mostly visible while rotating the object.
Here is a simple OpenGL scene app in BDS2006; you can use it as a test of your window and camera/model matrix settings.
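And a sketch of the glBegin()/glEnd() fallback from the second hint, with the problematic states disabled (this assumes M4GunVerts is laid out as a flat array of x,y,z triples, as obj-to-header converters usually emit):
glDisable(GL_LIGHTING);                 // start with lighting, texturing and culling off
glDisable(GL_TEXTURE_2D);
glDisable(GL_CULL_FACE);
glColor3f(1.0f, 1.0f, 1.0f);            // not black, so it shows on a black background
glBegin(GL_TRIANGLES);
for (unsigned int i = 0; i < M4GunNumVerts; i++)
    glVertex3fv(&M4GunVerts[i * 3]);    // one x,y,z triple per vertex
glEnd();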
I'm trying to use GWEN to draw some GUI elements on top of my OpenGL scene. It seems to be set up correctly, but nothing from GWEN is actually being drawn (visibly, at least). I'm using a custom renderer, which is essentially GWEN's stock OpenGL renderer but with a different function for loading textures, and with OpenGL::Begin() and OpenGL::End() replaced with these:
void coRenderer::Begin()
{
    glUseProgram(0);
    glDisable(GL_DEPTH_TEST);
    glDepthMask(0);
    glEnable(GL_BLEND);
    glMatrixMode(GL_PROJECTION); // Select The Projection Matrix
    glPushMatrix();              // Store The Projection Matrix
    glLoadIdentity();
    glOrtho(0, screen->w, screen->h, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glActiveTexture(GL_TEXTURE0);
}
void coRenderer::End()
{
    Flush();
    glMatrixMode(GL_PROJECTION); // Select The Projection Matrix
    glPopMatrix();               // Restore The Old Projection Matrix
    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(1);
    glEnable(GL_TEXTURE_2D);
}
The code for GWEN's OpenGL renderer is here:
http://gwen.googlecode.com/svn/trunk/trunk/gwen/Renderers/OpenGL/OpenGL.cpp
BTW, I'm using OpenGL 2.1, not 3.0+.
Ah GWEN. That frustrating GUI library.
When I started using it and integrating it into the engine we wrote at school, I had the same issue as you, though with the stock OpenGL renderer. It turned out the GUI was being positioned wrong; calling glLoadIdentity() to reset the current matrix resolved it.
The issue you are having could well turn out to be the same as mine, or there could be a problem with your custom OpenGL renderer. I'm not sure how much you know about GWEN or how it works, but it runs on a single texture that skins the GUI. Are you loading that in? Perhaps your texture loader isn't loading it correctly.
Try using your debugger and stepping through your program. Areas of interest would be where you attempt to load the GUI skin, where you assign the screen space that GWEN can use, and where you actually attempt to render the GUI.
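For what it's worth, here is roughly how the fix looked. Applying it to your coRenderer::Begin()/End() is a guess, but cheap to try: reset the modelview matrix before GWEN draws, and push/pop it so the scene's transform survives:
void coRenderer::Begin()
{
    // ... same as your version, up to and including the projection setup ...
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();    // save the scene's modelview matrix
    glLoadIdentity();  // GWEN positions everything in window coordinates
    glActiveTexture(GL_TEXTURE0);
}

void coRenderer::End()
{
    Flush();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();     // restore the scene's modelview matrix
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    // ... rest as in your version ...
}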
I am trying to render some strings in the foreground of an OpenGL/GLUT application under Mac OS X 10.7.2.
At the moment I am using this code to draw a few lines in the foreground and it works fine.
void drawForeground() {
    int width = 10;
    int height = 10;
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(-1, width, -1, height, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glDepthMask(GL_FALSE);
    glBegin(GL_LINES);
    //Draw the lines
    glEnd();
    /*********************/
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glDepthMask(GL_TRUE);
}
Now I would like to draw also some text. In the previous function I added this piece of code in the line where I put the asterisks:
glRasterPos2d(2,2);
glutBitmapCharacter(GLUT_BITMAP_HELVETICA_10, 'c');
but it didn't work. If I use the same two lines outside the drawForeground method, the 'c' appears.
I already called glDisable(GL_TEXTURE_2D) and nothing changed.
Can someone help me understand my error?
Solution:
It turned out that the fix was to disable lighting with glDisable(GL_LIGHTING) and re-enable it after rendering the text.
I would like to underline that the text is always rendered at the same size, independent of the parameters of the glOrtho call.
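For completeness, the marked section then looks roughly like this (the explicit glColor3f is an extra precaution, following the hint about the drawing color in the answer below):
glDisable(GL_LIGHTING);                             // lighting was the culprit
glColor3f(1.0f, 1.0f, 1.0f);                        // make sure the text color is visible
glRasterPos2d(2, 2);
glutBitmapCharacter(GLUT_BITMAP_HELVETICA_10, 'c');
glEnable(GL_LIGHTING);                              // re-enable for the rest of the scene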
Nothing certain, but a couple of things to try if you haven't already:
What is the color set to before you call glutBitmapCharacter()? If the drawing color is set to something that doesn't show up against the background, it could simply look like nothing is being drawn.
Have you tried calling glDisable(GL_TEXTURE) in addition to glDisable(GL_TEXTURE_2D)?
Are there other things, like lighting, that you enable elsewhere in your code and then don't disable before rendering the text? When I've run into bugs like this in the past, they were often related to some part of the OpenGL state being set in a way I didn't expect, usually because I made a change elsewhere and forgot to undo it. I would recommend systematically commenting out various OpenGL calls in your code, even ones that don't seem directly related, and seeing if the characters ever show up. If they do, you'll know which state change you need to make or undo.
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
//set viewpoint
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(VIEW_ANGLE,Screen_Ratio,NEAR_CLIP,FAR_CLIP);
gluLookAt(0,5,5, 0,0,0, 0,1,0);
//transform model 1
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(Theta, 0,1,0);
//draw model 1
glBegin(GL_QUADS);
...
glEnd();
The code above works fine, but is there any way to remove the call to gluPerspective?
What I mean is, I would like to call it only once at initialization, instead of repeatedly during each frame's rendering.
You call gluPerspective there because it belongs there. OpenGL is not a scene graph where you initialize things; it's a state-driven drawing API. The projection matrix is a piece of state, and every serious graphics application changes this state multiple times throughout the rendering of a single frame.
OpenGL does not know about geometrical objects, positions and cameras. It just pushes points, lines and triangles through a processing pipeline and draws the result to the screen. After something has been drawn, OpenGL has no recollection of it whatsoever.
I mean calling it only once in initialization.
OpenGL is not initialized (except for the creation of the rendering context, but that is actually part of the operating system's graphics stack, not of OpenGL). Sure, you upload textures and buffer object data to it, but that can happen at any time.
Do not use gluLookAt on the projection matrix: it defines the camera/view and therefore belongs in the modelview matrix, usually as the left-most transformation (the first one after glLoadIdentity), where it makes up the "view" part of the word "modelview". Although it also works your way, it's conceptually wrong. Doing it properly would also solve your issue, as then you simply wouldn't have to touch the projection matrix every frame.
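A sketch of that split, reusing the names from the question: the projection is set once (or on resize), while the view goes onto the modelview matrix every frame:
// once, at initialization or in the reshape handler:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(VIEW_ANGLE, Screen_Ratio, NEAR_CLIP, FAR_CLIP);

// every frame:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0,5,5, 0,0,0, 0,1,0);   // the "view" part of modelview
glRotatef(Theta, 0,1,0);          // the "model" part
// draw model 1 as before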
But actually datenwolf's approach is more conceptually clean regarding OpenGL's state machine architecture.
If you don't call glLoadIdentity() (which resets the current matrix to the identity matrix, i.e. undoes what gluPerspective() has done) every frame, and instead carefully push/pop the transform matrices, you can quite happily get away with calling gluPerspective() only at initialization. Usually, though, it's far easier just to load identity each time you start drawing and set the transforms up again, e.g.:
// Initialisation
glLoadIdentity();
gluPerspective(...);
Then later on:
// Drawing each frame
glClear(...);
glPushMatrix();
gluLookAt(...);
//draw stuff
glPopMatrix();
I'd like to try to implement some HCI for my existing OpenGL application. If possible, the menus should appear in front of my 3D graphics, which would be in the background.
I was thinking of drawing a square directly in front of the "camera", and then drawing either textures or more primitives on top of that "base" square.
While the menus are active the camera can't move, so that the camera doesn't look away from the menus.
Does this sound far-fetched to anyone, or am I on the right track? How would everyone else do it?
I would just glPushMatrix, glLoadIdentity, do your drawing, then glPopMatrix and not worry about where your camera is.
You'll also need to disable and re-enable depth testing, lighting and such.
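Roughly like this (a sketch; drawMenu() is a placeholder for your own menu-drawing code):
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();              // ignore the camera transform for the menu
glDisable(GL_DEPTH_TEST);
glDisable(GL_LIGHTING);
drawMenu();                    // placeholder for your menu drawing
glEnable(GL_LIGHTING);
glEnable(GL_DEPTH_TEST);
glPopMatrix();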
There is the GLUI library to do this (no personal experience)
Or, if you are using Qt, there are ways of rendering Qt widgets transparently on top of the OpenGL scene; there is also beta support for rendering all of Qt in OpenGL.
You could also do all your 3D rendering, then switch to an orthographic projection and draw all your menu objects. This would be much easier than putting everything on a large billboarded quad as you suggested.
Check out this excerpt, specifically the heading "Projection Transformations".
As stated here, you need to apply a translation of 0.375 in x and y to get pixel-perfect alignment:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, width, 0, height);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.375, 0.375, 0.0);
/* render all primitives at integer positions */
The algorithm is simple:
Draw your 3D scene, presumably with depth testing enabled.
Disable depth testing so that your GUI elements will draw over the 3D stuff.
Use glPushMatrix to store your current modelview and projection matrices (assuming you want to restore them - otherwise, just trample on them)
Set up your modelview and projection matrices as described in the above code
Draw your UI stuff
Use glPopMatrix to restore your pushed matrices (assuming you pushed them)
Doing it like this makes the camera position irrelevant - in fact, as the camera moves, the 3D parts will be affected as normal, but the 2D overlay stays in place. I'm expecting that this is the behaviour you want.
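Putting it together, a sketch of one frame (drawScene() and drawUI() are placeholders for your own code; width and height are the window dimensions from the snippet above):
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glEnable(GL_DEPTH_TEST);               // step 1: the 3D scene, depth-tested
drawScene();                           // placeholder for your 3D rendering

glDisable(GL_DEPTH_TEST);              // step 2: the GUI draws over the scene
glMatrixMode(GL_PROJECTION);
glPushMatrix();                        // step 3: save the 3D matrices
glLoadIdentity();
gluOrtho2D(0, width, 0, height);       // step 4: the 2D setup from the code above
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glTranslatef(0.375f, 0.375f, 0.0f);    // pixel-exact rasterization

drawUI();                              // step 5: placeholder for the UI, at integer positions

glPopMatrix();                         // step 6: restore the saved matrices
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);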