Fixed: I am trying to create a basic game in C++ using OpenGL. I can make a window with a square in it and make the square move around. However, I am having trouble getting the window to be the correct size.
If I ask for an 800 by 600 window, the whole window including its borders ends up that size, not just the client area inside the border. Is there a way to make the client area the size I specify?
I pasted the code at http://pastebin.com/jxd5YhHa.
After skimming over your code I saw a lot of the typical newbie mistakes – this time in low-level Win32 API style.
First and foremost: do yourself a favor and please don't use the naked Win32 API. There are plenty of nice OpenGL frameworks, which have the added benefit of making your program cross-platform for free. In your case I'd recommend SDL, available from http://www.libsdl.org
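For illustration, a minimal sketch of such a setup with SDL2; the window title is a placeholder, and the point is that the size passed to SDL_CreateWindow is the drawable client area, so the border problem from your question simply disappears:

#include <SDL.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);

    /* the requested 800x600 is the client area, borders excluded */
    SDL_Window *window = SDL_CreateWindow(
        "Game",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        800, 600,
        SDL_WINDOW_OPENGL);
    SDL_GLContext context = SDL_GL_CreateContext(window);

    /* ... game loop: render, then SDL_GL_SwapWindow(window) ... */

    SDL_GL_DeleteContext(context);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}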
Now the main problem with your program is that it scatters some vital OpenGL operations all over the place, most importantly the viewport and matrix setup. Your drawing function should always follow a scheme like this:
void draw(…)
{
    glDisable(GL_SCISSOR_TEST);
    glClearColor(…);
    glClearDepth(…);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    for(subview in views) {
        glViewport(…);
#ifdef USE_FIXED_FUNCTION_PIPELINE
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        projection_setup();
#endif
        for(model in scene) {
#ifdef USE_FIXED_FUNCTION_PIPELINE
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            modelview_setup();
#else
            glUseProgram(…);
            setup_uniforms();
#endif
            draw_stuff();
        }
    }
    SwapBuffers();
}
The key to your problem is to set up the viewport, projection and modelview in the drawing code, right when you need them, to exactly what you need them to be.
Related
I'm trying to draw a custom OpenGL overlay (Steam does that, for example) in a 3D desktop game. This overlay should basically be able to show the status of some variables which the user can affect by pressing some keys. Think of it like a game trainer.
The goal is, first of all, to draw a few primitives at a specific point on the screen. Later I want to have a nice looking little "gui" component in the game window.
The game uses the "SwapBuffers" method from GDI32.dll.
Currently I'm able to inject a custom DLL file into the game and hook the "SwapBuffers" method.
My first idea was to insert the drawing of the overlay into that function. This could be done by switching the game's 3D drawing mode to 2D, drawing the 2D overlay on the screen, and then switching back again, like this:
//SwapBuffers_HOOK (HDC)
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
//"OVERLAY"
glBegin(GL_QUADS);
    glColor3f(1.0f, 1.0f, 1.0f);
    glVertex2f(0, 0);
    glVertex2f(0.5f, 0);
    glVertex2f(0.5f, 0.5f);
    glVertex2f(0.0f, 0.5f);
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
SwapBuffers_OLD(HDC);
However, this does not have any effect on the game at all.
Is my approach correct and reasonable (also considering my 3D-to-2D switching code)?
I would like to know what the best way is to design and display a custom overlay in the hooked function. (Should I use something like Windows Forms, or should I assemble my component from OpenGL primitives - lines, quads ...?)
Is the SwapBuffers method the best place to draw my overlay?
Any hint, source code or tutorial on something similar is appreciated too.
The game, by the way, is Counter-Strike 1.6, and I don't intend to cheat online.
Thanks.
EDIT:
I managed to draw a simple rectangle into the game's window by using a new OpenGL context, as proposed by 'derHass'. Here is what I did:
//(globals assumed by this snippet: they hold the overlay context state)
static bool contextCreated = false;
static HGLRC thisContext = NULL;

//1. At the beginning of the hooked gdiSwapBuffers(HDC hdc) method save the old context
GLboolean gdiSwapBuffersHOOKED(HDC hdc) {
    HGLRC oldContext = wglGetCurrentContext();
    //2. If the new context has not been created yet, create it
    //(we need the "hdc" parameter for the current window, so the initialization
    //process happens in this method - does anyone have a better solution?)
    //Then make the new context current.
    if (!contextCreated) {
        thisContext = wglCreateContext(hdc);
        wglMakeCurrent(hdc, thisContext);
        initContext();
    }
    else {
        wglMakeCurrent(hdc, thisContext);
    }
    //Draw the quad in the new context and switch back to the old one.
    drawContext();
    wglMakeCurrent(hdc, oldContext);
    return gdiSwapBuffersOLD(hdc);
}

GLvoid drawContext() {
    glColor3f(1.0f, 0, 0);
    glBegin(GL_QUADS);
        glVertex2f(0, 190.0f);
        glVertex2f(100.0f, 190.0f);
        glVertex2f(100.0f, 290.0f);
        glVertex2f(0, 290.0f);
    glEnd();
}

GLvoid initContext() {
    contextCreated = true;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 1.0);
}
Here is the result:
[Screenshot: CS overlay example]
It is still very simple but I will try to add some more details, text etc. to it.
Thanks.
If the game is using OpenGL, then hooking into SwapBuffers is the way to go, in principle. In theory, there might be several different drawables, and you might have to decide in your swap buffer function which one(s) are the right ones to modify.
There are a couple of issues with this kind of OpenGL interception, though:
1. OpenGL is a state machine. The application might have modified any GL state variable there is. The code you provided is far from complete enough to guarantee that something is drawn. For example, if the application happens to have shaders enabled, all your matrix setup might be without effect, and what really appears on the screen depends on the shaders.
If depth testing is on, your fragments might lie behind what was already drawn. If polygon culling is on, your primitive might be wound the wrong way for the current culling mode. If the color masks are set to GL_FALSE, or the draw buffer is not set to where you expect it, nothing will appear.
2. Also note that your attempt to "reset" the matrices is wrong, too. You seem to assume that the current matrix mode is GL_MODELVIEW, but this doesn't have to be the case. It could just as well be GL_PROJECTION or GL_TEXTURE. You also apply glOrtho to the current projection matrix without loading identity first, so this alone is a good reason for nothing to appear on the screen.
3. As OpenGL is a state machine, you also must restore all the state you touched. You already try this with the matrix stack push/pop, but you failed, for example, to restore the exact matrix mode. As you have seen in point 1, a lot more state changes will be required, so restoring it all will be more complex. Since you use legacy OpenGL, glPushAttrib() might come in handy here (a minimal sketch follows after this list).
4. SwapBuffers is not a GL function, but part of the operating system's API. It gets a drawable as parameter, and only indirectly refers to any GL context. It might be called while another GL context is bound to the thread, or with none at all. If you want to play it safe, you'll also have to intercept the GL context creation functions as well as MakeCurrent. In the worst (though very unlikely) case, the application has the GL context bound to another thread while it is calling SwapBuffers, so there is no chance for you in the hooked function to get at the context.
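Here is the save/restore sketch promised in point 3, a hedged example assuming legacy OpenGL: glPushAttrib saves the matrix mode and the enable flags but not the matrix contents, and the bound shader program is not covered by the attribute stack at all, so both are handled explicitly:

GLint prev_program;
glGetIntegerv(GL_CURRENT_PROGRAM, &prev_program); /* not saved by glPushAttrib */
glPushAttrib(GL_ALL_ATTRIB_BITS);                 /* saves matrix mode, enables, masks, ... */

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

glUseProgram(0);                 /* bypass whatever shader the game has bound */
glDisable(GL_DEPTH_TEST);        /* don't let the scene's depth buffer eat the overlay */

/* ... draw the overlay here ... */

glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();

glPopAttrib();                           /* restores matrix mode, enables, masks */
glUseProgram((GLuint)prev_program);      /* restore the game's shader binding */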
Putting this all together opens up another alternative: you can create your own GL context, bind it temporarily during the hooked SwapBuffers call, and restore the original binding again. That way, you don't interfere with the application's GL state at all. You can still augment the image content the application has rendered, since the framebuffer is part of the drawable, not of the GL context. Doing so might have a negative impact on performance, but it might be so small that you never even notice it.
Since you want to do this only for a single specific application, another approach would be to find out the minimal state changes which are necessary by observing what GL state the application actually set during the SwapBuffers call. A tool like apitrace can help you with that.
I am porting one of my games from Windows to Linux, using GLFW3 for window creation. The code runs perfectly well on Windows (using GLFW3 and OpenGL), but when I compile and run it on Ubuntu 12.10 there is an issue in fullscreen mode (in windowed mode it runs well): the right part (about 25%) of the frame gets stretched and goes off screen.
Here's how I am creating GLFW window:
window = glfwCreateWindow(1024, 768, "Chaos Shell", glfwGetPrimaryMonitor(), NULL);
And here's my OpenGL initialisation code:
glViewport(0, 0, 1024, 768);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-512.0f, 512.0f, -384.0f, 384.0f, 0.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
The above code should load the game in fullscreen mode at 1024 by 768 resolution.
When I run it, glfwCreateWindow changes the screen resolution from my current resolution (1366 by 768) to 1024 by 768, but the right part of the frame goes off screen. If I manually change the resolution to 1024 by 768 first and then run the game, everything looks alright.
Also, running the same code on Windows doesn't show any issue, no matter what my current screen resolution is. It just changes the resolution to 1024 by 768 and everything looks perfect.
I would really appreciate it if someone could figure out why it is acting weird on Ubuntu.
You're probably running into an issue with the window manager. In short, the window manager didn't notice the change in resolution and, due to the fullscreen flag, expands the window to the old resolution.
Or you didn't get 1024×768 at all, because your screen doesn't support it, and instead got a smaller 16:9 resolution. So don't use hardcoded values for setting the viewport.
Honestly: you shouldn't change the screen resolution at all! Hardly anybody uses CRT displays anymore, and for displays using discrete pixels (LCDs, AMOLEDs, DLP projectors, LCoS projectors) it makes little sense to run them at anything other than their native resolution. So just create a fullscreen window without making the system change the resolution.
When setting the viewport, query the actual window size from GLFW instead of relying on your hardcoded values (this alone could actually fix your issue with the resolution change).
If you want to reduce the load on the GPU when rendering: use an FBO to render to a texture of the desired resolution, and as a last step draw that texture to a fullscreen quad to stretch it up to display size. It looks better than what most screen scalers produce, and your game doesn't mess with the rest of the system.
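For illustration, a hedged sketch of that render-to-texture approach, using GL 3.0-style FBO calls; the fixed 1024×768 target, the window_width/window_height variables (from glfwGetWindowSize, per the advice above) and the legacy matrix stack for the final quad are assumptions to match the question's code:

/* one-time setup: color texture + depth renderbuffer at the fixed render size */
GLuint fbo, color_tex, depth_rb;

glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 768, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenRenderbuffers(1, &depth_rb);
glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 1024, 768);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color_tex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depth_rb);

/* each frame: render the scene into the FBO ... */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 1024, 768);
/* ... draw the scene here ... */

/* ... then stretch the result over the real window */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, window_width, window_height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 1, 0, 1, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, color_tex);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(0, 0);
    glTexCoord2f(1, 0); glVertex2f(1, 0);
    glTexCoord2f(1, 1); glVertex2f(1, 1);
    glTexCoord2f(0, 1); glVertex2f(0, 1);
glEnd();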
Update due to comment
Setting the screen resolution because the game is unable to cope with non-4:3 resolutions is very bad style. It took long enough for large game studios to adapt to widescreens; not coping with them is unacceptable, because it's so easy to fix.
Don't cover up mistakes by forcing something on users they might not want. And if a user has a nice display, give them the opportunity to actually use it!
Your problem is not the display resolution. It's the hardcoded viewport and projection setup. You need to fix that.
To fix your "game looks horrible at a different resolution" issue, you need to set the viewport and projection in response to the window's size, like this:
int window_width, window_height;
glfwGetWindowSize(window, &window_width, &window_height);
if( 0 == window_width
 || 0 == window_height) {
    /* window has no area, so there's nothing to draw to */
    return;
}

float const window_aspect = (float)window_width / (float)window_height;

/* we want to draw with a total of 768 units vertically, as if we
 * were drawing to a screen 768 pixels in height. */
float const projection_scale = 768./2;

glViewport(0, 0, window_width, window_height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho( -window_aspect * projection_scale,
          window_aspect * projection_scale,
         -projection_scale,
          projection_scale,
          0.0f,
          1.0f );
I'm trying to use GWEN to draw some GUI elements on top of my OpenGL scene. It seems to be set up correctly, but nothing from GWEN is actually being drawn (visibly at least). I'm using a custom renderer, which is essentially GWEN's stock OpenGL renderer but with a different function for loading textures, and with OpenGL::Begin() and OpenGL::End() replaced with these:
void coRenderer::Begin()
{
    glUseProgram(0);
    glDisable(GL_DEPTH_TEST);
    glDepthMask(0);
    glEnable(GL_BLEND);
    glMatrixMode(GL_PROJECTION);  // Select the projection matrix
    glPushMatrix();               // Store the projection matrix
    glLoadIdentity();
    glOrtho(0, screen->w, screen->h, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glActiveTexture(GL_TEXTURE0);
}

void coRenderer::End()
{
    Flush();
    glMatrixMode(GL_PROJECTION);  // Select the projection matrix
    glPopMatrix();                // Restore the old projection matrix
    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(1);
    glEnable(GL_TEXTURE_2D);
}
The code for GWEN's OpenGL renderer is here:
http://gwen.googlecode.com/svn/trunk/trunk/gwen/Renderers/OpenGL/OpenGL.cpp
BTW, I'm using OpenGL 2.1, not 3.0+.
Ah GWEN. That frustrating GUI library.
When I started using it, and integrating it into the engine we wrote in school, I had the same issue as you, though using the stock OpenGL renderer. It turned out it was being positioned wrong; calling glLoadIdentity() to reset the current matrix seemed to resolve it.
The issue you are having could well turn out to be the same as mine, or there could be a problem with your custom OpenGL renderer. I'm not sure how much you know about GWEN or how it works, but it runs on a single texture that skins the GUI. Are you loading that in? Perhaps your texture loader isn't loading it correctly.
Try using your debugger and stepping through your program. Areas of interest would be where you're attempting to load the GUI skin, where you're assigning the screen space that GWEN can use, and where you're actually attempting to render the GUI.
I am new to OpenGL and I am still experimenting with basic shapes. I sometimes find functions, like glEnd and many more, that are not mentioned in the OpenGL 3+ documentation. Were they replaced by other functions? Or do I have to write them manually?
Is there a tutorial online that uses OpenGL 3+?
As for "gluPerspective": I have read that it isn't used in OpenGL 3+. Isn't it supposed to be a separate function in GLUT? What does it have to do with OpenGL 3+? Last, what does Transform(Width, Height); do? (I found it in some sample code I downloaded, and I can't find it in GLUT or OpenGL.)
Here is the code:
GLvoid Transform(GLfloat Width, GLfloat Height)
{
glViewport(00, 00, Width, Height); /* Set the viewport */
glMatrixMode(GL_PROJECTION); /* Select the projection matrix */
glLoadIdentity(); /* Reset The Projection Matrix */
gluPerspective(20.0,Width/Height,0.1,100.0); /* Calculate The Aspect Ratio Of The Window */
glMatrixMode(GL_MODELVIEW); /* Switch back to the modelview matrix */
}
/* A general OpenGL initialization function. Sets all of the initial parameters. */
GLvoid InitGL(GLfloat Width, GLfloat Height)
{
glClearColor(0.0, 0.0, 0.0, 0.0); /* This Will Clear The Background Color To Black */
glLineWidth(2.0); /* Add line width, ditto */
Transform( Width, Height ); /* Perform the transformation */
}
/* The function called when our window is resized */
GLvoid ReSizeGLScene(GLint Width, GLint Height)
{
if (Height==0) Height=1; /* Sanity checks */
if (Width==0) Width=1;
Transform( Width, Height ); /* Perform the transformation */
}
I sometimes find functions, like glEnd and many more, that are not mentioned in the OpenGL 3+ documentation. Were they replaced by other functions?
They have been completely removed, since the way they work doesn't reflect how modern graphics systems operate, on either the hardware or the software side. glBegin(…) and glEnd() form the surroundings of the so-called immediate mode: every call causes an operation. This reflects how early graphics systems were built, some 20 years ago.
Today one prepares batches of data, transfers them to GPU memory, and triggers batch drawing with a single draw call. OpenGL does this through vertex arrays and vertex buffer objects (VBOs). Vertex arrays have been around since OpenGL-1.1 (1996), and the VBO API is founded on vertex arrays, so for any reasonable program VBO support was added easily.
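To make the contrast concrete, a minimal sketch of that batched style for a single triangle; it assumes a VBO-capable context and, for a core profile, that a VAO and a shader program are already bound, with attribute location 0 matching a hypothetical layout(location = 0) declaration in that shader:

/* the data goes into a buffer object once ... */
GLfloat vertices[] = {
    -0.5f, -0.5f,
     0.5f, -0.5f,
     0.0f,  0.5f,
};

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

/* ... then drawing is a single call, no glBegin/glEnd */
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);
glDrawArrays(GL_TRIANGLES, 0, 3);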
Or do I have to write them manually? Is there a tutorial online that uses OpenGL 3+?
It depends on the function in question. For example, the whole texture environment and combiner interface has been removed, just like the matrix manipulation functions and the whole lighting interface.
What they did and configured is now done through shaders and uniforms. Since you're expected to supply shaders, one might say you're expected to implement this yourself. On the other hand you'll quickly find that writing a shader is often easier and more concise than fiddling with large numbers of OpenGL parameter-setting calls. Also, once you've progressed far enough, you'll hardly miss the matrix manipulation functions. Every serious application dealing with 3D graphics maintains the transformation matrices itself, be it for enhanced flexibility or simply because those matrices are required in other places too, e.g. in some physics simulation.
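As an illustration of maintaining the matrices yourself, a hedged sketch using the GLM library; the width, height and program variables and the u_projection uniform name are placeholders, not anything from the question:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

/* build the projection on the CPU - this replaces
 * glMatrixMode(GL_PROJECTION) + gluPerspective(...) */
glm::mat4 projection = glm::perspective(
    glm::radians(20.0f),            /* vertical field of view */
    (float)width / (float)height,   /* aspect ratio */
    0.1f, 100.0f);                  /* near and far planes */

/* upload it to the shader as a uniform */
GLint loc = glGetUniformLocation(program, "u_projection");
glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(projection));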
As for "gluPerspective": I have read that it isn't used in OpenGL 3+. Isn't it supposed to be a separate function in GLUT? What does it have to do with OpenGL 3+? Last, what does Transform(Width, Height); do? (I found it in some sample code I downloaded, and I can't find it in GLUT or OpenGL.)
gluPerspective is part of GLU. GLU is a companion library of OpenGL utility functions that used to ship with OpenGL-1.1. However, it is not part of the OpenGL specification and is completely optional.
GLUT is something else again. It's a simplistic framework for quick-and-dirty setup of an OpenGL window and context, offering a minimalistic input API. It's also no longer actively maintained. Personally, I recommend not using it. If you must use a GLUT API, use FreeGLUT. Or better yet, don't use GLUT at all; use a toolkit like Qt or GTK, or a framework like GLFW or SDL.
Were they replaced by other functions?
No.
Or do I have to write them manually?
For old-style immediate-mode geometry submission you'll have to make your own work-alike. The matrix stack has a replacement.
Is there a tutorial online that uses OpenGL 3+?
At least one.
I'd like to try to implement some HCI for my existing OpenGL application. If possible, the menus should appear in front of my 3D graphics, which would be in the background.
I was thinking of drawing a square directly in front of the "camera", and then drawing either textures or more primitives on top of the "base" square.
While the menus are active the camera can't move, so that the camera doesn't look away from the menus.
Does this sound far-fetched to anyone, or am I on the right track? How would everyone else do it?
I would just glPushMatrix, glLoadIdentity, do your drawing, then glPopMatrix, and not worry about where your camera is.
You'll also need to disable and re-enable depth testing, lighting and such.
There is the GLUI library to do this (no personal experience).
Or, if you are using Qt, there are ways of rendering Qt widgets transparently on top of the OpenGL model; there is also beta support for rendering all of Qt in OpenGL.
You could also do all your 3D rendering, then switch to an orthographic projection and draw all your menu objects. This would be much easier than putting it all on a large billboarded quad as you suggested.
Check out this excerpt, specifically the heading "Projection Transformations".
As stated here, you need to apply a translation of 0.375 in x and y to get pixel-perfect alignment:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, width, 0, height);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.375, 0.375, 0.0);
/* render all primitives at integer positions */
The algorithm is simple:
Draw your 3D scene, presumably with depth testing enabled.
Disable depth testing so that your GUI elements will draw over the 3D stuff.
Use glPushMatrix to store your current modelview and projection matrices (assuming you want to restore them - otherwise, just stomp on them).
Set up your modelview and projection matrices as described in the above code.
Draw your UI stuff.
Use glPopMatrix to restore your pushed matrices (assuming you pushed them).
Doing it like this makes the camera position irrelevant; in fact, as the camera moves, the 3D parts are affected as normal, but the 2D overlay stays in place. I'm expecting that this is the behaviour you want. A sketch putting the steps together follows below.
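A minimal sketch of those steps; the draw_scene_3d and draw_ui helpers are placeholders, and the 0.375 translation from the code above is included for pixel alignment:

void draw_frame(int width, int height)
{
    draw_scene_3d();                  /* step 1: normal 3D pass, depth test on */

    glDisable(GL_DEPTH_TEST);         /* step 2: GUI draws over the 3D scene */

    glMatrixMode(GL_PROJECTION);      /* step 3: save both matrices */
    glPushMatrix();
    glLoadIdentity();
    gluOrtho2D(0, width, 0, height);  /* step 4: pixel-aligned ortho setup */
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glTranslatef(0.375f, 0.375f, 0.0f);

    draw_ui();                        /* step 5 */

    glMatrixMode(GL_PROJECTION);      /* step 6: restore with glPopMatrix */
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
    glEnable(GL_DEPTH_TEST);
}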