gluSphere giving strange lighting - OpenGL

When working with GLUT, I used glutSolidSphere to draw my spheres, but having moved to GLFW, I had to use gluSphere. I basically copied the entire "glutSolidSphere" function into my own code, but I'm getting a strange lighting problem that I didn't have before. Here's the code for the sphere:
void drawSolidSphere(GLdouble radius, GLint slices, GLint stacks)
{
    GLUquadric *shape = gluNewQuadric();
    gluQuadricDrawStyle(shape, GLU_FILL);
    gluQuadricNormals(shape, GLU_SMOOTH);
    gluSphere(shape, radius, slices, stacks);
}
What's the problem here?
Edit: For some reason I can't upload images from college, so I'll try to describe it: the sphere's outline looks fine, but you can see the segments on the inside, as if the outside of the sphere were transparent, and it causes clear divides in the sphere.

Looks like there's a problem with depth testing.
Assuming you have a depth buffer from GLFW, does this fix it?
glEnable(GL_DEPTH_TEST);
I haven't used GLFW, but to request a depth buffer it looks like you just need to pass, for example, 24 as the depthbits argument of glfwOpenWindow.
You will also need to add GL_DEPTH_BUFFER_BIT to your glClear call if you haven't already.
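Putting that together, a minimal GLFW 2.x setup along these lines might look like the following (the window size, bit depths, and sphere parameters are placeholder values):
glfwInit();
// 8 bits per color channel, 24-bit depth buffer, no stencil buffer
glfwOpenWindow(640, 480, 8, 8, 8, 8, 24, 0, GLFW_WINDOW);
glEnable(GL_DEPTH_TEST);
while (glfwGetWindowParam(GLFW_OPENED))
{
    // clear the depth buffer along with the color buffer every frame
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawSolidSphere(1.0, 32, 32);
    glfwSwapBuffers();
}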
I've experienced inconsistencies in the default GL state, specifically GL_DEPTH_TEST, across Windows and Linux using GLUT/freeglut before.
Also, see gluNewQuadric leaking memory
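On that note, a minimal sketch of a version that avoids the per-frame leak (caching the quadric in a static is my assumption, not necessarily what the linked answer suggests):
void drawSolidSphere(GLdouble radius, GLint slices, GLint stacks)
{
    static GLUquadric *shape = NULL; // created once, reused on every call
    if (!shape) {
        shape = gluNewQuadric();
        gluQuadricDrawStyle(shape, GLU_FILL);
        gluQuadricNormals(shape, GLU_SMOOTH);
    }
    gluSphere(shape, radius, slices, stacks);
}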

Related

DLL injection: drawing a simple game overlay with OpenGL

I'm trying to draw a custom OpenGL overlay (Steam does that, for example) in a 3D desktop game. This overlay should basically be able to show the status of some variables, which the user can affect by pressing some keys. Think of it like a game trainer.
The goal, in the first place, is to draw a few primitives at a specific point on the screen. Later I want to have a nice looking little "gui" component in the game window.
The game uses the "SwapBuffers" method from GDI32.dll.
Currently I'm able to inject a custom DLL file into the game and hook the "SwapBuffers" method.
My first idea was to insert the drawing of the overlay into that function. This could be done by switching the game's 3D drawing mode to 2D, drawing the 2D overlay on the screen, and switching back again, like this:
//SwapBuffers_HOOK(HDC hdc)
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
//"OVERLAY"
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex2f(0, 0);
glVertex2f(0.5f, 0);
glVertex2f(0.5f, 0.5f);
glVertex2f(0.0f, 0.5f);
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
SwapBuffers_OLD(hdc);
However, this does not have any effect on the game at all.
Is my approach correct and reasonable (also considering my 3D-to-2D switching code)?
I would like to know the best way to design and display a custom overlay in the hooked function. (Should I use something like Windows Forms, or should I assemble my component with OpenGL functions - lines, quads, ...?)
Is the SwapBuffers method the best place to draw my overlay?
Any hint, source code, or tutorial for something similar is appreciated too.
The game, by the way, is Counter-Strike 1.6, and I don't intend to cheat online.
Thanks.
EDIT:
I managed to draw a simple rectangle into the game's window by using a new OpenGL context, as proposed by 'derHass'. Here is what I did:
//1. At the beginning of the hooked gdiSwapBuffers(HDC hdc) method, save the old context
GLboolean gdiSwapBuffersHOOKED(HDC hdc) {
    HGLRC oldContext = wglGetCurrentContext();
    //2. If the new context has not already been created, create it
    //(we need the "hdc" parameter for the current window, so the initialization
    //process happens in this method - anyone have a better solution?)
    //Then make the new context current.
    if (!contextCreated) {
        thisContext = wglCreateContext(hdc);
        wglMakeCurrent(hdc, thisContext);
        initContext();
    }
    else {
        wglMakeCurrent(hdc, thisContext);
    }
    //Draw the quad in the new context and switch back to the old one.
    drawContext();
    wglMakeCurrent(hdc, oldContext);
    return gdiSwapBuffersOLD(hdc);
}
GLvoid drawContext() {
    glColor3f(1.0f, 0, 0);
    glBegin(GL_QUADS);
    glVertex2f(0, 190.0f);
    glVertex2f(100.0f, 190.0f);
    glVertex2f(100.0f, 290.0f);
    glVertex2f(0, 290.0f);
    glEnd();
}
GLvoid initContext() {
    contextCreated = true;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 1.0);
}
Here is the result: [screenshot: cs overlay example]
It is still very simple, but I will try to add some more details, text, etc. to it.
Thanks.
If the game is using OpenGL, then hooking into SwapBuffers is the way to go, in principle. In theory, there might be several different drawables, and you might have to decide in your swap buffer function which one(s) are the right ones to modify.
There are a couple of issues with this kind of OpenGL interception, though:
1. OpenGL is a state machine. The application might have modified any GL state variable there is. The code you provided is far from complete enough to guarantee that something is drawn. For example, if the application happens to have shaders enabled, all your matrix setup might be without effect, and what really appears on the screen would depend on the shaders.
If depth testing is on, your fragments might lie behind what was already drawn. If polygon culling is on, your primitive might be wound incorrectly for the current culling mode. If the color masks are set to GL_FALSE, or the draw buffer is not set to where you expect it, nothing will appear.
Also note that your attempt to "reset" the matrices is wrong as well. You seem to assume that the current matrix mode is GL_MODELVIEW, but this doesn't have to be the case. It could just as well be GL_PROJECTION or GL_TEXTURE. You also apply glOrtho to the current projection matrix without loading identity first, so this alone is a good reason for nothing to appear on the screen.
2. As OpenGL is a state machine, you also must restore all the state you touched. You already try this with the matrix stack push/pop, but you failed, for example, to restore the exact matrix mode. As you have seen in 1, a lot more state changes will be required, so restoring it all will be more complex. Since you use legacy OpenGL, glPushAttrib() might come in handy here; see the sketch after this list.
3. SwapBuffers is not a GL function, but part of the operating system's API. It gets a drawable as a parameter, and only indirectly refers to any GL context. It might be called while another GL context is bound to the thread, or with none at all. If you want to play it safe, you'll also have to intercept the GL context creation function as well as MakeCurrent. In the worst (though very unlikely) case, the application has the GL context bound to another thread while it is calling SwapBuffers, so there would be no chance for you to get to the context in the hooked function.
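To illustrate point 2, here is a minimal sketch of saving and restoring the touched state around an overlay draw (legacy GL; the attribute bits, the 640x480 ortho, and the shader save/restore are illustrative assumptions, not a complete list):
GLint prevProgram;
glGetIntegerv(GL_CURRENT_PROGRAM, &prevProgram); // glPushAttrib does not cover shader bindings
glPushAttrib(GL_ENABLE_BIT | GL_TRANSFORM_BIT | GL_CURRENT_BIT | GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(0);             // fall back to the fixed-function pipeline
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
// ... draw the overlay here ...
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glPopAttrib();               // GL_TRANSFORM_BIT also restores the saved matrix mode
glUseProgram((GLuint)prevProgram);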
Putting this all together opens up another alternative: you can create your own GL context, bind it temporarily during the hooked SwapBuffers call, and restore the original binding again. That way, you don't interfere with the GL state of the application at all. You can still augment the image content the application has rendered, since the framebuffer is part of the drawable, not of the GL context. Doing so might have a negative impact on performance, but it might be so small that you would never even notice it.
Since you want to do this only for a single specific application, another approach would be to find out the minimal set of state changes that is necessary by observing which GL state the application actually sets during the SwapBuffers call. A tool like apitrace can help you with that.

Display a quad perpendicular to the screen

When drawing a quad, it vanishes when rotation brings it into a position perpendicular to the screen. Ideally what I'd like to see is (b), but I get nothing.
Is there something wrong with my code? (Warning: old OpenGL code follows.)
void draw_rect(double vector[4][3], int rgb[3], double transp)
{
    GLint is_depth, is_blend, blend_src, blend_dst;
    glGetIntegerv(GL_DEPTH_WRITEMASK, &is_depth);
    glGetIntegerv(GL_BLEND, &is_blend);
    glGetIntegerv(GL_BLEND_SRC, &blend_src);
    glGetIntegerv(GL_BLEND_DST, &blend_dst);
    glEnable(GL_BLEND);
    glDepthMask(0);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // code to set the color ...
    glBegin(GL_POLYGON);
    glVertex3dv(&vector[0][0]);
    glVertex3dv(&vector[1][0]);
    glVertex3dv(&vector[2][0]);
    glVertex3dv(&vector[3][0]);
    glEnd();
    if (!is_blend) { glDisable(GL_BLEND); }
    glDepthMask(is_depth);
    glBlendFunc(blend_src, blend_dst);
}
A quad (assuming its four vertices are coplanar, as in this case) is by definition infinitely thin. It is correct behavior for it to be invisible when perpendicular to the camera.
The "correct" solution is to make a box rather than a single quad.
See Drawing cube 3D using Opengl for an example using a cube. You'll need to tweak the vertex positions to make the cube smaller along one dimension (probably Z), but it'll give you the effect you're looking for.
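As a rough sketch of that idea (legacy GL to match the question's code; the function name and half_thickness parameter are mine, and the winding follows the input quad's order):
#include <math.h>

void draw_thin_box(double v[4][3], double half_thickness)
{
    // face normal from two edges of the quad
    // (assumes the first three vertices are not collinear)
    double e1[3] = { v[1][0] - v[0][0], v[1][1] - v[0][1], v[1][2] - v[0][2] };
    double e2[3] = { v[3][0] - v[0][0], v[3][1] - v[0][1], v[3][2] - v[0][2] };
    double n[3]  = { e1[1] * e2[2] - e1[2] * e2[1],
                     e1[2] * e2[0] - e1[0] * e2[2],
                     e1[0] * e2[1] - e1[1] * e2[0] };
    double len = sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    double front[4][3], back[4][3];
    int i, j;
    // offset the quad by +/- half_thickness along its unit normal
    for (i = 0; i < 4; i++)
        for (j = 0; j < 3; j++) {
            front[i][j] = v[i][j] + half_thickness * n[j] / len;
            back[i][j]  = v[i][j] - half_thickness * n[j] / len;
        }
    glBegin(GL_QUADS);
    for (i = 0; i < 4; i++) glVertex3dv(front[i]);  // offset copy of the quad
    for (i = 3; i >= 0; i--) glVertex3dv(back[i]);  // other side, reversed winding
    for (i = 0; i < 4; i++) {                       // the four thin side faces
        int k = (i + 1) % 4;
        glVertex3dv(front[i]); glVertex3dv(front[k]);
        glVertex3dv(back[k]);  glVertex3dv(back[i]);
    }
    glEnd();
}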
Also, stop using the fixed function stuff (glVertex, etc.). It's been deprecated for years. Shaders aren't that difficult, and examples are easy to find via your favorite search engine.
Alternatively, try making it a line of some definite width when the quad is perpendicular to the screen.
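For example, a sketch reusing the question's vertex array (detecting that the quad is edge-on is left out here):
glLineWidth(2.0f);      // a definite width in pixels
glBegin(GL_LINE_LOOP);  // draw the outline instead of the filled quad
glVertex3dv(&vector[0][0]);
glVertex3dv(&vector[1][0]);
glVertex3dv(&vector[2][0]);
glVertex3dv(&vector[3][0]);
glEnd();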

OpenGL surface rendering issue

I just started loading some OBJ files and rendering them with OpenGL. When I render these meshes I get this result (see pictures).
I think it's some kind of depth problem, but I can't figure it out by myself.
These are the parameters for rendering:
// Dark blue background
glClearColor(0.0f, 0.0f, 0.4f, 0.0f);
// Enable depth test
glEnable( GL_DEPTH_TEST );
// Cull triangles which normal is not towards the camera
glEnable(GL_CULL_FACE);
I used this tutorial code as a template: https://code.google.com/p/opengl-tutorial-org/source/browse/#hg%2Ftutorial08_basic_shading
The problem is simple: you are culling either front or back faces.
The object file contains faces with either CCW (counter-clockwise) or CW (clockwise) winding, i.e. written from left to right or right to left.
Your OpenGL code expects the opposite winding, so it hides the surfaces you are looking at.
To check whether this solves your problem, just take out the glEnable(GL_CULL_FACE); that seems to be exactly what is producing the problem.
Additionally, you can use glCullFace(ENUM), where ENUM has to be GL_FRONT or GL_BACK.
If you still see only a partial mesh in both cases (GL_FRONT and GL_BACK), then it's either a problem with your code interpreting the .obj, or the .obj does not use strictly consistent winding (a mix of CCW and CW).
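For reference, a minimal sketch of the relevant state (choosing GL_CCW here is an assumption; most OBJ exporters emit counter-clockwise faces):
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);  // discard faces pointing away from the camera
glFrontFace(GL_CCW);  // treat counter-clockwise winding as front-facing
// if the wrong surfaces disappear, try glFrontFace(GL_CW) instead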
I am actually unsure what you mean; however, glEnable(GL_CULL_FACE); followed by glCullFace(GL_BACK); will cull out, or remove, the back faces of the object. This greatly reduces the lag while rendering objects, and it only makes a visible difference if you are inside or "behind" the object.
Also, have you tried glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); before your render code?

GWEN + OpenGL: can't see anything

I'm trying to use GWEN to draw some GUI elements on top of my OpenGL scene. It seems to be set up correctly, but nothing from GWEN is actually being drawn (visibly, at least). I'm using a custom renderer, which is essentially GWEN's stock OpenGL renderer but with a different function for loading textures, and with OpenGL::Begin() and OpenGL::End() replaced with these:
void coRenderer::Begin()
{
    glUseProgram(0);
    glDisable(GL_DEPTH_TEST);
    glDepthMask(0);
    glEnable(GL_BLEND);
    glMatrixMode(GL_PROJECTION); // select the projection matrix
    glPushMatrix();              // store the projection matrix
    glLoadIdentity();
    glOrtho(0, screen->w, screen->h, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glActiveTexture(GL_TEXTURE0);
}
void coRenderer::End()
{
    Flush();
    glMatrixMode(GL_PROJECTION); // select the projection matrix
    glPopMatrix();               // restore the old projection matrix
    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(1);
    glEnable(GL_TEXTURE_2D);
}
The code for GWEN's OpenGL renderer is here:
http://gwen.googlecode.com/svn/trunk/trunk/gwen/Renderers/OpenGL/OpenGL.cpp
BTW, I'm using OpenGL 2.1, not 3.0+.
Ah, GWEN. That frustrating GUI library.
When I started using it and integrating it into the engine we wrote in school, I had the same issue as you, though using the stock OpenGL renderer. It turned out the GUI was being positioned wrong; calling glLoadIdentity() to reset the matrix to identity seemed to resolve it.
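Notably, the Begin() you posted switches to GL_MODELVIEW but never resets that matrix, so a stale scene transform could be moving the GUI off-screen. A sketch of the fix under that assumption (the push/pop pair is mine, to preserve the scene's matrix):
// in coRenderer::Begin(), after glMatrixMode(GL_MODELVIEW):
glPushMatrix();    // preserve the scene's modelview matrix
glLoadIdentity();  // GWEN draws in window coordinates
// in coRenderer::End(), after Flush():
glMatrixMode(GL_MODELVIEW);
glPopMatrix();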
The issue you're having could well end up being the same as mine, or there could be a problem with your custom OpenGL renderer. I'm not sure how much you know about GWEN or how it works, but it runs on a single texture that skins the GUI. Are you loading that in? Perhaps your texture loader isn't loading it correctly.
Try using your debugger and stepping through your program. Areas of interest would be where you attempt to load the GUI skin, where you assign the screen space GWEN can use, and where you actually attempt to render the GUI.

Why does my colored cube not work with GL_BLEND?

My cube isn't rendering as expected when I use GL_BLEND.
glEnable(GL_CULL_FACE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
I'm also having a similar problem with drawing some semi-opaque vertices in front, which could well be related.
Related: Why do my semi-opaque vertices make background objects brighter in OpenGL?
Here's what it's supposed to look like:
Normal cube http://img408.imageshack.us/img408/2853/normalcube.png
And here's what it actually looks like:
Dark cube http://img7.imageshack.us/img7/7133/darkcube.png
Please see the code used to create the colored cube, and the code used to actually draw the cube.
The cube is being drawn like so:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix();
glLoadIdentity();
// ... do some translation, rotation, etc ...
drawCube();
glPopMatrix();
// ... swap the buffers ...
You could try disabling all lighting before drawing the cube:
glDisable(GL_LIGHTING);
It looks like you have lighting enabled in the second one; try with glShadeModel(GL_FLAT) before drawing.
This has me stumped. It looks as if some vertices have non-opaque alpha values; however, the code you posted has 1.0 for every alpha. So, to debug further, did you try changing your clear color to something non-black? Say, green?
From the code, I doubt lighting is turned on, since no normals were specified.
One last, off-topic comment... You should really not use glBegin/glEnd (two function calls per vertex plus two per primitive is really not a good use of the recent developments in OpenGL). Try glDrawElements with GL_QUADS, or even better GL_TRIANGLES. You already have the data nicely laid out for that.
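For illustration, a minimal sketch of that approach with GL 2.1-era client-side vertex arrays (the corner positions and colors are placeholders, not the poster's data):
// cube corners and per-vertex colors (illustrative values)
static const GLfloat positions[8][3] = {
    {-1,-1,-1}, { 1,-1,-1}, { 1, 1,-1}, {-1, 1,-1},
    {-1,-1, 1}, { 1,-1, 1}, { 1, 1, 1}, {-1, 1, 1},
};
static const GLfloat colors[8][4] = {
    {1,0,0,1}, {0,1,0,1}, {0,0,1,1}, {1,1,0,1},
    {1,0,1,1}, {0,1,1,1}, {1,1,1,1}, {0,0,0,1},
};
// two triangles per face, wound counter-clockwise seen from outside
static const GLubyte indices[36] = {
    0,2,1, 0,3,2,  4,5,6, 4,6,7,  0,1,5, 0,5,4,
    3,7,6, 3,6,2,  0,4,7, 0,7,3,  1,2,6, 1,6,5,
};
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, positions);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, indices);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);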