OpenGL Orthographic Projection Clipping

Assuming I use orthographic projection and have a reshape function like this:
void reshape(f32 width, f32 height)
{
    aspect = width / height;
    glViewport(0, 0, width, height);
    // guaranteed 960x640 HUD canvas
    if (640 * aspect >= 960) {
        ortho.x = 640 * aspect;
        ortho.y = 640;
    } else {
        ortho.x = 960;
        ortho.y = 960 / aspect;
    }
    glOrtho(0, ortho.x, ortho.y, 0, -1.0f, 1.0f);
}
How can I make sure that all vertices with x > ortho.x or y > ortho.y (normally offscreen) are not drawn?
Because if I resize the window to an aspect ratio greater than 1.5 (960/640), I see objects that shouldn't be fully visible (because the viewport is as big as the window).
Is there something like a clipping plane in orthographic projection?

What you want is to use glScissor to ensure that the rendered area never goes beyond a certain size. glScissor takes a rectangle in window coordinates (remember: window coordinates have their origin at the bottom left). The scissor test prevents the generation of fragments outside of this area.
To activate the scissor test, you must call glEnable(GL_SCISSOR_TEST). Unless you do that, the glScissor call won't actually do anything.
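For example, a minimal sketch (the rectangle values here are placeholders; in practice you'd compute them from the current window size):

glScissor(0, 0, 960, 640); // x, y, width, height in window pixels, origin at the bottom left
glEnable(GL_SCISSOR_TEST); // without this, the scissor rectangle is ignored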

Use constant values for the limit parameters of glOrtho, but use glViewport and glScissor (enabled with glEnable(GL_SCISSOR_TEST)) to limit rendering to a sub-portion of your window.
BTW: you should set the projection and viewport in the rendering function. Doing it in the reshape handler doesn't make much sense. In any serious OpenGL application you'll switch projection modes several times during a full render, so just do it that way from the very beginning.
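For the question's 960x640 canvas, such a rendering function could look like this (a sketch; it assumes a GLUT window so the size can be queried with glutGet, and it letterboxes with the largest centered 3:2 rectangle):

void display(void)
{
    int winW = glutGet(GLUT_WINDOW_WIDTH);
    int winH = glutGet(GLUT_WINDOW_HEIGHT);

    // clear the whole window before restricting rendering
    glDisable(GL_SCISSOR_TEST);
    glClear(GL_COLOR_BUFFER_BIT);

    // largest 3:2 rectangle that fits the window, centered
    int vpW = winW, vpH = (winW * 640) / 960;
    if (vpH > winH) { vpH = winH; vpW = (winH * 960) / 640; }
    int vpX = (winW - vpW) / 2, vpY = (winH - vpH) / 2;

    glViewport(vpX, vpY, vpW, vpH);
    glScissor(vpX, vpY, vpW, vpH); // fragments outside this rectangle are discarded
    glEnable(GL_SCISSOR_TEST);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 960, 640, 0, -1.0, 1.0); // constant limits, as suggested above
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // ... draw the HUD in fixed 960x640 logical coordinates ...

    glutSwapBuffers();
}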

Related

Drawing an OpenGL overlay using SwapBuffers interception

I'm trying to make a library that would allow me to draw my overlay on top of the content of a game window that uses OpenGL, by intercepting the call to the SwapBuffers function. For interception I use Microsoft Detours.
BOOL WINAPI __SwapBuffers(HDC hDC)
{
    HGLRC oldContext = wglGetCurrentContext();
    if (!context) // Global variable
    {
        context = wglCreateContext(hDC);
    }
    wglMakeCurrent(hDC, context);
    // Drawing
    glRectf(0.1F, 0.5F, 0.2F, 0.6F);
    wglMakeCurrent(hDC, oldContext);
    return _SwapBuffers(hDC); // Call the original SwapBuffers
}
This code works, but occasionally, when I move my mouse, the overlay blinks. Why? Some forums have said that such an implementation can significantly reduce FPS. Is there a better alternative? Also, how do I correctly translate a normal (pixel) position to an OpenGL position? For example, with width = 1366, x = 1366 maps to 1 and x = 0 maps to -1. How do I get the value for, say, 738? And what about the height?
To translate a screen coordinate to a normalized coordinate you need to know the screen width and screen height: it is a linear mapping from [0, screenwidth] to [-1, 1] and from [0, screenheight] to [-1, 1]. It is as simple as follows:
int screenwidth, screenheight;
//...
screenwidth = 1366;
screenheight = 738;
//...
float screenx, screeny;
float x = (screenx / (float)screenwidth) * 2 - 1;
float y = (screeny / (float)screenheight) * 2 - 1;
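For the concrete value asked about (x = 738 on a screen 1366 pixels wide), this gives:

float x = (738.0f / 1366.0f) * 2 - 1; // ≈ 0.0805

One caveat (not stated in the formulas above): OpenGL's normalized y axis points up, so if your pixel y coordinate is measured from the top of the screen, flip it first, e.g. use (screenheight - screeny) in place of screeny.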
The problem with z = 0:
glRect renders at z = 0, which is a problem because that plane would be infinitely near: OpenGL still considers the rendering to happen in world space, and the screen plane lies at (x, y, 1) in untransformed world space. OpenGL almost always works with 3D coordinates.
There are two ways to tackle this problem:
You should prefer using functions with a z component, because OpenGL does not render correctly at z = 0; z = 1 corresponds to the normalized screen space.
Or you add a glTranslatef(0, 0, 1); to get to the normalized screen space.
Remember to disable depth testing when rendering 2D in screen space, and to reset the modelview matrix.
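Putting those points together, the drawing portion of the hook might look like this (a sketch only; it uses the glTranslatef variant and saves the state it touches so the game's own state is not disturbed):

// inside __SwapBuffers, after wglMakeCurrent(hDC, context):
glPushAttrib(GL_ENABLE_BIT);     // save the enable flags we are about to change
glDisable(GL_DEPTH_TEST);        // a 2D overlay must not be depth-tested

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glTranslatef(0.0f, 0.0f, 1.0f);  // move off z = 0 onto the screen plane

glRectf(0.1F, 0.5F, 0.2F, 0.6F); // the rectangle from the question

glPopMatrix();
glPopAttrib();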

How to display shape and text on OpenGL screen simultaneously?

The code below works fine with no fatal error, but when I pass the arguments w, h to gluOrtho2D as gluOrtho2D(0, w, h, 0) in the reshape function, I get the text on screen, whereas if I pass zeros, as in gluOrtho2D(0, 0, 0, 0), I get the shape of the box.
How can I get both of them (box and text) on screen simultaneously?
#include"glut.h"
void drawBitmapText(char *string, float x, float y, float z);
void reshape(int w, int h);
void display(void);
void drawBitmapText(char *string, float x, float y, float z)
{
char *c;
glRasterPos3f(x, y, z);//define position on the screen where to draw text.
for (c = string; *c != '\0'; c++)
{
glutBitmapCharacter(GLUT_BITMAP_TIMES_ROMAN_24, *c);
}
}
void reshape(int w, int h)
{
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();//Resets to identity Matrix.
gluOrtho2D(0, w, h, 0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
void display(void)
{
glBegin(GL_POLYGON);//1
glVertex2f(-0.2, 0.6 - 0.3);
glVertex2f(-0.1, 0.6 - 0.3);
glVertex2f(-0.1, 0.5 - 0.3);
glVertex2f(-0.2, 0.5 - 0.3);
glEnd();
glColor3f(0, 1, 0);
drawBitmapText("Usama Ishfaq", 200, 400, 0);//drawBitmapText("Usama Ishfaq", x(how much right), y(how much down), z);
glutSwapBuffers();
}
int main(int argc, char* argv[])
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
glutInitWindowSize(500, 500);
glutInitWindowPosition(100, 100);
glutCreateWindow("Usama OGL Window");
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutMainLoop();
return 0;
}
By not following bad tutorials, and by placing the glViewport and projection matrix setup calls at the only place they're valid: the display function. Setting the viewport and projection matrix in the reshape handler is an anti-pattern. Don't do it.
Do this
void display(void)
{
    int const w = glutGet(GLUT_WINDOW_WIDTH);
    int const h = glutGet(GLUT_WINDOW_HEIGHT);

    glViewport(0, 0, w, h);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity(); // resets to identity matrix
    gluOrtho2D(-1, 1, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_POLYGON); // 1
    glVertex2f(-0.2, 0.6 - 0.3);
    glVertex2f(-0.1, 0.6 - 0.3);
    glVertex2f(-0.1, 0.5 - 0.3);
    glVertex2f(-0.2, 0.5 - 0.3);
    glEnd();

    /* viewport doesn't change in this
     * application, but it's perfectly
     * valid to set a different
     * glViewport(...) here */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity(); // resets to identity matrix
    gluOrtho2D(0, w, h, 0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glColor3f(0, 1, 0);
    drawBitmapText("Usama Ishfaq", 200, 400, 0);

    glutSwapBuffers();
}
Update (due to a request in the comments):
Why is it wrong to set the viewport and projection parameters in the reshape handler? Well, you just experienced the reason yourself: they are not "one size fits all" state, and when rendering slightly more complex frames that go beyond a single drawn mesh, you're going to want to mix and match different viewports and projections throughout rendering. Here's an (incomplete) list of things that require different viewports and projections while rendering a single frame:
render-to-texture (FBO) – needs a viewport within the bounds of the texture, and usually also a different projection (important for shadow mapping, dynamic cubemaps and lots of other advanced, multipass rendering techniques)
minimaps / overview frames or similar in the corner (viewport covering just the corner)
text annotation overlays (different projection; usually a plain identity transform so to draw text rectangles directly in NDC space)
"magnifying glass" overlay
Since changing viewport and projection state happens multiple times in even slightly more complex OpenGL drawing, it makes
a) zero sense to set it in the reshape handler: whatever the handler sets will be in effect only at the beginning of the first frame, and from then on the frame drawing code itself has to reset it to whatever the reshape handler had set. So why bother doing it in the reshape handler at all?
b) placing viewport and projection setup code in the reshape handler a burden in the long run, because other parts of the program may grow dependent on it. And if that happens, once you realize your mistake and try to move the viewport and projection setup code to where it belongs, the parts of the program that relied on it being called from the reshape handler break, and you have to fix those, too.
All in all, there is no reason to place any drawing-related calls (and glViewport and projection setup are definitely drawing related) in the reshape handler. Of course, "one time" initialization is perfectly fine there, e.g. if you want to adjust the size of FBO render targets to match the window, or if you want to prepare an overlay image that later gets applied repeatedly.
You can make this much simpler. For what you're doing, there's no need to bother with setting transformations at all.
It looks like, for the box, you're trying to use coordinates in the range [-1.0, 1.0] for both coordinate directions. This corresponds to the OpenGL NDC (Normalized Device Coordinates) coordinate system, which is the coordinate space vertices are in after both the modelview and projection transformations are applied. If you keep these at their default identity matrix, you can specify coordinates directly in NDC space. In other words, to use coordinates in the range [-1.0, 1.0], do... nothing at all, and just keep everything at its default.
The reason the box rendering works for you when you call:
gluOrtho2D(0.0, 0.0, 0.0, 0.0);
is that this call will result in an error, as documented on the man page:
GL_INVALID_VALUE is generated if left = right, or bottom = top, or near = far.
and will therefore keep the defaults untouched, which is exactly what you need.
Now, for the text, it looks like you want to specify the position in units of pixels. The problem you're having is that glRasterPos*() runs the specified coordinates through the transformation pipeline, meaning that, with the default identity modelview and projection transformations, it expects the input coordinates to be in the range [-1.0, 1.0] just like the coordinates you pass to glVertex2f().
Fortunately, there's a very easy way to avoid that. There's a very similar glWindowPos*() call, with the only difference that the coordinates passed to it are in window coordinates, which are in units of pixels.
So in summary:
Remove all glMatrixMode() calls.
Remove all glLoadIdentity() calls.
Remove all gluOrtho2D() calls.
In drawBitmapText(), replace the glRasterPos3f() call by:
glWindowPos2f(x, y);
The only thing to watch out for is that the origin of window coordinates is in the bottom left corner. So if your text position is given relative to the top left corner, you'll need something like:
glWindowPos2f(x, windowHeight - y);
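With those changes, the whole helper shrinks to something like this (a sketch; it assumes a GLUT window so the height can be queried with glutGet):

void drawBitmapText(char *string, float x, float y)
{
    int windowHeight = glutGet(GLUT_WINDOW_HEIGHT);
    glWindowPos2f(x, windowHeight - y); // y is given from the top edge
    for (char *c = string; *c != '\0'; c++)
        glutBitmapCharacter(GLUT_BITMAP_TIMES_ROMAN_24, *c);
}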
To address some misleading information in another answer: It's perfectly fine to call glViewport() in the reshape() function, as long as you use the same viewport for all your rendering. In more complex applications, you will often need different viewports for different parts of the rendering (e.g. when you render to FBOs, or to only part of the window), so you will need to call glViewport() at the proper places during rendering. But for a simple example, where you do all your rendering to the entire window, there's nothing wrong with calling it in reshape().

Object without deformations - OpenGL

Does anybody know how to keep a triangle undistorted and always in the middle of the window, whatever the window's size?
I know I have to register a callback with a reshape function and then define it, but I'm not sure what goes inside the resize function:
void resize(int width, int height)
{
    glViewport(0, 0, width, height);
    // ...?
}
The only hint I have from main is: glutInitWindowSize(600, 600);
Since the GL calls use normalised vertex coordinates ranging from -1 to +1, it is possible to keep any object in the center of the screen by using the right coordinates, independent of the screen's pixel size.
However, the same independence also brings the behaviour that, depending on the screen aspect ratio (or window aspect ratio, as the case may be), the object's shape will change unless explicitly accounted for. See the discussion in How can i convert an oval to circle openGL ES2.0 Android.
Here's an important hint: don't use the resize callback to do anything with OpenGL.
I know I have to do one callback with reshape function and then define it, but I'm not sure what is going inside resize function:
Then you knew wrong.
It leads to a lot of confusion. OpenGL is a state-based drawing API, and like all state machines it should be reset into a well-known state before you use it. That includes projection and viewport. With that in mind, your problem becomes trivial:
void display()
{
    /* draw some stuff */
    glViewport(...);
    setup_projection();
    setup_modelview();
    draw_stuff();

    /* draw some other stuff with different projection and modelview */
    glViewport(...);
    setup_other_projection();
    setup_other_modelview();
    draw_other_stuff();

    /* ... */
    swapBuffers();
}
If you're using GLUT you can use glutGet(GLUT_WINDOW_WIDTH) and glutGet(GLUT_WINDOW_HEIGHT) to retrieve the window's size for the viewport calls.
So in your case you'd use a glViewport that covers your whole window and a projection that always maps a certain view space into that viewport. For example
void display()
{
    int const win_width  = glutGet(GLUT_WINDOW_WIDTH);
    int const win_height = glutGet(GLUT_WINDOW_HEIGHT);
    float const win_aspect = (float)win_width / (float)win_height;

    glViewport(0, 0, win_width, win_height);

    /* Using fixed function pipeline for brevity */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* map the vertical range -1…1 to the window height and
       a symmetric range -aspect … 0 … +aspect to the viewport */
    glOrtho(-win_aspect, win_aspect, -1, 1, -1, 1);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    draw_triangle();

    /* ... */

    glutSwapBuffers();
}

OpenGL: Viewport, clipping, matrix mode confusion

I've been studying computer graphics and I'm very confused about the role of the viewport and glOrtho, and about when to use glMatrixMode and GL_PROJECTION.
Here is some sample code I wrote that confuses me:
void init()
{
    glClearColor(1.0, 1.0, 1.0, 1.0); // background color of the viewport
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-200, 200, -200, 200, -50, 50);
    glMatrixMode(GL_MODELVIEW);
}

void wheel()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1, 0.2, 0.2);
    glLoadIdentity();
    glViewport(0, 0, 200, 200);
    glutSolidCube(100);
    glFlush();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitWindowSize(400, 400);
    glutInitWindowPosition(400, 400); // position from the top left corner
    glutCreateWindow("Car");
    init();
    glutDisplayFunc(wheel); // shape to draw
    glutMainLoop();
    return 0;
}
When I change the cube's size to 200, it disappears. Why? Is that because it's larger than the z clipping range?
When I remove glMatrixMode(GL_MODELVIEW), the cube disappears. Why?
If I don't flush at the end of the display function, the cube disappears as well. Why?
When I make the viewport smaller, the object gets smaller. Does that mean the object coordinates are relative to the viewport and not to world coordinates?
When you change the cube's size to 200, its faces extend beyond the near and far clipping planes, which you've set in your glOrtho call to -50 and 50 (a cube of size 200 spans -100 to +100 on each axis). Technically you'd then be viewing the inside of the cube, but the far side of the cube is also outside the far clipping plane, so you can't see its back face.
Removing the call that sets the matrix mode to GL_MODELVIEW means your glLoadIdentity call operates on the fixed-functionality projection matrix (I'm pretty sure), so the cube's coordinates are taken directly as normalized device coordinates, and it once again extends beyond all the clipping planes.
Finally, glViewport defines the size of the buffer you should be rendering to, and therefore usually matches your screen size. Making it smaller effectively makes your screen size smaller, but does not change the actual GLUT window size. In mathematical terms, it changes the way fragments are projected from normalized device coordinates into screen coordinates.
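To make the size-200 cube visible again, the orthographic volume's depth range has to enclose its ±100 extent; a sketch of an adjusted init (these values are one workable choice, not the only one):

void init()
{
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // glutSolidCube(200) spans -100..+100 on each axis,
    // so near/far must reach at least that deep:
    glOrtho(-200, 200, -200, 200, -150, 150);
    glMatrixMode(GL_MODELVIEW);
}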

How to tell the size of a font in pixels when rendered with OpenGL

I'm working on the editor for Bitfighter, where we use the default OpenGL stroked font. We generally render the text with a line width of 2, but this makes smaller fonts less readable. What I'd like to do is detect when the font size will fall below some threshold and drop the line width to 1. The problem is that, after all the transforms and such are applied, I don't know how to tell how tall (in pixels) a font of size <fontsize> will be rendered.
This is the actual inner rendering function:
if (---something--- < thresholdSizeInPixels)
    glLineWidth(1);

float scaleFactor = fontsize / 120;
glPushMatrix();
glTranslatef(x, y + (fix ? 0 : size), 0);
glRotatef(angle * radiansToDegreesConversion, 0, 0, 1);
glScalef(scaleFactor, -scaleFactor, 1);
for (S32 i = 0; string[i]; i++)
    OpenglUtils::drawCharacter(string[i]);
glPopMatrix();
Just before calling this, I want to check the height of the font, then drop the linewidth if necessary. What goes in the ---something--- spot?
Bitfighter is a pure old-school 2D game, so there are no fancy 3D transforms going on. All code is in C++.
My solution was to combine the first part of Christian Rau's solution with a fragment of the second. Basically, I can get the current scaling factor with this:
static float modelview[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview); // Fills modelview[]
float scalefact = modelview[0];
Then, I multiply scalefact by the fontsize in pixels, and multiply that by the ratio of windowHeight / canvasHeight to get the height in pixels that my text will be rendered.
That is...
textheight = scalefact * fontsize * windowHeight / canvasHeight
I also liked the idea of scaling the line thickness rather than stepping from 2 to 1 when a threshold is crossed. It all works very nicely now.
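That continuous scaling can be as small as this (a sketch; the clamping bounds correspond to the 1 and 2 pixel line widths mentioned in the question):

float lineWidth = textheight / thresholdSizeInPixels; // textheight computed as above
if (lineWidth < 1.0f) lineWidth = 1.0f; // never thinner than 1
if (lineWidth > 2.0f) lineWidth = 2.0f; // never thicker than the old default
glLineWidth(lineWidth);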
where we use the default OpenGL stroked font
OpenGL doesn't do fonts. There is no default OpenGL stroked font.
Maybe you are referring to GLUT and its glutStrokeCharacter function. Then please take note that GLUT is not part of OpenGL. It's an independent library, focused on providing a simplistic framework for small OpenGL demos and tutorials.
To answer your question: GLUT stroke fonts are defined in terms of vertices, so the usual transformations apply. Since usually all transformations are linear, you can simply transform the vector (0, base_height, 0) through the modelview and projection matrices, finally doing the perspective divide (gluProject does all this for you – GLU is not part of OpenGL either); the resulting vector is what you're looking for; take the vector's length for scaling the width.
This should be determinable rather easily. The font's size in pixels just depends on the modelview transformation (actually only the scaling part), the projection transformation (which is a simple orthographic projection, I suppose) and the viewport settings, and of course on the size of an individual character of the font in untransformed form (what goes into the glVertex calls).
So you just take the font's basic size (let's consider the height only and call it height) and first apply the modelview transformation (assuming the scaling shown in the code is the only one):
height *= scaleFactor;
Next we do the projection transformation:
height /= (top-bottom);
with top and bottom being the values you used when specifying the orthographic transformation (e.g. using glOrtho). And last but not least we do the viewport transformation:
height *= viewportHeight;
with viewportHeight being, you guessed it, the height of the viewport specified in the glViewport call. The resulting height should be the height of your font in pixels. You can use this to scale the line width smoothly (without an if); the line width parameter is a float anyway, so let OpenGL do the discretization.
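Collected into one helper, those three steps look like this (a sketch under the stated assumptions: a single modelview scale, a plain glOrtho projection, and a full-window viewport):

float fontHeightInPixels(float baseHeight, float scaleFactor,
                         float top, float bottom, float viewportHeight)
{
    float height = baseHeight;
    height *= scaleFactor;    // modelview scaling
    height /= (top - bottom); // projection: fraction of the visible vertical range
    height *= viewportHeight; // viewport: fraction -> pixels
    return height;
}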
If your transformation pipeline is more complicated, you could use a more general approach using the complete transformation matrices, perhaps with the help of gluProject to transform an object-space point to a screen-space point:
double x0, x1, y0, y1, z;
double modelview[16], projection[16];
int viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);
gluProject(0.0, 0.0, 0.0, modelview, projection, viewport, &x0, &y0, &z);
gluProject(fontWidth, fontHeight, 0.0, modelview, projection, viewport, &x1, &y1, &z);
x1 -= x0;
y1 -= y0;
fontScreenSize = sqrt(x1*x1 + y1*y1);
Here I took the diagonal of the character rather than only the height, to make the measure more robust against rotations, and used the origin as the reference value to cancel out translations.
You might also find the answers to this question interesting, which give some more insight into OpenGL's transformation pipeline.