glLineWidth() doesn't change the size of lines in OpenGL

I wrote this function, where I set the line width before drawing a rectangle, but when calling it, the line width doesn't change at all. How can I use glLineWidth() correctly?
void drawRect(Rectangle &rect)
{
    double x1 = rect.min.x;
    double y1 = rect.min.y;
    double x2 = rect.max.x;
    double y2 = rect.max.y;

    glLineWidth(3.0f);
    glBegin(GL_LINE_LOOP);
    glVertex2d(x1, y1);
    glVertex2d(x2, y1);
    glVertex2d(x2, y2);
    glVertex2d(x1, y2);
    glEnd();
}

OpenGL implementations are not required to support rendering of wide lines.
You can query the range of supported line widths with:
GLfloat lineWidthRange[2] = {0.0f, 0.0f};
glGetFloatv(GL_ALIASED_LINE_WIDTH_RANGE, lineWidthRange);
// Maximum supported line width is in lineWidthRange[1].
The required minimum for both limits is 1.0, meaning that support for line widths larger than 1.0 is not required. Also, drawing wide lines is a deprecated feature, and will not be supported anymore if you move to a newer (core profile) version of OpenGL.
An alternative to drawing wide lines is to render thin polygons instead.
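For illustration, here is a minimal immediate-mode sketch of that idea (the helper name and parameters are hypothetical, not from the question's code): draw each segment as a quad, offsetting the endpoints along the segment's 2D normal by half the desired width.

#include <cmath>

// Sketch: emulate a wide line with a quad. Note that 'width' here is in
// world units; a width in pixels would additionally depend on the current
// projection and viewport.
void drawThickLine(double x1, double y1, double x2, double y2, double width)
{
    double dx = x2 - x1, dy = y2 - y1;
    double len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0)
        return;
    // Unit normal of the segment, scaled to half the desired width.
    double nx = -dy / len * 0.5 * width;
    double ny =  dx / len * 0.5 * width;
    glBegin(GL_QUADS);
    glVertex2d(x1 + nx, y1 + ny);
    glVertex2d(x2 + nx, y2 + ny);
    glVertex2d(x2 - nx, y2 - ny);
    glVertex2d(x1 - nx, y1 - ny);
    glEnd();
}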


How to draw jagged pixellated line in OpenGL

I am able to draw a line using OpenGL, but it comes out smooth, which looks out of place next to the pixellated style of my game. I would prefer a jagged, stair-stepped result that matches that style.
Can someone please give me some tips on how I can get OpenGL to draw a pixellated line like that?
Here is my renderLine() method (using immediate mode, I'm afraid):
public static void renderLine(float x1, float y1, float x2, float y2,
                              float thickness, float[] colour) {
    // Store model matrix to prevent contamination
    glPushMatrix();
    // Set colour and thickness
    glColor4f(colour[0], colour[1], colour[2], colour[3]);
    glLineWidth(thickness);
    // Draw line
    glBegin(GL_LINES);
    {
        glVertex2f(x1, y1);
        glVertex2f(x2, y2);
    }
    glEnd();
    // Restore previous state
    glColor4f(1, 1, 1, 1);
    glLineWidth(1.0f);
    glPopMatrix();
}
There's really nothing built into OpenGL to draw such lines.
You could draw a tight quad containing the pixels of the line and use a fragment shader that evaluates which pixels are to be left in and which are to be left out.
However, considering that you want the look of your whole game to be pixellated, the best solution is to render to a smaller texture and then blit it to the screen with 4x (or whatever factor) nearest-neighbour interpolation. Drawing such a line then reduces to a regular GL_LINES draw without GL_LINE_SMOOTH.
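As a rough C++ sketch of that approach (the question's code is Java/LWJGL, but the GL calls are the same): render into a low-resolution FBO texture, then blit it to the window with nearest-neighbour filtering. This assumes an OpenGL 3.0-level context for framebuffer objects; all names and the scale factor are illustrative.

// One-time setup: a framebuffer with a low-resolution colour texture.
GLuint fbo, tex;
const int lowW = windowW / 4, lowH = windowH / 4; // 4x pixelation factor

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, lowW, lowH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

// Each frame: draw the scene (plain GL_LINES included) at low resolution...
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, lowW, lowH);
// ... scene rendering goes here ...

// ...then scale it up onto the window with nearest-neighbour filtering.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glViewport(0, 0, windowW, windowH);
glBlitFramebuffer(0, 0, lowW, lowH, 0, 0, windowW, windowH,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);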

How to display shape and text on OpenGL screen simultaneously?

The below code works fine with no fatal error, but when I use the arguments w and h in gluOrtho2D, as gluOrtho2D(0, w, h, 0), in the reshape function, I get the text on screen; whereas if I pass zeros, as gluOrtho2D(0, 0, 0, 0), I get the shape of the box.
How can I get both of them (box and text) on screen simultaneously?
#include"glut.h"
void drawBitmapText(char *string, float x, float y, float z);
void reshape(int w, int h);
void display(void);
void drawBitmapText(char *string, float x, float y, float z)
{
char *c;
glRasterPos3f(x, y, z);//define position on the screen where to draw text.
for (c = string; *c != '\0'; c++)
{
glutBitmapCharacter(GLUT_BITMAP_TIMES_ROMAN_24, *c);
}
}
void reshape(int w, int h)
{
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();//Resets to identity Matrix.
gluOrtho2D(0, w, h, 0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
void display(void)
{
glBegin(GL_POLYGON);//1
glVertex2f(-0.2, 0.6 - 0.3);
glVertex2f(-0.1, 0.6 - 0.3);
glVertex2f(-0.1, 0.5 - 0.3);
glVertex2f(-0.2, 0.5 - 0.3);
glEnd();
glColor3f(0, 1, 0);
drawBitmapText("Usama Ishfaq", 200, 400, 0);//drawBitmapText("Usama Ishfaq", x(how much right), y(how much down), z);
glutSwapBuffers();
}
int main(int argc, char* argv[])
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
glutInitWindowSize(500, 500);
glutInitWindowPosition(100, 100);
glutCreateWindow("Usama OGL Window");
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutMainLoop();
return 0;
}
By not following bad tutorials, and by placing the glViewport call and the projection matrix setup in the only valid place: the display function. Setting the viewport and projection matrix in the reshape handler is an anti-pattern. Don't do it.
Do this
void display(void)
{
    int const w = glutGet(GLUT_WINDOW_WIDTH);
    int const h = glutGet(GLUT_WINDOW_HEIGHT);

    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity(); // resets to identity matrix
    gluOrtho2D(-1, 1, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_POLYGON); // 1
    glVertex2f(-0.2, 0.6 - 0.3);
    glVertex2f(-0.1, 0.6 - 0.3);
    glVertex2f(-0.1, 0.5 - 0.3);
    glVertex2f(-0.2, 0.5 - 0.3);
    glEnd();

    /* The viewport doesn't change in this application, but it's
     * perfectly valid to set a different glViewport(...) here. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity(); // resets to identity matrix
    gluOrtho2D(0, w, h, 0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glColor3f(0, 1, 0);
    drawBitmapText("Usama Ishfaq", 200, 400, 0); // x = how far right, y = how far down
    glutSwapBuffers();
}
Update (due to a request in the comments):
Why is it wrong to set the viewport and projection parameters in the reshape handler? Well, you just experienced the reason yourself: they are not "one size fits all" state, and as soon as you render slightly more complex frames that go beyond drawing a single mesh, you're going to want to mix and match different viewports and projections throughout rendering. Here's an (incomplete) list of things that require different viewports and projections while rendering a single frame:
render-to-texture (FBO) – needs a viewport within the bounds of the texture, and usually also a different projection (important for shadow mapping, dynamic cubemaps and lots of other advanced, multipass rendering techniques)
minimaps / overview frames or similar in a corner (viewport covering just that corner)
text annotation overlays (different projection; usually a plain identity transform so text rectangles can be drawn directly in NDC space)
"magnifying glass" overlay
Since changing the viewport and projection state happens multiple times in even slightly more complex OpenGL drawing, it makes
a) zero sense to set them in the reshape handler: whatever the handler sets takes effect only at the beginning of the first frame drawn afterwards, and from then on the frame drawing code itself has to reset that state anyway. So why bother doing it in the reshape handler at all?
b) placing viewport and projection setup code in the reshape handler a burden in the long run, because other parts of the program may come to depend on it. And once you realize your mistake and try to move that setup code to where it belongs, the parts of the program that relied on it being set from the reshape handler break, and you have to fix those, too.
All in all, there are no reasons to place any drawing-related calls (and glViewport and projection setup definitely are drawing-related) in the reshape handler. Of course, "one time" initialization is perfectly fine there, e.g. adjusting the size of FBO render targets to match the window, or preparing an overlay image that later gets applied repeatedly.
You can make this much simpler. For what you're doing, there's no need to bother with setting transformations at all.
It looks like, for the box, you're trying to use coordinates in the range [-1.0, 1.0] for both coordinate directions. This corresponds to the OpenGL NDC (Normalized Device Coordinates) coordinate system, which is the coordinate space vertices are in after both the modelview and projection transformations are applied. If you keep these at their default identity matrix, you can specify coordinates directly in NDC space. In other words, to use coordinates in the range [-1.0, 1.0], do... nothing at all, and just keep everything at its default.
The reason the box rendering works for you when you call:
gluOrtho2D(0.0, 0.0, 0.0, 0.0);
is that this call will result in an error, as documented on the man page:
GL_INVALID_VALUE is generated if left = right, or bottom = top, or near = far.
and will therefore keep the defaults untouched, which is exactly what you need.
Now, for the text, it looks like you want to specify the position in units of pixels. The problem you're having is that glRasterPos*() runs the specified coordinates through the transformation pipeline, meaning that, with the default identity modelview and projection transformations, it expects the input coordinates to be in the range [-1.0, 1.0] just like the coordinates you pass to glVertex2f().
Fortunately, there's a very easy way to avoid that. There's a very similar glWindowPos*() call, with the only difference that the coordinates passed to it are in window coordinates, which are in units of pixels.
So in summary:
Remove all glMatrixMode() calls.
Remove all glLoadIdentity() calls.
Remove all gluOrtho2D() calls.
In drawBitmapText(), replace the glRasterPos3f() call by:
glWindowPos2f(x, y);
The only thing to watch out for is that the origin of window coordinates is in the bottom left corner. So if your text position is given relative to the top left corner, you'll need something like:
glWindowPos2f(x, windowHeight - y);
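Putting those steps together, a minimal sketch of the simplified drawBitmapText() might look like this (assuming y is given from the top-left corner, with the window height queried via glutGet):

void drawBitmapText(const char *string, float x, float y)
{
    // glWindowPos2f takes pixel coordinates directly (origin bottom-left),
    // so no projection setup is needed at all.
    float windowHeight = (float)glutGet(GLUT_WINDOW_HEIGHT);
    glWindowPos2f(x, windowHeight - y); // y measured from the top
    for (const char *c = string; *c != '\0'; c++)
        glutBitmapCharacter(GLUT_BITMAP_TIMES_ROMAN_24, *c);
}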
To address some misleading information in another answer: It's perfectly fine to call glViewport() in the reshape() function, as long as you use the same viewport for all your rendering. In more complex applications, you will often need different viewports for different parts of the rendering (e.g. when you render to FBOs, or to only part of the window), so you will need to call glViewport() at the proper places during rendering. But for a simple example, where you do all your rendering to the entire window, there's nothing wrong with calling it in reshape().

OpenGL circle radius issue when drawing square

I have a function that draws a circle.
glBegin(GL_LINE_LOOP);
for (int i = 0; i < 20; i++)
{
    float theta = 2.0f * 3.1415926f * float(i) / float(20); // get the current angle
    float rad_x = ratio * (radius * cosf(theta)); // calculate the x component
    float rad_y = radius * sinf(theta);           // calculate the y component
    glVertex2f(x + rad_x, y + rad_y);             // output vertex
}
glEnd();
This works dandy. I save the x, y and radius values in my object.
However, when I try to draw a square with the following function call:
newSquare(id, red, green, blue, x, (x + radius), y, (y + radius));
The square comes out nearly twice as wide as it is tall (its width looks more like the circle's diameter). The following code is how I create my square box. As you can see, it starts at the center of the circle, as it should, and should stretch out to the edge of the circle.
glBegin(GL_QUADS);
glVertex2f(x2, y2);
glVertex2f(x2, y1);
glVertex2f(x1, y1);
glVertex2f(x1, y2);
glEnd();
I can't seem to understand why this is!
If you're correcting the x-position for one object, you have to do it for all others as well.
However, if you continue this, you'll get into trouble very soon. In your case, only the width of objects is corrected but not their positions. You can solve all your problems by setting an orthographic projection matrix and you won't ever need to correct positions again. E.g. like so:
glMatrixMode(GL_PROJECTION); //switch to projection matrix
glOrtho(-ratio, ratio, -1, 1, -1, 1);
glMatrixMode(GL_MODELVIEW); //switch back to model view
where
ratio = window width / window height
This constructs a coordinate system where the top edge has y=1, the bottom edge y=-1 and the left and right sides have x=-ratio and x=ratio, respectively.
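With such a projection in place, the per-vertex correction disappears; for illustration, the question's circle loop reduces to this sketch (assuming the glOrtho call above has been issued):

glBegin(GL_LINE_LOOP);
for (int i = 0; i < 20; i++)
{
    float theta = 2.0f * 3.1415926f * float(i) / float(20);
    // No 'ratio *' factor: the projection now handles the aspect ratio.
    glVertex2f(x + radius * cosf(theta), y + radius * sinf(theta));
}
glEnd();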

OpenGL circle drawing getting an ellipse

I'm trying to draw a circle in OpenGL, but I can't seem to get the right coordinates, so I always get an ellipse no matter what method I use. The current code is as follows:
void Ball::Render() {
    float center_position_x = _body->GetWorldCenter().x;
    float center_position_y = _body->GetWorldCenter().y;
    float radius = static_cast<b2CircleShape*>(_body->GetFixtureList()->GetShape())->m_radius;

    glBegin(GL_LINE_STRIP);
    for (float angle = 0; angle <= 360; ++angle) {
        float angleInRadians = glm::radians(angle);
        float x = glm::cos(angleInRadians);
        float y = glm::sin(angleInRadians);
        glVertex3f(x, y, 0);
    }
    glEnd();
}
(I know that code should draw the circle at the origin, but even then it's not a perfect circle; if I understand correctly, it should draw a circle at the origin with radius = 1.)
The other methods I used were:
http://slabode.exofire.net/circle_draw.shtml
http://www.opengl.org/discussion_boards/showthread.php/167955-drawing-a-smooth-circle
This is my OpenGL window setup code (it's a stub program, so I'm starting with hardcoded values):
gluOrtho2D(-10, 15, -10, 15);
Use gluOrtho2D(-WINDOW_WIDTH, WINDOW_WIDTH, -WINDOW_HEIGHT, WINDOW_HEIGHT);
By the way, you are using the fixed-function pipeline in 2013. With a vertex shader this would be much easier to understand.
The aspect ratio of your 2D projection matrix defined by gluOrtho2D must be the same as that of the viewport/window you are rendering to, otherwise you'll notice distortions in the figures you render. You can use the gluOrtho2D statement above; another way of writing it is:
float ar = (float)WindowWidth / WindowHeight;
gluOrtho2D(-1 * ar, ar, -1, 1);
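As a sketch of how the question's Render() might look with this fix (also feeding the body's centre and radius into the vertices, which the original loop ignored; WindowWidth and WindowHeight are assumed to be available to the caller):

void Ball::Render() {
    float cx = _body->GetWorldCenter().x;
    float cy = _body->GetWorldCenter().y;
    float r  = static_cast<b2CircleShape*>(_body->GetFixtureList()->GetShape())->m_radius;

    // Aspect-corrected projection: one world unit covers the same number
    // of pixels along both axes, so the circle stays round.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    float ar = (float)WindowWidth / WindowHeight;
    gluOrtho2D(-1 * ar, ar, -1, 1);
    glMatrixMode(GL_MODELVIEW);

    glBegin(GL_LINE_LOOP); // GL_LINE_LOOP closes the circle automatically
    for (float angle = 0; angle < 360; ++angle) {
        float a = glm::radians(angle);
        glVertex2f(cx + r * glm::cos(a), cy + r * glm::sin(a));
    }
    glEnd();
}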

How to tell the size of font in pixels when rendered with openGL

I'm working on the editor for Bitfighter, where we use the default OpenGL stroked font. We generally render the text with a linewidth of 2, but this makes smaller fonts less readable. What I'd like to do is detect when the fontsize will fall below some threshold, and drop the linewidth to 1. The problem is, after all the transforms and such are applied, I don't know how to tell how tall (in pixels) a font of size <fontsize> will be rendered.
This is the actual inner rendering function:
if (---something--- < thresholdSizeInPixels)
    glLineWidth(1);

float scaleFactor = fontsize / 120;
glPushMatrix();
glTranslatef(x, y + (fix ? 0 : size), 0);
glRotatef(angle * radiansToDegreesConversion, 0, 0, 1);
glScalef(scaleFactor, -scaleFactor, 1);
for (S32 i = 0; string[i]; i++)
    OpenglUtils::drawCharacter(string[i]);
glPopMatrix();
Just before calling this, I want to check the height of the font, then drop the linewidth if necessary. What goes in the ---something--- spot?
Bitfighter is a pure old-school 2D game, so there are no fancy 3D transforms going on. All code is in C++.
My solution was to combine the first part of Christian Rau's solution with a fragment of the second. Basically, I can get the current scaling factor with this:
static float modelview[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview); // Fills modelview[]
float scalefact = modelview[0];
Then I multiply scalefact by the fontsize in pixels, and multiply that by the ratio windowHeight / canvasHeight, to get the height in pixels at which my text will be rendered.
That is...
textheight = scalefact * fontsize * windowHeight / canvasHeight
I also liked the idea of scaling the line thickness rather than stepping from 2 to 1 when a threshold is crossed. It all works very nicely now.
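A minimal sketch of that check, placed just before the rendering function (windowHeight, canvasHeight, fontsize, and thresholdSizeInPixels are assumed to be available in the surrounding code):

static float modelview[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview); // fills modelview[]
float scalefact = modelview[0]; // x-axis scale; assumes no rotation

// On-screen height in pixels of text rendered at 'fontsize'.
float textheight = scalefact * fontsize * windowHeight / canvasHeight;

// Either step the width at the threshold...
glLineWidth(textheight < thresholdSizeInPixels ? 1.0f : 2.0f);
// ...or scale it smoothly, as suggested above:
// glLineWidth(std::max(1.0f, 2.0f * textheight / thresholdSizeInPixels));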
where we use the default OpenGL stroked font
OpenGL doesn't do fonts. There is no default OpenGL stroked font.
Maybe you are referring to GLUT and its glutStrokeCharacter function. Then please take note that GLUT is not part of OpenGL. It's an independent library, focused on providing a simplistic framework for small OpenGL demos and tutorials.
To answer your question: GLUT stroke fonts are defined in terms of vertices, so the usual transformations apply. Since usually all transformations are linear, you can simply transform the vector (0, base_height, 0) through the modelview and projection matrices, finally doing the perspective divide (gluProject does all this for you – GLU is not part of OpenGL either); the resulting vector is what you're looking for. Take the vector's length for scaling the width.
This should be determinable rather easily. The font's size in pixels just depends on the modelview transformation (actually only the scaling part), the projection transformation (which is a simple orthographic projection, I suppose) and the viewport settings, and of course on the size of an individual character of the font in untransformed form (what goes into the glVertex calls).
So you just take the font's basic size (lets consider the height only and call it height) and first do the modelview transformation (assuming the scaling shown in the code is the only one):
height *= scaleFactor;
Next we do the projection transformation:
height /= (top-bottom);
with top and bottom being the values you used when specifying the orthographic transformation (e.g. using glOrtho). And last but not least we do the viewport transformation:
height *= viewportHeight;
with viewportHeight being, you guessed it, the height of the viewport specified in the glViewport call. The resulting height should be the height of your font in pixels. You can use this to somehow scale the line width (without an if), as the line width parameter is in floats anyway, let OpenGL do the discretization.
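Rolled into one helper under the stated assumptions (a pure scaling modelview, a plain glOrtho projection), this might look like the following sketch; all names are illustrative:

// On-screen height in pixels of a character whose untransformed height is
// 'baseHeight'. 'top'/'bottom' are the glOrtho parameters, 'viewportHeight'
// the height passed to glViewport.
float fontPixelHeight(float baseHeight, float scaleFactor,
                      float top, float bottom, float viewportHeight)
{
    float height = baseHeight * scaleFactor; // modelview scaling
    height /= (top - bottom);                // projection to normalized units
    return height * viewportHeight;          // viewport transform to pixels
}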
If your transformation pipeline is more complicated, you could use a more general approach using the complete transformation matrices, perhaps with the help of gluProject to transform an object-space point to a screen-space point:
double x0, x1, y0, y1, z;
double modelview[16], projection[16];
int viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);
gluProject(0.0, 0.0, 0.0, modelview, projection, viewport, &x0, &y0, &z);
gluProject(fontWidth, fontHeight, 0.0, modelview, projection, viewport, &x1, &y1, &z);
x1 -= x0;
y1 -= y0;
fontScreenSize = sqrt(x1*x1 + y1*y1);
Here I took the character's diagonal and not only its height, to be less sensitive to rotations, and used the origin as the reference value to cancel out translations.
You might also find the answers to this question interesting, which give some more insight into OpenGL's transformation pipeline.