How to draw a jagged, pixellated line in OpenGL

I am able to draw a line using OpenGL, which produces the following result:
However, this line looks smooth and out-of-place compared to the pixellated style of my game. I would prefer a result that looks more like this:
Can someone please give me some tips on how I can get OpenGL to draw a pixellated line like this?
Here is my renderLine() method (using immediate mode, I'm afraid):
public static void renderLine(float x1, float y1, float x2, float y2,
                              float thickness, float[] colour) {
    // Store model matrix to prevent contamination
    glPushMatrix();
    // Set colour and thickness
    glColor4f(colour[0], colour[1], colour[2], colour[3]);
    glLineWidth(thickness);
    // Draw line
    glBegin(GL_LINES);
    {
        glVertex2f(x1, y1);
        glVertex2f(x2, y2);
    }
    glEnd();
    // Restore previous state
    glColor4f(1, 1, 1, 1);
    glLineWidth(1.0f);
    glPopMatrix();
}

There's really nothing built-in in OpenGL to draw such lines.
You could draw a tight quad containing the pixels of the line and use a fragment shader that evaluates which pixels are to be left in and which are to be left out.
However, considering that you want the look of your whole game to be pixellated, the best solution is to render to a smaller texture and then blit it to the screen with x4 (or whatever factor) nearest neighbor interpolation. Then drawing such a line reduces to just a regular GL_LINES draw without GL_LINE_SMOOTHing.
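For reference, here is a minimal sketch of that approach using a framebuffer object (screenW, screenH and the x4 factor are placeholders; error and completeness checks are omitted):

// One-time setup: a small colour texture attached to an FBO.
GLuint lowResTex, lowResFBO;
glGenTextures(1, &lowResTex);
glBindTexture(GL_TEXTURE_2D, lowResTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, screenW / 4, screenH / 4, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &lowResFBO);
glBindFramebuffer(GL_FRAMEBUFFER, lowResFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, lowResTex, 0);

// Per frame: draw the scene (plain GL_LINES included) at low resolution...
glBindFramebuffer(GL_FRAMEBUFFER, lowResFBO);
glViewport(0, 0, screenW / 4, screenH / 4);
// ... render the scene here ...

// ...then blit it to the window, scaled up with nearest-neighbour filtering.
glBindFramebuffer(GL_READ_FRAMEBUFFER, lowResFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, screenW / 4, screenH / 4,
                  0, 0, screenW, screenH,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

The GL_NEAREST filter in glBlitFramebuffer is what preserves the hard pixel edges; GL_LINEAR would smooth them out again.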

Related

How to display shape and text on OpenGL screen simultaneously?

The code below works fine with no fatal error, but when I pass the arguments w and h to gluOrtho2D as gluOrtho2D(0, w, h, 0) in the reshape function I get the text on screen, whereas if I pass zeros as gluOrtho2D(0, 0, 0, 0) I get the box shape instead.
How can I get both of them (box and text) on screen simultaneously?
#include"glut.h"
void drawBitmapText(char *string, float x, float y, float z);
void reshape(int w, int h);
void display(void);
void drawBitmapText(char *string, float x, float y, float z)
{
char *c;
glRasterPos3f(x, y, z);//define position on the screen where to draw text.
for (c = string; *c != '\0'; c++)
{
glutBitmapCharacter(GLUT_BITMAP_TIMES_ROMAN_24, *c);
}
}
void reshape(int w, int h)
{
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();//Resets to identity Matrix.
gluOrtho2D(0, w, h, 0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
void display(void)
{
glBegin(GL_POLYGON);//1
glVertex2f(-0.2, 0.6 - 0.3);
glVertex2f(-0.1, 0.6 - 0.3);
glVertex2f(-0.1, 0.5 - 0.3);
glVertex2f(-0.2, 0.5 - 0.3);
glEnd();
glColor3f(0, 1, 0);
drawBitmapText("Usama Ishfaq", 200, 400, 0);//drawBitmapText("Usama Ishfaq", x(how much right), y(how much down), z);
glutSwapBuffers();
}
int main(int argc, char* argv[])
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
glutInitWindowSize(500, 500);
glutInitWindowPosition(100, 100);
glutCreateWindow("Usama OGL Window");
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutMainLoop();
return 0;
}
By not following bad tutorials, and by placing the glViewport call and the projection matrix setup at the only valid place: the display function. Setting the viewport and projection matrix in the reshape handler is an anti-pattern. Don't do it.
Do this
void display(void)
{
    int const w = glutGet(GLUT_WINDOW_WIDTH);
    int const h = glutGet(GLUT_WINDOW_HEIGHT);

    glViewport(0, 0, w, h);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity(); // resets to identity matrix
    gluOrtho2D(-1, 1, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_POLYGON); // 1
    glVertex2f(-0.2, 0.6 - 0.3);
    glVertex2f(-0.1, 0.6 - 0.3);
    glVertex2f(-0.1, 0.5 - 0.3);
    glVertex2f(-0.2, 0.5 - 0.3);
    glEnd();

    /* viewport doesn't change in this
     * application, but it's perfectly
     * valid to set a different
     * glViewport(...) here */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity(); // resets to identity matrix
    gluOrtho2D(0, w, h, 0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glColor3f(0, 1, 0);
    drawBitmapText("Usama Ishfaq", 200, 400, 0); // drawBitmapText("Usama Ishfaq", x (how much right), y (how much down), z);

    glutSwapBuffers();
}
Update (due to a request in the comments):
Why is it wrong to set the viewport and projection parameters in the reshape handler? Well, you just experienced the reason yourself: they are not "one size fits all" state, and as soon as you render slightly more complex frames that go beyond drawing a single mesh, you're going to want to mix and match different viewports and projections throughout rendering. Here's an (incomplete) list of things that require different viewports and projections while rendering a single frame:
render-to-texture (FBO) – needs a viewport within the bounds of the texture, and usually also a different projection (important for shadow mapping, dynamic cubemaps and lots of other advanced, multipass rendering techniques)
minimaps / overview frames or similar in the corner (viewport covering just the corner)
text annotation overlays (different projection; usually a plain identity transform, so that text rectangles can be drawn directly in NDC space)
"magnifying glass" overlay
Since changing viewport and projection state happens multiple times in even slightly more complex OpenGL drawing, it makes
a) zero sense to set them in the reshape handler: whatever the handler sets is in effect only at the start of the very first frame, and after that the frame drawing code itself has to reset it anyway. So why even bother doing it in the reshape handler at all?
b) placing viewport and projection setup code in the reshape handler a burden in the long run, because other parts of the program may come to depend on it. And if that happens, once you realize your mistake and try to move that setup code to where it belongs, the parts of the program that relied on it being done in the reshape handler break, and you have to fix those, too.
All in all, there is no reason to place any drawing-related calls (and glViewport and projection setup definitely are drawing related) in the reshape handler. Of course "one time" initialization is perfectly fine there, e.g. if you want to adjust the size of FBO render targets to match the window, or if you want to prepare an overlay image that later on gets applied repeatedly (see the sketch below).
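For instance, a hedged sketch of a reshape handler that keeps only that kind of work (fboColorTex stands for a window-sized render target assumed to have been created elsewhere during initialization):

void reshape(int w, int h)
{
    // Resize a window-sized FBO colour attachment; no viewport or
    // projection setup here, that stays in display().
    glBindTexture(GL_TEXTURE_2D, fboColorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glutPostRedisplay();
}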
You can make this much simpler. For what you're doing, there's no need to bother with setting transformations at all.
It looks like, for the box, you're trying to use coordinates in the range [-1.0, 1.0] for both coordinate directions. This corresponds to the OpenGL NDC (Normalized Device Coordinates) coordinate system, which is the coordinate space vertices are in after both the modelview and projection transformations are applied. If you keep both of these at their default identity matrices, you can specify coordinates directly in NDC space. In other words, to use coordinates in the range [-1.0, 1.0], do... nothing at all, and just keep everything at its default.
The reason the box rendering works for you when you call:
gluOrtho2D(0.0, 0.0, 0.0, 0.0);
is that this call will result in an error, as documented on the man page:
GL_INVALID_VALUE is generated if left = right, or bottom = top, or near = far.
and will therefore keep the defaults untouched, which is exactly what you need.
Now, for the text, it looks like you want to specify the position in units of pixels. The problem you're having is that glRasterPos*() runs the specified coordinates through the transformation pipeline, meaning that, with the default identity modelview and projection transformations, it expects the input coordinates to be in the range [-1.0, 1.0] just like the coordinates you pass to glVertex2f().
Fortunately, there's a very easy way to avoid that. There's a very similar glWindowPos*() call, the only difference being that the coordinates passed to it are in window coordinates, which are in units of pixels.
So in summary:
Remove all glMatrixMode() calls.
Remove all glLoadIdentity() calls.
Remove all gluOrtho2D() calls.
In drawBitmapText(), replace the glRasterPos3f() call by:
glWindowPos2f(x, y);
The only thing to watch out for is that the origin of window coordinates is in the bottom left corner. So if your text position is given relative to the top left corner, you'll need something like:
glWindowPos2f(x, windowHeight - y);
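Putting it together, drawBitmapText() might end up looking like this (a sketch; the glutGet() call is just one way of obtaining the window height):

void drawBitmapText(char *string, float x, float y)
{
    // Window coordinates have their origin in the bottom left corner, so
    // flip y to keep passing "pixels down from the top" as before.
    float windowHeight = (float)glutGet(GLUT_WINDOW_HEIGHT);
    glWindowPos2f(x, windowHeight - y);
    for (char *c = string; *c != '\0'; c++)
        glutBitmapCharacter(GLUT_BITMAP_TIMES_ROMAN_24, *c);
}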
To address some misleading information in another answer: It's perfectly fine to call glViewport() in the reshape() function, as long as you use the same viewport for all your rendering. In more complex applications, you will often need different viewports for different parts of the rendering (e.g. when you render to FBOs, or to only part of the window), so you will need to call glViewport() at the proper places during rendering. But for a simple example, where you do all your rendering to the entire window, there's nothing wrong with calling it in reshape().

opengl glLineWidth() doesn't change size of lines

I wrote this function, in which I set the line width before drawing a rectangle, but when I call it the line width doesn't change at all. How can I use glLineWidth correctly?
void drawRect(Rectangle &rect)
{
    double x1 = rect.min.x;
    double y1 = rect.min.y;
    double x2 = rect.max.x;
    double y2 = rect.max.y;

    glLineWidth(3.0f);
    glBegin(GL_LINE_LOOP);
    glVertex2d(x1, y1);
    glVertex2d(x2, y1);
    glVertex2d(x2, y2);
    glVertex2d(x1, y2);
    glEnd();
}
OpenGL implementations are not required to support rendering of wide lines.
You can query the range of supported line widths with:
GLfloat lineWidthRange[2] = {0.0f, 0.0f};
glGetFloatv(GL_ALIASED_LINE_WIDTH_RANGE, lineWidthRange);
// Maximum supported line width is in lineWidthRange[1].
The required minimum for both limits is 1.0, meaning that support for line widths larger than 1.0 is not required. Also, drawing wide lines is a deprecated feature, and will not be supported anymore if you move to a newer (core profile) version of OpenGL.
An alternative to drawing wide lines is to render thin polygons instead.
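A hedged sketch of that alternative, keeping the immediate-mode style of the question (drawThickLine is a hypothetical helper; note that thickness here is in the same units as the vertex coordinates, not in pixels like glLineWidth):

#include <math.h>

// Draw the segment (x1,y1)-(x2,y2) as a quad expanded by half the
// thickness along the segment's normal.
void drawThickLine(double x1, double y1, double x2, double y2, double thickness)
{
    double dx = x2 - x1, dy = y2 - y1;
    double len = sqrt(dx * dx + dy * dy);
    if (len == 0.0) return;
    double nx = -dy / len * thickness * 0.5;
    double ny =  dx / len * thickness * 0.5;
    glBegin(GL_QUADS);
    glVertex2d(x1 + nx, y1 + ny);
    glVertex2d(x2 + nx, y2 + ny);
    glVertex2d(x2 - nx, y2 - ny);
    glVertex2d(x1 - nx, y1 - ny);
    glEnd();
}

For the rectangle outline, calling this once per edge gives a consistent thick border regardless of the implementation's line width limits.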

OpenGL: why are spheres with the same X coordinate not in a straight line in the output?

I am working on a C++ project written with MFC templates;
using the OpenGL library I am drawing spheres at specific coordinates. I move to each coordinate with the glTranslatef function, but when I draw two spheres with the same X coordinate, it looks like there is a difference in their x.
For example, when I draw two spheres at (x, y, z) = (1, 1, 0) and (x, y, z) = (1, 2, 0), the output is this:
This view is from above:
This is my function for drawing the spheres:
void MYGLView::DrawSphere(double X_position, double Y_Position, double Z_Position,
                          GLdouble radius, int longitudeSubdiv, int latitudeSubdiv,
                          double Red, double Green, double Blue)
{
    gluQuadricDrawStyle(m_quadrObj, GLU_FILL);
    float shininess = 64.0f;

    glPushMatrix();
    glTranslatef(X_position, Y_Position, Z_Position);
    glColor3f(Red, Green, Blue);
    gluSphere(m_quadrObj, radius, longitudeSubdiv, latitudeSubdiv);
    //glTranslatef(-3, 0, 0);
    glFlush();
    glPopMatrix();
}
Can you tell me where I made the mistake?
Your camera is tilted slightly downwards, so there is a vanishing point for all vertical lines. If you want all vertical lines to be parallel on the screen, your camera must not tilt downwards. Alternatively, you can use a parallel projection, in which all lines that are parallel in the world remain parallel in the image.
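If you go the parallel projection route, here is a minimal sketch of the setup (the bounds below are placeholders you would adapt to your scene):

// Orthographic (parallel) projection: lines that are parallel in the
// world stay parallel on screen, regardless of camera tilt.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-10.0, 10.0, -10.0, 10.0, 0.1, 100.0); // left, right, bottom, top, near, far
glMatrixMode(GL_MODELVIEW);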

How to clip with circles in OpenGL

I'm wondering if it is possible to simulate the effect of looking through the keyhole in OpenGL.
I have my 3D scene drawn, but I want to make everything black except a central circle.
I tried this solution, but it's doing the complete opposite of what I want:
// here I draw my 3D scene
// Begin 2D orthographic mode
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
gluOrtho2D(0, viewport[2], viewport[3], 0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

// Here I draw a circle in the center of the screen
float radius = 50;
float x = viewport[2] / 2.0f; // centre of the screen
float y = viewport[3] / 2.0f;
glBegin(GL_TRIANGLE_FAN);
glVertex2f(x, y);
for (int n = 0; n <= 100; ++n)
{
    float const t = 2 * M_PI * (float)n / (float)100;
    glVertex2f(x + sin(t) * radius, y + cos(t) * radius);
}
glEnd();

// end orthographic 2D mode
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
What I get is a circle drawn in the center, but I would like to obtain its complement...
Like everything else in OpenGL, there are a few ways to do this. Here are two off the top of my head.
Use a circle texture: (recommended)
Draw the scene.
Switch to an orthographic projection, and draw a quad over the entire screen using a texture which has a white circle at the center. Use the appropriate blending function:
glEnable(GL_BLEND);
glBlendFunc(GL_ZERO, GL_SRC_COLOR);
/* Draw a full-screen quad with a white circle at the center */
Alternatively, you can use a pixel shader to generate the circular shape.
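For reference, a hedged sketch of the texture variant's full-screen draw (circleTex is a placeholder for a texture with a white circle on a black background, created elsewhere; width and height are the viewport dimensions, with the orthographic projection from your snippet already in place):

glEnable(GL_BLEND);
glBlendFunc(GL_ZERO, GL_SRC_COLOR); // keep scene colour only where the mask is white
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, circleTex);
glColor3f(1.0f, 1.0f, 1.0f);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f((float)width, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f((float)width, (float)height);
glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, (float)height);
glEnd();
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);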
Use a stencil test: (not recommended, but it may be easier if you don't have textures or shaders)
Clear the stencil buffer, and draw the circle into it.
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 1);
glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
/* draw circle */
Enable the stencil test for the remainder of the scene.
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
/* Draw the scene */
Footnote: I recommend avoiding use of immediate mode at any point in your code, and using arrays instead. This will improve the compatibility, maintainability, readability, and performance of your code: a win in all areas.
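For instance, the keyhole circle from the question could be built into a client-side vertex array and drawn without glBegin/glEnd (a sketch; x, y and radius are the same centre and radius as in the question, and a compatibility profile is assumed):

// Build the fan once: centre vertex followed by 101 points around the circle.
const int segments = 100;
GLfloat verts[2 * (segments + 2)];
verts[0] = x;
verts[1] = y;
for (int n = 0; n <= segments; ++n) {
    float t = 2.0f * (float)M_PI * (float)n / (float)segments;
    verts[2 * (n + 1)]     = x + sinf(t) * radius;
    verts[2 * (n + 1) + 1] = y + cosf(t) * radius;
}

// Draw with a vertex array instead of immediate mode.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glDrawArrays(GL_TRIANGLE_FAN, 0, segments + 2);
glDisableClientState(GL_VERTEX_ARRAY);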

Trying to zoom in on an arbitrary rect within a screen-aligned quad

I've got a screen-aligned quad, and I'd like to zoom into an arbitrary rectangle within that quad, but I'm not getting my math right.
I think I've got the translate worked out, just not the scaling. Basically, my code is the following:
//
// render once zoomed in
glPushMatrix();
glTranslatef(offX, offY, 0);
glScalef(?wtf?, ?wtf?, 1.0f);
RenderQuad();
glPopMatrix();
//
// render PIP display
glPushMatrix();
glTranslatef(0.7f, 0.7f, 0);
glScalef(0.175f, 0.175f, 1.0f);
RenderQuad();
glPopMatrix();
Anyone have any tips? The user selects a rect area, and then those values are passed to my rendering object as [x, y, w, h], where those values are percentages of the viewport's width and height.
Given that your values are passed as [x, y, w, h], I think you want to first translate in the negative direction to bring the upper left corner to 0,0, then scale by 1/w and 1/h to fill the screen. Like this:
//
// render once zoomed in
glPushMatrix();
glTranslatef(-x, -y, 0);
glScalef(1.0/w, 1.0/h, 1.0f);
RenderQuad();
glPopMatrix();
Does this work?
When I've needed to do this, I've always just changed the parameters I passed to glOrtho, glFrustum, gluPerspective, or whatever (whichever I was using).
From your comment it looks like you want to draw the same quad (RenderQuad()) both as the full image and in PIP mode.
Assuming you have widthPIP, heightPIP, and the startXY position of the PIP window, use widthPIP/totalWidth and heightPIP/totalHeight to scale the original quad, and re-render it at startXY.
Is this what you are looking for?