Why does polygon smoothing produce a broken line?

When using polygon smoothing in OpenGL, I get broken lines inside my polygons, as can be seen in the images.
My code looks like this:
glEnable(GL_POLYGON_SMOOTH)
glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST)
glPolygonMode(GL_FRONT, GL_FILL)
glPolygonMode(GL_BACK, GL_LINE)
glLineWidth(5)
glBegin(GL_QUADS)
glVertex3f(0.1, 0.1, 0)
glVertex3f(0.2, 0.1, 0)
glVertex3f(0.2, 0.2, 0)
glVertex3f(0.1, 0.2, 0)
glEnd()
glDisable(GL_POLYGON_SMOOTH)
Why does this happen and how can I solve this?

Drawing quads or polygons is usually emulated on modern hardware by splitting them into several triangles. When the borders of these triangles are smoothed, the diagonal that splits the quad into two triangles gets smoothed too, producing these artifacts.
The OpenGL API you are using (the fixed-function pipeline) has been deprecated for over a decade now and should not be used anymore. In particular, "special features" like polygon smoothing or drawing polygons may behave very differently on different hardware.
Modern OpenGL (3.3+ Core Profile) works completely differently and offers better ways to deal with antialiasing, such as MSAA or post-processing (FXAA, ...).
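For instance, a rough sketch of requesting MSAA at context creation; GLFW is used here purely as an illustration (it is not part of the original code), and any windowing library with multisample support works similarly:

#include <GLFW/glfw3.h>

/* Request a 4x multisampled default framebuffer before creating the window. */
glfwInit();
glfwWindowHint(GLFW_SAMPLES, 4);
GLFWwindow *window = glfwCreateWindow(800, 600, "MSAA demo", NULL, NULL);
glfwMakeContextCurrent(window);

/* Multisampling is usually on by default for a multisampled framebuffer,
   but it can be toggled explicitly: */
glEnable(GL_MULTISAMPLE);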

I think your blend mode is the problem. Polygon smoothing is designed to work with the special source blend factor GL_SRC_ALPHA_SATURATE, so try:
glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE)
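A rough sketch of how this blend mode is classically used; note that it assumes polygons are sorted and drawn front to back, and that depth writes are off while drawing them:

glEnable(GL_POLYGON_SMOOTH);
glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);  /* saturate blend accumulates edge coverage */
glDepthMask(GL_FALSE);                       /* depth writes would block the blended edges */
/* ... draw your polygons here, sorted front to back ... */
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
glDisable(GL_POLYGON_SMOOTH);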

Related

How to avoid z-fighting in OpenGL when drawing both the mesh surface and the polygon edges?

Consider the following image:
It shows both the mesh surface and the polygon edges. Even though the edges and the faces have the same z-coordinates where they are drawn and should therefore z-fight, in this image the polygon edges are always visible as long as they are not covered by a (non-adjacent) polygon, and there is no visible z-fighting. How can this be achieved in OpenGL?
I usually just enable some MSAA on the framebuffer I'm rendering to and then do:
glDepthFunc(GL_LESS);
drawShadedMesh();
glDepthFunc(GL_LEQUAL);
drawWireMesh();
That usually works well enough in most cases. Failing that (as mentioned in the comments), you can experiment with glPolygonOffset:
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0, 1.0); ///< may need adjustment for your use case
glDepthFunc(GL_LESS);
drawShadedMesh();
glDisable(GL_POLYGON_OFFSET_FILL);
glDepthFunc(GL_LEQUAL);
drawWireMesh();

Display a quad perpendicular to the screen

When drawing a quad, it vanishes when rotation brings it into a position perpendicular to the screen. Ideally what I'd like to see is (b), but I get nothing.
Is there something wrong with my code? (Warning: old OpenGL code follows.)
void draw_rect(double vector[4][3], int rgb[3], double transp)
{
GLint is_depth, is_blend, blend_src, blend_dst;
glGetIntegerv(GL_DEPTH_WRITEMASK, &is_depth);
glGetIntegerv(GL_BLEND, &is_blend);
glGetIntegerv(GL_BLEND_SRC, &blend_src);
glGetIntegerv(GL_BLEND_DST, &blend_dst);
glEnable(GL_BLEND);
glDepthMask(0);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// code to set the color ...
glBegin(GL_POLYGON);
glVertex3dv(vector[0]); // glVertex3dv: there is no glVertex3v; the array holds doubles
glVertex3dv(vector[1]);
glVertex3dv(vector[2]);
glVertex3dv(vector[3]);
glEnd();
if (!is_blend){ glDisable(GL_BLEND); }
glDepthMask(is_depth);
glBlendFunc(blend_src, blend_dst);
}
A quad (assuming its vertices are coplanar, as in this case) is by definition infinitely thin. It is correct behavior for it to be invisible when edge-on, perpendicular to the camera.
The "correct" solution is to make a box rather than a single quad.
See Drawing cube 3D using Opengl for an example using a cube. You'll need to tweak the vertex positions to make the cube smaller along one dimension (probably Z), but it'll give you the effect that you're looking for.
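As a rough sketch of that idea, you can also squash a cube with a scale instead of editing the vertices. Here drawCube() is a hypothetical helper that draws a unit cube centered at the origin:

glPushMatrix();
glScalef(1.0f, 1.0f, 0.01f);  /* squash the cube along Z into a thin slab */
glEnable(GL_NORMALIZE);       /* renormalize normals distorted by the non-uniform scale */
drawCube();                   /* hypothetical: draws a unit cube at the origin */
glPopMatrix();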
Also, stop using the fixed function stuff (glVertex, etc.). It's been deprecated for years. Shaders aren't that difficult, and examples are easy to find via your favorite search engine.
Alternatively, try making it a line of some definite width when the quad is perpendicular to the screen.

Generate texture from polygon (OpenGL)

I have a quad and I would like to use the gradient it produces as a texture for another polygon.
glPushMatrix();
glTranslatef(250,250,0);
glBegin(GL_POLYGON);
glColor3f(1.0f, 0.0f, 0.0f); // glColor3f takes floats in [0,1]; 255 merely clamps to 1.0
glVertex2f(10,0);
glVertex2f(100,0);
glVertex2f(100,100);
glVertex2f(50,50);
glVertex2f(0,100);
glEnd(); //End polygon coordinates
glPopMatrix();
glBegin(GL_QUADS); //Begin quadrilateral coordinates
glVertex2f(0,0);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex2f(150,0);
glVertex2f(150,150);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex2f(0,150);
glEnd(); //End quadrilateral coordinates
My goal is to make the 5 vertex polygon have the gradient of the quad (maybe a texture is not the best bet)
Thanks
Keep it simple!
It is very simple to create a gradient texture in code, e.g.:
// gradient white -> black
GLubyte gradient[2*3] = { 255,255,255, 0,0,0 };
// upload as a 2-texel 1D texture; GL interpolates between the two colors
glTexImage1D( GL_TEXTURE_1D, 0, GL_RGB, 2, 0, GL_RGB, GL_UNSIGNED_BYTE, gradient );
// setup texture parameters, draw your polygon etc.
The graphics hardware and/or the GL will create a smooth gradient from color one to color two for you (remember: that's one of the basic advantages of hardware-accelerated polygon drawing; you don't have to do the interpolation work in software).
Your real problem is: which texture coordinates do you use on the 5 vertex polygon. But that was not your question... ;-)
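For completeness, a rough sketch of the remaining setup; the texture coordinates below are purely illustrative, since choosing them for the 5-vertex polygon is exactly that open question:

glEnable(GL_TEXTURE_1D);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glBegin(GL_POLYGON);
glTexCoord1f(0.0f); glVertex2f(10, 0);    /* 0.0 samples the first texel (white) */
glTexCoord1f(0.0f); glVertex2f(100, 0);
glTexCoord1f(1.0f); glVertex2f(100, 100); /* 1.0 samples the second texel (black) */
glTexCoord1f(0.5f); glVertex2f(50, 50);
glTexCoord1f(1.0f); glVertex2f(0, 100);
glEnd();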
To do that, you'd have to render to a texture. While this is commonplace and supported by practically every graphics card, it's typically used for quite elaborate effects (e.g. mirrors).
If it's really just a gradient, I'd try to create the gradient in an app like Paint.NET. If you really need to create it at run-time, use render-to-texture. I'm afraid explaining pixel shaders in a few words is a bit tough; there are lots of tutorials on them on the net, however.
With a pixel shader, you gain a lot of control over the graphics card. This allows you to render your scene to a temporary buffer and then apply that buffer as a texture quite easily, plus a lot more functionality.
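Render-to-texture itself doesn't strictly require a pixel shader; the usual mechanism is a framebuffer object. A rough sketch (FBOs are core since OpenGL 3.0 and also available via GL_EXT_framebuffer_object; error checking and the 256x256 size are illustrative):

GLuint fbo, tex;

/* Create an empty texture to render into. */
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Attach the texture to a framebuffer object. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

/* Render the gradient polygon into the texture. */
glViewport(0, 0, 256, 256);
/* ... draw the gradient quad here ... */

/* Switch back to the default framebuffer and use tex on the 5-vertex polygon. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);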

OpenGL Alpha blending with wrong color

I am trying to create a simple ray tracer. I have a perspective view which shows the rays visibly for debugging purposes.
In my example screenshot below I have a single white sphere to be raytraced and a green sphere representing the eye.
Rays are drawn as lines with
glLineWidth(10.0f)
If a ray misses the sphere it is given color glColor4ub(100,100,100,100);
In my initialization code I have the following:
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_SRC_ALPHA);
You can see in the screen shot that for some reason, the rays passing between the perspective view point and the sphere are being color blended with the axis line behind the sphere, rather than with the sphere itself.
Here is a screenshot:
Can anyone explain what I am doing wrong here?
Thanks!!
Is it possible that you draw those rays before you draw the sphere?
If the Z-buffer is enabled, the sphere's fragments simply won't be rendered where those parts of the rays are closer. When you are drawing something semi-transparent (using blending), you should watch the order in which you draw things carefully.
In fact, I think you cannot use the Z-buffer in any sensible way together with the ray-tracing process; you'll have to track Z-order manually. While we are at it, OpenGL might not be the best API to visualize the ray-tracing process (it will possibly do so much slower than a pure software ray tracer).
You don't need the glAlphaFunc; disable it.
Light rays should be blended by adding to the buffer: glBlendFunc(GL_ONE, GL_ONE) (additive blending).
Turn off depth buffer writing (not testing) when rendering the rays: glDepthMask(GL_FALSE).
Render the rays last, as in the sketch below.
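Putting those points together, a rough sketch; drawScene() and drawRays() are hypothetical placeholders for your own drawing code:

/* Opaque geometry first, with normal depth testing and writing. */
glDepthMask(GL_TRUE);
drawScene();                      /* hypothetical: spheres, axes, ... */

/* Then the semi-transparent rays, blended additively on top. */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);      /* additive: rays brighten what's behind them */
glDepthMask(GL_FALSE);            /* still depth-tested, but don't write depth */
drawRays();                       /* hypothetical: the debug ray lines */
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);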
AlphaTest is only for discarding fragments, not for blending them. Check the spec.
By using it, you are telling OpenGL that you want it to throw away those pixels instead of drawing them, so you won't get any transparent blending. The most common blending function is
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); You can also check out the OpenGL Transparency FAQ.

Why does my colored cube not work with GL_BLEND?

My cube isn't rendering as expected when I use GL_BLEND.
glEnable(GL_CULL_FACE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
I'm also having a similar problem with drawing some semi-opaque vertices in front, which could well be related.
Related: Why do my semi-opaque vertices make background objects brighter in OpenGL?
Here's what it's supposed to look like:
Normal cube http://img408.imageshack.us/img408/2853/normalcube.png
And here's what it actually looks like:
Dark cube http://img7.imageshack.us/img7/7133/darkcube.png
Please see the code used to create the colored cube, and the code used to actually draw the cube.
The cube is being drawn like so:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix();
glLoadIdentity();
// ... do some translation, rotation, etc ...
drawCube();
glPopMatrix();
// ... swap the buffers ...
You could try disabling all lighting before drawing the cube:
glDisable(GL_LIGHTING);
It looks like you have lighting enabled in the second one; try calling glShadeModel(GL_FLAT) before drawing.
This has me stumped. What it looks like is that some vertices have non-opaque alpha values. However, the code you posted has 1.0 for every alpha. So, to debug further, did you try changing your clear color to something non-black, say green?
From the code, I doubt lighting is turned on, since no normals were specified.
One last comment, off-topic... You should really not use glBegin/glEnd (2 function calls per vertex plus 2 per primitive is really not a good use of the recent developments in OpenGL). Try glDrawElements with GL_QUADS, or even better, GL_TRIANGLES; a sketch follows below. You already have the data nicely laid out for that.
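A rough sketch of that approach using client-side vertex arrays; the vertex, color, and index data here are illustrative, not the poster's actual cube data:

/* 8 cube corners and one color per corner (illustrative data). */
static const GLfloat verts[8][3] = {
    {-1,-1,-1}, {1,-1,-1}, {1,1,-1}, {-1,1,-1},
    {-1,-1, 1}, {1,-1, 1}, {1,1, 1}, {-1,1, 1},
};
static const GLfloat colors[8][4] = {
    {1,0,0,1}, {0,1,0,1}, {0,0,1,1}, {1,1,0,1},
    {1,0,1,1}, {0,1,1,1}, {1,1,1,1}, {0,0,0,1},
};
/* Two CCW triangles per face, 6 faces = 36 indices (outward-facing winding). */
static const GLubyte indices[36] = {
    4,5,6, 4,6,7,  0,3,2, 0,2,1,  0,4,7, 0,7,3,
    1,2,6, 1,6,5,  0,1,5, 0,5,4,  3,7,6, 3,6,2,
};

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, indices);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);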