I'm drawing some lines in OpenGL (from C) using code like this:
glLineStipple(6, 0xEEEE);
glEnable(GL_LINE_STIPPLE);
glBegin(GL_LINE_STRIP);
glVertex2f(x, y);
...
It works great while everything is still. However, as soon as I zoom in or out, the line starts shimmering: the locations of the dashes move around along the line. It looks very sloppy.
Is there some way to anchor the dash pattern in model space? I think my issue is that glLineStipple() counts the number of pixels drawn, but I'd like it to use the length in model space instead.
Yes. Texture the line instead of using stipple, and alpha-blend it. I'd prefer mirrored wrapping mode for the texture, but repeat would work just as well depending on the pattern. Use the alpha test to remove any coverage artifacts. Put texture coordinates on the line just as you would on any polygon.
No shader required.
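For illustration, here is a minimal fixed-function sketch of that idea, assuming a strip stored in points[0..numPoints-1] and a dash repeat length dashPeriod in model units (all placeholder names). A 1D alpha texture holds the dash pattern, and the texture coordinate is the accumulated model-space length along the strip, so the dashes stay anchored to the geometry while zooming:
#include <math.h>  /* for hypotf */

/* 16-texel alpha pattern approximating 0xEEEE (three on, one off) */
GLubyte pattern[16] = { 255,255,255,0, 255,255,255,0,
                        255,255,255,0, 255,255,255,0 };
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_1D, tex);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage1D(GL_TEXTURE_1D, 0, GL_ALPHA, 16, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, pattern);

glEnable(GL_TEXTURE_1D);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);      /* discard the gaps entirely */

float s = 0.0f;                     /* model-space length walked so far */
glBegin(GL_LINE_STRIP);
for (int i = 0; i < numPoints; ++i) {
    if (i > 0)
        s += hypotf(points[i].x - points[i-1].x,
                    points[i].y - points[i-1].y);
    glTexCoord1f(s / dashPeriod);   /* one pattern repeat per dashPeriod units */
    glVertex2f(points[i].x, points[i].y);
}
glEnd();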
I am using GL_LINE_STRIP and glLineWidth to draw lines.
However, this leads to gaps between the individual straight segments of the strip.
I had mitigated the problem by using Catmull-Rom splines, so the segments were smooth enough that the gaps were no longer noticeable:
But now I noticed that the gaps differ depending on the OpenGL implementation. Mesa introduces larger gaps than my graphics card; notice the gaps in the upper part, and how the lower part with much smaller segments is noticeably darker due to more gaps:
Please note that images 1 and 2 come from the same render code with opacity 255 in both cases; only the opengl32.dll used differs.
I then additionally drew every joint as a point:
glBegin(GL_LINE_STRIP);
for (auto p : interpolatedPoints) {
    glVertex2f(p.x, p.y);
}
glEnd();

// fill the joints so the gaps between segments disappear
glBegin(GL_POINTS);
for (auto p : interpolatedPoints) {
    glVertex2f(p.x, p.y);
}
glEnd();
This works for opacity 255, but not if I want to reduce the object's transparency. What happens then is that the transparent points overlay the transparent line strip, increasing the opacity, especially in areas with very short segments:
Solution 1: Polyline quadstrip
Ditching GL_LINE_STRIP altogether and triangulating the line strip ourselves seems to be the solution here, but it looks like a larger rewrite to me: either I need a new shader, or I need to calculate the triangles (a sketch of the latter follows below).
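For what it's worth, the triangles can be calculated on the CPU with no new shader. Here is a minimal sketch assuming 2D points and a width in model units (Point and drawThickStrip are illustrative names): each strip point is extruded along the unit normal of the previous-to-next direction, producing one connected triangle strip. For gentle curves like interpolated splines, the strip covers each area only once, so semi-transparent drawing no longer double-blends at the joints:
#include <math.h>  /* for sqrtf */

typedef struct { float x, y; } Point;

void drawThickStrip(const Point *pts, int n, float width)
{
    float half = width * 0.5f;
    glBegin(GL_TRIANGLE_STRIP);
    for (int i = 0; i < n; ++i) {
        /* direction from the previous to the next point, clamped at the ends */
        const Point *a = &pts[i > 0 ? i - 1 : 0];
        const Point *b = &pts[i < n - 1 ? i + 1 : n - 1];
        float dx = b->x - a->x, dy = b->y - a->y;
        float len = sqrtf(dx * dx + dy * dy);
        if (len == 0.0f) continue;           /* skip degenerate points */
        float nx = -dy / len, ny = dx / len; /* unit normal of that direction */
        glVertex2f(pts[i].x + nx * half, pts[i].y + ny * half);
        glVertex2f(pts[i].x - nx * half, pts[i].y - ny * half);
    }
    glEnd();
}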
Solution 2: Blending
Wanting to avoid the rewrite, I was wondering: can blending solve the issue? Currently I use
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Is there a blending configuration that would prevent the points from adding to the alpha of the lines? I tried some other constants here, but with no success. Please note also that the black background in the screenshots may not be black at all, but may contain other objects and textures which should be "correctly" overlaid by the semi-transparent line.
As a potential easy fix, you could try setting glHint(GL_LINE_SMOOTH_HINT, GL_NICEST) and see if that helps.
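In case it's useful: line smoothing is only a hint, and it also needs blending enabled before it has any visible effect, something like:
glEnable(GL_LINE_SMOOTH);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);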
If you want your lines to look nice when drawn transparently, I suggest drawing all your lines onto a separate framebuffer from the rest of your scene, reusing the same depth buffer, and with full opacity. Then draw the lines' framebuffer onto the scene's framebuffer with partial transparency.
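A rough sketch of that two-pass approach, assuming framebuffer objects are available (GL 3.0 or ARB_framebuffer_object); lineFBO, lineTex, sceneDepthRB, width, height and the two draw* helpers are placeholders for your own setup:
/* one-time setup: a colour texture for the lines, sharing the scene's depth */
GLuint lineFBO, lineTex;
glGenTextures(1, &lineTex);
glBindTexture(GL_TEXTURE_2D, lineTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &lineFBO);
glBindFramebuffer(GL_FRAMEBUFFER, lineFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, lineTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, sceneDepthRB);

/* each frame: draw the lines fully opaque, colour cleared to alpha 0 */
glBindFramebuffer(GL_FRAMEBUFFER, lineFBO);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);        /* keep the scene's depth values */
drawAllLinesOpaque();                /* your existing line code, alpha = 1 */

/* composite onto the scene with the desired transparency */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(1.0f, 1.0f, 1.0f, 0.5f);   /* 50% line opacity */
drawFullscreenQuadWith(lineTex);     /* textured quad, GL_MODULATE env mode */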
I am rendering lines at the same position as a mesh, and when the lines are right on top of the mesh, I get an effect where the lines start to clip into it and parts of them disappear, especially when moving around.
What I eventually have to do is something like in the screenshot below: move the lines off the mesh a little bit to get rid of that effect. Does anyone have any ideas on how I can render the lines so that they are right on the mesh without parts of them disappearing?
First of all, are these lines really lines? Or are they the mesh drawn a second time with glPolygonMode (GL_FRONT_AND_BACK, GL_LINE)? In either case, the problem is that the filled polygons and your lines generate (roughly) the same depths, and passing/failing the depth test is not working like you want.
The solution is to add a very slight depth offset to one of the primitives, and the easiest way to do this is to use glPolygonOffset (...). But it is worth mentioning that this only works for filled primitives (e.g. GL_POLYGON, GL_QUADS, GL_TRIANGLES) and not GL_LINES or GL_POINTS.
If it is the same mesh drawn twice, on the second pass you can do something like this:
glPolygonOffset (-0.1f, -1.0f);
glEnable (GL_POLYGON_OFFSET_LINE);
... Draw Mesh Second Time
glDisable (GL_POLYGON_OFFSET_LINE);
Otherwise, you will need to apply the depth offset when you draw the (solid) mesh the first time:
glPolygonOffset (0.1f, 1.0f);
glEnable (GL_POLYGON_OFFSET_FILL);
... Draw Mesh
glDisable (GL_POLYGON_OFFSET_FILL);
... Draw Lines
You may need to tweak the values used for glPolygonOffset (...); they depend on a number of factors, including depth buffer precision and the implementation. Generally the second parameter is the more important one: it is a constant offset applied to the depth. The first parameter scales with the polygon's change in depth (i.e. its slope relative to the screen).
Edit: just so you know, I have not solved this problem perfectly yet. Currently I am using a 0.5 px offset, which seems to work, but as others have said, it is not the "proper" solution. So I am still looking for the real deal; the diamond exit rule solution didn't work at all.
Perhaps it is a bug in the graphics card, but if so, then every professional programmer should have a bullet-proof solution for it, right?
Edit: I have now bought a new NVIDIA card (I had an ATI card before), and I still experience this problem. I also see the same bug in many, many games. So I guess it is impossible to fix in a clean way?
Here is an image of the bug:
How do you overcome this problem? Preferably with a non-shader solution, if possible. I tried to set an offset for the first line when I drew 4 individual lines myself instead of using wireframe mode, but that didn't work out very well: when the rectangle size changed, it sometimes looked like a perfect rectangle, but sometimes even worse than before my fix.
This is how I render the quad:
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glBegin(GL_QUADS);
glVertex2f(...);
glVertex2f(...);
glVertex2f(...);
glVertex2f(...);
glEnd();
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
Yes, I know I can use vertex arrays or VBOs, but that isn't the point here.
I also tried GL_LINE_LOOP, but it didn't fix the bug.
Edit: One solution, which works so far, is in the answer to Opengl pixel perfect 2D drawing by Lie Ryan:
Note that OpenGL coordinate space has no notion of integers; everything is a float, and the "centre" of an OpenGL pixel is really at 0.5,0.5 instead of its top-left corner. Therefore, if you want a 1px wide line from 0,0 to 10,10 inclusive, you really have to draw a line from 0.5,0.5 to 10.5,10.5.
This will be especially apparent if you turn on anti-aliasing: with anti-aliasing enabled, if you try to draw from 50,0 to 50,100, you may see a blurry 2px wide line, because the line falls in between two pixels.
Although you've discovered that shifting your points by 0.5 makes the problem go away, it's not for the reason you think.
The answer does indeed lie in the diamond exit rule which is also at the heart of the correctly accepted answer to Opengl pixel perfect 2D drawing.
The diagram below shows four fragments/pixels with a diamond inscribed within each. The four coloured spots represent possible starting points for your quad/line loop, i.e. the window co-ordinates of the first vertex.
You didn't say which way you were drawing the quad, but it doesn't matter. I'll assume, for argument's sake, that you are drawing it clockwise. The issue is whether the top-left of the four fragments shown will be produced by rasterising your first or your last line (it cannot be both).
If you start on the yellow vertex then the first line passes through the diamond and exits it as it passes horizontally to the right. The fragment will therefore be produced as a result of the first line's rasterisation.
If you start on the green vertex then the first line exits the fragment without passing through the diamond and hence never exits the diamond. However the last line will pass through it vertically and exit it as it ascends back to the green vertex. The fragment will therefore be produced as a result of the last line's rasterisation.
If you start on the blue vertex then the first line passes through the diamond and exits it as it passes horizontally to the right. The fragment will therefore be produced as a result of the first line's rasterisation.
If you start on the red vertex then the first line exits the fragment without passing through the diamond and hence never exits the diamond. The last line will also not pass through the diamond and therefore not exit it as it ascends back to the red vertex. The fragment will therefore not be produced as a result of either line's rasterisation.
Note that any vertex that is inside the diamond will automatically cause the fragment to be produced as the first line must exit the diamond (provided your quad is actually big enough to leave the diamond of course).
This is not a bug, this is exactly following the specification. The last pixel of a line is not drawn to prevent overdraw with following line segments, which would cause problems with blending. Solution: Send the last vertex twice.
Code Update
// don't use glPolygonMode, it doesn't do what you think it does
glBegin(GL_LINE_STRIP);
glVertex2fv(a);  // a..d are GLfloat[2] corner coordinates
glVertex2fv(b);
glVertex2fv(c);
glVertex2fv(d);
glVertex2fv(a);
glVertex2fv(a);  // resend the last vertex a second time, to close the loop
glEnd();
BTW: You should learn how to use vertex arrays. Immediate mode (glBegin, glEnd, glVertex calls) has been removed from the OpenGL 3.x core profile onward.
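For reference, a minimal sketch of the same closed outline with a legacy client-side vertex array, assuming a..d are GLfloat[2] corner coordinates as above:
GLfloat verts[] = {
    a[0], a[1],  b[0], b[1],  c[0], c[1],  d[0], d[1],
    a[0], a[1],  a[0], a[1]   /* last vertex resent, as above */
};
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glDrawArrays(GL_LINE_STRIP, 0, 6);   /* 6 vertices, 2 floats each */
glDisableClientState(GL_VERTEX_ARRAY);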
@Troubadour described the problem perfectly. It's not a driver bug; GL is acting exactly as specified. It's designed for sub-pixel accurate representation of the world-space object in device space, and that's what it's doing. The solutions are 1) anti-aliasing, so that device space affords more fidelity, and 2) arranging for a world coordinate system in which all transformed vertices fall in the middle of a device pixel. The latter is the "general" solution you are looking for.
In all cases you can achieve 2) by moving the input points around. Just shift each point enough to take its transform to the middle of a device pixel.
For some (unaltered) point sets, you can do it by slightly modifying the view transformation. The 1/2 pixel shift is an example. It works, e.g., if world space is an integer-scaled transform of device space followed by a translation by integers, and the world coordinates are also integers. Under many other conditions, though, +1/2 won't work.
Edit:
NB: as I said, a uniform shift (1/2 or any other) can be built into the view transform. There is no reason to fiddle with the vertex coordinates; just prepend a translation, e.g. glTranslatef(0.5f, 0.5f, 0.0f);
Try changing 0.5 to 0.375, the odd magic number that is used everywhere.
It used to be used by OpenGL, X11, etc., because of the diamond rule mentioned above and because of how graphics cards draw in order to avoid unnecessary overdraws of pixels.
There are lots of links about this; just search for the keywords "opengl 0.375 diamond rule" if you need more information. It's about how outlines and fills are treated algorithmically in OpenGL. It's also needed for pixel-perfect rendering of textures in, for example, 2D sprites.
Take a look at this.
I want to add something: implementing the diamond rule in code is simply a one-liner. Change 0.5 to 0.375, like this:
glTranslatef(0.375, 0.375, 0.0);
And it should render properly.
When you draw a line in OpenGL, glLineWidth creates a fixed-size line, regardless of how close the line is to you.
I wanted to draw a line that appears bigger when it's close. Now, I understand that if I use a rectangle to achieve this effect, it will look a bit pixelated once the polygon is far enough away.
What I've previously done is draw a normal GL_LINE up to the point where the line would get bigger than the pixel size, and then continue with a rectangle from that point on. However, it's not as fast as just chucking everything into a vertex array or VBO, as it has to be recalculated every frame.
What other methods are available? Or am I stuck with this?
I like to use a gradated texture like this to draw lines:
This is really the alpha of my texture. So you have a fully opaque center fading to fully transparent at the edges. Then you can draw your line using a rectangle with points:
(x1,y1,0,0), (x2,y1,1,0), (x1,y2,0,1), (x2,y2,1,1)
where the last two entries in each tuple are u and v of the texture. It ends up looking very smooth. You can even string together lots of very small rectangles to make curvy lines.
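A minimal sketch of one such rectangle, assuming tex already holds the gradient alpha texture and (x1,y1)-(x2,y2) span the line's rectangle; note that the vertex order above is exactly a triangle strip:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBegin(GL_TRIANGLE_STRIP);
glTexCoord2f(0.0f, 0.0f); glVertex2f(x1, y1);
glTexCoord2f(1.0f, 0.0f); glVertex2f(x2, y1);
glTexCoord2f(0.0f, 1.0f); glVertex2f(x1, y2);
glTexCoord2f(1.0f, 1.0f); glVertex2f(x2, y2);
glEnd();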
If you're just drawing a bunch of lines and want a quick and easy depth effect, try adding fog. The attenuation of the lines as they recede makes our brains think they're 3D. This isn't going to work if the near ends are really close to the viewer.
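A minimal fixed-function sketch; the start and end distances are placeholders to tune for your scene, and the fog colour should match your background:
GLfloat fogColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_LINEAR);
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_START, 10.0f);    /* distance where attenuation begins */
glFogf(GL_FOG_END, 100.0f);     /* fully fogged beyond this distance */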
If you want your lines to be thicker at the near end and thinner at the far end, I suppose you have to model them from polygons.
I have two objects drawn on screen in OpenGL: one is a sphere using the GLU object, and one is a texture-mapped star. Regardless of the z coordinates, the texture-mapped star always seems to draw in front. Is this normal OpenGL behavior? Is there a way to prevent this?
Note: I am working within the WorldWind framework, so maybe something else is going on that causes this. But I'm just wondering: is it normal for texture-mapped objects to appear in front? I don't think so, but I'm not sure...
This isn't a bug in WorldWind; this is actually desired behavior. Using glVertex2f() is the same as using glVertex3f() with z = 0, so it simply draws the star in a plane very close to the viewer (also depending on your projection).
To solve your issue, you can disable depth writes using glDepthMask(GL_FALSE), then draw the star, call glDepthMask(GL_TRUE), and then draw the sphere, which will now be in front of the star.
You can also use glDepthFunc(GL_GREATER) for the star, or glDisable(GL_DEPTH_TEST) for the sphere, to quickly achieve the same effect.
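A minimal sketch of the first option, with drawStar() and drawSphere() standing in for your existing drawing code:
glDepthMask(GL_FALSE);  /* star is still depth-tested but writes no depth */
drawStar();
glDepthMask(GL_TRUE);
drawSphere();           /* the sphere now draws over the star */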
To make anything more complicated work (such as the star intersecting the sphere), you need to use matrices to put the star at the desired position.