Drawing a "3D-looking" line in OpenGL - opengl

When you draw a line in OpenGL, glLineWidth gives you a fixed-size line, regardless of how close the line is to you.
I want to draw a line that appears bigger when it's close. Now, I understand that if I use a rectangle to achieve this effect, it will look a bit pixelated once the polygon is far enough away.
What I've previously done is draw a normal GL_LINE up to the point where the line would get bigger than the pixel size, and then continue with a rectangle from that point. However, it's not as fast as just chucking everything into a vertex array or VBO, since it has to be recalculated every frame.
What other methods are available? Or am I stuck with this?

I like to use a gradient texture like this to draw lines:
This is really the alpha channel of my texture, so you have a fully opaque center fading to fully transparent at the edges. Then you can draw your line using a rectangle with points:
(x1,y1,0,0), (x2,y1,1,0), (x1,y2,0,1), (x2,y2,1,1)
where the last two entries in each tuple are the u and v texture coordinates. It ends up looking very smooth. You can even string together lots of very small rectangles to make curvy lines.
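For reference, a minimal immediate-mode sketch of that idea (assuming the gradient texture is already created and bound, and that x1, y1, x2, y2 are the corner coordinates from above; the vertex order matches the tuples, so a triangle strip works directly):

// Sketch: draw one line segment as a textured, alpha-blended quad.
// Assumes the gradient alpha texture is already bound.
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

glBegin(GL_TRIANGLE_STRIP);
glTexCoord2f(0.0f, 0.0f); glVertex2f(x1, y1);
glTexCoord2f(1.0f, 0.0f); glVertex2f(x2, y1);
glTexCoord2f(0.0f, 1.0f); glVertex2f(x1, y2);
glTexCoord2f(1.0f, 1.0f); glVertex2f(x2, y2);
glEnd();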

If you're just drawing a bunch of lines and want a quick and easy depth effect, try adding fog. The attenuation of the lines as they recede makes our brains think they're 3D. This isn't going to work if the near ends are really close to the viewer.
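For reference, fixed-function fog only takes a few calls to set up; this is a rough sketch with placeholder distances and color you would tune to your scene:

// Sketch of fixed-function fog; the values are placeholders.
GLfloat fogColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };  // match your clear color
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_LINEAR);    // fade linearly between start and end
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_START, 10.0f);       // distance where the fade begins
glFogf(GL_FOG_END, 100.0f);        // distance where lines fade out completely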

If you want your lines to be thicker on the near end and thinner on the far end, I suppose you have to model them from polygons.
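One way to do that (just a sketch with a made-up helper, not anyone's posted code) is to expand each segment into a quad of fixed world-space width; under a perspective projection the near end then covers more pixels than the far end. Shown here for a segment lying in a plane; for arbitrary 3D lines you would offset along a direction perpendicular to both the segment and the view direction.

// Hypothetical helper: expand segment (x1,y1) -> (x2,y2) into a quad of
// world-space width w. Needs <cmath> for sqrtf.
void drawThickSegment(float x1, float y1, float x2, float y2, float w)
{
    float dx = x2 - x1, dy = y2 - y1;
    float len = sqrtf(dx * dx + dy * dy);
    if (len == 0.0f) return;
    float px = -dy / len * (w * 0.5f);   // unit perpendicular, scaled to
    float py =  dx / len * (w * 0.5f);   // half the desired width

    glBegin(GL_TRIANGLE_STRIP);
    glVertex2f(x1 + px, y1 + py);
    glVertex2f(x1 - px, y1 - py);
    glVertex2f(x2 + px, y2 + py);
    glVertex2f(x2 - px, y2 - py);
    glEnd();
}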

Related

Rasterizer not picking up GL_LINES as I would want it to

So I'm rendering this diagram each frame:
https://dl.dropbox.com/u/44766482/diagramm.png
Basically, each second it moves everything one pixel to the left and every frame it updates the rightmost pixel column with current data. So a lot of changes are made.
It is completely constructed from GL_LINES, always from bottom to top.
However, those missing black columns are not intentional at all; the rasterizer just isn't picking them up.
I'm using integers for positions and bytes for colors. The projection is orthographic and exactly 1:1: translating by 1 means moving 1 pixel.
So my problem is, how to get rid of the black lines? I suppose I could write the data to texture, but that seems expensive. Currently I use a VBO.
Render your columns as 1-pixel-wide quads instead; OpenGL's rasterization rules will make sure you have no holes that way.
I realize the question is already closed, but you can also get the effect you want by drawing your lines centered on 0.5. A pixel's CENTER is at 0.5, and a line drawn there will always be picked up by the rasterizer in the right place.
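For example, both suggestions sketched against the 1:1 orthographic projection described above (x, yBottom, yTop are hypothetical column coordinates):

// Option 1: draw the column as a 1-pixel-wide quad at integer coordinates.
glBegin(GL_QUADS);
glVertex2i(x,     yBottom);
glVertex2i(x + 1, yBottom);
glVertex2i(x + 1, yTop);
glVertex2i(x,     yTop);
glEnd();

// Option 2: keep GL_LINES but place the endpoints on pixel centers.
glBegin(GL_LINES);
glVertex2f(x + 0.5f, yBottom + 0.5f);
glVertex2f(x + 0.5f, yTop + 0.5f);
glEnd();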

Perfect filled triangle rendering algorithm?

Where can I get an algorithm to render filled triangles? Edit 3: I can't use OpenGL to render it; I need the per-pixel algorithm for this.
My goal is to render a regular polygon from triangles, so with this triangle-filling algorithm the edges of adjacent triangles must neither overlap nor leave gaps, because that would cause rendering errors if I use, for example, XOR to draw the pixels.
Therefore, the render quality should match OpenGL's: I should be able to define, for example, a circle with N vertices and have it render correctly as a circle at any size, so it can't use only integer coordinates the way some triangle-filling algorithms do.
I need the ability to control the triangle filling myself, so I can add my own logic for how each individual pixel is rendered. In other words, I need the bare code behind the rendering, to have full control over it. It should be efficient enough to draw tens of thousands of triangles without waiting more than a second, perhaps. (I'm not sure how fast it can be at best, but I hope it won't take more than 10 seconds.)
Preferred language would be C++, but I can convert other languages to my needs.
If there are no free algorithms for this, where can I learn to build one myself, and how hard would that actually be? (me=math noob).
I added OpenGL tag since this is somehow related to it.
Edit 2: I tried the algorithm here: http://joshbeam.com/articles/triangle_rasterization/ but it seems to be slightly broken; here is a circle made of 64 triangles rendered with it:
But if you zoom in, you can see the errors:
Explanation: there are 2 pixels overlapping the other triangle's colors, which should not happen (otherwise transparency, XOR and similar effects will render badly).
The errors seem to be more visible on smaller circles. This is not acceptable if I want to use an XOR effect for the pixels.
What can I do to fix these, so it will fill it perfectly without overlapped pixels or gaps?
Edit 4: I noticed that very small circles don't render well. I realised this is because the coordinates are indeed converted to integers. How can I keep the coordinates as floats and render the circle precisely and perfectly, just like OpenGL does? Here is an example of how bad the small circles look:
Notice how perfect the OpenGL render is! THAT is what I want to achieve, without using OpenGL. NOTE: I don't just want to render a perfect circle, but any polygon shape.
There's always the half-space method.
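In case a concrete starting point helps, here is a rough sketch of the half-space (edge-function) approach with a top-left fill rule, which is what keeps two triangles sharing an edge from both drawing (or both skipping) the same pixel. It is written for readability, not speed; a production rasterizer would snap coordinates to a fixed-point grid so the on-edge comparison is exact, and would step the edge functions incrementally instead of re-evaluating them per pixel. The function and type names are made up.

#include <algorithm>
#include <cmath>

// Callback used to plot a pixel; plug in your own per-pixel logic (XOR, etc.).
typedef void (*PixelFunc)(int x, int y);

// > 0 when P is on the interior side of the directed edge A->B,
// given the winding enforced in fillTriangle below.
static float edgeFn(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// Top-left fill rule, assuming y grows downward: a "top" edge is horizontal
// with the interior below it, a "left" edge has the interior to its right.
// Pixel centers exactly on top/left edges belong to this triangle; centers
// on bottom/right edges do not.
static bool isTopLeft(float ax, float ay, float bx, float by)
{
    return (ay == by && bx > ax) || (by < ay);
}

void fillTriangle(float x0, float y0, float x1, float y1,
                  float x2, float y2, PixelFunc putPixel)
{
    // Enforce a consistent winding so all edge functions are >= 0 inside.
    if (edgeFn(x0, y0, x1, y1, x2, y2) < 0.0f) {
        std::swap(x1, x2);
        std::swap(y1, y2);
    }

    int minX = (int)std::floor(std::min(x0, std::min(x1, x2)));
    int maxX = (int)std::ceil (std::max(x0, std::max(x1, x2)));
    int minY = (int)std::floor(std::min(y0, std::min(y1, y2)));
    int maxY = (int)std::ceil (std::max(y0, std::max(y1, y2)));

    for (int y = minY; y < maxY; ++y) {
        for (int x = minX; x < maxX; ++x) {
            float px = x + 0.5f, py = y + 0.5f;   // sample at pixel centers
            float w0 = edgeFn(x0, y0, x1, y1, px, py);
            float w1 = edgeFn(x1, y1, x2, y2, px, py);
            float w2 = edgeFn(x2, y2, x0, y0, px, py);
            bool in0 = w0 > 0.0f || (w0 == 0.0f && isTopLeft(x0, y0, x1, y1));
            bool in1 = w1 > 0.0f || (w1 == 0.0f && isTopLeft(x1, y1, x2, y2));
            bool in2 = w2 > 0.0f || (w2 == 0.0f && isTopLeft(x2, y2, x0, y0));
            if (in0 && in1 && in2)
                putPixel(x, y);
        }
    }
}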
OpenGL uses the GPU to perform this job. This is accelerated in hardware and is called rasterization.
As far as I know, the hardware implementation is based on the scan-line algorithm.
This used to be done by creating the outline and then filling in the horizontal lines. See this link for more details - http://joshbeam.com/articles/triangle_rasterization/
Edit: I don't think this will produce the lone pixels you are after; there should be a pixel on every line.
Your problem looks a lot like the classic problem of two triangles sharing the very same edge: one triangle is allowed to claim the pixels along the shared edge while the other has to leave them blank.
When working with a graphics card you usually get this behavior by drawing in a fixed order while enabling a z-buffer test, or by testing whether the pixel has already been drawn: if a pixel with the very same z-value is already set, changing it is not allowed.
In your circle example the shared edge between neighboring segments is not computed identically by both triangles. You have to check whether the edges are calculated differently, and why.
Whenever you draw two different shapes and see something like that, you can either fix your model (so they share all the edge vertices), go for a z-buffer test, or use a color test.
You can also minimize the effect by drawing the edges into a higher-resolution sub-buffer and down-sampling it. Since this does not affect the whole area, it is more cost-effective in space and time than down-sampling the whole scene.

How to render perfect wireframed rectangle in 2D mode with OpenGL?

Edit: just so you know, I have not solved this problem perfectly yet. Currently I am using a 0.5 px offset, which seems to work, but as others have said it is not the "proper" solution. So I am looking for the real deal; the diamond exit rule solution didn't work at all.
I believe it is perhaps a bug in the graphics card, but if so, then any professional programmer should have a bullet-proof solution for this, right?
Edit: I have now bought a new NVIDIA card (I had an ATI card before), and I still experience this problem. I also see the same bug in many, many games. So I guess it is impossible to fix in a clean way?
Here is image of the bug:
How do you overcome this problem? Preferably with a non-shader solution, if possible. I tried setting an offset for the first line when I drew the 4 lines individually instead of using wireframe mode, but that didn't work out very well: when the rectangle size changed, it sometimes looked like a perfect rectangle, but sometimes even worse than before my fix.
This is how I render the quad:
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glBegin(GL_QUADS);
glVertex2f(...);
glVertex2f(...);
glVertex2f(...);
glVertex2f(...);
glEnd();
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
Yes, I know I can use vertex arrays or VBO's, but that isn't the point here.
I also tried GL_LINE_LOOP, but it didn't fix the bug.
Edit: One solution that works so far is from "Opengl pixel perfect 2D drawing" by Lie Ryan:
Note that OpenGL coordinate space has no notion of integers; everything is a float, and the "centre" of an OpenGL pixel is really at 0.5,0.5 instead of its top-left corner. Therefore, if you want a 1px wide line from 0,0 to 10,10 inclusive, you really have to draw a line from 0.5,0.5 to 10.5,10.5.
This will be especially apparent if you turn on anti-aliasing: if you have anti-aliasing and you try to draw from 50,0 to 50,100, you may see a blurry 2px wide line because the line fell in between two pixels.
Although you've discovered that shifting your points by 0.5 makes the problem go away, it's not for the reason you think.
The answer does indeed lie in the diamond exit rule which is also at the heart of the correctly accepted answer to Opengl pixel perfect 2D drawing.
The diagram below shows four fragments/pixels with a diamond inscribed within each. The four coloured spots represent possible starting points for your quad/line loop i.e. the window co-ordinates of the first vertex.
You didn't say which way you were drawing the quad but it doesn't matter. I'll assume, for argument's sake, that you are drawing it clockwise. The issue is whether the top left of the four fragments shown will be produced by rasterising either your first or last line (it cannot be both).
If you start on the yellow vertex then the first line passes through the diamond and exits it as it passes horizontally to the right. The fragment will therefore be produced as a result of the first line's rasterisation.
If you start on the green vertex then the first line exits the fragment without passing through the diamond and hence never exits the diamond. However the last line will pass through it vertically and exit it as it ascends back to the green vertex. The fragment will therefore be produced as a result of the last line's rasterisation.
If you start on the blue vertex then the first line passes through the diamond and exits it as it passes horizontally to the right. The fragment will therefore be produced as a result of the first line's rasterisation.
If you start on the red vertex then the first line exits the fragment without passing through the diamond and hence never exits the diamond. The last line will also not pass through the diamond and therefore not exit it as it ascends back to the red vertex. The fragment will therefore not be produced as a result of either line's rasterisation.
Note that any vertex that is inside the diamond will automatically cause the fragment to be produced as the first line must exit the diamond (provided your quad is actually big enough to leave the diamond of course).
This is not a bug, this is exactly following the specification. The last pixel of a line is not drawn to prevent overdraw with following line segments, which would cause problems with blending. Solution: Send the last vertex twice.
Code Update
// don't use glPolygonMode, it doesn't
// do what you think it does
glBegin(GL_LINE_STRIP);
glVertex2fv(a); // a, b, c, d are assumed to be float[2] corner positions
glVertex2fv(b);
glVertex2fv(c);
glVertex2fv(d);
glVertex2fv(a);
glVertex2fv(a); // resend the last vertex a second time, to close the loop
glEnd();
BTW: you should learn how to use vertex arrays. Immediate mode (glBegin, glEnd, glVertex calls) has been removed from the OpenGL 3.x core profile and onward.
@Troubadour described the problem perfectly. It's not a driver bug. GL is acting exactly as specified. It's designed for sub-pixel accurate representation of the world space object in device space. That's what it's doing. Solutions are 1) anti-alias, so the device space affords more fidelity, and 2) arrange for a world coordinate system where all transformed vertices fall in the middle of a device pixel. This is the "general" solution you are looking for.
In all cases you can achieve 2) by moving the input points around. Just shift each point enough to take its transform to the middle of a device pixel.
For some (unaltered) point sets, you can do it by slightly modifying the view transformation. The 1/2 pixel shift is an example. It works e.g. if the world space is an integer-scaled transform of device space followed by a translation by integers, where world coordinates are also integers. Under many other conditions, though, +1/2 won't work.
** Edit **
NB as I said a uniform shift (1/2 or any other) can be built into the view transform. There is no reason to fiddle with vertex coordinates. Just prepend a translation, e.g. glTranslatef(0.5f, 0.5f, 0.0f);
Try changing 0.5 into 0.375, the odd magic number that's used everywhere.
Used by OpenGL, X11, etc.
Because of the diamond rule mentioned, and because of how graphics cards draw to avoid unnecessary overdraw of pixels.
I could provide a link, but there are lots of them; just search for the keywords "opengl 0.375 diamond rule" if you need more information. It's about how outlines and fills are treated algorithmically in OpenGL. It's also needed for pixel-perfect rendering of textures in, for example, 2D sprites.
Take a look at this.
I want to add something: doing what you want, i.e. implementing the diamond rule in code, is a simple one-liner; change 0.5 into 0.375 like this:
glTranslatef(0.375, 0.375, 0.0);
And it should render properly.

OpenGL lighting question?

Greetings all,
As seen in the image, I draw lots of contours using GL_LINE_STRIP.
But the contours look like a mess and I am wondering how I can make them look good (so the depth can be seen, etc.).
I must render contours, so I have to stick with GL_LINE_STRIP. I am wondering how I can enable lighting for this?
Thanks in advance
Original image
http://oi53.tinypic.com/287je40.jpg
Lighting contours isn't going to do much good, but you could use fog or manually set the line colors based on distance (or even altitude) to give a depth effect.
Updated:
umanga, at first I thought lighting wouldn't work because lighting is based on surface normal vectors - and you have no surfaces. However @roe pointed out that normal vectors are actually per vertex in OpenGL, and as such, any POLYLINE can have normals. So that would be an option.
It's not entirely clear what the normal should be for a 3D line, as @Julien said. The question is how to define normals for the contour lines such that the resulting lighting makes visual sense and helps clarify the depth?
If all the vertices in each contour are coplanar (e.g. in the XY plane), you could set the 3D normal to be the 2D normal, with 0 as the Z coordinate. The resulting lighting would give a visual sense of shape, though maybe not of depth.
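A tiny sketch of that option, per segment (hypothetical names: x1,y1 and x2,y2 are consecutive contour vertices and z is the contour's altitude; with GL_LINE_STRIP you would average the normals of the two segments meeting at each vertex):

// Use the segment's 2D perpendicular as the lighting normal, with Z = 0,
// for a contour lying in an XY plane. Needs <cmath> for sqrtf.
float dx = x2 - x1, dy = y2 - y1;
float len = sqrtf(dx * dx + dy * dy);
if (len > 0.0f)
    glNormal3f(-dy / len, dx / len, 0.0f);
glVertex3f(x1, y1, z);
glVertex3f(x2, y2, z);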
If you know the slope of the surface (assuming there is a surface) at each point along the line, you could use the surface normal and do a better job of showing depth; this is essentially like a hill-shading applied only to the contour lines. The question then is why not display the whole surface?
End of update
+1 to Ben's suggestion of setting the line colors based on altitude (are these topographic contours?) or based on distance from the viewer. You could also fill the polygon surrounded by each contour with a similar color, as in http://en.wikipedia.org/wiki/File:IsraelCVFRtopography.jpg
Another way to make the lines clearer would be to have fewer of them... can you adjust the density of the contours? E.g. one contour line per 5 ft height difference instead of per 1 ft, or whatever the units are, depending on what it is you're drawing contours of.
Other techniques for elucidating depth include stereoscopy, and rotating the image in 3D while the viewer is watching.
If you're looking for shading, then you would normally convert the contours to a solid. The usual way to do that is to build a mesh: set up 4 corner points at zero height at the bounds (or beyond), drop the contours into the mesh, and get the mesh to triangulate the coordinates in. Once done, you have a triangulated solid hull for which you can find the normals and smooth them over adjacent faces to create smooth terrain.
To triangulate the mesh one normally uses the Delaunay algorithm, which is a bit of a beast, but there do exist libraries for doing it. The best that I know of are the ones based on the Guibas and Stolfi papers, since that approach is pretty optimal.
To generate the normals you do a simple cross product, ensure the facing is correct, and manually renormalize them before feeding them into glNormal.
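For example, a small sketch of that step (plain C-style, made-up function name):

// Face normal of triangle (a, b, c) via the cross product of two edge
// vectors, renormalized before being passed to glNormal3fv. Swap the
// cross product operands if the facing turns out to be wrong.
// Needs <cmath> for sqrtf.
void faceNormal(const float a[3], const float b[3], const float c[3], float n[3])
{
    float u[3] = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
    float v[3] = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
    n[0] = u[1] * v[2] - u[2] * v[1];
    n[1] = u[2] * v[0] - u[0] * v[2];
    n[2] = u[0] * v[1] - u[1] * v[0];
    float len = sqrtf(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    if (len > 0.0f) {
        n[0] /= len; n[1] /= len; n[2] /= len;
    }
}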
In the old days you used to make a display list out of the result, but the newer way is to make a vertex array. If you want to be extra flash, you can look for coincident planar faces and optimize the mesh down for faster redraw, but that's a bit of a black art - good for games, not so good for CAD.
(thx for bonus last time)

OpenGL 2D game question

I want to make a game with Worms-like destructible terrain in 2D, using OpenGL.
What is the best approach for this?
Draw pixel per pixel? (Uh, not good?)
Have the world as a texture and manipulate it (is that possible?)
Thanks in advance
Thinking about the way Worms terrain looked, I came up with this idea. But I'm not sure how you would implement it in OpenGL. It's more of a layered 2D drawing approach. I'm posting the idea anyway. I've emulated the approach using Paint.NET.
First, you have a background sky layer.
And you have a terrain layer.
The terrain layer is masked so the top portion isn't drawn. Draw the terrain layer on top of the sky layer to form the scene.
Now for the main idea. Any time there is an explosion or other terrain-deforming event, you draw a circle or other shape on the terrain layer, using the terrain layer itself as a drawing mask (so only the part of the circle that overlaps existing terrain is drawn), to wipe out part of the terrain. Use a transparent/mask-color brush for the fill and some color similar to the terrain for the thick pen.
You can repeat this process to add more deformations. You could keep this layer in memory and add deformations as they occur or you could even render them in memory each frame if there aren't too many deformations to render.
I guess you'd better use texture-filled polygons with the correct mapping (a linear one that doesn't stretch the texture to use all the texels, but leaves the cropped areas out), and then reshape them as they get destroyed.
I'm assuming your problem will be to implement the collision between characters/weapons/terrain.
As long as you aren't doing this on OpenGL ES, you might be able to get away with using the stencil buffer to do per-pixel collision detection and have your terrain be a single modifiable texture.
This page will give an idea:
http://kometbomb.net/2007/07/11/hardware-accelerated-2d-collision-detection-in-opengl/
The way I imagine it is this:
a plane with the texture applied
a path (a vector of points/segments) used for ground collisions.
When something explodes, you do a boolean operation (rectangle minus circle) on the texture (revealing the background) and on the 'walkable' path.
What I'm trying to say is you do a geometric boolean operation and use the result to update the texture (with an alpha mask or something) and to update the data structure you use to keep track of the walkable area (whichever that might be).
Split things up, instead of relying only on GL draw methods.
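One way to do the texture half of that, sketched with hypothetical names: keep the terrain as an RGBA texture, zero the alpha of the destroyed pixels in a CPU-side copy, and upload only the touched rectangle.

// terrainPixels (RGBA bytes), terrainWidth/Height and terrainTex are assumed
// to exist elsewhere; cx, cy, r describe the explosion circle in texels.
void punchHole(unsigned char* terrainPixels, int terrainWidth, int terrainHeight,
               GLuint terrainTex, int cx, int cy, int r)
{
    int x0 = cx - r; if (x0 < 0) x0 = 0;
    int y0 = cy - r; if (y0 < 0) y0 = 0;
    int x1 = cx + r; if (x1 > terrainWidth  - 1) x1 = terrainWidth  - 1;
    int y1 = cy + r; if (y1 > terrainHeight - 1) y1 = terrainHeight - 1;

    // Zero the alpha channel inside the circle.
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r)
                terrainPixels[(y * terrainWidth + x) * 4 + 3] = 0;

    // Re-upload just the affected sub-rectangle.
    glBindTexture(GL_TEXTURE_2D, terrainTex);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, terrainWidth);
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, x0);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, y0);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x0, y0, x1 - x0 + 1, y1 - y0 + 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, terrainPixels);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
}

Draw the terrain with blending enabled (glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)) so the zero-alpha areas show the background.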
I think I would start by drawing the foreground into the stencil buffer so the stencil buffer is set to 1 bits anywhere there's foreground, and 0 elsewhere (where you want your sky to show).
Then to draw a frame, you draw your sky, enable the stencil buffer, and draw the foreground. For the initial frame (before any explosion has destroyed part of the foreground) the stencil buffer won't really be doing anything.
When you do have an explosion, however, you draw it to the stencil buffer (clearing the stencil buffer for that circle). Then you re-draw your data as before: draw the sky, enable the stencil buffer, and draw the foreground.
This lets you get the effect you want (the foreground disappears where desired) without having to modify the foreground texture at all. If you prefer not to use the stencil buffer, the alternative that seems obvious to me would be to enable blending, and just manipulate the alpha channel of your foreground texture -- set the alpha to 0 (transparent) where it's been affected by an explosion. IMO, the stencil buffer is a bit cleaner approach, but manipulating the alpha channel is pretty simple as well.
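A rough outline of the stencil variant in code (assuming the context actually has a stencil buffer; drawSky, drawForeground and drawExplosionCircle are hypothetical helpers that issue the geometry):

// Once at startup: mark every foreground pixel with stencil value 1.
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // touch stencil only
drawForeground();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// When an explosion happens: knock the stencil back to 0 inside the circle.
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xFF);
glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawExplosionCircle();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// Every frame: sky everywhere, foreground only where the stencil is still 1.
glDisable(GL_STENCIL_TEST);
drawSky();
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawForeground();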
I think, but this is just a quick idea, that a good way might be to draw a Very Large Number of Lines.
I'm thinking that you represent the landscape as a bunch of line segments: for each column of the screen you have 0..n vertical lines that make up the ground:
12 789
0123 6789
0123456789
0123456789
In the above awesomeness, the column of 0s makes up a single line, and so on. I didn't try to illustrate the case where a single pixel column has more than one line, since it's a bit hard in this coarse format.
I'm not sure this will be efficient, but it at least makes some sense since lines are an OpenGL primitive.
You can color and texture the lines by enabling texture-mapping and specifying the desired texture coordinates for each line segment.
Typically the way I have seen it done is to have each entity be a textured quad, then update the texture for animation. For destructible terrain it might be best to break the terrain into tiles; then you only have to update the ones that have changed. Don't use glDrawPixels; it is probably the slowest approach possible (outside of reloading textures from disk every frame, though even that would be close).