How do I render thick 2D lines as polygons?

I have a path made up of a list of 2D points. I want to turn these into a strip of triangles in order to render a textured line with a specified thickness (and other such things). So essentially the list of 2D points needs to become a list of vertices specifying the outline of a polygon that, if rendered, would draw the line. The problem is handling the corner joins, miters, caps, etc. The resulting polygon needs to be "perfect" in the sense of no overdraw, clean joins, etc., so that it could feasibly be extruded or otherwise toyed with.
Are there any simple resources around that can provide algorithm insight, code or any more information on doing this efficiently?
I absolutely DO NOT want a full-fledged 2D vector library (cairo, antigrain, OpenVG, etc.) with curves, arcs, dashes and all the bells and whistles. I've been digging through multiple source trees of OpenVG implementations and other things looking for insight, but it's all terribly convoluted.
I'm definitely willing to code it myself, but there are many degenerate cases (small segments + thick widths + sharp corners) that create all kinds of join issues. Even a little help would save me hours of trying to deal with them all.
EDIT: Here's an example of one of those degenerate cases that causes ugliness if you were simply to go from vertex to vertex. Red is the original path. The orange blocks are rectangles drawn at a specified width aligned and centered on each segment.

Oh well - I've tried to solve that problem myself. I wasted two months on a solution that tried to solve the zero-overdraw problem. As you've already found out, you can't handle all the degenerate cases and have zero overdraw at the same time.
You can however use a hybrid approach:
Write yourself a routine that checks if the joins can be constructed from simple geometry without problems. To do so you have to check the join-angle, the width of the line and the length of the joined line-segments (line-segments that are shorter than their width are a PITA). With some heuristics you should be able to sort out all the trivial cases.
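To make that concrete, here is a minimal sketch of the kind of heuristic routine described above (names and thresholds are mine, purely illustrative):

    #include <cmath>

    struct Vec2 { float x, y; };

    static float len(Vec2 v)         { return std::sqrt(v.x * v.x + v.y * v.y); }
    static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

    // True if the join at vertex b (between segments a-b and b-c) can be
    // built from simple miter geometry without risking self-intersection.
    bool joinIsTrivial(Vec2 a, Vec2 b, Vec2 c, float width)
    {
        Vec2 ab = { b.x - a.x, b.y - a.y };
        Vec2 bc = { c.x - b.x, c.y - b.y };
        float lenAB = len(ab), lenBC = len(bc);

        // Segments shorter than the line width are the painful cases.
        if (lenAB < width || lenBC < width)
            return false;

        // Cosine of the turn angle between the two segment directions.
        float cosTurn = dot(ab, bc) / (lenAB * lenBC);

        // The miter length grows as 1 / cos(turn / 2), so near-reversals
        // produce miters that shoot off toward infinity; reject them.
        return cosTurn > -0.9f;   // turns sharper than ~154 deg -> slow path
    }

Segments shorter than the width and near-reversal turn angles are exactly where a simple miter self-intersects, so those get routed to the slow path below.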
I don't know what your average line data looks like, but in my case more than 90% of the wide lines had no degenerate cases.
For all other lines:
You've most probably already found out that if you tolerate overdraw, generating the geometry is a lot easier. Do so, and let a polygon CSG algorithm and a tessellation algorithm do the hard job.
I've evaluated most of the available tessellation packages, and I ended up with the GLU tessellator. It was fast, robust, and never crashed (unlike most of the other packages). It was free and the license allowed me to include it in a commercial program. The quality and speed of the tessellation is okay. You will not get Delaunay-triangulation quality, but since you just need the triangles for rendering that's not a problem.
Since I disliked the tessellator API I lifted the tessellation code from the free SGI OpenGL reference implementation, rewrote the entire front-end and added memory pools to get the number of allocations down. It took two days to do this, but it was well worth it (like a factor-of-five performance improvement). The solution ended up in a commercial OpenVG implementation btw :-)
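For reference, a bare-bones sketch of driving the stock GLU tessellator (the API this answer found clunky) from C++; it assumes the contour is packed as x,y,z triples, and error handling and vertex ownership are glossed over:

    #include <GL/glu.h>
    #include <vector>

    typedef void (*GluCb)();   // GLU callbacks are registered via this cast

    static void onBegin(GLenum type)  { glBegin(type); }
    static void onVertex(void* data)  { glVertex3dv((GLdouble*)data); }
    static void onEnd()               { glEnd(); }
    static void onCombine(GLdouble c[3], void*[4], GLfloat[4], void** out)
    {
        // Called at self-intersections; leaks the new vertex for brevity.
        *out = new GLdouble[3]{ c[0], c[1], c[2] };
    }

    void tessellate(std::vector<GLdouble>& contour)   // x,y,z triples
    {
        GLUtesselator* tess = gluNewTess();
        gluTessCallback(tess, GLU_TESS_BEGIN,   (GluCb)onBegin);
        gluTessCallback(tess, GLU_TESS_VERTEX,  (GluCb)onVertex);
        gluTessCallback(tess, GLU_TESS_END,     (GluCb)onEnd);
        gluTessCallback(tess, GLU_TESS_COMBINE, (GluCb)onCombine);

        // Nonzero winding is the "let CSG do the hard job" part: the
        // overlapping pieces of the overdrawn outline merge into one shape.
        gluTessProperty(tess, GLU_TESS_WINDING_RULE, GLU_TESS_WINDING_NONZERO);

        gluTessBeginPolygon(tess, nullptr);
        gluTessBeginContour(tess);
        for (size_t i = 0; i + 2 < contour.size(); i += 3)
            gluTessVertex(tess, &contour[i], &contour[i]);
        gluTessEndContour(tess);
        gluTessEndPolygon(tess);
        gluDeleteTess(tess);
    }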
If you're rendering with OpenGL on a PC, you may want to move the tessellation/CSG job from the CPU to the GPU and use stencil-buffer or z-buffer tricks to remove the overdraw. That's a lot easier and may be even faster than CPU tessellation.
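A minimal sketch of the stencil variant (legacy GL for brevity; it assumes a context with a stencil buffer, and drawOverlappingLineGeometry() / drawFullScreenQuad() stand in for your own drawing code):

    #include <GL/gl.h>

    void drawOverlappingLineGeometry();   // your cheap, overdrawing tessellation
    void drawFullScreenQuad();            // or re-draw the same geometry

    void drawWideLineWithStencil()
    {
        glClear(GL_STENCIL_BUFFER_BIT);
        glEnable(GL_STENCIL_TEST);

        // Pass 1: write 1 wherever any line triangle covers a pixel, but
        // don't touch the color buffer -- overdraw is invisible here.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glStencilFunc(GL_ALWAYS, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        drawOverlappingLineGeometry();

        // Pass 2: draw where stencil == 1; each covered pixel receives
        // color exactly once, so the overdraw never reaches the screen.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glStencilFunc(GL_EQUAL, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        drawFullScreenQuad();

        glDisable(GL_STENCIL_TEST);
    }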

I just found this amazing work:
http://www.codeproject.com/Articles/226569/Drawing-polylines-by-tessellation
It seems to do exactly what you want, and its license allows it to be used even in commercial applications. Plus, the author did a truly great job of detailing his method. I'll probably give it a shot at some point to replace my own not-nearly-as-perfect implementation.

A simple method off the top of my head.
Bisect the angle at each 2D vertex; this will create a nice miter line. Then move along that line, both inward and outward, by the amount of your "thickness" (or the thickness divided by two?), and you now have your inner and outer polygon points. Move to the next point and repeat the same process, building your new polygon points along the way. Then apply a triangulation to get your render-ready vertices.
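A minimal sketch of that bisector construction, using the half-width and a tiny Vec2 helper (names are illustrative); note the final division, which is exactly where sharp corners blow up:

    #include <cmath>

    struct Vec2 { float x, y; };
    static Vec2 normalize(Vec2 v)
    {
        float l = std::sqrt(v.x * v.x + v.y * v.y);
        return { v.x / l, v.y / l };
    }

    // Inner/outer offset points at vertex b between segments a-b and b-c.
    void miterPoints(Vec2 a, Vec2 b, Vec2 c, float halfWidth,
                     Vec2& inner, Vec2& outer)
    {
        Vec2 d0 = normalize({ b.x - a.x, b.y - a.y });
        Vec2 d1 = normalize({ c.x - b.x, c.y - b.y });

        // The miter direction is the bisector of the two segment normals.
        Vec2 miter = normalize({ -(d0.y + d1.y), d0.x + d1.x });

        // Miter length = halfWidth / cos(turn/2); dot(miter, n0) is that
        // cosine. Near-reversals make this huge -- cap it or fall back to
        // a bevel in production code.
        Vec2 n0 = { -d0.y, d0.x };                  // normal of first segment
        float scale = halfWidth / (miter.x * n0.x + miter.y * n0.y);

        outer = { b.x + miter.x * scale, b.y + miter.y * scale };
        inner = { b.x - miter.x * scale, b.y - miter.y * scale };
    }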

I ended up having to get my hands dirty and write a small ribbonizer to solve a similar problem.
For me the issue was that I wanted fat lines in OpenGL that did not have the kinds of artifacts that I was seeing with OpenGL on the iPhone. After looking at various solutions - Bezier curves and the like - I decided it was probably easiest to just make my own. There are a couple of different approaches.
One approach is to find the angle of intersection between two segments and then move along that intersection line a certain distance away from the surface and treat that as a ribbon vertex. I tried that and it did not look intuitive; the ribbon width would vary.
Another approach is to actually compute a normal to the surface of the line segments and use that to compute the ideal ribbon edge for each segment, and to do actual intersection tests between ribbon segments. This worked well, except that for sharp corners the ribbon segment intersections were too far away (as the inter-segment angle approached 180°).
I worked around the sharp-angle issue with two approaches. The Paul Bourke line intersection algorithm (which I used in an unoptimized way) suggested detecting whether the intersection was inside the segments. Since both segments are identical I only needed to test one of them for intersection. I could then arbitrate how to resolve this: either by fudging a best point between the two ends or by putting on an end cap - both approaches look good - though the end cap approach may throw off the polygon front/back-facing ordering for OpenGL.
See http://paulbourke.net/geometry/lineline2d/
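For reference, the linked Paul Bourke test boils down to something like this (a hedged sketch, not the poster's code): ua and ub are the parametric positions of the crossing on each segment, and "inside" means both lie in [0, 1].

    bool segmentsIntersect(float x1, float y1, float x2, float y2,   // segment A
                           float x3, float y3, float x4, float y4,   // segment B
                           float& ix, float& iy)
    {
        float denom = (y4 - y3) * (x2 - x1) - (x4 - x3) * (y2 - y1);
        if (denom == 0.0f)
            return false;                       // parallel (or coincident)

        float ua = ((x4 - x3) * (y1 - y3) - (y4 - y3) * (x1 - x3)) / denom;
        float ub = ((x2 - x1) * (y1 - y3) - (y2 - y1) * (x1 - x3)) / denom;

        // The detection the answer relies on: is the crossing inside both
        // segments, or only on their infinite extensions?
        if (ua < 0.0f || ua > 1.0f || ub < 0.0f || ub > 1.0f)
            return false;

        ix = x1 + ua * (x2 - x1);
        iy = y1 + ua * (y2 - y1);
        return true;
    }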
See my source code here : https://gist.github.com/1474156

I'm interested in this too, since I want to perfect my mapping application's (Kosmos) drawing of roads. One workaround I used is to draw the polyline twice, once with a thicker line and once with a thinner one, in a different color. But this is not really a polygon, it's just a quick way of simulating one. See some samples here: http://wiki.openstreetmap.org/wiki/Kosmos_Rendering_Help#Rendering_Options
I'm not sure if this is what you need.

I think I'd reach for a tessellation algorithm. It's true that in most cases where these are used the aim is to reduce the number of vertices to optimise rendering, but in your case you could parameterise it to retain all the detail - and the possibility of optimising may come in useful.
There are numerous tessellation algorithms and code around on the web - I wrapped up a pure C one in a DLL a few years back for use with a Delphi landscape renderer, and they are not an uncommon subject for advanced graphics coding tutorials and the like.

See if Delaunay triangulation can help.

In my case I could afford to overdraw. I just drew circles with radius = width/2 centered on each of the polyline's vertices.
Artifacts are masked this way, and it is very easy to implement, if you can live with "rounded" corners and some overdrawing.
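In immediate-mode GL this is just one triangle fan per vertex, drawn on top of the segment quads; a minimal sketch:

    #include <cmath>
    #include <GL/gl.h>

    void drawJoinDisc(float cx, float cy, float radius, int steps = 24)
    {
        glBegin(GL_TRIANGLE_FAN);
        glVertex2f(cx, cy);                       // fan center
        for (int i = 0; i <= steps; ++i)          // <= closes the circle
        {
            float a = 2.0f * 3.14159265f * i / steps;
            glVertex2f(cx + radius * std::cos(a), cy + radius * std::sin(a));
        }
        glEnd();
    }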

From your image it looks like you are drawing boxes around the line segments with fill on, in orange. Doing so is going to create bad overdraws for sure. So the first thing to do would be to not render the black border, and the fill color can be opaque.
Why can't you use the GL_LINES primitive to do what you intend to do? You can specify width, filtering, smoothness, texture - anything. You can render all vertices using glDrawArrays(). I know this is not something you have in mind, but as you are focusing on 2D drawing, this might be the easier approach. (Search for textured lines etc.)


How to detect the coordinates of certain points on an image

I'm using the ORB algorithm to detect and get the coordinates of the crossings of the rope shown in the image, represented by the red dot. I want to detect the coordinates of the four points surrounding the crossing, represented by the blue dots. All four points are the same distance from the red spot.
Any idea how to get their coordinates by making use of the red spot's coordinates?
Thank you in advance.
Although you're using ORB, you're still going to need an algorithm to segment the rope from the background, or at least some technique to identify image chunks that belong to the rope and that are equidistant from the red dot. There are a number of options to explore.
It's important to consider your lighting & imaging as separate problems to be solved if this is meant to be a real-world application. This looks a bit like a problem for a class rather than for an application you'll sell and support, but you should still consider lighting:
Will your algorithm(s) still work when light level is reduced?
How will detection be affected by changes in camera pose relative to the surface where the rope will be located?
If you'll be detecting "black" rope, will the algorithm also be required to detect rope of different colors? dirty rope? rope on different backgrounds?
Since your object of interest is rope, you have to consider a class of algorithms suitable for detection of non-rigid objects. Always consider the simplest solution first!
Connected Components
Connected components labeling is a traditional image processing algorithm and still suitable as the starting point for many applications. The last I knew, this was implemented in OpenCV as findContours(). This can also be called "blob finding" or some variant thereof.
https://en.wikipedia.org/wiki/Connected-component_labeling
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours
Depending on lighting, you may have to take different steps to binarize the image before running connected components. As a start, convert the color image to grayscale, which will simplify the task significantly.
Try a manual threshold since you can quickly test a number of values to see the effect. Don't be too discouraged if the binarization isn't quite right--this can often be fixed with preprocessing.
If a range of manual thresholds works (e.g. 52 - 76 in an 8-bit grayscale range), then use an algorithm that will automatically calculate the threshold for you: Otsu, entropy-based methods, etc., will all offer comparable performance. Whichever technique works best, the code/algorithm can be tweaked further to optimize for your rope application.
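A minimal OpenCV (C++) sketch of the pipeline described above -- grayscale, Otsu binarization, then contours; "rope.png" is a placeholder input:

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat img = cv::imread("rope.png"), gray, bin;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);

        // Otsu picks the threshold automatically; THRESH_BINARY_INV makes
        // the dark rope white (foreground) on a black background.
        cv::threshold(gray, bin, 0, 255,
                      cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

        // Connected components via contours; the rope should be the largest.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(bin, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);
        return 0;
    }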
If thresholding and binarization don't work--which for your rope application seems unlikely, at least how you've presented it--then switch to thinking in terms of gradient-based (edge-based, energy-based) techniques.
But assuming you can separate the rope from the background, you're still going to need a method to start at the red dot [within the rope] and move equal distances out to the blue points. More about that later after a discussion of other rope segmentation methods.
Note: connected components labeling can work in scenarios beyond just binarizing black & white images. If you can create a texture field or some other 2D representation of the image that makes it possible to distinguish the black rope from the relatively light background, you may be able to use a connected components algorithm. (Finding a "more complicated" or "more modern" algorithm isn't necessarily going to be the right approach.)
In a binarized image, blobs can be nested: on a white background you can have several black blobs, inside of one or more of which are white blobs, inside of which are black blobs, etc. An earlier version of OpenCV handled this reasonably well. (OpenCV is a nice starting point, and a touchpoint for many, but for a number of reasons it doesn't always compare favorably to other open source and commercial packages; popularity notwithstanding, OpenCV has some issues.)
Once you have a "blob" (a 4-connected region of pixels) in a 2D digital image, you can treat the blob as an object, at which point you have a number of options:
Edge tracing: trace around the inside and outside edges of the blob. From what I recall, OpenCV does (or at least should) have some relatively straightforward method to get the edges.
Split the blob into component blobs, each of which can be treated separately
Convert the blob to a polygon
...
A connected components algorithm should be high on the list of techniques to try if you have a non-rigid object.
Boolean Operations
Once you have the rope as a connected component (and possibly even without this), you can use boolean image operations to find the spots at the blue dots in your image:
Create a circular region in data, or even in the image
Find the intersection of the circle (an annulus) and the black region representing the rope. Using your original image, you should have four regions.
Find the center point of the intersection regions.
You could even try this without using connected components at all, but using connected components as part of the solution could make it more robust.
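A hedged OpenCV sketch of that boolean idea, assuming the binarized rope mask from the previous section; the helper name and ring thickness are mine:

    #include <opencv2/opencv.hpp>

    std::vector<cv::Point2f> pointsOnRopeAtDistance(const cv::Mat& bin,
                                                    cv::Point redDot, int D)
    {
        // Thin ring (annulus) of radius D around the red dot.
        cv::Mat ring = cv::Mat::zeros(bin.size(), CV_8U);
        cv::circle(ring, redDot, D, cv::Scalar(255), /*thickness=*/3);

        cv::Mat hits;
        cv::bitwise_and(bin, ring, hits);       // ring AND rope: ~4 small arcs

        std::vector<std::vector<cv::Point>> blobs;
        cv::findContours(hits, blobs, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);

        std::vector<cv::Point2f> centers;
        for (auto& b : blobs)
        {
            cv::Moments m = cv::moments(b);     // centroid of each arc
            if (m.m00 > 0)
                centers.push_back({ float(m.m10 / m.m00),
                                    float(m.m01 / m.m00) });
        }
        return centers;                         // the four blue-dot locations
    }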
Polygon Simplification
If you have a blob, which in your application would be a connected set of black pixels representing the rope on the floor, then you can consider converting this blob to one or more polygons for further processing. There are advantages to working with polygons.
If you consider only the outside boundary of the rope, then you can see that the set of pixels defining the boundary represents a polygon. It's a polygon with a lot of points, and not a convex polygon, but a polygon nonetheless.
To simplify the polygon, you can use an algorithm such as Ramer-Douglas-Peucker:
https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
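If you're already in OpenCV, the linked algorithm is available directly as approxPolyDP; a one-call sketch (epsilon is the maximum deviation in pixels):

    #include <opencv2/opencv.hpp>

    std::vector<cv::Point> simplifyBoundary(const std::vector<cv::Point>& boundary)
    {
        std::vector<cv::Point> simplified;
        double epsilon = 2.0;   // max deviation from the original outline, px
        cv::approxPolyDP(boundary, simplified, epsilon, /*closed=*/true);
        return simplified;
    }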
Once you have a simplified polygon, you can try a few techniques to extract useful data from it:
Angle Bisector Network
Triangulation (e.g. using ear clipping)
Triangulation is typically dependent on initial conditions, so the resulting triangulations for slightly different polygons (that is, rope -> blob -> polygon -> simplified polygon) can differ considerably. So in your application it might be useful to triangulate the dark rope region, and then to connect the center of one triangle to the center of the next nearest triangle. You'll also have to deal with crossings, such as the rope overlap. Ultimately this can yield a "skeletonization" of the rope. Speaking of which...
Skeletonization
If the rope problem was posed to you as a class exercise, then it may have been a prompt to try skeletonization. You can read about it here:
https://en.wikipedia.org/wiki/Topological_skeleton
Skeletonization and thinning have their own problems to solve, but you should dig into them a bit and see those problems for yourself.
The Medial Axis Transform (MAT) is a related concept. Long story there.
Edge-based techniques
There are a number of techniques to generate "edge images" based on edge strength, energy, entropy, etc. Making them robust takes a little effort. If you've had academic training in image processing you've likely heard of Harris, Sobel, Canny, and similar processing methods--none are magic bullets, but they're simple and dependable and will yield the data you need.
An "edge image" consists of pixels representing the image gradient strength [and sometimes the gradient direction]. People may call this edge image something else, but it's the concept that matters.
What you then do with the edge data is another subject altogether. But one reason to think of edge images (or at least object borders) is that it reduces the amount of information your algorithm(s) will need to process.
Mean Shift (and related)
To get back to segmentation mentioned in the section on connected components, there are other methods for segmenting figures from a background: K-means, mean shift, and so on. You probably won't need any of those, but they're neat and worth studying.
Stroke Width Transform
This is an intriguing technique used to extract text from noisy backgrounds. Although it's intended for OCR, it could work for rope since the rope width is relatively constant, the rope shape varies, there are crossings, etc.
In short, and simplifying quite a bit, you can think of SWT as a means to find "strokes" (thick lines) by finding gradients antiparallel to each other. On either side of a stroke (or line), the edge gradient points normal to the object edge. The normal on one side of the stroke points opposite the direction of the normal on the other side of the stroke. By filtering for pixel-gradient pairs within a certain distance of each other, you can isolate certain strokes--even automatically. For your example the collection of points representing edge pairs for the rope would be much more common than other point pairs.
Non-Rigid Matching
There are techniques for matching non-rigid shapes, but they would not be worth exploring yet. If any of the techniques I mentioned above is unfamiliar to you, explore some of those first before you try any fancier algorithms.
CNNs, machine learning, etc.
Just don't even think of these methods as a starting point.
Other Considerations
If this were an application for industry, security, or whatnot, you'd have to determine how well your image processing worked under all environmental considerations. That's not an easy task, and can make all the difference between a setup that "works" in the lab and a setup that actually works in practice.
I hope that's of some help. Feel free to post a reply if I've confused more than helped, or if you want to explore some idea in more detail. Though I tried to touch on some common(ish) techniques, I didn't mention all the different ways of addressing this problem.
And briefly: once you have a skeleton, point network, or whatever representing a reduced data set for the rope and the red dot (the identified feature), a few techniques to find the items at the blue dots:
For a skeleton, trace along each "branch" of the rope outward from the knot until the geodesic distance or straight-line 2D distance is the distance D that you want.
To use geometry, create a circle of width 1 - 2 pixels. Find the intersection of that circle and the rope. Find the center point of the intersections of circle and rope. (Also described above.)
Good luck!

OpenGL : thick and smooth/non-broken lines *in 3D*

I have a 3D CAD-like application for which I use an OpenGL wrapper library (OpenSceneGraph). For this application I am trying to come up with the best strategy for rendering thick and smooth lines in 3D.
By thick and smooth I mean:
line thickness can be more than OpenGL maximum linewidth value (it seems to be 10.f on my machine)
when composing polylines I want to avoid the look of "broken lines" (see example image below)
At the moment I render my polylines by using GL_LINE_STRIP_ADJACENCY.
I found there are many different resources on how to render nice looking lines and curves in 2D. The simplest approach that does not require much thinking is to render the line as a set of quads (GL_QUAD_STRIP). The good thing about this solution is that it solves both of my problems at the same time.
As an example, I also found this nice library that allows one to achieve a wide range of line and curve looks. It uses triangles for rendering.
Note: I am not seeking fancy effects like per-vertex coloring or brush-like strokes, just a 3D line segment that can have a large thickness and that connects well with another line segment without any gaps between them.
The problem with those 2D approaches is that they are 2D. When I change the view point, it is obvious my line geometries are not lines but rather 2D "ribbons" lying in certain 3D planes. And I want them to look like 3D lines.
When thinking about the problem, I could only come up with the following approaches:
Render each line as a set of 2D quads (triangles) and then make them always face the camera
Use some 3D shape like cylinder to represent a line segment
I am not sure how feasible either of the two solutions is (I am a beginner in OpenGL). I might have hundreds or even thousands of polylines in the scene. I am also wondering if there is a better, smarter way to approach the problem? I am open to anything and interested in the most efficient way. Thank you.
EDIT: as pointed out by user @rickyviking, I didn't clarify explicitly that I am going after a 2D look (as in any CAD-like app), which means: the thickness of the lines does not depend on how far/near the camera is from them.
UPDATE: thanks to the answer of @rickyviking, I chose the direction to move in - geometry shaders. I still do not have a complete solution, but I might post a final update and minimal code here when the result is achieved.
I ended up using a geometry shader, as was suggested by one of the answers. The main idea was to convert every line segment into a triangle strip and to make sure that it always faces the camera and the thickness remains the same.
I also wrote a blog post about the implementation details. If someone finds it useful, I placed the shader code in one of my github repos (the code also provides an example of how to draw thick and smooth Bezier curves using the same technique).
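The author's actual shaders live in the linked repo; purely as a hedged illustration of the idea, a geometry shader along these lines (held in a C++ raw string; uViewport and uThickness are assumed uniform names) expands each segment into a screen-aligned quad of constant pixel width:

    // Not the author's shader -- a sketch of the segment-to-quad expansion.
    const char* kThickLineGS = R"(
    #version 330 core
    layout(lines) in;
    layout(triangle_strip, max_vertices = 4) out;

    uniform vec2  uViewport;    // viewport size in pixels
    uniform float uThickness;   // desired line width in pixels

    void main()
    {
        vec4 p0 = gl_in[0].gl_Position;     // clip-space segment endpoints
        vec4 p1 = gl_in[1].gl_Position;

        vec2 ndc0 = p0.xy / p0.w;           // perspective divide so the
        vec2 ndc1 = p1.xy / p1.w;           // width is constant on screen

        vec2 dir  = normalize((ndc1 - ndc0) * uViewport);
        vec2 norm = vec2(-dir.y, dir.x);    // screen-space perpendicular

        // Per-side offset of uThickness/2 pixels, expressed in NDC units.
        vec2 off = norm * uThickness / uViewport;

        gl_Position = vec4((ndc0 + off) * p0.w, p0.zw); EmitVertex();
        gl_Position = vec4((ndc0 - off) * p0.w, p0.zw); EmitVertex();
        gl_Position = vec4((ndc1 + off) * p1.w, p1.zw); EmitVertex();
        gl_Position = vec4((ndc1 - off) * p1.w, p1.zw); EmitVertex();
        EndPrimitive();
    }
    )";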
Here are some result screenshots (green lines are drawn using the shaders, red are by using GL_LINE_STRIP_ADJACENCY):
Note the gaps between the adjacent line segments of the red lines compared to the green ones.
Two Bezier curves (all drawn using the shaders):
First of all you need to clarify for yourself whether the result you're after should "look" 2D or 3D.
When you mention cylinders, they will certainly have a "3D effect" (geometries farther away appear thinner/smaller than geometries closer to the viewpoint), but that is not what CAD-like applications normally look like.
I believe you're after a 2D-looking result, where the width of your lines is independent of their distance from the viewpoint.
When using a library like the one you mention, you would need to recompute the geometries every time you change your viewpoint (possibly every frame) to have them always face the screen plane.
An alternative and better-performing approach would be to use the "solid wireframe" technique; here is an implementation from NVIDIA, which uses geometry shaders (you can find several examples of how to use them in OpenSceneGraph).
Other similar techniques are shown here: https://forum.libcinder.org/topic/smooth-thick-lines-using-geometry-shader
I don't know too much about it but one of the many interesting things about NVidia's "path rendering" SDK (which was on the face of it just another one of those nice 2D libraries you're talking about) was that it actually put a bit of effort into interoperating with 3D. See the "Eureka: 3D Path Rendering!" section in this whitepaper.
There used to be talk of it becoming a standard (non-vendor-specific) OpenGL extension but I'm not sure it ever happened.
I've used cylinders before, but that was for an application that needed actual real-world sizes, varying sizes within one line, and even color changes.

Mesh simplification of a grid-like structure

I'm working on a 3D building app. The building is done on a 3D grid (like a Rubik's Cube), and each cell of the grid is either a solid cube or a 45 degree slope. To illustrate, here's a picture of a chamfered cube I pulled off of google images:
Ignore the image to the right, the focus is the one on the left. Currently, in the building phase, I have each face of each cell drawn separately. When it comes to exporting it, though, I'd like to simplify it. So in the above cube, I'd like the up-down-left-right-back-front faces to be composed of a single quad each (two triangles), and the edges would be reduced from two quads to single quads.
What I've been trying to do most recently is the following:
Iterate through the shape layer by layer, from all directions, and for each layer figure out a good simplification (remove overlapping edges to create a single polygon, then split the polygon to avoid holes, and use ear clipping to triangulate).
I'm clearly overcomplicating things (at least I hope I am). If I've got a list of vertices, normals, and indices (currently with lots of duplicate vertices), is there some tidy way to simplify? The limitation is that indices can't be shared between faces (because I need the normals pointing in different directions), but otherwise I don't mind if it's not the fastest or most optimal solution; I'd rather it be easy to implement and maintain.
EDIT: Just to further clarify, I've already performed hidden face removal, that's not an issue. And secondly, it's of utmost importance that there is no degradation in quality, only simplification of the faces themselves (I need to retain the sharp edges).
Thanks goes to Roger Rowland for the great tips! If anyone else stumbles upon this question, here's a short summary of what I did:
First thing to tackle: ensure that the mesh you are attempting to simplify is a manifold mesh! This is a requirement for traversing halfedge data structures. One instance where I had issues with this was overlapping quads and triangles; I initially resolved to just leave the quads whole, rather than splitting them into triangles, because it was easier, but that resulted in edges that broke the halfedge mesh.
Once the mesh is manifold, create a halfedge mesh out of the vertices and faces.
With that done, decimate the mesh. I did it via edge collapsing, determining which edges to collapse by normal deviation (in my case, if the faces resulting from the collapse had normals not equal to their original values, then the collapse was not performed).
I did this via my own implementation at first, but I started running into frustrating bugs, and thus opted to use OpenMesh instead (it's very easy to get started with).
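For anyone taking the same route, a hedged sketch of what the OpenMesh decimation setup looks like (module and method names as I recall them from the OpenMesh docs; verify against your installed version):

    #include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
    #include <OpenMesh/Tools/Decimater/DecimaterT.hh>
    #include <OpenMesh/Tools/Decimater/ModNormalDeviationT.hh>

    typedef OpenMesh::TriMesh_ArrayKernelT<> Mesh;

    void simplify(Mesh& mesh)
    {
        mesh.request_face_normals();
        mesh.update_face_normals();

        OpenMesh::Decimater::DecimaterT<Mesh> decimater(mesh);
        OpenMesh::Decimater::ModNormalDeviationT<Mesh>::Handle hNormDev;
        decimater.add(hNormDev);
        // Collapse an edge only if no incident face normal rotates by more
        // than ~1 degree -- keeps the sharp 45-degree slope edges intact.
        decimater.module(hNormDev).set_normal_deviation(1.0f);

        decimater.initialize();
        decimater.decimate();        // collapse until no legal collapse remains
        mesh.garbage_collection();   // actually remove the deleted elements
    }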
There's still one issue I have yet to resolve: if there are two cubes diagonally to one another, touching, the result is an edge with four faces connected to it: a complex edge! I suspect it'd be trivial to iterate through the edges checking for the number of faces connected, and then resolving by duplicating the appropriate vertices. But with that said, it's not something I'm going to invest the time in fixing, unless it becomes a critical issue later on.
I am giving a theoretical answer.
For the figure on the left, find all edge-sharing triangles with the same normal (the same x, y, z components; use unit normals so that positive scaling of the vectors does not affect the comparison). Merge them. Then retriangulating the merged region with maximum aspect ratio will give the solution you want.
Another easy and workable approach to mesh simplification is the following:
Take the NORMALS and divide each by its magnitude (the square root of the sum of the squared coordinates); this gives a unit normal vector. Then take the DOT PRODUCT between the unit normals of adjacent triangles (multiply the x, y, z coordinates pairwise and add them up). This gives the COSINE of the angle between the normals, and hence between the triangles. Pick a range (like 0.99-1.0), merge all adjacent triangles whose normals fall within that range of the reference triangle's normal, and retriangulate. We can safely ignore some small-area triangles pointing in odd directions.
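A minimal sketch of that test in code:

    #include <cmath>

    struct Vec3 { double x, y, z; };

    static Vec3 unit(Vec3 v)
    {
        double m = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);  // magnitude
        return { v.x / m, v.y / m, v.z / m };
    }

    bool facesMergeable(Vec3 n0, Vec3 n1, double minCos = 0.99)
    {
        Vec3 a = unit(n0), b = unit(n1);
        double cosAngle = a.x*b.x + a.y*b.y + a.z*b.z;      // dot product
        return cosAngle >= minCos;   // 0.99 allows roughly 8 degrees
    }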
There is also another proposal for a simpler mesh reduction for models like your left figure or building figures. Define a pre-defined set of face normals (here 6 + 8 = 14 canonical directions), classify all faces according to which of these directions they are closest to (by dot product), then merge each class and retriangulate.
Google "mesh simplification". You'll find that this problem is a huge one and is heavily researched. Take a look at these introductory resources: link (p.11 starts the good stuff) and link. CGAL has a good discussion, as well: link.
Once familiar with the issues, you'll have some decisions to make when applying simplification to your problem. How fast should the simplification be? How important is accuracy? (Iterative vertex clustering is a quick-and-dirty approach, but its results can be arbitrarily ugly.) Can you rely on a 3rd-party library? (e.g. CGAL? GTS doesn't appear active any longer, but there are others.)

Ray-mesh intersection or AABB tree implementation in C++ with little overhead?

Can you recommend me...
either a proven lightweight C / C++ implementation of an AABB tree?
or, alternatively, another efficient data-structure, plus a lightweight C / C++ implementation, to solve the problem of intersecting a large number of rays with a large number of triangles?
"Large number" means several 100k for both rays and triangles.
I am aware that AABB trees are part of the CGAL library and probably of game physics libraries like Bullet. However, I don't want the overhead of an enormous additional library in my project. Ideally, I'd like a small, float-type-templated, header-only implementation. I would also go for something with a bunch of CPP files, as long as it integrates easily into my project. A dependency on Boost is OK.
Yes, I have googled, but without success.
I should mention that my application context is mesh processing, and not rendering. In a nutshell, I'm transferring the topology of a reference mesh to the geometry of a mesh from a 3D scan. I'm shooting rays from vertices and along the normals of the reference mesh towards the 3D scan, and I need to recover the intersection of these rays with the scan.
Edit
Several answers/comments pointed to nearest-neighbor data structures. I have created a small illustration regarding the problems that arise when ray-mesh intersections are approached with nearest-neighbor methods. Nearest-neighbor methods can be used as heuristics that work in many cases, but I'm not convinced that they actually solve the problem systematically, the way AABB trees do.
While this code is a bit old and uses the 3DS Max SDK, it provides a fairly good tree system for object-object collision deformations in C++. I can't tell at a glance whether it is a quad-tree, an AABB-tree, or even an OBB-tree (the comments are a bit skimpy too).
http://www.max3dstuff.com/max4/objectDeform/help.html
It will require translation from Max to your own system but it may be worth the effort.
Try the ANN library:
http://www.cs.umd.edu/~mount/ANN/
It's "Approximate Nearest Neighbors". I know, you're looking for something slightly different, but here's how you can use this to speed up your data processing:
Feed points into ANN.
Query a user-selectable radius (think of this as a "per-mesh knob") around each vertex that you want to ray-cast from, and find the mesh vertices that are within range.
Select only the triangles that are within that range, and ray trace along the normal to find the one you want.
By judiciously choosing the search radius, you will definitely get a sizable speed-up without compromising on accuracy.
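A hedged sketch of step 2 with ANN's fixed-radius query (annkFRSearch takes the squared radius; check the ANN manual for the exact semantics):

    #include <ANN/ANN.h>
    #include <algorithm>
    #include <vector>

    std::vector<int> verticesNearRay(ANNkd_tree* tree, ANNpoint origin,
                                     double radius, int maxHits = 64)
    {
        std::vector<int>    idx(maxHits);
        std::vector<double> dist(maxHits);

        // ANN wants the *squared* radius; the return value is the total
        // number of points inside the radius (may exceed maxHits).
        int found = tree->annkFRSearch(origin, radius * radius, maxHits,
                                       idx.data(), dist.data());

        idx.resize(std::min(found, maxHits));
        return idx;                 // candidate mesh vertices near this ray
    }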
If there are no real-time requirements, I'd first try brute force.
1M * 1M ray-triangle tests shouldn't take much more than a few minutes to run (on the CPU).
If that's a problem, the second-best thing to do would be to restrict the search area by calculating an adjacency graph/relation between the triangles/polygons in the target mesh. After an initial guess fails, one can try the adjacent triangles. This of course relies on a lack of self-occlusion / multiple hit points (which I think is one interpretation of "visibility doesn't apply to this problem").
Also, depending on how pathological the topologies are, one could try environment-mapping the target mesh onto a unit cube (each pixel would consist of a list of triangles projected onto it) and testing the initial candidate with a single ray->AABB test + lookup.
Given the feedback, there's one more simple option to consider -- space partitioning into a simple 3D grid, where each dimension can be subdivided by the histogram of the x/y/z locations, or even regularly.
A 100x100x100 grid has a very manageable size of 1e6 entries.
The maximum number of cells to visit is proportional to the diameter (max 300).
There are ~60000 surface cells, which suggests on the order of 10 triangles per cell.
Caveat: triangles must be placed in every cell they occupy; a conservative algorithm places them in cells they don't strictly belong to, and large triangles will probably require clipping and reassembly.

OpenGL- Simple 2D clipping/occlusion method?

I'm working on a relatively small 2D (top-view) game demo, using OpenGL for my graphics. It's going for a basic stealth-based angle, and as such with all my enemies I'm drawing a sight arc so the player knows where they are looking.
One of my problems so far is that when I draw this sight arc (as a filled polygon) it naturally shows through any walls on the screen since there's nothing stopping it:
http://tinyurl.com/43y4o5z
I'm curious how I might best be able to prevent something like this. I do already have code in place that will let me detect line-intersections with walls and so on (for the enemy sight detection), and I could theoretically use this to detect such a case and draw the polygon accordingly, but this would likely be quite fiddly and/or inefficient, so I figure if there's any built-in OpenGL systems that can do this for me it would probably do it much better.
I've tried looking for questions on topics like clipping/occlusion but I'm not even sure if these are exactly what I should be looking for; my OpenGL skills are limited. It seems that anything using, say, glClipPlane or glScissor wouldn't be suited to this due to the large number of individual walls and so on.
Lastly, this is just a demo I'm making in my spare time, so graphics aren't exactly my main worry. If there's a (reasonably) painless way to do this then I'd hope someone can point me in the right direction; if there's no simple way then I can just leave the problem for now or find other workarounds.
This is essentially a shadowing problem. Here's how I'd go about it:
For each point around the edge of your arc, trace a (2D) ray from the enemy towards the point, looking for intersections with the green boxes. If the green boxes are always going to be axis-aligned, the math will be a lot easier (look for Ray-AABB intersection). Rendering the intersection points as a triangle fan will give you your arc.
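A minimal 2D slab-method ray-AABB test along those lines (a sketch, with the usual divide-by-zero caveat when a ray runs exactly parallel to a slab):

    #include <algorithm>

    struct Ray2  { float ox, oy, dx, dy; };          // origin + unit direction
    struct AABB2 { float minX, minY, maxX, maxY; };

    // Nearest hit distance along the ray, or tMax if the box is missed.
    float rayAABB(const Ray2& r, const AABB2& b, float tMax)
    {
        // Entry/exit parameters per slab; division by zero gives +/-inf,
        // which the min/max handling below tolerates in the common case.
        float tx1 = (b.minX - r.ox) / r.dx, tx2 = (b.maxX - r.ox) / r.dx;
        float ty1 = (b.minY - r.oy) / r.dy, ty2 = (b.maxY - r.oy) / r.dy;

        float tNear = std::max(std::min(tx1, tx2), std::min(ty1, ty2));
        float tFar  = std::min(std::max(tx1, tx2), std::max(ty1, ty2));

        if (tNear > tFar || tFar < 0.0f || tNear > tMax)
            return tMax;                   // miss: keep the full arc length
        return std::max(tNear, 0.0f);      // hit: clip the arc at the wall
    }

Running this against every wall box for each arc-edge ray and keeping the smallest result gives the clipped fan vertex for that ray.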
As you mention that you already have the line-wall intersection code going, then as long as that will tell you the distance from the enemy to the wall, then you'll be able to use it for the sight arc. Don't automatically assume it'll be too slow - we're not running on 486s any more. You can always reduce the number of points around the edge of your arc to speed things up.
OpenGL's built-in occlusion handling is designed for 3D tasks and I can't think of a simple way to rig it to achieve the effect you are after. If it were me, the way I would solve this is to use a fragment shader program, but be forewarned that this definitely does not fall under "a (reasonably) painless way to do this". Briefly, you first render a binary "occlusion map" which is black where there are walls and white otherwise. Then you render the "viewing arc" like you are currently doing with a fragment program that is designed to search from the viewer towards the target location, searching for an occluder (black pixel). If it finds an occluder, then it renders that pixel of the "viewing arc" as 100% transparent. Overall though, while this is a "correct" solution I would definitely say that this is a complex feature and you seem okay without implementing it.
I figure if there's any built-in OpenGL systems that can do this for me it would probably do it much better.
OpenGL is a drawing API, not a geometry processing library.
Actually your intersection-test method is the right way to do it. However, to speed it up you should use a spatial subdivision structure. In your case you have something that cries out for a Binary Space Partitioning (BSP) tree. BSP trees have the nice property that the complexity of finding the intersections of a line with walls is on average O(log n), with a worst case of O(n log n); in other words, BSP trees are very efficient. See the BSP FAQ for details: http://www.opengl.org//resources/code/samples/bspfaq/index.html