I need an algorithm to render pixel perfect triangles without using OpenGL.
Where can I get such an algorithm, like the one OpenGL uses? Preferably single-threaded.
It must do it the way OpenGL does, because OpenGL doesn't leave gaps or overlapping pixels on two triangles that share an edge. The scanline method makes those errors: http://joshbeam.com/articles/triangle_rasterization/ . I tried this half-space algorithm too: http://devmaster.net/forums/topic/1145-advanced-rasterization/ but I couldn't make it work (it renders gibberish).
How does OpenGL do it? Is it such an intensive job that I should just forget about it and use the (buggy) scanline method instead?
Where can I get the algorithm?
I would prefer a C++ solution but the language isn't the problem.
OpenGL uses a scanline-based method, as Alan and the article you linked mention. However, avoiding gaps is not just a matter of arithmetic accuracy. You need an additional convention for the pixels that lie exactly on the edges of a triangle. The convention OpenGL uses is the same one Direct3D uses. It's called the "top-left" rule, and the best explanation I've seen is the MSDN rasterization rules article.
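To illustrate the idea, here is a minimal sketch (not any driver's actual code) of a half-space rasterizer that applies a top-left fill rule with integer vertex coordinates; the winding convention and helper names are assumptions made for the example:

```cpp
// Hedged sketch of a half-space (edge function) rasterizer applying a top-left
// fill rule with integer vertex coordinates. Convention assumed: raster
// coordinates with y increasing downwards, and the vertices ordered so that
// edgeFunction(v0, v1, v2) > 0 (swap two vertices if it is not).
#include <algorithm>
#include <cstdint>

struct Point { int x, y; };

// Twice the signed area of triangle (a, b, c).
static int64_t edgeFunction(const Point& a, const Point& b, const Point& c) {
    return int64_t(b.x - a.x) * (c.y - a.y) - int64_t(b.y - a.y) * (c.x - a.x);
}

// Under the convention above: a "top" edge is horizontal and points right,
// a "left" edge points up. Pixels exactly on any other edge are not owned.
static bool isTopLeft(const Point& a, const Point& b) {
    return (a.y == b.y && b.x > a.x) || (b.y < a.y);
}

void rasterizeTriangle(Point v0, Point v1, Point v2, void (*plot)(int, int)) {
    int minX = std::min({v0.x, v1.x, v2.x}), maxX = std::max({v0.x, v1.x, v2.x});
    int minY = std::min({v0.y, v1.y, v2.y}), maxY = std::max({v0.y, v1.y, v2.y});

    // Biasing a non-top-left edge by -1 turns its ">= 0" test into "> 0",
    // so a pixel lying exactly on a shared edge is drawn by exactly one triangle.
    int64_t bias0 = isTopLeft(v1, v2) ? 0 : -1;
    int64_t bias1 = isTopLeft(v2, v0) ? 0 : -1;
    int64_t bias2 = isTopLeft(v0, v1) ? 0 : -1;

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            Point p{x, y};  // sample at the integer grid position, for simplicity
            if (edgeFunction(v1, v2, p) + bias0 >= 0 &&
                edgeFunction(v2, v0, p) + bias1 >= 0 &&
                edgeFunction(v0, v1, p) + bias2 >= 0)
                plot(x, y);
        }
}
```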
A typical OpenGL implementation will use the scanline/rasterization method more or less the same as your first link. The key to avoiding the gaps and overlaps is to be very careful with your math to avoid uncertainty in rounding errors.
If you think of pixels as squares of the screen with some finite area, then you can get overlaps, because adjoining triangles will often both overlap a pixel. But if instead you think of each pixel as the infinitely small point at the center of the square, then you can guarantee no gaps or overlaps.
You also need the arithmetic certainty of fixed-point integer math instead of the rounding errors of floating-point math. The left end of each rasterized scanline starts at the first pixel whose center is greater than or equal to the starting scan value; the right end stops at the last pixel whose center is strictly less than the ending scan value. That half-open convention means two spans that meet at the same edge value never fight over a pixel.
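As a concrete illustration of that convention, here is a hedged sketch of filling one scanline span, assuming the edge x positions have been tracked in 16.16 fixed point; the names and the fixed-point format are illustrative, not taken from any implementation:

```cpp
#include <cstdint>

void fillSpan(int32_t xLeftFixed, int32_t xRightFixed, int y,
              void (*plot)(int x, int y)) {
    // First pixel whose center (x + 0.5) is >= the left edge: ceil(left - 0.5).
    int xStart = (xLeftFixed - 0x8000 + 0xFFFF) >> 16;
    // One past the last pixel whose center is < the right edge: ceil(right - 0.5).
    int xEnd   = (xRightFixed - 0x8000 + 0xFFFF) >> 16;

    // Half-open interval [xStart, xEnd): if a neighbouring triangle's left edge
    // equals this triangle's right edge, the shared column is drawn exactly once.
    for (int x = xStart; x < xEnd; ++x)
        plot(x, y);
}
```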
Take a look at Mesa, an open-source implementation of the OpenGL specification. It currently supports up to OpenGL 3.0.
Here's what I have:
I load a texture from disk, say 256x256, say a picture of a penguin
I create another texture with same dimensions and I draw some stuff on it
I want to find the distance between the two OpenGL textures AS FAST AS POSSIBLE (accuracy of the distance function is of LEAST concern).
Actually, they might not be textures in the first place.
I might as well compare a region of the framebuffer to the loaded texture or do some other "magic".
How to do this super-fast?
This has little to do with OpenGL in fact. OpenGL is just a 3D rasterization "driver".
The main idea is to get a distance/similarity algorithm, which is a tricky task. To begin with, you could use a root-mean-square deviation (RMSD) algorithm. It will give you a single number describing how far apart the pixel values are.
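As a rough illustration (not a tuned implementation), an RMSD over two same-sized RGBA8 buffers that have already been read back to the CPU, e.g. with glReadPixels or glGetTexImage, might look like this:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>

double rmsDifference(const uint8_t* a, const uint8_t* b, size_t byteCount) {
    double sumSq = 0.0;
    for (size_t i = 0; i < byteCount; ++i) {
        double d = double(a[i]) - double(b[i]);
        sumSq += d * d;
    }
    return std::sqrt(sumSq / double(byteCount)); // 0 = identical, larger = more different
}
```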
Once you implement it on the CPU you can benchmark it for your needs and maybe port it to OpenCL. I don't think loading the "penguin" to the GPU just to compare it to the other shapes you prepare there is a super-fast process in itself.
Try to be more specific with your next question, and avoid the attitude of "I have a very good microscope, how do I drive nails with it super-fast?"
You could use the Pearson product-moment correlation for matching different images. You can find it in an answer of mine. Instead of matching a template smaller than the original image, you could correlate the two images directly.
In short, you take the deviation of each texel from the average, which gives you a correlation factor.
But improving that algorithm by using shaders could be a little hard. First, you have to compute the texel average: maybe some OpenGL extension (like the histogram) can help you with this task.
Then you could use a fragment shader to perform the per-component computation (the difference between each texel and the average computed previously). The average texel has to be passed as a uniform, and the result should be stored in a floating-point texture (you can render it to a framebuffer object).
Then you should sum up all texels of the resulting texture in order to get the correlation between the two source textures.
This may be worthwhile when the images are very big. Otherwise I think it's just better to execute the algorithm on the CPU, using a SIMD instruction set (MMX, SSE, AVX).
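For the CPU route, a minimal, unvectorised sketch of the Pearson correlation between two equally sized single-channel images (0..255 values) could look like this; the SIMD and shader variants are left out on purpose:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>

double pearsonCorrelation(const uint8_t* a, const uint8_t* b, size_t n) {
    double meanA = 0.0, meanB = 0.0;
    for (size_t i = 0; i < n; ++i) { meanA += a[i]; meanB += b[i]; }
    meanA /= double(n);
    meanB /= double(n);

    double cov = 0.0, varA = 0.0, varB = 0.0;
    for (size_t i = 0; i < n; ++i) {
        double da = a[i] - meanA, db = b[i] - meanB;
        cov  += da * db;
        varA += da * da;
        varB += db * db;
    }
    return cov / std::sqrt(varA * varB); // 1.0 = perfectly correlated; undefined for constant images
}
```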
For example, I could make the positions at which I draw my entities relative to the screen before sending them to OpenGL, or I could give OpenGL the coordinates directly, relative to the game world, which are of course on a much larger scale.
Is drawing with large vertex coordinates slow?
There aren't speed issues, but you will find that you get accuracy issues as the differences between your numbers will be very small in relation to the size of the numbers.
For example, if you model is centred at (0,0,0) and is 10 x 10 x 10 then two points at opposite sides of your model are at (-5,-5,-5) and (5,5,5). If that same model is centred at (1000,1000,1000) then your points are (995,995,995) and (1005,1005,1005).
While the absolute difference is the same, the difference relative to the coordinates isn't.
This could lead to rounding errors and display artefacts.
You should arrange the coordinates of models so that the centre of the model is at (0,0,0).
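A tiny self-contained demonstration of that precision argument: the gap between adjacent representable floats grows with magnitude, so the same model placed far from the origin can only be positioned in coarser steps (IEEE 754 single precision assumed; the printed values are approximate):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    float nearOrigin = 5.0f;
    float farAway    = 1005.0f;
    std::printf("step at %7.1f: %g\n", nearOrigin,
                std::nextafterf(nearOrigin, 1e30f) - nearOrigin); // roughly 4.8e-07
    std::printf("step at %7.1f: %g\n", farAway,
                std::nextafterf(farAway, 1e30f) - farAway);       // roughly 6.1e-05
}
```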
The speed for larger values will be the same as for smaller values of the same data type. But if you want more than the normal 32-bit precision and enable something like GL_EXT_vertex_attrib_64bit, then there may be performance penalties for that.
Large values are allowed, but one extra thing to watch out for other than the standard floating point precision issues is the precision of the depth buffer.
The depth precision is non-linearly distributed, so it is more precise close up and less precise for things far away. If you want to use a wide range of values for vertex coordinates, be aware that you may see z-fighting artifacts if you intend to render giant objects very far away from the camera and very small things close to the camera. Setting your near and far clip planes as close as possible to your actual scene extents will help, but may not be enough to fix the problem. See http://www.opengl.org/resources/faq/technical/depthbuffer.htm, especially the last two points, for more details.
Also, I know you asked about vertex positions, but texture coordinates may have additional precision limits beyond normal floating point rules. Assigning a texture coordinate of (9999999.5, 9999999.5) and expecting it to wrap properly to the (0..1, 0..1) range may not work as you expect (depending on implementation/hardware).
The floating-point pipeline runs its operations at constant speed, regardless of the magnitude of the operands.
The only danger is when numbers become abnormally large or small, where the relative error gets out of hand or values end up as subnormal numbers.
As seen in the image, I draw a set of contours (polygons) as GL_LINE_STRIP.
Now I want to select the curve (polygon) under the mouse, in order to delete it, move it, etc., in 3D.
I am wondering which method to use:
1. Use OpenGL picking and selection (glRenderMode(GL_SELECT)).
2. Use manual collision detection, by casting a pick ray and checking whether it falls inside each polygon.
I strongly recommend against GL_SELECT. This method is very old and absent in new GL versions, and you're likely to get problems with modern graphics cards. Don't expect it to be supported by hardware - probably you'd encounter a software (driver) fallback for this mode on many GPUs, provided it would work at all. Use at your own risk :)
Let me provide you with an alternative.
For solid, big objects, there's an old, good approach of selection by:
1. Enable and set the scissor test to a 1x1 window at the cursor position.
2. Draw the scene with no lighting, texturing or multisampling, assigning a unique solid colour to every "important" entity - this colour becomes the object ID for picking.
3. Call glReadPixels and retrieve the colour, which then serves to identify the picked object.
4. Clear the buffers, reset the scissor to the normal size and draw the scene normally.
This gives you a very reliable "per-object" picking method. Also, drawing and clearing only one pixel with minimal per-pixel work won't really hurt your performance, unless you are short on vertex processing power (unlikely, I think) or have a really large number of objects and are likely to become CPU-bound on the number of draw calls (but then again, I believe it's possible to optimize this down to a single draw call if you pass the colour as per-vertex data).
The colour in RGB is 3 unsigned bytes, but it should be possible to additionally use the alpha channel of the framebuffer for the last byte, so you'd get 4 bytes in total - enough to store any 32-bit pointer to the object as the colour.
Alternatively, you can create a dedicated framebuffer object with a specific pixel format (like GL_R32UI, or even GL_RG32UI if you need 64 bits) for that.
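Here is a hedged sketch of that picking pass in legacy-GL style; drawObjectFlat() is a placeholder for however the application draws one object with the current colour:

```cpp
#include <GL/gl.h>
#include <cstdint>

void drawObjectFlat(uint32_t id); // assumed to exist in the application

uint32_t pickObjectAt(int mouseX, int mouseY, int viewportHeight,
                      uint32_t objectCount) {
    // Restrict rasterization to the single pixel under the cursor.
    glEnable(GL_SCISSOR_TEST);
    glScissor(mouseX, viewportHeight - mouseY, 1, 1); // GL origin is bottom-left

    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_DITHER);      // dithering would perturb the ID colours
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (uint32_t id = 1; id <= objectCount; ++id) {
        // Encode the ID in RGB (24 bits here; use alpha as well if you need 32).
        glColor3ub((GLubyte)(id & 0xFF), (GLubyte)((id >> 8) & 0xFF),
                   (GLubyte)((id >> 16) & 0xFF));
        drawObjectFlat(id);
    }

    unsigned char rgba[4] = {0, 0, 0, 0};
    glReadPixels(mouseX, viewportHeight - mouseY, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);

    glDisable(GL_SCISSOR_TEST);
    // Caller then clears and redraws the scene normally.
    return rgba[0] | (rgba[1] << 8) | (rgba[2] << 16); // 0 = nothing hit
}
```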
The above is a nice and quick alternative (both in terms of reliability and implementation time) to the strict geometric approach.
I found that on new GPUs, the GL_SELECT mode is extremely slow. I played with a few different ways of fixing the problem.
The first was to do a CPU collision test, which worked, but wasn't as fast as I would have liked. It definitely slows down when you are casting rays into the screen (using gluUnProject) and then trying to find which object the mouse is colliding with. The only way I got satisfactory speeds was to use an octree to cut down the number of collision tests and then do a bounding-box collision test - however, this resulted in a method that was not pixel perfect.
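For reference, a minimal sketch of building that pick ray with gluUnProject, assuming the fixed-function matrices are current (the Ray struct is just for the example):

```cpp
#include <GL/glu.h>

struct Ray { double ox, oy, oz, dx, dy, dz; };

Ray mouseRay(int mouseX, int mouseY) {
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    // Window y is flipped relative to GL's bottom-left origin.
    double winX = mouseX;
    double winY = viewport[3] - mouseY;

    double nx, ny, nz, fx, fy, fz;
    gluUnProject(winX, winY, 0.0, model, proj, viewport, &nx, &ny, &nz); // near plane
    gluUnProject(winX, winY, 1.0, model, proj, viewport, &fx, &fy, &fz); // far plane

    return {nx, ny, nz, fx - nx, fy - ny, fz - nz}; // origin + (unnormalised) direction
}
```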
The method I settled on was to first find all the objects under the mouse (using gluUnProject and bounding-box collision tests), which is usually very fast. I then rendered each of the objects that had potentially collided with the mouse into the back buffer as a different color. I then used glReadPixels to get the color under the mouse and mapped that back to the object. glReadPixels is a slow call, since it has to read from the frame buffer. However, it is done once per frame, which ends up taking a negligible amount of time. You can speed it up by rendering to a PBO if you'd like.
Giawa
umanga, can't see how to reply inline... maybe I should sign up :)
First of all I must apologize for giving you the wrong algorithm - I described the back-face culling one. But the one you need is very similar, which is why I got confused... d'oh.
Get the camera-position-to-mouse vector as described before.
For each contour, loop through all its coords in pairs (0-1, 1-2, 2-3, ..., n-0) and make a vector out of each pair as before, i.e. walk the contour.
Now take the cross product of those two (contour edge with the mouse vector) instead of between consecutive pairs as I said before; do that for all the pairs and vector-add the results.
At the end, find the magnitude of the resulting vector. If the result is zero (allowing for rounding errors) then you're outside the shape, regardless of facing. If you're interested in facing, then instead of the magnitude you can take the dot product with the mouse vector to find the facing and test the sign.
It works because the algorithm finds the distance from the vector line to each point in turn. As you sum them up, if you are outside then they all cancel out because the contour is closed; if you're inside then they all add up. It's actually Gauss's law from electromagnetism in physics...
See http://en.wikipedia.org/wiki/Gauss%27s_law and note "the right-hand side of the equation is the total charge enclosed by S divided by the electric constant", noting the word "enclosed" - i.e. zero means not enclosed.
You can still do that optimization with the bounding boxes for speed.
In the past I've used GL_SELECT to determine which object(s) contributed the pixel(s) of interest and then used computational geometry to get an accurate intersection with the object(s) if required.
Do you expect to select by clicking the contour (on the edge) or the interior of the polygon? Your second approach sounds like you want clicks in the interior to select the tightest containing polygon. I don't think that GL_SELECT after rendering GL_LINE_STRIP is going to make the interior responsive to clicks.
If this was a true contour plot (from the image I don't think it is, edges appear to intersect) then a much simpler algorithm would be available.
You can't use selection if you stay with the lines, because you would have to click on the rendered line pixels themselves, not the space inside the lines bounding them, which is what I read as what you wish to do.
You can use Kos's answer, but in order to render the interior you need to solid-fill it, which would involve converting all of your contours to convex shapes, which is painful. So I think that approach would work sometimes and give the wrong answer in other cases unless you did that conversion.
What you need to do is use the CPU. You have the view extents from the viewport and the perspective matrix. With the mouse coords, generate the view-to-mouse-pointer vector. You also have all the coords of the contours.
Take the first coord of the first contour and make a vector to the second coord. Take the 3rd coord and make a vector from 2 to 3, and repeat all the way around your contour, finally making the last one from coord n back to 0 again. For each pair in sequence find the cross product and sum up all the results. When you have that final summation vector, keep hold of it and take a dot product with the mouse-pointer direction vector. If it's positive then the mouse is inside the contour, if it's negative then it's not, and if it's 0 then I guess the plane of the contour and the mouse direction are parallel.
Do that for each contour and then you will know which of them are spiked by your mouse. It's up to you which one you want to pick from that set. The highest Z, perhaps?
It sounds like a lot of work, but it's not too bad and will give the right answer. You might additionally like to keep bounding boxes of all your contours; then you can early-out the ones away from the mouse vector by doing the same math as for the full contour but only on the 4 sides, and if the box isn't hit then the contour can't be either.
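For completeness, the CPU containment step can also be done with a standard crossing-number (point-in-polygon) test once the ray/plane hit point and the contour have been projected to 2D. This is offered only as a common alternative, not as the cross-product summation described above:

```cpp
#include <vector>

struct Pt2 { double x, y; };

bool pointInPolygon(const Pt2& p, const std::vector<Pt2>& poly) {
    bool inside = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        // Does edge (j -> i) cross the horizontal ray going right from p?
        if ((poly[i].y > p.y) != (poly[j].y > p.y)) {
            double xAtY = poly[j].x + (p.y - poly[j].y) *
                          (poly[i].x - poly[j].x) / (poly[i].y - poly[j].y);
            if (p.x < xAtY)
                inside = !inside; // an odd number of crossings means "inside"
        }
    }
    return inside;
}
```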
The first is easy to implement and widely used.
I have a game engine that uses OpenGL for display.
I coded a small menu for it, and then I noticed something odd after rendering the text.
http://img43.imageshack.us/i/garbagen.png/
As you see, the font is somewhat unreadable, but the lower parts (like "Tests") look as intended.
Seems the position on the screen affects readability, as the edges get cut.
The font is 9x5, a value that I divide by 2 to obtain width/height and render the object from the center.
So, with 4.5x2.5 pixels (I use floats for the x, y, width and height of simple rectangles), the texture is messed up if rendered anywhere other than x.5 or so. For now it only happens on two computers, but I would hate for this error to show up, since it makes text unreadable. I can make it 4.55x2.55 (by adding a bit of extra size when dividing by 2), and then it renders adequately on all machines (or at least it doesn't happen as often on the problematic two), but I fear this is too gross a hack to keep: it doesn't solve the issue entirely, and it might scale the text up, making the font look... "fat".
So my question is... is there any way I can prevent this without changing those values to integers? (I need the slight differences floats offer.) Can I find out which widths/heights are divisible by two and handle the ones that aren't differently? If it's indeed a video card issue, would it be possible to circumvent it?
Sorry if anything is lacking from the question; I don't often resort to asking the internet and I have no formal coding education. I'll be happy to provide any line or chunk of code that might be required.
If you have to draw your text at non-integer coordinates, you should enable texture filtering. Use glTexParameterfv to set GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_LINEAR. Your text will be blurred, but that cannot be avoided without resorting to pixel perfect (=integer) coordinates.
Note that your 0.05 workaround does nothing but change the way the effective coordinates are rounded to integers. When using GL_NEAREST texture filtering, there's no such thing as a half pixel offset. Even if you specify these coordinates, the texture filter will round them for you. You just push it in the right direction with the additional 0.05.
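In code, assuming the glyph texture is bound as fontTexture (a placeholder name), the filtering change amounts to the following; glTexParameteri is the usual call for these enum-valued parameters:

```cpp
#include <GL/gl.h>

void enableLinearFiltering(GLuint fontTexture) { // fontTexture: your glyph texture ID
    glBindTexture(GL_TEXTURE_2D, fontTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```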
For best reliability I would find a way to eliminate the fractions. I only have a little experience with XNA and MDX, so I don't know if there is a good reason, but why are you going by the center rather than corner?
Trying to do pixel-perfect stuff like this can be hard in OpenGL due to different resolutions, texture filtering etc.
Some things you could try:
Draw your font into one large texture (say 512x512).
Draw the glyphs larger than you need and anti-alias using the alpha channel (transparency).
Leave some blank space (4 or 8 pixels) around each glyph. If you have them pushed up right against each other (like you would if you were drawing a font for software rendering back in the DOS days), then filtering will make them bleed into each other.
Or you could take a different approach and make them out of line segments. This may work better for fonts on the scales you're dealing with.
I have a path made up of a list of 2D points. I want to turn these into a strip of triangles in order to render a textured line with a specified thickness (and other such things). So essentially the list of 2D points need to become a list of vertices specifying the outline of a polygon that if rendered would render the line. The problem is handling the corner joins, miters, caps etc. The resulting polygon needs to be "perfect" in the sense of no overdraw, clean joins, etc. so that it could feasibly be extruded or otherwise toyed with.
Are there any simple resources around that can provide algorithm insight, code or any more information on doing this efficiently?
I absolutely DO NOT want a full fledged 2D vector library (cairo, antigrain, OpenVG, etc.) with curves, arcs, dashes and all the bells and whistles. I've been digging in multiple source trees for OpenVG implementations and other things to find some insight, but it's all terribly convoluted.
I'm definitely willing to code it myself, but there are many degenerate cases (small segments + thick widths + sharp corners) that create all kinds of join issues. Even a little help would save me hours of trying to deal with them all.
EDIT: Here's an example of one of those degenerate cases that causes ugliness if you were simply to go from vertex to vertex. Red is the original path. The orange blocks are rectangles drawn at a specified width aligned and centered on each segment.
Oh well - I've tried to solve that problem myself. I wasted two months on a solution that tried to solve the zero-overdraw problem. As you've already found out, you can't deal with all the degenerate cases and have zero overdraw at the same time.
You can however use a hybrid approach:
Write yourself a routine that checks if the joins can be constructed from simple geometry without problems. To do so you have to check the join-angle, the width of the line and the length of the joined line-segments (line-segments that are shorter than their width are a PITA). With some heuristics you should be able to sort out all the trivial cases.
I don't know what your average line data looks like, but in my case more than 90% of the wide lines had no degenerate cases.
For all other lines:
You've most probably already found out that if you tolerate overdraw, generating the geometry is a lot easier. Do so, and let a polygon CSG algorithm and a tessellation algorithm do the hard job.
I've evaluated most of the available tessellation packages, and I ended up with the GLU tessellator. It was fast, robust and never crashed (unlike most other algorithms). It was free, and the license allowed me to include it in a commercial program. The quality and speed of the tessellation are okay. You will not get Delaunay triangulation quality, but since you just need the triangles for rendering, that's not a problem.
Since I disliked the tessellator API, I lifted the tessellation code from the free SGI OpenGL reference implementation, rewrote the entire front-end and added memory pools to get the number of allocations down. It took two days to do this, but it was well worth it (roughly a factor-of-five performance improvement). The solution ended up in a commercial OpenVG implementation, btw :-)
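For anyone who wants to try the stock GLU tessellator first, a rough usage sketch looks like this; the callbacks simply feed the fixed-function pipeline, but they could append to a vertex array instead, and for self-intersecting input you would also need a GLU_TESS_COMBINE callback:

```cpp
#include <GL/glu.h>
#include <vector>

#ifndef CALLBACK
#define CALLBACK   // only the Windows GL headers define this
#endif

static void CALLBACK tessBegin(GLenum type) { glBegin(type); }
static void CALLBACK tessVertex(void* data) { glVertex3dv((const GLdouble*)data); }
static void CALLBACK tessEnd()              { glEnd(); }

// outline: flat x,y,z triples describing one closed contour.
void drawTessellated(std::vector<GLdouble>& outline) {
    GLUtesselator* tess = gluNewTess();
    gluTessCallback(tess, GLU_TESS_BEGIN,  (void (CALLBACK*)()) tessBegin);
    gluTessCallback(tess, GLU_TESS_VERTEX, (void (CALLBACK*)()) tessVertex);
    gluTessCallback(tess, GLU_TESS_END,    (void (CALLBACK*)()) tessEnd);
    gluTessProperty(tess, GLU_TESS_WINDING_RULE, GLU_TESS_WINDING_NONZERO);

    gluTessBeginPolygon(tess, nullptr);
    gluTessBeginContour(tess);
    for (size_t i = 0; i + 2 < outline.size(); i += 3)
        gluTessVertex(tess, &outline[i], &outline[i]); // coords double as vertex data
    gluTessEndContour(tess);
    gluTessEndPolygon(tess);
    gluDeleteTess(tess);
}
```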
If you're rendering with OpenGL on a PC, you may want to move the tessellation/CSG job from the CPU to the GPU and use stencil-buffer or z-buffer tricks to remove the overdraw. That's a lot easier and may even be faster than CPU tessellation.
I just found this amazing work:
http://www.codeproject.com/Articles/226569/Drawing-polylines-by-tessellation
It seems to do exactly what you want, and its licence allows it to be used even in commercial applications. Plus, the author did a truly great job of detailing his method. I'll probably give it a shot at some point to replace my own not-nearly-as-perfect implementation.
A simple method off the top of my head.
Bisect the angle at each 2D vertex; this will create a nice miter line. Then move along that line, both inward and outward, by the amount of your "thickness" (or thickness divided by two?), and you now have your inner and outer polygon points. Move to the next point and repeat the same process, building your new polygon points along the way. Then apply a triangulation to get your render-ready vertices.
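A rough sketch of that idea, assuming an open path of at least two distinct points. The offset is scaled so the strip keeps a constant width; there is no miter-length clamping, so very sharp corners will still produce long spikes (one of the degenerate cases mentioned in the question):

```cpp
#include <cmath>
#include <vector>

struct Vec2 {
    float x, y;
    Vec2 operator+(Vec2 o) const { return {x + o.x, y + o.y}; }
    Vec2 operator-(Vec2 o) const { return {x - o.x, y - o.y}; }
    Vec2 operator*(float s) const { return {x * s, y * s}; }
};

static Vec2 normalize(Vec2 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    return {v.x / len, v.y / len};
}

// Returns interleaved left/right points, usable directly as a triangle strip.
std::vector<Vec2> buildRibbon(const std::vector<Vec2>& path, float thickness) {
    std::vector<Vec2> strip;
    float halfW = thickness * 0.5f;

    for (size_t i = 0; i < path.size(); ++i) {
        // Directions of the segments entering and leaving this vertex
        // (the two end points just reuse their single adjacent segment).
        Vec2 dirIn  = (i == 0) ? normalize(path[1] - path[0])
                               : normalize(path[i] - path[i - 1]);
        Vec2 dirOut = (i + 1 == path.size()) ? dirIn
                                             : normalize(path[i + 1] - path[i]);

        Vec2 tangent = normalize(dirIn + dirOut); // bisector direction
        Vec2 miter{-tangent.y, tangent.x};        // offset direction at this vertex
        Vec2 normal{-dirIn.y, dirIn.x};           // perpendicular to the incoming segment

        // Scale so the distance measured perpendicular to the segments stays halfW.
        float denom = miter.x * normal.x + miter.y * normal.y;
        float scale = halfW / denom;              // blows up as the turn nears 180 degrees

        strip.push_back(path[i] + miter * scale); // one side of the ribbon
        strip.push_back(path[i] - miter * scale); // the other side
    }
    return strip;
}
```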
I ended up having to get my hands dirty and write a small ribbonizer to solve a similar problem.
For me the issue was that I wanted fat lines in OpenGL that did not have the kinds of artifacts that I was seeing with OpenGL on the iPhone. After looking at various solutions (Bezier curves and the like), I decided it was probably easiest to just make my own. There are a couple of different approaches.
One approach is to find the angle of intersection between two segments and then move along that intersection line a certain distance away from the surface and treat that as a ribbon vertex. I tried that and it did not look intuitive; the ribbon width would vary.
Another approach is to actually compute a normal to the surface of the line segments and use that to compute the ideal ribbon edge for that segment, and to do actual intersection tests between ribbon segments. This worked well, except that for sharp corners the ribbon segment intersections were too far away (when the inter-segment angle approached 180 degrees).
I worked around the sharp-angle issue with two approaches. The Paul Bourke line intersection algorithm (which I used in an unoptimized way) suggested detecting whether the intersection was inside the segments. Since both segments are identical, I only needed to test one of them for intersection. I could then decide how to resolve this: either by fudging a best point between the two ends, or by putting on an end cap - both approaches look good - though the end-cap approach may throw off the polygon front/back facing ordering for OpenGL.
See http://paulbourke.net/geometry/lineline2d/
See my source code here : https://gist.github.com/1474156
I'm interested in this too, since I want to perfect my mapping application's (Kosmos) drawing of roads. One workaround I used is to draw the polyline twice, once with a thicker line and once with a thinner, with a different color. But this is not really a polygon, it's just a quick way of simulating one. See some samples here: http://wiki.openstreetmap.org/wiki/Kosmos_Rendering_Help#Rendering_Options
I'm not sure if this is what you need.
I think I'd reach for a tessellation algorithm. It's true that in most cases where these are used the aim is to reduce the number of vertices to optimise rendering, but in your case you could parameterise it to retain all the detail - and the possibility of optimising may come in useful.
There are numerous tessellation algorithms and bits of code around on the web - I wrapped up a pure C one in a DLL a few years back for use with a Delphi landscape renderer, and they are not an uncommon subject for advanced graphics coding tutorials and the like.
See if Delaunay triangulation can help.
In my case I could afford to overdraw. I just drew circles with radius = width/2 centered on each of the polyline's vertices.
Artifacts are masked this way, and it is very easy to implement, if you can live with "rounded" corners and some overdrawing.
From your image it looks like you are drawing a box around each line segment with FILL on, using an orange colour. Doing so is going to create bad overdraw for sure. So the first thing to do would be to not render the black border, and the fill colour can be opaque.
Why can't you use the GL_LINES primitive to do what you intend to do? You can specify width, filtering, smoothness, texture - anything. You can render all the vertices using glDrawArrays(). I know this is not what you have in mind, but as you are focusing on 2D drawing, this might be an easier approach (search for "textured lines", etc.).