Both OpenGL and Direct3D use the pixel's center as the sample point during rasterization (without antialiasing).
For example, here is a quote from the D3D11 rasterization rules:
Any pixel center which falls inside a triangle is drawn
I tried to find out why (0.5, 0.5) is used instead of, say, (0.0, 0.0) or any other point in the range 0.0 - 1.0 for both x and y.
The result might be translated a little, but does it really matter? Does it produce visible artifacts? Does it make some algorithms harder to implement? Or is it just a convention?
Again, I'm not talking about multisampling here.
So what is the reason?
Maybe this is not the answer to your problem, but I'll try to answer your question from a ray tracing perspective.
In ray tracing, you can compute the color of any point in the scene. But since you only have a limited number of pixels, you need to downsample the image to your screen pixels.
If you use 1 ray per pixel, the center point is generally chosen to create the ray, which gives the most representative result for that pixel. In the image below, I try to show the difference between choosing a corner of the pixel and its center. The difference gets bigger the farther your object is from the rendering screen.
If you use more than one ray per pixel, let's say 5 rays (4 corners + 1 center), and average the results, you will of course get a better image (it handles aliasing problems much better), but it will be slower, as you would guess.
So it is probably the same idea: OpenGL and Direct3D take one sample per pixel instead of multisampling and averaging (for performance reasons), and the center point probably gives the best result.
EDIT:
For area rasterization, the center of the pixel is used because, if the center of a pixel lies inside the area, it is guaranteed that at least 50% of the pixel is inside the shape (except near the shape's corners). Since the covered proportion is then greater than half, that pixel is colored.
For other sample choices, such as a corner, there is no such rule. Look at the example image below. The black point (a bottom-left corner) is outside the area, and that pixel is correctly not drawn, since more than half of it lies outside. However, for the pixel marked with the blue point, 80% of the pixel is inside the area, yet it would still not be drawn because its bottom-left corner lies outside.
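Expressed as code, the pixel-center rule is just a point-in-triangle test at (x + 0.5, y + 0.5). Here is a minimal GLSL-style sketch using edge functions; the helper names and the counter-clockwise winding assumption are mine, and a real rasterizer adds tie-breaking ("top-left") rules for samples lying exactly on an edge:

// 2D cross product of (b - a) and (p - a): positive if p is to the left of the edge a->b.
float edgeFunction(vec2 a, vec2 b, vec2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// True if the pixel with integer coordinates px is covered, sampling at its center.
// Assumes counter-clockwise vertex order, so all edge functions are non-negative inside.
bool pixelCovered(ivec2 px, vec2 v0, vec2 v1, vec2 v2)
{
    vec2 p = vec2(px) + vec2(0.5);   // sample at the pixel center
    return edgeFunction(v0, v1, p) >= 0.0
        && edgeFunction(v1, v2, p) >= 0.0
        && edgeFunction(v2, v0, p) >= 0.0;
}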
This answer mainly focuses on the OP's comment on Cagkan Toptas' answer:
Thanx for the answer, but my question is: why does it give better
results? Does it at all? If yes, what is the explanation?"
It depends on how you define "better" results. From an image quality perspective, it does not change much, as long as the primitives are not specifically aligned (after the projection).
Using just one sample at (0,0) instead of (0.5, 0.5) will just shift the scene by half a pixel (along both axes, of course). In the general case of arbitrarily placed primitives, the average error should be the same.
However, if you want "pixel-exact" drawing (e.g. for text, UI, and full-screen post-processing effects), you just have to take the convention of the underlying implementation into account, and either convention would work.
One advantage of the "center at half integers" rule is that you can get the integer pixel coordinates (with respect to the sample locations) of the nearest pixel by a simple floor(floating_point_coords) operation, which is simpler than rounding to the nearest integer.
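For example, in a GLSL fragment shader gl_FragCoord.xy already follows this convention, so recovering the integer pixel index is a plain floor (or integer cast):

// gl_FragCoord.xy is (x + 0.5, y + 0.5) for the pixel with integer coordinates (x, y)
// (with the default origin and pixel-center layout), so floor() recovers the index
// directly -- no rounding logic needed.
ivec2 pixelIndex = ivec2(floor(gl_FragCoord.xy));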
I was researching MSAA and how it works. I understand the concept and the idea behind it. Basically, if the center of the triangle covers the center of the pixel, it is processed (in the non-MSAA case). However, if MSAA is involved, let's say 4x MSAA, then it will sample 4 other points as sub-samples. The pixel shader still executes per pixel, but the occlusion and coverage tests are applied for each sub-sample. The point I'm confused about is that I imagine the pixel as little squares on the screen, and I couldn't understand how the sub-sampling points are determined inside the sample rectangle. How is the computer aware of a pixel's sub-sample locations? And if there is only one square, how are the sub-sampled colors determined? (If there is one square, there should be only one color.) Lastly, how can each sub-sample have a different depth value if it is basically the same pixel?
Thank you!
Basically, if the center of the triangle covers the center of the pixel, it is processed (in the non-MSAA case).
No, that doesn't make sense. The center of a triangle is just a point, and that point falling onto a pixel center means nothing. The standard rasterization rule is: if the center of the pixel lies inside the triangle, a fragment is produced (with special rules for cases where the center of the pixel lies exactly on the boundary of the triangle).
The point I'm confused about is that I imagine the pixel as little squares on the screen, and I couldn't understand how the sub-sampling points are determined inside the sample rectangle.
No idea what you mean by "sample rectangle", but putting that aside: if you use a coordinate frame of reference where a pixel is 1x1 units in area, then you can simply use fractional parts to describe locations within a pixel.
Default OpenGL window space uses a convention where (0,0) is the lower-left corner of the bottom-left pixel and (width,height) is the upper-right corner of the top-right pixel, so all the pixel centers are at half-integer coordinates.
The rasterizer of a real GPU does work with fixed-point representations, and the D3D spec requires at least 8 bits of fractional precision for sub-pixel locations (GL leaves the exact precision up to the implementor).
Note that at this point, the pixel raster is not relevant at all. A coverage sample is just a test of whether some 2D point lies inside or outside of a 2D triangle, and a point is always a mathematically infinitely small entity with an area of 0. The conventions for the coordinate systems in which this calculation is done can be defined arbitrarily.
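If you want to see the actual sub-sample locations an implementation uses, GLSL 4.0 (or ARB_sample_shading) exposes them per sample. A small sketch that merely visualizes them (the visualization itself is just for illustration):

#version 400 core
out vec4 fragColor;

void main()
{
    // Reading gl_SamplePosition forces this shader to run once per covered sample.
    // It holds that sample's location inside the pixel, in (0,1) x (0,1);
    // the pixel center would be (0.5, 0.5).
    fragColor = vec4(gl_SamplePosition, 0.0, 1.0);
}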
And if there is only one square, how are the sub-sampled colors determined? (If there is one square, there should be only one color.) Lastly, how can each sub-sample have a different depth value if it is basically the same pixel?
When you use multisampling, you always use a multisampled framebuffer, which means that for each pixel there is not a single color, depth, ... value, but n of them (your multisample count, typically between 2 and 16 inclusive). You will need an additional pass to calculate the single per-pixel values needed for displaying the anti-aliased result (the graphics API might hide this from you when it is done on the default framebuffer, but when you work with custom render targets, you have to do it manually).
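For a custom render target, such a manual resolve can be a fullscreen pass that averages the per-sample values itself. A minimal sketch, assuming a 4-sample color attachment bound as a sampler2DMS (the uniform names are made up):

#version 330 core
out vec4 fragColor;

uniform sampler2DMS uColorMS;   // the multisampled color attachment
const int kSamples = 4;         // must match the attachment's sample count

void main()
{
    ivec2 texel = ivec2(gl_FragCoord.xy);
    vec4 sum = vec4(0.0);
    for (int i = 0; i < kSamples; ++i)
        sum += texelFetch(uColorMS, texel, i);   // fetch each stored sample
    fragColor = sum / float(kSamples);           // simple box-filter resolve
}

The fixed-function alternative is a glBlitFramebuffer from the multisampled FBO into a single-sampled one, which performs the same averaging resolve for you.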
I'm using a TriangleList to output my primitives. Most of the time I need to draw rectangles, triangles, and circles. From time to time I need to draw very thin triangles (width = 2 px, for example). I expected them to look (almost) like a line, but they look like separate points :)
Following picture shows what I'm talking about:
The first picture on the left shows how I draw a rectangle (counter-clockwise, starting from the top-right corner). Then you can see the "width" of the rectangle, which I call "dx".
How can I avoid this behavior? I would like it to look like a straight (almost straight) line, not like separate points :)
As @BrettHale mentions, this is an aliasing problem. For example,
Without super/multisampling, the triangle only covers the centre of the bottom right pixel and only it will receive colour. Real pixels have area and in a perfect situation, would receive a portion of the colour equal to the area covered. "Antialiasing" techniques reduce aliasing effects caused by not integrating colour across pixels.
Getting it to look right without being incredibly slow is hard. OpenGL provides GL_POLYGON_SMOOTH, which conservatively rasterizes triangles and draws the correct percentages of colour to each pixel using blending. This works well until you have overlapping triangles and you hit the problem of transparency sorting where order-independent transparency is needed. A simple and more brute force solution is to render to a much bigger texture and then downsample. This is essentially what supersampling does, except the samples can be "anisotropic" (irregular) which gives a nicer result. Multisampling techniques are adaptive and a bit more efficient, e.g. supersample pixels only at triangle edges. It is fairly straightforward to set this up with OpenGL.
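For the render-to-a-bigger-texture route, the downsample pass can be as simple as averaging the corresponding block of high-resolution texels. A sketch assuming a 2x supersampling factor per axis (the names and the plain box filter are arbitrary choices):

#version 330 core
out vec4 fragColor;

uniform sampler2D uHiRes;   // scene rendered at kFactor times the output resolution
const int kFactor = 2;      // supersampling factor per axis

void main()
{
    ivec2 base = ivec2(gl_FragCoord.xy) * kFactor;
    vec4 sum = vec4(0.0);
    for (int y = 0; y < kFactor; ++y)
        for (int x = 0; x < kFactor; ++x)
            sum += texelFetch(uHiRes, base + ivec2(x, y), 0);
    fragColor = sum / float(kFactor * kFactor);   // box filter; nicer kernels look better
}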
However, as the triangle's width approaches zero, so does its area, and even with antialiasing it will eventually disappear entirely (although it will fade out rather than break up into separate pixels). Although it is not physically correct, you may instead be after a minimum one-pixel-wide triangle, so you get the lines you want even for a really thin triangle. This is where doing your own conservative rasterization may be of interest.
This is the problem of skinny triangles in general. For example, in adaptive subdivision it happens all the time when you have skinny T-junctions. One solution is to draw the edges (you can use GL_LINE_STRIP) with antialiasing turned on. You can set:
Gl.glShadeModel(Gl.GL_SMOOTH);                              // smooth (Gouraud) shading
Gl.glEnable(Gl.GL_LINE_SMOOTH);                             // antialiased lines
Gl.glEnable(Gl.GL_BLEND);                                   // blending is required for line smoothing
Gl.glBlendFunc(Gl.GL_SRC_ALPHA, Gl.GL_ONE_MINUS_SRC_ALPHA); // standard alpha blending
Gl.glHint(Gl.GL_LINE_SMOOTH_HINT, Gl.GL_DONT_CARE);         // let the driver choose the quality
before drawing the lines so you get lines when your triangle is very small...
This is called a subpixel feature, when geometry gets smaller than a single pixel. If you animated the very thin triangle, you would see the pixels pop in and out.
Try turning multisampling on. Most GL windowing libraries support a multisampled back buffer. You can also force it on in your graphics driver settings.
If the triangle is generated by a geometry shader, then you can adjust the triangle's size dynamically.
For example, you can make the triangle's width always at least one pixel.
// NDC coordinates range from -1.0 to 1.0 and the screen width is 1920,
// so one pixel is 2.0 / 1920.0 NDC units wide.
float pixel_unit = 2.0 / 1920.0;
// Remember to divide by the w component to get NDC positions.
vec2 p0 = triangle[0].xy / triangle[0].w;
vec2 p1 = triangle[1].xy / triangle[1].w;
vec2 center = 0.5 * (p0 + p1);
// Half the width of the narrow side, in NDC units.
float triangle_width = length(p0 - center);
float scale_ratio = pixel_unit / triangle_width;
if (scale_ratio > 1.0) {
    // Push the two vertices apart in NDC, then convert back to clip space.
    triangle[0].xy = ((p0 - center) * scale_ratio + center) * triangle[0].w;
    triangle[1].xy = ((p1 - center) * scale_ratio + center) * triangle[1].w;
}
This issue can also be addressed via conservative rasterisation. The following summary is reproduced from the documentation for the NV_conservative_raster OpenGL extension:
This extension adds a "conservative" rasterization mode where any pixel
that is partially covered, even if no sample location is covered, is
treated as fully covered and a corresponding fragment will be shaded.
Similar extensions exist for the other major graphics APIs.
I've been trying to utilize the techniques in Eric Penner's "Shader Amortization using Pixel Quad Message Passing" from GPU Pro 2, Chapter VI.2. The basic idea is that modern GPUs process fragment shaders in 2x2 fragment quads, and you can use ddx() and ddy() to get the value of some_var at all four fragments as long as the following hold:
Your GPU supports high-quality derivatives
You know which fragment you're processing (top-left, top-right, bottom-left, bottom-right)
This opens up a lot of opportunities for fragment shader optimization (like distributing texture fetches over a 2x2 pixel quad) that you'd need Compute Shaders to beat.
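For reference, the core of the trick looks like this in GLSL terms (dFdx/dFdy being the analogues of ddx/ddy). The helpers are my own sketch and they assume quads aligned to even pixel coordinates, which is exactly the assumption in question:

// Value of v at the horizontally adjacent fragment in the same 2x2 quad.
// With coarse derivatives, dFdx(v) is (v_right - v_left) for both columns of
// the quad, so the left column adds it and the right column subtracts it.
float quadNeighborX(float v)
{
    float oddColumn = mod(floor(gl_FragCoord.x), 2.0);   // 0 = left column, 1 = right column
    float dx = dFdx(v);
    return (oddColumn < 0.5) ? v + dx : v - dx;
}

// Same idea vertically; combining both recovers all four values in the quad.
float quadNeighborY(float v)
{
    float oddRow = mod(floor(gl_FragCoord.y), 2.0);
    float dy = dFdy(v);
    return (oddRow < 0.5) ? v + dy : v - dy;
}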
My problem is this:
I can't deterministically detect which fragment I'm processing. Ideally, each fragment block would start at even-numbered output pixel coords like (0, 0), (2, 0), ... (1024, 1024), ..., so you'd just need to check whether the output pixel x and y coords are even or odd to know which fragment you're currently processing. The method Penner uses in the book assumes this works...but it seems to be going wrong for me.
Unfortunately, my 2x2 fragment quads appear to be starting in nondeterministic places: I've seen them start at (even, even), (even, odd), and (odd, even). I can't remember if I've seen (odd, odd) or not, but anyway, the arrangement seems to depend on a myriad of factors I don't understand, including the output resolution and shader specifics. (I'm testing on an 8800 GTS, in case anyone's wondering.)
Does anyone know what might be causing this nondeterminism or have any documentation on it? I understand there's virtually no official standardization in this area, but I'm more interested in how things work in practice on modern desktop-level GPU's, and I'm hoping there's a way to get this technique to work. If no one knows how to reason about the even/odd start behavior, does anyone know any other way of determining the current fragment's relative location in its 2x2 quad?
Thanks :)
As it turns out, the premise of my question was mostly wrong:
The 2x2 fragment quads DO almost always start on even pixel numbers...as long as the output resolution is even-numbered.
If the output resolution is odd-numbered (a possibility with the underlying program I'm working with), things can get more complicated, for obvious reasons. I don't expect there's any uniformity here across drivers/GPU's/etc. either, but my current tests (which themselves may still be buggy) appear to demonstrate 2x2 pixel quads starting at an odd pixel along the dimension with odd resolution, at least when the odd dimension is horizontal.
All of this weirdness helped obscure my bigger issue: The code I used to detect the fragment's location in the pixel quad was buggy. I tested by setting the texture coordinates equal within a pixel quad (set to the pixel quad center)...or so I thought. However, I calculated the screen coordinates based on a full-screen quad where the uv mapping has the +v axis pointing downward. The screenspace origin starts at the bottom-left, because it's based on the top-right quadrant of Cartesian coordinates, and I accidentally forgot to invert the v-coordinate of the uv offset I used to find the pixel quad center. Many of my nondeterministic observations came from failing to check my assumptions while debugging and misinterpreting things as a result, particularly in combination with odd resolutions.
This was an embarrassing mistake I should have caught a lot sooner, but I figured I'd detail it as a warning to others to always double-check the direction of your vertical axis when you're dealing with opposite-facing coordinate frames. ;)
UPDATE:
I ran across a situation where 2x2 pixel quads started on even pixel numbers even when the resolution was odd. Thanks to the nondeterminism under odd resolutions, I had to work out another solution:
If you're deriving your screen pixel numbers from the uv coords of a fullscreen quad (for post-processing), the fragment location derived from this is only useful for arranging/placing shared samples between fragments, etc., not for the quad-pixel communication itself. You'll need to have screen pixel numbers with respect to the screenspace origin for that. You can derive these from vertex positions, or you can use ddx().x and ddy().y on the uv-based pixel numbers to find out their screen direction and mirror the fragment position in the appropriate direction from there.
Calculate the fragment location based on your screen pixel numbers (with respect to the true screenspace origin) and the assumption 2x2 pixel quads start on even pixels. (If you used uv-based pixel numbers, now is the time to mirror things.)
Do a ddx().x and ddy().y on the fragment location, and if they're negative in either direction, you know the pixel quad starts at an odd pixel number in that direction...so mirror in that direction (see the sketch after these steps).
If you calculate two fragment positions, one based on a uv origin and one based on a screen origin, use the uv-based one for reasoning about uv-based sample placement, and use the screen-based one for actually obtaining the values of a variable at neighboring fragments.
Profit.
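Here is a sketch of that detect-and-mirror step in GLSL (again dFdx/dFdy in place of ddx/ddy; the variable names are mine):

// Position inside the 2x2 quad, first assuming quads start on even pixel numbers:
// (0,0) = lower-left fragment, (1,1) = upper-right fragment.
vec2 quadPos = mod(floor(gl_FragCoord.xy), 2.0);

// If the quad actually starts on an odd pixel along an axis, the assumed position
// decreases across the quad along that axis, so its coarse derivative comes out
// negative there -- mirror the position in that case.
if (dFdx(quadPos.x) < 0.0) quadPos.x = 1.0 - quadPos.x;
if (dFdy(quadPos.y) < 0.0) quadPos.y = 1.0 - quadPos.y;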
I'll post a link to my working MIT-licensed code once I release it on Github, along with usage examples (the speedup is unfortunately not what I expected, but whatever ;)). I'm just waiting to get done with a larger shader I'll be uploading along with it.
I'm trying to implement an algorithm from a graphics paper and part of the algorithm is rendering spheres of known radius to a buffer. They say that they render the spheres by computing the location and size in a vertex shader and then doing appropriate shading in a fragment shader.
Any guesses as to how they actually did this? The position and radius are known in world coordinates and the projection is perspective. Does that mean that the sphere will be projected as a circle?
I found a paper that describes what you need - calculating the bounding quadric. See:
http://web4.cs.ucl.ac.uk/staff/t.weyrich/projects/quadrics/pbg06.pdf
Section 3.2, Bounding Box calculation. The paper also mentions doing it on the vertex shader, so it might be what you're after.
Some personal thoughts:
You can approximate the bounding box by approximating the projected size of the sphere from its radius, though. Transform that to screen space and you'll get a slightly smaller-than-exact bounding box (the true silhouette is a bit larger), but it won't be far off. This fails when the camera is too close to the sphere, or when the sphere is too large, of course. But otherwise it should be quite cheap to calculate, as it is simply a ratio between two similar right triangles.
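A minimal sketch of that radius-over-distance ratio, written as a point-sprite vertex shader (the attribute/uniform names are made up, and it ignores the exact-silhouette correction):

#version 330 core
layout(location = 0) in vec3 aCenter;   // sphere center
layout(location = 1) in float aRadius;  // sphere radius (world units)

uniform mat4 uModelView;
uniform mat4 uProj;             // standard perspective projection
uniform float uViewportHeight;  // in pixels

void main()
{
    vec4 viewPos = uModelView * vec4(aCenter, 1.0);
    gl_Position  = uProj * viewPos;

    // Similar right triangles: projected half-height in NDC is roughly
    // radius * P[1][1] / distance.
    float ndcRadius = aRadius * uProj[1][1] / -viewPos.z;

    // One NDC unit spans half the viewport height, so the diameter in pixels is:
    gl_PointSize = ndcRadius * uViewportHeight;   // needs GL_PROGRAM_POINT_SIZE enabled
}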
If you can figure out the chord length, then the ratio will yield the precise answer, but that's a little beyond me at the moment.
alt text http://xavierho.com/temp/Sphere-Screen-Space.png
Of course, that's just a rough approximation and sometimes has a large error, but it gets things going quickly and easily.
Otherwise, see the paper linked above and use the correct way. =]
The sphere will be projected as an ellipse unless it is at the camera's center of view, as brainjam says.
The article that Xavier Ho links to describes the generalization of sphere projection (that is, quadric projection). It is a very good read and I recommend it too. However, if you are only interested in sphere projection, and more precisely the quadrilateral that bounds the projection, then The Mechanics of Robust Stencil Shadows, page 6: Scissor Optimization, details how to do it.
A Note on Xavier Ho's Approximation
I would like to add that the approximation Xavier Ho suggests is, as he notes too, very approximate. I actually used it in a tile-based forward renderer to approximate light bounds in screen space. The following image shows how it neatly enables good performance with 400 omni (spherically bound) lights in a scene: Tile-based Rendering - Far View. However, just as Xavier Ho predicted, the inaccuracy of the light bounds causes artifacts up close, as seen here when zoomed in: Tile-based Rendering - Close view. The overlapping quadrilaterals fail to bound the lights completely and instead clip the edges, revealing the tile grid.
In general, a sphere is seen as an ellipse in perspective:
(source: jrank.org)
The above image is at the bottom of this article.
Section 6 of this article describes how the bounding trapezoid of the sphere's projection is obtained. Before computers, artists and draftsmen had to figure this out by hand.
I am working on an application that detects the most prominent rectangle in an image, then seeks to rotate it so that the bottom left of the rectangle rests at the origin, similar to how IUPR's OSCAR system works. However, once the most prominent rectangle is detected, I am unsure how to take into account the depth component or z-axis, as the rectangle won't always be "head-on". Any examples to further my understanding would be greatly appreciated. Seen below is an example from IUPR's OSCAR system.
alt text http://quito.informatik.uni-kl.de/oscar/oscar.php?serverimage=img_0324.jpg&montage=use
You don't actually need to deal with the 3D information in this case; it's just a mapping function from one set of coordinates to another.
Look at affine and perspective (projective) transformations; they're capable of correcting simple skew and perspective effects. You should be able to find code somewhere that will calculate the transform from the 4 points at the corners of your rectangle.
Almost forgot - if "fast" is really important, you could simplify the system to only use simple shear transformations in combination, though that'll have a bad impact on image quality for highly-tilted subjects.
Actually, I think you can get away with something much simpler than Mark's approach.
Once you have the 2D coordinates on the skewed image, re-purpose those coordinates as texture coordinates.
In a renderer, draw a simple rectangle where each corner vertex is texture-mapped to the corresponding corner found on the skewed 2D image (normalized and otherwise transformed to your rendering system's texture coordinate plane).
Now you can rely on hardware (using OpenGL or similar) to do the correction for you, or you can write your own texture mapper:
The aspect ratio will need to be guessed at since we are disposing of the actual 3D info. However, you can get away with just taking the max width and max height of your skewed rectangle.
Perspective Texture Mapping by Chris Hecker
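If you go the GPU route, one possible sketch of that "texture mapper" is a fragment shader that samples the photo through a 3x3 homography solved on the CPU from the four detected corners (all of the names here are hypothetical):

#version 330 core
in vec2 vRectUV;              // (0,0)-(1,1) coordinates across the output rectangle
out vec4 outColor;

uniform sampler2D uPhoto;     // the skewed input image
uniform mat3 uHomography;     // maps rectified coordinates to source image coordinates,
                              // solved on the CPU from the 4 corner correspondences

void main()
{
    vec3 src = uHomography * vec3(vRectUV, 1.0);
    outColor = texture(uPhoto, src.xy / src.z);   // the divide gives perspective-correct sampling
}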