I'm trying to implement an algorithm from a graphics paper and part of the algorithm is rendering spheres of known radius to a buffer. They say that they render the spheres by computing the location and size in a vertex shader and then doing appropriate shading in a fragment shader.
Any guesses as to how they actually did this? The position and radius are known in world coordinates and the projection is perspective. Does that mean that the sphere will be projected as a circle?
I found a paper that describes what you need - calculating the bounding quadric. See:
http://web4.cs.ucl.ac.uk/staff/t.weyrich/projects/quadrics/pbg06.pdf
Section 3.2, Bounding Box calculation. The paper also mentions doing it on the vertex shader, so it might be what you're after.
Some personal thoughts:
You can approximate the bounding box by approximating the sphere's projected size from its radius. Transform the radius to screen space and you'll get a slightly larger than correct bounding box, but it won't be far off. This fails when the camera is too close to the sphere, or when the sphere is too large, of course. But otherwise it's cheap to compute, as it is simply a ratio between two similar right triangles.
If you can figure out the chord length, then the ratio will yield the precise answer, but that's a little beyond me at the moment.
(image: http://xavierho.com/temp/Sphere-Screen-Space.png)
Of course, that's just a rough approximation and sometimes has a large error, but it gets things going quickly and easily.
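As a minimal GLSL sketch of that similar-triangles approximation (not from the paper; viewMatrix and fovY are assumed uniforms, and the result is the projected radius in NDC units along the vertical axis):

uniform mat4 viewMatrix;
uniform float fovY;   // vertical field of view in radians (assumed)

float approxScreenRadius(vec3 worldCenter, float worldRadius)
{
    // distance from the eye to the sphere centre, in view space
    float d = length((viewMatrix * vec4(worldCenter, 1.0)).xyz);
    // similar right triangles: worldRadius / d versus ndcRadius / tan(fovY / 2)
    return worldRadius / (d * tan(0.5 * fovY));
}

Divide by the aspect ratio to get the horizontal extent.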
Otherwise, see paper linked above and use the correct way. =]
The sphere will be projected as an ellipse unless it's at the camera's center, as brainjam says.
The article that Xavier Ho links to describes the generalization of sphere projection (that is, the projection of quadrics). It is a very good read and I recommend it too. However, if you are only interested in sphere projection, and more precisely in the quadrilateral that bounds the projection, then The Mechanics of Robust Stencil Shadows, page 6: Scissor Optimization, details how to do it.
A Note on Xavier Ho's Approximation
I would like to add that the approximation Xavier Ho suggests is, as he notes too, very approximate. I actually used it in a tile-based forward renderer to approximate light bounds in screen space. The following image shows how it neatly enables good performance with 400 omni (spherically bound) lights in a scene: Tile-based Rendering - Far View. However, just as Xavier Ho predicted, the inaccuracy of the light bounds causes artifacts up close, as seen here when zoomed in: Tile-based Rendering - Close view. The overlapping quadrilaterals fail to bound the lights completely and instead clip their edges, revealing the tile grid.
In general, a sphere is seen as an ellipse in perspective:
(source: jrank.org)
The above image is at the bottom of this article.
Section 6 of this article describes how the bounding trapezoid of the sphere's projection is obtained. Before computers, artists and draftsmen had to figure this out by hand.
Related
I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall, logically you would see only darkness. In OpenGL, however, when you do this the world is naturally hollow and transparent due to culling and because of how the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD, thereby blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is a texture is 2D but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing-half-in-water, you're after "volume rendering" with "absorption" (discussed by #user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full screen polygon in front of the player (right on the near clipping plane, although you might want to push it forwards a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid, move as the player does, and not like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture, but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a common example). Even a 2D texture "extruded" through the volume will work, or better yet, weight three textures (one for each axis) based on the orientation of the surface you're drawing on.
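Where a surface normal is available, that three-axis weighting is often done with triplanar blending. A minimal sketch, assuming worldPos and worldNormal are passed in from the vertex shader and wallTex is a hypothetical 2D texture:

#version 330 core
uniform sampler2D wallTex;   // hypothetical 2D texture reused on all three axes
in vec3 worldPos;            // assumed varyings from the vertex shader
in vec3 worldNormal;
out vec4 fragColor;

void main()
{
    vec3 w = abs(normalize(worldNormal));
    w /= (w.x + w.y + w.z);                        // per-axis blend weights
    vec3 cx = texture(wallTex, worldPos.yz).rgb;   // texture projected along x
    vec3 cy = texture(wallTex, worldPos.xz).rgb;   // texture projected along y
    vec3 cz = texture(wallTex, worldPos.xy).rgb;   // texture projected along z
    fragColor = vec4(cx * w.x + cy * w.y + cz * w.z, 1.0);
}

For the full-screen near-plane polygon you would substitute a procedural function of worldPos instead, since there is no meaningful surface normal there.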
Detecting an intersection with the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid non-self-intersecting geometry. A simple idea might be to draw back faces only after drawing everything with front faces. If you can see back faces that part of the near plane must be inside something. For these pixels you could calculate the near clipping plane position in world space and apply a 3D texture. Though I suspect there are faster ways than drawing everything twice.
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all one colour ("homogeneous"), it removes more light the further the light has to travel through it. Think of many alpha-transparent surfaces stacked up; take the limit and you have an exponential. The light remaining is 1/exp(dist), i.e. exp(-dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth;
vec3 Transmittance = exp(Absorbance);
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending using a shader that draws distance to a floating point texture. Then switch to subtractive blending and render all the front faces (or water surface). You're left with a texture containing distances/depth for the above equation.
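Putting that together, a hedged GLSL sketch of the final composite, assuming waterDepthTex holds the per-pixel distance through the water from the two-pass trick above (names are assumptions):

uniform sampler2D waterDepthTex;   // distance through the water, from the passes above
uniform vec3 waterColor;           // assumed per-channel extinction colour
uniform float waterDensity;

vec3 applyAbsorption(vec3 sceneColor, vec2 uv)
{
    float waterDepth = texture(waterDepthTex, uv).r;
    vec3 absorbance = waterColor * waterDensity * -waterDepth;
    vec3 transmittance = exp(absorbance);          // Beer's law
    return sceneColor * transmittance;
}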
Volume Rendering
Combining the two ideas, the material is both a transparent solid and the colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), applying the absorption function at the same time. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards at even distances. At each step of the march you assume the colour remains constant, apply absorption, and keep track of how much light you have left. A quick google brings up this.
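A minimal sketch of such a brute-force march, assuming the ray is already expressed in the 3D texture's coordinate space and volumeTex stores colour in rgb and density in a (both assumptions):

uniform sampler3D volumeTex;   // rgb = colour, a = density (assumed layout)
uniform float stepSize;        // step length in texture space
uniform int numSteps;

vec3 marchRay(vec3 rayOrigin, vec3 rayDir)
{
    vec3 colour = vec3(0.0);
    float remaining = 1.0;                      // light remaining along the ray
    vec3 p = rayOrigin;
    for (int i = 0; i < numSteps; ++i)
    {
        vec4 s = texture(volumeTex, p);         // colour + density at this sample
        float absorb = exp(-s.a * stepSize);    // Beer's law over one step
        colour += remaining * (1.0 - absorb) * s.rgb;
        remaining *= absorb;                    // keep track of light left
        p += rayDir * stepSize;
    }
    return colour;
}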
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced when the thickness of the media is greater.
But you can fake this by making some assumptions and giving the interior geometry (under the water or inside the wall) darker by reduced lighting or using darker colors. If you care about the depth effect, look at OpenGL and fog.
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
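As a hedged sketch of that "fake it" idea, in the spirit of classic OpenGL fog (fogColor and fogDensity are assumed uniforms):

uniform vec3 fogColor;      // colour of the water / wall interior (assumed)
uniform float fogDensity;

vec3 applyDistanceDarkening(vec3 shadedColor, float eyeDistance)
{
    float f = exp(-fogDensity * eyeDistance);   // exponential fog factor
    return mix(fogColor, shadedColor, clamp(f, 0.0, 1.0));
}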
If you really want to go nuts with accuracy, look at Kajiya's rendering equation. That covers everything (including stuff that glows), but generally needs simplification and approximations to be more useful.
I need to make spherical billboards (i.e., billboards that also set depth correctly), but taking perspective projection into account--ideally including off-center frusta.
I wasn't able to find any references to anyone succeeding at this--although there are plenty of explanations as to why standard billboards don't have perspective distortions. Unfortunately, for my application, the lack isn't a cosmetic defect; it's actually important to the algorithm.
I did a bit of investigation on my own:
The math gets pretty messy rather quickly. The obvious approaches don't work: for example, you can't orient the billboard perpendicular to a viewing ray because tangential rays wouldn't intersect the billboard at right angles.
Probably the most promising approach I found was to render the billboard parallel to the near clipping plane, stretching it with a vertex shader into an ellipse. This only handles perturbations along one axis (so e.g. it won't handle spheres rendered in a corner of the view), but the main obstacle is calculating depth correctly; you can't compute it as you would for an undistorted sphere because the "sphere" is occluding itself.
Point of fact, I didn't find a good solution, and I couldn't find anyone who has. Anyone have an idea?
While browsing around not even remotely working on this problem, I stumbled on http://iquilezles.org/www/articles/sphereproj/sphereproj.htm, which is pretty close. The linked tutorial shows how to compute a bounding ellipse for a rasterized sphere; getting the depth (at worst, using a raycast) should be fairly easy to derive.
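For the depth part specifically, one common workaround (not from the link above, and only a sketch) is to cover the sphere's projection with a quad and ray-cast the sphere per fragment, writing the true depth; all names below are assumptions:

uniform mat4 projMatrix;
uniform vec3 sphereCenterView;   // sphere centre in view space
uniform float sphereRadius;
in vec3 viewRayDir;              // view-space ray to this fragment (from the vertex shader)
out vec4 fragColor;

void main()
{
    vec3 d = normalize(viewRayDir);
    float b = dot(d, sphereCenterView);
    float c = dot(sphereCenterView, sphereCenterView) - sphereRadius * sphereRadius;
    float disc = b * b - c;
    if (disc < 0.0) discard;                      // ray misses the sphere
    float t = b - sqrt(disc);                     // nearest intersection
    vec3 hit = d * t;                             // view-space hit position
    vec4 clip = projMatrix * vec4(hit, 1.0);
    gl_FragDepth = clip.z / clip.w * 0.5 + 0.5;   // assumes default depth range [0, 1]
    fragColor = vec4(normalize(hit - sphereCenterView) * 0.5 + 0.5, 1.0);   // visualize the normal
}

Because the intersection is done per fragment rather than per vertex, this also behaves correctly with off-center frusta.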
I'm implementing a deferred lighting mechanism in my OpenGL graphics engine following this tutorial. It works fine, I don't get into trouble with that.
When it comes to the point lights, it says to render spheres around the lights so that only those pixels that might be affected by the light are passed through the lighting shader. There are some issues with that method concerning face culling and camera position, precisely explained here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light-sphere?
A sphere always looks like a circle on the screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scaling of the circle. This method would have 3 advantages:
No cull-face issue
No camera-position-in-light-sphere issue
Much more efficient (amount of vertices severely reduced + no stencil test)
Are there any disadvantages using this technique?
My second question deals with implementing the mentioned method. The circle's center position could easily be calculated as usual:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But now how to calculate the scaling of the resulting circle?
It should be dependent on the distance (camera to light) and somehow the perspective view.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles. We want to apply lighting to those polygons in the scene that are inside the light volume. As the scene is rendered, the depth buffer is written to. This data is used by the light-volume render step to apply lighting correctly. If it were just a circle, you would have no way of knowing whether geometry in front of or behind the volume (the objects labelled A and C in the tutorial's diagram) should be illuminated or not, even if the circle were projected to a correct depth.
I didn't read the whole thing, but I think I understand the general idea of the method.
It won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane - in that case none of the fragments will be generated, and the light will "disappear".
Lights described in the article will have a sharp falloff - understandably so, since a sphere or circle has a sharp border. I wouldn't call it point lighting...
To me this looks like premature optimization... I would just render a whole screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulation of OpenGL state and the additional draw operations also introduce overhead, and it is not clear which overhead will outweigh the other here.
You forgot to do perspective division here
The simplest way to calculate the scaling is to transform a point on the surface of the sphere to screen coordinates and measure the length of the vector from the projected center. It must be a point on the silhouette border in screen space, obviously.
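A small GLSL sketch of that suggestion, reusing the question's modelViewProjectionMatrix; cameraRight is an assumed world-space camera right vector, and the result slightly underestimates the true silhouette, which is an ellipse:

uniform mat4 modelViewProjectionMatrix;
uniform vec3 cameraRight;   // assumed world-space camera right vector

vec2 toScreen(vec3 p)
{
    vec4 clip = modelViewProjectionMatrix * vec4(p, 1.0);
    return clip.xy / clip.w;                  // perspective division
}

float circleScale(vec3 lightPos, float radius)
{
    vec2 c = toScreen(lightPos);
    vec2 edge = toScreen(lightPos + cameraRight * radius);
    return length(edge - c);                  // radius in NDC units
}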
From what I gathered, he used sparse voxel octrees and raycasting. It doesn't seem like he used OpenGL or Direct3D, and when I look at the game Voxelstein it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares. That caught me off guard; I'm not sure how he is doing that without OpenGL or Direct3D.
I tried to read through the source code but it was difficult for me to understand what was going on. I would like to implement something similar and would like the algorithm to do so.
I'm interested in how he performed rendering, culling, occlusion, and lighting. Any help is appreciated.
The algorithm is closer to ray-casting than ray-tracing. You can get an explanation from Ken Silverman himself here:
https://web.archive.org/web/20120321063223/http://www.jonof.id.au/forum/index.php?topic=30.0
In short: on a grid, store an RLE list of surface voxels for each x,y stack of voxels (if z means 'up'). Assuming 4 degrees of freedom, ray-cast across it for each vertical line on the screen, and maintain a list of visible spans which is clipped as each cube is drawn. For 6 degrees of freedom, do something similar but with scanlines which are tilted in screen space.
I didn't look at the algorithm itself, but I can tell the following based off the screenshots:
it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares
Yep, that's how ray-tracing works. It doesn't draw 2D squares, it traces rays. If you trace your rays against many miniature cubes, you'll see many miniature cubes. The scene is represented by many miniature cubes (voxels), hence you see them when you look up close. It would be nice to actually smooth the data somehow (trace against a smoothed energy function) to make them look smoother.
I'm interested in how he performed rendering
by ray-tracing
culling
no need for culling when ray-tracing, particularly in a voxel scene. As you move along the ray you check only the voxels that the ray intersects.
occlusion
voxel-voxel occlusion is handled naturally by ray-tracing; it would return the first voxel hit, which is the closest. If you draw sprites you can use a Z-buffer generated by the ray-tracer.
and lighting
It's possible to approximate the local normal by looking at nearby cells and checking which are occupied and which are not, then performing the lighting calculation. Alternatively, each voxel can store a normal along with its color or other material properties.
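If the voxels are accessible as a 3D occupancy texture, a hedged sketch of that neighbour-based normal estimate using central differences (occupancyTex and voxelSize are assumptions; Voxlap itself does this on the CPU):

uniform sampler3D occupancyTex;   // 1.0 = solid voxel, 0.0 = empty (assumed)
uniform vec3 voxelSize;           // one voxel expressed in texture coordinates

vec3 estimateNormal(vec3 p)
{
    float dx = texture(occupancyTex, p + vec3(voxelSize.x, 0.0, 0.0)).r
             - texture(occupancyTex, p - vec3(voxelSize.x, 0.0, 0.0)).r;
    float dy = texture(occupancyTex, p + vec3(0.0, voxelSize.y, 0.0)).r
             - texture(occupancyTex, p - vec3(0.0, voxelSize.y, 0.0)).r;
    float dz = texture(occupancyTex, p + vec3(0.0, 0.0, voxelSize.z)).r
             - texture(occupancyTex, p - vec3(0.0, 0.0, voxelSize.z)).r;
    // the occupancy gradient points into the solid, so flip it for the outward normal
    return normalize(-vec3(dx, dy, dz));
}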
I am working on an application that detects the most prominent rectangle in an image, then seeks to rotate it so that the bottom left of the rectangle rests at the origin, similar to how IUPR's OSCAR system works. However, once the most prominent rectangle is detected, I am unsure how to take into account the depth component or z-axis, as the rectangle won't always be "head-on". Any examples to further my understanding would be greatly appreciated. Seen below is an example from IUPR's OSCAR system.
(image: http://quito.informatik.uni-kl.de/oscar/oscar.php?serverimage=img_0324.jpg&montage=use)
You don't actually need to deal with the 3D information in this case; it's just a mapping function from one set of coordinates to another.
Look at affine transformations for correcting simple skew; full perspective distortion needs a projective transform (a homography). Either way, you should be able to find code somewhere that will calculate the transform from the 4 points at the corners of your rectangle.
Almost forgot - if "fast" is really important, you could simplify the system to only use simple shear transformations in combination, though that'll have a bad impact on image quality for highly-tilted subjects.
Actually, I think you can get away with something much simpler than Mark's approach.
Once you have the 2D coordinates on the skewed image, re-purpose those coordinates as texture coordinates.
In a renderer, draw a simple rectangle where each corner is texture mapped to the corresponding corner found on the skewed 2D image (normalized and otherwise transformed into your rendering system's texture coordinate space).
Now you can rely on hardware (using OpenGL or similar) to do the correction for you, or you can write your own texture mapper:
The aspect ratio will need to be guessed at since we are disposing of the actual 3D info. However, you can get away with just taking the max width and max height of your skewed rectangle.
Perspective Texture Mapping by Chris Hecker
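As a hedged sketch of the "write your own" route: solve a 3x3 homography from the four detected corners on the CPU (the homography uniform below is an assumption), then sample the photo per fragment with a perspective divide:

uniform sampler2D skewedImage;   // the photo containing the detected rectangle
uniform mat3 homography;         // maps rectified (0..1) coords to source image coords (assumed, solved on the CPU)
in vec2 rectifiedUV;             // 0..1 across the output rectangle
out vec4 fragColor;

void main()
{
    vec3 src = homography * vec3(rectifiedUV, 1.0);
    fragColor = texture(skewedImage, src.xy / src.z);   // perspective divide per fragment
}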