I am unsure of how to describe what I'm after, so I drew a picture to help:
My question is: is it possible within OpenGL to create the illusion of those pixel-looking bumps on a single polygon, without having to resort to using many polygons? And if it is, what's the method?
I think what you're looking for is actually parallax mapping (or parallax occlusion mapping).
Demos:
http://www.youtube.com/watch?v=01owTezYC-w
http://www.youtube.com/watch?v=gcAsJdo7dME&NR=1
http://www.youtube.com/watch?v=njKdLvmBl88
Parallax mapping basically works by using the height map to alter the texture UV coordinate being used.
The main disadvantage to parallax is that anything that appears to be 'outside' the polygon will be clipped (think of looking at an image on a 3D TV), so it's best for things indented in a surface rather than sticking out of it (although you can reduce this by making the polygon larger than the visible texture area). It's also fairly complex and would need to be combined with other shader techniques for a good effect.
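As a rough illustration, the core of it in a GLSL fragment shader is just a UV offset along the view direction (the names heightMap, heightScale and viewDirTS are assumptions; viewDirTS is the tangent-space direction from the surface towards the eye):

    #version 330 core

    uniform sampler2D colorMap;   // regular diffuse texture (assumed name)
    uniform sampler2D heightMap;  // grayscale height map (assumed name)
    uniform float heightScale;    // how deep the effect looks, e.g. 0.04

    in vec2 uv;          // interpolated texture coordinate
    in vec3 viewDirTS;   // tangent-space direction from the surface towards the eye

    out vec4 fragColor;

    void main()
    {
        vec3 v = normalize(viewDirTS);
        float height = texture(heightMap, uv).r;

        // Slide the UV along the view direction in proportion to the height,
        // so higher texels appear to shift towards the viewer.
        vec2 offsetUV = uv - (v.xy / v.z) * height * heightScale;

        fragColor = texture(colorMap, offsetUV);
    }

Parallax occlusion mapping replaces that single offset with a short ray march against the height map, which is what gives the occlusion.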
Bump mapping works by using a texture for the normals; this makes the lighting appear 3D, but it does not change the geometry based on the viewer's position, only the shading. Bump mapping would also be fairly useless for the OP's sample image, since the surface is all at the same angle, just at different heights; bump mapping relies on changes in the surface's angles. You would have to slope the edges like this.
Displacement mapping/tessellation uses a texture to generate more polygons, rather than rendering just one polygon.
There's a video comparing all 3 here
EDIT: There is also relief mapping, which is similar to parallax. See the demo. There's a comparison video too (it's a bit low quality, but relief looks like it gives better depth).
I think what you're after is bump mapping. The link goes to a simple tutorial.
You may also be thinking of displacement mapping.
Of the techniques mentioned in other people's answers:
Bump mapping is the easiest to achieve, but doesn't do any occlusion.
Parallax mapping is probably the most complex to achieve, and doesn't work well in all cases.
Displacement mapping requires high-end hardware and drivers, and creates additional geometry.
Actually modeling the polygons is always an option.
It really depends on how close you expect the viewer to be and how prominent the bumps are. If you're flying down the Death Star trench, you'll need to model the bumps or use displacement mapping. If you're a few hundred meters up, bump mapping should suffice.
If you have DX11-class hardware, you could tessellate the polygon and then apply displacement mapping. See http://developer.nvidia.com/node/24. But it gets a little complicated to get it running and to develop something on top of it.
Until today, when I wanted to create reflections (a mirror) in OpenGL, I rendered a view into a texture and displayed that texture on the mirroring surface.
What I want to know is: 1. are there any other methods to create a mirror in OpenGL?
And 2. can this be done solely in shaders (e.g. the geometry shader)?
Ray-tracing. You can write a ray-tracer in the fragment shader (every fragment follows a ray). Ray-tracers can perfectly deal with reflection (mirroring) on all kinds of surfaces.
You can find an OpenGL example here and a WebGL example including mirroring here.
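To give a flavour of it, here is a deliberately tiny GLSL fragment-shader sketch: a made-up scene of one sphere over a mirror plane at y = 0, where the mirror simply re-traces the reflected ray (everything here, including the scene and the constants, is illustrative and not taken from the linked examples):

    #version 330 core

    in vec2 ndc;            // fragment position in [-1,1], from a fullscreen quad
    out vec4 fragColor;

    // Toy scene: one sphere hovering over a mirror plane at y = 0.
    const vec3 sphereCenter = vec3(0.0, 1.0, -4.0);
    const float sphereRadius = 1.0;

    // Distance along the ray to the sphere, or -1.0 on a miss.
    float hitSphere(vec3 ro, vec3 rd)
    {
        vec3 oc = ro - sphereCenter;
        float b = dot(oc, rd);
        float c = dot(oc, oc) - sphereRadius * sphereRadius;
        float disc = b * b - c;
        if (disc < 0.0) return -1.0;
        float t = -b - sqrt(disc);
        return (t > 0.0) ? t : -1.0;
    }

    vec3 shade(vec3 ro, vec3 rd, out bool hitMirror, out vec3 hitPos)
    {
        hitMirror = false;
        hitPos = vec3(0.0);
        float tS = hitSphere(ro, rd);
        float tP = (rd.y < 0.0) ? -ro.y / rd.y : -1.0;   // mirror plane y = 0

        if (tS > 0.0 && (tP < 0.0 || tS < tP))
            return vec3(1.0, 0.3, 0.2);                   // flat-shaded sphere
        if (tP > 0.0) {
            hitMirror = true;
            hitPos = ro + tP * rd;
            return vec3(0.25);                            // mirror base colour
        }
        return vec3(0.1, 0.1, 0.2);                       // background
    }

    void main()
    {
        vec3 ro = vec3(0.0, 1.0, 0.0);                    // camera origin
        vec3 rd = normalize(vec3(ndc, -1.5));             // primary ray

        bool mirror; vec3 p;
        vec3 color = shade(ro, rd, mirror, p);
        if (mirror) {
            // One reflection bounce off the plane (normal = +Y).
            vec3 rrd = reflect(rd, vec3(0.0, 1.0, 0.0));
            bool dummy; vec3 dummyP;
            color = mix(color, shade(p + 0.001 * rrd, rrd, dummy, dummyP), 0.6);
        }
        fragColor = vec4(color, 1.0);
    }

A real ray-tracer would of course trace the actual scene geometry rather than hard-coded shapes, but the reflection itself is just "re-shade along reflect(rd, n)".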
There is no universal way to do that in any 3D API I know of.
Depending on your case there are several possible techniques with different downsides.
Planar reflections: That's what you are doing already.
Note that your mirror needs to be flat, and you have to clip so anything closer than the mirror isn't rendered into the texture.
Good old cubemaps: attach a cubemap to each mirror, then sample it in the reflection direction. This works for any surface, but you will need to render the cubemaps (which can be done only once if you don't care about moving objects being reflected). I don't think you can do this without shaders, but only the mirror will need one. It's a very common technique, as it's easy to implement, can be dynamic, and is fairly cheap, while being easy to integrate into an existing engine.
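The fragment shader for the mirror itself is tiny; a sketch (uniform and varying names are assumptions):

    #version 330 core

    uniform samplerCube envMap;   // cubemap rendered from the mirror's position (assumed name)
    uniform vec3 cameraPos;       // world-space camera position

    in vec3 worldPos;             // world-space fragment position
    in vec3 worldNormal;          // world-space surface normal

    out vec4 fragColor;

    void main()
    {
        vec3 viewDir = normalize(worldPos - cameraPos);
        vec3 reflDir = reflect(viewDir, normalize(worldNormal));
        fragColor = texture(envMap, reflDir);
    }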
Screen-space ray marching: it's what danny-ruijters suggested. Kind of like SSAO: for each pixel, sample the depth buffer along the reflection vector until you hit something. This has the advantage of being applicable anywhere (on arbitrarily complex surfaces); however, it can only reflect things that appear on screen, which can introduce lots of small artifacts, but it's completely dynamic and very simple to implement. Note that you will need an additional pass (or to render normals into a buffer) to access your scene's final colour while computing the reflections. You absolutely need shaders for this, but it's a post-process, so it won't interfere with the scene rendering if that's what you fear.
Some modern game engines use this to add small details to reflective surfaces without the burden of having to compute/store cubemaps.
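A rough GLSL sketch of the marching loop, to show the shape of it (all names, the step size and the crude depth comparison are simplifying assumptions; real implementations refine the hit point and fade out near the screen edges):

    #version 330 core

    uniform sampler2D sceneColor;   // colour of the already-rendered scene (assumed name)
    uniform sampler2D sceneDepth;   // depth buffer of that scene (assumed name)
    uniform mat4 projection;        // camera projection matrix

    in vec3 viewPos;                // view-space position of this fragment
    in vec3 viewNormal;             // view-space normal of this fragment

    out vec4 fragColor;

    void main()
    {
        vec3 reflDir = reflect(normalize(viewPos), normalize(viewNormal));

        vec3 p = viewPos;
        vec3 hitColor = vec3(0.0);
        const int   STEPS    = 64;
        const float STEPSIZE = 0.1;

        for (int i = 0; i < STEPS; ++i) {
            p += reflDir * STEPSIZE;

            // Project the sample point back into screen space.
            vec4 clip = projection * vec4(p, 1.0);
            vec2 uv = clip.xy / clip.w * 0.5 + 0.5;
            if (any(lessThan(uv, vec2(0.0))) || any(greaterThan(uv, vec2(1.0))))
                break;                       // left the screen: nothing to reflect

            // Did the ray pass behind the geometry stored in the depth buffer?
            float sceneZ = texture(sceneDepth, uv).r;      // stored depth
            float rayZ   = clip.z / clip.w * 0.5 + 0.5;    // ray depth, same range
            if (rayZ > sceneZ) {
                hitColor = texture(sceneColor, uv).rgb;
                break;
            }
        }
        fragColor = vec4(hitColor, 1.0);
    }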
There are probably many other ways to render mirrors, but these are the three main ways (at least as far as I know) of doing reflections.
I need to render a simple 3D cube with OpenGL filled with points that lie on a regular 64x64x64 grid. An image can be found here.
It's hard to explain, but obviously there are some perceptual difficulties due to the projection from 3D to 2D. I tried to displace the points by a randomly generated offset, which helped a little bit, but wasn't really satisfying.
I think there's even a name for that effect, but I couldn't find it, so it would be great if someone could name it and maybe give some advice on reducing it.
You might be thinking of Moiré patterns. MSAA (multi-sample anti-aliasing) might help, or perhaps the introduction of jitter. See also: Supersampling
Alternatively, you could draw the points using point sprites or billboarding, which can be implemented very efficiently using modern (GL) geometry shaders.
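For example, a geometry shader along these lines expands each point into a small screen-facing quad (pointSize and quadUV are made-up names; the vertex shader is assumed to have already transformed the point to clip space):

    #version 330 core
    // Geometry shader: expands every point into a small screen-aligned quad
    // (a cheap billboard). pointSize is an assumed uniform: half the quad
    // size in normalized device coordinates.

    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    uniform float pointSize;

    out vec2 quadUV;

    void main()
    {
        vec4 center = gl_in[0].gl_Position;         // already in clip space
        vec2 offsets[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                                 vec2(-1.0,  1.0), vec2(1.0,  1.0));
        for (int i = 0; i < 4; ++i) {
            quadUV = offsets[i] * 0.5 + 0.5;
            // Multiplying by center.w keeps the quad a constant size on screen
            // after the perspective divide.
            gl_Position = center + vec4(offsets[i] * pointSize * center.w, 0.0, 0.0);
            EmitVertex();
        }
        EndPrimitive();
    }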
I know that normal mapping describes the process of adding detail to meshes without increasing the polygon count, and that this is achieved by using specific normal textures for manipulating the way light is applied to the object. Okay.
But what is bump mapping then? Is it just another term for normal mapping?
How do the visual results compare? Can both techniques be combined?
Bump Mapping describes a general technique for simulating bumps and wrinkles on the surface of an object. This is normally accomplished by manipulating surface normals when doing lighting calculations.
Normal Mapping is a variation of Bump Mapping in which the surface normals are provided via a texture, with normals embedded into the RGB channels of the image.
Other techniques, such as parallax mapping, are also considered bump mapping techniques, because they also simulate bumps on the surface (in parallax mapping's case by distorting the texture lookup rather than only the normal).
To answer the second part of the question, they could fairly easily be combined. The base surface normals could be determined from a normal map and then modified via another bump mapping technique.
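A minimal GLSL sketch of the normal mapping part (uniform and varying names are assumptions): the normal comes out of the RGB channels, gets remapped from [0,1] to [-1,1], and replaces the interpolated vertex normal in the lighting calculation. Another bump technique could perturb n further before the lighting step.

    #version 330 core

    uniform sampler2D normalMap;   // tangent-space normals in RGB (assumed name)
    uniform sampler2D diffuseMap;
    uniform vec3 lightDirTS;       // light direction, already in tangent space

    in vec2 uv;
    out vec4 fragColor;

    void main()
    {
        // Decode: the texture stores components in [0,1], normals live in [-1,1].
        vec3 n = normalize(texture(normalMap, uv).rgb * 2.0 - 1.0);

        float diffuse = max(dot(n, normalize(-lightDirTS)), 0.0);
        fragColor = vec4(texture(diffuseMap, uv).rgb * diffuse, 1.0);
    }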
Bump mapping was originally suggested by Jim Blinn back in 1978. His system basically works by perturbing the normal on a surface by using the height of that texel and the height of the surrounding texels.
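In modern shader terms, that perturbation is just a finite difference of neighbouring height texels; a GLSL sketch (texture and uniform names are assumptions):

    #version 330 core

    uniform sampler2D heightMap;     // grayscale height texture (assumed name)
    uniform vec2 texelSize;          // 1.0 / texture resolution
    uniform float bumpStrength;      // how strongly heights tilt the normal

    in vec2 uv;
    out vec4 fragColor;

    void main()
    {
        // Finite differences of the surrounding texels, as in Blinn's scheme.
        float hL = texture(heightMap, uv - vec2(texelSize.x, 0.0)).r;
        float hR = texture(heightMap, uv + vec2(texelSize.x, 0.0)).r;
        float hD = texture(heightMap, uv - vec2(0.0, texelSize.y)).r;
        float hU = texture(heightMap, uv + vec2(0.0, texelSize.y)).r;

        // Tangent-space normal built from the height gradient.
        vec3 n = normalize(vec3((hL - hR) * bumpStrength,
                                (hD - hU) * bumpStrength,
                                1.0));

        // Visualise the perturbed normal; a real shader would light with it.
        fragColor = vec4(n * 0.5 + 0.5, 1.0);
    }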
This is quite similar to DUDV bump mapping (you may recall the original environment-mapped bump mapping introduced in DX6, which was DUDV). That works by pre-calculating the derivatives from above, so you can skip the first stage of the calculation (as it does not change each frame).
Normal mapping is a very similar technique that works by, simply, replacing the normal at each texel position. Conceptually it's much simpler.
There is another technique that produces "similar" results. It is called emboss bump mapping. This method works by using multipass rendering. Basically you end up subtracting a grayscale heightmap from the last pass, but offsetting it a small amount based on the light direction.
There are other ways of emulating surface topology as well.
Elevation mapping uses the height map as an alpha texture and then renders multiple slices through that texture with a different alpha value to simulate the change in height. If not performed correctly, however, the slices can be very visible.
Displacement mapping works by generating a 3D mesh that uses the texture as its basis. This, obviously, massively increases your vertex count.
Steep parallax, relief mapping, etc. are the newest techniques. They work by casting a ray through the heightmap until it intersects the surface. This has the big advantage that if a lump should occlude the texture behind it, it now does: the ray stops at the first intersection and never reaches the heightmap behind it, so it always displays the "closest" texel.
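A stripped-down GLSL sketch of that ray cast (the layer count, names and height convention are assumptions; relief mapping and parallax occlusion mapping add a refinement step after the loop):

    #version 330 core

    uniform sampler2D heightMap;   // 0 = lowest, 1 = highest (assumed convention)
    uniform sampler2D colorMap;
    uniform float heightScale;

    in vec2 uv;
    in vec3 viewDirTS;             // tangent-space direction from the surface towards the eye

    out vec4 fragColor;

    void main()
    {
        vec3 v = normalize(viewDirTS);

        const int LAYERS = 32;
        float layerDepth = 1.0 / float(LAYERS);
        vec2 stepUV = (v.xy / v.z) * heightScale / float(LAYERS);

        vec2 curUV = uv;
        float curDepth = 0.0;
        // March the ray downwards until it passes below the height field.
        for (int i = 0; i < LAYERS; ++i) {
            float surfaceDepth = 1.0 - texture(heightMap, curUV).r;
            if (curDepth >= surfaceDepth) break;   // hit: the ray is inside the surface
            curUV    -= stepUV;
            curDepth += layerDepth;
        }
        fragColor = texture(colorMap, curUV);
    }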
I have a program in which I need to apply a 2-dimensional texture (simple image) to a surface generated using the marching-cubes algorithm. I have access to the geometry and can add texture coordinates with relative ease, but the best way to generate the coordinates is eluding me.
Each point in the volume represents a single unit of data, and each unit of data may have different properties. To simplify things, I'm looking at sorting them into "types" and assigning each type a texture (or portion of a single large texture atlas).
My problem is I have no idea how to generate the appropriate coordinates. I can store the location of the type's texture in the type class and use that, but then seams will be horribly stretched (if two neighboring points use different parts of the atlas). If possible, I'd like to blend the textures on seams, but I'm not sure the best manner to do that. Blending is optional, but I need to texture the vertices in some fashion. It's possible, but undesirable, to split the geometry into parts for each type, or to duplicate vertices for texturing purposes.
I'd like to avoid using shaders if possible, but if necessary I can use a vertex and/or fragment shader to do the texture blending. If I do use shaders, what would be the most efficient way of telling them which texture or portion to sample? It seems like passing the type through a parameter would be the simplest way, but possibly slow.
My volumes are relatively small, 8-16 points in each dimension (I'm keeping them smaller to speed up generation, but there are many on-screen at a given time). I briefly considered making the isosurface twice the resolution of the volume, so each point has more vertices (8, in theory), which may simplify texturing. It doesn't seem like that would make blending any easier, though.
To build the surfaces, I'm using the Visualization Library for OpenGL and its marching cubes and volume system. I have the geometry generated fine, just need to figure out how to texture it.
Is there a way to do this efficiently, and if so what? If not, does anyone have an idea of a better way to handle texturing a volume?
Edit: Just to note, the texture isn't simply a gradient of colors. It's actually a texture, usually with patterns. Hence the difficulty in mapping it, a gradient would've been trivial.
Edit 2: To help clarify the problem, I'm going to add some examples. They may just confuse things, so consider everything above definite fact and these just as help if they can.
My geometry is in cubes, always (loaded, generated and saved in cubes). If shape influences possible solutions, that's it.
I need to apply textures, consisting of patterns and/or colors (unique ones depending on the point's "type") to the geometry, in a technique similar to the splatting done for terrain (this isn't terrain, however, so I don't know if the same techniques could be used).
Shaders are a quick and easy solution, although I'd like to avoid them if possible, as I mentioned before. Something usable in a fixed-function pipeline is preferable, mostly for the minor increase in compatibility and development time. Since it's only a minor increase, I will go with shaders and multipass rendering if necessary.
Not sure if any other clarification is necessary, but I'll update the question as needed.
On the texture combination part of the question:
Have you looked into 3D textures? As we're talking marching cubes, I should probably immediately say that I'm explicitly not talking about volumetric textures. Instead you stack all your 2D textures into a 3D texture. You then encode each texture coordinate as the 2D position it would normally be, with the texture it would reference as the third coordinate. It works best if your textures are generally of the type where, logically, to transition from one type of pattern to another you have to go through the intermediaries.
An obvious use example is texture mapping to a simple height map — you might have a snow texture on top, a rocky texture below that, a grassy texture below that and a water texture at the bottom. If a vertex that references the water is next to one that references the snow then it is acceptable for the geometry fill to transition through the rock and grass texture.
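A sketch of what the lookup ends up looking like in GLSL (names are assumptions): each vertex carries its ordinary 2D coordinate plus a per-vertex type value, and because that value is interpolated across the face, the hardware blends between neighbouring slices for free.

    #version 330 core

    uniform sampler3D textureStack;   // all 2D textures stacked into one 3D texture (assumed name)
    uniform float sliceCount;         // number of stacked textures

    in vec2 uv;        // ordinary 2D texture coordinate
    in float matType;  // per-vertex material type, interpolated across the face

    out vec4 fragColor;

    void main()
    {
        // Map the type to the centre of its slice; values between two types
        // sample partway between slices, giving a gradual blend along the face.
        float w = (matType + 0.5) / sliceCount;
        fragColor = texture(textureStack, vec3(uv, w));
    }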
An alternative is to do it in multiple passes using additive blending. For each texture, draw every face that uses that texture and draw a fade to transparent extending across any faces that switch from one texture to another.
You'll probably want to prep the depth buffer with a complete draw (with the colour masks all set to reject changes to the colour buffer) then switch to a GL_EQUAL depth test and draw again with writing to the depth buffer disabled. Drawing exactly the same geometry through exactly the same transformation should produce exactly the same depth values irrespective of issues of accuracy and precision. Use glPolygonOffset if you have issues.
On the coordinates part:
Popular and easy mappings are cylindrical, box and spherical. Conceptualise that your shape is bounded by a cylinder, box or sphere with a well defined mapping from surface points to texture locations. Then for each vertex in your shape, start at it and follow the normal out until you strike the bounding geometry. Then grab the texture location that would be at that position on the bounding geometry.
I guess there's a potential problem that normals tend not to be brilliant after marching cubes, but I'll wager you know more about that problem than I do.
This is a hard and interesting problem.
The simplest way is to avoid the issue completely by using 3D texture maps, especially if you just want to add some random surface detail to your isosurface geometry. Perlin noise based procedural textures implemented in a shader work very well for this.
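As a sketch of the idea (this is cheap hash-based value noise rather than true Perlin noise, and all the names are made up): the fragment's object-space position is fed straight into a procedural function, so the isosurface never needs a UV parametrization at all.

    #version 330 core
    // Hash-based value noise evaluated at the fragment's 3D position;
    // a Perlin/simplex implementation would slot in the same way.

    in vec3 objPos;       // object-space position from the vertex shader
    out vec4 fragColor;

    float hash(vec3 p)
    {
        return fract(sin(dot(p, vec3(12.9898, 78.233, 37.719))) * 43758.5453);
    }

    float valueNoise(vec3 p)
    {
        vec3 i = floor(p);
        vec3 f = fract(p);
        f = f * f * (3.0 - 2.0 * f);            // smoothstep-style fade

        // Trilinear interpolation of hashed corner values.
        return mix(mix(mix(hash(i + vec3(0.0, 0.0, 0.0)), hash(i + vec3(1.0, 0.0, 0.0)), f.x),
                       mix(hash(i + vec3(0.0, 1.0, 0.0)), hash(i + vec3(1.0, 1.0, 0.0)), f.x), f.y),
                   mix(mix(hash(i + vec3(0.0, 0.0, 1.0)), hash(i + vec3(1.0, 0.0, 1.0)), f.x),
                       mix(hash(i + vec3(0.0, 1.0, 1.0)), hash(i + vec3(1.0, 1.0, 1.0)), f.x), f.y), f.z);
    }

    void main()
    {
        float n = valueNoise(objPos * 8.0);      // scale controls feature size
        fragColor = vec4(vec3(n), 1.0);          // grayscale surface detail
    }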
The difficult way is to look into various algorithms for conformal texture mapping (also known as conformal surface parametrization), which aim to produce a mapping between 2D texture space and the surface of the 3D geometry that is in some sense optimal (least distorting). This paper has some good pictures. Be aware that the topology of the geometry is very important; it's easy to generate a conformal mapping to map a texture onto a closed surface like a brain, but considerably more complex for higher-genus objects, where it's necessary to introduce cuts/tears/joins.
You might want to try making a UV map of a mesh in a tool like Blender to see how they do it. If I understand your problem, you have a 3D field which defines a solid volume as well as a (continuous) color. You've created a mesh from the volume, and now you need to UV-map the mesh to a 2D texture with texels extracted from the continuous color space. In a tool you would define "seams" in the 3D mesh which you could cut apart so that the whole mesh could be laid flat to make a UV map. There may be aliasing in your texture at the seams, so when you render the mesh it will also be discontinuous at those seams (i.e. a triangle strip can't cross over the seam because it's a discontinuity in the texture).
I don't know any formal methods for flattening the mesh, but you could imagine cutting it along the seams and then treating the whole thing as a spring/constraint system that you drop onto a flat surface. I'm all about solving things the hard way. ;-)
Due to the issues with texturing and some of the constraints I have, I've chosen to write a different algorithm to build the geometry and handle texturing directly in that as it produces surfaces. It's somewhat less smooth than the marching cubes, but allows me to apply the texcoords in a way that works for my project (and is a bit faster).
For anyone interested in texturing marching cubes, or just blending textures, Tommy's answer is a very interesting technique and the links timday posted are excellent resources on flattening meshes for texturing. Thanks to both of them for their answers, hopefully they can be of use to others. :)
Using stereovision, I am producing depthmaps representing the 3d environment as viewed from a camera. There is one depthmap per "keyframe" associated with a camera position. The goal is to translate those 2d depthmaps into the 3d space (and later merge them to reconstruct the whole environment).
What would be the best (efficient) way to translate those depthmaps in 3d? Each depthmap is 752x480 large, so the number of triangles can grow quite fast. I would like an automatic system to manage the level of detail of the objects.
My team uses Ogre3d, so it would be great to find a solution with it. What I am looking for is very similar to what Terrain does, except that I want to be able to put the resulting objects wherever I want (translation, rotation), and I think Terrain can't do that.
I am quite new to Ogre3d so please forgive me if there is a straightforward solution I should know. If another tool than Ogre3d is more appropriate to my problem, I'd be happy to learn about it!
It's not clear what you want to do; "merge depthmap with environment"?
Anyway, in your case, you seem stuck on making them 3D using terrain heightmap techniques.
In your case, as the depthmap is screen-aligned, use a simple screen-space raycasting technique. So you must make a compositor in Ogre3D that takes that depth map and transforms it into the pixels you want.
Translation and rotation from a depth map may be limited to x/y on screen, since, as with a terrain heightmap (you cannot have caves using heightmaps), you are missing a dimension.
Not directly related, but it might help: in pure screen space, there is a technique called "position reconstruction" that helps recover object world-space positions, but only if you have a lot of information about the camera used to generate the depthmap you're using. For example: http://www.gamerendering.com/2009/12/07/position-reconstruction/
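The usual form of that reconstruction is a small GLSL snippet (matrix and uniform names are assumptions): unproject the screen position and the stored depth through the inverse view-projection matrix of the camera that produced the depthmap.

    #version 330 core

    uniform sampler2D depthMap;        // the depth map for this keyframe (assumed name)
    uniform mat4 invViewProj;          // inverse of that camera's view-projection matrix

    in vec2 uv;                        // screen-space coordinate in [0,1]
    out vec4 fragColor;

    void main()
    {
        float depth = texture(depthMap, uv).r;

        // Back to normalized device coordinates in [-1,1] on all three axes.
        vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);

        // Unproject and divide by w to get the world-space position.
        vec4 world = invViewProj * ndc;
        vec3 worldPos = world.xyz / world.w;

        fragColor = vec4(worldPos, 1.0);   // e.g. write it out to a position buffer
    }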