Procedural Planets, Heightmaps and textures - OpenGL

I am currently working on an OpenGL procedural planet generator. I hope to use it for a space RPG that will not allow players to go down to the surface of a planet, so I have ignored anything ROAM-related. At the moment I am drawing a cube with VBOs and mapping it onto a sphere as shown here.
I am familiar with most fractal heightmap-generation techniques and have already implemented my own version of midpoint displacement (not that useful in this case, I know).
My question is: what is the best way to procedurally generate the heightmap? I have looked at libnoise, which allows me to make tileable heightmaps/textures, but as far as I can see I would need to generate an unfolded cube net, leaving the tiling obvious.
Could anyone advise me on the best route to take?
Any input would be much appreciated.
Thanks,
Henry.

It looks like you understand the problem with generating a flat, seamless surface and then trying to map it onto a sphere.
How about using a 3D noise function instead? A 3D noise function takes three coordinates as its input instead of two, so imagine a 3D volume full of generated values instead of a 2D array. Once you have a 3D noise function, you can still generate a 2D texture, but instead of using 2D coordinates for each pixel, use the 3D coordinates of where that pixel would be on the sphere. Since the sphere's surface is continuous in 3D, the texture comes out seamless by construction.
Take a look about halfway down this page on Perlin noise: https://web.archive.org/web/20120829114554/http://local.wasp.uwa.edu.au/~pbourke/texture_colour/perlin/
I think it describes exactly what you want with regards to spheres.
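To make that concrete, here is a minimal C++ sketch of generating one cube-face heightmap this way. The noise3 function below is only a stand-in so the sketch is self-contained; swap in a real 3D Perlin or simplex implementation (libnoise's Perlin module, for example):

    #include <cmath>
    #include <vector>

    // Stand-in for a real 3D noise function (e.g. libnoise's Perlin
    // module). A product of sines gives smooth, repeatable values so
    // the sketch compiles and runs on its own.
    float noise3(float x, float y, float z)
    {
        return std::sin(3.1f * x) * std::sin(5.3f * y) * std::sin(7.9f * z);
    }

    // Fill one cube-face heightmap by sampling 3D noise at the point
    // where each texel lands on the unit sphere. Adjacent faces share
    // the same 3D positions along their edges, so the six maps meet
    // without seams.
    std::vector<float> generateFaceHeightmap(int size)
    {
        std::vector<float> heights(size * size);
        for (int v = 0; v < size; ++v) {
            for (int u = 0; u < size; ++u) {
                // Texel centre on the +Z face of the cube, in [-1, 1].
                float x = 2.0f * (u + 0.5f) / size - 1.0f;
                float y = 2.0f * (v + 0.5f) / size - 1.0f;
                float z = 1.0f;
                // Normalizing projects the cube point onto the sphere.
                float len = std::sqrt(x * x + y * y + z * z);
                heights[v * size + u] = noise3(x / len, y / len, z / len);
            }
        }
        return heights;
    }

The other five faces work the same way; only the mapping from (u, v) to the cube position changes.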

You may also want to check out this article from 2004 on how to 'split' up a sphere into manageable parts.
http://www.gamedev.net/reference/articles/article2074.asp

Related

Reduce negative perceptual effects when rendering a 3D cube filled with points with OpenGL

I need to render a simple 3D cube with OpenGL filled with points that lie on a regular 64x64x64 grid. An image can be found here.
It's hard to explain, but obviously there are some perceptual difficulties due to the projection from 3D to 2D. I tried displacing the points by a randomly generated offset, which helped a little, but wasn't really satisfactory.
I think there's even a name for that effect, but I couldn't find it, so it would be great if someone could name it and maybe give some advice on how to reduce it.
You might be thinking of Moiré patterns. MSAA (multi-sample anti-aliasing) might help, or perhaps the introduction of jitter. See also: Supersampling
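A minimal sketch of the jitter idea, with grid coordinates in cell units; the 0.25-cell jitter amount is just a guess to tune by eye:

    #include <random>
    #include <vector>

    struct Point { float x, y, z; };

    // Fill a 64x64x64 grid of points, offsetting each one by a small
    // random amount so that regular rows stop lining up into visible
    // interference patterns when projected to 2D.
    std::vector<Point> makeJitteredGrid(int n = 64, float jitter = 0.25f)
    {
        std::mt19937 rng(42);  // fixed seed: same cloud every frame
        std::uniform_real_distribution<float> d(-jitter, jitter);
        std::vector<Point> pts;
        pts.reserve(n * n * n);
        for (int z = 0; z < n; ++z)
            for (int y = 0; y < n; ++y)
                for (int x = 0; x < n; ++x)
                    pts.push_back({ x + d(rng), y + d(rng), z + d(rng) });
        return pts;
    }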
Alternatively, you could draw the points using point sprites or billboarding, which can be implemented very efficiently using modern (GL) geometry shaders.

rendered 3D Scene to point cloud

Is there a way to extract a point cloud from a rendered 3D scene (using OpenGL)?
In detail:
The input should be a rendered 3D scene.
The output should be, e.g., a three-dimensional array of vertices (x, y, z).
Mission possible or impossible?
Render your scene using an orthographic view so that all of it fits on screen at once.
Use a g-buffer (search for this term, or for "fat pixel" or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer.
Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple from each sample point into a linear array (see the sketch after these steps).
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast.
Going further with this:
Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
Repeat the rendering from several viewpoints (or, equivalently, for several rotations of the scene) to be sure of capturing fragments from all the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.
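A minimal sketch of the readback step, assuming a current GL context and a standard glOrtho(left, right, bottom, top, zNear, zFar) projection. Instead of a full g-buffer it reads the ordinary depth and color buffers, which is enough for opaque geometry because depth is linear in eye space under an orthographic projection:

    #include <GL/gl.h>
    #include <vector>

    struct Point { float x, y, z, r, g, b, a; };

    // Read back the framebuffer and unproject every covered pixel into
    // the orthographic view volume, producing one (X,Y,Z,R,G,B,A)
    // point per sample.
    std::vector<Point> readPointCloud(int w, int h,
                                      float left, float right,
                                      float bottom, float top,
                                      float zNear, float zFar)
    {
        std::vector<float> depth(w * h);
        std::vector<unsigned char> color(w * h * 4);
        glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, color.data());

        std::vector<Point> cloud;
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                float d = depth[y * w + x];
                if (d >= 1.0f) continue;            // background pixel
                const unsigned char* c = &color[(y * w + x) * 4];
                cloud.push_back({
                    left   + (right - left) * (x + 0.5f) / w,
                    bottom + (top - bottom) * (y + 0.5f) / h,
                    -(zNear + (zFar - zNear) * d),  // ortho depth is linear
                    c[0] / 255.0f, c[1] / 255.0f,
                    c[2] / 255.0f, c[3] / 255.0f });
            }
        }
        return cloud;
    }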
I think you should take your input data and manually multiply it by your transformation and modelview matrices. No need to use OpenGL for that, just some vector/matrix math.
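A minimal sketch of that math, assuming row-major 4x4 matrices (OpenGL's own fixed-function matrices are column-major, so transpose if you copy them out with glGetFloatv):

    struct Vec4 { float v[4]; };
    struct Mat4 { float m[4][4]; };  // row-major in this sketch

    // Multiply a vertex by a 4x4 matrix on the CPU, mirroring what the
    // modelview/projection transform does on the GPU.
    Vec4 transform(const Mat4& m, const Vec4& p)
    {
        Vec4 r = { { 0.0f, 0.0f, 0.0f, 0.0f } };
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                r.v[i] += m.m[i][j] * p.v[j];
        return r;
    }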
If I understand correctly, you want to deconstruct a final rendering (2D) of a 3D scene. In general, there is no capability built-in to OpenGL that does this.
There are however many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is for example what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters to identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
EDIT:
I think you might have a misunderstanding of how OpenGL rendering works. The application produces the vertices of the triangles forming polygons and 3D objects, and sends them to OpenGL. OpenGL then rasterizes these objects (i.e. converts them to pixels) to form a 2D rendering of the 3D scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want, since you are responsible for producing those vertices in the first place!

OpenGL, How to create a "bumpy Polygon"?

I am unsure of how to describe what I'm after, so I drew a picture to help:
My question: is it possible within OpenGL to create the illusion of those pixel-looking bumps on a single polygon, without having to resort to using many polygons? And if it is, what's the method?
I think what you're looking for is actually parallax mapping (or parallax occlusion mapping).
Demos:
http://www.youtube.com/watch?v=01owTezYC-w
http://www.youtube.com/watch?v=gcAsJdo7dME&NR=1
http://www.youtube.com/watch?v=njKdLvmBl88
Parallax mapping basically works by using the height map to offset the texture UV coordinates being used (see the sketch below).
The main disadvantage of parallax mapping is that anything that appears to be 'outside' the polygon will be clipped (think of looking at an image on a 3D TV), so it's best for things indented into a surface rather than sticking out of it (although you can reduce this by making the polygon larger than the visible texture area). It's also fairly complex and would need to be combined with other shader techniques for a good effect.
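To show the core idea in code, here is a minimal sketch of the single-step parallax offset, written as plain C++ standing in for the per-fragment shader math; the heightScale value is just an assumption to tune:

    struct Vec2 { float x, y; };
    struct Vec3 { float x, y, z; };

    // 'height' is the value sampled from the height map at 'uv', and
    // 'viewTS' is the normalized view direction in tangent space. The
    // UV is shifted along the view direction in proportion to the
    // height, so tall texels appear to shift toward the viewer as the
    // view angle changes.
    Vec2 parallaxOffsetUV(Vec2 uv, float height, Vec3 viewTS,
                          float heightScale = 0.05f)
    {
        float h = height * heightScale;
        // Dividing by viewTS.z makes the shift grow at grazing angles.
        return { uv.x + viewTS.x / viewTS.z * h,
                 uv.y + viewTS.y / viewTS.z * h };
    }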
Bump mapping works by using a texture of normals. This makes the lighting appear 3D, but it changes only the shading, not the geometry, no matter where the viewer stands. Bump mapping would also be fairly useless for the OP's sample image, since the surface is all at the same angle, just at different heights; bump mapping relies on changes in surface angle. You would have to slope the edges like this.
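For comparison, the heart of bump mapping is just lighting with the per-texel normal; a minimal sketch, again as plain C++ standing in for shader code:

    struct Vec3 { float x, y, z; };

    // 'mapNormal' is the tangent-space normal decoded from the normal
    // map (RGB in [0,1] remapped to [-1,1]); 'lightTS' is the
    // normalized light direction in tangent space. Diffuse shading
    // uses the per-texel normal instead of the flat polygon normal,
    // so the shading varies even though the geometry stays flat.
    float bumpDiffuse(Vec3 mapNormal, Vec3 lightTS)
    {
        float d = mapNormal.x * lightTS.x
                + mapNormal.y * lightTS.y
                + mapNormal.z * lightTS.z;
        return d > 0.0f ? d : 0.0f;  // no light from behind the surface
    }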
Displacement mapping/tessellation uses a texture to generate more polygons, rather than leaving a single polygon.
There's a video comparing all 3 here
EDIT: There is also relief mapping, which is similar to parallax mapping. See demo. There's a comparison video too (it's a bit low quality, but relief mapping looks like it gives better depth).
I think what you're after is bump mapping. The link goes to a simple tutorial.
You may also be thinking of displacement mapping.
Of the techniques mentioned in other people's answers:
Bump mapping is the easiest to achieve, but doesn't do any occlusion.
Parallax mapping is probably the most complex to achieve, and doesn't work well in all cases.
Displacement mapping requires high-end hardware and drivers, and creates additional geometry.
Actually modeling the polygons is always an option.
It really depends on how close you expect the viewer to be and how prominent the bumps are. If you're flying down the Death Star trench, you'll need to model the bumps or use displacement mapping. If you're a few hundred meters up, bump mapping should suffice.
If you have DX11-class hardware, you could tessellate the polygon and then apply displacement mapping. See http://developer.nvidia.com/node/24. But then it gets a little complicated to get it running and to develop something on top of it.

3D rendering of a surface from a depthmap

Using stereo vision, I am producing depthmaps representing the 3D environment as viewed from a camera. There is one depthmap per "keyframe", associated with a camera position. The goal is to translate those 2D depthmaps into 3D space (and later merge them to reconstruct the whole environment).
What would be the best (most efficient) way to translate those depthmaps into 3D? Each depthmap is 752x480 pixels, so the number of triangles can grow quite fast. I would like an automatic system to manage the level of detail of the objects.
My team uses Ogre3d, so it would be great to find a solution with it. What I am looking for is very similar to what Terrain does, except that I want to be able to put the resulting objects wherever I want (translation, rotation), and I think Terrain can't do that.
I am quite new to Ogre3d so please forgive me if there is a straightforward solution I should know. If another tool than Ogre3d is more appropriate to my problem, I'd be happy to learn about it!
It's not clear what you want to do when you say "merge the depthmaps with the environment".
Anyway, in your case, you seem set on making them 3D using terrain-heightmap techniques.
Since the depthmap is screen-aligned, you can use a simple screen-space raycasting technique. So you must write a compositor in Ogre3d that takes the depth map and transforms it at the pixels you want.
Translation and rotation from a depth map may be limited to xy on screen: as with terrain heightmaps (you cannot have caves using heightmaps), you are missing a dimension.
Not directly related, but it might help: in pure screen space there is a technique called "position reconstruction" that helps recover objects' world-space positions, but only if you have plenty of information about the camera used to generate the depthmap, for example: http://www.gamerendering.com/2009/12/07/position-reconstruction/
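For the "translate the depthmaps into 3D" part itself, here is a minimal unprojection sketch, assuming a pinhole camera with known intrinsics (fx, fy, cx, cy in pixels) and metric depth values. The points come out in the camera frame, so each keyframe's camera pose then places them in the world:

    #include <vector>

    struct Vec3 { float x, y, z; };

    // Turn a depthmap into one 3D point per valid pixel using the
    // standard pinhole back-projection x = (u - cx) * z / fx, etc.
    std::vector<Vec3> unprojectDepthmap(const std::vector<float>& depth,
                                        int w, int h,
                                        float fx, float fy,
                                        float cx, float cy)
    {
        std::vector<Vec3> points;
        points.reserve(depth.size());
        for (int v = 0; v < h; ++v) {
            for (int u = 0; u < w; ++u) {
                float z = depth[v * w + u];
                if (z <= 0.0f) continue;  // no valid stereo match here
                points.push_back({ (u - cx) * z / fx,
                                   (v - cy) * z / fy,
                                   z });
            }
        }
        return points;
    }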

CLOD Planet wide texturing in OpenGL

I'm finishing up on a 3D planet with ROAM (continuous level of detail).
My goal now is to get a good-quality render using textures.
I'm trying to find a way to use a tiling system (combining small, good-looking textures), but in a way that takes advantage of my CLOD mesh.
The current algorithms I've found that use such tiling systems produce one huge texture and then directly apply it. That is not what I want... the planet is very big, and I want more power than simply increasing the texture size.
Is there any known algorithm/OpenGL feature for this kind of thing?
I don't know much about shaders, but is it possible to create one that paints an object on its own... I mean, not by giving it texcoords, but by computing the right color for every pixel (not vertex) of the mesh?
PS: My world is built using Perlin noise... so I can get the height at any world point (a heightmap with infinite resolution).
You have used 3D Perlin noise for the terrain, so why not generate the texture as well? Generally, programs like Terragen, Vistapro and the like use altitude to select a range of colors from a palette, modify that color based on slope, and perhaps add detail from smaller textures based on both slope and altitude. In your case, distance could also modify detail. For that matter, 2D Perlin noise would work well for a detail texture.
Have you modified the heightmap at all? Something like an ocean would be hard to achieve with pure 3D Perlin noise, but flattening everything below a certain altitude and applying a nice algorithmic ocean texture (properly tuned 2D Perlin noise with transparency below a certain level) would look good.
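To sketch the altitude/slope idea in code (including the flattened ocean band): the palette and thresholds below are entirely made up, height is assumed normalized to [0, 1], and slope is the cosine of the angle between the surface normal and "up" (1 = flat, 0 = vertical). The same per-pixel logic can live in a fragment shader, which also answers the "color every pixel without texcoords" question, since it needs only world position and normal:

    struct Color { float r, g, b; };

    // Pick a base color from altitude bands, then let steep surfaces
    // show rock regardless of altitude.
    Color terrainColor(float height, float slope)
    {
        const Color water = { 0.10f, 0.30f, 0.60f };
        const Color grass = { 0.20f, 0.50f, 0.20f };
        const Color rock  = { 0.50f, 0.45f, 0.40f };
        const Color snow  = { 0.95f, 0.95f, 0.98f };

        Color base;
        if      (height < 0.30f) base = water;  // flattened "ocean" band
        else if (height < 0.70f) base = grass;
        else                     base = snow;

        if (height >= 0.30f && slope < 0.6f) base = rock;
        return base;
    }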