How to create a fractal cube? - OpenGL

I would like to render volumetric clouds in OpenGL.
I found an interesting paper that describes a simple technique to render volumetric clouds.
(http://www.inframez.com/events_volclouds_slide18.htm)
However, I don't know how to create their "fractal cube" (or Perlin-noise cube).
My question is: how to create the 6 tileable fractal textures of a cube?
Edit: my aim is to make a volumetric cloud object, not a cloud skybox.

A nice introduction to Perlin noise, written by Ken Perlin himself, is here. He talks about generating a one- or two-dimensional noise function in some detail, and then generalises it to show how it works in 3D, to generate a solid cube of noise like you want.
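A concrete starting point, sketched in C++ (a hedged example, not Perlin's reference code): 3D value noise that hashes the lattice corners and interpolates trilinearly. True Perlin noise uses gradients at the corners instead, but the structure is the same; the hash constants here are arbitrary assumptions.

    #include <cmath>

    // Hash a 3D lattice point to a pseudo-random value in [-1, 1].
    static float hash3(int x, int y, int z) {
        unsigned n = unsigned(x) * 73856093u ^ unsigned(y) * 19349663u
                   ^ unsigned(z) * 83492791u;
        n = (n << 13) ^ n;
        return 1.0f - float((n * (n * n * 15731u + 789221u) + 1376312589u)
                            & 0x7fffffffu) / 1073741824.0f;
    }

    // 3D value noise: trilinear interpolation of the eight corner hashes.
    float valueNoise3(float x, float y, float z) {
        int xi = int(std::floor(x)), yi = int(std::floor(y)), zi = int(std::floor(z));
        float tx = x - xi, ty = y - yi, tz = z - zi;
        auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
        float c00 = lerp(hash3(xi,   yi,   zi  ), hash3(xi+1, yi,   zi  ), tx);
        float c10 = lerp(hash3(xi,   yi+1, zi  ), hash3(xi+1, yi+1, zi  ), tx);
        float c01 = lerp(hash3(xi,   yi,   zi+1), hash3(xi+1, yi,   zi+1), tx);
        float c11 = lerp(hash3(xi,   yi+1, zi+1), hash3(xi+1, yi+1, zi+1), tx);
        return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
    }

Sampling valueNoise3 over a 3D grid gives you the "solid cube of noise" directly; there is no need to stitch six 2D textures together.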

When using a 2D billboarded cloud texture, you create an alpha-blended 2D texture where the transparency looks cloud-like. What they're asking you to do is almost the same thing, only the texture wraps around a cube seamlessly (like a skybox). The Perlin-noise filter is essentially an algorithm for making something look cloud-like.
My shortcut approach to this would be to use Photoshop's cloud filter to create your texture. Follow the basic concept of this tutorial for the alpha blending, but don't do the circular gradient. Cut it into a seamless skybox-like grid (i.e. so it has 6 sides and folds properly around a cube).

I think the 'fractal cube' texture they refer to is an FBM (fractal Brownian motion) texture generated from a number of octaves of Perlin noise. This Game Programming Gems chapter discusses how they are formed. The basic idea is to combine multiple 'octaves' of Perlin noise, with each octave having roughly twice the frequency of the previous one; see the sketch below. You can make the result tile seamlessly by modifying the noise function. Photoshop's cloud filter is basically FBM noise and tiles seamlessly, so you can just use that if you have access to Photoshop.
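To illustrate the octave idea (a hedged sketch, not the chapter's code), summing octaves of any 3D noise function, such as valueNoise3 from the earlier answer, looks like this:

    // Fractal Brownian motion: sum octaves of noise, doubling the frequency
    // and halving the amplitude each octave (persistence 0.5).
    float fbm3(float (*noise3)(float, float, float),
               float x, float y, float z, int octaves) {
        float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f, norm = 0.0f;
        for (int i = 0; i < octaves; ++i) {
            sum  += amplitude * noise3(x * frequency, y * frequency, z * frequency);
            norm += amplitude;
            frequency *= 2.0f;
            amplitude *= 0.5f;
        }
        return sum / norm; // normalise back to roughly [-1, 1]
    }

For seamless tiling you would additionally wrap the lattice coordinates inside the noise function (e.g. modulo the cube's period) so that opposite faces sample the same lattice points.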

If you're really interested in nice cloud rendering, then Mark Harris's algorithm is quite good albeit complicated: http://www.markmark.net/clouds/


Random voxel terrain on the GPU (DC vs. CMS)

I want to create random fractal terrain on the GPU (with a compute shader). I started by implementing marching cubes (Generating Complex Procedural Terrains Using the GPU), and it works really well: marching cubes on the GPU. However, marching cubes can't extract sharp features or use an adaptive resolution.
So I looked for a more advanced isosurface extraction algorithm and found Dual Contouring of Hermite Data. I implemented DC in Java to test the algorithm, and it looks great: DC on the CPU. (There are some holes in the mesh and no sharp features, but I was too lazy to implement/fix this because it's only a prototype.)
But I've noticed some negative aspects:
It is intercell-dependent. (I have no idea how to port this to the GPU; the only resource I've found is Dual Contouring with OpenCL.)
I don't know how to create a chunk system, because there are no "clear" borders (https://i.stack.imgur.com/62dy6.png).
So I continued my search for a better algorithm and found Cubical Marching Squares: Adaptive Feature Preserving Surface Extraction from Volume Data. This seems to be the perfect algorithm for me: intercell-independent, adaptive, sharp features, primal structure and even manifold. Unfortunately, there are no resources on how to implement this algorithm except for this Cubical Marching Squares Implementation. I think I understand the two parts of the algorithm: create a grid, then for each cell:
Subdivide until the maximum depth is reached or there is no need to do so.
Split each cell into 6 faces, extract their surface and stitch them together.
But I don't know how to connect those two parts (especially the part with transitional faces, page 38).
So: does anybody know how to implement DC as a shader, how to implement CMS, or a better algorithm? (Maybe dual marching cubes; I think it has the same problem as DC, but I haven't tested it yet.)
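For concreteness, here is a rough C++ sketch of the subdivision step (part 1 above). needsSubdivision is a hypothetical placeholder, not the paper's actual refinement test; a real implementation would inspect the Hermite data inside the cell.

    #include <array>
    #include <memory>

    struct Cell {
        float x = 0, y = 0, z = 0, size = 0;       // cell origin and edge length
        std::array<std::unique_ptr<Cell>, 8> kids; // empty for leaf cells
    };

    // Hypothetical refinement criterion: always refine (a real one would
    // test the volume data / Hermite samples inside the cell).
    static bool needsSubdivision(const Cell&) { return true; }

    // Part 1 of the algorithm: subdivide until the maximum depth is reached
    // or the cell no longer needs refinement.
    void subdivide(Cell& c, int depth, int maxDepth) {
        if (depth >= maxDepth || !needsSubdivision(c))
            return; // leaf cell: part 2 extracts its six faces independently
        float h = c.size * 0.5f;
        for (int i = 0; i < 8; ++i) {
            auto kid = std::make_unique<Cell>();
            kid->x = c.x + ((i & 1) ? h : 0);
            kid->y = c.y + ((i & 2) ? h : 0);
            kid->z = c.z + ((i & 4) ? h : 0);
            kid->size = h;
            subdivide(*kid, depth + 1, maxDepth);
            c.kids[i] = std::move(kid);
        }
    }

Because each leaf's surface is extracted per face, neighbouring leaves at different depths meet on a shared face, which is exactly where the transitional-face handling comes in.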
As you have already mentioned, for the GPU you need no intercell dependencies... I'm sure people have come up with workarounds for that in DC, but CMS should be one of the best for everything you need: it is intercell-independent (by definition), preserves sharp features, creates manifold geometry, and supports adaptive resolution.
In terms of resources on CMS I agree they are quite limited.
The original paper: http://graphics.csie.ntu.edu.tw/CMS/
This project by Matt Keeter has a 'C' CMS implementation in it (https://www.mattkeeter.com/projects/kokopelli/)
https://github.com/mkeeter/kokopelli <-- code
I used Matt Keeter's implementation as a reference for my partial 'C++' implementation (the thesis that you linked), here is the code for it in case you didn't find it:
https://bitbucket.org/GRassovsky/cubical-marching-squares
However, keep in mind that it is indeed partial; that is to say, it has the main algorithm working (adaptive and manifold), but I haven't got around to implementing the sharp-feature preservation, 2D and 3D disambiguation, etc. It is also currently a basic CPU implementation... I had good intentions to implement all those things and make a GPU implementation, but for now I have had no time.
"But I don't know how to connect those two parts" - that only goes to say how badly I have written my thesis, because I try to explain how I do this :D (Mind you, I am not sure exactly how this was done in the original paper... should we say that some things are not very clear there and you would have to use your imagination :))

OpenGL Picking from a large set

I'm trying to, in JOGL, pick from a large set of rendered quads (several thousand). Does anyone have any recommendations?
To give you more detail, I'm plotting a large set of data as billboards with procedurally created textures.
I've seen this post (OpenGL GL_SELECT or manual collision detection?) and have found it helpful. However, it can take my program up to several minutes to complete a render of the full set, so I don't think drawing twice (for color picking) is an option.
I'm currently drawing with calls to glBegin/glVertex.../glEnd. If I switched to batched rendering on the GPU with VAOs and VBOs, do you think I would get a speedup large enough to make color picking viable?
If not, given all of the recommendations against using GL_SELECT, do you think it would be worth me using it?
I've investigated multithreaded CPU approaches to picking these quads that sidestep OpenGL altogether. Do you think an OpenGL-less CPU solution is the way to go?
Sorry for all the questions. My main question remains: what's a good way to pick from a large set of quads using OpenGL (JOGL)?
The best way to pick from a large number of quads cannot be easily defined. I don't like color picking or similar techniques very much, because they seem impractical for most situations. I never understood why so many tutorials aimed at people new to OpenGL (or even to programming) focus on picking techniques that are useless for nearly everything. For example: try to get the pixel you clicked on in a heightmap: not possible. Try to locate the exact mesh in a model you clicked on: impractical.
If you have a large number of quads, you will probably need good spatial partitioning, or at least (better: also) a scene graph. OK, you don't strictly need this, but it helps a lot. Look at some scene graph tutorials for further information; it's a good thing to know when you start 3D programming, because you learn a lot of concepts and not just OpenGL code.
So what to do now to start with some picking? Unproject the position of your mouse cursor back into the scene with gluUnProject, which applies the inverse of your modelview and projection matrices. With the orientation of your camera you can then cast a ray into your spatial structure (or the scene graph that holds it) and check for collisions with your quads. I currently have no link, but if you search for "inverse modelview matrix" you should find pages that explain this better and in more detail than is practical here. A minimal sketch follows below.
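A hedged C++ sketch of building that ray with gluUnProject (JOGL exposes the same call through its GLU class); it assumes a current fixed-function matrix setup:

    #include <GL/glu.h>
    #include <cmath>

    struct Ray { double ox, oy, oz, dx, dy, dz; }; // origin and unit direction

    Ray pickingRay(int mouseX, int mouseY) {
        GLdouble model[16], proj[16];
        GLint view[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);

        // Window y grows downwards; GL's grows upwards.
        double wx = mouseX, wy = view[3] - mouseY;

        double nx, ny, nz, fx, fy, fz;
        gluUnProject(wx, wy, 0.0, model, proj, view, &nx, &ny, &nz); // near plane
        gluUnProject(wx, wy, 1.0, model, proj, view, &fx, &fy, &fz); // far plane

        Ray r = { nx, ny, nz, fx - nx, fy - ny, fz - nz };
        double len = std::sqrt(r.dx*r.dx + r.dy*r.dy + r.dz*r.dz);
        r.dx /= len; r.dy /= len; r.dz /= len;
        return r;
    }

Intersect this ray with your quadtree/scene graph nodes first, and only test the individual quads inside the nodes the ray actually hits.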
With this raycasting-based technique you will be able to find your quad in O(log n), where n is the number of quads you have. With some heuristics based on the exact layout of your application (your question is too generic to be more specific), you can improve this a lot for most cases.
An easy spatial structure for this is, for example, a quadtree. However, you should start with the raycasting first to fully understand the technique.
I've never faced such a problem, but in my opinion CPU-based picking is the best way to try.
If you have a large set of quads, you can group them spatially to avoid testing all of them. For example, you can group the quads into two boxes and first test which box the ray hits before testing the quads inside it.
I just implemented color picking, but glReadPixels is slow here (I've read somewhere that it can be bad for asynchronous operation between GL and the CPU, since it forces a synchronisation).
Another possibility seems to be using transform feedback and a geometry shader that does the scissor test. The GS can then discard all faces that do not contain the mouse position. The transform feedback buffer then contains exactly the information about the hovered meshes.
You probably want to write the depth to the transform feedback buffer too, so that you can find the topmost hovered mesh.
This approach also works nicely with instancing (additionally write the instance ID to the buffer).
I haven't tried it yet, but I guess it will be a lot faster than using glReadPixels.
I only found this reference for this approach.
I'm using a solution that I borrowed from the DirectX SDK; there's a nice example of how to detect the selected polygon in a vertex buffer object.
The same algorithm works nicely with OpenGL.

BlitzMax - generating 2D neon glowing line effect to png file

I'm looking to create a glowing line effect in BlitzMax, something like a Star Wars lightsaber or laser beam. It doesn't have to be realtime; just rendered to TImage objects and then maybe saved to PNG for later use in animation. I'm happy to use 3D features, but it will be for a 2D game.
Since it will be on a black/space background, my strategy is to draw a series of blurred lines with colour and high transparency, then increasingly central lines that are less blurred and more white. What I actually want to draw is bezier-curved lines. Drawing curved lines is easy enough, but I can't use the technique above to create a good laser/neon effect because it comes out looking very segmented. So I think it may be better to apply a blur effect/shader to what does render well: a 1-pixel bezier curve.
The problems I've been having are:
Applying a shader to just the area of the screen where the lines are drawn. If there's a way to draw lines to a texture, then blur that texture and save the PNG, that would be great to hear about. There's got to be a way to do this; I just haven't gotten the right elements working together yet. Any help from someone familiar with this stuff would be greatly appreciated.
Using just 2D calls could be advantageous, simpler to understand and re-use.
It would be very nice to know how to save a PNG that preserves the transparency/alpha stuff.
p.s. I've reviewed this post (and others), have the samples working, and have even developed my own 5x5 shader. But it's a 3D, scene-wide approach that doesn't seem to convert well to 2D or to just a certain area.
http://www.blitzbasic.com/Community/posts.php?topic=85263
Ok, well I don't know about BlitzMax, so I can't go into much detail regarding implementation, but to give you some pointers:
For applying shaders to specific parts of the image only, you will probably want to use multiple rendering passes to compose your scene.
If you have pixel access, doing the same things that fragment shaders do is of course possible "the oldskool way" in 2D, i.e. something like getpixel/setpixel, as sketched below. However, you'll get much poorer performance this way.
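To make the getpixel/setpixel idea concrete, here is a minimal CPU box-blur sketch. It is C++ rather than BlitzMax, but the loops translate directly; it blurs only the alpha channel, which is what matters for a glow over a black background. Note the O(w*h*r^2) cost, which is exactly the poor performance mentioned above.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Box-blur the alpha channel of a row-major RGBA8 buffer in place.
    void blurAlpha(std::vector<uint8_t>& rgba, int w, int h, int radius) {
        std::vector<uint8_t> src = rgba; // read from a copy while writing
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                int sum = 0, count = 0;
                for (int dy = -radius; dy <= radius; ++dy)
                    for (int dx = -radius; dx <= radius; ++dx) {
                        int sx = x + dx, sy = y + dy;
                        if (sx < 0 || sx >= w || sy < 0 || sy >= h) continue;
                        sum += src[(size_t(sy) * w + sx) * 4 + 3]; // alpha
                        ++count;
                    }
                rgba[(size_t(y) * w + x) * 4 + 3] = uint8_t(sum / count);
            }
    }

Running this a few times with a small radius approximates a Gaussian; drawing the sharp 1-pixel curve back on top afterwards gives the hot white core.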
If you have a texture with an alpha channel intact, saving in PNG with an alpha channel should Just Work (sorry, once again no idea how to do this in BlitzMax specifically). Just make sure you're using RGBA modes all along.

IDEAs: how to interactively render large image series using GPU-based direct volume rendering

I'm looking for ideas on how to convert a 30+ GB series of 2000+ colored TIFF images into a dataset that can be visualized at interactive frame rates using GPU-based volume rendering (OpenCL / OpenGL / GLSL). I want to use a direct volume visualization approach instead of surface fitting (i.e. raycasting instead of marching cubes).
The problem is two-fold. First, I need to convert my images into a 3D dataset. The first thing that came to mind is to treat all the images as 2D textures and simply stack them to create a 3D texture; a sketch of this follows below.
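A minimal C++ sketch of that stacking step; loadSliceRGBA is a hypothetical helper standing in for your TIFF decoder, and an OpenGL 1.2+ context with glTexImage3D available is assumed (you may need an extension loader on some platforms).

    #include <GL/gl.h>
    #include <cstdint>
    #include <vector>

    // Hypothetical: decodes slice z of the TIFF series to w*h RGBA8 pixels.
    std::vector<uint8_t> loadSliceRGBA(int z, int w, int h);

    GLuint buildVolumeTexture(int w, int h, int depth) {
        std::vector<uint8_t> volume;
        volume.reserve(size_t(w) * h * depth * 4);
        for (int z = 0; z < depth; ++z) {
            std::vector<uint8_t> slice = loadSliceRGBA(z, w, h);
            volume.insert(volume.end(), slice.begin(), slice.end());
        }
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, w, h, depth, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, volume.data());
        return tex;
    }

Of course, a 30+ GB volume will not fit in GPU memory in one piece, which is exactly where the downsampling / details-on-demand loading from the second problem comes in.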
The second problem is the interactive frame rates. For this I will probably need some sort of downsampling in combination with "details-on-demand": loading the high-res dataset when zooming in, or something similar.
A first point-wise approach I found is:
polygonization of the complete volume data through layer-by-layer processing and generating corresponding image texture;
carrying out all essential transformations through vertex processor operations;
dividing polygonal slices into smaller fragments, where the corresponding depth and texture coordinates are recorded;
in fragment processing, deploying the vertex shader programming technique to enhance the rendering of fragments.
But I have no concrete ideas of how to start implementing this approach.
I would love to see some fresh ideas or ideas on how to start implementing the approach shown above.
If anyone has any fresh ideas in this area, they're probably going to be trying to develop and publish them. It's an ongoing area of research.
In your "point-wise approach", it seems like you have outlined the basic method of slice-based volume rendering. This can give good results, but many people are switching to a hardware raycasting method. There is an example of this in the CUDA SDK if you are interested.
A good method for hierarchical volume rendering was detailed by Crassin et al. in their Gigavoxels paper. It uses an octree-based approach and only loads bricks into memory when they are needed.
A very good introductory book in this area is Real-Time Volume Graphics.
I've done a bit of volume rendering, though my code generated an isosurface using marching cubes and displayed that. However, in my modest self-education on volume rendering I did come across an interesting short paper, Volume Rendering on Common Computer Hardware, which comes with source examples too. I never got around to checking it out, but it seemed promising. It is in DirectX though, not OpenGL. Maybe it can give you some ideas and a place to start.

How to create fast and easy scene-independent shadows w/o shaders in OpenGL

Say I have a mesh (e.g. a sphere) in the center of a room full of cubes, and one light source. How can I do fast and easy shadow casting in OpenGL, using only the "standard" (fixed-function) pipeline? Note: the result must contain both cube and sphere shadows.
If you can generate a silhouette of the sphere, then you could use shadow volumes. NVIDIA hardware has also supported fixed-function shadow mapping for a fair while.
Shadow volumes have the disadvantage of very high fill rate requirements. Shadow maps can be better but require an extra pass.
If you are only casting shadows onto a single plane, it may well be easier to just project the geometry onto that plane.
There is no fast and easy way. There are lots of different techniques, each with its own pros and cons. You can look at a project I host on GitHub, which uses very simple code to create a shadow using the shadow volume technique (http://iuiz.github.com/VolumeShadow/). It is written in Java, but it should not be hard to port to any other language.
The most important ways to create shadows are the so-called "shadow mapping" method, where you render your scene to a texture with the camera at the light source, directed at each shadow-casting object, and the shadow volume technique (made famous by Doom 3).
I've found one way using stencil buffers. After being confused for a while, I finally got the idea: with this approach, the hardest part is looping through each light source and projecting all the scene objects. It looks nicer than texture shadowing and works faster than volumetric shadows. Here and here are some resources which helped me understand the matrix multiplication step (it confused me a bit when I was looking through the dino demo). For me, this method is the easiest to understand and use. The only question left to solve is how to calculate the projection matrix; a sketch of the classic formula follows below.
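For reference, a C++ sketch of the classic planar-shadow projection matrix (the standard textbook formula M = (plane . light) * I - light (x) plane). Given the receiving plane and a light position, multiplying it onto the modelview stack flattens the casters onto the plane:

    #include <GL/gl.h>

    // Build the matrix that projects geometry onto plane (a,b,c,d),
    // as seen from light (x,y,z,w); column-major, as OpenGL expects.
    void shadowMatrix(GLfloat m[16], const GLfloat plane[4], const GLfloat light[4]) {
        GLfloat dot = plane[0]*light[0] + plane[1]*light[1]
                    + plane[2]*light[2] + plane[3]*light[3];
        for (int col = 0; col < 4; ++col)
            for (int row = 0; row < 4; ++row)
                m[col*4 + row] = (row == col ? dot : 0.0f) - light[row]*plane[col];
    }

    // Usage: glPushMatrix(); glMultMatrixf(m); drawCasters(); glPopMatrix();
    // with the stencil buffer masking the shadow to the receiving surface.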
Although this method could also be varied a bit by using textures, as shown here.
Thanks everybody! =)