Perlin Noise as a Density Function in Dual Contouring (Terrain Generation) - C++

The issue I'm having is working out how 3D Perlin Noise is used as a density function with the Dual Contouring algorithm. I've found it difficult to find any Dual Contouring implementations, blogs, or write-ups that discuss the specifics of using 3D Perlin Noise with Dual Contouring to generate terrain.
I've managed to implement Dual Contouring with an implicit sphere, so I know it's working correctly, and I have a basic idea of how density functions work (at least in a sphere's case). I also have a basic knowledge of Perlin Noise (I've implemented the 3D version, tested via texture output and basic cubic voxel terrain generation) and understand how to compute the gradient (the analytical derivative in this case) for use with the QEF calculation.
I've experimented with a number of ways to try to get relatively decent-looking terrain, with no success:
1) Generate the density values for the grid via Perlin Noise (i.e. value < 0 is inside, > 0 is outside and 0 is on the contour), then generate the intersection values (where the contour intersects the cube) for the cubes completely separately, again using Perlin Noise, with the co-ordinate input being the index of the cube we're on plus a minor offset. (This basically just generated a really cubic surface.)
2) Use Perlin Noise in a similar way to the sphere function: generate the density grid via Perlin Noise, then for each intersection interpolate along the edge between the two neighbouring cubes' density values until the Perlin Noise hits 0 (i.e. the exact point where the density function crosses the edge), and use the co-ordinate that triggered the 0 output from the Perlin Noise. (This again generates a cubic surface; I think it might be because the interpolation usually hits 0 when the co-ords land on an integer value, i.e. on the Perlin lattice's edge.)
3) This is the one I've had the most success with. It's similar to 2), except I keep the input values to the Perlin Noise within a 0.1 to 0.9 range, i.e. within a single lattice cell. It feeds out OK values for some cubes, but calculating the QEF (the averaged point for the cube) sends a fair number of points well outside the range of a cube, despite all the intersection points being well within the cube. I think it's something to do with the way I use the Perlin gradients: essentially I interpolate to find the co-ordinate that makes the Perlin Noise return 0, then I use the gradient (analytical derivative) at this co-ordinate, and the combination of all these sends the co-ordinate miles away.
Feel free to ask for more information if it's required and I'll do my best to post it up; this is my first time posting, so sorry if the post is sub-par.
tl;dr How is 3D Perlin Noise used as a density function to generate terrain in a Dual Contouring algorithm? Specifically, how is it used to generate the intersection points and normals for edges for the QEF calculation?
Edit: The current method I'm trying (and failing with) to integrate Perlin Noise is: use the Perlin Noise generator to fill up the initial density data; after this point, for any edge with an intersection, I do some simple linear maths to find the ratio along the edge where the 0 crossing occurs (so if the density value on one side was 0.5 and on the other side it was -0.5, the output would be that the 0 is 50% of the way between them, i.e. 0.5f). Once I have this value I simply add it on to the relevant co-ordinate to get the intersection point on that edge. After this I use the intersection as input to calculate the Perlin Noise's 3D gradient (essentially just running the noise function with it as an input and calculating the analytical derivative using the method found here: 3D Perlin noise analytical derivative).
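For what it's worth, here is a minimal sketch of that edge-crossing step as I understand it; `perlin` and `perlinGradient` are placeholders for your own noise implementation and its analytical derivative:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float perlin(const Vec3& p);          // assumed: your 3D noise, density in [-1, 1]
Vec3  perlinGradient(const Vec3& p);  // assumed: its analytical derivative

// Given the two edge endpoints and their sampled densities (opposite signs),
// find the zero crossing by linear interpolation, then sample the gradient there.
void edgeCrossing(const Vec3& p0, const Vec3& p1, float d0, float d1,
                  Vec3& intersection, Vec3& normal)
{
    // Ratio along the edge where the density crosses zero.
    // For d0 = 0.5 and d1 = -0.5 this gives t = 0.5, i.e. the midpoint.
    float t = d0 / (d0 - d1);

    intersection = { p0.x + t * (p1.x - p0.x),
                     p0.y + t * (p1.y - p0.y),
                     p0.z + t * (p1.z - p0.z) };

    // The surface normal is the (normalised) gradient of the density field.
    Vec3 g = perlinGradient(intersection);
    float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
    if (len > 1e-6f) { g.x /= len; g.y /= len; g.z /= len; }
    normal = g;
}
```

One thing worth checking against your code: the original paper feeds unit normals into the QEF, so if the analytical gradients go in un-normalised, cells where the gradient magnitude is large can drag the minimiser far outside the cell.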
Once this is done, the gradient and intersection values are fed into the QEF function, which uses SVD decomposition instead of the QR decomposition that the original Dual Contouring paper describes as being more accurate (I found SVD simpler looking, and from the paper it also sounded like it would be less performance-heavy).
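In case it helps, here is a sketch of what that QEF solve can look like, assuming the Eigen library for the SVD (an illustration, not your exact code). The two details that commonly keep vertices inside their cells are solving relative to the mass point and truncating small singular values:

```cpp
#include <Eigen/Dense>
#include <vector>

// Each (intersection p_i, unit normal n_i) pair contributes a plane constraint
// n_i . x = n_i . p_i; the QEF minimiser is the least-squares solution.
Eigen::Vector3f solveQEF(const std::vector<Eigen::Vector3f>& points,
                         const std::vector<Eigen::Vector3f>& normals,
                         const Eigen::Vector3f& massPoint)  // mean of the intersections
{
    const int n = static_cast<int>(points.size());
    Eigen::MatrixXf A(n, 3);
    Eigen::VectorXf b(n);
    for (int i = 0; i < n; ++i) {
        A.row(i) = normals[i].transpose();
        // Solve relative to the mass point so that near-parallel planes
        // (flat cells) still give an answer near the cell.
        b(i) = normals[i].dot(points[i] - massPoint);
    }

    Eigen::JacobiSVD<Eigen::MatrixXf> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
    // Treat tiny singular values as zero; without this, nearly-flat cells
    // produce vertices far outside the cell (the symptom described above).
    svd.setThreshold(0.1f);
    return massPoint + svd.solve(b);
}
```

The 0.1 threshold is only a ballpark figure; it is worth experimenting with.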
A few screenshots to hopefully help a little by showing what it currently looks like. The first has the diagonal code commented out and the second has the diagonal in place; the number immediately after all the prints of "vertex out of bounds" is the number of vertices outside of their cube.
http://imgur.com/di1DxRQ
The second, with the diagonal division occurring:
http://imgur.com/X4uCab3

Related

How do you generate pseudorandom gradients for Perlin noise?

I have a question. I read this article about classic and simplex noise: here. I don't understand how you would generate a gradient for grid points. It says that you should pick 8 or 16 gradients within the unit circle, but how should those gradients repeat over time?
Gradient noises are often made by generating a pseudo-random vector for each grid point, but the method described here uses a set of pregenerated vectors:
To associate each grid point with exactly one gradient, the integer coordinates of the point can be used to compute a hash value, which in turn can be used as the index into a look-up table of the gradients.
If the hash function is periodic, the noise will be periodic too, and that is the case in the article (you can see it in the example code at the end).
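To make the quoted scheme concrete, here is a minimal sketch of the hash-and-lookup, using the 12 cube-edge gradients from improved Perlin noise; `perm` is assumed to be a shuffled 0..255 table duplicated to 512 entries:

```cpp
#include <cstdint>

// A 256-entry permutation of 0..255, duplicated so perm[i + 256] == perm[i].
// Fill it once at startup (e.g. with std::shuffle).
extern uint8_t perm[512];

// 12 pregenerated gradients (the edge directions of a cube), as in improved Perlin noise.
static const float grads[12][3] = {
    { 1, 1, 0}, {-1, 1, 0}, { 1,-1, 0}, {-1,-1, 0},
    { 1, 0, 1}, {-1, 0, 1}, { 1, 0,-1}, {-1, 0,-1},
    { 0, 1, 1}, { 0,-1, 1}, { 0, 1,-1}, { 0,-1,-1}
};

// Hash the integer lattice coordinates into a gradient index. Because perm
// has period 256, the gradients (and therefore the noise) repeat every 256
// lattice cells -- the periodicity mentioned above.
const float* gradientAt(int ix, int iy, int iz)
{
    uint8_t h = perm[perm[perm[ix & 255] + (iy & 255)] + (iz & 255)];
    return grads[h % 12];
}
```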

Smoothing motion parameters

I have been working on video stabilization for quite a few weeks now. The algorithm I'm following basically involves 3 steps:
1. FAST feature detection and Matching
2. Calculating affine transformation (scale + rotation + translation x + translation y ) from matched keypoints
3. Smooth motion parameters using cubic spline or b-spline.
I have been able to calculate the affine transform, but I am stuck at smoothing the motion parameters: I have been unable to evaluate a spline function to smooth the three parameters.
Here is a graph for smoothed data points
Any suggestion or help as to how I can code this to get the desired result shown in the graph?
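Not the spline fit you asked for, but as a baseline it can help to smooth each parameter sequence (scale, rotation, translation x/y, one array per parameter) with a Gaussian-weighted moving average first; a minimal sketch:

```cpp
#include <cmath>
#include <vector>

// Smooth one per-frame motion parameter (e.g. the rotation angle) with a
// Gaussian-weighted moving average. sigma is in frames; larger = smoother.
std::vector<double> smoothParams(const std::vector<double>& raw, double sigma)
{
    const int radius = static_cast<int>(std::ceil(3.0 * sigma));
    const int n = static_cast<int>(raw.size());
    std::vector<double> out(n);
    for (int i = 0; i < n; ++i) {
        double sum = 0.0, wsum = 0.0;
        for (int k = -radius; k <= radius; ++k) {
            int j = i + k;
            if (j < 0 || j >= n) continue;  // window shrinks at the sequence ends
            double w = std::exp(-0.5 * (k * k) / (sigma * sigma));
            sum  += w * raw[j];
            wsum += w;
        }
        out[i] = sum / wsum;
    }
    return out;
}
```

Note this needs the whole sequence (or at least `radius` frames of lookahead), so it fits your post-processing plan rather than real time.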
Here is the code that calculates the points on the curve:
B-spline Curves
But now the code will use all the control points as transform parameters when formulating the curve.
I think I will run it in post-processing (not real time).
Did you run the B-spline smoothing in real time?

Edge detection / angle

I can successfully threshold images and find edges in an image. What I am struggling with is trying to extract the angle of the black edges accurately.
I am currently taking the extreme points of the black edge and calculating the angle with the atan2 function, but because of aliasing, depending on the points you choose, the angle can come out with some degree of variation. Is there a reliable, programmable way of choosing the points to calculate the angle from?
Example image:
For example, the GIMP Measure tool reports the angle as 3.12°.
If you're writing your own library, then creating a robust solution for this problem will allow you to develop several independent chunks of code that you can string together to solve other problems, too. I'll assume that you want to find the corners of the checkerboard under arbitrary rotation, under varying lighting conditions, in the presence of image noise, with a little nonlinear pincushion/barrel distortion, and so on.
Although there are simple kernel-based techniques to find whole pixels as edge pixels, when working with filled polygons you'll want to favor algorithms that can find edges with sub-pixel accuracy so that you can perform accurate line fits. Even though the gradient from dark square to white square crosses several pixels, the "true" edge will be found at some sub-pixel point, and very likely not the point you'd guess by manually clicking.
I tried to provide a simple summary of edge finding in this older SO post:
what is the relationship between image edges and gradient?
For problems like yours, a robust solution is to find edge points along the dark-to-light transitions with sub-pixel accuracy, then fit lines to the edge points, and use the line angles. If you are processing a true camera image, and if there is an uncorrected radial distortion in the image, then there are some potential problems with measurement accuracy, but we'll ignore those.
If you want to find an accurate fit for an edge, then it'd be great to scan for sub-pixel edges in a direction perpendicular to that edge. That presupposes that we have some reasonable estimate of the edge direction to begin with. We can first find a rough estimate of the edge orientation, then perform an accurate line fit.
The algorithm below may appear to have too many steps, but my purpose is to point out how to provide a robust solution.
Perform a few iterations of erosion on black pixels to separate the black boxes from one another.
Run a connected components algorithm (blob-finding algorithm) to find the eroded black squares.
Identify the center (x,y) point of each eroded square as well as the (x,y) end points defining the major and minor axes.
Maintain the data for each square in a structure that has the total area in pixels, the center (x,y) point, the (x,y) points of the major and minor axes, etc.
As needed, eliminate all components (blobs) that are too small. For example, you would want to exclude all "salt and pepper" noise blobs. You might also temporarily ignore checkerboard squares that are cut off by the image edges; we can return to those later.
Then you'll loop through your list of blobs and do the following for each blob:
Determine the direction roughly perpendicular to the edges of the checkerboard square. How you accomplish this depends in part on what data you calculate when you run your connected components algorithm. In a general-purpose image processing library, a standard connected components algorithm will determine dozens of properties and measurements for each individual blob: area, roundness, major axis direction, minor axis direction, end points of the major and minor axis, etc. For rectangular figures, it can be sufficient to calculate the topmost, leftmost, rightmost, and bottommost points, as these will define the four corners.
Generate edge scans in the direction roughly perpendicular to the edges. These must be performed on the original, unmodified image. This generally assumes you have bilinear interpolation implemented to find the grayscale values of sub-pixel (x,y) points such as (100.35, 25.72), since your scan lines won't fall exactly on whole pixels (see the sketch after this list).
Use a sub-pixel edge point finding technique. In general, you'll perform a curve fit to the edge points in the direction of the scan, then find the real-valued (x,y) point at maximum gradient. That's the edge point.
Store all sub-pixel edge points in a list/array/collection.
Generate line fits for the edge points. These can use Hough, RANSAC, least squares, or other techniques.
From the line equations for each of your four line fits, calculate the line angle.
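To make two of the sub-steps above concrete, here are minimal sketches of bilinear sampling and a parabolic sub-pixel peak fit; the `GrayImage` type is hypothetical, so adapt it to whatever image class your library uses:

```cpp
#include <cmath>
#include <vector>

struct GrayImage {
    int width, height;
    std::vector<float> pixels;  // row-major grayscale
    float at(int x, int y) const { return pixels[y * width + x]; }
};

// Grayscale value at a sub-pixel point such as (100.35, 25.72).
// Caller must keep (x, y) inside [0, width-2] x [0, height-2].
float bilinear(const GrayImage& img, float x, float y)
{
    int x0 = static_cast<int>(std::floor(x)), y0 = static_cast<int>(std::floor(y));
    float fx = x - x0, fy = y - y0;
    float top = img.at(x0, y0)     * (1 - fx) + img.at(x0 + 1, y0)     * fx;
    float bot = img.at(x0, y0 + 1) * (1 - fx) + img.at(x0 + 1, y0 + 1) * fx;
    return top * (1 - fy) + bot * fy;
}

// Given gradient magnitudes g sampled at unit steps along a scan line, refine
// the whole-sample peak at index i (1 <= i <= g.size()-2) by fitting a parabola
// through (i-1, i, i+1). Returns a sub-pixel offset in [-0.5, 0.5] to add to i.
float subPixelPeak(const std::vector<float>& g, int i)
{
    float denom = g[i - 1] - 2.0f * g[i] + g[i + 1];
    if (std::fabs(denom) < 1e-9f) return 0.0f;  // flat: keep the whole-sample peak
    return 0.5f * (g[i - 1] - g[i + 1]) / denom;
}
```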
That algorithm finds the angles independently for each black checkerboard square. It may be overkill for this one application, but if you're developing a library maybe it'll give you some ideas about what sub-algorithms to implement and how to structure them. For example, the algorithm would rely on implementations of these techniques:
Image morphology (e.g. erode, dilate, close, open, ...)
Kernel operations to implement morphology
Thresholding to binarize an image -- the Otsu method is worth checking out
Connected components algorithm (a.k.a blob finding, or the OpenCV contours function)
Data structure for blob
Moment calculations for blob data
Bilinear interpolation to find sub-pixel (x,y) values
A linear ray-scanning technique to find (x,y) gray values along a specific direction (which will also rely on bilinear interpolation)
A curve fitting technique and means to determine steepest tangent to find edge points
Robust line fit technique: Hough, RANSAC, and/or least squares
Data structure for line equation, related functions
All that said, if you're willing to settle for a slight loss of accuracy, and if you know that the image does not suffer from radial distortion, etc., and if you just need to find the angle of the parallel lines defined by all checkerboard edges, then you might try:
Simple kernel-based edge point finding technique (Laplacian on Gaussian-smoothed image)
Hough line fit to edge points
Choose the two line fits with the greatest number of votes, which should be one set of horizontal-ish lines and the other set of vertical-ish lines
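A sketch of that simpler pipeline, assuming OpenCV (with Canny standing in for the Laplacian-of-Gaussian edge detector):

```cpp
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

// Estimate the checkerboard angle from the strongest Hough line.
double checkerAngleDeg(const cv::Mat& gray)
{
    cv::Mat blurred, edges;
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);
    cv::Canny(blurred, edges, 50, 150);

    // Standard Hough transform; 1 px / 0.5 degree accumulator resolution.
    std::vector<cv::Vec2f> lines;  // each entry is (rho, theta)
    cv::HoughLines(edges, lines, 1, CV_PI / 360, 80);
    if (lines.empty()) return 0.0;

    // OpenCV returns lines sorted by vote count, strongest first. theta is the
    // angle of the line's normal, but modulo 90 degrees that is the same
    // family as the line itself, so folding gives the board angle directly.
    double deg = lines[0][1] * 180.0 / CV_PI;
    return std::fmod(deg, 90.0);
}
```

The blur size, Canny levels, and vote threshold here are guesses to tune against your images.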
There are also other techniques that are less accurate but easier to implement:
Use a kernel-based corner-finding operator
Find the angles between corner points.
And so on and so on. As you're developing your library and creating robust implementations of standalone functions that you can string together to create application-specific solutions, you're likely to find that robust solutions rely on more steps than you would have guessed, but it'll also be more clear what the failure mode will be at each incremental step, and how to address that failure mode.
Can I ask, what C++ library are you using to code this?
Jerry is right: if you actually applied a threshold to the image it would be binary, black OR white. What you may have applied is a kind of limiter instead.
You can make a threshold function (if you're coding the image processing yourself) by applying the limiter you may have been using and then turning all non-white pixels black. If you have the right settings, the squares should be isolated and you will be able to calculate the angle.
Once this is done you can use a path-finding algorithm to find some edge; any edge will do. If you find a more or less straight path, you can use the extreme points as you are doing now to determine the angle. Since the checkerboard rotation is only relevant within 90 degrees, your angle should be taken modulo 90 degrees (π/2 radians).
I'm not sure it's (anywhere close to) the right answer, but my immediate reaction would be to threshold twice: once where anything but black is treated as white, and once where anything but white is treated as black.
Find the angle for each, then interpolate between the two angles.
Your problem has a few solutions, but all share one very important issue which you seem to neglect. Note: when you make geometrical calculations in an image, the points you use must be as far as possible from one another. You are taking 2 points inside a single square. Those points are very close to one another, so a slight error in the pixel location of the points leads to a large error in the angle. Why do you use only a single square, when you have many squares in the image?
Here are a few solutions:
Find the line angle of every square. You have at least 9 squares in the image and 4 lines in each square, which gives you a total of 36 angles (18 will be roughly at 3° and 18 will be at ~93°). Subtract 90° from the second group and you get 36 different measurements of the same angle. Sort them and take the average of the middle 30 (disregarding the lowest 3 and highest 3 measurements). This will give you an accurate result.
A second solution: find the left extreme point of the leftmost square and the right extreme point of the rightmost square, then calculate the angle between them. The result will be much more accurate because the points are far apart.
A third algorithm will give you accurate results because it doesn't involve finding any points and needs no thresholding. Just smooth the image, calculate the gradients in the X and Y directions (gx, gy), calculate the angle of the gradient at each pixel with atan2(gy, gx) and make a histogram of the angles. You will have 2 significant peaks, near 3° and 93°. Just find the peaks by searching for the maxima in the histogram. This will work even if you have a lot of noise in the image, even with antialiasing and JPG artifacts, and even if there are other drawings on the image. But remember: you must smooth the image a lot before calculating the derivatives.
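For reference, a sketch of this third approach, assuming OpenCV:

```cpp
#include <opencv2/imgproc.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Dominant edge angle in [0, 90) degrees via a histogram of gradient directions.
double dominantEdgeAngleDeg(const cv::Mat& gray)
{
    cv::Mat blurred, gx, gy;
    cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2.0);  // smooth a lot first
    cv::Sobel(blurred, gx, CV_32F, 1, 0);
    cv::Sobel(blurred, gy, CV_32F, 0, 1);

    // Histogram of gradient directions, folded modulo 90 degrees so the two
    // edge families (~3 deg and ~93 deg) land on the same peak.
    const int bins = 900;  // 0.1 degree resolution
    std::vector<int> hist(bins, 0);
    for (int y = 0; y < gray.rows; ++y)
        for (int x = 0; x < gray.cols; ++x) {
            float dx = gx.at<float>(y, x), dy = gy.at<float>(y, x);
            if (std::hypot(dx, dy) < 20.0f) continue;  // skip flat regions (tune this)
            double deg = std::atan2(dy, dx) * 180.0 / CV_PI;
            deg = std::fmod(deg + 360.0, 90.0);        // fold into [0, 90)
            ++hist[static_cast<int>(deg * bins / 90.0)];
        }
    int best = static_cast<int>(std::max_element(hist.begin(), hist.end()) - hist.begin());
    return best * 90.0 / bins;  // histogram peak, in degrees
}
```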

Finding curvature from a noisy set of data points using 2D/3D splines? (C++)

I am trying to extract the curvature of a pulse along its profile (see the picture below). The pulse is calculated on a grid of length and height 150 x 100 cells using finite differences, implemented in C++.
I extracted all the points with the same value (contour/level set) and marked them as the red continuous line in the picture below. The other colors are negligible.
Then I tried to find the curvature from this already noisy (due to grid discretization) contour line by the following means:
(moving average already applied)
1) Curvature via Tangents
The curvature of the line at point P is defined by κ = lim Δφ/Δs, i.e. the limit of the turning angle Δφ over the arclength Δs between P and a nearby point N. Since my points have a certain distance between them, I could not approximate this limit well enough, so the curvature was not calculated correctly. I tested it with a circle, which naturally has a constant curvature, but I could not reproduce this (only 1 significant digit was correct).
2) Second derivative of the line parametrized by arclength
I calculated the first derivative of the line with respect to arclength, smoothed it with a moving average, and then took the derivative again (2nd derivative). But here, too, I got only 1 significant digit correct.
Unfortunately, taking a derivative amplifies the already inherent noise.
3) Approximating the line locally with a circle
Since the reciprocal of the circle's radius is the curvature, I used the following approach:
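A sketch of one way to do such a local circle fit (the algebraic least-squares "Kasa" fit; an illustration of the idea, not necessarily the exact method used): fit x² + y² + Dx + Ey + F = 0 to a window of contour points, then the curvature is 1/radius.

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Least-squares circle fit: minimise the sum of (x^2 + y^2 + D x + E y + F)^2.
// Assumes the points are not collinear (det != 0).
double curvatureFromCircleFit(const std::vector<Pt>& pts)
{
    // Accumulate the 3x3 normal equations for the unknowns (D, E, F).
    double Sxx = 0, Sxy = 0, Syy = 0, Sx = 0, Sy = 0, n = 0;
    double Sxz = 0, Syz = 0, Sz = 0;
    for (const Pt& p : pts) {
        double z = -(p.x * p.x + p.y * p.y);  // right-hand side
        Sxx += p.x * p.x; Sxy += p.x * p.y; Syy += p.y * p.y;
        Sx  += p.x;       Sy  += p.y;       n   += 1.0;
        Sxz += p.x * z;   Syz += p.y * z;   Sz  += z;
    }
    // Solve [Sxx Sxy Sx; Sxy Syy Sy; Sx Sy n] [D E F]^T = [Sxz Syz Sz]^T
    // by Cramer's rule (fine for a fixed 3x3 system).
    auto det3 = [](double a, double b, double c, double d, double e,
                   double f, double g, double h, double i) {
        return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
    };
    double det = det3(Sxx, Sxy, Sx, Sxy, Syy, Sy, Sx, Sy, n);
    double D = det3(Sxz, Sxy, Sx, Syz, Syy, Sy, Sz, Sy, n) / det;
    double E = det3(Sxx, Sxz, Sx, Sxy, Syz, Sy, Sx, Sz, n) / det;
    double F = det3(Sxx, Sxy, Sxz, Sxy, Syy, Syz, Sx, Sy, Sz) / det;

    double r = std::sqrt(0.25 * (D * D + E * E) - F);  // radius from D, E, F
    return 1.0 / r;                                    // curvature = 1 / radius
}
```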
This worked best so far (2 correct significant digits), but I need to refine even further. So my new idea is the following:
Instead of using the values at the discrete points to determine the curvature, I want to approximate the pulse profile with a 3-dimensional spline surface. Then I extract the level set of a certain value from it to gain a smooth line of points, from which I can compute a nice curvature.
So far I have not found a C++ library which can generate such a Bezier spline surface. Could you maybe point me to one?
Also do you think this approach is worth giving a shot, or will I lose too much accuracy in my curvature?
Do you know of any other approach?
With very kind regards,
Jan
edit: It seems I cannot post pictures as a new user, so I removed all of them from my question, even though I find them important to explain my issue. Is there any way I can still show them?
edit2: ok, done :)
There is ALGLIB, which supports various flavours of interpolation:
Polynomial interpolation
Rational interpolation
Spline interpolation
Least squares fitting (linear/nonlinear)
Bilinear and bicubic spline interpolation
Fast RBF interpolation/fitting
I don't know whether it meets all of your requirements. I personally have not worked with this library yet, but I believe cubic spline interpolation could be what you are looking for (it is twice differentiable).
In order to prevent overfitting to your noisy input points you should apply some sort of smoothing mechanism; e.g. you could try whether things like moving-window average, Gaussian, or FIR filters are applicable. Also have a look at (cubic) smoothing splines.

Photoshop Drop Shadow style in GLSL

I am trying to implement Adobe Photoshop's Drop Shadow layer style in OpenGL. I need to add a blurriness to the edges of the shadow which is controlled by the "Size" property in Photoshop. I first thought that running it through a typical Gaussian blur algorithm would be fine. But looking closer at the effect, it is clear to me that a Gaussian blur wouldn't give the same result, as it processes all the fragments of the raster uniformly. In Photoshop the blur areas are always along the edges of the shadow shape, and they get wider towards the center of the shape. Can anyone point to an algorithm or GLSL example which blurs the shape at its edges based on the size parameter, just like in Photoshop?
UPDATE: Here is my final result using a Euclidean distance field and the technique outlined in
this Valve paper + the recent book "OpenGL Insights":
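For anyone landing here, the core of that distance-field technique is just a smoothstep over the distance values; a minimal sketch in C++ (a GLSL fragment-shader version is a direct transliteration):

```cpp
#include <algorithm>

// Standard smoothstep: 0 at e0, 1 at e1, smooth in between.
float smoothstepf(float e0, float e1, float x)
{
    float t = std::min(std::max((x - e0) / (e1 - e0), 0.0f), 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// Shadow opacity at a pixel, given the distance to the shape edge (0 at the
// edge, growing outwards) and a "size" controlling how far the falloff
// extends -- analogous to Photoshop's Size property.
float shadowAlpha(float dist, float size)
{
    return 1.0f - smoothstepf(0.0f, size, dist);
}
```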
I am very interested in this answer as well, since I am trying to replicate the Photoshop Layer Styles in my open source project:
https://github.com/vinniefalco/LayerEffects
This is what I know:
Drop Shadow and Inner Shadow are duals of each other. Adding a drop shadow on a layer is the same as adding an Inner Shadow to a layer with an inverted mask.
Outer Glow with Technique set to "Precise" calculates a Euclidean Distance Transform (EDT) with a Chamfer metric (a sketch of a chamfer transform follows this list).
Stroke set to Gradient, "Shape Burst" uses an identical EDT.
Outer Glow with Technique set to "Softer" uses some unknown transform identical to the one used for Drop Shadow.
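For reference, a minimal sketch of a two-pass 3-4 chamfer distance transform, the metric mentioned for "Precise" above (orthogonal steps cost 3, diagonal steps cost 4, so divide by 3 for approximate pixel distances):

```cpp
#include <algorithm>
#include <vector>

// inside[i] marks shape pixels; returns chamfer distances scaled by 3.
std::vector<int> chamfer34(const std::vector<bool>& inside, int w, int h)
{
    const int INF = 1 << 28;
    std::vector<int> d(w * h);
    for (int i = 0; i < w * h; ++i) d[i] = inside[i] ? 0 : INF;

    auto relax = [&](int i, int j, int cost) { d[i] = std::min(d[i], d[j] + cost); };

    // Forward pass: top-left to bottom-right.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int i = y * w + x;
            if (x > 0)              relax(i, i - 1,     3);
            if (y > 0)              relax(i, i - w,     3);
            if (x > 0 && y > 0)     relax(i, i - w - 1, 4);
            if (x < w - 1 && y > 0) relax(i, i - w + 1, 4);
        }
    // Backward pass: bottom-right to top-left.
    for (int y = h - 1; y >= 0; --y)
        for (int x = w - 1; x >= 0; --x) {
            int i = y * w + x;
            if (x < w - 1)              relax(i, i + 1,     3);
            if (y < h - 1)              relax(i, i + w,     3);
            if (x < w - 1 && y < h - 1) relax(i, i + w + 1, 4);
            if (x > 0 && y < h - 1)     relax(i, i + w - 1, 4);
        }
    return d;
}
```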
Since the distance transform plays a key role in almost every Photoshop Layer Style, it might be reasonable to assume that the unknown transform in Drop Shadow is a variation of the EDT. The only other variation I have been able to find is called the "Gaussian Distance Transform" (GDT). Unfortunately there is only one description of it, in the book "2-D and 3-D Image Registration for Medical, Remote Sensing, and Industrial Applications." The PDF is available:
http://read.pudn.com/downloads85/ebook/327739/Wiley%5B1%5D.Interscience.2-D.and.3-D.Image.Registration.for.Medical.Remote.Sensing.and.Industrial.Applications.pdf
Here's the description of the GDT:
If we convolve an image with a monotonically increasing radial function, an image will be obtained that functions like a distance transform image. The inverse of a Gaussian may be used as the monotonically increasing radial function. Therefore, to obtain the distance transform of an image, the image is convolved with a Gaussian and the intensities of the convolved image are inverted. Computation of the distance transform in this manner makes the obtained distances less sensitive to noise. This is demonstrated in an example in Fig. 4.6. Figures 4.6a and 4.6b show distance transforms of images 4.5a and 4.5b, respectively, computed by Gaussian convolution. Compared to the Euclidean distance transform, the distance transform computed by Gaussian convolution is less sensitive to noise.
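Taking that description at face value, a minimal sketch of the GDT, assuming OpenCV:

```cpp
#include <opencv2/imgproc.hpp>

// Blur the binary shape mask with a wide Gaussian, then invert the
// intensities so that larger values read as "farther from the shape".
cv::Mat gaussianDistanceTransform(const cv::Mat& mask8u, double sigma)
{
    cv::Mat shape, blurred, gdt;
    mask8u.convertTo(shape, CV_32F, 1.0 / 255.0);             // shape = 1, background = 0
    cv::GaussianBlur(shape, blurred, cv::Size(0, 0), sigma);  // kernel size derived from sigma
    cv::subtract(cv::Scalar::all(1.0), blurred, gdt);         // invert the convolved image
    return gdt;
}
```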
Given this image:
(source: imgfsr.com)
Here's the signed Euclidean Distance Transform and the signed Gaussian Distance Transform:
(source: imgfsr.com)
(Images from http://www.imgfsr.com/ifsr_dtg.html)