I recently became aware of cell-noise-based procedural generation techniques. I came across a website which explained the concept of cellular (or Worley) noise quite well. Upon further reading, that author links to Voro-Noise, in which Inigo Quilez (the author of Voro-Noise) gives a rough overview of his shader. However, I don't understand a few parts of it. Why is the hash function written like this:
vec3 hash3( vec2 p ) {
    vec3 q = vec3( dot(p, vec2(127.1, 311.7)),
                   dot(p, vec2(269.5, 183.3)),
                   dot(p, vec2(419.2, 371.9)) );
    return fract(sin(q) * 43758.5453);
}
I'm not sure of the significance of those numbers, nor why sin is used here; it doesn't appear to be explained on his page.
I also don't understand why there is a need to iterate over 25 points instead of the 9 used in previous examples.
for (int j = -2; j <= 2; j++) {
    for (int i = -2; i <= 2; i++) {
        ...
    }
}
Changing the '2' and '-2' in each loop to '1' and '-1' seems to create artifacts and change the nature of the blur.
Why are we taking the dot product of the difference vector with itself? (I gather dot(r,r) = r.x*r.x + r.y*r.y, i.e. the squared length of the vector, which avoids a sqrt.)
vec2 r = g - f + o.xy;
float d = dot(r,r);
And finally, while the author of Voro-Noise goes into what the power function is, he didn't really go into detail about why this works. Why is k created this way (the author talks about raising the smoothstep to the power of 1, but I don't see that being a possibility here)? Why do we create ww and multiply it by the offset's z component, and why do we divide va by wt? I understand we are accumulating some sort of weighted average over all the points based on their contribution, but I just don't understand why the interpolator is constructed as it is:
float k = 1.0+63.0*pow(1.0-v,4.0);
float va = 0.0;
float wt = 0.0;
...
float ww = pow( 1.0-smoothstep(0.0,1.414,sqrt(d)), k );
va += o.z*ww;
wt += ww;
...
return va/wt;
How does the Hermite interpolator (smoothstep here) help here? From what I understand we are passing 0, 1.414 (which is roughly sqrt(2)), and sqrt(d) (the length of the difference vector) as its arguments.
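(For reference, GLSL's built-in smoothstep(edge0, edge1, x) evaluates the cubic Hermite polynomial below; here edge0 = 0.0, edge1 = 1.414 ≈ sqrt(2), and x = sqrt(d), the length of r. The helper name smoothstep_ref is mine, written out in C++ for clarity.)

#include <algorithm>

// What GLSL's smoothstep(edge0, edge1, x) computes: clamp the normalised
// position to [0,1], then apply the Hermite ease curve 3t^2 - 2t^3.
float smoothstep_ref(float edge0, float edge1, float x)
{
    float t = std::clamp((x - edge0) / (edge1 - edge0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}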
Related:
I need to know if a point lies on a segment in 2D.
I have already solved that issue using Boost, with intersects between a point and a segment.
Question:
Is it possible to do this using the Eigen library only?
Why:
I am currently in the middle of an algorithm that uses Eigen, and I don't want to copy my points again and again, in a loop, into a Boost geometry construct just to check for this.
This is what I currently do, and it works, but it adds a dependency. Here is some contrived code:
using Line2 = Eigen::Hyperplane<double, 2>;
using Vec2 = Eigen::Vector2d;

Vec2 a( 0, 0 );
Vec2 b( 1, 1 );
Vec2 d( 70, 100 );

Line2 ab = Line2::Through( a, b );
auto res = ab.normal();
auto projection_point = ab.projection( d ); // <- that projection point is not on the segment ab
// .. here I test with Boost and loop again if necessary
If there is no alternative I will keep doing it the way I do, but I am sure someone out there knows better. I don't want to write the function myself (otherwise I would just keep using Boost); I only want to rely on Eigen.
Thanks
I suggest checking the distance between the point and its projection onto the line against an epsilon, plus a check against the distance from the origin, plus a check (via a dot product) that the point lies in the correct direction from the origin.
Eigen has a parameterized line type for this. It isn't a line segment but an origin plus direction vector. So the maximum distance from the origin has to be computed and stored separately.
using PLine2d = Eigen::ParameterizedLine<double, 2>;

Eigen::Vector2d a = ..., b = ..., d = ...;
PLine2d line = PLine2d::Through(a, b);
double sqrlength = (b - a).squaredNorm();
Eigen::Vector2d projection = line.projection(d);
Eigen::Vector2d fromOrigin = projection - line.origin();
bool isOnLineSegment = projection.isApprox(d) &&               // d lies on the infinite line
                       fromOrigin.squaredNorm() <= sqrlength && // not beyond b
                       fromOrigin.dot(line.direction()) >= 0.0; // not before a
I haven't counted operations but there is a good chance that the code you linked is shorter and faster when implemented in Eigen. Cross and dot products are readily available, after all.
Replace squaredNorm() with stableNorm() if desired. In 2D hypotNorm() would also work well.
A potential issue is isApprox() which, as the documentation states, fails close to zero. Therefore, depending on your data, an extended check may be necessary, such as
(projection.isApprox(d) || (d.isZero() && projection.isZero()))
All have tuneable epsilon parameters.
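Putting the pieces together, here is a minimal self-contained sketch; the function name segmentContains and the zero-handling are mine, not from Eigen:

#include <Eigen/Dense>

// Hypothetical helper: true if point d lies on the segment from a to b,
// within Eigen's default isApprox() tolerance.
bool segmentContains(const Eigen::Vector2d& a, const Eigen::Vector2d& b,
                     const Eigen::Vector2d& d)
{
    using PLine2d = Eigen::ParameterizedLine<double, 2>;

    PLine2d line = PLine2d::Through(a, b);
    Eigen::Vector2d projection = line.projection(d);
    Eigen::Vector2d fromOrigin = projection - line.origin();

    // On the infinite line (with the near-zero fallback), no further from
    // the origin a than b is, and on the correct side of a.
    return (projection.isApprox(d) || (d.isZero() && projection.isZero()))
        && fromOrigin.squaredNorm() <= (b - a).squaredNorm()
        && fromOrigin.dot(line.direction()) >= 0.0;
}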
I am applying a 3D Voronoi pattern to a mesh. Using the loops below, I am able to compute the cell position, an id, and the distance.
But I would also like to compute a normal based on the generated pattern.
How can I generate a normal, or reorient the current normal, based on this pattern and its associated cells?
The aim is to give the mesh a faceted look: every fragment within a cell should share the same normal, and adjacent cells should point in different directions. Those directions should be based on the original mesh normals; I don't want to totally break the mesh normals and have them point in random directions.
Here's how I generate the Voronoi pattern.
float3 p = floor(position);
float3 f = frac(position);

float id = 0.0;
float distance = 10.0;
for (int k = -1; k <= 1; k++)
{
    for (int j = -1; j <= 1; j++)
    {
        for (int i = -1; i <= 1; i++)
        {
            float3 cell = float3(float(i), float(j), float(k));
            float3 random = hash3(p + cell);
            float3 r = cell - f + random * angleOffset;
            float d = dot(r, r);

            if (d < distance)
            {
                id = random;
                distance = d;
                cellPosition = cell + p;
                normal = ?
            }
        }
    }
}
And here's the hash function:
float3 hash3(float3 x)
{
    x = float3(dot(x, float3(127.1, 311.7, 74.7)),
               dot(x, float3(269.5, 183.3, 246.1)),
               dot(x, float3(113.5, 271.9, 124.6)));
    return frac(sin(x) * 43758.5453123);
}
This looks like a fairly expensive fragment shader; it might make more sense to bake out a normal map than to try to do this in real time.
It's hard to tell what your shader is doing, but I think it's checking every pixel against a 3x3x3 block of voronoi cells. One odd thing is that random is a float3 that somehow gets assigned to id, which is just a scalar.
Anyway, it sounds like you would like to perturb the mesh-supplied normal by a random vector, but you'd like all pixels corresponding to a particular voronoi cell to be perturbed in the same way.
Since you already have a variable called random which presumably represents a random value generated deterministically as a function of the voronoi cell, you could just use that. For example, the following would perturb the normal by a small amount:
normal = normalize(meshNormal + 0.2 * normalize(random));
If you want to give more weight to the random component, just increase the 0.2 constant.
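One caveat worth noting: frac() returns values in [0,1), so random always lies in the positive octant and normalize(random) can never point "backwards". If you want the per-cell perturbation to cover all directions, you could remap it first, along these lines (the remap is my suggestion, not part of your original shader; 0.2 is again the tuneable weight):

// Remap [0,1)^3 to [-1,1)^3 so the perturbation can point in any direction.
float3 centered = random * 2.0 - 1.0;
normal = normalize(meshNormal + 0.2 * normalize(centered));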
Given a set of points P, I need to find a line L that best approximates these points. I have tried to use the function gsl_fit_linear from the GNU Scientific Library, but my data set often contains points whose line of best fit has undefined slope (x = c), in which case gsl_fit_linear returns NaN. It is my understanding that total least squares is best for this sort of thing, because it is fast and robust and it gives the equation in terms of r and theta (so x = c can still be represented). I can't seem to find any C/C++ code out there for this problem. Does anyone know of a library or something I can use? I've read a few research papers on the topic, but it is still a little fuzzy to me, so I don't feel confident implementing my own.
Update:
I made a first attempt at programming my own with Armadillo, using the code given on this Wikipedia page. Alas, I have so far been unsuccessful.
This is what I have so far:
void pointsToLine(vector<Point> P)
{
    Row<double> x(P.size());
    Row<double> y(P.size());
    for (int i = 0; i < P.size(); i++)
    {
        x << P[i].x;
        y << P[i].y;
    }

    int m = P.size();
    int n = x.n_cols;

    mat Z = join_rows(x, y);

    mat U;
    vec s;
    mat V;
    svd(U, s, V, Z);

    mat VXY = V(span(0, n - 1), span(n, V.n_cols - 1));
    mat VYY = V(span(n, V.n_rows - 1), span(n, V.n_cols - 1));
    mat B = (-1 * VXY) / VYY;

    cout << B << endl;
}
The output from B is always 0.5504, even when my data set changes. Also, I thought the output should be two values, so I'm definitely doing something very wrong.
Thanks!
To find the line that minimises the sum of the squares of the (orthogonal) distances from the line, you can proceed as follows:
The line is the set of points p+r*t where p and t are vectors to be found, and r varies along the line. We restrict t to be unit length. While there is another, simpler, description in two dimensions, this one works with any dimension.
The steps are:
1/ Compute the mean p of the points.
2/ Accumulate the covariance matrix
C = Sum{ i | (q[i]-p)*(q[i]-p)' } / N
(where you have N points and ' denotes transpose).
3/ Diagonalise C and take as t the eigenvector corresponding to the largest eigenvalue.
All this can be justified, starting from the (orthogonal) squared distance of a point q from a line represented as above, which is
d2(q) = (q-p)'*(q-p) - ((q-p)'*t)^2
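For concreteness, here is how those three steps might look in Armadillo (a sketch assuming the points are the rows of an N x 2 matrix Q; fitLine is my own naming, not a library function):

#include <armadillo>

// Fit a line p + r*t minimising the summed squared orthogonal distances.
// Q holds one point per row; outputs are the mean p and unit direction t.
void fitLine(const arma::mat& Q, arma::vec& p, arma::vec& t)
{
    // 1/ mean of the points
    p = arma::mean(Q, 0).t();

    // 2/ covariance matrix C = Sum{ (q[i]-p)*(q[i]-p)' } / N
    arma::mat centered = Q;
    centered.each_row() -= p.t();
    arma::mat C = centered.t() * centered / double(Q.n_rows);

    // 3/ eig_sym returns eigenvalues in ascending order, so the last
    // eigenvector corresponds to the largest eigenvalue.
    arma::vec eigval;
    arma::mat eigvec;
    arma::eig_sym(eigval, eigvec, C);
    t = eigvec.col(eigvec.n_cols - 1);
}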
EDIT: I have now solved the problem; you can see my solution in the answers.
I'm in the process of writing a realtime raytracer using OpenGL (in a GLSL Compute Shader), and I've run into a slight problem with some of my line-triangle intersections (or at least, I believe they are the culprit). Here's a picture of what's happening:
As you can see, some pixels are being coloured black at the intersection of two triangles near the top of the image. It probably has something to do with the way I'm handling floats, and I've tried searching for a solution online but can't find similar situations. Perhaps there's an important keyword I'm missing?
Anyways, the important piece of code is this one:
#define EPSILON 0.001f
#define FAR_CLIP 10000.0f

float FindRayTriangleIntersection(Ray r, Triangle p)
{
    // Based on the Möller-Trumbore paper
    vec3 E1 = p.v1 - p.v0;
    vec3 E2 = p.v2 - p.v0;
    vec3 T = r.origin - p.v0;
    vec3 D = r.dir;
    vec3 P = cross(D, E2);
    vec3 Q = cross(T, E1);

    float f = 1.0f / dot(P, E1);
    float t = f * dot(Q, E2);
    float u = f * dot(P, T);
    float v = f * dot(Q, D);

    if (u > -EPSILON && v > -EPSILON && u + v < 1.0f + EPSILON) return t;
    else return FAR_CLIP;
}
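(One aside I noticed while staring at this function: when dot(P, E1) is near zero the ray is nearly parallel to the triangle and f blows up. The standard Möller-Trumbore formulation guards that case roughly as below, though as the update further down shows, this turned out not to be my actual bug.)

// Guard against rays (nearly) parallel to the triangle's plane.
float det = dot(P, E1);
if (abs(det) < EPSILON) return FAR_CLIP;
float f = 1.0f / det;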
I've tried various values for EPSILON, and variations with +/- on the EPSILON terms, but to no avail. Also, changing the 1.0f+EPSILON to 1.0f-EPSILON yields a steady black line the whole way across.
Also to clarify, there definitely is NOT a gap between the two triangles. They are tightly packed (and I have also tried extending them so they intersect, but I still get the same black dots).
Curiously enough, the bottom intersection shows no sign of this phenomenon.
Last note: if more of my code is needed just ask and I'll try to isolate some more code (or maybe just link to the entire shader).
UPDATE: It was pointed out that the 'black artifacts' are in fact brown. So I've dug a bit deeper and turned off all reflections, and got this result:
The brown colour is actually coming just from the copper material on the top. More importantly, I think I have an idea of what the cause of the problem is, but I'm no closer to solving it.
It seems that when the rays are fired out, due to very slight imperfections in the floating-point arithmetic, some rays intersect the top triangle and some intersect the bottom one.
So I suppose now the question reduces to this: how can I have some sort of consistency in deciding which triangle should be hit in cases like this?
So it turns out it was not the code I had posted that caused the problem. Thanks to some help in the comments, I was able to find that the culprit was this code, where I determine the nearest object to the camera:
float nearest_t = FAR_CLIP;
int nearest_index = 0;
for (int j = 0; j < NumObjects; j++)
{
    float t = FAR_CLIP;
    t = FindRayObjectIntersection(r, objects[j]);

    if (t < nearest_t && t > EPSILON && t < FAR_CLIP)
    {
        nearest_t = t;
        nearest_index = j;
    }
}
When determining t, sometimes the triangles were so close together that the comparison t < nearest_t gave an almost probabilistic result, since the intersections were roughly the same distance from the camera.
My initial solution was to change the inner if-statement to:
if (t < nearest_t-EPSILON && t > EPSILON && t < FAR_CLIP)
This ensures that if two intersections are very close together, it will always choose the first object to display (unless the second object is closer by at least EPSILON). Here is a resulting image (with reflections disabled):
Now there were still some small artifacts, so it was clear that there was still a slight problem. Upon some discussion in the comments, @Soonts came up with the idea of blending the triangles' colours. This led me to change the above code further, in order to keep track of the distances to both triangles:
if (t > EPSILON && t < FAR_CLIP && abs(nearest_t - t) < EPSILON)
{
    nearest_index2 = nearest_index;
    nearest_t2 = nearest_t;
}

if (t < nearest_t + EPSILON && t > EPSILON && t < FAR_CLIP)
{
    nearest_t = t;
    nearest_index = j;
}
I also added this colour blending code:
OverallColor = mix(c1, c2, 0.5f * abs(T1 - T2) / EPSILON);
After these two steps (and honestly, I think the improvement came more from the logic change than from the blending), I got this result:
I hope others find this solution helpful, or at the very least that it sparks some ideas to solve your own problems. As a final example, here is the beautiful result with reflections, softer shadows and some anti-aliasing all turned on:
Happy raytracing!
I'm studying the CSS (Curvature Scale Space) algorithm and I can't get the hang of the concept of the 'arc length parameter'.
According to the literature, a planar curve is Gamma(u) = (x(u), y(u)), where u is said to be the arc length parameter; apparently the Gaussian kernel g is also parameterized by this u.
Stop me if I've got something wrong, but aren't x and y the coordinates of a pixel? How can they be expressed in terms of another parameter?
I had no idea what this meant when I first saw it in the literature, so I looked up the code, and that puzzled me even more.
Here is the relevant portion of the code:
void getGaussianDerivs(double sigma, int M, vector<double>& gaussian,
                       vector<double>& dg, vector<double>& d2g)
{
    int L = (M - 1) / 2;
    double sigma_sq = sigma * sigma;
    double sigma_quad = sigma_sq * sigma_sq;

    dg.resize(M); d2g.resize(M); gaussian.resize(M);
    Mat_<double> g = getGaussianKernel(M, sigma, CV_64F);

    for (double i = -L; i < L + 1.0; i += 1.0)
    {
        int idx = (int)(i + L);
        gaussian[idx] = g(idx);
        // from http://www.cedar.buffalo.edu/~srihari/CSE555/Normal2.pdf
        dg[idx] = (-i / sigma_sq) * g(idx);
        d2g[idx] = (-sigma_sq + i * i) / sigma_quad * g(idx);
    }
}
So it seems the code builds a simple 1D Gaussian kernel with aperture size M and computes its 1st and 2nd derivatives. As far as I know, a 1D Gaussian kernel has a parameter x, which is a horizontal coordinate, and sigma, which is scale. It seems like the 'arc length parameter u' plays the role of the variable x. That doesn't make sense to me, because later in the code this kernel is convolved directly with the sets of x and y values on the contour.
What is this u?
PS: since I replied to the fellow who tried to answer my question, I think I should clarify my question, so here we go.
What I'm confused about is: how is this parameter u implemented in code? I think I understood the full code above (of course, I only included a portion of it), but the problem is that I have no idea what it would look like for the 'improved' version of the algorithm. That version uses an 'affine length parameter' instead of the 'arc length parameter', and I'm not so sure how to implement that concept in code.
According to the literature, the main difference between the arc length parameter and the affine length parameter is the sampling interval: the arc length parameter uses 1 for the vertical and horizontal directions and the square root of 2 for the diagonal direction (see the sketch below). That makes sense, since the portion of code above uses a for loop to compute the 1st and 2nd derivatives of the 1D Gaussian and directly uses an interval of 1. But how would this work with a different, variable interval? Is it possible that a plain for loop can no longer be used for it?
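To make my own description above concrete, this is how I picture the arc length parameter u being accumulated along an 8-connected pixel contour; the names here are mine:

#include <cmath>
#include <cstdlib>
#include <vector>

struct Pixel { int x, y; };

// Cumulative arc length u along a contour of adjacent pixels: each step adds
// 1 for a horizontal/vertical move and sqrt(2) for a diagonal move.
std::vector<double> arcLengthParameter(const std::vector<Pixel>& contour)
{
    std::vector<double> u(contour.size(), 0.0);
    for (std::size_t i = 1; i < contour.size(); ++i)
    {
        int dx = std::abs(contour[i].x - contour[i - 1].x);
        int dy = std::abs(contour[i].y - contour[i - 1].y);
        u[i] = u[i - 1] + ((dx + dy == 2) ? std::sqrt(2.0) : 1.0);
    }
    return u;
}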