I have a NURBS curve (control points, knot vector, weights, and degree 3 or 4). I am able to compute the length by sampling along the curve using a parameter t ∈ [0, 1]; summing the distances between consecutive samples gives an approximate length of the curve.
Is there a better way to compute the length of the curve?
Linear sampling: I would like to sample the curve so that the arc length from the first sample t = t0 = 0 to t = t1 is S1, the arc length from t(n-1) to tn = 1 is S2, and the arc lengths of the samples in between interpolate linearly from S1 to S2.
S1 and S2 are fixed.
The curve length follows the equation
ds / dt = √(x'²(t) + y'²(t))
where s is the curvilinear abscissa (arc length), t is the curve parameter, and the derivatives are taken with respect to t.
What you want to do amounts to constructing the function t(s) and imposing your values of s. This is conveniently done by writing the differential equation
dt / ds = 1 / √(x'²(t) + y'²(t))
and integrating it numerically, for example with Runge-Kutta.
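As a minimal sketch of that idea (Python/NumPy; `speed(t)` is a hypothetical callable returning √(x'²(t) + y'²(t)) for your NURBS curve, and the function names are mine):

```python
import numpy as np

def advance_parameter(t, ds, speed):
    """One RK4 step of dt/ds = 1/speed(t): advance the curve parameter t
    by an arc-length increment ds."""
    f = lambda u: 1.0 / speed(u)
    k1 = f(t)
    k2 = f(t + 0.5 * ds * k1)
    k3 = f(t + 0.5 * ds * k2)
    k4 = f(t + ds * k3)
    return t + (ds / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def sample_by_arclength(speed, total_length, s1, s2):
    """Place parameter values t_i so that consecutive arc-length gaps
    ramp linearly from s1 to s2 (their sum should roughly match the
    total curve length)."""
    n_gaps = max(1, int(round(2.0 * total_length / (s1 + s2))))
    gaps = np.linspace(s1, s2, n_gaps)
    ts = [0.0]
    for ds in gaps:
        ts.append(min(advance_parameter(ts[-1], ds, speed), 1.0))
    return np.array(ts)
```

In practice you may want to rescale the gaps slightly so that their sum matches the total length exactly and the last sample lands on t = 1.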
Yes. Sample as you do, but instead of working with pairs of consecutive points, work with triples, interpreting each triple as a circular arc passing through those three points.
You will get a much smaller approximation error for the same spacing of the sampling points than with straight line segments.
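A sketch of the length of one such arc, assuming 2D points and using the inscribed-angle theorem (the helper name is mine):

```python
import numpy as np

def arc_length_through_three_points(a, b, c):
    """Length of the circular arc through a, b, c, with b the middle sample.
    The central angle of the arc containing b is 2*pi - 2*beta, where beta
    is the angle at b, and the circumradius is |ac| / (2*sin(beta))."""
    a, b, c = map(np.asarray, (a, b, c))
    ba, bc = a - b, c - b
    cos_beta = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    beta = np.arccos(np.clip(cos_beta, -1.0, 1.0))
    if np.sin(beta) < 1e-12:   # (nearly) collinear: fall back to the chords
        return np.linalg.norm(ba) + np.linalg.norm(bc)
    radius = np.linalg.norm(c - a) / (2.0 * np.sin(beta))
    return radius * (2.0 * np.pi - 2.0 * beta)
```

Summing this over non-overlapping triples (samples 0-1-2, 2-3-4, ...) gives the length estimate.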
To compute the length of a parametrized curve c(u) = (x(u), y(u)) you can use the general formula L = ∫ √(x'²(u) + y'²(u)) du (see "curvilinear abscissa" on Wikipedia).
You explicitly know x(u) and y(u), since the curve is a NURBS (see the Wikipedia article on NURBS).
I believe you have the formula for the derivative of the rational basis functions, so you have x'(u) and y'(u). Then you can integrate using Simpson's rule, use Gauss points suited to integrating rational polynomials, or better yet use your favourite symbolic calculus tool (Maple, Wolfram, ...) to compute the integral exactly.
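A sketch of the quadrature route, assuming vectorised callables `dx(u)` and `dy(u)` for x'(u) and y'(u) (hypothetical names):

```python
import numpy as np

def curve_length(dx, dy, n_gauss=32):
    """Arc length of c(u) = (x(u), y(u)) on [0, 1] by Gauss-Legendre
    quadrature of sqrt(x'(u)**2 + y'(u)**2)."""
    nodes, weights = np.polynomial.legendre.leggauss(n_gauss)
    u = 0.5 * (nodes + 1.0)                 # map [-1, 1] to [0, 1]
    integrand = np.sqrt(dx(u)**2 + dy(u)**2)
    return 0.5 * np.sum(weights * integrand)
```

Since a NURBS is only piecewise smooth, it is more accurate to apply this per knot span and sum the contributions.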
As my question states, I want to calculate the Fourier transform F(q) of a radial function f(r) (defined on [0, ∞) and decaying like an exponential exp(-A·r + b) at large r) as accurately as possible in Fortran. The function values come from a data file (which I can easily interpolate, with cubic interpolation for example, and extrapolate, since the behaviour at large r is known).
I'm using the "physics" definition of the Fourier transform in 3D, which (because f is radial) reduces to a one-dimensional sine integral of the form

F(q) = (4π / q) ∫₀^∞ r f(r) sin(q r) dr
I first tried to calculate this integral for some chosen values of q by using Gauss-Legendre quadrature, generating 60 or 100 abscissas and weights via the NAG routine D01BCF (D01BCF link). With Gauss-Legendre quadrature, the problem is choosing the interval [0, B] on which to integrate. While the function f loses 4 to 5 orders of magnitude from r = 10 to r = 20 (for example), the choice of B has a strong influence on the result of the calculation. When I compared my result to a "nearly exact" calculation (made with MATLAB, but with a very long computation time), I saw that it was only valid for small values of q (of the order of 5, when I have to deal with values as large as 150). A Gauss-Laguerre quadrature does not give any better result, probably because of the oscillatory part of the integrand.
I then tried to compute this Fourier transform for some given values of q with the routine D01ASF (D01ASF link). It is a "one-dimensional quadrature, adaptive, semi-infinite interval, weight function cos(ωx) or sin(ωx)", which is exactly what I need. The results are quite convincing for q up to 80 or 100 if I request an absolute error tolerance of 10^-5. The problems are: I would need to go to larger q, and the Fourier transform F(q) oscillates with a magnitude of ~10^-6 at such q. Reaching the 10^-5 tolerance already takes some time and even makes the subroutine emit error messages, so I don't know whether 10^-6 would be feasible.
I'm thus currently wondering whether calculating this Fourier transform with an FFT would be a good idea. The problems I face are that I don't know how to handle a radial function with an FFT, and that I have never used an FFT before, so I'm also unsure how to deal with the different definition of the transform (sign and argument of the exponent).
Do you have any ideas? :)
EDIT 2: I tried the FFT route (using the routine C06FAF from the NAG library). It works quite well up to some large values of q. The problem I face is that there is always some constant normalising factor to account for, and I don't understand why. This normalising factor varies with the number N of points used in the mesh; it follows a power law, approximately F = N^(-0.5) × exp(9.9) (see the figure, where the black line is the "exact" Fourier transform and the green, magenta, blue, red and yellow lines are the FFT results for different values of N).
EDIT 3: I found the factor to be A·N^(-0.5), where A is the length of the integration mesh.
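For what it's worth, that factor is exactly the mesh-spacing bookkeeping a raw FFT does not do for you: discretising F(q) = (4π/q) ∫ r f(r) sin(q r) dr on a uniform mesh r_j = j·Δr turns the integral into a plain sine sum times Δr, and if the FFT routine additionally scales its output by 1/√N (as I believe NAG's C06 routines do), the missing factor is Δr·√N = (A/N)·√N = A·N^(-0.5), matching EDIT 3. A sketch with an unnormalised FFT (NumPy here, just to show the bookkeeping; `f` is assumed to be a vectorised callable, e.g. your cubic interpolant):

```python
import numpy as np

def radial_fourier_transform(f, r_max, n):
    """Approximate F(q) = (4*pi/q) * integral_0^inf r f(r) sin(q r) dr
    on a uniform radial mesh truncated at r_max, via one FFT."""
    dr = r_max / n
    r = dr * np.arange(n)                      # r_j = j * dr
    g = r * f(r)                               # integrand without sin(q r)
    # numpy's FFT uses exp(-2*pi*i*j*k/N), so the sine sum is -Im(FFT)
    sine_sum = -np.imag(np.fft.fft(g))
    q = 2.0 * np.pi * np.arange(n) / (n * dr)  # q_k = 2*pi*k / (N*dr)
    F = np.empty(n)
    F[1:] = 4.0 * np.pi / q[1:] * dr * sine_sum[1:]
    F[0] = 4.0 * np.pi * dr * np.sum(r * g)    # q -> 0 limit: 4*pi * int r^2 f dr
    # only the first half (q below the Nyquist frequency pi/dr) is meaningful
    return q[: n // 2], F[: n // 2]
```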
I'm working on an application where I have a set of contours (each one representing a potential line) and I want to check "how straight" each contour/shape is.
The article I am using as a reference uses the following technique:
It matches a "segmented" line crossing the shape, like so:
Then it grades how "straight" the line is.
Here's an example of the contours I am working on:
How would you go about implementing this technique?
Is there any other way of checking "how straight" a contour/shape is?
Regards!
My first guess would be to use a coefficient of determination: fit a line to all your points (assuming some reasonable origin so you don't run into rounding errors) and calculate R².
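A quick sketch of that (Python/NumPy; `points` is an N×2 array of contour points, and the function name is mine):

```python
import numpy as np

def straightness_r2(points):
    """Coefficient of determination of a least-squares line fit y = a*x + b
    to the contour points.  Values close to 1 mean 'nearly straight'.
    Note that the fit degrades for near-vertical contours, where x is
    almost constant."""
    x, y = points[:, 0], points[:, 1]
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals**2)
    ss_tot = np.sum((y - y.mean())**2)
    return 1.0 - ss_res / ss_tot
```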
A more advanced approach, if all contours are disconnected components, would be to calculate the structure model index (the link is for bone morphometry, but they explain the concept and cite the original paper.) This gives you a number that tells you how much your segment is "like a rod". This is just an idea, though. Anything that forms curves or has branches will be less and less like a rod.
I would say that it also depends on what you are using the metric for and whether your contours always run generally left to right.
An additional method would be to build the covariance matrix of your points, compute its eigenvalues, and take their ratio (with the larger over the smaller, so the ratio is always greater than or equal to 1). This is the basic principle behind PCA, apart from the final ratio. If your data set is rather linear (it varies in only one direction) the ratio will be very large; as the data set becomes less linear (more uncorrelated) the ratio approaches one. A perfectly linear data set would give infinity and a perfect circle one (I believe, but I would appreciate it if someone could verify this for me). Working in two dimensions also keeps the calculation computationally cheap and straightforward.
This handles outliers well, is invariant to the rotation of your contour, and always gives a positive number. The only issue would be preventing overflow when dividing the two eigenvalues; then again, you could divide the smaller eigenvalue by the larger, so the metric is bounded between zero and one, with one being a circle and zero a straight line.
Either way, you would need to test if this parameter is sensitive enough for your application.
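A minimal sketch of the smaller-over-larger variant (Python/NumPy; `points` is an N×2 array and the function name is mine):

```python
import numpy as np

def straightness_eigenratio(points):
    """Ratio of the smaller to the larger eigenvalue of the 2x2 covariance
    matrix of the contour points: ~0 for a straight line, values toward 1
    for uncorrelated/rounded shapes such as a circle."""
    cov = np.cov(points.T)                   # 2x2 covariance matrix
    eigenvalues = np.linalg.eigvalsh(cov)    # ascending and non-negative
    return eigenvalues[0] / eigenvalues[1]
```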
One example of a simple algorithm is using the dot product between two segments to determine the angle between them. The formula for the dot product is:
A * B = ||A|| ||B|| cos(theta)
Solving the equation for cos(theta) yields
cos(theta) = (A * B / (||A|| ||B||))
Since cos(0) = 1 and cos(pi) = -1.0, and you're checking the "straightness" of the lines, the line whose average cos(theta) is closest to -1.0 is the straightest. Here theta is taken at each interior vertex between the vectors pointing back to the previous point and forward to the next point, so a perfectly straight line gives theta = pi at every vertex.
straightness = SUM(cos(theta))/(number of line segments)
where a straight line is close to -1.0, and a non-straight line approaches 1.0. Keep in mind this is a cursory evaluation of this algorithm and it obviously has edge cases and caveats that would need to be addressed in an implementation.
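A rough sketch of that metric with the angle convention above (Python/NumPy; `points` is an ordered N×2 polyline and the function name is mine):

```python
import numpy as np

def straightness_cosines(points):
    """Average cos(theta) over the interior vertices of a polyline, with
    theta measured between the vectors from each vertex back to the
    previous point and forward to the next point.  Near -1 means straight,
    values toward +1 mean sharp folds."""
    total, count = 0.0, 0
    for a, b, c in zip(points[:-2], points[1:-1], points[2:]):
        u, v = a - b, c - b                  # vectors from the middle vertex
        cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        total += cos_theta
        count += 1
    return total / count
```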
The trick is to use image moments. In short, you calculate the minimum inertia around an axis, the inertia around the axis perpendicular to it, and the ratio between them (which is always between 0 and 1, since inertia is non-negative).
For a straight line, the inertia along the line is zero, so the ratio is also zero. For a circle, the inertia is the same along all axes, so the ratio is one. Your segmented line will come out around 0.01 or so, as it's a fairly good match.
A simpler method is to compare the circumference of the convex polygon containing the shape (its convex hull) with the circumference of the shape itself. For a line they're trivially equal, and for a not-too-crooked shape they're still comparable.
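A sketch of the convex-hull comparison, assuming a closed, non-degenerate contour given as an ordered N×2 array (SciPy's ConvexHull is used here only for illustration):

```python
import numpy as np
from scipy.spatial import ConvexHull

def perimeter(points):
    """Total length of a closed polyline given by its ordered vertices."""
    diffs = np.diff(np.vstack([points, points[:1]]), axis=0)
    return np.sum(np.linalg.norm(diffs, axis=1))

def convexity_ratio(contour):
    """Perimeter of the convex hull divided by the perimeter of the contour:
    close to 1 for straight or convex shapes, smaller for crooked ones."""
    hull = ConvexHull(contour)
    hull_points = contour[hull.vertices]     # hull vertices in order
    return perimeter(hull_points) / perimeter(contour)
```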
I'm trying to interpolate spherical harmonics to a cubic, Cartesian grid.
The output data of my spherical, pseudo-spectral simulation has Nr radial levels between rMin and rMax, each containing a set of finite-order spherical harmonics for longitude and latitude. The spherical harmonics are mapped to a physical spherical grid containing Ni latitudes and Nj longitudes via a triangular truncation.
The domain is as follows:
Radial levels: rMin <= r(k) <= rMax, with indexing 1 <= k <= Nr
Spherical harmonics (triangular truncation, without aliasing from transform):
Nm = (Nj-1)/3
0 <= m <= Nm
m <= l <= Nm
nlm = (Nm+1)*(Nm+2)/2 (the total number of (l, m) combinations)
Data arrays:
Spectral form: complex*16, dimension( 1:nlm, 1:Nr ) :: foo_spectral
Cartesian form: real*8, dimension( 1:Nx, 1:Ny, 1:Nz ) :: foo_cartesian
I'm looking for an accurate and efficient way to interpolate the data from its spectral representation to a cubic Cartesian grid with edge-length 2*rMax, such that the spherical domain fits perfectly inside. I only want to interpolate within the sphere, however: for points corresponding to r<rMin or rMax<r, the cubic grid should have OUTSIDE_DOMAIN values.
Currently, I have to transform the data from its spectral representation (spherical harmonics: foo(Nr,nlm)) to a physical representation (spherical grid: foo(Nr,Ni,Nj)), and then use a QHULL routine in IDL to interpolate from the physical, spherical grid to the physical, cubic grid (foo(Nx,Ny,Nz)) (note that Nx==Ny==Nz for a cubic grid).
The size of my data is larger than my existing code (written in IDL) can handle, and converting to spherical space is unnecessary for my purposes. I'd like a more direct method that is stand-alone -- not dependent on IDL, for instance.
Any thoughts about how this could be done? I'm willing to use open-source libraries, but it would be nice to not have to.
Thanks in advance!
I would strongly recommend using libraries for this; the spherical harmonic transform is hard to do efficiently and accurately and it's unlikely your first attempts will be anything like as good as existing routines.
One library a colleague thinks quite highly of is SHTns, which will both do the synthesis (inverse transform) for you and do the interpolation (for any given shell) at an arbitrary point. It has Fortran bindings. You'd still have to handle the multiple radial shells yourself, one way or another (probably by doing what you're doing now: transform everything onto a spherical grid, and then use standard interpolation methods to get onto a cubic grid). While that is a little tricky to do right, it's much more straightforward than the spherical harmonic transform part.
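For that second half, here is a rough sketch of the bookkeeping only: map each point of the cubic Cartesian grid to (r, colatitude, longitude), interpolate on the physical spherical grid, and mask everything outside [rMin, rMax]. It is written with SciPy purely as an illustration (SHTns would do the synthesis itself), and the periodicity in longitude and the poles are not handled carefully here:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

OUTSIDE_DOMAIN = np.nan   # placeholder value for points outside the shell

def spherical_to_cartesian_grid(values, r, theta, phi, n, r_min, r_max):
    """Interpolate values(r, theta, phi), given on a regular physical
    spherical grid (theta = colatitude in [0, pi], phi = longitude in
    [0, 2*pi)), onto an n**3 Cartesian grid of edge length 2*r_max
    centred on the origin."""
    interp = RegularGridInterpolator((r, theta, phi), values,
                                     bounds_error=False, fill_value=None)
    axis = np.linspace(-r_max, r_max, n)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    rad = np.sqrt(x**2 + y**2 + z**2)
    col = np.arccos(np.clip(z / np.where(rad > 0, rad, 1.0), -1.0, 1.0))
    lon = np.mod(np.arctan2(y, x), 2.0 * np.pi)
    out = interp(np.stack([rad, col, lon], axis=-1))
    out[(rad < r_min) | (rad > r_max)] = OUTSIDE_DOMAIN
    return out
```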
I want to automatically find the "knee" point of an eigenvalue plot. That is, I have a vector of eigenvalues (sorted from highest to lowest) and I want some heuristic to find the "knee" point.
Is there some heuristic for doing that?
I've found the two following proposals so far.
Setting a threshold, say 0.99 or 0.95, and keeping m of the n eigenvalues when T(m-1) < 0.99·T(n) <= T(m), where T(m) = sum(i=1..m) lambda(i).
The knee is located at a point where the radius of curvature is a local minimum. For a curve y = f(x) the curvature is k = y''/(1+(y')^2)^(3/2). Just replace the derivatives with finite differences.
What do you think of these two proposals? How can I implement the second one? I don't understand how to replace the derivatives with finite differences.
Did you read this paper?
Non-Graphical Solutions for Cattell’s Scree Test
https://ppw.kuleuven.be/okp/_pdf/Raiche2013NGSFC.pdf
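As for the second proposal, a minimal finite-difference sketch might look like this (Python/NumPy; a minimum of the radius of curvature is a maximum of |k|, and unit spacing in the index is assumed):

```python
import numpy as np

def knee_index(eigenvalues):
    """Locate the 'knee' of a descending eigenvalue plot by evaluating
    k = y'' / (1 + y'**2)**1.5 with central finite differences and
    returning the interior index where |k| is largest."""
    y = np.asarray(eigenvalues, dtype=float)
    d1 = np.gradient(y)            # y'  via central differences
    d2 = np.gradient(d1)           # y'' via central differences
    curvature = np.abs(d2) / (1.0 + d1**2) ** 1.5
    # skip the endpoints, where only one-sided differences are available
    return 1 + int(np.argmax(curvature[1:-1]))
```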
I am trying to extract the curvature of a pulse along its profile (see the picture below). The pulse is calculated on a grid of length and height 150 x 100 cells using finite differences, implemented in C++.
I extracted all the points with the same value (contour/ level set) and marked them as the red continuous line in the picture below. The other colors are negligible.
Then I tried to find the curvature from this already noisy (due to grid discretization) contour line by the following means:
(moving average already applied)
1) Curvature via Tangents
The curvature of the line at a point P is defined as the limit of the turning angle between the tangents at P and at a neighbouring point N, divided by the arc length between P and N, as N approaches P.
Since my points have a certain distance between them, I could not approximate this limit well enough, so the curvature was not calculated correctly. I tested it on a circle, which naturally has constant curvature, but I could not reproduce this (only 1 significant digit was correct).
2) Second derivative of the line parametrized by arc length
I calculated the first derivative of the line with respect to arc length, smoothed it with a moving average, and then took the derivative again (second derivative). But here I also got only 1 significant digit correct.
Unfortunately, taking a derivative amplifies the noise that is already present.
3) Approximating the line locally with a circle
Since the reciprocal of the circle radius is the curvature I used the following approach:
This worked best so far (2 correct significant digits), but I need to refine even further. So my new idea is the following:
Instead of using the values at the discrete points to determine the curvature, I want to approximate the pulse profile with a three-dimensional spline surface, then extract the level set of a certain value from it to obtain a smooth line of points from which I can compute a good curvature.
So far I could not find a C++ library that can generate such a Bézier/spline surface. Could you point me to one?
Also, do you think this approach is worth a shot, or will I lose too much accuracy in the curvature?
Do you know of any other approach?
With very kind regards,
Jan
edit: It seems I can not post pictures as a new user, so I removed all of them from my question, even though I find them important to explain my issue. Is there any way I can still show them?
edit2: ok, done :)
There is ALGLIB that supports various flavours of interpolation:
Polynomial interpolation
Rational interpolation
Spline interpolation
Least squares fitting (linear/nonlinear)
Bilinear and bicubic spline interpolation
Fast RBF interpolation/fitting
I don't know whether it meets all of your requirements. I personally have not worked with this library yet, but I believe cubic spline interpolation could be what you are looking for (it is twice differentiable).
In order to prevent overfitting to your noisy input points, you should apply some sort of smoothing mechanism, e.g. you could check whether things like moving-window average, Gaussian, or FIR filters are applicable. Also have a look at (cubic) smoothing splines.
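As a quick way to prototype the idea before committing to a C++ library, here is a sketch using SciPy's parametric smoothing splines (not ALGLIB's API): fit a smoothing spline to the noisy contour points and evaluate the curvature k = |x'·y'' - y'·x''| / (x'² + y'²)^(3/2) from the spline derivatives.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def contour_curvature(points, smoothing=1.0):
    """Fit a smoothing parametric spline to the noisy contour points
    (an Nx2 array) and return the curvature
        k = |x'*y'' - y'*x''| / (x'**2 + y'**2)**1.5
    evaluated at the spline parameter values of the input points.
    The smoothing factor trades fidelity against noise suppression."""
    tck, u = splprep([points[:, 0], points[:, 1]], s=smoothing)
    dx, dy = splev(u, tck, der=1)
    ddx, ddy = splev(u, tck, der=2)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
```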