problems plotting tan() with SymPy - sympy

I'm trying to plot various trig functions with SymPy. This works fine with sine, cosine, etc., but tan() gives weird results. Furthermore, if I rerun just the plot() call, I get a different result each time.
from sympy import symbols
from sympy.functions.elementary.trigonometric import tan
from sympy.plotting.plot import plot
x = symbols('x')
eqn = tan(x)
plot(eqn, (x, -10, 10))

I understand that this is not how one usually visualizes the tangent function, but it's not wrong. SymPy found that there are extremely large y-values and chose the vertical scale accordingly. On this scale, all values are practically zero except in close proximity to the singularities. The spikes near the singularities have unequal heights because the evaluation points never hit pi/2 + pi*n exactly, and how close they come depends on n.
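To see this concretely, here is a small sketch with a plain uniform grid of sample points (SymPy's adaptive sampler differs in detail, but the effect is the same):

import numpy as np

# No uniform sample hits pi/2 + pi*n exactly, so the values near each asymptote
# are large but finite, and how large depends on how close the nearest sample
# happens to land for that particular n.
xs = np.linspace(-10, 10, 301)
ys = np.tan(xs)
for n in range(-3, 3):
    s = np.pi / 2 + np.pi * n
    i = np.argmin(np.abs(xs - s))
    print(f"asymptote at {s:+.4f}: nearest sample {xs[i]:+.4f}, tan = {ys[i]:+.1f}")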
All that said, you need a sane vertical scale, enforced with ylim:
plot(eqn, (x, -10, 10), ylim=(-20, 20))

Related

Robust atan(y,x) on GLSL for converting XY coordinate to angle

In GLSL (specifically version 3.00, which I'm using), there are two versions of atan(): atan(y_over_x) can only return angles between -PI/2 and PI/2, while atan(y, x) can take all 4 quadrants into account, so the angle range covers everything from -PI to PI, much like atan2() in C++.
I would like to use the second atan to convert XY coordinates to angle.
However, atan() in GLSL, besides not being able to handle x = 0, is not very stable. Especially when x is close to zero, the division can overflow, resulting in an angle of the opposite sign (you get something close to -PI/2 where you are supposed to get approximately PI/2).
What is a good, simple implementation that we can build on top of GLSL atan(y,x) to make it more robust?
I'm going to answer my own question to share my knowledge. We first notice that the instability happens when x is near zero, which we can also express as abs(x) << abs(y). So we first divide the plane (assuming we are on a unit circle) into two regions: one where |x| <= |y| (call it the green region) and another where |x| > |y| (the orange region).
We know that atan(x,y) is much more stable in the green region -- when x is close to zero we simply have something close to atan(0.0), which is very stable numerically -- while the usual atan(y,x) is more stable in the orange region. You can also convince yourself that this relationship:
atan(x,y) = PI/2 - atan(y,x)
holds for all (x,y) except the origin, where it is undefined. Here we are talking about the two-argument atan(y,x), which can return angles over the entire range -PI to PI, not atan(y_over_x), which only returns angles between -PI/2 and PI/2. Therefore, our robust atan2() routine for GLSL is quite simple:
#define PI 3.14159265358979

float atan2(in float y, in float x)
{
    // use atan(y,x) where |x| > |y|, and PI/2 - atan(x,y) otherwise
    bool s = (abs(x) > abs(y));
    return mix(PI/2.0 - atan(x,y), atan(y,x), s);
}
As a side note, the identity for the mathematical function atan(x) is actually:
atan(x) + atan(1/x) = sgn(x) * PI/2
which is true because its range is (-PI/2, PI/2).
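A quick numerical spot check of that identity:

import math

# Check atan(x) + atan(1/x) == sign(x) * pi/2 for a few nonzero x.
for x in (-5.0, -0.3, 0.3, 2.0, 1e6):
    lhs = math.atan(x) + math.atan(1.0 / x)
    rhs = math.copysign(math.pi / 2, x)
    assert math.isclose(lhs, rhs), (x, lhs, rhs)
print("identity holds on the sampled values")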
Depending on your targeted platform, this might be a solved problem. The OpenGL spec for atan(y, x) specifies that it should work in all quadrants, leaving behavior undefined only when x and y are both 0.
So one would expect any decent implementation to be stable near all axes, as this is the whole purpose behind 2-argument atan (or atan2).
The questioner/answerer is correct in that some implementations do take shortcuts. However, the accepted solution makes the assumption that a bad implementation will always be unstable when x is near zero: on some hardware (my Galaxy S4 for example) the value is stable when x is near zero, but unstable when y is near zero.
To test your GLSL renderer's implementation of atan(y,x), here's a WebGL test pattern. Follow the link below and as long as your OpenGL implementation is decent, you should see something like this:
Test pattern using native atan(y,x): http://glslsandbox.com/e#26563.2
If all is well, you should see 8 distinct colors (ignoring the center).
The linked demo samples atan(y,x) for several values of x and y, including 0, very large, and very small values. The central box is atan(0., 0.), which is mathematically undefined, and implementations vary. I've seen 0 (red), PI/2 (green), and NaN (black) on hardware I've tested.
Here's a test page for the accepted solution. Note: the host's WebGL version lacks mix(float,float,bool), so I added an implementation that matches the spec.
Test pattern using atan2(y,x) from accepted answer: http://glslsandbox.com/e#26666.0
Your proposed solution still fails in the case x=y=0. Here both of the atan() functions return NaN.
Further, I would not rely on mix to switch between the two cases. I am not sure how it is implemented/compiled, but IEEE float rules say that x*NaN and x+NaN are again NaN. So if your compiler really implemented mix as interpolation, the result would be NaN for x=0 or y=0.
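To illustrate the point, here is a tiny Python sketch of what a plain-interpolation mix would do when a NaN sits in the branch that is not selected:

# mix(a, b, s) compiled as a*(1-s) + b*s propagates NaN from either branch,
# because NaN*0 is still NaN.
nan = float("nan")

def lerp_mix(a, b, s):
    return a * (1.0 - s) + b * s

print(lerp_mix(nan, 1.0, 1.0))   # nan, even though s == 1 "selects" b
print(lerp_mix(1.0, nan, 0.0))   # nan, even though s == 0 "selects" a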
Here is another fix which solved the problem for me:
float atan2(in float y, in float x)
{
    // fall back to sign(y) * PI/2 when x is exactly zero
    return x == 0.0 ? sign(y) * PI/2.0 : atan(y, x);
}
When x=0 the angle can be ±π/2. Which of the two depends only on the sign of y. If y=0 too, the angle can be arbitrary (the vector has length 0); sign(y) returns 0 in that case, which is just fine.
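Here is the same fix transcribed to Python as a sketch, checked against math.atan2 for the axis cases (Python's atan2 already handles x = 0 and returns 0 at the origin, which matches the sign(y)*PI/2 fallback):

import math

def sign(v):
    return (v > 0) - (v < 0)

def atan2_fixed(y, x):
    # Fall back to sign(y)*pi/2 when x is exactly zero, as in the GLSL snippet.
    return sign(y) * math.pi / 2 if x == 0.0 else math.atan2(y, x)

for y, x in [(1, 0), (-1, 0), (0, 0), (1, 1), (-2, -3)]:
    print(y, x, atan2_fixed(y, x), math.atan2(y, x))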
Sometimes the best way to improve the performance of a piece of code is to avoid calling it in the first place. For example, one of the reasons you might want to determine the angle of a vector is so that you can use this angle to construct a rotation matrix from combinations of the angle's sine and cosine. However, the sine and cosine of the vector's angle (relative to the origin) are already hiding in plain sight inside the vector itself. All you need to do is create a normalized version of the vector by dividing each coordinate by the vector's total length. Here's a two-dimensional example that calculates the sine and cosine of the angle of the vector [ x y ]:
double length = sqrt(x*x + y*y);  // assumes a non-zero vector
double cos = x / length;
double sin = y / length;
Once you have the sine and cosine values, you can populate a rotation matrix with them directly to rotate arbitrary vectors clockwise or counterclockwise by the same angle, or you can concatenate a second rotation matrix to rotate to an angle other than zero. In this case, you can think of the rotation matrix as "normalizing" the angle to zero for an arbitrary vector. This approach extends to the three-dimensional (or N-dimensional) case as well, although in 3D you will have three angles (one per rotation plane) and six sin/cos values to calculate.
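A minimal NumPy sketch of the idea (the example vector is arbitrary and assumed non-zero):

import numpy as np

v = np.array([3.0, 4.0])          # example vector
c, s = v / np.linalg.norm(v)      # cos and sin of its angle, no atan needed

# Rotation by minus the angle: maps v onto the positive x-axis,
# i.e. "normalizes" the angle to zero.
R = np.array([[ c, s],
              [-s, c]])
print(R @ v)                      # approximately [5, 0]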
In situations where you can use this approach, you get a big win by bypassing the atan calculation completely, which is possible since the only reason you wanted to determine the angle was to calculate the sine and cosine values. By skipping the conversion to angle space and back, you not only avoid worrying about division by zero, but you also improve precision for angles which are near the poles and would otherwise suffer from being multiplied/divided by large numbers. I've successfully used this approach in a GLSL program which rotates a scene to zero degrees to simplify a computation.
It's easy to get so caught up in an immediate problem that you lose sight of why you need this information in the first place. This doesn't work in every case, but sometimes it helps to think outside the box...
A formula that gives the angle in all four quadrants for any coordinates x and y. For x=y=0 the result is undefined.
f(x,y) = pi() - pi()/2*(1+sign(x))*(1-sign(y^2)) - pi()/4*(2+sign(x))*sign(y) - sign(x*y)*atan((abs(x)-abs(y))/(abs(x)+abs(y)))
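A quick Python check of this formula against atan2 (comparing modulo 2*pi, since the two can land on different representatives of the same angle); the sign() helper below is just a stand-in for the spreadsheet-style sign function:

import math, random

def sign(v):
    return (v > 0) - (v < 0)

def f(x, y):
    # Direct translation of the formula above; undefined at x == y == 0.
    return (math.pi
            - math.pi / 2 * (1 + sign(x)) * (1 - sign(y * y))
            - math.pi / 4 * (2 + sign(x)) * sign(y)
            - sign(x * y) * math.atan((abs(x) - abs(y)) / (abs(x) + abs(y))))

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    diff = (f(x, y) - math.atan2(y, x)) % (2 * math.pi)
    assert min(diff, 2 * math.pi - diff) < 1e-9
print("matches atan2 modulo 2*pi on random samples")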

Desert fractal OpenGL

we're trying to generate a 3D world using 2D Perlin noise (with a recursive/fractal technique). We have generated mountains and valleys quite well, but now we are having problems with deserts and dunes: we have only worked with persistence and octaves, and we aren't able to produce the classic shape of a dune. Has anybody experienced this already? Any solution, ideally still using Perlin noise, or other algorithms that would allow us to do this?
You could give the Musgrave ridged multifractal a try. It gives nice ridged structures and you can use your existing noise algorithms for it.
The C reference implementation for it is here
Dunes are lopsided: .='\ cross section... you may want to use an initial shape of that kind
They are regular, like waves in the sea, not completely noise
They are elongated in the wind direction
I didn't use the first condition, but I have made great dunes by multiplying two 1-D Perlin noises together, or even two sin/parabola functions, where they are both aligned to one axis, i.e. Z, and they have a small low-frequency sine or noise wobbling them along the X axis, so they aren't aligned.
try this:
dunes = sin(X + perlin1d(Z) * 0.2) * sin(X + perlin1d(Z + 432) * 0.2);
or, to test it:
dunes = sin(X + sin(Z) * 0.2) (plus, times, or divided by) sin(X + sin(Z + 432) * 0.2);
The 0.2 makes the dunes about 10 times longer than they are wide, and the result is like two straight water waves meeting at almost the same angle, plus an uncertainty in that angle supplied by the noise.
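For reference, here is a minimal NumPy sketch of the test version above evaluated on a height grid (the grid extent is arbitrary, and sin(Z) stands in for the 1-D Perlin noise):

import numpy as np

# Two nearly-aligned sine "wave fronts" multiplied together; the low-frequency
# sin(Z) terms wobble them slightly along X so the ridges are not parallel.
X, Z = np.meshgrid(np.linspace(0.0, 40.0, 400), np.linspace(0.0, 40.0, 400))
dunes = np.sin(X + np.sin(Z) * 0.2) * np.sin(X + np.sin(Z + 432) * 0.2)
print(dunes.shape, dunes.min(), dunes.max())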
Maybe turbulence is already enough for what you need... Try playing with turbulence by using the absolute value of your octaves' return values instead of the normal values. You can also evaluate your noise and your turbulence separately and combine them to mix both effects in some areas.
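As a sketch of that absolute-value trick (the noise2d below is only a placeholder so the example runs; substitute your own Perlin implementation):

import math

def noise2d(x, z):
    # Placeholder for your existing 2-D Perlin noise.
    return math.sin(x * 12.9898 + z * 78.233) * 0.5

def turbulence(x, z, octaves=5, persistence=0.5):
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        # Summing abs() of each octave folds the noise into sharp ridges.
        total += abs(noise2d(x * freq, z * freq)) * amp
        amp *= persistence
        freq *= 2.0
    return total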

Plot issue while changing axes scales on matplotlib scatter plot

I am drawing a scatter plot using matplotlib, and I've run into a somewhat peculiar issue. Here's the plot of the data when the axes are not rescaled:
plt.scatter(x, y , marker ='x')
#plt.xscale('log')
#plt.yscale('log')
plt.show()
And here's the plot of the data with the axes changed to logarithmic scale.
Why is this happening? The same thing happens even when the base is changed to 2 or e.
It appears that in this particular case, you can't scale the axes after the plot. You can scale them beforehand, so:
plt.xscale('log')
plt.yscale('log')
plt.scatter(x, y , marker ='x')
plt.show()
In this particular case (identical markers), you could also use plt.plot instead of plt.scatter, and then scaling post-plot will work:
plt.plot(x, y, 'x')
plt.xscale('log')
plt.yscale('log')
plt.show()
My guess as to why you can't scale post scatter plot: a scatter plot returns a PathCollection, which probably results in the scaling function looking only at the last entry of that collection, that is, the last point. That would of course scale only in the range 1e5 - 1e6. plot() returns a Line2D, which takes the complete plot into account.
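You can see the different artist types directly, which is consistent with this guess:

import matplotlib.pyplot as plt

x, y = [1, 10, 100], [2, 20, 200]      # small stand-in data
sc = plt.scatter(x, y, marker='x')
ln, = plt.plot(x, y, 'x')
print(type(sc))   # <class 'matplotlib.collections.PathCollection'>
print(type(ln))   # <class 'matplotlib.lines.Line2D'>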
The log scaling should clip values sensibly, so zeros in the data are not the problem.
I suspect the issue is that your limits include negative values, so naively taking the log of the limits and using those causes issues.
You can also explicitly set the limits
ax = plt.gca()
ax.set_xlim([1e-5, 1e6])
ax.set_ylim([1e-5, 1e6])
and not rely on auto-scaling.

My neural net learns sin x but not cos x

I have built my own neural net and I have a weird problem with it.
The net is quite a simple feed-forward 1-N-1 net with back propagation learning. Sigmoid is used as activation function.
My training set is generated with random values in [-PI, PI] and their sine values scaled to [0, 1] (this is because the "sigmoid net" produces only values in [0, 1], while the unscaled sine function produces values in [-1, 1]).
With that training set, and the net set to 1-10-1 with a learning rate of 0.5, everything works great and the net learns the sine function as it should. BUT... if I do everything in exactly the same way for the cosine function, the net won't learn it. Not with any setup of hidden layer size or learning rate.
Any ideas? Am I missing something?
EDIT: My problem seems to be similar to what can be seen with this applet. It doesn't seem to learn the sine function unless something "easier" is taught to the weights first (like 1400 cycles of a quadratic function). All the other settings in the applet can be left as they initially are. So in the case of sine or cosine it seems that the weights need some boosting in at least partially the right direction before a solution can be found. Why is this?
I'm struggling to see how this could work.
You have, as far as I can see, 1 input, N nodes in 1 hidden layer, then 1 output. So there is no difference between any of the nodes in the hidden layer of the net. Suppose you have an input x and a set of weights w_i. Then the output node y will have the value:
y = Σ_i w_i x = x · Σ_i w_i
So this is always linear.
In order for the nodes to be able to learn differently, they must be wired differently and/or have access to different inputs. So you could supply inputs of the value, the square root of the value (giving some effect of scale), etc and wire different hidden layer nodes to different inputs, and I suspect you'll need at least one more hidden layer anyway.
The neural net is not magic. It produces a set of specific weights for a weighted sum. Since you can derive a set of weights that approximates a sine or cosine function, that should inform your idea of what inputs the neural net will need in order to have some chance of succeeding.
An explicit example: the Taylor series of the exponential function is:
exp(x) = 1 + x/1! + x^2/2! + x^3/3! + x^4/4! ...
So if you supplied 6 input nodes with 1, x, x^2, etc., then a neural net that just fed each input to one corresponding node, multiplied it by its weight, and then fed all those outputs to the output node would be capable of the 6-term Taylor expansion of the exponential:
 in      hid    out
 1   --  h0  --\
 x   --  h1  ---\
 x^2 --  h2  ----\
 x^3 --  h3  ----- y
 x^4 --  h4  ----/
 x^5 --  h5  ---/
Not much of a neural net, but you get the point.
Further down the Wikipedia page on Taylor series, there are expansions for sin and cos, which are given in terms of odd powers of x and even powers of x respectively (think about it: sin is odd, cos is even, and yes it is that straightforward). So if you supply all the powers of x, I would guess that the sin and cos versions will look pretty similar, just with alternating zero weights (sin: 0, 1, 0, -1/6, ...; cos: 1, 0, -1/2, 0, ...).
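As an illustration (hand-set weights, not a trained net), here is what those alternating-weight solutions look like as a single linear output over power-of-x inputs:

import math

# Weights are the leading Taylor coefficients: odd powers for sin, even for cos.
sin_w = [0, 1, 0, -1/6, 0, 1/120, 0, -1/5040]
cos_w = [1, 0, -1/2, 0, 1/24, 0, -1/720, 0]

def approx(weights, x):
    return sum(w * x**k for k, w in enumerate(weights))

x = 1.0
print(approx(sin_w, x), math.sin(x))   # ~0.84147 for both
print(approx(cos_w, x), math.cos(x))   # ~0.54030 for both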
I think you can always compute sine and then compute cosine externally. I think your concern here is why the neural net is not learning the cosine function when it can learn the sine function. Assuming that this artifact is not because of your code, I would suggest the following:
It definitely looks like an error in the learning algorithm. It could be because of your starting point. Try starting with weights that give the correct result for the first input and then march forward.
Check whether there is a heavy bias in your learning - more positive than negative.
Since cosine can be computed as sine of (90 degrees minus the angle), you could find the weights for sine and then recompute the weights for cosine in one step.

Physically-based fracture simulation with opengl/c++

I am trying to implement the ideas in this paper for modeling fracture:
http://graphics.berkeley.edu/papers/Obrien-GMA-1999-08/index.html
I am stuck at a point (essentially page 4...) and would really appreciate any help. The part I am stuck on involves the deformation of a tetrahedron (using FEM).
I have a single tetrahedron defined by four nodes (each node has an x, y, z position), from which I calculate the following matrices:
u: each column is a vector containing material coordinates (x, y, z, 1) for each node (so 4 columns total), a 4x4 matrix
B: inverse(u), he calls this the basis matrix, a 4x4 matrix
P: each column is a vector containing real-world coordinates (x, y, z) for each node; I set P initially equal to u since the object is not deformed at the rest state, a 3x4 matrix
V: some initial velocities for (x, y, z) in each node, so a 3x4 matrix
delta: basically an identity matrix, {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {0, 0, 0}}
I get x(u) = P*B*u and v(u) = V*B*u, but I'm not sure where to use these...
Also, I get dx = P*B*delta and dv = V*B*delta
I then get strain by Green's strain tensor, epsilon = 1/2(dx+transpose(dx)) - Identity_3x3
And then stress, sigma = lambda*trace(epsilon)*Identity_3x3 + 2*mu*epsilon
I get the elastic force by equation (24) on page 4 of the paper. It's just a big summation.
I then use explicit integration to update the real-world coordinates P. The idea is that the velocity update involves the force on each node of the tetrahedron and therefore affects the real-world coordinate positions, making the object deform.
The problem, however, is that the force is incredibly small... something × 10^-19, etc., so C++ usually rounds it to 0. I've stepped through the calculations and can't figure out why.
I know I'm missing something here, just can't figure out what. What update am I not doing correctly?
A common reason why the force is small is that your Young's modulus (lambda) is too small. If you are working at a scale of meters, a macro-scale object might have a Young's modulus around 10^5 and a Poisson's ratio of 0.3 to 0.4.
It sounds like what might be happening is that your tet is still in the rest configuration. With no deformation, the strain will be zero, and so in turn the stress and force will also be essentially zero. You can perturb the vertices in various ways and make sure your strain (epsilon) is being computed correctly. One simple test is to scale by 2 about the centroid, which should give you a positive strain; if you scale by 0.5 about the centroid you will get a negative strain. If you translate the vertices uniformly you will get no change in strain (a common FEM invariant). If you rotate them you probably will get a change, but a co-rotational constitutive model wouldn't.
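Here is a minimal NumPy sketch of that sanity test, using the matrices as defined in the question and a Green-strain-style measure epsilon = 1/2(F^T F - I); the example tetrahedron is arbitrary, and the question's linear form 1/2(dx + transpose(dx)) - I behaves the same way for these particular checks:

import numpy as np

# Material coordinates of the 4 nodes as columns, with a homogeneous row of ones.
X = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
u = np.vstack([X, np.ones(4)])                 # 4x4
B = np.linalg.inv(u)                           # basis matrix
delta = np.vstack([np.eye(3), np.zeros(3)])    # 4x3

def green_strain(P):
    F = P @ B @ delta                          # dx in the question's notation (3x3)
    return 0.5 * (F.T @ F - np.eye(3))

c = X.mean(axis=1, keepdims=True)              # centroid
print(green_strain(X))                                    # rest state: zero strain
print(green_strain(c + 2.0 * (X - c)))                    # scale by 2: positive strain
print(green_strain(c + 0.5 * (X - c)))                    # scale by 0.5: negative strain
print(green_strain(X + np.array([[1.0], [2.0], [3.0]])))  # translation: zero strain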
Note you might think that gravity would cause deformation, but unless one of the vertices is constrained, the uniform force on all vertices will cause a uniform translation which will not change the strain from being zero.
You definitely should not need to use arbitrary precision arithmetic for the examples in the paper. In fact, floats typically are sufficient for these types of simulation.
I might be mistaken, but C++ doubles only carry about 15 significant decimal digits (at least that's what my std::numeric_limits says), so you're way out of precision.
So you might end up needing a library for arbitrary-precision arithmetic, e.g., http://gmplib.org/