We're trying to generate a 3D world using 2D Perlin noise (with a recursive/fractal technique). We have generated mountains and valleys quite well, but now we are having problems with deserts and dunes: we have only worked with persistence and octaves, and we can't get the classic dune shape. Has anybody experienced this before? Is there a solution, ideally still using Perlin noise, or another algorithm that can produce it?
You could give the Musgrave ridged multifractal a try. It gives nice ridged structures and you can use your existing noise algorithms for it.
The C reference implementation for it is here
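In case that link becomes unavailable, here is a minimal sketch of the ridged idea (not Musgrave's exact reference code): fold each octave around zero with abs() so the zero-crossings become sharp ridges, then accumulate. It assumes a hypothetical 2D gradient-noise function noise2(x, y) returning values in roughly [-1, 1].

#include <math.h>

/* Hypothetical 2D gradient noise returning values in roughly [-1, 1]. */
extern float noise2(float x, float y);

/* Sketch of a ridged fractal sum: fold each octave around zero so the
   zero-crossings of the noise turn into sharp ridges, then accumulate. */
float ridged_fbm(float x, float y, int octaves)
{
    float sum = 0.0f, amplitude = 0.5f, frequency = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        float n = 1.0f - fabsf(noise2(x * frequency, y * frequency));
        n *= n;                      /* sharpen the ridge profile */
        sum += n * amplitude;
        frequency *= 2.0f;           /* lacunarity */
        amplitude *= 0.5f;           /* gain */
    }
    return sum;
}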
Dunes are lopsided: .='\ in cross-section... you may want to use an initial shape of that kind.
They are regular, like waves in the sea, not completely noisy.
They are elongated in the direction of the wind.
I didn't use the first condition, but I have made great dunes by multiplying two 1D Perlin noises together, or even two sine/parabola functions, both aligned to one axis (e.g. Z), with a small low-frequency sine or noise wobbling them along the X axis so they aren't perfectly aligned.
try this:
dunes = sin( X + 1dperlin(Z) * 0.2 ) * sin( X + 1dperlin(Z + 432) * 0.2 );
otherwise to test it:
dunes = sin( X + sin(Z) * 0.2 ) (plus, times, or divided by) sin( X + sin(Z + 432) * 0.2 );
The 0.2 factor makes the dunes about 10 times longer than they are wide; the result is like two straight water waves meeting at almost the same angle, with a noise term adding some uncertainty to that angle.
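To make the formula above concrete, here is a small sketch in C. perlin1() is a placeholder for whatever 1D noise you already have (values roughly in [-1, 1]); the constant 432 is just an arbitrary offset so the two noise tracks are decorrelated.

#include <math.h>

/* Placeholder for your existing 1D Perlin noise, values roughly in [-1, 1]. */
extern float perlin1(float z);

/* Two sine "wave fronts" running along Z, each wobbled along X by a
   low-frequency noise that varies with Z, so the fronts are not parallel. */
float dune_height(float x, float z)
{
    float a = sinf(x + perlin1(z)          * 0.2f);
    float b = sinf(x + perlin1(z + 432.0f) * 0.2f);
    return a * b;   /* try +, * or / as suggested above */
}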
Maybe turbulence is already enough for what you need. Try building turbulence by summing the absolute value of each octave instead of the signed value. You can also evaluate your noise and your turbulence separately and combine them to mix both effects in some areas.
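A sketch of what is meant by using absolute values (classic "turbulence"), again assuming a hypothetical noise2(x, y) in [-1, 1]:

#include <math.h>

extern float noise2(float x, float y);   /* hypothetical 2D noise in [-1, 1] */

/* Turbulence: sum the absolute value of each octave instead of the signed
   value, which produces the creased, billowy look described above. */
float turbulence(float x, float y, int octaves)
{
    float sum = 0.0f, amplitude = 0.5f, frequency = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        sum += fabsf(noise2(x * frequency, y * frequency)) * amplitude;
        frequency *= 2.0f;
        amplitude *= 0.5f;
    }
    return sum;
}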
In GLSL (specifically the 3.00 version I'm using), there are two versions of atan(): atan(y_over_x) can only return angles between -PI/2 and PI/2, while atan(y, x) can take all four quadrants into account, so the angle range covers everything from -PI to PI, much like atan2() in C++.
I would like to use the second atan to convert XY coordinates to an angle.
However, atan() in GLSL, besides not being able to handle x = 0, is not very stable. Especially where x is close to zero, the division can overflow, resulting in an opposite angle (you get something close to -PI/2 where you are supposed to get approximately PI/2).
What is a good, simple implementation that we can build on top of GLSL atan(y,x) to make it more robust?
I'm going to answer my own question to share my knowledge. We first notice that the instability happens when x is near zero. However, we can also translate that as abs(x) << abs(y). So first we divide the plane (assuming we are on a unit circle) into two regions: one where |x| <= |y| and another where |x| > |y|, as shown below:
We know that atan(x,y) is much more stable in the green region -- when x is close to zero we simply have something close to atan(0.0) which is very stable numerically, while the usual atan(y,x) is more stable in the orange region. You can also convince yourself that this relationship:
atan(x,y) = PI/2 - atan(y,x)
holds for all (x, y) except the origin, where it is undefined. Here we are talking about the atan(y, x) that can return an angle anywhere in the range -PI to PI, not atan(y_over_x), which only returns angles between -PI/2 and PI/2. Therefore, our robust atan2() routine for GLSL is quite simple:
const float PI = 3.14159265358979;  // GLSL has no built-in PI constant

float atan2(in float y, in float x)
{
    bool s = (abs(x) > abs(y));
    return mix(PI / 2.0 - atan(x, y), atan(y, x), s);
}
As a side note, the identity for the mathematical function atan(x) is actually:
atan(x) + atan(1/x) = sgn(x) * PI/2
which is true because its range is (-PI/2, PI/2).
Depending on your targeted platform, this might be a solved problem. The OpenGL spec for atan(y, x) specifies that it should work in all quadrants, leaving behavior undefined only when x and y are both 0.
So one would expect any decent implementation to be stable near all axes, as this is the whole purpose behind 2-argument atan (or atan2).
The questioner/answerer is correct in that some implementations do take shortcuts. However, the accepted solution makes the assumption that a bad implementation will always be unstable when x is near zero: on some hardware (my Galaxy S4 for example) the value is stable when x is near zero, but unstable when y is near zero.
To test your GLSL renderer's implementation of atan(y,x), here's a WebGL test pattern. Follow the link below and as long as your OpenGL implementation is decent, you should see something like this:
Test pattern using native atan(y,x): http://glslsandbox.com/e#26563.2
If all is well, you should see 8 distinct colors (ignoring the center).
The linked demo samples atan(y,x) for several values of x and y, including 0, very large, and very small values. The central box is atan(0.,0.)--undefined mathematically, and implementations vary. I've seen 0 (red), PI/2 (green), and NaN (black) on hardware I've tested.
Here's a test page for the accepted solution. Note: the host's WebGL version lacks mix(float,float,bool), so I added an implementation that matches the spec.
Test pattern using atan2(y,x) from accepted answer: http://glslsandbox.com/e#26666.0
Your proposed solution still fails in the case x=y=0. Here both of the atan() functions return NaN.
Furthermore, I would not rely on mix() to switch between the two cases. I am not sure how it is implemented/compiled, but the IEEE float rules say that x*NaN and x+NaN are again NaN. So if your compiler really implemented mix() as an interpolation, the result would be NaN for x=0 or y=0.
Here is another fix which solved the problem for me:
float atan2(in float y, in float x)
{
    // PI defined as above
    return x == 0.0 ? sign(y) * PI / 2.0 : atan(y, x);
}
When x=0 the angle can be ±PI/2; which of the two depends only on y. If y=0 too, the angle can be arbitrary (the vector has length 0), and sign(y) returns 0 in that case, which is just fine.
Sometimes the best way to improve the performance of a piece of code is to avoid calling it in the first place. For example, one of the reasons you might want to determine the angle of a vector is so that you can use this angle to construct a rotation matrix using combinations of the angle's sine and cosine. However, the sine and cosine of a vector (relative to the origin) are already hidden in plain sight inside the vector itself. All you need to do is to create a normalized version of the vector by dividing each vector coordinate by the total length of the vector. Here's the two-dimensional example to calculate the sine and cosine of the angle of vector [ x y ]:
double length = sqrt(x*x + y*y);
double cos = x / length;
double sin = y / length;
Once you have the sine and cosine values, you can directly populate a rotation matrix with them to perform a clockwise or counterclockwise rotation of arbitrary vectors by the same angle, or you can concatenate a second rotation matrix to rotate to an angle other than zero. In this case, you can think of the rotation matrix as "normalizing" the angle of an arbitrary vector to zero. This approach extends to the three-dimensional (or N-dimensional) case as well, although in 3D, for example, you will have three angles (one per rotation plane) and six sine/cosine values to calculate.
In situations where you can use this approach, you get a big win by bypassing the atan calculation completely, which is possible since the only reason you wanted to determine the angle was to calculate the sine and cosine values. By skipping the conversion to angle space and back, you not only avoid worrying about division by zero, but you also improve precision for angles which are near the poles and would otherwise suffer from being multiplied/divided by large numbers. I've successfully used this approach in a GLSL program which rotates a scene to zero degrees to simplify a computation.
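As a concrete (if simplified) illustration of the idea, here is a sketch that builds a 2x2 rotation matrix straight from the vector's components, with no angle in sight; transpose the matrix (or negate s) to rotate the vector onto the +X axis instead.

#include <math.h>

/* Build the rotation matrix for the angle of vector (x, y) directly from its
   normalized components; assumes the vector is not the zero vector. */
void rotation_from_vector(double x, double y, double m[2][2])
{
    double length = sqrt(x * x + y * y);
    double c = x / length;   /* cos(angle) */
    double s = y / length;   /* sin(angle) */
    m[0][0] = c;  m[0][1] = -s;
    m[1][0] = s;  m[1][1] =  c;
}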
It can be easy to get so caught up in an immediate problem that you can lose sight of why you need this information in the first place. Not that this works in every case, but sometimes it helps to think out of the box...
A formula that gives an angle in all four quadrants for any values of the coordinates x and y (for x = y = 0 the result is undefined):

f(x,y) = PI - PI/2 * (1 + sign(x)) * (1 - sign(y^2)) - PI/4 * (2 + sign(x)) * sign(y)
         - sign(x*y) * atan( (abs(x) - abs(y)) / (abs(x) + abs(y)) )
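A quick C check of this formula against the standard library's atan2() (my own test sketch, not part of the original formula). Note that on these test points the formula returns angles in [0, 2*PI) rather than (-PI, PI], so the reference value is remapped to the same range before printing:

#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

static double sgn(double v) { return (v > 0) - (v < 0); }

/* Plain-C transcription of the formula above. */
static double angle4q(double x, double y)
{
    return PI
         - PI / 2.0 * (1.0 + sgn(x)) * (1.0 - sgn(y * y))
         - PI / 4.0 * (2.0 + sgn(x)) * sgn(y)
         - sgn(x * y) * atan((fabs(x) - fabs(y)) / (fabs(x) + fabs(y)));
}

int main(void)
{
    const double pts[][2] = { {1, 1}, {-1, 1}, {-1, -1}, {1, -1},
                              {1, 0}, {0, 1}, {-1, 0}, {0, -1}, {2, 1} };
    for (int i = 0; i < 9; ++i) {
        double x = pts[i][0], y = pts[i][1];
        double ref = fmod(atan2(y, x) + 2.0 * PI, 2.0 * PI);  /* map to [0, 2*PI) */
        printf("(%+.0f,%+.0f): formula=%.6f  atan2=%.6f\n", x, y, angle4q(x, y), ref);
    }
    return 0;
}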
I have a set of images of the same scene but shot with different exposures. These images have no EXIF data so there is no way to extract useful info like f-stop, shutter speed etc.
What I'm trying to do is determine the difference in stops between the images, e.g. Image1 is +1.3 stops relative to Image0.
My current approach is to first calculate luminance from the image's RGB values using the equation
L = 0.2126 * R + 0.7152 * G + 0.0722 * B
I've seen different numbers being used in the equation but generally it should not affect the end result L too much.
After that I derive the log-average luminance of the image.
exp(avg of log(luminance of image))
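For reference, this is roughly what I mean by log-average luminance (a sketch; lum is assumed to already hold the per-pixel L values computed above, and the small epsilon guards against log(0) on black pixels):

#include <math.h>
#include <stddef.h>

double log_average_luminance(const double *lum, size_t n)
{
    const double eps = 1e-6;   /* avoid log(0) on pure black pixels */
    double sum_log = 0.0;
    for (size_t i = 0; i < n; ++i)
        sum_log += log(lum[i] + eps);
    return exp(sum_log / (double)n);
}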
But somehow the log-average luminance doesn't seem to give much indication of the exposure difference between the images.
Any ideas on how to determine exposure difference?
Edit: I'm working in C/C++.
You have to generally solve two problems:
1. Linearize your image data
(In case it's not obvious what is meant: two times more light collected by your pixel shall result in two times the intensity value in your linearized image.)
Your image input might be (sufficiently) linearized already -> you may skip to part 2. If your content came from a camera and it's a JPEG, then this will most certainly not be the case.
The real 'solution' to this problem is finding the camera response function, which you want to invert and apply to your image data to get linear intensity values. This is by no means a trivial task. The EMoR model is widely used in all sorts of software (Photoshop, PTGui, Photomatix, etc.) to describe camera response functions. Some open source software solving this problem (but using a different model iirc) is PFScalibrate.
Having said that, you may get away with a simple inverse gamma application. A rough guesstimate of the right gamma value might be found by doing this:
capture an evenly lit, static scene with two exposure times e and e/2
apply a couple of inverse gamma transforms (e.g. for 1.8 to 2.4 in 0.1 steps) on both images
multiply all the short exposure images with 2.0 and subtract them from the respective long exposure images
pick the gamma that leads to the smallest overall difference (a rough sketch of this search follows below)
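A rough sketch of that search in C (my own illustration of the steps above, not battle-tested code). The two captures are assumed to be grayscale values normalized to [0, 1], with shortExp taken at half the exposure time of longExp:

#include <math.h>
#include <stddef.h>

double estimate_gamma(const double *longExp, const double *shortExp, size_t n)
{
    double best_gamma = 1.8, best_err = HUGE_VAL;
    for (double gamma = 1.8; gamma <= 2.4 + 1e-9; gamma += 0.1) {
        double err = 0.0;
        for (size_t i = 0; i < n; ++i) {
            double lin_long  = pow(longExp[i],  gamma);   /* inverse gamma */
            double lin_short = pow(shortExp[i], gamma);
            double d = lin_long - 2.0 * lin_short;        /* half the light -> scale by 2 */
            err += d * d;
        }
        if (err < best_err) { best_err = err; best_gamma = gamma; }
    }
    return best_gamma;
}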
2. Find the actual difference of irradiation in stops, i.e. log2(scale factor)
Presuming the scene was static (no moving objects or camera), this is relatively easy:
sum1 = sum2 = 0
foreach pixel pair (p1, p2) from the two images:
    if p1 or p2 is close to 0 or 255:
        skip this pair
    sum1 += p1 and sum2 += p2
return log2(sum1 / sum2)
On large images this will certainly work just as well and a lot faster if you sub-sample the images.
If the camera was static but the scene was not (moving objects), this starts to work less well. I produced acceptable results in this case by simply repeating the above procedure several times, using the output of the previous run as an estimate for the correct scale factor, and then discarding pixel pairs whose quotient is too far away from the current estimate. So basically, replace the above if line with the following:
if <see above> or if abs(log2(p1/p2) - estimate) > 0.5:
I'd stop the repetition after a fixed number of iterations or if two consecutive estimates are sufficiently close to each other.
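Put together, the refinement loop might look roughly like this (a sketch under the same assumptions as the pseudocode above, with 8-bit pixel values and an initial estimate from the first pass):

#include <math.h>
#include <stddef.h>

double refine_stops(const unsigned char *img1, const unsigned char *img2,
                    size_t n, double estimate, int iterations)
{
    for (int it = 0; it < iterations; ++it) {
        double sum1 = 0.0, sum2 = 0.0;
        for (size_t i = 0; i < n; ++i) {
            double p1 = img1[i], p2 = img2[i];
            if (p1 < 5 || p1 > 250 || p2 < 5 || p2 > 250)
                continue;                                  /* close to 0 or 255: skip */
            if (fabs(log2(p1 / p2) - estimate) > 0.5)
                continue;                                  /* likely a moving object: skip */
            sum1 += p1;
            sum2 += p2;
        }
        estimate = log2(sum1 / sum2);
    }
    return estimate;
}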
EDIT: A note about conversion to luminance
You don't need to do that at all (as Tony D mentioned already) and if you insist, then do it after the linearization step (as Mark Ransom noted). In a perfect setting (static scene, no noise, no de-mosaicing, no quantization) every channel of every pixel would have the same ratio p1/p2 (if neither is saturated). Therefore the relative weighting of the different channels is irrelevant. You may sum over all pixels/channels (weighing R, G and B equally) or maybe only use the green channel.
I have built my own neural net and I have a weird problem with it.
The net is quite a simple feed-forward 1-N-1 net with back-propagation learning. Sigmoid is used as the activation function.
My training set is generated from random values in [-PI, PI] and their sine values scaled to [0, 1] (this is because the "sigmoid net" only produces values in [0, 1], while the unscaled sine function produces values in [-1, 1]).
With that training set, and the net set to 1-10-1 with a learning rate of 0.5, everything works great and the net learns the sine function as it should. BUT... if I do everything exactly the same way for the cosine function, the net won't learn it, with any hidden-layer size or learning rate.
Any ideas? Am I missing something?
EDIT: My problem seems to be similar to what can be seen with this applet. It won't learn the sine function unless something "easier" is taught for the weights first (like 1400 cycles of a quadratic function). All the other settings in the applet can be left as they initially are. So in the case of sine or cosine it seems that the weights need some boosting in at least partially the right direction before a solution can be found. Why is this?
I'm struggling to see how this could work.
You have, as far as I can see, 1 input, N nodes in 1 hidden layer, then 1 output. So there is no difference between any of the nodes in the hidden layer of the net. Suppose you have an input x and a set of weights w_i. Then the output node y will have the value:
y = Σ_i (w_i · x) = x · Σ_i w_i

So this is always linear.
In order for the nodes to be able to learn differently, they must be wired differently and/or have access to different inputs. So you could supply inputs of the value, the square root of the value (giving some effect of scale), etc and wire different hidden layer nodes to different inputs, and I suspect you'll need at least one more hidden layer anyway.
The neural net is not magic. It produces a set of specific weights for a weighted sum. Since you can derive a set of weights that approximates a sine or cosine function, that must inform your idea of which inputs the neural net will need in order to have some chance of succeeding.
An explicit example: the Taylor series of the exponential function is:
exp(x) = 1 + x/1! + x^2/2! + x^3/3! + x^4/4! ...
So if you supplied 6 input nodes with 1, x, x^2, ..., x^5, then a neural net that just passed each input to one corresponding node, multiplied it by its weight, and fed all those outputs to the output node would be capable of the 6-term Taylor expansion of the exponential:
in hid out
1 ---- h0 -\
x -- h1 --\
x^2 -- h2 ---\
x^3 -- h3 ----- y
x^4 -- h4 ---/
x^5 -- h5 --/
Not much of a neural net, but you get the point.
Further down the Wikipedia page on Taylor series, there are expansions for sin and cos, which are given in terms of odd powers of x and even powers of x respectively (think about it: sin is odd, cos is even, and yes, it is that straightforward). So if you supply all the powers of x, I would guess that the sin and cos versions will look pretty similar, just with alternating zero weights (sin: 0, 1, 0, -1/6, ...; cos: 1, 0, -1/2, ...).
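To make that concrete, here is a tiny C illustration (mine, not from the original discussion): with powers of x as the inputs, a plain weighted sum whose fixed "weights" are the sine Taylor coefficients already approximates sin(x) well near zero; a net fed those inputs would only have to discover the weights.

#include <math.h>
#include <stdio.h>

/* Weighted sum of the inputs 1, x, x^2, ..., x^5 with the sine Taylor
   coefficients as weights: 0, 1, 0, -1/6, 0, 1/120. */
static double taylor_sin(double x)
{
    const double w[6] = { 0.0, 1.0, 0.0, -1.0 / 6.0, 0.0, 1.0 / 120.0 };
    double sum = 0.0, power = 1.0;
    for (int i = 0; i < 6; ++i) {
        sum += w[i] * power;
        power *= x;
    }
    return sum;
}

int main(void)
{
    for (double x = -1.5; x <= 1.5; x += 0.5)
        printf("x=%+.1f  weighted sum=%+.4f  sin(x)=%+.4f\n", x, taylor_sin(x), sin(x));
    return 0;
}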
I think you can always compute the sine and then compute the cosine externally. I think your real concern here is why the neural net is not learning the cosine function when it can learn the sine function. Assuming this artifact is not caused by your code, I would suggest the following:
It definitely looks like an error in the learning algorithm. It could be because of your starting point: try starting with weights that give the correct result for the first input and then march forward.
Check whether there is a heavy bias in your learning (more positive than negative).
Since cos(x) = sin(90° - x), you could find the weights for sine and then recompute the weights for cosine in one step.
I would like to code something that could take some sort of input and identify it as a square wave, triangle wave, or some sort of waveform. I also need some way of generating said waves.
I do have experience with C/C++, however, I'm not sure how I would approach simulating all of this. Eventually, I would like to translate it to a microcontroller program for reading its analog input to determine the waveform.
EDIT: Sorry; I should have mentioned that the wave would be at a known frequency, while the amplitude is unknown.
Generating the waves is significantly easier than identifying them. I have a small project that does some wave generation. Here's an example from my project:
float amplitude;
switch (sound->wavetype)
{
    case LA_SQUARE:
        /* -1 for the first half of the period, +1 for the second half */
        amplitude = sound->theta > .5 ? 1.0 : -1.0;
        break;
    case LA_SINE:
        amplitude = sin(2 * PI * sound->theta);
        break;
    case LA_TRIANGLE:
        /* ramps from +1 down to -1 over the first half, then back up */
        amplitude = sound->theta > .5 ? 4 * sound->theta - 3 : -4 * sound->theta + 1;
        break;
    case LA_SAWTOOTH:
        amplitude = 2 * sound->theta - 1.0;
        break;
    case LA_NOISE:
        amplitude = ((float)rand() / RAND_MAX);
        break;
    default:
        ;
}
theta here is updated every frame along the waveform and depends on the frequency of the wave you're creating.
As for identifying waves: if you know you're only going to get simple, unmixed square, triangle, or sine waves, you can probably just do some simple tests. Look at the amplitude at any two nearby points along the wave. If they're the same, it's a square wave. If the amplitude changes linearly (that is, if the change between samples is constant), you've got a triangle wave (or a sawtooth, if you're making that distinction). Otherwise, it's a sine wave. Keep in mind this check only works if you're expecting just those types of wave and they're not being mixed or anything; there are some other edge cases I can think of, but I'll let you worry about those. A naive sketch of this check follows below.
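Here's that sketch in C (thresholds are arbitrary and would have to be tuned to your sample rate and signal level, since the amplitude is unknown). It looks at first differences of consecutive samples: flat almost everywhere means square, a constant |slope| means triangle/sawtooth, anything else is treated as sine.

#include <math.h>
#include <stddef.h>

enum wave_kind { WAVE_SQUARE, WAVE_TRIANGLE, WAVE_SINE };

enum wave_kind classify(const float *s, size_t n)
{
    if (n < 3)
        return WAVE_SINE;                    /* not enough samples to decide */
    size_t flat = 0, steady = 0;
    for (size_t i = 2; i < n; ++i) {
        float d1 = s[i] - s[i - 1];          /* current slope  */
        float d0 = s[i - 1] - s[i - 2];      /* previous slope */
        if (fabsf(d1) < 1e-3f)
            ++flat;                          /* no change: plateau of a square wave */
        else if (fabsf(fabsf(d1) - fabsf(d0)) < 1e-3f)
            ++steady;                        /* constant |slope|: triangle/sawtooth */
    }
    if (flat > (n - 2) * 9 / 10)
        return WAVE_SQUARE;
    if (steady > (n - 2) * 9 / 10)
        return WAVE_TRIANGLE;
    return WAVE_SINE;
}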
If you're doing anything fancier, you're probably going to need to look up a book that specializes in this sort of thing, like the one suggested in the comments section.
Start with MATLAB or the free GNU Octave work-alike. You can generate arrays with the desired waveforms and write appropriate functions to decode/identify them. When you have the details worked out, grab a copy of the FFTW ("Fastest Fourier Transform in the West") library to handle the FFT/IFFT routines for your C/C++ code. MATLAB's signal processing toolbox has lots of useful tools to achieve your objective.
To identify waveforms: if you know the frequency, you can do quite a lot by using discrete gradients, as Alex suggests in his answer.
Another method would be to use an interpolation technique and look at the coefficients. Still another would be a fast Fourier transform. These last two are computationally more intensive, but also much more accurate, especially when identifying more complex waveforms. You would have to check whether your microcontroller is fast enough or, if you are lucky, has a hardware FFT.
I was studying Perlin noise through some examples at http://dindinx.net/OpenGL/index.php?menu=exemples&submenu=shaders and couldn't help noticing that his make3DNoiseTexture() in perlin.c uses noise3(ni) instead of PerlinNoise3D(...).
Now why is that? Isn't Perlin's Noise supposed to be a summation of different noise frequencies and amplitudes?
Question 2: what do ni, inci, incj and inck stand for? Why use ni instead of x, y coordinates? And why is ni incremented with
ni[0]+=inci;
inci = 1.0 / (Noise3DTexSize / frequency);
I see Hugo Elias created his Perlin2D with x,y coordinates, and so does PerlinNoise3D(...).
Thanks in advance :)
I now understand why and am going to answer my own question in hopes that it helps other people.
Perlin noise is actually a synthesis of gradient noises. To produce it, at each of the lattice corners surrounding the input point we compute the dot product of the randomly generated gradient vector at that corner with the vector pointing from the corner to the input point.
Now, if the input point were a whole number, such as the integer xyz coordinates of a texture you want to create, those dot products would always be 0, which would give you flat noise. So instead, ni (stepped by inci, incj, inck) is used as an alternative, fractional index. Yep, just an index, nothing else.
Now, returning to question 1, there are two methods to implement Perlin noise:
1. Calculate the noise values (one octave each) separately and store them in the RGBA slots of the texture.
2. Synthesize the octaves up front and store the result in one of the RGBA slots of the texture.
noise3(ni) is the actual implementation of method 1, while PerlinNoise3D(...) suggests the latter.
In my personal opinion, method 1 is much better because you have much more flexibility over how you use each octave in your shaders.
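To illustrate the difference: method 2 boils down to the classic fractal ("octave") sum sketched below, done once when the texture is built; with method 1 you would instead store each octave in its own channel of the texture and perform this weighted sum later, in the shader, where the per-octave weights can still be tweaked. gradient_noise3() is a hypothetical stand-in for a single-octave 3D noise such as noise3().

/* Hypothetical single-octave 3D gradient noise in roughly [-1, 1]. */
extern float gradient_noise3(float x, float y, float z);

/* Method 2: pre-summed fractal noise (persistence controls octave falloff). */
float fractal_sum(float x, float y, float z, int octaves, float persistence)
{
    float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        sum += amplitude * gradient_noise3(x * frequency, y * frequency, z * frequency);
        amplitude *= persistence;
        frequency *= 2.0f;
    }
    return sum;
}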
My guess on the reason for using noise3(ni) in make3DNoiseTexture() instead of PerlinNoise3D(...) is that when you use that noise texture in your shader, you want to be able to replicate and modify the functionality of PerlinNoise3D(...) directly in the shader.
My guess for the reasoning behind ni, inci, incj, inck is that using the x, y, z of the volume directly doesn't give a good result, so by scaling the noise with the frequency instead, it is possible to adjust the resolution of the noise independently of the volume size.