I'm trying to visualize the Mandelbrot set with OpenGL and I've run into very strange behaviour when it comes to smooth coloring.
Let's assume that, for the current complex value C, the algorithm escaped after n iterations, when |Z| was proven to be greater than 2.
I programmed the coloring part like this:
if (n == maxIterations) {
    // Points in the M-brot set are colored black.
    color = 0.0; // 0.0 in each RGB channel is black in OpenGL
} else {
    // Continuous coloring algorithm; color is between 0.0 and 1.0.
    // Points outside the M-brot set are colored depending on their absolute value,
    // from brightest near the edge of the set to darkest far away from it.
    color = (n + 1 - log(log(abs(Z))) / log(2.0)) / maxIterations;
}
glColor3f(color, color, color);
// OpenGL command that builds an RGB color from three channel values.
The problem is, this just doesn't work well. Some smoothing is noticeable, but it's not perfect.
But when I add two additional iterations (something I just found somewhere, without explanation)
Z = Z*Z + C;
n++;
in the "else" branch before calculating the color, the picture comes out absolutely, gracefully smooth.
Why does this happen? Why do we need extra iterations in the coloring part, after the point has already been found to escape?
I'm not actually certain, but I'm guessing it has something to do with the fact that the log of the log of a number (log(log(n))) is somewhat of an unstable thing for "small" numbers n, where "small" in this case means something close to 2. If your Z has just escaped, it's close to 2. If you continue to iterate, you get (quickly) further and further from 2, and log(log(abs(Z))) stabilizes, thus giving you a more predictable value... which, then, gives you smoother values.
Example data, arbitrarily chosen:
n Z.real Z.imag |Z| status color
-- ----------------- ----------------- ----------- ------- -----
0 -0.74 -0.2 0.766551 bounded [nonsensical]
1 -0.2324 0.096 0.251447 bounded [nonsensical]
2 -0.69520624 -0.2446208 0.736988 bounded [nonsensical]
3 -0.31652761966 0.14012381319 0.346157 bounded [nonsensical]
4 -0.65944494902 -0.28870611409 0.719874 bounded [nonsensical]
5 -0.38848357953 0.18077157738 0.428483 bounded [nonsensical]
6 -0.62175887162 -0.34045357891 0.708867 bounded [nonsensical]
7 -0.46932454495 0.22336006613 0.519765 bounded [nonsensical]
8 -0.56962419064 -0.40965672279 0.701634 bounded [nonsensical]
9 -0.58334691196 0.26670075833 0.641423 bounded [nonsensical]
10 -0.4708356748 -0.51115812757 0.69496 bounded [nonsensical]
11 -0.77959639873 0.28134296385 0.828809 bounded [nonsensical]
12 -0.2113833184 -0.63866792284 0.67274 bounded [nonsensical]
13 -1.1032138084 0.070007489775 1.10543 bounded 0.173185134517425
14 0.47217965836 -0.35446645882 0.590424 bounded [nonsensical]
15 -0.64269284066 -0.53474370285 0.836065 bounded [nonsensical]
16 -0.6128967403 0.48735189882 0.783042 bounded [nonsensical]
17 -0.60186945901 -0.79739278033 0.999041 bounded [nonsensical]
18 -1.0135884004 0.75985272263 1.26678 bounded 0.210802091344997
19 -0.29001471459 -1.7403558114 1.76435 bounded 0.208165835763602
20 -3.6847298156 0.80945758785 3.77259 ESCAPED 0.205910029166315
21 12.182012228 -6.1652650168 13.6533 ESCAPED 0.206137522227716
22 109.65092918 -150.41066764 186.136 ESCAPED 0.20614160700086
23 -10600.782669 -32985.538932 34647.1 ESCAPED 0.20614159039676
24 -975669186.18 699345058.7 1.20042e+09 ESCAPED 0.206141590396481
25 4.6284684972e+17 -1.3646588486e+18 1.44101e+18 ESCAPED 0.206141590396481
26 -1.6480665667e+36 -1.263256098e+36 2.07652e+36 ESCAPED 0.206141590396481
Notice how much the color value is still fluctuating at n in [20,22], getting stable at n=23, and consistent from n=24 onward. And notice that the Z values there are WAY outside the circle of radius 2 that bounds the Mandelbrot set.
I haven't actually done enough of the math to be sure that this is actually a solid explanation, but that would be my guess.
As I just wrote the same program, and as a mathematician, I'll explain why; I hope it will be clear.
Let's say the sequence with seed C has escaped after k iterations.
Since the function itself is defined as a limit as the number of iterations goes to infinity, it is not that good to stop at something close to k. Here is why.
We know that once |z| exceeds 2 the sequence goes to infinity, so consider some iteration T at which z has become big enough. Compared to it, C is insignificantly small, since you normally look at the set within [-2, 2] and [-1.5, 1.5] on the two axes. So on iteration T+1, z is approximately the square of the previous z, and it is easy to check that in that case |z_{T+1}| is approximately |z_T|^2.
Our function is log(|z_k|)/2^k at the k-th iteration. In the case we are looking at, it is easy to see that at iteration T+1 it is approximately
log(|z_{T+1}|)/2^(T+1) ≈ log(|z_T|^2)/2^(T+1) = 2·log(|z_T|)/2^(T+1) = log(|z_T|)/2^T,
which is the function at iteration T.
In other words, as |z| becomes "significantly" bigger than the seed C, the function becomes more and more stable. You do NOT want to use an iteration close to the escaping iteration k, since there Z is still close to 2, and depending on C, the seed may not be insignificantly small compared to it, so you will not be close to the limit.
When |C| is near 2, the first escaping iteration will actually be a LOT away from the limit. If, on the other hand, you use |Z| > 100 as the escape bound, for instance, or just take several more iterations, the value will be much more stable.
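Here is a sketch of that larger-escape-bound variant (again assuming std::complex; the radius 100 is just the example value from the text, and the coloring formula is the one from the question):
#include <cmath>
#include <complex>
double smoothValueBigBailout(std::complex<double> C, int maxIterations) {
    std::complex<double> Z(0.0, 0.0);
    int n = 0;
    // keep iterating until |Z| is far beyond 2, so log(log(|Z|)) is evaluated
    // far from the unstable region near 2
    while (n < maxIterations && std::abs(Z) <= 100.0) {
        Z = Z * Z + C;
        ++n;
    }
    if (n == maxIterations)
        return 0.0; // treated as inside the set
    return (n + 1 - std::log(std::log(std::abs(Z))) / std::log(2.0)) / maxIterations;
}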
I hope this answers the question well for anyone interested.
For now it seems the "additional iterations" idea has no convincing formula behind it, beyond the fact that it just works.
In an old version of Wikipedia's Mandelbrot set article there are these lines about continuous coloring:
Second, it is recommended that a few extra iterations are done so that z can grow. If you stop iterating as soon as z escapes, there is the possibility that the smoothing algorithm will not work.
Better than nothing.
In this 2D array, 1 represents a point and 0 represents a blank area.
For example, this array:
1 0 0 0 1
0 0 1 0 0
0 0 0 0 0
0 0 0 0 1
My answer should be 2, because there are 2 squares (or rectangles) in this array.
All of the points should be used, and you can't make another square or rectangle if all of its points are already used (for example, we can't make another square from the point in the middle to the point in the top right, because they are both already used in other squares). You can reuse a point only if at least one corner of the new square is an unused point.
I could solve it as an implementation problem, but I don't understand how backtracking is related to this problem.
Thanks in advance.
Backtracking: let's take a look at another possible answer to your problem. You listed:
{0,0} to {2,1}
{0,0} to {4,0}
as one solution. Another solution is (given that a point can be used multiple times as long as one point is unused):
{4,0} to {2,1} (first time 4,0 and 2,1 is used)
{0,0} to {2,1} (first time 0,0 is used)
{0,0} to {4,4} (first time 4,4 is used)
That is 3 moves. Backtracking is designed to explore alternative results using recursion: if you start calculating the squares from different locations in the array, you can reach different results.
For example, iterating from 0,0 and going right across each row, trying to find all possible rectangles starting with [0,0], will give the solution you provided; iterating from 4,0 and going left across each row, trying to find all possible solutions, will give my result.
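To make the take-a-candidate / undo-and-try-the-next-one pattern concrete, here is a minimal, self-contained C++ sketch. It is not a solution to the exact rectangle rules above; the candidate list, the point numbering and the helper names (Candidate, addsNewPoint, search) are all assumptions made purely to illustrate the backtracking structure:
#include <algorithm>
#include <cstdio>
#include <vector>
// A "candidate" is one shape we could pick, listed as the point indices it uses.
struct Candidate { std::vector<int> points; };
// True if the candidate would cover at least one point that is still unused.
static bool addsNewPoint(const Candidate& c, const std::vector<bool>& used) {
    for (int p : c.points)
        if (!used[p]) return true;
    return false;
}
// Classic take-or-skip backtracking: try taking candidate idx, recurse,
// then undo the choice and try skipping it instead.
static void search(std::size_t idx, const std::vector<Candidate>& cands,
                   std::vector<bool>& used, int taken, int& best) {
    if (std::find(used.begin(), used.end(), false) == used.end()) {
        best = std::min(best, taken); // every point is covered
        return;
    }
    if (idx == cands.size()) return;  // dead end; unwind (backtrack)
    if (addsNewPoint(cands[idx], used)) {
        std::vector<bool> saved = used;   // remember state so we can undo it
        for (int p : cands[idx].points) used[p] = true;
        search(idx + 1, cands, used, taken + 1, best);
        used = saved;                     // the actual backtracking step
    }
    search(idx + 1, cands, used, taken, best);
}
int main() {
    // Four points numbered 0..3 and a made-up candidate list (pure assumptions).
    std::vector<Candidate> cands = { {{0, 1}}, {{1, 2}}, {{0, 3}}, {{2, 3}} };
    std::vector<bool> used(4, false);
    int best = 1 << 30;
    search(0, cands, used, 0, best);
    std::printf("fewest candidates needed: %d\n", best);
    return 0;
}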
I have a 2D array containing the pixels of a BMP file, and its size is width (3*65536) * height (3*65536) after I scaled it up.
It's like this:
1 2 3 4
5 6 7 8
9 10 11 12
Between 1 and 2 there are 2 holes, because I enlarged the original 2D array (multiplied by 3).
I use a 1D, array-like access method, like this:
array[y * width + x]
index
0 1 2 3 4 5 6 7 8 9...
1 2 3 4 5 6 7 8 9 10 11 12
(this array is actually a 2D array and is scaled by multiplying by 3)
Now I can patch the holes like this.
In a double for loop, when (j % 3 == 1):
Image[i*width+j] = Image[i*width+(j-1)]*(1-1/3) + Image[i*width+(j+2)]*(1-2/3)
and when (j % 3 == 2):
Image[i*width+j] = Image[i*width+(j-2)]*(1-2/3) + Image[i*width+(j+1)]*(1-1/3)
This is the way I understand I could patch the holes, which is the so-called "bilinear interpolation".
I want to be sure about what I know before implementing this logic into my code. Thanks for reading.
Bilinear interpolation requires either 2 linear interpolation passes (horizontal and vertical) per interpolated pixel (well, some of them only require 1), or up to 4 source pixels per interpolated pixel.
Between 1 and 2 there are two holes. Between 1 and 5 there are 2 holes. Between 1 and 6 there are 4 holes. Your code, as written, could only patch the holes between 1 and 2 correctly, not the other holes.
In addition, your division is integer division, and does not do what you want.
Generally you are far better off writing an r = interpolate_between(a, b, x, y) function that interpolates between a and b at step x out of y. Then test it and fix it. Now scale your image horizontally using it, and check visually that you got it right (especially the edges!).
Now try using it to scale vertically only.
Now do both horizontal, then vertical.
Next, write the bilinear version, which you can test again using the linear version three times (will be within rounding error). Then try to bilinear scale the image, checking visually.
Compare with the two-linear scale. It should differ only by rounding error.
At each of these stages you'll have a single "new" operation that can go wrong, with the previous code already validated.
Writing everything at once will lead to complex bug-ridden code.
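As a starting point, here is a minimal sketch of the kind of helper the answer suggests. The name interpolate_between and the float-based signature are assumptions, not code from the question:
#include <cassert>
// Linearly interpolate between a and b at step x out of y (x in [0, y]).
// Floating-point math avoids the integer-division problem noted above.
float interpolate_between(float a, float b, int x, int y) {
    assert(y > 0 && x >= 0 && x <= y);
    const float t = static_cast<float>(x) / static_cast<float>(y);
    return a * (1.0f - t) + b * t;
}
// Example: filling the two holes between source samples 1 and 2 after
// scaling by 3 (positions 0..3, with holes at positions 1 and 2):
//   interpolate_between(1.0f, 2.0f, 1, 3)  ->  about 1.333f
//   interpolate_between(1.0f, 2.0f, 2, 3)  ->  about 1.667f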
I am learning about two-dimensional neural networks, so I am facing many obstacles, but I believe it is worth it and I am really enjoying the learning process.
Here's my plan: to make a 2D NN recognize images of digits. The images are 5-by-3 grids and I prepared 10 images, from zero to nine. For example, this would be number 7:
Number 7 has indexes 0,1,2,5,8,11,14 as 1s (or 3,4,6,7,9,10,12,13 as 0s, it doesn't matter), and so on. Therefore, my input layer will be a 5-by-3 neuron layer and I will be feeding it zeros OR ones only (nothing in between; the indexes depend on which image I am feeding the layer).
My output layer, however, will be a one-dimensional layer of 10 neurons. Depending on which digit is recognized, a certain neuron should fire a value of one and the rest should stay at zero (shouldn't fire).
I am done with implementing everything, but I have a problem in the computation and I would really appreciate any help. I am getting an extremely high error rate and extremely low (negative) output values on all output neurons, and the values (error and output) do not change even on the 10,000th pass.
I would love to go further and post my backpropagation methods, since I believe the problem is in them. However, to break down my work, I would love to hear some comments first; I want to know whether my design is workable.
Does my plan make sense?
All the posts talk about ranges (0 -> 1, -1 -> +1, 0.01 -> 0.5, etc.). Will it work with strictly { 0 OR 1 } on the output layer, rather than a range? If yes, how can I control that?
I am using TanHyperbolic as my transfer function. Does the choice between this, the sigmoid, and other functions make a difference?
Any ideas/comments/guidance are appreciated. Thanks in advance.
Well, from the description given above, I think the design and approach taken are correct! With respect to the choice of activation function, remember that those functions help pick out the neurons with the largest activation; also, their algebraic properties, such as having an easy derivative, help with the definition of backpropagation. Taking this into account, you should not worry about your choice of activation function.
The ranges that you mention above correspond to scaling of the input; it is better to have your input images in the range 0 to 1. This helps to scale the error surface and helps with the speed and convergence of the optimization process. Because your input set is composed of images, and each image is composed of pixels, the minimum and maximum values that a pixel can attain are 0 and 255, respectively. To scale your input in this example, divide each value by 255.
Now, with respect to the training problems: have you tried checking whether your gradient calculation routine is correct, i.e., by evaluating the cost function J numerically? If not, try generating a toy vector theta that contains all the weight matrices involved in your neural network, and evaluate the gradient at each point using the definition of the gradient. Sorry for the Matlab example, but it should be easy to port to C++:
perturb = zeros(size(theta));
numgrad = zeros(size(theta));   % numerical gradient, one entry per weight
e = 1e-4;
for p = 1:numel(theta)
    % Set perturbation vector
    perturb(p) = e;
    loss1 = J(theta - perturb);
    loss2 = J(theta + perturb);
    % Compute numerical gradient (central difference)
    numgrad(p) = (loss2 - loss1) / (2*e);
    perturb(p) = 0;
end
After evaluating the function, compare the numerical gradient with the gradient calculated by backpropagation. If the difference between the two is less than about 3e-9, your implementation should be correct.
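Since the answer notes this is easy to port to C++, here is a hedged sketch of the same numerical gradient check. The callable J, the flattened theta vector, and the tolerance check are assumptions that would have to match your own network code:
#include <cmath>
#include <functional>
#include <vector>
// Numerical gradient of a cost function J at theta, using central differences.
// J is assumed to take the flattened weight vector and return the scalar cost.
std::vector<double> numericalGradient(
        const std::function<double(const std::vector<double>&)>& J,
        std::vector<double> theta,
        double e = 1e-4) {
    std::vector<double> numgrad(theta.size(), 0.0);
    for (std::size_t p = 0; p < theta.size(); ++p) {
        const double saved = theta[p];
        theta[p] = saved + e;
        const double loss2 = J(theta);
        theta[p] = saved - e;
        const double loss1 = J(theta);
        theta[p] = saved;                       // restore the original weight
        numgrad[p] = (loss2 - loss1) / (2.0 * e);
    }
    return numgrad;
}
// Usage sketch: for each p, compare numgrad[p] with the gradient your
// backpropagation produces; std::fabs(numgrad[p] - backpropGrad[p]) should be
// below roughly 3e-9 (the tolerance suggested above).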
I recommend checking out the UFLDL tutorials offered by the Stanford Artificial Intelligence Laboratory; there you can find a lot of information related to neural networks and their paradigms. It's worth taking a look!
http://ufldl.stanford.edu/wiki/index.php/Main_Page
http://ufldl.stanford.edu/tutorial/
If I need to generate random numbers in the [N, M] range, but with more numbers close to avg (N <= avg <= M), which is better to use:
poisson_distribution or
normal_distribution?
Looking at the examples on the cppreference pages (at the bottom of the pages), they both seem to generate what is needed:
poisson_distribution at point 4:
0 *
1 *******
2 **************
3 *******************
4 *******************
5 ***************
6 **********
7 *****
8 **
9 *
10
11
12
13
normal_distribution at point 5 with standard deviation 2:
-2
-1
0
1 *
2 ***
3 ******
4 ********
5 **********
6 ********
7 *****
8 ***
9 *
10
11
12
What should I choose? Maybe something else?
Neither choice is great if you need the outcomes on a bounded range. The normal distribution has infinite tails at both ends, the Poisson distribution has an infinite upper tail. At a minimum you'd want a truncated form of one of them. If you're not truncating, note that the normal is always symmetric about its mean while a Poisson can be quite skewed. The two distributions also differ in the fact that the normal is continuous, the Poisson is discrete, although you can discretize continuous distributions by binning the results.
If you want a discrete set of outcomes on a bounded range, you could try a scaled and shifted binomial distribution. A binomial with parameters n and p counts how many "successes" you get out of n trials when the trials are independent and all yield success with probability p. Make n = M - N and shift the outcome by N to get outcomes in the range [N,M].
If you want a continuous range of outcomes, consider a beta distribution. You can fudge the parameters to get a wide variety of distribution shapes and dial in the mean to where you want it, and scale+shift it to any range you want.
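As a rough illustration of the scaled-and-shifted binomial idea, here is a small C++ sketch. Choosing p = (avg - N) / (M - N) to place the mean at avg is my reading of the answer, not code from it:
#include <cstdio>
#include <random>
int main() {
    const int N = 0, M = 13;          // desired range [N, M]
    const double avg = 4.0;           // desired mean, with N <= avg <= M
    const double p = (avg - N) / static_cast<double>(M - N);
    std::mt19937 gen(std::random_device{}());
    std::binomial_distribution<int> binom(M - N, p); // outcomes in [0, M - N]
    // Draw a few samples shifted into [N, M]; most of them land near avg.
    for (int i = 0; i < 10; ++i)
        std::printf("%d ", N + binom(gen));
    std::printf("\n");
    return 0;
}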
You can center both distributions in a point that suits your needs.
But if M is small, then the Poisson distribution has a 'fat tail', that is, the probability of getting a number above M is higher compared to the normal distribution.
In the normal case, you can control this chance via the variance parameter (it can be as small as you want).
The other, rather obvious difference is that a Poisson will only give you non-negative integers, whereas a normal distribution will give you any real number.
Plus, when the mean is large enough, the Poisson converges to a normal distribution, so even if the Poisson is the right model, the normal approximation won't be very inaccurate.
With this in mind, if the numbers do not simulate a counting process, I would go for the Normal.
If you need a distribution that stays within a range (not an infinite or semi-infinite one like the normal or Poisson) but has a clear maximum, you may try an Irwin-Hall distribution with several degrees of freedom. Say, IH(16) will have its minimum at 0, maximum at 16, and peak at 8; see http://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution
It is very easy to sample, easy to scale, and you can play with n to make the peak wider or narrower.
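For reference, Irwin-Hall(n) is just the sum of n independent uniform(0,1) draws, so a sampler scaled onto [N, M] might look like the sketch below; the function name and the example values are made up:
#include <cstdio>
#include <random>
// Sample an Irwin-Hall(n) value (sum of n uniform(0,1) draws, range [0, n],
// peak at n/2), then scale it linearly onto [N, M].
double irwinHallScaled(std::mt19937& gen, int n, double N, double M) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += u(gen);
    return N + (M - N) * (sum / n);   // the peak lands at the middle of [N, M]
}
int main() {
    std::mt19937 gen(std::random_device{}());
    for (int i = 0; i < 10; ++i)
        std::printf("%.3f ", irwinHallScaled(gen, 16, 0.0, 13.0));
    std::printf("\n");
    return 0;
}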
I prefer the normal distribution because it is closer to real-life problems, while the Poisson distribution is used for special cases only. Choosing the normal distribution makes your approach more general.
How can I measure this area in C++?
(update: I posted the solution and code as an answer rather than edit the question again)
The ideal line (dashed red) is plotted from the starting point, with the average rise added at each angle of measurement; I obtain this via the average. The measured test data is in black. How can I quantify the area of the dip in blue? The X axis is unitized, so slopes and the math are simplified.
I could determine a cutoff for the size of areas like this and then flag this part for retesting or failure. Rarely, there is another dip that appears closer to the right, but setting a cutoff value for standard deviation usually fails those parts.
Update
Diego's answer helped me visualize this. Now that I can see what I'm trying to do, I'll work on the algorithm to implement the "homemade dip detector". :)
Why?
I created a test bench to test throttle position sensors I'm selling. I'm trying to programmatically quantify how straight the plot is by analyzing the data collected. This one particular model is vexing me.
Sample plot of a part I prefer not to sell:
The X-axis values are evenly spaced angles of throttle opening. The stepper motor turns the input shaft, stopping every 0.75° to measure the output on a 10-bit ADC, which gets translated to the Y axis. The plot is the translation of data[idx] to idx,value mapped to (x,y) bitmap coordinates. Then I draw lines between the points within the bitmap using Bresenham's algorithm.
My other TPS products produce amazingly linear output.
The lower (left) portion of the plot is crucial to normal usage of any motor vehicle; it's when you're driving around town, entering parking lots, etc. This particular part has a tendency to develop a dip around 15° opening and I wish to use the program to quantify this "dip" in the curve and rely less upon the tester's intuition. In the above example, the plot dips but doesn't return to what an ideal line might be.
Even though this is an embedded application, printing the report takes 10 seconds, thus I do not consider stepping through an array of 120 points of data multiple times a waste of cycles. Also, since I'm using a uC32 PIC32 microcontroller, there's plenty of memory, so I have the luxury of being able to ponder this problem within the controller.
What I'm trying already
Array of rise between test points: I dismiss the X-axis entirely, considering it unitized, and then make an array of change from one reading to the next. This array is what contributes to the report's "Min rise between points: 0 Max: 14". I call this array deltas.
I've tried using standard deviation on deltas, however, during testing I have found that a low Std Dev is not a reliable measure for this part. If the dip quickly returns to the original line implied by early data points, the Std Dev can be deceptively low (observed to be as low as 2.3) but the part is still something I wouldn't want to use. I tried setting a cutoff at 2.6, but it failed too many parts with great plots. The other, more linear part linked to above can reliably count on Std Dev for quality.
Kurtosis seems not to apply for this situation at all. I learned of Kurtosis today and found a Statistics Library which includes Kurtosis and Skewness. During continued testing, I found that of these two measures, there was not a trend of positive, negative, or amplitude which would correspond to either passing or failing. That same gentleman has shared a linear regression library, but I believe Lin Reg is unrelated to my situation, as I am comfortable with the assumption of the AVG of deltas being my ideal line. Linear Regression and R^2 are more for finding a line from less ideal data or much larger sets.
Comparing each delta to AVG and Std Dev: I set up a monitor to check each delta against the final average of the deltas data. Here, too, I couldn't find a reliable metric. Too many good parts would not pass a test restricting any delta to within 2x Std Dev of the average. Ultimately, the only variation from AVG I could settle on is to be within AVG + Std Dev of the AVG itself. Anything more restrictive would fail otherwise good parts. And the elusive dip around 15° opening can sneak through this test.
Homemade dip detector: When feeding deltas to the serial monitor of the computer, I observed consecutive negative deltas during the dip, so I programmed in a dip detector, but it feels very crude to me. If there are 5 or more negative deltas in a row, I sum them. I have seen that if I take that sum of the dip's differences from AVG and then divide by the number of negative deltas, a value over 2.9 or 3 could mean a fail. I have observed dips lasting from 6 to 15 deltas. Readily observable dips would have their differences from AVG sum up to -35.
Trending accumulated variation from the AVG: The above made me think watching the summation of deltas as it wanders away from the AVG could be the answer. Meaning, I step through the array and sum the differences of each delta from the AVG. I thought I was on to something until a good part blew this theory. I was seeing a trend that the fewer points at which the running sum strayed from the AVG by more than 2x the AVG, the straighter the line appeared. Many ideal parts would show only 8 or fewer delta points where the sumOfDiffs strayed very far from the AVG.
float sumOfDiffs = 0.0;
for( int idx = 0; idx < stop; idx++ ){
    float spread = deltas[idx] - line->AdcAvgRise;
    sumOfDiffs = sumOfDiffs + spread;
    ...
    testVal = 2*line->AdcAvgRise;
    if( sumOfDiffs > testVal || sumOfDiffs < -testVal ){
        flag = 'S';
    }
    ...
}
And then a part with a fantastic linear plot came through with 58 data points where sumOfDiffs was more than twice the AVG! I find this amazing, as at the end of the ~120 data points, sumOfDiffs value is -0.000057.
During testing, the final sumOfDiffs result would often register as 0.000000 and only on exceptionally bad parts would it be greater than .000100. I found this quite surprising, actually: how a "bad part" can have accumulated great accuracy.
Sample output from monitoring sumOfDiffs This below output shows a dip happening. The test watches as the running sumOfDiffs is more than 2x the AVG away from the AVG for the whole test. This dip lasts from deltas idx of 23 through 49; starts at 17.25° and lasts for 19.5°.
Avg rise: 6.75 Std dev: 2.577
idx: delta diff from avg sumOfDiffs Flag
23: 5 -1.75 -14.05 S
24: 6 -0.75 -14.80 S
25: 7 0.25 -14.55 S
26: 5 -1.75 -16.30 S
27: 3 -3.75 -20.06 S
28: 3 -3.75 -23.81 S
29: 7 0.25 -23.56 S
30: 4 -2.75 -26.31 S
31: 2 -4.75 -31.06 S
32: 8 1.25 -29.82 S
33: 6 -0.75 -30.57 S
34: 9 2.25 -28.32 S
35: 8 1.25 -27.07 S
36: 5 -1.75 -28.82 S
37: 15 8.25 -20.58 S
38: 7 0.25 -20.33 S
39: 5 -1.75 -22.08 S
40: 9 2.25 -19.83 S
41: 10 3.25 -16.58 S
42: 9 2.25 -14.34 S
43: 3 -3.75 -18.09 S
44: 6 -0.75 -18.84 S
45: 11 4.25 -14.59 S
47: 3 -3.75 -16.10 S
48: 8 1.25 -14.85 S
49: 8 1.25 -13.60 S
Final Sum of diffs: 0.000030
RunningStats analysis:
NumDataValues= 125
Mean= 6.752
StandardDeviation= 2.577
Skewness= 0.251
Kurtosis= -0.277
Sobering note about quality: what started me on this journey was learning how major automotive OEM suppliers consider a 4 point test to be the standard measure for these parts. My first test bench used an Arduino with 8k of RAM, didn't have a TFT display nor a printer, and a mechanical resolution of only 3°! Back then I simply tested deltas being within arbitrary total bounds and choosing a limit of how big any single delta could be. My 120+ point test feels high class compared to that 30 point test from before, but that test had no idea about these dips.
Premises
the mean of a set of data has the mathematical property that the sum of the deviations from the mean is 0.
This explains why both bad and good datasets always give almost 0.
Basically, when the result differs from zero, it is essentially an accumulation of rounding errors in the diffs, and that's why it unfortunately cannot hold useful information.
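For reference, the first premise is just the identity (written here in LaTeX):
\sum_{i=1}^{n} (x_i - \bar{x}) = \sum_{i=1}^{n} x_i - n\bar{x} = n\bar{x} - n\bar{x} = 0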
The thing that most clearly defines what you're looking for is your image: you're looking for an AREA, and this is why you're not finding the solution in these ways:
looking at a metric on single points is too local to extract that information
looking at global accumulations or parameters (global standard deviation) is too global, and you lose the signal among too much information and too many sources of variation
kurtosis (you already said so, I know, but for completeness) is outside its field of application, since this is not a probability distribution
In the end, the most suitable of the approaches you have already tried is the "homemade dip detector", because it thinks in a way that is local, but not too local.
Last but not least:
Any algorithm you choose has tacit assumptions on which it stands.
On one side, you might look for a super-clever algorithm that, with no parametrization or tuning, automatically adapts to the problem and defines its own thresholds and so on.
On the other side, there is an algorithm that stands on the writer's knowledge of the typical data behavior (good and bad) and that is itself naive, in the sense that if a different, unexpected behavior shows up, the results are unpredictable.
OK, the right way is one of these two, or something in between, depending on the application. So if it works, the "homemade dip detector" can also be a solution. There is no reason to call it crude; it may just not be sufficient for the application's needs, and that's another matter.
How to find the area
Once you have the data, the first thing is to clearly define the "theoretical straight line". I give some options:
use the RANSAC algorithm (formally the best option IMHO)
this gives you the best fit to the aligned points, disregarding the non-aligned ones
it is quite involved and maybe oversized for this job (IMHO)
consider the line defined by the first and last points
you said the dip is almost always in the same position, which is not near the boundaries, so the first and last points can be considered reliable
very easy to implement
this is an example of using knowledge about expected behavior, as I said before, so you need to decide whether and how much confidence you give to this assumption
consider a linear fit to the first 10 points and the last 10 points
this is just a more robust version of the previous option: by using more points, you are less worried that the first or last point alone was affected by some measurement problem, causing everything to fail
also quite easy to implement
if I were you, I would use this, or something inspired by it
calculate the Y value given by the straight line for each X
calculate the area between the two curves (or the areas under the function Y_dev = Y_data - Y_straight, which is mathematically the same) with this procedure (a code sketch follows this answer):
PositiveMax = 0; NegativeMax = 0;
start from the first point (its value can be positive or negative) and put it in a temporary area accumulator tmp_Area
for each next point
if the sign is the same then accumulate the value
if it is different
stop accumulating
check whether the accumulated value is greater than PositiveMax or below NegativeMax, and if so, store it as the new PositiveMax or NegativeMax
in any case, reset the accumulator to the current value with tmp_Area = Y_dev;, starting a new accumulation
at the end you will have the maximum contiguous area above the line and the maximum contiguous area below it, which I think are the scores you're looking for.
if you want, you can track only NegativeMax, based on the observed and expected data behavior
you may find it useful to add a threshold so that if a value of Y_dev is smaller than the threshold, you do not accumulate it.
this is to avoid large accumulations from many points close to the straight line, which can look similar to the accumulation of a few points far from the line
whether this is needed, and the proper threshold, must be evaluated on some sample data
you need to find an appropriate threshold for this contiguous area, and you can get it only from observation of sample data.
Again: it can be you observing the data and deciding the threshold, or you can build a repository of good and bad samples and write a program that automatically learns which threshold to use. But that is not the algorithm itself; that is how to find its operating parameters, and there is nothing wrong with doing that by human brain. It only depends on whether we're looking for a method to separate bad and good parts or for an auto-adaptive algorithm that does this; you decide the target.
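To make that procedure concrete, here is a minimal C++ sketch of the contiguous-area scan. The array y_dev stands for the Y_data - Y_straight values described above; the names, the toy data, and the omission of the optional threshold are all illustrative choices:
#include <cstdio>
#include <vector>
// Scan Y_dev = Y_data - Y_straight and report the largest contiguous area
// above the ideal line (positiveMax) and below it (negativeMax).
void contiguousAreas(const std::vector<float>& y_dev,
                     float& positiveMax, float& negativeMax) {
    positiveMax = 0.0f;
    negativeMax = 0.0f;
    if (y_dev.empty()) return;
    float tmpArea = y_dev[0];                 // accumulator for the current run
    for (std::size_t i = 1; i < y_dev.size(); ++i) {
        const bool sameSign = (tmpArea >= 0.0f) == (y_dev[i] >= 0.0f);
        if (sameSign) {
            tmpArea += y_dev[i];              // keep accumulating the current run
        } else {
            if (tmpArea > positiveMax) positiveMax = tmpArea;
            if (tmpArea < negativeMax) negativeMax = tmpArea;
            tmpArea = y_dev[i];               // start a new run with the current value
        }
    }
    // Don't forget the final run.
    if (tmpArea > positiveMax) positiveMax = tmpArea;
    if (tmpArea < negativeMax) negativeMax = tmpArea;
}
int main() {
    // Toy deviations: a dip (negative run) followed by a small overshoot.
    std::vector<float> y_dev = { 0.5f, -1.0f, -2.5f, -1.5f, 0.75f, 1.25f, -0.25f };
    float posMax = 0.0f, negMax = 0.0f;
    contiguousAreas(y_dev, posMax, negMax);
    std::printf("PositiveMax = %.2f  NegativeMax = %.2f\n", posMax, negMax);
    return 0;
}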
It turns out the result of my gut feeling and Diego's method is an average of the integral. I still don't like that name, so I described the algorithm and asked on Math.SE what to call this, which got migrated to "Cross Validated", Stats.SE.
I updated the graphs after a massive edit of my Math.SE question. It turns out I'm taking the average of a closed integral of the derivative of the data. :P First, we gather the data:
Next is the "derivative": step through the original data array to form the deltas array which is the rise of ADC values from one 0.75° step to the next. "Rise" or "slope" is what the derivative is: dy/dx.
With the "slope" or average leveled out, I can find multiple negative deltas in a row, sum them, then divide by the count at the end of the dip. The sum is an integral of the area between average and the deltas and when the dip goes back positive, I can divide the sum by the count of the dips.
During testing, I came up with a cutoff value for this average of the integral at 2.6. That was a great measure of my "gut instinct" looking at the plot thinking a part was good or bad.
In case someone else finds themselves trying to quantify this, here's the code I implemented. Note that it is only looking for negative dips. Also, dipCountLimit is defined elsewhere as 5. In addition to the dip detector/accumulator (ie Numerical Integrator) I also have a spike detector that arbitrarily flags the test as bad if any data points stray from the average by the amount of average + standard deviation. AVG+STD DEV as a spike limit was chosen arbitrarily based on the observed plots of the parts it would fail.
int dipdx = 0;
// inDipFlag also counts the length of this dip
int inDipFlag = 0;
float dips[140] = { 0.0 };
for( int idx = 0; idx < stop; idx++ ){
    const float diffFromAvg = deltas[idx] - line->AdcAvgRise;
    // state machine to monitor dips
    const int _stop = stop - 1;
    if( diffFromAvg < 0 && idx < _stop ) {
        // check the NEXT data point for a negative diff & set dipFlag to put the state in a dip
        const float nextDiff = deltas[idx+1] - line->AdcAvgRise;
        if( nextDiff < 0 && inDipFlag == 0 )
            inDipFlag = 1;
        // already IN a dip, and the next diff is negative
        if( nextDiff < 0 && inDipFlag > 0 ) {
            inDipFlag++;
        }
        // accumulate this dip
        dips[dipdx] += diffFromAvg;
        // the next data point ends this dip and we advance dipdx to the next dip
        if( inDipFlag > 0 && nextDiff > 0 ) {
            if( inDipFlag < dipCountLimit ){
                // reset the accumulator, do not advance dipdx to the next entry
                dips[dipdx] = 0.0;
            } else {
                // change this entry's value from the dip sum to its ratio
                dips[dipdx] = -dips[dipdx] / inDipFlag;
                // advance dipdx to the next entry
                dipdx++;
            }
            // the next diff isn't negative, so the dip is done
            inDipFlag = 0;
        }
    }
}