When optimizing an SVGP with a Poisson likelihood for a big data set, I see what I think are exploding gradients.
After a few epochs I see a sudden, spiky drop of the ELBO, which then recovers only very slowly, after wiping out all the progress made before.
Roughly 21 iterations correspond to one epoch.
This spike (at least the second one) resulted in a complete shift of the parameters (for vector-valued parameters I just plotted the norm to see the changes):
How can I deal with that? My first idea would be to clip the gradients, but that seems to require digging around in the GPflow code.
My Setup:
Training works via natural gradients for the variational parameters and Adam for everything else, with a slowly (linearly) increasing schedule for the natural-gradient gamma (step size).
The batch size and the number of inducing points are as large as my setup allows
(both 2^12, with the data set consisting of ~88k samples). I include 1e-5 jitter and initialize the inducing points with k-means.
I use a combined kernel consisting of an RBF, a Matern52, a periodic and a linear kernel over a total of 95 features (many of them due to a one-hot encoding), all learnable.
The lengthscales are transformed with gpflow.transforms.
import numpy as np
import gpflow
from gpflow.kernels import Matern52, Periodic, Linear, RBF

with gpflow.defer_build():
    k1 = Matern52(input_dim=len(kernel_idxs["coords"]), active_dims=kernel_idxs["coords"], ARD=False)
    k2 = Periodic(input_dim=len(kernel_idxs["wday"]), active_dims=kernel_idxs["wday"])
    k3 = Linear(input_dim=len(kernel_idxs["onehot"]), active_dims=kernel_idxs["onehot"], ARD=True)
    k4 = RBF(input_dim=len(kernel_idxs["rest"]), active_dims=kernel_idxs["rest"], ARD=True)

    k1.lengthscales.transform = gpflow.transforms.Exp()
    k2.lengthscales.transform = gpflow.transforms.Exp()
    k3.variance.transform = gpflow.transforms.Exp()
    k4.lengthscales.transform = gpflow.transforms.Exp()

    m = gpflow.models.SVGP(X, Y, k1 + k2 + k3 + k4, gpflow.likelihoods.Poisson(), Z,
                           mean_function=gpflow.mean_functions.Constant(c=np.ones(1)),
                           minibatch_size=MB_SIZE, name=NAME)
    m.mean_function.set_trainable(False)

m.compile()
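The training loop then looks roughly like this (a simplified sketch of the schedule described above, assuming the GPflow 1.x gpflow.training.NatGradOptimizer / AdamOptimizer API; GAMMA_MAX, ADAM_LR, ITERS_PER_EPOCH and N_EPOCHS are placeholder names, not my actual values):

m.q_mu.set_trainable(False)     # keep Adam away from the variational parameters
m.q_sqrt.set_trainable(False)

adam = gpflow.training.AdamOptimizer(ADAM_LR)
for epoch in range(N_EPOCHS):
    # linearly ramp the natural-gradient step size up to GAMMA_MAX
    gamma = GAMMA_MAX * (epoch + 1) / N_EPOCHS
    natgrad = gpflow.training.NatGradOptimizer(gamma=gamma)
    natgrad.minimize(m, var_list=[(m.q_mu, m.q_sqrt)], maxiter=ITERS_PER_EPOCH)
    adam.minimize(m, maxiter=ITERS_PER_EPOCH)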
UPDATE: Using only ADAM
Following the suggestion by Mark, I switched to Adam only, which got rid of that sudden explosion. However, I still initialize with one epoch of natural gradients only, which seems to save a lot of time.
In addition, the variational parameters now change much less abruptly (in terms of their norm, at least). I guess they'll converge much more slowly now, but at least it's stable.
Just to add to Mark's answer above: when using natural gradients in non-conjugate models, it can take a bit of tuning to get the best performance, and instability is potentially a problem. As Mark points out, the large steps that provide potentially faster convergence can also land the parameters in bad regions of the parameter space. When the variational approximation is good (i.e. the true and approximate posteriors are close), there is good reason to expect that the natural gradient will perform well, but unfortunately there is no silver bullet in the general case. See https://arxiv.org/abs/1903.02984 for some intuition.
This is very interesting. Perhaps not using natural gradients is a good idea as well. Clipping gradients does seem like a hack that could work, and yes, it would require digging around in the GPflow code a bit. One tip that can help here is to not use the GPflow optimisers directly. The model._likelihood_tensor contains the TF tensor that should be optimised. With some manual TensorFlow code, you can clip the gradients of this tensor before running an optimiser.
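A rough sketch of what that might look like (untested, assuming the GPflow 1.x / TF 1.x session API; the learning rate, the clipping norm of 10 and num_iterations are placeholders):

import tensorflow as tf

session = m.enquire_session()
objective = tf.negative(m._likelihood_tensor)   # minimise the negative (log-)likelihood tensor

optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
grads_and_vars = optimizer.compute_gradients(objective)
clipped = [(tf.clip_by_norm(g, 10.0), v)        # clip each gradient's norm before applying it
           for g, v in grads_and_vars if g is not None]
train_op = optimizer.apply_gradients(clipped)

session.run(tf.variables_initializer(optimizer.variables()))
for _ in range(num_iterations):                 # num_iterations: placeholder
    session.run(train_op)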
In general, I think this sounds like you've stumbled on an actual research problem. Usually these large gradients have a good reason in the model, which can be addressed with careful thought. Is it variance in some Monte Carlo estimate? Is the objective function behaving badly?
Regarding why not using natural gradients helps: natural gradients use the Fisher matrix as a preconditioner to perform second-order optimisation. Doing so can result in quite aggressive moves in parameter space. In certain cases (when there are usable conjugacy relations) these aggressive moves can make optimisation much faster. This case, with the Poisson likelihood, is not one where conjugacy relations will necessarily help optimisation. In fact, the Fisher preconditioner can often be detrimental, particularly when the variational parameters are not near the optimum.
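For concreteness, the natural-gradient step in question has the standard form

\[ \xi_{t+1} = \xi_t - \gamma \, F_{\xi_t}^{-1} \, \nabla_{\xi} \mathcal{L}(\xi_t), \]

where \(F_{\xi_t}\) is the Fisher information matrix of the variational distribution, \(\gamma\) is the step size (the gamma being scheduled in the question) and \(\mathcal{L}\) is the negative ELBO; it is the \(F^{-1}\) preconditioning that produces the aggressive moves.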
Current hash functions are designed to change the hash drastically even if only a very small portion of the input changes. What I need is a hash algorithm whose output changes in proportion to how much the input changes. For example, I need something similar to this:
Hash("STR1") => 1000
Hash("STR2") => 1001
Hash("STR3") => 1002
etc.
I'm not good at algorithms, and I have never heard of such an implementation, although I'm almost sure someone must already have come up with it.
My current requirement is a large output size (512 bits, maybe?) to avoid collisions.
Thanks
UPDATE
I think I should clarify my goal; I see that I did a very poor job explaining what I need. Sorry, I'm not a native English speaker or a great communicator.
So basically I need this hash algorithm for finding similar binary files. You can think of it as an antivirus hashing algorithm: it calculates a file checksum, but unlike traditional hash functions it can still detect the malware binary even after some small modification. This is pretty much what I'm looking for.
Another aspect is avoiding collisions. Let me explain what I mean by that; it's not a conflicting goal. I want Hash("STR1") to produce 1000 and Hash("STR2") to produce 1001, or maybe 1010; it doesn't matter, as long as the value is close to the previous hash. But Hash("This is a very large string or maybe even binary data" + 100 random chars) should not produce a value close to 1000. I understand it will not always work and there will be some hash/hash-range collisions, but I think I can introduce another hashing algorithm and verify with both to minimize collisions.
So what do you think? Maybe there is a better way to achieve my goal, maybe I'm asking too much, I don't know. I'm not well versed in cryptography, math or algorithms.
Thank you again for your time and effort
How about a simple summation? Your hash can then wrap at the desired size, and if you take this into account during hash comparisons, a small difference in inputs should yield a small difference in hashes.
However, I think "minimal collisions" and "proportional change in output" are conflicting goals.
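A tiny illustration of the idea (my own sketch, with an arbitrary 16-bit wrap): add up the byte values and take the total modulo the hash size, so a one-character change in the input moves the hash by only a small amount.

def sum_hash(data: bytes, size: int = 2**16) -> int:
    # wrap the running total at the desired hash size
    return sum(data) % size

print(sum_hash(b"STR1"), sum_hash(b"STR2"))   # the two hashes differ by exactly 1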
This is called, in other domains, perceptual hashing.
One approach to this is as follows:
Get a training multiset of n-grams. (E.g. if n=2 and your training data was "This is a test" your training set would be "Th", "hi", "is", "s ", etc)
Calculate the frequencies of said n-grams and sort them in descending order.
Then the hash of a word is the first bits of "for each n-gram in the database, is this word's frequency of said n-gram higher than the average frequency?" (sketched in code below).
Note that this can and will result in many collisions with similar words, unfortunately, unless the hash length is absurdly long.
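A rough sketch of the steps above in Python (the details are my guesses: bigrams, a toy training text, comparing each word's n-gram count against the average reference frequency, and a 16-bit hash):

from collections import Counter

def ngrams(text, n=2):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# "training" step: bigram frequencies from a small reference corpus
reference = Counter(ngrams("This is a test"))
avg_freq = sum(reference.values()) / len(reference)

def similarity_hash(word, num_bits=16):
    counts = Counter(ngrams(word))
    top_grams = sorted(reference, key=reference.get, reverse=True)[:num_bits]
    bits = 0
    for i, gram in enumerate(top_grams):
        # one bit per common n-gram: does this word use it more than the average?
        if counts[gram] > avg_freq:
            bits |= 1 << i
    return bits

print(similarity_hash("This is a text"), similarity_hash("Completely different data"))

Similar inputs share most of their n-gram counts, so they flip mostly the same bits and end up with nearby hashes.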
MD5 or SHA-x is not what you want.
According to Wikipedia, the substitution cipher, for example, has no avalanche effect ("avalanche effect" is the term you are looking for).
In terms of hashing, you could use some kind of character total.
For example:
char* hashme = "hallo123";
int result = 0;
for (int i = 0; i < 8; ++i) {
    result += hashme[i];
}
It may be geared towards kids, but the old NSA Kid's section has some really good ideas.
Of course, these algorithms are really insecure, so you cannot use this in place of REAL encryption. (But you can't use a real encryption algorithm when you just want to have fun, either.)
The number grid involves setting up a grid, then using the coordinates of each letter:
Further ideas:
Mix up the letter arrangement
Convert numbers to binary to obfuscate
A winding way also uses a grid. Essentially, the letters are packed in the grid left to right, in rows downwards. The output is produced by slicing vertically through the grid:
Typically, hash and encryption algorithms oriented towards cryptography behave in the exact opposite way of what you're looking for (i.e. small changes in the input cause large changes in the output and vice versa), so this class of algorithms is a dead end.
As a quick digression on why these algorithms behave this way: of necessity, they're designed to obscure statistical relationships between the input and output, to make them more difficult to crack. For example, in the English language the letter "e" is by far the most commonly used letter; in some very weak classical ciphers you could simply find the most common ciphertext letter and figure that it corresponds to "e" (e.g. if "n" is the most common letter, then odds are n = e). A statistical pattern like the one you describe would likely make the algorithm significantly more vulnerable to chosen-plaintext, known-plaintext, man-in-the-middle, and replay attacks.
The man in the middle and replay attacks would be made significantly easier by the fact that it would be much easier to edit the ciphertext to achieve the desired plaintext without knowing the key (especially if you have access to a couple of chosen plaintexts).
If you know that
7/19/2016 1:35 transfer $10 from account x to account y
(where the datestamp is used to defend against a replay attack) encodes to
12345678910
whereas
7/19/2016 1:40 transfer $10 from account x to account y
encodes to
12445678910
it's a pretty safe guess that
12545678910
will mean something like
7/19/2016 1:45 transfer $10 from account x to account y
Without having access to the original key, you could replay this packet on a regular basis to continue to steal money from someone's account simply by making a trivial edit. Granted, this is a fairly contrived example, but it still illustrates the basic problem.
My understanding of what you're looking for is statistical similarity between files. This might help some: https://en.wikipedia.org/wiki/Semantic_similarity
This does indeed exist. The term is Locality-sensitive hashing. A concrete implementation can be found here.
Depending on the source document, you might want to look at digital forensics or VisualRank (from Google) for finding similar images and video. For textual data this is commonly used in anti-spam (read more here). For binary files you might want to first run a disassembler and then run the algorithm on the text version; this is just my feeling, I don't have research to back this statement, but it would be an interesting hypothesis to test.
I've been reading this website: http://www.csimn.com/CSI_pages/PIDforDummies.html and I'm confused about the proportional integral part. Here's what it says.
Proportional control
Here’s a diagram of the controller when we have enabled only P control:
In Proportional Only mode, the controller simply multiplies the Error by the Proportional Gain (Kp) to get the controller output.
The Proportional Gain is the setting that we tune to get our desired performance from a “P only” controller.
A match made in heaven: The P + I Controller
If we put Proportional and Integral Action together, we get the humble PI controller. The Diagram below shows how the algorithm in a PI controller is calculated.
The tricky thing about Integral Action is that it will really screw up your process unless you know exactly how much Integral action to apply.
A good PID Tuning technique will calculate exactly how much Integral to apply for your specific process - but how is the Integral Action adjusted in the first place?
As you can see, the proportional part is easy to understand: it says that you multiply the error by a tuning variable (the gain). The part that I don't get is where the P and I come from in the second part, and what mathematical operation is done with them. I don't have a degree in mathematics or advanced calculus knowledge, so I would appreciate it if you could keep it at algebra level.
There is a big part missing from the text: the actual physical system that turns the controller output into the process behaviour, and the actual physical variable being controlled.
Think of the integral as some kind of averaging operation that filters out small oscillations in the PV input. It also represents some kind of memory of the immediate past of the process.
A moving exponential average, for instance, can be thought of being a mix of integral and proportional action.
Staying with the car-driving example: if you come to a curve where you need the steering wheel in a certain position to follow the arc, you don't just yank the wheel to that position, you move it there gradually (most of the time). Exactly such ramp-up and ramp-down actions are effects of the integral action part.
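As a small made-up illustration of that moving-average remark (alpha is an arbitrary smoothing constant):

alpha = 0.1
avg = 0.0
for sample in [1.0, 1.0, 1.0, 0.0, 0.0]:
    # each new sample pulls the average a little (proportional-like),
    # while the accumulated history acts as memory (integral-like)
    avg += alpha * (sample - avg)
    print(round(avg, 3))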
The integral part is just a summation, also multiplied by some constant.
In analogue electronics, integration is done with an integrator circuit (an amplifier with capacitive feedback).
Digital integration of first order is just:
output += input*dt;
and second order is:
temp += input*dt;
output += temp*dt;
where dt is the duration of one iteration of the loop (driven by a timer or whatever).
Do not forget that a PI regulator can have a more complicated response, for example:
i1 += input*dt;
i2 += i1*dt;
i3 += i2*dt;
output = a0*input + a1*i1 + a2*i2 + a3*i3 ...;
where a0 is the P part.
Now, the I regulator keeps adding more and more control value until the controlled value matches the preset value: the longer the error persists, the harder it pushes. This creates fast oscillations around the preset value in comparison to a P regulator with the same gain, but on average the control time is smaller than with a P-only regulator. Therefore the I gain is usually much, much smaller, which creates the memory and smoothing effect LutzL mentioned (while the regulation time stays similar to, or smaller than, that of P-only regulation).
The controlled device has its own response, which can be represented as a differential equation. There is a lot of theory in cybernetics/control theory about shaping the regulator response to match your process needs, such as:
quality of control
reaction times
maximum oscillation amplitude
stability
But for all of that you need differential math, like solving systems of differential equations of arbitrary order. I strongly recommend using the Laplace transform; many people also use the Z transform instead.
So the I regulator adds speed to the regulation, but it also creates bigger oscillations, and when it is not matched to the regulated system properly it creates instability. Integration also adds overflow risk to the regulation (analogue integration is very sensitive to it). Also keep in mind that you can subtract the I part from the control value instead, which does the exact opposite. Sometimes a combination of several I parts is used to match the desired regulation response shape.
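To make the P and I parts concrete, here is a minimal sketch of a discrete PI loop driving a made-up first-order process; the gains, the time constant and the plant model are arbitrary illustration values, not tuned for anything:

dt = 0.01       # loop period
Kp = 2.0        # proportional gain
Ki = 0.5        # integral gain
tau = 1.0       # process time constant

setpoint = 1.0
process = 0.0   # controlled (measured) value
integral = 0.0  # accumulated error: the controller's "memory"

for step in range(5000):
    error = setpoint - process
    integral += error * dt                 # I part: running sum of the error
    output = Kp * error + Ki * integral    # PI control law

    # crude first-order plant: the process value chases the control output
    process += (output - process) * dt / tau

print(round(process, 2))                   # approaches the setpoint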
OK, I have been working on a random image selector and queue system (so you don't see the same images too often).
All was going swimmingly (as far as my crappy code goes) until I got to the random bit. I wanted to test it, but how do you test for that? There is no Debug.Assert(i.IsRandom) (sadly) :D
So, I put my brain to it after watering it with some tea and came up with the following; I was just wondering if I could have your thoughts?
Basically I knew the random bit was the problem, so I ripped that out into a delegate (which would then be passed to the object's constructor).
I then created a class that pretty much performs the same logic as the live code, but remembers the value selected in a private variable.
I then threw that delegate to the live class and tested against that:
i.e.
Debug.Assert(myObj.RndVal == RndIntTester.ValuePassed);
But I couldn't help thinking: was I wasting my time? I ran it through lots of iterations to see if it fell over at any point, etc.
Do you think I was wasting my time with this? Or could I have got away with:
GateKiller's answer reminded me of this:
Update to Clarify
I should add that I basically never want to see the same result more than X number of times from a pool of Y size.
The addition of the test container basically allowed me to see if any of the previously selected images were "randomly" selected.
I guess technically the thing being tested here is not the RNG (since I never wrote that code), but the fact that I'm expecting random results from a limited pool and I want to track them.
Test from the requirement : "so you don't see the same images too often"
Ask for 100 images. Did you see an image too often?
There is a handy list of statistical randomness tests and related research on Wikipedia. Note that you won't know for certain that a source is truly random with most of these, you'll just have ruled out some ways in which it may be easily predictable.
If you have a fixed set of items, and you don't want them to repeat too often, shuffle the collection randomly. Then you will be sure that you never see the same image twice in a row, feel like you're listening to Top 20 radio, etc. You'll make a full pass through the collection before repeating.
Item[] foo = …
for (int idx = foo.length; idx > 1; --idx) {
    /* Pick random number from half-open interval [0, idx) */
    int rnd = random(idx);
    Item tmp = foo[idx - 1];
    foo[idx - 1] = foo[rnd];
    foo[rnd] = tmp;
}
If you have too many items to collect and shuffle all at once (10s of thousands of images in a repository), you can add some divide-and-conquer to the same approach. Shuffle groups of images, then shuffle each group.
A slightly different approach that sounds like it might apply to your revised problem statement is to have your "image selector" implementation keep its recent selection history in a queue of at most Y length. Before returning an image, it checks whether that image is already in the queue X times; if so, it randomly selects another, until it finds one that passes.
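A possible sketch of that history-queue selector (my own illustration; the image list and the X and Y values are arbitrary):

import random
from collections import deque

def make_selector(images, max_repeats=1, history_len=3):
    history = deque(maxlen=history_len)        # the last Y selections
    def select():
        while True:                            # X and Y must leave at least one valid candidate
            candidate = random.choice(images)
            if history.count(candidate) < max_repeats:
                history.append(candidate)
                return candidate
    return select

pick = make_selector(["a.png", "b.png", "c.png", "d.png"])
print([pick() for _ in range(8)])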
If you are really asking about testing the quality of the random number generator, I'll have to open the statistics book.
It's impossible to test if a value is truly random or not. The best you can do is perform the test some large number of times and test that you got an appropriate distribution, but if the results are truly random, even this has a (very small) chance of failing.
If you're doing white box testing, and you know your random seed, then you can actually compute the expected result, but you may need a separate test to test the randomness of your RNG.
The generation of random numbers is
too important to be left to chance. -- Robert R. Coveyou
To solve the psychological problem:
A decent way to prevent apparent repetition is to select a few items at random from the full set, discarding duplicates. Play those, then select another few. How many is "a few" depends on how fast you're playing them and how big the full set is, but, for example, avoiding a repeat within the larger of the last 20 items or the last 5 minutes might be OK. Do user testing; as the programmer you'll be so sick of slideshows that you're not a good test subject.
To test randomising code, I would say:
Step 1: specify how the code MUST map the raw random numbers to choices in your domain, and make sure that your code correctly uses the output of the random number generator. Test this by mocking the generator (or seeding it with a known test value if it's a PRNG); see the sketch after these steps.
Step 2: make sure the generator is sufficiently random for your purposes. If you used a library function, you do this by reading the documentation. If you wrote your own, why?
Step 3 (advanced statisticians only): run some statistical tests for randomness on the output of the generator. Make sure you know what the probability is of a false failure on the test.
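A possible shape for Step 1, assuming the selector takes an injectable RNG (the function and names here are made up for illustration):

import random

def pick_image(images, rng):
    # code under test (assumed interface): the selector uses an injected RNG
    return images[rng.randrange(len(images))]

def test_pick_image_maps_rng_to_choice():
    images = ["a.png", "b.png", "c.png"]
    rng_for_code = random.Random(42)       # known seed -> deterministic stream
    rng_for_oracle = random.Random(42)     # identical stream drives the oracle
    for _ in range(100):
        expected = images[rng_for_oracle.randrange(len(images))]
        assert pick_image(images, rng_for_code) == expected

test_pick_image_maps_rng_to_choice()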
There are whole books one could write about randomness and evaluating whether something appears to be random, but I'll save you the pages of mathematics. In short, you can use a chi-square test to determine how well an apparently "random" distribution fits what you expect.
If you're using Perl, you can use the Statistics::ChiSquare module to do the hard work for you.
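If you're in Python instead, the same idea is only a few lines with scipy.stats.chisquare (a sketch with made-up image names; the expected counts default to a uniform split):

import random
from collections import Counter
from scipy.stats import chisquare

images = ["a.png", "b.png", "c.png", "d.png"]
draws = [random.choice(images) for _ in range(10000)]
counts = Counter(draws)
observed = [counts[img] for img in images]

stat, p_value = chisquare(observed)   # compares observed counts to a uniform expectation
print(stat, p_value)                  # a very small p-value suggests the distribution is off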
However if you want to make sure that your images are evenly distributed, then you probably won't want them to be truly random. Instead, I'd suggest you take your entire list of images, shuffle that list, and then remove an item from it whenever you need a "random" image. When the list is empty, you re-build it, re-shuffle, and repeat.
This technique means that given a set of images, each individual image can't appear more than once every iteration through your list. Your images can't help but be evenly distributed.
All the best,
Paul
What Random and similar functions give you are pseudo-random numbers: a series of numbers produced by a function. Usually you give that function its first input parameter (a.k.a. the "seed"), which is used to produce the first "random" number. After that, each value is used as the input for the next iteration of the cycle. You can check the Wikipedia article on "Pseudorandom number generator"; the explanation there is very good.
All of these algorithms have something in common: the series repeats itself after a number of iterations. Remember, these aren't truly random numbers, only series of numbers that seem random. To select one generator over another, you need to ask yourself: what do you want it for?
Can you test randomness? Indeed you can. There are plenty of tests for it. The first and simplest is, of course, to run your pseudo-random number generator an enormous number of times and tally how many times each result appears. In the end, each result should have appeared a number of times very close to (number of iterations)/(number of possible results). The greater the standard deviation from this, the worse your generator is.
The second is: how many random numbers are you using at a time? 2, 3? Take them in pairs (or triplets) and repeat the previous experiment: after a very long run, every expected combination should have appeared at least once, and again the number of times each combination appeared shouldn't be too far from the expected value. There are generators that work just fine when you take one or two numbers at a time but fail spectacularly when you take 3 or more (RANDU, anyone?).
There are other, more complex tests: some involve plotting the results on a logarithmic scale, or onto a plane with a circle in the middle and then counting how many of the points fall inside it, and so on. I believe the two above should suffice most of the time (unless you're a finicky mathematician).
Random is Random. Even if the same picture shows up 4 times in a row, it could still be considered random.
My opinion is that anything random cannot be properly tested.
Sure you can attempt to test it, but there are so many combinations to try that you are better off just relying on the RNG and spot checking a large handful of cases.
Well, the problem is that random numbers, by definition, can repeat (because they are... wait for it: random). Maybe what you want to do is save the latest random number, compare the newly calculated one to it, and if they're equal just calculate another... but now your numbers are less random (I know there's no such thing as "more or less" randomness, but let me use the term just this once), because they are guaranteed not to repeat.
Anyway, you should never give random numbers so much thought. :)
As others have pointed out, it is impossible to really test for randomness. You can (and should) have the randomness contained to one particular method, and then write unit tests for every other method. That way, you can test all of the other functionality, assuming that you can get a random number out of that one last part.
Store the random values, and before you use the next generated random number, check it against the stored values.
Any good pseudo-random number generator will let you seed it. If you seed the generator with the same number, then the stream of random numbers generated will be the same. So why not seed your random number generator and then create your unit tests based on that particular stream of numbers?
To get a series of non-repeating random numbers:
Create a list of random numbers.
Add a sequence number to each random number
Sort the sequenced list by the original random number
Use your sequence number as a new random number.
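In code, those steps amount to sorting sequence numbers by a random key (a small sketch):

import random

n = 10
keyed = [(random.random(), seq) for seq in range(n)]   # (random number, sequence number)
keyed.sort()                                           # sort by the random number
non_repeating = [seq for _, seq in keyed]              # sequence numbers in random order
print(non_repeating)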
Don't test the randomness; test to see whether the results you're getting are desirable (or, rather, try to get undesirable results a few times before accepting that your results are probably going to be desirable).
It will be impossible to ensure that you'll never get an undesirable result if you're testing a random output, but you can at least increase the chances that you'll notice it happening.
I would either take N pools of Y size, checking for any results that appear more than X number of times, or take one pool of N*Y size, checking every group of Y size for any result that appears more than X times (1 to Y, 2 to Y + 1, 3 to Y + 2, etc). What N is would depend on how reliable you want the test to be.
Random numbers are generated from a distribution. In this case, every value should have the same probability of appearing. If you calculated an infinite number of randoms, you would get the exact distribution.
In practice, call the function many times and check the results. If you expect to have N images, generate 100*N randoms, then count how many times each expected number was found. Most should appear 70-130 times. Re-run the test with a different random seed to see if the results differ.
If you find that the generator you use now is not good enough, you can easily find something better. Google for "Mersenne Twister"; that is much more random than you will ever need.
To avoid images re-appearing, you need something less random. A simple approach would be to check for the disallowed values and, if you draw one of those, re-calculate.
Although you cannot test for randomness, you can test for correlation, or distribution, in a sequence of numbers.
Hard to test goal: Each time we need an image, select 1 of 4 images at random.
Easy to test goal: For every 100 images we select, each of the 4 images must appear at least 20 times.
I agree with Adam Rosenfield. For the situation you're talking about, the only thing you can usefully test for is distribution across the range.
The situation I usually encounter is that I'm generating pseudorandom numbers with my favourite language's PRNG, and then manipulating them into the desired range. To check whether my manipulations have affected the distribution, I generate a bunch of numbers, manipulate them, and then check the distribution of the results.
To get a good test, you should generate at least a couple orders of magnitude more numbers than your range holds. The more values you use, the better the test. Obviously if you have a really large range, this won't work since you'll have to generate far too many numbers. But in your situation it should work fine.
Here's an example in Perl that illustrates what I mean:
my %dist;
for (my $i = 0; $i <= 100000; $i++) {
    my $r = rand;            # Get the random number
    $r = int($r * 1000);     # Move it into the desired range
    $dist{$r}++;             # Count the occurrences of each number
}
print "Min occurrences: ", (sort { $a <=> $b } values %dist)[0], "\n";
print "Max occurrences: ", (sort { $b <=> $a } values %dist)[0], "\n";
If the spread between the min and max occurrences is small, then your distribution is good. If it's wide, then your distribution may be bad. You can also use this approach to check whether your range was covered and whether any values were missed.
Again, the more numbers you generate, the more valid the results. I tend to start small and work up to whatever my machine will handle in a reasonable amount of time, e.g. five minutes.
Supposing you are testing a range of integers for randomness, one way to verify this is to create a gajillion (well, maybe 10,000 or so) 'random' numbers and plot their occurrences on a histogram.
****** ****** ****
***********************************************
*************************************************
*************************************************
*************************************************
*************************************************
*************************************************
*************************************************
*************************************************
*************************************************
1 2 3 4 5
12345678901234567890123456789012345678901234567890
The above shows a 'relatively' normal distribution.
If it looked more skewed, such as this:
****** ****** ****
************ ************ ************
************ ************ ***************
************ ************ ****************
************ ************ *****************
************ ************ *****************
*************************** ******************
**************************** ******************
******************************* ******************
**************************************************
1 2 3 4 5
12345678901234567890123456789012345678901234567890
Then you can see there is less randomness. As others have mentioned, there is the issue of repetition to contend with as well.
If you were to write a binary file of, say, 10,000 random numbers from your generator (using, say, random numbers from 1 to 1024) and try to compress that file with some compression tool (zip, gzip, etc.), then you could compare the two file sizes. If there is 'lots' of compression, then it's not particularly random. If there isn't much change in size, then it's 'pretty random'.
Why this works
Compression algorithms look for patterns (repetition and otherwise) and reduce them in some way. One way to look at these compression algorithms is as a measure of the amount of information in a file. A file that compresses heavily has little information (i.e. little randomness), and a file that barely compresses has a lot of information (randomness).
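A quick sketch of that compression test in Python (zlib stands in for zip/gzip, and os.urandom stands in for your generator's output):

import os
import zlib

data = os.urandom(10000)                 # 10,000 "random" bytes to test
compressed = zlib.compress(data, 9)
ratio = len(compressed) / len(data)
print("compression ratio:", round(ratio, 2))   # close to 1.0 -> "pretty random"; much smaller -> not so random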