I want to test how my application behaves when it is tricked by data that falsely passes a SHA-160 check, and thus would like to compute a change to the data being summed which results in the original SHA-160 sum and would therefore be missed. I am using the Botan library in C++ to compute the sum.
How can I compute a change to a bit stream of around 1500 bits such that its SHA-160 sum is identical to the original?
The short answer is: you can't.
The long answer is: you can, but only with vast amounts of computation power. The entire purpose of hash algorithms is to make it hard to find collisions. If it were easy to find a collision, then there'd be little point in using the hash algorithm.
To solve your test problem, I suggest abstracting away the file-reading/hash-computing part of your application into a separate class, and then mocking it with a fake hash implementation in order to test the rest of the application.
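For illustration, a minimal sketch of that abstraction; the interface and class names here are hypothetical and not part of Botan:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical interface the application depends on instead of calling Botan directly.
struct HashProvider {
    virtual ~HashProvider() = default;
    virtual std::vector<std::uint8_t> sha160(const std::vector<std::uint8_t>& data) = 0;
};

// The production implementation would wrap Botan's SHA-160; this test double just lies.
struct FakeHashProvider : HashProvider {
    std::vector<std::uint8_t> forced_digest;   // digest to report for any input
    std::vector<std::uint8_t> sha160(const std::vector<std::uint8_t>&) override {
        return forced_digest;                  // simulate a "colliding" modified input
    }
};
```

In the test you inject a FakeHashProvider preloaded with the original digest, feed in modified data, and verify that the rest of the application takes the "checksum passed" path.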
I've coded a number of integer multiplication routines for Atmel's AVR architecture. I found it useful, if unconvincing, to follow a simple pattern for the multiplier (and a similar one for the multiplicand): start at zero, and step by one in every byte (in addition to any carries).
There seems to be quite a bit about testing hardware multiplier implementations, but:
What can be recommended for testing software implementations of integer multiplication? Exhaustive testing gets out of hand, if not at then beyond 16×16 bits.
Most approaches use Generate & Test:
generate test operands
The safest option is to use all combinations of operands, but with bignums, or with limited computing power or memory, this is not possible or practical. In that case, test cases are usually chosen that will (most likely) stress the tested algorithm (for example, propagating a carry, or operands near the safe limits), and only those are used. Another option is to use pseudo-random operands and hope for a valid test sample, or to combine all of these approaches in some way.
compute multiplication
Just apply the tested algorithm to the generated input data.
assess the results
You need to compare the algorithm's (multiplication) result to something. Usually a different algorithm for the same operation is used as the reference, or an inverse function (like c=a*b; if (a!=c/b) ...). The problem with this is that you cannot distinguish between an error in the tested algorithm and an error in the reference, unless you use something known to be 100% correct or compare against more than just one operation. There is also the possibility of a precomputed result table (computed on a different, 100% bug-free platform) to compare against.
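As a rough illustration of the compare-and-inverse idea (in C++ rather than AVR assembly; mul16x16 is a hypothetical stand-in for the routine under test):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical software routine under test: 16x16 -> 32-bit multiply.
std::uint32_t mul16x16(std::uint16_t a, std::uint16_t b);

void check_mul(std::uint16_t a, std::uint16_t b) {
    std::uint32_t c = mul16x16(a, b);
    // Reference: the compiler's native multiply serves as the "different algorithm".
    assert(c == static_cast<std::uint32_t>(a) * b);
    // Inverse check: c / b should recover a (skip when b is zero).
    if (b != 0) assert(c / b == a);
}
```

On the AVR itself the reference values could instead come from a precomputed table in Flash, as the hints below suggest.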
The AVR tag makes this a tough question. So some hints:
many MCUs have a lot of Flash program memory that can be used to store precomputed values to compare against
sometimes running in an emulator is faster or more comfortable, but at the risk of missing some HW-related issues.
I want to unit test a signal generator - let's say it generates a simple sine wave, or does frequency modulation of a signal onto a sine wave. It's easy enough to define sensible test parameters, and it's well known what the output should "look like" - but this is quite hard to test.
I could do (eg) a frequency analysis on the output and check that, check the maximum amplitude etc, but a) this will make the test code significantly more complicated than the code it's testing and b) doesn't fully test the shape of the output.
Is there an established way to do this?
One way to do this would be to capture a "known good" output and compare bit-for-bit against that. As long as your algorithm is deterministic you should get the same output every time. You might have to recalibrate it occasionally if anything changes, but at least you'll know if it does change at all.
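A minimal sketch of such a golden-output comparison, assuming a deterministic generator and a previously reviewed capture on disk (the generator signature and file name are placeholders):

```cpp
#include <cassert>
#include <cstddef>
#include <fstream>
#include <vector>

// Hypothetical generator under test.
std::vector<double> generate_sine(double freq_hz, double sample_rate, std::size_t n);

void test_against_golden() {
    std::vector<double> out = generate_sine(440.0, 48000.0, 4800);
    std::ifstream golden("sine_440_48k.golden", std::ios::binary);
    assert(golden && "recapture the golden file if the spec legitimately changes");
    for (double sample : out) {
        double expected = 0.0;
        golden.read(reinterpret_cast<char*>(&expected), sizeof expected);
        assert(golden && sample == expected);   // bit-for-bit agreement
    }
}
```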
This situation is a strong argument for a modeling tool like Matlab, to generate and review a well-understood test set automatically, as well as to provide an environment for automatic comparison and scoring. Especially where combinatorial explosions of test variations take place, automation makes it possible and straightforward to generate a huge dataset, locate problems, and pare back if needed to a representative qualification test set.
Often undervalued is the means to generate a large, extensive set of tests exercising both the requirements and the limits of your design's implementation. Thinking about and designing those cases up front is also a huge advantage in delivering a clean, problem-free system.
One possible semi-automated way of testing is to code up your signal generators from spec with 3 different algorithms, or perhaps by 3 different programmers in 3 different programming languages. Then randomly generate parameters within the complete range of legal control input values, and capture and compare the outputs of all 3 generators to see if they agree within some error bound. You could also include some typical and some suspected worst-case parameters. If the outputs always agree, there's a much higher probability that everything works per spec than if they don't.
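A rough sketch of that cross-check (the three generator functions, the sampled parameter range, and the error bound are all placeholders):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Three hypothetical independent implementations of the same spec.
std::vector<double> gen_a(double freq_hz, double sample_rate, std::size_t n);
std::vector<double> gen_b(double freq_hz, double sample_rate, std::size_t n);
std::vector<double> gen_c(double freq_hz, double sample_rate, std::size_t n);

void cross_check(std::mt19937& rng) {
    std::uniform_real_distribution<double> freq(20.0, 20000.0);   // legal input range
    for (int trial = 0; trial < 1000; ++trial) {
        double f = freq(rng);
        auto a = gen_a(f, 48000.0, 1024);
        auto b = gen_b(f, 48000.0, 1024);
        auto c = gen_c(f, 48000.0, 1024);
        for (std::size_t i = 0; i < a.size(); ++i) {
            assert(std::fabs(a[i] - b[i]) < 1e-9);   // arbitrary error bound
            assert(std::fabs(a[i] - c[i]) < 1e-9);
        }
    }
}
```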
I'm implementing a compression algorithm. Thing is, it is taking a second for a 20 KiB file, which is not acceptable. I think it's slow because of the calculations.
I need suggestions on how to make it faster. I have some tips already, like shifting bits instead of multiplying, but I really want to be sure of which changes actually help because of the complexity of the program. I also accept suggestions concerning compiler options, I've heard there is a way to make the program do faster mathematical calculations.
Common operations are:
pow(...) function of math library
large number % 2
large-number multiplication
Edit: the program has no floating point numbers
The question of how to make things faster should not be asked here of other people, but rather, in your environment, of a profiler. Use the profiler to determine where most of the time is spent; that will hint at which operations need to be improved, and then, if you don't know how to improve them, ask about those specific operations. It is almost impossible to say what you need to change without knowing what your original code is, and the question does not provide enough information:
pow(...) function: what are the arguments to the function? Is the exponent fixed? How much precision do you need? Can you change the function for something that will yield a similar result?
large number: how large is "large" in "large number"? What is "number" in this context? Integers? Floating point?
Your question is very broad; without enough information to give you concrete advice, we have to make do with a general roadmap.
What platform, what compiler? What is "large number"? What have you done already, what do you know about optimization?
Test a release build with optimization (/Ox /LTCG in Visual C++, -O3 IIRC for gcc)
Measure where time is spent - disk access, or your actual compression routine?
Is there a better algorithm, and code flow? The fastest operation is the one not executed.
for 20K files, the memory working set should not be an issue (unless your compression requires large data structures), so code optimizations are indeed the next step
a modern compiler implements a lot of optimizations already, e.g. replacing a division by a power-of-two constant with a bit shift.
pow is very slow for native integers
if your code is well written, you may try to post it, maybe someone's up to the challenge.
Hints :-
1) modulo 2 works only on the last bit.
2) power functions can be implemented in O(log n) time, where n is the power. (The math library should be fast enough, though.) Also, for fast power you may check this out; a minimal sketch follows below.
If nothing works, just check if there exists some fast algorithm.
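A minimal sketch of both hints, assuming non-negative integer operands (the function names are illustrative):

```cpp
#include <cstdint>

// Hint 1: modulo 2 only needs the last bit.
bool is_odd(std::uint64_t n) { return (n & 1u) != 0; }

// Hint 2: exponentiation by squaring, O(log n) multiplications instead of n.
std::uint64_t ipow(std::uint64_t base, std::uint64_t exp) {
    std::uint64_t result = 1;
    while (exp != 0) {
        if (exp & 1u) result *= base;   // use the current bit of the exponent
        base *= base;                   // square for the next bit
        exp >>= 1;
    }
    return result;                      // note: can overflow for large inputs
}
```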
I have made quite a few genetic algorithms; they work (they find a reasonable solution quickly). But I have now discovered TDD. Is there a way to write a genetic algorithm (which relies heavily on random numbers) in a TDD way?
To pose the question more generally: how do you test a non-deterministic method/function? Here is what I have thought of:
Use a specific seed. This won't help if I make a mistake in the code in the first place, but it will help find bugs when refactoring.
Use a known list of numbers. Similar to the above but I could follow the code through by hand (which would be very tedious).
Use a constant number. At least I know what to expect. It would be good to ensure that a die always reads 6 when RandomFloat(0,1) always returns 1.
Try to move as much of the non-deterministic code out of the GA as possible. This seems silly, as that is the core of its purpose.
Links to very good books on testing would be appreciated too.
Seems to me that the only way to test its consistent logic is to apply consistent input, ... or treat each iteration as a single automaton whose state is tested before and after that iteration, turning the overall nondeterministic system into testable components based on deterministic iteration values.
For variations/breeding/attribute inheritance in iterations, test those values on the boundaries of each iteration and test the global output of all iterations based on known input/output from successful iteration-subtests ...
Because the algorithm is iterative, you can use induction in your testing: show that it works for 1 iteration and for n+1 iterations given n, to prove it will produce correct results (regardless of data determinism) for a given input range/domain and the constraints on possible values in the input.
Edit: I found these strategies for testing nondeterministic systems, which might provide some insight. They might be helpful for statistical analysis of live results once the TDD/development process proves the logic is sound.
I would test random functions by testing them a number of times and analyzing whether the distribution of return values meets the statistical expectations (this involves some statistical knowledge).
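For instance, a crude sketch of such a statistical check (the generator name, sample size, and tolerances are placeholders and would need proper statistical justification):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical generator under test, returning values in [0, 1).
double RandomFloat01();

void test_distribution() {
    const int n = 100000;
    double sum = 0.0, sum_sq = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = RandomFloat01();
        assert(x >= 0.0 && x < 1.0);     // range check
        sum += x;
        sum_sq += x * x;
    }
    double mean = sum / n;
    double var  = sum_sq / n - mean * mean;
    // Uniform(0,1) has mean 0.5 and variance 1/12; allow loose tolerances.
    assert(std::fabs(mean - 0.5) < 0.01);
    assert(std::fabs(var - 1.0 / 12.0) < 0.01);
}
```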
If you're talking TDD, I would say definitely start out by picking a constant number and growing your test suite from there. I've done TDD on a few highly mathematical problems and it helps to have a few constant cases you know and have worked out by hand to run with from the beginning.
W/R/T your 4th point, moving nondeterministic code out of the GA, I think this is probably an approach worth considering. If you can decompose the algorithm and separate the nondeterministic concerns, it should make testing the deterministic parts straightforward. As long as you're careful about how you name things I don't think that you're sacrificing much here. Unless I am misunderstanding you, the GA will still delegate to this code, but it lives somewhere else.
As far as links to very good books on (developer) testing my favorites are:
Test Driven by Lasse Koskela
Working Effectively with Legacy Code by Michael Feathers
XUnit Test Patterns by Gerard Meszaros
Next Generation Java™ Testing: TestNG and Advanced Concepts by Cédric Beust & Hani Suleiman
One way I do unit testing of non-deterministic functions in GA algorithms is to put the selection of random numbers in a different function from the logic that uses those random numbers.
For example, if you have a function that takes a gene (a vector of something) and picks two random points of the gene to do something with them (mutation or whatever), you can put the generation of the random numbers in one function, and then pass them along with the gene to another function that contains the logic given those numbers.
This way you can do TDD on the logic function by passing it certain genes and certain numbers, knowing exactly what the logic should do on the gene given those numbers and being able to write asserts on the modified gene.
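A minimal sketch of that split in C++ (the gene representation and function names are illustrative):

```cpp
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

using Gene = std::vector<int>;

// Deterministic logic: act on positions chosen elsewhere. Easy to TDD with fixed i and j.
void mutate_at(Gene& gene, std::size_t i, std::size_t j) {
    std::swap(gene.at(i), gene.at(j));
}

// Thin non-deterministic wrapper: only picks the random points.
void mutate(Gene& gene, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> pick(0, gene.size() - 1);
    mutate_at(gene, pick(rng), pick(rng));
}
```

A test then calls mutate_at on a known gene with fixed indices and asserts on the exact result; only the thin wrapper remains non-deterministic.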
Another way to handle the generation of random numbers is to externalize that generation to another class, which could be accessed via a context or loaded from a config value, and to use a different one for test executions. There would be two implementations of that class: one for production that generates actual random numbers, and another for testing that accepts the numbers it will later return. In the test you can then provide the specific numbers that the class will supply to the tested code.
You could write a redundant neural network to analyze the results from your algorithm and have the output ranked based on expected outcomes. :)
Break your method down as much as you can. Then you can also have a unit test around just the random part to check the range of values. You can even have the test run it a few times to see if the result changes.
All of your functions should be completely deterministic. This means that none of the functions you are testing should generate the random number inside the function itself; you will want to pass that in as a parameter. That way, when your program is making decisions based on your random numbers, you can pass in representative numbers to test the expected output for that number. The only thing that shouldn't be deterministic is your actual random number generator, which you don't really need to worry too much about because you shouldn't be writing this yourself. You should be able to just assume it works, as long as it's an established library.
That's for your unit tests. For your integration tests, if you are doing that, you might look into mocking your random number generation, replacing it with an algorithm that will return known numbers from 0..n for every random number that you need to generate.
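One possible shape for such a replacement generator (a hypothetical interface, not tied to any particular mocking library):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Interface the production code asks for random numbers through.
struct RandomSource {
    virtual ~RandomSource() = default;
    virtual double next() = 0;              // e.g. uniform in [0, 1)
};

// Test double: returns a scripted sequence, then repeats the last value.
// Assumes a non-empty script.
struct ScriptedRandomSource : RandomSource {
    std::vector<double> values;
    std::size_t pos = 0;
    explicit ScriptedRandomSource(std::vector<double> v) : values(std::move(v)) {}
    double next() override {
        double v = values[std::min(pos, values.size() - 1)];
        if (pos + 1 < values.size()) ++pos;
        return v;
    }
};
```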
I wrote a C# TDD Genetic Algorithm didactic application:
http://code.google.com/p/evo-lisa-clone/
Let's take the simplest random result method in the application: PointGenetics.Create, which creates a random point, given the boundaries. For this method I used 5 tests, and none of them relies on a specific seed:
http://code.google.com/p/evo-lisa-clone/source/browse/trunk/EvoLisaClone/EvoLisaCloneTest/PointGeneticsTest.cs
The randomness test is simple: for a large boundary (many possibilities), two consecutive generated points should not be equal. The remaining tests check other constraints.
Well, the most testable part is the fitness function, which is where all your logic will be. This can in some cases be quite complex (you might be running all sorts of simulations based on input parameters), so you want to be sure all that stuff works, with a whole lot of unit tests, and this work can follow whatever methodology you like.
With regards to testing the GA parameters (mutation rate, cross-over strategy, whatever) if you're implementing that stuff yourself you can certainly test it (you can again have unit tests around mutation logic etc.) but you won't be able to test the 'fine-tuning' of the GA.
In other words, you won't be able to test whether the GA actually performs, other than by the goodness of the solutions found.
A test that the algorithm gives you the same result for the same input could help, but sometimes you will make changes that alter the result-picking behavior of the algorithm.
I would put the most effort into a test that ensures the algorithm gives you a correct result. If the algorithm gives a correct result for a number of static seeds and random values, then it works, or at least has not been broken by the changes made.
Another opportunity in TDD is the possibility to evaluate the algorithm. If you can automatically check how good a result is, you could add tests showing that a change hasn't lowered the quality of your results or increased your calculation time unreasonably.
If you want to test your algorithm with many base seeds, you may want two test suites: one that runs a quick test after every save to ensure that you haven't broken anything, and one that runs for a longer time for later evaluation.
I would highly suggest looking into using mock objects for your unit test cases (http://en.wikipedia.org/wiki/Mock_object). You can use them to mock out objects that make random guesses in order to cause you to get expected results instead.
Imagine that you have an internally controlled list of vendors. Now imagine that you want to match unstructured strings against that list. Most will be easy to match, but some may be reasonably impossible. The algorithm will assign a confidence to each match, but a human needs to confirm all matches produced.
How could this algorithm be unit tested? The only idea I have had so far is to take a sample of pairs matched by humans and make sure the algorithm is able to successfully match those, omitting strings that I couldn't reasonably expect our algorithm to handle. Is there a better way?
I'd try some 'canonical' pairs, both "should match" and "shouldn't match" pairs, and test only whether the confidence is above (or below) a given threshold.
Maybe you can also do some ordering checks, such as "no pair should have greater confidence than the exact-match pair", or "the pair that matches all consonants should score >= the vowels-only one".
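A rough sketch of those two kinds of checks (match_confidence and the thresholds are placeholders for whatever your algorithm actually exposes):

```cpp
#include <cassert>
#include <string>

// Hypothetical function under test: returns a confidence in [0, 1].
double match_confidence(const std::string& input, const std::string& vendor);

void test_canonical_pairs() {
    // "Should match" / "shouldn't match" pairs with arbitrary example thresholds.
    assert(match_confidence("Acme Corp.", "ACME Corporation") > 0.8);
    assert(match_confidence("Bob's Bait Shop", "ACME Corporation") < 0.2);

    // Ordering check: nothing should beat the exact match.
    double exact = match_confidence("ACME Corporation", "ACME Corporation");
    assert(match_confidence("Acme Corp.", "ACME Corporation") <= exact);
}
```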
You can also test if the confidence of strings your algorithm won't handle well is sufficiently low. In this way you can see if there is a threshold over which you can trust your algorithm as well.
An interesting exercise would be to store the human answers that correct your algorithm and try to see if you could improve your algorithm to not get them wrong.
If you can, add the new matches to the unit tests.
I don't think there's a better way than what you describe; effectively, you're just using a set of predefined data to test that the algorithm does what you expect. For any very complicated algorithm which has very nonlinear inputs and outputs, that's about the best you can do; choose a good test set, and assure that you run properly against that set of known values. If other values come up which need to be tested in the future, you can add them to the set of tested values.
That sounds fair. If it's possible (given time constraints) to get as large a sample of human matches as you can, you could get a picture of how well your algorithm is doing. You could design specific unit tests which pass if they're within X% of correctness.
Best of luck.
I think there are two issues here: the way your code behaves according to the algorithm, and the way the algorithm is successful (i.e. it does not accept answers which a human later rejects, and does not reject answers a human would accept).
Issue 1 is regular testing. For issue 2, I would go with previous result sets (i.e. compare the algorithm's results to human ones).
What you describe is the best way, because what counts as the best match is subjective; only a human can come up with the appropriate test cases.
It sounds as though you are describing an algorithm which is deterministic, but one which is sufficiently difficult that your best initial guess at the correct result is going to be whatever your current implementation delivers to you (aka deterministic implementation to satisfy fuzzy requirements).
For those sorts of circumstances, I will use a "Guru Checks Changes" pattern. Generate a collection of inputs, record the outputs, and in subsequent runs of the unit tests, verify that the outcome is consistent with the previous results. Not so great for ensuring that the target algorithm is implemented correctly, but it is effective for ensuring that the most recent refactoring hasn't changed the behavior in the test space.
A variation of this, which may be more palatable for your circumstance, is to start from the same initial data collection, but rather than trying to preserve precisely the same result every time, you instead predefine some buckets and flag any time an implementation change moves a test result from one confidence bucket to another.
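A small sketch of the bucket variant (the bucket boundaries, match_confidence, and the recorded-results store are all assumptions):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical matcher under test, returning a confidence in [0, 1].
double match_confidence(const std::string& input, const std::string& vendor);

// Map a raw confidence to a coarse bucket; the boundaries are an arbitrary example.
int bucket(double confidence) {
    if (confidence >= 0.9) return 2;   // high
    if (confidence >= 0.5) return 1;   // medium
    return 0;                          // low
}

// recorded: input -> bucket observed with the previous, reviewed implementation.
void check_buckets(const std::map<std::string, int>& recorded, const std::string& vendor) {
    for (const auto& [input, expected_bucket] : recorded) {
        // Flag (fail) whenever a change moves a result across a bucket boundary.
        assert(bucket(match_confidence(input, vendor)) == expected_bucket);
    }
}
```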
Samples that have clearly correct answers (exact matches, null matches, high value corner cases) should be kept in a separate test.