If I am testing a method that is supposed to return a value based on a certain criterion (maybe it's validating credentials):
testAuthenticate_ValidCredentials_ReturnTrue
Should I also write separate methods to test whether it returns the correct value when the criterion isn't met?
testAuthenticate_InValidCredentials_ReturnFalse
In other words, should I run multiple tests per method?
Yes, it is better to tailor each test to check only one functional aspect of your code, so separate tests for valid (authenticated) and invalid (rejected) credentials are the proper approach.
As to the larger issue of how many tests to write in total, ideally you want to exercise every source line of the code being tested.
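For illustration, here is a minimal sketch of that one-behaviour-per-test layout in C++; authenticate and the hard-coded credentials are made-up stand-ins for the real method under test:

#include <cassert>
#include <string>

// Made-up stand-in for the method under test.
bool authenticate(const std::string& user, const std::string& password) {
    return user == "alice" && password == "s3cret";
}

// One behaviour per test: the name states the input and the expected result.
void testAuthenticate_ValidCredentials_ReturnTrue() {
    assert(authenticate("alice", "s3cret"));
}

void testAuthenticate_InValidCredentials_ReturnFalse() {
    assert(!authenticate("alice", "wrong-password"));
}

int main() {
    testAuthenticate_ValidCredentials_ReturnTrue();
    testAuthenticate_InValidCredentials_ReturnFalse();
    return 0;
}

If either behaviour regresses, exactly one of the two tests fails, which points straight at the broken case.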
Let's say I have a list of about 210,000 English words.
I need to use all of these 210,000 words as test cases.
I need to make sure every word in that list is covered every time I run my tests.
The question is: what is the best practice for storing these words in my test?
Should I keep all these words in a slice (would that be too large a slice?), or should I keep them in an external file (like words.txt) and load the file line by line when needed?
Test data is usually stored in a directory named testdata to keep it separate from the other source code and data files (see the output of go help test). The go tool ignores files inside that directory.
210,000 words should take up only single-digit megabytes of RAM anyway, which isn't much. Just have a helper function that reads the words from the file before each test (perhaps caching them), or define a TestMain() function which reads them once and stores them in a global variable for access by the tests that subsequently run.
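As a rough back-of-the-envelope check, assuming an average English word of about 10 characters: 210,000 × 10 bytes ≈ 2.1 MB of character data, plus roughly 16 bytes of string-header overhead per entry in a slice on a 64-bit platform (210,000 × 16 ≈ 3.4 MB), so the whole list comfortably fits in single-digit megabytes.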
Edit: Regarding best practices, it's sometimes nicer to store test data in testdata even if the data isn't large. For example, I sometimes need to use multiple short JSON snippets in test cases, and perhaps use them more than once. Storing them in appropriately named files under a subdirectory of testdata can be more readable than littering Go code with a bunch of JSON snippets.
The slight loss of performance is generally not an issue in tests. Whichever method makes the code easier to understand could be the 'best practice'.
I'm finding myself writing a lot of boilerplate for my unit tests. I could cut down on that boilerplate significantly if I stored my unit test inputs along with the expected outputs in a CSV file and directed my test suite to read the inputs from that file, pass them to the function being tested, and then compare its output with the values in the file's expected-output column.
Is this considered good practice?
Instead of storing this in a separate file, I would recommend storing it in some kind of table (probably an array) inside your test code and iterating over that table. Most testing frameworks have specific support for this: in JUnit the feature is called parameterized tests. Then you don't even have to implement the iteration over the set of inputs and expected outputs yourself.
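JUnit aside, the same table idea is easy to sketch in plain C++; addTwo and the cases below are made up purely for illustration:

#include <cassert>
#include <vector>

// Hypothetical function under test.
int addTwo(int a, int b) { return a + b; }

// Inputs and expected outputs live in one table next to the test.
struct Case { int a; int b; int expected; };

int main() {
    const std::vector<Case> cases = {
        {0, 0, 0},
        {2, 3, 5},
        {-1, 1, 0},
    };
    // The loop replaces the per-case boilerplate; adding a case is one line.
    for (const auto& c : cases) {
        assert(addTwo(c.a, c.b) == c.expected);
    }
    return 0;
}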
Say, for example, I have a method that checks for valid UK postcodes. I have written a unit test for that method which checks that, when a correct UK postcode is passed in, the method returns true.
Should I create a separate unit test to test for an incorrect UK postcode, or do it in the same unit test?
Thanks
You should create separate test cases for each case. This will give you confidence that any future code that calls this method will work; also, if you refactor, you can see exactly which test fails, instead of just seeing one test fail and having no idea why.
Personally, I'd write a few tests that check it works correctly with different types of valid postcodes (NE1 2XX, NE21 2XX, E1 3YY, etc., trying out different valid combinations of letters and numbers) and several failing tests with invalid ones of different types (e.g. NEI 3XX).
What I usually do is create two functions, say test_valid_data() and test_invalid_data(), and two data sets, say valid_data[] and invalid_data[]. I then write four test procedures:
test_valid_data(valid_data[]) : This test should pass
test_valid_data(invalid_data[]) : This test should fail
test_invalid_data(valid_data[]) : This test should fail
test_invalid_data(invalid_data[]) : This test should pass
Working like this allows you to pinpoint a failing test according to a particular data set; that would be hard to achieve with only one big test. It also validates that valid data is not considered invalid and vice versa.
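A concrete sketch of that pattern in C++ (the postcode check here is a deliberately simplified stand-in, not a complete UK postcode rule):

#include <cassert>
#include <regex>
#include <string>
#include <vector>

// Deliberately simplified stand-in for the real validator.
bool isValidUkPostcode(const std::string& s) {
    static const std::regex re("[A-Z]{1,2}[0-9][0-9A-Z]? [0-9][A-Z]{2}");
    return std::regex_match(s, re);
}

// Each test procedure reports whether every item in its data set behaved as expected.
bool test_valid_data(const std::vector<std::string>& data) {
    for (const auto& p : data)
        if (!isValidUkPostcode(p)) return false;
    return true;
}

bool test_invalid_data(const std::vector<std::string>& data) {
    for (const auto& p : data)
        if (isValidUkPostcode(p)) return false;
    return true;
}

int main() {
    const std::vector<std::string> valid_data   = {"NE1 2XX", "NE21 2XX", "E1 3YY"};
    const std::vector<std::string> invalid_data = {"NEI 3XX", "12345", ""};

    assert( test_valid_data(valid_data));     // should pass
    assert(!test_valid_data(invalid_data));   // crossed combination: expected to reject
    assert(!test_invalid_data(valid_data));   // crossed combination: expected to reject
    assert( test_invalid_data(invalid_data)); // should pass
    return 0;
}

In a real suite the two crossed combinations would simply be tests that are expected to fail; here the boolean result is asserted directly to keep the sketch self-contained.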
I'd like to unit test a gen_fsm that uses a fairly large record for its state. The record is defined within the erl file that also defines the gen_fsm and thus is not (to my knowledge) visible to other modules.
Possible approaches:
Put the record into an hrl file and include that in both modules. This is ok, but spreads code that is logically owned by the gen_fsm across multiple files.
Fake a record with a raw tuple in the unit test module. This would get pretty ugly, as the record already has over 20 fields.
Export a function from my gen_fsm that will convert a proplist to the correct record type with some record_info magic. While possible, I don't like the idea of polluting my module interface.
Actually spawn the gen_fsm and send it a series of messages to put it in the right state for the unit test. There is substantial complexity to this approach (although Meck helps) and I feel like I'm wasting these great, pure Module:StateName functions that I should be able to call without a whole bunch of setup.
Any other suggestions?
You might consider just putting your tests directly into your gen_fsm module, which of course would give them access to the record. If you'd rather not include the tests in production code, and assuming you're using eunit, you can conditionally compile them in or out as indicated in the eunit user's guide:
-ifdef(EUNIT).
% test code here
...
-endif.
In CppUnit we run the unit tests as part of the build, in a post-build step, and multiple tests run as part of this. If any test case fails, the post-build step should not stop; it should go ahead, run all the test cases, and report a summary of how many passed and how many failed. How can we achieve this?
Thanks!
The question is specific enough. You need a test runner. Encapsulate each test in its own behaviour and class. The test project is kept separate from the tested code. Afterwards, just configure your XmlOutputter. You can find an excellent example of how to do this on the yolinux website: http://www.yolinux.com/TUTORIALS/CppUnit.html
This is how we compile the test projects for our main projects and check that everything is OK. After that, it is just a matter of maintaining your test code.
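For reference, a minimal sketch of the usual CppUnit main() that behaves the way the question asks: the runner executes every registered test even when some fail, writes an XML summary, and the exit code lets the post-build step decide what to do. The SampleTest fixture is made up, and the exact headers and class names should be checked against your CppUnit version:

#include <iostream>

#include <cppunit/extensions/HelperMacros.h>
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>
#include <cppunit/XmlOutputter.h>

// A made-up fixture with one passing and one failing case, just to show
// that the runner keeps going after a failure and still reports a summary.
class SampleTest : public CppUnit::TestFixture {
    CPPUNIT_TEST_SUITE(SampleTest);
    CPPUNIT_TEST(testPasses);
    CPPUNIT_TEST(testFails);
    CPPUNIT_TEST_SUITE_END();
public:
    void testPasses() { CPPUNIT_ASSERT(2 + 2 == 4); }
    void testFails()  { CPPUNIT_ASSERT(2 + 2 == 5); }
};
CPPUNIT_TEST_SUITE_REGISTRATION(SampleTest);

int main() {
    CppUnit::TextUi::TestRunner runner;
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
    // Emit the results as XML so the build can archive or parse the summary.
    runner.setOutputter(new CppUnit::XmlOutputter(&runner.result(), std::cerr));
    bool allPassed = runner.run();   // runs every test, failures included
    // The post-build step can inspect this exit code (or the XML report) and
    // decide whether a failed test should break the build or just be reported.
    return allPassed ? 0 : 1;
}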
Your question is too vague for a precise answer. Usually, a unit test engine returns a code to tell you it has failed (like a non-zero exit code in the shell on Linux) or generates some output file with the results. The calling system handles this. If you have written it yourself (some home-made scripts), you have to add the option to continue test execution even if an error occurred. If you are using a tool like a continuous integration server, then you have to go through the docs and find the option that allows you to carry on when tests fail.
A workaround is to write a script that returns an "OK" result even if the unit tests fail, but then you lose some automatic verification...
Be more specific if you want more clues.
my2c
I would just write your tests this way: instead of using the CPPUNIT_ASSERT macros or whatever, write them in regular C++ with some way of logging errors.
You could use a macro for this too of course. Something like:
LOGASSERT( some_expression )
could be defined to execute some_expression and to log the expression together with __FILE__ and __LINE__ if it fails. You can of course also log exceptions, as well as expected exceptions that were not thrown, simply by writing checks for them in your tests (with macros if you want to log the expression that caused them with __FILE__ and __LINE__).
If you are writing macros, I would advise you to limit the body of the macro to a call to an inline function with extra parameters.
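A minimal sketch of what that could look like; the names LOGASSERT, logCheck and the global failure counter are just illustrations:

#include <iostream>

// Global failure counter for the example; a real harness would collect
// richer results (test name, message, and so on).
static int g_failures = 0;

// The macro body is just a call to this inline function, per the advice above.
inline void logCheck(bool passed, const char* expr, const char* file, int line) {
    if (!passed) {
        ++g_failures;
        std::cerr << file << ":" << line << ": FAILED: " << expr << "\n";
    }
}

// Captures the expression text and location, then delegates; execution
// continues after a failure instead of aborting, so every check gets logged.
#define LOGASSERT(expr) logCheck((expr), #expr, __FILE__, __LINE__)

int main() {
    LOGASSERT(1 + 1 == 2);   // passes, nothing logged
    LOGASSERT(2 + 2 == 5);   // fails, logged with file and line
    std::cerr << g_failures << " check(s) failed\n";
    return g_failures == 0 ? 0 : 1;
}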