How to set the value of a variable during a test? - unit-testing

Suppose I have some code in Java (or any other language) that I want to test. Let's say I want to test how the code behaves when the variable 'myVar' holds an integer value of 10 at a certain line.
One option would be to assign the value 10 to 'myVar' at that line. This works, but it makes the code dirty, and if I want to test another scenario I have to change that line again. What happens if I have a huge number of scenarios?
I was wondering whether there is a way to hold an external file/configuration that is loaded whenever I want to test a specific scenario, without modifying the code?

One way is to use a JSON file. Put your value into a file "test.json" at location c:\testfolder with the following content:
{
"Scenario":"10"
}
Now you can access the value with the following C# code (this uses the Json.NET / Newtonsoft.Json library):
using System.IO;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
StreamReader file = File.OpenText(@"c:\testfolder\test.json");
JsonTextReader reader = new JsonTextReader(file);
JObject jObject = (JObject)JToken.ReadFrom(reader);
int a = (int)jObject["Scenario"];
You can now use the variable a, which holds the value 10 read from the JSON file.
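If you have many scenarios, one hedged extension of the same idea is to key every scenario in the JSON file and load one by name. The file layout, the "Scenarios" key and the LoadScenario helper below are illustrative assumptions, not part of the original answer:
// Hypothetical test.json layout:
// { "Scenarios": { "highValue": 10, "zero": 0, "negative": -5 } }
using System.IO;
using Newtonsoft.Json.Linq;

static int LoadScenario(string name)
{
    // Read the named scenario value from the shared JSON file.
    JObject root = JObject.Parse(File.ReadAllText(@"c:\testfolder\test.json"));
    return (int)root["Scenarios"][name];
}

// In a test: int myVar = LoadScenario("highValue");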

Related

Efficient way to pass gui variables to classes?

I'm using the program Maya to make a rather large project in Python. I have numerous options that will be determined by a GUI and input by the user.
One example of an option is what dimensions to render at. However, I have not made a GUI yet and am still in the testing phase.
What I ultimately want is a way for these variables to be looked up and used by various classes/methods across multiple modules, and also a way to test all the code without an actual GUI.
Should I directly pass all data to each method? My issue with this is that if method foo relies on variable A, but method bar needs to call foo, it gets really annoying passing these variables to foo from everywhere it's called.
Another way I saw was passing all the variables into each class instance and accessing them as instance variables. But if an option changes, I'd have to reload the imports every time it runs.
For testing, what I use now is a module that reads the variables from a config file; I import that module and use its variables throughout the script:
def __init__(self):
    # Get and assign all instance variables.
    options = config_section_map('Attrs', '%s\\ui_options.ini' % (data_path))
    for k, v in options.items():
        if v.lower() == 'none':
            options[k] = None
    self.check_all = int(options['check_all'])
    self.control_group = options['control_group']
Does anyone have advice, or can anyone point me in the right direction, for getting and using UI variables?
If the options list is not overly long and won't change, you can simply set member variables in the class initializer, which makes the initialization easy for readers to understand:
class OptionData(object):
    def __init__(self):
        # Set the options on startup.
        self.initial_path = "//network"
        self.initial_name = "filename"
        self.use_hdr = True
        # ... etc
If you expect the initial values to change often, you can make them arguments to the constructor:
class OptionData(object):
    def __init__(self, path="//network", name="filename", hdr=True):
        self.initial_path = path
        self.initial_name = name
        self.use_hdr = hdr
If you need to persist the data, you can fill out the class by reading the cfg file as you're doing, or store it some other way. Persisting makes things harder because you can't guarantee that the user won't open two copies of Maya at the same time, potentially changing the saved data in unpredictable ways. You can store per-file copies of the data using Maya's fileInfo.
In both of these cases I'd make the actual GUI take the data object (the OptionData or whatever you call yours) as an initializer. That way you can read and write the data from the GUI. Then have the actual functional code read the OptionData:
def perform_render(optiondata):
    # ... etc
That way you can run a batch process without the gui at all and the functional code will be none the wiser. The GUI's only job is to be a custom editor for the data object and then to pass it on to the final function in a valid state.

cppUnit: setUp function executed once for multiple test methods

I've got an object Obj that does some (elaborate) computation, and I want to check whether the results (let's call them aComputed and bComputed) are correct. Therefore I want to split this task up into multiple test methods:
testA() { load aToBe; check if number aComputed = aToBe }
testB() { load bToBe; check if number bComputed = bToBe }
The problem is that Obj is "executed" twice (which takes a lot of time), once per test. The question is: how can I arrange for it to be "executed" just once, with the result used by both tests?
At the moment Obj is placed inside the setUp function and saves its results to private members of the test class.
Thanks for helping!
There is no easy solution that allows you to split the code into two test methods. Each test method results in a new test object with its own set of member variables.
Obviously you could work around this with a static variable, but in the long run that usually just causes issues and works against the ideas behind the framework.
The better idea is to write the two CPPUNIT_ASSERT calls in the same test method. If the results are part of the same calculation, there is most likely not much value in splitting the checks into two independent test methods.

Functions using global variables as inputs

So, I'm writing some python scripts and I have some code like the one shown below:
a = 3
dict = {"run":runMiles(a)}
a = 5
The runMiles function takes one variable, an int. For some reason, when dict["run"] is accessed, it doesn't seem to use the "new" value of a. It is important to realize that both a and dict are global variables.
It does so because runMiles() executes at the moment the line dict = {"run":runMiles(a)} executes. When you later look up dict["run"], you only get the return value that was stored at that time; the function is not executed again to recompute the value of "run", and thus the value does not get updated.

How to test that a file is left unchanged?

I'm testing a function that may modify a file. How do I test that the file is unchanged in the cases where it should be?
I don't want to check the content, because the file may have been overwritten with the same content, changing the modification time.
I can't really check the modification time, either. Since I like tests to be self-contained, the original file would be written just before the (non-)modification test, rendering the modification time unreliable.
You can use dependency injection (DI) to mock your file writer. This way you do not need the file at all; you only check whether the write function was called, and that tells you whether the file would have been modified.
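A minimal C# sketch of that idea, assuming a hypothetical IFileWriter interface and a hand-rolled spy instead of a mocking library (all names here are illustrative):
public interface IFileWriter
{
    void Write(string path, string contents);
}

// Spy that records whether Write was ever called.
public class SpyFileWriter : IFileWriter
{
    public bool WriteCalled { get; private set; }
    public void Write(string path, string contents) { WriteCalled = true; }
}

// In the test: pass the spy into the code under test, then assert.
// var writer = new SpyFileWriter();
// FunctionUnderTest(writer);
// Assert.IsFalse(writer.WriteCalled);  // the file was left unchanged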
I would split the function into two separate functions; the first decides whether the modification should be made, the second makes the modification. The second is only called if necessary. In pretend language:
function bool IsModificationRequired()
{
    // return true or false based on your actual code
}

function void WriteFile()
{
    new File().Write("file");
}

function void WriteIfModified()
{
    if (IsModificationRequired())
        WriteFile();
}
And the test:
Assert.IsTrue(IsModificationRequired());
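Tying that split together with the DI idea above, a rough C# sketch using NUnit-style asserts (FileUpdater, its members, and the reuse of SpyFileWriter from the earlier sketch are all illustrative assumptions):
// Hypothetical class shape for the code under test.
public class FileUpdater
{
    private readonly IFileWriter writer;
    public FileUpdater(IFileWriter writer) { this.writer = writer; }

    public bool IsModificationRequired()
    {
        // Real decision logic goes here; stubbed for the sketch.
        return false;
    }

    public void WriteIfModified()
    {
        if (IsModificationRequired())
            writer.Write("file", "contents");
    }
}

// Inside a test fixture class:
[Test]
public void NoWriteWhenModificationIsNotRequired()
{
    var spy = new SpyFileWriter();            // spy from the sketch above
    new FileUpdater(spy).WriteIfModified();
    Assert.IsFalse(spy.WriteCalled);          // nothing was written
}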
Well, assuming you are using a text file of reasonable size: just hash the file content. If the hash code before and after the modification attempt is the same, the file content has not changed.
Here is a link to The Algorithm Design Manual - Steve Skiena (Google Books result), Section 3.8.
How can I convince you that a file isn't changed?
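A minimal C# sketch of that hashing check (SHA-256 is just one reasonable choice, and HashFile is an illustrative helper name):
using System;
using System.IO;
using System.Security.Cryptography;

static string HashFile(string path)
{
    // Hash the raw file bytes so any rewrite with different content changes the result.
    using (var sha = SHA256.Create())
        return Convert.ToBase64String(sha.ComputeHash(File.ReadAllBytes(path)));
}

// In the test:
// string before = HashFile(path);
// FunctionUnderTest(path);
// Assert.AreEqual(before, HashFile(path));  // file content unchanged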

Unit Tests for comparing text files in NUnit

I have a class that processes 2 XML files and produces a text file.
I would like to write a bunch of unit / integration tests that can individually pass or fail for this class that do the following:
For input A and B, generate the output.
Compare the contents of the generated file to the contents of the expected output.
When the actual contents differ from the expected contents, fail and display some useful information about the differences.
Below is the prototype for the class along with my first stab at unit tests.
Is there a pattern I should be using for this sort of testing, or do people tend to write zillions of TestX() functions?
Is there a better way to coax text-file differences from NUnit? Should I embed a text-file diff algorithm?
class ReportGenerator
{
    string Generate(string inputPathA, string inputPathB)
    {
        // do stuff
    }
}
[TestFixture]
public class ReportGeneratorTests
{
    static void Diff(string pathToExpectedResult, string pathToActualResult)
    {
        using (StreamReader rs1 = File.OpenText(pathToExpectedResult))
        {
            using (StreamReader rs2 = File.OpenText(pathToActualResult))
            {
                string actualContents = rs2.ReadToEnd();
                string expectedContents = rs1.ReadToEnd();
                // This works, but the output could be a LOT more useful.
                Assert.AreEqual(expectedContents, actualContents);
            }
        }
    }

    static void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
    {
        ReportGenerator obj = new ReportGenerator();
        string pathToResult = obj.Generate(pathToInputA, pathToInputB);
        Diff(pathToExpectedResult, pathToResult);
    }

    [Test]
    public void TestX()
    {
        TestGenerate("x1.xml", "x2.xml", "x-expected.txt");
    }

    [Test]
    public void TestY()
    {
        TestGenerate("y1.xml", "y2.xml", "y-expected.txt");
    }

    // etc...
}
Update
I'm not interested in testing the diff functionality. I just want to use it to produce more readable failures.
As for the multiple tests with different data, use the NUnit RowTest extension:
using NUnit.Framework.Extensions;

[RowTest]
[Row("x1.xml", "x2.xml", "x-expected.txt")]
[Row("y1.xml", "y2.xml", "y-expected.txt")]
public void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
    ReportGenerator obj = new ReportGenerator();
    string pathToResult = obj.Generate(pathToInputA, pathToInputB);
    Diff(pathToExpectedResult, pathToResult);
}
You are probably asking about testing against "gold" data. I don't know if there is a universally accepted term for this kind of testing, but this is how we do it.
Create a base fixture class. It basically has a method "void DoTest(string fileName)", which reads the specified file into memory, executes the abstract transformation method "string Transform(string text)", then reads fileName.gold from the same place and compares the transformed text with what was expected. If the content is different, it throws an exception. The exception thrown contains the line number of the first difference as well as the text of the expected and actual lines. As the text is stable, this is usually enough information to spot the problem right away. Be sure to mark the lines with "Expected:" and "Actual:", or you will be guessing forever which is which when looking at test results.
Then you have specific test fixtures where you implement the Transform method to do the right job, and tests that look like this:
[Test] public void TestX() { DoTest("X"); }
[Test] public void TestY() { DoTest("Y"); }
The name of the failed test instantly tells you what is broken. Of course, you can use row testing to group similar tests. Having separate tests also helps in a number of situations, like ignoring tests, communicating tests to colleagues and so on. It is not a big deal to create a snippet that generates such a test for you in a second; you will spend much more time preparing the data.
You will also need some test data and a way for your base fixture to find it; be sure to set up project-wide rules about this. If a test fails, dump the actual output to a file next to the gold file, and erase it if the test passes. This way you can use a diff tool when needed. When no gold data is found, the test fails with an appropriate message, but the actual output is written anyway, so you can check that it is correct and copy it to become the "gold".
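A rough C# sketch of such a base fixture (class and member names are illustrative assumptions, and the gold-file lookup is simplified to "same path plus .gold"):
using System;
using System.IO;
using NUnit.Framework;

public abstract class GoldDataFixture
{
    // Each concrete fixture overrides this with the transformation under test.
    protected abstract string Transform(string text);

    protected void DoTest(string fileName)
    {
        string input = File.ReadAllText(fileName);
        string expected = File.ReadAllText(fileName + ".gold");
        string actual = Transform(input);

        string[] expectedLines = expected.Split('\n');
        string[] actualLines = actual.Split('\n');
        int lines = Math.Min(expectedLines.Length, actualLines.Length);
        for (int i = 0; i < lines; i++)
        {
            if (expectedLines[i] != actualLines[i])
                Assert.Fail($"First difference at line {i + 1}\nExpected: {expectedLines[i]}\nActual:   {actualLines[i]}");
        }
        Assert.AreEqual(expectedLines.Length, actualLines.Length, "Line counts differ");
    }
}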
I would probably write a single unit test that contains a loop. Inside the loop I'd read 2 XML files and a diff file, diff the XML files (without writing the result to disk) and compare it to the diff file read from disk. The files would be numbered, e.g. a1.xml, b1.xml, diff1.txt; a2.xml, b2.xml, diff2.txt; a3.xml, b3.xml, diff3.txt, etc., and the loop stops when it doesn't find the next number.
Then, you can write new tests just by adding new text files.
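A hedged C# sketch of that loop-driven test (DiffXmlFiles is a stand-in name for whatever in-memory comparison the code under test produces):
// using System.IO; using NUnit.Framework;
[Test]
public void AllNumberedCases()
{
    for (int i = 1; ; i++)
    {
        string a = $"a{i}.xml", b = $"b{i}.xml", diff = $"diff{i}.txt";
        if (!File.Exists(a))
            break;  // stop when the next numbered case is missing

        // Placeholder: produce the diff in memory with the code under test.
        string actual = DiffXmlFiles(a, b);
        string expected = File.ReadAllText(diff);
        Assert.AreEqual(expected, actual, $"Case {i} differs");
    }
}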
Rather than call .AreEqual you could parse the two input streams yourself, keep count of the line and column, and compare the contents. As soon as you find a difference, you can generate a message like...
Line 32 Column 12 - Found 'x' when 'y' was expected
You could optionally enhance that by displaying multiple lines of output
Difference at Line 32 Column 12, first difference shown
A = this is a txst
B = this is a tests
Note: as a rule, I'd generally only generate one of the two streams through my code. The other I'd grab from a test/text file, having verified by eye or some other method that the data it contains is correct!
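A minimal sketch of that line-and-column comparison (the helper name is illustrative):
// using System.IO; using NUnit.Framework;
static void AssertStreamsEqual(TextReader expected, TextReader actual)
{
    int line = 1, column = 1;
    while (true)
    {
        int e = expected.Read();
        int a = actual.Read();
        if (e != a)
            // End-of-stream shows up as a non-printable character in the message.
            Assert.Fail($"Line {line} Column {column} - Found '{(char)a}' when '{(char)e}' was expected");
        if (e == -1)
            return;  // both streams ended together
        if (e == '\n') { line++; column = 1; } else { column++; }
    }
}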
I would probably use XmlReader to iterate through the files and compare them. When I hit a difference I would display an XPath to the location where the files are different.
PS: But in reality it was always enough for me to just do a simple read of the whole file to a string and compare the two strings. For the reporting it is enough to see that the test failed. Then when I do the debugging I usually diff the files using Araxis Merge to see where exactly I have issues.
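A rough sketch of the XmlReader approach; it reports the node type, name and depth of the first difference rather than a full XPath, which is left out here (the helper name is an assumption):
using System.Xml;
using NUnit.Framework;

static void AssertXmlFilesEqual(string pathA, string pathB)
{
    // Note: insignificant whitespace is also compared; set XmlReaderSettings.IgnoreWhitespace to skip it.
    using (XmlReader a = XmlReader.Create(pathA))
    using (XmlReader b = XmlReader.Create(pathB))
    {
        while (true)
        {
            bool aRead = a.Read();
            bool bRead = b.Read();
            if (aRead != bRead)
                Assert.Fail("One document ended before the other");
            if (!aRead)
                return;  // both documents ended together

            if (a.NodeType != b.NodeType || a.Name != b.Name || a.Value != b.Value)
                Assert.Fail($"Difference at depth {a.Depth}: {a.NodeType} '{a.Name}' vs {b.NodeType} '{b.Name}'");
        }
    }
}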