Let's say there is a function:
import "math"

type config struct {
	percent float64
	flat    float64
}

func Calculate(c *config, arrayofdata []float64) float64 {
	result := 0.0
	for _, data := range arrayofdata {
		value1 := data * c.percent
		value2 := c.flat
		result += math.Min(value1, value2)
	}
	return result
}
This Calculate function takes, for each element of arrayofdata, the lower of the percentage-based value and the flat value, and aggregates the results.
If I were to write tests, how would you do it?
Do I have to have multiple tests, one for each trivial scenario?
Say, when value1 < value2?
TestCalculate/CorrectValue_PercentValueLessThanFlatValue
Say, when value1 > value2?
TestCalculate/CorrectValue_FlatValueLessThanPercentValue
Check whether flat is added per data point? So for 3 elements of arrayofdata, result = 3*config.flat?
TestCalculate/CorrectValue_FlatValuePerData
All these seem very trivial and could simply be combined into one test. What is the recommended way?
Like, say, a test where
config { percent: 1, flat: 20 }
and then an arrayofdata where each element exercises one of the cases above:
arrayofdata: {
1, // 1*percent < flat
40, // 40*percent > flat
}
The result would be correct if we add up the values, so you already check the case where arrayofdata has more than one element.
Is this a better approach? One test, but combining the details.
And separate tests for other cases, like zero elements in arrayofdata, etc.
I would recommend following the common practices in Clean Code by Robert C. Martin. Two of the guidelines discussed there are particularly relevant: "One Assert per Test" and "Single Concept per Test".
When one test asserts more than one scenario and things start failing, it can become difficult to figure out which part is no longer passing. When that happens, you're going to end up splitting that unit test into three separate tests anyway.
You might as well start out with three separate tests; just keep your tests clean and expressive, so that if you come back months later you can figure out what each unit test is doing. :)
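In Go, table-driven subtests give you much of both worlds: one test function, but separately named and separately reported cases. A minimal sketch, assuming the corrected Calculate signature above (expected values follow from percent = 1, flat = 20):

func TestCalculate(t *testing.T) {
	c := &config{percent: 1, flat: 20}
	cases := []struct {
		name string
		data []float64
		want float64
	}{
		{"PercentValueLessThanFlatValue", []float64{1}, 1},
		{"FlatValueLessThanPercentValue", []float64{40}, 20},
		{"FlatValuePerData", []float64{40, 40, 40}, 60},
		{"EmptyInput", nil, 0},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := Calculate(c, tc.data); got != tc.want {
				t.Errorf("Calculate() = %v, want %v", got, tc.want)
			}
		})
	}
}

Each case fails under its own name (TestCalculate/PercentValueLessThanFlatValue, and so on), so you keep the diagnostics of separate tests without duplicating the setup.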
In a cell, is it possible to do if(x=y, z, x) without having to repeat x in the value_if_false argument? Whether this works via if() or another function doesn't matter, and there isn't a specific formula I'm struggling with; I come across this blocker quite often (hence posting).
To help illustrate the need, if we take x as a complex or more advanced formula, such as
ARRAYFORMULA(IF(E$6:Q$6 < EoMONTH($P$4,0), "Not Active", IF(E$6:Q$6<$Q$4 + ISBLANK($Q$4) > 0,
COUNTIF({'Data'!$B$3:$B&'Data'!$I$3:$I&'Data'!$K$3:$K},$B$4&$C9&E$6:Q$6), "Not Active")))
and I wanted to put an if statement in there that changed the result only if a condition was true, the formula would more than double in size due to having to reference x twice:
=ARRAYFORMULA(IF(IF(E$6:Q$6 < EoMONTH($P$4,0), "Not Active", IF(E$6:Q$6<$Q$4 + ISBLANK($Q$4) > 0,
COUNTIF({'Data'!$B$3:$B&'Data'!$I$3:$I&'Data'!$K$3:$K},$B$4&$C9&E$6:Q$6), "Not Active"))) = 0, "No data", IF(E$6:Q$6 < EoMONTH($P$4,0), "Not Active", IF(E$6:Q$6<$Q$4 + ISBLANK($Q$4) > 0,
COUNTIF({'Data'!$B$3:$B&'Data'!$I$3:$I&'Data'!$K$3:$K},$B$4&$C9&E$6:Q$6), "Not Active"))))
This is just an example (the code is irrelevant); I'm trying to keep my formulas neat, tidy, and efficient so that handing off to others is easier. I'm also mindful that it calculates the same complex formula twice, which would probably slow the spreadsheet down, especially when iterated throughout a spreadsheet.
Interested to hear the community's thoughts and suggestions on this; hopefully I was clear in explaining it. :)
The only simple way to achieve this would be with the use of helper columns. They don't need to be in the same sheet as your main equation, but they do need to be within the same spreadsheet as a whole (i.e. you could have a sheet named "calc" that's specifically used to calculate intermediate steps and set "variables" by referencing those cells).
The only other option (which gets a bit complicated) is to create a custom function within Google Apps Script. For example, if you wanted to calculate (B1*A4)/C5 in multiple places, you could create a custom function like this:
/**
 * Returns a calculation using cells A4, B1, and C5.
 * @return A calculation using cells A4, B1, and C5.
 * @customfunction
 */
function x() {
  var ss = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('MainSheet');
  var val1 = ss.getRange('B1').getValue();
  var val2 = ss.getRange('A4').getValue();
  var val3 = ss.getRange('C5').getValue();
  return (val1*val2)/val3;
}
Then in your sheet, you could use this within a formula like this:
=if(A1="yes", x(), "no")
This custom function could obviously be altered to fit one's needs (e.g. taking arguments to define the cells that the calculation should be done on instead of hard-coding them).
Other than this, there is currently no way to define variables within a formula itself.
This is possible to a certain extent, using TEXT's Meta Instructions, if you're using numbers and simple math conditions.
 x |  y | z | output
10 | 10 | 5 | 10
11 | 10 | 5 | 5

With x, y, and z in columns A, B, and C (row 3 in this example), the output column contains:
=TEXT(A3,"[="&B3&"]0;"&C3&"")
As long as your complex formula returns a number for x (or the output can be coerced to a number), this should be possible, and it avoids repetition.
I agree; I would love it if there were a DECODE- or NVL-type function you could use so that you didn't need to repeat the original statement multiple times.
However, in many cases when I encounter this, I can reference another cell. Not in the way that has been suggested already, where the formula exists in another cell, but rather that the decision to perform the formula is based on another cell.
For example, using your values, let's assume the formula if(x=y, z, x) only gets calculated when column 'w' is populated. Maybe column 'w' is a key component of the formula. Then you can write the formula as if(w="",z,x). It's not exactly the same as testing the answer to the equation first, and it doesn't work in all situations, but in many cases I can find another field of key relevance to the formula that lets me get around this.
I am pretty new to coding and unit tests.
I am writing a unit test for a method that converts a string to a date.
What do I assert for a positive test?
It goes like below:
String s1 = "11/11/2018";
Date returnedValue = Class.ConvertToDate(s1);
assert(????);
I am not sure what methods the Date class in your code snippet provides, so I will assume it has the following: getDayNrInMonth, which returns the day as an int starting from 1; getMonthNrInYear, which returns the month as an int starting from 1; and getYearNr, which returns the year as an int (let's ignore that there are different calendar systems and assume the Gregorian calendar).
The important question you have to answer first is what the goal of the test is. There are several possible goals you could test for, which means you should probably end up writing several tests. Let's assume you want to test that the year part was converted correctly. Then the assertion could look as follows:
assert(returnedValue.getYearNr() == 2018);
Most likely, however, your test framework will provide something better for you here, possibly a function assertEquals, which you could then use as follows:
assertEquals(returnedValue.getYearNr(), 2018);
If in contrast you want to check that the month is converted correctly, you will immediately realize that the test example you have chosen is not ideal:
assertEquals(returnedValue.getMonthNrInYear(), 11);
Since in your example the day and month are both 11, this test would not be very reliable. To test the correct conversion of the day and the month, the input strings should be chosen to have distinguishable values for the two.
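If it helps to see the same idea in runnable form, here is a sketch in Go (ConvertToDate is a hypothetical stand-in for your method), with day and month chosen to differ so a swapped conversion is caught:

import (
	"testing"
	"time"
)

// ConvertToDate is a hypothetical stand-in that parses MM/DD/YYYY strings.
func ConvertToDate(s string) (time.Time, error) {
	return time.Parse("01/02/2006", s)
}

func TestConvertToDate(t *testing.T) {
	// 11 (month) and 03 (day) are distinguishable, unlike 11/11/2018.
	got, err := ConvertToDate("11/03/2018")
	if err != nil {
		t.Fatalf("ConvertToDate returned an error: %v", err)
	}
	if got.Year() != 2018 {
		t.Errorf("year = %d, want 2018", got.Year())
	}
	if got.Month() != time.November {
		t.Errorf("month = %v, want November", got.Month())
	}
	if got.Day() != 3 {
		t.Errorf("day = %d, want 3", got.Day())
	}
}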
While you move on with learning to program and to test, you will find that there are many more aspects that could be considered. The links in the comments can help you. However, hopefully the above points support you in taking the next steps.
I have a method which does many things, so many side effects are created. For instance, say a single call to a REST API returns a JSON object with many fields set. If we want to check each individual field, should we have one single test method containing many assertEquals, or one test method per field validation containing a single assertEquals?
Similarly, a method can have many other side effects, e.g. saving to the database, sending emails, etc. In this case, should I have one unit test method per side effect?
In addition, if there are multiple test inputs per SUT method, does this affect the decision of how many test methods are created?
Moreover, the side effects might all belong to the same story; in that case, shouldn't they belong to the same test method? Because if the requirement changes, then all the test methods per side effect need to be changed as well. Would that be manageable?
If we want to check each individual field, should we have one single test method containing many assertEquals, or one test method per field validation containing a single assertEquals?
Worrying about the number of asserts in a test case is like worrying about how many lines are in a function. It's a first order approximation of complexity, but it's not really what you should be concerned with.
What you should be concerned with is how hard that test is to maintain, and how good its diagnostics are. When the test fails, will you be able to figure out why? A lot of that depends on how good your assertEquals is with large data structures.
json = call_rest();
want = { ...whatever you expect... };
assertEquals( json.from_json(), want );
If that just tells you they're not equal, that's not very useful; you then have to manually go in and look at json and want. If it dumps out both data structures, that's also not very useful; you have to look for the differences by eye.
But if it presents a useful diff of the two data structures, that is useful. For example, Perl's Test2 will produce diagnostics like this.
use Test2::Bundle::Extended;
is { foo => 23, bar => 42, baz => 99 },
{ foo => 22, bar => 42, zip => 99 };
done_testing;
# +-------+------------------+---------+------------------+
# | PATH | GOT | OP | CHECK |
# +-------+------------------+---------+------------------+
# | {foo} | 23 | eq | 22 |
# | {zip} | <DOES NOT EXIST> | | 99 |
# | {baz} | 99 | !exists | <DOES NOT EXIST> |
# +-------+------------------+---------+------------------+
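Go has something similar in the google/go-cmp package (an assumption that it fits your stack; most ecosystems have an equivalent): cmp.Diff reports exactly which paths in two structures differ, rather than just "not equal".

import (
	"testing"

	"github.com/google/go-cmp/cmp"
)

func TestRestResponse(t *testing.T) {
	got := map[string]int{"foo": 23, "bar": 42, "baz": 99}
	want := map[string]int{"foo": 22, "bar": 42, "zip": 99}
	if diff := cmp.Diff(want, got); diff != "" {
		// The diff pinpoints foo's value and the baz/zip key mismatch.
		t.Errorf("response mismatch (-want +got):\n%s", diff)
	}
}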
Then there's the question of maintenance, and again this comes down to how good your assertEquals is. A single assertEquals is easy to read and maintain.
json = call_rest();
want = { ...whatever you expect... };
assertEquals( json.from_json(), want );
There's very little code, and want is very clear about what is expected.
While multiple asserts get very wordy.
json = call_rest();
have = json.from_json();
assertEquals( have['thiskey'], 23 );
assertEquals( have['thatkey'], 42 );
assertEquals( have['foo'], 99 );
...and so on...
It can be difficult to know which assert failed; you have to tell by line number or (if your test suite supports it) manually name each assert, which is more work, more maintenance, and one more thing to get wrong.
OTOH individual assertEquals allow for more flexibility. For example, what if you only want to check certain fields? What if there are acceptable values in a range?
# bar just has to contain a number
assert( have['bar'].is_number );
But some test suites support this via a single assert.
want = {
thiskey: 23,
thatkey: 42,
foo: 99,
bar: is_number
}
assertEquals( json.from_json(), want );
is_number is a special object that tells the assert it's not a normal equality check, but should just check that the value is a number. If your test suite supports this style, it's generally superior to writing out a bunch of asserts: the declarative approach means less code to write, read, and maintain.
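If your framework has nothing built in, the idea is small enough to sketch by hand. A toy version in Go (matchFields and isNumber are hypothetical helpers, and the equality check is deliberately naive):

import "testing"

// matchFields checks have against want, where want maps field names
// to either an exact value or a predicate function.
func matchFields(t *testing.T, have, want map[string]any) {
	t.Helper()
	for key, expect := range want {
		got, ok := have[key]
		if !ok {
			t.Errorf("%s: field missing", key)
			continue
		}
		if pred, isPred := expect.(func(any) bool); isPred {
			if !pred(got) {
				t.Errorf("%s: predicate failed for %v", key, got)
			}
		} else if got != expect {
			t.Errorf("%s: got %v, want %v", key, got, expect)
		}
	}
}

// isNumber plays the role of the is_number matcher above.
func isNumber(v any) bool {
	switch v.(type) {
	case int, int64, float64:
		return true
	}
	return false
}

A test then declares its expectations in one place:

matchFields(t, have, map[string]any{
	"thiskey": 23,
	"thatkey": 42,
	"foo":     99,
	"bar":     isNumber,
})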
The answer is: it depends on how good your testing tools are!
I have a relatively large project, which up to now had almost no automatic tests.
Now the management has given us several weeks of time to stabilize the product, including test automation.
In order to get the most value out of these X weeks we can spend on automated testing, I need to know which classes/methods to test first.
How can I prioritize testing efforts (deciding which class/method to test now and which later), apart from the approaches listed below?
Calculate the dependents for each class (how many other classes use class, including transitive dependencies). Classes with the greatest number of dependent classes should be tested first.
Find out, which classes change most frequently (according to the version control system). Frequent changes may be a symptom of either lots of bugs or active development in these classes. In both cases, it makes sense to write unit tests for them.
Find out, which classes are involved in bug reports from testers and/or customers.
All of your ideas seem good. This article can help you with prioritizing and automating.
This is a formula for estimating testing effort (based on a use-case-driven approach):
Step 1: Count the number of use cases (NUC) of the system.
Step 2: Set the average test cases per use case (ATTC) as per the test plan.
Step 3: Estimate the total number of test cases (NTC).
NTC = NUC * ATTC
Step 4: Set the average execution time (AET) per test case (ideally 15 min, depending on your system).
Step 5: Calculate the total execution time (TET).
TET = NTC * AET
Step 6: Calculate the test case creation time (TCCT); usually we take 1.5 times TET.
TCCT = 1.5 * TET
Step 7: Calculate the time for retest case execution (RTCE), i.e. retesting; usually we take 0.5 times TET.
RTCE = 0.5 * TET
Step 8: Set the report generation time (RGT); usually we take 0.2 times TET.
RGT = 0.2 * TET
Step 9: Set the test environment setup time (TEST); this also depends on the test plan.
Step 10: Total estimated time = TET + TCCT + RTCE + RGT + TEST + some buffer... ;)
Here is an example of how it works:
Total number of use cases (NUC): 227
Average test cases per use case (ATTC): 10
Estimated test cases (NTC): 227 * 10 = 2270
Total execution time (TET): 2270 / 4 = 567.5 hr
(4 is the number of test cases executed per hour, i.e. 15 min per test case)
Test case creation time (TCCT): 1.5 * 567.5 = 851.25 hr
Time for retesting (RTCE): 0.5 * 567.5 = 283.75 hr
Report generation time (RGT): 0.2 * 567.5 = 113.5 hr
Test environment setup time (TEST): 20 hr
Total: 1836 hr + buffer
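The arithmetic is simple enough to capture in a few lines of Go (the multipliers are the rule-of-thumb values from the steps above, not universal constants):

// estimateHours applies the rule-of-thumb multipliers from the steps above.
func estimateHours(useCases, testsPerUseCase, execPerHour, envSetup float64) float64 {
	ntc := useCases * testsPerUseCase // total test cases
	tet := ntc / execPerHour          // total execution time
	tcct := 1.5 * tet                 // test case creation time
	rtce := 0.5 * tet                 // retesting time
	rgt := 0.2 * tet                  // report generation time
	return tet + tcct + rtce + rgt + envSetup // add your own buffer on top
}

// estimateHours(227, 10, 4, 20) ≈ 1836 hr, matching the example above.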
And since you are going to automate almost from scratch ("up to now had almost no automatic tests"), I think you should consider not only the benefits, but also the myths about automated testing:
Once automated, cost savings is a given.
Every test case can be automated.
Testing can be fully automated.
One test tool is suitable for all tasks.
None of these hold: automation cannot replace the human, and automated testing doesn't mean automatic testing. After all, it means computer-aided testing.
Other than the above answer, I would like to add a few more points regarding priority and coverage for automation:
1. Before you start any coding, understand which test cases give you maximum coverage, like a full-flow or end-to-end test scenario.
2. Always maintain the usability of the automation test suite: code those test cases first which can be used all the time and cover most of the regression surface, rather than concentrating on one specific feature of the application.
3. When you have limited time, avoid implementing fancy things like loggers, email summary reports, HTML reports, etc.
4. Prefer a data-driven framework over a keyword or hybrid framework, because you can cover many test cases by just varying your test data in limited time.
5. Maintain an Excel sheet or CSV for test data and test results; you can use the JXL library to handle Excel sheets in Java.
6. Reading and writing Excel sheets are the most common operations you may want in your automation code. You can find a reference on this blog: http://testingmindzz.blogspot.in/
There are several places where one has to convert one data object into another, for example incoming data from a web service or a REST service into an object that is persistable.
Is there a way to unit test that all incoming data gets filled into the right places of the "outgoing" objects, without copying the converter logic inside the test?
If the fields are all named the same, and one is feeling adventurous, reflection could do some of the work, but I don't feel like going down that path.
Acceptance tests won't catch a bug if, say, a Person that has a name and a firstname gets converted into a Person where name == firstname due to some copy-paste mistake.
So right now I just skip testing object/model conversion and instead take a really good look at my converter.
Does anyone have an idea how to do this differently?
If you need to test that multiplication works, you should not replicate the multiplication logic. Define test data that you know is correct, and test that the multiplication is OK:
assert( 4*5, 20 )
and not
assert( 4*5, 4*5 )
Here the test data are 4, 5, and 20, and the logic that ties them together is the multiplication. The same principle holds in your case: define test data and test that the conversion produces the right results.
(As you point out, making the tests themselves generic with reflection, etc., defeats the purpose of testing.)
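Applied to the Person example from the question, that means choosing input values that are all distinct, so a swapped field cannot go unnoticed. A minimal sketch in Go (PersonDTO, Person, and ToPerson are hypothetical stand-ins for your own types and converter):

import "testing"

type PersonDTO struct {
	Name      string
	FirstName string
}

type Person struct {
	Name      string
	FirstName string
}

// ToPerson is the converter under test (a stand-in for your real one).
func ToPerson(d PersonDTO) Person {
	return Person{Name: d.Name, FirstName: d.FirstName}
}

func TestToPerson(t *testing.T) {
	// Every field gets a distinct value, so a name/firstname swap is caught.
	in := PersonDTO{Name: "Doe", FirstName: "Jane"}
	want := Person{Name: "Doe", FirstName: "Jane"}
	if got := ToPerson(in); got != want {
		t.Errorf("ToPerson(%+v) = %+v, want %+v", in, got, want)
	}
}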