Let's say there is a function to determine if a button should be visible.
fun isButtonVisible(filters: List<Filters>, results: List<Shop>, isLoading: Boolean): Boolean {
    return filters.isNotEmpty() && results.isEmpty() && !isLoading
}
Now I would like to test this function using property-based testing (PBT), like this:
"the button should be visible if filters is not empty and results is empty and is not loading" {
forAll { filters: List<Filters>, results: List<Shop>, isLoading: Boolean ->
val actual = isButtonVisible(filters, results, isLoading)
// Here reimplement the logic
val expected = filters.isNotEmpty() && results.isEmpty() && !isLoading
assertThat(actual).isEqual(expected)
}
}
It seems that I just reimplement the logic again in my test. Is this correct? If not, how can I come up with other properties when the logic is just a simple combination of several flags?
That is not right.
You should not have to calculate the expected value during the test; you should already know what the result should be, set it explicitly, and compare it against the actual result.
Tests work by calling the method you want to test and comparing its result against an already known, expected value.
"the button should be visible when filters are not empty, results is empty, isLoading is false " {
forAll { filters: List<Filters>, results: List<Shop>, isLoading: Boolean ->
val actualVisibleFlag = isButtonVisible(filters, results, isLoading)
val expectedVisibleFlag = true
assertThat(actualVisibleFlag ).isEqual(expectedVisibleFlag )
}
}
Your expected value is known; that is the point I am trying to make.
For each combination of inputs, you create a new test.
The idea here is that when you have a bug you can easily see which existing test fails or you can add a new one which highlights the bug.
If you call a method to compute the result you think you should get, how do you know that method is correct in the first place? How do you know it works correctly for every combination?
You might get away with fewer tests if you reduce the number of flags; do you really need three of them?
Now, most languages/frameworks have (or should have) support for some kind of test matrix or data-driven test, so you can easily write out the values of every combination.
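For example, Kotest's data-driven testing lets you spell out each combination together with its already-known expected value in one table. This is only a sketch: it assumes the isButtonVisible function above, and that Filters and Shop can be constructed with no arguments.

import io.kotest.core.spec.style.StringSpec
import io.kotest.data.forAll
import io.kotest.data.row
import io.kotest.matchers.shouldBe

class ButtonVisibilityTest : StringSpec({
    "button visibility for each combination of flags" {
        // Each row pins the inputs and the expected result we already know.
        // Filters() and Shop() are assumed to have no-argument constructors here.
        forAll(
            row(listOf(Filters()), emptyList<Shop>(), false, true),     // visible
            row(emptyList<Filters>(), emptyList<Shop>(), false, false), // no filters
            row(listOf(Filters()), listOf(Shop()), false, false),       // results present
            row(listOf(Filters()), emptyList<Shop>(), true, false)      // still loading
        ) { filters, results, isLoading, expected ->
            isButtonVisible(filters, results, isLoading) shouldBe expected
        }
    }
})

Adding one row per combination keeps every expected value explicit, and a failing row points directly at the case that broke.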
Related
Another question is if there is any better way to write this method?
public decimal CalculateTotalPrice(List<Product> items)
{
    decimal totalPrice = 0m;
    foreach (Product p in items)
    {
        if (p.Offer == "")
            calc = new DefaultCalc();
        else if (p.Offer == "BuyOneGetOneFree")
            calc = new BuyOneGetOneFreeCalc();
        else if (p.Offer == "ThreeInPriceOfTwo")
            calc = new ThreeInPriceOfTwoCalc();
        totalPrice += calc.Calculate(p.Quantity, p.UnitPrice);
    }
    return totalPrice;
}
You should probably review "Polly Want a Message", by Sandi Metz.
How to unit test a method that has multiple object creation in a switch statement?
An important thing to notice here is that the switch statement is an implementation detail. From the point of view of the caller, this thing is just a function:
public decimal CalculateTotalPrice(List<Product> items);
If the pricing computations are fixed, you can just use the usual example-based tests:
assertEquals(expectedPrice, CalculateTotalPrice(items));
But if they aren't fixed, you can still test against the properties of the method. Scott Wlaschin has a really good introduction to property-based testing. Based on the logic you show here, there are some things we can promise about prices without knowing anything about the strategies in use (a sketch of the second property follows this list):
the price should always be greater than zero.
the price of a list of items is the same as the sum of the prices of the individual items.
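As a rough illustration of the second property, here is what such a test could look like, written in Kotlin with Kotest rather than C#. The Product type, the calculateTotalPrice port and the pricing rules inside it are stand-ins invented for the example, since the real calculator classes aren't shown.

import io.kotest.core.spec.style.StringSpec
import io.kotest.property.Arb
import io.kotest.property.arbitrary.arbitrary
import io.kotest.property.arbitrary.element
import io.kotest.property.arbitrary.int
import io.kotest.property.arbitrary.list
import io.kotest.property.forAll
import java.math.BigDecimal

// Stand-ins for the real types; the actual offer calculators are not shown in the question.
data class Product(val quantity: Int, val unitPrice: BigDecimal, val offer: String)

fun calculateTotalPrice(items: List<Product>): BigDecimal =
    items.fold(BigDecimal.ZERO) { total, p ->
        // Assumed pricing rules, purely for the sake of the example.
        val paidQuantity = when (p.offer) {
            "BuyOneGetOneFree" -> p.quantity - p.quantity / 2
            "ThreeInPriceOfTwo" -> p.quantity - p.quantity / 3
            else -> p.quantity
        }
        total + p.unitPrice * BigDecimal(paidQuantity)
    }

class PricingPropertiesTest : StringSpec({
    // Generator for arbitrary products using any of the known offer strings.
    val productArb: Arb<Product> = arbitrary {
        Product(
            quantity = Arb.int(0..10).bind(),
            unitPrice = BigDecimal(Arb.int(0..10_000).bind()).movePointLeft(2),
            offer = Arb.element(listOf("", "BuyOneGetOneFree", "ThreeInPriceOfTwo")).bind()
        )
    }

    "total of a list equals the sum of the totals of the individual items" {
        forAll(Arb.list(productArb, 0..20)) { items ->
            val itemByItem = items.fold(BigDecimal.ZERO) { acc, p -> acc + calculateTotalPrice(listOf(p)) }
            // compareTo avoids BigDecimal's scale-sensitive equals
            calculateTotalPrice(items).compareTo(itemByItem) == 0
        }
    }
})

The property says nothing about what each offer actually costs, which is exactly the point: it stays valid even when the individual strategies change.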
if there is any better way to write this method?
You might separate choosing the pricing strategy from using the strategy. As Sandi notes, that sort of construct often shows up in more than one place.
foreach (Product p in items)
{
    calc = pricing(p.Offer);
    totalPrice += calc.Calculate(p.Quantity, p.UnitPrice);
}
"pricing" would then become something that you pass into this function (either as an argument, or as a dependency).
In effect, you would end up with three different kinds of test.
Checks that pricing returns the right pricing strategy for each offer.
Checks that each strategy performs its own calculation correctly.
Checks that CalculateTotalPrice computes the sum correctly.
Personally, I prefer to treat the test subject as a single large black box, but there are good counterarguments. Horses for courses.
Constructors cannot be mocked (at least with free mocking frameworks).
Write tests without mocking as long as your tests run fast and the test-case setup is not overly complicated.
In your particular case you should be able to write tests without mocking.
// Prepare the data
var products = new List<Product>
{
    new Product { Quantity = 10, UnitPrice = 5.0m, Offer = "" },
    new Product { Quantity = 2, UnitPrice = 3.0m, Offer = "BuyOneGetOneFree" },
    new Product { Quantity = 3, UnitPrice = 2.0m, Offer = "ThreeInPriceOfTwo" },
};
// Prepare the expected total
var expected = 57.0m; // 10 * 5.0 + 1 * 3.0 + 2 * 2.0
// Run the test
var actual = CalculateTotalPrice(products);
actual.Should().Be(expected); // passes
With this approach, tests will not depend on implementation details.
You will be able to freely play with designs without rewriting tests every time you change your implementation logic.
The other answers are technically fine, but I would suggest one thing:
if(p.Offer == "")
calc = new DefaultCalc();
else if(p.Offer == "BuyOneGetOneFree")
calc = new BuyOneGetOneFreeCalc();
else if(p.Offer == "ThreeInPriceOfTwo")
calc = new ThreeInPriceOfTwoCalc()
should absolutely go into its own method/scope/whatever.
You are mapping a string to a specific calculator. That should happen in one place, and one place only. First you do it here; then some other method comes along that needs the same mapping, and you start duplicating.
I have a CouchDB database which has documents with the following format:
{ createdBy: 'userId', at: 123456, type: 'action_type' }
I want to write a view that will give me how many actions of each type were created by which user. I was able to do that by creating a view that emits this:
emit([doc.createdBy, doc.type, doc.at], 1);
With the "sum" reduce function, consuming the view in this way:
/_design/userActionsDoc/_view/userActions?group_level=2
This returns rows just the way I want:
"rows":[ {"key":["userId","ACTION_1"],"value":20}, ...
The problem is that now I want to filter the results by a time period: the exact same information, but only counting actions that happened within a given range.
I can filter the documents by "at" if I emit the fields in a different order:
emit([doc.at, doc.type, doc.createdBy], 1);
and query with:
?group_level=3&startkey=[149328316160]&endkey=[1493283161647,{},{}]
but then I won't get the results grouped by userId and actionType. Is there a way to have both? Maybe by writing my own reduce function?
I feel your pain. I have done two different things in the past to attempt to solve similar issues.
The first pattern is a pain and may work great or may not work at all. I've experienced both. Your map function looks something like this:
function (doc) {
    var obj = {};
    obj[doc.createdBy] = {};
    obj[doc.createdBy][doc.type] = 1;
    emit(doc.at, obj);
    // Ignore this for now
    // emit(doc.at, JSON.stringify(obj));
}
Then your reduce function looks like this:
function (key, values, rereduce) {
    var output = {};
    values.forEach(function (v) {
        // Ignore this for now
        // v = JSON.parse(v);
        for (var user in v) {
            // Make sure the bucket for this user exists before accumulating into it
            output[user] = output[user] || {};
            for (var action in v[user]) {
                output[user][action] = (output[user][action] || 0) + v[user][action];
            }
        }
    });
    return output;
    // Ignore this for now
    // return JSON.stringify(output);
}
With large datasets, this usually results in a CouchDB error stating that your reduce function is not shrinking fast enough. In that case, you may be able to stringify/parse the objects as shown in the "ignore" comments in the code.
The reasoning behind this is that CouchDB ultimately wants a reduce function to output a simple value, like a string or an integer. In my experience, it doesn't seem to matter how long the string gets, as long as it remains a string; if you output an object, at some point the function errors out because you have added too many properties to that object.
The second pattern is potentially better, but it requires that your time periods are "defined" ahead of time. If your time period requirements can be locked down to a specific year, month, day, quarter, etc., you just emit multiple times in your map function. Below I assume the at property is epoch milliseconds, or at least something the Date constructor can parse accurately.
function (doc) {
    var time_key;
    var my_date = new Date(doc.at);

    //// Used for filtering results in a given year
    //// e.g. startkey=["2017"]&endkey=["2017",{}]
    time_key = my_date.toISOString().substr(0, 4);
    emit([time_key, doc.createdBy, doc.type], 1);

    //// Used for filtering results in a given month
    //// e.g. startkey=["2017-01"]&endkey=["2017-01",{}]
    time_key = my_date.toISOString().substr(0, 7);
    emit([time_key, doc.createdBy, doc.type], 1);

    //// Used for filtering results in a given quarter
    //// e.g. startkey=["2017Q1"]&endkey=["2017Q1",{}]
    // getUTCMonth() stays consistent with toISOString(); +1 makes quarters run Q1..Q4
    time_key = my_date.toISOString().substr(0, 4) + 'Q' + (Math.floor(my_date.getUTCMonth() / 3) + 1).toString();
    emit([time_key, doc.createdBy, doc.type], 1);
}
Then, your reduce function stays the same as in your original. Essentially you're just defining a constant value for the first item in your key that corresponds to a defined time period. This works well for business reporting, but not so well for flexible, ad-hoc time periods.
I'm trying to create an MPMediaQuery that will return results in chronological order, preferably ascending or descending based on the query itself.
Right now my query returns items in ascending chrono order (oldest at top) but I'd like to be able to reverse the order.
I need my results to be in an MPMediaQuery so that I can use it with the MPMediaPlayer.
let qryPodcasts = MPMediaQuery()
let titleFilter = MPMediaPropertyPredicate(value: "This American Life",
                                           forProperty: MPMediaItemPropertyPodcastTitle,
                                           comparisonType: .equalTo)
qryPodcasts.addFilterPredicate(titleFilter)
EDIT:
I've been able to get a step closer to my goal; however, I still have a problem that causes a crash.
I've added this code, which sorts the resulting items by releaseDate. Not all podcasts provide one, though, so this property can be nil, causing a crash:
let myItems = qryPodcasts.items?.sorted { ($0.releaseDate)! > ($1.releaseDate)! }
let podCollection = MPMediaItemCollection(items: myItems!)
myMP.setQueue(with: podCollection)
How can I avoid this error and how do I handle items without a releaseDate?
Just for this part:
How can I avoid this error and how do I handle items without a releaseDate?
Avoid forced unwrapping (!) and provide a default value for nil:
let myItems = qryPodcasts.items?.sorted { ($0.releaseDate ?? Date.distantFuture) > ($1.releaseDate ?? Date.distantFuture) }
You'd better check all the other forced unwrappings in your code as well; are you really 100% sure those values are never nil?
How do I write a simple ABAP Unit Assert statement to check if any call, expression or other statement evaluates to true?
I can't see any basic assert() or assert_true() methods in CL_AUNIT_ASSERT while I'd expect those to be very common. I can approximate such an assert as follows, but is there no cleaner way?
cl_aunit_assert=>assert_equals(
  act = boolc( lv_value > 100 OR lv_value < 2 )
  exp = abap_true ).
cl_aunit_assert=>assert_equals(
  act = mo_model->is_active( )
  exp = abap_true ).
Depending on your SAP NetWeaver stack, you can (or should) use the newer ABAP Unit class CL_ABAP_UNIT_ASSERT, which is available from Basis release 7.02 onwards. SAP declared the class FINAL, so it is impossible to inherit from it, but on the other hand they added some assert methods, such as ASSERT_TRUE.
Here is a possible usage of this method:
cl_abap_unit_assert=>assert_true(
  exporting
    act = m_ref_foo->is_bar( l_some_var )
    msg = |is_bar method fails with input { l_some_var }|
).
For the releases I have access to, there's probably no shorter way than the one you outlined. You can create a subclass of CL_AUNIT_ASSERT and add your own static ASSERT_TRUE method. It's not a bad idea to do so, and at the same time make your local ABAP Unit test class a subclass of that ZCL_AUNIT_ASSERT; this way, you can omit the cl_aunit_assert=> prefix, which will save some keystrokes.
You cannot see such methods because there is no boolean type in ABAP.
In Java, C++, or C you can assign the result of a condition to a variable, like this:
int i = 5;
boolean result = i > 3;
You cannot do the same thing in ABAP, as there is no boolean type. What is a one-liner in other languages will therefore always be more verbose in ABAP.
DATA: i TYPE i VALUE 5.
DATA: result TYPE abap_bool.
IF i > 3.
result = abap_true.
ELSE.
result = abap_false.
ENDIF.
The boolc expression you used is a fairly new addition to the language, and many customers will not be on a release that supports it for a long time. Also, the CL_AUNIT_ASSERT class was created well before these new language elements arrived.
So right now it is possible to write the above as a one-liner, but there is still no boolean type in the language.
DATA: i TYPE i VALUE 5.
DATA: result TYPE abap_bool.
result = boolc( i > 3 ).
There may be no boolean type, but you could simply use ASSERT_INITIAL or ASSERT_NOT_INITIAL in this case: boolean values are emulated by either 'X' (true) or space (false), and the latter is the initial value in ABAP.
The cleanest way is to just fail:
if value > limit.
  cl_abap_unit_assert=>fail( ).
endif.
Or, to be more informative:
cl_abap_unit_assert=>fail( msg = 'Limit exceeded' ).
I have a pretty simple unit test that is testing the proper generation of a generic List<SelectListItem>.
[TestMethod()]
public void PopulateSelectListWithSeperateTextAndValueLists()
{
    //Arrange
    SetupDisplayAndValueLists();
    bool allOption = false;

    //Act
    List<SelectListItem> result = ControllerHelpers.PopulateSelectList(valueList, displayList, allOption);

    //Assert
    Assert.AreEqual(expected, result);
}
The assert always fails, even though I have checked and confirmed that both objects contain exactly the same values.
Are there any special considerations when unit testing return values that are generics?
Updated with new tests and their status
Assert.AreEqual(4, result.Count); //passes
Assert.AreEqual(result[0].Text, expected[0].Text, "0 element is not found");//passes
Assert.AreEqual(result[1].Text, expected[1].Text, "1 element is not found");//passes
Assert.AreEqual(result[2].Text, expected[2].Text, "2 element is not found");//passes
Assert.AreEqual(result[3].Text, expected[3].Text, "3 element is not found");//passes
Assert.AreEqual(result[0].Value, expected[0].Value, "0 element is not found");//passes
Assert.AreEqual(result[1].Value, expected[1].Value, "1 element is not found");//passes
Assert.AreEqual(result[2].Value, expected[2].Value, "2 element is not found");//passes
Assert.AreEqual(result[3].Value, expected[3].Value, "3 element is not found");//passes
Assert.IsTrue(result.Contains(expected[0]), "0 element is not found"); //doesn't pass
Assert.IsTrue(result.Contains(expected[1]), "1 element is not found"); //doesn't pass
Assert.IsTrue(result.Contains(expected[2]), "2 element is not found"); //doesn't pass
Assert.IsTrue(result.Contains(expected[3]), "3 element is not found"); //doesn't pass
Assert.AreEqual(expectedList, result); //doesn't pass
Use the CollectionAssert class instead of the Assert class. You can choose to validate that items are in the same order, or just that they both have the same items overall.
Again though, if the items in your collection are reference types and not value types, it may not compare them the way you want (strings will work fine, though).
Update: Since you're comparing the .Text property of those items, you could try to use LINQ to return the Text properties as a collection. Then, CollectionAssert will work exactly as you want it for comparing the actual and expected collections of Text.
The issue here is probably not related to generics but to how equality is implemented for the two lists. Equals() on a list may be the Object implementation, which only checks whether it is the same instance and does not compare contents.
When I need to test that the contents of a list have been populated as expected, using C# and mbUnit, I tend to check that the count is equal and then check each item in the list. Alternatively, if I'm not bothered about the order of the items in the result list, I can check that it contains each expected item.
Assert.AreEqual(3, result.Count);
Assert.Contains(expectedList[0], result);
Assert.Contains(expectedList[1], result);
Assert.Contains(expectedList[2], result);
Edit:
It looks like SelectListItem uses the Object.Equals() implementation and only checks for referential equality (same instance). There are two solutions that come to mind.
Write a method that checks whether a list contains an item with a given text and value, then reuse it. It's a little cleaner, but not hugely so, unless you have more tests.
Use LINQ to select all the texts, and all the values, from the result list. Then use asserts with CollectionEquivalentConstraint to check that the lists are equivalent. (Note I haven't tested this myself and am going off the online documentation.)
var texts = result.Select(x => x.Text).ToList();
var values = result.Select(x => x.Value).ToList();
Assert.That(texts, Is.EquivalentTo(new string[] { expectedList[0].Text, expectedList[1].Text, ... }));
Assert.That(values, Is.EquivalentTo(new string[] { expectedList[0].Value, expectedList[1].Value, ... }));
You could also simplify this significantly by generating your expected values as two separate lists. You could likely also generate a Dictionary and provide its Keys and Values as the equivalent lists.
Dim i As Integer
Assert.AreEqual(expected.Count, actual.Count)
For i = 0 To expected.Count - 1
Assert.AreEqual(expected.ToList.Item(i).ID, actual.ToList.Item(i).ID)
Next
In this case I am comparing the IDs; I suppose you could compare any value-type key field and get the same result. This passed, while none of the CollectionAssert methods did me any good.