I have to iterate over a map and, for every key, make checks based on the current element, so I tried:
class TestSuite extends Specification {
    @Shared
    def elements

    def setupSpec() {
        elements = ['a.txt': 1, 'b.txt': 2]
    }

    @Unroll
    def 'test for #first and #second'() {
        expect:
        true

        where:
        [first, second] << [elements.keySet(), elements[first].findResults { key, value ->
            key.substring(key.indexOf('.') + 1)
        }].combinations()
    }
}
but Spock fails and says that first is unknown.
How can I make the two values appear in the name of the test, so that @Unroll shows their values?
Edited
I have to say that your code does not make much sense. I hesitated to just downvote your question instead of replying. There is so much wrong with it; it shows a lack of effort in formulating your question.
The first red flag is that you did not even try to compile this code. You have a map with String keys and integer values:
your map literal does not compile; key/value pairs are separated by commas in Groovy.
You call methods on this map (Map#key()?) that are not in the API.
Then, there are two possibilities: either your imaginary key() method returns the key, which does not make any sense because first IS the key already, or key() returns the value, which is really bad naming. But bad naming is not all, because you then call toUpperCase() on an integer... This is a mess!
Nevertheless, I am going to show you how you can base the value of a where variable on the value of another where variable, because that's the core part of your question:
import spock.lang.*

class MyFirstSpec extends Specification {
    // elements needs to be @Shared to be used in a where block
    @Shared
    def elements = ['a': 1, 'b': 2]

    @Unroll
    def 'test for #first and #second and #third'() {
        expect:
        true

        where:
        first << elements.keySet()

        and:
        second = elements[first]
        third = first.toUpperCase()
    }
}
resulting in
- test for a and 1 and A
- test for b and 2 and B
I have a situation where I am using functions to model rule applications, with each function returning the actions it would take when applied, or, if the rule cannot be applied, the empty list. I have a number of rules that I would like to try in sequence and short-circuit. In other languages I am used to, I would treat the empty sequence as false/None and chain them with orElse, like this:
def ruleOne(): Seq[Action] = ???
def ruleTwo(): Seq[Action] = ???
def ruleThree(): Seq[Action] = ???
def applyRules(): Seq[Action] = ruleOne().orElse(ruleTwo).orElse(ruleThree)
However, as I understand the situation, this will not work and will, in fact, do something other than what I expect.
I could use return which feels bad to me, or, even worse, nested if statements. if let would have been great here, but AFAICT Scala does not have that.
What is the idiomatic approach here?
There are a few different approaches here.
One of them is combining all the actions inside a Seq (creating a Seq[Seq[Action]]) and then using find (which returns the first element that matches a given condition). For instance:
Seq(ruleOne, ruleTwo, ruleThree).find(_.nonEmpty).getOrElse(Seq.empty[Action])
I do not know your application domain, but the final getOrElse converts the Option produced by find into a Seq. Note, though, that this approach evaluates all the sequences (no short-circuiting).
Another approach consists of enriching Seq with a method that simulates your idea of orElse, using the pimp-my-library/extension-method pattern:
implicit class RichSeq[T](left: Seq[T]) {
  def or(right: => Seq[T]): Seq[T] = if (left.isEmpty) right else left
}
The by-name parameter enables short-circuit evaluation: the right sequence is computed only if the left sequence is empty.
Scala 3 has better syntax for this kind of abstraction:
extension [T](left: Seq[T]) {
  def or(right: => Seq[T]): Seq[T] = if (left.nonEmpty) left else right
}
In this way, you can call:
ruleOne or ruleTwo or ruleThree
Scastie for Scala 2
Scastie for Scala 3
lang_group = 'en'
for place_category in place['categories']:
    translation, created = \
        PlaceTypesTranslations.objects.get_or_create(
            name=place_category, lang_group=lang_group,
            defaults={'place_type_group': PlaceTypesGroups.objects.create()})
In this case, if the loop has 1000 iterations and, say, created=True 500 times and created=False the other 500 times, 1000 PlaceTypesGroups still get created. So apparently, even when get_or_create performs a get, the expression in defaults is evaluated anyway.
The same algorithm, but different approach:
lang_group = 'en'
for place_category in place['categories']:
    if PlaceTypesTranslations.objects.filter(name=place_category, lang_group=lang_group).exists():
        place_cat_trans = PlaceTypesTranslations.objects.get(name=place_category, lang_group=lang_group)
        place_group = place_cat_trans.place_type_group
    else:
        place_group = PlaceTypesGroups.objects.create()
        place_cat_trans = PlaceTypesTranslations.objects.create(name=place_category,
                                                                lang_group=lang_group,
                                                                place_type_group=place_group)
In this case only 500 PlaceTypesGroups are created, as expected.
Why is that? What am I not seeing in the first case? Why does get_or_create create 1000 PlaceTypesGroups?
That's just the way Python expressions always work. Anything inside an expression must always be fully evaluated before the expression itself can be passed to a function or method.
However, Django specifically lets you pass a callable, rather than a value, in the defaults dict. So you can do:
PlaceTypesTranslations.objects.get_or_create(
    name=place_category, lang_group=lang_group,
    defaults={'place_type_group': PlaceTypesGroups.objects.create})
and it will call the create method as required.
It's called 1000x because you are assigning the returned value of the function. I'll start with a simple example:
place_type_group = some_function()
The variable now contains whatever the function returns, right?
Now if you wrap it in a dictionary, it's still the same thing, just wrapped in a dictionary:
dict(place_type_group=some_function())
The element in the dict still contains the value returned by some_function(). The dictionary above is equivalent to the following, which is what you do in your code (i.e. assigning the function's return value into a variable):
{'place_type_group': some_function()}
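You can demonstrate this eager evaluation in plain Python, independent of Django. In this minimal sketch, make_group and get_or_create are toy stand-ins (not the Django API) for PlaceTypesGroups.objects.create and QuerySet.get_or_create:

```python
calls = []

def make_group():
    """Toy stand-in for PlaceTypesGroups.objects.create()."""
    calls.append(1)
    return "group"

def get_or_create(defaults=None):
    """Toy stand-in: pretend the row already exists, so defaults is unused.
    (Django would only use defaults on a miss, invoking callables then.)"""
    return "existing", False

# Eager: make_group() runs while the argument dict is being built,
# before get_or_create is even entered.
get_or_create(defaults={'place_type_group': make_group()})
print(len(calls))  # 1 -- a group was created although nothing needed it

# Deferred: pass the function object itself; nothing calls it here.
get_or_create(defaults={'place_type_group': make_group})
print(len(calls))  # still 1
```

This is exactly why your first loop creates a group on every iteration: the `PlaceTypesGroups.objects.create()` call is part of building the `defaults` dict, not part of `get_or_create`'s logic.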
It is apparently Pythonic to return values that can be treated as 'False' versions of the successful return type, such that if MyIterableObject: do_things() is a simple way to deal with the output whether or not it is actually there.
With generators, bool(MyGenerator) is always True even if it would have a len of 0 or something equally empty. So while I could write something like the following:
result = list(get_generator(*my_variables))
if result:
do_stuff(result)
It seems like it defeats the benefit of having a generator in the first place.
Perhaps I'm just missing a language feature or something, but what is the pythonic language construct for explicitly indicating that work is not to be done with empty generators?
To be clear, I'd like to be able to give the user some insight as to how much work the script actually did (if any) - contextual snippet as follows:
# Python 2.7
templates = files_from_folder(path_to_folder)
result = list(get_same_sections(templates))  # returns generator

if not result:
    msg("No data to sync.")
    sys.exit()

for data in result:
    for i, tpl in zip(data, templates):
        tpl['sections'][i]['uuid'] = data[-1]

msg("{} sections found to sync up.".format(len(result)))
It works, but I think that ultimately it's a waste to change the generator into a list just to see if there's any work to do, so I assume there's a better way, yes?
EDIT: I get the sense that generators just aren't supposed to be used in this way, but I will add an example to show my reasoning.
There's a semi-popular 'helper function' in Python that you see now and again when you need to traverse a structure like a nested dict or what-have-you. Usually called getnode or getn, whenever I see it, it reads something like this:
def get_node(seq, path):
    for p in path:
        if p in seq:
            seq = seq[p]
        else:
            return ()
    return seq
So in this way, you can make it easier to deal with the results of a complicated path to data in a nested structure without always checking for None or try/except when you're not actually dealing with 'something exceptional'.
mydata = get_node(my_container, ('path', 2, 'some', 'data'))
if mydata:  # could also be "for x in mydata", etc.
    do_work(mydata)
else:
    something_else()
It's looking less like this kind of syntax would (or could) exist with generators, without writing a class that handles generators in this way as has been suggested.
A generator does not have a length until you've exhausted its iterations; the only way to find out whether it yields anything is to exhaust it:
items = list(myGenerator)
if items:
    # do something
Unless you write a class with a __nonzero__ method that internally looks at your items list:
class MyGenerator(object):
    def __init__(self, items):
        self.items = items

    def __iter__(self):
        for i in self.items:
            yield i

    def __nonzero__(self):  # __bool__ in Python 3
        return bool(self.items)

>>> bool(MyGenerator([]))
False
>>> bool(MyGenerator([1]))
True
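If you want to branch on emptiness without materialising the whole generator, one common workaround is to pull just the first item and chain it back on. This is a sketch (the `peek` helper is invented for illustration, not a standard-library function):

```python
import itertools

def peek(gen):
    """Pull one item from gen; return (is_empty, equivalent_iterator)."""
    try:
        first = next(gen)
    except StopIteration:
        return True, iter(())
    # Re-attach the consumed item so the caller still sees the full stream.
    return False, itertools.chain([first], gen)

empty, gen = peek(x * 2 for x in range(3))
if not empty:
    print(list(gen))  # [0, 2, 4]
```

Unlike list(), this only evaluates a single element up front, so the rest of the stream stays lazy. It still can't give you a count, though; len() fundamentally requires exhausting the generator.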
I need to create a structure that, in my mind, is similar to an array of linked lists (where a Python list = array and dictionary = linked list). I have a list called blocks, and this is something like what I am looking to make:
blocks[0] = {dictionary},{dictionary},{dictionary},...
blocks[1] = {dictionary},{dictionary},{dictionary},...
etc..
currently I build the blocks as such:
blocks = []
blocks.append[()]
blocks.append[()]
blocks.append[()]
blocks.append[()]
I know that must look ridiculous. I just cannot see in my head what that just made, which is part of my problem. I assign to a block from a different list of dictionary items. Here is a brief overview of how a single block is created...
hold = {}
hold['file'] = file
hold['count'] = count
hold['mass'] = mass_lbs
mg1.append(hold)
## this append can happen several times to mg1

blocks[i].append(mg1[j])
## where i is an index for the block I want to append to, and j is the list
## index corresponding to whichever dictionary item of mg1 I want to grab.
The reason I want these four main indices in blocks is so that I have shorter code with just the one list instead of block1 block2 block3 block4, which would just make the code way longer than it is now.
Okay, going off of what was discussed in the comments, you're looking for a simple way to create a structure that is a list of four items, where each item is a list of dictionaries, and all the dictionaries in one of those lists have the same keys but not necessarily the same values.

However, if you know exactly what keys each dictionary will have and that never changes, then it might be worth considering classes that wrap dictionaries, and making each of the four lists a list of objects. This would be easier to keep in your head, and a bit more Pythonic in my opinion. You also gain the advantage of ensuring that the keys in the dictionary are static, plus you can define helper methods. And by emulating the methods of a container type, you can still use dictionary syntax.
class BlockA:
    def __init__(self):
        self.dictionary = {'file': None, 'count': None, 'mass': None}

    def __len__(self):
        return len(self.dictionary)

    def __getitem__(self, key):
        return self.dictionary[key]

    def __setitem__(self, key, value):
        if key in self.dictionary:
            self.dictionary[key] = value
        else:
            raise KeyError

    def __repr__(self):
        return str(self.dictionary)
block1 = BlockA()
block1['file'] = "test"
block2 = BlockA()
block2['file'] = "other test"
Now, you've got a guarantee that all instances of your first block object will have the same keys and no additional keys. You can make similar classes for your other blocks, or some general class, or some mix of the two using inheritance. Now to make your data structure:
blocks = [ [block1, block2], [], [], [] ]
print(blocks) # Or "print blocks" if you're not using Python 3.x
blocks[0][0]['file'] = "some new file"
print(blocks)
It might also be worthwhile to have a class for this blocks container, with specific methods for adding blocks of each type and accessing blocks of each type. That way you wouldn't trip yourself up with accidentally adding the wrong kind of block to one of the four lists or similar issues. But depending on how much you'll be using this structure, that could be overkill.
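To make that last suggestion concrete, here is a hypothetical sketch (the names BlockContainer and add_a are invented for illustration; BlockA is a stripped-down copy of the class above):

```python
class BlockA:
    """Minimal stand-in for the BlockA class defined earlier."""
    def __init__(self):
        self.dictionary = {'file': None, 'count': None, 'mass': None}

    def __getitem__(self, key):
        return self.dictionary[key]

    def __setitem__(self, key, value):
        if key not in self.dictionary:
            raise KeyError(key)
        self.dictionary[key] = value

class BlockContainer:
    """Hypothetical wrapper around the four lists: one add method per block
    type, so a block can't accidentally land in the wrong list."""
    def __init__(self):
        self.a_blocks = []
        # ... b_blocks, c_blocks, d_blocks would follow the same pattern

    def add_a(self, block):
        if not isinstance(block, BlockA):
            raise TypeError("expected a BlockA")
        self.a_blocks.append(block)

container = BlockContainer()
block = BlockA()
block['file'] = "test"
container.add_a(block)
print(container.a_blocks[0]['file'])  # test
```

Whether this is worth it depends, as said, on how much you use the structure; for a one-off script the plain list of four lists is fine.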
I have a pretty simple unit test that is testing the proper generation of a generic List<SelectListItem>.
[TestMethod()]
public void PopulateSelectListWithSeperateTextAndValueLists()
{
    //Arrange
    SetupDisplayAndValueLists();
    bool allOption = false;

    //Act
    List<SelectListItem> result = ControllerHelpers.PopulateSelectList(valueList, displayList, allOption);

    //Assert
    Assert.AreEqual(expected, result);
}
The Assert always returns false, even though I have checked and confirmed that both objects have the same exact values.
Are there any special considerations when unit testing return results that are generics?
Updated with new tests and their status
Assert.AreEqual(4, result.Count); //passes
Assert.AreEqual(result[0].Text, expected[0].Text, "0 element is not found");//passes
Assert.AreEqual(result[1].Text, expected[1].Text, "1 element is not found");//passes
Assert.AreEqual(result[2].Text, expected[2].Text, "2 element is not found");//passes
Assert.AreEqual(result[3].Text, expected[3].Text, "3 element is not found");//passes
Assert.AreEqual(result[0].Value, expected[0].Value, "0 element is not found");//passes
Assert.AreEqual(result[1].Value, expected[1].Value, "1 element is not found");//passes
Assert.AreEqual(result[2].Value, expected[2].Value, "2 element is not found");//passes
Assert.AreEqual(result[3].Value, expected[3].Value, "3 element is not found");//passes
Assert.IsTrue(result.Contains(expected[0]), "0 element is not found"); //doesn't pass
Assert.IsTrue(result.Contains(expected[1]), "1 element is not found"); //doesn't pass
Assert.IsTrue(result.Contains(expected[2]), "2 element is not found"); //doesn't pass
Assert.IsTrue(result.Contains(expected[3]), "3 element is not found"); //doesn't pass
Assert.AreEqual(expectedList, result); //doesn't pass
Use the CollectionAssert class instead of the Assert class. You can choose to validate that items are in the same order, or just that they both have the same items overall.
Again though, if the items in your collection are reference types and not value types, it may not compare them how you want. (Though strings will work fine)
Update: Since you're comparing the .Text property of those items, you could try to use LINQ to return the Text properties as a collection. Then, CollectionAssert will work exactly as you want it for comparing the actual and expected collections of Text.
The issue here might not be related to generics, but to how equality of two lists is implemented. Equals() on a list may be the Object implementation, checking only whether it's the same instance, not comparing contents.
When I need to test that the contents of a list have been populated as expected using C# and mbUnit, I tend to check that the count is equal, and then check each item within the list. Alternately, if I'm not bothered about the order of the items in the result list, I can check that it contains each item.
Assert.AreEqual(3, result.Count);
Assert.Contains(expectedList[0], result);
Assert.Contains(expectedList[1], result);
Assert.Contains(expectedList[2], result);
Edit:
It looks like SelectListItem uses the Object.Equals() implementation, and only checks for referential equality (same instance). There are two solutions that come to mind.
Write a method to check a list contains an item with a given text and value, then reuse that. It's a little cleaner, but not hugely so, unless you have more tests.
Use LINQ statements to select all the texts, and all the values, from the result list. Then use asserts with collection-equivalence constraints to check the lists are equal. (Note I haven't tested this myself and am going off online documentation.)
var texts = result.Select(x => x.Text).ToList();
var values = result.Select(x => x.Value).ToList();

Assert.That(texts, Is.EquivalentTo(new string[] { expectedList[0].Text, expectedList[1].Text, ... }));
Assert.That(values, Is.EquivalentTo(new string[] { expectedList[0].Value, expectedList[1].Value, ... }));
You could also simplify this significantly by generating your expected values as 2 separate lists. You could likely also generate a Dictionary, and provide Keys and Values as the equivalent lists.
Dim i As Integer
Assert.AreEqual(expected.Count, actual.Count)
For i = 0 To expected.Count - 1
    Assert.AreEqual(expected.ToList.Item(i).ID, actual.ToList.Item(i).ID)
Next
In this case I am comparing the IDs, I suppose you could compare any value-type key field and get the same. This passed, while none of the CollectionAssert methods did me any good.
Lisa Morgan