I'm trying to create a spec test for the following recipe code:
if node.attribute?(node['tested_cookbook']['some_attribute'])
  include_recipe('tested_cookbook::first')
else
  include_recipe('tested_cookbook::second')
end
I have the following spec for this:
require 'spec_helper'

describe 'tested_cookbook::default' do
  let(:chef_run) {
    ChefSpec::SoloRunner.new(platform: 'windows', version: '2008R2') do |node|
      node.set['tested_cookbook']['some_attribute'] = "some_value"
    end.converge(described_recipe)
  }

  it 'includes recipe iis' do
    expect(chef_run).to include_recipe('tested_cookbook::first')
  end
end
The problem is that this test will always fail.
How do I properly mock the outcome of 'node.attribute?' ?
Thank you.
I'm not sure you can override the node object in ChefSpec without monkey patching, which I think is probably more trouble than it's worth. I almost never see node.attribute? used, so it may be somewhat of an anti-pattern. (Do you really care whether it was set at all, versus whether it has a non-nil value?)
I would just avoid using attribute? in the first place, e.g.
Recipe:
if node['tested_cookbook'] && node['tested_cookbook']['some_attribute']
  include_recipe('tested_cookbook::first')
else
  include_recipe('tested_cookbook::second')
end
Spec:
require 'spec_helper'

describe 'tested_cookbook::default' do
  let(:chef_run) {
    ChefSpec::SoloRunner.new(platform: 'windows', version: '2008R2') do |node|
      node.set['tested_cookbook']['some_attribute'] = "some_value"
    end.converge(described_recipe)
  }

  it 'includes recipe iis' do
    expect(chef_run).to include_recipe('tested_cookbook::first')
  end
end
It's common practice to give these attributes a default value, too, so it would be even more idiomatic to say:
attributes/default.rb:
default['tested_cookbook']['some_attribute'] = 'second'
recipe:
include_recipe "tested_cookbook::#{node['tested_cookbook']['some_attribute']}"
And then in your spec, do the same check as before. You're using an attribute to run ::second, but allowing someone to override it to ::first. If you don't like the pattern of actually using the attribute value to include, you could make it a flag and keep your previous if-statement too.
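For completeness, the spec for the attribute-driven include could look roughly like the following (an untested sketch reusing the runner setup from above). Note that with the include_recipe "tested_cookbook::#{...}" recipe, the overridden value has to be an actual recipe name, so here it overrides the 'second' default with 'first':

require 'spec_helper'

describe 'tested_cookbook::default' do
  let(:chef_run) {
    ChefSpec::SoloRunner.new(platform: 'windows', version: '2008R2') do |node|
      # override the 'second' default set in attributes/default.rb
      node.set['tested_cookbook']['some_attribute'] = 'first'
    end.converge(described_recipe)
  }

  it 'includes the recipe named by the attribute' do
    expect(chef_run).to include_recipe('tested_cookbook::first')
  end
end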
I'm trying to write a test to make sure a particular attribute doesn't exist in my output HTML; however, I'm having trouble figuring out the appropriate way.
I'm using Jest and Enzyme.
The example HTML being tested is an anchor tag along these lines (href omitted):
<a href="...">Material Design</a>
and the lines that do the testing are...
const linkProps = component.find('a').first().props();
expect( linkProps ).not.toHaveProperty('rel');
I'm not sure if the first line is the most efficient way to find the tag, but it's confirmed to be working. The second line, however, fails even though the rel attr doesn't exist in the html.
It fails with...
expect(received).not.toHaveProperty(path)
Expected path: not "rel"
Received value: undefined
When I use toHaveProperty to test that an attribute does exist, it's fine, but what's the appropriate way to test that it doesn't exist?
I've realised that one possible answer is to use prop() and toBe().
If I'm expecting the attribute to be undefined, then that's what I put into the toBe function.
const linkTag = component.find('a');
expect( linkTag.prop('rel') ).toBe(undefined);
There might be better answers though, so I'm not marking this one as correct just yet.
If your test title is 'attribute "rel" should not exist', I would mirror that wording in the test itself, like:
test('attribute "rel" should not exist', () => {
  const linkTag = component.find('a');
  expect(linkTag).not.toHaveAttribute('rel');
});
Check the toHaveAttribute docs!
How could I get the name / version of the next migration to execute? Something similar to migrations:latest but more like migrations:next. I need this as input to another command so it needs to be parseable output (can't really just use migrations:status).
You can use the Configuration object of the Doctrine migrations bundle. This is even (somewhat) documented as custom configuration.
Here is a minimal code example that works for me:
public function migrationVersionAction(EntityManagerInterface $em, ParameterBagInterface $parameters) {
    $connection = $em->getConnection();

    $configuration = new \Doctrine\Migrations\Configuration\Configuration($connection);
    $configuration->setMigrationsNamespace($parameters->get('doctrine_migrations.namespace'));
    $configuration->setMigrationsDirectory($parameters->get('doctrine_migrations.dir_name'));
    $configuration->setMigrationsTableName($parameters->get('doctrine_migrations.table_name'));

    return new JsonResponse([
        'prev' => $configuration->resolveVersionAlias('prev'),
        'current' => $configuration->resolveVersionAlias('current'),
        'next' => $configuration->resolveVersionAlias('next'),
        'latest' => $configuration->resolveVersionAlias('latest'),
    ]);
}
You might want to set the remaining parameters as well though, especially if they differ from the defaults. For this, the configuration documentation might help in addition to the link above.
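Since the original question needs output that another command can parse, the same lookup can also be wrapped in a console command. The following is a rough, untested sketch; the class and command names are made up, and the configuration calls are simply the ones from the controller above:

<?php

namespace App\Command;

use Doctrine\ORM\EntityManagerInterface;
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\DependencyInjection\ParameterBag\ParameterBagInterface;

class NextMigrationCommand extends Command
{
    protected static $defaultName = 'app:migrations:next';

    private $em;
    private $parameters;

    public function __construct(EntityManagerInterface $em, ParameterBagInterface $parameters)
    {
        parent::__construct();
        $this->em = $em;
        $this->parameters = $parameters;
    }

    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        // Build the migrations configuration exactly as in the controller example above
        $configuration = new \Doctrine\Migrations\Configuration\Configuration($this->em->getConnection());
        $configuration->setMigrationsNamespace($this->parameters->get('doctrine_migrations.namespace'));
        $configuration->setMigrationsDirectory($this->parameters->get('doctrine_migrations.dir_name'));
        $configuration->setMigrationsTableName($this->parameters->get('doctrine_migrations.table_name'));

        // Print only the next version so the output is trivially parseable, e.g.
        //   NEXT=$(bin/console app:migrations:next)
        $output->writeln((string) $configuration->resolveVersionAlias('next'));

        return 0;
    }
}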
I have created RSpec tests for my scopes (scope1, scope2 and scope3) and they pass as expected. I would also like to add some tests for a class method of mine, which is what is actually called from my controller (the controller calls the scopes indirectly via this class method):
def self.my_class_method(arg1, arg2)
  scoped = self.all

  if arg1.present?
    scoped = scoped.scope1(arg1)
  end

  if arg2.present?
    scoped = scoped.scope2(arg2)
  elsif arg1.present?
    scoped = scoped.scope3(arg1)
  end

  scoped
end
It seems a bit redundant to run the same scope tests for each scenario in this class method when I know they already pass, so I assume I really only need to ensure that the right scopes are called/applied depending on the args being passed into this class method.
Can someone advise on what this RSpec test would look like?
I thought it might be something along the lines of
expect_any_instance_of(MyModel.my_class_method(arg1, nil)).to receive(:scope1).with(arg1, nil)
but that doesn't work.
Confirmation that this is all that's necessary to test in this situation, given that I've already tested the scopes anyway, would also be reassuring.
The RSpec code you wrote is really testing the internal implementation of your method. You should test that the method returns what you want it to return given the arguments, not that it does so in a certain way. That way, your tests will be less brittle: for example, if you rename scope1, you won't have to rewrite your my_class_method tests.
I would do that by creating a number of instances of the class, then calling the method with various arguments and checking that the results are what you expect.
I don't know what scope1 and scope2 do, so I made an example where the arguments are a name attribute on your model and the scope methods simply retrieve all models except those with that name. Obviously, whatever your real arguments and scope methods do, you should put that in your tests and modify the expected results accordingly.
I used the to_ary method for the expected results since the self.all call actually returns an ActiveRecord relation and therefore wouldn't otherwise match the expected array. You could probably use the include matcher (and not_to include) instead of eq, but perhaps you care about the order or something.
describe MyModel do
  describe ".my_class_method" do
    # Could be helpful to use FactoryGirl here
    # Also note the bang (!) version of let
    let!(:my_model_1) { MyModel.create(name: "alex") }
    let!(:my_model_2) { MyModel.create(name: "bob") }
    let!(:my_model_3) { MyModel.create(name: "chris") }

    context "with nil arguments" do
      let(:arg1) { nil }
      let(:arg2) { nil }

      it "returns all" do
        expected = [my_model_1, my_model_2, my_model_3]
        expect_my_class_method_to_return expected
      end
    end

    context "with a first argument equal to a model's name" do
      let(:arg1) { my_model_1.name }
      let(:arg2) { nil }

      it "returns all except models with name matching the argument" do
        expected = [my_model_2, my_model_3]
        expect_my_class_method_to_return expected
      end

      context "with a second argument equal to another model's name" do
        let(:arg1) { my_model_1.name }
        let(:arg2) { my_model_2.name }

        it "returns all except models with name matching either argument" do
          expected = [my_model_3]
          expect_my_class_method_to_return expected
        end
      end
    end
  end

  private

  def expect_my_class_method_to_return(expected)
    actual = described_class.my_class_method(arg1, arg2).to_ary
    expect(actual).to eq expected
  end
end
I am in the middle of upgrading an app from Grails 1.3.7 to 2.2
So far, it's been relatively painless and straightforward.
Until we started running the unit tests.
Under 1.3.7, all the tests passed.
Under 2.2, about half are now failing. The tests haven't changed; they are still the old-style mockDomain ones.
What is most concerning to me is that basic GORM features are missing on some of the domain classes.
Things like .list and .get
Failure: testList_NoMaxSpecified_10Shown(com.litle.bldvwr.StreamControllerTests)
| groovy.lang.MissingMethodException: No signature of method: com.litle.bldvwr.Stream.list() is applicable for argument types: () values: []
Possible solutions: list(), list(), list(), list(java.lang.Object), list(java.util.Map), list(java.lang.Object)
and
Failure: testAddFailureOutputToHappyPathWithIntegrationFailure(com.litle.bldvwr.LogParserServiceTests)
| groovy.lang.MissingMethodException: No signature of method: com.litle.bldvwr.Result.get() is applicable for argument types: () values: []
Possible solutions: get(java.io.Serializable), get(java.lang.Object), get(java.io.Serializable), getId(), grep(), grep(java.lang.Object)
The general pattern for this type of failure is:
mockDomain(Phase, [new Phase(id:1, name: 'xxx')])
mockDomain(Result, [new Result(id:1, phase: Phase.get(1), failureOutput:"")])
logParserService.addFailureOutputTo(Result.get(1))
And it is that last get that is causing the no signature error.
While we intend to start using the new Unit Test functionality, I was hoping to avoid having to rewrite the 500+ current tests.
Thoughts, ideas?
-Clark
Using the new @Mock() annotation in your test for the domain objects will inject all the expected mock GORM methods, and you can even just save() your domain objects instead of providing the list in the mockDomain() call.
@Mock([Result, Nightly])
class MyTests {

    void testSomething() {
        def night = new Nightly(name: 'nightly1')
        night.id = 1
        night.save(validate: false)

        assert Nightly.get(1).name == 'nightly1'
        assert Result.count() == 0

        new Result(status: Constants.SUCCESS, type: Constants.INTEGRATION,
                   nightly: Nightly.get(1)).save(validate: false)

        assert Result.count() == 1
        assert Result.findByStatus(Constants.SUCCESS) != null // yay dynamic finders!
    }
}
http://grails.org/doc/latest/guide/testing.html#unitTestingDomains
You'll have to update all your tests to the new way, but it's much nicer than the old 1.3 ways.
So here is what we found.
With 1.3, you could do:
mockDomain(Nightly, [new Nightly(id: 7)])
mockDomain(Result, [
    new Result(status: Constants.SUCCESS,
               type: Constants.INTEGRATION, nightly: Nightly.get(7))
])

service.callSomething(results, Nightly.get(7))
assert result == Nightly.get(7).property
And it would work just fine: you would have a mock domain object with an id of 7, and the get would work as expected.
Since then, something changed, and you can no longer set the id as part of the create.
What you need to do is this:
night = new Nightly(name: 'nightly1')
night.id = 1
mockDomain(Nightly, [night])
mockDomain(Result, [
    new Result(status: Constants.SUCCESS, type: Constants.INTEGRATION, nightly: Nightly.get(1))
])
and that mostly sets up the mocks correctly.
The issue we ran into next was that outside of the mockDomain call, Nightly.get() would not work.
So now we need to save the "mocked" domains in local variables in order to do post-action comparisons and checks.
Not a completely horrible solution, but less elegant than we were hoping for.
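Putting the workaround together, the test body ends up looking roughly like this (a sketch only; callSomething, results and property are placeholders carried over from the snippets above):

void testCallSomething() {
    def night = new Nightly(name: 'nightly1')
    night.id = 7
    mockDomain(Nightly, [night])
    mockDomain(Result, [
        new Result(status: Constants.SUCCESS, type: Constants.INTEGRATION, nightly: night)
    ])

    service.callSomething(results, night)

    // compare against the local variable instead of Nightly.get(7),
    // which may not resolve outside the mockDomain() call
    assert result == night.property
}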
I've kinda been struggling with this for some time; let's see if somebody can help me out.
Although it's not explicitly said in the Readme, ember-data provides some validation support. You can see that in some parts of the code and documentation:
https://github.com/emberjs/data/blob/master/packages/ember-data/lib/system/model/states.js#L411
https://github.com/emberjs/data/blob/master/packages/ember-data/lib/system/model/states.js#L529
The REST adapter doesn't add validation support by itself, but I found out that if I add something like this to the ajax calls, I can put the model in an "invalid" state with the errors object that came from the server side:
error: function(xhr) {
  var data = Ember.$.parseJSON(xhr.responseText);
  store.recordWasInvalid(record, data.errors);
}
So I can easily do the following:
var transaction = App.store.transaction();
var record = transaction.createRecord(App.Post);

record.set('someProperty', 'invalid value');
transaction.commit();
// This makes the validation fail

record.set('someProperty', 'a valid value');
transaction.commit();
// This doesn't trigger the commit again.
The thing is: as you can see, transactions don't try to recommit. This is explained here and here.
So: if I can't reuse a commit, how should I handle this? I suspect it has something to do with the fact that I'm asynchronously putting the model into the invalid state - from reading the documentation, it seems like this is meant for client-side validations. In that case, how should I use them?
I have a pending pull request that should fix this
https://github.com/emberjs/data/pull/539
I tried Javier's answer, but I get "Invalid Path" when doing any record.set(...) with the record in the invalid state. What I found worked was:
// with the record in invalid state
record.send('becameValid');
record.set('someProperty', 'a valid value');
App.store.commit();
Alternatively, it seems that if I call record.get(...) first then subsequent record.set(...) calls work. This is probably a bug. But the above work-around will work in general for being able to re-commit the same record even without changing any properties. (Of course, if the properties are still invalid it will just fail again.)
This may seem to be an overly simple answer, but why not create a new transaction and add the pre-existing record to it? I'm also trying to figure out an error-handling approach.
Also, you should probably consider writing this at the store level rather than the adapter level, for the sake of re-use.
For some unknown reason, the record becomes part of the store's default transaction. This code works for me:
var transaction = App.store.transaction();
var record = transaction.createRecord(App.Post);
record.set('someProperty', 'invalid value');
transaction.commit()
record.set('someProperty', 'a valid value');
App.store.commit(); // The record is created in backend
The problem is that after the first failure you must always use App.store.commit(), with the problems that entails.
Have a look at this gist. It's the pattern that I use in my projects.
https://gist.github.com/danielgatis/5550982
@josepjaume
Take a look at https://github.com/esbanarango/ember-model-validator.
Example:
import Model, { attr } from '@ember-data/model';
import { modelValidator } from 'ember-model-validator';

@modelValidator
export default class MyModel extends Model {
  @attr('string') fullName;
  @attr('string') fruit;
  @attr('string') favoriteColor;

  validations = {
    fullName: {
      presence: true
    },
    fruit: {
      presence: true
    },
    favoriteColor: {
      color: true
    }
  };
}